1527578
|
Critical variable
|
Thermodynamic variables associated with critical points
Critical variables are defined, for example in thermodynamics, in terms of the values of variables at the critical point.
On a PV diagram, the critical point is an inflection point. Thus:
formula_0
formula_1
For the van der Waals equation, the above yields:
formula_2
formula_3
formula_4
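These critical constants can be checked numerically. The sketch below (plain Python, with illustrative values for "a", "b", "n" and "R") verifies that at the critical point both volume derivatives of the van der Waals pressure vanish and the pressure equals P_C:

```python
# Numerical check of the van der Waals critical constants, using
# illustrative values a = 0.5, b = 0.03 (n = 1 mol, R = 8.314):
# at (V_C, T_C) both dP/dV and d2P/dV2 should vanish, and P = P_C.
a, b, n, R = 0.5, 0.03, 1.0, 8.314

def P(V, T):
    """van der Waals pressure P = nRT/(V - nb) - a n^2 / V^2."""
    return n*R*T/(V - n*b) - a*n**2/V**2

Vc = 3*n*b              # critical volume
Tc = 8*a/(27*b*R)       # critical temperature
Pc = a/(27*b**2)        # critical pressure

h = 1e-5  # step for central differences
dP  = (P(Vc+h, Tc) - P(Vc-h, Tc)) / (2*h)
d2P = (P(Vc+h, Tc) - 2*P(Vc, Tc) + P(Vc-h, Tc)) / h**2

print(abs(dP) < 1e-3, abs(d2P) < 1e-1, abs(P(Vc, Tc) - Pc) < 1e-9)  # True True True
```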
|
[
{
"math_id": 0,
"text": "\n\\left(\\frac{\\partial P}{\\partial V}\\right)_{C}=0\n"
},
{
"math_id": 1,
"text": "\n\\left(\\frac{\\partial^2 P}{\\partial V^2}\\right)_{C}=0\n"
},
{
"math_id": 2,
"text": "\nP_C=\\frac{a}{27b^2}\n"
},
{
"math_id": 3,
"text": "\n\\displaystyle{V_C=3nb}\n"
},
{
"math_id": 4,
"text": "\nT_C=\\frac{8a}{27bR}\n"
}
] |
https://en.wikipedia.org/wiki?curid=1527578
|
15275887
|
Heat flux sensor
|
Sensor which measures heat transfer
A heat flux sensor is a transducer that generates an electrical signal proportional to the total heat rate applied to the surface of the sensor. The measured heat rate is divided by the surface area of the sensor to determine the heat flux.
The heat flux can have different origins; in principle convective, radiative as well as conductive heat can be measured. Heat flux sensors are known under different names, such as heat flux transducers, heat flux gauges, or heat flux plates. Some instruments are actually single-purpose heat flux sensors, like pyranometers for solar radiation measurement. Other heat flux sensors include Gardon gauges (also known as a circular-foil gauge), thin-film thermopiles, and Schmidt–Boelter gauges.
Usage.
Heat flux sensors are used for a variety of applications. Common applications are studies of building envelope thermal resistance, studies of the effects of fire and flames, and laser power measurements. More exotic applications include the estimation of fouling on boiler surfaces, temperature measurement of moving foil material, etc.
The total heat flux is composed of a conductive, convective and radiative part. Depending on the application, one might want to measure all three of these quantities or single one out.
An example of measurement of conductive heat flux is a heat flux plate incorporated into a wall.
An example of measurement of radiative heat flux density is a pyranometer for measurement of solar radiation.
An example of a sensor sensitive to radiative as well as convective heat flux is a Gardon or Schmidt–Boelter gauge, used for studies of fire and flames. Owing to its circular-foil construction, the Gardon gauge measures convection accurately only when the flow is perpendicular to the face of the sensor, while the wire-wound geometry of the Schmidt–Boelter gauge can measure both perpendicular and parallel flows. In this case the sensor is mounted on a water-cooled body. Such sensors are used in fire resistance testing to expose samples to a fire of the correct intensity level.
Various instruments use heat flux sensors internally, for example laser power meters and pyranometers. Several major fields of application are discussed in what follows.
Applications in meteorology and agriculture.
Soil heat flux is an important parameter in agro-meteorological studies, since it allows one to study the amount of energy stored in the soil as a function of time.
Typically, two or three sensors are buried in the ground around a meteorological station at a depth of around 4 cm below the surface. The problems that are encountered in soil are threefold:
First is the fact that the thermal properties of the soil are constantly changing by absorption and subsequent evaporation of water.
Second, the flow of water through the soil also represents a flow of energy, going together with a "thermal shock", which often is misinterpreted by conventional sensors.
The third aspect of soil is that by the constant process of wetting and drying and by the animals living on the soil, the quality of the contact between sensor and soil is not known.
The result of all this is that the quality of the data in soil heat flux measurement is not under control; the measurement of soil heat flux is considered to be extremely difficult.
Applications in building physics.
In a world ever more concerned with saving energy, studying the thermal properties of buildings has become a growing field of interest. One of the starting points in these studies is the mounting of heat flux sensors on walls in existing buildings or in structures built especially for this type of research. Heat flux sensors mounted on building walls or envelope components can monitor the amount of heat energy lost or gained through that component, and can be used to measure the envelope's thermal resistance (R-value) or thermal transmittance (U-value).
The measurement of heat flux in walls is comparable to that in soil in many respects. Two major differences however are the fact that the thermal properties of a wall generally do not change (provided its moisture content does not change) and that it is not always possible to insert the heat flux sensor in the wall, so that it has to be mounted on its inner or outer surface.
When the heat flux sensor has to be mounted on the surface of the wall, one has to take care that the added thermal resistance is not too large. Also, the spectral properties should match those of the wall as closely as possible. This is especially important if the sensor is exposed to solar radiation; in that case one should consider painting the sensor the same color as the wall. Also, in walls the use of self-calibrating heat flux sensors should be considered.
Applications in medical studies.
The measurement of the heat exchange of human beings is of importance for medical studies, and when designing clothing, immersion suits and sleeping bags.
A difficulty during this measurement is that the human skin is not particularly suitable for the mounting of heat flux sensors. Also, the sensor has to be thin: the skin essentially is a constant temperature heat sink, so added thermal resistance has to be avoided. Another problem is that test persons might be moving. The contact between the test person and the sensor can be lost. For this reason, whenever a high level of quality assurance of the measurement is required, it can be recommended to use a self-calibrating sensor.
Applications in industry.
Heat flux sensors are also used in industrial environments, where temperature and heat flux may be much higher. Examples of these environments are aluminium smelting, solar concentrators, coal-fired boilers, blast furnaces, flare systems, fluidized beds, and cokers.
Applications in aerospace and explosive research.
Special heat flux gauges are used where temperature changes are highly transient. These gauges, called Thermocouple MCT, allow the measurement of highly transient surface temperatures. For example, they are typically used for testing wind tunnel models in impulse facilities, for measuring the change of the cylinder wall temperature during one cycle of a combustion engine, and in industrial applications and research-oriented work where the registration of highly transient temperatures is of importance. The response time of the gauges has been proven to be in the range of a few microseconds.
The output of all gauges represents the time-dependent temperature of the measuring part, which may deviate significantly from the temperature of the heating or cooling environment surrounding the gauge. For example, in a piston engine a flush wall-mounted temperature gauge registers, with its typical response time, the variation of the cylinder wall temperature and not the variation of the average gas temperature within the cylinder. The measured time-dependent surface temperature of the gauge and its known thermal properties allow the time-dependent heat flux from the heating environment onto the gauge, which caused its temperature change, to be recalculated. This is accomplished by the theory of heat conduction into a semi-infinite body. The design of the gauges is such that during a typical time period of about 10 ms, the requirements of a body of semi-infinite thickness are fulfilled. The direction of the deduced heat flux is perpendicular to the measuring surface of the gauge.
Properties.
A heat flux sensor should measure the local heat flux density in one direction. The result is expressed in watts per square meter. The calculation is done according to:
formula_0
where formula_1 is the sensor output and formula_2 is the calibration constant, specific to the sensor.
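As a minimal illustration of this conversion (the numbers are hypothetical, not taken from any particular datasheet):

```python
# Converting a thermopile reading to a heat flux density via
# phi_q = V_sen / E_sen; the values below are hypothetical.
e_sen = 6.3e-8        # calibration constant E_sen [V/(W/m^2)]
v_sen = 12.6e-6       # measured sensor output V_sen [V]

phi_q = v_sen / e_sen # heat flux density [W/m^2]
print(round(phi_q))   # 200
```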
As shown before in the figure to the left, heat flux sensors generally have the shape of a flat plate and a sensitivity in the direction perpendicular to the sensor surface.
Usually, a number of thermocouples connected in series, called a thermopile, is used. General advantages of thermopiles are their stability, low ohmic value (which implies little pickup of electromagnetic disturbances), good signal-to-noise ratio and the fact that zero input gives zero output. A disadvantage is the low sensitivity.
For a better understanding of heat flux sensor behavior, the sensor can be modeled as a simple electrical circuit consisting of a resistance, formula_3, and a capacitor, formula_4. In this way one can attribute a thermal resistance formula_5, a thermal capacity formula_6 and also a response time formula_7 to the sensor.
Usually, the thermal resistance and the thermal capacity of the entire heat flux sensor are equal to those of the filling material. Stretching the analogy with the electric circuit further, one arrives at the following expression for the response time:
formula_8
in which formula_9 is the sensor thickness, formula_10 the density, formula_11 the specific heat capacity and formula_12 the thermal conductivity. From this formula one can conclude that the material properties and the dimensions of the filling material determine the response time.
As a rule of thumb, the response time is proportional to the thickness to the power of two.
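A short sketch of the response-time formula, using assumed epoxy-like properties for the filling material (all values illustrative), shows the quadratic dependence on thickness:

```python
# Response time tau = d^2 * rho * C_p / lambda, with assumed epoxy-like
# filling-material properties (rho, c_p, lam are illustrative values).
def response_time(d, rho, c_p, lam):
    """Return tau in seconds for sensor thickness d [m]."""
    return d**2 * rho * c_p / lam

rho, c_p, lam = 1200.0, 1000.0, 0.3   # [kg/m^3], [J/(kg K)], [W/(m K)]
tau_1mm = response_time(1e-3, rho, c_p, lam)
tau_2mm = response_time(2e-3, rho, c_p, lam)
# doubling the thickness quadruples the response time
print(round(tau_1mm, 3), round(tau_2mm, 3))   # 4.0 16.0
```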
Other parameters that determine sensor properties are the electrical characteristics of the thermocouple. The temperature dependence of the thermocouple causes the temperature dependence and the non-linearity of the heat flux sensor. The non-linearity at a given temperature is in fact the derivative of the temperature dependence at that temperature.
However, a well-designed sensor may have a lower temperature dependence and better linearity than expected. There are two ways of achieving this:
As a first possibility, the thermal dependence of conductivity of the filling material and of the thermocouple material can be used to counterbalance the temperature dependence of the voltage that is generated by the thermopile.
Another possibility to minimize the temperature dependence of a heat flux sensor, is to use a resistance network with an incorporated thermistor. The temperature dependence of the thermistor will balance the temperature dependence of the thermopile.
Another factor that determines heat flux sensor behavior is the construction of the sensor. In particular, some designs have a strongly nonuniform sensitivity. Others even exhibit a sensitivity to lateral fluxes. The sensor schematically given in the above figure would, for example, also be sensitive to heat flows from left to right. This type of behavior will not cause problems as long as fluxes are uniform and in one direction only.
To promote uniformity of sensitivity, a so-called sandwich construction as shown in the figure to the left can be used. The purpose of the plates, which have a high conductivity, is to promote the transport of heat across the whole sensitive surface.
It is difficult to quantify non-uniformity and sensitivity to lateral fluxes. Some sensors are equipped with an extra electrical lead, splitting the sensor into two parts. If during application, there is non-uniform behavior of the sensor or the flux, this will result in different outputs of the two parts.
Summarizing: The intrinsic specifications that can be attributed to heat flux sensors are thermal conductivity, total thermal resistance, heat capacity, response time, non-linearity, stability, temperature dependence of sensitivity, uniformity of sensitivity and sensitivity to lateral fluxes. For the latter two specifications, a good method for quantification is not known.
Calibration of thin heat flux transducers.
In order to do in-situ measurements, the user must be provided with the correct calibration constant formula_13. This constant is also called "sensitivity". The sensitivity is primarily determined by the sensor construction and operation temperatures, but also by the geometry and material properties of the object that is measured. Therefore, the sensor should be calibrated under conditions that are close to the conditions of the intended application. The calibration set-up should also be properly shielded to limit external influences.
Preparation.
To do a calibration measurement, one needs a voltmeter or datalogger with resolution of ±2μV or better. One should avoid air gaps between layers in the test stack. These can be filled with filling materials, like toothpaste, caulk or putty. If need be, thermally conductive gel can be used to improve contact between layers. A temperature sensor should be placed on or near the sensor and connected to a readout device.
Measuring.
The calibration is done by applying a controlled heat flux through the sensor. By varying the hot and cold sides of the stack, and measuring the voltages of the heat flux sensor and temperature sensor, the correct sensitivity can be determined with:
formula_14
where formula_15 is the sensor output and formula_16 is the known heat flux through the sensor.
If the sensor is mounted onto a surface and is exposed to convection and radiation during the expected applications, the same conditions should be taken into account during calibration.
Doing measurements at different temperatures allows for determining sensitivity as a function of the temperature.
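The sensitivity determination can be sketched as follows; the applied flux and measured voltage are made-up example values:

```python
# Determining the sensitivity E_sen = V_sen / phi_q from one calibration
# point; the applied flux and measured voltage are illustrative values.
phi_q = 500.0       # known heat flux through the sensor [W/m^2]
v_sen = 31.5e-6     # measured sensor output [V]

e_sen = v_sen / phi_q            # sensitivity [V/(W/m^2)]
print(round(e_sen * 1e6, 3))     # 0.063 microvolts per W/m^2
```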
In-situ calibration.
While heat flux sensors are typically supplied with a sensitivity by the manufacturer, there are times and situations that call for a re-calibration of the sensor. Especially in building walls or envelopes the heat flux sensors cannot be removed after the initial installation or may be very difficult to reach. In order to calibrate the sensor, some come with an integrated heater with specified characteristics. By applying a known voltage on and current through the heater, a controlled heat flux is provided which can be used to calculate the new sensitivity.
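Such an in-situ re-calibration can be sketched with hypothetical heater and sensor values:

```python
# In-situ re-calibration with an integrated heater (hypothetical values):
# the heater dissipates P = V*I over the sensor area A, which provides a
# known heat flux phi_q = V*I/A through the sensor.
v_heat, i_heat = 5.0, 0.1   # heater voltage [V] and current [A]
area = 50e-4                # sensor area [m^2] (50 cm^2)

phi_q = v_heat * i_heat / area   # controlled flux, 100 W/m^2
v_sen = 6.3e-6                   # sensor output measured during heating [V]
e_sen = v_sen / phi_q            # new sensitivity [V/(W/m^2)]
print(round(phi_q), round(e_sen * 1e9, 1))   # 100 63.0
```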
Error sources.
The interpretation of measurement results of heat flux sensors is often done assuming that the phenomenon being studied is quasi-static and takes place in a direction transversal to the sensor surface.
Dynamic effects and lateral fluxes are possible error sources.
Dynamic effects.
The assumption that conditions are quasi-static should be related to the response time of the detector.
The case in which the heat flux sensor is used as a radiation detector (see figure to the left) will serve to illustrate the effect of changing fluxes. Assuming that the cold joints of the sensor are at a constant temperature and that a constant heat flux is applied for formula_17, the sensor response is:
formula_18
This shows that one should expect a false reading during a period equal to several response times formula_7. Generally, heat flux sensors are quite slow and will need several minutes to reach 95% response. This is the reason why one prefers to work with values that are integrated over a long period; during this period the sensor signal will go up and down. The assumption is that errors due to long response times will cancel: the rising signal will give an error, and the falling signal will produce an equally large error of opposite sign. This is valid only if periods with stable heat flow prevail.
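The first-order step response above can be evaluated directly; with an assumed response time of 4 s (an illustrative value), reaching 95% response takes about three time constants:

```python
import math

# Evaluating the first-order response V_sen/E_sen = 1 - exp(-t/tau)
# for an assumed response time of 4 s (an illustrative value).
def step_response(t, tau):
    """Fraction of the final output reached t seconds after a flux step."""
    return 1.0 - math.exp(-t / tau)

tau = 4.0                        # hypothetical sensor response time [s]
t95 = -tau * math.log(0.05)      # time needed to reach 95% response
print(round(t95, 2))             # 11.98, i.e. about three time constants
print(round(step_response(t95, tau), 3))   # 0.95
```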
In order to avoid errors caused by long response times, one should use sensors with low value of formula_19, since this product determines the response time. In other words: sensors with low mass or small thickness.
The sensor response equation above holds as long as the cold joints are at a constant temperature. An unexpected result appears when the temperature of the sensor changes.
Assuming that the sensor temperature starts changing at the cold joints at a rate of formula_20, starting at formula_21, and with formula_7 again denoting the sensor response time, the reaction to this is:
|
[
{
"math_id": 0,
"text": "\\phi_q =\\frac{V_{\\text{sen}}}{E_{\\text{sen}}}"
},
{
"math_id": 1,
"text": "V_{\\text{sen}}"
},
{
"math_id": 2,
"text": "E_{\\text{sen}}"
},
{
"math_id": 3,
"text": "R"
},
{
"math_id": 4,
"text": "C"
},
{
"math_id": 5,
"text": "R_{\\text{sen}}"
},
{
"math_id": 6,
"text": "C_{\\text{sen}}"
},
{
"math_id": 7,
"text": "\\tau_{\\text{sen}}"
},
{
"math_id": 8,
"text": "\\tau_{\\text{sen}} = R_{\\text{sen}} C_{\\text{sen}} = \\frac{d^2 \\rho C_p}{\\lambda}"
},
{
"math_id": 9,
"text": "d"
},
{
"math_id": 10,
"text": "\\rho"
},
{
"math_id": 11,
"text": "C_p"
},
{
"math_id": 12,
"text": "\\lambda"
},
{
"math_id": 13,
"text": "E_{sen}"
},
{
"math_id": 14,
"text": "E_\\text{sen} = \\frac{V_\\text{sen}}{\\phi_{q}} "
},
{
"math_id": 15,
"text": "V_\\text{sen}"
},
{
"math_id": 16,
"text": "\\phi_{q}"
},
{
"math_id": 17,
"text": "t>0"
},
{
"math_id": 18,
"text": "V_{\\text{sen}} = E_{\\text{sen}} \\left( 1 - e^{- \\frac{t}{\\tau_{\\text{sen}}}} \\right)"
},
{
"math_id": 19,
"text": "R_{\\text{sen}}C_{\\text{sen}}"
},
{
"math_id": 20,
"text": "\\frac{\\mathrm{d}T}{\\mathrm{d}t}"
},
{
"math_id": 21,
"text": "t=0"
}
] |
https://en.wikipedia.org/wiki?curid=15275887
|
1527655
|
Implicit surface
|
Surface in 3D space defined by an implicit function of three variables
In mathematics, an implicit surface is a surface in Euclidean space defined by an equation
formula_0
An "implicit surface" is the set of zeros of a function of three variables. "Implicit" means that the equation is not solved for x or y or z.
The graph of a function is usually described by an equation formula_1 and is called an "explicit" representation. The third essential description of a surface is the "parametric" one:
formula_2, where the x-, y- and z-coordinates of surface points are represented by three functions formula_3 depending on common parameters formula_4. Generally the change of representations is simple only when the explicit representation formula_1 is given: formula_5 (implicit), formula_6 (parametric).
"Examples":
1. formula_7 (a plane),
2. formula_8 (a sphere),
3. formula_9 (a torus),
4. formula_10,
5. formula_11.
For a plane, a sphere, and a torus there exist simple parametric representations. This is not true for the fourth example.
The implicit function theorem describes conditions under which an equation formula_12 can be solved (at least implicitly) for x, y or z. But in general the solution may not be made explicit. The theorem is the key to the computation of essential geometric features of a surface: tangent planes, surface normals, curvatures (see below). But implicit representations have an essential drawback: their visualization is difficult.
If formula_13 is polynomial in x, y and z, the surface is called algebraic. Example 5 is "non"-algebraic.
Despite the difficulty of visualization, implicit surfaces provide relatively simple techniques to generate theoretically (e.g. Steiner surface) and practically (see below) interesting surfaces.
Formulas.
Throughout the following considerations the implicit surface is represented by an equation
formula_12 where function formula_14 meets the necessary conditions of differentiability. The partial derivatives of
formula_14 are formula_15.
Tangent plane and normal vector.
A surface point formula_16 is called regular if and only if the gradient of formula_14 at formula_16 is not the zero vector formula_17, meaning
formula_18.
If the surface point formula_16 is "not" regular, it is called singular.
The equation of the tangent plane at a regular point formula_19 is
formula_20
and a "normal vector" is
formula_21
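The normal vector can also be obtained numerically from the gradient. A minimal sketch for the sphere F = x² + y² + z² − 4 = 0 of radius 2, with central differences standing in for the analytic partial derivatives:

```python
# Numerical normal vector n = (F_x, F_y, F_z) for the sphere
# F = x^2 + y^2 + z^2 - 4 at the regular point (2, 0, 0).
def F(x, y, z):
    return x*x + y*y + z*z - 4.0

def grad(f, p, h=1e-6):
    """Central-difference approximation of the gradient of f at p."""
    x, y, z = p
    return ((f(x+h, y, z) - f(x-h, y, z)) / (2*h),
            (f(x, y+h, z) - f(x, y-h, z)) / (2*h),
            (f(x, y, z+h) - f(x, y, z-h)) / (2*h))

n = grad(F, (2.0, 0.0, 0.0))
print([round(c, 6) for c in n])   # [4.0, 0.0, 0.0]
```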
Normal curvature.
In order to keep the formula simple the arguments formula_19 are omitted:
formula_22
is the normal curvature of the surface at a regular point for the unit tangent direction formula_23. formula_24 is the Hessian matrix of formula_14 (matrix of the second derivatives).
The proof of this formula relies (as in the case of an implicit curve) on the implicit function theorem and the formula for the normal curvature of a parametric surface.
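The normal curvature formula can be checked on the sphere F = x² + y² + z² − 4 = 0 of radius 2, where every normal curvature must equal the reciprocal of the radius; a minimal sketch:

```python
# kappa_n = v^T H_F v / |grad F|, checked on the sphere
# F = x^2 + y^2 + z^2 - 4 (radius 2, so every normal curvature is 1/2).
def kappa_n(grad, hess, v):
    """v^T H v / |grad F| for a unit tangent direction v."""
    Hv = [sum(hess[i][j] * v[j] for j in range(3)) for i in range(3)]
    num = sum(v[i] * Hv[i] for i in range(3))
    norm = sum(g * g for g in grad) ** 0.5
    return num / norm

grad_F = (4.0, 0.0, 0.0)                          # (F_x, F_y, F_z) at (2, 0, 0)
H_F = [[2.0, 0, 0], [0, 2.0, 0], [0, 0, 2.0]]     # Hessian of F (constant)
v = (0.0, 1.0, 0.0)                               # unit tangent at (2, 0, 0)
print(kappa_n(grad_F, H_F, v))                    # 0.5 = 1/radius
```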
Applications of implicit surfaces.
As in the case of implicit curves it is an easy task to generate implicit surfaces with desired shapes by applying algebraic operations (addition, multiplication) on simple primitives.
Equipotential surface of point charges.
A point charge formula_25 at point formula_26 generates at point formula_27 the electrical potential (omitting physical constants)
formula_28
The equipotential surface for the potential value formula_29 is the implicit surface formula_30, which is a sphere with center at point formula_31.
The potential of formula_32 point charges is represented by
formula_33
For the picture the four charges equal 1 and are located at the points
formula_34. The displayed surface is the equipotential surface (implicit surface) formula_35.
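A short sketch evaluating this potential for the four unit charges of the example:

```python
import math

# Potential of four unit charges at (+-1, +-1, 0), physical constants omitted.
charges = [(1.0, 1.0, 0.0), (1.0, -1.0, 0.0), (-1.0, 1.0, 0.0), (-1.0, -1.0, 0.0)]

def F(x, y, z):
    """Sum of 1/distance contributions of the unit charges."""
    return sum(1.0 / math.dist((x, y, z), p) for p in charges)

# The origin lies just inside the displayed equipotential surface F = 2.8:
print(round(F(0.0, 0.0, 0.0), 3))   # 2.828  (= 4/sqrt(2) > 2.8)
```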
Constant distance product surface.
A Cassini oval can be defined as the point set for which the product of the distances to two given points is constant (in contrast, for an ellipse the "sum" is constant). In a similar way implicit surfaces can be defined by a constant distance product to several fixed points.
In the diagram "metamorphoses" the upper left surface is generated by this rule: With
formula_36
the constant distance product surface formula_37 is displayed.
Metamorphoses of implicit surfaces.
A further simple method to generate new implicit surfaces is called "metamorphosis" of implicit surfaces:
For two implicit surfaces formula_38 (in the diagram: a constant distance product surface and a torus) one defines new surfaces using the design parameter formula_39:
formula_40
In the diagram the design parameter is successively formula_41.
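A minimal sketch of such a metamorphosis, blending two illustrative implicit functions (a sphere and a quartic, chosen for simplicity rather than the surfaces of the diagram):

```python
# Metamorphosis between two implicit surfaces F1 = 0 and F2 = 0 by
# linear blending with a design parameter mu in [0, 1].
def F1(x, y, z):            # unit sphere
    return x**2 + y**2 + z**2 - 1.0

def F2(x, y, z):            # rounded-cube-like quartic surface
    return x**4 + y**4 + z**4 - 1.0

def blend(mu):
    """Return the blended implicit function mu*F1 + (1 - mu)*F2."""
    return lambda x, y, z: mu * F1(x, y, z) + (1.0 - mu) * F2(x, y, z)

F_half = blend(0.5)
# (1, 0, 0) lies on both surfaces, hence on every blended surface:
print(F_half(1.0, 0.0, 0.0))   # 0.0
```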
Smooth approximations of several implicit surfaces.
formula_42-surfaces can be used to approximate any given smooth and bounded object in formula_43 by the zero set of a single polynomial, constructed as a product of subsidiary polynomials. In other words, we can design any smooth object with a single algebraic surface. Denoting the defining polynomials by formula_44, the approximating object is defined by the polynomial
formula_45
where formula_46 stands for the blending parameter that controls the approximating error.
Analogously to the smooth approximation with implicit curves, the equation
formula_47
represents, for a suitable parameter formula_46, smooth approximations of three intersecting tori with equations
formula_48
In the diagram the parameters are formula_49.
Visualization of implicit surfaces.
There are various algorithms for rendering implicit surfaces, including the marching cubes algorithm. Essentially there are two ideas for visualizing an implicit surface: One generates a net of polygons which is visualized (see surface triangulation) and the second relies on ray tracing which determines intersection points of rays with the surface. The intersection points can be approximated by "sphere tracing", using a signed distance function to find the distance to the surface.
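Sphere tracing can be sketched in a few lines; the signed distance function of a sphere serves here as a toy stand-in for a general SDF:

```python
import math

# Minimal sphere tracing along a ray, using the signed distance to a
# unit sphere centered at the origin (a toy stand-in for a general SDF).
def sdf_sphere(p):
    return math.hypot(*p) - 1.0

def sphere_trace(origin, direction, sdf, eps=1e-6, max_steps=100):
    """March from origin along a unit direction; return hit distance or None."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t
        t += d          # safe step: the surface is at least d away
        if t > 100.0:   # ray escaped without hitting the surface
            return None
    return None

t = sphere_trace((0.0, 0.0, -3.0), (0.0, 0.0, 1.0), sdf_sphere)
print(round(t, 5))   # 2.0: the ray hits the sphere at z = -1
```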
|
[
{
"math_id": 0,
"text": "F(x,y,z)=0."
},
{
"math_id": 1,
"text": "z=f(x,y)"
},
{
"math_id": 2,
"text": "(x(s,t),y(s,t), z(s,t))"
},
{
"math_id": 3,
"text": "x(s,t)\\, , y(s,t)\\, , z(s,t)"
},
{
"math_id": 4,
"text": "s,t"
},
{
"math_id": 5,
"text": "z-f(x,y)=0"
},
{
"math_id": 6,
"text": " (s,t,f(s,t)) "
},
{
"math_id": 7,
"text": " x+2y-3z+1=0."
},
{
"math_id": 8,
"text": " x^2+y^2+z^2-4=0."
},
{
"math_id": 9,
"text": "(x^2+y^2+z^2+R^2-a^2)^2-4R^2(x^2+y^2)=0. "
},
{
"math_id": 10,
"text": "2y(y^2-3x^2)(1-z^2)+(x^2+y^2)^2-(9z^2-1)(1-z^2)=0"
},
{
"math_id": 11,
"text": " x^2+y^2-(\\ln(z+3.2))^2-0.02=0"
},
{
"math_id": 12,
"text": "F(x,y,z)=0"
},
{
"math_id": 13,
"text": "F(x,y,z)"
},
{
"math_id": 14,
"text": "F"
},
{
"math_id": 15,
"text": "F_x,F_y,F_z,F_{xx},\\ldots"
},
{
"math_id": 16,
"text": "(x_0, y_0,z_0)"
},
{
"math_id": 17,
"text": "(0, 0, 0)"
},
{
"math_id": 18,
"text": " (F_x(x_0,y_0,z_0),F_y(x_0,y_0,z_0),F_z(x_0,y_0,z_0))\\ne (0,0,0)"
},
{
"math_id": 19,
"text": "(x_0,y_0,z_0)"
},
{
"math_id": 20,
"text": "F_x(x_0,y_0,z_0)(x-x_0)+F_y(x_0,y_0,z_0)(y-y_0)+F_z(x_0,y_0,z_0)(z-z_0)=0,"
},
{
"math_id": 21,
"text": " \\mathbf n(x_0,y_0,z_0)=(F_x(x_0,y_0,z_0),F_y(x_0,y_0,z_0),F_z(x_0,y_0,z_0))^T."
},
{
"math_id": 22,
"text": "\\kappa_n = \\frac{\\mathbf v^\\top H_F\\mathbf v}{\\|\\operatorname{grad} F\\|}"
},
{
"math_id": 23,
"text": " \\mathbf v"
},
{
"math_id": 24,
"text": "H_F"
},
{
"math_id": 25,
"text": "q_i"
},
{
"math_id": 26,
"text": "\\mathbf p_i=(x_i,y_i,z_i)"
},
{
"math_id": 27,
"text": " \\mathbf p=(x,y,z)"
},
{
"math_id": 28,
"text": "F_i(x,y,z)=\\frac{q_i}{\\|\\mathbf p -\\mathbf p_i\\|}."
},
{
"math_id": 29,
"text": "c"
},
{
"math_id": 30,
"text": " F_i(x,y,z)-c=0 "
},
{
"math_id": 31,
"text": "\\mathbf p_i"
},
{
"math_id": 32,
"text": "4"
},
{
"math_id": 33,
"text": "F(x,y,z)=\\frac{q_1}{\\|\\mathbf p -\\mathbf p_1\\|}+ \\frac{q_2}{\\|\\mathbf p -\\mathbf p_2\\|}+ \\frac{q_3}{\\|\\mathbf p -\\mathbf p_3\\|}+\\frac{q_4}{\\|\\mathbf p -\\mathbf p_4\\|}."
},
{
"math_id": 34,
"text": "(\\pm 1,\\pm 1,0)"
},
{
"math_id": 35,
"text": "F(x,y,z)-2.8=0"
},
{
"math_id": 36,
"text": "\n\\begin{align}\nF(x,y,z) = {} & \\Big( \\sqrt{(x-1)^2+y^2+z^2}\\cdot \\sqrt{(x+1)^2+y^2+z^2} \\\\\n& \\qquad \\cdot \\sqrt{x^2+(y-1)^2+z^2}\\cdot\\sqrt{x^2+(y+1)^2+z^2} \\Big)\n\\end{align}\n"
},
{
"math_id": 37,
"text": "F(x,y,z)-1.1=0"
},
{
"math_id": 38,
"text": "F_1(x,y,z)=0, F_2(x,y,z)=0"
},
{
"math_id": 39,
"text": " \\mu \\in [0,1]"
},
{
"math_id": 40,
"text": "F(x,y,z)=\\mu F_1(x,y,z)+(1-\\mu)F_2(x,y,z)=0"
},
{
"math_id": 41,
"text": "\\mu=0, \\, 0.33, \\, 0.66, \\, 1"
},
{
"math_id": 42,
"text": "\\Pi"
},
{
"math_id": 43,
"text": "R^3"
},
{
"math_id": 44,
"text": "f_i\\in\\mathbb{R}[x_1,\\ldots,x_n](i=1,\\ldots,k)"
},
{
"math_id": 45,
"text": "F(x,y,z) = \\prod_i f_i(x,y,z) - r"
},
{
"math_id": 46,
"text": "r\\in\\mathbb{R}"
},
{
"math_id": 47,
"text": "F(x,y,z)=F_1(x,y,z)\\cdot F_2(x,y,z)\\cdot F_3(x,y,z) -r= 0"
},
{
"math_id": 48,
"text": "\n\\begin{align}\nF_1=(x^2+y^2+z^2+R^2-a^2)^2-4R^2(x^2+y^2)=0, \\\\[3pt]\nF_2=(x^2+y^2+z^2+R^2-a^2)^2-4R^2(x^2+z^2)=0, \\\\[3pt]\nF_3=(x^2+y^2+z^2+R^2-a^2)^2-4R^2(y^2+z^2)=0.\n\\end{align}\n"
},
{
"math_id": 49,
"text": " R=1, \\, a=0.2, \\, r=0.01."
}
] |
https://en.wikipedia.org/wiki?curid=1527655
|
1528061
|
Omitted-variable bias
|
Type of statistical bias
In statistics, omitted-variable bias (OVB) occurs when a statistical model leaves out one or more relevant variables. The bias results in the model attributing the effect of the missing variables to those that were included.
More specifically, OVB is the bias that appears in the estimates of parameters in a regression analysis, when the assumed specification is incorrect in that it omits an independent variable that is a determinant of the dependent variable and correlated with one or more of the included independent variables.
In linear regression.
Intuition.
Suppose the true cause-and-effect relationship is given by:
formula_0
with parameters "a, b, c", dependent variable "y", independent variables "x" and "z", and error term "u". We wish to know the effect of "x" itself upon "y" (that is, we wish to obtain an estimate of "b").
Two conditions must hold true for omitted-variable bias to exist in linear regression:
- the omitted variable must be a determinant of the dependent variable (i.e., its true regression coefficient must not be zero); and
- the omitted variable must be correlated with at least one of the independent variables included in the regression.
Suppose we omit "z" from the regression, and suppose the relation between "x" and "z" is given by
formula_1
with parameters "d", "f" and error term "e". Substituting the second equation into the first gives
formula_2
If a regression of "y" is conducted upon "x" only, this last equation is what is estimated, and the regression coefficient on "x" is actually an estimate of ("b" + "cf"), giving not simply an estimate of the desired direct effect of "x" upon "y" (which is "b"), but rather of its sum with the indirect effect (the effect "f" of "x" on "z" times the effect "c" of "z" on "y"). Thus by omitting the variable "z" from the regression, we have estimated the total derivative of "y" with respect to "x" rather than its partial derivative with respect to "x". These differ if both "c" and "f" are non-zero.
The direction and extent of the bias are both contained in "cf", since the effect sought is "b" but the regression estimates "b+cf". The extent of the bias is the absolute value of "cf", and the direction of bias is upward (toward a more positive or less negative value) if "cf" > 0 (if the direction of correlation between "y" and "z" is the same as that between "x" and "z"), and it is downward otherwise.
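The result that the short regression estimates "b" + "cf" can be reproduced in a small Monte-Carlo experiment (all parameter values below are illustrative):

```python
import random

# Monte-Carlo illustration: regressing y on x alone estimates b + c*f,
# not b. All parameter values below are made up for the demonstration.
random.seed(0)
a, b, c, d, f = 1.0, 2.0, 1.5, 0.5, 0.8
n = 200_000

xs = [random.gauss(0, 1) for _ in range(n)]
zs = [d + f*x + random.gauss(0, 1) for x in xs]          # z = d + f*x + e
ys = [a + b*x + c*z + random.gauss(0, 1)
      for x, z in zip(xs, zs)]                           # y = a + b*x + c*z + u

# Univariate OLS slope of y on x: cov(x, y) / var(x).
mx, my = sum(xs)/n, sum(ys)/n
slope = (sum((x - mx)*(y - my) for x, y in zip(xs, ys))
         / sum((x - mx)**2 for x in xs))
print(abs(slope - (b + c*f)) < 0.05)   # True: the slope is near b + c*f = 3.2
```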
Detailed analysis.
As an example, consider a linear model of the form
formula_3
where
"y""i" is the value of the dependent variable for the "i"-th observation;
"x""i" is a 1 × "p" row vector of values of the "p" included independent variables for the "i"-th observation;
"z""i" is the value of the omitted variable for the "i"-th observation;
"u""i" is the error term for the "i"-th observation.
We collect the observations of all variables subscripted "i" = 1, ..., "n", and stack them one below another, to obtain the matrix "X" and the vectors "Y", "Z", and "U":
formula_4
and
formula_5
If the independent variable "z" is omitted from the regression, then the estimated values of the response parameters of the other independent variables will be given by the usual least squares calculation,
formula_6
(where the "prime" notation means the transpose of a matrix and the -1 superscript is matrix inversion).
Substituting for "Y" based on the assumed linear model,
formula_7
On taking expectations, the contribution of the final term is zero; this follows from the assumption that "U" is uncorrelated with the regressors "X". On simplifying the remaining terms:
formula_8
The second term after the equal sign is the omitted-variable bias in this case, which is non-zero if the omitted variable "z" is correlated with any of the included variables in the matrix "X" (that is, if "X′Z" does not equal a vector of zeroes). Note that the bias is equal to the weighted portion of "z""i" which is "explained" by "x""i".
Effect in ordinary least squares.
The Gauss–Markov theorem states that regression models which fulfill the classical linear regression model assumptions provide the most efficient, linear and unbiased estimators. In ordinary least squares, the relevant assumption of the classical linear regression model is that the error term is uncorrelated with the regressors.
The presence of omitted-variable bias violates this particular assumption. The violation causes the OLS estimator to be biased and inconsistent. The direction of the bias depends on the estimators as well as the covariance between the regressors and the omitted variables. A positive covariance of the omitted variable with both a regressor and the dependent variable will lead the OLS estimate of the included regressor's coefficient to be greater than the true value of that coefficient. This effect can be seen by taking the expectation of the parameter, as shown in the previous section.
|
[
{
"math_id": 0,
"text": "y=a+bx+cz+u"
},
{
"math_id": 1,
"text": "z=d+fx+e"
},
{
"math_id": 2,
"text": "y=(a+cd)+(b+cf)x+(u+ce)."
},
{
"math_id": 3,
"text": "y_i = x_i \\beta + z_i \\delta + u_i,\\qquad i = 1,\\dots,n"
},
{
"math_id": 4,
"text": " X = \\left[ \\begin{array}{c} x_1 \\\\ \\vdots \\\\ x_n \\end{array} \\right] \\in \\mathbb{R}^{n\\times p},"
},
{
"math_id": 5,
"text": " Y = \\left[ \\begin{array}{c} y_1 \\\\ \\vdots \\\\ y_n \\end{array} \\right],\\quad Z = \\left[ \\begin{array}{c} z_1 \\\\ \\vdots \\\\ z_n \\end{array} \\right],\\quad U = \\left[ \\begin{array}{c} u_1 \\\\ \\vdots \\\\ u_n \\end{array} \\right] \\in \\mathbb{R}^{n\\times 1}."
},
{
"math_id": 6,
"text": "\\widehat{\\beta} = (X'X)^{-1}X'Y\\,"
},
{
"math_id": 7,
"text": "\n\\begin{align}\n\\widehat{\\beta} & = (X'X)^{-1}X'(X\\beta+Z\\delta+U) \\\\\n& =(X'X)^{-1}X'X\\beta + (X'X)^{-1}X'Z\\delta + (X'X)^{-1}X'U \\\\\n& =\\beta + (X'X)^{-1}X'Z\\delta + (X'X)^{-1}X'U.\n\\end{align}\n"
},
{
"math_id": 8,
"text": "\n\\begin{align}\nE[ \\widehat{\\beta} \\mid X ] & = \\beta + (X'X)^{-1}E[ X'Z \\mid X ]\\delta \\\\\n& = \\beta + \\text{bias}.\n\\end{align}\n"
}
] |
https://en.wikipedia.org/wiki?curid=1528061
|
15281107
|
Shape context
|
Shape context is a feature descriptor used in object recognition. Serge Belongie and Jitendra Malik proposed the term in their paper "Matching with Shape Contexts" in 2000.
Theory.
The shape context is intended to be a way of describing shapes that allows for measuring shape similarity and the recovering of point correspondences. The basic idea is to pick "n" points on the contours of a shape. For each point "p""i" on the shape, consider the "n" − 1 vectors obtained by connecting "p""i" to all other points. The set of all these vectors is a rich description of the shape localized at that point but is far too detailed. The key idea is that the distribution over relative positions is a robust, compact, and highly discriminative descriptor. So, for the point "p""i", the coarse histogram of the relative coordinates of the remaining "n" − 1 points,
formula_0
is defined to be the shape context of formula_1. The bins are normally taken to be uniform in log-polar space. The fact that the shape context is a rich and discriminative descriptor can be seen in the figure below, in which the shape contexts of two different versions of the letter "A" are shown.
(a) and (b) are the sampled edge points of the two shapes. (c) is the diagram of the log-polar bins used to compute the shape context. (d) is the shape context for the point marked with a circle in (a), (e) is that for the point marked as a diamond in (b), and (f) is that for the triangle. As can be seen, since (d) and (e) are the shape contexts for two closely related points, they are quite similar, while the shape context in (f) is very different.
For a feature descriptor to be useful, it needs to have certain invariances. In particular it needs to be invariant to translation, scaling, small perturbations, and, depending on the application, rotation. Translational invariance comes naturally to shape context. Scale invariance is obtained by normalizing all radial distances by the mean distance formula_2 between all the point pairs in the shape although the median distance can also be used. Shape contexts are empirically demonstrated to be robust to deformations, noise, and outliers using synthetic point set matching experiments.
One can provide complete rotational invariance in shape contexts. One way is to measure angles at each point relative to the direction of the tangent at that point (since the points are chosen on edges). This results in a completely rotationally invariant descriptor. But of course this is not always desired, since some local features lose their discriminative power if not measured relative to the same frame. Many applications in fact forbid rotational invariance, e.g. distinguishing a "6" from a "9".
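The histogram construction described above can be sketched in a few lines. The bin counts (5 radial × 12 angular) and the log-spaced radial range are common choices in the literature, not values fixed by the definition.

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """Coarse log-polar histograms of the relative positions of the
    other points, one histogram per point (a sketch of the descriptor)."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    diff = points[None, :, :] - points[:, None, :]    # vectors p_j - p_i
    r = np.hypot(diff[..., 0], diff[..., 1])
    theta = np.arctan2(diff[..., 1], diff[..., 0]) % (2 * np.pi)

    # Normalise radial distances by the mean pairwise distance (scale invariance).
    r = r / r[~np.eye(n, dtype=bool)].mean()

    # Log-spaced radial edges from r = 1/8 to r = 2 (a common choice).
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1)
    hists = np.zeros((n, n_r * n_theta))
    for i in range(n):
        mask = np.arange(n) != i
        r_bin = np.searchsorted(r_edges, r[i, mask]) - 1
        t_bin = (theta[i, mask] / (2 * np.pi) * n_theta).astype(int) % n_theta
        ok = (r_bin >= 0) & (r_bin < n_r)             # drop points outside the rings
        np.add.at(hists[i], r_bin[ok] * n_theta + t_bin[ok], 1)
    return hists

# Toy example: descriptors for 30 points sampled on a unit circle.
angles = np.linspace(0, 2 * np.pi, 30, endpoint=False)
pts = np.column_stack([np.cos(angles), np.sin(angles)])
H = shape_context(pts)
print(H.shape)   # (30, 60): one 5x12 histogram per point
```

For the circle every point sees all 29 others inside the radial range, so each histogram sums to "n" − 1.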
Use in shape matching.
A complete system that uses shape contexts for shape matching consists of the following steps (which will be covered in more detail in the Details of Implementation section):
Details of implementation.
Step 1: Finding a list of points on shape edges.
The approach assumes that the shape of an object is essentially captured by a finite subset of the points on the internal or external contours on the object. These can be simply obtained using the Canny edge detector and picking a random set of points from the edges. Note that these points need not and in general do not correspond to key-points such as maxima of curvature or inflection points. It is preferable to sample the shape with roughly uniform spacing, though it is not critical.
Step 2: Computing the shape context.
This step is described in detail in the Theory section.
Step 3: Computing the cost matrix.
Consider two points "p" and "q" that have normalized "K"-bin histograms (i.e. shape contexts) "g"("k") and "h"("k"). As shape contexts are distributions represented as histograms, it is natural to use the "χ"2 test statistic as the "shape context cost" of matching the two points:
formula_3
The values of this range from 0 to 1.
In addition to the shape context cost, an extra cost based on the appearance can be added. For instance, it could be a measure of tangent angle dissimilarity (particularly useful in digit recognition):
formula_4
This is half the length of the chord in unit circle between the unit vectors with angles formula_5 and formula_6. Its values also range from 0 to 1. Now the total cost of matching the two points could be a weighted-sum of the two costs:
formula_7
Now for each point "p""i" on the first shape and a point "q""j" on the second shape, calculate the cost as described and call it "C""i","j". This is the cost matrix.
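A vectorised sketch of the "χ"2 cost matrix between two sets of normalised histograms follows; the small "eps" guard against bins that are empty in both histograms is an implementation choice, not part of the formula.

```python
import numpy as np

def chi2_cost_matrix(G, H, eps=1e-10):
    """C[i, j] = 0.5 * sum_k (g_i(k) - h_j(k))^2 / (g_i(k) + h_j(k)).

    G and H hold normalised K-bin histograms, shapes (n, K) and (m, K)."""
    G = G[:, None, :]                  # (n, 1, K)
    H = H[None, :, :]                  # (1, m, K)
    return 0.5 * (((G - H) ** 2) / (G + H + eps)).sum(axis=-1)

g = np.array([[0.5, 0.5, 0.0]])
h = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])
C = chi2_cost_matrix(g, h)
print(np.round(C, 3))   # [[0.  0.5]]: identical histograms cost 0
```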
Step 4: Finding the matching that minimizes total cost.
Now, a one-to-one matching formula_8 that matches each point "p""i" on shape 1 and "q""j" on shape 2 that minimizes the total cost of matching,
formula_9
is needed. This can be done in formula_10 time using the Hungarian method, although there are more efficient algorithms.
To have robust handling of outliers, one can add "dummy" nodes that have a constant but reasonably large cost of matching to the cost matrix. This would cause the matching algorithm to match outliers to a "dummy" if there is no real match.
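The dummy-node padding can be sketched as follows, here using SciPy's Hungarian-method implementation; the dummy cost of 0.25 is an illustrative threshold (a real match cheaper than it is kept, a more expensive one is routed to a dummy), not a value from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_with_dummies(C, dummy_cost=0.25):
    """Minimum-cost one-to-one matching, padding the cost matrix with
    dummy nodes so that outliers can be left unmatched."""
    n, m = C.shape
    size = n + m                           # enough dummies for every real point
    padded = np.full((size, size), dummy_cost)
    padded[:n, :m] = C
    rows, cols = linear_sum_assignment(padded)
    # Keep only real-to-real assignments; the rest matched a dummy.
    return [(i, j) for i, j in zip(rows, cols) if i < n and j < m]

C = np.array([[0.1, 0.9],
              [0.8, 0.2],
              [0.9, 0.9]])                 # the third point is an outlier
print(match_with_dummies(C))               # [(0, 0), (1, 1)]
```

The outlier row, whose cheapest real match (0.9) exceeds the dummy cost, is absorbed by a dummy instead of forcing a bad correspondence.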
Step 5: Modeling transformation.
Given the set of correspondences between a finite set of points on the two shapes, a transformation formula_11 can be estimated to map any point from one shape to the other. There are several choices for this transformation, described below.
Affine.
The affine model is a standard choice: formula_12. The least squares solution for the matrix formula_13 and the translational offset vector "o" is obtained by:
formula_14
where formula_15, with a similar expression for formula_16, and formula_17 is the pseudoinverse of formula_16.
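A sketch of the least-squares solution in homogeneous coordinates: with exact correspondences, the linear part of the recovered matrix reproduces the underlying rotation and the first column carries the offset.

```python
import numpy as np

def fit_affine(p_pts, q_pts):
    """Least-squares affine map from points q to points p via the
    pseudoinverse expression A = (Q^+ P)^t on homogeneous coordinates."""
    n = len(p_pts)
    P = np.column_stack([np.ones(n), p_pts])   # rows (1, p_i1, p_i2)
    Q = np.column_stack([np.ones(n), q_pts])
    return (np.linalg.pinv(Q) @ P).T           # 3x3: linear part + offset

# Recover a known rotation + translation from exact correspondences.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
q = np.random.default_rng(1).normal(size=(10, 2))
p = q @ R.T + np.array([1.0, 2.0])
A = fit_affine(p, q)
print(np.round(A[1:, 1:], 3))   # linear part: approximately R
print(np.round(A[1:, 0], 3))    # offset: approximately (1, 2)
```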
Thin plate spline.
The thin plate spline (TPS) model is the most widely used model for transformations when working with shape contexts. A 2D transformation can be separated into two TPS function to model a coordinate transform:
formula_18
where each of the "ƒ""x" and "ƒ""y" have the form:
formula_19
and the kernel function formula_20 is defined by formula_21. The exact details of how to solve for the parameters can be found elsewhere but it essentially involves solving a linear system of equations. The bending energy (a measure of how much transformation is needed to align the points) will also be easily obtained.
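Solving for the TPS parameters amounts to a linear system built from the kernel matrix and an affine basis. The sketch below interpolates scalar values (one of the two coordinate functions) and checks that purely affine data needs no bending, i.e. all kernel weights come out near zero.

```python
import numpy as np

def tps_kernel(r):
    """U(r) = r^2 log r^2, taking U(0) = 0 as the limit."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz] ** 2)
    return out

def fit_tps(src, vals):
    """Solve the standard TPS interpolation system for the kernel
    weights w_i and the affine coefficients (a_1, a_x, a_y)."""
    n = len(src)
    r = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    K = tps_kernel(r)
    P = np.column_stack([np.ones(n), src])            # affine basis [1, x, y]
    L = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    sol = np.linalg.solve(L, np.concatenate([vals, np.zeros(3)]))
    return sol[:n], sol[n:]                           # weights, affine coeffs

def eval_tps(src, w, a, pts):
    r = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1)
    return a[0] + pts @ a[1:] + tps_kernel(r) @ w

# Interpolating an affine function: the warp should be purely affine.
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.2]])
vals = 2.0 + 3.0 * src[:, 0] - 1.0 * src[:, 1]
w, a = fit_tps(src, vals)
print(np.round(w, 6))   # all near zero: no bending energy needed
```

A full 2-D warp fits one such function for the "x"-coordinates and one for the "y"-coordinates, as in the text.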
Regularized TPS.
The TPS formulation above has an exact matching requirement for the pairs of points on the two shapes. For noisy data, it is best to relax this exact requirement. If we let formula_22 denote the target function values at corresponding locations formula_23 (note that for formula_24, formula_22 would be formula_25, the x-coordinate of the point corresponding to formula_1, and for formula_26 it would be the y-coordinate, formula_27), relaxing the requirement amounts to minimizing
formula_28
where formula_29 is the bending energy and formula_30 is called the regularization parameter. The "ƒ" that minimizes "H"["ƒ"] can be found in a fairly straightforward way. If one uses normalized coordinates for formula_31, then scale invariance is kept. However, if one uses the original non-normalized coordinates, then the regularization parameter needs to be normalized.
Note that in many cases, regardless of the transformation used, the initial estimate of the correspondences contains some errors which could reduce the quality of the transformation. If we iterate the steps of finding correspondences and estimating transformations (i.e. repeating steps 2–5 with the newly transformed shape) we can overcome this problem. Typically, three iterations are all that is needed to obtain reasonable results.
Step 6: Computing the shape distance.
Now, a shape distance between two shapes formula_32 and formula_16 can be computed. This distance is a weighted sum of three potential terms:
Shape context distance: this is the symmetric sum of shape context matching costs over best matching points:
formula_33
where "T"(·) is the estimated TPS transform that maps the points in "Q" to those in "P".
Appearance cost: After establishing image correspondences and properly warping one image to match the other, one can define an appearance cost as the sum of squared brightness differences in Gaussian windows around corresponding image points:
formula_34
where formula_35 and formula_36 are the gray-level images (formula_36 is the image after warping) and formula_37 is a Gaussian windowing function.
Transformation cost: The final cost formula_38 measures how much transformation is necessary to bring the two images into alignment. In the case of TPS, it is assigned to be the bending energy.
Now that we have a way of calculating the distance between two shapes, we can use a nearest neighbor classifier (k-NN) with distance defined as the shape distance calculated here. The results of applying this to different situations is given in the following section.
Results.
Digit recognition.
The authors Serge Belongie and Jitendra Malik tested their approach on the MNIST database. Currently, more than 50 algorithms have been tested on the database. The database has a training set of 60,000 examples, and a test set of 10,000 examples. The error rate for this approach was 0.63% using 20,000 training examples and 3-NN. At the time of publication, this error rate was the lowest. Currently, the lowest error rate is 0.18%.
Silhouette similarity-based retrieval.
The authors experimented with the MPEG-7 shape silhouette database, performing Core Experiment CE-Shape-1 part B, which measures performance of similarity-based retrieval. The database has 70 shape categories and 20 images per shape category. Performance of a retrieval scheme is tested by using each image as a query and counting the number of correct images in the top 40 matches. For this experiment, the authors increased the number of points sampled from each shape. Also, since the shapes in the database are sometimes rotated or flipped, the authors defined the distance between a reference shape and a query shape to be the minimum shape distance between the query shape and the unchanged, vertically flipped, or horizontally flipped reference. With these changes, they obtained a retrieval rate of 76.45%, which in 2002 was the best.
3D object recognition.
The next experiment performed on shape contexts involved the 20 common household objects in the Columbia Object Image Library (COIL-20). Each object has 72 views in the database. In the experiment, the method was trained on a number of equally spaced views for each object and the remaining views were used for testing. A 1-NN classifier was used. The authors also developed an "editing" algorithm based on shape context similarity and k-medoid clustering that improved on their performance.
Trademark retrieval.
Shape contexts were used to retrieve the closest matching trademarks from a database to a query trademark (useful in detecting trademark infringement). No visually similar trademark was missed by the algorithm (verified manually by the authors).
|
[
{
"math_id": 0,
"text": "h_i(k) = \\#\\{q \\ne p_i : (q - p_i) \\in \\mbox{bin}(k)\\}"
},
{
"math_id": 1,
"text": "p_i"
},
{
"math_id": 2,
"text": "\\alpha"
},
{
"math_id": 3,
"text": "C_S = \\frac{1}{2}\\sum_{k=1}^K \\frac{[g(k) - h(k)]^2}{g(k) + h(k)}"
},
{
"math_id": 4,
"text": "C_A = \\frac{1}{2}\\begin{Vmatrix}\n \\dbinom{\\cos(\\theta_1)}{\\sin(\\theta_1)} - \\dbinom{\\cos(\\theta_2)}{\\sin(\\theta_2)}\n\\end{Vmatrix}"
},
{
"math_id": 5,
"text": "\\theta_1"
},
{
"math_id": 6,
"text": "\\theta_2"
},
{
"math_id": 7,
"text": "C = (1 - \\beta)C_S + \\beta C_A\\!\\,"
},
{
"math_id": 8,
"text": "\\pi (i)"
},
{
"math_id": 9,
"text": "H(\\pi) = \\sum_i C\\left (p_i,q_{\\pi (i)} \\right )"
},
{
"math_id": 10,
"text": "O(N^3)"
},
{
"math_id": 11,
"text": "T : \\mathbb{R}^2 \\to \\mathbb{R}^2"
},
{
"math_id": 12,
"text": "T(p) = Ap + o\\!"
},
{
"math_id": 13,
"text": "A"
},
{
"math_id": 14,
"text": "o = \\frac{1}{n}\\sum_{i=1}^n \\left (p_i - q_{\\pi(i)} \\right ),\n A = (Q^+ P)^t"
},
{
"math_id": 15,
"text": "P = \\begin{pmatrix}\n 1 & p_{11} & p_{12} \\\\\n \\vdots & \\vdots & \\vdots \\\\\n 1 & p_{n1} & p_{n2}\n\\end{pmatrix} "
},
{
"math_id": 16,
"text": "Q\\!"
},
{
"math_id": 17,
"text": "Q^+\\!"
},
{
"math_id": 18,
"text": " T(x,y) = \\left (f_x(x,y),f_y(x,y)\\right )"
},
{
"math_id": 19,
"text": " f(x,y) = a_1 + a_xx + a_yy + \\sum_{i=1}^n\\omega_iU\\left (\\begin{Vmatrix}\n(x_i,y_i) - (x,y) \\end{Vmatrix} \\right ),"
},
{
"math_id": 20,
"text": "U(r)\\!"
},
{
"math_id": 21,
"text": "U(r) = r^2\\log r^2\\!"
},
{
"math_id": 22,
"text": "v_i"
},
{
"math_id": 23,
"text": "p_i = (x_i,y_i)"
},
{
"math_id": 24,
"text": "f_x"
},
{
"math_id": 25,
"text": "x'"
},
{
"math_id": 26,
"text": "f_y"
},
{
"math_id": 27,
"text": "y'"
},
{
"math_id": 28,
"text": " H[f] = \\sum_{i=1}^n(v_i - f(x_i,y_i))^2 + \\lambda I_f"
},
{
"math_id": 29,
"text": "I_f\\!"
},
{
"math_id": 30,
"text": "\\lambda\\!"
},
{
"math_id": 31,
"text": "(x_i,y_i)\\mbox{ and } (x'_i,y'_i)"
},
{
"math_id": 32,
"text": "P\\!"
},
{
"math_id": 33,
"text": "D_{sc}(P,Q) = \\frac{1}{n}\\sum_{p \\in P} \\arg \\underset{q \\in Q}{\\min} C(p,T(q)) + \\frac{1}{m}\\sum_{q \\in Q} \\arg \\underset{p \\in P}{\\min} C(p,T(q))"
},
{
"math_id": 34,
"text": "D_{ac}(P,Q) = \\frac{1}{n}\\sum_{i=1}^n\\sum_{\\Delta \\in Z^2} G(\\Delta)\\left [I_P(p_i + \\Delta) - I_Q(T(q_{\\pi(i)}) + \\Delta)\\right ]^2"
},
{
"math_id": 35,
"text": "I_P\\!"
},
{
"math_id": 36,
"text": "I_Q\\!"
},
{
"math_id": 37,
"text": "G\\!"
},
{
"math_id": 38,
"text": "D_{be}(P,Q)\\!\\,"
}
] |
https://en.wikipedia.org/wiki?curid=15281107
|
1528164
|
Long-tail traffic
|
A long-tailed or heavy-tailed distribution is one that assigns relatively high probabilities to regions far from the mean or median. A more formal mathematical definition is given below. In the context of teletraffic engineering a number of quantities of interest have been shown to have a long-tailed distribution. For example, if we consider the sizes of files transferred from a web server, then, to a good degree of accuracy, the distribution is heavy-tailed, that is, there are a large number of small files transferred but, crucially, the number of very large files transferred remains a major component of the volume downloaded.
Many processes are technically long-range dependent but not self-similar. The differences between these two phenomena are subtle. Heavy-tailed refers to a probability distribution, and long-range dependent refers to a property of a time series and so these should be used with care and a distinction should be made. The terms are distinct although superpositions of samples from heavy-tailed distributions aggregate to form long-range dependent time series.
Additionally, there is Brownian motion which is self-similar but not long-range dependent.
Overview.
The design of robust and reliable networks and network services has become an increasingly challenging task in today's Internet world. To achieve this goal, understanding the characteristics of Internet traffic plays a more and more critical role. Empirical studies of measured traffic traces have led to the wide recognition of self-similarity in network traffic.
Self-similar Ethernet traffic exhibits dependencies over a long range of time scales. This is to be contrasted with telephone traffic which is Poisson in its arrival and departure process.
With many time-series if the series is averaged then the data begins to look smoother. However, with self-similar data, one is confronted with traces that are spiky and bursty, even at large scales. Such behaviour is caused by strong dependence in the data: large values tend to come in clusters, and clusters of clusters, etc. This can have far-reaching consequences for network performance.
Heavy-tail distributions have been observed in many natural phenomena including both physical and sociological phenomena. Mandelbrot established the use of heavy-tail distributions to model real-world fractal phenomena, e.g. Stock markets, earthquakes, and the weather.
Ethernet, WWW, SS7, TCP, FTP, TELNET and VBR video (digitised video of the type that is transmitted over ATM networks) traffic is self-similar.
Self-similarity in packetised data networks can be caused by the distribution of file sizes, human interactions and/or Ethernet dynamics. Self-similar and long-range dependent characteristics in computer networks present a fundamentally different set of problems to people doing analysis and/or design of networks, and many of the previous assumptions upon which systems have been built are no longer valid in the presence of self-similarity.
Short-range dependence vs. long-range dependence.
Long-range and short-range dependent processes are characterised by their autocovariance functions.
In short-range dependent processes, the coupling between values at different times decreases rapidly as the time difference increases.
In long-range processes, the correlations at longer time scales are more significant.
formula_0
where ρ("k") is the autocorrelation function at a lag "k", α is a parameter in the interval (0,1) and the ~ means asymptotically proportional to as "k" approaches infinity.
Long-range dependence as a consequence of mathematical convergence.
Such power-law scaling of the autocorrelation function can be shown to be biconditionally related to a power-law relationship between the variance and the mean, when evaluated from sequences by the method of expanding bins. This variance-to-mean power law is an inherent feature of a family of statistical distributions called the Tweedie exponential dispersion models. Much as the central limit theorem explains how certain types of random data converge towards the form of a normal distribution, there exists a related theorem, the Tweedie convergence theorem, that explains how other types of random data will converge towards the form of these Tweedie distributions, and consequently express both the variance-to-mean power law and a power-law decay in their autocorrelation functions.
The Poisson distribution and traffic.
Before the heavy-tail distribution is introduced mathematically, the memoryless Poisson distribution, used to model traditional telephony networks, is briefly reviewed below. For more details, see the article on the Poisson distribution.
Assuming pure-chance arrivals and pure-chance terminations leads to the following:
formula_1
where "a" is the number of call arrivals and formula_2 is the mean number of call arrivals in time "T". For this reason, pure-chance traffic is also known as Poisson traffic.
formula_3
where "d" is the number of call departures and formula_4 is the mean number of call departures in time "T".
formula_5
where "h" is the Mean Holding Time (MHT).
Information on the fundamentals of statistics and probability theory can be found in the external links section.
The heavy-tail distribution.
Heavy-tail distributions have properties that are qualitatively different from commonly used (memoryless) distributions such as the exponential distribution.
The Hurst parameter "H" is a measure of the level of self-similarity of a time series that exhibits long-range dependence, to which the heavy-tail distribution can be applied. "H" takes on values from 0.5 to 1. A value of 0.5 indicates the data is uncorrelated or has only short-range correlations. The closer "H" is to 1, the greater the degree of persistence or long-range dependence.
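One common way to estimate "H" from a series is the aggregated-variance method: for a self-similar series, the variance of block means scales as "m"2"H"−2 with the block size "m". The sketch below recovers "H" ≈ 0.5 for uncorrelated noise, as described above.

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(1, 2, 4, 8, 16, 32, 64)):
    """Aggregated-variance estimate of the Hurst parameter H:
    fit log Var(block means) against log m; slope = 2H - 2."""
    x = np.asarray(x, dtype=float)
    log_m, log_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return 1.0 + slope / 2.0

# Uncorrelated (short-range) noise should give H close to 0.5.
rng = np.random.default_rng(0)
H = hurst_aggvar(rng.normal(size=200_000))
print(round(H, 2))   # close to 0.5
```

A long-range dependent trace, such as measured Ethernet traffic, would instead yield an estimate well above 0.5.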
Typical values of the Hurst parameter, "H":
A distribution is said to be heavy-tailed if:
formula_6
This means that regardless of the distribution for small values of the random variable, if the asymptotic shape of the distribution is hyperbolic, it is heavy-tailed. The simplest heavy-tail distribution is the Pareto distribution which is hyperbolic over its entire range. Complementary distribution functions for the exponential and Pareto distributions are shown below. Shown on the left is a graph of the distributions shown on linear axes, spanning a large domain. To its right is a graph of the complementary distribution functions over a smaller domain, and with a logarithmic range.
If the logarithm of the range of an exponential distribution is taken, the resulting plot is linear. In contrast, that of the heavy-tail distribution is still curvilinear. These characteristics can be clearly seen on the graph above to the right. A characteristic of long-tail distributions is that if the logarithm of both the range and the domain is taken, the tail of the long-tail distribution is approximately linear over many orders of magnitude. In the graph above left, the condition for the existence of a heavy-tail distribution, as previously presented, is not met by the curve labelled "Gamma-Exponential Tail".
The probability mass function of a heavy-tail distribution is given by:
formula_7
and its cumulative distribution function is given by:
formula_8
where "k" represents the smallest value the random variable can take.
Readers interested in a more rigorous mathematical treatment of the subject are referred to the external links section.
What causes long-tail traffic?
In general, there are three main theories for the causes of long-tail traffic (see a review of all three causes). The first is a cause based in the application layer, which theorizes that user session durations vary with a long-tail distribution due to the file size distribution. If the distribution of file sizes is heavy-tailed, then the superposition of many file transfers in a client/server network environment will be long-range dependent. Additionally, this causal mechanism is robust with respect to changes in network resources (bandwidth and buffer capacity) and network topology. This is currently the most popular explanation in the engineering literature and the one with the most empirical evidence through observed file size distributions.
The second is a transport layer cause, which theorizes that the feedback between multiple TCP streams due to TCP's congestion avoidance algorithm in moderate to high packet loss situations causes self-similar traffic, or at least allows it to propagate. However, this is believed to be a significant factor only at relatively short timescales and not the long-term cause of self-similar traffic.
The third is a theorized link layer cause, predicated on physics simulations of packet switching networks on simulated topologies. At a critical packet creation rate, the flow in a network becomes congested and exhibits 1/f noise and long-tail traffic characteristics. Such models have been criticised as unrealistic, however, in that network traffic is long-tailed even in non-congested regions and at all levels of traffic.
Simulation showed that long-range dependence could arise in the queue-length dynamics at a given node (an entity that transfers traffic) within a communications network even when the traffic sources are free of long-range dependence. The mechanism for this is believed to relate to feedback from routing effects in the simulation.
Modelling long-tail traffic.
Modelling of long-tail traffic is necessary so that networks can be provisioned based on accurate assumptions of the traffic that they carry. The dimensioning and provisioning of networks that carry long-tail traffic is discussed in the next section.
Since (unlike traditional telephony traffic) packetised traffic exhibits self-similar or fractal characteristics, conventional traffic models do not apply to networks that carry long-tail traffic. Previous analytic work done in Internet studies adopted assumptions such as exponentially-distributed packet inter-arrivals, and conclusions reached under such assumptions may be misleading or incorrect in the presence of heavy-tailed distributions.
It has for long been realised that efficient and accurate modelling of various real-world phenomena needs to incorporate the fact that observations made on different scales each carry essential information. In most simple terms, representing data on large scales by its mean is often useful (such as an average income or an average number of clients per day) but can be inappropriate (e.g. in the context of buffering or waiting queues).
With the convergence of voice and data, the future multi-service network will be based on packetised traffic, and models which accurately reflect the nature of long-tail traffic will be required to develop, design and dimension future multi-service networks. We seek an equivalent to the Erlang model for circuit switched networks.
There is not an abundance of heavy-tailed models with rich sets of accompanying data-fitting techniques. A clear model for fractal traffic has not yet emerged, nor is there any definite direction towards a clear model. Deriving mathematical models which accurately represent long-tail traffic is a fertile area of research.
Gaussian models, even long-range dependent Gaussian models, are unable to accurately model current Internet traffic. Classical models of time series such as Poisson and finite Markov processes rely heavily on the assumption of independence, or at least weak dependence. Poisson and Markov related processes have, however, been used with some success. Nonlinear methods are used for producing packet traffic models which can replicate both short-range and long-range dependent streams.
A number of models have been proposed for the task of modelling long-tail traffic. These include the following:
No unanimity exists about which of the competing models is appropriate, but the Poisson Pareto Burst Process (PPBP), which is an M/G/formula_9 process, is perhaps the most successful model to date. It is demonstrated to satisfy the basic requirements of a simple, but accurate, model of long-tail traffic.
Finally, results from simulations using formula_10-stable stochastic processes for modelling traffic in broadband networks are presented. The simulations are compared to a variety of empirical data (Ethernet, WWW, VBR Video).
Network performance.
In some cases, an increase in the Hurst parameter can lead to a reduction in network performance. The extent to which heavy-tailedness degrades network performance is determined by how well congestion control is able to shape source traffic into an on-average constant output stream while conserving information. Congestion control of heavy-tailed traffic is discussed in the following section.
Traffic self-similarity negatively affects primary performance measures such as queue size and packet-loss rate. The queue length distribution of long-tail traffic decays more slowly than with Poisson sources.
However, long-range dependence implies nothing about its short-term correlations which affect performance in small buffers.
For heavy-tailed traffic, extremely large bursts occur more frequently than with light-tailed traffic. Additionally, aggregating streams of long-tail traffic typically intensifies the self-similarity ("burstiness") rather than smoothing it, compounding the problem.
The graph above right presents a queueing performance comparison between traffic streams of varying degrees of self-similarity. Note how the queue size increases with increasing self-similarity of the data, for any given channel utilisation, thus degrading network performance.
In the modern network environment with multimedia and other QoS sensitive traffic streams comprising a growing fraction of network traffic, second-order performance measures in the form of “jitter” such as delay variation and packet loss variation are of import to provisioning user-specified QoS. Self-similar burstiness is expected to exert a negative influence on second-order performance measures.
Packet-switching-based services, such as the Internet (and other networks that employ IP) are best-effort services, so degraded performance, although undesirable, can be tolerated. However, since the connection is contracted, ATM networks need to keep delays and jitter within negotiated limits.
Self-similar traffic exhibits the persistence of clustering which has a negative impact on network performance.
Many aspects of network quality of service depend on coping with traffic peaks that might cause network failures, such as
Poisson processes are well-behaved because they are stateless, and peak loading is not sustained, so queues do not fill. With long-range order, peaks last longer and have greater impact: the equilibrium shifts for a while.
Due to the increased demands that long-tail traffic places on networks resources, networks need to be carefully provisioned to ensure that quality of service and service level agreements are met. The following subsection deals with the provisioning of standard network resources, and the subsection after that looks at provisioning web servers that carry a significant amount of long-tail traffic.
Network provisioning for long-tail traffic.
For network queues with long-range dependent inputs, the sharp increase in queuing delays at fairly low levels of utilisation and slow decay of queue lengths implies that an incremental improvement in loss performance requires a significant increase in buffer size.
While throughput declines gradually as self-similarity increases, queuing delay increases more drastically. When traffic is self-similar, we find that queuing delay grows proportionally to the buffer capacity present in the system. Taken together, these two observations have potentially dire implications for QoS provisions in networks. To achieve a constant level of throughput or packet loss as self-similarity is increased, extremely large buffer capacity is needed. However, increased buffering leads to large queuing delays and thus self-similarity significantly steepens the trade-off curve between throughput/ packet loss and delay.
ATM can be employed in telecommunications networks to overcome second-order performance measure problems. The short fixed-length cell used in ATM reduces the delay and most significantly the jitter for delay-sensitive services such as voice and video.
Web site provisioning for long-tail traffic.
Workload pattern complexities (for example, bursty arrival patterns) can significantly affect resource demands, throughput, and the latency encountered by user requests, in terms of higher average response times and higher response time variance. Without adaptive, optimal management and control of resources, SLAs based on response time are impossible. The capacity requirements on the site are increased while its ability to provide acceptable levels of performance and availability diminishes. Techniques to control and manage long-tail traffic are discussed in the following section.
The ability to accurately forecast request patterns is an important requirement of capacity planning. A practical consequence of burstiness and heavy-tailed and correlated arrivals is difficulty in capacity planning.
With respect to SLAs, the same level of service for heavy-tailed distributions requires a more powerful set of servers, compared with the case of independent light-tailed request traffic. To guarantee good performance, focus needs to be given to peak traffic duration because it is the huge bursts of requests that most degrade performance. That is why some busy sites require more headroom (spare capacity) to handle the volumes; for example, a high-volume online trading site reserves spare capacity with a ratio of three to one.
Reference to additional information on the effect of long-range dependency on network performance can be found in the external links section.
Controlling long-tail traffic.
Given the ubiquity of scale-invariant burstiness observed across diverse networking contexts, finding an effective traffic control algorithm capable of detecting and managing self-similar traffic has become an important problem. The problem of controlling self-similar network traffic is still in its infancy.
Traffic control for self-similar traffic has been explored on two fronts: Firstly, as an extension of performance analysis in the resource provisioning context, and secondly, from the multiple time scale traffic control perspective where the correlation structure at large time scales is actively exploited to improve network performance.
The resource provisioning approach seeks to identify the relative utility of the two principal network resource types – bandwidth and buffer capacity – with respect to their curtailing effects on self-similarity, and advocates a small-buffer/large-bandwidth resource dimensioning policy. Whereas resource provisioning is open-loop in nature, multiple time scale traffic control exploits the long-range correlation structure present in self-similar traffic. Congestion control can be exercised concurrently at multiple time scales, and by cooperatively engaging information extracted at different time scales, achieve significant performance gains.
Another approach adopted in controlling long-tail traffic makes traffic controls cognizant of workload properties. For example, when TCP is invoked in HTTP in the context of web client/server interactions, the size of the file being transported (which is known at the server) is conveyed or made accessible to protocols in the transport layer, which can use it, for example when selecting among alternative protocols, to transport data more effectively. For short files, which constitute the bulk of connection requests in the heavy-tailed file size distributions of web servers, elaborate feedback control may be bypassed in favour of lightweight mechanisms in the spirit of optimistic control, which can result in improved bandwidth utilisation.
It was found that the simplest way to control packet traffic is to limit the length of queues. Long queues in the network invariably occur at hosts (entities that can transmit and receive packets). Congestion control can therefore be achieved by reducing the rate of packet production at hosts with long queues.
Long-range dependence and its exploitation for traffic control is best suited for flows or connections whose lifetime or connection duration is long lasting.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rho(k) \\sim k^{-\\alpha}"
},
{
"math_id": 1,
"text": "\nP(a)= \\left ( \\frac{\\mu^a}{a!} \\right )e^{-\\mu},\n"
},
{
"math_id": 2,
"text": "\\mu"
},
{
"math_id": 3,
"text": "\nP(d)=\\left(\\frac{\\lambda^d}{d!}\\right)e^{-\\lambda},\n"
},
{
"math_id": 4,
"text": "\\lambda"
},
{
"math_id": 5,
"text": "\nP[T \\ge \\ t]=e^{\\frac{-t}{h}},\n"
},
{
"math_id": 6,
"text": "\nP[X>x] \\sim x^{- \\alpha},\\ \\text{as} \\ x \\to \\infty, 0< \\alpha <2\n"
},
{
"math_id": 7,
"text": "\np(x)= \\alpha k^{\\alpha} x^{- \\alpha -1},\\ \\alpha ,k>0,\\ x \\ge k\n"
},
{
"math_id": 8,
"text": "\nF(x)=P[X \\le \\ x]=1- \\left(\\frac{k}{x}\\right)^{\\alpha}\n"
},
{
"math_id": 9,
"text": "\\mathcal{1}"
},
{
"math_id": 10,
"text": "\\alpha"
}
] |
https://en.wikipedia.org/wiki?curid=1528164
|
1528221
|
Sheet metal
|
Metal formed into thin, flat pieces
Sheet metal is metal formed into thin, flat pieces, usually by an industrial process.
Thicknesses can vary significantly; extremely thin sheets are considered foil or leaf, and pieces thicker than 6 mm (0.25 in) are considered plate, such as plate steel, a class of structural steel.
Sheet metal is available in flat pieces or coiled strips. The coils are formed by running a continuous sheet of metal through a roll slitter.
In most of the world, sheet metal thickness is consistently specified in millimeters. In the U.S., the thickness of sheet metal is commonly specified by a traditional, non-linear measure known as its gauge. The larger the gauge number, the thinner the metal. Commonly used steel sheet metal ranges from 30 gauge to about 7 gauge. Gauge differs between ferrous (iron-based) metals and nonferrous metals such as aluminum or copper. Copper thickness, for example, is measured in ounces, representing the weight of copper contained in an area of one square foot. Parts manufactured from sheet metal must maintain a uniform thickness for ideal results.
There are many different metals that can be made into sheet metal, such as aluminium, brass, copper, steel, tin, nickel and titanium. For decorative uses, some important sheet metals include silver, gold, and platinum (platinum sheet metal is also utilized as a catalyst). These metal sheets are processed through different processing technologies, mainly including cold rolling and hot rolling. Sometimes hot-dip galvanizing process is adopted as needed to prevent it from rusting due to constant exposure to the outdoors. Sometimes a layer of color coating is applied to the surface of the cold-rolled sheet to obtain a decorative and protective metal sheet, generally called a color-coated metal sheet.
Sheet metal is used in automobile and truck (lorry) bodies, major appliances, airplane fuselages and wings, tinplate for tin cans, roofing for buildings (architecture), and many other applications. Sheet metal of iron and other materials with high magnetic permeability, also known as laminated steel cores, has applications in transformers and electric machines. Historically, an important use of sheet metal was in plate armor worn by cavalry, and sheet metal continues to have many decorative uses, including in horse tack. Sheet metal workers are also known as "tin bashers" (or "tin knockers"), a name derived from the hammering of panel seams when installing tin roofs.
History.
Hand-hammered metal sheets have been used since ancient times for architectural purposes. Water-powered rolling mills replaced the manual process in the late 17th century. The process of flattening metal sheets required large rotating iron cylinders which pressed metal pieces into sheets. The metals suited for this were lead, copper, zinc, iron and later steel. Tin was often used to coat iron and steel sheets to prevent them from rusting. This tin-coated sheet metal was called "tinplate." Sheet metals appeared in the United States in the 1870s, being used for shingle roofing, stamped ornamental ceilings, and exterior façades. Sheet metal ceilings were only popularly known as "tin ceilings" later, as manufacturers of the period did not use the term. The popularity of both shingles and ceilings encouraged widespread production. With further advances in steel sheet metal production in the 1890s, the promise of being cheap, durable, easy to install, lightweight and fireproof gave the middle class a significant appetite for sheet metal products. It was not until the 1930s and WWII, when metals became scarce, that the sheet metal industry began to collapse. However, some American companies, such as the W.F. Norman Corporation, were able to stay in business by making other products until historic preservation projects aided the revival of ornamental sheet metal.
Materials.
Stainless steel.
Grade 304 is the most common stainless steel grade. It offers good corrosion resistance while maintaining formability and weldability. Available finishes are #2B, #3, and #4. Grade 303 is not available in sheet form.
Grade 316 possesses more corrosion resistance and strength at elevated temperatures than 304. It is commonly used for pumps, valves, chemical equipment, and marine applications. Available finishes are #2B, #3, and #4.
Grade 410 is a heat treatable stainless steel, but it has a lower corrosion resistance than the other grades. It is commonly used in cutlery. The only available finish is dull.
Grade 430 is a popular grade, low-cost alternative to series 300's grades. This is used when high corrosion resistance is not a primary criterion. Common grade for appliance products, often with a brushed finish.
Aluminium.
Aluminium is widely used in sheet metal form due to its flexibility, wide range of options, cost effectiveness, and other properties. The four most common aluminium grades available as sheet metal are 1100-H14, 3003-H14, 5052-H32, and 6061-T6.
Grade 1100-H14 is commercially pure aluminium, highly chemical and weather resistant. It is ductile enough for deep drawing and weldable, but has low strength. It is commonly used in chemical processing equipment, light reflectors, and jewelry.
Grade 3003-H14 is stronger than 1100, while maintaining the same formability and low cost. It is corrosion resistant and weldable. It is often used in stampings, spun and drawn parts, mail boxes, cabinets, tanks, and fan blades.
Grade 5052-H32 is much stronger than 3003 while still maintaining good formability. It maintains high corrosion resistance and weldability. Common applications include electronic chassis, tanks, and pressure vessels.
Grade 6061-T6 is a common heat-treated structural aluminium alloy. It is weldable, corrosion resistant, and stronger than 5052, but not as formable. It loses some of its strength when welded. It is used in modern aircraft structures.
Brass.
Brass is an alloy of copper, which is widely used as a sheet metal. It has more strength, corrosion resistance and formability when compared to copper while retaining its conductivity.
In sheet hydroforming, variation in incoming sheet coil properties is a common problem for the forming process, especially with materials for automotive applications. Even though an incoming sheet coil may meet tensile test specifications, a high rejection rate is often observed in production due to inconsistent material behavior. Thus there is a strong need for a discriminating test of incoming sheet material formability. The hydraulic sheet bulge test emulates the biaxial deformation conditions commonly seen in production operations.
Forming limit curves have been obtained for aluminium, mild steel, and brass. Theoretical analysis proceeds by deriving governing equations for the equivalent stress and equivalent strain, assuming the bulge to be spherical and applying Tresca's yield criterion with the associated flow rule. For experimental determination, circular grid analysis is one of the most effective methods.
Gauge.
Use of gauge numbers to designate sheet metal thickness is discouraged by numerous international standards organizations. For example, ASTM states in specification ASTM A480-10a: "The use of gauge number is discouraged as being an archaic term of limited usefulness not having general agreement on meaning."
Manufacturers' Standard Gauge for Sheet Steel is based on an average weight of 41.82 lb per square foot per inch of thickness. Gauge is defined differently for ferrous (iron-based) and non-ferrous metals (e.g. aluminium and brass).
The gauge thicknesses shown in column 2 (U.S. standard sheet and plate iron and steel decimal inch (mm)) seem somewhat arbitrary. The progression of thicknesses is clear in column 3 (U.S. standard for sheet and plate iron and steel 64ths inch (delta)). The thicknesses vary first by <templatestyles src="Fraction/styles.css" />1⁄32 inch in higher thicknesses and then step down to increments of <templatestyles src="Fraction/styles.css" />1⁄64 inch, then <templatestyles src="Fraction/styles.css" />1⁄128 inch, with the final increments at decimal fractions of <templatestyles src="Fraction/styles.css" />1⁄64 inch.
Some steel tubes are manufactured by folding a single steel sheet into a square/circle and welding the seam together. Their wall thickness has a similar (but distinct) gauge to the thickness of steel sheets.
Tolerances.
During the rolling process the rollers bow slightly, which results in the sheets being thinner on the edges. The tolerances in the table and attachments reflect current manufacturing practices and commercial standards and are not representative of the Manufacturer's Standard Gauge, which has no inherent tolerances.
Forming processes.
Bending.
The equation for estimating the maximum bending force is,
formula_0,
where "k" is a factor taking into account several parameters including friction. "T" is the ultimate tensile strength of the metal. "L" and "t" are the length and thickness of the sheet metal, respectively. The variable "W" is the open width of a V-die or wiping die.
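A numerical sketch of the bending-force estimate above. The input values are hypothetical: k ≈ 1.3 is a commonly quoted textbook factor for V-die bending, and the tensile strength is a typical mild-steel figure, not data from this article.

```python
def max_bending_force(k: float, T: float, L: float, t: float, W: float) -> float:
    """F_max = k * T * L * t^2 / W (use consistent units, e.g. N and mm)."""
    return k * T * L * t**2 / W

# Hypothetical V-die bend: k = 1.3, mild steel T = 400 N/mm^2,
# 1000 mm long bend, 2 mm thick sheet, 16 mm die opening.
F = max_bending_force(k=1.3, T=400.0, L=1000.0, t=2.0, W=16.0)
print(f"Estimated maximum bending force: {F / 1000:.0f} kN")
# → Estimated maximum bending force: 130 kN
```

Note the quadratic dependence on thickness: doubling t quadruples the required force, while widening the die opening W reduces it proportionally.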
Curling.
The curling process is used to form an edge on a ring. This process is used to remove sharp edges. It also increases the moment of inertia near the curled end.
The flare/burr should be turned away from the die. A given curling tool is used for material of a specific thickness. Tool steel is generally used for the tooling because of the wear imposed by the operation.
Decambering.
Decambering is a metalworking process for removing camber, the horizontal bend, from strip-shaped material. It may be done on finite-length sections or on coils. It resembles a flattening or leveling process, but acts on a deformed edge.
Deep drawing.
Drawing is a forming process in which the metal is stretched over a form or die. In deep drawing the depth of the part being made is more than half its diameter. Deep drawing is used for making automotive fuel tanks, kitchen sinks, two-piece aluminum cans, etc. Deep drawing is generally done in multiple steps called draw reductions. The greater the depth, the more reductions are required. Deep drawing may also be accomplished with fewer reductions by heating the workpiece, for example in sink manufacture.
In many cases, material is rolled at the mill in both directions to aid in deep drawing. This leads to a more uniform grain structure which limits tearing and is referred to as "draw quality" material.
Expanding.
Expanding is a process of cutting or stamping slits in an alternating pattern, much like the stretcher bond in brickwork, and then stretching the sheet open in accordion-like fashion. It is used in applications where air and water flow are desired, as well as where light weight is desired at the cost of a solid flat surface. A similar process is used in other materials, such as paper, to create a low-cost packing paper with better supportive properties than flat paper alone.
Hemming and seaming.
Hemming is a process of folding the edge of sheet metal onto itself to reinforce that edge. Seaming is a process of folding two sheets of metal together to form a joint.
Hydroforming.
Hydroforming is a process that is analogous to deep drawing, in that the part is formed by stretching the blank over a stationary die. The force required is generated by the direct application of extremely high hydrostatic pressure to the workpiece or to a bladder that is in contact with the workpiece, rather than by the movable part of a die in a mechanical or hydraulic press. Unlike deep drawing, hydroforming usually does not involve draw reductions—the piece is formed in a single step.
Incremental sheet forming.
Incremental sheet forming (ISF) is a sheet metal forming process in which the sheet is formed into its final shape by a series of small, incremental deformations.
Ironing.
Ironing is a sheet metal forming process that uniformly thins the workpiece in a specific area. It is used to produce parts with a uniform wall thickness and a high height-to-diameter ratio.
It is used in making aluminium beverage cans.
Laser cutting.
Sheet metal can be cut in various ways, from hand tools called tin snips up to very large powered shears. With the advances in technology, sheet metal cutting has turned to computers for precise cutting. Many sheet metal cutting operations are based on computer numerically controlled (CNC) laser cutting or multi-tool CNC punch press.
CNC laser involves moving a lens assembly carrying a beam of laser light over the surface of the metal. Oxygen, nitrogen or air is fed through the same nozzle from which the laser beam exits. The metal is heated and burnt by the laser beam, cutting the metal sheet. The quality of the edge can be mirror smooth, and high dimensional precision can be obtained. Cutting speeds on thin sheet can be very high. Most laser cutting systems use a CO2 based laser source with a wavelength of around 10 μm; some more recent systems use a YAG based laser with a wavelength of around 1 μm.
Photochemical machining.
Photochemical machining, also known as photo etching, is a tightly controlled corrosion process which is used to produce complex metal parts from sheet metal with very fine detail. The photo etching process involves photo sensitive polymer being applied to a raw metal sheet. Using CAD designed photo-tools as stencils, the metal is exposed to UV light to leave a design pattern, which is developed and etched from the metal sheet.
Perforating.
Perforating is a cutting process that punches multiple small holes close together in a flat workpiece. Perforated sheet metal is used to make a wide variety of surface cutting tools, such as the surform.
Press brake forming.
This is a form of bending used to produce long, thin sheet metal parts. The machine that bends the metal is called a press brake. The lower part of the press contains a V-shaped groove called the die. The upper part of the press contains a punch that presses the sheet metal down into the v-shaped die, causing it to bend. There are several techniques used, but the most common modern method is "air bending". Here, the die has a sharper angle than the required bend (typically 85 degrees for a 90 degree bend) and the upper tool is precisely controlled in its stroke to push the metal down the required amount to bend it through 90 degrees. Typically, a general purpose machine has an available bending force of around 25 tons per meter of length. The opening width of the lower die is typically 8 to 10 times the thickness of the metal to be bent (for example, 5 mm material could be bent in a 40 mm die). The inner radius of the bend formed in the metal is determined not by the radius of the upper tool, but by the lower die width. Typically, the inner radius is equal to 1/6 of the V-width used in the forming process.
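The air-bending rules of thumb in the paragraph above (die opening of 8 to 10 times the sheet thickness, inner bend radius of roughly one sixth of the V-width) can be collected into a small setup calculator. This is a sketch of those rules only, not a substitute for tooling charts.

```python
def air_bend_setup(thickness_mm: float, die_ratio: float = 8.0):
    """Rules of thumb from the text: V-die opening is die_ratio (8-10)
    times the sheet thickness, and the inner bend radius is roughly
    one sixth of the V-die width."""
    v_width = die_ratio * thickness_mm
    inner_radius = v_width / 6.0
    return v_width, inner_radius

v, r = air_bend_setup(5.0)   # 5 mm sheet, as in the worked example above
print(f"V-die opening ~{v:.0f} mm, expected inner radius ~{r:.1f} mm")
# → V-die opening ~40 mm, expected inner radius ~6.7 mm
```

Because the inner radius follows the die width rather than the punch tip, switching the same sheet to a wider die produces a visibly larger bend radius.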
The press usually has some sort of back gauge to position depth of the bend along the workpiece. The backgauge can be computer controlled to allow the operator to make a series of bends in a component to a high degree of accuracy. Simple machines control only the backstop, more advanced machines control the position and angle of the stop, its height and the position of the two reference pegs used to locate the material. The machine can also record the exact position and pressure required for each bending operation to allow the operator to achieve a perfect 90 degree bend across a variety of operations on the part.
Punching.
Punching is performed by placing the sheet of metal stock between a punch and a die mounted in a press. The punch and die are made of hardened steel and are the same shape. The punch is sized to be a very close fit in the die. The press pushes the punch against and into the die with enough force to cut a hole in the stock. In some cases the punch and die "nest" together to create a depression in the stock. In progressive stamping, a coil of stock is fed into a long die/punch set with many stages. Multiple simple shaped holes may be produced in one stage, but complex holes are created in multiple stages. In the final stage, the part is punched free from the "web".
A typical CNC turret punch has a choice of up to 60 tools in a "turret" that can be rotated to bring any tool to the punching position. A simple shape (e.g. a square, circle, or hexagon) is cut directly from the sheet. A complex shape can be cut out by making many square or rounded cuts around the perimeter. A punch is less flexible than a laser for cutting compound shapes, but faster for repetitive shapes (for example, the grille of an air-conditioning unit). A CNC punch can achieve 600 strokes per minute.
A typical component (such as the side of a computer case) can be cut to high precision from a blank sheet in under 15 seconds by either a press or a laser CNC machine.
Roll forming.
A continuous bending operation for producing open profiles or welded tubes with long lengths or in large quantities.
Rolling.
Rolling is a metal forming process in which stock passes through one or more pairs of rolls to reduce its thickness and make the thickness uniform. It is classified according to the temperature of rolling:
Spinning.
Spinning is used to make tubular (axis-symmetric) parts by fixing a piece of sheet stock to a rotating form (mandrel). Rollers or rigid tools press the stock against the form, stretching it until the stock takes the shape of the form. Spinning is used to make rocket motor casings, missile nose cones, satellite dishes and metal kitchen funnels.
Stamping.
Stamping includes a variety of operations such as punching, blanking, embossing, bending, flanging, and coining; simple or complex shapes can be formed at high production rates; tooling and equipment costs can be high, but labor costs are low.
Alternatively, the related techniques repoussé and chasing have low tooling and equipment costs, but high labor costs.
Water jet cutting.
A water jet cutter, also known as a waterjet, is a tool capable of a controlled erosion into metal or other materials using a jet of water at high velocity and pressure, or a mixture of water and an abrasive substance.
Wheeling.
The process of using an English wheel is called wheeling. An English wheel is used by a craftsperson to form compound curves in a flat sheet of aluminium or steel. It is costly, as highly skilled labour is required, but the same method can produce many different panels. For production in high numbers, a stamping press is used instead.
Sheet metal fabrication.
Sheet metal fabrication is the use of sheet metal, through a comprehensive cold working process including bending, shearing, punching, laser cutting, water jet cutting, riveting, and splicing, to make a final product such as a computer chassis, washing machine shell, or refrigerator door panel. The academic community currently has no uniform definition, but a common feature of the process is that the material is generally a thin sheet whose thickness remains largely unchanged over most of the finished part.
Fasteners.
Fasteners that are commonly used on sheet metal include: clecos, rivets, and sheet metal screws.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "F_\\text{max} = k \\frac{TLt^{2}}{W}"
}
] |
https://en.wikipedia.org/wiki?curid=1528221
|
1528261
|
Gephyrocapsa huxleyi
|
Unicellular algae responsible for the formation of chalk
<templatestyles src="Template:Taxobox/core/styles.css" />
Gephyrocapsa huxleyi, formerly called Emiliania huxleyi, is a species of coccolithophore found in almost all ocean ecosystems from the equator to sub-polar regions, and from nutrient rich upwelling zones to nutrient poor oligotrophic waters. It is one of thousands of different photosynthetic plankton that freely drift in the photic zone of the ocean, forming the basis of virtually all marine food webs. It is studied for the extensive blooms it forms in nutrient-depleted waters after the reformation of the summer thermocline. Like other coccolithophores, "E. huxleyi" is a single-celled phytoplankton covered with uniquely ornamented calcite disks called coccoliths. Individual coccoliths are abundant in marine sediments although complete coccospheres are more unusual. In the case of "E. huxleyi", not only the shell, but also the soft part of the organism may be recorded in sediments. It produces a group of chemical compounds that are very resistant to decomposition. These chemical compounds, known as alkenones, can be found in marine sediments long after other soft parts of the organisms have decomposed. Alkenones are most commonly used by earth scientists as a means to estimate past sea surface temperatures.
Basic facts.
"Emiliania huxleyi" was named after Thomas Huxley and Cesare Emiliani, who were the first to examine sea-bottom sediment and discover the coccoliths within it. It is believed to have evolved approximately 270,000 years ago from the older genus "Gephyrocapsa" Kampter and became dominant in planktonic assemblages, and thus in the fossil record, approximately 70,000 years ago. It is the most numerically abundant and widespread coccolithophore species. The species is divided into seven morphological forms called morphotypes based on differences in coccolith structure (See Nannotax for more detail on these forms). Its coccoliths are transparent and commonly colourless, but are formed of calcite which refracts light very efficiently in the water column. This, and the high concentrations caused by continual shedding of their coccoliths makes "E. huxleyi" blooms easily visible from space. Satellite images show that blooms can cover areas of more than 10,000 kmformula_0, with complementary shipboard measurements indicating that "E. huxleyi" is by far the dominant phytoplankton species under these conditions. This species has been an inspiration for James Lovelock's Gaia hypothesis which claims that living organisms collectively self-regulate biogeochemistry and climate at nonrandom metastable states.
Abundance and distribution.
"Emiliania huxleyi" is considered a ubiquitous species. It exhibits one of the largest temperature ranges (1–30 °C) of any coccolithophores species. It has been observed under a range of nutrient levels from oligotrophic (subtropical gyres) to eutrophic waters (upwelling zones/ Norwegian fjords). Its presence in plankton communities from the surface to 200m depth indicates a high tolerance for both fluctuating and low light conditions. This extremely wide tolerance of environmental conditions is believed to be explained by the existence of a range of environmentally adapted ecotypes within the species. As a result of these tolerances its distribution ranges from the sub-Arctic to the sub-Antarctic and from coastal to oceanic habitats. Within this range it is present in nearly all euphotic zone water samples and accounts for 20–50% or more of the total coccolithophore community.
During massive blooms (which can cover over 100,000 square kilometers), "E. huxleyi" cell concentrations can outnumber those of all other species in the region combined, accounting for 75% or more of the total number of photosynthetic plankton in the area. "E. huxleyi" blooms regionally act as an important source of calcium carbonate and dimethyl sulfide, the massive production of which can have a significant impact not only on the properties of the surface mixed layer, but also on global climate. The blooms can be identified through satellite imagery because of the large amount of light back-scattered from the water column, which provides a method to assess their biogeochemical importance on both basin and global scales. These blooms are prevalent in Norwegian fjords, where satellites record "white waters", a term describing the high reflectance of the blooms. The mass of coccoliths reflects incoming sunlight back out of the water, allowing the extent of "E. huxleyi" blooms to be mapped in fine detail.
Extensive "E. huxleyi" blooms can have a visible impact on sea albedo. While multiple scattering can increase light path per unit depth, increasing absorption and solar heating of the water column, "E. huxleyi" has inspired proposals for geomimesis, because micron-sized air bubbles are specular reflectors, and so in contrast to "E. huxleyi", tend to lower the temperature of the upper water column. As with self-shading within water-whitening coccolithophore plankton blooms, this may reduce photosynthetic productivity by altering the geometry of the euphotic zone. Both experiments and modeling are needed to quantify the potential biological impact of such effects, and the corollary potential of reflective blooms of other organisms to increase or reduce evaporation and methane evolution by altering fresh water temperatures.
Biogeochemical impacts.
Climate change.
As with all phytoplankton, primary production of "E. huxleyi" through photosynthesis is a sink of carbon dioxide. However, the production of coccoliths through calcification is a source of CO2. This means that coccolithophores, including "E. huxleyi", have the potential to act as a net source of CO2 out of the ocean. Whether they are a net source or sink and how they will react to ocean acidification is not yet well understood.
Ocean heat retention.
Scattering stimulated by "E. huxleyi" blooms not only causes more heat and light to be pushed back up into the atmosphere than usual, but also cause more of the remaining heat to be trapped closer to the ocean surface. This is problematic because it is the surface water that exchanges heat with the atmosphere, and "E. huxleyi" blooms may tend to make the overall temperature of the water column dramatically cooler over longer time periods. However, the importance of this effect, whether positive or negative, is currently being researched and has not yet been established.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "^2"
}
] |
https://en.wikipedia.org/wiki?curid=1528261
|
15282775
|
Impact pressure
|
In compressible fluid dynamics, impact pressure (dynamic pressure) is the difference between total pressure (also known as pitot pressure or stagnation pressure) and static pressure. In aerodynamics notation, this quantity is denoted as formula_0 or formula_1.
When input to an airspeed indicator, impact pressure is used to provide a calibrated airspeed reading. An air data computer with inputs of pitot and static pressures is able to provide a Mach number and, if static temperature is known, true airspeed.
Some authors in the field of compressible flows use the term "dynamic pressure" or "compressible dynamic pressure" instead of "impact pressure".
Isentropic flow.
In isentropic flow the ratio of total pressure to static pressure is given by:
formula_2
where:
formula_3 is total pressure
formula_4 is static pressure
formula_5 is the ratio of specific heats
formula_6 is the freestream Mach number
Taking formula_5 to be 1.4, and since formula_7
formula_8
Expressing the incompressible dynamic pressure as formula_9 and expanding by the binomial series gives:
formula_10
where:
formula_11 is dynamic pressure
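The exact isentropic relation and its binomial-series approximation can be compared numerically. This sketch implements the two expressions above for γ = 1.4 (the series coefficients 1/4, 1/40, 1/1600 are specific to that value); the sea-level static pressure is an assumed standard value.

```python
def qc_exact(p_static: float, mach: float, gamma: float = 1.4) -> float:
    """Impact pressure q_c = P[(1 + (gamma-1)/2 * M^2)^(gamma/(gamma-1)) - 1]."""
    return p_static * ((1 + (gamma - 1) / 2 * mach**2) ** (gamma / (gamma - 1)) - 1)

def qc_series(p_static: float, mach: float) -> float:
    """Binomial-series form q_c = q(1 + M^2/4 + M^4/40 + M^6/1600 + ...),
    valid for gamma = 1.4; q is the incompressible dynamic pressure."""
    q = 0.5 * 1.4 * p_static * mach**2
    return q * (1 + mach**2 / 4 + mach**4 / 40 + mach**6 / 1600)

p0 = 101325.0  # assumed sea-level static pressure, Pa
for m in (0.2, 0.5, 0.8):
    print(f"M={m}: exact={qc_exact(p0, m):9.1f} Pa, series={qc_series(p0, m):9.1f} Pa")
```

At subsonic Mach numbers the truncated series tracks the exact relation closely, which is why the low-order compressibility correction to dynamic pressure is adequate for most airspeed-indication purposes.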
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "q_c"
},
{
"math_id": 1,
"text": "Q_c"
},
{
"math_id": 2,
"text": "\\frac{P_t}{P} = \\left(1+ \\frac{\\gamma -1}{2} M^2 \\right)^\\tfrac{\\gamma}{\\gamma - 1}"
},
{
"math_id": 3,
"text": "P_t"
},
{
"math_id": 4,
"text": "P"
},
{
"math_id": 5,
"text": "\\gamma\\;"
},
{
"math_id": 6,
"text": "M\\;"
},
{
"math_id": 7,
"text": "\\;P_t=P+q_c"
},
{
"math_id": 8,
"text": "\\;q_c = P\\left[\\left(1+0.2 M^2 \\right)^\\tfrac{7}{2}-1\\right]"
},
{
"math_id": 9,
"text": "\\;\\tfrac{1}{2}\\gamma PM^2"
},
{
"math_id": 10,
"text": "\\;q_c=q \\left(1 + \\frac{M^2}{4} + \\frac{M^4}{40} + \\frac{M^6}{1600} ... \\right)\\;"
},
{
"math_id": 11,
"text": "\\;q"
}
] |
https://en.wikipedia.org/wiki?curid=15282775
|
15282871
|
Predicate functor logic
|
Algebraization of first-order logic
In mathematical logic, predicate functor logic (PFL) is one of several ways to express first-order logic (also known as predicate logic) by purely algebraic means, i.e., without quantified variables. PFL employs a small number of algebraic devices called predicate functors (or predicate modifiers) that operate on terms to yield terms. PFL is mostly the invention of the logician and philosopher Willard Quine.
Motivation.
The source for this section, as well as for much of this entry, is Quine (1976). Quine proposed PFL as a way of algebraizing first-order logic in a manner analogous to how Boolean algebra algebraizes propositional logic. He designed PFL to have exactly the expressive power of first-order logic with identity. Hence the metamathematics of PFL are exactly those of first-order logic with no interpreted predicate letters: both logics are sound, complete, and undecidable. Most work Quine published on logic and mathematics in the last 30 years of his life touched on PFL in some way.
Quine took "functor" from the writings of his friend Rudolf Carnap, the first to employ it in philosophy and mathematical logic, and defined it as follows:
"The word "functor", grammatical in import but logical in habitat... is a sign that attaches to one or more expressions of given grammatical kind(s) to produce an expression of a given grammatical kind." (Quine 1982: 129)
Ways other than PFL to algebraize first-order logic include:
PFL is arguably the simplest of these formalisms, yet also the one about which the least has been written.
Quine had a lifelong fascination with combinatory logic, attested to by his introduction to the translation in Van Heijenoort (1967) of the paper by the Russian logician Moses Schönfinkel founding combinatory logic. When Quine began working on PFL in earnest, in 1959, combinatory logic was commonly deemed a failure for the following reasons:
Kuhn's formalization.
The PFL syntax, primitives, and axioms described in this section are largely Steven Kuhn's (1983). The semantics of the functors are Quine's (1982). The rest of this entry incorporates some terminology from Bacon (1985).
Syntax.
An "atomic term" is an upper case Latin letter, "I" and "S" excepted, followed by a numerical superscript called its "degree", or by concatenated lower case variables, collectively known as an "argument list". The degree of a term conveys the same information as the number of variables following a predicate letter. An atomic term of degree 0 denotes a Boolean variable or a truth value. The degree of "I" is invariably 2 and so is not indicated.
The "combinatory" (the word is Quine's) predicate functors, all monadic and peculiar to PFL, are Inv, inv, ∃, +, and p. A term is either an atomic term, or constructed by the following recursive rule. If τ is a term, then Invτ, invτ, ∃τ, +τ, and pτ are terms. A functor with a superscript "n", "n" a natural number > 1, denotes "n" consecutive applications (iterations) of that functor.
A formula is either a term or defined by the recursive rule: if α and β are formulas, then αβ and ~(α) are likewise formulas. Hence "~" is another monadic functor, and concatenation is the sole dyadic predicate functor. Quine called these functors "alethic." The natural interpretation of "~" is negation; that of concatenation is any connective that, when combined with negation, forms a functionally complete set of connectives. Quine's preferred functionally complete set was conjunction and negation. Thus concatenated terms are taken as conjoined. The notation + is Bacon's (1985); all other notation is Quine's (1976; 1982). The alethic part of PFL is identical to the "Boolean term schemata" of Quine (1982).
As is well known, the two alethic functors could be replaced by a single dyadic functor with the following syntax and semantics: if α and β are formulas, then (αβ) is a formula whose semantics are "not (α and/or β)" (see NAND and NOR).
Axioms and semantics.
Quine set out neither axiomatization nor proof procedure for PFL. The following axiomatization of PFL, one of two proposed in Kuhn (1983), is concise and easy to describe, but makes extensive use of free variables and so does not do full justice to the spirit of PFL. Kuhn gives another axiomatization dispensing with free variables, but that is harder to describe and that makes extensive use of defined functors. Kuhn proved both of his PFL axiomatizations sound and complete.
This section is built around the primitive predicate functors and a few defined ones. The alethic functors can be axiomatized by any set of axioms for sentential logic whose primitives are negation and one of ∧ or ∨. Equivalently, all tautologies of sentential logic can be taken as axioms.
Quine's (1982) semantics for each predicate functor are stated below in terms of abstraction (set builder notation), followed by either the relevant axiom from Kuhn (1983), or a definition from Quine (1976). The notation formula_0 denotes the set of "n"-tuples satisfying the atomic formula formula_1
formula_2
Identity is reflexive (Ixx), symmetric ("Ixy"→"Iyx"), transitive (("Ixy"∧"Iyz") → "Ixz"), and obeys the substitution property:
formula_3
formula_4
formula_5
formula_6
formula_7
"Cropping" enables two useful defined functors:
formula_8
formula_9
S generalizes the notion of reflexivity to all terms of any finite degree greater than 2. N.B.: S should not be confused with the primitive combinator S of combinatory logic.
formula_11
Here only, Quine adopted an infix notation, because this infix notation for Cartesian product is very well established in mathematics. Cartesian product allows restating conjunction as follows:
formula_12
Reorder the concatenated argument list so as to shift a pair of duplicate variables to the far left, then invoke S to eliminate the duplication. Repeating this as many times as required results in an argument list of length max("m","n").
The next three functors enable reordering argument lists at will.
formula_13
formula_14
formula_15
formula_16
formula_17
formula_18
Given an argument list consisting of "n" variables, p implicitly treats the last "n"−1 variables like a bicycle chain, with each variable constituting a link in the chain. One application of p advances the chain by one link. "k" consecutive applications of p to "F""n" moves the "k"+1 variable to the second argument position in "F".
When "n"=2, Inv and inv merely interchange "x"1 and "x"2. When "n"=1, they have no effect. Hence p has no effect when "n" < 3.
Kuhn (1983) takes "Major inversion" and "Minor inversion" as primitive. The notation p in Kuhn corresponds to inv; he has no analog to "Permutation" and hence has no axioms for it. If, following Quine (1976), p is taken as primitive, Inv and inv can be defined as nontrivial combinations of +, ∃, and iterated p.
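To make the functor semantics concrete, the following sketch (my own modeling, not Quine's) represents a degree-"n" term as a finite set of "n"-tuples over a small universe, and checks two identities from the "Some useful results" section:

```python
# Combinatory predicate functors acting on finite relations. A degree-n term
# is modeled as a set of n-tuples over the universe U.
U = {0, 1, 2}

def pad(F):
    # '+' : +F holds of (x0, x1, ..., xn) iff F holds of (x1, ..., xn)
    return {(x0, *t) for x0 in U for t in F}

def crop(F):
    # 'E' (cropping): EF holds of (x2, ..., xn) iff F holds of (x1, ..., xn)
    # for some x1 in U
    return {t[1:] for t in F}

def refl(F):
    # 'S' : SF holds of (x2, ..., xn) iff F holds of (x2, x2, x3, ..., xn)
    return {t[1:] for t in F if t[0] == t[1]}

def inv(F):
    # minor inversion: invF holds of (x1, x2, ...) iff F holds of (x2, x1, ...)
    return {(t[1], t[0], *t[2:]) for t in F}

def p(F):
    # permutation: pF holds of (x1, x2, ..., xn) iff F holds of
    # (x1, x3, ..., xn, x2) -- i.e. the chain of the last n-1 arguments
    # advances by one link
    return {(t[0], t[-1], *t[1:-1]) for t in F}

F = {(0, 1, 2, 0), (1, 2, 0, 1)}   # a degree-4 term
assert p(p(p(F))) == F             # n-1 iterations of p restore F^n
assert inv(inv(F)) == F            # minor inversion is an involution
assert crop(pad(F)) == F           # '+' and the quantifier annihilate
```

The assertions instantiate, for a degree-4 term, the conjectures that "n"−1 consecutive applications of p restore the status quo ante and that + and ∃ annihilate each other.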
The following table summarizes how the functors affect the degrees of their arguments.
Rules.
All instances of a predicate letter may be replaced by another predicate letter of the same degree, without affecting validity. The rules are:
Some useful results.
Instead of axiomatizing PFL, Quine (1976) proposed the following conjectures as candidate axioms.
formula_22
"n"−1 consecutive iterations of p restores the "status quo ante":
formula_23
+ and ∃ annihilate each other:
formula_24
Negation distributes over +, ∃, and p:
formula_25
+ and p distribute over conjunction:
formula_26
Identity has the interesting implication:
formula_27
Quine also conjectured the rule: If α is a PFL theorem, then so are "pα", +α, and formula_28.
Bacon's work.
Bacon (1985) takes the conditional, negation, "Identity", "Padding", and "Major" and "Minor inversion" as primitive, and "Cropping" as defined. Employing terminology and notation differing somewhat from the above, Bacon (1985) sets out two formulations of PFL:
Bacon also:
From first-order logic to PFL.
The following algorithm is adapted from Quine (1976: 300–2). Given a closed formula of first-order logic, first do the following:
"y" as Ixy.
Now apply the following algorithm to the preceding result:
The reverse translation, from PFL to first-order logic, is discussed in Quine (1976: 302–4).
The canonical foundation of mathematics is axiomatic set theory, with a background logic consisting of first-order logic with identity, with a universe of discourse consisting entirely of sets. There is a single predicate letter of degree 2, interpreted as set membership. The PFL translation of the canonical axiomatic set theory ZFC is not difficult, as no ZFC axiom requires more than 6 quantified variables.
Footnotes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\{x_1\\cdots x_n : Fx_1\\cdots x_n\\}"
},
{
"math_id": 1,
"text": "Fx_1\\cdots x_n."
},
{
"math_id": 2,
"text": " IFx_1x_2\\cdots x_n \\leftrightarrow (Fx_1x_1\\cdots x_n \\leftrightarrow Fx_2x_2\\cdots x_n)\\text{.}"
},
{
"math_id": 3,
"text": "(Fx_1\\cdots x_n \\land Ix_1y) \\rightarrow Fyx_2\\cdots x_n."
},
{
"math_id": 4,
"text": "\\ +F^n \\ \\overset{\\underset{\\mathrm{def}}{}}{=}\\ \\{x_0x_1\\cdots x_n : F^n x_1\\cdots x_n\\}."
},
{
"math_id": 5,
"text": "+Fx_1\\cdots x_n \\leftrightarrow Fx_2\\cdots x_n."
},
{
"math_id": 6,
"text": " \\exist F^n \\ \\overset{\\underset{\\mathrm{def}}{}}{=}\\ \\{x_2\\cdots x_n : \\exist x_1 F^n x_1\\cdots x_n\\}."
},
{
"math_id": 7,
"text": "Fx_1\\cdots x_n \\rightarrow \\exist Fx_2\\cdots x_n."
},
{
"math_id": 8,
"text": "SF^n \\ \\overset{\\underset{\\mathrm{def}}{}}{=}\\ \\{x_2\\cdots x_n : F^n x_2x_2\\cdots x_n\\}."
},
{
"math_id": 9,
"text": "SF^n \\leftrightarrow \\exist IF^n."
},
{
"math_id": 10,
"text": "\\times"
},
{
"math_id": 11,
"text": "F^m \\times G^n \\leftrightarrow F^m \\exist^m G^n."
},
{
"math_id": 12,
"text": "F^mx_1\\cdots x_mG^nx_1\\cdots x_n \\leftrightarrow (F^m \\times G^n)x_1\\cdots x_mx_1\\cdots x_n."
},
{
"math_id": 13,
"text": "\\operatorname{Inv} F^n \\ \\overset{\\underset{\\mathrm{def}}{}}{=}\\ \\{x_1\\cdots x_n : F^nx_nx_1\\cdots x_{n-1}\\}."
},
{
"math_id": 14,
"text": "\\operatorname{Inv} Fx_1\\cdots x_n \\leftrightarrow Fx_nx_1\\cdots x_{n-1}."
},
{
"math_id": 15,
"text": "\\operatorname{inv} F^n \\ \\overset{\\underset{\\mathrm{def}}{}}{=}\\ \\{x_1\\cdots x_n : F^n x_2x_1\\cdots x_n\\}."
},
{
"math_id": 16,
"text": "\\operatorname{inv} Fx_1\\cdots x_n \\leftrightarrow Fx_2x_1\\cdots x_n."
},
{
"math_id": 17,
"text": "\\ pF^n \\ \\overset{\\underset{\\mathrm{def}}{}}{=}\\ \\{x_1\\cdots x_n : F^n x_1x_3\\cdots x_nx_2\\}."
},
{
"math_id": 18,
"text": " pFx_1\\cdots x_n \\leftrightarrow \\operatorname{Inv} \\operatorname{inv} Fx_1x_3\\cdots x_nx_2."
},
{
"math_id": 19,
"text": "x_1"
},
{
"math_id": 20,
"text": "(\\alpha \\land Fx_1...x_n) \\rightarrow \\beta "
},
{
"math_id": 21,
"text": "(\\alpha \\land \\exist Fx_2...x_n) \\rightarrow \\beta "
},
{
"math_id": 22,
"text": "\\exist I"
},
{
"math_id": 23,
"text": "F^n \\leftrightarrow p^{n-1}F^n"
},
{
"math_id": 24,
"text": "\\begin{cases}F^n \\rightarrow +\\exist F^n\\\\\nF^n \\leftrightarrow \\exist +F^n\\end{cases}"
},
{
"math_id": 25,
"text": "\\begin{cases}+\\lnot F^n \\leftrightarrow \\lnot +F^n\\\\\n\\lnot\\exist F^n \\rightarrow \\exist \\lnot F^n\\\\\np\\lnot F^n \\leftrightarrow \\lnot pF^n\\end{cases}"
},
{
"math_id": 26,
"text": "\\begin{cases}+(F^nG^m) \\leftrightarrow (+F^n+G^m)\\\\\np(F^nG^m) \\leftrightarrow (pF^npG^m)\\end{cases}"
},
{
"math_id": 27,
"text": "IF^n \\rightarrow p^{n-2} \\exist p+F^n"
},
{
"math_id": 28,
"text": "\\lnot \\exist \\lnot \\alpha"
}
] |
https://en.wikipedia.org/wiki?curid=15282871
|
1528346
|
Totally bounded space
|
Generalization of compactness
In topology and related branches of mathematics, total-boundedness is a generalization of compactness for circumstances in which a set is not necessarily closed. A totally bounded set can be covered by finitely many subsets of every fixed “size” (where the meaning of “size” depends on the structure of the ambient space).
The term precompact (or pre-compact) is sometimes used with the same meaning, but precompact is also used to mean relatively compact. These definitions coincide for subsets of a complete metric space, but not in general.
In metric spaces.
A metric space formula_0 is totally bounded if and only if for every real number formula_1, there exists a finite collection of open balls of radius formula_2 whose centers lie in "M" and whose union contains "M". Equivalently, the metric space "M" is totally bounded if and only if for every formula_3, there exists a finite cover such that the radius of each element of the cover is at most formula_2. This is equivalent to the existence of a finite ε-net. A metric space is totally bounded if and only if every sequence admits a Cauchy subsequence; in complete metric spaces, a set is compact if and only if it is closed and totally bounded.
Each totally bounded space is bounded (as the union of finitely many bounded sets is bounded). The reverse is true for subsets of Euclidean space (with the subspace topology), but not in general. For example, an infinite set equipped with the discrete metric is bounded but not totally bounded: every discrete ball of radius formula_4 or less is a singleton, and no finite union of singletons can cover an infinite set.
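To make the finite-cover idea concrete, here is an illustrative sketch (my own, not from the article) that greedily builds a finite ε-net for a finite sample of points in the plane:

```python
import math
import random

# Greedy construction of a finite eps-net: every sample point ends up within
# eps of some chosen center, so balls of radius eps around the centers cover
# the sample.
def epsilon_net(points, eps):
    centers = []
    for pt in points:
        if all(math.dist(pt, c) > eps for c in centers):
            centers.append(pt)
    return centers

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(1000)]  # unit square
net = epsilon_net(pts, 0.25)
# each point is either a center or lay within eps of an earlier center:
assert all(any(math.dist(pt, c) <= 0.25 for c in net) for pt in pts)
print(len(net))  # a finite cover by balls of radius 0.25
```

For the infinite discrete space of the example above this construction would never terminate for ε < 1, since every new point is a new center, which is exactly the failure of total boundedness.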
Uniform (topological) spaces.
A metric appears in the definition of total boundedness only to ensure that each element of the finite cover is of comparable size, and can be weakened to that of a uniform structure. A subset S of a uniform space X is totally bounded if and only if, for any entourage E, there exists a finite cover of S by subsets of X each of whose Cartesian squares is a subset of E. (In other words, E replaces the "size" "ε", and a subset is of size E if its Cartesian square is a subset of E.)
The definition can be extended still further, to any category of spaces with a notion of compactness and Cauchy completion: a space is totally bounded if and only if its (Cauchy) completion is compact.
Examples and elementary properties.
Comparison with compact sets.
In metric spaces, a set is compact if and only if it is complete and totally bounded; without the axiom of choice, only the forward direction (compactness implies completeness and total boundedness) holds. Precompact sets share a number of properties with compact sets.
In topological groups.
Although the notion of total boundedness is closely tied to metric spaces, the greater algebraic structure of topological groups allows one to trade away some separation properties. For example, in metric spaces, a set is compact if and only if it is complete and totally bounded. Under the definition below, the same holds for any topological vector space (not necessarily Hausdorff nor complete).
The general logical form of the definition is: a subset formula_5 of a space formula_6 is totally bounded if and only if, given any size formula_7 there exists a finite cover formula_8 of formula_5 such that each element of formula_8 has size at most formula_9 formula_6 is then totally bounded if and only if it is totally bounded when considered as a subset of itself.
We adopt the convention that, for any neighborhood formula_10 of the identity, a subset formula_11 is called (left) formula_12-small if and only if formula_13
A subset formula_5 of a topological group formula_6 is (left) totally bounded if it satisfies any of the following equivalent conditions:
The term pre-compact usually appears in the context of Hausdorff topological vector spaces.
In that case, the following conditions are also all equivalent to formula_5 being (left) totally bounded:
The definition of right totally bounded is analogous: simply swap the order of the products.
Condition 4 implies any subset of formula_14 is totally bounded (in fact, compact; see above). If formula_6 is not Hausdorff then, for example, formula_15 is a compact complete set that is not closed.
Topological vector spaces.
Any topological vector space is an abelian topological group under addition, so the above conditions apply. Historically, statement 6(a) was the first reformulation of total boundedness for topological vector spaces; it dates to a 1935 paper of John von Neumann.
This definition has the appealing property that, in a locally convex space endowed with the weak topology, the precompact sets are exactly the bounded sets.
For separable Banach spaces, there is a nice characterization of the precompact sets (in the norm topology) in terms of weakly convergent sequences of functionals: if formula_6 is a separable Banach space, then formula_11 is precompact if and only if every weakly convergent sequence of functionals converges uniformly on formula_16
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": " (M,d) "
},
{
"math_id": 1,
"text": "\\varepsilon > 0"
},
{
"math_id": 2,
"text": "\\varepsilon"
},
{
"math_id": 3,
"text": " \\varepsilon >0"
},
{
"math_id": 4,
"text": "\\varepsilon = 1/2"
},
{
"math_id": 5,
"text": "S"
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "E,"
},
{
"math_id": 8,
"text": "\\mathcal{O}"
},
{
"math_id": 9,
"text": "E."
},
{
"math_id": 10,
"text": "U \\subseteq X"
},
{
"math_id": 11,
"text": "S \\subseteq X"
},
{
"math_id": 12,
"text": "U"
},
{
"math_id": 13,
"text": "(- S) + S \\subseteq U."
},
{
"math_id": 14,
"text": "\\operatorname{cl}_X \\{ 0 \\}"
},
{
"math_id": 15,
"text": "\\{ 0 \\}"
},
{
"math_id": 16,
"text": "S."
}
] |
https://en.wikipedia.org/wiki?curid=1528346
|
15284597
|
Hexaquark
|
Hypothetical particles made up of six quarks or antiquarks
In particle physics, hexaquarks, alternatively known as sexaquarks, are a large family of hypothetical particles, each particle consisting of six quarks or antiquarks of any flavours. Six constituent quarks in any of several combinations could yield a colour charge of zero; for example a hexaquark might contain either six quarks, resembling two baryons bound together (a dibaryon), or three quarks and three antiquarks. Once formed, dibaryons are predicted to be fairly stable by the standards of particle physics.
A number of experiments have been suggested to detect dibaryon decays and interactions. In the 1990s, several candidate dibaryon decays were observed but they were not confirmed.
There is a theory that strange particles such as hyperons and dibaryons could form in the interior of a neutron star, changing its mass–radius ratio in ways that might be detectable. Accordingly, measurements of neutron stars could set constraints on possible dibaryon properties. A large fraction of the neutrons in a neutron star could turn into hyperons and merge into dibaryons during the early part of its collapse into a black hole. These dibaryons would very quickly dissolve into quark–gluon plasma during the collapse, or go into some currently unknown state of matter.
D-star hexaquark.
In 2014, a potential dibaryon was detected at the Jülich Research Center at about 2380 MeV. The center claimed that the measurements confirm results from 2011, via a more replicable method. The particle existed for 10^−23 seconds and was named d*(2380). This particle is hypothesized to consist of three up and three down quarks, and has been proposed as a candidate for dark matter.
The study found that production of stable d*(2380) hexaquarks could account for 85% of the Universe's dark matter.
H dibaryon.
In 1977, Robert Jaffe proposed that a possibly stable H dibaryon with the quark composition udsuds could notionally result from the combination of two uds hyperons.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Omega_{ccc}\\Omega_{ccc}"
}
] |
https://en.wikipedia.org/wiki?curid=15284597
|
1528467
|
Copper coulometer
|
The copper coulometer is one application of the copper–copper(II) sulfate electrode. Such a coulometer consists of two identical copper electrodes immersed in a slightly acidic, pH-buffered solution of copper(II) sulfate. Passing current through the cell leads to anodic dissolution of the metal at the anode and simultaneous deposition of copper at the cathode. These reactions have 100% efficiency over a wide range of current density.
Calculation.
The amount of electric charge (quantity of electricity) passed through the cell can easily be determined by measuring the change in mass of either electrode and calculating:
formula_0,
where:
formula_1 is the electric charge passed through the cell
formula_2 is the mass change of either electrode
formula_3 is the charge number of the copper ion (equal to 2)
formula_4 is the Faraday constant
formula_5 is the molar mass of copper
Although this apparatus is interesting from a theoretical and historical point of view, present-day electronic measurement of time and electric current yields the number of coulombs passed, as the product of current and time, far more easily, with greater precision, and in a shorter period of time than is possible by weighing the electrodes.
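A minimal sketch of the calculation (my own; constants rounded to standard values):

```python
# Charge inferred from the electrode mass change: Q = z * dm * F / M
F_CONST = 96485.332   # Faraday constant, C/mol
M_CU = 63.546         # molar mass of copper, g/mol
Z_CU = 2              # charge number of the Cu(II) ion

def charge_from_mass(delta_m_g):
    """Charge in coulombs for a copper mass change of delta_m_g grams."""
    return Z_CU * delta_m_g * F_CONST / M_CU

# Depositing 1.000 g of copper corresponds to about 3.04 kC:
print(round(charge_from_mass(1.000), 1))  # ~ 3036.7
```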
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Q = \\frac{z_{Cu}\\Delta m F}{M_{Cu}}"
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": "\\Delta m"
},
{
"math_id": 3,
"text": "z_{Cu}"
},
{
"math_id": 4,
"text": "F"
},
{
"math_id": 5,
"text": "M_{Cu}"
}
] |
https://en.wikipedia.org/wiki?curid=1528467
|
15286364
|
Stemloc
|
Open source bioinformatics software developed by Ian Holmes
In bioinformatics, Stemloc is an open source software for multiple RNA sequence alignment and RNA structure prediction based on probabilistic models of RNA structure known as pair stochastic context-free grammars (also probabilistic context-free grammars). Stemloc attempts to simultaneously predict and align the structure of RNA sequences with an improved time and space cost compared to previous methods with the same aim. The resulting software implements constrained versions of the Sankoff algorithm by introducing both fold and alignment constraints, which reduces processor and memory usage and allows larger RNA sequences to be analyzed on commodity hardware. Stemloc was written in 2004 by Ian Holmes.
Stemloc can be downloaded as part of the DART software package. It accepts input files in either FASTA or Stockholm format.
Background.
A previously developed algorithm by David Sankoff in 1985 uses dynamic programming to simultaneously align and predict multiple RNA structures. The Sankoff algorithm takes time and space, in big O notation, of formula_0 and formula_1 respectively, for formula_2 sequences of length formula_3. This is prohibitively expensive, which motivated the creation of better RNA analysis tools such as Stemloc. The initial goal of Stemloc was to reduce the time and space cost of simultaneous alignment and structure prediction of two RNA sequences by using a stochastic context-free grammar (SCFG) scoring scheme and by implementing constrained versions of the Sankoff algorithm.
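To see why constraining matters, here is a rough back-of-the-envelope comparison of operation counts (my own illustration, not from the Stemloc paper) for a pair of sequences:

```python
# Unconstrained Sankoff on N = 2 sequences of length L costs on the order of
# L^6 time, whereas pre-folding each sequence (O(L^3)) and pre-aligning the
# pair (O(L^2)) is vastly cheaper.
L = 200
sankoff_ops = L ** 6                # simultaneous fold-and-align
envelope_ops = 2 * L ** 3 + L ** 2  # pre-fold both sequences + pre-align once
print(sankoff_ops // envelope_ops)  # a ratio of roughly four million
```

The constants hidden by big O notation differ between the two approaches, so this is only an order-of-magnitude illustration of why envelope pre-computation pays off.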
Stemloc uses "alignment envelopes" and "fold envelopes" to simultaneously constrain both the alignment and the secondary structures of the sequences being compared. "Fold envelopes" can be used to "prune" the search over secondary structures and determine the subsequences of two RNA sequences that can be considered in the algorithm. For example, including or excluding specific nitrogen-bonded base pairings. "Alignment envelopes" can be used to "prune" the search over the alignments and determine possible "cutpoints" in the alignment of the two sequences. For example, including or excluding specific residue-level homologies. Fold envelopes are pre-calculated for each sequence individually, and alignment envelopes are pre-calculated by comparing the two sequences while ignoring secondary structures. Both global and local alignment is supported.
Input.
Input in Stemloc can either be in FASTA or Stockholm format (see above for descriptions of each). Sample input shown below:
stemloc --local dynalign.trna
The "--local" command analyzes the file in local alignment mode. Using "--global" will use global alignment mode.
Output.
This output is in Stockholm format. It shows the sequence names, the co-ordinates of the matches, the alignment, the consensus primary sequence, the secondary structure of each sequence, the consensus secondary structure, and the log-odds score of the alignment in bits. The "//" line is used to separate alignments or indicate end of file. Sample output shown below:
RD0260/26-67 UACUCCCCUGUCACGGGAGAGAAUGUGGGUUCAAAUCCCAUC
RD0500/26-66 UACGACCCUGUCACGGUCGUGA-CGCGGGUUCgAAUCCCGCC
Process.
Stemloc relies heavily on stochastic context-free grammars, which serve as a scoring scheme for the algorithm. Because Sankoff's algorithm considers all possible folds and all possible alignments, it is accurate and thorough, but it takes a considerable amount of time to produce any results. To improve on this, Stemloc allows the user to constrain the total number of folds and alignments to be considered. More specifically, each sequence can be pre-folded individually in formula_4 time and pre-aligned, ignoring secondary structure, in formula_5 time. For example, using the "-fast" option below considers only the 100 best RNA structures rather than analyzing all possible folds. Using the "-log DOTPLOT" option outputs a visual representation of the fold and alignment envelopes.
stemloc nanos-tiny.rna -fast -log DOTPLOT
Constraining the envelopes.
The main idea of Stemloc is the ability to set a threshold on the number of folds and alignments that are sampled to create the envelopes. This can be done with the options "-nf" and "-na", which set the number of folds and alignments to be considered. Using −1 removes the limit on the number of folds or alignments sampled; thus passing −1 for both parameters runs the full Sankoff algorithm on the input dataset.
stemloc nanos-tiny.rna -nf -1 -na -1
Parameter training.
Another feature of Stemloc is its ability to parameterize probabilistic models such as stochastic context-free grammars from data. Stemloc utilizes the inside–outside algorithm on stochastic context-free grammars to maximize the likelihood of a training set. This is useful because the default parameters for Stemloc were trained on a selection of pairwise alignments of between 30% and 40% sequence identity from Rfam version 5.0. These default parameters, however, are not always effective, which is why the ability to train parameters on one's own data can be helpful.
In practice.
Stemloc has since been used in a variety of research publications on RNA structure analysis, most notably in the study of optimal multiple sequence alignment.
|
[
{
"math_id": 0,
"text": "O(L^{3N})"
},
{
"math_id": 1,
"text": "O(L^{2N})"
},
{
"math_id": 2,
"text": "N"
},
{
"math_id": 3,
"text": "L"
},
{
"math_id": 4,
"text": "O(L^3)"
},
{
"math_id": 5,
"text": "O(L^2)"
}
] |
https://en.wikipedia.org/wiki?curid=15286364
|
15287
|
Series (mathematics)
|
Infinite sum
In mathematics, a series is, roughly speaking, an addition of infinitely many quantities, one after the other. The study of series is a major part of calculus and its generalization, mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures (such as in combinatorics) through generating functions. The mathematical properties of infinite series make them widely applicable in other quantitative disciplines such as physics, computer science, statistics and finance.
For a long time, the idea that a potentially infinite summation could produce a finite result was considered paradoxical. This paradox was resolved using the concept of a limit during the 17th century. Zeno's paradox of Achilles and the tortoise illustrates this counterintuitive property of infinite sums: Achilles runs after a tortoise, but when he reaches the position of the tortoise at the beginning of the race, the tortoise has reached a second position; when he reaches this second position, the tortoise is at a third position, and so on. Zeno concluded that Achilles could "never" reach the tortoise, and thus that movement does not exist. Zeno divided the race into infinitely many sub-races, each requiring a finite amount of time, so that the total time for Achilles to catch the tortoise is given by a series. The resolution of the paradox is that, although the series has an infinite number of terms, it has a finite sum, which gives the time necessary for Achilles to catch up with the tortoise.
In modern terminology, any (ordered) infinite sequence formula_0 of terms (that is, numbers, functions, or anything that can be added) defines a series, which is the operation of adding the "a""i" one after the other. To emphasize that there are an infinite number of terms, a series may be called an infinite series. Such a series is represented (or denoted) by an expression like
formula_1
or, using the summation sign,
formula_2
The infinite sequence of additions implied by a series cannot be effectively carried on (at least in a finite amount of time). However, if the set to which the terms and their finite sums belong has a notion of limit, it is sometimes possible to assign a value to a series, called the sum of the series. This value is the limit as "n" tends to infinity (if the limit exists) of the finite sums of the "n" first terms of the series, which are called the "n"th partial sums of the series. That is,
formula_3
When this limit exists, one says that the series is convergent or summable, or that the sequence formula_0 is summable. In this case, the limit is called the sum of the series. Otherwise, the series is said to be divergent.
The notation formula_4 denotes both the series—that is the implicit process of adding the terms one after the other indefinitely—and, if the series is convergent, the sum of the series—the result of the process. This is a generalization of the similar convention of denoting by formula_5 both the addition—the process of adding—and its result—the "sum" of a and b.
Commonly, the terms of a series come from a ring, often the field formula_6 of the real numbers or the field formula_7 of the complex numbers. In this case, the set of all series is itself a ring (and even an associative algebra), in which the addition consists of adding the series term by term, and the multiplication is the Cauchy product.
Basic properties.
An infinite series or simply a series is an infinite sum, represented by an infinite expression of the form
formula_8
where formula_9 is any ordered sequence of terms, such as numbers, functions, or anything else that can be added (for instance elements of any abelian group in abstract algebra). This is an expression that is obtained from the list of terms formula_10 by laying them side by side, and conjoining them with the symbol "+". A series may also be represented by using summation notation, such as
formula_11
If an abelian group "A" of terms has a concept of limit (e.g., if it is a metric space), then some series, the convergent series, can be interpreted as having a value in "A", called the "sum of the series". This includes the common cases from calculus, in which the group is the field of real numbers or the field of complex numbers. Given a series formula_12, its "k"th partial sum is
formula_13
By definition, the series formula_14 "converges" to the limit "L" (or simply "sums" to "L"), if the sequence of its partial sums has a limit "L". In this case, one usually writes
formula_15
A series is said to be "convergent" if it converges to some limit, or "divergent" when it does not. The value of this limit, if it exists, is then the value of the series.
Convergent series.
A series Σ"a""n" is said to converge or to "be convergent" when the sequence ("s""k") of partial sums has a finite limit. If the limit of "s""k" is infinite or does not exist, the series is said to diverge. When the limit of partial sums exists, it is called the value (or sum) of the series
formula_16
An easy way that an infinite series can converge is if all the "a""n" are zero for "n" sufficiently large. Such a series can be identified with a finite sum, so it is only infinite in a trivial sense.
Working out the properties of the series that converge, even if infinitely many terms are nonzero, is the essence of the study of series. Consider the example
formula_17
It is possible to "visualize" its convergence on the real number line: we can imagine a line of length 2, with successive segments marked off of lengths 1, 1/2, 1/4, etc. There is always room to mark the next segment, because the amount of line remaining is always the same as the last segment marked: When we have marked off 1/2, we still have a piece of length 1/2 unmarked, so we can certainly mark the next 1/4. This argument does not prove that the sum is "equal" to 2 (although it is), but it does prove that it is "at most" 2. In other words, the series has an upper bound. Given that the series converges, proving that it is equal to 2 requires only elementary algebra. If the series is denoted "S", it can be seen that
formula_18
Therefore,
formula_19
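The bounding argument above can also be checked numerically. The following sketch (plain Python, offered only as an illustration) confirms that the remainder after the first "k"+1 terms is exactly 1/2"k", matching the geometric picture of marked segments:

```python
# Sketch: the partial sums s_k = 1 + 1/2 + ... + 1/2**k approach 2 from
# below, and the remaining length 2 - s_k equals the last marked segment.

def partial_sum(k: int) -> float:
    """Return s_k = sum of 1/2**n for n = 0..k."""
    return sum(1 / 2**n for n in range(k + 1))

for k in (1, 5, 10, 20):
    s = partial_sum(k)
    # The remainder after s_k is exactly 1/2**k (exact in floating point
    # for these k, since all quantities are dyadic).
    assert abs((2 - s) - 1 / 2**k) < 1e-12
```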
The idiom can be extended to other, equivalent notions of series. For instance, a recurring decimal, as in
formula_20
encodes the series
formula_21
Since these series always converge to real numbers (because of what is called the completeness property of the real numbers), to talk about the series in this way is the same as to talk about the numbers for which they stand. In particular, the decimal expansion 0.111... can be identified with 1/9. This leads to an argument that 9 × 0.111... = 0.999... = 1, which only relies on the fact that the limit laws for series preserve the arithmetic operations; for more detail on this argument, see 0.999...
Examples of numerical series.
A "geometric series" is one where each successive term is produced by multiplying the previous term by a constant number (called the common ratio). For example:
formula_22
In general, the geometric series
formula_23
converges if and only if formula_24, in which case it converges to formula_25.
The "harmonic series" is the series
formula_26
The harmonic series is divergent.
An "alternating series" is a series where terms alternate signs. Examples:
formula_27
(alternating harmonic series) and
formula_28
A "telescoping series"
formula_29
converges if the sequence "b""n" converges to a limit "L" as "n" goes to infinity. The value of the series is then "b"1 − "L".
An "arithmetico-geometric series" is a generalization of the geometric series, which has coefficients of the common ratio equal to the terms in an arithmetic sequence. Example:
formula_30
The "p"-series
formula_31
converges for "p" > 1 and diverges for "p" ≤ 1, which can be shown with the integral criterion described below in convergence tests. As a function of "p", the sum of this series is Riemann's zeta function.
Hypergeometric series:
formula_32
and their generalizations (such as basic hypergeometric series and elliptic hypergeometric series) frequently appear in integrable systems and mathematical physics.
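Some of these series can be examined numerically. The following sketch (plain Python, an illustration only) checks the geometric series formula against its closed form and shows the harmonic partial sums growing without bound:

```python
# Sketch: the geometric series sums to 1/(1 - z) when |z| < 1, while the
# harmonic series has partial sums that grow (roughly like log k).

def geometric_partial(z: complex, k: int) -> complex:
    """Partial sum of z**n for n = 0..k."""
    return sum(z**n for n in range(k + 1))

def harmonic_partial(k: int) -> float:
    """Partial sum of 1/n for n = 1..k."""
    return sum(1 / n for n in range(1, k + 1))

z = 0.3 + 0.4j                       # |z| = 0.5 < 1, so the series converges
assert abs(geometric_partial(z, 200) - 1 / (1 - z)) < 1e-12

# Harmonic partial sums keep growing: they track ln(k) plus a constant.
assert harmonic_partial(10**5) > 12
```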
Pi.
formula_38
formula_39
Natural logarithm of 2.
formula_40
formula_41
formula_42
formula_43
formula_44
formula_45
formula_46
Natural logarithm base "e".
formula_47
formula_48
Calculus and partial summation as an operation on sequences.
Partial summation takes as input a sequence, ("a""n"), and gives as output another sequence, ("S""N"). It is thus a unary operation on sequences. Further, this function is linear, and thus is a linear operator on the vector space of sequences, denoted Σ. The inverse operator is the finite difference operator, denoted Δ. These behave as discrete analogues of integration and differentiation, only for series (functions of a natural number) instead of functions of a real variable. For example, the sequence (1, 1, 1, ...) has series (1, 2, 3, 4, ...) as its partial summation, which is analogous to the fact that formula_49
In computer science, it is known as prefix sum.
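The inverse relationship between partial summation and the finite difference operator can be demonstrated directly; the following sketch uses `itertools.accumulate` for the prefix sum:

```python
# Sketch: partial summation (Sigma, the prefix sum) and finite differencing
# (Delta) are inverse operations on sequences, the discrete analogues of
# integration and differentiation described above.
from itertools import accumulate

def prefix_sum(a):
    """Sigma: (a_1, a_2, a_3, ...) -> (a_1, a_1+a_2, a_1+a_2+a_3, ...)."""
    return list(accumulate(a))

def finite_difference(s):
    """Delta: recover the original terms from the sequence of partial sums."""
    return [s[0]] + [s[i] - s[i - 1] for i in range(1, len(s))]

a = [1, 1, 1, 1]
assert prefix_sum(a) == [1, 2, 3, 4]           # the example from the text
assert finite_difference(prefix_sum(a)) == a   # Delta after Sigma is the identity
```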
Properties of series.
Series are classified not only by whether they converge or diverge, but also by the properties of the terms "a""n" (absolute or conditional convergence); the type of convergence of the series (pointwise, uniform); the class of the term "a""n" (whether it is a real number, an arithmetic progression, a trigonometric function); etc.
Non-negative terms.
When "an" is a non-negative real number for every "n", the sequence "SN" of partial sums is non-decreasing. It follows that a series Σ"an" with non-negative terms converges if and only if the sequence "SN" of partial sums is bounded.
For example, the series
formula_50
is convergent, because the inequality
formula_51
and a telescopic sum argument implies that the partial sums are bounded by 2. The exact value of the original series is the Basel problem.
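The boundedness and monotonicity of these partial sums can be verified numerically; this sketch also compares against the known value π²/6 (an illustration, not a proof):

```python
# Sketch: the partial sums of sum 1/n**2 are non-decreasing and, by the
# telescoping bound 1/n**2 <= 1/(n-1) - 1/n, bounded above by 2; hence the
# series converges (its value, pi**2/6, is the Basel problem).
import math

partial = 0.0
previous = -1.0
for n in range(1, 100_001):
    partial += 1 / n**2
    assert partial >= previous   # partial sums are non-decreasing
    assert partial < 2           # bounded above, per the telescoping argument
    previous = partial

# Tail of the series is about 1/N, so N = 10**5 gives roughly 1e-5 accuracy.
assert abs(partial - math.pi**2 / 6) < 1e-4
```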
Grouping.
Grouping the terms of a series (inserting parentheses) does not reorder them, so the Riemann series theorem does not apply. The partial sums of a grouped series form a subsequence of the partial sums of the original series, so if the original series converges, the grouped series converges to the same value. For a divergent series this can fail: for example, 1 − 1 + 1 − 1 + ... grouped in pairs gives the series 0 + 0 + 0 + ..., which converges. On the other hand, divergence of the grouped series implies that the original series diverges, which is sometimes useful, as in Oresme's proof that the harmonic series diverges.
Absolute convergence.
A series
formula_52
"converges absolutely" if the series of absolute values
formula_53
converges. This is sufficient to guarantee not only that the original series converges to a limit, but also that any reordering of it converges to the same limit.
Conditional convergence.
A series of real or complex numbers is said to be conditionally convergent (or semi-convergent) if it is convergent but not absolutely convergent. A famous example is the alternating series
formula_54
which is convergent (and its sum is equal to formula_55), but the series formed by taking the absolute value of each term is the divergent harmonic series. The Riemann series theorem says that any conditionally convergent series can be reordered to make a divergent series, and moreover, if the formula_56 are real and formula_57 is any real number, that one can find a reordering so that the reordered series converges with sum equal to formula_57.
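The rearrangement in the Riemann series theorem can be carried out by a simple greedy procedure; the following sketch reorders the terms of the alternating harmonic series so the partial sums chase an arbitrary target (the target values here are arbitrary choices for illustration):

```python
# Sketch of the Riemann series theorem for the alternating harmonic series:
# take positive terms 1/1, 1/3, 1/5, ... while below the target, then
# negative terms -1/2, -1/4, ... while above it. Since the unused terms
# shrink to zero, the partial sums converge to the target.

def rearranged_partial_sum(target: float, steps: int) -> float:
    pos, neg = 1, 2      # next unused positive term 1/pos, negative -1/neg
    total = 0.0
    for _ in range(steps):
        if total <= target:
            total += 1 / pos   # climb with the next positive term
            pos += 2
        else:
            total -= 1 / neg   # descend with the next negative term
            neg += 2
    return total

# The same terms, reordered, now sum (approximately) to 2.5 instead of ln 2.
assert abs(rearranged_partial_sum(2.5, 200_000) - 2.5) < 1e-3
```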
Abel's test is an important tool for handling semi-convergent series. If a series has the form
formula_58
where the partial sums formula_59 are bounded, formula_60 has bounded variation, and formula_61 exists:
formula_62
then the series formula_63 is convergent. This applies to the point-wise convergence of many trigonometric series, as in
formula_64
with formula_65. Abel's method consists in writing formula_66, and in performing a transformation similar to integration by parts (called summation by parts), that relates the given series formula_63 to the absolutely convergent series
formula_67
Evaluation of truncation errors.
The evaluation of truncation errors is an important procedure in numerical analysis (especially validated numerics and computer-assisted proof).
Alternating series.
When conditions of the alternating series test are satisfied by formula_68, there is an exact error evaluation. Set formula_69 to be the partial sum formula_70 of the given alternating series formula_57. Then the next inequality holds:
formula_71
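This error bound can be checked on a concrete alternating series; the sketch below takes "u""m" = 1/("m"+1) (the alternating harmonic series, an illustrative choice), whose sum is ln 2:

```python
# Sketch: verify |S - s_n| <= u_{n+1} for the alternating harmonic series
# S = sum of (-1)**m * u_m with u_m = 1/(m+1), so S = ln 2.
import math

def alternating_harmonic_partial(n: int) -> float:
    """s_n = sum of (-1)**m / (m+1) for m = 0..n."""
    return sum((-1) ** m / (m + 1) for m in range(n + 1))

S = math.log(2)
for n in (1, 10, 100, 1000):
    u_next = 1 / (n + 2)   # u_{n+1} in the indexing above
    assert abs(S - alternating_harmonic_partial(n)) <= u_next
```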
Taylor series.
Taylor's theorem is a statement that includes the evaluation of the error term when the Taylor series is truncated.
Hypergeometric series.
By using the ratio, we can obtain the evaluation of the error term when the hypergeometric series is truncated.
Matrix exponential.
For the matrix exponential:
formula_72
the following error evaluation holds (scaling and squaring method):
formula_73
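The truncation T"r","s"("X") can be implemented directly; the sketch below uses plain Python for small dense matrices and, for simplicity, assumes "s" is a power of two so that raising to the "s"-th power is repeated squaring. It is checked against the known exponential of a rotation generator:

```python
# Sketch of the scaling-and-squaring approximation T_{r,s}(X): truncate the
# Taylor series of exp(X/s) at order r, then raise the result to the s-th
# power. Checked on X = [[0, -t], [t, 0]], whose exponential is the rotation
# matrix [[cos t, -sin t], [sin t, cos t]].
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_scale(A, c):
    return [[c * x for x in row] for row in A]

def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def identity(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def expm_taylor(X, r=12, s=8):
    """T_{r,s}(X); this sketch assumes s is a power of two."""
    assert s & (s - 1) == 0, "s must be a power of two here"
    n = len(X)
    Y = mat_scale(X, 1 / s)
    term, total = identity(n), identity(n)
    for j in range(1, r + 1):       # total = sum of (X/s)**j / j!, j = 0..r
        term = mat_scale(mat_mul(term, Y), 1 / j)
        total = mat_add(total, term)
    while s > 1:                    # square back up: (exp(X/s))**s
        total = mat_mul(total, total)
        s //= 2
    return total

t = 1.3
E = expm_taylor([[0.0, -t], [t, 0.0]])
assert abs(E[0][0] - math.cos(t)) < 1e-10
assert abs(E[1][0] - math.sin(t)) < 1e-10
```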
Convergence tests.
There exist many tests that can be used to determine whether particular series converge or diverge.
Series of functions.
A series of real- or complex-valued functions
formula_99
converges pointwise on a set "E", if the series converges for each "x" in "E" as an ordinary series of real or complex numbers. Equivalently, the partial sums
formula_100
converge to "ƒ"("x") as "N" → ∞ for each "x" ∈ "E".
A stronger notion of convergence of a series of functions is the uniform convergence. A series converges uniformly if it converges pointwise to the function "ƒ"("x"), and the error in approximating the limit by the "N"th partial sum,
formula_101
can be made as small as desired "independently" of "x" by choosing a sufficiently large "N".
Uniform convergence is desirable for a series because many properties of the terms of the series are then retained by the limit. For example, if a series of continuous functions converges uniformly, then the limit function is also continuous. Similarly, if the "ƒ""n" are integrable on a closed and bounded interval "I" and converge uniformly, then the series is also integrable on "I" and can be integrated term-by-term. Tests for uniform convergence include the Weierstrass' M-test, Abel's uniform convergence test, Dini's test, and the Cauchy criterion.
More sophisticated types of convergence of a series of functions can also be defined. In measure theory, for instance, a series of functions converges almost everywhere if it converges pointwise except on a certain set of measure zero. Other modes of convergence depend on a different metric space structure on the space of functions under consideration. For instance, a series of functions converges in mean on a set "E" to a limit function "ƒ" provided
formula_102
as "N" → ∞.
Power series.
A power series is a series of the form
formula_103
The Taylor series at a point "c" of a function is a power series that, in many cases, converges to the function in a neighborhood of "c". For example, the series
formula_104
is the Taylor series of formula_105 at the origin and converges to it for every "x".
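Taking the example above to be the exponential series Σ "x""n"/"n"! (as in the standard presentation), its convergence for every "x" can be observed numerically:

```python
# Sketch: partial sums of the Taylor series of exp at the origin converge to
# exp(x) for every x; once the terms become negligible, the partial sums
# agree with math.exp to near machine precision.
import math

def exp_taylor_partial(x: float, N: int) -> float:
    """Sum of x**n / n! for n = 0..N, accumulated term by term."""
    term, total = 1.0, 1.0
    for n in range(1, N + 1):
        term *= x / n        # next term: multiply by x and divide by n
        total += term
    return total

for x in (-3.0, 0.5, 7.0):
    assert abs(exp_taylor_partial(x, 60) - math.exp(x)) < 1e-9 * math.exp(abs(x))
```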
Unless it converges only at "x"="c", such a series converges on a certain open disc of convergence centered at the point "c" in the complex plane, and may also converge at some of the points of the boundary of the disc. The radius of this disc is known as the radius of convergence, and can in principle be determined from the asymptotics of the coefficients "a""n". The convergence is uniform on closed and bounded (that is, compact) subsets of the interior of the disc of convergence.
Historically, mathematicians such as Leonhard Euler operated liberally with infinite series, even if they were not convergent. When calculus was put on a sound and correct foundation in the nineteenth century, rigorous proofs of the convergence of series were always required.
Formal power series.
While many uses of power series refer to their sums, it is also possible to treat power series as "formal sums", meaning that no addition operations are actually performed, and the symbol "+" is an abstract symbol of conjunction which is not necessarily interpreted as corresponding to addition. In this setting, the sequence of coefficients itself is of interest, rather than the convergence of the series. Formal power series are used in combinatorics to describe and study sequences that are otherwise difficult to handle, for example, using the method of generating functions. The Hilbert–Poincaré series is a formal power series used to study graded algebras.
Even if the limit of the power series is not considered, if the terms support appropriate structure then it is possible to define operations such as addition, multiplication, derivative, antiderivative for power series "formally", treating the symbol "+" as if it corresponded to addition. In the most common setting, the terms come from a commutative ring, so that the formal power series can be added term-by-term and multiplied via the Cauchy product. In this case the algebra of formal power series is the total algebra of the monoid of natural numbers over the underlying term ring. If the underlying term ring is a differential algebra, then the algebra of formal power series is also a differential algebra, with differentiation performed term-by-term.
Laurent series.
Laurent series generalize power series by admitting terms into the series with negative as well as positive exponents. A Laurent series is thus any series of the form
formula_106
If such a series converges, then in general it does so in an annulus rather than a disc, and possibly some boundary points. The series converges uniformly on compact subsets of the interior of the annulus of convergence.
Dirichlet series.
A Dirichlet series is one of the form
formula_107
where "s" is a complex number. For example, if all "a""n" are equal to 1, then the Dirichlet series is the Riemann zeta function
formula_108
Like the zeta function, Dirichlet series in general play an important role in analytic number theory. Generally a Dirichlet series converges if the real part of "s" is greater than a number called the abscissa of convergence. In many cases, a Dirichlet series can be extended to an analytic function outside the domain of convergence by analytic continuation. For example, the Dirichlet series for the zeta function converges absolutely when Re("s") > 1, but the zeta function can be extended to a holomorphic function defined on formula_109 with a simple pole at 1.
This series can be directly generalized to general Dirichlet series.
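In the half-plane of absolute convergence the Dirichlet series for the zeta function can be evaluated directly by partial sums; the sketch below compares against the classical closed forms ζ(2) = π²/6 and ζ(4) = π⁴/90:

```python
# Sketch: partial sums of the Dirichlet series with all a_n = 1 (the zeta
# function), evaluated where Re(s) > 1 so the series converges absolutely.
import math

def zeta_partial(s, N: int) -> complex:
    """Sum of n**(-s) for n = 1..N; s may be complex."""
    return sum(n ** (-s) for n in range(1, N + 1))

# The tail is roughly N**(1-s)/(s-1), so N = 10**4 gives ~1e-4 at s = 2.
assert abs(zeta_partial(2, 10_000) - math.pi**2 / 6) < 1e-3
assert abs(zeta_partial(4, 100) - math.pi**4 / 90) < 1e-5

# A genuinely complex s inside the half-plane of absolute convergence:
assert abs(zeta_partial(2 + 1j, 10_000)) < 2.0
```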
Trigonometric series.
A series of functions in which the terms are trigonometric functions is called a trigonometric series:
formula_110
The most important example of a trigonometric series is the Fourier series of a function.
History of the theory of infinite series.
Development of infinite series.
Greek mathematician Archimedes produced the first known summation of an infinite series with a
method that is still used in the area of calculus today. He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of π.
Mathematicians from the Kerala school were studying infinite series c. 1350 CE.
In the 17th century, James Gregory worked in the new decimal system on infinite series and published several Maclaurin series. In 1715, a general method for constructing the Taylor series for all functions for which they exist was provided by Brook Taylor. Leonhard Euler in the 18th century, developed the theory of hypergeometric series and q-series.
Convergence criteria.
The investigation of the validity of infinite series is considered to begin with Gauss in the 19th century. Euler had already considered the hypergeometric series
formula_111
on which Gauss published a memoir in 1812. It established simpler criteria of convergence, and the questions of remainders and the range of convergence.
Cauchy (1821) insisted on strict tests of convergence; he showed that if two series are convergent their product is not necessarily so, and with him begins the discovery of effective criteria. The terms "convergence" and "divergence" had been introduced long before by Gregory (1668). Leonhard Euler and Gauss had given various criteria, and Colin Maclaurin had anticipated some of Cauchy's discoveries. Cauchy advanced the theory of power series by his expansion of a complex function in such a form.
Abel (1826) in his memoir on the binomial series
formula_112
corrected certain of Cauchy's conclusions, and gave a completely scientific summation of the series for complex values of formula_113 and formula_114. He showed the necessity of considering the subject of continuity in questions of convergence.
Cauchy's methods led to special rather than general criteria, and
the same may be said of Raabe (1832), who made the first elaborate investigation of the subject, of De Morgan (from 1842), whose
logarithmic test DuBois-Reymond (1873) and Pringsheim (1889) have
shown to fail within a certain region; of Bertrand (1842), Bonnet
(1843), Malmsten (1846, 1847, the latter without integration); Stokes (1847), Paucker (1852), Chebyshev (1852), and Arndt
(1853).
General criteria began with Kummer (1835), and have been studied by Eisenstein (1847), Weierstrass in his various
contributions to the theory of functions, Dini (1867),
DuBois-Reymond (1873), and many others. Pringsheim's memoirs (1889) present the most complete general theory.
Uniform convergence.
The theory of uniform convergence was treated by Cauchy (1821), his limitations being pointed out by Abel, but the first to attack it
successfully were Seidel and Stokes (1847–48). Cauchy took up the
problem again (1853), acknowledging Abel's criticism, and reaching
the same conclusions which Stokes had already found. Thomae used the
doctrine (1866), but there was great delay in recognizing the importance of distinguishing between uniform and non-uniform
convergence, in spite of the demands of the theory of functions.
Semi-convergence.
A series is said to be semi-convergent (or conditionally convergent) if it is convergent but not absolutely convergent.
Semi-convergent series were studied by Poisson (1823), who also gave a general form for the remainder of the Maclaurin formula. The most important solution of the problem is due, however, to Jacobi (1834), who attacked the question of the remainder from a different standpoint and reached a different formula. This expression was also worked out, and another one given, by Malmsten (1847). Schlömilch ("Zeitschrift", Vol.I, p. 192, 1856) also improved Jacobi's remainder, and showed the relation between the remainder and Bernoulli's function
formula_115
Genocchi (1852) has further contributed to the theory.
Among the early writers was Wronski, whose "loi suprême" (1815) was hardly recognized until Cayley (1873) brought it into
prominence.
Fourier series.
Fourier series were being investigated
as the result of physical considerations at the same time that
Gauss, Abel, and Cauchy were working out the theory of infinite
series. Series for the expansion of sines and cosines, of multiple
arcs in powers of the sine and cosine of the arc had been treated by
Jacob Bernoulli (1702) and his brother Johann Bernoulli (1701) and still
earlier by Vieta. Euler and Lagrange simplified the subject,
as did Poinsot, Schröter, Glaisher, and Kummer.
Fourier (1807) set for himself a different problem, to
expand a given function of "x" in terms of the sines or cosines of
multiples of "x", a problem which he embodied in his "Théorie analytique de la chaleur" (1822). Euler had already given the formulas for determining the coefficients in the series;
Fourier was the first to assert and attempt to prove the general
theorem. Poisson (1820–23) also attacked the problem from a
different standpoint. Fourier did not, however, settle the question
of convergence of his series, a matter left for Cauchy (1826) to
attempt and for Dirichlet (1829) to handle in a thoroughly
scientific manner (see convergence of Fourier series). Dirichlet's treatment ("Crelle", 1829), of trigonometric series was the subject of criticism and improvement by
Riemann (1854), Heine, Lipschitz, Schläfli, and
du Bois-Reymond. Among other prominent contributors to the theory of
trigonometric and Fourier series were Dini, Hermite, Halphen,
Krause, Byerly and Appell.
Generalizations.
Asymptotic series.
Asymptotic series, otherwise asymptotic expansions, are infinite series whose partial sums become good approximations in the limit of some point of the domain. In general they do not converge, but they are useful as sequences of approximations, each of which provides a value close to the desired answer for a finite number of terms. The difference is that an asymptotic series cannot be made to produce an answer as exact as desired, the way that convergent series can. In fact, after a certain number of terms, a typical asymptotic series reaches its best approximation; if more terms are included, most such series will produce worse answers.
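This "best truncation" behavior can be illustrated with terms of the form "n"!/"x""n", which appear, for example, in the divergent asymptotic expansion of the exponential integral (the sketch only looks at the term sizes, not at any particular expanded function):

```python
# Sketch: for terms n!/x**n the sizes first shrink, reach a minimum near
# n ~ x, and then grow without bound. Truncating near the smallest term
# gives the best attainable accuracy of a typical asymptotic series;
# adding further terms makes the approximation worse.
import math

def term(n: int, x: float) -> float:
    return math.factorial(n) / x**n

x = 10.0
sizes = [term(n, x) for n in range(25)]
best = min(range(25), key=lambda n: sizes[n])
assert 8 <= best <= 11                 # smallest term occurs near n ~ x = 10
assert sizes[24] > 100 * sizes[best]   # later terms blow up again
```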
Divergent series.
Under many circumstances, it is desirable to assign a limit to a series which fails to converge in the usual sense. A summability method is such an assignment of a limit to a subset of the set of divergent series which properly extends the classical notion of convergence. Summability methods include Cesàro summation, ("C","k") summation, Abel summation, and Borel summation, in increasing order of generality (and hence applicable to increasingly divergent series).
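Cesàro summation, the least general of these methods, simply averages the partial sums; a sketch applying it to Grandi's divergent series 1 − 1 + 1 − 1 + ...:

```python
# Sketch: Cesaro summation assigns the value 1/2 to Grandi's series
# 1 - 1 + 1 - 1 + ... by taking the limit of the running averages of its
# partial sums (which alternate 1, 0, 1, 0, ...).
from itertools import accumulate

def cesaro_means(terms):
    """Running averages of the partial sums of `terms`."""
    partials = list(accumulate(terms))
    return [sum(partials[: k + 1]) / (k + 1) for k in range(len(partials))]

grandi = [(-1) ** n for n in range(1000)]   # 1, -1, 1, -1, ...
means = cesaro_means(grandi)
assert abs(means[-1] - 0.5) < 1e-2          # the Cesaro sum is 1/2
```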
A variety of general results concerning possible summability methods are known. The Silverman–Toeplitz theorem characterizes "matrix summability methods", which are methods for summing a divergent series by applying an infinite matrix to the vector of coefficients. The most general method for summing a divergent series is non-constructive, and concerns Banach limits.
Summations over arbitrary index sets.
Definitions may be given for sums over an arbitrary index set formula_116 There are two main differences with the usual notion of series: first, there is no specific order given on the set formula_117; second, this set formula_117 may be uncountable. The notion of convergence needs to be strengthened, because the concept of conditional convergence depends on the ordering of the index set.
If formula_118 is a function from an index set formula_117 to a set formula_119 then the "series" associated to formula_120 is the formal sum of the elements formula_121 over the index elements formula_122 denoted by the
formula_123
When the index set is the natural numbers formula_124 the function formula_125 is a sequence denoted by formula_126 A series indexed on the natural numbers is an ordered formal sum and so we rewrite formula_127 as formula_128 in order to emphasize the ordering induced by the natural numbers. Thus, we obtain the common notation for a series indexed by the natural numbers
formula_129
Families of non-negative numbers.
When summing a family formula_130 of non-negative real numbers, define
formula_131
When the supremum is finite then the set of formula_132 such that formula_133 is countable. Indeed, for every formula_134 the cardinality formula_135 of the set formula_136 is finite because
formula_137
If formula_117 is countably infinite and enumerated as formula_138 then the above defined sum satisfies
formula_139
provided the value formula_140 is allowed for the sum of the series.
Any sum over non-negative reals can be understood as the integral of a non-negative function with respect to the counting measure, which accounts for the many similarities between the two constructions.
Abelian topological groups.
Let formula_141 be a map, also denoted by formula_142 from some non-empty set formula_117 into a Hausdorff abelian topological group formula_143
Let formula_144 be the collection of all finite subsets of formula_145 with formula_144 viewed as a directed set, ordered under inclusion formula_146 with union as join.
The family formula_142 is said to be unconditionally summable if the following limit, which is denoted by formula_147 and is called the sum of formula_142 exists in formula_148
formula_149
Saying that the sum formula_150 is the limit of finite partial sums means that for every neighborhood formula_151 of the origin in formula_152 there exists a finite subset formula_153 of formula_117 such that
formula_154
Because formula_144 is not totally ordered, this is not a limit of a sequence of partial sums, but rather of a net.
For every neighborhood formula_155 of the origin in formula_152 there is a smaller neighborhood formula_151 such that formula_156 It follows that the finite partial sums of an unconditionally summable family formula_142 form a Cauchy net, that is, for every neighborhood formula_155 of the origin in formula_152 there exists a finite subset formula_153 of formula_117 such that
formula_157
which implies that formula_158 for every formula_159 (by taking formula_160 and formula_161).
When formula_162 is complete, a family formula_163 is unconditionally summable in formula_162 if and only if the finite sums satisfy the latter Cauchy net condition. When formula_162 is complete and formula_142 is unconditionally summable in formula_152 then for every subset formula_164 the corresponding subfamily formula_165 is also unconditionally summable in formula_143
When the sum of a family of non-negative numbers, in the extended sense defined before, is finite, then it coincides with the sum in the topological group formula_166
If a family formula_163 in formula_162 is unconditionally summable then for every neighborhood formula_155 of the origin in formula_152 there is a finite subset formula_167 such that formula_158 for every index formula_168 not in formula_169 If formula_162 is a first-countable space then it follows that the set of formula_132 such that formula_170 is countable. This need not be true in a general abelian topological group (see examples below).
Unconditionally convergent series.
Suppose that formula_171 If a family formula_172 is unconditionally summable in a Hausdorff abelian topological group formula_152 then the series in the usual sense converges and has the same sum,
formula_173
By nature, the definition of unconditional summability is insensitive to the order of the summation. When formula_174 is unconditionally summable, then the series remains convergent after any permutation formula_175 of the set formula_176 of indices, with the same sum,
formula_177
Conversely, if every permutation of a series formula_174 converges, then the series is unconditionally convergent. When formula_162 is complete then unconditional convergence is also equivalent to the fact that all subseries are convergent; if formula_162 is a Banach space, this is equivalent to say that for every sequence of signs formula_178, the series
formula_179
converges in formula_143
Series in topological vector spaces.
If formula_162 is a topological vector space (TVS) and formula_180 is a (possibly uncountable) family in formula_162 then this family is summable if the limit formula_181 of the net formula_182 exists in formula_152 where formula_144 is the directed set of all finite subsets of formula_117 directed by inclusion formula_146 and formula_183
It is called absolutely summable if in addition, for every continuous seminorm formula_184 on formula_152 the family formula_185 is summable.
If formula_162 is a normable space and if formula_180 is an absolutely summable family in formula_152 then necessarily all but a countable collection of formula_186’s are zero. Hence, in normed spaces, it is usually only ever necessary to consider series with countably many terms.
Summable families play an important role in the theory of nuclear spaces.
Series in Banach and seminormed spaces.
The notion of series can be easily extended to the case of a seminormed space.
If formula_187 is a sequence of elements of a normed space formula_162 and if formula_188 then the series formula_189 converges to formula_114 in formula_162 if the sequence of partial sums of the series formula_190 converges to formula_114 in formula_162; to wit,
formula_191
More generally, convergence of series can be defined in any abelian Hausdorff topological group.
Specifically, in this case, formula_189 converges to formula_114 if the sequence of partial sums converges to formula_192
If formula_193 is a seminormed space, then the notion of absolute convergence becomes:
A series formula_194 of vectors in formula_162 converges absolutely if
formula_195
in which case all but at most countably many of the values formula_196 are necessarily zero.
If a countable series of vectors in a Banach space converges absolutely then it converges unconditionally, but the converse only holds in finite-dimensional Banach spaces (theorem of Dvoretzky and Rogers (1950)).
Well-ordered sums.
Conditionally convergent series can be considered if formula_117 is a well-ordered set, for example, an ordinal number formula_197
In this case, define by transfinite recursion:
formula_198
and for a limit ordinal formula_199
formula_200
if this limit exists. If all limits exist up to formula_201 then the series converges.
Examples.
a function whose support is a singleton formula_206 Then
formula_207
in the topology of pointwise convergence (that is, the sum is taken in the infinite product group formula_208).
While, formally, this requires a notion of sums of uncountable series, by construction there are, for every given formula_210 only finitely many nonzero terms in the sum, so issues regarding convergence of such sums do not arise. Actually, one usually assumes more: the family of functions is "locally finite", that is, for every formula_114 there is a neighborhood of formula_114 in which all but a finite number of functions vanish. Any regularity property of the formula_211 such as continuity, differentiability, that is preserved under finite sums will be preserved for the sum of any subcollection of this family of functions.
(in other words, formula_212 copies of 1 is formula_212) only if one takes a limit over all "countable" partial sums, rather than finite partial sums. This space is not separable.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "(a_1,a_2,a_3,\\ldots)"
},
{
"math_id": 1,
"text": "a_1+a_2+a_3+\\cdots,"
},
{
"math_id": 2,
"text": "\\sum_{i=1}^\\infty a_i."
},
{
"math_id": 3,
"text": "\\sum_{i=1}^\\infty a_i = \\lim_{n\\to\\infty} \\sum_{i=1}^n a_i."
},
{
"math_id": 4,
"text": "\\sum_{i=1}^\\infty a_i"
},
{
"math_id": 5,
"text": "a+b"
},
{
"math_id": 6,
"text": "\\mathbb R"
},
{
"math_id": 7,
"text": "\\mathbb C"
},
{
"math_id": 8,
"text": "a_0 + a_1 + a_2 + \\cdots, "
},
{
"math_id": 9,
"text": "(a_n)"
},
{
"math_id": 10,
"text": "a_0,a_1,\\dots"
},
{
"math_id": 11,
"text": "\\sum_{n=0}^{\\infty} a_n . "
},
{
"math_id": 12,
"text": "s=\\sum_{n=0}^\\infty a_n"
},
{
"math_id": 13,
"text": "s_k = \\sum_{n=0}^{k}a_n = a_0 + a_1 + \\cdots + a_k."
},
{
"math_id": 14,
"text": "\\sum_{n=0}^{\\infty} a_n"
},
{
"math_id": 15,
"text": "L = \\sum_{n=0}^{\\infty}a_n."
},
{
"math_id": 16,
"text": "\\sum_{n=0}^\\infty a_n = \\lim_{k\\to\\infty} s_k = \\lim_{k\\to\\infty} \\sum_{n=0}^k a_n."
},
{
"math_id": 17,
"text": " 1 + \\frac{1}{2}+ \\frac{1}{4}+ \\frac{1}{8}+\\cdots+ \\frac{1}{2^n}+\\cdots."
},
{
"math_id": 18,
"text": "S/2 = \\frac{1+ \\frac{1}{2}+ \\frac{1}{4}+ \\frac{1}{8}+\\cdots}{2} = \\frac{1}{2}+ \\frac{1}{4}+ \\frac{1}{8}+ \\frac{1}{16} +\\cdots."
},
{
"math_id": 19,
"text": "S-S/2 = 1 \\Rightarrow S = 2."
},
{
"math_id": 20,
"text": "x = 0.111\\dots , "
},
{
"math_id": 21,
"text": "\\sum_{n=1}^\\infty \\frac{1}{10^n}."
},
{
"math_id": 22,
"text": "1 + {1 \\over 2} + {1 \\over 4} + {1 \\over 8} + {1 \\over 16} + \\cdots=\\sum_{n=0}^\\infty{1 \\over 2^n} = 2."
},
{
"math_id": 23,
"text": "\\sum_{n=0}^\\infty z^n"
},
{
"math_id": 24,
"text": "|z| < 1"
},
{
"math_id": 25,
"text": " {1 \\over 1 - z}"
},
{
"math_id": 26,
"text": "1 + {1 \\over 2} + {1 \\over 3} + {1 \\over 4} + {1 \\over 5} + \\cdots = \\sum_{n=1}^\\infty {1 \\over n}."
},
{
"math_id": 27,
"text": "1 - {1 \\over 2} + {1 \\over 3} - {1 \\over 4} + {1 \\over 5} - \\cdots =\\sum_{n=1}^\\infty {\\left(-1\\right)^{n-1} \\over n}=\\ln(2) \\quad "
},
{
"math_id": 28,
"text": "-1+\\frac{1}{3} - \\frac{1}{5} + \\frac{1}{7} - \\frac{1}{9} + \\cdots =\\sum_{n=1}^\\infty \\frac{\\left(-1\\right)^n}{2n-1} = -\\frac{\\pi}{4}"
},
{
"math_id": 29,
"text": "\\sum_{n=1}^\\infty (b_n-b_{n+1})"
},
{
"math_id": 30,
"text": "3 + {5 \\over 2} + {7 \\over 4} + {9 \\over 8} + {11 \\over 16} + \\cdots=\\sum_{n=0}^\\infty{(3+2n) \\over 2^n}."
},
{
"math_id": 31,
"text": "\\sum_{n=1}^\\infty\\frac{1}{n^p}"
},
{
"math_id": 32,
"text": "_rF_s \\left[ \\begin{matrix}a_1, a_2, \\dotsc, a_r \\\\ b_1, b_2, \\dotsc, b_s \\end{matrix}; z \\right] := \\sum_{n=0}^{\\infty} \\frac{(a_1)_n (a_2)_n \\dotsb (a_r)_n}{(b_1)_n (b_2)_n \\dotsb (b_s)_n \\; n!} z^n"
},
{
"math_id": 33,
"text": "\\sum_{n=1}^\\infty \\frac{1}{n^{3}\\sin^{2} n}"
},
{
"math_id": 34,
"text": "\\pi"
},
{
"math_id": 35,
"text": "m\\pi"
},
{
"math_id": 36,
"text": "\\sin n"
},
{
"math_id": 37,
"text": "\\sin m\\pi = 0"
},
{
"math_id": 38,
"text": " \\sum_{i=1}^{\\infty} \\frac{1}{i^2} = \\frac{1}{1^2} + \\frac{1}{2^2} + \\frac{1}{3^2} + \\frac{1}{4^2} + \\cdots = \\frac{\\pi^2}{6}"
},
{
"math_id": 39,
"text": " \\sum_{i=1}^\\infty \\frac{(-1)^{i+1}(4)}{2i-1} = \\frac{4}{1} - \\frac{4}{3} + \\frac{4}{5} - \\frac{4}{7} + \\frac{4}{9} - \\frac{4}{11} + \\frac{4}{13} - \\cdots = \\pi"
},
{
"math_id": 40,
"text": "\\sum_{i=1}^\\infty \\frac{(-1)^{i+1}}{i} = \\ln 2"
},
{
"math_id": 41,
"text": "\\sum_{i=0}^\\infty \\frac{1}{(2i+1)(2i+2)} = \\ln 2"
},
{
"math_id": 42,
"text": "\\sum_{i=0}^\\infty \\frac{(-1)^i}{(i+1)(i+2)} = 2\\ln(2) -1"
},
{
"math_id": 43,
"text": "\\sum_{i=1}^\\infty \\frac{1}{i \\left(4i^2-1\\right)} = 2\\ln(2) -1"
},
{
"math_id": 44,
"text": " \\sum_{i=1}^\\infty \\frac{1}{2^{i}i} = \\ln 2"
},
{
"math_id": 45,
"text": " \\sum_{i=1}^\\infty \\left(\\frac{1}{3^i}+\\frac{1}{4^i}\\right)\\frac{1}{i} = \\ln 2"
},
{
"math_id": 46,
"text": " \\sum_{i=1}^\\infty \\frac{1}{2i(2i-1)} = \\ln 2"
},
{
"math_id": 47,
"text": "\\sum_{i = 0}^\\infty \\frac{(-1)^i}{i!} = 1-\\frac{1}{1!}+\\frac{1}{2!}-\\frac{1}{3!}+\\cdots = \\frac{1}{e}"
},
{
"math_id": 48,
"text": " \\sum_{i = 0}^\\infty \\frac{1}{i!} = \\frac{1}{0!} + \\frac{1}{1!} + \\frac{1}{2!} + \\frac{1}{3!} + \\frac{1}{4!} + \\cdots = e "
},
{
"math_id": 49,
"text": "\\int_0^x 1\\,dt = x."
},
{
"math_id": 50,
"text": "\\sum_{n = 1}^\\infty \\frac{1}{n^2}"
},
{
"math_id": 51,
"text": "\\frac1 {n^2} \\le \\frac{1}{n-1} - \\frac{1}{n}, \\quad n \\ge 2,"
},
{
"math_id": 52,
"text": "\\sum_{n=0}^\\infty a_n"
},
{
"math_id": 53,
"text": "\\sum_{n=0}^\\infty \\left|a_n\\right|"
},
{
"math_id": 54,
"text": "\\sum\\limits_{n=1}^\\infty {(-1)^{n+1} \\over n} = 1 - {1 \\over 2} + {1 \\over 3} - {1 \\over 4} + {1 \\over 5} - \\cdots,"
},
{
"math_id": 55,
"text": "\\ln 2"
},
{
"math_id": 56,
"text": "a_{n}"
},
{
"math_id": 57,
"text": "S"
},
{
"math_id": 58,
"text": "\\sum a_n = \\sum \\lambda_n b_n"
},
{
"math_id": 59,
"text": "B_{n} = b_{0} + \\cdots + b_{n}"
},
{
"math_id": 60,
"text": "\\lambda_{n}"
},
{
"math_id": 61,
"text": "\\lim \\lambda_{n} b_{n}"
},
{
"math_id": 62,
"text": "\\sup_N \\left| \\sum_{n=0}^N b_n \\right| < \\infty, \\ \\ \\sum \\left|\\lambda_{n+1} - \\lambda_n\\right| < \\infty\\ \\text{and} \\ \\lambda_n B_n \\ \\text{converges,}"
},
{
"math_id": 63,
"text": "\\sum a_{n}"
},
{
"math_id": 64,
"text": "\\sum_{n=2}^\\infty \\frac{\\sin(n x)}{\\ln n}"
},
{
"math_id": 65,
"text": "0 < x < 2\\pi"
},
{
"math_id": 66,
"text": "b_{n+1}=B_{n+1}-B_{n}"
},
{
"math_id": 67,
"text": " \\sum (\\lambda_n - \\lambda_{n+1}) \\, B_n."
},
{
"math_id": 68,
"text": "S:=\\sum_{m=0}^\\infty(-1)^m u_m"
},
{
"math_id": 69,
"text": "s_n"
},
{
"math_id": 70,
"text": "s_n:=\\sum_{m=0}^n(-1)^m u_m"
},
{
"math_id": 71,
"text": "|S-s_n|\\leq u_{n+1}."
},
{
"math_id": 72,
"text": "\\exp(X) := \\sum_{k=0}^\\infty\\frac{1}{k!}X^k,\\quad X\\in\\mathbb{C}^{n\\times n},"
},
{
"math_id": 73,
"text": "T_{r,s}(X) := \\left[\\sum_{j=0}^r\\frac{1}{j!}(X/s)^j\\right]^s,\\quad \\|\\exp(X)-T_{r,s}(X)\\|\\leq\\frac{\\|X\\|^{r+1}}{s^r(r+1)!}\\exp(\\|X\\|)."
},
{
"math_id": 74,
"text": "\\lim_{n \\to \\infty} a_n \\neq 0"
},
{
"math_id": 75,
"text": "\\lim_{n \\to \\infty} a_n = 0"
},
{
"math_id": 76,
"text": "\\sum b_n"
},
{
"math_id": 77,
"text": "\\left\\vert a_n \\right\\vert \\leq C \\left\\vert b_n \\right\\vert"
},
{
"math_id": 78,
"text": "C"
},
{
"math_id": 79,
"text": "n"
},
{
"math_id": 80,
"text": "\\sum a_n"
},
{
"math_id": 81,
"text": "\\sum \\left\\vert b_n \\right\\vert"
},
{
"math_id": 82,
"text": "\\left\\vert a_n \\right\\vert \\geq \\left\\vert b_n \\right\\vert"
},
{
"math_id": 83,
"text": "a_n"
},
{
"math_id": 84,
"text": "\\left\\vert \\frac{a_{n+1}}{a_{n}} \\right\\vert \\leq \\left\\vert \\frac{b_{n+1}}{b_{n}} \\right\\vert"
},
{
"math_id": 85,
"text": "\\sum \\left| b_n \\right|"
},
{
"math_id": 86,
"text": "\\left\\vert \\frac{a_{n+1}}{a_{n}} \\right\\vert \\geq \\left\\vert \\frac{b_{n+1}}{b_{n}} \\right\\vert"
},
{
"math_id": 87,
"text": "C < 1"
},
{
"math_id": 88,
"text": "\\left\\vert \\frac{a_{n+1}}{a_{n}} \\right\\vert < C"
},
{
"math_id": 89,
"text": "1"
},
{
"math_id": 90,
"text": "\\left\\vert a_{n} \\right\\vert^{\\frac{1}{n}} \\leq C"
},
{
"math_id": 91,
"text": "f(x)"
},
{
"math_id": 92,
"text": "[1,\\infty)"
},
{
"math_id": 93,
"text": "f(n)=a_{n}"
},
{
"math_id": 94,
"text": "\\int_{1}^{\\infty} f(x) \\, dx"
},
{
"math_id": 95,
"text": "\\sum 2^{k} a_{(2^{k})}"
},
{
"math_id": 96,
"text": "\\sum (-1)^{n} a_{n}"
},
{
"math_id": 97,
"text": "a_{n} > 0"
},
{
"math_id": 98,
"text": "0"
},
{
"math_id": 99,
"text": "\\sum_{n=0}^\\infty f_n(x)"
},
{
"math_id": 100,
"text": "s_N(x) = \\sum_{n=0}^N f_n(x)"
},
{
"math_id": 101,
"text": "|s_N(x) - f(x)|"
},
{
"math_id": 102,
"text": "\\int_E \\left|s_N(x)-f(x)\\right|^2\\,dx \\to 0"
},
{
"math_id": 103,
"text": "\\sum_{n=0}^\\infty a_n(x-c)^n."
},
{
"math_id": 104,
"text": "\\sum_{n=0}^{\\infty} \\frac{x^n}{n!}"
},
{
"math_id": 105,
"text": "e^x"
},
{
"math_id": 106,
"text": "\\sum_{n=-\\infty}^\\infty a_n x^n."
},
{
"math_id": 107,
"text": "\\sum_{n=1}^\\infty {a_n \\over n^s},"
},
{
"math_id": 108,
"text": "\\zeta(s) = \\sum_{n=1}^\\infty \\frac{1}{n^s}."
},
{
"math_id": 109,
"text": "\\Complex\\setminus\\{1\\}"
},
{
"math_id": 110,
"text": "\\frac12 A_0 + \\sum_{n=1}^\\infty \\left(A_n\\cos nx + B_n \\sin nx\\right)."
},
{
"math_id": 111,
"text": "1 + \\frac{\\alpha\\beta}{1\\cdot\\gamma}x + \\frac{\\alpha(\\alpha+1)\\beta(\\beta+1)}{1 \\cdot 2 \\cdot \\gamma(\\gamma+1)}x^2 + \\cdots"
},
{
"math_id": 112,
"text": "1 + \\frac{m}{1!}x + \\frac{m(m-1)}{2!}x^2 + \\cdots"
},
{
"math_id": 113,
"text": "m"
},
{
"math_id": 114,
"text": "x"
},
{
"math_id": 115,
"text": "F(x) = 1^n + 2^n + \\cdots + (x - 1)^n."
},
{
"math_id": 116,
"text": "I."
},
{
"math_id": 117,
"text": "I"
},
{
"math_id": 118,
"text": "a : I \\mapsto G"
},
{
"math_id": 119,
"text": "G,"
},
{
"math_id": 120,
"text": "a"
},
{
"math_id": 121,
"text": "a(x) \\in G "
},
{
"math_id": 122,
"text": "x \\in I"
},
{
"math_id": 123,
"text": "\\sum_{x \\in I} a(x)."
},
{
"math_id": 124,
"text": "I=\\N,"
},
{
"math_id": 125,
"text": "a : \\N \\mapsto G"
},
{
"math_id": 126,
"text": "a(n) = a_n."
},
{
"math_id": 127,
"text": "\\sum_{n \\in \\N}"
},
{
"math_id": 128,
"text": "\\sum_{n=0}^{\\infty}"
},
{
"math_id": 129,
"text": "\\sum_{n=0}^{\\infty} a_n = a_0 + a_1 + a_2 + \\cdots."
},
{
"math_id": 130,
"text": "\\left\\{a_i : i \\in I\\right\\}"
},
{
"math_id": 131,
"text": "\\sum_{i\\in I}a_i = \\sup \\left\\{ \\sum_{i\\in A} a_i\\, : A \\subseteq I, A \\text{ finite}\\right\\} \\in [0, +\\infty]."
},
{
"math_id": 132,
"text": "i \\in I"
},
{
"math_id": 133,
"text": "a_i > 0"
},
{
"math_id": 134,
"text": "n \\geq 1,"
},
{
"math_id": 135,
"text": "\\left|A_n\\right|"
},
{
"math_id": 136,
"text": "A_n = \\left\\{i \\in I : a_i > 1/n\\right\\}"
},
{
"math_id": 137,
"text": "\\frac{1}{n} \\, \\left|A_n\\right| = \\sum_{i \\in A_n} \\frac{1}{n} \\leq \\sum_{i \\in A_n} a_i \\leq \\sum_{i \\in I} a_i < \\infty."
},
{
"math_id": 138,
"text": "I = \\left\\{i_0, i_1, \\ldots\\right\\}"
},
{
"math_id": 139,
"text": "\\sum_{i \\in I} a_i = \\sum_{k=0}^{+\\infty} a_{i_k},"
},
{
"math_id": 140,
"text": "\\infty"
},
{
"math_id": 141,
"text": "a : I \\to X"
},
{
"math_id": 142,
"text": "\\left(a_i\\right)_{i \\in I},"
},
{
"math_id": 143,
"text": "X."
},
{
"math_id": 144,
"text": "\\operatorname{Finite}(I)"
},
{
"math_id": 145,
"text": "I,"
},
{
"math_id": 146,
"text": "\\,\\subseteq\\,"
},
{
"math_id": 147,
"text": "\\sum_{i\\in I} a_i"
},
{
"math_id": 148,
"text": "X:"
},
{
"math_id": 149,
"text": "\\sum_{i\\in I} a_i := \\lim_{A \\in \\operatorname{Finite}(I)} \\ \\sum_{i\\in A} a_i = \\lim \\left\\{\\sum_{i\\in A} a_i \\,: A \\subseteq I, A \\text{ finite }\\right\\}"
},
{
"math_id": 150,
"text": "S := \\sum_{i\\in I} a_i"
},
{
"math_id": 151,
"text": "V"
},
{
"math_id": 152,
"text": "X,"
},
{
"math_id": 153,
"text": "A_0"
},
{
"math_id": 154,
"text": "S - \\sum_{i \\in A} a_i \\in V \\qquad \\text{ for every finite superset} \\; A \\supseteq A_0."
},
{
"math_id": 155,
"text": "W"
},
{
"math_id": 156,
"text": "V - V \\subseteq W."
},
{
"math_id": 157,
"text": "\\sum_{i \\in A_1} a_i - \\sum_{i \\in A_2} a_i \\in W \\qquad \\text{ for all finite supersets } \\; A_1, A_2 \\supseteq A_0,"
},
{
"math_id": 158,
"text": "a_i \\in W"
},
{
"math_id": 159,
"text": "i \\in I \\setminus A_0"
},
{
"math_id": 160,
"text": "A_1 := A_0 \\cup \\{i\\}"
},
{
"math_id": 161,
"text": "A_2 := A_0"
},
{
"math_id": 162,
"text": "X"
},
{
"math_id": 163,
"text": "\\left(a_i\\right)_{i \\in I}"
},
{
"math_id": 164,
"text": "J \\subseteq I,"
},
{
"math_id": 165,
"text": "\\left(a_j\\right)_{j \\in J},"
},
{
"math_id": 166,
"text": "X = \\R."
},
{
"math_id": 167,
"text": "A_0 \\subseteq I"
},
{
"math_id": 168,
"text": "i"
},
{
"math_id": 169,
"text": "A_0."
},
{
"math_id": 170,
"text": "a_i \\neq 0"
},
{
"math_id": 171,
"text": "I = \\N."
},
{
"math_id": 172,
"text": "a_n, n \\in \\N,"
},
{
"math_id": 173,
"text": "\\sum_{n=0}^\\infty a_n = \\sum_{n \\in \\N} a_n."
},
{
"math_id": 174,
"text": "\\sum a_n"
},
{
"math_id": 175,
"text": "\\sigma : \\N \\to \\N"
},
{
"math_id": 176,
"text": "\\N"
},
{
"math_id": 177,
"text": "\\sum_{n=0}^\\infty a_{\\sigma(n)} = \\sum_{n=0}^\\infty a_n."
},
{
"math_id": 178,
"text": "\\varepsilon_n = \\pm 1"
},
{
"math_id": 179,
"text": "\\sum_{n=0}^\\infty \\varepsilon_n a_n"
},
{
"math_id": 180,
"text": "\\left(x_i\\right)_{i \\in I}"
},
{
"math_id": 181,
"text": "\\lim_{A \\in \\operatorname{Finite}(I)} x_A"
},
{
"math_id": 182,
"text": "\\left(x_A\\right)_{A \\in \\operatorname{Finite}(I)}"
},
{
"math_id": 183,
"text": "x_A := \\sum_{i \\in A} x_i."
},
{
"math_id": 184,
"text": "p"
},
{
"math_id": 185,
"text": "\\left(p\\left(x_i\\right)\\right)_{i \\in I}"
},
{
"math_id": 186,
"text": "x_i"
},
{
"math_id": 187,
"text": "x_n"
},
{
"math_id": 188,
"text": "x \\in X"
},
{
"math_id": 189,
"text": "\\sum x_n"
},
{
"math_id": 190,
"text": "\\left(\\sum_{n=0}^N x_n\\right)_{N=1}^{\\infty}"
},
{
"math_id": 191,
"text": "\\left\\|x - \\sum_{n=0}^N x_n\\right\\| \\to 0 \\quad \\text{ as } N \\to \\infty."
},
{
"math_id": 192,
"text": "x."
},
{
"math_id": 193,
"text": "(X, |\\cdot|)"
},
{
"math_id": 194,
"text": "\\sum_{i \\in I} x_i"
},
{
"math_id": 195,
"text": " \\sum_{i \\in I} \\left|x_i\\right| < +\\infty"
},
{
"math_id": 196,
"text": "\\left|x_i\\right|"
},
{
"math_id": 197,
"text": "\\alpha_0."
},
{
"math_id": 198,
"text": "\\sum_{\\beta < \\alpha + 1} a_\\beta = a_{\\alpha} + \\sum_{\\beta < \\alpha} a_\\beta"
},
{
"math_id": 199,
"text": "\\alpha,"
},
{
"math_id": 200,
"text": "\\sum_{\\beta < \\alpha} a_\\beta = \\lim_{\\gamma\\to\\alpha} \\sum_{\\beta < \\gamma} a_\\beta"
},
{
"math_id": 201,
"text": "\\alpha_0,"
},
{
"math_id": 202,
"text": "f : X \\to Y"
},
{
"math_id": 203,
"text": "Y,"
},
{
"math_id": 204,
"text": "a \\in X,"
},
{
"math_id": 205,
"text": "f_a(x)=\n\\begin{cases}\n0 & x\\neq a, \\\\\nf(a) & x=a, \\\\\n\\end{cases}\n"
},
{
"math_id": 206,
"text": "\\{a\\}."
},
{
"math_id": 207,
"text": "f = \\sum_{a \\in X}f_a"
},
{
"math_id": 208,
"text": "Y^X"
},
{
"math_id": 209,
"text": " \\sum_{i \\in I} \\varphi_i(x) = 1."
},
{
"math_id": 210,
"text": "x,"
},
{
"math_id": 211,
"text": "\\varphi_i,"
},
{
"math_id": 212,
"text": "\\omega_1"
},
{
"math_id": 213,
"text": "f : \\left[0, \\omega_1\\right) \\to \\left[0, \\omega_1\\right]"
},
{
"math_id": 214,
"text": "f(\\alpha) = 1"
},
{
"math_id": 215,
"text": "\\sum_{\\alpha \\in [0,\\omega_1)}f(\\alpha) = \\omega_1"
}
] |
https://en.wikipedia.org/wiki?curid=15287
|
1528793
|
Effective rate of protection
|
Total effect of entire tariff structure
In economics, the effective rate of protection (ERP) is a measure of the total effect of the entire tariff structure on the value added per unit of output in each industry, when both intermediate and final goods are imported. This statistic is used by economists to measure the real amount of protection afforded to a particular industry by import duties, tariffs or other trade restrictions.
History.
Early work on the concept was undertaken by Clarence Barber. The idea was developed and applied to policy analysis by Max Corden.
Explanation.
Consider a simple case: there is a tradable good (shoes) that uses one tradable input to produce (leather). Both shoes and leather are imported into the home country. Suppose that in the absence of any tariffs, shoes use $100 worth of leather to make, and shoes sell for $150 in the international markets. Shoemakers around the world add $50 of value. If the home country imposes a 20% tariff on shoes, but no tariff on leather, shoes would sell for $180 in the home country, and the value added for the domestic shoe maker would increase by $30, from $50 to $80. The domestic shoe maker is afforded a 60% effective rate of protection per dollar of value added.
This equals formula_0d/formula_1int - formula_2), where:
VAd = domestic value added
VAint = international value added
An alternative that yields an identical answer is that the effective rate of protection equals formula_3f formula_4i)formula_5int, where:
Tf = the total tariff theoretically or actually paid on the final product
Ti = the total tariffs paid, theoretically or actually, on the "importable" inputs used to make that product.
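For the shoe example above, both formulations can be checked numerically (a minimal sketch; the function names are illustrative, not from the article):

```python
def erp_value_added(va_domestic, va_international):
    """Effective rate of protection as VA_d / VA_int - 1."""
    return va_domestic / va_international - 1

def erp_tariffs(t_final, t_inputs, va_international):
    """Equivalent form: (T_f - T_i) / VA_int."""
    return (t_final - t_inputs) / va_international

# Shoes: $50 of world value added; the 20% tariff on the $150 shoe
# amounts to $30, with no tariff on the leather input.
print(erp_value_added(80, 50))  # 0.6 up to floating-point rounding
print(erp_tariffs(30, 0, 50))   # 0.6
```

Both give the 60% rate of the worked example, confirming that the value-added and tariff-based forms agree.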
The effective rate of protection is used to estimate the protection really afforded to domestic producers at each stage of production, i.e., how much extra they can charge and still be competitive with imported goods. If the total value of the tariffs on importable inputs exceeds that on the output, the effective rate of protection is negative, i.e., the industry is discriminated against in comparison with the imported product.
In this context, it does not matter whether the final product or the inputs used to make it were actually imported or not. What is important is that they are importable. If so, the implied tariffs should be included in the above formulas because, even if the item was not actually imported, the existence of the tariff should have raised its price in the local market by an equivalent value.
The effective rate of protection reveals the extremely adverse effect of tariffs that escalate from low rates on raw materials to high rates on intermediate inputs and yet higher rates on the final product as, in fact, most countries' tariff schedules do. Less developed countries complain that such tariff schedules gravely impede their access to developed countries' markets.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(VA"
},
{
"math_id": 1,
"text": "VA"
},
{
"math_id": 2,
"text": "1"
},
{
"math_id": 3,
"text": " (T"
},
{
"math_id": 4,
"text": "- T"
},
{
"math_id": 5,
"text": "/VA"
}
] |
https://en.wikipedia.org/wiki?curid=1528793
|
15291970
|
Schur–Weyl duality
|
Schur–Weyl duality is a mathematical theorem in representation theory that relates irreducible finite-dimensional representations of the general linear and symmetric groups. It is named after two pioneers of representation theory of Lie groups, Issai Schur, who discovered the phenomenon, and Hermann Weyl, who popularized it in his books on quantum mechanics and classical groups as a way of classifying representations of unitary and general linear groups.
Schur–Weyl duality can be proven using the double centralizer theorem.
Description.
Schur–Weyl duality forms an archetypical situation in representation theory involving two kinds of symmetry that determine each other. Consider the tensor space
formula_0 with "k" factors.
The symmetric group "S""k" on "k" letters acts on this space (on the left) by permuting the factors,
formula_1
The general linear group "GL""n" of invertible "n"×"n" matrices acts on it by the simultaneous matrix multiplication,
formula_2
These two actions commute, and in its concrete form, the Schur–Weyl duality asserts that under the joint action of the groups "S""k" and "GL""n", the tensor space decomposes into a direct sum of tensor products of irreducible modules (for these two groups) that actually determine each other,
formula_3
The summands are indexed by the Young diagrams "D" with "k" boxes and at most "n" rows, and representations formula_4 of "S""k" with different "D" are mutually non-isomorphic, and the same is true for representations formula_5 of "GL""n".
The abstract form of the Schur–Weyl duality asserts that two algebras of operators on the tensor space generated by the actions of "GL""n" and "S""k" are the full mutual centralizers in the algebra of the endomorphisms formula_6
Example.
Suppose that "k" = 2 and "n" is greater than one. Then the Schur–Weyl duality is the statement that the space of two-tensors decomposes into symmetric and antisymmetric parts, each of which is an irreducible module for "GL""n":
formula_7
The symmetric group "S""2" consists of two elements and has two irreducible representations, the trivial representation and the sign representation. The trivial representation of "S"2 gives rise to the symmetric tensors, which are invariant (i.e. do not change) under the permutation of the factors, and the sign representation corresponds to the skew-symmetric tensors, which flip the sign.
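The dimensions in this decomposition can be checked directly: the symmetric square has dimension "n"("n"+1)/2 and the antisymmetric square "n"("n"−1)/2, which sum to "n"². A quick sketch (not part of the article):

```python
def dim_sym2(n):
    """Dimension of the symmetric square S^2(C^n)."""
    return n * (n + 1) // 2

def dim_wedge2(n):
    """Dimension of the antisymmetric square Lambda^2(C^n)."""
    return n * (n - 1) // 2

# The two summands exhaust the n*n-dimensional space of two-tensors.
for n in range(2, 10):
    assert dim_sym2(n) + dim_wedge2(n) == n * n
```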
Proof.
First consider the following setup:
The proof uses two algebraic lemmas.
"Proof": Since "U" is semisimple by Maschke's theorem, there is a decomposition formula_16 into simple "A"-modules. Then formula_17. Since "A" is the left regular representation of "G", each simple "G"-module appears in "A" and we have that formula_18 (respectively zero) if and only if formula_19 correspond to the same simple factor of "A" (respectively otherwise). Hence, we have: formula_20 Now, it is easy to see that each nonzero vector in formula_21 generates the whole space as a "B"-module and so formula_15 is simple. (In general, a nonzero module is simple if and only if each of its nonzero cyclic submodule coincides with the module.) formula_22
"Proof": Let formula_26. The formula_27. Also, the image of "W" spans the subspace of symmetric tensors formula_28. Since formula_29, the image of formula_14 spans formula_11. Since formula_25 is dense in "W" either in the Euclidean topology or in the Zariski topology, the assertion follows. formula_22
The Schur–Weyl duality now follows. We take formula_30 to be the symmetric group and formula_23 the "d"-th tensor power of a finite-dimensional complex vector space "V".
Let formula_31 denote the irreducible formula_24-representation corresponding to a partition formula_32 and formula_33. Then by Lemma 1
formula_34
is irreducible as a formula_25-module. Moreover, when formula_35 is the left semisimple decomposition, we have:
formula_36,
which is the semisimple decomposition as a formula_25-module.
Generalizations.
The Brauer algebra plays the role of the symmetric group in the generalization of the Schur-Weyl duality to the orthogonal and symplectic groups.
More generally, the partition algebra and its subalgebras give rise to a number of generalizations of the Schur-Weyl duality.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathbb{C}^n\\otimes\\mathbb{C}^n\\otimes\\cdots\\otimes\\mathbb{C}^n "
},
{
"math_id": 1,
"text": " \\sigma(v_1\\otimes v_2\\otimes\\cdots\\otimes v_k) = v_{\\sigma^{-1}(1)}\\otimes v_{\\sigma^{-1}(2)}\\otimes\\cdots\\otimes v_{\\sigma^{-1}(k)}."
},
{
"math_id": 2,
"text": " g(v_1\\otimes v_2\\otimes\\cdots\\otimes v_k) = gv_1\\otimes gv_2\\otimes\\cdots\\otimes gv_k, \\quad g\\in \\text{GL}_n. "
},
{
"math_id": 3,
"text": " \\mathbb{C}^n\\otimes\\mathbb{C}^n\\otimes\\cdots\\otimes\\mathbb{C}^n = \\bigoplus_D \\pi_k^D\\otimes\\rho_n^D. "
},
{
"math_id": 4,
"text": "\\pi_k^D"
},
{
"math_id": 5,
"text": "\\rho_n^D"
},
{
"math_id": 6,
"text": "\\mathrm{End}_\\mathbb{C}(\\mathbb{C}^n\\otimes\\mathbb{C}^n\\otimes\\cdots\\otimes\\mathbb{C}^n)."
},
{
"math_id": 7,
"text": " \\mathbb{C}^n\\otimes\\mathbb{C}^n = S^2\\mathbb{C}^n \\oplus \\Lambda^2\\mathbb{C}^n."
},
{
"math_id": 8,
"text": "A = \\mathbb{C}[G]"
},
{
"math_id": 9,
"text": "U"
},
{
"math_id": 10,
"text": "B = \\operatorname{End}_G(U)"
},
{
"math_id": 11,
"text": "B"
},
{
"math_id": 12,
"text": "A"
},
{
"math_id": 13,
"text": "\\operatorname{End}(U)"
},
{
"math_id": 14,
"text": "W"
},
{
"math_id": 15,
"text": "U \\otimes_A W"
},
{
"math_id": 16,
"text": "U = \\bigoplus_i U_i^{\\oplus m_i}"
},
{
"math_id": 17,
"text": "U \\otimes_A W = \\bigoplus_i (U_i \\otimes_A W)^{\\oplus m_i}"
},
{
"math_id": 18,
"text": "U_i \\otimes_A W = \\mathbb{C}"
},
{
"math_id": 19,
"text": "U_i, W"
},
{
"math_id": 20,
"text": "U \\otimes_A W = (U_{i_0} \\otimes_A W)^{\\oplus m_{i_0}} = \\mathbb{C}^{\\oplus m_{i_0}}."
},
{
"math_id": 21,
"text": "\\mathbb{C}^{\\oplus m_{i_0}}"
},
{
"math_id": 22,
"text": "\\square"
},
{
"math_id": 23,
"text": "U = V^{\\otimes d}"
},
{
"math_id": 24,
"text": "\\mathfrak{S}_d"
},
{
"math_id": 25,
"text": "\\operatorname{GL}(V)"
},
{
"math_id": 26,
"text": "W = \\operatorname{End}(V)"
},
{
"math_id": 27,
"text": "W \\hookrightarrow \\operatorname{End}(U), w \\mapsto w^d = d! w \\otimes \\cdots \\otimes w"
},
{
"math_id": 28,
"text": "\\operatorname{Sym}^d(W)"
},
{
"math_id": 29,
"text": "B = \\operatorname{Sym}^d(W)"
},
{
"math_id": 30,
"text": "G = \\mathfrak{S}_d"
},
{
"math_id": 31,
"text": "V^{\\lambda}"
},
{
"math_id": 32,
"text": "\\lambda"
},
{
"math_id": 33,
"text": "m_{\\lambda} = \\dim V^{\\lambda}"
},
{
"math_id": 34,
"text": "S^{\\lambda}(V) := V^{\\otimes d} \\otimes_{\\mathfrak{S}_d} V^{\\lambda}"
},
{
"math_id": 35,
"text": "A = \\bigoplus_{\\lambda} (V^{\\lambda})^{\\oplus m_\\lambda}"
},
{
"math_id": 36,
"text": "V^{\\otimes d} = V^{\\otimes d} \\otimes_A A = \\bigoplus_{\\lambda} (V^{\\otimes d} \\otimes_{\\mathfrak{S}_d} V^{\\lambda})^{\\oplus m_{\\lambda}}"
}
] |
https://en.wikipedia.org/wiki?curid=15291970
|
15292063
|
Color difference
|
Metric for difference between two colors
In color science, color difference or color distance is the separation between two colors. This metric allows quantified examination of a notion that formerly could only be described with adjectives. Quantification of these properties is of great importance to those whose work is color-critical. Common definitions make use of the Euclidean distance in a device-independent color space.
Euclidean.
sRGB.
As most definitions of color difference are distances within a color space, the standard means of determining distances is the Euclidean distance. If one presently has an RGB (red, green, blue) tuple and wishes to find the color difference, computationally one of the easiest is to consider "R", "G", "B" linear dimensions defining the color space.
A very simple example can be given between the two colors with RGB values (0, 64, 0) ( ) and (255, 64, 0) ( ): their distance is 255. Going from there to (255, 64, 128) ( ) is a distance of 128.
When we wish to calculate distance from the first point to the third point (i.e. changing more than one of the color values), we can do this:
formula_0
When the result should be computationally simple as well, it is often acceptable to remove the square root and simply use
formula_1
This will work in cases when a single color is to be compared to a single color and the need is simply to know whether a distance is greater. If these squared color distances are summed, such a metric effectively becomes the variance of the color distances.
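The plain Euclidean computation above can be sketched in a few lines, reproducing the distances of the earlier example:

```python
import math

def rgb_distance(c1, c2):
    """Euclidean distance between two RGB tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

print(rgb_distance((0, 64, 0), (255, 64, 0)))      # 255.0
print(rgb_distance((255, 64, 0), (255, 64, 128)))  # 128.0
print(rgb_distance((0, 64, 0), (255, 64, 128)))    # ≈ 285.3
```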
There have been many attempts to weigh RGB values to better fit human perception, with the components commonly weighted as red 30%, green 59%, and blue 11%. However, these weights are demonstrably worse at color determination: they are properly the contributions of the components to brightness, rather than measures of the degree to which human vision has less tolerance for these colors. The closer approximations would be more properly (for non-linear sRGB, using a color range of 0–255):
formula_2
where:
formula_3
One of the better low-cost approximations, sometimes called "redmean", combines the two cases smoothly:
formula_4
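A direct transcription of the redmean formula (a sketch; the function name is illustrative):

```python
import math

def redmean(c1, c2):
    """'Redmean' low-cost approximation of perceptual RGB distance (0-255 channels)."""
    r1, g1, b1 = c1
    r2, g2, b2 = c2
    rbar = (r1 + r2) / 2
    dr, dg, db = r1 - r2, g1 - g2, b1 - b2
    # The red and blue weights vary smoothly with the mean red level,
    # blending the two piecewise cases of the simpler approximation.
    return math.sqrt((2 + rbar / 256) * dr * dr
                     + 4 * dg * dg
                     + (2 + (255 - rbar) / 256) * db * db)
```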
There are a number of color distance formulae that attempt to use color spaces like HSV or HSL with the hue represented as a circle, placing the various colors within a three-dimensional space of either a cylinder or cone, but most of these are just modifications of RGB; without accounting for differences in human color perception, they will tend to be on par with a simple Euclidean metric.
Uniform color spaces.
CIELAB and CIELUV are relatively perceptually-uniform color spaces and they have been used as spaces for Euclidean measures of color difference. The CIELAB version is known as CIE76. However, the non-uniformity of these spaces were later discovered, leading to the creation of more complex formulae.
<templatestyles src="Template:Blockquote/styles.css" />Uniform color space: a color space in which equivalent numerical differences represent equivalent visual differences, regardless of location within the color space. A truly uniform color space has been the goal of color scientists for many years. Most color spaces, though not perfectly uniform, are referred to as uniform color spaces, since they are more nearly uniform when compared to the chromaticity diagram.
A uniform color space is supposed to make a simple measure of color difference, usually Euclidean, "just work". Color spaces that improve on this issue include CAM02-UCS, CAM16-UCS, and Jzazbz.
Rec. ITU-R BT.2124 or Δ"E"ITP.
In 2019 a new standard for WCG and HDR was introduced, since CIEDE2000 was not adequate for it: CIEDE2000 is not reliable below 1 cd/m2 and has not been verified above 100 cd/m2; in addition, even in BT.709 blue primary CIEDE2000 is underpredicting the error. Δ"E"ITP is scaled so that a value of 1 indicates the potential of a just noticeable color difference. The Δ"E"ITP color difference metric is derived from display referenced ICTCP, but XYZ is also available in the standard. The formula is a simply scaled Euclidean distance:
formula_5
where the components of this "ITP" is given by
"I" = "I",
"T" = 0.5 "C""T",
"P" = "C""P".
Other geometric constructions.
The Euclidean measure is known to work poorly on large color distances (i.e. more than 10 units in most systems). A hybrid approach, in which a taxicab metric combines the lightness difference with the Euclidean distance in the chroma plane, formula_6, is shown to work better on CIELAB.
CIELAB ΔE*.
The International Commission on Illumination (CIE) calls their distance metric Δ"E*" (also inaccurately called "dE"*, dE, or "Delta E") where delta is a Greek letter often used to denote difference, and E stands for "Empfindung"; German for "sensation". Use of this term can be traced back to Hermann von Helmholtz and Ewald Hering.
Perceptual non-uniformities in the underlying CIELAB color space have led to the CIE refining their definition over the years, leading to the superior (as recommended by the CIE) 1994 and 2000 formulas. These non-uniformities are important because the human eye is more sensitive to certain colors than others. CIELAB metric is used to define color tolerance of CMYK solids. A good metric should take this into account in order for the notion of a "just noticeable difference" (JND) to have meaning. Otherwise, a certain Δ"E" may be insignificant between two colors in one part of the color space while being significant in some other part.
All Δ"E*" formulae are originally designed to have the difference of 1.0 stand for a JND. This convention is generally followed by other perceptual distance functions such as the aforementioned Δ"E"ITP. However, further experimentation may invalidate this design assumption, the revision of CIE76 Δ"E"*"ab" JND to 2.3 being an example.
CIE76.
The CIE 1976 color difference formula is the first formula that related a measured color difference to a known set of CIELAB coordinates. This formula has been succeeded by the 1994 and 2000 formulas because the CIELAB space turned out to be not as perceptually uniform as intended, especially in the saturated regions. As a result, the formula rates differences between saturated colors too highly relative to other colors.
Given two colors in CIELAB color space, formula_7 and formula_8, the CIE76 color difference formula is defined as:
formula_9
formula_10 corresponds to a JND (just noticeable difference).
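CIE76 is simply the Euclidean distance in CIELAB, so a sketch is short:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76: Euclidean distance between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

print(delta_e_76((50, 0, 0), (50, 3, 4)))  # 5.0
```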
CMC l:c (1984).
In 1984, the Colour Measurement Committee of the Society of Dyers and Colourists defined a difference measure based on the CIE L*C*h color model, an alternative representation of L*a*b* coordinates. Named after the developing committee, their metric is called CMC l:c. The quasimetric (i.e. it violates symmetry: parameter T is based on the hue of the reference formula_11 alone) has two parameters: lightness (l) and chroma (c), allowing the users to weight the difference based on the ratio of l:c that is deemed appropriate for the application. Commonly used values are 2:1 for acceptability and 1:1 for the threshold of imperceptibility.
The distance of a color formula_12 to a reference formula_13 is:
formula_14
formula_15
formula_16
CMC l:c is designed to be used with D65 and the CIE Supplementary Observer.
CIE94.
The CIE 1976 color difference definition was extended to address perceptual non-uniformities, while retaining the CIELAB color space, by the introduction of application-specific parametric weighting factors "kL", "kC" and "kH", and functions "SL", "SC", and "SH" derived from an automotive paint test's tolerance data.
As with CMC l:c, Δ"E" (1994) is defined in the L*C*h* color space and likewise violates symmetry, therefore defining a quasimetric. Given a reference color formula_17 and another color formula_18, the difference is
formula_19
where
formula_20
and where "kC" and "kH" are usually both set to unity, and the parametric weighting factors "kL", "K"1 and "K"2 depend on the application:
Geometrically, the quantity formula_21 corresponds to the arithmetic mean of the chord lengths of the equal chroma circles of the two colors.
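A sketch of the CIE94 quasimetric follows, with "kC" = "kH" = 1 as in the article; the defaults "kL" = 1, "K"1 = 0.045, "K"2 = 0.015 are the standard graphic-arts values, supplied here as an assumption since the application-dependent parameter table is not reproduced above:

```python
import math

def delta_e_94(lab_ref, lab_sample, k_L=1.0, K1=0.045, K2=0.015):
    """CIE94 difference of a sample against a reference (asymmetric in its arguments)."""
    L1, a1, b1 = lab_ref
    L2, a2, b2 = lab_sample
    dL = L1 - L2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    dC = C1 - C2
    da, db = a1 - a2, b1 - b2
    dH_sq = max(da * da + db * db - dC * dC, 0.0)  # clamp rounding error
    # The weighting functions depend on the reference chroma C1 only,
    # which is what makes the measure a quasimetric.
    S_L, S_C, S_H = 1.0, 1.0 + K1 * C1, 1.0 + K2 * C1
    return math.sqrt((dL / (k_L * S_L)) ** 2 + (dC / S_C) ** 2 + dH_sq / S_H ** 2)
```

Swapping the reference and sample changes the result whenever their chromas differ, illustrating the asymmetry noted above.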
CIEDE2000.
Since the 1994 definition did not adequately resolve the perceptual uniformity issue, the CIE refined their definition with the CIEDE2000 formula published in 2001, adding five corrections:
formula_22
formula_23
formula_24
formula_25
formula_26
formula_27
formula_28
formula_29
formula_30
formula_31
formula_32
CIEDE 2000 is not mathematically continuous. The discontinuity stems from calculating the mean hue formula_33 and the hue difference formula_34. The maximum discontinuity happens when the hues of two sample colors are about 180° apart, and is usually small relative to ΔE (less than 4%). There is also a negligible amount of discontinuity from hue rollover.
Sharma, Wu, and Dalal have provided some additional notes on the mathematics and implementation of the formula.
Tolerance.
Tolerancing concerns the question "What is a set of colors that are imperceptibly/acceptably close to a given reference?" If the distance measure is perceptually uniform, then the answer is simply "the set of points whose distance to the reference is less than the just-noticeable-difference (JND) threshold". This requires a perceptually uniform metric in order for the threshold to be constant throughout the gamut (range of colors). Otherwise, the threshold will be a function of the reference color—cumbersome as a practical guide.
In the CIE 1931 color space, for example, the tolerance contours are defined by the MacAdam ellipse, which holds L* (lightness) fixed. As can be observed on the adjacent diagram, the ellipses denoting the tolerance contours vary in size. It is partly this non-uniformity that led to the creation of CIELUV and CIELAB.
More generally, if the lightness is allowed to vary, then we find the tolerance set to be ellipsoidal. Increasing the weighting factor in the aforementioned distance expressions has the effect of increasing the size of the ellipsoid along the respective axis.
Footnotes.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\text{distance} = \\sqrt{(R_2 - R_1)^2 + (G_2 - G_1)^2 + (B_2 - B_1)^2}.\n"
},
{
"math_id": 1,
"text": "\n\\text{distance}^2 = (R_2 - R_1)^2 + (G_2 - G_1)^2 + (B_2 - B_1)^2.\n"
},
{
"math_id": 2,
"text": "\n\\begin{cases}\n \\sqrt{2 \\Delta R^2 + 4 \\Delta G^2 + 3 \\Delta B^2}, & \\bar R < 128, \\\\\n \\sqrt{3 \\Delta R^2 + 4 \\Delta G^2 + 2 \\Delta B^2} & \\text{otherwise},\n\\end{cases}\n"
},
{
"math_id": 3,
"text": "\n\\begin{aligned}\n \\Delta R &= R_1 - R_2, \\\\\n \\Delta G &= G_1 - G_2, \\\\\n \\Delta B &= B_1 - B_2, \\\\\n \\bar R &= \\frac12 (R_1 + R_2).\n\\end{aligned}\n"
},
{
"math_id": 4,
"text": "\n\\begin{aligned}\n \\bar r &= \\frac12 (R_1 + R_2), \\\\\n \\Delta C &= \\sqrt{\\left(2 + \\frac{\\bar r}{256}\\right) \\Delta R^2 + 4 \\Delta G^2 + \\left(2 + \\frac{255 - \\bar r}{256}\\right) \\Delta B^2}.\n\\end{aligned}"
},
{
"math_id": 5,
"text": "\n\\Delta E_\\text{ITP} = 720 \\sqrt{(I_1 - I_2)^2 + (T_1 - T_2)^2 + (P_1 - P_2)^2},\n"
},
{
"math_id": 6,
"text": "\\Delta E_{\\text{HyAB}} = \\sqrt{ (a_2-a_1)^2+(b_2-b_1)^2 } + \\left| L_2-L_1 \\right|"
},
{
"math_id": 7,
"text": "({L^*_1},{a^*_1},{b^*_1})"
},
{
"math_id": 8,
"text": "({L^*_2},{a^*_2},{b^*_2})"
},
{
"math_id": 9,
"text": "\\Delta E_{ab}^* = \\sqrt{ (L^*_2-L^*_1)^2+(a^*_2-a^*_1)^2 + (b^*_2-b^*_1)^2 }."
},
{
"math_id": 10,
"text": "\\Delta E_{ab}^* \\approx 2.3"
},
{
"math_id": 11,
"text": "h_1"
},
{
"math_id": 12,
"text": "(L^*_2,C^*_2,h_2)"
},
{
"math_id": 13,
"text": "(L^*_1,C^*_1,h_1)"
},
{
"math_id": 14,
"text": "\\Delta E^*_{CMC} = \\sqrt{ \\left( \\frac{L^*_2-L^*_1}{l \\times S_L} \\right)^2 + \\left( \\frac{C^*_2-C^*_1}{c \\times S_C} \\right)^2 + \\left( \\frac{\\Delta H^*_{ab}}{S_H} \\right)^2 }"
},
{
"math_id": 15,
"text": "S_L=\\begin{cases} 0.511 & L^*_1 < 16 \\\\ \\frac{0.040975 L^*_1}{1+0.01765 L^*_1} & L^*_1 \\geq 16 \\end{cases} \\quad S_C=\\frac{0.0638 C^*_1}{1+0.0131 C^*_1} + 0.638 \\quad S_H=S_C (FT+1-F)"
},
{
"math_id": 16,
"text": "F = \\sqrt{\\frac{C^{*^4}_1}{C^{*^4}_1+1900}} \\quad T=\\begin{cases} 0.56 + |0.2 \\cos (h_1+168^\\circ)| & 164^\\circ \\leq h_1 \\leq 345^\\circ \\\\ 0.36 + |0.4 \\cos (h_1+35^\\circ) | & \\mbox{otherwise} \\end{cases}"
},
{
"math_id": 17,
"text": "(L^*_1, a^*_1, b^*_1)"
},
{
"math_id": 18,
"text": "(L^*_2, a^*_2, b^*_2)"
},
{
"math_id": 19,
"text": "\n\\Delta E_{94}^* = \\sqrt{\\left(\\frac{\\Delta L^*}{k_L S_L}\\right)^2 + \\left(\\frac{\\Delta C^*}{k_C S_C}\\right)^2 + \\left(\\frac{\\Delta H^*}{k_H S_H}\\right)^2},\n"
},
{
"math_id": 20,
"text": "\n\\begin{aligned}\n \\Delta L^* &= L^*_1 - L^*_2, \\\\\n C^*_1 &= \\sqrt{{a^*_1}^2 + {b^*_1}^2}, \\\\\n C^*_2 &= \\sqrt{{a^*_2}^2 + {b^*_2}^2}, \\\\\n \\Delta C^* &= C^*_1 - C^*_2, \\\\\n \\Delta H^* &= \\sqrt{{\\Delta E^*_{ab}}^2 - {\\Delta L^*}^2 - {\\Delta C^*}^2} = \\sqrt{{\\Delta a^*}^2 + {\\Delta b^*}^2 - {\\Delta C^*}^2}, \\\\\n \\Delta a^* &= a^*_1 - a^*_2, \\\\\n \\Delta b^* &= b^*_1 - b^*_2, \\\\\n S_L &= 1, \\\\\n S_C &= 1 + K_1 C^*_1, \\\\\n S_H &= 1 + K_2 C^*_1, \\\\\n\\end{aligned}\n"
},
{
"math_id": 21,
"text": "\\Delta H^*_{ab}"
},
{
"math_id": 22,
"text": "\\Delta E_{00}^* = \\sqrt{ \\left(\\frac{\\Delta L'}{k_L S_L}\\right)^2 + \\left(\\frac{\\Delta C'}{k_C S_C}\\right)^2 + \\left(\\frac{\\Delta H'}{k_H S_H}\\right)^2 + R_T \\frac{\\Delta C'}{k_C S_C}\\frac{\\Delta H'}{k_H S_H} }"
},
{
"math_id": 23,
"text": "\\Delta L^\\prime = L^*_2 - L^*_1"
},
{
"math_id": 24,
"text": "\n \\bar{L} = \\frac{L^*_1 + L^*_2}{2} \\quad\n \\bar{C} = \\frac{C^*_1 + C^*_2}{2} \\quad\n \\mbox{where }\n C^*_1 = \\sqrt{{a^*_1}^2 + {b^*_1}^2}, \\quad\n C^*_2 = \\sqrt{{a^*_2}^2 + {b^*_2}^2}, \\quad\n"
},
{
"math_id": 25,
"text": "\n a_1^\\prime = a_1^* + \\frac{a_1^*}{2} \\left( 1 - \\sqrt{\\frac{\\bar{C}^7}{\\bar{C}^7 + 25^7}} \\right) \\quad\n a_2^\\prime = a_2^* + \\frac{a_2^*}{2} \\left( 1 - \\sqrt{\\frac{\\bar{C}^7}{\\bar{C}^7 + 25^7}} \\right)\n"
},
{
"math_id": 26,
"text": "\n \\bar{C}^\\prime = \\frac{C_1^\\prime + C_2^\\prime}{2} \\mbox{ and }\n \\Delta{C'}=C'_2-C'_1 \\quad\n \\mbox{where }\n C_1^\\prime = \\sqrt{a_1^{'^2} + b_1^{*^2}} \\quad\n C_2^\\prime = \\sqrt{a_2^{'^2} + b_2^{*^2}} \\quad\n"
},
{
"math_id": 27,
"text": "\n h_1^\\prime=\\text{atan2} (b_1^*, a_1^\\prime) \\mod 360^\\circ, \\quad\n h_2^\\prime=\\text{atan2} (b_2^*, a_2^\\prime) \\mod 360^\\circ\n"
},
{
"math_id": 28,
"text": "\n \\Delta h' = \\begin{cases}\n h_2^\\prime - h_1^\\prime & \\left| h_1^\\prime - h_2^\\prime \\right| \\leq 180^\\circ \\\\\n h_2^\\prime - h_1^\\prime + 360^\\circ & \\left| h_1^\\prime - h_2^\\prime \\right| > 180^\\circ, h_2^\\prime \\leq h_1^\\prime \\\\\n h_2^\\prime - h_1^\\prime - 360^\\circ & \\left| h_1^\\prime - h_2^\\prime \\right| > 180^\\circ, h_2^\\prime > h_1^\\prime\n \\end{cases}\n"
},
{
"math_id": 29,
"text": "\n \\Delta H^\\prime = 2 \\sqrt{C_1^\\prime C_2^\\prime} \\sin (\\Delta h^\\prime/2), \\quad \\bar{H}^\\prime=\\begin{cases}\n (h_1^\\prime + h_2^\\prime)/2 & \\left| h_1^\\prime - h_2^\\prime \\right| \\leq 180^\\circ \\\\\n (h_1^\\prime + h_2^\\prime + 360^\\circ)/2 & \\left| h_1^\\prime - h_2^\\prime \\right| > 180^\\circ, h_1^\\prime + h_2^\\prime < 360^\\circ \\\\\n (h_1^\\prime + h_2^\\prime - 360^\\circ)/2 & \\left| h_1^\\prime - h_2^\\prime \\right| > 180^\\circ, h_1^\\prime + h_2^\\prime \\geq 360^\\circ \n \\end{cases}\n"
},
{
"math_id": 30,
"text": "\n T = 1 - 0.17 \\cos ( \\bar{H}^\\prime - 30^\\circ )\n + 0.24 \\cos (2\\bar{H}^\\prime)\n + 0.32 \\cos (3\\bar{H}^\\prime + 6^\\circ )\n - 0.20 \\cos (4\\bar{H}^\\prime - 63^\\circ)\n"
},
{
"math_id": 31,
"text": "\n S_L = 1 + \\frac{0.015 \\left( \\bar{L} - 50 \\right)^2}{\\sqrt{20 + {\\left(\\bar{L} - 50 \\right)}^2} } \\quad\n S_C = 1+0.045 \\bar{C}^\\prime \\quad\n S_H = 1+0.015 \\bar{C}^\\prime T\n"
},
{
"math_id": 32,
"text": "R_T = -2 \\sqrt{\\frac{\\bar{C}'^7}{\\bar{C}'^7+25^7}} \\sin \\left[ 60^\\circ \\cdot \\exp \\left( -\\left[ \\frac{\\bar{H}'-275^\\circ}{25^\\circ} \\right]^2 \\right) \\right]"
},
{
"math_id": 33,
"text": " \\Delta H^\\prime"
},
{
"math_id": 34,
"text": "\\Delta h'"
}
] |
https://en.wikipedia.org/wiki?curid=15292063
|
15293557
|
Atwood number
|
The Atwood number (A) is a dimensionless number in fluid dynamics used in the study of hydrodynamic instabilities in density stratified flows.
It is a dimensionless density ratio defined as
formula_0
where
formula_1 = density of heavier fluid
formula_2 = density of lighter fluid
Field of application.
The Atwood number is an important parameter in the study of Rayleigh–Taylor instability and Richtmyer–Meshkov instability. In Rayleigh–Taylor instability,
the penetration distance of heavy-fluid bubbles into the light fluid is a function of the acceleration time scale, formula_3, where "g" is the gravitational acceleration and "t" is the time.
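The definition above translates directly into code; this is a minimal sketch (the function names and density values are illustrative, not from the article):

```python
def atwood_number(rho_heavy, rho_light):
    """Dimensionless density contrast A = (rho1 - rho2) / (rho1 + rho2),
    with rho1 the density of the heavier fluid."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def rt_penetration_scale(rho_heavy, rho_light, g, t):
    """Acceleration time scale A * g * t**2 from the Rayleigh-Taylor discussion."""
    return atwood_number(rho_heavy, rho_light) * g * t ** 2
```

For equal densities the Atwood number is zero (no buoyancy-driven instability), and it approaches 1 as the density contrast becomes extreme.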
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathrm{A} = \\frac{\\rho_1 - \\rho_2} {\\rho_1 + \\rho_2} "
},
{
"math_id": 1,
"text": "\\rho_1"
},
{
"math_id": 2,
"text": "\\rho_2"
},
{
"math_id": 3,
"text": "\\mathrm{A} g t^2"
}
] |
https://en.wikipedia.org/wiki?curid=15293557
|
1529485
|
Neighbourhood (mathematics)
|
Open set containing a given point
In topology and related areas of mathematics, a neighbourhood (or neighborhood) is one of the basic concepts in a topological space. It is closely related to the concepts of open set and interior. Intuitively speaking, a neighbourhood of a point is a set of points containing that point where one can move some amount in any direction away from that point without leaving the set.
Definitions.
Neighbourhood of a point.
If formula_3 is a topological space and formula_1 is a point in formula_4 then a neighbourhood of formula_1 is a subset formula_0 of formula_3 that includes an open set formula_5 containing formula_1,
formula_6
This is equivalent to the point formula_7 belonging to the topological interior of formula_0 in formula_8
The neighbourhood formula_0 need not be an open subset of formula_8 When formula_0 is open (resp. closed, compact, etc.) in formula_4 it is called an <templatestyles src="Template:Visible anchor/styles.css" />open neighbourhood (resp. closed neighbourhood, compact neighbourhood, etc.). Some authors require neighbourhoods to be open, so it is important to note their conventions.
A set that is a neighbourhood of each of its points is open since it can be expressed as the union of open sets containing each of its points. A closed rectangle, as illustrated in the figure, is not a neighbourhood of all its points; points on the edges or corners of the rectangle are not contained in any open set that is contained within the rectangle.
The collection of all neighbourhoods of a point is called the neighbourhood system at the point.
Neighbourhood of a set.
If formula_10 is a subset of a topological space formula_3, then a neighbourhood of formula_10 is a set formula_0 that includes an open set formula_5 containing formula_10,formula_11It follows that a set formula_0 is a neighbourhood of formula_10 if and only if it is a neighbourhood of all the points in formula_12 Furthermore, formula_0 is a neighbourhood of formula_10 if and only if formula_10 is a subset of the interior of formula_2
A neighbourhood of formula_10 that is also an open subset of formula_3 is called an <templatestyles src="Template:Visible anchor/styles.css" />open neighbourhood of formula_12
The neighbourhood of a point is just a special case of this definition.
In a metric space.
In a metric space formula_13 a set formula_0 is a neighbourhood of a point formula_1 if there exists an open ball with center formula_1 and radius formula_14 such that
formula_15
is contained in formula_2
formula_0 is called a uniform neighbourhood of a set formula_10 if there exists a positive number formula_16 such that for all elements formula_1 of formula_17
formula_18
is contained in formula_2
Under the same condition, for formula_19 the formula_16-neighbourhood formula_20 of a set formula_10 is the set of all points in formula_3 that are at distance less than formula_16 from formula_10 (or equivalently, formula_20 is the union of all the open balls of radius formula_16 that are centered at a point in formula_10): formula_21
It directly follows that an formula_16-neighbourhood is a uniform neighbourhood, and that a set is a uniform neighbourhood if and only if it contains an formula_16-neighbourhood for some value of formula_22
Examples.
Given the set of real numbers formula_23 with the usual Euclidean metric and a subset formula_0 defined as
formula_24
then formula_0 is a neighbourhood for the set formula_25 of natural numbers, but is not a uniform neighbourhood of this set.
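This example can be checked numerically; the sketch below hand-writes the membership test, using the fact that a point can lie in a ball B("n", 1/"n") only if the center "n" is one of the two integers nearest it (every radius 1/"n" is at most 1):

```python
import math

def in_V(x):
    """Membership in V = union over natural n of the open balls B(n, 1/n).

    Since each radius 1/n <= 1, a candidate center n must be within
    distance 1 of x, i.e. one of the two nearest integers.
    """
    for n in (math.floor(x), math.ceil(x)):
        if n >= 1 and abs(x - n) < 1.0 / n:
            return True
    return False

# V is a neighbourhood of every natural number n (it contains B(n, 1/n)),
# but not a uniform one: for any fixed r > 0 the point n + r/2 falls
# outside V once 1/n < r/2.
```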
Topology from neighbourhoods.
The above definition is useful if the notion of open set is already defined. There is an alternative way to define a topology, by first defining the neighbourhood system, and then open sets as those sets containing a neighbourhood of each of their points.
A neighbourhood system on formula_3 is the assignment of a filter formula_26 of subsets of formula_3 to each formula_9 in formula_4 such that the point formula_9 is an element of each formula_0 in formula_26, and each formula_0 in formula_26 contains some formula_5 in formula_26 such that for each formula_27 in formula_5, formula_0 is an element of formula_29
One can show that both definitions are compatible, that is, the topology obtained from the neighbourhood system defined using open sets is the original one, and vice versa when starting out from a neighbourhood system.
Uniform neighbourhoods.
In a uniform space formula_30 formula_0 is called a uniform neighbourhood of formula_31 if there exists an entourage formula_32 such that formula_0 contains all points of formula_3 that are formula_5-close to some point of formula_33 that is, formula_34 for all formula_35
Deleted neighbourhood.
A deleted neighbourhood of a point formula_1 (sometimes called a punctured neighbourhood) is a neighbourhood of formula_36 without formula_37 For instance, the interval formula_38 is a neighbourhood of formula_39 in the real line, so the set formula_40 is a deleted neighbourhood of formula_41 A deleted neighbourhood of a given point is not in fact a neighbourhood of the point. The concept of deleted neighbourhood occurs in the definition of the limit of a function and in the definition of limit points (among other things).
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "V."
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "X,"
},
{
"math_id": 5,
"text": "U"
},
{
"math_id": 6,
"text": "p \\in U \\subseteq V \\subseteq X."
},
{
"math_id": 7,
"text": "p \\in X"
},
{
"math_id": 8,
"text": "X."
},
{
"math_id": 9,
"text": "x"
},
{
"math_id": 10,
"text": "S"
},
{
"math_id": 11,
"text": "S \\subseteq U \\subseteq V \\subseteq X."
},
{
"math_id": 12,
"text": "S."
},
{
"math_id": 13,
"text": "M = (X, d),"
},
{
"math_id": 14,
"text": "r>0,"
},
{
"math_id": 15,
"text": "B_r(p) = B(p; r) = \\{ x \\in X : d(x, p) < r \\}"
},
{
"math_id": 16,
"text": "r"
},
{
"math_id": 17,
"text": "S,"
},
{
"math_id": 18,
"text": "B_r(p) = \\{ x \\in X : d(x, p) < r \\}"
},
{
"math_id": 19,
"text": "r > 0,"
},
{
"math_id": 20,
"text": "S_r"
},
{
"math_id": 21,
"text": "S_r = \\bigcup\\limits_{p\\in{}S} B_r(p)."
},
{
"math_id": 22,
"text": "r."
},
{
"math_id": 23,
"text": "\\R"
},
{
"math_id": 24,
"text": "V := \\bigcup_{n \\in \\N} B\\left(n\\,;\\,1/n \\right),"
},
{
"math_id": 25,
"text": "\\N"
},
{
"math_id": 26,
"text": "N(x)"
},
{
"math_id": 27,
"text": "y"
},
{
"math_id": 28,
"text": "V,"
},
{
"math_id": 29,
"text": "N(y)."
},
{
"math_id": 30,
"text": "S = (X, \\Phi),"
},
{
"math_id": 31,
"text": "P"
},
{
"math_id": 32,
"text": "U \\in \\Phi"
},
{
"math_id": 33,
"text": "P;"
},
{
"math_id": 34,
"text": "U[x] \\subseteq V"
},
{
"math_id": 35,
"text": "x \\in P."
},
{
"math_id": 36,
"text": "p,"
},
{
"math_id": 37,
"text": "\\{p\\}."
},
{
"math_id": 38,
"text": "(-1, 1) = \\{y : -1 < y < 1\\}"
},
{
"math_id": 39,
"text": "p = 0"
},
{
"math_id": 40,
"text": "(-1, 0) \\cup (0, 1) = (-1, 1) \\setminus \\{0\\}"
},
{
"math_id": 41,
"text": "0."
}
] |
https://en.wikipedia.org/wiki?curid=1529485
|
152969
|
Eutectic system
|
Mixture with a lower melting point than its constituents
A eutectic system or eutectic mixture is a homogeneous mixture that has a melting point lower than those of the constituents. The lowest possible melting point over all of the mixing ratios of the constituents is called the "eutectic temperature". On a phase diagram, the eutectic temperature is seen as the eutectic point (see plot on the right).
Non-eutectic mixture ratios have different melting temperatures for their different constituents, since one component's lattice will melt at a lower temperature than the other's. Conversely, as a non-eutectic mixture cools down, each of its components solidifies into a lattice at a different temperature, until the entire mass is solid.
Not all binary alloys have eutectic points, since the valence electrons of the component species are not always compatible in any mixing ratio to form a new type of joint crystal lattice. For example, in the silver-gold system the melt temperature (liquidus) and freeze temperature (solidus) "meet at the pure element endpoints of the atomic ratio axis while slightly separating in the mixture region of this axis".
In the real world, eutectic properties can be used to advantage in such processes as eutectic bonding, where silicon chips are bonded to gold-plated substrates with ultrasound, and eutectic alloys prove valuable in such diverse applications as soldering, brazing, metal casting, electrical protection, fire sprinkler systems, and nontoxic mercury substitutes. By managing phase transformation during solidification, a suitable eutectic alloy can be made stronger than any of its individual components, a valuable property in an extreme application such as the hypereutectic cast aluminum pistons used in the high-revving twin-turbo intercooled DOHC Cadillac Blackwing V8 introduced in 2018.
The term eutectic was coined in 1884 by British physicist and chemist Frederick Guthrie (1833–1886). The word originates from Greek "εὐ-" (eû) 'well' and "τῆξῐς" (têxis) 'melting'. Before his studies, chemists assumed "that the alloy of minimum fusing point must have its constituents in some simple atomic proportions", which was indeed proven not to be the case.
Eutectic phase transition.
The eutectic solidification is defined as follows:
formula_0
This type of reaction is an invariant reaction, because it is in thermal equilibrium; another way to define this is the change in Gibbs free energy equals zero. Tangibly, this means the liquid and two solid solutions all coexist at the same time and are in chemical equilibrium. There is also a thermal arrest for the duration of the phase change during which the temperature of the system does not change.
The resulting solid macrostructure from a eutectic reaction depends on a few factors, with the most important factor being how the two solid solutions nucleate and grow. The most common structure is a lamellar structure, but other possible structures include rodlike, globular, and acicular.
Non-eutectic compositions.
Compositions of eutectic systems that are not at the eutectic point can be classified as "hypoeutectic" (solute content below the eutectic composition) or "hypereutectic" (solute content above it).
As the temperature of a non-eutectic composition is lowered the liquid mixture will precipitate one component of the mixture before the other. In a hypereutectic solution, there will be a "proeutectoid" phase of species β whereas a hypoeutectic solution will have a "proeutectic" α phase.
Types.
Alloys.
Eutectic alloys have two or more materials and have a eutectic composition. When a non-eutectic alloy solidifies, its components solidify at different temperatures, exhibiting a plastic melting range. Conversely, when a well-mixed, eutectic alloy melts, it does so at a single, sharp temperature. The various phase transformations that occur during the solidification of a particular alloy composition can be understood by drawing a vertical line from the liquid phase to the solid phase on the phase diagram for that alloy.
Some uses for eutectic alloys include soldering, brazing, metal casting, electrical protection (fuses), fire sprinkler systems, and nontoxic mercury substitutes.
Strengthening mechanisms.
Alloys.
The primary strengthening mechanism of the eutectic structure in metals is composite strengthening (See strengthening mechanisms of materials). This deformation mechanism works through load transfer between the two constituent phases where the more compliant phase transfers stress to the stiffer phase. By taking advantage of the strength of the stiff phase and the ductility of the compliant phase, the overall toughness of the material increases. As the composition is varied to either hypoeutectic or hypereutectic formations, the load transfer mechanism becomes more complex as there is a load transfer between the eutectic phase and the secondary phase as well as the load transfer within the eutectic phase itself.
A second tunable strengthening mechanism of eutectic structures is the spacing of the secondary phase. By changing the spacing of the secondary phase, the fraction of contact between the two phases through shared phase boundaries is also changed. By decreasing the spacing of the eutectic phase, creating a fine eutectic structure, more surface area is shared between the two constituent phases resulting in more effective load transfer. On the micro-scale, the additional boundary area acts as a barrier to dislocations further strengthening the material. As a result of this strengthening mechanism, coarse eutectic structures tend to be less stiff but more ductile while fine eutectic structures are stiffer but more brittle. The spacing of the eutectic phase can be controlled during processing as it is directly related to the cooling rate during solidification of the eutectic structure. For example, for a simple lamellar eutectic structure, the minimal lamellae spacing is:
formula_1
where formula_2 is the surface energy of the two-phase boundary, formula_3 is the molar volume of the eutectic phase, formula_4 is the solidification temperature of the eutectic phase, formula_5 is the enthalpy of formation of the eutectic phase, and formula_6 is the undercooling of the material. So, by altering the undercooling, and by extension the cooling rate, the minimal achievable spacing of the secondary phase is controlled.
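The spacing relation is a one-line computation; the sketch below uses illustrative parameter values, not measured material data:

```python
def min_lamellar_spacing(gamma, v_m, t_e, delta_h, delta_t):
    """Minimal lamellar spacing: lambda* = 2 * gamma * V_m * T_E / (dH * dT0).

    gamma: two-phase boundary surface energy, v_m: molar volume,
    t_e: eutectic solidification temperature, delta_h: enthalpy of
    formation, delta_t: undercooling.
    """
    return 2.0 * gamma * v_m * t_e / (delta_h * delta_t)

# Doubling the undercooling halves the achievable spacing, i.e. faster
# cooling produces a finer eutectic structure.
```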
Strengthening metallic eutectic phases to resist deformation at high temperatures (see creep deformation) is more convoluted as the primary deformation mechanism changes depending on the level of stress applied. At high temperatures where deformation is dominated by dislocation movement, the strengthening from load transfer and secondary phase spacing remain as they continue to resist dislocation motion. At lower strains where Nabarro-Herring creep is dominant, the shape and size of the eutectic phase structure plays a significant role in material deformation as it affects the available boundary area for vacancy diffusion to occur.
Other critical points.
Eutectoid.
When the solution above the transformation point is solid, rather than liquid, an analogous eutectoid transformation can occur. For instance, in the iron-carbon system, the austenite phase can undergo a eutectoid transformation to produce ferrite and cementite, often in lamellar structures such as pearlite and bainite. This eutectoid point occurs at 727 °C (1,341 °F) and 0.76 wt% carbon.
Peritectoid.
A "peritectoid" transformation is a type of isothermal reversible reaction that has two solid phases reacting with each other upon cooling of a binary, ternary, ..., "n"-ary alloy to create a completely different and single solid phase. The reaction plays a key role in the order and decomposition of quasicrystalline phases in several alloy types. A similar structural transition is also predicted for rotating columnar crystals.
Peritectic.
Peritectic transformations are also similar to eutectic reactions. Here, a liquid and solid phase of fixed proportions react at a fixed temperature to yield a single solid phase. Since the solid product forms at the interface between the two reactants, it can form a diffusion barrier and generally causes such reactions to proceed much more slowly than eutectic or eutectoid transformations. Because of this, when a peritectic composition solidifies it does not show the lamellar structure that is found with eutectic solidification.
Such a transformation exists in the iron-carbon system, as seen near the upper-left corner of the figure. It resembles an inverted eutectic, with the δ phase combining with the liquid to produce pure austenite at 1,495 °C (2,723 °F) and 0.17% carbon.
At the peritectic decomposition temperature the compound, rather than melting, decomposes into another solid compound and a liquid. The proportion of each is determined by the lever rule. In the Al-Au phase diagram, for example, it can be seen that only two of the phases melt congruently, AuAl2 and Au2Al, while the rest peritectically decompose.
Eutectic calculation.
The composition and temperature of a eutectic can be calculated from the enthalpy and entropy of fusion of each component.
Starting from the definition of the Gibbs free energy "G", the enthalpy can be expressed in terms of "G" and its temperature derivative:
formula_7
Thus, the "G"/"T" derivative at constant pressure is calculated by the following equation:
formula_8
The chemical potential formula_9 is calculated if we assume that the activity is equal to the concentration:
formula_10
At the equilibrium, formula_11, thus formula_12 is obtained as
formula_13
Combining the two relations above and integrating gives
formula_14
The integration constant "K" may be determined for a pure component with a melting temperature formula_15 and an enthalpy of fusion formula_16:
formula_17
We obtain a relation that determines the molar fraction as a function of the temperature for each component:
formula_18
The mixture of "n" components is described by the system
formula_19
formula_20
which can be solved by
formula_21
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\text{Liquid} \\quad \\xrightarrow[\\text{cooling}]{\\text{eutectic} \\atop \\text{temperature}} \\quad \\alpha \\text{ solid solution} \\ + \\ \\beta \\text{ solid solution}"
},
{
"math_id": 1,
"text": "\\lambda^*=\\frac{2\\gamma V_m T_E }{\\Delta H * \\Delta T_0}"
},
{
"math_id": 2,
"text": "\\gamma"
},
{
"math_id": 3,
"text": "V_m"
},
{
"math_id": 4,
"text": "T_E"
},
{
"math_id": 5,
"text": "\\Delta H"
},
{
"math_id": 6,
"text": "\\Delta T_0"
},
{
"math_id": 7,
"text": "\nG = H - TS\n\\Rightarrow \n\\begin{cases}\n H = G + TS \\\\\n \\left(\\frac{\\partial G}{\\partial T}\\right)_P = -S\n\\end{cases}\n\\Rightarrow\nH = G - T \\left(\\frac{\\partial G}{\\partial T}\\right)_P.\n"
},
{
"math_id": 8,
"text": "\n \\left(\\frac{\\partial G / T}{\\partial T}\\right)_P\n =\n \\frac{1}{T} \\left(\\frac{\\partial G}{\\partial T}\\right)_P - \\frac{1}{T^2}G\n =\n -\\frac{1}{T^2} \\left(G - T\\left(\\frac{\\partial G}{\\partial T}\\right)_P\\right)\n = -\\frac{H}{T^2}.\n"
},
{
"math_id": 9,
"text": "\\mu_i"
},
{
"math_id": 10,
"text": "\n\\mu_i = \\mu_i^\\circ + RT\\ln \\frac{a_i}{a} \\approx \\mu_i^\\circ + RT\\ln x_i.\n"
},
{
"math_id": 11,
"text": "\\mu_i = 0"
},
{
"math_id": 12,
"text": "\\mu_i^\\circ"
},
{
"math_id": 13,
"text": "\n\\mu _i = \\mu _i^\\circ + RT\\ln x_i = 0 \\Rightarrow \\mu_i^\\circ = -RT\\ln x_i.\n"
},
{
"math_id": 14,
"text": "\n \\left(\\frac{\\partial \\mu_i / T}{\\partial T}\\right)_P =\n \\frac{\\partial}{\\partial T}\\left(R\\ln x_i\\right) \\Rightarrow\n R\\ln x_i =\n -\\frac{H_i^\\circ}{T} + K.\n"
},
{
"math_id": 15,
"text": "T^\\circ"
},
{
"math_id": 16,
"text": "H^\\circ"
},
{
"math_id": 17,
"text": "\nx_i = 1 \\Rightarrow T = T_i^\\circ \\Rightarrow K = \\frac{H_i^\\circ}{T_i^\\circ}.\n"
},
{
"math_id": 18,
"text": "\nR\\ln x_i = -\\frac{H_i^\\circ}{T} + \\frac{H_i^\\circ}{T_i^\\circ}.\n"
},
{
"math_id": 19,
"text": "\n\\begin{cases}\n \\ln x_i + \\frac{H_i^\\circ}{RT} - \\frac{H_i^\\circ}{RT_i^\\circ } = 0, \\\\\n \\sum\\limits_{i = 1}^n x_i = 1.\n\\end{cases}\n"
},
{
"math_id": 20,
"text": "\n\\begin{cases}\n \\forall i < n \\Rightarrow \\ln x_i + \\frac{H_i^\\circ}{RT} - \\frac{H_i^\\circ}{RT_i^\\circ} = 0, \\\\\n \\ln \\left(1 - \\sum\\limits_{i = 1}^{n - 1} x_i\\right) + \\frac{H_n^\\circ}{RT} - \\frac{H_n^\\circ}{RT_n^\\circ} = 0, \n\\end{cases}\n"
},
{
"math_id": 21,
"text": "\n\\begin{array}{c}\n\\left[ {{\\begin{array}{*{20}c}\n {\\Delta x_1 } \\\\\n {\\Delta x_2 } \\\\\n {\\Delta x_3 } \\\\\n \\vdots \\\\\n {\\Delta x_{n - 1} } \\\\\n {\\Delta T} \\\\\n\\end{array} }} \\right] = \\left[ {{\\begin{array}{*{20}c}\n {1 / x_1 } & 0 & 0 & 0 & 0 & { - \\frac{H_1^\\circ }{RT^{2}}} \\\\\n 0 & {1 / x_2 } & 0 & 0 & 0 & { - \\frac{H_2^\\circ }{RT^{2}}} \\\\\n 0 & 0 & {1 / x_3 } & 0 & 0 & { - \\frac{H_3^\\circ }{RT^{2}}} \\\\\n \\vdots & \\ddots & \\ddots & \\ddots & \\ddots & { \\vdots} \\\\\n 0 & 0 & 0 & 0 & {1 / x_{n - 1} } & { - \\frac{H_{n - 1}^\\circ }{RT^{2}}}\n\\\\\n {\\frac{ - 1}{1 - \\sum\\limits_{i = 1}^{n - 1} {x_i } }} & {\\frac{ - 1}{1 -\n\\sum\\limits_{i = 1}^{n - 1} {x_i } }} & {\\frac{ - 1}{1 -\n\\sum\\limits_{i = 1}^{n - 1} {x_i } }} & {\\frac{ - 1}{1 -\n\\sum\\limits_{i = 1}^{n - 1} {x_i } }} & {\\frac{ - 1}{1 -\n\\sum\\limits_{i = 1}^{n - 1} {x_i } }} & { -\n\\frac{H_n^\\circ }{RT^{2}}} \\\\\n\\end{array} }} \\right]^{ - 1}\n\n.\\left[ {{\\begin{array}{*{20}c}\n {\\ln x_1 + \\frac{H_1 ^\\circ }{RT} - \\frac{H_1^\\circ }{RT_1^\\circ }}\n\\\\\n {\\ln x_2 + \\frac{H_2 ^\\circ }{RT} - \\frac{H_2^\\circ }{RT_2^\\circ }}\n\\\\\n {\\ln x_3 + \\frac{H_3 ^\\circ }{RT} - \\frac{H_3^\\circ }{RT_3^\\circ }}\n\\\\\n \\vdots \\\\\n {\\ln x_{n - 1} + \\frac{H_{n - 1} ^\\circ }{RT} - \\frac{H_{n - 1}^\\circ\n}{RT_{n - 1}^\\circ }} \\\\\n {\\ln \\left({1 - \\sum\\limits_{i = 1}^{n - 1} {x_i } } \\right) + \\frac{H_n\n^\\circ }{RT} - \\frac{H_n^\\circ }{RT_n^\\circ }} \\\\\n\\end{array} }} \\right]\n \\end{array}\n"
}
] |
https://en.wikipedia.org/wiki?curid=152969
|
152983
|
Pedal triangle
|
Triangle found by projecting a point onto the sides of another triangle
In plane geometry, a pedal triangle is obtained by projecting a point onto the sides of a triangle.
More specifically, consider a triangle △"ABC", and a point P that is not one of the vertices A, B, C. Drop perpendiculars from P to the three sides of the triangle (these may need to be produced, i.e., extended). Label L, M, N the intersections of the lines from P with the sides BC, AC, AB. The pedal triangle is then △"LMN".
If △"ABC" is not an obtuse triangle and P is the orthocenter, then the angles of △"LMN" are 180° − 2"A", and 180° − 2"C".
The location of the chosen point P relative to the chosen triangle △"ABC" gives rise to some special cases: if P is the orthocenter, the pedal triangle is the orthic triangle; if P is the circumcenter, it is the medial triangle; and if P lies on the circumcircle, the pedal triangle degenerates to a line, the Simson line.
The vertices of the pedal triangle of an interior point P, as shown in the top diagram, divide the sides of the original triangle in such a way as to satisfy Carnot's theorem:
formula_0
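Carnot's relation is easy to verify numerically. The sketch below computes the feet of the perpendiculars for an arbitrary triangle and interior point (the coordinates are illustrative):

```python
def foot(p, a, b):
    """Foot of the perpendicular from point p onto the line through a and b."""
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    t = ((p[0] - ax) * dx + (p[1] - ay) * dy) / (dx * dx + dy * dy)
    return (ax + t * dx, ay + t * dy)

def d2(u, v):
    """Squared Euclidean distance between two points."""
    return (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
P = (1.5, 1.0)  # an interior point

# Pedal triangle vertices: feet of the perpendiculars on BC, AC, AB.
L, M, N = foot(P, B, C), foot(P, A, C), foot(P, A, B)

# Carnot's theorem: |AN|^2 + |BL|^2 + |CM|^2 = |NB|^2 + |LC|^2 + |MA|^2.
lhs = d2(A, N) + d2(B, L) + d2(C, M)
rhs = d2(N, B) + d2(L, C) + d2(M, A)
```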
Trilinear coordinates.
If P has trilinear coordinates "p" : "q" : "r", then the vertices L, M, N of the pedal triangle of P are given by
formula_1
Antipedal triangle.
One vertex, L', of the antipedal triangle of P is the point of intersection of the perpendicular to BP through B and the perpendicular to CP through C. Its other vertices, M' and N', are constructed analogously. Trilinear coordinates are given by
formula_2
For example, the excentral triangle is the antipedal triangle of the incenter.
Suppose that P does not lie on any of the extended sides BC, CA, AB, and let "P" −1 denote the isogonal conjugate of P. The pedal triangle of P is homothetic to the antipedal triangle of "P" −1. The homothetic center (which is a triangle center if and only if P is a triangle center) is the point given in trilinear coordinates by
formula_3
The product of the areas of the pedal triangle of P and the antipedal triangle of "P" −1 equals the square of the area of △"ABC".
Pedal circle.
The pedal circle is defined as the circumcircle of the pedal triangle. Note that the pedal circle is not defined for points lying on the circumcircle of the triangle.
Pedal circle of isogonal conjugates.
For any point P not lying on the circumcircle of the triangle, it is known that P and its isogonal conjugate P* have a common pedal circle, whose center is the midpoint of these two points.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "|AN|^2 + |BL|^2 + |CM|^2 = |NB|^2 + |LC|^2 + |MA|^2."
},
{
"math_id": 1,
"text": "\\begin{array}{ccccccc}\n L &=& 0 &:& q+p\\cos C &:& r+p\\cos B \\\\[2pt]\n M &=& p+q\\cos C &:& 0 &:& r+q\\cos A \\\\[2pt]\n N &=& p+r\\cos B &:& q+r\\cos A &:& 0\n\\end{array}"
},
{
"math_id": 2,
"text": "\\begin{array}{ccrcrcr}\n L' &=& -(q+p\\cos C)(r+p\\cos B) &:& (r+p\\cos B)(p+q\\cos C) &:& (q+p\\cos C)(p+r\\cos B) \\\\[2pt]\n M' &=& (r+q\\cos A)(q+p\\cos C) &:& -(r+q\\cos A)(p+q\\cos C) &:& (p+q\\cos C)(q+r\\cos A) \\\\[2pt]\n N' &=& (q+r\\cos A)(r+p\\cos B) &:& (p+r\\cos B)(r+q\\cos A) &:& -(p+r\\cos B)(q+r\\cos A)\n\\end{array}"
},
{
"math_id": 3,
"text": "ap(p+q\\cos C)(p+r\\cos B) \\ :\\ bq(q+r\\cos A)(q+p\\cos C) \\ :\\ cr(r+p\\cos B)(r+q\\cos A)"
}
] |
https://en.wikipedia.org/wiki?curid=152983
|
152984
|
Robert Simson
|
Scottish mathematician (1687–1768)
Robert Simson (14 October 1687 – 1 October 1768) was a Scottish mathematician and professor of mathematics at the University of Glasgow. The Simson line is named after him.
Biography.
Robert Simson was born on 14 October 1687, probably the eldest of the seventeen children, all male, of John Simson, a Glasgow merchant, and Agnes, daughter of Patrick Simpson, minister of Renfrew; only six of them reached adulthood.
Simson matriculated at the University of Glasgow in 1701, intending to enter the Church. He followed the course in the faculty of arts (Latin, Greek, logic, natural philosophy) and then concentrated on studying theology and Semitic languages. Mathematics was not taught at the university, but by reading Sinclair's "Tyrocinia Mathematica in Novem Tractatus" and then Euclid's "Elements" Simson soon became deeply interested in mathematics and especially geometry. His efforts impressed the university Senate to such an extent that they offered him the chair of mathematics, to replace the recently-dismissed Sinclair. As he had had no formal training in the subject, Simson turned down the offer but agreed to take up the post a year later, during which time he would increase his knowledge of mathematics.
After a failed attempt to go to Oxford, Simson spent his year in London at Christ's Hospital. During this time he made valuable contacts with several prominent mathematicians, including John Caswell, James Jurin (secretary of the Royal Society), Humphrey Ditton and, most importantly, Edmond Halley.
Simson was admitted professor of mathematics at Glasgow, aged 23, on 20 November 1711, where his first task was to design a two-year course in mathematics, some of which he taught himself; his lectures included geometry, of course, and algebra, logarithms and optics. Among his students were Maclaurin, Matthew Stewart, and William Trail. He resigned the post in 1761, and was succeeded by another of his pupils Rev Prof James Williamson FRSE (1725-1795).
During his time at Glasgow Simson noted in 1753 that, as the Fibonacci numbers increased in magnitude, the ratio between adjacent numbers approached the golden ratio, whose value is
<templatestyles src="Block indent/styles.css"/>formula_0...
As for the man himself, “Simson appears to have been tall and of good stature. In spite of his great scholarship he was a modest, unassuming man who was very cautious in promoting his own work. He enjoyed good company and presided over the weekly meetings of a dining club that he had instituted … He had a special interest in botany, in which he was an acknowledged expert”.
Robert Simson did not marry. He died, aged 80, in his college residence at Glasgow on 1 October 1768, and was interred in the Blackfriars Burying Ground (now known as Ramshorn Cemetery), where, in the south wall, is placed to his memory a plain marble tablet, with a highly and justly complimentary inscription. Simson's library, including some of his own works, was bequeathed to the university on his death. It consists of about 850 printed books, mainly early mathematical and astronomical texts.
Subscriptions towards the erection of a monument to Dr Simson were collected in 1865, with the Senate of the College of Glasgow, the (thirteenth) Earl of Eglinton and Winton, and the Earl Stanhope each donating £10; and John Carrick Moore – the first cousin twice removed of Robert Simson – giving £15. The memorial, designed by Frederick Thomas Pilkington, is “a large octagonal monument with carved Egyptian details, topped with a ball finial”. It is situated on a hilltop in West Kilbride cemetery.
Works.
Simson's contributions to mathematical knowledge took the form of critical editions and commentaries on the works of the ancient geometers. The first of his published writings is a paper in the "Philosophical Transactions" (1723, vol. xl. p. 330) on Euclid's "Porisms".
Then followed "Sectionum conicarum libri V." (Edinburgh, 1735), a second edition of which, with additions, appeared in 1750. The first three books of this treatise were translated into English and, several times, printed as "The Elements of the Conic Sections". In 1749, was published "Apollonii Pergaei locorum planorum libri II.", a restoration of Apollonius's lost treatise, founded on the lemmas given in the seventh book of Pappus's "Mathematical Collection".
In 1756, appeared, both in Latin and in English, the first edition of his "Euclid's Elements". This work, which contained only the first six and the eleventh and twelfth books, and to which, in its English version, he added the "Data" in 1762, was for long the standard text of Euclid in England.
After Simson's death, restorations of Apollonius's treatise "De sectione determinata" and of Euclid's treatise "De Porismatibus" were printed for private circulation in 1776, at the expense of Earl Stanhope, in a volume with the title "Roberti Simson opera quaedam reliqua". The volume contains also dissertations on "Logarithms" and on the "Limits of Quantities and Ratios", and a few problems illustrating the ancient geometrical analysis.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\varphi = \\frac{1+\\sqrt5}{2} = "
}
] |
https://en.wikipedia.org/wiki?curid=152984
|
15300799
|
Geary's C
|
Geary's "C" is a measure of spatial autocorrelation that attempts to determine if observations of the same variable are spatially autocorrelated globally (rather than at the neighborhood level). Spatial autocorrelation is more complex than autocorrelation because the correlation is multi-dimensional and bi-directional.
Global Geary's C.
Geary's "C" is defined as
formula_0
where formula_1 is the number of spatial units indexed by formula_2 and formula_3; formula_4 is the variable of interest; formula_5 is the mean of formula_4; formula_6 is the formula_7 row of the spatial weights matrix formula_8 with zeroes on the diagonal (i.e., formula_9); and formula_10 is the sum of all weights in formula_8.
The value of Geary's "C" lies between 0 and some unspecified value greater than 1. Values significantly lower than 1 demonstrate increasing positive spatial autocorrelation, whilst values significantly higher than 1 illustrate increasing negative spatial autocorrelation.
Geary's "C" is inversely related to Moran's "I", but it is not identical. While Moran's "I" and Geary's "C" are both measures of global spatial autocorrelation, they are slightly different. Geary's "C" uses the sum of squared distances whereas Moran's "I" uses standardized spatial covariance. By using squared distances Geary's "C" is less sensitive to linear associations and may pickup autocorrelation where Moran's "I" may not.
Geary's "C" is also known as Geary's contiguity ratio or simply Geary's ratio.
This statistic was developed by Roy C. Geary.
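To make the definition concrete, the formula above can be evaluated directly. The following pure-Python sketch uses a made-up four-unit rook-contiguity weight matrix, so the numbers are illustrative only:

```python
def gearys_c(x, w):
    """Global Geary's C: x is a list of n values, w an n x n spatial
    weights matrix with zeroes on the diagonal."""
    n = len(x)
    xbar = sum(x) / n
    s0 = sum(sum(row) for row in w)              # sum of all weights
    num = (n - 1) * sum(w[i][j] * (x[i] - x[j]) ** 2
                        for i in range(n) for j in range(n))
    den = 2 * s0 * sum((xi - xbar) ** 2 for xi in x)
    return num / den

# Four areas in a row, each weighted 1 with its immediate neighbours.
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(gearys_c([1, 2, 3, 4], w))  # 0.3: C < 1, positive autocorrelation
print(gearys_c([1, 4, 1, 4], w))  # 1.5: C > 1, negative autocorrelation
```

In practice one would use an established implementation, such as `esda.Geary` in PySAL, rather than hand-rolled code.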
Local Geary's C.
Like Moran's I, Geary's C can be decomposed into a sum of Local Indicators of Spatial Association (LISA) statistics. LISA statistics can be used to find local clusters through significance testing, though because a large number of tests must be performed (one per sampling area) this approach suffers from the multiple comparisons problem. As noted by Anselin, this means the analysis of the local Geary statistic is aimed at identifying "interesting" points which should then be subject to further investigation. This is therefore a type of exploratory data analysis.
A local version of formula_11 is given by
formula_12
where
formula_13
then,
formula_14
Local Geary's C can be calculated in GeoDa and PySAL.
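A minimal sketch of the local statistic, with the same caveat that the data and weights below are invented; note how summing the local values and dividing by 2S₀ recovers the global C, as in the last identity above:

```python
def local_gearys_c(x, w):
    """Local Geary c_i: each area's contribution to the global statistic."""
    n = len(x)
    xbar = sum(x) / n
    m2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
    return [sum(w[i][j] * (x[i] - x[j]) ** 2 for j in range(n)) / m2
            for i in range(n)]

x = [1, 2, 3, 4]
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
ci = local_gearys_c(x, w)
s0 = sum(sum(row) for row in w)
print(sum(ci) / (2 * s0))  # ~0.3, identical to the global Geary's C
```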
Sources.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " C = \\frac{(N-1) \\sum_{i} \\sum_{j} w_{ij} (x_i-x_j)^2}{2 S_0 \\sum_{i}(x_i-\\bar x)^2} "
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "j"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "\\bar x"
},
{
"math_id": 6,
"text": "w_{ij}"
},
{
"math_id": 7,
"text": "i^{th}"
},
{
"math_id": 8,
"text": "W"
},
{
"math_id": 9,
"text": "w_{ii} = 0"
},
{
"math_id": 10,
"text": "S_0"
},
{
"math_id": 11,
"text": "C"
},
{
"math_id": 12,
"text": "c_i = \\frac{1}{m_2}\\sum_j w_{ij}(x_i-x_j)^2"
},
{
"math_id": 13,
"text": " m_2= \\frac{\\sum_i (x_i-\\bar x)^2 }{N-1}"
},
{
"math_id": 14,
"text": " C = \\sum_i \\frac{c_i}{2S_0} "
}
] |
https://en.wikipedia.org/wiki?curid=15300799
|
153008
|
Knot theory
|
Study of mathematical knots
In topology, knot theory is the study of mathematical knots. While inspired by knots which appear in daily life, such as those in shoelaces and rope, a mathematical knot differs in that the ends are joined so it cannot be undone, the simplest knot being a ring (or "unknot"). In mathematical language, a knot is an embedding of a circle in 3-dimensional Euclidean space, formula_0. Two mathematical knots are equivalent if one can be transformed into the other via a deformation of formula_1 upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting it or passing it through itself.
Knots can be described in various ways. Using different description methods, there may be more than one description of the same knot. For example, a common method of describing a knot is a planar diagram called a knot diagram, in which any knot can be drawn in many different ways. Therefore, a fundamental problem in knot theory is determining when two descriptions represent the same knot.
A complete algorithmic solution to this problem exists, which has unknown complexity. In practice, knots are often distinguished using a "knot invariant", a "quantity" which is the same when computed from different descriptions of a knot. Important invariants include knot polynomials, knot groups, and hyperbolic invariants.
The original motivation for the founders of knot theory was to create a table of knots and links, which are knots of several components entangled with each other. More than six billion knots and links have been tabulated since the beginnings of knot theory in the 19th century.
To gain further insight, mathematicians have generalized the knot concept in several ways. Knots can be considered in other three-dimensional spaces and objects other than circles can be used; see "knot (mathematics)". For example, a higher-dimensional knot is an "n"-dimensional sphere embedded in ("n"+2)-dimensional Euclidean space.
History.
Archaeologists have discovered that knot tying dates back to prehistoric times. Besides their uses such as recording information and tying objects together, knots have interested humans for their aesthetics and spiritual symbolism. Knots appear in various forms of Chinese artwork dating from several centuries BC (see Chinese knotting). The endless knot appears in Tibetan Buddhism, while the Borromean rings have made repeated appearances in different cultures, often representing strength in unity. The Celtic monks who created the Book of Kells lavished entire pages with intricate Celtic knotwork.
A mathematical theory of knots was first developed in 1771 by Alexandre-Théophile Vandermonde who explicitly noted the importance of topological features when discussing the properties of knots related to the geometry of position. Mathematical studies of knots began in the 19th century with Carl Friedrich Gauss, who defined the linking integral. In the 1860s, Lord Kelvin's theory that atoms were knots in the aether led to Peter Guthrie Tait's creation of the first knot tables for complete classification. Tait, in 1885, published a table of knots with up to ten crossings, and what came to be known as the Tait conjectures. This record motivated the early knot theorists, but knot theory eventually became part of the emerging subject of topology.
These topologists in the early part of the 20th century—Max Dehn, J. W. Alexander, and others—studied knots from the point of view of the knot group and invariants from homology theory such as the Alexander polynomial. This would be the main approach to knot theory until a series of breakthroughs transformed the subject.
In the late 1970s, William Thurston introduced hyperbolic geometry into the study of knots with the hyperbolization theorem. Many knots were shown to be hyperbolic knots, enabling the use of geometry in defining new, powerful knot invariants. The discovery of the Jones polynomial by Vaughan Jones in 1984, and subsequent contributions from Edward Witten, Maxim Kontsevich, and others, revealed deep connections between knot theory and mathematical methods in statistical mechanics and quantum field theory. A plethora of knot invariants have been invented since then, utilizing sophisticated tools such as quantum groups and Floer homology.
In the last several decades of the 20th century, scientists became interested in studying physical knots in order to understand knotting phenomena in DNA and other polymers. Knot theory can be used to determine if a molecule is chiral (has a "handedness") or not. Tangles, strings with both ends fixed in place, have been effectively used in studying the action of topoisomerase on DNA. Knot theory may be crucial in the construction of quantum computers, through the model of topological quantum computation.
Knot equivalence.
A knot is created by beginning with a one-dimensional line segment, wrapping it around itself arbitrarily, and then fusing its two free ends together to form a closed loop. Simply, we can say a knot formula_2 is a "simple closed curve" (see Curve) — that is: a "nearly" injective and continuous function formula_3, with the only "non-injectivity" being formula_4. Topologists consider knots and other entanglements such as links and braids to be equivalent if the knot can be pushed about smoothly, without intersecting itself, to coincide with another knot.
The idea of knot equivalence is to give a precise definition of when two knots should be considered the same even when positioned quite differently in space. A formal mathematical definition is that two knots formula_5 are equivalent if there is an orientation-preserving homeomorphism formula_6 with formula_7.
What this definition of knot equivalence means is that two knots are equivalent when there is a continuous family of homeomorphisms formula_8 of space onto itself, such that the last one of them carries the first knot onto the second knot. (In detail: Two knots formula_9 and formula_10 are equivalent if there exists a continuous mapping formula_11 such that a) for each formula_12 the mapping taking formula_13 to formula_14 is a homeomorphism of formula_15 onto itself; b) formula_16 for all formula_13; and c) formula_17. Such a function formula_18 is known as an ambient isotopy.)
These two notions of knot equivalence agree exactly about which knots are equivalent: Two knots that are equivalent under the orientation-preserving homeomorphism definition are also equivalent under the ambient isotopy definition, because any orientation-preserving homeomorphisms of formula_15 to itself is the final stage of an ambient isotopy starting from the identity. Conversely, two knots equivalent under the ambient isotopy definition are also equivalent under the orientation-preserving homeomorphism definition, because the formula_19 (final) stage of the ambient isotopy must be an orientation-preserving homeomorphism carrying one knot to the other.
The basic problem of knot theory, the recognition problem, is determining the equivalence of two knots. Algorithms exist to solve this problem, with the first given by Wolfgang Haken in the late 1960s. Nonetheless, these algorithms can be extremely time-consuming, and a major issue in the theory is to understand how hard this problem really is. The special case of recognizing the unknot, called the unknotting problem, is of particular interest. In February 2021 Marc Lackenby announced a new unknot recognition algorithm that runs in quasi-polynomial time.
Knot diagrams.
A useful way to visualise and manipulate knots is to project the knot onto a plane—think of the knot casting a shadow on the wall. A small change in the direction of projection will ensure that it is one-to-one except at the double points, called "crossings", where the "shadow" of the knot crosses itself once transversely. At each crossing, to be able to recreate the original knot, the over-strand must be distinguished from the under-strand. This is often done by creating a break in the strand going underneath. The resulting diagram is an immersed plane curve with the additional data of which strand is over and which is under at each crossing. (These diagrams are called knot diagrams when they represent a knot and link diagrams when they represent a link.) Analogously, knotted surfaces in 4-space can be related to immersed surfaces in 3-space.
A reduced diagram is a knot diagram in which there are no reducible crossings (also nugatory or removable crossings), or in which all of the reducible crossings have been removed. A petal projection is a type of projection in which, instead of forming double points, all strands of the knot meet at a single crossing point, connected to it by loops forming non-nested "petals".
Reidemeister moves.
In 1927, working with this diagrammatic form of knots, J. W. Alexander and Garland Baird Briggs, and independently Kurt Reidemeister, demonstrated that two knot diagrams belonging to the same knot can be related by a sequence of three kinds of moves on the diagram, shown below. These operations, now called the "Reidemeister moves", are:
1. Twist and untwist in either direction;
2. Move one strand completely over another;
3. Move a strand completely over or under a crossing.
The proof that diagrams of equivalent knots are connected by Reidemeister moves relies on an analysis of what happens under the planar projection of the movement taking one knot to another. The movement can be arranged so that almost all of the time the projection will be a knot diagram, except at finitely many times when an "event" or "catastrophe" occurs, such as when more than two strands cross at a point or multiple strands become tangent at a point. A close inspection will show that complicated events can be eliminated, leaving only the simplest events: (1) a "kink" forming or being straightened out; (2) two strands becoming tangent at a point and passing through; and (3) three strands crossing at a point. These are precisely the Reidemeister moves.
Knot invariants.
A knot invariant is a "quantity" that is the same for equivalent knots. For example, if the invariant is computed from a knot diagram, it should give the same value for two knot diagrams representing equivalent knots. An invariant may take the same value on two different knots, so by itself may be incapable of distinguishing all knots. An elementary invariant is tricolorability.
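Tricolorability is easy to check by brute force. In the sketch below (the crossing encoding is an illustrative convention, not a standard format), each arc of a diagram receives one of three colours, and at every crossing the over-arc o and the two under-arcs u1, u2 must satisfy 2o ≡ u1 + u2 (mod 3); the diagram is tricolorable if some valid colouring uses more than one colour:

```python
from itertools import product

def tricolorable(n_arcs, crossings):
    """crossings: list of (over, under1, under2) arc indices."""
    for colors in product(range(3), repeat=n_arcs):
        if all((2 * colors[o] - colors[u1] - colors[u2]) % 3 == 0
               for o, u1, u2 in crossings) and len(set(colors)) > 1:
            return True
    return False

# Trefoil: three arcs, each crossing over one arc and under the other two.
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(tricolorable(3, trefoil))  # True: the trefoil is tricolorable
print(tricolorable(1, []))       # False: the unknot is not
```

Since tricolorability is preserved by the Reidemeister moves, this already distinguishes the trefoil from the unknot.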
"Classical" knot invariants include the knot group, which is the fundamental group of the knot complement, and the Alexander polynomial, which can be computed from the Alexander invariant, a module constructed from the infinite cyclic cover of the knot complement . In the late 20th century, invariants such as "quantum" knot polynomials, Vassiliev invariants and hyperbolic invariants were discovered. These aforementioned invariants are only the tip of the iceberg of modern knot theory.
Knot polynomials.
A knot polynomial is a knot invariant that is a polynomial. Well-known examples include the Jones polynomial, the Alexander polynomial, and the Kauffman polynomial. A variant of the Alexander polynomial, the Alexander–Conway polynomial, is a polynomial in the variable "z" with integer coefficients.
The Alexander–Conway polynomial is actually defined in terms of links, which consist of one or more knots entangled with each other. The concepts explained above for knots, e.g. diagrams and Reidemeister moves, also hold for links.
Consider an oriented link diagram, "i.e." one in which every component of the link has a preferred direction indicated by an arrow. For a given crossing of the diagram, let formula_20 be the oriented link diagrams resulting from changing the diagram as indicated in the figure:
The original diagram might be either formula_21 or formula_22, depending on the chosen crossing's configuration. Then the Alexander–Conway polynomial, formula_23, is recursively defined according to the rules:
1. "C"(unknot) = 1;
2. "C"("L"+) = "C"("L"−) + "z" "C"("L"0), where "L"+, "L"− and "L"0 are diagrams that are identical except at the chosen crossing, which is respectively positive, negative, or smoothed out.
The second rule is what is often referred to as a skein relation. To check that these rules give an invariant of an oriented link, one should determine that the polynomial does not change under the three Reidemeister moves. Many important knot polynomials can be defined in this way.
The following is an example of a typical computation using a skein relation. It computes the Alexander–Conway polynomial of the trefoil knot. The yellow patches indicate where the relation is applied.
"C"() = "C"() + "z" "C"()
gives the unknot and the Hopf link. Applying the relation to the Hopf link where indicated,
"C"() = "C"() + "z" "C"()
gives a link deformable to one with 0 crossings (it is actually the unlink of two components) and an unknot. The unlink takes a bit of sneakiness:
"C"() = "C"() + "z" "C"()
which implies that "C"(unlink of two components) = 0, since the first two polynomials are of the unknot and thus equal.
Putting all this together will show:
formula_27
Since the Alexander–Conway polynomial is a knot invariant, this shows that the trefoil is not equivalent to the unknot. So the trefoil really is "knotted".
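The reduction chain above can be replayed numerically. The sketch below stores polynomials in "z" as coefficient lists and simply encodes the three skein steps from the text (it does not compute anything from diagrams):

```python
def add(p, q):
    """Add two polynomials given as coefficient lists (index = degree)."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def mul_z(p):
    """Multiply by z: shift every coefficient up one degree."""
    return [0] + p

unknot = [1]                        # C(unknot) = 1
unlink = [0]                        # C(two-component unlink) = 0
hopf = add(unlink, mul_z(unknot))   # C(Hopf) = C(unlink) + z*C(unknot) = z
trefoil = add(unknot, mul_z(hopf))  # C(trefoil) = C(unknot) + z*C(Hopf)
print(trefoil)  # [1, 0, 1], i.e. 1 + z**2
```

The result 1 + z² differs from the unknot's constant polynomial 1, which is exactly the argument that the trefoil is knotted.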
Actually, there are two trefoil knots, called the right and left-handed trefoils, which are mirror images of each other (take a diagram of the trefoil given above and change each crossing to the other way to get the mirror image). These are not equivalent to each other, meaning that they are not amphichiral. This was shown by Max Dehn, before the invention of knot polynomials, using group theoretical methods. But the Alexander–Conway polynomial of each kind of trefoil will be the same, as can be seen by going through the computation above with the mirror image. The "Jones" polynomial can in fact distinguish between the left- and right-handed trefoil knots.
Hyperbolic invariants.
William Thurston proved many knots are hyperbolic knots, meaning that the knot complement (i.e., the set of points of 3-space not on the knot) admits a geometric structure, in particular that of hyperbolic geometry. The hyperbolic structure depends only on the knot so any quantity computed from the hyperbolic structure is then a knot invariant.
Geometry lets us visualize what the inside of a knot or link complement looks like by imagining light rays as traveling along the geodesics of the geometry. An example is provided by the picture of the complement of the Borromean rings. The inhabitant of this link complement is viewing the space from near the red component. The balls in the picture are views of horoball neighborhoods of the link. By thickening the link in a standard way, the horoball neighborhoods of the link components are obtained. Even though the boundary of a neighborhood is a torus, when viewed from inside the link complement, it looks like a sphere. Each link component shows up as infinitely many spheres (of one color) as there are infinitely many light rays from the observer to the link component. The fundamental parallelogram (which is indicated in the picture), tiles both vertically and horizontally and shows how to extend the pattern of spheres infinitely.
This pattern, the horoball pattern, is itself a useful invariant. Other hyperbolic invariants include the shape of the fundamental parallelogram, length of shortest geodesic, and volume. Modern knot and link tabulation efforts have utilized these invariants effectively. Fast computers and clever methods of obtaining these invariants make calculating these invariants, in practice, a simple task.
Higher dimensions.
A knot in three dimensions can be untied when placed in four-dimensional space. This is done by changing crossings. Suppose one strand is behind another as seen from a chosen point. Lift it into the fourth dimension, so there is no obstacle (the front strand having no component there); then slide it forward, and drop it back, now in front. Analogies for the plane would be lifting a string up off the surface, or removing a dot from inside a circle.
In fact, in four dimensions, any non-intersecting closed loop of one-dimensional string is equivalent to an unknot. First "push" the loop into a three-dimensional subspace, which is always possible, though technical to explain.
Four-dimensional space occurs in classical knot theory, however, and an important topic is the study of slice knots and ribbon knots. A notorious open problem asks whether every slice knot is also ribbon.
Knotting spheres of higher dimension.
Since a knot can be considered topologically a 1-dimensional sphere, the next generalization is to consider a two-dimensional sphere (formula_28) embedded in 4-dimensional Euclidean space (formula_29). Such an embedding is knotted if there is no homeomorphism of formula_29 onto itself taking the embedded 2-sphere to the standard "round" embedding of the 2-sphere. Suspended knots and spun knots are two typical families of such 2-sphere knots.
The mathematical technique called "general position" implies that for a given "n"-sphere in "m"-dimensional Euclidean space, if "m" is large enough (depending on "n"), the sphere should be unknotted. In general, piecewise-linear "n"-spheres form knots only in ("n" + 2)-dimensional space, although this is no longer a requirement for smoothly knotted spheres. In fact, there are smoothly knotted formula_30-spheres in 6"k"-dimensional space; e.g., there is a smoothly knotted 3-sphere in formula_31. Thus the codimension of a smooth knot can be arbitrarily large when not fixing the dimension of the knotted sphere; however, any smooth "k"-sphere embedded in formula_32 with formula_33 is unknotted. The notion of a knot has further generalisations in mathematics, see: Knot (mathematics), isotopy classification of embeddings.
Every knot in the "n"-sphere formula_34 is the link of a real-algebraic set with isolated singularity in formula_35.
An "n"-knot is a single formula_34 embedded in formula_36. An "n"-link consists of "k"-copies of formula_34 embedded in formula_36, where "k" is a natural number. Both the formula_37 and the formula_38 cases are well studied, and so is the formula_39 case.
Adding knots.
Two knots can be added by cutting both knots and joining the pairs of ends. The operation is called the "knot sum", or sometimes the "connected sum" or "composition" of two knots. This can be formally defined as follows: consider a planar projection of each knot and suppose these projections are disjoint. Find a rectangle in the plane where one pair of opposite sides are arcs along each knot while the rest of the rectangle is disjoint from the knots. Form a new knot by deleting the first pair of opposite sides and adjoining the other pair of opposite sides. The resulting knot is a sum of the original knots. Depending on how this is done, two different knots (but no more) may result. This ambiguity in the sum can be eliminated by regarding the knots as "oriented", i.e. having a preferred direction of travel along the knot, and requiring the arcs of the knots in the sum to be oriented consistently with the oriented boundary of the rectangle.
The knot sum of oriented knots is commutative and associative. A knot is "prime" if it is non-trivial and cannot be written as the knot sum of two non-trivial knots. A knot that can be written as such a sum is "composite". There is a prime decomposition for knots, analogous to prime and composite numbers. For oriented knots, this decomposition is also unique. Higher-dimensional knots can also be added but there are some differences. While you cannot form the unknot in three dimensions by adding two non-trivial knots, you can in higher dimensions, at least when one considers "smooth" knots in codimension at least 3.
Knots can also be constructed using the circuit topology approach. This is done by combining basic units called soft contacts using five operations (Parallel, Series, Cross, Concerted, and Sub). The approach is applicable to open chains as well and can also be extended to include the so-called hard contacts.
Tabulating knots.
Traditionally, knots have been catalogued in terms of crossing number. Knot tables generally include only prime knots, and only one entry for a knot and its mirror image (even if they are different). The number of nontrivial knots of a given crossing number increases rapidly, making tabulation computationally difficult. Tabulation efforts have succeeded in enumerating over 6 billion knots and links. The sequence of the number of prime knots of a given crossing number, up to crossing number 16, is 0, 0, 1, 1, 2, 3, 7, 21, 49, 165, 552, 2176, 9988, 46972, 253293, 1388705, ... (sequence A002863 in the OEIS). While exponential upper and lower bounds for this sequence are known, it has not been proven that this sequence is strictly increasing.
The first knot tables by Tait, Little, and Kirkman used knot diagrams, although Tait also used a precursor to the Dowker notation. Different notations have been invented for knots which allow more efficient tabulation.
The early tables attempted to list all knots of at most 10 crossings, and all alternating knots of 11 crossings. The development of knot theory due to Alexander, Reidemeister, Seifert, and others eased the task of verification and tables of knots up to and including 9 crossings were published by Alexander–Briggs and Reidemeister in the late 1920s.
The first major verification of this work was done in the 1960s by John Horton Conway, who not only developed a new notation but also the Alexander–Conway polynomial. This verified the list of knots of at most 11 crossings and a new list of links up to 10 crossings. Conway found a number of omissions but only one duplication in the Tait–Little tables; however he missed the duplicates called the Perko pair, which would only be noticed in 1974 by Kenneth Perko. This famous error would propagate when Dale Rolfsen added a knot table in his influential text, based on Conway's work. Conway's 1970 paper on knot theory also contains a typographical duplication on its non-alternating 11-crossing knots page and omits 4 examples — 2 previously listed in D. Lombardero's 1968 Princeton senior thesis and 2 more subsequently discovered by Alain Caudron. [see Perko (1982), Primality of certain knots, Topology Proceedings] Less famous is the duplicate in his 10 crossing link table: 2.-2.-20.20 is the mirror of 8*-20:-20. [See Perko (2016), Historical highlights of non-cyclic knot theory, J. Knot Theory Ramifications].
In the late 1990s Hoste, Thistlethwaite, and Weeks tabulated all the knots through 16 crossings. In 2003 Rankin, Flint, and Schermann, tabulated the alternating knots through 22 crossings. In 2020 Burton tabulated all prime knots with up to 19 crossings.
Alexander–Briggs notation.
This is the most traditional notation, due to the 1927 paper of James W. Alexander and Garland B. Briggs and later extended by Dale Rolfsen in his knot table (see image above and List of prime knots). The notation simply organizes knots by their crossing number. One writes the crossing number with a subscript to denote its order amongst all knots with that crossing number. This order is arbitrary and so has no special significance (though in each number of crossings the twist knot comes after the torus knot). Links are written by the crossing number with a superscript to denote the number of components and a subscript to denote its order within the links with the same number of components and crossings. Thus the trefoil knot is notated 3₁ and the Hopf link is 2²₁. Alexander–Briggs names in the range 10₁₆₂ to 10₁₆₆ are ambiguous, due to the discovery of the Perko pair in Charles Newton Little's original and subsequent knot tables, and differences in approach to correcting this error in knot tables and other publications created after this point.
Dowker–Thistlethwaite notation.
The Dowker–Thistlethwaite notation, also called the Dowker notation or code, for a knot is a finite sequence of even integers. The numbers are generated by following the knot and marking the crossings with consecutive integers. Since each crossing is visited twice, this creates a pairing of even integers with odd integers. An appropriate sign is given to indicate over and undercrossing. For example, in this figure the knot diagram has crossings labelled with the pairs (1,6) (3,−12) (5,2) (7,8) (9,−4) and (11,−10). The Dowker–Thistlethwaite notation for this labelling is the sequence: 6, −12, 2, 8, −4, −10. A knot diagram has more than one possible Dowker notation, and there is a well-understood ambiguity when reconstructing a knot from a Dowker–Thistlethwaite notation.
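The labelling scheme can be sketched in a few lines. Using the example sequence from the text, the code below recovers the odd/even pairs and checks that each even label occurs exactly once up to sign (the helper names are made up for illustration):

```python
def dt_pairs(code):
    """Recover the (odd, even) crossing pairs from a Dowker–Thistlethwaite
    code: the i-th entry is paired with odd label 2*i + 1."""
    return [(2 * i + 1, e) for i, e in enumerate(code)]

def is_valid_dt(code):
    """Each even label 2..2n must appear exactly once, up to sign."""
    n = len(code)
    return sorted(abs(e) for e in code) == list(range(2, 2 * n + 1, 2))

code = [6, -12, 2, 8, -4, -10]
print(dt_pairs(code))    # [(1, 6), (3, -12), (5, 2), (7, 8), (9, -4), (11, -10)]
print(is_valid_dt(code)) # True
```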
Conway notation.
The Conway notation for knots and links, named after John Horton Conway, is based on the theory of tangles. The advantage of this notation is that it reflects some properties of the knot or link.
The notation describes how to construct a particular link diagram of the link. Start with a "basic polyhedron", a 4-valent connected planar graph with no digon regions. Such a polyhedron is denoted first by the number of vertices then a number of asterisks which determine the polyhedron's position on a list of basic polyhedra. For example, 10** denotes the second 10-vertex polyhedron on Conway's list.
Each vertex then has an algebraic tangle substituted into it (each vertex is oriented so there is no arbitrary choice in substitution). Each such tangle has a notation consisting of numbers and + or − signs.
An example is 1*2 −3 2. The 1* denotes the only 1-vertex basic polyhedron. The 2 −3 2 is a sequence describing the continued fraction associated to a rational tangle. One inserts this tangle at the vertex of the basic polyhedron 1*.
A more complicated example is 8*3.1.2 0.1.1.1.1.1 Here again 8* refers to a basic polyhedron with 8 vertices. The periods separate the notation for each tangle.
Any link admits such a description, and it is clear this is a very compact notation even for very large crossing number. There are some further shorthands usually used. The last example is usually written 8*3:2 0, where the ones are omitted but the number of dots is kept, excepting the dots at the end. For an algebraic knot such as in the first example, 1* is often omitted.
Conway's pioneering paper on the subject lists basic polyhedra of up to 10 vertices, which he uses to tabulate links; these tabulations have become standard for those links. For a further listing of higher-vertex polyhedra, there are nonstandard choices available.
Gauss code.
Gauss code, similar to the Dowker–Thistlethwaite notation, represents a knot with a sequence of integers. However, rather than every crossing being represented by two different numbers, crossings are labeled with only one number. When the crossing is an overcrossing, a positive number is listed. At an undercrossing, a negative number. For example, the trefoil knot in Gauss code can be given as: 1,−2,3,−1,2,−3
Gauss code is limited in its ability to identify knots. This problem is partially addressed by the extended Gauss code.
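The well-formedness condition implicit above (every crossing label occurring once as an overcrossing and once as an undercrossing) can be checked directly. A minimal sketch with illustrative names; passing this check does not guarantee the sequence is realizable as an actual knot diagram:

```python
from collections import defaultdict

def is_plausible_gauss_code(code):
    """Necessary (not sufficient) well-formedness check for a Gauss code:
    every crossing label must occur exactly twice, once positive (over)
    and once negative (under)."""
    signs = defaultdict(list)
    for entry in code:
        signs[abs(entry)].append(entry > 0)
    return all(sorted(v) == [False, True] for v in signs.values())

is_plausible_gauss_code([1, -2, 3, -1, 2, -3])   # trefoil: True
is_plausible_gauss_code([1, 2, -1])              # malformed: False
```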
References.
Footnotes.
<templatestyles src="Reflist/styles.css" />
Further reading.
Introductory textbooks.
There are a number of introductions to knot theory. A classical introduction for graduate students or advanced undergraduates is . Other good texts from the references are and . Adams is informal and accessible for the most part to high schoolers. Lickorish is a rigorous introduction for graduate students, covering a nice mix of classical and modern topics. is suitable for undergraduates who know point-set topology; knowledge of algebraic topology is not required.
|
[
{
"math_id": 0,
"text": "\\mathbb{E}^3"
},
{
"math_id": 1,
"text": "\\mathbb{R}^3"
},
{
"math_id": 2,
"text": "K"
},
{
"math_id": 3,
"text": "K\\colon[0,1]\\to \\mathbb{R}^3"
},
{
"math_id": 4,
"text": "K(0)=K(1)"
},
{
"math_id": 5,
"text": "K_1, K_2"
},
{
"math_id": 6,
"text": "h\\colon\\R^3\\to\\R^3"
},
{
"math_id": 7,
"text": "h(K_1)=K_2"
},
{
"math_id": 8,
"text": "\\{ h_t: \\mathbb R^3 \\rightarrow \\mathbb R^3\\ \\mathrm{for}\\ 0 \\leq t \\leq 1\\}"
},
{
"math_id": 9,
"text": "K_1"
},
{
"math_id": 10,
"text": "K_2"
},
{
"math_id": 11,
"text": "H: \\mathbb R^3 \\times [0,1] \\rightarrow \\mathbb R^3"
},
{
"math_id": 12,
"text": "t \\in [0,1]"
},
{
"math_id": 13,
"text": "x \\in \\mathbb R^3"
},
{
"math_id": 14,
"text": "H(x,t) \\in \\mathbb R^3"
},
{
"math_id": 15,
"text": "\\mathbb R^3"
},
{
"math_id": 16,
"text": "H(x, 0) = x"
},
{
"math_id": 17,
"text": "H(K_1,1) = K_2"
},
{
"math_id": 18,
"text": "H"
},
{
"math_id": 19,
"text": "t=1"
},
{
"math_id": 20,
"text": "L_+, L_-, L_0"
},
{
"math_id": 21,
"text": "L_+"
},
{
"math_id": 22,
"text": "L_-"
},
{
"math_id": 23,
"text": "C(z)"
},
{
"math_id": 24,
"text": "C(O) = 1"
},
{
"math_id": 25,
"text": "O"
},
{
"math_id": 26,
"text": "C(L_+) = C(L_-) + z C(L_0)."
},
{
"math_id": 27,
"text": "C(\\mathrm{trefoil}) = 1 + z(0 + z) = 1 + z^2"
},
{
"math_id": 28,
"text": "\\mathbb{S}^2"
},
{
"math_id": 29,
"text": "\\R^4"
},
{
"math_id": 30,
"text": "(4k-1)"
},
{
"math_id": 31,
"text": "\\R^6"
},
{
"math_id": 32,
"text": "\\R^n"
},
{
"math_id": 33,
"text": "2n-3k-3>0"
},
{
"math_id": 34,
"text": "\\mathbb{S}^n"
},
{
"math_id": 35,
"text": "\\R^{n+1}"
},
{
"math_id": 36,
"text": "\\R^m"
},
{
"math_id": 37,
"text": "m=n+2"
},
{
"math_id": 38,
"text": "m>n+2"
},
{
"math_id": 39,
"text": "n>1"
}
] |
https://en.wikipedia.org/wiki?curid=153008
|
1530689
|
Complementarity (physics)
|
Quantum physics concept
In physics, complementarity is a conceptual aspect of quantum mechanics that Niels Bohr regarded as an essential feature of the theory. The complementarity principle holds that certain pairs of complementary properties cannot all be observed or measured simultaneously. For example, position and momentum or wave and particle properties. In contemporary terms, complementarity encompasses both the uncertainty principle and wave-particle duality.
Bohr considered one of the foundational truths of quantum mechanics to be the fact that setting up an experiment to measure one quantity of a pair, for instance the position of an electron, excludes the possibility of measuring the other, yet understanding both experiments is necessary to characterize the object under study. In Bohr's view, the behavior of atomic and subatomic objects cannot be separated from the measuring instruments that create the context in which the measured objects behave. Consequently, there is no "single picture" that unifies the results obtained in these different experimental contexts, and only the "totality of the phenomena" together can provide a completely informative description.
History.
Background.
Complementarity as a physical model derives from Niels Bohr's 1927 presentation in Como, Italy, at a scientific celebration of the work of Alessandro Volta a century earlier. Bohr's subject was complementarity, the idea that measurements of quantum events provide complementary information through seemingly contradictory results. While Bohr's presentation was not well received, it did crystallize the issues that ultimately led to the modern wave-particle duality concept. The contradictory results that triggered Bohr's ideas had been building up over the previous 20 years.
This contradictory evidence came both from light and from electrons.
The wave theory of light, broadly successful for over a hundred years, had been challenged by Planck's 1901 model of blackbody radiation and Einstein's 1905 interpretation of the photoelectric effect. These theoretical models use discrete energy, a quantum, to describe the interaction of light with matter. Despite confirmation by various experimental observations, the photon theory (as it came to be called later) remained controversial until Arthur Compton performed a series of experiments from 1922 to 1924 demonstrating the momentum of light. The experimental evidence of particle-like momentum seemingly contradicted other experiments demonstrating the wave-like interference of light.
The contradictory evidence from electrons arrived in the opposite order. Many experiments by J. J. Thomson, Robert Millikan, and Charles Wilson, among others, had shown that free electrons had particle properties. However, in 1924, Louis de Broglie proposed that electrons had an associated wave, and Schrödinger demonstrated that wave equations accurately account for electron properties in atoms. Again, some experiments showed particle properties and others wave properties.
Bohr's resolution of these contradictions is to accept them. In his Como lecture he says: "our interpretation of the experimental material rests essentially upon the classical concepts." Direct observation being impossible, observations of quantum effects are necessarily classical. Whatever the nature of quantum events, our only information will arrive via classical results. If experiments sometimes produce wave results and sometimes particle results, that is the nature of light and of the ultimate constituents of matter.
Bohr's lectures.
Niels Bohr apparently conceived of the principle of complementarity during a skiing vacation in Norway in February and March 1927, during which he received a letter from Werner Heisenberg regarding an as-yet-unpublished result, a thought experiment about a microscope using gamma rays. This thought experiment implied a tradeoff between uncertainties that would later be formalized as the uncertainty principle. To Bohr, Heisenberg's paper did not make clear the distinction between a position measurement merely disturbing the momentum value that a particle carried and the more radical idea that momentum was meaningless or undefinable in a context where position was measured instead. Upon returning from his vacation, by which time Heisenberg had already submitted his paper for publication, Bohr convinced Heisenberg that the uncertainty tradeoff was a manifestation of the deeper concept of complementarity. Heisenberg duly appended a note to this effect to his paper, before its publication, stating:
Bohr has brought to my attention [that] the uncertainty in our observation does not arise exclusively from the occurrence of discontinuities, but is tied directly to the demand that we ascribe equal validity to the quite different experiments which show up in the [particulate] theory on one hand, and in the wave theory on the other hand.
Bohr publicly introduced the principle of complementarity in a lecture he delivered on 16 September 1927 at the International Physics Congress held in Como, Italy, attended by most of the leading physicists of the era, with the notable exceptions of Einstein, Schrödinger, and Dirac. However, these three were in attendance one month later when Bohr again presented the principle at the Fifth Solvay Congress in Brussels, Belgium. The lecture was published in the proceedings of both of these conferences, and was republished the following year in "Naturwissenschaften" (in German) and in "Nature" (in English).
In his original lecture on the topic, Bohr pointed out that just as the finitude of the speed of light implies the impossibility of a sharp separation between space and time (relativity), the finitude of the quantum of action implies the impossibility of a sharp separation between the behavior of a system and its interaction with the measuring instruments and leads to the well-known difficulties with the concept of 'state' in quantum theory; the notion of complementarity is intended to capture this new situation in epistemology created by quantum theory. Physicists F.A.M. Frescura and Basil Hiley have summarized the reasons for the introduction of the principle of complementarity in physics as follows:
<templatestyles src="Template:Blockquote/styles.css" />In the traditional view, it is assumed that there exists a reality in space-time and that this reality is a given thing, all of whose aspects can be viewed or articulated at any given moment. Bohr was the first to point out that quantum mechanics called this traditional outlook into question. To him the "indivisibility of the quantum of action" [...] implied that not all aspects of a system can be viewed simultaneously. By using one particular piece of apparatus only certain features could be made manifest at the expense of others, while with a different piece of apparatus another complementary aspect could be made manifest in such a way that the original set became non-manifest, that is, the original attributes were no longer well defined. For Bohr, this was an indication that the principle of complementarity, a principle that he had previously known to appear extensively in other intellectual disciplines but which did not appear in classical physics, should be adopted as a universal principle.
Debate following the lectures.
Complementarity was a central feature of Bohr's reply to the EPR paradox, an attempt by Albert Einstein, Boris Podolsky and Nathan Rosen to argue that quantum particles must have position and momentum even without being measured and so quantum mechanics must be an incomplete theory. The thought experiment proposed by Einstein, Podolsky and Rosen involved producing two particles and sending them far apart. The experimenter could choose to measure either the position or the momentum of one particle. Given that result, they could in principle make a precise prediction of what the corresponding measurement on the other, faraway particle would find. To Einstein, Podolsky and Rosen, this implied that the faraway particle must have precise values of both quantities whether or not that particle is measured in any way. Bohr argued in response that the deduction of a position value could not be transferred over to the situation where a momentum value is measured, and vice versa.
Later expositions of complementarity by Bohr include a 1938 lecture in Warsaw and a 1949 article written for a festschrift honoring Albert Einstein. It was also covered in a 1953 essay by Bohr's collaborator Léon Rosenfeld.
Mathematical formalism.
For Bohr, complementarity was the "ultimate reason" behind the uncertainty principle. All attempts to grapple with atomic phenomena using classical physics were eventually frustrated, he wrote, leading to the recognition that those phenomena have "complementary aspects". But classical physics can be generalized to address this, and with "astounding simplicity", by describing physical quantities using non-commutative algebra. This mathematical expression of complementarity builds on the work of Hermann Weyl and Julian Schwinger, starting with Hilbert spaces and unitary transformation, leading to the theorems of mutually unbiased bases.
In the mathematical formulation of quantum mechanics, physical quantities that classical mechanics had treated as real-valued variables become self-adjoint operators on a Hilbert space. These operators, called "observables", can fail to commute, in which case they are called "incompatible":formula_0
Incompatible observables cannot have a complete set of common eigenstates; there can be some simultaneous eigenstates of formula_1 and formula_2, but not enough in number to constitute a complete basis. The canonical commutation relation
formula_3
implies that this applies to position and momentum. In a Bohrian view, this is a mathematical statement that position and momentum are complementary aspects. Likewise, an analogous relationship holds for any two of the spin observables defined by the Pauli matrices; measurements of spin along perpendicular axes are complementary. The Pauli spin observables are defined for a quantum system described by a two-dimensional Hilbert space; mutually unbiased bases generalize these observables to Hilbert spaces of arbitrary finite dimension. Two bases formula_4 and formula_5 for an formula_6-dimensional Hilbert space are mutually unbiased whenformula_7
Here the basis vector formula_8, for example, has the same overlap with every formula_9; there is equal transition probability between a state in one basis and any state in the other basis. Each basis corresponds to an observable, and the observables for two mutually unbiased bases are complementary to each other. This leads to the description of complementarity as a statement about quantum kinematics:
<templatestyles src="Template:Blockquote/styles.css" />For each degree of freedom the dynamical variables are a pair of complementary observables.
The concept of complementarity has also been applied to quantum measurements described by positive-operator-valued measures (POVMs).
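The defining property of mutually unbiased bases can be verified numerically for a standard example: the computational basis and the discrete Fourier basis, which are mutually unbiased in every dimension. A sketch with illustrative names, not tied to any particular physics library:

```python
import cmath

def fourier_basis(N):
    """Columns of the N-dimensional discrete Fourier matrix; this basis is
    mutually unbiased with respect to the computational basis."""
    w = cmath.exp(2j * cmath.pi / N)
    return [[w ** (j * k) / N ** 0.5 for j in range(N)] for k in range(N)]

def mutually_unbiased(A, B, tol=1e-12):
    """True when |<a_j|b_k>|^2 = 1/N for every pair of basis vectors."""
    N = len(A)
    return all(
        abs(abs(sum(x.conjugate() * y for x, y in zip(a, b))) ** 2 - 1 / N) < tol
        for a in A for b in B)

N = 5
computational = [[1.0 if i == j else 0.0 for i in range(N)] for j in range(N)]
assert mutually_unbiased(computational, fourier_basis(N))
```

A basis is trivially not mutually unbiased with itself, since the overlaps are then 0 or 1 rather than 1/N.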
Continuous complementarity.
While the concept of complementarity can be discussed via two experimental extremes, continuous tradeoff is also possible. The wave-particle relation, introduced by Daniel Greenberger and Allaine Yasin in 1988, and since then refined by others, quantifies the trade-off between measuring particle path distinguishability, formula_10, and wave interference fringe visibility, formula_11:
formula_12
The values of formula_10 and formula_11 can vary between 0 and 1 individually, but any experiment that combines particle and wave detection will limit one or the other, or both. The detailed definition of the two terms vary among applications, but the relation expresses the verified constraint that efforts to detect particle paths will result in less visible wave interference.
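As a numeric illustration of the constraint, consider the commonly analyzed case of a symmetric two-path interferometer with a pure which-path marker state: if the two marker states have inner-product magnitude |c|, the visibility is V = |c| and the optimal distinguishability is D = √(1 − |c|²), so the bound is saturated. This is a sketch under those assumptions, not a general derivation:

```python
import math

def duality_terms(marker_overlap):
    """Distinguishability and visibility for a symmetric two-path
    interferometer with pure which-path marker states of inner-product
    magnitude |c| (an assumption of this sketch): V = |c| and
    D = sqrt(1 - |c|^2), which saturates D^2 + V^2 <= 1."""
    V = abs(marker_overlap)
    return math.sqrt(1.0 - V * V), V

for c in (0.0, 0.6, 1.0):        # from full path marking to no marking
    D, V = duality_terms(c)
    assert D * D + V * V <= 1.0 + 1e-12
```

Mixed marker states give a strict inequality, D² + V² < 1, consistent with the relation in the text.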
Modern role.
While many of the early discussions of complementarity concerned hypothetical experiments, advances in technology have since allowed direct tests of this concept. Experiments like the quantum eraser verify the key ideas in complementarity; modern exploration of quantum entanglement builds directly on complementarity:
<templatestyles src="Template:Blockquote/styles.css" />
In his Nobel lecture, physicist Julian Schwinger linked complementarity to quantum field theory:
<templatestyles src="Template:Blockquote/styles.css" />Indeed, relativistic quantum mechanics, the union of the complementarity principle of Bohr with the relativity principle of Einstein, is quantum field theory.
The Consistent histories interpretation of quantum mechanics takes a generalized form of complementarity as a key defining postulate.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\left[\\hat{A}, \\hat{B}\\right] := \\hat{A}\\hat{B} - \\hat{B}\\hat{A} \\neq \\hat{0}."
},
{
"math_id": 1,
"text": "\\hat{A}"
},
{
"math_id": 2,
"text": "\\hat{B}"
},
{
"math_id": 3,
"text": "\\left[\\hat{x}, \\hat{p}\\right] = i\\hbar"
},
{
"math_id": 4,
"text": "\\{|a_j\\rangle\\}"
},
{
"math_id": 5,
"text": "\\{|b_k\\rangle\\}"
},
{
"math_id": 6,
"text": "N"
},
{
"math_id": 7,
"text": "|\\langle a_j|b_k \\rangle|^2 = \\frac{1}{N}\\ \\text{for all}\\ j, k = 1, ... N-1."
},
{
"math_id": 8,
"text": "a_1"
},
{
"math_id": 9,
"text": "b_k"
},
{
"math_id": 10,
"text": "D"
},
{
"math_id": 11,
"text": "V"
},
{
"math_id": 12,
"text": "D^2 + V^2\\ \\le\\ 1"
}
] |
https://en.wikipedia.org/wiki?curid=1530689
|
15307810
|
CIE 1960 color space
|
The CIE 1960 color space ("CIE 1960 UCS", variously expanded "Uniform Color Space", "Uniform Color Scale", "Uniform Chromaticity Scale", "Uniform Chromaticity Space") is another name for the ("u", "v") chromaticity space devised by David MacAdam.
The CIE 1960 UCS does not define a luminance or lightness component, but the "Y" tristimulus value of the XYZ color space or a lightness index similar to "W"* of the CIE 1964 color space are sometimes used.
Today, the CIE 1960 UCS is mostly used to calculate correlated color temperature, where the isothermal lines are perpendicular to the Planckian locus. As a uniform chromaticity space, it has been superseded by the CIE 1976 UCS.
Background.
Judd determined that a more uniform color space could be found by a simple projective transformation of the CIEXYZ tristimulus values:
formula_0
Judd was the first to employ this type of transformation, and many others were to follow. Converting this RGB space to chromaticities one finds
formula_1
formula_2
MacAdam simplified Judd's UCS for computational purposes:
formula_3
formula_4
The Colorimetry committee of the CIE considered MacAdam's proposal at its 14th Session in Brussels for use in situations where more perceptual uniformity was desired than the (x,y) chromaticity space, and officially adopted it as the standard UCS the next year.
Relation to CIE XYZ.
U, V, and W can be found from X, Y, and Z using:
formula_5
formula_6
formula_7
Going the other way:
formula_8
formula_9
formula_10
We then find the chromaticity variables as:
formula_11
formula_12
We can also convert from "u" and "v" to "x" and "y":
formula_13
formula_14
The CIE 1976 UCS coordinates ("u"′, "v"′) are related to ("u", "v") by:
formula_15
formula_16
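The conversions above are straightforward to implement. The following sketch computes ("u", "v") both from XYZ and from ("x", "y") chromaticity, and checks that the two routes agree (the sample stimulus values are illustrative):

```python
def xyz_to_uv(X, Y, Z):
    """CIE 1960 chromaticity directly from tristimulus values:
    u = 4X / (X + 15Y + 3Z), v = 6Y / (X + 15Y + 3Z)."""
    d = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / d, 6.0 * Y / d

def xy_to_uv(x, y):
    """MacAdam's form of the same transformation, from (x, y)."""
    d = 12.0 * y - 2.0 * x + 3.0
    return 4.0 * x / d, 6.0 * y / d

# Equal-energy white, X = Y = Z = 1, lands at u = 4/19, v = 6/19:
u, v = xyz_to_uv(1.0, 1.0, 1.0)

# The two routes agree for any colour, e.g. an arbitrary stimulus:
X, Y, Z = 0.42, 0.37, 0.21
x, y = X / (X + Y + Z), Y / (X + Y + Z)
assert all(abs(a - b) < 1e-12
           for a, b in zip(xyz_to_uv(X, Y, Z), xy_to_uv(x, y)))
```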
|
[
{
"math_id": 0,
"text": "\\begin{pmatrix} ''R'' \\\\ ''G'' \\\\ ''B'' \\end{pmatrix} = \\begin{pmatrix} 3.1956 & 2.4478 & -0.1434 \\\\ -2.5455 & 7.0492 & 0.9963 \\\\ 0.0000 & 0.0000 & 1.0000 \\end{pmatrix} \\begin{pmatrix} X \\\\ Y \\\\ Z \\end{pmatrix}"
},
{
"math_id": 1,
"text": "u_{\\rm Judd}=\\frac{0.4661x+0.1593y}{y-0.15735x+0.2424} = \\frac{5.5932x+1.9116y}{12y-1.882x+2.9088}"
},
{
"math_id": 2,
"text": "v_{\\rm Judd}=\\frac{0.6581y}{y-0.15735x+0.2424} = \\frac{7.8972y}{12y-1.882x+2.9088}"
},
{
"math_id": 3,
"text": "u = \\frac{4x}{12y - 2x + 3}"
},
{
"math_id": 4,
"text": "v = \\frac{6y}{12y - 2x + 3}"
},
{
"math_id": 5,
"text": "U= \\textstyle{\\frac{2}{3}}X"
},
{
"math_id": 6,
"text": "V=Y\\,"
},
{
"math_id": 7,
"text": "W=\\textstyle{\\frac{1}{2}}(-X+3Y+Z)"
},
{
"math_id": 8,
"text": "X=\\textstyle{\\frac 32}U"
},
{
"math_id": 9,
"text": "Y=V"
},
{
"math_id": 10,
"text": "Z=\\textstyle{\\frac{3}{2}}U-3V+2W"
},
{
"math_id": 11,
"text": "u =\\frac U{U+V+W}= \\frac{4X}{X + 15Y +3Z}"
},
{
"math_id": 12,
"text": "v =\\frac V{U+V+W}= \\frac{6Y}{X + 15Y + 3Z}"
},
{
"math_id": 13,
"text": "x = \\frac{3u}{2u - 8v + 4}"
},
{
"math_id": 14,
"text": "y = \\frac{2v}{2u - 8v + 4}"
},
{
"math_id": 15,
"text": "u^\\prime = u\\,"
},
{
"math_id": 16,
"text": "v^\\prime = \\textstyle{\\frac{3}{2}}v\\,"
}
] |
https://en.wikipedia.org/wiki?curid=15307810
|
15307893
|
CIE 1964 color space
|
The CIE 1964 ("U"*, "V"*, "W"*) color space, also known as CIEUVW, is based on the CIE 1960 UCS:
formula_0
where ("u"0, "v"0) is the white point and "Y" is the luminous tristimulus value of the object. The asterisks in the exponents indicate that the variables represent a more perceptually uniform color space than their predecessors (compare with CIELAB).
Wyszecki invented the UVW color space in order to be able to calculate color differences without having to hold the luminance constant. He defined a lightness index "W"* by simplifying expressions suggested earlier by Ladd and Pinney, and Glasser "et al.". The chromaticity components "U"* and "V"* are defined such that the white point maps to the origin, as in Adams chromatic valence color spaces. This arrangement has the benefit of being able to express the loci of chromaticities with constant saturation simply as ("U"*)2 + ("V"*)2 = "C" for a constant "C". Furthermore, the chromaticity axes are scaled by the lightness "so as to account for the apparent increase or decrease in saturation when the lightness index is increased or decreased, respectively, and the chromaticity ("u", "v") is kept constant".
Chromaticity and color difference.
The chromaticity coefficients were chosen "on the basis of the spacing of the Munsell system. A lightness difference Δ"W" = 1 is assumed to correspond to a chromaticness difference √(Δ"U"2 + Δ"V"2) = 13 (approximately)."
With the coefficients thus selected, the color difference in CIEUVW is simply the Euclidean distance:
formula_1
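A short sketch of the definitions above, assuming (as is conventional) that "Y" is on a 0–100 scale; the helper names and white-point values are illustrative:

```python
def cieuvw(u, v, Y, u0, v0):
    """CIE 1964 (U*, V*, W*) from CIE 1960 (u, v), the tristimulus value Y
    (conventionally on a 0-100 scale) and the white point (u0, v0)."""
    W = 25.0 * Y ** (1.0 / 3.0) - 17.0
    return 13.0 * W * (u - u0), 13.0 * W * (v - v0), W

def delta_e_uvw(c1, c2):
    """Colour difference as the Euclidean distance in U*V*W*."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

# The white point itself maps onto the lightness axis (U* = V* = 0):
white = cieuvw(0.2105, 0.3158, 100.0, 0.2105, 0.3158)
# white == (0.0, 0.0, 25 * 100 ** (1/3) - 17), i.e. roughly (0, 0, 99.04)
```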
|
[
{
"math_id": 0,
"text": "U^*=13W^*(u-u_0), \\quad V^*=13W^*(v-v_0), \\quad W^*=25Y^\\frac13-17"
},
{
"math_id": 1,
"text": "\\Delta E_\\text{CIEUVW}=\\sqrt{ \\left(\\Delta U^*\\right)^2 + \\left(\\Delta V^*\\right)^2 + \\left(\\Delta W^*\\right)^2}"
}
] |
https://en.wikipedia.org/wiki?curid=15307893
|
15308227
|
Moran's I
|
Measure of spatial autocorrelation
In statistics, Moran's "I" is a measure of spatial autocorrelation developed by Patrick Alfred Pierce Moran. Spatial autocorrelation is characterized by a correlation in a signal among nearby locations in space. Spatial autocorrelation is more complex than one-dimensional autocorrelation because spatial correlation is multi-dimensional (i.e. 2 or 3 dimensions of space) and multi-directional.
Global Moran's "I".
Global Moran's "I" is a measure of the overall clustering of the spatial data. It is defined as
formula_0
where
Defining the spatial weights matrix.
The value of formula_10 depends strongly on the assumptions built into the spatial weights matrix formula_6. The matrix is required because, in order to address spatial autocorrelation and also model spatial interaction, we need to impose a structure to constrain the number of neighbors to be considered. This is related to Tobler's first law of geography, which states that "Everything depends on everything else, but closer things more so"—in other words, the law implies a spatial distance decay function, such that even though all observations have an influence on all other observations, after some distance threshold that influence can be neglected.
The idea is to construct a matrix that accurately reflects your assumptions about the particular spatial phenomenon in question. A common approach is to give a weight of 1 if two zones are neighbors, and 0 otherwise, though the definition of 'neighbors' can vary. Another common approach might be to give a weight of 1 to formula_11 nearest neighbors, 0 otherwise. An alternative is to use a distance decay function for assigning weights. Sometimes the length of a shared edge is used for assigning different weights to neighbors. The selection of spatial weights matrix should be guided by theory about the phenomenon in question. The value of formula_10 is quite sensitive to the weights and can influence the conclusions you make about a phenomenon, especially when using distances.
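The definition can be implemented directly. The following sketch uses a simple rook-adjacency weight matrix on four cells in a line, an illustrative choice in the spirit of the binary-neighbor weighting discussed above:

```python
def morans_i(x, w):
    """Global Moran's I for values x and spatial weight matrix w,
    following the definition above (w[i][i] is assumed to be 0)."""
    n = len(x)
    mean = sum(x) / n
    dev = [xi - mean for xi in x]
    W = sum(sum(row) for row in w)
    num = sum(w[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    return (n / W) * num / sum(d * d for d in dev)

# Rook adjacency for four cells in a line: neighbours share an edge.
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]

morans_i([1.0, 2.0, 3.0, 4.0], w)   # smooth gradient: 1/3 (clustered)
morans_i([1.0, 2.0, 1.0, 2.0], w)   # alternating: -1 (dispersed)
```

Swapping in distance-decay or k-nearest-neighbor weights only changes how `w` is built, which is exactly why the choice of weights drives the value of the statistic.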
Expected value.
The expected value of Moran's "I" under the null hypothesis of no spatial autocorrelation is
formula_12
The null distribution used for this expectation permutes the formula_4 values by a permutation formula_13 drawn uniformly at random; the expectation is taken over the choice of permutation.
At large sample sizes (i.e., as N approaches infinity), the expected value approaches zero.
Its variance equals
formula_14
where
formula_15
formula_16
formula_17
formula_18
formula_19
Values significantly below -1/(N-1) indicate negative spatial autocorrelation and values significantly above -1/(N-1) indicate positive spatial autocorrelation. For statistical hypothesis testing, Moran's "I" values can be transformed to z-scores.
Values of "I" range between formula_20 and formula_21 where formula_22 and formula_23 are the corresponding minimum and maximum eigenvalues of the weight matrix. For a row normalised matrix formula_24.
Moran's "I" is inversely related to Geary's "C", but it is not identical. Moran's "I" is a measure of global spatial autocorrelation, while Geary's "C" is more sensitive to local spatial autocorrelation.
Local Moran's "I".
Global spatial autocorrelation analysis yields only one statistic to summarize the whole study area. In other words, the global analysis assumes homogeneity. If that assumption does not hold, then having only one statistic does not make sense as the statistic should differ over space.
Moreover, even if there is no global autocorrelation or no clustering, we can still find clusters at a local level using local spatial autocorrelation analysis. The fact that Moran's "I" is a summation of individual cross products is exploited by the "local indicators of spatial association" (LISA) to evaluate the clustering in those individual units by calculating Local Moran's "I" for each spatial unit and evaluating the statistical significance for each Ii. From the equation of Global Moran's "I", we can obtain:
formula_25
where:
formula_26
then,
formula_27
I is the Global Moran's "I" measuring global autocorrelation, Ii is local, and N is the number of analysis units on the map.
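The decomposition can be checked numerically. Note that the identity I = Σ Ii/N as written holds when the weights are row-standardised, so that W = N; the sketch below makes that assumption explicit:

```python
def local_morans(x, w):
    """Local Moran's I_i as defined above (m2 is the biased variance)."""
    n = len(x)
    mean = sum(x) / n
    dev = [xi - mean for xi in x]
    m2 = sum(d * d for d in dev) / n
    return [dev[i] / m2 * sum(w[i][j] * dev[j] for j in range(n))
            for i in range(n)]

x = [1.0, 2.0, 3.0, 4.0]
adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
w = [[a / sum(row) for a in row] for row in adj]   # row-standardise

Ii = local_morans(x, w)

# Global I computed from its definition; W equals N for these weights.
n = len(x)
mean = sum(x) / n
dev = [v - mean for v in x]
W = sum(sum(row) for row in w)
I = (n / W) * sum(w[i][j] * dev[i] * dev[j]
                  for i in range(n) for j in range(n)) / sum(d * d for d in dev)

assert abs(sum(Ii) / n - I) < 1e-12
```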
LISAs can be calculated in GeoDa and ArcGIS Pro which uses the Local Moran's "I", proposed by Luc Anselin in 1995.
Uses.
Moran's "I" is widely used in the fields of geography and geographic information science. Some examples include:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " I = \\frac N W \\frac {\\sum_{i=1}^N \\sum_{j=1}^N w_{ij}(x_i-\\bar x) (x_j-\\bar x)} {\\sum_{i=1}^N (x_i-\\bar x)^2} "
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "j"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "\\bar x"
},
{
"math_id": 6,
"text": "w_{ij}"
},
{
"math_id": 7,
"text": "w_{ii} = 0"
},
{
"math_id": 8,
"text": "W"
},
{
"math_id": 9,
"text": "W = \\sum_{i=1}^N \\sum_{j=1}^N {w_{ij}}"
},
{
"math_id": 10,
"text": "I"
},
{
"math_id": 11,
"text": "k"
},
{
"math_id": 12,
"text": " E(I) = \\frac{-1} {N-1} "
},
{
"math_id": 13,
"text": "\\pi"
},
{
"math_id": 14,
"text": " \\operatorname{Var}(I) = \\frac{NS_4-S_3S_5} {(N-1)(N-2)(N-3)W^2} - (E(I))^2 "
},
{
"math_id": 15,
"text": " S_1 = \\frac 1 2 \\sum_i \\sum_j (w_{ij}+w_{ji})^2 "
},
{
"math_id": 16,
"text": " S_2 = \\sum_i \\left( \\sum_j w_{ij} + \\sum_j w_{ji}\\right)^2 "
},
{
"math_id": 17,
"text": " S_3 = \\frac {N^{-1} \\sum_i (x_i - \\bar x)^4} {(N^{-1} \\sum_i (x_i - \\bar x)^2)^2} "
},
{
"math_id": 18,
"text": " S_4 = (N^2-3N+3)S_1 - NS_2 + 3W^2 "
},
{
"math_id": 19,
"text": " S_5 = (N^2-N) S_1 - 2NS_2 + 6W^2 "
},
{
"math_id": 20,
"text": "\\frac N W w_{min}"
},
{
"math_id": 21,
"text": "\\frac N W w_{max}"
},
{
"math_id": 22,
"text": "w_{min}"
},
{
"math_id": 23,
"text": "w_{max}"
},
{
"math_id": 24,
"text": "\\frac N W = 1"
},
{
"math_id": 25,
"text": " I_i = \\frac{x_i-\\bar x}{m_2} \\sum_{j=1}^N w_{ij} (x_j-\\bar x) "
},
{
"math_id": 26,
"text": " m_2= \\frac{\\sum_{i=1}^N (x_i-\\bar x)^2 }{N}"
},
{
"math_id": 27,
"text": " I= \\sum_{i=1}^N \\frac{I_i}{N} "
}
] |
https://en.wikipedia.org/wiki?curid=15308227
|
153099
|
Normal closure (group theory)
|
Smallest normal group containing a set
In group theory, the normal closure of a subset formula_0 of a group formula_1 is the smallest normal subgroup of formula_1 containing formula_2
Properties and description.
Formally, if formula_1 is a group and formula_0 is a subset of formula_3 the normal closure formula_4 of formula_0 is the intersection of all normal subgroups of formula_1 containing formula_0:
formula_5
The normal closure formula_4 is the smallest normal subgroup of formula_1 containing formula_6 in the sense that formula_4 is a subset of every normal subgroup of formula_1 that contains formula_2
The subgroup formula_4 is generated by the set formula_7 of all conjugates of elements of formula_0 in formula_8
Therefore one can also write
formula_9
Any normal subgroup is equal to its normal closure. The conjugate closure of the empty set formula_10 is the trivial subgroup.
A variety of other notations are used for the normal closure in the literature, including formula_11 formula_12 formula_13 and formula_14
Dual to the concept of normal closure is that of normal interior or normal core, defined as the join of all normal subgroups contained in formula_2
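For small finite groups the normal closure can be computed directly from the description above: take all conjugates of the elements of the subset, then close under multiplication. A sketch for permutation groups (illustrative helpers, not a library API):

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p(q(i)); permutations are tuples of images."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def normal_closure(S, G):
    """Subgroup generated by every conjugate g^-1 s g of elements of S:
    the smallest normal subgroup of G containing S. (A finite subset
    closed under multiplication is automatically a subgroup.)"""
    closure = {compose(inverse(g), compose(s, g)) for s in S for g in G}
    while True:
        new = {compose(a, b) for a in closure for b in closure} - closure
        if not new:
            return closure
        closure |= new

S3 = set(permutations(range(3)))
# A transposition's conjugates generate all of S3:
assert normal_closure({(1, 0, 2)}, S3) == S3
# A 3-cycle's normal closure is the alternating group A3:
assert normal_closure({(1, 2, 0)}, S3) == {(0, 1, 2), (1, 2, 0), (2, 0, 1)}
```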
Group presentations.
For a group formula_1 given by a presentation formula_15 with generators formula_0 and defining relators formula_16 the presentation notation means that formula_1 is the quotient group formula_17 where formula_18 is a free group on formula_2
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "S."
},
{
"math_id": 3,
"text": "G,"
},
{
"math_id": 4,
"text": "\\operatorname{ncl}_G(S)"
},
{
"math_id": 5,
"text": "\\operatorname{ncl}_G(S) = \\bigcap_{S \\subseteq N \\triangleleft G} N."
},
{
"math_id": 6,
"text": "S,"
},
{
"math_id": 7,
"text": "S^G=\\{s^g : g\\in G\\} = \\{g^{-1}sg : g\\in G\\}"
},
{
"math_id": 8,
"text": "G."
},
{
"math_id": 9,
"text": "\\operatorname{ncl}_G(S) = \\{g_1^{-1}s_1^{\\epsilon_1} g_1\\dots g_n^{-1}s_n^{\\epsilon_n}g_n : n \\geq 0, \\epsilon_i = \\pm 1, s_i\\in S, g_i \\in G\\}."
},
{
"math_id": 10,
"text": "\\varnothing"
},
{
"math_id": 11,
"text": "\\langle S^G\\rangle,"
},
{
"math_id": 12,
"text": "\\langle S\\rangle^G,"
},
{
"math_id": 13,
"text": "\\langle \\langle S\\rangle\\rangle_G,"
},
{
"math_id": 14,
"text": "\\langle\\langle S\\rangle\\rangle^G."
},
{
"math_id": 15,
"text": "G=\\langle S \\mid R\\rangle"
},
{
"math_id": 16,
"text": "R,"
},
{
"math_id": 17,
"text": "G = F(S) / \\operatorname{ncl}_{F(S)}(R),"
},
{
"math_id": 18,
"text": "F(S)"
}
] |
https://en.wikipedia.org/wiki?curid=153099
|
1531173
|
Wien bridge oscillator
|
Electric circuit that generates sine waves
A Wien bridge oscillator is a type of electronic oscillator that generates sine waves. It can generate a large range of frequencies. The oscillator is based on a bridge circuit originally developed by Max Wien in 1891 for the measurement of impedances.
The bridge comprises four resistors and two capacitors. The oscillator can also be viewed as a positive gain amplifier combined with a bandpass filter that provides positive feedback. Automatic gain control, intentional non-linearity and incidental non-linearity limit the output amplitude in various implementations of the oscillator.
The circuit shown to the right depicts a once-common implementation of the oscillator, with automatic gain control using an incandescent lamp. Under the condition that R1=R2=R and C1=C2=C, the frequency of oscillation is given by:
formula_0
and the condition of stable oscillation is given by
formula_1
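With R1 = R2 = R and C1 = C2 = C, the oscillation frequency takes the standard form "f" = 1/(2π"RC"). A minimal sketch (the component values are illustrative):

```python
import math

def wien_frequency(R, C):
    """Oscillation frequency of a Wien bridge oscillator with
    R1 = R2 = R and C1 = C2 = C: f = 1 / (2 * pi * R * C)."""
    return 1.0 / (2.0 * math.pi * R * C)

# Example: R = 10 kOhm and C = 16 nF give an audio tone near 1 kHz.
f = wien_frequency(10e3, 16e-9)   # roughly 995 Hz
```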
Background.
There were several efforts to improve oscillators in the 1930s. Linearity was recognized as important. The "resistance-stabilized oscillator" had an adjustable feedback resistor; that resistor would be set so the oscillator just started (thus setting the loop gain to just over unity). The oscillations would build until the vacuum tube's grid would start conducting current, which would increase losses and limit the output amplitude. Automatic amplitude control was investigated. Frederick Terman states, "The frequency stability and wave-shape form of any common oscillator can be improved by using an automatic-amplitude-control arrangement to maintain the amplitude of oscillations constant under all conditions."
In 1937, Larned Meacham described using a filament lamp for automatic gain control in bridge oscillators.
Also in 1937, Hermon Hosmer Scott described audio oscillators based on various bridges including the Wien bridge.
Terman, at Stanford University, was interested in Harold Stephen Black's work on negative feedback, so he held a graduate seminar on negative feedback. Bill Hewlett attended the seminar. Scott's February 1938 oscillator paper came out during the seminar. Here is a recollection by Terman:
Fred Terman explains: "To complete the requirements for an Engineer's degree at Stanford, Bill had to prepare a thesis. At that time I had decided to devote an entire quarter of my graduate seminar to the subject of 'negative feedback' I had become interested in this then new technique because it seemed to have great potential for doing many useful things. I would report on some applications I had thought up on negative feedback, and the boys would read recent articles and report to each other on current developments. This seminar was just well started when a paper came out that looked interesting to me. It was by a man from General Radio and dealt with a fixed-frequency audio oscillator in which the frequency was controlled by a resistance-capacitance network, and was changed by means of push-buttons. Oscillations were obtained by an ingenious application of negative feedback."
In June 1938, Terman, R. R. Buss, Hewlett and F. C. Cahill gave a presentation about negative feedback at the IRE Convention in New York; in August 1938, there was a second presentation at the IRE Pacific Coast Convention in Portland, OR; the presentation became an IRE paper. One topic was amplitude control in a Wien bridge oscillator. The oscillator was demonstrated in Portland. Hewlett, along with David Packard, co-founded Hewlett-Packard, and Hewlett-Packard's first product was the HP200A, a precision Wien bridge oscillator. The first sale was in January 1939.
Hewlett's June 1939 engineer's degree thesis used a lamp to control the amplitude of a Wien bridge oscillator. Hewlett's oscillator produced a sinusoidal output with a stable amplitude and low distortion.
Oscillators without automatic gain control.
The conventional oscillator circuit is designed so that it will start oscillating ("start up") and that its amplitude will be controlled.
The oscillator at the right uses diodes to add a controlled compression to the amplifier output. It can produce total harmonic distortion in the range of 1-5%, depending on how carefully it is trimmed.
For a linear circuit to oscillate, it must meet the Barkhausen conditions: its loop gain must be one and the phase around the loop must be an integer multiple of 360 degrees. The linear oscillator theory doesn't address how the oscillator starts up or how the amplitude is determined. The linear oscillator can support any amplitude.
In practice, the loop gain is initially larger than unity. Random noise is present in all circuits, and some of that noise will be near the desired frequency. A loop gain greater than one allows the amplitude of the noise at that frequency to increase exponentially each time around the loop. With a loop gain greater than one, the oscillator will start.
Ideally, the loop gain needs to be just a little bigger than one, but in practice, it is often significantly greater than one. A larger loop gain makes the oscillator start quickly. A large loop gain also compensates for gain variations with temperature and the desired frequency of a tunable oscillator. For the oscillator to start, the loop gain must be greater than one under all possible conditions.
A loop gain greater than one has a down side. In theory, the oscillator amplitude will increase without limit. In practice, the amplitude will increase until the output runs into some limiting factor such as the power supply voltage (the amplifier output runs into the supply rails) or the amplifier output current limits. The limiting reduces the effective gain of the amplifier (the effect is called gain compression). In a stable oscillator, the average loop gain will be one.
Although the limiting action stabilizes the output voltage, it has two significant effects: it introduces harmonic distortion and it affects the frequency stability of the oscillator.
The amount of distortion is related to the extra loop gain used for startup. If there's a lot of extra loop gain at small amplitudes, then the gain must decrease more at higher instantaneous amplitudes. That means more distortion.
The amount of distortion is also related to final amplitude of the oscillation. Although an amplifier's gain is ideally linear, in practice it is nonlinear. The nonlinear transfer function can be expressed as a Taylor series. For small amplitudes, the higher order terms have little effect. For larger amplitudes, the nonlinearity is pronounced. Consequently, for low distortion, the oscillator's output amplitude should be a small fraction of the amplifier's dynamic range.
Meacham's bridge stabilized oscillator.
Larned Meacham disclosed the bridge oscillator circuit shown to the right in 1938. The circuit was described as having very high frequency stability and very pure sinusoidal output. Instead of using tube overloading to control the amplitude, Meacham proposed a circuit that set the loop gain to unity while the amplifier is in its linear region. Meacham's circuit included a quartz crystal oscillator and a lamp in a Wheatstone bridge.
In Meacham's circuit, the frequency determining components are in the negative feedback branch of the bridge and the gain controlling elements are in the positive feedback branch. The crystal, Z4, operates in series resonance. As such it minimizes the negative feedback at resonance. The particular crystal exhibited a real resistance of 114 ohms at resonance. At frequencies below resonance, the crystal is capacitive and the "gain" of the negative feedback branch has a negative phase shift. At frequencies above resonance, the crystal is inductive and the "gain" of the negative feedback branch has a positive phase shift. The phase shift goes through zero at the resonant frequency. As the lamp heats up, it decreases the positive feedback. The Q of the crystal in Meacham's circuit is given as 104,000. At any frequency different from the resonant frequency by more than a small multiple of the bandwidth of the crystal, the negative feedback branch dominates the loop gain and there can be no self-sustaining oscillation except within the narrow bandwidth of the crystal.
In 1944 (after Hewlett's design), J. K. Clapp modified Meacham's circuit to use a vacuum tube phase inverter instead of a transformer to drive the bridge. A modified Meacham oscillator uses Clapp's phase inverter but substitutes a diode limiter for the tungsten lamp.
Hewlett's oscillator.
William R. Hewlett's Wien bridge oscillator can be considered as a combination of a differential amplifier and a Wien bridge, connected in a positive feedback loop between the amplifier output and differential inputs. At the oscillating frequency, the bridge is almost balanced and has very small transfer ratio. The loop gain is a product of the very high amplifier gain and the very low bridge ratio. In Hewlett's circuit, the amplifier is implemented by two vacuum tubes. The amplifier's inverting input is the cathode of tube V1 and the non-inverting input is the control grid of tube V2. To simplify analysis, all the components other than R1, R2, C1 and C2 can be modeled as a non-inverting amplifier with a gain of 1+Rf/Rb and with a high input impedance. R1, R2, C1 and C2 form a bandpass filter which is connected to provide positive feedback at the frequency of oscillation. Rb self-heats and increases the negative feedback, which reduces the amplifier gain until the point is reached that there is just enough gain to sustain sinusoidal oscillation without overdriving the amplifier. If R1 = R2 and C1 = C2 then at equilibrium Rf/Rb = 2 and the amplifier gain is 3. When the circuit is first energized, the lamp is cold and the gain of the circuit is greater than 3, which ensures start up. The dc bias current of vacuum tube V1 also flows through the lamp. This does not change the principles of the circuit's operation, but it does reduce the amplitude of the output at equilibrium because the bias current provides part of the heating of the lamp.
Hewlett's thesis made the following conclusions:
A resistance-capacity oscillator of the type just described should be well suited for laboratory service. It has the ease of handling of a beat-frequency oscillator and yet few of its disadvantages. In the first place the frequency stability at low frequencies is much better than is possible with the beat-frequency type. There need be no critical placements of parts to insure small temperature changes, nor carefully designed detector circuits to prevent interlocking of oscillators. As a result of this, the overall weight of the oscillator may be kept at a minimum. An oscillator of this type, including a 1 watt amplifier and power supply, weighed only 18 pounds, in contrast to 93 pounds for the General Radio beat-frequency oscillator of comparable performance. The distortion and constancy of output compare favorably with the best beat-frequency oscillators now available. Lastly, an oscillator of this type can be laid out and constructed on the same basis as a commercial broadcast receiver, but with fewer adjustments to make. It thus combines quality of performance with cheapness of cost to give an ideal laboratory oscillator.
Wien bridge.
Bridge circuits were a common way of measuring component values by comparing them to known values. Often an unknown component would be put in one arm of a bridge, and then the bridge would be nulled by adjusting the other arms or changing the frequency of the voltage source (see, for example, the Wheatstone bridge).
The Wien bridge is one of many common bridges. Wien's bridge is used for precision measurement of capacitance in terms of resistance and frequency. It was also used to measure audio frequencies.
The Wien bridge does not require equal values of "R" or "C". The phase of the signal at Vp relative to the signal at Vout varies from almost 90° leading at low frequency to almost 90° lagging at high frequency. At some intermediate frequency, the phase shift will be zero. At that frequency the ratio of Z1 to Z2 will be purely real (zero imaginary part). If the ratio of "Rb" to "Rf" is adjusted to the same ratio, then the bridge is balanced and the circuit can sustain oscillation. The circuit will oscillate even if "Rb" / "Rf" has a small phase shift and even if the inverting and non-inverting inputs of the amplifier have different phase shifts. There will always be a frequency at which the total phase shift of each branch of the bridge will be equal. If "Rb" / "Rf" has no phase shift and the phase shifts of the amplifier's inputs are zero then the bridge is balanced when:
formula_2 and formula_3
where ω is the radian frequency.
If one chooses "R1" = "R2" and "C1" = "C2" then "Rf" = 2 "Rb".
In practice, the values of "R" and "C" will never be exactly equal, but the equations above show that for fixed values in the Z1 and Z2 impedances, the bridge will balance at some "ω" and some ratio of "Rb"/"Rf".
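The two balance equations above can be checked numerically; the component values below are arbitrary and deliberately unequal:

```python
import math

# Arbitrary, deliberately unequal component values (illustrative only)
r1, r2 = 10e3, 22e3
c1, c2 = 10e-9, 4.7e-9

# Balance frequency: omega^2 = 1 / (R1*R2*C1*C2)
omega = 1.0 / math.sqrt(r1 * r2 * c1 * c2)
f = omega / (2.0 * math.pi)

# Required feedback ratio at balance: Rf/Rb = C1/C2 + R2/R1
ratio = c1 / c2 + r2 / r1
```

For these values the bridge balances near 1.57 kHz with Rf/Rb ≈ 4.33, illustrating that equal components are not required.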
Analysis.
Analyzed from loop gain.
According to Schilling, the loop gain of the Wien bridge oscillator, under the condition that R1=R2=R and C1=C2=C, is given by
formula_4
where formula_5 is the frequency-dependent gain of the op-amp (note, the component names in Schilling have been replaced with the component names in the first figure).
Schilling further says that the condition of oscillation is T = 1, which is satisfied by
formula_6
and
formula_7 with formula_8
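Schilling's condition can be verified numerically: substituting s = jω with ω = 1/(RC), together with the Rf/Rb ratio above for a finite gain A0, makes the loop gain exactly one. The component values here are illustrative:

```python
# Numeric check of Schilling's loop-gain condition (component values illustrative)
r, c = 10e3, 16e-9
a0 = 1e5                              # finite op-amp gain A_0
rb = 1e3
rf = rb * (2 * a0 + 3) / (a0 - 3)     # Rf/Rb = (2*A_0 + 3) / (A_0 - 3)

s = 1j / (r * c)                      # s = j*omega at omega = 1/(R*C)
bandpass = (r * c * s) / ((r * c * s) ** 2 + 3 * r * c * s + 1)   # equals 1/3 here
t = (bandpass - rb / (rb + rf)) * a0  # loop gain T; evaluates to 1
```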
Another analysis, with particular reference to frequency stability and selectivity, appears in the references.
formula_9
formula_10
formula_11
formula_12
Frequency determining network.
Let R=R1=R2 and C=C1=C2
formula_13
Normalize to "CR"=1.
formula_14
Thus the frequency determining network has a zero at 0 and poles at formula_15 or −2.6180 and −0.38197.
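The poles quoted above are the roots of the normalized denominator s² + 3s + 1, which is easy to confirm:

```python
import numpy as np

# Roots of the normalized denominator s^2 + 3*s + 1
poles = np.roots([1, 3, 1])   # approximately -2.6180 and -0.38197
```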
Amplitude stabilization.
The key to the Wien bridge oscillator's low distortion oscillation is an amplitude stabilization method that does not use clipping. The idea of using a lamp in a bridge configuration for amplitude stabilization was published by Meacham in 1938. The amplitude of electronic oscillators tends to increase until clipping or other gain limitation is reached. This leads to high harmonic distortion, which is often undesirable.
Hewlett used an incandescent bulb as a power detector, low pass filter and gain control element in the oscillator feedback path to control the output amplitude. The resistance of the light bulb filament (see resistivity article) increases as its temperature increases. The temperature of the filament depends on the power dissipated in the filament and some other factors. If the oscillator's period (an inverse of its frequency) is significantly shorter than the thermal time constant of the filament, then the temperature of the filament will be substantially constant over a cycle. The filament resistance will then determine the amplitude of the output signal. If the amplitude increases, the filament heats up and its resistance increases. The circuit is designed so that a larger filament resistance reduces loop gain, which in turn will reduce the output amplitude. The result is a negative feedback system that stabilizes the output amplitude to a constant value. With this form of amplitude control, the oscillator operates as a near ideal linear system and provides a very low distortion output signal. Oscillators that use limiting for amplitude control often have significant harmonic distortion. At low frequencies, as the time period of the Wien bridge oscillator approaches the thermal time constant of the incandescent bulb, the circuit operation becomes more nonlinear, and the output distortion rises significantly.
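The equilibrium described above can be sketched as a toy fixed-point iteration. The quadratic lamp law and all numbers here are illustrative assumptions, not a model of a real filament:

```python
# Toy model of lamp-based amplitude stabilization (all values and the simple
# quadratic lamp law Rb = r0 + k*v^2 are illustrative assumptions)
rf = 2000.0
r0, k = 500.0, 100.0           # filament resistance rises with drive amplitude v

v = 0.1                        # small start-up amplitude (noise)
for _ in range(2000):
    rb = r0 + k * v * v
    gain = 1.0 + rf / rb       # non-inverting amplifier gain
    v *= (gain / 3.0) ** 0.05  # amplitude grows while gain > 3, shrinks when < 3

# At equilibrium gain -> 3, i.e. Rb -> Rf/2 = 1000, so v -> sqrt((1000 - 500)/100)
```

The iteration settles at the amplitude where the lamp resistance makes the gain exactly 3, mirroring the negative-feedback argument in the text.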
Light bulbs have their disadvantages when used as gain control elements in Wien bridge oscillators, most notably a very high sensitivity to vibration due to the bulb's microphonic nature amplitude modulating the oscillator output, a limitation in high frequency response due to the inductive nature of the coiled filament, and current requirements that exceed the capability of many op-amps. Modern Wien bridge oscillators have used other nonlinear elements, such as diodes, thermistors, field effect transistors, or photocells for amplitude stabilization in place of light bulbs. Distortion as low as 0.0003% (3 ppm) can be achieved with modern components unavailable to Hewlett.
Wien bridge oscillators that use thermistors exhibit extreme sensitivity to ambient temperature due to the low operating temperature of a thermistor compared to an incandescent lamp.
Automatic gain control dynamics.
Small perturbations in the value of Rb cause the dominant poles to move back and forth across the jω (imaginary) axis. If the poles move into the left half plane, the oscillation dies out exponentially to zero. If the poles move into the right half plane, the oscillation grows exponentially until something limits it. If the perturbation is very small, the magnitude of the equivalent Q is very large so that the amplitude changes slowly. If the perturbations are small and reverse after a short time, the envelope follows a ramp. The envelope is approximately the integral of the perturbation. The perturbation to envelope transfer function rolls off at 6 dB/octave and causes −90° of phase shift.
The light bulb has thermal inertia so that its power to resistance transfer function exhibits a single pole low pass filter. The envelope transfer function and the bulb transfer function are effectively in cascade, so that the control loop has effectively a low pass pole and a pole at zero and a net phase shift of almost −180°. This would cause poor transient response in the control loop due to low phase margin. The output might exhibit squegging. Bernard M. Oliver showed that slight compression of the gain by the amplifier mitigates the envelope transfer function so that most oscillators show good transient response, except in the rare case where non-linearity in the vacuum tubes canceled each other producing an unusually linear amplifier.
References.
<templatestyles src="Reflist/styles.css" />
Other references.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "f_{hz}=\\frac{1}{2 \\pi R C}"
},
{
"math_id": 1,
"text": "R_b = \\frac {R_f} {2} "
},
{
"math_id": 2,
"text": "\\omega^2 = {1 \\over R_1 R_2 C_1 C_2}"
},
{
"math_id": 3,
"text": " {R_f \\over R_b} = {C_1 \\over C_2} + {R_2 \\over R_1} "
},
{
"math_id": 4,
"text": "T = \\left( \\frac { R C s } {R^2 C^2 s^2 + 3RCs +1 } - \\frac {R_b} {R_b + R_f } \\right) A_0 \\,"
},
{
"math_id": 5,
"text": " A_0 \\,"
},
{
"math_id": 6,
"text": " \\omega = \\frac {1} {R C} \\rightarrow f = \\frac {1} {2 \\pi R C}\\,"
},
{
"math_id": 7,
"text": " \\frac {R_f} {R_b} = \\frac {2 A_0 + 3} {A_0 - 3} \\,"
},
{
"math_id": 8,
"text": "\\lim_{A_0\\rightarrow \\infin} \\frac {R_f} {R_b} = 2 \\, "
},
{
"math_id": 9,
"text": " H(s) = \\frac { R_1 / (1 + sC_1 R_1) } {R_1 / (1 + sC_1 R_1) + R_2 + 1/(sC_2)} "
},
{
"math_id": 10,
"text": " H(s) = \\frac { s C_2 R_1 } {(1 + s C_1 R_1) (s C_2 R_1 / (1 + sC_1 R_1) + s C_2 R_2 + 1 )} "
},
{
"math_id": 11,
"text": " H(s) = \\frac { s C_2 R_1 } {s C_2 R_1 + (1 + s C_1 R_1) (s C_2 R_2 + 1 )} "
},
{
"math_id": 12,
"text": " H(s) = \\frac { s C_2 R_1 } {C_1 C_2 R_1 R_2 s^2 + (C_2 R_1 + C_2 R_2 + C_1 R_1) s + 1 } "
},
{
"math_id": 13,
"text": " H(s) = \\frac { s C R } {C^2 R^2 s^2 + 3 C R s + 1 } "
},
{
"math_id": 14,
"text": " H(s) = \\frac { s } {s^2 + 3 s + 1 } "
},
{
"math_id": 15,
"text": "-1.5 \pm \frac{\sqrt{5}}{2}"
}
] |
https://en.wikipedia.org/wiki?curid=1531173
|
15311782
|
Accessible category
|
The theory of accessible categories is a part of mathematics, specifically of category theory. It attempts to describe categories in terms of the "size" (a cardinal number) of the operations needed to generate their objects.
The theory originates in the work of Grothendieck completed by 1969, and Gabriel and Ulmer (1971). It has been further developed in 1989 by Michael Makkai and Robert Paré, with motivation coming from model theory, a branch of mathematical logic.
A standard text book by Adámek and Rosický appeared in 1994.
Accessible categories also have applications in homotopy theory. Grothendieck continued the development of the theory for homotopy-theoretic purposes in his (still partly unpublished) 1991 manuscript "Les dérivateurs".
Some properties of accessible categories depend on the set universe in use, particularly on the cardinal properties and Vopěnka's principle.
κ-directed colimits and κ-presentable objects.
Let formula_0 be an infinite regular cardinal, i.e. a cardinal number that is not the sum of a smaller number of smaller cardinals; examples are formula_1 (aleph-0), the first infinite cardinal number, and formula_2, the first uncountable cardinal. A partially ordered set formula_3 is called formula_0-directed if every subset formula_4 of formula_5 of cardinality less than formula_0 has an upper bound in formula_5. In particular, the ordinary directed sets are precisely the formula_6-directed sets.
Now let formula_7 be a category. A direct limit (also known as a directed colimit) over a formula_0-directed set formula_3 is called a formula_0-directed colimit. An object formula_8 of formula_7 is called formula_0-presentable if the Hom functor formula_9 preserves all formula_0-directed colimits in formula_7. It is clear that every formula_0-presentable object is also formula_10-presentable whenever formula_11, since every formula_10-directed colimit is also a formula_0-directed colimit in that case. A formula_6-presentable object is called finitely presentable.
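For the formula_6 case, a nonempty finite poset is directed exactly when every pair of elements (and hence every finite subset) has an upper bound in it; a brute-force check, for illustration only:

```python
from itertools import combinations

def is_directed(elements, leq):
    """True if every pair (hence, by induction, every nonempty finite
    subset) has an upper bound within the given set."""
    return all(
        any(leq(a, c) and leq(b, c) for c in elements)
        for a, b in combinations(elements, 2)
    )

divides = lambda a, b: b % a == 0

# Divisors of 12 under divisibility are directed (12 bounds everything),
# while {1, 2, 3} is not: 2 and 3 have no common upper bound in the set.
```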
κ-accessible and locally presentable categories.
The category formula_7 is called formula_0-accessible provided that formula_7 has formula_0-directed colimits and contains a set formula_13 of formula_0-presentable objects such that every object of formula_7 is a formula_0-directed colimit of objects of formula_13.
An formula_6-accessible category is called finitely accessible.
A category is called accessible if it is formula_0-accessible for some infinite regular cardinal formula_0.
When an accessible category is also cocomplete, it is called locally presentable.
A functor formula_14 between formula_0-accessible categories is called formula_0-accessible provided that formula_15 preserves formula_0-directed colimits.
Theorems.
One can show that every locally presentable category is also complete. Furthermore, a category is locally presentable if and only if it is equivalent to the category of models of a limit sketch.
Adjoint functors between locally presentable categories have a particularly simple characterization. A functor formula_14 between locally presentable categories is a left adjoint if and only if it preserves all small colimits, and it is a right adjoint if and only if it preserves all small limits and is accessible.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\kappa"
},
{
"math_id": 1,
"text": "\\aleph _{0}"
},
{
"math_id": 2,
"text": "\\aleph_{1} "
},
{
"math_id": 3,
"text": "(I, \\leq) "
},
{
"math_id": 4,
"text": "J "
},
{
"math_id": 5,
"text": "I "
},
{
"math_id": 6,
"text": "\\aleph_0"
},
{
"math_id": 7,
"text": "C"
},
{
"math_id": 8,
"text": "X"
},
{
"math_id": 9,
"text": "\\operatorname{Hom}(X,-)"
},
{
"math_id": 10,
"text": "\\kappa'"
},
{
"math_id": 11,
"text": "\\kappa\\leq\\kappa'"
},
{
"math_id": 12,
"text": "R"
},
{
"math_id": 13,
"text": "P"
},
{
"math_id": 14,
"text": "F : C \\to D"
},
{
"math_id": 15,
"text": "F"
},
{
"math_id": 16,
"text": "\\aleph_1"
}
] |
https://en.wikipedia.org/wiki?curid=15311782
|
15312359
|
Polynomial and rational function modeling
|
In statistical modeling (especially process modeling), polynomial functions and rational functions are sometimes used as an empirical technique for curve fitting.
Polynomial function models.
A polynomial function is one that has the form
formula_0
where "n" is a non-negative integer that defines the degree of the polynomial. A polynomial with a degree of 0 is simply a constant function; with a degree of 1 is a line; with a degree of 2 is a quadratic; with a degree of 3 is a cubic, and so on.
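A polynomial of this form is conventionally evaluated with Horner's method, which needs only "n" multiplications:

```python
def poly_eval(coeffs, x):
    """Evaluate a_n*x^n + ... + a_1*x + a_0 via Horner's method.

    coeffs is given highest degree first: [a_n, ..., a_1, a_0].
    """
    result = 0.0
    for a in coeffs:
        result = result * x + a
    return result

# Degree 2 (quadratic): y = 2x^2 + 3x + 1
```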
Historically, polynomial models are among the most frequently used empirical models for curve fitting.
Advantages.
These models are popular for the following reasons.
Disadvantages.
However, polynomial models also have the following limitations.
When modeling via polynomial functions is inadequate due to any of the limitations above, the use of rational functions for modeling may give a better fit.
Rational function models.
A rational function is simply the ratio of two polynomial functions.
formula_1
with "n" denoting a non-negative integer that defines the degree of the numerator and "m" denoting a non-negative integer that defines the degree of the denominator. For fitting rational function models, the constant term in the denominator is usually set to 1. Rational functions are typically identified by the degrees of the numerator and denominator. For example, a quadratic for the numerator and a cubic for the denominator is identified as a quadratic/cubic rational function. The rational function model is a generalization of the polynomial model: rational function models contain polynomial models as a subset (i.e., the case when the denominator is a constant).
Advantages.
Rational function models have a number of advantages; in particular, starting values for a nonlinear fit can be obtained with a simple linear fit. For example, given the model
formula_2
one would need to select four representative points, and perform a linear fit on the model
formula_3
which is derived from the previous equation by clearing the denominator. Here, the "x" and "y" contain the subset of points, not the full data set. The estimated coefficients from this linear fit are used as the starting values for fitting the nonlinear model to the full data set.
This type of fit, with the response variable appearing on both sides of the function, should only be used to obtain starting values for the nonlinear fit. The statistical properties of fits like this are not well understood.
The subset of points should be selected over the range of the data. It is not critical which points are selected, although obvious outliers should be avoided.
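The four-point starting-value procedure described above amounts to a single 4×4 linear solve; a sketch, with data generated from known coefficients so the recovery can be checked:

```python
import numpy as np

def rational_start_values(x, y):
    """Starting values for y = (A0 + A1*x) / (1 + B1*x + B2*x^2) from
    exactly four (x, y) points, via the linearized form
    y = A0 + A1*x - B1*x*y - B2*x^2*y  (a 4x4 linear solve)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    m = np.column_stack([np.ones(4), x, -x * y, -x * x * y])
    return np.linalg.solve(m, y)      # [A0, A1, B1, B2]

# Data generated exactly from known coefficients, so the solve recovers them
a0, a1, b1, b2 = 1.0, 2.0, 0.5, 0.25
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = (a0 + a1 * xs) / (1 + b1 * xs + b2 * xs ** 2)
```

With real (noisy) data the recovered values would only be starting points for the nonlinear fit, as the text cautions.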
Disadvantages.
Rational function models have the following disadvantages:
External links.
This article incorporates public domain material from
|
[
{
"math_id": 0,
"text": "\ny = a_{n}x^{n} + a_{n-1}x^{n-1} + \\cdots + a_{2}x^{2} + a_{1}x + a_{0} \n"
},
{
"math_id": 1,
"text": "\ny = \\frac{a_{n}x^{n} + a_{n-1}x^{n-1} + \\ldots + a_{2}x^{2} + a_{1}x + a_{0}} {b_{m}x^{m} + b_{m-1}x^{m-1} + \\ldots + b_{2}x^{2} + b_{1}x + b_{0}} \n"
},
{
"math_id": 2,
"text": "y=\\frac{A_0 + A_1x} {1 + B_1x + B_2x^{2}} ,"
},
{
"math_id": 3,
"text": "\ny = A_0 + A_1x - B_1xy - B_2x^2y ,\n"
}
] |
https://en.wikipedia.org/wiki?curid=15312359
|
153130
|
Quaternion group
|
Non-abelian group of order eight
In group theory, the quaternion group Q8 (sometimes just denoted by Q) is a non-abelian group of order eight, isomorphic to the eight-element subset
formula_0 of the quaternions under multiplication. It is given by the group presentation
formula_1
where "e" is the identity element and "e" commutes with the other elements of the group. These relations, discovered by W. R. Hamilton, also generate the quaternions as an algebra over the real numbers.
Another presentation of Q8 is
formula_2
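Hamilton's relations determine the full multiplication table; a minimal sketch representing elements as signed units:

```python
# Q8 elements as (sign, unit) pairs; UNIT_MUL encodes products of 1, i, j, k.
UNIT_MUL = {
    ('1', '1'): (1, '1'),  ('1', 'i'): (1, 'i'),  ('1', 'j'): (1, 'j'),  ('1', 'k'): (1, 'k'),
    ('i', '1'): (1, 'i'),  ('i', 'i'): (-1, '1'), ('i', 'j'): (1, 'k'),  ('i', 'k'): (-1, 'j'),
    ('j', '1'): (1, 'j'),  ('j', 'i'): (-1, 'k'), ('j', 'j'): (-1, '1'), ('j', 'k'): (1, 'i'),
    ('k', '1'): (1, 'k'),  ('k', 'i'): (1, 'j'),  ('k', 'j'): (-1, 'i'), ('k', 'k'): (-1, '1'),
}

def mul(p, q):
    (s1, u1), (s2, u2) = p, q
    s, u = UNIT_MUL[(u1, u2)]
    return (s1 * s2 * s, u)

i, j, k = (1, 'i'), (1, 'j'), (1, 'k')
# i^2 = j^2 = k^2 = ijk = -1, and ij = k while ji = -k (non-abelian)
```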
Like many other finite groups, it can be realized as the Galois group of a certain field of algebraic numbers.
Compared to dihedral group.
The quaternion group Q8 has the same order as the dihedral group D4, but a different structure, as shown by their Cayley and cycle graphs:
In the diagrams for D4, the group elements are marked with their action on a letter F in the defining representation R2. The same cannot be done for Q8, since it has no faithful representation in R2 or R3. D4 can be realized as a subset of the split-quaternions in the same way that Q8 can be viewed as a subset of the quaternions.
Cayley table.
The Cayley table (multiplication table) for Q8 is given by:
Properties.
The elements "i", "j", and "k" all have order four in Q8 and any two of them generate the entire group. Another presentation of Q8, using only two generators to avoid this redundancy, is:
formula_3
For instance, writing the group elements in lexicographically minimal normal forms, one may identify: formula_4 The quaternion group has the unusual property of being Hamiltonian: Q8 is non-abelian, but every subgroup is normal. Every Hamiltonian group contains a copy of Q8.
The quaternion group Q8 and the dihedral group D4 are the two smallest examples of a nilpotent non-abelian group.
The center and the commutator subgroup of Q8 is the subgroup formula_5. The inner automorphism group of Q8 is given by the group modulo its center, i.e. the factor group formula_6 which is isomorphic to the Klein four-group V. The full automorphism group of Q8 is isomorphic to S4, the symmetric group on four letters (see "Matrix representations" below), and the outer automorphism group of Q8 is thus S4/V, which is isomorphic to S3.
The quaternion group Q8 has five conjugacy classes, formula_7, and so five irreducible representations over the complex numbers, with dimensions 1, 1, 1, 1, 2:
Trivial representation.
Sign representations with i, j, k-kernel: Q8 has three maximal normal subgroups: the cyclic subgroups generated by i, j, and k respectively. For each maximal normal subgroup "N", we obtain a one-dimensional representation factoring through the 2-element quotient group "G"/"N". The representation sends elements of "N" to 1, and elements outside "N" to −1.
2-dimensional representation: Described below in "Matrix representations". It is not realizable over the real numbers, but is a complex representation: indeed, it is just the quaternions formula_8 considered as an algebra over formula_9, and the action is that of left multiplication by formula_10.
The character table of Q8 turns out to be the same as that of D4:
Nevertheless, all the irreducible characters formula_11 in the rows above have real values; this gives the decomposition of the real group algebra of formula_12 into minimal two-sided ideals:
formula_13
where the idempotents formula_14 correspond to the irreducibles:
formula_15
so that
formula_16
Each of these irreducible ideals is isomorphic to a real central simple algebra, the first four to the real field formula_17. The last ideal formula_18 is isomorphic to the skew field of quaternions formula_8 by the correspondence:
formula_19
Furthermore, the projection homomorphism formula_20 given by formula_21 has kernel ideal generated by the idempotent:
formula_22
so the quaternions can also be obtained as the quotient ring formula_23. Note that this is irreducible as a real representation of formula_24, but splits into two copies of the two-dimensional irreducible when extended to the complex numbers. Indeed, the complex group algebra is formula_25 where formula_26 is the algebra of biquaternions.
Matrix representations.
The two-dimensional irreducible complex representation described above gives the quaternion group Q8 as a subgroup of the general linear group formula_27. The quaternion group is a multiplicative subgroup of the quaternion algebra:
formula_28
which has a regular representation formula_29 by left multiplication on itself considered as a complex vector space with basis formula_30 so that formula_31 corresponds to the formula_32-linear mapping formula_33 The resulting representation
formula_34
is given by:
formula_35
Since all of the above matrices have unit determinant, this is a representation of Q8 in the special linear group formula_36.
A variant gives a representation by unitary matrices (table at right). Let formula_37 correspond to the linear mapping formula_38 so that formula_39 is given by:
formula_40
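One common choice of unitary 2×2 matrices satisfying the quaternion relations (conventions vary, so signs may differ from the table referenced above):

```python
import numpy as np

# A common unitary 2x2 representation of Q8 (one of several sign conventions)
I2 = np.eye(2, dtype=complex)
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)
qk = np.array([[0, 1j], [1j, 0]])
# All three square to -I2 and satisfy qi*qj*qk = -I2; each has determinant 1,
# consistent with Q8 sitting inside the special linear (indeed unitary) group.
```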
It is worth noting that physicists exclusively use a different convention for the formula_41 matrix representation to make contact with the usual Pauli matrices:
formula_42
This particular choice is convenient and elegant when one describes spin-1/2 states in the formula_43 basis and considers angular momentum ladder operators formula_44
There is also an important action of Q8 on the 2-dimensional vector space over the finite field formula_45 (table at right). A modular representation formula_46 is given by
formula_47
This representation can be obtained from the extension field:
formula_48
where formula_49 and the multiplicative group formula_50 has four generators, formula_51 of order 8. For each formula_52 the two-dimensional formula_53-vector space formula_54 admits a linear mapping:
formula_55
In addition we have the Frobenius automorphism formula_56 satisfying formula_57 and formula_58 Then the above representation matrices are:
formula_59
This representation realizes Q8 as a normal subgroup of GL(2, 3). Thus, for each matrix formula_60, we have a group automorphism
formula_61
with formula_62 In fact, these give the full automorphism group as:
formula_63
This is isomorphic to the symmetric group S4 since the linear mappings formula_64 permute the four one-dimensional subspaces of formula_65 i.e., the four points of the projective space formula_66
Also, this representation permutes the eight non-zero vectors of formula_65 giving an embedding of Q8 in the symmetric group S8, in addition to the embeddings given by the regular representations.
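One standard choice of matrices mod 3 realizing Q8 inside GL(2, 3) — the article's own matrices may be a different, conjugate choice:

```python
import numpy as np

# Q8 inside GL(2, 3): one standard choice of matrices mod 3 (illustrative;
# any conjugate choice works equally well)
P = 3
qi = np.array([[1, 1], [1, -1]]) % P
qj = np.array([[-1, 1], [1, 1]]) % P
qk = (qi @ qj) % P                      # k = i*j
neg_e = np.array([[-1, 0], [0, -1]]) % P
# Each generator squares to -e mod 3, and i*j*k = -e, as required.
```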
Galois group.
Richard Dedekind considered the field formula_67 in attempting to relate the quaternion group to Galois theory. In 1936 Ernst Witt published his approach to the quaternion group through Galois theory.
In 1981, Richard Dean showed the quaternion group can be realized as the Galois group Gal(T/Q) where Q is the field of rational numbers and T is the splitting field of the polynomial
formula_68.
The development uses the fundamental theorem of Galois theory in specifying four intermediate fields between Q and T and their Galois groups, as well as two theorems on cyclic extension of degree four over a field.
Generalized quaternion group.
A generalized quaternion group Q4"n" of order 4"n" is defined by the presentation
formula_69
for an integer "n" ≥ 2, with the usual quaternion group given by "n" = 2. Coxeter calls Q4"n" the dicyclic group formula_70, a special case of the binary polyhedral group formula_71 and related to the polyhedral group formula_72 and the dihedral group formula_73. The generalized quaternion group can be realized as the subgroup of formula_74 generated by
formula_75
where formula_76. It can also be realized as the subgroup of unit quaternions generated by formula_77 and formula_78.
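A short sketch (an addition, not from the original text) that generates the matrix group from the two generators above and confirms its order is 4"n"; the helper names and the rounding of matrix entries to nine decimal places are implementation assumptions used only to compare floating-point matrices:

```python
import cmath

def key(m, nd=9):
    """Hashable key for a 2x2 complex matrix, rounded to absorb float noise."""
    return tuple((round(z.real, nd), round(z.imag, nd)) for row in m for z in row)

def mul(a, b):
    """Product of two 2x2 complex matrices."""
    return tuple(
        tuple(sum(a[r][m] * b[m][c] for m in range(2)) for c in range(2))
        for r in range(2)
    )

def group_order(gens):
    """Size of the closure of gens under matrix multiplication."""
    seen = {key(g): g for g in gens}
    frontier = list(seen.values())
    while frontier:
        new = []
        for g in frontier:
            for h in gens:
                prod = mul(g, h)
                if key(prod) not in seen:
                    seen[key(prod)] = prod
                    new.append(prod)
        frontier = new
    return len(seen)

for n in range(2, 7):
    w = cmath.exp(1j * cmath.pi / n)
    x = ((w, 0), (0, w.conjugate()))
    y = ((0, -1), (1, 0))
    assert group_order([x, y]) == 4 * n   # Q_{4n}; n = 2 gives Q8 of order 8
```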
The generalized quaternion groups have the property that every abelian subgroup is cyclic. It can be shown that a finite "p"-group with this property (every abelian subgroup is cyclic) is either cyclic or a generalized quaternion group as defined above. Another characterization is that a finite "p"-group in which there is a unique subgroup of order "p" is either cyclic or a 2-group isomorphic to a generalized quaternion group. In particular, for a finite field "F" with odd characteristic, the 2-Sylow subgroup of SL2("F") is non-abelian and has only one subgroup of order 2, so this 2-Sylow subgroup must be a generalized quaternion group. Letting "p""r" be the size of "F", where "p" is prime, the size of the 2-Sylow subgroup of SL2("F") is 2"n", where "n" = ord2("p"2 − 1) + ord2("r").
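As a numerical illustration (an addition, not from the original text), the stated size of the 2-Sylow subgroup can be checked against the 2-part of |SL2("F""q")| = "q"("q" − 1)("q" + 1) for a few odd prime powers "q" = "p""r":

```python
def ord2(m):
    """2-adic valuation: the largest n with 2^n dividing m."""
    n = 0
    while m % 2 == 0:
        m //= 2
        n += 1
    return n

for p, r in [(3, 1), (3, 2), (5, 1), (7, 3), (11, 2)]:
    q = p ** r
    group_order = q * (q - 1) * (q + 1)    # |SL2(F_q)|
    # 2-part of the group order equals the stated Sylow size 2^n
    assert ord2(group_order) == ord2(p * p - 1) + ord2(r)
```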
The Brauer–Suzuki theorem shows that the groups whose Sylow 2-subgroups are generalized quaternion cannot be simple.
Another terminology reserves the name "generalized quaternion group" for a dicyclic group of order a power of 2, which admits the presentation
formula_79
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\{1,i,j,k,-1,-i,-j,-k\\}"
},
{
"math_id": 1,
"text": "\\mathrm{Q}_8 = \\langle \\bar{e},i,j,k \\mid \\bar{e}^2 = e, \\;i^2 = j^2 = k^2 = ijk = \\bar{e} \\rangle ,"
},
{
"math_id": 2,
"text": "\\mathrm{Q}_8 = \\langle a,b \\mid a^4 = e, a^2 = b^2, ba = a^{-1}b\\rangle."
},
{
"math_id": 3,
"text": "\\left \\langle x,y \\mid x^4 = 1, x^2 = y^2, y^{-1}xy = x^{-1} \\right \\rangle."
},
{
"math_id": 4,
"text": "\\{e, \\bar e, i, \\bar{i}, j, \\bar{j}, k, \\bar{k}\\}\n\\leftrightarrow \\{e, x^2, x, x^3, y, x^2 y, xy, x^3 y \\}. "
},
{
"math_id": 5,
"text": "\\{e,\\bar{e}\\}"
},
{
"math_id": 6,
"text": "\\mathrm{Q}_8/\\{e,\\bar{e}\\},"
},
{
"math_id": 7,
"text": "\\{e\\}, \\{\\bar{e}\\}, \\{i,\\bar{i}\\}, \\{j,\\bar{j}\\}, \\{k,\\bar{k}\\},"
},
{
"math_id": 8,
"text": "\\mathbb{H}"
},
{
"math_id": 9,
"text": "\\mathbb C"
},
{
"math_id": 10,
"text": "Q_8\\subset \\mathbb H "
},
{
"math_id": 11,
"text": "\\chi_\\rho"
},
{
"math_id": 12,
"text": "G = \\mathrm{Q}_8"
},
{
"math_id": 13,
"text": "\\R[\\mathrm{Q}_8]=\\bigoplus_\\rho (e_\\rho),"
},
{
"math_id": 14,
"text": "e_\\rho\\in \\R[\\mathrm{Q}_8]"
},
{
"math_id": 15,
"text": "e_\\rho = \\frac{\\dim(\\rho)}{|G|}\\sum_{g\\in G} \\chi_\\rho(g^{-1})g,"
},
{
"math_id": 16,
"text": "\\begin{align}\ne_{\\text{triv}} &= \\tfrac 18(e + \\bar e + i +\\bar i+j+\\bar j+k+\\bar k) \\\\\ne_{i\\text{-ker}} &= \\tfrac 18(e + \\bar e + i +\\bar i-j-\\bar j-k-\\bar k) \\\\\ne_{j\\text{-ker}} &= \\tfrac 18(e + \\bar e - i -\\bar i+j+\\bar j-k-\\bar k) \\\\\ne_{k\\text{-ker}} &= \\tfrac 18(e + \\bar e - i -\\bar i-j-\\bar j+k+\\bar k) \\\\\ne_{2} &= \\tfrac 28(2e - 2\\bar e) = \\tfrac 12(e - \\bar e)\n\\end{align}"
},
{
"math_id": 17,
"text": "\\R"
},
{
"math_id": 18,
"text": "(e_2)"
},
{
"math_id": 19,
"text": "\\begin{align}\n\\tfrac12(e-\\bar e) &\\longleftrightarrow 1, \\\\\n\\tfrac12(i-\\bar i) &\\longleftrightarrow i, \\\\\n\\tfrac12(j-\\bar j) &\\longleftrightarrow j, \\\\ \n\\tfrac12(k-\\bar k) &\\longleftrightarrow k.\n\\end{align}"
},
{
"math_id": 20,
"text": "\\R[\\mathrm{Q}_8]\\to (e_2)\\cong \\mathbb{H}"
},
{
"math_id": 21,
"text": "r\\mapsto re_2"
},
{
"math_id": 22,
"text": "e_2^\\perp = e_1+e_{i\\text{-ker}}+e_{j\\text{-ker}}+e_{k\\text{-ker}} = \n\\tfrac{1}{2}(e+\\bar e), "
},
{
"math_id": 23,
"text": "\\R[\\mathrm{Q}_8]/(e+\\bar e)\\cong \\mathbb H"
},
{
"math_id": 24,
"text": "Q_8"
},
{
"math_id": 25,
"text": "\\C[\\mathrm{Q}_8] \\cong \\C^{\\oplus 4} \\oplus M_2(\\C),"
},
{
"math_id": 26,
"text": "M_2(\\C) \\cong \\mathbb{H} \\otimes_{\\R} \\C \\cong \\mathbb{H} \\oplus \\mathbb{H}"
},
{
"math_id": 27,
"text": "\\operatorname{GL}(2, \\C)"
},
{
"math_id": 28,
"text": "\\H = \\R 1 + \\R i + \\R j + \\R k= \\C 1+ \\C j,"
},
{
"math_id": 29,
"text": "\\rho:\\H \\to \\operatorname{M}(2, \\C)"
},
{
"math_id": 30,
"text": "\\{1,j\\},"
},
{
"math_id": 31,
"text": "z \\in \\H"
},
{
"math_id": 32,
"text": "\\C"
},
{
"math_id": 33,
"text": "\\rho_z: a + jb \\mapsto z\\cdot(a + jb)."
},
{
"math_id": 34,
"text": "\\begin{cases} \\rho:\\mathrm{Q}_8 \\to \\operatorname{GL}(2,\\C)\\\\ g\\longmapsto\\rho_g \\end{cases}"
},
{
"math_id": 35,
"text": "\\begin{matrix}\ne \\mapsto \\begin{pmatrix}\n 1 & 0 \\\\\n 0 & 1\n\\end{pmatrix} &\ni \\mapsto \\begin{pmatrix}\n i & 0 \\\\\n 0 & \\!\\!\\!\\!-i\n\\end{pmatrix}&\nj \\mapsto \\begin{pmatrix}\n 0 & \\!\\!\\!\\!-1 \\\\\n 1 & 0\n\\end{pmatrix}&\nk \\mapsto \\begin{pmatrix}\n 0 & \\!\\!\\!\\!-i \\\\\n \\!\\!\\!-i & 0\n\\end{pmatrix} \\\\\n\\overline{e} \\mapsto \\begin{pmatrix}\n \\!\\!\\!-1 & 0 \\\\\n 0 & \\!\\!\\!\\!-1\n\\end{pmatrix} &\n\\overline{i} \\mapsto \\begin{pmatrix}\n \\!\\!\\!-i & 0 \\\\\n 0 & i\n\\end{pmatrix}&\n\\overline{j} \\mapsto \\begin{pmatrix}\n 0 & 1 \\\\\n \\!\\!\\!-1 & 0\n\\end{pmatrix}&\n\\overline{k} \\mapsto \\begin{pmatrix}\n 0 & i \\\\\n i & 0\n\\end{pmatrix}.\n\\end{matrix}\n"
},
{
"math_id": 36,
"text": "\\operatorname{SL}(2,\\C)"
},
{
"math_id": 37,
"text": "g\\in \\mathrm{Q}_8"
},
{
"math_id": 38,
"text": "\\rho_g:a+bj\\mapsto (a + bj)\\cdot jg^{-1}j^{-1},"
},
{
"math_id": 39,
"text": "\\rho:\\mathrm{Q}_8 \\to \\operatorname{SU}(2)"
},
{
"math_id": 40,
"text": "\\begin{matrix}\ne \\mapsto \\begin{pmatrix}\n 1 & 0 \\\\\n 0 & 1\n\\end{pmatrix} &\ni \\mapsto \\begin{pmatrix}\n i & 0 \\\\\n 0 & \\!\\!\\!\\!-i\n\\end{pmatrix}&\nj \\mapsto \\begin{pmatrix}\n 0 & 1 \\\\\n \\!\\!\\!-1 & 0\n\\end{pmatrix}&\nk \\mapsto \\begin{pmatrix}\n 0 & i \\\\\n i & 0\n\\end{pmatrix} \\\\\n\\overline{e} \\mapsto \\begin{pmatrix}\n \\!\\!\\!-1 & 0 \\\\\n 0 & \\!\\!\\!\\!-1\n\\end{pmatrix} &\n\\overline{i} \\mapsto \\begin{pmatrix}\n \\!\\!\\!-i & 0 \\\\\n 0 & i\n\\end{pmatrix}&\n\\overline{j} \\mapsto \\begin{pmatrix}\n 0 & \\!\\!\\!\\!-1 \\\\\n 1 & 0\n\\end{pmatrix}&\n\\overline{k} \\mapsto \\begin{pmatrix}\n 0 & \\!\\!\\!\\!-i \\\\\n \\!\\!\\!-i & 0\n\\end{pmatrix}.\n\\end{matrix}"
},
{
"math_id": 41,
"text": "\\operatorname{SU}(2)"
},
{
"math_id": 42,
"text": "\\begin{matrix}\n&e \\mapsto \\begin{pmatrix}\n 1 & 0 \\\\\n 0 & 1\n\\end{pmatrix} = \\quad\\, 1_{2\\times2} \n&i \\mapsto \\begin{pmatrix}\n 0 & \\!\\!\\!-i\\! \\\\\n \\!\\!-i\\!\\! & 0\n\\end{pmatrix} = -i \\sigma_x\n&j \\mapsto \\begin{pmatrix}\n 0 & \\!\\!\\!-1\\! \\\\\n 1 & 0\n\\end{pmatrix} = -i \\sigma_y\n&k \\mapsto \\begin{pmatrix}\n \\!\\!-i\\!\\! & 0 \\\\\n 0 & i\n\\end{pmatrix} = -i \\sigma_z\\\\\n&\\overline{e} \\mapsto \\begin{pmatrix}\n \\!\\!-1\\! & 0 \\\\\n 0 & \\!\\!\\!-1\\!\n\\end{pmatrix} = -1_{2\\times2}\n&\\overline{i} \\mapsto \\begin{pmatrix}\n 0 & i \\\\\n i & 0\n\\end{pmatrix} = \\,\\,\\,\\, i \\sigma_x\n&\\overline{j} \\mapsto \\begin{pmatrix}\n 0 & 1 \\\\\n \\!\\!-1\\!\\! & 0\n\\end{pmatrix} = \\,\\,\\,\\, i \\sigma_y\n&\\overline{k} \\mapsto \\begin{pmatrix}\n i & 0 \\\\\n 0 & \\!\\!\\!-i\\!\n\\end{pmatrix} = \\,\\,\\,\\, i \\sigma_z.\n\\end{matrix}"
},
{
"math_id": 43,
"text": "(\\vec{J}^2, J_z)"
},
{
"math_id": 44,
"text": "J_{\\pm} = J_x \\pm iJ_y."
},
{
"math_id": 45,
"text": "\\mathbb{F}_3 =\\{0, 1, -1\\}"
},
{
"math_id": 46,
"text": "\\rho: \\mathrm{Q}_8 \\to \\operatorname{SL}(2, 3)"
},
{
"math_id": 47,
"text": "\\begin{matrix}\ne \\mapsto \\begin{pmatrix}\n 1 & 0 \\\\\n 0 & 1\n\\end{pmatrix} &\ni \\mapsto \\begin{pmatrix}\n 1 & 1 \\\\\n 1 & \\!\\!\\!\\!-1\n\\end{pmatrix} &\nj \\mapsto \\begin{pmatrix}\n \\!\\!\\!-1 & 1 \\\\\n 1 & 1\n\\end{pmatrix} &\nk \\mapsto \\begin{pmatrix}\n 0 & \\!\\!\\!\\!-1 \\\\\n 1 & 0\n\\end{pmatrix} \\\\\n\\overline{e} \\mapsto \\begin{pmatrix}\n \\!\\!\\!-1 & 0 \\\\\n 0 & \\!\\!\\!\\!-1\n\\end{pmatrix} &\n\\overline{i} \\mapsto \\begin{pmatrix}\n \\!\\!\\!-1 & \\!\\!\\!\\!-1 \\\\\n \\!\\!\\!-1 & 1\n\\end{pmatrix} &\n\\overline{j} \\mapsto \\begin{pmatrix}\n 1 & \\!\\!\\!\\!-1 \\\\\n \\!\\!\\!-1 & \\!\\!\\!\\!-1\n\\end{pmatrix} &\n\\overline{k} \\mapsto \\begin{pmatrix}\n 0 & 1 \\\\\n \\!\\!\\!-1 & 0\n\\end{pmatrix}.\n\\end{matrix}"
},
{
"math_id": 48,
"text": " \\mathbb{F}_9 = \\mathbb{F}_3 [k] = \\mathbb{F}_3 1 + \\mathbb{F}_3 k,"
},
{
"math_id": 49,
"text": "k^2=-1"
},
{
"math_id": 50,
"text": "\\mathbb{F}_9^{\\times}"
},
{
"math_id": 51,
"text": "\\pm(k\\pm1),"
},
{
"math_id": 52,
"text": "z \\in \\mathbb{F}_9,"
},
{
"math_id": 53,
"text": "\\mathbb{F}_3"
},
{
"math_id": 54,
"text": "\\mathbb{F}_9"
},
{
"math_id": 55,
"text": "\\begin{cases} \\mu_z: \\mathbb{F}_9 \\to \\mathbb{F}_9 \\\\ \\mu_z(a+bk)=z\\cdot(a+bk) \\end{cases}"
},
{
"math_id": 56,
"text": "\\phi(a+bk)=(a+bk)^3"
},
{
"math_id": 57,
"text": "\\phi^2 = \\mu_1 "
},
{
"math_id": 58,
"text": "\\phi\\mu_z = \\mu_{\\phi(z)}\\phi."
},
{
"math_id": 59,
"text": "\\begin{align}\n\\rho(\\bar e) &=\\mu_{-1}, \\\\\n\\rho(i) &=\\mu_{k+1}\\phi, \\\\\n\\rho(j)&=\\mu_{k-1} \\phi, \\\\\n\\rho(k)&=\\mu_{k}.\n\\end{align}"
},
{
"math_id": 60,
"text": "m\\in \\operatorname{GL}(2,3)"
},
{
"math_id": 61,
"text": "\\begin{cases} \\psi_m:\\mathrm{Q}_8\\to\\mathrm{Q}_8 \\\\ \\psi_m(g)=mgm^{-1} \\end{cases}"
},
{
"math_id": 62,
"text": "\\psi_I =\\psi_{-I}=\\mathrm{id}_{\\mathrm{Q}_8}."
},
{
"math_id": 63,
"text": "\\operatorname{Aut}(\\mathrm{Q}_8) \\cong \\operatorname{PGL}(2, 3) = \\operatorname{GL}(2,3)/\\{\\pm I\\}\\cong S_4."
},
{
"math_id": 64,
"text": "m:\\mathbb{F}_3^2 \\to \\mathbb{F}_3^2"
},
{
"math_id": 65,
"text": "\\mathbb{F}_3^2,"
},
{
"math_id": 66,
"text": "\\mathbb{P}^1 (\\mathbb{F}_3) = \\operatorname{PG}(1,3)."
},
{
"math_id": 67,
"text": "\\mathbb{Q}[\\sqrt{2}, \\sqrt{3}]"
},
{
"math_id": 68,
"text": "x^8 - 72 x^6 + 180 x^4 - 144 x^2 + 36"
},
{
"math_id": 69,
"text": "\\langle x,y \\mid x^{2n} = y^4 = 1, x^n = y^2, y^{-1}xy = x^{-1}\\rangle"
},
{
"math_id": 70,
"text": "\\langle 2, 2, n\\rangle"
},
{
"math_id": 71,
"text": "\\langle \\ell, m, n\\rangle"
},
{
"math_id": 72,
"text": "(p,q,r)"
},
{
"math_id": 73,
"text": "(2,2,n)"
},
{
"math_id": 74,
"text": "\\operatorname{GL}_2(\\Complex)"
},
{
"math_id": 75,
"text": "\\left(\\begin{array}{cc}\n \\omega_n & 0 \\\\\n 0 & \\overline{\\omega}_n\n \\end{array}\n \\right)\n \\mbox{ and }\n \\left(\\begin{array}{cc}\n 0 & -1 \\\\\n 1 & 0\n \\end{array}\n \\right)\n"
},
{
"math_id": 76,
"text": "\\omega_n = e^{i\\pi/n}"
},
{
"math_id": 77,
"text": "x=e^{i\\pi/n}"
},
{
"math_id": 78,
"text": "y=j"
},
{
"math_id": 79,
"text": "\\langle x,y \\mid x^{2^m} = y^4 = 1, x^{2^{m-1}} = y^2, y^{-1}xy = x^{-1}\\rangle."
}
] |
https://en.wikipedia.org/wiki?curid=153130
|
1531369
|
Phase problem
|
Issue in determining wave cycle part
In physics, the phase problem is the problem of loss of information concerning the phase that can occur when making a physical measurement. The name comes from the field of X-ray crystallography, where the phase problem has to be solved for the determination of a structure from diffraction data. The phase problem is also met in the fields of imaging and signal processing. Various approaches to phase retrieval have been developed over the years.
Overview.
Light detectors, such as photographic plates or CCDs, measure only the intensity of the light that hits them. This measurement is incomplete (even when neglecting other degrees of freedom such as polarization and angle of incidence) because a light wave has not only an amplitude (related to the intensity), but also a phase (related to the direction) and a polarization, which are systematically lost in a measurement. In diffraction or microscopy experiments, the phase part of the wave often contains valuable information on the studied specimen. The phase problem constitutes a fundamental limitation ultimately related to the nature of measurement in quantum mechanics.
In X-ray crystallography, the diffraction data when properly assembled gives the amplitude of the 3D Fourier transform of the molecule's electron density in the unit cell. If the phases are known, the electron density can be simply obtained by Fourier synthesis. This Fourier transform relation also holds for two-dimensional far-field diffraction patterns (also called Fraunhofer diffraction) giving rise to a similar type of phase problem.
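A toy one-dimensional illustration (hypothetical, not from the article): Fourier synthesis from the amplitudes with the correct phases recovers a test density exactly, whereas the measured amplitudes alone (phases discarded) do not:

```python
import cmath

def dft(x):
    """Discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

def idft(X):
    """Inverse DFT, i.e. Fourier synthesis."""
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * t / n) for f in range(n)) / n
            for t in range(n)]

density = [0.0, 3.0, 1.0, 0.0, 0.0, 2.0, 0.0, 0.0]   # toy "electron density"
F = dft(density)

# With the true phases, synthesis recovers the density exactly ...
recovered = [z.real for z in idft(F)]
assert all(abs(a - b) < 1e-9 for a, b in zip(recovered, density))

# ... but a detector records only |F|; setting all phases to zero fails.
amplitudes_only = [abs(z) for z in F]
wrong = [z.real for z in idft(amplitudes_only)]
assert any(abs(a - b) > 0.5 for a, b in zip(wrong, density))
```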
Phase retrieval.
There are several ways to retrieve the lost phases. The phase problem must be solved in x-ray crystallography, neutron crystallography, and electron crystallography.
Not all of the methods of phase retrieval work with every wavelength (x-ray, neutron, and electron) used in crystallography.
Direct ("ab initio)" methods.
If the crystal diffracts to high resolution (<1.2 Å), the initial phases can be estimated using direct methods. Direct methods can be used in x-ray crystallography, neutron crystallography, and electron crystallography.
A number of initial phases are tested and selected by this method. Another is the Patterson method, which directly determines the positions of heavy atoms. The Patterson function gives a large value in a position which corresponds to interatomic vectors. This method can be applied only when the crystal contains heavy atoms or when a significant fraction of the structure is already known.
For molecules whose crystals provide reflections in the sub-Ångström range, it is possible to determine phases by brute force methods, testing a series of phase values until spherical structures are observed in the resultant electron density map. This works because atoms have a characteristic structure when viewed in the sub-Ångström range. The technique is limited by processing power and data quality. For practical purposes, it is limited to "small molecules" and peptides because they consistently provide high-quality diffraction with very few reflections.
Molecular replacement (MR).
Phases can also be inferred by using a process called molecular replacement, where a similar molecule's already-known phases are grafted onto the intensities of the molecule at hand, which are observationally determined. These phases can be obtained experimentally from a homologous molecule or if the phases are known for the same molecule but in a different crystal, by simulating the molecule's packing in the crystal and obtaining theoretical phases. Generally, these techniques are less desirable since they can severely bias the solution of the structure. They are useful, however, for ligand binding studies, or between molecules with small differences and relatively rigid structures (for example derivatizing a small molecule).
Isomorphous replacement.
"Multiple isomorphous replacement (MIR)".
"Multiple isomorphous replacement (MIR)", where heavy atoms are inserted into the structure (usually by synthesizing proteins with analogs or by soaking).
Anomalous scattering.
"Multi-wavelength anomalous dispersion (MAD)".
A powerful solution is the "multi-wavelength anomalous dispersion" (MAD) method. In this technique, atoms' inner electrons absorb X-rays of particular wavelengths, and reemit the X-rays after a delay, inducing a phase shift in all of the reflections, known as the "anomalous dispersion effect". Analysis of this phase shift (which may be different for individual reflections) results in a solution for the phases. Since X-ray fluorescence techniques (like this one) require excitation at very specific wavelengths, it is necessary to use synchrotron radiation when using the MAD method.
Phase improvement.
Refining initial phases.
In many cases, an initial set of phases is determined and the electron density map for the diffraction pattern is calculated. The map is then used to determine portions of the structure, and those portions are used to simulate a new set of phases. This new set of phases is known as a "refinement". These phases are reapplied to the original amplitudes, and an improved electron density map is derived, from which the structure is corrected. This process is repeated until an error term (usually formula_0) has stabilized to a satisfactory value. Because of the phenomenon of phase bias, it is possible for an incorrect initial assignment to propagate through successive refinements, so satisfactory conditions for a structure assignment are still a matter of debate. Indeed, some spectacular incorrect assignments have been reported, including a protein where the entire sequence was threaded backwards.
|
[
{
"math_id": 0,
"text": "R_\\textrm{free}"
}
] |
https://en.wikipedia.org/wiki?curid=1531369
|
1531404
|
Split exact sequence
|
In mathematics, a split exact sequence is a short exact sequence in which the middle term is built out of the two outer terms in the simplest possible way.
Equivalent characterizations.
A short exact sequence of abelian groups or of modules over a fixed ring, or more generally of objects in an abelian category
formula_0
is called split exact if it is isomorphic to the exact sequence where the middle term is the direct sum of the outer ones:
formula_1
The requirement that the sequence is isomorphic means that there is an isomorphism formula_2 such that the composite formula_3 is the natural inclusion formula_4 and such that the composite formula_5 equals "b". This can be summarized by a commutative diagram as:
The splitting lemma provides further equivalent characterizations of split exact sequences.
Examples.
A trivial example of a split short exact sequence is
formula_6
where formula_7 are "R"-modules, formula_8 is the canonical injection and formula_9 is the canonical projection.
Any short exact sequence of vector spaces is split exact. This is a rephrasing of the fact that any set of linearly independent vectors in a vector space can be extended to a basis.
The exact sequence formula_10 (where the first map is multiplication by 2) is not split exact; if it were, the middle group would be isomorphic to the direct sum of the outer two, but that direct sum contains an element of order 2 while the integers do not.
Related notions.
Pure exact sequences can be characterized as the filtered colimits of split exact sequences.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "0 \\to A \\mathrel{\\stackrel{a}{\\to}} B \\mathrel{\\stackrel{b}{\\to}} C \\to 0"
},
{
"math_id": 1,
"text": "0 \\to A \\mathrel{\\stackrel{i}{\\to}} A \\oplus C \\mathrel{\\stackrel{p}{\\to}} C \\to 0"
},
{
"math_id": 2,
"text": "f : B \\to A \\oplus C"
},
{
"math_id": 3,
"text": "f \\circ a"
},
{
"math_id": 4,
"text": "i: A \\to A \\oplus C"
},
{
"math_id": 5,
"text": "p \\circ f"
},
{
"math_id": 6,
"text": "0 \\to M_1 \\mathrel{\\stackrel{q}{\\to}} M_1\\oplus M_2 \\mathrel{\\stackrel{p}{\\to}} M_2 \\to 0"
},
{
"math_id": 7,
"text": "M_1, M_2"
},
{
"math_id": 8,
"text": "q"
},
{
"math_id": 9,
"text": "p"
},
{
"math_id": 10,
"text": "0 \\to \\mathbf{Z}\\mathrel{\\stackrel{2}{\\to}} \\mathbf{Z}\\to \\mathbf{Z}/ 2\\mathbf{Z} \\to 0"
}
] |
https://en.wikipedia.org/wiki?curid=1531404
|
1531409
|
Homeomorphism group
|
In mathematics, particularly topology, the homeomorphism group of a topological space is the group consisting of all homeomorphisms from the space to itself with function composition as the group operation. They are important to the theory of topological spaces, generally exemplary of automorphism groups and topologically invariant in the group isomorphism sense.
Properties and examples.
There is a natural group action of the homeomorphism group of a space on that space. Let formula_0 be a topological space and denote the homeomorphism group of formula_0 by formula_1. The action is defined as follows:
formula_2
This is a group action since for all formula_3,
formula_4
where formula_5 denotes the group action, and the identity element of formula_1 (which is the identity function on formula_0) sends points to themselves. If this action is transitive, then the space is said to be homogeneous.
Topology.
As with other sets of maps between topological spaces, the homeomorphism group can be given a topology, such as the compact-open topology.
In the case of regular, locally compact spaces the group multiplication is then continuous.
If the space is compact and Hausdorff, the inversion is continuous as well and formula_6 becomes a topological group.
If formula_0 is Hausdorff, locally compact and locally connected, this holds as well.
Some locally compact separable metric spaces exhibit an inversion map that is not continuous, resulting in formula_7 not forming a topological group.
Mapping class group.
In geometric topology especially, one considers the quotient group obtained by quotienting out by isotopy, called the mapping class group:
formula_8
The MCG can also be interpreted as the 0th homotopy group, formula_9.
This yields the short exact sequence:
formula_10
In some applications, particularly surfaces, the homeomorphism group is studied via this short exact sequence, and by first studying the mapping class group and group of isotopically trivial homeomorphisms, and then (at times) the extension.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "\\begin{align}\n G\\times X &\\longrightarrow X\\\\\n (\\varphi, x) &\\longmapsto \\varphi(x)\n\\end{align}"
},
{
"math_id": 3,
"text": "\\varphi,\\psi\\in G"
},
{
"math_id": 4,
"text": "\\varphi\\cdot(\\psi\\cdot x)=\\varphi(\\psi(x))=(\\varphi\\circ\\psi)(x)"
},
{
"math_id": 5,
"text": "\\cdot"
},
{
"math_id": 6,
"text": "\\operatorname{Homeo}(X)"
},
{
"math_id": 7,
"text": "\\text{Homeo}(X)"
},
{
"math_id": 8,
"text": "{\\rm MCG}(X) = {\\rm Homeo}(X) / {\\rm Homeo}_0(X)"
},
{
"math_id": 9,
"text": "{\\rm MCG}(X) = \\pi_0({\\rm Homeo}(X))"
},
{
"math_id": 10,
"text": "1 \\rightarrow {\\rm Homeo}_0(X) \\rightarrow {\\rm Homeo}(X) \\rightarrow {\\rm MCG}(X) \\rightarrow 1."
}
] |
https://en.wikipedia.org/wiki?curid=1531409
|
15314901
|
Proper velocity
|
Ratio in relativity
In relativity, proper velocity (also known as celerity) w of an object relative to an observer is the ratio between observer-measured displacement vector formula_0 and proper time τ elapsed on the clocks of the traveling object:
formula_1
It is an alternative to ordinary velocity, the distance per unit time where both distance and time are measured by the observer.
The two types of velocity, ordinary and proper, are very nearly equal at low speeds. However, at high speeds proper velocity retains many of the properties that velocity loses in relativity compared with Newtonian theory.
For example, proper velocity equals momentum per unit mass at any speed, and therefore has no upper limit. At high speeds, as shown in the figure at right, it is proportional to an object's energy as well.
Proper velocity w can be related to the ordinary velocity v via the Lorentz factor "γ":
formula_2
where "t" is coordinate time or "map time".
For unidirectional motion, each of these is also simply related to a traveling object's hyperbolic velocity angle or rapidity "η" by
formula_3.
Introduction.
In flat spacetime, proper velocity is the ratio between distance traveled relative to a reference map frame (used to define simultaneity) and proper time τ elapsed on the clocks of the traveling object. It equals the object's momentum p divided by its rest mass "m", and is made up of the space-like components of the object's four-vector velocity. William Shurcliff's monograph mentioned its early use in the Sears and Brehme text. Fraundorf has explored its pedagogical value while Ungar, Baylis and Hestenes have examined its relevance from group theory and geometric algebra perspectives. Proper velocity is sometimes referred to as celerity.
Unlike the more familiar coordinate velocity v, proper velocity is synchrony-free (does not require synchronized clocks) and is useful for describing both super-relativistic and sub-relativistic motion. Like coordinate velocity and unlike four-vector velocity, it resides in the three-dimensional slice of spacetime defined by the map frame. As shown below and in the example figure at right, proper-velocities even add as three vectors with rescaling of the out-of-frame component. This makes them more useful for map-based (e.g. engineering) applications, and less useful for gaining coordinate-free insight. Proper speed divided by lightspeed "c" is the hyperbolic sine of rapidity "η", just as the Lorentz factor "γ" is rapidity's hyperbolic cosine, and coordinate speed "v" over lightspeed is rapidity's hyperbolic tangent.
Imagine an object traveling through a region of spacetime locally described by Hermann Minkowski's flat-space metric equation ("c"d"τ")2 = ("c"d"t")2 − (d"x")2. Here a reference map frame of yardsticks and synchronized clocks define map position "x" and map time "t" respectively, and the "d" preceding a coordinate means infinitesimal change. A bit of manipulation allows one to show that proper velocity w = "d"x/"d"τ = "γ"v where as usual coordinate velocity v = "d"x/"d"t. Thus finite "w" ensures that "v" is less than lightspeed "c". By grouping "γ" with v in the expression for relativistic momentum p, proper velocity also extends the Newtonian form of momentum as mass times velocity to high speeds without a need for relativistic mass.
Proper velocity addition formula.
The proper velocity addition formula:
formula_4
where formula_5 is the beta factor given by formula_6.
This formula provides a proper velocity gyrovector space model of hyperbolic geometry that uses a whole space compared to other models of hyperbolic geometry which use discs or half-planes.
In the unidirectional case this becomes commutative and simplifies to a Lorentz factor product times a coordinate velocity sum, e.g. to "w"AC = γABγBC("v"AB + "v"BC), as discussed in the application section below.
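As a numerical check (an added sketch with "c" = 1; the helper names are ad hoc), the unidirectional specialization of the gyrovector sum above can be compared with the product form "w"AC = γABγBC("v"AB + "v"BC):

```python
import math

def beta(w):
    """Beta factor of a proper speed w, with c = 1."""
    return 1.0 / math.sqrt(1.0 + w * w)

def proper_add(u, v):
    """Unidirectional form of the proper velocity addition formula (c = 1)."""
    bu, bv = beta(u), beta(v)
    return u + v + (bu / (1.0 + bu) * u * v + (1.0 - bv) / bv) * u

for u, v in [(0.3, 0.5), (1.0, 1.0), (2.0, 5.0)]:
    gu, gv = 1.0 / beta(u), 1.0 / beta(v)          # Lorentz factors
    coord_u, coord_v = u * beta(u), v * beta(v)    # coordinate velocities
    # gyrovector sum agrees with gamma-product times coordinate-velocity sum
    assert math.isclose(proper_add(u, v), gu * gv * (coord_u + coord_v))
```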
Relation to other velocity parameters.
Speed table.
The table below illustrates how the proper velocity of "w" = "c" or "one map-lightyear per traveler-year" is a natural benchmark for the transition from sub-relativistic to super-relativistic motion.
Note from above that velocity angle η and proper-velocity "w" run from 0 to infinity and track coordinate-velocity when "w" ≪ "c". On the other hand, when "w" ≫ "c", proper velocity tracks Lorentz factor while velocity angle is logarithmic and hence increases much more slowly.
Interconversion equations.
The following equations convert between four alternate measures of speed (or unidirectional velocity) that flow from Minkowski's flat-space metric equation:
formula_7.
formula_8
formula_9
formula_10
Hyperbolic velocity angle or rapidity.
formula_11
or in terms of logarithms:
formula_12.
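These interconversions can be spot-checked numerically (an added sketch, with "c" = 1):

```python
import math

for v in [0.1, 0.5, 0.99, 0.999999]:
    gamma = 1.0 / math.sqrt(1.0 - v * v)   # Lorentz factor from coordinate speed
    w = gamma * v                          # proper velocity
    eta = math.atanh(v)                    # rapidity
    assert math.isclose(gamma, math.sqrt(1.0 + w * w))
    assert math.isclose(w, math.sinh(eta))
    assert math.isclose(gamma, math.cosh(eta))
    assert math.isclose(v, w / math.sqrt(1.0 + w * w))
```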
Applications.
Comparing velocities at high speed.
Proper velocity is useful for comparing the speed of objects with momentum per unit rest mass ("w") greater than lightspeed "c". The coordinate speed of such objects is generally near lightspeed, whereas proper velocity indicates how rapidly they are covering ground on "traveling-object clocks". This is important for example if, like some cosmic ray particles, the traveling objects have a finite lifetime. Proper velocity also clues us in to the object's momentum, which has no upper bound.
For example, a 45 GeV electron accelerated by the Large Electron–Positron Collider (LEP) at Cern in 1989 would have had a Lorentz factor γ of about 88,000 (45 GeV divided by the electron rest mass of 511 keV). Its coordinate speed "v" would have been about sixty four trillionths shy of lightspeed "c" at 1 lightsecond per "map" second. On the other hand, its proper speed would have been "w" = "γv" ~ 88,000 lightseconds per "traveler" second. By comparison the coordinate speed of a 250 GeV electron in the proposed International Linear Collider (ILC) will remain near "c", while its proper speed will significantly increase to ~489,000 lightseconds per traveler second.
Proper velocity is also useful for comparing relative velocities along a line at high speed. In this case
formula_13
where A, B and C refer to different objects or frames of reference. For example, "w"AC refers to the proper speed of object A with respect to object C. Thus in calculating the relative proper speed, Lorentz factors multiply when coordinate speeds add.
Hence each of two electrons (A and C) in a head-on collision at 45 GeV in the lab frame (B) would see the other coming toward them at "v"AC ~ "c" and "w"AC = 88,000²(1 + 1) ~ 1.55×10¹⁰ lightseconds per traveler second. Thus from the target's point of view, colliders can explore collisions with much higher projectile energy and momentum per unit mass.
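The figures in this example can be reproduced directly (an added sketch, with "c" = 1 and the 45 GeV beam energy and 511 keV electron rest energy from the text):

```python
import math

m_e = 511e3                            # electron rest energy, eV
gamma = 45e9 / m_e                     # Lorentz factor in the lab frame, ~88,000
v = math.sqrt(1.0 - 1.0 / gamma**2)    # coordinate speed, just under 1

# single-electron proper speed in lightseconds per traveler second
assert 88000 < gamma * v < 88100

# relative proper speed: Lorentz factors multiply, coordinate speeds add
w_AC = gamma * gamma * (v + v)
assert 1.5e10 < w_AC < 1.6e10          # ~1.55e10, as stated above
```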
Proper velocity-based dispersion relations.
Plotting "("γ" − 1) versus proper velocity" after multiplying the former by "mc"² and the latter by mass "m", for various values of "m" yields a family of kinetic energy versus momentum curves that includes most of the moving objects encountered in everyday life. Such plots can for example be used to show where lightspeed, Planck's constant, and Boltzmann energy "kT" figure in.
To illustrate, the figure at right with log-log axes shows objects with the same kinetic energy (horizontally related) that carry different amounts of momentum, as well as how the speed of a low-mass object compares (by vertical extrapolation) to the speed after perfectly inelastic collision with a large object at rest. Highly sloped lines (rise/run = 2) mark contours of constant mass, while lines of unit slope mark contours of constant speed.
Objects that fit nicely on this plot are humans driving cars, dust particles in Brownian motion, a spaceship in orbit around the Sun, molecules at room temperature, a fighter jet at Mach 3, one radio wave photon, a person moving at one lightyear per traveler year, the pulse of a 1.8 MegaJoule laser, a 250 GeV electron, and our observable universe with the blackbody kinetic energy expected of a single particle at 3 kelvin.
Unidirectional acceleration via proper velocity.
Proper acceleration at any speed is "the physical acceleration experienced locally by an object". In spacetime it is a three-vector acceleration with respect to the object's instantaneously varying free-float frame. Its magnitude α is the frame-invariant magnitude of that object's four-acceleration. Proper acceleration is also useful from the vantage point (or spacetime slice) of external observers. Not only may observers in all frames agree on its magnitude, but it also measures the extent to which an accelerating rocket "has its pedal to the metal".
In the unidirectional case i.e. when the object's acceleration is parallel or anti-parallel to its velocity in the spacetime slice of the observer, the "change in proper velocity is the integral of proper acceleration over map time" i.e. Δ"w" = "α"Δ"t" for constant "α". At low speeds this reduces to the well-known relation between coordinate velocity and coordinate acceleration times map time, i.e. Δ"v" = "a"Δ"t". For constant unidirectional proper acceleration, similar relationships exist between rapidity "η" and elapsed proper time Δ"τ", as well as between Lorentz factor "γ" and distance traveled Δ"x". To be specific:
formula_14,
where as noted above the various velocity parameters are related by
formula_15.
These equations describe some consequences of accelerated travel at high speed. For example, imagine a spaceship that can accelerate its passengers at 1 g (or 1.03 lightyears/year²) halfway to their destination, and then decelerate them at 1 g for the remaining half so as to provide Earth-like artificial gravity from point A to point B over the shortest possible time. For a map distance of ΔxAB, the first equation above predicts a midpoint Lorentz factor (up from its unit rest value) of γmid = 1 + α(ΔxAB/2)/c². Hence the round-trip time on traveler clocks will be Δτ = 4(c/α)cosh⁻¹[γmid], during which the time elapsed on map clocks will be Δt = 4(c/α)sinh[cosh⁻¹[γmid]].
This imagined spaceship could offer round trips to Proxima Centauri lasting about 7.1 traveler years (~12 years on Earth clocks), round trips to the Milky Way's central black hole of about 40 years (~54,000 years elapsed on Earth clocks), and round trips to Andromeda Galaxy lasting around 57 years (over 5 million years on Earth clocks). Unfortunately, while rocket accelerations of 1 g can easily be achieved, they cannot be sustained over long periods of time.
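The round-trip figures above can be checked numerically; a minimal sketch (the function name and the ~4.24-lightyear distance to Proxima Centauri are our own inputs, the formulas are the ones from the text):

```python
import math

C = 1.0       # speed of light, in lightyears per year
ALPHA = 1.03  # ~1 g expressed in lightyears/year^2, as in the text

def round_trip_times(distance_ly, alpha=ALPHA):
    """Traveler time and map time for an accelerate-then-decelerate round trip."""
    gamma_mid = 1.0 + alpha * (distance_ly / 2.0) / C**2   # midpoint Lorentz factor
    eta_mid = math.acosh(gamma_mid)                        # midpoint rapidity
    tau = 4.0 * (C / alpha) * eta_mid                      # traveler (proper) time
    t = 4.0 * (C / alpha) * math.sinh(eta_mid)             # map (coordinate) time
    return tau, t

tau, t = round_trip_times(4.24)  # Proxima Centauri, ~4.24 lightyears away
```

With these inputs, `tau` comes out near 7.1 years and `t` near 12 years, matching the figures quoted above.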
Notes and references.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\textbf{x}"
},
{
"math_id": 1,
"text": "\\textbf{w} = \\frac{d\\textbf{x}}{d\\tau}"
},
{
"math_id": 2,
"text": "\\textbf{w} = \\frac{d\\textbf{x}}{dt}\\frac{dt}{d\\tau}=\\textbf{v} \\,\\gamma(v) "
},
{
"math_id": 3,
"text": "\\eta = \\sinh^{-1}\\frac{w}{c} = \\tanh^{-1}\\frac{v}{c} = \\pm \\cosh^{-1}\\gamma "
},
{
"math_id": 4,
"text": "\\mathbf{u} \\oplus \\mathbf{v}=\\mathbf{u}+\\mathbf{v}+\\left\\{ {\\frac{\\beta_\\mathbf{u}}{1+\\beta_\\mathbf{u}}}{\\frac{\\mathbf{u}\\cdot\\mathbf{v}}{c^2}} + {\\frac{1 - \\beta_\\mathbf{v}}{\\beta_\\mathbf{v}}} \\right\\} \\mathbf{u}"
},
{
"math_id": 5,
"text": "\\beta_\\mathbf{w}"
},
{
"math_id": 6,
"text": "\\beta_\\mathbf{w}=\\frac{1}{\\sqrt{1+\\frac{|\\mathbf{w}|^2}{c^2}}}"
},
{
"math_id": 7,
"text": "\n(c \\delta \\tau)^2 = (c \\delta t)^2 - (\\delta x)^2.\\,\n"
},
{
"math_id": 8,
"text": "\\gamma \\equiv \\frac{dt}{d\\tau}= \\sqrt{1+\\left(\\frac{w}{c}\\right)^2} = \\frac{1}{\\sqrt{1-(\\frac{v}{c})^2}} = \\cosh(\\eta) \\equiv \\frac{e^{\\eta} + e^{-\\eta}}{2}\n"
},
{
"math_id": 9,
"text": "\\frac{w}{c} \\equiv \\frac{1}{c} \\frac{dx}{d\\tau} = \\frac{v}{c} \\frac{1}{\\sqrt{1-(\\frac{v}{c})^2}} = \\sinh(\\eta)\\equiv \\frac{e^{\\eta} - e^{-\\eta}}{2} = \\pm \\sqrt{\\gamma^2 - 1}\n"
},
{
"math_id": 10,
"text": "\\frac{v}{c} \\equiv \\frac{1}{c} \\frac{dx}{dt} = \\frac{w}{c} \\frac{1}{\\sqrt{1 + (\\frac{w}{c})^2}} = \\tanh(\\eta) \\equiv \\frac{e^{2\\eta} - 1} {e^{2\\eta} + 1}= \\pm \\sqrt{1 - \\left(\\frac{1}{\\gamma}\\right)^2}\n"
},
{
"math_id": 11,
"text": "\\eta = \\sinh^{-1}\\left(\\frac{w}{c}\\right) = \\tanh^{-1}\\left(\\frac{v}{c}\\right) = \\pm \\cosh^{-1} \\left(\\gamma \\right)\n"
},
{
"math_id": 12,
"text": "\\eta = \\ln\\left(\\frac{w}{c} + \\sqrt{\\left(\\frac{w}{c}\\right)^2 + 1}\\right) = \\frac{1}{2} \\ln\\left(\\frac{1+\\frac{v}{c}}{1-\\frac{v}{c}}\\right) = \\pm \\ln\\left(\\gamma + \\sqrt{\\gamma^2 - 1}\\right)"
},
{
"math_id": 13,
"text": "\\frac{p_{AC}}{m_1}=w_{AC}= \\gamma_{AC} v_{AC} =\\gamma_{AB} \\gamma_{BC} \\left( v_{AB}+v_{BC} \\right) = \\gamma_{AB} w_{BC}+w_{AB} \\gamma_{BC}\\, "
},
{
"math_id": 14,
"text": "\\alpha=\\frac{\\Delta w}{\\Delta t}=c \\frac{\\Delta \\eta}{\\Delta \\tau}=c^2 \\frac{\\Delta \\gamma}{\\Delta x}"
},
{
"math_id": 15,
"text": "\\eta = \\sinh^{-1}\\left(\\frac{w}{c}\\right) = \\tanh^{-1}\\left(\\frac{v}{c}\\right) = \\pm \\cosh^{-1}\\left(\\gamma\\right)"
}
] |
https://en.wikipedia.org/wiki?curid=15314901
|
1531559
|
Kleiber's law
|
Approximate power law relating animal metabolic rate to mass
Kleiber's law, named after Max Kleiber for his biology work in the early 1930s, is the observation that, for the vast majority of animals, an animal's metabolic rate scales to the <templatestyles src="Fraction/styles.css" />3⁄4 power of the animal's mass. More recently, Kleiber's law has also been shown to apply in plants, suggesting that Kleiber's observation is much more general. Symbolically: if B is the animal's metabolic rate, and M is the animal's mass, then Kleiber's law states that formula_0. Thus, over the same time span, a cat having a mass 100 times that of a mouse will consume only about 32 times the energy the mouse uses.
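The cat-and-mouse figure in the lead can be verified in one line:

```python
# Metabolic rate ratio predicted by Kleiber's law (B ∝ M^(3/4))
# for a cat 100 times the mass of a mouse
mass_ratio = 100
rate_ratio = mass_ratio ** 0.75   # ≈ 31.6, i.e. "about 32 times the energy"
```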
The exact value of the exponent in Kleiber's law is unclear, in part because the law currently lacks a single theoretical explanation that is entirely satisfactory.
Proposed explanations for the law.
Kleiber's law, like many other biological allometric laws, is a consequence of the physics and/or geometry of circulatory systems in biology. Max Kleiber first discovered the law when analyzing a large number of independent studies on respiration within individual species. Kleiber expected to find an exponent of <templatestyles src="Fraction/styles.css" />2⁄3 (for reasons explained below), and was confounded by the discovery of a <templatestyles src="Fraction/styles.css" />3⁄4 exponent.
Historical context and the <templatestyles src="Fraction/styles.css" />2⁄3 scaling surface law.
Before Kleiber's observation of the 3/4 power scaling, a 2/3 power scaling was largely anticipated based on the "surface law", which states that the basal metabolism of animals differing in size is nearly proportional to their respective body surfaces. This surface law reasoning originated from simple geometrical considerations. As organisms increase in size, their volume (and thus mass) increases at a much faster rate than their surface area. Explanations for <templatestyles src="Fraction/styles.css" />2⁄3-scaling tend to assume that metabolic rates scale to avoid heat exhaustion. Because bodies lose heat passively via their surface but produce heat metabolically throughout their mass, the metabolic rate must scale in such a way as to counteract the square–cube law. Because many physiological processes, like heat loss and nutrient uptake, were believed to be dependent on the surface area of an organism, it was hypothesized that metabolic rate would scale with the 2/3 power of body mass. Rubner (1883) first demonstrated the law in accurate respiration trials on dogs.
Kleiber's contribution.
Max Kleiber challenged this notion in the early 1930s. Through extensive research on various animals' metabolic rates, he found that a 3/4 power scaling provided a better fit to the empirical data than the 2/3 power. His findings provided the groundwork for understanding allometric scaling laws in biology, leading to the formulation of the Metabolic Scaling Theory and the later work by West, Brown, and Enquist, among others.
Such an argument does not address the fact that different organisms exhibit different shapes (and hence have different surface-area-to-volume ratios, even when scaled to the same size). Reasonable estimates for organisms' surface area do appear to scale linearly with the metabolic rate.
Exponent <templatestyles src="Fraction/styles.css" />3⁄4.
West, Brown, and Enquist (hereafter WBE) proposed a general theory for the origin of many allometric scaling laws in biology. According to the WBE theory, <templatestyles src="Fraction/styles.css" />3⁄4-scaling arises because of efficiency in nutrient distribution and transport throughout an organism. In most organisms, metabolism is supported by a circulatory system featuring branching tubules (i.e., plant vascular systems, insect tracheae, or the human cardiovascular system). WBE claim that (1) metabolism should scale proportionally to nutrient flow (or, equivalently, total fluid flow) in this circulatory system and (2) in order to minimize the energy dissipated in transport, the volume of fluid used to transport nutrients (i.e., blood volume) is a fixed fraction of body mass. The model assumes that the energy dissipated is minimized and that the terminal tubes do not vary with body size. It provides a complete analysis of numerous anatomical and physiological scaling relations for circulatory systems in biology that generally agree with data. More generally, the model predicts the structural and functional properties of vertebrate cardiovascular and respiratory systems, plant vascular systems, insect tracheal tubes, and other distribution networks.
They then analyze the consequences of these two claims at the level of the smallest circulatory tubules (capillaries, alveoli, etc.). Experimentally, the volume contained in those smallest tubules is constant across a wide range of masses. Because fluid flow through a tubule is determined by the volume thereof, the total fluid flow is proportional to the total number of smallest tubules. Thus, if B denotes the basal metabolic rate, Q the total fluid flow, and N the number of minimal tubules, formula_1 Circulatory systems do not grow by simply scaling proportionally larger; they become more deeply nested. The depth of nesting depends on the self-similarity exponents of the tubule dimensions, and the effects of that depth depend on how many "child" tubules each branching produces. Connecting these values to macroscopic quantities depends (very loosely) on a precise model of tubules. WBE show that if the tubules are well-approximated by rigid cylinders, then, to prevent the fluid from "getting clogged" in small cylinders, the total fluid volume V satisfies formula_2 (Despite conceptual similarities, this condition is inconsistent with Murray's law.) Because blood volume is a fixed fraction of body mass, formula_3
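Chaining these proportionalities gives the exponent in a few lines (a sketch of the argument, using only the relations just stated):

```latex
% B \propto N          (metabolism tracks the number of terminal tubules)
% N^4 \propto V^3      (rigid-cylinder, non-clogging network condition)
% V \propto M          (blood volume is a fixed fraction of body mass)
B \;\propto\; N \;\propto\; V^{3/4} \;\propto\; M^{3/4}.
```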
Non-power-law scaling.
The WBE theory predicts that the scaling of metabolism is not a strict power law but rather should be slightly curvilinear. The 3/4 exponent only holds exactly in the limit of organisms of infinite size. As body size increases, WBE predict that the scaling of metabolism will converge to a ~3/4 scaling exponent. Indeed, WBE predicts that the metabolic rates of the smallest animals tend to be greater than expected from the power-law scaling (see Fig. 2 in Savage et al. 2010). Further, metabolic rates for smaller animals (birds under , or insects) typically fit to <templatestyles src="Fraction/styles.css" />2⁄3 much better than <templatestyles src="Fraction/styles.css" />3⁄4; for larger animals, the reverse holds. As a result, log-log plots of metabolic rate versus body mass can "curve" slightly upward, and fit better to quadratic models. In all cases, local fits exhibit exponents in the range.
Elaborated and Modified circulatory models.
Elaborations of the WBE model predict "larger" scaling exponents, worsening the discrepancy with observed data. However, one can retain a similar theory by relaxing WBE's assumption of a nutrient transport network that is both fractal and circulatory. Different networks are less efficient, in that they exhibit a lower scaling exponent. Still, a metabolic rate determined by nutrient transport will always exhibit scaling between <templatestyles src="Fraction/styles.css" />2⁄3 and <templatestyles src="Fraction/styles.css" />3⁄4. WBE argued that fractal-like circulatory networks are likely under strong stabilizing selection to evolve to minimize energy used for transport. If selection for greater metabolic rates is favored, smaller organisms will prefer to arrange their networks to scale as <templatestyles src="Fraction/styles.css" />2⁄3, while selection for larger-mass organisms will tend to result in networks that scale as <templatestyles src="Fraction/styles.css" />3⁄4, which produces the observed curvature.
Modified thermodynamic models.
An alternative model notes that metabolic rate does not solely serve to generate heat. Metabolic rate contributing solely to useful work should scale with power 1 (linearly), whereas metabolic rate contributing to heat generation should be limited by surface area and scale with power <templatestyles src="Fraction/styles.css" />2⁄3. Basal metabolic rate is then the convex combination of these two effects: if the proportion of useful work is f, then the basal metabolic rate should scale as formula_4 where k and "k"′ are constants of proportionality. "k"′ in particular describes the surface area ratio of organisms and is approximately 0.1 kJ·h−1·g−2/3; typical values for f are 15-20%. The theoretical maximum value of f is 21%, because the efficiency of glucose oxidation is only 42%, and half of the ATP so produced is wasted.
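This convex combination implies a local scaling exponent d(ln B)/d(ln M) that rises from 2⁄3 toward 1 as mass grows. A brief numeric sketch (only k′ ≈ 0.1 comes from the text; the values of k and f below are illustrative placeholders, with f inside the quoted 15–20% range):

```python
import math

def local_exponent(M, f=0.17, k=1.0, kp=0.1):
    """Local scaling exponent d(ln B)/d(ln M) of the two-term model.

    kp ~ 0.1 kJ/(h*g^(2/3)) is from the text; f = 0.17 lies in the quoted
    15-20% range; k is a purely illustrative placeholder constant."""
    def B(m):
        return f * k * m + (1.0 - f) * kp * m ** (2.0 / 3.0)
    h = 1e-6
    # numerical log-derivative of B with respect to M
    return (math.log(B(M * (1 + h))) - math.log(B(M))) / math.log(1 + h)

e_small = local_exponent(1e-3)  # surface (2/3) term dominates at small mass
e_large = local_exponent(1e6)   # linear useful-work term dominates at large mass
```

The exponent stays strictly between 2⁄3 and 1 for any positive choice of the constants; only where it sits along that range depends on the placeholders.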
Criticism of explanations.
Kozłowski and Konarzewski have argued against attempts to explain Kleiber's law via any sort of limiting factor because metabolic rates vary by factors of 4-5 between rest and activity. Hence, any limits that affect the scaling of the "basal" metabolic rate would make elevated metabolism — and hence all animal activity — impossible. WBE conversely argue that natural selection can indeed select for minimal transport energy dissipation during rest, without abandoning the ability for less efficient function at other times.
Other researchers have also noted that Kozłowski and Konarzewski's criticism of the law tends to focus on precise structural details of the WBE circulatory networks but that the latter are not essential to the model.
Experimental support.
Analyses of variance for a variety of physical variables suggest that although most variation in basal metabolic rate is determined by mass, additional variables with significant effects include body temperature and taxonomic order.
A 1932 work by Brody calculated that the scaling was approximately 0.73.
A 2004 analysis of field metabolic rates for mammals concluded that they appear to scale with exponent 0.749.
Generalizations.
Kleiber's law has been reported to apply to interspecific comparisons and has been claimed not to apply at the intraspecific level. The taxonomic level at which body-mass metabolic allometry should be studied has been debated. Nonetheless, several analyses suggest that while the exponents of the Kleiber relationship between body size and metabolism can vary at the intraspecific level, statistically, intraspecific exponents in both plants and animals tend to cluster around 3/4.
In other kingdoms.
A 1999 analysis concluded that biomass production in a given plant scaled with the <templatestyles src="Fraction/styles.css" />3⁄4 power of the plant's mass during the plant's growth, but a 2001 paper that included various types of unicellular photosynthetic organisms found scaling exponents intermediate between 0.75 and 1.00. Similarly, a 2006 paper in "Nature" argued that the exponent of mass is close to 1 for plant seedlings, but that variation between species, phyla, and growth conditions overwhelms any "Kleiber's law"-like effects. However, metabolic scaling theory can successfully resolve these apparent exceptions and deviations. For finite-size corrections in networks with both area-preserving and area-increasing branching, the WBE model predicts that fits to data for plants yield scaling exponents that are steeper than 3/4 in small plants but then converge to 3/4 in larger plants.
Intra-organismal results.
Because cell protoplasm appears to have constant density across a range of organism masses, a consequence of Kleiber's law is that, in larger species, less energy is available to each cell volume. Cells appear to cope with this difficulty via choosing one of the following two strategies: smaller cells or a slower cellular metabolic rate. Neurons and adipocytes exhibit the former; every other type of cell, the latter. As a result, different organs exhibit different allometric scalings (see table).
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "B \\propto M^{3/4}"
},
{
"math_id": 1,
"text": "B\\propto Q\\propto N\\text{.}"
},
{
"math_id": 2,
"text": "N^4\\propto V^3\\text{.}"
},
{
"math_id": 3,
"text": "B\\propto M^{\\frac{3}{4}}\\text{.}"
},
{
"math_id": 4,
"text": "B=f\\cdot kM+(1-f)\\cdot k'M^{\\frac{2}{3}}"
}
] |
https://en.wikipedia.org/wiki?curid=1531559
|
15316059
|
UVW mapping
|
UVW mapping is a mathematical technique for coordinate mapping. In computer graphics, it most commonly maps an object's surface in formula_0 to a solid texture with UVW coordinates in formula_1, in contrast to UV mapping, which maps surfaces in formula_0 to an image with UV coordinates in formula_0. UVW mapping is suitable for painting an object's surface based on a solid texture; this allows, for example, a vase wrapped in a marble texture to appear as if it were carved from actual marble. "UVW", like the standard Cartesian coordinate system, has three dimensions; the third dimension allows texture maps to wrap in complex ways onto irregular surfaces. Each point in a UVW map corresponds to a point on the surface of the object. The graphic designer or programmer generates the specific mathematical function to implement the map, so that points on the texture are assigned to (XYZ) points on the target surface. Generally speaking, the more orderly the unwrapped polygons are, the easier it is for the texture artist to paint features onto the texture. Once the texture is finished, all that has to be done is to wrap the UVW map back onto the object, projecting the texture in a way that is far more flexible and advanced, preventing graphic artifacts that accompany more simplistic texture mappings such as planar projection. For this reason, UVW mapping is commonly used to texture map non-platonic solids, non-geometric primitives, and other irregularly shaped objects, such as characters and furniture.
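A toy sketch of the idea (the function and its banding formula are entirely hypothetical): with a solid texture, the color at a surface point depends only on that point's 3-D (U, V, W) coordinates, so the surface looks carved from the material rather than wallpapered with a 2-D image:

```python
import math

def marble_color(p):
    """Toy solid texture: grayscale value determined solely by the
    3-D texture coordinates (u, v, w) of a surface point."""
    u, v, w = p
    t = math.sin(4.0 * u + 2.0 * math.sin(6.0 * v) + w)  # wavy "marble" banding
    return 0.5 * (t + 1.0)                               # map to [0, 1]

# With an identity UVW map, a surface point (x, y, z) is looked up
# directly in the solid texture:
color = marble_color((0.3, 0.7, 0.1))
```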
|
[
{
"math_id": 0,
"text": "\\mathbb{R}^{2}"
},
{
"math_id": 1,
"text": "\\mathbb{R}^{3}"
}
] |
https://en.wikipedia.org/wiki?curid=15316059
|
15317
|
IPv4
|
Fourth version of the Internet Protocol
Internet Protocol version 4 (IPv4) is the first version of the Internet Protocol (IP) as a standalone specification. It is one of the core protocols of standards-based internetworking methods in the Internet and other packet-switched networks. IPv4 was the first version deployed for production on SATNET in 1982 and on the ARPANET in January 1983. It is still used to route most Internet traffic today, even with the ongoing deployment of Internet Protocol version 6 (IPv6), its successor.
IPv4 uses a 32-bit address space which provides 4,294,967,296 (2³²) unique addresses, but large blocks are reserved for special networking purposes.
History.
Earlier versions of TCP/IP were a combined specification through TCP/IPv3. With IPv4, the Internet Protocol became a separate specification.
Internet Protocol version 4 is described in IETF publication RFC 791 (September 1981), replacing an earlier definition of January 1980 (RFC 760). In March 1982, the US Department of Defense decided on the Internet Protocol Suite (TCP/IP) as the standard for all military computer networking.
Purpose.
The Internet Protocol is the protocol that defines and enables internetworking at the internet layer of the Internet Protocol Suite. In essence it forms the Internet. It uses a logical addressing system and performs "routing", which is the forwarding of packets from a source host to the next router that is one hop closer to the intended destination host on another network.
IPv4 is a connectionless protocol, and operates on a best-effort delivery model, in that it does not guarantee delivery, nor does it assure proper sequencing or avoidance of duplicate delivery. These aspects, including data integrity, are addressed by an upper layer transport protocol, such as the Transmission Control Protocol (TCP).
Addressing.
IPv4 uses 32-bit addresses which limits the address space to 4,294,967,296 (2³²) addresses.
IPv4 reserves special address blocks for private networks (2²⁴ + 2²⁰ + 2¹⁶ ≈ 18 million addresses) and multicast addresses (2²⁸ ≈ 268 million addresses).
Address representations.
IPv4 addresses may be represented in any notation expressing a 32-bit integer value. They are most often written in dot-decimal notation, which consists of four octets of the address expressed individually in decimal numbers and separated by periods.
For example, the quad-dotted IP address in the illustration (172.16.254.1) represents the 32-bit decimal number 2886794753, which in hexadecimal format is 0xAC10FE01.
CIDR notation combines the address with its routing prefix in a compact format, in which the address is followed by a slash character (/) and the count of leading consecutive "1" bits in the routing prefix (subnet mask).
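These two conventions are easy to sketch in plain Python (a minimal illustration; the helper names are ours):

```python
def quad_to_int(addr: str) -> int:
    """Pack a dot-decimal IPv4 address into its 32-bit integer value."""
    a, b, c, d = (int(x) for x in addr.split('.'))
    return (a << 24) | (b << 16) | (c << 8) | d

def prefix_to_mask(prefix_len: int) -> str:
    """Expand a CIDR prefix length into the dot-decimal subnet mask
    (prefix_len leading '1' bits followed by '0' bits)."""
    mask = (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF
    return '.'.join(str((mask >> s) & 0xFF) for s in (24, 16, 8, 0))

n = quad_to_int('172.16.254.1')   # 2886794753, i.e. 0xAC10FE01
m = prefix_to_mask(24)            # '255.255.255.0'
```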
Other address representations were in common use when classful networking was practiced. For example, the loopback address was commonly written as , given that it belongs to a class-A network with eight bits for the network mask and 24 bits for the host number. When fewer than four numbers were specified in the address in dotted notation, the last value was treated as an integer of as many bytes as are required to fill out the address to four octets. Thus, the address is equivalent to .
Allocation.
In the original design of IPv4, an IP address was divided into two parts: the network identifier was the most significant octet of the address, and the host identifier was the rest of the address. The latter was also called the "rest field". This structure permitted a maximum of 256 network identifiers, which was quickly found to be inadequate.
To overcome this limit, the most-significant address octet was redefined in 1981 to create "network classes", in a system which later became known as "classful" networking. The revised system defined five classes. Classes A, B, and C had different bit lengths for network identification. The rest of the address was used as previously to identify a host within a network. Because of the different sizes of fields in different classes, each network class had a different capacity for addressing hosts. In addition to the three classes for addressing hosts, Class D was defined for multicast addressing and Class E was reserved for future applications.
Dividing existing classful networks into subnets began in 1985 with the publication of . This division was made more flexible with the introduction of variable-length subnet masks (VLSM) in in 1987. In 1993, based on this work, introduced Classless Inter-Domain Routing (CIDR), which expressed the number of bits (from the most significant) as, for instance, , and the class-based scheme was dubbed "classful", by contrast. CIDR was designed to permit repartitioning of any address space so that smaller or larger blocks of addresses could be allocated to users. The hierarchical structure created by CIDR is managed by the Internet Assigned Numbers Authority (IANA) and the regional Internet registries (RIRs). Each RIR maintains a publicly searchable WHOIS database that provides information about IP address assignments.
Special-use addresses.
The Internet Engineering Task Force (IETF) and IANA have restricted from general use various reserved IP addresses for special purposes. Notably these addresses are used for multicast traffic and to provide addressing space for unrestricted uses on private networks.
<section begin=IPv4-special-address-blocks/>
<section end=IPv4-special-address-blocks/>
Private networks.
Of the approximately four billion addresses defined in IPv4, about 18 million addresses in three ranges are reserved for use in private networks. Packets with addresses in these ranges are not routable in the public Internet; they are ignored by all public routers. Therefore, private hosts cannot directly communicate with public networks, but require network address translation at a routing gateway for this purpose.
<section begin=IPv4-private-networks/>
<section end=IPv4-private-networks/>
Since two private networks, e.g., two branch offices, cannot directly interoperate via the public Internet, the two networks must be bridged across the Internet via a virtual private network (VPN) or an IP tunnel, which encapsulates packets, including their headers containing the private addresses, in a protocol layer during transmission across the public network. Additionally, encapsulated packets may be encrypted for transmission across public networks to secure the data.
Link-local addressing.
RFC 3927 defines the special address block 169.254.0.0/16 for link-local addressing. These addresses are only valid on the link (such as a local network segment or point-to-point connection) directly connected to a host that uses them. These addresses are not routable. Like private addresses, these addresses cannot be the source or destination of packets traversing the internet. These addresses are primarily used for address autoconfiguration (Zeroconf) when a host cannot obtain an IP address from a DHCP server or other internal configuration methods.
When the address block was reserved, no standards existed for address autoconfiguration. Microsoft created an implementation called Automatic Private IP Addressing (APIPA), which was deployed on millions of machines and became a de facto standard. Many years later, in May 2005, the IETF defined a formal standard in RFC 3927, entitled "Dynamic Configuration of IPv4 Link-Local Addresses".
Loopback.
The class A network (classless network ) is reserved for loopback. IP packets whose source addresses belong to this network should never appear outside a host. Packets received on a non-loopback interface with a loopback source or destination address must be dropped.
First and last subnet addresses.
The first address in a subnet is used to identify the subnet itself. In this address all host bits are "0". To avoid ambiguity in representation, this address is reserved. The last address has all host bits set to "1". It is used as a local broadcast address for sending messages to all devices on the subnet simultaneously. For networks of size or larger, the broadcast address always ends in 255.
For example, in the subnet (subnet mask ) the identifier is used to refer to the entire subnet. The broadcast address of the network is .
However, this does not mean that every address ending in 0 or 255 cannot be used as a host address. For example, in the subnet , which is equivalent to the address range –, the broadcast address is . One can use the following addresses for hosts, even though they end with 255: , , etc. Also, is the network identifier and must not be assigned to an interface. The addresses , , etc., may be assigned, despite ending with 0.
In the past, conflict between network addresses and broadcast addresses arose because some software used non-standard broadcast addresses with zeros instead of ones.
In networks smaller than , broadcast addresses do not necessarily end with 255. For example, a CIDR subnet has the broadcast address .
As a special case, a network has capacity for just two hosts. These networks are typically used for point-to-point connections. There is no network identifier or broadcast address for these networks.
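Python's standard `ipaddress` module applies these rules directly; a brief check (the example networks are our own):

```python
import ipaddress

net = ipaddress.ip_network('192.168.5.0/24')
first = net.network_address    # identifies the subnet itself (all host bits 0)
last = net.broadcast_address   # local broadcast address (all host bits 1)

# Point-to-point special case: a /31 has capacity for exactly two hosts.
p2p = ipaddress.ip_network('10.0.0.0/31')
```

Here `first` is 192.168.5.0, `last` is 192.168.5.255, and `p2p.num_addresses` is 2.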
Address resolution.
Hosts on the Internet are usually known by names, e.g., www.example.com, not primarily by their IP address, which is used for routing and network interface identification. The use of domain names requires translating them to addresses and vice versa, a process called "resolving". This is analogous to looking up a phone number in a phone book using the recipient's name.
The translation between addresses and domain names is performed by the Domain Name System (DNS), a hierarchical, distributed naming system that allows for the subdelegation of namespaces to other DNS servers.
Unnumbered interface.
An unnumbered point-to-point (PtP) link, also called a transit link, is a link that does not have an IP network or subnet number associated with it, but still has an IP address. The concept was first introduced in 1993; Phil Karn from Qualcomm is credited as the original designer.
The purpose of a transit link is to route datagrams. Transit links are used to free IP addresses from a scarce IP address space or to reduce the management burden of assigning IP addresses and configuring interfaces. Previously, every link needed to dedicate a or subnet using 2 or 4 IP addresses per point-to-point link. When a link is unnumbered, a "router-id" is used, a single IP address borrowed from a defined (normally a loopback) interface. The same "router-id" can be used on multiple interfaces.
One of the disadvantages of unnumbered interfaces is that it is harder to do remote testing and management.
Address space exhaustion.
In the 1980s, it became apparent that the pool of available IPv4 addresses was depleting at a rate that was not initially anticipated in the original design of the network. The main market forces that accelerated address depletion included the rapidly growing number of Internet users, who increasingly used mobile computing devices, such as laptop computers, personal digital assistants (PDAs), and smart phones with IP data services. In addition, high-speed Internet access was based on always-on devices. The threat of exhaustion motivated the introduction of a number of remedial technologies, such as:
By the mid-1990s, NAT was used pervasively in network access provider systems, along with strict usage-based allocation policies at the regional and local Internet registries.
The primary address pool of the Internet, maintained by IANA, was exhausted on 3 February 2011, when the last five blocks were allocated to the five RIRs. APNIC was the first RIR to exhaust its regional pool on 15 April 2011, except for a small amount of address space reserved for the transition technologies to IPv6, which is to be allocated under a restricted policy.
The long-term solution to address exhaustion was the 1998 specification of a new version of the Internet Protocol, IPv6. It provides a vastly increased address space, but also allows improved route aggregation across the Internet, and offers large subnetwork allocations of a minimum of 2⁶⁴ host addresses to end users. However, IPv4 is not directly interoperable with IPv6, so that IPv4-only hosts cannot directly communicate with IPv6-only hosts. With the phase-out of the 6bone experimental network starting in 2004, permanent formal deployment of IPv6 commenced in 2006. Completion of IPv6 deployment is expected to take considerable time, so that intermediate transition technologies are necessary to permit hosts to participate in the Internet using both versions of the protocol.
Packet structure.
An IP packet consists of a header section and a data section. An IP packet has no data checksum or any other footer after the data section.
Typically the link layer encapsulates IP packets in frames with a CRC footer that detects most errors, and many transport-layer protocols carried by IP also have their own error checking.
Header.
The IPv4 packet header consists of 14 fields, of which 13 are required. The 14th field is optional and aptly named: options. The fields in the header are packed with the most significant byte first (network byte order), and for the diagram and discussion, the most significant bits are considered to come first (MSB 0 bit numbering). The most significant bit is numbered 0, so the version field is actually found in the four most significant bits of the first byte, for example.
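The fixed 20-byte portion of the header can be unpacked in network byte order with Python's `struct` module (an illustrative sketch following the RFC 791 field layout; the helper name and the sample header bytes are ours):

```python
import struct

def parse_ipv4_header(data: bytes) -> dict:
    """Decode the fixed 20-byte IPv4 header (most significant byte first)."""
    (ver_ihl, dscp_ecn, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack('!BBHHHBBHII', data[:20])
    return {
        'version': ver_ihl >> 4,           # top 4 bits of the first byte
        'ihl': ver_ihl & 0x0F,             # header length, in 32-bit words
        'total_length': total_len,
        'identification': ident,
        'more_fragments': bool(flags_frag & 0x2000),   # MF flag
        'fragment_offset': (flags_frag & 0x1FFF) * 8,  # stored in 8-byte units
        'ttl': ttl,
        'protocol': proto,
        'src': '.'.join(str(b) for b in src.to_bytes(4, 'big')),
        'dst': '.'.join(str(b) for b in dst.to_bytes(4, 'big')),
    }

header = bytes.fromhex('4500003c1c4640004006b1e6ac10fe01c0a80001')  # sample bytes
fields = parse_ipv4_header(header)
```

The sample decodes to version 4 with a 5-word (20-byte) header, illustrating why the version field occupies the four most significant bits of the first byte (0x45).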
Again, the data size is preserved: 1,480 + 1,000 = 2,480, and 1,480 + 540 = 2,020.
Also in this case, the "More Fragments" bit remains 1 in all the fragments that arrived with it set, and the final fragment behaves as usual: the MF bit is set to 0 only in the last one. And of course, the Identification field continues to have the same value in all re-fragmented fragments. This way, even if fragments are re-fragmented, the receiver knows they all initially started from the same packet.
The last offset and last data size are used to calculate the total data size: formula_0.
Reassembly.
A receiver knows that a packet is a fragment, if at least one of the following conditions is true:
The receiver identifies matching fragments using the source and destination addresses, the protocol ID, and the identification field. The receiver reassembles the data from fragments with the same ID using both the fragment offset and the more fragments flag. When the receiver receives the last fragment, which has the "more fragments" flag set to 0, it can calculate the size of the original data payload, by multiplying the last fragment's offset by eight and adding the last fragment's data size. In the given example, this calculation was formula_1 bytes. When the receiver has all fragments, they can be reassembled in the correct sequence according to the offsets to form the original datagram.
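The size calculation from the last fragment can be sketched in two lines (helper name is ours; offset is in 8-byte units, per the header format):

```python
def original_payload_size(last_fragment_offset_units: int,
                          last_fragment_data_len: int) -> int:
    """Size of the original datagram payload, computed from the final
    fragment (the one whose 'more fragments' flag is 0)."""
    return last_fragment_offset_units * 8 + last_fragment_data_len

size = original_payload_size(495, 540)  # the text's example: 3,960 + 540
```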
Assistive protocols.
IP addresses are not tied in any permanent manner to networking hardware and, indeed, in modern operating systems, a network interface can have multiple IP addresses. In order to properly deliver an IP packet to the destination host on a link, hosts and routers need additional mechanisms to make an association between the hardware address of network interfaces and IP addresses. The Address Resolution Protocol (ARP) performs this IP-address-to-hardware-address translation for IPv4. In addition, the reverse correlation is often necessary. For example, unless an address is preconfigured by an administrator, when an IP host is booted or connected to a network it needs to determine its IP address. Protocols for such reverse correlations include Dynamic Host Configuration Protocol (DHCP), Bootstrap Protocol (BOOTP) and, infrequently, reverse ARP.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "495 \\times 8+540=3{,}960+540=4{,}500"
},
{
"math_id": 1,
"text": "495 \\times 8+540=4{,}500"
}
] |
https://en.wikipedia.org/wiki?curid=15317
|
15317159
|
Brown–Forsythe test
|
Statistical test for equality of variances
The Brown–Forsythe test is a statistical test for the equality of group variances based on performing an Analysis of Variance (ANOVA) on a transformation of the response variable. When a one-way ANOVA is performed, samples are assumed to have been drawn from distributions with equal variance. If this assumption is not valid, the resulting "F"-test is invalid. The Brown–Forsythe test statistic is the F statistic resulting from an ordinary one-way analysis of variance on the absolute deviations of the groups or treatments data from their individual medians.
Transformation.
The transformed response variable is constructed to measure the spread in each group. Let
formula_0
where formula_1 is the median of group "j". The Brown–Forsythe test statistic is the model "F" statistic from a one way ANOVA on "zij":
formula_2
where "p" is the number of groups, "nj" is the number of observations in group "j", and "N" is the total number of observations. Also formula_3 are the group means of the formula_4 and formula_5 is the overall mean of the formula_4. This "F"-statistic follows the "F"-distribution with degrees of freedom formula_6 and formula_7 under the null hypothesis.
If the variances are indeed heterogeneous, techniques that allow for this (such as the Welch one-way ANOVA) may be used instead of the usual ANOVA.
Good, noting that the deviations are linearly dependent, has modified the test so as to drop the redundant deviations.
Comparison with Levene's test.
Levene's test uses the mean instead of the median. Although the optimal choice depends on the underlying distribution, the definition based on the median is recommended as the choice that provides good robustness against many types of non-normal data while retaining good statistical power.
If one has knowledge of the underlying distribution of the data, this may indicate using one of the other choices. Brown and Forsythe performed Monte Carlo studies that indicated that using the trimmed mean performed best when the underlying data followed a Cauchy distribution (a heavy-tailed distribution) and the median performed best when the underlying data followed a χ2 distribution with four degrees of freedom (a sharply skewed distribution). Using the mean provided the best power for symmetric, moderate-tailed, distributions. O'Brien tested several ways of using the traditional analysis of variance to test heterogeneity of spread in factorial designs with equal or unequal sample sizes. The jackknife pseudovalues of s2 and the absolute deviations from the cell median are shown to be robust and relatively powerful.
References.
<templatestyles src="Reflist/styles.css" />
External links.
This article incorporates public domain material from
|
[
{
"math_id": 0,
"text": "\nz_{ij}=\\left\\vert y_{ij} - \\tilde{y}_j \\right\\vert\n"
},
{
"math_id": 1,
"text": "\\tilde{y}_j"
},
{
"math_id": 2,
"text": "F = \\frac{(N-p)}{(p-1)} \\frac{\\sum_{j=1}^{p} n_j (\\tilde{z}_{\\cdot j}-\\tilde{z}_{\\cdot\\cdot})^2} {\\sum_{j=1}^{p}\\sum_{i=1}^{n_j} (z_{ij}-\\tilde{z}_{\\cdot j})^2}"
},
{
"math_id": 3,
"text": "\\tilde{z}_{\\cdot j}"
},
{
"math_id": 4,
"text": "z_{ij}"
},
{
"math_id": 5,
"text": "\\tilde{z}_{\\cdot\\cdot}"
},
{
"math_id": 6,
"text": "d_1=p-1"
},
{
"math_id": 7,
"text": "d_2=N-p"
}
] |
https://en.wikipedia.org/wiki?curid=15317159
|
1531781
|
Rarita–Schwinger equation
|
Field equation for spin-3/2 fermions
In theoretical physics, the Rarita–Schwinger equation is the
relativistic field equation of spin-3/2 fermions in a four-dimensional flat spacetime. It is similar to the Dirac equation for spin-1/2 fermions. This equation was first introduced by William Rarita and Julian Schwinger in 1941.
In modern notation it can be written as:
formula_0
where formula_1 is the Levi-Civita symbol,
formula_2 are Dirac matrices (with formula_3) and formula_4,
formula_5 is the mass,
formula_6,
and formula_7 is a vector-valued spinor with additional components compared to the four-component spinor in the Dirac equation. It corresponds to the (1/2, 1/2) ⊗ ((1/2, 0) ⊕ (0, 1/2)) representation of the Lorentz group, or rather, its (1, 1/2) ⊕ (1/2, 1) part.
This field equation can be derived as the Euler–Lagrange equation corresponding to the Rarita–Schwinger Lagrangian:
formula_8
where the bar above formula_9 denotes the Dirac adjoint.
This equation controls the propagation of the wave function of composite objects such as the delta baryons, or of the conjectural gravitino. So far, no elementary particle with spin 3/2 has been found experimentally.
The massless Rarita–Schwinger equation has a fermionic gauge symmetry: it is invariant under the gauge transformation formula_10, where formula_11 is an arbitrary spinor field. This is simply the local supersymmetry of supergravity, and the field must be a gravitino.
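The invariance can be checked in one line (a sketch, using the totally antisymmetrized product of gamma matrices defined in the massless case below): the gauge shift adds a term in which the antisymmetric indices of the gamma product are contracted with a symmetric second derivative, so it vanishes identically:

```latex
\gamma^{\mu\nu\rho}\partial_\nu\left(\psi_\rho + \partial_\rho\epsilon\right)
  = \gamma^{\mu\nu\rho}\partial_\nu\psi_\rho
  + \gamma^{\mu\nu\rho}\partial_\nu\partial_\rho\epsilon
  = \gamma^{\mu\nu\rho}\partial_\nu\psi_\rho ,
```

since the gamma product is antisymmetric in ν and ρ while the partial derivatives commute.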
"Weyl" and "Majorana" versions of the Rarita–Schwinger equation also exist.
Equations of motion in the massless case.
Consider a massless Rarita–Schwinger field described by the Lagrangian density
formula_12
where the sum over spin indices is implicit, formula_9 are Majorana spinors, and
formula_13
To obtain the equations of motion we vary the Lagrangian with respect to the fields formula_9, obtaining:
formula_14
Using the Majorana flip properties, we see that the first and second terms on the RHS are equal, concluding that
formula_15
plus unimportant boundary terms.
Imposing formula_16 we thus see that the equation of motion for a massless Majorana Rarita–Schwinger spinor reads:
formula_17
The gauge symmetry of the massless Rarita–Schwinger equation allows the choice of the gauge formula_18, reducing the equations to:
formula_19
A solution with spins 1/2 and 3/2 is given by:
formula_20
where formula_21 is the spatial Laplacian, formula_22 is doubly transverse, carrying spin 3/2, and formula_23 satisfies the massless Dirac equation, therefore carrying spin 1/2.
Drawbacks of the equation.
The current description of massive, higher spin fields through either Rarita–Schwinger or Fierz–Pauli formalisms is afflicted with several maladies.
Superluminal propagation.
As in the case of the Dirac equation, electromagnetic interaction can be added by promoting the partial derivative to gauge covariant derivative:
formula_24.
In 1969, Velo and Zwanziger showed that the Rarita–Schwinger Lagrangian coupled to electromagnetism leads to an equation whose solutions include wavefronts propagating faster than light. In other words, the field then suffers from acausal, superluminal propagation; consequently, its quantization in interaction with electromagnetism is essentially flawed. In extended supergravity, though, Das and Freedman have shown that local supersymmetry solves this problem.
|
[
{
"math_id": 0,
"text": " \\left ( \\epsilon^{\\mu \\kappa \\rho \\nu} \\gamma_5 \\gamma_\\kappa \\partial_\\rho - i m \\sigma^{\\mu \\nu} \\right)\\psi_\\nu = 0 "
},
{
"math_id": 1,
"text": "\\epsilon^{\\mu\\kappa\\rho\\nu}"
},
{
"math_id": 2,
"text": "\\gamma_\\kappa"
},
{
"math_id": 3,
"text": "\\kappa=0,1,2,3"
},
{
"math_id": 4,
"text": "\\gamma_5=i\\gamma_0\\gamma_1\\gamma_2\\gamma_3"
},
{
"math_id": 5,
"text": "m"
},
{
"math_id": 6,
"text": "\\sigma^{\\mu\\nu} \\equiv \\frac{i}{2} [\\gamma^\\mu,\\gamma^\\nu] "
},
{
"math_id": 7,
"text": "\\psi_\\nu"
},
{
"math_id": 8,
"text": "\\mathcal{L}=-\\tfrac{1}{2}\\;\\bar{\\psi}_\\mu \\left( \\epsilon^{\\mu \\kappa \\rho \\nu} \\gamma_5 \\gamma_\\kappa \\partial_\\rho - i m \\sigma^{\\mu \\nu} \\right)\\psi_\\nu"
},
{
"math_id": 9,
"text": "\\psi_\\mu"
},
{
"math_id": 10,
"text": "\\psi_\\mu \\rightarrow \\psi_\\mu + \\partial_\\mu \\epsilon"
},
{
"math_id": 11,
"text": "\\epsilon\\equiv \\epsilon_\\alpha"
},
{
"math_id": 12,
"text": " \\mathcal L_{RS} = \\bar \\psi_\\mu \\gamma^{\\mu\\nu\\rho} \\partial_\\nu \\psi_\\rho,"
},
{
"math_id": 13,
"text": " \\gamma^{\\mu\\nu\\rho} \\equiv \\frac{1}{3!} \\gamma^{[\\mu}\\gamma^\\nu \\gamma^{\\rho]}. "
},
{
"math_id": 14,
"text": " \\delta \\mathcal L_{RS} =\n \\delta \\bar \\psi_\\mu \\gamma^{\\mu\\nu\\rho} \\partial_\\nu \\psi_\\rho\n + \\bar \\psi_\\mu \\gamma^{\\mu\\nu\\rho} \\partial_\\nu \\delta \\psi_\\rho\n = \\delta \\bar \\psi_\\mu \\gamma^{\\mu\\nu\\rho} \\partial_\\nu \\psi_\\rho\n - \\partial_\\nu \\bar \\psi_\\mu \\gamma^{\\mu\\nu\\rho} \\delta \\psi_\\rho\n + \\text{ boundary terms}\n"
},
{
"math_id": 15,
"text": " \\delta \\mathcal L_{RS} = 2 \\delta \\bar \\psi_\\mu \\gamma^{\\mu\\nu\\rho} \\partial_\\nu \\psi_\\rho, "
},
{
"math_id": 16,
"text": " \\delta \\mathcal L_{RS} = 0"
},
{
"math_id": 17,
"text": " \\gamma^{\\mu\\nu\\rho} \\partial_\\nu \\psi_\\rho = 0. "
},
{
"math_id": 18,
"text": "\\gamma^\\mu \\psi_\\mu = 0"
},
{
"math_id": 19,
"text": "\n\\gamma^\\nu{\\partial}_\\nu \\psi_\\mu = 0, \\quad\n\\partial^\\mu \\psi_\\mu = 0, \\quad\n\\gamma^\\mu \\psi_\\mu = 0.\n"
},
{
"math_id": 20,
"text": "\n\\psi_0=\\kappa, \\quad \\psi_i = \\psi^{TT}_i + \\frac{\\gamma^j\\partial_j}{\\nabla^2}\\gamma_0\\partial_i\\kappa,\n"
},
{
"math_id": 21,
"text": "\\nabla^2 "
},
{
"math_id": 22,
"text": " \\psi^{TT}_i"
},
{
"math_id": 23,
"text": "\\kappa"
},
{
"math_id": 24,
"text": "\\partial_\\mu \\rightarrow D_\\mu = \\partial_\\mu - i e A_\\mu "
}
] |
https://en.wikipedia.org/wiki?curid=1531781
|
15317831
|
Algebraic reconstruction technique
|
Technique in computed tomography
The algebraic reconstruction technique (ART) is an iterative reconstruction technique used in computed tomography. It reconstructs an image from a series of angular projections (a sinogram). Gordon, Bender and Herman first showed its use in image reconstruction; in numerical linear algebra the method is known as the Kaczmarz method.
An advantage of ART over other reconstruction methods (such as filtered backprojection) is that it is relatively easy to incorporate prior knowledge into the reconstruction process.
ART can be considered as an iterative solver of a system of linear equations formula_0, where:
formula_1 is a sparse formula_2 matrix whose values represent the relative contribution of each output pixel to different points in the sinogram (formula_3 being the number of individual values in the sinogram, and formula_4 being the number of output pixels);
formula_5 represents the pixels in the generated (output) image, arranged as a vector; and
formula_6 is a vector representing the sinogram. Each projection (row) in the sinogram is made up of a number of discrete values, arranged along the transverse axis. formula_6 is made up of all of these values, from each of the individual projections.
Given a real or complex matrix formula_1 and a real or complex vector formula_6, respectively, the method computes an approximation of the solution of the linear systems of equations as in the following formula,
formula_7
where formula_8, formula_9 is the "i"-th row of the matrix formula_1, formula_10 is the "i"-th component of the vector formula_6.
formula_11 is an optional relaxation parameter in the range formula_12. The relaxation parameter can be used to slow the convergence of the system; this increases computation time, but can improve the signal-to-noise ratio of the output. In some implementations, the value of formula_11 is reduced with each successive iteration.
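A pure-Python sketch of the cyclic row-action update above (the function name and defaults are illustrative; production implementations use sparse-matrix storage for formula_1):

```python
def art_solve(A, b, iterations=100, relax=1.0):
    """Cyclic Kaczmarz / ART iteration for A x = b.

    A is a dense list of rows, b the measurement vector; each step
    projects the current estimate onto the hyperplane of one row.
    """
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for k in range(iterations):
        a = A[k % m]                        # sweep the rows cyclically
        denom = sum(v * v for v in a)       # ||a_i||^2
        if denom == 0.0:
            continue                        # skip all-zero rows
        resid = b[k % m] - sum(av * xv for av, xv in zip(a, x))
        step = relax * resid / denom
        x = [xv + step * av for xv, av in zip(x, a)]
    return x
```

For a consistent system the iterates converge to a solution; choosing `relax < 1` damps each projection step, trading speed for noise robustness as described above.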
A further development of the ART algorithm is the simultaneous algebraic reconstruction technique (SART) algorithm.
|
[
{
"math_id": 0,
"text": " A x = b "
},
{
"math_id": 1,
"text": " A "
},
{
"math_id": 2,
"text": " m \\times n "
},
{
"math_id": 3,
"text": " m "
},
{
"math_id": 4,
"text": " n "
},
{
"math_id": 5,
"text": " x "
},
{
"math_id": 6,
"text": " b "
},
{
"math_id": 7,
"text": "\nx^{k+1} = x^k + \\lambda_k \\frac{b_i - \\langle a_i, x^k \\rangle}{\\|a_i\\|^2} a_i^T\n"
},
{
"math_id": 8,
"text": " i = k \\bmod m + 1 "
},
{
"math_id": 9,
"text": " a_i "
},
{
"math_id": 10,
"text": " b_i "
},
{
"math_id": 11,
"text": " \\lambda_k "
},
{
"math_id": 12,
"text": " 0 < \\lambda_k \\leq 1 "
}
] |
https://en.wikipedia.org/wiki?curid=15317831
|
15318324
|
Helmert transformation
|
Transformation method within a three-dimensional space
The Helmert transformation (named after Friedrich Robert Helmert, 1843–1917) is a geometric transformation method within a three-dimensional space.
It is frequently used in geodesy to produce datum transformations between datums.
The Helmert transformation is also called a seven-parameter transformation and is a similarity transformation.
Definition.
It can be expressed as:
formula_0
where "X""T" is the transformed (target) coordinate vector, "C" is the translation vector, μ is the scale factor, "R" is the rotation matrix, and "X" is the original coordinate vector.
The parameters are: three translations "c""x", "c""y", "c""z"; one scale factor "s" (expressed in parts per million); and three rotation angles "r""x", "r""y", "r""z" about the coordinate axes.
Variations.
A special case is the two-dimensional Helmert transformation. Here, only four parameters are needed (two translations, one scaling, one rotation). These can be determined from two known points; if more points are available then checks can be made.
Sometimes it is sufficient to use the five parameter transformation, composed of three translations, only one rotation about the Z-axis, and one change of scale.
Restrictions.
The Helmert transformation only uses one scale factor, so it is not suitable for:
In these cases, a more general affine transformation is preferable.
Application.
The Helmert transformation is used, among other things, in geodesy to transform the coordinates of a point from one coordinate system into another. Using it, it becomes possible to convert regional surveying points into the WGS84 locations used by GPS.
For example, starting from the Gauss–Krüger coordinates x and y, plus the height h, the conversion into 3D values proceeds in steps:
The third step consists of the application of a rotation matrix, multiplication with the
scale factor formula_1 (with a value near 1) and the addition of the three translations, "c""x", "c""y", "c""z".
The coordinates of a reference system B are derived from reference system A by the following formula (position vector transformation convention and very small rotation angles simplification):
formula_2
or for each single parameter of the coordinate:
formula_3
For the reverse transformation, each element is multiplied by −1.
The seven parameters are determined for each region with three or more "identical points" of both systems. To bring them into agreement, the small inconsistencies (usually only a few cm) are adjusted using the method of least squares – that is, eliminated in a statistically plausible manner.
"Note: the rotation angles given in the table are in arcseconds and must be converted to radians before use in the calculation."
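The per-coordinate formulas transcribe directly into code (the function name is illustrative; rotations must already be in radians, converted from arcseconds via radians = arcseconds × π / 648,000):

```python
import math

ARCSEC_TO_RAD = math.pi / (180 * 3600)  # arcseconds to radians

def helmert(point, cx, cy, cz, s_ppm, rx, ry, rz):
    """7-parameter Helmert transformation (position vector convention).

    Translations in metres, scale in parts per million, rotations in
    radians (convert arcsecond parameters with ARCSEC_TO_RAD first).
    """
    x, y, z = point
    m = 1 + s_ppm * 1e-6  # scale factor near 1
    return (
        cx + m * (x - rz * y + ry * z),
        cy + m * (rz * x + y - rx * z),
        cz + m * (-ry * x + rx * y + z),
    )
```

With all parameters zero the transformation is the identity; negating each parameter gives the (small-angle, first-order) reverse transformation noted above.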
Standard parameters.
These are standard parameter sets for the 7-parameter transformation (or datum transformation) between two datums. For a transformation in the opposite direction, inverse transformation parameters should be calculated or inverse transformation should be applied (as described in the paper "On geodetic transformations"). The translations "c""x", "c""y", "c""z" are sometimes described as "t""x", "t""y", "t""z", or dx, dy, dz. The rotations "r""x", "r""y", and "r""z" are sometimes also described as formula_4, formula_5 and formula_6. In the United Kingdom the prime interest is the transformation from the OSGB36 datum used by the Ordnance Survey for Grid References on its Landranger and Explorer maps to the WGS84 implementation used by GPS technology. The Gauss–Krüger coordinate system used in Germany normally refers to the Bessel ellipsoid. A further datum of interest was ED50 (European Datum 1950) based on the Hayford ellipsoid. ED50 was part of the fundamentals of the NATO coordinates up to the 1980s, and many national coordinate systems of Gauss–Krüger are defined by ED50.
The Earth does not have a perfect ellipsoidal shape; it is better described as a geoid. In practice, the geoid is approximated by many different ellipsoids, and depending upon the actual location, the "locally best aligned ellipsoid" has been used for surveying and mapping purposes. The standard parameter set gives an accuracy of about for an OSGB36/WGS84 transformation. This is not precise enough for surveying, and the Ordnance Survey supplements these results by using a lookup table of further translations in order to reach accuracy.
Estimating the parameters.
If the transformation parameters are unknown, they can be calculated with reference points (that is, points whose coordinates are known before and after the transformation). Since a total of seven parameters (three translations, one scale, three rotations) have to be determined, at least two points and one coordinate of a third point (for example, the Z-coordinate) must be known. This gives a system with seven equations and seven unknowns, which can be solved.
For transformations between conformal map projections near an arbitrary point, the Helmert transformation parameters can be calculated exactly from the Jacobian matrix of the transformation function.
In practice, it is best to use more points. Through this correspondence, more accuracy is obtained, and a statistical assessment of the results becomes possible. In this case, the calculation is adjusted with the Gaussian least squares method.
A numerical value for the accuracy of the transformation parameters is obtained by calculating the values at the reference points, and weighting the results relative to the centroid of the points.
While the method is mathematically rigorous, it is entirely dependent on the accuracy of the parameters that are used. In practice, these parameters are computed from the inclusion of at least three known points in the networks. However the accuracy of these will affect the following transformation parameters, as these points will contain observation errors. Therefore, a "real-world" transformation will only be a best estimate and should contain a statistical measure of its quality.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X_T = C + \\mu R X \\, "
},
{
"math_id": 1,
"text": "\\mu = 1 + s"
},
{
"math_id": 2,
"text": "\\begin{bmatrix}\nX \\\\\nY \\\\\nZ\n\\end{bmatrix}^B = \\begin{bmatrix}\nc_x \\\\\nc_y \\\\\nc_z\n\\end{bmatrix} + (1 + s\\times10^{-6}) \\cdot\n\\begin{bmatrix}\n 1 & -r_z & r_y \\\\\n r_z & 1 & -r_x \\\\\n-r_y & r_x & 1\n\\end{bmatrix} \\cdot \\begin{bmatrix}\nX \\\\\nY \\\\\nZ\n\\end{bmatrix}^A\n"
},
{
"math_id": 3,
"text": "\\begin{align}\nX_B & = c_x + (1 + s \\times 10^{-6}) \\cdot (X_A - r_z \\cdot Y_A + r_y \\cdot Z_A) \\\\\nY_B & = c_y + (1 + s \\times 10^{-6}) \\cdot ( r_z \\cdot X_A + Y_A - r_x \\cdot Z_A) \\\\\nZ_B & = c_z + (1 + s \\times 10^{-6}) \\cdot ( -r_y \\cdot X_A + r_x \\cdot Y_A + Z_A).\n\\end{align}\n"
},
{
"math_id": 4,
"text": "\\omega"
},
{
"math_id": 5,
"text": "\\phi"
},
{
"math_id": 6,
"text": "\\kappa"
}
] |
https://en.wikipedia.org/wiki?curid=15318324
|
153208
|
System dynamics
|
Study of non-linear complex systems
System dynamics (SD) is an approach to understanding the nonlinear behaviour of complex systems over time using stocks, flows, internal feedback loops, table functions and time delays.
Overview.
System dynamics is a methodology and mathematical modeling technique to frame, understand, and discuss complex issues and problems. Originally developed in the 1950s to help corporate managers improve their understanding of industrial processes, SD is currently being used throughout the public and private sector for policy analysis and design.
Convenient graphical user interface (GUI) system dynamics software developed into user-friendly versions by the 1990s and has been applied to diverse systems. SD models solve the problem of simultaneity (mutual causation) by updating all variables in small time increments, with positive and negative feedbacks and time delays structuring the interactions and control. The best known SD model is probably the 1972 "The Limits to Growth". This model forecast that exponential growth of population and capital, with finite resource sources and sinks and perception delays, would lead to economic collapse during the 21st century under a wide variety of growth scenarios.
System dynamics is an aspect of systems theory as a method to understand the dynamic behavior of complex systems. The basis of the method is the recognition that the structure of any system, the many circular, interlocking, sometimes time-delayed relationships among its components, is often just as important in determining its behavior as the individual components themselves. Examples are chaos theory and social dynamics. It is also claimed that because there are often properties-of-the-whole which cannot be found among the properties-of-the-elements, in some cases the behavior of the whole cannot be explained in terms of the behavior of the parts.
History.
System dynamics was created during the mid-1950s by Professor Jay Forrester of the Massachusetts Institute of Technology. In 1956, Forrester accepted a professorship in the newly formed MIT Sloan School of Management. His initial goal was to determine how his background in science and engineering could be brought to bear, in some useful way, on the core issues that determine the success or failure of corporations. Forrester's insights into the common foundations that underlie engineering, which led to the creation of system dynamics, were triggered, to a large degree, by his involvement with managers at General Electric (GE) during the mid-1950s. At that time, the managers at GE were perplexed because employment at their appliance plants in Kentucky exhibited a significant three-year cycle. The business cycle was judged to be an insufficient explanation for the employment instability. From hand simulations (or calculations) of the stock-flow-feedback structure of the GE plants, which included the existing corporate decision-making structure for hiring and layoffs, Forrester was able to show how the instability in GE employment was due to the internal structure of the firm and not to an external force such as the business cycle. These hand simulations were the start of the field of system dynamics.
During the late 1950s and early 1960s, Forrester and a team of graduate students moved the emerging field of system dynamics from the hand-simulation stage to the formal computer modeling stage. Richard Bennett created the first system dynamics computer modeling language called SIMPLE (Simulation of Industrial Management Problems with Lots of Equations) in the spring of 1958. In 1959, Phyllis Fox and Alexander Pugh wrote the first version of
DYNAMO (DYNAmic MOdels), an improved version of SIMPLE, and the system dynamics language became the industry standard for over thirty years. Forrester published the first, and still classic, book in the field titled "Industrial Dynamics" in 1961.
From the late 1950s to the late 1960s, system dynamics was applied almost exclusively to corporate/managerial problems. In 1968, however, an unexpected occurrence caused the field to broaden beyond corporate modeling. John F. Collins, the former mayor of Boston, was appointed a visiting professor of Urban Affairs at MIT. The result of the Collins-Forrester collaboration was a book titled "Urban Dynamics". The Urban Dynamics model presented in the book was the first major non-corporate application of system dynamics. In 1967, Richard M. Goodwin published the first edition of his paper "A Growth Cycle", which was the first attempt to apply the principles of system dynamics to economics. He devoted most of his life teaching what he called "Economic Dynamics", which could be considered a precursor of modern Non-equilibrium economics.
The second major noncorporate application of system dynamics came shortly after the first. In 1970, Jay Forrester was invited by the Club of Rome to a meeting in Bern, Switzerland. The Club of Rome is an organization devoted to solving what its members describe as the "predicament of mankind"—that is, the global crisis that may appear sometime in the future, due to the demands being placed on the Earth's carrying capacity (its sources of renewable and nonrenewable resources and its sinks for the disposal of pollutants) by the world's exponentially growing population. At the Bern meeting, Forrester was asked if system dynamics could be used to address the predicament of mankind. His answer, of course, was that it could. On the plane back from the Bern meeting, Forrester created the first draft of a system dynamics model of the world's socioeconomic system. He called this model WORLD1. Upon his return to the United States, Forrester refined WORLD1 in preparation for a visit to MIT by members of the Club of Rome. Forrester called the refined version of the model WORLD2. Forrester published WORLD2 in a book titled World Dynamics.
Topics in systems dynamics.
The primary elements of system dynamics diagrams are feedback, accumulation of flows into stocks and time delays.
As an illustration of the use of system dynamics, imagine an organisation that plans to introduce an innovative new durable consumer product. The organisation needs to understand the possible market dynamics in order to design marketing and production plans.
Causal loop diagrams.
In the system dynamics methodology, a problem or a system (e.g., ecosystem, political system or mechanical system) may be represented as a causal loop diagram. A causal loop diagram is a simple map of a system with all its constituent components and their interactions. By capturing interactions and consequently the feedback loops (see figure below), a causal loop diagram reveals the structure of a system. By understanding the structure of a system, it becomes possible to ascertain a system's behavior over a certain time period.
The causal loop diagram of the new product introduction may look as follows:
There are two feedback loops in this diagram. The positive reinforcement (labeled R) loop on the right indicates that the more people have already adopted the new product, the stronger the word-of-mouth impact. There will be more references to the product, more demonstrations, and more reviews. This positive feedback should generate sales that continue to grow.
The second feedback loop on the left is negative reinforcement (or "balancing" and hence labeled B). Clearly, growth cannot continue forever, because as more and more people adopt, there remain fewer and fewer potential adopters.
Both feedback loops act simultaneously, but at different times they may have different strengths. Thus one might expect growing sales in the initial years, and then declining sales in the later years. However, in general a causal loop diagram does not specify the structure of a system sufficiently to permit determination of its behavior from the visual representation alone.
Stock and flow diagrams.
Causal loop diagrams aid in visualizing a system's structure and behavior, and analyzing the system qualitatively. To perform a more detailed quantitative analysis, a causal loop diagram is transformed to a stock and flow diagram. A stock and flow model helps in studying and analyzing the system in a quantitative way; such models are usually built and simulated using computer software.
A stock is the term for any entity that accumulates or depletes over time. A flow is the rate of change in a stock.
In this example, there are two stocks: Potential adopters and Adopters. There is one flow: New adopters. For every new adopter, the stock of potential adopters declines by one, and the stock of adopters increases by one.
Equations.
The real power of system dynamics is utilised through simulation. Although it is possible to perform the modeling in a spreadsheet, there are a variety of software packages that have been optimised for this.
The steps involved in a simulation are:
In this example, the equations that change the two stocks via the flow are:
formula_0
formula_1
Equations in discrete time.
List of all the equations in discrete time, in their order of execution in each year, for years 1 to 15:
formula_2
formula_3
formula_4
formula_5
formula_6
formula_7
formula_8
formula_9
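The discrete-time equations above translate directly into a short simulation loop (the function name is illustrative, and the initial stock of potential adopters is an assumed value not given in the text):

```python
def simulate_adoption(years=15, potential=1_000_000.0, p=0.03, q=0.4):
    """Run the discrete-time adoption model; return yearly adopter totals."""
    adopters = 0.0
    history = []
    for _ in range(years):
        # 1) probability that a contact has not yet adopted
        prob_not_adopted = potential / (potential + adopters)
        # 2) imitators driven by word of mouth (reinforcing loop)
        imitators = q * adopters * prob_not_adopted
        # 3) innovators adopting independently
        innovators = p * potential
        # 4) move the flow from one stock to the other
        new_adopters = innovators + imitators
        potential -= new_adopters
        adopters += new_adopters
        history.append(adopters)
    return history
```

Plotting the returned history reproduces the classic s-curve: slow initial growth, a roughly exponential middle phase, then saturation as the balancing loop dominates.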
Dynamic simulation results.
The dynamic simulation results show that the behaviour of the system would be to have growth in "adopters" that follows a classic s-curve shape.
The increase in "adopters" is very slow initially, then exponential growth for a period, followed ultimately by saturation.
Equations in continuous time.
To get intermediate values and better accuracy, the model can run in continuous time: we multiply the number of time steps and proportionally divide the flow values that change stock levels. In this example we multiply the 15 years by 4 to obtain 60 quarters, and we divide the value of the flow by 4.<br>
Dividing the value is the simplest with the Euler method, but other methods could be employed instead, such as Runge–Kutta methods.
List of the equations in continuous time for quarters 1 to 60:
formula_10
formula_11
formula_12
formula_13
formula_14
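The continuous-time refinement is the same loop with the flow scaled by the time step, i.e. forward Euler integration with TimeStep = 1/4 (names and initial stock are illustrative, as before):

```python
def simulate_adoption_euler(years=15, steps_per_year=4,
                            potential=1_000_000.0, p=0.03, q=0.4):
    """Forward-Euler integration of the adoption model."""
    dt = 1.0 / steps_per_year  # TimeStep
    adopters = 0.0
    for _ in range(years * steps_per_year):
        prob_not_adopted = potential / (potential + adopters)
        # "Valve New adopters": the annual flow scaled by the time step
        flow = (p * potential + q * adopters * prob_not_adopted) * dt
        potential -= flow
        adopters += flow
    return adopters
```

A higher-order scheme such as Runge–Kutta could replace the Euler step for better accuracy at the same step count.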
Application.
System dynamics has found application in a wide range of areas, for example population, agriculture, ecological and economic systems, which usually interact strongly with each other.
System dynamics have various "back of the envelope" management applications. They are a potent tool to:
Computer software is used to simulate a system dynamics model of the situation being studied. Running "what if" simulations to test certain policies on such a model can greatly aid in understanding how the system changes over time. System dynamics is very similar to systems thinking and constructs the same causal loop diagrams of systems with feedback. However, system dynamics typically goes further and utilises simulation to study the behaviour of systems and the impact of alternative policies.
System dynamics has been used to investigate resource dependencies, and resulting problems, in product development.
A system dynamics approach to macroeconomics, known as "Minsky", has been developed by the economist Steve Keen. This has been used to successfully model world economic behaviour from the apparent stability of the Great Moderation to the sudden unexpected Financial crisis of 2007–08.
Example: Growth and decline of companies.
The figure above is a causal loop diagram of a system dynamics model created to examine forces that may be responsible for the growth or decline of life insurance companies in the United Kingdom. A number of this figure's features are worth mentioning. The first is that the model's negative feedback loops are identified by "C's", which stand for "Counteracting" loops. The second is that double slashes are used to indicate places where there is a significant delay between causes (i.e., variables at the tails of arrows) and effects (i.e., variables at the heads of arrows). This is a common causal loop diagramming convention in system dynamics. Third is that thicker lines are used to identify the feedback loops and links that the author wishes the audience to focus on. This is also a common system dynamics diagramming convention. Last, it is clear that a decision maker would find it impossible to think through the dynamic behavior inherent in the model from inspection of the figure alone.
See also.
<templatestyles src="Col-begin/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\ \\mbox{Potential adopters} = - \\int_{0} ^{t} \\mbox{New adopters }\\,dt "
},
{
"math_id": 1,
"text": " \\ \\mbox{Adopters} = \\int_{0} ^{t} \\mbox{New adopters }\\,dt "
},
{
"math_id": 2,
"text": "1) \\ \\mbox{Probability that contact has not yet adopted}=\\mbox{Potential adopters} / (\\mbox{Potential adopters } + \\mbox{ Adopters}) "
},
{
"math_id": 3,
"text": "2) \\ \\mbox{Imitators}=q \\cdot \\mbox{Adopters} \\cdot \\mbox{Probability that contact has not yet adopted}"
},
{
"math_id": 4,
"text": "3) \\ \\mbox{Innovators}=p \\cdot \\mbox{Potential adopters} "
},
{
"math_id": 5,
"text": "4) \\ \\mbox{New adopters}=\\mbox{Innovators}+\\mbox{Imitators} "
},
{
"math_id": 6,
"text": "4.1) \\ \\mbox{Potential adopters}\\ -= \\mbox{New adopters }"
},
{
"math_id": 7,
"text": "4.2) \\ \\mbox{Adopters}\\ += \\mbox{New adopters }"
},
{
"math_id": 8,
"text": "\\ p=0.03 "
},
{
"math_id": 9,
"text": "\\ q=0.4 "
},
{
"math_id": 10,
"text": "10) \\ \\mbox{Valve New adopters}\\ = \\mbox{New adopters} \\cdot TimeStep "
},
{
"math_id": 11,
"text": "10.1) \\ \\mbox{Potential adopters}\\ -= \\mbox{Valve New adopters} "
},
{
"math_id": 12,
"text": "10.2) \\ \\mbox{Adopters}\\ += \\mbox{Valve New adopters } "
},
{
"math_id": 13,
"text": " \\ TimeStep = 1/4 "
},
{
"math_id": 14,
"text": " \\ \\mbox{Valve New adopters}\\ = \\mbox{New adopters } \\cdot TimeStep "
}
] |
https://en.wikipedia.org/wiki?curid=153208
|
153215
|
Working mass
|
Mass against which a system operates
Working mass, also referred to as reaction mass, is a mass against which a system operates in order to produce acceleration.
In the case of a chemical rocket, for example, the reaction mass is the product of the burned fuel shot backwards to provide propulsion. All acceleration requires an exchange of momentum, which can be thought of as the "unit of movement". Momentum is related to mass and velocity, as given by the formula "P = mv", where "P" is the momentum, "m" the mass, and "v" the velocity. The velocity of a body is easily changeable, but in most cases its mass is not, which is why acceleration is achieved by expelling a working mass at speed.
Rockets and rocket-like reaction engines.
In rockets, the total velocity change can be calculated (using the Tsiolkovsky rocket equation) as follows:
formula_0
Where:
- "u" is the effective velocity at which the working mass is expelled,
- "m" is the mass of the working mass expelled, and
- "M" is the remaining mass of the vehicle after the working mass has been expelled.
The term working mass is used primarily in the aerospace field. In more "down to earth" examples, the working mass is typically provided by the Earth, which contains so much momentum in comparison to most vehicles that the amount it gains or loses can be ignored. However, in the case of an aircraft the working mass is the air, and in the case of a rocket, it is the rocket propellant itself. Most rocket engines use light-weight propellants (liquid hydrogen, oxygen, or kerosene) accelerated to supersonic speeds. However, ion engines often use heavier elements like xenon as the reaction mass, accelerated to much higher speeds using electric fields.
In many cases, the working mass is separate from the energy used to accelerate it. In a car, the engine provides power to the wheels, which then accelerates the Earth backward to make the car move forward. This is not the case for most rockets, however, where the rocket propellant is the working mass, as well as the energy source. This means that rockets stop accelerating as soon as they run out of fuel, regardless of other power sources they may have. This can be a problem for satellites that need to be repositioned often, as it limits their useful life. In general, the exhaust velocity should be close to the ship velocity for optimum energy efficiency. This limitation of rocket propulsion is one of the main motivations for the ongoing interest in field propulsion technology.
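The velocity-change formula above can be sketched directly; the symbols follow the equation, while the exhaust velocity and stage masses below are illustrative assumptions:

```python
import math

def delta_v(u, m, M):
    """Tsiolkovsky rocket equation as given above: u is the effective
    exhaust velocity, m the expelled working mass, M the remaining mass."""
    return u * math.log((m + M) / M)

# illustrative hydrogen/oxygen stage: u ~ 4400 m/s,
# 100 t of working mass, 20 t of remaining mass
print(round(delta_v(4400, 100e3, 20e3)))  # → 7884 (m/s)
```

The logarithm is why carrying more working mass gives diminishing returns: doubling the propellant does not double the velocity change.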
|
[
{
"math_id": 0,
"text": "\\Delta\\,v = u\\,\\ln\\left(\\frac{m + M}{M}\\right)"
}
] |
https://en.wikipedia.org/wiki?curid=153215
|
153217
|
Carrier wave
|
Sinusoidal wave without any modulation.
In telecommunications, a carrier wave, carrier signal, or just carrier, is a periodic waveform (usually sinusoidal) that in itself carries no information, but has one or more of its properties modified (a process called modulation) by an information-bearing signal (called the "message signal" or "modulation signal") for the purpose of conveying information.
This carrier wave usually has a much higher frequency than the message signal does, because it is impractical to radiate signals at low frequencies: an efficient antenna must be comparable in size to the signal's wavelength, which can be many kilometres for audio-frequency signals.
The purpose of the carrier is usually either to transmit the information through space as an electromagnetic wave (as in radio communication), or to allow several carriers at different frequencies to share a common physical transmission medium by frequency division multiplexing (as in a cable television system).
The term originated in radio communication, where the carrier wave creates the waves which carry the information (modulation) through the air from the transmitter to the receiver. The term is also used for an unmodulated emission in the absence of any modulating signal.
In music production, carrier signals can be controlled by a modulating signal to change the sound property of an audio recording and add a sense of depth and movement.
Overview.
The term "carrier wave" originated with radio. In a radio communication system, such as radio or television broadcasting, information is transmitted across space by radio waves. At the sending end, the information, in the form of a modulation signal, is applied to an electronic device called a transmitter. In the transmitter, an electronic oscillator generates a sinusoidal alternating current of radio frequency; this is the carrier wave. The information signal is used to modulate the carrier wave, altering some aspects of the carrier, to impress the information on the wave. The alternating current is amplified and applied to the transmitter's antenna, radiating radio waves that carry the information to the receiver's location. At the receiver, the radio waves strike the receiver's antenna, inducing a tiny oscillating current in it, which is applied to the receiver. In the receiver, the modulation signal is extracted from the modulated carrier wave, a process called demodulation.
Most radio systems in the 20th century used frequency modulation (FM) or amplitude modulation (AM) to add information to the carrier. The frequency spectrum of a modulated AM or FM signal from a radio transmitter is shown above. It consists of a strong component "(C)" at the carrier frequency formula_0 with the modulation contained in narrow sidebands "(SB)" above and below the carrier frequency. The frequency of a radio or television station is considered to be the carrier frequency. However the carrier itself is not useful in transmitting the information, so the energy in the carrier component is a waste of transmitter power. Therefore, in many modern modulation methods, the carrier is not transmitted. For example, in single-sideband modulation (SSB), the carrier is suppressed (and in some forms of SSB, eliminated). The carrier must be reintroduced at the receiver by a beat frequency oscillator (BFO).
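The carrier-plus-sidebands spectrum described above can be verified numerically. The sketch below builds an AM signal and measures the amplitude at the carrier and sideband frequencies; the sample rate, frequencies and modulation index are illustrative assumptions:

```python
import math

def tone_mag(signal, fs, f):
    # amplitude of the frequency component at f Hz (single-bin DFT)
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * f * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * f * i / fs) for i, s in enumerate(signal))
    return 2 * math.hypot(re, im) / n

fs = 8000                    # samples over one second (illustrative)
fc, fm, m = 1000, 100, 0.5   # carrier, message frequency, modulation index
am = [(1 + m * math.cos(2 * math.pi * fm * i / fs))
      * math.cos(2 * math.pi * fc * i / fs) for i in range(fs)]

print(round(tone_mag(am, fs, fc), 3))       # carrier C       → 1.0
print(round(tone_mag(am, fs, fc + fm), 3))  # upper sideband  → 0.25
print(round(tone_mag(am, fs, fc - fm), 3))  # lower sideband  → 0.25
```

Note that the carrier keeps its full amplitude regardless of the message, which is why suppressed-carrier schemes such as SSB save transmitter power.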
Carriers are also widely used to transmit multiple information channels through a single cable or other communication medium using the technique of frequency division multiplexing (FDM). For example, in a cable television system, hundreds of television channels are distributed to consumers through a single coaxial cable, by modulating each television channel on a carrier wave of a different frequency, then sending all the carriers through the cable. At the receiver, the individual channels can be separated by bandpass filters using tuned circuits so the television channel desired can be displayed. A similar technique called wavelength division multiplexing is used to transmit multiple channels of data through an optical fiber by modulating them on separate light carriers; light beams of different wavelengths.
Carrierless modulation systems.
The information in a modulated radio signal is contained in the sidebands while the power in the carrier frequency component does not transmit information itself, so newer forms of radio communication (such as spread spectrum and ultra-wideband), and OFDM which is widely used in Wi-Fi networks, digital television, and digital audio broadcasting (DAB) do not use a conventional sinusoidal carrier wave.
Carrier leakage.
Carrier leakage is interference caused by crosstalk or a DC offset. It is present as an unmodulated sine wave within the signal's bandwidth, whose amplitude is independent of the signal's amplitude. See frequency mixers.
References.
<templatestyles src="Reflist/styles.css" />
The dictionary definition of "carrier wave" at Wiktionary
|
[
{
"math_id": 0,
"text": "f_C"
}
] |
https://en.wikipedia.org/wiki?curid=153217
|
153219
|
Liquid air cycle engine
|
Concept of hybrid atmospheric jet engine
A liquid air cycle engine (LACE) is a type of spacecraft propulsion engine that attempts to increase its efficiency by gathering part of its oxidizer from the atmosphere. A liquid air cycle engine uses liquid hydrogen (LH2) fuel to liquefy the air.
In a liquid oxygen/liquid hydrogen rocket, the liquid oxygen (LOX) needed for combustion is the majority of the weight of the spacecraft on lift-off, so if some of this can be collected from the air on the way, it might dramatically lower the take-off weight of the spacecraft.
LACE was studied to some extent in the USA during the late 1950s and early 1960s, and by late 1960 Marquardt had a testbed system running. However, as NASA moved to ballistic capsules during Project Mercury, funding for research into winged vehicles slowly disappeared, and LACE work along with it.
LACE was also the basis of the engines on the British Aerospace HOTOL design of the 1980s, but this did not progress beyond studies.
Principle of operation.
Conceptually, LACE works by compressing and then quickly liquefying the air. Compression is achieved through the ram-air effect in an intake similar to that found on a high-speed aircraft like Concorde, where intake ramps create shock waves that compress the air. The LACE design then blows the compressed air over a heat exchanger, in which the liquid hydrogen fuel is flowing. This rapidly cools the air, and the various constituents quickly liquefy. By careful mechanical arrangement the liquid oxygen can be removed from the other parts of the air, notably water, nitrogen and carbon dioxide, at which point the liquid oxygen can be fed into the engine as usual. Heat-exchanger limitations always force this system to run with a hydrogen/air ratio much richer than stoichiometric, with a consequent penalty in performance; the excess hydrogen is dumped overboard.
Advantages and disadvantages.
The use of a winged launch vehicle allows using lift rather than thrust to overcome gravity, which greatly reduces gravity losses. On the other hand, the reduced gravity losses come at the price of much higher aerodynamic drag and aerodynamic heating due to the need to stay much deeper within the atmosphere than a pure rocket would during the boost phase.
In order to appreciably reduce the mass of the oxygen carried at launch, a LACE vehicle needs to spend more time in the lower atmosphere to collect enough oxygen to supply the engines during the remainder of the launch. This leads to greatly increased vehicle heating and drag losses, which therefore increases fuel consumption to offset the drag losses and the additional mass of the thermal protection system. This increased fuel consumption offsets somewhat the savings in oxidizer mass; these losses are in turn offset by the higher specific impulse, "I"sp, of the air-breathing engine. Thus, the engineering trade-offs involved are quite complex, and highly sensitive to the design assumptions made.
Other issues are introduced by the relative material and logistical properties of LOx versus LH2. LOx is quite cheap; LH2 is nearly two orders of magnitude more expensive. LOx is dense (1.141 kg/L), whereas LH2 has a very low density (0.0678 kg/L) and is therefore very bulky. (The extreme bulkiness of the LH2 tankage tends to increase vehicle drag by increasing the vehicle's frontal area.) Finally, LOx tanks are relatively lightweight and fairly cheap, while the deep cryogenic nature and extreme physical properties of LH2 mandate that LH2 tanks and plumbing must be large and use heavy, expensive, exotic materials and insulation. Hence, much as the costs of using LH2 rather than a hydrocarbon fuel may well outweigh the "I"sp benefit of using LH2 in a single-stage-to-orbit rocket, the costs of using more LH2 as a propellant and air-liquefaction coolant in LACE may well outweigh the benefits gained by not needing to carry as much LOx on board.
Most significantly, the LACE system is far heavier than a pure rocket engine having the same thrust (air-breathing engines of almost all types have relatively poor thrust-to-weight ratios compared to rockets), and the performance of launch vehicles of all types is particularly affected by increases in vehicle dry mass (such as engines) that must be carried all the way to orbit, as opposed to oxidizer mass that would be burnt off over the course of the flight. Moreover, the lower thrust-to-weight ratio of an air-breathing engine as compared to a rocket significantly decreases the launch vehicle's maximum possible acceleration, and increases gravity losses since more time must be spent to accelerate to orbital velocity. Also, the higher inlet and airframe drag losses of a lifting, air-breathing vehicle launch trajectory as compared to a pure rocket on a ballistic launch trajectory introduces an additional penalty term formula_0 into the rocket equation known as the "air-breather's burden". This term implies that unless the lift-to-drag ratio ("L"/"D") and the acceleration of the vehicle as compared to gravity ("a"/"g") are both implausibly large for a hypersonic air-breathing vehicle, the advantages of the higher "Isp" of the air-breathing engine and the savings in LOx mass are largely lost.
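The "air-breather's burden" factor formula_0 can be evaluated directly. The lift-to-drag ratios and accelerations below are illustrative assumptions; the point is that the factor only approaches 1 when both "L"/"D" and "a"/"g" are large:

```python
def airbreather_burden(L_over_D, a_over_g):
    # penalty factor 1 / (1 + gD/(aL)) from the formula above
    return 1.0 / (1.0 + 1.0 / (a_over_g * L_over_D))

# illustrative hypersonic values: L/D = 4 at a = 0.5 g
print(round(airbreather_burden(4.0, 0.5), 3))   # → 0.667
# the penalty fades only for implausibly high L/D and a/g
print(round(airbreather_burden(20.0, 2.0), 3))  # → 0.976
```

A factor of 0.667 means a third of the effective velocity gain is lost to drag, which is the sense in which the air-breathing advantages are "largely lost".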
Thus, the advantages, or disadvantages, of the LACE design continue to be a matter of some debate.
History.
LACE was studied to some extent in the United States of America during the late 1950s and early 1960s, where it was seen as a "natural" fit for a winged spacecraft project known as the Aerospaceplane. At the time the concept was known as LACES, for "Liquid Air Collection Engine System". The liquefied air and some of the hydrogen are then pumped directly into the engine for burning.
When it was demonstrated that it was relatively easy to separate the oxygen from the other components of air, mostly nitrogen and carbon dioxide, a new concept emerged as ACES for "Air Collection and Enrichment System". This leaves the problem of what to do with the leftover gasses. ACES injected the nitrogen into a ramjet engine, using it as additional working fluid while the engine was running on air and the liquid oxygen was being stored. As the aircraft climbed and the atmosphere thinned, the lack of air was offset by increasing the flow of oxygen from the tanks. This makes ACES an ejector ramjet (or ramrocket) as opposed to the pure rocket LACE design.
Both Marquardt Corporation and General Dynamics were involved in the LACES research. However, as NASA moved to ballistic capsules during Project Mercury, funding for research into winged vehicles slowly disappeared, and ACES along with it.
|
[
{
"math_id": 0,
"text": "\\frac {1} {1 + \\frac {gD} {aL}}"
}
] |
https://en.wikipedia.org/wiki?curid=153219
|
153221
|
Heat exchanger
|
Equipment used to transfer heat between fluids
A heat exchanger is a system used to transfer heat between a source and a working fluid. Heat exchangers are used in both cooling and heating processes. The fluids may be separated by a solid wall to prevent mixing or they may be in direct contact. They are widely used in space heating, refrigeration, air conditioning, power stations, chemical plants, petrochemical plants, petroleum refineries, natural-gas processing, and sewage treatment. The classic example of a heat exchanger is found in an internal combustion engine in which a circulating fluid known as engine coolant flows through radiator coils and air flows past the coils, which cools the coolant and heats the incoming air. Another example is the heat sink, which is a passive heat exchanger that transfers the heat generated by an electronic or a mechanical device to a fluid medium, often air or a liquid coolant.
Flow arrangement.
There are three primary classifications of heat exchangers according to their flow arrangement. In "parallel-flow" heat exchangers, the two fluids enter the exchanger at the same end and travel in parallel to one another to the other side. In "counter-flow" heat exchangers the fluids enter the exchanger from opposite ends. The counter-current design is the most efficient, in that it can transfer the most heat from the heat (transfer) medium per unit mass because the average temperature difference along any unit length is "higher". See countercurrent exchange. In a "cross-flow" heat exchanger, the fluids travel roughly perpendicular to one another through the exchanger.
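The efficiency advantage of the counter-flow arrangement can be illustrated with the standard effectiveness–NTU relations. This is a sketch; the NTU value and heat-capacity-rate ratio below are illustrative assumptions:

```python
import math

def eff_parallel(ntu, cr):
    # effectiveness of a parallel-flow exchanger (cr = Cmin/Cmax)
    return (1 - math.exp(-ntu * (1 + cr))) / (1 + cr)

def eff_counter(ntu, cr):
    # effectiveness of a counter-flow exchanger
    if cr == 1.0:
        return ntu / (1 + ntu)      # limiting case of equal capacity rates
    e = math.exp(-ntu * (1 - cr))
    return (1 - e) / (1 - cr * e)

ntu, cr = 2.0, 0.8                  # illustrative operating point
print(round(eff_parallel(ntu, cr), 3))  # → 0.540
print(round(eff_counter(ntu, cr), 3))   # → 0.711
```

For the same surface area and flow rates, the counter-flow unit transfers noticeably more of the maximum possible heat, which is the quantitative content of the claim above.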
For efficiency, heat exchangers are designed to maximize the surface area of the wall between the two fluids, while minimizing resistance to fluid flow through the exchanger. The exchanger's performance can also be affected by the addition of fins or corrugations in one or both directions, which increase surface area and may channel fluid flow or induce turbulence.
The driving temperature across the heat transfer surface varies with position, but an appropriate mean temperature can be defined. In most simple systems this is the "log mean temperature difference" (LMTD). Sometimes direct knowledge of the LMTD is not available and the NTU method is used.
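A minimal sketch of the log mean temperature difference; the terminal temperatures below are illustrative assumptions:

```python
import math

def lmtd(dt_a, dt_b):
    # log mean of the temperature differences at the two ends
    if dt_a == dt_b:
        return dt_a            # limiting case: the mean equals either end
    return (dt_a - dt_b) / math.log(dt_a / dt_b)

# counter-flow example: hot stream 100 → 60 °C, cold stream 20 → 50 °C
dt_hot_end = 100 - 50    # hot inlet vs cold outlet
dt_cold_end = 60 - 20    # hot outlet vs cold inlet
print(round(lmtd(dt_hot_end, dt_cold_end), 2))  # → 44.81
```

The heat duty then follows from Q = U·A·LMTD, with U the overall heat transfer coefficient and A the transfer area.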
Types.
Double pipe heat exchangers are the simplest exchangers used in industry. On one hand, these heat exchangers are cheap to design and maintain, making them a good choice for small industries. On the other hand, their low efficiency, coupled with the large space they occupy at larger scales, has led modern industries to use more efficient heat exchangers such as shell and tube or plate. However, since double pipe heat exchangers are simple, they are used to teach heat exchanger design basics to students, as the fundamental rules for all heat exchangers are the same.
1. Double-pipe heat exchanger
One fluid flows through the smaller pipe while the other flows through the annular gap between the two pipes. These may be parallel or counter-flows in a double pipe heat exchanger.
(a) Parallel flow, where both hot and cold liquids enter the heat exchanger from the same side, flow in the same direction and exit at the same end. This configuration is preferable when the two fluids are intended to approach nearly the same temperature, as it reduces thermal stress and produces a more uniform rate of heat transfer.
(b) Counter-flow, where hot and cold fluids enter opposite sides of the heat exchanger, flow in opposite directions, and exit at opposite ends. This configuration is preferable when the objective is to maximize heat transfer between the fluids, as it creates a larger temperature differential when used under otherwise similar conditions.
The figure above illustrates the parallel and counter-flow flow directions of the fluid exchanger.
2. Shell-and-tube heat exchanger
In a shell-and-tube heat exchanger, two fluids at different temperatures flow through the heat exchanger. One of the fluids flows through the tube side and the other fluid flows outside the tubes, but inside the shell (shell side).
Baffles are used to support the tubes, direct the fluid flow across the tubes in an approximately normal (crosswise) manner, and maximize the turbulence of the shell fluid. There are many kinds of baffles, and the choice of baffle form, spacing, and geometry depends on the allowable shell-side pressure drop and flow rate, the need for tube support, and flow-induced vibrations. There are several variations of shell-and-tube exchangers available; the differences lie in the arrangement of flow configurations and details of construction.
In application to cool air with shell-and-tube technology (such as intercooler / charge air cooler for combustion engines), fins can be added on the tubes to increase heat transfer area on air side and create a tubes & fins configuration.
3. Plate Heat Exchanger
A plate heat exchanger contains a number of thin, shaped heat transfer plates bundled together. The gasket arrangement of each pair of plates provides two separate channel systems. Each pair of plates forms a channel through which the fluid can flow. The pairs are attached by welding and bolting methods. The following shows the components in the heat exchanger.
In single channels the configuration of the gaskets enables flow through, allowing the main and secondary media to flow in counter-current directions. A gasketed plate heat exchanger has a heat transfer region formed from corrugated plates. The gaskets act as seals between the plates and are located between the frame and pressure plates. Fluid flows in a counter-current direction throughout the heat exchanger, producing efficient thermal performance. Plates are produced in different depths, sizes and corrugated shapes. Different types of plate heat exchangers are available, including plate-and-frame, plate-and-shell and spiral plate heat exchangers. The distribution area guarantees the flow of fluid to the whole heat transfer surface, which helps to prevent stagnant areas that can cause accumulation of unwanted material on solid surfaces. High flow turbulence between the plates results in a greater transfer of heat, at the cost of a greater pressure drop.
4. Condensers and Boilers
Heat exchangers using a two-phase heat transfer system are condensers, boilers and evaporators. Condensers cool hot gas or vapor to the point of condensation, transforming it into a liquid. The change of phase from liquid to gas is called vaporization, and the reverse is called condensation. The surface condenser is the most common type of condenser, and it includes a water supply device. Figure 5 below displays a two-pass surface condenser.
The pressure of steam at the turbine outlet is low: the steam density is very low while the volumetric flow rate is very high. To prevent a decrease in pressure as the steam moves from the turbine to the condenser, the condenser unit is placed underneath and connected to the turbine. Inside the tubes the cooling water runs in parallel, while the steam enters vertically downward through the wide opening at the top and travels through the tube bundle.
Furthermore, boilers are among the earliest applications of heat exchangers. The term steam generator is regularly used to describe a boiler unit in which the heat source is a hot liquid stream rather than combustion products. Boilers are manufactured in many dimensions and configurations: some produce only hot fluid, while others are designed for steam production.
Shell and tube.
Shell and tube heat exchangers consist of a series of tubes which contain fluid that must be either heated or cooled. A second fluid runs over the tubes that are being heated or cooled so that it can either provide the heat or absorb the heat required. A set of tubes is called the tube bundle and can be made up of several types of tubes: plain, longitudinally finned, etc. Shell and tube heat exchangers are typically used for high-pressure applications (with pressures greater than 30 bar and temperatures greater than 260 °C). This is because the shell and tube heat exchangers are robust due to their shape.<br>Several thermal design features must be considered when designing the tubes in the shell and tube heat exchangers:
There can be many variations on the shell and tube design. Typically, the ends of each tube are connected to plenums (sometimes called water boxes) through holes in tubesheets. The tubes may be straight or bent in the shape of a U, called U-tubes.
Fixed tube liquid-cooled heat exchangers especially suitable for marine and harsh applications can be assembled with brass shells, copper tubes, brass baffles, and forged brass integral end hubs. "(See: Copper in heat exchangers)."
Plate.
Another type of heat exchanger is the plate heat exchanger. These exchangers are composed of many thin, slightly separated plates that have very large surface areas and small fluid flow passages for heat transfer. Advances in gasket and brazing technology have made the plate-type heat exchanger increasingly practical. In HVAC applications, large heat exchangers of this type are called "plate-and-frame"; when used in open loops, these heat exchangers are normally of the gasket type to allow periodic disassembly, cleaning, and inspection. There are many types of permanently bonded plate heat exchangers, such as dip-brazed, vacuum-brazed, and welded plate varieties, and they are often specified for closed-loop applications such as refrigeration. Plate heat exchangers also differ in the types of plates that are used, and in the configurations of those plates. Some plates may be stamped with "chevron", dimpled, or other patterns, where others may have machined fins and/or grooves.
When compared to shell and tube exchangers, the stacked-plate arrangement typically has lower volume and cost. Another difference between the two is that plate exchangers typically serve low to medium pressure fluids, compared to medium and high pressures of shell and tube. A third and important difference is that plate exchangers employ more countercurrent flow rather than cross current flow, which allows lower approach temperature differences, high temperature changes, and increased efficiencies.
Plate and shell.
A third type of heat exchanger is a plate and shell heat exchanger, which combines plate heat exchanger with shell and tube heat exchanger technologies. The heart of the heat exchanger contains a fully welded circular plate pack made by pressing and cutting round plates and welding them together. Nozzles carry flow in and out of the platepack (the 'Plate side' flowpath). The fully welded platepack is assembled into an outer shell that creates a second flowpath (the 'Shell side'). Plate and shell technology offers high heat transfer, high pressure, high operating temperature, compact size, low fouling and close approach temperature. In particular, it dispenses entirely with gaskets, which provides security against leakage at high pressures and temperatures.
Adiabatic wheel.
A fourth type of heat exchanger uses an intermediate fluid or solid store to hold heat, which is then moved to the other side of the heat exchanger to be released. Two examples of this are adiabatic wheels, which consist of a large wheel with fine threads rotating through the hot and cold fluids, and fluid heat exchangers.
Plate fin.
This type of heat exchanger uses "sandwiched" passages containing fins to increase the effectiveness of the unit. The designs include crossflow and counterflow coupled with various fin configurations such as straight fins, offset fins and wavy fins.
Plate and fin heat exchangers are usually made of aluminum alloys, which provide high heat transfer efficiency. The material enables the system to operate at a lower temperature difference and reduce the weight of the equipment. Plate and fin heat exchangers are mostly used for low temperature services such as natural gas, helium and oxygen liquefaction plants, air separation plants and transport industries such as motor and aircraft engines.
Advantages of plate and fin heat exchangers:
Disadvantages of plate and fin heat exchangers:
Finned tube.
The usage of fins in a tube-based heat exchanger is common when one of the working fluids is a low-pressure gas, and is typical for heat exchangers that operate using ambient air, such as automotive radiators and HVAC air condensers. Fins dramatically increase the surface area with which heat can be exchanged, which improves the efficiency of conducting heat to a fluid with very low thermal conductivity, such as air. The fins are typically made from aluminium or copper since they must conduct heat from the tube along the length of the fins, which are usually very thin.
The main construction types of finned tube exchangers are:
Stacked-fin or spiral-wound construction can be used for the tubes inside shell-and-tube heat exchangers when high efficiency thermal transfer to a gas is required.
In electronics cooling, heat sinks, particularly those using heat pipes, can have a stacked-fin construction.
Pillow plate.
A pillow plate heat exchanger is commonly used in the dairy industry for cooling milk in large direct-expansion stainless steel bulk tanks. Nearly the entire surface area of a tank can be integrated with this heat exchanger, without gaps that would occur between pipes welded to the exterior of the tank. Pillow plates can also be constructed as flat plates that are stacked inside a tank. The relatively flat surface of the plates allows easy cleaning, especially in sterile applications.
The pillow plate can be constructed using either a thin sheet of metal welded to the thicker surface of a tank or vessel, or two thin sheets welded together. The surface of the plate is welded with a regular pattern of dots or a serpentine pattern of weld lines. After welding the enclosed space is pressurised with sufficient force to cause the thin metal to bulge out around the welds, providing a space for heat exchanger liquids to flow, and creating a characteristic appearance of a swelled pillow formed out of metal.
Waste heat recovery units.
A waste heat recovery unit (WHRU) is a heat exchanger that recovers heat from a hot gas stream while transferring it to a working medium, typically water or oils. The hot gas stream can be the exhaust gas from a gas turbine or a diesel engine or a waste gas from industry or refinery.
Large systems with high volume and temperature gas streams, typical in industry, can benefit from steam Rankine cycle (SRC) in a waste heat recovery unit, but these cycles are too expensive for small systems. The recovery of heat from low temperature systems requires different working fluids than steam.
An organic Rankine cycle (ORC) waste heat recovery unit can be more efficient in the low temperature range, using refrigerants that boil at lower temperatures than water. Typical organic refrigerants are ammonia, pentafluoropropane (R-245fa and R-245ca), and toluene.
The refrigerant is boiled by the heat source in the evaporator to produce super-heated vapor. This fluid is expanded in the turbine to convert thermal energy to kinetic energy, that is converted to electricity in the electrical generator. This energy transfer process decreases the temperature of the refrigerant that, in turn, condenses. The cycle is closed and completed using a pump to send the fluid back to the evaporator.
Dynamic scraped surface.
Another type of heat exchanger is called "(dynamic) scraped surface heat exchanger". This is mainly used for heating or cooling with high-viscosity products, crystallization processes, evaporation and high-fouling applications. Long running times are achieved due to the continuous scraping of the surface, thus avoiding fouling and achieving a sustainable heat transfer rate during the process.
Phase-change.
In addition to heating up or cooling down fluids in just a single phase, heat exchangers can be used either to heat a liquid to evaporate (or boil) it or used as condensers to cool a vapor and condense it to a liquid. In chemical plants and refineries, reboilers used to heat incoming feed for distillation towers are often heat exchangers.
Distillation set-ups typically use condensers to condense distillate vapors back into liquid.
Power plants that use steam-driven turbines commonly use heat exchangers to boil water into steam. Heat exchangers or similar units for producing steam from water are often called boilers or steam generators.
In the nuclear power plants called pressurized water reactors, special large heat exchangers pass heat from the primary (reactor plant) system to the secondary (steam plant) system, producing steam from water in the process. These are called steam generators. All fossil-fueled and nuclear power plants using steam-driven turbines have surface condensers to convert the exhaust steam from the turbines into condensate (water) for re-use.
To conserve energy and cooling capacity in chemical and other plants, regenerative heat exchangers can transfer heat from a stream that must be cooled to another stream that must be heated, such as distillate cooling and reboiler feed pre-heating.
This term can also refer to heat exchangers that contain a material within their structure that has a change of phase. This is usually a solid to liquid phase due to the small volume difference between these states. This change of phase effectively acts as a buffer because it occurs at a constant temperature but still allows for the heat exchanger to accept additional heat. One example where this has been investigated is for use in high power aircraft electronics.
Heat exchangers functioning in multiphase flow regimes may be subject to the Ledinegg instability.
Direct contact.
Direct contact heat exchangers involve heat transfer between hot and cold streams of two phases in the absence of a separating wall. Thus such heat exchangers can be classified as:
Most direct contact heat exchangers fall under the Gas – Liquid category, where heat is transferred between a gas and liquid in the form of drops, films or sprays.
Such types of heat exchangers are used predominantly in air conditioning, humidification, industrial hot water heating, water cooling and condensing plants.
Microchannel.
Microchannel heat exchangers are multi-pass parallel flow heat exchangers consisting of three main elements: manifolds (inlet and outlet), multi-port tubes with hydraulic diameters smaller than 1 mm, and fins. All the elements are usually brazed together using a controlled-atmosphere brazing process. Microchannel heat exchangers are characterized by high heat transfer rates, low refrigerant charges, compact size, and lower airside pressure drops compared to finned tube heat exchangers. Microchannel heat exchangers are widely used in the automotive industry as car radiators, and as condensers, evaporators, and cooling/heating coils in the HVAC industry.
Micro heat exchangers, Micro-scale heat exchangers, or microstructured heat exchangers are heat exchangers in which (at least one) fluid flows in lateral confinements with typical dimensions below 1 mm. The most typical such confinement are microchannels, which are channels with a hydraulic diameter below 1 mm. Microchannel heat exchangers can be made from metal or ceramics. Microchannel heat exchangers can be used for many applications including:
HVAC and refrigeration air coils.
One of the widest uses of heat exchangers is for refrigeration and air conditioning. This class of heat exchangers is commonly called "air coils", or just "coils" due to their often-serpentine internal tubing, or condensers in the case of refrigeration, and are typically of the finned tube type. Liquid-to-air, or air-to-liquid HVAC coils are typically of modified crossflow arrangement. In vehicles, heat coils are often called heater cores.
On the liquid side of these heat exchangers, the common fluids are water, a water-glycol solution, steam, or a refrigerant. For "heating coils", hot water and steam are the most common, and this heated fluid is supplied by boilers, for example. For "cooling coils", chilled water and refrigerant are most common. Chilled water is supplied from a chiller that is potentially located very far away, but refrigerant must come from a nearby condensing unit. When a refrigerant is used, the cooling coil is the evaporator, and the heating coil is the condenser in the vapor-compression refrigeration cycle. HVAC coils that use this direct-expansion of refrigerants are commonly called "DX coils". Some "DX coils" are "microchannel" type.
On the air side of HVAC coils a significant difference exists between those used for heating, and those for cooling. Due to psychrometrics, air that is cooled often has moisture condensing out of it, except with extremely dry air flows. Heating some air increases that airflow's capacity to hold water. So heating coils need not consider moisture condensation on their air-side, but cooling coils "must" be adequately designed and selected to handle their particular "latent" (moisture) as well as the "sensible" (cooling) loads. The water that is removed is called "condensate".
For many climates, water or steam HVAC coils can be exposed to freezing conditions. Because water expands upon freezing, these somewhat expensive and difficult to replace thin-walled heat exchangers can easily be damaged or destroyed by just one freeze. As such, freeze protection of coils is a major concern of HVAC designers, installers, and operators.
The introduction of indentations placed within the heat exchange fins controlled condensation, allowing water molecules to remain in the cooled air.
The heat exchangers in direct-combustion furnaces, typical in many residences, are not 'coils'. They are, instead, gas-to-air heat exchangers that are typically made of stamped steel sheet metal. The combustion products pass on one side of these heat exchangers, and air to heat on the other. A "cracked heat exchanger" is therefore a dangerous situation that requires immediate attention because combustion products may enter living space.
Helical-coil.
Although double-pipe heat exchangers are the simplest to design, the better choice in the following cases would be the helical-coil heat exchanger (HCHE):
These have been used in the nuclear industry as a method for exchanging heat in a sodium system for large liquid metal fast breeder reactors since the early 1970s, using an HCHE device invented by Charles E. Boardman and John H. Germer. There are several simple methods for designing HCHE for all types of manufacturing industries, such as using the Ramachandra K. Patil (et al.) method from India and the Scott S. Haraburda method from the United States.
However, these are based upon assumptions of estimating inside heat transfer coefficient, predicting flow around the outside of the coil, and upon constant heat flux.
Spiral.
A modification to the perpendicular flow of the typical HCHE involves the replacement of the shell with another coiled tube, allowing the two fluids to flow parallel to one another; this requires different design calculations. These are the spiral heat exchangers (SHE), which may refer to a helical (coiled) tube configuration; more generally, the term refers to a pair of flat surfaces that are coiled to form the two channels in a counter-flow arrangement. Each of the two channels has one long curved path. A pair of fluid ports are connected tangentially to the outer arms of the spiral; axial ports are common, but optional.
The main advantage of the SHE is its highly efficient use of space. This attribute is often leveraged and partially reallocated to gain other improvements in performance, according to well known tradeoffs in heat exchanger design. (A notable tradeoff is capital cost vs operating cost.) A compact SHE may be used to have a smaller footprint and thus lower all-around capital costs, or an oversized SHE may be used to have less pressure drop, less pumping energy, higher thermal efficiency, and lower energy costs.
Construction.
The distance between the sheets in the spiral channels is maintained by using spacer studs that were welded prior to rolling. Once the main spiral pack has been rolled, alternate top and bottom edges are welded and each end closed by a gasketed flat or conical cover bolted to the body. This ensures no mixing of the two fluids occurs. Any leakage is from the periphery cover to the atmosphere, or to a passage that contains the same fluid.
Self cleaning.
Spiral heat exchangers are often used in the heating of fluids that contain solids and thus tend to foul the inside of the heat exchanger. The low pressure drop lets the SHE handle fouling more easily. The SHE uses a “self cleaning” mechanism, whereby fouled surfaces cause a localized increase in fluid velocity, thus increasing the drag (or fluid friction) on the fouled surface, thus helping to dislodge the blockage and keep the heat exchanger clean. "The internal walls that make up the heat transfer surface are often rather thick, which makes the SHE very robust, and able to last a long time in demanding environments."
They are also easily cleaned, opening out like an oven where any buildup of foulant can be removed by pressure washing.
Self-cleaning water filters are used to keep the system clean and running without the need to shut down or replace cartridges and bags.
Flow arrangements.
There are three main types of flows in a spiral heat exchanger:
Applications.
The spiral heat exchanger is good for applications such as pasteurization, digester heating, heat recovery, pre-heating (see: recuperator), and effluent cooling. For sludge treatment, SHEs are generally smaller than other types of heat exchangers.
Selection.
Due to the many variables involved, selecting optimal heat exchangers is challenging. Hand calculations are possible, but many iterations are typically needed. As such, heat exchangers are most often selected via computer programs, either by system designers, who are typically engineers, or by equipment vendors.
To select an appropriate heat exchanger, the system designers (or equipment vendors) would firstly consider the design limitations for each heat exchanger type.
Though cost is often the primary criterion, several other selection criteria are important:
Small-diameter coil technologies are becoming more popular in modern air conditioning and refrigeration systems because they have better rates of heat transfer than conventionally sized condenser and evaporator coils with round copper tubes and aluminum or copper fins, which have been the standard in the HVAC industry. Small-diameter coils can withstand the higher pressures required by the new generation of environmentally friendlier refrigerants. Two small-diameter coil technologies are currently available for air conditioning and refrigeration products: copper microgroove and brazed aluminum microchannel.
Choosing the right heat exchanger (HX) requires some knowledge of the different heat exchanger types, as well as the environment where the unit must operate. Typically in the manufacturing industry, several differing types of heat exchangers are used for just one process or system to derive the final product. For example, a kettle HX for pre-heating, a double pipe HX for the 'carrier' fluid and a plate and frame HX for final cooling. With sufficient knowledge of heat exchanger types and operating requirements, an appropriate selection can be made to optimise the process.
Monitoring and maintenance.
Online monitoring of commercial heat exchangers is done by tracking the overall heat transfer coefficient. The overall heat transfer coefficient tends to decline over time due to fouling.
By periodically calculating the overall heat transfer coefficient from exchanger flow rates and temperatures, the owner of the heat exchanger can estimate when cleaning the heat exchanger is economically attractive.
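This periodic check can be sketched numerically. The example below (illustrative flow rates, temperatures, and area, assuming a counter-flow arrangement; not data from any particular exchanger) backs the overall coefficient out of terminal measurements via the log-mean temperature difference:

```python
import math

def overall_u(m_hot, cp_hot, t_hot_in, t_hot_out,
              t_cold_in, t_cold_out, area):
    """Back out the overall heat transfer coefficient U [W/(m^2*K)]
    of a counter-flow exchanger from terminal measurements."""
    duty = m_hot * cp_hot * (t_hot_in - t_hot_out)  # hot-side heat rate [W]
    dt_a = t_hot_in - t_cold_out   # temperature difference at one end
    dt_b = t_hot_out - t_cold_in   # temperature difference at the other end
    lmtd = (dt_a - dt_b) / math.log(dt_a / dt_b) if dt_a != dt_b else dt_a
    return duty / (area * lmtd)

# Two illustrative readings taken weeks apart: a falling U signals fouling.
u_clean = overall_u(2.0, 4180.0, 90.0, 60.0, 20.0, 45.0, area=12.0)
u_now   = overall_u(2.0, 4180.0, 90.0, 65.0, 20.0, 41.0, area=12.0)
print(f"U clean: {u_clean:.0f} W/(m2 K), U now: {u_now:.0f} W/(m2 K)")
```

Once the cleaning cost and the energy penalty of a given U are known, the owner can compare the two readings and decide when cleaning pays for itself.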
Integrity inspection of plate and tubular heat exchanger can be tested in situ by the conductivity or helium gas methods. These methods confirm the integrity of the plates or tubes to prevent any cross contamination and the condition of the gaskets.
Mechanical integrity monitoring of heat exchanger tubes may be conducted through Nondestructive methods such as eddy current testing.
Fouling.
Fouling occurs when impurities deposit on the heat exchange surface.
Deposition of these impurities can decrease heat transfer effectiveness significantly over time and is caused by:
The rate of heat exchanger fouling is determined by the rate of particle deposition less re-entrainment/suppression. This model was originally proposed in 1959 by Kern and Seaton.
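When the re-entrainment term of the Kern and Seaton model is taken proportional to the deposit already present, the deposition-minus-removal balance integrates to an asymptotic fouling resistance. A minimal sketch with assumed parameter values (the asymptotic resistance and time constant below are illustrative):

```python
import math

def kern_seaton_resistance(t, r_asymptotic, time_constant):
    """Kern-Seaton fouling resistance R_f(t) = R* (1 - exp(-t/theta)):
    constant deposition, re-entrainment proportional to the existing
    deposit, so R_f rises and then levels off at R*."""
    return r_asymptotic * (1.0 - math.exp(-t / time_constant))

# Assumed parameters: asymptotic resistance 4e-4 m^2.K/W, time constant 30 days.
for day in (0, 30, 90, 300):
    print(day, kern_seaton_resistance(day, 4e-4, 30.0))
```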
Crude Oil Exchanger Fouling. In commercial crude oil refining, crude oil is heated before entering the distillation column. A series of shell and tube heat exchangers typically exchange heat between crude oil and other oil streams, preheating the crude before its final heating in a furnace. Fouling occurs on the crude side of these exchangers due to asphaltene insolubility. The nature of asphaltene solubility in crude oil was successfully modeled by Wiehe and Kennedy. The precipitation of insoluble asphaltenes in crude preheat trains has been successfully modeled as a first-order reaction by Ebert and Panchal, who expanded on the work of Kern and Seaton.
Cooling Water Fouling.
Cooling water systems are susceptible to fouling. Cooling water typically has a high total dissolved solids content and suspended colloidal solids. Localized precipitation of dissolved solids occurs at the heat exchange surface because wall temperatures are higher than the bulk fluid temperature. Low fluid velocities (less than 3 ft/s) allow suspended solids to settle on the heat exchange surface. Cooling water is typically placed on the tube side of a shell and tube exchanger because the tube side is easier to clean. To prevent fouling, designers typically ensure that cooling water velocity is greater than 0.9 m/s and that the bulk fluid temperature is kept low. Other approaches to fouling control combine the "blind" application of biocides and anti-scale chemicals with periodic lab testing.
Maintenance.
Plate and frame heat exchangers can be disassembled and cleaned periodically. Tubular heat exchangers can be cleaned by such methods as acid cleaning, sandblasting, high-pressure water jet, bullet cleaning, or drill rods.
In large-scale cooling water systems for heat exchangers, water treatment such as purification, addition of chemicals, and testing, is used to minimize fouling of the heat exchange equipment. Other water treatment is also used in steam systems for power plants, etc. to minimize fouling and corrosion of the heat exchange and other equipment.
A variety of companies have started using water borne oscillations technology to prevent biofouling. Without the use of chemicals, this type of technology has helped in providing a low-pressure drop in heat exchangers.
Design and manufacturing regulations.
The design and manufacturing of heat exchangers has numerous regulations, which vary according to the region in which they will be used.
Design and manufacturing codes include: ASME Boiler and Pressure Vessel Code (US); PD 5500 (UK); BS 1566 (UK); EN 13445 (EU); CODAP (French); Pressure Equipment Safety Regulations 2016 (PER) (UK); Pressure Equipment Directive (EU); NORSOK (Norwegian); TEMA; API 12; and API 560.
In nature.
Humans.
The human nasal passages serve as a heat exchanger, with cool air being inhaled and warm air being exhaled. Its effectiveness can be demonstrated by putting the hand in front of the face and exhaling, first through the nose and then through the mouth. Air exhaled through the nose is substantially cooler. This effect can be enhanced with clothing, by, for example, wearing a scarf over the face while breathing in cold weather.
In species that have external testes (such as humans), the artery to the testis is surrounded by a mesh of veins called the pampiniform plexus. This cools the blood heading to the testes, while reheating the returning blood.
Birds, fish, marine mammals.
"Countercurrent" heat exchangers occur naturally in the circulatory systems of fish, whales and other marine mammals. Arteries to the skin carrying warm blood are intertwined with veins from the skin carrying cold blood, causing the warm arterial blood to exchange heat with the cold venous blood. This reduces the overall heat loss in cold water. Heat exchangers are also present in the tongues of baleen whales as large volumes of water flow through their mouths. Wading birds use a similar system to limit heat losses from their body through their legs into the water.
Carotid rete.
The carotid rete is a counter-current heat exchanging organ in some ungulates. The blood ascending the carotid arteries on its way to the brain flows through a network of vessels where heat is discharged to veins of cooler blood descending from the nasal passages. The carotid rete allows Thomson's gazelle to maintain its brain almost 3 °C (5.4 °F) cooler than the rest of the body, and therefore aids in tolerating bursts of metabolic heat production, such as those associated with outrunning cheetahs (during which the body temperature exceeds the maximum temperature at which the brain could function). Humans, along with other primates, lack a carotid rete.
In industry.
Heat exchangers are widely used in industry both for cooling and heating large scale industrial processes. The type and size of heat exchanger used can be tailored to suit a process depending on the type of fluid, its phase, temperature, density, viscosity, pressures, chemical composition and various other thermodynamic properties.
In many industrial processes energy is wasted or a heat stream is exhausted; heat exchangers can recover this heat and put it to use by heating a different stream in the process. This practice saves a lot of money in industry, as the heat supplied to other streams from the heat exchangers would otherwise come from an external source that is more expensive and more harmful to the environment.
Heat exchangers are used in many industries, including:
In waste water treatment, heat exchangers play a vital role in maintaining optimal temperatures within anaerobic digesters to promote the growth of microbes that remove pollutants. Common types of heat exchangers used in this application are the double pipe heat exchanger as well as the plate and frame heat exchanger.
In aircraft.
In commercial aircraft heat exchangers are used to take heat from the engine's oil system to heat cold fuel. This improves fuel efficiency, as well as reduces the possibility of water entrapped in the fuel freezing in components.
Current market and forecast.
Estimated at US$17.5 billion in 2021, global demand for heat exchangers is expected to grow robustly at about 5% annually over the coming years. The market value is expected to reach US$27 billion by 2030. With an expanding desire for environmentally friendly options and increased development of offices, retail sectors, and public buildings, the market is set to grow.
A model of a simple heat exchanger.
A simple heat exchanger might be thought of as two straight pipes with fluid flow, which are thermally connected. Let the pipes be of equal length "L", carrying fluids with heat capacity formula_0 (energy per unit mass per unit change in temperature) and let the mass flow rate of the fluids through the pipes, both in the same direction, be formula_1 (mass per unit time), where the subscript "i" applies to pipe 1 or pipe 2.
Temperature profiles for the pipes are formula_2 and formula_3 where "x" is the distance along the pipe. Assume a steady state, so that the temperature profiles are not functions of time. Assume also that the only transfer of heat from a small volume of fluid in one pipe is to the fluid element in the other pipe at the same position, i.e., there is no transfer of heat along a pipe due to temperature differences in that pipe. By Newton's law of cooling the rate of change in energy of a small volume of fluid is proportional to the difference in temperatures between it and the corresponding element in the other pipe:
formula_4
formula_5
(this is for parallel flow in the same direction with opposite temperature gradients; for counter-flow (countercurrent) heat exchange the sign in front of formula_6 in the second equation is opposite), where formula_7 is the thermal energy per unit length and γ is the thermal connection constant per unit length between the two pipes. This change in internal energy results in a change in the temperature of the fluid element. The time rate of change for the fluid element being carried along by the flow is:
formula_8
formula_9
where formula_10 is the "thermal mass flow rate". The differential equations governing the heat exchanger may now be written as:
formula_11
formula_12
Since the system is in a steady state, there are no partial derivatives of temperature with respect to time, and since there is no heat transfer along the pipe, there are no second derivatives in "x" as is found in the heat equation. These two coupled first-order differential equations may be solved to yield:
formula_13
formula_14
where formula_15, formula_16,
formula_17
and "A" and "B" are two as yet undetermined constants of integration. Let formula_21 and formula_22 be the temperatures at x=0 and let formula_23 and formula_24 be the temperatures at the end of the pipe at x=L. Define the average temperatures in each pipe as:
formula_25
formula_26
Using the solutions above, these temperatures are:
Choosing any two of the temperatures above eliminates the constants of integration, letting us find the other four temperatures. We find the total energy transferred by integrating the expressions for the time rate of change of internal energy per unit length:
formula_27
formula_28
By the conservation of energy, the sum of the two energies is zero. The quantity formula_29 is known as the "Log mean temperature difference", and is a measure of the effectiveness of the heat exchanger in transferring heat energy.
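The closed-form solution above can be checked numerically. The sketch below (illustrative values for the inlet temperatures, thermal mass flow rates and γ) fixes the constants A and B from the inlet temperatures at x = 0 and verifies that the energy lost by one stream equals the energy gained by the other:

```python
import math

def parallel_flow_profiles(t1_in, t2_in, j1, j2, gamma, length, n=50):
    """Evaluate the closed-form co-current solution
    T1 = A - (B k1/k) e^{-kx},  T2 = A + (B k2/k) e^{-kx}
    with A and B fixed by the inlet temperatures at x = 0."""
    k1, k2 = gamma / j1, gamma / j2     # k_i = gamma / J_i
    k = k1 + k2
    b = t2_in - t1_in                   # since T2 - T1 = B e^{-kx}
    a = t1_in + b * k1 / k              # from T1(0) = A - B k1/k
    xs = [length * i / (n - 1) for i in range(n)]
    t1 = [a - b * k1 / k * math.exp(-k * x) for x in xs]
    t2 = [a + b * k2 / k * math.exp(-k * x) for x in xs]
    return xs, t1, t2

# Illustrative values: J1 = 2, J2 = 1, gamma = 0.5 (per unit length).
xs, t1, t2 = parallel_flow_profiles(20.0, 80.0, j1=2.0, j2=1.0,
                                    gamma=0.5, length=10.0)
du1 = 2.0 * (t1[-1] - t1[0])   # J1 (T_1L - T_10)
du2 = 1.0 * (t2[-1] - t2[0])   # J2 (T_2L - T_20)
print(du1 + du2)               # conservation of energy: sum is ~0
```

Both profiles relax exponentially toward the common asymptote A, as the solution predicts for co-current flow.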
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C_i"
},
{
"math_id": 1,
"text": "j_i"
},
{
"math_id": 2,
"text": "T_1(x)"
},
{
"math_id": 3,
"text": "T_2(x)"
},
{
"math_id": 4,
"text": "\\frac{du_1}{dt}=\\gamma(T_2-T_1)"
},
{
"math_id": 5,
"text": "\\frac{du_2}{dt}=\\gamma(T_1-T_2)"
},
{
"math_id": 6,
"text": "\\gamma(T_1-T_2)"
},
{
"math_id": 7,
"text": "u_i(x)"
},
{
"math_id": 8,
"text": "\\frac{du_1}{dt}=J_1 \\frac{dT_1}{dx}"
},
{
"math_id": 9,
"text": "\\frac{du_2}{dt}=J_2 \\frac{dT_2}{dx}"
},
{
"math_id": 10,
"text": "J_i={C_i} {j_i}"
},
{
"math_id": 11,
"text": "J_1\\frac{\\partial T_1}{\\partial x}=\\gamma(T_2-T_1)"
},
{
"math_id": 12,
"text": "J_2\\frac{\\partial T_2}{\\partial x}=\\gamma(T_1-T_2)."
},
{
"math_id": 13,
"text": "T_1=A-\\frac{Bk_1}{k}\\,e^{-kx}"
},
{
"math_id": 14,
"text": "T_2=A+\\frac{Bk_2}{k}\\,e^{-kx}"
},
{
"math_id": 15,
"text": "k_1=\\gamma/J_1"
},
{
"math_id": 16,
"text": "k_2=\\gamma/J_2"
},
{
"math_id": 17,
"text": "k=k_1+k_2"
},
{
"math_id": 18,
"text": "k_2"
},
{
"math_id": 19,
"text": "k_2=k_1"
},
{
"math_id": 20,
"text": "(T_2-T_1)"
},
{
"math_id": 21,
"text": "T_{10}"
},
{
"math_id": 22,
"text": "T_{20}"
},
{
"math_id": 23,
"text": "T_{1L}"
},
{
"math_id": 24,
"text": "T_{2L}"
},
{
"math_id": 25,
"text": "\\overline{T}_1=\\frac{1}{L}\\int_0^LT_1(x)dx"
},
{
"math_id": 26,
"text": "\\overline{T}_2=\\frac{1}{L}\\int_0^LT_2(x)dx."
},
{
"math_id": 27,
"text": "\\frac{dU_1}{dt} = \\int_0^L \\frac{du_1}{dt}\\,dx = J_1(T_{1L}-T_{10})=\\gamma L(\\overline{T}_2-\\overline{T}_1)"
},
{
"math_id": 28,
"text": "\\frac{dU_2}{dt} = \\int_0^L \\frac{du_2}{dt}\\,dx = J_2(T_{2L}-T_{20})=\\gamma L(\\overline{T}_1-\\overline{T}_2)."
},
{
"math_id": 29,
"text": "\\overline{T}_2-\\overline{T}_1"
}
] |
https://en.wikipedia.org/wiki?curid=153221
|
15324368
|
Nephelauxetic effect
|
Term in the chemistry of transition metals
The nephelauxetic effect is a term used in the inorganic chemistry of transition metals. It refers to a decrease in the Racah interelectronic repulsion parameter, given the symbol "B", that occurs when a transition-metal free ion forms a complex with ligands. The name "nephelauxetic" comes from the Greek for "cloud-expanding" and was proposed by the Danish inorganic chemist C. K. Jorgensen. The presence of this effect highlights the disadvantages of crystal field theory, which treats metal-ligand interactions as purely electrostatic, since the nephelauxetic effect reveals the covalent character in the metal-ligand interaction.
Racah parameter.
The decrease in the Racah parameter "B" indicates that in a complex there is less repulsion between the two electrons in a given doubly occupied metal "d"-orbital than there is in the respective Mn+ gaseous metal ion, which in turn implies that the orbital is larger in the complex. This electron-cloud expansion may occur for one (or both) of two reasons. One is that the effective positive charge on the metal has decreased: because the positive charge of the metal is reduced by any negative charge on the ligands, the "d"-orbitals can expand slightly. The other is that overlap with ligand orbitals to form covalent bonds increases orbital size, because the resulting molecular orbital is built from two atomic orbitals.
The reduction of "B" from its free ion value is normally reported in terms of the nephelauxetic parameter "β":
formula_0
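The ratio is straightforward to evaluate. The sketch below uses hypothetical Racah "B" values (illustrative numbers, not measured data for any particular complex) merely to show that a more covalent ligand yields a smaller "β":

```python
def nephelauxetic_beta(b_complex, b_free_ion):
    """beta = B_complex / B_free_ion; beta < 1 signals expansion of the
    d-electron cloud, i.e. partial covalency in the metal-ligand bonds."""
    return b_complex / b_free_ion

# Hypothetical Racah B values in cm^-1 (illustrative only): the more
# covalent ligand gives the smaller beta.
beta_weak   = nephelauxetic_beta(850, 1000)   # weakly covalent ligand
beta_strong = nephelauxetic_beta(500, 1000)   # strongly covalent ligand
assert beta_strong < beta_weak < 1.0
```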
Experimentally, it is observed that the size of the nephelauxetic parameter always follows a certain trend with respect to the nature of the ligands present.
Ligands.
The series below lists some common ligands in order of increasing nephelauxetic effect:
F− < H2O < NH3 < en < [NCS-"N"]− < Cl− < [CN]− < Br− < N3− < I−
Although parts of this series may seem quite similar to the spectrochemical series of ligands - for example, cyanide, ethylenediamine, and fluoride seem to occupy similar positions in the two - others, such as chloride, iodide and bromide, occupy very different positions. The ordering roughly reflects the ability of the ligands to form good covalent bonds with metals: those with a small effect are at the start of the series, whereas those with a large effect are at the end.
Central metal ion.
The nephelauxetic effect does not only depend upon the ligand type, but also upon the central metal ion. These too can be arranged in order of increasing nephelauxetic effect as follows:
Mn(II) < Ni(II) ≈ Co(II) < Mo(II) < Re(IV) < Fe(III) < Ir(III) < Co(III) < Mn(IV)
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\beta = \\frac{B_\\text{complex}}{B_\\text{free ion}}"
}
] |
https://en.wikipedia.org/wiki?curid=15324368
|
15325913
|
Gorn address
|
A Gorn address (Gorn, 1967) is a method of identifying and addressing any node within a tree data structure. This notation is often used for identifying nodes in a parse tree defined by phrase structure rules.
The Gorn address is a sequence of zero or more integers conventionally separated by dots, e.g., "0" or "1.0.1".
The root which Gorn calls * can be regarded as the empty sequence.
And the formula_0-th child of the node with address formula_1 has the address formula_2, counting children from 0.
It is named after American computer scientist Saul Gorn.
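The addressing scheme is simple to implement. A minimal sketch, with nodes represented as (label, children) pairs (the parse tree and its representation are illustrative, not from Gorn's paper):

```python
# A node is a (label, children) pair; leaves have an empty child list.
parse = ("S",
         [("NP", [("Det", []), ("N", [])]),
          ("VP", [("V", []), ("NP", [])])])

def node_at(tree, address):
    """Resolve a Gorn address such as "1.0": the empty address (Gorn's *)
    is the root, and address i.j is the j-th child, counting from 0,
    of the node at address i."""
    node = tree
    if address:
        for index in address.split("."):
            node = node[1][int(index)]   # descend into that child
    return node

print(node_at(parse, "")[0])     # S   (the root)
print(node_at(parse, "1")[0])    # VP
print(node_at(parse, "1.0")[0])  # V
```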
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "j"
},
{
"math_id": 1,
"text": "i"
},
{
"math_id": 2,
"text": "i.j"
}
] |
https://en.wikipedia.org/wiki?curid=15325913
|
1532606
|
Grothendieck–Riemann–Roch theorem
|
In mathematics, specifically in algebraic geometry, the Grothendieck–Riemann–Roch theorem is a far-reaching result on coherent cohomology. It is a generalisation of the Hirzebruch–Riemann–Roch theorem, about complex manifolds, which is itself a generalisation of the classical Riemann–Roch theorem for line bundles on compact Riemann surfaces.
Riemann–Roch type theorems relate Euler characteristics of the cohomology of a vector bundle with their topological degrees, or more generally their characteristic classes in (co)homology or algebraic analogues thereof. The classical Riemann–Roch theorem does this for curves and line bundles, whereas the Hirzebruch–Riemann–Roch theorem generalises this to vector bundles over manifolds. The Grothendieck–Riemann–Roch theorem sets both theorems in a relative situation of a morphism between two manifolds (or more general schemes) and changes the theorem from a statement about a single bundle, to one applying to chain complexes of sheaves.
The theorem has been very influential, not least for the development of the Atiyah–Singer index theorem. Conversely, complex analytic analogues of the Grothendieck–Riemann–Roch theorem can be proved using the index theorem for families. Alexander Grothendieck gave a first proof in a 1957 manuscript, later published. Armand Borel and Jean-Pierre Serre wrote up and published Grothendieck's proof in 1958. Later, Grothendieck and his collaborators simplified and generalized the proof.
Formulation.
Let "X" be a smooth quasi-projective scheme over a field. Under these assumptions, the Grothendieck group formula_0 of bounded complexes of coherent sheaves is canonically isomorphic to the Grothendieck group of bounded complexes of finite-rank vector bundles. Using this isomorphism, consider the Chern character (a rational combination of Chern classes) as a functorial transformation:
formula_1
where formula_2 is the Chow group of cycles on "X" of dimension "d" modulo rational equivalence, tensored with the rational numbers. In case "X" is defined over the complex numbers, the latter group maps to the topological cohomology group:
formula_3
Now consider a proper morphism formula_4 between smooth quasi-projective schemes and a bounded complex of sheaves formula_5 on formula_6
The Grothendieck–Riemann–Roch theorem relates the pushforward map
formula_7
(alternating sum of higher direct images) and the pushforward
formula_8
by the formula
formula_9
Here formula_10 is the Todd genus of (the tangent bundle of) "X". Thus the theorem gives a precise measure for the lack of commutativity of taking the push forwards in the above senses and the Chern character and shows that the needed correction factors depend on "X" and "Y" only. In fact, since the Todd genus is functorial and multiplicative in exact sequences, we can rewrite the Grothendieck–Riemann–Roch formula as
formula_11
where formula_12 is the relative tangent sheaf of "f", defined as the element formula_13 in formula_0. For example, when "f" is a smooth morphism, formula_12 is simply a vector bundle, known as the tangent bundle along the fibers of "f".
Using "A"1-homotopy theory, the Grothendieck–Riemann–Roch theorem has been extended by to the situation where "f" is a proper map between two smooth schemes.
Generalising and specialising.
Generalisations of the theorem can be made to the non-smooth case by considering an appropriate generalisation of the combination formula_14 and to the non-proper case by considering cohomology with compact support.
The arithmetic Riemann–Roch theorem extends the Grothendieck–Riemann–Roch theorem to arithmetic schemes.
The Hirzebruch–Riemann–Roch theorem is (essentially) the special case where "Y" is a point and the field is the field of complex numbers.
A version of Riemann–Roch theorem for oriented cohomology theories was proven by Ivan Panin and Alexander Smirnov. It is concerned with multiplicative operations between algebraic oriented cohomology theories (such as algebraic cobordism). The Grothendieck-Riemann-Roch is a particular case of this result, and the Chern character comes up naturally in this setting.
Examples.
Vector bundles on a curve.
A vector bundle formula_15 of rank formula_16 and degree formula_17 (defined as the degree of its determinant; or equivalently the degree of its first Chern class) on a smooth projective curve over a field formula_18 has a formula similar to Riemann–Roch for line bundles. If we take formula_19 and formula_20 a point, then the Grothendieck–Riemann–Roch formula can be read as
formula_21
hence,
formula_22
This formula also holds for coherent sheaves of rank formula_16 and degree formula_17.
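Read off a curve, the formula above is elementary to evaluate; a small sketch (Python used purely as a calculator):

```python
def euler_characteristic(degree, rank, genus):
    """chi(E) = h^0(E) - h^1(E) = deg(E) + rank(E) * (1 - genus) for a
    vector bundle E on a smooth projective curve of genus g."""
    return degree + rank * (1 - genus)

# Degree-0 line bundle on an elliptic curve (g = 1): chi = 0.
print(euler_characteristic(0, 1, 1))   # 0
# Rank-2, degree-5 bundle on a genus-2 curve: chi = 5 + 2*(1 - 2) = 3.
print(euler_characteristic(5, 2, 2))   # 3
```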
Smooth proper maps.
One of the advantages of the Grothendieck–Riemann–Roch formula is it can be interpreted as a relative version of the Hirzebruch–Riemann–Roch formula. For example, a smooth morphism formula_23 has fibers which are all equi-dimensional (and isomorphic as topological spaces when base changing to formula_24). This fact is useful in moduli-theory when considering a moduli space formula_25 parameterizing smooth proper spaces. For example, David Mumford used this formula to deduce relationships of the Chow ring on the moduli space of algebraic curves.
Moduli of curves.
For the moduli stack of genus formula_26 curves (and no marked points) formula_27 there is a universal curve formula_28, where formula_29 is the moduli stack of curves of genus formula_26 and one marked point. Mumford then defines the tautological classes
formula_30
where formula_31 and formula_32 is the relative dualizing sheaf. Note that the fiber of formula_32 over a point formula_33 is the dualizing sheaf formula_34. Using Grothendieck–Riemann–Roch, he was able to find relations between the formula_35 and formula_36, describing the formula_35 in terms of a sum of formula_36 (corollary 6.2) on the Chow ring formula_37 of the smooth locus. Because formula_27 is a smooth Deligne–Mumford stack, he considered a covering by a scheme formula_38 which presents formula_39 for some finite group formula_40. He applies Grothendieck–Riemann–Roch to formula_41 to get
formula_42
Because
formula_43
this gives the formula
formula_44
The computation of formula_45 can then be reduced even further. In even dimensions formula_46,
formula_47
Also, on dimension 1,
formula_48
where formula_49 is a class on the boundary. In the case formula_50 and on the smooth locus formula_51 there are the relations
formula_52
which can be deduced by analyzing the Chern character of formula_53.
Closed embedding.
Closed embeddings formula_54 have a description using the Grothendieck–Riemann–Roch formula as well, showing another non-trivial case where the formula holds. For a smooth variety formula_55 of dimension formula_16 and a subvariety formula_56 of codimension formula_18, there is the formula
formula_57
Using the short exact sequence
formula_58,
there is the formula
formula_59
for the ideal sheaf since formula_60.
Applications.
Quasi-projectivity of moduli spaces.
Grothendieck–Riemann–Roch can be used in proving that a coarse moduli space formula_61, such as the moduli space of pointed algebraic curves formula_62, admits an embedding into a projective space, hence is a quasi-projective variety. This can be accomplished by looking at canonically associated sheaves on formula_61 and studying the degree of associated line bundles. For instance, formula_62 has the family of curves
formula_63
with sections
formula_64
corresponding to the marked points. Since each fiber has the canonical bundle formula_65, there are the associated line bundles
formula_66
and
formula_67
It turns out that
formula_68
is an ample line bundle (p. 209), hence the coarse moduli space formula_62 is quasi-projective.
History.
Alexander Grothendieck's version of the Riemann–Roch theorem was originally conveyed in a letter to Jean-Pierre Serre around 1956–1957. It was made public at the initial Bonn Arbeitstagung, in 1957. Serre and Armand Borel subsequently organized a seminar at Princeton University to understand it. The final published paper was in effect the Borel–Serre exposition.
The significance of Grothendieck's approach rests on several points. First, Grothendieck changed the statement itself: the theorem was, at the time, understood to be a theorem about a variety, whereas Grothendieck saw it as a theorem about a morphism between varieties. By finding the right generalization, the proof became simpler while the conclusion became more general. In short, Grothendieck applied a strong categorical approach to a hard piece of analysis. Moreover, Grothendieck introduced K-groups, as discussed above, which paved the way for algebraic K-theory.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K_0(X)"
},
{
"math_id": 1,
"text": "\\mathrm{ch} \\colon K_0(X) \\to A(X, \\Q),"
},
{
"math_id": 2,
"text": "A_d(X,\\Q)"
},
{
"math_id": 3,
"text": "H^{2\\dim(X) - 2d}(X, \\Q)."
},
{
"math_id": 4,
"text": "f \\colon X \\to Y"
},
{
"math_id": 5,
"text": "{\\mathcal F^\\bull}"
},
{
"math_id": 6,
"text": "X."
},
{
"math_id": 7,
"text": "f_{!} = \\sum (-1)^i R^i f_* \\colon K_0(X) \\to K_0(Y)"
},
{
"math_id": 8,
"text": "f_* \\colon A(X) \\to A(Y),"
},
{
"math_id": 9,
"text": " \\mathrm{ch} (f_{!}{\\mathcal F}^\\bull) \\mathrm{td}(Y) = f_* (\\mathrm{ch}({\\mathcal F}^\\bull) \\mathrm{td}(X) ). "
},
{
"math_id": 10,
"text": "\\mathrm{td}(X)"
},
{
"math_id": 11,
"text": " \\mathrm{ch}(f_{!}{\\mathcal F}^\\bull) = f_* (\\mathrm{ch}({\\mathcal F}^\\bull) \\mathrm{td}(T_f) ),"
},
{
"math_id": 12,
"text": "T_f"
},
{
"math_id": 13,
"text": "TX - f^*(TY)"
},
{
"math_id": 14,
"text": "\\mathrm{ch}(-)\\mathrm{td}(X)"
},
{
"math_id": 15,
"text": "E \\to C"
},
{
"math_id": 16,
"text": "n"
},
{
"math_id": 17,
"text": "d"
},
{
"math_id": 18,
"text": "k"
},
{
"math_id": 19,
"text": "X = C"
},
{
"math_id": 20,
"text": "Y = \\{*\\}"
},
{
"math_id": 21,
"text": " \\begin{align}\n\\mathrm{ch}(f_{!}E) &= h^0(C,E) - h^1(C,E) \\\\\nf_*(\\mathrm{ch}(E)\\mathrm{td}(X))&= f_*((n + c_1(E))(1 + (1/2)c_1(T_C))) \\\\\n&= f_*(n + c_1(E) + (n/2)c_1(T_C)) \\\\\n&= f_*(c_1(E) + (n/2)c_1(T_C)) \\\\\n&= d + n(1-g);\n\\end{align}"
},
{
"math_id": 22,
"text": "\\chi(C,E) = d + n(1-g)."
},
{
"math_id": 23,
"text": "f\\colon X \\to Y"
},
{
"math_id": 24,
"text": "\\Complex"
},
{
"math_id": 25,
"text": "\\mathcal{M}"
},
{
"math_id": 26,
"text": "g"
},
{
"math_id": 27,
"text": "\\overline{\\mathcal{M}}_g"
},
{
"math_id": 28,
"text": "\\pi\\colon\\overline{\\mathcal{C}}_g \\to \\overline{\\mathcal{M}}_g"
},
{
"math_id": 29,
"text": "\\overline{\\mathcal{C}}_g = \\overline{\\mathcal{M}}_{g,1}"
},
{
"math_id": 30,
"text": "\\begin{align}\nK_{\\overline{\\mathcal{C}}_g/\\overline{\\mathcal{M}}_g} &= c_1(\\omega_{\\overline{\\mathcal{C}}_g/\\overline{\\mathcal{M}}_g})\\\\\n\\kappa_l &= \\pi_*(K^{l+1}_{\\overline{\\mathcal{C}}_g/\\overline{\\mathcal{M}}_g}) \\\\\n\\mathbb{E} &= \\pi_*(\\omega_{\\overline{\\mathcal{C}}_g/\\overline{\\mathcal{M}}_g}) \\\\\n\\lambda_l &= c_l(\\mathbb{E})\n\\end{align}"
},
{
"math_id": 31,
"text": "1 \\leq l \\leq g"
},
{
"math_id": 32,
"text": "\\omega_{\\overline{\\mathcal{C}}_g/\\overline{\\mathcal{M}}_g}"
},
{
"math_id": 33,
"text": "[C] \\in \\overline{\\mathcal{M}}_g"
},
{
"math_id": 34,
"text": "\\omega_C"
},
{
"math_id": 35,
"text": "\\lambda_i"
},
{
"math_id": 36,
"text": "\\kappa_i"
},
{
"math_id": 37,
"text": "A^*(\\mathcal{M}_g)"
},
{
"math_id": 38,
"text": "\\tilde{\\mathcal{M}}_g \\to \\overline{\\mathcal{M}}_g"
},
{
"math_id": 39,
"text": "\\overline{\\mathcal{M}}_g = [\\tilde{\\mathcal{M}}_g/G]"
},
{
"math_id": 40,
"text": "G"
},
{
"math_id": 41,
"text": "\\omega_{\\tilde{\\mathcal{C}}_g/\\tilde{\\mathcal{M}}_g}"
},
{
"math_id": 42,
"text": "\\mathrm{ch}(\\pi_!(\\omega_{\\tilde{\\mathcal{C}}/\\tilde{\\mathcal{M}}})) = \\pi_*(\\mathrm{ch}(\\omega_{\\tilde{\\mathcal{C}}/ \\tilde{\\mathcal{M}}}) \\mathrm{Td}^\\vee(\\Omega^1_{\\tilde{\\mathcal{C}}/\\tilde{\\mathcal{M}}}))"
},
{
"math_id": 43,
"text": "\\mathbf{R}^1\\pi_!({\\omega _{{\\tilde {\\mathcal {C}}}_{g}/{\\tilde {\\mathcal {M}}}_{g}}}) \\cong \\mathcal{O}_{\\tilde{M}},"
},
{
"math_id": 44,
"text": "\\mathrm{ch}(\\mathbb{E}) = 1 + \\pi_*(\\text{ch}(\\omega_{\\tilde{\\mathcal{C}}/\\tilde{\\mathcal{M}}}) \\text{Td}^\\vee (\\Omega^1_{\\tilde{\\mathcal{C}}/\\tilde{\\mathcal{M}}}))."
},
{
"math_id": 45,
"text": "\\mathrm{ch}(\\mathbb{E})"
},
{
"math_id": 46,
"text": "2k"
},
{
"math_id": 47,
"text": "\\text{ch}(\\mathbb{E})_{2k} = 0."
},
{
"math_id": 48,
"text": "\\lambda_1 = c_1(\\mathbb{E}) = \\frac{1}{12}(\\kappa_1 + \\delta),"
},
{
"math_id": 49,
"text": "\\delta"
},
{
"math_id": 50,
"text": "g=2"
},
{
"math_id": 51,
"text": "\\mathcal{M}_g"
},
{
"math_id": 52,
"text": "\\begin{align}\n\\lambda_1 &= \\frac{1}{12}\\kappa_1 \\\\\n\\lambda_2 &= \\frac{\\lambda_1^2}{2} = \\frac{\\kappa_1^2}{288}\n\\end{align}"
},
{
"math_id": 53,
"text": "\\mathbb{E}"
},
{
"math_id": 54,
"text": "f\\colon Y \\to X"
},
{
"math_id": 55,
"text": "X"
},
{
"math_id": 56,
"text": "Y"
},
{
"math_id": 57,
"text": "c_k(\\mathcal{O}_Y) = (-1)^{k-1}(k-1)![Y]"
},
{
"math_id": 58,
"text": "0 \\to \\mathcal{I}_Y \\to \\mathcal{O}_X \\to \\mathcal{O}_Y \\to 0"
},
{
"math_id": 59,
"text": "c_k(\\mathcal{I}_Y) = (-1)^k(k-1)![Y]"
},
{
"math_id": 60,
"text": "1 = c(\\mathcal{O}_X) = c(\\mathcal{O}_Y)c(\\mathcal{I}_Y)"
},
{
"math_id": 61,
"text": "M"
},
{
"math_id": 62,
"text": "M_{g,n}"
},
{
"math_id": 63,
"text": "\\pi\\colon C_{g,n} \\to M_{g,n}"
},
{
"math_id": 64,
"text": "s_i\\colon M_{g,n} \\to C_{g,n}"
},
{
"math_id": 65,
"text": "\\omega_{C}"
},
{
"math_id": 66,
"text": "\\Lambda_{g,n}(\\pi) = \\det(\\mathbf{R}\\pi_*(\\omega_{C_{g,n}/M_{g,n}}))"
},
{
"math_id": 67,
"text": "\\chi_{g,n}^{(i)} = s_i^*(\\omega_{C_{g,n}/M_{g,n}}) ."
},
{
"math_id": 68,
"text": "\\Lambda_{g,n}(\\pi) \\otimes \\left(\\bigotimes_{i=1}^n \\chi_{g,n}^{(i)}\\right)"
}
] |
https://en.wikipedia.org/wiki?curid=1532606
|
1532648
|
Supplee's paradox
|
In relativistic physics, Supplee's paradox (also called the submarine paradox) is a physical paradox that arises when considering the buoyant force exerted on a relativistic bullet (or on a submarine) immersed in a fluid subject to an ambient gravitational field. If a bullet has neutral buoyancy when it is at rest in a perfect fluid and is then launched with a relativistic speed, observers at rest within the fluid would conclude that the bullet should sink, since its density will increase due to the length contraction effect. On the other hand, in the bullet's proper frame it is the moving fluid that becomes denser and hence the bullet would float. But the bullet cannot sink in one frame and float in another, so an apparent paradox arises.
The paradox was first formulated by James M. Supplee (1989), where a non-rigorous explanation was presented. George Matsas has analysed this paradox in the scope of general relativity and also pointed out that these relativistic buoyancy effects could be important in some questions regarding the thermodynamics of black holes. A comprehensive explanation of Supplee's paradox through both the special and the general theory of relativity was presented by Ricardo Soares Vieira.
Hrvoje Nikolic noticed that rigidity of the submarine is not essential and presented a general relativistic analysis revealing that the paradox is resolved by the fact that the relevant velocity of the submarine is that relative to Earth (which is the source of the gravitational field), not relative to the observer.
Buoyancy.
To simplify the analysis, it is customary to neglect drag and viscosity, and even to assume that the fluid has constant density.
A small object immersed in a container of fluid subjected to a uniform gravitational field will be subject to a net downward gravitational force, compared with the net downward gravitational force on an equal volume of the fluid. If the object is "less dense" than the fluid, the difference between these two vectors is an upward pointing vector, the buoyant force, and the object will rise. If things are the other way around, it will sink. If the object and the fluid have equal density, the object is said to have "neutral buoyancy" and it will neither rise nor sink.
Resolution.
The resolution comes down to observing that the usual Archimedes principle cannot be applied in the relativistic case. If the theory of relativity is correctly employed to analyse the forces involved, there will be no true paradox.
Supplee himself concluded that the paradox can be resolved with a more careful analysis of the gravitational buoyancy forces acting on the bullet. Considering the reasonable (but not justified) assumption that the gravitational force depends on the kinetic energy content of the bodies, Supplee showed that the bullet "sinks" in the frame at rest with the fluid with the acceleration formula_0, where formula_1 is the gravitational acceleration and formula_2 is the Lorentz factor. In the proper reference frame of the bullet, the same result is obtained by noting that this frame is not inertial, which implies that the shape of the container will no longer be flat; on the contrary, the sea floor becomes curved upwards, so that the bullet moves away from the sea surface, "i.e.", the bullet relatively sinks.
The non-justified assumption considered by Supplee that the gravitational force on the bullet should depend on its energy content was eliminated by George Matsas, who used the full mathematical methods of general relativity in order to explain the Supplee paradox and agreed with Supplee's results. In particular, he modelled the situation using a Rindler chart, where a submarine is accelerated from rest to a given velocity "v". Matsas concluded that the paradox can be resolved by noting that in the frame of the fluid, the shape of the bullet is altered, and derived the same result which had been obtained by Supplee. Matsas has applied a similar analysis to shed light on certain questions involving the thermodynamics of black holes.
Finally, Vieira has recently analysed the submarine paradox through both special and general relativity. In the first case, he showed that gravitomagnetic effects should be taken into account in order to describe the forces acting in a moving submarine underwater. When these effects are considered, a "relativistic Archimedes principle" can be formulated, from which he showed that the submarine must sink in both frames. Vieira also considered the case of a curved space-time in the proximity of the Earth. In this case he assumed that the space-time can be approximately regarded as consisting of a flat space but a curved time. He showed that in this case the gravitational force between the Earth at rest and a moving body increases with the speed of the body in the same way as considered by Supplee (formula_3), providing in this way a justification for his assumption. Analysing the paradox again with this "speed-dependent gravitational force", the Supplee paradox is explained and the results agree with those obtained by Supplee and Matsas.
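For a rough sense of scale, the sinking acceleration formula_0 obtained by Supplee can be evaluated numerically. The sketch below is our own illustration (function names and the value of formula_1 are assumptions, not from the sources above):

```python
import math

def lorentz_factor(beta):
    """gamma = 1 / sqrt(1 - v^2/c^2), with beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

g = 9.81  # m/s^2; assumed ambient gravitational acceleration

def sinking_acceleration(beta):
    """Supplee's result g * (gamma^2 - 1) in the frame of the fluid."""
    gamma = lorentz_factor(beta)
    return g * (gamma * gamma - 1.0)

# At everyday speeds the effect is utterly negligible...
assert sinking_acceleration(0.01) < 0.001 * g
# ...but it grows without bound as v approaches c.
assert sinking_acceleration(0.9) > 3 * g
```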
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "g (\\gamma^2-1)"
},
{
"math_id": 1,
"text": "g"
},
{
"math_id": 2,
"text": "\\gamma"
},
{
"math_id": 3,
"text": "F =\\gamma m g"
}
] |
https://en.wikipedia.org/wiki?curid=1532648
|
15329594
|
Irwin–Hall distribution
|
Probability distribution
In probability and statistics, the Irwin–Hall distribution, named after Joseph Oscar Irwin and Philip Hall, is a probability distribution for a random variable defined as the sum of a number of independent random variables, each having a uniform distribution. For this reason it is also known as the uniform sum distribution.
The generation of pseudo-random numbers having an approximately normal distribution is sometimes accomplished by computing the sum of a number of pseudo-random numbers having a uniform distribution; usually for the sake of simplicity of programming. Rescaling the Irwin–Hall distribution provides the exact distribution of the random variates being generated.
This distribution is sometimes confused with the Bates distribution, which is the mean (not sum) of "n" independent random variables uniformly distributed from 0 to 1.
Definition.
The Irwin–Hall distribution is the continuous probability distribution for the sum of "n" independent and identically distributed "U"(0, 1) random variables:
formula_0
The probability density function (pdf) for formula_1 is given by
formula_2
where formula_3 denotes the positive part of the expression:
formula_4
Thus the pdf is a spline (piecewise polynomial function) of degree "n" − 1 over the knots 0, 1, ..., "n". In fact, for "x" between the knots located at "k" and "k" + 1, the pdf is equal to
formula_5
where the coefficients "a""j"("k","n") may be found from a recurrence relation over "k"
formula_6
The coefficients also appear as sequence A188816 in the OEIS; the coefficients for the cumulative distribution function are given by A188668.
The mean and variance are "n"/2 and "n"/12, respectively. For "n" = 1, 2, 3, 4 and 5, the probability density functions are:
formula_7
formula_8
formula_9
formula_10
formula_11
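The general pdf formula can be evaluated directly from the alternating sum with the positive part. The following sketch is our own illustration (the function name is not standard), checked against the "n" = 2 triangular case above:

```python
from math import comb, factorial

def irwin_hall_pdf(x, n):
    """pdf of the sum of n independent U(0,1) variables:
    f(x; n) = (1/(n-1)!) * sum_k (-1)^k C(n, k) (x - k)_+^(n-1)."""
    total = 0.0
    for k in range(n + 1):
        total += (-1) ** k * comb(n, k) * max(x - k, 0.0) ** (n - 1)
    return total / factorial(n - 1)

# n = 2 is the triangular distribution on [0, 2], peaking at x = 1.
assert abs(irwin_hall_pdf(1.0, 2) - 1.0) < 1e-12
assert abs(irwin_hall_pdf(0.5, 2) - 0.5) < 1e-12
assert irwin_hall_pdf(2.5, 2) == 0.0  # outside the support [0, n]
```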
Approximating a Normal distribution.
By the Central Limit Theorem, as "n" increases, the Irwin–Hall distribution approximates a Normal distribution with mean formula_12 and variance formula_13 increasingly well. To approximate the standard Normal distribution formula_14, the Irwin–Hall distribution can be centered by shifting it by its mean of "n"/2, and scaling the result by the square root of its variance:
formula_15
This derivation leads to a computationally simple heuristic that removes the square root, whereby a standard Normal distribution can be approximated with the sum of 12 uniform "U(0,1)" draws like so:
formula_16
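The heuristic is straightforward to implement; the sketch below (an illustration of ours) draws approximate standard normal variates this way and empirically checks the first two moments:

```python
import random

def approx_standard_normal(rng=random.random):
    """Approximate draw from N(0, 1): sum of 12 U(0,1) draws, minus 6.
    The Irwin-Hall sum with n = 12 has mean 6 and variance 12/12 = 1,
    so no square-root rescaling is needed."""
    return sum(rng() for _ in range(12)) - 6.0

random.seed(0)
samples = [approx_standard_normal() for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
assert abs(mean) < 0.05          # sample mean should be near 0
assert abs(var - 1.0) < 0.05     # sample variance should be near 1
```

Note that the approximation has bounded support [−6, 6], so it badly underestimates tail probabilities compared to a true normal distribution.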
Similar and related distributions.
The Irwin–Hall distribution is similar to the Bates distribution, but still features only an integer parameter. An extension to real-valued parameters is possible by additionally adding a uniform random variable whose width is "N" − trunc("N").
Extensions to the Irwin–Hall distribution.
When using the Irwin–Hall distribution for data-fitting purposes, one problem is that it is not very flexible, because the parameter "n" needs to be an integer. However, instead of summing "n" equal uniform distributions, we could also add, e.g., "U" + 0.5"U" to cover the case "n" = 1.5 (giving a trapezoidal distribution).
The Irwin–Hall distribution has an application to beamforming and pattern synthesis, shown in Figure 1 of the cited reference.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\nX = \\sum_{k=1}^n U_k.\n"
},
{
"math_id": 1,
"text": "0\\leq x\\leq n"
},
{
"math_id": 2,
"text": "\nf_X(x;n)=\\frac{1}{(n-1)!}\\sum_{k=0}^n (-1)^k{n \\choose k} (x-k)_+^{n-1}\n"
},
{
"math_id": 3,
"text": "(x-k)_+"
},
{
"math_id": 4,
"text": " (x-k)_+ = \\begin{cases} \nx-k & x-k \\geq 0 \\\\\n0 & x-k < 0.\\end{cases}\n"
},
{
"math_id": 5,
"text": "\nf_X(x;n) = \\frac{1}{(n-1)!}\\sum_{j=0}^{n-1} a_j(k,n) x^j\n"
},
{
"math_id": 6,
"text": "\na_j(k,n)=\\begin{cases} 1&k=0, j=n-1\\\\\n 0&k=0, j<n-1\\\\\na_j(k-1,n) + (-1)^{n+k-j-1}{n\\choose\n k}{{n-1}\\choose j}k^{n-j-1} &k>0\\end{cases}\n"
},
{
"math_id": 7,
"text": "\nf_X(x)= \\begin{cases}\n1 & 0\\le x \\le 1 \\\\\n0 & \\text{otherwise}\n\\end{cases}\n"
},
{
"math_id": 8,
"text": "\nf_X(x)= \\begin{cases}\nx & 0\\le x \\le 1\\\\\n2-x & 1\\le x \\le 2\n\\end{cases}\n"
},
{
"math_id": 9,
"text": "\nf_X(x)= \\begin{cases}\n\\frac{1}{2}x^2 & 0\\le x \\le 1\\\\\n\\frac{1}{2}(-2x^2 + 6x - 3)& 1\\le x \\le 2\\\\\n\\frac{1}{2}(3 - x)^2 & 2\\le x \\le 3\n\\end{cases}\n"
},
{
"math_id": 10,
"text": "\nf_X(x)= \\begin{cases}\n\\frac{1}{6}x^3 & 0\\le x \\le 1\\\\\n\\frac{1}{6}(-3x^3 + 12x^2 - 12x+4)& 1\\le x \\le 2\\\\\n\\frac{1}{6}(3x^3 - 24x^2 +60x-44) & 2\\le x \\le 3\\\\\n\\frac{1}{6}(4 - x)^3 & 3\\le x \\le 4\n\\end{cases}\n"
},
{
"math_id": 11,
"text": "\nf_X(x)= \\begin{cases}\n\\frac{1}{24}x^4 & 0\\le x \\le 1\\\\\n\\frac{1}{24}(-4x^4 + 20x^3 - 30x^2+20x-5)& 1\\le x \\le 2\\\\\n\\frac{1}{24}(6x^4-60x^3+210x^2-300x+155) & 2\\le x \\le 3\\\\\n\\frac{1}{24}(-4x^4+60x^3-330x^2+780x-655) & 3\\le x \\le 4\\\\\n\\frac{1}{24}(5 - x)^4 &4\\le x\\le5\n\\end{cases}\n"
},
{
"math_id": 12,
"text": "\\mu=n/2"
},
{
"math_id": 13,
"text": "\\sigma^2=n/12"
},
{
"math_id": 14,
"text": "\\phi(x)=\\mathcal{N}(\\mu=0, \\sigma^2=1)"
},
{
"math_id": 15,
"text": "\n\\phi(x) \\overset{n\\gg 0}{\\approx} \\sqrt{\\frac{n}{12}} f_X\\left(x\\sqrt{\\frac{n}{12}}+\\frac{n}{2};n \\right )\n"
},
{
"math_id": 16,
"text": "\n\\sum_{k=1}^{12}U_k -6 \\sim f_X(x+6;12) \\mathrel{\\dot\\sim} \\phi(x)\n"
}
] |
https://en.wikipedia.org/wiki?curid=15329594
|
153299
|
Root-finding algorithm
|
Algorithms for zeros of functions
In numerical analysis, a root-finding algorithm is an algorithm for finding zeros, also called "roots", of continuous functions. A zero of a function "f", from the real numbers to real numbers or from the complex numbers to the complex numbers, is a number "x" such that "f"("x") = 0. As, generally, the zeros of a function cannot be computed exactly nor expressed in closed form, root-finding algorithms provide approximations to zeros, expressed either as floating-point numbers or as small isolating intervals, or disks for complex roots (an interval or disk output being equivalent to an approximate output together with an error bound).
Solving an equation "f"("x") = "g"("x") is the same as finding the roots of the function "h"("x") = "f"("x") – "g"("x"). Thus root-finding algorithms allow solving any equation defined by continuous functions. However, most root-finding algorithms do not guarantee that they will find all the roots; in particular, if such an algorithm does not find any root, that does not mean that no root exists.
Most numerical root-finding methods use iteration, producing a sequence of numbers that hopefully converges towards the root as its limit. They require one or more "initial guesses" of the root as starting values, then each iteration of the algorithm produces a successively more accurate approximation to the root. Since the iteration must be stopped at some point, these methods produce an approximation to the root, not an exact solution. Many methods compute subsequent values by evaluating an auxiliary function on the preceding values. The limit is thus a fixed point of the auxiliary function, which is chosen for having the roots of the original equation as fixed points, and for converging rapidly to these fixed points.
The behavior of general root-finding algorithms is studied in numerical analysis. However, for polynomials, root-finding study belongs generally to computer algebra, since algebraic properties of polynomials are fundamental for the most efficient algorithms. The efficiency of an algorithm may depend dramatically on the characteristics of the given functions. For example, many algorithms use the derivative of the input function, while others work on every continuous function. In general, numerical algorithms are not guaranteed to find all the roots of a function, so failing to find a root does not prove that there is no root. However, for polynomials, there are specific algorithms that use algebraic properties for certifying that no root is missed, and locating the roots in separate intervals (or disks for complex roots) that are small enough to ensure the convergence of numerical methods (typically Newton's method) to the unique root so located.
Bracketing methods.
Bracketing methods determine successively smaller intervals (brackets) that contain a root. When the interval is small enough, then a root has been found. They generally use the intermediate value theorem, which asserts that if a continuous function has values of opposite signs at the end points of an interval, then the function has at least one root in the interval. Therefore, they require starting with an interval such that the function takes opposite signs at the end points of the interval. However, in the case of polynomials there are other methods (Descartes' rule of signs, Budan's theorem and Sturm's theorem) for getting information on the number of roots in an interval. They lead to efficient algorithms for real-root isolation of polynomials, which ensure finding all real roots with a guaranteed accuracy.
Bisection method.
The simplest root-finding algorithm is the bisection method. Let "f" be a continuous function, for which one knows an interval ["a", "b"] such that "f"("a") and "f"("b") have opposite signs (a bracket). Let "c" = ("a" + "b")/2 be the middle of the interval (the midpoint or the point that bisects the interval). Then either "f"("a") and "f"("c"), or "f"("c") and "f"("b") have opposite signs, and one has divided by two the size of the interval. Although the bisection method is robust, it gains one and only one bit of accuracy with each iteration. Therefore, the number of function evaluations required for finding an "ε"-approximate root is formula_0. Other methods, under appropriate conditions, can gain accuracy faster.
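A minimal implementation of the bisection loop just described might look as follows (an illustrative sketch of ours, with error handling kept to a minimum):

```python
def bisect(f, a, b, tol=1e-12):
    """Bisection: f must be continuous on [a, b] with f(a), f(b)
    of opposite signs; halves the bracket until it is shorter than tol."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        c = (a + b) / 2
        fc = f(c)
        if fa * fc <= 0:
            b = c            # root lies in [a, c]
        else:
            a, fa = c, fc    # root lies in [c, b]
    return (a + b) / 2

# sqrt(2) as the positive root of x^2 - 2 on [0, 2]
root = bisect(lambda x: x * x - 2, 0.0, 2.0)
assert abs(root - 2 ** 0.5) < 1e-9
```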
False position ("regula falsi").
The false position method, also called the "regula falsi" method, is similar to the bisection method, but instead of using bisection search's middle of the interval it uses the "x"-intercept of the line that connects the plotted function values at the endpoints of the interval, that is
formula_1
False position is similar to the secant method, except that, instead of retaining the last two points, it makes sure to keep one point on either side of the root. The false position method can be faster than the bisection method and will never diverge like the secant method; however, it may fail to converge in some naive implementations due to roundoff errors that may lead to a wrong sign for "f"("c"); typically, this may occur if the rate of variation of "f" is large in the neighborhood of the root.
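The chord-intercept update formula_1 can be sketched as follows (our own illustration of naive "regula falsi"; it is not robust, since one endpoint may stall, slowing convergence):

```python
def false_position(f, a, b, tol=1e-12, max_iter=200):
    """Regula falsi: like bisection, but the new point is the
    x-intercept of the chord through (a, f(a)) and (b, f(b))."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # chord's x-intercept
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:
            b, fb = c, fc   # keep the bracket [a, c]
        else:
            a, fa = c, fc   # keep the bracket [c, b]
    return c

root = false_position(lambda x: x ** 3 - x - 2, 1.0, 2.0)
assert abs(root ** 3 - root - 2) < 1e-9
```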
ITP method.
The ITP method is the only known method to bracket the root with the same worst-case guarantees as the bisection method while also guaranteeing superlinear convergence to the root of smooth functions, like the secant method. It is also the only known method guaranteed to outperform the bisection method on average for any continuous distribution on the location of the root (see ITP Method#Analysis). It does so by keeping track of both the bracketing interval and the minmax interval in which any point therein converges as fast as the bisection method. The construction of the queried point "c" follows three steps: interpolation (similar to the regula falsi), truncation (adjusting the regula falsi similar to Regula falsi § Improvements in "regula falsi") and then projection onto the minmax interval. The combination of these steps produces a simultaneously minmax-optimal method with guarantees similar to interpolation-based methods for smooth functions, and, in practice, will outperform both the bisection method and interpolation-based methods under both smooth and non-smooth functions.
Interpolation.
Many root-finding processes work by interpolation. This consists in using the last computed approximate values of the root for approximating the function by a polynomial of low degree, which takes the same values at these approximate roots. Then the root of the polynomial is computed and used as a new approximate value of the root of the function, and the process is iterated.
Two values allow interpolating a function by a polynomial of degree one (that is approximating the graph of the function by a line). This is the basis of the secant method. Three values define a quadratic function, which approximates the graph of the function by a parabola. This is Muller's method.
"Regula falsi" is also an interpolation method, which differs from the secant method by using, for interpolating by a line, two points that are not necessarily the last two computed points.
Iterative methods.
Although all root-finding algorithms proceed by iteration, an iterative root-finding method generally uses a specific type of iteration, consisting of defining an auxiliary function, which is applied to the last computed approximations of a root for getting a new approximation. The iteration stops when a fixed point (up to the desired precision) of the auxiliary function is reached, that is when the new computed value is sufficiently close to the preceding ones.
Newton's method (and similar derivative-based methods).
Newton's method assumes the function "f" to have a continuous derivative. Newton's method may not converge if started too far away from a root. However, when it does converge, it is faster than the bisection method, and is usually quadratic. Newton's method is also important because it readily generalizes to higher-dimensional problems. Newton-like methods with higher orders of convergence are the Householder's methods. The first one after Newton's method is Halley's method with cubic order of convergence.
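A minimal sketch of the Newton iteration (our own illustration, assuming the derivative is supplied by the caller):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)
    return x

# sqrt(2) via f(x) = x^2 - 2, f'(x) = 2x, starting from x0 = 1
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
assert abs(root - 2 ** 0.5) < 1e-9
```

Starting from a good initial guess, the residual roughly squares at each step (quadratic convergence), so only a handful of iterations are needed here.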
Secant method.
Replacing the derivative in Newton's method with a finite difference, we get the secant method. This method does not require the computation (nor the existence) of a derivative, but the price is slower convergence (the order is approximately 1.6 (golden ratio)). A generalization of the secant method in higher dimensions is Broyden's method.
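Replacing the derivative by the finite difference through the last two iterates gives the following sketch (an illustration of ours):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: Newton's iteration with f'(x) replaced by the
    slope of the secant line through the last two iterates."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol or f1 == f0:
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, f(x1)
    return x1

# sqrt(2) from the two starting guesses 1 and 2
root = secant(lambda x: x * x - 2, 1.0, 2.0)
assert abs(root - 2 ** 0.5) < 1e-9
```

Unlike the bracketing methods above, the two starting points need not straddle the root, and convergence is not guaranteed in general.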
Steffensen's method.
If we use a polynomial fit to remove the quadratic part of the finite difference used in the Secant method, so that it better approximates the derivative, we obtain Steffensen's method, which has quadratic convergence, and whose behavior (both good and bad) is essentially the same as Newton's method but does not require a derivative.
Fixed point iteration method.
We can use fixed-point iteration to find the root of a function. Given a function formula_2 which we have set to zero to find the root (formula_3), we rewrite the equation in terms of formula_4 so that formula_3 becomes formula_5 (note that there are often many possible functions formula_6 for each formula_3). Next, we relabel each side of the equation as formula_7 so that we can perform the iteration. Then we pick a value for formula_8 and iterate until the values settle down; if the iteration converges, it converges to a root. The iteration will only converge if formula_9.
As an example of converting formula_3 to formula_5, if given the function formula_10, we will rewrite it as one of the following equations.
formula_11,
formula_12,
formula_13,
formula_14, or
formula_15.
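The procedure above can be sketched as follows (our own illustration), using the rewriting formula_12 for the example function formula_10, which satisfies the convergence condition near the positive root:

```python
def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive values agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# For f(x) = x^2 + x - 1, the rewriting g(x) = 1 / (x + 1) converges:
# |g'(x)| = 1 / (x + 1)^2 < 1 near the positive root (~0.618).
root = fixed_point(lambda x: 1 / (x + 1), x0=1.0)
assert abs(root ** 2 + root - 1) < 1e-9
```

By contrast, a rewriting with |"g"′| > 1 at the root, such as the fourth one listed above, diverges from the same starting point; the choice of "g" matters.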
Inverse interpolation.
The appearance of complex values in interpolation methods can be avoided by interpolating the inverse of "f", resulting in the inverse quadratic interpolation method. Again, convergence is asymptotically faster than the secant method, but inverse quadratic interpolation often behaves poorly when the iterates are not close to the root.
Combinations of methods.
Brent's method.
Brent's method is a combination of the bisection method, the secant method and inverse quadratic interpolation. At every iteration, Brent's method decides which method out of these three is likely to do best, and proceeds by doing a step according to that method. This gives a robust and fast method, which therefore enjoys considerable popularity.
Ridders' method.
Ridders' method is a hybrid method that uses the value of the function at the midpoint of the interval to perform an exponential interpolation to the root. This gives fast convergence, with the number of iterations guaranteed to be at most twice that of the bisection method.
Finding roots in higher dimensions.
The bisection method has been generalized to higher dimensions; these methods are called generalized bisection methods. At each iteration, the domain is partitioned into two parts, and the algorithm decides - based on a small number of function evaluations - which of these two parts must contain a root. In one dimension, the criterion for decision is that the function has opposite signs. The main challenge in extending the method to multiple dimensions is to find a criterion that can be computed easily, and guarantees the existence of a root.
The Poincaré–Miranda theorem gives a criterion for the existence of a root in a rectangle, but it is hard to verify, since it requires to evaluate the function on the entire boundary of the rectangle.
Another criterion is given by a theorem of Kronecker. It says that, if the topological degree of a function "f" on a rectangle is non-zero, then the rectangle must contain at least one root of "f". This criterion is the basis for several root-finding methods, such as by Stenger and Kearfott. However, computing the topological degree can be time-consuming.
A third criterion is based on a "characteristic polyhedron". This criterion is used by a method called Characteristic Bisection. It does not require computing the topological degree; it only requires computing the signs of function values. The number of required evaluations is at least formula_16, where "D" is the length of the longest edge of the characteristic polyhedron. Note that this is a lower bound on the number of evaluations, not an upper bound.
A fourth method uses an intermediate-value theorem on simplices. Again, no upper bound on the number of queries is given.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\log_2\\frac{b-a}{\\varepsilon}"
},
{
"math_id": 1,
"text": "c= \\frac{af(b)-bf(a)}{f(b)-f(a)}."
},
{
"math_id": 2,
"text": " f(x) "
},
{
"math_id": 3,
"text": " f(x)=0 "
},
{
"math_id": 4,
"text": " x "
},
{
"math_id": 5,
"text": " x=g(x) "
},
{
"math_id": 6,
"text": " g(x) "
},
{
"math_id": 7,
"text": " x_{n+1}=g(x_{n}) "
},
{
"math_id": 8,
"text": " x_{1} "
},
{
"math_id": 9,
"text": " |g'(root)|<1 "
},
{
"math_id": 10,
"text": " f(x)=x^2+x-1 "
},
{
"math_id": 11,
"text": " x_{n+1}=(1/x_n) - 1 "
},
{
"math_id": 12,
"text": " x_{n+1}=1/(x_n+1) "
},
{
"math_id": 13,
"text": " x_{n+1}=1-x_n^2 "
},
{
"math_id": 14,
"text": " x_{n+1}=x_n^2+2x_n-1 "
},
{
"math_id": 15,
"text": " x_{n+1}=\\pm \\sqrt{1-x_n} "
},
{
"math_id": 16,
"text": "\\log_2(D/\\epsilon)"
}
] |
https://en.wikipedia.org/wiki?curid=153299
|
153302
|
Characteristic function
|
In mathematics, the term "characteristic function" can refer to any of several distinct concepts:
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Dmbox/styles.css" />
Index of articles associated with the same name
This includes a list of related items that share the same name (or similar names). <br> If an internal link incorrectly led you here, you may wish to change the link to point directly to the intended article.
|
[
{
"math_id": 0,
"text": "\n\\mathbf{1}_A\\colon X \\to \\{0, 1\\},\n"
},
{
"math_id": 1,
"text": "\n\\chi_A (x) := \\begin{cases}\n 0, & x \\in A; \\\\ + \\infty, &\n x \\not \\in A.\n\\end{cases}"
},
{
"math_id": 2,
"text": "\n\\varphi_X(t) = \\operatorname{E}\\left(e^{itX}\\right),\n"
},
{
"math_id": 3,
"text": "\\operatorname{E}"
}
] |
https://en.wikipedia.org/wiki?curid=153302
|
1533070
|
Volatility smile
|
Implied volatility patterns that arise in pricing financial options
Volatility smiles are implied volatility patterns that arise in pricing financial options. The smile reflects how the volatility parameter (implied volatility) must be modified for the Black–Scholes formula to fit market prices. In particular for a given expiration, options whose strike price differs substantially from the underlying asset's price command higher prices (and thus implied volatilities) than what is suggested by standard option pricing models. These options are said to be either deep in-the-money or out-of-the-money.
Graphing implied volatilities against strike prices for a given expiry produces a skewed "smile" instead of the expected flat surface. The pattern differs across various markets. Equity options traded in American markets did not show a volatility smile before the Crash of 1987 but began showing one afterwards. It is believed that investor reassessments of the probabilities of fat-tail events have led to higher prices for out-of-the-money options. This anomaly implies deficiencies in the standard Black–Scholes option pricing model which assumes constant volatility and log-normal distributions of underlying asset returns. Empirical asset returns distributions, however, tend to exhibit fat-tails (kurtosis) and skew. Modelling the volatility smile is an active area of research in quantitative finance, and better pricing models such as the stochastic volatility model partially address this issue.
A related concept is that of term structure of volatility, which describes how (implied) volatility differs for related options with different maturities. An implied volatility surface is a 3-D plot that plots volatility smile and term structure of volatility in a consolidated three-dimensional surface for all options on a given underlying asset.
Implied volatility.
In the Black–Scholes model, the theoretical value of a vanilla option is a monotonic increasing function of the volatility of the underlying asset. This means it is usually possible to compute a unique implied volatility from a given market price for an option. This implied volatility is best regarded as a rescaling of option prices which makes comparisons between different strikes, expirations, and underlyings easier and more intuitive.
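Since the Black–Scholes value increases monotonically in volatility, implied volatility can be backed out by simple bisection. The sketch below prices a European call and inverts it; the parameter values are illustrative, not market data:

```python
import math

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call option
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-10):
    # the price is monotone increasing in sigma, so bisection always converges
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

price = bs_call(100.0, 100.0, 1.0, 0.05, 0.2)     # price at a known 20% vol
iv = implied_vol(price, 100.0, 100.0, 1.0, 0.05)  # recovers roughly 0.2
```

Repeating this inversion across strikes and expiries is exactly what produces the smile and surface plots described below.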
When implied volatility is plotted against strike price, the resulting graph is typically downward sloping for equity markets, or valley-shaped for currency markets. For markets where the graph is downward sloping, such as for equity options, the term "volatility skew" is often used. For other markets, such as FX options or equity index options, where the typical graph turns up at either end, the more familiar term "volatility smile" is used. For example, the implied volatility for upside (i.e. high strike) equity options is typically lower than for at-the-money equity options. However, the implied volatilities of options on foreign exchange contracts tend to rise in both the downside and upside directions. In equity markets, a small tilted smile is often observed near the money as a kink in the general downward sloping implicit volatility graph. Sometimes the term "smirk" is used to describe a skewed smile.
Market practitioners use the term implied volatility to indicate the volatility parameter for the ATM (at-the-money) option. Adjustments to this value are undertaken by incorporating the values of Risk Reversal and Flys (Skews) to determine the actual volatility measure that may be used for options with a delta which is not 50.
Formula.
formula_0
formula_1
where formula_2 and formula_3 are the volatility quotes for the "x"%-delta call and put, ATM is the at-the-money volatility, and the risk reversal and butterfly (fly) quotes are given by formula_4 and formula_5.
Risk reversals are generally quoted as "x"% delta risk reversal and essentially is Long "x"% delta call, and short "x"% delta put.
Butterfly, on the other hand, is a strategy consisting of:
−"y"% delta fly which mean Long "y"% delta call, Long "y"% delta put, short one ATM call and short one ATM put (small hat shape).
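The two quoting formulas above can be illustrated numerically; the ATM, risk-reversal and butterfly values below are made-up vol points, not market data:

```python
def smile_vols(atm, rr, fly):
    """Recover x%-delta call/put vols from ATM, risk-reversal and fly quotes."""
    call = atm + 0.5 * rr + fly   # Call_x = ATM + 0.5*RR_x + Fly_x
    put = atm - 0.5 * rr + fly    # Put_x  = ATM - 0.5*RR_x + Fly_x
    return call, put

call25, put25 = smile_vols(atm=10.0, rr=-1.5, fly=0.4)
# consistency: RR_x = Call_x - Put_x and Fly_x = 0.5*(Call_x + Put_x) - ATM
assert abs((call25 - put25) - (-1.5)) < 1e-12
assert abs(0.5 * (call25 + put25) - 10.0 - 0.4) < 1e-12
```

A negative risk reversal, as in this example, means the put wing is quoted richer than the call wing, the typical equity-style skew.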
Implied volatility and historical volatility.
It is helpful to note that implied volatility is related to historical volatility, but the two are distinct. Historical volatility is a direct measure of the movement of the underlying’s price (realized volatility) over recent history (e.g. a trailing 21-day period). Implied volatility, in contrast, is determined by the market price of the derivative contract itself, and not the underlying. Therefore, different derivative contracts on the same underlying have different implied volatilities as a function of their own supply and demand dynamics. For instance, the IBM call option, strike at $100 and expiring in 6 months, may have an implied volatility of 18%, while the put option strike at $105 and expiring in 1 month may have an implied volatility of 21%. At the same time, the historical volatility for IBM for the previous 21 day period might be 17% (all volatilities are expressed in annualized percentage moves).
Term structure of volatility.
For options of different maturities, we also see characteristic differences in implied volatility. However, in this case, the dominant effect is related to the market's implied impact of upcoming events. For instance, it is well-observed that realized volatility for stock prices rises significantly on the day that a company reports its earnings. Correspondingly, we see that implied volatility for options will rise during the period prior to the earnings announcement, and then fall again as soon as the stock price absorbs the new information. Options that mature earlier exhibit a larger swing in implied volatility (sometimes called "vol of vol") than options with longer maturities.
Other option markets show other behavior. For instance, options on commodity futures typically show increased implied volatility just prior to the announcement of harvest forecasts. Options on US Treasury Bill futures show increased implied volatility just prior to meetings of the Federal Reserve Board (when changes in short-term interest rates are announced).
The market incorporates many other types of events into the term structure of volatility. For instance, the impact of upcoming results of a drug trial can cause implied volatility swings for pharmaceutical stocks. The anticipated resolution date of patent litigation can impact technology stocks, etc.
Volatility term structures list the relationship between implied volatilities and time to expiration. The term structures provide another method for traders to gauge cheap or expensive options.
Implied volatility surface.
It is often useful to plot implied volatility as a function of both strike price and time to maturity. The result is a two-dimensional curved surface plotted in three dimensions whereby the current market implied volatility ("z"-axis) for all options on the underlying is plotted against the strike price ("y"-axis) and time to maturity ("x"-axis "DTM"). This defines the absolute implied volatility surface; changing coordinates so that the strike price is replaced by delta yields the relative implied volatility surface.
The implied volatility surface simultaneously shows both volatility smile and term structure of volatility. Option traders use an implied volatility plot to quickly determine the shape of the implied volatility surface, and to identify any areas where the slope of the plot (and therefore relative implied volatilities) seems out of line.
The graph shows an implied volatility surface for all the put options on a particular underlying stock price. The "z"-axis represents implied volatility in percent, and "x" and "y" axes represent the option delta, and the days to maturity. Note that to maintain put–call parity, a 20 delta put must have the same implied volatility as an 80 delta call. For this surface, we can see that the underlying symbol has both volatility skew (a tilt along the delta axis), as well as a volatility term structure indicating an anticipated event in the near future.
Evolution: Sticky.
An implied volatility surface is "static": it describes the implied volatilities at a given moment in time. How the surface changes as the spot changes is called the "evolution of the implied volatility surface".
Common heuristics include "sticky strike", in which an option's implied volatility remains attached to its strike price as the spot moves, and "sticky delta" (or "sticky moneyness"), in which implied volatility remains attached to the option's delta or moneyness.
So if spot moves from $100 to $120, sticky strike would predict that the implied volatility of a $120 strike option would be whatever it was before the move (though it has moved from being OTM to ATM), while sticky delta would predict that the implied volatility of the $120 strike option would be whatever the $100 strike option's implied volatility was before the move (as these are both ATM at the time).
Modeling volatility.
Methods of modelling the volatility smile include stochastic volatility models and local volatility models. For a discussion of the various alternative approaches, see the references below.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\operatorname{Call}x = \\mathrm{ATM} + 0.5\\operatorname{RR}x + \\operatorname{Fly}x"
},
{
"math_id": 1,
"text": "\\operatorname{Put}x = \\mathrm{ATM} - 0.5 \\operatorname{RR}x + \\operatorname{Fly}x"
},
{
"math_id": 2,
"text": "\\operatorname{Call}x"
},
{
"math_id": 3,
"text": "\\operatorname{Put}x"
},
{
"math_id": 4,
"text": "\\operatorname{RR}x = \\operatorname{Call}x - \\operatorname{Put}x"
},
{
"math_id": 5,
"text": "\\operatorname{Fly}x = 0.5(\\operatorname{Call}x + \\operatorname{Put}x) - \\mathrm{ATM}"
}
] |
https://en.wikipedia.org/wiki?curid=1533070
|
1533133
|
Triplet state
|
Quantum state of a system
In quantum mechanics, a triplet state, or spin triplet, is the quantum state of an object such as an electron, atom, or molecule, having a quantum spin "S" = 1. It has three allowed values of the spin's projection along a given axis "m"S = −1, 0, or +1, giving the name "triplet".
Spin, in the context of quantum mechanics, is not a mechanical rotation but a more abstract concept that characterizes a particle's intrinsic angular momentum. It is particularly important for systems at atomic length scales, such as individual atoms, protons, or electrons.
A triplet state occurs in cases where the spins of two unpaired electrons, each having spin "s" = 1/2, align to give "S" = 1, in contrast to the more common case of two electrons aligning oppositely to give "S" = 0, a spin singlet. Most molecules encountered in daily life exist in a singlet state because all of their electrons are paired, but molecular oxygen is an exception. At room temperature, O2 exists in a triplet state, which can only undergo a chemical reaction by making the forbidden transition into a singlet state. This makes it kinetically nonreactive despite being thermodynamically one of the strongest oxidants. Photochemical or thermal activation can bring it into the singlet state, which makes it kinetically as well as thermodynamically a very strong oxidant.
Two spin-1/2 particles.
In a system with two spin-1/2 particles – for example the proton and electron in the ground state of hydrogen – measured on a given axis, each particle can be either spin up or spin down so the system has four basis states in all
formula_0
using the single particle spins to label the basis states, where the first arrow and second arrow in each combination indicate the spin direction of the first particle and second particle respectively.
More rigorously
formula_1
where formula_2 and formula_3 are the spins of the two particles, and formula_4 and formula_5 are their projections onto the z axis. Since for spin-1/2 particles, the formula_6 basis states span a 2-dimensional space, the formula_7 basis states span a 4-dimensional space.
Now the total spin and its projection onto the previously defined axis can be computed using the rules for adding angular momentum in quantum mechanics using the Clebsch–Gordan coefficients. In general
formula_8
substituting in the four basis states
formula_9
returns the possible values for total spin given along with their representation in the formula_7 basis. There are three states with total spin angular momentum 1:
formula_10
which are symmetric and a fourth state with total spin angular momentum 0:
formula_11
which is antisymmetric. The result is that a combination of two spin-1/2 particles can carry a total spin of 1 or 0, depending on whether they occupy a triplet or singlet state.
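The symmetry of the triplet states and the antisymmetry of the singlet under particle exchange can be checked directly by representing two-spin states as amplitude maps over the four basis labels; the helper names are illustrative:

```python
from math import sqrt

# two-spin basis states labelled 'uu', 'ud', 'du', 'dd'
up, dn = 0.5, -0.5  # m values of a single spin-1/2

def total_mz(state):
    # total S_z projection (in units of hbar) of an S_z eigenstate
    ms = {lbl: (up if lbl[0] == 'u' else dn) + (up if lbl[1] == 'u' else dn)
          for lbl in state}
    values = set(ms.values())
    assert len(values) == 1  # all components must share one m value
    return values.pop()

def swapped(state):
    # exchange the two particles
    return {lbl[::-1]: amp for lbl, amp in state.items()}

triplet_0 = {'ud': 1 / sqrt(2), 'du': 1 / sqrt(2)}  # |1,0>
singlet = {'ud': 1 / sqrt(2), 'du': -1 / sqrt(2)}   # |0,0>

# triplet |1,0> is symmetric, the singlet antisymmetric, under exchange
assert swapped(triplet_0) == triplet_0
assert all(abs(swapped(singlet)[k] + singlet[k]) < 1e-12 for k in singlet)
```

Both superpositions have total "m" = 0; the exchange symmetry, not the projection, is what distinguishes them.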
A mathematical viewpoint.
In terms of representation theory, what has happened is that the two conjugate 2-dimensional spin representations of the spin group SU(2) = Spin(3) (as it sits inside the 3-dimensional Clifford algebra) have tensored to produce a 4-dimensional representation. The 4-dimensional representation descends to the usual orthogonal group SO(3) and so its objects are tensors, corresponding to the integrality of their spin. The 4-dimensional representation decomposes into the sum of a one-dimensional trivial representation (singlet, a scalar, spin zero) and a three-dimensional representation (triplet, spin 1) that is nothing more than the standard representation of SO(3) on formula_12. Thus the "three" in triplet can be identified with the three rotation axes of physical space.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\uparrow\\uparrow,\\uparrow\\downarrow,\\downarrow\\uparrow,\\downarrow\\downarrow"
},
{
"math_id": 1,
"text": "\n|s_1,m_1\\rangle|s_2,m_2\\rangle = |s_1,m_1\\rangle \\otimes |s_2,m_2\\rangle,\n"
},
{
"math_id": 2,
"text": "s_1"
},
{
"math_id": 3,
"text": "s_2"
},
{
"math_id": 4,
"text": "m_1"
},
{
"math_id": 5,
"text": "m_2"
},
{
"math_id": 6,
"text": "\\left|\\frac{1}{2},m\\right\\rangle"
},
{
"math_id": 7,
"text": "\\left|\\frac{1}{2},m_1\\right\\rangle\\left|\\frac{1}{2},m_2\\right\\rangle"
},
{
"math_id": 8,
"text": "|s,m\\rangle = \\sum_{m_1+m_2=m} C_{m_1m_2m}^{s_1s_2s}|s_1 m_1\\rangle|s_2 m_2\\rangle"
},
{
"math_id": 9,
"text": "\\begin{align}\n \\left|\\frac{1}{2},+\\frac{1}{2}\\right\\rangle\\ \\otimes \\left|\\frac{1}{2},+\\frac{1}{2}\\right\\rangle\\ &\\text{ by } (\\uparrow\\uparrow), \\\\\n \\left|\\frac{1}{2},+\\frac{1}{2}\\right\\rangle\\ \\otimes \\left|\\frac{1}{2},-\\frac{1}{2}\\right\\rangle\\ &\\text{ by } (\\uparrow\\downarrow), \\\\\n \\left|\\frac{1}{2},-\\frac{1}{2}\\right\\rangle\\ \\otimes \\left|\\frac{1}{2},+\\frac{1}{2}\\right\\rangle\\ &\\text{ by } (\\downarrow\\uparrow), \\\\\n \\left|\\frac{1}{2},-\\frac{1}{2}\\right\\rangle\\ \\otimes \\left|\\frac{1}{2},-\\frac{1}{2}\\right\\rangle\\ &\\text{ by } (\\downarrow\\downarrow)\\end{align}"
},
{
"math_id": 10,
"text": "\n\\left.\\begin{array}{ll}\n|1,1\\rangle &=\\; \\uparrow\\uparrow \\\\\n|1,0\\rangle &=\\; \\frac{1}{\\sqrt{2}}(\\uparrow\\downarrow + \\downarrow\\uparrow) \\\\\n|1,-1\\rangle &=\\; \\downarrow\\downarrow\n\\end{array}\\right\\}\\quad s = 1\\quad \\mathrm{(triplet)}\n"
},
{
"math_id": 11,
"text": "\\left.|0,0\\rangle = \\frac{1}{\\sqrt{2}}(\\uparrow\\downarrow - \\downarrow\\uparrow)\\;\\right\\}\\quad s=0\\quad\\mathrm{(singlet)}"
},
{
"math_id": 12,
"text": "R^3"
}
] |
https://en.wikipedia.org/wiki?curid=1533133
|
1533196
|
Photoelasticity
|
Change in optical properties of a material due to stress
In materials science, photoelasticity describes changes in the optical properties of a material under mechanical deformation. It is a property of all dielectric media and is often used to experimentally determine the stress distribution in a material.
History.
The photoelastic phenomenon was first discovered by the Scottish physicist David Brewster, who immediately recognized it as stress-induced birefringence. That diagnosis was confirmed in a direct refraction experiment by Augustin-Jean Fresnel. Experimental frameworks were developed at the beginning of the twentieth century with the works of E. G. Coker and L. N. G. Filon of University of London. Their book "Treatise on Photoelasticity", published in 1930 by Cambridge Press, became a standard text on the subject. Between 1930 and 1940, many other books appeared on the subject, including books in Russian, German and French. Max M. Frocht published the classic two-volume work, "Photoelasticity", in the field. At the same time, much development occurred in the field – great improvements were achieved in technique, and the equipment was simplified. With refinements in the technology, photoelastic experiments were extended to determining three-dimensional states of stress. In parallel to developments in experimental technique, the first phenomenological description of photoelasticity was given in 1890 by Friedrich Pockels; however, this was proved inadequate almost a century later by Nelson and Lax, since Pockels' description considered only the effect of mechanical strain on the optical properties of the material.
With the advent of the digital polariscope – made possible by light-emitting diodes – continuous monitoring of structures under load became possible. This led to the development of dynamic photoelasticity, which has contributed greatly to the study of complex phenomena such as fracture of materials.
Applications.
Photoelasticity has been used for a variety of stress analyses and even for routine use in design, particularly before the advent of numerical methods, such as finite elements or boundary elements. Digitization of polariscopy enables fast image acquisition and data processing, which allows its industrial applications to control quality of manufacturing process for materials such as glass and polymer. Dentistry utilizes photoelasticity to analyze strain in denture materials.
Photoelasticity can successfully be used to investigate the highly localized stress state within masonry or in proximity of a rigid line inclusion (stiffener) embedded in an elastic medium. In the former case, the problem is nonlinear due to the contacts between bricks, while in the latter case the elastic solution is singular, so that numerical methods may fail to provide correct results. These can be obtained through photoelastic techniques. Dynamic photoelasticity integrated with high-speed photography is utilized to investigate fracture behavior in materials. Another important application of the photoelasticity experiments is to study the stress field around bi-material notches. Bi-material notches exist in many engineering application like welded or adhesively bonded structures.
Formal definition.
For a linear dielectric material the change in the inverse permittivity tensor formula_0 with respect to the deformation (the gradient of the displacement formula_1) is described by
formula_2
where formula_3 is the fourth-rank photoelasticity tensor, formula_4 is the linear displacement from equilibrium, and formula_5 denotes differentiation with respect to the Cartesian coordinate formula_6. For isotropic materials, this definition simplifies to
formula_7
where formula_8 is the symmetric part of the photoelastic tensor (the photoelastic strain tensor), and formula_9 is the linear strain. The antisymmetric part of formula_3 is known as the roto-optic tensor. From either definition, it is clear that deformations to the body may induce optical anisotropy, which can cause an otherwise optically isotropic material to exhibit birefringence. Although the symmetric photoelastic tensor is most commonly defined with respect to mechanical strain, it is also possible to express photoelasticity in terms of the mechanical stress.
Experimental principles.
The experimental procedure relies on the property of birefringence, as exhibited by certain transparent materials. Birefringence is a phenomenon in which a ray of light passing through a given material experiences two refractive indices. The property of birefringence (or double refraction) is observed in many optical crystals. Upon the application of stresses, photoelastic materials exhibit the property of birefringence, and the magnitude of the refractive indices at each point in the material is directly related to the state of stresses at that point. Information such as maximum shear stress and its orientation are available by analyzing the birefringence with an instrument called a polariscope.
When a ray of light passes through a photoelastic material, its electromagnetic wave components are resolved along the two principal stress directions and each component experiences a different refractive index due to the birefringence. The difference in the refractive indices leads to a relative phase retardation between the two components. Assuming a thin specimen made of isotropic materials, where two-dimensional photoelasticity is applicable, the magnitude of the relative retardation is given by the "stress-optic law":
formula_10
where Δ is the induced retardation, "C" is the <templatestyles src="Template:Visible anchor/styles.css" />stress-optic coefficient, "t" is the specimen thickness, "λ" is the vacuum wavelength, and "σ"1 and "σ"2 are the first and second principal stresses, respectively. The retardation changes the polarization of the transmitted light. The polariscope combines the different polarization states of the light waves before and after they pass through the specimen. Due to optical interference of the two waves, a fringe pattern is revealed. The fringe order "N" is given by
formula_11
which depends on relative retardation. By studying the fringe pattern one can determine the state of stress at various points in the material.
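As a numeric illustration of the stress-optic law, the fringe order can be computed directly; the material and load values below are assumed, typical-order figures only, not taken from the article:

```python
from math import pi

t = 6e-3            # specimen thickness, m (assumed)
C = 4e-11           # stress-optic coefficient, 1/Pa (typical order of magnitude)
lam = 546e-9        # wavelength, m (mercury green line)
s1, s2 = 12e6, 4e6  # principal stresses, Pa (assumed)

delta = 2 * pi * t * C * (s1 - s2) / lam  # relative retardation, rad
N = delta / (2 * pi)                      # fringe order, about 3.5 here
```

Counting roughly three to four fringes at a point, with known "t", "C" and "λ", thus recovers the principal stress difference there.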
For materials that do not show photoelastic behavior, it is still possible to study the stress distribution. The first step is to build a model, using photoelastic materials, which has geometry similar to the real structure under investigation. The loading is then applied in the same way to ensure that the stress distribution in the model is similar to the stress in the real structure.
Isoclinics and isochromatics.
Isoclinics are the loci of the points in the specimen along which the principal stresses are in the same direction.
Isochromatics are the loci of the points along which the difference in the first and second principal stress remains the same. Thus they are the lines which join the points with equal maximum shear stress magnitude.
Two-dimensional photoelasticity.
Photoelasticity can describe both three-dimensional and two-dimensional states of stress. However, examining photoelasticity in three-dimensional systems is more involved than in two-dimensional or plane-stress systems, so the present section deals with photoelasticity in a plane-stress system. This condition is achieved when the thickness of the prototype is much smaller than its dimensions in the plane. Thus one is only concerned with stresses acting parallel to the plane of the model, as other stress components are zero. The experimental setup varies from experiment to experiment. The two basic kinds of setup used are the plane polariscope and the circular polariscope.
The working principle of a two-dimensional experiment allows the measurement of retardation, which can be converted to the difference between the first and second principal stress and their orientation. To further get values of each stress component, a technique called stress-separation is required. Several theoretical and experimental methods are utilized to provide additional information to solve individual stress components.
Plane polariscope setup.
The setup consists of two linear polarizers and a light source. The light source can either emit monochromatic light or white light depending upon the experiment. First the light is passed through the first polarizer which converts the light into plane polarized light. The apparatus is set up in such a way that this plane polarized light then passes through the stressed specimen. This light then follows, at each point of the specimen, the direction of principal stress at that point. The light is then made to pass through the analyzer and we finally get the fringe pattern.
The fringe pattern in a plane polariscope setup consists of both the isochromatics and the isoclinics. The isoclinics change with the orientation of the polariscope while there is no change in the isochromatics.
Circular polariscope setup.
In a circular polariscope setup two quarter-wave plates are added to the experimental setup of the plane polariscope. The first quarter-wave plate is placed in between the polarizer and the specimen and the second quarter-wave plate is placed between the specimen and the analyzer. The effect of adding the quarter-wave plate after the source-side polarizer is that we get circularly polarized light passing through the sample. The analyzer-side quarter-wave plate converts the circular polarization state back to linear before the light passes through the analyzer.
The basic advantage of a circular polariscope over a plane polariscope is that in a circular polariscope setup we only get the isochromatics and not the isoclinics. This eliminates the problem of differentiating between the isoclinics and the isochromatics.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Delta(\\varepsilon^{-1})_{ij}"
},
{
"math_id": 1,
"text": "\\partial_\\ell u_k"
},
{
"math_id": 2,
"text": " \\Delta(\\varepsilon^{-1})_{ij} = P_{ijk\\ell} \\partial_k u_\\ell "
},
{
"math_id": 3,
"text": "P_{ijk\\ell}"
},
{
"math_id": 4,
"text": "u_\\ell"
},
{
"math_id": 5,
"text": "\\partial_l"
},
{
"math_id": 6,
"text": "x_l"
},
{
"math_id": 7,
"text": " \\Delta(\\varepsilon^{-1})_{ij} = p_{ijk\\ell} s_{k\\ell} "
},
{
"math_id": 8,
"text": "p_{ijk\\ell}"
},
{
"math_id": 9,
"text": "s_{k\\ell}"
},
{
"math_id": 10,
"text": " \\Delta = \\frac{2\\pi t} \\lambda C ( \\sigma_1 - \\sigma_2) "
},
{
"math_id": 11,
"text": " N = \\frac \\Delta {2\\pi}"
}
] |
https://en.wikipedia.org/wiki?curid=1533196
|
15333552
|
C-minimal theory
|
In model theory, a branch of mathematical logic, a C-minimal theory is a theory that is "minimal" with respect to a ternary relation "C" with certain properties. Algebraically closed fields with a (Krull) valuation are perhaps the most important example.
This notion was defined in analogy to the o-minimal theories, which are "minimal" (in the same sense) with respect to a linear order.
Definition.
A "C"-relation is a ternary relation "C"("x"; "y", "z") that satisfies the following axioms.
A C-minimal structure is a structure "M", in a signature containing the symbol "C", such that "C" satisfies the above axioms and every set of elements of "M" that is definable with parameters in "M" is a Boolean combination of instances of "C", i.e. of formulas of the form "C"("x"; "b", "c"), where "b" and "c" are elements of "M".
A theory is called C-minimal if all of its models are C-minimal. A structure is called strongly C-minimal if its theory is C-minimal. One can construct C-minimal structures which are not strongly C-minimal.
Example.
For a prime number "p" and a "p"-adic number "a", let |"a"|"p" denote its "p"-adic absolute value. Then the relation defined by formula_4 is a "C"-relation, and the theory of Q"p" with addition and this relation is C-minimal. The theory of Q"p" as a field, however, is not C-minimal.
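For integer arguments, the p-adic absolute value and the resulting C-relation can be computed and the axioms spot-checked; the helper names are illustrative:

```python
def vp(n, p):
    # p-adic valuation of a nonzero integer
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def abs_p(n, p):
    # p-adic absolute value |n|_p of a nonzero integer
    return p ** (-vp(n, p))

def C(a, b, c, p):
    # C(a; b, c)  iff  |b - c|_p < |a - c|_p
    return abs_p(b - c, p) < abs_p(a - c, p)

# e.g. C(1; 4, 0) holds 2-adically, since |4 - 0|_2 = 1/4 < |1 - 0|_2 = 1
assert C(1, 4, 0, 2)
```

The first axiom (symmetry in the last two arguments) follows from the ultrametric inequality: if |"b" − "c"| < |"a" − "c"| then |"a" − "b"| = |"a" − "c"|, so swapping "b" and "c" preserves the relation.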
|
[
{
"math_id": 0,
"text": "\\forall xyz\\, [ C(x;y,z)\\rightarrow C(x;z,y) ],"
},
{
"math_id": 1,
"text": "\\forall xyz\\, [ C(x;y,z)\\rightarrow\\neg C(y;x,z) ],"
},
{
"math_id": 2,
"text": "\\forall xyzw\\, [ C(x;y,z)\\rightarrow (C(w;y,z)\\vee C(x;w,z)) ],"
},
{
"math_id": 3,
"text": "\\forall xy\\, [ x\\neq y \\rightarrow \\exists z\\neq y\\, C(x;y,z) ]."
},
{
"math_id": 4,
"text": "C(a; b, c) \\iff |b-c|_p < |a-c|_p"
}
] |
https://en.wikipedia.org/wiki?curid=15333552
|
15334997
|
Topological game
|
Mathematical game on a topological space
In mathematics, a topological game is an infinite game of perfect information played between two players on a topological space. Players choose objects with topological properties such as points, open sets, closed sets and open coverings. Time is generally discrete, but the plays may have transfinite lengths, and extensions to continuum time have been put forth. The conditions for a player to win can involve notions like topological closure and convergence.
It turns out that some fundamental topological constructions have a natural counterpart in topological games; examples of these are the Baire property, Baire spaces, completeness and convergence properties, separation properties, covering and base properties, continuous images, Suslin sets, and singular spaces. At the same time, some topological properties that arise naturally in topological games can be generalized beyond a game-theoretic context: by virtue of this duality, topological games have been widely used to describe new properties of topological spaces, and to put known properties under a different light. There are also close links with selection principles.
The term "topological game" was first introduced by Claude Berge,
who defined the basic ideas and formalism in analogy with topological groups. A different meaning for "topological game", the concept of “topological properties defined by games”, was introduced in the paper of Rastislav Telgársky,
and later "spaces defined by topological games";
this approach is based on analogies with matrix games, differential games and statistical games, and defines and studies topological games within topology. After more than 35 years, the term “topological game” became widespread, and appeared in several hundreds of publications. The survey paper of Telgársky
emphasizes the origin of topological games from the Banach–Mazur game.
There are two other meanings of topological games, but these are used less frequently.
Basic setup for a topological game.
Many frameworks can be defined for infinite positional games of perfect information.
The typical setup is a game between two players, I and II, who alternately pick subsets of a topological space "X". In the "n"th round, player I plays a subset "I""n" of "X", and player II responds with a subset "J""n". There is a round for every natural number "n", and after all rounds are played, player I wins if the sequence
"I"0, "J"0, "I"1, "J"1...
satisfies some property, and otherwise player II wins.
The game is defined by the target property and the allowed moves at each step. For example, in the Banach–Mazur game "BM"("X"), the allowed moves are nonempty open subsets of the previous move, and player I wins if formula_0.
This typical setup can be modified in various ways. For example, instead of being a subset of "X", each move might consist of a pair formula_1 where formula_2 and formula_3. Alternatively, the sequence of moves might have length some ordinal number other than ω.
"I"0, "J"0, "I"1, "J"1...
The "result of a play" is either a win or a loss for each player.
formula_4
is "according to strategy s". (Here λ denotes the empty sequence of moves.)
The Banach–Mazur game.
The first topological game studied was the Banach–Mazur game, which is a motivating example of the connections between game-theoretic notions and topological properties.
Let "Y" be a topological space, and let "X" be a subset of "Y", called the "winning set". Player I begins the game by picking a nonempty open subset formula_6, and player II responds with a nonempty open subset formula_7. Play continues in this fashion, with players alternately picking a nonempty open subset of the previous play. After an infinite sequence of moves, one for each natural number, the game is finished, and I wins if and only if
formula_8
The game-theoretic and topological connections demonstrated by the game include the following: player II has a winning strategy in "BM"("X") if and only if "X" is a meager subset of "Y"; and if "Y" is a complete metric space, player I has a winning strategy if and only if "X" is comeager in some nonempty open subset of "Y".
Other topological games.
Some other notable topological games are:
Many more games have been introduced over the years, to study, among others: the Kuratowski coreduction principle; separation and reduction properties of sets in close projective classes; Luzin sieves; invariant descriptive set theory; Suslin sets; the closed graph theorem; webbed spaces; MP-spaces; the axiom of choice; computable functions. Topological games have also been related to ideas in mathematical logic, model theory, infinitely-long formulas, infinite strings of alternating quantifiers, ultrafilters, partially ordered sets, and the chromatic number of infinite graphs.
For a longer list and a more detailed account see the 1987 survey paper of Telgársky.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\bigcap_n I_n \\neq \\emptyset"
},
{
"math_id": 1,
"text": "(I, p)"
},
{
"math_id": 2,
"text": "I \\subseteq X"
},
{
"math_id": 3,
"text": "p \\in x"
},
{
"math_id": 4,
"text": "s(\\lambda), J_0, s(J_0), J_1, s(J_0, J_1), J_2, s(J_0, J_1, J_2), \\ldots"
},
{
"math_id": 5,
"text": "P \\uparrow G"
},
{
"math_id": 6,
"text": "I_0 \\subseteq Y"
},
{
"math_id": 7,
"text": "J_0 \\subseteq I_0"
},
{
"math_id": 8,
"text": " X \\cap \\bigcap_{n \\in \\omega} I_n \\neq \\emptyset."
}
] |
https://en.wikipedia.org/wiki?curid=15334997
|
1533679
|
Cage effect
|
Behavior of molecules in solvent as encapsulated particles
In chemistry, the cage effect (also known as geminate recombination) describes how the properties of a molecule are affected by its surroundings. First introduced by James Franck and Eugene Rabinowitch in 1934, the cage effect suggests that instead of acting as an individual particle, molecules in solvent are more accurately described as an encapsulated particle. The encapsulated molecules or radicals are called cage pairs or geminate pairs. In order to interact with other molecules, the caged particle must diffuse from its solvent cage. The typical lifetime of a solvent cage is 10^−11 seconds. Many manifestations of the cage effect exist.
In free radical polymerization, radicals formed from the decomposition of an initiator molecule are surrounded by a cage consisting of solvent and/or monomer molecules. Within the cage, the free radicals undergo many collisions leading to their recombination or mutual deactivation. This can be described by the following reaction:
formula_0
Radicals that escape recombination can either react with monomer molecules within the cage walls or diffuse out of the cage. In polymers, the probability that a free radical pair escapes recombination in the cage is 0.01–0.1, while in liquids it is 0.3–0.8. In unimolecular chemistry, geminate recombination was first studied in the solution phase using iodine molecules and heme proteins. In the solid state, geminate recombination has been demonstrated with small molecules trapped in noble-gas solid matrices and in triiodide crystalline compounds.
Cage recombination efficiency.
The cage effect can be quantitatively described as the cage recombination efficiency Fc where:
formula_1
Here Fc is defined as the ratio of the rate constant for cage recombination (kc) to the sum of the rate constants for all cage processes. According to mathematical models, Fc depends on several parameters, including radical size, shape, and solvent viscosity. It is reported that the cage effect increases with an increase in radical size and a decrease in radical mass.
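As a minimal numeric illustration of this ratio (the rate-constant values below are hypothetical, chosen only to show the arithmetic, and the class and method names are not from any standard library):

```java
// Cage recombination efficiency Fc = kc / (kc + kd).
// All numeric rate constants here are hypothetical, for illustration only.
public class CageEfficiency {
    public static double cageEfficiency(double kc, double kd) {
        return kc / (kc + kd);
    }

    public static void main(String[] args) {
        double kc = 8.0e9; // assumed cage-recombination rate constant (s^-1)
        double kd = 2.0e9; // assumed cage-escape rate constant (s^-1)
        System.out.println(cageEfficiency(kc, kd)); // prints 0.8
    }
}
```

With these assumed constants, recombination dominates escape, so Fc is close to 1.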
Initiator efficiency.
In free radical polymerization, the rate of initiation is dependent on how effective the initiator is. Low initiator efficiency, ƒ, is largely attributed to the cage effect. The rate of initiation is described as:
formula_2
where Ri is the rate of initiation, kd is the rate constant for initiator dissociation, [I] is the initial concentration of initiator, and ƒ is the initiator efficiency. Initiator efficiency represents the fraction of primary radicals R· that actually contribute to chain initiation. Due to the cage effect, free radicals can undergo mutual deactivation, which produces stable products instead of initiating propagation – reducing the value of ƒ.
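The rate expression is a simple product; as a sketch (the numeric values of ƒ, kd and [I] below are hypothetical, chosen only to make the arithmetic concrete):

```java
// Rate of initiation Ri = 2 f kd [I]; all numeric inputs are hypothetical.
public class InitiationRate {
    public static double initiationRate(double f, double kd, double conc) {
        return 2.0 * f * kd * conc;
    }

    public static void main(String[] args) {
        double f = 0.5;     // assumed initiator efficiency (dimensionless)
        double kd = 1.0e-5; // assumed dissociation rate constant (s^-1)
        double conc = 0.01; // assumed initiator concentration (mol/L)
        System.out.println(initiationRate(f, kd, conc)); // about 1e-7 mol L^-1 s^-1
    }
}
```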
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\nR\\!-\\!R\n\\;\\;\\underset{k_c}{\\overset{k_1}{\\rightleftharpoons}}\\;\\;\n\\underset{\\text{cage pair}}{(R^{\\,\\bullet},^{\\bullet}\\!R)}\n\\;\\;\\underset{k_D}{\\overset{k_d}{\\rightleftharpoons}}\\;\\;\n\\underset{\\text{free radicals}}{2R^{\\,\\bullet}}\n\\;\\rightarrow\\;\n\\text{Products}\n"
},
{
"math_id": 1,
"text": "F_c = k_c/(k_c + k_d) "
},
{
"math_id": 2,
"text": "R_i = 2fk_d[I] "
}
] |
https://en.wikipedia.org/wiki?curid=1533679
|
1533767
|
Interval tree
|
Tree data structure to hold intervals
In computer science, an interval tree is a tree data structure to hold intervals. Specifically, it allows one to efficiently find all intervals that overlap with any given interval or point. It is often used for windowing queries, for instance, to find all roads on a computerized map inside a rectangular viewport, or to find all visible elements inside a three-dimensional scene. A similar data structure is the segment tree.
The trivial solution is to visit each interval and test whether it intersects the given point or interval, which requires formula_0 time, where formula_1 is the number of intervals in the collection. Since a query may return all intervals, for example if the query is a large interval intersecting all intervals in the collection, this is asymptotically optimal; however, we can do better by considering output-sensitive algorithms, where the runtime is expressed in terms of formula_2, the number of intervals produced by the query. Interval trees have a query time of formula_3 and an initial creation time of formula_4, while limiting memory consumption to formula_0. After creation, interval trees may be dynamic, allowing efficient insertion and deletion of an interval in formula_5 time. If the endpoints of intervals are within a small integer range ("e.g.", in the range formula_6), faster and in fact optimal data structures exist with preprocessing time formula_0 and query time formula_7 for reporting formula_2 intervals containing a given query point (see for a very simple one).
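The trivial O("n") baseline described above can be sketched as follows (illustrative code; representing an interval as a two-element double[] of start and end is an assumption of this sketch, not part of the article):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// O(n) baseline: test every interval against the query point.
public class BruteForceStab {
    // Each interval is represented as double[]{start, end} (an assumption).
    public static List<double[]> stab(List<double[]> intervals, double x) {
        List<double[]> result = new ArrayList<>();
        for (double[] iv : intervals)
            if (iv[0] <= x && x <= iv[1]) // the interval contains x
                result.add(iv);
        return result;
    }

    public static void main(String[] args) {
        List<double[]> ivs = Arrays.asList(
            new double[]{1, 4}, new double[]{5, 8}, new double[]{2, 6});
        System.out.println(stab(ivs, 3).size()); // prints 2
    }
}
```

An interval tree replaces this linear scan with an O(log n + m) query, as described below.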
Naive approach.
In a simple case, the intervals do not overlap and they can be inserted into a simple binary search tree and queried in formula_5 time. However, with arbitrarily overlapping intervals, there is no way to compare two intervals for insertion into the tree since orderings sorted by the beginning points or the ending points may be different. A naive approach might be to build two parallel trees, one ordered by the beginning point, and one ordered by the ending point of each interval. This allows discarding half of each tree in formula_5 time, but the results must be merged, requiring formula_0 time. This gives us queries in formula_8, which is no better than brute-force.
Interval trees solve this problem. This article describes two alternative designs for an interval tree, dubbed the "centered interval tree" and the "augmented tree".
Centered interval tree.
Queries require formula_3 time, with formula_1 being the total number of intervals and formula_2 being the number of reported results. Construction requires formula_4 time, and storage requires formula_0 space.
Construction.
Given a set of formula_1 intervals on the number line, we want to construct a data structure so that we can efficiently retrieve all intervals overlapping another interval or point.
We start by taking the entire range of all the intervals and dividing it in half at formula_9 (in practice, formula_9 should be picked to keep the tree relatively balanced). This gives three sets of intervals, those completely to the left of formula_9 which we'll call formula_10, those completely to the right of formula_9 which we'll call formula_11, and those overlapping formula_9 which we'll call formula_12.
The intervals in formula_10 and formula_11 are recursively divided in the same manner until there are no intervals left.
The intervals in formula_12 that overlap the center point are stored in a separate data structure linked to the node in the interval tree. This data structure consists of two lists, one containing all the intervals sorted by their beginning points, and another containing all the intervals sorted by their ending points.
The result is a binary tree with each node storing: a center point; a pointer to the node containing all intervals completely to the left of the center point; a pointer to the node containing all intervals completely to the right of the center point; all intervals overlapping the center point sorted by their beginning point; and all intervals overlapping the center point sorted by their ending point.
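The recursive construction can be sketched as follows (the class and field names are illustrative, and representing an interval as double[]{start, end} is an assumption of this sketch; the median endpoint is used as the center to keep the tree roughly balanced):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Sketch of centered-interval-tree construction (names are illustrative).
public class CenteredNode {
    double center;                              // x_center chosen for this node
    CenteredNode left, right;                   // built from S_left and S_right
    List<double[]> byStart = new ArrayList<>(); // S_center sorted by beginning point
    List<double[]> byEnd = new ArrayList<>();   // S_center sorted by ending point

    public static CenteredNode build(List<double[]> intervals) {
        if (intervals.isEmpty()) return null;
        // Use the median endpoint as x_center to keep the tree roughly balanced.
        List<Double> endpoints = new ArrayList<>();
        for (double[] iv : intervals) { endpoints.add(iv[0]); endpoints.add(iv[1]); }
        Collections.sort(endpoints);
        double center = endpoints.get(endpoints.size() / 2);

        CenteredNode n = new CenteredNode();
        n.center = center;
        List<double[]> leftSet = new ArrayList<>(), rightSet = new ArrayList<>();
        for (double[] iv : intervals) {
            if (iv[1] < center) leftSet.add(iv);         // completely left of center
            else if (iv[0] > center) rightSet.add(iv);   // completely right of center
            else { n.byStart.add(iv); n.byEnd.add(iv); } // overlaps the center point
        }
        n.byStart.sort(Comparator.comparingDouble((double[] iv) -> iv[0]));
        n.byEnd.sort(Comparator.comparingDouble((double[] iv) -> iv[1]));
        n.left = build(leftSet);
        n.right = build(rightSet);
        return n;
    }
}
```

At least one interval always overlaps the chosen median endpoint, so each recursive call receives strictly fewer intervals and the recursion terminates.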
Intersecting.
Given the data structure constructed above, we receive queries consisting of ranges or points, and return all the ranges in the original set overlapping this input.
With a point.
The task is to find all intervals in the tree that overlap a given point formula_13. The tree is walked with a similar recursive algorithm as would be used to traverse a traditional binary tree, but with extra logic to support searching the intervals overlapping the "center" point at each node.
For each tree node, formula_13 is compared to formula_14, the midpoint used in node construction above. If formula_13 is less than formula_14, the leftmost set of intervals, formula_10, is considered. If formula_13 is greater than formula_14, the rightmost set of intervals, formula_11, is considered.
As each node is processed as we traverse the tree from the root to a leaf, the ranges in its formula_15 are processed. If formula_13 is less than formula_14, we know that all intervals in formula_15 end after formula_13, or they could not also overlap formula_14. Therefore, we need only find those intervals in formula_15 that begin before formula_13. We can consult the lists of formula_15 that have already been constructed. Since we only care about the interval beginnings in this scenario, we can consult the list sorted by beginnings. Suppose we find the closest number no greater than formula_13 in this list. All ranges from the beginning of the list to that found point overlap formula_13 because they begin before formula_13 and end after formula_13 (as we know because they overlap formula_14 which is larger than formula_13). Thus, we can simply start enumerating intervals in the list until the startpoint value exceeds formula_13.
Likewise, if formula_13 is greater than formula_14, we know that all intervals in formula_15 must begin before formula_13, so we find those intervals that end after formula_13 using the list sorted by interval endings.
If formula_13 exactly matches formula_14, all intervals in formula_15 can be added to the results without further processing and tree traversal can be stopped.
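The enumeration step at a single node, for the case where the query point lies left of the center, can be sketched as follows (illustrative names; intervals as double[]{start, end}, which is an assumption of this sketch):

```java
import java.util.ArrayList;
import java.util.List;

// At a node whose center is greater than x, every interval in S_center ends
// after x, so it suffices to report intervals whose start does not exceed x.
public class CenterStab {
    public static List<double[]> stabLeftOfCenter(List<double[]> byStart, double x) {
        List<double[]> result = new ArrayList<>();
        for (double[] iv : byStart) {
            if (iv[0] > x) break; // list is sorted: all later intervals also begin after x
            result.add(iv);       // begins at or before x; ends after x since it overlaps the center
        }
        return result;
    }
}
```

The symmetric case (query point right of the center) scans the end-sorted list from the other direction.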
With an interval.
For a result interval formula_16 to intersect our query interval formula_17, one of the following must hold: the start and/or end point of formula_16 lies inside formula_17; or formula_16 completely encloses formula_17.
We first find all intervals with start and/or end points inside formula_17 using a separately-constructed tree. In the one-dimensional case, we can use a search tree containing all the start and end points in the interval set, each with a pointer to its corresponding interval. A binary search in formula_5 time for the start and end of formula_17 reveals the minimum and maximum points to consider. Each point within this range references an interval that overlaps formula_17 and is added to the result list. Care must be taken to avoid duplicates, since an interval might both begin and end within formula_17. This can be done using a binary flag on each interval to mark whether or not it has been added to the result set.
Finally, we must find intervals that enclose formula_17. To find these, we pick any point inside formula_17 and use the algorithm above to find all intervals intersecting that point (again, being careful to remove duplicates).
Higher dimensions.
The interval tree data structure can be generalized to a higher dimension formula_18 with identical query and construction time and formula_4 space.
First, a range tree in formula_18 dimensions is constructed that allows efficient retrieval of all intervals with beginning and end points inside the query region formula_19. Once the corresponding ranges are found, the only thing that is left are those ranges that enclose the region in some dimension. To find these overlaps, formula_18 interval trees are created, and one axis intersecting formula_19 is queried for each. For example, in two dimensions, the bottom of the square formula_19 (or any other horizontal line intersecting formula_19) would be queried against the interval tree constructed for the horizontal axis. Likewise, the left (or any other vertical line intersecting formula_19) would be queried against the interval tree constructed on the vertical axis.
Each interval tree also needs an addition for higher dimensions. At each node we traverse in the tree, formula_13 is compared with formula_15 to find overlaps. Instead of two sorted lists of points as was used in the one-dimensional case, a range tree is constructed. This allows efficient retrieval of all points in formula_15 that overlap region formula_19.
Deletion.
If after deleting an interval from the tree, the node containing that interval contains no more intervals, that node may be deleted from the tree. This is more complex than a normal binary tree deletion operation.
An interval may overlap the center point of several nodes in the tree. Since each node stores the intervals that overlap it, with all intervals completely to the left of its center point in the left subtree, similarly for the right subtree, it follows that each interval is stored in the node closest to the root from the set of nodes whose center point it overlaps.
Normal deletion operations in a binary tree (for the case where the node being deleted has two children) involve promoting a node further from the leaf to the position of the node being deleted (usually the leftmost child of the right subtree, or the rightmost child of the left subtree).
As a result of this promotion, some nodes that were above the promoted node will become its descendants; it is necessary to search these nodes for intervals that also overlap the promoted node, and move those intervals into the promoted node. As a consequence, this may result in new empty nodes, which must be deleted, following the same algorithm again.
Balancing.
The same issues that affect deletion also affect rotation operations; rotation must preserve the invariant that nodes are stored as close to the root as possible.
Augmented tree.
Another way to represent intervals is described in .
Both insertion and deletion require formula_5 time, with formula_1 being the total number of intervals in the tree prior to the insertion or deletion operation.
An augmented tree can be built from a simple ordered tree, for example a binary search tree or self-balancing binary search tree, ordered by the 'low' values of the intervals. An extra annotation is then added to every node, recording the maximum upper value among all the intervals from this node down. Maintaining this attribute involves updating all ancestors of the node from the bottom up whenever a node is added or deleted. This takes only O("h") steps per node addition or removal, where "h" is the height of the node added or removed in the tree. If there are any tree rotations during insertion and deletion, the affected nodes may need updating as well.
Now, it is known that two intervals formula_20 and formula_21 overlap only when both formula_22 and formula_23. When searching the trees for nodes overlapping with a given interval, you can immediately skip: all nodes to the right of nodes whose low value is past the end of the given interval; and all nodes that have their maximum high value below the start of the given interval.
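These two pruning rules can be sketched as follows (illustrative, unbalanced sketch; the class and field names are assumptions, and intervals are closed ranges of doubles):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of an augmented tree: a BST ordered by low endpoints, each node
// annotated with the maximum high value found in its subtree.
public class AugNode {
    double low, high; // the stored interval [low, high]
    double maxEnd;    // maximum high value in this subtree
    AugNode left, right;

    public static AugNode insert(AugNode n, double lo, double hi) {
        if (n == null) {
            AugNode m = new AugNode();
            m.low = lo; m.high = hi; m.maxEnd = hi;
            return m;
        }
        if (lo < n.low) n.left = insert(n.left, lo, hi);
        else n.right = insert(n.right, lo, hi);
        n.maxEnd = Math.max(n.maxEnd, hi); // keep the annotation consistent
        return n;
    }

    // Report every stored interval overlapping [qlo, qhi].
    public static void search(AugNode n, double qlo, double qhi, List<double[]> out) {
        if (n == null || n.maxEnd < qlo) return; // whole subtree ends before the query starts
        search(n.left, qlo, qhi, out);
        if (n.low <= qhi && n.high >= qlo)       // the overlap condition from above
            out.add(new double[]{n.low, n.high});
        if (n.low <= qhi)                        // otherwise every right descendant starts too late
            search(n.right, qlo, qhi, out);
    }
}
```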
Membership queries.
Some performance may be gained if the tree avoids unnecessary traversals. These can occur when adding intervals that already exist or removing intervals that don't exist.
A total order can be defined on the intervals by ordering them first by their lower bounds and then by their upper bounds. Then, a membership check can be performed in formula_5 time, versus the formula_24 time required to find duplicates if formula_25 intervals overlap the interval to be inserted or removed. This solution has the advantage of not requiring any additional structures. The change is strictly algorithmic. The disadvantage is that membership queries take formula_5 time.
Alternately, at the cost of formula_0 memory, membership queries in expected constant time can be implemented with a hash table, updated in lockstep with the interval tree. This may not necessarily double the total memory requirement, if the intervals are stored by reference rather than by value.
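The total order described above can be expressed with a comparator (an illustrative sketch; intervals as double[]{low, high} is an assumption of these examples):

```java
import java.util.Arrays;
import java.util.Comparator;

// Total order on intervals: first by lower bound, then by upper bound.
public class IntervalOrder {
    public static final Comparator<double[]> BY_LOW_THEN_HIGH =
        Comparator.<double[]>comparingDouble(iv -> iv[0]).thenComparingDouble(iv -> iv[1]);

    public static void main(String[] args) {
        double[][] ivs = { {2, 5}, {1, 9}, {2, 3} };
        Arrays.sort(ivs, BY_LOW_THEN_HIGH); // {1,9}, {2,3}, {2,5}
    }
}
```

With this order, a membership check is an ordinary binary-search-tree lookup.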
Java example: Adding a new interval to the tree.
The key of each node is the interval itself, hence nodes are ordered first by low value and then by high value, and the value of each node is the end point of the interval:
public void add(Interval i) {
put(i, i.getEnd());
}
Java example: Searching a point or an interval in the tree.
To search for an interval, one walks the tree, using the key (codice_0) and high value (codice_1) to omit any branches that cannot overlap the query. The simplest case is a point query:
// Search for all intervals containing "p", starting with the
// node "n" and adding matching intervals to the list "result"
public void search(IntervalNode n, Point p, List<Interval> result) {
// Don't search nodes that don't exist
if (n == null)
return;
// If p is to the right of the rightmost point of any interval
// in this node and all children, there won't be any matches.
if (p.compareTo(n.getValue()) > 0)
return;
// Search left children
search(n.getLeft(), p, result);
// Check this node
if (n.getKey().contains(p))
result.add(n.getKey());
// If p is to the left of the start of this interval,
// then it can't be in any child to the right.
if (p.compareTo(n.getKey().getStart()) < 0)
return;
// Otherwise, search right children
search(n.getRight(), p, result);
}
where
codice_2 returns a negative value if a < b
codice_2 returns zero if a = b
codice_2 returns a positive value if a > b
The code to search for an interval is similar, except for the check in the middle:
// Check this node
if (n.getKey().overlapsWith(i))
result.add (n.getKey());
codice_5 is defined as:
public boolean overlapsWith(Interval other) {
return start.compareTo(other.getEnd()) <= 0 &&
end.compareTo(other.getStart()) >= 0;
}
Higher dimensions.
Augmented trees can be extended to higher dimensions by cycling through the dimensions at each level of the tree. For example, for two dimensions, the odd levels of the tree might contain ranges for the "x"-coordinate, while the even levels contain ranges for the "y"-coordinate. This approach effectively converts the data structure from an augmented binary tree to an augmented kd-tree, thus significantly complicating the balancing algorithms for insertions and deletions.
A simpler solution is to use nested interval trees. First, create a tree using the ranges for the "y"-coordinate. Now, for each node in the tree, add another interval tree on the "x"-ranges, for all elements whose "y"-range is the same as that node's "y"-range.
The advantage of this solution is that it can be extended to an arbitrary number of dimensions using the same code base.
At first, the additional cost of the nested trees might seem prohibitive, but this is usually not so. As with the non-nested solution earlier, one node is needed per "x"-coordinate, yielding the same number of nodes for both solutions. The only additional overhead is that of the nested tree structures, one per vertical interval. This structure is usually of negligible size, consisting only of a pointer to the root node, and possibly the number of nodes and the depth of the tree.
Medial- or length-oriented tree.
A medial- or length-oriented tree is similar to an augmented tree, but symmetrical, with the binary search tree ordered by the medial points of the intervals. There is a maximum-oriented binary heap in every node, ordered by the length of the interval (or half of the length). Also we store the minimum and maximum possible value of the subtree in each node (thus the symmetry).
Overlap test.
Using only start and end values of two intervals formula_26, for formula_27, the overlap test can be performed as follows:
formula_28 and formula_29
This can be simplified using the sum and difference:
formula_30
formula_31
This reduces the overlap test to:
formula_32
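The two forms of the test are equivalent; a small sketch showing both side by side (the class and method names are illustrative):

```java
// The direct test (a0 < b1 and a1 < b0) and the sum/difference form
// |s1 - s0| < d0 + d1 are algebraically equivalent.
public class OverlapTest {
    public static boolean overlapDirect(double a0, double b0, double a1, double b1) {
        return a0 < b1 && a1 < b0;
    }

    public static boolean overlapMedial(double a0, double b0, double a1, double b1) {
        double s0 = a0 + b0, d0 = b0 - a0; // sum and length of interval 0
        double s1 = a1 + b1, d1 = b1 - a1; // sum and length of interval 1
        return Math.abs(s1 - s0) < d0 + d1;
    }
}
```

The medial form is convenient here because the tree is keyed on interval midpoints and the heap on interval lengths.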
Adding interval.
Adding new intervals to the tree is the same as for a binary search tree using the medial value as the key. We push formula_33 onto the binary heap associated with the node, and update the minimum and maximum possible values associated with all higher nodes.
Searching for all overlapping intervals.
Let's use formula_34 for the query interval, and formula_35 for the key of a node (compared to formula_36 of intervals).
Starting with root node, in each node, first we check if it is possible that our query interval overlaps with the node subtree using minimum and maximum values of node (if it is not, we don't continue for this node).
Then we calculate formula_37 for intervals inside this node (not its children) to overlap with query interval (knowing formula_38):
formula_39
and perform a query on its binary heap for the formula_33's bigger than formula_37.
Then we pass through both left and right children of the node, doing the same thing.
In the worst case, we have to scan all nodes of the binary search tree, but since the binary heap query is optimal, this is acceptable (a 2-dimensional problem cannot be optimal in both dimensions).
This algorithm is expected to be faster than a traditional interval tree (augmented tree) for search operations. Adding elements is a little slower in practice, though the order of growth is the same.
|
[
{
"math_id": 0,
"text": "O(n)"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "O(\\log n + m)"
},
{
"math_id": 4,
"text": "O(n \\log n)"
},
{
"math_id": 5,
"text": "O(\\log n)"
},
{
"math_id": 6,
"text": "[1, \\ldots, O(n)]"
},
{
"math_id": 7,
"text": "O(1+ m)"
},
{
"math_id": 8,
"text": "O( n + \\log n) = O( n)"
},
{
"math_id": 9,
"text": "x_{\\textrm{center}}"
},
{
"math_id": 10,
"text": "S_{\\textrm{left}}"
},
{
"math_id": 11,
"text": "S_{\\textrm{right}}"
},
{
"math_id": 12,
"text": "S_{\\textrm{center}}"
},
{
"math_id": 13,
"text": "x"
},
{
"math_id": 14,
"text": "x_{\\textrm {center}}"
},
{
"math_id": 15,
"text": "S_{\\textrm {center}}"
},
{
"math_id": 16,
"text": "r"
},
{
"math_id": 17,
"text": "q"
},
{
"math_id": 18,
"text": "N"
},
{
"math_id": 19,
"text": "R"
},
{
"math_id": 20,
"text": "A"
},
{
"math_id": 21,
"text": "B"
},
{
"math_id": 22,
"text": "A_{\\textrm{low}} \\leq B_{\\textrm{high}}"
},
{
"math_id": 23,
"text": "A_{\\textrm{high}} \\geq B_{\\textrm{low}}"
},
{
"math_id": 24,
"text": "O(k + \\log n)"
},
{
"math_id": 25,
"text": "k"
},
{
"math_id": 26,
"text": "\\left( a_{i}, b_i \\right)"
},
{
"math_id": 27,
"text": "i=0,1"
},
{
"math_id": 28,
"text": "a_0 < b_1"
},
{
"math_id": 29,
"text": "a_1 < b_0"
},
{
"math_id": 30,
"text": "s_i = a_i + b_i"
},
{
"math_id": 31,
"text": "d_i = b_i - a_i"
},
{
"math_id": 32,
"text": "\\left| s_1 - s_0 \\right| < d_0 + d_1"
},
{
"math_id": 33,
"text": "d_i"
},
{
"math_id": 34,
"text": "a_q, b_q, m_q, d_q"
},
{
"math_id": 35,
"text": "M_n"
},
{
"math_id": 36,
"text": "m_i"
},
{
"math_id": 37,
"text": "\\min \\left\\{ d_i \\right\\}"
},
{
"math_id": 38,
"text": "m_i = M_n"
},
{
"math_id": 39,
"text": "\\min \\left\\{ d_i \\right\\} = \\left| m_q - M_n \\right| - d_q"
}
] |
https://en.wikipedia.org/wiki?curid=1533767
|
153390
|
Phillips curve
|
Economic model relating wages to unemployment
The Phillips curve is an economic model, named after Bill Phillips, that correlates reduced unemployment with increasing wages in an economy. While Phillips did not directly link employment and inflation, this was a trivial deduction from his statistical findings. Paul Samuelson and Robert Solow made the connection explicit and subsequently Milton Friedman and Edmund Phelps put the theoretical structure in place.
While there is a short-run tradeoff between unemployment and inflation, it has not been observed in the long run. In 1967 and 1968, Friedman and Phelps asserted that the Phillips curve was only applicable in the short run and that, in the long run, inflationary policies would not decrease unemployment. Friedman correctly predicted the stagflation of the 1970s.
In the 2010s the slope of the Phillips curve appears to have declined and there has been controversy over the usefulness of the Phillips curve in predicting inflation. A 2022 study found that the slope of the Phillips curve is small and was small even during the early 1980s. Nonetheless, the Phillips curve is still used by central banks in understanding and forecasting inflation.
History.
Bill Phillips, a New Zealand born economist, wrote a paper in 1958 titled "The Relation between Unemployment and the Rate of Change of Money Wage Rates in the United Kingdom, 1861–1957", which was published in the quarterly journal "Economica". In the paper Phillips describes how he observed an inverse relationship between money wage changes and unemployment in the British economy over the period examined. Similar patterns were found in other countries and in 1960 Paul Samuelson and Robert Solow took Phillips' work and made explicit the link between inflation and unemployment: when inflation was high, unemployment was low, and vice versa.
In the 1920s, the American economist Irving Fisher had noted this relationship between unemployment and prices. However, Phillips' original curve described the behavior of money wages.
In the years following Phillips' 1958 paper, many economists in advanced industrial countries believed that his results showed a permanently stable relationship between inflation and unemployment. One implication of this was that governments could control unemployment and inflation with a Keynesian policy. They could tolerate a reasonably high inflation as this would lead to lower unemployment – there would be a trade-off between inflation and unemployment. For example, monetary policy and/or fiscal policy could be used to stimulate the economy, raising gross domestic product and lowering the unemployment rate. Moving along the Phillips curve, this would lead to a higher inflation rate, the cost of enjoying lower unemployment rates. Economist James Forder disputes this history and argues that it is a 'Phillips curve myth' invented in the 1970s.
Since 1974, seven Nobel Prizes have been given to economists for, among other things, work critical of some variations of the Phillips curve. Some of this criticism is based on the United States' experience during the 1970s, which had periods of high unemployment and high inflation at the same time. The authors receiving those prizes include Thomas Sargent, Christopher Sims, Edmund Phelps, Edward Prescott, Robert A. Mundell, Robert E. Lucas, Milton Friedman, and F.A. Hayek.
Stagflation.
In the 1970s, many countries experienced high levels of both inflation and unemployment also known as stagflation. Theories based on the Phillips curve suggested that this would not occur, and the curve came under attack from a group of economists headed by Milton Friedman. Friedman argued that the Phillips curve relationship was only a short-run phenomenon. This followed eight years after Samuelson and Solow [1960] wrote "All of our discussion has been phrased in short-run terms, dealing with what might happen in the next few years. It would be wrong, though, to think that our Figure 2 menu that related obtainable price and unemployment behavior will maintain its same shape in the longer run. What we do in a policy way during the next few years might cause it to shift in a definite way." As Samuelson and Solow had argued 8 years earlier, Friedman said that in the long run, workers and employers will take inflation into account, resulting in employment contracts that increase pay at rates near anticipated inflation. Unemployment would then begin to rise back to its previous level, but with higher inflation. This implies that over the longer-run there is no trade-off between inflation and unemployment. This is significant because it implies that central banks should not set unemployment targets below the natural rate.
More recent research suggests that there is a moderate trade-off between low-levels of inflation and unemployment. Work by George Akerlof, William Dickens, and George Perry, implies that if inflation is reduced from two to zero percent, unemployment will be permanently increased by 1.5 percent because workers have a higher tolerance for real wage cuts than nominal ones. For example, a worker will more likely accept a wage increase of two percent when inflation is three percent, than a wage cut of one percent when the inflation rate is zero.
Modern application.
Most economists no longer use the Phillips curve in its original form because it was too simplistic. A cursory analysis of US inflation and unemployment data from 1953 to 1992 shows no single curve will fit the data, but there are three rough aggregations—1955–71, 1974–84, and 1985–92—each of which shows a general, downwards slope, but at three very different levels with the shifts occurring abruptly. The data for 1953–54 and 1972–73 do not group easily, and a more formal analysis posits up to five groups/curves over the period.
However, modified forms of the Phillips curve that take inflationary expectations into account remain influential. The theory has several names, with some variation in its details, but all modern versions distinguish between short-run and long-run effects on unemployment. Modern Phillips curve models include both a short-run Phillips curve and a long-run Phillips curve. This is because in the short run, there is generally an inverse relationship between inflation and the unemployment rate, as illustrated by the downward-sloping short-run Phillips curve. In the long run, that relationship breaks down and the economy eventually returns to the natural rate of unemployment regardless of the inflation rate.
The "short-run Phillips curve" is also called the "expectations-augmented Phillips curve", since it shifts up when inflationary expectations rise, Edmund Phelps and Milton Friedman argued. In the long run, this implies that monetary policy cannot affect unemployment, which adjusts back to its "natural rate", also called the "NAIRU". The popular textbook of Blanchard gives a textbook presentation of the expectations-augmented Phillips curve.
An equation like the expectations-augmented Phillips curve also appears in many recent New Keynesian dynamic stochastic general equilibrium models. As Keynes mentioned: "A Government has to remember, however, that even if a tax is not prohibited it may be unprofitable, and that a medium, rather than an extreme, imposition will yield the greatest gain". In these macroeconomic models with sticky prices, there is a positive relation between the rate of inflation and the level of demand, and therefore a negative relation between the rate of inflation and the rate of unemployment. This relationship is often called the "New Keynesian Phillips curve". Like the expectations-augmented Phillips curve, the New Keynesian Phillips curve implies that increased inflation can lower unemployment temporarily, but cannot lower it permanently. Two influential papers that incorporate a New Keynesian Phillips curve are Clarida, Galí, and Gertler (1999), and Blanchard and Galí (2007).
Mathematics.
There are at least two different mathematical derivations of the Phillips curve. First, there is the traditional or Keynesian version. Then, there is the new Classical version associated with Robert E. Lucas Jr.
The traditional Phillips curve.
The original Phillips curve literature was not based on the unaided application of economic theory. Instead, it was based on empirical generalizations. After that, economists tried to develop theories that fit the data.
Money wage determination.
The traditional Phillips curve story starts with a wage Phillips Curve, of the sort described by Phillips himself. This describes the rate of growth of money wages ("gW"). Here and below, the operator "g" is the equivalent of "the percentage rate of growth of" the variable that follows.
formula_0
The "money wage rate" ("W") is shorthand for total money wage costs per production employee, including benefits and payroll taxes. The focus is on only production workers' money wages, because (as discussed below) these costs are crucial to pricing decisions by the firms.
This equation tells us that the growth of money wages rises with the trend rate of growth of money wages (indicated by the superscript "T") and falls with the unemployment rate ("U"). The function "f" is assumed to be monotonically increasing with "U" so that the dampening of money-wage increases by unemployment is shown by the negative sign in the equation above.
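As a numerical sketch (not from the source; the linear form of "f" and all parameter values are hypothetical), the wage equation can be illustrated as follows:

```python
# Wage Phillips curve gW = gW^T - f(U), with a hypothetical linear
# f(U) = 0.5 * U; all numbers are purely illustrative.
def money_wage_growth(trend, unemployment, f=lambda u: 0.5 * u):
    """Growth of money wages: rises with the trend, falls with unemployment."""
    return trend - f(unemployment)

low_u = money_wage_growth(trend=0.03, unemployment=0.04)   # 0.03 - 0.02
high_u = money_wage_growth(trend=0.03, unemployment=0.08)  # 0.03 - 0.04
```

Raising unemployment from 4% to 8% turns money-wage growth from +1% into -1%, which is the dampening that the negative sign in the equation expresses.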
There are several possible stories behind this equation. A major one is that money wages are set by "bilateral negotiations" under partial bilateral monopoly: as the unemployment rate rises, "all else constant" worker bargaining power falls, so that workers are less able to increase their wages in the face of employer resistance.
During the 1970s, this story had to be modified, because (as the late Abba Lerner had suggested in the 1940s) workers try to keep up with inflation. Since the 1970s, the equation has been changed to introduce the role of inflationary expectations (or the expected inflation rate, "gP"ex). This produces the expectations-augmented wage Phillips curve:
formula_1
The introduction of inflationary expectations into the equation implies that actual inflation can "feed back" into inflationary expectations and thus cause further inflation. The late economist James Tobin dubbed the last term "inflationary inertia", because current-period inflation represents an inflationary impulse left over from the past.
It also involved much more than expectations, including the price-wage spiral. In this spiral, employers try to protect profits by raising their prices and employees try to keep up with inflation to protect their real wages. This process can feed on itself, becoming a self-fulfilling prophecy.
The parameter λ (which is presumed constant during any time period) represents the degree to which employees can gain money wage increases to keep up with expected inflation, preventing a fall in expected real wages. It is usually assumed that this parameter equals 1 in the long run.
In addition, the function f() was modified to introduce the idea of the non-accelerating inflation rate of unemployment (NAIRU) or what's sometimes called the "natural" rate of unemployment or the
inflation-threshold unemployment rate:
gW = gWT − f(U − U*) + λ·gPex.     (1)
Here, "U*" is the NAIRU. As discussed below, if "U" < "U"*, inflation tends to accelerate. Similarly, if "U" > "U"*, inflation tends to slow. It is assumed that "f"(0) = 0, so that when "U" = "U"*, the "f" term drops out of the equation.
In equation (1), the roles of gWT and gPex seem to be redundant, playing much the same role. However, assuming that λ is equal to unity, it can be seen that they are not. If the trend rate of growth of money wages equals zero, then the case where U equals U* implies that gW equals expected inflation. That is, expected real wages are constant.
In any reasonable economy, however, having constant expected real wages could only be consistent with actual real wages that are constant over the long haul. This does not fit with economic experience in the U.S. or any other major industrial country. Even though real wages have not risen much in recent years, there have been important increases over the decades.
An alternative is to assume that the trend rate of growth of money wages equals the trend rate of growth of average labor productivity (Z). That is:
gWT = gZT.     (2)
Under assumption (2), when U equals U* and λ equals unity, expected real wages would increase with labor productivity. This would be consistent with an economy in which actual real wages increase with labor productivity. Deviations of real-wage trends from those of labor productivity might be explained by reference to other variables in the model.
Pricing decisions.
Next, there is price behavior. The standard assumption is that markets are "imperfectly competitive", where most businesses have some power to set prices. So the model assumes that the average business sets a unit price (P) as a mark-up (M) over the unit labor cost in production measured at a standard rate of capacity utilization (say, at 90 percent use of plant and equipment) and then adds in the unit materials cost.
The standardization involves later ignoring deviations from the trend in labor productivity. For example, assume that the growth of labor productivity is the same as that in the trend and that current productivity equals its trend value:
gZ = gZT and Z = ZT.
The markup reflects both the firm's degree of market power and the extent to which overhead costs have to be paid. Put another way, all else equal, M rises with the firm's power to set prices or with a rise of overhead costs relative to total costs.
So pricing follows this equation:
P = M × (unit labor cost) + (unit materials cost)
= M × (total production employment cost)/(quantity of output) + UMC.
UMC is unit raw materials cost (total raw materials costs divided by total output). So the equation can be restated as:
P = M × (production employment cost per worker)/(output per production employee) + UMC.
This equation can again be stated as:
P = M×(average money wage)/(production labor productivity) + UMC
= M×(W/Z) + UMC.
Now, assume that both the average price/cost mark-up (M) and UMC are constant. On the other hand, labor productivity grows, as before. Thus, an equation determining the price inflation rate (gP) is:
gP = gW − gZT.
Price.
Then, combined with the wage Phillips curve [equation 1] and the assumption made above about the trend behavior of money wages [equation 2], this price-inflation equation gives us a simple expectations-augmented price Phillips curve:
gP = −f(U − U*) + λ·gPex.
Some assume that we can simply add in gUMC, the rate of growth of UMC, in order to represent the role of supply shocks (of the sort that plagued the U.S. during the 1970s). This produces a standard short-term Phillips curve:
gP = −f(U − U*) + λ·gPex + gUMC.
Economist Robert J. Gordon has called this the "Triangle Model" because it explains short-run inflationary behavior by three factors: demand inflation (due to low unemployment), supply-shock inflation (gUMC), and inflationary expectations or inertial inflation.
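A sketch of the three components (illustrative numbers and a hypothetical linear "f", not taken from the source):

```python
# Gordon's "triangle": short-run inflation as the sum of demand inflation,
# inertial/expected inflation, and supply-shock inflation.
def triangle_inflation(u, u_star, expected, supply_shock,
                       lam=1.0, f=lambda gap: 2.0 * gap):
    demand = -f(u - u_star)          # demand inflation (low unemployment)
    inertia = lam * expected         # inertial / expected inflation
    return demand + inertia + supply_shock

# 1970s-style stagflation sketch: high unemployment *and* high inflation,
# driven by a supply shock plus inertia rather than by excess demand.
gp = triangle_inflation(u=0.08, u_star=0.06, expected=0.06, supply_shock=0.04)
# demand term = -0.04, inertia = 0.06, shock = 0.04, so gp = 0.06
```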
In the "long run", it is assumed, inflationary expectations catch up with and equal actual inflation so that gP = gPex. This represents the long-term equilibrium of expectations adjustment. Part of this adjustment may involve the adaptation of expectations to the experience with actual inflation. Another might involve guesses made by people in the economy based on other evidence. (The latter idea gave us the notion of so-called rational expectations.)
Expectational equilibrium gives us the long-term Phillips curve. First, with λ less than unity:
gP = [1/(1 − λ)]·(−f(U − U*) + gUMC).
This is nothing but a steeper version of the short-run Phillips curve above. Inflation rises as unemployment falls, and the connection is stronger. That is, a low unemployment rate (less than U*) will be associated with a higher inflation rate in the long run than in the short run. This occurs because the actual higher-inflation situation seen in the short run feeds back to raise inflationary expectations, which in turn raises the inflation rate further. Similarly, high unemployment rates (greater than U*) lead to low inflation rates. These in turn encourage lower inflationary expectations, so that inflation itself drops again.
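The steepening by the factor 1/(1 − λ) can be checked directly (hypothetical parameters; gUMC set to zero for simplicity):

```python
# With lambda < 1, expectational equilibrium (gP = gPex) scales the
# short-run trade-off by 1/(1 - lambda), so the long-run curve is steeper.
def short_run(u, u_star, expected, lam, f=lambda gap: 2.0 * gap):
    return -f(u - u_star) + lam * expected

def long_run(u, u_star, lam, f=lambda gap: 2.0 * gap):
    return (1.0 / (1.0 - lam)) * (-f(u - u_star))

lam = 0.5
sr = short_run(u=0.04, u_star=0.05, expected=0.0, lam=lam)  # 0.02
lr = long_run(u=0.04, u_star=0.05, lam=lam)                 # 0.04
```

The same unemployment gap yields twice the inflation in the long run, and the long-run value is exactly the fixed point at which expected and actual inflation coincide.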
This logic goes further if λ is equal to unity, i.e., if workers are able to protect their wages "completely" from expected inflation, even in the short run. Now, the Triangle Model equation becomes:
- f(U − U*) = gUMC.
If we further assume (as seems reasonable) that there are no long-term supply shocks, this can be simplified to become:
−f(U − U*) = 0 which implies that U = U*.
All of the assumptions imply that in the long run, there is only one possible unemployment rate, U* at any one time. This uniqueness explains why some call this unemployment rate "natural".
To truly understand and criticize the uniqueness of U*, a more sophisticated and realistic model is needed. For example, we might introduce the idea that workers in different sectors push for money wage increases that are similar to those in other sectors. Or we might make the model even more realistic. One important place to look is at the determination of the mark-up, M.
New classical version.
The Phillips curve equation can be derived from the (short-run) Lucas aggregate supply function. The Lucas approach is very different from that of the traditional view. Instead of starting with empirical data, he started with a classical economic model following very simple economic principles.
Start with the aggregate supply function:
formula_2
where Y is log value of the actual output, formula_3 is log value of the "natural" level of output, formula_4 is a positive constant, formula_5 is log value of the actual price level, and formula_6 is log value of the expected price level. Lucas assumes that formula_3 has a unique value.
Note that this equation indicates that when expectations of future inflation (or, more correctly, the future price level) are "totally accurate", the last term drops out, so that actual output equals the so-called "natural" level of real GDP. This means that in the Lucas aggregate supply curve, the "only" reason why actual real GDP should deviate from potential—and the actual unemployment rate should deviate from the "natural" rate—is because of "incorrect expectations" of what is going to happen with prices in the future. (The idea was first expressed by Keynes, "General Theory", Chapter 20 section III paragraph 4.)
This differs from other views of the Phillips curve, in which the failure to attain the "natural" level of output can be due to the imperfection or incompleteness of markets, the stickiness of prices, and the like. In the non-Lucas view, incorrect expectations can contribute to aggregate demand failure, but they are not the only cause. To the "new Classical" followers of Lucas, markets are presumed to be perfect and always attain equilibrium (given inflationary expectations).
We re-arrange the equation into:
formula_7
Next we add unexpected exogenous shocks to the world supply formula_8:
formula_9
Subtracting last year's price levels formula_10 will give us inflation rates, because
formula_11
and
formula_12
where formula_13 and formula_14 are the inflation and expected inflation respectively.
There is also a negative relationship between output and unemployment (as expressed by Okun's law). Therefore, using
formula_15
where formula_16 is a positive constant, formula_17 is unemployment, and formula_18 is the natural rate of unemployment or NAIRU, we arrive at the final form of the short-run Phillips curve:
formula_19
This equation, plotting inflation rate formula_13 against unemployment formula_17 gives the downward-sloping curve in the diagram that characterizes the Phillips curve.
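Rearranging the final equation for U (a sketch with assumed parameter values) shows the new classical point directly: unemployment deviates from its natural rate only through inflation surprises and shocks.

```python
# New classical short-run Phillips curve: pi = pi_e - b*(U - U_n) + v.
# Solving for U: unemployment differs from U_n only when inflation
# surprises (pi - pi_e) or shocks v occur. Parameters are illustrative.
def unemployment(pi, pi_e, u_n=0.05, b=2.0, v=0.0):
    return u_n - (pi - pi_e - v) / b

u_surprise = unemployment(pi=0.06, pi_e=0.02)  # surprise inflation: U < U_n
u_foreseen = unemployment(pi=0.06, pi_e=0.06)  # fully anticipated: U = U_n
```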
New Keynesian version.
The New Keynesian Phillips curve was originally derived by Roberts in 1995, and has since been used in most state-of-the-art New Keynesian DSGE models, such as that of Clarida, Galí, and Gertler (2000).
formula_20
where
formula_21
The current expectations of next period's inflation are incorporated as formula_22.
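Iterating the equation forward (a sketch under perfect foresight and a finite horizon of nonzero output gaps; the numbers are illustrative) shows that current inflation is the discounted sum of current and expected future output gaps:

```python
# NKPC pi_t = beta * E[pi_{t+1}] + kappa * y_t, solved two ways:
# as a discounted forward sum, and by backward recursion from a
# terminal condition of zero inflation once all gaps are zero.
def nkpc_forward(output_gaps, beta=0.99, kappa=0.1):
    return sum(kappa * (beta ** j) * y for j, y in enumerate(output_gaps))

def nkpc_recursive(output_gaps, beta=0.99, kappa=0.1):
    pi = 0.0                      # terminal condition: gaps are zero afterwards
    for y in reversed(output_gaps):
        pi = beta * pi + kappa * y
    return pi

gaps = [0.01, 0.005, 0.0, -0.002]
assert abs(nkpc_forward(gaps) - nkpc_recursive(gaps)) < 1e-12
```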
NAIRU and rational expectations.
In the 1970s, new theories, such as rational expectations and the NAIRU (non-accelerating inflation rate of unemployment) arose to explain how stagflation could occur. The latter theory, also known as the "natural rate of unemployment", distinguished between the "short-term" Phillips curve and the "long-term" one. The short-term Phillips Curve looked like a normal Phillips Curve but shifted in the long run as expectations changed. In the long run, only a single rate of unemployment (the NAIRU or "natural" rate) was consistent with a stable inflation rate. The long-run Phillips Curve was thus vertical, so there was no trade-off between inflation and unemployment. Milton Friedman in 1976 and Edmund Phelps in 2006 won the Nobel Prize in Economics in part for this work.
However, the expectations argument was in fact very widely understood (albeit not formally) before Friedman's and Phelps's work on it.
In the diagram, the long-run Phillips curve is the vertical red line. The NAIRU theory says that when unemployment is at the rate defined by this line, inflation will be stable. However, in the short run, policymakers will face an inflation-unemployment rate trade-off marked by the "Initial Short-Run Phillips Curve" in the graph. Policymakers can, therefore, reduce the unemployment rate temporarily, moving from point A to point B through expansionary policy. However, according to the NAIRU, exploiting this short-run trade-off will raise inflation expectations, shifting the short-run curve rightward to the "new short-run Phillips curve" and moving the point of equilibrium from B to C. Thus the reduction in unemployment below the "Natural Rate" will be temporary, and lead only to higher inflation in the long run.
Since the short-run curve shifts outward due to the attempt to reduce unemployment, the expansionary policy ultimately worsens the exploitable trade-off between unemployment and inflation. That is, it results in more inflation at each short-run unemployment rate. The name "NAIRU" arises because with actual unemployment below it, inflation accelerates, while with unemployment above it, inflation decelerates. With the actual rate equal to it, inflation is stable, neither accelerating nor decelerating. One practical use of this model was to explain stagflation, which confounded the traditional Phillips curve.
The rational expectations theory said that expectations of inflation were equal to what actually happened, with some minor and temporary errors. This, in turn, suggested that the short-run period was so short that it was non-existent: any effort to reduce unemployment below the NAIRU, for example, would "immediately" cause inflationary expectations to rise and thus imply that the policy would fail. Unemployment would never deviate from the NAIRU except due to random and transitory mistakes in developing expectations about future inflation rates. In this perspective, any deviation of the actual unemployment rate from the NAIRU was an illusion.
However, in the 1990s in the U.S., it became increasingly clear that the NAIRU did not have a unique equilibrium and could change in unpredictable ways. In the late 1990s, the actual unemployment rate fell below 4% of the labor force, much lower than almost all estimates of the NAIRU. But inflation stayed very moderate rather than accelerating. So, just as the Phillips curve had become a subject of debate, so did the NAIRU.
Furthermore, the concept of rational expectations had become subject to much doubt when it became clear that the main assumption of models based on it was that there exists a single (unique) equilibrium in the economy that is set ahead of time, determined independently of demand conditions. The experience of the 1990s suggests that this assumption cannot be sustained.
Theoretical questions.
To Milton Friedman there is a short-term correlation between inflation shocks and employment. When an inflationary surprise occurs, workers are fooled into accepting lower pay because they do not see the fall in real wages right away. Firms hire them because they see the inflation as allowing higher profits for given nominal wages. This is a movement along the Phillips curve as with change A. Eventually, workers discover that real wages have fallen, so they push for higher money wages. This causes the Phillips curve to shift upward and to the right, as with B. Some research underlines that serious implicit assumptions lie in the background of the Friedmanian Phillips curve: both information asymmetry and a special pattern of price and wage flexibility are necessary to sustain the mechanism Friedman described. However, it is argued, Friedman left these presumptions unstated and theoretically ungrounded.
Gordon's triangle model.
Robert J. Gordon of Northwestern University has analyzed the Phillips curve to produce what he calls the triangle model, in which the actual inflation rate is determined by the sum of demand inflation (due to low unemployment), supply-shock inflation, and built-in inflation.
The last reflects inflationary expectations and the price/wage spiral. Supply shocks and changes in built-in inflation are the main factors shifting the short-run Phillips curve and changing the trade-off. In this theory, it is not only inflationary expectations that can cause stagflation. For example, the steep climb of oil prices during the 1970s could have this result.
Changes in built-in inflation follow the partial-adjustment logic behind most theories of the NAIRU: at low unemployment rates, inflation tends to accelerate as built-in inflation is ratcheted upward, while at high unemployment rates it tends to decelerate.
In between these two lies the NAIRU, where the Phillips curve does not have any inherent tendency to shift, so that the inflation rate is stable. However, there seems to be a range in the middle between "high" and "low" where built-in inflation stays stable. The ends of this "non-accelerating inflation range of unemployment rates" change over time.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "gW = gW^T - f(U)"
},
{
"math_id": 1,
"text": "gW = gW^T - f(U) + \\lambda gP^\\text{ex}."
},
{
"math_id": 2,
"text": "Y = Y_n + a (P-P_e) \\, "
},
{
"math_id": 3,
"text": "Y_n"
},
{
"math_id": 4,
"text": "a"
},
{
"math_id": 5,
"text": "P"
},
{
"math_id": 6,
"text": "P_e"
},
{
"math_id": 7,
"text": " P = P_e + \\frac{Y-Y_n}{a} "
},
{
"math_id": 8,
"text": "v"
},
{
"math_id": 9,
"text": " P = P_e + \\frac{Y-Y_n}{a} + v "
},
{
"math_id": 10,
"text": "P_{-1}"
},
{
"math_id": 11,
"text": " P-P_{-1}\\ \\approx \\pi "
},
{
"math_id": 12,
"text": " P_e- P_{-1}\\ \\approx \\pi_e "
},
{
"math_id": 13,
"text": "\\pi"
},
{
"math_id": 14,
"text": "\\pi_e"
},
{
"math_id": 15,
"text": "\\frac{Y-Y_n}{a} = -b(U-U_n) "
},
{
"math_id": 16,
"text": "b"
},
{
"math_id": 17,
"text": "U"
},
{
"math_id": 18,
"text": "U_n"
},
{
"math_id": 19,
"text": " \\pi = \\pi_e - b(U-U_n) + v."
},
{
"math_id": 20,
"text": "\\pi_{t} = \\beta E_{t}[\\pi_{t+1}] + \\kappa y_{t}"
},
{
"math_id": 21,
"text": "\\kappa = \\frac{\\alpha[1-(1-\\alpha)\\beta]\\phi}{1-\\alpha}."
},
{
"math_id": 22,
"text": "\\beta E_{t}[\\pi_{t+1}]"
}
] |
https://en.wikipedia.org/wiki?curid=153390
|
153391
|
Local ring
|
(Mathematical) ring with a unique maximal ideal
In mathematics, more specifically in ring theory, local rings are certain rings that are comparatively simple, and serve to describe what is called "local behaviour", in the sense of functions defined on algebraic varieties or manifolds, or of algebraic number fields examined at a particular place, or prime. Local algebra is the branch of commutative algebra that studies commutative local rings and their modules.
In practice, a commutative local ring often arises as the result of the localization of a ring at a prime ideal.
The concept of local rings was introduced by Wolfgang Krull in 1938 under the name "Stellenringe". The English term "local ring" is due to Zariski.
Definition and first consequences.
A ring "R" is a local ring if it has any one of the following equivalent properties:
If these properties hold, then the unique maximal left ideal coincides with the unique maximal right ideal and with the ring's Jacobson radical. The third of the properties listed above says that the set of non-units in a local ring forms a (proper) ideal, necessarily contained in the Jacobson radical. The fourth property can be paraphrased as follows: a ring "R" is local if and only if there do not exist two coprime proper (principal) (left) ideals, where two ideals "I"1, "I"2 are called "coprime" if "R" = "I"1 + "I"2.
In the case of commutative rings, one does not have to distinguish between left, right and two-sided ideals: a commutative ring is local if and only if it has a unique maximal ideal.
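As a minimal computational sketch (my own example, not from the source): in the finite commutative ring Z/9Z the non-units are exactly the multiples of 3, and they form the unique maximal ideal (3), so the ring is local.

```python
# In Z/9Z an element is a unit iff it is coprime to 9; the non-units
# {0, 3, 6} form a proper ideal, the unique maximal ideal (3).
from math import gcd

n = 9
units = {a for a in range(n) if gcd(a, n) == 1}
non_units = set(range(n)) - units
assert non_units == {0, 3, 6}

# The non-units are closed under addition and absorb multiplication by
# arbitrary ring elements, i.e. they form a (proper) ideal:
assert all((a + b) % n in non_units for a in non_units for b in non_units)
assert all((r * a) % n in non_units for r in range(n) for a in non_units)
```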
Before about 1960 many authors required that a local ring be (left and right) Noetherian, and (possibly non-Noetherian) local rings were called quasi-local rings. In this article this requirement is not imposed.
A local ring that is an integral domain is called a local domain.
Examples.
Ring of germs.
To motivate the name "local" for these rings, we consider real-valued continuous functions defined on some open interval around 0 of the real line. We are only interested in the behavior of these functions near 0 (their "local behavior") and we will therefore identify two functions if they agree on some (possibly very small) open interval around 0. This identification defines an equivalence relation, and the equivalence classes are what are called the "germs of real-valued continuous functions at 0". These germs can be added and multiplied and form a commutative ring.
To see that this ring of germs is local, we need to characterize its invertible elements. A germ "f" is invertible if and only if "f"(0) ≠ 0. The reason: if "f"(0) ≠ 0, then by continuity there is an open interval around 0 where "f" is non-zero, and we can form the function "g"("x") = 1/"f"("x") on this interval. The function "g" gives rise to a germ, and the product of "fg" is equal to 1. (Conversely, if "f" is invertible, then there is some "g" such that "f"(0)"g"(0) = 1, hence "f"(0) ≠ 0.)
With this characterization, it is clear that the sum of any two non-invertible germs is again non-invertible, and we have a commutative local ring. The maximal ideal of this ring consists precisely of those germs "f" with "f"(0) = 0.
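The argument can be mimicked numerically (a sketch; the functions are stand-ins for representatives of germs, chosen by me for illustration): invertibility is decided by the value at 0, so germs vanishing at 0 behave like an ideal.

```python
# A germ of a continuous function at 0 is invertible iff its value at 0
# is nonzero; germs with f(0) = 0 are closed under addition and absorb
# multiplication, mirroring the maximal ideal of the ring of germs.
def invertible(f):
    return f(0.0) != 0.0

f = lambda x: x          # f(0) = 0: in the maximal ideal
g = lambda x: x ** 2     # g(0) = 0: in the maximal ideal
h = lambda x: 1 + x      # h(0) = 1: a unit

assert not invertible(f) and not invertible(g)
assert not invertible(lambda x: f(x) + g(x))  # sum of non-units is a non-unit
assert invertible(h)
assert not invertible(lambda x: h(x) * f(x))  # ideal absorbs multiplication
```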
Exactly the same arguments work for the ring of germs of continuous real-valued functions on any topological space at a given point, or the ring of germs of differentiable functions on any differentiable manifold at a given point, or the ring of germs of rational functions on any algebraic variety at a given point. All these rings are therefore local. These examples help to explain why schemes, the generalizations of varieties, are defined as special locally ringed spaces.
Valuation theory.
Local rings play a major role in valuation theory. By definition, a valuation ring of a field "K" is a subring "R" such that for every non-zero element "x" of "K", at least one of "x" and "x"−1 is in "R". Any such subring will be a local ring. For example, the ring of rational numbers with odd denominator is a valuation ring in formula_12.
Given a field "K", which may or may not be a function field, we may look for local rings in it. If "K" were indeed the function field of an algebraic variety "V", then for each point "P" of "V" we could try to define a valuation ring "R" of functions "defined at" "P". In cases where "V" has dimension 2 or more there is a difficulty that is seen this way: if "F" and "G" are rational functions on "V" with
"F"("P") = "G"("P") = 0,
the function
"F"/"G"
is an indeterminate form at "P". Considering a simple example, such as
"Y"/"X",
approached along a line
"Y" = "tX",
one sees that the "value at" "P" is a concept without a simple definition. It is replaced by using valuations.
Non-commutative.
Non-commutative local rings arise naturally as endomorphism rings in the study of direct sum decompositions of modules over some other rings. Specifically, if the endomorphism ring of the module "M" is local, then "M" is indecomposable; conversely, if the module "M" has finite length and is indecomposable, then its endomorphism ring is local.
If "k" is a field of characteristic "p" > 0 and "G" is a finite "p"-group, then the group algebra "kG" is local.
Some facts and definitions.
Commutative case.
We also write ("R", "m") for a commutative local ring "R" with maximal ideal "m". Every such ring becomes a topological ring in a natural way if one takes the powers of "m" as a neighborhood base of 0. This is the "m"-adic topology on "R". If ("R", "m") is a commutative Noetherian local ring, then
formula_13
(Krull's intersection theorem), and it follows that "R" with the "m"-adic topology is a Hausdorff space. The theorem is a consequence of the Artin–Rees lemma together with Nakayama's lemma, and, as such, the "Noetherian" assumption is crucial. Indeed, let "R" be the ring of germs of infinitely differentiable functions at 0 in the real line and "m" be the maximal ideal formula_14. Then a nonzero function formula_15 belongs to formula_16 for any "n", since that function divided by formula_17 is still smooth.
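The counterexample can be checked numerically (a sketch; the sample points are arbitrary): dividing e^(-1/x^2) by any power x^n still gives a function tending to 0 as x approaches 0, so the quotient is smooth and the germ lies in every m^n.

```python
# For every n, e^(-1/x^2) / x^n shrinks rapidly as x -> 0, so the germ
# of e^(-1/x^2) belongs to m^n for all n and the intersection of the
# powers of m contains a nonzero element.
import math

def quotient(x, n):
    return math.exp(-1.0 / x**2) / x**n

for n in (1, 5, 20):
    # the quotient decreases toward 0 as x approaches 0, whatever n is
    assert quotient(0.2, n) > quotient(0.1, n) > quotient(0.05, n) > 0.0
assert quotient(0.05, 20) < 1e-100
```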
As for any topological ring, one can ask whether ("R", "m") is complete (as a uniform space); if it is not, one considers its completion, again a local ring. Complete Noetherian local rings are classified by the Cohen structure theorem.
In algebraic geometry, especially when "R" is the local ring of a scheme at some point "P", "R" / "m" is called the "residue field" of the local ring or residue field of the point "P".
If ("R", "m") and ("S", "n") are local rings, then a local ring homomorphism from "R" to "S" is a ring homomorphism "f" : "R" → "S" with the property "f"("m") ⊆ "n". These are precisely the ring homomorphisms that are continuous with respect to the given topologies on "R" and "S". For example, consider the ring morphism formula_18 sending formula_19. The preimage of formula_20 is formula_14. Another example of a local ring morphism is given by formula_21.
General case.
The Jacobson radical "m" of a local ring "R" (which is equal to the unique maximal left ideal and also to the unique maximal right ideal) consists precisely of the non-units of the ring; furthermore, it is the unique maximal two-sided ideal of "R". However, in the non-commutative case, having a unique maximal two-sided ideal is not equivalent to being local.
For an element "x" of the local ring "R", the following are equivalent:
If ("R", "m") is local, then the factor ring "R"/"m" is a skew field. If "J" ≠ "R" is any two-sided ideal in "R", then the factor ring "R"/"J" is again local, with maximal ideal "m"/"J".
A deep theorem by Irving Kaplansky says that any projective module over a local ring is free, though the case where the module is finitely-generated is a simple corollary to Nakayama's lemma. This has an interesting consequence in terms of Morita equivalence. Namely, if "P" is a finitely generated projective "R" module, then "P" is isomorphic to the free module "R""n", and hence the ring of endomorphisms formula_22 is isomorphic to the full ring of matrices formula_23. Since every ring Morita equivalent to the local ring "R" is of the form formula_22 for such a "P", the conclusion is that the only rings Morita equivalent to a local ring "R" are (isomorphic to) the matrix rings over "R".
|
[
{
"math_id": 0,
"text": "\\mathbb{Z}/p^n\\mathbb{Z}"
},
{
"math_id": 1,
"text": "\\mathbb{C}[[x]]"
},
{
"math_id": 2,
"text": "\\sum_{i=0}^\\infty a_ix^i "
},
{
"math_id": 3,
"text": "(\\sum_{i=0}^\\infty a_ix^i)(\\sum_{i=0}^\\infty b_ix^i)=\\sum_{i=0}^\\infty c_ix^i"
},
{
"math_id": 4,
"text": "c_n=\\sum_{i+j=n}a_ib_j"
},
{
"math_id": 5,
"text": "K[x]"
},
{
"math_id": 6,
"text": "K"
},
{
"math_id": 7,
"text": "x"
},
{
"math_id": 8,
"text": "1 - x"
},
{
"math_id": 9,
"text": "\\Z"
},
{
"math_id": 10,
"text": "(p)"
},
{
"math_id": 11,
"text": "p"
},
{
"math_id": 12,
"text": "\\mathbb{Q}"
},
{
"math_id": 13,
"text": "\\bigcap_{i=1}^\\infty m^i = \\{0\\}"
},
{
"math_id": 14,
"text": "(x)"
},
{
"math_id": 15,
"text": "e^{-{1 \\over x^2}}"
},
{
"math_id": 16,
"text": "m^n"
},
{
"math_id": 17,
"text": "x^n"
},
{
"math_id": 18,
"text": "\\mathbb{C}[x]/(x^3) \\to \\mathbb{C}[x,y]/(x^3,x^2y,y^4)"
},
{
"math_id": 19,
"text": "x \\mapsto x"
},
{
"math_id": 20,
"text": "(x,y)"
},
{
"math_id": 21,
"text": "\\mathbb{C}[x]/(x^3) \\to \\mathbb{C}[x]/(x^2)"
},
{
"math_id": 22,
"text": "\\mathrm{End}_R(P)"
},
{
"math_id": 23,
"text": "\\mathrm{M}_n(R)"
}
] |
https://en.wikipedia.org/wiki?curid=153391
|
15342853
|
Born–Infeld model
|
Model of nonlinear electrodynamics
In theoretical physics, the Born–Infeld model or the Dirac–Born–Infeld action is a particular example of what is usually known as a nonlinear electrodynamics. It was historically introduced in the 1930s to remove the divergence of the electron's self-energy in classical electrodynamics by introducing an upper bound of the electric field at the origin. It was introduced by Max Born and Leopold Infeld in 1934, with further work by Paul Dirac in 1962.
Overview.
Born–Infeld electrodynamics is named after physicists Max Born and Leopold Infeld, who first proposed it. The model possesses a whole series of physically interesting properties.
In analogy to a relativistic limit on velocity, Born–Infeld theory proposes a limiting force via a limited electric field strength. The maximum electric field strength produces a finite electric-field self-energy, which, when attributed entirely to the electron's mass, produces the maximum field
formula_0
Born–Infeld electrodynamics displays good physical properties concerning wave propagation, such as the absence of shock waves and birefringence. A field theory showing this property is usually called completely exceptional, and Born–Infeld theory is the only completely exceptional "regular" nonlinear electrodynamics.
This theory can be seen as a covariant generalization of Mie's theory and very close to Albert Einstein's idea of introducing a nonsymmetric metric tensor with the symmetric part corresponding to the usual metric tensor and the antisymmetric to the electromagnetic field tensor.
The compatibility of Born–Infeld theory with high-precision atomic experimental data requires a value of a limiting field some 200 times higher than that introduced in the original formulation of the theory.
Since 1985 there has been a revival of interest in Born–Infeld theory and its nonabelian extensions, as they were found in some limits of string theory. It was discovered by E.S. Fradkin and A.A. Tseytlin that the Born–Infeld action is the leading term in the low-energy effective action of the open string theory expanded in powers of derivatives of the gauge field strength.
Equations.
We will use the relativistic notation here, as this theory is fully relativistic.
The Lagrangian density is
formula_1
where "η" is the Minkowski metric, "F" is the Faraday tensor (both are treated as square matrices, so that we can take the determinant of their sum), and "b" is a scale parameter. The maximal possible value of the electric field in this theory is "b", and the self-energy of point charges is finite. For electric and magnetic fields much smaller than "b", the theory reduces to Maxwell electrodynamics.
In 4-dimensional spacetime the Lagrangian can be written as
formula_2
where E is the electric field, and B is the magnetic field.
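The weak-field limit can be verified numerically (a sketch in units where the Maxwell Lagrangian is (E² − B²)/2, matching the formula above; field values are arbitrary): for fields much smaller than b, the Born–Infeld Lagrangian approaches the Maxwell one, and the discrepancy shrinks as b grows.

```python
# Born-Infeld Lagrangian L = -b^2 * sqrt(1 - (E^2 - B^2)/b^2
#                                          - (E.B)^2/b^4) + b^2.
# Expanding the square root for fields << b gives the Maxwell term
# (E^2 - B^2)/2 plus corrections of order 1/b^2.
import math

def born_infeld(E, B, EdotB, b):
    inside = 1.0 - (E**2 - B**2) / b**2 - EdotB**2 / b**4
    return -b**2 * math.sqrt(inside) + b**2

def maxwell(E, B):
    return (E**2 - B**2) / 2.0

E, B, EdotB = 1.0, 0.5, 0.2
err_small_b = abs(born_infeld(E, B, EdotB, b=10.0) - maxwell(E, B))
err_large_b = abs(born_infeld(E, B, EdotB, b=100.0) - maxwell(E, B))
assert err_large_b < err_small_b  # larger b: closer to Maxwell
```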
In string theory, gauge fields on a D-brane (that arise from attached open strings) are described by the same type of Lagrangian:
formula_3
where "T" is the tension of the D-brane and formula_4 is the invert of the string tension.
|
[
{
"math_id": 0,
"text": "E_{\\rm BI} = 1.187 \\times 10^{20} \\, \\mathrm{V} / \\mathrm{m}."
},
{
"math_id": 1,
"text": "\\mathcal{L} = -b^2 \\sqrt{-\\det\\left(\\eta + \\frac{F}{b}\\right)} + b^2,"
},
{
"math_id": 2,
"text": "\\mathcal{L} = -b^2 \\sqrt{1 - \\frac{E^2 - B^2}{b^2} - \\frac{(\\mathbf{E} \\cdot \\mathbf{B})^2}{b^4}} + b^2,"
},
{
"math_id": 3,
"text": "\\mathcal{L} = -T \\sqrt{-\\det(\\eta + 2\\pi\\alpha'F)},"
},
{
"math_id": 4,
"text": "2\\pi \\alpha'"
}
] |
https://en.wikipedia.org/wiki?curid=15342853
|
1534382
|
Univalent function
|
Mathematical concept
In mathematics, in the branch of complex analysis, a holomorphic function on an open subset of the complex plane is called univalent if it is injective.
Examples.
The function formula_0 is univalent in the open unit disc, as formula_1 implies that formula_2. As the second factor is non-zero in the open unit disc, formula_3 so formula_4 is injective.
Basic properties.
One can prove that if formula_5 and formula_6 are two open connected sets in the complex plane, and
formula_7
is a univalent function such that formula_8 (that is, formula_4 is surjective), then the derivative of formula_4 is never zero, formula_4 is invertible, and its inverse formula_9 is also holomorphic. Moreover, by the chain rule one has
formula_10
for all formula_11 in formula_12
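A small numerical check of this inverse-derivative identity, using the univalent map f(z) = 2z + z² from the example above and the branch of the inverse with f⁻¹(0) = 0 (the evaluation point z and the finite-difference step h are arbitrary choices):

```python
import cmath

def f(z):
    # univalent on the open unit disc
    return 2 * z + z * z

def f_prime(z):
    return 2 + 2 * z

def f_inverse(w):
    # branch of the inverse with f_inverse(0) = 0, using 2z + z^2 = (1 + z)^2 - 1
    return -1 + cmath.sqrt(1 + w)

z = 0.3 + 0.4j            # an arbitrary point in the unit disc
w = f(z)
h = 1e-6                  # finite-difference step
numerical_derivative = (f_inverse(w + h) - f_inverse(w)) / h
print(numerical_derivative, 1 / f_prime(z))   # (f^-1)'(f(z)) = 1/f'(z)
```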
Comparison with real functions.
For real analytic functions, unlike for complex analytic (that is, holomorphic) functions, these statements fail to hold. For example, consider the function
formula_13
given by formula_14. This function is clearly injective, but its derivative is 0 at formula_15, and its inverse is not analytic, or even differentiable, on the whole interval formula_16. Consequently, if we enlarge the domain to an open subset formula_5 of the complex plane, it must fail to be injective; and this is the case, since (for example) formula_17 (where formula_18 is a primitive cube root of unity and formula_19 is a positive real number smaller than the radius of formula_5 as a neighbourhood of formula_20).
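The failure of injectivity on a complex neighbourhood of 0 can be checked directly; here ω is a primitive cube root of unity and ε = 0.5 is an arbitrary small radius:

```python
import cmath

omega = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity
eps = 0.5                              # arbitrary small positive radius

f = lambda z: z**3
# eps*omega and eps are distinct points, yet f maps them to the same value:
print(abs(eps * omega - eps), abs(f(eps * omega) - f(eps)))
```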
Note.
<templatestyles src="Reflist/styles.css" />
References.
"This article incorporates material from univalent analytic function on PlanetMath, which is licensed under the ."
|
[
{
"math_id": 0,
"text": "f \\colon z \\mapsto 2z + z^2"
},
{
"math_id": 1,
"text": "f(z) = f(w)"
},
{
"math_id": 2,
"text": "f(z) - f(w) = (z-w)(z+w+2) = 0"
},
{
"math_id": 3,
"text": "z = w"
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": "G"
},
{
"math_id": 6,
"text": "\\Omega"
},
{
"math_id": 7,
"text": "f: G \\to \\Omega"
},
{
"math_id": 8,
"text": "f(G) = \\Omega"
},
{
"math_id": 9,
"text": "f^{-1}"
},
{
"math_id": 10,
"text": "(f^{-1})'(f(z)) = \\frac{1}{f'(z)}"
},
{
"math_id": 11,
"text": "z"
},
{
"math_id": 12,
"text": "G."
},
{
"math_id": 13,
"text": "f: (-1, 1) \\to (-1, 1) \\, "
},
{
"math_id": 14,
"text": "f(x)=x^3"
},
{
"math_id": 15,
"text": "x=0"
},
{
"math_id": 16,
"text": "(-1,1)"
},
{
"math_id": 17,
"text": "f(\\varepsilon \\omega) = f(\\varepsilon) "
},
{
"math_id": 18,
"text": "\\omega "
},
{
"math_id": 19,
"text": "\\varepsilon"
},
{
"math_id": 20,
"text": "0"
}
] |
https://en.wikipedia.org/wiki?curid=1534382
|
15344641
|
Stokes drift
|
Average velocity of a fluid parcel in a gravity wave
For a pure wave motion in fluid dynamics, the Stokes drift velocity is the average velocity when following a specific fluid parcel as it travels with the fluid flow. For instance, a particle floating at the free surface of water waves experiences a net Stokes drift velocity in the direction of wave propagation.
More generally, the Stokes drift velocity is the difference between the average Lagrangian flow velocity of a fluid parcel, and the average Eulerian flow velocity of the fluid at a fixed position. This nonlinear phenomenon is named after George Gabriel Stokes, who derived expressions for this drift in his 1847 study of water waves.
The Stokes drift is the difference in end positions, after a predefined amount of time (usually one wave period), as derived from a description in the Lagrangian and Eulerian coordinates. The end position in the Lagrangian description is obtained by following a specific fluid parcel during the time interval. The corresponding end position in the Eulerian description is obtained by integrating the flow velocity at a fixed position—equal to the initial position in the Lagrangian description—during the same time interval.
The Stokes drift velocity equals the Stokes drift divided by the considered time interval.
Often, the Stokes drift velocity is loosely referred to as Stokes drift.
Stokes drift may occur in all instances of oscillatory flow which are inhomogeneous in space, for instance in water waves, tides and atmospheric waves.
In the Lagrangian description, fluid parcels may drift far from their initial positions. As a result, the unambiguous definition of an average Lagrangian velocity and Stokes drift velocity, which can be attributed to a certain fixed position, is by no means a trivial task. However, such an unambiguous description is provided by the "Generalized Lagrangian Mean" (GLM) theory of Andrews and McIntyre in 1978.
The Stokes drift is important for the mass transfer of various kinds of material and organisms by oscillatory flows. It plays a crucial role in the generation of Langmuir circulations.
For nonlinear and periodic water waves, accurate results on the Stokes drift have been computed and tabulated.
Mathematical description.
The Lagrangian motion of a fluid parcel with position vector "x = ξ(α, t)" in the Eulerian coordinates is given by
formula_0
where
∂ξ/∂"t" is the partial derivative of ξ(α, "t") with respect to "t",
ξ(α, "t") is the Lagrangian position vector of a fluid parcel,
u(x, "t") is the Eulerian velocity,
x is the position vector in the Eulerian coordinate system,
α is the position vector in the Lagrangian coordinate system,
"t" is time.
Often, the Lagrangian coordinates α are chosen to coincide with the Eulerian coordinates x at the initial time "t" = "t"0:
formula_1
If the average value of a quantity is denoted by an overbar, then the average Eulerian velocity vector ūE and average Lagrangian velocity vector ūL are
formula_2
Different definitions of the average may be used, depending on the subject of study (see ergodic theory).
The Stokes drift velocity ūS is defined as the difference between the average Eulerian velocity and the average Lagrangian velocity:
formula_3
In many situations, the mapping of average quantities from some Eulerian position x to a corresponding Lagrangian position α poses a problem. Since a fluid parcel with label α traverses a path of many different Eulerian positions x, it is not possible to assign α to a unique x.
A mathematically sound basis for an unambiguous mapping between average Lagrangian and Eulerian quantities is provided by the theory of the generalized Lagrangian mean (GLM) by Andrews and McIntyre (1978).
Example: A one-dimensional compressible flow.
For an Eulerian velocity in the form of a monochromatic wave of any nature in a continuous medium, formula_4 perturbation theory – with formula_5 as a small parameter – readily yields for the particle position formula_6:
formula_7
formula_8
Here the last term describes the Stokes drift velocity formula_9
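The drift term can be verified by integrating the particle equation dξ/dt = û sin(kξ − ωt) numerically; in the sketch below the parameter values are arbitrary (with kû/ω = 0.1 small), and the long-time mean velocity is compared with ½kû²/ω:

```python
import math

k, omega, u_hat = 1.0, 1.0, 0.1   # wave number, frequency, velocity amplitude (assumed)

def velocity(x, t):
    # Eulerian velocity of the monochromatic wave
    return u_hat * math.sin(k * x - omega * t)

def rk4_step(x, t, dt):
    # classical 4th-order Runge-Kutta step for dx/dt = velocity(x, t)
    k1 = velocity(x, t)
    k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(x + dt * k3, t + dt)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

period = 2 * math.pi / omega
n_steps, dt = 30000, period / 100   # 300 wave periods

x, t = 0.0, 0.0
for _ in range(n_steps):
    x = rk4_step(x, t, dt)
    t += dt

drift_numeric = x / t
drift_theory = 0.5 * k * u_hat**2 / omega   # Stokes drift velocity
print(drift_numeric, drift_theory)
```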
Example: Deep water waves.
The Stokes drift was formulated for water waves by George Gabriel Stokes in 1847. For simplicity, the case of infinitely deep water is considered, with linear wave propagation of a sinusoidal wave on the free surface of a fluid layer:
formula_10
where
"η" is the elevation of the free surface in the "z" direction (meters),
"a" is the wave amplitude (meters),
"k" is the wave number: "k" = 2"π"/"λ" (radians per meter),
"ω" is the angular frequency: "ω" = 2"π"/"T" (radians per second),
"x" is the horizontal coordinate and the wave propagation direction (meters),
"z" is the vertical coordinate, with the positive "z" direction pointing out of the fluid layer (meters),
"λ" is the wave length (meters),
"T" is the wave period (seconds).
As derived below, the horizontal component "ū"S("z") of the Stokes drift velocity for deep-water waves is approximately:
formula_11
As can be seen, the Stokes drift velocity "ū"S is a nonlinear quantity in terms of the wave amplitude "a". Further, the Stokes drift velocity decays exponentially with depth: at a depth of a quarter wavelength, "z" = −"λ"/4, it is about 4% of its value at the mean free surface, "z" = 0.
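A short numerical sketch of this drift profile (the wave amplitude and wave length are arbitrary illustrative values):

```python
import math

g = 9.81            # gravitational acceleration (m/s^2)
a = 0.5             # wave amplitude (m), assumed
wavelength = 40.0   # wave length (m), assumed

k = 2 * math.pi / wavelength     # wave number
omega = math.sqrt(g * k)         # deep-water dispersion relation omega^2 = g k

def stokes_drift(z):
    # horizontal Stokes drift velocity for deep-water waves
    return omega * k * a**2 * math.exp(2 * k * z)

u_surface = stokes_drift(0.0)
u_quarter = stokes_drift(-wavelength / 4)
print(u_surface, u_quarter / u_surface)   # the ratio is e^{-pi}, about 4%
```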
Derivation.
It is assumed that the waves are of infinitesimal amplitude and that the free surface oscillates around the mean level "z" = 0. The waves propagate under the action of gravity, with a constant gravitational acceleration vector pointing downward in the negative "z" direction. Further the fluid is assumed to be inviscid and incompressible, with a constant mass density. The fluid flow is irrotational. At infinite depth, the fluid is taken to be at rest.
Now the flow may be represented by a velocity potential "φ", satisfying the Laplace equation and
formula_12
In order to have non-trivial solutions for this eigenvalue problem, the wave length and wave period may not be chosen arbitrarily, but must satisfy the deep-water dispersion relation:
formula_13
with "g" the acceleration by gravity in (m/s2). Within the framework of linear theory, the horizontal and vertical components, "ξx" and "ξz" respectively, of the Lagrangian position ξ are
formula_14
The horizontal component "ū"S of the Stokes drift velocity is estimated by using a Taylor expansion around x of the Eulerian horizontal velocity component "ux" = ∂"ξx" / ∂"t" at the position ξ:
formula_15
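The same result can be checked by tracking a fluid parcel in the linear velocity field derived from the potential "φ"; the sketch below integrates the parcel trajectory in normalized units (ω = k = 1 and ka = 0.05, all assumed) and compares the long-time mean horizontal velocity with ωka²e^(2kz̄) at the parcel's mean level z̄:

```python
import math

k, a, omega = 1.0, 0.05, 1.0   # normalized wave number, amplitude, frequency (assumed)

def velocity(x, z, t):
    # velocity field from the potential phi = (omega/k) a e^{kz} sin(kx - omega t)
    ux = omega * a * math.exp(k * z) * math.cos(k * x - omega * t)
    uz = omega * a * math.exp(k * z) * math.sin(k * x - omega * t)
    return ux, uz

def rk4_step(x, z, t, dt):
    # classical 4th-order Runge-Kutta step for the parcel trajectory
    k1x, k1z = velocity(x, z, t)
    k2x, k2z = velocity(x + 0.5*dt*k1x, z + 0.5*dt*k1z, t + 0.5*dt)
    k3x, k3z = velocity(x + 0.5*dt*k2x, z + 0.5*dt*k2z, t + 0.5*dt)
    k4x, k4z = velocity(x + dt*k3x, z + dt*k3z, t + dt)
    return (x + dt * (k1x + 2*k2x + 2*k3x + k4x) / 6,
            z + dt * (k1z + 2*k2z + 2*k3z + k4z) / 6)

z_mean = -0.25                                   # mean depth of the parcel orbit
x, z = 0.0, z_mean + a * math.exp(k * z_mean)    # start at the top of the orbit
t, dt = 0.0, 2 * math.pi / omega / 100
for _ in range(50000):                           # 500 wave periods
    x, z = rk4_step(x, z, t, dt)
    t += dt

drift_numeric = x / t
drift_theory = omega * k * a**2 * math.exp(2 * k * z_mean)
print(drift_numeric, drift_theory)
```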
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n \\dot{\\boldsymbol{\\xi}} = \\frac{\\partial \\boldsymbol{\\xi}}{\\partial t}\n = \\mathbf{u}\\big(\\boldsymbol{\\xi}(\\boldsymbol{\\alpha}, t), t\\big),\n"
},
{
"math_id": 1,
"text": "\n \\boldsymbol{\\xi}(\\boldsymbol{\\alpha}, t_0) = \\boldsymbol{\\alpha}.\n"
},
{
"math_id": 2,
"text": "\n \\begin{align}\n \\bar\\mathbf{u}_\\text{E} &= \\overline{\\mathbf{u}(\\mathbf{x}, t)},\n \\\\\n \\bar\\mathbf{u}_\\text{L} &= \\overline{\\dot{\\boldsymbol{\\xi}}(\\boldsymbol{\\alpha}, t)}\n = \\overline{\\left(\\frac{\\partial \\boldsymbol{\\xi}(\\boldsymbol{\\alpha}, t)}{\\partial t}\\right)}\n = \\overline{\\boldsymbol{u}\\big(\\boldsymbol{\\xi}(\\boldsymbol{\\alpha}, t), t\\big)}.\n \\end{align}\n"
},
{
"math_id": 3,
"text": "\n \\bar\\mathbf{u}_\\text{S} = \\bar\\mathbf{u}_\\text{L} - \\bar\\mathbf{u}_\\text{E}.\n"
},
{
"math_id": 4,
"text": "u = \\hat{u} \\sin(kx - \\omega t),"
},
{
"math_id": 5,
"text": "k\\hat{u}/\\omega"
},
{
"math_id": 6,
"text": "x = \\xi(\\xi_0, t)"
},
{
"math_id": 7,
"text": "\\dot\\xi = u(\\xi, t) = \\hat{u} \\sin(k\\xi - \\omega t),"
},
{
"math_id": 8,
"text": "\n \\xi(\\xi_0, t) \\approx \\xi_0 + \\frac{\\hat{u}}{\\omega} \\cos(k\\xi_0 - \\omega t) - \\frac14 \\frac{k\\hat{u}^2}{\\omega^2} \\sin 2(k\\xi_0 - \\omega t) + \\frac12 \\frac{k\\hat{u}^2}{\\omega} t.\n"
},
{
"math_id": 9,
"text": "\\tfrac12 k\\hat{u}^2/\\omega."
},
{
"math_id": 10,
"text": "\n \\eta = a \\cos(kx - \\omega t),\n"
},
{
"math_id": 11,
"text": "\n \\bar{u}_\\text{S} \\approx \\omega k a^2 \\text{e}^{2kz}\n = \\frac{4\\pi^2 a^2}{\\lambda T} \\text{e}^{4\\pi z / \\lambda}.\n"
},
{
"math_id": 12,
"text": "\n \\varphi = \\frac{\\omega}{k} a \\text{e}^{kz} \\sin(kx - \\omega t).\n"
},
{
"math_id": 13,
"text": "\n \\omega^2 = gk\n"
},
{
"math_id": 14,
"text": "\n \\begin{align}\n \\xi_x &= x + \\int \\frac{\\partial \\varphi}{\\partial x}\\, \\text{d}t\n = x - a \\text{e}^{kz} \\sin(kx - \\omega t),\n \\\\\n \\xi_z &= z + \\int \\frac{\\partial \\varphi}{\\partial z}\\, \\text{d}t\n = z + a \\text{e}^{kz} \\cos(kx - \\omega t).\n \\end{align}\n"
},
{
"math_id": 15,
"text": "\n \\begin{align}\n \\bar{u}_\\text{S}\n &= \\overline{u_x(\\boldsymbol{\\xi}, t)} - \\overline{u_x(\\mathbf{x}, t)}\n \\\\\n &= \\overline{\\left[\n u_x(\\mathbf{x}, t)\n + (\\xi_x - x) \\frac{\\partial u_x(\\mathbf{x}, t)}{\\partial x}\n + (\\xi_z - z) \\frac{\\partial u_x(\\mathbf{x}, t)}{\\partial z}\n + \\cdots\n \\right]}\n - \\overline{u_x(\\mathbf{x} ,t)}\n \\\\\n &\\approx \\overline{(\\xi_x - x) \\frac{\\partial^2 \\xi_x}{\\partial x\\, \\partial t}}\n + \\overline{(\\xi_z - z) \\frac{\\partial^2 \\xi_x}{\\partial z\\, \\partial t}}\n \\\\\n &= \\overline{\\left[-a \\text{e}^{kz} \\sin(kx - \\omega t)\\right]\n \\left[-\\omega ka \\text{e}^{kz} \\sin(kx - \\omega t)\\right]}\n \\\\\n &+ \\overline{\\left[a \\text{e}^{kz} \\cos(kx - \\omega t)\\right]\n \\left[\\omega ka \\text{e}^{kz} \\cos(kx - \\omega t)\\right] }\n \\\\\n &= \\overline{\\omega ka^2 \\text{e}^{2kz}\n \\left[\\sin^2(kx - \\omega t) + \\cos^2(kx - \\omega t)\\right]}\n \\\\\n &= \\omega ka^2 \\text{e}^{2kz}.\n \\end{align}\n"
}
] |
https://en.wikipedia.org/wiki?curid=15344641
|
15346623
|
Cauchy number
|
The Cauchy number (Ca) is a dimensionless number in continuum mechanics used in the study of compressible flows. It is named after the French mathematician Augustin Louis Cauchy. When the compressibility is important, the elastic forces must be considered along with inertial forces for dynamic similarity. Thus, the Cauchy number is defined as the ratio between the inertial force and the compressibility force (elastic force) in a flow and can be expressed as
formula_0,
where
formula_1 = density of fluid, (SI units: kg/m3)
"u" = local flow velocity, (SI units: m/s)
"K" = bulk modulus of elasticity, (SI units: Pa)
Relation between Cauchy number and Mach number.
For isentropic processes, the Cauchy number may be expressed in terms of the Mach number. The isentropic bulk modulus is formula_2, where formula_3 is the specific heat capacity ratio and "p" is the fluid pressure.
If the fluid obeys the ideal gas law, we have
formula_4,
where
formula_5 = speed of sound, (SI units: m/s)
"R" = characteristic gas constant, (SI units: J/(kg K) )
"T" = temperature, (SI units: K)
Substituting "K" ("Ks") in the equation for Ca yields
formula_6.
Thus, the Cauchy number is the square of the Mach number for isentropic flow of a perfect gas.
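A numerical sketch of this identity for air, treated as a perfect gas (the property values are approximate room-condition figures, assumed for illustration):

```python
import math

gamma = 1.4        # specific heat capacity ratio of air
R = 287.0          # specific gas constant, J/(kg K)
T = 293.0          # temperature, K
p = 101325.0       # pressure, Pa
u = 100.0          # local flow velocity, m/s

rho = p / (R * T)                # density from the ideal gas law
K_s = gamma * p                  # isentropic bulk modulus
a = math.sqrt(gamma * R * T)     # speed of sound

Ca = rho * u**2 / K_s            # Cauchy number
M = u / a                        # Mach number
print(Ca, M**2)                  # identical for isentropic flow of a perfect gas
```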
|
[
{
"math_id": 0,
"text": " \\mathrm{Ca} = \\frac{\\rho u^2}{K}"
},
{
"math_id": 1,
"text": "\\rho"
},
{
"math_id": 2,
"text": " K_s = \\gamma p"
},
{
"math_id": 3,
"text": "\\gamma"
},
{
"math_id": 4,
"text": " K_s = \\gamma p = \\gamma \\rho R T = \\,\\rho a^2"
},
{
"math_id": 5,
"text": "a = \\sqrt{\\gamma RT}"
},
{
"math_id": 6,
"text": " \\mathrm{Ca} = \\frac{u^2}{a^2} = \\mathrm{M}^2"
}
] |
https://en.wikipedia.org/wiki?curid=15346623
|
15352
|
Indistinguishable particles
|
Concept in quantum mechanics of perfectly substitutable particles
In quantum mechanics, indistinguishable particles (also called identical or indiscernible particles) are particles that cannot be distinguished from one another, even in principle. Species of identical particles include, but are not limited to, elementary particles (such as electrons), composite subatomic particles (such as atomic nuclei), as well as atoms and molecules. Quasiparticles also behave in this way. Although all known indistinguishable particles only exist at the quantum scale, there is no exhaustive list of all possible sorts of particles nor a clear-cut limit of applicability, as explored in quantum statistics. They were first discussed by Werner Heisenberg and Paul Dirac in 1926.
There are two main categories of identical particles: bosons, which can share quantum states, and fermions, which cannot (as described by the Pauli exclusion principle). Examples of bosons are photons, gluons, phonons, helium-4 nuclei and all mesons. Examples of fermions are electrons, neutrinos, quarks, protons, neutrons, and helium-3 nuclei.
The fact that particles can be identical has important consequences in statistical mechanics, where calculations rely on probabilistic arguments, which are sensitive to whether or not the objects being studied are identical. As a result, identical particles exhibit markedly different statistical behaviour from distinguishable particles. For example, the indistinguishability of particles has been proposed as a solution to Gibbs' mixing paradox.
Distinguishing between particles.
There are two methods for distinguishing between particles. The first method relies on differences in the intrinsic physical properties of the particles, such as mass, electric charge, and spin. If differences exist, it is possible to distinguish between the particles by measuring the relevant properties. However, as far as can be determined, microscopic particles of the same species have completely equivalent physical properties. For instance, every electron has the same electric charge.
Even if the particles have equivalent physical properties, there remains a second method for distinguishing between particles, which is to track the trajectory of each particle. As long as the position of each particle can be measured with infinite precision (even when the particles collide), then there would be no ambiguity about which particle is which.
The problem with the second approach is that it contradicts the principles of quantum mechanics. According to quantum theory, the particles do not possess definite positions during the periods between measurements. Instead, they are governed by wavefunctions that give the probability of finding a particle at each position. As time passes, the wavefunctions tend to spread out and overlap. Once this happens, it becomes impossible to determine, in a subsequent measurement, which of the particle positions correspond to those measured earlier. The particles are then said to be indistinguishable.
Quantum mechanical description.
Symmetrical and antisymmetrical states.
What follows is an example to make the above discussion concrete, using the formalism developed in the article on the mathematical formulation of quantum mechanics.
Let "n" denote a complete set of (discrete) quantum numbers for specifying single-particle states (for example, for the particle in a box problem, take "n" to be the quantized wave vector of the wavefunction.) For simplicity, consider a system composed of two particles that are not interacting with each other. Suppose that one particle is in the state "n"1, and the other is in the state "n"2. The quantum state of the system is denoted by the expression
formula_0
where the order of the tensor product matters (if formula_1, then particle 1 occupies the state "n"2 while particle 2 occupies the state "n"1). This is the canonical way of constructing a basis for a tensor product space formula_2 of the combined system from the individual spaces. This expression is valid for distinguishable particles; however, it is not appropriate for indistinguishable particles, since formula_3 and formula_4, which result from exchanging the particles, are generally different states.
Two states are physically equivalent only if they differ at most by a complex phase factor. For two indistinguishable particles, a state before the particle exchange must be physically equivalent to the state after the exchange, so these two states differ at most by a complex phase factor. This fact suggests that a state for two indistinguishable (and non-interacting) particles is given by the following two possibilities:
formula_5
States given by the sum are known as symmetric, while states involving the difference are called antisymmetric. More completely, symmetric states have the form
formula_6
while antisymmetric states have the form
formula_7
Note that if "n"1 and "n"2 are the same, the antisymmetric expression gives zero, which cannot be a state vector since it cannot be normalized. In other words, more than one identical particle cannot occupy an antisymmetric state (one antisymmetric state can be occupied only by one particle). This is known as the Pauli exclusion principle, and it is the fundamental reason behind the chemical properties of atoms and the stability of matter.
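These combinations are easy to form explicitly; the sketch below (NumPy, with the two single-particle states taken as basis vectors of a 2-dimensional space, an assumption made for illustration) builds the symmetric and antisymmetric states and shows that the antisymmetric combination of equal states vanishes:

```python
import numpy as np

# single-particle states |n1>, |n2> as orthonormal vectors in a 2D space (assumed)
n1 = np.array([1.0, 0.0])
n2 = np.array([0.0, 1.0])

def symmetric(a, b):
    s = np.kron(a, b) + np.kron(b, a)
    return s / np.linalg.norm(s)

def antisymmetric(a, b):
    # left unnormalized so that the vanishing case below is well defined
    return np.kron(a, b) - np.kron(b, a)

psi_sym = symmetric(n1, n2)
psi_anti = antisymmetric(n1, n2)
null_state = antisymmetric(n1, n1)   # Pauli exclusion: identically zero
print(psi_sym, psi_anti, null_state)
```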
Exchange symmetry.
The importance of symmetric and antisymmetric states is ultimately based on empirical evidence. It appears to be a fact of nature that identical particles do not occupy states of a mixed symmetry, such as
formula_8
There is actually an exception to this rule, which will be discussed later. On the other hand, it can be shown that the symmetric and antisymmetric states are in a sense special, by examining a particular symmetry of the multiple-particle states known as exchange symmetry.
Define a linear operator "P", called the exchange operator. When it acts on a tensor product of two state vectors, it exchanges the values of the state vectors:
formula_9
"P" is both Hermitian and unitary. Because it is unitary, it can be regarded as a symmetry operator. This symmetry may be described as the symmetry under the exchange of labels attached to the particles (i.e., to the single-particle Hilbert spaces).
Clearly, formula_10 (the identity operator), so the eigenvalues of "P" are +1 and −1. The corresponding eigenvectors are the symmetric and antisymmetric states:
formula_11
formula_12
In other words, symmetric and antisymmetric states are essentially unchanged under the exchange of particle labels: they are only multiplied by a factor of +1 or −1, rather than being "rotated" somewhere else in the Hilbert space. This indicates that the particle labels have no physical meaning, in agreement with the earlier discussion on indistinguishability.
It will be recalled that "P" is Hermitian. As a result, it can be regarded as an observable of the system, which means that, in principle, a measurement can be performed to find out if a state is symmetric or antisymmetric. Furthermore, the equivalence of the particles indicates that the Hamiltonian can be written in a symmetrical form, such as
formula_13
It is possible to show that such Hamiltonians satisfy the commutation relation
formula_14
According to the Heisenberg equation, this means that the value of "P" is a constant of motion. If the quantum state is initially symmetric (antisymmetric), it will remain symmetric (antisymmetric) as the system evolves. Mathematically, this says that the state vector is confined to one of the two eigenspaces of "P", and is not allowed to range over the entire Hilbert space. Thus, that eigenspace might as well be treated as the actual Hilbert space of the system. This is the idea behind the definition of Fock space.
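For two particles with a 2-dimensional single-particle space, the exchange operator is the 4 × 4 SWAP matrix; the sketch below (NumPy, with an arbitrary symmetric single-particle Hamiltonian "h", assumed for illustration) checks that "P"² is the identity, that the eigenvalues of "P" are +1 and −1, and that a symmetric Hamiltonian commutes with "P":

```python
import numpy as np

d = 2
# exchange operator on C^d (x) C^d: maps |i>|j> to |j>|i>
P = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        P[j * d + i, i * d + j] = 1.0

eigenvalues = np.linalg.eigvalsh(P)   # P is Hermitian; eigenvalues are +1 and -1

# a symmetric (non-interacting) two-particle Hamiltonian, h arbitrary (assumed)
h = np.array([[1.0, 0.3],
              [0.3, 2.0]])
H = np.kron(h, np.eye(d)) + np.kron(np.eye(d), h)
commutator = H @ P - P @ H
print(eigenvalues)
```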
Fermions and bosons.
The choice of symmetry or antisymmetry is determined by the species of particle. For example, symmetric states must always be used when describing photons or helium-4 atoms, and antisymmetric states when describing electrons or protons.
Particles which exhibit symmetric states are called bosons. The nature of symmetric states has important consequences for the statistical properties of systems composed of many identical bosons. These statistical properties are described as Bose–Einstein statistics.
Particles which exhibit antisymmetric states are called fermions. Antisymmetry gives rise to the Pauli exclusion principle, which forbids identical fermions from sharing the same quantum state. Systems of many identical fermions are described by Fermi–Dirac statistics.
Parastatistics are mathematically possible, but no examples exist in nature.
In certain two-dimensional systems, mixed symmetry can occur. These exotic particles are known as anyons, and they obey fractional statistics. Experimental evidence for the existence of anyons exists in the fractional quantum Hall effect, a phenomenon observed in the two-dimensional electron gases that form the inversion layer of MOSFETs. There is another type of statistics, known as braid statistics, which is associated with particles known as plektons.
The spin-statistics theorem relates the exchange symmetry of identical particles to their spin. It states that bosons have integer spin, and fermions have half-integer spin. Anyons possess fractional spin.
"N" particles.
The above discussion generalizes readily to the case of "N" particles. Suppose there are "N" particles with quantum numbers "n"1, "n"2, ..., "n""N". If the particles are bosons, they occupy a totally symmetric state, which is symmetric under the exchange of "any two" particle labels:
formula_15
Here, the sum is taken over all distinct states obtained under permutations "p" acting on "N" elements. The square-root factor to the left of the sum is a normalizing constant. The quantity "mn" stands for the number of times each of the single-particle states "n" appears in the "N"-particle state. Note that Σ"n" "m""n" = "N".
In the same vein, fermions occupy totally antisymmetric states:
formula_16
Here, sgn("p") is the sign of each permutation (i.e. formula_17 if formula_18 is composed of an even number of transpositions, and formula_19 if odd). Note that there is no formula_20 term, because each single-particle state can appear only once in a fermionic state. Otherwise the sum would again be zero due to the antisymmetry, thus representing a physically impossible state. This is the Pauli exclusion principle for many particles.
These states have been normalized so that
formula_21
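The normalization of the totally symmetric states, including the "m""n"! factors, can be verified numerically; the sketch below (NumPy, with single-particle states taken as basis vectors of a 2-dimensional space, assumed for illustration) symmetrizes three bosons with occupations "m"0 = 2 and "m"1 = 1:

```python
import math
from itertools import permutations
import numpy as np

def basis(i, d=2):
    v = np.zeros(d)
    v[i] = 1.0
    return v

def tensor(indices):
    # tensor product of single-particle basis states
    out = basis(indices[0])
    for i in indices[1:]:
        out = np.kron(out, basis(i))
    return out

def symmetrize(indices):
    # sum over the distinct permutations, normalized by sqrt(prod_n m_n! / N!)
    N = len(indices)
    counts = {i: indices.count(i) for i in set(indices)}
    norm = math.sqrt(math.prod(math.factorial(m) for m in counts.values())
                     / math.factorial(N))
    return norm * sum(tensor(p) for p in set(permutations(indices)))

psi = symmetrize((0, 0, 1))   # two bosons in state 0, one boson in state 1
print(np.linalg.norm(psi))    # unit norm
```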
Measurement.
Suppose there is a system of "N" bosons (fermions) in the symmetric (antisymmetric) state
formula_22
and a measurement is performed on some other set of discrete observables, "m". In general, this yields some result "m"1 for one particle, "m"2 for another particle, and so forth. If the particles are bosons (fermions), the state after the measurement must remain symmetric (antisymmetric), i.e.
formula_23
The probability of obtaining a particular result for the "m" measurement is
formula_24
It can be shown that
formula_25
which verifies that the total probability is 1. The sum has to be restricted to "ordered" values of "m"1, ..., "mN" to ensure that each multi-particle state is not counted more than once.
Wavefunction representation.
So far, the discussion has included only discrete observables. It can be extended to continuous observables, such as the position "x".
Recall that an eigenstate of a continuous observable represents an infinitesimal "range" of values of the observable, not a single value as with discrete observables. For instance, if a particle is in a state |"ψ"⟩, the probability of finding it in a region of volume "d"3"x" surrounding some position "x" is
formula_26
As a result, the continuous eigenstates |"x"⟩ are normalized to the delta function instead of unity:
formula_27
Symmetric and antisymmetric multi-particle states can be constructed from continuous eigenstates in the same way as before. However, it is customary to use a different normalizing constant:
formula_28
A many-body wavefunction can be written,
formula_29
where the single-particle wavefunctions are defined, as usual, by
formula_30
The most important property of these wavefunctions is that exchanging any two of the coordinate variables changes the wavefunction by only a plus or minus sign. This is the manifestation of symmetry and antisymmetry in the wavefunction representation:
formula_31
The many-body wavefunction has the following significance: if the system is initially in a state with quantum numbers "n"1, ..., nN, and a position measurement is performed, the probability of finding particles in infinitesimal volumes near "x"1, "x"2, ..., "x""N" is
formula_32
The factor of "N"! comes from our normalizing constant, which has been chosen so that, by analogy with single-particle wavefunctions,
formula_33
Because each integral runs over all possible values of "x", each multi-particle state appears "N"! times in the integral. In other words, the probability associated with each event is evenly distributed across "N"! equivalent points in the integral space. Because it is usually more convenient to work with unrestricted integrals than restricted ones, the normalizing constant has been chosen to reflect this.
Finally, antisymmetric wavefunction can be written as the determinant of a matrix, known as a Slater determinant:
formula_34
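A direct numerical sketch of a Slater determinant, using particle-in-a-box orbitals on [0, 1] as the single-particle wavefunctions (an arbitrary illustrative choice); exchanging two coordinates flips the sign of the wavefunction, and placing two particles at the same point gives zero:

```python
import math
import numpy as np

def chi(n, x):
    # orthonormal particle-in-a-box orbitals on [0, 1] (assumed single-particle states)
    return math.sqrt(2.0) * math.sin(n * math.pi * x)

def slater(xs, ns=(1, 2, 3)):
    # antisymmetric N-body wavefunction as a normalized Slater determinant
    N = len(xs)
    M = np.array([[chi(n, x) for x in xs] for n in ns])
    return np.linalg.det(M) / math.sqrt(math.factorial(N))

xs = (0.12, 0.45, 0.83)                   # arbitrary coordinates
value = slater(xs)
swapped = slater((xs[1], xs[0], xs[2]))   # exchange two particles
coincident = slater((0.12, 0.12, 0.83))   # two particles at the same position
print(value, swapped, coincident)
```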
Operator approach and parastatistics.
The Hilbert space for formula_35 particles is given by the tensor product formula_36. The permutation group formula_37 acts on this space by permuting the entries. By definition, the expectation values for an observable formula_38 of formula_35 indistinguishable particles should be invariant under these permutations. This means that for all formula_39 and formula_40
formula_41
or equivalently for each formula_40
formula_42.
Two states are equivalent whenever their expectation values coincide for all observables. If we restrict to observables of formula_43 identical particles, and hence observables satisfying the equation above, we find that the following states (after normalization) are equivalent
formula_44.
The equivalence classes are in bijective relation with irreducible subspaces of formula_36 under formula_37.
Two obvious irreducible subspaces are the one dimensional symmetric/bosonic subspace and anti-symmetric/fermionic subspace. There are however more types of irreducible subspaces. States associated with these other irreducible subspaces are called parastatistic states. Young tableaux provide a way to classify all of these irreducible subspaces.
Statistical properties.
Statistical effects of indistinguishability.
The indistinguishability of particles has a profound effect on their statistical properties. To illustrate this, consider a system of "N" distinguishable, non-interacting particles. Once again, let "n""j" denote the state (i.e. quantum numbers) of particle "j". If the particles have the same physical properties, the "n""j"s run over the same range of values. Let "ε"("n") denote the energy of a particle in state "n". As the particles do not interact, the total energy of the system is the sum of the single-particle energies. The partition function of the system is
formula_45
where "k" is the Boltzmann constant and "T" is the temperature. This expression can be factored to obtain
formula_46
where
formula_47
If the particles are identical, this equation is incorrect. Consider a state of the system, described by the single particle states ["n"1, ..., "n""N"]. In the equation for "Z", every possible permutation of the "n"s occurs once in the sum, even though each of these permutations is describing the same multi-particle state. Thus, the number of states has been over-counted.
If the possibility of overlapping states is neglected, which is valid if the temperature is high, then the number of times each state is counted is approximately "N"!. The correct partition function is
formula_48
Note that this "high temperature" approximation does not distinguish between fermions and bosons.
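The over-counting and its "N"! correction can be seen in a toy model; the sketch below (arbitrary evenly spaced levels and a temperature high enough that double occupancy is rare, both assumed) compares "Z" = "ξ""N"/"N"! with the exact bosonic partition function obtained by summing over unordered occupations:

```python
import math
from itertools import combinations_with_replacement

energies = [0.1 * n for n in range(30)]   # toy single-particle spectrum (assumed)
kT = 5.0                                  # high temperature (assumed)
N = 2

def boltz(E):
    return math.exp(-E / kT)

xi = sum(boltz(e) for e in energies)          # single-particle partition function
Z_distinguishable = xi ** N                   # over-counts identical particles
Z_corrected = xi ** N / math.factorial(N)     # high-temperature ("Maxwell-Boltzmann") counting

# exact bosonic partition function: one term per unordered occupation
Z_boson = sum(boltz(e1 + e2)
              for e1, e2 in combinations_with_replacement(energies, N))
print(Z_distinguishable, Z_corrected, Z_boson)
```

At this temperature the corrected partition function agrees with the exact bosonic one to within a few percent, while the distinguishable-particle expression is off by a factor close to "N"!.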
The discrepancy in the partition functions of distinguishable and indistinguishable particles was known as far back as the 19th century, before the advent of quantum mechanics. It leads to a difficulty known as the Gibbs paradox. Gibbs showed that with the partition function "Z" = "ξ""N", the entropy of a classical ideal gas is
formula_49
where "V" is the volume of the gas and "f" is some function of "T" alone. The problem with this result is that "S" is not extensive – if "N" and "V" are doubled, "S" does not double accordingly. Such a system does not obey the postulates of thermodynamics.
Gibbs also showed that using "Z" = "ξ""N"/"N"! alters the result to
formula_50
which is perfectly extensive. However, the reason for this correction to the partition function remained obscure until the discovery of quantum mechanics.
Statistical properties of bosons and fermions.
There are important differences between the statistical behavior of bosons and fermions, which are described by Bose–Einstein statistics and Fermi–Dirac statistics respectively. Roughly speaking, bosons have a tendency to clump into the same quantum state, which underlies phenomena such as the laser, Bose–Einstein condensation, and superfluidity. Fermions, on the other hand, are forbidden from sharing quantum states, giving rise to systems such as the Fermi gas. This is known as the Pauli Exclusion Principle, and is responsible for much of chemistry, since the electrons in an atom (fermions) successively fill the many states within shells rather than all lying in the same lowest energy state.
The differences between the statistical behavior of fermions, bosons, and distinguishable particles can be illustrated using a system of two particles. The particles are designated A and B. Each particle can exist in two possible states, labelled formula_51 and formula_52, which have the same energy.
The composite system can evolve in time, interacting with a noisy environment. Because the formula_51 and formula_52 states are energetically equivalent, neither state is favored, so this process has the effect of randomizing the states. (This is discussed in the article on quantum entanglement.) After some time, the composite system will have an equal probability of occupying each of the states available to it. The particle states are then measured.
If A and B are distinguishable particles, then the composite system has four distinct states: formula_53, formula_54, formula_55, and formula_56. The probability of obtaining two particles in the formula_51 state is 0.25; the probability of obtaining two particles in the formula_52 state is 0.25; and the probability of obtaining one particle in the formula_51 state and the other in the formula_52 state is 0.5.
If A and B are identical bosons, then the composite system has only three distinct states: formula_53, formula_54, and formula_57. When the experiment is performed, the probability of obtaining two particles in the formula_51 state is now 0.33; the probability of obtaining two particles in the formula_52 state is 0.33; and the probability of obtaining one particle in the formula_51 state and the other in the formula_52 state is 0.33. Note that the probability of finding particles in the same state is relatively larger than in the distinguishable case. This demonstrates the tendency of bosons to "clump".
If A and B are identical fermions, there is only one state available to the composite system: the totally antisymmetric state formula_58. When the experiment is performed, one particle is always in the formula_51 state and the other is in the formula_52 state.
The results are summarized in Table 1:
As can be seen, even a system of two particles exhibits different statistical behaviors between distinguishable particles, bosons, and fermions. In the articles on Fermi–Dirac statistics and Bose–Einstein statistics, these principles are extended to large numbers of particles, with qualitatively similar results.
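The probabilities quoted above follow from simply counting the distinct composite states available in each case. A sketch in Python using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

def outcome_probs(states):
    """Probabilities of (both 0), (both 1), (one of each), assuming a
    uniform distribution over the listed composite states."""
    n = len(states)
    both0 = sum(1 for s in states if s == (0, 0))
    both1 = sum(1 for s in states if s == (1, 1))
    mixed = n - both0 - both1
    return tuple(Fraction(c, n) for c in (both0, both1, mixed))

# Distinguishable: all four ordered pairs are distinct states.
dist = list(product((0, 1), repeat=2))
# Bosons: symmetrized states only -- (0,1) and (1,0) merge into one state.
bosons = [(0, 0), (1, 1), (0, 1)]
# Fermions: only the single antisymmetric state survives.
fermions = [(0, 1)]

assert outcome_probs(dist) == (Fraction(1, 4), Fraction(1, 4), Fraction(1, 2))
assert outcome_probs(bosons) == (Fraction(1, 3), Fraction(1, 3), Fraction(1, 3))
assert outcome_probs(fermions) == (Fraction(0), Fraction(0), Fraction(1))
```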
Homotopy class.
To understand why particle statistics work the way that they do, note first that particles are point-localized excitations and that particles that are spacelike separated do not interact. In a flat d-dimensional space M, at any given time, the configuration of two identical particles can be specified as an element of "M" × "M". If there is no overlap between the particles, so that they do not interact directly, then their locations must belong to the space ["M" × "M"] \ {coincident points}, the subspace with coincident points removed. The element ("x", "y") describes the configuration with particle I at x and particle II at y, while ("y", "x") describes the interchanged configuration. With identical particles, the state described by ("x", "y") ought to be indistinguishable from the state described by ("y", "x"). Now consider the homotopy class of continuous paths from ("x", "y") to ("y", "x"), within the space ["M" × "M"] \ {coincident points}. If M is &NoBreak;&NoBreak; where "d" ≥ 3, then this homotopy class only has one element. If M is &NoBreak;&NoBreak;, then this homotopy class has countably many elements (i.e. a counterclockwise interchange by half a turn, a counterclockwise interchange by one and a half turns, two and a half turns, etc., a clockwise interchange by half a turn, etc.). In particular, a counterclockwise interchange by half a turn is "not" homotopic to a clockwise interchange by half a turn. Lastly, if M is &NoBreak;&NoBreak;, then this homotopy class is empty.
Suppose first that "d" ≥ 3. The universal covering space of ["M" × "M"] &setminus; {coincident points}, which is none other than ["M" × "M"] &setminus; {coincident points} itself, only has two points which are physically indistinguishable from ("x", "y"), namely ("x", "y") itself and ("y", "x"). So, the only permissible interchange is to swap both particles. This interchange is an involution, so its only effect is to multiply the phase by a square root of 1. If the root is +1, then the points have Bose statistics, and if the root is –1, the points have Fermi statistics.
In the case formula_59 the universal covering space of ["M" × "M"] &setminus; {coincident points} has infinitely many points that are physically indistinguishable from ("x", "y"). This is described by the infinite cyclic group generated by making a counterclockwise half-turn interchange. Unlike the previous case, performing this interchange twice in a row does not recover the original state; so such an interchange can generically result in a multiplication by exp("iθ") for any real θ (by unitarity, the absolute value of the multiplication must be 1). This is called anyonic statistics. In fact, even with two "distinguishable" particles, even though ("x", "y") is now physically distinguishable from ("y", "x"), the universal covering space still contains infinitely many points which are physically indistinguishable from the original point, now generated by a counterclockwise rotation by one full turn. This generator, then, results in a multiplication by exp("iφ"). This phase factor here is called the mutual statistics.
Finally, in the case formula_60 the space ["M" × "M"] &setminus; {coincident points} is not connected, so even if particle I and particle II are identical, they can still be distinguished via labels such as "the particle on the left" and "the particle on the right". There is no interchange symmetry here.
Footnotes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " | n_1 \\rang | n_2 \\rang "
},
{
"math_id": 1,
"text": " | n_2 \\rang | n_1 \\rang "
},
{
"math_id": 2,
"text": "H \\otimes H"
},
{
"math_id": 3,
"text": " |n_1\\rang |n_2\\rang"
},
{
"math_id": 4,
"text": "|n_2\\rang |n_1\\rang "
},
{
"math_id": 5,
"text": " |n_1\\rang |n_2\\rang \\pm |n_2\\rang |n_1\\rang "
},
{
"math_id": 6,
"text": " |n_1, n_2; S\\rang \\equiv \\mbox{constant} \\times \\bigg( |n_1\\rang |n_2\\rang + |n_2\\rang |n_1\\rang \\bigg) "
},
{
"math_id": 7,
"text": " |n_1, n_2; A\\rang \\equiv \\mbox{constant} \\times \\bigg( |n_1\\rang |n_2\\rang - |n_2\\rang |n_1\\rang \\bigg) "
},
{
"math_id": 8,
"text": " |n_1, n_2; ?\\rang = \\mbox{constant} \\times \\bigg( |n_1\\rang |n_2\\rang + i |n_2\\rang |n_1\\rang \\bigg) "
},
{
"math_id": 9,
"text": "P \\bigg(|\\psi\\rang |\\phi\\rang \\bigg) \\equiv |\\phi\\rang |\\psi\\rang "
},
{
"math_id": 10,
"text": "P^2 = 1"
},
{
"math_id": 11,
"text": "P|n_1, n_2; S\\rang = + |n_1, n_2; S\\rang"
},
{
"math_id": 12,
"text": "P|n_1, n_2; A\\rang = - |n_1, n_2; A\\rang"
},
{
"math_id": 13,
"text": "H = \\frac{p_1^2}{2m} + \\frac{p_2^2}{2m} + U(|x_1 - x_2|) + V(x_1) + V(x_2) "
},
{
"math_id": 14,
"text": "\\left[P, H\\right] = 0"
},
{
"math_id": 15,
"text": "|n_1 n_2 \\cdots n_N; S\\rang = \\sqrt{\\frac{\\prod_n m_n!}{N!}} \\sum_p \\left|n_{p(1)}\\right\\rang \\left|n_{p(2)}\\right\\rang \\cdots \\left|n_{p(N)}\\right\\rang "
},
{
"math_id": 16,
"text": "|n_1 n_2 \\cdots n_N; A\\rang = \\frac{1}{\\sqrt{N!}} \\sum_p \\operatorname{sgn}(p) \\left|n_{p(1)}\\right\\rang \\left|n_{p(2)}\\right\\rang \\cdots \\left|n_{p(N)}\\right\\rang\\ "
},
{
"math_id": 17,
"text": "+1"
},
{
"math_id": 18,
"text": "p"
},
{
"math_id": 19,
"text": "-1"
},
{
"math_id": 20,
"text": "\\Pi_n m_n"
},
{
"math_id": 21,
"text": " \\lang n_1 n_2 \\cdots n_N; S | n_1 n_2 \\cdots n_N; S\\rang = 1, \\qquad \\lang n_1 n_2 \\cdots n_N; A | n_1 n_2 \\cdots n_N; A\\rang = 1. "
},
{
"math_id": 22,
"text": "|n_1 n_2 \\cdots n_N; S/A \\rang"
},
{
"math_id": 23,
"text": "|m_1 m_2 \\cdots m_N; S/A \\rang"
},
{
"math_id": 24,
"text": "P_{S/A}\\left(n_1, \\ldots, n_N \\rightarrow m_1, \\ldots, m_N\\right) \\equiv \\big|\\left\\lang m_1 \\cdots m_N; S/A \\,|\\, n_1 \\cdots n_N; S/A \\right\\rang \\big|^2 "
},
{
"math_id": 25,
"text": "\\sum_{m_1 \\le m_2 \\le \\dots \\le m_N} P_{S/A}(n_1, \\ldots, n_N \\rightarrow m_1, \\ldots, m_N) = 1"
},
{
"math_id": 26,
"text": " |\\lang x | \\psi \\rang|^2 \\; d^3 x "
},
{
"math_id": 27,
"text": " \\lang x | x' \\rang = \\delta^3 (x - x') "
},
{
"math_id": 28,
"text": "\\begin{align}\n |x_1 x_2 \\cdots x_N; S\\rang &= \\sqrt{\\frac{\\prod_j n_j!}{N!}} \\sum_p \\left|x_{p(1)}\\right\\rang \\left|x_{p(2)}\\right\\rang \\cdots \\left|x_{p(N)}\\right\\rang \\\\\n |x_1 x_2 \\cdots x_N; A\\rang &= \\frac{1}{\\sqrt{N!}} \\sum_p \\mathrm{sgn}(p) \\left|x_{p(1)}\\right\\rang \\left|x_{p(2)}\\right\\rang \\cdots \\left|x_{p(N)}\\right\\rang\n\\end{align}"
},
{
"math_id": 29,
"text": "\\begin{align}\n \\Psi^{(S)}_{n_1 n_2 \\cdots n_N} (x_1, x_2, \\ldots, x_N)\n & \\equiv \\lang x_1 x_2 \\cdots x_N; S | n_1 n_2 \\cdots n_N; S \\rang \\\\[4pt]\n & = \\sqrt{\\frac{\\prod_j n_j!}{N!}} \\sum_p \\psi_{p(1)}(x_1) \\psi_{p(2)}(x_2) \\cdots \\psi_{p(N)}(x_N) \\\\[10pt]\n \\Psi^{(A)}_{n_1 n_2 \\cdots n_N} (x_1, x_2, \\ldots, x_N)\n & \\equiv \\lang x_1 x_2 \\cdots x_N; A | n_1 n_2 \\cdots n_N; A \\rang \\\\[4pt]\n & = \\frac{1}{\\sqrt{N!}} \\sum_p \\mathrm{sgn}(p) \\psi_{p(1)}(x_1) \\psi_{p(2)}(x_2) \\cdots \\psi_{p(N)}(x_N)\n\\end{align}"
},
{
"math_id": 30,
"text": "\\psi_n(x) \\equiv \\lang x | n \\rang "
},
{
"math_id": 31,
"text": "\\begin{align}\n \\Psi^{(S)}_{n_1 \\cdots n_N} (\\cdots x_i \\cdots x_j\\cdots) =\n \\Psi^{(S)}_{n_1 \\cdots n_N} (\\cdots x_j \\cdots x_i \\cdots) \\\\[3pt]\n \\Psi^{(A)}_{n_1 \\cdots n_N} (\\cdots x_i \\cdots x_j\\cdots) =\n -\\Psi^{(A)}_{n_1 \\cdots n_N} (\\cdots x_j \\cdots x_i \\cdots)\n\\end{align}"
},
{
"math_id": 32,
"text": " N! \\; \\left|\\Psi^{(S/A)}_{n_1 n_2 \\cdots n_N} (x_1, x_2, \\ldots, x_N) \\right|^2 \\; d^{3N}\\!x "
},
{
"math_id": 33,
"text": " \\int\\!\\int\\!\\cdots\\!\\int\\; \\left|\\Psi^{(S/A)}_{n_1 n_2 \\cdots n_N} (x_1, x_2, \\ldots, x_N)\\right|^2 d^3\\!x_1 d^3\\!x_2 \\cdots d^3\\!x_N = 1 "
},
{
"math_id": 34,
"text": "\\Psi^{(A)}_{n_1 \\cdots n_N} (x_1, \\ldots, x_N) =\n \\frac{1}{\\sqrt{N!}} \\left|\n \\begin{matrix}\n \\psi_{n_1}(x_1) & \\psi_{n_1}(x_2) & \\cdots & \\psi_{n_1}(x_N) \\\\\n \\psi_{n_2}(x_1) & \\psi_{n_2}(x_2) & \\cdots & \\psi_{n_2}(x_N) \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n \\psi_{n_N}(x_1) & \\psi_{n_N}(x_2) & \\cdots & \\psi_{n_N}(x_N) \\\\\n \\end{matrix}\n \\right|\n"
},
{
"math_id": 35,
"text": "n"
},
{
"math_id": 36,
"text": " \\bigotimes_n H "
},
{
"math_id": 37,
"text": " S_n "
},
{
"math_id": 38,
"text": "a"
},
{
"math_id": 39,
"text": " \\psi \\in H "
},
{
"math_id": 40,
"text": " \\sigma \\in S_n "
},
{
"math_id": 41,
"text": " (\\sigma \\Psi )^t a (\\sigma \\Psi) = \\Psi^t a \\Psi,"
},
{
"math_id": 42,
"text": " \\sigma^t a \\sigma = a "
},
{
"math_id": 43,
"text": "n "
},
{
"math_id": 44,
"text": " \\Psi \\sim \\sum_{\\sigma \\in S_n} \\lambda_{\\sigma} \\sigma \\Psi"
},
{
"math_id": 45,
"text": " Z = \\sum_{n_1, n_2, \\ldots, n_N} \\exp\\left\\{ -\\frac{1}{kT} \\left[ \\varepsilon(n_1) + \\varepsilon(n_2) + \\cdots + \\varepsilon(n_N) \\right] \\right\\} "
},
{
"math_id": 46,
"text": " Z = \\xi^N "
},
{
"math_id": 47,
"text": " \\xi = \\sum_n \\exp\\left[ - \\frac{\\varepsilon(n)}{kT} \\right]."
},
{
"math_id": 48,
"text": " Z = \\frac{\\xi^N}{N!}."
},
{
"math_id": 49,
"text": "S = N k \\ln \\left(V\\right) + N f(T)"
},
{
"math_id": 50,
"text": "S = N k \\ln \\left(\\frac{V}{N}\\right) + N f(T)"
},
{
"math_id": 51,
"text": "|0\\rangle"
},
{
"math_id": 52,
"text": "|1\\rangle"
},
{
"math_id": 53,
"text": "|0\\rangle|0\\rangle"
},
{
"math_id": 54,
"text": "|1\\rangle|1\\rangle"
},
{
"math_id": 55,
"text": "|0\\rangle|1\\rangle"
},
{
"math_id": 56,
"text": "|1\\rangle|0\\rangle"
},
{
"math_id": 57,
"text": "\\frac{1}{\\sqrt{2}}(|0\\rangle|1\\rangle + |1\\rangle|0\\rangle)"
},
{
"math_id": 58,
"text": "\\frac{1}{\\sqrt{2}}(|0\\rangle|1\\rangle - |1\\rangle|0\\rangle)"
},
{
"math_id": 59,
"text": "M = \\mathbb R^2,"
},
{
"math_id": 60,
"text": "M = \\mathbb R,"
}
] |
https://en.wikipedia.org/wiki?curid=15352
|
15352314
|
Supermodule
|
In mathematics, a supermodule is a Z2-graded module over a superring or superalgebra. Supermodules arise in super linear algebra, which is a mathematical framework for studying the concept of supersymmetry in theoretical physics.
Supermodules over a commutative superalgebra can be viewed as generalizations of super vector spaces over a (purely even) field "K". Supermodules often play a more prominent role in super linear algebra than do super vector spaces. The reason is that it is often necessary or useful to extend the field of scalars to include odd variables. In doing so one moves from fields to commutative superalgebras and from vector spaces to modules.
"In this article, all superalgebras are assumed to be associative and unital unless stated otherwise."
Formal definition.
Let "A" be a fixed superalgebra. A right supermodule over "A" is a right module "E" over "A" with a direct sum decomposition (as an abelian group)
formula_0
such that multiplication by elements of "A" satisfies
formula_1
for all "i" and "j" in Z2. The subgroups "E""i" are then right "A"0-modules.
The elements of "E""i" are said to be homogeneous. The parity of a homogeneous element "x", denoted by |"x"|, is 0 or 1 according to whether it is in "E"0 or "E"1. Elements of parity 0 are said to be even and those of parity 1 to be odd. If "a" is a homogeneous scalar and "x" is a homogeneous element of "E" then |"x"·"a"| is homogeneous and |"x"·"a"| = |"x"| + |"a"|.
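The parity rule |"x"·"a"| = |"x"| + |"a"| amounts to bookkeeping in Z2. A toy illustration (the class and names are invented for this example):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Homogeneous:
    """A homogeneous element tagged with its parity in Z2 = {0, 1}."""
    label: str
    parity: int  # 0 = even, 1 = odd

    def rmul(self, scalar: "Homogeneous") -> "Homogeneous":
        # |x . a| = |x| + |a|  (mod 2)
        return Homogeneous(f"{self.label}*{scalar.label}",
                           (self.parity + scalar.parity) % 2)

x = Homogeneous("x", 1)        # odd element of E
a = Homogeneous("a", 1)        # odd scalar in A
assert x.rmul(a).parity == 0   # odd times odd is even
```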
Likewise, left supermodules and superbimodules are defined as left modules or bimodules over "A" whose scalar multiplications respect the gradings in the obvious manner. If "A" is supercommutative, then every left or right supermodule over "A" may be regarded as a superbimodule by setting
formula_2
for homogeneous elements "a" ∈ "A" and "x" ∈ "E", and extending by linearity. If "A" is purely even this reduces to the ordinary definition.
Homomorphisms.
A homomorphism between supermodules is a module homomorphism that preserves the grading.
Let "E" and "F" be right supermodules over "A". A map
formula_3
is a supermodule homomorphism if
formula_4
formula_5
formula_6
for all "a"∈"A" and all "x","y"∈"E". The set of all module homomorphisms from "E" to "F" is denoted by Hom("E", "F").
In many cases, it is necessary or convenient to consider a larger class of morphisms between supermodules. Let "A" be a supercommutative algebra. Then all supermodules over "A" may be regarded as superbimodules in a natural fashion. For supermodules "E" and "F", let Hom("E", "F") denote the space of all "right" "A"-linear maps (i.e. all module homomorphisms from "E" to "F" considered as ungraded right "A"-modules). There is a natural grading on Hom("E", "F") where the even homomorphisms are those that preserve the grading
formula_7
and the odd homomorphisms are those that reverse the grading
formula_8
If φ ∈ Hom("E", "F") and "a" ∈ "A" are homogeneous then
formula_9
That is, the even homomorphisms are both right and left linear whereas the odd homomorphisms are right linear but left antilinear (with respect to the grading automorphism).
The set Hom("E", "F") can be given the structure of a bimodule over "A" by setting
formula_10
With the above grading Hom("E", "F") becomes a supermodule over "A" whose even part is the set of all ordinary supermodule homomorphisms
formula_11
In the language of category theory, the class of all supermodules over "A" forms a category with supermodule homomorphisms as the morphisms. This category is a symmetric monoidal closed category under the super tensor product whose internal Hom functor is given by Hom.
|
[
{
"math_id": 0,
"text": "E = E_0 \\oplus E_1"
},
{
"math_id": 1,
"text": "E_i A_j \\subseteq E_{i+j}"
},
{
"math_id": 2,
"text": "a\\cdot x = (-1)^{|a||x|}x\\cdot a"
},
{
"math_id": 3,
"text": "\\phi : E \\to F\\,"
},
{
"math_id": 4,
"text": "\\phi(x+y) =\\phi(x)+\\phi(y)\\,"
},
{
"math_id": 5,
"text": "\\phi(x\\cdot a) = \\phi(x)\\cdot a\\,"
},
{
"math_id": 6,
"text": "\\phi(E_i)\\subseteq F_i\\,"
},
{
"math_id": 7,
"text": "\\phi(E_i)\\subseteq F_i"
},
{
"math_id": 8,
"text": "\\phi(E_i)\\subseteq F_{1-i}."
},
{
"math_id": 9,
"text": "\\phi(x\\cdot a) = \\phi(x)\\cdot a\\qquad \\phi(a\\cdot x) = (-1)^{|a||\\phi|}a\\cdot\\phi(x)."
},
{
"math_id": 10,
"text": "\\begin{align}(a\\cdot\\phi)(x) &= a\\cdot\\phi(x)\\\\\n(\\phi\\cdot a)(x) &= \\phi(a\\cdot x).\\end{align}"
},
{
"math_id": 11,
"text": "\\mathbf{Hom}_0(E,F) = \\mathrm{Hom}(E,F)."
}
] |
https://en.wikipedia.org/wiki?curid=15352314
|
1535634
|
Parity anomaly
|
Breakdown of parity at the quantum level
In theoretical physics a quantum field theory is said to have a parity anomaly if its classical action is invariant under a change of parity of the universe, but the quantum theory is not invariant.
This kind of anomaly can occur in odd-dimensional gauge theories with fermions whose gauge groups have odd dual Coxeter numbers. They were first introduced by Antti J. Niemi and Gordon Walter Semenoff in the letter Axial-Anomaly-Induced Fermion Fractionization and Effective Gauge-Theory Actions in Odd-Dimensional Space-Times and by A. Norman Redlich in the letter Gauge Noninvariance and Parity Nonconservation of Three-Dimensional Fermions and the article Parity violation and gauge noninvariance of the effective gauge field action in three dimensions. It is in some sense an odd-dimensional version of Edward Witten's SU(2) anomaly in 4-dimensions, and in fact Redlich writes that his demonstration follows Witten's.
The anomaly in 3-dimensions.
Consider a classically parity-invariant gauge theory whose gauge group G has dual Coxeter number "h" in 3-dimensions. Include "n" Majorana fermions which transform under a real representation of G. This theory naively suffers from an ultraviolet divergence. If one includes a gauge-invariant regulator then the quantum parity invariance of the theory will be broken if "h" and "n" are odd.
Sketch of the demonstration.
The anomaly can only be a choice of sign.
Consider for example Pauli–Villars regularization. One needs to add "n" massive Majorana fermions with opposite statistics and take their masses to infinity. The complication arises from the fact that the 3-dimensional Majorana mass term, formula_0 is not parity invariant, therefore the possibility exists that the violation of parity invariance may remain when the mass goes to infinity. Indeed, this is the source of the anomaly.
If "n" is even, then one may rewrite the "n" Majorana fermions as "n"/2 Dirac fermions. These have parity-invariant mass terms, and so Pauli–Villars may be used to regulate the divergences and no parity anomaly arises. Therefore, for even "n" there is no anomaly. Moreover, as the contribution of 2"n" Majorana fermions to the partition function is the square of the contribution of "n" fermions, the square of the anomalous phase of "n" fermions must be equal to one. Therefore, the anomalous phase can only be a square root of one, in other words plus or minus one. If it is equal to one, there is no anomaly. The question is therefore: when does the partition function suffer an ambiguity of a factor of -1?
Anomaly from the index theorem.
We want to know when the choice of sign of the partition function is ill-defined. The possibility that it be ill-defined exists because the action contains the fermion kinetic term
formula_1
where ψ is a Majorana fermion and A is the vector potential. In the path integral, the exponential of the action is integrated over all of the fields. When integrating the above term over the fermion fields one obtains a factor of the square root of the determinant of the Dirac operator for each of the "n" Majorana fermions.
As is usual with a square root, one needs to determine its sign. The overall phase of the partition function is not an observable in quantum mechanics, and so for a given configuration this sign choice can be made arbitrarily. But one needs to check that the sign choice is consistent. To do this, let us deform the configuration through the configuration space, on a path which eventually returns to the original configuration. If the sign choice was consistent then, having returned to the original configuration, one will have the original sign. This is what needs to be checked.
The original spacetime is 3-dimensional; call the space M. Now we are considering a circle in configuration space, which is the same thing as a single configuration on the space formula_2. To find out the number of times that the sign of the square root changes as one goes around the circle, it suffices to count the number of zeroes of the determinant on formula_2, because each time that a pair of eigenvalues changes sign there will be a zero. Notice that the eigenvalues come in pairs, as discussed for example in Supersymmetric Index Of Three-Dimensional Gauge Theory, and so whenever one eigenvalue crosses zero, two will cross.
Summarizing, we want to know how many times the sign of the square root of the determinant of a Dirac operator changes as one circumnavigates the circle. The eigenvalues of the Dirac operator come in pairs, and the sign changes each time a pair crosses zero. Thus we are counting the zeroes of the Dirac operator on the space formula_2. These zeroes are counted by the Atiyah–Singer index theorem, which gives the answer "h" times the second Chern class of the gauge bundle over formula_2. This second Chern class may be any integer. In particular it may be one, in which case the sign changes "h" times. If the sign changes an odd number of times then the partition function is ill-defined, and so there is an anomaly.
In conclusion, we have found that there is an anomaly if the number "n" of Majorana fermions is odd and if the dual Coxeter number h of the gauge group is also odd.
Chern–Simons gauge theories.
3-dimensional Chern–Simons gauge theories are also anomalous when their level is half-integral. In fact, the derivation is identical to that above. Using Stokes' theorem and the fact that the exterior derivative of the Chern–Simons action is equal to the instanton number, the 4-dimensional theory on formula_2 has a theta angle equal to the level of the Chern–Simons theory, and so the 4-dimensional partition function is equal to -1 precisely when the instanton number is odd. This implies that the 3-dimensional partition function is ill-defined by a factor of -1 when considering deformations over a path with an odd number of instantons.
Fractional quantization conditions.
In particular, the anomalies coming from fermions and the half-level Chern–Simons terms will cancel if and only if the number of Majorana fermions plus twice the Chern–Simons level is even. In the case n=1, this statement is the half-integer quantization condition in formula_3 supersymmetric Chern–Simons gauge theories presented in The Chern-Simons Coefficient in Supersymmetric Yang-Mills Chern-Simons Theories. When n=2 this contribution to the partition function was found in formula_4 and 3 gauge theories in Branes and Supersymmetry Breaking in Three Dimensional Gauge Theories.
One-loop correction to the Chern–Simons level.
The fact that both Chern–Simons terms and Majorana fermions are anomalous under deformations with odd instanton numbers is not a coincidence. When the Pauli–Villars mass for "n" Majorana fermions is taken to infinity, Redlich found that the remaining contribution to the partition function is equal to a Chern–Simons term at level −"n"/2. This means in particular that integrating out "n" charged Majorana fermions renormalizes the Chern–Simons level of the corresponding gauge theory by −"n"/2. The fact that the Chern–Simons level is only allowed to take discrete values implies that the coupling constant can not enter into the correction to the level. This only occurs for the 1-loop correction, therefore the contribution of the Majorana fermions to the Chern–Simons level may be precisely calculated at 1-loop and all higher loop corrections vanish.
|
[
{
"math_id": 0,
"text": "m\\overline{\\psi}\\psi"
},
{
"math_id": 1,
"text": "i\\overline{\\psi}(\\partial_\\mu+A_\\mu)\\Gamma^\\mu\\psi"
},
{
"math_id": 2,
"text": "M\\times S^1"
},
{
"math_id": 3,
"text": "\\mathcal{N}=1"
},
{
"math_id": 4,
"text": "\\mathcal{N}=2"
}
] |
https://en.wikipedia.org/wiki?curid=1535634
|
15357060
|
Zero-velocity surface
|
A zero-velocity surface is a concept that relates to the N-body problem of gravity. It represents a surface that a body of given energy cannot cross, since it would have zero velocity on the surface. It was first introduced by George William Hill. The zero-velocity surface is particularly significant when working with weak gravitational interactions among orbiting bodies.
Three-body problem.
In the circular restricted three-body problem two heavy masses orbit each other at constant radial distance and angular velocity, and a particle of negligible mass is affected by their gravity. By shifting to a rotating coordinate system where the masses are stationary a centrifugal force is introduced. Energy and momentum are not conserved separately in this coordinate system, but the Jacobi integral remains constant:
formula_0
where formula_1 is the rotation rate, formula_2 the particle's location in the rotating coordinate system, formula_3 the distances to the bodies, and formula_4 their masses times the gravitational constant.
For a given value of formula_5, points on the surface
formula_6
require that formula_7. That is, the particle will not be able to cross over this surface (since the squared velocity would have to become negative). This is the zero-velocity surface of the problem.
Note that this means zero velocity in the rotating frame: in a non-rotating frame the particle is seen as rotating with the other bodies. The surface also only predicts what regions cannot be entered, not the shape of the trajectory within the surface.
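The forbidden-region condition can be checked directly from the Jacobi integral. A minimal sketch (the function names and toy parameter values are mine; units are nondimensional with formula_1 = 1):

```python
import math

def jacobi_constant(x, y, z, vx, vy, vz, omega, mu1, mu2, p1, p2):
    """Jacobi integral C in the rotating frame; p1, p2 are the fixed
    positions of the two primaries in that frame."""
    r1 = math.dist((x, y, z), p1)
    r2 = math.dist((x, y, z), p2)
    return (omega**2 * (x**2 + y**2)
            + 2 * (mu1 / r1 + mu2 / r2)
            - (vx**2 + vy**2 + vz**2))

# Toy setup: omega = 1, gravitational parameters chosen arbitrarily.
omega, mu1, mu2 = 1.0, 1.0, 0.01
p1, p2 = (-0.01, 0.0, 0.0), (0.99, 0.0, 0.0)

# A particle momentarily at rest sits exactly on its zero-velocity surface:
C = jacobi_constant(0.5, 0.5, 0.0, 0.0, 0.0, 0.0, omega, mu1, mu2, p1, p2)

def surface_value(x, y, z):
    return jacobi_constant(x, y, z, 0.0, 0.0, 0.0, omega, mu1, mu2, p1, p2)

# The squared speed at a reachable point equals surface_value - C >= 0,
# so points with surface_value(x, y, z) < C are forbidden.
assert surface_value(0.5, 0.5, 0.0) == C
```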
Generalizations.
The concept can be generalized to more complex problems, for example with masses in elliptic orbits, the general planar three-body problem, the four-body problem with solar wind drag, or in rings.
Lagrange points.
The zero-velocity surface is also an important parameter in finding Lagrange points. These points correspond to locations where the apparent potential in the rotating coordinate system is extremal. This corresponds to places where the zero-velocity surfaces pinch and develop holes as formula_5 is changed. Since trajectories are confined by the surfaces, a trajectory that seeks to escape (or enter) a region with minimal energy will typically pass close to the Lagrange point, which is used in low-energy transfer trajectory planning.
Galaxy clusters.
Given a group of galaxies which are gravitationally interacting, the zero-velocity surface is used to determine which objects are gravitationally bound (i.e. not overcome by the Hubble expansion) and thus part of a galaxy cluster, such as the Local Group.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C=\\omega^2 (x^2+y^2) + 2 \\left(\\frac{\\mu_1}{r_1}+\\frac{\\mu_2}{r_2}\\right) - \\left(\\dot x^2+\\dot y^2+\\dot z^2\\right)"
},
{
"math_id": 1,
"text": "\\omega"
},
{
"math_id": 2,
"text": "x,y"
},
{
"math_id": 3,
"text": "r_1,r_2"
},
{
"math_id": 4,
"text": "\\mu_1,\\mu_2"
},
{
"math_id": 5,
"text": "C"
},
{
"math_id": 6,
"text": "C = \\omega^2 (x^2+y^2) + 2 \\left(\\frac{\\mu_1}{r_1}+\\frac{\\mu_2}{r_2}\\right) "
},
{
"math_id": 7,
"text": "\\dot x^2+\\dot y^2+\\dot z^2=0"
}
] |
https://en.wikipedia.org/wiki?curid=15357060
|
1535719
|
Similarity invariance
|
In linear algebra, similarity invariance is a property exhibited by a function whose value is unchanged under similarities of its domain. That is, formula_0 is invariant under similarities if formula_1 where formula_2 is a matrix similar to "A". Examples of such functions include the trace, determinant, characteristic polynomial, and the minimal polynomial.
A more colloquial phrase that means the same thing as similarity invariance is "basis independence", since a matrix can be regarded as a linear operator, written in a certain basis, and the same operator in a new basis is related to one in the old basis by the conjugation formula_2, where formula_3 is the transformation matrix to the new basis.
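The invariance of the trace and determinant can be verified directly for 2×2 matrices. A minimal sketch in exact rational arithmetic (the matrices are arbitrary examples; "B" must be invertible):

```python
from fractions import Fraction as F

def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(B):
    # explicit 2x2 inverse
    d = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [[ B[1][1] / d, -B[0][1] / d],
            [-B[1][0] / d,  B[0][0] / d]]

def trace(X):
    return X[0][0] + X[1][1]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

A = [[F(2), F(1)], [F(0), F(3)]]
B = [[F(1), F(2)], [F(3), F(5)]]        # invertible change of basis

conj = matmul(inv2(B), matmul(A, B))    # B^{-1} A B

assert trace(conj) == trace(A)          # trace is similarity invariant
assert det(conj) == det(A)              # so is the determinant
```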
|
[
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "f(A) = f(B^{-1}AB)"
},
{
"math_id": 2,
"text": "B^{-1}AB"
},
{
"math_id": 3,
"text": "B"
}
] |
https://en.wikipedia.org/wiki?curid=1535719
|
15358001
|
APMonitor
|
Modelling language for algebraic equations
Advanced process monitor (APMonitor) is a modeling language for differential algebraic (DAE) equations. It is a free web-service or local server for solving representations of physical systems in the form of implicit DAE models. APMonitor is suited for large-scale problems and solves linear programming, integer programming, nonlinear programming, nonlinear mixed integer programming, dynamic simulation, moving horizon estimation, and nonlinear model predictive control. APMonitor does not solve the problems directly, but calls nonlinear programming solvers such as APOPT, BPOPT, IPOPT, MINOS, and SNOPT. The APMonitor API provides exact first and second derivatives of continuous functions to the solvers through automatic differentiation and in sparse matrix form.
Programming language integration.
Julia, MATLAB, and Python are mathematical programming languages that have APMonitor integration through web-service APIs. The GEKKO Optimization Suite is a recent extension of APMonitor with complete Python integration. The interfaces are built-in optimization toolboxes or modules that both load and process solutions of optimization problems. APMonitor is an object-oriented modeling language and optimization suite that relies on programming languages to load, run, and retrieve solutions. APMonitor models and data are compiled at run-time and translated into objects that are solved by an optimization engine such as APOPT or IPOPT. The optimization engine is not specified by APMonitor, allowing several different optimization engines to be switched out. The simulation or optimization mode is also configurable to reconfigure the model for dynamic simulation, nonlinear model predictive control, moving horizon estimation or general problems in mathematical optimization.
As a first step in solving the problem, a mathematical model is expressed in terms of variables and equations, such as the Hock & Schittkowski Benchmark Problem #71, which is used to test the performance of nonlinear programming solvers. This particular optimization problem has an objective function formula_0 and is subject to the inequality constraint formula_1 and the equality constraint formula_2. The four variables must be between a lower bound of 1 and an upper bound of 5. The initial guess values are formula_3. This mathematical model is translated into the APMonitor modeling language in the following text file.
! file saved as hs71.apm
Variables
x1 = 1, >=1, <=5
x2 = 5, >=1, <=5
x3 = 5, >=1, <=5
x4 = 1, >=1, <=5
End Variables
Equations
minimize x1*x4*(x1+x2+x3) + x3
x1*x2*x3*x4 > 25
x1^2 + x2^2 + x3^2 + x4^2 = 40
End Equations
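Before handing the model to a solver, the known optimum of HS71 can be checked against the objective and constraints in plain Python (the solution values below, accurate to about seven digits, are the published optimum from the Hock–Schittkowski collection; the tolerances are my own):

```python
# Published optimum of Hock-Schittkowski problem 71.
x1, x2, x3, x4 = 1.0, 4.7429994, 3.8211503, 1.3794082

objective = x1 * x4 * (x1 + x2 + x3) + x3
ineq = x1 * x2 * x3 * x4               # must be >= 25
eq = x1**2 + x2**2 + x3**2 + x4**2     # must equal 40

assert abs(objective - 17.0140173) < 1e-4   # optimal objective value
assert ineq >= 25 - 1e-4                    # inequality (active at the optimum)
assert abs(eq - 40.0) < 1e-4                # equality constraint
assert all(1 <= v <= 5 for v in (x1, x2, x3, x4))   # variable bounds
```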
The problem is then solved in Python by first installing the APMonitor package with pip install APMonitor or from the following Python code.
import pip
pip.main(['install','APMonitor'])
Installing a Python module is only required once. Once the APMonitor package is installed, it is imported and the apm_solve function solves the optimization problem. The solution is returned to the programming language for further processing and analysis.
from APMonitor.apm import *
sol = apm_solve("hs71", 3)
x1 = sol["x1"]
x2 = sol["x2"]
Similar interfaces are available for MATLAB and Julia with minor differences from the above syntax. Extending the capability of a modeling language is important because significant pre- or post-processing of data or solutions is often required when solving complex optimization, dynamic simulation, estimation, or control problems.
High Index DAEs.
The highest order of a derivative that is necessary to return a DAE to ODE form is called the "differentiation index". A standard way of dealing with high-index DAEs is to differentiate the equations to put them in index-1 DAE or ODE form (see Pantelides algorithm). However, this approach can cause a number of undesirable numerical issues such as instability. While the syntax is similar to other modeling languages such as gPROMS, APMonitor solves DAEs of any index without rearrangement or differentiation. As an example, an index-3 DAE is shown below for the pendulum motion equations, and lower-index rearrangements can return this system of equations to ODE form (see Index 0 to 3 Pendulum example).
Pendulum motion (index-3 DAE form).
Model pendulum
Parameters
m = 1
g = 9.81
s = 1
End Parameters
Variables
x = 0
y = -s
v = 1
w = 0
lam = m*(1+s*g)/2*s^2
End Variables
Equations
x^2 + y^2 = s^2
$x = v
$y = w
m*$v = -2*x*lam
m*$w = -m*g - 2*y*lam
End Equations
End Model
Applications in APMonitor Modeling Language.
Many physical systems are naturally expressed by differential algebraic equations. Some of these include:
Models for a direct current (DC) motor and blood glucose response of an insulin dependent patient are listed below. They are representative of differential and algebraic equations encountered in many branches of science and engineering.
Direct current (DC) motor.
Parameters
! motor parameters (dc motor)
v = 36 ! input voltage to the motor (volts)
rm = 0.1 ! motor resistance (ohms)
lm = 0.01 ! motor inductance (henrys)
kb = 6.5e-4 ! back emf constant (volt·s/rad)
kt = 0.1 ! torque constant (N·m/a)
jm = 1.0e-4 ! rotor inertia (kg m<sup>2</sup>)
bm = 1.0e-5 ! mechanical damping (linear model of friction: bm * dth)
! load parameters
jl = 1000*jm ! load inertia (1000 times the rotor)
bl = 1.0e-3 ! load damping (friction)
k = 1.0e2 ! spring constant for motor shaft to load
b = 0.1 ! spring damping for motor shaft to load
End Parameters
Variables
i = 0 ! motor electric current (amperes)
dth_m = 0 ! rotor angular velocity sometimes called omega (radians/sec)
th_m = 0 ! rotor angle, theta (radians)
dth_l = 0 ! wheel angular velocity (rad/s)
th_l = 0 ! wheel angle (radians)
End Variables
Equations
lm*$i - v = -rm*i - kb *$th_m
jm*$dth_m = kt*i - (bm+b)*$th_m - k*th_m + b *$th_l + k*th_l
jl*$dth_l = b *$th_m + k*th_m - (b+bl)*$th_l - k*th_l
dth_m = $th_m
dth_l = $th_l
End Equations
Blood glucose response of an insulin dependent patient.
! Model source:
! A. Roy and R.S. Parker. “Dynamic Modeling of Free Fatty
! Acids, Glucose, and Insulin: An Extended Minimal Model,”
! Diabetes Technology and Therapeutics 8(6), 617-626, 2006.
Parameters
p1 = 0.068 ! 1/min
p2 = 0.037 ! 1/min
p3 = 0.000012 ! 1/min
p4 = 1.3 ! mL/(min·µU)
p5 = 0.000568 ! 1/mL
p6 = 0.00006 ! 1/(min·µmol)
p7 = 0.03 ! 1/min
p8 = 4.5 ! mL/(min·µU)
k1 = 0.02 ! 1/min
k2 = 0.03 ! 1/min
pF2 = 0.17 ! 1/min
pF3 = 0.00001 ! 1/min
n = 0.142 ! 1/min
VolG = 117 ! dL
VolF = 11.7 ! L
! basal parameters for Type-I diabetic
Ib = 0 ! Insulin (µU/mL)
Xb = 0 ! Remote insulin (µU/mL)
Gb = 98 ! Blood Glucose (mg/dL)
Yb = 0 ! Insulin for Lipogenesis (µU/mL)
Fb = 380 ! Plasma Free Fatty Acid (µmol/L)
Zb = 380 ! Remote Free Fatty Acid (µmol/L)
! insulin infusion rate
u1 = 3 ! µU/min
! glucose uptake rate
u2 = 300 ! mg/min
! external lipid infusion
u3 = 0 ! mg/min
End parameters
Intermediates
p9 = 0.00021 * exp(-0.0055*G) ! dL/(min*mg)
End Intermediates
Variables
I = Ib
X = Xb
G = Gb
Y = Yb
F = Fb
Z = Zb
End variables
Equations
! Insulin dynamics
$I = -n*I + p5*u1
! Remote insulin compartment dynamics
$X = -p2*X + p3*I
! Glucose dynamics
$G = -p1*G - p4*X*G + p6*G*Z + p1*Gb - p6*Gb*Zb + u2/VolG
! Insulin dynamics for lipogenesis
$Y = -pF2*Y + pF3*I
! Plasma-free fatty acid (FFA) dynamics
$F = -p7*(F-Fb) - p8*Y*F + p9 * (F*G-Fb*Gb) + u3/VolF
! Remote FFA dynamics
$Z = -k2*(Z-Zb) + k1*(F-Fb)
End Equations
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\min_{x\\in\\mathbb R}\\; x_1 x_4 (x_1+x_2+x_3)+x_3"
},
{
"math_id": 1,
"text": "x_1 x_2 x_3 x_4 \\ge 25"
},
{
"math_id": 2,
"text": "{x_1}^2 + {x_2}^2 + {x_3}^2 + {x_4}^2=40"
},
{
"math_id": 3,
"text": "x_1 = 1, x_2=5, x_3=5, x_4=1"
}
] |
https://en.wikipedia.org/wiki?curid=15358001
|
1536129
|
Money flow index
|
Measurement of stock markets
The money flow index (MFI) is an oscillator that ranges from 0 to 100. It is used to show the "money flow" (an approximation of the dollar value of a day's trading) over several days.
The steps to calculate the money flow index over N days.
Step 1: Calculate the typical price.
The typical price for each day is the average of the high price, the low price, and the closing price.
formula_0
Step 2: Calculate the positive and negative money flow.
The money flow for a certain day is typical price multiplied by volume on that day.
formula_1
The money flow is divided into positive and negative money flow: a day's money flow counts as positive when the typical price is higher than on the previous day, and as negative when it is lower.
Step 3: Calculate the money ratio.
The money ratio is the ratio of positive money flow to negative money flow.
formula_2
Step 4: Calculate the money flow index.
formula_3
The money flow index can be expressed equivalently as follows.
formula_4
This form shows more clearly that the MFI is the percentage of positive money flow relative to total money flow.
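The four steps can be sketched in plain Python. The price series and the 4-day window below are illustrative only; a 14-day window is typical in practice:

```python
def money_flow_index(highs, lows, closes, volumes, n=14):
    # Step 1: typical price for each day
    typical = [(h + l + c) / 3 for h, l, c in zip(highs, lows, closes)]
    # Step 2: money flow = typical price * volume
    flows = [tp * v for tp, v in zip(typical, volumes)]
    pos = neg = 0.0
    # classify each of the last n days by the change in typical price
    for i in range(len(typical) - n, len(typical)):
        if typical[i] > typical[i - 1]:
            pos += flows[i]
        elif typical[i] < typical[i - 1]:
            neg += flows[i]
    # Steps 3-4 combined, using the equivalent percentage form
    return 100.0 * pos / (pos + neg)

highs = [10, 11, 12, 11, 13]
lows = [9, 10, 11, 10, 12]
closes = [9.5, 10.5, 11.5, 10.5, 12.5]
volumes = [100] * 5
mfi = money_flow_index(highs, lows, closes, volumes, n=4)
```

For the sample series, three of the four windowed days have a rising typical price, so the positive flow dominates and the MFI is well above 50.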
Uses.
MFI is used to measure the "enthusiasm" of the market. In other words, the money flow index shows how much a stock was traded.
A value of 80 or more is generally considered overbought, a value of 20 or less oversold.
Divergences between MFI and price action are also considered significant; for instance, if price makes a new rally high but the MFI high is less than its previous high then that may indicate a weak advance that is likely to reverse.
MFI is constructed in a similar fashion to the relative strength index (RSI). Both look at up days against total up and down days, but the scale, i.e. what is accumulated on those days, is volume (or rather a dollar-volume approximation) for the MFI, as opposed to price change amounts for the RSI.
Marek and Čadková (2020) studied different settings of MFI parameters. The testing was randomised in time and companies (e.g., Apple, ExxonMobil, IBM, Microsoft) and showed that MFI can beat a simple buy-and-hold strategy; therefore, it can be useful for trading. They showed that the MFI settings usually recommended in the literature offer no advantage for trading, and that it is necessary to optimize the settings for each individual stock.
Similar indicators.
Other price × volume indicators:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "typical\\ price = {high + low + close \\over 3}"
},
{
"math_id": 1,
"text": "money\\ flow = typical\\ price \\times volume"
},
{
"math_id": 2,
"text": "money\\ ratio = { positive\\ money\\ flow \\over negative\\ money\\ flow }"
},
{
"math_id": 3,
"text": "MFI = 100 - {100 \\over 1 + money\\ ratio}"
},
{
"math_id": 4,
"text": "MFI = 100 \\times { positive\\ money\\ flow \\over positive\\ money flow + negative\\ money\\ flow }"
}
] |
https://en.wikipedia.org/wiki?curid=1536129
|
15363250
|
Motions in the time-frequency distribution
|
Modification of frequency and time distributions of signals, as used in computer graphics
Several techniques can be used to move signals in the time-frequency distribution. Similar to computer graphics techniques, signals can be subjected to horizontal shifting, vertical shifting, dilation (scaling), shearing, rotation, and twisting. Applying the proper motions to a signal can reduce the bandwidth it occupies. Moreover, filters combined with the proper motion transformations can reduce hardware cost by avoiding additional filters.
The following examples assume time on the horizontal axis versus frequency on the vertical axis. Coincidentally, the following transformations happen to have motion properties in the time-frequency distribution.
Shifting.
Shifting on the time axis appears as horizontal shifting in the time-frequency distribution. On the other hand, shifting on the frequency axis appears as vertical shifting in the time-frequency distribution.
Horizontal shifting.
If t0 is greater than 0, the signal is shifted to the right on the time axis (a negative t0 shifts it to the left).
STFT, Gabor:
formula_0
WDF:
formula_1
Vertical shifting.
If f0 is greater than 0, the signal is shifted upward on the frequency axis (a negative f0 shifts it downward).
STFT, Gabor:
formula_2
WDF:
formula_3
This results in an amplitude modulation signal.
This sort of shift is also used in a frequency extender.
This sort of shift is also used in most bat detectors.
Such an effect is typically implemented using heterodyning
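The vertical (frequency) shift can be demonstrated numerically: multiplying a discrete signal by a complex exponential shifts its DFT bins. The sketch below uses a naive DFT built from the standard library; the bin indices are illustrative:

```python
import cmath

def dft(x):
    # naive discrete Fourier transform, O(N^2)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

N, f0 = 16, 3
# complex tone concentrated at frequency bin 2
x = [cmath.exp(2j * cmath.pi * 2 * n / N) for n in range(N)]
# multiply by exp(j 2 pi f0 n / N): a vertical shift by f0 bins
y = [x[n] * cmath.exp(2j * cmath.pi * f0 * n / N) for n in range(N)]
X, Y = dft(x), dft(y)
peak_x = max(range(N), key=lambda k: abs(X[k]))
peak_y = max(range(N), key=lambda k: abs(Y[k]))
```

The spectral peak moves from bin 2 to bin 2 + f0, the discrete analogue of the frequency-shift formulas above.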
Dilation.
Dilation scales the time and frequency axes reciprocally, so the area is preserved after the process. When a > 1, the distribution expands on the time axis and narrows on the frequency axis; vice versa when a < 1.
STFT, Gabor:
formula_4
WDF:
formula_5
When this kind of dilation is applied to audio, it causes a chipmunk effect.
Time stretching.
Time stretching scales only the time axis, leaving frequencies the same.
When formula_6 (the most common case), the distribution narrows on the time axis, reducing the area.
STFT, Gabor:
formula_7
WDF:
formula_8
Shearing.
Shearing, by definition, displaces one side of the signal along one direction. Vertical and horizontal shearing are introduced here.
On Vertical axis only (frequency).
This is shearing along the frequency axis, since it only changes the phase.
formula_9
STFT, Gabor:
formula_10
WDF:
formula_11
On Horizontal axis only (time).
This is shearing along the time axis, since it only changes the time.
formula_12
STFT, Gabor:
formula_13
WDF:
formula_14
Generalized Shearing.
Transforming the time-frequency distribution from a band-like pattern to a curved shape requires the use of polynomials of order three or higher with respect to formula_15.
It is beneficial for implementing higher-order modulation, and furthermore, it reduces bandwidth, allowing for lower sampling rates and decreased white noise through filtering.
On Vertical axis only (frequency).
This is shearing along the frequency axis, since it only changes the phase.
formula_16
formula_17
STFT, Gabor:
formula_18
WDF:
formula_19
On Horizontal axis only (time).
This is shearing along the time axis, since it only changes the time.
formula_20
formula_21
STFT, Gabor:
formula_22
WDF:
formula_23
Rotation.
Many transforms have rotation properties, such as the Gabor–Wigner transform, the ambiguity function (counterclockwise), the modified Wigner distribution, and Cohen's class distribution.
The STFT, the Gabor transform, and the WDF are introduced here.
Clockwise rotation by 90 degrees.
Switching the time and negative-frequency variables to frequency and time acts like rotating the distribution 90 degrees clockwise.
formula_24
STFT:
formula_25
Gabor:
formula_26
WDF:
formula_27
Counterclockwise rotation by 90 degrees.
Switching the negative-time and frequency variables to frequency and time acts like rotating the distribution 90 degrees counterclockwise.
If formula_28, then
formula_29
formula_30
Rotation by 180 degrees.
Changing the sign of both time and frequency is like flipping the distribution on both axes, which amounts to a 180-degree rotation.
If formula_31, then
formula_32
formula_33
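Consistent with composing two 90-degree rotations, applying the Fourier transform twice returns the time-reversed signal. A small stdlib sketch with a discrete DFT (reversal is circular in the discrete case):

```python
import cmath

def dft(x):
    # naive discrete Fourier transform, O(N^2)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

x = [1.0, 2.0, 3.0, 4.0]
N = len(x)
twice = [v / N for v in dft(dft(x))]          # DFT applied twice, normalized
reversed_x = [x[(-n) % N] for n in range(N)]  # circular time reversal
err = max(abs(a - b) for a, b in zip(twice, reversed_x))
```

Up to floating-point error, the doubly transformed sequence equals the circularly reversed input, the discrete counterpart of the 180-degree rotation.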
Rotation: Fractional Fourier Transform (FRFTs).
Rotating the time-frequency distribution by an angle other than formula_34 and formula_35 can be done with the fractional Fourier transform. Compared to the Fourier transform, it transforms the signal from the time domain to a fractional domain, a domain between time and frequency.
formula_36
For Fourier Transform, formula_37 and formula_38
It is equivalent to the clockwise rotation operation with angle formula_39 for the Wigner distribution function and the Gabor transform.
Gabor:
formula_40
WDF:
formula_41
Twisting: Linear Canonical Transform (LCT).
The linear canonical transform (LCT) applies an arbitrary area-preserving linear transformation to a time-frequency distribution, realized as an integral transform.
formula_42
formula_43
Constraint:
formula_44
Transformation Type.
a. fractional Fourier transform
formula_45
* Fourier transform: formula_46
* identity operation: formula_47
* inverse Fourier transform: formula_48
b. Fresnel transform (convolution with a chirp)
formula_49
c. chirp multiplication
formula_50
d. scaling
formula_51
Example.
If we want the left image to become the right image, we can use the techniques from above to achieve the requirement.
There are several ways to solve this problem; this is one possible solution.
First, we apply a clockwise rotation of 90 degrees using one of the transforms.
STFT:
formula_25
Gabor:
formula_26
WDF:
formula_27
Second, we set a = 1/3 and perform horizontal shearing on the t-axis.
STFT, Gabor:
formula_52
WDF:
formula_53
Third, we shift the signal to the right by 2 on the t-axis by setting t0 = 2.
STFT, Gabor:
formula_54
WDF:
formula_55
Finally, we shift the signal downward by 1 on the f-axis by setting f0 = -1.
STFT, Gabor:
formula_56
WDF:
formula_57
Applications.
Efficient Sampling.
As mentioned in the introduction, the above techniques can be used to save bandwidth or filter cost.
Assume the signal looks like this.
The dashed box is the filter, and the area of the dashed box would be the bandwidth required.
After operations like those in the example above, the signal moves into a position like this.
As a result, bandwidth is saved, since the area becomes smaller. Moreover, only a lowpass filter is required to recover the signal, instead of a bandpass filter.
Signal Decomposition and Filter Design.
A signal is decomposed into several components, and a filter removes the undesired components. The Fourier transform is suitable for filtering out noise that is a combination of sinusoidal functions. If signals are not separable in both the time and frequency domains, the fractional Fourier transform (FRFT) is suitable for filtering out noise that is a combination of higher-order exponential functions.
Filter designed by the fractional Fourier transform:
formula_58
formula_59
(1) If formula_60
formula_61: Step function
(2) formula_39 is determined by the angle of cutoff line and f-axis.
(3) formula_62 equals the distance from origin to cutoff line.
See also.
Other time-frequency transforms:
|
[
{
"math_id": 0,
"text": "x(t-t_0) \\rightarrow S_x(t-t_0,f)e^{-j2 \\pi ft_0}"
},
{
"math_id": 1,
"text": "x(t-t_0) \\rightarrow W_x(t-t_0,f)\\,"
},
{
"math_id": 2,
"text": "e^{j2 \\pi f_0t}x(t) \\rightarrow S_x(t,f-f_0)"
},
{
"math_id": 3,
"text": "e^{j2 \\pi f_0t}x(t) \\rightarrow W_x(t,f-f_0)"
},
{
"math_id": 4,
"text": "\\frac{1}{\\sqrt{|a|}}x(\\frac{t}{a}) \\rightarrow \\approx S_x(\\frac{t}{a},af)"
},
{
"math_id": 5,
"text": "\\frac{1}{\\sqrt{|a|}}x(\\frac{t}{a}) \\rightarrow W_x(\\frac{t}{a},af)"
},
{
"math_id": 6,
"text": "a < 1"
},
{
"math_id": 7,
"text": "\\approx S_x(at,f)"
},
{
"math_id": 8,
"text": "\\approx W_x(at,f)"
},
{
"math_id": 9,
"text": "x(t) = e^{j \\pi at^2}y(t) \\, "
},
{
"math_id": 10,
"text": "S_x(t,f) \\approx S_y(t,f-at) \\, "
},
{
"math_id": 11,
"text": "W_x(t,f) = W_y(t,f-at) \\, "
},
{
"math_id": 12,
"text": "x(t) = e^{j \\pi \\frac{t^2}{a}}y(t) \\, "
},
{
"math_id": 13,
"text": "S_x(t,f) \\approx S_y(t-af,f) \\, "
},
{
"math_id": 14,
"text": "W_x(t,f) = W_y(t-af,f) \\, "
},
{
"math_id": 15,
"text": " \\phi (t)"
},
{
"math_id": 16,
"text": "x(t)=e^{j\\phi (t)}y(t)\\,"
},
{
"math_id": 17,
"text": "\\phi (t)=\\sum \\limits_{k=0}^{n}a_{k}t^{k}\\,"
},
{
"math_id": 18,
"text": "S_{x}(t,f)\\approx S_{y}(t,f-\\sum \\limits _{k=1}^{n}{\\frac {ka_{k}}{2\\pi }}t^{k-1})\\,"
},
{
"math_id": 19,
"text": "W_{x}(t,f)\\approx W_{y}(t,f-\\sum \\limits _{k=1}^{n}{\\frac {ka_{k}}{2\\pi }}t^{k-1})\\,"
},
{
"math_id": 20,
"text": "x(t)=h(t)*y(t)\\,"
},
{
"math_id": 21,
"text": "h(t)=IFT(exp(j\\sum \\limits_{k=0}^{n}a_{k}f^{k}))\\,"
},
{
"math_id": 22,
"text": "S_{x}(t,f)\\approx S_{y}(t+\\sum \\limits _{k=1}^{n}{\\frac {ka_{k}}{2\\pi }}f^{k-1},f)\\,"
},
{
"math_id": 23,
"text": "W_{x}(t,f)\\approx W_{y}(t+\\sum \\limits _{k=1}^{n}{\\frac {ka_{k}}{2\\pi }}f^{k-1},f)\\,"
},
{
"math_id": 24,
"text": "X(f) = FT(x(t)) \\, "
},
{
"math_id": 25,
"text": "|S_X(t,f)| \\approx |S_x(-f,t)| \\, "
},
{
"math_id": 26,
"text": "G_X(t,f) = G_x(-f,t)e^{-j2 \\pi ft} \\, "
},
{
"math_id": 27,
"text": "W_X(t,f) = W_x(-f,t) \\, "
},
{
"math_id": 28,
"text": "X(f) = IFT[x(t)] = \\int_{-\\infty}^{\\infty} x(t)e^{j2 \\pi ft} \\, dt "
},
{
"math_id": 29,
"text": "W_X(t,f) = W_x(f,-t) \\, "
},
{
"math_id": 30,
"text": "G_X(t,f) = G_x(f,-t)e^{j2 \\pi tf} \\, "
},
{
"math_id": 31,
"text": " X(f) = x(-t) \\, "
},
{
"math_id": 32,
"text": "W_X(t,f) = W_x(-t,-f) \\, "
},
{
"math_id": 33,
"text": "G_X(t,f) = G_x(-t,-f) \\, "
},
{
"math_id": 34,
"text": "\\pi/2, \\pi, 3\\pi/2"
},
{
"math_id": 35,
"text": "2\\pi"
},
{
"math_id": 36,
"text": "X_{\\phi }(u)={\\sqrt {1-jcot\\phi }} e^{j\\pi cot\\phi u^{2}}\\int _{-\\infty }^{\\infty }e^{-j2\\pi csc\\phi ut}e^{j\\pi cot\\phi \\ t^{2}}x(t)dt, \\quad \\phi = 0.5a\\pi"
},
{
"math_id": 37,
"text": "a=1,\\, \\phi = 0.5 \\pi"
},
{
"math_id": 38,
"text": " csc\\phi = 1,\\, cot\\phi = 0"
},
{
"math_id": 39,
"text": "\\phi"
},
{
"math_id": 40,
"text": "|G_{X_\\phi}(u,v)| = |G_x(ucos\\phi -vsin\\phi, usin\\phi + vcos\\phi)| \\,"
},
{
"math_id": 41,
"text": "W_{X_\\phi}(u,v) = W_x(ucos\\phi - vsin\\phi, usin\\phi + vcos\\phi) \\,"
},
{
"math_id": 42,
"text": "F_{(a,b,c,d)}(u)=\\sqrt{\\frac{1}{jb}} e^{j\\pi \\frac{d}{b}u^2}\\int_{-\\infty}^{\\infty}e^{-j2\\pi \\frac{1}{b}ut}e^{-j\\pi \\frac{a}{b}t^2}x(t)dt, \\quad b\\neq 0"
},
{
"math_id": 43,
"text": "F_{(a,0,c,d)}(u)={\\sqrt {d}} e^{j\\pi cdu^{2}}x(du), \\quad b = 0"
},
{
"math_id": 44,
"text": "ad - bc = 1"
},
{
"math_id": 45,
"text": "\\begin{bmatrix}a & b\\\\c & d\\end{bmatrix} = \\begin{bmatrix}cos\\phi & sin\\phi\\\\-sin\\phi & cos\\phi\\end{bmatrix}"
},
{
"math_id": 46,
"text": "\\phi = \\pi/2"
},
{
"math_id": 47,
"text": "\\phi = 0"
},
{
"math_id": 48,
"text": "\\phi = -\\pi/2"
},
{
"math_id": 49,
"text": "\\begin{bmatrix}a & b\\\\c & d\\end{bmatrix} = \\begin{bmatrix}1 & \\lambda z\\\\0 & 1\\end{bmatrix}"
},
{
"math_id": 50,
"text": "\\begin{bmatrix}a & b\\\\c & d\\end{bmatrix} = \\begin{bmatrix}1 & 0\\\\ \\tau & 1\\end{bmatrix}"
},
{
"math_id": 51,
"text": "\\begin{bmatrix}a & b\\\\c & d\\end{bmatrix} = \\begin{bmatrix}\\frac{1}{\\sigma} & 0\\\\0 & \\sigma\\end{bmatrix}"
},
{
"math_id": 52,
"text": "S_x(t,f) \\approx S_y(t- \\frac{1}{3} f,f) \\, "
},
{
"math_id": 53,
"text": "W_x(t,f) = W_y(t- \\frac{1}{3} f,f) \\, "
},
{
"math_id": 54,
"text": "x(t-t_0) \\rightarrow S_x(t-2,f)e^{-j2 \\pi ft_0}"
},
{
"math_id": 55,
"text": "x(t-t_0) \\rightarrow W_x(t-2,f)\\,"
},
{
"math_id": 56,
"text": "e^{j2 \\pi f_0t}x(t) \\rightarrow S_x(t,f+1)"
},
{
"math_id": 57,
"text": "e^{j2 \\pi f_0t}x(t) \\rightarrow W_x(t,f+1)"
},
{
"math_id": 58,
"text": "x_o(t) = O^{-\\phi}_F{O^{\\phi}_F[x_i{t}]H(u)}"
},
{
"math_id": 59,
"text": "O^{\\phi}_F(x(t)) = {\\sqrt {1-jcot\\phi }} e^{j\\pi cot\\phi u^{2}}\\int _{-\\infty }^{\\infty }e^{-j2\\pi csc\\phi ut}e^{j\\pi cot\\phi \\ t^{2}}x(t)dt \\quad (FRFTs)"
},
{
"math_id": 60,
"text": "H(u) = S(-u+u_0), \\quad H(u)=\\begin{cases} 1 \\ \\ \\ u < u_0 \\\\ 0\\ \\ \\ u > u_0 \\ \\ \\ \\end{cases}"
},
{
"math_id": 61,
"text": "S(u)"
},
{
"math_id": 62,
"text": "u_0"
}
] |
https://en.wikipedia.org/wiki?curid=15363250
|
15364270
|
Cunningham number
|
In mathematics, specifically in number theory, a Cunningham number is a certain kind of integer named after English mathematician A. J. C. Cunningham.
Definition.
Cunningham numbers are a simple type of binomial number – they are of the form
formula_0
where "b" and "n" are integers and "b" is not a perfect power. They are denoted "C"±("b", "n").
Terms.
The first fifteen terms in the sequence of Cunningham numbers are:
3, 5, 7, 8, 9, 10, 15, 17, 24, 26, 28, 31, 33, 35, 37, ... (sequence in the OEIS)
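These terms can be reproduced by brute force. The sketch below assumes, consistently with the listed terms, that the exponent satisfies "n" ≥ 2:

```python
def is_perfect_power(b):
    # b is a perfect power if b = m**k for some integers m >= 2, k >= 2
    for k in range(2, b.bit_length() + 1):
        m = round(b ** (1.0 / k))
        for cand in (m - 1, m, m + 1):
            if cand >= 2 and cand ** k == b:
                return True
    return False

def cunningham_numbers(limit):
    # all b**n +/- 1 up to limit, with b not a perfect power and n >= 2
    nums = set()
    for b in range(2, limit):
        if is_perfect_power(b):
            continue
        n = 2
        while b ** n - 1 <= limit:
            nums.add(b ** n - 1)
            if b ** n + 1 <= limit:
                nums.add(b ** n + 1)
            n += 1
    return sorted(nums)
```

Running cunningham_numbers(37) returns exactly the fifteen terms listed above.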
Properties.
formula_1 and formula_2 are both contained within the Cunningham numbers, and contain only odd and even numbers, respectively.
Primality.
Establishing whether or not a given Cunningham number is prime has been the main focus of research around this type of number. Two particularly famous families of Cunningham numbers in this respect are the Fermat numbers, which are those of the form "C"+(2, 2"m"), and the Mersenne numbers, which are of the form "C"−(2, "n").
Cunningham worked on gathering together all known data on which of these numbers were prime. In 1925 he published tables which summarised his findings with H. J. Woodall, and much computation has been done in the intervening time to fill these tables.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "b^n\\pm1"
},
{
"math_id": 1,
"text": "6^n\\pm1"
},
{
"math_id": 2,
"text": "5^n\\pm1"
}
] |
https://en.wikipedia.org/wiki?curid=15364270
|
15366532
|
Preordered class
|
Class equipped with a preorder
In mathematics, a preordered class is a class equipped with a preorder.
Definition.
When dealing with a class "C", it is possible to define a class relation on "C" as a subclass of the power class "C formula_0 C" . Then, it is convenient to use the language of relations on a set.
A preordered class is a class with a preorder on it. "Partially ordered class" and "totally ordered class" are defined in a similar way. These concepts generalize respectively those of preordered set, partially ordered set and totally ordered set. However, it is difficult to work with them as in the "small" case because many constructions common in set theory are no longer possible in this framework.
Equivalently, a preordered class is a thin category, that is, a category with at most one morphism from an object to another.
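In the "small" (finite-set) case, the preorder axioms of reflexivity and transitivity can be checked directly. A brief sketch, using the divisibility relation as an illustrative preorder:

```python
def is_preorder(elements, rel):
    # rel is a set of ordered pairs (a, b), read as "a precedes b"
    reflexive = all((a, a) in rel for a in elements)
    transitive = all((a, c) in rel
                     for (a, b) in rel for (b2, c) in rel if b == b2)
    return reflexive and transitive

# divisibility on {1, 2, 4} is a preorder (in fact a partial order)
els = {1, 2, 4}
divides = {(a, b) for a in els for b in els if b % a == 0}
ok = is_preorder(els, divides)

# a relation missing the reflexive pairs is not a preorder
not_ok = is_preorder(els, {(1, 2), (2, 4)})
```
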
|
[
{
"math_id": 0,
"text": " \\times "
}
] |
https://en.wikipedia.org/wiki?curid=15366532
|
1536920
|
Proximity space
|
Structure describing a notion of "nearness" between subsets
In topology, a proximity space, also called a nearness space, is an axiomatization of the intuitive notion of "nearness" that holds set-to-set, as opposed to the better known point-to-set notion that characterizes topological spaces.
The concept was described by Frigyes Riesz (1909) but ignored at the time. It was rediscovered and axiomatized by V. A. Efremovič in 1934 under the name of infinitesimal space, but not published until 1951. In the interim, A. D. Wallace (1941) discovered a version of the same concept under the name of separation space.
Definition.
A proximity space formula_0 is a set formula_1 with a relation formula_2 between subsets of formula_1 satisfying the following properties:
For all subsets formula_3
If formula_4 then formula_5
If formula_4 then formula_6
If formula_7 then formula_4
formula_8 if and only if (formula_4 or formula_9)
If (for all subsets formula_10 formula_11 or formula_12) then formula_4
Proximity without the first axiom is called quasi-proximity (but then Axioms 2 and 4 must be stated in a two-sided fashion).
If formula_4 we say formula_13 is near formula_14 or formula_13 and formula_14 are proximal; otherwise we say formula_13 and formula_14 are apart. We say formula_14 is a proximal- or formula_2-neighborhood of formula_15 written formula_16 if and only if formula_13 and formula_17 are apart.
The main properties of this set neighborhood relation, listed below, provide an alternative axiomatic characterization of proximity spaces.
For all subsets formula_18
formula_19
If formula_20 then formula_21
If formula_22 then formula_23
If formula_20 and formula_24 then formula_25
If formula_20 then formula_26
If formula_20 then there exists formula_27 such that formula_28
A proximity space is called separated if formula_29implies formula_30
A proximity or proximal map is one that preserves nearness, that is, given formula_31 if formula_4 in formula_32 then formula_33 in formula_34 Equivalently, a map is proximal if the inverse map preserves proximal neighborhoodness. In the same notation, this means if formula_35 holds in formula_36 then formula_37 holds in formula_38
Properties.
Given a proximity space, one can define a topology by letting formula_39 be a Kuratowski closure operator. If the proximity space is separated, the resulting topology is Hausdorff. Proximity maps will be continuous between the induced topologies.
The resulting topology is always completely regular. This can be proven by imitating the usual proofs of Urysohn's lemma, using the last property of proximal neighborhoods to create the infinite increasing chain used in proving the lemma.
Given a compact Hausdorff space, there is a unique proximity space whose corresponding topology is the given topology: formula_13 is near formula_14 if and only if their closures intersect. More generally, proximities classify the compactifications of a completely regular Hausdorff space.
A uniform space formula_1 induces a proximity relation by declaring formula_13 is near formula_14 if and only if formula_40 has nonempty intersection with every entourage. Uniformly continuous maps will then be proximally continuous.
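On a finite discrete space, closures are the sets themselves, so the compact-Hausdorff proximity above reduces to "A is near B if and only if A ∩ B ≠ ∅". The axioms can then be verified exhaustively; the three-point universe below is illustrative:

```python
from itertools import combinations

X = frozenset({0, 1, 2})
# all subsets of X
P = [frozenset(c) for r in range(len(X) + 1)
     for c in combinations(sorted(X), r)]

def near(a, b):
    # intersection proximity: A delta B iff A and B meet
    return len(a & b) > 0

symmetry = all(near(a, b) == near(b, a) for a in P for b in P)
union_axiom = all(near(a, b | c) == (near(a, b) or near(a, c))
                  for a in P for b in P for c in P)
# Efremovic-style axiom: apart sets are separated by some subset E
efremovic = all(near(a, b) or any(not near(a, e) and not near(b, X - e)
                                  for e in P)
                for a in P for b in P)
```

For two apart sets A and B, taking E = B witnesses the separation axiom, which is why the exhaustive check succeeds.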
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(X, \\delta)"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "\\delta"
},
{
"math_id": 3,
"text": "A, B, C \\subseteq X"
},
{
"math_id": 4,
"text": "A \\;\\delta\\; B"
},
{
"math_id": 5,
"text": "B \\;\\delta\\; A"
},
{
"math_id": 6,
"text": "A \\neq \\varnothing"
},
{
"math_id": 7,
"text": "A \\cap B \\neq \\varnothing"
},
{
"math_id": 8,
"text": "A \\;\\delta\\; (B \\cup C)"
},
{
"math_id": 9,
"text": "A \\;\\delta\\; C"
},
{
"math_id": 10,
"text": "E,"
},
{
"math_id": 11,
"text": "A \\;\\delta\\; E"
},
{
"math_id": 12,
"text": "B \\;\\delta\\; (X \\setminus E)"
},
{
"math_id": 13,
"text": "A"
},
{
"math_id": 14,
"text": "B"
},
{
"math_id": 15,
"text": "A,"
},
{
"math_id": 16,
"text": "A \\ll B,"
},
{
"math_id": 17,
"text": "X \\setminus B"
},
{
"math_id": 18,
"text": "A, B, C, D \\subseteq X"
},
{
"math_id": 19,
"text": "X \\ll X"
},
{
"math_id": 20,
"text": "A \\ll B"
},
{
"math_id": 21,
"text": "A \\subseteq B"
},
{
"math_id": 22,
"text": "A \\subseteq B \\ll C \\subseteq D"
},
{
"math_id": 23,
"text": "A \\ll D"
},
{
"math_id": 24,
"text": "A \\ll C"
},
{
"math_id": 25,
"text": "A \\ll B \\cap C"
},
{
"math_id": 26,
"text": "X \\setminus B \\ll X \\setminus A"
},
{
"math_id": 27,
"text": "E"
},
{
"math_id": 28,
"text": "A \\ll E \\ll B."
},
{
"math_id": 29,
"text": "\\{ x \\} \\;\\delta\\; \\{ y \\}"
},
{
"math_id": 30,
"text": "x = y."
},
{
"math_id": 31,
"text": "f : (X, \\delta) \\to \\left(X^*, \\delta^*\\right),"
},
{
"math_id": 32,
"text": "X,"
},
{
"math_id": 33,
"text": "f[A] \\;\\delta^*\\; f[B]"
},
{
"math_id": 34,
"text": "X^*."
},
{
"math_id": 35,
"text": "C \\ll^* D"
},
{
"math_id": 36,
"text": "X^*,"
},
{
"math_id": 37,
"text": "f^{-1}[C] \\ll f^{-1}[D]"
},
{
"math_id": 38,
"text": "X."
},
{
"math_id": 39,
"text": "A \\mapsto \\left\\{ x : \\{ x \\} \\;\\delta\\; A \\right\\}"
},
{
"math_id": 40,
"text": "A \\times B"
}
] |
https://en.wikipedia.org/wiki?curid=1536920
|
1536947
|
Kruskal–Katona theorem
|
About the numbers of faces of different dimensions in an abstract simplicial complex
In algebraic combinatorics, the Kruskal–Katona theorem gives a complete characterization of the "f"-vectors of abstract simplicial complexes. It includes as a special case the Erdős–Ko–Rado theorem and can be restated in terms of uniform hypergraphs. It is named after Joseph Kruskal and Gyula O. H. Katona, but has been independently discovered by several others.
Statement.
Given two positive integers "N" and "i", there is a unique way to expand "N" as a sum of binomial coefficients as follows:
formula_0
This expansion can be constructed by applying the greedy algorithm: set "n""i" to be the maximal "n" such that formula_1 replace "N" with the difference, "i" with "i" − 1, and repeat until the difference becomes zero. Define
formula_2
Statement for simplicial complexes.
An integral vector formula_3 is the "f"-vector of some formula_4-dimensional simplicial complex if and only if
formula_5
Statement for uniform hypergraphs.
Let "A" be a set consisting of "N" distinct "i"-element subsets of a fixed set "U" ("the universe") and "B" be the set of all formula_6-element subsets of the sets in "A". Expand "N" as above. Then the cardinality of "B" is bounded below as follows:
formula_7
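The greedy expansion and the resulting lower bound can be computed directly. A short sketch (function names are illustrative):

```python
from math import comb

def cascade(N, i):
    # greedy binomial expansion: N = C(n_i, i) + C(n_{i-1}, i-1) + ...
    terms = []
    while N > 0 and i >= 1:
        n = i
        while comb(n + 1, i) <= N:
            n += 1                  # maximal n with C(n, i) <= N
        terms.append((n, i))
        N -= comb(n, i)
        i -= 1
    return terms

def shadow_bound(N, i, r=1):
    # Kruskal-Katona lower bound on the number of (i-r)-element subsets
    return sum(comb(n, j - r) for (n, j) in cascade(N, i))
```

For example, any 5 three-element sets have at least shadow_bound(5, 3) = 8 two-element subsets, a bound attained by the first 5 sets in colexicographic order.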
Lovász' simplified formulation.
The following weaker but useful form is due to László Lovász (1993, 13.31b). Let "A" be a set of "i"-element subsets of a fixed set "U" ("the universe") and "B" be the set of all formula_6-element subsets of the sets in "A". If formula_8 then formula_9.
In this formulation, "x" need not be an integer. The value of the binomial expression is formula_10.
Ingredients of the proof.
For every positive "i", list all "i"-element subsets "a"1 < "a"2 < … < "a""i" of the set N of natural numbers in the colexicographical order. For example, for "i" = 3, the list begins
formula_11
Given a vector formula_12 with positive integer components, let "Δ""f" be the subset of the power set 2N consisting of the empty set together with the first formula_13 "i"-element subsets of N in the list for "i" = 1, …, "d". Then the following conditions are equivalent:
1. The vector formula_12 is the "f"-vector of a simplicial complex.
2. "Δ""f" is a simplicial complex.
3. formula_14
The difficult implication is 1 ⇒ 2.
History.
The theorem is named after Joseph Kruskal and Gyula O. H. Katona, who published it in 1963 and 1968 respectively.
According to , it was discovered independently by , , Marcel-Paul Schützenberger (1959), , and .
Donald Knuth (2011) writes that the earliest of these references, by Schützenberger, has an incomplete proof.
|
[
{
"math_id": 0,
"text": " N=\\binom{n_i}{i}+\\binom{n_{i-1}}{i-1}+\\ldots+\\binom{n_j}{j},\\quad\nn_i > n_{i-1} > \\ldots > n_j \\geq j\\geq 1. "
},
{
"math_id": 1,
"text": " N\\geq \\binom{n}{i}, "
},
{
"math_id": 2,
"text": " N^{(i-1)}=\\binom{n_i}{i-1}+\\binom{n_{i-1}}{i-2}+\\ldots+\\binom{n_j}{j-1}. "
},
{
"math_id": 3,
"text": "(f_0, f_1, ..., f_{d-1})"
},
{
"math_id": 4,
"text": "(d-1)"
},
{
"math_id": 5,
"text": " 0 \\leq f_{i}^{(i)} \\leq f_{i-1},\\quad 1\\leq i\\leq d-1."
},
{
"math_id": 6,
"text": "(i-r)"
},
{
"math_id": 7,
"text": " |B| \\geq \\binom{n_i}{i-r}+\\binom{n_{i-1}}{i-r-1}+\\ldots+\\binom{n_j}{j-r}. "
},
{
"math_id": 8,
"text": "|A| = \\binom{x}{i}"
},
{
"math_id": 9,
"text": "|B| \\geq \\binom{x}{i-r}"
},
{
"math_id": 10,
"text": "\\binom{x}{i} = \\frac{x(x-1)\\dots(x-i+1)}{i!}"
},
{
"math_id": 11,
"text": " 123, 124, 134, 234, 125, 135, 235, 145, 245, 345, \\ldots. "
},
{
"math_id": 12,
"text": "f = (f_0, f_1, ..., f_{d-1})"
},
{
"math_id": 13,
"text": "f_{i-1}"
},
{
"math_id": 14,
"text": " f_{i}^{(i)} \\leq f_{i-1},\\quad 1\\leq i\\leq d-1."
}
] |
https://en.wikipedia.org/wiki?curid=1536947
|
1536976
|
MINQUE
|
Theory in the field of statistics
In statistics, the theory of minimum norm quadratic unbiased estimation (MINQUE) was developed by C. R. Rao. MINQUE is a theory alongside other estimation methods in estimation theory, such as the method of moments or maximum likelihood estimation. Similar to the theory of best linear unbiased estimation, MINQUE is specifically concerned with linear regression models. The method was originally conceived to estimate heteroscedastic error variance in multiple linear regression. MINQUE estimators also provide an alternative to maximum likelihood estimators or restricted maximum likelihood estimators for variance components in mixed effects models. MINQUE estimators are quadratic forms of the response variable and are used to estimate a linear function of the variances.
Principles.
We are concerned with a mixed effects model for the random vector formula_0 with the following linear structure.
formula_1
Here, formula_2 is a design matrix for the fixed effects, formula_3 represents the unknown fixed-effect parameters, formula_4 is a design matrix for the formula_5-th random-effect component, and formula_6is a random vector for the formula_5-th random-effect component. The random effects are assumed to have zero mean (formula_7) and be uncorrelated (formula_8). Furthermore, any two random effect vectors are also uncorrelated (formula_9). The unknown variances formula_10 represent the variance components of the model.
This is a general model that captures commonly used linear regression models.
A compact representation for the model is the following, where formula_17 and formula_18.
formula_19
Note that this model makes no distributional assumptions about formula_15 other than the first and second moments.
formula_20
formula_21
The goal in MINQUE is to estimate formula_22 using a quadratic form formula_23. MINQUE estimators are derived by identifying a matrix formula_24 such that the estimator has some desirable properties, described below.
Optimal Estimator Properties to Constrain MINQUE.
Invariance to translation of the fixed effects.
Consider a new fixed-effect parameter formula_25, which represents a translation of the original fixed effect. The new, equivalent model is now the following.
formula_26
Under this equivalent model, the MINQUE estimator is now formula_27. Rao argued that since the underlying models are equivalent, this estimator should be equal to formula_28. This can be achieved by constraining formula_24 such that formula_29, which ensures that all terms other than formula_28 in the expansion of the quadratic form are zero.
Unbiased estimation.
Suppose that we constrain formula_29, as argued in the section above. Then, the MINQUE estimator has the following form
formula_30
To ensure that this estimator is unbiased, the expectation of the estimator formula_31 must equal the parameter of interest, formula_32. Below, the expectation of the estimator can be decomposed for each component since the components are uncorrelated with each other. Furthermore, the cyclic property of the trace is used to evaluate the expectation with respect to formula_33.
formula_34
To ensure that this estimator is unbiased, Rao suggested setting formula_35, which can be accomplished by constraining formula_24 such that formula_36 for all components.
Minimum Norm.
Rao argued that if formula_37 were observed, a "natural" estimator for formula_32 would be the following, since formula_38. Here, formula_39 is defined as a diagonal matrix.
formula_40
The difference between the proposed estimator and the natural estimator is formula_41. This difference can be minimized by minimizing the norm of the matrix formula_42.
Procedure.
Given the constraints and optimization strategy derived from the optimal properties above, the MINQUE estimator formula_43 for formula_44 is derived by choosing a matrix formula_24 that minimizes formula_42, subject to the constraints formula_45 and formula_46.
Examples of Estimators.
Standard Estimator for Homoscedastic Error.
In the Gauss-Markov model, the error variance formula_47 is estimated using the following.
formula_48
This estimator is unbiased and can be shown to minimize the Euclidean norm of the form formula_42. Thus, the standard estimator for error variance in the Gauss-Markov model is a MINQUE estimator.
Random Variables with Common Mean and Heteroscedastic Error.
For random variables formula_49 with a common mean and different variances formula_50, the MINQUE estimator for formula_51 is formula_52, where formula_53 and formula_54.
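This estimator is simple to compute directly. Below is a minimal NumPy sketch (the helper name is an illustrative choice, not from the original literature); a handy consistency check is that the per-observation estimates always sum to "n" times the usual sample variance.

```python
import numpy as np

def minque_hetero(y):
    """MINQUE estimates of the per-observation variances sigma_i^2 for
    observations sharing a common mean (a sketch of the formula above)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    ybar = y.mean()
    s2 = np.sum((y - ybar) ** 2) / (n - 1)        # usual sample variance
    return n / (n - 2) * (y - ybar) ** 2 - s2 / (n - 2)
```

Note that individual MINQUE estimates can be negative, a well-known feature of quadratic unbiased estimation of variances.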
Estimator for Variance Components.
Rao proposed a MINQUE estimator for the variance components model based on minimizing the Euclidean norm. The Euclidean norm formula_55 is the square root of the sum of squares of all elements in the matrix. When evaluating this norm below, formula_56. Furthermore, using the cyclic property of traces, formula_57.
formula_58
Note that since formula_59 does not depend on formula_60, the MINQUE with the Euclidean norm is obtained by identifying the matrix formula_60 that minimizes formula_61, subject to the MINQUE constraints discussed above.
Rao showed that the matrix formula_60 that satisfies this optimization problem is
formula_62,
where formula_63, formula_64 is the projection matrix into the column space of formula_65, and formula_66 represents the generalized inverse of a matrix.
Therefore, the MINQUE estimator is the following, where the vectors formula_67 and formula_68 are defined based on the sum.
formula_69
The vector formula_67 is obtained by using the constraint formula_70. That is, the vector represents the solution to the following system of equations formula_71.
formula_72
This can be written as a matrix product formula_73, where formula_74 and formula_75 is the following.
formula_76
Then, formula_77. This implies that the MINQUE is formula_78. Note that formula_79, where formula_80. Therefore, the estimator for the variance components is formula_81.
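The procedure above translates directly into code. The following NumPy sketch is illustrative (function and variable names are our own, and no effort is made at numerical efficiency); with a single random-effect component whose design matrix is the identity, it reproduces the standard Gauss–Markov error-variance estimate RSS/(n − m).

```python
import numpy as np

def minque_components(y, X, U_list):
    """MINQUE estimate of the variance components (Euclidean-norm version).

    y: response vector, X: fixed-effect design matrix,
    U_list: list of random-effect design matrices U_i.
    Returns the estimated component vector sigma_hat = S^- Q.
    """
    n = len(y)
    V_list = [U @ U.T for U in U_list]            # V_i = U_i U_i^T
    V = sum(V_list)
    V_inv = np.linalg.inv(V)
    # projection onto the column space of X (in the V^{-1} inner product)
    P = X @ np.linalg.pinv(X.T @ V_inv @ X) @ X.T @ V_inv
    R = V_inv @ (np.eye(n) - P)
    S = np.array([[np.trace(R @ Vi @ R @ Vj) for Vj in V_list]
                  for Vi in V_list])
    Q = np.array([y @ R @ Vi @ R @ y for Vi in V_list])   # Q_i = y'R V_i R y
    return np.linalg.pinv(S) @ Q
```

With `U_list = [np.eye(n)]` the matrix R reduces to the residual projection I − P, so the single returned component equals the residual sum of squares divided by n − m.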
Extensions.
MINQUE estimators can be obtained without the invariance criteria, in which case the estimator is only unbiased and minimizes the norm. Such estimators have slightly different constraints on the minimization problem.
The model can be extended to estimate covariance components. In such a model, the random effects of a component are assumed to have a common covariance structure formula_82. A MINQUE estimator for a mixture of variance and covariance components was also proposed. In this model, formula_82 for formula_83 and formula_84 for formula_85.
|
[
{
"math_id": 0,
"text": "\\mathbf{Y} \\in \\mathbb{R}^n"
},
{
"math_id": 1,
"text": "\\mathbf{Y} = \\mathbf{X}\\boldsymbol\\beta + \\mathbf{U}_1 \\boldsymbol\\xi_1 \n+ \\cdots + \\mathbf{U}_k \\boldsymbol\\xi_k"
},
{
"math_id": 2,
"text": "\\mathbf{X} \\in \\mathbb{R}^{n\\times m}"
},
{
"math_id": 3,
"text": "\\boldsymbol\\beta \\in \\mathbb{R}^m"
},
{
"math_id": 4,
"text": "\\mathbf{U}_i \\in \\mathbb{R}^{n\\times c_i}"
},
{
"math_id": 5,
"text": "i"
},
{
"math_id": 6,
"text": "\\boldsymbol\\xi_i\\in\\mathbb{R}^{c_i}"
},
{
"math_id": 7,
"text": "\\mathbb{E}[\\boldsymbol\\xi_i]=\\mathbf{0}"
},
{
"math_id": 8,
"text": "\\mathbb{V}[\\boldsymbol\\xi_i]=\\sigma^2_i\\mathbf{I}_{c_i}"
},
{
"math_id": 9,
"text": "\\mathbb{V}[\\boldsymbol\\xi_i,\n\\boldsymbol\\xi_j]=\\mathbf{0}\\,\n\\forall i\\neq j"
},
{
"math_id": 10,
"text": "\\sigma^2_1,\\cdots,\\sigma^2_k"
},
{
"math_id": 11,
"text": "\\mathbf{U}_1=\\mathbf{I}_n"
},
{
"math_id": 12,
"text": "\\mathbf{Y}=\\mathbf{X}\\boldsymbol\\beta + \\boldsymbol\\epsilon"
},
{
"math_id": 13,
"text": "\\mathbb{E}[\\boldsymbol\\epsilon]=\\mathbf{0}"
},
{
"math_id": 14,
"text": "\\mathbb{V}[\\boldsymbol\\epsilon]=\\sigma^2_1 \\mathbf{I}_n"
},
{
"math_id": 15,
"text": "\\mathbf{Y}"
},
{
"math_id": 16,
"text": "\\mathbf{U}_i"
},
{
"math_id": 17,
"text": "\\mathbf{U} = \\left[\\begin{array}{c|c|c}\\mathbf{U}_1&\\cdots&\\mathbf{U}_k\\end{array}\\right]"
},
{
"math_id": 18,
"text": "\\boldsymbol\\xi^\\top = \\left[\\begin{array}{c|c|c}\n\\boldsymbol\\xi_1^\\top&\\cdots&\\boldsymbol\\xi_k^\\top\\end{array}\\right]"
},
{
"math_id": 19,
"text": "\\mathbf{Y}=\\mathbf{X}\\boldsymbol\\beta+\\mathbf{U}\\boldsymbol\\xi"
},
{
"math_id": 20,
"text": "\\mathbb{E}[\\mathbf{Y}] = \\mathbf{X}\\boldsymbol\\beta"
},
{
"math_id": 21,
"text": "\\mathbb{V}[\\mathbf{Y}]=\\sigma^2_1\\mathbf{U}_1\\mathbf{U}_1^\\top + \\cdots +\n\\sigma^2_k \\mathbf{U}_k \\mathbf{U}_k^\\top\n\\equiv \\sigma^2_1\\mathbf{V}_1 + \\cdots + \\sigma^2_k \\mathbf{V}_k"
},
{
"math_id": 22,
"text": "\\theta = \\sum_{i=1}^k p_i \\sigma^2_i"
},
{
"math_id": 23,
"text": "\\hat{\\theta}=\\mathbf{Y}^\\top \\mathbf{A} \\mathbf{Y}"
},
{
"math_id": 24,
"text": "\\mathbf{A}"
},
{
"math_id": 25,
"text": "\\boldsymbol\\gamma=\\boldsymbol\\beta - \\boldsymbol\\beta_0"
},
{
"math_id": 26,
"text": "\\mathbf{Y} - \\mathbf{X}\\boldsymbol\\beta_0 =\n\\mathbf{X}\\boldsymbol\\gamma + \\mathbf{U}\\boldsymbol\\xi"
},
{
"math_id": 27,
"text": "(\\mathbf{Y} - \\mathbf{X}\\boldsymbol\\beta_0)^\\top \\mathbf{A}\n(\\mathbf{Y} - \\mathbf{X}\\boldsymbol\\beta_0)"
},
{
"math_id": 28,
"text": "\\mathbf{Y}^\\top \\mathbf{A} \\mathbf{Y}"
},
{
"math_id": 29,
"text": "\\mathbf{A}\\mathbf{X} = \\mathbf{0}"
},
{
"math_id": 30,
"text": "\\begin{align}\n\\hat{\\theta} &= \\mathbf{Y}^\\top \\mathbf{A} \\mathbf{Y}\\\\\n&= (\\mathbf{X}\\boldsymbol\\beta + \\mathbf{U}\\boldsymbol\\xi)^\\top \\mathbf{A} (\\mathbf{X}\\boldsymbol\\beta + \\mathbf{U}\\boldsymbol\\xi)\\\\\n&= \\boldsymbol\\xi^\\top\\mathbf{U}^\\top\\mathbf{A}\\mathbf{U}\\boldsymbol\\xi\n\\end{align}"
},
{
"math_id": 31,
"text": "\\mathbb{E}[\\hat{\\theta}]"
},
{
"math_id": 32,
"text": "\\theta"
},
{
"math_id": 33,
"text": "\\boldsymbol\\xi_i"
},
{
"math_id": 34,
"text": "\\begin{align}\n\\mathbb{E}[\\hat{\\theta}] &= \\mathbb{E}[\\boldsymbol\\xi^\\top \\mathbf{U}^\\top \\mathbf{A} \\mathbf{U} \\boldsymbol\\xi]\\\\\n&= \\sum_{i=1}^k \\mathbb{E}[\\boldsymbol\\xi_i^\\top\\mathbf{U}_i^\\top\\mathbf{A}\\mathbf{U}_i\\boldsymbol\\xi_i]\\\\\n&= \\sum_{i=1}^k \\sigma_i^2 \\mathrm{Tr}[\\mathbf{U}_i^\\top \\mathbf{A} \\mathbf{U}_i]\n\\end{align}"
},
{
"math_id": 35,
"text": "\\sum_{i=1}^k \\sigma_i^2 \\mathrm{Tr}[\\mathbf{U}_i^\\top \\mathbf{A} \\mathbf{U}_i] = \\sum_{i=1}^k p_i \\sigma_i^2"
},
{
"math_id": 36,
"text": "\\mathrm{Tr}[\\mathbf{U}_i^\\top \\mathbf{A} \\mathbf{U}_i] = \\mathrm{Tr}[\\mathbf{A}\\mathbf{V}_i] = p_i"
},
{
"math_id": 37,
"text": "\\boldsymbol\\xi"
},
{
"math_id": 38,
"text": "\\mathbb{E}[\\boldsymbol\\xi_i^\\top\\boldsymbol\\xi_i]=c_i \\sigma_i^2"
},
{
"math_id": 39,
"text": "\\boldsymbol\\Delta"
},
{
"math_id": 40,
"text": "\\frac{p_1}{c_1}\\boldsymbol\\xi_1^\\top\\boldsymbol\\xi_1 + \\cdots + \\frac{p_k}{c_k}\\boldsymbol\\xi_k^\\top\\boldsymbol\\xi_k \n= \\boldsymbol\\xi^\\top\\left[\\mathrm{diag}\\left(\\frac{p_1}{c_i},\\cdots,\\frac{p_k}{c_k}\\right)\\right]\\boldsymbol\\xi\n\\equiv \\boldsymbol\\xi^\\top\\boldsymbol\\Delta\\boldsymbol\\xi"
},
{
"math_id": 41,
"text": "\\boldsymbol\\xi^\\top (\\mathbf{U}^\\top \\mathbf{A} \\mathbf{U} - \\boldsymbol\\Delta)\\boldsymbol\\xi"
},
{
"math_id": 42,
"text": "\\lVert \\mathbf{U}^\\top\\mathbf{A}\\mathbf{U}-\\boldsymbol\\Delta \\rVert"
},
{
"math_id": 43,
"text": "\\hat{\\theta}"
},
{
"math_id": 44,
"text": "\\theta=\\sum_{i=1}^k p_i\\sigma_i^2"
},
{
"math_id": 45,
"text": "\\mathbf{A}\\mathbf{X}=\\mathbf{0}"
},
{
"math_id": 46,
"text": "\\mathrm{Tr}[\\mathbf{A}\\mathbf{V}_i]=p_i"
},
{
"math_id": 47,
"text": "\\sigma^2"
},
{
"math_id": 48,
"text": "s^2 = \\frac{1}{n-m}(\\mathbf{Y}-\\mathbf{X}\\hat{\\boldsymbol\\beta})^\\top(\\mathbf{Y}-\\mathbf{X}\\hat{\\boldsymbol\\beta})"
},
{
"math_id": 49,
"text": "Y_1,\\cdots,Y_n"
},
{
"math_id": 50,
"text": "\\sigma^2_1,\\cdots,\\sigma^2_n"
},
{
"math_id": 51,
"text": "\\sigma^2_i"
},
{
"math_id": 52,
"text": "\\frac{n}{n-2}(Y_i - \\overline{Y})^2 - \\frac{s^2}{n - 2}"
},
{
"math_id": 53,
"text": "\\overline{Y} = \\frac{1}{n} \\sum_{i=1}^n Y_i"
},
{
"math_id": 54,
"text": "s^2 = \\frac{1}{n-1} \\sum_{i=1}^n (Y_i - \\overline{Y})^2"
},
{
"math_id": 55,
"text": "\\lVert \\cdot \\rVert_2"
},
{
"math_id": 56,
"text": "\\mathbf{V}=\\mathbf{V}_1+\\cdots+\\mathbf{V}_k = \\mathbf{U} \\mathbf{U}^\\top"
},
{
"math_id": 57,
"text": "\\mathrm{Tr}[\\mathbf{U}^\\top\\mathbf{A}\\mathbf{U}\\boldsymbol\\Delta] = \n\\mathrm{Tr}[\\mathbf{A}\\mathbf{U}\\boldsymbol\\Delta\\mathbf{U}^\\top] =\n\\mathrm{Tr}\\left[\\sum_{i=1}^k \\frac{p_i}{c_i} \\mathbf{A}\\mathbf{V}_i \\right] =\n\\mathrm{Tr}[\\boldsymbol\\Delta\\boldsymbol\\Delta] "
},
{
"math_id": 58,
"text": "\\begin{align}\n\\lVert \\mathbf{U}^\\top\\mathbf{A}\\mathbf{U} - \\boldsymbol\\Delta \\rVert^2_2 &= (\\mathbf{U}^\\top\\mathbf{A}\\mathbf{U} - \\boldsymbol\\Delta)^\\top (\\mathbf{U}^\\top\\mathbf{A}\\mathbf{U} - \\boldsymbol\\Delta)\\\\\n&= \\mathrm{Tr}[\\mathbf{U}^\\top\\mathbf{A}\\mathbf{U}\\mathbf{U}\\mathbf{A}\\mathbf{U}^\\top] - \\mathrm{Tr}[2\\mathbf{U}^\\top\\mathbf{A}\\mathbf{U}\\boldsymbol\\Delta] + \\mathrm{Tr}[\\boldsymbol\\Delta\\boldsymbol\\Delta]\\\\\n&= \\mathrm{Tr}[\\mathbf{A}\\mathbf{V}\\mathbf{A}\\mathbf{V}] - \\mathrm{Tr}[\\boldsymbol\\Delta\\boldsymbol\\Delta]\n\\end{align}"
},
{
"math_id": 59,
"text": "\\mathrm{Tr}[\\boldsymbol\\Delta\\boldsymbol\\Delta] "
},
{
"math_id": 60,
"text": "\\mathbf{A} "
},
{
"math_id": 61,
"text": "\\mathrm{Tr}[\\mathbf{A}\\mathbf{V}\\mathbf{A}\\mathbf{V}] "
},
{
"math_id": 62,
"text": "\\mathbf{A}_\\star=\\sum_{i=1}^k \\lambda_i \\mathbf{R}\\mathbf{V}_i\\mathbf{R} "
},
{
"math_id": 63,
"text": "\\mathbf{R} = \\mathbf{V}^{-1}(\\mathbf{I}-\\mathbf{P}) "
},
{
"math_id": 64,
"text": "\\mathbf{P}=\\mathbf{X}(\\mathbf{X}^\\top\\mathbf{V}^{-1}\\mathbf{X})^{-}\\mathbf{X}^\\top\\mathbf{V}^{-1} "
},
{
"math_id": 65,
"text": "\\mathbf{X} "
},
{
"math_id": 66,
"text": "(\\cdot)^{-} "
},
{
"math_id": 67,
"text": "\\boldsymbol\\lambda "
},
{
"math_id": 68,
"text": "\\mathbf{Q} "
},
{
"math_id": 69,
"text": "\\begin{align}\n\\hat{\\theta} &= \\mathbf{Y}^\\top \\mathbf{A}_\\star\\mathbf{Y}\\\\\n&= \\sum_{i=1}^k \\lambda_i \\mathbf{Y}^\\top\\mathbf{R}\\mathbf{V}_i\\mathbf{R}\\mathbf{Y}\\\\\n&\\equiv\\sum_{i=1}^k \\lambda_i Q_i\\\\\n&\\equiv \\boldsymbol\\lambda^\\top \\mathbf{Q}\n\\end{align} "
},
{
"math_id": 70,
"text": "\\mathrm{Tr}[\\mathbf{A}_\\star\\mathbf{V}_i]=p_i"
},
{
"math_id": 71,
"text": "\\forall j\\in\\{1,\\cdots,k\\} "
},
{
"math_id": 72,
"text": "\\begin{align}\n\\mathrm{Tr}[\\mathbf{A}_\\star\\mathbf{V}_j] &= p_j\\\\\n\\mathrm{Tr}\\left[ \\sum_{i=1}^k \\lambda_i \\mathbf{R}\\mathbf{V}_i\\mathbf{R}\\mathbf{V}_j \\right] &= p_j\\\\\n\\sum_{i=1}^k \\lambda_i \\mathrm{Tr}[\\mathbf{R}\\mathbf{V}_i\\mathbf{R}\\mathbf{V}_j] &= p_j\n\\end{align} "
},
{
"math_id": 73,
"text": "\\mathbf{S}\\boldsymbol\\lambda=\\mathbf{p} "
},
{
"math_id": 74,
"text": "\\mathbf{p}=[p_1\\,\\cdots\\,p_k]^\\top "
},
{
"math_id": 75,
"text": "\\mathbf{S} "
},
{
"math_id": 76,
"text": "\\mathbf{S}=\\begin{bmatrix}\n\\mathrm{Tr}[\\mathbf{R}\\mathbf{V}_1\\mathbf{R}\\mathbf{V}_1] & \\cdots & \\mathrm{Tr}[\\mathbf{R}\\mathbf{V}_k\\mathbf{R}\\mathbf{V}_1]\\\\\n\\vdots & \\ddots & \\vdots\\\\\n\\mathrm{Tr}[\\mathbf{R}\\mathbf{V}_1\\mathbf{R}\\mathbf{V}_k] & \\cdots & \\mathrm{Tr}[\\mathbf{R}\\mathbf{V}_k\\mathbf{R}\\mathbf{V}_k]\n\\end{bmatrix} "
},
{
"math_id": 77,
"text": "\\boldsymbol\\lambda=\\mathbf{S}^{-}\\mathbf{p} "
},
{
"math_id": 78,
"text": "\\hat{\\theta}=\\boldsymbol\\lambda^\\top\\mathbf{Q}=\\mathbf{p}^\\top(\\mathbf{S}^{-})^\\top\\mathbf{Q}=\\mathbf{p}^\\top\\mathbf{S}^{-}\\mathbf{Q} "
},
{
"math_id": 79,
"text": "\\theta=\\sum_{i=1}^k p_i \\sigma_i^2 = \\mathbf{p}^\\top\\boldsymbol\\sigma "
},
{
"math_id": 80,
"text": "\\boldsymbol\\sigma = [\\sigma^2_1\\,\\cdots\\,\\sigma^2_k]^\\top "
},
{
"math_id": 81,
"text": "\\hat{\\boldsymbol\\sigma}=\\mathbf{S}^{-}\\mathbf{Q} "
},
{
"math_id": 82,
"text": "\\mathbb{V}[\\boldsymbol\\xi_i]=\\boldsymbol\\Sigma"
},
{
"math_id": 83,
"text": "i\\in\n\\{1,\\cdots,s\\}"
},
{
"math_id": 84,
"text": "\\mathbb{V}[\\boldsymbol\\xi_i]=\n\\sigma_i^2\\mathbf{I}_{c_i}"
},
{
"math_id": 85,
"text": "i\\in\\{s+1,\\cdots,k\\}"
}
] |
https://en.wikipedia.org/wiki?curid=1536976
|
1537058
|
Line search
|
Optimization algorithm
In optimization, line search is a basic iterative approach to find a local minimum formula_0 of an objective function formula_1. It first finds a descent direction along which the objective function formula_2 will be reduced, and then computes a step size that determines how far formula_3 should move along that direction. The descent direction can be computed by various methods, such as gradient descent or quasi-Newton method. The step size can be determined either exactly or inexactly.
One-dimensional line search.
Suppose "f" is a one-dimensional function, formula_4, and assume that it is unimodal, that is, contains exactly one local minimum "x"* in a given interval ["a","z"]. This means that "f" is strictly decreasing in [a,x*] and strictly increasing in [x*,"z"]. There are several ways to find an (approximate) minimum point in this case.sec.5
Zero-order methods.
Zero-order methods use only function evaluations (i.e., a value oracle) - not derivatives:sec.5
Zero-order methods are very general - they do not assume differentiability or even continuity.
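For example, golden-section search is a classic zero-order method: it maintains a bracketing interval around the minimum and shrinks it by a factor of 1/φ ≈ 0.618 per function evaluation, using only values of "f" (never derivatives). A minimal sketch:

```python
def golden_section_search(f, a, z, tol=1e-8):
    """Golden-section search for the minimum of a unimodal f on [a, z]."""
    invphi = (5 ** 0.5 - 1) / 2           # 1/phi, about 0.618
    b = z - invphi * (z - a)              # two interior probe points
    c = a + invphi * (z - a)
    fb, fc = f(b), f(c)
    while z - a > tol:
        if fb < fc:                       # minimum lies in [a, c]
            z, c, fc = c, b, fb           # old b becomes the new upper probe
            b = z - invphi * (z - a)
            fb = f(b)
        else:                             # minimum lies in [b, z]
            a, b, fb = b, c, fc           # old c becomes the new lower probe
            c = a + invphi * (z - a)
            fc = f(c)
    return (a + z) / 2
```

Each iteration reuses one of the two previous function values, which is the reason golden-section search needs only one new evaluation per shrink.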
First-order methods.
First-order methods assume that "f" is continuously differentiable, and that we can evaluate not only "f" but also its derivative.sec.5
Curve-fitting methods.
Curve-fitting methods try to attain superlinear convergence by assuming that "f" has some analytic form, e.g. a polynomial of finite degree. At each iteration, there is a set of "working points" in which we know the value of "f" (and possibly also its derivative). Based on these points, we can compute a polynomial that fits the known values, and find its minimum analytically. The minimum point becomes a new working point, and we proceed to the next iteration:sec.5
Curve-fitting methods have superlinear convergence when started close enough to the local minimum, but might diverge otherwise. "Safeguarded curve-fitting methods" simultaneously execute a linear-convergence method in parallel to the curve-fitting method. They check in each iteration whether the point found by the curve-fitting method is close enough to the interval maintained by the safeguard method; if it is not, then the safeguard method is used to compute the next iterate.5.2.3.4
Multi-dimensional line search.
In general, we have a multi-dimensional objective function formula_1. The line-search method first finds a descent direction along which the objective function formula_2 will be reduced, and then computes a step size that determines how far formula_3 should move along that direction. The descent direction can be computed by various methods, such as gradient descent or quasi-Newton method. The step size can be determined either exactly or inexactly. Here is an example gradient method that uses a line search in step 2.3:
1. Set iteration counter formula_9 and make an initial guess formula_10 for the minimum. Pick a tolerance formula_11.
2. Loop:
 2.1. Compute a descent direction formula_12.
 2.2. Define formula_13.
 2.3. Choose formula_14 to "loosely" minimize formula_15 over formula_16.
 2.4. Update formula_17 and formula_18.
3. Until formula_19.
At the line search step (2.3), the algorithm may minimize "h" "exactly", by solving formula_20, or "approximately", by using one of the one-dimensional line-search methods mentioned above. It can also be solved "loosely", by asking for a sufficient decrease in "h" that does not necessarily approximate the optimum. One example of the former is the conjugate gradient method. The latter is called inexact line search and may be performed in a number of ways, such as a backtracking line search or using the Wolfe conditions.
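A backtracking line search with the Armijo sufficient-decrease condition can be sketched as follows; the constants `c` and `tau` are conventional illustrative choices, not values prescribed by the text.

```python
import numpy as np

def backtracking_line_search(f, grad, x, p, alpha=1.0, c=1e-4, tau=0.5):
    """Shrink alpha until the Armijo condition holds:
    f(x + alpha*p) <= f(x) + c * alpha * <grad f(x), p>."""
    fx = f(x)
    slope = grad(x) @ p          # directional derivative, negative for descent
    while f(x + alpha * p) > fx + c * alpha * slope:
        alpha *= tau
    return alpha

def gradient_descent(f, grad, x0, tol=1e-8, max_iter=500):
    """Gradient method using the backtracking line search at each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = -g                                   # steepest-descent direction
        x = x + backtracking_line_search(f, grad, x, p) * p
    return x
```

On a strictly convex quadratic this loose step choice already drives the iterates to the minimizer, illustrating that a sufficient decrease (rather than an exact one-dimensional minimum) is enough in practice.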
Overcoming local minima.
Like other optimization methods, line search may be combined with simulated annealing to allow it to jump over some local minima.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{x}^*"
},
{
"math_id": 1,
"text": "f:\\mathbb R^n\\to\\mathbb R"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "\\mathbf{x}"
},
{
"math_id": 4,
"text": "f:\\mathbb R\\to\\mathbb R"
},
{
"math_id": 5,
"text": "\\sqrt{0.5}\\approx 0.71"
},
{
"math_id": 6,
"text": "\\sqrt{2/3}\\approx 0.82"
},
{
"math_id": 7,
"text": "1/ \\varphi \\approx 0.618"
},
{
"math_id": 8,
"text": "\\varphi \\approx 1.618"
},
{
"math_id": 9,
"text": "k=0"
},
{
"math_id": 10,
"text": "\\mathbf{x}_0"
},
{
"math_id": 11,
"text": "\\epsilon"
},
{
"math_id": 12,
"text": "\\mathbf{p}_k"
},
{
"math_id": 13,
"text": "h(\\alpha_k)=f(\\mathbf{x}_k+\\alpha_k\\mathbf{p}_k)"
},
{
"math_id": 14,
"text": "\\displaystyle \\alpha_k"
},
{
"math_id": 15,
"text": "h"
},
{
"math_id": 16,
"text": "\\alpha_k\\in\\mathbb R_+"
},
{
"math_id": 17,
"text": "\\mathbf{x}_{k+1}=\\mathbf{x}_k+\\alpha_k\\mathbf{p}_k"
},
{
"math_id": 18,
"text": " k=k+1"
},
{
"math_id": 19,
"text": "\\|\\nabla f(\\mathbf{x}_{k+1})\\|<\\epsilon"
},
{
"math_id": 20,
"text": "h'(\\alpha_k)=0"
}
] |
https://en.wikipedia.org/wiki?curid=1537058
|
1537143
|
Newton da Costa
|
Brazilian philosopher and mathematician (1929–2024)
Newton Carneiro Affonso da Costa (16 September 1929 – 16 April 2024) was a Brazilian mathematician, logician, and philosopher. Born in Curitiba, he studied engineering and mathematics at the Federal University of Paraná in Curitiba, and the title of his 1961 Ph.D. dissertation was "Topological spaces and continuous functions".
Work.
Paraconsistency.
Da Costa's international recognition came especially through his work on paraconsistent logic and its application to various fields such as philosophy, law, computing, and artificial intelligence. He was one of the founders of this non-classical logic. In addition, he constructed the theory of quasi-truth that constitutes a generalization of Alfred Tarski's theory of truth, and applied it to the foundations of science.
Other fields; foundations of physics.
The scope of his research also includes model theory, generalized Galois theory, axiomatic foundations of quantum theory and relativity, complexity theory, and abstract logics. Da Costa significantly contributed to the philosophy of logic, paraconsistent modal logics, ontology, and philosophy of science. He served as the President of the Brazilian Association of Logic and the Director of the Institute of Mathematics at the University of São Paulo. He received many awards and held numerous visiting scholarships at universities and centers of research in all continents.
Da Costa and physicist Francisco Antônio Dória axiomatized large portions of classical physics with the help of Patrick Suppes' predicates. They used that technique to show that for the axiomatized version of dynamical systems theory, chaotic properties of those systems are undecidable and Gödel-incomplete, that is, a sentence like "X is chaotic" is undecidable within that axiomatics. They later exhibited similar results for systems in other areas, such as mathematical economics.
Da Costa believes that the significant progress in the field of logic will give rise to new fundamental developments in computing and technology, especially in connection with non-classical logics and their applications.
Variable-binding term operators.
Da Costa was co-discoverer of the truth-set principle and co-creator of the classical logic of variable-binding term operators—both with John Corcoran. He is also co-author with Chris Mortensen of the definitive pre-1980 history of variable-binding term operators in classical first-order logic: “Notes on the theory of variable-binding term operators”, History and Philosophy of Logic, vol.4 (1983) 63–72.
P = NP.
Together with Francisco Antônio Dória, Da Costa published two papers with conditional relative proofs of the consistency of P = NP with the usual set-theoretic axioms ZFC. The results they obtain are similar to the results of DeMillo and Lipton (consistency of P = NP with fragments of arithmetic) and those of Sazonov and Maté (conditional proofs of the consistency of P = NP with strong systems).
Basically da Costa and Doria define a formal sentence [P = NP]' which is the same as P = NP in the standard model for arithmetic; however, because [P = NP]' by its very definition includes a disjunct that is not refutable in ZFC, [P = NP]' is not refutable in ZFC, so ZFC + [P = NP]' is consistent (assuming that ZFC is). The paper then continues by an informal proof of the implication
If ZFC + [P = NP]' is consistent, then so is ZFC + [P = NP].
However, a review by Ralf Schindler points out that this last step is too short and contains a gap. A recently published (2006) clarification by the authors shows that their intent was to exhibit a conditional result that was dependent on what they call a "naïvely plausible condition". The 2003 conditional result can be reformulated, according to da Costa and Doria 2006, as
If ZFC + [P = NP]' is ω-consistent, then ZFC + [P = NP] is consistent.
So far no formal argument has been constructed to show that ZFC + [P = NP]' is ω-consistent.
In his reviews for "Mathematical Reviews" of the da Costa/Doria papers on P=NP, logician Andreas Blass states that "the absence of rigor led to numerous errors (and ambiguities)"; he also rejects da Costa's "naïvely plausible condition", as this assumption is "based partly on the possible non-totality of [a certain function] F and partly on an axiom equivalent to the totality of F".
Death.
Da Costa died on 16 April 2024, at the age of 94.
|
[
{
"math_id": 0,
"text": "\\mathcal{C}_1"
}
] |
https://en.wikipedia.org/wiki?curid=1537143
|
15371861
|
Lucas–Lehmer–Riesel test
|
Test for determining whether a number is prime
In mathematics, the Lucas–Lehmer–Riesel test is a primality test for numbers of the form "N" = "k" ⋅ 2"n" − 1 with odd "k" < 2"n". The test was developed by Hans Riesel and it is based on the Lucas–Lehmer primality test. It is the fastest deterministic algorithm known for numbers of that form. For numbers of the form "N" = "k" ⋅ 2"n" + 1 (Proth numbers), either application of Proth's theorem (a Las Vegas algorithm) or one of the deterministic proofs described in Brillhart–Lehmer–Selfridge 1975 (see Pocklington primality test) is used.
The algorithm.
The algorithm is very similar to the Lucas–Lehmer test, but with a variable starting point depending on the value of "k".
Define a sequence "u""i" for all "i" > 0 by:
formula_0
Then "N" = "k" ⋅ 2"n" − 1, with "k" < 2"n" is prime if and only if it divides "u""n"−2.
Finding the starting value.
The starting value "u"0 is determined as follows.
An alternative method for finding the starting value "u"0 is given in Rödseth 1994. The selection method is much easier than that used by Riesel for the case where 3 divides "k". First find a "P" value that satisfies the following equalities of Jacobi symbols:
formula_2.
In practice, only a few "P" values need be checked before one is found (5, 8, 9, or 11 work in about 85% of trials).
To find the starting value "u"0 from the "P" value we can use a Lucas(P,1) sequence. When 3 ∤ "k", "P" = 4 may be used as above, and no further search is necessary.
The starting value "u"0 will be the Lucas sequence term "V""k"("P",1) taken mod "N". This process of selection takes very little time compared to the main test.
How the test works.
The Lucas–Lehmer–Riesel test is a particular case of group-order primality testing; we demonstrate that some number is prime by showing that some group has the order that it would have were that number prime, and we do this by finding an element of that group of precisely the right order.
For Lucas-style tests on a number "N", we work in the multiplicative group of a quadratic extension of the integers modulo "N"; if "N" is prime, the order of this multiplicative group is "N"2 − 1, it has a subgroup of order "N" + 1, and we try to find a generator for that subgroup.
We start off by trying to find a non-iterative expression for the formula_3. Following the model of the Lucas–Lehmer test, put formula_4, and by induction we have formula_5.
So we can consider ourselves as looking at the 2"i"th term of the sequence formula_6. If "a" satisfies a quadratic equation, this is a Lucas sequence, and has an expression of the form formula_7. Really, we're looking at the "k" ⋅ 2"i"th term of a different sequence, but since decimations (take every "k"th term starting with the zeroth) of a Lucas sequence are themselves Lucas sequences, we can deal with the factor "k" by picking a different starting point.
LLR software.
LLR is a program that can run the LLR tests. The program was developed by Jean Penné. Vincent Penné has modified the program so that it can obtain tests via the Internet. The software is used both by individual prime searchers and by some distributed computing projects, including Riesel Sieve and PrimeGrid.
A revised version, LLR2 was deployed in 2020. This generates a "proof of work" certificate which allows the computation to be verified without needing a full double-check.
A further update, PRST uses an alternate certificate scheme which takes longer to verify but is faster to generate for some forms of prime.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "u_i = u_{i-1}^2-2. \\, "
},
{
"math_id": 1,
"text": "u_0 = (2+\\sqrt{3})^k+(2-\\sqrt{3})^k"
},
{
"math_id": 2,
"text": "\\left(\\frac{P-2}{N}\\right)=1 \\quad\\text{and}\\quad \\left(\\frac{P+2}{N}\\right)=-1"
},
{
"math_id": 3,
"text": "u_i"
},
{
"math_id": 4,
"text": "u_i = a^{2^i} + a^{-2^i}"
},
{
"math_id": 5,
"text": "u_i = u_{i-1}^2 - 2"
},
{
"math_id": 6,
"text": "v(i) = a^i + a^{-i}"
},
{
"math_id": 7,
"text": "v(i) = \\alpha v(i-1) + \\beta v(i-2)"
}
] |
https://en.wikipedia.org/wiki?curid=15371861
|
15373114
|
Rules of passage
|
In mathematical logic, the rules of passage govern how quantifiers distribute over the basic logical connectives of first-order logic. The rules of passage govern the "passage" (translation) from any formula of first-order logic to the equivalent formula in prenex normal form, and vice versa.
The rules.
See Quine (1982: 119, chpt. 23). Let "Q" and "Q"' denote ∀ and ∃ or vice versa. β denotes a closed formula in which "x" does not appear. The rules of passage then include the following sentences, whose main connective is the biconditional:
formula_0
<templatestyles src="Col-begin/styles.css"/>
The following conditional sentences can also be taken as rules of passage:
formula_1
formula_2
formula_3
"Rules of passage" first appeared in French, in the writings of Jacques Herbrand. Quine employed the English translation of the phrase in each edition of his "Methods of Logic", starting in 1950.
|
[
{
"math_id": 0,
"text": " Qx[\\lnot\\alpha (x)] \\leftrightarrow \\lnot Q'x[\\alpha (x)]."
},
{
"math_id": 1,
"text": "\\exist x[\\alpha (x) \\land \\gamma (x)] \\rightarrow (\\exist x \\alpha (x) \\land \\exist x \\gamma (x))."
},
{
"math_id": 2,
"text": "(\\forall x \\, \\alpha(x) \\lor \\forall x \\, \\gamma(x)) \\rightarrow \\forall x \\, [\\alpha(x) \\lor \\gamma(x)]."
},
{
"math_id": 3,
"text": "(\\exists x \\, \\alpha(x) \\land \\forall x \\, \\gamma(x)) \\rightarrow \\exists x \\, [\\alpha(x) \\land \\gamma(x)]."
}
] |
https://en.wikipedia.org/wiki?curid=15373114
|
15373909
|
BCMP network
|
In queueing theory, a discipline within the mathematical theory of probability, a BCMP network is a class of queueing network for which a product-form equilibrium distribution exists. It is named after the authors of the paper where the network was first described: Baskett, Chandy, Muntz, and Palacios. The theorem is a significant extension to a Jackson network allowing virtually arbitrary customer routing and service time distributions, subject to particular service disciplines.
The paper is well known, and the theorem was described in 1990 as "one of the seminal achievements in queueing theory in the last 20 years" by J. Michael Harrison and Ruth J. Williams.
Definition of a BCMP network.
A network of "m" interconnected queues is known as a BCMP network if each of the queues is of one of the following four types:
In the final three cases, service time distributions must have rational Laplace transforms. This means the Laplace transform must be of the form
formula_1
Also, the following conditions must be met: external arrivals to node "i" (if any) must form a Poisson process, and a customer completing service at queue "i" will either move to some queue "j" with probability formula_2 or leave the system with probability formula_3, which is non-zero for some subset of the queues.
Theorem.
For a BCMP network of "m" queues which is open, closed or mixed in which each queue is of type 1, 2, 3 or 4, the equilibrium state probabilities are given by
formula_4
where "C" is a normalizing constant chosen to make the equilibrium state probabilities sum to 1 and formula_5 represents the equilibrium distribution for queue "i".
Proof.
The original proof of the theorem was given by checking the independent balance equations were satisfied.
Peter G. Harrison offered an alternative proof by considering reversed processes.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\scriptstyle{\\mu_j}"
},
{
"math_id": 1,
"text": "L(s) = \\frac{N(s)}{D(s)}."
},
{
"math_id": 2,
"text": "P_{ij}"
},
{
"math_id": 3,
"text": "1-\\sum_{j=1}^{m}P_{ij}"
},
{
"math_id": 4,
"text": "\\pi(x_1,x_2,\\ldots,x_m) = C \\pi_1(x_1) \\pi_2(x_2) \\cdots \\pi_m(x_m),"
},
{
"math_id": 5,
"text": "\\scriptstyle{\\pi_i(\\cdot)}"
}
] |
https://en.wikipedia.org/wiki?curid=15373909
|
15374087
|
Asymptotic computational complexity
|
In computational complexity theory, asymptotic computational complexity is the usage of asymptotic analysis for the estimation of computational complexity of algorithms and computational problems, commonly associated with the usage of the big O notation.
Scope.
With respect to computational resources, asymptotic time complexity and asymptotic space complexity are commonly estimated. Other asymptotically estimated behavior includes circuit complexity and various measures of parallel computation, such as the number of (parallel) processors.
Since the ground-breaking 1965 paper by Juris Hartmanis and Richard E. Stearns and the 1979 book by Michael Garey and David S. Johnson on NP-completeness, the term "computational complexity" (of algorithms) has become commonly referred to as asymptotic computational complexity.
Further, unless specified otherwise, the term "computational complexity" usually refers to the upper bound for the asymptotic computational complexity of an algorithm or a problem, which is usually written in terms of the big O notation, e.g., formula_0 Other types of (asymptotic) computational complexity estimates are lower bounds ("Big Omega" notation; e.g., Ω("n")) and asymptotically tight estimates, when the asymptotic upper and lower bounds coincide (written using the "big Theta"; e.g., Θ("n" log "n")).
A further tacit assumption is that the worst case analysis of computational complexity is in question unless stated otherwise. An alternative approach is probabilistic analysis of algorithms.
Types of algorithms considered.
In most practical cases deterministic algorithms or randomized algorithms are discussed, although theoretical computer science also considers nondeterministic algorithms and other advanced models of computation.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "O(n^3)."
}
] |
https://en.wikipedia.org/wiki?curid=15374087
|
15374396
|
Enveloping von Neumann algebra
|
In operator algebras, the enveloping von Neumann algebra of a C*-algebra is a von Neumann algebra that contains all the operator-algebraic information about the given C*-algebra. This may also be called the "universal" enveloping von Neumann algebra, since it is given by a universal property; and (as always with von Neumann algebras) the term "W*-algebra" may be used in place of "von Neumann algebra".
Definition.
Let "A" be a C*-algebra and "π""U" be its universal representation, acting on Hilbert space "H""U". The image of "π""U", "π""U"("A"), is a C*-subalgebra of bounded operators on "H""U". The enveloping von Neumann algebra of "A" is the closure of "π""U"("A") in the weak operator topology. It is sometimes denoted by "A"′′.
Properties.
The universal representation "π""U" and "A"′′ satisfy the following universal property: for any representation "π", there is a unique *-homomorphism
formula_0
that is continuous in the weak operator topology and the restriction of Φ to "π""U"("A") is "π".
As a particular case, one can consider the continuous functional calculus, whose unique extension gives a canonical Borel functional calculus.
By the Sherman–Takeda theorem, the double dual of a C*-algebra "A", "A"**, can be identified with "A"′′, as Banach spaces.
Every representation of "A" uniquely determines a central projection (i.e. a projection in the center of the algebra) in "A"′′; it is called the central cover of that representation.
|
[
{
"math_id": 0,
"text": " \\Phi: \\pi_U(A)'' \\rightarrow \\pi(A)'' "
}
] |
https://en.wikipedia.org/wiki?curid=15374396
|
15378076
|
Computation of cyclic redundancy checks
|
Overview of the computation of cyclic redundancy checks
Computation of a cyclic redundancy check is derived from the mathematics of polynomial division, modulo two. In practice, it resembles long division of the binary message string, with a fixed number of zeroes appended, by the "generator polynomial" string except that exclusive or operations replace subtractions. Division of this type is efficiently realised in hardware by a modified shift register, and in software by a series of equivalent algorithms, starting with simple code close to the mathematics and becoming faster (and arguably more obfuscated) through byte-wise parallelism and space–time tradeoffs.
Various CRC standards extend the polynomial division algorithm by specifying an initial shift register value, a final Exclusive-Or step and, most critically, a bit ordering (endianness). As a result, the code seen in practice deviates confusingly from "pure" division, and the register may shift left or right.
Example.
As an example of implementing polynomial division in hardware, suppose that we are trying to compute an 8-bit CRC of an 8-bit message made of the ASCII character "W", which is binary 01010111₂, decimal 87₁₀, or hexadecimal 57₁₆. For illustration, we will use the CRC-8-ATM (HEC) polynomial formula_0. Writing the first bit transmitted (the coefficient of the highest power of formula_1) on the left, this corresponds to the 9-bit string "100000111".
The byte value 57₁₆ can be transmitted in two different orders, depending on the bit ordering convention used. Each one generates a different message polynomial formula_2. Msbit-first, this is formula_3 = 01010111, while lsbit-first, it is formula_4 = 11101010. These can then be multiplied by formula_5 to produce two 16-bit message polynomials formula_6.
Computing the remainder then consists of subtracting multiples of the generator polynomial formula_7. This is just like decimal long division, but even simpler because the only possible multiples at each step are 0 and 1, and the subtractions borrow "from infinity" instead of reducing the upper digits. Because we do not care about the quotient, there is no need to record it.
Observe that after each subtraction, the bits are divided into three groups: at the beginning, a group which is all zero; at the end, a group which is unchanged from the original; and a group in the middle which is "interesting". The "interesting" group is 8 bits long, matching the degree of the polynomial. At each step, the appropriate multiple of the polynomial is subtracted to make the zero group one bit longer, and the unchanged group becomes one bit shorter, until only the final remainder is left.
In the msbit-first example, the remainder polynomial is formula_8. Converting to a hexadecimal number using the convention that the highest power of "x" is the msbit, this is A2₁₆. In the lsbit-first example, the remainder is formula_9. Converting to hexadecimal using the convention that the highest power of "x" is the lsbit, this is 19₁₆.
Implementation.
Writing out the full message at each step, as done in the example above, is very tedious. Efficient implementations
use an formula_10-bit shift register to hold only the interesting bits. Multiplying the polynomial by formula_1 is equivalent to shifting the register by one place, as the coefficients do not change in value but only move up to the next term of the polynomial.
Here is a first draft of some pseudocode for computing an "n"-bit CRC. It uses a contrived composite data type for polynomials, where codice_0 is not an integer variable, but a constructor generating a "Polynomial" object that can be added, multiplied and exponentiated. To codice_1 two polynomials is to add them, modulo two; that is, to exclusive OR the coefficients of each matching term from both polynomials.
function crc("bit array" bitString[1..len], "int" len) {
    remainderPolynomial := polynomialForm(bitString[1..n])   "// First n bits of the message"
    "// A popular variant complements remainderPolynomial here; see below"
    for i from 1 to len {
        remainderPolynomial := remainderPolynomial * "x" + bitString[i+n] * "x"0   "// Define bitString[k]=0 for k>len"
        if coefficient of "x"n of remainderPolynomial = 1 {
            remainderPolynomial := remainderPolynomial xor generatorPolynomial
        }
    }
    "// A popular variant complements remainderPolynomial here; see below"
    return remainderPolynomial
}
Code fragment 1: Simple polynomial division
Note that this example code avoids the need to specify a bit-ordering convention by not using bytes; the input codice_2 is already in the form of a bit array, and the codice_3 is manipulated in terms of polynomial operations; the multiplication by formula_1 could be a left or right shift, and the addition of codice_4 is done to the formula_11 coefficient, which could be the right or left end of the register.
This code has two disadvantages. First, it actually requires an "n"+1-bit register to hold the codice_3 so that the formula_12 coefficient can be tested. More significantly, it requires the codice_2 to be padded with "n" zero bits.
The first problem can be solved by testing the formula_13 coefficient of the codice_3 before it is multiplied by formula_1.
The second problem could be solved by doing the last "n" iterations differently, but there is a more subtle optimization which is used universally, in both hardware and software implementations.
Because the XOR operation used to subtract the generator polynomial from the message is commutative and associative, it does not matter in what order the various inputs are combined into the codice_3. And specifically, a given bit of the codice_2 does not need to be added to the codice_3 until the very last instant when it is tested to determine whether to codice_1 with the codice_12.
This eliminates the need to preload the codice_3 with the first "n" bits of the message, as well:
function crc("bit array" bitString[1..len], "int" len) {
    remainderPolynomial := 0
    "// A popular variant complements remainderPolynomial here; see below"
    for i from 1 to len {
        remainderPolynomial := remainderPolynomial xor (bitString[i] * "x"n−1)
        if (coefficient of "x"n−1 of remainderPolynomial) = 1 {
            remainderPolynomial := (remainderPolynomial * "x") xor generatorPolynomial
        } else {
            remainderPolynomial := (remainderPolynomial * "x")
        }
    }
    "// A popular variant complements remainderPolynomial here; see below"
    return remainderPolynomial
}
Code fragment 2: Polynomial division with deferred message XORing
This is the standard bit-at-a-time hardware CRC implementation, and is well worthy of study; once you understand why this computes exactly the same result as the first version, the remaining optimizations are quite straightforward. If codice_3 is only "n" bits long, then the formula_12 coefficients of it and of codice_12 are simply discarded. This is the reason that you will usually see CRC polynomials written in binary with the leading coefficient omitted.
In software, it is convenient to note that while one may delay the codice_1 of each bit until the very last moment, it is also possible to do it earlier. It is usually convenient to perform the codice_1 a byte at a time, even in a bit-at-a-time implementation like this:
function crc("byte array" string[1..len], "int" len) {
    remainderPolynomial := 0
    "// A popular variant complements remainderPolynomial here; see below"
    for i from 1 to len {
        remainderPolynomial := remainderPolynomial xor polynomialForm(string[i]) * "x"n−8
        for j from 1 to 8 {   "// Assuming 8 bits per byte"
            if coefficient of "x"n−1 of remainderPolynomial = 1 {
                remainderPolynomial := (remainderPolynomial * "x") xor generatorPolynomial
            } else {
                remainderPolynomial := (remainderPolynomial * "x")
            }
        }
    }
    "// A popular variant complements remainderPolynomial here; see below"
    return remainderPolynomial
}
Code fragment 3: Polynomial division with bytewise message XORing
This is usually the most compact software implementation, used in microcontrollers when space is at a premium over speed.
Bit ordering (endianness).
When implemented in bit serial hardware, the generator polynomial uniquely describes the bit assignment; the first bit transmitted is always the coefficient of the highest power of formula_1, and the last formula_10 bits transmitted are the CRC remainder formula_14, starting with the coefficient of formula_13 and ending with the coefficient of formula_11, a.k.a. the coefficient of 1.
However, when bits are processed a byte at a time, such as when using parallel transmission, byte framing as in 8B/10B encoding or RS-232-style asynchronous serial communication, or when implementing a CRC in software, it is necessary to specify the bit ordering (endianness) of the data; which bit in each byte is considered "first" and will be the coefficient of the higher power of formula_1.
If the data is destined for serial communication, it is best to use the bit ordering the data will ultimately be sent in. This is because a CRC's ability to detect burst errors is based on proximity in the message polynomial formula_2; if adjacent polynomial terms are not transmitted sequentially, a physical error burst of one length may be seen as a longer burst due to the rearrangement of bits.
For example, both IEEE 802 (ethernet) and RS-232 (serial port) standards specify least-significant bit first (little-endian) transmission, so a software CRC implementation to protect data sent across such a link should map the least significant bits in each byte to coefficients of the highest powers of formula_1. On the other hand, floppy disks and most hard drives write the most significant bit of each byte first.
The lsbit-first CRC is slightly simpler to implement in software, so is somewhat more commonly seen, but many programmers find the msbit-first bit ordering easier to follow. Thus, for example, the XMODEM-CRC extension, an early use of CRCs in software, uses an msbit-first CRC.
So far, the pseudocode has avoided specifying the ordering of bits within bytes by describing shifts in the pseudocode as multiplications by formula_1 and writing explicit conversions from binary to polynomial form. In practice, the CRC is held in a standard binary register using a particular bit-ordering convention. In msbit-first form, the most significant binary bits will be sent first and so contain the higher-order polynomial coefficients, while in lsbit-first form, the least-significant binary bits contain the higher-order coefficients. The above pseudocode can be written in both forms. For concreteness, this uses the 16-bit CRC-16-CCITT polynomial formula_15:
"// Most significant bit first (big-endian)"
"// (x16)+x12+x5+1 = (1) 0001 0000 0010 0001 = 0x1021"
function crc("byte array" string[1..len], "int" len) {
    rem := 0
    "// A popular variant complements rem here"
    for i from 1 to len {
        rem := rem xor (string[i] leftShift (n-8))   "// n = 16 in this example"
        for j from 1 to 8 {   "// Assuming 8 bits per byte"
            if rem and 0x8000 {   "// Test x15 coefficient"
                rem := (rem leftShift 1) xor 0x1021
            } else {
                rem := rem leftShift 1
            }
            rem := rem and 0xffff   "// Trim remainder to 16 bits"
        }
    }
    "// A popular variant complements rem here"
    return rem
}
Code fragment 4: Shift register based division, MSB first
"// Least significant bit first (little-endian)"
"// 1+x5+x12+(x16) = 1000 0100 0000 1000 (1) = 0x8408"
function crc("byte array" string[1..len], "int" len) {
    rem := 0
    "// A popular variant complements rem here"
    for i from 1 to len {
        rem := rem xor string[i]
        for j from 1 to 8 {   "// Assuming 8 bits per byte"
            if rem and 0x0001 {   "// Test x15 coefficient"
                rem := (rem rightShift 1) xor 0x8408
            } else {
                rem := rem rightShift 1
            }
        }
    }
    "// A popular variant complements rem here"
    return rem
}
Code fragment 5: Shift register based division, LSB first
Note that the lsbit-first form avoids the need to shift codice_18 before the codice_1. In either case, be sure to transmit the bytes of the CRC in the order that matches your chosen bit-ordering convention.
Multi-bit computation.
Sarwate algorithm (single lookup table).
Another common optimization uses a lookup table indexed by highest order coefficients of codice_20 to process more than one bit of dividend per iteration. Most commonly, a 256-entry lookup table is used, replacing the body of the outer loop (over codice_21) with:
// Msbit-first
rem = (rem leftShift 8) xor big_endian_table[string[i] xor ((leftmost 8 bits of rem) rightShift (n-8))]
// Lsbit-first
rem = (rem rightShift 8) xor little_endian_table[string[i] xor (rightmost 8 bits of rem)]
Code fragment 6: Cores of table based division
One of the most commonly encountered CRC algorithms is known as CRC-32, used by (among others) Ethernet, FDDI, ZIP and other archive formats, and PNG image format. Its polynomial can be written msbit-first as 0x04C11DB7, or lsbit-first as 0xEDB88320. The W3C webpage on PNG includes an appendix with a short and simple table-driven implementation in C of CRC-32. You will note that the code corresponds to the lsbit-first byte-at-a-time pseudocode presented here, and the table is generated using the bit-at-a-time code.
Using a 256-entry table is usually most convenient, but other sizes can be used. In small microcontrollers, using a 16-entry table to process four bits at a time gives a useful speed improvement while keeping the table small. On computers with ample storage, a 65536-entry table can be used to process 16 bits at a time.
Generating the tables.
The software to generate the tables is so small and fast that it is usually faster to compute them on program startup than to load precomputed tables from storage. One popular technique is to use the bit-at-a-time code 256 times to generate the CRCs of the 256 possible 8-bit bytes. However, this can be optimized significantly by taking advantage of the property that codice_22. Only the table entries corresponding to powers of two need to be computed directly.
In the following example code, codice_23 holds the value of codice_24:
big_endian_table[0] := 0
crc := 0x8000   // "Assuming a 16-bit polynomial"
i := 1
do {
    if crc and 0x8000 {
        crc := (crc leftShift 1) xor 0x1021   // "The CRC polynomial"
    } else {
        crc := crc leftShift 1
    }
    // crc "is the value of" big_endian_table[i]"; let" j "iterate over the already-initialized entries"
    for j from 0 to i−1 {
        big_endian_table[i + j] := crc xor big_endian_table[j]
    }
    i := i leftShift 1
} while i < 256
Code fragment 7: Byte-at-a-time CRC table generation, MSB first
little_endian_table[0] := 0
crc := 1
i := 128
do {
    if crc and 1 {
        crc := (crc rightShift 1) xor 0x8408   // "The CRC polynomial"
    } else {
        crc := crc rightShift 1
    }
    // crc "is the value of" little_endian_table[i]"; let" j "iterate over the already-initialized entries"
    for j from 0 to 255 by 2 × i {
        little_endian_table[i + j] := crc xor little_endian_table[j]
    }
    i := i rightShift 1
} while i > 0
Code fragment 8: Byte-at-a-time CRC table generation, LSB first
In these code samples, the table index codice_25 is equivalent to codice_26; you may use whichever form is more convenient.
CRC-32 algorithm.
This is a practical algorithm for the CRC-32 variant of CRC. The CRCTable is a memoization of a calculation that would have to be repeated for each byte of the message.
Function CRC32
   Input:
      data: Bytes       // Array of bytes
   Output:
      crc32: UInt32     // 32-bit unsigned CRC-32 value

   // Initialize CRC-32 to starting value
   crc32 ← 0xFFFFFFFF

   for each byte in data do
      nLookupIndex ← (crc32 xor byte) and 0xFF
      crc32 ← (crc32 shr 8) xor CRCTable[nLookupIndex]   // CRCTable is an array of 256 32-bit constants

   // Finalize the CRC-32 value by inverting all the bits
   crc32 ← crc32 xor 0xFFFFFFFF

   return crc32
In C, the algorithm looks like this:
uint32_t CRC32(const uint8_t data[], size_t data_length) {
    uint32_t crc32 = 0xFFFFFFFFu;

    for (size_t i = 0; i < data_length; i++) {
        const uint32_t lookupIndex = (crc32 ^ data[i]) & 0xff;
        crc32 = (crc32 >> 8) ^ CRCTable[lookupIndex];  // CRCTable is an array of 256 32-bit constants
    }

    // Finalize the CRC-32 value by inverting all the bits
    crc32 ^= 0xFFFFFFFFu;
    return crc32;
}
Byte-Slicing using multiple tables.
There exists a slice-by-"n" (typically slice-by-8 for CRC32) algorithm that usually doubles or triples the performance compared to the Sarwate algorithm. Instead of reading 8 bits at a time, the algorithm reads 8"n" bits at a time. Doing so maximizes performance on superscalar processors.
It is unclear who actually invented the algorithm.
To understand the advantages, start with the slice-by-2 case. We wish to compute a CRC 2 bytes (16 bits) at a time, but the standard table-based approach would require an inconveniently large 65536-entry table. As mentioned above, CRC tables have the property that codice_27. We can use this identity to replace the large table by two 256-entry tables: codice_28.
So the large table is not stored explicitly, but each iteration computes the CRC value that would be there by combining the values in two smaller tables. That is, the 16-bit index is "sliced" into two 8-bit indexes. At first glance, this seems pointless; why do two lookups in separate tables, when the standard byte-at-a-time algorithm would do two lookups in the "same" table?
The difference is instruction-level parallelism. In the standard algorithm, the index for each lookup depends on the value fetched in the previous one. Thus, the second lookup cannot begin until the first lookup is complete.
When sliced tables are used, both lookups can begin at the same time. If the processor can perform two loads in parallel (2020s microprocessors can keep track of over 100 loads in progress), then this has the potential to double the speed of the inner loop.
This technique can obviously be extended to as many slices as the processor can benefit from.
When the slicing width "equals" the CRC size, there is a minor speedup. In the part of the basic Sarwate algorithm where the previous CRC value is shifted by the size of the table lookup, the previous CRC value is shifted away entirely (what remains is all zero), so the XOR can be eliminated from the critical path.
The resultant slice-by-"n" inner loop consists of, first, XORing the next "n" message bytes into the CRC register, and second, looking up each byte of the result in its own slice table and XORing the fetched values together to form the new CRC.
This still has the property that all of the loads in the second step must be completed before the next iteration can commence, resulting in regular pauses during which the processor's memory subsystem (in particular, the data cache) is unused. However, when the slicing width "exceeds" the CRC size, a significant second speedup appears.
This is because a portion of the results of the first step "no longer depend" on any previous iteration. When XORing a 32-bit CRC with 64 bits of message, half of the result is simply a copy of the message. If coded carefully (to avoid creating a false data dependency), half of the slice table loads can begin "before" the previous loop iteration has completed. The result is enough work to keep the processor's memory subsystem "continuously" busy, which achieves maximum performance. As mentioned, on post-2000 microprocessors, slice-by-8 is generally sufficient to reach this level.
There is no particular need for the slices to be 8 bits wide. For example, it would be entirely possible to compute a CRC 64 bits at a time using a slice-by-9 algorithm, using 9 128-entry lookup tables to handle 63 bits, and the 64th bit handled by the bit-at-a-time algorithm (which is effectively a 1-bit, 2-entry lookup table). This would almost halve the table size (going from 8×256 = 2048 entries to 9×128 = 1152) at the expense of one more data-dependent load per iteration.
Parallel computation without table.
Parallel update for a byte or a word at a time can also be done explicitly, without a table. This is normally used in high-speed hardware implementations. For each bit, an equation is solved after 8 bits have been shifted in; such equations have been published for commonly used polynomials.
Two-step computation.
As the CRC-32 polynomial has a large number of terms, when computing the remainder a byte at a time each bit depends on up to 8 bits of the previous iteration. In byte-parallel hardware implementations this calls for either 8-input or cascaded XOR gates which increases propagation delay.
To maximise computation speed, an "intermediate remainder" can be calculated by first computing the CRC of the message modulo "x"123 + "x"111 + "x"92 + "x"84 + "x"64 + "x"46 + "x"23 + 1. This is a carefully selected multiple of the CRC-32 polynomial such that the terms (feedback taps) are at least 8 positions apart. Thus, a 123-bit shift register can be advanced 8 bits per iteration using only two-input XOR gates, the fastest possible. Finally the intermediate remainder can be reduced modulo the standard polynomial in a second shift register to yield the CRC-32 remainder.
If 3- or 4-input XOR gates are permitted, shorter intermediate polynomials of degree 71 or 53, respectively, can be used.
Block-wise computation.
Block-wise computation of the remainder can be performed in hardware for any CRC polynomial by factorizing the State Space transformation matrix needed to compute the remainder into two simpler Toeplitz matrices.
One-pass checking.
When appending a CRC to a message, it is possible to detach the transmitted CRC, recompute it, and verify the recomputed value against the transmitted one. However, a simpler technique is commonly used in hardware.
When the CRC is transmitted with the correct byte order (matching the chosen bit-ordering convention), a receiver can compute an overall CRC over the message "and" the CRC; if the message and CRC were received correctly, the result will be zero.
This possibility is the reason that most network protocols which include a CRC do so "before" the ending delimiter; it is not necessary to know whether the end of the packet is imminent to check the CRC.
In fact, a few protocols use the CRC "as" the message delimiter, a technique called CRC-based framing. (This requires multiple frames to detect acquisition or loss of framing, so is limited to applications where the frames are a known length, and the frame contents are sufficiently random that valid CRCs in misaligned data are rare.)
CRC variants.
In practice, most standards specify presetting the register to all-ones and inverting the CRC before transmission. This has no effect on the ability of a CRC to detect changed bits, but gives it the ability to notice bits that are added to the message.
Preset to −1.
The basic mathematics of a CRC accepts (considers as correctly transmitted) messages which, when interpreted as a polynomial, are a multiple of the CRC polynomial. If some leading 0 bits are prepended to such a message, they will not change its interpretation as a polynomial. This is equivalent to the fact that 0001 and 1 are the same number.
But if the message being transmitted does care about leading 0 bits, the inability of the basic CRC algorithm to detect such a change is undesirable. If it is possible that a transmission error could add such bits, a simple solution is to start with the codice_20 shift register set to some non-zero value; for convenience, the all-ones value is typically used. This is mathematically equivalent to complementing (binary NOT) the first "n" bits of the message, where "n" is the number of bits in the CRC register.
This does not affect CRC generation and checking in any way, as long as both generator and checker use the same initial value. Any non-zero initial value will do, and a few standards specify unusual values, but the all-ones value (−1 in twos complement binary) is by far the most common. Note that a one-pass CRC generate/check will still produce a result of zero when the message is correct, regardless of the preset value.
Post-invert.
The same sort of error can occur at the end of a message, albeit with a more limited set of messages. Appending 0 bits to a message is equivalent to multiplying its polynomial by "x", and if it was previously a multiple of the CRC polynomial, the result of that multiplication will be, as well. This is equivalent to the fact that, since 726 is a multiple of 11, so is 7260.
A similar solution can be applied at the end of the message, inverting the CRC register before it is appended to the message. Again, any non-zero change will do; inverting all the bits (XORing with an all-ones pattern) is simply the most common.
This has an effect on one-pass CRC checking: instead of producing a result of zero when the message is correct, it produces a fixed non-zero result. (To be precise, the result is the CRC, with zero preset but with post-invert, of the inversion pattern.) Once this constant has been obtained (e.g. by performing a one-pass CRC generate/check on an arbitrary message), it can be used directly to verify the correctness of any other message checked using the same CRC algorithm.
See also.
General category
Non-CRC checksums
|
[
{
"math_id": 0,
"text": "x^8+x^2+x+1"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "M(x)"
},
{
"math_id": 3,
"text": "x^6+x^4+x^2+x+1"
},
{
"math_id": 4,
"text": "x^7+x^6+x^5+x^3+x"
},
{
"math_id": 5,
"text": "x^8"
},
{
"math_id": 6,
"text": "x^8 M(x)"
},
{
"math_id": 7,
"text": "G(x)"
},
{
"math_id": 8,
"text": "x^7+x^5+x"
},
{
"math_id": 9,
"text": "x^7+x^4+x^3"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "x^0"
},
{
"math_id": 12,
"text": "x^n"
},
{
"math_id": 13,
"text": "x^{n-1}"
},
{
"math_id": 14,
"text": "R(x)"
},
{
"math_id": 15,
"text": "x^{16} + x^{12} + x^5 + 1"
}
] |
https://en.wikipedia.org/wiki?curid=15378076
|
153783
|
Crystal optics
|
Sub-branch of Optical Physics
Crystal optics is the branch of optics that describes the behaviour of light in "anisotropic media", that is, media (such as crystals) in which light behaves differently depending on which direction the light is propagating. The index of refraction depends on both composition and crystal structure and can be calculated using the Gladstone–Dale relation. Crystals are often naturally anisotropic, and in some media (such as liquid crystals) it is possible to induce anisotropy by applying an external electric field.
Isotropic media.
Typical transparent media such as glasses are "isotropic", which means that light behaves the same way no matter which direction it is travelling in the medium. In terms of Maxwell's equations in a dielectric, this gives a relationship between the electric displacement field D and the electric field E:
formula_0
where ε0 is the permittivity of free space and P is the electric polarization (the vector field corresponding to electric dipole moments present in the medium). Physically, the polarization field can be regarded as the response of the medium to the electric field of the light.
Electric susceptibility.
In an isotropic and linear medium, this polarization field P is proportional and parallel to the electric field E:
formula_1
where χ is the "electric susceptibility" of the medium. The relation between D and E is thus:
formula_2
where
formula_3
is the dielectric constant of the medium. The value 1+χ is called the "relative permittivity" of the medium, and is related to the refractive index "n", for non-magnetic media, by
formula_4
Anisotropic media.
In an anisotropic medium, such as a crystal, the polarisation field P is not necessarily aligned with the electric field of the light E. In a physical picture, this can be thought of as the dipoles induced in the medium by the electric field having certain preferred directions, related to the physical structure of the crystal. This can be written as:
formula_5
Here χ is not a number as before but a tensor of rank 2, the "electric susceptibility tensor". In terms of components in 3 dimensions:
formula_6
or using the summation convention:
formula_7
Since χ is a tensor, P is not necessarily colinear with E.
In nonmagnetic and transparent materials, χ"ij" = χ"ji", i.e. the χ tensor is real and symmetric. In accordance with the spectral theorem, it is thus possible to diagonalise the tensor by choosing the appropriate set of coordinate axes, zeroing all components of the tensor except χxx, χyy and χzz. This gives the set of relations:
formula_8
formula_9
formula_10
The directions x, y and z are in this case known as the "principal axes" of the medium. Note that these axes will be orthogonal if all entries in the χ tensor are real, corresponding to a case in which the refractive index is real in all directions.
It follows that D and E are also related by a tensor:
formula_11
Here ε is known as the "relative permittivity tensor" or "dielectric tensor". Consequently, the refractive index of the medium must also be a tensor. Consider a light wave propagating along the z principal axis polarised such that the electric field of the wave is parallel to the x-axis. The wave experiences a susceptibility χxx and a permittivity εxx. The refractive index is thus:
formula_12
For a wave polarised in the y direction:
formula_13
Thus these waves will see two different refractive indices and travel at different speeds. This phenomenon is known as "birefringence" and occurs in some common crystals such as calcite and quartz.
If χxx = χyy ≠ χzz, the crystal is known as uniaxial. (See Optic axis of a crystal.) If χxx ≠ χyy and χyy ≠ χzz the crystal is called biaxial. A uniaxial crystal exhibits two refractive indices, an "ordinary" index ("n"o) for light polarised in the x or y directions, and an "extraordinary" index ("n"e) for polarisation in the z direction. A uniaxial crystal is "positive" if ne > no and "negative" if ne < no. Light polarised at some angle to the axes will experience a different phase velocity for different polarization components, and cannot be described by a single index of refraction. This is often depicted as an index ellipsoid.
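As a worked illustration, consider a hypothetical negative uniaxial crystal with principal susceptibilities χxx = χyy = 1.75 and χzz = 1.21 (values chosen to be close to those of calcite; they are illustrative, not measured data). The two refractive indices follow directly from the relations above:

```latex
n_\text{o} = \sqrt{1 + \chi_{xx}} = \sqrt{2.75} \approx 1.658,
\qquad
n_\text{e} = \sqrt{1 + \chi_{zz}} = \sqrt{2.21} \approx 1.487 .
```

Since "n"e < "n"o, this crystal is negative uniaxial, with birefringence Δ"n" = "n"e − "n"o ≈ −0.17.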
Other effects.
Certain nonlinear optical phenomena such as the electro-optic effect cause a variation of a medium's permittivity tensor when an external electric field is applied, proportional (to lowest order) to the strength of the field. This causes a rotation of the principal axes of the medium and alters the behaviour of light travelling through it; the effect can be used to produce light modulators.
In response to a magnetic field, some materials can have a dielectric tensor that is complex-Hermitian; this is called a gyro-magnetic or magneto-optic effect. In this case, the principal axes are complex-valued vectors, corresponding to elliptically polarized light, and time-reversal symmetry can be broken. This can be used to design optical isolators, for example.
A dielectric tensor that is not Hermitian gives rise to complex eigenvalues, which corresponds to a material with gain or absorption at a particular frequency.
|
[
{
"math_id": 0,
"text": " \\mathbf{D} = \\varepsilon_0 \\mathbf{E} + \\mathbf{P} "
},
{
"math_id": 1,
"text": " \\mathbf{P} = \\chi \\varepsilon_0 \\mathbf{E} "
},
{
"math_id": 2,
"text": " \\mathbf{D} = \\varepsilon_0 \\mathbf{E} + \\chi \\varepsilon_0 \\mathbf{E} \n= \\varepsilon_0 (1 + \\chi) \\mathbf{E} = \\varepsilon \\mathbf{E} "
},
{
"math_id": 3,
"text": " \\varepsilon = \\varepsilon_0 (1 + \\chi) "
},
{
"math_id": 4,
"text": " n = \\sqrt{ 1 + \\chi} "
},
{
"math_id": 5,
"text": " \\mathbf{P} = \\varepsilon_0 \\boldsymbol{\\chi} \\mathbf{E} ."
},
{
"math_id": 6,
"text": "\\begin{pmatrix} P_x \\\\ P_y \\\\ P_z \\end{pmatrix} = \\varepsilon_0\n\\begin{pmatrix} \\chi_{xx} & \\chi_{xy} & \\chi_{xz} \\\\ \\chi_{yx} & \\chi_{yy} & \\chi_{yz} \\\\ \\chi_{zx} & \\chi_{zy} & \\chi_{zz} \\end{pmatrix}\n\\begin{pmatrix} E_x \\\\ E_y \\\\ E_z \\end{pmatrix}\n"
},
{
"math_id": 7,
"text": " P_i = \\varepsilon_0 \\sum_{j\\in\\{x,y,z\\}}\\chi_{ij} E_j \\quad."
},
{
"math_id": 8,
"text": " P_x = \\varepsilon_0 \\chi_{xx} E_x"
},
{
"math_id": 9,
"text": " P_y = \\varepsilon_0 \\chi_{yy} E_y"
},
{
"math_id": 10,
"text": " P_z = \\varepsilon_0 \\chi_{zz} E_z"
},
{
"math_id": 11,
"text": " \\mathbf{D} = \\varepsilon_0 \\mathbf{E} + \\mathbf{P} = \\varepsilon_0 \\mathbf{E} + \\varepsilon_0 \\boldsymbol{\\chi} \\mathbf{E} = \\varepsilon_0 (I + \\boldsymbol{\\chi}) \\mathbf{E} = \\varepsilon_0 \\boldsymbol{\\varepsilon} \\mathbf{E} ."
},
{
"math_id": 12,
"text": "n_{xx} = (1 + \\chi_{xx})^{1/2} = (\\varepsilon_{xx})^{1/2} ."
},
{
"math_id": 13,
"text": "n_{yy} = (1 + \\chi_{yy})^{1/2} = (\\varepsilon_{yy})^{1/2} ."
}
] |
https://en.wikipedia.org/wiki?curid=153783
|
153788
|
Kőnig's lemma
|
Mathematical result on infinite trees
Kőnig's lemma or Kőnig's infinity lemma is a theorem in graph theory due to the Hungarian mathematician Dénes Kőnig who published it in 1927. It gives a sufficient condition for an infinite graph to have an infinitely long path. The computability aspects of this theorem have been thoroughly investigated by researchers in mathematical logic, especially in computability theory. This theorem also has important roles in constructive mathematics and proof theory.
Statement of the lemma.
Let formula_0 be a connected, locally finite, infinite graph. This means that every two vertices can be connected by a finite path, each vertex is adjacent to only finitely many other vertices, and the graph has infinitely many vertices. Then formula_0 contains a ray: a simple path (a path with no repeated vertices) that starts at one vertex and continues from it through infinitely many vertices.
A useful special case of the lemma is that every infinite tree contains either a vertex of infinite degree or an infinite simple path. If it is locally finite, it meets the conditions of the lemma and has a ray, and if it is not locally finite then it has an infinite-degree vertex.
Construction.
The construction of a ray, in a graph formula_0 that meets the conditions of the lemma, can be performed step by step, maintaining at each step a finite path that can be extended to reach infinitely many vertices (not necessarily all along the same path as each other). To begin this process, start with any single vertex formula_1. This vertex can be thought of as a path of length zero, consisting of one vertex and no edges. By the assumptions of the lemma, each of the infinitely many vertices of formula_0 can be reached by a simple path that starts from formula_1.
Next, as long as the current path ends at some vertex formula_2, consider the infinitely many vertices that can be reached by simple paths that extend the current path, and for each of these vertices construct a simple path to it that extends the current path. There are infinitely many of these extended paths, each of which connects from formula_2 to one of its neighbors, but formula_2 has only finitely many neighbors. Therefore, it follows by a form of the pigeonhole principle that at least one of these neighbors is used as the next step on infinitely many of these extended paths. Let formula_3 be such a neighbor, and extend the current path by one edge, the edge from formula_2 to formula_3. This extension preserves the property that infinitely many vertices can be reached by simple paths that extend the current path.
Repeating this process for extending the path produces an infinite sequence of finite simple paths, each extending the previous path in the sequence by one more edge. The union of all of these paths is the ray whose existence was promised by the lemma.
Computability aspects.
The computability aspects of Kőnig's lemma have been thoroughly investigated. For this purpose it is convenient to state Kőnig's lemma in the form that any infinite finitely branching subtree of formula_4 has an infinite path. Here formula_5 denotes the set of natural numbers (thought of as an ordinal number) and formula_4 the tree whose nodes are all finite sequences of natural numbers, where the parent of a node is obtained by removing the last element from a sequence. Each finite sequence can be identified with a partial function from formula_5 to itself, and each infinite path can be identified with a total function. This allows for an analysis using the techniques of computability theory.
A subtree of formula_4 in which each sequence has only finitely many immediate extensions (that is, the tree has finite degree when viewed as a graph) is called finitely branching. Not every infinite subtree of formula_4 has an infinite path, but Kőnig's lemma shows that any finitely branching infinite subtree must have such a path.
For any subtree formula_6 of formula_4 the notation formula_7 denotes the set of nodes of formula_6 through which there is an infinite path. Even when formula_6 is computable the set formula_7 may not be computable. Whenever a subtree formula_6 of
formula_4 has an infinite path, the path is computable from formula_7, step by step, greedily choosing a successor in formula_7 at each step. The restriction to formula_7 ensures that this greedy process cannot get stuck.
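This greedy recovery of a path from formula_7 can be illustrated directly, assuming we are handed a membership test for formula_7 as an oracle. The toy tree and all names below are ours: binary sequences with no two consecutive 1s form an infinite, finitely branching tree in which every node lies in formula_7 (any node can be extended forever by appending 0), so the oracle can simply be the tree-membership test itself.

```python
def greedy_path(in_tree, in_ext, branching, steps):
    """Follow the greedy construction: starting at the root, repeatedly move
    to any child that still lies in Ext(T). The restriction to Ext(T)
    guarantees this never gets stuck."""
    node = ()
    for _ in range(steps):
        for b in range(branching):
            child = node + (b,)
            if in_tree(child) and in_ext(child):
                node = child
                break
        else:
            raise RuntimeError("current node was not in Ext(T)")
    return node

# Toy tree: binary sequences with no two consecutive 1s.
def in_tree(seq):
    return all(not (a == 1 and b == 1) for a, b in zip(seq, seq[1:]))

# Here every node of the tree is in Ext(T), so in_ext can equal in_tree.
prefix = greedy_path(in_tree, in_tree, branching=2, steps=8)
```

Choosing the smallest admissible child at each step, the computed prefix is the all-zeros sequence.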
There exist non-finitely branching computable subtrees of formula_4 that have no arithmetical path, and indeed no hyperarithmetical path. However, every computable subtree of formula_4 with a path must have a path computable from Kleene's O, the canonical formula_8 complete set. This is because the set formula_7 is always formula_9 (for the meaning of this notation, see analytical hierarchy) when formula_6 is computable.
A finer analysis has been conducted for computably bounded trees. A subtree of formula_4 is called computably bounded or recursively bounded if there is a computable function formula_10 from formula_5 to formula_5 such that for every sequence in the tree and every natural number formula_11, the formula_11th element of the sequence is at most formula_12. Thus formula_10 gives a bound for how "wide" the tree is. The following basis theorems apply to infinite, computably bounded, computable subtrees of formula_13: every such tree has an infinite path computable from formula_14, the canonical Turing complete set that can decide the halting problem, and for any noncomputable subset formula_15 of formula_5 every such tree has an infinite path that does not compute formula_15.
A weak form of Kőnig's lemma which states that every infinite binary tree has an infinite branch is used to define the subsystem WKL0 of second-order arithmetic. This subsystem has an important role in reverse mathematics. Here a binary tree is one in which every term of every sequence in the tree is 0 or 1, which is to say the tree is computably bounded via the constant function 2. The full form of Kőnig's lemma is not provable in WKL0, but is equivalent to the stronger subsystem ACA0.
Relationship to constructive mathematics and compactness.
The proof given above is not generally considered to be constructive, because at each step it uses a proof by contradiction to establish that there exists an adjacent vertex from which infinitely many other vertices can be reached, and because of the reliance on a weak form of the axiom of choice. Facts about the computational aspects of the lemma suggest that no proof can be given that would be considered constructive by the main schools of constructive mathematics.
The fan theorem of L. E. J. Brouwer (1927) is, from a classical point of view, the contrapositive of a form of Kőnig's lemma. A subset "S" of formula_16 is called a "bar" if any function from formula_5 to the set formula_17 has some initial segment in "S". A bar is "detachable" if every sequence is either in the bar or not in the bar (this assumption is required because the theorem is ordinarily considered in situations where the law of the excluded middle is not assumed). A bar is "uniform" if there is some number formula_18 so that any function from formula_5 to formula_17 has an initial segment in the bar of length no more than formula_18. Brouwer's fan theorem says that any detachable bar is uniform.
This can be proven in a classical setting by considering the bar as an open covering of the compact topological space formula_19. Each sequence in the bar represents a basic open set of this space, and these basic open sets cover the space by assumption. By compactness, this cover has a finite subcover. The "N" of the fan theorem can be taken to be the length of the longest sequence whose basic open set is in the finite subcover. This topological proof can be used in classical mathematics to show that the following form of Kőnig's lemma holds: for any natural number "k", any infinite subtree of the tree formula_20 has an infinite path.
Relationship with the axiom of choice.
Kőnig's lemma may be considered to be a choice principle; the first proof above illustrates the relationship between the lemma and the axiom of dependent choice. At each step of the induction, a vertex with a particular property must be selected. Although it is proved that at least one appropriate vertex exists, if there is more than one suitable vertex there may be no canonical choice. In fact, the full strength of the axiom of dependent choice is not needed; as described below, the axiom of countable choice suffices.
If the graph is countable, the vertices are well-ordered and one can canonically choose the smallest suitable vertex. In this case, Kőnig's lemma is provable in second-order arithmetic with arithmetical comprehension, and, a fortiori, in ZF set theory (without choice).
Kőnig's lemma is essentially the restriction of the axiom of dependent choice to entire relations formula_21 such that for each formula_22 there are only finitely many formula_23 such that formula_24. Although the axiom of choice is, in general, stronger than the principle of dependent choice, this restriction of dependent choice is equivalent to a restriction of the axiom of choice.
In particular, when the branching at each node is done on a finite subset of an arbitrary set not assumed to be countable, the form of Kőnig's lemma that says "Every infinite finitely branching tree has an infinite path" is equivalent to the principle that every countable set of finite sets has a choice function, that is to say, the axiom of countable choice for finite sets. This form of the axiom of choice (and hence of Kőnig's lemma) is not provable in ZF set theory.
Generalization.
In the category of sets, the inverse limit of any inverse system of non-empty finite sets is non-empty. This may be seen as a generalization of Kőnig's lemma and can be proved with Tychonoff's theorem, viewing the finite sets as compact discrete spaces, and then using the finite intersection property characterization of compactness.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "v_1"
},
{
"math_id": 2,
"text": "v_i"
},
{
"math_id": 3,
"text": "v_{i+1}"
},
{
"math_id": 4,
"text": "\\omega^{<\\omega}"
},
{
"math_id": 5,
"text": "\\omega"
},
{
"math_id": 6,
"text": "T"
},
{
"math_id": 7,
"text": "\\operatorname{Ext}(T)"
},
{
"math_id": 8,
"text": "\\Pi^1_1"
},
{
"math_id": 9,
"text": "\\Sigma^1_1"
},
{
"math_id": 10,
"text": "f"
},
{
"math_id": 11,
"text": "n"
},
{
"math_id": 12,
"text": "f(n)"
},
{
"math_id": 13,
"text": "\\omega^{< \\omega}"
},
{
"math_id": 14,
"text": "0'"
},
{
"math_id": 15,
"text": "X"
},
{
"math_id": 16,
"text": "\\{0,1\\}^{<\\omega}"
},
{
"math_id": 17,
"text": "\\{0,1\\}"
},
{
"math_id": 18,
"text": "N"
},
{
"math_id": 19,
"text": "\\{0,1\\}^\\omega"
},
{
"math_id": 20,
"text": "\\{0,\\ldots,k\\}^{<\\omega}"
},
{
"math_id": 21,
"text": "R"
},
{
"math_id": 22,
"text": "x"
},
{
"math_id": 23,
"text": "z"
},
{
"math_id": 24,
"text": "xRz"
}
] |
https://en.wikipedia.org/wiki?curid=153788
|
1537992
|
Descent direction
|
In optimization, a descent direction is a vector formula_0 that points towards a local minimum formula_1 of an objective function formula_2.
Computing formula_1 by an iterative method, such as line search, defines a descent direction formula_3 at the formula_4th iterate to be any formula_5 such that formula_6, where formula_7 denotes the inner product. The motivation for such an approach is that small steps along formula_5 guarantee that formula_8 is reduced, by Taylor's theorem.
Using this definition, the negative of a non-zero gradient is always a
descent direction, as formula_9.
Numerous methods exist to compute descent directions, all with differing merits, such as gradient descent or the conjugate gradient method.
More generally, if formula_10 is a positive definite matrix, then
formula_11 is a descent direction at formula_12. This generality is used in preconditioned gradient descent methods.
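Both of these facts are straightforward to verify numerically. The sketch below uses an illustrative quadratic objective; all names are ours.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def is_descent_direction(p, grad):
    """p is a descent direction at x iff <p, grad f(x)> < 0."""
    return dot(p, grad) < 0

# Illustrative quadratic objective f(x) = 0.5 x^T A x, with grad f(x) = A x.
A = [[3.0, 1.0], [1.0, 2.0]]
x = [1.0, -2.0]
g = [dot(row, x) for row in A]          # gradient at x: [1.0, -3.0]

# The negative gradient is always a descent direction.
neg_grad = [-gi for gi in g]
assert is_descent_direction(neg_grad, g)

# So is p = -P g for positive definite P (diagonal here), as in
# preconditioned gradient descent.
P = [[2.0, 0.0], [0.0, 0.5]]
p = [-dot(row, g) for row in P]          # [-2.0, 1.5]
assert is_descent_direction(p, g)
```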
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{p}\\in\\mathbb R^n"
},
{
"math_id": 1,
"text": "\\mathbf{x}^*"
},
{
"math_id": 2,
"text": "f:\\mathbb R^n\\to\\mathbb R"
},
{
"math_id": 3,
"text": "\\mathbf{p}_k\\in\\mathbb R^n"
},
{
"math_id": 4,
"text": "k"
},
{
"math_id": 5,
"text": "\\mathbf{p}_k"
},
{
"math_id": 6,
"text": "\\langle\\mathbf{p}_k,\\nabla f(\\mathbf{x}_k)\\rangle < 0"
},
{
"math_id": 7,
"text": " \\langle , \\rangle "
},
{
"math_id": 8,
"text": "\\displaystyle f"
},
{
"math_id": 9,
"text": " \\langle -\\nabla f(\\mathbf{x}_k), \\nabla f(\\mathbf{x}_k) \\rangle = -\\langle \\nabla f(\\mathbf{x}_k), \\nabla f(\\mathbf{x}_k) \\rangle < 0 "
},
{
"math_id": 10,
"text": "P"
},
{
"math_id": 11,
"text": "p_k = -P \\nabla f(x_k)"
},
{
"math_id": 12,
"text": "x_k"
}
] |
https://en.wikipedia.org/wiki?curid=1537992
|
1538007
|
Matrix chain multiplication
|
Mathematics optimization problem
Matrix chain multiplication (or the matrix chain ordering problem) is an optimization problem concerning the most efficient way to multiply a given sequence of matrices. The problem is not actually to "perform" the multiplications, but merely to decide the sequence of the matrix multiplications involved. The problem may be solved using dynamic programming.
There are many options because matrix multiplication is associative. In other words, no matter how the product is parenthesized, the result obtained will remain the same. For example, for four matrices "A", "B", "C", and "D", there are five possible options:
(("AB")"C")"D" = ("A"("BC"))"D" = ("AB")("CD") = "A"(("BC")"D") = "A"("B"("CD")).
Although it does not affect the product, the order in which the terms are parenthesized affects the number of simple arithmetic operations needed to compute the product, that is, the computational complexity. The straightforward multiplication of a matrix that is "X" × "Y" by a matrix that is "Y" × "Z" requires "XYZ" ordinary multiplications and "X"("Y" − 1)"Z" ordinary additions. In this context, it is typical to use the number of ordinary multiplications as a measure of the runtime complexity.
If "A" is a 10 × 30 matrix, "B" is a 30 × 5 matrix, and "C" is a 5 × 60 matrix, then
computing ("AB")"C" needs (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 operations, while
computing "A"("BC") needs (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 operations.
Clearly the first method is more efficient. With this information, the problem statement can be refined as "how to determine the optimal parenthesization of a product of "n" matrices?" The number of possible parenthesizations is given by the ("n"–1)th Catalan number, which is "O"(4"n" / "n"3/2), so checking each possible parenthesization (brute force) would require a run-time that is exponential in the number of matrices, which is very slow and impractical for large "n". A quicker solution to this problem can be achieved by breaking up the problem into a set of related subproblems.
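The count of parenthesizations can be checked directly from the recurrence that splits the outermost product at each possible position (a small sketch; the function name is ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def parenthesizations(n):
    """Number of ways to fully parenthesize a product of n matrices.
    Splitting the outermost product after the k-th matrix gives
    P(n) = sum over k of P(k) * P(n - k), the (n-1)th Catalan number."""
    if n <= 1:
        return 1
    return sum(parenthesizations(k) * parenthesizations(n - k)
               for k in range(1, n))

# Four matrices: exactly the five options listed above.
assert parenthesizations(4) == 5
```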
A dynamic programming algorithm.
To begin, let us assume that all we really want to know is the minimum cost, or minimum number of arithmetic operations needed to multiply out the matrices. If we are only multiplying two matrices, there is only one way to multiply them, so the minimum cost is the cost of doing this. In general, we can find the minimum cost using the following recursive algorithm:
For example, if we have four matrices "ABCD", we compute the cost required to find each of ("A")("BCD"), ("AB")("CD"), and ("ABC")("D"), making recursive calls to find the minimum cost to compute "ABC", "AB", "CD", and "BCD". We then choose the best one. Better still, this yields not only the minimum cost, but also demonstrates the best way of doing the multiplication: group it the way that yields the lowest total cost, and do the same for each factor.
However, this algorithm has exponential runtime complexity making it as inefficient as the naive approach of trying all permutations. The reason is that the algorithm does a lot of redundant work. For example, above we made a recursive call to find the best cost for computing both "ABC" and "AB". But finding the best cost for computing ABC also requires finding the best cost for "AB". As the recursion grows deeper, more and more of this type of unnecessary repetition occurs.
One simple solution is called memoization: each time we compute the minimum cost needed to multiply out a specific subsequence, we save it. If we are ever asked to compute it again, we simply give the saved answer, and do not recompute it. Since there are about "n"2/2 different subsequences, where "n" is the number of matrices, the space required to do this is reasonable. It can be shown that this simple trick brings the runtime down to O("n"3) from O(2"n"), which is more than efficient enough for real applications. This is top-down dynamic programming.
The following bottom-up approach computes, for each 2 ≤ "k" ≤ n, the minimum costs of all subsequences of length "k" using the costs of smaller subsequences already computed.
It has the same asymptotic runtime and requires no recursion.
Pseudocode:
// Matrix A[i] has dimension dims[i-1] x dims[i] for i = 1..n
MatrixChainOrder(int dims[])
{
    // length[dims] = n + 1
    n = dims.length - 1;
    // m[i,j] = Minimum number of scalar multiplications (i.e., cost)
    // needed to compute the matrix A[i]A[i+1]...A[j] = A[i..j]
    // The cost is zero when multiplying one matrix
    for (i = 1; i <= n; i++)
        m[i, i] = 0;
    for (len = 2; len <= n; len++) { // Subsequence lengths
        for (i = 1; i <= n - len + 1; i++) {
            j = i + len - 1;
            m[i, j] = MAXINT;
            for (k = i; k <= j - 1; k++) {
                cost = m[i, k] + m[k+1, j] + dims[i-1]*dims[k]*dims[j];
                if (cost < m[i, j]) {
                    m[i, j] = cost;
                    s[i, j] = k; // Index of the subsequence split that achieved minimal cost
                }
            }
        }
    }
    return m[1, n]; // Minimum cost of computing A[1..n]
}
A Python implementation using the memoization decorator from the standard library:
from functools import cache
def matrixChainOrder(dims: list[int]) -> int:
@cache
def a(i, j):
return min((a(i, k) + dims[i] * dims[k] * dims[j] + a(k, j)
for k in range(i + 1, j)), default=0)
return a(0, len(dims) - 1)
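The split table "s" computed by the bottom-up procedure also determines the optimal grouping. A runnable bottom-up sketch (function and variable names are ours) that returns both the minimum cost and the parenthesization:

```python
def matrix_chain_order(dims):
    """Bottom-up DP; matrix i has shape dims[i-1] x dims[i] (1-based)."""
    n = len(dims) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: min cost of A[i..j]
    s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j]: optimal split point
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):
                cost = m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if cost < m[i][j]:
                    m[i][j] = cost
                    s[i][j] = k

    def paren(i, j):
        # Reconstruct the optimal grouping from the split table.
        if i == j:
            return "A%d" % i
        k = s[i][j]
        return "(" + paren(i, k) + paren(k + 1, j) + ")"

    return m[1][n], paren(1, n)

# The 10x30, 30x5, 5x60 example from the introduction:
cost, grouping = matrix_chain_order([10, 30, 5, 60])   # 4500, "((A1A2)A3)"
```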
More efficient algorithms.
There are algorithms that are more efficient than the "O"("n"3) dynamic programming algorithm, though they are more complex.
Hu & Shing.
An algorithm published by T. C. Hu and M.-T. Shing achieves "O"("n" log "n") computational complexity.
They showed how the matrix chain multiplication problem can be transformed (or reduced) into the problem of triangulation of a regular polygon. The polygon is oriented such that there is a horizontal bottom side, called the base, which represents the final result. The other "n" sides of the polygon, in the clockwise direction, represent the matrices. The vertices on each end of a side are the dimensions of the matrix represented by that side. With "n" matrices in the multiplication chain there are "n"−1 binary operations and "C""n"−1 ways of placing parentheses, where "C""n"−1 is the ("n"−1)-th Catalan number. The algorithm exploits that there are also "C""n"−1 possible triangulations of a polygon with "n"+1 sides.
This image illustrates possible triangulations of a regular hexagon. These correspond to the different ways that parentheses can be placed to order the multiplications for a product of 5 matrices.
For the example below, there are four sides: A, B, C and the final result ABC. A is a 10×30 matrix, B is a 30×5 matrix, C is a 5×60 matrix, and the final result is a 10×60 matrix. The regular polygon for this example is a 4-gon, i.e. a square:
The matrix product AB is a 10x5 matrix and BC is a 30x60 matrix. The two possible triangulations in this example are:
The cost of a single triangle in terms of the number of multiplications needed is the product of its vertices. The total cost of a particular triangulation of the polygon is the sum of the costs of all its triangles:
("AB")"C": (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 multiplications
"A"("BC"): (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 multiplications
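Tabulating these triangulation costs mechanically (a small sketch, with the vertex weights taken from the example above):

```python
def triangulation_cost(triangles):
    """Sum, over the triangles of a partition, of the product of the
    three vertex weights of each triangle."""
    return sum(a * b * c for a, b, c in triangles)

# Square with vertex weights 10, 30, 5, 60 (dimensions of A, B, C and ABC).
ab_then_c = [(10, 30, 5), (10, 5, 60)]   # corresponds to (AB)C
bc_then_a = [(30, 5, 60), (10, 30, 60)]  # corresponds to A(BC)
```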
Hu & Shing developed an algorithm that finds an optimum solution for the minimum cost partition problem in "O"("n" log "n") time. Their proof of correctness of the algorithm relies on "Lemma 1" proved in a 1981 technical report and omitted from the published paper. The technical report's proof of the lemma is incorrect, but Shing has presented a corrected proof.
Other "O"("n" log "n") algorithms.
Wang, Zhu and Tian have published a simplified "O"("n" log "m") algorithm, where "n" is the number of matrices in the chain and "m" is the number of local minimums in the dimension sequence of the given matrix chain.
Nimbark, Gohel, and Doshi have published a greedy "O"("n" log "n") algorithm, but their proof of optimality is incorrect and their algorithm fails to produce the most efficient parentheses assignment for some matrix chains.
Chin-Hu-Shing approximate solution.
An algorithm created independently by Chin and Hu & Shing runs in O("n") and produces a parenthesization which is at most 15.47% worse than the optimal choice. In most cases the algorithm yields the optimal solution or a solution which is only 1-2 percent worse than the optimal one.
The algorithm starts by translating the problem to the polygon partitioning problem. To each vertex "V" of the polygon is associated a weight "w". Suppose we have three consecutive vertices formula_0, and that formula_1 is the vertex with minimum weight formula_2.
We look at the quadrilateral with vertices formula_3 (in clockwise order).
We can triangulate it in two ways: using the triangles formula_4 and formula_5, with total cost formula_6; or using the triangles formula_7 and formula_8, with total cost formula_9.
Therefore, if
formula_10
or equivalently
formula_11
we remove the vertex formula_12 from the polygon and add the side formula_13 to the triangulation.
We repeat this process until no formula_12 satisfies the condition above.
For all the remaining vertices formula_14, we add the side formula_15 to the triangulation.
This gives us a nearly optimal triangulation.
Generalizations.
The matrix chain multiplication problem generalizes to solving a more abstract problem: given a linear sequence of objects, an associative binary operation on those objects, and a way to compute the cost of performing that operation on any two given objects (as well as all partial results), compute the minimum cost way to group the objects to apply the operation over the sequence. A practical instance of this comes from the ordering of join operations in databases.
Another somewhat contrived special case of this is string concatenation of a list of strings. In C, for example, the cost of concatenating two strings of length "m" and "n" using "strcat" is O("m" + "n"), since we need O("m") time to find the end of the first string and O("n") time to copy the second string onto the end of it. Using this cost function, we can write a dynamic programming algorithm to find the fastest way to concatenate a sequence of strings. However, this optimization is rather useless because we can straightforwardly concatenate the strings in time proportional to the sum of their lengths. A similar problem exists for singly linked lists.
Another generalization is to solve the problem when parallel processors are available. In this case, instead of adding the costs of computing each factor of a matrix product, we take the maximum because we can do them simultaneously. This can drastically affect both the minimum cost and the final optimal grouping; more "balanced" groupings that keep all the processors busy are favored. There are even more sophisticated approaches.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "V_{i-1}, V_i, V_{i+1}"
},
{
"math_id": 1,
"text": "V_{\\min}"
},
{
"math_id": 2,
"text": "w_{\\min}"
},
{
"math_id": 3,
"text": "V_{\\min}, V_{i-1}, V_i, V_{i+1}"
},
{
"math_id": 4,
"text": "(V_{\\min}, V_{i-1}, V_i)"
},
{
"math_id": 5,
"text": "(V_{\\min}, V_{i+1}, V_i)"
},
{
"math_id": 6,
"text": "w_{\\min}w_{i-1}w_i+w_{\\min}w_{i+1}w_i"
},
{
"math_id": 7,
"text": "(V_{\\min}, V_{i-1}, V_{i+1})"
},
{
"math_id": 8,
"text": "(V_{i-1}, V_i, V_{i+1})"
},
{
"math_id": 9,
"text": "w_{\\min}w_{i-1}w_{i+1}+w_{i-1}w_{i}w_{i+1}"
},
{
"math_id": 10,
"text": "w_{\\min}w_{i-1}w_{i+1}+w_{i-1}w_{i}w_{i+1}<w_{\\min}w_{i-1}w_i+w_{\\min}w_{i+1}w_i "
},
{
"math_id": 11,
"text": "\\frac{1}{w_i}+\\frac{1}{w_{\\min}}<\\frac{1}{w_{i+1}}+\\frac{1}{w_{i-1}} "
},
{
"math_id": 12,
"text": "V_i"
},
{
"math_id": 13,
"text": "(V_{i-1}, V_{i+1})"
},
{
"math_id": 14,
"text": "V_n"
},
{
"math_id": 15,
"text": "(V_{\\min}, V_n)"
}
] |
https://en.wikipedia.org/wiki?curid=1538007
|
1538109
|
Complex vector bundle
|
In mathematics, a complex vector bundle is a vector bundle whose fibers are complex vector spaces.
Any complex vector bundle can be viewed as a real vector bundle through the restriction of scalars. Conversely, any real vector bundle "E" can be promoted to a complex vector bundle, the complexification
formula_0
whose fibers are "E""x" ⊗R C.
Any complex vector bundle over a paracompact space admits a hermitian metric.
The basic invariant of a complex vector bundle is a Chern class. A complex vector bundle is canonically oriented; in particular, one can take its Euler class.
A complex vector bundle is a holomorphic vector bundle if "X" is a complex manifold and if the local trivializations are biholomorphic.
Complex structure.
A complex vector bundle can be thought of as a real vector bundle with an additional structure, the complex structure. By definition, a complex structure is a bundle map between a real vector bundle "E" and itself:
formula_1
such that "J" acts as the square root "i" of −1 on fibers: if formula_2 is the map on fiber-level, then formula_3 as a linear map. If "E" is a complex vector bundle, then the complex structure "J" can be defined by setting formula_4 to be the scalar multiplication by formula_5. Conversely, if "E" is a real vector bundle with a complex structure "J", then "E" can be turned into a complex vector bundle by setting: for any real numbers "a", "b" and a real vector "v" in a fiber "E""x",
formula_6
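As a concrete model of this construction on a single fiber (a sketch, not tied to any particular bundle), take "E""x" = R2 and let "J" be rotation by 90°; the formula above then reproduces ordinary complex scalar multiplication under the identification ("x", "y") ↔ "x" + "iy".

```python
def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

# J is a complex structure on the fiber R^2: rotation by 90 degrees, J∘J = -id.
J = [[0.0, -1.0],
     [1.0, 0.0]]
assert mat_vec(J, mat_vec(J, [1.0, 0.0])) == [-1.0, 0.0]
assert mat_vec(J, mat_vec(J, [0.0, 1.0])) == [0.0, -1.0]

def complex_scalar_mul(a, b, v):
    """(a + ib)·v = a·v + J(b·v)."""
    bv = [b * vi for vi in v]
    Jbv = mat_vec(J, bv)
    return [a * vi + ji for vi, ji in zip(v, Jbv)]

# Identifying (x, y) with x + iy, this is ordinary complex multiplication:
w = complex_scalar_mul(3.0, 4.0, [2.0, 5.0])   # (3 + 4i)(2 + 5i) = -14 + 23i
```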
Example: A complex structure on the tangent bundle of a real manifold "M" is usually called an almost complex structure. A theorem of Newlander and Nirenberg says that an almost complex structure "J" is "integrable" in the sense it is induced by a structure of a complex manifold if and only if a certain tensor involving "J" vanishes.
Conjugate bundle.
If "E" is a complex vector bundle, then the conjugate bundle formula_7 of "E" is obtained by having the complex numbers act through the complex conjugates of the numbers. Thus, the identity map of the underlying real vector bundles, formula_8, is conjugate-linear, and "E" and its conjugate formula_7 are isomorphic as real vector bundles.
The "k"-th Chern class of formula_7 is given by
formula_9.
In particular, "E" and formula_7 are not isomorphic in general.
If "E" has a hermitian metric, then the conjugate bundle formula_7 is isomorphic to the dual bundle formula_10 through the metric, where we write formula_11 for the trivial complex line bundle.
If "E" is a real vector bundle, then the underlying real vector bundle of the complexification of "E" is a direct sum of two copies of "E":
formula_12
(since "V"⊗RC = "V"⊕"i""V" for any real vector space "V".) If a complex vector bundle "E" is the complexification of a real vector bundle "E'", then "E'" is called a real form of "E" (there may be more than one real form) and "E" is said to be defined over the real numbers. If "E" has a real form, then "E" is isomorphic to its conjugate (since they are both sum of two copies of a real form), and consequently the odd Chern classes of "E" have order 2.
|
[
{
"math_id": 0,
"text": "E \\otimes \\mathbb{C} ;"
},
{
"math_id": 1,
"text": "J: E \\to E"
},
{
"math_id": 2,
"text": "J_x: E_x \\to E_x"
},
{
"math_id": 3,
"text": "J_x^2 = -1"
},
{
"math_id": 4,
"text": "J_x"
},
{
"math_id": 5,
"text": "i"
},
{
"math_id": 6,
"text": "(a + ib) v = a v + J(b v)."
},
{
"math_id": 7,
"text": "\\overline{E}"
},
{
"math_id": 8,
"text": "E_{\\mathbb{R}} \\to \\overline{E}_\\mathbb{R} = E_{\\mathbb{R}}"
},
{
"math_id": 9,
"text": "c_k(\\overline{E}) = (-1)^k c_k(E)"
},
{
"math_id": 10,
"text": "E^* = \\operatorname{Hom}(E, \\mathcal{O})"
},
{
"math_id": 11,
"text": "\\mathcal{O}"
},
{
"math_id": 12,
"text": "(E \\otimes \\mathbb{C})_{\\mathbb{R}} = E \\oplus E"
}
] |
https://en.wikipedia.org/wiki?curid=1538109
|
15383952
|
Average-case complexity
|
In computational complexity theory, the average-case complexity of an algorithm is the amount of some computational resource (typically time) used by the algorithm, averaged over all possible inputs. It is frequently contrasted with worst-case complexity which considers the maximal complexity of the algorithm over all possible inputs.
There are three primary motivations for studying average-case complexity. First, although some problems may be intractable in the worst-case, the inputs which elicit this behavior may rarely occur in practice, so the average-case complexity may be a more accurate measure of an algorithm's performance. Second, average-case complexity analysis provides tools and techniques to generate hard instances of problems which can be utilized in areas such as cryptography and derandomization. Third, average-case complexity allows discriminating the most efficient algorithm in practice among algorithms of equivalent best case complexity (for instance Quicksort).
Average-case analysis requires a notion of an "average" input to an algorithm, which leads to the problem of devising a probability distribution over inputs. Alternatively, a randomized algorithm can be used. The analysis of such algorithms leads to the related notion of an expected complexity.
History and background.
The average-case performance of algorithms has been studied since modern notions of computational efficiency were developed in the 1950s. Much of this initial work focused on problems for which worst-case polynomial time algorithms were already known. In 1973, Donald Knuth published Volume 3 of the Art of Computer Programming which extensively surveys average-case performance of algorithms for problems solvable in worst-case polynomial time, such as sorting and median-finding.
An efficient algorithm for NP-complete problems is generally characterized as one which runs in polynomial time for all inputs; this is equivalent to requiring efficient worst-case complexity. However, an algorithm which is inefficient on a "small" number of inputs may still be efficient for "most" inputs that occur in practice. Thus, it is desirable to study the properties of these algorithms where the average-case complexity may differ from the worst-case complexity and find methods to relate the two.
The fundamental notions of average-case complexity were developed by Leonid Levin in 1986 when he published a one-page paper defining average-case complexity and completeness while giving an example of a complete problem for distNP, the average-case analogue of NP.
Definitions.
Efficient average-case complexity.
The first task is to precisely define what is meant by an algorithm which is efficient "on average". An initial attempt might define an efficient average-case algorithm as one which runs in expected polynomial time over all possible inputs. Such a definition has various shortcomings; in particular, it is not robust to changes in the computational model. For example, suppose algorithm A runs in time "t""A"("x") on input x and algorithm B runs in time "t""A"("x")2 on input x; that is, B is quadratically slower than A. Intuitively, any definition of average-case efficiency should capture the idea that A is efficient on average if and only if B is efficient on average. Suppose, however, that the inputs are drawn randomly from the uniform distribution of strings with length n, and that A runs in time "n"2 on all inputs except the string 1"n", for which A takes time 2"n". Then it can be easily checked that the expected running time of A is polynomial but the expected running time of B is exponential.
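The gap between A and B can be verified directly. The following sketch (ours, not from the literature) computes the exact expected running times over the uniform distribution on n-bit strings, using the running times stated above:

```python
from fractions import Fraction

def expected_time_A(n):
    # A takes n^2 steps on every n-bit input except 1^n, where it takes 2^n steps.
    return Fraction((2**n - 1) * n**2 + 2**n, 2**n)

def expected_time_B(n):
    # B runs in t_A(x)^2 steps: n^4 on typical inputs and 4^n on 1^n.
    return Fraction((2**n - 1) * n**4 + 4**n, 2**n)

for n in (10, 20, 30):
    print(n, float(expected_time_A(n)), float(expected_time_B(n)))
```

The expected time of A is roughly "n"2 + 1, which is polynomial, while the expected time of B is roughly "n"4 + 2"n", which is exponential.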
To create a more robust definition of average-case efficiency, it makes sense to allow an algorithm A to run longer than polynomial time on some inputs but the fraction of inputs on which A requires larger and larger running time becomes smaller and smaller. This intuition is captured in the following formula for average polynomial running time, which balances the polynomial trade-off between running time and fraction of inputs:
formula_0
for every "n", "t" > 0 and polynomial p, where "t""A"("x") denotes the running time of algorithm A on input x, and ε is a positive constant value. Alternatively, this can be written as
formula_1
for some constants C and ε, where "n" = |"x"|. In other words, an algorithm A has good average-case complexity if, after running for "t""A"("n") steps, A can solve all but a "n""c"/("t""A"("n"))"ε" fraction of inputs of length n, for some "ε", "c" > 0.
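As a sanity check, the algorithm A from the earlier example does satisfy this definition, for instance with p("n") = "n"2 and ε = 1. A small sketch (function names are ours) verifies the tail bound exhaustively for a small n:

```python
def tail_probability(n, t):
    # Pr over uniform n-bit strings that t_A(x) >= t, where t_A is n^2
    # on every input except 1^n, where it is 2^n (the example above).
    if t <= n**2:
        return 1.0
    if t <= 2**n:
        return 2.0**-n        # only the single string 1^n remains
    return 0.0

def satisfies_definition(n, p, eps):
    # Check Pr[t_A(x) >= t] <= p(n) / t^eps for every threshold t up to 2^n + 1.
    return all(tail_probability(n, t) <= p(n) / t**eps
               for t in range(1, 2**n + 2))

print(satisfies_definition(10, lambda n: n**2, 1.0))  # → True
```

By contrast, the check fails if the polynomial is too small, e.g. with p("n") = 1.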
Distributional problem.
The next step is to define the "average" input to a particular problem. This is achieved by associating the inputs of each problem with a particular probability distribution. That is, an "average-case" problem consists of a language L and an associated probability distribution D which forms the pair ("L", "D"). The two most common classes of distributions which are allowed are:
1. P-computable distributions: distributions for which the cumulative density of any given input x can be computed in polynomial time. More formally, given a probability distribution μ and a string "x" ∈ {0, 1}"n", the value formula_2 can be computed in polynomial time. This implies that Pr["x"] is also computable in polynomial time.
2. P-samplable distributions: distributions from which it is possible to draw random samples in polynomial time.
These two formulations, while similar, are not equivalent. If a distribution is P-computable it is also P-samplable, but the converse is not true if P ≠ P#P.
AvgP and distNP.
A distributional problem ("L", "D") is in the complexity class AvgP if there is an efficient average-case algorithm for L, as defined above. The class AvgP is occasionally called distP in the literature.
A distributional problem ("L", "D") is in the complexity class distNP if L is in NP and D is P-computable. When L is in NP and D is P-samplable, ("L", "D") belongs to sampNP.
Together, AvgP and distNP define the average-case analogues of P and NP, respectively.
Reductions between distributional problems.
Let ("L","D") and ("L′", "D′") be two distributional problems. ("L", "D") average-case reduces to ("L′", "D′") (written ("L", "D") ≤AvgP ("L′", "D′")) if there is a function f that for every n, on input x can be computed in time polynomial in n and
1. (Correctness) "x" ∈ "L" if and only if "f"("x") ∈ "L′"
2. (Domination) There are polynomials p and m such that, for every n and y, formula_3
The domination condition enforces the notion that if problem ("L", "D") is hard on average, then ("L′", "D′") is also hard on average. Intuitively, a reduction should provide a way to solve an instance x of problem L by computing "f"("x") and feeding the output to the algorithm which solves L'. Without the domination condition, this may not be possible since the algorithm which solves L in polynomial time on average may take super-polynomial time on a small number of inputs but f may map these inputs into a much larger set of D' so that algorithm A' no longer runs in polynomial time on average. The domination condition only allows such strings to occur polynomially as often in D'.
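For finite toy distributions the domination condition can be checked directly. The sketch below (the function names and example distributions are ours) sums the D-mass that f pushes onto each output y and compares it with p(n) times the D′-mass of y:

```python
from collections import defaultdict

def dominates(D, f, D_prime, p_n):
    # D, D_prime: dicts mapping strings to probabilities; f: the reduction.
    # Checks sum_{x : f(x) = y} D(x) <= p_n * D'(y) for every y in the image of f.
    pushed = defaultdict(float)
    for x, mass in D.items():
        pushed[f(x)] += mass
    return all(mass <= p_n * D_prime.get(y, 0.0) for y, mass in pushed.items())

# The identity reduction trivially dominates with p_n = 1 ...
uniform = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}
print(dominates(uniform, lambda x: x, uniform, 1.0))     # → True
# ... but a reduction collapsing every input onto one output does not.
print(dominates(uniform, lambda x: "00", uniform, 1.0))  # → False
```

The second reduction concentrates all of the mass of D onto a single string, exactly the situation the domination condition rules out.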
DistNP-complete problems.
The average-case analogue to NP-completeness is distNP-completeness. A distributional problem ("L′", "D′") is distNP-complete if ("L′", "D′") is in distNP and for every ("L", "D") in distNP, ("L", "D") is average-case reducible to ("L′", "D′").
An example of a distNP-complete problem is the Bounded Halting Problem (for any P-computable "D"), defined as follows:
formula_4
In his original paper, Levin showed an example of a distributional tiling problem that is average-case NP-complete. A survey of known distNP-complete problems is available online.
One area of active research involves finding new distNP-complete problems. However, finding such problems can be complicated due to a result of Gurevich which shows that any distributional problem with a flat distribution cannot be distNP-complete unless EXP = NEXP. (A flat distribution μ is one for which there exists an "ε" > 0 such that for any x, "μ"("x") ≤ 2−|"x"|"ε".) A result by Livne shows that all natural NP-complete problems have distNP-complete versions. However, the goal of finding a natural distributional problem that is distNP-complete has not yet been achieved.
Applications.
Sorting algorithms.
As mentioned above, much early work relating to average-case complexity focused on problems for which polynomial-time algorithms already existed, such as sorting. For example, many sorting algorithms which utilize randomness, such as Quicksort, have a worst-case running time of O("n"2), but an average-case running time of O("n" log("n")), where n is the length of the input to be sorted.
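As an illustration, a minimal randomized quicksort with a comparison counter (a deliberately simple out-of-place implementation, chosen for clarity rather than efficiency) sorts a random input of length 1000 using a number of comparisons on the order of "n" log("n") rather than "n"2:

```python
import random

def quicksort(items, counter):
    # Randomized quicksort returning a new sorted list;
    # counter[0] accumulates the number of comparisons made.
    if len(items) <= 1:
        return items
    pivot = random.choice(items)
    less, equal, greater = [], [], []
    for x in items:
        counter[0] += 1
        if x < pivot:
            less.append(x)
        elif x > pivot:
            greater.append(x)
        else:
            equal.append(x)
    return quicksort(less, counter) + equal + quicksort(greater, counter)

n = 1000
data = list(range(n))
random.shuffle(data)
counter = [0]
result = quicksort(data, counter)
print(counter[0])  # typically near 2 n ln n ≈ 14000, far below the n^2 = 10^6 worst case
```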
Cryptography.
For most problems, average-case complexity analysis is undertaken to find efficient algorithms for a problem that is considered difficult in the worst-case. In cryptographic applications, however, the opposite is true: the worst-case complexity is irrelevant; we instead want a guarantee that every algorithm which "breaks" the cryptographic scheme is inefficient on average.
Thus, all secure cryptographic schemes rely on the existence of one-way functions. Although the existence of one-way functions is still an open problem, many candidate one-way functions are based on hard problems such as integer factorization or computing the discrete log. Note that it is not desirable for the candidate function to be NP-complete since this would only guarantee that there is likely no efficient algorithm for solving the problem in the worst case; what we actually want is a guarantee that no efficient algorithm can solve the problem over random inputs (i.e. the average case). In fact, both the integer factorization and discrete log problems are in NP ∩ coNP, and are therefore not believed to be NP-complete. The fact that all of cryptography is predicated on the existence of average-case intractable problems in NP is one of the primary motivations for studying average-case complexity.
Other results.
In 1990, Impagliazzo and Levin showed that if there is an efficient average-case algorithm for a distNP-complete problem under the uniform distribution, then there is an average-case algorithm for every problem in NP under any polynomial-time samplable distribution. Applying this theory to natural distributional problems remains an outstanding open question.
In 1992, Ben-David et al. showed that if all languages in distNP have good-on-average decision algorithms, they also have good-on-average search algorithms. Further, they show that this conclusion holds under a weaker assumption: if every language in NP is easy on average for decision algorithms with respect to the uniform distribution, then it is also easy on average for search algorithms with respect to the uniform distribution. Thus, cryptographic one-way functions can exist only if there are distNP problems over the uniform distribution that are hard on average for decision algorithms.
In 1993, Feigenbaum and Fortnow showed that it is not possible to prove, under non-adaptive random reductions, that the existence of a good-on-average algorithm for a distNP-complete problem under the uniform distribution implies the existence of worst-case efficient algorithms for all problems in NP. In 2003, Bogdanov and Trevisan generalized this result to arbitrary non-adaptive reductions. These results show that it is unlikely that any association can be made between average-case complexity and worst-case complexity via reductions.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
The literature of average case complexity includes the following work:
|
[
{
"math_id": 0,
"text": "\n\\Pr_{x \\in_R D_n} \\left[t_A(x) \\geq t \\right] \\leq \\frac{p(n)}{t^\\epsilon}\n"
},
{
"math_id": 1,
"text": "\nE_{x \\in_R D_n} \\left[ \\frac{t_{A}(x)^{\\epsilon}}{n} \\right] \\leq C\n"
},
{
"math_id": 2,
"text": "\\mu(x) = \\sum\\limits_{y \\in \\{0, 1\\}^n : y \\leq x} \\Pr[y]"
},
{
"math_id": 3,
"text": "\\sum\\limits_{x: f(x) = y} D_n(x) \\leq p(n)D'_{m(n)}(y)"
},
{
"math_id": 4,
"text": "BH = \\{(M, x, 1^t) : M \\text{ is a non-deterministic Turing machine that accepts } x \\text{ in} \\leq t \\text{ steps}\\}"
}
] |
https://en.wikipedia.org/wiki?curid=15383952
|
15385848
|
Universal composability
|
Cryptographic framework
The framework of universal composability (UC) is a general-purpose model for the analysis of cryptographic protocols. It guarantees very strong security properties. Protocols remain secure even if arbitrarily composed with other instances of the same or other protocols. Security is defined in the sense of protocol emulation. Intuitively, a protocol is said to emulate another one, if no environment (observer) can distinguish the executions. Literally, the protocol may simulate the other protocol (without having access to the code). The notion of security is derived by implication. Assume a protocol formula_0 is secure per definition. If another protocol formula_1 emulates protocol formula_0 such that no environment tells apart the emulation from the execution of the protocol, then the emulated protocol formula_1 is as secure as protocol formula_0.
Ideal functionality.
An ideal functionality is a protocol in which a trusted party that can communicate over perfectly secure channels with all protocol participants computes the desired protocol outcome. We say that a cryptographic protocol that cannot make use of such a trusted party fulfils an ideal functionality, if the protocol can emulate the behaviour of the trusted party for honest users, and if the view that an adversary learns by attacking the protocol is indistinguishable from what can be computed by a simulator that only interacts with the ideal functionality.
Computation model.
The computation model of universal composability is that of interactive Turing machines that can activate each other by writing on each other's communication tapes. An interactive Turing machine is a form of multi-tape Turing machine and is commonly used for modelling the computational aspects of communication networks in cryptography.
Communication model.
The communication model in the bare UC framework is very basic. The messages of a sending party are handed to the adversary, who can replace these messages with messages of his own choice that are delivered to the receiving party. This is also the Dolev-Yao threat model. (Following the computational model, all parties are modeled as interactive Turing machines.)
All communication models that add additional properties such as confidentiality, authenticity, synchronization, or anonymity are modeled using their own ideal functionality. An ideal communication functionality takes a message as input and produces a message as output. The (more limited) powers for the adversary formula_2 are modeled through the (limited) capacity of the adversary to interact with this ideal functionality.
Ideal authenticated channel.
For an optimal ideal authenticated channel, the ideal functionality formula_3 takes a message formula_4 from a party with identity formula_5 as input, and outputs the same message together with the identity formula_5 to the recipient and the adversary. To model the power of the adversary to delay asynchronous communication the functionality formula_3 may first send a message to the adversary formula_2 and would only deliver the message formula_6 once it receives the command to do so as a reply.
Ideal secure channel.
In an ideal secure channel, the ideal functionality formula_7 only outputs the identity of the sender to both the recipient and the adversary, while the message is only revealed to the recipient. This models the requirement that a secure channel is both authenticated and private. To model some leakage about the information that is being transferred, formula_7 may reveal information about the message to the adversary, e.g. the length of the message. Asynchronous communication is modeled through the same delay mechanism as for formula_3.
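The contrast between the two functionalities can be sketched in code. The class and method names below are purely illustrative and not part of the UC literature; delivery on the adversary's command models the asynchronous delay described above:

```python
class IdealAuthChannel:
    """Sketch of an F_Auth-style functionality: the adversary learns the
    sender identity and the full message, and controls delivery timing,
    but cannot modify the message in transit."""
    def __init__(self, adversary):
        self.adversary = adversary
        self.pending = []

    def leak(self, sender, message):
        return (sender, message)

    def send(self, sender, recipient, message):
        self.pending.append((sender, recipient, message))
        self.adversary.notify(self.leak(sender, message))

    def deliver(self):
        # Invoked only once the adversary replies with the delivery command.
        return self.pending.pop(0)


class IdealSecureChannel(IdealAuthChannel):
    """Sketch of an F_Sec-style functionality: only the sender identity
    and the message length leak to the adversary."""
    def leak(self, sender, message):
        return (sender, len(message))


class RecordingAdversary:
    # A passive adversary that merely records what the functionality leaks.
    def __init__(self):
        self.seen = []
    def notify(self, leakage):
        self.seen.append(leakage)


adv = RecordingAdversary()
channel = IdealSecureChannel(adv)
channel.send("Alice", "Bob", "attack at dawn")
leaked = adv.seen
delivered = channel.deliver()
print(leaked)     # the message content never leaks, only its length
print(delivered)  # the recipient receives the message intact
```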
More advanced channels.
While the technical means, and the physical assumptions behind anonymous and pseudonymous communication are very different, the modeling of such channels using ideal functionalities is analogous. See also onion routing and Anonymous P2P. Similar functionalities can be defined for broadcast communication, or synchronous communication.
Ideal anonymous channel.
In an ideal anonymous channel, the ideal functionality, formula_8 takes a message formula_4 from a party with identity formula_5 as input, and outputs the same message but without disclosing the identity formula_5 to the recipient and the adversary.
Ideal pseudonymous channel.
In an ideal pseudonymous channel, the participating parties first register unique pseudonyms with the ideal functionality formula_9. To do a transfer formula_9 takes a message formula_4 and the pseudonym formula_10 of the recipient as input. The ideal functionality looks up the owner of the pseudonym and transfers the message formula_11 without revealing the identity of the sender.
These formalisations abstract from the implementation details of the concrete systems that implement such channels. In its pure form, an ideal functionality may be unrealizable. It may be necessary to relax the functionality by leaking more information to the adversary (degree of anonymity). On the other hand, communication channels can be physical; e.g., a mobile device can achieve an anonymous channel by constantly changing its location before transmitting messages that do not contain identifiers.
Impossibility results.
There exists no bit commitment protocol that is universally composable in the Standard Model. The intuition is that in the ideal model, the simulator has to extract the value to commit to from the input of the environment. This would allow the receiver in the real protocol to extract the committed value and break the security of the protocol. This impossibility result can be applied to other functionalities.
Setup and trust assumptions.
To circumvent the above impossibility result, additional assumptions are required.
Additional setup and trust assumptions, such as the common reference string model and the assumption of a trusted certification authority are also modeled using ideal functionalities in UC.
|
[
{
"math_id": 0,
"text": "P_1"
},
{
"math_id": 1,
"text": "P_2"
},
{
"math_id": 2,
"text": "\\mathcal{A}"
},
{
"math_id": 3,
"text": "\\mathcal{F}_{\\mathsf{Auth}}"
},
{
"math_id": 4,
"text": "m"
},
{
"math_id": 5,
"text": "P"
},
{
"math_id": 6,
"text": "m,P"
},
{
"math_id": 7,
"text": "\\mathcal{F}_{\\mathsf{Sec}}"
},
{
"math_id": 8,
"text": "\\mathcal{F}_{\\mathsf{Anon}}"
},
{
"math_id": 9,
"text": "\\mathcal{F}_{\\mathsf{Pseu}}"
},
{
"math_id": 10,
"text": "nym"
},
{
"math_id": 11,
"text": "m, nym"
}
] |
https://en.wikipedia.org/wiki?curid=15385848
|
15390784
|
Isochron
|
In the mathematical theory of dynamical systems, an isochron is a set of initial conditions for the system that all lead to the same long-term behaviour.
Mathematical isochron.
An introductory example.
Consider the ordinary differential equation for a solution formula_0 evolving in time:
formula_1
This ordinary differential equation (ODE) needs two initial conditions at, say, time formula_2. Denote the initial conditions by formula_3 and formula_4 where formula_5 and formula_6 are some parameters. The following argument shows that the isochrons of this system are the straight lines formula_7.
The general solution of the above ODE is
formula_8
Now, as time increases, formula_9, the exponential term decays very quickly to zero (exponential decay). Thus "all" solutions of the ODE quickly approach formula_10. That is, "all" solutions with the same formula_11 have the same long term evolution. The exponential decay of the formula_12 term brings together a host of solutions to share the same long term evolution. The isochrons are found by determining which initial conditions have the same formula_11.
At the initial time formula_2 we have formula_13 and formula_14. Algebraically eliminate the immaterial constant formula_15 from these two equations to deduce that all initial conditions formula_16 have the same formula_11, hence the same long term evolution, and hence form an isochron.
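This conclusion can be checked numerically. The sketch below (a simple forward-Euler integrator; the step size and initial conditions are chosen for illustration) integrates the ODE from two initial conditions with "y"0 + "y′"0 = 1 — the same isochron — and from one initial condition with "y"0 + "y′"0 = 0, a different isochron:

```python
def integrate(y0, v0, t_end=20.0, dt=1e-3):
    # Forward Euler for y'' + y' = 1, written as the system y' = v, v' = 1 - v.
    y, v = y0, v0
    for _ in range(int(round(t_end / dt))):
        y, v = y + dt * v, v + dt * (1.0 - v)
    return y

# (0, 1) and (1, 0) both satisfy y0 + y'0 = 1: the same isochron.
same_isochron = (integrate(0.0, 1.0), integrate(1.0, 0.0))
# (0, 0) satisfies y0 + y'0 = 0: a different isochron.
other_isochron = integrate(0.0, 0.0)
print(abs(same_isochron[0] - same_isochron[1]))  # essentially 0
print(same_isochron[0] - other_isochron)         # essentially 1
```

The two solutions on the same isochron become indistinguishable, while the solution on the other isochron stays a distance 1 away, matching the general solution y = t + A + B exp(−t).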
Accurate forecasting requires isochrons.
Let's turn to a more interesting application of the notion of isochrons. Isochrons arise when trying to forecast predictions from models of dynamical systems. Consider the toy system of two coupled ordinary differential equations
formula_17
A marvellous mathematical trick is the normal form transformation. Here the coordinate transformation near the origin
formula_18
to new variables formula_19 transforms the dynamics to the separated form
formula_20
Hence, near the origin, formula_21 decays to zero exponentially quickly as its equation is formula_22. So the long term evolution is determined solely by formula_23: the formula_23 equation is the model.
Let us use the formula_23 equation to predict the future. Given some initial values formula_24 of the original variables: what initial value should we use for formula_25? Answer: the formula_26 that has the same long term evolution. In the normal form above, formula_23 evolves independently of formula_21. So all initial conditions with the same formula_23, but different formula_21, have the same long term evolution. Fixing formula_23 and varying formula_21 gives the curving isochrons in the formula_27 plane. For example, very near the origin the isochrons of the above system are approximately the lines formula_28. Find which isochron the initial values formula_24 lie on: that isochron is characterised by some formula_26; the initial condition that gives the correct forecast from the model for all time is then formula_29.
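A numerical check of the toy system supports this picture: after the fast transient decays, the state settles onto the invariant slow manifold y = x2 (an identity one can verify directly for this particular system), along which the dynamics reduce to the model dX/dt = −X3. The forward-Euler sketch below (step size and initial values are ours) starts well off the manifold:

```python
def integrate_toy(x0, y0, t_end=10.0, dt=1e-3):
    # Forward Euler for dx/dt = -x*y, dy/dt = -y + x**2 - 2*y**2.
    x, y = x0, y0
    for _ in range(int(round(t_end / dt))):
        x, y = x + dt * (-x * y), y + dt * (-y + x**2 - 2 * y**2)
    return x, y

x, y = integrate_toy(0.2, 0.3)   # y0 = 0.3 starts well off the manifold y = x^2
print(abs(y - x**2))             # tiny: the fast variable has relaxed onto the manifold
```

The deviation from y = x2 decays like exp(−t), so by t = 10 only the slow dynamics remain — exactly the part captured by the model equation for X.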
You may find such normal form transformations for relatively simple systems of ordinary differential equations, both deterministic and stochastic, via an interactive web site.
|
[
{
"math_id": 0,
"text": "y(t)"
},
{
"math_id": 1,
"text": " \\frac{d^2y}{dt^2} + \\frac{dy}{dt} = 1"
},
{
"math_id": 2,
"text": "t=0"
},
{
"math_id": 3,
"text": "y(0)=y_0"
},
{
"math_id": 4,
"text": "dy/dt(0)=y'_0"
},
{
"math_id": 5,
"text": "y_0"
},
{
"math_id": 6,
"text": "y'_0"
},
{
"math_id": 7,
"text": "y_0+y'_0=\\mbox{constant}"
},
{
"math_id": 8,
"text": "y=t+A+B\\exp(-t) "
},
{
"math_id": 9,
"text": "t\\to\\infty"
},
{
"math_id": 10,
"text": "y\\to t+A"
},
{
"math_id": 11,
"text": "A"
},
{
"math_id": 12,
"text": "B\\exp(-t)"
},
{
"math_id": 13,
"text": "y_0=A+B"
},
{
"math_id": 14,
"text": "y'_0=1-B"
},
{
"math_id": 15,
"text": "B"
},
{
"math_id": 16,
"text": "y_0+y'_0=1+A"
},
{
"math_id": 17,
"text": " \\frac{dx}{dt} = -xy \\text{ and } \\frac{dy}{dt} = -y+x^2 - 2y^2"
},
{
"math_id": 18,
"text": " x=X+XY+\\cdots \\text{ and } y=Y+2Y^2+X^2+\\cdots"
},
{
"math_id": 19,
"text": "(X,Y)"
},
{
"math_id": 20,
"text": " \\frac{dX}{dt} = -X^3+ \\cdots \\text{ and } \\frac{dY}{dt} = (-1-2X^2+\\cdots)Y"
},
{
"math_id": 21,
"text": "Y"
},
{
"math_id": 22,
"text": "dY/dt= (\\text{negative})Y"
},
{
"math_id": 23,
"text": "X"
},
{
"math_id": 24,
"text": "(x_0,y_0)"
},
{
"math_id": 25,
"text": "X(0)"
},
{
"math_id": 26,
"text": "X_0"
},
{
"math_id": 27,
"text": "(x,y)"
},
{
"math_id": 28,
"text": "x-Xy=X-X^3"
},
{
"math_id": 29,
"text": "X(0)=X_0"
}
] |
https://en.wikipedia.org/wiki?curid=15390784
|
15393951
|
Vague topology
|
In mathematics, particularly in the area of functional analysis and topological vector spaces, the vague topology is an example of the weak-* topology which arises in the study of measures on locally compact Hausdorff spaces.
Let formula_0 be a locally compact Hausdorff space. Let formula_1 be the space of complex Radon measures on formula_2 and formula_3 denote the dual of formula_4 the Banach space of complex continuous functions on formula_0 vanishing at infinity equipped with the uniform norm. By the Riesz representation theorem formula_1 is isometric to formula_5 The isometry maps a measure formula_6 to a linear functional formula_7
The vague topology is the weak-* topology on formula_5 The corresponding topology on formula_1 induced by the isometry from formula_3 is also called the vague topology on formula_8 Thus in particular, a sequence of measures formula_9 converges vaguely to a measure formula_6 whenever for all test functions formula_10
formula_11
It is also not uncommon to define the vague topology by duality with continuous functions having compact support formula_12 that is, a sequence of measures formula_9 converges vaguely to a measure formula_6 whenever the above convergence holds for all test functions formula_13 This construction gives rise to a different topology. In particular, the topology defined by duality with formula_14 can be metrizable whereas the topology defined by duality with formula_15 is not.
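A concrete example of vague convergence: the empirical measures placing mass 1/n at each of the points k/n converge vaguely to Lebesgue measure on [0, 1], because integrating a continuous test function against them is a Riemann sum. A quick numerical check (the test function cos is chosen arbitrarily):

```python
import math

def integrate_against(atoms, f):
    # Computes ∫ f dμ for a purely atomic measure given as (point, mass) pairs.
    return sum(mass * f(x) for x, mass in atoms)

def empirical(n):
    # μ_n places mass 1/n at each of the points k/n, k = 1, ..., n.
    return [(k / n, 1.0 / n) for k in range(1, n + 1)]

f, exact = math.cos, math.sin(1.0)   # ∫_0^1 cos(x) dx = sin(1)
errors = [abs(integrate_against(empirical(n), f) - exact) for n in (10, 100, 1000)]
print(errors)  # shrinking like 1/n: the integrals against μ_n converge
```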
One application of this is to probability theory: for example, the central limit theorem is essentially a statement that if formula_16 are the probability measures for certain sums of independent random variables, then formula_16 converge weakly (and hence vaguely) to a normal distribution, that is, the measure formula_16 is "approximately normal" for large formula_17
References.
<templatestyles src="Reflist/styles.css" />
"This article incorporates material from Weak-* topology of the space of Radon measures on PlanetMath, which is licensed under the ."
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "M(X)"
},
{
"math_id": 2,
"text": "X,"
},
{
"math_id": 3,
"text": "C_0(X)^*"
},
{
"math_id": 4,
"text": "C_0(X),"
},
{
"math_id": 5,
"text": "C_0(X)^*."
},
{
"math_id": 6,
"text": "\\mu"
},
{
"math_id": 7,
"text": "I_\\mu(f) := \\int_X f\\, d\\mu."
},
{
"math_id": 8,
"text": "M(X)."
},
{
"math_id": 9,
"text": "\\left(\\mu_n\\right)_{n \\in \\N}"
},
{
"math_id": 10,
"text": "f \\in C_0(X),"
},
{
"math_id": 11,
"text": "\\int_X f d\\mu_n \\to \\int_X f d\\mu."
},
{
"math_id": 12,
"text": "C_c(X),"
},
{
"math_id": 13,
"text": "f \\in C_c(X)."
},
{
"math_id": 14,
"text": "C_c(X)"
},
{
"math_id": 15,
"text": "C_0(X)"
},
{
"math_id": 16,
"text": "\\mu_n"
},
{
"math_id": 17,
"text": "n."
}
] |
https://en.wikipedia.org/wiki?curid=15393951
|