id (string, 2-8 chars) | title (string, 1-130 chars) | text (string, 0-252k chars) | formulas (list, 1-823 items) | url (string, 38-44 chars)
---|---|---|---|---|
6534294
|
Ljung–Box test
|
Statistical test
The Ljung–Box test (named for Greta M. Ljung and George E. P. Box) is a type of statistical test of whether any of a group of autocorrelations of a time series are different from zero. Instead of testing randomness at each distinct lag, it tests the "overall" randomness based on a number of lags, and is therefore a portmanteau test.
This test is sometimes known as the Ljung–Box Q test, and it is closely connected to the Box–Pierce test (which is named after George E. P. Box and David A. Pierce). In fact, the Ljung–Box test statistic was described explicitly in the paper that led to the use of the Box–Pierce statistic, and from which that statistic takes its name. The Box–Pierce test statistic is a simplified version of the Ljung–Box statistic for which subsequent simulation studies have shown poor performance.
The Ljung–Box test is widely applied in econometrics and other applications of time series analysis. A similar assessment can be also carried out with the Breusch–Godfrey test and the Durbin–Watson test.
Formal definition.
The Ljung–Box test may be defined as:
formula_0: The data are independently distributed (i.e. the correlations in the population from which the sample is taken are 0, so that any observed correlations in the data result from randomness of the sampling process).
formula_1: The data are not independently distributed; they exhibit serial correlation.
The test statistic is:
formula_2
where "n" is the sample size, formula_3 is the sample autocorrelation at lag "k", and "h" is the number of lags being tested. Under formula_0 the statistic Q asymptotically follows a formula_4. For significance level α, the critical region for rejection of the hypothesis of randomness is:
formula_5
where formula_6 is the (1 − "α")-quantile of the chi-squared distribution with "h" degrees of freedom.
The Ljung–Box test is commonly used in autoregressive integrated moving average (ARIMA) modeling. Note that it is applied to the residuals of a fitted ARIMA model, not the original series, and in such applications the hypothesis actually being tested is that the residuals from the ARIMA model have no autocorrelation. When testing the residuals of an estimated ARIMA model, the degrees of freedom need to be adjusted to reflect the parameter estimation. For example, for an ARIMA("p",0,"q") model, the degrees of freedom should be set to formula_7.
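A minimal sketch of the statistic in Python (illustrative only; the function and argument names are ours, and `fitted_params` corresponds to p + q when testing ARIMA("p",0,"q") residuals, so the degrees of freedom become formula_7):

```python
import numpy as np
from scipy.stats import chi2

def ljung_box(x, h, fitted_params=0):
    """Ljung-Box Q statistic and p-value over lags 1..h.

    fitted_params is the number of estimated model parameters (p + q for
    an ARIMA(p, 0, q) model) subtracted from the degrees of freedom when
    x contains model residuals rather than a raw series.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    denom = np.sum(x * x)
    lags = np.arange(1, h + 1)
    rho = np.array([np.sum(x[k:] * x[:-k]) / denom for k in lags])  # sample ACF
    q = n * (n + 2) * np.sum(rho**2 / (n - lags))
    p_value = chi2.sf(q, h - fitted_params)
    return q, p_value

# White noise should only rarely reject the null of no autocorrelation:
rng = np.random.default_rng(0)
print(ljung_box(rng.standard_normal(500), h=10))
```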
Box–Pierce test.
The Box–Pierce test uses the test statistic, in the notation outlined above, given by
formula_8
and it uses the same critical region as defined above.
Simulation studies have shown that the distribution for the Ljung–Box statistic is closer to a formula_4 distribution than is the distribution for the Box–Pierce statistic for all sample sizes including small ones.
|
[
{
"math_id": 0,
"text": "H_0"
},
{
"math_id": 1,
"text": "H_a"
},
{
"math_id": 2,
"text": "\nQ = n(n+2)\\sum_{k=1}^h\\frac{\\hat{\\rho}^2_k}{n-k}\n"
},
{
"math_id": 3,
"text": "\\hat{\\rho}_k"
},
{
"math_id": 4,
"text": "\\chi^2_{(h)}"
},
{
"math_id": 5,
"text": "\nQ > \\chi_{1-\\alpha,h}^2\n"
},
{
"math_id": 6,
"text": "\\chi_{1-\\alpha,h}^2"
},
{
"math_id": 7,
"text": "h - p - q"
},
{
"math_id": 8,
"text": "\nQ_\\text{BP} = n \\sum_{k=1}^h \\hat{\\rho}^2_k,\n"
}
] |
https://en.wikipedia.org/wiki?curid=6534294
|
65349345
|
Sterbenz lemma
|
Exact floating-point subtraction theorem
In floating-point arithmetic, the Sterbenz lemma or Sterbenz's lemma is a theorem giving conditions under which floating-point differences are computed exactly.
It is named after Pat H. Sterbenz, who published a variant of it in 1974.
The Sterbenz lemma applies to IEEE 754, the most widely used floating-point number system in computers.
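The hypothesis of the lemma is that formula_0 and formula_1 are floating-point numbers of the same format satisfying formula_15, and the conclusion is that formula_2 is itself representable in that format, so the subtraction incurs no rounding error. A brief illustrative check in Python (IEEE 754 binary64), with arbitrarily chosen values:

```python
from fractions import Fraction

def subtraction_is_exact(x: float, y: float) -> bool:
    """Compare the floating-point difference with the exact rational difference."""
    return Fraction(x) - Fraction(y) == Fraction(x - y)

# Values chosen (arbitrarily) so that the hypothesis x/2 <= y <= 2x holds:
x, y = 1.2345678901234567, 0.7654321098765432
assert x / 2 <= y <= 2 * x
print(subtraction_is_exact(x, y))        # True: the Sterbenz lemma applies

# Outside the hypothesis, subtraction need not be exact:
print(subtraction_is_exact(1.0, 1e-30))  # False in binary64
```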
Proof.
Let formula_3 be the radix of the floating-point system and formula_4 the precision.
Consider several easy cases first:
For the rest of the proof, assume formula_17 without loss of generality.
Write formula_18 in terms of their positive integral significands formula_19 and minimal exponents formula_20:
formula_21
Note that formula_0 and formula_1 may be subnormal—we do not assume formula_22.
The subtraction gives:
formula_23
Let formula_24.
Since formula_25 we have:
Further, since formula_31, we have formula_32, so that
formula_33
which implies that
formula_34
Hence
formula_35
so formula_2 is a floating-point number.
Note: Even if formula_0 and formula_1 are normal, i.e., formula_22, we cannot prove that formula_36 and therefore cannot prove that formula_2 is also normal.
For example, the difference of the two smallest positive normal floating-point numbers formula_37 and formula_38 is formula_39 which is necessarily subnormal.
In floating-point number systems without subnormal numbers, such as CPUs in nonstandard flush-to-zero mode instead of the standard gradual underflow, the Sterbenz lemma does not apply.
Relation to catastrophic cancellation.
The Sterbenz lemma may be contrasted with the phenomenon of catastrophic cancellation: the lemma shows that subtracting nearby floating-point numbers is exact, but if those numbers are themselves only approximations, then even their exact difference may be far from the difference of the quantities one actually wanted to subtract.
Use in numerical analysis.
The Sterbenz lemma is instrumental in proving theorems on error bounds in numerical analysis of floating-point algorithms.
For example, Heron's formula
formula_44
for the area of a triangle with side lengths formula_45, formula_46, and formula_47, where formula_48 is the semi-perimeter, may give poor accuracy for long, narrow triangles if evaluated directly in floating-point arithmetic.
However, for formula_49, the alternative formula
formula_50
can be proven, with the help of the Sterbenz lemma, to have low forward error for all inputs.
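A sketch of the alternative formula in Python (the function name is ours); following the usual presentation, the sides are assumed to be sorted as formula_49 and the parenthesisation must be kept exactly as written:

```python
import math

def stable_triangle_area(a: float, b: float, c: float) -> float:
    """Numerically stable variant of Heron's formula.
    Requires a >= b >= c; the parentheses must not be rearranged."""
    if not (a >= b >= c):
        raise ValueError("sides must satisfy a >= b >= c")
    return 0.25 * math.sqrt((a + (b + c)) * (c - (a - b))
                            * (c + (a - b)) * (a + (b - c)))

print(stable_triangle_area(5.0, 4.0, 3.0))   # 6.0, the 3-4-5 right triangle
```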
|
[
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "y"
},
{
"math_id": 2,
"text": "x - y"
},
{
"math_id": 3,
"text": "\\beta"
},
{
"math_id": 4,
"text": "p"
},
{
"math_id": 5,
"text": "x - y = -y"
},
{
"math_id": 6,
"text": "x - y = x"
},
{
"math_id": 7,
"text": "x = y"
},
{
"math_id": 8,
"text": "x < 0"
},
{
"math_id": 9,
"text": "y/2 \\leq x < 0"
},
{
"math_id": 10,
"text": "y < 0"
},
{
"math_id": 11,
"text": "x - y = -(-x - -y)"
},
{
"math_id": 12,
"text": "x, y \\geq 0"
},
{
"math_id": 13,
"text": "x \\leq y"
},
{
"math_id": 14,
"text": "x - y = -(y - x)"
},
{
"math_id": 15,
"text": "x/2 \\leq y \\leq 2 x"
},
{
"math_id": 16,
"text": "x \\geq y"
},
{
"math_id": 17,
"text": "0 < y < x \\leq 2 y"
},
{
"math_id": 18,
"text": "x, y > 0"
},
{
"math_id": 19,
"text": "s_x, s_y \\leq \\beta^p - 1"
},
{
"math_id": 20,
"text": "e_x, e_y"
},
{
"math_id": 21,
"text": "\n\\begin{align}\n x &= s_x \\cdot \\beta^{e_x - p + 1} \\\\\n y &= s_y \\cdot \\beta^{e_y - p + 1}\n\\end{align}\n"
},
{
"math_id": 22,
"text": "s_x, s_y \\geq \\beta^{p - 1}"
},
{
"math_id": 23,
"text": "\n\\begin{align}\n x - y\n &= s_x \\cdot \\beta^{e_x - p + 1}\n - s_y \\cdot \\beta^{e_y - p + 1} \\\\\n &= s_x \\beta^{e_x - e_y} \\cdot \\beta^{e_y - p + 1}\n - s_y \\cdot \\beta^{e_y - p + 1} \\\\\n &= (s_x \\beta^{e_x - e_y} - s_y) \\cdot \\beta^{e_y - p + 1}.\n\\end{align}\n"
},
{
"math_id": 24,
"text": "s' = s_x \\beta^{e_x - e_y} - s_y"
},
{
"math_id": 25,
"text": "0 < y < x"
},
{
"math_id": 26,
"text": "e_y \\leq e_x"
},
{
"math_id": 27,
"text": "e_x - e_y \\geq 0"
},
{
"math_id": 28,
"text": "\\beta^{e_x - e_y}"
},
{
"math_id": 29,
"text": "x - y > 0"
},
{
"math_id": 30,
"text": "s' > 0"
},
{
"math_id": 31,
"text": "x \\leq 2 y"
},
{
"math_id": 32,
"text": "x - y \\leq y"
},
{
"math_id": 33,
"text": "\n s' \\cdot \\beta^{e_y - p + 1} = x - y \\leq y = s_y \\cdot \\beta^{e_y - p + 1}\n"
},
{
"math_id": 34,
"text": "\n 0 < s' \\leq s_y \\leq \\beta^p - 1.\n"
},
{
"math_id": 35,
"text": "\n x - y = s' \\cdot \\beta^{e_y - p + 1},\n \\quad \\text{for} \\quad\n 0 < s' \\leq \\beta^p - 1,\n"
},
{
"math_id": 36,
"text": "s' \\geq \\beta^{p - 1}"
},
{
"math_id": 37,
"text": "x = (\\beta^{p - 1} + 1) \\cdot \\beta^{e_{\\mathrm{min}} - p + 1}"
},
{
"math_id": 38,
"text": "y = \\beta^{p - 1} \\cdot \\beta^{e_{\\mathrm{min}} - p + 1}"
},
{
"math_id": 39,
"text": "x - y = 1 \\cdot \\beta^{e_{\\mathrm{min}} - p + 1}"
},
{
"math_id": 40,
"text": "x \\ominus y = \\operatorname{fl}(x - y)"
},
{
"math_id": 41,
"text": "\\tilde x"
},
{
"math_id": 42,
"text": "\\tilde y"
},
{
"math_id": 43,
"text": "\\tilde x - \\tilde y"
},
{
"math_id": 44,
"text": "A = \\sqrt{s (s - a) (s - b) (s - c)}"
},
{
"math_id": 45,
"text": "a"
},
{
"math_id": 46,
"text": "b"
},
{
"math_id": 47,
"text": "c"
},
{
"math_id": 48,
"text": "s = (a + b + c)/2"
},
{
"math_id": 49,
"text": "a \\geq b \\geq c"
},
{
"math_id": 50,
"text": "A = \\frac{1}{4} \\sqrt{\\bigl(a + (b + c)\\bigr) \\bigl(c - (a - b)\\bigr) \\bigl(c + (a - b)\\bigr) \\bigl(a + (b - c)\\bigr)}"
}
] |
https://en.wikipedia.org/wiki?curid=65349345
|
65351600
|
Perron's irreducibility criterion
|
Condition for polynomials to be unfactorable
Perron's irreducibility criterion is a sufficient condition for a polynomial to be irreducible in formula_0—that is, for it to be unfactorable into the product of lower-degree polynomials with integer coefficients.
This criterion is applicable only to monic polynomials. However, unlike other commonly used criteria, Perron's criterion does not require any knowledge of prime decomposition of the polynomial's coefficients.
Criterion.
Suppose we have the following polynomial with integer coefficients
formula_1
where formula_2. If either formula_3 or formula_4 holds,
then formula_5 is irreducible over the integers (and by Gauss's lemma also over the rational numbers).
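An illustrative checker for the criterion in Python (function and argument names are ours; the criterion is only sufficient, so a False result says nothing about reducibility):

```python
def perron_certifies_irreducible(coeffs):
    """Perron's criterion for a monic integer polynomial
    x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0,
    given as coeffs = [a_{n-1}, ..., a_1, a_0] with a_0 != 0."""
    a = list(coeffs)
    if a[-1] == 0:
        return False                       # a_0 must be nonzero
    lead, tail = abs(a[0]), sum(abs(v) for v in a[1:])
    if lead > 1 + tail:
        return True
    if lead == 1 + tail:
        def f(x):                          # evaluate the monic polynomial by Horner's rule
            val = 1
            for coeff in a:
                val = val * x + coeff
            return val
        return f(1) != 0 and f(-1) != 0
    return False                           # criterion inconclusive

# x^3 + 5x^2 + 2x + 1: |5| > 1 + |2| + |1|, so it is irreducible over the integers.
print(perron_certifies_irreducible([5, 2, 1]))   # True
```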
History.
The criterion was first published by Oskar Perron in 1907 in Journal für die reine und angewandte Mathematik.
Proof.
A short proof can be given based on the following lemma due to Panaitopol:
Lemma. Let formula_6 be a polynomial with formula_7. Then exactly one zero formula_8 of formula_5 satisfies formula_9, and the other formula_10 zeroes of formula_5 satisfy formula_11.
Suppose that formula_12, where formula_13 and formula_14 are non-constant integer polynomials. Since, by the above lemma, formula_5 has only one zero with modulus not less than formula_15, one of the polynomials formula_16 has all its zeroes strictly inside the unit circle. Suppose that formula_17 are the zeroes of formula_13, and formula_18. Note that formula_19 is a nonzero integer, yet formula_20, a contradiction. Therefore, formula_5 is irreducible.
Generalizations.
In his publication Perron provided variants of the criterion for multivariate polynomials over arbitrary fields. In 2010, Bonciocat published novel proofs of these criteria.
|
[
{
"math_id": 0,
"text": "\\mathbb{Z}[x]"
},
{
"math_id": 1,
"text": "f(x)=x^n+a_{n-1}x^{n-1}+\\cdots+a_1x+a_0,"
},
{
"math_id": 2,
"text": "a_0\\neq 0"
},
{
"math_id": 3,
"text": "|a_{n-1}|> 1+|a_{n-2}|+\\cdots+|a_0|"
},
{
"math_id": 4,
"text": "|a_{n-1}|= 1+|a_{n-2}|+\\cdots+|a_0|, \\quad f(\\pm 1) \\neq 0"
},
{
"math_id": 5,
"text": "f"
},
{
"math_id": 6,
"text": "f(x)=x^n+a_{n-1}x^{n-1}+\\cdots+a_1x+a_0"
},
{
"math_id": 7,
"text": "|a_{n-1}|>1+|a_{n-2}|+\\cdots+|a_{1}|+|a_0|"
},
{
"math_id": 8,
"text": "z"
},
{
"math_id": 9,
"text": "|z|>1"
},
{
"math_id": 10,
"text": "n-1"
},
{
"math_id": 11,
"text": "|z|<1"
},
{
"math_id": 12,
"text": "f(x)=g(x)h(x)"
},
{
"math_id": 13,
"text": "g"
},
{
"math_id": 14,
"text": "h"
},
{
"math_id": 15,
"text": "1"
},
{
"math_id": 16,
"text": "g, h"
},
{
"math_id": 17,
"text": "z_1,\\dots,z_k"
},
{
"math_id": 18,
"text": "|z_1|,\\dots,|z_k|<1"
},
{
"math_id": 19,
"text": "g(0)"
},
{
"math_id": 20,
"text": "|g(0)|=|z_1\\cdots z_k|<1"
}
] |
https://en.wikipedia.org/wiki?curid=65351600
|
653589
|
147 (number)
|
Natural number
147 (one hundred [and] forty-seven) is the natural number following 146 and preceding 148.
In mathematics.
147 is the fourth centered icosahedral number. These are a class of figurate numbers that represent points in the shape of a regular icosahedron or alternatively points in the shape of a cuboctahedron, and are magic numbers for the face-centered cubic lattice. Separately, it is also a magic number for the diamond cubic.
It is also the fourth Apéry number formula_0, following 19, where formula_1
with 147 also the index of the nineteenth triangular number, 190, in the sequence of composite numbers (190 is the 147th composite number).
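A quick illustrative computation of the first few Apéry numbers from this formula (not part of the original article):

```python
from math import comb

def apery(n: int) -> int:
    """Apery number a_n = sum_k C(n, k)^2 * C(n + k, k)."""
    return sum(comb(n, k) ** 2 * comb(n + k, k) for k in range(n + 1))

print([apery(n) for n in range(4)])   # [1, 3, 19, 147]
```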
There are 147 different ways of representing one as a sum of unit fractions with five terms, allowing repeated fractions, and 147 different self-avoiding polygonal chains of length six using horizontal and vertical segments of the integer lattice.
In other fields.
147 is the highest possible break in snooker, in the absence of fouls and refereeing errors.
In some traditions, there are 147 psalms. However, current Christian and Jewish traditions list a larger number, leading to the suggestion that some of the psalms in the earlier numbering were split into multiple pieces.
147 is the telephone number of the 27 Brazilian Civil Police forces.
|
[
{
"math_id": 0,
"text": "a_3"
},
{
"math_id": 1,
"text": "a_n=\\sum_{k=0}^n\\binom{n}{k}^2\\binom{n+k}{k},"
}
] |
https://en.wikipedia.org/wiki?curid=653589
|
65368205
|
Geological structure measurement by LiDAR
|
Terrain measurement with light beams
Geological structure measurement by LiDAR technology is a remote sensing method applied in structural geology. It enables monitoring and characterisation of rock bodies. The method is typically used to acquire high-resolution structural and deformational data for identifying geological hazard risks, such as assessing rockfall risk or studying pre-earthquake deformation signs.
Geological structures are the results of tectonic deformation, which controls landform distribution patterns. These structures include folds, fault planes, and the size, persistence, spatial variation, and number of rock discontinuities in a particular region. Such discontinuity features significantly affect slope stability, causing slope failures or separating a rock mass into intact rock blocks (rockfall). Some blocks displaced along faults are signs of earthquakes.
Conventionally, geotechnical engineers carried out rock discontinuity studies manually. In post-event studies of geological hazards such as rockfall, the source areas are dangerous and difficult to access, which severely hinders the detailed structural measurements and volumetric calculations necessary for hazard assessment. By using LiDAR, geological structures can be evaluated remotely, enabling a 3-D investigation of slopes with virtual outcrops.
LiDAR technology (Light Detection and Ranging) is a remote sensing technique that obtains precise 3-D position and distance information. The receiver calculates the distance from the travel time between emitting a laser pulse and receiving its reflection. LiDAR produces topographic maps and is useful for assessing the natural environment.
Importance of measuring geological structures by LiDAR.
Geological structures give rock masses their distinct physical properties. Discontinuities and plate tectonic forces may alter rock masses and their geometries. These structures include joints, fractures, bedding planes, shear zones, mechanical breaks, and other features, ranging from the microscopic scale (<1 cm, e.g. foliation developed by metamorphism) to the macroscopic scale (>100 m, e.g. mid-oceanic ridges).
Geological structures are typically elongated, and their orientations are often described by their "strike". If a rock body is extensively tilted, it may, depending on the slope's resistance, have a high potential to cause rockfalls. The use of LiDAR in structural analysis allows landform features to be measured from the scale of a single outcrop up to an entire terrain. Some geological structure measurements and their importance are listed below:
Rock plane orientation measurement and rockfall risk assessment.
Rock plane orientations are the natural inclinations of rock planes; examples of rock planes are bedding planes and fault planes. A plane's orientation is measured by dip and dip direction with a clinometer and compass, where dip is the maximum inclination of the plane to the horizontal and dip direction is the compass bearing of the steepest descent line on the plane (perpendicular to the strike, the line of intersection between the plane and the horizontal). A stereonet can visualise the distribution of dips and dip directions to analyse the kinematics of a slope.
Kinematics represents the motion of a rock body without regard to the external forces that cause it to move. Kinematic analysis concentrates on the possibility of translational failures due to planes sliding, although other failure modes, such as wedge and toppling failures, can also be recognized.
Fault behaviour measurement and earthquake prediction.
Fault behaviour can be used to measure the rate of sediment transport and to help predict earthquakes. An earthquake can contribute to the formation of fault scarps: one side of a block is thrown up relative to the other, causing vertical displacement. Therefore, given the parameters of a fault scarp, structural geologists are able to trace its age and deduce the time involved in forming such a feature.
Earthquakes are initiated by slow slips, the displacements of the blocks on the two sides of a fault. These slow slips are undetectable by seismometers (at most about 5 mm/day). When the slipping blocks reach a critical rupture velocity, the fault gradually evolves towards its final quake size by linear acceleration along the fault plane. The critical displacement of a fault is proportional to its initial rupture velocity.
After collecting LiDAR data from pre-earthquake and post-earthquake landforms and constructing 3-D digital terrain models, the displacement and deformation can be derived. Scientists can thus anticipate the final scale of a future earthquake by determining the characteristics of faults and slips and the size of the affected areas, making short-term earthquake predictions possible.
Surface processes and geological mapping.
When carrying out geological mapping, interpretations of aerial photographs and satellite imagery are often used, but forest vegetation remains a major obstacle: characterising physical landform features at ridges and valleys is complicated because many of these features are forest-covered. Topographic maps then have to be constructed from data obtained manually.
LiDAR provides a full-waveform system that enables the laser pulse to penetrate through canopies and vegetation, allowing bare-ground geological data points to be obtained. Webster et al. discovered new craters in Northern Canada by combining LiDAR data and digital terrain models. A digital terrain model has to be constructed to measure structural parameters (tilt angles, river incision depths). With precise bedrock and surficial lithology mapping by LiDAR, structural geologists can reconstruct the surficial processes involved.
Traditional structural measurement.
Traditionally, structural orientations could only be assessed manually on reachable, exposed rock masses. Engineering geologists could investigate only a limited number of discontinuities at a time, and those discontinuities may not represent the whole outcrop; the traditional rock plane orientation measurement may therefore be biased.
Geotechnical studies also investigate other geomechanical parameters, such as persistence, block size and rock joint spacing.
LiDAR technology.
LiDAR (Light Detection and Ranging) is a rapid surveying process that emits and receives laser pulses to acquire 3-D information. By illuminating the object of interest with light of different wavelengths, LiDAR can be used to create precise topographic maps, with applications in geology, geomorphology, surveying, and other fields. Topographic mapping is possible because of the Inertial Measurement Unit and Global Positioning System. Furthermore, the technology can be used to study steep slopes and rock cliffs.
Accurate positioning data is necessary for the LiDAR data to be geo-referenced, that is, located in a local or global coordinate system. The resulting LiDAR data can then be overlaid onto previously collected aerial photographs to observe changes in topography over time.
Principle of LiDAR.
A LiDAR system emits pulsed or continuous-wave lasers to acquire 3-D information; the laser scanner is its main component. Lasers with a wavelength of 550-600 nm are used in ground-based systems (handheld laser scanning and terrestrial laser scanning), whereas airborne systems use lasers with a wavelength of 1000-1600 nm.
The range is calculated from the round-trip travel time of the pulse by the following formula:
formula_0
LiDAR receives information by discrete or full-waveform return. Full-waveform (multi-return) is often used for forest analysis by airborne LiDAR, while discrete return (single return) is used by ground-based laser scanning. A laser pulse is reflected whenever it reaches a surface; a full-waveform return is able to penetrate into canopies and return vegetation information at different heights, whereas a discrete return captures only the uppermost surface. Airborne LiDAR is therefore often used for forestry studies.
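A one-line illustration of the range formula above, where the pulse travel time is arbitrary and the speed of light is taken in vacuum:

```python
C = 299_792_458.0                   # speed of light, m/s

def lidar_range(travel_time_s: float) -> float:
    """Range R = (1/2) * c * t_s, with t_s the two-way travel time in seconds."""
    return 0.5 * C * travel_time_s

print(lidar_range(1e-6))            # a 1 microsecond echo corresponds to about 150 m
```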
Data representation and data format.
LiDAR data is mainly stored in a point cloud format (.las). The captured point cloud stores X, Y, Z geometric data. Each data point is obtained from a single laser return and represents a local geo-referenced spatial datum. The point cloud can represent realistic, three-dimensional rock faces in remote and inaccessible natural terrain.
The LiDAR data have the following parameters:
These data support the analysis of rock body features; they include recorded geometric or radiometric information on natural, excavated, or blasted rock slopes.
Previously, LiDAR data were stored in the American Standard Code for Information Interchange (ASCII) format, which has several problems:
1) Low reading and interpreting speed of ASCII files
2) Loss of useful data during data processing
3) The ASCII representation is not standardised
After 2003, the American Society for Photogrammetry and Remote Sensing (ASPRS) standardized LiDAR data in a sequential binary format (the .las file) containing LiDAR or other point cloud data records.
Geo-referencing.
Geo-referencing means that the co-ordinates on an aerial photograph or digital terrain model can be referenced to a global or regional geographic system, so that users are able to locate every collected data point on the Earth's surface. A typical global positioning system uses the World Geodetic System 1984 (WGS84) datum and stores the geo-referenced data in GeoTIFF/GeoPDF format. In addition, users may require orthometric elevation (elevation above sea level or a geoid model) in some scenarios, for example when analysing sea-level change with hydrological data.
Geo-referencing can be done by adding control points at the base of the slope. By shifting the point cloud onto the known correct co-ordinates of at least 3 points, the point cloud can be repositioned into an accurate co-ordinate system. This information is useful for calculating distances, volumes and areas. A minimal sketch of such an alignment is given below.
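The sketch assumes at least three surveyed control points have been matched to their counterparts in the point cloud and performs a least-squares rigid fit (the Kabsch algorithm); function and variable names are ours:

```python
import numpy as np

def rigid_alignment(cloud_pts, surveyed_pts):
    """Rotation R and translation t that best map control points measured in the
    point cloud onto their surveyed co-ordinates (least squares, Kabsch algorithm).
    At least 3 non-collinear point pairs are required."""
    P = np.asarray(cloud_pts, dtype=float)
    Q = np.asarray(surveyed_pts, dtype=float)
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t    # apply to the whole cloud as: cloud @ R.T + t
```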
Types of LiDAR.
LiDAR data can be collected from ground-based, airborne and mobile platforms, for example Airborne LiDAR Scanning (ALS), Unmanned Aerial Vehicles (UAV), Terrestrial Laser Scanning (TLS) and Handheld Laser Scanning (HLS). The tables below compare these data collection platforms in terms of:
1) The data collection methods
2) Geo-referencing technique (how to acquire the exact coordinate of a point)
3) Advantages and disadvantages
Digital terrain modelling.
A digital terrain model (DTM) is a mathematical model that represents the Earth's visible terrain surface. A DTM transforms discrete LiDAR data points into a continuous 3-D surface by connecting points with distinct height values to form planes; structural geologists can then derive structural orientations from these 3-D planes. This modelling technique is also used to create digital planetary surfaces, among many other applications.
Principle of Digital Terrain Model.
DTMs are classified by their basic geometric units, such as triangles or squares. Three major mathematical approaches are used, as described below:
Of these, the triangle-based and grid-based approaches are the most widely used.
Point-based.
Point-based surface modelling reconstructs a surface as a series of small, contiguous but mutually discontinuous planar surfaces, each formed around an individual data point. The approach can produce regular or irregular patterns by considering the regional boundary of influence of each point, as defined by a Voronoi diagram; regular patterns such as hexagons or squares are most widely used because they simplify computation.
The mathematical expression of the formation for each horizontal planar-surface is:
formula_1
where Z_i is the height of the planar surface around the i-th point and H_i is that point's height.
Triangular-based.
A triangle-based function can form a more tilted or irregular digital surface model and is treated as the primary way to construct a complex DTM. Triangles are very flexible: any polygon (e.g. a square or rectangle) can be decomposed into smaller triangles. A linked triangular network can incorporate break lines for plane fitting, which facilitates the formation of curved facets and surfaces. Forming a triangle requires a minimum of 3 data points, and the nearest 3 points are grouped into a triangle by Delaunay triangulation without overlapping (see the sketch after the parameter list below).
Because the data points are unevenly distributed, the triangle-based method constructs DTMs effectively, since it can capture surface variations. Even if some data points of the point cloud are removed or added, the local triangles can be re-formed without reconstructing the complete DTM.
A few parameters control the surface formation process:
1) Density of Mesh
2) Maximum angle of neighbouring triangles
3) Minimum Patch size
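As a sketch of the triangulation step (with placeholder coordinates, not tied to any particular software), SciPy's Delaunay routine can build the triangle network from the horizontal coordinates of ground points:

```python
import numpy as np
from scipy.spatial import Delaunay

# Placeholder ground points: columns are X, Y, Z (metres).
points = np.array([[0.0, 0.0, 10.2],
                   [1.0, 0.0, 10.6],
                   [0.0, 1.0, 11.1],
                   [1.0, 1.0, 11.4],
                   [0.5, 0.5, 10.9]])

tri = Delaunay(points[:, :2])   # triangulate in the horizontal plane
print(tri.simplices)            # indices of the points forming each triangle
```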
Grid-based.
The grid-based model is less suitable for broken terrain or sharp terrain discontinuities with steep slopes. A minimum of 4 data points is required to form a grid cell. The resulting surface is called a bi-linear surface, in which quadrilaterals of any shape (parallelograms, squares, rectangles or other irregular polygons) are linked to create a DTM. The method has the advantage of simple data handling because the data lie on evenly distributed square grids; some software therefore provides a "Random-to-Grid" operation to convert scattered points into grid form.
Approach for digital terrain modelling.
As the data collected are in the form of a point cloud, the 3-D co-ordinates of the laser points must be transformed into a digital terrain model through 2 major procedures:
1) Point cloud classification and ground filtering
2) Reconstruction of the ground surface from discrete laser point cloud data by interpolation.
Data processing and classification (Airborne LiDAR scanning only).
Data classification and noise cleaning are the processes used to obtain an unbiased slope surface. When ALS data are collected with multiple returns, this principle can be used to classify objects into different categories. Classification can be performed by TerraScan and TerraModel, computer software developed by TerraSolid for classifying point cloud data automatically. However, some manual adjustment and validation is needed to ensure the data points are classified correctly.
Algorithms can identify predominant landform features; these algorithms assume that surfaces with significant variation are non-ground features. For creating a surface model of a rock slope, only the classified ground data are needed. The ground and non-ground surfaces are classified into the following categories:
Data filtering.
Data filtering enables the extraction of the bare-Earth surface by removing unnecessary data or data noise. Rock orientation studies, outcrop mapping and topographical studies only require information about the rock slope bodies, so data filtering is implemented to separate the point cloud into ground and non-ground features. Bugs, vegetation or artificial infrastructure belong to the non-ground features.
In order to filter the 3-D laser points, a few methods available in open-source software can be applied to extract ground points from the whole area of interest:
Ground verification survey.
The purpose of the ground verification survey is to test the accuracy of the LiDAR data. Because the returning laser signals may include errors induced by atmospheric absorption, a ground-truth survey is needed to ensure the collected data's co-ordinates match the local coordinate system. For example, horizontal accuracy can be tested by comparing data collected with different techniques, and the data can be corrected by setting multiple control points with known coordinates.
Structural orientation analysis.
The DTMs are capable of yielding the structural parameters of geological features (for example, the tilt angle of a fold limb). In particular, the dip and dip direction of rock planes can be derived from a DTM. The general methodology has the following steps (a minimal sketch of the orientation computation in step 4 follows the list):
1) Obtain LiDAR data of a targeted slope
2) Noise cleaning and filtering of point data
3) Transform point data into DTM through mathematical functions
4) By using programs, such as Coltop 3D, the structural geometry of the joint/slope planes can be calculated automatically.
5) The rock planes are determined by color-coding or statistical method for evaluating whether the points are grouped to form a single or different planes.
6) Output the structural orientation data to suitable computer software files, and plot the distribution of dip and dip directions of the different planes on a stereonet.
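The sketch below uses common conventions (X = east, Y = north, Z = up; dip direction measured clockwise from north); the plane is fitted by SVD and the example points are placeholders:

```python
import numpy as np

def plane_orientation(points):
    """Dip and dip direction (degrees) of the best-fit plane through 3-D points."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred)
    n = vt[-1]                      # normal = singular vector with the smallest singular value
    if n[2] < 0:                    # orient the normal upward
        n = -n
    dip = np.degrees(np.arccos(n[2]))
    dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip, dip_direction

face = [[0, 0, 0.0], [1, 0, -0.5], [0, 1, 0.0], [1, 1, -0.5]]
print(plane_orientation(face))      # about (26.6, 90.0): the face dips 26.6 degrees towards the east
```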
Data partition.
Data partitioning helps redistribute unevenly distributed point data by dividing the data into cubes. Returned point cloud data have different point densities owing to variations in the data collection devices and parameters. The point cloud's density determines the cube size, with no fewer than 4 points forming each cube. Points considered to be non-conformers are removed from the point cloud. Afterwards, the normal vector of each cube is calculated.
Octree partitioning in open-source software, including CloudCompare and Geomagic, can achieve this data partitioning. Because rock masses in different terrains vary in roughness, users need to set the cube size manually to obtain the best cubes. The average number of points in each cube ranges from 15 to 30, obtained by setting the point spacing between 4 mm and 7 mm.
Cluster analysis by normal vectors.
The purpose of clustering rock discontinuities is to group the sub-planes of a slope into the discontinuity sets they belong to. The discontinuities exhibit waviness, roughness or undulating surfaces, and points within the same joint set show similar orientations. The algorithm determines whether points lie within the main orientation of the bedding planes or on sub-parallel planes, and the clustering technique classifies different joint sets based on the normal vectors assigned to each face with approximately the same orientation. A minimal clustering sketch is given below.
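The sketch applies k-means to the (approximately unit) normal vectors; it is an illustration only, since dedicated joint-set clustering methods treat the axial nature of orientation data more carefully:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def cluster_joint_sets(normals, n_sets):
    """Group sub-plane normal vectors into n_sets discontinuity sets."""
    normals = np.asarray(normals, dtype=float)
    # flip every normal into the upper hemisphere, since orientations are axial
    normals = np.where(normals[:, 2:3] < 0, -normals, normals)
    centroids, labels = kmeans2(normals, n_sets, minit='++')
    return centroids, labels

# Placeholder normals from two roughly parallel families of faces:
normals = [[0.00, 0.30, 0.95], [0.05, 0.28, 0.96],
           [0.90, 0.10, 0.42], [0.88, 0.05, 0.47]]
print(cluster_joint_sets(normals, n_sets=2))
```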
Challenges for LiDAR Technology.
Although LiDAR can collect data over a large area in a short time with high efficiency, some issues remain challenging for the overall data processing and for generating the expected results.
Data filtering and surface generation parameters.
Vegetation may not be entirely removed during data filtering, which can affect the smoothness and the clustering of rock surfaces. In most cases 3-D models are generated through triangulation, and spikes are often formed; the spikes affect the clustering and smoothness of the 3-D surface, inducing errors in the calculated rock plane orientations. Spikes are formed when data points have similar X and Y coordinates but very different Z coordinates, producing needle-like triangles and a non-smooth surface.
The filtering parameters for surface clustering often depend on the user's past experience; for example, the octree parameter can be adjusted for different point densities, and the resulting DTM has to be judged by the user. Hence, repeated testing is needed to generate a satisfactory surface.
Point density.
Point cloud density is the spacing between data points in an obtained LiDAR dataset. It affects the accuracy of rock slope measurements and is one of the characteristics that must be considered during data processing, for example in data filtering, classification, feature extraction and object recognition. Point cloud density depends on various factors:
Cost.
Although LiDAR is an effective data acquisition method for rock slopes, its high cost can make the technique impractical. When the area of interest is very small, the usability of TLS and ALS is limited, given that an ALS survey requires an aircraft, an experienced pilot, and a flight path and height approved by the local aviation department.
Employing a UAV can resolve the problem mentioned above: it can collect data in inaccessible areas at low cost and at small scale, since the device is portable and lightweight.
Other LiDAR applications.
With the high accuracy and efficiency of laser scanning as a data acquisition method, it can potentially be applied to areas other than structural measurement:
|
[
{
"math_id": 0,
"text": "R = {1 \\over2} \\cdot c\\cdot t_{s}"
},
{
"math_id": 1,
"text": "Z_i = H_i"
}
] |
https://en.wikipedia.org/wiki?curid=65368205
|
65369055
|
Nicola Leone
|
Italian computer scientist
Nicola Leone is an Italian computer scientist who works in the areas of artificial intelligence, knowledge representation and reasoning, and database theory. Leone is currently the rector of the University of Calabria and a professor of Computer Science. Previously, he was a professor of Database Systems at the TU Wien.
Research work.
Leone has published more than 250 scientific articles in the areas of artificial intelligence, knowledge representation and reasoning, and database theory.
In the area of artificial intelligence and knowledge representation and reasoning, he is best known for his influential early work on answer set programming (ASP)
and for the development of DLV, a pioneering system for knowledge representation and reasoning, which was the very first successful attempt to fully support disjunction in the datalog language, making it possible to compute problems of high complexity, up to NPformula_0.
To the field of database theory he mainly contributed through the invention of hypertree decomposition, a framework for obtaining tractable structural classes of conjunctive queries and a generalisation of the notion of tree decomposition from graph theory. This work has also had substantial impact in artificial intelligence, since it is known that the problem of evaluating conjunctive queries on relational databases is equivalent to the constraint satisfaction problem.
|
[
{
"math_id": 0,
"text": "^{NP}"
}
] |
https://en.wikipedia.org/wiki?curid=65369055
|
65373517
|
2Sum
|
Algorithm to compute rounding error
2Sum is a floating-point algorithm for computing the exact round-off error in a floating-point addition operation.
2Sum and its variant Fast2Sum were first published by Ole Møller in 1965.
Fast2Sum is often used implicitly in other algorithms such as compensated summation algorithms; Kahan's summation algorithm was published first in 1965, and Fast2Sum was later factored out of it by Dekker in 1971 for double-double arithmetic algorithms.
The names "2Sum" and "Fast2Sum" appear to have been applied retroactively by Shewchuk in 1997.
Algorithm.
Given two floating-point numbers formula_0 and formula_1, 2Sum computes the floating-point sum formula_2 rounded to nearest and the floating-point error formula_3 so that formula_4, where formula_5 and formula_6 respectively denote the addition and subtraction rounded to nearest.
The error formula_7 is itself a floating-point number.
Inputs floating-point numbers formula_8
Outputs rounded sum formula_9 and exact error formula_10
# formula_2
# formula_11
# formula_12
# formula_13
# formula_14
# formula_15
# return formula_16
Provided the floating-point arithmetic is correctly rounded to nearest (with ties resolved any way), as is the default in IEEE 754, and provided the sum does not overflow and, if it underflows, underflows gradually, it can be proven that formula_4.
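A direct transcription of the six steps above into Python, whose floats are IEEE 754 binary64 with round-to-nearest, so the stated conditions hold barring overflow (the function name is ours):

```python
def two_sum(a: float, b: float):
    """2Sum: rounded sum s and its exact rounding error t, with s + t == a + b."""
    s = a + b
    a_prime = s - b
    b_prime = s - a_prime
    delta_a = a - a_prime
    delta_b = b - b_prime
    t = delta_a + delta_b
    return s, t

print(two_sum(1.0, 1e-20))   # (1.0, 1e-20): the error term recovers the lost addend
```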
A variant of 2Sum called Fast2Sum uses only three floating-point operations, for floating-point arithmetic in radix 2 or radix 3, under the assumption that the exponent of formula_0 is at least as large as the exponent of formula_1, such as when formula_17:
Inputs radix-2 or radix-3 floating-point numbers formula_0 and formula_1, of which at least one is zero, or which respectively have normalized exponents formula_18
Outputs rounded sum formula_9 and exact error formula_10
# formula_2
# formula_19
# formula_20
# return formula_16
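The corresponding transcription of Fast2Sum in Python (again a sketch; the caller is responsible for the exponent condition, for instance by ensuring formula_17):

```python
def fast_two_sum(a: float, b: float):
    """Fast2Sum: assumes the exponent of a is at least that of b, e.g. |a| >= |b|."""
    s = a + b
    z = s - a
    t = b - z
    return s, t

print(fast_two_sum(1.0, 1e-20))   # (1.0, 1e-20)
```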
Even if the conditions are not satisfied, 2Sum and Fast2Sum often provide reasonable approximations to the error, i.e. formula_21, which enables algorithms for compensated summation, dot-product, etc., to have low error even if the inputs are not sorted or the rounding mode is unusual.
More complicated variants of 2Sum and Fast2Sum also exist for rounding modes other than round-to-nearest.
|
[
{
"math_id": 0,
"text": "a"
},
{
"math_id": 1,
"text": "b"
},
{
"math_id": 2,
"text": "s := a \\oplus b"
},
{
"math_id": 3,
"text": "t := a + b - (a \\oplus b)"
},
{
"math_id": 4,
"text": "s + t = a + b"
},
{
"math_id": 5,
"text": "\\oplus"
},
{
"math_id": 6,
"text": "\\ominus"
},
{
"math_id": 7,
"text": "t"
},
{
"math_id": 8,
"text": "a, b"
},
{
"math_id": 9,
"text": "s = a \\oplus b"
},
{
"math_id": 10,
"text": "t = a + b - (a \\oplus b)"
},
{
"math_id": 11,
"text": "a' := s \\ominus b"
},
{
"math_id": 12,
"text": "b' := s \\ominus a'"
},
{
"math_id": 13,
"text": "\\delta_a := a \\ominus a'"
},
{
"math_id": 14,
"text": "\\delta_b := b \\ominus b'"
},
{
"math_id": 15,
"text": "t := \\delta_a \\oplus \\delta_b"
},
{
"math_id": 16,
"text": "(s, t)"
},
{
"math_id": 17,
"text": "\\left|a\\right| \\geq \\left|b\\right|"
},
{
"math_id": 18,
"text": "e_a \\geq e_b"
},
{
"math_id": 19,
"text": "z = s \\ominus a"
},
{
"math_id": 20,
"text": "t = b \\ominus z"
},
{
"math_id": 21,
"text": "s + t \\approx a + b"
}
] |
https://en.wikipedia.org/wiki?curid=65373517
|
653780
|
Normal number (computing)
|
In computing, a normal number is a non-zero number in a floating-point representation which is within the balanced range supported by a given floating-point format: it is a floating point number that can be represented without leading zeros in its significand.
The magnitude of the smallest normal number in a format is given by:
formula_0
where "b" is the base (radix) of the format (like common values 2 or 10, for binary and decimal number systems), and "formula_1" depends on the size and layout of the format.
Similarly, the magnitude of the largest normal number in a format is given by
formula_2
where "p" is the precision of the format in digits and "formula_1" is related to "formula_3" as:
formula_4
In the IEEE 754 binary and decimal formats, "b", "p", formula_1, and "formula_3" have the following values:
For example, in the smallest decimal format in the table (decimal32), the range of positive normal numbers is 10−95 through 9.999999 × 1096.
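As an illustration (not part of the original article), the two bounds can be checked in Python for the IEEE 754 binary64 format, where formula_1 = -1022, formula_3 = 1023 and "p" = 53:

```python
import sys

b, p, e_min, e_max = 2, 53, -1022, 1023   # IEEE 754 binary64

smallest_normal = float(b) ** e_min
largest_normal = float(b) ** e_max * (b - float(b) ** (1 - p))

print(smallest_normal == sys.float_info.min)   # True (2.2250738585072014e-308)
print(largest_normal == sys.float_info.max)    # True (1.7976931348623157e+308)
```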
Non-zero numbers smaller in magnitude than the smallest normal number are called subnormal numbers (or "denormal numbers").
Zero is considered neither normal nor subnormal.
|
[
{
"math_id": 0,
"text": "b^{E_{\\text{min}}}"
},
{
"math_id": 1,
"text": "E_{\\text{min}}"
},
{
"math_id": 2,
"text": "b^{E_{\\text{max}}}\\cdot\\left(b - b^{1-p}\\right)"
},
{
"math_id": 3,
"text": "E_{\\text{max}}"
},
{
"math_id": 4,
"text": "E_{\\text{min}}\\, \\overset{\\Delta}{\\equiv}\\, 1 - E_{\\text{max}} = \\left(-E_{\\text{max}}\\right) + 1"
}
] |
https://en.wikipedia.org/wiki?curid=653780
|
65382329
|
Quantum artificial life
|
Simulation of biological behavior
Quantum artificial life is the application of quantum algorithms with the ability to simulate biological behavior. Quantum computers offer many potential improvements to processes performed on classical computers, including machine learning and artificial intelligence. Artificial intelligence applications are often inspired by the idea of mimicking human brains through closely related biomimicry. This has been implemented to a certain extent on classical computers (using neural networks), but quantum computers offer many advantages in the simulation of artificial life. Artificial life and artificial intelligence are extremely similar, with minor differences; the goal of studying artificial life is to understand living beings better, while the goal of artificial intelligence is to create intelligent beings.
In 2016, Alvarez-Rodriguez et al. developed a proposal for a quantum artificial life algorithm with the ability to simulate life and Darwinian evolution. In 2018, the same research team led by Alvarez-Rodriguez ran the proposed algorithm on the IBM "ibmqx4" quantum computer and obtained promising results: they accurately simulated a system with the ability to undergo self-replication at the quantum scale.
Artificial life on quantum computers.
The growing advancement of quantum computers has led researchers to develop quantum algorithms for simulating life processes. Researchers have designed a quantum algorithm that can accurately simulate Darwinian evolution. Since the complete simulation of artificial life on quantum computers has only been actualized by one group, this section focuses on the implementation by Alvarez-Rodriguez, Sanz, Lamata, and Solano on an IBM quantum computer.
Individuals were realized as two qubits, one representing the genotype of the individual and the other representing the phenotype. The genotype is copied to transmit genetic information through generations, and the phenotype is dependent on the genetic information as well as the individual's interactions with their environment. In order to set up the system, the state of the genotype is instantiated by some rotation of an ancillary state (formula_0). The environment is a two-dimensional spatial grid occupied by individuals and ancillary states. The environment is divided into cells that are able to possess one or more individuals. Individuals move throughout the grid and occupy cells randomly; when two or more individuals occupy the same cell they interact with each other.
Self replication.
The ability to self-replicate is critical for simulating life. Self-replication occurs when the genotype of an individual interacts with an ancillary state, creating a genotype for a new individual; this genotype interacts with a different ancillary state in order to create the phenotype. During this interaction, one would like to copy some information from the initial state into the ancillary state, but by the no-cloning theorem, it is impossible to copy an arbitrary unknown quantum state. However, physicists have derived different methods for quantum cloning that do not require the exact copying of an unknown state. The method implemented by Alvarez-Rodriguez et al. involves cloning the expectation value of some observable. For a unitary formula_1 which copies the expectation value of a set of observables formula_2 of state formula_3 into a blank state formula_4, the cloning machine is defined by any formula_5 that fulfills the following:
formula_6 formula_7
where formula_8 is the mean value of the observable in formula_3 before cloning, formula_9 is the mean value of the observable in formula_3 after cloning, and formula_10 is the mean value of the observable in formula_4 after cloning. Note that the cloning machine has no dependence on formula_3, because we want to be able to clone the expectation value of the observables for any initial state. It is important to note that cloning the mean value of the observable transmits more information than is allowed classically. The calculation of the mean values is defined naturally as:
formula_11, formula_12, formula_13 where formula_14
The simplest cloning machine clones the expectation value of formula_15 in an arbitrary state formula_16 to formula_17 using formula_18. This is the cloning machine implemented for self-replication by Alvarez-Rodriguez et al. The self-replication process only requires interactions between two qubits, and therefore this cloning machine is the only one necessary for self-replication.
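The claim that a CNOT with a blank target copies the expectation value of formula_15 can be verified numerically; the following NumPy sketch (an illustration, not the authors' code) checks it for one arbitrary pure state:

```python
import numpy as np

Z = np.diag([1.0, -1.0])                 # Pauli-Z
psi = np.array([0.6, 0.8j])              # arbitrary normalised single-qubit state
ancilla = np.array([1.0, 0.0])           # blank state |0>

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

state = CNOT @ np.kron(psi, ancilla)     # two-qubit state after the cloning unitary
rho = np.outer(state, state.conj())
I2 = np.eye(2)

exp_before = (psi.conj() @ Z @ psi).real
exp_qubit_1 = np.trace(rho @ np.kron(Z, I2)).real
exp_qubit_2 = np.trace(rho @ np.kron(I2, Z)).real
print(exp_before, exp_qubit_1, exp_qubit_2)   # all three equal -0.28
```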
Interactions.
Interactions occur between individuals when the two occupy the same cell of the environmental grid. The presence of interactions provides an advantage to shorter-lifespan individuals. When two individuals interact, information may or may not be exchanged between the two phenotypes, depending on their existing values. When both individuals' control qubits (genotypes) are alike, no information is exchanged; when the control qubits differ, the target qubits (phenotypes) are exchanged between the two individuals. This procedure produces a constantly changing predator-prey dynamic in the simulation. Long-living individuals, with a larger genetic makeup, are therefore at a disadvantage: since information is only exchanged when interacting with an individual of different genetic makeup, the short-lived population has the advantage.
Mutation.
Mutations exist in the artificial world with limited probability, equivalent to their occurrence in the real world. There are two ways in which an individual can mutate: through random single-qubit rotations and through errors in the self-replication process. Two different operators act on the individual and cause mutations. The M operation causes a spontaneous mutation within the individual by rotating a single qubit by a parameter θ. The parameter θ is random for each mutation, which creates biodiversity within the artificial environment. The M operation is a unitary matrix which can be described as:
formula_19
The other way mutations occur is through errors in the replication process. Owing to the no-cloning theorem, it is impossible to produce perfect copies of systems that are originally in unknown quantum states. However, quantum cloning machines make it possible to create imperfect copies of quantum states; in other words, the process introduces some degree of error. The error that exists in current quantum cloning machines is the root cause of the second kind of mutation in the artificial life experiment. The imperfect cloning operation can be seen as:
formula_20
The two kinds of mutations affect the individual differently. While the spontaneous M operation does not affect the phenotype of the individual, the self-replication error mutation U alters both the genotype of the individual and its associated lifetime.
The presence of mutations in the quantum artificial life experiment is critical for providing randomness and biodiversity. The inclusion of mutations helps to increase the accuracy of the quantum algorithm.
Death.
At the instant the individual is created (when the genotype is copied into the phenotype), the phenotype interacts with the environment. As time evolves, the interaction of the individual with the environment simulates aging, which eventually leads to the death of the individual. The death of an individual occurs when the expectation value of formula_21 in the phenotype is within some formula_22 of 1, or, equivalently, when formula_23.
The Lindbladian describes the interaction of the individual with the environment: formula_24, with formula_25 and formula_26. This interaction causes the phenotype to decay exponentially over time. However, the genetic material contained in the genotype does not dissipate, which allows genes to be passed on to subsequent generations. Given the initial state of the genotype:
formula_27
The expectation values of the genotype and phenotype can be described as:
formula_28, formula_29, where 'a' represents a single genetic parameter. From these equations, we can see that as 'a' increases, the life expectancy decreases; equivalently, the closer the initial state is to formula_30, the greater the life expectancy of the individual.
When formula_31, the individual is considered dead, and its phenotype is used as the ancillary state for a new individual. Thus, the cycle continues and the process becomes self-sustaining.
|
[
{
"math_id": 0,
"text": "|0\\rangle\\langle0| "
},
{
"math_id": 1,
"text": "U"
},
{
"math_id": 2,
"text": "\\mathsf{X}"
},
{
"math_id": 3,
"text": "\\rho"
},
{
"math_id": 4,
"text": "\\rho_e"
},
{
"math_id": 5,
"text": "(U, \\rho_e, \\mathsf{X})"
},
{
"math_id": 6,
"text": "\\forall \\rho \\forall X \\in \\mathsf{X}"
},
{
"math_id": 7,
"text": "\\bar{X} = \\bar{X_1} = \\bar{X_2}"
},
{
"math_id": 8,
"text": "\\bar{X}"
},
{
"math_id": 9,
"text": "\\bar{X_1}"
},
{
"math_id": 10,
"text": "\\bar{X_2}"
},
{
"math_id": 11,
"text": "\\bar{X} = Tr[\\rho X]"
},
{
"math_id": 12,
"text": "\\bar{X_1} = Tr[RX \\otimes I]"
},
{
"math_id": 13,
"text": "\\bar{X_2} = Tr[RI \\otimes X]"
},
{
"math_id": 14,
"text": "R = U\\rho \\otimes \\rho_e U^\\dagger"
},
{
"math_id": 15,
"text": "\\sigma_z"
},
{
"math_id": 16,
"text": "\\rho = |\\psi\\rangle \\langle \\psi|"
},
{
"math_id": 17,
"text": "\\rho_e = |0\\rangle \\langle 0|"
},
{
"math_id": 18,
"text": "U = CNOT"
},
{
"math_id": 19,
"text": "M=\\begin{pmatrix} \\cos(\\theta) & sin(\\theta) \\\\ sin(\\theta) & -cos(\\theta) \\end{pmatrix}"
},
{
"math_id": 20,
"text": "U_M(\\theta)=\\Iota_4+\\frac{1}{2}\\begin{pmatrix} 0 & 0 \\\\ 0 & 1 \\end{pmatrix}\\otimes\\begin{pmatrix} -1 & 1 \\\\ 1 & -1 \\end{pmatrix}(cos\\theta + i sin\\theta + 1)"
},
{
"math_id": 21,
"text": "\\sigma_z "
},
{
"math_id": 22,
"text": "\\epsilon "
},
{
"math_id": 23,
"text": "\\rho_p = |0\\rangle\\langle0| "
},
{
"math_id": 24,
"text": "\\dot{\\rho} = \\gamma (\\sigma \\rho \\sigma^{\\dagger} - \\frac{1}{2}\\sigma^\\dagger \\sigma \\rho - \\frac{1}{2}\\rho \\sigma^\\dagger \\sigma ) "
},
{
"math_id": 25,
"text": "\\sigma = I \\otimes |0 \\rangle\\langle 1| "
},
{
"math_id": 26,
"text": "\\rho = \\rho_g \\otimes \\rho_p "
},
{
"math_id": 27,
"text": "\\rho_g =\n\\begin{pmatrix}\n a & b - ic \\\\\n b + ic & 1 - a \\\\\n\\end{pmatrix} "
},
{
"math_id": 28,
"text": "\\langle\\sigma_z\\rangle_g =2a-1"
},
{
"math_id": 29,
"text": "\\langle\\sigma_z\\rangle_p =1-2e^{\\gamma t}(1-a)"
},
{
"math_id": 30,
"text": "|1\\rangle\\langle1| "
},
{
"math_id": 31,
"text": "\\langle \\sigma_z \\rangle_p = 1 - \\epsilon "
}
] |
https://en.wikipedia.org/wiki?curid=65382329
|
65383414
|
Shelf-break front
|
Shelf-break fronts form through a process by which the water column becomes stratified. This stratification normally results in thermoclines, since the fronts occur where a sudden change in water depth causes a constriction of the current flow. The fronts can be characterised by the ratio of the potential energy required to maintain mixed (non-stratified) conditions to the energy dissipated by the current being forced across the sudden change in depth. This can be expressed as:
formula_0
The energy terms can be expressed in very detailed equations, but with constant terms factored out, the important quantities are the water velocity (average velocity formula_1) and the water depth (h).
The equation for the stratification index can be expressed as:
formula_2
where formula_3 is a friction coefficient, approximated as 0.003 for a sandy bottom. This index can be calculated for any coastal region and usually lies in the range of +3 (highly stratified) to -2 (highly turbulent).
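A minimal sketch of the index computation in Python; the depth, speed and drag values below are placeholders rather than measurements from any particular site:

```python
import math

def stratification_index(depth_m: float, mean_speed_ms: float, c_d: float = 0.003) -> float:
    """S = log10( h / (C_D * |U|^3) ), with depth h in metres and speed U in m/s."""
    return math.log10(depth_m / (c_d * mean_speed_ms ** 3))

print(stratification_index(10.0, 1.5))   # about 3.0: strongly stratified on the scale above
```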
Reason to calculate.
The stratification index for a shelf-break front is an indication of how productive phytoplankton will be. A stratification index of approximately 1.5 produces a nutrient-rich environment for the growth of phytoplankton. Much higher, and the stratification of the water column will not cause the upwelling of nutrients needed for the phytoplankton to prosper; much lower, and the water will be too turbulent for the phytoplankton to use the nutrients available.
Stability of the front, in addition to nutrients, is a key to phytoplankton production.
An illustration of the stratification index for Narragansett Bay was produced with estimated average speeds, actual bathymetry for the bay, and an estimated formula_3 for silt, which composes much of the bay's bottom. Using the Stokes spreadsheet, with some customization for the size of the silt particles, a value of formula_3 = 0.0011 was used. More accurate speed measurements and detailed formula_3 values for the bay's bottom could yield a higher-fidelity picture.
Notice the green color (a stratification index of approximately 1.5) along the edges of the northern bay and near some of the islands. These areas are favorable to the formation of algal blooms in the Narragansett Bay habitat because their stratification index is approximately 1.5. Algae have been observed in high concentration in some, but not all, of these areas.
Using flow cytometry, it has been determined that the relative abundances of picophytoplankton (< 2 formula_4m), small nanophytoplankton (2 to 10 formula_4m) and large nanophytoplankton (10-20 formula_4m) are greatly affected by the stratification index of the water column. Cell diversity was greatest in the presence of moderate levels of stratification.
If the turbulence is too high, their numbers remain stable or fall, but if there is no turbulence, their numbers also fall. It is postulated that the nutrient-rich boundary layer around each phytoplankton cell is not exhausted, but renewed, by a moderate level of turbulence.
|
[
{
"math_id": 0,
"text": "Ratio = \\frac{Pot. Energy}{Dissipated Energy}"
},
{
"math_id": 1,
"text": "\\left\\vert \\bar{U} \\right\\vert"
},
{
"math_id": 2,
"text": " S = \\log_{10} \\frac{h}{C_D {\\left\\vert \\bar{U} \\right\\vert} ^{3} }"
},
{
"math_id": 3,
"text": "C_D"
},
{
"math_id": 4,
"text": "\\mu"
}
] |
https://en.wikipedia.org/wiki?curid=65383414
|
6538731
|
Twomey effect
|
Effect concerning the increase of solar radiation reflected by clouds
The Twomey effect describes how additional cloud condensation nuclei (CCN), possibly from anthropogenic pollution, may increase the amount of solar radiation reflected by clouds. This is an indirect effect (or radiative forcing) by such particles, as distinguished from direct effects (forcing) due to enhanced scattering or absorption of radiation by such particles outside clouds.
Cloud droplets normally form on aerosol particles that serve as CCN. Increasing the number density of CCN can lead to formation of more cloud droplets with a smaller size.
The increase in number density increases the optical depth of the cloud, which results in an increase in the cloud albedo making clouds appear whiter. Satellite imagery often shows trails of cloud, or of enhanced brightness of cloud, behind ocean-going ships due to this effect. The decrease in global mean absorption of solar radiation due to increases in CCN concentrations exerts a cooling influence on climate; the global average magnitude of this effect over the industrial era is estimated as between −0.3 and −1.8 W/m2.
Derivation.
Assume a uniform cloud that extends infinitely in the horizontal plane, and also assume that the particle size distribution peaks near an average value of formula_0.
The formula for the optical depth of a cloud is
formula_1
where formula_2 is the optical depth, formula_3 is cloud thickness, formula_0 is the average particle size, and formula_4 is the number density of cloud droplets.
The formula for the liquid water content of a cloud is
formula_5
where formula_6 is the density of water.
Taking our assumptions into account we can combine the previous two equations to yield
formula_7
To derive the effect of changing formula_4 while keeping formula_3, formula_6 and formula_8 constant, from the last equation we can write
formula_9
and from the equation for formula_8 we can write
formula_10
therefore
formula_11
This illustrates the Twomey Effect mathematically, that is, for a constant liquid water content, formula_8, increasing the number density of cloud droplets, formula_4, increases the optical depth of the cloud.
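A small numerical check of this scaling (the cloud values below are purely illustrative): holding formula_3, formula_6 and formula_8 fixed, doubling formula_4 should increase formula_2 by a factor of about 2 to the power 1/3, roughly 1.26.

```python
import math

def optical_depth(h, lwc, n, rho_l=1000.0):
    """tau = (3/2) h LWC / (rho_L r_bar), with r_bar from LWC = (4/3) pi r_bar^3 rho_L N."""
    r_bar = (3.0 * lwc / (4.0 * math.pi * rho_l * n)) ** (1.0 / 3.0)
    return 1.5 * h * lwc / (rho_l * r_bar)

# Illustrative cloud: 300 m thick, liquid water content 3e-4 kg/m^3
tau_1 = optical_depth(h=300.0, lwc=3e-4, n=50e6)    # 50 droplets per cm^3
tau_2 = optical_depth(h=300.0, lwc=3e-4, n=100e6)   # doubled droplet number density
print(tau_2 / tau_1)  # ~1.26, i.e. 2**(1/3)
```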
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\bar{r}"
},
{
"math_id": 1,
"text": "\\tau=2\\pi\\;\\! h\\bar{r}^{2} N"
},
{
"math_id": 2,
"text": "\\tau"
},
{
"math_id": 3,
"text": "h"
},
{
"math_id": 4,
"text": "N"
},
{
"math_id": 5,
"text": "LWC = \\frac{4}{3} \\pi\\bar{r}^{3}\\rho_L N"
},
{
"math_id": 6,
"text": "\\rho_L"
},
{
"math_id": 7,
"text": "\\tau= \\frac{3}{2} \\frac{h \\, LWC}{\\rho_L \\bar{r}}"
},
{
"math_id": 8,
"text": "LWC"
},
{
"math_id": 9,
"text": "\\tau \\propto \\frac{1}{\\bar{r}}"
},
{
"math_id": 10,
"text": "\\bar{r}^{3} \\propto \\frac{1}{N}"
},
{
"math_id": 11,
"text": "\\tau \\propto N^{1/3}"
}
] |
https://en.wikipedia.org/wiki?curid=6538731
|
65403215
|
Synthetic air data system
|
Type of air data system
A synthetic air data system (SADS) is an alternative air data system that can produce synthetic air data quantities without directly measuring the air data. It uses other information such as GPS, wind information, the aircraft's attitude, and aerodynamic properties to estimate or infer the air data quantities. Though air data includes altitude, airspeed, pressures, air temperature, Mach number, and flow angles (e.g., angle of attack and angle of sideslip), existing SADS implementations primarily focus on estimating airspeed, angle of attack, and angle of sideslip. SADS is used to monitor the primary air data system if there is an anomaly due to sensor faults or system faults. It can also potentially be used as a backup to provide air data estimates for any aerial vehicle.
Functionality.
Synthetic air data systems can potentially reduce risk by creating an extra layer of redundancy (analytical redundancy) to the mechanical air data system such as the Pitot-static systems and angle vanes. It can also be used to detect failures of other subsystems through data compatibility checks.
History.
The idea of SADS has been around since the 1980s. The basic idea is to use non-air-data sensors such as an Inertial Measurement Unit (IMU) and GPS, fused with vehicle dynamics models, to estimate the air data triplet of airspeed, angle of attack, and angle of sideslip (either separately or combined). Most of the earlier work used vehicle dynamics models to estimate air data in both aircraft and spacecraft applications. This approach is sometimes referred to as the aerodynamic model-based SADS. However, the aerodynamic model-based SADS is challenging to implement because it is difficult to obtain accurate vehicle dynamics models possessing the fidelity needed to yield the required accuracy in the air data estimates. To address this issue, model-free SADS has been proposed recently. The model-free SADS does not require the vehicle dynamics models. Instead, it relies on the accuracy of the Inertial navigation system (INS) and Three-Dimensional (3D) wind estimates.
SADS has gained a lot of renewed interest after the Air France Flight 447 accident in 2009. Several universities and government agencies such as the University of Minnesota, Delft University of Technology, NASA Langley Research Center, and the Institute of Flight Mechanics and Flight Control at Technische Universität München, have been researching the SADS related topics. Recent patents related to SADS have been filed by the leading air data system producers such as Collins Aerospace and Honeywell. Moreover, the recent two Boeing 737 MAX accidents (Lion Air Flight 610 (2018) and Ethiopian Airlines Flight 302 (2019)) have brought SADS into the spotlight again, which is detailed by the report. In particular, synthetic airspeed has become a focal point to improve Boeing aircraft's safety.
Commercial Aircraft.
SADS has been implemented in some of the most advanced modern commercial aircraft such as the Boeing 787. The ADS on Boeing 787 calculates a synthetic airspeed from the angle of attack measurement, inertial data, accurate Lift coefficient, and aircraft mass (validated after takeoff). The synthetic airspeed has helped the Boeing 787 recover from the erroneous airspeed measurement.
Unmanned Aerial Vehicles.
SADS has also been implemented for Unmanned aerial vehicle UAVs (drones). The motivation of SADS for UAVs is that most of the low-cost air data systems on the small Unmanned Aircraft System (UAS) are not reliable. Also, having multiple air data sensors (e.g., Pitot tubes) on small UAVs is not feasible due to stringent size, weight, and power constraints. SADS can significantly increase drones' overall reliability in both Line-Of-Sight and Beyond Visual Line-Of-Sight (BVLOS) drone operations. Recent academic research has focused on improving SADS's accuracy, fault detectability, and reliability of the ADS used on small UAS by leveraging SADS.
Air Data Triplet.
The airspeed formula_0, angle of attack formula_1, and angle of sideslip formula_2 represent the air data triplet, and the current state of the art of SADS is to estimate these three quantities either together or separately. One way to estimate the air data triplet is to use the wind triangle relationship. Mathematically, the wind triangle equation is shown below:
formula_3
where formula_4, formula_5, and formula_6 are the translation velocity components expressed in the body frame, formula_7 is the coordinate transformation from North-East-Down (NED) frame to the body frame. The vector formula_8 represents the attitude vector in roll, pitch, and yaw. The formula_9 and formula_10 represent the ground velocity and wind vector in the NED frame respectively.
If the formula_4, formula_5, and formula_6 are known, the airspeed formula_0, angle of attack formula_1 and angle of sideslip formula_2 can be calculated as the following:
formula_11
formula_12
formula_13
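A minimal sketch of this computation (assuming the common ZYX yaw–pitch–roll Euler sequence for formula_7 and the angle-of-attack relation based on w and u; all numeric inputs are illustrative):

```python
import numpy as np

def dcm_ned_to_body(phi, theta, psi):
    """Direction cosine matrix C_n^b for the ZYX (yaw-pitch-roll) Euler sequence."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([
        [cth * cps,                   cth * sps,                  -sth],
        [sph * sth * cps - cph * sps, sph * sth * sps + cph * cps, sph * cth],
        [cph * sth * cps + sph * sps, cph * sth * sps - sph * cps, cph * cth],
    ])

def air_data_triplet(v_ned, wind_ned, phi, theta, psi):
    """Model-free estimate of (airspeed, angle of attack, angle of sideslip)."""
    u, v, w = dcm_ned_to_body(phi, theta, psi) @ (np.asarray(v_ned) - np.asarray(wind_ned))
    airspeed = np.sqrt(u**2 + v**2 + w**2)
    alpha = np.arctan2(w, u)        # angle of attack
    beta = np.arcsin(v / airspeed)  # angle of sideslip
    return airspeed, alpha, beta

# Illustrative inputs: GPS ground velocity and estimated wind (both NED), INS attitude in radians
Va, alpha, beta = air_data_triplet([60.0, 5.0, -1.0], [-3.0, 2.0, 0.0], 0.02, 0.05, 0.08)
print(Va, np.degrees(alpha), np.degrees(beta))
```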
Method.
There are various methods to estimate or "synthesize" airspeed, angle of attack, and angle of sideslip without directly using the measured air data. For example, synthetic airspeed can be computed by using the ground velocity, angle of attack, wind velocity,
airplane's pitch attitude and heading. The ground velocity is usually provided by GPS. The angle of attack measurements can come from the angle vanes. The wind velocity can be obtained by the airborne weather radar. The attitude of the airplane can be computed from the inertial navigation system. The exact computation of the synthetic airspeed can vary (e.g., small-angle approximation can be made to simplify the computation), but it is primarily based on the kinematic wind triangle equation. This method is sometimes referred to as the model-free SADS method; there is no vehicle model dynamics involved.
The model-based SADS leverages the vehicle dynamics model to help estimate the air data quantities. In particular, the aerodynamic coefficients are used to compute synthetic air data. For example, angle of attack formula_1 can be synthesized if the Lift coefficient formula_14, Mach number formula_15, and altitude formula_16 are known. Mathematically,
formula_17
The function formula_18 that relates formula_14 to formula_1 can be determined empirically by curve fitting the aerodynamic data. The accuracy of the model-based SADS depends on the accuracy of the aerodynamic coefficients. This accuracy constraint might not be an issue for high-performance aircraft such as the F-15, but it can be quite difficult to satisfy for low-cost UAVs.
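A simplified sketch of this idea (assuming a linear pre-stall lift curve and ignoring the Mach number and altitude dependence of formula_18; the data points below are hypothetical):

```python
import numpy as np

# Hypothetical flight-test points: angle of attack (degrees) versus lift coefficient
alpha_deg = np.array([-2.0, 0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
c_l       = np.array([0.05, 0.25, 0.45, 0.65, 0.85, 1.05, 1.22])

# Fit C_L = C_L0 + C_Lalpha * alpha, then invert the curve to synthesize alpha
c_l_alpha, c_l0 = np.polyfit(alpha_deg, c_l, 1)

def synthetic_alpha(c_l_measured):
    """Invert the fitted lift curve: alpha = (C_L - C_L0) / C_Lalpha (valid pre-stall only)."""
    return (c_l_measured - c_l0) / c_l_alpha

print(synthetic_alpha(0.75))  # roughly 5 degrees for this made-up curve
```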
Many model-based and model-free SADS utilize classical estimation methods such as Kalman filtering and least squares extensively to estimate air data when sensor fusion and real-time computing are required. Other non-conventional methods such as data-driven learning or machine learning based air data estimation algorithms have emerged in the last decade, but they are difficult to certify due to the complexity of the algorithms.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " V_a "
},
{
"math_id": 1,
"text": " \\alpha "
},
{
"math_id": 2,
"text": " \\beta "
},
{
"math_id": 3,
"text": " \\big[\\begin{array}{ccc} u & v & w \\end{array} \\big]^T = {\\bf C}_n^b( \\boldsymbol{\\psi}^n_{nb} )\\left[{\\bf v}^n - {\\bf W}^n\\right]"
},
{
"math_id": 4,
"text": " u "
},
{
"math_id": 5,
"text": " v "
},
{
"math_id": 6,
"text": " w "
},
{
"math_id": 7,
"text": " {\\bf C}_n^b( \\boldsymbol{\\psi}^n_{nb}) "
},
{
"math_id": 8,
"text": " \\boldsymbol{\\psi}^n_{nb} = \\big[\\phi ~\\theta ~\\psi \\big]^T "
},
{
"math_id": 9,
"text": " {\\bf v}^n "
},
{
"math_id": 10,
"text": " {\\bf W}^n "
},
{
"math_id": 11,
"text": " V_a = \\sqrt{u^2 + v^2 + w^2} "
},
{
"math_id": 12,
"text": " \\alpha = \\tan^{-1}\\left(\\dfrac{u}{v}\\right) "
},
{
"math_id": 13,
"text": " \\beta = \\sin^{-1} \\left(\\dfrac{v}{\\sqrt{u^2 + v^2 + w^2}}\\right) "
},
{
"math_id": 14,
"text": "C_L "
},
{
"math_id": 15,
"text": " M "
},
{
"math_id": 16,
"text": " h "
},
{
"math_id": 17,
"text": " \\alpha = f(C_L,~M,~h) "
},
{
"math_id": 18,
"text": " f(\\cdot)"
}
] |
https://en.wikipedia.org/wiki?curid=65403215
|
654098
|
Symmetric algebra
|
"Smallest" commutative algebra that contains a vector space
In mathematics, the symmetric algebra "S"("V") (also denoted Sym("V")) on a vector space "V" over a field "K" is a commutative algebra over K that contains V, and is, in some sense, minimal for this property. Here, "minimal" means that "S"("V") satisfies the following universal property: for every linear map f from V to a commutative algebra A, there is a unique algebra homomorphism "g" : "S"("V") → "A" such that "f" = "g" ∘ "i", where i is the inclusion map of V in "S"("V").
If B is a basis of V, the symmetric algebra "S"("V") can be identified, through a canonical isomorphism, to the polynomial ring "K"["B"], where the elements of B are considered as indeterminates. Therefore, the symmetric algebra over V can be viewed as a "coordinate free" polynomial ring over V.
The symmetric algebra "S"("V") can be built as the quotient of the tensor algebra "T"("V") by the two-sided ideal generated by the elements of the form "x" ⊗ "y" − "y" ⊗ "x".
All these definitions and properties extend naturally to the case where V is a module (not necessarily a free one) over a commutative ring.
Construction.
From tensor algebra.
It is possible to use the tensor algebra "T"("V") to describe the symmetric algebra "S"("V"). In fact, "S"("V") can be defined as the quotient algebra of "T"("V") by the two-sided ideal generated by the commutators formula_0
It is straightforward to verify that the resulting algebra satisfies the universal property stated in the introduction. Because of the universal property of the tensor algebra, a linear map f from V to a commutative algebra A extends to an algebra homomorphism formula_1, which factors through S(V) because A is commutative. The extension of f
to an algebra homomorphism formula_2 is unique because V generates S(V) as a K-algebra.
This results also directly from a general result of category theory, which asserts that the composition of two left adjoint functors is also a left adjoint functor. Here, the forgetful functor from commutative algebras to vector spaces or modules (forgetting the multiplication) is the composition of the forgetful functors from commutative algebras to associative algebras (forgetting commutativity), and from associative algebras to vectors or modules (forgetting the multiplication). As the tensor algebra and the quotient by commutators are left adjoint to these forgetful functors, their composition is left adjoint to the forgetful functor from commutative algebra to vectors or modules, and this proves the desired universal property.
From polynomial ring.
The symmetric algebra "S"("V") can also be built from polynomial rings.
If V is a K-vector space or a free K-module, with a basis B, let "K"["B"] be the polynomial ring that has the elements of B as indeterminates. The homogeneous polynomials of degree one form a vector space or a free module that can be identified with V. It is straightforward to verify that this makes "K"["B"] a solution to the universal problem stated in the introduction. This implies that "K"["B"] and "S"("V") are canonically isomorphic, and can therefore be identified. This results also immediately from general considerations of category theory, since free modules and polynomial rings are free objects of their respective categories.
If V is a module that is not free, it can be written formula_3 where L is a free module, and M is a submodule of L. In this case, one has
formula_4
where formula_5 is the ideal generated by M. (Here, equals signs mean equality up to a canonical isomorphism.) Again this can be proved by showing that one has a solution of the universal property, and this can be done either by a straightforward but boring computation, or by using category theory, and more specifically, the fact that a quotient is the solution of the universal problem for morphisms that map to zero a given subset. (Depending on the case, the kernel is a normal subgroup, a submodule or an ideal, and the usual definition of quotients can be viewed as a proof of the existence of a solution of the universal problem.)
Grading.
The symmetric algebra is a graded algebra. That is, it is a direct sum
formula_6
where formula_7 called the nth symmetric power of V, is the vector subspace or submodule generated by the products of n elements of V. (The second symmetric power formula_8 is sometimes called the symmetric square of V).
This can be proved by various means. One follows from the tensor-algebra construction: the tensor algebra is graded, and the symmetric algebra is its quotient by a homogeneous ideal, namely the ideal generated by all formula_9 where x and y are in V, that is, homogeneous of degree one.
In the case of a vector space or a free module, the gradation is the gradation of the polynomials by the total degree. A non-free module can be written as "L" / "M", where L is a free module of base B; its symmetric algebra is the quotient of the (graded) symmetric algebra of L (a polynomial ring) by the homogeneous ideal generated by the elements of M, which are homogeneous of degree one.
One can also define formula_10 as the solution of the universal problem for n-linear symmetric functions from V into a vector space or a module, and then verify that the direct sum of all formula_10 satisfies the universal problem for the symmetric algebra.
Relationship with symmetric tensors.
As the symmetric algebra of a vector space is a quotient of the tensor algebra, an element of the symmetric algebra is not a tensor, and, in particular, is not a symmetric tensor. However, symmetric tensors are strongly related to the symmetric algebra.
A "symmetric tensor" of degree n is an element of "T""n"("V") that is invariant under the action of the symmetric group formula_11 More precisely, given formula_12 the transformation formula_13 defines a linear endomorphism of "T""n"("V"). A symmetric tensor is a tensor that is invariant under all these endomorphisms. The symmetric tensors of degree n form a vector subspace (or module) Sym"n"("V") ⊂ "T""n"("V"). The "symmetric tensors" are the elements of the direct sum formula_14 which is a graded vector space (or a graded module). It is not an algebra, as the tensor product of two symmetric tensors is not symmetric in general.
Let formula_15 be the restriction to Sym"n"("V") of the canonical surjection formula_16 If "n"! is invertible in the ground field (or ring), then formula_15 is an isomorphism. This is always the case with a ground field of characteristic zero. The inverse isomorphism is the linear map defined (on products of n vectors) by the symmetrization
formula_17
The map formula_15 is not injective if the characteristic is less than n+1; for example, formula_18 is zero in characteristic two. Over a ring of characteristic zero, formula_15 can be non-surjective; for example, over the integers, if x and y are two linearly independent elements of "V" = "S"1("V") that are not in 2"V", then formula_19 since formula_20
In summary, over a field of characteristic zero, the symmetric tensors and the symmetric algebra form two isomorphic graded vector spaces. They can thus be identified as far as only the vector space structure is concerned, but they cannot be identified as soon as products are involved. Moreover, this isomorphism does not extend to the cases of fields of positive characteristic and rings that do not contain the rational numbers.
Categorical properties.
Given a module V over a commutative ring K, the symmetric algebra "S"("V") can be defined by the following universal property:
For every K-linear map f from V to a commutative K-algebra A, there is a unique K-algebra homomorphism formula_21 such that formula_22 where i is the inclusion of V in "S"("V").
As for every universal property, as soon as a solution exists, this defines uniquely the symmetric algebra, up to a canonical isomorphism. It follows that all properties of the symmetric algebra can be deduced from the universal property. This section is devoted to the main properties that belong to category theory.
The symmetric algebra is a functor from the category of K-modules to the category of commutative K-algebras, since the universal property implies that every module homomorphism formula_23 can be uniquely extended to an algebra homomorphism formula_24
The universal property can be reformulated by saying that the symmetric algebra is a left adjoint to the forgetful functor that sends a commutative algebra to its underlying module.
Symmetric algebra of an affine space.
One can analogously construct the symmetric algebra on an affine space. The key difference is that the symmetric algebra of an affine space is not a graded algebra, but a filtered algebra: one can determine the degree of a polynomial on an affine space, but not its homogeneous parts.
For instance, given a linear polynomial on a vector space, one can determine its constant part by evaluating at 0. On an affine space, there is no distinguished point, so one cannot do this (choosing a point turns an affine space into a vector space).
Analogy with exterior algebra.
The "S""k" are functors comparable to the exterior powers; here, though, the dimension grows with "k"; it is given by
formula_25
where "n" is the dimension of "V". This binomial coefficient is the number of "n"-variable monomials of degree "k".
In fact, the symmetric algebra and the exterior algebra appear as the isotypical components of the trivial and sign representations of the action of formula_26 on the tensor product formula_27 (for example, over the complex field).
As a Hopf algebra.
The symmetric algebra can be given the structure of a Hopf algebra. See Tensor algebra for details.
As a universal enveloping algebra.
The symmetric algebra "S"("V") is the universal enveloping algebra of an abelian Lie algebra, i.e. one in which the Lie bracket is identically 0.
|
[
{
"math_id": 0,
"text": "v\\otimes w - w\\otimes v."
},
{
"math_id": 1,
"text": "T(V)\\rightarrow A"
},
{
"math_id": 2,
"text": "S(V)\\rightarrow A"
},
{
"math_id": 3,
"text": "V=L/M,"
},
{
"math_id": 4,
"text": "S(V)=S(L/M)=S(L)/\\langle M\\rangle,"
},
{
"math_id": 5,
"text": "\\langle M\\rangle"
},
{
"math_id": 6,
"text": "S(V)=\\bigoplus_{n=0}^\\infty S^n(V),"
},
{
"math_id": 7,
"text": "S^n(V),"
},
{
"math_id": 8,
"text": "S^2(V)"
},
{
"math_id": 9,
"text": "x \\otimes y - y \\otimes x,"
},
{
"math_id": 10,
"text": "S^n(V)"
},
{
"math_id": 11,
"text": "\\mathcal S_n."
},
{
"math_id": 12,
"text": "\\sigma\\in \\mathcal S_n,"
},
{
"math_id": 13,
"text": "v_1\\otimes \\cdots \\otimes v_n \\mapsto v_{\\sigma(1)}\\otimes \\cdots \\otimes v_{\\sigma(n)}"
},
{
"math_id": 14,
"text": "\\textstyle \\bigoplus_{n=0}^\\infty \\operatorname{Sym}^n(V),"
},
{
"math_id": 15,
"text": "\\pi_n"
},
{
"math_id": 16,
"text": "T^n(V)\\to S^n(V)."
},
{
"math_id": 17,
"text": "v_1\\cdots v_n \\mapsto \\frac 1{n!} \\sum_{\\sigma \\in S_n} v_{\\sigma(1)}\\otimes \\cdots \\otimes v_{\\sigma(n)}."
},
{
"math_id": 18,
"text": "\\pi_n(x\\otimes y+y\\otimes x) = 2xy"
},
{
"math_id": 19,
"text": "xy\\not\\in \\pi_n(\\operatorname{Sym}^2(V)),"
},
{
"math_id": 20,
"text": "\\frac 12 (x\\otimes y +y\\otimes x) \\not\\in \\operatorname{Sym}^2(V)."
},
{
"math_id": 21,
"text": "g:S(V)\\to A"
},
{
"math_id": 22,
"text": "f=g\\circ i,"
},
{
"math_id": 23,
"text": "f:V\\to W"
},
{
"math_id": 24,
"text": "S(f):S(V)\\to S(W)."
},
{
"math_id": 25,
"text": "\\operatorname{dim}(S^k(V)) = \\binom{n+k-1}{k}"
},
{
"math_id": 26,
"text": "S_n"
},
{
"math_id": 27,
"text": "V^{\\otimes n}"
}
] |
https://en.wikipedia.org/wiki?curid=654098
|
65416418
|
Chance-constrained portfolio selection
|
Chance-constrained portfolio selection is an approach to portfolio selection under loss aversion.
The formulation assumes that (i) the investor's preferences are representable by the expected utility of final wealth, and that (ii) they require that the probability of their final wealth falling below a survival or safety level be acceptably low.
The chance-constrained portfolio problem is then to find:
Max formula_0wjE(Xj), subject to Pr(formula_0 wjXj < s) ≤ "α", formula_0wj = 1, wj ≥ 0 for all j,
where s is the survival level and "α" is the admissible probability of ruin; wj is the weight and Xj is the value of the "j"th asset to be included in the portfolio.
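Under the additional assumption (not part of the original formulation) that the asset values are jointly normally distributed, the chance constraint is equivalent to the deterministic constraint that expected final wealth minus z1−α times the portfolio standard deviation be at least s, where z1−α is the standard normal quantile. A minimal sketch with a generic nonlinear solver, using hypothetical data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical data: expected final values and covariance matrix of three assets
mu = np.array([1.08, 1.05, 1.02])
cov = np.array([[0.040, 0.010, 0.000],
                [0.010, 0.020, 0.000],
                [0.000, 0.000, 0.005]])
s, alpha = 0.95, 0.05          # survival level and admissible probability of ruin
z = norm.ppf(1.0 - alpha)      # quantile that makes the chance constraint deterministic

objective = lambda w: -(w @ mu)  # maximize expected final wealth
constraints = [
    {"type": "eq",   "fun": lambda w: np.sum(w) - 1.0},                        # weights sum to 1
    {"type": "ineq", "fun": lambda w: w @ mu - z * np.sqrt(w @ cov @ w) - s},   # ruin constraint
]
res = minimize(objective, x0=np.full(3, 1.0 / 3.0), bounds=[(0.0, 1.0)] * 3,
               constraints=constraints)
print(res.x, -res.fun)
```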
The original implementation is based on the seminal work of Abraham Charnes and William W. Cooper on chance constrained programming in 1959,
and was first applied to finance by Bertil Naslund and Andrew B. Whinston in 1962
and in 1969 by N. H. Agnew, et al.
For fixed "α" the chance-constrained portfolio problem represents lexicographic preferences and is an implementation of capital asset pricing under loss aversion.
In general though, it is observed that no utility function can represent the preference ordering of chance-constrained programming because a fixed "α" does not admit compensation for a small increase in "α" by any increase in expected wealth.
For a comparison to mean-variance and safety-first portfolio problems, see; for a survey of solution methods here, see; for a discussion of the risk aversion properties of chance-constrained portfolio selection, see.
|
[
{
"math_id": 0,
"text": "\\sum_{j}"
}
] |
https://en.wikipedia.org/wiki?curid=65416418
|
654223
|
Modbus
|
Serial communications protocol mainly developed for programmable logic controllers
Modbus or MODBUS is a client/server data communications protocol in the application layer. It was originally designed by Modicon for use with its programmable logic controllers (PLCs), but has become a "de facto" standard communication protocol for communication between industrial electronic devices in a wide range of buses and networks.
Modbus is popular in industrial environments because it is openly published and royalty-free. It was developed for industrial applications, is relatively easy to deploy and maintain compared to other standards, and places few restrictions on the format of the data to be transmitted.
The Modbus protocol uses serial communication lines, Ethernet, or the Internet protocol suite as a transport layer. Modbus supports communication to and from multiple devices connected to the same cable or Ethernet network. For example, there can be a device that measures temperature and another device to measure humidity connected to the same cable, both communicating measurements to the same computer, via Modbus.
Modbus is often used to connect a plant/system supervisory computer with a remote terminal unit (RTU) in supervisory control and data acquisition (SCADA) systems. Many of the data types are named from industrial control of factory devices, such as ladder logic because of its use in driving relays: a single-bit physical output is called a "coil", and a single-bit physical input is called a "discrete input" or a "contact".
It was originally published by Modicon in 1979. The company was acquired by Schneider Electric in 1997. In 2004, they transferred the rights to the Modbus Organization, which is a trade association of users and suppliers of Modbus-compliant devices that advocates for the continued use of the technology.
Protocol description.
Modbus standards or buses include:
To support Modbus communication on a network, many modems and gateways incorporate proprietary designs (refer to the diagram: "Architecture of a network for Modbus communication"). Implementations may deploy either wireline or wireless communication, such as in the ISM radio band, and even Short Message Service (SMS) or General Packet Radio Service (GPRS).
PDU and ADU.
Modbus defines a client as an entity which initiates a transaction to request a specific task from its "request receiver". The client's "request receiver", with which the client has initiated the transaction, is then called the server. For example, when a microcontroller unit (MCU) connects to a sensor to read its data by Modbus on a wired network, e.g. an RS485 bus, the MCU in this context is the client and the sensor is the server. In former terminology, the client was named master and the server was named slave.
Modbus defines a protocol data unit (PDU) independently of the lower-layer protocols in its protocol stack. The mapping of the Modbus protocol onto specific buses or networks requires some additional fields, defined as the application data unit (ADU). The ADU is formed by a client inside a Modbus network when the client initiates a transaction. Its contents are:
ADU is officially called a Modbus frame by the Modbus Organization, although frame is used as the data unit in the data-link layer in the OSI and TCP/IP model (while Modbus is an application layer protocol).
PDU max size is 253 bytes. ADU max size on RS232/RS485 network is 256 bytes, and with TCP is 260 bytes.
For data encoding, Modbus uses a big-endian representation for addresses and data fields. Thus, for a 16-bit value, the most significant byte is sent first. For example, when a 16-bit register has value 0x1234, byte 0x12 is sent before byte 0x34.
Function code is 1 byte which gives the code of the function to execute. Function codes are integer values, ranging from 1 to 255, and the range from 128 to 255 is for exception responses.
The data field of the PDU has the address from 0 to 65535 (not to be confused with the address of the Additional address field of the ADU). The data field of the PDU can be empty, and then has a size of 0. In this case, the server will not require any additional information and the function code alone defines the function to be executed. If there is no error during the execution process, the data field of the ADU response from server to client will include the data requested, i.e. the data the client previously requested. If there is any error, the server will respond with an exception code.
Modbus transaction and PDU.
A Modbus transaction between client and server includes:
Based on that, Modbus defines 3 PDU types:
mb_req_pdu = Function code (1 byte) + request data (n bytes)
request data field's size depends on the function code and usually includes values like variable values, data offset, and sub-function codes.
mb_rsp_pdu = Function code (1 byte) + response data (n bytes)
As in mb_req_pdu, response data field's size depends on the function code and usually includes values like variable values, data offset, and sub-function codes.
mb_excep_rsp_pdu = Exception Function code (1 byte) + exception code (1 byte)
Exception Function code = Function code (1 byte) + 0x80
Exception Function code is equal to the Function code, except that its MSB is set to 1.
Exception code (1 byte) of mb_excep_rsp_pdu is defined in the "MODBUS Exception Codes" table.
Modbus data model.
Modbus defines its data model based on a series of tables, with four primary tables:
Function code.
Modbus defines three types of function codes: Public, User-Defined and Reserved.
Public function codes.
Note: Some sources use terminology that differs from the standard; for example "Force Single Coil" instead of "Write Single Coil".
Function code 01 (read coils) as an example of public function code.
Function code 01 (read coils) allows reading the state of 1 to 2000 coils of a remote device. mb_req_pdu (the request PDU) then has 2 bytes to indicate the address of the first coil to read (from 0x0000 to 0xFFFF), and 2 bytes to indicate the number of coils to read. mb_req_pdu addresses coils starting from index 0, i.e. the first coil has address 0x0. mb_rsp_pdu (the response PDU) – if executed successfully – has 1 byte to indicate the number of data bytes needed to hold the states of the coils that mb_req_pdu requested, and the remaining bytes store the status (on/off value) of those requested coils. Specifically, mb_req_pdu and mb_rsp_pdu of function code 01 are:
mb_req_pdu:
mb_rsp_pdu
For instance, mb_req_pdu and mb_rsp_pdu to read coils status from 20-38 will be:
mb_req_pdu:
Starting Address (2 bytes) is 0x0013 (or 19 in decimal), which is the 20th coil.
Quantity of Outputs (2 bytes) is 0x0013 (or 19 in decimal), which corresponds to the 19 status values of coils 20 to 38.
mb_rsp_pdu:
As 19 coils (20–38) are requested, 3 bytes are used to indicate the coil states, so the Byte Count is 0x03. The states of coils 20 to 27 are 0xCD, which is 1100 1101 in binary: coil 27 is the MSB and coil 20 is the LSB. The same applies to coils 28 to 35. For coils 36 to 38, the state byte is 0x05, which is 0000 0101: coil 38 is the third bit (counting from the right), i.e. 1, coil 37 is 0, and coil 36 is the LSB, i.e. 1. The remaining 5 bits are all 0.
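A short sketch (illustrative only, not a complete Modbus stack) of building this request PDU and unpacking the packed coil bits of the response; the middle response byte 0x6B is assumed for coils 28–35, which the example above does not spell out:

```python
def read_coils_request(start_addr: int, quantity: int) -> bytes:
    """mb_req_pdu for function code 0x01: function code + starting address + quantity."""
    return bytes([0x01]) + start_addr.to_bytes(2, "big") + quantity.to_bytes(2, "big")

def parse_coils_response(pdu: bytes, start_coil: int, quantity: int) -> dict:
    """Unpack the packed coil bits of mb_rsp_pdu (LSB of the first data byte = first coil)."""
    byte_count = pdu[1]
    assert byte_count * 8 >= quantity
    return {start_coil + i: bool((pdu[2 + i // 8] >> (i % 8)) & 1) for i in range(quantity)}

req = read_coils_request(0x0013, 0x0013)          # coils 20..38, as in the example above
rsp = bytes([0x01, 0x03, 0xCD, 0x6B, 0x05])       # function code, byte count, 3 data bytes
print(req.hex(), parse_coils_response(rsp, 20, 19))
```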
User-defined function codes.
User-defined function codes are function codes defined by users. Modbus gives two ranges of values for user-defined function codes: 65 to 72 and 100 to 110. User-defined function codes are therefore not guaranteed to be unique.
Reserved function codes.
Reserved Function Codes are function codes used by some companies for legacy product and are not available for public use.
Exception responses.
When a client sends a request to a server, there can be four possible events for that request:
Exception response message includes two other fields when compared to a normal response message:
All Modbus exception codes:
Modbus over Serial Line protocol.
The Modbus standard also defines Modbus over Serial Line, a protocol over the data link layer of the OSI model that allows the Modbus application layer protocol to be communicated over a serial bus. The Modbus Serial Line protocol is a master-slave protocol which supports one master and multiple slaves on the serial bus. With the Modbus protocol at the application layer, a client/server model is used for the devices on the communication channel. With Modbus over Serial Line, the client's role is implemented by the master, and the server's role is implemented by the slaves.
The organization's naming convention inverts the common usage of having multiple clients and only one server. To avoid this confusion, the RS-485 transport layer uses the terms "node" or "device" instead of "server", and the "client" is not a "node".
<templatestyles src="Template:Blockquote/styles.css" />The (Modbus Organization) is using "client-server" to describe Modbus communications, characterized by communication between [client device (s), which initiates communication and makes requests of server device(s), which process requests and return an appropriate response (or error message).
A serial bus for Modbus over Serial Line can have a maximum of 247 slaves communicating with 1 master. Those slaves have a unique address ranging from 1 to 247 (decimal value). The master doesn't need to have an address. The communication process is initiated by the master, as only it can initiate a Modbus transaction. A slave will never transmit any data or perform any action without a request from the master, and slaves cannot communicate with each other.
In Modbus over Serial Line, the master initiates requests to the slaves in unicast or broadcast modes. In unicast mode, the master will initiate a request to a single slave with a specific address. Upon receiving and finishing the request, the slave will respond with a message to the master. In this mode, a Modbus transaction includes two messages: one request from the master and one reply from the slave. Each slave must have a unique address (from 1 to 247) to be addressed independently for the communication. In broadcast mode, the master can send a request to all the slaves, using the broadcast address 0, which is the address reserved for broadcast exchanges (and not the master address). Slaves must accept broadcast exchanges but must not respond.
The mapping of PDU of Modbus to the serial bus of Modbus over Serial Line protocol results in Modbus Serial Line PDU.
Modbus Serial Line PDU = Address + PDU + CRC (or LRC)
With PDU = Function code + data
At the physical layer, MODBUS over Serial Line performs its bit-level communication over RS485 or RS232, with the TIA/EIA-485 two-wire interface being the most common. The RS485 four-wire interface is also used. TIA/EIA-232-E (RS232) can also be used, but is limited to point-to-point short-range communication. MODBUS over Serial Line has two transmission modes, RTU and ASCII, which correspond to two versions of the protocol, known as Modbus RTU and Modbus ASCII.
Modbus RTU.
Modbus RTU (Remote Terminal Unit), which is the most common implementation available for Modbus, makes use of a compact, binary representation of the data for protocol communication. The RTU format follows the commands/data with a cyclic redundancy check checksum as an error check mechanism to ensure the reliability of data. A Modbus RTU message must be transmitted continuously without inter-character hesitations. Modbus messages are framed (separated) by idle (silent) periods. Each byte (8 bits) of data is sent as 11 bits:
A Modbus RTU frame then will be:
The CRC calculation is widely known as CRC-16-MODBUS, whose polynomial is "x"16 + "x"15 + "x"2 + 1 (normal hexadecimal algebraic polynomial being codice_0 and reversed codice_1).
Example of a Modbus RTU frame in hexadecimal: codice_2 (CRC-16-MODBUS calculation for the 5 bytes from codice_3 to codice_4 gives codice_5, which is transmitted least significant byte first).
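A bit-by-bit sketch of the CRC-16-MODBUS computation (initial value 0xFFFF, reflected polynomial 0xA001); the frame bytes below are only an illustration, not the example referenced above:

```python
def crc16_modbus(data: bytes) -> int:
    """CRC-16-MODBUS: initial value 0xFFFF, reflected polynomial 0xA001, no final XOR."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

body = bytes([0x01, 0x04, 0x02, 0xFF, 0xFF])             # address, function code, data
frame = body + crc16_modbus(body).to_bytes(2, "little")  # CRC appended low byte first
print(frame.hex())
```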
To ensure frame integrity during the transmission, the time interval between two frames must be at least the transmission time of 3.5 characters, and the time interval between two consecutive characters must be no more than the transmission time of 1.5 characters. For example, with the default data rate of 19200 bit/s, the transmission times of 3.5 (t3.5) and 1.5 (t1.5) 11-bit characters are:
formula_0
formula_1
For higher data rates, Modbus RTU recommends using the fixed values of 750 μs for t1.5 and 1.750 ms for t3.5.
Modbus ASCII.
Modbus ASCII makes use of ASCII characters for protocol communication. The ASCII format uses a longitudinal redundancy check checksum. Modbus ASCII messages are framed by a leading colon (":") and trailing newline (CR/LF).
A Modbus ASCII frame includes:
Address, Function, Data, and LRC are ASCII hexadecimal encoded values, whereby 8-bit values (0–255) are encoded as two human-readable ASCII characters from the ranges 0–9 and A–F. For example, a value of 122 (0x7A) is encoded as two ASCII characters, "7" and "A", and transmitted as two bytes, codice_6 (0x37, the ASCII value for "7") and codice_7 (0x41, the ASCII value for "A").
LRC is calculated as the sum of 8-bit values (excluding the start and end characters), negated (two's complement) and encoded as an 8-bit value. For example, if Address, Function, and Data are 247, 3, 19, 137, 0, and 10, their sum is 416; the two's complement of this sum trimmed to 8 bits is 96 (256 × 2 − 416 = 96, i.e. 0x60), giving the following 17-character ASCII frame: codice_8. LRC is specified for use only as a checksum: because it is calculated on the encoded data rather than the transmitted characters, its 'longitudinal' characteristic is not available for use with parity bits to locate single-bit errors.
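A one-line sketch of the LRC computation, reproducing the 0x60 checksum of the worked example above:

```python
def lrc(data: bytes) -> int:
    """Modbus ASCII LRC: two's complement of the 8-bit sum of the bytes."""
    return (-sum(data)) & 0xFF

print(hex(lrc(bytes([247, 3, 19, 137, 0, 10]))))  # 0x60, as in the example above
```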
Modbus Messaging on TCP/IP.
Modbus TCP.
"Modbus TCP" or "Modbus TCP/IP" is a Modbus variant used for communications over TCP/IP networks, connecting over port 502. It does not require a checksum calculation, as lower layers already provide checksum protection.
Modbus TCP nomenclature is the same as for the Modbus over Serial Line protocol: any device which sends out a Modbus command is the 'client', and the response comes from a 'server'.
The ADU for Modbus TCP is officially called MODBUS TCP/IP ADU (or Modbus TCP/IP ADU) by the Modbus organization and is also called Modbus TCP frame by other parties.
MODBUS TCP/IP ADU = MBAP Header + Function code + Data
where MBAP (which stands for MODBUS Application Protocol header) is the dedicated header used on TCP/IP to identify the MODBUS Application Data Unit.
The MBAP Header contains the following fields:
"Unit identifier" is used with Modbus TCP devices that are composites of several Modbus devices, e.g. Modbus TCP to Modbus RTU gateways. In such a case, the unit identifier is the Server Address of the device behind the gateway.
A MODBUS TCP/IP ADU/Modbus TCP frame format then will be:
Example of a Modbus TCP/IP ADU/Modbus TCP frame in hexadecimal:
codice_9
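A sketch of wrapping a PDU into a MODBUS TCP/IP ADU (assuming the usual MBAP layout of transaction identifier, protocol identifier, length and unit identifier; the PDU bytes below are illustrative):

```python
import struct

def modbus_tcp_adu(transaction_id: int, unit_id: int, pdu: bytes) -> bytes:
    """MBAP header (transaction id, protocol id = 0, length, unit id) followed by the PDU."""
    length = len(pdu) + 1  # the unit identifier byte is counted in the length field
    return struct.pack(">HHHB", transaction_id, 0x0000, length, unit_id) + pdu

pdu = bytes([0x03, 0x00, 0x6B, 0x00, 0x03])  # read 3 holding registers starting at 0x006B
print(modbus_tcp_adu(transaction_id=1, unit_id=0x11, pdu=pdu).hex())
```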
Other Modbus protocol versions.
Besides the widely used Modbus RTU, Modbus ASCII and Modbus TCP, there are many variants of Modbus protocols:
Data models and function calls are identical for the first four variants listed above; only the encapsulation is different. However the variants are not interoperable, nor are the frame formats.
JBUS mapping.
Another "de facto" protocol closely related to Modbus appeared later, and was defined by PLC maker April Automates, the result of a collaborative effort between French companies Renault Automation and Merlin Gerin et Cie in 1985: JBUS. Differences between Modbus and JBUS at that time (number of entities, server stations) are now irrelevant as this protocol almost disappeared with the April PLC series, which AEG Schneider Automation bought in 1994 and then made obsolete. However, the name JBUS has survived to some extent.
JBUS supports function codes 1, 2, 3, 4, 5, 6, 15, and 16 and thus all the entities described above, although numbering is different:
References.
<templatestyles src="Reflist/styles.css" />
External links.
Specifications
Other
|
[
{
"math_id": 0,
"text": "t3.5 = 3.5*\\left( \\frac{11*1000}{19200} \\right)= 2.005 ms"
},
{
"math_id": 1,
"text": "t1.5 = 1.5*\\left( \\frac{11*10^6}{19200} \\right)= 859.375 \\mu s"
}
] |
https://en.wikipedia.org/wiki?curid=654223
|
65423
|
Brake
|
Mechanical device that inhibits motion
A brake is a mechanical device that inhibits motion by absorbing energy from a moving system. It is used for slowing or stopping a moving vehicle, wheel, axle, or to prevent its motion, most often accomplished by means of friction.
Background.
Most brakes commonly use friction between two surfaces pressed together to convert the kinetic energy of the moving object into heat, though other methods of energy conversion may be employed. For example, regenerative braking converts much of the energy to electrical energy, which may be stored for later use. Other methods convert kinetic energy into potential energy in such stored forms as pressurized air or pressurized oil. Eddy current brakes use magnetic fields to convert kinetic energy into electric current in the brake disc, fin, or rail, which is converted into heat. Still other braking methods even transform kinetic energy into different forms, for example by transferring the energy to a rotating flywheel.
Brakes are generally applied to rotating axles or wheels, but may also take other forms such as the surface of a moving fluid (flaps deployed into water or air). Some vehicles use a combination of braking mechanisms, such as drag racing cars with both wheel brakes and a parachute, or airplanes with both wheel brakes and drag flaps raised into the air during landing.
Since kinetic energy increases quadratically with velocity (formula_0), an object moving at 10 m/s has 100 times as much energy as one of the same mass moving at 1 m/s, and consequently the theoretical braking distance, when braking at the traction limit, is up to 100 times as long. In practice, fast vehicles usually have significant air drag, and energy lost to air drag rises quickly with speed.
Almost all wheeled vehicles have a brake of some sort. Even baggage carts and shopping carts may have them for use on a moving ramp. Most fixed-wing aircraft are fitted with wheel brakes on the undercarriage. Some aircraft also feature air brakes designed to reduce their speed in flight. Notable examples include gliders and some World War II-era aircraft, primarily some fighter aircraft and many dive bombers of the era. These allow the aircraft to maintain a safe speed in a steep descent. The Saab B 17 dive bomber and Vought F4U Corsair fighter used the deployed undercarriage as an air brake.
Friction brakes on automobiles store braking heat in the drum brake or disc brake while braking then conduct it to the air gradually. When traveling downhill some vehicles can use their engines to brake.
When the brake pedal of a modern vehicle with hydraulic brakes is pushed against the master cylinder, ultimately a piston pushes the brake pad against the brake disc which slows the wheel down. On the brake drum it is similar as the cylinder pushes the brake shoes against the drum which also slows the wheel down.
Types.
Brakes may be broadly described as using friction, pumping, or electromagnetics. One brake may use several principles: for example, a pump may pass fluid through an orifice to create friction:
Frictional.
Frictional brakes are most common and can be divided broadly into "shoe" or "pad" brakes, using an explicit wear surface, and hydrodynamic brakes, such as parachutes, which use friction in a working fluid and do not explicitly wear. Typically the term "friction brake" is used to mean pad/shoe brakes and excludes hydrodynamic brakes, even though hydrodynamic brakes use friction. Friction (pad/shoe) brakes are often rotating devices with a stationary pad and a rotating wear surface. Common configurations include shoes that contract to rub on the outside of a rotating drum, such as a band brake; a rotating drum with shoes that expand to rub the inside of a drum, commonly called a "drum brake", although other drum configurations are possible; and pads that pinch a rotating disc, commonly called a "disc brake". Other brake configurations are used, but less often. For example, PCC trolley brakes include a flat shoe which is clamped to the rail with an electromagnet; the Murphy brake pinches a rotating drum, and the Ausco Lambert disc brake uses a hollow disc (two parallel discs with a structural bridge) with shoes that sit between the disc surfaces and expand laterally.
A drum brake is a vehicle brake in which the friction is caused by a set of brake shoes that press against the inner surface of a rotating drum. The drum is connected to the rotating roadwheel hub.
Drum brakes generally can be found on older car and truck models. However, because of their low production cost, drum brake setups are also installed on the rear of some low-cost newer vehicles. Compared to modern disc brakes, drum brakes wear out faster due to their tendency to overheat.
The disc brake is a device for slowing or stopping the rotation of a road wheel. A brake disc (or rotor in U.S. English), usually made of cast iron or ceramic, is connected to the wheel or the axle. To stop the wheel, friction material in the form of brake pads (mounted in a device called a brake caliper) is forced mechanically, hydraulically, pneumatically or electromagnetically against both sides of the disc. Friction causes the disc and attached wheel to slow or stop.
Pumping.
Pumping brakes are often used where a pump is already part of the machinery. For example, an internal-combustion piston motor can have the fuel supply stopped, and then internal pumping losses of the engine create some braking. Some engines use a valve override called a Jake brake to greatly increase pumping losses. Pumping brakes can dump energy as heat, or can be regenerative brakes that recharge a pressure reservoir called a hydraulic accumulator.
Electromagnetic.
Electromagnetic brakes are likewise often used where an electric motor is already part of the machinery. For example, many hybrid gasoline/electric vehicles use the electric motor as a generator to charge electric batteries and also as a regenerative brake. Some diesel/electric railroad locomotives use the electric motors to generate electricity which is then sent to a resistor bank and dumped as heat. Some vehicles, such as some transit buses, do not already have an electric motor but use a secondary "retarder" brake that is effectively a generator with an internal short circuit. Related types of such a brake are eddy current brakes, and electro-mechanical brakes (which actually are magnetically driven friction brakes, but nowadays are often just called "electromagnetic brakes" as well).
Electromagnetic brakes slow an object through electromagnetic induction, which creates resistance and in turn either heat or electricity. Friction brakes apply pressure on two separate objects to slow the vehicle in a controlled manner.
Characteristics.
Brakes are often described according to several characteristics including:
Foundation components.
Foundation components are the brake-assembly components at the wheels of a vehicle, named for forming the basis of the rest of the brake system. These mechanical parts contained around the wheels are controlled by the air brake system.
The three types of foundation brake systems are “S” cam brakes, disc brakes and wedge brakes.
Brake boost.
Most modern passenger vehicles, and light vans, use a vacuum assisted brake system that greatly increases the force applied to the vehicle's brakes by its operator. This additional force is supplied by the manifold vacuum generated by air flow being obstructed by the throttle on a running engine. This force is greatly reduced when the engine is running at fully open throttle, as the difference between ambient air pressure and manifold (absolute) air pressure is reduced, and therefore available vacuum is diminished. However, brakes are rarely applied at full throttle; the driver takes the right foot off the gas pedal and moves it to the brake pedal - unless left-foot braking is used.
Because of low vacuum at high RPM, reports of unintended acceleration are often accompanied by complaints of failed or weakened brakes, as the high-revving engine, having an open throttle, is unable to provide enough vacuum to power the brake booster. This problem is exacerbated in vehicles equipped with automatic transmissions as the vehicle will automatically downshift upon application of the brakes, thereby increasing the torque delivered to the driven-wheels in contact with the road surface.
Heavier road vehicles, as well as trains, usually boost brake power with compressed air, supplied by one or more compressors.
Noise.
Although ideally a brake would convert all the kinetic energy into heat, in practice a significant amount may be converted into acoustic energy instead, contributing to noise pollution.
For road vehicles, the noise produced varies significantly with tire construction, road surface, and the magnitude of the deceleration. Noise can be caused by different things. These are signs that there may be issues with brakes wearing out over time.
Fires.
Railway brake malfunctions can produce sparks and cause forest fires. In some very extreme cases, disc brakes can become red hot and catch fire. This happened at the 2020 Tuscan Grand Prix, when the Mercedes W11's front carbon disc brakes nearly burst into flames due to low ventilation and high usage. These fires can also occur on some Mercedes Sprinter vans, when the load-adjusting sensor seizes up and the rear brakes have to compensate for the front ones.
Inefficiency.
A significant amount of energy is always lost while braking, even with regenerative braking which is not perfectly efficient. Therefore, a good metric of efficient energy use while driving is to note how much one is braking. If the majority of deceleration is from unavoidable friction instead of braking, one is squeezing out most of the service from the vehicle. Minimizing brake use is one of the fuel economy-maximizing behaviors.
While energy is always lost during a brake event, a secondary factor that influences efficiency is "off-brake drag", or drag that occurs when the brake is not intentionally actuated. After a braking event, hydraulic pressure drops in the system, allowing the brake caliper pistons to retract. However, this retraction must accommodate all compliance in the system (under pressure) as well as thermal distortion of components like the brake disc, or the brake system will drag until contact with the disc, for example, knocks the pads and pistons back from the rubbing surface. During this time, there can be significant brake drag. This brake drag can lead to significant parasitic power loss, thus impacting fuel economy and overall vehicle performance.
History.
Early brake system.
In the 1890s, wooden block brakes became obsolete when the Michelin brothers introduced rubber tires.
During the 1960s, some car manufacturers replaced drum brakes with disc brakes.
Electronic brake system.
In 1966, an anti-lock braking system (ABS) was fitted to the Jensen FF grand tourer.
In 1978, Bosch and Mercedes updated their 1936 anti-lock brake system for the Mercedes S-Class. That ABS is a fully electronic, four-wheel and multi-channel system that later became standard.
In 2005, electronic stability control (ESC), which automatically applies the brakes to avoid a loss of steering control, became compulsory for carriers of dangerous goods without data recorders in the Canadian province of Quebec.
Since 2017, numerous United Nations Economic Commission for Europe (UNECE) countries use the "Brake Assist System" (BAS), a function of the braking system that deduces an emergency braking event from a characteristic of the driver's brake demand and, under such conditions, assists the driver to improve braking.
In July 2013 UNECE vehicle regulation 131 was enacted. This regulation defines Advanced Emergency Braking Systems (AEBS) for heavy vehicles to automatically detect a potential forward collision and activate the vehicle braking system.
On 23 January 2020 UNECE vehicle regulation 152 was enacted, defining Advanced Emergency Braking Systems for light vehicles.
From May 2022, European Union law requires new vehicles to have an advanced emergency-braking system.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K=mv^2/2"
}
] |
https://en.wikipedia.org/wiki?curid=65423
|
65424
|
Hydraulics
|
Applied engineering involving liquids
Hydraulics (from Ancient Greek "hýdōr" 'water' and "aulós" 'pipe') is a technology and applied science using engineering, chemistry, and other sciences involving the mechanical properties and use of liquids. At a very basic level, hydraulics is the liquid counterpart of pneumatics, which concerns gases. Fluid mechanics provides the theoretical foundation for hydraulics, which focuses on applied engineering using the properties of fluids. In its fluid power applications, hydraulics is used for the generation, control, and transmission of power by the use of pressurized liquids. Hydraulic topics range through some parts of science and most of engineering modules, and they cover concepts such as pipe flow, dam design, fluidics, and fluid control circuitry. The principles of hydraulics are in use naturally in the human body within the vascular system and erectile tissue.
Free surface hydraulics is the branch of hydraulics dealing with free surface flow, such as occurring in rivers, canals, lakes, estuaries, and seas. Its sub-field open-channel flow studies the flow in open channels.
History.
Ancient and medieval eras.
Early uses of water power date back to Mesopotamia and ancient Egypt, where irrigation has been used since the 6th millennium BC and water clocks had been used since the early 2nd millennium BC. Other early examples of water power include the Qanat system in ancient Persia and the Turpan water system in ancient Central Asia.
Persian Empire and Urartu.
In the Persian Empire or previous entities in Persia, the Persians constructed an intricate system of water mills, canals and dams known as the Shushtar Historical Hydraulic System. The project, commenced by Achaemenid king Darius the Great and finished by a group of Roman engineers captured by Sassanian king Shapur I, has been referred to by UNESCO as "a masterpiece of creative genius". They were also the inventors of the Qanat, an underground aqueduct, around the 9th century BC. Several of Iran's large, ancient gardens were irrigated thanks to Qanats.
The Qanat spread to neighboring areas, including the Armenian highlands. There, starting in the early 8th century BC, the Kingdom of Urartu undertook significant hydraulic works, such as the Menua canal.
The earliest evidence of water wheels and watermills date back to the ancient Near East in the 4th century BC, specifically in the Persian Empire before 350 BCE, in the regions of Iraq, Iran, and Egypt.
China.
In ancient China there was Sunshu Ao (6th century BC), Ximen Bao (5th century BC), Du Shi (circa 31 AD), Zhang Heng (78 – 139 AD), and Ma Jun (200 – 265 AD), while medieval China had Su Song (1020 – 1101 AD) and Shen Kuo (1031–1095). Du Shi employed a waterwheel to power the bellows of a blast furnace producing cast iron. Zhang Heng was the first to employ hydraulics to provide motive power in rotating an armillary sphere for astronomical observation.
Sri Lanka.
In ancient Sri Lanka, hydraulics were widely used in the ancient kingdoms of Anuradhapura and Polonnaruwa. The discovery of the principle of the valve tower, or valve pit, (Bisokotuwa in Sinhalese) for regulating the escape of water is credited to ingenuity more than 2,000 years ago. By the first century AD, several large-scale irrigation works had been completed. Macro- and micro-hydraulics to provide for domestic horticultural and agricultural needs, surface drainage and erosion control, ornamental and recreational water courses and retaining structures and also cooling systems were in place in Sigiriya, Sri Lanka. The coral on the massive rock at the site includes cisterns for collecting water. Large ancient reservoirs of Sri Lanka are Kalawewa (King Dhatusena), Parakrama Samudra (King Parakrama Bahu), Tisa Wewa (King Dutugamunu), Minneriya (King Mahasen)
Greco-Roman world.
In Ancient Greece, the Greeks constructed sophisticated water and hydraulic power systems. An example is a construction by Eupalinos, under a public contract, of a watering channel for Samos, the Tunnel of Eupalinos. An early example of the usage of hydraulic wheel, probably the earliest in Europe, is the Perachora wheel (3rd century BC).
In Greco-Roman Egypt, the construction of the first hydraulic machine automata by Ctesibius (flourished c. 270 BC) and Hero of Alexandria (c. 10 – 80 AD) is notable. Hero describes several working machines using hydraulic power, such as the force pump, which is known from many Roman sites as having been used for raising water and in fire engines.
In the Roman Empire, different hydraulic applications were developed, including public water supplies, innumerable aqueducts, power using watermills and hydraulic mining. They were among the first to make use of the siphon to carry water across valleys, and used hushing on a large scale to prospect for and then extract metal ores. They used lead widely in plumbing systems for domestic and public supply, such as feeding thermae.
Hydraulic mining was used in the gold-fields of northern Spain, which was conquered by Augustus in 25 BC. The alluvial gold-mine of Las Medulas was one of the largest of their mines. At least seven long aqueducts worked it, and the water streams were used to erode the soft deposits, and then wash the tailings for the valuable gold content.
Arabic-Islamic world.
In the Muslim world during the Islamic Golden Age and Arab Agricultural Revolution (8th–13th centuries), engineers made wide use of hydropower as well as early uses of tidal power, and large hydraulic factory complexes. A variety of water-powered industrial mills were used in the Islamic world, including fulling mills, gristmills, paper mills, hullers, sawmills, ship mills, stamp mills, steel mills, sugar mills, and tide mills. By the 11th century, every province throughout the Islamic world had these industrial mills in operation, from Al-Andalus and North Africa to the Middle East and Central Asia. Muslim engineers also used water turbines, employed gears in watermills and water-raising machines, and pioneered the use of dams as a source of water power, used to provide additional power to watermills and water-raising machines.
Al-Jazari (1136–1206) described designs for 50 devices, many of them water-powered, in his book, "The Book of Knowledge of Ingenious Mechanical Devices", including water clocks, a device to serve wine, and five devices to lift water from rivers or pools. These include an endless belt with jugs attached and a reciprocating device with hinged valves.
The earliest programmable machines were water-powered devices developed in the Muslim world. A music sequencer, a programmable musical instrument, was the earliest type of programmable machine. The first music sequencer was an automated water-powered flute player invented by the Banu Musa brothers, described in their "Book of Ingenious Devices", in the 9th century. In 1206, Al-Jazari invented water-powered programmable automata/robots. He described four automaton musicians, including drummers operated by a programmable drum machine, where they could be made to play different rhythms and different drum patterns.
Modern era (c. 1600–1870).
Benedetto Castelli and Italian Hydraulics.
In 1619 Benedetto Castelli, a student of Galileo Galilei, published the book "Della Misura dell'Acque Correnti" or "On the Measurement of Running Waters," one of the foundations of modern hydrodynamics. He served as a chief consultant to the Pope on hydraulic projects, i.e., management of rivers in the Papal States, beginning in 1626.
The science and engineering of water in Italy from 1500 to 1800, as recorded in books and manuscripts, is presented in an illustrated catalog published in 2022.
Blaise Pascal.
Blaise Pascal (1623–1662) studied fluid hydrodynamics and hydrostatics, centered on the principles of hydraulic fluids. His discovery of the theory behind hydraulics led to his invention of the hydraulic press, which multiplied a smaller force acting on a smaller area into the application of a larger force totaled over a larger area, transmitted through the same pressure (or exact change of pressure) at both locations. Pascal's law or principle states that for an incompressible fluid at rest, the difference in pressure is proportional to the difference in height, and this difference remains the same whether or not the overall pressure of the fluid is changed by applying an external force. This implies that by increasing the pressure at any point in a confined fluid, there is an equal increase at every other point in the container, i.e., any change in pressure applied at any point of the liquid is transmitted undiminished throughout the fluid.
Jean Léonard Marie Poiseuille.
A French physician, Poiseuille (1797–1869) researched the flow of blood through the body and discovered an important law relating the rate of flow to the diameter of the tube in which the flow occurred.
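The relationship he found is now known as the Hagen–Poiseuille equation, in which the volumetric flow rate through a cylindrical tube grows with the fourth power of the tube radius. A minimal numerical sketch is given below; the tube dimensions and viscosity are illustrative assumptions, not values from Poiseuille's own work.

```python
import math

def poiseuille_flow(delta_p, radius, length, viscosity):
    """Volumetric flow rate (m^3/s) from the Hagen-Poiseuille equation:
    Q = pi * delta_p * r**4 / (8 * mu * L), valid for laminar flow."""
    return math.pi * delta_p * radius**4 / (8.0 * viscosity * length)

# Illustrative values only: water-like viscosity through a narrow tube.
q = poiseuille_flow(delta_p=100.0, radius=0.5e-3, length=0.1, viscosity=1.0e-3)
print(f"flow rate = {q:.3e} m^3/s")
```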
In the UK.
Several cities developed citywide hydraulic power networks in the 19th century, to operate machinery such as lifts, cranes, capstans and the like. Joseph Bramah (1748–1814) was an early innovator and William Armstrong (1810–1900) perfected the apparatus for power delivery on an industrial scale. In London, the London Hydraulic Power Company was a major supplier, its pipes serving large parts of the West End of London, the City and the Docks, but there were also schemes restricted to single enterprises such as docks and railway goods yards.
Hydraulic models.
After students understand the basic principles of hydraulics, some teachers use a hydraulic analogy to help students learn other things.
For example:
The conservation of mass requirement combined with fluid compressibility yields a fundamental relationship between pressure, fluid flow, and volumetric expansion, as shown below:
formula_0
Assuming an incompressible fluid or a "very large" ratio of compressibility to contained fluid volume, a finite rate of pressure rise requires that any net flow into the collected fluid volume create a volumetric change.
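As a concrete illustration of this relationship, the sketch below evaluates the pressure-rise rate for assumed values of the contained volume, the net inflows and the rate of volumetric expansion, taking the coefficient in the formula as the fluid bulk modulus. All numbers are illustrative assumptions, not values from a specific system.

```python
def pressure_rise_rate(beta, volume, inflows, dV_dt):
    """dp/dt = (beta / V) * (sum of inflows - dV/dt), with beta taken here
    as the fluid bulk modulus (Pa), volume in m^3 and flows in m^3/s."""
    return (beta / volume) * (sum(inflows) - dV_dt)

# Illustrative assumption: hydraulic-oil-like bulk modulus of about 1.5 GPa.
rate = pressure_rise_rate(beta=1.5e9, volume=0.002,
                          inflows=[1.0e-5, 2.0e-6], dV_dt=5.0e-6)
print(f"dp/dt = {rate:.3e} Pa/s")
```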
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{dp}{dt} = \\frac{\\beta}{V} \\left(\\sum_\\text{in} Q - \\frac{dV}{dt}\\right)"
}
] |
https://en.wikipedia.org/wiki?curid=65424
|
65426689
|
NPZ model
|
Mathematical model of marine ecosystem
An NPZ model is the most basic abstract representation, expressed as a mathematical model, of a pelagic ecosystem which examines the interrelationships between quantities of nutrients, phytoplankton and zooplankton as time-varying "states" which depend only on the relative concentrations of the various states at the given time.
One goal in pelagic ecology is to understand the interactions among available nutrients (i.e. the essential resource base), phytoplankton and zooplankton. The most basic models to shed light on this goal are called nutrient-phytoplankton-zooplankton (NPZ) models. These models are a subset of Ecosystem models.
Example.
An unrealistic but instructive example of an NPZ model is provided in Franks "et al." (1986) (FWF-NPZ model). It is a system of ordinary differential equations that examines the time evolution of dissolved and assimilated nutrients in an ideal upper water column, with three state variables corresponding to amounts of nutrients (N), phytoplankton (P) and zooplankton (Z). This closed-system model is shown in the figure to the right, which also shows the "flow" directions of each state quantity.
These interactions, assumed to be spatially homogeneous (the model is thus termed "zero-dimensional"), are described in general terms as follows.
This NPZ model can now be cast as a system of first order differential equations:
formula_0
formula_1
formula_2
where the parameters and variables are defined in the table below along with nominal values for a "standard environment"
An example of a 60 day sequence for the values shown is depicted in the figure to the right. Each state is color coded (Nutrient – black, Phytoplankton – green and Zooplankton – blue). Note that the initial nutrient concentration is rapidly consumed resulting in a phytoplankton "bloom" until the zooplankton begin aggressive grazing around day 10. Eventually both populations drop to a very low level and a high nutrient concentration remains. In the next section more sophistication is applied to the model in order to increase realism.
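A minimal numerical sketch of this zero-dimensional system is given below, integrating the three equations with SciPy. The parameter values and initial conditions are illustrative assumptions standing in for the "standard environment" table, not the exact values used by Franks "et al."

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed, not the published "standard environment" values)
Vm, ks, m = 2.0, 1.0, 0.1     # max uptake rate, half-saturation constant, phytoplankton mortality
Rm, Lam = 1.5, 1.0            # max grazing rate, Ivlev grazing constant
gamma, d = 0.7, 0.2           # zooplankton assimilation efficiency, zooplankton mortality

def npz(t, y):
    N, P, Z = y
    uptake = Vm * N / (ks + N) * P                 # nutrient uptake by phytoplankton
    grazing = Z * Rm * (1.0 - np.exp(-Lam * P))    # Ivlev grazing on phytoplankton
    dP = uptake - m * P - grazing
    dZ = gamma * grazing - d * Z
    dN = -uptake + m * P + d * Z + (1.0 - gamma) * grazing
    return [dN, dP, dZ]

sol = solve_ivp(npz, (0.0, 60.0), y0=[1.6, 0.3, 0.1])
print(sol.y[:, -1])   # N, P, Z after 60 days; the closed system conserves N + P + Z
```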
More Sophisticated NPZ Models.
The Franks "et al." (1986) work has inspired significant analysis from other researchers but is overly simplistic to capture the complexity of actual pelagic communities. A more realistic NPZ model would simulate control of primary production by incorporating mechanisms to simulate seasonally varying sunlight and decreasing illumination with depth. Evans and Parslow (1985) developed an NPZ model which includes these mechanisms and forms the basis of the following example (see also Denman and Pena (1999)).
A 200 day sequence resulting from this configuration of the FWF-NPZ model is shown in the figure to the right. Each state is color coded (Nutrient – black, Phytoplankton – green and Zooplankton – blue). Several interesting features in the model output are easily observed. First, a "spring bloom" occurs in the first 20 days or so, where the high nutrient concentrations are consumed by the phytoplankton, causing an inverse relationship which is halted by a rise in zooplankton concentration, eventually settling into a sustained steady-state solution for the remainder of the summer. Another bloom, not as pronounced as in the spring, occurs in the fall with a remixing of nutrients into the water column.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{\\mathrm{d}P}{\\mathrm{d}t} = \\frac{V_m N}{k_s + N}P-mP-ZR_m(1-e^{-\\Lambda P})"
},
{
"math_id": 1,
"text": "\\frac{\\mathrm{d}Z}{\\mathrm{d}t} = \\gamma ZR_m(1-e^{-\\Lambda P}) - d Z"
},
{
"math_id": 2,
"text": "\\frac{\\mathrm{d}N}{\\mathrm{d}t} = -\\frac{V_m N}{k_s + N}P + mP+dZ+(1 - \\gamma)Z R_m(1-e^{-\\Lambda P}),"
}
] |
https://en.wikipedia.org/wiki?curid=65426689
|
6543001
|
Conditioned disjunction
|
In logic, conditioned disjunction (sometimes called conditional disjunction) is a ternary logical connective introduced by Church. Given operands "p", "q", and "r", which represent truth-valued propositions, the meaning of the conditioned disjunction ["p", "q", "r"] is given by
formula_0
In words, ["p", "q", "r"] is equivalent to: "if "q", then "p", else "r", or "p" or "r", according as "q" or not "q". This may also be stated as "q" implies "p", and not "q" implies "r"". So, for any values of "p", "q", and "r", the value of ["p", "q", "r"] is the value of "p" when "q" is true, and is the value of "r" otherwise.
The conditioned disjunction is also equivalent to
formula_1
and has the same truth table as the ternary conditional operator codice_0 in many programming languages (with formula_2 being equivalent to codice_1). In electronic logic terms, it may also be viewed as a single-bit multiplexer.
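A brief illustrative check (a sketch, not taken from the cited sources) confirms that the defining formula and the ternary conditional operator agree on all eight truth-value combinations:

```python
from itertools import product

def conditioned_disjunction(p, q, r):
    """[p, q, r] defined as (q -> p) and (not q -> r)."""
    return ((not q) or p) and (q or r)

def ternary(p, q, r):
    """The same truth function written with the ternary conditional operator."""
    return p if q else r

for p, q, r in product([False, True], repeat=3):
    assert conditioned_disjunction(p, q, r) == ternary(p, q, r)
    print(p, q, r, conditioned_disjunction(p, q, r))
```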
In conjunction with truth constants denoting each truth-value, conditioned disjunction is truth-functionally complete for classical logic. There are other truth-functionally complete ternary connectives.
Truth table.
The truth table for formula_3:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "[p, q, r] \\Leftrightarrow (q \\to p) \\land (\\neg q \\to r)."
},
{
"math_id": 1,
"text": "(q \\land p) \\lor (\\neg q \\land r)"
},
{
"math_id": 2,
"text": "[b, a, c]"
},
{
"math_id": 3,
"text": "[p,q,r]"
}
] |
https://en.wikipedia.org/wiki?curid=6543001
|
654387
|
Distance geometry
|
Distance geometry is the branch of mathematics concerned with characterizing and studying sets of points based "only" on given values of the distances between pairs of points. More abstractly, it is the study of semimetric spaces and the isometric transformations between them. In this view, it can be considered as a subject within general topology.
Historically, the first result in distance geometry is Heron's formula in 1st century AD. The modern theory began in 19th century with work by Arthur Cayley, followed by more extensive developments in the 20th century by Karl Menger and others.
Distance geometry problems arise whenever one needs to infer the shape of a configuration of points (relative positions) from the distances between them, such as in biology, sensor networks, surveying, navigation, cartography, and physics.
Introduction and definitions.
The concepts of distance geometry will first be explained by describing two particular problems.
First problem: hyperbolic navigation.
Consider three ground radio stations A, B, C, whose locations are known. A radio receiver is at an unknown location. The times it takes for a radio signal to travel from the stations to the receiver, formula_0, are unknown, but the time differences, formula_1 and formula_2, are known. From them, one knows the distance differences formula_3 and formula_4, from which the position of the receiver can be found.
Second problem: dimension reduction.
In data analysis, one is often given a list of data represented as vectors formula_5, and one needs to find out whether they lie within a low-dimensional affine subspace. A low-dimensional representation of data has many advantages, such as saving storage space, computation time, and giving better insight into data.
Definitions.
Now we formalize some definitions that naturally arise from considering our problems.
Semimetric space.
Given a list of points on formula_6, formula_7, we can arbitrarily specify the distances between pairs of points by a list of formula_8, formula_9. This defines a semimetric space: a metric space without triangle inequality.
Explicitly, we define a semimetric space as a nonempty set formula_10 equipped with a semimetric formula_11 such that, for all formula_12, formula_13 if and only if formula_14, and formula_15.
Any metric space is "a fortiori" a semimetric space. In particular, formula_16, the formula_17-dimensional Euclidean space, is the canonical metric space in distance geometry.
The triangle inequality is omitted in the definition, because we do not want to enforce more constraints on the distances formula_18 than the mere requirement that they be positive.
In practice, semimetric spaces naturally arise from inaccurate measurements. For example, given three points formula_19 on a line, with formula_20, an inaccurate measurement could give formula_21, violating the triangle inequality.
Isometric embedding.
Given two semimetric spaces, formula_22, an isometric embedding from formula_10 to formula_23 is a map formula_24 that preserves the semimetric, that is, for all formula_12, formula_25.
For example, given the finite semimetric space formula_26 defined above, an isometric embedding from formula_10 to formula_16 is defined by points formula_27, such that formula_28 for all formula_9.
Affine independence.
Given the points formula_27, they are defined to be affinely independent iff they cannot fit inside a single formula_29-dimensional affine subspace of formula_30, for any formula_31, iff the formula_32-simplex they span, formula_33, has positive formula_32-volume, that is, formula_34.
In general, when formula_35, they are affinely independent, since a generic "n"-simplex is nondegenerate. For example, 3 points in the plane, in general, are not collinear, because the triangle they span does not degenerate into a line segment. Similarly, 4 points in space, in general, are not coplanar, because the tetrahedron they span does not degenerate into a flat triangle.
When formula_36, they must be affinely dependent. This can be seen by noting that any formula_32-simplex that can fit inside formula_16 must be "flat".
Cayley–Menger determinants.
Cayley–Menger determinants, named after Arthur Cayley and Karl Menger, are determinants of matrices of distances between sets of points.
Let formula_37 be "n" + 1 points in a semimetric space; their Cayley–Menger determinant is defined by
formula_38
If formula_39, then they make up the vertices of a possibly degenerate "n"-simplex formula_33 in formula_16. It can be shown that the "n"-dimensional volume of the simplex formula_33 satisfies
formula_40
Note that, for the case of formula_41, we have formula_42, meaning the "0-dimensional volume" of a 0-simplex is 1, that is, there is 1 point in a 0-simplex.
formula_37 are affinely independent iff formula_34, that is, formula_43. Thus Cayley–Menger determinants give a computational way to prove affine independence.
If formula_44, then the points must be affinely dependent, thus formula_45. Cayley's 1841 paper studied the special case of formula_46, that is, any five points formula_47 in 3-dimensional space must have formula_48.
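A short computational sketch of these formulas (an illustration, not code from the sources cited here) builds the Cayley–Menger determinant from a list of points, recovers the simplex volume from it, and uses it to test affine independence:

```python
import numpy as np
from itertools import combinations
from math import factorial

def cayley_menger(points):
    """Cayley-Menger determinant of n+1 points given as rows of an array."""
    pts = np.asarray(points, dtype=float)
    n1 = len(pts)                              # n + 1 points
    M = np.zeros((n1 + 1, n1 + 1))
    for i, j in combinations(range(n1), 2):
        M[i, j] = M[j, i] = np.sum((pts[i] - pts[j]) ** 2)   # squared distances
    M[:n1, n1] = M[n1, :n1] = 1.0              # bordering row/column of ones
    return np.linalg.det(M)

def simplex_volume(points):
    """n-dimensional volume of the simplex spanned by n+1 points,
    using Vol^2 = (-1)^(n+1) / ((n!)^2 * 2^n) * CM."""
    n = len(points) - 1
    vol2 = (-1) ** (n + 1) / (factorial(n) ** 2 * 2 ** n) * cayley_menger(points)
    return np.sqrt(max(vol2, 0.0))             # guard against round-off

triangle = [(0, 0), (1, 0), (0, 1)]
print(simplex_volume(triangle))                     # 0.5: affinely independent points
print(cayley_menger([(0, 0), (1, 1), (2, 2)]))      # ~0: three collinear points
```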
History.
The first result in distance geometry is Heron's formula, from 1st century AD, which gives the area of a triangle from the distances between its 3 vertices. Brahmagupta's formula, from 7th century AD, generalizes it to cyclic quadrilaterals. Tartaglia, from 16th century AD, generalized it to give the volume of tetrahedron from the distances between its 4 vertices.
The modern theory of distance geometry began with Arthur Cayley and Karl Menger. Cayley published the Cayley determinant in 1841, which is a special case of the general Cayley–Menger determinant. Menger proved in 1928 a characterization theorem of all semimetric spaces that are isometrically embeddable in the "n"-dimensional Euclidean space formula_49. In 1931, Menger used distance relations to give an axiomatic treatment of Euclidean geometry.
Leonard Blumenthal's book gives a general overview of distance geometry at the graduate level; a large part of it was treated in English for the first time when it was published.
Menger characterization theorem.
Menger proved the following characterization theorem of semimetric spaces: A semimetric space formula_26 is isometrically embeddable in the formula_32-dimensional Euclidean space formula_49, but not in formula_50 for any formula_51, if and only if:
A proof of this theorem in a slightly weakened form (for metric spaces instead of semimetric spaces) appears in the cited literature.
Characterization via Cayley–Menger determinants.
The following results are proved in Blumenthal's book.
Embedding "n" + 1 points in the real numbers.
Given a semimetric space formula_56, with formula_57, and formula_58, formula_9, an isometric embedding of formula_59 into formula_49 is defined by formula_60, such that formula_28 for all formula_9.
Again, one asks whether such an isometric embedding exists for formula_61.
A necessary condition is easy to see: for all formula_62, let formula_63 be the "k"-simplex formed by formula_64, then
formula_65
The converse also holds. That is, if for all formula_62,
formula_66
then such an embedding exists.
Further, such an embedding is unique up to isometry in formula_49. That is, given any two isometric embeddings defined by formula_67, and formula_68, there exists a (not necessarily unique) isometry formula_69, such that formula_70 for all formula_71. Such formula_72 is unique if and only if formula_73, that is, formula_67 are affinely independent.
Embedding "n" + 2 and "n" + 3 points.
If formula_74 points formula_75 can be embedded in formula_49 as formula_76, then, in addition to the conditions above, a further necessary condition is that the formula_52-simplex formed by formula_77 must have no formula_52-dimensional volume. That is, formula_78.
The converse also holds. That is, if for all formula_62,
formula_79
and
formula_80
then such an embedding exists.
For embedding formula_81 points in formula_49, the necessary and sufficient conditions are similar:
Embedding arbitrarily many points.
The formula_81 case turns out to be sufficient in general.
In general, given a semimetric space formula_26, it can be isometrically embedded in formula_49 if and only if there exists formula_86, such that, for all formula_62, formula_82, and for any formula_87,
And such an embedding is unique up to isometry in formula_49.
Further, if formula_73, then it cannot be isometrically embedded in any formula_88, and such an embedding is unique up to a unique isometry in formula_49.
Thus, Cayley–Menger determinants give a concrete way to calculate whether a semimetric space can be embedded in formula_49, for some finite formula_32, and if so, what is the minimal formula_32.
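Complementing the determinant criteria above, the sketch below recovers coordinates realizing a given Euclidean distance matrix, up to isometry, using the standard classical multidimensional scaling construction (an illustration, not an algorithm from the sources cited here):

```python
import numpy as np

def embed_from_distances(D, tol=1e-9):
    """Given a matrix of pairwise Euclidean distances, return point
    coordinates realizing them (classical multidimensional scaling).
    The result is unique only up to an isometry of the ambient space."""
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # Gram matrix of centered points
    w, V = np.linalg.eigh(B)
    keep = w > tol                               # positive eigenvalues give the dimension
    return V[:, keep] * np.sqrt(w[keep])

# Distances of a unit square: embeddable in the plane (two positive eigenvalues).
s2 = np.sqrt(2.0)
D = np.array([[0, 1, s2, 1],
              [1, 0, 1, s2],
              [s2, 1, 0, 1],
              [1, s2, 1, 0]])
X = embed_from_distances(D)
print(X.shape)                                   # (4, 2)
print(np.allclose(np.linalg.norm(X[0] - X[1]), 1.0))
```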
Applications.
There are many applications of distance geometry.
In telecommunication networks such as GPS, the positions of some sensors are known (which are called anchors) and some of the distances between sensors are also known: the problem is to identify the positions for all sensors. Hyperbolic navigation is one pre-GPS technology that uses distance geometry for locating ships based on the time it takes for signals to reach anchors.
There are many applications in chemistry. Techniques such as NMR can measure distances between pairs of atoms of a given molecule, and the problem is to infer the 3-dimensional shape of the molecule from those distances.
Some software packages for applications are:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " t_A,t_B,t_C "
},
{
"math_id": 1,
"text": "t_A-t_B "
},
{
"math_id": 2,
"text": "t_A-t_C "
},
{
"math_id": 3,
"text": "c(t_A-t_B) "
},
{
"math_id": 4,
"text": "c(t_A-t_C) "
},
{
"math_id": 5,
"text": "\\mathbf{v} = (x_1, \\ldots, x_n)\\in \\mathbb{R}^n"
},
{
"math_id": 6,
"text": "R = \\{P_0, \\ldots, P_n\\}"
},
{
"math_id": 7,
"text": "n \\ge 0"
},
{
"math_id": 8,
"text": "d_{ij}> 0"
},
{
"math_id": 9,
"text": "0 \\le i < j \\le n"
},
{
"math_id": 10,
"text": "R"
},
{
"math_id": 11,
"text": "d: R\\times R \\to [0, \\infty)"
},
{
"math_id": 12,
"text": "x, y\\in R"
},
{
"math_id": 13,
"text": "d(x, y) = 0"
},
{
"math_id": 14,
"text": "x = y"
},
{
"math_id": 15,
"text": "d(x, y) = d(y, x)"
},
{
"math_id": 16,
"text": "\\mathbb{R}^k"
},
{
"math_id": 17,
"text": "k"
},
{
"math_id": 18,
"text": "d_{ij}"
},
{
"math_id": 19,
"text": "A, B, C"
},
{
"math_id": 20,
"text": "d_{AB} = 1, d_{BC} = 1, d_{AC} = 2"
},
{
"math_id": 21,
"text": "d_{AB} = 0.99, d_{BC} = 0.98, d_{AC} = 2.00"
},
{
"math_id": 22,
"text": "(R, d), (R', d')"
},
{
"math_id": 23,
"text": "R'"
},
{
"math_id": 24,
"text": "f: R \\to R'"
},
{
"math_id": 25,
"text": "d(x, y) = d'(f(x), f(y))"
},
{
"math_id": 26,
"text": "(R, d)"
},
{
"math_id": 27,
"text": "A_0, A_1,\\ldots, A_n \\in \\mathbb R^k"
},
{
"math_id": 28,
"text": "d(A_i, A_j) = d_{ij}"
},
{
"math_id": 29,
"text": "\nl"
},
{
"math_id": 30,
"text": " \\mathbb{R}^k"
},
{
"math_id": 31,
"text": " \\ell < n"
},
{
"math_id": 32,
"text": "n"
},
{
"math_id": 33,
"text": "v_n"
},
{
"math_id": 34,
"text": "\\operatorname{Vol}_n(v_n) > 0"
},
{
"math_id": 35,
"text": "k\\ge n "
},
{
"math_id": 36,
"text": " n > k"
},
{
"math_id": 37,
"text": "A_0, A_1,\\ldots, A_n"
},
{
"math_id": 38,
"text": "\n\\operatorname{CM}(A_0, \\cdots, A_n) = \\begin{vmatrix} \n0 & d_{01}^2 & d_{02}^2 & \\cdots & d_{0n}^2 & 1 \\\\\nd_{01}^2 & 0 & d_{12}^2 & \\cdots & d_{1n}^2 & 1 \\\\\nd_{02}^2 & d_{12}^2 & 0 & \\cdots & d_{2n}^2 & 1 \\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\\nd_{0n}^2 & d_{1n}^2 & d_{2n}^2 & \\cdots & 0 & 1 \\\\\n1 & 1 & 1 & \\cdots & 1 & 0\n\\end{vmatrix}"
},
{
"math_id": 39,
"text": " A_0, A_1,\\ldots, A_n \\in \\mathbb R^k"
},
{
"math_id": 40,
"text": " \\operatorname{Vol}_n(v_n)^2 = \\frac{(-1)^{n+1}}{(n!)^2 2^n} \\operatorname{CM}(A_0, \\ldots, A_n). "
},
{
"math_id": 41,
"text": "n=0"
},
{
"math_id": 42,
"text": "\\operatorname{Vol}_0(v_0) = 1"
},
{
"math_id": 43,
"text": " (-1)^{n+1} \\operatorname{CM}(A_0, \\ldots, A_n) > 0"
},
{
"math_id": 44,
"text": "\n k < n"
},
{
"math_id": 45,
"text": "\n \\operatorname{CM}(A_0, \\ldots, A_n) = 0"
},
{
"math_id": 46,
"text": "\nk = 3, n = 4"
},
{
"math_id": 47,
"text": "\nA_0, \\ldots, A_4"
},
{
"math_id": 48,
"text": "\n\\operatorname{CM}(A_0, \\ldots, A_4) = 0"
},
{
"math_id": 49,
"text": "\\mathbb{R}^n"
},
{
"math_id": 50,
"text": "\\mathbb{R}^m"
},
{
"math_id": 51,
"text": "0 \\le m < n"
},
{
"math_id": 52,
"text": "(n+1)"
},
{
"math_id": 53,
"text": "S"
},
{
"math_id": 54,
"text": "(n+3)"
},
{
"math_id": 55,
"text": "S'"
},
{
"math_id": 56,
"text": "\n (S,d)"
},
{
"math_id": 57,
"text": "S = \\{P_0, \\ldots, P_n\\}"
},
{
"math_id": 58,
"text": "d(P_i, P_j) = d_{ij}\\ge 0"
},
{
"math_id": 59,
"text": "(S, d)"
},
{
"math_id": 60,
"text": "A_0, A_1,\\ldots, A_n \\in \\mathbb R^n"
},
{
"math_id": 61,
"text": "(S,d)"
},
{
"math_id": 62,
"text": "k = 1, \\ldots, n"
},
{
"math_id": 63,
"text": "v_k"
},
{
"math_id": 64,
"text": "A_0, A_1,\\ldots, A_k"
},
{
"math_id": 65,
"text": "(-1)^{k+1} \\operatorname{CM}(P_0, \\ldots, P_k) = (-1)^{k+1} \\operatorname{CM}(A_0, \\ldots, A_k) = 2^k (k!)^k \\operatorname{Vol}_k(v_k)^2 \\ge 0"
},
{
"math_id": 66,
"text": "(-1)^{k+1}\\operatorname{CM}(P_0, \\ldots, P_k) \\ge 0,"
},
{
"math_id": 67,
"text": "A_0, A_1,\\ldots, A_n"
},
{
"math_id": 68,
"text": "A'_0, A'_1,\\ldots, A'_n"
},
{
"math_id": 69,
"text": "T : \\mathbb R^n \\to \\mathbb R^n"
},
{
"math_id": 70,
"text": "T(A_k) = A'_k"
},
{
"math_id": 71,
"text": "k = 0, \\ldots, n"
},
{
"math_id": 72,
"text": "T"
},
{
"math_id": 73,
"text": "\\operatorname{CM}(P_0, \\ldots, P_n) \\neq 0"
},
{
"math_id": 74,
"text": "n+2"
},
{
"math_id": 75,
"text": "P_0, \\ldots, P_{n+1}"
},
{
"math_id": 76,
"text": "A_0, \\ldots, A_{n+1}"
},
{
"math_id": 77,
"text": "A_0, A_1,\\ldots, A_{n+1}"
},
{
"math_id": 78,
"text": "\\operatorname{CM}(P_0, \\ldots, P_n, P_{n+1}) = 0"
},
{
"math_id": 79,
"text": "(-1)^{k+1} \\operatorname{CM}(P_0, \\ldots, P_k) \\ge 0,"
},
{
"math_id": 80,
"text": " \\operatorname{CM}(P_0, \\ldots, P_n, P_{n+1}) = 0, "
},
{
"math_id": 81,
"text": "n+3"
},
{
"math_id": 82,
"text": "(-1)^{k+1} \\operatorname{CM}(P_0, \\ldots, P_k) \\ge 0"
},
{
"math_id": 83,
"text": "\\operatorname{CM}(P_0, \\ldots, P_n, P_{n+1}) = 0;"
},
{
"math_id": 84,
"text": "\\operatorname{CM}(P_0, \\ldots, P_n, P_{n+2}) = 0;"
},
{
"math_id": 85,
"text": "\\operatorname{CM}(P_0, \\ldots, P_n, P_{n+1}, P_{n+2}) = 0."
},
{
"math_id": 86,
"text": "P_0, \\ldots, P_n\\in R"
},
{
"math_id": 87,
"text": "P_{n+1}, P_{n+2} \\in R"
},
{
"math_id": 88,
"text": "\\mathbb{R}^m, m < n"
}
] |
https://en.wikipedia.org/wiki?curid=654387
|
6544443
|
Organocopper chemistry
|
Compound with carbon to copper bonds
Organocopper chemistry is the study of the physical properties, reactions, and synthesis of organocopper compounds, which are organometallic compounds containing a carbon to copper chemical bond. They are reagents in organic chemistry.
The first organocopper compound, the explosive copper(I) acetylide (), was synthesized by Rudolf Christian Böttger in 1859 by passing acetylene gas through a solution of copper(I) chloride:
Structure and bonding.
Organocopper compounds are diverse in structure and reactivity, but almost all are based on copper with an oxidation state of +1, sometimes denoted Cu(I) or . With 10 electrons in its valence shell, the bonding behavior of Cu(I) is similar to Ni(0), but owing to its higher oxidation state, it engages in less pi-backbonding. Organic derivatives of copper's higher oxidation states +2 and +3 are sometimes encountered as reaction intermediates, but rarely isolated or even observed.
Organocopper compounds form complexes with a variety of soft ligands such as alkylphosphines (), thioethers (), and cyanide ().
Due to the spherical electronic shell of , copper(I) complexes have symmetrical structures - either linear, trigonal planar or tetrahedral, depending on the number of ligands.
Simple complexes with CO, alkene, and Cp ligands.
Copper(I) salts have long been known to bind CO, albeit weakly. A representative complex is CuCl(CO), which is polymeric. In contrast to classical metal carbonyls, pi-backbonding is not strong in these compounds.
Alkenes bind to copper(I), although again generally weakly. The binding of ethylene to Cu in proteins is of broad significance in plant biology so much so that ethylene is classified as a plant hormone. Its presence, detected by the Cu-protein, affects ripening and many other developments.
Although copper does not form a metallocene, half-sandwich complexes can be produced. One such derivative is π-cyclopentadienyl(triethylphosphine)copper(I).
Alkyl and aryl copper compounds.
Alkyl and aryl copper(I) compounds.
Copper halides react with organolithium reagents to give organocopper compounds. The area was pioneered by Henry Gilman, who reported methylcopper in 1936. Thus, phenylcopper is prepared by reaction of phenyllithium with copper(I) bromide in diethyl ether. Grignard reagents can be used in place of organolithium compounds. Gilman also investigated the dialkylcuprates. These are obtained by combining two equivalents of RLi with Cu(I) salts. Alternatively, these cuprates are prepared from oligomeric neutral organocopper compounds by treatment with one equivalent of organolithium reagent.
Compounds of the type are reactive towards oxygen and water, forming copper(I) oxide. They also tend to be thermally unstable, which can be useful in certain coupling reactions. Despite or because of these difficulties, organocopper reagents are frequently generated and consumed in situ with no attempt to isolate them. They are used in organic synthesis as alkylating reagents because they exhibit greater functional group tolerance than corresponding Grignard and organolithium reagents. The electronegativity of copper is much higher than its next-door neighbor in the group 12 elements, zinc, suggesting diminished nucleophilicity for its carbon ligands.
Copper salts react with terminal alkynes to form the acetylides.
Alkyl halides react with organocopper compounds with inversion of configuration. On the other hand, reactions of organocopper compounds with alkenyl halides proceed with retention of the substrate's configuration.
Organocopper compounds couple with aryl halides (see Ullmann condensation and Ullmann reaction):
formula_0
Structures.
Alkyl and aryl copper complexes aggregate both in crystalline form and in solution. Aggregation is especially evident for charge-neutral organocopper compounds, i.e. species with the empirical formula (RCu), which adopt cyclic structures. Since each copper center requires at least two ligands, the organic group is a bridging ligand. This effect is illustrated by the structure of mesitylcopper, which is a pentamer. A cyclic structure is also seen for , where Me stands for a methyl group, the first 1:1 organocopper compound to be analyzed by X-ray crystallography (1972 by Lappert). This compound is relatively stable because the bulky trimethylsilyl groups provide steric protection. It is a tetramer, forming an 8-membered ring with alternating Cu-C bonds. In addition, the four copper atoms form a planar ring based on three-center two-electron bonds. The copper to copper bond length is 242 pm compared to 256 pm in bulk copper. In "pentamesitylpentacopper" a 5-membered copper ring is formed, similar to (2,4,6-trimethylphenyl)gold, and "pentafluorophenylcopper" is a tetramer.
Lithium dimethylcuprate(I) is a dimer in diethyl ether, forming an 8-membered ring with two lithium atoms linking two methyl groups, . Similarly, lithium diphenylcuprate(I) forms a dimeric etherate, , in the solid state.
Alkyl and aryl copper(III) compounds.
The involvement of the otherwise rare Cu(III) oxidation state has been demonstrated in the conjugate addition of the Gilman reagent to an enone: in a so-called rapid-injection NMR experiment at −100 °C, the Gilman reagent (stabilized by lithium iodide) was introduced to cyclohexenone (1), enabling the detection of the copper–alkene pi complex 2. On subsequent addition of trimethylsilyl cyanide the Cu(III) species 3 is formed (indefinitely stable at that temperature), and on increasing the temperature to −80 °C the conjugate addition product 4 is formed. According to accompanying in silico experiments, the Cu(III) intermediate has a square planar molecular geometry with the cyano group in "cis" orientation with respect to the cyclohexenyl methine group and anti-parallel to the methine proton. With ligands other than the cyano group, this study predicts room temperature stable Cu(III) compounds.
Reactions of organocuprates.
Cross-coupling reactions.
Prior to the development of palladium-catalyzed cross coupling reactions, copper was the preferred catalyst for almost a century. Palladium offers a faster, more selective reaction. Copper reagents and catalysts continue to be the subject of innovation. Relative to palladium, copper is cheaper but the turnover numbers are often lower with copper and the reaction conditions more vigorous.
Reactions of organocuprates with alkyl halides give the coupling product:
The reaction mechanism involves oxidative addition (OA) of the alkyl halide to Cu(I), forming a planar Cu(III) intermediate, followed by reductive elimination (RE). The nucleophilic attack is the rate-determining step. In the substitution of iodide, a single-electron transfer mechanism is proposed (see figure).
formula_1
Many electrophiles participate in this reaction. The approximate order of reactivity, beginning with the most reactive, is as follows: acid chlorides > aldehydes > tosylates ~ epoxides > iodides > bromides > chlorides > ketones > esters > nitriles » alkenes
Generally the OA-RE mechanism is analogous to that of palladium-catalyzed cross coupling reactions. One difference between copper and palladium is that copper can undergo single-electron transfer processes.
Coupling reactions.
Oxidative coupling is the coupling of copper acetylides to conjugated alkynes in the Glaser coupling (for example in the synthesis of cyclooctadecanonaene) or to aryl halides in the Castro-Stephens Coupling.
Reductive coupling is a coupling reaction of aryl halides with a stoichiometric equivalent of copper metal that occurs in the Ullmann reaction. In a related reaction called decarboxylative cross-coupling, one coupling partner is a carboxylate. Cu(I) displaces the carboxyl group, forming the arylcopper (ArCu) intermediate. Simultaneously, a palladium catalyst reacts with an aryl bromide to give an organopalladium intermediate (Ar'PdB), which undergoes transmetallation to give ArPdAr', which in turn reductively eliminates the biaryl.
Redox neutral coupling is the coupling of terminal alkynes with halo-alkynes with a copper(I) salt in the Cadiot-Chodkiewicz coupling. Thermal coupling of two organocopper compounds is also possible.
Carbocupration.
Carbocupration is a nucleophilic addition of organocopper reagents () to acetylene or terminal alkynes resulting in an alkenylcopper compound (). It is a special case of carbometalation and also called the Normant reaction.
Synthetic applications.
Reducing agents.
Copper hydrides are specialized reducing agents. A well-known copper hydride is Stryker's reagent, with the formula [(PPh3)CuH]6. It reduces the alkene portion of α,β-unsaturated carbonyl compounds. A related but catalytic reaction uses a copper(I) NHC complex with hydride equivalents provided by a hydrosilane.
Copper alkylation reaction.
Generally, the alkylation reactions of organocopper reagents proceed via "gamma"-alkylation. "Cis"-"gamma" attack is favored in cyclohexyl carbamates due to sterics. The reaction is reported to be favorable in ethereal solvents. This method has proved to be very effective for the oxidative coupling of amines with alkyl (including "tert"-butyl) and aryl halides.
Vicinal functionalization reactions.
Vicinal functionalization using a carbocupration/Mukaiyama aldol reaction sequence:
Muller and collaborators reported a vicinal functionalization of α,β-acetylenic esters using a carbocupration/Mukaiyama aldol reaction sequence (as shown in the figure above); the carbocupration favors the formation of the Z-aldol.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\begin{align}\n\\ce{{ArX} + (Ar')2CuLi}\\ &\\ce{<=> {ArAr'CuLi} + Ar'X}\\\\\n\\ce{2ArAr'CuLi}\\ &\\ce{<=> {(Ar)2CuLi} + (Ar')2CuLi}\\\\\n\\ce{{ArAr'CuLi} + O2}\\ &\\ce{-> Ar-Ar'}\n\\end{align}\n"
},
{
"math_id": 1,
"text": "[\\ce{R}{-}{\\color{Blue}\\ce{Cu}}\\ce{-R}]^-\\ce{Li+}\\ \\xrightarrow{\\color{Red}\\ce{R'-X}}\\ \\left[\\ce R{-}\\overset{{\\displaystyle \\color{Red} \\ce R'} \\atop |}\\underset{| \\atop {\\displaystyle \\color{Red} \\ce X}}{\\color{Blue}\\ce{Cu}}\\ce{-R} \\right]^-\\ce{Li+} \\ce{-> R}{-}{\\color{Blue}\\ce{Cu}} + \\ce{R}{-}{\\color{Red}\\ce{R'}} + \\ce{Li}{-}{\\color{Red}\\ce{X}}"
}
] |
https://en.wikipedia.org/wiki?curid=6544443
|
65453552
|
Barzilai-Borwein method
|
Mathematical optimization method
The Barzilai-Borwein method is an iterative gradient descent method for unconstrained optimization using either of two step sizes derived from the linear trend of the most recent two iterates. This method, and modifications, are globally convergent under mild conditions, and perform competitively with conjugate gradient methods for many problems. Not depending on the objective itself, it can also solve some systems of linear and non-linear equations.
Method.
To minimize a convex function formula_0 with gradient vector formula_1 at point formula_2, let there be two prior iterates, formula_3 and formula_4, in which formula_5 where formula_6 is the previous iteration's step size (not necessarily a Barzilai-Borwein step size), and for brevity, let formula_7 and formula_8.
A Barzilai-Borwein (BB) iteration is formula_9 where the step size formula_10 is either
[long BB step] formula_11, or
[short BB step] formula_12.
Barzilai-Borwein also applies to systems of equations formula_13 for formula_14 in which the Jacobian of formula_1 is positive-definite in the symmetric part, that is, formula_15 is necessarily positive.
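A minimal sketch of the iteration is given below, assuming a plain gradient step to obtain the second iterate and omitting any safeguarding; the quadratic test problem and the tolerance are illustrative assumptions, not part of the original method.

```python
import numpy as np

def barzilai_borwein(grad, x0, step0=1e-4, n_iter=100, long_step=True, tol=1e-10):
    """Barzilai-Borwein gradient descent (sketch, no safeguarding)."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - step0 * g_prev                  # ordinary gradient step for the first iterate
    for _ in range(n_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:              # numerically stationary: stop
            break
        dx, dg = x - x_prev, g - g_prev
        alpha = (np.dot(dx, dx) / np.dot(dx, dg) if long_step      # long BB step
                 else np.dot(dx, dg) / np.dot(dg, dg))             # short BB step
        x_prev, g_prev = x, g
        x = x - alpha * g                        # BB iteration
    return x

# Example: minimize the convex quadratic f(x) = 1/2 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
sol = barzilai_borwein(lambda x: A @ x - b, x0=np.zeros(2))
print(sol, np.linalg.solve(A, b))                # both should be close
```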
Derivation.
Despite its simplicity and optimality properties, Cauchy's classical steepest-descent method for unconstrained optimization often performs poorly. This has motivated many to propose alternate search directions, such as the conjugate gradient method. Jonathan Barzilai and Jonathan Borwein instead proposed new step sizes for the gradient by approximating the quasi-Newton method, creating a scalar approximation of the Hessian estimated from the finite differences between two evaluation points of the gradient, these being the most recent two iterates.
In a quasi-Newton iteration,
formula_16
where formula_17 is some approximation of the Jacobian matrix of formula_1 (i.e. Hessian of the objective function) which satisfies the secant equation formula_18. Barzilai and Borwein simplify formula_17 with a scalar formula_19, which usually cannot exactly satisfy the secant equation, but approximate it as formula_20. Approximations by two least-squares criteria are:
[1] Minimize formula_21 with respect to formula_22, yielding the long BB step, or
[2] Minimize formula_23 with respect to formula_22, yielding the short BB step.
Properties.
In one dimension, both BB step sizes are equal and coincide with the step of the classical secant method.
The long BB step size is the same as a linearized Cauchy step, i.e. the first estimate using a secant-method for the line search (also, for linear problems). The short BB step size is same as a linearized minimum-residual step. BB applies the step sizes upon the forward direction vector for the next iterate, instead of the prior direction vector as if for another line-search step.
Barzilai and Borwein proved their method converges "R"-superlinearly for quadratic minimization in two dimensions. Raydan demonstrates convergence in general for quadratic problems. Convergence is usually non-monotone, that is, neither the objective function nor the residual or gradient magnitude necessarily decrease with each iteration along a successful convergence toward the solution.
If formula_24 is a quadratic function with Hessian formula_25, formula_26 is the Rayleigh quotient of formula_25 by vector formula_27, and formula_28 is the Rayleigh quotient of formula_25 by vector formula_29 (here taking formula_30 as a solution to formula_31, more at Definite matrix).
Fletcher compared its computational performance to conjugate gradient (CG) methods, finding that CG tends to be faster for linear problems, but BB is often faster for non-linear problems versus applicable CG-based methods.
BB has low storage requirements, suitable for large systems with millions of elements in formula_32.
The ratio of the two step sizes satisfies formula_33 angle between formula_34 and formula_35.
Modifications and related methods.
Since being demonstrated by Raydan, BB is often applied with the non-monotone safeguarding strategy of Grippo, Lampariello, and Lucidi. This tolerates some rise of the objective, but excessive rise initiates a backtracking line search using smaller step sizes, to assure global convergence. Fletcher finds that allowing wider limits for non-monotonicity tends to result in more efficient convergence.
Others have identified a step size that is the geometric mean of the long and short BB step sizes, which exhibits similar properties.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f:\\mathbb{R}^n\\rightarrow\\mathbb{R}"
},
{
"math_id": 1,
"text": "g"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "g_{k-1}(x_{k-1})"
},
{
"math_id": 4,
"text": "g_{k}(x_{k})"
},
{
"math_id": 5,
"text": "x_{k}=x_{k-1}-\\alpha_{k-1} g_{k-1}"
},
{
"math_id": 6,
"text": "\\alpha_{k-1}"
},
{
"math_id": 7,
"text": "\\Delta x=x_k-x_{k-1}"
},
{
"math_id": 8,
"text": "\\Delta g=g_k-g_{k-1}"
},
{
"math_id": 9,
"text": "x_{k+1}=x_k-\\alpha _kg_k"
},
{
"math_id": 10,
"text": "\\alpha _k"
},
{
"math_id": 11,
"text": "\\alpha _k^{LONG}=\\frac{\\Delta x\\cdot\\Delta x}{\\Delta x\\cdot\\Delta g}"
},
{
"math_id": 12,
"text": "\\alpha _k^{SHORT}=\\frac{\\Delta x \\cdot\\Delta g}{\\Delta g \\cdot\\Delta g}"
},
{
"math_id": 13,
"text": "g(x)=0"
},
{
"math_id": 14,
"text": "g:\\mathbb{R}^n\\rightarrow \\mathbb{R}^n"
},
{
"math_id": 15,
"text": "\\Delta x \\cdot\\Delta g"
},
{
"math_id": 16,
"text": "x_{k+1}=x_k-B^{-1}g(x_k)"
},
{
"math_id": 17,
"text": "B"
},
{
"math_id": 18,
"text": "B_k \\Delta x_k = \\Delta g_k"
},
{
"math_id": 19,
"text": "1/\\alpha"
},
{
"math_id": 20,
"text": "\\frac{1}{\\alpha}\\Delta x\\approx \\Delta g"
},
{
"math_id": 21,
"text": " \\|\\Delta x/\\alpha-\\Delta g\\|^2"
},
{
"math_id": 22,
"text": "\\alpha"
},
{
"math_id": 23,
"text": " \\|\\Delta x-\\alpha\\Delta g\\|^2"
},
{
"math_id": 24,
"text": "f"
},
{
"math_id": 25,
"text": "A"
},
{
"math_id": 26,
"text": "1/\\alpha^{LONG}"
},
{
"math_id": 27,
"text": "\\Delta x"
},
{
"math_id": 28,
"text": "1/\\alpha^{SHORT}"
},
{
"math_id": 29,
"text": "\\sqrt{A}\\Delta x"
},
{
"math_id": 30,
"text": "\\sqrt A"
},
{
"math_id": 31,
"text": "(\\sqrt{A})^T\\sqrt{A}=A"
},
{
"math_id": 32,
"text": " x"
},
{
"math_id": 33,
"text": " \\frac{\\alpha^{SHORT}}{\\alpha^{LONG}}=cos^2("
},
{
"math_id": 34,
"text": " \\Delta x"
},
{
"math_id": 35,
"text": " \\Delta g)"
}
] |
https://en.wikipedia.org/wiki?curid=65453552
|
6545682
|
Honeycomb conjecture
|
The honeycomb conjecture states that a regular hexagonal grid or honeycomb has the least total perimeter of any subdivision of the plane into regions of equal area. The conjecture was proven in 1999 by mathematician Thomas C. Hales.
Theorem.
Let formula_0 be any system of smooth curves in formula_1, subdividing the plane into regions (connected components of the complement of formula_0) all of which are bounded and have unit area. Then, averaged over large disks in the plane, the average length of formula_0 per unit area is at least as large as for the hexagon tiling. The theorem applies even if the complement of formula_0 has additional components that are unbounded or whose area is not one; allowing these additional components cannot shorten formula_0. Formally, let formula_2 denote the disk of radius formula_3 centered at the origin, let formula_4 denote the total length of formula_5, and let formula_6 denote the total area of formula_2 covered by bounded unit-area components. (If these are the only components, then formula_7.) Then the theorem states that
formula_8
The value on the right hand side of the inequality is the limiting length per unit area of the hexagonal tiling.
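As a quick numerical illustration (not part of Hales's proof), the sketch below compares the boundary length per unit of tiled area for the three regular tilings of the plane by unit-area tiles, counting each shared edge once; the hexagonal tiling attains the fourth root of 12 that appears on the right-hand side of the inequality.

```python
import math

def edge_length_per_unit_area(n):
    """Edge length per unit of tiled area for a tiling by regular n-gons of
    unit area (n = 3, 4, 6 are the only such tilings). Each edge is shared
    by two tiles, so only half the polygon perimeter is counted."""
    perimeter = 2.0 * math.sqrt(n * math.tan(math.pi / n))   # perimeter of a unit-area regular n-gon
    return perimeter / 2.0

for n, name in [(3, "triangles"), (4, "squares"), (6, "hexagons")]:
    print(f"{name:9s}: {edge_length_per_unit_area(n):.6f}")
print("hexagon bound 12**(1/4) =", 12 ** 0.25)
```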
History.
The first record of the conjecture dates back to 36 BC, from Marcus Terentius Varro, but is often attributed to Pappus of Alexandria (c. 290 – c. 350).
In the 17th century, Jan Brożek used a similar theorem to argue why bees create hexagonal honeycombs. In 1943, László Fejes Tóth published a proof for a special case of the conjecture, in which each cell is required to be a convex polygon. The full conjecture was proven in 1999 by mathematician Thomas C. Hales, who mentions in his work that there is reason to believe that the conjecture may have been present in the minds of mathematicians before Varro.
It is also related to the densest circle packing of the plane, in which every circle is tangent to six other circles, which fill just over 90% of the area of the plane.
The case when the problem is restricted to a square grid was solved in 1989 by Jaigyoung Choe, who proved that the optimal figure is an irregular hexagon.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Gamma"
},
{
"math_id": 1,
"text": "\\mathbb{R}^2"
},
{
"math_id": 2,
"text": "B(0,r)"
},
{
"math_id": 3,
"text": "r"
},
{
"math_id": 4,
"text": "L_r"
},
{
"math_id": 5,
"text": "\\Gamma\\cap B(0,r)"
},
{
"math_id": 6,
"text": "A_r"
},
{
"math_id": 7,
"text": "A_r=\\pi r^2"
},
{
"math_id": 8,
"text": "\\limsup_{r\\to\\infty} \\frac{L_r}{A_r}\\ge\\sqrt[4]{12}."
}
] |
https://en.wikipedia.org/wiki?curid=6545682
|
654601
|
Tridecagon
|
Polygon with 13 edges
In geometry, a tridecagon or triskaidecagon or 13-gon is a thirteen-sided polygon.
Regular tridecagon.
A "regular tridecagon" is represented by Schläfli symbol {13}.
The measure of each internal angle of a regular tridecagon is approximately 152.308 degrees, and the area with side length "a" is given by
formula_0
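A short numerical check of these two values, using the interior-angle and area formulas above (an illustrative sketch only):

```python
import math

n = 13
interior_angle = (n - 2) * 180.0 / n                  # ~152.308 degrees
area_coefficient = n / 4.0 / math.tan(math.pi / n)    # area = coefficient * a**2
print(f"interior angle = {interior_angle:.3f} degrees")
print(f"area = {area_coefficient:.4f} * a**2")        # ~13.1858
```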
Construction.
As 13 is a Pierpont prime but not a Fermat prime, the regular tridecagon cannot be constructed using a compass and straightedge. However, it is constructible using neusis, or an angle trisector.
The following is an animation from a "neusis construction" of a regular tridecagon with radius of circumcircle formula_1 according to Andrew M. Gleason, based on the angle trisection by means of the Tomahawk (light blue).
An approximate construction of a regular tridecagon using straightedge and compass is shown here.
Another possible animation shows an approximate construction, also using straightedge and compass.
Up to the maximum precision of 15 decimal places, the absolute error is formula_4
Up to 13 decimal places, the absolute error is formula_7
Example to illustrate the error.
At a circumscribed circle of radius r = 1 billion km (a distance which would take light approximately 55 minutes to travel), the absolute error on the side length constructed would be less than 1 mm.
Symmetry.
The "regular tridecagon" has Dih13 symmetry, order 26. Since 13 is a prime number there is one subgroup with dihedral symmetry: Dih1, and 2 cyclic group symmetries: Z13, and Z1.
These 4 symmetries can be seen in 4 distinct symmetries on the tridecagon. John Conway labels these by a letter and group order. Full symmetry of the regular form is r26 and no symmetry is labeled a1. The dihedral symmetries are divided depending on whether they pass through vertices (d for diagonal) or edges (p for perpendiculars), and i when reflection lines pass through both edges and vertices. Cyclic symmetries in the middle column are labeled as g for their central gyration orders.
Each subgroup symmetry allows one or more degrees of freedom for irregular forms. Only the g13 subgroup has no degrees of freedom but can be seen as directed edges.
Numismatic use.
The regular tridecagon is used as the shape of the Czech 20 korun coin.
Related polygons.
A tridecagram is a 13-sided star polygon. There are 5 regular forms given by Schläfli symbols: {13/2}, {13/3}, {13/4}, {13/5}, and {13/6}. Since 13 is prime, none of the tridecagrams are compound figures.
Petrie polygons.
The regular tridecagon is the Petrie polygon of the 12-simplex:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A = \\frac{13}{4}a^2 \\cot \\frac{\\pi}{13} \\simeq 13.1858\\,a^2."
},
{
"math_id": 1,
"text": "\\overline{OA} = 12,"
},
{
"math_id": 2,
"text": " a = 0.478631328575115\\; [\\text{unit of length}]"
},
{
"math_id": 3,
"text": " a_{\\text{target}} = r \\cdot 2 \\cdot \\sin\\left(\\frac{180^\\circ}{13} \\right) = 0.478631328575115\\ldots\\; [\\text{unit of length}]"
},
{
"math_id": 4,
"text": " F_a = a - a_{\\text{target}} = 0.0\\; [\\text{unit of length}]"
},
{
"math_id": 5,
"text": " \\mu = 27.6923076923077^\\circ "
},
{
"math_id": 6,
"text": " \\mu_{\\text{target}} = \\left( \\frac{360^\\circ}{13}\\right) = 27.{\\overline{692307}}^\\circ "
},
{
"math_id": 7,
"text": "F_\\mu = \\mu - \\mu_{\\text{target}} = 0.0^\\circ "
}
] |
https://en.wikipedia.org/wiki?curid=654601
|
65467021
|
Lion algorithm
|
Lion algorithm (LA) is one among the bio-inspired (or nature-inspired) optimization algorithms that are mainly based on meta-heuristic principles. It was first introduced by B. R. Rajakumar in 2012 under the name Lion's Algorithm. It was further extended in 2014 to solve the system identification problem. This version was referred to as LA, which has been applied by many researchers to their optimization problems.
Inspiration from lion’s social behaviour.
Lions form a social system called a "pride", which consists of 1–3 pairs of lions. A pride of lions shares a common area known as a territory, in which a dominant lion is called the territorial lion. The territorial lion safeguards its territory from outside attackers, especially nomadic lions. This process is called territorial defense. It protects the cubs till they become sexually mature. The maturity period is about 2–4 years. The pride undergoes survival fights to protect its territory and the cubs from nomadic lions. Upon getting defeated by the nomadic lions, the dominating nomadic lion takes the role of territorial lion by killing or driving out the cubs of the pride. The lionesses of the pride then give birth to cubs through the new territorial lion. When the cubs of the pride mature and are considered to be stronger than the territorial lion, they take over the pride. This process is called territorial take-over. If territorial take-over happens, either the old territorial lion, which is considered to be a laggard, is driven out or it leaves the pride. The stronger lions and lionesses form the new pride and give birth to their own cubs.
Terminology.
In the LA, the terms associated with the lion's social system are mapped to the terminology of optimization problems. A few notable terms are related here.
Algorithm.
The steps involved in LA are given below:
Variants.
The LA has been further adapted to different problem areas. According to the characteristics of the problem area, significant amendments have been made to the processes and models used in the LA. Accordingly, diverse variants have been developed by researchers. They can be broadly grouped as hybrid LAs and non-hybrid LAs. Hybrid LAs are LAs that are amended by the principles of other meta-heuristics, whereas non-hybrid LAs incorporate other amendments to their operation that are felt to be essential to address the respective problem area.
Applications.
LA is applied in diverse engineering applications that range from network security, text mining, image processing and electrical systems to data mining and many more. A few of the notable applications are discussed here.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X^{male}"
},
{
"math_id": 1,
"text": "X^{female}"
},
{
"math_id": 2,
"text": "X_1^{nomad}"
},
{
"math_id": 3,
"text": "f(X^{male})"
},
{
"math_id": 4,
"text": "f(X^{female})"
},
{
"math_id": 5,
"text": "f(X_1^{nomad})"
},
{
"math_id": 6,
"text": "f^{ref}"
},
{
"math_id": 7,
"text": "N_g"
},
{
"math_id": 8,
"text": "X_{cub}^{male}"
},
{
"math_id": 9,
"text": "X_{cub}^{female}"
},
{
"math_id": 10,
"text": "age_{cub}"
},
{
"math_id": 11,
"text": "age_{cub}>age_{max}"
}
] |
https://en.wikipedia.org/wiki?curid=65467021
|
65470806
|
Leximin order
|
Concept in mathematics
In mathematics, leximin order is a total preorder on finite-dimensional vectors. A more accurate, but less common term is leximin preorder. The leximin order is particularly important in social choice theory and fair division.
Definition.
A vector x = ("x"1, ..., "x""n") is "leximin-larger" than a vector y = ("y"1, ..., "y""n") if one of the following holds:
Examples.
The vector (3,5,3) is leximin-larger than (4,2,4), since the smallest element in the former is 3 and in the latter is 2. The vector (4,2,4) is leximin-larger than (5,3,2), since the smallest elements in both are 2, but the second-smallest element in the former is 4 and in the latter is 3.
Vectors with the same multiset of elements are equivalent w.r.t. the leximin preorder, since they have the same smallest element, the same second-smallest element, etc. For example, the vectors (4,2,4) and (2,4,4) are leximin-equivalent (but both are leximin-larger than (2,4,2)).
Related order relations.
In the lexicographic order, the first comparison is between "x"1 and "y1", regardless of whether they are smallest in their vectors. The second comparison is between "x"2 and "y2", and so on.
For example, the vector (3,5,3) is lexicographically "smaller" than (4,2,4), since the first element in the former is 3 and in the latter it is 4. Similarly, (4,2,4) is lexicographically larger than (2,4,4).
The following algorithm can be used to compute whether x is "leximin-larger" than y:
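One standard way to carry out this comparison, sketched below under the assumption of equal-length real vectors, is to sort both vectors in increasing order and then compare the sorted vectors lexicographically; the test values reproduce the examples given above.

```python
def leximin_larger(x, y):
    """Return True if x is leximin-larger than y: sort both vectors in
    increasing order, then compare them lexicographically."""
    xs, ys = sorted(x), sorted(y)
    for a, b in zip(xs, ys):
        if a != b:
            return a > b
    return False        # equal multisets are leximin-equivalent

print(leximin_larger((3, 5, 3), (4, 2, 4)))   # True: smallest element 3 > 2
print(leximin_larger((4, 2, 4), (5, 3, 2)))   # True: second-smallest 4 > 3
print(leximin_larger((4, 2, 4), (2, 4, 4)))   # False: leximin-equivalent
```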
The leximax order is similar to the leximin order except that the first comparison is between the "largest elements"; the second comparison is between the "second-largest" elements; and so on.
Applications.
In social choice.
In social choice theory, particularly in fair division, the leximin order is one of the orders used to choose between alternatives. In a typical social choice problem, society has to choose among several alternatives (for example: several ways to allocate a set of resources). Each alternative induces a "utility profile" - a vector in which element "i" is the utility of agent "i" in the allocation. An alternative is called leximin-optimal if its utility-profile is (weakly) leximin-larger than the utility profile of all other alternatives.
For example, suppose there are three alternatives: x gives a utility of 2 to Alice and 4 to George; y gives a utility of 9 to Alice and 1 to George; and z gives a utility of 1 to Alice and 8 to George. Then alternative x is leximin-optimal, since its utility profile is (2,4) which is leximin-larger than that of y (9,1) and z (1,8). The leximin-optimal solution is always Pareto-efficient.
The leximin rule selects, from among all possible allocations, the leximin-optimal ones. It is often called the egalitarian rule; see that page for more information on its computation and applications. For particular applications of the leximin rule in fair division, see:
In multicriteria decision.
In "Multiple-criteria decision analysis" a decision has to be made, and there are several criteria on which the decision should be based (for example: cost, quality, speed, etc.). One way to decide is to assign, to each alternative, a vector of numbers representing its value in each of the criteria, and chooses the alternative whose vector is leximin-optimal.
The leximin-order is also used for Multi-objective optimization, for example, in optimal resource allocation, location problems, and matrix games.
It is also studied in the context of fuzzy constraint solving problems.
In flow networks.
The leximin order can be used as a rule for solving network flow problems. Given a flow network, a source "s", a sink "t", and a specified subset "E" of edges, a flow is called leximin-optimal (or decreasingly minimal) on "E" if it minimizes the largest flow on an edge of "E", subject to this minimizes the second-largest flow, and so on. There is a polynomial-time algorithm for computing a cheapest leximin-optimal integer-valued flow of a given flow amount. It is a possible way to define a "fair flow".
In game theory.
One kind of a solution to a cooperative game is the payoff-vector that minimizes the leximin vector of excess-values of coalitions, among all payoff-vectors that are efficient and individually-rational. This solution is called the nucleolus.
Representation.
A "representation" of an ordering on a set of vectors is a function "f" that assigns a single number to each vector, such that the ordering between the numbers is identical to the ordering between the vectors. That is, "f"(x) ≥ "f"(y) iff x is larger than y by that ordering. When the number of possible vectors is countable (e.g. when all vectors are integral and bounded), the leximin order can be represented by various functions, for example:
However, when the set of possible vectors is uncountable (e.g. real vectors), "no" function (whether continuous or not) can represent the leximin order. The same is true for the lexicographic order.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f(\\mathbf{x}) = - \\sum_{i=1}^n n^{-x_i}"
},
{
"math_id": 1,
"text": "f(\\mathbf{x}) = - \\sum_{i=1}^n x_i^{-q}"
},
{
"math_id": 2,
"text": "f(\\mathbf{x}) = \\sum_{i=1}^n w_i \\cdot (x^{\\uparrow})_i"
},
{
"math_id": 3,
"text": "\\mathbf{x^{\\uparrow}}"
},
{
"math_id": 4,
"text": "w_1\\gg w_2 \\gg \\cdots \\gg w_n"
}
] |
https://en.wikipedia.org/wiki?curid=65470806
|
65471441
|
Froissart bound
|
Constraint on particle cross sections
In particle physics the Froissart bound, or Froissart limit, is a generic constraint that the total scattering cross section of two colliding high-energy particles cannot increase faster than formula_0, with "c" a normalization constant and "s" the square of the center-of-mass energy ("s" is one of the three Mandelstam variables).
Further reading.
The Froissart bound on scholarpedia, by M. Froissart
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " c \\ln^2(s) "
}
] |
https://en.wikipedia.org/wiki?curid=65471441
|
65472779
|
Kovner–Besicovitch measure
|
In plane geometry the Kovner–Besicovitch measure is a number defined for any bounded convex set describing how close to being centrally symmetric it is. It is the fraction of the area of the set that can be covered by its largest centrally symmetric subset.
Properties.
This measure is one for a set that is centrally symmetric, and less than one for sets whose closure is not centrally symmetric. It is invariant under affine transformations of the plane. If formula_0 is the center of symmetry of the largest centrally-symmetric set within a given convex body formula_1, then the centrally-symmetric set itself is the intersection of formula_1 with its reflection across formula_0.
Minimizers.
The convex sets with the smallest possible Kovner–Besicovitch measure are the triangles, for which the measure is 2/3. The result that triangles are the minimizers of this measure is known as Kovner's theorem or the Kovner–Besicovitch theorem, and the inequality bounding the measure above 2/3 for all convex sets is the Kovner–Besicovitch inequality. The curve of constant width with the smallest possible Kovner–Besicovitch measure is the Reuleaux triangle.
Computational complexity.
The Kovner–Besicovitch measure of any given convex polygon with formula_2 vertices can be found in time formula_3 by determining a translation of the reflection of the polygon that has the largest possible overlap with the unreflected polygon.
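For the special case of a triangle, the extremal value 2/3 quoted above can be checked directly. The sketch below uses the third-party shapely library and assumes, for this illustration, that the optimal centre of symmetry of a triangle is its centroid; it intersects a triangle with its point reflection through that centre and reports the area ratio.

```python
from shapely.geometry import Polygon

# Intersect a triangle with its reflection through the centroid; the resulting
# hexagon has 2/3 of the triangle's area, matching the minimal value quoted above.
tri = Polygon([(0, 0), (1, 0), (0, 1)])
cx, cy = tri.centroid.x, tri.centroid.y
reflected = Polygon([(2 * cx - x, 2 * cy - y) for x, y in tri.exterior.coords[:-1]])
print(tri.intersection(reflected).area / tri.area)   # 0.666...
```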
History.
Branko Grünbaum writes that the Kovner–Besicovitch theorem was first published in Russian, in a 1935 textbook on the calculus of variations by Mikhail Lavrentyev and Lazar Lyusternik, where it was credited to Soviet mathematician and geophysicist S. S. Kovner. Additional proofs were given by Abram Samoilovitch Besicovitch and by István Fáry, who also proved that every minimizer of the Kovner–Besicovitch measure is a triangle.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "c"
},
{
"math_id": 1,
"text": "K"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "O(n\\log n)"
}
] |
https://en.wikipedia.org/wiki?curid=65472779
|
65472780
|
Estermann measure
|
In plane geometry the Estermann measure is a number defined for any bounded convex set describing how close to being centrally symmetric it is. It is the ratio of areas between the given set and its smallest centrally symmetric convex superset. It is one for a set that is centrally symmetric, and less than one for sets whose closure is not centrally symmetric. It is invariant under affine transformations of the plane.
Properties.
If formula_0 is the center of symmetry of the smallest centrally-symmetric set containing a given convex body formula_1, then the centrally-symmetric set itself is the convex hull of the union of formula_1 with its reflection across formula_0.
Minimizers.
The shapes of minimum Estermann measure are the triangles, for which this measure is 1/2. The curve of constant width with the smallest possible Estermann measure is the Reuleaux triangle.
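As with the Kovner–Besicovitch measure, the triangle case can be checked directly. The sketch below (using the third-party shapely library) takes the centre of symmetry at the midpoint of one side, an assumed optimal choice for this illustration; the convex hull of the triangle together with its reflection is then a parallelogram of twice the area, giving the ratio 1/2 quoted above.

```python
from shapely.geometry import Polygon

# Smallest centrally symmetric superset of a triangle, built by reflecting it
# through the midpoint of one side and taking the convex hull of the union;
# the area ratio is the Estermann measure 1/2.
tri = Polygon([(0, 0), (1, 0), (0, 1)])
cx, cy = 0.5, 0.0                      # midpoint of the side from (0,0) to (1,0)
reflected = Polygon([(2 * cx - x, 2 * cy - y) for x, y in tri.exterior.coords[:-1]])
hull = tri.union(reflected).convex_hull
print(tri.area / hull.area)            # 0.5
```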
History.
The Estermann measure is named after Theodor Estermann, who first proved in 1928 that this measure is always at least 1/2, and that a convex set with Estermann measure 1/2 must be a triangle. Subsequent proofs were given by Friedrich Wilhelm Levi, by István Fáry, and by Isaak Yaglom and Vladimir Boltyansky.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "c"
},
{
"math_id": 1,
"text": "K"
}
] |
https://en.wikipedia.org/wiki?curid=65472780
|
654760
|
132 (number)
|
Natural number
132 (one hundred [and] thirty-two) is the natural number following 131 and preceding 133. It is 11 dozens.
In mathematics.
132 is the sixth Catalan number. It has twelve divisors, one of which is 12 itself, making it the 20th refactorable number; it precedes the triangular number 136, the next refactorable number.
132 is an oblong number, being the product of 11 and 12; the sum of these two factors instead yields the 9th prime number, 23. 132 is also the 99th composite number.
Adding all two-digit permutation subsets of 132 yields the same number:
formula_0.
132 is the smallest number in decimal with this property, which is shared by 264, 396 and 35964 (see digit-reassembly number).
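The property can be verified with a few lines of code; the sketch below (in Python) checks it for 132, 264 and 396 by summing all two-digit numbers formed from ordered pairs of distinct digit positions.

```python
from itertools import permutations

# Sum of all two-digit numbers formed from pairs of distinct digit positions;
# for a three-digit number this equals 22 times the digit sum.
def two_digit_permutation_sum(n):
    digits = [int(c) for c in str(n)]
    return sum(10 * a + b for a, b in permutations(digits, 2))

for n in (132, 264, 396):
    print(n, two_digit_permutation_sum(n) == n)   # True for each
```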
The number of irreducible trees with fifteen vertices is 132.
In a formula_1 toroidal board in the "n"–Queens problem, 132 is the count of non-attacking queens, with respective indicator of 19 and multiplicity of 1444 = 38² (where 2 × 19 = 38).
The exceptional outer automorphism of symmetric group "S"6 uniquely maps vertices to factorizations and edges to partitions in the graph factors of the complete graph with six vertices (and fifteen edges) "K"6, which yields 132 blocks in Steiner system S(5,6,12).
In other fields.
132 is also:
|
[
{
"math_id": 0,
"text": "12 + 13 + 21 + 23 + 31 + 32 = 132"
},
{
"math_id": 1,
"text": "15 \\times 15"
}
] |
https://en.wikipedia.org/wiki?curid=654760
|
654771
|
135 (number)
|
Natural number
135 (one hundred [and] thirty-five) is the natural number following 134 and preceding 136.
In mathematics.
135 is the number of integer partitions of 14, and the number of rooted trees with 15 nodes and height at most 2. 135 is 5-smooth, since its prime factorization is formula_0, and a Harshad number in decimal.
Using its own digits, 135 in base-10 can be expressed in operations as the sum of consecutive powers of its digits, and as a sum-product number:
formula_1
formula_2
135 is the number of degrees in the internal angle of a regular octagon, and the number of nodes inside a regular nonagon formed by the intersections of its diagonals and sides. Also:
While the central angle of a regular octagon is 135 ÷ 3 = 45 degrees, 4.5 is the harmonic mean of all eight divisors of 135.
The aliquot sum of 135 is 105, which is the 14th triangular number, or equivalently the sum of the first fourteen non-zero positive integers.
There are 135 total "Krotenheerdt" "k"-uniform tilings for "k" < 8, with no other such tilings for higher "k".
There are a total of 135 primes between 1,000 and 2,000.
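Several of the statements above are easy to spot-check numerically; the sketch below (in Python, using the sympy library for the prime count) verifies the digit-power sum, the sum-product identity, and the count of primes between 1,000 and 2,000.

```python
from sympy import primerange

print(1**1 + 3**2 + 5**3)                  # 135: sum of consecutive powers of its digits
print((1 + 3 + 5) * (1 * 3 * 5))           # 135: sum-product number
print(len(list(primerange(1000, 2000))))   # 135 primes between 1,000 and 2,000
```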
The polynomial formula_3, which equals 135 at formula_4, plays an essential role in Apéry's proof that formula_5 is irrational.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "3^3 \\times 5"
},
{
"math_id": 1,
"text": "135 = 1^1 + 3^2 + 5^3"
},
{
"math_id": 2,
"text": "135 = (1 + 3 + 5)(1 \\times 3 \\times 5)"
},
{
"math_id": 3,
"text": "135 = 11 n^2 + 11 n + 3"
},
{
"math_id": 4,
"text": "n = 3"
},
{
"math_id": 5,
"text": "\\zeta(3)"
}
] |
https://en.wikipedia.org/wiki?curid=654771
|
65479599
|
Reilly formula
|
In the mathematical field of Riemannian geometry, the Reilly formula is an important identity, discovered by Robert Reilly in 1977. It says that, given a smooth Riemannian manifold-with-boundary ("M", "g") and a smooth function u on M, one has
formula_0
in which h is the second fundamental form of the boundary of M, H is its mean curvature, and ν is its unit normal vector. This is often used in combination with the observation
formula_1
with the consequence that
formula_2
This is particularly useful since one can now make use of the solvability of the Dirichlet problem for the Laplacian to make useful choices for u. Applications include eigenvalue estimates in spectral geometry and the study of submanifolds of constant mean curvature.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\int_{\\partial M}\\left(H\\Big(\\frac{\\partial u}{\\partial\\nu}\\Big)^2+2\\frac{\\partial u}{\\partial\\nu}\\Delta^{\\partial M}u+h\\big(\\nabla^{\\partial M}u,\\nabla^{\\partial M}u\\big)\\right)=\\int_M \\Big((\\Delta u)^2-|\\nabla\\nabla u|^2-\\operatorname{Ric}(\\nabla u,\\nabla u)\\Big),"
},
{
"math_id": 1,
"text": "|\\nabla\\nabla u|^2=\\frac{1}{n}(\\Delta u)^2+\\Big|\\nabla\\nabla u-\\frac{1}{n}(\\Delta u)g\\Big|^2\\geq\\frac{1}{n}(\\Delta u)^2,"
},
{
"math_id": 2,
"text": "\\int_{\\partial M}\\left(H\\Big(\\frac{\\partial u}{\\partial\\nu}\\Big)^2+2\\frac{\\partial u}{\\partial\\nu}\\Delta^{\\partial M}u+h\\big(\\nabla^{\\partial M}u,\\nabla^{\\partial M}u\\big)\\right)\\leq\\int_M \\Big(\\frac{n-1}{n}(\\Delta u)^2-\\operatorname{Ric}(\\nabla u,\\nabla u)\\Big)."
}
] |
https://en.wikipedia.org/wiki?curid=65479599
|
654808
|
145 (number)
|
Natural number
145 (one hundred [and] forty-five) is the natural number following 144 and preceding 146.
In mathematics.
145 is the sum of two squares in two different ways: formula_0. It is also the sum of the factorials of its digits, formula_1, making it a factorion. More generally, 145 can be written as the sum of a square and a factorial in three ways: formula_2.
In other fields.
145 is also:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "145 = 12^2 + 1^2 = 8^2 + 9^2"
},
{
"math_id": 1,
"text": "145 = 1! + 4! + 5!"
},
{
"math_id": 2,
"text": "145=5^2+5!=11^2+4!=12^2+1!"
}
] |
https://en.wikipedia.org/wiki?curid=654808
|
6548232
|
Sabin (unit)
|
Unit of sound absorption
In acoustics, the sabin (or more precisely the square foot sabin) is a unit of sound absorption, used for expressing the total effective absorption for the interior of a room. Sound absorption can be expressed in terms of the percentage of energy absorbed compared with the percentage reflected. It can also be expressed as a coefficient, with a value of 1.00 representing a material which absorbs 100% of the energy, and a value of 0.00 meaning all the sound is reflected.
The concept of a unit for absorption was first suggested by American physicist Wallace Clement Sabine, the founder of the field of architectural acoustics. He defined the "open-window unit" as the absorption of one square foot of open window. The unit was renamed the "sabin" after Sabine, and it is now defined as "the absorption due to unit area of a totally absorbent surface".
Sabins may be calculated with either imperial or metric units. One square foot of 100% absorbing material has a value of one imperial sabin, and 1 square metre of 100% absorbing material has a value of one metric sabin.
The total absorption A in metric sabins for a room containing many types of surface is given by
formula_0
where "S"1, "S"2, ..., "Sn" are the areas of the surfaces in the room (in m2), and "α"1, "α"2, ..., "αn" are the absorption coefficients of the surfaces.
Sabins are used in calculating the reverberation time of concert halls, lecture theatres, and recording studios.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A = S_1 \\alpha_1 + S_2 \\alpha_2 + \\ldots + S_n \\alpha_n = \\sum S_i \\alpha_i ,"
}
] |
https://en.wikipedia.org/wiki?curid=6548232
|
65493886
|
Physics of optical holography
|
Overview article
Optical holography is a technique which enables an optical wavefront to be recorded and later re-constructed. Holography is best known as a method of generating three-dimensional images but it also has a wide range of other applications.
A hologram is made by superimposing a second wavefront (normally called the reference beam) on the wavefront of interest, thereby generating an interference pattern which is recorded on a physical medium. When only the second wavefront illuminates the interference pattern, it is diffracted to recreate the original wavefront. Holograms can also be computer-generated by modelling the two wavefronts and adding them together digitally. The resulting digital image is then printed onto a suitable mask or film and illuminated by a suitable source to reconstruct the wavefront of interest.
Basic physics.
To understand the process, it is helpful to understand interference and diffraction. Interference occurs when one or more wavefronts are superimposed. Diffraction occurs when a wavefront encounters an object. The process of producing a holographic reconstruction is explained below purely in terms of interference and diffraction. It is somewhat simplified but is accurate enough to give an understanding of how the holographic process works.
For those unfamiliar with these concepts, it is worthwhile to read those articles before reading further in this article.
A simple hologram can be made by superimposing two plane waves from the same light source on a light recording medium such as a photographic emulsion. The two waves interfere, giving a straight-line fringe pattern whose intensity varies sinusoidally across the medium. The spacing of the fringe pattern is determined by the angle between the two waves, and by the wavelength of the light.
The recorded light pattern is a diffraction grating, which is a structure with a repeating pattern. A simple example is a metal plate with slits cut at regular intervals. A light wave that is incident on a grating is split into several waves; the direction of these diffracted waves is determined by the grating spacing and the wavelength of the light.
When the recorded light pattern is illuminated by only one of the plane waves used to create it, it can be shown that one of the diffracted waves is a re-construction of the other plane wave.
When a plane wave is added to a point source and the resulting interference pattern recorded, a point source hologram is produced. This is effectively a Fresnel zone plate which acts as a lens. If the plane wave is normally incident on the recording plate, three waves are diffracted by the plate
the original plane wave
a wave which appears to diverge from the point source - this is a reconstruction of the original point source wave
a wave which is focused to a point on the other side of the plate at the same distance as the original point source
This is known as an in-line hologram. Its usefulness is limited by the fact that all three waves are superimposed.
If the plane wave illuminates the recording plate at non-normal incidence, then the three diffracted waves are now as follows:
the original plane wave
a wave which appears to diverge from the original point source - this is the re-constructed wave
a wave which converges to a point which is deflected from the normal by twice the angle of incidence of the plane wave - this is known as the conjugate wave.
The three waves are now separated in space. This is known as an off-axis hologram. It was first developed by Leith and Upatnieks and was a vital step in enabling 3-d images to be produced with holography.
Theory underlying the holographic process.
General form.
The complex amplitude of a monochromatic electromagnetic wave can be represented by
formula_0
where A represents the amplitude of the vector, and formula_1 its phase.
To make a hologram, two waves are added together to give a total complex amplitude which can be represented as
formula_2
where R refers to the recording wavefront, known as the reference wavefront, and O refers to the wavefront being recorded. The dependence on r has been omitted for clarity.
The intensity of the combined beams is the average value of the complex amplitude times its complex conjugate:
formula_3
A recording medium is exposed to the two beams and then developed. Assume that the amplitude transmittance, formula_4 of the developed photographic plate is linearly related to the intensity of the interference pattern.
formula_5
where formula_6 is a constant background transmittance and formula_7 is a constant.
When the developed photographic plate is illuminated only by the reference beam, formula_8, the amplitude of the light transmitted through the plate, UH, is given by
formula_9
This can be split into three terms:
formula_10
formula_11 is a modified version of the reference wave. The first term is a reduced amplitude version, the second is also a reduced amplitude version if the reference wave amplitude is uniform. The third term produces a halo round the transmitted reference wave which is negligible when the amplitude of the object wave is much less than that of the reference wave
formula_12 is the reconstructed object wave which is identical to the original wave except that its amplitude is reduced. When the object wave is generated by light scattered from an object or objects, a virtual image of the object(s) is formed when a lens is placed in the reconstructed wave.
formula_13 is known as the conjugate wave. It is similar to the object wave but has the opposite curvature. When the object wave is generated by light scattered from an object or a series of objects, a real image is formed on the opposite side of the hologram plate to where the object was located and is deflected from the normal axis by twice the angle between the reference wave and the normal direction.
Two plane waves.
The reference and object wave are monochromatic plane waves of uniform amplitude
formula_14
where the vectors formula_15 are related to the unit vectors formula_16 which define the direction of travel and the wavelength formula_17 of the two waves by
formula_18
The expression for formula_19 in the previous section can be written as
formula_20
from which it can be seen that the transmission of the holographic recording varies sinusoidally, i.e. it is a diffraction grating, whose fringe spacing depends on the angle between the two waves and the orientation of the holographic plate.
The reconstructed wave is:
formula_21
which is a plane wave travelling in the direction formula_22, i.e. it is the reconstructed object wave with a modified amplitude.
The conjugate wave is:
formula_23
which is a plane wave travelling in the direction formula_24
This shows that the transmission of the photographic plate varies sinusoidally so it acts as a diffraction grating. If one of the waves is normally incident to the hologram plate, and the other is incident at an angle formula_25, the spacing of the fringes is formula_26 and the first order diffracted waves will be at angles given by formula_27. One of these is the reconstructed object plane wave, the other is the conjugate wave.
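The three-term decomposition derived above can be verified numerically for the plane-wave case. The sketch below (in Python with numpy; the wavelength, angles, amplitudes and transmittance parameters are illustrative assumptions, not values from the article) records the interference of two plane waves, "re-illuminates" the resulting transmittance with the reference wave alone, and checks that the transmitted field is exactly the sum of the attenuated reference wave, the reconstructed object wave and the conjugate wave.

```python
import numpy as np

lam = 0.5e-6                                   # wavelength in metres (illustrative)
x = np.linspace(0.0, 2e-3, 4096)               # strip of the recording plane
k_R = 2 * np.pi * np.sin(np.radians(10)) / lam # reference-beam spatial frequency
k_O = 2 * np.pi * np.sin(np.radians(25)) / lam # object-beam spatial frequency
A_R, A_O = 1.0, 0.2                            # uniform amplitudes
t0, beta = 0.5, -0.1                           # transmittance parameters

U_R = A_R * np.exp(1j * k_R * x)
U_O = A_O * np.exp(1j * k_O * x)

I = np.abs(U_R + U_O) ** 2                     # recorded interference pattern
t = t0 + beta * I                              # developed plate transmittance
U_H = t * U_R                                  # plate re-illuminated by U_R only

# Attenuated reference wave, reconstructed object wave (proportional to U_O),
# and conjugate wave travelling in the direction 2*k_R - k_O.
U_1 = (t0 + beta * (A_R**2 + A_O**2)) * U_R
U_2 = beta * A_R**2 * U_O
U_3 = beta * A_R**2 * A_O * np.exp(1j * (2 * k_R - k_O) * x)

print(np.max(np.abs(U_H - (U_1 + U_2 + U_3))))   # ~1e-16: the decomposition is exact
print(np.allclose(U_2 / U_O, beta * A_R**2))     # object wave recovered up to a constant
```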
Plane wave and point source.
Consider a point source located at the origin which illuminates a photographic plate located at a distance formula_28 from it and normal to the z axis. The phase difference between the source and a point (x, y, z) is given approximately by
formula_29
where formula_30
The complex amplitude of the object beam is then given by
formula_31.
The form of the interference pattern for various directions of incidence of the reference wave is shown on the right.
The reference plane wave is
formula_32
The reconstructed wave is described by the second term:
formula_33
The conjugate wave is described by the third term
formula_34
It emerges at twice the angle between the object and reference waves and converges to a point which is at the same distance from the plate as the original point source.
General object and a point source.
When light is scattered off a general object, the amplitude of the scattered light at any point formula_35 can be represented by
formula_36
where both the phase and the amplitude vary with formula_37.
A wave emitted from a point source located at a point formula_38 travels a distance formula_40 to reach a point formula_39, so that the phase change is formula_41 and its amplitude is given by
formula_42
where formula_43 is constant.
The reconstructed object wave is
formula_44
The conjugate wave is given by
formula_45
Recording a hologram.
Items required.
To make a hologram, the following are required:
These requirements are inter-related, and it is essential to understand the nature of optical interference to see this. Interference is the variation in intensity which can occur when two light waves are superimposed. The intensity of the maxima exceeds the sum of the individual intensities of the two beams, and the intensity at the minima is less than this and may be zero. The interference pattern maps the relative phase between the two waves, and any change in the relative phases causes the interference pattern to move across the field of view. If the relative phase of the two waves changes by one cycle, then the pattern drifts by one whole fringe. One phase cycle corresponds to a change in the relative distances travelled by the two beams of one wavelength. Since the wavelength of light is of the order of 0.5 μm, it can be seen that very small changes in the optical paths travelled by either of the beams in the holographic recording system lead to movement of the interference pattern which is the holographic recording. Such changes can be caused by relative movements of any of the optical components or the object itself, and also by local changes in air-temperature. It is essential that any such changes are significantly less than the wavelength of light if a clear well-defined recording of the interference is to be created.
The exposure time required to record the hologram depends on the laser power available, on the particular medium used and on the size and nature of the object(s) to be recorded, just as in conventional photography. This determines the stability requirements. Exposure times of several minutes are typical when using quite powerful gas lasers and silver halide emulsions. All the elements within the optical system have to be stable to fractions of a μm over that period. It is possible to make holograms of much less stable objects by using a pulsed laser which produces a large amount of energy in a very short time (μs or less). These systems have been used to produce holograms of live people. A holographic portrait of Dennis Gabor was produced in 1971 using a pulsed ruby laser.
Thus, the laser power, recording medium sensitivity, recording time and mechanical and thermal stability requirements are all interlinked. Generally, the smaller the object, the more compact the optical layout, so that the stability requirements are significantly less than when making holograms of large objects.
Another very important laser parameter is its coherence. This can be envisaged by considering a laser producing a sine wave whose frequency drifts over time; the coherence length can then be considered to be the distance over which it maintains a single frequency. This is important because two waves of different frequencies do not produce a stable interference pattern. The coherence length of the laser determines the depth of field which can be recorded in the scene. A good holography laser will typically have a coherence length of several meters, ample for a deep hologram.
The objects that form the scene must, in general, have optically rough surfaces so that they scatter light over a wide range of angles. A specularly reflecting (or shiny) surface reflects the light in only one direction at each point on its surface, so in general, most of the light will not be incident on the recording medium. A hologram of a shiny object can be made by locating it very close to the recording plate.
Hologram classifications.
There are three important properties of a hologram which are defined in this section. A given hologram will have one or other of each of these three properties, e.g. an amplitude modulated, thin, transmission hologram, or a phase modulated, volume, reflection hologram.
Amplitude and phase modulation holograms.
An amplitude modulation hologram is one where the amplitude of light diffracted by the hologram is proportional to the intensity of the recorded light. A straightforward example of this is photographic emulsion on a transparent substrate. The emulsion is exposed to the interference pattern, and is subsequently developed giving a transmittance which varies with the intensity of the pattern – the more light that fell on the plate at a given point, the darker the developed plate at that point.
A phase hologram is made by changing either the thickness or the refractive index of the material in proportion to the intensity of the holographic interference pattern. This is a phase grating and it can be shown that when such a plate is illuminated by the original reference beam, it reconstructs the original object wavefront. The efficiency (i.e., the fraction of the illuminated object beam which is converted into the reconstructed object beam) is greater for phase than for amplitude modulated holograms.
Thin holograms and thick (volume) holograms.
A thin hologram is one where the thickness of the recording medium is much less than the spacing of the interference fringes which make up the holographic recording. The thickness of a thin hologram can be as small as 60 nm, achieved using a thin film of the topological insulator material Sb2Te3. Ultrathin holograms hold the potential to be integrated with everyday consumer electronics like smartphones.
A thick or volume hologram is one where the thickness of the recording medium is greater than the spacing of the interference pattern. The recorded hologram is now a three dimensional structure, and it can be shown that incident light is diffracted by the grating only at a particular angle, known as the Bragg angle. If the hologram is illuminated with a light source incident at the original reference beam angle but with a broad spectrum of wavelengths, reconstruction occurs only at the wavelength of the original laser used. If the angle of illumination is changed, reconstruction will occur at a different wavelength and the colour of the re-constructed scene changes. A volume hologram effectively acts as a colour filter.
Transmission and reflection holograms.
A transmission hologram is one where the object and reference beams are incident on the recording medium from the same side. In practice, several more mirrors may be used to direct the beams in the required directions.
Normally, transmission holograms can only be reconstructed using a laser or a quasi-monochromatic source, but a particular type of transmission hologram, known as a rainbow hologram, can be viewed with white light.
In a reflection hologram, the object and reference beams are incident on the plate from opposite sides of the plate. The reconstructed object is then viewed from the same side of the plate as that at which the re-constructing beam is incident.
Only volume holograms can be used to make reflection holograms, as only a very low intensity diffracted beam would be reflected by a thin hologram.
Examples of full-color reflection holograms of mineral specimens:
Holographic recording media.
The recording medium has to convert the original interference pattern into an optical element that modifies either the amplitude or the phase of an incident light beam in proportion to the intensity of the original light field.
The recording medium should be able to resolve fully all the fringes arising from interference between object and reference beam. These fringe spacings can range from tens of micrometers to less than one micrometer, i.e. spatial frequencies ranging from a few hundred to several thousand cycles/mm, and ideally, the recording medium should have a response which is flat over this range. Photographic film has a very low or even zero response at the frequencies involved and cannot be used to make a hologram – for example, the resolution of Kodak's professional black and white film starts falling off at 20 lines/mm – it is unlikely that any reconstructed beam could be obtained using this film.
If the response is not flat over the range of spatial frequencies in the interference pattern, then the resolution of the reconstructed image may also be degraded.
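The spatial-frequency figures quoted above follow from the fringe-spacing relation of the theory section; the short sketch below (illustrative wavelength and angles, not values from the article) shows the resolution a medium must support for various angles between the object and reference beams.

```python
import numpy as np

# Fringe frequency for a reference beam at normal incidence and an object
# beam arriving at angle theta, using d = lambda / sin(theta).
lam_mm = 0.5e-3                         # 0.5 micrometre wavelength, in mm
for theta_deg in (2, 10, 30, 60, 85):
    cycles_per_mm = np.sin(np.radians(theta_deg)) / lam_mm
    print(f"{theta_deg:2d} deg -> {cycles_per_mm:6.0f} cycles/mm")
```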
The table below shows the principal materials used for holographic recording. Note that these do not include the materials used in the mass replication of an existing hologram, which are discussed in the next section. The resolution limit given in the table indicates the maximal number of interference lines/mm of the gratings. The required exposure, expressed as millijoules (mJ) of photon energy impacting the surface area, is for a long exposure time. Short exposure times (less than 1⁄1000 of a second, such as with a pulsed laser) require much higher exposure energies, due to reciprocity failure.
Copying and mass production.
An existing hologram can be copied by embossing or optically.
Most holographic recordings (e.g. bleached silver halide, photoresist, and photopolymers) have surface relief patterns which conform with the original illumination intensity. Embossing, which is similar to the method used to stamp out plastic discs from a master in audio recording, involves copying this surface relief pattern by impressing it onto another material.
The first step in the embossing process is to make a stamper by electrodeposition of nickel on the relief image recorded on the photoresist or photothermoplastic. When the nickel layer is thick enough, it is separated from the master hologram and mounted on a metal backing plate. The material used to make embossed copies consists of a polyester base film, a resin separation layer and a thermoplastic film constituting the holographic layer.
The embossing process can be carried out with a simple heated press. The bottom layer of the duplicating film (the thermoplastic layer) is heated above its softening point and pressed against the stamper, so that it takes up its shape. This shape is retained when the film is cooled and removed from the press. In order to permit the viewing of embossed holograms in reflection, an additional reflecting layer of aluminum is usually added on the hologram recording layer. This method is particularly suited to mass production.
The first book to feature a hologram on the front cover was "The Skook" (Warner Books, 1984) by JP Miller, featuring an illustration by Miller. The first record album cover to have a hologram was "UB44", produced in 1982 for the British group UB40 by Advanced Holographics in Loughborough. This featured a 5.75 inch square embossed hologram showing a 3D image of the letters UB carved out of polystyrene to look like stone and the numbers 44 hovering in space on the picture plane. On the inner sleeve was an explanation of the holographic process and instructions on how to light the hologram. "National Geographic" published the first magazine with a hologram cover in March 1984. Embossed holograms are used widely on credit cards, banknotes, and high value products for authentication purposes.
It is possible to print holograms directly into steel using a sheet explosive charge to create the required surface relief. The Royal Canadian Mint produces holographic gold and silver coinage through a complex stamping process.
A hologram can be copied optically by illuminating it with a laser beam, and locating a second hologram plate so that it is illuminated both by the reconstructed object beam, and the illuminating beam. Stability and coherence requirements are significantly reduced if the two plates are located very close together. An index matching fluid is often used between the plates to minimize spurious interference between the plates. Uniform illumination can be obtained by scanning point-by-point or with a beam shaped into a thin line.
Reconstructing and viewing the holographic image.
When the hologram plate is illuminated by a laser beam identical to the reference beam which was used to record the hologram, an exact reconstruction of the original object wavefront is obtained. An imaging system (an eye or a camera) located in the reconstructed beam 'sees' exactly the same scene as it would have done when viewing the original. When the lens is moved, the image changes in the same way as it would have done when the object was in place. If several objects were present when the hologram was recorded, the reconstructed objects move relative to one another, i.e. exhibit parallax, in the same way as the original objects would have done. It was very common in the early days of holography to use a chess board as the object and then take photographs at several different angles using the reconstructed light to show how the relative positions of the chess pieces appeared to change.
A holographic image can also be obtained using a different laser beam configuration to the original recording object beam, but the reconstructed image will not match the original exactly. When a laser is used to reconstruct the hologram, the image is speckled just as the original image will have been. This can be a major drawback in viewing a hologram.
White light consists of light of a wide range of wavelengths. Normally, if a hologram is illuminated by a white light source, each wavelength can be considered to generate its own holographic reconstruction, and these will vary in size, angle, and distance. These will be superimposed, and the summed image will wipe out any information about the original scene, as if superimposing a set of photographs of the same object of different sizes and orientations. However, a holographic image can be obtained using white light in specific circumstances, e.g. with volume holograms and rainbow holograms. The white light source used to view these holograms should always approximate to a point source, i.e. a spot light or the sun. An extended source (e.g. a fluorescent lamp) will not reconstruct a hologram since its light is incident at each point at a wide range of angles, giving multiple reconstructions which will "wipe" one another out.
White light reconstructions do not contain speckles.
Volume holograms.
A reflection-type volume hologram can give an acceptably clear reconstructed image using a white light source, as the hologram structure itself effectively filters out light of wavelengths outside a relatively narrow range. In theory, the result should be an image of approximately the same colour as the laser light used to make the hologram. In practice, with recording media that require chemical processing, there is typically a compaction of the structure due to the processing and a consequent colour shift to a shorter wavelength. Such a hologram recorded in a silver halide gelatin emulsion by red laser light will usually display a green image. Deliberate temporary alteration of the emulsion thickness before exposure, or permanent alteration after processing, has been used by artists to produce unusual colours and multicoloured effects.
Rainbow holograms.
In this method, parallax in the vertical plane is sacrificed to allow a bright, well-defined, gradiently colored reconstructed image to be obtained using white light. The rainbow holography recording process usually begins with a standard transmission hologram and copies it using a horizontal slit to eliminate vertical parallax in the output image. The viewer is therefore effectively viewing the holographic image through a narrow horizontal slit, but the slit has been expanded into a window by the same dispersion that would otherwise smear the entire image. Horizontal parallax information is preserved but movement in the vertical direction results in a color shift rather than altered vertical perspective. Because perspective effects are reproduced along one axis only, the subject will appear variously stretched or squashed when the hologram is not viewed at an optimum distance; this distortion may go unnoticed when there is not much depth, but can be severe when the distance of the subject from the plane of the hologram is very substantial. Stereopsis and horizontal motion parallax, two relatively powerful cues to depth, are preserved.
The holograms found on credit cards are examples of rainbow holograms. These are technically transmission holograms mounted onto a reflective surface like a metalized polyethylene terephthalate substrate commonly known as PET.
Fidelity of the reconstructed beam.
To replicate the original object beam exactly, the reconstructing reference beam must be identical to the original reference beam and the recording medium must be able to fully resolve the interference pattern formed between the object and reference beams. Exact reconstruction is required in holographic interferometry, where the holographically reconstructed wavefront interferes with the wavefront coming from the actual object, giving a null fringe if there has been no movement of the object and mapping out the displacement if the object has moved. This requires very precise relocation of the developed holographic plate.
Any change in the shape, orientation or wavelength of the reference beam gives rise to aberrations in the reconstructed image. For instance, the reconstructed image is magnified if the laser used to reconstruct the hologram has a longer wavelength than the original laser. Nonetheless, good reconstruction is obtained using a laser of a different wavelength, quasi-monochromatic light or white light, in the right circumstances.
Since each point in the object illuminates all of the hologram, the whole object can be reconstructed from a small part of the hologram. Thus, a hologram can be broken up into small pieces and each one will enable the whole of the original object to be imaged. One does, however, lose information and the spatial resolution gets worse as the size of the hologram is decreased – the image becomes "fuzzier". The field of view is also reduced, and the viewer will have to change position to see different parts of the scene.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{U}(\\mathbf{r}) = A \\exp{i [\\varphi(\\mathbf{r})]}"
},
{
"math_id": 1,
"text": " \\varphi (\\mathbf{r})"
},
{
"math_id": 2,
"text": "\\mathbf{U}_\\text{T} = \\mathbf{U}_\\text{R} + \\mathbf{U}_\\text{O} = A_\\text{R} \\exp {i (\\varphi_\\text{R})} + A_\\text{O} \\exp {i( \\varphi_\\text{O})}"
},
{
"math_id": 3,
"text": "\\mathbf{I}_T =\n \\left\\langle\\mathbf{U}_\\text{T} \\mathbf{U}_\\text{T}^*\\right\\rangle =\n A_\\text{R}^2 + A_\\text{O}^2 + A_\\text{R}A_\\text{O} \\exp{i(\\varphi_\\text{R} - \\varphi_\\text{O})} + A_\\text{R}A_\\text{O} \\exp{-i(\\varphi_\\text{R}- \\varphi_\\text{O})}\n"
},
{
"math_id": 4,
"text": "\\mathbf{t} "
},
{
"math_id": 5,
"text": "\\mathbf{t} = \\mathbf{t}_0 + \\beta \\left[ A_\\text{R}^2 + A_\\text{O}^2 + A_\\text{R}A_\\text{O} \\exp{i(\\varphi_\\text{R} - \\varphi_\\text{O})} + A_\\text{R}A_\\text{O} \\exp{-i(\\varphi_\\text{R} - \\varphi_\\text{O})}\\right]"
},
{
"math_id": 6,
"text": "\\mathbf{t}_0"
},
{
"math_id": 7,
"text": "\\beta"
},
{
"math_id": 8,
"text": "\\mathbf{U_R}"
},
{
"math_id": 9,
"text": "\\mathbf{U}_H =\n \\mathbf{t} A_\\text{R} \\exp {i \\varphi_\\text{R}} =\n \\left[ \\mathbf{t}_0 + \\beta \\left[ A_\\text{R}^2 + A_\\text{O}^2 + A_\\text{R}A_\\text{O} \\exp {i (\\varphi_\\text{R} - \\varphi_\\text{O})} + A_\\text{R}A_\\text{O} \\exp{-i (\\varphi_\\text{R} - \\varphi_\\text{O})}\\right]\\right] A_\\text{R} \\exp {i \\varphi_\\text{R}}\n"
},
{
"math_id": 10,
"text": "\\begin{align}\n \\mathbf{U}_1 &= \\left[\\mathbf{t}_0 + \\beta A_\\text{R}^2 + \\beta A_\\text{O}^2\\right] A_\\text{R} \\exp {i \\varphi_\\text{R}} \\\\\n \\mathbf{U}_2 &= \\beta A_\\text{R}^2 A_\\text{O} \\exp i\\varphi _\\text{O} \\\\\n \\mathbf{U}_3 &= \\beta A_\\text{R}^2 A_\\text{O} \\exp{i (2 \\varphi_\\text{R}- \\varphi_\\text{O})}\n\\end{align}"
},
{
"math_id": 11,
"text": "\\mathbf{U}_1"
},
{
"math_id": 12,
"text": " \\mathbf{U}_2"
},
{
"math_id": 13,
"text": " \\mathbf{U}_3"
},
{
"math_id": 14,
"text": "\\begin{align}\n \\mathbf{U_\\text{R}}(\\mathbf{r}) &= A _\\text{R} \\exp\\left[i \\mathbf{k_\\text{R}} \\cdot \\mathbf{r}\\right] \\\\\n \\mathbf{U_\\text{O}}(\\mathbf{r}) &= A _\\text{O} \\exp\\left[i \\mathbf{k_\\text{O}} \\cdot \\mathbf{r}\\right]\n\\end{align}"
},
{
"math_id": 15,
"text": "\\mathbf{k}"
},
{
"math_id": 16,
"text": "\\mathbf{n}"
},
{
"math_id": 17,
"text": " \\lambda "
},
{
"math_id": 18,
"text": "\\mathbf{k} = 2\\pi \\frac{\\mathbf{n}}{\\lambda}"
},
{
"math_id": 19,
"text": " \\mathbf t"
},
{
"math_id": 20,
"text": " \\mathbf{t} = \\mathbf{t}_0 + \\beta \\left[ A_\\text{O}^2 + A_\\text{R}^2+ A_\\text{O}A_\\text{R} \\cos { \\left([\\mathbf{k_O} - \\mathbf{k_R}] \\cdot \\mathbf{r}\\right)}\\right]"
},
{
"math_id": 21,
"text": " \\mathbf{U}_2 = \\beta A_\\text{R}^2 A_\\text{O} \\exp i (\\mathbf{k}_O \\cdot \\mathbf{r}) "
},
{
"math_id": 22,
"text": " \\mathbf{k} _O "
},
{
"math_id": 23,
"text": " \\mathbf{U}_3 = A_\\text{O}A_\\text{R} \\exp[i (2 \\mathbf{k}_R - \\mathbf{k}_O) \\cdot \\mathbf{r}]"
},
{
"math_id": 24,
"text": " 2\\mathbf{k} _R - \\mathbf{k} _O "
},
{
"math_id": 25,
"text": " \\theta "
},
{
"math_id": 26,
"text": " d = \\lambda / \\sin {\\theta} "
},
{
"math_id": 27,
"text": " \\sin {\\gamma} = \\pm \\lambda / d = \\sin {\\theta}"
},
{
"math_id": 28,
"text": "l"
},
{
"math_id": 29,
"text": " \\phi_O = \\frac {\\pi r^2}{\\lambda l} "
},
{
"math_id": 30,
"text": " r = \\sqrt{x^2 + y^2} "
},
{
"math_id": 31,
"text": " \\mathbf{U_O} = A_0 \\exp i{\\frac {\\pi r^2}{\\lambda l}} "
},
{
"math_id": 32,
"text": "\\mathbf{U _\\text{R}}(\\mathbf{r}) = A _\\text{R} \\exp[i \\mathbf{k _\\text{R}} \\cdot \\mathbf{r}] "
},
{
"math_id": 33,
"text": " \\mathbf{U}_2 = \\beta A_\\text{R}^2 A_\\text{O} \\exp i\\frac {\\pi r^2}{\\lambda l} "
},
{
"math_id": 34,
"text": " \\mathbf{U}_3 = A_\\text{O}A_\\text{R} \\exp i \\left(\\mathbf{k _\\text{R}} \\cdot \\mathbf{r}-{\\frac {\\pi r^2}{\\lambda l}} \\right)"
},
{
"math_id": 35,
"text": "(\\mathbf{r})"
},
{
"math_id": 36,
"text": "\\mathbf{U_\\text{O}}(\\mathbf{r}) = A_\\text{O} (\\mathbf{r}) \\exp{i [\\varphi _\\text{O}(\\mathbf{r})]}"
},
{
"math_id": 37,
"text": "\\mathbf{r}"
},
{
"math_id": 38,
"text": "\\mathbf{r}_\\text{R}"
},
{
"math_id": 39,
"text": " \\mathbf{r} "
},
{
"math_id": 40,
"text": " \\mathbf{r} - \\mathbf{r}_\\text{R}"
},
{
"math_id": 41,
"text": " \\phi_\\text{R} = \\frac {2 \\pi (\\mathbf{r} - \\mathbf{r}_\\text{R})}{\\lambda} "
},
{
"math_id": 42,
"text": "\\mathbf{U}_\\text{R}(\\mathbf{r}) = A_\\text{R} \\exp{ \\frac {2 \\pi i (\\mathbf{r} - \\mathbf{r}_\\text{R})}{\\lambda}}"
},
{
"math_id": 43,
"text": "A_\\text{R}"
},
{
"math_id": 44,
"text": " \\mathbf{U}_2 = \\beta A_\\text{R}^2 A_\\text{O}(\\mathbf{r}) \\exp{i [\\varphi _\\text{O}(\\mathbf{r})]} "
},
{
"math_id": 45,
"text": " \\mathbf{U}_3 = \\beta A_\\text{R} \\exp { \\frac {4 \\pi i (\\mathbf{r} - \\mathbf{r}_\\text{R})}{\\lambda}} A_\\text{O} (\\mathbf{r}) \\exp{i \\varphi _\\text{O}(\\mathbf{r})}"
}
] |
https://en.wikipedia.org/wiki?curid=65493886
|
65507105
|
Wielandt theorem
|
In mathematics, the Wielandt theorem characterizes the gamma function, defined for all complex numbers formula_0 for which formula_1 by
formula_2
as the only function formula_3 defined on the half-plane formula_4 such that:
formula_3 is holomorphic on formula_5;
formula_6;
formula_7 for all formula_8;
formula_3 is bounded on the strip formula_9.
This theorem is named after the mathematician Helmut Wielandt.
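The normalisation and the functional equation among these conditions are easy to spot-check numerically; a minimal sketch using scipy (illustrative only, with an arbitrarily chosen point in the half-plane):

```python
from scipy.special import gamma

print(gamma(1.0))                      # 1.0
z = 2.3 + 1.7j                         # an arbitrary point with Re z > 0
print(gamma(z + 1), z * gamma(z))      # equal, since Gamma(z + 1) = z * Gamma(z)
```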
|
[
{
"math_id": 0,
"text": "z"
},
{
"math_id": 1,
"text": "\\mathrm{Re}\\,z > 0"
},
{
"math_id": 2,
"text": "\\Gamma(z)=\\int_0^{+\\infty} t^{z-1} \\mathrm e^{-t}\\,\\mathrm dt,"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "H := \\{ z \\in \\Complex : \\operatorname{Re}\\,z > 0\\}"
},
{
"math_id": 5,
"text": "H"
},
{
"math_id": 6,
"text": "f(1)=1"
},
{
"math_id": 7,
"text": "f(z+1)=z\\,f(z)"
},
{
"math_id": 8,
"text": "z \\in H"
},
{
"math_id": 9,
"text": "\\{ z \\in \\Complex : 1 \\leq \\operatorname{Re}\\,z \\leq 2\\}"
}
] |
https://en.wikipedia.org/wiki?curid=65507105
|
65508940
|
Gauss curvature flow
|
In the mathematical fields of differential geometry and geometric analysis, the Gauss curvature flow is a geometric flow for oriented hypersurfaces of Riemannian manifolds. In the case of curves in a two-dimensional manifold, it is identical with the curve shortening flow. The mean curvature flow is a different geometric flow which also has the curve shortening flow as a special case.
Definition and well-posedness.
Let S be a smooth n-dimensional manifold and let ("M", "g") be a smooth Riemannian manifold of dimension "n" + 1. Given an immersion f of S into M together with a unit normal vector field along f, the second fundamental form of f can be viewed as a symmetric 2-tensor field on S. Via the first fundamental form, it can also be viewed as a (1,1)-tensor field on S, where it is known as the shape operator. The "Gaussian curvature" or "Gauss–Kronecker curvature" of f, denoted by K, can then be defined as the point-by-point determinant of the shape operator, or equivalently (relative to local coordinates) as the determinant of the second fundamental form divided by the determinant of the first fundamental form.
The equation defining the Gauss curvature flow is
formula_0
So a Gauss curvature flow consists of a smooth manifold S, a smooth Riemannian manifold M of dimension one larger, and a one-parameter family of immersions of S into M, together with a smooth unit normal vector field along each immersion, such that the above equation is satisfied.
The well-posedness of the Gauss curvature flow is settled if S is closed. Then, if n is greater than one, and if a given immersion, along which a smooth unit normal vector field has been chosen, has positive-definite second fundamental form, then there is a unique solution of the Gauss curvature flow with "initial data" f. If n is equal to one, so that one is in the setting of the curve shortening flow, the condition on the second fundamental form is unnecessary.
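As a simple illustration (an example chosen here, not one discussed in the article), for a round sphere of radius r in Euclidean 3-space the Gauss curvature is K = 1/r^2, and the flow reduces to an ODE for the radius, which can be integrated numerically and compared with the exact solution.

```python
from scipy.integrate import solve_ivp

# Gauss curvature flow of a round sphere in R^3: the inward normal speed is
# K = 1/r^2, so dr/dt = -1/r^2 and r(t) = (r0^3 - 3t)^(1/3); the sphere
# shrinks to a point at t = r0^3 / 3.
r0 = 1.0
sol = solve_ivp(lambda t, r: [-1.0 / r[0]**2], (0.0, 0.3), [r0],
                rtol=1e-9, atol=1e-12, dense_output=True)
t = 0.3
print(sol.sol(t)[0], (r0**3 - 3 * t) ** (1 / 3))   # numerical vs exact radius
```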
Convergence theorems.
Due to the existence and uniqueness theorem above, the Gauss curvature flow has essentially only been studied in the cases of curve shortening flow, and in higher dimensions for closed convex hypersurfaces. Regardless of dimension, it has been most widely studied in the case that ("M", "g") is the Euclidean space ℝ"n" + 1.
In the case of curve shortening flow, Michael Gage and Richard Hamilton showed that any convex embedding of the circle into the plane is deformed to a point in finite time, in such a way that rescalings of the curves in the flow smoothly approach a round circle. This was enhanced by a result of Matthew Grayson showing that any embedded circle in the plane is deformed into a convex embedding, at which point Gage and Hamilton's result applies. Proofs have since been found which do not treat the two cases of convexity and non-convexity separately. In the more general setting of a complete two-dimensional Riemannian manifold which has a certain convexity near infinity, Grayson proved the convergence to a closed geodesic or to a round point.
Kaising Tso applied the methods of Shiu-Yuen Cheng and Shing-Tung Yau's resolution of the Minkowski problem to study the higher-dimensional version of Gage and Hamilton's result. In particular, he cast the Gauss curvature flow as a parabolic Monge–Ampère equation for the support function of the hypersurfaces. He was able to show that the maximal time of existence is an explicit constant multiple of the volume enclosed by the initial hypersurface, and that each hypersurface in the flow is smooth and strictly convex, with diameter converging to zero as the time approaches its maximum.
In 1999, Ben Andrews succeeded in proving the well-known "Firey conjecture", showing that for convex surfaces in ℝ3, the surfaces in Tso's result could be rescaled to smoothly converge to a round sphere. The key of his proof was an application of the maximum principle to the quantity "H"2 − 4"K", showing that the largest size of the point-by-point difference of the two eigenvalues of the shape operator cannot be increasing in time. Previous results of Andrews for convex hypersurfaces of Euclidean space, as well as a Li–Yau Harnack inequality found by Bennett Chow, then applied to obtain uniform geometric control over the surfaces comprising the flow. The full convergence to the sphere made use of the Krylov–Safonov theorem.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{\\partial F}{\\partial t}=-K\\nu."
}
] |
https://en.wikipedia.org/wiki?curid=65508940
|
65513967
|
Danzer's configuration
|
In mathematics, Danzer's configuration is a self-dual configuration of 35 lines and 35 points, having 4 points on each line and 4 lines through each point. It is named after the German geometer Ludwig Danzer and was popularised by Branko Grünbaum. The Levi graph of the configuration is the Kronecker cover of the odd graph O4, and is isomorphic to the middle layer graph of the seven-dimensional hypercube graph Q7. The middle layer graph of an odd-dimensional hypercube graph Q2n+1(n,n+1) is a subgraph whose vertex set consists of all binary strings of length 2n + 1 that have exactly n or n + 1 entries equal to 1, with an edge between any two vertices for which the corresponding binary strings differ in exactly one bit. Every middle layer graph is Hamiltonian.
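The graph described above is small enough to construct and check by brute force; the sketch below (in Python) builds the middle layer graph of Q7 and confirms the counts expected for the Levi graph of the configuration: 35 + 35 vertices, each of degree 4.

```python
from itertools import combinations

# Vertices: 7-bit strings with exactly 3 or 4 ones (encoded as the sets of
# positions of the ones); edges: strings differing in exactly one bit.
n = 7
verts = [frozenset(c) for k in (3, 4) for c in combinations(range(n), k)]
edges = [(u, v) for u, v in combinations(verts, 2) if len(u ^ v) == 1]
degree = {v: 0 for v in verts}
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
print(len(verts), len(edges), set(degree.values()))   # 70 140 {4}
```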
Danzer's configuration DCD(4) is the fourth term of an infinite series of formula_0 configurations DCD(n), where DCD(1) is the trivial configuration (1₁), DCD(2) is the trilateral (3₂) and DCD(3) is the Desargues configuration (10₃). In later work, configurations DCD(n) were further generalized to the unbalanced formula_1 configuration DCD(n,d) by introducing a parameter d, with the connection DCD(n) = DCD(2n-1,n). DCD stands for Desargues-Cayley-Danzer. Each DCD(2n,d) configuration is a subconfiguration of
the formula_2 Clifford configuration. While each DCD(n,d) admits a realisation as a geometric point-line configuration, the Clifford configuration can only be realised as a point-circle configuration
and depicts Clifford's circle theorems.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " (\\tbinom {2n-1}{n}_n) "
},
{
"math_id": 1,
"text": " (\\tbinom {n}{d}_d, \\tbinom {n}{d-1}_{n-d+1}) "
},
{
"math_id": 2,
"text": " (2^{2n}_{2n+1})"
}
] |
https://en.wikipedia.org/wiki?curid=65513967
|
65517102
|
2 Chronicles 35
|
Second Book of Chronicles, chapter 35
2 Chronicles 35 is the thirty-fifth chapter of the Second Book of Chronicles in the Old Testament of the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book was compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). It contains the regnal accounts of Josiah the king of Judah.
Text.
This chapter was originally written in the Hebrew language and is divided into 27 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter can be divided into three sections:
Josiah Restores the Passover (35:1–19).
Unlike the hasty celebration in Hezekiah's time, the liturgy of Passover feast in Josiah's 18th year of reign is performed meticulously on the appointed day in Jerusalem (verse 1), referring to and , including the involvement of the Levites and musicians in the procedures.
"Now Josiah kept a Passover to the LORD in Jerusalem, and they slaughtered the Passover "lambs" on the fourteenth day of the first month."
"Then he said to the Levites who taught all Israel, who were holy to the LORD: “Put the holy ark in the house which Solomon the son of David, king of Israel, built. It shall no longer be a burden on your shoulders. Now serve the LORD your God and His people Israel."
Josiah's death (35:20–27).
The report in this section has been regarded by some commentaries as historically more reliable and with clearer explanation about the event than that in the Books of Kings. The description of Josiah's armor, his wounding, and his order to be taken to Jerusalem is quite similar to that of Ahab (, 34). Although the passage and the Talmud attribute the lamentations to Jeremiah, Mathys suggests that Zechariah 12:9–14 may be the one referred in verses 24b–25, as it seems to refer to Josiah's death.
"After all this, when Josiah had prepared the temple, Necho king of Egypt came up to fight against Carchemish by the Euphrates; and Josiah went out against him."
Verse 20.
The reference to Carchemish on the Euphrates (verse 20) uses similar wording as in . The Battle of Carchemish was eventually fought in 605 BCE, when the Babylonian and Median army led by Nebuchadnezzar II destroyed the combined Egyptian and Assyrian forces, ending the existence of the Assyrian empire and Egypt's significant role in the Ancient Near East.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=65517102
|
655183
|
Sphere-world
|
Mathematical thought experiment posed by Henri Poincaré
The idea of a sphere-world was constructed by French mathematician Henri Poincaré who, while pursuing his argument for conventionalism (see philosophy of space and time), offered a thought experiment about a sphere with strange properties.
The concept.
Poincaré asks us to imagine a sphere of radius "R". The temperature of the sphere decreases from its maximum at the center to absolute zero at its extremity such that a body’s temperature at a distance "r" from the center is proportional to formula_0.
In addition, all bodies have the same coefficient of dilatation, so every body shrinks and expands in similar proportion as it moves about the sphere. To finish the story, Poincaré states that the index of refraction will also vary with the distance "r", in inverse proportion to formula_0.
How will this world look to inhabitants of this sphere?
In many ways it will look "normal". Bodies will remain intact upon transfer from place to place, as well as seeming to remain the same size (the Spherians would shrink along with them). The geometry, on the other hand, would seem quite different. Suppose the inhabitants were to view rods believed to be rigid, or to measure distance with light rays: they would find that a geodesic is not a straight line, and that the ratio of a circle’s circumference to its radius is greater than formula_1.
These inhabitants would in fact determine that their universe is not ruled by Euclidean geometry, but instead by hyperbolic geometry.
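For comparison, the standard formula for the circumference of a circle of intrinsic radius r in a hyperbolic plane of constant curvature −1 (a fact from hyperbolic geometry, not derived in the article) shows how the ratio the inhabitants would measure exceeds 2π:

```python
import numpy as np

# Circumference-to-radius ratio of a hyperbolic circle: C = 2*pi*sinh(r),
# so C / r > 2*pi for every positive radius.
for r in (0.1, 1.0, 3.0):
    print(r, 2 * np.pi * np.sinh(r) / r)   # 6.29..., 7.38..., 20.98...
```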
Commentary.
This thought experiment is discussed in Roberto Torretti's book "Philosophy of Geometry from Riemann to Poincaré" and in Jeremy Gray's article "Epistemology of Geometry" in the Stanford Encyclopedia of Philosophy. This sphere-world is also described in Ian Stewart's book "Flatterland" (chapter 10, Platterland).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R^2-r^2"
},
{
"math_id": 1,
"text": "2\\pi"
}
] |
https://en.wikipedia.org/wiki?curid=655183
|
65524604
|
2 Chronicles 34
|
Second Book of Chronicles, chapter 34
2 Chronicles 34 is the thirty-fourth chapter of the Second Book of Chronicles in the Old Testament of the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book was compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). It contains the regnal accounts of Josiah the king of Judah.
Text.
This chapter was originally written in the Hebrew language and is divided into 33 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Josiah king of Judah (34:1–7).
While 2 Kings 22–23 records Josiah's deeds from the eighteenth year of his reign, the Chronicler noted that Josiah had already started to 'seek God' while still young (16 years old), but as he was not yet of age, the public measures he planned were carried out only in the twelfth year of his reign (when he was considered an adult at 20 years of age, verse 3). The inclusion in his reform of the area that used to belong to the former northern kingdom showed a legitimate control of the whole of Israel (cf. , 21, 33) and later in 35:17–18. The phrase 'he returned to Jerusalem' (cf. ) underlines the direct involvement of the king in the reform.
"Josiah was eight years old when he began to reign, and he reigned thirty-one years in Jerusalem."
" For in the eighth year of his reign, while he was yet young, he began to seek after the God of David his father: and in the twelfth year he began to purge Judah and Jerusalem from the high places, and the groves, and the carved images, and the molten images."
The Book of the Law found (34:8–21).
The collection of donations for the temple's improvement is described in more detail in verses 8–13 than in 2 Kings 22, with the collection of tithes from the entire population (cf. 2 Chronicles 24:5–9 and David's approach to the temple's construction in 1 Chronicles 29), emphasizing the co-operation of all inhabitants, including people from the north. The Levites had duties similar to those in 1 Chronicles 26. The account of the discovery of the Book of the Law (verses 14–33) is very similar to 2 Kings 22, with some minor differences, especially linking the finding of the book to the exemplary behavior of Josiah and his people. The Chronicles record that this is "the book of the law, which was written by Moses", so it was not only Deuteronomy but the entire Pentateuch. Therefore, Shaphan read 'from' it (cf. "read it" in 2 Kings 22) rather than 'all of it' before the king (cf. verse 18). The Chronicles clarify in verse 24 that it was 'all the curses that are written in the book', instead of 'all the words of the book' in 2 Kings 22, which refer to Deuteronomy 27–29 (and Leviticus 26).
"Now in the eighteenth year of his reign, when he had purged the land, and the house, he sent Shaphan the son of Azaliah, and Maaseiah the governor of the city, and Joah the son of Joahaz the recorder, to repair the house of the Lord his God."
"When they came to Hilkiah the high priest, they delivered the money that was brought into the house of God, which the Levites, the keepers of the door, had collected from the hand of Manasseh and Ephraim, and from all the remnant of Israel, and from all Judah and Benjamin, and from the inhabitants of Jerusalem."
"Then the king commanded Hilkiah, Ahikam the son of Shaphan, Abdon the son of Micah, Shaphan the scribe, and Asaiah a servant of the king, saying,"
Huldah prophesies disaster (34:22–28).
The prophetess Huldah pointed out the inevitability that the kingdom of Judah would suffer destruction because of the people's apostasy, although she showed support for Josiah's reforms and indicated that Josiah's righteousness would earn him a peaceful death before the catastrophe struck.
Josiah restores true worship (34:29–33).
In verse 30, 'the Levites' replaced 'the prophets' in 2 Kings 22, indicating the Chronicler's conviction that in that period the Levites had a role of announcing God's word, although the prophets still had their place of honour in the books of Chronicles. Verse 33 is an extremely shortened summary of 2 Kings 23:4–20, which, together with verses 3–7, shows two different forms of cleansing.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=65524604
|
6553354
|
Tightness of measures
|
Concept in measure theory
In mathematics, tightness is a concept in measure theory. The intuitive idea is that a given collection of measures does not "escape to infinity".
Definitions.
Let formula_0 be a Hausdorff space, and let formula_1 be a σ-algebra on formula_2 that contains the topology formula_3. (Thus, every open subset of formula_2 is a measurable set and formula_1 is at least as fine as the Borel σ-algebra on formula_2.) Let formula_4 be a collection of (possibly signed or complex) measures defined on formula_1. The collection formula_4 is called tight (or sometimes uniformly tight) if, for any formula_5, there is a compact subset formula_6 of formula_2 such that, for all measures formula_7,
formula_8
where formula_9 is the total variation measure of formula_10. Very often, the measures in question are probability measures, so the last part can be written as
formula_11
If a tight collection formula_4 consists of a single measure formula_10, then (depending upon the author) formula_10 may either be said to be a tight measure or to be an inner regular measure.
If formula_12 is an formula_2-valued random variable whose probability distribution on formula_2 is a tight measure then formula_12 is said to be a separable random variable or a Radon random variable.
Another equivalent criterion for the tightness of a collection formula_4 is sequential weak compactness. We say the family formula_4 of probability measures is sequentially weakly compact if, for every sequence formula_13 from the family, there is a subsequence of measures that converges weakly to some probability measure formula_10. It can be shown that a family of measures is tight if and only if it is sequentially weakly compact.
Examples.
Compact spaces.
If formula_2 is a metrizable compact space, then every collection of (possibly complex) measures on formula_2 is tight. This is not necessarily so for non-metrizable compact spaces. If we take formula_14 with its order topology, then there exists a measure formula_10 on it that is not inner regular. Therefore, the singleton formula_15 is not tight.
Polish spaces.
If formula_2 is a Polish space, then every probability measure on formula_2 is tight. Furthermore, by Prokhorov's theorem, a collection of probability measures on formula_2 is tight if and only if it is precompact in the topology of weak convergence.
A collection of point masses.
Consider the real line formula_16 with its usual Borel topology. Let formula_17 denote the Dirac measure, a unit mass at the point formula_18 in formula_16. The collection
formula_19
is not tight, since the compact subsets of formula_16 are precisely the closed and bounded subsets, and any such set, since it is bounded, has formula_20-measure zero for large enough formula_21. On the other hand, the collection
formula_22
is tight: the compact interval formula_23 will work as formula_6 for any formula_5. In general, a collection of Dirac delta measures on formula_24 is tight if, and only if, the collection of their supports is bounded.
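A small Python sketch of these two families makes the contrast concrete; taking the candidate compact sets to be intervals [−"M", "M"] is an illustrative simplification:

def mass_outside(x, M):
    # Mass that the Dirac measure delta_x assigns to the complement of [-M, M].
    return 0.0 if -M <= x <= M else 1.0

M = 10.0
# M1 = {delta_n : n a natural number}: whatever M is chosen, members with n > M put all their mass outside.
print(max(mass_outside(n, M) for n in range(1, 101)))          # 1.0, so M1 is not tight
# M2 = {delta_{1/n} : n a natural number}: the single compact interval [0, 1] already captures all mass.
print(max(mass_outside(1.0 / n, 1.0) for n in range(1, 101)))  # 0.0, consistent with tightness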
A collection of Gaussian measures.
Consider formula_21-dimensional Euclidean space formula_24 with its usual Borel topology and σ-algebra. Consider a collection of Gaussian measures
formula_25
where the measure formula_26 has expected value (mean) formula_27 and covariance matrix formula_28. Then the collection formula_29 is tight if, and only if, the collections formula_30 and formula_31 are both bounded.
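The boundedness criterion can be illustrated numerically in the one-dimensional case, where the escaping mass of each Gaussian outside an interval [−"M", "M"] can be computed from the standard normal distribution function. The particular families below are illustrative assumptions, not part of the general statement:

import math

def phi(z):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mass_outside(mean, var, M):
    # Mass of the Gaussian N(mean, var) lying outside the interval [-M, M].
    s = math.sqrt(var)
    return 1.0 - (phi((M - mean) / s) - phi((-M - mean) / s))

# Bounded means and variances: a single interval [-10, 10] leaves uniformly little mass outside.
bounded = [(math.sin(i), 1.0 + math.cos(i) ** 2) for i in range(50)]
print(max(mass_outside(m, v, 10.0) for m, v in bounded))   # essentially 0, consistent with tightness

# Means drifting to infinity: no fixed interval works, so the family is not tight.
drifting = [(float(i), 1.0) for i in range(50)]
print(max(mass_outside(m, v, 10.0) for m, v in drifting))  # close to 1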
Tightness and convergence.
Tightness is often a necessary criterion for proving the weak convergence of a sequence of probability measures, especially when the measure space has infinite dimension.
Exponential tightness.
A strengthening of tightness is the concept of exponential tightness, which has applications in large deviations theory. A family of probability measures formula_32 on a Hausdorff topological space formula_2 is said to be exponentially tight if, for any formula_5, there is a compact subset formula_6 of formula_2 such that
formula_33
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(X, T)"
},
{
"math_id": 1,
"text": "\\Sigma"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "T"
},
{
"math_id": 4,
"text": "M"
},
{
"math_id": 5,
"text": "\\varepsilon > 0"
},
{
"math_id": 6,
"text": "K_{\\varepsilon}"
},
{
"math_id": 7,
"text": "\\mu \\in M"
},
{
"math_id": 8,
"text": "|\\mu| (X \\setminus K_{\\varepsilon}) < \\varepsilon."
},
{
"math_id": 9,
"text": "|\\mu|"
},
{
"math_id": 10,
"text": "\\mu"
},
{
"math_id": 11,
"text": "\\mu (K_{\\varepsilon}) > 1 - \\varepsilon. \\,"
},
{
"math_id": 12,
"text": "Y"
},
{
"math_id": 13,
"text": "\\left\\{\\mu_n\\right\\}"
},
{
"math_id": 14,
"text": "[0,\\omega_1]"
},
{
"math_id": 15,
"text": "\\{\\mu\\}"
},
{
"math_id": 16,
"text": "\\mathbb{R}"
},
{
"math_id": 17,
"text": "\\delta_{x}"
},
{
"math_id": 18,
"text": "x"
},
{
"math_id": 19,
"text": "M_{1} := \\{ \\delta_{n} | n \\in \\mathbb{N} \\}"
},
{
"math_id": 20,
"text": "\\delta_{n}"
},
{
"math_id": 21,
"text": "n"
},
{
"math_id": 22,
"text": "M_{2} := \\{ \\delta_{1 / n} | n \\in \\mathbb{N} \\}"
},
{
"math_id": 23,
"text": "[0, 1]"
},
{
"math_id": 24,
"text": "\\mathbb{R}^{n}"
},
{
"math_id": 25,
"text": "\\Gamma = \\{ \\gamma_{i} | i \\in I \\},"
},
{
"math_id": 26,
"text": "\\gamma_{i}"
},
{
"math_id": 27,
"text": "m_{i} \\in \\mathbb{R}^{n}"
},
{
"math_id": 28,
"text": "C_{i} \\in \\mathbb{R}^{n \\times n}"
},
{
"math_id": 29,
"text": "\\Gamma"
},
{
"math_id": 30,
"text": "\\{ m_{i} | i \\in I \\} \\subseteq \\mathbb{R}^{n}"
},
{
"math_id": 31,
"text": "\\{ C_{i} | i \\in I \\} \\subseteq \\mathbb{R}^{n \\times n}"
},
{
"math_id": 32,
"text": "(\\mu_{\\delta})_{\\delta > 0}"
},
{
"math_id": 33,
"text": "\\limsup_{\\delta \\downarrow 0} \\delta \\log \\mu_{\\delta} (X \\setminus K_{\\varepsilon}) < - \\varepsilon."
}
] |
https://en.wikipedia.org/wiki?curid=6553354
|
65534930
|
Phylogenetic invariants
|
Phylogenetic invariants are polynomial relationships between the frequencies of various site patterns in an idealized DNA multiple sequence alignment. They have received substantial study in the field of biomathematics, and they can be used to choose among phylogenetic tree topologies in an empirical setting. The primary advantage of phylogenetic invariants relative to other methods of phylogenetic estimation, like maximum likelihood or Bayesian MCMC analyses, is that invariants can yield information about the tree without requiring the estimation of branch lengths or model parameters. The idea of using phylogenetic invariants was introduced independently by James Cavender and Joseph Felsenstein and by James A. Lake in 1987.
At this point the number of programs that allow empirical datasets to be analyzed using invariants is limited. However, phylogenetic invariants may provide solutions to other problems in phylogenetics and they represent an area of active research for that reason. Felsenstein stated it best when he said, "invariants are worth attention, not for what they do for us now, but what they might lead to in the future." (p. 390)
If we consider a multiple sequence alignment with "t" taxa and no gaps or missing data (i.e., an "idealized multiple sequence alignment"), there are 4"t" (4 raised to the power "t") possible site patterns. For example, there are 256 possible site patterns for four taxa ("f"AAAA, "f"AAAC, "f"AAAG, … "f"TTTT), which can be written as a vector. This site pattern frequency vector has 255 degrees of freedom because the frequencies must sum to one. However, any set of site pattern frequencies that resulted from some specific process of sequence evolution on a specific tree must obey many constraints, and therefore has many fewer degrees of freedom. Thus, there should be polynomials involving those frequencies that take on a value of zero if the DNA sequences were generated on a specific tree given a particular substitution model.
Invariants are formulas in the expected pattern frequencies, not the observed pattern frequencies. When they are computed using the observed pattern frequencies, we will usually find that they are not precisely zero even when the model and tree topology are correct. By testing whether such polynomials for various trees are 'nearly zero' when evaluated on the observed frequencies of patterns in real data sequences, one should be able to infer which tree best explains the data.
Some invariants are straightforward consequences of symmetries in the model of nucleotide substitution and they will take on a value of zero regardless of the underlying tree topology. For example, if we assume the Jukes-Cantor model of sequence evolution and a four-taxon tree we expect:
formula_0
This is a simple outgrowth of the fact that base frequencies are constrained to be equal under the Jukes-Cantor model. Thus, they are called "symmetry invariants". The equation shown above is only one of a large number of symmetry invariants for the Jukes-Cantor model; in fact, there are a total of 241 symmetry invariants for that model.
Symmetry invariants are non-phylogenetic in nature; they take on the expected value of zero regardless of the tree topology. However, it is possible to determine whether a particular multiple sequence alignment fits the Jukes-Cantor model of evolution (i.e., by testing whether the site patterns of the appropriate types are present in equal numbers). More general tests for the best-fitting model using invariants are also possible. For example, Kedzierska et al. 2012 used invariants to establish the best-fitting model from a specific set of candidate models.
The asterisk after the JC69, K80, and K81 models is used to emphasize the non-homogeneous nature of the models that can be examined using invariants. These non-homogeneous models include the commonly used continuous-time JC69, K80, and K81 models as submodels. The SSM (strand-specific model), also called the CS05 model, is a generalized non-homogeneous version of the HKY (Hasegawa-Kishino-Yano) model constrained to have equal distribution of the pairs of bases A,T and C,G at each node of the tree and no assumption regarding a stable base distribution. All models listed above are submodels of the general Markov model (GMM). The ability to perform tests using non-homogeneous models represents a major benefit of the invariants methods relative to the more commonly used maximum likelihood methods for phylogenetic model testing.
"Phylogenetic invariants", which are defined as the subset of invariants that take on a value of zero only when the sequences were (or were not) generated on a specific topology, are likely to be the most useful invariants for phylogenetic studies. .
Lake's linear invariants.
Lake's invariants (which he called "evolutionary parsimony") provide an excellent example of phylogenetic invariants. Lake's invariants involve quartets, two of which (the incorrect topologies) yield values of zero and one of which yields a value greater than zero. This can be used to construct a test based on following invariant relationship, which holds for the two incorrect trees when sites evolve under the Kimura two-parameter model of sequence evolution:
formula_1
The indices of these site pattern frequencies indicate the bases scored relative to the base in the first taxon (which we call taxon A). If base 1 is a purine, then base 2 is the other purine and bases 3 and 4 are the pyrimidines. If base 1 is a pyrimidine, then base 2 is the other pyrimidine and bases 3 and 4 are the purines.
We will call the three possible quartet trees TX [((A,B),(C,D)) in Newick format], TY [((A,C),(B,D))], and TZ [((A,D),(B,C))]. We can calculate three values from the data to identify the best topology given the data:
formula_2
formula_3
formula_4
Lake broke these values up into a "parsimony-like term" (formula_5 for TX) and a "background term" (formula_6 for TX) and suggests testing for deviation from zero by calculating formula_7 and performing a χ2 test with one degree of freedom. Similar χ2 tests can be performed for Y and Z. If one of the three values is significantly different from zero, the corresponding topology is the best estimate of phylogeny. The advantage of using Lake's invariants relative to maximum likelihood or neighbor joining of Kimura two-parameter distances is that the invariants should hold regardless of the model parameters, branch lengths, or patterns of among-sites rate heterogeneity.
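A minimal Python sketch of this test is shown below. It assumes the relevant site-pattern counts have already been tallied from an alignment using the purine/pyrimidine coding described above; the counts supplied here are hypothetical and only illustrate the arithmetic of the test:

def lake_test(n_1133, n_1233, n_1134, n_1234):
    # P is the "parsimony-like" term and B the "background" term for one topology.
    P = n_1133 + n_1234
    B = n_1233 + n_1134
    X = P - B                                          # Lake's linear invariant for this topology
    chi2 = (P - B) ** 2 / (P + B) if P + B else 0.0    # chi-squared statistic with 1 degree of freedom
    return X, chi2

print(lake_test(40, 12, 11, 9))   # hypothetical counts favouring this topology: X > 0, large chi2
print(lake_test(15, 14, 16, 13))  # hypothetical counts for an incorrect topology: X near 0, small chi2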
A classic study by John Huelsenbeck and David Hillis found that Lake's invariants converge on the true tree over all of the branch length space they examined when the underlying model of evolution is the Kimura two-parameter model. However, they also found that Lake's invariants are very inefficient (large amounts of data are necessary to converge on the correct tree). This inefficiency has caused most empiricists to abandon the use of Lake's invariants. Also, because Lake's invariants are based on the Kimura two-parameter model, phylogenetic estimation using them may not yield the true tree when the process that generated the data strongly violates that model.
Modern approaches using phylogenetic invariants.
The low efficiency of Lake's invariants reflects the fact that they use a limited set of generators for the phylogenetic invariants. Casanellas et al. introduced methods to derive a much larger set of generators for DNA data, and this has led to the development of invariants methods that are as efficient as maximum likelihood methods. Several of these methods have implementations that are practical for analyses of empirical datasets.
Eriksson proposed an invariants method for the general Markov model based on singular value decomposition (SVD) of matrices generated by "flattening" the nucleotides associated with each of the leaves (i.e., the site pattern frequency spectrum). Different flattening matrices are produced for each topology. However, comparisons of the original Eriksson SVD method (ErikSVD) to neighbor joining and the maximum likelihood approach implemented in the PHYLIP program dnaml were mixed; ErikSVD underperformed the other two methods when used with simulated data but it appeared to perform better than dnaml when applied to an empirical mammalian dataset based on an early release of data from the ENCODE project. The original ErikSVD method was improved by Fernández-Sánchez and Casanellas, who proposed a normalization they called Erik+2. The original ErikSVD method is statistically consistent (it converges on the true tree as the empirical distribution approaches the theoretical distribution); the Erik+2 normalization improves the performance of the method given finite datasets. It has been implemented in the software package PAUP* as an option for the SVDquartets method.
"Squangles" (stochastic quartet tangles) represents another example of an invariants method hat has been implemented in software package that is practical to be used with empirical datasets. Squangles permit the choice among the three possible quartets assuming that DNA sequences have evolved under the general Markov model; the quartets can then be assembled using a supertree method. There are three squangles that are useful for differentiating among quartets, which can be denoted as "q"1(f), "q"2(f), and "q"3(f) (f is a 256 element vector containing the site frequency spectrum). Each "q" has 66,744 terms and together they satisfy the linear relation "q"1 + "q"2 + "q"3 = 0 (i.e., up to linear dependence there are only two "q" values). Each possible quartet has different expected values for "q"1, "q"2, and "q"3:
The expected values "q"1, "q"2, and "q"3 are all zero on the star topology (a quartet with an internal branch length of zero). For practicality, Holland et al. used least squares to solve for the "q" values. Empirical tests of the squangles method have been limited but they appear to be promising.
|
[
{
"math_id": 0,
"text": "f_{ACAT}-f_{CGCA}=0"
},
{
"math_id": 1,
"text": "f_{1133}+f_{1234}=f_{1233}+f_{1134}"
},
{
"math_id": 2,
"text": "X=N_{1133}-N_{1233}-N_{1134}+N_{1234}"
},
{
"math_id": 3,
"text": "Y=N_{1313}-N_{132 3}-N_{1314}+N_{1324}"
},
{
"math_id": 4,
"text": "Z=N_{1331}-N_{1332}-N_{1341}+N_{1342}"
},
{
"math_id": 5,
"text": "P=N_{1133}+N_{1234}"
},
{
"math_id": 6,
"text": "B=N_{1233}+N_{1134}"
},
{
"math_id": 7,
"text": "\\chi^2=(P-B)^2/(P+B)"
}
] |
https://en.wikipedia.org/wiki?curid=65534930
|
6553549
|
Prokhorov's theorem
|
Relates tightness of measures to relative compactness in the space of probability measures
In measure theory Prokhorov's theorem relates tightness of measures to relative compactness (and hence weak convergence) in the space of probability measures. It is credited to the Soviet mathematician Yuri Vasilyevich Prokhorov, who considered probability measures on complete separable metric spaces. The term "Prokhorov’s theorem" is also applied to later generalizations to either the direct or the inverse statements.
Statement.
Let formula_0 be a separable metric space.
Let formula_1 denote the collection of all probability measures defined on formula_2 (with its Borel σ-algebra).
Theorem.
Corollaries.
For Euclidean spaces we have that:
Extension.
Prokhorov's theorem can be extended to consider complex measures or finite signed measures.
Theorem:
Suppose that formula_5 is a complete separable metric space and formula_18 is a family of Borel complex measures on formula_2. The following statements are equivalent:
Comments.
Since Prokhorov's theorem expresses tightness in terms of compactness, the Arzelà–Ascoli theorem is often used to substitute for compactness: in function spaces, this leads to a characterization of tightness in terms of the modulus of continuity or an appropriate analogue—see tightness in classical Wiener space and tightness in Skorokhod space.
There are several deep and non-trivial extensions to Prokhorov's theorem. However, those results do not overshadow the importance and the relevance to applications of the original result.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(S, \\rho)"
},
{
"math_id": 1,
"text": "\\mathcal{P}(S)"
},
{
"math_id": 2,
"text": "S"
},
{
"math_id": 3,
"text": "K\\subset \\mathcal{P}(S)"
},
{
"math_id": 4,
"text": "K"
},
{
"math_id": 5,
"text": "(S,\\rho)"
},
{
"math_id": 6,
"text": "d_0"
},
{
"math_id": 7,
"text": " K\\subset \\mathcal{P}(S)"
},
{
"math_id": 8,
"text": " K"
},
{
"math_id": 9,
"text": "(\\mathcal{P}(S),d_0)"
},
{
"math_id": 10,
"text": " (\\mu_n)"
},
{
"math_id": 11,
"text": "\\mathcal{P}(\\mathbb{R}^m)"
},
{
"math_id": 12,
"text": "m"
},
{
"math_id": 13,
"text": "(\\mu_{n_k})"
},
{
"math_id": 14,
"text": "\\mu\\in\\mathcal{P}(\\mathbb{R}^m)"
},
{
"math_id": 15,
"text": "\\mu_{n_k}"
},
{
"math_id": 16,
"text": "\\mu"
},
{
"math_id": 17,
"text": "(\\mu_n)"
},
{
"math_id": 18,
"text": "\\Pi"
},
{
"math_id": 19,
"text": "\\{\\mu_n\\}\\subset\\Pi"
}
] |
https://en.wikipedia.org/wiki?curid=6553549
|
65555540
|
Saxon I TV
|
The Saxon class I Tformula_0 was a class of 19 German 0-4-4-0 Meyer tank locomotives built for the Royal Saxon State Railways (K.Sä.St.E.) for service on the Windbergbahn. The Deutsche Reichsbahn assigned them to Class 98.0.
History.
Near Dresden, the Royal Saxon State Railways had the "Windbergbahn", a branch line primarily serving coal traffic, which, in addition to a steep incline, also had a very tight minimum curve radius. At the turn of the century, the performance of the previously used locomotives of the VII T class was no longer sufficient. The reasons for this were the onset of excursion traffic as well as increasing transport volumes in coal transport. Between 1910 and 1914, the Sächsische Maschinenfabrik delivered a total of 19 locomotives in 3 lots, which were largely similar in their design to the tried and tested narrow-gauge locomotives of the IV K class. The construction lots differed in terms of their service weights and external design. They were popularly known as "Windberglok" or "Cross spider".
Three locomotives were lost in World War I; the Deutsche Reichsbahn took over the remaining 15 locomotives in 1920 and gave them the numbers 98 001 to 98 015. Like all locomotives of the 98 series, the machines were thus classified as local railway () locomotives.
In 1940 the Reichsbahn took over another locomotive of this type that had been delivered to the Oberhohndorf-Reinsdorf Coal Railway and gave it the road number 98 015 (second); the first 98 015 having been retired.
All the remaining locomotives survived World War II. Two locomotives were badly damaged in the air raid on 13 February 1945, but were rebuilt. They continued to be used on their main line in passenger and freight traffic. Between 1952 and 1962 they were used double-heading uranium ore block trains to Dresden-Gittersee. The trains consisted of 10 wagons with a capacity of 20 tons each. The eight remaining locomotives transported 560,000 tons of uranium ore to Gittersee every year. It was only with the introduction of locomotives of the DR Class V 60 (later series 106, now 346) with wheel flange lubrication that the locomotives could be replaced on the winding route at the end of the 1960s. The last class 98.0 was retired in 1968.
The 98 001 (ex K.SäSt.E. 1394) has been preserved and is part of the holdings of the Dresden Transport Museum. It is currently on loan at the Industrial Museum in Chemnitz.
Technical features.
The locomotives had a two-ring boiler with a Crampton firebox. Two non-lifting Friedmann injectors were used to feed the boiler. From 1914, those of the Winzer type were also used.
The steam circuit was designed as a four-cylinder compound drive with Walschaerts valve gear (Heusinger) and flat slide valves. The smaller high-pressure cylinders were on the front, the larger low-pressure cylinders on the rear bogie. The bogies were connected by a coupling iron in order to reduce any counter-rotating movements.
The water supply was housed in side tanks, the coal in a bunker behind the driver's cab.
The locomotives were factory-fitted with a Westinghouse air brake, supplemented by a counterweight brake. As special equipment, they were provided with a Latowski-type steam-driven bell.
|
[
{
"math_id": 0,
"text": "\\textstyle \\mathfrak{V}"
}
] |
https://en.wikipedia.org/wiki?curid=65555540
|
6556
|
Coprime integers
|
Two numbers without shared prime factors
In number theory, two integers a and b are coprime, relatively prime or mutually prime if the only positive integer that is a divisor of both of them is 1. Consequently, any prime number that divides a does not divide b, and vice versa. This is equivalent to their greatest common divisor (GCD) being 1. One says also a "is prime to" b or a "is coprime with" b.
The numbers 8 and 9 are coprime, despite the fact that neither—considered individually—is a prime number, since 1 is their only common divisor. On the other hand, 6 and 9 are not coprime, because they are both divisible by 3. The numerator and denominator of a reduced fraction are coprime, by definition.
Notation and testing.
When the integers a and b are coprime, the standard way of expressing this fact in mathematical notation is to indicate that their greatest common divisor is one, by the formula gcd("a", "b") = 1 or ("a", "b") = 1. In their 1989 textbook "Concrete Mathematics", Ronald Graham, Donald Knuth, and Oren Patashnik proposed an alternative notation formula_0 to indicate that a and b are relatively prime and that the term "prime" be used instead of coprime (as in a is "prime" to b).
A fast way to determine whether two numbers are coprime is given by the Euclidean algorithm and its faster variants such as binary GCD algorithm or Lehmer's GCD algorithm.
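For example, in Python the test reduces to a single call to the standard-library greatest common divisor:

from math import gcd

def coprime(a: int, b: int) -> bool:
    # a and b are coprime exactly when their greatest common divisor is 1.
    return gcd(a, b) == 1

print(coprime(8, 9))   # True: 1 is their only common positive divisor
print(coprime(6, 9))   # False: both are divisible by 3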
The number of integers coprime with a positive integer n, between 1 and n, is given by Euler's totient function, also known as Euler's phi function, "φ"("n").
A set of integers can also be called coprime if its elements share no common positive factor except 1. A stronger condition on a set of integers is pairwise coprime, which means that a and b are coprime for every pair ("a", "b") of different integers in the set. The set {2, 3, 4} is coprime, but it is not pairwise coprime since 2 and 4 are not relatively prime.
Properties.
The numbers 1 and −1 are the only integers coprime with every integer, and they are the only integers that are coprime with 0.
A number of conditions are equivalent to a and b being coprime:
No prime number divides both a and b.
There exist integers x and y such that "ax" + "by" = 1 (Bézout's identity).
The integer b has a multiplicative inverse modulo a; that is, there exists an integer y such that "by" ≡ 1 (mod "a").
As a consequence of the third point, if a and b are coprime and "br" ≡ "bs" (mod "a"), then "r" ≡ "s" (mod "a"). That is, we may "divide by b" when working modulo a. Furthermore, if "b"1, "b"2 are both coprime with a, then so is their product "b"1"b"2 (i.e., modulo a it is a product of invertible elements, and therefore invertible); this also follows from the first point by Euclid's lemma, which states that if a prime number p divides a product bc, then p divides at least one of the factors b, c.
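The invertibility of b modulo a can be checked directly; the following Python snippet uses the modular inverse provided by the built-in pow function (available for this purpose in Python 3.8 and later), with illustrative numbers:

a, b = 35, 12                  # coprime, so b is invertible modulo a
inv_b = pow(b, -1, a)          # modular inverse; raises ValueError when gcd(a, b) != 1
print(inv_b, (b * inv_b) % a)  # 3 1, since 12 * 3 = 36 = 1 (mod 35)
# Multiplying a congruence b*r = b*s (mod a) by inv_b cancels b, giving r = s (mod a).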
As a consequence of the first point, if a and b are coprime, then so are any powers ak and bm.
If a and b are coprime and a divides the product bc, then a divides c. This can be viewed as a generalization of Euclid's lemma.
The two integers a and b are coprime if and only if the point with coordinates ("a", "b") in a Cartesian coordinate system would be "visible" via an unobstructed line of sight from the origin (0, 0), in the sense that there is no point with integer coordinates anywhere on the line segment between the origin and ("a", "b"). (See figure 1.)
In a sense that can be made precise, the probability that two randomly chosen integers are coprime is 6/"π"2, which is about 61% (see below).
Two natural numbers a and b are coprime if and only if the numbers 2"a" – 1 and 2"b" – 1 are coprime. As a generalization of this, following easily from the Euclidean algorithm in base "n" > 1:
formula_1
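The identity is easy to check numerically for small bases and exponents, for example with the following Python snippet (the ranges chosen are arbitrary):

from math import gcd

for n in (2, 3, 10):
    for a in range(1, 8):
        for b in range(1, 8):
            # gcd(n^a - 1, n^b - 1) should equal n^gcd(a, b) - 1
            assert gcd(n**a - 1, n**b - 1) == n**gcd(a, b) - 1
print("identity holds for the sampled values")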
Coprimality in sets.
A set of integers formula_2 can also be called "coprime" or "setwise coprime" if the greatest common divisor of all the elements of the set is 1. For example, the integers 6, 10, 15 are coprime because 1 is the only positive integer that divides all of them.
If every pair in a set of integers is coprime, then the set is said to be "pairwise coprime" (or "pairwise relatively prime", "mutually coprime" or "mutually relatively prime"). Pairwise coprimality is a stronger condition than setwise coprimality; every pairwise coprime finite set is also setwise coprime, but the reverse is not true. For example, the integers 4, 5, 6 are (setwise) coprime (because the only positive integer dividing "all" of them is 1), but they are not "pairwise" coprime (because gcd(4, 6) = 2).
The concept of pairwise coprimality is important as a hypothesis in many results in number theory, such as the Chinese remainder theorem.
It is possible for an infinite set of integers to be pairwise coprime. Notable examples include the set of all prime numbers, the set of elements in Sylvester's sequence, and the set of all Fermat numbers.
Coprimality in ring ideals.
Two ideals A and B in a commutative ring R are called coprime (or "comaximal") if formula_3 This generalizes Bézout's identity: with this definition, two principal ideals (a) and (b) in the ring of integers are coprime if and only if a and b are coprime. If the ideals A and B of R are coprime, then formula_4 furthermore, if C is a third ideal such that A contains BC, then A contains C. The Chinese remainder theorem can be generalized to any commutative ring, using coprime ideals.
Probability of coprimality.
Given two randomly chosen integers a and b, it is reasonable to ask how likely it is that a and b are coprime. In this determination, it is convenient to use the characterization that a and b are coprime if and only if no prime number divides both of them (see Fundamental theorem of arithmetic).
Informally, the probability that any number is divisible by a prime (or in fact any integer) p is 1/"p"; for example, every 7th integer is divisible by 7. Hence the probability that two numbers are both divisible by p is 1/"p"2, and the probability that at least one of them is not is 1 − 1/"p"2. Any finite collection of divisibility events associated to distinct primes is mutually independent. For example, in the case of two events, a number is divisible by primes p and q if and only if it is divisible by pq; the latter event has probability 1/"pq". If one makes the heuristic assumption that such reasoning can be extended to infinitely many divisibility events, one is led to guess that the probability that two numbers are coprime is given by a product over all primes,
formula_5
Here ζ refers to the Riemann zeta function, the identity relating the product over primes to "ζ"(2) is an example of an Euler product, and the evaluation of "ζ"(2) as "π"2/6 is the Basel problem, solved by Leonhard Euler in 1735.
There is no way to choose a positive integer at random so that each positive integer occurs with equal probability, but statements about "randomly chosen integers" such as the ones above can be formalized by using the notion of "natural density". For each positive integer N, let PN be the probability that two randomly chosen numbers in formula_6 are coprime. Although PN will never equal 6/"π"2 exactly, with work one can show that in the limit as formula_7 the probability PN approaches 6/"π"2.
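The convergence of PN towards 6/"π"2 can be observed directly by exhaustive counting for moderate N; the following Python snippet is a simple illustration:

from math import gcd, pi

def P(N):
    # Proportion of coprime pairs (a, b) with 1 <= a, b <= N.
    coprime_pairs = sum(1 for a in range(1, N + 1) for b in range(1, N + 1) if gcd(a, b) == 1)
    return coprime_pairs / N**2

for N in (10, 100, 1000):
    print(N, P(N))
print("6/pi^2 =", 6 / pi**2)   # approximately 0.6079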
More generally, the probability of k randomly chosen integers being setwise coprime is 1/"ζ"("k").
Generating all coprime pairs.
All pairs of positive coprime numbers ("m", "n") (with "m" > "n") can be arranged in two disjoint complete ternary trees, one tree starting from (2, 1) (for even–odd and odd–even pairs), and the other tree starting from (3, 1) (for odd–odd pairs). The children of each vertex ("m", "n") are generated as follows: the three children are formula_8, formula_9, and formula_10.
This scheme is exhaustive and non-redundant with no invalid members. This can be proved by remarking that, if formula_11 is a coprime pair with formula_12 then it has exactly one "father" among the vertices of the trees: if formula_13 the father is formula_14; if formula_15 the father is formula_16; and if formula_17 the father is formula_18.
In all cases formula_19 is a "smaller" coprime pair with formula_20 This process of "computing the father" can stop only if either formula_21 or formula_22 In these cases, coprimality implies that the pair is either formula_23 or formula_24
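A breadth-first traversal of the two trees can be written directly from the three branching rules; the following Python sketch interleaves the two trees, which is an arbitrary choice of traversal order:

from collections import deque
from math import gcd

def coprime_pairs(limit):
    # Generate coprime pairs (m, n) with m > n from the roots (2, 1) and (3, 1),
    # expanding each vertex into its three children (2m - n, m), (2m + n, m) and (m + 2n, n).
    queue, produced = deque([(2, 1), (3, 1)]), 0
    while queue and produced < limit:
        m, n = queue.popleft()
        yield m, n
        produced += 1
        queue.extend([(2 * m - n, m), (2 * m + n, m), (m + 2 * n, n)])

pairs = list(coprime_pairs(12))
print(pairs)
assert all(gcd(m, n) == 1 and m > n for m, n in pairs)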
Applications.
In machine design, an even, uniform gear wear is achieved by choosing the tooth counts of the two gears meshing together to be relatively prime. When a 1:1 gear ratio is desired, a gear relatively prime to the two equal-size gears may be inserted between them.
In pre-computer cryptography, some Vernam cipher machines combined several loops of key tape of different lengths. Many rotor machines combine rotors of different numbers of teeth. Such combinations work best when the entire set of lengths are pairwise coprime.
Generalizations.
This concept can be extended to algebraic structures other than the integers; for example, polynomials whose greatest common divisor is 1 are called coprime polynomials.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "a\\perp b"
},
{
"math_id": 1,
"text": "\\gcd\\left(n^a - 1, n^b - 1\\right) = n^{\\gcd(a, b)} - 1."
},
{
"math_id": 2,
"text": "S=\\{a_1,a_2, \\dots, a_n\\}"
},
{
"math_id": 3,
"text": "A+B=R."
},
{
"math_id": 4,
"text": "AB=A\\cap B;"
},
{
"math_id": 5,
"text": "\\prod_{\\text{prime } p} \\left(1-\\frac{1}{p^2}\\right) = \\left( \\prod_{\\text{prime } p} \\frac{1}{1-p^{-2}} \\right)^{-1} = \\frac{1}{\\zeta(2)} = \\frac{6}{\\pi^2} \\approx 0.607927102 \\approx 61\\%."
},
{
"math_id": 6,
"text": "\\{1,2,\\ldots,N\\}"
},
{
"math_id": 7,
"text": "N \\to \\infty,"
},
{
"math_id": 8,
"text": "(2m-n,m)"
},
{
"math_id": 9,
"text": "(2m+n,m)"
},
{
"math_id": 10,
"text": "(m+2n,n)"
},
{
"math_id": 11,
"text": "(a,b)"
},
{
"math_id": 12,
"text": "a>b,"
},
{
"math_id": 13,
"text": "a>3b,"
},
{
"math_id": 14,
"text": "(m,n)=(a-2b, b)"
},
{
"math_id": 15,
"text": "2b<a<3b,"
},
{
"math_id": 16,
"text": "(m,n)=(b, a-2b)"
},
{
"math_id": 17,
"text": "b<a<2b,"
},
{
"math_id": 18,
"text": "(m,n)=(b, 2b-a)"
},
{
"math_id": 19,
"text": "(m,n)"
},
{
"math_id": 20,
"text": "m>n."
},
{
"math_id": 21,
"text": "a=2b"
},
{
"math_id": 22,
"text": "a=3b."
},
{
"math_id": 23,
"text": "(2,1)"
},
{
"math_id": 24,
"text": "(3,1)."
}
] |
https://en.wikipedia.org/wiki?curid=6556
|
65561717
|
Rider optimization algorithm
|
The rider optimization algorithm (ROA) is devised on the basis of a novel computing method, namely fictional computing, which follows a series of processes to solve optimization problems using imaginary facts and notions. ROA relies on groups of riders that travel towards a common target and compete to reach it first. In ROA the number of groups is four, and an equal number of riders is placed in each group.
The four groups adopted in ROA are the attacker, the overtaker, the follower, and the bypass rider. Each group follows its own strategy to attain the target. The goal of the bypass rider is to attain the target by bypassing the leader's path. The follower tries to follow the position of the leader along each coordinate axis; furthermore, the follower performs a multidirectional search around the leading rider, which is useful for the algorithm as it improves the convergence rate. The overtaker updates its own position towards the target by considering locations near the leader; the benefit of the overtaker is that it facilitates faster convergence with a large global neighbourhood. As per ROA, global optimal convergence is a function of the overtaker, whose position relies on the position of the leader, the success rate, and a directional indicator. The attacker adopts the position of the leader and tries to reach the destination using its utmost speed; moreover, it is responsible for initializing a fast multidirectional search that accelerates the search.
Although the riders follow a prescribed method, the major factors employed for reaching the target are the correct riding of the vehicle and proper management of the accelerator, steering, brake and gear. At each time instant the riders alter their positions towards the target by regulating these factors and follow the prescribed method using the current success rate. The leader is defined by the success rate at the current instant. The process is repeated until the riders reach the off time, which is the maximum time allotted to the riders to attain the intended location. After the off time is reached, the rider in the leading position is termed the winner.
Algorithm.
The ROA is motivated by riders who contend to reach an anticipated location. The steps employed in the ROA algorithm are defined below:
Initialization of Rider and other algorithmic parameters.
The foremost step is the initialization of the algorithm, which is done using four groups of riders represented as formula_0; the initialization of their positions is performed in an arbitrary manner. The initialization of the groups is given by,
where formula_1 signifies the count of riders, and formula_2 signifies the position of the formula_3 rider in the formula_4 dimension at the formula_5 time instant.
The total count of riders is evaluated from the count of riders in each group and is expressed as,
where formula_6 signifies the bypass rider, formula_7 represents the follower, formula_8 signifies the overtaker, formula_9 represents the attacker, and formula_10 signifies the rag bull rider. Hence, the relation amongst the aforementioned attributes is represented as,
Finding rate of success.
After the initialization of the rider group parameters, the rate of success of each rider is evaluated. The rate of success is computed from the distance measured between the rider's location and the target and is formulated as,
where formula_11 symbolizes the position of the formula_3 rider and formula_12 indicates the target position. To elevate the rate of success the distance must be minimized; hence the reciprocal of the distance gives the success rate of the rider.
Determination of leading rider.
The rate of success plays a significant part in discovering the leader. The rider that resides nearest to the target location is supposed to have the highest rate of success.
Evaluate the rider’s update position.
The position of the riders in each group is updated in order to discover the rider in the leading position, who is hence the winner. Thus, each rider updates its position using the features of its group given in the definitions above. The position update of each rider is explained below:
The follower has an inclination to update its position based on the location of the leading rider so as to attain the target quickly, and its update is expressed as,
where formula_13 signifies the coordinate selector, formula_14 represents the leading rider's position, formula_15 indicates the leader's index, formula_16 signifies the steering angle of the formula_17 rider in the formula_18 coordinate, and formula_19 represents the distance.
The overtaker's position update is utilized to elevate the rate of success by discovering the overtaker's position and is represented as,
where formula_20 signifies the direction indicator.
The attacker has an inclination to seize the leader's position by following the leader's update process, and its update is expressed as,
The update rule of the bypass riders is exhibited here, wherein the standard bypass rider is expressed as,
where formula_21 signifies a random number, formula_22 symbolizes a random number between 1 and formula_23, formula_24 indicates a random number ranging between 1 and formula_23, and formula_25 represents a random number between 0 and 1.
Finding success rate.
After executing the update process, the rate of success of each rider is computed.
Update of Rider parameter.
The update of the rider parameters is important for discovering an effective solution. The steering angle and gear are updated using the activity counter and the success rate.
Off time of rider.
The procedure is iterated repeatedly until formula_26, at which point the leader is discovered. After the race is completed, the leading rider is considered the winner.
algorithm rider-optimization is
input: Arbitrary rider position formula_27,
iteration formula_28,
maximum iteration formula_29
output: Leading rider formula_30
Initialize solution set
Initialize other parameter of rider.
Find rate of success using equation (4)
while formula_31
for formula_32
Update position of follower using equation (5)
Update position of overtaker with equation (6)
Update position of attacker with equation (7)
Update position of bypass rider with equation (8)
Rank the riders based on success rate using equation (4)
Select the rider with high success rate
Update rider parameters
formula_33
Return formula_14
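The update equations (5)–(8) referred to above are not reproduced here, so the following Python sketch only illustrates the overall loop structure of ROA on the toy problem of approaching a known target point. The step sizes, the group-specific rules, and the best-so-far bookkeeping are illustrative placeholders, not the published algorithm:

import random

def roa_sketch(target, dim=2, riders=20, iterations=200, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-10.0, 10.0) for _ in range(dim)] for _ in range(riders)]
    groups = [i % 4 for i in range(riders)]   # 0 bypass, 1 follower, 2 overtaker, 3 attacker

    def success(p):
        # Success rate = reciprocal of the distance between the rider and the target.
        d = sum((pi - ti) ** 2 for pi, ti in zip(p, target)) ** 0.5
        return 1.0 / (d + 1e-12)

    best = max(pos, key=success)
    for _ in range(iterations):               # the iteration limit plays the role of the "off time"
        leader = max(pos, key=success)
        if success(leader) > success(best):
            best = list(leader)
        for i, p in enumerate(pos):
            if groups[i] == 0:                # bypass rider: recombine two randomly chosen riders
                a, b = rng.choice(pos), rng.choice(pos)
                pos[i] = [rng.random() * x + (1 - rng.random()) * y for x, y in zip(a, b)]
            elif groups[i] == 1:              # follower: step towards the leading rider
                pos[i] = [x + 0.5 * (l - x) for x, l in zip(p, leader)]
            elif groups[i] == 2:              # overtaker: leader-directed step plus a random direction indicator
                pos[i] = [x + 0.5 * (l - x) + rng.uniform(-0.5, 0.5) for x, l in zip(p, leader)]
            else:                             # attacker: aggressive step past the leader's position
                pos[i] = [l + 0.1 * (l - x) for x, l in zip(p, leader)]
    return best                               # the rider in the leading position is the winner

print(roa_sketch(target=[3.0, -2.0]))         # ends up near the target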
Applications.
Applications of ROA have been reported in several domains, including: engineering design optimization problems, diabetic retinopathy detection, document clustering, plant disease detection, attack detection, enhanced video super resolution, clustering, webpage re-ranking, task scheduling, medical image compression, resource allocation, and multihop routing.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "S_l(v,k)"
},
{
"math_id": 3,
"text": "v^{th}"
},
{
"math_id": 4,
"text": "k^{th}"
},
{
"math_id": 5,
"text": "l^{th}"
},
{
"math_id": 6,
"text": "B"
},
{
"math_id": 7,
"text": "J"
},
{
"math_id": 8,
"text": "O"
},
{
"math_id": 9,
"text": "A"
},
{
"math_id": 10,
"text": "K"
},
{
"math_id": 11,
"text": "S_v"
},
{
"math_id": 12,
"text": "l_t"
},
{
"math_id": 13,
"text": "o"
},
{
"math_id": 14,
"text": "S^G"
},
{
"math_id": 15,
"text": "G"
},
{
"math_id": 16,
"text": "\\varphi_{v,o}^l "
},
{
"math_id": 17,
"text": "v^{th} "
},
{
"math_id": 18,
"text": "o^{th} "
},
{
"math_id": 19,
"text": "\\partial_v^l "
},
{
"math_id": 20,
"text": "D_l^*\\bigl(v\\bigr) "
},
{
"math_id": 21,
"text": "\\lambda "
},
{
"math_id": 22,
"text": "\\chi "
},
{
"math_id": 23,
"text": "P "
},
{
"math_id": 24,
"text": "\\xi "
},
{
"math_id": 25,
"text": "\\delta "
},
{
"math_id": 26,
"text": "L_{OFF}"
},
{
"math_id": 27,
"text": "S_l "
},
{
"math_id": 28,
"text": "l"
},
{
"math_id": 29,
"text": "L"
},
{
"math_id": 30,
"text": "S^G "
},
{
"math_id": 31,
"text": "l<L_{OFF}"
},
{
"math_id": 32,
"text": "v=1 to P"
},
{
"math_id": 33,
"text": "l = l + 1"
}
] |
https://en.wikipedia.org/wiki?curid=65561717
|
65575420
|
The White Boys (mummers)
|
The White Boys (Manx: "Ny Guillyn Baney") is the traditional mummers' play of the Isle of Man.
The play and its actors are named because of the unusual white clothing they wear. The play is traditionally performed in public places close to Christmas, and it concerns knights fighting and killing one another before being resurrected by a doctor, after which there is a song and a dance. Historical scripts date back as far as 1832, and it is still performed on Manx streets each year.
History.
Nineteenth Century.
The earliest known record of the White Boys dates to 1832, when a full version of the play by 'the Douglas White Boys' was printed in the "Manx Sun" newspaper. The first eyewitness account of a performance is from 1838, when it was said that: "For several nights before Christmas, it is customary for boys, dressed in white [...] to perambulate the streets [...] and solicit contribution at the various dwellings." At this time, the White Boys were considered to be, alongside Hunt the Wren and the Fiddlers, 'amongst the most popular amusements of Christmas,' with their strength remaining 'untouched by the enervating hands of time and change.'
Boys or young men visited houses in the local community to perform the play in the late afternoon or early evening in the lead-up to Christmas. They would knock on doors and call out 'Who wants to see the White Boys act?' before the actors would be invited into homes. The play would normally be performed in either the sitting room or in the kitchen, where it was conventionally enjoyed by the whole household.
The performances were described as being 'a rude comical drama, half verse, and half prose run-mad,' often delivered in a 'hurried manner and the burlesque intonations of the speakers,' making it 'of all oddities the most odd.' However, it remained entirely compelling with eye-witnesses noting that 'the excitement attached to the "White Boys” can scarcely be imagined,' and 'the greatest actor in all the world could not have charmed us half so much.'
After the conclusion of the performance, the actors would receive their unnysup (the 'deserving'), a donation in thanks for the entertainment. This was reported as being generally quite generous. Money was expected but many households also offered food and drink: "[The White Boys] are not averse to the pennies; but if solids are not to be had, they readily put up with liquids, in shape of gin, or even Jough, alias Manx beer." The White Boys were known all over the Isle of Man, including the following: Andreas, Arbory, Ballabeg, Ballaragh, Ballaugh, Castletown, Douglas, Glen Auldyn, Glenchass, Jurby, Laxey, Lezayre, The Lhen, Maughold, Kirk Michael, Peel, Port Erin, Ramsey and Sulby. Many individual houses and inns are also known to have been visited.
1890s to 1950s.
By the end of the 19th century, the practice was reported as 'formerly common, [but] now almost moribund.' By this stage the doctor's role was taken by the smallest person of the group, most commonly a young boy. This was remarkable when the group incorporated drinking into their activities, and the Doctor was therefore sometimes left at a friend's house as pubs were visited.
The White Boys were adversely affected by the outbreak of the First World War. For instance, the previously popular and successful White Boys of Ramsey ceased after December 1913 and were not able to restart due to the introduction of a £5 Street Traders licence. The White Boys in Castletown continued until at least 1931, when young boys of about 10 years old performed the play.
From the 1910s Mona Douglas and Phillip Leighton Stowell were working to collect the dance believed to have been performed at the play's conclusion. It was through this that the play was revived and presented by boys from Castletown at a public entertainment in January 1939.
The play received some institutional support over this period, such as from the church, through the Ballabeg Sunday School in the 1900s and the Young Men's Club of a church in Peel in the early 20th century. Schools such as Ballamoda in 1934 and Ballasalla School in 1951 also began to stage the play, and Arbory School played a particularly important role in the play's living survival by producing it within the school for decades, from as early as 1943 through to the 1970s and beyond.
Although not well documented, the White Boys also continued to be performed in some form in Port Erin until at least 1950.
1970s to present.
In 1975 the play was independently revived for public performance by Ross Trench-Jellicoe, Colin Jerry, Bob Carswell, David Fisher, Ian Coulson, Stewart Bennett, George Broderick, Phil Gorry and Mark Shimmin. As members of the Manx dance group, Bock Yuan Fannee, they had originally planned to perform the dance only and only later came to prepare the play also. Although a complete rewrite of the play was prepared by Stewart Bennett to address contemporary politics, the version eventually performed was an amalgamation of the 1832 and 1845 versions, with other slight additions. On the last Saturday before Christmas 1975 they performed the play concluding with the sword dance at fourteen locations all over the Isle of Man.
The script developed for these performances was published in 1983 in the book of Manx dances, "Rinkaghyn Vannin". Here it appeared with music and full instructions for the White Boys Dance.
The play continued to be performed on the streets of the Isle of Man for 31 years by a changing set of actors connected to Bock Yuan Fannee. True to the tradition, the play was adapted to contemporary interests and concerns, most notably with a complete rewrite in the 1990s which replaced the characters of the play with Sir MHK, Sir Banker, Sir Cherished Number Salesman, Sir Expert and Estate Agent. The final recorded performance by members of Bock Yuan Fannee was in Peel in 2007.
In 2010 a book on the White Boys compiled and edited by Stephen Miller was published by Chiollagh Books; ""Who wants to see the White Boys act?" The Mumming Play in the Isle of Man: A Compendium of Sources". One review called it 'a comprehensive collection' with 'erudite and perceptive' accompanying notes.
Although the White Boys Dance continued to be regularly performed, particularly by Perree Bane and Skeddan Jiarg, the play was performed only intermittently in the 2010s, such as in an abridged form at the Bree Weekend in 2016.
In December 2019 the play returned to the Island's streets in the two weekends before Christmas. Performed by two independent groups, the play was put on in Port St. Mary, Port Erin, Colby, Ramsey, Castletown (twice), Peel (twice) and Douglas (four times). The group performing on 21 December presented a new version of the traditional play which playfully reassigned the heroic knight as the recognisably Manx St. Maughold, and the antagonist as St. George. Two groups of performers have performed the play around the Island in the run-in to Christmas each year since then.
Plot and variations.
There are six historical White Boys scripts. Although all vary from one another, the dominant core of the narrative is as follows:
A short introduction of the drama is followed by the entrance of the patriarch character, who introduces the hero (normally St. George). An antagonist then enters and proceeds to kill the hero in a fight. The patriarch calls for help and an aid enters to fight the antagonist on his behalf. After the antagonist is killed, the patriarch calls for a doctor, who enters to revive the two dead men. When the combatants rise and begin to fight again, the patriarch steps in to stop them. Then there is a song, an argument about who is to pay the doctor, and a dance.
As was noted in 1869, the traditional approach to the play is to vary or adapt it in performance:
The plot everywhere seems to be pretty nearly the same; scarcely any two sets of performers render it alike, constantly mixing up extraneous matter, often of a local nature, and frequently allusive to the passing events of the day, making the confusion of character in all the versions very great.
This is evident in the varying characters involved in the historical scripts:
In addition to these characters, references to the play beyond the available scripts show that Beelzebub, Devil Doubt, King and Turkish Champion were also present in other versions of the play.
The 1890s script is notable for the appearance of a girl interrupting the first fight to offer wine to the combatants. As well as this speaking role, this script also unusually has a cast list, which pairs each of the male parts with a female role; Queen, two Princesses and two Maids.
The modern variations of the script also reflect the historic variance of characters:
The 1983 script printed in "Rinkaghyn Vannin" noted that it was created from the 1832 script 'with variants from other recorded versions.' The text shows that it was constructed using the scripts of 1832, 1845, 1890s, 1950 and 1983, as well as introducing material not known in any historical Manx scripts. Most remarkable in this new material is the 20 lines and fight of Big Head and Little Devil Doubt at the close of the play, only six lines of which can be found in the historical Manx scripts.
As well as replacing the characters with contemporary figures, the 1990s versions of the White Boys were remarkable for re-writing the script entirely as biting commentary on contemporary Manx politics.
In 2019 a new edit of the historical scripts was created for the performances given on 21 December of that year. Although faithful to the historical sources, this new edit playfully reassigned the heroic knight as St. Maughold (a recognisably Manx character) and the antagonist as St. George.
The relative prominence of the roles within the play also varies over time, with the key comic character being seen as the Doctor in 1845, Sambo in 1909, and Devil Doubt in 1928. Today, whether comic or otherwise, the key role is seen to be the Doctor.
Today, a key line in the play is considered to be the contents of the Doctor's bottle, with which he cures those injured in the fighting. However, there is little consensus across the historical sources over this particular line:
Also of note are the lines:
The former is a distinctive Manx dialect phrase, directly translated from the Manx 'fud y cheilley,' meaning 'confused.' The latter rhyme might suggest that the play would have been performed in a strong Manx accent in order to make the rhyme work. The prominence of the Manx accent is also historically relevant to the name, as it was noted that it was perhaps more commonly spoken of by some as 'the Quite Boys.'
Costume.
An essential part of the White Boys has always been the costume as it is from their unusual white clothes that they derive their name.
As a folk practice, there has never been a determined uniform for the different sets of practitioners across the Isle of Man and descriptions vary across time and location. References to the costume include the following:
It is also of note that an account of a performance from 1909 stated that Sambo, in distinction from the Doctor, did not vary from the other White Boys in his costume and in not having any form of makeup.
The dominant costume for most of the actors today is a white smock covered in strips of coloured fabric, with hats and often shields indicating their character. In contrast, the Doctor dresses in dark Victorian gentlemen's attire, with tail-coat and a bowler or top hat.
Song.
There are a number of songs associated with the White Boys, the best known of which is what is today known as the White Boys Carol ("Carval ny Guillyn Baney"):
<poem>
Then here’s success to all brave boys
Of stout and gallant heart,
In battlefield or banquet board
Prepared to play a part.
We handle well both knife and fork
Likewise the sword and spear,
And we wish you a merry Christmas
And a good new year,
And we wish you a good new year!
With hostile bands confronted,
To fight we are not slack,
On roast beef and plum pudding
We can make a stout attack.
We handle well both knife and fork
Likewise the sword and spear,
And we wish you a merry Christmas
And a good new year,
And we wish you a good new year!</poem>
This is the only song to feature in any of the White Boy scripts. It appears before the argument over the Doctor's payment in the 1832 script only.
Another 'Carol of the White Boys' was published in 1898 by W. H. Gill in his "Manx National Music". Beginning with the lines, 'I wish you a Merry Christmas and a Happy New Year, / A pocket full of money and a cellar full of beer,' the song is better known elsewhere in association with the quaaltagh tradition of the Manx New Year.
Other songs are known to have regularly featured as a part of the performances, commonly after the conclusion of the drama or whilst leaving the venue. In at least the early twentieth century these songs were frequently popular songs of the day, performed as solo songs by the actors.
Dance.
The White Boys Dance ("Rinkey ny Ghuilleyn Baney") is performed at the end of all contemporary performances of the play. References to a dance appear in very few historical accounts of the White Boys, but the earliest is from 1840:
"Good morning to you, Mr and Mrs McKenzie," sounded at two o'clock in the morning; "and all the family that is small, good morning to you, and luck to you," and then a dance, a mock fight, and strains that would have murdered all the cows on the island, and this repeated with little variation down an entire street.
Although Mona Douglas began to collect something of the dance in Maughold prior to 1916, the dance was ultimately collected by Phillip Leighton Stowell between c.1925 and 1934. Its first recorded public performance was in the Royal Albert Hall.
At this point of the dance's revival, there was no lock of the swords at the conclusion, but only 'a woven cross-seat':
At the end there are six men in a ring – they pile their swords one-by-one on top of each other, the dancers going round slowly in a jog trot and then the Doctor runs in and sits on the swords and is raised up on them
Although the collected version of the dance has the Doctor sit on the swords, there is evidence that he stood on the swords in some performances.
A variant of this in the 2000s introduced the Laair Vane, another Manx Christmas tradition, who fooled about during the dance before being carried out on the swords. The modern version of the dance, however, is almost universally performed with a lock of swords held aloft at its conclusion, a feature introduced in the 1980s through contact with English folk dance societies, although there is some evidence that such a finish was used in a performance under Leighton Stowell as far back as 1939.
Daunse Noo George ("St. George's Dance") is another dance associated with the White Boys. First alluded to by Mona Douglas in 1941, it was finally collected by Leighton Stowell in 1948, at which point it received its first performance in Port Erin. The song and tune associated with Daunse Noo George have an unclear origin, as Leighton Stowell offers conflicting accounts of their being both collected and composed. Although it was historically performed by the St. George character at the conclusion of the play, it is today performed only as a display dance separate from the play itself.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\bigtriangleup"
}
] |
https://en.wikipedia.org/wiki?curid=65575420
|
6558
|
Cello
|
Bowed string instrument
The violoncello, often simply abbreviated as cello, is a bowed (sometimes plucked and occasionally hit) string instrument of the violin family. Its four strings are usually tuned in perfect fifths: from low to high, C2, G2, D3 and A3. The viola's four strings are each an octave higher. Music for the cello is generally written in the bass clef, with tenor clef and treble clef used for higher-range passages.
Played by a "cellist" or "violoncellist", it enjoys a large solo repertoire with and without accompaniment, as well as numerous concerti. As a solo instrument, the cello uses its whole range, from bass to soprano, and in chamber music such as string quartets and the orchestra's string section, it often plays the bass part, where it may be reinforced an octave lower by the double basses. Figured bass music of the Baroque era typically assumes a cello, viola da gamba or bassoon as part of the basso continuo group alongside chordal instruments such as organ, harpsichord, lute, or theorbo. Cellos are found in many other ensembles, from modern Chinese orchestras to cello rock bands.
Etymology.
The name "cello" is derived from the ending of the Italian "violoncello", which means "little violone". Violone ("big viola") was a large-sized member of viol (viola da gamba) family or the violin (viola da braccio) family. The term "violone" today usually refers to the lowest-pitched instrument of the viols, a family of stringed instruments that went out of fashion around the end of the 17th century in most countries except England and, especially, France, where they survived another half-century before the louder violin family came into greater favour in that country as well. In modern symphony orchestras, it is the second largest stringed instrument (the double bass is the largest). Thus, the name "violoncello" contained both the augmentative "-one" ("big") and the diminutive "-cello" ("little"). By the turn of the 20th century, it had become common to shorten the name to 'cello, with the apostrophe indicating the missing stem. It is now customary to use "cello" without apostrophe as the full designation. "Viol" is derived from the root "viola", which was derived from Medieval Latin , meaning stringed instrument.
General description.
Tuning.
Cellos are tuned in fifths, starting with C2 (two octaves below middle C), followed by G2, D3, and then A3. It is tuned in the exact same intervals and strings as the viola, but an octave lower. Similar to the double bass, the cello has an endpin that rests on the floor to support the instrument's weight. The cello is most closely associated with European classical music. The instrument is a part of the standard orchestra, as part of the string section, and is the bass voice of the string quartet (although many composers give it a melodic role as well), as well as being part of many other chamber groups.
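As a rough illustration of this tuning (not taken from the article: the reference pitch A4 = 440 Hz and the use of twelve-tone equal temperament are assumptions, and in practice string players often tune their fifths pure by ear), the nominal open-string frequencies can be computed as follows:

```python
# Hypothetical sketch: nominal cello open-string frequencies, assuming
# twelve-tone equal temperament and the common reference pitch A4 = 440 Hz.
A4 = 440.0
SEMITONE = 2 ** (1 / 12)        # frequency ratio of one equal-tempered semitone

a3 = A4 / 2                     # A3 is one octave below A4
d3 = a3 / SEMITONE ** 7         # each lower string is a fifth (7 semitones) down
g2 = d3 / SEMITONE ** 7
c2 = g2 / SEMITONE ** 7

for name, freq in [("C2", c2), ("G2", g2), ("D3", d3), ("A3", a3)]:
    print(f"{name}: {freq:.2f} Hz")
# Expected output: C2: 65.41 Hz, G2: 98.00 Hz, D3: 146.83 Hz, A3: 220.00 Hz
```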
Works.
Among the most well-known Baroque works for the cello are Johann Sebastian Bach's six unaccompanied Suites. Other significant works include Sonatas and Concertos by Antonio Vivaldi, and solo sonatas by Francesco Geminiani and Giovanni Bononcini. Domenico Gabrielli was one of the first composers to treat the cello as a solo instrument. As a basso continuo instrument the cello may have been used in works by Francesca Caccini (1587–1641), Barbara Strozzi (1619–1677) with pieces such as "Il primo libro di madrigali, per 2–5 voci e basso continuo, op. 1" and Elisabeth Jacquet de La Guerre (1665–1729), who wrote six sonatas for violin and basso continuo. The earliest known manual for learning the cello, Francesco Supriani's "Principij da imparare a suonare il violoncello e con 12 Toccate a solo" (before 1753), dates from this era. As the title of the work suggests, it contains 12 toccatas for solo cello, which along with Johann Sebastian Bach's Cello Suites, are some of the first works of that type.
From the Classical era, the two concertos by Joseph Haydn in C major and D major stand out, as do the five sonatas for cello and pianoforte of Ludwig van Beethoven, which span the important three periods of his compositional evolution. Other outstanding examples include the three Concerti by Carl Philipp Emanuel Bach, Capricci by dall'Abaco, and Sonatas by Flackton, Boismortier, and Luigi Boccherini. A "Divertimento for Piano, Clarinet, Viola and Cello" is among the surviving works by Duchess Anna Amalia of Brunswick-Wolfenbüttel (1739–1807). Wolfgang Amadeus Mozart supposedly wrote a Cello Concerto in F major, K. 206a in 1775, but this has since been lost. His Sinfonia Concertante in A major, K. 320e includes a solo part for cello, along with the violin and viola, although this work is incomplete and only exists in fragments, therefore it is given an Anhang number (Anh. 104).
Well-known works of the Romantic era include the Robert Schumann Concerto, the Antonín Dvořák Concerto, the first Camille Saint-Saëns Concerto, as well as the two sonatas and the Double Concerto by Johannes Brahms. A review of compositions for cello in the Romantic era must include the German composer Fanny Mendelssohn (1805–1847), who wrote Fantasia in G Minor for cello and piano and a Capriccio in A-flat for cello.
Compositions from the late 19th and early 20th century include three cello sonatas (including the Cello Sonata in C Minor written in 1880) by Dame Ethel Smyth (1858–1944), Edward Elgar's Cello Concerto in E minor, Claude Debussy's Sonata for Cello and Piano, and unaccompanied cello sonatas by Zoltán Kodály and Paul Hindemith. Pieces including cello were written by American Music Center founder Marion Bauer (1882–1955) (two trio sonatas for flute, cello, and piano) and Ruth Crawford Seeger (1901–1953) (Diaphonic suite No. 2 for bassoon and cello).
The cello's versatility made it popular with many composers in this era, such as Sergei Prokofiev, Dmitri Shostakovich, Benjamin Britten, György Ligeti, Witold Lutoslawski and Henri Dutilleux. Polish composer Grażyna Bacewicz (1909–1969) was writing for cello in the mid 20th century with Concerto No. 1 for Cello and Orchestra (1951), Concerto No. 2 for Cello and Orchestra (1963) and in 1964 composed her Quartet for four cellos.
In the 2010s, the instrument is found in popular music, though it was more commonly used in 1970s pop and disco music. Today it is sometimes featured in pop and rock recordings, examples of which are noted later in this article. The cello has also appeared in major hip-hop and R & B performances, such as singers Rihanna and Ne-Yo's 2007 performance at the American Music Awards. The instrument has also been modified for Indian classical music by Nancy Lesh and Saskia Rao-de Haas.
History.
The violin family, including cello-sized instruments, emerged c. 1500 as a family of instruments distinct from the viola da gamba family. The earliest depictions of the violin family, from Italy c. 1530, show three sizes of instruments, roughly corresponding to what we now call violins, violas, and cellos. Contrary to a popular misconception, the cello did not evolve from the viola da gamba, but existed alongside it for about two and a half centuries. The violin family is also known as the viola da braccio (meaning viola for the arm) family, a reference to the primary way the members of the family are held. This is to distinguish it from the viola da gamba (meaning viola for the leg) family, in which all the members are held with the legs. The likely predecessors of the violin family include the lira da braccio and the rebec. The earliest surviving cellos are made by Andrea Amati, the first known member of the celebrated Amati family of luthiers.
The direct ancestor to the violoncello was the bass violin. Monteverdi referred to the instrument as "basso de viola da braccio" in "Orfeo" (1607). Although the first bass violin, possibly invented as early as 1538, was most likely inspired by the viol, it was created to be used in consort with the violin. The bass violin was actually often referred to as a "violone", or "large viola", as were the viols of the same period. Instruments that share features with both the bass violin and the "viola da gamba" appear in Italian art of the early 16th century.
The invention of wire-wound strings (fine wire around a thin gut core), c. 1660 in Bologna, allowed for a finer bass sound than was possible with purely gut strings on such a short body. Bolognese makers exploited this new technology to create the cello, a somewhat smaller instrument suitable for solo repertoire due to both the timbre of the instrument and the fact that the smaller size made it easier to play virtuosic passages. This instrument had disadvantages as well, however. The cello's light sound was not as suitable for church and ensemble playing, so it had to be doubled by organ, theorbo, or violone.
Around 1700, Italian players popularized the cello in northern Europe, although the bass violin (basse de violon) continued to be used for another two decades in France. Many existing bass violins were literally cut down in size to convert them into cellos according to the smaller pattern developed by Stradivarius, who also made a number of old pattern large cellos (the 'Servais'). The sizes, names, and tunings of the cello varied widely by geography and time. The size was not standardized until c. 1750.
Despite similarities to the viola da gamba, the cello is actually part of the viola da braccio family, meaning "viol of the arm", which includes, among others, the violin and viola. Though paintings like Bruegel's "The Rustic Wedding" and writings such as Jambe de Fer's "Epitome Musical" suggest that the bass violin had alternate playing positions, these were short-lived, and the more practical and ergonomic "a gamba" position eventually replaced them entirely.
Baroque-era cellos differed from the modern instrument in several ways. The neck has a different form and angle, which matches the baroque bass-bar and stringing. The fingerboard is usually shorter than that of the modern cello, as the highest notes are not often called for in baroque music. Modern cellos have an endpin at the bottom to support the instrument (and transmit some of the sound through the floor), while Baroque cellos are held only by the calves of the player. Modern bows curve in and are held at the frog; Baroque bows curve out and are held closer to the bow's point of balance. Modern strings are normally flatwound with a metal (or synthetic) core; Baroque strings are made of gut, with the G and C strings wire-wound. Modern cellos often have fine tuners connecting the strings to the tailpiece, which makes it much easier to tune the instrument, but such pins are rendered ineffective by the flexibility of the gut strings used on Baroque cellos. Overall, the modern instrument has much higher string tension than the Baroque cello, resulting in a louder, more projecting tone, with fewer overtones. In addition, the instrument was less standardized in size and number of strings; a smaller, five-string variant (the violoncello piccolo) was commonly used as a solo instrument and five-string instruments are occasionally specified in the Baroque repertoire. BWV 1012 (Bach's sixth Cello Suite) was written for a five-string cello; since its additional high E string is an octave below the same string on the violin, anything written for the violin can be played on the five-string cello, sounding an octave lower than written.
Few educational works specifically devoted to the cello existed before the 18th century and those that do exist contain little value to the performer beyond simple accounts of instrumental technique. One of the earliest cello manuals is Michel Corrette's "Méthode, thèorique et pratique pour apprendre en peu de temps le violoncelle dans sa perfection" (Paris, 1741).
Modern use.
Orchestral.
Cellos are part of the standard symphony orchestra, which usually includes eight to twelve cellists. The cello section, in standard orchestral seating, is located on stage left (the audience's right) in the front, opposite the first violin section. However, some orchestras and conductors prefer switching the positioning of the viola and cello sections. The "principal" cellist is the section leader, determining bowings for the section in conjunction with other string principals, playing solos, and leading entrances (when the section begins to play its part). Principal players always sit closest to the audience.
The cellos are a critical part of orchestral music; all symphonic works involve the cello section, and many pieces require cello soli or solos. Much of the time, cellos provide part of the low-register harmony for the orchestra. Often, the cello section plays the melody for a brief period, before returning to the harmony role. There are also cello concertos, which are orchestral pieces that feature a solo cellist accompanied by an entire orchestra.
Solo.
There are numerous cello concertos – where a solo cello is accompanied by an orchestra – notably 25 by Vivaldi, 12 by Boccherini, at least three by Haydn, three by C. P. E. Bach, two by Saint-Saëns, two by Dvořák, and one each by Robert Schumann, Lalo, and Elgar. There were also some composers who, while not otherwise cellists, did write cello-specific repertoire, such as Nikolaus Kraft, who wrote six cello concertos. Beethoven's Triple Concerto for Cello, Violin and Piano and Brahms' Double Concerto for Cello and Violin are also part of the concertante repertoire, although in both cases the cello shares solo duties with at least one other instrument. Moreover, several composers wrote large-scale pieces for cello and orchestra, which are concertos in all but name. Some familiar "concertos" are Richard Strauss' tone poem "Don Quixote", Tchaikovsky's "Variations on a Rococo Theme", Bloch's "Schelomo" and Bruch's "Kol Nidrei".
In the 20th century, the cello repertoire grew immensely. This was partly due to the influence of virtuoso cellist Mstislav Rostropovich, who inspired, commissioned, and premiered dozens of new works. Among these, Prokofiev's "Symphony-Concerto", Britten's "Cello Symphony", the concertos of Shostakovich and Lutosławski as well as Dutilleux's "Tout un monde lointain..." have already become part of the standard repertoire. Other major composers who wrote concertante works for him include Messiaen, Jolivet, Berio, and Penderecki. In addition, Arnold, Barber, Glass, Hindemith, Honegger, Ligeti, Myaskovsky, Penderecki, Rodrigo, Villa-Lobos and Walton wrote major concertos for other cellists, notably for Gaspar Cassadó, Aldo Parisot, Gregor Piatigorsky, Siegfried Palm and Julian Lloyd Webber.
There are also many sonatas for cello and piano. Those written by Beethoven, Mendelssohn, Chopin, Brahms, Grieg, Rachmaninoff, Debussy, Fauré, Shostakovich, Prokofiev, Poulenc, Carter, and Britten are particularly well known.
Other important pieces for cello and piano include Schumann's five "Stücke im Volkston" and transcriptions like Schubert's "Arpeggione Sonata" (originally for arpeggione and piano), César Franck's Cello Sonata (originally a violin sonata, transcribed by Jules Delsart with the composer's approval), Stravinsky's "Suite italienne" (transcribed by the composer – with Gregor Piatigorsky – from his ballet "Pulcinella") and Bartók's first rhapsody (also transcribed by the composer, originally for violin and piano).
There are pieces for cello solo, Johann Sebastian Bach's six Suites for Cello (which are among the best-known solo cello pieces), Kodály's Sonata for Solo Cello and Britten's three Cello Suites. Other notable examples include Hindemith's and Ysaÿe's Sonatas for Solo Cello, Dutilleux's "Trois Strophes sur le Nom de Sacher", Berio's "Les Mots Sont Allés", Cassadó's Suite for Solo Cello, Ligeti's Solo Sonata, Carter's two "Figment"s and Xenakis' "Nomos Alpha" and "Kottos".
There are also modern solo pieces written for cello, such as Julie-O by Mark Summer.
Quartets and other ensembles.
The cello is a member of the traditional string quartet as well as string quintets, sextet or trios and other mixed ensembles.
There are also pieces written for two, three, four, or more cellos; this type of ensemble is also called a "cello choir" and its sound is familiar from the introduction to Rossini's William Tell Overture as well as Zaccharia's prayer scene in Verdi's Nabucco. Tchaikovsky's 1812 Overture also starts with a cello ensemble, with four cellos playing the top lines and two violas playing the bass lines. As a self-sufficient ensemble, its most famous repertoire is Heitor Villa-Lobos' first of his Bachianas Brasileiras for cello ensemble (the fifth is for soprano and 8 cellos). Other examples are Offenbach's cello duets, quartet, and sextet, Pärt's Fratres for eight cellos and Boulez' "Messagesquisse" for seven cellos, or even Villa-Lobos' rarely played "Fantasia Concertante" (1958) for 32 cellos. The 12 cellists of the Berlin Philharmonic Orchestra (or "the Twelve" as they have since taken to being called) specialize in this repertoire and have commissioned many works, including arrangements of well-known popular songs.
Popular music, jazz, world music and neoclassical.
The cello is less common in popular music than in classical music. Several bands feature a cello in their standard line-up, including Hoppy Jones of the Ink Spots and Joe Kwon of the Avett Brothers. The more common use in pop and rock is to bring the instrument in for a particular song. In the 1960s, artists such as the Beatles and Cher used the cello in popular music, in songs such as The Beatles' "Yesterday", "Eleanor Rigby" and "Strawberry Fields Forever", and Cher's "Bang Bang (My Baby Shot Me Down)". "Good Vibrations" by the Beach Boys includes the cello in its instrumental ensemble, which includes a number of instruments unusual for this sort of music. Bass guitarist Jack Bruce, who had originally studied music on a performance scholarship for cello, played a prominent cello part in "As You Said" on Cream's "Wheels of Fire" studio album (1968).
In the 1970s, the Electric Light Orchestra enjoyed great commercial success taking inspiration from so-called "Beatlesque" arrangements, adding the cello (and violin) to the standard rock combo line-up and in 1978 the UK-based rock band Colosseum II collaborated with cellist Julian Lloyd Webber on the recording "Variations". Most notably, Pink Floyd included a cello solo in their 1970 epic instrumental "Atom Heart Mother". Bass guitarist Mike Rutherford of Genesis was originally a cellist and included some cello parts in their "Foxtrot" album.
Established non-traditional cello groups include Apocalyptica, a group of Finnish cellists best known for their versions of Metallica songs; Rasputina, a group of cellists committed to an intricate cello style intermingled with Gothic music; the Massive Violins, an ensemble of seven singing cellists known for their arrangements of rock, pop and classical hits; Von Cello, a cello-fronted rock power trio; Break of Reality, who mix elements of classical music with the more modern rock and metal genre; Cello Fury, a cello rock band that performs original rock/classical crossover music; and Jelloslave, a Minneapolis-based cello duo with two percussionists. These groups are examples of a style that has become known as cello rock. The crossover string quartet Bond also includes a cellist. Silenzium and Cellissimo Quartet are Russian (Novosibirsk) groups playing rock and metal that are gaining popularity in Siberia. Cold Fairyland, from Shanghai, China, uses a cello along with a pipa as the main solo instruments to create East-meets-West progressive (folk) rock.
More recent bands who have used the cello include Clean Bandit, Aerosmith, The Auteurs, Nirvana, Oasis, Ra Ra Riot, Smashing Pumpkins, James, Talk Talk, Phillip Phillips, OneRepublic, Electric Light Orchestra and the baroque rock band Arcade Fire. An Atlanta-based trio, King Richard's Sunday Best, also uses a cellist in their lineup. So-called "chamber pop" artists like Kronos Quartet, The Vitamin String Quartet and Margot and the Nuclear So and So's have also recently made cello common in modern alternative rock. Heavy metal band System of a Down has also made use of the cello's rich sound. The indie rock band The Stiletto Formal are known for using a cello as a major staple of their sound; similarly, the indie rock band Canada employs two cello players in their lineup. The orch-rock group The Polyphonic Spree, which has pioneered the use of stringed and symphonic instruments, employs the cello in creative ways for many of their "psychedelic-esque" melodies. The first-wave screamo band I Would Set Myself On Fire For You featured a cello as well as a viola to create a more folk-oriented sound. The band Panic! at the Disco uses a cello in their song "Build God, Then We'll Talk", with lead vocalist Brendon Urie recording the cello solo himself. The Lumineers added cellist Nela Pekarek to the band in 2010. Radiohead makes frequent use of cello in their music, notably for the songs "Burn The Witch" and "Glass Eyes" in 2016.
In jazz, bassists Oscar Pettiford and Harry Babasin were among the first to use the cello as a solo instrument; both tuned their instruments in fourths, an octave above the double bass. Fred Katz (who was not a bassist) was one of the first notable jazz cellists to use the instrument's standard tuning and arco technique. Contemporary jazz cellists include Abdul Wadud, Diedre Murray, Ron Carter, Dave Holland, David Darling, Lucio Amanti, Akua Dixon, Ernst Reijseger, Fred Lonberg-Holm, Tom Cora and Erik Friedlander. Modern musical theatre pieces like Jason Robert Brown's "The Last Five Years", Duncan Sheik's "Spring Awakening", Adam Guettel's "Floyd Collins", and Ricky Ian Gordon's "My Life with Albertine" use small string ensembles (including solo cellos) to a prominent extent.
In Indian classical music, Saskia Rao-de Haas is a well-established soloist as well as playing duets with her sitarist husband, Pt. Shubhendra Rao. Other cellists performing Indian classical music are Nancy Lesh (Dhrupad) and Anup Biswas. Both Rao and Lesh play the cello sitting cross-legged on the floor.
The cello can also be used in bluegrass and folk music, with notable players including Ben Sollee of the Sparrow Quartet and the "Cajun cellist" Sean Grissom, as well as Vyvienne Long, who, in addition to her own projects, has played for those of Damien Rice. Cellists such as Natalie Haas, Abby Newton, and Liz Davis Maxfield have contributed significantly to the use of cello playing in Celtic folk music, often with the cello featured as a primary melodic instrument and employing the skills and techniques of traditional fiddle playing. Lindsay Mac is becoming well known for playing the cello like a guitar, with her cover of The Beatles' "Blackbird".
Construction.
The cello is typically made from carved wood, although other materials such as carbon fiber or aluminum may be used. A traditional cello has a spruce top, with maple for the back, sides, and neck. Other woods, such as poplar or willow, are sometimes used for the back and sides. Less expensive cellos frequently have tops and backs made of laminated wood. Laminated cellos are widely used in elementary and secondary school orchestras and youth orchestras, because they are much more durable than carved wood cellos (i.e., they are less likely to crack if bumped or dropped) and they are much less expensive.
The top and back are traditionally hand-carved, though less expensive cellos are often machine-produced. The sides, or ribs, are made by heating the wood and bending it around forms. The cello body has a wide top bout, narrow middle formed by two C-bouts, and wide bottom bout, with the bridge and F holes just below the middle. The top and back of the cello have a decorative border inlay known as purfling. While purfling is attractive, it is also functional: if the instrument is struck, the purfling can prevent cracking of the wood. A crack may form at the rim of the instrument but spreads no further. Without purfling, cracks can spread up or down the top or back. Playing, traveling and the weather all affect the cello and can increase a crack if purfling is not in place. Less expensive instruments typically have painted purfling.
The fingerboard and pegs on a cello are generally made from ebony, as it is strong and does not wear out easily.
Alternative materials.
In the late 1920s and early 1930s, the Aluminum Company of America (Alcoa) as well as German luthier G.A. Pfretzschner produced an unknown number of aluminum cellos (in addition to aluminum double basses and violins). Cello manufacturer Luis & Clark constructs cellos from carbon fibre. Carbon fibre instruments are particularly suitable for outdoor playing because of the strength of the material and its resistance to humidity and temperature fluctuations. Luis & Clark has produced over 1000 cellos, some of which are owned by cellists such as Yo-Yo Ma and Josephine van Lier.
Neck, fingerboard, pegbox, and scroll.
Above the main body is the carved neck. The neck has a curved cross-section on its underside, which is where the player's thumb runs along the neck during playing. The neck leads to a pegbox and the scroll, which are all normally carved out of a single piece of wood, usually maple. The fingerboard is glued to the neck and extends over the body of the instrument. The fingerboard is given a curved shape, matching the curve on the bridge. Both the fingerboard and bridge need to be curved so that the performer can bow individual strings. If the cello were to have a flat fingerboard and bridge, as with a typical guitar, the performer would only be able to bow the leftmost and rightmost two strings or bow all the strings. The performer would not be able to play the inner two strings alone.
The nut is a raised piece of wood, fitted where the fingerboard meets the pegbox, in which the strings rest in shallow slots or grooves to keep them the correct distance apart. The pegbox houses four tapered tuning pegs, one for each string. The pegs are used to tune the cello by either tightening or loosening the string. The pegs are called "friction pegs", because they maintain their position by friction. The scroll is a traditional ornamental part of the cello and a feature of all other members of the violin family. Ebony is usually used for the tuning pegs, fingerboard, and nut, but other hardwoods, such as boxwood or rosewood, can be used. Black fittings on low-cost instruments are often made from inexpensive wood that has been blackened or "ebonized" to look like ebony, which is much harder and more expensive. Ebonized parts such as tuning pegs may crack or split, and the black surface of the fingerboard will eventually wear down to reveal the lighter wood underneath.
Strings.
Historically, cello strings had cores made out of catgut, which, despite its name, is made from sheep or goat intestines. Most modern strings used in the 2010s are wound with metallic materials like aluminum, titanium and chromium. Cellists may mix different types of strings on their instruments. The pitches of the open strings are C, G, D, and A (black note heads in the playing range figure above), unless alternative tuning (scordatura) is specified by the composer. Some composers (e.g. Ottorino Respighi in the final movement of "The Pines of Rome") ask that the low C be tuned down to a B-flat so that the performer can play a different low note on the lowest open string.
Tailpiece and endpin.
The tailpiece and endpin are found in the lower part of the cello. The tailpiece is the part of the cello to which the "ball ends" of the strings are attached by passing them through holes. The tailpiece is attached to the bottom of the cello. The tailpiece is traditionally made of ebony or another hardwood, but can also be made of plastic or steel on lower-cost instruments. It attaches the strings to the lower end of the cello and can have one or more fine tuners. The fine tuners are used to make smaller adjustments to the pitch of the string. The fine tuners can increase the tension of each string (raising the pitch) or decrease the tension of the string (lowering the pitch). When the performer is putting on a new string, the fine tuner for that string is normally reset to a middle position, and then the peg is turned to bring the string up to pitch. The fine tuners are used for subtle, minor adjustments to pitch, such as tuning a cello to the oboe's 440 Hz A note or tuning the cello to a piano.
The endpin or spike is made of wood, metal, or rigid carbon fiber and supports the cello in playing position. The endpin can be retracted into the hollow body of the instrument when the cello is being transported in its case. This makes the cello easier to move about. When the performer wishes to play the cello, the endpin is pulled out to lengthen it. The endpin is locked into the player's preferred length with a screw mechanism. The adjustable nature of endpins enables performers of different ages and body sizes to adjust the endpin length to suit them. In the Baroque period, the cello was held between the calves, as there was no endpin at that time. The endpin was "introduced by Adrien Servais c. 1845 to give the instrument greater stability". Modern endpins are retractable and adjustable; older ones were removed when not in use. (The word "endpin" sometimes also refers to the button of wood located at this place in all instruments in the violin family, but this is usually called "tailpin".) The sharp tip of the cello's endpin is sometimes capped with a rubber tip that protects the tip from dulling and prevents the cello from slipping on the floor. Many cellists use a rubber pad with a metal cup to keep the tip from slipping on the floor. A number of accessories exist to keep the endpin from slipping; these include ropes that attach to the chair leg and other devices.
Bridge and f-holes.
The bridge holds the strings above the cello and transfers their vibrations to the top of the instrument and the soundpost inside (see below). The bridge is not glued but rather held in place by the tension of the strings. The bridge is usually positioned by the cross point of the "f-hole" (i.e., where the horizontal line occurs in the "f"). The f-holes, named for their shape, are located on either side of the bridge and allow air to move in and out of the instrument as part of the sound-production process. The f-holes also act as access points to the interior of the cello for repairs or maintenance. Sometimes a small length of rubber hose containing a water-soaked sponge, called a Dampit, is inserted through the f-holes and serves as a humidifier. This keeps the wood components of the cello from drying out.
Internal features.
Internally, the cello has two important features: a bass bar, which is glued to the underside of the top of the instrument, and a round wooden sound post, a solid wooden cylinder which is wedged between the top and bottom plates. The bass bar, found under the bass foot of the bridge, serves to support the cello's top and distribute the vibrations from the strings to the body of the instrument. The soundpost, found under the treble side of the bridge, connects the back and front of the cello. Like the bridge, the soundpost is not glued but is kept in place by the tensions of the bridge and strings. Together, the bass bar and sound post transfer the strings' vibrations to the top (front) of the instrument (and to a lesser extent the back), acting as a diaphragm to produce the instrument's sound.
Glue.
Cellos are constructed and repaired using hide glue, which is strong but reversible, allowing for disassembly when needed. Tops may be glued on with diluted glue since some repairs call for the removal of the top. Theoretically, hide glue is weaker than the body's wood, so as the top or back shrinks side-to-side, the glue holding it lets go and the plate does not crack. Cellists repairing cracks in their cello do not use regular wood glue, because it cannot be steamed open when a repair has to be made by a luthier.
Bow.
Traditionally, bows are made from pernambuco or brazilwood. Both come from the same species of tree ("Caesalpinia echinata"), but Pernambuco, used for higher-quality bows, is the heartwood of the tree and is darker in color than brazilwood (which is sometimes stained to compensate). Pernambuco is a heavy, resinous wood with great elasticity, which makes it an ideal wood for instrument bows. Horsehair is stretched out between the two ends of the bow. The taut horsehair is drawn over the strings, while being held roughly parallel to the bridge and perpendicular to the strings, to produce sound. A small knob is twisted to increase or decrease the tension of the horsehair. The tension on the bow is released when the instrument is not being used. The amount of tension a cellist puts on the bow hair depends on the preferences of the player, the style of music being played, and for students, the preferences of their teacher.
Bows are also made from other materials, such as carbon fibre—stronger than wood—and fiberglass (often used to make inexpensive, lower-quality student bows). An average cello bow is long (shorter than a violin or viola bow) high (from the frog to the stick) and wide. The frog of a cello bow typically has a rounded corner like that of a viola bow, but is wider. A cello bow is roughly heavier than a viola bow, which in turn is roughly heavier than a violin bow.
Bow hair is traditionally horsehair, though synthetic hair, in varying colors, is also used. Prior to playing, the musician tightens the bow by turning a screw to pull the frog (the part of the bow under the hand) back and increase the tension of the hair. Rosin is applied by the player to make the hair sticky. Bows need to be re-haired periodically. Baroque-style (1600–1750) cello bows were much thicker and were formed with a larger outward arch than modern cello bows. The inward arch of a modern cello bow produces greater tension, which in turn produces a louder sound.
The cello bow has also been used to play electric guitars. Jimmy Page pioneered its application on tracks such as "Dazed and Confused". The post-rock Icelandic band Sigur Rós's lead singer often plays guitar using a cello bow.
In 1989, the German cellist Michael Bach began developing a curved bow, encouraged by John Cage, Dieter Schnebel, Mstislav Rostropovich and Luigi Colani, and since then many pieces have been composed especially for it. This curved bow ("BACH.Bow") is a convex curved bow which, unlike the ordinary bow, renders possible polyphonic playing on the various strings of the instrument. The BACH.Bow is particularly suited to the solo repertoire for violin and cello by J. S. Bach, and it was developed with this repertoire in mind, since it calls for polyphonic as well as monophonic playing.
Physics.
Physical aspects.
When a string is bowed or plucked, it vibrates and moves the air around it, producing sound waves. Because the string is quite thin, not much air is moved by the string itself, and consequently, if the string was not mounted on a hollow body, the sound would be weak. In acoustic stringed instruments such as the cello, this lack of volume is solved by mounting the vibrating string on a larger hollow wooden body. The vibrations are transmitted to the larger body, which can move more air and produce a louder sound. Different designs of the instrument produce variations in the instrument's vibrational patterns and thus change the character of the sound produced. A string's fundamental pitch can be adjusted by changing its stiffness, which depends on tension and length. Tightening a string stiffens it by increasing both the outward forces along its length and the net forces it experiences during a distortion. A cello can be tuned by adjusting the tension of its strings, by turning the tuning pegs mounted on its pegbox and tension adjusters (fine tuners) on the tailpiece.
A string's length also affects its fundamental pitch. Shortening a string stiffens it by increasing its curvature during a distortion and subjecting it to larger net forces. Shortening the string also reduces its mass, but does not alter the mass per unit length, and it is the latter ratio rather than the total mass which governs the frequency. The string vibrates in a standing wave whose speed of propagation is given by formula_0, where T is the tension and m is the mass per unit length; there is a node at either end of the vibrating length, and thus the vibrating length l is half a wavelength. Since the frequency of any wave is equal to the speed divided by the wavelength, we have formula_1. (Some writers, including Muncaster (cited below), use the Greek letter μ in place of m.) Shortening a string therefore increases the frequency, and thus the pitch. Because of this effect, a player can raise the pitch of a string by pressing it against the fingerboard in the cello's neck, effectively shortening it. Likewise, strings with less mass per unit length, if under the same tension, will have a higher frequency and thus a higher pitch than more massive strings. This is a prime reason the different strings on all string instruments have different fundamental pitches, with the lightest strings having the highest pitches.
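A minimal sketch of the relation just described (the function and the numerical values below are assumptions chosen purely for illustration, not measurements of a real string):

```python
import math

def string_frequency(tension_newtons, mass_per_metre, vibrating_length_m):
    """Fundamental frequency of an ideal stretched string: f = (1 / 2l) * sqrt(T / m)."""
    wave_speed = math.sqrt(tension_newtons / mass_per_metre)   # formula_0 above
    return wave_speed / (2.0 * vibrating_length_m)             # formula_1 above

# Assumed values: ~0.69 m vibrating length and a tension of 160 N, with the
# mass per unit length back-solved so that the open string sounds A3 = 220 Hz.
length = 0.69
tension = 160.0
mass_per_metre = tension / (2 * length * 220.0) ** 2

print(round(string_frequency(tension, mass_per_metre, length), 1))          # 220.0 Hz (open string)
print(round(string_frequency(tension, mass_per_metre, length * 2 / 3), 1))  # 330.0 Hz (stopped at 2/3 length, a fifth higher)
```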
A played note of E or F-sharp has a frequency that is often very close to the natural resonating frequency of the body of the instrument, and if the problem is not addressed this can set the body into near resonance. This may cause an unpleasant sudden amplification of this pitch, and additionally a loud beating sound results from the interference produced between these nearby frequencies; this is known as the “wolf tone” because it is an unpleasant growling sound. The wood resonance appears to be split into two frequencies by the driving force of the sounding string. These two periodic resonances beat with each other. This wolf tone must be eliminated or significantly reduced for the cello to play the nearby notes with a pleasant tone. This can be accomplished by modifying the cello front plate, attaching a wolf eliminator (a metal cylinder or a rubber cylinder encased in metal), or moving the soundpost.
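As a small numerical illustration of the beating described above (the two frequencies are assumed values, not measurements of any instrument):

```python
import math

f_played = 175.0     # Hz, assumed note near the wolf region
f_resonance = 178.0  # Hz, assumed nearby body-resonance component

beat_rate = abs(f_played - f_resonance)       # beats per second heard by the listener
print(f"beat rate: {beat_rate:.1f} Hz")

# sin(2*pi*f1*t) + sin(2*pi*f2*t) has a slowly varying envelope 2*cos(pi*(f1 - f2)*t),
# which is why the combined tone swells and fades a few times per second.
for t in (0.0, 1 / (4 * beat_rate), 1 / (2 * beat_rate)):
    envelope = 2 * math.cos(math.pi * (f_played - f_resonance) * t)
    print(f"t = {t:.3f} s  envelope = {envelope:+.2f}")
```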
When a string is bowed or plucked to produce a note, the fundamental note is accompanied by higher frequency overtones. Each sound has a particular recipe of frequencies that combine to make the total sound.
Playing technique.
Playing the cello is done while seated with the instrument supported on the floor by the endpin. The right hand bows (or sometimes plucks) the strings to sound the notes. The left-hand fingertips stop the strings along their length, determining the pitch of each fingered note. Stopping the string closer to the bridge results in a higher-pitched sound because the vibrating string length has been shortened. On the contrary, a string stopped closer to the tuning pegs produces a lower sound. In the "neck" positions (which use just less than half of the fingerboard, nearest the top of the instrument), the thumb rests on the back of the neck, some people use their thumb as a marker of their position; in "thumb position" (a general name for notes on the remainder of the fingerboard) the thumb usually rests alongside the fingers on the string. Then, the side of the thumb is used to play notes. The fingers are normally held curved with each knuckle bent, with the fingertips in contact with the string. If a finger is required on two (or more) strings at once to play perfect fifths (in double stops or chords), it is used flat. The contact point can move slightly away from the nail to the finger's pad in slower or more expressive playing, allowing a fuller vibrato.
Vibrato is a small oscillation in the pitch of a note, usually considered an expressive technique. The closer towards the bridge the note is, the smaller the oscillation needed to create the effect. Harmonics played on the cello fall into two classes; natural and artificial. Natural harmonics are produced by lightly touching (but not depressing) the string at certain places and then bowing (or, rarely, plucking) the string. For example, the halfway point of the string will produce a harmonic that is one octave above the unfingered (open) string. Natural harmonics only produce notes that are part of the harmonic series on a particular string. Artificial harmonics (also called false harmonics or stopped harmonics), in which the player depresses the string fully with one finger while touching the same string lightly with another finger, can produce any note above middle C.
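A brief sketch of the arithmetic behind natural harmonics (the open-string pitch is an assumed example, and only the simplest touch point for each harmonic is listed):

```python
# Touching the string lightly at 1/n of its vibrating length suppresses every
# mode that has no node there, so the lowest surviving partial is the nth
# harmonic, at n times the open-string frequency.
open_a = 220.0  # Hz, assuming the open A string sounds A3 = 220 Hz

for n in range(2, 6):
    print(f"touch at 1/{n} of the string -> harmonic {n}: {n * open_a:.0f} Hz")
# n = 2 (the halfway point) gives 440 Hz, one octave above the open string,
# matching the example in the text.
```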
Glissando (Italian for "sliding") is an effect achieved by sliding the finger up or down the fingerboard without releasing the string. This causes the pitch to rise and fall smoothly, without separate, discernible steps.
In cello playing, the bow is much like the breath of a wind instrument player. Arguably, it is a major factor in the expressiveness of the playing. The right hand holds the bow and controls the duration and character of the notes. In general, the bow is drawn across the strings roughly halfway between the end of the fingerboard and the bridge, in a direction perpendicular to the strings; however, the player may wish to move the bow's point of contact higher or lower depending on the desired sound. The bow is held and manipulated with all five fingers of the right hand, with the thumb opposite the fingers and closer to the cellist's body. Tone production and volume of sound depend on a combination of several factors. The four most important ones are "weight" applied to the string, the "angle" of the bow on the string, bow "speed", and the "point of contact" of the bow hair with the string (sometimes abbreviated WASP).
Double stops involve the playing of two notes simultaneously. Two strings are fingered at once, and the bow is drawn to sound them both. Often, in pizzicato playing, the string is plucked directly with the fingers or thumb of the right hand. However, the strings may be plucked with a finger of the left hand in certain advanced pieces, either so that the cellist can play bowed notes on another string along with pizzicato notes or because the speed of the piece would not allow the player sufficient time to pluck with the right hand. In musical notation, pizzicato is often abbreviated as "pizz." The position of the hand in pizzicato is commonly slightly over the fingerboard and away from the bridge.
A player using the col legno technique strikes or rubs the strings with the wood of the bow rather than the hair. In spiccato playing, the bow still moves in a horizontal motion on the string but is allowed to bounce, generating a lighter, somewhat more percussive sound. In staccato, the player moves the bow a small distance and stops it on the string, making a short sound, the rest of the written duration being taken up by silence.
Legato is a technique in which notes are smoothly connected without breaks. It is indicated by a slur (curved line) above or below – depending on their position on the staff – the notes of the passage that is to be played legato.
"Sul ponticello" ("on the bridge") refers to bowing closer to (or nearly on) the bridge, while "sul tasto" ("on the fingerboard") calls for bowing nearer to (or over) the end of the fingerboard. At its extreme, sul ponticello produces a harsh, shrill sound with emphasis on overtones and high harmonics. In contrast, sul tasto produces a more flute-like sound that emphasizes the note's fundamental frequency and produces softened overtones. Composers have used both techniques, particularly in an orchestral setting, for special sounds and effects.
Sizes.
Standard-sized cellos are referred to as "full-size" or "<templatestyles src="Fraction/styles.css" />4⁄4" but are also made in smaller (fractional) sizes, including <templatestyles src="Fraction/styles.css" />15⁄16 <templatestyles src="Fraction/styles.css" />7⁄8, <templatestyles src="Fraction/styles.css" />3⁄4, <templatestyles src="Fraction/styles.css" />1⁄2, <templatestyles src="Fraction/styles.css" />1⁄4, <templatestyles src="Fraction/styles.css" />1⁄8, <templatestyles src="Fraction/styles.css" />1⁄10, and <templatestyles src="Fraction/styles.css" />1⁄16. The fractions refer to volume rather than length, so a 1/2 size cello is much longer than half the length of a full size. The smaller cellos are identical to standard cellos in construction, range, and usage, but are simply scaled-down for the benefit of children and shorter adults.
Cellos in sizes larger than <templatestyles src="Fraction/styles.css" />4⁄4 do exist, and cellists with unusually large hands may require such a non-standard instrument. Cellos made before c. 1700 tended to be considerably larger than those made and commonly played today. Around 1680, changes in string-making technology made it possible to play lower-pitched notes on shorter strings. The cellos of Stradivari, for example, can be clearly divided into two models: the style made before 1702, characterized by larger instruments (of which only three exist in their original size and configuration), and the style made during and after 1707, when Stradivari began making smaller cellos. This later model is the design most commonly used by modern luthiers. The scale length of a <templatestyles src="Fraction/styles.css" />4⁄4 cello is about . The new size offered fuller tonal projection and a greater range of expression. The instrument in this form was able to contribute to more pieces musically and offered the possibility of greater physical dexterity for the player to develop technique.
Accessories.
There are many accessories for the cello.
Instrument makers.
Cellos are made by luthiers, specialists in building and repairing stringed instruments, ranging from guitars to violins. The following luthiers are notable for the cellos they have produced:
Cellists.
A person who plays the cello is called a "cellist". For a list of notable cellists, see the list of cellists.
Famous instruments.
Specific instruments are famous (or become famous) for a variety of reasons. An instrument's notability may arise from its age, the fame of its maker, its physical appearance, its acoustic properties, and its use by notable performers. The most famous instruments are generally known for all of these things. The most highly prized instruments are now collector's items and are priced beyond the reach of most musicians. These instruments are typically owned by some kind of organization or investment group, which may loan the instrument to a notable performer. For example, the Davidov Stradivarius, which is currently in the possession of one of the most widely known living cellists, Yo-Yo Ma, is actually owned by the Vuitton Foundation.
Some notable cellos:
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sqrt{\\frac{T}{m}}"
},
{
"math_id": 1,
"text": "frequency = \\frac{1}{2l} \\cdot \\sqrt{\\frac{T}{m}}"
}
] |
https://en.wikipedia.org/wiki?curid=6558
|
65588359
|
Brezis–Lieb lemma
|
In the mathematical field of analysis, the Brezis–Lieb lemma is a basic result in measure theory. It is named for Haïm Brézis and Elliott Lieb, who discovered it in 1983. The lemma can be viewed as an improvement, in certain settings, of Fatou's lemma to an equality. As such, it has been useful for the study of many variational problems.
The lemma and its proof.
Statement of the lemma.
Let ("X", μ) be a measure space and let "f""n" be a sequence of measurable complex-valued functions on X which converge almost everywhere to a function f. The limiting function f is automatically measurable. The Brezis–Lieb lemma asserts that if p is a positive number, then
formula_0
provided that the sequence "f""n" is uniformly bounded in "L""p"("X", μ). A significant consequence, which sharpens Fatou's lemma as applied to the sequence |"f""n"|"p", is that
formula_1
which follows by the triangle inequality. This consequence is often taken as the statement of the lemma, although it does not have a more direct proof.
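A standard example, included here only as an illustration (it is not taken from the article), shows what the equality adds over Fatou's lemma: a fixed profile plus a "bump" escaping to infinity.

```latex
% Illustration on X = \mathbb{R} with Lebesgue measure (standard example):
\begin{gather*}
f = \chi_{[0,1]}, \qquad f_n = \chi_{[0,1]} + \chi_{[n,n+1]} \quad (n \ge 1),
\qquad f_n \to f \ \text{a.e.},\\
\int_{\mathbb{R}} |f_n|^p \, d\mu = 2, \qquad
\int_{\mathbb{R}} |f|^p \, d\mu = 1, \qquad
\int_{\mathbb{R}} |f - f_n|^p \, d\mu = 1 .
\end{gather*}
```

Here the displayed identity reads 1 = lim (2 − 1) exactly, whereas Fatou's lemma applied to |"f""n"|"p" yields only the inequality 1 ≤ 2; the "missing" unit of mass is carried off by the escaping bump.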
Proof.
The essence of the proof is in the inequalities
formula_2
The consequence is that "W""n" − ε|"f" − "f""n"|"p", which converges almost everywhere to zero, is bounded above by an integrable function, independently of n. The observation that
formula_3
and the application of the dominated convergence theorem to the first term on the right-hand side shows that
formula_4
The finiteness of the supremum on the right-hand side, with the arbitrariness of ε, shows that the left-hand side must be zero.
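For completeness, a routine bound (not spelled out above) justifies why that supremum is finite, using the uniform boundedness of the sequence in "L""p"("X", μ) together with Fatou's lemma applied to |"f""n"|"p":

```latex
% Elementary estimate: |a + b|^p <= 2^p (|a|^p + |b|^p) for every p > 0, hence
\int_X |f - f_n|^p \, d\mu
\;\le\; 2^p \left( \int_X |f|^p \, d\mu + \int_X |f_n|^p \, d\mu \right)
\;\le\; 2^{p+1} \sup_k \int_X |f_k|^p \, d\mu \;<\; \infty ,
% since Fatou's lemma gives \int_X |f|^p \, d\mu \le \sup_k \int_X |f_k|^p \, d\mu.
```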
References.
<templatestyles src="Refbegin/styles.css" />
Footnotes
<templatestyles src="Reflist/styles.css" />
Sources
|
[
{
"math_id": 0,
"text": "\\lim_{n\\to\\infty}\\int_X\\Big||f|^p-|f_n|^p+|f-f_n|^p\\Big|\\,d\\mu=0,"
},
{
"math_id": 1,
"text": "\\int_X|f|^p\\,d\\mu=\\lim_{n\\to\\infty}\\left(\\int_X|f_n|^p\\,d\\mu-\\int_X|f-f_n|^p\\,d\\mu\\right),"
},
{
"math_id": 2,
"text": "\\begin{align}\nW_n\\equiv \\Big||f_n|^p-|f|^p-|f-f_n|^p\\Big|&\\leq\\Big||f_n|^p-|f-f_n|^p\\Big|+|f|^p\\\\\n&\\leq\\varepsilon|f-f_n|^p+C_\\varepsilon|f|^p.\n\\end{align}"
},
{
"math_id": 3,
"text": "W_n\\leq\\max\\Big(0,W_n-\\varepsilon|f-f_n|^p\\Big)+\\varepsilon|f-f_n|^p,"
},
{
"math_id": 4,
"text": "\\limsup_{n\\to\\infty}\\int_XW_n\\,d\\mu\\leq\\varepsilon\\sup_n\\int_X |f-f_n|^p\\,d\\mu."
}
] |
https://en.wikipedia.org/wiki?curid=65588359
|
6559316
|
Social media optimization
|
Form of optimization
Social media optimization (SMO) is the use of a number of outlets and communities to generate publicity to increase the awareness of a product, service brand or event. Types of social media involved include RSS feeds, social news, bookmarking sites, and social networking sites such as Facebook, Instagram, Twitter, video sharing websites, and blogging sites. SMO is similar to search engine optimization (SEO) in that the goal is to generate web traffic and increase awareness for a website. SMO's focal point is on gaining organic links to social media content. In contrast, SEO's core is about reaching the top of the search engine hierarchy. In general, social media optimization refers to optimizing a website and its content to encourage more users to use and share links to the website across social media and networking sites.
SMO is used to strategically create online content ranging from well-written text to eye-catching digital photos or video clips that encourages and entices people to engage with a website. Users share this content, via its weblink, with social media contacts and friends. Common examples of social media engagement are "liking and commenting on posts, retweeting, embedding, sharing, and promoting content". Social media optimization is also an effective way of implementing online reputation management (ORM), meaning that if someone posts bad reviews of a business, an SMO strategy can ensure that the negative feedback is not the first link to come up in a list of search engine results.
In the 2010s, with social media sites overtaking TV as a source for news for young people, news organizations have become increasingly reliant on social media platforms for generating web traffic. Publishers such as "The Economist" employ large social media teams to optimize their online posts and maximize traffic, while other major publishers now use advanced artificial intelligence (AI) technology to generate higher volumes of web traffic.
Relationship with search engine optimization.
Social media optimization is an increasingly important factor in search engine optimization, which is the process of designing a website so that it ranks as highly as possible on search engines. Search engines are increasingly utilizing the recommendations of users of social networks such as Reddit, Facebook, Tumblr, Twitter, YouTube, LinkedIn, Pinterest, and Instagram to rank pages in search engine results pages. The implication is that when a webpage is shared or "liked" by a user on a social network, it counts as a "vote" for that webpage's quality. Thus, search engines can use such votes to rank websites more accurately in search engine results pages. Furthermore, since it is more difficult to tip the scales or influence the search engines in this way, search engines are putting more stock into social search. This, coupled with increasingly personalized search based on interests and location, has significantly increased the importance of a social media presence in search engine optimization. Due to personalized search results, location-based social media presences on websites such as Yelp, Google Places, Foursquare, and Yahoo! Local have become increasingly important. While social media optimization is related to search engine marketing, it differs in several ways. Primarily, SMO focuses on driving web traffic from sources "other" than search engines, though improved search engine ranking is also a benefit of successful social media optimization. Further, SMO helps target particular geographic regions in order to reach potential customers. This helps in lead generation (finding new customers) and contributes to high conversion rates (i.e., converting previously uninterested individuals into people who are interested in a brand or organization).
Relationship with viral marketing.
Social media optimization is in many ways connected to the technique of viral marketing or "viral seeding" where word of mouth is created through the use of networking in social bookmarking, video and photo sharing websites. An effective SMO campaign can harness the power of viral marketing; for example, 80% of activity on Pinterest is generated through "repinning." Furthermore, by following social trends and utilizing alternative social networks, websites can retain existing followers while also attracting new ones. This allows businesses to build an online following and presence, all linking back to the company's website for increased traffic. For example, with an effective social bookmarking campaign, not only can website traffic be increased, but a site's rankings can also be increased. Engagement with blogs creates a similar result by sharing content through the use of RSS in the blogosphere and special blog search engines. Social media optimization is considered an integral part of an online reputation management (ORM) or search engine reputation management (SERM) strategy for organizations or individuals who care about their online presence. SMO is one of six key influencers that affect the Social Commerce Construct (SCC), which is created in part by online activities such as consumers' evaluations of, and advice on, products and services.
Social media optimization is not limited to marketing and brand building. Increasingly, smart businesses are integrating social media participation as part of their knowledge management strategy (i.e., product/service development, recruiting, employee engagement and turnover, brand building, customer satisfaction and relations, business development and more). Additionally, social media optimization can be implemented to foster a community around the associated site, allowing for a healthy business-to-consumer (B2C) relationship.
Origins and implementation.
According to technologist Danny Sullivan, the term "social media optimization" was first used and described by marketer Rohit Bhargava on his marketing blog in August 2006. In the same post, Bhargava established the five important rules of social media optimization. Bhargava believed that by following his rules, anyone could influence the levels of traffic and engagement on their site, increase popularity, and ensure that it ranks highly in search engine results. An additional 11 SMO rules have since been added to the list by other marketing contributors.
The 16 rules of SMO, according to one source, are as follows:
Bhargava's initial five rules were more narrowly focused on SMO, while the list is now much broader and addresses everything that can be done across different social media platforms. According to author and CEO of TopRank Online Marketing, Lee Odden, a Social Media Strategy is also necessary to ensure optimization. This is a similar concept to Bhargava's list of rules for SMO.
The Social Media Strategy may consider:
According to Lon Safko and David K. Brake in "The Social Media Bible", it is also important to act like a publisher by maintaining an effective organizational strategy, to have an original concept and unique "edge" that differentiates one's approach from competitors, and to experiment with new ideas if things do not work the first time. If a business is blog-based, an effective method of SMO is using widgets that allow users to share content to their personal social media platforms. This will ultimately reach a wider target audience and drive more traffic to the original post. Blog widgets and plug-ins for post-sharing are most commonly linked to Facebook, Google+, LinkedIn, and Twitter. They occasionally also link to social media platforms such as StumbleUpon, Tumblr, and Pinterest. Many sharing widgets also include user counters which indicate how many times the content has been liked and shared across different social media pages. This can influence whether or not new users will engage with the post, and also gives businesses an idea of what kind of posts are most successful at engaging audiences. By using relevant and trending keywords in titles and throughout blog posts, a business can also increase search engine optimization and the chances of their content being read and shared by a large audience. The root of effective SMO is the content that is being posted, so professional content creation tools can be very beneficial. These can include editing programs such as Photoshop, GIMP, Final Cut Pro, and Dreamweaver. Many websites also offer customization options such as different layouts to personalize a page and create a point of difference.
Publishing industry.
With social media sites overtaking TV as a source for news for young people, news organizations have become increasingly reliant on social media platforms for generating traffic. A report by the Reuters Institute for the Study of Journalism described how a 'second wave of disruption' had hit news organizations, with publishers such as "The Economist" having to employ large social media teams to optimize their posts and maximize traffic. Within the context of the publishing industry, even professional fields are utilizing SMO. Because doctors want to maximize exposure to their research findings, SMO has also found a place in the medical field.
Today, 3.8 billion people globally are using some form of social media. People frequently obtain health-related information from online social media platforms like Twitter and Facebook. Healthcare professionals and scientists can communicate with medical counterparts through social media platforms to discuss research and findings. These platforms provide researchers with data sets and surveillance that help detect patterns and behavior in preventing, informing about, and studying global disease, such as COVID-19. Additionally, researchers utilize SMO to reach and recruit hard-to-reach patients. SMO can also narrow a study to specified demographics and filter the data needed for that study.
Social network games.
Social media gaming is online gaming activity performed through social media sites with friends and online gaming activity that promotes social media interaction. Examples of the former include "FarmVille", "Clash of Clans", "Clash Royale", "FrontierVille", and "Mafia Wars". In these games a player's social network is exploited to recruit additional players and allies. An example of the latter is "Empire Avenue", a virtual stock exchange where players buy and sell shares of each other's social network worth. Nielsen Media Research estimates that, as of June 2010, social networking and playing online games account for about one-third of all online activity by Americans.
Facebook.
Facebook has in recent years become a popular channel for advertising, alongside traditional forms such as television, radio, and print. With over 1 billion active users, and 50% of those users logging into their accounts every day, it is an important communication platform that businesses can utilize and optimize to promote their brand and drive traffic to their websites. There are three commonly used strategies to increase advertising reach on Facebook:
Improving effectiveness and increasing network size are organic approaches, while buying more reach is a paid approach which does not require any further action. Most businesses will attempt an "organic" approach to gaining a significant following before considering a paid approach. Because Facebook requires a login, it is important that posts are public to ensure they will reach the widest possible audience. Posts that have been heavily shared and interacted with by users are displayed as 'highlighted posts' at the top of newsfeeds. In order to achieve this status, the posts need to be engaging, interesting, or useful. This can be achieved by being spontaneous, asking questions, addressing current events and issues, and optimizing trending hashtags and keywords. The more engagement a post receives, the further it will spread and the more likely it is to appear first in search results.
Another organic approach to Facebook optimization is cross-linking different social platforms. By posting links to websites or social media sites in the profile 'about' section, it is possible to direct traffic and ultimately increase search engine optimization. Another option is to share links to relevant videos and blog posts. Facebook Connect is a functionality that launched in 2008 to allow Facebook users to sign up to different websites, enter competitions, and access exclusive promotions by logging in with their existing Facebook account details. This is beneficial to users as they don't have to create a new login every time they want to sign up to a website, but also beneficial to businesses as Facebook users become more likely to share their content. Often the two are interlinked, where in order to access parts of a website, a user has to like or share certain things on their personal profile or invite a number of friends to like a page. This can lead to greater traffic flow to a website as it reaches a wider audience. Businesses have more opportunities to reach their target markets if they choose a paid approach to SMO. When Facebook users create an account, they are urged to fill out their personal details such as gender, age, location, education, current and previous employers, religious and political views, interests, and personal preferences such as movie and music tastes. Facebook then takes this information and allows advertisers to use it to determine how to best market themselves to users that they know will be interested in their product. This can also be known as micro-targeting. If a user clicks on a link to like a page, it will show up on their profile and newsfeed. This then feeds back into organic social media optimization, as friends of the user will see this and be encouraged to click on the page themselves. Although advertisers are buying mass reach, they are attracting a customer base with a genuine interest in their product. Once a customer base has been established through a paid approach, businesses will often run promotions and competitions to attract more organic followers.
The number of businesses that use Facebook to advertise also holds significant relevance. Currently there are three million businesses that advertise on Facebook. This makes Facebook the world's largest platform for social media advertising. What also holds importance is the amount of money leading businesses are spending on Facebook advertising alone. Procter & Gamble spends $60 million every year on Facebook advertising. Other advertisers on Facebook include Microsoft, with a yearly spend of £35 million, and Amazon, Nestlé, and American Express, all with yearly expenditures above £25 million.
Furthermore, the number of small businesses advertising on Facebook is of relevance. This number has grown rapidly over recent years and demonstrates how important social media advertising actually is. Currently 70% of the UK's small businesses use Facebook advertising. This is a substantial number of advertisers. Almost half of the world's small businesses use a social media marketing product of some sort. This demonstrates the impact that social media has had on the current digital marketing era.
Engagement Rate.
The engagement rate (ER) measures the activity of users on a specific profile on Facebook, Instagram, TikTok, or any other social media platform. A common way to calculate it is the following:
formula_0
In the above formula, "followers" is the total number of followers (friends, subscribers, etc.), and "interactions" stands for the number of interactions, such as likes, comments, personal messages, and shares. The latter is averaged over a chosen period of time, which should normally be short enough that the change in the follower count over that period is negligible.
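A minimal sketch of this calculation, assuming made-up post data; the function name, input layout, and numbers are illustrative and not tied to any platform's API:

```python
def engagement_rate(interactions_per_post, followers):
    """Engagement rate as a percentage: average interactions per post
    (likes, comments, shares, messages) divided by the follower count."""
    if followers <= 0:
        raise ValueError("follower count must be positive")
    avg_interactions = sum(interactions_per_post) / len(interactions_per_post)
    return avg_interactions / followers * 100

# Hypothetical profile: 12,500 followers and three recent posts.
posts = [480, 310, 420]  # likes + comments + shares + messages per post
print(f"ER = {engagement_rate(posts, 12_500):.2f}%")  # ER = 3.23%
```

Keeping the averaging window short, as noted above, avoids mixing posts made at very different follower counts.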
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "ER = \\frac{\\overline{interactions}}{followers} \\times 100\\%"
}
] |
https://en.wikipedia.org/wiki?curid=6559316
|
65595160
|
General relativity priority dispute
|
Debate about credit for general relativity
Albert Einstein's discovery of the gravitational field equations of general relativity and David Hilbert's almost simultaneous derivation of the theory using an elegant variational principle, during a period when the two corresponded frequently, have led to numerous historical analyses of their interaction. The analyses came to be called a priority dispute.
Einstein and Hilbert.
The events of interest to historians of the dispute occurred in late 1915. At that time
Albert Einstein, now perhaps the most famous modern scientist, had been working on gravitational theory since 1912. He had "developed and published much of the framework of general relativity, including the ideas that gravitational effects require a tensor theory, that these effects determine a non-Euclidean geometry, that this metric role of gravitation results in a redshift and in the bending of light passing near a massive body." While David Hilbert never became a celebrity, he was seen as a mathematician unequaled in his generation, with an especially wide impact on mathematics. When he met Einstein in the summer of 1915, Hilbert had started working on an axiomatic system for a unified field theory, combining Gustav Mie's ideas on electromagnetism with Einstein's general relativity. As the historians referenced below recount, Einstein and Hilbert corresponded extensively throughout the fall of 1915, culminating in lectures by both men in late November that were later published. The historians debate the consequences of this friendly correspondence on the resulting publications.
Undisputed facts.
The following facts are well established and referable:
Historians on Hilbert's point of view.
Historians have discussed Hilbert's view of his interaction with Einstein.
Walter Isaacson points out that Hilbert's publication on his derivation of the equations of general relativity included the text: “The differential equations of gravitation that result are, as it seems to me, in agreement with the magnificent theory of general relativity established by Einstein.”
Wuensch points out that Hilbert refers to the field equations of gravity as "meine Theorie" ("my theory") in his 6 February 1916 letter to Schwarzschild. This, however, is not at issue, since no one disputes that Hilbert had his own "theory", which Einstein criticized as naive and overly ambitious. Hilbert's theory was based on the work of Mie combined with Einstein's principle of general covariance, but applied to matter and electromagnetism as well as gravity.
Mehra and Bjerknes point out that Hilbert's 1924 version of the article contained the sentence "... und andererseits auch Einstein, obwohl wiederholt von abweichenden und unter sich verschiedenen Ansätzen ausgehend, kehrt schließlich in seinen letzten Publikationen geradenwegs zu den Gleichungen meiner Theorie zurück" - "Einstein [...] in his last publications ultimately returns directly to the equations of my theory.". These statements of course do not have any particular bearing on the matter at issue. No one disputes that Hilbert had "his" theory, which was a very ambitious attempt to combine gravity with a theory of matter and electromagnetism along the lines of Mie's theory, and that his equations for gravitation agreed with those that Einstein presented beginning in Einstein's 25 November paper (which Hilbert refers to as Einstein's later papers to distinguish them from previous theories of Einstein). None of this bears on the precise origin of the trace term in the Einstein field equations (a feature of the equations that, while theoretically significant, does not have any effect on the vacuum equations, from which all the empirical tests proposed by Einstein were derived).
Sauer says "the independence of Einstein's discovery was never a point of dispute between Einstein and Hilbert ... Hilbert claimed priority for the introduction of the Riemann scalar into the action principle and the derivation of the field equations from it," (Sauer mentions a letter and a draft letter where Hilbert defends his priority for the action functional) "and Einstein admitted publicly that Hilbert (and Lorentz) had succeeded in giving the equations of general relativity a particularly lucid form by deriving them from a single variational principle". Sauer also stated, "And in a draft of a letter to Weyl, dated 22 April 1918, written after he had read the proofs of the first edition of Weyl's 'Raum-Zeit-Materie' Hilbert also objected to being slighted in Weyl's exposition. In this letter again 'in particular the use of the Riemannian curvature [scalar] in the Hamiltonian integral' ('insbesondere die Verwendung der Riemannschen Krümmung unter dem Hamiltonschen Integral') was claimed as one of his original contributions. SUB Cod. Ms. Hilbert 457/17."
Did Einstein develop the field equations independently?
While Hilbert's paper was submitted five days earlier than Einstein's, it only appeared in 1916, after Einstein's field equations paper had appeared in print. For this reason, there was no good reason to suspect plagiarism on either side. In 1978, an 18 November 1915 letter from Einstein to Hilbert resurfaced, in which Einstein thanked Hilbert for sending an explanation of Hilbert's work. This was not unexpected to most scholars, who were well aware of the correspondence between Hilbert and Einstein that November, and who continued to hold the view expressed by Albrecht Fölsing in his Einstein biography:
In November, when Einstein was totally absorbed in his theory of gravitation, he essentially only corresponded with Hilbert, sending Hilbert his publications and, on November 18, thanking him for a draft of his article. Einstein must have received that article immediately before writing this letter. Could Einstein, casting his eye over Hilbert's paper, have discovered the term which was still lacking in his own equations, and thus 'nostrified' Hilbert?
In the very next sentence, after asking the rhetorical question, Fölsing answers it with "This is not really probable...", and then goes on to explain in detail why
[Einstein's] eventual derivation of the equations was a logical development of his earlier arguments—in which, despite all the mathematics, physical principles invariably predominated. His approach was thus quite different from Hilbert's, and Einstein's achievements can, therefore, surely be regarded as authentic.
In their 1997 "Science" paper, Corry, Renn and Stachel quote the above passage and comment that "the arguments by which Einstein is exculpated are rather weak, turning on his slowness in fully grasping Hilbert's mathematics", and so they attempted to find more definitive evidence of the relationship between the work of Hilbert and Einstein, basing their work largely on a recently discovered pre-print of Hilbert's paper. A discussion of the controversy around this paper is given below.
Those who contend that Einstein's paper was motivated by the information obtained from Hilbert have referred to the following sources:
Those who contend that Einstein's work takes priority over Hilbert's, or that both authors worked independently have used the following arguments:
Scholars.
This section cites notable publications where people have expressed a view on the issues outlined above.
Albrecht Fölsing on the Hilbert-Einstein interaction (1993).
From Fölsing's 1993 Einstein biography (English translation 1998): "Hilbert, like all his other colleagues, acknowledged Einstein as the sole creator of relativity theory."
Corry/Renn/Stachel and Friedwardt Winterberg (1997/2003).
In 1997, Corry, Renn and Stachel published a three-page article in "Science" entitled "Belated Decision in the Hilbert-Einstein Priority Dispute" concluding that Hilbert had not anticipated Einstein's equations.
Friedwardt Winterberg, a professor of physics at the University of Nevada, Reno, disputed these conclusions, observing that the galley proofs of Hilbert's articles had been tampered with - part of one page had been cut off. He goes on to argue that the removed part of the article contained the equations that Einstein later published, and he wrote that "the cut off part of the proofs suggests a crude attempt by someone to falsify the historical record". "Science" declined to publish this; it was printed in revised form in "Zeitschrift für Naturforschung", with a dateline of 5 June 2003. Winterberg criticized Corry, Renn and Stachel for having omitted the fact that part of Hilbert's proofs was cut off. Winterberg wrote that the correct field equations are still present on the existing pages of the proofs in various equivalent forms. In this paper, Winterberg asserted that Einstein "sought the help of" Hilbert and Klein to help him find the "correct field equation", without mentioning the research of Fölsing (1997) and Sauer (1999), according to which Hilbert "invited" Einstein to Göttingen to give a week of lectures on general relativity in June 1915, which, however, does not necessarily contradict Winterberg. Hilbert at the time was looking for physics problems to solve.
A short reply to Winterberg's article can be found at ; the original long reply can be accessed via the Internet Archive at . In this reply, Winterberg's hypothesis is called "paranoid" and "speculative". Corry et al. offer the following alternative speculation: "it is possible that Hilbert himself cropped off the top of p. 7 to include it with the three sheets he sent Klein, in order that they not end in mid-sentence."
As of September 2006, the Max Planck Institute of Berlin has replaced the short reply with a note saying that the Max Planck Society "distances itself from statements published on this website [...] concerning Prof. Friedwart Winterberg" and stating that "the Max Planck Society will not take a position in [this] scientific dispute".
Ivan Todorov, in a paper published on arXiv, says of the debate:
Their [CRS's] attempt to support on this ground Einstein's accusation of "nostrification" goes much too far. A calm, non-confrontational reaction was soon provided by a thorough study of Hilbert's route to the "Foundations of Physics" (see also the relatively even handed survey (Viz 01)).
In the paper recommended by Todorov as calm and non-confrontational, Tilman Sauer concludes that the printer's proofs show conclusively that Einstein did not plagiarize Hilbert, stating
any possibility that Einstein took the clue for the final step toward his field equations from Hilbert's note [Nov 20, 1915] is now definitely precluded.
Max Born's letters to David Hilbert, reproduced in Wuensch, are cited by Todorov as evidence that Einstein's thinking towards general covariance was influenced by the competition with Hilbert.
Todorov ends his paper by stating:
Einstein and Hilbert had the moral strength and wisdom - after a month of intense competition, from which, in a final account, everybody (including science itself) profited - to avoid a lifelong priority dispute (something in which Leibniz and Newton failed). It would be a shame to subsequent generations of scientists and historians of science to try to undo their achievement.
Anatoly Alexeevich Logunov on general relativity (2004).
Anatoly Logunov (a former vice president of the Soviet Academy of Sciences and at the time the scientific advisor of the Institute for High Energy Physics) is the author of a book about Poincaré's relativity theory and coauthor, with Mestvirishvili and Petrov, of an article rejecting the conclusions of the Corry/Renn/Stachel paper. They discuss both Einstein's and Hilbert's papers, claiming that Einstein and Hilbert arrived at the correct field equations independently. Specifically, they conclude that:
"Their pathways were different but they led exactly to the same result. Nobody "nostrified" the other. So no "belated decision in the Einstein–Hilbert priority dispute", about which [Corry, Renn, and Stachel] wrote, can be taken. Moreover, the very Einstein–Hilbert dispute never took place."
"All is absolutely clear: both authors made everything to immortalize their names in the title of the gravitational field equations. But general relativity is Einstein's theory."
Wuensch and Sommer (2005).
Daniela Wuensch, a historian of science and a Hilbert and Kaluza expert, responded to Bjerknes, Winterberg and Logunov's criticisms of the Corry/Renn/Stachel paper in a book which appeared in 2005, wherein she defends the view that the cut to Hilbert's printer proofs was made in recent times. Moreover, she presents a theory about what might have been on the missing part of the proofs, based upon her knowledge of Hilbert's papers and lectures.
She defends the view that knowledge of Hilbert's 16 November 1915 letter was crucial to Einstein's development of the field equations: Einstein arrived at the correct field equations only with Hilbert's help ("nach großer Anstrengung mit Hilfe Hilberts"), but nevertheless calls Einstein's reaction (his negative comments on Hilbert in the 26 November letter to Zangger) "understandable" ("Einsteins Reaktion ist verständlich") because Einstein had worked on the problem for a long time.
According to her publisher, Klaus Sommer, Wuensch concludes though that:
This comprehensive study concludes with a historical interpretation. It shows that while it is true that Hilbert must be seen as the one who first discovered the field equations, the general theory of relativity is indeed Einstein's achievement, whereas Hilbert developed a unified theory of gravitation and electromagnetism.
In 2006, Wuensch was invited to give a talk at the annual meeting of the German Physics Society (Deutsche Physikalische Gesellschaft) about her views about the priority issue for the field equations.
Wuensch's publisher, Klaus Sommer, in an article in "Physik in unserer Zeit", supported Wuensch's view that Einstein obtained some results not independently but from the information obtained from Hilbert's 16 November letter and from the notes of Hilbert's talk. While he does not call Einstein a plagiarist, Sommer speculates that Einstein's conciliatory 20 December letter was motivated by the fear that Hilbert might comment on Einstein's behaviour in the final version of his paper. Sommer claimed that a scandal caused by Hilbert could have done more damage to Einstein than any scandal before ("Ein Skandal Hilberts hätte ihm mehr geschadet als jeder andere zuvor").
David E. Rowe (2006).
The contentions of Wuensch and Sommer have been strongly contested by the historian of mathematics and natural sciences David E. Rowe in a detailed review of Wuensch's book published in "Historia Mathematica" in 2006. Rowe argues that Wuensch's book offers nothing but tendentious, unsubstantiated, and in many cases highly implausible, speculations.
In popular works by famous physicists.
Wolfgang Pauli's Encyclopedia entry for the theory of relativity pointed out two reasons physicists did not consider Hilbert's derivation equivalent to Einstein's: (1) it required accepting the stationary-action principle as a physical axiom and, more importantly, (2) it was based on Mie's unified field theory.
In his 1999 article for "Time" magazine, which named Einstein Person of the Century, Stephen Hawking wrote: <templatestyles src="Template:Blockquote/styles.css" />"Einstein had discussed his ideas with the mathematician David Hilbert during a visit to the University of Gottingen in the summer of 1915, and Hilbert independently found the same equations a few days before Einstein. Nevertheless, as Hilbert admitted, the credit for the new theory belonged to Einstein. It was his idea to relate gravity to the warping of space-time."
Kip Thorne concludes, in remarks based on Hilbert's 1924 paper, that Hilbert regarded the general theory of relativity as Einstein's: <templatestyles src="Template:Blockquote/styles.css" />"Quite naturally, and in accord with Hilbert's view of things, the resulting law of warpage was quickly given the name the Einstein field equation rather than being named after Hilbert. Hilbert had carried out the last few mathematical steps to its discovery independently and almost simultaneously with Einstein, but Einstein was responsible for essentially everything that preceded those steps...". However, Kip Thorne also stated, "Remarkably, Einstein was not the first to discover the correct form of the law of warpage [. . . .] Recognition for the first discovery must go to Hilbert" based on "the things he had learned from Einstein's summer visit to Göttingen." This last point is also mentioned by Corry et al.
Insignificance of the dispute.
As noted by the historians John Earman and Clark Glymour, "questions about the priority of discoveries are often among the least interesting and least important issues in the history of science." There was no real controversy between Einstein and Hilbert themselves:
<templatestyles src="Template:Blockquote/styles.css" />
And:
<templatestyles src="Template:Blockquote/styles.css" />"Hilbert always remained aware of the fact that the great principal physical idea was Einstein's, and he expressed it in numerous lectures and memoirs ..."
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "g^{\\mu\\nu}"
},
{
"math_id": 1,
"text": "g_{\\mu\\nu}"
},
{
"math_id": 2,
"text": "q_s"
}
] |
https://en.wikipedia.org/wiki?curid=65595160
|
656
|
Acid
|
Chemical compound giving a proton or accepting an electron pair
An acid is a molecule or ion capable of either donating a proton (i.e. hydrogen ion, H+), known as a Brønsted–Lowry acid, or forming a covalent bond with an electron pair, known as a Lewis acid.
The first category of acids are the proton donors, or Brønsted–Lowry acids. In the special case of aqueous solutions, proton donors form the hydronium ion H3O+ and are known as Arrhenius acids. Brønsted and Lowry generalized the Arrhenius theory to include non-aqueous solvents. A Brønsted or Arrhenius acid usually contains a hydrogen atom bonded to a chemical structure that is still energetically favorable after loss of H+.
Aqueous Arrhenius acids have characteristic properties that provide a practical description of an acid. Acids form aqueous solutions with a sour taste, can turn blue litmus red, and react with bases and certain metals (like calcium) to form salts. The word "acid" is derived from the Latin "acidus", meaning 'sour'. An aqueous solution of an acid has a pH less than 7 and is colloquially also referred to as "acid" (as in "dissolved in acid"), while the strict definition refers only to the solute. A lower pH means a higher acidity, and thus a higher concentration of positive hydrogen ions in the solution. Chemicals or substances having the property of an acid are said to be acidic.
Common aqueous acids include hydrochloric acid (a solution of hydrogen chloride that is found in gastric acid in the stomach and activates digestive enzymes), acetic acid (vinegar is a dilute aqueous solution of this liquid), sulfuric acid (used in car batteries), and citric acid (found in citrus fruits). As these examples show, acids (in the colloquial sense) can be solutions or pure substances, and can be derived from acids (in the strict sense) that are solids, liquids, or gases. Strong acids and some concentrated weak acids are corrosive, but there are exceptions such as carboranes and boric acid.
The second category of acids are Lewis acids, which form a covalent bond with an electron pair. An example is boron trifluoride (BF3), whose boron atom has a vacant orbital that can form a covalent bond by sharing a lone pair of electrons on an atom in a base, for example the nitrogen atom in ammonia (NH3). Lewis considered this as a generalization of the Brønsted definition, so that an acid is a chemical species that accepts electron pairs either directly "or" by releasing protons (H+) into the solution, which then accept electron pairs. Hydrogen chloride, acetic acid, and most other Brønsted–Lowry acids cannot form a covalent bond with an electron pair, however, and are therefore not Lewis acids. Conversely, many Lewis acids are not Arrhenius or Brønsted–Lowry acids. In modern terminology, an "acid" is implicitly a Brønsted acid and not a Lewis acid, since chemists almost always refer to a Lewis acid explicitly as such.
Definitions and concepts.
Modern definitions are concerned with the fundamental chemical reactions common to all acids.
Most acids encountered in everyday life are aqueous solutions, or can be dissolved in water, so the Arrhenius and Brønsted–Lowry definitions are the most relevant.
The Brønsted–Lowry definition is the most widely used definition; unless otherwise specified, acid–base reactions are assumed to involve the transfer of a proton (H+) from an acid to a base.
Hydronium ions are acids according to all three definitions. Although alcohols and amines can be Brønsted–Lowry acids, they can also function as Lewis bases due to the lone pairs of electrons on their oxygen and nitrogen atoms.
Arrhenius acids.
In 1884, Svante Arrhenius attributed the properties of acidity to hydrogen ions (H+), later described as protons or hydrons. An Arrhenius acid is a substance that, when added to water, increases the concentration of H+ ions in the water. Chemists often write H+("aq") and refer to the hydrogen ion when describing acid–base reactions but the free hydrogen nucleus, a proton, does not exist alone in water, it exists as the hydronium ion (H3O+) or other forms (H5O2+, H9O4+). Thus, an Arrhenius acid can also be described as a substance that increases the concentration of hydronium ions when added to water. Examples include molecular substances such as hydrogen chloride and acetic acid.
An Arrhenius base, on the other hand, is a substance that increases the concentration of hydroxide (OH−) ions when dissolved in water. This decreases the concentration of hydronium because the ions react to form H2O molecules:
H3O+(aq) + OH−(aq) ⇌ H2O(liq) + H2O(liq)
Due to this equilibrium, any increase in the concentration of hydronium is accompanied by a decrease in the concentration of hydroxide. Thus, an Arrhenius acid could also be said to be one that decreases hydroxide concentration, while an Arrhenius base increases it.
In an acidic solution, the concentration of hydronium ions is greater than 10−7 moles per liter. Since pH is defined as the negative logarithm of the concentration of hydronium ions, acidic solutions thus have a pH of less than 7.
Brønsted–Lowry acids.
While the Arrhenius concept is useful for describing many reactions, it is also quite limited in its scope. In 1923, chemists Johannes Nicolaus Brønsted and Thomas Martin Lowry independently recognized that acid–base reactions involve the transfer of a proton. A Brønsted–Lowry acid (or simply Brønsted acid) is a species that donates a proton to a Brønsted–Lowry base. Brønsted–Lowry acid–base theory has several advantages over Arrhenius theory. Consider the following reactions of acetic acid (CH3COOH), the organic acid that gives vinegar its characteristic taste:
Both theories easily describe the first reaction: CH3COOH acts as an Arrhenius acid because it acts as a source of H3O+ when dissolved in water, and it acts as a Brønsted acid by donating a proton to water. In the second example CH3COOH undergoes the same transformation, in this case donating a proton to ammonia (NH3), but does not relate to the Arrhenius definition of an acid because the reaction does not produce hydronium. Nevertheless, CH3COOH is both an Arrhenius and a Brønsted–Lowry acid.
Brønsted–Lowry theory can be used to describe reactions of molecular compounds in nonaqueous solution or the gas phase. Hydrogen chloride (HCl) and ammonia combine under several different conditions to form ammonium chloride, NH4Cl. In aqueous solution HCl behaves as hydrochloric acid and exists as hydronium and chloride ions. The following reactions illustrate the limitations of Arrhenius's definition:
As with the acetic acid reactions, both definitions work for the first example, where water is the solvent and hydronium ion is formed by the HCl solute. The next two reactions do not involve the formation of ions but are still proton-transfer reactions. In the second reaction hydrogen chloride and ammonia (dissolved in benzene) react to form solid ammonium chloride in a benzene solvent and in the third gaseous HCl and NH3 combine to form the solid.
Lewis acids.
A third, only marginally related concept was proposed in 1923 by Gilbert N. Lewis, which includes reactions with acid–base characteristics that do not involve a proton transfer. A Lewis acid is a species that accepts a pair of electrons from another species; in other words, it is an electron pair acceptor. Brønsted acid–base reactions are proton transfer reactions while Lewis acid–base reactions are electron pair transfers. Many Lewis acids are not Brønsted–Lowry acids. Contrast how the following reactions are described in terms of acid–base chemistry:
In the first reaction a fluoride ion, F−, gives up an electron pair to boron trifluoride to form the product tetrafluoroborate. Fluoride "loses" a pair of valence electrons because the electrons shared in the B—F bond are located in the region of space between the two atomic nuclei and are therefore more distant from the fluoride nucleus than they are in the lone fluoride ion. BF3 is a Lewis acid because it accepts the electron pair from fluoride. This reaction cannot be described in terms of Brønsted theory because there is no proton transfer.
The second reaction can be described using either theory. A proton is transferred from an unspecified Brønsted acid to ammonia, a Brønsted base; alternatively, ammonia acts as a Lewis base and transfers a lone pair of electrons to form a bond with a hydrogen ion. The species that gains the electron pair is the Lewis acid; for example, the oxygen atom in H3O+ gains a pair of electrons when one of the H—O bonds is broken and the electrons shared in the bond become localized on oxygen.
Depending on the context, a Lewis acid may also be described as an oxidizer or an electrophile. Organic Brønsted acids, such as acetic, citric, or oxalic acid, are not Lewis acids. They dissociate in water to produce a Lewis acid, H+, but at the same time, they also yield an equal amount of a Lewis base (acetate, citrate, or oxalate, respectively, for the acids mentioned). This article deals mostly with Brønsted acids rather than Lewis acids.
Dissociation and equilibrium.
Reactions of acids are often generalized in the form HA ⇌ H+ + A−, where HA represents the acid and A− is the conjugate base. This reaction is referred to as protolysis. The protonated form (HA) of an acid is also sometimes referred to as the free acid.
Acid–base conjugate pairs differ by one proton, and can be interconverted by the addition or removal of a proton (protonation and deprotonation, respectively). The acid can be the charged species and the conjugate base can be neutral in which case the generalized reaction scheme could be written as . In solution there exists an equilibrium between the acid and its conjugate base. The equilibrium constant "K" is an expression of the equilibrium concentrations of the molecules or the ions in solution. Brackets indicate concentration, such that [H2O] means "the concentration of H2O". The acid dissociation constant "K"a is generally used in the context of acid–base reactions. The numerical value of "K"a is equal to the product (multiplication) of the concentrations of the products divided by the concentration of the reactants, where the reactant is the acid (HA) and the products are the conjugate base and H+.
formula_0
The stronger of two acids will have a higher "K"a than the weaker acid; the ratio of hydrogen ions to acid will be higher for the stronger acid as the stronger acid has a greater tendency to lose its proton. Because the range of possible values for "K"a spans many orders of magnitude, a more manageable constant, p"K"a, is more frequently used, where p"K"a = −log10 "K"a. Stronger acids have a smaller p"K"a than weaker acids. Experimentally determined p"K"a values at 25 °C in aqueous solution are often quoted in textbooks and reference material.
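As a rough illustration of these relationships, the sketch below converts "K"a to p"K"a and estimates the pH of a weak monoprotic acid solution directly from the equilibrium expression; the acetic acid value of about 1.8 × 10−5 is an assumed textbook figure, and neglecting water autoionization is an approximation that holds for reasonably concentrated solutions:

```python
import math

def pKa(Ka):
    """pKa = -log10(Ka); stronger acids have smaller pKa."""
    return -math.log10(Ka)

def weak_acid_pH(Ka, C):
    """Approximate pH of a weak monoprotic acid HA at analytical concentration C (mol/L).
    Solves Ka = x**2 / (C - x) for x = [H+], ignoring water autoionization."""
    x = (-Ka + math.sqrt(Ka * Ka + 4 * Ka * C)) / 2
    return -math.log10(x)

Ka_acetic = 1.8e-5  # acetic acid at 25 degrees C (assumed textbook value)
print(f"pKa = {pKa(Ka_acetic):.2f}")                                   # ~4.74
print(f"pH of 0.10 M solution = {weak_acid_pH(Ka_acetic, 0.10):.2f}")  # ~2.88
```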
Nomenclature.
Arrhenius acids are named according to their anions. In the classical naming system, the ionic suffix is dropped and replaced with a new suffix, according to the table following. The prefix "hydro-" is used when the acid is made up of just hydrogen and one other element. For example, HCl has chloride as its anion, so the hydro- prefix is used, and the -ide suffix makes the name take the form hydrochloric acid.
"Classical naming system:"
In the IUPAC naming system, "aqueous" is simply added to the name of the ionic compound. Thus, for hydrogen chloride, as an acid solution, the IUPAC name is aqueous hydrogen chloride.
Acid strength.
The strength of an acid refers to its ability or tendency to lose a proton. A strong acid is one that completely dissociates in water; in other words, one mole of a strong acid HA dissolves in water yielding one mole of H+ and one mole of the conjugate base, A−, and none of the protonated acid HA. In contrast, a weak acid only partially dissociates and at equilibrium both the acid and the conjugate base are in solution. Examples of strong acids are hydrochloric acid (HCl), hydroiodic acid (HI), hydrobromic acid (HBr), perchloric acid (HClO4), nitric acid (HNO3) and sulfuric acid (H2SO4). In water each of these essentially ionizes 100%. The stronger an acid is, the more easily it loses a proton, H+. Two key factors that contribute to the ease of deprotonation are the polarity of the H—A bond and the size of atom A, which determines the strength of the H—A bond. Acid strengths are also often discussed in terms of the stability of the conjugate base.
Stronger acids have a larger acid dissociation constant, "K"a and a lower p"K"a than weaker acids.
Sulfonic acids, which are organic oxyacids, are a class of strong acids. A common example is toluenesulfonic acid (tosylic acid). Unlike sulfuric acid itself, sulfonic acids can be solids. In fact, polystyrene functionalized into polystyrene sulfonate is a solid strongly acidic plastic that is filterable.
Superacids are acids stronger than 100% sulfuric acid. Examples of superacids are fluoroantimonic acid, magic acid and perchloric acid. The strongest known acid is helium hydride ion, with a proton affinity of 177.8 kJ/mol. Superacids can permanently protonate water to give ionic, crystalline hydronium "salts". They can also quantitatively stabilize carbocations.
While "K"a measures the strength of an acid compound, the strength of an aqueous acid solution is measured by pH, which is an indication of the concentration of hydronium in the solution. The pH of a simple solution of an acid compound in water is determined by the dilution of the compound and the compound's "K"a.
Lewis acid strength in non-aqueous solutions.
Lewis acids have been classified in the ECW model and it has been shown that there is no one order of acid strengths. The relative acceptor strength of Lewis acids toward a series of bases, versus other Lewis acids, can be illustrated by C-B plots. It has been shown that to define the order of Lewis acid strength at least two properties must be considered. For Pearson's qualitative HSAB theory the two properties are hardness and strength while for Drago's quantitative ECW model the two properties are electrostatic and covalent.
Chemical characteristics.
Monoprotic acids.
Monoprotic acids, also known as monobasic acids, are those acids that are able to donate one proton per molecule during the process of dissociation (sometimes called ionization) as shown below (symbolized by HA):
"K"a
Common examples of monoprotic acids in mineral acids include hydrochloric acid (HCl) and nitric acid (HNO3). On the other hand, for organic acids the term mainly indicates the presence of one carboxylic acid group and sometimes these acids are known as monocarboxylic acid. Examples in organic acids include formic acid (HCOOH), acetic acid (CH3COOH) and benzoic acid (C6H5COOH).
Polyprotic acids.
Polyprotic acids, also known as polybasic acids, are able to donate more than one proton per acid molecule, in contrast to monoprotic acids that only donate one proton per molecule. Specific types of polyprotic acids have more specific names, such as diprotic (or dibasic) acid (two potential protons to donate), and triprotic (or tribasic) acid (three potential protons to donate). Some macromolecules such as proteins and nucleic acids can have a very large number of acidic protons.
A diprotic acid (here symbolized by H2A) can undergo one or two dissociations depending on the pH. Each dissociation has its own dissociation constant, Ka1 and Ka2.
"K"a1
"K"a2
The first dissociation constant is typically greater than the second (i.e., "K"a1 > "K"a2). For example, sulfuric acid (H2SO4) can donate one proton to form the bisulfate anion (HSO4−), for which "K"a1 is very large; then it can donate a second proton to form the sulfate anion (SO42−), for which the "K"a2 is of intermediate strength. The large "K"a1 for the first dissociation makes sulfuric a strong acid. In a similar manner, the weak unstable carbonic acid (H2CO3) can lose one proton to form bicarbonate anion (HCO3−) and lose a second to form carbonate anion (CO32−). Both "K"a values are small, but "K"a1 > "K"a2.
A triprotic acid (H3A) can undergo one, two, or three dissociations and has three dissociation constants, where "K"a1 > "K"a2 > "K"a3.
"K"a1
"K"a2
"K"a3
An inorganic example of a triprotic acid is orthophosphoric acid (H3PO4), usually just called phosphoric acid. All three protons can be successively lost to yield H2PO4−, then HPO42−, and finally PO43−, the orthophosphate ion, usually just called phosphate. Even though the positions of the three protons on the original phosphoric acid molecule are equivalent, the successive "K"a values differ since it is energetically less favorable to lose a proton if the conjugate base is more negatively charged. An organic example of a triprotic acid is citric acid, which can successively lose three protons to finally form the citrate ion.
Although the subsequent loss of each hydrogen ion is less favorable, all of the conjugate bases are present in solution. The fractional concentration, "α" (alpha), for each species can be calculated. For example, a generic diprotic acid will generate 3 species in solution: H2A, HA−, and A2−. The fractional concentrations can be calculated as below when given either the pH (which can be converted to the [H+]) or the concentrations of the acid with all its conjugate bases:
formula_1
A plot of these fractional concentrations against pH, for given "K"1 and "K"2, is known as a Bjerrum plot. A pattern is observed in the above equations and can be expanded to the general "n"-protic acid that has been deprotonated "i" times:
formula_2
where "K"0 = 1 and the other K-terms are the dissociation constants for the acid.
Neutralization.
Neutralization is the reaction between an acid and a base, producing a salt and neutralized base; for example, hydrochloric acid and sodium hydroxide form sodium chloride and water:
HCl(aq) + NaOH(aq) → H2O(l) + NaCl(aq)
Neutralization is the basis of titration, where a pH indicator shows equivalence point when the equivalent number of moles of a base have been added to an acid. It is often wrongly assumed that neutralization should result in a solution with pH 7.0, which is only the case with similar acid and base strengths during a reaction.
Neutralization with a base weaker than the acid results in a weakly acidic salt. An example is the weakly acidic ammonium chloride, which is produced from the strong acid hydrogen chloride and the weak base ammonia. Conversely, neutralizing a weak acid with a strong base gives a weakly basic salt (e.g., sodium fluoride from hydrogen fluoride and sodium hydroxide).
Weak acid–weak base equilibrium.
In order for a protonated acid to lose a proton, the pH of the system must rise above the p"K"a of the acid. The decreased concentration of H+ in that basic solution shifts the equilibrium towards the conjugate base form (the deprotonated form of the acid). In lower-pH (more acidic) solutions, there is a high enough H+ concentration in the solution to cause the acid to remain in its protonated form.
Solutions of weak acids and salts of their conjugate bases form buffer solutions.
Titration.
To determine the concentration of an acid in an aqueous solution, an acid–base titration is commonly performed. A strong base solution with a known concentration, usually NaOH or KOH, is added to neutralize the acid solution, and the color change of an indicator shows when enough base has been added. The titration curve of an acid titrated by a base has two axes, with the base volume on the x-axis and the solution's pH value on the y-axis. The pH of the solution always goes up as the base is added to the solution.
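For the simplest case, a strong monoprotic acid titrated with a strong base, such a curve can be computed point by point from the excess of acid or base; the sketch below assumes 25 °C (so "K"w = 10−14), ignores activity effects, and uses illustrative concentrations:

```python
import math

def strong_acid_titration_pH(Ca, Va, Cb, Vb):
    """pH while titrating Va liters of strong acid (concentration Ca, mol/L)
    with Vb liters of strong base (concentration Cb, mol/L)."""
    excess_acid = Ca * Va - Cb * Vb        # leftover moles of H+
    total_volume = Va + Vb
    if excess_acid > 0:
        return -math.log10(excess_acid / total_volume)
    if excess_acid < 0:
        pOH = -math.log10(-excess_acid / total_volume)
        return 14.0 - pOH                  # assumes Kw = 1e-14 (25 degrees C)
    return 7.0                             # equivalence point, strong acid vs. strong base

# 25.0 mL of 0.100 M acid titrated with 0.100 M base
for mL in (0, 10, 24, 25, 26, 40):
    pH = strong_acid_titration_pH(0.100, 0.025, 0.100, mL / 1000)
    print(f"{mL:>2} mL base: pH = {pH:.2f}")
```

The steep jump around 25 mL is the equivalence point discussed below; weak or polyprotic acids require the equilibrium treatment from the previous section rather than this simple mole balance.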
Example: Diprotic acid.
For each diprotic acid titration curve, from left to right, there are two midpoints, two equivalence points, and two buffer regions.
Equivalence points.
Due to the successive dissociation processes, there are two equivalence points in the titration curve of a diprotic acid. The first equivalence point occurs when all of the hydrogen ions from the first ionization are titrated. In other words, the amount of OH− added equals the original amount of H2A at the first equivalence point. The second equivalence point occurs when all hydrogen ions are titrated. Therefore, the amount of OH− added equals twice the amount of H2A at this time. For a weak diprotic acid titrated by a strong base, the second equivalence point must occur at pH above 7 due to the hydrolysis of the resulting salts in the solution. At either equivalence point, adding a drop of base will cause the steepest rise of the pH value in the system.
Buffer regions and midpoints.
A titration curve for a diprotic acid contains two midpoints where pH = pKa. Since there are two different Ka values, the first midpoint occurs at pH = pKa1 and the second one occurs at pH = pKa2. Each segment of the curve that contains a midpoint at its center is called the buffer region. Because the buffer regions consist of the acid and its conjugate base, the solution can resist pH changes when base is added, until the next equivalence point.
Applications of acids.
In industry.
Acids are fundamental reagents in a great many processes in modern industry. Sulfuric acid, a diprotic acid, is the most widely used acid in industry, and is also the most-produced industrial chemical in the world. It is mainly used in producing fertilizer, detergent, batteries and dyes, as well as in processing many products, for example by removing impurities. According to 2011 statistics, the annual world production of sulfuric acid was around 200 million tonnes. For example, phosphate minerals react with sulfuric acid to produce phosphoric acid for the production of phosphate fertilizers, and zinc is produced by dissolving zinc oxide into sulfuric acid, purifying the solution and electrowinning.
In the chemical industry, acids react in neutralization reactions to produce salts. For example, nitric acid reacts with ammonia to produce ammonium nitrate, a fertilizer. Additionally, carboxylic acids can be esterified with alcohols, to produce esters.
Acids are often used to remove rust and other corrosion from metals in a process known as pickling. They may be used as an electrolyte in a wet cell battery, such as sulfuric acid in a car battery.
In food.
Tartaric acid is an important component of some commonly used foods like unripened mangoes and tamarind. Natural fruits and vegetables also contain acids. Citric acid is present in oranges, lemon and other citrus fruits. Oxalic acid is present in tomatoes, spinach, and especially in carambola and rhubarb; rhubarb leaves and unripe carambolas are toxic because of high concentrations of oxalic acid. Ascorbic acid (Vitamin C) is an essential vitamin for the human body and is present in such foods as amla (Indian gooseberry), lemon, citrus fruits, and guava.
Many acids can be found in various kinds of food as additives, as they alter their taste and serve as preservatives. Phosphoric acid, for example, is a component of cola drinks. Acetic acid is used in day-to-day life as vinegar. Citric acid is used as a preservative in sauces and pickles.
Carbonic acid is one of the most common acid additives in soft drinks. During the manufacturing process, CO2 is usually pressurized to dissolve in these drinks to generate carbonic acid. Carbonic acid is very unstable and tends to decompose into water and CO2 at room temperature and pressure. Therefore, when bottles or cans of these kinds of soft drinks are opened, the soft drinks fizz and effervesce as CO2 bubbles come out.
Certain acids are used as drugs. Acetylsalicylic acid (Aspirin) is used as a pain killer and for bringing down fevers.
In human bodies.
Acids play important roles in the human body. The hydrochloric acid present in the stomach aids digestion by breaking down large and complex food molecules. Amino acids are required for synthesis of proteins required for growth and repair of body tissues. Fatty acids are also required for growth and repair of body tissues. Nucleic acids are important for the manufacturing of DNA and RNA and transmitting of traits to offspring through genes. Carbonic acid is important for maintenance of pH equilibrium in the body.
Human bodies contain a variety of organic and inorganic compounds; among those, dicarboxylic acids play an essential role in many biological behaviors. Many of those acids are amino acids, which mainly serve as materials for the synthesis of proteins. Other weak acids serve as buffers with their conjugate bases to keep the body's pH from undergoing large scale changes that would be harmful to cells. The rest of the dicarboxylic acids also participate in the synthesis of various biologically important compounds in human bodies.
Acid catalysis.
Acids are used as catalysts in industrial and organic chemistry; for example, sulfuric acid is used in very large quantities in the alkylation process to produce gasoline. Some acids, such as sulfuric, phosphoric, and hydrochloric acids, also effect dehydration and condensation reactions. In biochemistry, many enzymes employ acid catalysis.
Biological occurrence.
Many biologically important molecules are acids. Nucleic acids, which contain acidic phosphate groups, include DNA and RNA. Nucleic acids contain the genetic code that determines many of an organism's characteristics, and is passed from parents to offspring. DNA contains the chemical blueprint for the synthesis of proteins, which are made up of amino acid subunits. Cell membranes contain fatty acid esters such as phospholipids.
An α-amino acid has a central carbon (the α or "alpha" carbon) that is covalently bonded to a carboxyl group (thus they are carboxylic acids), an amino group, a hydrogen atom and a variable group. The variable group, also called the R group or side chain, determines the identity and many of the properties of a specific amino acid. In glycine, the simplest amino acid, the R group is a hydrogen atom, but in all other amino acids it contains one or more carbon atoms bonded to hydrogens, and may contain other elements such as sulfur, oxygen or nitrogen. With the exception of glycine, naturally occurring amino acids are chiral and almost invariably occur in the L-configuration. Peptidoglycan, found in some bacterial cell walls, contains some D-amino acids. At physiological pH, typically around 7, free amino acids exist in a charged form, where the acidic carboxyl group (-COOH) loses a proton (-COO−) and the basic amine group (-NH2) gains a proton (-NH3+). The entire molecule has a net neutral charge and is a zwitterion, with the exception of amino acids with basic or acidic side chains. Aspartic acid, for example, possesses one protonated amine and two deprotonated carboxyl groups, for a net charge of −1 at physiological pH.
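A rough sketch of this pH dependence for a simple amino acid without an ionizable side chain, using the Henderson–Hasselbalch relation for each group; the p"K"a values of about 2.3 for the carboxyl group and 9.6 for the amino group are assumed, typical textbook figures for glycine, and the function name is illustrative:

```python
def amino_acid_net_charge(pH, pKa_carboxyl=2.3, pKa_amino=9.6):
    """Approximate net charge of a simple amino acid (no ionizable side chain),
    treating each group with the Henderson-Hasselbalch equation."""
    carboxyl_deprotonated = 1 / (1 + 10 ** (pKa_carboxyl - pH))  # fraction as -COO-
    amino_protonated = 1 / (1 + 10 ** (pH - pKa_amino))          # fraction as -NH3+
    return amino_protonated - carboxyl_deprotonated

for pH in (1, 2.3, 7.4, 9.6, 12):
    print(f"pH {pH:>4}: net charge ~ {amino_acid_net_charge(pH):+.2f}")
```

Around physiological pH the two contributions cancel, which is the zwitterionic behavior described above; acidic or basic side chains would add further terms.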
Fatty acids and fatty acid derivatives are another group of carboxylic acids that play a significant role in biology. These contain long hydrocarbon chains and a carboxylic acid group on one end. The cell membrane of nearly all organisms is primarily made up of a phospholipid bilayer, a micelle of hydrophobic fatty acid esters with polar, hydrophilic phosphate "head" groups. Membranes contain additional components, some of which can participate in acid–base reactions.
In humans and many other animals, hydrochloric acid is a part of the gastric acid secreted within the stomach to help hydrolyze proteins and polysaccharides, as well as converting the inactive pro-enzyme, pepsinogen into the enzyme, pepsin. Some organisms produce acids for defense; for example, ants produce formic acid.
Acid–base equilibrium plays a critical role in regulating mammalian breathing. Oxygen gas (O2) drives cellular respiration, the process by which animals release the chemical potential energy stored in food, producing carbon dioxide (CO2) as a byproduct. Oxygen and carbon dioxide are exchanged in the lungs, and the body responds to changing energy demands by adjusting the rate of ventilation. For example, during periods of exertion the body rapidly breaks down stored carbohydrates and fat, releasing CO2 into the blood stream. In aqueous solutions such as blood CO2 exists in equilibrium with carbonic acid and bicarbonate ion.
It is the decrease in pH that signals the brain to breathe faster and deeper, expelling the excess CO2 and resupplying the cells with O2.
Cell membranes are generally impermeable to charged or large, polar molecules because of the lipophilic fatty acyl chains comprising their interior. Many biologically important molecules, including a number of pharmaceutical agents, are organic weak acids that can cross the membrane in their protonated, uncharged form but not in their charged form (i.e., as the conjugate base). For this reason the activity of many drugs can be enhanced or inhibited by the use of antacids or acidic foods. The charged form, however, is often more soluble in blood and cytosol, both aqueous environments. When the extracellular environment is more acidic than the neutral pH within the cell, certain acids will exist in their neutral form and will be membrane soluble, allowing them to cross the phospholipid bilayer. Acids that lose a proton at the intracellular pH will exist in their soluble, charged form and are thus able to diffuse through the cytosol to their target. Ibuprofen, aspirin and penicillin are examples of drugs that are weak acids.
Common acids.
Sulfonic acids.
A sulfonic acid has the general formula RS(=O)2–OH, where R is an organic radical.
Carboxylic acids.
A carboxylic acid has the general formula R-C(O)OH, where R is an organic radical. The carboxyl group -C(O)OH contains a carbonyl group, C=O, and a hydroxyl group, O-H.
Halogenated carboxylic acids.
Halogenation at the alpha position increases acid strength, so that the following acids are all stronger than acetic acid.
Vinylogous carboxylic acids.
Normal carboxylic acids are the direct union of a carbonyl group and a hydroxyl group. In vinylogous carboxylic acids, a carbon-carbon double bond separates the carbonyl and hydroxyl groups.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K_a = \\frac\\ce{[H+] [A^{-}]}\\ce{[HA]}"
},
{
"math_id": 1,
"text": "\\begin{align}\n\\alpha_\\ce{H2A} &= \\frac{\\ce{[H+]^2}}{\\ce{[H+]^2} + [\\ce{H+}]K_1 + K_1 K_2} = \\frac{\\ce{[H2A]}}{\\ce{{[H2A]}} + [HA^-] + [A^{2-}]}\\\\ \n\\alpha_\\ce{HA^-} &= \\frac{[\\ce{H+}]K_1}{\\ce{[H+]^2} + [\\ce{H+}]K_1 + K_1 K_2} = \\frac{\\ce{[HA^-]}}{\\ce{[H2A]}+{[HA^-]}+{[A^{2-}]}}\\\\\n\\alpha_\\ce{A^{2-}}&= \\frac{K_1 K_2}{\\ce{[H+]^2} + [\\ce{H+}]K_1 + K_1 K_2} = \\frac{\\ce{[A^{2-}]}}{\\ce{{[H2A]}}+{[HA^-]}+{[A^{2-}]}}\n\\end{align}"
},
{
"math_id": 2,
"text": "\n\\alpha_{\\ce H_{n-i} A^{i-} }= { {[\\ce{H+}]^{n-i} \\displaystyle \\prod_{j=0}^{i}K_j} \\over { \\displaystyle \\sum_{i=0}^n \\Big[ [\\ce{H+}]^{n-i} \\displaystyle \\prod_{j=0}^{i}K_j} \\Big] }\n"
}
] |
https://en.wikipedia.org/wiki?curid=656
|
65601334
|
Overcategory
|
Category theory concept
In mathematics, specifically category theory, an overcategory (and undercategory) is a distinguished class of categories used in multiple contexts, such as with covering spaces (espace etale). They were introduced as a mechanism for keeping track of data surrounding a fixed object formula_0 in some category formula_1. There is a dual notion of undercategory, which is defined similarly.
Definition.
Let formula_1 be a category and formula_0 a fixed object of formula_1. The overcategory (also called a slice category) formula_2 is an associated category whose objects are pairs formula_3 where formula_4 is a morphism in formula_1. Then, a morphism between objects formula_5 is given by a morphism formula_6 in the category formula_1 such that the following diagram commutes: formula_7 There is a dual notion called the undercategory (also called a coslice category) formula_8 whose objects are pairs formula_9 where formula_10 is a morphism in formula_1. Then, morphisms in formula_8 are given by morphisms formula_11 in formula_1 such that the following diagram commutes: formula_12 These two notions have generalizations in 2-category theory and higher category theory, with definitions either analogous or essentially the same.
Properties.
Many categorical properties of formula_1 are inherited by the associated over and undercategories for an object formula_0. For example, if formula_1 has finite products and coproducts, it is immediate that the categories formula_2 and formula_8 have these properties, since the product and coproduct can be constructed in formula_1, and through universal properties there exists a unique morphism either to formula_0 or from formula_0. This applies to limits and colimits as well.
Examples.
Overcategories on a site.
Recall that a site formula_1 is a categorical generalization of a topological space, first introduced by Grothendieck. One of the canonical examples comes directly from topology: the category formula_13, whose objects are the open subsets formula_14 of some topological space formula_0 and whose morphisms are given by inclusion maps. Then, for a fixed open subset formula_14, the overcategory formula_15 is canonically equivalent to the category formula_16 for the induced topology on formula_17. This is because every object in formula_15 is an open subset formula_18 contained in formula_14.
Category of algebras as an undercategory.
The category of commutative formula_19-algebras is equivalent to the undercategory formula_20 for the category of commutative rings. This is because the structure of an formula_19-algebra on a commutative ring formula_21 is directly encoded by a ring morphism formula_22. If we consider the opposite category, it is an overcategory of affine schemes, formula_23, or just formula_24.
Overcategories of spaces.
Another common overcategory considered in the literature are overcategories of spaces, such as schemes, smooth manifolds, or topological spaces. These categories encode objects relative to a fixed object, such as the category of schemes over formula_25, formula_26. Fiber products in these categories can be considered intersections, given the objects are subobjects of the fixed object.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "\\mathcal{C}"
},
{
"math_id": 2,
"text": "\\mathcal{C}/X"
},
{
"math_id": 3,
"text": "(A, \\pi)"
},
{
"math_id": 4,
"text": "\\pi:A \\to X"
},
{
"math_id": 5,
"text": "f:(A, \\pi) \\to (A', \\pi')"
},
{
"math_id": 6,
"text": "f:A \\to A'"
},
{
"math_id": 7,
"text": "\\begin{matrix}\nA & \\xrightarrow{f} & A' \\\\\n\\pi\\downarrow \\text{ } & \\text{ } &\\text{ } \\downarrow \\pi' \\\\\nX & = & X\n\\end{matrix}"
},
{
"math_id": 8,
"text": "X/\\mathcal{C}"
},
{
"math_id": 9,
"text": "(B, \\psi)"
},
{
"math_id": 10,
"text": "\\psi:X\\to B"
},
{
"math_id": 11,
"text": "g: B \\to B'"
},
{
"math_id": 12,
"text": "\\begin{matrix}\nX & = & X \\\\\n\\psi\\downarrow \\text{ } & \\text{ } &\\text{ } \\downarrow \\psi' \\\\\nB & \\xrightarrow{g} & B'\n\\end{matrix}"
},
{
"math_id": 13,
"text": "\\text{Open}(X)"
},
{
"math_id": 14,
"text": "U"
},
{
"math_id": 15,
"text": "\\text{Open}(X)/U"
},
{
"math_id": 16,
"text": "\\text{Open}(U)"
},
{
"math_id": 17,
"text": "U \\subseteq X"
},
{
"math_id": 18,
"text": "V"
},
{
"math_id": 19,
"text": "A"
},
{
"math_id": 20,
"text": "A/\\text{CRing}"
},
{
"math_id": 21,
"text": "B"
},
{
"math_id": 22,
"text": "A \\to B"
},
{
"math_id": 23,
"text": "\\text{Aff}/\\text{Spec}(A)"
},
{
"math_id": 24,
"text": "\\text{Aff}_A"
},
{
"math_id": 25,
"text": "S"
},
{
"math_id": 26,
"text": "\\text{Sch}/S"
}
] |
https://en.wikipedia.org/wiki?curid=65601334
|
656099
|
De Casteljau's algorithm
|
Method to evaluate polynomials in Bernstein form
In the mathematical field of numerical analysis, De Casteljau's algorithm is a recursive method to evaluate polynomials in Bernstein form or Bézier curves, named after its inventor Paul de Casteljau. De Casteljau's algorithm can also be used to split a single Bézier curve into two Bézier curves at an arbitrary parameter value.
Although the algorithm is slower for most architectures when compared with the direct approach, it is more numerically stable.
Definition.
A Bézier curve formula_0 (of degree formula_1, with control points formula_2) can be written in Bernstein form as follows
formula_3
where formula_4 is a Bernstein basis polynomial
formula_5
The curve at point formula_6 can be evaluated with the recurrence relation
formula_7
Then, formula_0 at point formula_6 can be evaluated in formula_8 operations. The result formula_9 is given by
formula_10
Moreover, the Bézier curve formula_0 can be split at point formula_6 into two curves with respective control points:
formula_11
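As a rough illustration of the recurrence and of the splitting property, the following Python sketch (the function name de_casteljau_split and its layout are illustrative, not from any established library) evaluates a curve with scalar control points at "t"0 and also returns the control points of the two sub-curves; for curves in two or three dimensions it would be applied to each coordinate separately.
def de_casteljau_split(t0: float, beta: list[float]) -> tuple[float, list[float], list[float]]:
    """Evaluate a Bezier curve at t0 and split it into two sub-curves.

    Returns (B(t0), control points of the first curve, control points of the second curve).
    """
    b = list(beta)     # beta_i^(0)
    left = [b[0]]      # will collect beta_0^(0), ..., beta_0^(n)
    right = [b[-1]]    # collected in reverse order
    n = len(b)
    for j in range(1, n):
        # One de Casteljau step: beta_i^(j) = (1 - t0) * beta_i^(j-1) + t0 * beta_{i+1}^(j-1)
        b = [(1 - t0) * b[i] + t0 * b[i + 1] for i in range(n - j)]
        left.append(b[0])
        right.append(b[-1])
    return b[0], left, right[::-1]
For example, de_casteljau_split(0.5, [0, 128, 256, 128]) returns 160.0 together with the two halves of the control polygon.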
Geometric interpretation.
The geometric interpretation of De Casteljau's algorithm is straightforward. Consider a Bézier curve with control points formula_12. Connecting the consecutive points gives the control polygon of the curve. Subdividing each segment of this polygon in the ratio formula_13 and connecting the resulting points produces a new polygon with one fewer segment. Repeating the process until a single point remains yields the point of the curve corresponding to the parameter formula_14.
The following picture shows this process for a cubic Bézier curve:
Note that the intermediate points that were constructed are in fact the control points for two new Bézier curves, both exactly coincident with the old one. This algorithm not only evaluates the curve at formula_14, but splits the curve into two pieces at formula_14, and provides the equations of the two sub-curves in Bézier form.
The interpretation given above is valid for a nonrational Bézier curve. To evaluate a rational Bézier curve in formula_15, we may project the point into formula_16; for example, a curve in three dimensions may have its control points formula_17 and weights formula_18 projected to the weighted control points formula_19. The algorithm then proceeds as usual, interpolating in formula_20. The resulting four-dimensional points may be projected back into three-space with a perspective divide.
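The projection described above can be sketched in a few lines of Python. This is only an illustration of the idea (the function name and argument layout are assumptions, not an established interface): weighted 3-D control points are lifted to formula_20, the ordinary algorithm is applied coordinate-wise, and the weight is divided out at the end.
def rational_de_casteljau(t0, points, weights):
    """Evaluate a rational Bezier curve at t0.

    points  -- list of (x, y, z) control points
    weights -- list of the corresponding weights w_i
    """
    # Lift to homogeneous coordinates (w*x, w*y, w*z, w).
    beta = [(w * x, w * y, w * z, w) for (x, y, z), w in zip(points, weights)]
    n = len(beta)
    for j in range(1, n):
        # Ordinary de Casteljau step, applied to all four coordinates.
        beta = [
            tuple((1 - t0) * a + t0 * b for a, b in zip(beta[i], beta[i + 1]))
            for i in range(n - j)
        ]
    xw, yw, zw, w = beta[0]
    # Perspective divide back into three-space.
    return (xw / w, yw / w, zw / w)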
In general, operations on a rational curve (or surface) are equivalent to operations on a nonrational curve in a projective space. This representation as the "weighted control points" and weights is often convenient when evaluating rational curves.
Notation.
When doing the calculation by hand it is useful to write down the coefficients in a triangle scheme as
formula_21
When choosing a point "t"0 to evaluate a Bernstein polynomial we can use the two diagonals of the triangle scheme to construct a division of the polynomial
formula_22
into
formula_23
and
formula_24
Bézier curve.
When evaluating a Bézier curve of degree "n" in 3-dimensional space with "n" + 1 control points P"i"
formula_25
with
formula_26
we split the Bézier curve into three separate equations
formula_27
which we evaluate individually using De Casteljau's algorithm.
Example.
We want to evaluate the Bernstein polynomial of degree 2 with the Bernstein coefficients
formula_28
at the point "t"0.
We start the recursion with
formula_29
and with the second iteration the recursion stops with
formula_30
which is the expected Bernstein polynomial of degree "2".
Implementations.
Here are example implementations of De Casteljau's algorithm in various programming languages.
Haskell.
deCasteljau :: Double -> [(Double, Double)] -> (Double, Double)
deCasteljau t [b] = b
deCasteljau t coefs = deCasteljau t reduced
  where
    reduced = zipWith (lerpP t) coefs (tail coefs)
    lerpP t (x0, y0) (x1, y1) = (lerp t x0 x1, lerp t y0 y1)
    lerp t a b = t * b + (1 - t) * a
Python.
def de_casteljau(t: float, coefs: list[float]) -> float:
    """De Casteljau's algorithm."""
    beta = coefs.copy()  # values in this list are overridden
    n = len(beta)
    for j in range(1, n):
        for k in range(n - j):
            beta[k] = beta[k] * (1 - t) + beta[k + 1] * t
    return beta[0]
Java.
public double deCasteljau(double t, double[] coefficients) {
    double[] beta = coefficients;
    int n = beta.length;
    for (int i = 1; i <= n; i++) {
        for (int j = 0; j < (n - i); j++) {
            beta[j] = beta[j] * (1 - t) + beta[j + 1] * t;
        }
    }
    return beta[0];
}
JavaScript.
The following JavaScript function applies De Casteljau's algorithm to an array of control points, or "poles" as De Casteljau originally called them, reducing them step by step until reaching a point on the curve for a given t between 0 (the first point of the curve) and 1 (the last one).
function crlPtReduceDeCasteljau(points, t) {
    let retArr = [ points.slice() ];
    while (points.length > 1) {
        let midpoints = [];
        for (let i = 0; i + 1 < points.length; ++i) {
            let ax = points[i][0];
            let ay = points[i][1];
            let bx = points[i + 1][0];
            let by = points[i + 1][1];
            // a * (1-t) + b * t = a + (b - a) * t
            midpoints.push([
                ax + (bx - ax) * t,
                ay + (by - ay) * t,
            ]);
        }
        retArr.push(midpoints);
        points = midpoints;
    }
    return retArr;
}
For example
var poles = [ [0, 128], [128, 0], [256, 0], [384, 128] ]
crlPtReduceDeCasteljau (poles, .5)
returns the array
[ [ [0, 128], [128, 0], [256, 0], [384, 128 ] ],
[ [64, 64], [192, 0], [320, 64] ],
[ [128, 32], [256, 32]],
[ [192, 32]] ]
These points and the segments joining them are plotted below.
|
[
{
"math_id": 0,
"text": "B"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\beta_0, \\ldots, \\beta_n"
},
{
"math_id": 3,
"text": "B(t) = \\sum_{i=0}^{n}\\beta_{i}b_{i,n}(t),"
},
{
"math_id": 4,
"text": "b"
},
{
"math_id": 5,
"text": "b_{i,n}(t) = {n \\choose i}(1-t)^{n-i}t^i."
},
{
"math_id": 6,
"text": "t_0"
},
{
"math_id": 7,
"text": "\\begin{align}\n\\beta_i^{(0)} &:= \\beta_i, && i=0,\\ldots,n \\\\\n\\beta_i^{(j)} &:= \\beta_i^{(j-1)} (1-t_0) + \\beta_{i+1}^{(j-1)} t_0, && i = 0,\\ldots,n-j,\\ \\ j= 1,\\ldots,n\n\\end{align}"
},
{
"math_id": 8,
"text": "\\binom{n}{2}"
},
{
"math_id": 9,
"text": "B(t_0)"
},
{
"math_id": 10,
"text": "B(t_0) = \\beta_0^{(n)}."
},
{
"math_id": 11,
"text": "\\begin{align}\n&\\beta_0^{(0)},\\beta_0^{(1)},\\ldots,\\beta_0^{(n)} \\\\[1ex]\n&\\beta_0^{(n)},\\beta_1^{(n-1)},\\ldots,\\beta_n^{(0)}\n\\end{align}"
},
{
"math_id": 12,
"text": "P_0, \\dots, P_n"
},
{
"math_id": 13,
"text": "t : (1-t)"
},
{
"math_id": 14,
"text": "t"
},
{
"math_id": 15,
"text": "\\mathbf{R}^n"
},
{
"math_id": 16,
"text": "\\mathbf{R}^{n+1}"
},
{
"math_id": 17,
"text": "\\{(x_i, y_i, z_i)\\}"
},
{
"math_id": 18,
"text": "\\{w_i\\}"
},
{
"math_id": 19,
"text": "\\{(w_ix_i, w_iy_i, w_iz_i, w_i)\\}"
},
{
"math_id": 20,
"text": "\\mathbf{R}^4"
},
{
"math_id": 21,
"text": "\n\\begin{matrix}\n\\beta_0 & = \\beta_0^{(0)} & & & \\\\\n & & \\beta_0^{(1)} & & \\\\\n\\beta_1 & = \\beta_1^{(0)} & & & \\\\\n & & & \\ddots & \\\\\n\\vdots & & \\vdots & & \\beta_0^{(n)} \\\\\n & & & & \\\\\n\\beta_{n-1} & = \\beta_{n-1}^{(0)} & & & \\\\\n & & \\beta_{n-1}^{(1)} & & \\\\\n\\beta_n & = \\beta_n^{(0)} & & & \\\\\n\\end{matrix}\n"
},
{
"math_id": 22,
"text": "B(t) = \\sum_{i=0}^n \\beta_i^{(0)} b_{i,n}(t), \\quad t \\in [0,1]"
},
{
"math_id": 23,
"text": "B_1(t) = \\sum_{i=0}^n \\beta_0^{(i)} b_{i,n}\\left(\\frac{t}{t_0}\\right)\\!, \\quad t \\in [0,t_0]"
},
{
"math_id": 24,
"text": "B_2(t) = \\sum_{i=0}^n \\beta_i^{(n-i)} b_{i,n}\\left(\\frac{t-t_0}{1-t_0}\\right)\\!, \\quad t \\in [t_0,1]."
},
{
"math_id": 25,
"text": "\\mathbf{B}(t) = \\sum_{i=0}^{n} \\mathbf{P}_i b_{i,n}(t),\\ t \\in [0,1]"
},
{
"math_id": 26,
"text": "\\mathbf{P}_i := \\begin{pmatrix} x_i \\\\ y_i \\\\ z_i \\end{pmatrix},"
},
{
"math_id": 27,
"text": "\\begin{align}\nB_1(t) &= \\sum_{i=0}^{n} x_i b_{i,n}(t), & t \\in [0,1] \\\\[1ex]\nB_2(t) &= \\sum_{i=0}^{n} y_i b_{i,n}(t), & t \\in [0,1] \\\\[1ex]\nB_3(t) &= \\sum_{i=0}^{n} z_i b_{i,n}(t), & t \\in [0,1]\n\\end{align}"
},
{
"math_id": 28,
"text": "\\begin{align}\n\\beta_0^{(0)} &= \\beta_0 \\\\[1ex]\n\\beta_1^{(0)} &= \\beta_1 \\\\[1ex]\n\\beta_2^{(0)} &= \\beta_2\n\\end{align}"
},
{
"math_id": 29,
"text": "\\begin{align}\n\\beta_0^{(1)} &&=&& \\beta_0^{(0)} (1-t_0) + \\beta_1^{(0)}t_0 &&=&& \\beta_0(1-t_0) + \\beta_1 t_0 \\\\[1ex]\n\\beta_1^{(1)} &&=&& \\beta_1^{(0)} (1-t_0) + \\beta_2^{(0)}t_0 &&=&& \\beta_1(1-t_0) + \\beta_2 t_0\n\\end{align}"
},
{
"math_id": 30,
"text": " \\begin{align}\n\\beta_0^{(2)} & = \\beta_0^{(1)} (1-t_0) + \\beta_1^{(1)} t_0 \\\\\n\\ & = \\beta_0(1-t_0) (1-t_0) + \\beta_1 t_0 (1-t_0) + \\beta_1(1-t_0)t_0 + \\beta_2 t_0 t_0 \\\\\n\\ & = \\beta_0 (1-t_0)^2 + \\beta_1 2t_0(1-t_0) + \\beta_2 t_0^2\n\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=656099
|
6561589
|
Matra Bagheera
|
The Matra Bagheera is a sports car built by the automotive division of the French engineering group Matra from 1973 to 1980, in cooperation with automaker Simca. It was marketed as the Matra-Simca Bagheera until its final year of production, when its designation was changed to the Talbot-Matra Bagheera following Chrysler Europe's demise and subsequent takeover by PSA.
Conception and development.
In December 1969 Matra and Simca entered into an agreement that rebranded Matra's racing cars as Matra-Simcas and gave Matra access to the Simca dealer network in France and the Common Market. The first joint project of the new liaison was the development of a replacement for the Matra 530, which had not reached either its targeted market or its projected sales volumes.
Work on the new car began in 1970 under project code M550. Development was led by Matra's head of engineering and design Philippe Guédon and Chrysler-Simca product planner Jacques Rousseau. Additional direction for the design was provided by Chrysler-Simca planner Marc Honoré. Honoré identified Simca's strongest market as being cars displacing between 1.3 and 1.5 litres and suggested the team focus on building a car of that class, which would constrain the size of the car if performance was to be acceptable. As many as possible of the major components were sourced from the Chrysler-Simca parts inventory. Although the engine, gearbox and many suspension elements came directly from the Simca 1100, this new Matra was to be a mid-engined car rather than front-wheel drive like the donor car.
Chrysler-Simca's planners also wanted a car with more than just two seats. Guédon agreed, but he was also not satisfied with the 2+2 arrangement used in the M530, feeling that the rear seats were too small to be really useful. The solution came to him on a lengthy trip he took in a Ford Taunus station wagon with two colleagues. The back of the car was so full that the travelers sat three across in the front of the car. The M550 sat three abreast.
Eleven prototypes were built and used for road-testing in environments ranging from Saharan Mauritania to Lapland, as well as for crash-testing. Development was complete by the end of 1972. The car was built in Matra's factory in the commune of Romorantin-Lanthenay in the department of Loir-et-Cher in central France. Rather than being sold under its development code name, the car took its name from the character in Rudyard Kipling's "The Jungle Book".
The Bagheera was unveiled to the press at an event held at Lake Annecy on April 14, 1973. The public release of the car took place at the 1973 24 Heures du Mans. At the same time Simca had arranged to have 500 yellow Bagheeras available at their dealers across France. Towards the end of 1973, production levels had reached 65 cars per day. In June 1974, within eighteen months of its release, more than 10,000 Bagheeras had been sold.
Bodywork.
The initial shape of the car was drawn by Jean Toprieux and later refined by Jacques Nochet. Greek designer Antonis Volanis joined the project and contributed to the interior, handling the instrument panel and steering wheel shapes.
The body's shape was that of a sleek hatchback with hidden headlights. The rear hatch opened to access the engine mounted behind the passenger compartment and a rear luggage space. The unusual three-abreast seating dictated by Guédon was implemented as a 2+1 arrangement. The driver had a regular seat while on the passenger side was a single two-place bench with two individual seatbacks inspired by a lounge chair Guédon had found in a Paris shop. Seen in plan view it is apparent that the body sides are slightly convex to accommodate the seating.
The 19 panels that made up the Bagheera's body were made of fiberglass-reinforced polyester and were then attached to the chassis. The process used to make the panels was called "LP", and it used a low-pressure, high-temperature pressing method to produce panels using relatively inexpensive tooling. The advantages of LP for Matra were its ability to produce large, high-quality panels with precision and economy. The LP process had only been in use for twelve months prior to the start of production, meaning Matra had committed to this new technology very early in its development. Problems with the car's finish hampered sales when new, and in 1975 the Bagheera received the German ADAC's "Silver Lemon" award for being the new car with the most problems.
The Bagheera won the 1973 Style Auto Award, beating out competition that included the Lancia Stratos, Lancia Beta coupé and Ferrari Dino 308 GT4.
The Bagheera was also very aerodynamic, with a drag coefficient (formula_0) of 0.33 for the early models. This rose slightly to 0.35 after a mid-life redesign.
Chassis and suspension.
The chassis was fabricated of pressed steel. While it has been called a space-frame, it more closely resembled a unitary body. The shapes of some pieces were simplified to suit the low volumes in which the car was built.
The front suspension was from the Simca 1100. It consisted of upper and lower A-arms with telescopic hydraulic dampers and longitudinal torsion bars running back along the chassis for springing. An anti-roll bar was fitted at the front as well.
The rear of the M550 prototype used the same type of suspension as the front, moved rearward along with the engine and transaxle. This proved unsatisfactory and so the final production cars received a new system that comprised new trailing arms designed by Matra with transverse torsion bars and telescopic shock-absorbers. An anti-roll bar was also fitted at the rear.
No right-hand-drive Bagheeras were ever built by the factory, but a number were converted to RHD by Wooler-Hodec in England.
Engine and transaxle.
The only engine offered at first was the "Poissy engine" from Simca's 1100 Ti model. In the Bagheera this ohv straight-4 engine developed at 6000 rpm, two more horsepower than in the 1100 Ti. The transversely mounted engine was paired with the 4-speed manual transaxle from the 1100.
In 1976 a larger version of the same engine became available when the engine from the Simca 1308 GT was added to the lineup. The first Bagheera to use this engine was the newly introduced "S" version. Changes were also made to the carburation. A 4-speed manual was still the only transmission offered.
Road tests and impressions.
Early in 1974 the German Magazine Auto, Motor und Sport tested a 1294 cc Bagheera and compared it to its closest competitors in the market. The car's light weight served it well in the performance comparisons: a top speed of 186.5 km/h (116 mph) was recorded against 176.5 km/h (110 mph) for an Alfa Romeo GT 1300 Junior, despite the Alfa's claim of an extra 3 bhp. The French car's acceleration also bettered the Italian's, taking 12.2 seconds to reach 100 km/h (62 mph) against the Alfa's 13.5 seconds. The Matra-Simca's DM 14,198 price tag was somewhat lower than the DM 14,490 listed for the Alfa Romeo, although both were undercut on price by models from mass market producers such as the 1900 cc Opel Manta SR at DM 13,990.
Longevity.
The Bagheera won the ADAC "Silberne Zitrone" ("Silver Lemon") award in 1975 for the poorest quality car at the time. Complaints ranged from a leaky body that allowed rain to enter the cabin to mechanical failures. Few Bagheeras survive today, and the cause is usually extensive corrosion of the steel chassis. While the polyester body panels do not rust, the problem was caused by the underlying steel chassis having almost no corrosion protection. Matra learned from this and fully galvanized the chassis of the Bagheera's successor, the Matra Murena.
The Bagheera U8.
In March 1973 a team of Matra engineers led by Georges Pinardaud completed the initial design for project M560, which was to be a more powerful Bagheera. A key part of the project was the creation of a unique "U engine" out of two existing Simca straight-4 engines. The blocks came from two different Simca applications and rotated in opposite directions but shared the same 1294 cc displacement. One block was from the 1100Ti and was adapted to transverse mounting while the other was from the Simca 1000 Rallye II in which it had been mounted longitudinally. The two blocks were joined at an 82° angle using a common cast-aluminum sump that also carried a common oil supply for the engine. At the non-drive end another aluminum casting assured the alignment of the blocks while at the drive end a steel adapter fit the ends of both crankshafts. A sprocket and Morse chain from each crankshaft were connected to a shaft running down the middle of the sump that transmitted power from the left-hand crank to the right. Each block retained its own crankshaft, distributor, and water pump. The clutch and bell-housing of the engine from the Rallye II engine provided the transaxle mounting while a flywheel was only mounted to the 1100Ti crankshaft. The resulting 8-cylinder assembly was fitted with four Weber 36 DCNF carburetors and, with a 9.8:1 compression ratio, produced at 6200 rpm and at 4000 rpm.
The car required modifications to accept the new engine. Additional air-intakes were let into the sides of the car ahead of the rear wheels. The overall length rose by and the wheelbase rose by . Overall width increased by due to the addition of larger wheel arches added to clear wider tires, which were 185/70 VR14s at the front and 205/70 VR14s at the rear. The front suspension was unchanged from the original but at the rear suspension was now by lateral links, trailing arms, and coil springs. The car also received ventilated disk brakes and 5-lug wheels. The first prototype used a modified production chassis, while subsequent prototypes used a chassis made of tubular steel. The engine was mounted longitudinally behind the driver and drove the wheels through a Porsche 5-speed transaxle. Due to the output shaft being offset to the right the half-shafts were of unequal lengths. Top speed for the car was reported to have been .
Even though the project was announced in the autumn of 1973, said to be production ready by 1974, and survived until 1975, Chrysler Europe was unwilling to approve it due to the developing fuel crisis as well as its own financial problems. Thus, the U8-powered Bagheera remained a prototype, with only three units ever built. A surviving prototype and engine are in the Matra museum at Romorantin-Lanthenay.
Information.
<templatestyles src="Template:Table alignment/tables.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\scriptstyle C_\\mathrm d\\,"
}
] |
https://en.wikipedia.org/wiki?curid=6561589
|
6563
|
Conjunction introduction
|
Conjunction introduction (often abbreviated simply as conjunction and also called and introduction or adjunction) is a valid rule of inference of propositional logic. The rule makes it possible to introduce a conjunction into a logical proof. It is the inference that if the proposition formula_0 is true, and the proposition formula_1 is true, then the logical conjunction of the two propositions formula_0 and formula_1 is true. For example, if it is true that "it is raining", and it is true that "the cat is inside", then it is true that "it is raining and the cat is inside". The rule can be stated:
formula_2
where the rule is that wherever an instance of "formula_0" and "formula_1" appear on lines of a proof, a "formula_3" can be placed on a subsequent line.
Formal notation.
The "conjunction introduction" rule may be written in sequent notation:
formula_4
where formula_0 and formula_1 are propositions expressed in some formal system, and formula_5 is a metalogical symbol meaning that formula_3 is a syntactic consequence if formula_0 and formula_1 are each on lines of a proof in some logical system.
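As a minimal sketch in Lean 4 (assuming only the core library's And.intro constructor; the example is illustrative, not part of the article's formal development), the rule corresponds to the introduction rule for the conjunction connective:
example (P Q : Prop) (hP : P) (hQ : Q) : P ∧ Q := And.intro hP hQ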
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": "\\frac{P,Q}{\\therefore P \\land Q}"
},
{
"math_id": 3,
"text": "P \\land Q"
},
{
"math_id": 4,
"text": "P, Q \\vdash P \\land Q"
},
{
"math_id": 5,
"text": "\\vdash"
}
] |
https://en.wikipedia.org/wiki?curid=6563
|
65635
|
Chirp
|
Frequency swept signal
A chirp is a signal in which the frequency increases ("up-chirp") or decreases ("down-chirp") with time. In some sources, the term "chirp" is used interchangeably with sweep signal. It is commonly applied to sonar, radar, and laser systems, and to other applications, such as in spread-spectrum communications (see chirp spread spectrum). This signal type is biologically inspired and occurs as a phenomenon due to dispersion (a non-linear dependence between frequency and the propagation speed of the wave components). It is usually compensated for by using a matched filter, which can be part of the propagation channel. Depending on the specific performance measure, however, there are better techniques both for radar and communication. Since it was used in radar and space, it has been adopted also for communication standards. For automotive radar applications, it is usually called linear frequency modulated waveform (LFMW).
In spread-spectrum usage, surface acoustic wave (SAW) devices are often used to generate and demodulate the chirped signals. In optics, ultrashort laser pulses also exhibit chirp, which, in optical transmission systems, interacts with the dispersion properties of the materials, increasing or decreasing total pulse dispersion as the signal propagates. The name is a reference to the chirping sound made by birds; see bird vocalization.
Definitions.
The basic definitions here parallel the common physics quantities of location (phase), speed (angular velocity), and acceleration (chirpyness).
If a waveform is defined as:
formula_0
then the instantaneous angular frequency, "ω", is defined as the phase rate as given by the first derivative of phase,
with the instantaneous ordinary frequency, "f", being its normalized version:
formula_1
Finally, the instantaneous angular chirpyness (symbol "γ") is defined to be the second derivative of instantaneous phase or the first derivative of instantaneous angular frequency,
formula_2
Angular chirpyness has units of radians per square second (rad/s2); thus, it is analogous to "angular acceleration".
The instantaneous ordinary chirpyness (symbol "c") is a normalized version, defined as the rate of change of the instantaneous frequency:
formula_3
Ordinary chirpyness has units of square reciprocal seconds (s−2); thus, it is analogous to "rotational acceleration".
Types.
Linear.
In a linear-frequency chirp or simply linear chirp, the instantaneous frequency formula_4 varies exactly linearly with time:
formula_5
where formula_6 is the starting frequency (at time formula_7) and formula_8 is the chirp rate, assumed constant:
formula_9
Here, formula_10 is the final frequency and formula_11 is the time it takes to sweep from formula_12 to formula_10.
The corresponding time-domain function for the phase of any oscillating signal is the integral of the frequency function, as one expects the phase to grow like formula_13, i.e., that the derivative of the phase is the angular frequency formula_14.
For the linear chirp, this results in:
formula_15
where formula_16 is the initial phase (at time formula_7). Thus this is also called a quadratic-phase signal.
The corresponding time-domain function for a sinusoidal linear chirp is the sine of the phase in radians:
formula_17
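As a concrete illustration, the linear chirp above can be sampled directly. The following Python sketch uses NumPy and assumes nothing beyond the formula itself; the function name and parameters are illustrative.
import numpy as np

def linear_chirp(f0, f1, T, fs, phi0=0.0):
    """Sample sin(phi(t)) for a linear chirp sweeping from f0 to f1 Hz over T seconds."""
    t = np.arange(0.0, T, 1.0 / fs)        # sample times at sampling rate fs
    c = (f1 - f0) / T                      # chirp rate (chirpyness)
    phase = phi0 + 2 * np.pi * (0.5 * c * t**2 + f0 * t)
    return t, np.sin(phase)

# Example: a one-second sweep from 100 Hz to 1 kHz sampled at 8 kHz.
t, x = linear_chirp(100.0, 1000.0, 1.0, 8000.0)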
Exponential.
In a geometric chirp, also called an exponential chirp, the frequency of the signal varies with a geometric relationship over time. In other words, if two points in the waveform are chosen, formula_18 and formula_19, and the time interval between them formula_20 is kept constant, the frequency ratio formula_21 will also be constant.
In an exponential chirp, the frequency of the signal varies exponentially as a function of time:
formula_22
where formula_6 is the starting frequency (at formula_7), and formula_23 is the rate of exponential change in frequency.
formula_24
Where formula_10 is the ending frequency of the chirp (at formula_25).
Unlike the linear chirp, which has a constant chirpyness, an exponential chirp has an exponentially increasing frequency rate.
The corresponding time-domain function for the phase of an exponential chirp is the integral of the frequency:
formula_26
where formula_16 is the initial phase (at formula_7).
The corresponding time-domain function for a sinusoidal exponential chirp is the sine of the phase in radians:
formula_27
As was the case for the Linear Chirp, the instantaneous frequency of the Exponential Chirp consists of the fundamental frequency formula_28 accompanied by additional harmonics.
Hyperbolic.
Hyperbolic chirps are used in radar applications, as they show maximum matched filter response after being distorted by the Doppler effect.
In a hyperbolic chirp, the frequency of the signal varies hyperbolically as a function of time:
formula_29
The corresponding time-domain function for the phase of an hyperbolic chirp is the integral of the frequency:
formula_30
where formula_16 is the initial phase (at formula_7).
The corresponding time-domain function for a sinusoidal hyperbolic chirp is the sine of the phase in radians:
formula_31
Generation.
A chirp signal can be generated with analog circuitry via a voltage-controlled oscillator (VCO), and a linearly or exponentially ramping control voltage. It can also be generated digitally by a digital signal processor (DSP) and digital-to-analog converter (DAC), using a direct digital synthesizer (DDS) and by varying the step in the numerically controlled oscillator. It can also be generated by a YIG oscillator.
Relation to an impulse signal.
A chirp signal shares the same spectral content with an impulse signal. However, unlike in the impulse signal, spectral components of the chirp signal have different phases, i.e., their power spectra are alike but the phase spectra are distinct. Dispersion of a signal propagation medium may result in unintentional conversion of impulse signals into chirps (Whistler). On the other hand, many practical applications, such as chirped pulse amplifiers or echolocation systems, use chirp signals instead of impulses because of their inherently lower peak-to-average power ratio (PAPR).
Uses and occurrences.
Chirp modulation.
Chirp modulation, or linear frequency modulation for digital communication, was patented by Sidney Darlington in 1954 with significant later work performed by Winkler in 1962. This type of modulation employs sinusoidal waveforms whose instantaneous frequency increases or decreases linearly over time. These waveforms are commonly referred to as linear chirps or simply chirps.
Hence the rate at which their frequency changes is called the "chirp rate". In binary chirp modulation, binary data is transmitted by mapping the bits into chirps of opposite chirp rates. For instance, over one bit period "1" is assigned a chirp with positive rate "a" and "0" a chirp with negative rate −"a". Chirps have been heavily used in radar applications and as a result advanced sources for transmission and matched filters for reception of linear chirps are available.
Chirplet transform.
Another kind of chirp is the projective chirp, of the form:
formula_32
having the three parameters "a" (scale), "b" (translation), and "c" (chirpiness). The projective chirp is ideally suited to image processing, and forms the basis for the projective chirplet transform.
Key chirp.
A change in frequency of Morse code from the desired frequency, due to poor stability in the RF oscillator, is known as chirp, and in the R-S-T system is given an appended letter 'C'.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x(t) = \\sin\\left(\\phi(t)\\right)"
},
{
"math_id": 1,
"text": "\n\\omega(t) = \\frac{d\\phi(t)}{dt}, \\,\nf(t) = \\frac{\\omega(t)}{2\\pi}\n"
},
{
"math_id": 2,
"text": "\n\\gamma(t) = \\frac{d^2\\phi(t)}{dt^2} = \\frac{d\\omega(t)}{dt}\n"
},
{
"math_id": 3,
"text": "\nc(t) = \\frac{\\gamma(t)}{2\\pi} = \\frac{df(t)}{dt}\n"
},
{
"math_id": 4,
"text": "f(t)"
},
{
"math_id": 5,
"text": "f(t) = c t + f_0,"
},
{
"math_id": 6,
"text": "f_0"
},
{
"math_id": 7,
"text": "t = 0"
},
{
"math_id": 8,
"text": "c"
},
{
"math_id": 9,
"text": "c = \\frac{f_1 - f_0}{T} = \\frac{\\Delta f}{\\Delta t}."
},
{
"math_id": 10,
"text": "f_1"
},
{
"math_id": 11,
"text": " T "
},
{
"math_id": 12,
"text": " f_0 "
},
{
"math_id": 13,
"text": "\\phi(t + \\Delta t) \\simeq \\phi(t) + 2\\pi f(t)\\,\\Delta t"
},
{
"math_id": 14,
"text": "\\phi'(t) = 2\\pi\\,f(t)"
},
{
"math_id": 15,
"text": "\\begin{align}\n \\phi(t) &= \\phi_0 + 2\\pi\\int_0^t f(\\tau)\\, d\\tau\\\\\n &= \\phi_0 + 2\\pi\\int_0^t \\left(c \\tau+f_0\\right)\\, d\\tau\\\\\n &= \\phi_0 + 2\\pi \\left(\\frac{c}{2} t^2+f_0 t\\right),\n\\end{align}"
},
{
"math_id": 16,
"text": "\\phi_0"
},
{
"math_id": 17,
"text": "x(t) = \\sin\\left[\\phi_0 + 2\\pi \\left(\\frac{c}{2} t^2 + f_0 t \\right) \\right]"
},
{
"math_id": 18,
"text": "t_1"
},
{
"math_id": 19,
"text": "t_2"
},
{
"math_id": 20,
"text": "T = t_2 - t_1"
},
{
"math_id": 21,
"text": "f\\left(t_2\\right)/f\\left(t_1\\right)"
},
{
"math_id": 22,
"text": "f(t) = f_0 k^\\frac{t}{T}"
},
{
"math_id": 23,
"text": "k"
},
{
"math_id": 24,
"text": "k = \\frac{f_1}{f_0}"
},
{
"math_id": 25,
"text": "t = T"
},
{
"math_id": 26,
"text": "\\begin{align}\n \\phi(t)\n &= \\phi_0 + 2\\pi \\int_0^t f(\\tau)\\, d\\tau \\\\\n &= \\phi_0 + 2\\pi f_0 \\int_0^t k^\\frac{\\tau}{T} d\\tau \\\\\n &= \\phi_0 + 2\\pi f_0 \\left(\\frac{T k^\\frac{t}{T}}{\\ln(k)}\\right)\n\\end{align}"
},
{
"math_id": 27,
"text": "x(t) = \\sin\\left[\\phi_0 + 2\\pi f_0 \\left(\\frac{T k^\\frac{t}{T}}{\\ln(k)}\\right) \\right]"
},
{
"math_id": 28,
"text": "f(t) = f_0 k^\\frac{t}{T}"
},
{
"math_id": 29,
"text": "f(t) = \\frac{f_0 f_1 T}{(f_0-f_1)t+f_1T}"
},
{
"math_id": 30,
"text": "\\begin{align}\n \\phi(t)\n &= \\phi_0 + 2\\pi \\int_0^t f(\\tau)\\, d\\tau \\\\\n &= \\phi_0 + 2\\pi \\frac{-f_0 f_1 T}{f_1-f_0} \\ln\\left(1-\\frac{f_1-f_0}{f_1T}t\\right)\n\\end{align}"
},
{
"math_id": 31,
"text": "x(t) = \\sin\\left[ \\phi_0 + 2\\pi \\frac{-f_0 f_1 T}{f_1-f_0} \\ln\\left(1-\\frac{f_1-f_0}{f_1T}t\\right)\\right]"
},
{
"math_id": 32,
"text": "g = f\\left[\\frac{a \\cdot x + b}{c \\cdot x + 1}\\right],"
}
] |
https://en.wikipedia.org/wiki?curid=65635
|
65635504
|
Kenmotsu manifold
|
Almost-contact manifold
In the mathematical field of differential geometry, a Kenmotsu manifold is an almost-contact manifold endowed with a certain kind of Riemannian metric. They are named after the Japanese mathematician Katsuei Kenmotsu.
Definitions.
Let formula_0 be an almost-contact manifold. One says that a Riemannian metric formula_1 on formula_2 is adapted to the almost-contact structure formula_3 if:
formula_4
That is to say that, relative to formula_5 the vector formula_6 has length one and is orthogonal to formula_7 furthermore, the restriction of formula_8 to formula_9 is a Hermitian metric relative to the almost-complex structure formula_10 One says that formula_11 is an "almost-contact metric manifold".
An almost-contact metric manifold formula_11 is said to be a Kenmotsu manifold if
formula_12
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
Sources
|
[
{
"math_id": 0,
"text": "(M, \\varphi, \\xi, \\eta)"
},
{
"math_id": 1,
"text": "g"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "(\\varphi, \\xi, \\eta)"
},
{
"math_id": 4,
"text": "\\begin{align}\ng_{ij}\\xi^j&=\\eta_i\\\\\ng_{pq}\\varphi_i^p\\varphi_j^q&=g_{ij}-\\eta_i\\eta_j.\n\\end{align}"
},
{
"math_id": 5,
"text": "g_p,"
},
{
"math_id": 6,
"text": "\\xi_p"
},
{
"math_id": 7,
"text": "\\ker \\left(\\eta_p\\right);"
},
{
"math_id": 8,
"text": "g_p"
},
{
"math_id": 9,
"text": "\\ker \\left(\\eta_p\\right)"
},
{
"math_id": 10,
"text": "\\varphi_p\\big\\vert_{\\ker \\left(\\eta_p\\right)}."
},
{
"math_id": 11,
"text": "(M, \\varphi, \\xi, \\eta, g)"
},
{
"math_id": 12,
"text": "\\nabla_i\\varphi_j^k=-\\eta_j\\varphi_i^k-g_{ip}\\varphi_j^p\\xi^k."
}
] |
https://en.wikipedia.org/wiki?curid=65635504
|
65635767
|
Microwave electrothermal thruster
|
Type of space propulsion using microwaves
Microwave electrothermal thruster, also known as MET, is a propulsion device that converts microwave energy (a type of electromagnetic radiation) into thermal (or heat) energy. These thrusters are predominantly used in spacecraft propulsion, more specifically to adjust the spacecraft’s position and orbit. A MET sustains and ignites a plasma in a propellant gas. This creates a heated propellant gas which in turn changes into thrust due to the expansion of the gas going through the nozzle. A MET’s heating feature is like one of an arc-jet (another propulsion device); however, due to the free-floating plasma, there are no problems with the erosion of metal electrodes, and therefore the MET is more efficient.
Mechanism Description.
The MET contains key features and parts that contribute to its efficiency. The parts include: two endplates (nozzle and antenna), plasma, and a dielectric separation plate.
The resonant cavity is the round overlapping section waveguide that is shorted by the two endplates. The cavity is near the separation plate. There are two end plates inside the MET: the nozzle and the antenna. The nozzle’s function is to convert the gaseous plasma into thrust. The antenna is used to input the microwave power. Although most of the power is absorbed by the plasma, some of it is reflected. Another part of the MET is the plasma. In some cases, plasma is also referred to as the fourth state of matter. The plasma is the main portion of the MET. It is created inside of the system by heating the propellant and is exhausted to generate thrust. The last part of the MET is the dielectric separation plate. This piece of the MET allows both parts of the cavity to be controlled at various pressures.
Process.
Description.
In order for the MET to create thrust, it must go through a 4 step process of converting electrical energy into heat energy.
During this process, the antenna section is held at atmospheric pressure to ensure that there is no plasma formation close to the antenna. It also ensures that the separation plates are not held at two significantly different pressures, which would put stress upon the two plates.
The physical process for what takes place on a molecular level can also be explained in the following manner:
Mathematically.
Thrust.
Thrust is the force applied to the rocket when the propellant is expelled. The formula for thrust is given as:
formula_0
Where formula_1 is the thrust in newtons (N), formula_2 the mass flow rate in kilograms per second (kg/s), formula_3 the exhaust velocity in meters per second (m/s), formula_4 the exit pressure, formula_5 the atmospheric pressure, and formula_6 the nozzle exit area in square meters (m2).
Specific Impulse.
Specific impulse is a measure of how efficiently the MET's propellant is used to create thrust. The formula for specific impulse is given as:
formula_7
Where formula_8 is given as specific impulse, formula_1 as thrust in N, formula_2 as mass flow rate in kg/s, and formula_9 as the gravitational acceleration of the earth.
Mass Relationship.
When applying the conservation of momentum law, the relationship between mass of propellant and initial mass of the spacecraft can be shown as:
formula_10
Where formula_11 is given as propellant mass, formula_12 as initial spacecraft mass, formula_13 as change in velocity, formula_8 is as specific impulse, and formula_9 as earth’s gravity.
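As a rough worked example tying the three formulas together, the following Python sketch computes thrust, specific impulse and the propellant fraction required for a small manoeuvre. The numbers are purely hypothetical and are not measured MET performance figures.
import math

g = 9.80665                      # gravitational acceleration of the earth, m/s^2

# Hypothetical operating point (illustrative values only)
mdot = 2.0e-5                    # mass flow rate, kg/s
u_e = 8000.0                     # exhaust velocity, m/s
p_e = p_a = 0.0                  # pressure terms neglected here
A_e = 0.0                        # nozzle exit area, m^2

tau = mdot * u_e + (p_e - p_a) * A_e     # thrust, N
I_sp = tau / (mdot * g)                  # specific impulse, s

delta_v = 100.0                          # desired velocity change, m/s
prop_fraction = 1.0 - math.exp(-delta_v / (I_sp * g))   # M_p / M_i

print(f"thrust = {tau * 1e3:.2f} mN, Isp = {I_sp:.0f} s, "
      f"propellant fraction = {prop_fraction:.4f}")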
Application.
Space.
The MET’s main purpose is spacecraft propulsion. The energy that is created is meant to be converted into kinetic energy, which will produce thrust in space. Some tasks include orbit raising and stationkeeping. Orbit raising is changing the orbit of a ship using propulsion systems, while stationkeeping is maintaining a spacecraft’s position in relation to other spacecraft. This includes the maintenance of satellites at certain positions.
Notable Inventions.
Control System for a Microwave Electrothermal Thruster.
This is one of the more recent applications of a microwave electrothermal thruster, created in August 2020. This invention used the functions of a MET to create a precise control system. By converting the energy of electromagnetic waves into heat in the propellant, the MET can deliver the small impulses needed for fine control over the satellite.
In-Space Electrothermal Propulsion.
This invention pertains to adapting the MET for in-space electrothermal propulsion. A tunable-frequency MET was provided for primary propulsion and for controlling the altitude of a satellite or spacecraft. Instead of a magnetron (a microwave generating device), alternative constructional features, including generators and semiconductors, were used. This made the thruster more efficient and allowed it to operate at two separate frequencies.
Pros and Cons.
Pros.
Relative to other electrothermal thrusters, the MET ranks higher than resistojets and some claim that they may be able to achieve similar performance to arc-jets. This is based on the supposition that the MET provides higher specific impulse, or in simpler terms more thrust for the amount of fuel. Another advantage is that because microwaves can be collected and fed directly into the thrust chamber, the MET is extremely compatible with space transport. Finally, the MET can be run on water vapor as a propellant, which can be found in many different parts of the cosmos.
Cons.
In general, electrothermal thrusters have the lowest efficiency among most other electric propulsion systems. MET ranks lower than most electrostatic thrusters such as ion thrusters. Another disadvantage is that the MET has relatively low thrust compared to chemical rocket engines (although this is true for electric propulsion in general).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\tau = \\dot{m}u_e + (p_e + p_a)A_e"
},
{
"math_id": 1,
"text": "\\tau"
},
{
"math_id": 2,
"text": "\\dot{m}"
},
{
"math_id": 3,
"text": "u_e"
},
{
"math_id": 4,
"text": "p_e"
},
{
"math_id": 5,
"text": "p_a"
},
{
"math_id": 6,
"text": "A_e"
},
{
"math_id": 7,
"text": "I_{sp} = \\tau/\\dot{m}g"
},
{
"math_id": 8,
"text": "I_{sp}"
},
{
"math_id": 9,
"text": "g"
},
{
"math_id": 10,
"text": "M_p/M_i = 1-e^{-\\Delta v/I_{sp}g}"
},
{
"math_id": 11,
"text": "M_p"
},
{
"math_id": 12,
"text": "M_i"
},
{
"math_id": 13,
"text": "\\Delta v"
}
] |
https://en.wikipedia.org/wiki?curid=65635767
|
65636660
|
Almost-contact manifold
|
Geometric structure on a smooth manifold
In the mathematical field of differential geometry, an almost-contact structure is a certain kind of geometric structure on a smooth manifold. Such structures were introduced by Shigeo Sasaki in 1960.
Precisely, given a smooth manifold formula_0, an almost-contact structure consists of a hyperplane distribution formula_1, an almost-complex structure formula_2 on formula_1, and a vector field formula_3 which is transverse to formula_1. That is, for each point formula_4 of formula_0, one selects a codimension-one linear subspace formula_5 of the tangent space formula_6, a linear map formula_7 such that formula_8, and an element formula_9 of formula_6 which is not contained in formula_5.
Given such data, one can define, for each formula_4 in formula_0, a linear map formula_10 and a linear map formula_11 by
formula_12
This defines a one-form formula_13 and (1,1)-tensor field formula_14 on formula_0, and one can check directly, by decomposing formula_15 relative to the direct sum decomposition formula_16, that
formula_17
for any formula_15 in formula_6. Conversely, one may define an almost-contact structure as a triple formula_18 which satisfies the two conditions that formula_19 for any formula_20, and that formula_21.
Then one can define formula_5 to be the kernel of the linear map formula_22, and one can check that the restriction of formula_23 to formula_5 is valued in formula_5, thereby defining formula_24.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": "J"
},
{
"math_id": 3,
"text": "\\xi"
},
{
"math_id": 4,
"text": "p"
},
{
"math_id": 5,
"text": "Q_p"
},
{
"math_id": 6,
"text": "T_p M"
},
{
"math_id": 7,
"text": "J_p : Q_p \\to Q_p"
},
{
"math_id": 8,
"text": "J_p \\circ J_p = - \\operatorname{id}_{Q_p}"
},
{
"math_id": 9,
"text": "\\xi_p"
},
{
"math_id": 10,
"text": "\\eta_p : T_p M \\to \\R"
},
{
"math_id": 11,
"text": "\\varphi_p : T_p M \\to T_p M"
},
{
"math_id": 12,
"text": "\\begin{align}\n\\eta_p(u)&=0\\text{ if }u\\in Q_p\\\\\n\\eta_p(\\xi_p)&=1\\\\\n\\varphi_p(u)&=J_p(u)\\text{ if }u\\in Q_p\\\\\n\\varphi_p(\\xi)&=0.\n\\end{align}"
},
{
"math_id": 13,
"text": "\\eta"
},
{
"math_id": 14,
"text": "\\varphi"
},
{
"math_id": 15,
"text": "v"
},
{
"math_id": 16,
"text": "T_p M = Q_p \\oplus \\left\\{ k \\xi_p : k \\in \\R \\right\\}"
},
{
"math_id": 17,
"text": "\\begin{align}\n\\eta_p(v) \\xi_p &= \\varphi_p \\circ \\varphi_p(v) + v\n\\end{align}"
},
{
"math_id": 18,
"text": "(\\xi, \\eta, \\varphi)"
},
{
"math_id": 19,
"text": "\\eta_p(v) \\xi_p = \\varphi_p \\circ \\varphi_p(v) + v"
},
{
"math_id": 20,
"text": "v \\in T_p M"
},
{
"math_id": 21,
"text": "\\eta_p(\\xi_p) = 1"
},
{
"math_id": 22,
"text": "\\eta_p"
},
{
"math_id": 23,
"text": "\\varphi_p"
},
{
"math_id": 24,
"text": "J_p"
}
] |
https://en.wikipedia.org/wiki?curid=65636660
|
65637
|
Metrology
|
Science of measurement and its application
Metrology is the scientific study of measurement. It establishes a common understanding of units, crucial in linking human activities. Modern metrology has its roots in the French Revolution's political motivation to standardise units in France when a length standard taken from a natural source was proposed. This led to the creation of the decimal-based metric system in 1795, establishing a set of standards for other types of measurements. Several other countries adopted the metric system between 1795 and 1875; to ensure conformity between the countries, the "Bureau International des Poids et Mesures" (BIPM) was established by the Metre Convention. This has evolved into the International System of Units (SI) as a result of a resolution at the 11th General Conference on Weights and Measures (CGPM) in 1960.
Metrology is divided into three basic overlapping activities: the definition of units of measurement, the realisation of these units of measurement in practice, and the traceability that links measurements made in practice to the reference standards.
These overlapping activities are used in varying degrees by the three basic sub-fields of metrology: scientific or fundamental metrology; applied, technical or industrial metrology; and legal metrology.
In each country, a national measurement system (NMS) exists as a network of laboratories, calibration facilities and accreditation bodies which implement and maintain its metrology infrastructure. The NMS affects how measurements are made in a country and their recognition by the international community, which has a wide-ranging impact in its society (including economics, energy, environment, health, manufacturing, industry and consumer confidence). The effects of metrology on trade and economy are some of the easiest-observed societal impacts. To facilitate fair trade, there must be an agreed-upon system of measurement.
History.
The ability to measure alone is insufficient; standardisation is crucial for measurements to be meaningful. The first record of a permanent standard was in 2900 BC, when the royal Egyptian cubit was carved from black granite. The cubit was decreed to be the length of the Pharaoh's forearm plus the width of his hand, and replica standards were given to builders. The success of a standardised length for the building of the pyramids is indicated by the lengths of their bases differing by no more than 0.05 per cent.
In China, weights and measures had a semi-religious meaning: they were used in the various crafts by artificers and in ritual utensils, and are mentioned in the Book of Rites along with the steelyard balance and other tools.
Other civilizations produced generally accepted measurement standards, with Roman and Greek architecture based on distinct systems of measurement. With the collapse of the empires and the Dark Ages that followed, much measurement knowledge and standardisation was lost. Although local systems of measurement were common, comparability was difficult since many local systems were incompatible. England established the Assize of Measures to create standards for length measurements in 1196, and the 1215 Magna Carta included a section for the measurement of wine and beer.
Modern metrology has its roots in the French Revolution. With a political motivation to harmonise units throughout France, a length standard based on a natural source was proposed. In March 1791, the metre was defined. This led to the creation of the decimal-based metric system in 1795, establishing standards for other types of measurements. Several other countries adopted the metric system between 1795 and 1875; to ensure international conformity, the International Bureau of Weights and Measures (, or BIPM) was formed by the Metre Convention. Although the BIPM's original mission was to create international standards for units of measurement and relate them to national standards to ensure conformity, its scope has broadened to include electrical and photometric units and ionizing radiation measurement standards. The metric system was modernised in 1960 with the creation of the International System of Units (SI) as a result of a resolution at the 11th General Conference on Weights and Measures (, or CGPM).
Subfields.
Metrology is defined by the International Bureau of Weights and Measures (BIPM) as "the science of measurement, embracing both experimental and theoretical determinations at any level of uncertainty in any field of science and technology". It establishes a common understanding of units, crucial to human activity. Metrology is a wide reaching field, but can be summarized through three basic activities: the definition of internationally accepted units of measurement, the realisation of these units of measurement in practice, and the application of chains of traceability (linking measurements to reference standards). These concepts apply in different degrees to metrology's three main fields: scientific metrology; applied, technical or industrial metrology, and legal metrology.
Scientific metrology.
Scientific metrology is concerned with the establishment of units of measurement, the development of new measurement methods, the realisation of measurement standards, and the transfer of traceability from these standards to users in a society. This type of metrology is considered the top level of metrology which strives for the highest degree of accuracy. BIPM maintains a database of the metrological calibration and measurement capabilities of institutes around the world. These institutes, whose activities are peer-reviewed, provide the fundamental reference points for metrological traceability. In the area of measurement, BIPM has identified nine metrology areas, which are acoustics, electricity and magnetism, length, mass and related quantities, photometry and radiometry, ionizing radiation, time and frequency, thermometry, and chemistry.
As of May 2019 no physical objects define the base units. The motivation in the change of the base units is to make the entire system derivable from physical constants, which required the removal of the prototype kilogram as it is the last artefact the unit definitions depend on. Scientific metrology plays an important role in this redefinition of the units as precise measurements of the physical constants is required to have accurate definitions of the base units. To redefine the value of a kilogram without an artefact the value of the Planck constant must be known to twenty parts per billion. Scientific metrology, through the development of the Kibble balance and the Avogadro project, has produced a value of Planck constant with low enough uncertainty to allow for a redefinition of the kilogram.
Applied, technical or industrial metrology.
Applied, technical or industrial metrology is concerned with the application of measurement to manufacturing and other processes and their use in society, ensuring the suitability of measurement instruments, their calibration and quality control. Producing good measurements is important in industry as it has an impact on the value and quality of the end product, and a 10–15% impact on production costs. Although the emphasis in this area of metrology is on the measurements themselves, traceability of the measuring-device calibration is necessary to ensure confidence in the measurement. Recognition of the metrological competence in industry can be achieved through mutual recognition agreements, accreditation, or peer review. Industrial metrology is important to a country's economic and industrial development, and the condition of a country's industrial-metrology program can indicate its economic status.
Legal metrology.
Legal metrology "concerns activities which result from statutory requirements and concern measurement, units of measurement, measuring instruments and methods of measurement and which are performed by competent bodies". Such statutory requirements may arise from the need for protection of health, public safety, the environment, enabling taxation, protection of consumers and fair trade. The International Organization for Legal Metrology (OIML) was established to assist in harmonising regulations across national boundaries to ensure that legal requirements do not inhibit trade. This harmonisation ensures that certification of measuring devices in one country is compatible with another country's certification process, allowing the trade of the measuring devices and the products that rely on them. WELMEC was established in 1990 to promote cooperation in the field of legal metrology in the European Union and among European Free Trade Association (EFTA) member states. In the United States legal metrology is under the authority of the Office of Weights and Measures of National Institute of Standards and Technology (NIST), enforced by the individual states.
Concepts.
Definition of units.
The International System of Units (SI) defines seven base units, corresponding to the base quantities length, mass, time, electric current, thermodynamic temperature, amount of substance, and luminous intensity. By convention, each of these units is considered to be mutually independent and can be constructed directly from its defining constants. All other SI units are constructed as products of powers of the seven base units.
Since the base units are the reference points for all measurements taken in SI units, if the reference value changed, all prior measurements would become incorrect. Before 2019, if a piece of the international prototype of the kilogram had snapped off, it would still have been defined as a kilogram; all previously measured values of a kilogram would have been heavier. The importance of reproducible SI units has led the BIPM to complete the task of defining all SI base units in terms of physical constants.
By defining SI base units with respect to physical constants, and not artefacts or specific substances, they are realisable with a higher level of precision and reproducibility. As of the redefinition of the SI units on 20 May 2019 the kilogram, ampere, kelvin, and mole are defined by setting exact numerical values for the Planck constant ("h"), the elementary electric charge ("e"), the Boltzmann constant ("k"), and the Avogadro constant ("N"A), respectively. The second, metre, and candela have previously been defined by physical constants (the caesium standard (Δ"ν"Cs), the speed of light ("c"), and the luminous efficacy of visible light radiation ("K"cd)), subject to correction to their present definitions. The new definitions aim to improve the SI without changing the size of any units, thus ensuring continuity with existing measurements.
Realisation of units.
The realisation of a unit of measure is its conversion into reality. Three possible methods of realisation are defined by the International Vocabulary of Metrology (VIM): a physical realisation of the unit from its definition, a highly reproducible measurement as a reproduction of the definition (such as the quantum Hall effect for the ohm), and the use of a material object as the measurement standard.
Standards.
A standard (or etalon) is an object, system, or experiment with a defined relationship to a unit of measurement of a physical quantity. Standards are the fundamental reference for a system of weights and measures by realising, preserving, or reproducing a unit against which measuring devices can be compared. There are three levels of standards in the hierarchy of metrology: primary, secondary, and working standards. Primary standards (the highest quality) do not reference any other standards. Secondary standards are calibrated with reference to a primary standard. Working standards, used to calibrate (or check) measuring instruments or other material measures, are calibrated with respect to secondary standards. The hierarchy preserves the quality of the higher standards. An example of a standard would be gauge blocks for length. A gauge block is a block of metal or ceramic with two opposing faces ground precisely flat and parallel, a precise distance apart. The length of the path of light in vacuum during a time interval of 1/299,792,458 of a second is embodied in an artefact standard such as a gauge block; this gauge block is then a primary standard which can be used to calibrate secondary standards through mechanical comparators.
Traceability and calibration.
Metrological traceability is defined as the "property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty". It permits the comparison of measurements, whether the result is compared to the previous result in the same laboratory, a measurement result a year ago, or to the result of a measurement performed anywhere else in the world. The chain of traceability allows any measurement to be referenced to higher levels of measurements back to the original definition of the unit.
Traceability is obtained directly through calibration, which establishes the relationship between an indication on a traceable standard measuring instrument and the value read on the comparator (or comparative measuring instrument). The process determines the measurement value and uncertainty of the device being calibrated (the comparator) and creates a traceability link to the measurement standard. The four primary reasons for calibration are to provide traceability, to ensure that the instrument (or standard) is consistent with other measurements, to determine accuracy, and to establish reliability. Traceability works as a pyramid: at the top level are the international standards; the next level is formed by the national metrology institutes, whose primary standards are traceable to the international standards. The national metrology institutes' standards are used to establish a traceable link to local laboratory standards, which in turn are used to establish a traceable link to industry and testing laboratories. Through these successive calibrations between national metrology institutes, calibration laboratories, and industry and testing laboratories, the realisation of the unit definition is propagated down through the pyramid. The traceability chain itself works upwards from the bottom of the pyramid: measurements done by industry and testing laboratories can be related directly to the unit definition at the top through the chain of calibrations.
Uncertainty.
Measurement uncertainty is a value associated with a measurement which expresses the spread of possible values associated with the measurand—a quantitative expression of the doubt existing in the measurement. There are two components to the uncertainty of a measurement: the width of the uncertainty interval and the confidence level. The uncertainty interval is a range of values that the measurement value is expected to fall within, while the confidence level is how likely the true value is to fall within the uncertainty interval. Uncertainty is generally expressed as follows:
formula_0
Coverage factor: "k" = 2
Where "y" is the measurement value and "U" is the uncertainty value and "k" is the coverage factor indicates the confidence interval. The upper and lower limit of the uncertainty interval can be determined by adding and subtracting the uncertainty value from the measurement value. The coverage factor of "k" = 2 generally indicates a 95% confidence that the measured value will fall inside the uncertainty interval. Other values of "k" can be used to indicate a greater or lower confidence on the interval, for example "k" = 1 and "k" = 3 generally indicate 66% and 99.7% confidence respectively. The uncertainty value is determined through a combination of statistical analysis of the calibration and uncertainty contribution from other errors in measurement process, which can be evaluated from sources such as the instrument history, manufacturer's specifications, or published information.
International infrastructure.
Several international organizations maintain and standardise metrology.
Metre Convention.
The Metre Convention created three main international organizations to facilitate standardisation of weights and measures. The first, the General Conference on Weights and Measures (CGPM), provided a forum for representatives of member states. The second, the International Committee for Weights and Measures (CIPM), was an advisory committee of metrologists of high standing. The third, the International Bureau of Weights and Measures (BIPM), provided secretarial and laboratory facilities for the CGPM and CIPM.
General Conference on Weights and Measures.
The General Conference on Weights and Measures (CGPM) is the convention's principal decision-making body, consisting of delegates from member states and non-voting observers from associate states. The conference usually meets every four to six years to receive and discuss a CIPM report and to endorse new developments in the SI as advised by the CIPM. The last meeting was held on 13–16 November 2018. On the last day of this conference there was a vote on the redefinition of four base units, which the International Committee for Weights and Measures (CIPM) had proposed earlier that year. The new definitions came into force on 20 May 2019.
International Committee for Weights and Measures.
The International Committee for Weights and Measures (CIPM) is made up of eighteen (originally fourteen) individuals of high scientific standing, each from a member state, nominated by the CGPM to advise it on administrative and technical matters. It is responsible for ten consultative committees (CCs), each of which investigates a different aspect of metrology; one CC discusses the measurement of temperature, another the measurement of mass, and so forth. The CIPM meets annually in Sèvres to discuss reports from the CCs, to submit an annual report to the governments of member states concerning the administration and finances of the BIPM and to advise the CGPM on technical matters as needed. Each member of the CIPM is from a different member state, with France (in recognition of its role in establishing the convention) always having one seat.
International Bureau of Weights and Measures.
The International Bureau of Weights and Measures (BIPM) is an organisation based in Sèvres, France which has custody of the international prototype of the kilogram, provides metrology services for the CGPM and CIPM, houses the secretariat for the organisations and hosts their meetings. Over the years, prototypes of the metre and of the kilogram have been returned to BIPM headquarters for recalibration. The BIPM director is an ex officio member of the CIPM and a member of all consultative committees.
International Organization of Legal Metrology.
The International Organization of Legal Metrology (OIML) is an intergovernmental organization created in 1955 to promote the global harmonisation of legal metrology procedures that facilitate international trade. This harmonisation of technical requirements, test procedures and test-report formats ensures confidence in measurements for trade and reduces the costs of discrepancies and measurement duplication. The OIML publishes a number of international reports in four categories.
Although the OIML has no legal authority to impose its recommendations and guidelines on its member countries, it provides a standardised legal framework for those countries to assist the development of appropriate, harmonised legislation for certification and calibration. OIML provides a mutual acceptance arrangement (MAA) for measuring instruments that are subject to legal metrological control, which upon approval allows the evaluation and test reports of the instrument to be accepted in all participating countries. Issuing participants in the agreement issue MAA Type Evaluation Reports or MAA Certificates upon demonstration of compliance with ISO/IEC 17065 and a peer evaluation system to determine competency. This ensures that certification of measuring devices in one country is compatible with the certification process in other participating countries, allowing the trade of the measuring devices and the products that rely on them.
International Laboratory Accreditation Cooperation.
The International Laboratory Accreditation Cooperation (ILAC) is an international organisation for accreditation agencies involved in the certification of conformity-assessment bodies. It standardises accreditation practices and procedures, recognising competent calibration facilities and assisting countries developing their own accreditation bodies. ILAC originally began as a conference in 1977 to develop international cooperation for accredited testing and calibration results to facilitate trade. In 2000, 36 members signed the ILAC mutual recognition agreement (MRA), allowing members' work to be automatically accepted by other signatories; in 2012 the agreement was expanded to include accreditation of inspection bodies. Through this standardisation, work done in laboratories accredited by signatories is automatically recognised internationally through the MRA. Other work done by ILAC includes promotion of laboratory and inspection body accreditation, and supporting the development of accreditation systems in developing economies.
Joint Committee for Guides in Metrology.
The Joint Committee for Guides in Metrology (JCGM) is a committee which created and maintains two metrology guides: "Guide to the expression of uncertainty in measurement" (GUM) and "International vocabulary of metrology – basic and general concepts and associated terms" (VIM). The JCGM is a collaboration of eight partner organisations.
The JCGM has two working groups: JCGM-WG1 and JCGM-WG2. JCGM-WG1 is responsible for the GUM, and JCGM-WG2 for the VIM. Each member organization appoints one representative and up to two experts to attend each meeting, and may appoint up to three experts for each working group.
National infrastructure.
A national measurement system (NMS) is a network of laboratories, calibration facilities and accreditation bodies which implement and maintain a country's measurement infrastructure. The NMS sets measurement standards, ensuring the accuracy, consistency, comparability, and reliability of measurements made in the country. The measurements of member countries of the CIPM Mutual Recognition Arrangement (CIPM MRA), an agreement of national metrology institutes, are recognized by other member countries. As of March 2018, there are 102 signatories of the CIPM MRA, consisting of 58 member states, 40 associate states, and 4 international organizations.
Metrology institutes.
A national metrology institute's (NMI) role in a country's measurement system is to conduct scientific metrology, realise base units, and maintain primary national standards. An NMI provides traceability to international standards for a country, anchoring its national calibration hierarchy. For a national measurement system to be recognized internationally by the CIPM Mutual Recognition Arrangement, an NMI must participate in international comparisons of its measurement capabilities. BIPM maintains a comparison database and a list of calibration and measurement capabilities (CMCs) of the countries participating in the CIPM MRA. Not all countries have a centralised metrology institute; some have a lead NMI and several decentralised institutes specialising in specific national standards. Some examples of NMIs are the National Institute of Standards and Technology (NIST) in the United States, the National Research Council (NRC) in Canada, the Physikalisch-Technische Bundesanstalt (PTB) in Germany, and the National Physical Laboratory (NPL) in the United Kingdom.
Calibration laboratories.
Calibration laboratories are generally responsible for the calibration of industrial instrumentation. They are accredited and provide calibration services to industrial firms; because they are accredited, the calibrations they perform give companies a traceability link back to the national metrology institute and its standards.
Accreditation bodies.
An organisation is accredited when an authoritative body determines, by assessing the organisation's personnel and management systems, that it is competent to provide its services. For international recognition, a country's accreditation body must comply with international requirements and is generally the product of international and regional cooperation. A laboratory is evaluated according to international standards such as ISO/IEC 17025 general requirements for the competence of testing and calibration laboratories. To ensure objective and technically-credible accreditation, the bodies are independent of other national measurement system institutions. The National Association of Testing Authorities in Australia and the United Kingdom Accreditation Service are examples of accreditation bodies.
Impacts.
Metrology has wide-ranging impacts on a number of sectors, including economics, energy, the environment, health, manufacturing, industry, and consumer confidence. The effects of metrology on trade and the economy are two of its most-apparent societal impacts. To facilitate fair and accurate trade between countries, there must be an agreed-upon system of measurement. Accurate measurement and regulation of water, fuel, food, and electricity are critical for consumer protection and promote the flow of goods and services between trading partners. A common measurement system and quality standards benefit consumer and producer; production at a common standard reduces cost and consumer risk, ensuring that the product meets consumer needs. Transaction costs are reduced through an increased economy of scale. Several studies have indicated that increased standardisation in measurement has a positive impact on GDP. In the United Kingdom, an estimated 28.4 per cent of GDP growth from 1921 to 2013 was the result of standardisation; in Canada between 1981 and 2004 an estimated nine per cent of GDP growth was standardisation-related, and in Germany the annual economic benefit of standardisation is an estimated 0.72% of GDP.
Legal metrology has reduced accidental deaths and injuries with measuring devices, such as radar guns and breathalyzers, by improving their efficiency and reliability. Measuring the human body is challenging, with poor repeatability and reproducibility, and advances in metrology help develop new techniques to improve health care and reduce costs. Environmental policy is based on research data, and accurate measurements are important for assessing climate change and environmental regulation. Aside from regulation, metrology is essential in supporting innovation: the ability to measure provides a technical infrastructure and tools that can then be used to pursue further innovation. By providing a technical platform which new ideas can be built upon, easily demonstrated, and shared, measurement standards allow new ideas to be explored and expanded upon.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Y = y \\pm U"
}
] |
https://en.wikipedia.org/wiki?curid=65637
|
65639408
|
Manifold injection
|
External mixture preparation system for internal combustion engines
Manifold injection is a mixture formation system for internal combustion engines with external mixture formation. It is commonly used in engines with spark ignition that use petrol as fuel, such as the Otto engine, and the Wankel engine. In a manifold-injected engine, the fuel is injected into the intake manifold, where it begins forming a combustible air-fuel mixture with the air. As soon as the intake valve opens, the piston starts sucking in the still forming mixture. Usually, this mixture is relatively homogeneous, and, at least in production engines for passenger cars, approximately stoichiometric; this means that there is an even distribution of fuel and air across the combustion chamber, and enough, but not more air present than what is required for the fuel's complete combustion. The injection timing and measuring of the fuel amount can be controlled either mechanically (by a fuel distributor), or electronically (by an engine control unit). Since the 1970s and 1980s, manifold injection has been replacing carburettors in passenger cars. However, since the late 1990s, car manufacturers have started using petrol direct injection, which caused a decline in manifold injection installation in newly produced cars.
There are two different types of manifold injection: multi-point injection and single-point injection.
In this article, the terms multi-point injection (MPI), and single-point injection (SPI) are used. In an MPI system, there is one fuel injector per cylinder, installed very close to the intake valve(s). In an SPI system, there is only a single fuel injector, usually installed right behind the throttle valve. Modern manifold injection systems are usually MPI systems; SPI systems are now considered obsolete.
Description.
In a manifold injected engine, the fuel is injected with relatively low pressure (70...1470 kPa) into the intake manifold to form a fine fuel vapour. This vapour can then form a combustible mixture with the air, and the mixture is sucked into the cylinder by the piston during the intake stroke. Otto engines use a technique called "quantity control" for setting the desired engine torque, which means that the amount of mixture sucked into the engine determines the amount of torque produced. For controlling the amount of mixture, a throttle valve is used, which is why quantity control is also called intake air throttling. Intake air throttling changes the amount of air sucked into the engine, which means that if a stoichiometric (formula_0) air-fuel mixture is desired, the amount of injected fuel has to be changed along with the intake air throttling. To do so, manifold injection systems have at least one way to measure the amount of air that is currently being sucked into the engine. In mechanically controlled systems with a fuel distributor, a vacuum-driven piston directly connected to the control rack is used, whereas electronically controlled manifold injection systems typically use an airflow sensor, and a lambda sensor. Only electronically controlled systems can form the stoichiometric air-fuel mixture precisely enough for a three-way catalyst to work sufficiently, which is why mechanically controlled manifold injection systems such as the Bosch K-Jetronic are now considered obsolete.
Main types.
Single-point injection.
As the name implies, a single-point injected (SPI) engine only has a single fuel injector. It is usually installed right behind the throttle valve in the throttle body. Single-point injection was a relatively low-cost way for automakers to reduce exhaust emissions to comply with tightening regulations while providing better "driveability" (easy starting, smooth running, freedom from hesitation) than could be obtained with a carburetor. Many of the carburetor's supporting components - such as the air cleaner, intake manifold, and fuel line routing - could be used with few or no changes. This postponed the redesign and tooling costs of these components. However, single-point injection does not allow forming very precise mixtures required for modern emission regulations, and is thus deemed an obsolete technology in passenger cars. Single-point injection was used extensively on American-made passenger cars and light trucks during 1980–1995, and in some European cars in the early and mid-1990s.
Single-point injection has been a known technology since the 1960s, but has long been considered inferior to carburettors, because it requires an injection pump, and is thus more complicated. Only with the availability of inexpensive digital engine control units (ECUs) in the 1980s did single-point injection become a reasonable option for passenger cars. Usually, intermittently injecting, low injection pressure (70...100 kPa) systems were used that allowed the use of low-cost electric fuel injection pumps. A very common single-point injection system used in many passenger cars is the Bosch Mono-Jetronic, which German motor journalist Olaf von Fersen considers a "combination of fuel injection and carburettor".
The system was called Throttle-body Injection or Digital Fuel Injection by General Motors, Central Fuel Injection by Ford, PGM-CARB by Honda, and EGI by Mazda.
Multi-point injection.
In a multi-point injected engine, every cylinder has its own fuel injector, and the fuel injectors are usually installed in close proximity to the intake valve(s). Thus, the injectors inject the fuel through the open intake valve into the cylinder, which should not be confused with direct injection. Certain multi-point injection systems also use tubes with poppet valves fed by a central injector instead of individual injectors. Typically though, a multi-point injected engine has one fuel injector per cylinder, an electric fuel pump, a fuel distributor, an airflow sensor, and, in modern engines, an engine control unit. The temperatures near the intake valve(s) are rather high, the intake stroke causes intake air swirl, and there is much time for the air-fuel mixture to form. Therefore, the fuel does not require much atomisation. The atomisation quality depends on the injection pressure, which means that a relatively low injection pressure (compared with direct injection) is sufficient for multi-point injected engines. A low injection pressure results in a low relative air-fuel velocity, which causes large, slowly vapourising fuel droplets. Therefore, the injection timing has to be precise to minimise unburnt fuel (and thus HC emissions). Because of this, continuously injecting systems such as the Bosch K-Jetronic are obsolete. Modern multi-point injection systems use electronically controlled intermittent injection instead.
From 1992 to 1996 General Motors implemented a system called Central Port Injection or Central Port Fuel Injection. The system uses tubes with poppet valves from a central injector to spray fuel at each intake port rather than the central throttle body. Fuel pressure is similar to a single-point injection system. CPFI (used from 1992 to 1995) is a batch-fire system, while CSFI (from 1996) is a sequential system.
Injection controlling mechanism.
In manifold injected engines, there are three main methods of metering the fuel and controlling the injection timing.
Mechanical controlling.
In early manifold injected engines with fully mechanical injection systems, a gear-, chain- or belt-driven injection pump with a mechanical "analogue" engine map was used. This allowed injecting fuel intermittently and relatively precisely. Typically, such injection pumps have a three-dimensional cam that depicts the engine map. Depending on the throttle position, the three-dimensional cam is moved axially on its shaft. A roller-type pick-up mechanism that is directly connected to the injection pump control rack rides on the three-dimensional cam. Depending upon the three-dimensional cam's position, it pushes the camshaft-actuated injection pump plungers in or out, which controls both the amount of injected fuel and the injection timing. The injection plungers both create the injection pressure and act as the fuel distributors. Usually, there is an additional adjustment rod that is connected to a barometric cell and a cooling water thermometer, so that the fuel mass can be corrected according to air pressure and water temperature. Kugelfischer injection systems also have a mechanical centrifugal crankshaft speed sensor. Multi-point injected systems with mechanical controlling were used until the 1970s.
No injection-timing controlling.
In systems without injection-timing controlling, the fuel is injected continuously, thus, no injection timing is required. The biggest disadvantage of such systems is that the fuel is also injected when the intake valves are closed, but such systems are much simpler and less expensive than mechanical injection systems with engine maps on three-dimensional cams. Only the amount of injected fuel has to be determined, which can be done very easily with a rather simple fuel distributor that is controlled by an intake manifold vacuum-driven airflow sensor. The fuel distributor does not have to create any injection pressure, because the fuel pump already provides pressure sufficient for injection (up to 500 kPa). Therefore, such systems are called "unpowered", and do not need to be driven by a chain or belt, unlike systems with mechanical injection pumps. Also, an engine control unit is not required. "Unpowered" multi-point injection systems without injection-timing controlling such as the Bosch K-Jetronic were commonly used from the mid-1970s until the early 1990s in passenger cars, although examples had existed earlier, such as the Rochester Ramjet offered on high-performance versions of the Chevrolet small-block engine from 1957 to 1965.
Electronic control unit.
Engines with manifold injection and an electronic engine control unit are often referred to as engines with electronic fuel injection (EFI). Typically, EFI engines have an engine map stored in discrete electronic components, such as read-only memory. This is both more reliable and more precise than a three-dimensional cam. The engine control circuitry uses the engine map, as well as airflow, throttle valve, crankshaft speed, and intake air temperature sensor data, to determine both the amount of injected fuel and the injection timing. Usually, such systems have a single, pressurised fuel rail, and injection valves that open according to an electric signal sent from the engine control circuitry. The circuitry can be either fully analogue or digital. Analogue systems such as the Bendix Electrojector were niche systems, used from the late 1950s until the early 1970s; digital circuitry became available in the late 1970s and has been used in electronic engine control systems since. One of the first widespread digital engine control units was the Bosch Motronic.
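To illustrate the idea of an engine map stored in memory, the following Python sketch interpolates a base injection duration from engine speed and load; the axes, values, and layout are invented for illustration and do not correspond to any real ECU.

```python
# Hypothetical engine map: base injector opening time in ms,
# indexed by engine speed (rpm) and relative load (%).
RPM_AXIS  = [1000, 2000, 3000, 4000]
LOAD_AXIS = [20, 40, 60, 80]
MAP_MS = [  # rows follow RPM_AXIS, columns follow LOAD_AXIS
    [1.2, 1.8, 2.5, 3.1],
    [1.4, 2.0, 2.8, 3.5],
    [1.5, 2.3, 3.1, 3.9],
    [1.6, 2.5, 3.4, 4.3],
]

def lookup(axis, value):
    """Return the lower cell index and interpolation fraction for a clamped axis value."""
    value = max(axis[0], min(axis[-1], value))
    for i in range(len(axis) - 1):
        if value <= axis[i + 1]:
            return i, (value - axis[i]) / (axis[i + 1] - axis[i])
    return len(axis) - 2, 1.0

def base_injection_ms(rpm, load):
    """Bilinear interpolation between the four surrounding map cells."""
    i, fr = lookup(RPM_AXIS, rpm)
    j, fl = lookup(LOAD_AXIS, load)
    row0 = MAP_MS[i][j] * (1 - fl) + MAP_MS[i][j + 1] * fl
    row1 = MAP_MS[i + 1][j] * (1 - fl) + MAP_MS[i + 1][j + 1] * fl
    return row0 * (1 - fr) + row1 * fr

print(base_injection_ms(2500, 50))  # value interpolated between the stored cells
```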
Air mass determination.
In order to form a proper air-fuel mixture, the injection control system needs to know how much air is sucked into the engine, so that it can determine how much fuel has to be injected accordingly. In modern systems, an air-mass meter built into the throttle body meters the air mass and sends a signal to the engine control unit, which then calculates the correct fuel mass. Alternatively, a manifold vacuum sensor can be used; the manifold vacuum sensor signal, the throttle position, and the crankshaft speed are then used by the engine control unit to calculate the correct amount of fuel. In modern engines, a combination of all these systems is used. Mechanical injection controlling systems as well as unpowered systems typically only have an intake manifold vacuum sensor (a membrane or a sensor plate) that is mechanically connected to the injection pump rack or fuel distributor.
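As a minimal sketch of the relationship between measured air mass and injected fuel mass, the following Python snippet assumes a stoichiometric air-fuel ratio of roughly 14.7:1 for petrol and a simple multiplicative lambda-sensor correction; both are illustrative simplifications, not the control law of any particular system.

```python
STOICH_AFR = 14.7  # approximate stoichiometric air-fuel ratio for petrol (by mass)

def fuel_mass_per_cycle(air_mass_mg, lambda_measured=1.0):
    """Fuel mass (mg) for one intake event, aiming at lambda = 1.

    air_mass_mg     -- air mass per cylinder filling, e.g. from an airflow sensor
    lambda_measured -- simple closed-loop correction: a value above 1 means the
                       last mixture was too lean, so slightly more fuel is requested
    """
    base = air_mass_mg / STOICH_AFR
    return base * lambda_measured

print(fuel_mass_per_cycle(400.0))        # about 27.2 mg of fuel for 400 mg of air
print(fuel_mass_per_cycle(400.0, 1.03))  # small enrichment after a lean reading
```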
Injection operation modes.
Manifold injected engines can use either continuous or intermittent injection. In a continuously injecting system, the fuel is injected continuously, thus, there are no operating modes. In intermittently injecting systems however, there are usually four different operating modes.
Simultaneous injection.
In a simultaneously intermittently injecting system, there is one single, fixed injection timing for all cylinders. Therefore, the injection timing is ideal only for some cylinders; there is always at least one cylinder that has its fuel injected against the closed intake valve(s). This causes fuel evaporation times that are different for each cylinder.
Group injection.
Systems with intermittent group injection work similarly to the simultaneous injection systems mentioned earlier, except that they have two or more groups of simultaneously injecting fuel injectors. Typically, a group consists of two fuel injectors. In an engine with two groups of fuel injectors, there is an injection every half crankshaft rotation, so that at least in some areas of the engine map no fuel is injected against a closed intake valve. This is an improvement over a simultaneously injecting system. However, the fuel evaporation times are still different for each cylinder.
Sequential injection.
In a sequentially injecting system, each fuel injector has a fixed, correctly set, injection timing that is in sync with the spark plug firing order, and the intake valve opening. This way, no more fuel is injected against closed intake valves.
Cylinder-specific injection.
Cylinder-specific injection means that there are no limitations to the injection timing. The injection control system can set the injection timing for each cylinder individually, and there is no fixed synchronisation between each cylinder's injector. This allows the injection control unit to inject the fuel not only according to firing order, and intake valve opening intervals, but it also allows it to correct cylinder charge irregularities. This system's disadvantage is that it requires cylinder-specific air-mass determination, which makes it more complicated than a sequentially injecting system.
History.
The first manifold injection system was designed by Johannes Spiel at Hallesche Maschinenfabrik. Deutz started series production of stationary four-stroke engines with manifold injection in 1898. Grade built the first two-stroke engine with manifold injection in 1906; the first manifold injected series production four-stroke aircraft engines were built by Wright and Antoinette the same year (Antoinette 8V). In 1912, Bosch equipped a watercraft engine with a makeshift injection pump built from an oil pump, but this system did not prove to be reliable. In the 1920s, they attempted to use a Diesel engine injection pump in a petrol-fuelled Otto engine. However, they were not successful. In 1930 Moto Guzzi built the first manifold injected Otto engine for motorcycles, which eventually was the first land vehicle engine with manifold injection. From the 1930s until the 1950s, manifold injection systems were not used in passenger cars, despite the fact that such systems existed. This was because the carburettor proved to be a simpler and less expensive, yet sufficient mixture formation system that did not need replacing yet.
Around 1950, Daimler-Benz started development of a petrol direct injection system for their Mercedes-Benz sports cars. For passenger cars however, a manifold injection system was deemed more feasible. Eventually, the Mercedes-Benz W 128, W 113, W 189, and W 112 passenger cars were equipped with manifold injected Otto engines.
From 1951 until 1956, FAG Kugelfischer Georg Schäfer & Co. developed the mechanical Kugelfischer injection system. It was used in many passenger cars, such as the Peugeot 404 (1962), Lancia Flavia iniezione (1965), BMW E10 (1969), Ford Capri RS 2600 (1970), BMW E12 (1973), BMW E20 (1973), and the BMW E26 (1978).
In 1957, Bendix Corporation presented the Bendix Electrojector, one of the first electronically controlled manifold injection systems. Bosch built this system under licence, and marketed it from 1967 as the D-Jetronic. In 1973, Bosch introduced their first self-developed multi-point injection systems, the electronic L-Jetronic, and the mechanical, unpowered K-Jetronic. Their fully digital Motronic system was introduced in 1979. It found widespread use in German luxury saloons. At the same time, most American car manufacturers stuck to electronic single-point injection systems. In the mid-1980s, Bosch upgraded their non-Motronic multi-point injection systems with digital engine control units, creating the KE-Jetronic, and the LH-Jetronic. Volkswagen developed the digital "Digijet" injection system for their "Wasserboxer" water-cooled engines, which evolved into the Volkswagen Digifant system in 1985.
Cheap single-point injection systems that worked with either two-way or three-way catalytic converters, such as the Mono-Jetronic introduced in 1987, enabled car manufacturers to economically offer an alternative to carburettors even in their economy cars, which helped the extensive spread of manifold injection systems across all passenger car market segments during the 1990s. In 1995, Mitsubishi introduced the first petrol direct injection Otto engine for passenger cars, and petrol direct injection has been replacing manifold injection since, but not across all market segments; several newly produced passenger car engines still use multi-point injection.
|
[
{
"math_id": 0,
"text": "\\lambda \\approx 1"
}
] |
https://en.wikipedia.org/wiki?curid=65639408
|
65644428
|
Simons' formula
|
Mathematical formula
In the mathematical field of differential geometry, the Simons formula (also known as the Simons identity, and in some variants as the Simons inequality) is a fundamental equation in the study of minimal submanifolds. It was discovered by James Simons in 1968. It can be viewed as a formula for the Laplacian of the second fundamental form of a Riemannian submanifold. It is often quoted and used in the less precise form of a formula or inequality for the Laplacian of the length of the second fundamental form.
In the case of a hypersurface M of Euclidean space, the formula asserts that
formula_0
where, relative to a local choice of unit normal vector field, h is the second fundamental form, H is the mean curvature, and "h"2 is the symmetric 2-tensor on M given by "h"2"ij" = "g""pq""h""ip""h""qj".
This has the consequence that
formula_1
where A is the shape operator. In this setting, the derivation is particularly simple:
formula_2
the only tools involved are the Codazzi equation (equalities #2 and 4), the Gauss equation (equality #4), and the commutation identity for covariant differentiation (equality #3). The more general case of a hypersurface in a Riemannian manifold requires additional terms to do with the Riemann curvature tensor. In the even more general setting of arbitrary codimension, the formula involves a complicated polynomial in the second fundamental form.
References.
Footnotes
<templatestyles src="Reflist/styles.css" />
Books
Articles
|
[
{
"math_id": 0,
"text": "\\Delta h=\\operatorname{Hess}H+Hh^2-|h|^2h,"
},
{
"math_id": 1,
"text": "\\frac{1}{2}\\Delta|h|^2=|\\nabla h|^2-|h|^4+\\langle h,\\operatorname{Hess}H\\rangle+H\\operatorname{tr}(A^3)"
},
{
"math_id": 2,
"text": "\\begin{align}\n\\Delta h_{ij}&=\\nabla^p\\nabla_p h_{ij}\\\\\n&=\\nabla^p\\nabla_ih_{jp}\\\\\n&=\\nabla_i\\nabla^p h_{jp}-{{R^p}_{ij}}^qh_{qp}-{{R^p}_{ip}}^qh_{jq}\\\\\n&=\\nabla_i\\nabla_jH-(h^{pq}h_{ij}-h_j^ph_i^q)h_{qp}-(h^{pq}h_{ip}-Hh_i^q)h_{jq}\\\\\n&=\\nabla_i\\nabla_jH-|h|^2h+Hh^2;\n\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=65644428
|
656554
|
Kochanek–Bartels spline
|
In mathematics, a Kochanek–Bartels spline or Kochanek–Bartels curve is a cubic Hermite spline with tension, bias, and continuity parameters defined to change the behavior of the tangents.
Given "n" + 1 knots,
p0, ..., p"n",
to be interpolated with "n" cubic Hermite curve segments, for each curve we have a starting point p"i" and an ending point p"i"+1 with starting tangent d"i" and ending tangent d"i"+1 defined by
formula_0
formula_1
where "t" is the tension, "b" is the bias, and "c" is the continuity parameter.
Setting each parameter to zero would give a Catmull–Rom spline.
Source code published by Steve Noskowicz in 1996 describes the impact that each of these values has on the drawn curve.
The code, written in a BASIC dialect, includes the matrix summary needed to generate these splines.
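For illustration, the tangent formulas above can also be evaluated with a few lines of Python; this is an independent sketch (not the BASIC code just mentioned) that treats the points as 2-D tuples.

```python
def kb_tangents(p_prev, p0, p1, p2, t=0.0, b=0.0, c=0.0):
    """Return (d_i, d_{i+1}) for the segment from p0 to p1.

    t, b, c are the tension, bias and continuity parameters;
    t = b = c = 0 reduces to a (uniform) Catmull-Rom spline.
    """
    def diff(a, bpt):                      # component-wise bpt - a
        return tuple(x - y for x, y in zip(bpt, a))

    def comb(s0, v0, s1, v1):              # s0*v0 + s1*v1, component-wise
        return tuple(s0 * x + s1 * y for x, y in zip(v0, v1))

    d_i = comb((1 - t) * (1 + b) * (1 + c) / 2, diff(p_prev, p0),
               (1 - t) * (1 - b) * (1 - c) / 2, diff(p0, p1))
    d_ip1 = comb((1 - t) * (1 + b) * (1 - c) / 2, diff(p0, p1),
                 (1 - t) * (1 - b) * (1 + c) / 2, diff(p1, p2))
    return d_i, d_ip1

print(kb_tangents((0, 0), (1, 1), (2, 0), (3, 1)))   # Catmull-Rom tangents by default
```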
|
[
{
"math_id": 0,
"text": "\\mathbf{d}_i = \\frac{(1-t)(1+b)(1+c)}{2}(\\mathbf{p}_i-\\mathbf{p}_{i-1}) + \\frac{(1-t)(1-b)(1-c)}{2}(\\mathbf{p}_{i+1}-\\mathbf{p}_i)\n"
},
{
"math_id": 1,
"text": "\\mathbf{d}_{i+1} = \\frac{(1-t)(1+b)(1-c)}{2}(\\mathbf{p}_{i+1}-\\mathbf{p}_{i}) + \\frac{(1-t)(1-b)(1+c)}{2}(\\mathbf{p}_{i+2}-\\mathbf{p}_{i+1})\n"
}
] |
https://en.wikipedia.org/wiki?curid=656554
|
65657913
|
Spike response model
|
Biological neuron model
The spike response model (SRM) is a spiking neuron model in which spikes are generated by either a deterministic or a stochastic threshold process. In the SRM, the membrane voltage "V" is described as a linear sum of the postsynaptic potentials (PSPs) caused by spike arrivals to which the effects of refractoriness and adaptation are added. The threshold is either fixed or dynamic. In the latter case it increases after each spike. The SRM is flexible enough to account for a variety of neuronal firing patterns in response to step current input. The SRM has also been used in the theory of computation to quantify the capacity of spiking neural networks; and in the neurosciences to predict the subthreshold voltage and the firing times of cortical neurons during stimulation with a time-dependent current. The name "Spike Response Model" points to the property that the two important filters formula_0 and formula_1 of the model can be interpreted as the response of the membrane potential to an incoming spike (response kernel formula_0, the PSP) and to an outgoing spike (response kernel formula_1, also called refractory kernel). The SRM has been formulated in continuous time and in discrete time. The SRM can be viewed as a generalized linear model (GLM) or as an (integrated version of) a generalized integrate-and-fire model with adaptation.
Model equations for SRM in continuous time.
In the SRM, at each moment in time t, a spike can be generated stochastically with instantaneous stochastic intensity or 'escape function'
formula_2
that depends on the momentary difference between the membrane voltage "V"(t) and the dynamic threshold formula_3.
The membrane voltage "V"(t) at time t is given by
formula_4
where "t""f" is the firing time of spike number "f" of the neuron, "V"rest is the resting voltage in the absence of input, "I(t-s)" is the input current at time "t" − "s" and formula_5 is a linear filter (also called kernel) that describes the contribution of an input current pulse at time "t" − "s" to the voltage at time "t". The contributions to the voltage caused by a spike at time formula_6 are described by the refractory kernel formula_7. In particular,
formula_7 describes the time course of the action potential starting at time formula_6 as well as the spike-afterpotential.
The dynamic threshold formula_3 is given by
formula_8
where formula_9 is the firing threshold of an inactive neuron and formula_10 describes the increase of the threshold after a spike at time formula_6. In case of a fixed threshold [i.e., formula_10=0], the refractory kernel formula_7 should include only the spike-afterpotential, but not the shape of the spike itself.
A common choice for the 'escape rate' formula_11 (that is consistent with biological data) is
formula_12
where formula_13 is a time constant that describes how quickly a spike is fired once the membrane potential reaches the threshold, and formula_14 is a sharpness parameter. For formula_15 the threshold becomes sharp and spike firing occurs deterministically at the moment when the membrane potential hits the threshold from below. The sharpness value found in experiments is formula_16, which means that neuronal firing becomes non-negligible as soon as the membrane potential is a few mV below the formal firing threshold. The escape rate process via a soft threshold is reviewed in Chapter 9 of the textbook "Neuronal Dynamics".
In a network of N SRM neurons formula_17, the membrane voltage of neuron formula_18 is given by
formula_19
where formula_20 are the firing times of neuron j (i.e., its spike train), and formula_21 describes the time course of the spike and the spike after-potential for neuron i, formula_22 and formula_23 describe the amplitude and time course of an excitatory or inhibitory postsynaptic potential (PSP) caused by the spike formula_20 of the presynaptic neuron j. The time course formula_24 of the PSP results from the convolution of the postsynaptic current formula_25 caused by the arrival of a presynaptic spike from neuron j.
Model equations for SRM in discrete time.
For simulations, the SRM is usually implemented in discrete time. In time step formula_26 of duration formula_27, a spike is generated with probability
formula_28
that depends on the momentary difference between the membrane voltage "V" and the dynamic threshold formula_29. The function F is often taken as a standard sigmoidal formula_30 with steepness parameter formula_31. But the functional form of F can also be calculated from the stochastic intensity formula_11 in continuous time as formula_32 where formula_33 is the distance to threshold.
The membrane voltage formula_34 in discrete time is given by
formula_35
where "t"f is the discretized firing time of the neuron, "V"rest is the resting voltage in the absence of input, and formula_36 is the input current at time formula_37 (integrated over one time step). The input filter formula_5 and the spike-afterpotential formula_38 are defined as in the case of the SRM in continuous time.
For networks of SRM neurons in discrete time we define the spike train of neuron j as a sequence of zeros and ones, formula_39 and rewrite the membrane potential as
formula_40
In this notation, the refractory kernel formula_38 and the PSP shape formula_24 can be interpreted as linear response filters applied to the binary spike trains formula_41.
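The discrete-time formulation translates almost directly into a simulation loop. The following Python sketch simulates a single SRM neuron with a fixed threshold, exponential input and refractory kernels, and the spike probability computed as 1 − exp(−ρΔt); all kernel shapes and constants are illustrative assumptions rather than values taken from the literature.

```python
import math, random

dt = 1.0                       # time step (ms)
T = 500                        # number of time steps
tau_kappa = 10.0               # time constant of the input filter kappa (ms), assumed exponential
eta0, tau_eta = -8.0, 30.0     # spike-afterpotential: amplitude (mV) and decay time (ms), assumed
V_rest, theta0 = -65.0, -50.0  # resting voltage and (fixed) threshold (mV)
beta, tau0 = 1.0, 1.0          # escape-rate sharpness (1/mV) and time scale (ms)

def kappa(s):                  # input filter (assumed exponential, unit integral)
    return math.exp(-s / tau_kappa) / tau_kappa

def eta(s):                    # refractory kernel: spike-afterpotential only (fixed-threshold case)
    return eta0 * math.exp(-s / tau_eta)

I = [14.0] * T                 # constant input current (arbitrary units)
spike_times = []

for n in range(T):
    t = n * dt
    V = V_rest
    V += sum(kappa(m * dt) * I[n - m] * dt for m in range(1, min(n, 200) + 1))  # filtered input
    V += sum(eta(t - tf) for tf in spike_times)                                 # past spikes
    rho = math.exp(beta * (V - theta0)) / tau0        # escape rate f(V - theta)
    if random.random() < 1.0 - math.exp(-rho * dt):   # spike probability in this time step
        spike_times.append(t)

print(len(spike_times), "spikes, first few at", spike_times[:5])
```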
Main applications of the SRM.
Theory of computation with pulsed neural networks.
Since the formulation as SRM provides an explicit expression for the membrane voltage (without the detour via differential equations), SRMs have been the dominant mathematical model in a formal theory of computation with spiking neurons.
Prediction of voltage and spike times of cortical neurons.
The SRM with dynamic threshold has been used to predict the firing time of cortical neurons with a precision of a few milliseconds. Neurons were stimulated, via current injection, with time-dependent currents of different means and variance while the membrane voltage was recorded. The reliability of predicted spikes was close to the intrinsic reliability when the same time-dependent current was repeated several times. Moreover, extracting the shape of the filters formula_5 and formula_38 directly from the experimental data revealed that adaptation extends over time scales from tens of milliseconds to tens of seconds. Thanks to the convexity properties of the likelihood in Generalized Linear Models, parameter extraction is efficient.
Associative memory in networks of spiking neurons.
SRM0 neurons have been used to construct an associative memory in a network of spiking neurons. The SRM network which stored a finite number of stationary patterns as attractors using a Hopfield-type connectivity matrix was one of the first examples of attractor networks with spiking neurons.
Population activity equations in large networks of spiking neurons.
For SRM neurons, an important variable characterizing the internal state of the neuron is the time since the last spike (the 'age' of the neuron), which enters into the refractory kernel formula_38. The population activity equations for SRM neurons can be formulated alternatively either as integral equations or as partial differential equations for the 'refractory density'. Because the refractory kernel may include a time scale slower than that of the membrane potential, the population equations for SRM neurons provide powerful alternatives to the more broadly used partial differential equations for the 'membrane potential density'. Reviews of the population activity equation based on refractory densities can be found in the literature as well as in Chapter 14 of the textbook "Neuronal Dynamics".
Spike patterns and temporal code.
SRMs are useful for understanding theories of neural coding. A network of SRM neurons can store attractors that form reliable spatio-temporal spike patterns (also known as synfire chains), an example of temporal coding for stationary inputs. Moreover, the population activity equations for SRM neurons exhibit temporally precise transients after a stimulus switch, indicating reliable spike firing.
History and relation to other models.
The Spike Response Model was introduced in a series of papers between 1991 and 2000. The name "Spike Response Model" probably appeared for the first time in 1993. Some papers used exclusively the deterministic limit with a hard threshold, others the soft threshold with escape noise. Precursors of the Spike Response Model are the integrate-and-fire model introduced by Lapicque in 1907 as well as models used in auditory neuroscience.
SRM0.
An important variant of the model is SRM0, which is related to time-dependent nonlinear renewal theory. The main difference to the voltage equation of the SRM introduced above is that in the term containing the refractory kernel formula_38 there is no summation sign over past spikes: only the "most recent spike" matters. The model SRM0 is closely related to the inhomogeneous Markov interval process and to age-dependent models of refractoriness.
GLM.
The equations of the SRM as introduced above are equivalent to generalized linear models (GLMs) in neuroscience. In neuroscience, GLMs were introduced as an extension of the linear-nonlinear-Poisson model (LNP) by adding self-interaction of an output spike with the internal state of the neuron (therefore also called 'recursive LNP'). The self-interaction is equivalent to the kernel formula_38 of the SRM. The GLM framework makes it possible to formulate a maximum likelihood approach applied to the likelihood of an observed spike train under the assumption that an SRM could have generated the spike train. Despite the mathematical equivalence, there is a conceptual difference in interpretation: in the SRM the variable V is interpreted as the membrane voltage, whereas in the recursive LNP it is a 'hidden' variable to which no meaning is assigned. The SRM interpretation is useful if measurements of subthreshold voltage are available, whereas the recursive LNP is useful in systems neuroscience, where spikes (in response to sensory stimulation) are recorded extracellularly without access to the subthreshold voltage.
Adaptive leaky integrate-and-fire models.
A leaky integrate-and-fire neuron with spike-triggered adaptation has a subthreshold membrane potential generated by the following differential equations
formula_42
formula_43
where formula_44 is the membrane time constant, "w"k is an adaptation current with index k, "E"rest is the resting potential, "t"f is the firing time of the neuron, and the Greek delta denotes the Dirac delta function. Whenever the voltage reaches the firing threshold, the voltage is reset to a value "V"r below the firing threshold. Integration of the linear differential equations gives a formula identical to the voltage equation of the SRM. However, in this case, the refractory kernel formula_38 does not include the spike shape but only the spike-afterpotential. In the absence of adaptation currents, we retrieve the standard LIF model, which is equivalent to a refractory kernel formula_38 that decays exponentially with the membrane time constant formula_44.
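A forward-Euler sketch of these equations in Python (with one adaptation variable and illustrative constants) makes the correspondence concrete: each threshold crossing resets the voltage and increments the adaptation current, which then decays exponentially.

```python
# Forward-Euler integration of a leaky integrate-and-fire neuron with one
# spike-triggered adaptation current; all constants are illustrative.
dt = 0.1                                       # ms
tau_m, R = 20.0, 1.0                           # membrane time constant (ms) and resistance (arb. units)
E_rest, theta, V_reset = -65.0, -50.0, -60.0   # resting potential, threshold, reset value (mV)
tau_w, b = 100.0, 0.5                          # adaptation time constant (ms) and jump per spike

V, w = E_rest, 0.0
spikes = []
for n in range(int(500 / dt)):                 # simulate 500 ms
    t = n * dt
    I = 20.0                                   # constant input current (arb. units)
    V += dt * (R * I - (V - E_rest) - R * w) / tau_m
    w += dt * (-w / tau_w)
    if V >= theta:                             # threshold crossing
        spikes.append(t)
        V = V_reset                            # reset below the firing threshold
        w += b                                 # spike-triggered jump of the adaptation current

print(len(spikes), "spikes; the inter-spike intervals lengthen as w builds up")
```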
Reference section.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\varepsilon"
},
{
"math_id": 1,
"text": "\\eta"
},
{
"math_id": 2,
"text": "\\rho(t) = f(V(t)-\\vartheta(t)) "
},
{
"math_id": 3,
"text": "\\vartheta(t)"
},
{
"math_id": 4,
"text": "V(t)= \\sum_f \\eta(t-t^f) + \\int_0^\\infty \\kappa(s) I(t-s) \\, ds + V_\\mathrm{rest} "
},
{
"math_id": 5,
"text": "\\kappa(s)"
},
{
"math_id": 6,
"text": "t^f"
},
{
"math_id": 7,
"text": "\\eta(t-t^f)"
},
{
"math_id": 8,
"text": "\\vartheta(t)= \\vartheta_0 + \\sum_f \\theta_1(t-t^{f}) "
},
{
"math_id": 9,
"text": "\\vartheta_0"
},
{
"math_id": 10,
"text": "\\theta_1(t-t^f)"
},
{
"math_id": 11,
"text": "f"
},
{
"math_id": 12,
"text": " f(V-\\vartheta) = \\frac{1}{\\tau_0} \\exp[\\beta(V-\\vartheta)] "
},
{
"math_id": 13,
"text": "\\tau_0"
},
{
"math_id": 14,
"text": "\\beta"
},
{
"math_id": 15,
"text": "\\beta\\to\\infty"
},
{
"math_id": 16,
"text": "1/\\beta\\approx 4mV"
},
{
"math_id": 17,
"text": "1\\le i \\le N"
},
{
"math_id": 18,
"text": "i"
},
{
"math_id": 19,
"text": "V_i(t)= \\sum_f \\eta_i(t-t_i^{f}) + \\sum_{j=1}^N w_{ij} \\sum_{f'}\\varepsilon_{ij}(t-t_j^{f'}) + V_\\mathrm{rest} "
},
{
"math_id": 20,
"text": "t_j^{f'}"
},
{
"math_id": 21,
"text": "\\eta_i(t-t^f_i)"
},
{
"math_id": 22,
"text": "w_{ij}"
},
{
"math_id": 23,
"text": "\\varepsilon_{ij}(t-t_j^{f'})"
},
{
"math_id": 24,
"text": "\\varepsilon_{ij}(s)"
},
{
"math_id": 25,
"text": "I(t)"
},
{
"math_id": 26,
"text": "t_n"
},
{
"math_id": 27,
"text": "\\Delta t"
},
{
"math_id": 28,
"text": "P_F(t_n) = F(V(t_n)-\\vartheta(t_n)) "
},
{
"math_id": 29,
"text": "\\vartheta"
},
{
"math_id": 30,
"text": "F(x) = 0.5[1 + \\tanh(\\gamma x)]"
},
{
"math_id": 31,
"text": "\\gamma"
},
{
"math_id": 32,
"text": "F(y_n)\\approx 1 - \\exp[y_n \\, \\Delta t]"
},
{
"math_id": 33,
"text": "y_n = V(t_n)-\\vartheta(t_n)"
},
{
"math_id": 34,
"text": "V(t_n)"
},
{
"math_id": 35,
"text": "V(t_n) = \\sum_f \\eta(t_n-t^{f}) + \\sum_{m=1}^\\infty \\kappa(m\\,\\Delta t) I(t_n-m\\,\\Delta t)+ V_\\mathrm{rest} "
},
{
"math_id": 36,
"text": "I(t_k)"
},
{
"math_id": 37,
"text": "t_k"
},
{
"math_id": 38,
"text": "\\eta(s)"
},
{
"math_id": 39,
"text": "\\{X_j(t_m)\\in \\{0,1\\}; m=1,2,3, \\dots \\}\n"
},
{
"math_id": 40,
"text": "V_i(t_n) = \\sum_m\\eta_i(t_n-t_m) X_i(t_m) + \\sum_j w_{ij} \\sum_m \\varepsilon_{ij}(t_n-t_m) X_j(t_m) + V_\\mathrm{rest} "
},
{
"math_id": 41,
"text": "X_j"
},
{
"math_id": 42,
"text": "\\tau_\\mathrm{m} \\frac{d V (t)}{d t} = R I(t)- [V (t) - E_\\mathrm{rest} ]- R \\sum_k w_k"
},
{
"math_id": 43,
"text": "\\tau_k \\frac{d w_k (t)}{d t} = - w_k + b_k \\tau_k \\sum_f \\delta (t-t^f) "
},
{
"math_id": 44,
"text": "\\tau_m"
}
] |
https://en.wikipedia.org/wiki?curid=65657913
|
656586
|
Cubic Hermite spline
|
Cubic function used for interpolation
In numerical analysis, a cubic Hermite spline or cubic Hermite interpolator is a spline where each piece is a third-degree polynomial specified in Hermite form, that is, by its values and first derivatives at the end points of the corresponding domain interval.
Cubic Hermite splines are typically used for interpolation of numeric data specified at given argument values formula_0, to obtain a continuous function. The data should consist of the desired function value and derivative at each formula_1. (If only the values are provided, the derivatives must be estimated from them.) The Hermite formula is applied to each interval formula_2 separately. The resulting spline will be continuous and will have continuous first derivative.
Cubic polynomial splines can be specified in other ways, the Bézier cubic being the most common. However, these two methods provide the same set of splines, and data can be easily converted between the Bézier and Hermite forms; so the names are often used as if they were synonymous.
Cubic polynomial splines are extensively used in computer graphics and geometric modeling to obtain curves or motion trajectories that pass through specified points of the plane or three-dimensional space. In these applications, each coordinate of the plane or space is separately interpolated by a cubic spline function of a separate parameter "t".
Cubic polynomial splines are also used extensively in structural analysis applications, such as Euler–Bernoulli beam theory. Cubic polynomial splines have also been applied to mortality analysis and mortality forecasting.
Cubic splines can be extended to functions of two or more parameters, in several ways. Bicubic splines (Bicubic interpolation) are often used to interpolate data on a regular rectangular grid, such as pixel values in a digital image or altitude data on a terrain. Bicubic surface patches, defined by three bicubic splines, are an essential tool in computer graphics.
Cubic splines are often called csplines, especially in computer graphics. Hermite splines are named after Charles Hermite.
Interpolation on a single interval.
Unit interval [0, 1].
On the unit interval formula_3, given a starting point formula_4 at formula_5 and an ending point formula_6 at formula_7 with starting tangent formula_8 at formula_5 and ending tangent formula_9 at formula_7, the polynomial can be defined by
formula_10
where "t" ∈ [0, 1].
Interpolation on an arbitrary interval.
Interpolating formula_11 in an arbitrary interval formula_2 is done by mapping the latter to formula_12 through an affine (degree-1) change of variable. The formula is
formula_13
where formula_14, and formula_15 refers to the basis functions, defined below. Note that the tangent values have been scaled by formula_16 compared to the equation on the unit interval.
Uniqueness.
The formula specified above provides the unique third-degree polynomial path between the two points with the given tangents.
Proof. Let formula_17 be two third-degree polynomials satisfying the given boundary conditions. Define formula_18 then:
formula_19
formula_20
Since both formula_21 and formula_22 are third-degree polynomials, formula_23 is at most a third-degree polynomial. So formula_23 must be of the form
formula_24
Calculating the derivative gives
formula_25
We know furthermore that
formula_26
formula_27
Putting (1) and (2) together, we deduce that formula_28, and therefore formula_29 thus formula_30
Representations.
We can write the interpolation polynomial as
formula_31
where formula_32, formula_33, formula_34, formula_35 are Hermite basis functions.
These can be written in different ways, each way revealing different properties:
The "expanded" column shows the representation used in the definition above.
The "factorized" column shows immediately that formula_33 and formula_35 are zero at the boundaries.
You can further conclude that formula_34 and formula_35 have a zero of multiplicity 2 at 0, and formula_32 and formula_33 have such a zero at 1, thus they have slope 0 at those boundaries.
The "Bernstein" column shows the decomposition of the Hermite basis functions into Bernstein polynomials of order 3:
formula_36
Using this connection you can express cubic Hermite interpolation in terms of cubic Bézier curves with respect to the four values formula_37 and do Hermite interpolation using the de Casteljau algorithm.
It shows that in a cubic Bézier patch the two control points in the middle determine the tangents of the interpolation curve at the respective outer points.
We can also write the polynomial in standard form as
formula_38
where the control points and tangents are coefficients. This permits efficient evaluation of the polynomial at various values of "t" since the constant coefficients can be computed once and reused.
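A minimal sketch of this precomputation in Python, for the scalar case and with illustrative function names:

```python
def hermite_coefficients(p0, m0, p1, m1):
    """Power-basis coefficients (c3, c2, c1, c0) of the Hermite cubic."""
    return (2 * p0 + m0 - 2 * p1 + m1,
            -3 * p0 + 3 * p1 - 2 * m0 - m1,
            m0,
            p0)

def hermite_eval(coeffs, t):
    """Evaluate c3*t**3 + c2*t**2 + c1*t + c0 by Horner's scheme,
    reusing the precomputed coefficients for many values of t."""
    c3, c2, c1, c0 = coeffs
    return ((c3 * t + c2) * t + c1) * t + c0
```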
Interpolating a data set.
A data set, formula_39 for formula_40, can be interpolated by applying the above procedure on each interval, where the tangents are chosen in a sensible manner, meaning that the tangents for intervals sharing endpoints are equal. The interpolated curve then consists of piecewise cubic Hermite splines and is globally continuously differentiable in formula_41.
The choice of tangents is not unique, and there are several options available.
Finite difference.
The simplest choice is the three-point difference, not requiring constant interval lengths:
formula_42
for internal points formula_43, and one-sided difference at the endpoints of the data set.
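Putting the pieces together, the following Python sketch interpolates a data set using three-point-difference tangents and the arbitrary-interval formula given earlier; NumPy and the function names are assumptions of this illustration.

```python
import numpy as np

def finite_difference_tangents(x, p):
    """Three-point difference tangents, one-sided at the two endpoints."""
    x, p = np.asarray(x, float), np.asarray(p, float)
    m = np.empty_like(p)
    m[0] = (p[1] - p[0]) / (x[1] - x[0])
    m[-1] = (p[-1] - p[-2]) / (x[-1] - x[-2])
    m[1:-1] = 0.5 * ((p[2:] - p[1:-1]) / (x[2:] - x[1:-1])
                     + (p[1:-1] - p[:-2]) / (x[1:-1] - x[:-2]))
    return m

def hermite_spline(x, p, m, xq):
    """Evaluate the piecewise cubic Hermite spline at the query points xq."""
    x, p, m, xq = (np.asarray(a, float) for a in (x, p, m, xq))
    k = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)  # interval index
    dx = x[k + 1] - x[k]
    t = (xq - x[k]) / dx
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    # the tangents are scaled by the interval length, as in the formula above
    return h00 * p[k] + h10 * dx * m[k] + h01 * p[k + 1] + h11 * dx * m[k + 1]
```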
Cardinal spline.
A cardinal spline, sometimes called a canonical spline, is obtained if
formula_45
is used to calculate the tangents. The parameter c is a "tension" parameter that must be in the interval [0, 1]. In some sense, this can be interpreted as the "length" of the tangent. Choosing "c" = 1 yields all zero tangents, and choosing "c" = 0 yields a Catmull–Rom spline in the uniform parameterization case.
Catmull–Rom spline.
For tangents chosen to be
formula_46
a Catmull–Rom spline is obtained, being a special case of a cardinal spline. This assumes uniform parameter spacing.
The curve is named after Edwin Catmull and Raphael Rom. The principal advantage of this technique is that the points along the original set of points also make up the control points for the spline curve. Two additional points are required on either end of the curve. The uniform Catmull–Rom implementation can produce loops and self-intersections. The chordal and centripetal Catmull–Rom implementations solve this problem, but use a slightly different calculation. In computer graphics, Catmull–Rom splines are frequently used to get smooth interpolated motion between key frames. For example, most camera path animations generated from discrete key-frames are handled using Catmull–Rom splines. They are popular mainly for being relatively easy to compute, guaranteeing that each key frame position will be hit exactly, and also guaranteeing that the tangents of the generated curve are continuous over multiple segments.
Kochanek–Bartels spline.
A Kochanek–Bartels spline is a further generalization on how to choose the tangents given the data points formula_47, formula_44 and formula_48, with three parameters possible: tension, bias and a continuity parameter.
Monotone cubic interpolation.
If a cubic Hermite spline of any of the above listed types is used for interpolation of a monotonic data set, the interpolated function will not necessarily be monotonic, but monotonicity can be preserved by adjusting the tangents.
Interpolation on the unit interval with matched derivatives at endpoints.
Consider a single coordinate of the points formula_49 and formula_50 as the values that a function "f"("x") takes at integer ordinates "x" = "n" − 1, "n", "n" + 1 and "n" + 2,
formula_51
In addition, assume that the tangents at the endpoints are defined as the centered differences of the adjacent points:
formula_52
To evaluate the interpolated "f"("x") for a real "x", first separate "x" into the integer portion "n" and fractional portion "u":
formula_53
formula_54
formula_55
formula_56
where formula_57 denotes the floor function, which returns the largest integer no larger than "x".
Then the Catmull–Rom spline is
formula_58
where formula_59 denotes the matrix transpose. The bottom equality is depicting the application of Horner's method.
This writing is relevant for tricubic interpolation, where one optimization requires computing CINT"u" sixteen times with the same "u" and different "p".
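A brief Python sketch of the Horner form above (the function name is illustrative); it reproduces the centered-difference Catmull–Rom interpolation on an integer grid.

```python
def cint(u, pm1, p0, p1, p2):
    """Catmull-Rom value between samples p0 = f(n) and p1 = f(n+1) for the
    fractional part u, using the neighbours pm1 = f(n-1) and p2 = f(n+2)."""
    return 0.5 * ((((-pm1 + 3*p0 - 3*p1 + p2) * u
                    + (2*pm1 - 5*p0 + 4*p1 - p2)) * u
                   + (p1 - pm1)) * u
                  + 2*p0)

# Evaluating f(x): split x into n = floor(x) and u = x - n, then
# f(x) = cint(u, f(n-1), f(n), f(n+1), f(n+2)).
```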
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x_1,x_2,\\ldots,x_n"
},
{
"math_id": 1,
"text": "x_k"
},
{
"math_id": 2,
"text": "(x_k, x_{k+1})"
},
{
"math_id": 3,
"text": "[0,1]"
},
{
"math_id": 4,
"text": "\\boldsymbol{p}_0"
},
{
"math_id": 5,
"text": "t = 0"
},
{
"math_id": 6,
"text": "\\boldsymbol{p}_1"
},
{
"math_id": 7,
"text": "t = 1"
},
{
"math_id": 8,
"text": "\\boldsymbol{m}_0"
},
{
"math_id": 9,
"text": "\\boldsymbol{m}_1"
},
{
"math_id": 10,
"text": "\\boldsymbol{p}(t) = \\left(2t^3 - 3t^2 + 1\\right) \\boldsymbol{p}_0 + \\left(t^3 - 2t^2 + t\\right) \\boldsymbol{m}_0 + \\left(-2t^3 + 3t^2\\right) \\boldsymbol{p}_1 + \\left(t^3 - t^2\\right) \\boldsymbol{m}_1,"
},
{
"math_id": 11,
"text": "x"
},
{
"math_id": 12,
"text": "[0, 1]"
},
{
"math_id": 13,
"text": "\\boldsymbol{p}(x) = h_{00}(t) \\boldsymbol{p}_k + h_{10}(t) (x_{k+1} - x_k)\\boldsymbol{m}_k + h_{01}(t) \\boldsymbol{p}_{k+1} + h_{11}(t)(x_{k+1} - x_k)\\boldsymbol{m}_{k+1},"
},
{
"math_id": 14,
"text": "t = (x - x_k)/(x_{k+1} - x_k)"
},
{
"math_id": 15,
"text": "h"
},
{
"math_id": 16,
"text": "x_{k+1} - x_k"
},
{
"math_id": 17,
"text": "P, Q"
},
{
"math_id": 18,
"text": "R = Q - P,"
},
{
"math_id": 19,
"text": "R(0) = Q(0)-P(0) = 0,"
},
{
"math_id": 20,
"text": "R(1) = Q(1) - P(1) = 0."
},
{
"math_id": 21,
"text": "Q"
},
{
"math_id": 22,
"text": "P"
},
{
"math_id": 23,
"text": "R"
},
{
"math_id": 24,
"text": "R(x) = ax(x - 1)(x - r)."
},
{
"math_id": 25,
"text": "R'(x) = ax(x - 1) + ax(x - r) + a(x - 1)(x - r)."
},
{
"math_id": 26,
"text": "R'(0) = Q'(0) - P'(0) = 0,"
},
{
"math_id": 27,
"text": "R'(1) = Q'(1) - P'(1) = 0,"
},
{
"math_id": 28,
"text": "a = 0"
},
{
"math_id": 29,
"text": "R = 0,"
},
{
"math_id": 30,
"text": "P = Q."
},
{
"math_id": 31,
"text": "\\boldsymbol{p}(t) = h_{00}(t)\\boldsymbol{p}_0 + h_{10}(t)(x_{k+1}-x_k)\\boldsymbol{m}_0 + h_{01}(t)\\boldsymbol{p}_1 + h_{11}(t)(x_{k+1}-x_k)\\boldsymbol{m}_1"
},
{
"math_id": 32,
"text": "h_{00}"
},
{
"math_id": 33,
"text": "h_{10}"
},
{
"math_id": 34,
"text": "h_{01}"
},
{
"math_id": 35,
"text": "h_{11}"
},
{
"math_id": 36,
"text": "B_k(t) = \\binom{3}{k} \\cdot t^k \\cdot (1 - t)^{3-k}."
},
{
"math_id": 37,
"text": "\\boldsymbol{p}_0, \\boldsymbol{p}_0 + \\frac{1}{3} \\boldsymbol{m}_0, \\boldsymbol{p}_1 - \\frac{1}{3} \\boldsymbol{m}_1, \\boldsymbol{p}_1"
},
{
"math_id": 38,
"text": "\\boldsymbol{p}(t) = \\left(2\\boldsymbol{p}_0 + \\boldsymbol{m}_0 - 2\\boldsymbol{p}_1 + \\boldsymbol{m}_1\\right) t^3 + \\left(-3\\boldsymbol{p}_0 + 3\\boldsymbol{p}_1 - 2\\boldsymbol{m}_0 - \\boldsymbol{m}_1\\right) t^2 + \\boldsymbol{m}_0 t + \\boldsymbol{p}_0"
},
{
"math_id": 39,
"text": "(x_k,\\boldsymbol{p}_k)"
},
{
"math_id": 40,
"text": "k=1,\\ldots,n"
},
{
"math_id": 41,
"text": "(x_1, x_n)"
},
{
"math_id": 42,
"text": "\\boldsymbol{m}_k = \\frac{1}{2} \\left(\\frac{\\boldsymbol{p}_{k+1} - \\boldsymbol{p}_k}{x_{k+1} - x_k} + \\frac{\\boldsymbol{p}_k - \\boldsymbol{p}_{k-1}}{x_k - x_{k-1}}\\right)"
},
{
"math_id": 43,
"text": "k = 2, \\dots, n - 1"
},
{
"math_id": 44,
"text": "\\boldsymbol{p}_k"
},
{
"math_id": 45,
"text": "\\boldsymbol{m}_k = (1 - c) \\frac{\\boldsymbol{p}_{k+1} - \\boldsymbol{p}_{k-1}}{x_{k+1} - x_{k-1}}"
},
{
"math_id": 46,
"text": "\\boldsymbol{m}_k = \\frac{\\boldsymbol{p}_{k+1} - \\boldsymbol{p}_{k-1}}{2}"
},
{
"math_id": 47,
"text": "\\boldsymbol{p}_{k-1}"
},
{
"math_id": 48,
"text": "\\boldsymbol{p}_{k+1}"
},
{
"math_id": 49,
"text": "\\boldsymbol{p}_{n-1}, \\boldsymbol{p}_n, \\boldsymbol{p}_{n+1}"
},
{
"math_id": 50,
"text": "\\boldsymbol{p}_{n+2}"
},
{
"math_id": 51,
"text": "p_n = f(n) \\quad \\forall n \\in \\mathbb{Z}."
},
{
"math_id": 52,
"text": "m_n = \\frac{f(n + 1) - f(n - 1)}{2} = \\frac{p_{n+1} - p_{n-1}}{2} \\quad \\forall n \\in \\mathbb{Z}."
},
{
"math_id": 53,
"text": "x = n + u,"
},
{
"math_id": 54,
"text": "n = \\lfloor x \\rfloor = \\operatorname{floor}(x),"
},
{
"math_id": 55,
"text": "u = x - n = x - \\lfloor x \\rfloor,"
},
{
"math_id": 56,
"text": "0 \\le u < 1,"
},
{
"math_id": 57,
"text": "\\lfloor x \\rfloor"
},
{
"math_id": 58,
"text": "\\begin{align}\n f(x) = f(n + u) &= \\text{CINT}_u(p_{n-1}, p_n, p_{n+1}, p_{n+2}) \\\\\n &=\n \\begin{bmatrix}\n 1 & u & u^2 & u^3\n \\end{bmatrix}\n \\begin{bmatrix}\n 0 & 1 & 0 & 0 \\\\\n -\\tfrac12 & 0 & \\tfrac12 & 0 \\\\\n 1 & -\\tfrac52 & 2 & -\\tfrac12 \\\\\n -\\tfrac12 & \\tfrac32 & -\\tfrac32 & \\tfrac12\n \\end{bmatrix}\n \\begin{bmatrix}\n p_{n-1} \\\\\n p_n \\\\\n p_{n+1} \\\\\n p_{n+2}\n \\end{bmatrix} \\\\\n &= \\frac 12\n \\begin{bmatrix}\n -u^3 +2u^2 - u \\\\\n 3u^3 - 5u^2 + 2 \\\\\n -3u^3 + 4u^2 + u \\\\\n u^3 - u^2\n \\end{bmatrix}^\\mathrm{T}\n \\begin{bmatrix}\n p_{n-1} \\\\\n p_n \\\\\n p_{n+1} \\\\\n p_{n+2}\n \\end{bmatrix} \\\\\n &= \\frac 12\n \\begin{bmatrix}\n u\\big((2 - u)u - 1\\big) \\\\\n u^2(3u - 5) + 2 \\\\\n u\\big((4 - 3u)u + 1\\big) \\\\\n u^2(u - 1)\n \\end{bmatrix}^\\mathrm{T}\n \\begin{bmatrix}\n p_{n-1} \\\\\n p_n \\\\\n p_{n+1} \\\\\n p_{n+2}\n \\end{bmatrix} \\\\\n &= \\tfrac12 \\Big(\\big(u^2(2 - u) - u\\big) p_{n-1} + \\big(u^2(3u - 5) + 2\\big) p_n + \\big(u^2(4 - 3u) + u\\big) p_{n+1} + u^2(u - 1) p_{n+2}\\Big) \\\\\n &= \\tfrac12 \\big((-u^3 + 2u^2 - u) p_{n-1} + (3u^3 - 5u^2 + 2) p_n + (-3u^3 + 4u^2 + u) p_{n+1} + (u^3 - u^2) p_{n+2}\\big) \\\\\n &= \\tfrac12 \\big((-p_{n-1} + 3p_n - 3p_{n+1} + p_{n+2}) u^3 + (2p_{n-1} - 5p_n + 4p_{n+1} - p_{n+2})u^2 + (-p_{n-1} + p_{n+1}) u + 2p_n\\big) \\\\\n &= \\tfrac12 \\Big(\\big((-p_{n-1} + 3p_n - 3p_{n+1} + p_{n+2}) u + (2p_{n-1} - 5p_n + 4p_{n+1} - p_{n+2})\\big)u + (-p_{n-1} + p_{n+1})\\Big)u + p_n,\n\\end{align}"
},
{
"math_id": 59,
"text": "\\mathrm{T}"
}
] |
https://en.wikipedia.org/wiki?curid=656586
|
65667650
|
Mitchell–Netravali filters
|
The Mitchell–Netravali filters or BC-splines are a group of reconstruction filters used primarily in computer graphics, which can be used, for example, for anti-aliasing or for scaling raster graphics. They are also known as bicubic filters in image editing programs because they are bi-dimensional cubic splines.
Definition.
The Mitchell–Netravali filters were designed as part of an investigation into artifacts from reconstruction filters. The filters are piece-wise cubic filters with four-pixel wide supports. After excluding unsuitable filters from this family, such as discontinuous curves, two parameters formula_0 and formula_1 remain, through which the Mitchell–Netravali filters can be configured. The filters are defined as follows:
formula_2
It is possible to construct two-dimensional versions of the Mitchell–Netravali filters by separation. In this case the filters can be replaced by a series of interpolations with the one-dimensional filter. From the color values of the four neighboring pixels formula_3, formula_4, formula_5, formula_6, the color value formula_7 is then calculated as follows:
formula_8
formula_9 lies between formula_4 and formula_5; formula_10 is the distance between formula_4 and formula_9.
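The kernel is straightforward to implement. The following Python sketch (function names are illustrative) evaluates formula_2 for given "B" and "C"; the one-dimensional interpolation of the four neighbouring values can then be written as a weighted sum of kernel values, which agrees with the expanded cubic in formula_10 above.

```python
def mn_kernel(x, B=1/3, C=1/3):
    """Mitchell-Netravali kernel k(x) for parameters B and C."""
    x = abs(x)
    if x < 1:
        return ((12 - 9*B - 6*C) * x**3
                + (-18 + 12*B + 6*C) * x**2
                + (6 - 2*B)) / 6
    if x < 2:
        return ((-B - 6*C) * x**3
                + (6*B + 30*C) * x**2
                + (-12*B - 48*C) * x
                + (8*B + 24*C)) / 6
    return 0.0

def mn_interpolate(P0, P1, P2, P3, d, B=1/3, C=1/3):
    """Weighted sum of the four neighbours for a sample point at distance d from P1."""
    return (P0 * mn_kernel(1 + d, B, C) + P1 * mn_kernel(d, B, C)
            + P2 * mn_kernel(1 - d, B, C) + P3 * mn_kernel(2 - d, B, C))
```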
Subjective effects.
Various artifacts may result from certain choices of parameters "B" and "C", as shown in the following illustration. The researchers recommended values from the family formula_11 (dashed line) and especially formula_12 as a satisfactory compromise.
Implementations.
The following parameters result in well-known cubic splines used in common image editing programs:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "B"
},
{
"math_id": 1,
"text": "C"
},
{
"math_id": 2,
"text": "\nk(x) = \\frac{1}{6}\n\\begin{cases}\n\\begin{array}{l}\n(12-9B-6C)|x|^3 + (-18+12B+6C)|x|^2 \\\\\n\\qquad + (6-2B)\n\\end{array} & \\text{, if } |x|<1 \\\\\n\\begin{array}{l}\n(-B-6C)|x|^3 + (6B+30C)|x|^2 \\\\\n\\qquad + (-12B-48C)|x| + (8B+24C)\n\\end{array} & \\text{, if } 1\\le |x|<2 \\\\\n0 & \\text{otherwise}\n\\end{cases}\n"
},
{
"math_id": 3,
"text": "P_0"
},
{
"math_id": 4,
"text": "P_1"
},
{
"math_id": 5,
"text": "P_2"
},
{
"math_id": 6,
"text": "P_3"
},
{
"math_id": 7,
"text": "P(d)"
},
{
"math_id": 8,
"text": "\\begin{align}\nP(d) &\\textstyle = \\left((-\\frac{1}{6}B-C)P_0 + (-\\frac{3}{2}B-C+2)P_1 + (\\frac{3}{2}B+C-2)P_2 + (\\frac{1}{6}B+C)P_3\\right) d^3 \\\\\n&\\textstyle + \\left((\\frac{1}{2}B+2C)P_0 + (2B+C-3)P_1 + (-\\frac{5}{2}B-2C+3)P_2 -CP_3\\right) d^2 \\\\\n&\\textstyle + \\left((-\\frac{1}{2}B-C)P_0 + (\\frac{1}{2}B+C)P_2\\right) d \\\\\n&\\textstyle + \\frac{1}{6}BP_0 + (-\\frac{1}{3}B+1)P_1 + \\frac{1}{6}BP_2 \\\\\n\\end{align}"
},
{
"math_id": 9,
"text": "P"
},
{
"math_id": 10,
"text": "d"
},
{
"math_id": 11,
"text": "B+2C=1"
},
{
"math_id": 12,
"text": "\\textstyle B=C=\\frac{1}{3}"
}
] |
https://en.wikipedia.org/wiki?curid=65667650
|
65673969
|
Basis of a matroid
|
Maximal independent set of the matroid
In mathematics, a basis of a matroid is a maximal independent set of the matroid—that is, an independent set that is not contained in any other independent set.
Examples.
As an example, consider the matroid over the ground-set R2 (the vectors in the two-dimensional Euclidean plane), with the following independent sets: <templatestyles src="Block indent/styles.css"/>{ {}, {(0,1)}, {(2,0)}, {(0,1),(2,0)}, {(0,3)}, {(0,3),(2,0)} }.It has two bases, which are the sets {(0,1),(2,0)} , {(0,3),(2,0)}. These are the only independent sets that are maximal under inclusion.
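For small examples like this one, the bases can be found by brute force from an independence oracle. The following Python sketch (the function names and the rank-based oracle are illustrative) recovers the two bases above.

```python
from itertools import combinations
import numpy as np

def bases(ground, independent):
    """All maximal independent sets (bases), given an independence oracle."""
    indep = [frozenset(s) for r in range(len(ground) + 1)
             for s in combinations(ground, r) if independent(s)]
    return [a for a in indep if not any(a < b for b in indep)]

vectors = [(0, 1), (2, 0), (0, 3)]

def linearly_independent(s):
    """Linear-matroid oracle: a set of vectors is independent if its rank equals its size."""
    return len(s) == 0 or np.linalg.matrix_rank(np.array(s)) == len(s)

print(bases(vectors, linearly_independent))
# two bases: {(0, 1), (2, 0)} and {(2, 0), (0, 3)}
```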
The basis has a specialized name in several specialized kinds of matroids:
Properties.
Exchange.
All matroids satisfy the following properties, for any two distinct bases formula_0 and formula_1:
However, a basis-exchange property that is "both" symmetric "and" bijective is not satisfied by all matroids: it is satisfied only by base-orderable matroids.
In general, in the symmetric basis-exchange property, the element formula_3 need not be unique. Regular matroids have the unique exchange property, meaning that for "some" formula_2, the corresponding "b" is unique.
Cardinality.
It follows from the basis exchange property that no member of formula_16 can be a proper subset of another.
Moreover, all bases of a given matroid have the same cardinality. In a linear matroid, the cardinality of all bases is called the dimension of the vector space.
Neil White's conjecture.
It is conjectured that all matroids satisfy the following property: for every integer "t" ≥ 1, if B and B' are two "t"-tuples of bases with the same multi-set union, then there is a sequence of symmetric exchanges that transforms B to B'.
Characterization.
The bases of a matroid characterize the matroid completely: a set is independent if and only if it is a subset of a basis. Moreover, one may define a matroid formula_17 to be a pair formula_18, where formula_19 is the ground-set and formula_16 is a collection of subsets of formula_19, called "bases", with the following properties:
(B1) There is at least one basis, that is, formula_16 is nonempty;
(B2) If formula_0 and formula_1 are distinct bases, and formula_2, then there exists an element formula_3 such that formula_20 is a basis (this is the basis-exchange property).
(B2) implies that, given any two bases "A" and "B", we can transform "A" into "B" by a sequence of exchanges of a single element. In particular, this implies that all bases must have the same cardinality.
Duality.
If formula_18 is a finite matroid, we can define the orthogonal or dual matroid formula_21 by calling a set a "basis" in formula_22 if and only if its complement is in formula_16. It can be verified that formula_21 is indeed a matroid. The definition immediately implies that the dual of formula_21 is "formula_18".
Using duality, one can prove that the property (B2) can be replaced by the following: (B2*) If formula_0 and formula_1 are distinct bases, and formula_3, then there exists an element formula_2 such that formula_20 is a basis.
Circuits.
A dual notion to a basis is a circuit. A circuit in a matroid is a minimal dependent set—that is, a dependent set whose proper subsets are all independent. The terminology arises because the circuits of graphic matroids are cycles in the corresponding graphs.
One may define a matroid formula_17 to be a pair formula_23, where formula_19 is the ground-set and formula_24 is a collection of subsets of formula_19, called "circuits", with the following properties:
(C1) The empty set is not a circuit;
(C2) A proper subset of a circuit is not a circuit;
(C3) If C1 and C2 are distinct circuits, and "x" is an element in their intersection, then formula_25 contains a circuit.
Another property of circuits is that, if a set formula_26 is independent, and the set formula_27 is dependent (i.e., adding the element formula_28 makes it dependent), then formula_27 contains a "unique" circuit formula_29, and it contains formula_28. This circuit is called the fundamental circuit of formula_28 w.r.t. formula_26. It is analogous to the linear algebra fact, that if adding a vector formula_28 to an independent vector set formula_26 makes it dependent, then there is a unique linear combination of elements of formula_26 that equals formula_28.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "a\\in A\\setminus B"
},
{
"math_id": 3,
"text": "b\\in B\\setminus A"
},
{
"math_id": 4,
"text": "(A \\setminus \\{ a \\}) \\cup \\{b\\}"
},
{
"math_id": 5,
"text": "(B \\setminus \\{ b \\}) \\cup \\{a\\}"
},
{
"math_id": 6,
"text": "X\\subseteq A\\setminus B"
},
{
"math_id": 7,
"text": "Y\\subseteq B\\setminus A"
},
{
"math_id": 8,
"text": "(A \\setminus X) \\cup Y"
},
{
"math_id": 9,
"text": "(B \\setminus Y) \\cup X"
},
{
"math_id": 10,
"text": "f"
},
{
"math_id": 11,
"text": "(A \\setminus \\{ a \\}) \\cup \\{f(a)\\}"
},
{
"math_id": 12,
"text": "(A_1, A_2, \\ldots, A_m)"
},
{
"math_id": 13,
"text": "(B_1, B_2, \\ldots, B_m)"
},
{
"math_id": 14,
"text": "i\\in[m]"
},
{
"math_id": 15,
"text": "(A \\setminus A_i) \\cup B_i"
},
{
"math_id": 16,
"text": "\\mathcal{B}"
},
{
"math_id": 17,
"text": "M"
},
{
"math_id": 18,
"text": "(E,\\mathcal{B})"
},
{
"math_id": 19,
"text": "E"
},
{
"math_id": 20,
"text": "(A \\setminus \\{ a \\}) \\cup \\{b\\} "
},
{
"math_id": 21,
"text": "(E,\\mathcal{B}^*)"
},
{
"math_id": 22,
"text": "\\mathcal{B}^*\n"
},
{
"math_id": 23,
"text": "(E,\\mathcal{C})"
},
{
"math_id": 24,
"text": "\\mathcal{C}"
},
{
"math_id": 25,
"text": "C_1 \\cup C_2 \\setminus \\{x\\}"
},
{
"math_id": 26,
"text": "J"
},
{
"math_id": 27,
"text": "J \\cup \\{x\\}"
},
{
"math_id": 28,
"text": "x"
},
{
"math_id": 29,
"text": "C(x,J)"
}
] |
https://en.wikipedia.org/wiki?curid=65673969
|
65674252
|
Base-orderable matroid
|
Mathematical structure
In mathematics, a base-orderable matroid is a matroid that has the following additional property, related to the bases of the matroid. For any two bases formula_0 and formula_1 there exists a feasible exchange bijection, defined as a bijection "formula_2" from "formula_0" to formula_1, such that for every formula_3, both formula_4 and formula_5 are bases. The property was introduced by Brualdi and Scrimger. A strongly-base-orderable matroid has the following stronger property: For any two bases formula_0 and formula_1, there is a strong feasible exchange bijection, defined as a bijection "formula_2" from "formula_0" to formula_1, such that for every formula_6, both formula_7 and formula_8 are bases.
The property in context.
Base-orderability imposes two requirements on the function "formula_2:"
Each of these properties alone is easy to satisfy:
Matroids that are base-orderable.
Every partition matroid is strongly base-orderable. Recall that a partition matroid is defined by a finite collection of "categories", where each category formula_12 has a "capacity" denoted by an integer formula_13 with formula_14. A basis of this matroid is a set which contains exactly formula_13 elements of each category formula_12. For any two bases formula_0 and formula_1, every bijection mapping the formula_13 elements of formula_15 to the formula_13 elements of formula_16 is a strong feasible exchange bijection.
Every transversal matroid is strongly base-orderable.
Matroids that are not base-orderable.
Some matroids are not base-orderable. A notable example is the graphic matroid on the graph "K"4, i.e., the matroid whose bases are the spanning trees of the clique on 4 vertices. Denote the vertices of "K"4 by 1,2,3,4, and its edges by 12,13,14,23,24,34. Note that the bases are:
Consider the two bases "A" = {12,23,34} and "B" = {13,14,24}, and suppose that there is a function "f" satisfying the exchange property (property 2 above). Then:
Then "f" is not a bijection - it maps two elements of "A" to the same element of "B".
There are matroids that are base-orderable but not strongly-base-orderable.
Properties.
In base-orderable matroids, a feasible exchange bijection exists not only between bases but also between any two independent sets of the same cardinality, i.e., any two independent sets formula_0 and formula_1 such that formula_17.
This can be proved by induction on the difference between the size of the sets and the size of a basis (recall that all bases of a matroid have the same size). If the difference is 0 then the sets are actually bases, and the property follows from the definition of base-orderable matroids. Otherwise by the augmentation property of a matroid, we can augment formula_0 to an independent set formula_18 and augment formula_1 to an independent set formula_19. Then, by the induction assumption there exists a feasible exchange bijection "formula_2" between formula_18 and formula_19. If formula_20, then the restriction of "formula_2" to formula_0 and formula_1 is a feasible exchange bijection. Otherwise, "formula_21" and "formula_22", so "formula_2" can be modified by setting: "formula_23". Then, the restriction of the modified function to formula_0 and formula_1 is a feasible exchange bijection.
Completeness.
The class of base-orderable matroids is "complete". This means that it is closed under the operations of minors, duals, direct sums, truncations, and induction by directed graphs. It is also closed under restriction and union.
The same is true for the class of strongly-base-orderable matroids.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "a\\in A\\setminus B"
},
{
"math_id": 4,
"text": "(A \\setminus \\{ a \\}) \\cup \\{f(a)\\}"
},
{
"math_id": 5,
"text": "(B \\setminus \\{ f(a) \\}) \\cup \\{a\\}"
},
{
"math_id": 6,
"text": "X\\subseteq A"
},
{
"math_id": 7,
"text": "(A \\setminus X) \\cup f(X)"
},
{
"math_id": 8,
"text": "(B \\setminus f(X)) \\cup X"
},
{
"math_id": 9,
"text": "f(a)\\in B\\setminus A"
},
{
"math_id": 10,
"text": "a"
},
{
"math_id": 11,
"text": "f(a)"
},
{
"math_id": 12,
"text": "C_i"
},
{
"math_id": 13,
"text": "d_i"
},
{
"math_id": 14,
"text": "0\\le d_i\\le |C_i|"
},
{
"math_id": 15,
"text": "C_i\\cap A"
},
{
"math_id": 16,
"text": "C_i\\cap B"
},
{
"math_id": 17,
"text": "|A|=|B|"
},
{
"math_id": 18,
"text": "A\\cup \\{x\\}"
},
{
"math_id": 19,
"text": "B\\cup \\{y\\}"
},
{
"math_id": 20,
"text": "f(x)=y"
},
{
"math_id": 21,
"text": "f^{-1}(y)\\in A"
},
{
"math_id": 22,
"text": "f(x)\\in B"
},
{
"math_id": 23,
"text": "f(f^{-1}(y)) := f(x)"
}
] |
https://en.wikipedia.org/wiki?curid=65674252
|
65677561
|
Right group
|
In mathematics, a right group is an algebraic structure consisting of a set together with a binary operation that combines two elements into a third element while obeying the right group axioms. The right group axioms are similar to the group axioms, but while groups can have only one identity and any element can have only one inverse, right groups allow for multiple one-sided identity elements and multiple one-sided inverse elements.
It can be proven (theorem 1.27 in ) that a right group is isomorphic to the direct product of a right zero semigroup and a group, while a right abelian group is the direct product of a right zero semigroup and an abelian group. Left group and left abelian group are defined in analogous way, by substituting right for left in the definitions. The rest of this article will be mostly concerned about right groups, but everything applies to left groups by doing the appropriate right/left substitutions.
Definition.
A right group, originally called multiple group, is a set formula_0 with a binary operation ⋅, satisfying the following axioms:
For all formula_1 and formula_2 in formula_0, there is an element "c" in formula_0 such that formula_3.
For all formula_4 in formula_0, formula_5.
There is at least one left identity in formula_0. That is, there exists an element formula_6 such that formula_7 for all formula_1 in formula_0. Such an element does not need to be unique.
For every formula_1 in formula_0 and every identity element formula_6, also in formula_0, there is at least one element formula_2 in formula_0, such that formula_8. Such element formula_2 is said to be the right inverse of formula_1 with respect to formula_6.
Examples.
Direct product of finite sets.
The following example is provided by Clifford. Take the group formula_9 and the right zero semigroup formula_10, and construct a right group formula_11 as the direct product of formula_12 and formula_13.
formula_12 is simply the cyclic group of order 3, with formula_6 as its identity, and formula_1 and formula_2 as the inverses of each other.
formula_13 is the right zero semigroup of order 2. Notice that each element repeats along its column, since by definition formula_14, for any formula_15 and formula_16 in formula_13.
The direct product formula_17 of these two structures is defined as follows:
The elements of formula_11 will look like formula_22 and so on. For brevity, let's rename these as formula_23, and so on. The Cayley table of formula_11 is as follows:
Here are some facts about formula_11:
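A small Python sketch builds this right group and checks a couple of these facts; the encoding of formula_12 as addition modulo 3 and the helper names are choices of this illustration, not part of the source.

```python
from itertools import product

names = ["e", "a", "b"]        # cyclic group of order 3: a and b are mutual inverses
G = {"e": 0, "a": 1, "b": 2}
Z = [1, 2]                     # right zero semigroup: y . v = v

def op(p, q):
    """(x, y) . (u, v) = (xu, v) on the direct product G x Z."""
    (x, _), (u, v) = p, q
    return (names[(G[x] + G[u]) % 3], v)

R = list(product(G, Z))        # the six elements e1, e2, a1, a2, b1, b2
print([r for r in R if all(op(r, s) == s for s in R)])   # left identities: ('e', 1) and ('e', 2)
print(op(("a", 2), ("b", 1)))                            # ('e', 1), i.e. a2 . b1 = e1
```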
Complex numbers in polar coordinates.
Clifford gives a second example involving complex numbers. Given two non-zero complex numbers "a" and "b", the following operation forms a right group:
formula_33
All complex numbers with modulus equal to 1 are left identities, and all complex numbers will have a right inverse with respect to any left identity.
The inner structure of this right group becomes clear when we use polar coordinates: let formula_34 and formula_35, where "A" and "B" are the magnitudes and formula_36 and formula_37 are the arguments (angles) of "a" and "b", respectively. formula_38 (this is not the regular multiplication of complex numbers) then becomes formula_39. If we represent the magnitudes and arguments as ordered pairs, we can write this as:
Formula 2: formula_40
This right group is the direct product of a group (positive real numbers under multiplication) and a right zero semigroup induced by the real numbers. Structurally, this is identical to formula 1 above. In fact, this is what all right group operations look like when written as ordered pairs over the direct product of their factors.
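A short Python check of this structure (the function name is illustrative): every unit-modulus complex number acts as a left identity, and right inverses exist with respect to any of them.

```python
import cmath

def rg(a, b):
    """Right-group operation on nonzero complex numbers: a . b = |a| b."""
    return abs(a) * b

w = cmath.exp(0.7j)                  # any unit-modulus number is a left identity
b = 3 - 4j
print(abs(rg(w, b) - b) < 1e-12)     # True

a = 2 + 1j
x = w / abs(a)                       # right inverse of a with respect to w: rg(a, x) == w
print(abs(rg(a, x) - w) < 1e-12)     # True
```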
Complex numbers in cartesian coordinates.
If we take the complex numbers and define an operation similar to example 2, but use Cartesian instead of polar coordinates and addition instead of multiplication, we get another right group, with the operation defined as follows:
formula_41, or equivalently:
Formula 3: formula_42
A practical example from computer science.
Consider the following example from computer science, where a set would be implemented as a programming language type.
Here formula_43 is the set of date/times, formula_44 is the set of duration transformations, and formula_13 is the set of time zone transformations. Both formula_44 and formula_13 are subsets of formula_45, the full transformation semigroup on formula_43. formula_44 behaves like a group, where there is a zero duration and every duration has an inverse duration. If we treat these transformations as right semigroup actions, formula_13 behaves like a right zero semigroup, such that a time zone transformation always cancels any previous time zone transformation on a given date time.
Given any two arbitrary date times formula_1 and formula_2 (ignore issues regarding representation boundaries), one can find a pair of a duration and a time zone that will transform formula_1 into formula_2. This composite transformation of time zone conversion and duration adding is isomorphic to the right group formula_46.
Taking the java.time package as an example, the sets formula_47 and formula_13 would correspond to the class ZonedDateTime, the function plus and the function withZoneSameInstant, respectively. More concretely, for any ZonedDateTime "t"1 and "t"2, there is a Duration "d" and a ZoneId "z", such that:
t2 = t1.plus(d).withZoneSameInstant(z)
The expression above can be written more concisely using right action notation borrowed from group theory as:
formula_48
It can also be verified that durations and time zones, when viewed as transformations on date/times, not only obey the axioms of groups and right zero semigroups, respectively, but also commute with each other. That is, for any date/time t, any duration d and any time zone z:
formula_49
This is the same as saying:
formula_50
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R"
},
{
"math_id": 1,
"text": "a"
},
{
"math_id": 2,
"text": "b"
},
{
"math_id": 3,
"text": "c = a \\cdot b"
},
{
"math_id": 4,
"text": "a, b, c"
},
{
"math_id": 5,
"text": "(a \\cdot b) \\cdot c = a \\cdot (b \\cdot c)"
},
{
"math_id": 6,
"text": "e"
},
{
"math_id": 7,
"text": "e \\cdot a = a"
},
{
"math_id": 8,
"text": "a \\cdot b = e"
},
{
"math_id": 9,
"text": "G = \\{ e, a, b \\}"
},
{
"math_id": 10,
"text": "Z = \\{ 1, 2 \\}"
},
{
"math_id": 11,
"text": "R_{gz}"
},
{
"math_id": 12,
"text": "G"
},
{
"math_id": 13,
"text": "Z"
},
{
"math_id": 14,
"text": "x \\cdot y = y"
},
{
"math_id": 15,
"text": "x"
},
{
"math_id": 16,
"text": "y"
},
{
"math_id": 17,
"text": "R_{gz} = G \\times Z"
},
{
"math_id": 18,
"text": "(g, z)"
},
{
"math_id": 19,
"text": "g"
},
{
"math_id": 20,
"text": "z"
},
{
"math_id": 21,
"text": "(x, y) \\cdot (u, v) = (xu, v)"
},
{
"math_id": 22,
"text": "(e, 1), (e, 2), (a, 1)"
},
{
"math_id": 23,
"text": "e_1, e_2, a_1"
},
{
"math_id": 24,
"text": "e_1"
},
{
"math_id": 25,
"text": "e_2"
},
{
"math_id": 26,
"text": "e2 \\cdot b1 = b1"
},
{
"math_id": 27,
"text": "e1 \\cdot a2 = a2"
},
{
"math_id": 28,
"text": "a_2"
},
{
"math_id": 29,
"text": "b_1"
},
{
"math_id": 30,
"text": "b_2"
},
{
"math_id": 31,
"text": "a2 \\cdot b1 = e1"
},
{
"math_id": 32,
"text": "a2 \\cdot b2 = e2"
},
{
"math_id": 33,
"text": "a \\cdot b = |a|\\, b"
},
{
"math_id": 34,
"text": "a = A e^{i \\alpha}"
},
{
"math_id": 35,
"text": "b = B e^{i \\beta}"
},
{
"math_id": 36,
"text": "\\alpha"
},
{
"math_id": 37,
"text": "\\beta"
},
{
"math_id": 38,
"text": "a \\cdot b"
},
{
"math_id": 39,
"text": "A e^{i \\alpha} \\cdot B e^{i \\beta} = AB e^{i \\beta}"
},
{
"math_id": 40,
"text": "(A, \\alpha) \\cdot (B,\\beta) = (AB, \\beta)"
},
{
"math_id": 41,
"text": "(a + bi) \\cdot (c + di) = a + c + di"
},
{
"math_id": 42,
"text": "(a,b) \\cdot (c,d) = (a+c,d)"
},
{
"math_id": 43,
"text": "X"
},
{
"math_id": 44,
"text": "D"
},
{
"math_id": 45,
"text": "T_x"
},
{
"math_id": 46,
"text": "D \\times Z"
},
{
"math_id": 47,
"text": "X, D"
},
{
"math_id": 48,
"text": "t2 = t1.d.z"
},
{
"math_id": 49,
"text": "t.d.z = t.z.d"
},
{
"math_id": 50,
"text": "d \\cdot z = z \\cdot d"
}
] |
https://en.wikipedia.org/wiki?curid=65677561
|
65680044
|
Knockoffs (statistics)
|
Statistical method
In statistics, the knockoff filter, or simply knockoffs, is a framework for variable selection. It was originally introduced for linear regression by Rina Barber and Emmanuel Candès, and later generalized to other regression models in the random design setting. Knockoffs has found application in many practical areas, notably in genome-wide association studies.
Fixed-X knockoffs.
Consider a linear regression model with response vector formula_0 and feature matrix formula_1, which is treated as deterministic"." A matrix formula_2 is said to be knockoffs of formula_1 if it does not depend on formula_0 and satisfies formula_3 for formula_4. Barber and Candès showed that, equipped with a suitable feature importance statistic, fixed-X knockoffs can be used for variable selection while controlling the false discovery rate (FDR).
Model-X knockoffs.
Consider a general regression model with response vector formula_0 and random feature matrix formula_1"." A matrix formula_2 is said to be knockoffs of formula_1 if it is conditionally independent of formula_0 given formula_1 and satisfies a subtle pairwise exchangeable condition: for any formula_5, the joint distribution of the random matrix formula_6 does not change if its formula_5th and formula_7th columns are swapped, where formula_8 is the number of features. While it is less clear how to create model-X knockoffs compared to their fixed-X counterpart, various algorithms have been proposed to construct knockoffs. Once constructed, model-X knockoffs can be used for variable selection following the same procedure as fixed-X knockoffs and control the FDR.
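As an illustration of the selection step shared by both constructions, the sketch below applies the knockoff+ threshold to a vector of feature-importance statistics (one statistic per original feature, with large positive values favouring the original over its knockoff); the function name, the choice of statistic, and the use of NumPy are assumptions of this illustration.

```python
import numpy as np

def knockoff_select(W, q=0.10):
    """Return the indices selected by the knockoff+ threshold at FDR level q.

    W[j] is a feature-importance statistic comparing feature j with its
    knockoff (for example, a difference of fitted coefficient magnitudes)."""
    W = np.asarray(W, dtype=float)
    for t in np.sort(np.abs(W[W != 0])):           # candidate thresholds
        fdp_estimate = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_estimate <= q:
            return np.flatnonzero(W >= t)
    return np.array([], dtype=int)                 # nothing can be selected at level q
```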
Properties.
The knockoffs formula_2 can be understood as negative controls. Informally speaking, knockoffs has the property that no method can statistically distinguish the original matrix from its knockoffs without looking at formula_0. Mathematically, the exchangeability conditions translate to symmetry that allows for an estimation of the type I error (e.g., if one wishes to choose the FDR as the type I error rate, the false discovery proportion is estimated), which then leads to exact type I error control.
Model-X knockoffs provides valid type I error control regardless of the unknown conditional distribution of formula_0 given formula_1, and it can work with black-box variable importance statistics, including ones derived from complicated machine learning methods. The most significant challenge in implementing model-X knockoffs is that it requires nontrivial knowledge of the distribution of formula_1, which is usually high-dimensional. This knowledge can be gained with the help of unlabeled data.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf y"
},
{
"math_id": 1,
"text": "\\mathbf X"
},
{
"math_id": 2,
"text": "\\tilde{\\mathbf X}"
},
{
"math_id": 3,
"text": "\\mathbf X_i^\\top\\mathbf X_j=\\mathbf X_i^\\top\\tilde{\\mathbf X}_j=\\tilde{\\mathbf X}_i^\\top\\mathbf X_j=\\tilde{\\mathbf X}_i^\\top\\tilde{\\mathbf X}_j"
},
{
"math_id": 4,
"text": "i\\ne j"
},
{
"math_id": 5,
"text": "j"
},
{
"math_id": 6,
"text": "[\\mathbf X,\\tilde{\\mathbf X}]"
},
{
"math_id": 7,
"text": "(j+p)"
},
{
"math_id": 8,
"text": "p"
}
] |
https://en.wikipedia.org/wiki?curid=65680044
|
656965
|
Polymer chemistry
|
Chemistry subdiscipline
Polymer chemistry is a sub-discipline of chemistry that focuses on the structures of chemicals, chemical synthesis, and chemical and physical properties of polymers and macromolecules. The principles and methods used within polymer chemistry are also applicable across a wide range of other chemistry sub-disciplines like organic chemistry, analytical chemistry, and physical chemistry. Many materials have polymeric structures, from fully inorganic metals and ceramics to DNA and other biological molecules. However, polymer chemistry is typically related to synthetic and organic compositions. Synthetic polymers are ubiquitous in commercial materials and products in everyday use, such as plastics and rubbers, and are major components of composite materials. Polymer chemistry can also be included in the broader fields of polymer science or even nanotechnology, both of which can be described as encompassing polymer physics and polymer engineering.
History.
The work of Henri Braconnot in 1777 and the work of Christian Schönbein in 1846 led to the discovery of nitrocellulose, which, when treated with camphor, produced celluloid. Dissolved in ether or acetone, it becomes collodion, which has been used as a wound dressing since the U.S. Civil War. Cellulose acetate was first prepared in 1865. In the years 1834–1844, the properties of rubber (polyisoprene) were found to be greatly improved by heating with sulfur, thus founding the vulcanization process.
In 1884 Hilaire de Chardonnet started the first artificial fiber plant based on regenerated cellulose, or viscose rayon, as a substitute for silk, but it was very flammable. In 1907 Leo Baekeland invented the first polymer made independent of the products of organisms, a thermosetting phenol-formaldehyde resin called Bakelite. Around the same time, Hermann Leuchs reported the synthesis of amino acid N-carboxyanhydrides and their high molecular weight products upon reaction with nucleophiles, but stopped short of referring to these as polymers, possibly due to the strong views espoused by Emil Fischer, his direct supervisor, denying the possibility of any covalent molecule exceeding 6,000 daltons. Cellophane was invented in 1908 by Jacques Brandenberger, who treated sheets of viscose rayon with acid.
The chemist Hermann Staudinger first proposed that polymers consisted of long chains of atoms held together by covalent bonds, which he called macromolecules. His work expanded the chemical understanding of polymers and was followed by an expansion of the field of polymer chemistry during which such polymeric materials as neoprene, nylon and polyester were invented. Before Staudinger, polymers were thought to be clusters of small molecules (colloids), without definite molecular weights, held together by an unknown force. Staudinger received the Nobel Prize in Chemistry in 1953. Wallace Carothers invented the first synthetic rubber called neoprene in 1931, the first polyester, and went on to invent nylon, a true silk replacement, in 1935. Paul Flory was awarded the Nobel Prize in Chemistry in 1974 for his work on polymer random coil configurations in solution in the 1950s. Stephanie Kwolek developed an aramid, or aromatic nylon named Kevlar, patented in 1966. Karl Ziegler and Giulio Natta received a Nobel Prize for their discovery of catalysts for the polymerization of alkenes. Alan J. Heeger, Alan MacDiarmid, and Hideki Shirakawa were awarded the 2000 Nobel Prize in Chemistry for the development of polyacetylene and related conductive polymers. Polyacetylene itself did not find practical applications, but organic light-emitting diodes (OLEDs) emerged as one application of conducting polymers.
Teaching and research programs in polymer chemistry were introduced in the 1940s. An Institute for Macromolecular Chemistry was founded in 1940 in Freiburg, Germany under the direction of Staudinger. In America, a Polymer Research Institute (PRI) was established in 1941 by Herman Mark at the Polytechnic Institute of Brooklyn (now Polytechnic Institute of NYU).
Polymers and their properties.
Polymers are high molecular mass compounds formed by polymerization of monomers. Their mechanical properties, processability, durability and other characteristics can be modified by additives. The simple reactive molecule from which the repeating structural units of a polymer are derived is called a monomer. A polymer can be described in many ways: by its degree of polymerisation, molar mass distribution, tacticity, copolymer distribution, degree of branching, end-groups, crosslinks, crystallinity and thermal properties such as its glass transition temperature and melting temperature. Polymers in solution have special characteristics with respect to solubility, viscosity, and gelation. Illustrative of the quantitative aspects of polymer chemistry, particular attention is paid to the number-average and weight-average molecular weights formula_0 and formula_1, respectively.
<br>
formula_2
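For instance, both averages can be computed directly from a list of (molar mass, number of chains) pairs; the following Python sketch uses an illustrative function name and made-up sample data.

```python
def molecular_weight_averages(samples):
    """Number-average and weight-average molecular weights from (M_i, N_i) pairs."""
    m_n = sum(M * N for M, N in samples) / sum(N for _, N in samples)
    m_w = sum(M * M * N for M, N in samples) / sum(M * N for M, N in samples)
    return m_n, m_w

# e.g. 100 chains of 10,000 g/mol and 50 chains of 50,000 g/mol:
# M_n is about 23,300 g/mol and M_w about 38,600 g/mol.
print(molecular_weight_averages([(10_000, 100), (50_000, 50)]))
```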
The formation and properties of polymers have been rationalized by many theories including Scheutjens–Fleer theory, Flory–Huggins solution theory, Cossee–Arlman mechanism, Polymer field theory, Hoffman Nucleation Theory, Flory–Stockmayer theory, and many others.
The study of polymer thermodynamics helps improve the material properties of various polymer-based materials such as polystyrene (styrofoam) and polycarbonate. Common improvements include toughening, improving impact resistance, improving biodegradability, and altering a material's solubility.
Viscosity.
As polymers get longer and their molecular weight increases, their viscosity tends to increase. Thus, the measured viscosity of polymers can provide valuable information about the average length of the polymer, the progress of reactions, and the ways in which the polymer branches.
Classification.
Polymers can be classified in many ways. Polymers, strictly speaking, comprise most solid matter: minerals (i.e. most of the Earth's crust) are largely polymers, metals are 3-d polymers, organisms, living and dead, are composed largely of polymers and water. Often polymers are classified according to their origin:
Biopolymers are the structural and functional materials that comprise most of the organic matter in organisms. One major class of biopolymers are proteins, which are derived from amino acids. Polysaccharides, such as cellulose, chitin, and starch, are biopolymers derived from sugars. The polynucleic acids DNA and RNA are derived from phosphorylated sugars with pendant nucleotides that carry genetic information.
Synthetic polymers are the structural materials manifested in plastics, synthetic fibers, paints, building materials, furniture, mechanical parts, and adhesives. Synthetic polymers may be divided into thermoplastic polymers and thermoset plastics. Thermoplastic polymers include polyethylene, teflon, polystyrene, polypropylene, polyester, polyurethane, Poly(methyl methacrylate), polyvinyl chloride, nylons, and rayon. Thermoset plastics include vulcanized rubber, bakelite, Kevlar, and polyepoxide. Almost all synthetic polymers are derived from petrochemicals.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "M_n"
},
{
"math_id": 1,
"text": "M_w"
},
{
"math_id": 2,
"text": "\nM_n=\\frac{\\sum M_i N_i} {\\sum N_i},\\quad\n\nM_w=\\frac{\\sum M_i^2 N_i} {\\sum M_i N_i},\\quad \n"
}
] |
https://en.wikipedia.org/wiki?curid=656965
|
65698
|
Mel scale
|
Conceptual scale
The mel scale (after the word "melody") is a perceptual scale of pitches judged by listeners to be equal in distance from one another. The reference point between this scale and normal frequency measurement is defined by assigning a perceptual pitch of 1000 mels to a 1000 Hz tone, 40 dB above the listener's threshold. Above about 500 Hz, increasingly large intervals are judged by listeners to produce equal pitch increments.
Formula.
A formula (O'Shaughnessy 1987) to convert "f" hertz into "m" mels is
formula_0
History and other formulas.
The formula from O'Shaughnessy's book can be expressed with different logarithmic bases:
formula_1
The corresponding inverse expressions are
formula_2
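In code, the conversion and its inverse are one-liners; the following Python sketch (function names are illustrative) uses the base-10 form of the formula.

```python
import math

def hz_to_mel(f):
    """O'Shaughnessy-style hertz-to-mel conversion (700 Hz corner frequency)."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse conversion from mels back to hertz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

print(hz_to_mel(1000.0))   # about 1000 mels, by construction
```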
Curves and tables of psychophysical pitch scales had been published since Steinberg's 1937 curves, which were based on just-noticeable differences of pitch. More curves soon followed in Fletcher and Munson's 1937, Fletcher's 1938, Stevens' 1937, and Stevens and Volkmann's 1940 papers, using a variety of experimental methods and analysis approaches.
In 1949 Koenig published an approximation based on separate linear and logarithmic segments, with a break at 1000 Hz.
Gunnar Fant proposed the current popular linear/logarithmic formula in 1949, but with the 1000 Hz corner frequency.
An alternate expression of the formula, not depending on choice of logarithm base, is noted in Fant (1968):
formula_3
In 1976, Makhoul and Cosell published the now-popular version with the 700 Hz corner frequency.
As Ganchev et al. have observed, "The formulae [with 700], when compared to [Fant's with 1000], provide a closer approximation of the Mel scale for frequencies below 1000 Hz, at the price of higher inaccuracy for frequencies higher than 1000 Hz." Above 7 kHz, however, the situation is reversed, and the 700 Hz version again fits better.
Data by which some of these formulas are motivated are tabulated in Beranek (1949), as measured from the curves of Stevens and Volkmann:
A formula with a break frequency of 625 Hz is given by Lindsay & Norman (1977); the formula does not appear in their 1972 first edition:
formula_4
For direct comparison with other formulae, this is equivalent to
formula_5
Most mel-scale formulas give exactly 1000 mels at 1000 Hz. The break frequency (e.g. 700 Hz, 1000 Hz, or 625 Hz) is the only free parameter in the usual form of the formula. Some non-mel auditory-frequency-scale formulas use the same form but with much lower break frequency, not necessarily mapping to 1000 at 1000 Hz; for example the ERB-rate scale of Glasberg and Moore (1990) uses a break point of 228.8 Hz, and the cochlear frequency–place map of Greenwood (1990) uses 165.3 Hz.
Other functional forms for the mel scale have been explored by Umesh et al.; they point out that the traditional formulas with a logarithmic region and a linear region do not fit the data from Stevens and Volkmann's curves as well as some other forms, based on the following data table of measurements that they made from those curves:
Slaney's MATLAB Auditory Toolbox agrees with Umesh et al. and uses the following two-piece fit, though notably not using the "1000 mels at 1000 Hz" convention:
formula_6
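A direct transcription of this two-piece fit (assuming positive frequencies; the function name is illustrative):

```python
import math

def hz_to_mel_slaney(f):
    """Slaney-style mapping: linear below 1000 Hz, logarithmic (base 6.4) above."""
    if f < 1000.0:
        return 3.0 * f / 200.0
    return 15.0 + 27.0 * math.log(f / 1000.0, 6.4)

print(hz_to_mel_slaney(1000.0))   # 15.0, where the two pieces meet
```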
Applications.
The first version of Google's Lyra codec uses "log mel spectrograms" as the feature-extraction step. The transmitted data is a vector-quantized form of the spectrogram, which is then synthesized back to speech by a neural network. Use of the mel scale is believed to weigh the data in a way appropriate to human perception. MelGAN takes a similar approach.
Criticism.
Stevens' student Donald D. Greenwood, who had worked on the mel scale experiments in 1956, considers the scale biased by experimental flaws. In 2009 he posted to a mailing list:
<templatestyles src="Template:Blockquote/styles.css" />I would ask, why use the Mel scale now, since it appears to be biased? If anyone wants a Mel scale, they should do it over, controlling carefully for order bias and using plenty of subjects – more than in the past – and using both musicians and non-musicians to search for any differences in performance that may be governed by musician/non-musician differences or subject differences generally.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "m = 2595 \\log_{10}\\left(1 + \\frac{f}{700}\\right)."
},
{
"math_id": 1,
"text": "m = 2595 \\log_{10}\\left(1 + \\frac{f}{700}\\right) = 1127 \\ln\\left(1 + \\frac{f}{700}\\right)."
},
{
"math_id": 2,
"text": "f = 700\\left(10^\\frac{m}{2595} - 1\\right) = 700\\left(e^\\frac{m}{1127} - 1\\right)."
},
{
"math_id": 3,
"text": "m = \\frac{1000}{\\log 2} \\log\\left(1 + \\frac{f}{1000}\\right)."
},
{
"math_id": 4,
"text": "m = 2410 \\log_{10}(0.0016 f + 1)."
},
{
"math_id": 5,
"text": "m = 2410 \\log_{10}\\left(1 + \\frac{f}{625}\\right)."
},
{
"math_id": 6,
"text": "\n m(f) = \\begin{cases}\n \\dfrac{3f}{200}, & f < 1000, \\\\\n 15 + 27 \\log_{6.4} \\left(\\dfrac{f}{1000}\\right), & f \\geq 1000.\n \\end{cases}\n"
}
] |
https://en.wikipedia.org/wiki?curid=65698
|
65707306
|
Princeton Science Library
|
Science book series written by scientists and published by Princeton University Press
The Princeton Science Library is a book series of popular science written by scientists known for their popular writings and originally published by Princeton University Press.
Books include:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sqrt{-1}"
}
] |
https://en.wikipedia.org/wiki?curid=65707306
|
657106
|
Bounding volume
|
Closed volume that completely contains the union of a set of objects
In computer graphics and computational geometry, a bounding volume (or bounding region) for a set of objects is a closed region that completely contains the union of the objects in the set. Bounding volumes are used to improve the efficiency of geometrical operations by substituting simple regions, which have simpler ways to test for overlap.
A bounding volume for a set of objects is also a bounding volume for the single object consisting of their union, and the other way around. Therefore, it is possible to confine the description to the case of a single object, which is assumed to be non-empty and bounded (finite).
Uses.
Bounding volumes are most often used to accelerate certain kinds of tests.
In ray tracing, bounding volumes are used in ray-intersection tests, and in many rendering algorithms, they are used for viewing frustum tests. If the ray or viewing frustum does not intersect the bounding volume, it cannot intersect the object contained within, allowing trivial rejection. Similarly if the frustum contains the entirety of the bounding volume, the contents may be trivially accepted without further tests. These intersection tests produce a list of objects that must be 'displayed' (rendered; rasterized).
In collision detection, when two bounding volumes do not intersect, the contained objects cannot collide.
Testing against a bounding volume is typically much faster than testing against the object itself, because of the bounding volume's simpler geometry. This is because an 'object' is typically composed of polygons or data structures that are reduced to polygonal approximations. In either case, it is computationally wasteful to test each polygon against the view volume if the object is not visible. (Onscreen objects must be 'clipped' to the screen, regardless of whether their surfaces are actually visible.)
To obtain bounding volumes of complex objects, a common way is to break the objects/scene down using a scene graph or more specifically a bounding volume hierarchy, like e.g. OBB trees. The basic idea behind this is to organize a scene in a tree-like structure where the root comprises the whole scene and each leaf contains a smaller subpart.
In computer stereo vision, a bounding volume reconstructed from silhouettes of an object is known as a "visual hull."
Common types.
The choice of the type of bounding volume for a given application is determined by a variety of factors: the computational cost of computing a bounding volume for an object, the cost of updating it in applications in which the objects can move or change shape or size, the cost of determining intersections, and the desired precision of the intersection test. The precision of the intersection test is related to the amount of space within the bounding volume not associated with the bounded object, called "void space". Sophisticated bounding volumes generally allow for less void space but are more computationally expensive. It is common to use several types in conjunction, such as a cheap one for a quick but rough test in conjunction with a more precise but also more expensive type.
The types treated here all give convex bounding volumes. If the object being bounded is known to be convex, this is not a restriction. If non-convex bounding volumes are required, an approach is to represent them as a union of a number of convex bounding volumes. Unfortunately, intersection tests become quickly more expensive as the bounding boxes become more sophisticated.
A "bounding box" or "minimum bounding box" ("MBB") is a cuboid, or in 2-D a rectangle, containing the object. In dynamical simulation, bounding boxes are preferred to other shapes of bounding volume such as bounding spheres or cylinders for objects that are roughly cuboid in shape when the intersection test needs to be fairly accurate. The benefit is obvious, for example, for objects that rest upon other, such as a car resting on the ground: a bounding sphere would show the car as possibly intersecting with the ground, which then would need to be rejected by a more expensive test of the actual model of the car; a bounding box immediately shows the car as not intersecting with the ground, saving the more expensive test.
A "minimum bounding rectangle" ("MBR") – the least AABB in 2-D – is frequently used in the description of geographic (or "geospatial") data items, serving as a simplified proxy for a dataset's spatial extent (see geospatial metadata) for the purpose of data search (including spatial queries as applicable) and display. It is also a basic component of the R-tree method of spatial indexing.
In many applications the bounding box is aligned with the axes of the co-ordinate system, and it is then known as an axis-aligned bounding box (<templatestyles src="Template:Visible anchor/styles.css" />AABB). To distinguish the general case from an AABB, an arbitrary bounding box is sometimes called an oriented bounding box (<templatestyles src="Template:Visible anchor/styles.css" />OBB), or an <templatestyles src="Template:Visible anchor/styles.css" />OOBB when an existing object's local coordinate system is used. AABBs are much simpler to test for intersection than OBBs, but have the disadvantage that when the model is rotated they cannot be simply rotated with it, but need to be recomputed.
A <templatestyles src="Template:Visible anchor/styles.css" />bounding capsule is a swept sphere (i.e. the volume that a sphere takes as it moves along a straight line segment) containing the object. Capsules can be represented by the radius of the swept sphere and the segment that the sphere is swept across. It has traits similar to a cylinder, but is easier to use, because the intersection test is simpler. A capsule and another object intersect if the distance between the capsule's defining segment and some feature of the other object is smaller than the capsule's radius. For example, two capsules intersect if the distance between the capsules' segments is smaller than the sum of their radii. This holds for arbitrarily rotated capsules, which is why they're more appealing than cylinders in practice.
A <templatestyles src="Template:Visible anchor/styles.css" />bounding cylinder is a cylinder containing the object. In most applications the axis of the cylinder is aligned with the vertical direction of the scene. Cylinders are appropriate for 3-D objects that can only rotate about a vertical axis but not about other axes, and are otherwise constrained to move by translation only. Two vertical-axis-aligned cylinders intersect when, simultaneously, their projections on the vertical axis intersect – which are two line segments – as well their projections on the horizontal plane – two circular disks. Both are easy to test. In video games, bounding cylinders are often used as bounding volumes for people standing upright.
A <templatestyles src="Template:Visible anchor/styles.css" />bounding ellipsoid is an ellipsoid containing the object. Ellipsoids usually provide tighter fitting than a sphere. Intersections with ellipsoids are done by scaling the other object along the principal axes of the ellipsoid by an amount equal to the multiplicative inverse of the radii of the ellipsoid, thus reducing the problem to intersecting the scaled object with a unit sphere. Care should be taken to avoid problems if the applied scaling introduces skew. Skew can make the usage of ellipsoids impractical in certain cases, for example collision between two arbitrary ellipsoids.
A "bounding sphere" is a sphere containing the object. In 2-D graphics, this is a circle. Bounding spheres are represented by centre and radius. They are very quick to test for collision with each other: two spheres intersect when the distance between their centres does not exceed the sum of their radii. This makes bounding spheres appropriate for objects that can move in any number of dimensions.
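This test is a one-liner in code; a sketch in Python, with centres given as coordinate sequences of any dimension (the function name is illustrative):

```python
def spheres_intersect(c0, r0, c1, r1):
    """True when the distance between the centres is at most the sum of the radii."""
    distance_squared = sum((a - b) ** 2 for a, b in zip(c0, c1))
    return distance_squared <= (r0 + r1) ** 2
```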
A <templatestyles src="Template:Visible anchor/styles.css" />bounding slab is the volume that projects to an extent on an axis, and can be thought of as the slab bounded between two planes. A bounding box is the intersection of orthogonally oriented bounding slabs. Bounding slabs have been used to speed up ray tracing.
A <templatestyles src="Template:Visible anchor/styles.css" />bounding triangle in 2-D is quite useful for speeding up the clipping or visibility test of a B-Spline curve. See "Circle and B-Splines clipping algorithms" under the subject Clipping (computer graphics) for an example of use.
A convex hull is the smallest convex volume containing the object. If the object is the union of a finite set of points, its convex hull is a polytope.
A <templatestyles src="Template:Visible anchor/styles.css" />discrete oriented polytope (DOP) generalizes the bounding box. A k-DOP is the Boolean intersection of extents along "k" directions. Thus, a "k"-DOP is the Boolean intersection of "k" bounding slabs and is a convex polytope containing the object (in 2-D a polygon; in 3-D a polyhedron). A 2-D rectangle is a special case of a 2-DOP, and a 3-D box is a special case of a 3-DOP. In general, the axes of a DOP do not have to be orthogonal, and there can be more axes than dimensions of space. For example, a 3-D box that is beveled on all edges and corners can be constructed as a 13-DOP. The actual number of faces can be less than 2 times "k" if some faces become degenerate, shrunk to an edge or a vertex.
Basic intersection checks.
For some types of bounding volume (OBB and convex polyhedra), an effective check is that of the separating axis theorem. The idea here is that, if there exists an axis along which the projections of the two objects do not overlap, then the objects do not intersect. Usually the axes checked are the basic axes of the volumes (the unit axes in the case of an AABB, or the 3 base axes from each OBB in the case of OBBs). Often, this is followed by also checking the cross-products of the previous axes (one axis from each object).
In the case of an AABB, this test becomes a simple set of overlap tests in terms of the unit axes. For an "AABB" defined by "M","N" against one defined by "O","P" they do not intersect if
("M""x" > "P""x") or ("O""x" > "N""x") or ("M""y" > "P""y") or ("O""y" > "N""y") or ("M""z" > "P""z") or ("O""z" > "N""z").
An AABB can also be projected along an axis, for example, if it has edges of length L and is centered at "C", and is being projected along the axis N:
formula_0, and formula_1 or formula_2, and formula_3
where m and n are the minimum and maximum extents.
An OBB is similar in this respect, but is slightly more complicated. For an OBB with L and C as above, and with "I", "J", and "K" as the OBB's base axes:
formula_4
formula_5
For the ranges "m","n" and "o","p" it can be said that they do not intersect if "m" > "p" or "o" > "n". Thus, by projecting the ranges of 2 OBBs along the I, J, and K axes of each OBB, and checking for non-intersection, it is possible to detect non-intersection. By additionally checking along the cross products of these axes (I0×I1, I0×J1, ...) one can be more certain that intersection is impossible.
This concept of determining non-intersection via use of axis projection also extends to convex polyhedra, however with the normals of each polyhedral face being used instead of the base axes, and with the extents being based on the minimum and maximum dot products of each vertex against the axes. Note that this description assumes the checks are being done in world space.
The intersection of two "k"-DOP's can be computed very similarly to AABBs: for each orientation, you just check the two corresponding intervals of the two DOP's. So, just like DOP's being a generalization of AABBs, the intersection test is a generalization of the AABB overlap test. The complexity of the overlap test of two DOP's is in O(k). This assumes, however, that both DOP's are given with respect to the same set of orientations. If one of them is rotated, this is no longer true. In that case, one relatively easy way to check the two DOP's formula_6 for intersection is to enclose the rotated one, formula_7, by another, smallest enclosing DOP formula_8 that is oriented with respect to the orientations of the first DOP formula_9. The procedure for that is a little bit more complex, but eventually amounts to a matrix vector multiplication of complexity O(k) as well.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "r = 0.5L_x|N_x|+0.5L_y|N_y|+0.5L_z|N_z|\\,"
},
{
"math_id": 1,
"text": "b=C*N\\,"
},
{
"math_id": 2,
"text": "b=C_x N_x +C_y N_y+C_z N_z\\,"
},
{
"math_id": 3,
"text": "m=b-r, n=b+r\\,"
},
{
"math_id": 4,
"text": "r = 0.5L_x|N*I|+0.5L_y|N*J|+0.5L_z|N*K|\\,"
},
{
"math_id": 5,
"text": "m=C*N-r \\mbox{ and } n=C*N+r\\,"
},
{
"math_id": 6,
"text": "D^1, D^2"
},
{
"math_id": 7,
"text": "D^2"
},
{
"math_id": 8,
"text": "\\tilde{D}^2"
},
{
"math_id": 9,
"text": "D^1"
}
] |
https://en.wikipedia.org/wiki?curid=657106
|
6571387
|
Cylinder set measure
|
In mathematics, cylinder set measure (or promeasure, or premeasure, or quasi-measure, or CSM) is a kind of prototype for a measure on an infinite-dimensional vector space. An example is the Gaussian cylinder set measure on Hilbert space.
Cylinder set measures are in general not measures (and in particular need not be countably additive but only finitely additive), but can be used to define measures, such as the classical Wiener measure on the set of continuous paths starting at the origin in Euclidean space.
Definition.
Let formula_0 be a separable real topological vector space. Let formula_1 denote the collection of all surjective continuous linear maps formula_2 defined on formula_0 whose image is some finite-dimensional real vector space formula_3:
formula_4
A cylinder set measure on formula_0 is a collection of probability measures
formula_5
where formula_6 is a probability measure on formula_7 These measures are required to satisfy the following consistency condition: if formula_8 is a surjective projection, then the push forward of the measure is as follows:
formula_9
Remarks.
The consistency condition
formula_10
is modelled on the way that true measures push forward (see the section cylinder set measures versus true measures). However, it is important to understand that in the case of cylinder set measures, this is a requirement that is part of the definition, not a result.
A cylinder set measure can be intuitively understood as defining a finitely additive function on the cylinder sets of the topological vector space formula_11 The cylinder sets are the pre-images in formula_0 of measurable sets in formula_3: if formula_12 denotes the formula_13-algebra on formula_3 on which formula_6 is defined, then
formula_14
In practice, one often takes formula_12 to be the Borel formula_13-algebra on formula_7 In this case, one can show that when formula_0 is a separable Banach space, the σ-algebra generated by the cylinder sets is precisely the Borel formula_13-algebra of formula_0:
formula_15
Cylinder set measures versus true measures.
A cylinder set measure on formula_0 is not actually a true measure on formula_0: it is a collection of measures defined on all finite-dimensional images of formula_11 If formula_0 has a probability measure formula_16 already defined on it, then formula_16 gives rise to a cylinder set measure on formula_0 using the push forward: set formula_17 on formula_7
When there is a measure formula_16 on formula_0 such that formula_17 in this way, it is customary to abuse notation slightly and say that the cylinder set measure formula_18 "is" the measure formula_19
Cylinder set measures on Hilbert spaces.
When the Banach space formula_0 is also a Hilbert space formula_20 there is a <templatestyles src="Template:Visible anchor/styles.css" />canonical Gaussian cylinder set measure formula_21 arising from the inner product structure on formula_22 Specifically, if formula_23 denotes the inner product on formula_20 let formula_24 denote the quotient inner product on formula_7 The measure formula_25 on formula_3 is then defined to be the canonical Gaussian measure on formula_3:
formula_26
where formula_27 is an isometry of Hilbert spaces taking the Euclidean inner product on formula_28 to the inner product formula_24 on formula_29 and formula_30 is the standard Gaussian measure on formula_31
The canonical Gaussian cylinder set measure on an infinite-dimensional separable Hilbert space formula_32 does not correspond to a true measure on formula_22 The proof is quite simple: the ball of radius formula_33 (and center 0) has measure at most equal to that of the ball of radius formula_33 in an formula_34-dimensional Hilbert space, and this tends to 0 as formula_34 tends to infinity. So the ball of radius formula_33 has measure 0; as the Hilbert space is a countable union of such balls it also has measure 0, which is a contradiction. (See infinite dimensional Lebesgue measure.)
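The finite-dimensional measures appearing in this argument can be computed explicitly: under the canonical Gaussian measure, the ball of radius r in an n-dimensional space has measure equal to the probability that a chi-squared random variable with n degrees of freedom is at most r squared. A small numerical illustration (Python, assuming SciPy is available; the radius is an arbitrary choice):
from scipy.stats import chi2

r = 3.0  # radius of the ball
for n in (1, 10, 100, 1000):
    # Measure of the ball of radius r under the standard Gaussian measure on an
    # n-dimensional space: P(chi-squared with n degrees of freedom <= r**2).
    print(n, chi2.cdf(r * r, df=n))
# The printed values decrease towards 0 as n grows.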
An alternative proof that the Gaussian cylinder set measure is not a measure uses the Cameron–Martin theorem and a result on the quasi-invariance of measures. If formula_35 really were a measure, then the identity function on formula_32 would radonify that measure, thus making formula_36 into an abstract Wiener space. By the Cameron–Martin theorem, formula_37 would then be quasi-invariant under translation by any element of formula_20 which implies that either formula_32 is finite-dimensional or that formula_37 is the zero measure. In either case, we have a contradiction.
Sazonov's theorem gives conditions under which the push forward of a canonical Gaussian cylinder set measure can be turned into a true measure.
Nuclear spaces and cylinder set measures.
A cylinder set measure on the dual of a nuclear Fréchet space automatically extends to a measure if its Fourier transform is continuous.
Example: Let formula_38 be the space of Schwartz functions on a finite dimensional vector space; it is nuclear. It is contained in the Hilbert space formula_32 of formula_39 functions, which is in turn contained in the space of tempered distributions formula_40 the dual of the nuclear Fréchet space formula_38:
formula_41
The Gaussian cylinder set measure on formula_32 gives a cylinder set measure on the space of tempered distributions, which extends to a measure on the space of tempered distributions, formula_42
The Hilbert space formula_32 has measure 0 in formula_40 by the first argument used above to show that the canonical Gaussian cylinder set measure on formula_32 does not extend to a measure on formula_22
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "E"
},
{
"math_id": 1,
"text": "\\mathcal{A} (E)"
},
{
"math_id": 2,
"text": "T : E \\to F_T"
},
{
"math_id": 3,
"text": "F_T"
},
{
"math_id": 4,
"text": "\\mathcal{A} (E) := \\left\\{ T \\in \\mathrm{Lin} (E; F_{T}) : T \\mbox{ surjective and } \\dim_{\\R} F_{T} < + \\infty\\right\\}."
},
{
"math_id": 5,
"text": "\\left\\{\\mu_{T} : T \\in \\mathcal{A} (E)\\right\\}."
},
{
"math_id": 6,
"text": "\\mu_T"
},
{
"math_id": 7,
"text": "F_T."
},
{
"math_id": 8,
"text": "\\pi_{ST} : F_S \\to F_T"
},
{
"math_id": 9,
"text": "\\mu_{T} = \\left(\\pi_{ST}\\right)_{*} \\left(\\mu_{S}\\right)."
},
{
"math_id": 10,
"text": "\\mu_{T} = \\left(\\pi_{ST}\\right)_{*} (\\mu_{S})"
},
{
"math_id": 11,
"text": "E."
},
{
"math_id": 12,
"text": "\\mathcal{B}_{T}"
},
{
"math_id": 13,
"text": "\\sigma"
},
{
"math_id": 14,
"text": "\\mathrm{Cyl} (E) := \\left\\{T^{-1} (B) : B \\in \\mathcal{B}_{T}, T \\in \\mathcal{A} (E)\\right\\}."
},
{
"math_id": 15,
"text": "\\mathrm{Borel} (E) = \\sigma \\left(\\mathrm{Cyl} (E)\\right)."
},
{
"math_id": 16,
"text": "\\mu"
},
{
"math_id": 17,
"text": "\\mu_T = T_{*}(\\mu)"
},
{
"math_id": 18,
"text": "\\left\\{\\mu_{T} : T \\in \\mathcal{A} (E)\\right\\}"
},
{
"math_id": 19,
"text": "\\mu."
},
{
"math_id": 20,
"text": "H,"
},
{
"math_id": 21,
"text": "\\gamma^H"
},
{
"math_id": 22,
"text": "H."
},
{
"math_id": 23,
"text": "\\langle \\cdot, \\cdot \\rangle"
},
{
"math_id": 24,
"text": "\\langle \\cdot, \\cdot \\rangle_T"
},
{
"math_id": 25,
"text": "\\gamma_T^H"
},
{
"math_id": 26,
"text": "\\gamma_{T}^{H} := i_{*} \\left(\\gamma^{\\dim F_{T}}\\right),"
},
{
"math_id": 27,
"text": "i : \\R^{\\dim(F_T)} \\to F_T"
},
{
"math_id": 28,
"text": "\\R^{\\dim(F_T)}"
},
{
"math_id": 29,
"text": "F_T,"
},
{
"math_id": 30,
"text": "\\gamma^n"
},
{
"math_id": 31,
"text": "\\R^n."
},
{
"math_id": 32,
"text": "H"
},
{
"math_id": 33,
"text": "r"
},
{
"math_id": 34,
"text": "n"
},
{
"math_id": 35,
"text": "\\gamma^H = \\gamma"
},
{
"math_id": 36,
"text": "\\operatorname{id} : H \\to H"
},
{
"math_id": 37,
"text": "\\gamma"
},
{
"math_id": 38,
"text": "S"
},
{
"math_id": 39,
"text": "L^2"
},
{
"math_id": 40,
"text": "S^\\prime,"
},
{
"math_id": 41,
"text": "S \\subseteq H \\subseteq S^\\prime."
},
{
"math_id": 42,
"text": "S^\\prime."
}
] |
https://en.wikipedia.org/wiki?curid=6571387
|
6571457
|
AMS-LaTeX
|
LaTeX additions for the American Mathematical Society
AMS-LaTeX is a collection of LaTeX document classes and packages developed for the American Mathematical Society (AMS). Its additions to LaTeX include the typesetting of multi-line and other mathematical statements, document classes, and fonts containing numerous mathematical symbols.
It has largely superseded the plain TeX macro package AMS-TeX. AMS-TeX was originally written by Michael Spivak, and was used by the AMS from 1983 to 1985.
MathJax supports AMS-LaTeX through extensions.
The following LaTeX2e code produces the AMS-LaTeX logo:
%%% -- AMS-LaTeX_logo.tex -------
\AmS-\LaTeX
The package has a suite of facilities to format multi-line equations. For example, the following code,
\begin{align}
y &= (x+1)^2 \\
&= x^2+2x+1
\end{align}
causes the equals signs in the two lines to be aligned with one another, like this:
formula_0
AMS-LaTeX also includes many flexible commands for formatting and numbering theorems, lemmas, etc. For example, one may use the environment <samp style="padding-left:0.4em; padding-right:0.4em; color:var( --color-subtle, #666666); " >theorem</samp>
to generate
Theorem ("Pythagoras") "Suppose" formula_1 "are the side-lengths of a right triangle. " <br>"Then" formula_2.<br>
Proof. . . □
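A sketch of source code that could produce output of this kind uses the amsthm package's theorem machinery; the exact preamble, numbering, and wording below are illustrative assumptions rather than the only possible setup:
\usepackage{amsthm}          % in the preamble
\newtheorem{theorem}{Theorem}
% ... later, in the document body:
\begin{theorem}[Pythagoras]
  Suppose $a \leq b \leq c$ are the side-lengths of a right triangle.
  Then $a^2 + b^2 = c^2$.
\end{theorem}
\begin{proof}
  \ldots
\end{proof}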
|
[
{
"math_id": 0,
"text": "\n \\begin{align}\n y &= (x+1)^2 \\\\\n &= x^2+2x+1\n \\end{align}\n"
},
{
"math_id": 1,
"text": "a\\leq b\\leq c"
},
{
"math_id": 2,
"text": "a^2+b^2=c^2"
}
] |
https://en.wikipedia.org/wiki?curid=6571457
|
6572909
|
Gaussian measure
|
Type of Borel measure
In mathematics, Gaussian measure is a Borel measure on finite-dimensional Euclidean space formula_0, closely related to the normal distribution in statistics. There is also a generalization to infinite-dimensional spaces. Gaussian measures are named after the German mathematician Carl Friedrich Gauss. One reason why Gaussian measures are so ubiquitous in probability theory is the central limit theorem. Loosely speaking, it states that if a random variable formula_1 is obtained by summing a large number formula_2 of independent random variables with variance 1, then formula_1 has variance formula_2 and its law is approximately Gaussian.
Definitions.
Let formula_3 and let formula_4 denote the completion of the Borel formula_5-algebra on formula_6. Let formula_7 denote the usual formula_8-dimensional Lebesgue measure. Then the standard Gaussian measure formula_9 is defined by
formula_10
for any measurable set formula_11. In terms of the Radon–Nikodym derivative,
formula_12
More generally, the Gaussian measure with mean formula_13 and variance formula_14 is given by
formula_15
Gaussian measures with mean formula_16 are known as centered Gaussian measures.
The Dirac measure formula_17 is the weak limit of formula_18 as formula_19, and is considered to be a degenerate Gaussian measure; in contrast, Gaussian measures with finite, non-zero variance are called non-degenerate Gaussian measures.
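In one dimension the definition can be checked numerically against the normal distribution by integrating the density over an interval. A small illustration (Python, assuming SciPy is available; the interval and parameters are arbitrary choices):
from math import exp, pi, sqrt
from scipy.integrate import quad
from scipy.stats import norm

a, b, mu, sigma = -1.0, 2.0, 0.5, 1.5
# Gaussian measure of [a, b]: integral of the density against Lebesgue measure.
density = lambda x: exp(-0.5 * ((x - mu) / sigma) ** 2) / (sqrt(2 * pi) * sigma)
print(quad(density, a, b)[0])
# The same number obtained from the normal distribution function.
print(norm.cdf(b, loc=mu, scale=sigma) - norm.cdf(a, loc=mu, scale=sigma))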
Properties.
The standard Gaussian measure formula_20 on formula_6 is a Borel measure and is equivalent to Lebesgue measure: formula_21 where formula_22 denotes absolute continuity of measures. It is supported on all of Euclidean space, formula_23 and it is a probability measure formula_24 so in particular it is locally finite. It is inner regular: for every Borel set formula_25 one has formula_26 so it is a Radon measure. It is not translation-invariant, but it satisfies the relation formula_27 where the derivative on the left-hand side is the Radon–Nikodym derivative of formula_28 the push forward of the standard Gaussian measure by the translation map formula_29 defined by formula_30 Finally, it is the measure associated with the normal distribution: formula_31
Infinite-dimensional spaces.
It can be shown that there is no analogue of Lebesgue measure on an infinite-dimensional vector space. Even so, it is possible to define Gaussian measures on infinite-dimensional spaces, the main example being the abstract Wiener space construction. A Borel measure formula_32 on a separable Banach space formula_33 is said to be a non-degenerate (centered) Gaussian measure if, for every linear functional formula_34 except formula_35, the push-forward measure formula_36 is a non-degenerate (centered) Gaussian measure on formula_37 in the sense defined above.
For example, classical Wiener measure on the space of continuous paths is a Gaussian measure.
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "R^n"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "N"
},
{
"math_id": 3,
"text": "n \\in N"
},
{
"math_id": 4,
"text": "B_0(\\mathbb{R}^n)"
},
{
"math_id": 5,
"text": "\\sigma"
},
{
"math_id": 6,
"text": "\\mathbb{R}^n"
},
{
"math_id": 7,
"text": "\\lambda^n : B_0(\\mathbb{R}^n) \\to [0, +\\infty]"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "\\gamma^n : B_0(\\mathbb{R}^n) \\to [0, 1]"
},
{
"math_id": 10,
"text": "\\gamma^{n} (A) = \\frac{1}{\\sqrt{2 \\pi}^{n}} \\int_{A} \\exp \\left( - \\frac{1}{2} \\left\\| x \\right\\|_{\\mathbb{R}^{n}}^{2} \\right) \\, \\mathrm{d} \\lambda^{n} (x)"
},
{
"math_id": 11,
"text": "A \\in B_0(\\mathbb{R}^n)"
},
{
"math_id": 12,
"text": "\\frac{\\mathrm{d} \\gamma^{n}}{\\mathrm{d} \\lambda^{n}} (x) = \\frac{1}{\\sqrt{2 \\pi}^{n}} \\exp \\left( - \\frac{1}{2} \\left\\| x \\right\\|_{\\mathbb{R}^{n}}^{2} \\right)."
},
{
"math_id": 13,
"text": "\\mu \\in \\mathbb{R}^n"
},
{
"math_id": 14,
"text": "\\sigma^2 > 0"
},
{
"math_id": 15,
"text": "\\gamma_{\\mu, \\sigma^{2}}^{n} (A) := \\frac{1}{\\sqrt{2 \\pi \\sigma^{2}}^{n}} \\int_{A} \\exp \\left( - \\frac{1}{2 \\sigma^{2}} \\left\\| x - \\mu \\right\\|_{\\mathbb{R}^{n}}^{2} \\right) \\, \\mathrm{d} \\lambda^{n} (x)."
},
{
"math_id": 16,
"text": "\\mu = 0"
},
{
"math_id": 17,
"text": "\\delta_\\mu"
},
{
"math_id": 18,
"text": "\\gamma_{\\mu, \\sigma^{2}}^{n}"
},
{
"math_id": 19,
"text": "\\sigma \\to 0"
},
{
"math_id": 20,
"text": "\\gamma^n"
},
{
"math_id": 21,
"text": "\\lambda^{n} \\ll \\gamma^n \\ll \\lambda^n"
},
{
"math_id": 22,
"text": "\\ll"
},
{
"math_id": 23,
"text": "\\operatorname{supp}(\\gamma^n) = \\mathbb{R}^n"
},
{
"math_id": 24,
"text": "(\\gamma^n(\\mathbb{R}^n) = 1)"
},
{
"math_id": 25,
"text": "A"
},
{
"math_id": 26,
"text": "\\gamma^n (A) = \\sup \\{ \\gamma^n (K) \\mid K \\subseteq A, K \\text{ is compact} \\},"
},
{
"math_id": 27,
"text": " \\frac{\\mathrm{d} (T_h)_{*} (\\gamma^n)}{\\mathrm{d} \\gamma^n} (x) = \\exp \\left( \\langle h, x \\rangle_{\\R^n} - \\frac{1}{2} \\| h \\|_{\\R^n}^2 \\right),"
},
{
"math_id": 28,
"text": "(T_h)_*(\\gamma^n)"
},
{
"math_id": 29,
"text": "T_h : \\mathbb{R}^n \\to \\mathbb{R}^n"
},
{
"math_id": 30,
"text": "T_h(x) = x + h"
},
{
"math_id": 31,
"text": "Z \\sim \\operatorname{Normal} (\\mu, \\sigma^2) \\implies \\mathbb{P} (Z \\in A) = \\gamma_{\\mu, \\sigma^2}^n (A)."
},
{
"math_id": 32,
"text": "\\gamma"
},
{
"math_id": 33,
"text": "E"
},
{
"math_id": 34,
"text": "L \\in E^*"
},
{
"math_id": 35,
"text": "L = 0"
},
{
"math_id": 36,
"text": "L_*(\\gamma)"
},
{
"math_id": 37,
"text": "\\mathbb{R}"
}
] |
https://en.wikipedia.org/wiki?curid=6572909
|
6573013
|
Cameron–Martin theorem
|
Theorem defining translation of Gaussian measures (Wiener measures) on Hilbert spaces.
In mathematics, the Cameron–Martin theorem or Cameron–Martin formula (named after Robert Horton Cameron and W. T. Martin) is a theorem of measure theory that describes how abstract Wiener measure changes under translation by certain elements of the Cameron–Martin Hilbert space.
Motivation.
The standard Gaussian measure formula_0 on formula_1-dimensional Euclidean space formula_2 is not translation-invariant. (In fact, there is a unique translation invariant Radon measure up to scale by Haar's theorem: the formula_1-dimensional Lebesgue measure, denoted here formula_3.) Instead, a measurable subset formula_4 has Gaussian measure
formula_5
Here formula_6 refers to the standard Euclidean dot product in formula_2. The Gaussian measure of the translation of formula_4 by a vector formula_7 is
formula_8
So under translation through formula_9, the Gaussian measure scales by the distribution function appearing in the last display:
formula_10
The measure that associates to the set formula_4 the number formula_11 is the pushforward measure, denoted formula_12. Here formula_13 refers to the translation map: formula_14. The above calculation shows that the Radon–Nikodym derivative of the pushforward measure with respect to the original Gaussian measure is given by
formula_15
The abstract Wiener measure formula_16 on a separable Banach space formula_17, where formula_18 is an abstract Wiener space, is also a "Gaussian measure" in a suitable sense. How does it change under translation? It turns out that a similar formula to the one above holds if we consider only translations by elements of the dense subspace formula_19.
Statement of the theorem.
Let formula_18 be an abstract Wiener space with abstract Wiener measure formula_20. For formula_21, define formula_22 by formula_23. Then formula_24 is equivalent to formula_16 with Radon–Nikodym derivative
formula_25
where
formula_26
denotes the Paley–Wiener integral.
The Cameron–Martin formula is valid only for translations by elements of the dense subspace formula_19, called the Cameron–Martin space, and not by arbitrary elements of formula_17. If the Cameron–Martin formula did hold for arbitrary translations, it would contradict the following result:
If formula_17 is a separable Banach space and formula_27 is a locally finite Borel measure on formula_17 that is equivalent to its own push forward under any translation, then either formula_17 has finite dimension or formula_27 is the trivial (zero) measure. (See quasi-invariant measure.)
In fact, formula_16 is quasi-invariant under translation by an element formula_28 if and only if formula_29. Vectors in formula_30 are sometimes known as Cameron–Martin directions.
Integration by parts.
The Cameron–Martin formula gives rise to an integration by parts formula on formula_17: if formula_31 has bounded Fréchet derivative formula_32, integrating the Cameron–Martin formula with respect to Wiener measure on both sides gives
formula_33
for any formula_34. Formally differentiating with respect to formula_35 and evaluating at formula_36 gives the integration by parts formula
formula_37
Comparison with the divergence theorem of vector calculus suggests
formula_38
where formula_39 is the constant "vector field" formula_40 for all formula_41. The wish to consider more general vector fields and to think of stochastic integrals as "divergences" leads to the study of stochastic processes and the Malliavin calculus, and, in particular, the Clark–Ocone theorem and its associated integration by parts formula.
An application.
Using the Cameron–Martin theorem one may establish (see Liptser and Shiryayev 1977, p. 280) that for a formula_42 symmetric non-negative definite matrix formula_43 whose elements formula_44 are continuous and satisfy the condition
formula_45
it holds for a formula_46-dimensional Wiener process formula_47 that
formula_48
where formula_49 is a formula_42 nonpositive definite matrix which is a unique solution of the matrix-valued Riccati differential equation
formula_50
with the boundary condition formula_51.
In the special case of a one-dimensional Brownian motion where formula_52, the unique solution is formula_53, and we have the original formula as established by Cameron and Martin:
formula_54
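This last identity lends itself to a simple Monte Carlo check. A sketch (Python with NumPy; the horizon, step count, path count, and seed are arbitrary choices):
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 1000, 20000
dt = T / n_steps
# Sample Brownian paths on a grid and approximate the integral of w(t)^2 over [0, T].
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)
integrals = np.sum(paths ** 2, axis=1) * dt
print(np.mean(np.exp(-0.5 * integrals)))  # Monte Carlo estimate
print(1.0 / np.sqrt(np.cosh(T)))          # exact value, about 0.805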
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\gamma^n"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\mathbf{R}^n"
},
{
"math_id": 3,
"text": "dx"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "\\gamma_n(A) = \\frac{1}{(2\\pi)^{n/2}}\\int_A \\exp\\left(-\\tfrac12\\langle x, x\\rangle_{\\mathbf R^n}\\right)\\,dx."
},
{
"math_id": 6,
"text": "\\langle x,x\\rangle_{\\mathbf R^n}"
},
{
"math_id": 7,
"text": "h \\in \\mathbf{R}^n"
},
{
"math_id": 8,
"text": "\\begin{align}\n\\gamma_n(A-h) &= \\frac{1}{(2\\pi)^{n/2}}\\int_A \\exp\\left(-\\tfrac12\\langle x-h, x-h\\rangle_{\\mathbf R^n}\\right)\\,dx\\\\[4pt]\n&=\\frac{1}{(2\\pi)^{n/2}}\\int_A \\exp\\left(\\frac{2\\langle x, h\\rangle_{\\mathbf R^n} - \\langle h, h\\rangle_{\\mathbf R^n}}{2}\\right)\\exp\\left(-\\tfrac12\\langle x, x\\rangle_{\\mathbf R^n}\\right)\\,dx.\n\\end{align}"
},
{
"math_id": 9,
"text": "h"
},
{
"math_id": 10,
"text": "\\exp\\left(\\frac{2\\langle x, h\\rangle_{\\mathbf R^n} - \\langle h, h\\rangle_{\\mathbf R^n}}{2} \\right)=\\exp\\left(\\langle x, h\\rangle_{\\mathbf R^n} - \\tfrac12\\|h\\|_{\\mathbf R^n}^ 2\\right)."
},
{
"math_id": 11,
"text": "\\gamma_n(A - h)"
},
{
"math_id": 12,
"text": "(T_h)_* (\\gamma^n)"
},
{
"math_id": 13,
"text": "T_h : \\mathbf{R}^n \\to \\mathbf{R}^n"
},
{
"math_id": 14,
"text": "T_h(x) = x + h"
},
{
"math_id": 15,
"text": "\\frac{\\mathrm{d} (T_h)_{*} (\\gamma^n)}{\\mathrm{d} \\gamma^n} (x) = \\exp \\left( \\left \\langle h, x \\right \\rangle_{\\mathbf{R}^n} - \\tfrac{1}{2} \\| h \\|_{\\mathbf{R}^n}^2 \\right)."
},
{
"math_id": 16,
"text": "\\gamma"
},
{
"math_id": 17,
"text": "E"
},
{
"math_id": 18,
"text": "i : H \\to E"
},
{
"math_id": 19,
"text": "i(H) \\subseteq E"
},
{
"math_id": 20,
"text": "\\gamma : \\operatorname{Borel}(E) \\to [0, 1]"
},
{
"math_id": 21,
"text": "h \\in H"
},
{
"math_id": 22,
"text": "T_h : E \\to E"
},
{
"math_id": 23,
"text": "T_h(x) = x + i(h)"
},
{
"math_id": 24,
"text": "(T_h)_*(\\gamma)"
},
{
"math_id": 25,
"text": "\\frac{\\mathrm{d} (T_{h})_{*} (\\gamma)}{\\mathrm{d} \\gamma} (x) = \\exp \\left( \\langle h, x \\rangle^{\\sim} - \\tfrac{1}{2} \\| h \\|_{H}^{2} \\right),"
},
{
"math_id": 26,
"text": "\\langle h, x \\rangle^{\\sim} = i(h) (x)"
},
{
"math_id": 27,
"text": "\\mu"
},
{
"math_id": 28,
"text": "v"
},
{
"math_id": 29,
"text": "v \\in i(H)"
},
{
"math_id": 30,
"text": "i(H)"
},
{
"math_id": 31,
"text": "F : E \\to \\mathbf{R}"
},
{
"math_id": 32,
"text": "\\mathrm{D}F : E \\to \\operatorname{Lin}(E; \\mathbf{R}) = E^*"
},
{
"math_id": 33,
"text": "\\int_{E} F(x + t i(h)) \\, \\mathrm{d} \\gamma (x) = \\int_{E} F(x) \\exp \\left( t \\langle h, x \\rangle^{\\sim} - \\tfrac{1}{2} t^2 \\| h \\|_{H}^{2} \\right) \\, \\mathrm{d} \\gamma (x)"
},
{
"math_id": 34,
"text": "t \\in \\mathbf{R}"
},
{
"math_id": 35,
"text": "t"
},
{
"math_id": 36,
"text": "t = 0"
},
{
"math_id": 37,
"text": "\\int_E \\mathrm{D} F(x) (i(h)) \\, \\mathrm{d} \\gamma (x) = \\int_E F(x) \\langle h, x \\rangle^\\sim \\, \\mathrm{d} \\gamma (x)."
},
{
"math_id": 38,
"text": "\\mathop{\\mathrm{div}} [V_h] (x) = - \\langle h, x \\rangle^\\sim,"
},
{
"math_id": 39,
"text": "V_h : E \\to E"
},
{
"math_id": 40,
"text": "V_h(x) = i(h)"
},
{
"math_id": 41,
"text": "x \\in E"
},
{
"math_id": 42,
"text": "q \\times q"
},
{
"math_id": 43,
"text": "H(t)"
},
{
"math_id": 44,
"text": "H_{j, k}(t)"
},
{
"math_id": 45,
"text": " \\int_0^T \\sum_{j,k=1} ^q |H_{j,k}(t)|\\,dt < \\infty, "
},
{
"math_id": 46,
"text": "q"
},
{
"math_id": 47,
"text": "w(t)"
},
{
"math_id": 48,
"text": " E \\left[ \\exp \\left( -\\int_0^T w(t)^*H(t)w(t) \\, dt \\right) \\right] = \\exp \\left[ \\tfrac{1}{2} \\int_0^T \\operatorname{tr} (G(t)) \\, dt \\right], "
},
{
"math_id": 49,
"text": "G(t)"
},
{
"math_id": 50,
"text": " \\frac{dG(t)}{dt} = 2H(t)-G^2(t)"
},
{
"math_id": 51,
"text": "G(T) = 0"
},
{
"math_id": 52,
"text": "H(t)=1/2"
},
{
"math_id": 53,
"text": "G(t)=\\tanh(t-T)"
},
{
"math_id": 54,
"text": "E\\left[\\exp\\left(-\\tfrac12\\int_0^T w(t)^2\\,dt\\right)\\right] = \\frac{1}{\\sqrt{\\cosh T}}."
}
] |
https://en.wikipedia.org/wiki?curid=6573013
|
6573043
|
Exceptional Lie algebra
|
Complex simple Lie Algebra
In mathematics, an exceptional Lie algebra is a complex simple Lie algebra whose Dynkin diagram is of exceptional (nonclassical) type. There are exactly five of them: formula_0; their respective dimensions are 14, 52, 78, 133, and 248.
In contrast, simple Lie algebras that are not exceptional are called classical Lie algebras (there are infinitely many of them).
Construction.
There is no simple universally accepted way to construct exceptional Lie algebras; in fact, they were discovered only in the process of the classification program. Here are some constructions:
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathfrak{g}_2, \\mathfrak{f}_4, \\mathfrak{e}_6, \\mathfrak{e}_7, \\mathfrak{e}_8"
},
{
"math_id": 1,
"text": "\\mathfrak{g}_2"
},
{
"math_id": 2,
"text": "\\mathfrak{e}_8"
},
{
"math_id": 3,
"text": "\\mathfrak{e}_6, \\mathfrak{e}_7"
}
] |
https://en.wikipedia.org/wiki?curid=6573043
|
65732111
|
Samsung Wave 533
|
Smartphone
The Samsung Wave 533 (or Samsung S5330) was launched on October 26, 2010 alongside the Samsung Wave 525. Both hardware- and software-wise, the S5330 and Samsung Wave 525 are identical except for the inclusion of a slide-out keyboard on the Samsung Wave 533. The phone initially launched with a price of €150 ($210).
Specifications.
Body.
The phone body is made of plastic, with the screen and keyboard portions of the phone separated by a chrome ring. The screen portion of the phone has three buttons: a menu button, a send button marked with a phone icon, and an end button marked with a phone icon with a bar between the earpieces. The keyboard portion of the phone has volume control buttons on the left side, and a dedicated camera activation button on the right along with a lock/power button. The phone was available in three colors: black, white, and pink.
Screen.
The screen is a TFT LCD display that measures 3.2 inches along the diagonal and has a total surface area of 29.1 formula_0, which gives a screen-to-body ratio of 48.4%. The screen has a resolution of 240×400 pixels, a 5:3 aspect ratio, a pixel density of ~146 ppi, and can display 256K colors. The display has a capacitive touchscreen which allows menu interaction without use of the keyboard or menu button. The menu UI for this phone is TouchWiz 3.0.
Hardware.
The phone has a single-core processor running at 312 MHz. It has 100 MB of internal storage, which can be expanded by up to 16 GB via the phone's microSD slot, allowing for up to 16.1 GB of total storage.
Camera.
The phone has a 3.15 MP (2048 × 1536 4:3) camera, which is centered at the top of the phone. The phone can also shoot video at 320p at 15 fps.
Battery.
The phone has a rechargeable 1200 mAh lithium-ion battery, which allows for 1090 hours of standby time or 14 hours of talk time (both measured while connected to a 2G network). The phone can be charged via micro USB.
Connectivity.
The phone has WiFi 4, Bluetooth 3.0, stereo FM radio, A-GPS, and a microUSB 2.0 port. It has 2G connectivity and supports the GSM frequency bands 850/900/1800/1900.
Operating system.
The phone has Bada OS 1.1 pre-installed, with continued support until Bada OS 2.0. TouchWiz 3.0 provides a user interface for phone interaction.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "cm^2"
}
] |
https://en.wikipedia.org/wiki?curid=65732111
|
65738568
|
Almost open map
|
Map that satisfies a condition similar to that of being an open map.
In functional analysis and related areas of mathematics, an almost open map between topological spaces is a map that satisfies a condition similar to, but weaker than, the condition of being an open map.
As described below, for certain broad categories of topological vector spaces, all surjective linear operators are necessarily almost open.
Definitions.
Given a surjective map formula_0 a point formula_1 is called a point of openness for formula_2 and formula_2 is said to be open at formula_3 (or an open map at formula_3) if for every open neighborhood formula_4 of formula_5 formula_6 is a neighborhood of formula_7 in formula_8 (note that the neighborhood formula_6 is not required to be an open neighborhood).
A surjective map is called an open map if it is open at every point of its domain, while it is called an almost open map if each of its fibers has some point of openness.
Explicitly, a surjective map formula_9 is said to be almost open if for every formula_10 there exists some formula_11 such that formula_2 is open at formula_12
Every almost open surjection is necessarily a pseudo-open map (introduced by Alexander Arhangelskii in 1963), which by definition means that for every formula_13 and every neighborhood formula_4 of formula_14 (that is, formula_15), formula_6 is necessarily a neighborhood of formula_16
Almost open linear map.
A linear map formula_17 between two topological vector spaces (TVSs) is called a or an almost open linear map if for any neighborhood formula_4 of formula_18 in formula_19 the closure of formula_20 in formula_8 is a neighborhood of the origin.
Importantly, some authors use a different definition of "almost open map" in which they instead require that the linear map formula_21 satisfy: for any neighborhood formula_4 of formula_18 in formula_19 the closure of formula_20 in formula_22 (rather than in formula_8) is a neighborhood of the origin;
this article will not use this definition.
If a linear map formula_17 is almost open then because formula_22 is a vector subspace of formula_8 that contains a neighborhood of the origin in formula_23 the map formula_17 is necessarily surjective.
For this reason many authors require surjectivity as part of the definition of "almost open".
If formula_17 is a bijective linear operator, then formula_21 is almost open if and only if formula_24 is almost continuous.
Relationship to open maps.
Every surjective open map is an almost open map but in general, the converse is not necessarily true.
If a surjection formula_25 is an almost open map then it will be an open map if it satisfies the following condition (a condition that does not depend in any way on formula_8's topology formula_26):
whenever formula_27 belong to the same fiber of formula_2 (that is, formula_28) then for every neighborhood formula_29 of formula_30 there exists some neighborhood formula_31 of formula_32 such that formula_33
If the map is continuous then the above condition is also necessary for the map to be open. That is, if formula_9 is a continuous surjection then it is an open map if and only if it is almost open and it satisfies the above condition.
Theorem: If formula_17 is a surjective linear operator from a locally convex space formula_34 onto a barrelled space formula_8 then formula_21 is almost open.
Theorem: If formula_17 is a surjective linear operator from a TVS formula_34 onto a Baire space formula_8 then formula_21 is almost open.
Open mapping theorems.
The two theorems above do not require the surjective linear map to satisfy any topological conditions.
Theorem: If formula_34 is a complete pseudometrizable TVS, formula_8 is a Hausdorff TVS, and formula_17 is a closed and almost open linear surjection, then formula_21 is an open map.
Theorem: Suppose formula_17 is a continuous linear operator from a complete pseudometrizable TVS formula_34 into a Hausdorff TVS formula_35 If the image of formula_21 is non-meager in formula_8 then formula_17 is a surjective open map and formula_8 is a complete metrizable space.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f : X \\to Y,"
},
{
"math_id": 1,
"text": "x \\in X"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "U"
},
{
"math_id": 5,
"text": "x,"
},
{
"math_id": 6,
"text": "f(U)"
},
{
"math_id": 7,
"text": "f(x)"
},
{
"math_id": 8,
"text": "Y"
},
{
"math_id": 9,
"text": "f : X \\to Y"
},
{
"math_id": 10,
"text": "y \\in Y,"
},
{
"math_id": 11,
"text": "x \\in f^{-1}(y)"
},
{
"math_id": 12,
"text": "x."
},
{
"math_id": 13,
"text": "y \\in Y"
},
{
"math_id": 14,
"text": "f^{-1}(y)"
},
{
"math_id": 15,
"text": "f^{-1}(y) \\subseteq \\operatorname{Int}_X U"
},
{
"math_id": 16,
"text": "y."
},
{
"math_id": 17,
"text": "T : X \\to Y"
},
{
"math_id": 18,
"text": "0"
},
{
"math_id": 19,
"text": "X,"
},
{
"math_id": 20,
"text": "T(U)"
},
{
"math_id": 21,
"text": "T"
},
{
"math_id": 22,
"text": "T(X)"
},
{
"math_id": 23,
"text": "Y,"
},
{
"math_id": 24,
"text": "T^{-1}"
},
{
"math_id": 25,
"text": "f : (X, \\tau) \\to (Y, \\sigma)"
},
{
"math_id": 26,
"text": "\\sigma"
},
{
"math_id": 27,
"text": "m, n \\in X"
},
{
"math_id": 28,
"text": "f(m) = f(n)"
},
{
"math_id": 29,
"text": "U \\in \\tau"
},
{
"math_id": 30,
"text": "m,"
},
{
"math_id": 31,
"text": "V \\in \\tau"
},
{
"math_id": 32,
"text": "n"
},
{
"math_id": 33,
"text": "F(V) \\subseteq F(U)."
},
{
"math_id": 34,
"text": "X"
},
{
"math_id": 35,
"text": "Y."
}
] |
https://en.wikipedia.org/wiki?curid=65738568
|
65740015
|
List of set identities and relations
|
Equalities for combinations of sets
This article lists mathematical properties and laws of sets, involving the set-theoretic operations of union, intersection, and complementation and the relations of set equality and set inclusion. It also provides systematic procedures for evaluating expressions, and performing calculations, involving these operations and relations.
The binary operations of set union (formula_0) and intersection (formula_1) satisfy many identities. Several of these identities or "laws" have well established names.
Notation.
Throughout this article, capital letters (such as formula_2 and formula_3) will denote sets. On the left hand side of an identity, typically, formula_4 denotes the leftmost set, formula_5 the middle set, and formula_6 the rightmost set.
This is to facilitate applying identities to expressions that are complicated or use the same symbols as the identity.
For example, the identity
formula_7
may be read as:
formula_8
Elementary set operations.
For sets formula_4 and formula_9 define:
formula_10
and
formula_11
where the symmetric difference formula_12 is sometimes denoted by formula_13 and equals:
formula_14
One set formula_4 is said to intersect another set formula_6 if formula_15 Sets that do not intersect are said to be disjoint.
The power set of formula_3 is the set of all subsets of formula_3 and will be denoted by
formula_16
Universe set and complement notation
The notation
formula_17
may be used if formula_4 is a subset of some set formula_3 that is understood (say from context, or because it is clearly stated what the superset formula_3 is).
It is emphasized that the definition of formula_18 depends on context. For instance, had formula_4 been declared as a subset of formula_19 with the sets formula_20 and formula_3 not necessarily related to each other in any way, then formula_18 would likely mean formula_21 instead of formula_22
If it is needed then unless indicated otherwise, it should be assumed that formula_3 denotes the universe set, which means that all sets that are used in the formula are subsets of formula_23
In particular, the complement of a set formula_4 will be denoted by formula_18 where unless indicated otherwise, it should be assumed that formula_18 denotes the complement of formula_4 in (the universe) formula_23
One subset involved.
Assume formula_24
Identity:
Definition: formula_25 is called a left identity element of a binary operator formula_26 if formula_27 for all formula_6 and it is called a right identity element of formula_26 if formula_28 for all formula_29 A left identity element that is also a right identity element is called an identity element.
The empty set formula_30 is an identity element of binary union formula_0 and symmetric difference formula_31 and it is also a right identity element of set subtraction formula_32
formula_33
but formula_30 is not a left identity element of formula_34 since
formula_35
so formula_36 if and only if formula_37
Idempotence formula_38 and Nilpotence formula_39:
formula_40
Domination/Absorbing element:
Definition: formula_41 is called a left absorbing element of a binary operator formula_26 if formula_42 for all formula_6 and it is called a right absorbing element of formula_26 if formula_43 for all formula_29 A left absorbing element that is also a right absorbing element is called an absorbing element. Absorbing elements are also sometimes called annihilating elements or zero elements.
A universe set is an absorbing element of binary union formula_44 The empty set formula_30 is an absorbing element of binary intersection formula_1 and binary Cartesian product formula_45 and it is also a left absorbing element of set subtraction formula_32
formula_46
but formula_30 is not a right absorbing element of set subtraction since
formula_47
where formula_48 if and only if formula_49
Double complement or involution law:
formula_50
formula_47
formula_51
formula_52
formula_53
formula_54
Two sets involved.
In the left hand sides of the following identities, formula_4 is the leftmost set and formula_6 is the rightmost set.
Assume both formula_55 are subsets of some universe set formula_23
Formulas for binary set operations ⋂, ⋃, \, and ∆.
In the left hand sides of the following identities, formula_4 is the leftmost set and formula_6 is the rightmost set. Whenever necessary, both formula_55 should be assumed to be subsets of some universe set formula_56 so that formula_57
formula_58
formula_59
formula_60
formula_61
De Morgan's laws.
De Morgan's laws state that for formula_62
formula_63
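These laws are easily checked on concrete finite sets. A small illustration (Python; the universe and the two sets are arbitrary examples):
X = set(range(10))
L = {1, 2, 3, 4}
R = {3, 4, 5, 6}
# The complement of a union is the intersection of the complements, and dually.
print(X - (L | R) == (X - L) & (X - R))  # True
print(X - (L & R) == (X - L) | (X - R))  # True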
Commutativity.
Unions, intersection, and symmetric difference are commutative operations:
formula_64
Set subtraction is not commutative. However, the commutativity of set subtraction can be characterized: from formula_65 it follows that:
formula_66
Said differently, if distinct symbols always represented distinct sets, then the only true formulas of the form formula_67 that could be written would be those involving a single symbol; that is, those of the form: formula_68
But such formulas are necessarily true for every binary operation formula_26 (because formula_69 must hold by definition of equality), and so in this sense, set subtraction is as diametrically opposite to being commutative as is possible for a binary operation.
Set subtraction is also neither left alternative nor right alternative; instead, formula_70 if and only if formula_71 if and only if formula_72
Set subtraction is quasi-commutative and satisfies the Jordan identity.
Other identities involving two sets.
Absorption laws:
formula_73
Other properties
formula_74
Intervals:
formula_75
formula_76
Subsets ⊆ and supersets ⊇.
The following statements are equivalent for any formula_62
The following statements are equivalent for any formula_62
Set equality.
The following statements are equivalent:
Empty set.
A set formula_4 is empty if the sentence formula_77 is true, where the notation formula_78 is shorthand for formula_79
If formula_4 is any set then the following are equivalent:
If formula_4 is any set then the following are equivalent:
Given any formula_81 the following are equivalent:
Moreover,
formula_82
Meets, Joins, and lattice properties.
Inclusion is a partial order:
Explicitly, this means that inclusion formula_83 which is a binary operation, has the following three properties:
The following proposition says that for any set formula_84 the power set of formula_84 ordered by inclusion, is a bounded lattice, and hence together with the distributive and complement laws above, show that it is a Boolean algebra.
Existence of a least element and a greatest element:
formula_85
Joins/supremums exist:
formula_86
The union formula_87 is the join/supremum of formula_4 and formula_6 with respect to formula_88 because:
The intersection formula_89 is the join/supremum of formula_4 and formula_6 with respect to formula_90
Meets/infimums exist:
formula_91
The intersection formula_89 is the meet/infimum of formula_4 and formula_6 with respect to formula_88 because:
The union formula_87 is the meet/infimum of formula_4 and formula_6 with respect to formula_90
Other inclusion properties:
formula_92
formula_93
Three sets involved.
In the left hand sides of the following identities, formula_4 is the leftmost set, formula_5 is the middle set, and formula_6 is the rightmost set.
Precedence rules
There is no universal agreement on the order of precedence of the basic set operators.
Nevertheless, many authors use precedence rules for set operators, although these rules vary with the author.
One common convention is to associate intersection formula_94 with logical conjunction (and) formula_95 and associate union formula_96 with logical disjunction (or) formula_97 and then transfer the precedence of these logical operators (where formula_98 has precedence over formula_99) to these set operators, thereby giving formula_100 precedence over formula_101
So for example, formula_102 would mean formula_103 since it would be associated with the logical statement formula_104 and similarly, formula_105 would mean formula_106 since it would be associated with formula_107
Sometimes, set complement (subtraction) formula_108 is also associated with logical complement (not) formula_109 in which case it will have the highest precedence.
More specifically, formula_110 is rewritten formula_111 so that for example, formula_112 would mean formula_113 since it would be rewritten as the logical statement formula_114 which is equal to formula_115
For another example, because formula_116 means formula_117 which is equal to both formula_118 and formula_119 (where formula_120 was rewritten as formula_121), the formula formula_122 would refer to the set formula_123
moreover, since formula_124 this set is also equal to formula_125 (other set identities can similarly be deduced from propositional calculus identities in this way).
However, because set subtraction is not associative formula_126 a formula such as formula_127 would be ambiguous; for this reason, among others, set subtraction is often not assigned any precedence at all.
Symmetric difference formula_128 is sometimes associated with exclusive or (xor) formula_129 (also sometimes denoted by formula_130), in which case if the order of precedence from highest to lowest is formula_131 then the order of precedence (from highest to lowest) for the set operators would be formula_132
There is no universal agreement on the precedence of exclusive disjunction formula_133 with respect to the other logical connectives, which is why symmetric difference formula_134 is not often assigned a precedence.
Associativity.
Definition: A binary operator formula_26 is called associative if formula_135 always holds.
The following set operators are associative:
formula_136
For set subtraction, instead of associativity, only the following is always guaranteed:
formula_137
where equality holds if and only if formula_71 (this condition does not depend on formula_5). Thus
formula_138 if and only if formula_139
where the only difference between the left and right hand side set equalities is that the locations of formula_55 have been swapped.
Distributivity.
Definition: If formula_140 are binary operators then formula_26 left distributes over formula_141 if formula_142
while formula_26 right distributes over formula_141 if formula_143
The operator formula_26 distributes over formula_141 if it both left distributes and right distributes over formula_144
In the definitions above, to transform one side to the other, the innermost operator (the operator inside the parentheses) becomes the outermost operator and the outermost operator becomes the innermost operator.
Right distributivity:
formula_145
Left distributivity:
formula_146
Distributivity and symmetric difference ∆.
Intersection distributes over symmetric difference:
formula_147
formula_148
Union does not distribute over symmetric difference because only the following is guaranteed in general:
formula_149
Symmetric difference does not distribute over itself:
formula_150
and in general, for any sets formula_151 (where formula_152 represents formula_153), formula_154 might not be a subset, nor a superset, of formula_4 (and the same is true for formula_152).
Distributivity and set subtraction \.
Failure of set subtraction to left distribute:
Set subtraction is right distributive over itself. However, set subtraction is not left distributive over itself because only the following is guaranteed in general:
formula_155
where equality holds if and only if formula_156 which happens if and only if formula_157
For symmetric difference, the sets formula_158 and formula_159 are always disjoint.
So these two sets are equal if and only if they are both equal to formula_160
Moreover, formula_161 if and only if formula_162
To investigate the left distributivity of set subtraction over unions or intersections, consider how the sets involved in (both of) De Morgan's laws are all related:
formula_163
always holds (the equalities on the left and right are De Morgan's laws) but equality is not guaranteed in general (that is, the containment formula_164 might be strict).
Equality holds if and only if formula_165 which happens if and only if formula_166
This observation about De Morgan's laws shows that formula_108 is not left distributive over formula_167 or formula_100 because only the following are guaranteed in general:
formula_168
formula_169
where equality holds for one (or equivalently, for both) of the above two inclusion formulas if and only if formula_166
The following statements are equivalent:
Quasi-commutativity:
formula_170
always holds but in general,
formula_171
However, formula_172 if and only if formula_173 if and only if formula_174
Set subtraction complexity: To manage the many identities involving set subtraction, this section is divided based on where the set subtraction operation and parentheses are located on the left hand side of the identity. The great variety and (relative) complexity of formulas involving set subtraction (compared to those without it) is in part due to the fact that unlike formula_175 and formula_176 set subtraction is neither associative nor commutative and it also is not left distributive over formula_177 or even over itself.
Two set subtractions.
Set subtraction is not associative in general:
formula_178
since only the following is always guaranteed:
formula_179
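A concrete example of the failure of associativity, and of the inclusion just stated (Python; the particular sets are arbitrary examples):
L, M, R = {1, 2}, {2, 3}, {1, 3}
print((L - M) - R)   # set(), i.e. the empty set
print(L - (M - R))   # {1}, a strict superset of the previous result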
(L\M)\R.
formula_180
L\(M\R).
formula_181
* If formula_182
* formula_183 with equality if and only if formula_184
One set subtraction.
(L\M) ⁎ R.
Set subtraction on the "left", and parentheses on the left
formula_185
formula_186
formula_163
formula_187
formula_188
L\(M ⁎ R).
Set subtraction on the "left", and parentheses on the right
formula_189
formula_190
where the above two sets that are the subjects of De Morgan's laws always satisfy formula_191
formula_192
(L ⁎ M)\R.
Set subtraction on the "right", and parentheses on the left
formula_193
formula_194
formula_195
L ⁎ (M\R).
Set subtraction on the "right", and parentheses on the right
formula_196
formula_197
formula_198
Three operations on three sets.
(L • M) ⁎ (M • R).
Operations of the form formula_199:
formula_200
(L • M) ⁎ (R\M).
Operations of the form formula_201:
formula_202
(L\M) ⁎ (L\R).
Operations of the form formula_203:
formula_204
Other simplifications.
Other properties:
formula_205
Symmetric difference ∆ of finitely many sets.
Given finitely many sets formula_206 something belongs to their symmetric difference if and only if it belongs to an odd number of these sets. Explicitly, for any formula_81 formula_207 if and only if the cardinality formula_208 is odd. (Recall that symmetric difference is associative so parentheses are not needed for the set formula_209).
Consequently, the symmetric difference of three sets satisfies:
formula_210
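This membership criterion can be checked directly on small examples. A small illustration (Python; the sets are arbitrary examples):
from functools import reduce

sets = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]
sym_diff = reduce(lambda a, b: a ^ b, sets)  # symmetric difference of all the sets
for x in range(1, 6):
    # x lies in the symmetric difference exactly when it lies in an odd number of the sets.
    assert (x in sym_diff) == (sum(x in s for s in sets) % 2 == 1)
print(sorted(sym_diff))  # [1, 3, 5]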
Cartesian products ⨯ of finitely many sets.
Binary ⨯ distributes over ⋃ and ⋂ and \ and ∆.
The binary Cartesian product ⨯ distributes over unions, intersections, set subtraction, and symmetric difference:
formula_211
formula_212
But in general, ⨯ does not distribute over itself:
formula_213
formula_214
Binary ⋂ of finite ⨯.
formula_215
formula_216
Binary ⋃ of finite ⨯.
formula_217
Difference \ of finite ⨯.
formula_218
and
formula_219
Finite ⨯ of differences \.
formula_220
formula_221
Symmetric difference ∆ and finite ⨯.
formula_222
formula_223
formula_224
formula_225
In general, formula_226 need not be a subset nor a superset of formula_227
formula_228
formula_229
Arbitrary families of sets.
Let formula_230 formula_231 and formula_232 be indexed families of sets. Whenever the assumption is needed, then all indexing sets, such as formula_233 and formula_234 are assumed to be non-empty.
Definitions.
A family of sets or (more briefly) a family refers to a set whose elements are sets.
An indexed family of sets is a function from some set, called its indexing set, into some family of sets.
An indexed family of sets will be denoted by formula_230 where this notation assigns the symbol formula_233 for the indexing set and for every index formula_235 assigns the symbol formula_236 to the value of the function at formula_237
The function itself may then be denoted by the symbol formula_238 which is obtained from the notation formula_239 by replacing the index formula_240 with a bullet symbol formula_241 explicitly, formula_242 is the function:
formula_243
which may be summarized by writing formula_244
Any given indexed family of sets formula_245 (which is a function) can be canonically associated with its image/range formula_246 (which is a family of sets).
Conversely, any given family of sets formula_247 may be associated with the formula_247-indexed family of sets formula_248 which is technically the identity map formula_249
However, this is not a bijective correspondence because an indexed family of sets formula_245 is not required to be injective (that is, there may exist distinct indices formula_250 such as formula_251), which in particular means that it is possible for distinct indexed families of sets (which are functions) to be associated with the same family of sets (by having the same image/range).
Arbitrary unions defined
If formula_252 then formula_253 which is sometimes called the nullary union convention (despite being called a convention, this equality follows from the definition).
If formula_247 is a family of sets then formula_254 denotes the set:
formula_255
Arbitrary intersections defined
If formula_256 then
If formula_257 is a non-empty family of sets then formula_258 denotes the set:
formula_259
Nullary intersections
If formula_252 then
formula_260
where every possible thing formula_80 in the universe vacuously satisfies the condition: "if formula_261 then formula_262". Consequently, formula_263 consists of everything in the universe.
So if formula_252 and:
Assumption: Henceforth, whenever a formula requires some indexing set to be non-empty in order for an arbitrary intersection to be well-defined, then this will automatically be assumed without mention.
A consequence of this is the following assumption/definition:
A finite intersection of sets or an intersection of finitely many sets refers to the intersection of a finite collection of one or more sets.
Some authors adopt the so called nullary intersection convention, which is the convention that an empty intersection of sets is equal to some canonical set. In particular, if all sets are subsets of some set formula_3 then some author may declare that the empty intersection of these sets be equal to formula_23 However, the nullary intersection convention is not as commonly accepted as the nullary union convention and this article will not adopt it (this is due to the fact that unlike the empty union, the value of the empty intersection depends on formula_3 so if there are multiple sets under consideration, which is commonly the case, then the value of the empty intersection risks becoming ambiguous).
Multiple index sets
formula_266
formula_267
Distributing unions and intersections.
Binary ⋂ of arbitrary ⋃'s.
and
Binary ⋃ of arbitrary ⋂'s.
and
Arbitrary ⋂'s and arbitrary ⋃'s.
Incorrectly distributing by swapping ⋂ and ⋃.
Naively swapping formula_285 and formula_286 may produce a different set
The following inclusion always holds:
In general, equality need not hold and moreover, the right hand side depends on how for each fixed formula_235 the sets formula_287 are labelled; and analogously, the left hand side depends on how for each fixed formula_288 the sets formula_289 are labelled. An example demonstrating this is now given.
Equality in Inclusion 1 ∪∩ is a subset of ∩∪ can hold under certain circumstances, such as in 7e, which is the special case where formula_290 is formula_291 (that is, formula_292 with the same indexing sets formula_233 and formula_276), or such as in 7f, which is the special case where formula_290 is formula_293 (that is, formula_294 with the indexing sets formula_233 and formula_276 swapped).
For a correct formula that extends the distributive laws, an approach other than just switching formula_0 and formula_1 is needed.
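Inclusion 1 ∪∩ is a subset of ∩∪ can be checked by brute force on small examples. The following Python sketch (with an arbitrarily chosen doubly indexed family) computes both sides and confirms that the union of intersections is contained in the intersection of unions, and that the containment can be proper.

I, J = [0, 1], [0, 1]
# An arbitrary doubly indexed family S[i][j], chosen so that the inclusion is strict.
S = {0: {0: {1}, 1: {2}},
     1: {0: {2}, 1: {1}}}

union_of_intersections = set().union(*(set.intersection(*(S[i][j] for j in J)) for i in I))
intersection_of_unions = set.intersection(*(set().union(*(S[i][j] for i in I)) for j in J))

assert union_of_intersections <= intersection_of_unions   # the inclusion always holds
assert union_of_intersections <  intersection_of_unions   # and here it is strict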
Correct distributive laws.
Suppose that for each formula_235 formula_295 is a non-empty index set and for each formula_296 let formula_297 be any set (for example, to apply this law to formula_298 use formula_299 for all formula_300 and use formula_301 for all formula_300 and all formula_302). Let
formula_303
denote the Cartesian product, which can be interpreted as the set of all functions formula_304 such that formula_305 for every formula_306 Such a function may also be denoted using the tuple notation formula_307 where formula_308 for every formula_300 and conversely, a tuple formula_307 is just notation for the function with domain formula_233 whose value at formula_300 is formula_309 both notations can be used to denote the elements of formula_310
Then
where formula_311
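On a small finite example, Eq. 5 ∩∪ to ∪∩ can be verified by enumerating the choice functions in the Cartesian product of the index sets with itertools.product. The Python sketch below uses arbitrarily chosen data and index sets of different sizes.

from itertools import product

I = [0, 1]
J = {0: ["a", "b"], 1: ["c", "d", "e"]}     # the index set J_i may vary with i
T = {(0, "a"): {1, 2}, (0, "b"): {3},
     (1, "c"): {2, 3}, (1, "d"): {1}, (1, "e"): {4}}

# Left hand side: intersection over i of the union over j in J_i.
lhs = set.intersection(*(set().union(*(T[i, j] for j in J[i])) for i in I))

# Right hand side: union over all choice functions f of the intersection over i of T[i, f(i)].
# Each tuple produced by product(*(J[i] for i in I)) plays the role of one choice function.
rhs = set().union(*(set.intersection(*(T[i, f[i]] for i in I))
                    for f in product(*(J[i] for i in I))))

assert lhs == rhs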
Applying the distributive laws.
Example application: In the particular case where all formula_295 are equal (that is, formula_312 for all formula_313 which is the case with the family formula_298 for example), then letting formula_276 denote this common set, the Cartesian product will be formula_314 which is the set of all functions of the form formula_315 The above set equalities Eq. 5 ∩∪ to ∪∩ and Eq. 6 ∪∩ to ∩∪, respectively become:
formula_316
formula_317
which when combined with Inclusion 1 ∪∩ is a subset of ∩∪ implies:
formula_318
where
Example application: To apply the general formula to the case of formula_327 and formula_328 use formula_329 formula_330 formula_331 and let formula_332 for all formula_333 and let formula_334 for all formula_335
Every map formula_336 can be bijectively identified with the pair formula_337 (the inverse sends formula_338 to the map formula_339 defined by formula_340 and formula_341 this is technically just a change of notation). Recall that Eq. 5 ∩∪ to ∪∩ was
formula_342
Expanding and simplifying the left hand side gives
formula_343
and doing the same to the right hand side gives:
formula_344
Thus the general identity Eq. 5 ∩∪ to ∪∩ reduces down to the previously given set equality Eq. 3b:
formula_345
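A quick numerical check of this reduced identity, with two arbitrarily chosen finite families:

from itertools import product

C = [{1, 2}, {3}]        # the family (C_k)
D = [{2, 3}, {1, 4}]     # the family (D_l)

lhs = set().union(*C) & set().union(*D)
rhs = set().union(*(Ck & Dl for Ck, Dl in product(C, D)))
assert lhs == rhs == {1, 2, 3}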
Distributing subtraction over ⋃ and ⋂.
The next identities are known as De Morgan's laws.
The following four set equalities can be deduced from the equalities 7a - 7d above.
In general, naively swapping formula_346 and formula_347 may produce a different set (see the note above on incorrectly distributing by swapping ⋂ and ⋃ for more details).
The equalities
formula_348
found in Eq. 7e and Eq. 7f are thus unusual in that they state exactly that swapping formula_346 and formula_347 will not change the resulting set.
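De Morgan's laws for arbitrary families are easy to confirm on small examples. In the Python sketch below the universe and the family are chosen arbitrarily for illustration.

X = set(range(6))
family = [{0, 1, 2}, {1, 2, 3}, {2, 4}]

# Complement of a union is the intersection of the complements, and dually.
assert X - set().union(*family) == set.intersection(*(X - L for L in family))
assert X - set.intersection(*family) == set().union(*(X - L for L in family))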
Commutativity and associativity of ⋃ and ⋂.
Commutativity:
formula_349
formula_350
Unions of unions and intersections of intersections:
formula_351
formula_352
and if formula_272 then the corresponding identities in which both families are combined over the common index set also hold.
Cartesian products Π of arbitrarily many sets.
Intersections ⋂ of Π.
If formula_290 is a family of sets then an intersection of Cartesian products can be computed factor-by-factor, that is, by intersecting the factors belonging to the same index. In particular, if formula_239 and formula_356 are two families indexed by the same set then
formula_357
So for instance,
formula_215
formula_358 and
formula_216
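The first of these identities can be confirmed with itertools.product on small, arbitrarily chosen sets:

from itertools import product

L,  R  = {1, 2}, {5, 6}
L2, R2 = {2, 3}, {6, 7}

lhs = set(product(L, R)) & set(product(L2, R2))
rhs = set(product(L & L2, R & R2))
assert lhs == rhs == {(2, 6)}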
Intersections of products indexed by different sets
Let formula_239 and formula_268 be two families indexed by different sets.
Technically, formula_359 implies formula_360
However, sometimes these products are somehow identified as the same set through some bijection or one of these products is identified as a subset of the other via some injective map, in which case (by abuse of notation) this intersection may be equal to some other (possibly non-empty) set.
Binary ⨯ distributes over arbitrary ⋃ and ⋂.
The binary Cartesian product ⨯ distributes over arbitrary intersections (when the indexing set is not empty) and over arbitrary unions:
formula_381
Distributing arbitrary Π over arbitrary ⋃.
Suppose that for each formula_235 formula_295 is a non-empty index set and for each formula_296 let formula_297 be any set (for example, to apply this law to formula_298 use formula_299 for all formula_300 and use formula_301 for all formula_300 and all formula_302). Let
formula_303
denote the Cartesian product, which (as mentioned above) can be interpreted as the set of all functions formula_304 such that formula_305 for every formula_300.
Then
where formula_311
Unions ⋃ of Π.
For unions, only the following is guaranteed in general:
formula_382
where formula_290 is a family of sets.
However,
formula_217
Difference \ of Π.
If formula_239 and formula_356 are two families of sets then:
formula_392
so for instance,
formula_218
and
formula_219
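The first of these difference formulas can likewise be checked numerically (small, arbitrarily chosen sets):

from itertools import product

L,  R  = {1, 2}, {5, 6}
L2, R2 = {2},    {6}

lhs = set(product(L, R)) - set(product(L2, R2))
rhs = set(product(L - L2, R)) | set(product(L, R - R2))
assert lhs == rhs == {(1, 5), (1, 6), (2, 5)}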
Symmetric difference ∆ of Π.
formula_393
Functions and sets.
Let formula_394 be any function.
Let formula_55 be completely arbitrary sets. Assume formula_395
Definitions.
Let formula_394 be any function, where we denote its domain formula_3 by formula_396 and denote its codomain formula_20 by formula_397
Many of the identities below do not actually require that the sets be somehow related to formula_398's domain or codomain (that is, to formula_3 or formula_20) so when some kind of relationship is necessary then it will be clearly indicated.
Because of this, in this article, if formula_4 is declared to be "any set," and it is not indicated that formula_4 must be somehow related to formula_3 or formula_20 (say for instance, that it be a subset of formula_3 or of formula_20) then it is meant that formula_4 is truly arbitrary.
This generality is useful in situations where formula_394 is a map between two subsets formula_399 and formula_400 of some larger sets formula_401 and formula_402 and where the set formula_4 might not be entirely contained in formula_403 and/or formula_404 (e.g. if all that is known about formula_4 is that formula_405); in such a situation it may be useful to know what can and cannot be said about formula_406 and/or formula_407 without having to introduce a (potentially unnecessary) intersection such as: formula_408 and/or formula_409
Images and preimages of sets
If formula_4 is any set then the image of formula_4 under formula_398 is defined to be the set:
formula_410
while the preimage of formula_4 under formula_398 is:
formula_411
where if formula_412 is a singleton set then the fiber or preimage of formula_413 under formula_398 is
formula_414
Denote by formula_415 or formula_416 the image or range of formula_417 which is the set:
formula_418
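In code, images and preimages can be written directly from these definitions. The Python helpers below are illustrative only (the map and the sets are made up); following the article's convention that the set need not be related to the domain, the image only uses points that actually lie in the domain, and the preimage is always taken inside the domain.

def image(f, domain, A):
    """f(A) = { f(a) : a in A and a in the domain of f }."""
    return {f(a) for a in A if a in domain}

def preimage(f, domain, A):
    """f^{-1}(A) = { x in the domain : f(x) in A }."""
    return {x for x in domain if f(x) in A}

f = lambda n: n % 3          # a sample map on the domain below
domain = set(range(9))

assert image(f, domain, {0, 1, 4}) == {0, 1}
assert preimage(f, domain, {0}) == {0, 3, 6}                 # the fiber over 0
assert preimage(f, domain, {0, 2}) == {0, 2, 3, 5, 6, 8}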
Saturated sets
A set formula_152 is said to be formula_398-saturated or a saturated set if any of the following equivalent conditions is satisfied:
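One standard characterization, presumably among the equivalent conditions referred to, is that formula_152 is formula_398-saturated exactly when it equals the preimage of its own image under formula_398 (equivalently, when it is a union of fibers of formula_398). A small Python check of this criterion on a made-up map:

f = lambda n: n % 3
domain = set(range(9))

def is_saturated(A):
    """A is f-saturated when it equals the preimage of its image."""
    return A == {x for x in domain if f(x) in {f(a) for a in A}}

assert is_saturated({0, 3, 6})       # a full fiber of f is saturated
assert not is_saturated({0, 3})      # a proper subset of a fiber is not
assert is_saturated(set()) and is_saturated(domain)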
Elementwise operations on families.
Let formula_422 and formula_423 be families of sets over formula_23
On the left hand sides of the following identities, formula_421 is the Left-most family, formula_424 is in the Middle, and formula_423 is the Right-most set.
Commutativity:
formula_425
formula_426
Associativity:
formula_427
formula_428
Identity:
formula_429
formula_430
formula_431
Domination:
formula_432
formula_433
formula_434
formula_435
formula_436
formula_437
Power set.
formula_438
formula_439
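The standard interactions of the power set with ∩ and ∪ can be probed on small sets. The following Python sketch (illustrative only, with arbitrarily chosen sets) confirms that the power set of an intersection equals the intersection of the power sets, while the union of two power sets is in general only a proper subset of the power set of the union.

from itertools import combinations

def powerset(S):
    """All subsets of S, returned as frozensets."""
    S = list(S)
    return {frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)}

L, R = {1, 2}, {2, 3}
assert powerset(L & R) == powerset(L) & powerset(R)
assert powerset(L) | powerset(R) < powerset(L | R)    # proper inclusion in general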
If formula_4 and formula_6 are subsets of a vector space formula_3 and if formula_413 is a scalar then
formula_440
formula_441
Sequences of sets.
Suppose that formula_4 is any set such that formula_442 for every index formula_237
If formula_443 decreases to formula_6 then formula_444 increases to formula_420
whereas if instead formula_443 increases to formula_6 then formula_445 decreases to formula_446
If formula_55 are arbitrary sets and if formula_447 increases (resp. decreases) to formula_4 then formula_448 increases (resp. decreases) to formula_446
Partitions.
Suppose that formula_449 is any sequence of sets, that formula_450 is any subset, and for every index formula_419 let formula_451
Then formula_452 and formula_453 is a sequence of pairwise disjoint sets.
Suppose that formula_449 is non-decreasing, let formula_454 and let formula_455 for every formula_456 Then formula_457 and formula_458 is a sequence of pairwise disjoint sets.
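For the non-decreasing case, the construction can be illustrated in Python on a finite truncation of a sequence (arbitrary data), using the step of removing the previous set from the current one; the resulting pieces are pairwise disjoint and have the same union as the original sequence.

from itertools import combinations

# A non-decreasing (finite) sequence of sets, chosen arbitrarily.
L = [{1}, {1, 2}, {1, 2, 3}, {1, 2, 3, 5}]

# D_1 = L_1 and, for n >= 2, D_n = L_n \ L_{n-1}.
D = [L[0]] + [L[n] - L[n - 1] for n in range(1, len(L))]

assert set().union(*D) == set().union(*L)
assert all(A.isdisjoint(B) for A, B in combinations(D, 2))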
Notes.
Notes
<templatestyles src="Reflist/styles.css" />
Proofs
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\cup"
},
{
"math_id": 1,
"text": "\\cap"
},
{
"math_id": 2,
"text": "A, B, C, L, M, R, S,"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "L"
},
{
"math_id": 5,
"text": "M"
},
{
"math_id": 6,
"text": "R"
},
{
"math_id": 7,
"text": "(L \\,\\setminus\\, M) \\,\\setminus\\, R ~=~ (L \\,\\setminus\\, R) \\,\\setminus\\, (M \\,\\setminus\\, R)"
},
{
"math_id": 8,
"text": "(\\text{Left set} \\,\\setminus\\, \\text{Middle set}) \\,\\setminus\\, \\text{Right set} ~=~ (\\text{Left set} \\,\\setminus\\, \\text{Right set}) \\,\\setminus\\, (\\text{Middle set} \\,\\setminus\\, \\text{Right set})."
},
{
"math_id": 9,
"text": "R,"
},
{
"math_id": 10,
"text": "\\begin{alignat}{4}\nL \\cup R &&~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\{~ x ~:~ x \\in L \\;&&\\text{ or }\\;\\, &&\\; x \\in R ~\\} \\\\\nL \\cap R &&~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\{~ x ~:~ x \\in L \\;&&\\text{ and } &&\\; x \\in R ~\\} \\\\\nL \\setminus R &&~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\{~ x ~:~ x \\in L \\;&&\\text{ and } &&\\; x \\notin R ~\\} \\\\\n\\end{alignat}"
},
{
"math_id": 11,
"text": "L \\triangle R ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\{~ x ~:~ x \\text{ belongs to exactly one of } L \\text{ and } R ~\\}"
},
{
"math_id": 12,
"text": "L \\triangle R"
},
{
"math_id": 13,
"text": "L \\ominus R"
},
{
"math_id": 14,
"text": "\\begin{alignat}{4}\nL \\;\\triangle\\; R \n~&=~ (L ~\\setminus~ &&R) ~\\cup~ &&(R ~\\setminus~ &&L) \\\\\n~&=~ (L ~\\cup~ &&R) ~\\setminus~ &&(L ~\\cap~ &&R). \n\\end{alignat}"
},
{
"math_id": 15,
"text": "L \\cap R \\neq \\varnothing."
},
{
"math_id": 16,
"text": "\\wp(X) ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\{~ L ~:~ L \\subseteq X ~\\}."
},
{
"math_id": 17,
"text": "L^\\complement ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ X \\setminus L."
},
{
"math_id": 18,
"text": "L^\\complement"
},
{
"math_id": 19,
"text": "Y,"
},
{
"math_id": 20,
"text": "Y"
},
{
"math_id": 21,
"text": "Y \\setminus L"
},
{
"math_id": 22,
"text": "X \\setminus L."
},
{
"math_id": 23,
"text": "X."
},
{
"math_id": 24,
"text": "L \\subseteq X."
},
{
"math_id": 25,
"text": "e"
},
{
"math_id": 26,
"text": "\\,\\ast\\,"
},
{
"math_id": 27,
"text": "e \\,\\ast\\, R = R"
},
{
"math_id": 28,
"text": "L \\,\\ast\\, e = L"
},
{
"math_id": 29,
"text": "L."
},
{
"math_id": 30,
"text": "\\varnothing"
},
{
"math_id": 31,
"text": "\\triangle,"
},
{
"math_id": 32,
"text": "\\, \\setminus:"
},
{
"math_id": 33,
"text": "\\begin{alignat}{10}\nL \\cap X &\\;=\\;&& L &\\;=\\;& X \\cap L ~~~~\\text{ where } L \\subseteq X \\\\[1.4ex]\nL \\cup \\varnothing &\\;=\\;&& L &\\;=\\;& \\varnothing \\cup L \\\\[1.4ex]\nL \\,\\triangle \\varnothing &\\;=\\;&& L &\\;=\\;& \\varnothing \\,\\triangle L \\\\[1.4ex]\nL \\setminus \\varnothing &\\;=\\;&& L \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 34,
"text": "\\, \\setminus \\,"
},
{
"math_id": 35,
"text": "\\varnothing \\setminus L = \\varnothing"
},
{
"math_id": 36,
"text": "\\varnothing \\setminus L = L"
},
{
"math_id": 37,
"text": "L = \\varnothing."
},
{
"math_id": 38,
"text": "L \\ast L = L"
},
{
"math_id": 39,
"text": "L \\ast L = \\varnothing"
},
{
"math_id": 40,
"text": "\\begin{alignat}{10}\nL \\cup L &\\;=\\;&& L && \\quad \\text{ (Idempotence)} \\\\[1.4ex]\nL \\cap L &\\;=\\;&& L && \\quad \\text{ (Idempotence)} \\\\[1.4ex]\nL \\,\\triangle\\, L &\\;=\\;&& \\varnothing && \\quad \\text{ (Nilpotence of index 2)} \\\\[1.4ex]\nL \\setminus L &\\;=\\;&& \\varnothing && \\quad \\text{ (Nilpotence of index 2)} \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 41,
"text": "z"
},
{
"math_id": 42,
"text": "z \\,\\ast\\, R = z"
},
{
"math_id": 43,
"text": "L \\,\\ast\\, z = z"
},
{
"math_id": 44,
"text": "\\cup."
},
{
"math_id": 45,
"text": "\\times,"
},
{
"math_id": 46,
"text": "\\begin{alignat}{10}\nX \\cup L &\\;=\\;&& X &\\;=\\;& L \\cup X ~~~~\\text{ where } L \\subseteq X \\\\[1.4ex]\n\\varnothing \\cap L &\\;=\\;&& \\varnothing &\\;=\\;& L \\cap \\varnothing \\\\[1.4ex]\n\\varnothing \\times L &\\;=\\;&& \\varnothing &\\;=\\;& L \\times \\varnothing \\\\[1.4ex]\n\\varnothing \\setminus L &\\;=\\;&& \\varnothing &\\;\\;& \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 47,
"text": "L \\setminus \\varnothing = L"
},
{
"math_id": 48,
"text": "L \\setminus \\varnothing = \\varnothing"
},
{
"math_id": 49,
"text": "L = \\varnothing."
},
{
"math_id": 50,
"text": "\\begin{alignat}{10}\nX \\setminus (X \\setminus L)\n&= L\n&&\\qquad\\text{ Also written }\\quad\n&&\\left(L^\\complement\\right)^\\complement = L\n&&\\quad&&\\text{ where } L \\subseteq X \\quad\n\\text{ (Double complement/Involution law)} \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 51,
"text": "\\begin{alignat}{4}\n\\varnothing \n&= L &&\\setminus L \\\\\n&= \\varnothing &&\\setminus L \\\\\n&= L &&\\setminus X ~~~~\\text{ where } L \\subseteq X \\\\\n\\end{alignat}"
},
{
"math_id": 52,
"text": "L^\\complement = X \\setminus L \\quad \\text{ (definition of notation)}"
},
{
"math_id": 53,
"text": "\\begin{alignat}{10}\nL \\,\\cup (X \\setminus L)\n&= X\n&&\\qquad\\text{ Also written }\\quad\n&&L \\cup L^\\complement = X\n&&\\quad&&\\text{ where } L \\subseteq X\n\\\\[1.4ex]\n\nL \\,\\triangle (X \\setminus L)\n&= X\n&&\\qquad\\text{ Also written }\\quad\n&&L \\,\\triangle L^\\complement = X\n&&\\quad&&\\text{ where } L \\subseteq X\n\\\\[1.4ex]\n\nL \\,\\cap (X \\setminus L)\n&= \\varnothing\n&&\\qquad\\text{ Also written }\\quad\n&&L \\cap L^\\complement = \\varnothing\n&&\\quad&&\n\\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 54,
"text": "\\begin{alignat}{10}\nX \\setminus \\varnothing\n&= X\n&&\\qquad\\text{ Also written }\\quad\n&&\\varnothing^\\complement = X\n&&\\quad&&\\text{ (Complement laws for the empty set))}\n\\\\[1.4ex]\n\nX \\setminus X\n&= \\varnothing\n&&\\qquad\\text{ Also written }\\quad\n&&X^\\complement = \\varnothing\n&&\\quad&&\\text{ (Complement laws for the universe set)}\n\\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 55,
"text": "L \\text{ and } R"
},
{
"math_id": 56,
"text": "X,"
},
{
"math_id": 57,
"text": "L^\\complement := X \\setminus L \\text{ and } R^\\complement := X \\setminus R."
},
{
"math_id": 58,
"text": "\\begin{alignat}{9}\nL \\cap R\n&= L &&\\,\\,\\setminus\\, &&(L &&\\,\\,\\setminus &&R) \\\\\n&= R &&\\,\\,\\setminus\\, &&(R &&\\,\\,\\setminus &&L) \\\\\n&= L &&\\,\\,\\setminus\\, &&(L &&\\,\\triangle\\, &&R) \\\\\n&= L &&\\,\\triangle\\, &&(L &&\\,\\,\\setminus &&R) \\\\\n\\end{alignat}"
},
{
"math_id": 59,
"text": "\\begin{alignat}{9}\nL \\cup R\n&= (&&L \\,\\triangle\\, R) &&\\,\\,\\cup && &&L && && \\\\\n&= (&&L \\,\\triangle\\, R) &&\\,\\triangle\\, &&(&&L &&\\cap\\, &&R) \\\\\n&= (&&R \\,\\setminus\\, L) &&\\,\\,\\cup && &&L && && ~~~~~\\text{ (union is disjoint)} \\\\\n\\end{alignat}"
},
{
"math_id": 60,
"text": "\\begin{alignat}{9}\nL \\,\\triangle\\, R\n&= &&R \\,\\triangle\\, L && && && && \\\\\n&= (&&L \\,\\cup\\, R) &&\\,\\setminus\\, &&(&&L \\,\\,\\cap\\, R) && \\\\\n&= (&&L \\,\\setminus\\, R) &&\\cup\\, &&(&&R \\,\\,\\setminus\\, L) && ~~~~~\\text{ (union is disjoint)} \\\\\n&= (&&L \\,\\triangle\\, M) &&\\,\\triangle\\, &&(&&M \\,\\triangle\\, R) && ~~~~~\\text{ where } M \\text{ is an arbitrary set. } \\\\\n&= (&&L^\\complement) &&\\,\\triangle\\, &&(&&R^\\complement) && \\\\\n\\end{alignat}"
},
{
"math_id": 61,
"text": "\\begin{alignat}{9}\nL \\setminus R\n&= &&L &&\\,\\,\\setminus &&(L &&\\,\\,\\cap &&R) \\\\\n&= &&L &&\\,\\,\\cap &&(L &&\\,\\triangle\\, &&R) \\\\\n&= &&L &&\\,\\triangle\\, &&(L &&\\,\\,\\cap &&R) \\\\\n&= &&R &&\\,\\triangle\\, &&(L &&\\,\\,\\cup &&R) \\\\\n\\end{alignat}"
},
{
"math_id": 62,
"text": "L, R \\subseteq X:"
},
{
"math_id": 63,
"text": "\\begin{alignat}{10}\nX \\setminus (L \\cap R)\n&= (X \\setminus L) \\cup (X \\setminus R)\n&&\\qquad\\text{ Also written }\\quad\n&&(L \\cap R)^\\complement = L^\\complement \\cup R^\\complement\n&&\\quad&&\\text{ (De Morgan's law)}\n\\\\[1.4ex]\n\nX \\setminus (L \\cup R)\n&= (X \\setminus L) \\cap (X \\setminus R)\n&&\\qquad\\text{ Also written }\\quad\n&&(L \\cup R)^\\complement = L^\\complement \\cap R^\\complement\n&&\\quad&&\\text{ (De Morgan's law)}\n\\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 64,
"text": "\\begin{alignat}{10}\nL \\cup R &\\;=\\;&& R \\cup L && \\quad \\text{ (Commutativity)} \\\\[1.4ex]\nL \\cap R &\\;=\\;&& R \\cap L && \\quad \\text{ (Commutativity)} \\\\[1.4ex]\nL \\,\\triangle R &\\;=\\;&& R \\,\\triangle L && \\quad \\text{ (Commutativity)} \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 65,
"text": "(L \\,\\setminus\\, R) \\cap (R \\,\\setminus\\, L) = \\varnothing"
},
{
"math_id": 66,
"text": "L \\,\\setminus\\, R = R \\,\\setminus\\, L \\quad \\text{ if and only if } \\quad L = R."
},
{
"math_id": 67,
"text": "\\,\\cdot\\, \\,\\setminus\\, \\,\\cdot\\, = \\,\\cdot\\, \\,\\setminus\\, \\,\\cdot\\,"
},
{
"math_id": 68,
"text": "S \\,\\setminus\\, S = S \\,\\setminus\\, S."
},
{
"math_id": 69,
"text": "x \\,\\ast\\, x = x \\,\\ast\\, x"
},
{
"math_id": 70,
"text": "(L \\setminus L) \\setminus R = L \\setminus (L \\setminus R)"
},
{
"math_id": 71,
"text": "L \\cap R = \\varnothing"
},
{
"math_id": 72,
"text": "(R \\setminus L) \\setminus L = R \\setminus (L \\setminus L)."
},
{
"math_id": 73,
"text": "\\begin{alignat}{4}\nL \\cup (L \\cap R) &\\;=\\;&& L && \\quad \\text{ (Absorption)} \\\\[1.4ex]\nL \\cap (L \\cup R) &\\;=\\;&& L && \\quad \\text{ (Absorption)} \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 74,
"text": "\\begin{alignat}{10}\nL \\setminus R\n&= L \\cap (X \\setminus R)\n&&\\qquad\\text{ Also written }\\quad\n&&L \\setminus R = L \\cap R^\\complement\n&&\\quad&&\\text{ where } L, R \\subseteq X\n\\\\[1.4ex]\n\nX \\setminus (L \\setminus R)\n&= (X \\setminus L) \\cup R\n&&\\qquad\\text{ Also written }\\quad\n&&(L \\setminus R)^\\complement = L^\\complement \\cup R\n&&\\quad&&\\text{ where } R \\subseteq X\n\\\\[1.4ex]\n\nL \\setminus R\n&= (X \\setminus R) \\setminus (X \\setminus L)\n&&\\qquad\\text{ Also written }\\quad\n&&L \\setminus R = R^\\complement \\setminus L^\\complement\n&&\\quad&&\\text{ where } L, R \\subseteq X\n\\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 75,
"text": "(a, b) \\cap (c, d) = (\\max\\{a,c\\}, \\min\\{b,d\\})"
},
{
"math_id": 76,
"text": "[a, b) \\cap [c, d) = [\\max\\{a,c\\}, \\min\\{b,d\\})"
},
{
"math_id": 77,
"text": "\\forall x (x \\not\\in L)"
},
{
"math_id": 78,
"text": "x \\not\\in L"
},
{
"math_id": 79,
"text": "\\lnot (x \\in L)."
},
{
"math_id": 80,
"text": "x"
},
{
"math_id": 81,
"text": "x,"
},
{
"math_id": 82,
"text": "(L \\setminus R) \\cap R = \\varnothing \\qquad \\text{ always holds}."
},
{
"math_id": 83,
"text": "\\,\\subseteq,\\,"
},
{
"math_id": 84,
"text": "S,"
},
{
"math_id": 85,
"text": "\\varnothing \\subseteq L \\subseteq X"
},
{
"math_id": 86,
"text": "L \\subseteq L \\cup R"
},
{
"math_id": 87,
"text": "L \\cup R"
},
{
"math_id": 88,
"text": "\\,\\subseteq\\,"
},
{
"math_id": 89,
"text": "L \\cap R"
},
{
"math_id": 90,
"text": "\\,\\supseteq.\\,"
},
{
"math_id": 91,
"text": "L \\cap R \\subseteq L"
},
{
"math_id": 92,
"text": "L \\setminus R \\subseteq L"
},
{
"math_id": 93,
"text": "(L \\setminus R) \\cap L = L \\setminus R"
},
{
"math_id": 94,
"text": "L \\cap R = \\{x : (x \\in L) \\land (x \\in R)\\}"
},
{
"math_id": 95,
"text": "L \\land R"
},
{
"math_id": 96,
"text": "L \\cup R = \\{x : (x \\in L) \\lor (x \\in R)\\}"
},
{
"math_id": 97,
"text": "L \\lor R,"
},
{
"math_id": 98,
"text": "\\,\\land\\,"
},
{
"math_id": 99,
"text": "\\,\\lor\\,"
},
{
"math_id": 100,
"text": "\\,\\cap\\,"
},
{
"math_id": 101,
"text": "\\,\\cup.\\,"
},
{
"math_id": 102,
"text": "L \\cup M \\cap R"
},
{
"math_id": 103,
"text": "L \\cup (M \\cap R)"
},
{
"math_id": 104,
"text": "L \\lor M \\land R ~=~ L \\lor (M \\land R)"
},
{
"math_id": 105,
"text": "L \\cup M \\cap R \\cup Z"
},
{
"math_id": 106,
"text": "L \\cup (M \\cap R) \\cup Z"
},
{
"math_id": 107,
"text": "L \\lor M \\land R \\lor Z ~=~ L \\lor (M \\land R) \\lor Z."
},
{
"math_id": 108,
"text": "\\,\\setminus\\,"
},
{
"math_id": 109,
"text": "\\,\\lnot,\\,"
},
{
"math_id": 110,
"text": "L \\setminus R = \\{x : (x \\in L) \\land \\lnot (x \\in R)\\}"
},
{
"math_id": 111,
"text": "L \\land \\lnot R"
},
{
"math_id": 112,
"text": "L \\cup M \\setminus R"
},
{
"math_id": 113,
"text": "L \\cup (M \\setminus R)"
},
{
"math_id": 114,
"text": "L \\lor M \\land \\lnot R"
},
{
"math_id": 115,
"text": "L \\lor (M \\land \\lnot R)."
},
{
"math_id": 116,
"text": "L \\land \\lnot M \\land R"
},
{
"math_id": 117,
"text": "L \\land (\\lnot M) \\land R,"
},
{
"math_id": 118,
"text": "(L \\land (\\lnot M)) \\land R"
},
{
"math_id": 119,
"text": "L \\land ((\\lnot M) \\land R) ~=~ L \\land (R \\land (\\lnot M))"
},
{
"math_id": 120,
"text": "(\\lnot M) \\land R"
},
{
"math_id": 121,
"text": "R \\land (\\lnot M)"
},
{
"math_id": 122,
"text": "L \\setminus M \\cap R"
},
{
"math_id": 123,
"text": "(L \\setminus M) \\cap R = L \\cap (R \\setminus M);"
},
{
"math_id": 124,
"text": "L \\land (\\lnot M) \\land R = (L \\land R) \\land \\lnot M,"
},
{
"math_id": 125,
"text": "(L \\cap R) \\setminus M"
},
{
"math_id": 126,
"text": "(L \\setminus M) \\setminus R \\neq L \\setminus (M \\setminus R),"
},
{
"math_id": 127,
"text": "L \\setminus M \\setminus R"
},
{
"math_id": 128,
"text": "L \\triangle R = \\{x : (x \\in L) \\oplus (x \\in R)\\}"
},
{
"math_id": 129,
"text": "L \\oplus R"
},
{
"math_id": 130,
"text": "\\,\\veebar"
},
{
"math_id": 131,
"text": "\\,\\lnot, \\,\\oplus, \\,\\land, \\,\\lor\\,"
},
{
"math_id": 132,
"text": "\\,\\setminus,\\, \\triangle,\\, \\cap,\\, \\cup."
},
{
"math_id": 133,
"text": "\\,\\oplus\\,"
},
{
"math_id": 134,
"text": "\\,\\triangle\\,"
},
{
"math_id": 135,
"text": "(L \\,\\ast\\, M) \\,\\ast\\, R = L \\,\\ast\\, (M \\,\\ast\\, R)"
},
{
"math_id": 136,
"text": "\\begin{alignat}{5}\n(L \\cup M) \\cup R &\\;=\\;\\;&& L \\cup (M \\cup R) \\\\[1.4ex]\n(L \\cap M) \\cap R &\\;=\\;\\;&& L \\cap (M \\cap R) \\\\[1.4ex]\n(L \\,\\triangle M) \\,\\triangle R &\\;=\\;\\;&& L \\,\\triangle (M \\,\\triangle R) \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 137,
"text": "(L \\,\\setminus\\, M) \\,\\setminus\\, R \\;~~{\\color{red}{\\subseteq}}~~\\; L \\,\\setminus\\, (M \\,\\setminus\\, R)"
},
{
"math_id": 138,
"text": "\\; (L \\setminus M) \\setminus R = L \\setminus (M \\setminus R) \\;"
},
{
"math_id": 139,
"text": "\\; (R \\setminus M) \\setminus L = R \\setminus (M \\setminus L), \\;"
},
{
"math_id": 140,
"text": "\\ast \\text{ and } \\bullet"
},
{
"math_id": 141,
"text": "\\,\\bullet\\,"
},
{
"math_id": 142,
"text": "L \\,\\ast\\, (M \\,\\bullet\\, R) ~=~ (L \\,\\ast\\, M) \\,\\bullet\\, (L \\,\\ast\\,R) \\qquad\\qquad \\text{ for all } L, M, R"
},
{
"math_id": 143,
"text": "(L \\,\\bullet\\, M) \\,\\ast\\, R ~=~ (L \\,\\ast\\, R) \\,\\bullet\\, (M \\,\\ast\\,R) \\qquad\\qquad \\text{ for all } L, M, R."
},
{
"math_id": 144,
"text": "\\,\\bullet\\,.\\,"
},
{
"math_id": 145,
"text": "\\begin{alignat}{9}\n(L \\,\\cap\\, M) \\,\\cup\\, R ~&~~=~~&& (L \\,\\cup\\, R) \\,&&\\cap\\, &&(M \\,\\cup\\, R) \\qquad \n&&\\text{ (Right-distributivity of } \\,\\cup\\, \\text{ over } \\,\\cap\\, \\text{)} \\\\[1.4ex]\n(L \\,\\cup\\, M) \\,\\cup\\, R ~&~~=~~&& (L \\,\\cup\\, R) \\,&&\\cup\\, &&(M \\,\\cup\\, R) \\qquad \n&&\\text{ (Right-distributivity of } \\,\\cup\\, \\text{ over } \\,\\cup\\, \\text{)} \\\\[1.4ex]\n(L \\,\\cup\\, M) \\,\\cap\\, R ~&~~=~~&& (L \\,\\cap\\, R) \\,&&\\cup\\, &&(M \\,\\cap\\, R) \\qquad \n&&\\text{ (Right-distributivity of } \\,\\cap\\, \\text{ over } \\,\\cup\\, \\text{)} \\\\[1.4ex]\n(L \\,\\cap\\, M) \\,\\cap\\, R ~&~~=~~&& (L \\,\\cap\\, R) \\,&&\\cap\\, &&(M \\,\\cap\\, R) \\qquad \n&&\\text{ (Right-distributivity of } \\,\\cap\\, \\text{ over } \\,\\cap\\, \\text{)} \\\\[1.4ex]\n(L \\,\\triangle\\, M) \\,\\cap\\, R ~&~~=~~&& (L \\,\\cap\\, R) \\,&&\\triangle\\, &&(M \\,\\cap\\, R) \\qquad \n&&\\text{ (Right-distributivity of } \\,\\cap\\, \\text{ over } \\,\\triangle\\, \\text{)} \\\\[1.4ex]\n(L \\,\\cap\\, M) \\,\\times\\, R ~&~~=~~&& (L \\,\\times\\, R) \\,&&\\cap\\, &&(M \\,\\times\\, R) \\qquad \n&&\\text{ (Right-distributivity of } \\,\\times\\, \\text{ over } \\,\\cap\\, \\text{)} \\\\[1.4ex]\n(L \\,\\cup\\, M) \\,\\times\\, R ~&~~=~~&& (L \\,\\times\\, R) \\,&&\\cup\\, &&(M \\,\\times\\, R) \\qquad \n&&\\text{ (Right-distributivity of } \\,\\times\\, \\text{ over } \\,\\cup\\, \\text{)} \\\\[1.4ex]\n(L \\,\\setminus\\, M) \\,\\times\\, R ~&~~=~~&& (L \\,\\times\\, R) \\,&&\\setminus\\, &&(M \\,\\times\\, R) \\qquad \n&&\\text{ (Right-distributivity of } \\,\\times\\, \\text{ over } \\,\\setminus\\, \\text{)} \\\\[1.4ex]\n(L \\,\\triangle\\, M) \\,\\times\\, R ~&~~=~~&& (L \\,\\times\\, R) \\,&&\\triangle\\, &&(M \\,\\times\\, R) \\qquad \n&&\\text{ (Right-distributivity of } \\,\\times\\, \\text{ over } \\,\\triangle\\, \\text{)} \\\\[1.4ex]\n(L \\,\\cup\\, M) \\,\\setminus\\, R ~&~~=~~&& (L \\,\\setminus\\, R) \\,&&\\cup\\, &&(M \\,\\setminus\\, R) \\qquad \n&&\\text{ (Right-distributivity of } \\,\\setminus\\, \\text{ over } \\,\\cup\\, \\text{)} \\\\[1.4ex]\n(L \\,\\cap\\, M) \\,\\setminus\\, R ~&~~=~~&& (L \\,\\setminus\\, R) \\,&&\\cap\\, &&(M \\,\\setminus\\, R) \\qquad \n&&\\text{ (Right-distributivity of } \\,\\setminus\\, \\text{ over } \\,\\cap\\, \\text{)} \\\\[1.4ex]\n(L \\,\\triangle\\, M) \\,\\setminus\\, R ~&~~=~~&& (L \\,\\setminus\\, R) &&\\,\\triangle\\, &&(M \\,\\setminus\\, R) \\qquad \n&&\\text{ (Right-distributivity of } \\,\\setminus\\, \\text{ over } \\,\\triangle\\, \\text{)} \\\\[1.4ex]\n(L \\,\\setminus\\, M) \\,\\setminus\\, R ~&~~=~~&& (L \\,\\setminus\\, R) &&\\,\\setminus\\, &&(M \\,\\setminus\\, R) \\qquad \n&&\\text{ (Right-distributivity of } \\,\\setminus\\, \\text{ over } \\,\\setminus\\, \\text{)} \\\\[1.4ex]\n~&~~=~~&&~~\\;~~\\;~~\\;~ L &&\\,\\setminus\\, &&(M \\cup R) \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 146,
"text": "\\begin{alignat}{5}\nL \\cup (M \\cap R) &\\;=\\;\\;&& (L \\cup M) \\cap (L \\cup R) \\qquad\n&&\\text{ (Left-distributivity of } \\,\\cup\\, \\text{ over } \\,\\cap\\, \\text{)} \\\\[1.4ex]\nL \\cup (M \\cup R) &\\;=\\;\\;&& (L \\cup M) \\cup (L \\cup R) \n&&\\text{ (Left-distributivity of } \\,\\cup\\, \\text{ over } \\,\\cup\\, \\text{)} \\\\[1.4ex]\nL \\cap (M \\cup R) &\\;=\\;\\;&& (L \\cap M) \\cup (L \\cap R) \n&&\\text{ (Left-distributivity of } \\,\\cap\\, \\text{ over } \\,\\cup\\, \\text{)} \\\\[1.4ex]\nL \\cap (M \\cap R) &\\;=\\;\\;&& (L \\cap M) \\cap (L \\cap R) \n&&\\text{ (Left-distributivity of } \\,\\cap\\, \\text{ over } \\,\\cap\\, \\text{)} \\\\[1.4ex]\nL \\cap (M \\,\\triangle\\, R) &\\;=\\;\\;&& (L \\cap M) \\,\\triangle\\, (L \\cap R) \n&&\\text{ (Left-distributivity of } \\,\\cap\\, \\text{ over } \\,\\triangle\\, \\text{)} \\\\[1.4ex]\nL \\times (M \\cap R) &\\;=\\;\\;&& (L \\times M) \\cap (L \\times R) \n&&\\text{ (Left-distributivity of } \\,\\times\\, \\text{ over } \\,\\cap\\, \\text{)} \\\\[1.4ex]\nL \\times (M \\cup R) &\\;=\\;\\;&& (L \\times M) \\cup (L \\times R) \n&&\\text{ (Left-distributivity of } \\,\\times\\, \\text{ over } \\,\\cup\\, \\text{)} \\\\[1.4ex]\nL \\times (M \\,\\setminus R) &\\;=\\;\\;&& (L \\times M) \\,\\setminus (L \\times R) \n&&\\text{ (Left-distributivity of } \\,\\times\\, \\text{ over } \\,\\setminus\\, \\text{)} \\\\[1.4ex]\nL \\times (M \\,\\triangle R) &\\;=\\;\\;&& (L \\times M) \\,\\triangle (L \\times R) \n&&\\text{ (Left-distributivity of } \\,\\times\\, \\text{ over } \\,\\triangle\\, \\text{)} \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 147,
"text": "\\begin{alignat}{5}\nL \\,\\cap\\, (M \\,\\triangle\\, R) ~&~~=~~&& (L \\,\\cap\\, M) \\,\\triangle\\, (L \\,\\cap\\, R) ~&&~ \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 148,
"text": "\\begin{alignat}{5}\n(L \\,\\triangle\\, M) \\,\\cap\\, R~&~~=~~&& (L \\,\\cap\\, R) \\,\\triangle\\, (M \\,\\cap\\, R) ~&&~ \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 149,
"text": "\\begin{alignat}{5}\nL \\cup (M \\,\\triangle\\, R) \n~~{\\color{red}{\\supseteq}}~~ \\color{black}{\\,} (L \\cup M) \\,\\triangle\\, (L \\cup R) ~\n&~=~&& (M \\,\\triangle\\, R) \\,\\setminus\\, L \n&~=~&& (M \\,\\setminus\\, L) \\,\\triangle\\, (R \\,\\setminus\\, L) \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 150,
"text": "L \\,\\triangle\\, (M \\,\\triangle\\, R) \n~~{\\color{red}{\\neq}}~~ \\color{black}{\\,} (L \\,\\triangle\\, M) \\,\\triangle\\, (L \\,\\triangle\\, R) \n~=~ M \\,\\triangle\\, R"
},
{
"math_id": 151,
"text": "L \\text{ and } A"
},
{
"math_id": 152,
"text": "A"
},
{
"math_id": 153,
"text": "M \\,\\triangle\\, R"
},
{
"math_id": 154,
"text": "L \\,\\triangle\\, A"
},
{
"math_id": 155,
"text": "\\begin{alignat}{5}\nL \\,\\setminus\\, (M \\,\\setminus\\, R) &~~{\\color{red}{\\supseteq}}~~&& \\color{black}{\\,} (L \\,\\setminus\\, M) \\,\\setminus\\, (L \\,\\setminus\\, R) ~~=~~ L \\cap R \\,\\setminus\\, M \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 156,
"text": "L \\,\\setminus\\, M = L \\,\\cap\\, R,"
},
{
"math_id": 157,
"text": "L \\cap M \\cap R = \\varnothing \\text{ and } L \\setminus M \\subseteq R."
},
{
"math_id": 158,
"text": "L \\,\\setminus\\, (M \\,\\triangle\\, R)"
},
{
"math_id": 159,
"text": "(L \\,\\setminus\\, M) \\,\\triangle\\, (L \\,\\setminus\\, R) = L \\,\\cap\\, (M \\,\\triangle\\, R)"
},
{
"math_id": 160,
"text": "\\varnothing."
},
{
"math_id": 161,
"text": "L \\,\\setminus\\, (M \\,\\triangle\\, R) = \\varnothing"
},
{
"math_id": 162,
"text": "L \\cap M \\cap R = \\varnothing \\text{ and } L \\subseteq M \\cup R."
},
{
"math_id": 163,
"text": "\\begin{alignat}{5}\n(L \\,\\setminus\\, M) \\,\\cap\\, (L \\,\\setminus\\, R) ~~=~~ L \\,\\setminus\\, (M \\,\\cup\\, R) ~&~~{\\color{red}{\\subseteq}}~~&& \\color{black}{\\,} L \\,\\setminus\\, (M \\,\\cap\\, R) ~~=~~ (L \\,\\setminus\\, M) \\,\\cup\\, (L \\,\\setminus\\, R) \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 164,
"text": "{\\color{red}{\\subseteq}}"
},
{
"math_id": 165,
"text": "L \\,\\setminus\\, (M \\,\\cap\\, R) \\;\\subseteq\\; L \\,\\setminus\\, (M \\,\\cup\\, R),"
},
{
"math_id": 166,
"text": "L \\,\\cap\\, M = L \\,\\cap\\, R."
},
{
"math_id": 167,
"text": "\\,\\cup\\,"
},
{
"math_id": 168,
"text": "\\begin{alignat}{5}\nL \\,\\setminus\\, (M \\,\\cup\\, R) ~&~~{\\color{red}{\\subseteq}}~~&& \\color{black}{\\,} (L \\,\\setminus\\, M) \\,\\cup\\, (L \\,\\setminus\\, R) ~~=~~ L \\,\\setminus\\, (M \\,\\cap\\, R) \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 169,
"text": "\\begin{alignat}{5}\nL \\,\\setminus\\, (M \\,\\cap\\, R) ~&~~{\\color{red}{\\supseteq}}~~&& \\color{black}{\\,} (L \\,\\setminus\\, M) \\,\\cap\\, (L \\,\\setminus\\, R) ~~=~~ L \\,\\setminus\\, (M \\,\\cup\\, R) \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 170,
"text": "(L \\setminus M) \\setminus R ~=~ (L \\setminus R) \\setminus M \\qquad \\text{ (Quasi-commutative)}"
},
{
"math_id": 171,
"text": "L \\setminus (M \\setminus R) ~~{\\color{red}{\\neq}}~~ L \\setminus (R \\setminus M)."
},
{
"math_id": 172,
"text": "L \\setminus (M \\setminus R) ~\\subseteq~ L \\setminus (R \\setminus M)"
},
{
"math_id": 173,
"text": "L \\cap R ~\\subseteq~ M"
},
{
"math_id": 174,
"text": "L \\setminus (R \\setminus M) ~=~ L."
},
{
"math_id": 175,
"text": "\\,\\cup, \\,\\cap,"
},
{
"math_id": 176,
"text": "\\triangle,\\,"
},
{
"math_id": 177,
"text": "\\,\\cup, \\,\\cap, \\,\\triangle,"
},
{
"math_id": 178,
"text": "(L \\,\\setminus\\, M) \\,\\setminus\\, R \\;~~{\\color{red}{\\neq}}~~\\; L \\,\\setminus\\, (M \\,\\setminus\\, R)"
},
{
"math_id": 179,
"text": "(L \\,\\setminus\\, M) \\,\\setminus\\, R \\;~~{\\color{red}{\\subseteq}}~~\\; L \\,\\setminus\\, (M \\,\\setminus\\, R)."
},
{
"math_id": 180,
"text": "\\begin{alignat}{4}\n(L \\setminus M) \\setminus R \n&= &&L \\setminus (M \\cup R) \\\\[0.6ex]\n&= (&&L \\setminus R) \\setminus M \\\\[0.6ex]\n&= (&&L \\setminus M) \\cap (L \\setminus R) \\\\[0.6ex]\n&= (&&L \\setminus R) \\setminus M \\\\[0.6ex]\n&= (&&L \\,\\setminus\\, R) \\,\\setminus\\, (M \\,\\setminus\\, R) \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 181,
"text": "\\begin{alignat}{4}\nL \\setminus (M \\setminus R) \n&= (L \\setminus M) \\cup (L \\cap R) \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 182,
"text": "L \\subseteq M \\text{ then } L \\setminus (M \\setminus R) = L \\cap R"
},
{
"math_id": 183,
"text": "L \\setminus (M \\setminus R) \\subseteq (L \\setminus M) \\cup R"
},
{
"math_id": 184,
"text": "R \\subseteq L."
},
{
"math_id": 185,
"text": "\\begin{alignat}{4}\n\\left(L \\setminus M\\right) \\cup R \n&= (L \\cup R) \\setminus (M \\setminus R) \\\\\n&= (L \\setminus (M \\cup R)) \\cup R ~~~~~ \\text{ (the outermost union is disjoint) } \\\\\n\\end{alignat}"
},
{
"math_id": 186,
"text": "\\begin{alignat}{4}\n(L \\setminus M) \\cap R \n&= (&&L \\cap R) \\setminus (M \\cap R) ~~~\\text{ (Distributive law of } \\cap \\text{ over } \\setminus \\text{ )} \\\\\n&= (&&L \\cap R) \\setminus M \\\\\n&= &&L \\cap (R \\setminus M) \\\\\n\\end{alignat}"
},
{
"math_id": 187,
"text": "\\begin{alignat}{4}\n(L \\setminus M) ~\\triangle~ R \n&= (L \\setminus (M \\cup R)) \\cup (R \\setminus L) \\cup (L \\cap M \\cap R) ~~~\\text{ (the three outermost sets are pairwise disjoint) } \\\\\n\\end{alignat}"
},
{
"math_id": 188,
"text": "(L \\,\\setminus M) \\times R = (L \\times R) \\,\\setminus (M \\times R) ~~~~~\\text{ (Distributivity)}"
},
{
"math_id": 189,
"text": "\\begin{alignat}{3}\nL \\setminus (M \\cup R) \n&= (L \\setminus M) &&\\,\\cap\\, (&&L \\setminus R) ~~~~\\text{ (De Morgan's law) } \\\\\n&= (L \\setminus M) &&\\,\\,\\setminus &&R \\\\\n&= (L \\setminus R) &&\\,\\,\\setminus &&M \\\\\n\\end{alignat}"
},
{
"math_id": 190,
"text": "\\begin{alignat}{4}\nL \\setminus (M \\cap R) \n&= (L \\setminus M) \\cup (L \\setminus R) ~~~~\\text{ (De Morgan's law) } \\\\\n\\end{alignat}"
},
{
"math_id": 191,
"text": "L \\,\\setminus\\, (M \\,\\cup\\, R) ~~{\\color{red}{\\subseteq}}~~ \\color{black}{\\,} L \\,\\setminus\\, (M \\,\\cap\\, R)."
},
{
"math_id": 192,
"text": "\\begin{alignat}{4}\nL \\setminus (M ~\\triangle~ R) \n&= (L \\setminus (M \\cup R)) \\cup (L \\cap M \\cap R) ~~~\\text{ (the outermost union is disjoint) } \\\\\n\\end{alignat}"
},
{
"math_id": 193,
"text": "\\begin{alignat}{4}\n(L \\cup M) \\setminus R \n&= (L \\setminus R) \\cup (M \\setminus R) \\\\\n\\end{alignat}"
},
{
"math_id": 194,
"text": "\\begin{alignat}{4}\n(L \\cap M) \\setminus R \n&= (&&L \\setminus R) &&\\cap (M \\setminus R) \\\\\n&= &&L &&\\cap (M \\setminus R) \\\\\n&= &&M &&\\cap (L \\setminus R) \\\\\n\\end{alignat}"
},
{
"math_id": 195,
"text": "\\begin{alignat}{4}\n(L \\,\\triangle\\, M) \\setminus R \n&= (L \\setminus R) ~&&\\triangle~ (M \\setminus R) \\\\\n&= (L \\cup R) ~&&\\triangle~ (M \\cup R) \\\\\n\\end{alignat}"
},
{
"math_id": 196,
"text": "\\begin{alignat}{3}\nL \\cup (M \\setminus R)\n&= && &&L &&\\cup\\; &&(M \\setminus (R \\cup L)) &&~~~\\text{ (the outermost union is disjoint) } \\\\\n&= [&&(&&L \\setminus M) &&\\cup\\; &&(R \\cap L)] \\cup (M \\setminus R) &&~~~\\text{ (the outermost union is disjoint) } \\\\\n&= &&(&&L \\setminus (M \\cup R)) \\;&&\\;\\cup &&(R \\cap L)\\,\\, \\cup (M \\setminus R) &&~~~\\text{ (the three outermost sets are pairwise disjoint) } \\\\\n\\end{alignat}"
},
{
"math_id": 197,
"text": "\\begin{alignat}{4}\nL \\cap (M \\setminus R)\n&= (&&L \\cap M) &&\\setminus (L \\cap R) ~~~\\text{ (Distributive law of } \\cap \\text{ over } \\setminus \\text{ )} \\\\\n&= (&&L \\cap M) &&\\setminus R \\\\\n&= &&M &&\\cap (L \\setminus R) \\\\\n&= (&&L \\setminus R) &&\\cap (M \\setminus R) \\\\\n\\end{alignat}\n"
},
{
"math_id": 198,
"text": "L \\times (M \\,\\setminus R) = (L \\times M) \\,\\setminus (L \\times R) ~~~~~\\text{ (Distributivity)}"
},
{
"math_id": 199,
"text": "(L \\bullet M) \\ast (M \\bullet R)"
},
{
"math_id": 200,
"text": "\\begin{alignat}{9}\n(L \\cup M) &\\,\\cup\\,&& (&&M \\cup R) && \n &&\\;=\\;\\;&& L \\cup M \\cup R \\\\[1.4ex]\n(L \\cup M) &\\,\\cap\\,&& (&&M \\cup R) && \n &&\\;=\\;\\;&& M \\cup (L \\cap R) \\\\[1.4ex]\n(L \\cup M) &\\,\\setminus\\,&& (&&M \\cup R) && \n &&\\;=\\;\\;&& L \\,\\setminus\\, (M \\cup R) \\\\[1.4ex]\n(L \\cup M) &\\,\\triangle\\,&& (&&M \\cup R) && \n &&\\;=\\;\\;&& (L \\,\\setminus\\, (M \\cup R)) \\,\\cup\\, (R \\,\\setminus\\, (L \\cup M)) \\\\[1.4ex]\n&\\,&&\\,&&\\,&& &&\\;=\\;\\;&& (L \\,\\triangle\\, R) \\,\\setminus\\, M \\\\[1.4ex]\n(L \\cap M) &\\,\\cup\\,&& (&&M \\cap R) && \n &&\\;=\\;\\;&& M \\cup (L \\cap R) \\\\[1.4ex]\n(L \\cap M) &\\,\\cap\\,&& (&&M \\cap R) && \n &&\\;=\\;\\;&& L \\cap M \\cap R \\\\[1.4ex]\n(L \\cap M) &\\,\\setminus\\,&& (&&M \\cap R) && \n &&\\;=\\;\\;&& (L \\cap M) \\,\\setminus\\, R \\\\[1.4ex]\n(L \\cap M) &\\,\\triangle\\,&& (&&M \\cap R) && \n &&\\;=\\;\\;&& [(L \\,\\cap M) \\cup (M \\,\\cap R)] \\,\\setminus\\, (L \\,\\cap M \\,\\cap R) \\\\[1.4ex]\n(L \\,\\setminus M) &\\,\\cup\\,&& (&&M \\,\\setminus R) && \n &&\\;=\\;\\;&& (L \\,\\cup M) \\,\\setminus (M \\,\\cap\\, R) \\\\[1.4ex]\n(L \\,\\setminus M) &\\,\\cap\\,&& (&&M \\,\\setminus R) && \n &&\\;=\\;\\;&& \\varnothing \\\\[1.4ex]\n(L \\,\\setminus M) &\\,\\setminus\\,&& (&&M \\,\\setminus R) && \n &&\\;=\\;\\;&& L \\,\\setminus M \\\\[1.4ex]\n(L \\,\\setminus M) &\\,\\triangle\\,&& (&&M \\,\\setminus R) && \n &&\\;=\\;\\;&& (L \\,\\setminus M) \\cup (M \\,\\setminus R) \\\\[1.4ex]\n&\\,&&\\,&&\\,&& &&\\;=\\;\\;&& (L \\,\\cup M) \\setminus (M \\,\\cap R) \\\\[1.4ex]\n(L \\,\\triangle\\, M) &\\,\\cup\\,&& (&&M \\,\\triangle\\, R) && \n &&\\;=\\;\\;&& (L \\,\\cup\\, M \\,\\cup\\, R) \\,\\setminus\\, (L \\,\\cap\\, M \\,\\cap\\, R) \\\\[1.4ex]\n(L \\,\\triangle\\, M) &\\,\\cap\\,&& (&&M \\,\\triangle\\, R) && \n &&\\;=\\;\\;&& ((L \\,\\cap\\, R) \\,\\setminus\\, M) \\,\\cup\\, (M \\,\\setminus\\, (L \\,\\cup\\, R)) \\\\[1.4ex]\n(L \\,\\triangle\\, M) &\\,\\setminus\\,&& (&&M \\,\\triangle\\, R) && \n &&\\;=\\;\\;&& (L \\,\\setminus\\, (M \\,\\cup\\, R)) \\,\\cup\\, ((M \\,\\cap\\, R) \\,\\setminus\\, L) \\\\[1.4ex]\n(L \\,\\triangle\\, M) &\\,\\triangle\\,&& (&&M \\,\\triangle\\, R) && \n &&\\;=\\;\\;&& L \\,\\triangle\\, R \\\\[1.7ex]\n\\end{alignat}"
},
{
"math_id": 201,
"text": "(L \\bullet M) \\ast (R \\,\\setminus\\, M)"
},
{
"math_id": 202,
"text": "\\begin{alignat}{9}\n(L \\cup M) &\\,\\cup\\,&& (&&R \\,\\setminus\\, M) && \n &&\\;=\\;\\;&& L \\cup M \\cup R \\\\[1.4ex]\n(L \\cup M) &\\,\\cap\\,&& (&&R \\,\\setminus\\, M) && \n &&\\;=\\;\\;&& (L \\cap R) \\,\\setminus\\, M \\\\[1.4ex]\n(L \\cup M) &\\,\\setminus\\,&& (&&R \\,\\setminus\\, M) && \n &&\\;=\\;\\;&& M \\cup (L \\,\\setminus\\, R) \\\\[1.4ex]\n(L \\cup M) &\\,\\triangle\\,&& (&&R \\,\\setminus\\, M) && \n &&\\;=\\;\\;&& M \\cup (L \\,\\triangle\\, R) \\\\[1.4ex]\n(L \\cap M) &\\,\\cup\\,&& (&&R \\,\\setminus\\, M) && \n &&\\;=\\;\\;&& [L \\cap (M \\cup R)] \\cup [R \\,\\setminus\\, (L \\cup M)] \\qquad \\text{ (disjoint union)} \\\\[1.4ex]\n&\\,&&\\,&&\\,&& &&\\;=\\;\\;&& (L \\cap M) \\,\\triangle\\, (R \\,\\setminus\\, M) \\\\[1.4ex]\n(L \\cap M) &\\,\\cap\\,&& (&&R \\,\\setminus\\, M) && \n &&\\;=\\;\\;&& \\varnothing \\\\[1.4ex]\n(L \\cap M) &\\,\\setminus\\,&& (&&R \\,\\setminus\\, M) && \n &&\\;=\\;\\;&& L \\cap M \\\\[1.4ex]\n(L \\cap M) &\\,\\triangle\\,&& (&&R \\,\\setminus\\, M) && \n &&\\;=\\;\\;&& (L \\cap M) \\cup (R \\,\\setminus\\, M) \\qquad \\text{ (disjoint union)} \\\\[1.4ex]\n(L \\,\\setminus\\, M) &\\,\\cup\\,&& (&&R \\,\\setminus\\, M) && \n &&\\;=\\;\\;&& L \\cup R \\,\\setminus\\, M \\\\[1.4ex]\n(L \\,\\setminus\\, M) &\\,\\cap\\,&& (&&R \\,\\setminus\\, M) && \n &&\\;=\\;\\;&& (L \\cap R) \\,\\setminus\\, M \\\\[1.4ex]\n(L \\,\\setminus\\, M) &\\,\\setminus\\,&& (&&R \\,\\setminus\\, M) && \n &&\\;=\\;\\;&& L \\,\\setminus\\, (M \\cup R) \\\\[1.4ex]\n(L \\,\\setminus\\, M) &\\,\\triangle\\,&& (&&R \\,\\setminus\\, M) && \n &&\\;=\\;\\;&& (L \\,\\triangle\\, R) \\,\\setminus\\, M \\\\[1.4ex]\n(L \\,\\triangle\\, M) &\\,\\cup\\,&& (&&R \\,\\setminus\\, M) && \n &&\\;=\\;\\;&& (L \\cup M \\cup R) \\,\\setminus\\, (L \\cap M) \\\\[1.4ex]\n(L \\,\\triangle\\, M) &\\,\\cap\\,&& (&&R \\,\\setminus\\, M) && \n &&\\;=\\;\\;&& (L \\cap R) \\,\\setminus\\, M \\\\[1.4ex]\n(L \\,\\triangle\\, M) &\\,\\setminus\\,&& (&&R \\,\\setminus\\, M) && \n &&\\;=\\;\\;&& [L \\,\\setminus\\, (M \\cup R)] \\cup (M \\,\\setminus\\, L) \\qquad \\text{ (disjoint union)} \\\\[1.4ex]\n&\\,&&\\,&&\\,&& &&\\;=\\;\\;&& (L \\,\\triangle\\, M) \\setminus (L \\,\\cap R) \\\\[1.4ex]\n(L \\,\\triangle\\, M) &\\,\\triangle\\,&& (&&R \\,\\setminus\\, M) && \n &&\\;=\\;\\;&& L \\,\\triangle\\, (M \\cup R) \\\\[1.7ex]\n\\end{alignat}"
},
{
"math_id": 203,
"text": "(L \\,\\setminus\\, M) \\ast (L \\,\\setminus\\, R)"
},
{
"math_id": 204,
"text": "\\begin{alignat}{9}\n(L \\,\\setminus M) &\\,\\cup\\,&& (&&L \\,\\setminus R) \n &&\\;=\\;&& L \\,\\setminus\\, (M \\,\\cap\\, R) \\\\[1.4ex]\n(L \\,\\setminus M) &\\,\\cap\\,&& (&&L \\,\\setminus R) \n &&\\;=\\;&& L \\,\\setminus\\, (M \\,\\cup\\, R) \\\\[1.4ex]\n(L \\,\\setminus M) &\\,\\setminus\\,&& (&&L \\,\\setminus R) \n &&\\;=\\;&& (L \\,\\cap\\, R) \\,\\setminus\\, M \\\\[1.4ex]\n(L \\,\\setminus M) &\\,\\triangle\\,&& (&&L \\,\\setminus R) \n &&\\;=\\;&& L \\,\\cap\\, (M \\,\\triangle\\, R) \\\\[1.4ex]\n&\\,&&\\,&&\\, &&\\;=\\;&& (L \\cap M) \\,\\triangle\\, (L \\cap R) \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 205,
"text": "L \\cap M = R \\;\\text{ and }\\; L \\cap R = M \\qquad \\text{ if and only if } \\qquad M = R \\subseteq L."
},
{
"math_id": 206,
"text": "L_1, \\ldots, L_n,"
},
{
"math_id": 207,
"text": "x \\in L_1 \\triangle \\cdots \\triangle L_n"
},
{
"math_id": 208,
"text": "\\left|\\left\\{i : x \\in L_i\\right\\}\\right|"
},
{
"math_id": 209,
"text": "L_1 \\triangle \\cdots \\triangle L_n"
},
{
"math_id": 210,
"text": "\\begin{alignat}{4}\nL \\,\\triangle\\, M \\,\\triangle\\, R \n&= (L \\cap M \\cap R) \\cup \\{x : x \\text{ belongs to exactly one of the sets } L, M, R \\} ~~~~~~ \\text{ (the union is disjoint) } \\\\\n&= [L \\cap M \\cap R] \\cup [L \\setminus (M \\cup R)] \\cup [M \\setminus (L \\cup R)] \\cup [R \\setminus (L \\cup M)] ~~~~~~~~~ \\text{ (all 4 sets enclosed by [ ] are pairwise disjoint) } \\\\\n\\end{alignat}"
},
{
"math_id": 211,
"text": "\\begin{alignat}{9}\n(L \\,\\cap\\, M) \\,\\times\\, R ~&~~=~~&& (L \\,\\times\\, R) \\,&&\\cap\\, &&(M \\,\\times\\, R) \\qquad \n&&\\text{ (Right-distributivity of } \\,\\times\\, \\text{ over } \\,\\cap\\, \\text{)} \\\\[1.4ex]\n(L \\,\\cup\\, M) \\,\\times\\, R ~&~~=~~&& (L \\,\\times\\, R) \\,&&\\cup\\, &&(M \\,\\times\\, R) \\qquad \n&&\\text{ (Right-distributivity of } \\,\\times\\, \\text{ over } \\,\\cup\\, \\text{)} \\\\[1.4ex]\n(L \\,\\setminus\\, M) \\,\\times\\, R ~&~~=~~&& (L \\,\\times\\, R) \\,&&\\setminus\\, &&(M \\,\\times\\, R) \\qquad \n&&\\text{ (Right-distributivity of } \\,\\times\\, \\text{ over } \\,\\setminus\\, \\text{)} \\\\[1.4ex]\n(L \\,\\triangle\\, M) \\,\\times\\, R ~&~~=~~&& (L \\,\\times\\, R) \\,&&\\triangle\\, &&(M \\,\\times\\, R) \\qquad \n&&\\text{ (Right-distributivity of } \\,\\times\\, \\text{ over } \\,\\triangle\\, \\text{)} \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 212,
"text": "\\begin{alignat}{5}\nL \\times (M \\cap R) &\\;=\\;\\;&& (L \\times M) \\cap (L \\times R) \\qquad\n&&\\text{ (Left-distributivity of } \\,\\times\\, \\text{ over } \\,\\cap\\, \\text{)} \\\\[1.4ex]\nL \\times (M \\cup R) &\\;=\\;\\;&& (L \\times M) \\cup (L \\times R) \n&&\\text{ (Left-distributivity of } \\,\\times\\, \\text{ over } \\,\\cup\\, \\text{)} \\\\[1.4ex]\nL \\times (M \\setminus R) &\\;=\\;\\;&& (L \\times M) \\setminus (L \\times R) \n&&\\text{ (Left-distributivity of } \\,\\times\\, \\text{ over } \\,\\setminus\\, \\text{)} \\\\[1.4ex]\nL \\times (M \\triangle R) &\\;=\\;\\;&& (L \\times M) \\triangle (L \\times R) \n&&\\text{ (Left-distributivity of } \\,\\times\\, \\text{ over } \\,\\triangle\\, \\text{)} \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 213,
"text": "L \\times (M \\times R) ~\\color{Red}{\\neq}\\color{Black}{}~ (L \\times M) \\times (L \\times R)"
},
{
"math_id": 214,
"text": "(L \\times M) \\times R ~\\color{Red}{\\neq}\\color{Black}{}~ (L \\times R) \\times (M \\times R)."
},
{
"math_id": 215,
"text": "(L \\times R) \\cap \\left(L_2 \\times R_2\\right) ~=~ \\left(L \\cap L_2\\right) \\times \\left(R \\cap R_2\\right)"
},
{
"math_id": 216,
"text": "(L \\times M \\times R) \\cap \\left(L_2 \\times M_2 \\times R_2\\right) ~=~ \\left(L \\cap L_2\\right) \\times \\left(M \\cap M_2\\right) \\times \\left(R \\cap R_2\\right)"
},
{
"math_id": 217,
"text": "\\begin{alignat}{9}\n\\left(L \\times R\\right) ~\\cup~ \\left(L_2 \\times R_2\\right) \n~&=~ \\left[\\left(L \\setminus L_2\\right) \\times R\\right] ~\\cup~ \\left[\\left(L_2 \\setminus L\\right) \\times R_2\\right] ~\\cup~ \\left[\\left(L \\cap L_2\\right) \\times \\left(R \\cup R_2\\right)\\right] \\\\[0.5ex]\n~&=~ \\left[L \\times \\left(R \\setminus R_2\\right)\\right] ~\\cup~ \\left[L_2 \\times \\left(R_2 \\setminus R\\right)\\right] ~\\cup~ \\left[\\left(L \\cup L_2\\right) \\times \\left(R \\cap R_2\\right)\\right] \\\\\n\\end{alignat}"
},
{
"math_id": 218,
"text": "\\begin{alignat}{9}\n\\left(L \\times R\\right) ~\\setminus~ \\left(L_2 \\times R_2\\right) \n~&=~ \\left[\\left(L \\,\\setminus\\, L_2\\right) \\times R\\right] ~\\cup~ \\left[L \\times \\left(R \\,\\setminus\\, R_2\\right)\\right] \\\\\n\\end{alignat}"
},
{
"math_id": 219,
"text": "(L \\times M \\times R) ~\\setminus~ \\left(L_2 \\times M_2 \\times R_2\\right) \n~=~ \\left[\\left(L \\,\\setminus\\, L_2\\right) \\times M \\times R\\right] ~\\cup~ \\left[L \\times \\left(M \\,\\setminus\\, M_2\\right) \\times R\\right] ~\\cup~ \\left[L \\times M \\times \\left(R \\,\\setminus\\, R_2\\right)\\right]"
},
{
"math_id": 220,
"text": "\\left(L \\,\\setminus\\, L_2\\right) \\times \\left(R \\,\\setminus\\, R_2\\right) ~=~ \\left(L \\times R\\right) \\,\\setminus\\, \\left[\\left(L_2 \\times R\\right) \\cup \\left(L \\times R_2\\right)\\right]"
},
{
"math_id": 221,
"text": "\\left(L \\,\\setminus\\, L_2\\right) \\times \\left(M \\,\\setminus\\, M_2\\right) \\times \\left(R \\,\\setminus\\, R_2\\right) ~=~ \\left(L \\times M \\times R\\right) \\,\\setminus\\, \\left[\\left(L_2 \\times M \\times R\\right) \\cup \\left(L \\times M_2 \\times R\\right) \\cup \\left(L \\times M \\times R_2\\right)\\right]"
},
{
"math_id": 222,
"text": "L \\times \\left(R \\,\\triangle\\, R_2\\right) ~=~ \\left[L \\times \\left(R \\,\\setminus\\, R_2\\right)\\right] \\,\\cup\\, \\left[L \\times \\left(R_2 \\,\\setminus\\, R\\right)\\right]"
},
{
"math_id": 223,
"text": "\\left(L \\,\\triangle\\, L_2\\right) \\times R ~=~ \\left[\\left(L \\,\\setminus\\, L_2\\right) \\times R\\right] \\,\\cup\\, \\left[\\left(L_2 \\,\\setminus\\, L\\right) \\times R\\right]"
},
{
"math_id": 224,
"text": "\\begin{alignat}{4}\n\\left(L \\,\\triangle\\, L_2\\right) \\times \\left(R \\,\\triangle\\, R_2\\right) \n~&=~ && && \\,\\left[\\left(L \\cup L_2\\right) \\times \\left(R \\cup R_2\\right)\\right] \\;\\setminus\\; \\left[\\left(\\left(L \\cap L_2\\right) \\times R\\right) \\;\\cup\\; \\left(L \\times \\left(R \\cap R_2\\right)\\right)\\right] \\\\[0.7ex]\n&=~ & &&& \\,\\left[\\left(L \\,\\setminus\\, L_2\\right) \\times \\left(R_2 \\,\\setminus\\, R\\right)\\right] \\,\\cup\\, \\left[\\left(L_2 \\,\\setminus\\, L\\right) \\times \\left(R_2 \\,\\setminus\\, R\\right)\\right] \\,\\cup\\, \\left[\\left(L \\,\\setminus\\, L_2\\right) \\times \\left(R \\,\\setminus\\, R_2\\right)\\right] \\,\\cup\\, \\left[\\left(L_2 \\,\\setminus\\, L\\right) \\cup \\left(R \\,\\setminus\\, R_2\\right)\\right] \\\\\n\\end{alignat}"
},
{
"math_id": 225,
"text": "\\begin{alignat}{4}\n\\left(L \\,\\triangle\\, L_2\\right) \\times \\left(M \\,\\triangle\\, M_2\\right) \\times \\left(R \\,\\triangle\\, R_2\\right) \n~&=~ \\left[\\left(L \\cup L_2\\right) \\times \\left(M \\cup M_2\\right) \\times \\left(R \\cup R_2\\right)\\right] \\;\\setminus\\; \\left[\\left(\\left(L \\cap L_2\\right) \\times M \\times R\\right) \\;\\cup\\; \\left(L \\times \\left(M \\cap M_2\\right) \\times R\\right) \\;\\cup\\; \\left(L \\times M \\times \\left(R \\cap R_2\\right)\\right)\\right] \\\\\n\\end{alignat}"
},
{
"math_id": 226,
"text": "\\left(L \\,\\triangle\\, L_2\\right) \\times \\left(R \\,\\triangle\\, R_2\\right)"
},
{
"math_id": 227,
"text": "\\left(L \\times R\\right) \\,\\triangle\\, \\left(L_2 \\times R_2\\right)."
},
{
"math_id": 228,
"text": "\\begin{alignat}{4}\n\\left(L \\times R\\right) \\,\\triangle\\, \\left(L_2 \\times R_2\\right)\n~&=~ && \\left(L \\times R\\right) \\cup \\left(L_2 \\times R_2\\right) \\;\\setminus\\; \\left[\\left(L \\cap L_2\\right) \\times \\left(R \\cap R_2\\right)\\right] \\\\[0.7ex]\n\\end{alignat}"
},
{
"math_id": 229,
"text": "\\begin{alignat}{4}\n\\left(L \\times M \\times R\\right) \\,\\triangle\\, \\left(L_2 \\times M_2 \\times R_2\\right)\n~&=~ && \\left(L \\times M \\times R\\right) \\cup \\left(L_2 \\times M_2 \\times R_2\\right) \\;\\setminus\\; \\left[\\left(L \\cap L_2\\right) \\times \\left(M \\cap M_2\\right) \\times \\left(R \\cap R_2\\right)\\right] \\\\[0.7ex]\n\\end{alignat}"
},
{
"math_id": 230,
"text": "\\left(L_i\\right)_{i \\in I},"
},
{
"math_id": 231,
"text": "\\left(R_j\\right)_{j \\in J},"
},
{
"math_id": 232,
"text": "\\left(S_{i,j}\\right)_{(i, j) \\in I \\times J}"
},
{
"math_id": 233,
"text": "I"
},
{
"math_id": 234,
"text": "J,"
},
{
"math_id": 235,
"text": "i \\in I,"
},
{
"math_id": 236,
"text": "L_i"
},
{
"math_id": 237,
"text": "i."
},
{
"math_id": 238,
"text": "L_\\bull,"
},
{
"math_id": 239,
"text": "\\left(L_i\\right)_{i \\in I}"
},
{
"math_id": 240,
"text": "i"
},
{
"math_id": 241,
"text": "\\bullet\\,;"
},
{
"math_id": 242,
"text": "L_\\bull"
},
{
"math_id": 243,
"text": "\\begin{alignat}{4}\nL_\\bull :\\;&& I &&\\;\\to \\;& \\left\\{L_i : i \\in I\\right\\} \\\\[0.3ex]\n && i &&\\;\\mapsto\\;& L_i \\\\\n\\end{alignat}"
},
{
"math_id": 244,
"text": "L_\\bull = \\left(L_i\\right)_{i \\in I}."
},
{
"math_id": 245,
"text": "L_\\bull = \\left(L_i\\right)_{i \\in I}"
},
{
"math_id": 246,
"text": "\\operatorname{Im} L_\\bull ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\left\\{L_i : i \\in I\\right\\}"
},
{
"math_id": 247,
"text": "\\mathcal{B}"
},
{
"math_id": 248,
"text": "(B)_{B \\in \\mathcal{B}},"
},
{
"math_id": 249,
"text": "\\mathcal{B} \\to \\mathcal{B}."
},
{
"math_id": 250,
"text": "i \\neq j"
},
{
"math_id": 251,
"text": "L_i = L_j"
},
{
"math_id": 252,
"text": "I = \\varnothing"
},
{
"math_id": 253,
"text": "\\bigcup_{i \\in \\varnothing} L_i = \\{x ~:~ \\text{ there exists } i \\in \\varnothing \\text{ such that } x \\in L_i\\} = \\varnothing,"
},
{
"math_id": 254,
"text": "\\cup \\mathcal{B}"
},
{
"math_id": 255,
"text": "\\bigcup \\mathcal{B} ~~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\bigcup_{B \\in \\mathcal{B}} B ~~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\{x ~:~ \\text{ there exists } B \\in \\mathcal{B} \\text{ such that } x \\in B\\}."
},
{
"math_id": 256,
"text": "I \\neq \\varnothing"
},
{
"math_id": 257,
"text": "\\mathcal{B} \\neq \\varnothing"
},
{
"math_id": 258,
"text": "\\cap \\mathcal{B}"
},
{
"math_id": 259,
"text": "\\bigcap \\mathcal{B} ~~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\bigcap_{B \\in B} B ~~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\{x ~:~ x \\in B \\text{ for every } B \\in \\mathcal{B}\\} ~=~ \\{x ~:~ \\text{ for all } B, \\text{ if } B \\in \\mathcal{B} \\text{ then } x \\in B\\}."
},
{
"math_id": 260,
"text": "\\bigcap_{i \\in \\varnothing} L_i = \\{x ~:~ \\text{ for all } i, \\text{ if } i \\in \\varnothing \\text{ then } x \\in L_i\\}"
},
{
"math_id": 261,
"text": "i \\in \\varnothing"
},
{
"math_id": 262,
"text": "x \\in L_i"
},
{
"math_id": 263,
"text": "{\\textstyle\\bigcap\\limits_{i \\in \\varnothing}} L_i = \\{x : \\text{ true }\\}"
},
{
"math_id": 264,
"text": "{\\textstyle\\bigcap\\limits_{i \\in \\varnothing}} L_i = \\{x ~:~ x \\in L_i \\text{ for every } i \\in \\varnothing\\} ~=~ X."
},
{
"math_id": 265,
"text": "{\\textstyle\\bigcap\\limits_{i \\in \\varnothing}} L_i"
},
{
"math_id": 266,
"text": "\\bigcup_{\\stackrel{i \\in I,}{j \\in J}} S_{i,j} ~~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\bigcup_{(i, j) \\in I \\times J} S_{i,j}"
},
{
"math_id": 267,
"text": "\\bigcap_{\\stackrel{i \\in I,}{j \\in J}} S_{i,j} ~~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\bigcap_{(i, j) \\in I \\times J} S_{i,j}"
},
{
"math_id": 268,
"text": "\\left(R_j\\right)_{j \\in J}"
},
{
"math_id": 269,
"text": "\\left(L_i \\cap R_j\\right)_{(i, j) \\in I \\times J}"
},
{
"math_id": 270,
"text": "(i, j) \\neq \\left(i_2, j_2\\right)"
},
{
"math_id": 271,
"text": "\\left(L_i \\cap R_j\\right) \\cap \\left(L_{i_2} \\cap R_{j_2}\\right) = \\varnothing"
},
{
"math_id": 272,
"text": "I = J"
},
{
"math_id": 273,
"text": "~\\left(\\bigcup_{i \\in I} L_i\\right) \\cap \\left(\\bigcup_{i \\in I} R_i\\right) ~~\\color{Red}{\\neq}\\color{Black}{}~~ \\bigcup_{i \\in I} \\left(L_i \\cap R_i\\right)~"
},
{
"math_id": 274,
"text": "(i, j) \\in I \\times I:"
},
{
"math_id": 275,
"text": "~\\left(\\bigcup_{i \\in I} L_i\\right) \\cap \\left(\\bigcup_{i \\in I} R_i\\right) ~~=~~ \\bigcup_{\\stackrel{i \\in I,}{j \\in I}} \\left(L_i \\cap R_j\\right).~"
},
{
"math_id": 276,
"text": "J"
},
{
"math_id": 277,
"text": "X \\neq \\varnothing"
},
{
"math_id": 278,
"text": "I = \\{1, 2\\}."
},
{
"math_id": 279,
"text": "L_1 \\colon= R_2 \\colon= X"
},
{
"math_id": 280,
"text": "L_2 \\colon= R_1 \\colon= \\varnothing."
},
{
"math_id": 281,
"text": "X = X \\cap X = \\left(L_1 \\cup L_2\\right) \\cap \\left(R_2 \\cup R_2\\right) = \\left(\\bigcup_{i \\in I} L_i\\right) \\cap \\left(\\bigcup_{i \\in I} R_i\\right) ~\\neq~ \\bigcup_{i \\in I} \\left(L_i \\cap R_i\\right) = \\left(L_1 \\cap R_1\\right) \\cup \\left(L_2 \\cap R_2\\right) = \\varnothing \\cup \\varnothing = \\varnothing."
},
{
"math_id": 282,
"text": "\\varnothing = \\varnothing \\cup \\varnothing = \\left(L_1 \\cap L_2\\right) \\cup \\left(R_2 \\cap R_2\\right) = \\left(\\bigcap_{i \\in I} L_i\\right) \\cup \\left(\\bigcap_{i \\in I} R_i\\right) ~\\neq~ \\bigcap_{i \\in I} \\left(L_i \\cup R_i\\right) = \\left(L_1 \\cup R_1\\right) \\cap \\left(L_2 \\cup R_2\\right) = X \\cap X = X."
},
{
"math_id": 283,
"text": "~\\left(\\bigcap_{i \\in I} L_i\\right) \\cup \\left(\\bigcap_{i \\in I} R_i\\right) ~~\\color{Red}{\\neq}\\color{Black}{}~~ \\bigcap_{i \\in I} \\left(L_i \\cup R_i\\right)~"
},
{
"math_id": 284,
"text": "~\\left(\\bigcap_{i \\in I} L_i\\right) \\cup \\left(\\bigcap_{i \\in I} R_i\\right) ~~=~~ \\bigcap_{\\stackrel{i \\in I,}{j \\in I}} \\left(L_i \\cup R_j\\right).~"
},
{
"math_id": 285,
"text": "\\;{\\textstyle\\bigcup\\limits_{i \\in I}}\\;"
},
{
"math_id": 286,
"text": "\\;{\\textstyle\\bigcap\\limits_{j \\in J}}\\;"
},
{
"math_id": 287,
"text": "\\left(S_{i,j}\\right)_{j \\in J}"
},
{
"math_id": 288,
"text": "j \\in J,"
},
{
"math_id": 289,
"text": "\\left(S_{i,j}\\right)_{i \\in I}"
},
{
"math_id": 290,
"text": "\\left(S_{i,j}\\right)_{(i,j) \\in I \\times J}"
},
{
"math_id": 291,
"text": "\\left(L_i \\setminus R_j\\right)_{(i,j) \\in I \\times J}"
},
{
"math_id": 292,
"text": "S_{i,j} \\colon= L_i \\setminus R_j"
},
{
"math_id": 293,
"text": "\\left(L_i \\setminus R_j\\right)_{(j, i) \\in J \\times I}"
},
{
"math_id": 294,
"text": "\\hat{S}_{j,i} \\colon= L_i \\setminus R_j"
},
{
"math_id": 295,
"text": "J_i"
},
{
"math_id": 296,
"text": "j \\in J_i,"
},
{
"math_id": 297,
"text": "T_{i,j}"
},
{
"math_id": 298,
"text": "\\left(S_{i,j}\\right)_{(i, j) \\in I \\times J},"
},
{
"math_id": 299,
"text": "J_i \\colon= J"
},
{
"math_id": 300,
"text": "i \\in I"
},
{
"math_id": 301,
"text": "T_{i,j} \\colon= S_{i,j}"
},
{
"math_id": 302,
"text": "j \\in J_i = J"
},
{
"math_id": 303,
"text": "{\\textstyle \\prod} J_{\\bull} ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\prod_{i \\in I} J_i"
},
{
"math_id": 304,
"text": "f ~:~ I ~\\to~ {\\textstyle\\bigcup\\limits_{i \\in I}} J_i"
},
{
"math_id": 305,
"text": "f(i) \\in J_i"
},
{
"math_id": 306,
"text": "i \\in I."
},
{
"math_id": 307,
"text": "\\left(f_i\\right)_{i \\in I}"
},
{
"math_id": 308,
"text": "f_i ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ f(i)"
},
{
"math_id": 309,
"text": "f_i;"
},
{
"math_id": 310,
"text": "{\\textstyle \\prod} J_{\\bull}."
},
{
"math_id": 311,
"text": "{\\textstyle \\prod} J_{\\bull} ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ {\\textstyle\\prod\\limits_{i \\in I}} J_i."
},
{
"math_id": 312,
"text": "J_i = J_{i_2}"
},
{
"math_id": 313,
"text": "i, i_2 \\in I,"
},
{
"math_id": 314,
"text": "{\\textstyle \\prod} J_{\\bull} ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ {\\textstyle\\prod\\limits_{i \\in I}} J_i = {\\textstyle\\prod\\limits_{i \\in I}} J = J^I,"
},
{
"math_id": 315,
"text": "f ~:~ I ~\\to~ J."
},
{
"math_id": 316,
"text": "\\bigcap_{i \\in I} \\; \\bigcup_{j \\in J} S_{i,j} = \\bigcup_{f \\in J^I} \\; \\bigcap_{i \\in I} S_{i,f(i)}"
},
{
"math_id": 317,
"text": "\\bigcup_{i \\in I} \\; \\bigcap_{j \\in J} S_{i,j} = \\bigcap_{f \\in J^I} \\; \\bigcup_{i \\in I} S_{i,f(i)}"
},
{
"math_id": 318,
"text": "\\bigcup_{i \\in I} \\; \\bigcap_{j \\in J} S_{i,j}\n~=~ \\bigcap_{f \\in J^I} \\; \\bigcup_{i \\in I} S_{i,f(i)}\n~~\\color{Red}{\\subseteq}\\color{Black}{}~~ \\bigcup_{g \\in I^J} \\; \\bigcap_{j \\in J} S_{g(j),j}\n~=~ \\bigcap_{j \\in J} \\; \\bigcup_{i \\in I} S_{i,j}"
},
{
"math_id": 319,
"text": "f \\text{ and } i"
},
{
"math_id": 320,
"text": "f \\in J^I \\text{ and } i \\in I"
},
{
"math_id": 321,
"text": "S_{i,f(i)}"
},
{
"math_id": 322,
"text": "i \\in I \\text{ and } f(i) \\in f(I) \\subseteq J"
},
{
"math_id": 323,
"text": "g \\text{ and } j"
},
{
"math_id": 324,
"text": "g \\in I^J \\text{ and } j \\in J"
},
{
"math_id": 325,
"text": "S_{g(j),j}"
},
{
"math_id": 326,
"text": "j \\in J \\text{ and } g(j) \\in g(J) \\subseteq I"
},
{
"math_id": 327,
"text": "\\left(C_k\\right)_{k \\in K}"
},
{
"math_id": 328,
"text": "\\left(D_{l}\\right)_{l \\in L},"
},
{
"math_id": 329,
"text": "I \\colon= \\{1, 2\\},"
},
{
"math_id": 330,
"text": "J_1 \\colon= K,"
},
{
"math_id": 331,
"text": "J_2 \\colon= L,"
},
{
"math_id": 332,
"text": "T_{1,k} \\colon= C_k"
},
{
"math_id": 333,
"text": "k \\in J_1"
},
{
"math_id": 334,
"text": "T_{2,l} \\colon= D_l"
},
{
"math_id": 335,
"text": "l \\in J_2."
},
{
"math_id": 336,
"text": "f \\in {\\textstyle \\prod} J_{\\bull} ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ {\\textstyle\\prod\\limits_{i \\in I}} J_i = J_1 \\times J_2 = K \\times L"
},
{
"math_id": 337,
"text": "\\left(f(1), f(2)\\right) \\in K \\times L"
},
{
"math_id": 338,
"text": "(k,l) \\in K \\times L"
},
{
"math_id": 339,
"text": "f_{(k,l)} \\in {\\textstyle \\prod} J_{\\bull}"
},
{
"math_id": 340,
"text": "1 \\mapsto k"
},
{
"math_id": 341,
"text": "2 \\mapsto l;"
},
{
"math_id": 342,
"text": "~\\bigcap_{i \\in I} \\; \\bigcup_{j \\in J_i} T_{i,j} = \\bigcup_{f \\in {\\textstyle \\prod} J_{\\bull}} \\; \\bigcap_{i \\in I} T_{i,f(i)}.~"
},
{
"math_id": 343,
"text": "\\bigcap_{i \\in I} \\; \\bigcup_{j \\in J_i} T_{i,j}\n= \\left(\\bigcup_{j \\in J_1} T_{1,j}\\right) \\cap \\left(\\;\\bigcup_{j \\in J_2} T_{2,j}\\right) \n= \\left(\\bigcup_{k \\in K} T_{1,k}\\right) \\cap \\left(\\;\\bigcup_{l \\in L} T_{2,l}\\right) \n= \\left(\\bigcup_{k \\in K} C_k\\right) \\cap \\left(\\;\\bigcup_{l \\in L} D_l\\right)\n"
},
{
"math_id": 344,
"text": "\\bigcup_{f \\in \\prod J_{\\bull}} \\; \\bigcap_{i \\in I} T_{i,f(i)}\n= \\bigcup_{f \\in \\prod J_{\\bull}} \\left(T_{1,f(1)} \\cap T_{2,f(2)}\\right) \n= \\bigcup_{f \\in \\prod J_{\\bull}} \\left(C_{f(1)} \\cap D_{f(2)}\\right) \n= \\bigcup_{(k,l) \\in K \\times L} \\left(C_k \\cap D_l\\right) \n= \\bigcup_{\\stackrel{k \\in K,}{l \\in L}} \\left(C_k \\cap D_l\\right).\n"
},
{
"math_id": 345,
"text": "\\left(\\bigcup_{k \\in K} C_k\\right) \\cap \\;\\bigcup_{l \\in L} D_l = \\bigcup_{\\stackrel{k \\in K,}{l \\in L}} \\left(C_k \\cap D_l\\right)."
},
{
"math_id": 346,
"text": "\\;\\cup\\;"
},
{
"math_id": 347,
"text": "\\;\\cap\\;"
},
{
"math_id": 348,
"text": "\\bigcup_{i \\in I} \\; \\bigcap_{j \\in J} \\left(L_i \\setminus R_j\\right) ~=~ \\bigcap_{j \\in J} \\; \\bigcup_{i \\in I} \\left(L_i \\setminus R_j\\right)\n\\quad \\text{ and } \\quad \n\\bigcup_{j \\in J} \\; \\bigcap_{i \\in I} \\left(L_i \\setminus R_j\\right) ~=~ \\bigcap_{i \\in I} \\; \\bigcup_{j \\in J} \\left(L_i \\setminus R_j\\right)"
},
{
"math_id": 349,
"text": "\\bigcup_{\\stackrel{i \\in I,}{j \\in J}} S_{i,j} \n~~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\bigcup_{(i, j) \\in I \\times J} S_{i,j} \n~=~ \\bigcup_{i \\in I} \\left(\\bigcup_{j \\in J} S_{i,j}\\right) \n~=~ \\bigcup_{j \\in J} \\left(\\bigcup_{i \\in I} S_{i,j}\\right)"
},
{
"math_id": 350,
"text": "\\bigcap_{\\stackrel{i \\in I,}{j \\in J}} S_{i,j} \n~~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\bigcap_{(i, j) \\in I \\times J} S_{i,j} \n~=~ \\bigcap_{i \\in I} \\left(\\bigcap_{j \\in J} S_{i,j}\\right) \n~=~ \\bigcap_{j \\in J} \\left(\\bigcap_{i \\in I} S_{i,j}\\right)"
},
{
"math_id": 351,
"text": "\\left(\\bigcup_{i \\in I} L_i\\right) \\cup R ~=~ \\bigcup_{i \\in I} \\left(L_i \\cup R\\right)"
},
{
"math_id": 352,
"text": "\\left(\\bigcap_{i \\in I} L_i\\right) \\cap R ~=~ \\bigcap_{i \\in I} \\left(L_i \\cap R\\right)"
},
{
"math_id": 353,
"text": "\\left(x_i\\right)_{i \\in I}"
},
{
"math_id": 354,
"text": "x_i \\in S_{i,j}"
},
{
"math_id": 355,
"text": "j \\in J."
},
{
"math_id": 356,
"text": "\\left(R_i\\right)_{i \\in I}"
},
{
"math_id": 357,
"text": "\\left(\\prod_{i \\in I} L_i\\right) \\cap \\prod_{i \\in I} R_i ~=~ \\prod_{i \\in I} \\left(L_i \\cap R_i\\right)"
},
{
"math_id": 358,
"text": "(L \\times R) \\cap \\left(L_2 \\times R_2\\right) \\cap \\left(L_3 \\times R_3\\right) ~=~ \\left(L \\cap L_2 \\cap L_3\\right) \\times \\left(R \\cap R_2 \\cap R_3\\right)"
},
{
"math_id": 359,
"text": "I \\neq J"
},
{
"math_id": 360,
"text": "\\left({\\textstyle\\prod\\limits_{i \\in I}} L_i\\right) \\cap {\\textstyle\\prod\\limits_{j \\in J}} R_j = \\varnothing."
},
{
"math_id": 361,
"text": "I := \\{1, 2\\}"
},
{
"math_id": 362,
"text": "J := \\{1, 2, 3\\}"
},
{
"math_id": 363,
"text": "\\R"
},
{
"math_id": 364,
"text": "{\\textstyle\\prod\\limits_{i \\in I}} L_i = {\\textstyle\\prod\\limits_{i \\in \\{1, 2\\}}} \\R = \\R^2"
},
{
"math_id": 365,
"text": "{\\textstyle\\prod\\limits_{j \\in J}} R_j = {\\textstyle\\prod\\limits_{j \\in \\{1, 2, 3\\}}} \\R = \\R^3"
},
{
"math_id": 366,
"text": "\\R^2 \\cap \\R^3 = \\varnothing"
},
{
"math_id": 367,
"text": "{\\textstyle\\prod\\limits_{i \\in \\{1, 2\\}}} \\R = \\R^2"
},
{
"math_id": 368,
"text": "{\\textstyle\\prod\\limits_{j \\in \\{1, 2, 3\\}}} \\R = \\R^3"
},
{
"math_id": 369,
"text": "(x, y) \\mapsto (x, y, 0)"
},
{
"math_id": 370,
"text": "{\\textstyle\\prod\\limits_{i \\in I = \\{1, 2\\}}} L_i"
},
{
"math_id": 371,
"text": "{\\textstyle\\prod\\limits_{j \\in J = \\{1, 2, 3\\}}} L_i"
},
{
"math_id": 372,
"text": "L_3 := \\{0\\}."
},
{
"math_id": 373,
"text": "L_1 := \\R^2"
},
{
"math_id": 374,
"text": "L_2, R_1, R_2, \\text{ and } R_3"
},
{
"math_id": 375,
"text": "\\R."
},
{
"math_id": 376,
"text": "{\\textstyle\\prod\\limits_{i \\in I}}L_i = \\R^2 \\times \\R"
},
{
"math_id": 377,
"text": "{\\textstyle\\prod\\limits_{j \\in J}} R_j = \\R \\times \\R \\times \\R,"
},
{
"math_id": 378,
"text": "((x, y), z) \\in \\R^2 \\times \\R"
},
{
"math_id": 379,
"text": "(x, y, z) \\in \\R \\times \\R \\times \\R."
},
{
"math_id": 380,
"text": "\\left({\\textstyle\\prod\\limits_{i \\in I}} L_i\\right) \\cap \\, {\\textstyle\\prod\\limits_{j \\in J}} R_j ~=~ \\R^3."
},
{
"math_id": 381,
"text": "\\begin{alignat}{5}\nL \\times \\left(\\bigcup_{i \\in I} R_i\\right) &\\;=\\;\\;&& \\bigcup_{i \\in I} (L \\times R_i) \\qquad\n&&\\text{ (Left-distributivity of } \\,\\times\\, \\text{ over } \\,\\cup\\, \\text{)} \\\\[1.4ex]\nL \\times \\left(\\bigcap_{i \\in I} R_i\\right) &\\;=\\;\\;&& \\bigcap_{i \\in I} (L \\times R_i) \\qquad\n&&\\text{ (Left-distributivity of } \\,\\times\\, \\text{ over } \\,\\bigcap_{i \\in I}\\, \\text{ when } I \\neq \\varnothing\\, \\text{)} \\\\[1.4ex]\n\\left(\\bigcup_{i \\in I} L_i\\right) \\times R &\\;=\\;\\;&& \\bigcup_{i \\in I} (L_i \\times R) \\qquad\n&&\\text{ (Right-distributivity of } \\,\\times\\, \\text{ over } \\,\\cup\\, \\text{)} \\\\[1.4ex]\n\\left(\\bigcap_{i \\in I} L_i\\right) \\times R &\\;=\\;\\;&& \\bigcap_{i \\in I} (L_i \\times R) \\qquad\n&&\\text{ (Right-distributivity of } \\,\\times\\, \\text{ over } \\,\\bigcap_{i \\in I}\\, \\text{ when } I \\neq \\varnothing\\, \\text{)} \\\\[1.4ex]\n\\end{alignat}"
},
{
"math_id": 382,
"text": "\\bigcup_{j \\in J} \\; \\prod_{i \\in I} S_{i,j} ~~\\color{Red}{\\subseteq}\\color{Black}{}~~ \\prod_{i \\in I} \\; \\bigcup_{j \\in J} S_{i,j} \\qquad \\text{ and } \\qquad \\bigcup_{i \\in I} \\; \\prod_{j \\in J} S_{i,j} ~~\\color{Red}{\\subseteq}\\color{Black}{}~~ \\prod_{j \\in J} \\; \\bigcup_{i \\in I} S_{i,j}"
},
{
"math_id": 383,
"text": "I = J = \\{1, 2\\},"
},
{
"math_id": 384,
"text": "S_{1,1} = S_{2,2} = \\varnothing,"
},
{
"math_id": 385,
"text": "X \\neq \\varnothing,"
},
{
"math_id": 386,
"text": "S_{1,2} = S_{2,1} = X."
},
{
"math_id": 387,
"text": "\\varnothing = \\varnothing \\cup \\varnothing = \\left(\\prod_{i \\in I} S_{i,1}\\right) \\cup \\left(\\prod_{i \\in I} S_{i,2}\\right) = \\bigcup_{j \\in J} \\; \\prod_{i \\in I} S_{i,j} ~~\\color{Red}{\\neq}\\color{Black}{}~~ \\prod_{i \\in I} \\; \\bigcup_{j \\in J} S_{i,j} = \\left(\\bigcup_{j \\in J} S_{1,j}\\right) \\times \\left(\\bigcup_{j \\in J} S_{2,j}\\right) = X \\times X."
},
{
"math_id": 388,
"text": "\\varnothing = \\bigcup_{j \\in J} \\; \\prod_{i \\in I} S_{i,j}"
},
{
"math_id": 389,
"text": "S_{\\bullet,j} = \\left(S_{i,j}\\right)_{i\\in I}"
},
{
"math_id": 390,
"text": "\\prod_{i \\in I} \\; \\bigcup_{j \\in J} S_{i,j} \\neq \\varnothing"
},
{
"math_id": 391,
"text": "S_{i,\\bullet} = \\left(S_{i,j}\\right)_{j\\in J}"
},
{
"math_id": 392,
"text": "\\begin{alignat}{9}\n\\left(\\prod_{i \\in I} L_i\\right) ~\\setminus~ \\prod_{i \\in I}R_i\n~&=~ \\;~ \\bigcup_{j \\in I} \\; ~ \\prod_{i \\in I} \\begin{cases}L_j \\,\\setminus\\, R_j & \\text{ if } i = j \\\\ L_i & \\text{ if } i \\neq j \\\\ \\end{cases} \\\\[0.5ex]\n~&=~ \\;~ \\bigcup_{j \\in I} \\; ~ \\Big[\\left(L_j \\,\\setminus\\, R_j\\right) ~\\times~ \\prod_{\\stackrel{i \\in I,}{j \\neq i}} L_i\\Big] \\\\[0.5ex]\n~&=~ \\bigcup_{\\stackrel{j \\in I,}{L_j \\not\\subseteq R_j}} \\Big[\\left(L_j \\,\\setminus\\, R_j\\right) ~\\times~ \\prod_{\\stackrel{i \\in I,}{j \\neq i}} L_i\\Big] \\\\[0.3ex]\n\\end{alignat}"
},
{
"math_id": 393,
"text": "\\begin{alignat}{9}\n\\left(\\prod_{i \\in I} L_i\\right) ~\\triangle~ \\left(\\prod_{i \\in I} R_i\\right)\n~&=~ \\;~ \\left(\\prod_{i \\in I} L_i\\right) ~\\cup~ \\left(\\prod_{i \\in I} R_i\\right) \\;\\setminus\\; \\prod_{i \\in I} L_i \\cap R_i \\\\[0.5ex]\n\\end{alignat}"
},
{
"math_id": 394,
"text": "f : X \\to Y"
},
{
"math_id": 395,
"text": "A \\subseteq X \\text{ and } C \\subseteq Y."
},
{
"math_id": 396,
"text": "\\operatorname{domain} f"
},
{
"math_id": 397,
"text": "\\operatorname{codomain} f."
},
{
"math_id": 398,
"text": "f"
},
{
"math_id": 399,
"text": "X \\subseteq U"
},
{
"math_id": 400,
"text": "Y \\subseteq V"
},
{
"math_id": 401,
"text": "U"
},
{
"math_id": 402,
"text": "V,"
},
{
"math_id": 403,
"text": "X = \\operatorname{domain} f"
},
{
"math_id": 404,
"text": "Y = \\operatorname{codomain} f"
},
{
"math_id": 405,
"text": "L \\subseteq U"
},
{
"math_id": 406,
"text": "f(L)"
},
{
"math_id": 407,
"text": "f^{-1}(L)"
},
{
"math_id": 408,
"text": "f(L \\cap X)"
},
{
"math_id": 409,
"text": "f^{-1}(L \\cap Y)."
},
{
"math_id": 410,
"text": "f(L) ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\{\\,f(l) ~:~ l \\in L \\cap \\operatorname{domain} f\\,\\}"
},
{
"math_id": 411,
"text": "f^{-1}(L) ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\{\\,x \\in \\operatorname{domain} f ~:~ f(x) \\in L\\,\\}"
},
{
"math_id": 412,
"text": "L = \\{s\\}"
},
{
"math_id": 413,
"text": "s"
},
{
"math_id": 414,
"text": "f^{-1}(s) ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ f^{-1}(\\{s\\}) ~=~ \\{\\,x \\in \\operatorname{domain} f ~:~ f(x) = s\\,\\}."
},
{
"math_id": 415,
"text": "\\operatorname{Im} f"
},
{
"math_id": 416,
"text": "\\operatorname{image} f"
},
{
"math_id": 417,
"text": "f : X \\to Y,"
},
{
"math_id": 418,
"text": "\\operatorname{Im} f ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ f(X) ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ f(\\operatorname{domain} f) ~=~ \\{f(x) ~:~ x \\in \\operatorname{domain} f\\}."
},
{
"math_id": 419,
"text": "i,"
},
{
"math_id": 420,
"text": "L \\setminus R"
},
{
"math_id": 421,
"text": "\\mathcal{L}"
},
{
"math_id": 422,
"text": "\\mathcal{L}, \\mathcal{M},"
},
{
"math_id": 423,
"text": "\\mathcal{R}"
},
{
"math_id": 424,
"text": "\\mathcal{M}"
},
{
"math_id": 425,
"text": "\\mathcal{L} \\;(\\cup)\\; \\mathcal{R} = \\mathcal{R} \\;(\\cup)\\; \\mathcal{L}"
},
{
"math_id": 426,
"text": "\\mathcal{L} \\;(\\cap)\\; \\mathcal{R} = \\mathcal{R} \\;(\\cap)\\; \\mathcal{L}"
},
{
"math_id": 427,
"text": "[\\mathcal{L} \\;(\\cup)\\; \\mathcal{M}] \\;(\\cup)\\; \\mathcal{R} = \\mathcal{L} \\;(\\cup)\\; [\\mathcal{M} \\;(\\cup)\\; \\mathcal{R}]"
},
{
"math_id": 428,
"text": "[\\mathcal{L} \\;(\\cap)\\; \\mathcal{M}] \\;(\\cap)\\; \\mathcal{R} = \\mathcal{L} \\;(\\cap)\\; [\\mathcal{M} \\;(\\cap)\\; \\mathcal{R}]"
},
{
"math_id": 429,
"text": "\\mathcal{L} \\;(\\cup)\\; \\{\\varnothing\\} = \\mathcal{L}"
},
{
"math_id": 430,
"text": "\\mathcal{L} \\;(\\cap)\\; \\{X\\} = \\mathcal{L}"
},
{
"math_id": 431,
"text": "\\mathcal{L} \\;(\\setminus)\\; \\{\\varnothing\\} = \\mathcal{L}"
},
{
"math_id": 432,
"text": "\\mathcal{L} \\;(\\cup)\\; \\{X\\} = \\{X\\} ~~~~\\text{ if } \\mathcal{L} \\neq \\varnothing"
},
{
"math_id": 433,
"text": "\\mathcal{L} \\;(\\cap)\\; \\{\\varnothing\\} = \\{\\varnothing\\} ~~~~\\text{ if } \\mathcal{L} \\neq \\varnothing"
},
{
"math_id": 434,
"text": "\\mathcal{L} \\;(\\cup)\\; \\varnothing = \\varnothing"
},
{
"math_id": 435,
"text": "\\mathcal{L} \\;(\\cap)\\; \\varnothing = \\varnothing"
},
{
"math_id": 436,
"text": "\\mathcal{L} \\;(\\setminus)\\; \\varnothing = \\varnothing"
},
{
"math_id": 437,
"text": "\\varnothing \\;(\\setminus)\\; \\mathcal{R} = \\varnothing"
},
{
"math_id": 438,
"text": "\\wp(L \\cap R) ~=~ \\wp(L) \\cap \\wp(R)"
},
{
"math_id": 439,
"text": "\\wp(L \\cup R) ~=~ \\wp(L) \\ (\\cup)\\ \\wp(R) ~\\supseteq~ \\wp(L) \\cup \\wp(R)."
},
{
"math_id": 440,
"text": "\\wp(s L) ~=~ s \\wp(L)"
},
{
"math_id": 441,
"text": "\\wp(L + R) ~\\supseteq~ \\wp(L) + \\wp(R)."
},
{
"math_id": 442,
"text": "L \\supseteq R_i"
},
{
"math_id": 443,
"text": "R_\\bull"
},
{
"math_id": 444,
"text": "L \\setminus R_\\bull := \\left(L \\setminus R_i\\right)_i"
},
{
"math_id": 445,
"text": "L \\setminus R_\\bull"
},
{
"math_id": 446,
"text": "L \\setminus R."
},
{
"math_id": 447,
"text": "L_\\bull = \\left(L_i\\right)_i"
},
{
"math_id": 448,
"text": "\\left(L_i \\setminus R\\right)_i"
},
{
"math_id": 449,
"text": "S_\\bull = \\left(S_i\\right)_{i = 1}^\\infty"
},
{
"math_id": 450,
"text": "S \\subseteq \\bigcup_i S_i"
},
{
"math_id": 451,
"text": "D_i = \\left(S_i \\cap S\\right) \\setminus \\bigcup_{m=1}^i \\left(S_m \\cap S\\right)."
},
{
"math_id": 452,
"text": "S = \\bigcup_i D_i"
},
{
"math_id": 453,
"text": "D_\\bull := \\left(D_i\\right)_{i=1}^\\infty"
},
{
"math_id": 454,
"text": "S_0 = \\varnothing,"
},
{
"math_id": 455,
"text": "D_i = S_i \\setminus S_{i-1}"
},
{
"math_id": 456,
"text": "i = 1, 2, \\ldots."
},
{
"math_id": 457,
"text": "\\bigcup_i S_i = \\bigcup_i D_i"
},
{
"math_id": 458,
"text": "D_\\bull = \\left(D_i\\right)_{i=1}^\\infty"
}
] |
https://en.wikipedia.org/wiki?curid=65740015
|
65740749
|
Sinai–Ruelle–Bowen measure
|
Invariant measure that displays a less restricted form of ergodicity
In the mathematical discipline of ergodic theory, a Sinai–Ruelle–Bowen (SRB) measure is an invariant measure that behaves similarly to, but is not, an ergodic measure. In order to be ergodic, the time average would need to equal the space average for almost all initial states formula_0, with formula_1 being the phase space. For an SRB measure formula_2, it suffices that the ergodicity condition be valid for initial states in a set formula_3 of positive Lebesgue measure.
The initial ideas pertaining to SRB measures were introduced by Yakov Sinai, David Ruelle and Rufus Bowen in the less general area of Anosov diffeomorphisms and axiom A attractors.
Definition.
Let formula_4 be a map. Then a measure formula_2 defined on formula_1 is an SRB measure if there exists a set formula_5 of positive Lebesgue measure, and a set formula_6 with the same Lebesgue measure, such that:
formula_7
for every formula_8 and every continuous function formula_9.
One can see the SRB measure formula_2 as one that satisfies the conclusions of Birkhoff's ergodic theorem on a smaller set contained in formula_1.
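The defining time-average condition can be illustrated numerically. The following sketch is not part of the source; the logistic map and the observable are chosen purely for illustration. The map T(x) = 4x(1 − x) on [0, 1] has an ergodic invariant measure with density 1/(π√(x(1 − x))), and for Lebesgue-almost-every starting point the Birkhoff time average of a continuous observable approaches the corresponding space average, which is the behaviour an SRB measure guarantees on a set of initial states of positive Lebesgue measure.
```python
# Minimal numerical sketch (illustrative only): Birkhoff time average for
# the logistic map T(x) = 4x(1 - x), compared with the space average taken
# against its invariant density 1/(pi*sqrt(x(1 - x))).

def time_average(phi, x0, n_steps=1_000_000):
    """Average phi along the orbit x0, T(x0), T^2(x0), ..."""
    x, total = x0, 0.0
    for _ in range(n_steps):
        total += phi(x)
        x = 4.0 * x * (1.0 - x)      # one step of the map T
    return total / n_steps

phi = lambda x: x * x                # observable phi(x) = x^2

# The space average of x^2 against the invariant density equals 3/8 = 0.375.
print("time average :", time_average(phi, x0=0.1234))
print("space average:", 3 / 8)
```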
Existence of SRB measures.
The following theorem establishes sufficient conditions for the existence of SRB measures. It considers the case of Axiom A attractors, which is simpler, but it has since been extended to more general scenarios.
Theorem 1: Let formula_10 be a formula_11 diffeomorphism with an Axiom A attractor formula_12. Assume that this attractor is "irreducible", that is, it is not the union of two other sets that are also invariant under formula_13. Then there is a unique Borelian measure formula_2, with formula_14, characterized by the following equivalent statements:
Also, under these conditions, formula_19 is a measure-preserving dynamical system.
It has also been proved that the above are equivalent to stating that formula_2 equals the zero-noise limit stationary distribution of a Markov chain with states formula_20. That is, consider that to each point formula_0 is associated a transition probability formula_21 with noise level formula_22 that measures the amount of uncertainty of the next state, in a way such that:
formula_23
where formula_24 is the Dirac measure. The zero-noise limit is the stationary distribution of this Markov chain when the noise level approaches zero. The importance of this is that it states mathematically that the SRB measure is a "good" approximation to practical cases where small amounts of noise exist, though nothing can be said about the amount of noise that is tolerable.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x \\in X"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "\\mu"
},
{
"math_id": 3,
"text": "B(\\mu)"
},
{
"math_id": 4,
"text": "T:X \\rightarrow X"
},
{
"math_id": 5,
"text": "U \\subset X"
},
{
"math_id": 6,
"text": "V \\subset U"
},
{
"math_id": 7,
"text": "\n \\lim_{n \\rightarrow \\infty} \\frac{1}{n} \\sum_{i = 0}^n \\varphi(T^i x) = \\int_U \\varphi \\, d\\mu\n"
},
{
"math_id": 8,
"text": "x \\in V"
},
{
"math_id": 9,
"text": "\\varphi: U \\rightarrow \\mathbb{R}"
},
{
"math_id": 10,
"text": "T: X \\rightarrow X"
},
{
"math_id": 11,
"text": "C^2"
},
{
"math_id": 12,
"text": "\\mathcal{A} \\subset X"
},
{
"math_id": 13,
"text": "T"
},
{
"math_id": 14,
"text": "\\mu(X) = 1"
},
{
"math_id": 15,
"text": "h(T) = \\int \\log {\\bigl| \\det(D T)|_{E^u} \\bigr|} \\, d\\mu"
},
{
"math_id": 16,
"text": "h"
},
{
"math_id": 17,
"text": "E^u"
},
{
"math_id": 18,
"text": "D"
},
{
"math_id": 19,
"text": "\\left(T, X, \\mathcal{B}(X), \\mu \\right)"
},
{
"math_id": 20,
"text": "T^i(x)"
},
{
"math_id": 21,
"text": "P_\\varepsilon(\\cdot \\mid x)"
},
{
"math_id": 22,
"text": "\\varepsilon"
},
{
"math_id": 23,
"text": "\n \\lim_{\\varepsilon \\rightarrow 0} P_{\\varepsilon}(\\cdot \\mid x) = \\delta_{Tx}(\\cdot),\n"
},
{
"math_id": 24,
"text": "\\delta"
}
] |
https://en.wikipedia.org/wiki?curid=65740749
|
657430
|
Amenable group
|
Locally compact topological group with an invariant averaging operation
In mathematics, an amenable group is a locally compact topological group "G" carrying a kind of averaging operation on bounded functions that is invariant under translation by group elements. The original definition, in terms of a finitely additive measure (or mean) on subsets of "G", was introduced by John von Neumann in 1929 under the German name "messbar" ("measurable" in English) in response to the Banach–Tarski paradox. In 1949 Mahlon M. Day introduced the English translation "amenable", apparently as a pun on "mean".
The critical step in the Banach–Tarski paradox construction is to find inside the rotation group SO(3) a free subgroup on two generators. Amenable groups cannot contain such groups, and do not allow this kind of paradoxical construction.
Amenability has many equivalent definitions. In the field of analysis, the definition is in terms of linear functionals. An intuitive way to understand this version is that the support of the regular representation is the whole space of irreducible representations.
In discrete group theory, where "G" has the discrete topology, a simpler definition is used. In this setting, a group is amenable if one can say what proportion of "G" any given subset takes up. For example, any subgroup of the group of integers formula_0 is generated by some integer formula_1. If formula_2 then the subgroup takes up proportion 0. Otherwise, it takes up formula_3 of the whole group. Even though both the group and the subgroup have infinitely many elements, there is a well-defined sense of proportion.
If a group has a Følner sequence then it is automatically amenable.
Definition for locally compact groups.
Let "G" be a locally compact Hausdorff group. Then it is well known that it possesses a unique, up-to-scale left- (or right-) translation invariant nontrivial ring measure, the Haar measure. (This is a Borel regular measure when "G" is second-countable; there are both left and right measures when "G" is compact.) Consider the Banach space "L"∞("G") of essentially bounded measurable functions within this measure space (which is clearly independent of the scale of the Haar measure).
Definition 1. A linear functional Λ in Hom("L"∞("G"), R) is said to be a mean if Λ has norm 1 and is non-negative, i.e. "f" ≥ 0 a.e. implies Λ("f") ≥ 0.
Definition 2. A mean Λ in Hom("L"∞("G"), R) is said to be left-invariant (respectively right-invariant) if Λ("g"·"f") = Λ("f") for all "g" in "G", and "f" in "L"∞("G") with respect to the left (respectively right) shift action of "g"·"f"(x) = "f"("g"−1"x") (respectively "f"·"g"(x) = "f"("xg"−1)).
Definition 3. A locally compact Hausdorff group is called amenable if it admits a left- (or right-)invariant mean.
By identifying Hom("L"∞("G"), R) with the space of finitely-additive Borel measures which are absolutely continuous with respect to the Haar measure on "G" (a ba space), the terminology becomes more natural: a mean in Hom("L"∞("G"), R) induces a left-invariant, finitely additive Borel measure on "G" which gives the whole group weight 1.
Example.
As an example for compact groups, consider the circle group. The graph of a typical function "f" ≥ 0 looks like a jagged curve above a circle, which can be made by tearing off the end of a paper tube. The linear functional would then average the curve by snipping off some paper from one place and gluing it to another place, creating a flat top again. This is the invariant mean, i.e. the average value formula_4 where formula_5 is Lebesgue measure.
Left-invariance would mean that rotating the tube does not change the height of the flat top at the end. That is, only the shape of the tube matters. Combined with linearity, positivity, and norm-1, this is sufficient to prove that the invariant mean we have constructed is unique.
As an example for locally compact groups, consider the group of integers. A bounded function "f" is simply a bounded function of type formula_6, and its mean is the running average formula_7.
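The running-average mean for the integers can be checked numerically. The sketch below is only illustrative and assumes a bounded function for which the symmetric running average actually converges (a genuine invariant mean on all bounded functions requires a generalized limit); it shows that the partial averages of the indicator of the even integers, and of a translate of it, both approach 1/2.
```python
# Illustrative only: partial running averages of a bounded function on the
# integers, as in the formula above.  A full invariant mean on all bounded
# functions needs a generalized (Banach) limit; here we just use a function
# for which the symmetric averages converge.

def running_average(f, n):
    """(1/(2n+1)) * sum_{k=-n}^{n} f(k)."""
    return sum(f(k) for k in range(-n, n + 1)) / (2 * n + 1)

f = lambda k: 1 if k % 2 == 0 else 0     # indicator of the even integers
shifted = lambda k: f(k - 7)             # the same function translated by 7

for n in (10, 1_000, 100_000):
    print(n, running_average(f, n), running_average(shifted, n))
# Both columns approach 1/2, so the mean is unchanged by translation,
# matching the idea that the even integers take up proportion 1/2.
```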
Equivalent conditions for amenability.
The literature contains a comprehensive account of the conditions on a second countable locally compact group "G" that are equivalent to amenability.
Case of discrete groups.
The definition of amenability is simpler in the case of a discrete group, i.e. a group equipped with the discrete topology.
Definition. A discrete group "G" is amenable if there is a finitely additive measure (also called a mean)—a function that assigns to each subset of "G" a number from 0 to 1—such that the measure is a probability measure (it assigns the whole group "G" measure 1), it is finitely additive (the measure of the union of finitely many disjoint subsets is the sum of their measures), and it is left-invariant (for every subset "A" and every element "g" of "G", the translate "gA" has the same measure as "A").
This definition can be summarized thus: "G" is amenable if it has a finitely-additive left-invariant probability measure. Given a subset "A" of "G", the measure can be thought of as answering the question: what is the probability that a random element of "G" is in "A"?
It is a fact that this definition is equivalent to the definition in terms of "L"∞("G").
Having a measure "μ" on "G" allows us to define integration of bounded functions on "G". Given a bounded function "f": "G" → R, the integral
formula_8
is defined as in Lebesgue integration. (Note that some of the properties of the Lebesgue integral fail here, since our measure is only finitely additive.)
If a group has a left-invariant measure, it automatically has a bi-invariant one. Given a left-invariant measure "μ", the function "μ"−("A") = "μ"("A"−1) is a right-invariant measure. Combining these two gives a bi-invariant measure:
formula_9
The equivalent conditions for amenability also become simpler in the case of a countable discrete group Γ. For such a group the following conditions are equivalent:
Note that A. Connes also proved that the von Neumann group algebra of any connected locally compact group is hyperfinite, so the last condition no longer applies in the case of connected groups.
Amenability is related to spectral theory of certain operators. For instance, the fundamental group of a closed Riemannian manifold is amenable if and only if the bottom of the spectrum of the Laplacian on the L2-space of the universal cover of the manifold is 0.
Examples.
All examples above are elementary amenable. The first class of examples below can be used to exhibit non-elementary amenable examples thanks to the existence of groups of intermediate growth.
Nonexamples.
If a countable discrete group contains a (non-abelian) free subgroup on two generators, then it is not amenable. The converse to this statement is the so-called von Neumann conjecture, which was disproved by Olshanskii in 1980 using his "Tarski monsters". Adyan subsequently showed that free Burnside groups are non-amenable: since they are periodic, they cannot contain the free group on two generators. These groups are finitely generated, but not finitely presented. However, in 2002 Sapir and Olshanskii found finitely presented counterexamples: non-amenable finitely presented groups that have a periodic normal subgroup with quotient the integers.
For finitely generated linear groups, however, the von Neumann conjecture is true by the Tits alternative: every subgroup of GL("n","k") with "k" a field either has a normal solvable subgroup of finite index (and therefore is amenable) or contains the free group on two generators. Although Tits' proof used algebraic geometry, Guivarc'h later found an analytic proof based on V. Oseledets' multiplicative ergodic theorem. Analogues of the Tits alternative have been proved for many other classes of groups, such as fundamental groups of 2-dimensional simplicial complexes of non-positive curvature.
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
"This article incorporates material from Amenable group on PlanetMath, which is licensed under the ."
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "(\\Z, +)"
},
{
"math_id": 1,
"text": "p \\geq 0"
},
{
"math_id": 2,
"text": "p = 0"
},
{
"math_id": 3,
"text": "1/p"
},
{
"math_id": 4,
"text": "\\Lambda(f)=\\int_{\\mathbb{R}/\\mathbb{Z}} f \\ d\\lambda"
},
{
"math_id": 5,
"text": "\\lambda"
},
{
"math_id": 6,
"text": "f: \\Z \\to \\R"
},
{
"math_id": 7,
"text": "\\lim_n \\frac{1}{2n+1} \\sum_{k=-n}^n f(k)"
},
{
"math_id": 8,
"text": "\\int_G f\\,d\\mu"
},
{
"math_id": 9,
"text": "\\nu(A) = \\int_{g\\in G}\\mu \\left (Ag^{-1} \\right ) \\, d\\mu^-."
}
] |
https://en.wikipedia.org/wiki?curid=657430
|
65745795
|
Jennifer Morse (mathematician)
|
Mathematician
Jennifer Leigh Morse is a mathematician specializing in algebraic combinatorics. She is a professor of mathematics at the University of Virginia.
Research.
Morse's interests in algebraic combinatorics include representation theory and applications to statistical physics, symmetric functions, Young tableaux, and formula_0-Schur functions, which are a generalization of Schur polynomials.
Education and career.
Morse earned her Ph.D. in 1999 from the University of California, San Diego. Her dissertation, "Explicit Expansions for Knop-Sahi and Macdonald Polynomials", was supervised by Adriano Garsia.
She has been a faculty member at the University of Pennsylvania, at the University of Miami, and at Drexel University before moving to the University of Virginia in 2017.
Book.
Morse is one of six coauthors of the book "formula_0-Schur Functions and Affine Schubert Calculus" (Fields Institute Monographs 33, Springer, 2014).
Recognition.
Morse was named a Simons Fellow in Mathematics in 2012 and again in 2021. She was elected as a Fellow of the American Mathematical Society in the 2021 class of fellows, "for contributions to algebraic combinatorics and representation theory and service to the mathematical community".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "k"
}
] |
https://en.wikipedia.org/wiki?curid=65745795
|
65753
|
Rational expectations
|
Economics concept
Rational expectations is an economic theory that seeks to infer the macroeconomic consequences of individuals' decisions based on all available knowledge. It assumes that individuals' actions are based on the best available economic theory and information, and concludes that government policies cannot succeed by assuming widespread systematic error by individuals.
History.
The concept of rational expectations was first introduced by John F. Muth in his paper "Rational Expectations and the Theory of Price Movements", published in 1961. Robert Lucas and Thomas Sargent further developed the theory in the 1970s and 1980s; their contributions became seminal works on the topic and were widely used in macroeconomics.
Significant findings.
Muth’s work introduces the concept of rational expectations and discusses its implications for economic theory. He argues that individuals are rational and use all available information to make unbiased, informed predictions about the future. This means that individuals do not make systematic errors in their predictions and that their predictions are not biased by past errors. Muth’s paper goes on to discuss the implications of rational expectations for economic policy. One key implication is that government policies, such as changes in monetary or fiscal policy, may not be as effective if individuals’ expectations are not considered. For example, if individuals expect inflation to increase, they may anticipate that the central bank will raise interest rates to combat inflation, which could lead to higher borrowing costs and slower economic growth. Similarly, if individuals expect a recession, they may reduce their spending and investment, which could lead to a self-fulfilling prophecy.
Lucas’ paper “Expectations and the Neutrality of Money” expands on Muth's work and sheds light on the relationship between rational expectations and monetary policy. The paper argues that when individuals hold rational expectations, changes in the money supply do not have real effects on the economy and the neutrality of money holds. Lucas presents a theoretical model that incorporates rational expectations into an analysis of the effects of changes in the money supply. The model suggests that individuals adjust their expectations in response to changes in the money supply, which eliminates the effect on real variables such as output and employment. He argues that a stable monetary policy that is consistent with individuals' rational expectations will be more effective in promoting economic stability than attempts to manipulate the money supply.
In 1973, Thomas J. Sargent published the article “Rational Expectations, the Real Rate of Interest, and the Natural Rate of Unemployment”, which was an important contribution to the development and application of the concept of rational expectations in economic theory and policy. By assuming individuals are forward-looking and rational, Sargent argues that rational expectations can help explain fluctuations in key economic variables such as the real interest rate and the natural rate of unemployment. He also suggests that the concept of the natural rate of unemployment can be used to help policymakers set macroeconomic policy. This concept suggests that there is a trade-off between unemployment and inflation in the short run, but in the long run, the economy will return to the natural rate of unemployment, which is determined by structural factors such as the skills of the labour force and the efficiency of the labour market. Sargent argues that policymakers should take this concept into account when setting macroeconomic policy, as policies that try to push unemployment below the natural rate will only lead to higher inflation in the long run.
Theory.
The key idea of rational expectations is that individuals make decisions based on all available information, including their own expectations about future events. This implies that individuals are rational and use all available information to make decisions. Another important idea is that individuals adjust their expectations in response to new information. In this way, individuals are assumed to be forward-looking and able to adapt to changing circumstances. They will learn from past trends and experiences to make their best guess of the future.
It is assumed that an individual's predicted outcomes do not differ systematically from the market equilibrium, given that individuals do not make systematic errors when predicting the future.
In an economic model, this is typically modelled by assuming that the expected value of a variable is equal to the expected value predicted by the model. For example, suppose that "P" is the equilibrium price in a simple market, determined by supply and demand. The theory of rational expectations implies that the actual price will only deviate from the expectation if there is an 'information shock' caused by information unforeseeable at the time expectations were formed. In other words, "ex ante" the price is anticipated to equal its rational expectation:
formula_0
formula_1
where formula_2 is the rational expectation and formula_3 is the random error term, which has an expected value of zero, and is independent of formula_2.
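The two displayed equations can be illustrated with a small simulation (the numbers below are purely hypothetical and not taken from any dataset): the realised price is generated as the rational expectation plus an independent zero-mean shock, so the average forecast error is approximately zero and the error is uncorrelated with the expectation.
```python
import numpy as np

# Toy simulation of the equations above (all numbers are invented for
# illustration): the realised price P equals the rational expectation P*
# plus a zero-mean shock that is independent of P*.

rng = np.random.default_rng(0)
n = 100_000
p_star = rng.uniform(8.0, 12.0, size=n)   # rational expectation each period
eps = rng.normal(0.0, 1.0, size=n)        # unforecastable information shock
p = p_star + eps                          # realised price

print("mean forecast error      :", np.mean(p - p_star))                    # ~ 0
print("corr(error, expectation) :", np.corrcoef(p - p_star, p_star)[0, 1])  # ~ 0
```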
Mathematical derivation.
If rational expectations are applied to the Phillips curve analysis, the distinction between the long run and the short run is completely negated; that is, there is no Phillips curve, and there is no trade-off between the inflation rate and the unemployment rate that can be exploited.
The mathematical derivation is as follows:
Rational expectation is consistent with objective mathematical expectation:
formula_4
Mathematical derivation (1)
Assuming that the actual process is known, the rate of inflation depends on previous monetary changes and changes in short-term variables such as X (for example, oil prices):
(1) formula_5
(2) formula_6
(3) formula_7 , formula_8
(4) formula_9
(5) formula_10
Thus, even in the short run, there is no exploitable trade-off between inflation and unemployment. Random shocks, which are completely unpredictable, are the only reason why the unemployment rate deviates from the natural rate.
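The substitution step in derivation (1) can be verified symbolically. The following sketch is an illustration only, not part of the original derivation: it encodes equations (1)–(4) with formula_8 and solves for the unemployment rate, reproducing equation (5).
```python
import sympy as sp

# Symbolic check of derivation (1); a sketch, not part of the source.
# Substituting the rational expectation (2) into the expectations-augmented
# Phillips curve (3) with gamma = 1 and solving for u_t reproduces (5).

alpha, beta, q, z = sp.symbols('alpha beta q z', positive=True)
u, eps, dM, dX = sp.symbols('u varepsilon dM dX')

inflation = q*dM + z*dX + eps           # (1): actual inflation
expected = q*dM + z*dX                  # (2): rational expectation of inflation
phillips = sp.Eq(inflation, alpha - beta*u + expected)   # (3)/(4) with gamma = 1

print(sp.solve(phillips, u))            # [(alpha - varepsilon)/beta], i.e. (5)
```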
Mathematical derivation (2)
Even if the actual rate of inflation is dependent on current monetary changes, the public can make rational expectations as long as they know how monetary policy is being decided:
(1) formula_11
(2) formula_12
(3) formula_13
(4) formula_14
(5) formula_15
The conclusion is essentially the same: random shocks that are completely unpredictable are the only thing that can cause the unemployment rate to deviate from the natural rate.
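An analogous symbolic check for derivation (2), again only an illustrative sketch, confirms that the unemployment rate in equation (5) depends only on the unpredictable shocks.
```python
import sympy as sp

# Analogous symbolic check for derivation (2); again only a sketch.
# With money growth following (2), unemployment depends only on the
# unpredictable shocks mu_t and epsilon_t, as in (5).

alpha, beta, q, z, g = sp.symbols('alpha beta q z g', positive=True)
u, eps, mu, dM1, dX1 = sp.symbols('u varepsilon mu dM1 dX1')

inflation = q*(g*dM1 + mu) + z*dX1 + eps    # (1) with (2) substituted, i.e. (3)
expected = q*g*dM1 + z*dX1                  # (4): rational expectation
phillips = sp.Eq(inflation, alpha - beta*u + expected)

print(sp.solve(phillips, u))                # [(alpha - q*mu - varepsilon)/beta], i.e. (5)
```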
Implications.
Rational expectations theories were developed in response to perceived flaws in theories based on adaptive expectations. Under adaptive expectations, expectations of the future value of an economic variable are based on past values. For example, it assumes that individuals predict inflation by looking at historical inflation data. Under adaptive expectations, if the economy suffers from a prolonged period of rising inflation, people are assumed to always underestimate inflation. Many economists suggested that it was an unrealistic and irrational assumption, as they believe that rational individuals will learn from past experiences and trends and adjust their predictions accordingly.
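The contrast with adaptive expectations can be made concrete with a toy simulation (all numbers are illustrative and not from any dataset): when inflation rises steadily, an adaptive forecaster who updates only from past values underestimates inflation in every period, whereas a rational forecaster's errors average to zero.
```python
import numpy as np

# Toy comparison of adaptive and rational expectations (illustrative only):
# with a prolonged rise in inflation, an adaptive forecast lags behind and
# underestimates inflation every period, while rational forecast errors are
# unpredictable noise with zero mean.

rng = np.random.default_rng(1)
periods = 40
inflation = 2.0 + 0.25 * np.arange(periods)    # steadily rising inflation path

lam = 0.5                                      # adaptive adjustment speed
adaptive = np.zeros(periods)                   # adaptive forecast, starting at 0
for t in range(1, periods):
    adaptive[t] = adaptive[t-1] + lam * (inflation[t-1] - adaptive[t-1])

adaptive_error = inflation - adaptive          # stays positive: systematic bias
rational_error = rng.normal(0.0, 0.2, periods) # rational error is just noise

print("mean adaptive forecast error:", adaptive_error[5:].mean())   # > 0
print("mean rational forecast error:", rational_error[5:].mean())   # ~ 0
```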
The rational expectations hypothesis has been used to support conclusions about economic policymaking. An example is the policy ineffectiveness proposition developed by Thomas Sargent and Neil Wallace. If the Federal Reserve attempts to lower unemployment through expansionary monetary policy, economic agents will anticipate the effects of the change of policy and raise their expectations of future inflation accordingly. This will counteract the expansionary effect of the increased money supply, suggesting that the government can only increase the inflation rate but not employment.
If agents do not form rational expectations, or if prices are not completely flexible, then discretionary and completely anticipated economic policy actions can trigger real changes.
Criticism.
While the rational expectations theory has been widely influential in macroeconomic analysis, it has also been subject to criticism:
Unrealistic assumptions: The theory assumes that individuals have perfect information and can process it without error. This is unlikely to be the case, due to limited information available and human error.
Limited empirical support: While there is some evidence that individuals do incorporate expectations into their decision-making, it is unclear whether they do so in the way predicted by the rational expectations theory.
Misspecification of models: The rational expectations theory assumes that individuals have a common understanding of the model used to make predictions. However, if the model is misspecified, this can lead to incorrect predictions.
Inability to explain certain phenomena: The theory is also criticised for its inability to explain certain phenomena, such as bubbles and crashes in financial markets.
Lack of attention to distributional effects: Critics argue that the rational expectations theory focuses too much on aggregate outcomes and does not pay enough attention to the distributional effects of economic policies.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P=P^*+\\epsilon"
},
{
"math_id": 1,
"text": "E[P]=P^*"
},
{
"math_id": 2,
"text": "P^*"
},
{
"math_id": 3,
"text": "\\epsilon"
},
{
"math_id": 4,
"text": "E\\dot{P}_t=\\dot{P}_t+\\varepsilon_t"
},
{
"math_id": 5,
"text": "\\dot{P}=q\\dot M_{t-1}+z\\dot{X}_{t-1}+\\varepsilon_t"
},
{
"math_id": 6,
"text": "E\\dot{P}_t=q\\dot M_{t-1}+z\\dot X_{t-1}"
},
{
"math_id": 7,
"text": "\\dot{P}_t=\\alpha-\\beta u_t+\\gamma E_{t-1}(\\dot{P}_t)"
},
{
"math_id": 8,
"text": "\\gamma=1"
},
{
"math_id": 9,
"text": "\\alpha-\\beta u_t+q\\dot M_{t-1}+z\\dot X_{t-1}=q\\dot M_{t-1}+z\\dot{X}_{t-1}+\\varepsilon_t"
},
{
"math_id": 10,
"text": "u_t=\\frac{\\alpha-\\epsilon_t}{\\beta}"
},
{
"math_id": 11,
"text": "\\dot{P}_t=q\\dot{M}_t+z\\dot{X}_{t-1}+\\varepsilon_t"
},
{
"math_id": 12,
"text": "\\dot{M}_t=g\\dot{M}_{t-1}+\\mu_t"
},
{
"math_id": 13,
"text": "\\dot{P}_t=qg\\dot{M}_{t-1}+z\\dot{X}_{t-1}+q\\mu_t+\\varepsilon_{t}"
},
{
"math_id": 14,
"text": "E\\dot{P}=qg\\dot{M}_{t-1}+z\\dot{X}_{t-1}"
},
{
"math_id": 15,
"text": "u_t=\\frac{\\alpha-q\\mu_t-\\varepsilon_t}{\\beta}"
}
] |
https://en.wikipedia.org/wiki?curid=65753
|
65762043
|
Statistics of the COVID-19 pandemic in the United Kingdom
|
Statistics article
This article presents official statistics gathered during the COVID-19 pandemic in the United Kingdom.
The official daily report from the Department of Health and Social Care (DHSC) counts those who died within 28 days of testing positive for coronavirus. It "could be the major cause, a contributory factor or simply present when they are dying of something else". From 29 April 2020, the official figures include all coronavirus-positive deaths in the UK, wherever they happened. Before then, the official daily toll included only hospital deaths in England, but included all coronavirus-positive deaths in the rest of the UK wherever they happened, if known to public health agencies. There may be a delay between a death and it entering official statistics so families can be informed; this delay is usually a few days, but can be longer.
The Office for National Statistics (ONS) issues a weekly report covering the four countries, which counts all deaths where coronavirus was mentioned on the death certificate, not necessarily as the main cause of death. As of 21 2021, the total of registered deaths mentioning COVID-19 up till 10 September was 160,374, comprising 146,380 deaths for England, 8,129 for Wales, 10,688 for Scotland and 3,306 for Northern Ireland. In addition, 184 non-UK residents died in England and Wales. This incorporates data from the National Records of Scotland and the Northern Ireland Statistics and Research Agency. This figure is higher because it also counts deaths where no test was done. The ONS has analysed death certificates for England and Wales to the end of 2020 and shown that 91% of deaths which mention COVID-19 state this as the main cause of death (compared with 18% for flu and pneumonia). The end of free mass testing in April 2022 greatly reduced the number of tests taken and may affect the number of cases, although ONS statistics have continued being collected.
Details.
As of 2020, the death rate across the UK from COVID-19 was 592 per million population. The death rate varied greatly by age and healthiness. More than 90% of deaths were among the most vulnerable: those with underlying illnesses and the over-60s. COVID-19 deaths are "remarkably uncommon" among the least vulnerable: those under 65 and with no underlying illnesses.
There was also large regional variation in the pandemic's severity. The outbreak in London had the highest number and highest rate of infections. England was the UK country with the highest recorded death rate per capita, followed by Wales and then Scotland, while Northern Ireland has the lowest per capita.
On 22 April 2020, the "Financial Times" estimated that 41,000 may have died by that date, by extrapolating the ONS data and counting all deaths above the average for the time of year. The World Health Organisation cautioned on 23 April that up to half of coronavirus deaths in Europe were among care home residents. The Chief Medical Officer for England warned that even the ONS figures on coronavirus deaths in care homes are likely to be "an underestimate" and said he is "sure we will see a high mortality rate sadly in care homes, because this is a very, very vulnerable group". On 28 April, Health Secretary Matt Hancock said the number of coronavirus-linked deaths in care homes would be announced as part of the daily report, instead of weekly. By 7 May 2020, the epidemic was concentrated in hospitals and care homes, with the infection rate being higher in care homes than in the community. By 28 May, the "Financial Times" estimate of 'excess deaths', the increase over the figure expected for the time of year, had increased to 59,537 since 20 March.
"The Guardian" wrote in May 2020 that across the UK around 8,000 more people had died in their homes since the start of the pandemic, when compared to normal times. Of that total around 80% of the people according to their death certificates, died from non COVID-19 illnesses. The statistics additionally showed a drop in non COVID-19 deaths in hospitals, leading many to think that people who normally would have been admitted were avoiding hospitals. NHS England said that between 10 and 20% of people who were admitted to hospital for other reasons contracted coronavirus during their stay.
The number of cases in the table represents laboratory-confirmed cases only. The UK Government's Chief Scientific Adviser, Patrick Vallance, says it is likely that other cases are not included in these figures.
In the week ending 19 June 2020, registered deaths fell below the average for the previous five years for the first time since mid-March. The total number of excess deaths in the UK since the start of the outbreak is just over 65,000.
On 12 August 2020, the UK death toll was reduced by more than 5,000, after a review of how deaths are counted in England.
In the latter half of August, testing increased from 2.3 to 2.6 tests per thousand population per day.
2,460 new cases in the UK were reported on Tuesday 8 September 2020. This number was approximately double what it had been a fortnight previously and the daily case number further doubled to 4,926 a fortnight later, on 22 September 2020. On 18 September, the COVID Symptom Study estimated the formula_0 value to be above 1 in each of England, Scotland and Wales, with a value of 1.4 for England meaning that cases were doubling every seven days.
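The doubling time implied by the case counts quoted above can be computed directly; the short sketch below uses only those two published figures and standard exponential-growth arithmetic.
```python
import math

# Doubling-time arithmetic implied by the case counts quoted above:
# 2,460 new cases reported on 8 September 2020 and 4,926 on 22 September 2020.

n1, n2, days_apart = 2_460, 4_926, 14
doubling_time = days_apart * math.log(2) / math.log(n2 / n1)
print(f"doubling time ≈ {doubling_time:.1f} days")
# ≈ 14 days: cases roughly doubled over the fortnight, consistent with an
# epidemic growing with a reproduction number above 1.
```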
Total cases and deaths.
Long Covid.
Roughly 1.3 million UK people have "long Covid", symptoms lasting over four weeks following initial infection, according to an Office for National Statistics survey.
The ONS survey, during four weeks in November and December 2021, claims, of those with long Covid:
As with previous analyses, roughly 20% said "their symptoms meant their ability to do day-to-day activities had been limited a lot."
And patients most likely to develop long Covid are:
Dr David Strain of the University of Exeter said, "The stark warning here is that, based on this, in the previous waves, over 800,000 people have their day-to-day activities significantly affected over three months after catching Covid and nearly a quarter of a million report this has a dramatic impact on their quality of life. As we continue to see case numbers of Omicron rise, we must be wary that our reliance purely on hospitalisations and death as a measure of the risk from Covid could grossly underestimate the public-health impact of our current Covid strategy."
New cases.
New cases by day reported.
The figures are as reported daily at "coronavirus.data.gov.uk".
From the week of 21 February 2022, the UK Health Security Agency stopped publishing dashboard updates at weekends. Figures for Saturday and Sunday are now combined with Monday's figures. The source stopped reporting numbers after May 2022.
Daily cases, 2022 (NB scaled to a maximum of 240,000 per time period)
Daily cases, 2021 (NB scaled to a maximum of 200,000 per day)
Daily cases, 2020 (NB scaled to a maximum of 60,000 per day)
Notes
Details of the periods with corrected case numbers.
On 3 October, the UK Government Dashboard "GOV.UK Coronavirus (COVID-19) in the UK" issued the following note: Due to a technical issue, which has now been resolved, there has been a delay in publishing a number of COVID-19 cases to the dashboard in England. This means the total reported over the coming days will include some additional cases from the period between 24 September and 1 October, increasing the number of cases reported.
On 4 October, the UK Government Dashboard "GOV.UK Coronavirus (COVID-19) in the UK" issued the following note:
An issue was identified overnight on Friday 2 October in the automated process that transfers positive cases data to PHE. It has now been resolved.
The cases by publish date for 3 and 4 October include 15,841 additional cases with specimen dates between 25 September and 2 October — they are therefore artificially high for England and the UK.
After correction as calculated by the BBC, the case numbers should read from 25 September as below, showing a trend (apart from the 28 September and 4 October figures) which is subsequently maintained:
On 16 December Public Health Wales announced that there had been a delay in transferring data from the Lighthouse Labs which had resulted in under-reporting over the preceding week of approximately 11,000 positive tests. The 'missing' numbers were reported instead on 16 December. While the Wales numbers were a relatively small proportion of the UK total, this nevertheless affected the day-to-day accuracy of the case numbers in this period, though not the cumulative totals afterwards: the affected dates are marked in the graph above with a letter 'x'.
New cases by week reported.
"Number of people who have had a lab-confirmed positive test result"
From the week of 21 February 2022, the UK Health Security Agency stopped publishing dashboard updates at weekends. Figures for Saturday and Sunday are now combined with Monday's figures. Hence, the period displayed beginning 20 February 2022 contains only six days' figures, after which full seven-day totals commence with the period beginning 26 February 2022. Easter 2022 dates necessarily contain a 6-day and an 8-day period.
By week (2022)
By week (2020-2021)
' * ' Values for these dates are questionable in light of the announcements of 3 & 4 October, see New cases by day reported
The values in the above graph are not directly comparable between different time periods because they were measured under different testing rates. It is therefore vital to consult Test positivity rate and New daily tests.
Numbers of deaths.
New deaths by week and day reported.
"Deaths of people within 28 days of a positive test result"
By week (2022) NB Scaled to maximum of 2,000
By week (2020-2021) NB Scaled to maximum of 9,000
Daily numbers (2022) - scaled with a maximum of 600 per day
From the week of 21 February 2022, the UK Health Security Agency stopped publishing dashboard updates at weekends. Figures for Saturday and Sunday are now combined with Monday's figures.
Daily numbers (2021) - scaled with a maximum of 2,000 per day
Daily numbers (2020) - scaled with a maximum of 1,200 per day
Daily deaths within 28 days of positive test, listed by date reported.
Daily number of deaths within 28 days of positive test, by date of death.
Test positivity rate.
The UK's test positivity rate, every seven days from 7 April 2020 until 14 December 2021. This is the percentage of tests that were positive out of all tests made on the day. Because testing rates vary over time, and can vary greatly between countries, the positivity rate is a key metric for measuring the pandemic. According to the World Health Organization, a positive rate of less than 5% for at least two weeks is one indicator the epidemic is under control in a country.
Test availability also varies over time, so the very high rates seen in spring 2020 are probably an over-estimate of positivity.
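The positivity-rate calculation described here is straightforward; the sketch below uses made-up daily figures purely to illustrate the arithmetic and the WHO 5% rule of thumb.
```python
# Illustration of the positivity-rate calculation described above.
# The daily figures below are made up purely to show the arithmetic.

daily = [
    # (date, positive tests, total tests)
    ("2020-09-01", 1_200, 170_000),
    ("2020-09-02", 1_450, 180_000),
    ("2020-09-03", 1_600, 175_000),
]

for date, positive, total in daily:
    rate = 100.0 * positive / total          # percentage of tests that were positive
    flag = "below" if rate < 5 else "above"
    print(f"{date}: positivity {rate:.2f}% ({flag} the WHO 5% guide)")
```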
New daily tests.
New daily tests per 1000 population, smoothed, UK from April 2020 to January 2021 (chart):
All-cause deaths.
Comparison of 2020 (England and Wales) with average death rates and the 2014–15 flu season.
<templatestyles src="Legend/styles.css" /> Average deaths per week 2010–2019 excluding 'flu year q4,2014-q3,2015
<templatestyles src="Legend/styles.css" /> deaths per week, 2014 quarter 4
<templatestyles src="Legend/styles.css" /> deaths per week, 2015 quarters 1–3
<templatestyles src="Legend/styles.css" /> deaths per week, 2020
<templatestyles src="Legend/styles.css" /> deaths per week, 2021
Note: Average deaths per week are presented using the years 2010–2019 but excluding the recent year with particularly high incidence of 'flu, q4,2014-q3,2015; deaths per week 2020 covers weeks 1–42 inclusive. Data downloaded from mortality.org. The pronounced zigzags typically correspond to holiday periods when there may be a lag in the reporting of some deaths.
Comparison of numbers of deaths for all ages, second quarter.
Excess deaths in the UK in 2020 to date have occurred mainly in the second quarter of the year. For this period, in England and Wales there were 49% more deaths than for the average of the preceding 10 years. The bar chart below shows all-cause deaths in England and Wales in quarter 2 (weeks 14–26, inclusive), year by year, based on mortality.org data, stmf.csv:
Mortality.org indicates the data for 2020 to be preliminary. The above is not adjusted by population size.
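The excess-deaths comparison used in this section amounts to summing weekly deaths over weeks 14–26 and expressing the result as a percentage change against the average of the same weeks in the baseline years. The sketch below illustrates that arithmetic with placeholder weekly counts, not mortality.org values.
```python
import numpy as np

# Illustration of the excess-deaths comparison (weekly counts below are
# invented placeholders, not mortality.org values): sum deaths over weeks
# 14-26 for the year of interest and express the result as a percentage
# change against the mean of the same weeks in the baseline years.

weeks = range(14, 27)                                   # Q2: weeks 14-26 inclusive
baseline_years = {y: {w: 9_000 + 10 * w for w in weeks}
                  for y in range(2010, 2020)}           # stand-in weekly counts
year_2020 = {w: 14_000 for w in weeks}                  # stand-in 2020 counts

baseline_q2 = np.mean([sum(counts[w] for w in weeks)
                       for counts in baseline_years.values()])
q2_2020 = sum(year_2020[w] for w in weeks)

print(f"excess deaths in Q2: {100.0 * (q2_2020 - baseline_q2) / baseline_q2:.1f}%")
```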
All-cause deaths for all ages.
All-cause deaths in England and Wales in weeks 1–33, year by year, based on mortality.org data, stmf.csv:
mortality.org indicates the data for 2020 to be preliminary; above, the last two weeks available from mortality.org were excluded to prevent the worst effect of registration delay. The above is not adjusted by population size.
All-cause deaths in Scotland in weeks 1–30, year by year, based on mortality.org data, stmf.csv:
Note that a similar effect is seen to that in England and Wales, namely most excess deaths occurred in the second quarter of the year. Note also that mortality.org indicates the data for 2020 to be preliminary; above, the last two weeks available from mortality.org were excluded to prevent the worst effect of registration delay. The above is not adjusted by population size.
All-cause deaths for ages 0–14.
All-cause deaths in England and Wales in weeks 1–33, ages 0–14, year by year, based on mortality.org data, stmf.csv:
Note that COVID-19 has generally been found to have very low mortality rates for the very young. The source, mortality.org, indicates the data for 2020 to be preliminary; above, the last two weeks available from mortality.org were excluded to prevent the worst effect of registration delay. The above is not adjusted by population size.
All-cause deaths in Scotland in weeks 1–30, ages 0–14, year by year, based on mortality.org data, stmf.csv:
Note that the smaller population of Scotland compared with England and Wales results in a 'noisier' data set due to the relatively random nature of the events recorded here. The source, mortality.org indicates the data for 2020 to be preliminary; above, the last two weeks available from mortality.org were excluded to prevent the worst effect of registration delay. The above is not adjusted by population size.
All-cause deaths for ages 65–74.
All-cause deaths in England and Wales in weeks 1–33, ages 65–74, year by year, based on mortality.org data, stmf.csv:
Note that the period for 2020 is unique in that it includes deaths from COVID-19, but with very few of these prior to week 14.
The source, mortality.org, indicates the data for 2020 to be preliminary; above, the last two weeks available from mortality.org were excluded to prevent the worst effect of registration delay. The above is not adjusted by population size.
Hospitalisations.
Daily hospital admissions of COVID-positive patients for England, from the NHS "Statistics » COVID-19 Hospital Activity" page, with 7-day moving averages, based on two spreadsheets in the same source, one historic and one more current; note, however, that the data seem to be inconsistent between these two ('admissions' vs. 'admissions... and diagnoses in hospital').
Source for figures from March:
Daily covid-positive hospitalisations for England, August–December 2020.
August and September figures from a separate file at the same webpage:
After 25 October 2020, hospitalisations rose to a high of 1,711 on 11 November, but afterwards drifted gradually down, reaching 1,208 on 28 November, before climbing again and exceeding 1,800 on 17 December 2020; the rate peaked at around 4,000 per day in January 2021.
As of December 2021, similar data for the UK (and constituent nations) is available "here".
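The 7-day moving average referred to above is a simple rolling mean of the daily admission counts; the sketch below illustrates the calculation with placeholder figures rather than NHS data.
```python
import numpy as np

# Illustration of the 7-day moving average applied to daily admissions.
# The admission counts below are placeholders, not NHS figures.

admissions = np.array([950, 1010, 1080, 1150, 1220, 1300, 1390,
                       1460, 1530, 1600, 1660, 1710, 1690, 1650])

window = 7
moving_avg = np.convolve(admissions, np.ones(window) / window, mode="valid")

for i, value in enumerate(moving_avg):
    # each value averages days i+1 .. i+7 of the daily series
    print(f"7-day average ending day {i + window}: {value:.0f}")
```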
Vaccination.
Data is from GOV.UK. There are around 53 million adults aged 18+ in the UK, and a further 4.5 million teenagers aged 12–17 who are eligible for the vaccine.
On 31 December 2021 the "BBC" wrote, "The UKHSA analysed more than 600,000 confirmed and suspected cases of the Omicron variant up to 29 December in England. It found that a single vaccine dose reduced the risk of needing hospital treatment by 52%. Adding the second dose increased the protection to 72%, although after 25 weeks that protection had faded to 52%. And two weeks after getting a third dose, that protection against hospitalisation was boosted to 88%."
On 1 February 2022, just 26,875 people in England received a third dose of COVID vaccine, even though 6 million people should have had their third dose at least six weeks previously. Distrust of the government due to Partygate is part of the reason for the poor uptake.
Demographics.
Different demographics in the UK have been affected to different degrees by the coronavirus pandemic, and this may have medical, social or cultural causes.
Coronavirus risk and ethnicity.
In April 2020, the British Medical Association called on the government to investigate if and why people from black, Asian and minority ethnic (BAME) groups were more vulnerable to COVID-19, after the first 10 doctors to die were all from the group. The Labour Party called for a public enquiry after the first 10 deaths in the health service were from BAME backgrounds. The Mayor of London Sadiq Khan wrote to the Equality and Human Rights Commission asking them to investigate whether the effects of coronavirus on BAME groups could have been prevented or mitigated. A group of 70 BAME figures sent a letter to Boris Johnson calling for an independent public enquiry into the disproportionate impact of the coronavirus on people from black, Asian and minority ethnic backgrounds.
Research by the Intensive Care National Audit and Research Centre concluded that people from BAME backgrounds made up 34% of critical patients. NHS England and Public Health England were appointed to lead an inquiry into why people from black and minority ethnic backgrounds appear to be disproportionately affected by coronavirus. On 18 April, Public Health England said that they would start recording the ethnicity of victims of coronavirus.
Research carried out by "The Guardian" newspaper concluded that ethnic minorities in England were dying in disproportionately high numbers compared with white people. It reported that, of deaths in hospitals up to 19 April, 19% were of people from BAME backgrounds, who make up only 15% of the population of England.
The Office for National Statistics (ONS), meanwhile, wrote that in England and Wales black men were four times more likely to die from coronavirus than white men, from figures gathered between 2 March and 10 April. They concluded that "the difference between ethnic groups in COVID-19 mortality is partly a result of socio-economic disadvantage and other circumstances, but a remaining part of the difference has not yet been explained". Some commentators including Dr. John Campbell have pointed to Vitamin D deficiency as a possible cause of the discrepancy, but the theory remains unproven.
Another study carried out by University of Oxford and the London School of Hygiene and Tropical Medicine on behalf of NHS England and a separate report by the Institute for Fiscal Studies corroborated the ONS' findings. An Oxford University led study into the impact of COVID-19 on pregnancy concluded that 55% of pregnant women admitted to hospital with coronavirus from 1 March to 14 April were from a BAME background. The study also concluded that BAME women were four times more likely to be hospitalised than white women.
A study by Public Health Scotland found no link between BAME ethnicity and increased risk from COVID-19. A second Public Health England study found that those with a Bangladeshi heritage were dying at twice the rate of white Britons. Other BAME groups had between 10% and 50% higher risk of death from COVID-19.
Public Health England continued to report quarterly on the progress of its research. In its final December 2021 report it concluded that (a) the main factors behind the higher risk of COVID-19 infection for ethnic minority groups were occupation, living in multigenerational households, and living in densely-populated urban areas with poor air quality and higher levels of deprivation; (b) once infected, the risk of dying was higher for older people, males, people with disabilities, and people with other health conditions such as diabetes, and (c) a gene carried by 61% of people with South Asian ancestry doubled the risk of respiratory failure following COVID-19 infection.
As the vaccine programme gathered pace, it became clear that the level of take-up varied significantly between different ethnic groups. Notably, those identifying as Black or Black British reported the highest level of vaccine hesitancy, at over 40%.
Fines and ethnicity.
Figures from the Metropolitan Police showed that BAME people received proportionally more fines than white people for breaching COVID-related restrictions.
Coronavirus risk and employment status.
The ONS study, using data collected up to 17 April 2020 across England and Wales, concluded that men in low-skilled jobs were four times more likely to die from the virus than those in professional jobs. Women who worked as carers were twice as likely to die as those who worked in technical or professional jobs. The GMB trade union commented on the findings that ministers must stop any return to work until "proper guidelines, advice and enforcement are in place to keep people safe". An analysis of the figures by "The Guardian" concluded that deaths were higher in occupations where physical distancing was more difficult to achieve.
Analysis by "The Independent" and the "Financial Times" concluded that mortality rates from coronavirus were higher in deprived and urban areas than in prosperous and rural locations, across England and Wales. Analysis of the ONS data by the "Guardian" also concluded that by 13 May, only about 12% of people who had died from the virus in England and Wales were under 65 while 59% were over 80. A Public Health England report in June 2020 found that security guards, taxi and bus drivers, construction workers and social care staff were at a higher risk of COVID-19 when compared to other occupations.
Death rate and Brexit referendum.
In November 2021 researchers from the universities of Oxford and Glasgow published a paper stating that the COVID-19 death rate was significantly lower (by 33%) in districts that voted most in favour of remaining in the European Union in the 2016 Brexit referendum. They suggest that "different cultures and belief systems" should be taken into account in dealing with similar situations.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R_0"
}
] |
https://en.wikipedia.org/wiki?curid=65762043
|
65764375
|
Hedgehog (geometry)
|
Type of mathematical plane curve
In differential geometry, a hedgehog or plane hedgehog is a type of plane curve, the envelope of a family of lines determined by a support function. More intuitively, sufficiently well-behaved hedgehogs are plane curves with one tangent line in each oriented direction. A projective hedgehog is a restricted type of hedgehog, defined from an anti-symmetric support function, and (again when sufficiently well-behaved) forms a curve with one tangent line in each direction, regardless of orientation.
Every closed strictly convex curve is a hedgehog, the envelope of its supporting lines. The astroid forms a non-convex hedgehog, and the deltoid curve forms a projective hedgehog.
Hedgehogs can also be defined from support functions of hyperplanes in higher dimensions.
Definitions.
Formally, a planar support function can be defined as a continuously differentiable function formula_0 from the unit circle in the plane to real numbers, or equivalently as a function formula_1 from angles to real numbers. For each point formula_2 on the unit circle, it defines a line, the set of points formula_3 for which formula_4. This line is perpendicular to vector formula_2, passes through the point formula_5, and is at distance formula_6 from the origin. A support function is anti-symmetric when, for all formula_2, formula_7, or equivalently in terms of angles formula_8, so that formula_2 and formula_9 define the same line as each other.
Given any support function formula_0, its hedgehog is denoted formula_10. In terms of the function formula_11 and the angle formula_12 it has the parametric equations
formula_13
A hedgehog is "non-singular" when it has a tangent line at each of its points. A projective hedgehog is defined by an anti-symmetric support function. Hedgehogs can also be defined in the same way in higher dimensions, as envelopes of hyperplanes defined by support functions.
Examples.
The support function describing the supporting lines for a convex set formula_14 is defined by formula_15. The hedgehog of the support function of any strictly convex set is its boundary, parameterized by the angle of its supporting lines. When a convex set is not strictly convex (it has a line segment in its boundary), its support function is continuous but not continuously differentiable, and the parametric equations above jump discontinuously across the line segment instead of defining a continuous curve, so it is not defined as a hedgehog. The astroid provides an example of a non-convex hedgehog.
An example of a projective hedgehog, defined from an anti-symmetric support function, is given by the deltoid curve. The deltoid is a simple closed curve but other hedgehogs may self-intersect, or otherwise behave badly. In particular, there exist anti-symmetric support functions based on the Weierstrass function whose corresponding projective hedgehogs are fractal curves that are continuous but nowhere differentiable and have infinite length.
Every strictly convex body in the plane defines a projective hedgehog, its "middle hedgehog", the envelope of lines halfway between each pair of parallel supporting lines. Although triangles are not strictly convex, the envelope defined in this way for a triangle is its medial triangle. The points of the middle hedgehog are the midpoints of line segments connecting the pairs of points where each pair of parallel supporting lines contact the body. It has finite length, equal to half the perimeter of the given body. Each extreme point of the convex hull of the middle hedgehog is a "convexity point", a point such that the union of the body with its reflection through this point is convex. There are always at least three such points, and triangles and the Reuleaux triangle provide examples where there are exactly three.
Properties.
A non-singular hedgehog has a unique tangent line in each oriented direction, belonging to its defining family of lines. Correspondingly, any sufficiently well-behaved projective hedgehog has a unique tangent line in each direction without respect to orientation.
Pairs of hedgehogs can be combined by the pointwise sum of their support functions. This operation extends Minkowski addition of convex bodies and is analogous to Minkowski addition in multiple ways. It can be used to characterize curves of constant width: a convex hedgehog has constant width formula_16 if and only if its support function is formed by adding formula_17 to the support function of a projective hedgehog. That is, the curves of constant width are exactly the convex hedgehogs formed as sums of projective hedgehogs and circles.
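As a numerical illustration (not drawn from the sources above), the following Python sketch evaluates the parametric equations from the Definitions section for the support function f(θ) = a + b·cos 3θ, the sum of a circle's support function and an anti-symmetric part; the resulting convex hedgehog has constant width 2a whenever a > 8|b|. The specific constants are assumed for demonstration.

```python
import numpy as np

def hedgehog(f, df, thetas):
    """Points of the hedgehog of a support function f with derivative df."""
    x = f(thetas) * np.cos(thetas) - df(thetas) * np.sin(thetas)
    y = f(thetas) * np.sin(thetas) + df(thetas) * np.cos(thetas)
    return x, y

# Support function a + b*cos(3*theta): a constant (circle) part plus an
# anti-symmetric (projective hedgehog) part; constant width 2a when a > 8*|b|.
a, b = 1.0, 0.1
f  = lambda t: a + b * np.cos(3 * t)
df = lambda t: -3 * b * np.sin(3 * t)

t = np.linspace(0.0, 2.0 * np.pi, 200)
x, y = hedgehog(f, df, t)
print(x[0], y[0])                    # one point on the curve
print(np.ptp(f(t) + f(t + np.pi)))   # width f(θ) + f(θ+π) is constant, so the spread is ~0
```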
Every projective hedgehog has at least three singularities (typically, cusps). When a projective hedgehog has finite length, a construction of Leonhard Euler shows that its involutes of sufficiently high radius are curves of constant width.
Generalization.
More generally, hedgehogs are the natural geometrical objects that represent the formal differences of convex bodies: given an ordered pair (K, L) of convex bodies in the Euclidean vector space formula_18, there exists one, and only one, hedgehog that represents the formal difference K – L in formula_18.
This construction can be illustrated in the polygonal case in the plane, and in the case of smooth convex bodies with positive Gauss curvature: two convex hypersurfaces (with positive Gauss curvature) are subtracted by subtracting the points corresponding to the same outer unit normal, yielding a (possibly singular and self-intersecting) hypersurface.
The idea of using Minkowski differences of convex bodies may be traced back to a couple of papers by A.D. Alexandrov and H. Geppert in the 1930s. Many classical notions for convex bodies extend to hedgehogs and quite a number of classical results find their counterparts. Of course, a few adaptations are necessary. In particular, volumes have to be replaced by their algebraic versions.
In a long series of papers, hedgehogs and their extensions were studied by Y. Martinez-Maure from various points of view. The most striking result of this hedgehog theory was the construction of counterexamples to an old conjectured characterization of the 2-sphere.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "h"
},
{
"math_id": 1,
"text": "f(\\theta)=h\\bigl((\\cos\\theta,\\sin\\theta)\\bigr)"
},
{
"math_id": 2,
"text": "q"
},
{
"math_id": 3,
"text": "p"
},
{
"math_id": 4,
"text": "p\\cdot q = h(q)"
},
{
"math_id": 5,
"text": "qh(q)"
},
{
"math_id": 6,
"text": "|h(q)|"
},
{
"math_id": 7,
"text": "h(-q)=-h(q)"
},
{
"math_id": 8,
"text": "f(\\theta)=-f(\\theta+\\pi)"
},
{
"math_id": 9,
"text": "-q"
},
{
"math_id": 10,
"text": "\\mathcal{H}_h"
},
{
"math_id": 11,
"text": "f"
},
{
"math_id": 12,
"text": "\\theta"
},
{
"math_id": 13,
"text": "\n\\begin{align}\nx&=f(\\theta)\\cos\\theta-f'(\\theta)\\sin\\theta\\\\\ny&=f(\\theta)\\sin\\theta+f'(\\theta)\\cos\\theta\n\\end{align}\n"
},
{
"math_id": 14,
"text": "K"
},
{
"math_id": 15,
"text": "h(q)=\\max \\{p\\cdot q\\mid p\\in K\\}"
},
{
"math_id": 16,
"text": "w"
},
{
"math_id": 17,
"text": "w/2"
},
{
"math_id": 18,
"text": "\\mathbb{R}^{n+1}"
}
] |
https://en.wikipedia.org/wiki?curid=65764375
|
65767417
|
Landau–Placzek ratio
|
The Landau–Placzek ratio is the ratio of the integrated intensity of Rayleigh scattering to the combined integrated intensity of Brillouin scattering in the triplet frequency spectrum of light scattered by homogeneous liquids or gases. The triplet consists of two frequency-shifted Brillouin scattering lines and a central unshifted Rayleigh scattering line. The triplet structure was explained by Lev Landau and George Placzek in 1934 in a short publication summarizing the major results of their analysis. Landau and Placzek noted in their short paper that a more detailed discussion would be published later, although that paper does not seem to have been published. However, a detailed discussion is provided in Lev Landau and Evgeny Lifshitz's book.
The Landau–Placzek ratio is defined as
formula_0
where formula_1 is the integrated intensity of the central Rayleigh line and formula_2 is the integrated intensity of each of the two Brillouin components.
The Landau–Placzek formula provides an approximate theoretical prediction for the Landau–Placzek ratio,
formula_3
where formula_4 and formula_5 are the specific heat capacities at constant pressure and at constant volume, respectively.
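As a simple illustration (not part of the original references), for an ideal gas the prediction reduces to γ − 1, where γ = c_p/c_v; the short Python sketch below evaluates it for an assumed monatomic gas.

```python
# Landau–Placzek ratio of an ideal gas: (c_p - c_v) / c_v = gamma - 1.
gamma = 5.0 / 3.0   # heat-capacity ratio of a monatomic ideal gas (assumed example)
R_LP = gamma - 1.0
print(R_LP)         # ~0.667: the Rayleigh line carries about 2/3 of the combined Brillouin intensity
```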
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R_{LP} = \\frac {I_c} {2I_B}"
},
{
"math_id": 1,
"text": "I_c"
},
{
"math_id": 2,
"text": "I_B"
},
{
"math_id": 3,
"text": "R_{LP} = \\frac{c_p-c_v}{c_v}"
},
{
"math_id": 4,
"text": "c_p"
},
{
"math_id": 5,
"text": "c_v"
}
] |
https://en.wikipedia.org/wiki?curid=65767417
|
65770326
|
Hedgehog (hypergraph)
|
In the mathematical theory of hypergraphs, a hedgehog is a 3-uniform hypergraph defined from an integer parameter formula_0. It has formula_1 vertices, formula_0 of which can be labeled by the integers from formula_2 to formula_0 and the remaining formula_3 of which can be labeled by unordered pairs of these integers. For each pair of integers formula_4 in this range, it has a hyperedge whose vertices have the labels formula_5, formula_6, and formula_7. Equivalently it can be formed from a complete graph by adding a new vertex to each edge of the complete graph, extending it to an order-3 hyperedge.
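The construction can be made concrete with a short Python sketch (an illustration, not taken from the references); the vertex labels follow the description above.

```python
from itertools import combinations

def hedgehog_hypergraph(t):
    """Hedgehog on parameter t: vertices 1..t and all pairs {i,j}; one 3-edge per pair."""
    singles = list(range(1, t + 1))
    pairs = [frozenset(p) for p in combinations(singles, 2)]
    edges = [(i, j, frozenset((i, j))) for i, j in combinations(singles, 2)]
    return singles + pairs, edges

vertices, edges = hedgehog_hypergraph(4)
print(len(vertices), len(edges))   # 4 + C(4,2) = 10 vertices and C(4,2) = 6 hyperedges
```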
The properties of this hypergraph make it of interest in Ramsey theory.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "t"
},
{
"math_id": 1,
"text": "t+\\tbinom{t}{2}"
},
{
"math_id": 2,
"text": "1"
},
{
"math_id": 3,
"text": "\\tbinom{t}{2}"
},
{
"math_id": 4,
"text": "i,j"
},
{
"math_id": 5,
"text": "i"
},
{
"math_id": 6,
"text": "j"
},
{
"math_id": 7,
"text": "\\{i,j\\}"
}
] |
https://en.wikipedia.org/wiki?curid=65770326
|
657730
|
TI-BASIC
|
Programming language used in Texas Instruments calculators
TI-BASIC is the official name of a BASIC-like language built into Texas Instruments' graphing calculators.
TI-BASIC is a language family of three different and incompatible versions, released on different products: TI-BASIC 83 (on the Z80-based TI-83 and TI-84 series), TI-BASIC 89 (on the 68k-based TI-89, TI-92, and Voyage 200 series), and TI-BASIC Nspire (on the TI-Nspire series).
TI rarely refers to the language by name, but the name TI-BASIC has been used in some developer documentation.
For many applications, it is the most convenient way to program any TI calculator, since the capability to write programs in TI-BASIC is built-in. Assembly language (often referred to as "asm") can also be used, and C compilers exist for translation into assembly: TIGCC for Motorola 68000 (68k) based calculators, and SDCC for Zilog Z80 based calculators. However, both of them are cross-compilers, not allowing on-calculator programming. TI-BASIC is considerably slower than the assembly language (because it has to be interpreted), making it better suited to writing programs to quickly solve math problems or perform repetitive tasks, rather than programming games or graphics-intensive applications. Some math instruction books even provide programs in TI-BASIC (usually for the widespread variant used by the TI-82/83/84 series).
Although it is somewhat minimalist compared to programming languages used on computers, TI-BASIC is nonetheless an important factor in the programming community. Because TI graphing calculators are required for advanced mathematics classes in many high schools and universities, TI-BASIC often provides the first glimpse many students have into the world of programming.
Syntax.
The syntax of all versions of TI-BASIC is somewhat different from typical BASIC implementations. The language itself has some basic structured programming capabilities, but makes little to no use of, or allowance for, white space or indentation. It is also dependent on a somewhat non-standard character set, with specific characters for assignment (the right "STO" arrow, not readily available in most character sets), square and cube roots, and other mathematical symbols, as well as tokenized entry and storage for keywords. All statements begin with a colon, which also functions as a statement separator within lines. On the TI-83/84 models, closing parentheses, brackets, braces, and quotes can optionally be omitted at the end of a line or before the STO token in order to save space, although sometimes they are better left on. For example, on TI-83/84 models the for loop function runs much slower without closing parentheses in certain circumstances.
Expressions use infix notation, with standard operator precedence. Many statements demand arguments in parentheses, similar to the syntax used for mathematical functions. The syntax for assignment (copying of data into a variable) is unusual with respect to most conventional programming languages for computers; rather than using a BASIC-like let statement with an equal sign, or an ALGOL-like codice_0 operator, TI-BASIC uses a right-arrow codice_1 operator with the syntax: "source → destination". This is similar to the syntax of several Japanese calculators, such as those from Casio, Canon and Sharp, which have employed it ever since the first mass-market Japanese alphanumerical calculators appeared in the late 1970s and early 1980s.
Control flow.
Control flow statements include if-then-else blocks, for loops, while loops, and repeat loops, though no switch statements.
The main control flow statements are:
Unusual for a high level language, TI-BASIC implementations include IS> (Increment and Skip if Greater Than) and DS< (Decrement and Skip if Less Than) statements, constructs generally associated with assembly languages. Sections of programs can be labeled; however, particularly on the Z80 models, the labels function as destinations for Goto statements or codice_2 functions rather than as program or block labels.
Availability of functions and subroutines depends on the implementation; the versions available on the TI-82-descended calculators do not even support a GOSUB-like function, though it is possible to call programs from within each other and share variables between programs. TI-89/92-based designs can have access to shared functions, essentially programs capable of returning a value.
Data types.
TI-BASIC is a strongly and dynamically typed language. Available data types differ considerably between the 68k and Z80 versions. It is not possible to create user-defined data types without using a library written in assembly. Lists are often used as a replacement for structs.
TI-83/84 (Z80).
Data types that cannot be directly manipulated include:
TI-89 (68k).
Data types that cannot be directly manipulated (typing only their name on a line would result in an error) include:
Variables.
Flexibility in the use of variables varies widely by the calculator model. For example, on the TI-84 Plus, all English language letters as well as theta (Θ) are available.
TI-83/84 (Z80).
On the TI-83/84, the programmer can create lists whose names are up to five characters long. All other data types are limited, such as the 27 real or complex variables, and a number of predefined variable names of other types (e.g., matrices have to be one of the ten variables codice_16-codice_17). On the TI-83/84 certain variables such as codice_39 and the finance variables have fixed addresses in RAM, making them much faster to access than the 27 letter variables. codice_39 acts as a special variable containing the result of the last evaluated code. A line with just a variable will still be evaluated and its contents stored in codice_39 as a result. Because codice_39 is reevaluated so frequently, it is most often used to store very temporary calculations or to hold values that would otherwise be slow to access, such as items from a list. All variables are global.
TI-89 (68k).
In contrast, 68k calculators allow all variable names to have up to eight alphanumeric characters, including Greek. Furthermore, variables can be grouped into "folders", or made local to a program by declaring them with the codice_43 statement.
Comments.
TI-83/84 (Z80).
Z80 programmers often start lines with " (double quotation mark) to denote a comment. Lines starting with " are actually executed, changing the codice_39 variable, but this does not affect anything other than performance unless codice_39 is read immediately afterwards.
TI-89 (68k).
The 68k calculators allow programs to include single-line comments, using © as a comment symbol. If a comment appears as the first line after the "Prgm" statement, it is displayed in the status bar when the program is selected in the catalog; such comments are often used to document the names or types of parameters. The 68k interpreter has a built-in feature that stores the number of space characters at the beginning of a line; this allows indentation.
Functions.
TI-83/84 (Z80).
The Z80 version of TI-BASIC makes explicit "functions" like those in 68k impossible. However, all variables are global so functions can be emulated by setting variables, similar to arguments, before calling another program. Return values do not exist; the codice_46 statement stops the current program and continues where the program was called.
TI-89 (68k).
The 68k version of TI-BASIC allows creating user-defined functions. Functions have the same syntax as programs except that they use the codice_47...codice_48 keywords instead of codice_49...codice_50, and that they are not allowed to use instructions that perform I/O, modify non-local variables, nor call programs. However, functions can still be non-pure because they can call built-in functions such as codice_51, codice_52, or codice_53. All functions have a return value, which in the absence of an explicit codice_46 statement is the last expression evaluated.
Third-party language extensions.
Third-party applications, in chronological order Omnicalc, xLIB, Celtic, and Doors CS, have overloaded TI-BASIC functions on the Z80 calculators to provide additional language functionality. The third-party libraries overload the codice_55, codice_56, codice_57 and codice_58 functions, which are handled and interpreted by their respective applications. Among the extra functions are fast shape-drawing routines, sprite and tilemap tools, program and VAT modification and access abilities, GUI construction features, and much more, most of which are ordinarily restricted to use by assembly programmers. All of the functions require that an application like Doors CS 7.0 be present on the user's calculator, which is sometimes considered a drawback to the use of the libraries.
Examples.
Hello world.
The following programs, when executed, will display the phrase "codice_59".
Lists and loops.
TI-89 (68k Series).
Lists have many possible names, which allows many programs to manipulate many lists without overwriting previous data. Lists on the TI-82 cannot have custom names (L1 through L6 are preprogrammed). The TI-85 and TI-86 do not have the ability to handle a variable name with subscripts. The TI-81 is completely unable to handle lists. Lists can be used by the numerous built-in TI-BASIC functions for calculating statistics, including various regression analyses and more. These can be called inside programs; however, they still display their output while pausing execution, and they cannot store specific results into variables.
Recursion.
Recursion is possible. A program can be called from within itself or from within another program.
TI-83/84 (Z80 Series).
The example below is used to compute factorials. In order for it to work, codice_60 is the parameter of the factorial function and codice_3 must equal 1.
Functions.
The 68k series makes a distinction between programs and functions. Functions are just like programs except that they do not allow statements that perform I/O, including modifying non-local variables, and they return a value, which in the absence of an explicit codice_46 statement is the last expression evaluated.
Editors and Tools.
The growth of the hobbyist graphing calculator community in the 1990s brought with it sharing and collaboration, including the need to share TI-BASIC code on mailing lists and discussion forums. At first, this was done by typing out the TI-BASIC code from a calculator screen into a computer by hand, or conversely, entering programs manually into calculators. Because TI-BASIC programs are stored in a tokenized format, they cannot be edited using standard computer text editors, so as the calculator programming community matured, a need for an automated converter arose. The format for computer-stored TI-BASIC programs generated by Texas Instruments' TI-GraphLink application was eventually decoded, and third-party tools were created to manipulate these files. TI created a BASIC editor that they included in certain releases of the TI-GraphLink linking program, but it has not gained widespread usage. In particular, it used a custom character set that did not display properly when copied and pasted to forums.
In 2005, Joe Penna created OptiBASIC, a translator tool to convert text from the TI-GraphLink editor into standard Unicode. The project soon expanded to include a regex-based TI-BASIC optimizer. Independently, Christopher "Kerm Martian" Mitchell of Cemetech began creating an online converter to extract plain-text (and later HTML and BBCode-formatted) contents from tokenized TI-BASIC programs, which expanded to include an online program editor, exporter, and TI-83 Plus emulator. The SourceCoder project absorbed OptiBASIC at the end of 2005. The only other major TI-BASIC editor currently in use is TokenIDE (or "Tokens"), created by Shaun "Merthsoft" McFall. An offline editor, Tokens can import, edit, and export TI-BASIC programs, includes tools to track program size and correctness, and offers ancillary features such as a sprite/image editor. Built around token definitions stored in XML files, it is intended to be extensible to work with any user-specified token mapping.
Programs on the TI-Nspire series, as well as the TI-92 Plus and Voyage 200 calculators, can be transferred and saved in flat clear-text (ANSI/ASCII/ISO 8859-*) format, and there are several IDEs for TI calculator programming. A series of TextPad syntax definitions, code snippets, and charts are available for the TI calculators, and the syntax definitions have also been converted to the format used by the Zeus editor. The clear-text format is also used for the Lua interpreter on the calculator.
An independent project exists to develop a PC-side interpreter for the TI-89/92/Voyage 200 variant of TI-BASIC that would allow programs for the calculator to be run directly, as well as combined programs in other languages which call this interpreter. The interpreter uses standard input, output, and error streams and specifiable log and configuration files in console mode under Windows; a second program to replicate the graphics used on the calculator would be related to it in the same way as the Tk tools that are integrated with Tcl, Perl, Rexx, C and other languages. A related project to develop a Tk-style tool for use by VBScript is the source of this tool. A third tool that integrates the PC-side TI-BASIC with spreadsheet and database programs via VBA and WSH engines is also envisioned. This project also involves a calculator-side Unix-style shell, Rexx and Perl interpreters, a Fortran 77 interpreter, and converters to translate among the various Casio, HP, Sharp, and Texas Instruments calculator programming languages, and between those and various scripting languages.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "10^{600}"
}
] |
https://en.wikipedia.org/wiki?curid=657730
|
65773644
|
Vivanti–Pringsheim theorem
|
The Vivanti–Pringsheim theorem is a mathematical statement in complex analysis that identifies a specific singularity of a function described by a certain type of power series. The theorem was originally formulated by Giulio Vivanti in 1893 and proved in the following year by Alfred Pringsheim.
More precisely the theorem states the following:
A complex function defined by a power series
formula_0
with non-negative real coefficients formula_1 and finite positive radius of convergence formula_2 has a singularity at formula_3.
A simple example is the (complex) geometric series
formula_4
with a singularity at formula_5.
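A small numerical illustration of this example (not from the original sources): partial sums of the geometric series grow without bound as z approaches R = 1 along the positive real axis, while no such blow-up occurs at z = −1 on the same circle.

```python
def partial_sum(z, n_terms=10_000):
    """Partial sum of the geometric series sum_{n>=0} z**n."""
    total, power = 0.0, 1.0
    for _ in range(n_terms):
        total += power
        power *= z
    return total

print(partial_sum(0.999))    # ~1000, diverging as z -> 1 (the singularity at z = R)
print(partial_sum(-0.999))   # ~0.5, bounded: no singularity at z = -R
```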
|
[
{
"math_id": 0,
"text": "f(z)=\\sum_{n=0}^\\infty a_nz^n"
},
{
"math_id": 1,
"text": "a_n"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "z=R"
},
{
"math_id": 4,
"text": "f(z)=\\sum_{n=0}^\\infty z^n =\\frac{1}{1-z}"
},
{
"math_id": 5,
"text": "z=1"
}
] |
https://en.wikipedia.org/wiki?curid=65773644
|
65776
|
Metcalfe's law
|
Value of a communication network is proportional to the square of the number of connected users
Metcalfe's law states that the financial value or influence of a telecommunications network is proportional to the square of the number of connected users of the system (n²). The law is named after Robert Metcalfe and was first proposed in 1980, albeit not in terms of users, but rather of "compatible communicating devices" (e.g., fax machines, telephones). It later became associated with users on the Ethernet after a September 1993 "Forbes" article by George Gilder.
Network effects.
Metcalfe's law characterizes many of the network effects of communication technologies and networks such as the Internet, social networking and the World Wide Web. Former Chairman of the U.S. Federal Communications Commission Reed Hundt said that this law gives the best understanding of the workings of the present-day Internet. Mathematically, Metcalfe's law shows that the number of unique possible connections in a network of formula_0 nodes can be expressed as the triangular number formula_1, which is asymptotically proportional to formula_2.
The law has often been illustrated using the example of fax machines: a single fax machine on its own is useless, but the value of every fax machine increases with the total number of fax machines in the network, because the total number of people with whom each user may send and receive documents increases. This is a common illustration used to explain the network effect. Thus, in social networks, the greater the number of users of the service, the more valuable the service becomes to the community.
History and derivation.
Metcalfe's law was conceived in 1983 in a presentation to the 3Com sales force. It stated V would be proportional to the total number of possible connections, or approximately n-squared.
The original incarnation was careful to distinguish between a linear cost (C·n), non-linear growth (n²), and a non-constant proportionality factor, the affinity (A). The break-even point, where costs are recouped, is given by: formula_3 At some size, the right-hand side of the equation, the value V, exceeds the cost, and A describes the relationship between size and net value added. For large n, the net network value is then: formula_4 Metcalfe properly dimensioned A as "value per user". Affinity is also a function of network size, and Metcalfe correctly asserted that A must decline as n grows large. In a 2006 interview, Metcalfe stated:
<templatestyles src="Template:Blockquote/styles.css" />There may be diseconomies of network scale that eventually drive values down with increasing size. So, if V=A*n2, it could be that A (for “affinity,” value per connection) is also a function of n and heads down after some network size, overwhelming n2.
Growth of n.
Network size, and hence value, does not grow without bound but is constrained by practical limitations such as infrastructure, access to technology, and bounded rationality (for example, Dunbar's number). It is almost always the case that user growth n reaches a saturation point. Substitutes, competing technologies, and technical obsolescence also constrain the growth of n. Growth of n is typically assumed to follow a sigmoid function such as a logistic curve or Gompertz curve.
Density.
"A" is also governed by the connectivity or "density" of the network topology. In an undirected network, every "edge" connects two nodes such that there are 2"m" nodes per edge. The proportion of nodes in actual contact are given by formula_5.
The maximum possible number of edges in a simple network (i.e. one with no multi-edges or self-edges) is formula_6.
Therefore, the density "ρ" of a network, the fraction of those possible edges that are actually present, is:
<templatestyles src="Block indent/styles.css"/>formula_7
which for large networks is approximated by formula_8.
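A short Python sketch of these definitions (the node and edge counts are assumed example values):

```python
# Density of a simple undirected network with n nodes and m edges.
n, m = 1_000, 4_950        # assumed example counts
c = 2 * m / n              # average number of connections per node
rho_exact = c / (n - 1)    # fraction of the n*(n-1)/2 possible edges present
rho_approx = c / n         # large-network approximation
print(c, rho_exact, rho_approx)
```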
Limitations.
Metcalfe's law assumes that each of the formula_0 nodes is of equal benefit. If this is not the case, for example because one fax machine serves 60 workers in a company, the second fax machine serves half of that, the third one third, and so on, then the relative value of an additional connection decreases. Likewise, in social networks, if users that join later use the network less than early adopters, then the benefit of each additional user may lessen, making the overall network less efficient if costs per user are fixed.
Modified models.
Within the context of social networks, many, including Metcalfe himself, have proposed modified models in which the value of the network grows as formula_9 rather than formula_2. Reed and Andrew Odlyzko have sought out possible relationships to Metcalfe's law by describing how the value of a network might scale differently. Tongia and Wilson also examine the related question of the costs to those excluded.
Validation in data.
For more than 30 years, there was little concrete evidence in support of the law. Finally, in July 2013, Dutch researchers analyzed European Internet-usage patterns over a sufficiently long period and found formula_2 proportionality for small values of formula_0 and formula_9 proportionality for large values of formula_0. A few months later, Metcalfe himself provided further support by using Facebook's data over the preceding 10 years to show a good fit for Metcalfe's law.
In 2015, Zhang, Liu, and Xu parameterized the Metcalfe function in data from Tencent and Facebook. Their work showed that Metcalfe's law held for both, despite differences in audience between the two sites (Facebook serving a worldwide audience and Tencent serving only Chinese users). The functions for the two sites were formula_10 and formula_11 respectively.
One of the earliest mentions of Metcalfe's law in the context of Bitcoin was a Reddit post by Santostasi in 2014, which compared the observed generalized Metcalfe behaviour for Bitcoin to Zipf's law and to the theoretical Metcalfe result.
Metcalfe's law is a critical component of Santostasi's Bitcoin Power Law Theory.
In a working paper, Peterson linked time-value-of-money concepts to Metcalfe value using Bitcoin and Facebook as numerical examples of the proof, and in 2018 applied Metcalfe's law to Bitcoin, showing that over 70% of variance in Bitcoin value was explained by applying Metcalfe's law to increases in Bitcoin network size.
In a 2024 interview, mathematician Terence Tao emphasized the importance of universality and networking within the mathematics community, citing Metcalfe's law. Tao believes that a larger audience leads to more connections, which ultimately results in positive developments within the community, stating, "my whole career experience has been sort of the more connections equals just better stuff happening".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "n(n-1)/2"
},
{
"math_id": 2,
"text": "n^2"
},
{
"math_id": 3,
"text": "C \\times n=A\\times n(n-1)/2"
},
{
"math_id": 4,
"text": "\\Pi=n(A \\times (n-1)/2 - C)"
},
{
"math_id": 5,
"text": " c=2m / n "
},
{
"math_id": 6,
"text": " \\binom{n}{2}=n(n-1)/2"
},
{
"math_id": 7,
"text": " \\rho=c/(n-1) "
},
{
"math_id": 8,
"text": " \\rho=c/n "
},
{
"math_id": 9,
"text": "n \\log n"
},
{
"math_id": 10,
"text": "V_{Tencent}=7.39\\times10^{-9}\\times n^{2}"
},
{
"math_id": 11,
"text": "V_{Facebook}=5.70\\times 10^{-9}\\times n^{2}"
}
] |
https://en.wikipedia.org/wiki?curid=65776
|
65782984
|
Free matroid
|
In mathematics, the free matroid over a given ground-set "E" is the matroid in which the independent sets are all subsets of "E". It is a special case of a uniform matroid. The unique basis of this matroid is the ground-set itself, "E". Among matroids on "E", the free matroid on "E" has the most independent sets, the highest rank, and the fewest circuits.
Free extension of a matroid.
The free extension of a matroid formula_0 by some element formula_1, denoted formula_2, is a matroid whose elements are the elements of formula_0 plus the new element formula_3. Its circuits are the circuits of formula_0 together with the sets formula_4 in which formula_5 is a basis of formula_0, and its independent sets are the independent sets of formula_0 together with the sets formula_6 in which formula_7 is an independent set of formula_0 with at most formula_8 elements.
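The definition can be checked on a small example with the Python sketch below; it is an illustration only, assuming the uniform matroid U_{2,3} as the starting matroid and representing a matroid by its collection of independent sets.

```python
from itertools import chain, combinations

def subsets(ground):
    return [frozenset(s) for s in chain.from_iterable(
        combinations(ground, r) for r in range(len(ground) + 1))]

# Starting matroid (assumed example): U_{2,3}, rank 2 on {1, 2, 3};
# its independent sets are all subsets with at most 2 elements.
ground, rank = {1, 2, 3}, 2
indep = {S for S in subsets(ground) if len(S) <= rank}

# Free extension by a new element e = 4: keep the old independent sets and add
# I ∪ {e} for every independent I with at most rank - 1 elements.
e = 4
indep_ext = indep | {I | {e} for I in indep if len(I) <= rank - 1}
print(sorted(sorted(S) for S in indep_ext))
```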
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "e\\not\\in M"
},
{
"math_id": 2,
"text": "M+e"
},
{
"math_id": 3,
"text": "e"
},
{
"math_id": 4,
"text": "B\\cup \\{e\\}"
},
{
"math_id": 5,
"text": "B"
},
{
"math_id": 6,
"text": "I\\cup \\{e\\}"
},
{
"math_id": 7,
"text": "I"
},
{
"math_id": 8,
"text": "\\text{rank}(M)-1"
}
] |
https://en.wikipedia.org/wiki?curid=65782984
|
65783919
|
Taylor–von Neumann–Sedov blast wave
|
Self-similar solution describing the fluid dynamics of explosions
Taylor–von Neumann–Sedov blast wave (or sometimes referred to as Sedov–von Neumann–Taylor blast wave) refers to a blast wave induced by a strong explosion. The blast wave was described by a self-similar solution independently by G. I. Taylor, John von Neumann and Leonid Sedov during World War II.
History.
G. I. Taylor was told by the British Ministry of Home Security that it might be possible to produce a bomb in which a very large amount of energy would be released by nuclear fission, and he was asked to report on the effect of such a weapon. Taylor presented his results on June 27, 1941. At the same time, in the United States, John von Neumann was working on the same problem, and he presented his results on June 30, 1941. Leonid Sedov was said to have been working on the problem around the same time in the USSR, although Sedov never confirmed any exact dates.
The complete solution was published first by Sedov in 1946. Von Neumann published his results in August 1947 in a Los Alamos Scientific Laboratory report, although that report was distributed only in 1958. Taylor got clearance to publish his results in 1949 and published his work in two papers in 1950. In the second paper, Taylor calculated the energy of the atomic bomb used in the Trinity nuclear test using the similarity solution, simply by examining the series of blast-wave photographs, carrying a length scale and time stamps, published by Julian E. Mack in 1947. This calculation of the energy caused, in Taylor's own words, 'much embarrassment' (according to Grigory Barenblatt) in US government circles, since the number was then still classified although the photographs published by Mack were not. Taylor's biographer George Batchelor writes "This estimate of the yield of the first atom bomb explosion caused quite a stir... G.I. was mildly admonished by the US Army for publishing his deductions from their (unclassified) photographs".
Mathematical description.
Consider a strong explosion (such as nuclear bombs) that releases a large amount of energy formula_0 in a small volume during a short time interval. This will create a strong spherical shock wave propagating outwards from the explosion center. The self-similar solution tries to describe the flow when the shock wave has moved through a distance that is extremely large when compared to the size of the explosive. At these large distances, the information about the size and duration of the explosion will be forgotten; only the energy released formula_0 will have influence on how the shock wave evolves. To a very high degree of accuracy, then it can be assumed that the explosion occurred at a point (say the origin formula_1) instantaneously at time formula_2.
The shock wave in the self-similar region is assumed to be still very strong such that the pressure behind the shock wave formula_3 is very large in comparison with the pressure (atmospheric pressure) in front of the shock wave formula_4, which can be neglected from the analysis. Although the pressure of the undisturbed gas is negligible, the density of the undisturbed gas formula_5 cannot be neglected since the density jump across strong shock waves is finite as a direct consequence of Rankine–Hugoniot conditions. This approximation is equivalent to setting formula_6 and the corresponding sound speed formula_7, but keeping its density non zero, i.e., formula_8.
The only parameters available at our disposal are the energy formula_0 and the undisturbed gas density formula_5. The properties behind the shock wave such as formula_9 are derivable from those in front of the shock wave. The only non-dimensional combination available from formula_10 and formula_0 is
formula_11.
It is reasonable to assume that the evolution in formula_12 and formula_13 of the shock wave depends only on the above variable. This means that the shock wave location formula_14 itself will correspond to a particular value, say formula_15, of this variable, i.e.,
formula_16
The detailed analysis that follows will, at the end, reveal that the factor formula_15 is quite close to unity, thereby demonstrating (for this problem) the quantitative predictive capability of the dimensional analysis in determining the shock-wave location as a function of time. The propagation velocity of the shock wave is
formula_17
With the approximation described above, the Rankine–Hugoniot conditions determine the gas velocity immediately behind the shock front formula_18, the pressure formula_3 and the density formula_19 for an ideal gas as follows
formula_20
where formula_21 is the specific heat ratio. Since formula_5 is a constant, the density immediately behind the shock wave is not changing with time, whereas formula_18 and formula_3 decrease as formula_22 and formula_23, respectively.
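The shock-location formula above can be inverted to estimate the released energy from a single photograph of the fireball, which is essentially Taylor's procedure described in the History section. The Python sketch below uses the value β ≈ 1.033 quoted later in the article for γ = 7/5, together with assumed illustrative numbers for the ambient density, radius, and time (not the actual test data).

```python
# Inverting R = beta * (E * t**2 / rho0)**(1/5) gives E = rho0 * R**5 / (beta**5 * t**2).
beta = 1.033            # dimensionless constant for gamma = 7/5 (from the article)
rho0 = 1.2              # ambient air density in kg/m^3 (assumed)
R, t = 100.0, 0.016     # illustrative shock radius (m) at time t (s); not measured data

E = rho0 * R**5 / (beta**5 * t**2)
print(f"E ≈ {E:.2e} J ≈ {E / 4.184e12:.0f} kt of TNT")
```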
Self-similar solution.
The gas motion behind the shock wave is governed by Euler equations. For an ideal polytropic gas with spherical symmetry, the equations for the fluid variables such as radial velocity formula_24, density formula_25 and pressure formula_26 are given by
formula_27
At formula_14, the solutions should approach the values given by the Rankine-Hugoniot conditions defined in the previous section.
The variable pressure can be replaced by the sound speed formula_28 since pressure can be obtained from the formula formula_29. The following non-dimensional self-similar variables are introduced,
formula_30.
The conditions at the shock front formula_31 becomes
formula_32
Substituting the self-similar variables into the governing equations will lead to three ordinary differential equations. Solving these differential equations analytically is laborious, as shown by Sedov in 1946 and von Neumann in 1947. G. I. Taylor integrated these equations numerically to obtain desired results.
The relation between formula_33 and formula_34 can be deduced directly from energy conservation. Since the energy associated with the undisturbed gas is neglected by setting formula_6, the total energy of the gas within the shock sphere must be equal to formula_0. Due to self-similarity, it is clear that not only the total energy within a sphere of radius formula_31 is constant, but also the total energy within a sphere of any radius formula_35 (in dimensional form, it says that total energy within a sphere of radius formula_12 that moves outwards with a velocity formula_36 must be constant). The amount of energy that leaves the sphere of radius formula_12 in time formula_37 due to the gas velocity formula_38 is formula_39, where formula_40 is the specific enthalpy of the gas. In that time, the radius of the sphere increases with the velocity formula_41 and the energy of the gas in this extra increased volume is formula_42, where formula_43 is the specific energy of the gas. Equating these expressions and substituting formula_44 and formula_45 that is valid for ideal polytropic gas leads to
formula_46
The continuity and energy equation reduce to
formula_47
Expressing formula_49 and formula_50 as a function of formula_34 only using the relation obtained earlier and integrating once yields the solution in implicit form,
formula_51
where
formula_52
The constant formula_15 that determines the shock location can be determined from the conservation of energy
formula_53
to obtain
formula_54
For air, formula_48 and formula_55. The solution for formula_48 is shown in the figure by graphing the curves of formula_56, formula_57, formula_58 and formula_59 where formula_60 is the temperature.
Asymptotic behavior near the central region.
The asymptotic behavior of the central region can be investigated by taking the limit formula_61. From the figure, it can be observed that the density falls to zero very rapidly behind the shock wave. The entire mass of the gas which was initially spread out uniformly in a sphere of radius formula_62 is now contained in a thin layer behind the shock wave, that is to say, all the mass is driven outwards by the acceleration imparted by the shock wave. Thus, most of the region is basically empty. The pressure ratio also drops rapidly to attain the constant value formula_63. The temperature ratio follows from the ideal gas law; since density ratio decays to zero and the pressure ratio is constant, the temperature ratio must become infinite. The limiting form for the density is given as follows
formula_64
Remember that the density formula_19 is time-independent whereas formula_65 which means that the actual pressure is in fact time dependent. It becomes clear if the above forms are rewritten in dimensional units,
formula_66
The velocity ratio has the linear behavior in the central region,
formula_67
whereas the behavior of the velocity itself is given by
formula_68
Final stage of the blast wave.
As the shock wave evolves in time, its strength decreases. The self-similar solution described above breaks down when formula_3 becomes comparable to formula_4 (more precisely, when formula_69). At this later stage of the evolution, formula_4 (and consequently formula_70) cannot be neglected. This means that the evolution is not self-similar, because one can form a length scale formula_71 and a time scale formula_72 to describe the problem. The governing equations are then integrated numerically, as was done by H. Goldstine and John von Neumann, Brode, and Okhotsimskii et al. Furthermore, in this stage, the compressing shock wave is necessarily followed by a rarefaction wave behind it; the waveform is empirically fitted by the Friedlander waveform.
Cylindrical line explosion.
The analogous problem in cylindrical geometry, corresponding to an axisymmetric blast wave such as that produced by a lightning strike, can be solved analytically. This problem was solved independently by Leonid Sedov, A. Sakurai and S. C. Lin. In cylindrical geometry, the non-dimensional combination involving the radial coordinate formula_12 (this is different from the formula_12 in the spherical geometry), the time formula_13, the total energy released per unit axial length formula_0 (this is different from the formula_0 used in the previous section) and the ambient density formula_5 is found to be
formula_73
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "E"
},
{
"math_id": 1,
"text": "r=0"
},
{
"math_id": 2,
"text": "t=0"
},
{
"math_id": 3,
"text": "p_1"
},
{
"math_id": 4,
"text": "p_0"
},
{
"math_id": 5,
"text": "\\rho_0"
},
{
"math_id": 6,
"text": "p_0=0"
},
{
"math_id": 7,
"text": "c_0=0"
},
{
"math_id": 8,
"text": "\\rho_0\\neq 0"
},
{
"math_id": 9,
"text": "p_1,\\,\\rho_1"
},
{
"math_id": 10,
"text": "r,\\,t,\\,\\rho_0"
},
{
"math_id": 11,
"text": "r\\left(\\frac{\\rho_0}{Et^2}\\right)^{1/5}"
},
{
"math_id": 12,
"text": "r"
},
{
"math_id": 13,
"text": "t"
},
{
"math_id": 14,
"text": "r=R(t)"
},
{
"math_id": 15,
"text": "\\beta"
},
{
"math_id": 16,
"text": "R= \\beta \\left(\\frac{Et^2}{\\rho_0}\\right)^{1/5}."
},
{
"math_id": 17,
"text": "D=\\frac{\\mathrm{d}R}{\\mathrm{d}t}=\\frac{2R}{5t}=\\frac{2\\beta}{5}\\left(\\frac{E}{\\rho_0t^3}\\right)^{1/5}"
},
{
"math_id": 18,
"text": "v_1"
},
{
"math_id": 19,
"text": "\\rho_1"
},
{
"math_id": 20,
"text": "v_1 = \\frac{2}{\\gamma+1}D, \\quad p_1 = \\frac{2}{\\gamma+1}\\rho_0 D^2, \\quad \\rho_1= \\rho_0 \\frac{\\gamma+1}{\\gamma-1}"
},
{
"math_id": 21,
"text": "\\gamma"
},
{
"math_id": 22,
"text": "t^{-3/5}"
},
{
"math_id": 23,
"text": "t^{-6/5}"
},
{
"math_id": 24,
"text": "v(r,t)"
},
{
"math_id": 25,
"text": "\\rho(r,t)"
},
{
"math_id": 26,
"text": "p(r,t)"
},
{
"math_id": 27,
"text": "\\begin{align}\n\\frac{\\partial v}{\\partial t} + v \\frac{\\partial v}{\\partial r} &= - \\frac{1}{\\rho}\\frac{\\partial p}{\\partial r},\\\\\n\\frac{\\partial \\rho}{\\partial t} + \\frac{\\partial (\\rho v)}{\\partial r} &= - \\frac{2\\rho v}{r},\\\\\n\\left(\\frac{\\partial }{\\partial t}+ v\\frac{\\partial }{\\partial r}\\right)\\ln \\frac{p}{\\rho^\\gamma} &=0.\n\\end{align}\n"
},
{
"math_id": 28,
"text": "c(r,t)"
},
{
"math_id": 29,
"text": "c^2=\\gamma p/\\rho"
},
{
"math_id": 30,
"text": "\\xi = \\frac{r}{R(t)}, \\quad V(\\xi) = \\frac{5tv}{2r}, \\quad G(\\xi) = \\frac{\\rho}{\\rho_0}, \\quad Z(\\xi) = \\frac{25 t^2 c^2}{4r^2}"
},
{
"math_id": 31,
"text": "\\xi=1"
},
{
"math_id": 32,
"text": "V(1) = \\frac{2}{\\gamma+1}, \\quad G(1)=\\frac{\\gamma+1}{\\gamma-1}, \\quad Z(1) = \\frac{2\\gamma(\\gamma-1)}{(\\gamma+1)^2}."
},
{
"math_id": 33,
"text": "Z"
},
{
"math_id": 34,
"text": "V"
},
{
"math_id": 35,
"text": "\\xi<1"
},
{
"math_id": 36,
"text": "v_n=2r/5t"
},
{
"math_id": 37,
"text": "dt"
},
{
"math_id": 38,
"text": "v"
},
{
"math_id": 39,
"text": "4\\pi r^2\\rho v(h+v^2/2)\\mathrm{d}t"
},
{
"math_id": 40,
"text": "h"
},
{
"math_id": 41,
"text": "v_n"
},
{
"math_id": 42,
"text": "4\\pi r^2 \\rho v_n(e+v^2/2)\\mathrm{d}t"
},
{
"math_id": 43,
"text": "e"
},
{
"math_id": 44,
"text": "e=c^2/\\gamma(\\gamma-1)"
},
{
"math_id": 45,
"text": "h=c^2/(\\gamma-1)"
},
{
"math_id": 46,
"text": "Z = \\frac{\\gamma(\\gamma-1)(1-V)V^2}{2(\\gamma V-1)}."
},
{
"math_id": 47,
"text": "\\begin{align}\n\\frac{\\mathrm{d} V }{\\mathrm{d}\\ln \\xi} - (1-V) \\frac{\\mathrm{d}\\ln G}{\\mathrm{d}\\ln\\xi} &= - 3V\\\\\n\\frac{\\mathrm{d}\\ln Z}{\\mathrm{d}\\ln\\xi} - (\\gamma-1) \\frac{\\mathrm{d}\\ln G}{\\mathrm{d}\\ln\\xi} &= -\\frac{5-2V}{1-V}.\n\\end{align}\n"
},
{
"math_id": 48,
"text": "\\gamma=7/5"
},
{
"math_id": 49,
"text": "\\mathrm{d}V/\\mathrm{d}\\ln\\xi"
},
{
"math_id": 50,
"text": "\\mathrm{d}\\ln G/\\mathrm{d}V"
},
{
"math_id": 51,
"text": "\\begin{align}\n\\xi^5 &= \\left[\\frac{1}{2}(\\gamma+1)V\\right]^{-2} \\left\\{\\frac{\\gamma+1}{7-\\gamma}[5-(3\\gamma-1)V]\\right\\}^{\\nu_1}\\left[\\frac{\\gamma+1}{\\gamma-1}(\\gamma V-1)\\right]^{\\nu_2},\\\\\nG &= \\frac{\\gamma+1}{\\gamma-1}\\left[\\frac{\\gamma+1}{\\gamma-1}(\\gamma V-1)\\right]^{\\nu_3}\\left\\{\\frac{\\gamma+1}{7-\\gamma}[5-(3\\gamma-1)V\\right\\}^{\\nu_4}\\left[\\frac{\\gamma+1}{\\gamma-1}(1-V)\\right]^{\\nu_5}\n\\end{align}\n"
},
{
"math_id": 52,
"text": "\\nu_1= -\\frac{13\\gamma^2-7\\gamma+12}{(3\\gamma-1)(2\\gamma+1)},\\quad \\nu_2 = \\frac{5(\\gamma-1)}{2\\gamma+1}, \\quad \\nu_3 = \\frac{3}{2\\gamma+1},\\quad \\nu_4 = -\\frac{\\nu_1}{2-\\gamma}, \\quad \\nu_5 = - \\frac{2}{2-\\gamma}."
},
{
"math_id": 53,
"text": "E=\\int_0^R\\rho[v^2/2+c^2/\\gamma(\\gamma-1)]4\\pi r^2\\mathrm{d}r"
},
{
"math_id": 54,
"text": "\\beta^5 \\frac{16\\pi}{25}\\int_0^1 G[V^2/2+Z/\\gamma(\\gamma-1)]\\xi^4\\mathrm{d}\\xi = 1."
},
{
"math_id": 55,
"text": "\\beta=1.033"
},
{
"math_id": 56,
"text": "\\rho/\\rho_1=G(\\gamma-1)/(\\gamma+1)"
},
{
"math_id": 57,
"text": "v/v_1 = \\xi V(\\gamma+1)/2"
},
{
"math_id": 58,
"text": "p/p_1=\\xi^2GZ(\\gamma+1)/(2\\gamma)"
},
{
"math_id": 59,
"text": "T/T_1=\\xi^2Z(\\gamma+1)^2/[2\\gamma(\\gamma-1)],"
},
{
"math_id": 60,
"text": "T"
},
{
"math_id": 61,
"text": "\\xi\\rightarrow 0"
},
{
"math_id": 62,
"text": "R"
},
{
"math_id": 63,
"text": "p_c"
},
{
"math_id": 64,
"text": "\\frac{\\rho}{\\rho_1} \\sim \\xi^{3/(\\gamma-1)}, \\quad \\frac{p}{p_1}\\rightarrow p_c, \\quad \\frac{T}{T_1} \\sim \\xi^{-3/(\\gamma-1)} \\quad \\text{as} \\quad \\xi\\rightarrow 0."
},
{
"math_id": 65,
"text": "p_1\\sim t^{-6/5}"
},
{
"math_id": 66,
"text": "\\rho \\sim r^{3/(\\gamma-1)}t^{-6/5(\\gamma-1)}, \\quad p\\rightarrow p_c t^{-6/5}, \\quad T \\sim r^{-3/(\\gamma-1)}t^{(6/5)(2-\\gamma)/(\\gamma-1)} \\quad \\text{as} \\quad r\\rightarrow 0."
},
{
"math_id": 67,
"text": "\\frac{v}{v_1} \\sim \\xi \\quad \\text{as} \\quad \\xi\\rightarrow 0"
},
{
"math_id": 68,
"text": "v \\sim r t^{1/5} \\quad \\text{as} \\quad r\\rightarrow 0."
},
{
"math_id": 69,
"text": "p_1\\sim [(\\gamma+1)/(\\gamma-1)]p_0"
},
{
"math_id": 70,
"text": "c_0"
},
{
"math_id": 71,
"text": "(E/p_0)^{1/3}"
},
{
"math_id": 72,
"text": "(E/p_0)^{1/3}/c_0"
},
{
"math_id": 73,
"text": "r\\left(\\frac{\\rho_0}{E t^2}\\right)^{1/4}."
}
] |
https://en.wikipedia.org/wiki?curid=65783919
|
65786994
|
Spread (projective geometry)
|
A frequently studied problem in finite geometry is to identify ways in which an object can be covered by other simpler objects such as points, lines, and planes. In projective geometry, a specific instance of this problem that has numerous applications is determining whether, and how, a projective space can be covered by pairwise disjoint subspaces which have the same dimension; such a partition is called a spread. Specifically, a spread of a projective space formula_0, where formula_1 is an integer and formula_2 a division ring, is a set of formula_3-dimensional subspaces, for some formula_4 such that every point of the space lies in exactly one of the elements of the spread.
Spreads are particularly well-studied in projective geometries over finite fields, though some notable results apply to infinite projective geometries as well. In the finite case, the foundational work on spreads appears in André and independently in Bruck-Bose in connection with the theory of translation planes. In these papers, it is shown that a spread of formula_3-dimensional subspaces of the finite projective space formula_5 exists if and only if formula_6.
Spreads and translation planes.
For all integers formula_7, the projective space formula_8 always has a spread of formula_9-dimensional subspaces, and in this section the term spread refers to this specific type of spread; spreads of this form may (and frequently do) occur in infinite projective geometries as well. These spreads are the most widely studied in the literature, due to the fact that every such spread can be used to create a translation plane using the André/Bruck-Bose construction.
Reguli and regular spreads.
Let formula_10 be the projective space formula_11 for formula_7 an integer, and formula_2 a division ring. A "regulus" formula_12 in formula_10 is a collection of pairwise disjoint formula_9-dimensional subspaces with the following properties:
Any three pairwise disjoint formula_9-dimensional subspaces in formula_10 lie in a unique regulus. A spread formula_13 of formula_10 is "regular" if for any three distinct formula_9-dimensional subspaces of formula_13, all the members of the unique regulus determined by them are contained in formula_13. Regular spreads are significant in the theory of translation planes, in that they generate Moufang planes in general, and Desarguesian planes in the finite case when the order of the ambient field is greater than formula_14. All spreads of formula_15 are trivially regular, since a regulus only contains three elements.
Constructing a regular spread.
Construction of a regular spread is most easily seen using an algebraic model. Letting formula_16 be a formula_17-dimensional vector space over a field formula_18, one can model the formula_19-dimensional subspaces of formula_20 using the formula_21-dimensional subspaces of formula_16; this model uses homogeneous coordinates to represent points and hyperplanes. Incidence is defined by intersection, with subspaces intersecting in only the zero vector considered disjoint; in this model, the zero vector of formula_16 is effectively ignored.
Let formula_18 be a field and formula_22 an formula_9-dimensional extension field of formula_18. Consider formula_23 as a formula_24-dimensional vector space over formula_18, which provides a model for the projective space formula_25 as above. Each element of formula_16 can be written uniquely as formula_26 where formula_27. A regular spread is given by the set of formula_9-dimensional projective spaces defined by formula_28, for each formula_29, together with formula_30.
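As a concrete illustration of this construction, the sketch below (my own, using an ad hoc two-bit encoding of GF(4)) builds the regular spread of PG(3,2) from formula_23 with formula_22 = GF(4) and formula_18 = GF(2), and checks that the subspaces J(k) and J(∞) cover every nonzero vector exactly once; over GF(2) the nonzero vectors correspond one-to-one with projective points.

```python
def gf4_mul(u, v):
    """Multiply two GF(4) elements encoded as 2-bit integers, reducing by x^2 + x + 1."""
    prod = 0
    for i in range(2):
        if (v >> i) & 1:
            prod ^= u << i
    if prod & 0b100:        # reduce using x^2 = x + 1
        prod ^= 0b111
    return prod & 0b11

E = range(4)
# J(k) = {(x, kx)} for k in GF(4), recorded by their nonzero vectors, plus J(infinity).
spread = [frozenset((x, gf4_mul(k, x)) for x in E if x) for k in E]
spread.append(frozenset((0, y) for y in E if y))

nonzero = {(x, y) for x in E for y in E if (x, y) != (0, 0)}
covered = [v for line in spread for v in line]
assert len(spread) == 5 and len(set(covered)) == len(covered) == len(nonzero) == 15
print("5 lines, each with", len(spread[0]), "points, partitioning PG(3,2)")
```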
Constructing spreads.
Spread sets.
The construction of a regular spread above is an instance of a more general construction of spreads, which uses the fact that multiplication by a fixed element of formula_22 is a linear transformation of formula_22 considered as a vector space over formula_18. Since formula_22 is a finite formula_9-dimensional extension of formula_18, a linear transformation from formula_22 to itself can be represented by an formula_31 matrix with entries in formula_18. A "spread set" is a set formula_13 of formula_31 matrices over formula_18 with the following properties: for any two distinct matrices formula_32 and formula_33 in formula_13, the difference formula_34 is nonsingular; and for every pair formula_35 whose first element is nonzero, there is a unique formula_36 such that formula_37.
In the finite case, where formula_22 is the field of order formula_38 for some prime power formula_39, the last condition is equivalent to the spread set containing formula_38 matrices. Given a spread set formula_13, one can create a spread as the set of formula_9-dimensional projective spaces defined by formula_40, for each formula_41, together with formula_30.
As a specific example, the following nine matrices represent formula_42 as 2 × 2 matrices over formula_43 and so provide a spread set of formula_44.
formula_45
Another example of a spread set yields the Hall plane of order 9:
formula_46
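Both displayed sets can be checked mechanically against the nonsingular-difference condition. The sketch below (my own; the matrices are transcribed from the two displays above) reduces each determinant mod 3:

```python
from itertools import combinations

def is_spread_set(matrices, p=3):
    """True if every difference of two distinct matrices has nonzero determinant mod p."""
    for a, b in combinations(matrices, 2):
        m00, m01 = (a[0][0] - b[0][0]) % p, (a[0][1] - b[0][1]) % p
        m10, m11 = (a[1][0] - b[1][0]) % p, (a[1][1] - b[1][1]) % p
        if (m00 * m11 - m01 * m10) % p == 0:
            return False
    return True

gf9_set = [((0,0),(0,0)), ((1,0),(0,1)), ((2,0),(0,2)), ((0,1),(2,0)), ((1,1),(2,1)),
           ((2,1),(2,2)), ((0,2),(1,0)), ((1,2),(1,1)), ((2,2),(1,2))]
hall_set = [((0,0),(0,0)), ((1,0),(0,1)), ((2,0),(0,2)), ((1,1),(1,2)), ((2,2),(2,1)),
            ((0,1),(2,0)), ((0,2),(1,0)), ((1,2),(2,2)), ((2,1),(1,1))]

print(is_spread_set(gf9_set), is_spread_set(hall_set))  # True True
```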
Modifying spreads.
One common approach to creating new spreads is to start with a regular spread and modify it in some way. The techniques presented here are some of the more elementary examples of this approach.
Spreads of 3-space.
One can create new spreads by starting with a spread and looking for a "switching set", a subset of its elements that can be replaced with an alternate set of pairwise disjoint subspaces of the correct dimension. In formula_47, a regulus forms a switching set, as the set of transversals of a regulus formula_12 also forms a regulus, called the "opposite regulus" of formula_12. Removing the lines of a regulus in a spread and replacing them with the opposite regulus produces a new spread which is often non-isomorphic to the original. This process is a special case of a more general process called "derivation" or "net replacement".
Starting with a regular spread of formula_48 and reversing any regulus produces a spread that yields a Hall plane. In more generality, the process can be applied independently to any collection of reguli in a regular spread, yielding a "subregular spread"; the resulting translation plane is called a subregular plane. The André planes form a special subclass of subregular planes, of which the Hall planes are the simplest examples, arising by replacing a single regulus in a regular spread.
More complex switching sets have been constructed. Bruen has explored the concept of a "chain" of reguli in a regular spread of formula_48, formula_39 odd, namely a set of formula_49 reguli which pairwise meet in exactly 2 lines, so that every line contained in a regulus of the chain is contained in exactly two distinct reguli of the chain. Bruen constructed an example of a chain in the regular spread of formula_50, and showed that it could be replaced by taking the union of exactly half of the lines from the opposite regulus of each regulus in the chain. Numerous examples of Bruen chains have appeared in the literature since, and Heden has shown that any Bruen chain is replaceable using opposite half-reguli. Chains are known to exist in a regular spread of formula_48 for all odd prime powers formula_39 up to 37, except 29, and are known not to exist for formula_51. It is conjectured that no additional Bruen chains exist.
Baker and Ebert generalized the concept of a chain to a "nest", which is a set of reguli in a regular spread such that every line contained in a regulus of the nest is contained in exactly two distinct reguli of the nest. Unlike a chain, two reguli in a nest are not required to meet in a pair of lines. Unlike chains, a nest in a regular spread need not be replaceable, however several infinite families of replaceable nests are known.
Higher-dimensional spreads.
In higher dimensions a regulus cannot be reversed, because its transversals do not have the correct dimension. There exist analogs of reguli, called "norm surfaces", which can be reversed. The higher-dimensional André planes arise from spreads constructed by reversing these norm surfaces, and there also exist analogs of subregular spreads which do not give rise to André planes.
Geometric techniques.
There are several known ways to construct spreads of formula_48 from other geometrical objects without reference to an initial regular spread. Some well-studied approaches to this are given below.
Flocks of quadratic cones.
In formula_48, a "quadratic cone" is the union of the set of lines containing a fixed point P (the "vertex") and a point on a conic in a plane not passing through P. Since a conic has formula_53 points, a quadratic cone has formula_54 points. As with traditional geometric conic sections, a plane of formula_48 can meet a quadratic cone in either a point, a conic, a line or a line pair. A "flock" of a quadratic cone is a set of formula_39 planes whose intersections with the quadratic cone are pairwise disjoint conics. The classic construction of a flock is to pick a line formula_55 that does not meet the quadratic cone, and take the formula_39 planes through formula_55 that do not contain the vertex of the cone; such a flock is called "linear".
Fisher and Thas show how to construct a spread of formula_48 from a flock of a quadratic cone using the Klein correspondence, and show that the resulting spread is regular if and only if the initial flock is linear. Many infinite families of flocks of quadratic cones are known, as are numerous sporadic examples.
Every spread arising from a flock of a quadratic cone is the union of formula_39 reguli which all meet in a fixed line formula_55. Much like with a regular spread, any of these reguli can be replaced with its opposite to create several potentially new spreads.
Hyperbolic fibrations.
In formula_48 a hyperbolic fibration is a partition of the space into formula_52 pairwise disjoint hyperbolic quadrics and two lines disjoint from all of the quadrics and each other. Since a hyperbolic quadric consists of the points covered by a regulus and its opposite, a hyperbolic fibration yields formula_56 different spreads.
All spreads yielding André planes, including the regular spread, are obtainable from a hyperbolic fibration (specifically an algebraic pencil generated by any two of the quadrics), as articulated by André. Using nest replacement, Ebert found a family of spreads in which a hyperbolic fibration was identified. Baker, et al. provide an explicit example of a construction of a hyperbolic fibration. A much more robust source of hyperbolic fibrations was identified by Baker, et al., where the authors developed a correspondence between flocks of quadratic cones and hyperbolic fibrations; interestingly, the spreads generated by a flock of a quadratic cone are not generally isomorphic to the spreads generated from the corresponding hyperbolic fibration.
Subgeometry partitions.
Hirschfeld and Thas note that for any odd integer formula_57, a partition of formula_58 into subgeometries isomorphic to formula_59 gives rise to a spread of formula_60, where each subgeometry of the partition corresponds to a regulus of the new spread.
The "classical" subgeometry partitions of formula_58 can be generated using suborbits of a Singer cycle, but this simply generates a regular spread. Yff published the non-classical subgeometry partition, namely a partition of formula_61 into 7 copies of formula_62, that admit a cyclic group permuting the subplanes. Baker, et al. provide several infinite families of partitions of formula_63 into subplanes, with the same cyclic group action.
Partial spreads.
A "partial spread" of a projective space formula_0 is a set of pairwise disjoint formula_3-dimensional subspaces in the space; hence a spread is just a partial spread where every point of the space is covered. A partial spread is called "complete" or "maximal" if there is no larger partial spread that contains it; equivalently, there is no formula_3-dimensional subspace disjoint from all members of the partial spread. As with spreads, the most well-studied case is partial spreads of lines of the finite projective space formula_48, where a full spread has size formula_64. Mesner showed that any partial spread of lines in formula_48 with size greater than formula_65 cannot be complete; indeed, it must be a subset of a unique spread. For a lower bound, Bruen showed that a complete partial spread of lines in formula_48 with size at most formula_66 lines cannot be complete; there will necessarily be a line that can be added to a partial spread of this size. Bruen also provides examples of complete partial spreads of lines in formula_48 with sizes formula_67 and formula_68 for all formula_69.
Spreads of classical polar spaces.
The classical polar spaces are all embedded in some projective space formula_0 as the set of totally isotropic subspaces of a sesquilinear or quadratic form on the vector space underlying the projective space. A particularly interesting class of partial spreads of formula_0 are those that consist strictly of maximal subspaces of a classical polar space embedded in the projective space. Such partial spreads that cover all of the points of the polar space are called "spreads" of the polar space.
From the perspective of the theory of translation planes, the symplectic polar space is of particular interest, as its point set consists of all of the points of formula_11, and its maximal subspaces have dimension formula_9. Hence a spread of the symplectic polar space is also a spread of the entire projective space, and can be used as noted above to create a translation plane. Several examples of symplectic spreads are known; see Ball, et al.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "PG(d,K)"
},
{
"math_id": 1,
"text": "d \\geq 1"
},
{
"math_id": 2,
"text": "K"
},
{
"math_id": 3,
"text": "r"
},
{
"math_id": 4,
"text": "0 < r < d"
},
{
"math_id": 5,
"text": "PG(d,q)"
},
{
"math_id": 6,
"text": " r+1 \\mid d+1"
},
{
"math_id": 7,
"text": "n \\geq 1"
},
{
"math_id": 8,
"text": "PG(2n+1,q)"
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": "\\Sigma"
},
{
"math_id": 11,
"text": "PG(2n+1,K)"
},
{
"math_id": 12,
"text": "R"
},
{
"math_id": 13,
"text": "S"
},
{
"math_id": 14,
"text": "2"
},
{
"math_id": 15,
"text": "PG(2n+1,2)"
},
{
"math_id": 16,
"text": "V"
},
{
"math_id": 17,
"text": "(2n+2)"
},
{
"math_id": 18,
"text": "F"
},
{
"math_id": 19,
"text": "k"
},
{
"math_id": 20,
"text": "PG(2n+1,F)"
},
{
"math_id": 21,
"text": "(k+1)"
},
{
"math_id": 22,
"text": "E"
},
{
"math_id": 23,
"text": "V = E \\oplus E"
},
{
"math_id": 24,
"text": "2n"
},
{
"math_id": 25,
"text": "PG(2n-1,F)"
},
{
"math_id": 26,
"text": "(x,y)"
},
{
"math_id": 27,
"text": "x,y \\in E"
},
{
"math_id": 28,
"text": "J(k) = \\{(x,kx):x \\in E\\}"
},
{
"math_id": 29,
"text": "k \\in E"
},
{
"math_id": 30,
"text": "J(\\infty) = \\{(0,y):y \\in E\\}"
},
{
"math_id": 31,
"text": "n \\times n"
},
{
"math_id": 32,
"text": "X"
},
{
"math_id": 33,
"text": "Y"
},
{
"math_id": 34,
"text": "X-Y"
},
{
"math_id": 35,
"text": "a,b \\in E"
},
{
"math_id": 36,
"text": "X \\in S"
},
{
"math_id": 37,
"text": "aX = b"
},
{
"math_id": 38,
"text": "q^n"
},
{
"math_id": 39,
"text": "q"
},
{
"math_id": 40,
"text": "J(k) = \\{(x,xM):x \\in E\\}"
},
{
"math_id": 41,
"text": "M \\in S"
},
{
"math_id": 42,
"text": "GF(9)"
},
{
"math_id": 43,
"text": "GF(3)"
},
{
"math_id": 44,
"text": "AG(2, 9)"
},
{
"math_id": 45,
"text": " \\left [ \\begin{matrix} 0 & 0 \\\\ 0 & 0 \\end{matrix} \\right ], \\left [ \\begin{matrix} 1 & 0 \\\\ 0 & 1 \\end{matrix} \\right ], \\left [ \\begin{matrix} 2 & 0 \\\\ 0 & 2 \\end{matrix} \\right ], \\left [ \\begin{matrix} 0 & 1 \\\\ 2 & 0 \\end{matrix} \\right ], \\left [ \\begin{matrix} 1 & 1 \\\\ 2 & 1 \\end{matrix} \\right ], \\left [ \\begin{matrix} 2 & 1 \\\\ 2 & 2 \\end{matrix} \\right ], \\left [ \\begin{matrix} 0 & 2 \\\\ 1 & 0 \\end{matrix} \\right ], \\left [ \\begin{matrix} 1 & 2 \\\\ 1 & 1\\end{matrix} \\right ], \\left [ \\begin{matrix} 2 & 2 \\\\ 2 & 1 \\end{matrix} \\right ]"
},
{
"math_id": 46,
"text": " \\left [ \\begin{matrix} 0 & 0 \\\\ 0 & 0 \\end{matrix} \\right ], \\left [ \\begin{matrix} 1 & 0 \\\\ 0 & 1 \\end{matrix} \\right ], \\left [ \\begin{matrix} 2 & 0 \\\\ 0 & 2 \\end{matrix} \\right ], \\left [ \\begin{matrix} 1 & 1 \\\\ 1 & 2 \\end{matrix} \\right ], \\left [ \\begin{matrix} 2 & 2 \\\\ 2 & 1 \\end{matrix} \\right ], \\left [ \\begin{matrix} 0 & 1 \\\\ 2 & 0 \\end{matrix} \\right ], \\left [ \\begin{matrix} 0 & 2 \\\\ 1 & 0 \\end{matrix} \\right ], \\left [ \\begin{matrix} 1 & 2 \\\\ 2 & 2\\end{matrix} \\right ], \\left [ \\begin{matrix} 2 & 1 \\\\ 1 & 1 \\end{matrix} \\right ]"
},
{
"math_id": 47,
"text": "PG(3,K)"
},
{
"math_id": 48,
"text": "PG(3,q)"
},
{
"math_id": 49,
"text": "(q+3)/2"
},
{
"math_id": 50,
"text": "PG(3,5)"
},
{
"math_id": 51,
"text": "q \\in \\{29,41,43,47,49\\}"
},
{
"math_id": 52,
"text": "q-1"
},
{
"math_id": 53,
"text": "q+1"
},
{
"math_id": 54,
"text": "q(q+1)+1"
},
{
"math_id": 55,
"text": "m"
},
{
"math_id": 56,
"text": "2^{q-1}"
},
{
"math_id": 57,
"text": "n \\geq 3"
},
{
"math_id": 58,
"text": "PG(n-1,q^2)"
},
{
"math_id": 59,
"text": "PG(n-1,q)"
},
{
"math_id": 60,
"text": "PG(2n-1,q)"
},
{
"math_id": 61,
"text": "PG(2,9)"
},
{
"math_id": 62,
"text": "PG(2,3)"
},
{
"math_id": 63,
"text": "PG(2,q^2)"
},
{
"math_id": 64,
"text": "q^2+1"
},
{
"math_id": 65,
"text": "q^2 - \\sqrt{q}"
},
{
"math_id": 66,
"text": "q+\\sqrt{q}"
},
{
"math_id": 67,
"text": "q^2-q+1"
},
{
"math_id": 68,
"text": "q^2-q+2"
},
{
"math_id": 69,
"text": "q > 2"
}
] |
https://en.wikipedia.org/wiki?curid=65786994
|
65787329
|
Shuffle-exchange network
|
Type of multigraph
In graph theory, the shuffle-exchange network is an undirected cubic multigraph, whose vertices represent binary sequences of a given length and whose edges represent two operations on these sequences: circular shifts and flipping the lowest-order bit.
Definition.
In the version of this network introduced by Tomas Lang and Harold S. Stone in 1976, simplifying earlier work of Stone in 1971, the shuffle-exchange network of order formula_0 consisted of an array of formula_1 cells, numbered by the formula_1 different binary numbers that can be represented with formula_0 bits. These cells were connected by communications links in two different patterns: "exchange" links in which each cell is connected to the cell numbered with the opposite value in its lowest-order bit, and "shuffle" links in which each cell is connected to the cell whose number is obtained by a circular shift that shifts every bit to the next more significant position, except for the highest-order bit which shifts into the lowest-order position. The "exchange" links are bidirectional, while the "shuffle" links can only transfer information in one direction, from a cell to its circular shift.
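As a small illustration, both operations can be written directly on the d-bit cell numbers; the sketch below (function names are mine) prints the exchange and shuffle neighbours of every cell for d = 3:

```python
def exchange(cell):
    return cell ^ 1                              # flip the lowest-order bit

def shuffle(cell, d):
    top = (cell >> (d - 1)) & 1                  # highest-order bit wraps around
    return ((cell << 1) & ((1 << d) - 1)) | top  # circular left shift of d bits

d = 3
for cell in range(1 << d):
    print(f"{cell:0{d}b} -> exchange {exchange(cell):0{d}b}, shuffle {shuffle(cell, d):0{d}b}")
```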
Subsequent work on networks with this topology removed the distinction between unidirectional and bidirectional communication links, allowing information to flow in either direction across each link.
Applications.
The advantage of this communications pattern, over earlier methods, is that it allows information to be rapidly transferred through a small number of steps from any vertex in the network to any other vertex, while only requiring a single bit of control information (which of the two communications links to use) for each communications step. Fast parallel algorithms for basic problems including sorting, matrix multiplication, polynomial evaluation, and Fourier transforms are known for parallel systems using this network.
Layout area.
If this network is given a straightforward layout in the integer lattice, with the vertices placed on a line in numerical order, with each lattice edge carrying part of at most one communication link, and with each vertex or crossing of the network placed at a lattice point, the layout uses area formula_2, quadratic in its number of vertices. However, more compact and asymptotically optimal layouts with area formula_3 were described by F. Thomson Leighton in his 1981 doctoral dissertation.
Related networks.
A related communications network, the "omega network" or multi-stage shuffle-exchange network, consists of a given number of stages, each consisting of formula_1 vertices, with the shuffle links connecting pairs of vertices in consecutive stages and the exchange links connecting pairs of vertices in the same stage as each other.
The same operations on binary words, of rotation and flipping the first bit, can also be used to generate the cube-connected cycles, a different cubic parallel communications network with a greater number of vertices. Instead of having the binary words themselves as its vertices, the vertices of the cube-connected cycles represent operations on words that can be generated by rotation and flipping, and the edges represent the composition of one of these operations with an additional rotation or flip.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "d"
},
{
"math_id": 1,
"text": "2^d"
},
{
"math_id": 2,
"text": "O(2^{2d})"
},
{
"math_id": 3,
"text": "O(2^{2d}/d^2)"
}
] |
https://en.wikipedia.org/wiki?curid=65787329
|
65795325
|
Proportional symbol map
|
Thematic map based on symbol size
A proportional symbol map or proportional point symbol map is a type of thematic map that uses map symbols that vary in size to represent a quantitative variable. For example, circles may be used to show the location of cities within the map, with the size of each circle sized proportionally to the population of the city. Typically, the size of each symbol is calculated so that its area is mathematically proportional to the variable, but more indirect methods (e.g., categorizing symbols as "small," "medium," and "large") are also used.
While all dimensions of geometric primitives (i.e., points, lines, and regions) on a map can be resized according to a variable, this term is generally only applied to point symbols, and different design techniques are used for other dimensionalities. A cartogram is a map that distorts region size proportionally, while a flow map represents lines, often using the width of the symbol (a form of size) to represent a quantitative variable. That said, there are gray areas between these three types of proportional map: a Dorling cartogram essentially replaces the polygons of area features with a proportional point symbol (usually a circle), while a linear cartogram is a kind of flow map that distorts the length of linear features proportional to a variable (often travel time).
History.
Arthur H. Robinson credited Henry Drury Harness with the first map to clearly attempt to portray point sizes proportionally, on an 1838 map of cargo traffic in Ireland (with proportional widths) that showed city population. The technique was soon replicated and enhanced by other cartographers. The official report of the 1851 Census of Great Britain included several maps drawn by a W. Bone, showing significant towns sized proportionally to population (apparently range-graded), including one of the first useful legends. Charles Joseph Minard produced several proportional symbol maps, including the innovations of using them to represent regions rather than points, and incorporating color and statistical charts in the point symbols.
As cartography arose as an academic discipline in the early 20th Century, textbooks included detailed instructions on constructing proportional symbol maps, including calculating circle sizes. Several cartography professors began to experiment with new mapping techniques, notably the use of spheres with a proportional volume rather than area by Sten de Geer (1922) and Guy-Harold Smith (1928), and the use of transparency to resolve overlapping circles by Smith (1928) and Floyd Stilgenbauer (1932), the latter of which included a unique legend.
The rise of the map communication paradigm in academic cartography led to a number of psychophysical experiments on the effectiveness of map symbols. One of the earliest and most well-known of these studies was the PhD dissertation of James J. Flannery, who studied the ability of people to judge the relative areas of proportional circles, finding that Stevens's power law applied such that map readers underestimated circle area by a fairly predictable amount, leading to the Flannery Scaling Adjustment still in use today.
Starting in the early 1990s, almost all proportional symbol maps have been created using geographic information system (GIS) and graphics software, with increasing capability for professional design. The rise of the Internet and web mapping, especially modern tiled services with API access starting in 2005, has enabled the creation of interactive proportional symbol maps, including in cloud mapping platforms such as Esri ArcGIS Online and CARTO.
Point locations.
Proportional symbol maps represent a set of related geographic phenomena (e.g., cities) as point symbols. These point locations can have two different sources and meanings:
Variables.
The second part of the proportional symbol map is the choice of variable to represent by symbol size. The best variables to use in this technique are ones in which size will be interpreted intuitively by most map readers. In "Semiology of Graphics", Jacques Bertin argued that of all of his visual variables, size was most intimately tied to a single interpretation. That is, a larger symbol looks like more of something and thus more important, and it is very difficult to interpret it any other way (e.g., as qualitatively different nominal categories). A second tendency is for users to interpret relative sizes: a symbol that is twice as large (in area or length) will be interpreted as representing twice the quantity. The absence of a circle would be interpreted as the complete absence of the phenomenon, and negative values cannot be shown.
Based on these principles, only "ratio" variables (per Stevens' levels of measurement) are appropriate to represent with size, specifically those in which negative values are not possible. Within this set, the most intuitive are those that measure the total amount/count/volume of something, such as total population, volume or weight of agricultural production, or shipping tonnage. These are all "spatially extensive" variables, which happen to be the most problematic choices for choropleth maps, making these two thematic mapping techniques complementary.
Some ratio variables can be appropriate for both choropleth and proportional symbol maps, especially those that are "spatially intensive" (i.e., fields) but still represent an amount or count in some way. A common type of variable that meets these criteria is an "allotment", calculating how one amount is theoretically distributed among individuals, such as GDP per capita or the crude birth rate (births per 1,000 population). Other non-negative spatially intensive ratio variables can technically be mapped as proportional symbols, such as proportions (e.g., Percent ages 0–17), but can lead to misinterpretations because they do not represent amounts (although proportions can be represented using proportional pie charts). Ordinal qualitative variables can also be appropriate, if the goal is a simple representation of "small," "medium," and "large."
Variables that are inappropriate for proportional symbols include those that may include negative values (e.g., population growth rate) and qualitative categories. Another consideration in selecting a variable is the degree of variance in the statistical distribution. If there is a high degree of variation (i.e., a ratio of high values to low values of more than 1,000:1), the largest symbols will be overcrowded and entirely overlapping while the smallest symbols will be nearly invisible. If there is a low degree of variation (i.e., a ratio of less than 10), most of the symbols will look nearly the same size and the map will be relatively uninformative.
Symbol design.
The primary goal in selecting a point symbol to use in a proportional symbol map is that users should be able to accurately judge sizes, both in comparison to the legend to estimate data values, and in comparison to each other to judge relative patterns. Secondary goals include aesthetic appeal and an intuitive shape that is easy to interpret.
The point symbols that represent each data value can be of any shape. In most proportional symbol maps, the shape does not vary, so it does not represent any information on its own. Differences in shape can be used to represent a nominal variable (say, circles for wheat production and squares for maize production), but this can make judging relative sizes more difficult. "Pictorial" or "pictographic" symbols, which use an iconic shape (usually a silhouette) that evokes the represented phenomenon (e.g., a shaft of wheat to represent wheat production) can give the map an intuitive look, but their complexity can increase the overall feel of clutter, and it can be more difficult to judge their size than simple "geometric" shapes like circles or squares, especially if they are in a congested area where individual symbols overlap. This difference is lessened if the shape is compact (e.g., more like a geometric shape).
Among geometric symbols, circles have been the predominant shape since this type of thematic map was invented. Several advantages of circles over other geometric shapes have been cited, such as:
However, disadvantages of circles have also been raised, especially that circles are aesthetically uninteresting, and that psychophysical studies have suggested that people are worse at judging the relative areas of circles than other shapes, especially squares. The best way to increase the reader's ability to correctly estimate the size of a circle is through effective legend design, including providing examples of different sized circles which will be shown in the map.
Three dimensional symbols, such as spheres or cubes, are sometimes used. They can add an aesthetic appeal, but they were originally designed for their function, to allow large symbols to be smaller because the value would be proportional to volume rather than area. However, it appears that most map readers will interpret a three dimensional symbol by projected area, not by volume, so they are only useful as decorative two dimensional symbols.
Isotype maps.
A very different approach to proportional symbols is the isotype symbol, named after an approach to information graphics developed by Austrian Otto Neurath in the 1930s. This uses a composite point symbol composed of a multitude of small point symbols (pictographic or geometric) to represent the value of the variable. The technique is most effective when the variable represents a relatively small number of distinct individuals, rather than a mass amount (which is better visualized as a single mass shape, like a circle). Eduard Imhof argued against this technique (which he called "count frame diagrams") for point locations, on the grounds that it tends to be much larger and more complex than a simple point symbol, covering more of the underlying geography; however, he found them effective on region locations, especially if the count consists of different types of individuals.
Chart maps.
A strategy to represent complex information is to create a statistical chart of related attributes for each feature, and use the entire chart as a point symbol, usually using linear (height/width) or areal scaling of the entire chart according to an overall total amount. This approach is thus a form of multivariate map. The most common technique, first appearing in the 1850s, is to start with a proportional circle sized according to some total amount, and turn it into a pie chart to visualize the relative composition of the total, such as the percentage of a total population belonging to various ethnic groups. Other options include bar charts and line charts, which are often used to represent trends over time or relative amounts of related variables for each feature (e.g., agricultural products).
Scaling techniques.
Theoretically, the proportional symbol map works because the "size" of the symbol appears to be proportional to its value, with size generally interpreted as two-dimensional area. However, making this work in practice can lead to some challenges, so several methods of scaling have been developed.
Absolute scaling.
This method calculates the exact area of the symbol and resizes it so that its area is mathematically directly proportional to the represented value. For example, if circles are being used to represent GDP on a global map, then a country with a value of 58 would have a circle with twice the area of the circle for a country with a value of 29.
If circles are being used, the sizes of all symbols are calculated based on a chosen size for any one of the symbols (often, but not necessarily, the minimum value). Say the cartographer decides that a value "v"0 will have a circle of radius "r"0. Then for any other value "v", the radius "r" is determined by setting the areas in direct proportion to the values:
formula_0
This can then be solved for "r":
formula_1
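A brief sketch of this calculation (the reference value and radius below are arbitrary design choices, not prescribed constants):

```python
def circle_radius(value, v0=29.0, r0=4.0):
    """Absolute scaling: area proportional to value, so radius grows with the square root."""
    return r0 * (value / v0) ** 0.5

print(circle_radius(29))  # 4.0
print(circle_radius(58))  # about 5.66: twice the area, not twice the radius
```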
Apparent magnitude (Flannery) scaling.
In his 1956 PhD Dissertation, James Flannery conducted psychophysical experiments into how well map readers judged the relative size of proportional circles. He found that it conformed to a response power law that was soon after formalized (in general) as Stevens's power law. While people are fairly adept at judging relative length, they are typically much worse at judging relative area. In testing circles, Flannery's subjects underestimated the ratio of area between large circles and smaller circles by a fairly consistent amount. He and Arthur H. Robinson immediately began encouraging cartographers to compensate for this effect by increasing the difference between circle sizes accordingly, using a technique called "apparent magnitude scaling". According to Flannery's results, this can be accomplished by increasing the exponent of the scaling factor slightly, replacing the above formula with the following:
formula_2
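The adjustment simply replaces the exponent 0.5 with 0.5716 in the same calculation, so larger values receive slightly larger circles than absolute scaling would give; a brief sketch using the same arbitrary reference values as above:

```python
def flannery_radius(value, v0=29.0, r0=4.0):
    """Apparent-magnitude (Flannery) scaling: exponent 0.5716 instead of 0.5."""
    return r0 * (value / v0) ** 0.5716

print(flannery_radius(58))  # about 5.94, versus about 5.66 under absolute scaling
```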
The acceptance of Flannery's method for circles has been mixed. Various studies have resulted in different magnitudes of the effect, and some have argued that the effect is not large enough to require the effort of compensation, that map readers can make adequate judgments with absolute scaling, as long as a clear legend is provided to help.
Flannery's research focused only on circles, and subsequent research has found other symbol types to have different magnitudes of areal underestimation. Squares have been found to be judged fairly accurately, but for spheres and other three-dimensional shapes, volume is estimated extremely poorly; basically, readers judge their two-dimensional area.
Interpolated scaling.
One criticism of the absolute scaling method is that it does not work well for very large ranges of values, in which the largest symbols will be overwhelming and the smallest symbols will be nearly invisible. Some software, such as Esri ArcGIS Pro, allows the symbol size to be controlled at both ends of the value range. Rather than calculating a true proportionality, the area of the symbol for each intervening value is calculated using linear interpolation:
formula_3
in which "A" is the symbol area, "v" is the value of the variable, "L" is the largest value, "S" is the smallest value, and "i" is the value with a symbol size to be determined. The advantage of this method is complete control over the entire range of symbol sizes, but true proportionality is lost, and judgments of relative size can only be made by frequent reference to the legend.
Range grading.
In this method, the size of the symbol is not directly mathematically connected to the value. Instead, the range of possible values is classified as it would be in a choropleth map, and a single size of symbol is assigned to each class. This allows the cartographer to have more control over the range of sizes, and is therefore used when absolute scaling produces an undesirable range of sizes. However, it has inherent issues, in that it makes similar values appear identical, and that apparent size differences cannot be interpreted as a ratio; that is, a value of one feature being twice that of another feature is not necessarily represented as a symbol of twice the size.
Managing symbol overlap.
Most proportional symbol maps will have occasional overlap between symbols, typically around the largest symbols or in regions with a high density of features. This can lead to errors in size interpretations, and when a mass of symbols obscures too much of the underlying geographical reference map, it can be difficult to recognize which feature each symbol is representing. That said, there is general consensus that some overlap is acceptable, because eliminating all overlap would often require reducing symbol size so much that it would be difficult to judge size, or reducing the number of features to the degree that the map would be uninformative. A common rule of thumb is that the scaling should be large enough that some symbols overlap, but the centers of most symbols do not lie over another symbol. In situations of overlap, smaller symbols are generally drawn on top of larger symbols, because the smaller symbol will obscure a smaller proportion of the larger symbol.
When overlap occurs, it is crucial that the individual symbols can be distinctly recognized and the relative sizes of each symbol judged. This is typically accomplished by outlining each symbol (usually either a darker or lighter shade than the main symbol), or making the symbols semi-transparent; research has shown both methods to be effective for discriminating symbols and judging sizes as long as there is not too much overlap; map readers are generally divided in their aesthetic preferences for one or the other.
Legend.
The primary purpose of the legend for a proportional circle map, as with any thematic map, is for the map reader to clearly understand the meaning of the features and variable being represented, and to assist in the interpretation of the particular values represented by each symbol. In this case, it is not feasible to show every possible symbol size (although some have tried, using wedge-shaped continuous legends), so most proportional symbol legends include a set of sample sizes with their respective values, usually the largest value, one of the smallest values, and one or more in between. Usually, these samples are placed in a "linear form", in a vertical or horizontal line, or a "nested form", in which the smaller symbols are placed on top of the larger symbols (usually aligned at their bottoms, not centered).
|
[
{
"math_id": 0,
"text": "\\frac{v}{v_0} = \\frac{a}{a_0} = \\frac{\\pi{r^2}}{\\pi{r_0^2}}"
},
{
"math_id": 1,
"text": "{r} = {r_0}\\left(\\frac{v}{v_0}\\right)^{0.5}"
},
{
"math_id": 2,
"text": "{r} = {r_0}\\left(\\frac{v}{v_0}\\right)^{0.5716}"
},
{
"math_id": 3,
"text": "\\frac{A_i-A_S}{A_L-A_S} = \\frac{v_i-v_S}{v_L-v_S}"
}
] |
https://en.wikipedia.org/wiki?curid=65795325
|