Columns: id — string (2–8 chars); title — string (1–130 chars); text — string (0–252k chars); formulas — list (1–823 items); url — string (38–44 chars).
15655832
Electronics and Radar Development Establishment
Indian defence research laboratory Electronics and Radar Development Establishment (LRDE) is a laboratory of the Defence Research & Development Organisation (DRDO), India. Located in C.V. Raman Nagar, Bengaluru, Karnataka, its primary function is research and development of radars and related technologies. It was founded by S. P. Chakravarti, the father of Electronics and Telecommunication engineering in India, who also founded DLRL and DRDL. LRDE is sometimes mis-abbreviated as "ERDE". To distinguish between "Electrical" and "Electronic", the latter is abbreviated with the "L" of its Greek root ("elektron"). The same approach is used for the DLRL. The LRDE is India's premier radar design and development establishment and is deeply involved in Indian radar efforts. Its primary production partners include Bharat Electronics Limited (BEL) and various private firms like CoreEL Technologies in Bangalore, Mistral Solutions in Bengaluru, Astra Microwave in Hyderabad and Data Patterns in Chennai. LRDE Radars. The DRDO's initial projects included short range 2D systems (Indra-1), but it now manufactures high power 3D systems, airborne surveillance and fire control radars as well. The publicly known projects include: Apart from the above, the DRDO also has several other radar systems currently under development or in trials. The systems for which information is publicly available include: References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "m^2" } ]
https://en.wikipedia.org/wiki?curid=15655832
15657106
Log-logistic distribution
Continuous probability distribution for a non-negative random variable In probability and statistics, the log-logistic distribution (known as the Fisk distribution in economics) is a continuous probability distribution for a non-negative random variable. It is used in survival analysis as a parametric model for events whose rate increases initially and decreases later, as, for example, mortality rate from cancer following diagnosis or treatment. It has also been used in hydrology to model stream flow and precipitation, in economics as a simple model of the distribution of wealth or income, and in networking to model the transmission times of data considering both the network and the software. The log-logistic distribution is the probability distribution of a random variable whose logarithm has a logistic distribution. It is similar in shape to the log-normal distribution but has heavier tails. Unlike the log-normal, its cumulative distribution function can be written in closed form. Characterization. There are several different parameterizations of the distribution in use. The one shown here gives reasonably interpretable parameters and a simple form for the cumulative distribution function. The parameter formula_1 is a scale parameter and is also the median of the distribution. The parameter formula_3 is a shape parameter. The distribution is unimodal when formula_2 and its dispersion decreases as formula_0 increases. The cumulative distribution function is formula_4 where formula_5, formula_1, formula_6 The probability density function is formula_7 Alternative parameterization. An alternative parametrization is given by the pair formula_8 in analogy with the logistic distribution: formula_9 formula_10 Properties. Moments. The formula_11th raw moment exists only when formula_12 when it is given by formula_13 where B is the beta function. Expressions for the mean, variance, skewness and kurtosis can be derived from this. Writing formula_14 for convenience, the mean is formula_15 and the variance is formula_16 Explicit expressions for the skewness and kurtosis are lengthy. As formula_0 tends to infinity the mean tends to formula_17, the variance and skewness tend to zero and the excess kurtosis tends to 6/5 (see also related distributions below). Quantiles. The quantile function (inverse cumulative distribution function) is : formula_18 It follows that the median is formula_17, the lower quartile is formula_19 and the upper quartile is formula_20. Applications. Survival analysis. The log-logistic distribution provides one parametric model for survival analysis. Unlike the more commonly used Weibull distribution, it can have a non-monotonic hazard function: when formula_21 the hazard function is unimodal (when formula_0 ≤ 1, the hazard decreases monotonically). The fact that the cumulative distribution function can be written in closed form is particularly useful for analysis of survival data with censoring. The log-logistic distribution can be used as the basis of an accelerated failure time model by allowing formula_17 to differ between groups, or more generally by introducing covariates that affect formula_17 but not formula_0 by modelling formula_22 as a linear function of the covariates. The survival function is formula_23 and so the hazard function is formula_24 The log-logistic distribution with shape parameter formula_25 is the marginal distribution of the inter-times in a geometric-distributed counting process. Hydrology. 
The log-logistic distribution has been used in hydrology for modelling stream flow rates and precipitation. Extreme values like maximum one-day rainfall and river discharge per month or per year often follow a log-normal distribution. The log-normal distribution, however, requires numerical approximation. As the log-logistic distribution, which can be solved analytically, is similar to the log-normal distribution, it can be used instead. The blue picture illustrates an example of fitting the log-logistic distribution to ranked maximum one-day October rainfalls and it shows the 90% confidence belt based on the binomial distribution. The rainfall data are represented by the plotting position "r"/("n"+1) as part of the cumulative frequency analysis. Economics. The log-logistic has been used as a simple model of the distribution of wealth or income in economics, where it is known as the Fisk distribution. Its Gini coefficient is formula_26. Networking. The log-logistic has been used as a model for the period of time beginning when some data leaves a software user application in a computer and ending when the response is received by the same application, after travelling through and being processed by other computers, applications, and network segments, most or all of them without hard real-time guarantees (for example, when an application is displaying data coming from a remote sensor connected to the Internet). It has been shown to be a more accurate probabilistic model for such transmission times than the log-normal distribution or others, as long as abrupt changes of regime in the sequences of those times are properly detected. Related distributions. As formula_0 tends to infinity, the log-logistic distribution converges to a logistic distribution: formula_36 With shape parameter formula_37, the log-logistic distribution is a generalized Pareto distribution with location formula_38, shape formula_39 and scale formula_40 formula_41 Generalizations. Several different distributions are sometimes referred to as the generalized log-logistic distribution, as they contain the log-logistic as a special case. These include the Burr Type XII distribution (also known as the "Singh–Maddala distribution") and the Dagum distribution, both of which include a second shape parameter. Both are in turn special cases of the even more general "generalized beta distribution of the second kind". Another more straightforward generalization of the log-logistic is the shifted log-logistic distribution. Another generalized log-logistic distribution is the log-transform of the metalog distribution, in which power series expansions in terms of formula_42 are substituted for logistic distribution parameters formula_27 and formula_43. The resulting log-metalog distribution is highly shape flexible, has a simple closed-form PDF and quantile function, can be fit to data with linear least squares, and subsumes the log-logistic distribution as a special case. References. <templatestyles src="Reflist/styles.css" />
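Because the CDF and quantile function have closed forms, the distribution is easy to handle numerically. The following Python sketch (illustrative only, not part of the article; parameter names alpha and beta follow the parameterization above) evaluates the CDF, checks the quartile expressions, and draws samples by inverse-transform sampling, comparing the sample mean with formula_15.

```python
import numpy as np

def loglogistic_cdf(x, alpha, beta):
    """F(x; alpha, beta) = x^beta / (alpha^beta + x^beta) for x >= 0."""
    x = np.asarray(x, dtype=float)
    return x**beta / (alpha**beta + x**beta)

def loglogistic_quantile(p, alpha, beta):
    """Inverse CDF: alpha * (p / (1 - p))**(1/beta)."""
    p = np.asarray(p, dtype=float)
    return alpha * (p / (1.0 - p)) ** (1.0 / beta)

alpha, beta = 2.0, 3.0
# Quartiles match the closed-form expressions: 3**(-1/beta)*alpha, alpha, 3**(1/beta)*alpha.
print(loglogistic_quantile([0.25, 0.5, 0.75], alpha, beta))

# Inverse-transform sampling is trivial thanks to the closed-form quantile function.
rng = np.random.default_rng(0)
samples = loglogistic_quantile(rng.uniform(size=100_000), alpha, beta)

# The mean exists for beta > 1 and equals alpha*b/sin(b) with b = pi/beta.
b = np.pi / beta
print(samples.mean(), alpha * b / np.sin(b))   # the two values should be close
```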
[ { "math_id": 0, "text": "\\beta" }, { "math_id": 1, "text": "\\alpha>0" }, { "math_id": 2, "text": "\\beta>1" }, { "math_id": 3, "text": "\\beta>0" }, { "math_id": 4, "text": "\\begin{align}\nF(x; \\alpha, \\beta) & = { 1 \\over 1+(x/\\alpha)^{-\\beta} } \\\\[5pt]\n & = {(x/\\alpha)^\\beta \\over 1+(x/\\alpha)^ \\beta } \\\\[5pt]\n & = {x^\\beta \\over \\alpha^\\beta+x^\\beta}\n\\end{align}" }, { "math_id": 5, "text": "x>0" }, { "math_id": 6, "text": "\\beta>0." }, { "math_id": 7, "text": "f(x; \\alpha, \\beta) = \\frac{ (\\beta/\\alpha)(x/\\alpha)^{\\beta-1} }\n { \\left( 1+(x/\\alpha)^{\\beta} \\right)^2 }" }, { "math_id": 8, "text": "\\mu, s" }, { "math_id": 9, "text": "\\mu = \\ln (\\alpha)" }, { "math_id": 10, "text": "s = 1 / \\beta" }, { "math_id": 11, "text": "k" }, { "math_id": 12, "text": "k<\\beta," }, { "math_id": 13, "text": "\\begin{align}\n\\operatorname{E}(X^k)\n& = \\alpha^k\\operatorname{B}(1-k/\\beta, 1+k/\\beta) \\\\[5pt]\n& = \\alpha^k\\, {k\\pi/\\beta \\over \\sin(k\\pi/\\beta)}\n\\end{align}" }, { "math_id": 14, "text": "b=\\pi/\\beta" }, { "math_id": 15, "text": " \\operatorname{E}(X) = \\alpha b / \\sin b , \\quad \\beta>1," }, { "math_id": 16, "text": " \\operatorname{Var}(X) = \\alpha^2 \\left( 2b / \\sin 2b -b^2 / \\sin^2 b \\right), \\quad \\beta>2." }, { "math_id": 17, "text": "\\alpha" }, { "math_id": 18, "text": "F^{-1}(p;\\alpha, \\beta) = \\alpha\\left( \\frac{p}{1-p} \\right)^{1/\\beta}." }, { "math_id": 19, "text": "3^{-1/\\beta} \\alpha " }, { "math_id": 20, "text": "3^{1/\\beta} \\alpha" }, { "math_id": 21, "text": "\\beta>1," }, { "math_id": 22, "text": "\\log(\\alpha)" }, { "math_id": 23, "text": "S(t) = 1 - F(t) = [1+(t/\\alpha)^{\\beta}]^{-1},\\, " }, { "math_id": 24, "text": " h(t) = \\frac{f(t)}{S(t)} = \\frac{(\\beta/\\alpha)(t/\\alpha)^{\\beta-1}}\n {1+(t/\\alpha)^\\beta}." }, { "math_id": 25, "text": "\\beta = 1" }, { "math_id": 26, "text": "1/\\beta" }, { "math_id": 27, "text": "\\mu" }, { "math_id": 28, "text": " X \\sim \\operatorname{LL}(\\alpha,\\beta)" }, { "math_id": 29, "text": " kX \\sim \\operatorname{LL}(k \\alpha, \\beta)." }, { "math_id": 30, "text": " X \\sim \\operatorname{LL}(\\alpha, \\beta)" }, { "math_id": 31, "text": " X^k \\sim \\operatorname{LL}(\\alpha^k, \\beta/|k|)." }, { "math_id": 32, "text": " \\operatorname{LL}(\\alpha,\\beta) \\sim \\textrm{Dagum}(1,\\alpha,\\beta)" }, { "math_id": 33, "text": " \\operatorname{LL}(\\alpha,\\beta) \\sim \\textrm{SinghMaddala}(1,\\alpha,\\beta)" }, { "math_id": 34, "text": "\\textrm{LL}(\\gamma,\\sigma) \\sim \\beta'(1,1,\\gamma,\\sigma)" }, { "math_id": 35, "text": "1/\\beta." }, { "math_id": 36, "text": "\\operatorname{LL}(\\alpha, \\beta) \\to L(\\alpha,\\alpha/\\beta) \\quad \\text{as} \\quad \\beta \\to \\infty." }, { "math_id": 37, "text": "\\beta=1" }, { "math_id": 38, "text": "\\mu=0" }, { "math_id": 39, "text": "\\xi=1" }, { "math_id": 40, "text": "\\alpha:" }, { "math_id": 41, "text": "\\operatorname{LL}(\\alpha,1) = \\operatorname{GPD}(1,\\alpha,1)." }, { "math_id": 42, "text": "p" }, { "math_id": 43, "text": "\\sigma" } ]
https://en.wikipedia.org/wiki?curid=15657106
1565926
Estimation theory
Branch of statistics to estimate models based on measured data Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An "estimator" attempts to approximate the unknown parameters using the measurements. In estimation theory, two approaches are generally considered: Examples. For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters. Alternatively, it is desired to estimate the probability of a voter voting for a particular candidate, based on some demographic features, such as age. Or, for example, in radar the aim is to find the range of objects (airplanes, boats, etc.) by analyzing the two-way transit timing of received echoes of transmitted pulses. Since the reflected pulses are unavoidably embedded in electrical noise, their measured values are randomly distributed, so that the transit time must be estimated. As another example, in electrical communication theory, the measurements which contain information regarding the parameters of interest are often associated with a noisy signal. Basics. For a given model, several statistical "ingredients" are needed so the estimator can be implemented. The first is a statistical sample – a set of data points taken from a random vector (RV) of size "N". Put into a vector, formula_0 Secondly, there are "M" parameters formula_1 whose values are to be estimated. Third, the continuous probability density function (pdf) or its discrete counterpart, the probability mass function (pmf), of the underlying distribution that generated the data must be stated conditional on the values of the parameters: formula_2 It is also possible for the parameters themselves to have a probability distribution (e.g., Bayesian statistics). It is then necessary to define the Bayesian probability formula_3 After the model is formed, the goal is to estimate the parameters, with the estimates commonly denoted formula_4, where the "hat" indicates the estimate. One common estimator is the minimum mean squared error (MMSE) estimator, which utilizes the error between the estimated parameters and the actual value of the parameters formula_5 as the basis for optimality. This error term is then squared and the expected value of this squared value is minimized for the MMSE estimator. Estimators. Commonly used estimators (estimation methods) and topics related to them include: Examples. Unknown constant in additive white Gaussian noise. Consider a received discrete signal, formula_6, of formula_7 independent samples that consists of an unknown constant formula_8 with additive white Gaussian noise (AWGN) formula_9 with zero mean and known variance formula_10 ("i.e.", formula_11). Since the variance is known then the only unknown parameter is formula_8. The model for the signal is then formula_12 Two possible (of many) estimators for the parameter formula_8 are: Both of these estimators have a mean of formula_8, which can be shown through taking the expected value of each estimator formula_15 and formula_16 At this point, these two estimators would appear to perform the same. However, the difference between them becomes apparent when comparing the variances. 
formula_17 and formula_18 It would seem that the sample mean is a better estimator since its variance is lower for every "N" > 1. Maximum likelihood. Continuing the example using the maximum likelihood estimator, the probability density function (pdf) of the noise for one sample formula_9 is formula_19 and the probability of formula_6 becomes (formula_6 can be thought of as formula_20) formula_21 By independence, the probability of formula_22 becomes formula_23 Taking the natural logarithm of the pdf formula_24 and the maximum likelihood estimator is formula_25 Taking the first derivative of the log-likelihood function formula_26 and setting it to zero formula_27 This results in the maximum likelihood estimator formula_28 which is simply the sample mean. From this example, it was found that the sample mean is the maximum likelihood estimator for formula_7 samples of a fixed, unknown parameter corrupted by AWGN. Cramér–Rao lower bound. To find the Cramér–Rao lower bound (CRLB) of the sample mean estimator, it is first necessary to find the Fisher information number formula_29 and copying from above formula_30 Taking the second derivative formula_31 and finding the negative expected value is trivial since it is now a deterministic constant formula_32 Finally, putting the Fisher information into formula_33 results in formula_34 Comparing this to the variance of the sample mean (determined previously) shows that the variance of the sample mean is "equal to" the Cramér–Rao lower bound for all values of formula_7 and formula_8. In other words, the sample mean is the (necessarily unique) efficient estimator, and thus also the minimum variance unbiased estimator (MVUE), in addition to being the maximum likelihood estimator. Maximum of a uniform distribution. One of the simplest non-trivial examples of estimation is the estimation of the maximum of a uniform distribution. It is used as a hands-on classroom exercise and to illustrate basic principles of estimation theory. Further, in the case of estimation based on a single sample, it demonstrates philosophical issues and possible misunderstandings in the use of maximum likelihood estimators and likelihood functions. Given a discrete uniform distribution formula_35 with unknown maximum, the UMVU estimator for the maximum is given by formula_36 where "m" is the sample maximum and "k" is the sample size, sampling without replacement. This problem is commonly known as the German tank problem, due to application of maximum estimation to estimates of German tank production during World War II. The formula may be understood intuitively as "the sample maximum plus the average gap between observations in the sample", the gap being added to compensate for the negative bias of the sample maximum as an estimator for the population maximum. This has a variance of formula_37 so a standard deviation of approximately formula_38, the (population) average size of a gap between samples; compare formula_39 above. This can be seen as a very simple case of maximum spacing estimation. The sample maximum is the maximum likelihood estimator for the population maximum, but, as discussed above, it is biased. Applications. Numerous fields require the use of estimation theory. Some of these fields include: Measured data are likely to be subject to noise or uncertainty and it is through statistical probability that optimal solutions are sought to extract as much information from the data as possible. Notes. 
&lt;templatestyles src="Reflist/styles.css" /&gt; References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
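The AWGN example above can be checked numerically. The sketch below (illustrative only, not part of the article) draws many realizations of formula_7 noisy samples, compares the two estimators, and verifies that the variance of the sample mean matches the Cramér–Rao lower bound formula_34 up to Monte Carlo error.

```python
import numpy as np

# Monte Carlo check of the AWGN example: A_hat_1 = x[0] and the sample mean
# A_hat_2 are both unbiased, but only the sample mean attains the CRLB sigma^2/N.
rng = np.random.default_rng(42)
A, sigma, N, trials = 3.0, 2.0, 25, 200_000

x = A + sigma * rng.standard_normal((trials, N))   # x[n] = A + w[n]
A_hat_1 = x[:, 0]             # estimator 1: first sample only
A_hat_2 = x.mean(axis=1)      # estimator 2: sample mean (also the MLE)

print("mean of A_hat_1:", A_hat_1.mean())    # ~ A  (unbiased)
print("mean of A_hat_2:", A_hat_2.mean())    # ~ A  (unbiased)
print("var  of A_hat_1:", A_hat_1.var())     # ~ sigma^2
print("var  of A_hat_2:", A_hat_2.var())     # ~ sigma^2 / N
print("CRLB sigma^2/N :", sigma**2 / N)      # attained by the sample mean
```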
[ { "math_id": 0, "text": "\\mathbf{x} = \\begin{bmatrix} x[0] \\\\ x[1] \\\\ \\vdots \\\\ x[N-1] \\end{bmatrix}." }, { "math_id": 1, "text": "\\boldsymbol{\\theta} = \\begin{bmatrix} \\theta_1 \\\\ \\theta_2 \\\\ \\vdots \\\\ \\theta_M \\end{bmatrix}," }, { "math_id": 2, "text": "p(\\mathbf{x} | \\boldsymbol{\\theta}).\\," }, { "math_id": 3, "text": "\\pi( \\boldsymbol{\\theta}).\\," }, { "math_id": 4, "text": "\\hat{\\boldsymbol{\\theta}}" }, { "math_id": 5, "text": "\\mathbf{e} = \\hat{\\boldsymbol{\\theta}} - \\boldsymbol{\\theta}" }, { "math_id": 6, "text": "x[n]" }, { "math_id": 7, "text": "N" }, { "math_id": 8, "text": "A" }, { "math_id": 9, "text": "w[n]" }, { "math_id": 10, "text": "\\sigma^2" }, { "math_id": 11, "text": "\\mathcal{N}(0, \\sigma^2)" }, { "math_id": 12, "text": "x[n] = A + w[n] \\quad n=0, 1, \\dots, N-1" }, { "math_id": 13, "text": "\\hat{A}_1 = x[0]" }, { "math_id": 14, "text": "\\hat{A}_2 = \\frac{1}{N} \\sum_{n=0}^{N-1} x[n]" }, { "math_id": 15, "text": "\\mathrm{E}\\left[\\hat{A}_1\\right] = \\mathrm{E}\\left[ x[0] \\right] = A" }, { "math_id": 16, "text": "\n\\mathrm{E}\\left[ \\hat{A}_2 \\right]\n=\n\\mathrm{E}\\left[ \\frac{1}{N} \\sum_{n=0}^{N-1} x[n] \\right]\n=\n\\frac{1}{N} \\left[ \\sum_{n=0}^{N-1} \\mathrm{E}\\left[ x[n] \\right] \\right]\n=\n\\frac{1}{N} \\left[ N A \\right]\n=\nA\n" }, { "math_id": 17, "text": "\\mathrm{var} \\left( \\hat{A}_1 \\right) = \\mathrm{var} \\left( x[0] \\right) = \\sigma^2" }, { "math_id": 18, "text": "\n\\mathrm{var} \\left( \\hat{A}_2 \\right)\n=\n\\mathrm{var} \\left( \\frac{1}{N} \\sum_{n=0}^{N-1} x[n] \\right)\n\\overset{\\text{independence}}{=}\n\\frac{1}{N^2} \\left[ \\sum_{n=0}^{N-1} \\mathrm{var} (x[n]) \\right]\n=\n\\frac{1}{N^2} \\left[ N \\sigma^2 \\right]\n=\n\\frac{\\sigma^2}{N}\n" }, { "math_id": 19, "text": "p(w[n]) = \\frac{1}{\\sigma \\sqrt{2 \\pi}} \\exp\\left(- \\frac{1}{2 \\sigma^2} w[n]^2 \\right)" }, { "math_id": 20, "text": "\\mathcal{N}(A, \\sigma^2)" }, { "math_id": 21, "text": "p(x[n]; A) = \\frac{1}{\\sigma \\sqrt{2 \\pi}} \\exp\\left(- \\frac{1}{2 \\sigma^2} (x[n] - A)^2 \\right)" }, { "math_id": 22, "text": "\\mathbf{x}" }, { "math_id": 23, "text": "\np(\\mathbf{x}; A)\n=\n\\prod_{n=0}^{N-1} p(x[n]; A)\n=\n\\frac{1}{\\left(\\sigma \\sqrt{2\\pi}\\right)^N}\n\\exp\\left(- \\frac{1}{2 \\sigma^2} \\sum_{n=0}^{N-1}(x[n] - A)^2 \\right)\n" }, { "math_id": 24, "text": "\n\\ln p(\\mathbf{x}; A)\n=\n-N \\ln \\left(\\sigma \\sqrt{2\\pi}\\right)\n- \\frac{1}{2 \\sigma^2} \\sum_{n=0}^{N-1}(x[n] - A)^2\n" }, { "math_id": 25, "text": "\\hat{A} = \\arg \\max \\ln p(\\mathbf{x}; A)" }, { "math_id": 26, "text": "\n\\frac{\\partial}{\\partial A} \\ln p(\\mathbf{x}; A)\n=\n\\frac{1}{\\sigma^2} \\left[ \\sum_{n=0}^{N-1}(x[n] - A) \\right]\n=\n\\frac{1}{\\sigma^2} \\left[ \\sum_{n=0}^{N-1}x[n] - N A \\right]\n" }, { "math_id": 27, "text": "\n0\n=\n\\frac{1}{\\sigma^2} \\left[ \\sum_{n=0}^{N-1}x[n] - N A \\right]\n=\n\\sum_{n=0}^{N-1}x[n] - N A\n" }, { "math_id": 28, "text": "\\hat{A} = \\frac{1}{N} \\sum_{n=0}^{N-1}x[n]" }, { "math_id": 29, "text": "\n\\mathcal{I}(A)\n=\n\\mathrm{E}\n\\left(\n \\left[\n \\frac{\\partial}{\\partial A} \\ln p(\\mathbf{x}; A)\n \\right]^2\n\\right)\n=\n-\\mathrm{E}\n\\left[\n \\frac{\\partial^2}{\\partial A^2} \\ln p(\\mathbf{x}; A)\n\\right]\n" }, { "math_id": 30, "text": "\n\\frac{\\partial}{\\partial A} \\ln p(\\mathbf{x}; A)\n=\n\\frac{1}{\\sigma^2} \\left[ \\sum_{n=0}^{N-1}x[n] - N A \\right]\n" }, { "math_id": 31, "text": "\n\\frac{\\partial^2}{\\partial A^2} \\ln 
p(\\mathbf{x}; A)\n=\n\\frac{1}{\\sigma^2} (- N)\n=\n\\frac{-N}{\\sigma^2}\n" }, { "math_id": 32, "text": "\n-\\mathrm{E}\n\\left[\n \\frac{\\partial^2}{\\partial A^2} \\ln p(\\mathbf{x}; A)\n\\right]\n=\n\\frac{N}{\\sigma^2}\n" }, { "math_id": 33, "text": "\n\\mathrm{var}\\left( \\hat{A} \\right)\n\\geq\n\\frac{1}{\\mathcal{I}}\n" }, { "math_id": 34, "text": "\n\\mathrm{var}\\left( \\hat{A} \\right)\n\\geq\n\\frac{\\sigma^2}{N}\n" }, { "math_id": 35, "text": "1,2,\\dots,N" }, { "math_id": 36, "text": "\\frac{k+1}{k} m - 1 = m + \\frac{m}{k} - 1" }, { "math_id": 37, "text": "\\frac{1}{k}\\frac{(N-k)(N+1)}{(k+2)} \\approx \\frac{N^2}{k^2} \\text{ for small samples } k \\ll N" }, { "math_id": 38, "text": "N/k" }, { "math_id": 39, "text": "\\frac{m}{k}" } ]
https://en.wikipedia.org/wiki?curid=1565926
15659323
Laplace operators in differential geometry
In differential geometry there are a number of second-order, linear, elliptic differential operators bearing the name Laplacian. This article provides an overview of some of them. Connection Laplacian. The connection Laplacian, also known as the rough Laplacian, is a differential operator acting on the various tensor bundles of a manifold, defined in terms of a Riemannian- or pseudo-Riemannian metric. When applied to functions (i.e. tensors of rank 0), the connection Laplacian is often called the Laplace–Beltrami operator. It is defined as the trace of the second covariant derivative: formula_0 where "T" is any tensor, formula_1 is the Levi-Civita connection associated to the metric, and the trace is taken with respect to the metric. Recall that the second covariant derivative of "T" is defined as formula_2 Note that with this definition, the connection Laplacian has negative spectrum. On functions, it agrees with the operator given as the divergence of the gradient. If the connection of interest is the Levi-Civita connection one can find a convenient formula for the Laplacian of a scalar function in terms of partial derivatives with respect to a coordinate system: formula_3 where formula_4 is a scalar function, formula_5 is absolute value of the determinant of the metric (absolute value is necessary in the pseudo-Riemannian case, e.g. in General Relativity) and formula_6 denotes the inverse of the metric tensor. Hodge Laplacian. The Hodge Laplacian, also known as the Laplace–de Rham operator, is a differential operator acting on differential forms. (Abstractly, it is a second order operator on each exterior power of the cotangent bundle.) This operator is defined on any manifold equipped with a Riemannian- or pseudo-Riemannian metric. formula_7 where d is the exterior derivative or differential and "δ" is the codifferential. The Hodge Laplacian on a compact manifold has nonnegative spectrum. The connection Laplacian may also be taken to act on differential forms by restricting it to act on skew-symmetric tensors. The connection Laplacian differs from the Hodge Laplacian by means of a Weitzenböck identity. Bochner Laplacian. The Bochner Laplacian is defined differently from the connection Laplacian, but the two will turn out to differ only by a sign, whenever the former is defined. Let "M" be a compact, oriented manifold equipped with a metric. Let "E" be a vector bundle over "M" equipped with a fiber metric and a compatible connection, formula_1. This connection gives rise to a differential operator formula_8 where formula_9 denotes smooth sections of "E", and "T"*M is the cotangent bundle of "M". It is possible to take the formula_10-adjoint of formula_1, giving a differential operator formula_11 The Bochner Laplacian is given by formula_12 which is a second order operator acting on sections of the vector bundle "E". Note that the connection Laplacian and Bochner Laplacian differ only by a sign: formula_13 Lichnerowicz Laplacian. The Lichnerowicz Laplacian is defined on symmetric tensors by taking formula_14 to be the symmetrized covariant derivative. The Lichnerowicz Laplacian is then defined by formula_15, where formula_16 is the formal adjoint. The Lichnerowicz Laplacian differs from the usual tensor Laplacian by a Weitzenbock formula involving the Riemann curvature tensor, and has natural applications in the study of Ricci flow and the prescribed Ricci curvature problem. Conformal Laplacian. 
On a Riemannian manifold, one can define the conformal Laplacian as an operator on smooth functions; it differs from the Laplace–Beltrami operator by a term involving the scalar curvature of the underlying metric. In dimension "n" ≥ 3, the conformal Laplacian, denoted "L", acts on a smooth function "u" by formula_17 where Δ is the Laplace–Beltrami operator (of negative spectrum), and "R" is the scalar curvature. This operator often makes an appearance when studying how the scalar curvature behaves under a conformal change of a Riemannian metric. If "n" ≥ 3 and "g" is a metric and "u" is a smooth, positive function, then the conformal metric formula_18 has scalar curvature given by formula_19 More generally, the action of the conformal Laplacian of "g̃" on smooth functions "φ" can be related to that of the conformal Laplacian of "g" via the transformation rule formula_20 Complex differential geometry. In complex differential geometry, the Laplace operator (also known as the Laplacian) is defined in terms of complex differential forms: formula_21 This operator acts on complex-valued functions of several complex variables and is built, up to a constant factor, from the ordinary partial derivatives with respect to the real and imaginary parts of the complex coordinates. It is important in complex analysis and complex differential geometry for studying functions of complex variables. Comparisons. Below is a table summarizing the various Laplacian operators, including the most general vector bundle on which they act, and what structure is required for the manifold and vector bundle. All of these operators are second order, linear, and elliptic.
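The coordinate formula formula_3 for the Laplacian of a scalar function can be verified symbolically. The following sketch (illustrative only, not part of the article) uses SymPy to apply it to the round metric on the unit 2-sphere and recovers the expected eigenvalue of a degree-1 spherical harmonic under the divergence-of-gradient (negative-spectrum) sign convention used above.

```python
import sympy as sp

# Coordinate formula  Δφ = |g|^(-1/2) ∂_μ ( |g|^(1/2) g^{μν} ∂_ν φ )  applied to
# the unit 2-sphere with metric g = diag(1, sin(θ)^2) in coordinates (θ, φ).
theta, phi = sp.symbols('theta phi')
coords = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])   # round metric on S^2
g_inv = g.inv()
sqrt_det = sp.sin(theta)   # = sqrt(|det g|), taking sin(θ) > 0 on the chart

def laplace_beltrami(f):
    """Divergence-of-gradient Laplacian of a scalar function f."""
    terms = [
        sp.diff(sqrt_det * sum(g_inv[mu, nu] * sp.diff(f, coords[nu])
                               for nu in range(2)), coords[mu])
        for mu in range(2)
    ]
    return sp.simplify(sum(terms) / sqrt_det)

f = sp.cos(theta)              # a degree-1 spherical harmonic
print(laplace_beltrami(f))     # -2*cos(theta): eigenvalue -l(l+1) with l = 1
```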
[ { "math_id": 0, "text": "\\Delta T= \\text{tr}\\;\\nabla^2 T," }, { "math_id": 1, "text": "\\nabla" }, { "math_id": 2, "text": "\\nabla^2_{X,Y} T = \\nabla_X \\nabla_Y T - \\nabla_{\\nabla_X Y} T." }, { "math_id": 3, "text": "\\Delta \\phi = |g|^{-1/2} \\partial_\\mu\\left( |g|^{1/2} g^{\\mu\\nu} \\partial_\\nu\\phi\\right) " }, { "math_id": 4, "text": "\\phi" }, { "math_id": 5, "text": "|g|" }, { "math_id": 6, "text": "g^{\\mu \\nu}" }, { "math_id": 7, "text": "\\Delta= \\mathrm{d}\\delta+\\delta\\mathrm{d} = (\\mathrm{d}+\\delta)^2,\\;" }, { "math_id": 8, "text": "\\nabla:\\Gamma(E)\\rightarrow \\Gamma(T^*M\\otimes E)" }, { "math_id": 9, "text": "\\Gamma(E)" }, { "math_id": 10, "text": "L^2" }, { "math_id": 11, "text": "\\nabla^*:\\Gamma(T^*M\\otimes E)\\rightarrow \\Gamma(E)." }, { "math_id": 12, "text": "\\Delta=\\nabla^*\\nabla" }, { "math_id": 13, "text": " \\nabla^* \\nabla = - \\text{tr}\\, \\nabla^2" }, { "math_id": 14, "text": "\\nabla : \\Gamma(\\operatorname{Sym}^k(TM))\\to \\Gamma(\\operatorname{Sym}^{k+1}(TM))" }, { "math_id": 15, "text": "\\Delta_L = \\nabla^*\\nabla" }, { "math_id": 16, "text": "\\nabla^*" }, { "math_id": 17, "text": "Lu = -4\\frac{n-1}{n-2} \\Delta u + Ru," }, { "math_id": 18, "text": "\\tilde g = u^{4/(n-2)} g \\, " }, { "math_id": 19, "text": "\\tilde R = u^{-(n+2)/(n-2)} L u. \\, " }, { "math_id": 20, "text": "\\tilde{L}(\\varphi) = u^{-(n+2)/(n-2)} L(u \\varphi)." }, { "math_id": 21, "text": "\n \\partial f=\\sum\\left(\\frac{\\partial f}{\\partial x_k}-i\\frac{\\partial f}{\\partial y_k}\\right)dz_k \n" } ]
https://en.wikipedia.org/wiki?curid=15659323
1565963
IC50
Half maximal inhibitory concentration Half maximal inhibitory concentration (IC50) is a measure of the potency of a substance in inhibiting a specific biological or biochemical function. IC50 is a quantitative measure that indicates how much of a particular inhibitory substance (e.g. drug) is needed to inhibit, "in vitro", a given biological process or biological component by 50%. The biological component could be an enzyme, cell, cell receptor or microbe. IC50 values are typically expressed as molar concentration. IC50 is commonly used as a measure of antagonist drug potency in pharmacological research. IC50 is comparable to other measures of potency, such as EC50 for excitatory drugs. EC50 represents the dose or plasma concentration required for obtaining 50% of a maximum effect "in vivo". IC50 can be determined with functional assays or with competition binding assays. Sometimes, IC50 values are converted to the pIC50 scale. formula_0 Due to the minus sign, higher values of pIC50 indicate exponentially more potent inhibitors. pIC50 is usually given in terms of molar concentration (mol/L, or M), thus requiring IC50 in units of M. The IC50 terminology is also used for some behavioral measures in vivo, such as the two bottle fluid consumption test. When animals decrease consumption from the drug-laced water bottle, the concentration of the drug that results in a 50% decrease in consumption is considered the IC50 for fluid consumption of that drug. Functional antagonist assay. The IC50 of a drug can be determined by constructing a dose-response curve and examining the effect of different concentrations of antagonist on reversing agonist activity. IC50 values can be calculated for a given antagonist by determining the concentration needed to inhibit half of the maximum biological response of the agonist. IC50 values can be used to compare the potency of two antagonists. IC50 values are very dependent on conditions under which they are measured. In general, "a higher concentration of inhibitor leads to lowered agonist activity." IC50 value increases as agonist concentration increases. Furthermore, depending on the type of inhibition, other factors may influence IC50 value; for ATP dependent enzymes, IC50 value has an interdependency with concentration of ATP, especially if inhibition is competitive. IC50 and affinity. Competition binding assays. In this type of assay, a single concentration of radioligand (usually an agonist) is used in every assay tube. The ligand is used at a low concentration, usually at or below its Kd value. The level of specific binding of the radioligand is then determined in the presence of a range of concentrations of other competing non-radioactive compounds (usually antagonists), in order to measure the potency with which they compete for the binding of the radioligand. Competition curves may also be computer-fitted to a logistic function as described under direct fit. In this situation the IC50 is the concentration of competing ligand which displaces 50% of the specific binding of the radioligand. The IC50 value is converted to an absolute inhibition constant Ki using the Cheng-Prusoff equation formulated by Yung-Chi Cheng and William Prusoff (see Ki). Cheng Prusoff equation. IC50 is not a direct indicator of affinity, although the two can be related at least for competitive agonists and antagonists by the Cheng-Prusoff equation. 
For enzymatic reactions, this equation is: formula_1 where Ki is the binding affinity of the inhibitor, IC50 is the functional strength of the inhibitor, [S] is the fixed substrate concentration and Km is the Michaelis constant, i.e. the concentration of substrate at which enzyme activity is half maximal (but is frequently confused with substrate affinity for the enzyme, which it is not). Alternatively, for inhibition constants at cellular receptors: formula_2 where [A] is the fixed concentration of agonist and EC50 is the concentration of agonist that results in half maximal activation of the receptor. Whereas the IC50 value for a compound may vary between experiments depending on experimental conditions (e.g. substrate and enzyme concentrations), the Ki is an absolute value. Ki is the inhibition constant for a drug; the concentration of competing ligand in a competition assay which would occupy 50% of the receptors if no radioligand were present. The Cheng-Prusoff equation produces good estimates at high agonist concentrations, but over- or under-estimates Ki at low agonist concentrations. In these conditions, other analyses have been recommended. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
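The pIC50 conversion and the Cheng–Prusoff corrections are simple arithmetic, as the sketch below illustrates (illustrative only, not part of the article; all numerical values are made up).

```python
import math

def pIC50(ic50_molar: float) -> float:
    """pIC50 = -log10(IC50), with IC50 expressed in mol/L."""
    return -math.log10(ic50_molar)

def ki_enzymatic(ic50: float, s: float, km: float) -> float:
    """Cheng–Prusoff for competitive enzyme inhibition: Ki = IC50 / (1 + [S]/Km)."""
    return ic50 / (1.0 + s / km)

def ki_receptor(ic50: float, a: float, ec50: float) -> float:
    """Cheng–Prusoff for a functional receptor assay: Ki = IC50 / ([A]/EC50 + 1)."""
    return ic50 / (a / ec50 + 1.0)

ic50 = 250e-9                                # a hypothetical 250 nM IC50
print(pIC50(ic50))                           # ≈ 6.6
print(ki_enzymatic(ic50, s=2.0, km=1.0))     # ≈ 83 nM when [S] = 2*Km
print(ki_receptor(ic50, a=1.0, ec50=0.5))    # ≈ 83 nM when [A] = 2*EC50
```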
[ { "math_id": 0, "text": "\\ce{pIC_{50}} = -\\log_{10} \\ce{(IC_{50})}" }, { "math_id": 1, "text": " K_i = \\frac\\ce{IC50}{1+\\frac{[S]}{K_m}} " }, { "math_id": 2, "text": " K_i = \\frac\\ce{IC50}{\\frac{[A]}\\ce{EC50}+1} " } ]
https://en.wikipedia.org/wiki?curid=1565963
15663283
Nonlinear eigenproblem
Mathematical equation involving a matrix-valued function that is singular at the eigenvalue. In mathematics, a nonlinear eigenproblem, sometimes nonlinear eigenvalue problem, is a generalization of the (ordinary) eigenvalue problem to equations that depend nonlinearly on the eigenvalue. Specifically, it refers to equations of the form formula_0 where formula_1 is a vector, and "formula_2" is a matrix-valued function of the number formula_3. The number formula_3 is known as the (nonlinear) eigenvalue, the vector formula_4 as the (nonlinear) eigenvector, and formula_5 as the eigenpair. The matrix formula_6 is singular at an eigenvalue formula_3. Definition. In the discipline of numerical linear algebra the following definition is typically used. Let formula_7, and let formula_8 be a function that maps scalars to matrices. A scalar formula_9 is called an "eigenvalue", and a nonzero vector formula_10 is called a "right eigenvector" if formula_11. Moreover, a nonzero vector formula_12 is called a "left eigenvector" if formula_13, where the superscript formula_14 denotes the Hermitian transpose. The definition of the eigenvalue is equivalent to formula_15, where formula_16 denotes the determinant. The function "formula_2" is usually required to be a holomorphic function of formula_3 (in some domain formula_17). In general, formula_6 could be a linear map, but most commonly it is a finite-dimensional, usually square, matrix. Definition: The problem is said to be "regular" if there exists a formula_18 such that formula_19. Otherwise it is said to be "singular". Definition: An eigenvalue formula_3 is said to have "algebraic multiplicity" formula_20 if formula_20 is the smallest integer such that the formula_20th derivative of formula_21 with respect to formula_22, evaluated at formula_3, is nonzero. In formulas, formula_23 but formula_24 for formula_25. Definition: The "geometric multiplicity" of an eigenvalue formula_3 is the dimension of the nullspace of formula_6. Special cases. The following examples are special cases of the nonlinear eigenproblem. Jordan chains. Definition: Let formula_34 be an eigenpair. A tuple of vectors formula_35 is called a "Jordan chain" if formula_36 for formula_37, where formula_38 denotes the formula_20th derivative of formula_2 with respect to formula_3 evaluated at formula_39. The vectors formula_40 are called "generalized eigenvectors", formula_41 is called the "length" of the Jordan chain, and the maximal length of a Jordan chain starting with formula_42 is called the "rank" of formula_42. Theorem: A tuple of vectors formula_35 is a Jordan chain if and only if the function formula_43 has a root at formula_39 and the root is of multiplicity at least formula_44 for formula_45, where the vector valued function formula_46 is defined as formula_47 Eigenvector nonlinearity. Eigenvector nonlinearity is a related, but different, form of nonlinearity that is sometimes studied. In this case the function formula_2 maps vectors to matrices, or sometimes Hermitian matrices to Hermitian matrices.
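A common way to solve polynomial special cases of the nonlinear eigenproblem is to linearize them. The sketch below (illustrative only, not part of the article) builds a random quadratic eigenvalue problem, converts it to a generalized linear eigenproblem by companion linearization, and checks that formula_6 is indeed singular at each computed eigenvalue.

```python
import numpy as np
from scipy.linalg import eig

# Quadratic eigenvalue problem  M(lam) x = (A0 + lam*A1 + lam^2*A2) x = 0,
# solved via the companion linearization  L z = lam * B z  with z = [x, lam*x].
rng = np.random.default_rng(1)
n = 3
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

I, Z = np.eye(n), np.zeros((n, n))
L = np.block([[Z, I], [-A0, -A1]])
B = np.block([[I, Z], [Z, A2]])

eigvals, eigvecs = eig(L, B)       # 2n eigenvalues of the quadratic problem

def M(lam):
    return A0 + lam * A1 + lam**2 * A2

# At every eigenvalue, M(lam) is singular and the top block of z is an eigenvector.
for lam, z in zip(eigvals[:3], eigvecs.T[:3]):
    x = z[:n]
    print(abs(np.linalg.det(M(lam))), np.linalg.norm(M(lam) @ x))   # both ~ 0
```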
[ { "math_id": 0, "text": "M (\\lambda) x = 0 ," }, { "math_id": 1, "text": "x\\neq0" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "\\lambda" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "(\\lambda,x)" }, { "math_id": 6, "text": "M (\\lambda)" }, { "math_id": 7, "text": "\\Omega \\subseteq \\Complex" }, { "math_id": 8, "text": "M : \\Omega \\rightarrow \\Complex^{n\\times n}" }, { "math_id": 9, "text": "\\lambda \\in \\Complex " }, { "math_id": 10, "text": "x \\in \\Complex^n " }, { "math_id": 11, "text": "M (\\lambda) x = 0" }, { "math_id": 12, "text": "y \\in \\Complex^n " }, { "math_id": 13, "text": "y^H M (\\lambda) = 0^H" }, { "math_id": 14, "text": "^H" }, { "math_id": 15, "text": "\\det(M (\\lambda)) = 0" }, { "math_id": 16, "text": "\\det()" }, { "math_id": 17, "text": "\\Omega" }, { "math_id": 18, "text": "z\\in\\Omega" }, { "math_id": 19, "text": "\\det(M (z)) \\neq 0" }, { "math_id": 20, "text": "k" }, { "math_id": 21, "text": "\\det(M (z))" }, { "math_id": 22, "text": "z" }, { "math_id": 23, "text": "\\left.\\frac{d^k \\det(M (z))}{d z^k} \\right|_{z=\\lambda} \\neq 0" }, { "math_id": 24, "text": "\\left.\\frac{d^\\ell \\det(M (z))}{d z^\\ell} \\right|_{z=\\lambda} = 0" }, { "math_id": 25, "text": "\\ell=0,1,2,\\dots, k-1" }, { "math_id": 26, "text": "M (\\lambda) = A-\\lambda I." }, { "math_id": 27, "text": "M (\\lambda) = A-\\lambda B." }, { "math_id": 28, "text": "M (\\lambda) = A_0 + \\lambda A_1 + \\lambda^2 A_2." }, { "math_id": 29, "text": "M (\\lambda) = \\sum_{i=0}^m \\lambda^i A_i." }, { "math_id": 30, "text": "M (\\lambda) = \\sum_{i=0}^{m_1} A_i \\lambda^i + \\sum_{i=1}^{m_2} B_i r_i(\\lambda)," }, { "math_id": 31, "text": "r_i(\\lambda)" }, { "math_id": 32, "text": "M (\\lambda) = -I\\lambda + A_0 +\\sum_{i=1}^m A_i e^{-\\tau_i \\lambda}," }, { "math_id": 33, "text": "\\tau_1,\\tau_2,\\dots,\\tau_m" }, { "math_id": 34, "text": "(\\lambda_0,x_0)" }, { "math_id": 35, "text": "(x_0,x_1,\\dots, x_{r-1})\\in\\Complex^n\\times\\Complex^n\\times\\dots\\times\\Complex^n" }, { "math_id": 36, "text": "\\sum_{k=0}^{\\ell} M^{(k)} (\\lambda_0) x_{\\ell - k} = 0 ," }, { "math_id": 37, "text": "\\ell = 0,1,\\dots , r-1" }, { "math_id": 38, "text": "M^{(k)}(\\lambda_0)" }, { "math_id": 39, "text": "\\lambda=\\lambda_0" }, { "math_id": 40, "text": "x_0,x_1,\\dots, x_{r-1}" }, { "math_id": 41, "text": "r" }, { "math_id": 42, "text": "x_0" }, { "math_id": 43, "text": "M(\\lambda) \\chi_\\ell (\\lambda)" }, { "math_id": 44, "text": "\\ell" }, { "math_id": 45, "text": "\\ell=0,1,\\dots,r-1" }, { "math_id": 46, "text": "\\chi_\\ell (\\lambda)" }, { "math_id": 47, "text": "\\chi_\\ell(\\lambda) = \\sum_{k=0}^\\ell x_k (\\lambda-\\lambda_0)^k." } ]
https://en.wikipedia.org/wiki?curid=15663283
15663294
Matrix polynomial
Polynomial with square matrices as variables In mathematics, a matrix polynomial is a polynomial with square matrices as variables. Given an ordinary, scalar-valued polynomial formula_0 this polynomial evaluated at a matrix formula_1 is formula_2 where formula_3 is the identity matrix. Note that formula_4 has the same dimension as formula_1. A matrix polynomial equation is an equality between two matrix polynomials, which holds for the specific matrices in question. A matrix polynomial identity is a matrix polynomial equation which holds for all matrices "A" in a specified matrix ring "Mn"("R"). Matrix polynomials are often demonstrated in undergraduate linear algebra classes due to their relevance in showcasing properties of linear transformations represented as matrices, most notably the Cayley–Hamilton theorem. Characteristic and minimal polynomial. The characteristic polynomial of a matrix "A" is a scalar-valued polynomial, defined by formula_5. The Cayley–Hamilton theorem states that if this polynomial is viewed as a matrix polynomial and evaluated at the matrix formula_1 itself, the result is the zero matrix: formula_6. A polynomial formula_8 "annihilates" formula_1 if formula_7; formula_8 is also known as an "annihilating polynomial". Thus, the characteristic polynomial is a polynomial which annihilates formula_1. There is a unique monic polynomial of minimal degree which annihilates formula_1; this polynomial is the minimal polynomial. Any polynomial which annihilates formula_1 (such as the characteristic polynomial) is a multiple of the minimal polynomial. It follows that given two polynomials formula_9 and formula_10, we have formula_11 if and only if formula_12 where formula_13 denotes the formula_14th derivative of formula_9 and formula_15 are the eigenvalues of formula_1 with corresponding indices formula_16 (the index of an eigenvalue is the size of its largest Jordan block). Matrix geometrical series. Matrix polynomials can be used to sum a matrix geometrical series as one would an ordinary geometric series, formula_17 formula_18 formula_19 formula_20 If formula_21 is nonsingular, one can evaluate the expression for the sum formula_22. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
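Evaluating a matrix polynomial is a direct computation, conveniently organized with Horner's rule. The sketch below (illustrative only, not part of the article) evaluates a polynomial at a matrix, verifies the Cayley–Hamilton theorem for a small example, and checks the geometric-series identity formula_20.

```python
import numpy as np

def matpoly(coeffs, A):
    """Evaluate P(A) = a_0 I + a_1 A + ... + a_n A^n with Horner's rule.
    `coeffs` lists a_0, ..., a_n (lowest degree first)."""
    n = A.shape[0]
    result = np.zeros_like(A, dtype=float)
    for a in reversed(coeffs):
        result = result @ A + a * np.eye(n)
    return result

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Characteristic polynomial of A is t^2 - 5t + 6; Cayley–Hamilton says P(A) = 0.
print(matpoly([6.0, -5.0, 1.0], A))       # ~ [[0, 0], [0, 0]]

# Geometric series: I + A + ... + A^n  equals  (I - A)^(-1) (I - A^(n+1)).
n_terms = 4
S_direct = matpoly([1.0] * (n_terms + 1), A)
S_closed = np.linalg.inv(np.eye(2) - A) @ (np.eye(2) - np.linalg.matrix_power(A, n_terms + 1))
print(np.allclose(S_direct, S_closed))    # True
```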
[ { "math_id": 0, "text": "P(x) = \\sum_{i=0}^n{ a_i x^i} =a_0 + a_1 x+ a_2 x^2 + \\cdots + a_n x^n, " }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "P(A) = \\sum_{i=0}^n{ a_i A^i} =a_0 I + a_1 A + a_2 A^2 + \\cdots + a_n A^n," }, { "math_id": 3, "text": "I" }, { "math_id": 4, "text": "P(A)" }, { "math_id": 5, "text": "p_A(t) = \\det \\left(tI - A\\right)" }, { "math_id": 6, "text": "p_A(A) = 0" }, { "math_id": 7, "text": "p(A) = 0" }, { "math_id": 8, "text": "p" }, { "math_id": 9, "text": "P" }, { "math_id": 10, "text": "Q" }, { "math_id": 11, "text": " P(A) = Q(A) " }, { "math_id": 12, "text": " P^{(j)}(\\lambda_i) = Q^{(j)}(\\lambda_i) \\qquad \\text{for } j = 0,\\ldots,n_i-1 \\text{ and } i = 1,\\ldots,s, " }, { "math_id": 13, "text": " P^{(j)} " }, { "math_id": 14, "text": "j" }, { "math_id": 15, "text": " \\lambda_1, \\dots, \\lambda_s " }, { "math_id": 16, "text": " n_1, \\dots, n_s " }, { "math_id": 17, "text": "S=I+A+A^2+\\cdots +A^n" }, { "math_id": 18, "text": "AS=A+A^2+A^3+\\cdots +A^{n+1}" }, { "math_id": 19, "text": "(I-A)S=S-AS=I-A^{n+1}" }, { "math_id": 20, "text": "S=(I-A)^{-1}(I-A^{n+1})" }, { "math_id": 21, "text": "I - A" }, { "math_id": 22, "text": "S" } ]
https://en.wikipedia.org/wiki?curid=15663294
1566437
Physiologically based pharmacokinetic modelling
Physiologically based pharmacokinetic (PBPK) modeling is a mathematical modeling technique for predicting the absorption, distribution, metabolism and excretion (ADME) of synthetic or natural chemical substances in humans and other animal species. PBPK modeling is used in pharmaceutical research and drug development, and in health risk assessment for cosmetics or general chemicals. PBPK models strive to be mechanistic by mathematically transcribing anatomical, physiological, physical, and chemical descriptions of the phenomena involved in the complex ADME processes. A large degree of residual simplification and empiricism is still present in those models, but they have an extended domain of applicability compared to that of classical, empirical function based, pharmacokinetic models. PBPK models may have purely predictive uses, but other uses, such as statistical inference, have been made possible by the development of Bayesian statistical tools able to deal with complex models. That is true for both toxicity risk assessment and therapeutic drug development. PBPK models try to rely "a priori" on the anatomical and physiological structure of the body, and to a certain extent, on biochemistry. They are usually multi-compartment models, with compartments corresponding to predefined organs or tissues, with interconnections corresponding to blood or lymph flows (more rarely to diffusions). A system of differential equations for concentration or quantity of substance on each compartment can be written, and its parameters represent blood flows, pulmonary ventilation rate, organ volumes etc., for which information is available in scientific publications. Indeed, the description they make of the body is simplified and a balance needs to be struck between complexity and simplicity. Besides the advantage of allowing the recruitment of "a priori" information about parameter values, these models also facilitate inter-species transpositions or extrapolation from one mode of administration to another ("e.g.", inhalation to oral). An example of a 7-compartment PBPK model, suitable to describe the fate of many solvents in the mammalian body, is given in the Figure on the right. History. The first pharmacokinetic model described in the scientific literature was in fact a PBPK model. It led, however, to computations intractable at that time. The focus shifted then to simpler models, for which analytical solutions could be obtained (such solutions were sums of exponential terms, which led to further simplifications.) The availability of computers and numerical integration algorithms marked a renewed interest in physiological models in the early 1970s. For substances with complex kinetics, or when inter-species extrapolations were required, simple models were insufficient and research continued on physiological models. By 2010, hundreds of scientific publications had described and used PBPK models, and at least two private companies have based their business on their expertise in this area. Building a PBPK model. The model equations follow the principles of mass transport, fluid dynamics, and biochemistry in order to simulate the fate of a substance in the body. Compartments are usually defined by grouping organs or tissues with similar blood perfusion rate and lipid content ("i.e." organs for which chemicals' concentration "vs." time profiles will be similar). Ports of entry (lung, skin, intestinal tract...), ports of exit (kidney, liver...) 
and target organs for therapeutic effect or toxicity are often left separate. Bone can be excluded from the model if the substance of interest does not distribute to it. Connections between compartments follow physiology ("e.g.", blood flow exiting the gut goes to the liver, "etc.") Basic transport equations. Drug distribution into a tissue can be rate-limited by either perfusion or permeability. Perfusion-rate-limited kinetics apply when the tissue membranes present no barrier to diffusion. Blood flow, assuming that the drug is transported mainly by blood, as is often the case, is then the limiting factor to distribution in the various cells of the body. That is usually true for small lipophilic drugs. Under perfusion limitation, the instantaneous rate of entry for the quantity of drug in a compartment is simply equal to the (blood) volumetric flow rate through the organ times the incoming blood concentration. In that case, for a generic compartment "i", the differential equation for the quantity "Qi" of substance, which defines the rate of change in this quantity, is: formula_0 where "Fi" is blood flow (noted "Q" in the Figure above), "Cart" incoming arterial blood concentration, "Pi" the tissue over blood partition coefficient and "Vi" the volume of compartment "i". A complete set of differential equations for the 7-compartment model shown above could therefore be given by the following table: The above equations include only transport terms and do not account for inputs or outputs. Those can be modeled with specific terms, as in the following. Modeling inputs. Modeling inputs is necessary to come up with a meaningful description of a chemical's pharmacokinetics. The following examples show how to write the corresponding equations. Ingestion. When dealing with an oral bolus dose ("e.g." ingestion of a tablet), first order absorption is a very common assumption. In that case the gut equation is augmented with an input term, with an absorption rate constant "Ka": formula_1 That requires defining an equation for the quantity ingested and present in the gut lumen: formula_2 In the absence of a gut compartment, input can be made directly in the liver. However, in that case local metabolism in the gut may not be correctly described. The case of approximately continuous absorption ("e.g. via" drinking water) can be modeled by a zero-order absorption rate (here "Ring" in units of mass over time): formula_3 More sophisticated gut absorption models can be used. In those models, additional compartments describe the various sections of the gut lumen and tissue. Intestinal pH, transit times and presence of active transporters can be taken into account. Skin depot. The absorption of a chemical deposited on skin can also be modeled using first order terms. It is best in that case to separate the skin from the other tissues, to further differentiate exposed skin and non-exposed skin, and differentiate viable skin (dermis and epidermis) from the stratum corneum (the actual skin upper layer exposed). This is the approach taken in [Bois F., Diaz Ochoa J.G. Gajewska M., Kovarich S., Mauch K., Paini A., Péry A., Sala Benito J.V., Teng S., Worth A., in press, Multiscale modelling approaches for assessing cosmetic ingredients safety, Toxicology. doi: 10.1016/j.tox.2016.05.026] Unexposed stratum corneum simply exchanges with the underlying viable skin by diffusion: formula_4 where formula_5 is the partition coefficient, formula_6 is the total skin surface area, formula_7 the fraction of skin surface area exposed, ... 
For the viable skin unexposed: formula_8 For the skin stratum corneum exposed: formula_9 For the viable skin exposed: formula_10 dt(QSkin_u) and dt(QSkin_e) feed from arterial blood and back to venous blood. More complex diffusion models have been published [reference to add]. Intra-venous injection. Intravenous injection is a common clinical route of administration. (to be completed) Inhalation. Inhalation occurs through the lung and is hardly dissociable from exhalation (to be completed) Modelling metabolism. There are several ways metabolism can be modeled. For some models, a linear excretion rate is preferred. This can be accomplished with a simple differential equation. Otherwise a Michaelis-Menten equation, as follows, is generally appropriate for a more accurate result. formula_11. Uses of PBPK modeling. PBPK models are compartmental models like many others, but they have a few advantages over so-called "classical" pharmacokinetic models, which are less grounded in physiology. PBPK models can first be used to abstract and eventually reconcile disparate data (from physicochemical or biochemical experiments, "in vitro" or "in vivo" pharmacological or toxicological experiments, "etc.") They also give access to internal body concentrations of chemicals or their metabolites, and in particular at the site of their effects, be it therapeutic or toxic. Finally they also help interpolation and extrapolation of knowledge between: Some of these extrapolations are "parametric": only changes in input or parameter values are needed to achieve the extrapolation (this is usually the case for dose and time extrapolations). Others are "nonparametric" in the sense that a change in the model structure itself is needed ("e.g.", when extrapolating to a pregnant female, equations for the foetus should be added). Owing to the mechanistic basis of PBPK models, another potential use of PBPK modeling is hypothesis testing. For example, if a drug compound showed lower-than-expected oral bioavailability, various model structures (i.e., hypotheses) and parameter values can be evaluated to determine which models and/or parameters provide the best fit to the observed data. If the hypothesis that metabolism in the intestines was responsible for the low bioavailability yielded the best fit, then the PBPK modeling results support this hypothesis over the other hypotheses evaluated. As such, PBPK modeling can be used, "inter alia", to evaluate the involvement of carrier-mediated transport, clearance saturation, enterohepatic recirculation of the parent compound, extra-hepatic/extra-gut elimination; higher "in vivo" solubility than predicted "in vitro"; drug-induced gastric emptying delays; gut loss and regional variation in gut absorption. Limits and extensions of PBPK modeling. Each type of modeling technique has its strengths and limitations. PBPK modeling is no exception. One limitation is the potential for a large number of parameters, some of which may be correlated. This can lead to the issues of parameter identifiability and redundancy. However, it is possible (and commonly done) to model explicitly the correlations between parameters (for example, the non-linear relationships between age, body-mass, organ volumes and blood flows). 
After numerical values are assigned to each PBPK model parameter, specialized or general computer software is typically used to numerically integrate a set of ordinary differential equations like those described above, in order to calculate the numerical value of each compartment at specified values of time (see Software). However, if such equations involve only linear functions of each compartmental value, or under limiting conditions (e.g., when input values remain very small) that guarantee such linearity is closely approximated, such equations may be solved analytically to yield explicit equations (or, under those limiting conditions, very accurate approximations) for the time-weighted average (TWA) value of each compartment as a function of the TWA value of each specified input (see, e.g.,). PBPK models can rely on chemical property prediction models (QSAR models or predictive chemistry models) on one hand. For example, QSAR models can be used to estimate partition coefficients. They also extend into, but are not destined to supplant, systems biology models of metabolic pathways. They are also parallel to physiome models, but do not aim at modelling physiological functions beyond fluid circulation in detail. In fact the above four types of models can reinforce each other when integrated. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further references: &lt;templatestyles src="Refbegin/styles.css" /&gt; Software. Dedicated software: General software:
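The generic perfusion-limited transport equation formula_0 is straightforward to integrate numerically. The sketch below (illustrative only, not part of the article; all parameter values are made up) solves it for a single tissue compartment driven by a hypothetical arterial concentration that decays exponentially after an intravenous bolus.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Perfusion-limited tissue:  dQ_i/dt = F_i * (C_art(t) - Q_i / (P_i * V_i))
F_i = 1.5            # blood flow through the tissue (L/h), made-up value
V_i = 2.0            # tissue volume (L)
P_i = 4.0            # tissue:blood partition coefficient
C0, k = 10.0, 0.5    # initial arterial concentration (mg/L) and decay rate (1/h)

def c_art(t):
    """Hypothetical arterial concentration after an IV bolus."""
    return C0 * np.exp(-k * t)

def rhs(t, y):
    Q_i = y[0]
    return [F_i * (c_art(t) - Q_i / (P_i * V_i))]

sol = solve_ivp(rhs, (0.0, 24.0), [0.0], dense_output=True, max_step=0.1)

t = np.linspace(0.0, 24.0, 7)
C_tissue = sol.sol(t)[0] / V_i           # tissue concentration Q_i / V_i
print(np.round(C_tissue, 3))             # rises, peaks, then washes out
```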
[ { "math_id": 0, "text": "{dQ_i \\over dt} = F_i (C_{art} - {{Q_i} \\over {P_i V_i}})" }, { "math_id": 1, "text": "{dQ_g \\over dt} = F_g (C_{art} - {{Q_g} \\over {P_g V_g}}) + K_a Q_{ing}" }, { "math_id": 2, "text": "{dQ_{ing} \\over dt} = - K_a Q_{ing}" }, { "math_id": 3, "text": "{dQ_g \\over dt} = F_g (C_{art} - {{Q_g} \\over {P_g V_g}}) + R_{ing}" }, { "math_id": 4, "text": "{dQ_{{sc}_{u}} \\over dt} = K_p \\times S_s \\times (1 - f_{S_{e}}) \\times ({Q_{s_u} \\over {P_{sc} V_{{sc}_{u}}}} - C_{{sc}_u})" }, { "math_id": 5, "text": "K_p" }, { "math_id": 6, "text": "S_s" }, { "math_id": 7, "text": "f_{S_{e}}" }, { "math_id": 8, "text": " {dQ_{s_u} \\over dt} = F_s (1 - f_{S_{e}}) (C_{art} - {{Q_{s_u}} \\over {P_s V_{s_u}}}) - {dQ_{{sc}_{u}} \\over dt} " }, { "math_id": 9, "text": "{dQ_{{sc}_{e}} \\over dt} = K_p \\times S_s \\times f_{S_{e}} \\times ({Q_{s_e} \\over {P_{sc} V_{{sc}_e}}} - C_{{sc}_e})" }, { "math_id": 10, "text": " {dQ_{s_e} \\over dt} = F_s f_{S_{e}} (C_{art} - {{Q_{s_e}} \\over {P_s V_{s_e}}}) - {dQ_{{sc}_{e}} \\over dt}\n" }, { "math_id": 11, "text": " v = \\frac{d [P]}{d t} = \\frac{ V_\\max {[S]}}{K_m + [S]} " } ]
https://en.wikipedia.org/wiki?curid=1566437
15667957
Minkowski content
The Minkowski content (named after Hermann Minkowski), or the boundary measure, of a set is a basic concept that uses concepts from geometry and measure theory to generalize the notions of length of a smooth curve in the plane, and area of a smooth surface in space, to arbitrary measurable sets. It is typically applied to fractal boundaries of domains in the Euclidean space, but it can also be used in the context of general metric measure spaces. It is related to, although different from, the Hausdorff measure. Definition. For formula_0, and each integer "m" with formula_1, the "m"-dimensional upper Minkowski content is formula_2 and the "m"-dimensional lower Minkowski content is defined as formula_3 where formula_4 is the volume of the ("n"−"m")-ball of radius r and formula_5 is an formula_6-dimensional Lebesgue measure. If the upper and lower "m"-dimensional Minkowski content of "A" are equal, then their common value is called the Minkowski content "M""m"("A"). Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt;
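For simple sets the limit can be checked numerically. The sketch below (illustrative only, not part of the article) uses Monte Carlo sampling to estimate the 1-dimensional Minkowski content of the unit circle in the plane (here "n" = 2 and "m" = 1), which should approach the circle's length 2π as "r" shrinks.

```python
import numpy as np

# Minkowski content of the unit circle A in R^2:
#   vol({x : d(x, A) < r}) / (alpha(n - m) * r^(n - m)),  with alpha(1) = 2,
# estimated by rejection sampling in a bounding box, for decreasing r.
rng = np.random.default_rng(0)

def tube_area(r, n_samples=2_000_000):
    """Monte Carlo area of the r-neighbourhood of the unit circle."""
    pts = rng.uniform(-1.0 - r, 1.0 + r, size=(n_samples, 2))
    dist_to_circle = np.abs(np.linalg.norm(pts, axis=1) - 1.0)
    box_area = (2.0 * (1.0 + r)) ** 2
    return box_area * np.mean(dist_to_circle < r)

for r in (0.2, 0.1, 0.05):
    print(r, tube_area(r) / (2.0 * r))    # each value close to 2*pi ≈ 6.283
```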
[ { "math_id": 0, "text": " A \\subset \\mathbb{R}^{n}" }, { "math_id": 1, "text": " 0 \\leq m \\leq n" }, { "math_id": 2, "text": "M^{*m}(A) = \\limsup_{r \\to 0^+} \\frac{\\mu(\\{x: d(x,A) < r\\})}{\\alpha (n-m)r^{n-m}}" }, { "math_id": 3, "text": "M_*^m(A) = \\liminf_{r \\to 0^+} \\frac{\\mu(\\{x: d(x,A) < r\\})}{\\alpha (n-m)r^{n-m}}" }, { "math_id": 4, "text": "\\alpha(n-m)r^{n-m}" }, { "math_id": 5, "text": "\\mu" }, { "math_id": 6, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=15667957
15669004
Intuitive criterion
The intuitive criterion is a technique for equilibrium refinement in signaling games. It aims to reduce possible outcome scenarios by restricting the possible sender types to types who could obtain higher utility levels by deviating to off-the-equilibrium messages, and to types for which the off-the-equilibrium message is not equilibrium dominated. Background. A signaling game is a game in which one player ("sender") has private information regarding his type. He sends a signal ("message") to the other player ("receiver") to indicate his type. The receiver then takes an action. Both the signal and the receiver action can affect both players' utilities. A "Perfect Bayesian equilibrium (PBE)" in such a game consists of three elements. The definition of PBE does, however, not require anything about signals that the sender never sends, since Bayes' rule is not applicable for events that occur with probability zero. Therefore, it is possible to have a PBE with the following properties. While this satisfies the definition of PBE, the receiver belief might be "unreasonable". The intuitive criterion, like most refinement techniques, is based on restricting the beliefs off the equilibrium path. The intuitive criterion was presented by In-Koo Cho and David M. Kreps in a 1987 article. Their idea was to try to reduce the set of equilibria by requiring off-equilibrium receiver beliefs to be reasonable in some sense. Intuitively, we can eliminate a PBE if there is exists a sender type who wants to deviate, assuming that the receiver has a reasonable belief. It is reasonable to believe that the deviating sender is of a type who would benefit from the deviation in at least the best-case scenario. If a type of sender could not benefit from the deviation even if the receiver changed his belief in the best possible way for the sender, then the receiver should reasonably put zero probability on the sender being of that type. The deviating sender type formula_0 could persuasively tell the receiver to interpret his deviating signal formula_1 favorably: I am sending the message formula_2. Please re-think your belief. If you switch to a reasonable belief, then you will have to re-think what your optimal response action is. If sending this message so convinces you to change your response action, then, as you can see, it is in my interest to deviate to the signal formula_1. Formally, given any set of types formula_3, let formula_4 denote the set of actions that are optimal for the receiver given some belief with support in formula_5 and given the signal formula_2. Let formula_6 denote the sender utility as a function of her type formula_7, her signal formula_8, and the receiver action formula_9. Given any PBE with sender strategy formula_10 and receiver strategy formula_11, the equilibrium payoff of any type formula_7 is denoted formula_12. The set of types such that deviating to signal formula_13 can, in the best case, yield a weakly higher payoff than the equilibrium payoff is formula_14 For types outside of this set, the signal formula_13 is called equilibrium dominated. A particular PBE is eliminated by the intuitive criterion if there exists a sender type formula_0 and a deviating signal formula_2 that guarantees for this type a payoff above their equilibrium payoff as long as the receiver has a reasonable belief, that is, assigns zero probability to the deviation having been made by a type for whom formula_2 is equilibrium dominated. Formally, formula_15 Criticisms. 
Other game theorists have criticized the intuitive criterion and suggested alternative refinements such as Universal Divinity. Example. In the standard Spence signaling game, with two types of senders, a continuum of pooling equilibria persists under solution concepts such as sequential equilibrium and perfect Bayesian equilibrium. But the Cho-Kreps intuitive criterion eliminates all pooling equilibria. In the same game, there is also a continuum of separating equilibria, but the intuitive criterion eliminates all the separating equilibria except for the most efficient one: the one where low-ability types are exactly indifferent between acquiring the amount of education that high-ability types do and not acquiring any education at all. A sketch of a typical model shows why (this model is worked out more fully in signalling games). Suppose that the abilities of low and high types of worker are 0 and 10 with equal probability, that in equilibrium the employer will pay the worker his expected ability, and that the cost of education formula_16 is formula_16 for high-ability workers and formula_17 for low-ability workers. There would be a continuum of separating equilibria with formula_18 and of pooling equilibria with formula_19. The intuitive criterion would rule out a separating equilibrium such as formula_20 for the high type and formula_21 for the low type because the high-ability worker could profitably deviate to, for example, formula_22. That is because if the employer still believes the worker is high-ability, his payoff is higher than with formula_23, receiving the same salary of 10 but paying less for education, while the low-ability worker does worse even if his deviation persuades employers that he has high ability, because although his wage would rise from 0 to 10, his signal cost would rise from 0 to 2*5.1. Thus, it is reasonable for the employer to believe that only a high-ability worker would ever switch to formula_24. This argument applies to all separating equilibria with formula_25. The intuitive criterion also rules out all pooling equilibria. Consider the equilibrium in which both types choose formula_26 and receive the expected ability of 5 as their wage. If a worker deviates to formula_27 (for example), the intuitive criterion says that employers must believe he is the high type. That is because if they do believe, and he really is the high type, his payoff will rise from 5 - 0 = 5 to 10 - 4 = 6, but if he were the low type, his payoff would fall from 5 - 0 = 5 to 10 - 2*4 = 2. This argument can be applied to any of the pooling equilibria. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
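The arithmetic in the example above can be checked mechanically. The short script below (an illustration added here, not part of the cited literature) computes, for the pooling equilibrium at s* = 0 with wage 5, each type's equilibrium payoff and its best-case payoff from deviating to s = 4, confirming that only the high-ability type could gain, so a reasonable receiver must attribute the deviation to the high type.

```python
# Spence signalling example from the text: abilities 0 (low) and 10 (high),
# education cost s for the high type and 2s for the low type.
def payoff(wage, s, ability):
    cost = s if ability == 10 else 2 * s      # signalling cost depends on type
    return wage - cost

# Pooling equilibrium: both types choose s* = 0 and are paid the expected ability 5.
eq_payoff = {10: payoff(5, 0, 10), 0: payoff(5, 0, 0)}

# Deviation to s = 4; the best case for the deviator is being believed to be
# the high type and therefore being paid 10.
deviation = 4
best_case = {t: payoff(10, deviation, t) for t in (10, 0)}

for t in (10, 0):
    gain = best_case[t] - eq_payoff[t]
    print(f"type {t:2d}: equilibrium payoff {eq_payoff[t]}, "
          f"best-case deviation payoff {best_case[t]} (gain {gain:+d})")

# Only the high type gains (6 > 5) while the low type loses (2 < 5), so s = 4 is
# equilibrium dominated for the low type and the pooling equilibrium fails the
# intuitive criterion.
```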
[ { "math_id": 0, "text": "\\theta'" }, { "math_id": 1, "text": " m'" }, { "math_id": 2, "text": "m'" }, { "math_id": 3, "text": "\\Theta'\\subseteq\\Theta" }, { "math_id": 4, "text": " A^*(\\Theta',m')" }, { "math_id": 5, "text": "\\Theta'" }, { "math_id": 6, "text": "u_s(m,a,\\theta)" }, { "math_id": 7, "text": "\\theta" }, { "math_id": 8, "text": "m" }, { "math_id": 9, "text": "a" }, { "math_id": 10, "text": "m^*" }, { "math_id": 11, "text": "a^*" }, { "math_id": 12, "text": "u_s^*(\\theta) = u_s(m^*(\\theta),a^*(m^*(\\theta)),\\theta)" }, { "math_id": 13, "text": " m' " }, { "math_id": 14, "text": "\\Theta^{**}(m')=\\{ \\theta \\in \\Theta | u_s^*(\\theta) \\leq \\max_{a\\in A^*(\\Theta,m')} u_s(m',a,\\theta)\\}." }, { "math_id": 15, "text": " \\min_{a \\in A^*(\\Theta^{**}(m'),m')} \\left[ u_s (m',a,\\theta')\\right] > u_s^*(\\theta')." }, { "math_id": 16, "text": "s" }, { "math_id": 17, "text": "2s" }, { "math_id": 18, "text": "s^*\\in [5, 10]" }, { "math_id": 19, "text": "s^* \\in [0, 2.5]" }, { "math_id": 20, "text": "s^*=6" }, { "math_id": 21, "text": "s=0" }, { "math_id": 22, "text": "s=5.1" }, { "math_id": 23, "text": "s=6" }, { "math_id": 24, "text": "s^*=5.1" }, { "math_id": 25, "text": "s^*>5" }, { "math_id": 26, "text": "s^*=0" }, { "math_id": 27, "text": "s=4" } ]
https://en.wikipedia.org/wiki?curid=15669004
15669450
Distensibility
Distensibility is a metric of the stiffness of blood vessels. It is defined as formula_0, where formula_1 and formula_2 are the diameters of the vessel in systole and diastole, and formula_3 and formula_4 are the systolic and diastolic blood pressures.
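For illustration (with made-up but plausible example values, not data from any particular study), a vessel whose diameter increases from 6.0 mm in diastole to 6.4 mm in systole while the pressure rises from 80 to 120 mmHg has the distensibility computed below.

```python
# Illustrative distensibility calculation with assumed example values.
d_sys, d_dias = 6.4, 6.0        # vessel diameter in systole / diastole (mm)
p_sys, p_dias = 120.0, 80.0     # systolic / diastolic blood pressure (mmHg)

D = (d_sys - d_dias) / ((p_sys - p_dias) * d_dias)
print(f"Distensibility = {D:.5f} per mmHg")   # about 0.00167 mmHg^-1
```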
[ { "math_id": 0, "text": "D = \\frac{d_{sys}-d_{dias}}{(p_{sys}-p_{dias})d_{dias}}" }, { "math_id": 1, "text": "d_{sys}" }, { "math_id": 2, "text": "d_{dias}" }, { "math_id": 3, "text": "p_{sys}" }, { "math_id": 4, "text": "p_{dias}" } ]
https://en.wikipedia.org/wiki?curid=15669450
15669513
Avraham Trahtman
Soviet-born Israeli mathematician (1944–2024) Avraham Naumovich Trahtman (Trakhtman) (10 February 1944 – 17 July 2024) was a Soviet-born Israeli mathematician and academic at Bar-Ilan University (Israel). In 2007, Trahtman solved a problem in combinatorics that had been open for 37 years, the Road Coloring Conjecture posed in 1970. Trahtman died in Jerusalem on 17 July 2024, at the age of 80. Road coloring problem posed and solved. Trahtman's solution to the road coloring problem was accepted in 2007 and published in 2009 by the "Israel Journal of Mathematics". The problem arose in the subfield of symbolic dynamics, an abstract part of the field of dynamical systems. The road coloring problem was raised by R. L. Adler and L. W. Goodwyn from the United States, and the Israeli mathematician B. Weiss. The proof used results from earlier work. Černý conjecture. The problem of estimating the length of a synchronizing word has a long history and was posed independently by several authors, but it is commonly known as the Černý conjecture. In 1964 Jan Černý conjectured that formula_0 is the upper bound for the length of the shortest synchronizing word for any n-state complete DFA (a DFA with a complete state transition graph). If this is true, it would be tight: in his 1964 paper, Černý exhibited a class of automata (indexed by the number n of states) for which the shortest reset words have this length. In 2011 Trahtman published a proof of the upper bound formula_1, but then he found an error in it. The conjecture holds in many special cases; see, for instance, Kari and Trahtman. Other work. The finite basis problem for semigroups of order less than six in the theory of semigroups was posed by Alfred Tarski in 1966, and repeated by Anatoly Maltsev and L. N. Shevrin. In 1983, Trahtman solved this problem by proving that all semigroups of order less than six are finitely based. In the theory of varieties of semigroups and universal algebras the problem of the existence of covering elements in the lattice of varieties was posed by Evans in 1971. The positive solution of the problem was found by Trahtman. He also found a six-element semigroup that generates a variety with a continuum of subvarieties, and varieties of semigroups having no irreducible base of identities. The theory of locally testable automata can be based on the theory of varieties of locally testable semigroups. Trahtman found a precise estimate of the order of local testability of finite automata. He also obtained results in theoretical mechanics and in the promising area of extracting moisture from the air, work mentioned in "New Scientist". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
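The content of the Černý conjecture can be made concrete with a small script (added here purely as an illustration). It performs a breadth-first search over subsets of states to find the length of a shortest synchronizing (reset) word for the Černý automaton on n states, in which one letter cyclically permutes the states and the other merges one state into another; for this family the shortest reset word is known to have length (n − 1)², matching the conjectured bound.

```python
# Shortest reset word length for the Cerny automaton C_n, found by breadth-first
# search over subsets of states. Letter 'a' is the cyclic shift i -> i+1 (mod n);
# letter 'b' fixes every state except n-1, which it sends to 0.
from collections import deque

def shortest_reset_length(n):
    a = tuple((i + 1) % n for i in range(n))
    b = tuple(0 if i == n - 1 else i for i in range(n))
    start = frozenset(range(n))
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        states, depth = queue.popleft()
        if len(states) == 1:                      # all states merged: synchronized
            return depth
        for letter in (a, b):
            image = frozenset(letter[s] for s in states)
            if image not in seen:
                seen.add(image)
                queue.append((image, depth + 1))
    return None                                   # automaton not synchronizing (does not occur here)

for n in range(2, 7):
    print(f"n = {n}: shortest reset word length {shortest_reset_length(n)}, (n-1)^2 = {(n - 1) ** 2}")
```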
[ { "math_id": 0, "text": "(n-1)^2" }, { "math_id": 1, "text": "n (7n^2+6n-16)/48" } ]
https://en.wikipedia.org/wiki?curid=15669513
156706
Effective mass (solid-state physics)
Mass of a particle when interacting with other particles In solid state physics, a particle's effective mass (often denoted formula_0) is the mass that it "seems" to have when responding to forces, or the mass that it seems to have when interacting with other identical particles in a thermal distribution. One of the results from the band theory of solids is that the movement of particles in a periodic potential, over long distances larger than the lattice spacing, can be very different from their motion in a vacuum. The effective mass is a quantity that is used to simplify band structures by modeling the behavior of a free particle with that mass. For some purposes and some materials, the effective mass can be considered to be a simple constant of a material. In general, however, the value of effective mass depends on the purpose for which it is used, and can vary depending on a number of factors. For electrons or electron holes in a solid, the effective mass is usually stated as a factor multiplying the rest mass of an electron, "m"e (9.11 × 10−31 kg). This factor is usually in the range 0.01 to 10, but can be lower or higher—for example, reaching 1,000 in exotic heavy fermion materials, or anywhere from zero to infinity (depending on definition) in graphene. As it simplifies the more general band theory, the electronic effective mass can be seen as an important basic parameter that influences measurable properties of a solid, including everything from the efficiency of a solar cell to the speed of an integrated circuit. Simple case: parabolic, isotropic dispersion relation. At the highest energies of the valence band in many semiconductors (Ge, Si, GaAs, ...), and the lowest energies of the conduction band in some semiconductors (GaAs, ...), the band structure "E"(k) can be locally approximated as formula_1 where "E"(k) is the energy of an electron at wavevector k in that band, "E"0 is a constant giving the edge of energy of that band, and "m"* is a constant (the effective mass). It can be shown that the electrons placed in these bands behave as free electrons except with a different mass, as long as their energy stays within the range of validity of the approximation above. As a result, the electron mass in models such as the Drude model must be replaced with the effective mass. One remarkable property is that the effective mass can become "negative", when the band curves downwards away from a maximum. As a result of the negative mass, the electrons respond to electric and magnetic forces by gaining velocity in the opposite direction compared to normal; even though these electrons have negative charge, they move in trajectories as if they had positive charge (and positive mass). This explains the existence of valence-band holes, the positive-charge, positive-mass quasiparticles that can be found in semiconductors. In any case, if the band structure has the simple parabolic form described above, then the value of effective mass is unambiguous. Unfortunately, this parabolic form is not valid for describing most materials. In such complex materials there is no single definition of "effective mass" but instead multiple definitions, each suited to a particular purpose. The rest of the article describes these effective masses in detail. Intermediate case: parabolic, anisotropic dispersion relation. 
In some important semiconductors (notably, silicon) the lowest energies of the conduction band are not symmetrical, as the constant-energy surfaces are now ellipsoids, rather than the spheres in the isotropic case. Each conduction band minimum can be locally approximated by formula_2 where the "x", "y", and "z" axes are aligned to the principal axes of the ellipsoids, and "m"x*, "m"y*, and "m"z* are the inertial effective masses along these different axes. The offsets "k"0,"x", "k"0,"y", and "k"0,"z" reflect that the conduction band minimum is no longer centered at zero wavevector. (These effective masses correspond to the principal components of the inertial effective mass tensor, described later.) In this case, the electron motion is no longer directly comparable to a free electron; the speed of an electron will depend on its direction, and it will accelerate to a different degree depending on the direction of the force. Still, in crystals such as silicon the overall properties such as conductivity appear to be isotropic. This is because there are multiple valleys (conduction-band minima), each with effective masses rearranged along different axes. The valleys collectively act together to give an isotropic conductivity. It is possible to average the different axes' effective masses together in some way, to regain the free electron picture. However, the averaging method turns out to depend on the purpose: General case. In general the dispersion relation cannot be approximated as parabolic, and in such cases the effective mass should be precisely defined if it is to be used at all. Here a commonly stated definition of effective mass is the "inertial" effective mass tensor defined below; however, in general it is a matrix-valued function of the wavevector, and even more complex than the band structure. Other effective masses are more relevant to directly measurable phenomena. Inertial effective mass tensor. A classical particle under the influence of a force accelerates according to Newton's second law, a = "m"−1F, or alternatively, the momentum changes according to dp/dt = F. This intuitive principle appears identically in semiclassical approximations derived from band structure when interband transitions can be ignored for sufficiently weak external fields. The force gives a rate of change in crystal momentum pcrystal: formula_3 where ħ is the reduced Planck constant. Acceleration for a wave-like particle becomes the rate of change in group velocity: formula_4 where ∇"k" is the del operator in reciprocal space. The last step follows from using the chain rule for a total derivative for a quantity with indirect dependencies, because the direct result of the force is the change in k("t") given above, which indirectly results in a change in "E"(k) = ħω(k). Combining these two equations yields formula_5 using the dot product rule with a uniform force (∇"k"F = 0). formula_6 is the Hessian matrix of "E"(k) in reciprocal space. We see that the equivalent of the Newtonian reciprocal inertial mass for a free particle defined by a = "m"−1F has become a tensor quantity formula_7 whose elements are formula_8 This tensor allows the acceleration and force to be in different directions, and for the magnitude of the acceleration to depend on the direction of the force. For bands with the parabolic dispersion described above, the tensor is diagonal and simply equal to (1/"m"*)I, where "m"* is a scalar effective mass and I is the identity. The tensor inverse, "M"inert = ("M"inert−1)−1, is known as the effective mass tensor.
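As a numerical illustration of this tensor (an example added here, with assumed inputs), the sketch below computes the Hessian of a synthetic anisotropic parabolic band to obtain the inverse effective mass tensor and then inverts it; the principal masses used (0.98 and 0.19 times the electron rest mass, typical textbook values for a silicon conduction-band valley) are assumptions chosen only for illustration.

```python
# Numerically recover the inverse inertial effective mass tensor as the Hessian
# of E(k) divided by hbar^2, for a synthetic anisotropic parabolic band with
# silicon-like principal masses (0.98 m_e, 0.19 m_e, 0.19 m_e); these values
# are assumed inputs used only for illustration.
import numpy as np

hbar = 1.054571817e-34    # J s
m_e  = 9.1093837015e-31   # kg
m_principal = np.array([0.98, 0.19, 0.19]) * m_e

def E(k):                 # parabolic, anisotropic band centred at k = 0
    return 0.5 * hbar**2 * np.sum(np.asarray(k)**2 / m_principal)

def hessian(f, k0, h=1e7):
    """Central-difference Hessian of a scalar function of a 3-vector."""
    k0 = np.asarray(k0, dtype=float)
    eye = np.eye(3)
    H = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            ei, ej = eye[i], eye[j]
            H[i, j] = (f(k0 + h*(ei + ej)) - f(k0 + h*(ei - ej))
                       - f(k0 - h*(ei - ej)) + f(k0 - h*(ei + ej))) / (4 * h * h)
    return H

M_inv = hessian(E, [0.0, 0.0, 0.0]) / hbar**2     # inverse effective mass tensor
M = np.linalg.inv(M_inv)                          # effective mass tensor
print(np.round(np.diag(M) / m_e, 3))              # -> [0.98 0.19 0.19]
```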
Note that it is not always possible to invert "M"inert−1. For bands with linear dispersion formula_9 such as with photons or electrons in graphene, the group velocity is fixed, i.e. electrons travelling with k parallel to the force direction F cannot be accelerated, and the diagonal elements of "M"inert−1 are obviously zero. However, electrons travelling with a component perpendicular to the force can be accelerated in the direction of the force, and the off-diagonal elements of "M"inert−1 are non-zero. In fact the off-diagonal elements scale inversely with "k", i.e. they diverge (become infinite) for small "k". This is why the electrons in graphene are sometimes said to have infinite mass (due to the zeros on the diagonal of "M"inert−1) and sometimes said to be massless (due to the divergence on the off-diagonals). Cyclotron effective mass. Classically, a charged particle in a magnetic field moves in a helix along the magnetic field axis. The period "T" of its motion depends on its mass "m" and charge "e", formula_10 where "B" is the magnetic flux density. For particles in asymmetrical band structures, the particle no longer moves exactly in a helix, however its motion transverse to the magnetic field still follows a closed loop (not necessarily a circle). Moreover, the time to complete one of these loops still varies inversely with magnetic field, and so it is possible to define a "cyclotron effective mass" from the measured period, using the above equation. The semiclassical motion of the particle can be described by a closed loop in k-space. Throughout this loop, the particle maintains a constant energy, as well as a constant momentum along the magnetic field axis. By defining "A" to be the k-space area enclosed by this loop (this area depends on the energy "E", the direction of the magnetic field, and the on-axis wavevector "k""B"), then it can be shown that the cyclotron effective mass depends on the band structure via the derivative of this area in energy: formula_11 Typically, experiments that measure cyclotron motion (cyclotron resonance, De Haas–Van Alphen effect, etc.) are restricted to only probe motion for energies near the Fermi level. In two-dimensional electron gases, the cyclotron effective mass is defined only for one magnetic field direction (perpendicular) and the out-of-plane wavevector drops out. The cyclotron effective mass therefore is only a function of energy, and it turns out to be exactly related to the density of states at that energy via the relation formula_12, where "g"v is the valley degeneracy. Such a simple relationship does not apply in three-dimensional materials. Density of states effective masses (lightly doped semiconductors). In semiconductors with low levels of doping, the electron concentration in the conduction band is in general given by formula_13 where "E"F is the Fermi level, "E"C is the minimum energy of the conduction band, and "N"C is a concentration coefficient that depends on temperature. The above relationship for "n"e can be shown to apply for any conduction band shape (including non-parabolic, asymmetric bands), provided the doping is weak ("E"C − "E"F ≫ "kT"); this is a consequence of Fermi–Dirac statistics limiting towards Maxwell–Boltzmann statistics. The concept of effective mass is useful to model the temperature dependence of "N"C, thereby allowing the above relationship to be used over a range of temperatures.
In an idealized three-dimensional material with a parabolic band, the concentration coefficient is given by formula_14 In semiconductors with non-simple band structures, this relationship is used to define an effective mass, known as the density of states effective mass of electrons. The name "density of states effective mass" is used since the above expression for "N"C is derived via the density of states for a parabolic band. In practice, the effective mass extracted in this way is not quite constant in temperature ("N"C does not exactly vary as "T"3/2). In silicon, for example, this effective mass varies by a few percent between absolute zero and room temperature because the band structure itself slightly changes in shape. These band structure distortions are a result of changes in electron–phonon interaction energies, with the lattice's thermal expansion playing a minor role. Similarly, the number of holes in the valence band, and the density of states effective mass of holes are defined by: formula_15 where "E"V is the maximum energy of the valence band. Practically, this effective mass tends to vary greatly between absolute zero and room temperature in many materials (e.g., a factor of two in silicon), as there are multiple valence bands with distinct and significantly non-parabolic character, all peaking near the same energy. Determination. Experimental. Traditionally effective masses were measured using cyclotron resonance, a method in which microwave absorption of a semiconductor immersed in a magnetic field goes through a sharp peak when the microwave frequency equals the cyclotron frequency formula_16. In recent years effective masses have more commonly been determined through measurement of band structures using techniques such as angle-resolved photoemission spectroscopy (ARPES) or, most directly, the de Haas–van Alphen effect. Effective masses can also be estimated using the coefficient γ of the linear term in the low-temperature electronic specific heat at constant volume formula_17. The specific heat depends on the effective mass through the density of states at the Fermi level and as such is a measure of degeneracy as well as band curvature. Very large estimates of carrier mass from specific heat measurements have given rise to the concept of heavy fermion materials. Since carrier mobility depends on the ratio of carrier collision lifetime formula_18 to effective mass, masses can in principle be determined from transport measurements, but this method is not practical since carrier collision probabilities are typically not known a priori. The optical Hall effect is an emerging technique for measuring the free charge carrier density, effective mass and mobility parameters in semiconductors. The optical Hall effect measures the analogue of the quasi-static electric-field-induced electrical Hall effect at optical frequencies in conductive and complex layered materials. The optical Hall effect also permits characterization of the anisotropy (tensor character) of the effective mass and mobility parameters. Theoretical. A variety of theoretical methods including density functional theory, k·p perturbation theory, and others are used to supplement and support the various experimental measurements described in the previous section, including interpreting, fitting, and extrapolating these measurements. 
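As a simple numerical complement to the density-of-states discussion above (an illustration added here, not part of the article), the expression for "N"C can be evaluated directly; the effective mass used below (1.08 times the electron rest mass, a common textbook figure for silicon) is an assumption for illustration, and the script mainly shows the "T"3/2 temperature dependence.

```python
# Effective density of states in the conduction band,
# N_C = 2 * (2 * pi * m* * k_B * T / h^2) ** (3/2).
# The effective mass (1.08 m_e, a textbook value for silicon) is an assumed input.
import math

h   = 6.62607015e-34     # Planck constant, J s
k_B = 1.380649e-23       # Boltzmann constant, J/K
m_e = 9.1093837015e-31   # electron rest mass, kg

def N_C(m_star, T):
    """Effective density of states, in states per cubic metre."""
    return 2.0 * (2.0 * math.pi * m_star * k_B * T / h**2) ** 1.5

for T in (200.0, 300.0, 400.0):
    n_c = N_C(1.08 * m_e, T)
    print(f"T = {T:5.1f} K   N_C = {n_c:.3e} m^-3  = {n_c * 1e-6:.3e} cm^-3")
# At 300 K this gives roughly 2.8e19 cm^-3, close to commonly quoted values for silicon.
```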
Some of these theoretical methods can also be used for predictions of effective mass in the absence of any experimental data, for example to study materials that have not yet been created in the laboratory. Significance. The effective mass is used in transport calculations, such as transport of electrons under the influence of fields or carrier gradients, but it also is used to calculate the carrier density and density of states in semiconductors. These masses are related but, as explained in the previous sections, are not the same because the weightings of various directions and wavevectors are different. These differences are important, for example in thermoelectric materials, where high conductivity, generally associated with light mass, is desired at the same time as high Seebeck coefficient, generally associated with heavy mass. Methods for assessing the electronic structures of different materials in this context have been developed. Certain group III–V compounds such as gallium arsenide (GaAs) and indium antimonide (InSb) have far smaller effective masses than tetrahedral group IV materials like silicon and germanium. In the simplest Drude picture of electronic transport, the maximum obtainable charge carrier velocity is inversely proportional to the effective mass: formula_19, where formula_20 with formula_21 being the electronic charge. The ultimate speed of integrated circuits depends on the carrier velocity, so the low effective mass is the fundamental reason that GaAs and its derivatives are used instead of Si in high-bandwidth applications like cellular telephony. In April 2017, researchers at Washington State University claimed to have created a fluid with negative effective mass inside a Bose–Einstein condensate, by engineering the dispersion relation. See also. Models of solids and crystals: Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "m^*" }, { "math_id": 1, "text": "E(\\mathbf k) = E_0 + \\frac{\\hbar^2 \\mathbf k^2}{2 m^*}" }, { "math_id": 2, "text": "E\\left(\\mathbf{k}\\right) =\n E_0 + \\frac{\\hbar^2}{2 m_x^*}\\left(k_x - k_{0,x}\\right)^2 + \\frac{\\hbar^2}{2 m_y^*}\\left(k_y - k_{0,y}\\right)^2 + \\frac{\\hbar^2}{2 m_z^*}\\left(k_z - k_{0,z}\\right)^2\n" }, { "math_id": 3, "text": "\\mathbf{F} =\n \\frac{\\operatorname{d}\\mathbf{p}_{\\text{crystal}}}{\\operatorname{d}t} =\n \\hbar\\frac{\\operatorname{d}\\mathbf{k}}{\\operatorname{d}t},\n" }, { "math_id": 4, "text": "\\mathbf{a} =\n \\frac{\\operatorname{d}}{\\operatorname{d}t}\\,\\mathbf{v}_\\text{g} =\n \\frac{\\operatorname{d}}{\\operatorname{d}t}\\left(\\nabla_k\\,\\omega\\left(\\mathbf{k}\\right)\\right) =\n \\nabla_k\\frac{\\operatorname{d}\\omega\\left(\\mathbf{k}\\right)}{\\operatorname{d}t} =\n \\nabla_k\\left(\\frac{\\operatorname{d}\\mathbf{k}}{\\operatorname{d}t}\\cdot\\nabla_k\\,\\omega(\\mathbf{k})\\right),\n" }, { "math_id": 5, "text": "\\mathbf{a} = \\nabla_k\\left(\\frac{\\mathbf{F}}{\\hbar}\\cdot\\nabla_k\\,\\frac{E(\\mathbf{k})}{\\hbar}\\right)=\\frac{1}{\\hbar^2} \\left(\\nabla_k\\left(\\nabla_k\\,E(\\mathbf{k})\\right)\\right)\\cdot\\mathbf{F}=M_{\\text{inert}}^{-1}\\cdot\\mathbf{F}" }, { "math_id": 6, "text": "\\nabla_k\\left(\\nabla_k\\,E(\\mathbf{k})\\right)" }, { "math_id": 7, "text": "M_{\\text{inert}}^{-1}=\\frac{1}{\\hbar^2} \\nabla_k\\left(\\nabla_k\\,E(\\mathbf{k})\\right)." }, { "math_id": 8, "text": "\\left[M_{\\text{inert}}^{-1}\\right]_{ij} = \\frac{1}{\\hbar^2} \\left[\\nabla_k\\left(\\nabla_k\\,E(\\mathbf{k})\\right)\\right]_{ij} = \\frac{1}{\\hbar^2} \\frac{\\partial^2 E}{\\partial k_i \\partial k_j}\\,." }, { "math_id": 9, "text": "E\\propto k" }, { "math_id": 10, "text": "T = \\left\\vert\\frac{2\\pi m}{e B}\\right\\vert" }, { "math_id": 11, "text": "m^*\\left(E, \\hat{B}, k_{\\hat{B}}\\right) = \\frac{\\hbar^2}{2\\pi} \\cdot \\frac{\\partial}{\\partial E} A\\left(E, \\hat{B}, k_{\\hat{B}}\\right)" }, { "math_id": 12, "text": "\\scriptstyle g(E) \\;=\\; \\frac{g_v m^*}{\\pi \\hbar^2}" }, { "math_id": 13, "text": "n_\\text{e} = N_\\text{C} \\exp\\left(-\\frac{E_\\text{C} - E_\\text{F}}{kT}\\right)" }, { "math_id": 14, "text": "\\quad N_\\text{C} = 2\\left(\\frac{2\\pi m_\\text{e}^* kT}{h^2}\\right)^\\frac{3}{2}" }, { "math_id": 15, "text": "n_\\text{h} = N_\\text{V} \\exp\\left(-\\frac{E_\\text{F} - E_\\text{V}}{kT}\\right), \\quad N_\\text{V} = 2\\left(\\frac{2\\pi m_\\text{h}^* kT}{h^2}\\right)^\\frac{3}{2}" }, { "math_id": 16, "text": "\\scriptstyle f_c \\;=\\; \\frac{eB}{2\\pi m^*}" }, { "math_id": 17, "text": "\\scriptstyle c_v" }, { "math_id": 18, "text": "\\tau" }, { "math_id": 19, "text": " \\vec{v} \\;=\\; \\left\\Vert \\mu \\right\\Vert \\cdot \\vec{E}" }, { "math_id": 20, "text": " \\left\\Vert \\mu \\right\\Vert \\;=\\; {e \\tau}/{\\left\\Vert m^* \\right\\Vert}" }, { "math_id": 21, "text": " e" } ]
https://en.wikipedia.org/wiki?curid=156706
15670753
Exponential dispersion model
Set of probability distributions In probability and statistics, the class of exponential dispersion models (EDM), also called exponential dispersion family (EDF), is a set of probability distributions that represents a generalisation of the natural exponential family. Exponential dispersion models play an important role in statistical theory, in particular in generalized linear models because they have a special structure which enables deductions to be made about appropriate statistical inference. Definition. Univariate case. There are two versions to formulate an exponential dispersion model. Additive exponential dispersion model. In the univariate case, a real-valued random variable formula_0 belongs to the additive exponential dispersion model with canonical parameter formula_1 and index parameter formula_2, formula_3, if its probability density function can be written as formula_4 Reproductive exponential dispersion model. The distribution of the transformed random variable formula_5 is called reproductive exponential dispersion model, formula_6, and is given by formula_7 with formula_8 and formula_9, implying formula_10. The terminology "dispersion model" stems from interpreting formula_11 as "dispersion parameter". For fixed parameter formula_11, the formula_12 is a natural exponential family. Multivariate case. In the multivariate case, the "n"-dimensional random variable formula_13 has a probability density function of the following form formula_14 where the parameter formula_15 has the same dimension as formula_13. Properties. Cumulant-generating function. The cumulant-generating function of formula_16 is given by formula_17 with formula_10 Mean and variance. Mean and variance of formula_16 are given by formula_18 with unit variance function formula_19. Reproductive. If formula_20 are i.i.d. with formula_21, i.e. same mean formula_22 and different weights formula_23, the weighted mean is again an formula_24 with formula_25 with formula_26. Therefore formula_27 are called "reproductive". Unit deviance. The probability density function of an formula_12 can also be expressed in terms of the unit deviance formula_28 as formula_29 where the unit deviance takes the special form formula_30 or in terms of the unit variance function as formula_31. Examples. Many very common probability distributions belong to the class of EDMs, among them are: normal distribution, binomial distribution, Poisson distribution, negative binomial distribution, gamma distribution, inverse Gaussian distribution, and Tweedie distribution. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
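A concrete instance helps to make the structure tangible (an illustration added here): the Poisson distribution is an exponential dispersion model with A(θ) = exp(θ) and σ² = 1, so the mean is μ = A′(θ) = exp(θ) and the variance is σ²A″(θ) = μ, i.e. the unit variance function is V(μ) = μ. The short script below checks these relations by simulation; the parameter and sample size are arbitrary choices.

```python
# The Poisson distribution as an exponential dispersion model:
# A(theta) = exp(theta) and sigma^2 = 1, so mean = A'(theta) = exp(theta) and
# variance = sigma^2 * A''(theta) = exp(theta) = mean, i.e. V(mu) = mu.
import numpy as np

rng = np.random.default_rng(0)
theta = 1.2
mu = np.exp(theta)                 # model mean, A'(theta)

sample = rng.poisson(mu, size=200_000)
print(f"theoretical mean = variance = {mu:.4f}")
print(f"sample mean:     {sample.mean():.4f}")
print(f"sample variance: {sample.var():.4f}")
```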
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\theta" }, { "math_id": 2, "text": "\\lambda" }, { "math_id": 3, "text": "X \\sim \\mathrm{ED}^*(\\theta, \\lambda)" }, { "math_id": 4, "text": " f_X(x\\mid\\theta, \\lambda) = h^*(\\lambda,x) \\exp\\left(\\theta x - \\lambda A(\\theta)\\right) \\,\\! ." }, { "math_id": 5, "text": "Y=\\frac{X}{\\lambda}" }, { "math_id": 6, "text": "Y \\sim \\mathrm{ED}(\\mu, \\sigma^2)" }, { "math_id": 7, "text": " f_Y(y\\mid\\mu, \\sigma^2) = h(\\sigma^2,y) \\exp\\left(\\frac{\\theta y - A(\\theta)}{\\sigma^2}\\right) \\,\\! ," }, { "math_id": 8, "text": "\\sigma^2 = \\frac{1}{\\lambda}" }, { "math_id": 9, "text": "\\mu = A'(\\theta)" }, { "math_id": 10, "text": "\\theta = (A')^{-1}(\\mu)" }, { "math_id": 11, "text": "\\sigma^2" }, { "math_id": 12, "text": "\\mathrm{ED}(\\mu, \\sigma^2)" }, { "math_id": 13, "text": "\\mathbf{X}" }, { "math_id": 14, "text": " f_{\\mathbf{X}}(\\mathbf{x}|\\boldsymbol{\\theta}, \\lambda) = h(\\lambda,\\mathbf{x}) \\exp\\left(\\lambda(\\boldsymbol\\theta^\\top \\mathbf{x} - A(\\boldsymbol\\theta))\\right) \\,\\!," }, { "math_id": 15, "text": "\\boldsymbol\\theta" }, { "math_id": 16, "text": "Y\\sim\\mathrm{ED}(\\mu,\\sigma^2)" }, { "math_id": 17, "text": "K(t;\\mu,\\sigma^2) = \\log\\operatorname{E}[e^{tY}] = \\frac{A(\\theta+\\sigma^2 t)-A(\\theta)}{\\sigma^2}\\,\\! ," }, { "math_id": 18, "text": " \\operatorname{E}[Y]= \\mu = A'(\\theta) \\,, \\quad \\operatorname{Var}[Y] = \\sigma^2 A''(\\theta) = \\sigma^2 V(\\mu)\\,\\! ," }, { "math_id": 19, "text": "V(\\mu) = A''((A')^{-1}(\\mu))" }, { "math_id": 20, "text": "Y_1,\\ldots, Y_n" }, { "math_id": 21, "text": "Y_i\\sim\\mathrm{ED}\\left(\\mu,\\frac{\\sigma^2}{w_i}\\right)" }, { "math_id": 22, "text": "\\mu" }, { "math_id": 23, "text": "w_i" }, { "math_id": 24, "text": "\\mathrm{ED}" }, { "math_id": 25, "text": "\\sum_{i=1}^n \\frac{w_i Y_i}{w_{\\bullet}} \\sim \\mathrm{ED}\\left(\\mu, \\frac{\\sigma^2}{w_\\bullet}\\right) \\,\\! ," }, { "math_id": 26, "text": "w_\\bullet = \\sum_{i=1}^n w_i" }, { "math_id": 27, "text": "Y_i" }, { "math_id": 28, "text": "d(y,\\mu)" }, { "math_id": 29, "text": " f_Y(y\\mid\\mu, \\sigma^2) = \\tilde{h}(\\sigma^2,y) \\exp\\left(-\\frac{d(y,\\mu)}{2\\sigma^2}\\right) \\,\\! ," }, { "math_id": 30, "text": "d(y,\\mu) = y f(\\mu) + g(\\mu) + h(y)" }, { "math_id": 31, "text": "d(y,\\mu) = 2 \\int_\\mu^y\\! \\frac{y-t}{V(t)} \\,dt" } ]
https://en.wikipedia.org/wiki?curid=15670753
15671204
Grubbs's test
Statistical test In statistics, Grubbs's test or the Grubbs test (named after Frank E. Grubbs, who published the test in 1950), also known as the maximum normalized residual test or extreme studentized deviate test, is a test used to detect outliers in a univariate data set assumed to come from a normally distributed population. Definition. Grubbs's test is based on the assumption of normality. That is, one should first verify that the data can be reasonably approximated by a normal distribution before applying the Grubbs test. Grubbs's test detects one outlier at a time. This outlier is expunged from the dataset and the test is iterated until no outliers are detected. However, multiple iterations change the probabilities of detection, and the test should not be used for sample sizes of six or fewer since it frequently tags most of the points as outliers. Grubbs's test is defined for the following hypotheses: H0: There are no outliers in the data set Ha: There is exactly one outlier in the data set The Grubbs test statistic is defined as formula_0 with formula_1 and formula_2 denoting the sample mean and standard deviation, respectively. The Grubbs test statistic is the largest absolute deviation from the sample mean in units of the sample standard deviation. This is the two-sided test, for which the hypothesis of no outliers is rejected at significance level α if formula_3 with "t"α/(2"N"),"N"−2 denoting the upper critical value of the t-distribution with "N" − 2 degrees of freedom and a significance level of α/(2"N"). One-sided case. Grubbs's test can also be defined as a one-sided test, replacing α/(2"N") with α/"N". To test whether the minimum value is an outlier, the test statistic is formula_4 with "Y"min denoting the minimum value. To test whether the maximum value is an outlier, the test statistic is formula_5 with "Y"max denoting the maximum value. Related techniques. Several graphical techniques can be used to detect outliers. A simple run sequence plot, a box plot, or a histogram should show any obviously outlying points. A normal probability plot may also be useful. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading.  This article incorporates public domain material from
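The two-sided test is straightforward to carry out with standard scientific libraries. The sketch below (an illustration with a made-up data set) computes the test statistic G and the critical value from the formula above, using the t-distribution quantile from SciPy.

```python
# Two-sided Grubbs's test for a single outlier, following the formulas above.
# The data set is made up for illustration.
import numpy as np
from scipy import stats

def grubbs_two_sided(y, alpha=0.05):
    y = np.asarray(y, dtype=float)
    n = len(y)
    g = np.max(np.abs(y - y.mean())) / y.std(ddof=1)    # test statistic G
    t = stats.t.ppf(1.0 - alpha / (2.0 * n), n - 2)      # upper critical t value
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return g, g_crit, g > g_crit

data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.1, 14.9]          # last value is suspicious
g, g_crit, reject = grubbs_two_sided(data)
print(f"G = {g:.3f}, critical value = {g_crit:.3f}, outlier detected: {reject}")
```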
[ { "math_id": 0, "text": "\nG = \\frac{\\displaystyle\\max_{i=1,\\ldots, N}\\left \\vert Y_i - \\bar{Y}\\right\\vert}{s}\n" }, { "math_id": 1, "text": "\\overline{Y}" }, { "math_id": 2, "text": "s" }, { "math_id": 3, "text": "\nG > \\frac{N-1}{\\sqrt{N}} \\sqrt{\\frac{t_{\\alpha/(2N),N-2}^2}{N - 2 + t_{\\alpha/(2N),N-2}^2}}\n" }, { "math_id": 4, "text": "\nG = \\frac{\\bar{Y}-Y_\\min}{s}\n" }, { "math_id": 5, "text": "\nG = \\frac{Y_\\max - \\bar{Y}}{s}\n" } ]
https://en.wikipedia.org/wiki?curid=15671204
15672030
Yates analysis
In statistics, a Yates analysis is an approach to analyzing data obtained from a designed experiment, where a factorial design has been used. Full- and fractional-factorial designs are common in designed experiments for engineering and scientific applications. In these designs, each factor is assigned two levels, typically called the low and high levels, and referred to as "-" and "+". For computational purposes, the factors are scaled so that the low level is assigned a value of -1 and the high level is assigned a value of +1. A full factorial design contains all possible combinations of low/high levels for all the factors. A fractional factorial design contains a carefully chosen subset of these combinations. The criterion for choosing the subsets is discussed in detail in the fractional factorial designs article. Formalized by Frank Yates, a Yates analysis exploits the special structure of these designs to generate least squares estimates for factor effects for all factors and all relevant interactions. The Yates analysis can be used to answer the following questions: The mathematical details of the Yates analysis are given in chapter 10 of Box, Hunter, and Hunter (1978). The Yates analysis is typically complemented by a number of graphical techniques such as the DOE mean plot and the DOE contour plot ("DOE" stands for "design of experiments"). Yates' Order. Before performing a Yates analysis, the data should be arranged in "Yates' order". That is, given "k" factors, the "k"th column consists of 2("k" - 1) minus signs (i.e., the low level of the factor) followed by 2("k" - 1) plus signs (i.e., the high level of the factor). For example, for a full factorial design with three factors, the design matrix is formula_0 formula_1 formula_2 formula_3 formula_4 formula_5 formula_6 formula_7 To better understand and utilize the sign table above, one method of detailing the factors and treatment combinations is called modern notation. The notation is a shorthand that arises from taking the subscripts of each treatment combination, making them exponents, and then evaluating the resulting expression and using that as the new name of the treatment combination. Note that while the names look very much like algebraic expressions, they are simply names and no new values are assigned. Taking the 3-factor, 2-level model from above, in Yates' order the response variables are: formula_8 which in modern notation becomes: formula_9 in which it is evident that the exponents in the modern notation names are simply the subscripts of the former (note that anything raised to the zeroth power is 1 and anything raised to the first power is itself). Each response variable then gets assigned row-wise to the table above. Thus, the first row is for formula_10, the second row is for formula_11, and so on. The signs in each column then represent the signs each response variable should take in the calculation of the effect estimates for that factor. Determining the Yates' order for fractional factorial designs requires knowledge of the confounding structure of the fractional factorial design. Output. A Yates analysis generates the following output. 1 = factor 1 2 = factor 2 3 = factor 3 12 = interaction of factor 1 and factor 2 13 = interaction of factor 1 and factor 3 23 = interaction of factor 2 and factor 3 123 = interaction of factors 1, 2, and 3 To determine the magnitudes, the response variables are first arranged in Yates' order, as described in the aforementioned section above. 
Then, terms are added and subtracted pairwise to determine the next column. More specifically, given the values of the response variables (as they should have been obtained from the experiment directly) in Yates' order, the first two terms are added and that sum is now the first term in the new column. The next two terms are then added and that is the second term in the new column. Since the terms are added pairwise, half of the new column is now filled and should be composed entirely of the pairwise sums of the data. The second half of the column is found in an analogous manner, only the pairwise differences are taken, where the first term is subtracted from the second, the third from the fourth, and so on. Thus completes the column. Should more columns be needed, the same process is repeated, only using the new column. In other words, the nth column is generated from the (n-1)th column (Berger et al. calls this process "Yatesing the data"). In a formula_12 design, k columns will be required, and the last column is the column used to calculate the effect estimates. formula_13 where "e" is the estimated factor effect and "se" is the standard deviation of the estimated factor effect. formula_14 where "Xi" is the estimate of the "i"th factor or interaction effect. formula_15 This consists of a monotonically decreasing set of residual standard deviations (indicating a better fit as the number of terms in the model increases). The first cumulative residual standard deviation is for the model formula_16 where the constant is the overall mean of the response variable. The last cumulative residual standard deviation is for the model formula_17 This last model will have a residual standard deviation of zero. Example. (Adapted from Berger et al., chapter 9) Say there is a study done where one is selling some product by mail and is trying to determine the effect of three factors (postage, product price, envelope size) on people's response rate (that is, will they buy the product). Each factor has two levels: for postage (labeled A), they are third-class (low) and first-class (high), for product price (labeled B) the low level is $9.95 and the high level is $12.95, and for envelope size (labeled C) the low level is #10 and the high level is 9x12. From the experiment, the following data are acquired. Note that the response rate is given as a proportion of the people who answered the survey (favorably and unfavorably) for each treatment combination. Singling out factor A (postage) for calculation for now, the overall estimate for A must take into account the interaction effects of B and C on it as well. The four terms for calculation are: The total estimate is the sum of these four terms divided by four. In other words, the estimate of A is formula_18 where the sum has been rearranged to have all the positive terms grouped together and the negatives together for ease of viewing. In Yates' order, the sum is written as formula_19 The estimates for B and C can be determined in a similar fashion. Calculating the interaction effects is also similar, but the responses are averaged over all other effects not considered. The full table of signs for a three-factor, two-level design is given to the right. Both the factors (columns) and the treatment combinations (rows) are written in Yates' order. The value of arranging the sum in Yates' order is now apparent, as only the signs need to be altered according to the table to produce the effect estimates for every treatment combination. 
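The column-generation procedure is easy to automate. The sketch below (an illustration using made-up response values, since the example's actual table is not reproduced here) applies k passes of pairwise sums and differences to responses listed in Yates' order; the first entry of the final column divided by the number of runs gives the grand mean, and each remaining entry divided by half the number of runs gives the corresponding effect estimate (the division by 4 used in the example above).

```python
# Yates algorithm for a 2^k factorial design: k passes of pairwise sums and
# differences over responses arranged in Yates' order. The response values
# below are hypothetical, used only to illustrate the mechanics.
def yates(responses):
    col = list(responses)
    n = len(col)
    k = n.bit_length() - 1                     # n = 2^k runs
    for _ in range(k):
        sums  = [col[i] + col[i + 1] for i in range(0, n, 2)]
        diffs = [col[i + 1] - col[i] for i in range(0, n, 2)]
        col = sums + diffs                     # "Yatesing the data" once
    return col

# Hypothetical responses in Yates' order: 1, a, b, ab, c, ac, bc, abc.
y = [0.062, 0.074, 0.010, 0.020, 0.057, 0.082, 0.024, 0.027]
final = yates(y)

labels = ["total", "A", "B", "AB", "C", "AC", "BC", "ABC"]
n = len(y)
for i, (label, value) in enumerate(zip(labels, final)):
    estimate = value / n if i == 0 else value / (n // 2)
    name = "grand mean" if i == 0 else f"effect {label}"
    print(f"{name:>10}: {estimate:+.4f}   (final-column value {value:+.3f})")
```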
Observe that the columns for A, B, and C are the same as those in the design matrix in the above Yates' order section. Observe also that the columns of the interaction effects can be produced by taking the dot product of the columns of the individual factors (i.e., multiplying the columns element-wise to produce another column of the same length). Note that all sums must be divided by 4 to yield the actual effect estimate, as shown earlier. Using this table, the effect estimates are calculated as: A positive value means that an increase in the factor creates an increase in the response rate, while a negative value means that that same factor increase actually produces a decrease in the response rate. Note however that these effects have not yet been determined to be statistically significant, only that there are such effects on the response rate for each factor. Statistical significance must be ascertained via other methods such as analysis of variance (ANOVA). Further reading can be found in Berger et al., chapter 9. Parameter estimates as terms are added. In most cases of least squares fitting, the model coefficients for previously added terms change depending on what was successively added. For example, the "X"1 coefficient might change depending on whether or not an "X"2 term was included in the model. This is not the case when the design is orthogonal, as is a full factorial design with three two-level factors. For orthogonal designs, the estimates for the previously included terms do not change as additional terms are added. This means the ranked list of effect estimates simultaneously serves as the least squares coefficient estimates for progressively more complicated models. Model selection and validation. From the above Yates output, one can define the potential models from the Yates analysis. An important component of a Yates analysis is selecting the best model from the available potential models. The above step lists all the potential models. From this list, we want to select the most appropriate model. This requires balancing the following two goals. In short, we want our model to include all the important factors and interactions and to omit the unimportant factors and interactions. Note that the residual standard deviation alone is insufficient for determining the most appropriate model as it will always be decreased by adding additional factors. Instead, seven criteria are utilized to define important factors. These seven criteria are not all equally important, nor will they yield identical subsets, in which case a consensus subset or a weighted consensus subset must be extracted. In practice, some of these criteria may not apply in all situations, and some analysts may have additional criteria. These criteria are given as useful guidelines. Most analysts will focus on those criteria that they find most useful. The first four criteria focus on effect sizes with three numeric criteria and one graphical criterion. The fifth criterion focuses on averages. The last two criteria focus on the residual standard deviation of the model. Once a tentative model has been selected, the error term should follow the assumptions for a univariate measurement process. That is, the model should be validated by analyzing the residuals. Graphical presentation. Some analysts may prefer a more graphical presentation of the Yates results. In particular, the following plots may be useful: References.  This article incorporates public domain material from
[ { "math_id": 0, "text": "- - -" }, { "math_id": 1, "text": "+ - -" }, { "math_id": 2, "text": "- + -" }, { "math_id": 3, "text": "+ + -" }, { "math_id": 4, "text": "- - +" }, { "math_id": 5, "text": "+ - +" }, { "math_id": 6, "text": "- + +" }, { "math_id": 7, "text": "+ + +" }, { "math_id": 8, "text": "a_0b_0c_0, a_1b_0c_0, a_0b_1c_0, a_1b_1c_0, a_0b_0c_1, a_1b_0c_1, a_0b_1c_1, a_1b_1c_1 " }, { "math_id": 9, "text": "1, a, b, ab, c, ac, bc, abc " }, { "math_id": 10, "text": "1 " }, { "math_id": 11, "text": "a " }, { "math_id": 12, "text": "2^k" }, { "math_id": 13, "text": "\nt = \\frac{e}{s_e}\n" }, { "math_id": 14, "text": "\n\\textrm{response} = \\textrm{constant} + 0.5 X_i\n" }, { "math_id": 15, "text": "\n\\textrm{response} = \\textrm{constant} + 0.5 \\mathrm{(all\\ effect\\ estimates\\ down\\ to\\ and\\ including\\ the\\ effect\\ of\\ interest)}\n" }, { "math_id": 16, "text": "\n\\textrm{response} = \\textrm{constant}\n" }, { "math_id": 17, "text": "\n\\textrm{response} = \\textrm{constant} + 0.5 \\mathrm{(all\\ factor\\ and\\ interaction\\ estimates)}\n" }, { "math_id": 18, "text": "E_a = (a + ab + ac + abc - 1 - b - c - bc)/4" }, { "math_id": 19, "text": "E_a = (-1 + a - b + ab - c + ac - bc + abc)/4" } ]
https://en.wikipedia.org/wiki?curid=15672030
1567335
Derived set (mathematics)
In mathematics, more specifically in point-set topology, the derived set of a subset formula_0 of a topological space is the set of all limit points of formula_1 It is usually denoted by formula_2 The concept was first introduced by Georg Cantor in 1872 and he developed set theory in large part to study derived sets on the real line. Definition. The derived set of a subset formula_0 of a topological space formula_3 denoted by formula_4 is the set of all points formula_5 that are limit points of formula_6 that is, points formula_7 such that every neighbourhood of formula_7 contains a point of formula_0 other than formula_7 itself. Examples. If formula_8 is endowed with its usual Euclidean topology then the derived set of the half-open interval formula_9 is the closed interval formula_10 Consider formula_8 with the topology (open sets) consisting of the empty set and any subset of formula_8 that contains 1. The derived set of formula_11 is formula_12 Properties. If formula_13 and formula_14 are subsets of the topological space formula_15 then the derived set has the following properties: A subset formula_0 of a topological space is closed precisely when formula_22 that is, when formula_0 contains all its limit points. For any subset formula_6 the set formula_23 is closed and is the closure of formula_0 (that is, the set formula_24). The derived set of a subset of a space formula_25 need not be closed in general. For example, if formula_26 with the trivial topology, the set formula_27 has derived set formula_28 which is not closed in formula_29 But the derived set of a closed set is always closed. In addition, if formula_25 is a T1 space, the derived set of every subset of formula_25 is closed in formula_29 Two subsets formula_0 and formula_30 are separated precisely when they are disjoint and each is disjoint from the other's derived set formula_31 A bijection between two topological spaces is a homeomorphism if and only if the derived set of the image (in the second space) of any subset of the first space is the image of the derived set of that subset. A space is a T1 space if every subset consisting of a single point is closed. In a T1 space, the derived set of a set consisting of a single element is empty (Example 2 above is not a T1 space). It follows that in T1 spaces, the derived set of any finite set is empty and furthermore, formula_32 for any subset formula_0 and any point formula_33 of the space. In other words, the derived set is not changed by adding to or removing from the given set a finite number of points. It can also be shown that in a T1 space, formula_34 for any subset formula_1 A set formula_0 with formula_35 (that is, formula_0 contains no isolated points) is called dense-in-itself. A set formula_0 with formula_36 is called a perfect set. Equivalently, a perfect set is a closed dense-in-itself set, or, put another way, a closed set with no isolated points. Perfect sets are particularly important in applications of the Baire category theorem. The Cantor–Bendixson theorem states that any Polish space can be written as the union of a countable set and a perfect set. Because any Gδ subset of a Polish space is again a Polish space, the theorem also shows that any Gδ subset of a Polish space is the union of a countable set and a set that is perfect with respect to the induced topology. Topology in terms of derived sets. Because homeomorphisms can be described entirely in terms of derived sets, derived sets have been used as the primitive notion in topology. 
A set of points formula_25 can be equipped with an operator formula_37 mapping subsets of formula_25 to subsets of formula_3 such that for any set formula_0 and any point formula_38: formula_39 formula_40 formula_41 implies formula_42 formula_43 formula_44 implies formula_45 Calling a set formula_0 closed if formula_46 will define a topology on the space in which formula_37 is the derived set operator, that is, formula_47 Cantor–Bendixson rank. For ordinal numbers formula_48 the formula_49-th Cantor–Bendixson derivative of a topological space is defined by repeatedly applying the derived set operation using transfinite recursion as follows: formula_50 formula_51 formula_52 for limit ordinals formula_53. The transfinite sequence of Cantor–Bendixson derivatives of formula_25 is decreasing and must eventually be constant. The smallest ordinal formula_49 such that formula_54 is called the &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;Cantor–Bendixson rank of formula_29 This investigation into the derivation process was one of the motivations for introducing ordinal numbers by Georg Cantor. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Proofs &lt;templatestyles src="Reflist/styles.css" /&gt;
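The definition can be applied mechanically in a finite topological space, where a point x belongs to the derived set of S exactly when every open set containing x meets S with x removed. The short script below (an illustration added here) reproduces the two-point example from the Properties section: in X = {a, b} with the trivial topology, the derived set of {a} is {b}, while in a discrete space every derived set is empty.

```python
# Derived set in a finite topological space: x is a limit point of S if every
# open set containing x also contains a point of S other than x.
def derived_set(points, open_sets, S):
    S = frozenset(S)
    result = set()
    for x in points:
        neighbourhoods = [U for U in open_sets if x in U]
        if all(U & (S - {x}) for U in neighbourhoods):
            result.add(x)
    return result

# Two-point example from the article: X = {a, b} with the trivial topology.
X = {"a", "b"}
opens = [frozenset(), frozenset(X)]
print(derived_set(X, opens, {"a"}))      # -> {'b'}

# In a discrete space no set has limit points, so every derived set is empty.
X2 = {1, 2, 3}
opens2 = [frozenset(s) for s in ([], [1], [2], [3], [1, 2], [1, 3], [2, 3], [1, 2, 3])]
print(derived_set(X2, opens2, {1, 2}))   # -> set()
```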
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "S." }, { "math_id": 2, "text": "S'." }, { "math_id": 3, "text": "X," }, { "math_id": 4, "text": "S'," }, { "math_id": 5, "text": "x \\in X" }, { "math_id": 6, "text": "S," }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "\\Reals" }, { "math_id": 9, "text": "[0, 1)" }, { "math_id": 10, "text": "[0, 1]." }, { "math_id": 11, "text": "A := \\{1\\}" }, { "math_id": 12, "text": "A' = \\Reals \\setminus \\{1\\}." }, { "math_id": 13, "text": "A" }, { "math_id": 14, "text": "B" }, { "math_id": 15, "text": "(X, \\mathcal{F})," }, { "math_id": 16, "text": "\\varnothing' = \\varnothing" }, { "math_id": 17, "text": "a \\in A'" }, { "math_id": 18, "text": "a \\in (A \\setminus \\{a\\})'" }, { "math_id": 19, "text": "(A \\cup B)' = A' \\cup B'" }, { "math_id": 20, "text": "A \\subseteq B" }, { "math_id": 21, "text": "A' \\subseteq B'" }, { "math_id": 22, "text": "S' \\subseteq S," }, { "math_id": 23, "text": "S \\cup S'" }, { "math_id": 24, "text": "\\overline{S}" }, { "math_id": 25, "text": "X" }, { "math_id": 26, "text": "X = \\{a, b\\}" }, { "math_id": 27, "text": "S = \\{a\\}" }, { "math_id": 28, "text": "S' = \\{b\\}," }, { "math_id": 29, "text": "X." }, { "math_id": 30, "text": "T" }, { "math_id": 31, "text": "S' \\cap T = \\varnothing = T' \\cap S." }, { "math_id": 32, "text": "(S - \\{p\\})' = S' = (S \\cup \\{p\\})'," }, { "math_id": 33, "text": "p" }, { "math_id": 34, "text": "\\left(S'\\right)' \\subseteq S'" }, { "math_id": 35, "text": "S \\subseteq S'" }, { "math_id": 36, "text": "S = S'" }, { "math_id": 37, "text": "S \\mapsto S^*" }, { "math_id": 38, "text": "a" }, { "math_id": 39, "text": "\\varnothing^* = \\varnothing" }, { "math_id": 40, "text": "S^{**} \\subseteq S^*\\cup S" }, { "math_id": 41, "text": "a \\in S^*" }, { "math_id": 42, "text": "a \\in (S \\setminus \\{a\\})^*" }, { "math_id": 43, "text": "(S \\cup T)^* \\subseteq S^* \\cup T^*" }, { "math_id": 44, "text": "S \\subseteq T" }, { "math_id": 45, "text": "S^* \\subseteq T^*." }, { "math_id": 46, "text": "S^* \\subseteq S" }, { "math_id": 47, "text": "S^* = S'." }, { "math_id": 48, "text": "\\alpha," }, { "math_id": 49, "text": "\\alpha" }, { "math_id": 50, "text": "\\displaystyle X^0 = X" }, { "math_id": 51, "text": "\\displaystyle X^{\\alpha+1} = \\left(X^\\alpha\\right)'" }, { "math_id": 52, "text": "\\displaystyle X^\\lambda = \\bigcap_{\\alpha < \\lambda} X^\\alpha" }, { "math_id": 53, "text": "\\lambda." }, { "math_id": 54, "text": "X^{\\alpha+1} = X^\\alpha" } ]
https://en.wikipedia.org/wiki?curid=1567335
1567386
Elementary arithmetic
Numbers and the basic operations on them Elementary arithmetic is a branch of mathematics involving addition, subtraction, multiplication, and division. Due to its low level of abstraction, broad range of application, and position as the foundation of all mathematics, elementary arithmetic is generally the first branch of mathematics taught in schools. Numeral systems. In numeral systems, digits are characters used to represent the value of numbers. An example of a numeral system is the predominantly used Indo-Arabic numeral system (0 to 9), which uses a decimal positional notation. Other numeral systems include the Kaktovik system (often used in the Eskimo-Aleut languages of Alaska, Canada, and Greenland), which is a vigesimal positional notation system. Regardless of the numeral system used, the results of arithmetic operations are unaffected. Successor function and ordering. In elementary arithmetic, the successor of a natural number (including zero) is the next natural number and is the result of adding one to that number. The predecessor of a natural number (excluding zero) is the previous natural number and is the result of subtracting one from that number. For example, the successor of zero is one, and the predecessor of eleven is ten (formula_0 and formula_1). Every natural number has a successor, and every natural number except 0 has a predecessor. The natural numbers have a total ordering. If one number is greater than (formula_2) another number, then the latter is less than (formula_3) the former. For example, three is less than eight (formula_4), thus eight is greater than three (formula_5). The natural numbers are also well-ordered, meaning that any subset of the natural numbers has a least element. Counting. Counting assigns a natural number to each object in a set, starting with 1 for the first object and increasing by 1 for each subsequent object. The number of objects in the set is the count. This is also known as the cardinality of the set. Counting can also be the process of tallying, drawing a mark for each object in a set. Addition. Addition is a mathematical operation that combines two or more numbers (called addends or summands) to produce a combined number (called the sum). The addition of two numbers is expressed with the plus sign (formula_6). It is performed according to these rules: when the sum of a pair of digits results in a two-digit number, the "tens" digit is referred to as the "carry digit" and is added to the next column. In elementary arithmetic, students typically learn to add whole numbers and may also learn about topics such as negative numbers and fractions. Subtraction. Subtraction evaluates the difference between two numbers, where the minuend is the number being subtracted from, and the subtrahend is the number being subtracted. It is represented using the minus sign (formula_7). The minus sign is also used to notate negative numbers. Subtraction is not commutative, which means that the order of the numbers can change the final value; formula_8 is not the same as formula_9. In elementary arithmetic, the minuend is always larger than the subtrahend to produce a positive result. Subtraction is also used to separate, combine (e.g., find the size of a subset of a specific set), and find quantities in other contexts. There are several methods to accomplish subtraction. The traditional mathematics method subtracts using methods suitable for hand calculation. 
Reform mathematics is distinguished generally by the lack of preference for any specific technique, replaced by guiding students to invent their own methods of computation. American schools teach a method of subtraction using borrowing. A subtraction problem such as formula_10 is solved by borrowing a 10 from the tens place to add to the ones place in order to facilitate the subtraction. Subtracting 9 from 6 involves borrowing a 10 from the tens place, making the problem into formula_11. This is indicated by crossing out the 8, writing a 7 above it, and writing a 1 above the 6. These markings are called "crutches", which were invented by William A. Brownell, who used them in a study in November 1937. The Austrian method, also known as the additions method, is taught in certain European countries. In contrast to the previous method, no borrowing is used, although there are crutches that vary from country to country. The method of addition involves augmenting the subtrahend. This transforms the previous problem into formula_12. A small 1 is marked below the subtrahend digit as a reminder. Example. Subtracting the numbers 792 and 308, starting with the ones column, 2 is smaller than 8. Using the borrowing method, 10 is borrowed from 90, reducing 90 to 80. This changes the problem to formula_13. In the tens column, the difference between 80 and 0 is 80. In the hundreds column, the difference between 700 and 300 is 400. The result: formula_14 Multiplication. Multiplication is a mathematical operation of repeated addition. When two numbers are multiplied, the resulting value is a product. The numbers being multiplied are multiplicands, multipliers, or factors. Multiplication can be expressed as "five times three equals fifteen", "five times three is fifteen" or "fifteen is the product of five and three". Multiplication is represented using the multiplication sign (×), the asterisk (*), parentheses (), or a dot (⋅). The statement "five times three equals fifteen" can be written as "formula_15", "formula_16", "formula_17", or "formula_18". In elementary arithmetic, multiplication satisfies the following properties: commutativity (formula_19), associativity (formula_20), and distributivity over addition (formula_21). In the multiplication algorithm, the "tens" digit of the product of a pair of digits is referred to as the "carry digit". Example of multiplication for a single-digit factor. Multiplying 729 and 3, starting on the ones column, the product of 9 and 3 is 27. 7 is written under the ones column and 2 is written above the tens column as a carry digit. The product of 2 and 3 is 6, and the carry digit adds 2 to 6, so 8 is written under the tens column. The product of 7 and 3 is 21, and since this is the last digit, 2 will not be written as a carry digit, but is instead written beside the 1. The result: formula_22 Example of multiplication for multiple-digit factors. Multiplying 789 and 345, starting with the ones column, the product of 789 and 5 is 3945. 4 is in the tens digit. The multiplier is 40, not 4. The product of 789 and 40 is 31560. 3 is in the hundreds digit. The multiplier is 300. The product of 789 and 300 is 236700. Adding all the products gives the result: formula_23 Division. Division is an arithmetic operation, and the inverse of multiplication. Given that formula_24, division can be written as formula_25, formula_26, or <templatestyles src="Fraction/styles.css" />"a"⁄"b". This can be read verbally as ""a" divided by "b"" or ""a" over "b"". In some non-English-speaking cultures, ""a" divided by "b"" is written "a" : "b". 
In English usage, the colon is restricted to the concept of ratios (""a" is to "b""). In an equation formula_27, "a" is the dividend, "b" the divisor, and "c" the quotient. Division by zero is considered impossible at an elementary arithmetic level. Two numbers can be divided on paper using long division. An abbreviated version of long division, short division, can be used for smaller divisors. A less systematic method involves the concept of chunking, in which convenient multiples of the divisor are subtracted from the partial remainder at each stage. Example. Dividing 272 and 8, starting with the hundreds digit, 2 is not divisible by 8. Add 20 and 7 to get 27. The largest number that the divisor of 8 can be multiplied by without exceeding 27 is 3, so it is written under the tens column. Subtracting 24 (the product of 3 and 8) from 27 gives 3 as the remainder. Going to the ones digit, the number is 2. Adding 30 (the remainder, 3, times 10) and 2 gives 32. The quotient of 32 and 8 is 4, which is written under the ones column. The result: formula_28 Bus stop method. Another method of dividing taught in some schools is the bus stop method, sometimes notated as
   result
 (divisor) dividend
The steps here are shown below, using the same example as above:
  034
 8|272
  0   ( 8 × 0 = 0)
  27  ( 2 - 0 = 2)
  24  ( 8 × 3 = 24)
   32 (27 - 24 = 3)
   32 ( 8 × 4 = 32)
    0 (32 - 32 = 0)
The result: formula_29 Educational standards. Elementary arithmetic is typically taught at the primary or secondary school levels and is governed by local educational standards. In the United States and Canada, there has been debate about the content and methods used to teach elementary arithmetic. Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
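The digit-by-digit procedures described above translate directly into short routines. The following Python sketch (an illustration only; the function names and structure are arbitrary choices, not part of the article) mirrors the borrowing method for subtraction and the column-by-column working of long division, and reproduces the examples 792 - 308 = 484 and 272 ÷ 8 = 34.

# Digit-by-digit elementary arithmetic, mirroring the hand methods above.
def subtract_with_borrowing(minuend, subtrahend):
    """Assumes minuend >= subtrahend >= 0, as in elementary arithmetic."""
    top = [int(d) for d in str(minuend)]
    bottom = [int(d) for d in str(subtrahend).rjust(len(top), "0")]
    result = []
    for i in range(len(top) - 1, -1, -1):       # work right to left, column by column
        if top[i] < bottom[i]:                   # borrow 10 from the next column
            top[i] += 10
            top[i - 1] -= 1
        result.append(top[i] - bottom[i])
    return int("".join(str(d) for d in reversed(result)))

def long_divide(dividend, divisor):
    """Returns (quotient, remainder) using the column-by-column long division procedure."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):                  # bring down one digit at a time
        remainder = remainder * 10 + int(digit)
        quotient_digits.append(remainder // divisor)
        remainder -= divisor * quotient_digits[-1]
    return int("".join(str(d) for d in quotient_digits)), remainder

print(subtract_with_borrowing(792, 308))   # 484, as in the subtraction example
print(long_divide(272, 8))                 # (34, 0), matching the bus stop working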
[ { "math_id": 0, "text": "0+1=1" }, { "math_id": 1, "text": "11-1=10" }, { "math_id": 2, "text": ">" }, { "math_id": 3, "text": "<" }, { "math_id": 4, "text": "3<8" }, { "math_id": 5, "text": "8>3" }, { "math_id": 6, "text": "+" }, { "math_id": 7, "text": "-" }, { "math_id": 8, "text": "3-5" }, { "math_id": 9, "text": "5-3" }, { "math_id": 10, "text": "86-39" }, { "math_id": 11, "text": "70+16-39" }, { "math_id": 12, "text": "(80+16)-(39+10)" }, { "math_id": 13, "text": "12-8" }, { "math_id": 14, "text": "792 - 308 = 484" }, { "math_id": 15, "text": "5 \\times 3 = 15" }, { "math_id": 16, "text": "5 \\ast 3 = 15" }, { "math_id": 17, "text": "(5)(3) = 15" }, { "math_id": 18, "text": "5 \\cdot 3 = 15" }, { "math_id": 19, "text": "a \\times b = b \\times a" }, { "math_id": 20, "text": "a \\times (b \\times c) = (a \\times b) \\times c" }, { "math_id": 21, "text": "a \\times (b + c) = a \\times b + a \\times c" }, { "math_id": 22, "text": "3 \\times 729 = 2187" }, { "math_id": 23, "text": "789 \\times 345 = 272205" }, { "math_id": 24, "text": "c \\times b = a" }, { "math_id": 25, "text": "a \\div b" }, { "math_id": 26, "text": "\\frac ab" }, { "math_id": 27, "text": "a \\div b = c" }, { "math_id": 28, "text": "272 \\div 8 = 34" }, { "math_id": 29, "text": "272\\div8=34" } ]
https://en.wikipedia.org/wiki?curid=1567386
1567410
Sigma-ideal
Family closed under subsets and countable unions In mathematics, particularly measure theory, a 𝜎-ideal, or sigma ideal, of a σ-algebra (𝜎, read "sigma") is a subset with certain desirable closure properties. It is a special type of ideal. Its most frequent application is in probability theory. Let formula_0 be a measurable space (meaning formula_1 is a 𝜎-algebra of subsets of formula_2). A subset formula_3 of formula_1 is a 𝜎-ideal if the following properties are satisfied: (i) formula_4; (ii) if formula_5, formula_6, and formula_7, then formula_8; and (iii) if formula_9, then formula_10. Briefly, a sigma-ideal must contain the empty set and contain subsets and countable unions of its elements. The concept of 𝜎-ideal is dual to that of a countably complete (𝜎-) filter. If a measure formula_11 is given on formula_12 the set of formula_11-negligible sets (formula_13 such that formula_14) is a 𝜎-ideal. The notion can be generalized to preorders formula_15 with a bottom element formula_16 as follows: formula_17 is a 𝜎-ideal of formula_18 just when (i') formula_19 (ii') formula_20 implies formula_21 and (iii') given a sequence formula_22 there exists some formula_23 such that formula_24 for each formula_25 Thus formula_17 contains the bottom element, is downward closed, and satisfies a countable analogue of the property of being upwards directed. A 𝜎-ideal of a set formula_2 is a 𝜎-ideal of the power set of formula_26 That is, when no 𝜎-algebra is specified, then one simply takes the full power set of the underlying set. For example, the meager subsets of a topological space are those in the 𝜎-ideal generated by the collection of closed subsets with empty interior.
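As an illustration of the definition, the Python sketch below (an illustrative check only; the space, the measure weights and the variable names are made-up choices) verifies the three closure properties for the family of null sets of a measure on a small finite space. On a finite space every family is finite, so the countable-union condition reduces to a union over finitely many members.

# Checking the sigma-ideal axioms for the mu-null sets inside the power set of a finite space.
from itertools import combinations

X = frozenset({1, 2, 3, 4})
weights = {1: 0.0, 2: 0.0, 3: 0.5, 4: 0.5}    # hypothetical measure: mu({x}) = weights[x]

def mu(A):
    return sum(weights[x] for x in A)

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

sigma_algebra = powerset(X)                    # the full power set serves as the sigma-algebra
null_sets = {A for A in sigma_algebra if mu(A) == 0}

# (i) the empty set is null
assert frozenset() in null_sets
# (ii) downward closure: every measurable subset of a null set is null
assert all(B in null_sets for A in null_sets for B in sigma_algebra if B <= A)
# (iii) closure under (here finite) unions of null sets
assert all((A | B) in null_sets for A in null_sets for B in null_sets)
print(f"{len(null_sets)} null sets form a sigma-ideal of the power set of {set(X)}")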
[ { "math_id": 0, "text": "(X, \\Sigma)" }, { "math_id": 1, "text": "\\Sigma" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "N" }, { "math_id": 4, "text": "\\varnothing \\in N" }, { "math_id": 5, "text": "A \\in N" }, { "math_id": 6, "text": "B \\in \\Sigma" }, { "math_id": 7, "text": "B \\subseteq A" }, { "math_id": 8, "text": "B \\in N" }, { "math_id": 9, "text": "\\left\\{A_n\\right\\}_{n \\in \\N} \\subseteq N" }, { "math_id": 10, "text": "\\bigcup_{n \\in \\N} A_n \\in N." }, { "math_id": 11, "text": "\\mu" }, { "math_id": 12, "text": "(X, \\Sigma)," }, { "math_id": 13, "text": "S \\in \\Sigma" }, { "math_id": 14, "text": "\\mu(S) = 0" }, { "math_id": 15, "text": "(P, \\leq, 0)" }, { "math_id": 16, "text": "0" }, { "math_id": 17, "text": "I" }, { "math_id": 18, "text": "P" }, { "math_id": 19, "text": "0 \\in I," }, { "math_id": 20, "text": "x \\leq y \\text{ and } y \\in I" }, { "math_id": 21, "text": "x \\in I," }, { "math_id": 22, "text": "x_1, x_2, \\ldots \\in I," }, { "math_id": 23, "text": "y \\in I" }, { "math_id": 24, "text": "x_n \\leq y" }, { "math_id": 25, "text": "n." }, { "math_id": 26, "text": "X." } ]
https://en.wikipedia.org/wiki?curid=1567410
15675759
Open cluster family
In astronomy, an open cluster family is a group of approximately coeval (age range formula_030 Myr) young open star clusters located in a relatively small region of the Galactic disk (radius formula_0250 pc). Introduction. Open clusters do not form in isolation but in complexes (Efremov 1978), within star forming regions. When age, spatial distribution, and kinematics are taken into account simultaneously, a significant number of known young open clusters are found in groups. Piskunov et al. (2006) found evidence for four open cluster complexes (OCCs) of different ages containing up to a few tens of clusters. The existence of at least five dynamical families of young open clusters in the Milky Way disk has been confirmed using statistical analysis by de la Fuente Marcos and de la Fuente Marcos (2008). They are, in order of increasing distance: Orion, Scutum-Sagittarius, Cygnus, Scorpius, and Cassiopeia-Perseus. These families are associated with the Galactic spiral structure; they are short-lived, dispersing on a relatively short timescale, and they are progenitors of classical superclusters, moving groups, and stellar streams (de la Fuente Marcos &amp; de la Fuente Marcos 2008). The Cassiopeia-Perseus open cluster family is located 2 kpc from the Sun between the constellations of Cassiopeia and Perseus, embedded in the Perseus spiral arm (de la Fuente Marcos &amp; de la Fuente Marcos 2009). The structure roughly defines a plane that is inclined almost 30° with respect to the plane of the Milky Way. It has a diameter of about 600 pc and includes 10 to 20 members. Most candidate members are located below the Galactic disk and are moving away from it. It started to form about 20 to 40 Myr (1 Myr = 10^6 yr) ago.
[ { "math_id": 0, "text": "\\sim" } ]
https://en.wikipedia.org/wiki?curid=15675759
1567681
Froth flotation
Process for selectively separating of hydrophobic materials from hydrophilic Froth flotation is a process for selectively separating hydrophobic materials from hydrophilic. This is used in mineral processing, paper recycling and waste-water treatment industries. Historically this was first used in the mining industry, where it was one of the great enabling technologies of the 20th century. It has been described as "the single most important operation used for the recovery and upgrading of sulfide ores". The development of froth flotation has improved the recovery of valuable minerals, such as copper- and lead-bearing minerals. Along with mechanized mining, it has allowed the economic recovery of valuable metals from much lower-grade ore than previously. Industries. Froth flotation is applied to a wide range of separations. An estimated 1B tons of materials are processed in this manner annually. Mineral processing. Froth flotation is a process for separating minerals from gangue by exploiting differences in their hydrophobicity. Hydrophobicity differences between valuable minerals and waste gangue are increased through the use of surfactants and wetting agents. The flotation process is used for the separation of a large range of sulfides, carbonates and oxides prior to further refinement. Phosphates and coal are also upgraded (purified) by flotation technology. "Grade-recovery curves" are tools for weighing the trade-off of producing a high grade of concentrate vs cost. These curves only compare the grade-recovery relations of a specific feed grade and feed rate. Waste water treatment. The flotation process is also widely used in industrial waste water treatment plants, where it removes fats, oil, grease and suspended solids from waste water. These units are called dissolved air flotation (DAF) units. In particular, dissolved air flotation units are used in removing oil from the wastewater effluents of oil refineries, petrochemical and chemical plants, natural gas processing plants and similar industrial facilities. Principle of operation. The ore to be treated is ground into particles (comminution). In the idealized case, the individual minerals are physically separated, a process known as full "liberation". The particle sizes are typically in the range 2–500 micrometers in diameter. For froth flotation, an aqueous slurry of the ground ore is treated with the frothing agent. An example is sodium ethyl xanthate as a collector in the flotation of galena (lead sulfide) to separate it from sphalerite (zinc sulfide). The polar part of xanthate anion attaches to the ore particles and the non-polar hydrocarbon part forms a hydrophobic layer. The particles are brought to the water surface by air bubbles. About 300 g/t of ore is required for efficient separation. With increasing length of the hydrocarbon chain in xanthates, the efficiency of the hydrophobic action increases, but the selectivity to ore type decreases. The chain is shortest in sodium ethyl xanthate that makes it highly selective to copper, nickel, lead, gold, and zinc ores. Aqueous solutions (10%) with pH = 7–11 are normally used in the process. This slurry (more properly called the "pulp") of hydrophobic particles and hydrophilic particles is then introduced to tanks known as "flotation cells" that are aerated to produce bubbles. The hydrophobic particles attach to the air bubbles, which rise to the surface, forming a froth. The froth is skimmed from the cell, producing a concentrate ("conc") of the target mineral. 
The minerals that do not float into the froth are referred to as the "flotation tailings" or "flotation tails". These tailings may also be subjected to further stages of flotation to recover the valuable particles that did not float the first time. This is known as "scavenging". The final tailings after scavenging are normally pumped for disposal as mine fill or to tailings disposal facilities for long-term storage. Flotation is normally undertaken in several stages to maximize the recovery of the target mineral or minerals and the concentration of those minerals in the concentrate, while minimizing the energy input. Flotation stages. The first stage is called "roughing", which produces a "rougher concentrate". The objective is to remove the maximum amount of the valuable mineral at as coarse a particle size as practical. Grinding costs energy. The goal is to release enough gangue from the valuable mineral to get a high recovery. Some concentrators use a "preflotation" step to remove low density impurities such as carbonaceous dust. The rougher concentrate is normally subjected to further stages of flotation to reject more of the undesirable minerals that also reported to the froth, in a process known as "cleaning". The resulting material is often subject to further grinding (usually called "regrinding"). Regrinding is often undertaken in specialized "regrind mills", such as the IsaMill. The rougher flotation step is often followed by a "scavenger" flotation step that is applied to the rougher tailings to further recover any of the target minerals. Science of flotation. To be effective on a given ore slurry, the collectors are chosen based upon their selective wetting of the types of particles to be separated. A good collector will adsorb, physically or chemically, onto one of the types of particles. The wetting activity of a surfactant on a particle can in principle be quantified by measuring the contact angles of the liquid/bubble interface. Another important measure for attachment of bubbles to particles is induction time, the time required for the particle and bubble to rupture the thin film separating the particle and bubble. This rupturing is achieved by the surface forces between the particle and bubble. The mechanism of bubble-particle attachment is complex but is viewed as consisting of three steps: collision, attachment, and detachment. The collision is achieved by particles being within the collision tube of a bubble, and this is affected by the velocity and radius of the bubble. The collision tube corresponds to the region in which a particle will collide with the bubble, with the perimeter of the collision tube corresponding to the grazing trajectory. The attachment of the particle to the bubble is controlled by the induction time of the particle and bubble. The particle and bubble need to bind and this occurs if the time in which the particle and bubble are in contact with each other is larger than the required induction time. This induction time is affected by the fluid viscosity, particle and bubble size and the forces between the particle and bubbles. The detachment of a particle and bubble occurs when the force exerted by the surface tension is exceeded by shear forces and gravitational forces. These forces are complex and vary within the cell. High shear is experienced close to the impeller of a mechanical flotation cell, while gravitational force dominates in the collection and cleaning zone of a flotation column. 
Significant issues of entrainment of fine particles occurs as these particles experience low collision efficiencies as well as sliming and degradation of the particle surfaces. Coarse particles show a low recovery of the valuable mineral due to the low liberation and high detachment efficiencies. Flotation equipment. Flotation can be performed in rectangular or cylindrical mechanically agitated cells or tanks, flotation columns, Jameson Cells or deinking flotation machines. Classified by the method of air absorption manner, it is fair to state that two distinct groups of flotation equipment have arisen:pneumatic and mechanical machines. Generally pneumatic machines give a low-grade concentrate and little operating troubles. Mechanical cells use a large mixer and diffuser mechanism at the bottom of the mixing tank to introduce air and provide mixing action. Flotation columns use air spargers to introduce air at the bottom of a tall column while introducing slurry above. The countercurrent motion of the slurry flowing down and the air flowing up provides mixing action. Mechanical cells generally have a higher throughput rate, but produce material that is of lower quality, while flotation columns generally have a low throughput rate but produce higher quality material. The Jameson cell uses neither impellers nor spargers, instead combining the slurry with air in a downcomer where high shear creates the turbulent conditions required for bubble particle contacting. Chemicals of flotation. Collectors. For many ores (e.g. those of Cu, Mo, W, Ni), the collectors are anionic sulfur ligands. Particularly popular for sulfide minerals are xanthate salts, including potassium amyl xanthate (PAX), potassium isobutyl xanthate (PIBX), potassium ethyl xanthate (KEX), sodium isobutyl xanthate (SIBX), sodium isopropyl xanthate (SIPX), sodium ethyl xanthate (SEX). Related collectors include related sulfur-based ligands: dithiophosphates, dithiocarbamates. Still other classes of collectors include the thiourea thiocarbanilide. Fatty acid carboxylates, alkyl sulfates, and alkyl sulfonates have also been used for oxide minerals. For some minerals (e.g., sylvinite for KCl), fatty amines are used as collectors. Frothers. A variety of compounds are added to stabilize the foams. These additives include pine oil and various alcohols: methyl isobutyl carbinol (MIBC), polyglycols, xylenol (cresylic acid). Depressants. According to one vendor, depressants "increase the efficiency of the flotation process by selectively inhibiting the interaction of one mineral with the collector." Thus a typical pulverized ore sample consists of many components, of which only one or a few are targets for the collector. Depressants bind to these other components, lest the collector be wasted by doing so. Depressants are selected for particular ores. Typical depressants are starch, polyphenols, lye, and lime. They are cheap, and oxygen-rich typically. Modifiers. A variety of other compounds are added to optimize the separation process, these additives are called modifiers. Modifying reagents react either with the mineral surfaces or with collectors and other ions in the flotation pulp, resulting in a modified and controlled flotation response. Specific applications. Sulfide ores. Prior to 1907, nearly all the copper mined in the US came from underground vein deposits, averaging 2.5 percent copper. By 1991, the average grade of copper ore mined in the US had fallen to only 0.6 percent. Nonsulfide ores. 
Flotation is used for the purification of potassium chloride from sodium chloride and clay minerals. The crushed mineral is suspended in brine in the presence of fatty ammonium salts. Because the ammonium head group and K+ have very similar ionic radii (ca. 0.135, 0.143 nm respectively), the ammonium centers exchange for the surface potassium sites on the particles of KCl, but not on the NaCl particles. The long alkyl chains then confer hydrophobicity to the particles, which enable them to form foams. Chemical compounds for deinking of recycled paper. Froth flotation is one of the processes used to recover recycled paper. In the paper industry this step is called deinking or just flotation. The target is to release and remove the hydrophobic contaminants from the recycled paper. The contaminants are mostly printing ink and stickies. Normally the setup is a two-stage system with 3,4 or 5 flotation cells in series. Environmental considerations. As in any technology that has long been conducted on the multi-million ton per year scale, flotation technologies have the potential to threaten the environment beyond the disruption caused by mining. Froth flotation employs a host of organic chemicals and relies upon elaborate machinery. Some of the chemicals (cyanide) are acutely toxic but hydrolyze to innocuous products. Naturally occurring fatty acids are widely used. Tailings and effluents are contained in lined ponds. Froth flotation is "poised for increased activity due to their potential usefulness in environmental site cleanup operations" including recycling of plastics and metals, not to mention water treatment. History. Flotation processes are described in ancient Greek and Persian literature. During the late 19th century, the process basics were discovered through a slow evolutionary phase. During the first decade of the 20th century, a more rapid investigation of oils, froths, and agitation led to proven workplace applications, especially in Broken Hill, Australia, that brought the technological innovation known as “froth flotation.” During the early 20th century, froth flotation revolutionized mineral processing. Initially, naturally occurring chemicals such as fatty acids and oils were used as flotation reagents in large quantities to increase the hydrophobicity of the valuable minerals. Since then, the process has been adapted and applied to a wide variety of materials to be separated, and additional collector agents, including surfactants and synthetic compounds have been adopted for various applications. 19th century. Englishman William Haynes patented a process in 1860 for separating sulfide and gangue minerals using oil. Later writers have pointed to Haynes's as the first "bulk oil flotation" patent, though there is no evidence of its being field tested, or used commercially. In 1877 the brothers Bessel (Adolph and August) of Dresden, Germany, introduced their commercially successful oil and froth flotation process for extracting graphite, considered by some the root of froth flotation. However, the Bessel process became uneconomical after the discovery of high-grade graphite in Sri Lanka and was largely forgotten. Inventor Hezekiah Bradford of Philadelphia invented a "method of saving floating material in ore-separation” and received US patent No. 345951 on July 20, 1886. He would later go on to patent the Bradford Breaker, currently in use by the coal industry, in 1893. 
His "Bradford washer," patented 1870, was used to concentrate iron, copper and lead-zinc ores by specific gravity, but lost some of the metal as float off the concentration process. The 1886 patent was to capture this "float" using surface tension, the first of the skin-flotation process patents that were eclipsed by oil froth flotation. On August 24, 1886, Carrie Everson received a patent for her process calling for oil[s] but also an acid or a salt, a significant step in the evolution of the process history. By 1890, tests of the Everson process had been made at Georgetown and Silver Cliff, Colorado, and Baker, Oregon. She abandoned the work upon the death of her husband, and before perfecting a commercially successful process. Later, during the height of legal disputes over the validity of various patents during the 1910s, Everson's was often pointed to as the initial flotation patent - which would have meant that the process was not patentable again by later contestants. Much confusion has been clarified recently by historian Dawn Bunyak. First commercial flotation process. The generally recognized first successful commercial flotation process for "mineral" sulphides was invented by Frank Elmore who worked on the development with his brother, Stanley. The Glasdir copper mine at Llanelltyd, near Dolgellau in North Wales was bought in 1896 by the Elmore brothers in conjunction with their father, William. In 1897, the Elmore brothers installed the world's first industrial-size commercial flotation process for mineral beneficiation at the Glasdir mine. The process was not froth flotation but used oil to agglomerate (make balls of) pulverised sulphides and buoy them to the surface, and was patented in 1898 (revised 1901). The operation and process was described in the April 25, 1900 "Transactions" of the Institution of Mining and Metallurgy of England, which was reprinted with comment, June 23, 1900, in the "Engineering and Mining Journal", New York City. By this time they had recognized the importance of air bubbles in assisting the oil to carry away the mineral particles. As modifications were made to improve the process, it became a success with base metal ores from Norway to Australia. The Elmores had formed a company known as the Ore Concentration Syndicate Ltd to promote the commercial use of the process worldwide. In 1900, Charles Butters of Berkeley, California, acquired American rights to the Elmore process after seeing a demonstration at Llanelltyd, Wales. Butters, an expert on the cyanide process, built an Elmore process plant in the basement of the Dooley Building, Salt Lake City, and tested the oil process on gold ores throughout the region and tested the tailings of the Mammoth gold mill, Tintic district, Utah, but without success. Because of Butters' reputation and the news of his failure, as well as the unsuccessful attempt at the LeRoi gold mine at Rossland, B. C., the Elmore process was all but ignored in North America. Developments elsewhere, particularly in Broken Hill, Australia by Minerals Separation, Limited, led to decades of hard-fought legal battles and litigations for the Elmores who, ultimately, lost as the Elmore process was superseded by more advanced techniques. Another flotation process was independently invented in 1901 in Australia by Charles Vincent Potter and by Guillaume Daniel Delprat around the same time. Potter was a brewer of beer, as well as a chemist, and was likely inspired by the way beer froth lifted up sediment in the beer. 
This process did not use oil, but relied upon flotation by the generation of gas formed by the introduction of acid into the pulp. In 1903, Potter sued Delprat, then general manager of BHP, for patent infringement. He lost the case for reasons of utility, with Delprat arguing that Potter's process, which used sulphuric acid to generate the bubbles, was not as useful as Delprat's own process, which used salt cake. Despite this, after the case was over BHP began using sulphuric acid for its flotation process. In 1902, Froment combined oil and gaseous flotation using a modification of the Potter-Delprat process. During the first decade of the twentieth century, Broken Hill became the center of innovation leading to the perfection of the froth flotation process by many technologists there borrowing from each other and building on these first successes. Yet another process was developed in 1902 by Arthur C. Cattermole, who emulsified the pulp with a small quantity of oil, subjected it to violent agitation, and then slow stirring which coagulated the target minerals into nodules which were separated from the pulp by gravity. The Minerals Separation Ltd., formed in Britain in 1903 to acquire the Cattermole patent, found that it proved unsuccessful. Metallurgists on the staff continued to test and combine other discoveries to patent in 1905 their process, called the Sulman-Picard-Ballot process after company officers and patentees. The process proved successful at their Central Block plant, Broken Hill that year. Significant in their "agitation froth flotation" process was the use of less than 1% oil and an agitation step that created small bubbles, which provided more surface to capture the metal and float into a froth at the surface. Useful work was done by Leslie Bradford at Port Pirie and by William Piper, Sir Herbert Gepp and Auguste de Bavay. Minerals Separation also bought other patents to consolidate ownership of any potential conflicting rights to the flotation process - except for the Elmore patents. In 1910, when the Zinc Corporation replaced its Elmore process with the Minerals Separation (Sulman-Picard-Ballot) froth flotation process at its Broken Hill plant, the primacy of the Minerals Separation over other process contenders was assured. Henry Livingston Sulman was later recognized by his peers in his election as President of the (British) Institution of Mining and Metallurgy, which also awarded him its gold medal. 
Minerals Separation, Ltd., which had set up an office in San Francisco, sued Hyde for infringement as well as the Butte &amp; Superior company, both cases were eventually won by the firm in the U. S. Supreme Court. Daniel Cowan Jackling and partners, who controlled Butte &amp; Superior, also refuted the Minerals Separation patent and funded the ensuing legal battles that lasted over a decade. They - Utah Copper (Kennecott), Nevada Consolidated, Chino Copper, Ray Con and other Jackling firms - eventually settled, in 1922, paying a substantial fee for licenses to use the Minerals Separation process. One unfortunate result of the dispute was professional divisiveness among the mining engineering community for a generation. In 1913, the Minerals Separation paid for a test plant for the Inspiration Copper Company at Miami, Arizona. Built under the San Francisco office director, Edward Nutter, it proved a success. Inspiration engineer L. D. Ricketts ripped out a gravity concentration mill and replaced it with the Minerals Separation process, the first major use of the process at an American copper mine. A major holder of Inspiration stock were men who controlled the great Anaconda mine of Butte. They immediately followed the Inspiration success to build a Minerals Separation licensed plant at Butte, in 1915–1916, a major statement about the final acceptance of the Minerals Separation patented process. John M. Callow, of General Engineering of Salt Lake City, had followed flotation from technical papers and the introduction in both the Butte and Superior Mill, and at Inspiration Copper in Arizona and determined that mechanical agitation was a drawback to the existing technology. Introducing a porous brick with compressed air, and a mechanical stirring mechanism, Callow applied for a patent in 1914 (some say that Callow, a Jackling partisan, invented his cell as a means to avoid paying royalties to Minerals Separation, which firms using his cell eventually were forced to do by the courts). This method, known as Pneumatic Flotation, was recognized as an alternative to the Minerals Separation process of flotation concentration. The American Institute of Mining Engineers presented Callow the James Douglas Gold Medal in 1926 for his contributions to the field of flotation. By that time, flotation technology was changing, especially with the discovery of the use of xanthates and other reagents, which made the Callow cell and his process obsolete. Montana Tech professor Antoine Marc Gaudin defined the early period of flotation as the mechanical phase while by the late 1910s it entered the chemical phase. Discoveries in reagents, especially the use of xanthates patented by Minerals Separations chemist Cornelius H. Keller, not so much increased the capture of minerals through the process as making it far more manageable in day-to-day operations. Minerals Separation's initial flotation patents ended 1923, and new ones for chemical processes gave it a significant position into the 1930s. During this period the company also developed and patented flotation processes for iron out of its Hibbing lab and of phosphate in its Florida lab. Another rapid phase of flotation process innovation did not occur until after 1960. In the 1960s the froth flotation technique was adapted for deinking recycled paper. The success of the process is evinced by the number of claimants as "discoverers" of flotation. 
In 1961, American engineers celebrated "50 years of flotation" and enshrined James Hyde and his Butte &amp; Superior mill. In 1977, German engineers celebrated the "hundredth anniversary of flotation" based on the Bessel brothers' patent of 1877. The historic Glasdir copper mine site advertises its tours in Wales as site of the "discovery of flotation" based upon the Elmore brothers' work. Recent writers, because of the interest in celebrating women in science, champion Carrie Everson of Denver as mother of the process based on her 1885 patent. Omitted from this list are the engineers, metallurgists and chemists of Minerals Separation, Ltd., which, at least in the American and Australian courts, won control of froth flotation patents as well as right of claimant as discoverers of froth flotation. But, as historian Martin Lynch writes, "Mineral Separation would eventually prevail after taking the case to the US Supreme Court [and the House of Lords], and in so doing earned for itself the cordial detestation of many in the mining world." Theory. Froth flotation efficiency is determined by a series of probabilities: those of particle–bubble contact, particle–bubble attachment, transport between the pulp and the froth, and froth collection into the product launder. In a conventional mechanically-agitated cell, the void fraction (i.e. volume occupied by air bubbles) is low (5 to 10 percent) and the bubble size is usually greater than 1 mm. This results in a relatively low interfacial area and a low probability of particle–bubble contact. Consequently, several cells in series are required to increase the particle residence time, thus increasing the probability of particle–bubble contact. Selective adhesion. Froth flotation depends on the selective adhesion of air bubbles to mineral surfaces in a mineral/water slurry. The air bubbles attach to more hydrophobic particles, as determined by the interfacial energies between the solid, liquid, and gas phases. This energy is determined by the Young–Dupré equation: formula_0 where γ_lv is the surface energy of the liquid/vapour interface, γ_sv the surface energy of the solid/vapour interface, γ_sl the surface energy of the solid/liquid interface, and θ the contact angle, the angle at which the liquid/vapour interface meets the solid surface. Minerals targeted for separation may be chemically surface-modified with collectors so that they are more hydrophobic. Collectors are a type of surfactant that increases the natural hydrophobicity of the surface, increasing the separability of the hydrophobic and hydrophilic particles. Collectors either chemically bond via chemisorption to the mineral or adsorb onto the surface via physisorption. IMFs and surface forces in bubble-particle interactions. Collision. The collision rates for fine particles (50 - 80 μm) can be accurately modeled, but there is no current theory that accurately models bubble-particle collision for particles as large as 300 μm, which are commonly used in flotation processes. For fine particles, Stokes law underestimates collision probability while the potential equation based on surface charge overestimates collision probability, so an intermediate equation is used. It is important to know the collision rates in the system since this step precedes the adsorption where a three phase system is formed. Adsorption (attachment). The effectiveness of a medium to adsorb to a particle is influenced by the relationship between the surfaces of both materials. There are multiple factors that affect the efficiency of adsorption in chemical, thermodynamic, and physical domains. These factors can range from surface energy and polarity to the shape, size, and roughness of the particle. 
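As a rough numerical illustration of the Young–Dupré relation above, the short Python sketch below rearranges the equation to estimate the contact angle from the three interfacial energies. The surface-energy values used are made-up placeholders, not measured data; a larger angle indicates a more hydrophobic, and hence more floatable, surface.

# Rearranging gamma_lv * cos(theta) = gamma_sv - gamma_sl to estimate the contact angle.
import math

def contact_angle_deg(gamma_lv, gamma_sv, gamma_sl):
    """Contact angle (degrees) of the liquid/vapour interface on the solid surface."""
    cos_theta = (gamma_sv - gamma_sl) / gamma_lv
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("complete wetting or dewetting: no equilibrium contact angle")
    return math.degrees(math.acos(cos_theta))

# Placeholder interfacial energies in N/m (illustrative only).
print(contact_angle_deg(gamma_lv=0.072, gamma_sv=0.060, gamma_sl=0.030))  # roughly 65 degrees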
In froth flotation, adsorption is a strong consequence of surface energy, since the small particles have a high surface area to size ratio, resulting in higher energy surfaces to form attractions with adsorbates. The air bubbles must selectively adhere to the desired minerals to elevate them to the surface of the slurry while wetting the other minerals and leaving them in the aqueous slurry medium. Particles that can be easily wetted by water are called hydrophilic, while particles that are not easily wetted by water are called hydrophobic. Hydrophobic particles have a tendency to form a separate phase in aqueous media. In froth flotation the effectiveness of an air bubble to adhere to a particle is based on how hydrophobic the particle is. Hydrophobic particles have an affinity to air bubbles, leading to adsorption. The bubble-particle combinations are elevated to the froth zone driven by buoyancy forces. The attachment of the bubbles to the particles is determined by the interfacial energies between the solid, liquid, and vapor phases, as modeled by the Young/Dupre Equation. The interfacial energies can be based on the natural structure of the materials, or the addition of chemical treatments can improve energy compatibility. Collectors are the main additives used to improve particle surfaces. They function as surfactants to selectively isolate and aid adsorption between the particles of interest and bubbles rising through the slurry. Common collectors used in flotation are anionic sulfur ligands, which have a bifunctional structure with an ionic portion which shares attraction with metals, and a hydrophobic portion such as a long hydrocarbon tail. These collectors coat a particle's surface with a monolayer of non-polar substance to aid separation from the aqueous phase by decreasing the adsorbed particle solubility in water. The adsorbed ligands can form micelles around the particles and form small-particle colloids improving stability and phase separation further. Desorption (detachment). The adsorption of particles to bubbles is essential to separating the minerals from the slurry, but the minerals must be purified from the additives used in separation, such as the collectors, frothers, and modifiers. The product of the cleaning, or desorption process, is known as the cleaner concentrate. The detachment of a particle and bubble requires adsorption bond cleavage driven by shear forces. Depending on the flotation cell type, shear forces are applied by a variety of mechanical systems. Among the most common are impellers and mixers. Some systems combine the functionalities of these components by placing them at key locations where they can take part in multiple froth flotation mechanisms. Cleaning cells also take advantage of gravitational forces to improve separation efficiency. Desorption itself is a physical phenomenon in which the compounds are merely physically attached to each other, without any chemical bond. Performance calculations. Relevant equations. A common quantity used to describe the collection efficiency of a froth flotation process is "flotation recovery" (formula_1). This quantity incorporates the probabilities of collision and attachment of particles to gas flotation bubbles. formula_2 where formula_3 is the number of particles collected by a bubble, formula_4 is the probability of particle collection (collision followed by attachment), formula_5 is the number of particles swept by the rising bubble (the ideal number of collisions), formula_6 and formula_7 are the particle and bubble diameters, formula_8 is the distance travelled by the bubble through the pulp, and formula_9 is the number concentration of particles in the pulp. The following are several additional mathematical methods often used to evaluate the effectiveness of froth flotation processes. 
These equations are simpler than the calculation for "flotation recovery", as they are based solely on the amounts of inputs and outputs of the processes. For the following equations, formula_10 is the weight of the feed, formula_11 is the weight of the concentrate, formula_12 is the weight of the tailings, formula_14 is the assay (grade) of the feed, formula_9 is the assay of the concentrate, and formula_13 is the assay of the tailings. Ratio of feed weight to concentrate weight formula_15 (unitless): formula_16 Percent of metal recovered (formula_17) in wt"%": formula_18 Percent of metal lost (formula_19) in wt"%": formula_20 Percent of weight recovered formula_21 in wt"%": formula_22 The percent of metal recovered can be calculated using weights and assays, as formula_23. Or, since formula_24, the percent of metal recovered (formula_17) can be calculated from assays alone using formula_18. Percent of metal lost is the opposite of the percent of metal recovered, and represents the material lost to the tailings. References. <templatestyles src="Reflist/styles.css" />
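As a worked illustration, the Python sketch below evaluates the two-product formulas above for a single feed split into one concentrate and one tailings stream. The assay values and the function name are hypothetical choices, not data from the article.

# The two-product performance formulas, using feed (f), concentrate (c) and tailings (t) assays in wt%.
def two_product_performance(f, c, t):
    """Returns ratio of concentration F/C, weight recovery, metal recovery and metal loss."""
    ratio_of_concentration = (c - t) / (f - t)            # F/C, feed weight per unit concentrate weight
    weight_recovery = 100.0 * (f - t) / (c - t)           # % of feed weight reporting to the concentrate
    metal_recovery = 100.0 * (c / f) * (f - t) / (c - t)  # % of the metal recovered to the concentrate
    metal_loss = 100.0 - metal_recovery                   # % of the metal lost to the tailings
    return ratio_of_concentration, weight_recovery, metal_recovery, metal_loss

# Example: a 2.1% Cu feed upgraded to a 24% Cu concentrate, leaving 0.2% Cu tailings.
F_over_C, X_W, X_R, X_L = two_product_performance(f=2.1, c=24.0, t=0.2)
print(f"F/C = {F_over_C:.1f}, weight recovery = {X_W:.1f}%, "
      f"metal recovery = {X_R:.1f}%, metal loss = {X_L:.1f}%")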
[ { "math_id": 0, "text": " \\gamma_{lv} \\cos \\theta = (\\gamma_{sv} - \\gamma_{sl})" }, { "math_id": 1, "text": "R" }, { "math_id": 2, "text": "R = \\frac{N_c} {\\left(\\tfrac{\\pi}{4}\\right)\\left(d_p+d_b\\right)^2 Hc}" }, { "math_id": 3, "text": "N_c = PN_c^i" }, { "math_id": 4, "text": "P" }, { "math_id": 5, "text": "N_c^i" }, { "math_id": 6, "text": "d_p" }, { "math_id": 7, "text": "d_b" }, { "math_id": 8, "text": "H" }, { "math_id": 9, "text": "c" }, { "math_id": 10, "text": "F" }, { "math_id": 11, "text": "C" }, { "math_id": 12, "text": "T" }, { "math_id": 13, "text": "t" }, { "math_id": 14, "text": "f" }, { "math_id": 15, "text": "\\tfrac {F}{C}" }, { "math_id": 16, "text": " \\frac{F}{C} = \\frac{c-t}{f-t}" }, { "math_id": 17, "text": "\\Chi_R" }, { "math_id": 18, "text": "\\Chi_R = 100\\left(\\frac{c}{f}\\right)\\left(\\frac{f-t}{c-t}\\right)" }, { "math_id": 19, "text": "\\Chi_L" }, { "math_id": 20, "text": "\\Chi_L = 100 - \\Chi_R" }, { "math_id": 21, "text": "\\left(\\Chi_W\\right)" }, { "math_id": 22, "text": "\\Chi_W = 100\\left(\\frac{C}{F}\\right) = 100\\frac{f-t}{c-t}" }, { "math_id": 23, "text": "\\frac{Cc}{Ff}*100" }, { "math_id": 24, "text": "\\frac{C}{F} = \\frac{f-t}{c-t}" } ]
https://en.wikipedia.org/wiki?curid=1567681
15677755
Photovoltaic system
Power system designed to supply usable electric power from solar energy A photovoltaic system, also called a PV system or solar power system, is an electric power system designed to supply usable solar power by means of photovoltaics. It consists of an arrangement of several components, including solar panels to absorb and convert sunlight into electricity, a solar inverter to convert the output from direct to alternating current, as well as mounting, cabling, and other electrical accessories to set up a working system. Many utility-scale PV systems use tracking systems that follow the sun's daily path across the sky to generate more electricity than fixed-mounted systems. PV systems convert light directly into electricity and are not to be confused with other solar technologies, such as concentrated solar power or solar thermal, used for heating and cooling. A solar array only encompasses the solar panels, the visible part of the PV system, and does not include all the other hardware, often summarized as the balance of system (BOS). PV systems range from small, rooftop-mounted or building-integrated systems with capacities ranging from a few to several tens of kilowatts to large, utility-scale power stations of hundreds of megawatts. Nowadays, off-grid or stand-alone systems account for a small portion of the market. Operating silently and without any moving parts or air pollution, PV systems have evolved from niche market applications into a mature technology used for mainstream electricity generation. Due to the growth of photovoltaics, prices for PV systems have rapidly declined since their introduction; however, they vary by market and the size of the system. Nowadays, solar PV modules account for less than half of the system's overall cost, leaving the rest to the remaining BOS components and to soft costs, which include customer acquisition, permitting, inspection and interconnection, installation labor, and financing costs. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Modern system. Overview. A photovoltaic system converts the Sun's radiation, in the form of light, into usable electricity. It comprises the solar array and the balance of system components. PV systems can be categorized by various aspects, such as, grid-connected vs. stand alone systems, building-integrated vs. rack-mounted systems, residential vs. utility systems, distributed vs. centralized systems, rooftop vs. ground-mounted systems, tracking vs. fixed-tilt systems, and new constructed vs. retrofitted systems. Other distinctions may include, systems with microinverters vs. central inverter, systems using crystalline silicon vs. thin-film technology, and systems with modules. About 99 percent of all European and 90 percent of all U.S. solar power systems are connected to the electrical grid, while off-grid systems are somewhat more common in Australia and South Korea. PV systems rarely use battery storage. This may change, as government incentives for distributed energy storage are implemented and investments in storage solutions gradually become economically viable for small systems. In the UK, the number of commercial systems using battery storage is gradually increasing as a result of grid constraints preventing feedback of unused electricity into the grid as well as increased electricity costs resulting in improved economics. A typical residential solar array is rack-mounted on the roof, rather than integrated into the roof or facade of the building, which is significantly more expensive. 
Utility-scale solar power stations are ground-mounted, with fixed tilted solar panels rather than using expensive tracking devices. Crystalline silicon is the predominant material used in 90 percent of worldwide produced solar modules, while its rival thin-film has lost market-share. About 70 percent of all solar cells and modules are produced in China and Taiwan, only 5 percent by European and US-manufacturers. The installed capacity for both small rooftop systems and large solar power stations is growing rapidly and in equal parts, although there is a notable trend towards utility-scale systems, as the focus on new installations is shifting away from Europe to sunnier regions, such as the Sunbelt in the U.S., which are less opposed to ground-mounted solar farms and cost-effectiveness is more emphasized by investors. Driven by advances in technology and increases in manufacturing scale and sophistication, the cost of photovoltaics is declining continuously. There are several million PV systems distributed all over the world, mostly in Europe, with 1.4 million systems in Germany alone– as well as North America with 440,000 systems in the United States. The energy conversion efficiency of a conventional solar module increased from 15 to 20 percent since 2004 and a PV system recoups the energy needed for its manufacture in about 2 years. In exceptionally irradiated locations, or when thin-film technology is used, the so-called energy payback time decreases to one year or less. Net metering and financial incentives, such as preferential feed-in tariffs for solar-generated electricity, have also greatly supported installations of PV systems in many countries. The levelised cost of electricity from large-scale PV systems has become competitive with conventional electricity sources in an expanding list of geographic regions, and grid parity has been achieved in about 30 countries. As of 2015, the fast-growing global PV market is rapidly approaching the 200 GW mark – about 40 times the installed capacity in 2006. These systems currently contribute about 1 percent to worldwide electricity generation. Top installers of PV systems in terms of capacity are currently China, Japan and the United States, while half of the world's capacity is installed in Europe, with Germany and Italy supplying 7% to 8% of their respective domestic electricity consumption with solar PV. The International Energy Agency expects solar power to become the world's largest source of electricity by 2050, with solar photovoltaics and concentrated solar thermal contributing 16% and 11% to the global demand, respectively. Solar grid-connection. A grid connected system is connected to a larger independent grid (typically the public electricity grid) and feeds energy directly into the grid. This energy may be shared by a residential or commercial building before or after the revenue measurement point, depending on whether the credited energy production is calculated independently of the customer's energy consumption (feed-in tariff) or only on the difference of energy (net metering). These systems vary in size from residential (2–10 kWp) to solar power stations (up to tens of MWp). This is a form of decentralized electricity generation. Feeding electricity into the grid requires the transformation of DC into AC by a special, synchronizing grid-tie inverter. In kilowatt-sized installations the DC side system voltage is as high as permitted (typically 1000 V except US residential 600 V) to limit ohmic losses. 
Most modules (60 or 72 crystalline silicon cells) generate 160 W to 300 W at 36 volts. It is sometimes necessary or desirable to connect the modules partially in parallel rather than all in series. An individual set of modules connected in series is known as a 'string'. A set of series-connected "strings" is known as an "array." Scale of system. Photovoltaic systems are generally categorized into three distinct market segments: residential rooftop, commercial rooftop, and ground-mount utility-scale systems. Their capacities range from a few kilowatts to hundreds of megawatts. A typical residential system is around 10 kilowatts and mounted on a sloped roof, while commercial systems may reach a megawatt-scale and are generally installed on low-slope or even flat roofs. Although rooftop mounted systems are small and have a higher cost per watt than large utility-scale installations, they account for the largest share in the market. There is, however, a growing trend towards bigger utility-scale power plants, especially in the "sunbelt" region of the planet. Utility-scale. Large utility-scale solar parks or farms are power stations and capable of providing an energy supply to large numbers of consumers. Generated electricity is fed into the transmission grid powered by central generation plants (grid-connected or grid-tied plant), or combined with one, or many, domestic electricity generators to feed into a small electrical grid (hybrid plant). In rare cases generated electricity is stored or used directly by island/standalone plant. PV systems are generally designed in order to ensure the highest energy yield for a given investment. Some large photovoltaic power stations such as Solar Star, Waldpolenz Solar Park and Topaz Solar Farm cover tens or hundreds of hectares and have power outputs up to hundreds of megawatts. Rooftop, mobile, and portable. A small PV system is capable of providing enough AC electricity to power a single home, or an isolated device in the form of AC or DC electric. Military and civilian Earth observation satellites, street lights, construction and traffic signs, electric cars, solar-powered tents, and electric aircraft may contain integrated photovoltaic systems to provide a primary or auxiliary power source in the form of AC or DC power, depending on the design and power demands. In 2013, rooftop systems accounted for 60 percent of worldwide installations. However, there is a trend away from rooftop and towards utility-scale PV systems, as the focus of new PV installations is also shifting from Europe to countries in the sunbelt region of the planet where opposition to ground-mounted solar farms is less accentuated. Portable and mobile PV systems provide electrical power independent of utility connections, for "off the grid" operation. Such systems are so commonly used on recreational vehicles and boats that there are retailers specializing in these applications and products specifically targeted to them. Since recreational vehicles (RV) normally carry batteries and operate lighting and other systems on nominally 12-volt DC power, RV systems normally operate in a voltage range that can charge 12-volt batteries directly, so addition of a PV system requires only panels, a charge controller, and wiring. Solar systems on recreation vehicles are usually constrained in wattage by the physical size of the RV's roof space. Building-integrated. 
In urban and suburban areas, photovoltaic arrays are often used on rooftops to supplement power use; often the building will have a connection to the power grid, in which case the energy produced by the PV array can be sold back to the utility in some sort of net metering agreement. Some utilities use the rooftops of commercial customers and telephone poles to support their use of PV panels. Solar trees are arrays that, as the name implies, mimic the look of trees, provide shade, and at night can function as street lights. Performance. Uncertainties in revenue over time relate mostly to the evaluation of the solar resource and to the performance of the system itself. In the best of cases, uncertainties are typically 4% for year-to-year climate variability, 5% for solar resource estimation (in a horizontal plane), 3% for estimation of irradiation in the plane of the array, 3% for power rating of modules, 2% for losses due to dirt and soiling, 1.5% for losses due to snow, and 5% for other sources of error. Identifying and reacting to manageable losses is critical for revenue and O&M efficiency. Monitoring of array performance may be part of contractual agreements between the array owner, the builder, and the utility purchasing the energy produced. A method to create "synthetic days" using readily available weather data, verified using the Open Solar Outdoors Test Field, makes it possible to predict photovoltaic system performance with high degrees of accuracy. This method can then be used to determine loss mechanisms on a local scale, such as those from snow or the effects of surface coatings (e.g. hydrophobic or hydrophilic) on soiling or snow losses. (In heavy snow environments with severe ground interference, however, annual losses from snow can reach 30%.) Access to the Internet has allowed a further improvement in energy monitoring and communication. Dedicated systems are available from a number of vendors. For solar PV systems that use microinverters (panel-level DC to AC conversion), module power data is automatically provided. Some systems allow setting performance alerts that trigger phone/email/text warnings when limits are reached. These solutions provide data for the system owner and the installer. Installers are able to remotely monitor multiple installations, and see at-a-glance the status of their entire installed base. Components. A photovoltaic system for residential, commercial, or industrial energy supply consists of the solar array and a number of components often summarized as the balance of system (BOS). This term is synonymous with "Balance of plant" q.v. BOS-components include power-conditioning equipment and structures for mounting, typically one or more DC to AC power converters, also known as inverters, an energy storage device, a racking system that supports the solar array, electrical wiring and interconnections, and mounting for other components. Optionally, a balance of system may include any or all of the following: renewable energy credit revenue-grade meter, maximum power point tracker (MPPT), battery system and charger, GNSS solar tracker, energy management software, solar irradiance sensors, anemometer, or task-specific accessories designed to meet specialized requirements for a system owner. In addition, a CPV system requires optical lenses or mirrors and sometimes a cooling system. 
The terms "solar array" and "PV system" are often incorrectly used interchangeably, despite the fact that the solar array does not encompass the entire system. Moreover, "solar panel" is often used as a synonym for "solar module", although a panel consists of a string of several modules. The term "solar system" is also an often used misnomer for a PV system. Solar array. The building blocks of a photovoltaic system are solar cells. A solar cell is the electrical device that can directly convert photons energy into electricity. There are three technological generations of solar cells: the first generation (1G) of crystalline silicon cells (c-Si), the second generation (2G) of thin-film cells (such as CdTe, CIGS, Amorphous Silicon, and GaAs), and the third generation (3G) of organic, dye-sensitized, Perovskite and multijunction cells. Conventional c-Si solar cells, normally wired in series, are encapsulated in a solar module to protect them from the weather. The module consists of a tempered glass as cover, a soft and flexible encapsulant, a rear backsheet made of a weathering and fire-resistant material and an aluminium frame around the outer edge. Electrically connected and mounted on a supporting structure, solar modules build a string of modules, often called solar panel. A solar array consists of one or many such panels. A photovoltaic array, or solar array, is a linked collection of solar modules. The power that one module can produce is seldom enough to meet requirements of a home or a business, so the modules are linked together to form an array. Most PV arrays use an inverter to convert the DC power produced by the modules into alternating current that can power lights, motors, and other loads. The modules in a PV array are usually first connected in series to obtain the desired voltage; the individual strings are then connected in parallel to allow the system to produce more current. Solar panels are typically measured under STC (standard test conditions) or PTC (PVUSA test conditions), in watts. Typical panel ratings range from less than 100 watts to over 400 watts. The array rating consists of a summation of the panel ratings, in watts, kilowatts, or megawatts. Modules and efficiency. A typical 150 watt PV module is about a square meter in size. Such a module may be expected to produce 0.75 kilowatt-hour (kWh) every day, on average, after taking into account the weather and the latitude, for an insolation of 5 sun hours/day. Module output degrades faster at increased temperature. Allowing ambient air to flow over, and if possible behind, PV modules reduces this problem, as the airflow tend to reduce the operating temperature and, as consequence, increase the module efficiency. However, it was recently demonstrated that, in the real-world operation, considering a larger scale photovoltaic generator, increase in wind speed can increase the energy losses, following the fluid mechanics theory, as the wind interaction with the PV generator induces air flux variations that modify the heat transfer from the modules to the air. Effective module lives are typically 25 years or more. The payback period for an investment in a PV solar installation varies greatly and is typically less useful than a calculation of return on investment. While it is typically calculated to be between 10 and 20 years, the financial payback period can be far shorter with incentives. 
The temperature effect on photovoltaic modules is usually quantified by means of some coefficients relating the variations of the open‐circuit voltage, of the short‐circuit current, and of the maximum power to temperature changes. Comprehensive experimental guidelines for estimating these temperature coefficients have been published. Due to the low voltage of an individual solar cell (typically about 0.5 V), several cells are wired "(see Copper in renewable energy#Solar photovoltaic power generation)" in series in the manufacture of a "laminate". The laminate is assembled into a protective weatherproof enclosure, thus making a photovoltaic module or solar panel. Modules may then be strung together into a photovoltaic array. In 2012, solar panels available for consumers had an efficiency of up to about 17%, while commercially available panels can go as far as 27%. By concentrating the sunlight it is possible to achieve higher efficiencies. A group from The Fraunhofer Institute for Solar Energy Systems has created a cell that can reach 44.7% efficiency using the equivalent of "297 suns". Shading and dirt. Photovoltaic cell electrical output is extremely sensitive to shading (the so-called "Christmas light effect"). When even a small portion of a cell or of a module or array of cells in parallel is shaded, with the remainder in sunlight, the output falls dramatically due to internal 'short-circuiting' (the electrons reversing course through the shaded portion). When connected in series, the current drawn from a string of cells is no greater than the normally small current that can flow through the shaded cell, so the current (and therefore power) developed by the string is limited. If the external load is of low enough impedance, there may be enough voltage available from the other cells in a string to force more current through the shaded cell by breaking down the junction. This breakdown voltage in common cells is between 10 and 30 volts. Instead of adding to the power produced by the panel, the shaded cell absorbs power, turning it into heat. Since the reverse voltage of a shaded cell is much greater than the forward voltage of an illuminated cell, one shaded cell can absorb the power of many other cells in the string, disproportionately affecting panel output. For example, a shaded cell may drop 8 volts, instead of adding 0.5 volts, at a high current level, thereby absorbing the power produced by 16 other cells. It is thus important that a PV installation not be shaded by trees or other obstructions. There are techniques to mitigate the losses with diodes, but these techniques also entail losses. Several methods have been developed to determine shading losses from trees to PV systems, both over large regions using LiDAR and at an individual system level using 3D modeling software. Most modules have bypass diodes between each cell or string of cells that minimize the effects of shading and only lose the power that the shaded portion of the array would have supplied, as well as the power dissipated in the diodes. The main job of the bypass diode is to eliminate hot spots that form on cells that can cause further damage to the array, and cause fires. Sunlight can be absorbed by dust, snow, or other impurities at the surface of the module (collectively referred to as soiling). Soiling reduces the light that strikes the cells, which in turn reduces the power output of the PV system. Soiling losses aggregate over time, and can become large without adequate cleaning. 
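A toy model can show how soiling losses aggregate between cleanings; the constant daily soiling rate and the cleaning intervals below are arbitrary assumptions chosen only to illustrate the accumulation effect, not measured values.
# Toy soiling model (illustrative assumptions): output falls by a fixed fraction per day
# and is restored to full whenever the array is cleaned (or washed by rain).
def average_output_fraction(days, daily_soiling_rate, cleaning_interval_days):
    total, soiling = 0.0, 0.0
    for day in range(days):
        if day % cleaning_interval_days == 0:
            soiling = 0.0                        # cleaning restores full output
        total += max(0.0, 1.0 - soiling)         # fraction of clean-panel output that day
        soiling += daily_soiling_rate
    return total / days

for interval in (30, 90, 365):
    avg = average_output_fraction(365, daily_soiling_rate=0.001, cleaning_interval_days=interval)
    print(f"cleaning every {interval:3d} days: average output {avg:.1%} of a clean panel")
Even a small daily loss compounds into a noticeable annual deficit when cleaning is infrequent, which is why the cost-effectiveness of regular cleaning depends on the local soiling rate, as discussed below.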
In 2018, the global annual energy loss due to soiling was estimated to at least 3–4%. However, soiling losses vary significantly from region to region, and within regions. Maintaining a clean module surface will increase output performance over the life of the PV system. In one study performed in a snow-rich area (Ontario), cleaning flat mounted solar panels after 15 months increased their output by almost 100%. However, 5° tilted arrays were adequately cleaned by rainwater. In many cases, especially in arid regions, or in locations in close proximity to deserts, roads, industry, or agriculture, regular cleaning of the solar panels is cost-effective. In 2018, the estimated soiling-induced revenue loss was estimated to between 5 and 7 billion euros. The long‐term reliability of photovoltaic modules is crucial to ensure the technical and economic viability of PV as a successful energy source. The analysis of degradation mechanisms of PV modules is key to ensure current lifetimes exceeding 25 years. Insolation and energy. Solar insolation is made up of direct, diffuse, and reflected radiation. The absorption factor of a PV cell is defined as the fraction of incident solar irradiance that is absorbed by the cell. When the sun is at the zenith on a cloudless day, the power of the sun is about 1 kW/m2, on the Earth's surface, to a plane that is perpendicular to the sun's rays. As such, PV arrays can track the sun through each day to greatly enhance energy collection. However, tracking devices add cost, and require maintenance, so it is more common for PV arrays to have fixed mounts that tilt the array and face due south in the northern hemisphere or due north in the southern hemisphere. The tilt angle from horizontal can be varied for season, but if fixed, should be set to give optimal array output during the peak electrical demand portion of a typical year for a stand-alone system. This optimal module tilt angle is not necessarily identical to the tilt angle for maximum annual array energy output. The optimization of the photovoltaic system for a specific environment can be complicated as issues of solar flux, soiling, and snow losses should be taken into effect. In addition, later work has shown that spectral effects can play a role in optimal photovoltaic material selection. For example, the spectrum of the albedo of the surroundings can play a significant role in output depending on the surface around the photovoltaic system and the type of solar cell material. A photovoltaic installation in the northern latitudes of Europe or the United States may expect to produce 1 kWh/m2/day. A typical 1 kW photovoltaic installation in Australia or the southern latitudes of Europe or United States, may produce 3.5–5 kWh per day, dependent on location, orientation, tilt, insolation and other factors. In the Sahara desert, with less cloud cover and a better solar angle, one could ideally obtain closer to 8.3 kWh/m2/day provided the nearly ever present wind would not blow sand onto the units. The area of the Sahara desert is over 9 million km2. 90,600 km2, or about 1%, could generate as much electricity as all of the world's power plants combined. Mounting. Modules are assembled into arrays on some kind of mounting system, which may be classified as ground mount, roof mount or pole mount. For solar parks a large rack is mounted on the ground, and the modules mounted on the rack. For buildings, many different racks have been devised for pitched roofs. 
For flat roofs, racks, bins and building integrated solutions are used. Solar panel racks mounted on top of poles can be stationary or moving, see Trackers below. Side-of-pole mounts are suitable for situations where a pole has something else mounted at its top, such as a light fixture or an antenna. Pole mounting raises what would otherwise be a ground mounted array above weed shadows and livestock, and may satisfy electrical code requirements regarding inaccessibility of exposed wiring. Pole mounted panels are open to more cooling air on their underside, which increases performance. A multiplicity of pole top racks can be formed into a parking carport or other shade structure. A rack which does not follow the sun from left to right may allow seasonal adjustment up or down. Cabling. Due to their outdoor usage, solar cables are designed to be resistant against UV radiation and extremely high temperature fluctuations and are generally unaffected by the weather. Standards specifying the usage of electrical wiring in PV systems include the IEC 60364 by the International Electrotechnical Commission, in section 712 "Solar photovoltaic (PV) power supply systems", the British Standard BS 7671, incorporating regulations relating to microgeneration and photovoltaic systems, and the US UL4703 standard, in subject 4703 "Photovoltaic Wire". A solar cable is the interconnection cable used in photovoltaic power generation. Solar cables interconnect solar panels and other electrical components of a photovoltaic system. Solar cables are designed to be UV resistant and weather resistant. They can be used within a large temperature range. Specific performance requirements for material used for wiring a solar panel installation are given in national and local electrical codes which regulate electrical installations in an area. General features required for solar cables are resistance to ultraviolet light, weather, temperature extremes of the area and insulation suitable for the voltage class of the equipment. Different jurisdictions will have specific rules regarding grounding (earthing) of solar power installations for electric shock protection and lightning protection. Tracker. A solar tracking system tilts a solar panel throughout the day. Depending on the type of tracking system, the panel is either aimed directly at the Sun or the brightest area of a partly clouded sky. Trackers greatly enhance early morning and late afternoon performance, increasing the total amount of power produced by a system by about 20–25% for a single axis tracker and about 30% or more for a dual axis tracker, depending on latitude. Trackers are effective in regions that receive a large portion of sunlight directly. In diffuse light (i.e. under cloud or fog), tracking has little or no value. Because most concentrated photovoltaics systems are very sensitive to the sunlight's angle, tracking systems allow them to produce useful power for more than a brief period each day. Tracking systems improve performance for two main reasons. First, when a solar panel is perpendicular to the sunlight, it receives more light on its surface than if it were angled. Second, direct light is used more efficiently than angled light. Special anti-reflective coatings can improve solar panel efficiency for direct and angled light, somewhat reducing the benefit of tracking. Trackers and sensors to optimise the performance are often seen as optional, but they can increase viable output by up to 45%. 
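The percentage gains quoted above translate directly into annual energy; in the sketch below the fixed-tilt baseline yield is an assumed figure for a sunny site, and the tracker gains are taken from the ranges given in the text.
# Annual yield with and without tracking (baseline figure assumed for illustration).
BASELINE_KWH_PER_KWP = 1600   # assumed fixed-tilt annual yield for a sunny site

options = [
    ("fixed tilt", 0.00),
    ("single-axis tracker (about 20-25% gain)", 0.225),   # midpoint of the quoted range
    ("dual-axis tracker (about 30% gain)", 0.30),
]

for name, gain in options:
    print(f"{name}: {BASELINE_KWH_PER_KWP * (1 + gain):.0f} kWh per kWp per year")
Whether the extra yield justifies the added cost and maintenance of tracking is the trade-off described in the following paragraphs.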
Arrays that approach or exceed one megawatt often use solar trackers. Considering clouds, and the fact that most of the world is not on the equator, and that the sun sets in the evening, the correct measure of solar power is insolation – the average number of kilowatt-hours per square meter per day. For the weather and latitudes of the United States and Europe, typical insolation ranges from 2.26 kWh/m2/day in northern climes to 5.61 kWh/m2/day in the sunniest regions. For large systems, the energy gained by using tracking systems can outweigh the added complexity. For very large systems, the added maintenance of tracking is a substantial detriment. Tracking is not required for flat panel and low-concentration photovoltaic systems. For high-concentration photovoltaic systems, dual axis tracking is a necessity. Pricing trends affect the balance between adding more stationary solar panels versus having fewer panels that track. As pricing, reliability and performance of single-axis trackers have improved, the systems have been installed in an increasing percentage of utility-scale projects. According to data from WoodMackenzie/GTM Research, global solar tracker shipments hit a record 14.5 gigawatts in 2017. This represents growth of 32 percent year-over-year, with similar or greater growth projected as large-scale solar deployment accelerates. Inverter. Systems designed to deliver alternating current (AC), such as grid-connected applications need an inverter to convert the direct current (DC) from the solar modules to AC. Grid connected inverters must supply AC electricity in sinusoidal form, synchronized to the grid frequency, limit feed in voltage to no higher than the grid voltage and disconnect from the grid if the grid voltage is turned off. Islanding inverters need only produce regulated voltages and frequencies in a sinusoidal waveshape as no synchronisation or co-ordination with grid supplies is required. A solar inverter may connect to a string of solar panels. In some installations a solar micro-inverter is connected at each solar panel. For safety reasons a circuit breaker is provided both on the AC and DC side to enable maintenance. AC output may be connected through an electricity meter into the public grid. The number of modules in the system determines the total DC watts capable of being generated by the solar array; however, the inverter ultimately governs the amount of AC watts that can be distributed for consumption. For example, a PV system comprising 11 kilowatts DC (kWDC) worth of PV modules, paired with one 10-kilowatt AC (kWAC) inverter, will be limited to the inverter's output of 10 kW. As of 2019, conversion efficiency for state-of-the-art converters reached more than 98 percent. While string inverters are used in residential to medium-sized commercial PV systems, central inverters cover the large commercial and utility-scale market. Market-share for central and string inverters are about 44 percent and 52 percent, respectively, with less than 1 percent for micro-inverters. Maximum power point tracking (MPPT) is a technique that grid connected inverters use to get the maximum possible power from the photovoltaic array. In order to do so, the inverter's MPPT system digitally samples the solar array's ever changing power output and applies the proper impedance to find the optimal "maximum power point". Anti-islanding is a protection mechanism to immediately shut down the inverter, preventing it from generating AC power when the connection to the load no longer exists. 
This happens, for example, in the case of a blackout. Without this protection, the supply line would become an "island" with power surrounded by a "sea" of unpowered lines, as the solar array continues to deliver DC power during the power outage. Islanding is a hazard to utility workers, who may not realize that an AC circuit is still powered, and it may prevent automatic re-connection of devices. Anti-Islanding feature is not required for complete Off-Grid Systems. Battery. Although still expensive, PV systems increasingly use rechargeable batteries to store a surplus to be later used at night. Batteries used for grid-storage also stabilize the electrical grid by leveling out peak loads, and play an important role in a smart grid, as they can charge during periods of low demand and feed their stored energy into the grid when demand is high. Common battery technologies used in today's PV systems include the valve regulated lead-acid battery – a modified version of the conventional lead–acid battery – nickel–cadmium and lithium-ion batteries. Compared to the other types, lead-acid batteries have a shorter lifetime and lower energy density. However, due to their high reliability, low self discharge as well as low investment and maintenance costs, they are currently (as of 2014) the predominant technology used in small-scale, residential PV systems, as lithium-ion batteries are still being developed and about 3.5 times as expensive as lead-acid batteries. Furthermore, as storage devices for PV systems are stationary, the lower energy and power density and therefore higher weight of lead-acid batteries are not as critical as, for example, in electric transportation Other rechargeable batteries considered for distributed PV systems include sodium–sulfur and vanadium redox batteries, two prominent types of a molten salt and a flow battery, respectively. In 2015, Tesla Motors launched the Powerwall, a rechargeable lithium-ion battery with the aim to revolutionize energy consumption. PV systems with an integrated battery solution also need a charge controller, as the varying voltage and current from the solar array requires constant adjustment to prevent damage from overcharging. Basic charge controllers may simply turn the PV panels on and off, or may meter out pulses of energy as needed, a strategy called PWM or pulse-width modulation. More advanced charge controllers will incorporate MPPT logic into their battery charging algorithms. Charge controllers may also divert energy to some purpose other than battery charging. Rather than simply shut off the free PV energy when not needed, a user may choose to heat air or water once the battery is full. Monitoring and metering. The metering must be able to accumulate energy units in both directions, or two meters must be used. Many meters accumulate bidirectionally, some systems use two meters, but a unidirectional meter (with detent) will not accumulate energy from any resultant feed into the grid. In some countries, for installations over 30 kWp a frequency and a voltage monitor with disconnection of all phases is required. This is done where more solar power is being generated than can be accommodated by the utility, and the excess can not either be exported or stored. Grid operators historically have needed to provide transmission lines and generation capacity. Now they need to also provide storage. This is normally hydro-storage, but other means of storage are used. 
Initially storage was used so that baseload generators could operate at full output. With variable renewable energy, storage is needed to allow power generation whenever it is available, and consumption whenever needed. The two variables a grid operator has are storing electricity for "when" it is needed, or transmitting it to "where" it is needed. If both of those fail, installations over 30 kWp can automatically shut down, although in practice all inverters maintain voltage regulation and stop supplying power if the load is inadequate. Grid operators have the option of curtailing excess generation from large systems, although this is more commonly done with wind power than solar power, and results in a substantial loss of revenue. Three-phase inverters have the unique option of supplying reactive power which can be advantageous in matching load requirements. Photovoltaic systems need to be monitored to detect breakdown and optimize operation. There are several photovoltaic monitoring strategies depending on the output of the installation and its nature. Monitoring can be performed on site or remotely. It can measure production only, retrieve all the data from the inverter or retrieve all of the data from the communicating equipment (probes, meters, etc.). Monitoring tools can be dedicated to supervision only or offer additional functions. Individual inverters and battery charge controllers may include monitoring using manufacturer specific protocols and software. Energy metering of an inverter may be of limited accuracy and not suitable for revenue metering purposes. A third-party data acquisition system can monitor multiple inverters, using the inverter manufacturer's protocols, and also acquire weather-related information. Independent smart meters may measure the total energy production of a PV array system. Separate measures such as satellite image analysis or a solar radiation meter (a pyranometer) can be used to estimate total insolation for comparison. Data collected from a monitoring system can be displayed remotely over the World Wide Web, such as OSOTF. Sizing of the photovoltaic system. Knowing the annual energy consumption formula_0 in kWh of an institution or a family, for example 2300 kWh as shown on its electricity bill, it is possible to calculate the number of photovoltaic panels necessary to satisfy its energy needs. 
By connecting to the PVGIS site https://re.jrc.ec.europa.eu/pvg_tools/en/ , selecting the location in which to install the panels (by clicking on the map or typing the name of the location) and then choosing "Grid connected" and "Visualize results", a table such as the following is obtained, here for the city of Palermo:
Provided inputs:
Location [Lat/Lon]: 38.111, 13.352
Horizon: Calculated
Database used: PVGIS-SARAH2
PV technology: Crystalline silicon
PV installed [kWp]: 1
System loss [%]: 14
Simulation outputs:
Slope angle [°]: 35
Azimuth angle [°]: 0
Yearly PV energy production [kWh]: 1519.1
Yearly in-plane irradiation [kWh/m2]: 1944.62
Year-to-year variability [kWh]: 47.61
Changes in output due to:
Angle of incidence [%]: -2.68
Spectral effects [%]: 0.88
Temperature and low irradiance [%]: -7.48
Total loss [%]: -21.88
PV electricity cost [per kWh]:
Using the wxMaxima program, the number of panels required for an annual consumption of 2300 kWh, with crystalline silicon technology, a slope angle of 35°, an azimuth angle of 0° and total losses of 21.88%, is 6 when rounded up:
E_d : 2300 ;
E_s : 1519.1 ;
P : 300 ;
Number_panels : 1000 * E_d / ( P * E_s ) ;
5.046847914335243
On average, a family manages to consume about 30% of the photovoltaic energy directly. A storage system can raise this self-consumption to a maximum of about 70%, so the battery storage capacity needed in this specific case is 4.41 kWh, which rounded up is 4.8 kWh:
Battery_capacity : 0.70 * E_d / 365 ;
4.410958904109589
If the price of energy is €0.5/kWh, the cost of the energy excluding taxes will be €1,150 per year:
Energy_cost : E_d * 0.5 ;
1150.0
So if a 300 W panel costs €200, the 4.8 kWh battery €3,000, the inverter to convert the direct current into alternating current €1,000, the charge regulator €100 and the installation €1,000, the total cost will be €6,300:
Total_cost : 200*6 + 3000 + 1000 + 100 + 1000 ;
6300
which is amortized over about 5.5 years, given that the battery has a life of about 10 years and the panels 25–30 years:
Years : Total_cost / Energy_cost ;
5.478260869565217
Other systems. This section includes systems that are either highly specialized and uncommon or still an emerging new technology with limited significance. However, standalone or off-grid systems take a special place. They were the most common type of systems during the 1980s and 1990s, when PV technology was still very expensive and a pure niche market for small-scale applications. They were economically viable only in places where no electrical grid was available. Although new stand-alone systems are still being deployed all around the world, their contribution to the overall installed photovoltaic capacity is decreasing. In Europe, off-grid systems account for 1 percent of installed capacity. In the United States, they account for about 10 percent. Off-grid systems are still common in Australia and South Korea, and in many developing countries. CPV. Concentrator photovoltaics (CPV) and "high concentrator photovoltaic" (HCPV) systems use optical lenses or curved mirrors to concentrate sunlight onto small but highly efficient solar cells. Besides the concentrating optics, CPV systems sometimes use solar trackers and cooling systems, which makes them more expensive. HCPV systems in particular are best suited to locations with high solar irradiance; they concentrate sunlight up to 400 times or more, with efficiencies of 24–28 percent, exceeding those of regular systems. Various designs of systems are commercially available but not very common. 
However, ongoing research and development is taking place. CPV is often confused with CSP (concentrated solar power) that does not use photovoltaics. Both technologies favor locations that receive much sunlight and directly compete with each other. Hybrid. A hybrid system combines PV with other forms of generation, usually a diesel generator. Biogas is also used. The other form of generation may be a type able to modulate power output as a function of demand. However more than one renewable form of energy may be used e.g. wind. The photovoltaic power generation serves to reduce the consumption of non renewable fuel. Hybrid systems are most often found on islands. Pellworm island in Germany and Kythnos island in Greece are notable examples (both are combined with wind). The Kythnos plant has reduced diesel consumption by 11.2%. In 2015, a case-study conducted in seven countries concluded that in all cases generating costs can be reduced by hybridising mini-grids and isolated grids. However, financing costs for such hybrids are crucial and largely depend on the ownership structure of the power plant. While cost reductions for state-owned utilities can be significant, the study also identified economic benefits to be insignificant or even negative for non-public utilities, such as independent power producers. There has also been work showing that the PV penetration limit can be increased by deploying a distributed network of PV+CHP hybrid systems in the U.S. The temporal distribution of solar flux, electrical and heating requirements for representative U.S. single family residences were analyzed and the results clearly show that hybridizing CHP with PV can enable additional PV deployment above what is possible with a conventional centralized electric generation system. This theory was reconfirmed with numerical simulations using per second solar flux data to determine that the necessary battery backup to provide for such a hybrid system is possible with relatively small and inexpensive battery systems. In addition, large PV+CHP systems are possible for institutional buildings, which again provide back up for intermittent PV and reduce CHP runtime. Direct current grid. DC grids are found in electric powered transport: railways trams and trolleybuses. A few pilot plants for such applications have been built, such as the tram depots in Hannover Leinhausen, using photovoltaic contributors and Geneva (Bachet de Pesay). The 150 kWp Geneva site feeds 600 V DC directly into the tram/trolleybus electricity network whereas before it provided about 15% of the electricity at its opening in 1999. Standalone. A stand-alone or off-grid system is not connected to the electrical grid. Standalone systems vary widely in size and application from wristwatches or calculators to remote buildings or spacecraft. If the load is to be supplied independently of solar insolation, the generated power is stored and buffered with a battery. In non-portable applications where weight is not an issue, such as in buildings, lead acid batteries are most commonly used for their low cost and tolerance for abuse. A charge controller may be incorporated in the system to avoid battery damage by excessive charging or discharging. It may also help to optimize production from the solar array using a maximum power point tracking technique (MPPT). 
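As an illustration of what a maximum power point tracker does, the following perturb-and-observe sketch climbs a module's power-voltage curve by repeatedly nudging the operating voltage and keeping the direction that increases power; the quadratic curve, step size and iteration count are stand-ins chosen for illustration, not a real module characteristic or a production algorithm.
# Minimal perturb-and-observe MPPT sketch (illustrative only).
def module_power(v):
    # toy power-voltage curve with its maximum at 30 V and 180 W
    return max(0.0, -0.5 * (v - 30.0) ** 2 + 180.0)

def perturb_and_observe(v_start=20.0, step=0.5, iterations=60):
    v, p = v_start, module_power(v_start)
    direction = 1
    for _ in range(iterations):
        v_next = v + direction * step
        p_next = module_power(v_next)
        if p_next < p:            # power fell, so reverse the perturbation direction
            direction = -direction
        v, p = v_next, p_next
    return v, p

v_mp, p_mp = perturb_and_observe()
print(f"settled near {v_mp:.1f} V and {p_mp:.1f} W (true maximum: 30 V, 180 W)")
A real controller measures the module's actual voltage and current instead of evaluating a formula, and keeps perturbing indefinitely so that it follows the maximum power point as irradiance and temperature change.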
However, in simple PV systems where the PV module voltage is matched to the battery voltage, the use of MPPT electronics is generally considered unnecessary, since the battery voltage is stable enough to provide near-maximum power collection from the PV module. In small devices (e.g. calculators, parking meters) only direct current (DC) is consumed. In larger systems (e.g. buildings, remote water pumps) AC is usually required. To convert the DC from the modules or batteries into AC, an inverter is used. In agricultural settings, the array may be used to directly power DC pumps, without the need for an inverter. In remote settings such as mountainous areas, islands, or other places where a power grid is unavailable, solar arrays can be used as the sole source of electricity, usually by charging a storage battery. Stand-alone systems closely relate to microgeneration and distributed generation. Costs and economy. (Charts: median installed system prices for residential PV systems in Japan, Germany and the United States, in $/W; history of solar rooftop prices 2006–2013, compared in US$ per installed watt.) The cost of producing photovoltaic cells has dropped because of economies of scale in production and technological advances in manufacturing. For large-scale installations, prices below $1.00 per watt were common by 2012. A price decrease of 50% had been achieved in Europe from 2006 to 2011, and there was a potential to lower the generation cost by 50% by 2020. Single-crystal silicon solar cells have largely been replaced by less expensive multicrystalline silicon solar cells, and thin film silicon solar cells have also been developed at lower costs of production. Although their energy conversion efficiency is lower than that of single-crystalline silicon wafers, they are also much easier to produce at comparably lower costs. The table below shows the total (average) cost in US cents per kWh of electricity generated by a photovoltaic system. The row headings on the left show the total cost, per peak kilowatt (kWp), of a photovoltaic installation. Photovoltaic system costs have been declining and in Germany, for example, were reported to have fallen to USD 1389/kWp by the end of 2014. The column headings across the top refer to the annual energy output in kWh expected from each installed kWp. This varies by geographic region because the average insolation depends on the average cloudiness and the thickness of atmosphere traversed by the sunlight. It also depends on the path of the sun relative to the panel and the horizon. Panels are usually mounted at an angle based on latitude, and often they are adjusted seasonally to meet the changing solar declination. Solar tracking can also be utilized to access even more perpendicular sunlight, thereby raising the total energy output. The calculated values in the table reflect the total (average) cost in cents per kWh produced. They assume a 10% total capital cost (for instance 4% interest rate, 1% operating and maintenance cost, and depreciation of the capital outlay over 20 years). Normally, photovoltaic modules have a 25-year warranty. Learning curve. Photovoltaic systems demonstrate a learning curve in terms of levelized cost of electricity (LCOE), reducing its cost per kWh by 32.6% for every doubling of capacity. From the data of LCOE and cumulative installed capacity from the International Renewable Energy Agency (IRENA) from 2010 to 2017, the learning curve equation for photovoltaic systems is given as formula_1 Regulation. Standardization. 
Increasing use of photovoltaic systems and integration of photovoltaic power into existing structures and techniques of supply and distribution increases the need for general standards and definitions for photovoltaic components and systems. The standards are compiled at the International Electrotechnical Commission (IEC) and apply to efficiency, durability and safety of cells, modules, simulation programs, plug connectors and cables, mounting systems, overall efficiency of inverters etc. National regulations. United Kingdom. In the UK, PV installations are generally considered permitted development and do not require planning permission. If the property is listed or in a designated area (National Park, Area of Outstanding Natural Beauty, Site of Special Scientific Interest or Norfolk Broads) then planning permission is required. UK Solar PV installations are also subject to control under the Building Regulations 2010. Buildings regulation approval is therefore necessary for both domestic and commercial solar PV rootop installations to ensure that they meet the required safety standards. This includes ensuring that the roof can support the weight of the solar panels, that the electrical connections are safe, and that there are no fire risks. United States. In the United States, article 690 of the National Electric Code provides general guidelines for the installation of photovoltaic systems; these may be superseded by local laws and regulations. Often a permit is required necessitating plan submissions and structural calculations before work may begin. Additionally, many locales require the work to be performed under the guidance of a licensed electrician. The Authority Having Jurisdiction (AHJ) will review designs and issue permits, before construction can lawfully begin. Electrical installation practices must comply with standards set forth within the National Electrical Code (NEC) and be inspected by the AHJ to ensure compliance with building code, electrical code, and fire safety code. Jurisdictions may require that equipment has been tested, certified, listed, and labeled by at least one of the Nationally Recognized Testing Laboratories (NRTL). Many localities require a permit to install a photovoltaic system. A grid-tied system normally requires a licensed electrician to connect between the system and the grid-connected wiring of the building. Installers who meet these qualifications are located in almost every state. Several states prohibit homeowners' associations from restricting solar devices. Spain. Although Spain generates around 40% of its electricity via photovoltaic and other renewable energy sources, and cities such as Huelva and Seville boast nearly 3,000 hours of sunshine per year, in 2013 Spain issued a solar tax to account for the debt created by the investment done by the Spanish government. Those who do not connect to the grid can face up to a fine of 30 million euros (US$40 million). Such measures were finally withdrawn by 2018, when new legislation was introduced banning any taxes on renewable energy self-consumption. Limitations. Impact on electricity network. With the increasing levels of rooftop photovoltaic systems, the energy flow becomes two-way. When there is more local generation than consumption, electricity is exported to the grid. However, electricity network traditionally is not designed to deal with the two-way energy transfer. Therefore, some technical issues may occur. 
For example, in Queensland, Australia, there have been more than 30% of households with rooftop PV by the end of 2017. The famous Californian 2020 duck curve appears very often for a lot of communities from 2015 onwards. An over-voltage issue may come out as the electricity flows back to the network. There are solutions to manage the over voltage issue, such as regulating PV inverter power factor, new voltage and energy control equipment at electricity distributor level, re-conductor the electricity wires, demand side management, etc. There are often limitations and costs related to these solutions. A way to calculate these costs and benefits is to use the concept of 'value of solar' (VOS), which includes the avoided costs/losses including: plant operations ans maintenance (fixed and variable); fuel; generation capacity, reserve capacity, transmission capacity, distribution capacity, and environmental and health liability. Popular Mechanics reports that VOS results show that grid-tied utility customers are being grossly under-compensated in most of the U.S. as the value of solar eclipses the net metering rate as well as two-tiered rates, which means "your neighbor's solar panels are secretly saving you money". Implications for electricity bill management and energy investment. Customers have different specific situations, e.g. different comfort/convenience needs, different electricity tariffs, or different usage patterns. An electricity tariff may have a few elements, such as daily access and metering charge, energy charge (based on kWh, MWh) or peak demand charge (e.g. a price for the highest 30min energy consumption in a month). PV is a promising option for reducing energy charge when electricity price is reasonably high and continuously increasing, such as in Australia and Germany. However, for sites with peak demand charge in place, PV may be less attractive if peak demands mostly occur in the late afternoon to early evening, for example residential communities. Overall, energy investment is largely an economic decision and investment decisions are based on systematical evaluation of options in operational improvement, energy efficiency, onsite generation and energy storage. Grid-connected photovoltaic system. A grid-connected photovoltaic system, or grid-connected PV system is an electricity generating solar PV power system that is connected to the utility grid. A grid-connected PV system consists of solar panels, one or several inverters, a power conditioning unit and grid connection equipment. They range from small residential and commercial rooftop systems to large utility-scale solar power stations. When conditions are right, the grid-connected PV system supplies the excess power, beyond consumption by the connected load, to the utility grid. Operation. Residential, grid-connected rooftop systems which have a capacity more than 10 kilowatts can meet the load of most consumers. They can feed excess power to the grid where it is consumed by other users. The feedback is done through a meter to monitor power transferred. Photovoltaic wattage may be less than average consumption, in which case the consumer will continue to purchase grid energy, but a lesser amount than previously. If photovoltaic wattage substantially exceeds average consumption, the energy produced by the panels will be much in excess of the demand. In this case, the excess power can yield revenue by selling it to the grid. 
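To make the tariff elements described above concrete, the sketch below compares a monthly bill with and without PV under an assumed tariff combining a daily access charge, an energy charge, a peak-demand charge and a feed-in credit for exported energy; every figure is invented for illustration, and the example assumes the monthly peak occurs after sunset, so the demand charge is unaffected by PV.
# Illustrative monthly bill under an assumed tariff (all figures invented).
DAILY_ACCESS = 1.00   # per day
ENERGY_RATE = 0.30    # per kWh imported from the grid
FEED_IN_RATE = 0.10   # credit per kWh exported to the grid
DEMAND_RATE = 12.00   # per kW of the highest 30-minute demand in the month

def monthly_bill(import_kwh, export_kwh, peak_kw, days=30):
    return (days * DAILY_ACCESS
            + import_kwh * ENERGY_RATE
            - export_kwh * FEED_IN_RATE
            + peak_kw * DEMAND_RATE)

without_pv = monthly_bill(import_kwh=600, export_kwh=0, peak_kw=5.0)
with_pv = monthly_bill(import_kwh=360, export_kwh=150, peak_kw=5.0)  # PV covers part of the load and exports the rest
print(f"bill without PV: {without_pv:.2f}")
print(f"bill with PV:    {with_pv:.2f}")
print(f"saving:          {without_pv - with_pv:.2f} (demand charge unchanged)")
Under these assumptions PV cuts the energy charge and earns a small export credit, but leaves the peak-demand charge untouched, which is the situation described above for loads that peak in the late afternoon to early evening.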
Depending on their agreement with their local grid energy company, the consumer only needs to pay the cost of electricity consumed less the value of electricity generated. This will be a negative number if more electricity is generated than consumed. Additionally, in some cases, cash incentives are paid from the grid operator to the consumer. Connection of the photovoltaic power system can be done only through an interconnection agreement between the consumer and the utility company. The agreement details the various safety standards to be followed during the connection. Features. Electric power from photovoltaic panels must be converted to alternating current by a special power inverter if it is intended for delivery to a power grid. The inverter sits between the solar array and the grid, and may be a large stand-alone unit or may be a collection of small inverters attached to individual solar panels as an AC module. The inverter must monitor grid voltage, waveform, and frequency. The inverter must detect failure of the grid supply, and then, must not supply power to the grid. An inverter connected to a malfunctioning power line will automatically disconnect in accordance with safety rules, which vary by jurisdiction. The location of the fault current plays a crucial part in deciding whether the protection mechanism of the inverter will kick in, especially for low and medium electricity supply network. A protection system must ensure proper operation for faults external to the inverter on the supply network. The special inverter must also be designed to synchronize its AC frequency with the grid, to ensure the correct integration of the inverter power flow into the grid according to the waveform. Islanding. Islanding is the condition in which a distributed generator continues to power a location even though power from the electric utility grid is no longer present. Islanding can be dangerous to utility workers, who may not realize that a circuit is still powered, even though there is no power from the electrical grid. For that reason, distributed generators must detect islanding and immediately stop producing power; this is referred to as anti-islanding. Anti-islanding. In the case of a utility blackout in a grid-connected PV system, the solar panels will continue to deliver power as long as the sun is shining. In this case, the supply line becomes an "island" with power surrounded by a "sea" of unpowered lines. For this reason, solar inverters that are designed to supply power to the grid are generally required to have automatic anti-islanding circuitry in them. In intentional islanding, the generator disconnects from the grid, and forces the distributed generator to power the local circuit. This is often used as a power backup system for buildings that normally sell their power to the grid. There are two types of anti-islanding control techniques: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "E_d" }, { "math_id": 1, "text": "LCOE_{photovoltaic}=151.46 \\, Capacity^{-0.57}" } ]
https://en.wikipedia.org/wiki?curid=15677755
15677992
Project Euler
Website and series of mathematical challenges Project Euler (named after Leonhard Euler) is a website dedicated to a series of computational problems intended to be solved with computer programs. The project attracts graduates and students interested in mathematics and computer programming. Since its creation in 2001 by Colin Hughes, Project Euler has gained notability and popularity worldwide. It includes 904 problems as of 25 August 2024, with a new one added approximately every week. Problems are of varying difficulty, but each is solvable in less than a minute of CPU time using an efficient algorithm on a modestly powered computer. Features of the site. A forum specific to each question may be viewed after the user has correctly answered the given question. Problems can be sorted on ID, number solved and difficulty. Participants can track their progress through achievement levels based on the number of problems solved. A new level is reached for every 25 problems solved. Special awards exist for solving special combinations of problems. For instance, there is an award for solving fifty prime numbered problems. A special "Eulerians" level exists to track achievement based on the fastest fifty solvers of recent problems so that newer members can compete without solving older problems. Example problem and solutions. The first Project Euler problem is Multiples of 3 and 5: If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000. It is a 5% rated problem, indicating it is one of the easiest on the site. The initial approach a beginner can come up with is a brute-force attempt. Given the upper bound of 1000 in this case, a brute-force approach is easily achievable for most current home computers. Python code that solves it is presented below.
def solve(limit):
    total = 0
    for i in range(limit):
        if i % 3 == 0 or i % 5 == 0:
            total += i
    return total

print(solve(1000))
This solution has a time complexity of formula_0. A user could keep refining their solution for any given problem further. In this case, there exists a constant-time solution for the problem. The inclusion-exclusion principle states that if there are two finite sets formula_1, the number of elements in their union can be expressed as formula_2. This is a well-known combinatorics result. One can extend this result and express a relation for the sum of their elements, namely formula_3 Applying this to the problem, let formula_4 denote the multiples of 3 up to formula_5 and formula_6 the multiples of 5 up to formula_7; the problem can then be reduced to summing the multiples of 3, adding the sum of the multiples of 5, and subtracting the sum of the multiples of 15. For an arbitrarily selected formula_8, one can compute the sum of the multiples of formula_9 up to formula_7 via formula_10 Later problems progress (non-linearly) in difficulty, requiring more creative methodology and higher understanding of the mathematical principles behind the problems. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "O(n)" }, { "math_id": 1, "text": "A, B" }, { "math_id": 2, "text": "|A \\cup B| = |A| + |B| - |A \\cap B|" }, { "math_id": 3, "text": "\\sum_{x \\in A \\cup B} x = \\sum_{x \\in A} x + \\sum_{x \\in B} x - \\sum_{x \\in A \\cap B} x " }, { "math_id": 4, "text": "A " }, { "math_id": 5, "text": "n \n" }, { "math_id": 6, "text": "B " }, { "math_id": 7, "text": "n " }, { "math_id": 8, "text": "k \n" }, { "math_id": 9, "text": "k " }, { "math_id": 10, "text": "k + 2k + 3k + \\ldots + \\lfloor n/k \\rfloor k = k (1 + 2 + 3 + \\ldots + \\lfloor n/k \\rfloor) = k \\frac{\\lfloor n/k \\rfloor(\\lfloor n/k \\rfloor+1)}2" } ]
https://en.wikipedia.org/wiki?curid=15677992
156794
Composition
Composition or Compositions may refer to: <templatestyles src="Template:TOC_right/styles.css" /> See also. Topics referred to by the same term <templatestyles src="Dmbox/styles.css" /> This page lists articles associated with the title Composition.
[ { "math_id": 0, "text": "N(x y) = N(x) N(y)" } ]
https://en.wikipedia.org/wiki?curid=156794
15680391
Lis (linear algebra library)
Lis (Library of Iterative Solvers for linear systems, pronounced [lis]) is a scalable parallel software library to solve discretized linear equations and eigenvalue problems that mainly arise from the numerical solution of partial differential equations using iterative methods. Although it is designed for parallel computers, the library can be used without being conscious of parallel processing. Features. Lis provides facilities for: Example. A C program to solve the linear equation formula_0 is written as follows:
#include <stdio.h>
#include "lis.h"

LIS_INT main(LIS_INT argc, char* argv[])
{
    LIS_MATRIX A;
    LIS_VECTOR b, x;
    LIS_SOLVER solver;
    LIS_INT iter;
    double time;

    lis_initialize(&argc, &argv);

    /* read the coefficient matrix and the right-hand side vector from the files named on the command line */
    lis_matrix_create(LIS_COMM_WORLD, &A);
    lis_vector_create(LIS_COMM_WORLD, &b);
    lis_vector_create(LIS_COMM_WORLD, &x);
    lis_input_matrix(A, argv[1]);
    lis_input_vector(b, argv[2]);
    lis_vector_duplicate(A, &x);

    /* create the solver, read its options from the command line, and solve Ax = b */
    lis_solver_create(&solver);
    lis_solver_set_optionC(solver);
    lis_solve(A, b, x, solver);

    lis_solver_get_iter(solver, &iter);
    lis_solver_get_time(solver, &time);
    printf("number of iterations = %d\n", iter);
    printf("elapsed time = %e\n", time);

    /* write the solution vector in Matrix Market format */
    lis_output_vector(x, LIS_FMT_MM, argv[3]);

    lis_solver_destroy(solver);
    lis_matrix_destroy(A);
    lis_vector_destroy(b);
    lis_vector_destroy(x);
    lis_finalize();
    return 0;
}
System requirements. Installing Lis requires a C compiler. If you wish to use the Fortran interface, a Fortran compiler is needed, and the algebraic multigrid preconditioner requires a Fortran 90 compiler. For parallel computing environments, an OpenMP or MPI library is necessary. Lis supports both the Matrix Market and Harwell-Boeing formats for importing and exporting user data. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "Ax=b" } ]
https://en.wikipedia.org/wiki?curid=15680391
156817
Gridiron pendulum
Pendulum mechanism that adjusts with temperature The gridiron pendulum was a temperature-compensated clock pendulum invented by British clockmaker John Harrison around 1726. It was used in precision clocks. In ordinary clock pendulums, the pendulum rod expands and contracts with changes in temperature. The period of the pendulum's swing depends on its length, so a pendulum clock's rate varied with changes in ambient temperature, causing inaccurate timekeeping. The gridiron pendulum consists of alternating parallel rods of two metals with different thermal expansion coefficients, such as steel and brass. The rods are connected by a frame in such a way that their different thermal expansions (or contractions) compensate for each other, so that the overall length of the pendulum, and thus its period, stays constant with temperature. The gridiron pendulum was used during the Industrial Revolution period in pendulum clocks, particularly precision "regulator clocks" employed as time standards in factories, laboratories, office buildings, railroad stations and post offices to schedule work and set other clocks. The gridiron became so associated with accurate timekeeping that by the turn of the 20th century many clocks had pendulums with decorative fake gridirons, which had no temperature compensating qualities. How it works. The gridiron pendulum is constructed so the high thermal expansion (zinc or brass) rods make the pendulum shorter when they expand, while the low expansion steel rods make the pendulum longer. By using the correct ratio of lengths, the greater expansion of the zinc or brass rods exactly compensate for the greater length of the low expansion steel rods, and the pendulum stays the same length with temperature changes. The simplest form of gridiron pendulum, introduced as an improvement to Harrison's around 1750 by John Smeaton, consists of five rods, 3 of steel and two of zinc. A central steel rod runs up from the bob to the suspension pivot. At that point a cross-piece (middle bridge) extends from the central rod and connects to two zinc rods, one on each side of the central rod, which reach down to, and are fixed to, the bottom bridge just above the bob. The bottom bridge clears the central rod and connects to two further steel rods which run back up to the top bridge attached to the suspension. As the steel rods expand in heat, the bottom bridge drops relative to the suspension, and the bob drops relative to the middle bridge. However, the middle bridge rises relative to the bottom one because the greater expansion of the zinc rods pushes the middle bridge, and therefore the bob, upward to match the combined drop caused by the expanding steel. In simple terms, the upward expansion of the zinc counteracts the combined downward expansion of the steel (which has a greater total length). The rod lengths are calculated so that the effective length of the zinc rods multiplied by zinc's thermal expansion coefficient equals the effective length of the steel rods multiplied by iron's expansion coefficient, thereby keeping the pendulum the same length. Harrison's original pendulum used brass rods (pure zinc not being available then); these required more rods because brass does not expand as much as zinc does. Instead of one high expansion rod on each side, two are needed on each side, requiring a total of 9 rods, five steel and four brass. The exact degree of compensation can be adjusted by having a section of the central rod which is partly brass and partly steel. 
These overlap (like a sandwich) and are joined by a pin which passes through both metals. A number of holes for the pin are made in both parts and moving the pin up or down the rod changes how much of the combined rod is brass and how much is steel. In the late 19th century the Dent company developed a tubular version of the zinc gridiron in which the four outer rods were replaced by two concentric tubes which were linked by a tubular nut which could be screwed up and down to alter the degree of compensation. In the 1730s clockmaker John Ellicott designed a version that only required 3 rods, two brass and one steel ("see drawing"), in which the brass rods as they expanded with increasing temperature pressed against levers which lifted the bob. The Ellicott pendulum did not see much use. Disadvantages. Scientists in the 1800s found that the gridiron pendulum had disadvantages that made it unsuitable for the highest-precision clocks. The friction of the rods sliding in the holes in the frame caused the rods to adjust to temperature changes in a series of tiny jumps, rather than with a smooth motion. This caused the rate of the pendulum, and therefore the clock, to change suddenly with each jump. Later it was found that zinc is not very stable dimensionally; it is subject to creep. Therefore, another type of temperature-compensated pendulum, the mercury pendulum invented in 1721 by George Graham, was used in the highest-precision clocks. By 1900, the highest-precision astronomical regulator clocks used pendulum rods of low thermal expansion materials such as invar and fused quartz. Mathematical analysis. Temperature error. All substances expand with an increase in temperature formula_0, so uncompensated pendulum rods get longer with a temperature increase, causing the clock to slow down, and get shorter with a temperature decrease, causing the clock to speed up. The amount depends on the linear coefficient of thermal expansion (CTE) formula_1 of the material they are composed of. CTE is usually given in parts per million per degree Celsius. The expansion or contraction of a rod of length formula_2 with a coefficient of expansion formula_1 caused by a temperature change formula_3 is formula_4        (1) The period of oscillation formula_5 of the pendulum (the time interval for a right swing and a left swing) is formula_6        (2) A change in length formula_7 due to a temperature change formula_3 will cause a change in the period formula_8. Since the expansion coefficient is so small, the length changes due to temperature are very small, parts per million, so formula_9 and the change in period can be approximated to first order as a linear function formula_10 formula_11 Substituting equation (1), the change in the pendulum's period caused by a change in temperature formula_3 is formula_12 formula_13 formula_14 So the fractional change in an uncompensated pendulum's period is equal to one-half the coefficient of expansion times the change in temperature. Steel has a CTE of 11.5 x 10−6 per °C so a pendulum with a steel rod will have a thermal error rate of 5.7 parts per million or 0.5 seconds per day per degree Celsius (about 0.28 seconds per day per degree Fahrenheit). Before 1900 most buildings were unheated, so clocks in temperate climates like Europe and North America would experience a summer/winter temperature variation of around 14 °C (25 °F), resulting in an error rate of 6.8 seconds per day. 
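These error rates are easy to verify numerically; the short sketch below applies the half-coefficient rule just derived to steel and, anticipating the following paragraph, to wood, using the expansion coefficients quoted in the text.
# Daily timekeeping error of an uncompensated pendulum rod:
# fractional period change per degree = alpha / 2 (first-order result derived above).
SECONDS_PER_DAY = 86400

def error_seconds_per_day_per_degC(alpha_per_degC):
    return 0.5 * alpha_per_degC * SECONDS_PER_DAY

for name, alpha in [("steel", 11.5e-6), ("wood", 4.9e-6)]:
    print(f"{name}: {error_seconds_per_day_per_degC(alpha):.2f} seconds per day per degree Celsius")
The printed values, 0.50 s/day/°C for steel and 0.21 s/day/°C for wood, match the figures given in the text.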
Wood has a smaller CTE of 4.9 x 10−6 per °C thus a pendulum with a wood rod will have a smaller thermal error of 0.21 sec per day per °C, so wood pendulum rods were often used in quality domestic clocks. The wood had to be varnished to protect it from the atmosphere as humidity could also cause changes in length. Compensation. A gridiron pendulum is symmetrical, with two identical linkages of suspension rods, one on each side, suspending the bob from the pivot. Within each suspension chain, the total change in length of the pendulum formula_2 is equal to the sum of the changes of the rods that make it up. It is designed so with an increase in temperature the high expansion rods on each side push the pendulum bob up, in the opposite direction to the low expansion rods which push it down, so the net change in length is the difference between these changes formula_15 From (1) the change in length formula_7 of a gridiron pendulum with a temperature change formula_3 is formula_16 formula_17 where formula_18 is the sum of the lengths of all the low expansion (steel) rods and formula_19 is the sum of the lengths of the high expansion rods in the suspension chain from the bob to the pivot. The condition for zero length change with temperature is formula_20 formula_21        (3) In other words, the ratio of thermal expansion coefficients of the two metals must be equal to the inverse ratio of the total rod lengths. In order to calculate the length of the individual rods, this equation is solved along with equation (2) giving the total length of pendulum needed for the correct period formula_5 formula_22 Most of the precision pendulum clocks with gridirons used a 'seconds pendulum', in which the period was two seconds. The length of the seconds pendulum was formula_23. In an ordinary uncompensated pendulum, which has most of its mass in the bob, the center of oscillation is near the center of the bob, so it was usually accurate enough to make the length from the pivot to the center of the bob formula_24 0.9936 m and then correct the clock's period with the adjustment nut. But in a gridiron pendulum, the gridiron constitutes a significant part of the mass of the pendulum. This changes the moment of inertia so the center of oscillation is somewhat higher, above the bob in the gridiron. Therefore the total length formula_2 of the pendulum must be somewhat longer to give the correct period. This factor is hard to calculate accurately. Another minor factor is that if the pendulum bob is supported at bottom by a nut on the pendulum rod, as is typical, the rise in center of gravity due to thermal expansion of the bob has to be taken into account. Clockmakers of the 19th century usually used recommended lengths for gridiron rods that had been found by master clockmakers by trial and error. Five rod gridiron. In the 5 rod gridiron, there is one high expansion rod on each side, of length formula_25, flanked by two low expansion rods with lengths formula_26 and formula_27, one from the pivot to support the bottom of formula_25, the other goes from the top of formula_25 down to support the bob. 
So from equation (3) the condition for compensation is formula_28 Since to fit in the frame the high expansion rod must be equal to or shorter than each of the low expansion rods formula_29 and formula_30 the geometrical condition for construction of the gridiron is formula_31 Therefore the 5 rod gridiron can only be made with metals whose expansion coefficients have a ratio greater than or equal to two formula_32 Zinc has a CTE of formula_1 = 26.2 x 10−6 per °C, a ratio of formula_33 = 2.28 times steel, so the zinc/steel combination can be used in 5 rod pendulums. The compensation condition for a zinc/steel gridiron is formula_34 Nine rod gridiron. To allow the use of metals with a lower ratio of expansion coefficients, such as brass and steel, a greater proportion of the suspension length must be the high expansion metal, so a construction with more high expansion rods must be used. In the 9 rod gridiron, there are two high expansion rods on each side, of length formula_25 and formula_35, flanked by three low expansion rods with lengths formula_26, formula_27 and formula_36. So from equation (3) the condition for compensation is formula_37 Since to fit in the frame each of the two high expansion rods must be as short as or shorter than each of the low expansion rods, the geometrical condition for construction is formula_38 Therefore the 9 rod gridiron can be made with metals with a ratio of thermal expansion coefficients of at least 1.5. formula_39 Brass has a CTE of around formula_1 = 19.3 x 10−6 per °C, a ratio of formula_33 = 1.68 times steel. So while brass/steel cannot be used in 5 rod gridirons, it can be used in the 9 rod version. So the compensation condition for a brass/steel gridiron using brass with the above CTE is formula_40 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
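As a rough illustration of how the two compensation equations above fix the rod lengths, the totals for a zinc/steel seconds pendulum can be solved for directly (a Python sketch; it ignores the centre-of-oscillation correction discussed earlier, so the numbers are only indicative):

    # Solve  sum_steel - sum_zinc = L  together with  sum_steel / sum_zinc = alpha_zinc / alpha_steel
    alpha_steel = 11.5e-6    # per deg C, values quoted in the text
    alpha_zinc  = 26.2e-6
    L = 0.9936               # approximate length of a seconds pendulum, metres

    r = alpha_zinc / alpha_steel           # ~2.28
    total_zinc  = L / (r - 1)              # total length of the high-expansion (zinc) rods
    total_steel = r * total_zinc           # total length of the low-expansion (steel) rods

    print(f"ratio = {r:.2f}")
    print(f"total zinc length  = {total_zinc:.3f} m")     # ~0.78 m
    print(f"total steel length = {total_steel:.3f} m")    # ~1.77 m
    print(f"net length check   = {total_steel - total_zinc:.4f} m")   # ~0.9936 m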
[ { "math_id": 0, "text": "\\theta" }, { "math_id": 1, "text": "\\alpha" }, { "math_id": 2, "text": "L" }, { "math_id": 3, "text": "\\Delta\\theta" }, { "math_id": 4, "text": "\\Delta L = \\alpha L \\Delta\\theta" }, { "math_id": 5, "text": "T" }, { "math_id": 6, "text": "T = 2\\pi\\sqrt{L \\over g}" }, { "math_id": 7, "text": "\\Delta L" }, { "math_id": 8, "text": "\\Delta T" }, { "math_id": 9, "text": "\\Delta T << T" }, { "math_id": 10, "text": "\\Delta T = {dT \\over dL}\\Delta L" }, { "math_id": 11, "text": "\\qquad = {d \\over dL}\\Big( 2\\pi\\sqrt{L \\over g}\\Big)\\Delta L = \\pi{\\Delta L \\over \\sqrt{gL}}" }, { "math_id": 12, "text": "\\qquad = \\pi{\\alpha L \\Delta\\theta \\over \\sqrt{gL}} = \\alpha\\pi\\sqrt{L \\over g}\\Delta\\theta" }, { "math_id": 13, "text": "\\Delta T = {\\alpha T\\Delta\\theta \\over 2}" }, { "math_id": 14, "text": "\\quad{\\Delta T \\over T} = {\\alpha\\Delta\\theta \\over 2}\\quad" }, { "math_id": 15, "text": "\\Delta L = \\sum \\Delta L_\\text{low} - \\sum \\Delta L_\\text{high}" }, { "math_id": 16, "text": "\\Delta L = \\sum\\alpha_\\text{low}L_\\text{low}\\Delta\\theta - \\sum\\alpha_\\text{high}L_\\text{high}\\Delta\\theta" }, { "math_id": 17, "text": "\\Delta L = (\\alpha_\\text{low}\\sum L_\\text{low} - \\alpha_\\text{high}\\sum L_\\text{high})\\Delta\\theta" }, { "math_id": 18, "text": "\\sum L_\\text{low}" }, { "math_id": 19, "text": "\\sum L_\\text{high}" }, { "math_id": 20, "text": "\\alpha_\\text{low}\\sum L_\\text{low} - \\alpha_\\text{high}\\sum L_\\text{high} = 0" }, { "math_id": 21, "text": "{\\alpha_\\text{high} \\over \\alpha_\\text{low}} = {\\sum L_\\text{low} \\over \\sum L_\\text{high}}" }, { "math_id": 22, "text": "L = \\sum L_\\text{low} - \\sum L_\\text{high} = g\\big({T \\over 2\\pi}\\big)^2" }, { "math_id": 23, "text": "L =\\," }, { "math_id": 24, "text": "L =" }, { "math_id": 25, "text": "L_\\text{2}" }, { "math_id": 26, "text": "L_\\text{1}" }, { "math_id": 27, "text": "L_\\text{3}" }, { "math_id": 28, "text": "{\\alpha_\\text{high} \\over \\alpha_\\text{low}} = {L_\\text{1} + L_\\text{3} \\over L_\\text{2}}" }, { "math_id": 29, "text": "L_\\text{1} \\ge L_\\text{2}" }, { "math_id": 30, "text": "L_\\text{3} \\ge L_\\text{2}" }, { "math_id": 31, "text": "L_\\text{1} + L_\\text{3} \\ge 2L_\\text{2}" }, { "math_id": 32, "text": "{\\alpha_\\text{high} \\over \\alpha_\\text{low}} = {L_\\text{1} + L_\\text{3} \\over L_\\text{2}} \\ge 2" }, { "math_id": 33, "text": "\\alpha_\\text{high}/\\alpha_\\text{low}" }, { "math_id": 34, "text": "{L_\\text{1} + L_\\text{3} \\over L_\\text{2}} = 2.28" }, { "math_id": 35, "text": "L_\\text{4}" }, { "math_id": 36, "text": "L_\\text{5}" }, { "math_id": 37, "text": "{\\alpha_\\text{high} \\over \\alpha_\\text{low}} = {L_\\text{1} + L_\\text{3} + L_\\text{5} \\over L_\\text{2} + L_\\text{4}}" }, { "math_id": 38, "text": "L_\\text{1} + L_\\text{3} + L_\\text{5} \\ge {3 \\over 2}(L_\\text{2} + L_\\text{2})" }, { "math_id": 39, "text": "{\\alpha_\\text{high} \\over \\alpha_\\text{low}} = {L_\\text{1} + L_\\text{3} + L_\\text{5} \\over L_\\text{2} + L_\\text{4}} \\ge 1.5" }, { "math_id": 40, "text": "{L_\\text{1} + L_\\text{3} + L_\\text{5} \\over L_\\text{2} + L_\\text{4}} = 1.68" } ]
https://en.wikipedia.org/wiki?curid=156817
15682905
MacCormack method
Equation in computational fluid dynamics In computational fluid dynamics, the MacCormack method (/məˈkɔːrmæk ˈmɛθəd/) is a widely used discretization scheme for the numerical solution of hyperbolic partial differential equations. This second-order finite difference method was introduced by Robert W. MacCormack in 1969. The MacCormack method is elegant and easy to understand and program. The algorithm. The MacCormack method is designed to solve hyperbolic partial differential equations of the form formula_0 To update this equation one timestep formula_1 on a grid with spacing formula_2 at grid cell formula_3, the MacCormack method uses a "predictor step" and a "corrector step", given below formula_4 Linear Example. To illustrate the algorithm, consider the following first order hyperbolic equation formula_5 The application of MacCormack method to the above equation proceeds in two steps; a "predictor step" which is followed by a "corrector step". Predictor step: In the predictor step, a "provisional" value of formula_6 at time level formula_7 (denoted by formula_8) is estimated as follows formula_9 The above equation is obtained by replacing the spatial and temporal derivatives in the previous first order hyperbolic equation using forward differences. Corrector step: In the corrector step, the predicted value formula_8 is corrected according to the equation formula_10 Note that the corrector step uses backward finite difference approximations for spatial derivative. The time-step used in the corrector step is formula_11 in contrast to the formula_12 used in the predictor step. Replacing the formula_13 term by the temporal average formula_14 to obtain the corrector step as formula_15 Some remarks. The MacCormack method is well suited for nonlinear equations (Inviscid Burgers equation, Euler equations, etc.) The order of differencing can be reversed for the time step (i.e., forward/backward followed by backward/forward). For nonlinear equations, this procedure provides the best results. For linear equations, the MacCormack scheme is equivalent to the Lax–Wendroff method. Unlike first-order upwind scheme, the MacCormack does not introduce diffusive errors in the solution. However, it is known to introduce dispersive errors (Gibbs phenomenon) in the region where the gradient is high. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
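The two-step scheme is short enough to state directly in code. The following Python sketch applies it to the linear advection equation discussed above; the grid size, CFL number, initial pulse and periodic boundary are illustrative choices, not part of the method itself:

    import numpy as np

    # MacCormack scheme for u_t + a u_x = 0: forward-difference predictor, backward-difference corrector
    a, nx = 1.0, 200
    dx = 1.0 / nx
    dt = 0.5 * dx / a                      # CFL number 0.5
    x = np.arange(nx) * dx
    u = np.exp(-200 * (x - 0.3) ** 2)      # initial Gaussian pulse

    for _ in range(200):
        f = a * u
        u_p = u - dt / dx * (np.roll(f, -1) - f)                        # predictor step
        f_p = a * u_p
        u = 0.5 * (u + u_p) - dt / (2 * dx) * (f_p - np.roll(f_p, 1))   # corrector step

    # After 200 steps the pulse has moved a distance a*dt*200 = 0.5 with little numerical diffusion
    print(u.max())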
[ { "math_id": 0, "text": "\n\\frac{\\partial u}{\\partial t} + \\frac{\\partial f(u)}{\\partial x} = 0\n" }, { "math_id": 1, "text": " \\Delta t " }, { "math_id": 2, "text": " \\Delta x " }, { "math_id": 3, "text": " i " }, { "math_id": 4, "text": " \\begin{align}\n&u_i^p = u^n_i - \\frac{\\Delta t}{\\Delta x}\\left(f^n_{i+1} - f^n_i\\right) \\\\\n &u^{n+1}_i = \\frac{1}{2}(u^n_i + u^p_i) - \\frac{\\Delta t}{2\\Delta x}(f^p_i - f^p_{i-1})\n\\end{align} " }, { "math_id": 5, "text": "\n \\qquad \\frac{\\partial u}{\\partial t} + a \\frac{\\partial u}{\\partial x} = 0 .\n" }, { "math_id": 6, "text": "u" }, { "math_id": 7, "text": "n+1" }, { "math_id": 8, "text": "u_i^p" }, { "math_id": 9, "text": "\n u_i^p = u_i^n - a \\frac{\\Delta t}{\\Delta x} \\left( u_{i+1}^n - u_i^n \\right)\n" }, { "math_id": 10, "text": "\n u_i^{n+1} = u_i^{n+1/2} - a \\frac{\\Delta t}{2\\Delta x} \\left( u_i^p - u_{i-1}^p \\right)\n" }, { "math_id": 11, "text": "\\Delta t/2" }, { "math_id": 12, "text": "\\Delta t" }, { "math_id": 13, "text": "u_i^{n+1/2}" }, { "math_id": 14, "text": "\n u_i^{n+1/2} = \\frac{u_i^n + u_i^p}{2}\n" }, { "math_id": 15, "text": "\n u_i^{n+1} = \\frac{u_i^n + u_i^p}{2} - a \\frac{\\Delta t}{2\\Delta x} \\left( u_i^p - u_{i-1}^p \\right)\n" } ]
https://en.wikipedia.org/wiki?curid=15682905
1568489
Todd class
In mathematics, the Todd class is a certain construction now considered a part of the theory in algebraic topology of characteristic classes. The Todd class of a vector bundle can be defined by means of the theory of Chern classes, and is encountered where Chern classes exist — most notably in differential topology, the theory of complex manifolds and algebraic geometry. In rough terms, a Todd class acts like a reciprocal of a Chern class, or stands in relation to it as a conormal bundle does to a normal bundle. The Todd class plays a fundamental role in generalising the classical Riemann–Roch theorem to higher dimensions, in the Hirzebruch–Riemann–Roch theorem and the Grothendieck–Hirzebruch–Riemann–Roch theorem. History. It is named for J. A. Todd, who introduced a special case of the concept in algebraic geometry in 1937, before the Chern classes were defined. The geometric idea involved is sometimes called the Todd-Eger class. The general definition in higher dimensions is due to Friedrich Hirzebruch. Definition. To define the Todd class formula_0 where formula_1 is a complex vector bundle on a topological space formula_2, it is usually possible to limit the definition to the case of a Whitney sum of line bundles, by means of a general device of characteristic class theory, the use of Chern roots (aka, the splitting principle). For the definition, let formula_3 be the formal power series with the property that the coefficient of formula_4 in formula_5 is 1, where formula_6 denotes the formula_7-th Bernoulli number. Consider the coefficient of formula_8 in the product formula_9 for any formula_10. This is symmetric in the formula_11s and homogeneous of weight formula_12: so can be expressed as a polynomial formula_13 in the elementary symmetric functions formula_14 of the formula_11s. Then formula_15 defines the Todd polynomials: they form a multiplicative sequence with formula_16 as characteristic power series. If formula_1 has the formula_17 as its Chern roots, then the Todd class formula_18 which is to be computed in the cohomology ring of formula_2 (or in its completion if one wants to consider infinite-dimensional manifolds). The Todd class can be given explicitly as a formal power series in the Chern classes as follows: formula_19 where the cohomology classes formula_20 are the Chern classes of formula_1, and lie in the cohomology group formula_21. If formula_2 is finite-dimensional then most terms vanish and formula_0 is a polynomial in the Chern classes. Properties of the Todd class. The Todd class is multiplicative: formula_22 Let formula_23 be the fundamental class of the hyperplane section. From multiplicativity and the Euler exact sequence for the tangent bundle of formula_24 formula_25 one obtains formula_26 Computations of the Todd class. For any algebraic curve formula_27 the Todd class is just formula_28. Since formula_27 is projective, it can be embedded into some formula_29 and we can find formula_30 using the normal sequenceformula_31and properties of chern classes. For example, if we have a degree formula_32 plane curve in formula_33, we find the total chern class isformula_34where formula_35 is the hyperplane class in formula_33 restricted to formula_27. Hirzebruch-Riemann-Roch formula. For any coherent sheaf "F" on a smooth compact complex manifold "M", one has formula_36 where formula_37 is its holomorphic Euler characteristic, formula_38 and formula_39 its Chern character.
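The first terms of the characteristic power series quoted above can be checked symbolically (a small SymPy sketch; it verifies only the series of Q(x) itself, not the multiplicative-sequence machinery built on it):

    import sympy as sp

    x = sp.symbols('x')
    Q = x / (1 - sp.exp(-x))

    # Expansion of Q(x) = x / (1 - e^(-x)); compare with 1 + x/2 + x^2/12 - x^4/720 + ...
    print(sp.series(Q, x, 0, 6))
    # 1 + x/2 + x**2/12 - x**4/720 + O(x**6)  -- no x^3 or x^5 term, as expected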
[ { "math_id": 0, "text": "\\operatorname{td}(E)" }, { "math_id": 1, "text": "E" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": " Q(x) = \\frac{x}{1 - e^{-x}}=1+\\dfrac{x}{2}+\\sum_{i=1}^\\infty \\frac{B_{2i}}{(2i)!}x^{2i} = 1 +\\dfrac{x}{2}+\\dfrac{x^2}{12}-\\dfrac{x^4}{720}+\\cdots" }, { "math_id": 4, "text": "x^n" }, { "math_id": 5, "text": "Q(x)^{n+1}" }, { "math_id": 6, "text": "B_i" }, { "math_id": 7, "text": "i" }, { "math_id": 8, "text": "x^j" }, { "math_id": 9, "text": " \\prod_{i=1}^m Q(\\beta_i x) \\ " }, { "math_id": 10, "text": "m > j" }, { "math_id": 11, "text": "\\beta_i" }, { "math_id": 12, "text": "j" }, { "math_id": 13, "text": "\\operatorname{td}_j(p_1,\\ldots, p_j)" }, { "math_id": 14, "text": "p" }, { "math_id": 15, "text": "\\operatorname{td}_j" }, { "math_id": 16, "text": "Q" }, { "math_id": 17, "text": "\\alpha_i" }, { "math_id": 18, "text": "\\operatorname{td}(E) = \\prod Q(\\alpha_i)" }, { "math_id": 19, "text": "\\operatorname{td}(E) = 1 + \\frac{c_1}{2} + \\frac{c_1^2 +c_2}{12} + \\frac{c_1c_2}{24} + \\frac{-c_1^4 + 4 c_1^2 c_2 + c_1c_3 + 3c_2^2 - c_4}{720} + \\cdots " }, { "math_id": 20, "text": "c_i" }, { "math_id": 21, "text": "H^{2i}(X)" }, { "math_id": 22, "text": "\\operatorname{td}(E\\oplus F) = \\operatorname{td}(E)\\cdot \\operatorname{td}(F)." }, { "math_id": 23, "text": "\\xi \\in H^2({\\mathbb C} P^n)" }, { "math_id": 24, "text": " {\\mathbb C} P^n" }, { "math_id": 25, "text": " 0 \\to {\\mathcal O} \\to {\\mathcal O}(1)^{n+1} \\to T {\\mathbb C} P^n \\to 0," }, { "math_id": 26, "text": " \\operatorname{td}(T {\\mathbb C}P^n) = \\left( \\dfrac{\\xi}{1-e^{-\\xi}} \\right)^{n+1}." }, { "math_id": 27, "text": "C" }, { "math_id": 28, "text": "\\operatorname{td}(C) = 1 + c_1(T_C)" }, { "math_id": 29, "text": "\\mathbb{P}^n" }, { "math_id": 30, "text": "c_1(T_C)" }, { "math_id": 31, "text": "0 \\to T_C \\to T_\\mathbb{P}^n|_C \\to N_{C/\\mathbb{P}^n} \\to 0" }, { "math_id": 32, "text": "d" }, { "math_id": 33, "text": "\\mathbb{P}^2" }, { "math_id": 34, "text": "\\begin{align}\nc(T_C) &= \\frac{c(T_{\\mathbb{P}^2}|_C)}{c(N_{C/\\mathbb{P}^2})} \\\\\n&= \\frac{1+3[H]}{1+d[H]} \\\\\n&= (1+3[H])(1-d[H]) \\\\\n&= 1 + (3-d)[H]\n\\end{align}" }, { "math_id": 35, "text": "[H]" }, { "math_id": 36, "text": "\\chi(F)=\\int_M \\operatorname{ch}(F) \\wedge \\operatorname{td}(TM)," }, { "math_id": 37, "text": "\\chi(F)" }, { "math_id": 38, "text": "\\chi(F):= \\sum_{i=0}^{\\text{dim}_{\\mathbb{C}} M} (-1)^i \\text{dim}_{\\mathbb{C}} H^i(M,F)," }, { "math_id": 39, "text": "\\operatorname{ch}(F)" } ]
https://en.wikipedia.org/wiki?curid=1568489
1568513
Dendrite (crystal)
Crystal that develops with a typical multi-branching form A crystal dendrite is a crystal that develops with a typical multi-branching form, resembling a fractal. The name comes from the Ancient Greek word (δένδρον), which means "tree", since the crystal's structure resembles that of a tree. These crystals can be synthesised by using a supercooled pure liquid; however, they are also quite common in nature. The most common crystals in nature that exhibit dendritic growth are snowflakes and frost on windows, but many minerals and metals can also be found in dendritic structures. History. Maximum velocity principle. The first dendritic patterns were discovered in palaeontology and are often mistaken for fossils because of their appearance. The first theory for the creation of these patterns was published by Nash and Glicksman in 1974; they used a very mathematical method and derived a non-linear integro-differential equation for a classical needle growth. However they only found an inaccurate numerical solution close to the tip of the needle and they found that under a given growth condition, the tip velocity has a unique maximum value. This became known as the maximum velocity principle (MVP) but was ruled out by Glicksman and Nash themselves very quickly. In the following two years Glicksman improved the numerical methods used, but did not realise the non-linear integro-differential equation had no mathematical solutions, making his results meaningless. Marginal stability hypothesis. Four years later, in 1978, Langer and Müller-Krumbhaar proposed the marginal stability hypothesis (MSH). This hypothesis used a stability parameter σ which depended on the thermal diffusivity, the surface tension and the radius of the tip of the dendrite. They claimed a system would be unstable for small σ causing it to form dendrites. At the time however Langer and Müller-Krumbhaar were unable to obtain a stability criterion for certain growth systems, which led to the MSH theory being abandoned. Microscopic solvability condition. A decade later several groups of researchers went back to the Nash-Glicksman problem and focused on simplified versions of it. Through this they found that the problem for isotropic surface tension had no solutions. This result meant that a system with a steady needle growth solution necessarily needed to have some type of anisotropic surface tension. This breakthrough led to the microscopic solvability condition theory (MSC); however, this theory still failed since, although for isotropic surface tension there could not be a steady solution, it was experimentally shown that there were nearly steady solutions which the theory did not predict. Macroscopic continuum model. Nowadays the best understanding of dendritic crystals comes in the form of the macroscopic continuum model, which assumes that both the solid and the liquid parts of the system are continuous media and the interface is a surface. This model uses the microscopic structure of the material and uses the general understanding of nucleation to accurately predict how a dendrite will grow. Dendrite formation. Dendrite formation starts with some nucleation, i.e. the first appearance of solid growth, in the supercooled liquid. This formation will at first grow spherically until this shape is no longer stable. This instability has two causes: anisotropy in the surface energy of the solid/liquid interface and the attachment kinetics of particles to crystallographic planes when they have formed. 
On the solid-liquid interface, we can define a surface energy, formula_0, which is the excess energy at the liquid-solid interface to accommodate the structural changes at the interface. For a spherical interface, the Gibbs–Thomson equation then gives a melting point depression compared to a flat interface formula_1, which has the relation formula_2 where formula_3 is the radius of the sphere. This curvature undercooling, the effective lowering of the melting point at the interface, sustains the spherical shape for small radii. However, anisotropy in the surface energy implies that the interface will deform to find the energetically most favourable shape. For cubic symmetry in 2D we can express this anisotropy int the surface energy as formula_4 This gives rise to a surface stiffness formula_5 where we note that this quantity is positive for all angles formula_6 when formula_7. In this case we speak of "weak anisotropy". For larger values of formula_8, the "strong anisotropy" causes the surface stiffness to be negative for some formula_6. This means that these orientations cannot appear, leading to so-called 'faceted' crystals, i.e. the interface would be a crystallographic plane inhibiting growth along this part of the interface due to attachment kinetics. Wulff construction. For both above and below the critical anisotropy the Wulff construction provides a method to determine the shape of the crystal. In principle, we can understand the deformation as an attempt by the system to minimise the area with the highest effective surface energy. Growth velocity. Taking into account attachment kinetics, we can derive that both for spherical growth and for flat surface growth, the growth velocity decreases with time by formula_9. We do however find stable parabolic growth, where the length grows with formula_10 and the width with formula_11. Therefore, growth mainly takes place at the tip the parabolic interface, which draws out longer and longer. Eventually, the sides of this parabolic tip will also exhibit instabilities giving a dendrite its characteristic shape. Preferred growth direction. When dendrites start to grow with tips in different directions, they display their underlying crystal structure, as this structure causes the anisotropy in surface energy. For instance, a dendrite growing with BCC crystal structure will have a preferred growth direction along the formula_12 directions. The table below gives an overview of preferred crystallographic directions for dendritic growth. Note that when the strain energy minimisation effect dominates over surface energy minimisation, one might find a different growth direction, such as with Cr, which has as a preferred growth direction formula_13, even though it is a BCC latice. Metal dendrites. For metals the process of forming dendrites is very similar to other crystals, but the kinetics of attachment play a much smaller role. This is because the interface is atomically rough; because of the small difference in structure between the liquid and the solid state, the transition from liquid to solid is somewhat gradual and one observes some interface thickness. Consequently, the surface energy will become nearly isotropic. For this reason, one would not expect faceted crystals as found for atomically smooth interfaces observed in crystals of more complex molecules. Mineralogy and paleontology. In paleontology, dendritic mineral crystal forms are often mistaken for fossils. 
These pseudofossils form as naturally occurring fissures in the rock are filled by percolating mineral solutions. They form when water rich in manganese and iron flows along fractures and bedding planes between layers of limestone and other rock types, depositing dendritic crystals as the solution flows through. A variety of manganese oxides and hydroxides are involved, including: A three-dimensional form of dendrite develops in fissures in quartz, forming moss agate NASA microgravity experiment. The Isothermal Dendritic Growth Experiment (IDGE) was a materials science solidification experiment that researchers used on Space Shuttle missions to investigate dendritic growth in an environment where the effect of gravity (convection in the liquid) could be excluded. The experimental results indicated that at lower supercooling (up to 1.3 K), these convective effects are indeed significant. Compared to the growth in microgravity, the tip velocity during dendritic growth under normal gravity was found to be up to several times greater. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
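The anisotropy and stiffness expressions quoted in the section on dendrite formation can be checked symbolically. The sketch below assumes the standard identification of the surface stiffness with γ(θ) + γ''(θ), which the text uses implicitly:

    import sympy as sp

    theta, eps, g0 = sp.symbols('theta epsilon gamma_0', positive=True)
    gamma = g0 * (1 + eps * sp.cos(4 * theta))      # anisotropic surface energy, cubic symmetry in 2D

    stiffness = sp.simplify(gamma + sp.diff(gamma, theta, 2))
    print(stiffness)        # equivalent to gamma_0*(1 - 15*epsilon*cos(4*theta))

    # The stiffness first becomes negative (at theta = 0) when 15*epsilon = 1,
    # i.e. at the critical anisotropy epsilon = 1/15 quoted above.
    print(sp.solve(sp.Eq(stiffness.subs(theta, 0), 0), eps))    # [1/15]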
[ { "math_id": 0, "text": "\\gamma_{sl}" }, { "math_id": 1, "text": "\\Delta T_m" }, { "math_id": 2, "text": "\\Delta T_m \\propto \\frac{\\gamma_{sl}}{r}" }, { "math_id": 3, "text": "r" }, { "math_id": 4, "text": "\\gamma_{sl}(\\theta) = \\gamma_{sl}^0[1 + \\epsilon \\cos(4\\theta)]." }, { "math_id": 5, "text": "\\gamma_{sl}^0[1 - 15\\epsilon \\cos(4\\theta)]" }, { "math_id": 6, "text": "\\theta" }, { "math_id": 7, "text": "\\epsilon < 1/15" }, { "math_id": 8, "text": "\\epsilon" }, { "math_id": 9, "text": "t^{-1/2}" }, { "math_id": 10, "text": "t" }, { "math_id": 11, "text": "\\sqrt{t}" }, { "math_id": 12, "text": "\\langle 100 \\rangle" }, { "math_id": 13, "text": "\\langle 111 \\rangle" } ]
https://en.wikipedia.org/wiki?curid=1568513
15685517
Homothetic center
Point from which two similar geometric figures can be scaled to each other In geometry, a homothetic center (also called a center of similarity or a center of similitude) is a point from which at least two geometrically similar figures can be seen as a dilation or contraction of one another. If the center is external, the two figures are directly similar to one another; their angles have the same rotational sense. If the center is internal, the two figures are scaled mirror images of one another; their angles have the opposite sense. General polygons. If two geometric figures possess a homothetic center, they are similar to one another; in other words they must have the same angles at corresponding points and differ only in their relative scaling. The homothetic center and the two figures need not lie in the same plane; they can be related by a projection from the homothetic center. Homothetic centers may be external or internal. If the center is internal, the two geometric figures are scaled mirror images of one another; in technical language, they have opposite chirality. A clockwise angle in one figure would correspond to a counterclockwise angle in the other. Conversely, if the center is external, the two figures are directly similar to one another; their angles have the same sense. Circles. Circles are geometrically similar to one another and mirror symmetric. Hence, a pair of circles has both types of homothetic centers, internal and external, unless the centers are equal or the radii are equal; these exceptional cases are treated after general position. These two homothetic centers lie on the line joining the centers of the two given circles, which is called the "line of centers" (Figure 3). Circles with radius zero can also be included (see exceptional cases), and negative radius can also be used, switching external and internal. Computing homothetic centers. For a given pair of circles, the internal and external homothetic centers may be found in various ways. In analytic geometry, the internal homothetic center is the weighted average of the centers of the circles, weighted by the opposite circle's radius – distance from center of circle to inner center is proportional to that radius, so weighting is proportional to the "opposite" radius. Denoting the centers of the circles "C"1, "C"2 by ("x"1, "y"1), ("x"2, "y"2) and their radii by "r"1, "r"2 and denoting the center by ("x"0, "y"0), this is: formula_0 The external center can be computed by the same equation, but considering one of the radii as negative; either one yields the same equation, which is: formula_1 More generally, taking both radii with the same sign (both positive or both negative) yields the inner center, while taking the radii with opposite signs (one positive and the other negative) yields the outer center. Note that the equation for the inner center is valid for any values (unless both radii zero or one is the negative of the other), but the equation for the external center requires that the radii be different, otherwise it involves division by zero. In synthetic geometry, two parallel diameters are drawn, one for each circle; these make the same angle α with the line of centers. The lines "A"1"A"2, "B"1"B"2 drawn through corresponding endpoints of those radii, which are homologous points, intersect each other and the line of centers at the "external" homothetic center. 
Conversely, the lines "A"1"B"2, "B"1"A"2 drawn through one endpoint and the opposite endpoint of its counterpart intersects each other and the line of centers at the "internal" homothetic center. As a limiting case of this construction, a line tangent to both circles (a bitangent line) passes through one of the homothetic centers, as it forms right angles with both the corresponding diameters, which are thus parallel; see tangent lines to two circles for details. If the circles fall on opposite sides of the line, it passes through the internal homothetic center, as in "A"2"B"1 in the figure above. Conversely, if the circles fall on the same side of the line, it passes through the external homothetic center (not pictured). Special cases. If the circles have the same radius (but different centers), they have no external homothetic center in the affine plane: in analytic geometry this results in division by zero, while in synthetic geometry the lines "A"1"A"2, "B"1"B"2 are parallel to the line of centers (both for secant lines and the bitangent lines) and thus have no intersection. An external center can be defined in the projective plane to be the point at infinity corresponding to the slope of this line. This is also the limit of the external center if the centers of the circles are fixed and the radii are varied until they are equal. If the circles have the same center but different radii, both the external and internal coincide with the common center of the circles. This can be seen from the analytic formula, and is also the limit of the two homothetic centers as the centers of the two circles are varied until they coincide, holding the radii equal. There is no line of centers, however, and the synthetic construction fails as the two parallel lines coincide. If one radius is zero but the other is non-zero (a point and a circle), both the external and internal center coincide with the point (center of the radius zero circle). If the two circles are identical (same center, same radius), the internal center is their common center, but there is no well-defined external center – properly, the function from the parameter space of two circles in the plane to the external center has a non-removable discontinuity on the locus of identical circles. In the limit of two circles with the same radius but distinct centers moving to having the same center, the external center is the point at infinity corresponding to the slope of the line of centers, which can be anything, so no limit exists for all possible pairs of such circles. Conversely, if both radii are zero (two points) but the points are distinct, the external center can be defined as the point at infinity corresponding to the slope of the line of centers, but there is no well-defined internal center. Homologous and antihomologous points. In general, a line passing through a homothetic center intersects each of its circles in two places. Of these four points, two are said to be "homologous" if radii drawn to them make the same angle with the line connecting the centers; for example, the points Q, Q' in Figure 4. Points which are collinear with respect to the homothetic center but are "not" homologous are said to be "antihomologous"; for example, points Q, P' in Figure 4. Pairs of antihomologous points lie on a circle. When two rays from the same homothetic center intersect the circles, each set of antihomologous points lie on a circle. Consider triangles △"EQS", △"EQ'S' " (Figure 4). 
They are similar because formula_2 since E is the homothetic center. From that similarity, it follows that formula_3 By the inscribed angle theorem, formula_4 Because ∠"QSR' " is supplementary to ∠"ESQ", formula_5 In the quadrilateral QSR'P', formula_6 which means it can be inscribed in a circle. From the secant theorem, it follows that formula_7 In the same way, it can be shown that PRS'Q' can be inscribed in a circle and formula_8 The proof is similar for the internal homothetic center I: formula_9 Segment is seen in the same angle from P and S', which means R, P, S', Q' lie on a circle. Then from the intersecting chords theorem, formula_10 Similarly QSP'R' can be inscribed in a circle and formula_11 Relation with the radical axis. Two circles have a radical axis, which is the line of points from which tangents to both circles have equal length. More generally, every point on the radical axis has the property that its powers relative to the circles are equal. The radical axis is always perpendicular to the line of centers, and if two circles intersect, their radical axis is the line joining their points of intersection. For three circles, three radical axes can be defined, one for each pair of circles ("C"1/"C"2, "C"1/"C"3, "C"2/"C"3); remarkably, these three radical axes intersect at a single point, the radical center. Tangents drawn from the radical center to the three circles would all have equal length. Any two pairs of antihomologous points can be used to find a point on the radical axis. Consider the two rays emanating from the external homothetic center E in Figure 4. These rays intersect the two given circles (green and blue in Figure 4) in two pairs of antihomologous points, Q, P' for the first ray, and S, R' for the second ray. These four points lie on a single circle, that intersects both given circles. By definition, the line QS is the radical axis of the new circle with the green given circle, whereas the line P'R' is the radical axis of the new circle with the blue given circle. These two lines intersect at the point G, which is the radical center of the new circle and the two given circles. Therefore, the point G also lies on the radical axis of the two given circles. Tangent circles and antihomologous points. For each pair of antihomologous points of two circles exists a third circle which is tangent to the given ones and touches them at the antihomologous points. The opposite is also true — every circle which is tangent to two other circles touches them at a pair of antihomologous points. Let our two circles have centers "O"1, "O"2 (Figure 5). E is their external homothetic center. We construct an arbitrary ray from E which intersects the two circles in P, Q, P' and Q'. Extend "O"1"Q", "O"2"P' " until they intersect in "T"1. It is easily proven that triangles △"O"1"PQ", △"O"2"P'Q' " are similar because of the homothety. They are also isosceles because formula_12 (radius), therefore formula_13 Thus △"T"1"P'Q" is also isosceles and a circle can be constructed with center "T"1 and radius formula_14 This circle is tangent to the two given circles in points Q, P'. The proof for the other pair of antihomologous points (P, Q'), as well as in the case of the internal homothetic center is analogous. If we construct the tangent circles for every possible pair of antihomologous points we get two families of circles - one for each homothetic center. 
The family of circles of the external homothetic center is such that every tangent circle either contains "both" given circles or none (Figure 6). On the other hand, the circles from the other family always contain only one of the given circles (Figure 7). All circles from a tangent family have a common radical center and it coincides with the homothetic center. To show this, consider two rays from the homothetic center, intersecting the given circles (Figure 8). Two tangent circles "T"1, "T"2 exist which touch the given circles at the antihomologous points. As we've already shown these points lie on a circle C and thus the two rays are radical axes for "C"/"T"1, "C"/"T"2. Then the intersecting point of the two radical axes must also belong to the radical axis of "T"1/"T"2. This point of intersection is the homothetic center E. If the two tangent circle touch collinear pairs of antihomologous point — as in Figure 5 — then because of the homothety formula_15 Thus the powers of E with respect to the two tangent circles are equal which means that E belongs to the radical axis. Homothetic centers of three circles. Any pair of circles has two centers of similarity, therefore, three circles would have six centers of similarity, two for each distinct pair of given circles. Remarkably, these six points lie on four lines, three points on each line. Here is one way to show this. Consider the "plane" of the three circles (Figure 9). Offset each center point perpendicularly to the plane by a distance equal to the corresponding radius. The centers can be offset to either side of the plane. The three offset points define a single plane. In that plane we build three lines through each pair of points. The lines pierce the plane of circles in the points HAB, HBC, HAC. Since the locus of points which are common to two distinct and non-parallel planes is a line then necessarily these three points lie on such line. From the similarity of triangles △"HABAA'," △"HABBB' " we see that formula_16 (where rA, rB are the radii of the circles) and thus HAB is in fact the homothetic center of the corresponding two circles. We can do the same for HBC and HAC. Repeating the above procedure for different combinations of homothetic centers (in our method this is determined by the side to which we offset the centers of the circles) would yield a total of four lines — three homothetic centers on each line (Figure 10). Here is yet another way to prove this. Let "C"1, "C"2 be a conjugate pair of circles tangent to "all" three given circles (Figure 11). By conjugate we imply that both tangent circles belong to the same family with respect to any one of the given pairs of circles. As we've already seen, the radical axis of any two tangent circles from the same family passes through the homothetic center of the two given circles. Since the tangent circles are common for all three pairs of given circles then their homothetic centers all belong to the radical axis of "C"1, "C"2 e.g., they lie on a single line. This property is exploited in Joseph Diaz Gergonne's general solution to Apollonius' problem. Given the three circles, the homothetic centers can be found and thus the radical axis of a pair of solution circles. Of course, there are infinitely many circles with the same radical axis, so additional work is done to find out exactly which two circles are the solution. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. 
&lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
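The analytic formulas for the two homothetic centers of a pair of circles translate directly into code (a Python sketch; the example circles are arbitrary):

    def homothetic_centers(c1, r1, c2, r2):
        """Internal and external homothetic centers of two circles with centers c1, c2 and radii r1, r2.
        The external center is returned as None when the radii are equal (it lies at infinity)."""
        (x1, y1), (x2, y2) = c1, c2
        internal = ((r2 * x1 + r1 * x2) / (r1 + r2), (r2 * y1 + r1 * y2) / (r1 + r2))
        external = None
        if r1 != r2:
            external = ((-r2 * x1 + r1 * x2) / (r1 - r2), (-r2 * y1 + r1 * y2) / (r1 - r2))
        return internal, external

    # Circles of radius 1 and 2 centered at (0, 0) and (6, 0):
    print(homothetic_centers((0, 0), 1, (6, 0), 2))
    # ((2.0, 0.0), (-6.0, 0.0))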
[ { "math_id": 0, "text": "(x_0, y_0) = \\frac{r_2}{r_1 + r_2}(x_1, y_1) + \\frac{r_1}{r_1 + r_2}(x_2, y_2)." }, { "math_id": 1, "text": "(x_e, y_e) = \\frac{-r_2}{r_1 - r_2}(x_1, y_1) + \\frac{r_1}{r_1 - r_2}(x_2, y_2)." }, { "math_id": 2, "text": "\\angle QES = \\angle Q'\\!ES', \\quad \\frac{\\overline{EQ}}{\\overline{EQ'}} = \\frac{\\overline{ES}}{\\overline{ES'}}," }, { "math_id": 3, "text": "\\angle ESQ = \\angle ES'\\!Q' = \\alpha." }, { "math_id": 4, "text": "\\angle EP'\\!R' = \\angle ES'\\!Q'." }, { "math_id": 5, "text": "\\angle QSR' = 180^\\circ - \\alpha." }, { "math_id": 6, "text": "\\angle QSR' + \\angle QP'\\!R' = 180^\\circ - \\alpha + \\alpha = 180^\\circ," }, { "math_id": 7, "text": "\\overline{EQ} \\cdot \\overline{EP'} = \\overline{ES} \\cdot \\overline{ER'}." }, { "math_id": 8, "text": "\\overline{EP} \\cdot \\overline{EQ'} = \\overline{ER} \\cdot \\overline{ES'}." }, { "math_id": 9, "text": "\\begin{align}\n & \\triangle PIR \\cong \\triangle P'\\!IR' \\\\\n & \\implies \\angle RPI = \\angle IP'\\!R' = \\alpha \\\\\n & \\implies \\angle RS'\\!Q' = \\angle PP'\\!R' = \\alpha \\quad \\text{(inscribed angle theorem)}\n\\end{align}" }, { "math_id": 10, "text": "\\overline{IP} \\cdot \\overline{IQ'} = \\overline{IR} \\cdot \\overline{IS'}." }, { "math_id": 11, "text": "\\overline{IQ} \\cdot \\overline{IP'} = \\overline{IS} \\cdot \\overline{IR'}." }, { "math_id": 12, "text": "\\overline{O_1P} = \\overline{O_1Q}" }, { "math_id": 13, "text": "\\angle O_1PQ = \\angle O_1QP = \\angle O_2P'\\!Q' = \\angle O_2Q'\\!P' = \\angle T_1QP' = \\angle T_1P'\\!Q." }, { "math_id": 14, "text": "\\overline{T_1P'} = \\overline{T_1Q}." }, { "math_id": 15, "text": "\\frac{\\overline{EP}}{\\overline{EP'}} = \\frac{\\overline{EQ}}{\\overline{EQ'}}; \\quad \\overline{EP} \\cdot \\overline{EQ'} = \\overline{EQ} \\cdot \\overline{EP'}." }, { "math_id": 16, "text": "\\frac{\\overline{H\\!_{AB}B}}{\\overline{H\\!_{AB}A}} = \\frac{r_B}{r_A}" } ]
https://en.wikipedia.org/wiki?curid=15685517
1568608
Half-integer
Rational number equal to an integer plus 1/2 In mathematics, a half-integer is a number of the form formula_0 where formula_1 is an integer. For example, formula_2 are all "half-integers". The name "half-integer" is perhaps misleading, as the set may be misunderstood to include numbers such as 1 (being half the integer 2). A name such as "integer-plus-half" may be more accurate, but while not literally true, "half integer" is the conventional term. Half-integers occur frequently enough in mathematics and in quantum mechanics that a distinct term is convenient. Note that halving an integer does not always produce a half-integer; this is only true for odd integers. For this reason, half-integers are also sometimes called half-odd-integers. Half-integers are a subset of the dyadic rationals (numbers produced by dividing an integer by a power of two). Notation and algebraic structure. The set of all half-integers is often denoted formula_3 The integers and half-integers together form a group under the addition operation, which may be denoted formula_4 However, these numbers do not form a ring because the product of two half-integers is not a half-integer; e.g. formula_5 The smallest ring containing them is formula_6, the ring of dyadic rationals. Uses. Sphere packing. The densest lattice packing of unit spheres in four dimensions (called the "D"4 lattice) places a sphere at every point whose coordinates are either all integers or all half-integers. This packing is closely related to the Hurwitz integers: quaternions whose real coefficients are either all integers or all half-integers. Physics. In physics, the Pauli exclusion principle results from definition of fermions as particles which have spins that are half-integers. The energy levels of the quantum harmonic oscillator occur at half-integers and thus its lowest energy is not zero. Sphere volume. Although the factorial function is defined only for integer arguments, it can be extended to fractional arguments using the gamma function. The gamma function for half-integers is an important part of the formula for the volume of an n-dimensional ball of radius formula_10, formula_11 The values of the gamma function on half-integers are integer multiples of the square root of pi: formula_12 where formula_13 denotes the double factorial. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
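Both the half-integer values of the gamma function and the ball-volume formula are easy to check numerically (a Python sketch; the dimension and radius chosen are arbitrary):

    import math

    def double_factorial(n):
        return math.prod(range(n, 0, -2)) if n > 0 else 1

    # Gamma(1/2 + n) = (2n-1)!! / 2^n * sqrt(pi)
    for n in range(5):
        lhs = math.gamma(0.5 + n)
        rhs = double_factorial(2 * n - 1) / 2 ** n * math.sqrt(math.pi)
        print(n, math.isclose(lhs, rhs))

    # Volume of the n-ball: V_n(R) = pi^(n/2) / Gamma(n/2 + 1) * R^n; for n = 3, R = 1 this is 4*pi/3
    n, R = 3, 1.0
    print(math.pi ** (n / 2) / math.gamma(n / 2 + 1) * R ** n, 4 * math.pi / 3)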
[ { "math_id": 0, "text": "n + \\tfrac{1}{2}," }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "4\\tfrac12,\\quad 7/2,\\quad -\\tfrac{13}{2},\\quad 8.5" }, { "math_id": 3, "text": "\\mathbb Z + \\tfrac{1}{2} \\quad = \\quad \\left( \\tfrac{1}{2} \\mathbb Z \\right) \\smallsetminus \\mathbb Z ~." }, { "math_id": 4, "text": "\\tfrac{1}{2} \\mathbb Z ~." }, { "math_id": 5, "text": "~\\tfrac{1}{2} \\times \\tfrac{1}{2} ~=~ \\tfrac{1}{4} ~ \\notin ~ \\tfrac{1}{2} \\mathbb Z ~." }, { "math_id": 6, "text": "\\Z\\left[\\tfrac12\\right]" }, { "math_id": 7, "text": "n=0" }, { "math_id": 8, "text": "f:x\\to x+0.5" }, { "math_id": 9, "text": "x" }, { "math_id": 10, "text": "R" }, { "math_id": 11, "text": "V_n(R) = \\frac{\\pi^{n/2}}{\\Gamma(\\frac{n}{2} + 1)}R^n~." }, { "math_id": 12, "text": "\\Gamma\\left(\\tfrac{1}{2} + n\\right) ~=~ \\frac{\\,(2n-1)!!\\,}{2^n}\\, \\sqrt{\\pi\\,} ~=~ \\frac{(2n)!}{\\,4^n \\, n!\\,} \\sqrt{\\pi\\,} ~" }, { "math_id": 13, "text": "n!!" } ]
https://en.wikipedia.org/wiki?curid=1568608
15686544
Eta and eta prime mesons
The eta (η) and eta prime meson (η′) are isosinglet mesons made of a mixture of up, down and strange quarks and their antiquarks. The charmed eta meson (ηc) and bottom eta meson (ηb) are similar forms of quarkonium; they have the same spin and parity as the (light) η, but are made of charm quarks and bottom quarks respectively. The top quark is too heavy to form a similar meson, due to its very fast decay. General. The eta was discovered in pion–nucleon collisions at the Bevatron in 1961 by Aihud Pevsner et al. at a time when the proposal of the Eightfold Way was leading to predictions and discoveries of new particles from symmetry considerations. The difference between the mass of the η and that of the η′ is larger than the quark model can naturally explain. This "η–η′ puzzle" can be resolved by the 't Hooft instanton mechanism, whose realization is also known as the Witten–Veneziano mechanism. Specifically, in QCD, the higher mass of the η′ is very significant, since it is associated with the axial UA(1) classical symmetry, which is "explicitly broken" through the chiral anomaly upon quantization; thus, although the "protected" η mass is small, the η′ mass is not. Quark composition. The η particles belong to the "pseudo-scalar" nonet of mesons which have spin 0 and negative parity, and the η and η′ have zero total isospin, I, and zero strangeness and hypercharge. Each quark which appears in an η particle is accompanied by its antiquark, hence all the main quantum numbers are zero, and the particle overall is "flavourless". The basic SU(3) symmetry theory of quarks for the three lightest quarks, which only takes into account the strong force, predicts corresponding particles formula_0 and formula_1 The subscripts are labels that refer to the fact that η1 belongs to a singlet (which is fully antisymmetrical) and η8 is part of an octet. However, the electroweak interaction – which can transform one flavour of quark into another – causes a small but significant amount of "mixing" of the eigenstates (with mixing angle θP), so that the actual quark composition is a linear combination of these formulae. That is: formula_2 The unsubscripted name η refers to the real particle which is actually observed and which is close to the η8. The η′ is the observed particle close to η1. The η and η′ particles are closely related to the better-known neutral pion π0, where formula_3 In fact, π0, η1, and η8 are three mutually orthogonal, linear combinations of the u, d, and s quark–antiquark pairs; they are at the centre of the pseudo-scalar nonet of mesons with all the main quantum numbers equal to zero. η′ meson. The η′ meson is a flavor SU(3) singlet, unlike the η. It is a different superposition of the same quarks as the eta meson (η), as described above, and it has a higher mass, a different decay state, and a shorter lifetime. Fundamentally, it results from the direct sum decomposition of the approximate SU(3) flavor symmetry among the 3 lightest quarks, formula_4, where 1 corresponds to η1, before s–light quark mixing yields the η′. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
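The mixing relation can be made concrete with a small numerical sketch in Python. The mixing angle used below is purely illustrative (the text does not fix its value), and the states are written in the (u, d, s quark–antiquark pair) flavour basis:

    import numpy as np

    eta1 = np.array([1, 1, 1]) / np.sqrt(3)     # singlet combination
    eta8 = np.array([1, 1, -2]) / np.sqrt(6)    # octet combination

    theta_p = np.radians(-11.5)                 # illustrative pseudoscalar mixing angle
    c, s = np.cos(theta_p), np.sin(theta_p)

    # (eta, eta') = rotation(theta_p) applied to (eta8, eta1), as in the matrix relation above
    eta       = c * eta8 - s * eta1
    eta_prime = s * eta8 + c * eta1

    print("eta      :", np.round(eta, 3))         # mostly the octet combination
    print("eta_prime:", np.round(eta_prime, 3))   # mostly the singlet combination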
[ { "math_id": 0, "text": " \\mathrm{\\eta}_1 = \\frac{1}{\\sqrt 3} \\left( \\mathrm{ u\\bar{u} + d\\bar{d} + s\\bar{s} } \\right) ~," }, { "math_id": 1, "text": "\\mathrm{\\eta}_8 = \\frac{1}{\\sqrt 6} \\left( \\mathrm{ u\\bar{u} + d\\bar{d} - 2s\\bar{s} } \\right) ~." }, { "math_id": 2, "text": "\\left(\\begin{array}{cc}\n \\cos\\theta_\\mathrm{P} & - \\sin\\theta_\\mathrm{P} \\\\\n \\sin\\theta_\\mathrm{P} & ~~\\cos\\theta_\\mathrm{P}\n \\end{array}\\right) \\left(\\begin{array}{c} \\mathrm{\\eta}_8 \\\\\n \\mathrm{\\eta}_1 \\end{array}\\right) =\n \\left(\\begin{array}{c} \\mathrm{\\eta} \\\\ \\mathrm{\\eta'} \\end{array}\\right) ~.\n" }, { "math_id": 3, "text": "\\mathrm{\\pi}^0 = \\frac{1}{\\sqrt 2} \\left( \\mathrm{ u\\bar{u} - d\\bar{d} } \\right) ~." }, { "math_id": 4, "text": "\\mathbb{3} \\times \\bar{\\mathbb{3}} = \\mathbb{1} + \\mathbb{8}" } ]
https://en.wikipedia.org/wiki?curid=15686544
15686668
Sokolov–Ternov effect
Physical phenomenon of spin-polarization The Sokolov–Ternov effect is the effect of self-polarization of relativistic electrons or positrons moving at high energy in a magnetic field. The self-polarization occurs through the emission of spin-flip synchrotron radiation. The effect was predicted by Igor Ternov and the prediction rigorously justified by Arseny Sokolov using exact solutions to the Dirac equation. Theory. An electron in a magnetic field can have its spin oriented in the same ("spin up") or in the opposite ("spin down") direction with respect to the direction of the magnetic field (which is assumed to be oriented "up"). The "spin down" state has a higher energy than "spin up" state. The polarization arises due to the fact that the rate of transition through emission of synchrotron radiation to the "spin down" state is slightly greater than the probability of transition to the "spin up" state. As a result, an initially unpolarized beam of high-energy electrons circulating in a storage ring after sufficiently long time will have spins oriented in the direction opposite to the magnetic field. Saturation is not complete and is explicitly described by the formula formula_0 where formula_1 is the limiting degree of polarization (92.4%), and formula_2 is the relaxation time: formula_3 Here formula_4 is as before, formula_5 and formula_6 are the mass and charge of the electron, formula_7 is the vacuum permittivity, formula_8 is the speed of light, formula_9 is the Schwinger field, formula_10 is the magnetic field, and formula_11 is the electron energy. The limiting degree of polarization formula_4 is less than one due to the existence of spin–orbital energy exchange, which allows transitions to the "spin up" state (with probability 25.25 times less than to the "spin down" state). Typical relaxation time is on the order of minutes and hours. Thus producing a highly polarized beam requires a long enough time and the use of storage rings. The self-polarization effect for positrons is similar, with the only difference that positrons will tend to have spins oriented in the direction parallel to the direction of the magnetic field. Experimental observation. The Sokolov–Ternov effect was experimentally observed in the USSR, France, Germany, United States, Japan, and Switzerland in storage rings with electrons of energy 1–50 GeV. Applications and generalization. The effect of radiative polarization provides a unique capability for creating polarized beams of high-energy electrons and positrons that can be used for various experiments. The effect also has been related to the Unruh effect which, up to now, under experimentally achievable conditions is too small to be observed. The equilibrium polarization given by the Sokolov and Ternov has corrections when the orbit is not perfectly planar. The formula has been generalized by Derbenev and Kondratenko and others. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
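The relaxation-time formula can be evaluated directly (a Python sketch; the 5 GeV beam energy and 1 T field are illustrative values, not taken from the text, and the result is only meant to show the order of magnitude):

    import math

    eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
    hbar = 1.054571817e-34      # reduced Planck constant, J s
    m_e  = 9.1093837015e-31     # electron mass, kg
    c    = 2.99792458e8         # speed of light, m/s
    e    = 1.602176634e-19      # elementary charge, C
    A    = 8 * math.sqrt(3) / 15            # limiting polarization, ~0.924

    H0 = 4.414e13 * 1e-4        # Schwinger field, converted from gauss to tesla
    E  = 5e9 * e                # beam energy: 5 GeV in joules (illustrative)
    H  = 1.0                    # magnetic field: 1 T (illustrative)

    tau = A * (4 * math.pi * eps0 * hbar**2) / (m_e * c * e**2) * (m_e * c**2 / E)**2 * (H0 / H)**3
    print(f"tau ~ {tau:.0f} s (~{tau / 60:.1f} min)")   # a few minutes for these values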
[ { "math_id": 0, "text": "\\xi(t) = A \\left(1 - e^{-\\frac{t}{\\tau}}\\right)," }, { "math_id": 1, "text": "A = 8 \\sqrt{3}/15 \\approx 0.924" }, { "math_id": 2, "text": "\\tau" }, { "math_id": 3, "text": "\\tau = A \\frac{4 \\pi \\varepsilon_0 \\hbar^2}{mce^2} \\left(\\frac{mc^2}{E}\\right)^2 \\left(\\frac{H_0}{H}\\right)^3." }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "m" }, { "math_id": 6, "text": "e" }, { "math_id": 7, "text": "\\varepsilon_0" }, { "math_id": 8, "text": "c" }, { "math_id": 9, "text": "H_0 \\approx 4.414 \\times 10^{13}~\\text{gauss}" }, { "math_id": 10, "text": "H" }, { "math_id": 11, "text": "E" } ]
https://en.wikipedia.org/wiki?curid=15686668
15689191
Log-linear model
Mathematical model A log-linear model is a mathematical model that takes the form of a function whose logarithm equals a linear combination of the parameters of the model, which makes it possible to apply (possibly multivariate) linear regression. That is, it has the general form formula_0, in which the "fi"("X") are quantities that are functions of the variable "X", in general a vector of values, while "c" and the "wi" stand for the model parameters. The term may specifically be used for log-linear analysis (the statistical analysis of contingency tables) and for Poisson regression (a generalized linear model for contingency-table counts). The specific applications of log-linear models are where the output quantity lies in the range 0 to ∞, for values of the independent variables "X", or more immediately, the transformed quantities "fi"("X") in the range −∞ to +∞. This may be contrasted to logistic models, similar to the logistic function, for which the output quantity lies in the range 0 to 1. Thus the contexts where these models are useful or realistic often depend on the range of the values being modelled.
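Because taking logarithms turns the model into a linear one, it can be fitted with ordinary least squares. A minimal Python sketch (the synthetic data, basis functions and noise level are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)

    # Data generated from y = exp(c + w1*f1(x) + w2*f2(x)) with f1(x) = x and f2(x) = x**2
    x = rng.uniform(0, 2, 200)
    y = np.exp(0.5 + 1.2 * x - 0.4 * x**2) * rng.lognormal(0.0, 0.05, x.size)

    # Taking logs reduces the problem to multivariate linear regression
    design = np.column_stack([np.ones_like(x), x, x**2])
    coef, *_ = np.linalg.lstsq(design, np.log(y), rcond=None)
    print(coef)   # approximately [0.5, 1.2, -0.4]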
[ { "math_id": 0, "text": "\\exp \\left(c + \\sum_{i} w_i f_i(X) \\right)" } ]
https://en.wikipedia.org/wiki?curid=15689191
1569065
Unitarian trick
Device in the representation theory of Lie groups In mathematics, the unitarian trick (or unitary trick) is a device in the representation theory of Lie groups, introduced by Adolf Hurwitz (1897) for the special linear group and by Hermann Weyl for general semisimple groups. It applies to show that the representation theory of some complex Lie group "G" is in a qualitative way controlled by that of some compact real Lie group "K," and the latter representation theory is easier. An important example is that in which "G" is the complex general linear group GL"n"(C), and "K" the unitary group U("n") acting on vectors of the same size. From the fact that the representations of "K" are completely reducible, the same is concluded for the complex-analytic representations of "G", at least in finite dimensions. The relationship between "G" and "K" that drives this connection is traditionally expressed in the terms that the Lie algebra of "K" is a real form of that of "G". In the theory of algebraic groups, the relationship can also be put that "K" is a dense subset of "G", for the Zariski topology. The trick works for reductive Lie groups "G", of which an important case are semisimple Lie groups. Formulations. The "trick" is stated in a number of ways in contemporary mathematics. One such formulation is for "G" a reductive group over the complex numbers. Then "G"an, the complex points of "G" considered as a Lie group, has a compact subgroup "K" that is Zariski-dense. For the case of the special linear group, this result was proved for its special unitary subgroup by Issai Schur (1924, presaged by earlier work). The special linear group is a complex semisimple Lie group. For any such group "G" and maximal compact subgroup "K", and "V" a complex vector space of finite dimension which is a "G"-module, its "G"-submodules and "K"-submodules are the same. In the "Encyclopedia of Mathematics", the formulation is The classical compact Lie groups ... have the same complex linear representations and the same invariant subspaces in tensor spaces as their complex envelopes [...]. Therefore, results of the theory of linear representations obtained for the classical complex Lie groups can be carried over to the corresponding compact groups and vice versa. In terms of Tannakian formalism, Claude Chevalley interpreted Tannaka duality starting from a compact Lie group "K" as constructing the "complex envelope" "G" as the dual reductive algebraic group "Tn(K)" over the complex numbers. Veeravalli S. Varadarajan wrote of the "unitarian trick" as "the canonical correspondence between compact and complex semisimple complex groups discovered by Weyl", noting the "closely related duality theories of Chevalley and Tannaka", and later developments that followed on quantum groups. History. Adolf Hurwitz had shown how integration over a compact Lie group could be used to construct invariants, in the cases of unitary groups and compact orthogonal groups. Issai Schur in 1924 showed that this technique can be applied to show complete reducibility of representations for such groups via the construction of an invariant inner product. Weyl extended Schur's method to complex semisimple Lie algebras by showing they had a compact real form. Weyl's theorem. The complete reducibility of finite-dimensional linear representations of compact groups, or connected semisimple Lie groups and complex semisimple Lie algebras goes sometimes under the name of "Weyl's theorem". 
A related result, that the universal cover of a compact semisimple Lie group is also compact, also goes by the same name. It was proved by Weyl a few years before "universal cover" had a formal definition. Explicit formulas. Let formula_0 be a complex representation of a compact Lie group formula_1. Define formula_2, integrating over formula_1 with respect to the Haar measure. Since formula_3 is a positive matrix, there exists a square root formula_4 such that formula_5. For each formula_6, the matrix formula_7 is unitary. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
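The averaging construction in the Explicit formulas section can be checked numerically. The following is a minimal sketch, not part of the article, that uses a finite group as a stand-in for a compact group so the Haar integral reduces to a plain average; the particular two-dimensional representation of the two-element group is an illustrative choice.

```python
# Minimal numerical sketch of the unitarization ("averaging") trick.
# For a finite group the Haar integral is just an average over the elements.
import numpy as np
from scipy.linalg import sqrtm

# A non-unitary representation of the two-element group: pi(e) = I, pi(g) = A with A @ A = I.
A = np.array([[1.0, 1.0],
              [0.0, -1.0]])
reps = [np.eye(2), A]

# P = average of pi(g) pi(g)*, a positive matrix invariant under conjugation by the group
P = sum(r @ r.conj().T for r in reps) / len(reps)
Q = sqrtm(P)                    # Hermitian square root, P = Q @ Q
Q_inv = np.linalg.inv(Q)

# tau(g) = Q^{-1} pi(g) Q is unitary for every group element
for r in reps:
    tau = Q_inv @ r @ Q
    assert np.allclose(tau @ tau.conj().T, np.eye(2))
print("conjugated representation is unitary")
```

The same average, applied to an inner product rather than to the matrices themselves, is the invariant-inner-product argument of Hurwitz and Schur mentioned in the History section.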
[ { "math_id": 0, "text": "\\pi: G \\rightarrow GL(n,\\mathbb{C})" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "P = \\int_G \\pi(g)\\pi(g)^*dg" }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "Q" }, { "math_id": 5, "text": "P=Q^2" }, { "math_id": 6, "text": "g \\in G" }, { "math_id": 7, "text": "\\tau(g) = Q^{-1}\\pi(g)Q" } ]
https://en.wikipedia.org/wiki?curid=1569065
1569089
Membrane gas separation
Technology for splitting specific gases out of mixtures Gas mixtures can be effectively separated by synthetic membranes made from polymers such as polyamide or cellulose acetate, or from ceramic materials. While polymeric membranes are economical and technologically useful, their performance is bounded by a trade-off known as the Robeson limit (permeability must be sacrificed for selectivity and vice versa). This limit affects polymeric membrane use for CO2 separation from flue gas streams, since mass transport becomes limiting and CO2 separation becomes very expensive due to low permeabilities. Membrane materials have expanded into the realm of silica, zeolites, metal-organic frameworks, and perovskites due to their strong thermal and chemical resistance as well as high tunability (ability to be modified and functionalized), leading to increased permeability and selectivity. Membranes can be used for separating gas mixtures where they act as a permeable barrier through which different compounds move at different rates or not at all. The membranes can be nanoporous, polymeric, etc., and the gas molecules penetrate according to their size, diffusivity, or solubility. Basic process. Gas separation across a membrane is a pressure-driven process, where the driving force is the difference in pressure between inlet of raw material and outlet of product. The membrane used in the process is a generally non-porous layer, so there will not be a severe leakage of gas through the membrane. The performance of the membrane depends on permeability and selectivity. Permeability is affected by the penetrant size. Larger gas molecules have a lower diffusion coefficient. The polymer chain flexibility and free volume in the polymer of the membrane material influence the diffusion coefficient, as the space within the permeable membrane must be large enough for the gas molecules to diffuse across. The solubility is expressed as the ratio of the concentration of the gas in the polymer to the pressure of the gas in contact with it. Permeability is the ability of the membrane to allow the permeating gas to diffuse through the material of the membrane as a consequence of the pressure difference over the membrane, and can be measured in terms of the permeate flow rate, membrane thickness and area and the pressure difference across the membrane. The selectivity of a membrane is a measure of the ratio of permeability of the relevant gases for the membrane. It can be calculated as the ratio of permeability of two gases in binary separation. The membrane gas separation equipment typically pumps gas into the membrane module and the targeted gases are separated based on differences in diffusivity and solubility. For example, oxygen will be separated from the ambient air and collected at the upstream side, and nitrogen at the downstream side. As of 2016, membrane technology was reported as capable of producing 10 to 25 tonnes of 25 to 40% oxygen per day. Membrane governing methodology. There are three main diffusion mechanisms. The first (b), Knudsen diffusion, holds at very low pressures where lighter molecules can move across a membrane faster than heavy ones, in a material with reasonably large pores. The second (c), molecular sieving, is the case where the pores of the membrane are too small to let one component pass, a process which is typically not practical in gas applications, as the molecules are too small to design relevant pores. 
In these cases the movement of molecules is best described by pressure-driven convective flow through capillaries, which is quantified by Darcy's law. However, the more general model in gas applications is the solution-diffusion model (d), in which particles first dissolve into the membrane and then diffuse through it, each at a different rate. This model is employed when the pores in the polymer membrane appear and disappear quickly relative to the movement of the particles. In a typical membrane system the incoming feed stream is separated into two components: permeant and retentate. Permeant is the gas that travels across the membrane and the retentate is what is left of the feed. On both sides of the membrane, a gradient of chemical potential is maintained by a pressure difference which is the driving force for the gas molecules to pass through. The ease of transport of each species is quantified by the permeability, Pi. With the assumptions of ideal mixing on both sides of the membrane, ideal gas law, constant diffusion coefficient and Henry's law, the flux of a species can be related to the pressure difference by Fick's law: formula_0 where (Ji) is the molar flux of species i across the membrane, (l) is membrane thickness, (Pi) is permeability of species i, (Di) is diffusivity, (Ki) is the Henry coefficient, and (pi') and (pi") represent the partial pressures of the species i at the feed and permeant side respectively. The product of DiKi is often expressed as the permeability of the species i, on the specific membrane being used. formula_1 The flow of a second species, j, can be defined as: formula_2 With the expression above, a membrane system for a binary mixture can be sufficiently defined. It can be seen that the total flow across the membrane is strongly dependent on the relation between the feed and permeate pressures. The ratio of feed pressure (p') over permeate pressure (p") is defined as the membrane pressure ratio (θ). formula_3 It is clear from the above that a flow of species i or j across the membrane can only occur when: formula_4 In other words, the membrane will experience flow across it when there exists a concentration gradient between feed and permeate. If the gradient is positive, the flow will go from the feed to the permeate and species i will be separated from the feed. formula_5 Therefore, the maximum separation of species i results from: formula_6 Another important coefficient when choosing the optimum membrane for a separation process is the membrane selectivity αij defined as the ratio of permeability of species i with relation to the species j. formula_7 This coefficient is used to indicate the level to which the membrane is able to separate species i from j. It is obvious from the expression above that a membrane selectivity of 1 indicates the membrane has no potential to separate the two gases, since both gases will diffuse equally through the membrane. In the design of a separation process, normally the pressure ratio and the membrane selectivity are prescribed by the pressures of the system and the permeability of the membrane. The level of separation achieved by the membrane (concentration of the species to be separated) needs to be evaluated based on the aforementioned design parameters in order to evaluate the cost-effectiveness of the system. Membrane performance. The concentration of species i and j across the membrane can be evaluated based on their respective diffusion flows across it. 
formula_8 In the case of a binary mixture, the concentration of species i across the membrane is: formula_9 This can be further expanded to obtain an expression of the form: formula_10 formula_11 Using the relations: formula_12 formula_13 The expression can be rewritten as: formula_14 Then using formula_15 formula_16 formula_17 The solution to the above quadratic expression can be expressed as: formula_18 Finally, an expression for the permeant concentration is obtained by the following: formula_19 Along the separation unit, the feed concentration decays with the diffusion across the membrane causing the concentration at the membrane to drop accordingly. As a result, the total permeant flow (q"out) results from the integration of the diffusion flow across the membrane from the feed inlet (q'in) to feed outlet (q'out). A mass balance across a differential length of the separation unit is therefore: formula_20 where: formula_21 Because of the binary nature of the mixture, only one species needs to be evaluated. Prescribing a function n'i=n'i(x), the species balance can be rewritten as: formula_22 where: formula_23 formula_24 Lastly, the area required per unit membrane length can be obtained by the following expression: formula_25 Membrane materials for carbon capture in flue gas streams. The material of the membrane plays an important role in its ability to provide the desired performance characteristics. It is optimal to have a membrane with a high permeability and sufficient selectivity and it is also important to match the membrane properties to those of the system operating conditions (for example pressures and gas composition). Synthetic membranes are made from a variety of polymers including polyethylene, polyamides, polyimides, cellulose acetate, polysulphone and polydimethylsiloxane. Polymer membranes. Polymeric membranes are a common option for use in the capture of CO2 from flue gas because of the maturity of the technology in a variety of industries, namely petrochemicals. The ideal polymer membrane has both a high selectivity and permeability. Polymer membranes are examples of systems that are dominated by the solution-diffusion mechanism. The membrane is considered to have holes in which the gas can dissolve (solubility) and the molecules can move from one cavity to the other (diffusion). It was discovered by Robeson in the early 1990s that polymers with a high selectivity have a low permeability, and the opposite is true: materials with a low selectivity have a high permeability. This is best illustrated in a Robeson plot where the selectivity is plotted as a function of the CO2 permeation. In this plot, the upper bound of selectivity is approximately a linear function of the permeability. It was found that the solubility in polymers is mostly constant but the diffusion coefficients vary significantly and this is where the engineering of the material occurs. Somewhat intuitively, the materials with the highest diffusion coefficients have a more open pore structure, thus losing selectivity. There are two methods that researchers are using to break the Robeson limit; one of these is the use of glassy polymers whose phase transition and changes in mechanical properties make it appear that the material is absorbing molecules and thus surpasses the upper limit. The second method of pushing the boundaries of the Robeson limit is by the facilitated transport method. 
As previously stated, the solubility of polymers is typically fairly constant but the facilitated transport method uses a chemical reaction to enhance the permeability of one component without changing the selectivity. Nanoporous membranes. Nanoporous membranes are fundamentally different from polymer-based membranes in that their chemistry is different and that they do not follow the Robeson limit for a variety of reasons. The simplified figure of a nanoporous membrane shows a small portion of an example membrane structure with cavities and windows. The white portion represents the area where the molecule can move and the blue shaded areas represent the walls of the structure. In the engineering of these membranes, the size of the cavity (Lcy x Lcz) and window region (Lwy x Lwz) can be modified so that the desired permeation is achieved. It has been shown that the permeability of a membrane is the product of adsorption and diffusion. Under low-loading conditions, the adsorption can be computed from the Henry coefficient. If the assumption is made that the energy of a particle does not change when moving through this structure, only the entropy of the molecules changes based on the size of the openings. If we first consider changes to the cavity geometry, the larger the cavity, the larger the entropy of the adsorbed molecules, which thus makes the Henry coefficient larger. For diffusion, an increase in entropy will lead to a decrease in free energy which in turn leads to a decrease in the diffusion coefficient. Conversely, changing the window geometry will primarily affect the diffusion of the molecules and not the Henry coefficient. In summary, by using the above simplified analysis, it is possible to understand why the upper limit of the Robeson line does not hold for nanostructures. In the analysis, both the diffusion and Henry coefficients can be modified without affecting the permeability of the material which thus can exceed the upper limit for polymer membranes. Silica membranes. Silica membranes are mesoporous and can be made with high uniformity (the same structure throughout the membrane). The high porosity of these membranes gives them very high permeabilities. Synthesized membranes have smooth surfaces and can be modified on the surface to drastically improve selectivity. Functionalizing silica membrane surfaces with amine-containing molecules (on the surface silanol groups) allows the membranes to separate CO2 from flue gas streams more effectively. Surface functionalization (and thus chemistry) can be tuned to be more efficient for wet flue gas streams as compared to dry flue gas streams. While silica membranes were previously impractical due to their limited technical scalability and cost (they are very difficult to produce in an economical manner on a large scale), there have been demonstrations of a simple method of producing silica membranes on hollow polymeric supports. These demonstrations indicate that economical materials and methods can effectively separate CO2 and N2. Ordered mesoporous silica membranes have shown considerable potential for surface modification that allows for ease of CO2 separation. Surface functionalization with amines leads to the reversible formation of carbamates (during CO2 flow), increasing CO2 selectivity significantly. Zeolite membranes. Zeolites are crystalline aluminosilicates with a regular repeating structure of molecular-sized pores. 
Zeolite membranes selectively separate molecules based on pore size and polarity and are thus highly tunable to specific gas separation processes. In general, smaller molecules and those with stronger zeolite-adsorption properties are adsorbed onto zeolite membranes with larger selectivity. The capacity to discriminate based on both molecular size and adsorption affinity makes zeolite membranes attractive candidates for CO2 separation from N2, CH4, and H2. Scientists have found that the gas-phase enthalpy (heat) of adsorption on zeolites increases as follows: H2 &lt; CH4 &lt; N2 &lt; CO2. It is generally accepted that CO2 has the largest adsorption energy because it has the largest quadrupole moment, thereby increasing its affinity for charged or polar zeolite pores. At low temperatures, zeolite adsorption-capacity is large and the high concentration of adsorbed CO2 molecules blocks the flow of other gases. Therefore, at lower temperatures, CO2 selectively permeates through zeolite pores. Several recent research efforts have focused on developing new zeolite membranes that maximize the CO2 selectivity by taking advantage of the low-temperature blocking phenomenon. Researchers have synthesized Y-type (Si:Al&gt;3) zeolite membranes which achieve room-temperature separation factors of 100 and 21 for CO2/N2 and CO2/CH4 mixtures respectively. DDR-type and SAPO-34 membranes have also shown promise in separating CO2 and CH4 at a variety of pressures and feed compositions. The SAPO-34 membranes, being nitrogen selective, are also strong contenders for the natural gas sweetening process. Researchers have also made an effort to utilize zeolite membranes for the separation of H2 from hydrocarbons. Hydrogen can be separated from larger hydrocarbons such as C4H10 with high selectivity. This is due to the molecular sieving effect since zeolites have pores much larger than H2, but smaller than these large hydrocarbons. Smaller hydrocarbons such as CH4, C2H6, and C3H8 are small enough to not be separated by molecular sieving. Researchers achieved a higher selectivity of hydrogen when performing the separation at high temperatures, likely as a result of a decrease in the competitive adsorption effect. Metal-organic framework (MOF) membranes. There have been advances in zeolitic-imidazolate frameworks (ZIFs), a subclass of metal-organic frameworks (MOFs), that have allowed them to be useful for carbon dioxide separation from flue gas streams. Extensive modeling has been performed to demonstrate the value of using MOFs as membranes. MOF materials are adsorption-based, and thus can be tuned to achieve selectivity. The drawback to MOF systems is their limited stability in water and other compounds present in flue gas streams. Select materials, such as ZIF-8, have demonstrated stability in water and benzene, components often present in flue gas mixtures. ZIF-8 can be synthesized as a membrane on a porous alumina support and has proven to be effective at separating CO2 from flue gas streams. At similar CO2/CH4 selectivity to Y-type zeolite membranes, ZIF-8 membranes achieve unprecedented CO2 permeance, two orders of magnitude above the previous standard. Perovskite membranes. Perovskites are mixed metal oxides with a well-defined cubic structure and a general formula of ABO3, where A is an alkaline earth or lanthanide element and B is a transition metal. These materials are attractive for CO2 separation because of the tunability of the metal sites as well as their stability at elevated temperatures. 
The separation of CO2 from N2 was investigated with an α-alumina membrane impregnated with BaTiO3. It was found that adsorption of CO2 was favorable at high temperatures due to an endothermic interaction between CO2 and the material, promoting mobile CO2 that enhanced CO2 adsorption-desorption rate and surface diffusion. The experimental separation factor of CO2 to N2 was found to be 1.1-1.2 at 100 °C to 500 °C, which is higher than the separation factor limit of 0.8 predicted by Knudsen diffusion. Though the separation factor was low due to pinholes observed in the membrane, this demonstrates the potential of perovskite materials in their selective surface chemistry for CO2 separation. Other membrane technologies. In special cases other materials can be utilized; for example, palladium membranes permit transport solely of hydrogen. In addition to palladium membranes (which are typically palladium-silver alloys to stop embrittlement of the alloy at lower temperatures) there is also a significant research effort looking into finding non-precious metal alternatives. However, slow kinetics of exchange on the surface of the membrane and the tendency of the membranes to crack or disintegrate after a number of duty cycles or during cooling are problems yet to be fully solved. Construction. Membranes are typically contained in one of three types of module. Uses. Membranes are employed in a range of gas separation applications. Air separation. Oxygen-enriched air is in high demand for a range of medical and industrial applications including chemical and combustion processes. Cryogenic distillation is the mature technology for commercial air separation for the production of large quantities of high purity oxygen and nitrogen. However, it is a complex process, is energy-intensive, and is generally not suitable for small-scale production. Pressure swing adsorption is also commonly used for air separation and can also produce high purity oxygen at medium production rates, but it still requires considerable space, high investment and high energy consumption. The membrane gas separation method is a sustainable process with relatively low environmental impact, providing continuous production, simple operation, lower pressure/temperature requirements, and compact space requirements. Current status of CO2 capture with membranes. A great deal of research has been undertaken to utilize membranes instead of absorption or adsorption for carbon capture from flue gas streams; however, no current projects exist that utilize membranes. Process engineering, along with new developments in materials, has shown that membranes have the greatest potential for low energy penalty and cost compared to competing technologies. Background. Today, membranes are used for commercial separations involving: N2 from air, H2 from ammonia in the Haber-Bosch process, natural gas purification, and tertiary-level enhanced oil recovery supply. Single-stage membrane operations involve a single membrane with one selectivity value. Single-stage membranes were first used in natural gas purification, separating CO2 from methane. A disadvantage of single-stage membranes is the loss of product in the permeate due to the constraints imposed by the single selectivity value. Increasing the selectivity reduces the amount of product lost in the permeate, but comes at the cost of requiring a larger pressure difference to process an equivalent amount of a flue stream. In practice, the maximum pressure ratio economically possible is around 5:1. 
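The pressure-ratio limitation mentioned above can be illustrated with the perfectly mixed, binary single-stage model derived in the membrane performance section. The sketch below is not from the article; the selectivity, pressure ratio and feed composition are illustrative assumptions chosen only to show the shape of the calculation.

```python
# Sketch: permeate composition of a binary, perfectly mixed single-stage membrane,
# solving n'' = J_i / (J_i + J_j) with each flux proportional to its
# partial-pressure difference (solution-diffusion model).
from scipy.optimize import brentq

def permeate_fraction(n_feed, selectivity, pressure_ratio):
    """Mole fraction of the more permeable species i in the permeate.

    n_feed         : feed mole fraction of species i
    selectivity    : P_i / P_j
    pressure_ratio : feed pressure / permeate pressure
    """
    def residual(n_perm):
        j_i = selectivity * (n_feed - n_perm / pressure_ratio)
        j_j = (1.0 - n_feed) - (1.0 - n_perm) / pressure_ratio
        return n_perm * (j_i + j_j) - j_i
    return brentq(residual, 0.0, 1.0)

# Example: 20% CO2 in the feed, selectivity 50, pressure ratio 5 -> roughly 76% CO2 in the permeate
print(permeate_fraction(0.20, selectivity=50.0, pressure_ratio=5.0))
```

Increasing the selectivity while holding the pressure ratio at 5 gives diminishing returns in permeate purity, which is the numerical face of the pressure-ratio constraint noted above.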
To combat the loss of product in the membrane permeate, engineers use “cascade processes” in which the permeate is recompressed and interfaced with additional, higher selectivity membranes. The retentate streams can be recycled, which achieves a better yield of product. Need for multi-stage process. Single-stage membrane devices are not feasible for obtaining a high concentration of separated material in the permeate stream. This is due to the pressure ratio limit that is economically unrealistic to exceed. Therefore, the use of multi-stage membranes is required to concentrate the permeate stream. The use of a second stage allows less membrane area and less power to be used. This is because of the higher concentration that passes the second stage, as well as the lower volume of gas for the pump to process. Other measures, such as adding another stage that uses air to concentrate the stream, further reduce cost by increasing the concentration within the feed stream. Combining multiple types of separation methods also allows for variation in creating economical process designs. Membrane use in hybrid processes. Hybrid processes have a long-standing history in gas separation. Typically, membranes are integrated into existing processes so that they can be retrofitted into existing carbon capture systems. MTR, Membrane Technology and Research Inc., and UT Austin have worked to create hybrid processes, utilizing both absorption and membranes, for CO2 capture. First, an absorption column using piperazine as a solvent absorbs about half the carbon dioxide in the flue gas; the use of a membrane then results in 90% capture. A parallel setup is also used, with the membrane and absorption processes occurring simultaneously. Generally, these processes are most effective when the highest content of carbon dioxide enters the amine absorption column. Incorporating hybrid design processes allows for retrofitting into fossil fuel power plants. Hybrid processes can also use cryogenic distillation and membranes. For example, hydrogen and carbon dioxide can be separated, first using cryogenic gas separation, whereby most of the carbon dioxide exits first, then using a membrane process to separate the remaining carbon dioxide, after which it is recycled for further attempts at cryogenic separation. Cost analysis. Cost limits the pressure ratio in a membrane CO2 separation stage to a value of 5; higher pressure ratios eliminate any economic viability for CO2 capture using membrane processes. Recent studies have demonstrated that multi-stage CO2 capture/separation processes using membranes can be economically competitive with older and more common technologies such as amine-based absorption. Currently, both membrane and amine-based absorption processes can be designed to yield a 90% CO2 capture rate. For carbon capture at an average 600 MW coal-fired power plant, the cost of CO2 capture using amine-based absorption is in the $40–100 per ton of CO2 range, while the cost of CO2 capture using current membrane technology (including current process design schemes) is about $23 per ton of CO2. Additionally, running an amine-based absorption process at an average 600 MW coal-fired power plant consumes about 30% of the energy generated by the power plant, while running a membrane process requires about 16% of the energy generated. CO2 transport (e.g. to geologic sequestration sites, or to be used for EOR) costs about $2–5 per ton of CO2. 
This cost is the same for all types of CO2 capture/separation processes such as membrane separation and absorption. In terms of dollars per ton of captured CO2, the least expensive membrane processes being studied at this time are multi-step counter-current flow/sweep processes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "J_i=\\frac{D_i K_i(p_i'-p_i'')}{l}=\\frac{P_i(p_i'-p_i'')}{l}" }, { "math_id": 1, "text": "P_i=D_i K_i " }, { "math_id": 2, "text": "J_j=\\frac{P_j(p_j'-p_j'')}{l}" }, { "math_id": 3, "text": "\\theta=\\frac{P'}{P''} " }, { "math_id": 4, "text": " p_i'-p_i'' =p' n_i'-p'' n_i'' \\neq 0" }, { "math_id": 5, "text": "p'n_i'-p''n_i''>0 \\rightarrow \\frac{n_i''}{n_i'} \\leq \\frac{p'}{p''}" }, { "math_id": 6, "text": " n_i'',max'' = \\frac{p'}{p''}n_i'= \\theta n_i'" }, { "math_id": 7, "text": " \\alpha_{ij}= \\frac{P_i}{P_j} " }, { "math_id": 8, "text": " n_i'' = \\frac{J_i}{\\sum{J_k}} , \\quad n_j ''= \\frac{J_j}{\\sum{J_k}} " }, { "math_id": 9, "text": " n_i'' = \\frac{J_i}{J_i+J_j}" }, { "math_id": 10, "text": " n_i'' = n_i''(\\phi, \\alpha_{ij}, n_i^')" }, { "math_id": 11, "text": " n_i'' = \\frac{J_i}{J_i+J_j}= \\frac{P_i(p_i'-p_i'')}{P_i(p_i'-p_i'')+P_j(n_j'-\\frac{1}{\\phi}n_j'')} " }, { "math_id": 12, "text": " p_i'=p'n_i' ,\\quad p_j'=p'n_j' = \\frac{p'}{\\phi}n_i' " }, { "math_id": 13, "text": " p_i''=p''n_i' ,\\quad p_j''=p''n_j'' = \\frac{p'}{\\phi}n_i'' " }, { "math_id": 14, "text": " n_i''=\\frac{P_ip'(n_i'-\\frac{1}{\\phi}n_i'')}{P_ip'(n_i'-\\frac{1}{\\phi}n_i'')+P_jp'(n_j'-\\frac{1}{\\phi}n_j'')}" }, { "math_id": 15, "text": " n_j'=1-n_i'\\quad and \\quad n_j'' =1-n_i'' " }, { "math_id": 16, "text": " n_i''=\\frac{P_ip'(n_i'-\\frac{1}{\\phi}n_i'')}{P_ip'(n_i'-\\frac{1}{\\phi}n_i'')+P_jp'((1-n_i')-\\frac{1}{\\phi}(1-n_i''))}" }, { "math_id": 17, "text": " (1-\\alpha)(n_i'')^2+(\\phi+\\phi(\\alpha-1)n_i'+\\alpha-1)n_i''-\\alpha\\phi n_i' =0 " }, { "math_id": 18, "text": " n_i = \\frac{-(\\phi+\\phi(\\alpha-1)n_i'+\\alpha-1)\\pm \\sqrt{\\phi+\\phi(\\alpha-1)n_i'+\\alpha-1)^2+4(1-\\alpha)\\alpha\\phi n_i'}}{2(1-\\alpha)} " }, { "math_id": 19, "text": " n_i''(\\phi \\alpha n_i')=\\frac{\\phi}{2}\\left(n_i'+\\frac{1}{\\phi}+\\frac{1}{\\alpha-1}-\\sqrt{\\left(n_i'+\\frac{1}{\\phi}+\\frac{1}{\\alpha-1}\\right)^2-\\frac{4\\alpha n_i'}{(\\alpha-1)\\phi}} \\right)" }, { "math_id": 20, "text": " q'(x)=q'(x+dx)+\\int_{x}^{x+dx} q''(x)dx" }, { "math_id": 21, "text": "q''(x)=J_i(x)+J_j(x)" }, { "math_id": 22, "text": " q'(x)n'_i(x)=q'(x+\\Delta x)n'_i(x+\\Delta x) +\\int_{x}^{x+dx}q''(x)dx \\bar{n_i''} " }, { "math_id": 23, "text": " \\int_{x}^{x+dx} q''(x)dx= \\delta q'', \\quad \\bar{n_i''}=\\frac{n_i''(x)+n_i''(x+\\Delta x)}{2} " }, { "math_id": 24, "text": " \\delta q''= \\frac{n'_i(x)-n'_i(x+\\Delta x)}{\\bar{n_i''}-n'_i(x+\\Delta x)} q'(x) " }, { "math_id": 25, "text": " A=\\frac{\\delta q''}{J_i+J_j}" } ]
https://en.wikipedia.org/wiki?curid=1569089
1569217
Veronese surface
In mathematics, the Veronese surface is an algebraic surface in five-dimensional projective space, and is realized by the Veronese embedding, the embedding of the projective plane given by the complete linear system of conics. It is named after Giuseppe Veronese (1854–1917). Its generalization to higher dimension is known as the Veronese variety. The surface admits an embedding in the four-dimensional projective space defined by the projection from a general point in the five-dimensional space. Its general projection to three-dimensional projective space is called a Steiner surface. Definition. The Veronese surface is the image of the mapping formula_0 given by formula_1 where formula_2 denotes homogeneous coordinates. The map formula_3 is known as the Veronese embedding. Motivation. The Veronese surface arises naturally in the study of conics. A conic is a degree 2 plane curve, thus defined by an equation: formula_4 The pairing between coefficients formula_5 and variables formula_6 is linear in coefficients and quadratic in the variables; the Veronese map makes it linear in the coefficients and linear in the monomials. Thus for a fixed point formula_7 the condition that a conic contains the point is a linear equation in the coefficients, which formalizes the statement that "passing through a point imposes a linear condition on conics". Veronese map. The Veronese map or Veronese variety generalizes this idea to mappings of general degree "d" in "n"+1 variables. That is, the Veronese map of degree "d" is the map formula_8 with "m" given by the multiset coefficient, or more familiarly the binomial coefficient, as: formula_9 The map sends formula_10 to all possible monomials of total degree "d" (of which there are formula_11); we have formula_12 since there are formula_12 variables formula_13 to choose from; and we subtract formula_14 since the projective space formula_15 has formula_11 coordinates. The second equality shows that for fixed source dimension "n," the target dimension is a polynomial in "d" of degree "n" and leading coefficient formula_16 For low degree, formula_17 is the trivial constant map to formula_18 and formula_19 is the identity map on formula_20 so "d" is generally taken to be 2 or more. One may define the Veronese map in a coordinate-free way, as formula_21 where "V" is any vector space of finite dimension, and formula_22 are its symmetric powers of degree "d". This is homogeneous of degree "d" under scalar multiplication on "V", and therefore passes to a mapping on the underlying projective spaces. If the vector space "V" is defined over a field "K" which does not have characteristic zero, then the definition must be altered to be understood as a mapping to the dual space of polynomials on "V". This is because for fields with finite characteristic "p", the "p"th powers of elements of "V" are not rational normal curves, but are of course a line. (See, for example additive polynomial for a treatment of polynomials over a field of finite characteristic). Rational normal curve. For formula_23 the Veronese variety is known as the rational normal curve, of which the lower-degree examples are familiar. Biregular. The image of a variety under the Veronese map is again a variety, rather than simply a constructible set; furthermore, these are isomorphic in the sense that the inverse map exists and is regular – the Veronese map is biregular. More precisely, the images of open sets in the Zariski topology are again open.
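As a concrete check of the definition, the short sketch below (not part of the article) pushes a sample point of the projective plane through the degree-2 Veronese embedding and verifies a few of the standard quadratic relations satisfied by points of the image; the coordinates of the sample point are arbitrary.

```python
# Sketch: the Veronese embedding of P^2 in P^5 and some quadratic relations on its image.

def veronese(x, y, z):
    """[x:y:z] -> [x^2 : y^2 : z^2 : yz : xz : xy], the coordinate order used above."""
    return (x * x, y * y, z * z, y * z, x * z, x * y)

x, y, z = 2, -3, 5                      # arbitrary sample point of the projective plane
X0, X1, X2, X3, X4, X5 = veronese(x, y, z)

# A few of the quadratic relations that hold on the Veronese surface
# (2x2 minors of the symmetric matrix [[X0, X5, X4], [X5, X1, X3], [X4, X3, X2]]):
assert X0 * X1 == X5 * X5               # x^2 * y^2 == (xy)^2
assert X1 * X2 == X3 * X3               # y^2 * z^2 == (yz)^2
assert X0 * X2 == X4 * X4               # x^2 * z^2 == (xz)^2
assert X3 * X4 == X2 * X5               # (yz)(xz) == z^2 * (xy)
print("sample point satisfies the quadric relations")
```

The symmetric-matrix description of these relations is standard background rather than something spelled out in the article text above.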
[ { "math_id": 0, "text": "\\nu:\\mathbb{P}^2\\to \\mathbb{P}^5" }, { "math_id": 1, "text": "\\nu: [x:y:z] \\mapsto [x^2:y^2:z^2:yz:xz:xy]" }, { "math_id": 2, "text": "[x:\\cdots]" }, { "math_id": 3, "text": "\\nu" }, { "math_id": 4, "text": "Ax^2 + Bxy + Cy^2 +Dxz + Eyz + Fz^2 = 0." }, { "math_id": 5, "text": "(A, B, C, D, E, F)" }, { "math_id": 6, "text": "(x,y,z)" }, { "math_id": 7, "text": "[x:y:z]," }, { "math_id": 8, "text": "\\nu_d\\colon \\mathbb{P}^n \\to \\mathbb{P}^m" }, { "math_id": 9, "text": "m= \\left(\\!\\!{n + 1 \\choose d}\\!\\!\\right) - 1 = {n+d \\choose d} - 1." }, { "math_id": 10, "text": "[x_0:\\ldots:x_n]" }, { "math_id": 11, "text": "m+1" }, { "math_id": 12, "text": "n+1" }, { "math_id": 13, "text": "x_0, \\ldots, x_n" }, { "math_id": 14, "text": "1" }, { "math_id": 15, "text": "\\mathbb{P}^m" }, { "math_id": 16, "text": "1/n!." }, { "math_id": 17, "text": "d=0" }, { "math_id": 18, "text": "\\mathbf{P}^0," }, { "math_id": 19, "text": "d=1" }, { "math_id": 20, "text": "\\mathbf{P}^n," }, { "math_id": 21, "text": "\\nu_d: \\mathbb{P}(V) \\ni [v] \\mapsto [v^d] \\in \\mathbb{P}(\\rm{Sym}^d V)" }, { "math_id": 22, "text": "\\rm{Sym}^d V" }, { "math_id": 23, "text": "n=1," }, { "math_id": 24, "text": "n=1, d=1" }, { "math_id": 25, "text": "n=1, d=2," }, { "math_id": 26, "text": "[x^2:xy:y^2]," }, { "math_id": 27, "text": "(x,x^2)." }, { "math_id": 28, "text": "n=1, d=3," }, { "math_id": 29, "text": "[x^3:x^2y:xy^2:y^3]," }, { "math_id": 30, "text": "(x,x^2,x^3)." } ]
https://en.wikipedia.org/wiki?curid=1569217
1569600
Thermal expansion
Tendency of matter to change volume in response to a change in temperature Thermal expansion is the tendency of matter to increase in length, area, or volume, changing its size and density, in response to an increase in temperature (usually excluding phase transitions). Substances usually contract with decreasing temperature (thermal contraction), with rare exceptions within limited temperature ranges ("negative thermal expansion"). Temperature is a monotonic function of the average molecular kinetic energy of a substance. As energy in particles increases, they start moving faster and faster, weakening the intermolecular forces between them and therefore expanding the substance. When a substance is heated, molecules begin to vibrate and move more, usually creating more distance between themselves. The relative expansion (also called strain) divided by the change in temperature is called the material's coefficient of linear thermal expansion and generally varies with temperature. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Prediction. If an equation of state is available, it can be used to predict the values of the thermal expansion at all the required temperatures and pressures, along with many other state functions. Contraction effects (negative expansion). A number of materials contract on heating within certain temperature ranges; this is usually called negative thermal expansion, rather than "thermal contraction". For example, the coefficient of thermal expansion of water drops to zero as it is cooled to 3.983 °C and then becomes negative below this temperature; this means that water has a maximum density at this temperature, and this leads to bodies of water maintaining this temperature at their lower depths during extended periods of sub-zero weather. Other materials are also known to exhibit negative thermal expansion. Fairly pure silicon has a negative coefficient of thermal expansion for temperatures between about 18 and 120 kelvin. ALLVAR Alloy 30, a titanium alloy, exhibits anisotropic negative thermal expansion across a wide range of temperatures. Factors. Unlike gases or liquids, solid materials tend to keep their shape when undergoing thermal expansion. Thermal expansion generally decreases with increasing bond energy, which also has an effect on the melting point of solids, so high melting point materials are more likely to have lower thermal expansion. In general, liquids expand slightly more than solids. The thermal expansion of glasses is slightly higher compared to that of crystals. At the glass transition temperature, rearrangements that occur in an amorphous material lead to characteristic discontinuities of coefficient of thermal expansion and specific heat. These discontinuities allow detection of the glass transition temperature where a supercooled liquid transforms to a glass. Absorption or desorption of water (or other solvents) can change the size of many common materials; many organic materials change size much more due to this effect than due to thermal expansion. Common plastics exposed to water can, in the long term, expand by many percent. Effect on density. Thermal expansion changes the space between particles of a substance, which changes the volume of the substance while negligibly changing its mass (the negligible amount comes from mass–energy equivalence), thus changing its density, which has an effect on any buoyant forces acting on it. 
This plays a crucial role in convection of unevenly heated fluid masses, notably making thermal expansion partly responsible for wind and ocean currents. Coefficients. The coefficient of thermal expansion describes how the size of an object changes with a change in temperature. Specifically, it measures the fractional change in size per degree change in temperature at a constant pressure, such that lower coefficients describe lower propensity for change in size. Several types of coefficients have been developed: volumetric, area, and linear. The choice of coefficient depends on the particular application and which dimensions are considered important. For solids, one might only be concerned with the change along a length, or over some area. The volumetric thermal expansion coefficient is the most basic thermal expansion coefficient, and the most relevant for fluids. In general, substances expand or contract when their temperature changes, with expansion or contraction occurring in all directions. Substances that expand at the same rate in every direction are called isotropic. For isotropic materials, the area and volumetric thermal expansion coefficients are, respectively, approximately twice and three times the linear thermal expansion coefficient. In the general case of a gas, liquid, or solid, the volumetric coefficient of thermal expansion is given by formula_0 The subscript "p" to the derivative indicates that the pressure is held constant during the expansion, and the subscript "V" stresses that it is the volumetric (not linear) expansion that enters this general definition. In the case of a gas, the fact that the pressure is held constant is important, because the volume of a gas will vary appreciably with pressure as well as temperature. For a gas of low density this can be seen from the ideal gas law. For various materials. This section summarizes the coefficients for some common materials. For isotropic materials the coefficients of linear thermal expansion "α" and volumetric thermal expansion "αV" are related by "αV" = 3"α". For liquids, usually the coefficient of volumetric expansion is listed, and a linear expansion can be calculated for comparison. For common materials like many metals and compounds, the thermal expansion coefficient is inversely proportional to the melting point. In particular, for metals the relation is: formula_1 for halides and oxides formula_2 Values of "α" range from about 10−7 K−1 for hard solids to 10−3 K−1 for organic liquids. The coefficient "α" varies with the temperature and some materials have a very high variation; see for example the variation vs. temperature of the volumetric coefficient for a semicrystalline polypropylene (PP) at different pressures, and the variation of the linear coefficient vs. temperature for some steel grades (from bottom to top: ferritic stainless steel, martensitic stainless steel, carbon steel, duplex stainless steel, austenitic steel). The highest linear coefficient in a solid has been reported for a Ti-Nb alloy. In solids. When calculating thermal expansion it is necessary to consider whether the body is free to expand or is constrained. If the body is free to expand, the expansion or strain resulting from an increase in temperature can be simply calculated by using the applicable coefficient of thermal expansion. If the body is constrained so that it cannot expand, then internal stress will be caused (or changed) by a change in temperature. 
This stress can be calculated by considering the strain that would occur if the body were free to expand and the stress required to reduce that strain to zero, through the stress/strain relationship characterised by the elastic or Young's modulus. In the special case of solid materials, external ambient pressure does not usually appreciably affect the size of an object and so it is not usually necessary to consider the effect of pressure changes. Common engineering solids usually have coefficients of thermal expansion that do not vary significantly over the range of temperatures where they are designed to be used, so where extremely high accuracy is not required, practical calculations can be based on a constant, average, value of the coefficient of expansion. Length. Linear expansion means change in one dimension (length) as opposed to change in volume (volumetric expansion). To a first approximation, the change in length measurements of an object due to thermal expansion is related to temperature change by a coefficient of linear thermal expansion (CLTE). It is the fractional change in length per degree of temperature change. Assuming negligible effect of pressure, one may write: formula_3 where formula_4 is a particular length measurement and formula_5 is the rate of change of that linear dimension per unit change in temperature. The change in the linear dimension can be estimated to be: formula_6 This estimation works well as long as the linear-expansion coefficient does not change much over the change in temperature formula_7, and the fractional change in length is small formula_8. If either of these conditions does not hold, the exact differential equation (using formula_5) must be integrated. Effects on strain. For solid materials with a significant length, like rods or cables, an estimate of the amount of thermal expansion can be described by the material strain, given by formula_9 and defined as: formula_10 where formula_11 is the length before the change of temperature and formula_12 is the length after the change of temperature. For most solids, thermal expansion is proportional to the change in temperature: formula_13 Thus, the change in either the strain or temperature can be estimated by: formula_14 where formula_15 is the difference of the temperature between the two recorded strains, measured in degrees Fahrenheit, degrees Rankine, degrees Celsius, or kelvin, and formula_16 is the linear coefficient of thermal expansion in "per degree Fahrenheit", "per degree Rankine", "per degree Celsius", or "per kelvin", denoted by °F−1, °R−1, °C−1, or K−1, respectively. In the field of continuum mechanics, thermal expansion and its effects are treated as eigenstrain and eigenstress. Area. The area thermal expansion coefficient relates the change in a material's area dimensions to a change in temperature. It is the fractional change in area per degree of temperature change. Ignoring pressure, one may write: formula_17 where formula_18 is some area of interest on the object, and formula_19 is the rate of change of that area per unit change in temperature. The change in the area can be estimated as: formula_20 This equation works well as long as the area expansion coefficient does not change much over the change in temperature formula_7, and the fractional change in area is small formula_21. If either of these conditions does not hold, the equation must be integrated. Volume. 
For a solid, one can ignore the effects of pressure on the material, and the volumetric (or cubical) thermal expansion coefficient can be written: formula_22 where formula_23 is the volume of the material, and formula_24 is the rate of change of that volume with temperature. This means that the volume of a material changes by some fixed fractional amount. For example, a steel block with a volume of 1 cubic meter might expand to 1.002 cubic meters when the temperature is raised by 50 K. This is an expansion of 0.2%. If a block of steel has a volume of 2 cubic meters, then under the same conditions, it would expand to 2.004 cubic meters, again an expansion of 0.2%. The volumetric expansion coefficient would be 0.2% for 50 K, or 0.004% K−1. If the expansion coefficient is known, the change in volume can be calculated formula_25 where formula_26 is the fractional change in volume (e.g., 0.002) and formula_7 is the change in temperature (50 °C). The above example assumes that the expansion coefficient did not change as the temperature changed and the increase in volume is small compared to the original volume. This is not always true, but for small changes in temperature, it is a good approximation. If the volumetric expansion coefficient does change appreciably with temperature, or the increase in volume is significant, then the above equation will have to be integrated: formula_27 formula_28 where formula_29 is the volumetric expansion coefficient as a function of temperature "T", and formula_30 and formula_31 are the initial and final temperatures respectively. Isotropic materials. For isotropic materials the volumetric thermal expansion coefficient is three times the linear coefficient: formula_32 This ratio arises because volume is composed of three mutually orthogonal directions. Thus, in an isotropic material, for small differential changes, one-third of the volumetric expansion is in a single axis. As an example, take a cube of steel that has sides of length L. The original volume will be formula_33 and the new volume, after a temperature increase, will be formula_34 We can easily ignore the terms as Δ"L" is a small quantity which on squaring gets much smaller and on cubing gets smaller still. So formula_35 The above approximation holds for small temperature and dimensional changes (that is, when formula_7 and formula_36 are small), but it does not hold if trying to go back and forth between volumetric and linear coefficients using larger values of formula_7. In this case, the third term (and sometimes even the fourth term) in the expression above must be taken into account. Similarly, the area thermal expansion coefficient is two times the linear coefficient: formula_37 This ratio can be found in a way similar to that in the linear example above, noting that the area of a face on the cube is just formula_38. Also, the same considerations must be made when dealing with large values of formula_7. Put more simply, if the length of a cubic solid expands from 1.00 m to 1.01 m, then the area of one of its sides expands from 1.00 m2 to 1.02 m2 and its volume expands from 1.00 m3 to 1.03 m3. Anisotropic materials. Materials with anisotropic structures, such as crystals (with less than cubic symmetry, for example martensitic phases) and many composites, will generally have different linear expansion coefficients formula_16 in different directions. As a result, the total volumetric expansion is distributed unequally among the three axes. 
If the crystal symmetry is monoclinic or triclinic, even the angles between these axes are subject to thermal changes. In such cases it is necessary to treat the coefficient of thermal expansion as a tensor with up to six independent elements. A good way to determine the elements of the tensor is to study the expansion by x-ray powder diffraction. The thermal expansion coefficient tensor for the materials possessing cubic symmetry (for e.g. FCC, BCC) is isotropic. Temperature dependence. Thermal expansion coefficients of solids usually show little dependence on temperature (except at very low temperatures) whereas liquids can expand at different rates at different temperatures. There are some exceptions: for example, cubic boron nitride exhibits significant variation of its thermal expansion coefficient over a broad range of temperatures. Another example is paraffin which in its solid form has a thermal expansion coefficient that is dependent on temperature. In gases. Since gases fill the entirety of the container which they occupy, the volumetric thermal expansion coefficient at constant pressure, formula_39, is the only one of interest. For an ideal gas, a formula can be readily obtained by differentiation of the ideal gas law, formula_40. This yields formula_41 where formula_42 is the pressure, formula_43 is the molar volume (formula_44, with formula_45 the total number of moles of gas), formula_46 is the absolute temperature and formula_47 is equal to the gas constant. For an isobaric thermal expansion, formula_48, so that formula_49 and the isobaric thermal expansion coefficient is: formula_50 which is a strong function of temperature; doubling the temperature will halve the thermal expansion coefficient. Absolute zero computation. From 1787 to 1802, it was determined by Jacques Charles (unpublished), John Dalton, and Joseph Louis Gay-Lussac that, at constant pressure, ideal gases expanded or contracted their volume linearly (Charles's law) by about 1/273 parts per degree Celsius of temperature's change up or down, between 0° and 100 °C. This suggested that the volume of a gas cooled at about −273 °C would reach zero. In October 1848, William Thomson, a 24 year old professor of Natural Philosophy at the University of Glasgow, published the paper "On an Absolute Thermometric Scale". In a footnote Thomson calculated that "infinite cold" (absolute zero) was equivalent to −273 °C (he called the temperature in °C as the "temperature of the air thermometers" of the time). This value of "−273" was considered to be the temperature at which the ideal gas volume reaches zero. By considering a thermal expansion linear with temperature (i.e. a constant coefficient of thermal expansion), the value of absolute zero was linearly extrapolated as the negative reciprocal of 0.366/100 °C – the accepted average coefficient of thermal expansion of an ideal gas in the temperature interval 0–100 °C, giving a remarkable consistency to the currently accepted value of −273.15 °C. In liquids. The thermal expansion of liquids is usually higher than in solids because the intermolecular forces present in liquids are relatively weak and its constituent molecules are more mobile. Unlike solids, liquids have no definite shape and they take the shape of the container. Consequently, liquids have no definite length and area, so linear and areal expansions of liquids only have significance in that they may be applied to topics such as thermometry and estimates of sea level rising due to global climate change. 
Sometimes, "αL" is still calculated from the experimental value of "αV". In general, liquids expand on heating, except cold water; below 4 °C it contracts, leading to a negative thermal expansion coefficient. At higher temperatures it shows more typical behavior, with a positive thermal expansion coefficient. Apparent and absolute. The expansion of liquids is usually measured in a container. When a liquid expands in a vessel, the vessel expands along with the liquid. Hence the observed increase in volume (as measured by the liquid level) is not the actual increase in its volume. The expansion of the liquid relative to the container is called its "apparent expansion", while the actual expansion of the liquid is called "real expansion" or "absolute expansion". The ratio of apparent increase in volume of the liquid per unit rise of temperature to the original volume is called its "coefficient of apparent expansion". The absolute expansion can be measured by a variety of techniques, including ultrasonic methods. Historically, this phenomenon complicated the experimental determination of thermal expansion coefficients of liquids, since a direct measurement of the change in height of a liquid column generated by thermal expansion is a measurement of the apparent expansion of the liquid. Thus the experiment simultaneously measures "two" coefficients of expansion and measurement of the expansion of a liquid must account for the expansion of the container as well. For example, when a flask with a long narrow stem, containing enough liquid to partially fill the stem itself, is placed in a heat bath, the height of the liquid column in the stem will initially drop, followed immediately by a rise of that height until the whole system of flask, liquid and heat bath has warmed through. The initial drop in the height of the liquid column is not due to an initial contraction of the liquid, but rather to the expansion of the flask as it contacts the heat bath first. Soon after, the liquid in the flask is heated by the flask itself and begins to expand. Since liquids typically have a greater percent expansion than solids for the same temperature change, the expansion of the liquid in the flask eventually exceeds that of the flask, causing the level of liquid in the flask to rise. For small and equal rises in temperature, the increase in volume (real expansion) of a liquid is equal to the sum of the apparent increase in volume (apparent expansion) of the liquid and the increase in volume of the containing vessel. The absolute expansion of the liquid is the apparent expansion corrected for the expansion of the containing vessel. Examples and applications. The expansion and contraction of the materials must be considered when designing large structures, when using tape or chain to measure distances for land surveys, when designing molds for casting hot material, and in other engineering applications when large changes in dimension due to temperature are expected. Thermal expansion is also used in mechanical applications to fit parts over one another, e.g. a bushing can be fitted over a shaft by making its inner diameter slightly smaller than the diameter of the shaft, then heating it until it fits over the shaft, and allowing it to cool after it has been pushed over the shaft, thus achieving a 'shrink fit'. Induction shrink fitting is a common industrial method to pre-heat metal components between 150 °C and 300 °C thereby causing them to expand and allow for the insertion or removal of another component. 
There exist some alloys with a very small linear expansion coefficient, used in applications that demand very small changes in physical dimension over a range of temperatures. One of these is Invar 36, with expansion approximately equal to 0.6×10−6 K−1. These alloys are useful in aerospace applications where wide temperature swings may occur. Pullinger's apparatus is used to determine the linear expansion of a metallic rod in the laboratory. The apparatus consists of a metal cylinder closed at both ends (called a steam jacket). It is provided with an inlet and outlet for the steam. The steam for heating the rod is supplied by a boiler which is connected by a rubber tube to the inlet. The center of the cylinder contains a hole to insert a thermometer. The rod under investigation is enclosed in a steam jacket. One of its ends is free, but the other end is pressed against a fixed screw. The position of the rod is determined by a micrometer screw gauge or spherometer. To determine the coefficient of linear thermal expansion of a metal, a pipe made of that metal is heated by passing steam through it. One end of the pipe is fixed securely and the other rests on a rotating shaft, the motion of which is indicated by a pointer. A suitable thermometer records the pipe's temperature. This enables calculation of the relative change in length per degree temperature change. The control of thermal expansion in brittle materials is a key concern for a wide range of reasons. For example, both glass and ceramics are brittle, and uneven temperature causes uneven expansion, which in turn causes thermal stress that might lead to fracture. Ceramics need to be joined or work in concert with a wide range of materials and therefore their expansion must be matched to the application. Because glazes need to be firmly attached to the underlying porcelain (or other body type) their thermal expansion must be tuned to 'fit' the body so that crazing or shivering does not occur. Good examples of products whose thermal expansion is the key to their success are CorningWare and the spark plug. The thermal expansion of ceramic bodies can be controlled by firing to create crystalline species that will influence the overall expansion of the material in the desired direction. In addition, or instead, the formulation of the body can employ materials delivering particles of the desired expansion to the matrix. The thermal expansion of glazes is controlled by their chemical composition and the firing schedule to which they were subjected. In most cases there are complex issues involved in controlling body and glaze expansion, so that adjusting for thermal expansion must be done with an eye to other properties that will be affected, and generally trade-offs are necessary. Thermal expansion can have a noticeable effect on gasoline stored in above-ground storage tanks, which can cause gasoline pumps to dispense gasoline which may be more compressed than gasoline held in underground storage tanks in winter, or less compressed than gasoline held in underground storage tanks in summer. Heat-induced expansion has to be taken into account in most areas of engineering. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\alpha = \\alpha_{\\text{V}} = \\frac{1}{V}\\,\\left(\\frac{\\partial V}{\\partial T}\\right)_{p}" }, { "math_id": 1, "text": " \\alpha \\approx \\frac{0.020}{T_m}" }, { "math_id": 2, "text": " \\alpha \\approx \\frac{0.038}{T_m} - 7.0 \\cdot 10^{-6} \\, \\mathrm{K}^{-1}" }, { "math_id": 3, "text": "\\alpha_L = \\frac{1}{L}\\,\\frac{\\mathrm{d}L}{\\mathrm{d}T}" }, { "math_id": 4, "text": "L" }, { "math_id": 5, "text": "\\mathrm{d}L/\\mathrm{d}T" }, { "math_id": 6, "text": "\\frac{\\Delta L}{L} = \\alpha_L \\Delta T" }, { "math_id": 7, "text": "\\Delta T" }, { "math_id": 8, "text": "\\Delta L/L \\ll 1" }, { "math_id": 9, "text": "\\varepsilon_\\mathrm{thermal}" }, { "math_id": 10, "text": "\\varepsilon_\\mathrm{thermal} = \\frac{(L_\\mathrm{final} - L_\\mathrm{initial})} {L_\\mathrm{initial}}" }, { "math_id": 11, "text": "L_\\mathrm{initial}" }, { "math_id": 12, "text": "L_\\mathrm{final}" }, { "math_id": 13, "text": "\\varepsilon_\\mathrm{thermal} \\propto \\Delta T" }, { "math_id": 14, "text": "\\varepsilon_\\mathrm{thermal} = \\alpha_L \\Delta T" }, { "math_id": 15, "text": "\\Delta T = (T_\\mathrm{final} - T_\\mathrm{initial})" }, { "math_id": 16, "text": "\\alpha_L" }, { "math_id": 17, "text": "\\alpha_A = \\frac{1}{A}\\,\\frac{\\mathrm{d}A}{\\mathrm{d}T}" }, { "math_id": 18, "text": "A" }, { "math_id": 19, "text": "dA/dT" }, { "math_id": 20, "text": "\\frac{\\Delta A}{A} = \\alpha_A\\Delta T" }, { "math_id": 21, "text": "\\Delta A/A \\ll 1" }, { "math_id": 22, "text": "\\alpha_V = \\frac{1}{V}\\,\\frac{\\mathrm{d}V}{\\mathrm{d}T}" }, { "math_id": 23, "text": "V" }, { "math_id": 24, "text": "\\mathrm{d}V/\\mathrm{d}T" }, { "math_id": 25, "text": "\\frac{\\Delta V}{V} = \\alpha_V \\Delta T" }, { "math_id": 26, "text": "\\Delta V/V" }, { "math_id": 27, "text": "\\ln\\left(\\frac{V + \\Delta V}{V}\\right) = \\int_{T_i}^{T_f}\\alpha_V(T)\\,\\mathrm{d}T" }, { "math_id": 28, "text": "\\frac{\\Delta V}{V} = \\exp\\left(\\int_{T_i}^{T_f}\\alpha_V(T)\\,\\mathrm{d}T\\right) - 1" }, { "math_id": 29, "text": "\\alpha_V(T)" }, { "math_id": 30, "text": "T_i" }, { "math_id": 31, "text": "T_f" }, { "math_id": 32, "text": "\\alpha_V = 3\\alpha_L" }, { "math_id": 33, "text": "V = L^3" }, { "math_id": 34, "text": "V + \\Delta V = \\left(L + \\Delta L\\right)^3 = L^3 + 3L^2\\Delta L + 3L\\Delta L^2 + \\Delta L^3 \\approx L^3 + 3L^2\\Delta L = V + 3 V \\frac{\\Delta L}{L}." }, { "math_id": 35, "text": "\\frac{\\Delta V}{V} = 3 {\\Delta L \\over L} = 3\\alpha_L\\Delta T." }, { "math_id": 36, "text": "\\Delta L" }, { "math_id": 37, "text": "\\alpha_A = 2\\alpha_L" }, { "math_id": 38, "text": "L^2" }, { "math_id": 39, "text": "\\alpha_{V}" }, { "math_id": 40, "text": "p V_m = RT" }, { "math_id": 41, "text": "p \\mathrm{d}V_m + V_m \\mathrm{d}p = R\\mathrm{d}T" }, { "math_id": 42, "text": "p" }, { "math_id": 43, "text": "V_m" }, { "math_id": 44, "text": " V_m = V / n" }, { "math_id": 45, "text": "n" }, { "math_id": 46, "text": "T" }, { "math_id": 47, "text": "R" }, { "math_id": 48, "text": "\\mathrm{d}p=0" }, { "math_id": 49, "text": "p \\mathrm{d}V_m=R \\mathrm{d}T" }, { "math_id": 50, "text": "\\alpha_{V} \\equiv \\frac{1}{V} \\left(\\frac{\\partial V}{\\partial T}\\right)_p = \\frac{1}{V_m} \\left(\\frac{\\partial V_m}{\\partial T}\\right)_p = \\frac{1}{V_m} \\left(\\frac{R}{p}\\right) = \\frac{R}{pV_m} = \\frac{1}{T}" } ]
https://en.wikipedia.org/wiki?curid=1569600
156962
X-ray scattering techniques
X-ray scattering techniques are a family of non-destructive analytical techniques which reveal information about the crystal structure, chemical composition, and physical properties of materials and thin films. These techniques are based on observing the scattered intensity of an X-ray beam hitting a sample as a function of incident and scattered angle, polarization, and wavelength or energy. Note that X-ray diffraction is sometimes considered a sub-set of X-ray scattering, where the scattering is elastic and the scattering object is crystalline, so that the resulting pattern contains sharp spots analyzed by X-ray crystallography (as in the Figure). However, both scattering and diffraction are related general phenomena and the distinction has not always existed. Thus Guinier's classic text from 1963 is titled "X-ray diffraction in Crystals, Imperfect Crystals and Amorphous Bodies" so 'diffraction' was clearly not restricted to crystals at that time. Scattering techniques. Inelastic X-ray scattering (IXS). In IXS the energy and angle of inelastically scattered X-rays are monitored, giving the dynamic structure factor formula_0. From this many properties of materials can be obtained, the specific property depending on the scale of the energy transfer. The table below, listing techniques, is adapted from. Inelastically scattered X-rays have intermediate phases and so in principle are not useful for X-ray crystallography. In practice X-rays with small energy transfers are included with the diffraction spots due to elastic scattering, and X-rays with large energy transfers contribute to the background noise in the diffraction pattern. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " S(\\mathbf{q},\\omega)" } ]
https://en.wikipedia.org/wiki?curid=156962
1569663
Leo Kadanoff
American physicist Leo Philip Kadanoff (January 14, 1937 – October 26, 2015) was an American physicist. He was a professor of physics (emeritus from 2004) at the University of Chicago and a former president of the American Physical Society (APS). He contributed to the fields of statistical physics, chaos theory, and theoretical condensed matter physics. Biography. Kadanoff was raised in New York City. He received his undergraduate degree and doctorate in physics (1960) from Harvard University. After a post-doctorate at the Niels Bohr Institute in Copenhagen, he joined the physics faculty at the University of Illinois in 1965. Kadanoff's early research focused upon superconductivity. In the late 1960s, he studied the organization of matter in phase transitions. Kadanoff demonstrated that sudden changes in material properties (such as the magnetization of a magnet or the boiling of a fluid) could be understood in terms of scaling and universality. With his collaborators, he showed how all the experimental data then available for the changes, called second-order phase transitions, could be understood in terms of these two ideas. These same ideas have now been extended to apply to a broad range of scientific and engineering problems, and have found numerous and important applications in urban planning, computer science, hydrodynamics, biology, applied mathematics and geophysics. In recognition of these achievements, he won the Buckley Prize of the American Physical Society (1977), the Wolf Prize in Physics (1980), the 1989 Boltzmann Medal of the International Union of Pure and Applied Physics, and the 2006 Lorentz Medal. In 1969 he moved to Brown University. He exploited mathematical analogies between solid state physics and urban growth to gain insight into the latter field, so much so that he contributed substantially to the statewide planning program in Rhode Island. In 1978 he moved to the University of Chicago, where he became the John D. and Catherine T. MacArthur Distinguished Service Professor of Physics and Mathematics. Much of his work in the second half of his career involved contributions to chaos theory, in both mechanical and fluid systems. He was elected a Fellow of the American Academy of Arts and Sciences in 1982. He was one of the recipients of the 1999 National Medal of Science, awarded by President Clinton. He was a member of the National Academy of Sciences and of the American Philosophical Society as well as being a Fellow of the American Physical Society and of the American Association for the Advancement of Science. During the last decade of his career, he received the Quantrell Award (for excellence in teaching) from the University of Chicago, the Centennial Medal of Harvard University, the Lars Onsager Prize of the American Physical Society, and the Grande Medaille d'Or of the Académie des sciences de l'Institut de France. His textbook with Gordon Baym, "Quantum Statistical Mechanics", is a prominent text in the field and has been widely translated. With Leo Irakliotis, Kadanoff established the Center for Presentation of Science at the University of Chicago. In June 2013, it was stated that anonymous donors had provided a $3.5 million gift to establish the Leo Kadanoff Center for Theoretical Physics at the University of Chicago. He died after complications from an illness on October 26, 2015. In 2018 the American Physical Society established the Leo P. Kadanoff Prize in his honor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "T_c" } ]
https://en.wikipedia.org/wiki?curid=1569663
15698103
Saturated-surface-dry
Aggregate or porous solid condition Saturated surface dry (SSD) is defined as the condition of an aggregate in which the surfaces of the particles are "dry" ("i.e.", surface adsorption would no longer take place), but the inter-particle voids are saturated with water. In this condition aggregates will not affect the free water content of a composite material. The water absorption by mass (Am) is defined in terms of the mass of the saturated-surface-dry sample (Mssd) and the mass of the oven-dried test sample (Mdry) by formula_0 References. &lt;templatestyles src="Reflist/styles.css" /&gt; 3. Neville, A. M. "Properties of Concrete", 4th and final ed., Longman, Malaysia, 1995 (reprinted 1996), 844 pp. 4. Field usage information from a manufacturer: https://blog.kryton.com/2012/08/what-ssd/
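As a quick illustration of the water-absorption formula defined above (formula_0), the following sketch computes the absorption from hypothetical SSD and oven-dry masses; the readings are made up for illustration only.

```python
# Water absorption by mass, A = (M_ssd - M_dry) / M_dry, following the formula above.
# The masses below are hypothetical laboratory readings.

m_ssd = 1025.0  # mass of the saturated-surface-dry sample, g
m_dry = 1000.0  # mass of the oven-dried sample, g

absorption = (m_ssd - m_dry) / m_dry
print(f"water absorption by mass: {absorption:.1%}")  # 2.5% for these numbers
```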
[ { "math_id": 0, "text": "A = \\frac{M_{ssd}-M_{dry}}{M_{dry}}" } ]
https://en.wikipedia.org/wiki?curid=15698103
1569856
Gross margin
Gross profit as a percentage Gross margin is the difference between revenue and cost of goods sold (COGS), divided by revenue. Gross margin is expressed as a percentage. Generally, it is calculated as the selling price of an item, less the cost of goods sold (e.g., production or acquisition costs, not including indirect fixed costs like office expenses, rent, or administrative costs), then divided by the same selling price. "Gross margin" is often used interchangeably with "gross profit", however, the terms are different: "gross "profit"" is technically an absolute monetary amount, and "gross "margin"" is technically a percentage or ratio. Gross margin is a kind of profit margin, specifically a form of profit divided by net revenue, e.g., gross (profit) margin, operating (profit) margin, net (profit) margin, etc. Purpose. The purpose of margins is "to determine the value of incremental sales, and to guide pricing and promotion decision." "Margin on sales represents a key factor behind many of the most fundamental business considerations, including budgets and forecasts. All managers should, and generally do, know their approximate business margins. Managers differ widely, however, in the assumptions they use in calculating margins and in the ways they analyze and communicate these important figures." Percentage margins and unit margins. Gross margin can be expressed as a percentage or in total financial terms. If the latter, it can be reported on a per-unit basis or on a per-period basis for a business. "Margin (on sales) is the difference between selling price and cost. This difference is typically expressed either as a percentage of selling price or on a per-unit basis. Managers need to know margins for almost all marketing decisions. Margins represent a key factor in pricing, return on marketing spending, earnings forecasts, and analyses of customer profitability." In a survey of nearly 200 senior marketing managers, 78 percent responded that they found the "margin %" metric very useful while 65 percent found "unit margin" very useful. "A fundamental variation in the way people talk about margins lies in the difference between percentage margins and unit margins on sales. The difference is easy to reconcile, and managers should be able to switch back and forth between the two." Definition of "Unit". "Every business has its own notion of a 'unit,' ranging from a ton of margarine, to 64 ounces of cola, to a bucket of plaster. Many industries work with multiple units and calculate margin accordingly... Marketers must be prepared to shift between varying perspectives with little effort because decisions can be rounded in any of these perspectives." "Investopedia" defines "gross margin" as: &lt;templatestyles src="Block indent/styles.css"/&gt;Gross margin (%) = (Revenue − Cost of goods sold) / Revenue In contrast, "gross profit" is defined as: &lt;templatestyles src="Block indent/styles.css"/&gt;Gross profit = Net sales − Cost of goods sold + Annual sales return or as the ratio of gross profit to revenue, usually as a percentage: formula_0 Cost of sales, also denominated "cost of goods sold" (COGS), includes variable costs and fixed costs directly related to the sale, e.g., material costs, labor, supplier profit, shipping-in costs (cost of transporting the product to the point of sale, as opposed to shipping-out costs which are not included in COGS), etc. It excludes indirect fixed costs, e.g., office expenses, rent, and administrative costs. 
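As a small numeric illustration of the definitions above, the following sketch computes gross profit and the gross margin percentage, plus the per-unit variant discussed below; the revenue, cost, and price figures are hypothetical.

```python
# Minimal sketch of the gross-margin definitions above; all figures are hypothetical.

revenue = 500_000.0  # total net sales for the period
cogs = 350_000.0     # cost of goods sold for the same period

gross_profit = revenue - cogs                     # absolute monetary amount
gross_margin_pct = gross_profit / revenue * 100   # gross margin as a percentage

# Per-unit view: unit margin ($) and margin (%) from a selling price and unit cost.
unit_price = 25.0
unit_cost = 17.5
unit_margin = unit_price - unit_cost
unit_margin_pct = unit_margin / unit_price * 100

print(f"gross profit: ${gross_profit:,.0f}")                         # $150,000
print(f"gross margin: {gross_margin_pct:.0f}%")                      # 30%
print(f"unit margin: ${unit_margin:.2f} ({unit_margin_pct:.0f}%)")   # $7.50 (30%)
```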
Higher gross margins for a manufacturer indicate greater efficiency in turning raw materials into income. For a retailer it would be the difference between its retail price and the wholesale price. Larger gross margins are generally considered ideal for most businesses, with the exception of discount retailers who instead rely on operational efficiency and strategic financing to remain competitive with businesses that have lower margins. Two related metrics are unit margin and margin percent: formula_1 formula_2 "Percentage margins can also be calculated using total sales revenue and total costs. When working with either percentage or unit margins, marketers can perform a simple check by verifying that the individual parts sum to the total." "When considering multiple products with different revenues and costs, we can calculate overall margin (%) on either of two bases: Total revenue and total costs for all products, or the dollar-weighted average of the percentage margins of the different products." Use in sales. Retailers can measure their profit by using two basic methods, namely markup and margin, both of which describe gross profit. Markup expresses profit as a percentage of the cost of the product to the retailer. Margin expresses profit as a percentage of the selling price of the product that the retailer determines. These methods produce different percentages, yet both percentages are valid descriptions of the profit. It is important to specify which method is used when referring to a retailer's profit as a percentage. Some retailers use margins because profits are easily calculated from the total of sales. If margin is 30%, then 30% of the total of sales is the profit. If markup is 30%, the percentage of daily sales that are profit will not be the same percentage. Some retailers use markups because it is easier to calculate a sales price from a cost. If markup is 40%, then sales price will be 40% more than the cost of the item. If margin is 40%, then sales price will not be equal to 40% over cost; in fact, it will be approximately 67% more than the cost of the item. Markup. The equation for calculating the monetary value of gross margin is: &lt;templatestyles src="Block indent/styles.css"/&gt;Gross margin = Sales − Cost of goods sold A simple way to keep markup and gross margin factors straight is to remember that markup expresses the price difference as a percentage of the cost, while gross margin expresses the same price difference as a percentage of the selling price. Gross margin (as a percentage of revenue). Most people find it easier to work with gross margin because it directly tells you how much of the sales revenue, or price, is profit: If an item costs $100 to produce and is sold for a price of $200, the price includes a 100% markup which represents a 50% gross margin. Gross margin is just the percentage of the selling price that is profit. In this case, 50% of the price is profit, or $100. formula_3 In a more complex example, if an item costs $204 to produce and is sold for a price of $340, the price includes a 67% markup ($136) which represents a 40% gross margin. This means that 40% of the $340 sale price is profit. Again, gross margin is just the direct percentage of profit in the sale price. In accounting, the gross margin refers to sales minus cost of goods sold. It is not necessarily profit as other expenses such as sales, administrative, and financial costs must be deducted. A higher gross margin means that a company is reducing its cost of production or passing its costs on to customers. The higher the ratio, all other things being equal, the better for the retailer. Converting between gross margin and markup (gross profit).
Converting markup to gross margin: formula_4 Examples: with a markup of 100%, formula_5 and with a markup of 66.7%, formula_6 Converting gross margin to markup: formula_7 Examples: with a gross margin of 50%, formula_8 and with a gross margin of 40%, formula_9 Using gross margin to calculate selling price: Given the cost of an item, one can compute the selling price required to achieve a specific gross margin. For example, if your product costs $100 and the required gross margin is 40%, then formula_10 Gross margin tools to measure retail performance. Some of the tools that are useful in retail analysis are GMROII, GMROS and GMROL. Differences between industries. In some industries, like clothing for example, profit margins are expected to be near the 40% mark, as the goods need to be bought from suppliers at a certain rate before they are resold. In other industries such as software product development the gross profit margin can be higher than 80% in many cases. In the agriculture industry, particularly the European Union, Standard Gross Margin is used to assess farm profitability. References. "As of February 5, 2012, this article is derived in whole or in part from "Marketing Metrics: The Definitive Guide to Measuring Marketing Performance" by Farris, Bendle, Pfeifer and Reibstein. The copyright holder has licensed the content in a manner that permits reuse; all relevant terms must be followed." &lt;templatestyles src="Reflist/styles.css" /&gt;
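A short sketch of the conversions described above (markup to margin, margin to markup, and selling price from a target margin); the helper function names are hypothetical, and the numeric checks reuse the 40%/66.7% and $100 examples from the article.

```python
# Conversions between markup and gross margin, and the selling price that achieves
# a target margin, following the relations given above. Function names are
# illustrative, not from any particular library.

def margin_from_markup(markup: float) -> float:
    """Gross margin = markup / (1 + markup), both expressed as fractions."""
    return markup / (1 + markup)

def markup_from_margin(margin: float) -> float:
    """Markup = margin / (1 - margin), both expressed as fractions."""
    return margin / (1 - margin)

def price_for_margin(cost: float, margin: float) -> float:
    """Selling price that yields the required gross margin on a given cost."""
    return cost / (1 - margin)

print(margin_from_markup(1.0))        # 0.5    -> a 100% markup is a 50% margin
print(margin_from_markup(0.667))      # ~0.4   -> a 66.7% markup is a 40% margin
print(markup_from_margin(0.5))        # 1.0    -> a 50% margin is a 100% markup
print(markup_from_margin(0.4))        # ~0.667 -> a 40% margin is a 66.7% markup
print(price_for_margin(100.0, 0.40))  # ~166.67, matching the example above
```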
[ { "math_id": 0, "text": "\\text{Gross margin percentage} = \\frac{\\text{Revenue} -\\text{COGS}}{\\text{Revenue}}\\times 100\\%" }, { "math_id": 1, "text": "\\text{Unit margin} (\\$) = \\text{Selling price per unit} (\\$) - \\text{Cost per unit} (\\$)" }, { "math_id": 2, "text": "\\text{Margin} = \\frac{\\text{Unit margin} (\\$)} {\\text{Selling price per unit} (\\$)} \\times 100\\%" }, { "math_id": 3, "text": "\\frac{\\$200 - \\$100}{\\$200} \\cdot 100\\% = 50\\%" }, { "math_id": 4, "text": "\\text{gross margin} = \\frac{\\text{markup}}{1 + \\text{markup}}" }, { "math_id": 5, "text": " \\text{gross margin} = \\frac{1}{1 + 1} = 0.5 = 50\\%" }, { "math_id": 6, "text": " \\text{gross margin} = \\frac{0.667}{1 + 0.667} = 0.4 = 40\\%" }, { "math_id": 7, "text": "\\text{markup} = \\frac{\\text{gross margin}}{1 - \\text{gross margin}}" }, { "math_id": 8, "text": " \\text{markup} = \\frac{0.5}{1 - 0.5} = 1 = 100\\%" }, { "math_id": 9, "text": " \\text{markup} = \\frac{0.4}{1 - 0.4} = 0.667 = 66.7\\%" }, { "math_id": 10, "text": "\\text{Selling price} = \\frac{\\$100} {1 - 40\\%} = \\frac{\\$100} {0.6} = \\$166.67" } ]
https://en.wikipedia.org/wiki?curid=1569856
156998
Action potential
Neuron communication by electric impulses An action potential occurs when the membrane potential of a specific cell rapidly rises and falls. This depolarization then causes adjacent locations to similarly depolarize. Action potentials occur in several types of excitable cells, which include animal cells like neurons and muscle cells, as well as some plant cells. Certain endocrine cells such as pancreatic beta cells, and certain cells of the anterior pituitary gland are also excitable cells. In neurons, action potentials play a central role in cell–cell communication by providing for—or with regard to saltatory conduction, assisting—the propagation of signals along the neuron's axon toward synaptic boutons situated at the ends of an axon; these signals can then connect with other neurons at synapses, or to motor cells or glands. In other types of cells, their main function is to activate intracellular processes. In muscle cells, for example, an action potential is the first step in the chain of events leading to contraction. In beta cells of the pancreas, they provoke release of insulin. Action potentials in neurons are also known as "nerve impulses" or "spikes", and the temporal sequence of action potentials generated by a neuron is called its "spike train". A neuron that emits an action potential, or nerve impulse, is often said to "fire". Action potentials are generated by special types of voltage-gated ion channels embedded in a cell's plasma membrane. These channels are shut when the membrane potential is near the (negative) resting potential of the cell, but they rapidly begin to open if the membrane potential increases to a precisely defined threshold voltage, depolarising the transmembrane potential. When the channels open, they allow an inward flow of sodium ions, which changes the electrochemical gradient, which in turn produces a further rise in the membrane potential towards zero. This then causes more channels to open, producing a greater electric current across the cell membrane and so on. The process proceeds explosively until all of the available ion channels are open, resulting in a large upswing in the membrane potential. The rapid influx of sodium ions causes the polarity of the plasma membrane to reverse, and the ion channels then rapidly inactivate. As the sodium channels close, sodium ions can no longer enter the neuron, and they are then actively transported back out of the plasma membrane. Potassium channels are then activated, and there is an outward current of potassium ions, returning the electrochemical gradient to the resting state. After an action potential has occurred, there is a transient negative shift, called the afterhyperpolarization. In animal cells, there are two primary types of action potentials. One type is generated by voltage-gated sodium channels, the other by voltage-gated calcium channels. Sodium-based action potentials usually last for under one millisecond, but calcium-based action potentials may last for 100 milliseconds or longer. In some types of neurons, slow calcium spikes provide the driving force for a long burst of rapidly emitted sodium spikes. In cardiac muscle cells, on the other hand, an initial fast sodium spike provides a "primer" to provoke the rapid onset of a calcium spike, which then produces muscle contraction. Overview. Nearly all cell membranes in animals, plants and fungi maintain a voltage difference between the exterior and interior of the cell, called the membrane potential. 
A typical voltage across an animal cell membrane is −70 mV. This means that the interior of the cell has a negative voltage relative to the exterior. In most types of cells, the membrane potential usually stays fairly constant. Some types of cells, however, are electrically active in the sense that their voltages fluctuate over time. In some types of electrically active cells, including neurons and muscle cells, the voltage fluctuations frequently take the form of a rapid upward (positive) spike followed by a rapid fall. These up-and-down cycles are known as "action potentials". In some types of neurons, the entire up-and-down cycle takes place in a few thousandths of a second. In muscle cells, a typical action potential lasts about a fifth of a second. In plant cells, an action potential may last three seconds or more. The electrical properties of a cell are determined by the structure of its membrane. A cell membrane consists of a lipid bilayer of molecules in which larger protein molecules are embedded. The lipid bilayer is highly resistant to movement of electrically charged ions, so it functions as an insulator. The large membrane-embedded proteins, in contrast, provide channels through which ions can pass across the membrane. Action potentials are driven by channel proteins whose configuration switches between closed and open states as a function of the voltage difference between the interior and exterior of the cell. These voltage-sensitive proteins are known as voltage-gated ion channels. Process in a typical neuron. All cells in animal body tissues are electrically polarized – in other words, they maintain a voltage difference across the cell's plasma membrane, known as the membrane potential. This electrical polarization results from a complex interplay between protein structures embedded in the membrane called ion pumps and ion channels. In neurons, the types of ion channels in the membrane usually vary across different parts of the cell, giving the dendrites, axon, and cell body different electrical properties. As a result, some parts of the membrane of a neuron may be excitable (capable of generating action potentials), whereas others are not. Recent studies have shown that the most excitable part of a neuron is the part after the axon hillock (the point where the axon leaves the cell body), which is called the axonal initial segment, but the axon and cell body are also excitable in most cases. Each excitable patch of membrane has two important levels of membrane potential: the resting potential, which is the value the membrane potential maintains as long as nothing perturbs the cell, and a higher value called the threshold potential. At the axon hillock of a typical neuron, the resting potential is around –70 millivolts (mV) and the threshold potential is around –55 mV. Synaptic inputs to a neuron cause the membrane to depolarize or hyperpolarize; that is, they cause the membrane potential to rise or fall. Action potentials are triggered when enough depolarization accumulates to bring the membrane potential up to threshold. When an action potential is triggered, the membrane potential abruptly shoots upward and then equally abruptly shoots back downward, often ending below the resting level, where it remains for some period of time. The shape of the action potential is stereotyped; this means that the rise and fall usually have approximately the same amplitude and time course for all action potentials in a given cell. (Exceptions are discussed later in the article). 
In most neurons, the entire process takes place in about a thousandth of a second. Many types of neurons emit action potentials constantly at rates of up to 10–100 per second. However, some types are much quieter, and may go for minutes or longer without emitting any action potentials. Biophysical basis. Action potentials result from the presence in a cell's membrane of special types of voltage-gated ion channels. A voltage-gated ion channel is a transmembrane protein that has three key properties: Thus, a voltage-gated ion channel tends to be open for some values of the membrane potential, and closed for others. In most cases, however, the relationship between membrane potential and channel state is probabilistic and involves a time delay. Ion channels switch between conformations at unpredictable times: The membrane potential determines the rate of transitions and the probability per unit time of each type of transition. Voltage-gated ion channels are capable of producing action potentials because they can give rise to positive feedback loops: The membrane potential controls the state of the ion channels, but the state of the ion channels controls the membrane potential. Thus, in some situations, a rise in the membrane potential can cause ion channels to open, thereby causing a further rise in the membrane potential. An action potential occurs when this positive feedback cycle (Hodgkin cycle) proceeds explosively. The time and amplitude trajectory of the action potential are determined by the biophysical properties of the voltage-gated ion channels that produce it. Several types of channels capable of producing the positive feedback necessary to generate an action potential do exist. Voltage-gated sodium channels are responsible for the fast action potentials involved in nerve conduction. Slower action potentials in muscle cells and some types of neurons are generated by voltage-gated calcium channels. Each of these types comes in multiple variants, with different voltage sensitivity and different temporal dynamics. The most intensively studied type of voltage-dependent ion channels comprises the sodium channels involved in fast nerve conduction. These are sometimes known as Hodgkin-Huxley sodium channels because they were first characterized by Alan Hodgkin and Andrew Huxley in their Nobel Prize-winning studies of the biophysics of the action potential, but can more conveniently be referred to as "Na"V channels. (The "V" stands for "voltage".) An "Na"V channel has three possible states, known as "deactivated", "activated", and "inactivated". The channel is permeable only to sodium ions when it is in the "activated" state. When the membrane potential is low, the channel spends most of its time in the "deactivated" (closed) state. If the membrane potential is raised above a certain level, the channel shows increased probability of transitioning to the "activated" (open) state. The higher the membrane potential the greater the probability of activation. Once a channel has activated, it will eventually transition to the "inactivated" (closed) state. It tends then to stay inactivated for some time, but, if the membrane potential becomes low again, the channel will eventually transition back to the "deactivated" state. During an action potential, most channels of this type go through a cycle "deactivated"→"activated"→"inactivated"→"deactivated". This is only the population average behavior, however – an individual channel can in principle make any transition at any time. 
However, the likelihood of a channel's transitioning from the "inactivated" state directly to the "activated" state is very low: A channel in the "inactivated" state is refractory until it has transitioned back to the "deactivated" state. The outcome of all this is that the kinetics of the "Na"V channels are governed by a transition matrix whose rates are voltage-dependent in a complicated way. Since these channels themselves play a major role in determining the voltage, the global dynamics of the system can be quite difficult to work out. Hodgkin and Huxley approached the problem by developing a set of differential equations for the parameters that govern the ion channel states, known as the Hodgkin-Huxley equations. These equations have been extensively modified by later research, but form the starting point for most theoretical studies of action potential biophysics. As the membrane potential is increased, sodium ion channels open, allowing the entry of sodium ions into the cell. This is followed by the opening of potassium ion channels that permit the exit of potassium ions from the cell. The inward flow of sodium ions increases the concentration of positively charged cations in the cell and causes depolarization, where the potential of the cell is higher than the cell's resting potential. The sodium channels close at the peak of the action potential, while potassium continues to leave the cell. The efflux of potassium ions decreases the membrane potential or hyperpolarizes the cell. For small voltage increases from rest, the potassium current exceeds the sodium current and the voltage returns to its normal resting value, typically −70 mV. However, if the voltage increases past a critical threshold, typically 15 mV higher than the resting value, the sodium current dominates. This results in a runaway condition whereby the positive feedback from the sodium current activates even more sodium channels. Thus, the cell "fires", producing an action potential. The frequency at which a neuron elicits action potentials is often referred to as a firing rate or neural firing rate. Currents produced by the opening of voltage-gated channels in the course of an action potential are typically significantly larger than the initial stimulating current. Thus, the amplitude, duration, and shape of the action potential are determined largely by the properties of the excitable membrane and not the amplitude or duration of the stimulus. This all-or-nothing property of the action potential sets it apart from graded potentials such as receptor potentials, electrotonic potentials, subthreshold membrane potential oscillations, and synaptic potentials, which scale with the magnitude of the stimulus. A variety of action potential types exist in many cell types and cell compartments as determined by the types of voltage-gated channels, leak channels, channel distributions, ionic concentrations, membrane capacitance, temperature, and other factors. The principal ions involved in an action potential are sodium and potassium cations; sodium ions enter the cell, and potassium ions leave, restoring equilibrium. Relatively few ions need to cross the membrane for the membrane voltage to change drastically. The ions exchanged during an action potential, therefore, make a negligible change in the interior and exterior ionic concentrations. 
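The following is a minimal numerical sketch of the Hodgkin–Huxley formulation described above. It uses the standard textbook squid-axon parameters and a simple Euler integrator; the stimulus amplitude and timing are arbitrary illustrative choices, not values taken from this article.

```python
from math import exp

# Minimal Hodgkin-Huxley sketch with textbook squid-axon parameters and Euler
# integration. Units: mV, ms, uF/cm^2, mS/cm^2, uA/cm^2. Illustrative only.

C_m = 1.0                              # membrane capacitance
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # peak conductances
E_Na, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials

# Voltage-dependent opening/closing rates for the m, h and n gates.
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * exp(-(V + 65.0) / 80.0)

dt, t_max = 0.01, 50.0               # time step and duration (ms)
V, m, h, n = -65.0, 0.05, 0.6, 0.32  # approximate resting state
V_peak = V

for i in range(int(t_max / dt)):
    t = i * dt
    I_ext = 15.0 if 5.0 <= t < 7.0 else 0.0  # brief suprathreshold current pulse

    # Ionic currents given the present gate values (positive = outward).
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)

    # Euler updates of the membrane potential and the three gating variables.
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    V_peak = max(V_peak, V)

print(f"peak membrane potential: {V_peak:.1f} mV")          # overshoots well above 0 mV
print(f"membrane potential at {t_max:.0f} ms: {V:.1f} mV")  # back near rest
```

With these values the simulated membrane crosses threshold during the pulse, overshoots to roughly +40 mV, then repolarizes and briefly dips below the resting level, mirroring the rising, peak, falling, and afterhyperpolarization phases described later in this article.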
The few ions that do cross are pumped out again by the continuous action of the sodium–potassium pump, which, with other ion transporters, maintains the normal ratio of ion concentrations across the membrane. Calcium cations and chloride anions are involved in a few types of action potentials, such as the cardiac action potential and the action potential in the single-cell alga "Acetabularia", respectively. Although action potentials are generated locally on patches of excitable membrane, the resulting currents can trigger action potentials on neighboring stretches of membrane, precipitating a domino-like propagation. In contrast to passive spread of electric potentials (electrotonic potential), action potentials are generated anew along excitable stretches of membrane and propagate without decay. Myelinated sections of axons are not excitable and do not produce action potentials and the signal is propagated passively as electrotonic potential. Regularly spaced unmyelinated patches, called the nodes of Ranvier, generate action potentials to boost the signal. Known as saltatory conduction, this type of signal propagation provides a favorable tradeoff of signal velocity and axon diameter. Depolarization of axon terminals, in general, triggers the release of neurotransmitter into the synaptic cleft. In addition, backpropagating action potentials have been recorded in the dendrites of pyramidal neurons, which are ubiquitous in the neocortex. These are thought to have a role in spike-timing-dependent plasticity. In the Hodgkin–Huxley membrane capacitance model, the speed of transmission of an action potential was undefined and it was assumed that adjacent areas became depolarized due to released ion interference with neighbouring channels. Measurements of ion diffusion and radii have since shown this not to be possible. Moreover, contradictory measurements of entropy changes and timing disputed the capacitance model as acting alone. Alternatively, Gilbert Ling's adsorption hypothesis posits that the membrane potential and action potential of a living cell are due to the adsorption of mobile ions onto adsorption sites of cells. Maturation of the electrical properties of the action potential. A neuron's ability to generate and propagate an action potential changes during development. How much the membrane potential of a neuron changes as the result of a current impulse is a function of the membrane input resistance. As a cell grows, more channels are added to the membrane, causing a decrease in input resistance. A mature neuron also undergoes shorter changes in membrane potential in response to synaptic currents. Neurons from a ferret lateral geniculate nucleus have a longer time constant and larger voltage deflection at P0 than they do at P30. One consequence of the decreasing action potential duration is that the fidelity of the signal can be preserved in response to high frequency stimulation. Immature neurons are more prone to synaptic depression than potentiation after high frequency stimulation. In the early development of many organisms, the action potential is actually initially carried by calcium current rather than sodium current. The opening and closing kinetics of calcium channels during development are slower than those of the voltage-gated sodium channels that will carry the action potential in the mature neurons. The longer opening times for the calcium channels can lead to action potentials that are considerably slower than those of mature neurons.
Xenopus neurons initially have action potentials that take 60–90 ms. During development, this time decreases to 1 ms. There are two reasons for this drastic decrease. First, the inward current becomes primarily carried by sodium channels. Second, the delayed rectifier, a potassium channel current, increases to 3.5 times its initial strength. In order for the transition from a calcium-dependent action potential to a sodium-dependent action potential to proceed, new channels must be added to the membrane. If Xenopus neurons are grown in an environment with RNA synthesis or protein synthesis inhibitors, that transition is prevented. Even the electrical activity of the cell itself may play a role in channel expression. If action potentials in Xenopus myocytes are blocked, the typical increase in sodium and potassium current density is prevented or delayed. This maturation of electrical properties is seen across species. Xenopus sodium and potassium currents increase drastically after a neuron goes through its final phase of mitosis. The sodium current density of rat cortical neurons increases by 600% within the first two postnatal weeks. Neurotransmission. Anatomy of a neuron. (Figure: structure of a typical neuron, labelling the dendrites, soma, nucleus, axon hillock, axon, myelin sheath, Schwann cell, nodes of Ranvier, and axon terminal.) Several types of cells support an action potential, such as plant cells, muscle cells, and the specialized cells of the heart (in which occurs the cardiac action potential). However, the main excitable cell is the neuron, which also has the simplest mechanism for the action potential. Neurons are electrically excitable cells composed, in general, of one or more dendrites, a single soma, a single axon and one or more axon terminals. Dendrites are cellular projections whose primary function is to receive synaptic signals. Their protrusions, known as dendritic spines, are designed to capture the neurotransmitters released by the presynaptic neuron. They have a high concentration of ligand-gated ion channels. These spines have a thin neck connecting a bulbous protrusion to the dendrite. This ensures that changes occurring inside the spine are less likely to affect the neighboring spines. The dendritic spine can, with rare exception (see LTP), act as an independent unit. The dendrites extend from the soma, which houses the nucleus, and many of the "normal" eukaryotic organelles. Unlike the spines, the surface of the soma is populated by voltage-activated ion channels. These channels help transmit the signals generated by the dendrites. Emerging from the soma is the axon hillock. This region is characterized by having a very high concentration of voltage-activated sodium channels. In general, it is considered to be the spike initiation zone for action potentials, i.e. the trigger zone. Multiple signals generated at the spines, and transmitted by the soma, all converge here. Immediately after the axon hillock is the axon. This is a thin tubular protrusion traveling away from the soma. The axon is insulated by a myelin sheath. Myelin is composed of either Schwann cells (in the peripheral nervous system) or oligodendrocytes (in the central nervous system), both of which are types of glial cells. Although glial cells are not involved with the transmission of electrical signals, they communicate and provide important biochemical support to neurons. To be specific, myelin wraps multiple times around the axonal segment, forming a thick fatty layer that prevents ions from entering or escaping the axon.
This insulation prevents significant signal decay as well as ensuring faster signal speed. This insulation, however, has the restriction that no channels can be present on the surface of the axon. There are, therefore, regularly spaced patches of membrane, which have no insulation. These nodes of Ranvier can be considered to be "mini axon hillocks", as their purpose is to boost the signal in order to prevent significant signal decay. At the furthest end, the axon loses its insulation and begins to branch into several axon terminals. These presynaptic terminals, or synaptic boutons, are a specialized area within the axon of the presynaptic cell that contains neurotransmitters enclosed in small membrane-bound spheres called synaptic vesicles. Initiation. Before considering the propagation of action potentials along axons and their termination at the synaptic knobs, it is helpful to consider the methods by which action potentials can be initiated at the axon hillock. The basic requirement is that the membrane voltage at the hillock be raised above the threshold for firing. There are several ways in which this depolarization can occur. Dynamics. Action potentials are most commonly initiated by excitatory postsynaptic potentials from a presynaptic neuron. Typically, neurotransmitter molecules are released by the presynaptic neuron. These neurotransmitters then bind to receptors on the postsynaptic cell. This binding opens various types of ion channels. This opening has the further effect of changing the local permeability of the cell membrane and, thus, the membrane potential. If the binding increases the voltage (depolarizes the membrane), the synapse is excitatory. If, however, the binding decreases the voltage (hyperpolarizes the membrane), it is inhibitory. Whether the voltage is increased or decreased, the change propagates passively to nearby regions of the membrane (as described by the cable equation and its refinements). Typically, the voltage stimulus decays exponentially with the distance from the synapse and with time from the binding of the neurotransmitter. Some fraction of an excitatory voltage may reach the axon hillock and may (in rare cases) depolarize the membrane enough to provoke a new action potential. More typically, the excitatory potentials from several synapses must work together at nearly the same time to provoke a new action potential. Their joint efforts can be thwarted, however, by the counteracting inhibitory postsynaptic potentials. Neurotransmission can also occur through electrical synapses. Due to the direct connection between excitable cells in the form of gap junctions, an action potential can be transmitted directly from one cell to the next in either direction. The free flow of ions between cells enables rapid non-chemical-mediated transmission. Rectifying channels ensure that action potentials move only in one direction through an electrical synapse. Electrical synapses are found in all nervous systems, including the human brain, although they are a distinct minority. "All-or-none" principle. The amplitude of an action potential is often thought to be independent of the amount of current that produced it. In other words, larger currents do not create larger action potentials. Therefore, action potentials are said to be all-or-none signals, since either they occur fully or they do not occur at all. This is in contrast to receptor potentials, whose amplitudes are dependent on the intensity of a stimulus. 
In both cases, the frequency of action potentials is correlated with the intensity of a stimulus. Despite the classical view of the action potential as a stereotyped, uniform signal having dominated the field of neuroscience for many decades, newer evidence does suggest that action potentials are more complex events indeed capable of transmitting information through not just their amplitude, but their duration and phase as well, sometimes even up to distances originally not thought to be possible. Sensory neurons. In sensory neurons, an external signal such as pressure, temperature, light, or sound is coupled with the opening and closing of ion channels, which in turn alter the ionic permeabilities of the membrane and its voltage. These voltage changes can again be excitatory (depolarizing) or inhibitory (hyperpolarizing) and, in some sensory neurons, their combined effects can depolarize the axon hillock enough to provoke action potentials. Some examples in humans include the olfactory receptor neuron and Meissner's corpuscle, which are critical for the sense of smell and touch, respectively. However, not all sensory neurons convert their external signals into action potentials; some do not even have an axon. Instead, they may convert the signal into the release of a neurotransmitter, or into continuous graded potentials, either of which may stimulate subsequent neuron(s) into firing an action potential. For illustration, in the human ear, hair cells convert the incoming sound into the opening and closing of mechanically gated ion channels, which may cause neurotransmitter molecules to be released. In similar manner, in the human retina, the initial photoreceptor cells and the next layer of cells (comprising bipolar cells and horizontal cells) do not produce action potentials; only some amacrine cells and the third layer, the ganglion cells, produce action potentials, which then travel up the optic nerve. Pacemaker potentials. In sensory neurons, action potentials result from an external stimulus. However, some excitable cells require no such stimulus to fire: They spontaneously depolarize their axon hillock and fire action potentials at a regular rate, like an internal clock. The voltage traces of such cells are known as pacemaker potentials. The cardiac pacemaker cells of the sinoatrial node in the heart provide a good example. Although such pacemaker potentials have a natural rhythm, it can be adjusted by external stimuli; for instance, heart rate can be altered by pharmaceuticals as well as signals from the sympathetic and parasympathetic nerves. The external stimuli do not cause the cell's repetitive firing, but merely alter its timing. In some cases, the regulation of frequency can be more complex, leading to patterns of action potentials, such as bursting. Phases. The course of the action potential can be divided into five parts: the rising phase, the peak phase, the falling phase, the undershoot phase, and the refractory period. During the rising phase the membrane potential depolarizes (becomes more positive). The point at which depolarization stops is called the peak phase. At this stage, the membrane potential reaches a maximum. Subsequent to this, there is a falling phase. During this stage the membrane potential becomes more negative, returning towards resting potential. The undershoot, or afterhyperpolarization, phase is the period during which the membrane potential temporarily becomes more negatively charged than when at rest (hyperpolarized). 
Finally, the time during which a subsequent action potential is impossible or difficult to fire is called the refractory period, which may overlap with the other phases. The course of the action potential is determined by two coupled effects. First, voltage-sensitive ion channels open and close in response to changes in the membrane voltage "Vm". This changes the membrane's permeability to those ions. Second, according to the Goldman equation, this change in permeability changes the equilibrium potential "Em", and, thus, the membrane voltage "Vm". Thus, the membrane potential affects the permeability, which then further affects the membrane potential. This sets up the possibility for positive feedback, which is a key part of the rising phase of the action potential. A complicating factor is that a single ion channel may have multiple internal "gates" that respond to changes in "Vm" in opposite ways, or at different rates. For example, although raising "Vm" "opens" most gates in the voltage-sensitive sodium channel, it also "closes" the channel's "inactivation gate", albeit more slowly. Hence, when "Vm" is raised suddenly, the sodium channels open initially, but then close due to the slower inactivation. The voltages and currents of the action potential in all of its phases were modeled accurately by Alan Lloyd Hodgkin and Andrew Huxley in 1952, for which they were awarded the Nobel Prize in Physiology or Medicine in 1963. However, their model considers only two types of voltage-sensitive ion channels, and makes several assumptions about them, e.g., that their internal gates open and close independently of one another. In reality, there are many types of ion channels, and they do not always open and close independently. Stimulation and rising phase. A typical action potential begins at the axon hillock with a sufficiently strong depolarization, e.g., a stimulus that increases "Vm". This depolarization is often caused by the injection of extra sodium cations into the cell; these cations can come from a wide variety of sources, such as chemical synapses, sensory neurons or pacemaker potentials. For a neuron at rest, there is a high concentration of sodium and chloride ions in the extracellular fluid compared to the intracellular fluid, while there is a high concentration of potassium ions in the intracellular fluid compared to the extracellular fluid. The difference in concentrations, which causes ions to move from a high to a low concentration, and electrostatic effects (attraction of opposite charges) are responsible for the movement of ions in and out of the neuron. The inside of a neuron has a negative charge, relative to the cell exterior, from the movement of K+ out of the cell. The neuron membrane is more permeable to K+ than to other ions, allowing this ion to selectively move out of the cell, down its concentration gradient. This concentration gradient along with potassium leak channels present on the membrane of the neuron causes an efflux of potassium ions making the resting potential close to "E"K ≈ –75 mV. Since Na+ ions are in higher concentrations outside of the cell, the concentration and voltage differences both drive them into the cell when Na+ channels open. Depolarization opens both the sodium and potassium channels in the membrane, allowing the ions to flow into and out of the axon, respectively. 
If the depolarization is small (say, increasing "Vm" from −70 mV to −60 mV), the outward potassium current overwhelms the inward sodium current and the membrane repolarizes back to its normal resting potential around −70 mV. However, if the depolarization is large enough, the inward sodium current increases more than the outward potassium current and a runaway condition (positive feedback) results: the more inward current there is, the more "Vm" increases, which in turn further increases the inward current. A sufficiently strong depolarization (increase in "Vm") causes the voltage-sensitive sodium channels to open; the increasing permeability to sodium drives "Vm" closer to the sodium equilibrium voltage "E"Na ≈ +55 mV. The increasing voltage in turn causes even more sodium channels to open, which pushes "Vm" still further towards "E"Na. This positive feedback continues until the sodium channels are fully open and "Vm" is close to "E"Na. The sharp rise in "Vm" and sodium permeability correspond to the "rising phase" of the action potential. The critical threshold voltage for this runaway condition is usually around −45 mV, but it depends on the recent activity of the axon. A cell that has just fired an action potential cannot fire another one immediately, since the Na+ channels have not recovered from the inactivated state. The period during which no new action potential can be fired is called the "absolute refractory period". At longer times, after some but not all of the ion channels have recovered, the axon can be stimulated to produce another action potential, but with a higher threshold, requiring a much stronger depolarization, e.g., to −30 mV. The period during which action potentials are unusually difficult to evoke is called the "relative refractory period". Peak phase. The positive feedback of the rising phase slows and comes to a halt as the sodium ion channels become maximally open. At the peak of the action potential, the sodium permeability is maximized and the membrane voltage "Vm" is nearly equal to the sodium equilibrium voltage "E"Na. However, the same raised voltage that opened the sodium channels initially also slowly shuts them off, by closing their pores; the sodium channels become "inactivated". This lowers the membrane's permeability to sodium relative to potassium, driving the membrane voltage back towards the resting value. At the same time, the raised voltage opens voltage-sensitive potassium channels; the increase in the membrane's potassium permeability drives "Vm" towards "E"K. Combined, these changes in sodium and potassium permeability cause "Vm" to drop quickly, repolarizing the membrane and producing the "falling phase" of the action potential. Afterhyperpolarization. The depolarized voltage opens additional voltage-dependent potassium channels, and some of these do not close right away when the membrane returns to its normal resting voltage. In addition, further potassium channels open in response to the influx of calcium ions during the action potential. The membrane's permeability to potassium is therefore transiently unusually high, driving the membrane voltage "Vm" even closer to the potassium equilibrium voltage "E"K. The membrane potential goes below the resting membrane potential. Hence, there is an undershoot or hyperpolarization, termed an afterhyperpolarization, that persists until the membrane potassium permeability returns to its usual value, restoring the membrane potential to the resting state. Refractory period.
Each action potential is followed by a refractory period, which can be divided into an "absolute refractory period", during which it is impossible to evoke another action potential, and then a "relative refractory period", during which a stronger-than-usual stimulus is required. These two refractory periods are caused by changes in the state of sodium and potassium channel molecules. When closing after an action potential, sodium channels enter an "inactivated" state, in which they cannot be made to open regardless of the membrane potential—this gives rise to the absolute refractory period. Even after a sufficient number of sodium channels have transitioned back to their resting state, it frequently happens that a fraction of potassium channels remains open, making it difficult for the membrane potential to depolarize, and thereby giving rise to the relative refractory period. Because the density and subtypes of potassium channels may differ greatly between different types of neurons, the duration of the relative refractory period is highly variable. The absolute refractory period is largely responsible for the unidirectional propagation of action potentials along axons. At any given moment, the patch of axon behind the actively spiking part is refractory, but the patch in front, not having been activated recently, is capable of being stimulated by the depolarization from the action potential. Propagation. The action potential generated at the axon hillock propagates as a wave along the axon. The currents flowing inwards at a point on the axon during an action potential spread out along the axon, and depolarize the adjacent sections of its membrane. If sufficiently strong, this depolarization provokes a similar action potential at the neighboring membrane patches. This basic mechanism was demonstrated by Alan Lloyd Hodgkin in 1937. After crushing or cooling nerve segments and thus blocking the action potentials, he showed that an action potential arriving on one side of the block could provoke another action potential on the other, provided that the blocked segment was sufficiently short. Once an action potential has occurred at a patch of membrane, the membrane patch needs time to recover before it can fire again. At the molecular level, this "absolute refractory period" corresponds to the time required for the voltage-activated sodium channels to recover from inactivation, i.e., to return to their closed state. There are many types of voltage-activated potassium channels in neurons. Some of them inactivate fast (A-type currents) and some of them inactivate slowly or not inactivate at all; this variability guarantees that there will be always an available source of current for repolarization, even if some of the potassium channels are inactivated because of preceding depolarization. On the other hand, all neuronal voltage-activated sodium channels inactivate within several milliseconds during strong depolarization, thus making following depolarization impossible until a substantial fraction of sodium channels have returned to their closed state. Although it limits the frequency of firing, the absolute refractory period ensures that the action potential moves in only one direction along an axon. The currents flowing in due to an action potential spread out in both directions along the axon. 
However, only the unfired part of the axon can respond with an action potential; the part that has just fired is unresponsive until the action potential is safely out of range and cannot restimulate that part. In the usual orthodromic conduction, the action potential propagates from the axon hillock towards the synaptic knobs (the axonal termini); propagation in the opposite direction—known as antidromic conduction—is very rare. However, if a laboratory axon is stimulated in its middle, both halves of the axon are "fresh", i.e., unfired; then two action potentials will be generated, one traveling towards the axon hillock and the other traveling towards the synaptic knobs. Myelin and saltatory conduction. In order to enable fast and efficient transduction of electrical signals in the nervous system, certain neuronal axons are covered with myelin sheaths. Myelin is a multilamellar membrane that enwraps the axon in segments separated by intervals known as nodes of Ranvier. It is produced by specialized cells: Schwann cells exclusively in the peripheral nervous system, and oligodendrocytes exclusively in the central nervous system. The myelin sheath reduces membrane capacitance and increases membrane resistance in the inter-node intervals, thus allowing a fast, saltatory movement of action potentials from node to node. Myelination is found mainly in vertebrates, but an analogous system has been discovered in a few invertebrates, such as some species of shrimp. Not all neurons in vertebrates are myelinated; for example, axons of the neurons comprising the autonomic nervous system are not, in general, myelinated. Myelin prevents ions from entering or leaving the axon along myelinated segments. As a general rule, myelination increases the conduction velocity of action potentials and makes them more energy-efficient. Whether saltatory or not, the mean conduction velocity of an action potential ranges from 1 meter per second (m/s) to over 100 m/s, and, in general, increases with axonal diameter. Action potentials cannot propagate through the membrane in myelinated segments of the axon. However, the current is carried by the cytoplasm, which is sufficient to depolarize the first or second subsequent node of Ranvier. Thus, the ionic current from an action potential at one node of Ranvier provokes another action potential at the next node; this apparent "hopping" of the action potential from node to node is known as saltatory conduction. Although the mechanism of saltatory conduction was suggested in 1925 by Ralph Lillie, the first experimental evidence for saltatory conduction came from Ichiji Tasaki and Taiji Takeuchi and from Andrew Huxley and Robert Stämpfli. By contrast, in unmyelinated axons, the action potential provokes another in the membrane immediately adjacent, and moves continuously down the axon like a wave. Myelin has two important advantages: fast conduction speed and energy efficiency. For axons larger than a minimum diameter (roughly 1 micrometre), myelination increases the conduction velocity of an action potential, typically tenfold. Conversely, for a given conduction velocity, myelinated fibers are smaller than their unmyelinated counterparts. For example, action potentials move at roughly the same speed (25 m/s) in a myelinated frog axon and an unmyelinated squid giant axon, but the frog axon has a roughly 30-fold smaller diameter and 1000-fold smaller cross-sectional area.
Also, since the ionic currents are confined to the nodes of Ranvier, far fewer ions "leak" across the membrane, saving metabolic energy. This saving is a significant selective advantage, since the human nervous system uses approximately 20% of the body's metabolic energy. The length of axons' myelinated segments is important to the success of saltatory conduction. They should be as long as possible to maximize the speed of conduction, but not so long that the arriving signal is too weak to provoke an action potential at the next node of Ranvier. In nature, myelinated segments are generally long enough for the passively propagated signal to travel for at least two nodes while retaining enough amplitude to fire an action potential at the second or third node. Thus, the safety factor of saltatory conduction is high, allowing transmission to bypass nodes in case of injury. However, action potentials may end prematurely in certain places where the safety factor is low, even in unmyelinated neurons; a common example is the branch point of an axon, where it divides into two axons. Some diseases degrade myelin and impair saltatory conduction, reducing the conduction velocity of action potentials. The most well-known of these is multiple sclerosis, in which the breakdown of myelin impairs coordinated movement. Cable theory. The flow of currents within an axon can be described quantitatively by cable theory and its elaborations, such as the compartmental model. Cable theory was developed in 1855 by Lord Kelvin to model the transatlantic telegraph cable and was shown to be relevant to neurons by Hodgkin and Rushton in 1946. In simple cable theory, the neuron is treated as an electrically passive, perfectly cylindrical transmission cable, which can be described by a partial differential equation formula_0 where "V"("x", "t") is the voltage across the membrane at a time "t" and a position "x" along the length of the neuron, and where λ and τ are the characteristic length and time scales on which those voltages decay in response to a stimulus. Referring to the circuit diagram on the right, these scales can be determined from the resistances and capacitances per unit length. formula_1 formula_2 These time and length-scales can be used to understand the dependence of the conduction velocity on the diameter of the neuron in unmyelinated fibers. For example, the time-scale τ increases with both the membrane resistance "rm" and capacitance "cm". As the capacitance increases, more charge must be transferred to produce a given transmembrane voltage (by the equation "Q" = "CV"); as the resistance increases, less charge is transferred per unit time, making the equilibration slower. In a similar manner, if the internal resistance per unit length "ri" is lower in one axon than in another (e.g., because the radius of the former is larger), the spatial decay length λ becomes longer and the conduction velocity of an action potential should increase. If the transmembrane resistance "rm" is increased, that lowers the average "leakage" current across the membrane, likewise causing "λ" to become longer, increasing the conduction velocity. Termination. Chemical synapses. In general, action potentials that reach the synaptic knobs cause a neurotransmitter to be released into the synaptic cleft. Neurotransmitters are small molecules that may open ion channels in the postsynaptic cell; most axons have the same neurotransmitter at all of their termini. 
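Returning briefly to the cable-theory constants defined above, the short Python sketch below computes the time constant τ = rm·cm and the length constant λ = √(rm/ri) for a passive fiber and evaluates the steady-state exponential decay of a voltage step along it. The parameter values are assumptions of roughly the order reported for large unmyelinated axons, chosen only so the units work out; they are not taken from this article.

```python
import numpy as np

# Illustrative per-unit-length cable parameters (assumed, not measured values)
r_m = 6.4e3   # membrane resistance times unit length, ohm*cm
c_m = 1.6e-7  # membrane capacitance per unit length, F/cm
r_i = 1.5e4   # intracellular (axial) resistance per unit length, ohm/cm

tau = r_m * c_m            # membrane time constant, seconds
lam = np.sqrt(r_m / r_i)   # length constant, cm

# Steady-state passive decay of a voltage step V0 applied at x = 0: V(x) = V0 * exp(-x/lambda)
V0 = 10.0                                  # mV above rest
x = np.linspace(0.0, 3 * lam, 4)
V = V0 * np.exp(-x / lam)

print(f"tau = {tau * 1e3:.2f} ms, lambda = {lam * 10:.2f} mm")
for xi, vi in zip(x, V):
    print(f"x = {xi * 10:5.2f} mm -> V = {vi:6.3f} mV")
```

With these assumed numbers the sketch reproduces the qualitative point made above: a larger rm or smaller ri lengthens λ, so the passively spreading depolarization reaches farther before decaying.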
The arrival of the action potential opens voltage-sensitive calcium channels in the presynaptic membrane; the influx of calcium causes vesicles filled with neurotransmitter to migrate to the cell's surface and release their contents into the synaptic cleft. This complex process is inhibited by the neurotoxins tetanospasmin and botulinum toxin, which are responsible for tetanus and botulism, respectively. Electrical synapses. Some synapses dispense with the "middleman" of the neurotransmitter, and connect the presynaptic and postsynaptic cells together. When an action potential reaches such a synapse, the ionic currents flowing into the presynaptic cell can cross the barrier of the two cell membranes and enter the postsynaptic cell through pores known as connexons. Thus, the ionic currents of the presynaptic action potential can directly stimulate the postsynaptic cell. Electrical synapses allow for faster transmission because they do not require the slow diffusion of neurotransmitters across the synaptic cleft. Hence, electrical synapses are used whenever fast response and coordination of timing are crucial, as in escape reflexes, the retina of vertebrates, and the heart. Neuromuscular junctions. A special case of a chemical synapse is the neuromuscular junction, in which the axon of a motor neuron terminates on a muscle fiber. In such cases, the released neurotransmitter is acetylcholine, which binds to the acetylcholine receptor, an integral membrane protein in the membrane (the "sarcolemma") of the muscle fiber. However, the acetylcholine does not remain bound; rather, it dissociates and is hydrolyzed by the enzyme, acetylcholinesterase, located in the synapse. This enzyme quickly reduces the stimulus to the muscle, which allows the degree and timing of muscular contraction to be regulated delicately. Some poisons inactivate acetylcholinesterase to prevent this control, such as the nerve agents sarin and tabun, and the insecticides diazinon and malathion. Other cell types. Cardiac action potentials. The cardiac action potential differs from the neuronal action potential by having an extended plateau, in which the membrane is held at a high voltage for a few hundred milliseconds prior to being repolarized by the potassium current as usual. This plateau is due to the action of slower calcium channels opening and holding the membrane voltage near their equilibrium potential even after the sodium channels have inactivated. The cardiac action potential plays an important role in coordinating the contraction of the heart. The cardiac cells of the sinoatrial node provide the pacemaker potential that synchronizes the heart. The action potentials of those cells propagate to and through the atrioventricular node (AV node), which is normally the only conduction pathway between the atria and the ventricles. Action potentials from the AV node travel through the bundle of His and thence to the Purkinje fibers. Conversely, anomalies in the cardiac action potential—whether due to a congenital mutation or injury—can lead to human pathologies, especially arrhythmias. Several anti-arrhythmia drugs act on the cardiac action potential, such as quinidine, lidocaine, beta blockers, and verapamil. Muscular action potentials. The action potential in a normal skeletal muscle cell is similar to the action potential in neurons. 
Action potentials result from the depolarization of the cell membrane (the sarcolemma), which opens voltage-sensitive sodium channels; these become inactivated and the membrane is repolarized through the outward current of potassium ions. The resting potential prior to the action potential is typically −90 mV, somewhat more negative than that of typical neurons. The muscle action potential lasts roughly 2–4 ms, the absolute refractory period is roughly 1–3 ms, and the conduction velocity along the muscle is roughly 5 m/s. The action potential releases calcium ions that free up the tropomyosin and allow the muscle to contract. Muscle action potentials are provoked by the arrival of a pre-synaptic neuronal action potential at the neuromuscular junction, which is a common target for neurotoxins. Plant action potentials. Plant and fungal cells are also electrically excitable. The fundamental difference from animal action potentials is that the depolarization in plant cells is not accomplished by an uptake of positive sodium ions, but by release of negative "chloride" ions. In 1906, J. C. Bose published the first measurements of action potentials in plants, which had previously been discovered by Burdon-Sanderson and Darwin. An increase in cytoplasmic calcium ions may be the cause of anion release into the cell. This makes calcium a precursor to ion movements, such as the influx of negative chloride ions and efflux of positive potassium ions, as seen in barley leaves. The initial influx of calcium ions also produces a small cellular depolarization, causing the voltage-gated ion channels to open and allowing full depolarization to be propagated by chloride ions. Some plants (e.g. "Dionaea muscipula") use sodium-gated channels to operate plant movements and "count" stimulation events to determine if a threshold for movement is met. "Dionaea muscipula", also known as the Venus flytrap, is found in subtropical wetlands in North and South Carolina. When there are poor soil nutrients, the flytrap relies on a diet of insects and animals. Despite research on the plant, the molecular basis of the behaviour of the Venus flytrap, and of carnivorous plants in general, remains poorly understood. However, a good deal of research has been done on action potentials and how they govern movement and timing within the Venus flytrap. To start, the resting membrane potential of the Venus flytrap (−120 mV) is lower than that of animal cells (usually −90 mV to −40 mV). The lower resting potential makes it easier to activate an action potential. When an insect lands on the trap of the plant, it triggers a hair-like mechanoreceptor. This receptor then activates an action potential that lasts around 1.5 ms. This causes an influx of positive calcium ions into the cell, slightly depolarizing it. However, the flytrap does not close after one trigger. Instead, it requires the activation of two or more hairs. If only one hair is triggered, the plant disregards the activation as a false positive. Further, the second hair must be activated within a certain time interval (0.75–40 s) for it to register with the first activation. Thus, calcium begins to build up after the first trigger and then slowly decays. When the second action potential is fired within the time interval, it reaches the calcium threshold to depolarize the cell, closing the trap on the prey within a fraction of a second. Together with the subsequent release of positive potassium ions, the action potential in plants involves an osmotic loss of salt (KCl).
In contrast, the animal action potential is osmotically neutral because equal amounts of entering sodium and leaving potassium cancel each other osmotically. The interaction of electrical and osmotic relations in plant cells appears to have arisen from an osmotic function of electrical excitability in a common unicellular ancestor of plants and animals under changing salinity conditions. Further, the present function of rapid signal transmission is seen as a newer accomplishment of metazoan cells in a more stable osmotic environment. It is likely that the familiar signaling function of action potentials in some vascular plants (e.g. "Mimosa pudica") arose independently from that in metazoan excitable cells. Unlike the rising phase and peak, the falling phase and after-hyperpolarization seem to depend primarily on cations that are not calcium. To initiate repolarization, the cell requires movement of potassium out of the cell through passive transport across the membrane. This differs from neurons because the movement of potassium does not dominate the decrease in membrane potential. To fully repolarize, a plant cell requires energy in the form of ATP to assist in the release of hydrogen from the cell – utilizing a transporter called proton ATPase. Taxonomic distribution and evolutionary advantages. Action potentials are found throughout multicellular organisms, including plants, invertebrates such as insects, and vertebrates such as reptiles and mammals. Sponges seem to be the main phylum of multicellular eukaryotes that does not transmit action potentials, although some studies have suggested that these organisms have a form of electrical signaling, too. The resting potential, as well as the size and duration of the action potential, have not varied much with evolution, although the conduction velocity does vary dramatically with axonal diameter and myelination. Given its conservation throughout evolution, the action potential seems to confer evolutionary advantages. One function of action potentials is rapid, long-range signaling within the organism; the conduction velocity can exceed 110 m/s, which is one-third the speed of sound. For comparison, a hormone molecule carried in the bloodstream moves at roughly 8 m/s in large arteries. Part of this function is the tight coordination of mechanical events, such as the contraction of the heart. A second function is the computation associated with its generation. Being an all-or-none signal that does not decay with transmission distance, the action potential has similar advantages to digital electronics. The integration of various dendritic signals at the axon hillock and its thresholding to form a complex train of action potentials is another form of computation, one that has been exploited biologically to form central pattern generators and mimicked in artificial neural networks. The common prokaryotic/eukaryotic ancestor, which lived perhaps four billion years ago, is believed to have had voltage-gated channels. This functionality was likely, at some later point, cross-purposed to provide a communication mechanism. Even modern single-celled bacteria can utilize action potentials to communicate with other bacteria in the same biofilm. Experimental methods. The study of action potentials has required the development of new experimental methods.
The initial work, prior to 1955, was carried out primarily by Alan Lloyd Hodgkin and Andrew Fielding Huxley, who were, along with John Carew Eccles, awarded the 1963 Nobel Prize in Physiology or Medicine for their contribution to the description of the ionic basis of nerve conduction. It focused on three goals: isolating signals from single neurons or axons, developing fast, sensitive electronics, and shrinking electrodes enough that the voltage inside a single cell could be recorded. The first problem was solved by studying the giant axons found in the neurons of the squid ("Loligo forbesii" and "Doryteuthis pealeii", at the time classified as "Loligo pealeii"). These axons are so large in diameter (roughly 1 mm, or 100-fold larger than a typical neuron) that they can be seen with the naked eye, making them easy to extract and manipulate. However, they are not representative of all excitable cells, and numerous other systems with action potentials have been studied. The second problem was addressed with the crucial development of the voltage clamp, which permitted experimenters to study the ionic currents underlying an action potential in isolation, and eliminated a key source of electronic noise, the current "IC" associated with the capacitance "C" of the membrane. Since the current equals "C" times the rate of change of the transmembrane voltage "Vm", the solution was to design a circuit that kept "Vm" fixed (zero rate of change) regardless of the currents flowing across the membrane. Thus, the current required to keep "Vm" at a fixed value is a direct reflection of the current flowing through the membrane. Other electronic advances included the use of Faraday cages and electronics with high input impedance, so that the measurement itself did not affect the voltage being measured. The third problem, that of obtaining electrodes small enough to record voltages within a single axon without perturbing it, was solved in 1949 with the invention of the glass micropipette electrode, which was quickly adopted by other researchers. Refinements of this method are able to produce electrode tips that are as fine as 100 Å (10 nm), which also confers high input impedance. Action potentials may also be recorded with small metal electrodes placed just next to a neuron, with neurochips containing EOSFETs, or optically with dyes that are sensitive to Ca2+ or to voltage. While glass micropipette electrodes measure the sum of the currents passing through many ion channels, studying the electrical properties of a single ion channel became possible in the 1970s with the development of the patch clamp by Erwin Neher and Bert Sakmann. For this discovery, they were awarded the Nobel Prize in Physiology or Medicine in 1991. Patch-clamping verified that ionic channels have discrete states of conductance, such as open, closed and inactivated. Optical imaging technologies have been developed in recent years to measure action potentials, either via simultaneous multisite recordings or with ultra-high spatial resolution. Using voltage-sensitive dyes, action potentials have been optically recorded from a tiny patch of cardiomyocyte membrane. Neurotoxins. Several neurotoxins, both natural and synthetic, function by blocking the action potential.
Tetrodotoxin from the pufferfish and saxitoxin from the "Gonyaulax" (the dinoflagellate genus responsible for "red tides") block action potentials by inhibiting the voltage-sensitive sodium channel; similarly, dendrotoxin from the black mamba snake inhibits the voltage-sensitive potassium channel. Such inhibitors of ion channels serve an important research purpose, by allowing scientists to "turn off" specific channels at will, thus isolating the other channels' contributions; they can also be useful in purifying ion channels by affinity chromatography or in assaying their concentration. However, such inhibitors also make effective neurotoxins, and have been considered for use as chemical weapons. Neurotoxins aimed at the ion channels of insects have been effective insecticides; one example is the synthetic permethrin, which prolongs the activation of the sodium channels involved in action potentials. The ion channels of insects are sufficiently different from their human counterparts that there are few side effects in humans. History. The role of electricity in the nervous systems of animals was first observed in dissected frogs by Luigi Galvani, who studied it from 1791 to 1797. Galvani's results inspired Alessandro Volta to develop the Voltaic pile—the earliest-known electric battery—with which he studied animal electricity (such as electric eels) and the physiological responses to applied direct-current voltages. In the 19th century, scientists studied the propagation of electrical signals in whole nerves (i.e., bundles of neurons) and demonstrated that nervous tissue was made up of cells, instead of an interconnected network of tubes (a "reticulum"). Carlo Matteucci followed up Galvani's studies and demonstrated that injured nerves and muscles in frogs could produce direct current. Matteucci's work inspired the German physiologist Emil du Bois-Reymond, who discovered in 1843 that stimulating these muscle and nerve preparations produced a notable diminution in their resting currents, making him the first researcher to identify the electrical nature of the action potential. The conduction velocity of action potentials was then measured in 1850 by du Bois-Reymond's friend, Hermann von Helmholtz. Progress in electrophysiology stagnated thereafter due to the limitations of chemical theory and experimental practice. To establish that nervous tissue is made up of discrete cells, the Spanish physician Santiago Ramón y Cajal and his students used a stain developed by Camillo Golgi to reveal the myriad shapes of neurons, which they rendered painstakingly. For their discoveries, Golgi and Ramón y Cajal were awarded the 1906 Nobel Prize in Physiology or Medicine. Their work resolved a long-standing controversy in the neuroanatomy of the 19th century; Golgi himself had argued for the network model of the nervous system. The 20th century saw significant breakthroughs in electrophysiology. In 1902 and again in 1912, Julius Bernstein advanced the hypothesis that the action potential resulted from a change in the permeability of the axonal membrane to ions. Bernstein's hypothesis was confirmed by Ken Cole and Howard Curtis, who showed that membrane conductance increases during an action potential. In 1907, Louis Lapicque suggested that the action potential was generated as a threshold was crossed, which would later be shown to be a product of the dynamical systems of ionic conductances.
In 1949, Alan Hodgkin and Bernard Katz refined Bernstein's hypothesis by considering that the axonal membrane might have different permeabilities to different ions; in particular, they demonstrated the crucial role of the sodium permeability for the action potential. They made the first actual recording of the electrical changes across the neuronal membrane that mediate the action potential. This line of research culminated in the five 1952 papers of Hodgkin, Katz and Andrew Huxley, in which they applied the voltage clamp technique to determine the dependence of the axonal membrane's permeabilities to sodium and potassium ions on voltage and time, from which they were able to reconstruct the action potential quantitatively. Hodgkin and Huxley correlated the properties of their mathematical model with discrete ion channels that could exist in several different states, including "open", "closed", and "inactivated". Their hypotheses were confirmed in the mid-1970s and 1980s by Erwin Neher and Bert Sakmann, who developed the technique of patch clamping to examine the conductance states of individual ion channels. In the 21st century, researchers are beginning to understand the structural basis for these conductance states and for the selectivity of channels for their species of ion, through atomic-resolution crystal structures, fluorescence distance measurements and cryo-electron microscopy studies. Julius Bernstein was also the first to introduce the Nernst equation for resting potential across the membrane; this was generalized by David E. Goldman to the eponymous Goldman equation in 1943. The sodium–potassium pump was identified in 1957 and its properties gradually elucidated, culminating in the determination of its atomic-resolution structure by X-ray crystallography. The crystal structures of related ionic pumps have also been solved, giving a broader view of how these molecular machines work. Quantitative models. Mathematical and computational models are essential for understanding the action potential, and offer predictions that may be tested against experimental data, providing a stringent test of a theory. The most important and accurate of the early neural models is the Hodgkin–Huxley model, which describes the action potential by a coupled set of four ordinary differential equations (ODEs). Although the Hodgkin–Huxley model is itself a simplification, with its own limitations, of the realistic nervous membrane as it exists in nature, its complexity has inspired several even-more-simplified models, such as the Morris–Lecar model and the FitzHugh–Nagumo model, both of which have only two coupled ODEs. The properties of the Hodgkin–Huxley and FitzHugh–Nagumo models and their relatives, such as the Bonhoeffer–Van der Pol model, have been well-studied within mathematics, computation and electronics. However, the simple models of generator potential and action potential fail to accurately reproduce the near-threshold neural spike rate and spike shape, specifically for mechanoreceptors like the Pacinian corpuscle. More modern research has focused on larger and more integrated systems; by joining action-potential models with models of other parts of the nervous system (such as dendrites and synapses), researchers can study neural computation and simple reflexes, such as escape reflexes and others controlled by central pattern generators. References. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; Journal articles. &lt;templatestyles src="Reflist/styles.css" /&gt; Books.
&lt;templatestyles src="Refbegin/styles.css" /&gt; Web pages. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\tau \\frac{\\partial V}{\\partial t} = \\lambda^2 \\frac{\\partial^2 V}{\\partial x^2} - V\n" }, { "math_id": 1, "text": "\n\\tau =\\ r_m c_m \\,\n" }, { "math_id": 2, "text": "\n\\lambda = \\sqrt \\frac{r_m}{r_\\ell}\n" } ]
https://en.wikipedia.org/wiki?curid=156998
15704862
Accelerated failure time model
Parametric model in survival analysis In the statistical area of survival analysis, an accelerated failure time model (AFT model) is a parametric model that provides an alternative to the commonly used proportional hazards models. Whereas a proportional hazards model assumes that the effect of a covariate is to multiply the hazard by some constant, an AFT model assumes that the effect of a covariate is to accelerate or decelerate the life course of a disease by some constant. There is strong basic science evidence from "C. elegans" experiments by Stroustrup et al. indicating that AFT models are the correct model for biological survival processes. Model specification. In full generality, the accelerated failure time model can be specified as formula_0 where formula_1 denotes the joint effect of covariates, typically formula_2. (Specifying the regression coefficients with a negative sign implies that high values of the covariates "increase" the survival time, but this is merely a sign convention; without a negative sign, they increase the hazard.) This is satisfied if the probability density function of the event is taken to be formula_3; it then follows for the survival function that formula_4. From this it is easy to see that the moderated life time formula_5 is distributed such that formula_6 and the unmoderated life time formula_7 have the same distribution. Consequently, formula_8 can be written as formula_9 where the last term is distributed as formula_10, i.e., independently of formula_1. This reduces the accelerated failure time model to regression analysis (typically a linear model) where formula_11 represents the fixed effects, and formula_12 represents the noise. Different distributions of formula_12 imply different distributions of formula_7, i.e., different baseline distributions of the survival time. Typically, in survival-analytic contexts, many of the observations are censored: we only know that formula_13, not formula_14. In fact, the former case represents survival beyond the follow-up time (censoring), while the latter case represents an observed event/death during the follow-up. These right-censored observations can pose technical challenges for estimating the model, if the distribution of formula_7 is unusual. The interpretation of formula_1 in accelerated failure time models is straightforward: formula_15 means that everything in the relevant life history of an individual happens twice as fast. For example, if the model concerns the development of a tumor, it means that all of the pre-stages progress twice as fast as for the unexposed individual, implying that the expected time until a clinical disease is 0.5 of the baseline time. However, this does not mean that the hazard function formula_16 is always twice as high; that would be the proportional hazards model. Statistical issues. Unlike proportional hazards models, in which Cox's semi-parametric proportional hazards model is more widely used than parametric models, AFT models are predominantly fully parametric, i.e., a probability distribution is specified for formula_10. (Buckley and James proposed a semi-parametric AFT but its use is relatively uncommon in applied research; in a 1992 paper, Wei pointed out that the Buckley–James model has no theoretical justification and lacks robustness, and reviewed alternatives.) This can be a problem if a degree of realistic detail is required for modelling the distribution of a baseline lifetime. Hence, technical developments in this direction would be highly desirable.
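As a concrete, hedged illustration of the log-time formulation above (log T equals minus log θ plus a noise term ε), the following Python/NumPy sketch simulates an exposed group with acceleration factor θ = 2 against an unexposed group with θ = 1, using a standard logistic error (a log-logistic baseline). The acceleration factor, baseline, and sample size are assumptions chosen only for the demonstration, not values from any study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Baseline: log T0 = eps with eps ~ logistic(0, 1), so T0 is log-logistic distributed
eps = rng.logistic(loc=0.0, scale=1.0, size=n)

theta_exposed, theta_control = 2.0, 1.0   # assumed acceleration factors

# AFT model: log T = -log(theta) + eps, equivalently T = T0 / theta
t_control = np.exp(-np.log(theta_control) + eps)
t_exposed = np.exp(-np.log(theta_exposed) + eps)

# Under the AFT assumption every quantile of the exposed group is scaled by 1/theta
for q in (0.25, 0.5, 0.75):
    qc, qe = np.quantile(t_control, q), np.quantile(t_exposed, q)
    print(f"quantile {q}: control {qc:.3f}, exposed {qe:.3f}, ratio {qc / qe:.3f}")
# All three ratios should be close to theta_exposed = 2: the exposed life course runs twice as fast.
```

The point of the sketch is the interpretability claim made above: with θ = 2, every quantile of the survival-time distribution, not just the hazard at one instant, is reached in half the time.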
When a frailty term is incorporated in the survival model, the regression parameter estimates from AFT models are robust to omitted covariates, unlike proportional hazards models. They are also less affected by the choice of probability distribution for the frailty term. The results of AFT models are easily interpreted. For example, the results of a clinical trial with mortality as the endpoint could be interpreted as a certain percentage increase in future life expectancy on the new treatment compared to the control. So a patient could be informed that he would be expected to live (say) 15% longer if he took the new treatment. Hazard ratios can prove harder to explain in layman's terms. Distributions used in AFT models. The log-logistic distribution provides the most commonly used AFT model. Unlike the Weibull distribution, it can exhibit a non-monotonic hazard function which increases at early times and decreases at later times. It is somewhat similar in shape to the log-normal distribution but it has heavier tails. The log-logistic cumulative distribution function has a simple closed form, which becomes important computationally when fitting data with censoring. For the censored observations one needs the survival function, which is the complement of the cumulative distribution function, i.e. one needs to be able to evaluate formula_17. The Weibull distribution (including the exponential distribution as a special case) can be parameterised as either a proportional hazards model or an AFT model, and is the only family of distributions to have this property. The results of fitting a Weibull model can therefore be interpreted in either framework. However, the biological applicability of this model may be limited by the fact that the hazard function is monotonic, i.e. either decreasing or increasing. Any distribution on a multiplicatively closed group, such as the positive real numbers, is suitable for an AFT model. Other distributions include the log-normal, gamma, hypertabastic, Gompertz distribution, and inverse Gaussian distributions, although they are less popular than the log-logistic, partly as their cumulative distribution functions do not have a closed form. Finally, the generalized gamma distribution is a three-parameter distribution that includes the Weibull, log-normal and gamma distributions as special cases. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
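Because the log-logistic survival function discussed above has a simple closed form, the contribution of right-censored observations to a parametric likelihood is cheap to evaluate. The sketch below is a minimal Python illustration of that point; the parameter names (α for scale, β for shape) and the simulated data are this example's own assumptions, not part of the article.

```python
import numpy as np

def loglogistic_pdf(t, alpha, beta):
    # f(t) = (beta/alpha) (t/alpha)^(beta-1) / (1 + (t/alpha)^beta)^2
    z = (t / alpha) ** beta
    return (beta / alpha) * (t / alpha) ** (beta - 1) / (1.0 + z) ** 2

def loglogistic_sf(t, alpha, beta):
    # Survival function S(t) = 1 - F(t) = 1 / (1 + (t/alpha)^beta), closed form
    return 1.0 / (1.0 + (t / alpha) ** beta)

def censored_loglik(t, event, alpha, beta):
    """Log-likelihood with right censoring: density for events, survival for censored cases."""
    return np.sum(event * np.log(loglogistic_pdf(t, alpha, beta))
                  + (1 - event) * np.log(loglogistic_sf(t, alpha, beta)))

# Toy data, assumed purely for illustration
rng = np.random.default_rng(1)
alpha_true, beta_true = 3.0, 2.0
u = rng.uniform(size=500)
times = alpha_true * (u / (1 - u)) ** (1.0 / beta_true)   # log-logistic draws via the inverse CDF
censor = rng.uniform(0, 8, size=500)
observed = np.minimum(times, censor)
event = (times <= censor).astype(float)

print(censored_loglik(observed, event, alpha_true, beta_true))
```

A full fit would maximize this function over α and β (for example with a general-purpose optimizer); the sketch only shows why the closed-form survival function makes the censored terms trivial to compute.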
[ { "math_id": 0, "text": "\n\\lambda(t|\\theta)=\\theta\\lambda_0(\\theta t)\n" }, { "math_id": 1, "text": "\\theta" }, { "math_id": 2, "text": "\\theta=\\exp(-[\\beta_1X_1 + \\cdots + \\beta_pX_p])" }, { "math_id": 3, "text": "f(t|\\theta)=\\theta f_0(\\theta t)" }, { "math_id": 4, "text": "S(t|\\theta)=S_0(\\theta t)" }, { "math_id": 5, "text": "T" }, { "math_id": 6, "text": "T\\theta" }, { "math_id": 7, "text": "T_0" }, { "math_id": 8, "text": "\\log(T)" }, { "math_id": 9, "text": "\n\\log(T)=-\\log(\\theta)+\\log(T\\theta):=-\\log(\\theta)+\\epsilon\n" }, { "math_id": 10, "text": "\\log(T_0)" }, { "math_id": 11, "text": "-\\log(\\theta)" }, { "math_id": 12, "text": "\\epsilon" }, { "math_id": 13, "text": "T_i>t_i" }, { "math_id": 14, "text": "T_i=t_i" }, { "math_id": 15, "text": "\\theta=2" }, { "math_id": 16, "text": "\\lambda(t|\\theta)" }, { "math_id": 17, "text": "S(t|\\theta)=1-F(t|\\theta)" } ]
https://en.wikipedia.org/wiki?curid=15704862
157055
Law of large numbers
Averages of repeated trials converge to the expected value In probability theory, the law of large numbers (LLN) is a mathematical law that states that the average of the results obtained from a large number of independent random samples converges to the true value, if it exists. More formally, the LLN states that given a sample of independent and identically distributed values, the sample mean converges to the true mean. The LLN is important because it guarantees stable long-term results for the averages of some random events. For example, while a casino may lose money in a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game. Importantly, the law applies (as the name indicates) only when a "large number" of observations are considered. There is no principle that a small number of observations will coincide with the expected value or that a streak of one value will immediately be "balanced" by the others (see the gambler's fallacy). The LLN only applies to the "average" of the results obtained from repeated trials and claims that this average converges to the expected value; it does not claim that the "sum" of "n" results gets close to the expected value times "n" as "n" increases. Throughout its history, many mathematicians have refined this law. Today, the LLN is used in many fields including statistics, probability theory, economics, and insurance. Examples. For example, a single roll of a fair, six-sided die produces one of the numbers 1, 2, 3, 4, 5, or 6, each with equal probability. Therefore, the expected value of the average of the rolls is: formula_0 According to the law of large numbers, if a large number of six-sided dice are rolled, the average of their values (sometimes called the sample mean) will approach 3.5, with the precision increasing as more dice are rolled. It follows from the law of large numbers that the empirical probability of success in a series of Bernoulli trials will converge to the theoretical probability. For a Bernoulli random variable, the expected value is the theoretical probability of success, and the average of "n" such variables (assuming they are independent and identically distributed (i.i.d.)) is precisely the relative frequency. For example, a fair coin toss is a Bernoulli trial. When a fair coin is flipped once, the theoretical probability that the outcome will be heads is equal to &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2. Therefore, according to the law of large numbers, the proportion of heads in a "large" number of coin flips "should be" roughly &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2. In particular, the proportion of heads after "n" flips will almost surely converge to &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 as "n" approaches infinity. Although the proportion of heads (and tails) approaches &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2, almost surely the absolute difference in the number of heads and tails will become large as the number of flips becomes large. That is, the probability that the absolute difference is a small number approaches zero as the number of flips becomes large. Also, almost surely the ratio of the absolute difference to the number of flips will approach zero. Intuitively, the expected difference grows, but at a slower rate than the number of flips. Another good example of the LLN is the Monte Carlo method. 
These methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The larger the number of repetitions, the better the approximation tends to be. The reason that this method is important is mainly that, sometimes, it is difficult or impossible to use other approaches. Limitation. The average of the results obtained from a large number of trials may fail to converge in some cases. For instance, the average of "n" results taken from the Cauchy distribution or some Pareto distributions (α&lt;1) will not converge as "n" becomes larger; the reason is heavy tails. The Cauchy distribution and the Pareto distribution represent two cases: the Cauchy distribution does not have an expectation, whereas the expectation of the Pareto distribution ("α"&lt;1) is infinite. One way to generate the Cauchy-distributed example is to let the random numbers equal the tangent of an angle uniformly distributed between −90° and +90°. The median is zero, but the expected value does not exist, and indeed the average of "n" such variables has the same distribution as one such variable. It does not converge in probability toward zero (or any other value) as "n" goes to infinity. And if the trials embed a selection bias, typical in human economic/rational behaviour, the law of large numbers does not help in solving the bias. Even if the number of trials is increased, the selection bias remains. History. The Italian mathematician Gerolamo Cardano (1501–1576) stated without proof that the accuracies of empirical statistics tend to improve with the number of trials. This was then formalized as a law of large numbers. A special form of the LLN (for a binary random variable) was first proved by Jacob Bernoulli. It took him over 20 years to develop a sufficiently rigorous mathematical proof which was published in his "Ars Conjectandi" ("The Art of Conjecturing") in 1713. He named this his "Golden Theorem" but it became generally known as "Bernoulli's theorem". This should not be confused with Bernoulli's principle, named after Jacob Bernoulli's nephew Daniel Bernoulli. In 1837, S. D. Poisson further described it under the name "la loi des grands nombres" ("the law of large numbers"). Thereafter, it was known under both names, but the "law of large numbers" is most frequently used. After Bernoulli and Poisson published their efforts, other mathematicians also contributed to refinement of the law, including Chebyshev, Markov, Borel, Cantelli, Kolmogorov and Khinchin. Markov showed that the law can apply to a random variable that does not have a finite variance, provided some other, weaker assumption holds, and Khinchin showed in 1929 that if the series consists of independent identically distributed random variables, it suffices that the expected value exists for the weak law of large numbers to be true. These further studies have given rise to two prominent forms of the LLN. One is called the "weak" law and the other the "strong" law, in reference to two different modes of convergence of the cumulative sample means to the expected value; in particular, as explained below, the strong form implies the weak. Forms. There are two different versions of the law of large numbers that are described below. They are called the "strong law of large numbers" and the "weak law of large numbers". Stated for the case where "X"1, "X"2, ... is an infinite sequence of independent and identically distributed (i.i.d.) Lebesgue integrable random variables with expected value E("X"1) = E("X"2) = ...
= "μ", both versions of the law state that the sample average formula_1 converges to the expected value: Introductory probability texts often additionally assume identical finite variance formula_2 (for all formula_3) and no correlation between random variables. In that case, the variance of the average of n random variables is formula_4 which can be used to shorten and simplify the proofs. This assumption of finite variance is "not necessary". Large or infinite variance will make the convergence slower, but the LLN holds anyway. Mutual independence of the random variables can be replaced by pairwise independence or exchangeability in both versions of the law. The difference between the strong and the weak version is concerned with the mode of convergence being asserted. For interpretation of these modes, see Convergence of random variables. Weak law. The weak law of large numbers (also called Khinchin's law) states that given a collection of independent and identically distributed (iid) samples from a random variable with finite mean, the sample mean converges in probability to the expected value That is, for any positive number "ε", formula_5 Interpreting this result, the weak law states that for any nonzero margin specified ("ε"), no matter how small, with a sufficiently large sample there will be a very high probability that the average of the observations will be close to the expected value; that is, within the margin. As mentioned earlier, the weak law applies in the case of i.i.d. random variables, but it also applies in some other cases. For example, the variance may be different for each random variable in the series, keeping the expected value constant. If the variances are bounded, then the law applies, as shown by Chebyshev as early as 1867. (If the expected values change during the series, then we can simply apply the law to the average deviation from the respective expected values. The law then states that this converges in probability to zero.) In fact, Chebyshev's proof works so long as the variance of the average of the first "n" values goes to zero as "n" goes to infinity. As an example, assume that each random variable in the series follows a Gaussian distribution (normal distribution) with mean zero, but with variance equal to formula_6, which is not bounded. At each stage, the average will be normally distributed (as the average of a set of normally distributed variables). The variance of the sum is equal to the sum of the variances, which is asymptotic to formula_7. The variance of the average is therefore asymptotic to formula_8 and goes to zero. There are also examples of the weak law applying even though the expected value does not exist. Strong law. The strong law of large numbers (also called Kolmogorov's law) states that the sample average converges almost surely to the expected value That is, formula_9 What this means is that the probability that, as the number of trials "n" goes to infinity, the average of the observations converges to the expected value, is equal to one. The modern proof of the strong law is more complex than that of the weak law, and relies on passing to an appropriate subsequence. The strong law of large numbers can itself be seen as a special case of the pointwise ergodic theorem. This view justifies the intuitive interpretation of the expected value (for Lebesgue integration only) of a random variable when sampled repeatedly as the "long-term average". 
The strong law is so called because random variables which converge strongly (almost surely) are guaranteed to converge weakly (in probability). However, the weak law is known to hold under certain conditions where the strong law does not hold, and then the convergence is only weak (in probability). See differences between the weak law and the strong law. The strong law applies to independent identically distributed random variables having an expected value (like the weak law). This was proved by Kolmogorov in 1930. It can also apply in other cases. Kolmogorov also showed, in 1933, that if the variables are independent and identically distributed, then for the average to converge almost surely on "something" (this can be considered another statement of the strong law), it is necessary that they have an expected value (and then of course the average will converge almost surely on that). If the summands are independent but not identically distributed, then the sample average minus its expectation still converges almost surely to zero, provided that each "X""k" has a finite second moment and formula_10 This statement is known as "Kolmogorov's strong law". Differences between the weak law and the strong law. The "weak law" states that for a specified large "n", the average formula_11 is likely to be near "μ". Thus, it leaves open the possibility that formula_12 happens an infinite number of times, although at infrequent intervals. (Not necessarily formula_13 for all "n"). The "strong law" shows that this almost surely will not occur. It does not imply that with probability 1, we have that for any "ε" &gt; 0 the inequality formula_14 holds for all large enough "n", since the convergence is not necessarily uniform on the set where it holds. There are, however, cases in which the weak law holds but the strong law does not. Uniform laws of large numbers. There are extensions of the law of large numbers to collections of estimators, where the convergence is uniform over the collection; thus the name "uniform law of large numbers". Suppose "f"("x","θ") is some function defined for "θ" ∈ Θ, and continuous in "θ". Then for any fixed "θ", the sequence {"f"("X"1,"θ"), "f"("X"2,"θ"), ...} will be a sequence of independent and identically distributed random variables, such that the sample mean of this sequence converges in probability to E["f"("X","θ")]. This is the "pointwise" (in "θ") convergence. A particular example of a uniform law of large numbers states conditions under which the convergence happens "uniformly" in "θ". If Θ is compact, "f"("x","θ") is continuous at each "θ" ∈ Θ for almost all values of "x" (and measurable in "x" at each "θ"), and there exists a dominating function "d"("x") with E["d"("X")] &lt; ∞ such that formula_15 then E["f"("X","θ")] is continuous in "θ", and formula_16 This result is useful to derive consistency of a large class of estimators (see Extremum estimator). Borel's law of large numbers. Borel's law of large numbers, named after Émile Borel, states that if an experiment is repeated a large number of times, independently under identical conditions, then the proportion of times that any specified event occurs approximately equals the probability of the event's occurrence on any particular trial; the larger the number of repetitions, the better the approximation tends to be. More precisely, if "E" denotes the event in question, "p" its probability of occurrence, and "Nn"("E") the number of times "E" occurs in the first "n" trials, then with probability one, formula_17 This theorem makes rigorous the intuitive notion of probability as the expected long-run relative frequency of an event's occurrence. It is a special case of any of several more general laws of large numbers in probability theory. Chebyshev's inequality.
Let "X" be a random variable with finite expected value "μ" and finite non-zero variance "σ"2. Then for any real number "k" &gt; 0, formula_18 Proof of the weak law. Given "X"1, "X"2, ... an infinite sequence of i.i.d. random variables with finite expected value formula_19, we are interested in the convergence of the sample average formula_20 The weak law of large numbers states: Proof using Chebyshev's inequality assuming finite variance. This proof uses the assumption of finite variance formula_21 (for all formula_3). The independence of the random variables implies no correlation between them, and we have that formula_22 The common mean μ of the sequence is the mean of the sample average: formula_23 Using Chebyshev's inequality on formula_24 results in formula_25 This may be used to obtain the following: formula_26 As "n" approaches infinity, the expression approaches 1. And by definition of convergence in probability, we have obtained Proof using convergence of characteristic functions. By Taylor's theorem for complex functions, the characteristic function of any random variable, "X", with finite mean μ, can be written as formula_27 All "X"1, "X"2, ... have the same characteristic function, so we will simply denote this "φ""X". Among the basic properties of characteristic functions there are formula_28 if "X" and "Y" are independent. These rules can be used to calculate the characteristic function of formula_29 in terms of "φ""X": formula_30 The limit "e""itμ" is the characteristic function of the constant random variable μ, and hence by the Lévy continuity theorem, formula_31 converges in distribution to μ: formula_32 μ is a constant, which implies that convergence in distribution to μ and convergence in probability to μ are equivalent (see Convergence of random variables.) Therefore, This shows that the sample mean converges in probability to the derivative of the characteristic function at the origin, as long as the latter exists. Proof of the strong law. We give a relatively simple proof of the strong law under the assumptions that the formula_33 are iid, formula_34, formula_35, and formula_36. Let us first note that without loss of generality we can assume that formula_37 by centering. In this case, the strong law says that formula_38 or formula_39 It is equivalent to show that formula_40 Note that formula_41 and thus to prove the strong law we need to show that for every formula_42, we have formula_43 Define the events formula_44, and if we can show that formula_45 then the Borel-Cantelli Lemma implies the result. So let us estimate formula_46. We compute formula_47 We first claim that every term of the form formula_48 where all subscripts are distinct, must have zero expectation. This is because formula_49 by independence, and the last term is zero --- and similarly for the other terms. Therefore the only terms in the sum with nonzero expectation are formula_50 and formula_51. Since the formula_33 are identically distributed, all of these are the same, and moreover formula_52. There are formula_53 terms of the form formula_50 and formula_54 terms of the form formula_55, and so formula_56 Note that the right-hand side is a quadratic polynomial in formula_53, and as such there exists a formula_57 such that formula_58 for formula_53 sufficiently large. By Markov, formula_59 for formula_53 sufficiently large, and therefore this series is summable. Since this holds for any formula_42, we have established the Strong LLN. 
Other proofs of the strong law exist, including proofs that do not require the added assumption of a finite fourth moment. Consequences. The law of large numbers allows the expectation of an unknown distribution to be estimated from a realization of the sequence, and likewise any other feature of the probability distribution. By applying Borel's law of large numbers, one could easily obtain the probability mass function. For each event in the objective probability mass function, one could approximate the probability of the event's occurrence with the proportion of times that the event occurs. The larger the number of repetitions, the better the approximation. As for the continuous case: formula_60, for small positive h. Thus, for large n: formula_61 With this method, one can cover the whole x-axis with a grid (with grid size 2h) and obtain a bar graph which is called a histogram. Applications. One application of the LLN is the use of an important method of approximation, the Monte Carlo Method. This method uses a random sampling of numbers to approximate numerical results. The algorithm to compute an integral of f(x) on an interval [a,b] is as follows: draw n values X_1, ..., X_n independently and uniformly from [a,b] and form the estimate formula_62. By the law of large numbers, this estimate converges to formula_63, which equals formula_64, that is, formula_65 (a short code sketch illustrating this appears below). As an example, we can find the integral of formula_66 on [-1,2]. Using traditional methods to compute this integral is very difficult, so the Monte Carlo Method can be used here. Using the above algorithm, we get formula_67 ≈ 0.905 when n=25 and formula_67 ≈ 1.028 when n=250. We observe that as n increases, the estimate moves closer to the true value. The actual value of the integral is formula_67 = 1.000194. By the LLN, the approximation of the integral becomes more accurate and approaches this true value as the number of samples grows. Another example is the integration of f(x) = formula_68 on [0,1]. Using the Monte Carlo Method and the LLN, we can see that as the number of samples increases, the numerical value gets closer to 0.4180233. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
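Referring to the Monte Carlo example above, the following Python sketch approximates the integral of f(x) = cos²(x)·√(x³+1) on [−1, 2] by averaging f over uniform draws and multiplying by the interval length, exactly as the law of large numbers justifies; the sample sizes and seed are arbitrary.

```python
import numpy as np

def f(x):
    return np.cos(x) ** 2 * np.sqrt(x ** 3 + 1)

a, b = -1.0, 2.0
rng = np.random.default_rng(7)

for n in (25, 250, 100_000):
    x = rng.uniform(a, b, size=n)
    estimate = (b - a) * f(x).mean()   # converges to the integral as n grows
    print(f"n = {n:>7}: estimate = {estimate:.4f}")
# For large n the estimate approaches the reported value of about 1.0002.
```

Individual runs fluctuate for small n (as the article's n=25 and n=250 figures do), but the LLN guarantees convergence of the estimate as n tends to infinity.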
[ { "math_id": 0, "text": " \\frac{1+2+3+4+5+6}{6} = 3.5" }, { "math_id": 1, "text": "\\overline{X}_n=\\frac1n(X_1+\\cdots+X_n) " }, { "math_id": 2, "text": " \\operatorname{Var} (X_i) = \\sigma^2 " }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "\\operatorname{Var}(\\overline{X}_n) = \\operatorname{Var}(\\tfrac1n(X_1+\\cdots+X_n)) = \\frac{1}{n^2} \\operatorname{Var}(X_1+\\cdots+X_n) = \\frac{n\\sigma^2}{n^2} = \\frac{\\sigma^2}{n}." }, { "math_id": 5, "text": "\n \\lim_{n\\to\\infty}\\Pr\\!\\left(\\,|\\overline{X}_n-\\mu| < \\varepsilon\\,\\right) = 1.\n" }, { "math_id": 6, "text": "2n/\\log(n+1)" }, { "math_id": 7, "text": "n^2 / \\log n" }, { "math_id": 8, "text": "1 / \\log n" }, { "math_id": 9, "text": "\n \\Pr\\!\\left( \\lim_{n\\to\\infty}\\overline{X}_n = \\mu \\right) = 1.\n" }, { "math_id": 10, "text": "\n \\sum_{k=1}^{\\infty} \\frac{1}{k^2} \\operatorname{Var}[X_k] < \\infty.\n" }, { "math_id": 11, "text": "\\overline{X}_n" }, { "math_id": 12, "text": "|\\overline{X}_n -\\mu| > \\varepsilon" }, { "math_id": 13, "text": "|\\overline{X}_n -\\mu| \\neq 0" }, { "math_id": 14, "text": "|\\overline{X}_n -\\mu| < \\varepsilon" }, { "math_id": 15, "text": " \\left\\| f(x,\\theta) \\right\\| \\leq d(x) \\quad\\text{for all}\\ \\theta\\in\\Theta." }, { "math_id": 16, "text": "\n \\sup_{\\theta\\in\\Theta} \\left\\| \\frac 1 n \\sum_{i=1}^n f(X_i,\\theta) - \\operatorname{E}[f(X,\\theta)] \\right\\| \\overset{\\mathrm{P}}{\\rightarrow} \\ 0.\n " }, { "math_id": 17, "text": " \\frac{N_n(E)}{n}\\to p\\text{ as }n\\to\\infty." }, { "math_id": 18, "text": "\n \\Pr(|X-\\mu|\\geq k\\sigma) \\leq \\frac{1}{k^2}.\n" }, { "math_id": 19, "text": "E(X_1)=E(X_2)=\\cdots=\\mu<\\infty" }, { "math_id": 20, "text": "\\overline{X}_n=\\tfrac1n(X_1+\\cdots+X_n). " }, { "math_id": 21, "text": " \\operatorname{Var} (X_i)=\\sigma^2 " }, { "math_id": 22, "text": "\n\\operatorname{Var}(\\overline{X}_n) = \\operatorname{Var}(\\tfrac1n(X_1+\\cdots+X_n)) = \\frac{1}{n^2} \\operatorname{Var}(X_1+\\cdots+X_n) = \\frac{n\\sigma^2}{n^2} = \\frac{\\sigma^2}{n}.\n" }, { "math_id": 23, "text": "\nE(\\overline{X}_n) = \\mu.\n" }, { "math_id": 24, "text": "\\overline{X}_n " }, { "math_id": 25, "text": "\n\\operatorname{P}( \\left| \\overline{X}_n-\\mu \\right| \\geq \\varepsilon) \\leq \\frac{\\sigma^2}{n\\varepsilon^2}.\n" }, { "math_id": 26, "text": "\n\\operatorname{P}( \\left| \\overline{X}_n-\\mu \\right| < \\varepsilon) = 1 - \\operatorname{P}( \\left| \\overline{X}_n-\\mu \\right| \\geq \\varepsilon) \\geq 1 - \\frac{\\sigma^2}{n \\varepsilon^2 }.\n" }, { "math_id": 27, "text": "\\varphi_X(t) = 1 + it\\mu + o(t), \\quad t \\rightarrow 0." }, { "math_id": 28, "text": "\\varphi_{\\frac 1 n X}(t)= \\varphi_X(\\tfrac t n) \\quad \\text{and} \\quad\n \\varphi_{X+Y}(t) = \\varphi_X(t) \\varphi_Y(t) \\quad " }, { "math_id": 29, "text": "\\overline{X}_n" }, { "math_id": 30, "text": "\\varphi_{\\overline{X}_n}(t)= \\left[\\varphi_X\\left({t \\over n}\\right)\\right]^n = \\left[1 + i\\mu{t \\over n} + o\\left({t \\over n}\\right)\\right]^n \\, \\rightarrow \\, e^{it\\mu}, \\quad \\text{as} \\quad n \\to \\infty." }, { "math_id": 31, "text": " \\overline{X}_n" }, { "math_id": 32, "text": "\\overline{X}_n \\, \\overset{\\mathcal D}{\\rightarrow} \\, \\mu \\qquad\\text{for}\\qquad n \\to \\infty." 
}, { "math_id": 33, "text": "X_i" }, { "math_id": 34, "text": " {\\mathbb E}[X_i] =: \\mu < \\infty " }, { "math_id": 35, "text": " \\operatorname{Var} (X_i)=\\sigma^2 < \\infty" }, { "math_id": 36, "text": " {\\mathbb E}[X_i^4] =: \\tau < \\infty " }, { "math_id": 37, "text": "\\mu = 0" }, { "math_id": 38, "text": "\n \\Pr\\!\\left( \\lim_{n\\to\\infty}\\overline{X}_n = 0 \\right) = 1,\n" }, { "math_id": 39, "text": "\n \\Pr\\left(\\omega: \\lim_{n\\to\\infty}\\frac{S_n(\\omega)}n = 0 \\right) = 1.\n" }, { "math_id": 40, "text": "\n \\Pr\\left(\\omega: \\lim_{n\\to\\infty}\\frac{S_n(\\omega)}n \\neq 0 \\right) = 0,\n" }, { "math_id": 41, "text": "\n \\lim_{n\\to\\infty}\\frac{S_n(\\omega)}n \\neq 0 \\iff \\exists\\epsilon>0, \\left|\\frac{S_n(\\omega)}n\\right| \\ge \\epsilon\\ \\mbox{infinitely often},\n" }, { "math_id": 42, "text": "\\epsilon > 0" }, { "math_id": 43, "text": "\n \\Pr\\left(\\omega: |S_n(\\omega)| \\ge n\\epsilon \\mbox{ infinitely often} \\right) = 0.\n" }, { "math_id": 44, "text": " A_n = \\{\\omega : |S_n| \\ge n\\epsilon\\}" }, { "math_id": 45, "text": "\n \\sum_{n=1}^\\infty \\Pr(A_n) <\\infty,\n" }, { "math_id": 46, "text": "\\Pr(A_n)" }, { "math_id": 47, "text": "\n {\\mathbb E}[S_n^4] = {\\mathbb E}\\left[\\left(\\sum_{i=1}^n X_i\\right)^4\\right] = {\\mathbb E}\\left[\\sum_{1 \\le i,j,k,l\\le n} X_iX_jX_kX_l\\right].\n" }, { "math_id": 48, "text": "X_i^3X_j, X_i^2X_jX_k, X_iX_jX_kX_l" }, { "math_id": 49, "text": "{\\mathbb E}[X_i^3X_j] = {\\mathbb E}[X_i^3]{\\mathbb E}[X_j]" }, { "math_id": 50, "text": "{\\mathbb E}[X_i^4]" }, { "math_id": 51, "text": "{\\mathbb E}[X_i^2X_j^2]" }, { "math_id": 52, "text": "{\\mathbb E}[X_i^2X_j^2]=({\\mathbb E}[X_i^2])^2" }, { "math_id": 53, "text": "n" }, { "math_id": 54, "text": "3 n (n-1)" }, { "math_id": 55, "text": "({\\mathbb E}[X_i^2])^2" }, { "math_id": 56, "text": "\n {\\mathbb E}[S_n^4] = n \\tau + 3n(n-1)\\sigma^4.\n" }, { "math_id": 57, "text": "C>0" }, { "math_id": 58, "text": " {\\mathbb E}[S_n^4] \\le Cn^2" }, { "math_id": 59, "text": "\n \\Pr(|S_n| \\ge n \\epsilon) \\le \\frac1{(n\\epsilon)^4}{\\mathbb E}[S_n^4] \\le \\frac{C}{\\epsilon^4 n^2},\n" }, { "math_id": 60, "text": "C=(a-h,a+h]" }, { "math_id": 61, "text": " \\frac{N_n(C)}{n}\\thickapprox\np = P(X\\in C) = \\int_{a-h}^{a+h} f(x) \\, dx \n\\thickapprox\n2hf(a)" }, { "math_id": 62, "text": "(b-a)\\tfrac{f(X_1)+f(X_2)+...+f(X_n)}{n}" }, { "math_id": 63, "text": "(b-a)E(f(X_1))" }, { "math_id": 64, "text": "(b-a)\\int_{a}^{b} f(x)\\tfrac{1}{b-a}{dx}" }, { "math_id": 65, "text": "\\int_{a}^{b} f(x){dx}" }, { "math_id": 66, "text": "f(x) = cos^2(x)\\sqrt{x^3+1}" }, { "math_id": 67, "text": "\\int_{-1}^{2} f(x){dx}" }, { "math_id": 68, "text": "\\frac{e^x-1}{e-1}" } ]
https://en.wikipedia.org/wiki?curid=157055
157057
Correlation
Statistical concept In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. Although in the broadest sense "correlation" may indicate any type of association, in statistics it usually refers to the degree to which a pair of variables are "linearly" related. Familiar examples of dependent phenomena include the correlation between the height of parents and their offspring, and the correlation between the price of a good and the quantity the consumers are willing to purchase, as depicted in the so-called demand curve. Correlations are useful because they can indicate a predictive relationship that can be exploited in practice. For example, an electrical utility may produce less power on a mild day based on the correlation between electricity demand and weather. In this example, there is a causal relationship, because extreme weather causes people to use more electricity for heating or cooling. However, in general, the presence of a correlation is not sufficient to infer the presence of a causal relationship (i.e., correlation does not imply causation). Formally, random variables are "dependent" if they do not satisfy a mathematical property of probabilistic independence. In informal parlance, "correlation" is synonymous with "dependence". However, when used in a technical sense, correlation refers to any of several specific situations in which the conditional expectation of one variable given the other is not constant as the conditioning value changes; broadly, correlation in this specific sense is present when formula_0 is related to formula_1 in some manner (such as linearly, monotonically, or perhaps according to some particular functional form such as logarithmic). Essentially, correlation is a measure of how two or more variables are related to one another. There are several correlation coefficients, often denoted formula_2 or formula_3, measuring the degree of correlation. The most common of these is the "Pearson correlation coefficient", which is sensitive only to a linear relationship between two variables (which may be present even when one variable is a nonlinear function of the other). Other correlation coefficients – such as "Spearman's rank correlation" – have been developed to be more robust than Pearson's, that is, more sensitive to nonlinear relationships. Mutual information can also be applied to measure dependence between two variables. Pearson's product-moment coefficient. The most familiar measure of dependence between two quantities is the Pearson product-moment correlation coefficient (PPMCC), or "Pearson's correlation coefficient", commonly called simply "the correlation coefficient". It is obtained by dividing the covariance of the two variables by the product of their standard deviations; equivalently, the covariance is normalized by the square root of the product of the two variances. Karl Pearson developed the coefficient from a similar but slightly different idea by Francis Galton. The Pearson correlation coefficient can be interpreted in terms of a line of best fit through the dataset of the two variables: its magnitude indicates how closely the observed pairs cluster around that line.
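As an illustration of the definition just given, the following Python sketch computes the sample Pearson correlation coefficient directly as the covariance of two samples divided by the product of their standard deviations; the helper name pearson_r and the toy data are assumptions made for the example, not part of the article.

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient: the covariance of the two
    samples divided by the product of their standard deviations.  The
    1/(n - 1) factors in the covariance and the variances cancel in the
    ratio, so they are omitted."""
    n = len(xs)
    if n != len(ys) or n < 2:
        raise ValueError("need two samples of equal length >= 2")
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # exactly linear: 1.0
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # exactly anti-linear: -1.0
```

The sketch assumes both samples have nonzero variance; otherwise the denominator vanishes and the coefficient is undefined, as noted elsewhere in the article.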
Depending on the sign of our Pearson's correlation coefficient, we can end up with either a negative or positive correlation if there is any sort of relationship between the variables of our data set. The population correlation coefficient formula_4 between two random variables formula_5 and formula_6 with expected values formula_7 and formula_8 and standard deviations formula_9 and formula_10 is defined as: formula_11 where formula_12 is the expected value operator, formula_13 means covariance, and formula_14 is a widely used alternative notation for the correlation coefficient. The Pearson correlation is defined only if both standard deviations are finite and positive. An alternative formula purely in terms of moments is: formula_15 Correlation and independence. It is a corollary of the Cauchy–Schwarz inequality that the absolute value of the Pearson correlation coefficient is not bigger than 1. Therefore, the value of a correlation coefficient ranges between −1 and +1. The correlation coefficient is +1 in the case of a perfect direct (increasing) linear relationship (correlation), −1 in the case of a perfect inverse (decreasing) linear relationship (anti-correlation), and some value in the open interval formula_16 in all other cases, indicating the degree of linear dependence between the variables. As it approaches zero there is less of a relationship (closer to uncorrelated). The closer the coefficient is to either −1 or 1, the stronger the correlation between the variables. If the variables are independent, Pearson's correlation coefficient is 0. However, because the correlation coefficient detects only linear dependencies between two variables, the converse is not necessarily true. A correlation coefficient of 0 does not imply that the variables are independent. formula_17 For example, suppose the random variable formula_5 is symmetrically distributed about zero, and formula_18. Then formula_6 is completely determined by formula_5, so that formula_5 and formula_6 are perfectly dependent, but their correlation is zero; they are uncorrelated. However, in the special case when formula_5 and formula_6 are jointly normal, uncorrelatedness is equivalent to independence. Even though uncorrelated data does not necessarily imply independence, one can check if random variables are independent if their mutual information is 0. Sample correlation coefficient. Given a series of formula_19 measurements of the pair formula_20 indexed by formula_21, the "sample correlation coefficient" can be used to estimate the population Pearson correlation formula_4 between formula_5 and formula_6. The sample correlation coefficient is defined as formula_22 where formula_23 and formula_24 are the sample means of formula_5 and formula_6, and formula_25 and formula_26 are the corrected sample standard deviations of formula_5 and formula_6. Equivalent expressions for formula_27 are formula_28 where formula_29 and formula_30 are the "uncorrected" sample standard deviations of formula_5 and formula_6. If formula_1 and formula_31 are results of measurements that contain measurement error, the realistic limits on the correlation coefficient are not −1 to +1 but a smaller range. For the case of a linear model with a single independent variable, the coefficient of determination (R squared) is the square of formula_27, Pearson's product-moment coefficient. Example. Consider the joint probability distribution of X and Y given in the table below. 
For this joint distribution, the marginal distributions are: formula_32 formula_33 This yields the following expectations and variances: formula_34 formula_35 formula_36 formula_37 Therefore: formula_38 Rank correlation coefficients. Rank correlation coefficients, such as Spearman's rank correlation coefficient and Kendall's rank correlation coefficient (τ) measure the extent to which, as one variable increases, the other variable tends to increase, without requiring that increase to be represented by a linear relationship. If, as the one variable increases, the other "decreases", the rank correlation coefficients will be negative. It is common to regard these rank correlation coefficients as alternatives to Pearson's coefficient, used either to reduce the amount of calculation or to make the coefficient less sensitive to non-normality in distributions. However, this view has little mathematical basis, as rank correlation coefficients measure a different type of relationship than the Pearson product-moment correlation coefficient, and are best seen as measures of a different type of association, rather than as an alternative measure of the population correlation coefficient. To illustrate the nature of rank correlation, and its difference from linear correlation, consider the following four pairs of numbers formula_39: (0, 1), (10, 100), (101, 500), (102, 2000). As we go from each pair to the next pair formula_1 increases, and so does formula_31. This relationship is perfect, in the sense that an increase in formula_1 is "always" accompanied by an increase in formula_31. This means that we have a perfect rank correlation, and both Spearman's and Kendall's correlation coefficients are 1, whereas in this example Pearson product-moment correlation coefficient is 0.7544, indicating that the points are far from lying on a straight line. In the same way if formula_31 always "decreases" when formula_1 "increases", the rank correlation coefficients will be −1, while the Pearson product-moment correlation coefficient may or may not be close to −1, depending on how close the points are to a straight line. Although in the extreme cases of perfect rank correlation the two coefficients are both equal (being both +1 or both −1), this is not generally the case, and so values of the two coefficients cannot meaningfully be compared. For example, for the three pairs (1, 1) (2, 3) (3, 2) Spearman's coefficient is 1/2, while Kendall's coefficient is 1/3. Other measures of dependence among random variables. The information given by a correlation coefficient is not enough to define the dependence structure between random variables. The correlation coefficient completely defines the dependence structure only in very particular cases, for example when the distribution is a multivariate normal distribution. (See diagram above.) In the case of elliptical distributions it characterizes the (hyper-)ellipses of equal density; however, it does not completely characterize the dependence structure (for example, a multivariate t-distribution's degrees of freedom determine the level of tail dependence). Distance correlation was introduced to address the deficiency of Pearson's correlation that it can be zero for dependent random variables; zero distance correlation implies independence. The Randomized Dependence Coefficient is a computationally efficient, copula-based measure of dependence between multivariate random variables. 
RDC is invariant with respect to non-linear scalings of random variables, is capable of discovering a wide range of functional association patterns and takes value zero at independence. For two binary variables, the odds ratio measures their dependence and ranges over the non-negative numbers, possibly including infinity: [0, +∞]. Related statistics such as Yule's "Y" and Yule's "Q" normalize this to the correlation-like range [−1, 1]. The odds ratio is generalized by the logistic model to model cases where the dependent variables are discrete and there may be one or more independent variables. The correlation ratio, entropy-based mutual information, total correlation, dual total correlation and polychoric correlation are all also capable of detecting more general dependencies, as is consideration of the copula between them, while the coefficient of determination generalizes the correlation coefficient to multiple regression. Sensitivity to the data distribution. The degree of dependence between variables X and Y does not depend on the scale on which the variables are expressed. That is, if we are analyzing the relationship between X and Y, most correlation measures are unaffected by transforming X to "a" + "bX" and Y to "c" + "dY", where "a", "b", "c", and "d" are constants ("b" and "d" being positive). This is true of some correlation statistics as well as their population analogues. Some correlation statistics, such as the rank correlation coefficient, are also invariant to monotone transformations of the marginal distributions of X and/or Y. Most correlation measures are sensitive to the manner in which X and Y are sampled. Dependencies tend to be stronger if viewed over a wider range of values. Thus, if we consider the correlation coefficient between the heights of fathers and their sons over all adult males, and compare it to the same correlation coefficient calculated when the fathers are selected to be between 165 cm and 170 cm in height, the correlation will be weaker in the latter case. Several techniques have been developed that attempt to correct for range restriction in one or both variables, and are commonly used in meta-analysis; the most common are Thorndike's case II and case III equations. Various correlation measures in use may be undefined for certain joint distributions of X and Y. For example, the Pearson correlation coefficient is defined in terms of moments, and hence will be undefined if the moments are undefined. Measures of dependence based on quantiles are always defined. Sample-based statistics intended to estimate population measures of dependence may or may not have desirable statistical properties such as being unbiased, or asymptotically consistent, based on the spatial structure of the population from which the data were sampled. Sensitivity to the data distribution can be used to an advantage. For example, scaled correlation is designed to use the sensitivity to the range in order to pick out correlations between fast components of time series. By reducing the range of values in a controlled manner, the correlations on long time scales are filtered out and only the correlations on short time scales are revealed. Correlation matrices. The correlation matrix of formula_19 random variables formula_40 is the formula_41 matrix formula_42 whose formula_43 entry is formula_44 Thus the diagonal entries are all identically one.
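As an illustration of the correlation matrix just defined (the (i, j) entry being the correlation between the i-th and j-th variables), here is a minimal Python sketch that standardizes each data column and then averages products of the standardized columns; the names standardize and correlation_matrix and the toy columns are illustrative assumptions, not part of the article.

```python
import math

def standardize(col):
    """Centre a column and scale by its corrected sample standard deviation."""
    n = len(col)
    mean = sum(col) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in col) / (n - 1))
    return [(v - mean) / sd for v in col]

def correlation_matrix(columns):
    """Return the matrix whose (i, j) entry is the sample correlation between
    column i and column j, computed as the sample covariance of the
    standardized columns; the diagonal is 1 by construction."""
    z = [standardize(c) for c in columns]
    n = len(columns[0])
    k = len(columns)
    return [[sum(z[i][t] * z[j][t] for t in range(n)) / (n - 1)
             for j in range(k)]
            for i in range(k)]

cols = [[1, 2, 3, 4, 5],
        [2, 1, 4, 3, 5],
        [10, 8, 6, 4, 2]]   # a decreasing linear function of the first column
for row in correlation_matrix(cols):
    print([round(v, 3) for v in row])
```

Because each variable is perfectly correlated with itself, the diagonal comes out as 1, matching the remark above; the sketch assumes no column is constant, since a constant column has zero standard deviation.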
If the measures of correlation used are product-moment coefficients, the correlation matrix is the same as the covariance matrix of the standardized random variables formula_45 for formula_46. This applies both to the matrix of population correlations (in which case formula_47 is the population standard deviation), and to the matrix of sample correlations (in which case formula_47 denotes the sample standard deviation). Consequently, each is necessarily a positive-semidefinite matrix. Moreover, the correlation matrix is strictly positive definite if no variable can have all its values exactly generated as a linear function of the values of the others. The correlation matrix is symmetric because the correlation between formula_48 and formula_49 is the same as the correlation between formula_49 and formula_48. A correlation matrix appears, for example, in one formula for the coefficient of multiple determination, a measure of goodness of fit in multiple regression. In statistical modelling, correlation matrices representing the relationships between variables are categorized into different correlation structures, which are distinguished by factors such as the number of parameters required to estimate them. For example, in an exchangeable correlation matrix, all pairs of variables are modeled as having the same correlation, so all non-diagonal elements of the matrix are equal to each other. On the other hand, an autoregressive matrix is often used when variables represent a time series, since correlations are likely to be greater when measurements are closer in time. Other examples include independent, unstructured, M-dependent, and Toeplitz. In exploratory data analysis, the iconography of correlations consists in replacing a correlation matrix by a diagram where the "remarkable" correlations are represented by a solid line (positive correlation), or a dotted line (negative correlation). Nearest valid correlation matrix. In some applications (e.g., building data models from only partially observed data) one wants to find the "nearest" correlation matrix to an "approximate" correlation matrix (e.g., a matrix which typically lacks positive semi-definiteness due to the way it has been computed). In 2002, Higham formalized the notion of nearness using the Frobenius norm and provided a method for computing the nearest correlation matrix using Dykstra's projection algorithm, of which an implementation is available as an online Web API. This sparked interest in the subject, with new theoretical (e.g., computing the nearest correlation matrix with factor structure) and numerical (e.g., using Newton's method for computing the nearest correlation matrix) results obtained in the subsequent years. Uncorrelatedness and independence of stochastic processes. Similarly for two stochastic processes formula_50 and formula_51: if they are independent, then they are uncorrelated. The converse is not necessarily true: even if two processes are uncorrelated, they need not be independent of each other. Common misconceptions. Correlation and causality. The conventional dictum that "correlation does not imply causation" means that correlation cannot be used by itself to infer a causal relationship between the variables. This dictum should not be taken to mean that correlations cannot indicate the potential existence of causal relations.
However, the causes underlying the correlation, if any, may be indirect and unknown, and high correlations also overlap with identity relations (tautologies), where no causal process exists. Consequently, a correlation between two variables is not a sufficient condition to establish a causal relationship (in either direction). A correlation between age and height in children is fairly causally transparent, but a correlation between mood and health in people is less so. Does improved mood lead to improved health, or does good health lead to good mood, or both? Or does some other factor underlie both? In other words, a correlation can be taken as evidence for a possible causal relationship, but cannot indicate what the causal relationship, if any, might be. Simple linear correlations. The Pearson correlation coefficient indicates the strength of a "linear" relationship between two variables, but its value generally does not completely characterize their relationship. In particular, if the conditional mean of formula_6 given formula_5, denoted formula_52, is not linear in formula_5, the correlation coefficient will not fully determine the form of formula_52. The adjacent image shows scatter plots of Anscombe's quartet, a set of four different pairs of variables created by Francis Anscombe. The four formula_31 variables have the same mean (7.5), variance (4.12), correlation (0.816) and regression line (formula_53). However, as can be seen on the plots, the distribution of the variables is very different. The first one (top left) seems to be distributed normally, and corresponds to what one would expect when considering two variables correlated and following the assumption of normality. The second one (top right) is not distributed normally; while an obvious relationship between the two variables can be observed, it is not linear. In this case the Pearson correlation coefficient does not indicate that there is an exact functional relationship: only the extent to which that relationship can be approximated by a linear relationship. In the third case (bottom left), the linear relationship is perfect, except for one outlier which exerts enough influence to lower the correlation coefficient from 1 to 0.816. Finally, the fourth example (bottom right) shows another example when one outlier is enough to produce a high correlation coefficient, even though the relationship between the two variables is not linear. These examples indicate that the correlation coefficient, as a summary statistic, cannot replace visual examination of the data. The examples are sometimes said to demonstrate that the Pearson correlation assumes that the data follow a normal distribution, but this is only partially correct. The Pearson correlation can be accurately calculated for any distribution that has a finite covariance matrix, which includes most distributions encountered in practice. However, the Pearson correlation coefficient (taken together with the sample mean and variance) is only a sufficient statistic if the data is drawn from a multivariate normal distribution. As a result, the Pearson correlation coefficient fully characterizes the relationship between variables if and only if the data are drawn from a multivariate normal distribution. Bivariate normal distribution. 
If a pair formula_54 of random variables follows a bivariate normal distribution, the conditional mean formula_55 is a linear function of formula_6, and the conditional mean formula_56 is a linear function of formula_57 The correlation coefficient formula_58 between formula_59 and formula_60 and the marginal means and variances of formula_59 and formula_61 determine this linear relationship: formula_62 where formula_63 and formula_64 are the expected values of formula_59 and formula_60 respectively, and formula_65 and formula_66 are the standard deviations of formula_59 and formula_60 respectively. The empirical correlation formula_3 is an estimate of the correlation coefficient formula_67 A distribution estimate for formula_68 is given by formula_69 where formula_70 is the Gaussian hypergeometric function. This density is both a Bayesian posterior density and an exact optimal confidence distribution density. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
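To illustrate the contrast between linear and rank correlation described earlier in this article, the following Python sketch evaluates both the Pearson coefficient and Spearman's rho on the four pairs (0, 1), (10, 100), (101, 500), (102, 2000) used in the rank-correlation discussion above; the helper names are illustrative, and the rank computation assumes no ties, which holds for this data.

```python
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def ranks(values):
    """1-based ranks; assumes no ties, which holds for this example."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        result[idx] = rank
    return result

def spearman_rho(xs, ys):
    """Spearman's rho is the Pearson correlation of the rank variables."""
    return pearson_r(ranks(xs), ranks(ys))

xs = [0, 10, 101, 102]
ys = [1, 100, 500, 2000]
print(round(pearson_r(xs, ys), 4))     # about 0.7544: far from a straight line
print(round(spearman_rho(xs, ys), 4))  # 1.0: the ordering agrees perfectly
```

The two outputs reproduce the point made above: the ranks agree perfectly, so Spearman's coefficient is 1, while the Pearson coefficient stays well below 1 because the points do not lie on a straight line.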
[ { "math_id": 0, "text": "E(Y|X=x)" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "\\rho" }, { "math_id": 3, "text": "r" }, { "math_id": 4, "text": "\\rho_{X,Y}" }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": "Y" }, { "math_id": 7, "text": "\\mu_X" }, { "math_id": 8, "text": "\\mu_Y" }, { "math_id": 9, "text": "\\sigma_X" }, { "math_id": 10, "text": "\\sigma_Y" }, { "math_id": 11, "text": "\\rho_{X,Y} = \\operatorname{corr}(X,Y) = {\\operatorname{cov}(X,Y) \\over \\sigma_X \\sigma_Y} = {\\operatorname{E}[(X-\\mu_X)(Y-\\mu_Y)] \\over \\sigma_X\\sigma_Y}, \\quad \\text{if}\\ \\sigma_{X}\\sigma_{Y}>0." }, { "math_id": 12, "text": "\\operatorname{E}" }, { "math_id": 13, "text": "\\operatorname{cov}" }, { "math_id": 14, "text": "\\operatorname{corr}" }, { "math_id": 15, "text": "\\rho_{X,Y} = {\\operatorname{E}(XY)-\\operatorname{E}(X)\\operatorname{E}(Y)\\over \\sqrt{\\operatorname{E}(X^2)-\\operatorname{E}(X)^2}\\cdot \\sqrt{\\operatorname{E}(Y^2)-\\operatorname{E}(Y)^2} }" }, { "math_id": 16, "text": "(-1,1)" }, { "math_id": 17, "text": "\\begin{align}\nX,Y \\text{ independent} \\quad & \\Rightarrow \\quad \\rho_{X,Y} = 0 \\quad (X,Y \\text{ uncorrelated})\\\\\n\\rho_{X,Y} = 0 \\quad (X,Y \\text{ uncorrelated})\\quad & \\nRightarrow \\quad X,Y \\text{ independent}\n\\end{align}" }, { "math_id": 18, "text": "Y=X^2" }, { "math_id": 19, "text": "n" }, { "math_id": 20, "text": "(X_i,Y_i)" }, { "math_id": 21, "text": "i=1,\\ldots,n" }, { "math_id": 22, "text": "\nr_{xy} \\quad \\overset{\\underset{\\mathrm{def}}{}}{=} \\quad \\frac{\\sum\\limits_{i=1}^n (x_i-\\bar{x})(y_i-\\bar{y})}{(n-1)s_x s_y}\n =\\frac{\\sum\\limits_{i=1}^n (x_i-\\bar{x})(y_i-\\bar{y})}\n {\\sqrt{\\sum\\limits_{i=1}^n (x_i-\\bar{x})^2 \\sum\\limits_{i=1}^n (y_i-\\bar{y})^2}},\n" }, { "math_id": 23, "text": "\\overline{x}" }, { "math_id": 24, "text": "\\overline{y}" }, { "math_id": 25, "text": "s_x" }, { "math_id": 26, "text": "s_y" }, { "math_id": 27, "text": "r_{xy}" }, { "math_id": 28, "text": "\n\\begin{align}\nr_{xy} &=\\frac{\\sum x_iy_i-n \\bar{x} \\bar{y}}{n s'_x s'_y} \\\\[5pt]\n &=\\frac{n\\sum x_iy_i-\\sum x_i\\sum y_i}{\\sqrt{n\\sum x_i^2-(\\sum x_i)^2}~\\sqrt{n\\sum y_i^2-(\\sum y_i)^2}}.\n\\end{align}\n" }, { "math_id": 29, "text": "s'_x" }, { "math_id": 30, "text": "s'_y" }, { "math_id": 31, "text": "y" }, { "math_id": 32, "text": "\\mathrm{P}(X=x)=\n\\begin{cases}\n\\frac 1 3 & \\quad \\text{for } x=0 \\\\\n\\frac 2 3 & \\quad \\text{for } x=1 \n\\end{cases}\n" }, { "math_id": 33, "text": "\\mathrm{P}(Y=y)=\n\\begin{cases}\n\\frac 1 3 & \\quad \\text{for } y=-1 \\\\\n\\frac 1 3 & \\quad \\text{for } y=0 \\\\\n\\frac 1 3 & \\quad \\text{for } y=1 \n\\end{cases}\n" }, { "math_id": 34, "text": "\\mu_X = \\frac 2 3" }, { "math_id": 35, "text": "\\mu_Y = 0" }, { "math_id": 36, "text": "\\sigma_X^2 = \\frac 2 9" }, { "math_id": 37, "text": "\\sigma_Y^2 = \\frac 2 3" }, { "math_id": 38, "text": "\n\\begin{align}\n\\rho_{X,Y} & = \\frac{1}{\\sigma_X \\sigma_Y} \\mathrm{E}[(X-\\mu_X)(Y-\\mu_Y)] \\\\[5pt]\n& = \\frac{1}{\\sigma_X \\sigma_Y} \\sum_{x,y}{(x-\\mu_X)(y-\\mu_Y) \\mathrm{P}(X=x,Y=y)} \\\\[5pt]\n& = \\frac{3\\sqrt{3}}{2}\\left(\\left(1-\\frac 2 3\\right)(-1-0)\\frac{1}{3} + \\left(0-\\frac 2 3\\right)(0-0)\\frac{1}{3} + \\left(1-\\frac 2 3\\right)(1-0)\\frac{1}{3}\\right) = 0.\n\\end{align}\n" }, { "math_id": 39, "text": "(x,y)" }, { "math_id": 40, "text": "X_1,\\ldots,X_n" }, { "math_id": 41, "text": "n \\times n" }, { "math_id": 42, "text": "C" }, { "math_id": 43, "text": "(i,j)" }, { 
"math_id": 44, "text": "c_{ij}:=\\operatorname{corr}(X_i,X_j)=\\frac{\\operatorname{cov}(X_i,X_j)}{\\sigma_{X_i}\\sigma_{X_j}},\\quad \\text{if}\\ \\sigma_{X_i}\\sigma_{X_j}>0." }, { "math_id": 45, "text": "X_i / \\sigma(X_i)" }, { "math_id": 46, "text": "i = 1, \\dots, n" }, { "math_id": 47, "text": "\\sigma" }, { "math_id": 48, "text": "X_i" }, { "math_id": 49, "text": "X_j" }, { "math_id": 50, "text": "\\left\\{ X_t \\right\\}_{t\\in\\mathcal{T}}" }, { "math_id": 51, "text": "\\left\\{ Y_t \\right\\}_{t\\in\\mathcal{T}}" }, { "math_id": 52, "text": "\\operatorname{E}(Y \\mid X)" }, { "math_id": 53, "text": "y=3+0.5x" }, { "math_id": 54, "text": "\\ (X,Y)\\ " }, { "math_id": 55, "text": "\\operatorname{\\boldsymbol\\mathcal E}(X \\mid Y)" }, { "math_id": 56, "text": "\\operatorname{\\boldsymbol\\mathcal E}(Y \\mid X)" }, { "math_id": 57, "text": "\\ X ~." }, { "math_id": 58, "text": "\\ \\rho_{X,Y}\\ " }, { "math_id": 59, "text": "\\ X\\ " }, { "math_id": 60, "text": "\\ Y\\ ," }, { "math_id": 61, "text": "\\ Y\\ " }, { "math_id": 62, "text": "\\operatorname{\\boldsymbol\\mathcal E}(Y \\mid X ) = \\operatorname{\\boldsymbol\\mathcal E}(Y) + \\rho_{X,Y} \\cdot \\sigma_Y \\cdot \\frac{\\ X-\\operatorname{\\boldsymbol\\mathcal E}(X)\\ }{ \\sigma_X }\\ ," }, { "math_id": 63, "text": "\\operatorname{\\boldsymbol\\mathcal E}(X)" }, { "math_id": 64, "text": "\\operatorname{\\boldsymbol\\mathcal E}(Y)" }, { "math_id": 65, "text": "\\ \\sigma_X\\ " }, { "math_id": 66, "text": "\\ \\sigma_Y\\ " }, { "math_id": 67, "text": "\\ \\rho ~." }, { "math_id": 68, "text": "\\ \\rho\\ " }, { "math_id": 69, "text": "\\pi ( \\rho \\mid r ) =\n\\frac{\\ \\Gamma(N)\\ }{\\ \\sqrt{ 2\\pi\\ } \\cdot\n\\Gamma( N - \\tfrac{\\ 1\\ }{ 2 } )\\ } \\cdot\n\\bigl( 1 - r^2 \\bigr)^{ \\frac{\\ N\\ - 2\\ }{ 2 } } \\cdot\n\\bigl( 1 - \\rho^2 \\bigr)^{ \\frac{\\ N - 3\\ }{ 2 } } \\cdot\n\\bigl( 1 - r \\rho \\bigr)^{ - N + \\frac{\\ 3 \\ }{ 2 } } \\cdot F_\\mathsf{Hyp} \\left(\\ \\tfrac{\\ 3\\ }{ 2 } , -\\tfrac{\\ 1\\ }{ 2 } ; N - \\tfrac{\\ 1\\ }{ 2 } ; \\frac{\\ 1 + r \\rho\\ }{ 2 }\\ \\right)\\ " }, { "math_id": 70, "text": "\\ F_\\mathsf{Hyp} \\ " } ]
https://en.wikipedia.org/wiki?curid=157057
157059
Covariance
Measure of the joint variability Covariance in probability theory and statistics is a measure of the joint variability of two random variables. The sign of the covariance, therefore, shows the tendency in the linear relationship between the variables. If greater values of one variable mainly correspond with greater values of the other variable, and the same holds for lesser values (that is, the variables tend to show similar behavior), the covariance is positive. In the opposite case, when greater values of one variable mainly correspond to lesser values of the other (that is, the variables tend to show opposite behavior), the covariance is negative. The magnitude of the covariance is the geometric mean of the variances that are in common for the two random variables. The correlation coefficient normalizes the covariance by dividing by the geometric mean of the total variances for the two random variables. A distinction must be made between (1) the covariance of two random variables, which is a population parameter that can be seen as a property of the joint probability distribution, and (2) the sample covariance, which in addition to serving as a descriptor of the sample, also serves as an estimated value of the population parameter. Mathematical definition. For two jointly distributed real-valued random variables formula_0 and formula_1 with finite second moments, the covariance is defined as the expected value (or mean) of the product of their deviations from their individual expected values:119 formula_2 where formula_3 is the expected value of formula_0, also known as the mean of formula_0. The covariance is also sometimes denoted formula_4 or formula_5, in analogy to variance. By using the linearity property of expectations, this can be simplified to the expected value of their product minus the product of their expected values: formula_6 but this equation is susceptible to catastrophic cancellation (see the section on numerical computation below). The units of measurement of the covariance formula_7 are those of formula_0 times those of formula_1. By contrast, correlation coefficients, which depend on the covariance, are a dimensionless measure of linear dependence. (In fact, correlation coefficients can simply be understood as a normalized version of covariance.) Complex random variables. The covariance between two complex random variables formula_8 is defined as119 formula_9 Notice the complex conjugation of the second factor in the definition. A related "pseudo-covariance" can also be defined. Discrete random variables. If the (real) random variable pair formula_10 can take on the values formula_11 for formula_12, with equal probabilities formula_13, then the covariance can be equivalently written in terms of the means formula_3 and formula_14 as formula_15 It can also be equivalently expressed, without directly referring to the means, as formula_16 More generally, if there are formula_17 possible realizations of formula_10, namely formula_11 but with possibly unequal probabilities formula_18 for formula_12, then the covariance is formula_19 In the case where two discrete random variables formula_0 and formula_1 have a joint probability distribution, represented by elements formula_20 corresponding to the joint probabilities of formula_21, the covariance is calculated using a double summation over the indices of the matrix: formula_22 Examples. Consider 3 independent random variables formula_23 and two constants formula_24. 
formula_25 In the special case, formula_26 and formula_27, the covariance between formula_0 and formula_1 is just the variance of formula_28 and the name covariance is entirely appropriate. Suppose that formula_0 and formula_1 have the following joint probability mass function, in which the six cells give the discrete joint probabilities formula_29 of the six hypothetical realizations formula_30: the probabilities are 0, 0.4 and 0.1 for the pairs (5, 8), (6, 8) and (7, 8), and 0.3, 0 and 0.2 for the pairs (5, 9), (6, 9) and (7, 9), respectively. formula_0 can take on three values (5, 6 and 7) while formula_1 can take on two (8 and 9). Their means are formula_31 and formula_32. Then, formula_33 Properties. Covariance with itself. The variance is a special case of the covariance in which the two variables are identical:121 formula_34 Covariance of linear combinations. If formula_0, formula_1, formula_35, and formula_36 are real-valued random variables and formula_37 are real-valued constants, then the following facts are a consequence of the definition of covariance: formula_38 For a sequence formula_39 of real-valued random variables and constants formula_40, we have formula_41 Hoeffding's covariance identity. A useful identity to compute the covariance between two random variables formula_42 is Hoeffding's covariance identity: formula_43 where formula_44 is the joint cumulative distribution function of the random vector formula_45 and formula_46 are the marginals. Uncorrelatedness and independence. Random variables whose covariance is zero are called uncorrelated.121 Similarly, the components of random vectors whose covariance matrix is zero in every entry outside the main diagonal are also called uncorrelated. If formula_0 and formula_1 are independent random variables, then their covariance is zero.123 This follows because under independence, formula_47 The converse, however, is not generally true. For example, let formula_0 be uniformly distributed in formula_48 and let formula_49. Clearly, formula_0 and formula_1 are not independent, but formula_50 In this case, the relationship between formula_1 and formula_0 is non-linear, while correlation and covariance are measures of linear dependence between two random variables. This example shows that if two random variables are uncorrelated, that does not in general imply that they are independent. However, if two variables are jointly normally distributed (but not if they are merely individually normally distributed), uncorrelatedness "does" imply independence. formula_0 and formula_1 whose covariance is positive are called positively correlated, which implies that if formula_51 then likely formula_52. Conversely, formula_0 and formula_1 with negative covariance are negatively correlated, and if formula_51 then likely formula_53. Relationship to inner products. Many of the properties of covariance can be extracted elegantly by observing that it satisfies similar properties to those of an inner product: (1) it is bilinear: for constants formula_54 and formula_55 and random variables formula_56 formula_57; (2) it is symmetric: formula_58; (3) it is positive semi-definite: formula_59 for every random variable formula_0, and formula_60 implies that formula_0 is constant almost surely. In fact these properties imply that the covariance defines an inner product over the quotient vector space obtained by taking the subspace of random variables with finite second moment and identifying any two that differ by a constant. (This identification turns the positive semi-definiteness above into positive definiteness.) That quotient vector space is isomorphic to the subspace of random variables with finite second moment and mean zero; on that subspace, the covariance is exactly the L2 inner product of real-valued functions on the sample space. As a result, for random variables with finite variance, the inequality formula_61 holds via the Cauchy–Schwarz inequality.
Proof: If formula_62, then it holds trivially. Otherwise, let random variable formula_63 Then we have formula_64 Calculating the sample covariance. The sample covariances among formula_65 variables based on formula_66 observations of each, drawn from an otherwise unobserved population, are given by the formula_67 matrix formula_68 with the entries formula_69 which is an estimate of the covariance between variable formula_70 and variable formula_71. The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the random vector formula_72, a vector whose "j"th element formula_73 is one of the random variables. The reason the sample covariance matrix has formula_74 in the denominator rather than formula_75 is essentially that the population mean formula_76 is not known and is replaced by the sample mean formula_77. If the population mean formula_76 is known, the analogous unbiased estimate is given by formula_78. Generalizations. Auto-covariance matrix of real random vectors. For a vector formula_79 of formula_80 jointly distributed random variables with finite second moments, its auto-covariance matrix (also known as the variance–covariance matrix or simply the covariance matrix) formula_81 (also denoted by formula_82 or formula_83) is defined as335 formula_84 Let formula_85 be a random vector with covariance matrix Σ, and let A be a matrix that can act on formula_85 on the left. The covariance matrix of the matrix-vector product A X is: formula_86 This is a direct result of the linearity of expectation and is useful when applying a linear transformation, such as a whitening transformation, to a vector. Cross-covariance matrix of real random vectors. For real random vectors formula_87 and formula_88, the formula_89 cross-covariance matrix formula_95 is the matrix of covariances between the components of the two vectors;336 here formula_90 denotes the transpose of the vector (or matrix) formula_91. The formula_92-th element of this matrix is equal to the covariance formula_93 between the "i"-th scalar component of formula_85 and the "j"-th scalar component of formula_91. In particular, formula_94 is the transpose of formula_95. Cross-covariance sesquilinear form of random vectors in a real or complex Hilbert space. More generally, let formula_96 and formula_97 be Hilbert spaces over formula_98 or formula_99 with formula_100 antilinear in the first variable, and let formula_101 be formula_102- resp. formula_103-valued random variables. Then the covariance of formula_85 and formula_91 is the sesquilinear form on formula_104 (antilinear in the first variable) given by formula_105 Numerical computation. When formula_106, the equation formula_107 is prone to catastrophic cancellation if formula_108 and formula_109 are not computed exactly and thus should be avoided in computer programs when the data has not been centered before. Numerically stable algorithms should be preferred in this case. Comments. The covariance is sometimes called a measure of "linear dependence" between the two random variables. That does not mean the same thing as in the context of linear algebra (see linear dependence). When the covariance is normalized, one obtains the Pearson correlation coefficient, which gives the goodness of the fit for the best possible linear function describing the relation between the variables. In this sense covariance is a linear gauge of dependence. Applications. In genetics and molecular biology. Covariance is an important measure in biology.
Certain sequences of DNA are conserved more than others among species, and thus to study secondary and tertiary structures of proteins, or of RNA structures, sequences are compared in closely related species. If sequence changes are found or no changes at all are found in noncoding RNA (such as microRNA), sequences are found to be necessary for common structural motifs, such as an RNA loop. In genetics, covariance serves as a basis for computation of the Genetic Relationship Matrix (GRM) (aka kinship matrix), enabling inference on population structure from a sample with no known close relatives, as well as inference on the estimation of heritability of complex traits. In the theory of evolution and natural selection, the Price equation describes how a genetic trait changes in frequency over time. The equation uses the covariance between a trait and fitness to give a mathematical description of evolution and natural selection. It provides a way to understand the effects that gene transmission and natural selection have on the proportion of genes within each new generation of a population. In financial economics. Covariances play a key role in financial economics, especially in modern portfolio theory and in the capital asset pricing model. Covariances among various assets' returns are used to determine, under certain assumptions, the relative amounts of different assets that investors should (in a normative analysis) or are predicted to (in a positive analysis) choose to hold in a context of diversification. In meteorological and oceanographic data assimilation. The covariance matrix is important in estimating the initial conditions required for running weather forecast models, a procedure known as data assimilation. The 'forecast error covariance matrix' is typically constructed between perturbations around a mean state (either a climatological or ensemble mean). The 'observation error covariance matrix' is constructed to represent the magnitude of combined observational errors (on the diagonal) and the correlated errors between measurements (off the diagonal). This is an example of its widespread application to Kalman filtering and more general state estimation for time-varying systems. In micrometeorology. The eddy covariance technique is a key atmospheric measurement technique where the covariance between instantaneous deviation in vertical wind speed from the mean value and instantaneous deviation in gas concentration is the basis for calculating the vertical turbulent fluxes. In signal processing. The covariance matrix is used to capture the spectral variability of a signal. In statistics and image processing. The covariance matrix is used in principal component analysis to reduce feature dimensionality in data preprocessing. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
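Referring back to the numerical-computation remark above about catastrophic cancellation, the following Python sketch contrasts the textbook identity E[XY] - E[X]E[Y] with a two-pass computation that centres the data first. The function names and the artificial shift of 10^9 are illustrative assumptions; the equal-weight (divide by n) convention from the discrete-variable section is used, and replacing n by n - 1 in the denominators would give the corrected sample covariance instead.

```python
def naive_cov(xs, ys):
    """E[XY] - E[X]E[Y]; mathematically correct, but prone to catastrophic
    cancellation when the means are large relative to the spread."""
    n = len(xs)
    return (sum(x * y for x, y in zip(xs, ys)) / n
            - (sum(xs) / n) * (sum(ys) / n))

def two_pass_cov(xs, ys):
    """Centre the data first, then average the products of the deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.0, 2.0, 3.0, 4.0]
shift = 1e9
xs_big = [x + shift for x in xs]
ys_big = [y + shift for y in ys]

print(two_pass_cov(xs, ys))          # 1.25
print(two_pass_cov(xs_big, ys_big))  # still 1.25: covariance is shift-invariant
print(naive_cov(xs_big, ys_big))     # typically far from 1.25 in double precision
```

The last line illustrates the cancellation problem: both terms of the difference are of order 10^18, so their tiny true difference is lost to rounding, whereas centring the data first keeps the computation well conditioned.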
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "Y" }, { "math_id": 2, "text": "\\operatorname{cov}(X, Y) = \\operatorname{E}{\\big[(X - \\operatorname{E}[X])(Y - \\operatorname{E}[Y])\\big]}" }, { "math_id": 3, "text": "\\operatorname{E}[X]" }, { "math_id": 4, "text": "\\sigma_{XY}" }, { "math_id": 5, "text": "\\sigma(X,Y)" }, { "math_id": 6, "text": "\n\\begin{align}\n\\operatorname{cov}(X, Y)\n&= \\operatorname{E}\\left[\\left(X - \\operatorname{E}\\left[X\\right]\\right) \\left(Y - \\operatorname{E}\\left[Y\\right]\\right)\\right] \\\\\n&= \\operatorname{E}\\left[X Y - X \\operatorname{E}\\left[Y\\right] - \\operatorname{E}\\left[X\\right] Y + \\operatorname{E}\\left[X\\right] \\operatorname{E}\\left[Y\\right]\\right] \\\\\n&= \\operatorname{E}\\left[X Y\\right] - \\operatorname{E}\\left[X\\right] \\operatorname{E}\\left[Y\\right] - \\operatorname{E}\\left[X\\right] \\operatorname{E}\\left[Y\\right] + \\operatorname{E}\\left[X\\right] \\operatorname{E}\\left[Y\\right] \\\\\n&= \\operatorname{E}\\left[X Y\\right] - \\operatorname{E}\\left[X\\right] \\operatorname{E}\\left[Y\\right],\n\\end{align}\n" }, { "math_id": 7, "text": "\\operatorname{cov}(X, Y)" }, { "math_id": 8, "text": "Z, W" }, { "math_id": 9, "text": "\\operatorname{cov}(Z, W) =\n \\operatorname{E}\\left[(Z - \\operatorname{E}[Z])\\overline{(W - \\operatorname{E}[W])}\\right] =\n \\operatorname{E}\\left[Z\\overline{W}\\right] - \\operatorname{E}[Z]\\operatorname{E}\\left[\\overline{W}\\right]\n" }, { "math_id": 10, "text": "(X,Y)" }, { "math_id": 11, "text": "(x_i,y_i)" }, { "math_id": 12, "text": "i = 1,\\ldots,n" }, { "math_id": 13, "text": "p_i=1/n" }, { "math_id": 14, "text": "\\operatorname{E}[Y]" }, { "math_id": 15, "text": "\\operatorname{cov} (X,Y) = \\frac{1}{n}\\sum_{i=1}^n (x_i-E(X)) (y_i-E(Y))." }, { "math_id": 16, "text": " \\operatorname{cov}(X,Y) = \\frac{1}{n^2} \\sum_{i=1}^n \\sum_{j=1}^n \\frac{1}{2}(x_i - x_j)(y_i - y_j) = \\frac{1}{n^2} \\sum_i \\sum_{j>i} (x_i-x_j)(y_i - y_j). " }, { "math_id": 17, "text": "n" }, { "math_id": 18, "text": "p_i " }, { "math_id": 19, "text": "\\operatorname{cov} (X,Y) = \\sum_{i=1}^n p_i (x_i-E(X)) (y_i-E(Y))." }, { "math_id": 20, "text": "p_{i,j}" }, { "math_id": 21, "text": "P( X = x_i, Y = y_j )" }, { "math_id": 22, "text": "\\operatorname{cov} (X, Y) = \\sum_{i=1}^{n}\\sum_{j=1}^{n} p_{i,j} (x_i - E[X])(y_j - E[Y])." }, { "math_id": 23, "text": "A, B, C" }, { "math_id": 24, "text": "q, r" }, { "math_id": 25, "text": "\n\\begin{align}\nX &= qA + B \\\\\nY &= rA + C \\\\\n\\operatorname{cov}(X, Y)\n&= qr \\operatorname{var}(A)\n\\end{align}\n" }, { "math_id": 26, "text": "q=1" }, { "math_id": 27, "text": "r=1" }, { "math_id": 28, "text": "A" }, { "math_id": 29, "text": "f(x, y)" }, { "math_id": 30, "text": "(x, y) \\in S = \\left\\{ (5, 8), (6, 8), (7, 8), (5, 9), (6, 9), (7, 9) \\right\\}" }, { "math_id": 31, "text": "\\mu_X = 5(0.3) + 6(0.4) + 7(0.1 + 0.2) = 6" }, { "math_id": 32, "text": "\\mu_Y = 8(0.4 + 0.1) + 9(0.3 + 0.2) = 8.5" }, { "math_id": 33, "text": "\\begin{align}\n \\operatorname{cov}(X, Y)\n ={} &\\sigma_{XY} = \\sum_{(x,y)\\in S}f(x, y) \\left(x - \\mu_X\\right)\\left(y - \\mu_Y\\right) \\\\[4pt]\n ={} &(0)(5 - 6)(8 - 8.5) + (0.4)(6 - 6)(8 - 8.5) + (0.1)(7 - 6)(8 - 8.5) +{} \\\\[4pt]\n &(0.3)(5 - 6)(9 - 8.5) + (0)(6 - 6)(9 - 8.5) + (0.2)(7 - 6)(9 - 8.5) \\\\[4pt]\n ={} &{-0.1} \\; .\n\\end{align}" }, { "math_id": 34, "text": "\\operatorname{cov}(X, X) = \\operatorname{var}(X)\\equiv\\sigma^2(X)\\equiv\\sigma_X^2." 
}, { "math_id": 35, "text": "W" }, { "math_id": 36, "text": "V" }, { "math_id": 37, "text": "a,b,c,d" }, { "math_id": 38, "text": "\n\\begin{align}\n \\operatorname{cov}(X, a) &= 0 \\\\\n \\operatorname{cov}(X, X) &= \\operatorname{var}(X) \\\\\n \\operatorname{cov}(X, Y) &= \\operatorname{cov}(Y, X) \\\\\n \\operatorname{cov}(aX, bY) &= ab\\, \\operatorname{cov}(X, Y) \\\\\n \\operatorname{cov}(X+a, Y+b) &= \\operatorname{cov}(X, Y) \\\\ \n \\operatorname{cov}(aX+bY, cW+dV) &= ac\\,\\operatorname{cov}(X,W)+ad\\,\\operatorname{cov}(X,V)+bc\\,\\operatorname{cov}(Y,W)+bd\\,\\operatorname{cov}(Y,V)\n\\end{align}\n" }, { "math_id": 39, "text": "X_1,\\ldots,X_n" }, { "math_id": 40, "text": "a_1,\\ldots,a_n" }, { "math_id": 41, "text": "\\operatorname{var}\\left(\\sum_{i=1}^n a_iX_i \\right) = \\sum_{i=1}^n a_i^2\\sigma^2(X_i) + 2\\sum_{i,j\\,:\\,i<j} a_ia_j\\operatorname{cov}(X_i,X_j) = \\sum_{i,j} {a_ia_j\\operatorname{cov}(X_i,X_j)}\n" }, { "math_id": 42, "text": "X, Y " }, { "math_id": 43, "text": "\\operatorname{cov}(X, Y) = \\int_\\mathbb R \\int_\\mathbb R \\left(F_{(X, Y)}(x, y) - F_X(x)F_Y(y)\\right) \\,dx \\,dy" }, { "math_id": 44, "text": " F_{(X,Y)}(x,y) " }, { "math_id": 45, "text": " (X, Y) " }, { "math_id": 46, "text": " F_X(x), F_Y(y) " }, { "math_id": 47, "text": "\\operatorname{E}[XY] = \\operatorname{E}[X] \\cdot \\operatorname{E}[Y]. " }, { "math_id": 48, "text": "[-1,1]" }, { "math_id": 49, "text": "Y = X^2" }, { "math_id": 50, "text": "\\begin{align}\n \\operatorname{cov}(X, Y) &= \\operatorname{cov}\\left(X, X^2\\right) \\\\\n &= \\operatorname{E}\\left[X \\cdot X^2\\right] - \\operatorname{E}[X] \\cdot \\operatorname{E}\\left[X^2\\right] \\\\\n &= \\operatorname{E}\\left[X^3\\right] - \\operatorname{E}[X]\\operatorname{E}\\left[X^2\\right] \\\\\n &= 0 - 0 \\cdot \\operatorname{E}[X^2] \\\\\n &= 0. \n\\end{align}" }, { "math_id": 51, "text": "X>E[X]" }, { "math_id": 52, "text": "Y>E[Y]" }, { "math_id": 53, "text": "Y<E[Y]" }, { "math_id": 54, "text": "a" }, { "math_id": 55, "text": "b" }, { "math_id": 56, "text": "X,Y,Z," }, { "math_id": 57, "text": " \\operatorname{cov}(aX+bY,Z) = a \\operatorname{cov}(X,Z) + b \\operatorname{cov}(Y,Z)" }, { "math_id": 58, "text": "\\operatorname{cov}(X,Y) = \\operatorname{cov}(Y,X)" }, { "math_id": 59, "text": "\\sigma^2(X) = \\operatorname{cov}(X,X) \\ge 0" }, { "math_id": 60, "text": "\\operatorname{cov}(X,X) = 0" }, { "math_id": 61, "text": "\\left|\\operatorname{cov}(X, Y)\\right| \\le \\sqrt{\\sigma^2(X) \\sigma^2(Y)} " }, { "math_id": 62, "text": "\\sigma^2(Y) = 0" }, { "math_id": 63, "text": " Z = X - \\frac{\\operatorname{cov}(X, Y)}{\\sigma^2(Y)} Y." 
}, { "math_id": 64, "text": "\\begin{align}\n 0 \\le \\sigma^2(Z)\n &= \\operatorname{cov}\\left(\n X - \\frac{\\operatorname{cov}(X, Y)}{\\sigma^2(Y)} Y,\\;\n X - \\frac{\\operatorname{cov}(X, Y)}{\\sigma^2(Y)} Y\n \\right) \\\\[12pt]\n &= \\sigma^2(X) - \\frac{(\\operatorname{cov}(X, Y))^2}{\\sigma^2(Y)} \\\\\n\\implies (\\operatorname{cov}(X, Y))^2 &\\le \\sigma^2(X)\\sigma^2(Y) \\\\\n\\left|\\operatorname{cov}(X, Y)\\right| &\\le \\sqrt{\\sigma^2(X)\\sigma^2(Y)}\n\\end{align}" }, { "math_id": 65, "text": "K" }, { "math_id": 66, "text": "N" }, { "math_id": 67, "text": "K \\times K" }, { "math_id": 68, "text": "\\textstyle \\overline{\\mathbf{q}} = \\left[q_{jk}\\right]" }, { "math_id": 69, "text": "q_{jk} = \\frac{1}{N - 1}\\sum_{i=1}^N \\left(X_{ij} - \\bar{X}_j\\right) \\left(X_{ik} - \\bar{X}_k\\right)," }, { "math_id": 70, "text": "j" }, { "math_id": 71, "text": "k" }, { "math_id": 72, "text": "\\textstyle \\mathbf{X}" }, { "math_id": 73, "text": "(j = 1,\\, \\ldots,\\, K)" }, { "math_id": 74, "text": "\\textstyle N-1" }, { "math_id": 75, "text": "\\textstyle N" }, { "math_id": 76, "text": "\\operatorname{E}(\\mathbf{X})" }, { "math_id": 77, "text": "\\mathbf{\\bar{X}}" }, { "math_id": 78, "text": " q_{jk} = \\frac{1}{N} \\sum_{i=1}^N \\left(X_{ij} - \\operatorname{E}\\left(X_j\\right)\\right) \\left(X_{ik} - \\operatorname{E}\\left(X_k\\right)\\right)" }, { "math_id": 79, "text": "\\mathbf{X} = \\begin{bmatrix} X_1 & X_2 & \\dots & X_m \\end{bmatrix}^\\mathrm{T}" }, { "math_id": 80, "text": "m" }, { "math_id": 81, "text": "\\operatorname{K}_{\\mathbf{X}\\mathbf{X}}" }, { "math_id": 82, "text": "\\Sigma(\\mathbf{X})" }, { "math_id": 83, "text": "\\operatorname{cov}(\\mathbf{X}, \\mathbf{X})" }, { "math_id": 84, "text": "\\begin{align}\n \\operatorname{K}_\\mathbf{XX} = \\operatorname{cov}(\\mathbf{X}, \\mathbf{X})\n &= \\operatorname{E}\\left[(\\mathbf{X} - \\operatorname{E}[\\mathbf{X}]) (\\mathbf{X} - \\operatorname{E}[\\mathbf{X}])^\\mathrm{T}\\right] \\\\\n &= \\operatorname{E}\\left[\\mathbf{XX}^\\mathrm{T}\\right] - \\operatorname{E}[\\mathbf{X}]\\operatorname{E}[\\mathbf{X}]^\\mathrm{T}.\n\\end{align}" }, { "math_id": 85, "text": "\\mathbf{X}" }, { "math_id": 86, "text": "\\begin{align}\n \\operatorname{cov}(\\mathbf{AX},\\mathbf{AX}) &=\n \\operatorname{E}\\left[\\mathbf{AX(A}\\mathbf{X)}^\\mathrm{T}\\right] - \\operatorname{E}[\\mathbf{AX}] \\operatorname{E}\\left[(\\mathbf{A}\\mathbf{X})^\\mathrm{T}\\right] \\\\\n &= \\operatorname{E}\\left[\\mathbf{AXX}^\\mathrm{T}\\mathbf{A}^\\mathrm{T}\\right] - \\operatorname{E}[\\mathbf{AX}] \\operatorname{E}\\left[\\mathbf{X}^\\mathrm{T}\\mathbf{A}^\\mathrm{T}\\right] \\\\\n &= \\mathbf{A}\\operatorname{E}\\left[\\mathbf{XX}^\\mathrm{T}\\right]\\mathbf{A}^\\mathrm{T} - \\mathbf{A}\\operatorname{E}[\\mathbf{X}] \\operatorname{E}\\left[\\mathbf{X}^\\mathrm{T}\\right]\\mathbf{A}^\\mathrm{T} \\\\\n &= \\mathbf{A}\\left(\\operatorname{E}\\left[\\mathbf{XX}^\\mathrm{T}\\right] - \\operatorname{E}[\\mathbf{X}] \\operatorname{E}\\left[\\mathbf{X}^\\mathrm{T}\\right]\\right)\\mathbf{A}^\\mathrm{T} \\\\\n &= \\mathbf{A}\\Sigma\\mathbf{A}^\\mathrm{T}.\n\\end{align}" }, { "math_id": 87, "text": "\\mathbf{X} \\in \\mathbb{R}^m" }, { "math_id": 88, "text": "\\mathbf{Y} \\in \\mathbb{R}^n" }, { "math_id": 89, "text": "m \\times n" }, { "math_id": 90, "text": "\\mathbf{Y}^{\\mathrm T}" }, { "math_id": 91, "text": "\\mathbf{Y}" }, { "math_id": 92, "text": "(i,j)" }, { "math_id": 93, "text": "\\operatorname{cov}(X_i,Y_j)" }, { "math_id": 94, "text": 
"\\operatorname{cov}(\\mathbf{Y},\\mathbf{X})" }, { "math_id": 95, "text": "\\operatorname{cov}(\\mathbf{X},\\mathbf{Y})" }, { "math_id": 96, "text": "H_1 = (H_1, \\langle \\,,\\rangle_1)" }, { "math_id": 97, "text": "H_2 = (H_2, \\langle \\,,\\rangle_2)" }, { "math_id": 98, "text": "\\mathbb{R}" }, { "math_id": 99, "text": "\\mathbb{C}" }, { "math_id": 100, "text": "\\langle \\,, \\rangle" }, { "math_id": 101, "text": "\\mathbf{X}, \\mathbf{Y}" }, { "math_id": 102, "text": "H_1" }, { "math_id": 103, "text": "H_2" }, { "math_id": 104, "text": "H_1 \\times H_2" }, { "math_id": 105, "text": "\\begin{align}\n \\operatorname{K}_{X,Y}(h_1,h_2) = \\operatorname{cov}(\\mathbf{X},\\mathbf{Y})(h_1,h_2) &=\n\\operatorname{E}\\left[\\langle h_1,(\\mathbf{X} - \\operatorname{E}[\\mathbf{X}])\\rangle_1\\langle(\\mathbf{Y} - \\operatorname{E}[\\mathbf{Y}]), h_2 \\rangle_2\\right] \\\\\n &= \\operatorname{E}[\\langle h_1,\\mathbf{X}\\rangle_1\\langle\\mathbf{Y}, h_2 \\rangle_2] - \\operatorname{E}[\\langle h,\\mathbf{X} \\rangle_1] \\operatorname{E}[\\langle \\mathbf{Y},h_2 \\rangle_2] \\\\\n&= \\langle h_1, \\operatorname{E}\\left[(\\mathbf{X} - \\operatorname{E}[\\mathbf{X}])(\\mathbf{Y} - \\operatorname{E}[\\mathbf{Y}])^\\dagger \\right]h_2 \\rangle_1\\\\\n&= \\langle h_1, \\left( \\operatorname{E}[\\mathbf{X}\\mathbf{Y}^\\dagger] - \\operatorname{E}[\\mathbf{X}]\\operatorname{E}[\\mathbf{Y}]^\\dagger \\right) h_2 \\rangle_1\\\\\n\\end{align} " }, { "math_id": 106, "text": "\\operatorname{E}[XY] \\approx \\operatorname{E}[X]\\operatorname{E}[Y]" }, { "math_id": 107, "text": "\\operatorname{cov}(X, Y) = \\operatorname{E}\\left[X Y\\right] - \\operatorname{E}\\left[X\\right] \\operatorname{E}\\left[Y\\right]" }, { "math_id": 108, "text": "\\operatorname{E}\\left[X Y\\right]" }, { "math_id": 109, "text": "\\operatorname{E}\\left[X\\right] \\operatorname{E}\\left[Y\\right]" } ]
https://en.wikipedia.org/wiki?curid=157059
1570629
CDI
CDI, CDi, CD-i, or .cdi may refer to: &lt;templatestyles src="Template:TOC_right/styles.css" /&gt; Other uses. Topics referred to by the same term &lt;templatestyles src="Dmbox/styles.css" /&gt; This page lists articles associated with the title CDI.
[ { "math_id": 0, "text": "C_{D,i}" } ]
https://en.wikipedia.org/wiki?curid=1570629
15708202
Postnikov system
In mathematics, a topological construction In homotopy theory, a branch of algebraic topology, a Postnikov system (or Postnikov tower) is a way of decomposing a topological space's homotopy groups using an inverse system of topological spaces whose homotopy type at degree formula_0 agrees with the truncated homotopy type of the original space formula_1. Postnikov systems were introduced by, and are named after, Mikhail Postnikov. Definition. A Postnikov system of a path-connected space formula_1 is an inverse system of spaces formula_2 with a sequence of maps formula_3 compatible with the inverse system such that The first two conditions imply that formula_11 is also a formula_12-space. More generally, if formula_1 is formula_13-connected, then formula_14 is a formula_10-space and all formula_15 for formula_16 are contractible. Note the third condition is only included optionally by some authors. Existence. Postnikov systems exist on connected CW complexes, and there is a weak homotopy-equivalence between formula_1 and its inverse limit, so formula_17, showing that formula_1 is a CW approximation of its inverse limit. They can be constructed on a CW complex by iteratively killing off homotopy groups. If we have a map formula_18 representing a homotopy class formula_19, we can take the pushout along the boundary map formula_20, killing off the homotopy class. For formula_21 this process can be repeated for all formula_22, giving a space which has vanishing homotopy groups formula_23. Using the fact that formula_24can be constructed from formula_14 by killing off all homotopy maps formula_25, we obtain a map formula_26. Main property. One of the main properties of the Postnikov tower, which makes it so powerful to study while computing cohomology, is the fact the spaces formula_14 are homotopic to a CW complex formula_27 which differs from formula_1 only by cells of dimension formula_28. Homotopy classification of fibrations. The sequence of fibrations formula_29 have homotopically defined invariants, meaning the homotopy classes of maps formula_30, give a well defined homotopy type formula_31. The homotopy class of formula_30 comes from looking at the homotopy class of the classifying map for the fiber formula_32. The associated classifying map is formula_33, hence the homotopy class formula_34 is classified by a homotopy class formula_35 called the "n"th Postnikov invariant of formula_1, since the homotopy classes of maps to Eilenberg-Maclane spaces gives cohomology with coefficients in the associated abelian group. Fiber sequence for spaces with two nontrivial homotopy groups. One of the special cases of the homotopy classification is the homotopy class of spaces formula_1 such that there exists a fibration formula_36 giving a homotopy type with two non-trivial homotopy groups, formula_37, and formula_38. Then, from the previous discussion, the fibration map formula_39 gives a cohomology class in formula_40, which can also be interpreted as a group cohomology class. This space formula_1 can be considered a higher local system. Examples of Postnikov towers. Postnikov tower of a "K"("G", "n"). One of the conceptually simplest cases of a Postnikov tower is that of the Eilenberg–Maclane space formula_41. This gives a tower with formula_42 Postnikov tower of "S"2. The Postnikov tower for the sphere formula_43 is a special case whose first few terms can be understood explicitly. 
The first few homotopy groups of formula_43 follow from its simple connectedness, degree theory of spheres, and the Hopf fibration, which gives formula_44 for formula_45; hence formula_46 Then, formula_47, and formula_48 comes from a pullback sequence formula_49 which is classified by an element in formula_50. If this were trivial it would imply formula_51. But this is not the case! In fact, this is responsible for why strict infinity groupoids don't model homotopy types. Computing this invariant requires more work, but it can be found explicitly. This is the quadratic form formula_52 on formula_53 coming from the Hopf fibration formula_54. Note that each element in formula_55 gives a different homotopy 3-type. Homotopy groups of spheres. One application of the Postnikov tower is the computation of homotopy groups of spheres. For an formula_56-dimensional sphere formula_57 we can use the Hurewicz theorem to show each formula_58 is contractible for formula_16, since the theorem implies that the lower homotopy groups are trivial. Recall there is a spectral sequence for any Serre fibration, such as the fibration formula_59. We can then form a homological spectral sequence with formula_60-terms formula_61. The first non-trivial map to formula_62 is formula_63, equivalently written as formula_64. If it's easy to compute formula_65 and formula_66, then we can get information about what this map looks like. In particular, if it's an isomorphism, we obtain a computation of formula_62. For the case formula_67, this can be computed explicitly using the path fibration for formula_68, the main property of the Postnikov tower for formula_69 (giving formula_70), and the universal coefficient theorem giving formula_71. Moreover, because of the Freudenthal suspension theorem this actually gives the stable homotopy group formula_72 since formula_73 is stable for formula_74. Note that similar techniques can be applied using the Whitehead tower (below) for computing formula_75 and formula_76, giving the first two non-trivial stable homotopy groups of spheres. Postnikov towers of spectra. In addition to the classical Postnikov tower, there is a notion of Postnikov towers in stable homotopy theory constructed on spectra. Definition. For a spectrum formula_77 a Postnikov tower of formula_77 is a diagram in the homotopy category of spectra, formula_78, given by formula_79, with maps formula_80 commuting with the formula_30 maps. Then, this tower is a Postnikov tower if the following two conditions are satisfied: (1) formula_81 for formula_7, and (2) the induced map formula_82 is an isomorphism for formula_83, where formula_84 are stable homotopy groups of a spectrum. It turns out every spectrum has a Postnikov tower and this tower can be constructed using a similar kind of inductive procedure as the one given above. Whitehead tower. Given a CW complex formula_1, there is a dual construction to the Postnikov tower called the Whitehead tower. Instead of killing off all higher homotopy groups, the Whitehead tower iteratively kills off lower homotopy groups. This is given by a tower of CW complexes, formula_85, where the homotopy groups of formula_14 vanish for formula_5, the induced map formula_86 is an isomorphism for formula_7, and each map formula_26 is a fibration with fiber formula_87. Implications. Notice formula_88 is the universal cover of formula_1 since it is a covering space with simply connected total space. Furthermore, each formula_89 is the universal formula_56-connected cover of formula_1. Construction. The spaces formula_14 in the Whitehead tower are constructed inductively. If we construct a formula_90 by killing off the higher homotopy groups in formula_14, we get an embedding formula_91.
If we let formula_92 for some fixed basepoint formula_93, then the induced map formula_94 is a fiber bundle with fiber homeomorphic to formula_95, and so we have a Serre fibration formula_96. Using the long exact sequence in homotopy theory, we have that formula_97 for formula_98, formula_99 for formula_100, and finally, there is an exact sequence formula_101, where if the middle morphism is an isomorphism, the other two groups are zero. This can be checked by looking at the inclusion formula_91 and noting that the Eilenberg–Maclane space has a cellular decomposition formula_102; thus, formula_103, giving the desired result. As a homotopy fiber. Another way to view the components in the Whitehead tower is as a homotopy fiber. If we take formula_104 from the Postnikov tower, we get a space formula_105 which has formula_106 Whitehead tower of spectra. The dual notion of the Whitehead tower can be defined in a similar manner using homotopy fibers in the category of spectra. If we let formula_107 then this can be organized in a tower giving connected covers of a spectrum. This is a widely used construction in bordism theory because the coverings of the unoriented cobordism spectrum formula_108 give other bordism theories formula_109, such as string bordism. Whitehead tower and string theory. In Spin geometry the formula_110 group is constructed as the universal cover of the special orthogonal group formula_111, so formula_112 is a fibration, giving the first term in the Whitehead tower. There are physically relevant interpretations for the higher parts in this tower, which can be read as formula_113, where formula_114 is the formula_115-connected cover of formula_111 called the string group, and formula_116 is the formula_117-connected cover called the fivebrane group. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "\\cdots \\to X_n \\xrightarrow{p_n} X_{n-1}\\xrightarrow{p_{n-1}} \\cdots \\xrightarrow{p_3} X_2 \\xrightarrow{p_2} X_1 \\xrightarrow{p_1} *" }, { "math_id": 3, "text": "\\phi_n : X \\to X_n" }, { "math_id": 4, "text": "\\pi_i(X) \\to \\pi_i(X_n)" }, { "math_id": 5, "text": "i\\leq n" }, { "math_id": 6, "text": "\\pi_i(X_n) = 0" }, { "math_id": 7, "text": "i > n" }, { "math_id": 8, "text": "p_n : X_n \\to X_{n-1}" }, { "math_id": 9, "text": "F_n" }, { "math_id": 10, "text": "K(\\pi_n(X),n)" }, { "math_id": 11, "text": "X_1" }, { "math_id": 12, "text": "K(\\pi_1(X),1)" }, { "math_id": 13, "text": "(n-1)" }, { "math_id": 14, "text": "X_n" }, { "math_id": 15, "text": "X_{i}" }, { "math_id": 16, "text": "i < n" }, { "math_id": 17, "text": "X\\simeq\\varprojlim{}X_n" }, { "math_id": 18, "text": "f : S^{n} \\to X" }, { "math_id": 19, "text": "[f]\\in\\pi_n(X)" }, { "math_id": 20, "text": "S^{n} \\to e_{n+1}" }, { "math_id": 21, "text": "X_{m}" }, { "math_id": 22, "text": "n > m " }, { "math_id": 23, "text": "\\pi_n(X_m) " }, { "math_id": 24, "text": "X_{n-1} " }, { "math_id": 25, "text": "S^n \\to X_{n}" }, { "math_id": 26, "text": "X_n \\to X_{n-1}" }, { "math_id": 27, "text": "\\mathfrak{X}_n" }, { "math_id": 28, "text": "\\geq n+2" }, { "math_id": 29, "text": "p_n:X_n \\to X_{n-1}" }, { "math_id": 30, "text": "p_n" }, { "math_id": 31, "text": "[X] \\in \\operatorname{Ob}(hTop)" }, { "math_id": 32, "text": "K(\\pi_n(X), n)" }, { "math_id": 33, "text": "X_{n-1} \\to B(K(\\pi_n(X),n)) \\simeq K(\\pi_n(X),n+1)" }, { "math_id": 34, "text": "[p_n]" }, { "math_id": 35, "text": "[p_n] \\in [X_{n-1},K(\\pi_n(X), n+1)] \\cong H^{n+1}(X_{n-1}, \\pi_n(X))" }, { "math_id": 36, "text": "K(A,n) \\to X \\to \\pi_1(X)" }, { "math_id": 37, "text": "\\pi_1(X) = G" }, { "math_id": 38, "text": "\\pi_n(X) = A" }, { "math_id": 39, "text": "BG \\to K(A,n+1)" }, { "math_id": 40, "text": "H^{n+1}(BG, A)" }, { "math_id": 41, "text": "K(G,n)" }, { "math_id": 42, "text": "\\begin{matrix}\nX_i \\simeq * &\\text{for } i < n \\\\\nX_i \\simeq K(G,n) & \\text{for } i \\geq n\n\\end{matrix}" }, { "math_id": 43, "text": "S^2" }, { "math_id": 44, "text": "\\pi_k(S^2) \\simeq \\pi_k(S^3)" }, { "math_id": 45, "text": "k \\geq 3" }, { "math_id": 46, "text": "\\begin{matrix}\n\\pi_1(S^2) =& 0 \\\\\n\\pi_2(S^2) =& \\Z \\\\\n\\pi_3(S^2) =& \\Z \\\\\n\\pi_4(S^2) =& \\Z/2.\n\\end{matrix}" }, { "math_id": 47, "text": "X_2 = S^2_2 = K(\\Z,2)" }, { "math_id": 48, "text": "X_3" }, { "math_id": 49, "text": "\\begin{matrix}\nX_3 & \\to & * \\\\\n\\downarrow & & \\downarrow \\\\\nX_2 & \\to & K(\\Z,4) , \n\\end{matrix}" }, { "math_id": 50, "text": "[p_3] \\in [K(\\Z,2), K(\\Z,4)] \\cong H^4(\\mathbb{CP}^\\infty) = \\Z" }, { "math_id": 51, "text": "X_3 \\simeq K(\\Z,2)\\times K(\\Z,3)" }, { "math_id": 52, "text": "x \\mapsto x^2" }, { "math_id": 53, "text": "\\Z \\to \\Z" }, { "math_id": 54, "text": "S^3 \\to S^2" }, { "math_id": 55, "text": "H^4(\\mathbb{CP}^\\infty)" }, { "math_id": 56, "text": "n" }, { "math_id": 57, "text": "S^n" }, { "math_id": 58, "text": "S^n_i" }, { "math_id": 59, "text": "K(\\pi_{n+1}(X), n + 1) \\simeq F_{n+1} \\to S^n_{n+1} \\to S^n_n \\simeq K(\\Z, n)" }, { "math_id": 60, "text": "E^2" }, { "math_id": 61, "text": "E^2_{p,q} = H_p\\left(K(\\Z, n), H_q\\left(K\\left(\\pi_{n+1}\\left(S^n\\right), n + 1\\right)\\right)\\right)" }, { "math_id": 62, "text": "\\pi_{n+1}\\left(S^n\\right)" }, { "math_id": 63, "text": "d^{n+1}_{0,n+1} : 
H_{n+2}(K(\\Z, n)) \\to H_0\\left(K(\\Z, n), H_{n+1}\\left(K\\left(\\pi_{n+1}\\left(S^n\\right), n + 1\\right)\\right)\\right)" }, { "math_id": 64, "text": "d^{n+1}_{0,n+1} : H_{n+2}(K(\\Z, n)) \\to \\pi_{n+1}\\left(S^n\\right)" }, { "math_id": 65, "text": "H_{n+1}\\left(S^n_{n+1}\\right)" }, { "math_id": 66, "text": "H_{n+2}\\left(S^n_{n+2}\\right)" }, { "math_id": 67, "text": "n = 3" }, { "math_id": 68, "text": "K(\\Z, 3)" }, { "math_id": 69, "text": "\\mathfrak{X}_4 \\simeq S^3 \\cup \\{\\text{cells of dimension} \\geq 6\\} " }, { "math_id": 70, "text": "H_4(X_4) = H_5(X_4) = 0" }, { "math_id": 71, "text": "\\pi_4\\left(S^3\\right) = \\Z/2" }, { "math_id": 72, "text": "\\pi_1^\\mathbb{S}" }, { "math_id": 73, "text": "\\pi_{n+k}\\left(S^n\\right)" }, { "math_id": 74, "text": "n \\geq k + 2" }, { "math_id": 75, "text": "\\pi_4\\left(S^3\\right)" }, { "math_id": 76, "text": "\\pi_5\\left(S^3\\right)" }, { "math_id": 77, "text": "E" }, { "math_id": 78, "text": "\\text{Ho}(\\textbf{Spectra})" }, { "math_id": 79, "text": "\\cdots \\to E_{(2)} \\xrightarrow{p_2} E_{(1)} \\xrightarrow{p_1} E_{(0)} " }, { "math_id": 80, "text": "\\tau_n : E \\to E_{(n)}" }, { "math_id": 81, "text": "\\pi_i^{\\mathbb{S}}\\left(E_{(n)}\\right) = 0 " }, { "math_id": 82, "text": "\\left(\\tau_n\\right)_* : \\pi_i^{\\mathbb{S}}(E) \\to \\pi_i^{\\mathbb{S}}\\left(E_{(n)}\\right)" }, { "math_id": 83, "text": "i \\leq n" }, { "math_id": 84, "text": "\\pi_i^{\\mathbb{S}}" }, { "math_id": 85, "text": "\\cdots \\to X_3 \\to X_2 \\to X_1 \\to X" }, { "math_id": 86, "text": "\\pi_i : \\pi_i(X_n) \\to \\pi_i(X)" }, { "math_id": 87, "text": "K(\\pi_n(X), n-1)" }, { "math_id": 88, "text": "X_1 \\to X" }, { "math_id": 89, "text": "X_n \\to X" }, { "math_id": 90, "text": "K\\left(\\pi_{n+1}(X), n + 1\\right)" }, { "math_id": 91, "text": "X_n \\to K(\\pi_{n+1}(X), n + 1)" }, { "math_id": 92, "text": "X_{n+1} = \\left\\{f\\colon I \\to K\\left(\\pi_{n+1}(X), n + 1\\right) : f(0) = p \\text{ and } f(1) \\in X_{n} \\right\\}" }, { "math_id": 93, "text": "p" }, { "math_id": 94, "text": "X_{n+1} \\to X_n" }, { "math_id": 95, "text": "\\Omega K\\left(\\pi_{n+1}(X), n + 1\\right) \\simeq K\\left(\\pi_{n+1}(X), n\\right)" }, { "math_id": 96, "text": "K\\left(\\pi_{n+1}(X), n\\right) \\to X_n \\to X_{n-1}" }, { "math_id": 97, "text": "\\pi_i(X_n) = \\pi_i\\left(X_{n-1}\\right)" }, { "math_id": 98, "text": "i \\geq n + 1" }, { "math_id": 99, "text": "\\pi_i(X_n) = \\pi_i(X_{n-1}) = 0" }, { "math_id": 100, "text": "i < n-1" }, { "math_id": 101, "text": "0 \\to \\pi_{n+1}\\left(X_{n+1}) \\to \\pi_{n+1}(X_{n}\\right) \\mathrel{\\overset{\\partial}{\\rightarrow}} \\pi_{n}K\\left(\\pi_{n+1}(X), n\\right) \\to \\pi_{n}\\left(X_{n+1}\\right) \\to 0" }, { "math_id": 102, "text": "X_{n-1} \\cup \\{\\text{cells of dimension} \\geq n + 2\\}" }, { "math_id": 103, "text": "\\pi_{n+1}\\left(X_n\\right) \\cong \\pi_{n+1}\\left(K\\left(\\pi_{n+1}(X), n + 1\\right)\\right) \\cong \\pi_n\\left(K\\left(\\pi_{n+1}(X), n\\right)\\right)" }, { "math_id": 104, "text": "\\text{Hofiber}(\\phi_n: X \\to X_n)" }, { "math_id": 105, "text": "X^n" }, { "math_id": 106, "text": "\\pi_k(X^n) = \\begin{cases}\n \\pi_k(X) & k > n \\\\\n 0 & k \\leq n\n\\end{cases}" }, { "math_id": 107, "text": "E\\langle n \\rangle = \\operatorname{Hofiber}\\left(\\tau_n: E \\to E_{(n)}\\right)" }, { "math_id": 108, "text": "M\\text{O} " }, { "math_id": 109, "text": "\\begin{align}\n M\\text{String} &= M\\text{O}\\langle 8 \\rangle \\\\\n M\\text{Spin} &= M\\text{O}\\langle 4 \\rangle \\\\\n 
M\\text{SO} &= M\\text{O}\\langle 2 \\rangle\n\\end{align}" }, { "math_id": 110, "text": "\\operatorname{Spin}(n)" }, { "math_id": 111, "text": "\\operatorname{SO}(n)" }, { "math_id": 112, "text": "\\Z/2 \\to \\operatorname{Spin}(n) \\to SO(n)" }, { "math_id": 113, "text": "\\cdots \\to \\operatorname{Fivebrane}(n) \\to \\operatorname{String}(n) \\to \\operatorname{Spin}(n) \\to \\operatorname{SO}(n)" }, { "math_id": 114, "text": "\\operatorname{String}(n)" }, { "math_id": 115, "text": "3" }, { "math_id": 116, "text": "\\operatorname{Fivebrane}(n)" }, { "math_id": 117, "text": "7" } ]
https://en.wikipedia.org/wiki?curid=15708202
157092
Cross product
Mathematical operation on vectors in 3D space In mathematics, the cross product or vector product (occasionally directed area product, to emphasize its geometric significance) is a binary operation on two vectors in a three-dimensional oriented Euclidean vector space (named here formula_0), and is denoted by the symbol formula_1. Given two linearly independent vectors a and b, the cross product, a × b (read "a cross b"), is a vector that is perpendicular to both a and b, and thus normal to the plane containing them. It has many applications in mathematics, physics, engineering, and computer programming. It should not be confused with the dot product (projection product). The magnitude of the cross product equals the area of a parallelogram with the vectors for sides; in particular, the magnitude of the product of two perpendicular vectors is the product of their lengths. The units of the cross-product are the product of the units of each vector. If two vectors are parallel or are anti-parallel (that is, they are linearly dependent), or if either one has zero length, then their cross product is zero. The cross product is anticommutative (that is, a × b = −b × a) and is distributive over addition, that is, a × (b + c) = a × b + a × c. The space formula_0 together with the cross product is an algebra over the real numbers, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket. Like the dot product, it depends on the metric of Euclidean space, but unlike the dot product, it also depends on a choice of orientation (or "handedness") of the space (which is why an oriented space is needed). The resultant vector is invariant under rotation of the basis. Due to the dependence on handedness, the cross product is said to be a "pseudovector". In connection with the cross product, the exterior product of vectors can be used in arbitrary dimensions (with a bivector or 2-form result) and is independent of the orientation of the space. The product can be generalized in various ways, using the orientation and metric structure just as for the traditional 3-dimensional cross product; one can, in n dimensions, take the product of "n" − 1 vectors to produce a vector perpendicular to all of them. But if the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions. The cross-product in seven dimensions has undesirable properties, however (e.g. it fails to satisfy the Jacobi identity), so it is not used in mathematical physics to represent quantities such as multi-dimensional space-time. (See below for other dimensions.) Definition. The cross product of two vectors a and b is defined only in three-dimensional space and is denoted by a × b. In physics and applied mathematics, the wedge notation a ∧ b is often used (in conjunction with the name "vector product"), although in pure mathematics such notation is usually reserved for just the exterior product, an abstraction of the vector product to n dimensions. The cross product a × b is defined as a vector c that is perpendicular (orthogonal) to both a and b, with a direction given by the right-hand rule and a magnitude equal to the area of the parallelogram that the vectors span.
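As a numerical illustration of this definition, here is a minimal Python sketch using NumPy (numpy.cross is NumPy's built-in three-dimensional cross product; the sample vectors are arbitrary). It checks that the product is perpendicular to both factors and that its squared length equals the squared parallelogram area computed independently from the Gram determinant, a relation restated later under Lagrange's identity.
import numpy as np
a = np.array([2.0, 1.0, 0.0])
b = np.array([0.5, 3.0, 1.0])
c = np.cross(a, b)
# c is perpendicular to both a and b
assert abs(np.dot(c, a)) < 1e-12 and abs(np.dot(c, b)) < 1e-12
# its squared length equals the squared parallelogram area, computed here
# independently via the Gram determinant |a|^2 |b|^2 - (a.b)^2
area_squared = np.dot(a, a) * np.dot(b, b) - np.dot(a, b) ** 2
assert np.isclose(np.linalg.norm(c) ** 2, area_squared)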
The cross product is defined by the formula formula_2 where "θ" is the angle between a and b in the plane containing them (hence, it is between 0° and 180°), ‖a‖ and ‖b‖ are the magnitudes of vectors a and b, n is a unit vector perpendicular to the plane containing a and b, with direction such that the ordered set (a, b, n) is positively oriented. If the vectors a and b are parallel (that is, the angle "θ" between them is either 0° or 180°), by the above formula, the cross product of a and b is the zero vector 0. Direction. The direction of the vector n depends on the chosen orientation of the space. Conventionally, it is given by the right-hand rule, where one simply points the forefinger of the right hand in the direction of a and the middle finger in the direction of b. Then, the vector n is coming out of the thumb (see the adjacent picture). Using this rule implies that the cross product is anti-commutative; that is, b × a = −(a × b). By pointing the forefinger toward b first, and then pointing the middle finger toward a, the thumb will be forced in the opposite direction, reversing the sign of the product vector. As the cross product operator depends on the orientation of the space, in general the cross product of two vectors is not a "true" vector, but a "pseudovector". See for more detail. Names and origin. In 1842, William Rowan Hamilton first described the algebra of quaternions and the non-commutative Hamilton product. In particular, when the Hamilton product of two vectors (that is, pure quaternions with zero scalar part) is performed, it results in a quaternion with a scalar and vector part. The scalar and vector part of this Hamilton product corresponds to the negative of dot product and cross product of the two vectors. In 1881, Josiah Willard Gibbs, and independently Oliver Heaviside, introduced the notation for both the dot product and the cross product using a period (a ⋅ b) and an "×" (a × b), respectively, to denote them. In 1877, to emphasize the fact that the result of a dot product is a scalar while the result of a cross product is a vector, William Kingdon Clifford coined the alternative names scalar product and vector product for the two operations. These alternative names are still widely used in the literature. Both the cross notation (a × b) and the name cross product were possibly inspired by the fact that each scalar component of a × b is computed by multiplying non-corresponding components of a and b. Conversely, a dot product a ⋅ b involves multiplications between corresponding components of a and b. As explained below, the cross product can be expressed in the form of a determinant of a special 3 × 3 matrix. According to Sarrus's rule, this involves multiplications between matrix elements identified by crossed diagonals. Computing. Coordinate notation. If (i, j, k) is a positively oriented orthonormal basis, the basis vectors satisfy the following equalities formula_3 which imply, by the anticommutativity of the cross product, that formula_4 The anticommutativity of the cross product (and the obvious lack of linear independence) also implies that formula_5 (the zero vector). These equalities, together with the distributivity and linearity of the cross product (though neither follows easily from the definition given above), are sufficient to determine the cross product of any two vectors a and b. 
Each vector can be defined as the sum of three orthogonal components parallel to the standard basis vectors: formula_6 Their cross product a × b can be expanded using distributivity: formula_7 This can be interpreted as the decomposition of a × b into the sum of nine simpler cross products involving vectors aligned with i, j, or k. Each one of these nine cross products operates on two vectors that are easy to handle as they are either parallel or orthogonal to each other. From this decomposition, by using the above-mentioned equalities and collecting similar terms, we obtain: formula_8 meaning that the three scalar components of the resulting vector s = "s"1i + "s"2j + "s"3k = a × b are formula_9 Using column vectors, we can represent the same result as follows: formula_10 Matrix notation. The cross product can also be expressed as the formal determinant: formula_11 This determinant can be computed using Sarrus's rule or cofactor expansion. Using Sarrus's rule, it expands to formula_12 which gives the components of the resulting vector directly. Using Levi-Civita tensors. In any basis, the cross product formula_13 is given by the tensorial formula formula_14, where formula_15 is the covariant Levi-Civita tensor (note the position of the indices). In an orthonormal basis having the same orientation as the space, it is given by the pseudo-tensorial formula formula_16, where formula_17 is the Levi-Civita symbol (which is a pseudo-tensor); this is the formula used in everyday physics, but it works only for this special choice of basis. In any orthonormal basis, it is given by the pseudo-tensorial formula formula_18, where formula_19 indicates whether the basis has the same orientation as the space or not. The latter formula avoids having to change the orientation of the space when we inverse an orthonormal basis. Properties. Geometric meaning. The magnitude of the cross product can be interpreted as the positive area of the parallelogram having a and b as sides (see Figure 1): formula_20 Indeed, one can also compute the volume "V" of a parallelepiped having a, b and c as edges by using a combination of a cross product and a dot product, called scalar triple product (see Figure 2): formula_21 Since the result of the scalar triple product may be negative, the volume of the parallelepiped is given by its absolute value: formula_22 Because the magnitude of the cross product goes by the sine of the angle between its arguments, the cross product can be thought of as a measure of "perpendicularity" in the same way that the dot product is a measure of "parallelism". Given two unit vectors, their cross product has a magnitude of 1 if the two are perpendicular and a magnitude of zero if the two are parallel. The dot product of two unit vectors behaves just oppositely: it is zero when the unit vectors are perpendicular and 1 if the unit vectors are parallel. Unit vectors enable two convenient identities: the dot product of two unit vectors yields the cosine (which may be positive or negative) of the angle between the two unit vectors. The magnitude of the cross product of the two unit vectors yields the sine (which will always be positive). Algebraic properties. If the cross product of two vectors is the zero vector (that is, a × b = 0), then either one or both of the inputs is the zero vector (a = 0 or b = 0), or else they are parallel or antiparallel (a ∥ b) so that the sine of the angle between them is zero ("θ" = 0° or "θ" = 180° and sin "θ" = 0). The self cross product of a vector is the zero vector: formula_23 The cross product is anticommutative, formula_24 distributive over addition, formula_25 and compatible with scalar multiplication so that formula_26 It is not associative, but satisfies the Jacobi identity: formula_27 Distributivity, linearity and Jacobi identity show that the R3 vector space together with vector addition and the cross product forms a Lie algebra, the Lie algebra of the real orthogonal group in 3 dimensions, SO(3).
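The component formula and the algebraic identities above can be verified numerically; the following minimal Python sketch (the helper names cross and add are ours, not from any library) implements the three scalar components and checks anticommutativity and the Jacobi identity on sample vectors.
def cross(a, b):
    # component formula: s1 = a2*b3 - a3*b2, s2 = a3*b1 - a1*b3, s3 = a1*b2 - a2*b1
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
def add(u, v):
    return tuple(x + y for x, y in zip(u, v))
a, b, c = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (-1.0, 0.5, 2.0)
# anticommutativity: a x b = -(b x a)
assert all(abs(x + y) < 1e-12 for x, y in zip(cross(a, b), cross(b, a)))
# Jacobi identity: a x (b x c) + b x (c x a) + c x (a x b) = 0
jacobi = add(add(cross(a, cross(b, c)), cross(b, cross(c, a))), cross(c, cross(a, b)))
assert all(abs(x) < 1e-9 for x in jacobi)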
The cross product does not obey the cancellation law; that is, a × b = a × c with a ≠ 0 does not imply b = c, but only that: formula_28 This can be the case where b and c cancel, but additionally where a and b − c are parallel; that is, they are related by a scale factor "t", leading to: formula_29 for some scalar "t". If, in addition to a × b = a × c and a ≠ 0 as above, it is the case that a ⋅ b = a ⋅ c then formula_30 As b − c cannot be simultaneously parallel (for the cross product to be 0) and perpendicular (for the dot product to be 0) to a, it must be the case that b and c cancel: b = c. From the geometrical definition, the cross product is invariant under proper rotations about the axis defined by a × b. In formulae: formula_31, where formula_32 is a rotation matrix with formula_33. More generally, the cross product obeys the following identity under matrix transformations: formula_34 where formula_35 is a 3-by-3 matrix and formula_36 is the transpose of the inverse and formula_37 is the cofactor matrix. It can be readily seen how this formula reduces to the former one if formula_35 is a rotation matrix. If formula_35 is a 3-by-3 symmetric matrix applied to a generic cross product formula_38, the following relation holds true: formula_39 The cross product of two vectors lies in the null space of the 2 × 3 matrix with the vectors as rows: formula_40 For the sum of two cross products, the following identity holds: formula_41 Differentiation. The product rule of differential calculus applies to any bilinear operation, and therefore also to the cross product: formula_42 where a and b are vectors that depend on the real variable "t". Triple product expansion. The cross product is used in both forms of the triple product. The scalar triple product of three vectors is defined as formula_43 It is the signed volume of the parallelepiped with edges a, b and c and as such the vectors can be used in any order that's an even permutation of the above ordering. The following therefore are equal: formula_44 The vector triple product is the cross product of a vector with the result of another cross product, and is related to the dot product by the following formula formula_45 The mnemonic "BAC minus CAB" is used to remember the order of the vectors in the right hand member. This formula is used in physics to simplify vector calculations. A special case, regarding gradients and useful in vector calculus, is formula_46 where ∇2 is the vector Laplacian operator. Other identities relate the cross product to the scalar triple product: formula_47 where "I" is the identity matrix. Alternative formulation. The cross product and the dot product are related by: formula_48 The right-hand side is the Gram determinant of a and b, the square of the area of the parallelogram defined by the vectors. This condition determines the magnitude of the cross product. Namely, since the dot product is defined, in terms of the angle "θ" between the two vectors, as: formula_49 the above given relationship can be rewritten as follows: formula_50 Invoking the Pythagorean trigonometric identity one obtains: formula_51 which is the magnitude of the cross product expressed in terms of "θ", equal to the area of the parallelogram defined by a and b (see definition above). The combination of this requirement and the property that the cross product be orthogonal to its constituents a and b provides an alternative definition of the cross product. Cross product inverse. 
For the cross product a × b = c, there are multiple b vectors that give the same value of c. As a result, it is not possible to rearrange this equation to yield a unique solution for b in terms of a and c. Nevertheless, it is possible to find a family of solutions for b, which are formula_52 where "t" is an arbitrary constant. This can be derived using the triple product expansion: formula_53 Rearrange to solve for b to give formula_54 The coefficient of the last term can be simplified to just the arbitrary constant "t" to yield the result shown above. Lagrange's identity. The relation formula_55 can be compared with another relation involving the right-hand side, namely Lagrange's identity expressed as formula_56 where a and b may be "n"-dimensional vectors. This also shows that the Riemannian volume form for surfaces is exactly the surface element from vector calculus. In the case where "n" = 3, combining these two equations results in the expression for the magnitude of the cross product in terms of its components: formula_57 The same result is found directly using the components of the cross product found from formula_58 In R3, Lagrange's equation is a special case of the multiplicativity |vw| = |v||w| of the norm in the quaternion algebra. It is a special case of another formula, also sometimes called Lagrange's identity, which is the three dimensional case of the Binet–Cauchy identity: formula_59 If a = c and b = d, this simplifies to the formula above. Infinitesimal generators of rotations. The cross product conveniently describes the infinitesimal generators of rotations in R3. Specifically, if n is a unit vector in R3 and "R"("φ", n) denotes a rotation about the axis through the origin specified by n, with angle φ (measured in radians, counterclockwise when viewed from the tip of n), then formula_60 for every vector x in R3. The cross product with n therefore describes the infinitesimal generator of the rotations about n. These infinitesimal generators form the Lie algebra so(3) of the rotation group SO(3), and we obtain the result that the Lie algebra R3 with cross product is isomorphic to the Lie algebra so(3). Alternative ways to compute. Conversion to matrix multiplication. The vector cross product also can be expressed as the product of a skew-symmetric matrix and a vector: formula_61 where superscript T refers to the transpose operation, and [a]× is defined by: formula_62 The columns [a]×,i of the skew-symmetric matrix for a vector a can be also obtained by calculating the cross product with unit vectors. That is, formula_63 or formula_64 where formula_65 is the outer product operator. Also, if a is itself expressed as a cross product: formula_66 then formula_67 &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof by substitution Evaluation of the cross product gives formula_68 Hence, the left hand side equals formula_69 Now, for the right hand side, formula_70 And its transpose is formula_71 Evaluation of the right hand side gives formula_72 Comparison shows that the left hand side equals the right hand side. This result can be generalized to higher dimensions using geometric algebra. In particular in any dimension bivectors can be identified with skew-symmetric matrices, so the product between a skew-symmetric matrix and vector is equivalent to the grade-1 part of the product of a bivector and vector. In three dimensions bivectors are dual to vectors so the product is equivalent to the cross product, with the bivector instead of its vector dual. 
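The conversion to matrix multiplication described above can also be checked numerically; here is a minimal Python sketch using NumPy (the helper name skew is ours; numpy.cross is NumPy's built-in cross product) that builds [a]× and verifies both forms given in formula_61.
import numpy as np
def skew(a):
    # [a]_x, the skew-symmetric matrix defined above, so that skew(a) @ b equals a x b
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
assert np.allclose(skew(a) @ b, np.cross(a, b))
# the transposed form: a x b = [b]_x^T a
assert np.allclose(skew(b).T @ a, np.cross(a, b))
# skew(a) is indeed skew-symmetric: its transpose is its negative
assert np.allclose(skew(a).T, -skew(a))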
In higher dimensions the product can still be calculated but bivectors have more degrees of freedom and are not equivalent to vectors. This notation is also often much easier to work with, for example, in epipolar geometry. From the general properties of the cross product follows immediately that formula_73   and   formula_74 and from fact that [a]× is skew-symmetric it follows that formula_75 The above-mentioned triple product expansion (bac–cab rule) can be easily proven using this notation. As mentioned above, the Lie algebra R3 with cross product is isomorphic to the Lie algebra so(3), whose elements can be identified with the 3×3 skew-symmetric matrices. The map a → [a]× provides an isomorphism between R3 and so(3). Under this map, the cross product of 3-vectors corresponds to the commutator of 3x3 skew-symmetric matrices. Index notation for tensors. The cross product can alternatively be defined in terms of the Levi-Civita tensor "Eijk" and a dot product "ηmi", which are useful in converting vector notation for tensor applications: formula_76 where the indices formula_77 correspond to vector components. This characterization of the cross product is often expressed more compactly using the Einstein summation convention as formula_78 in which repeated indices are summed over the values 1 to 3. In a positively-oriented orthonormal basis "ηmi" = δ"mi" (the Kronecker delta) and formula_79 (the Levi-Civita symbol). In that case, this representation is another form of the skew-symmetric representation of the cross product: formula_80 In classical mechanics: representing the cross product by using the Levi-Civita symbol can cause mechanical symmetries to be obvious when physical systems are isotropic. (An example: consider a particle in a Hooke's Law potential in three-space, free to oscillate in three dimensions; none of these dimensions are "special" in any sense, so symmetries lie in the cross-product-represented angular momentum, which are made clear by the abovementioned Levi-Civita representation). Mnemonic. The word "xyzzy" can be used to remember the definition of the cross product. If formula_81 where: formula_82 then: formula_83 formula_84 formula_85 The second and third equations can be obtained from the first by simply vertically rotating the subscripts, "x" → "y" → "z" → "x". The problem, of course, is how to remember the first equation, and two options are available for this purpose: either to remember the relevant two diagonals of Sarrus's scheme (those containing i), or to remember the xyzzy sequence. Since the first diagonal in Sarrus's scheme is just the main diagonal of the above-mentioned 3×3 matrix, the first three letters of the word xyzzy can be very easily remembered. Cross visualization. Similarly to the mnemonic device above, a "cross" or X can be visualized between the two vectors in the equation. This may be helpful for remembering the correct cross product formula. If formula_81 then: formula_86 If we want to obtain the formula for formula_87 we simply drop the formula_88 and formula_89 from the formula, and take the next two components down: formula_90 When doing this for formula_91 the next two elements down should "wrap around" the matrix so that after the z component comes the x component. For clarity, when performing this operation for formula_91, the next two components should be z and x (in that order). While for formula_92 the next two components should be taken as x and y. 
formula_93 For formula_87 then, if we visualize the cross operator as pointing from an element on the left to an element on the right, we can take the first element on the left and simply multiply by the element that the cross points to in the right-hand matrix. We then subtract the next element down on the left, multiplied by the element that the cross points to here as well. This results in our formula_87 formula – formula_94 We can do this in the same way for formula_91 and formula_92 to construct their associated formulas. Applications. The cross product has applications in various contexts. For example, it is used in computational geometry, physics and engineering. A non-exhaustive list of examples follows. Computational geometry. The cross product appears in the calculation of the distance of two skew lines (lines not in the same plane) from each other in three-dimensional space. The cross product can be used to calculate the normal for a triangle or polygon, an operation frequently performed in computer graphics. For example, the winding of a polygon (clockwise or anticlockwise) about a point within the polygon can be calculated by triangulating the polygon (like spoking a wheel) and summing the angles (between the spokes) using the cross product to keep track of the sign of each angle. In computational geometry of the plane, the cross product is used to determine the sign of the acute angle defined by three points formula_95 and formula_96. It corresponds to the direction (upward or downward) of the cross product of the two coplanar vectors defined by the two pairs of points formula_97 and formula_98. The sign of the acute angle is the sign of the expression formula_99 which is the signed length of the cross product of the two vectors. In the "right-handed" coordinate system, if the result is 0, the points are collinear; if it is positive, the three points constitute a positive angle of rotation around formula_100 from formula_101 to formula_102, otherwise a negative angle. From another point of view, the sign of formula_103 tells whether formula_102 lies to the left or to the right of line formula_104 The cross product is used in calculating the volume of a polyhedron such as a tetrahedron or parallelepiped. Angular momentum and torque. The angular momentum L of a particle about a given origin is defined as: formula_105 where r is the position vector of the particle relative to the origin, and p is the linear momentum of the particle. In the same way, the moment M of a force FB applied at point B around point A is given as: formula_106 In mechanics the "moment of a force" is also called "torque" and written as formula_107 Since position r, linear momentum p and force F are all "true" vectors, both the angular momentum L and the moment of a force M are "pseudovectors" or "axial vectors". Rigid body. The cross product frequently appears in the description of rigid motions. Two points "P" and "Q" on a rigid body can be related by: formula_108 where formula_109 is the point's position, formula_110 is its velocity and formula_111 is the body's angular velocity. Since position formula_109 and velocity formula_110 are "true" vectors, the angular velocity formula_111 is a "pseudovector" or "axial vector". Lorentz force. The cross product is used to describe the Lorentz force experienced by a moving electric charge formula_112 Since velocity v, force F and electric field E are all "true" vectors, the magnetic field B is a "pseudovector". Other.
In vector calculus, the cross product is used to define the formula for the vector operator curl. The trick of rewriting a cross product in terms of a matrix multiplication appears frequently in epipolar and multi-view geometry, in particular when deriving matching constraints. As an external product. The cross product can be defined in terms of the exterior product. It can be generalized to an external product in other than three dimensions. This generalization allows a natural geometric interpretation of the cross product. In exterior algebra the exterior product of two vectors is a bivector. A bivector is an oriented plane element, in much the same way that a vector is an oriented line element. Given two vectors "a" and "b", one can view the bivector "a" ∧ "b" as the oriented parallelogram spanned by "a" and "b". The cross product is then obtained by taking the Hodge star of the bivector "a" ∧ "b", mapping 2-vectors to vectors: formula_113 This can be thought of as the oriented multi-dimensional element "perpendicular" to the bivector. In a "d-"dimensional space, Hodge star takes a "k"-vector to a ("d–k")-vector; thus only in "d =" 3 dimensions is the result an element of dimension one (3–2 = 1), i.e. a vector. For example, in "d =" 4 dimensions, the cross product of two vectors has dimension 4–2 = 2, giving a bivector. Thus, only in three dimensions does cross product define an algebra structure to multiply vectors. Handedness. Consistency. When physics laws are written as equations, it is possible to make an arbitrary choice of the coordinate system, including handedness. One should be careful to never write down an equation where the two sides do not behave equally under all transformations that need to be considered. For example, if one side of the equation is a cross product of two polar vectors, one must take into account that the result is an axial vector. Therefore, for consistency, the other side must also be an axial vector. More generally, the result of a cross product may be either a polar vector or an axial vector, depending on the type of its operands (polar vectors or axial vectors). Namely, polar vectors and axial vectors are interrelated in the following ways under application of the cross product: or symbolically Because the cross product may also be a polar vector, it may not change direction with a mirror image transformation. This happens, according to the above relationships, if one of the operands is a polar vector and the other one is an axial vector (e.g., the cross product of two polar vectors). For instance, a vector triple product involving three polar vectors is a polar vector. A handedness-free approach is possible using exterior algebra. The paradox of the orthonormal basis. Let (i, j, k) be an orthonormal basis. The vectors i, j and k do not depend on the orientation of the space. They can even be defined in the absence of any orientation. They can not therefore be axial vectors. But if i and j are polar vectors, then k is an axial vector for i × j = k or j × i = k. This is a paradox. "Axial" and "polar" are "physical" qualifiers for "physical" vectors; that is, vectors which represent "physical" quantities such as the velocity or the magnetic field. The vectors i, j and k are mathematical vectors, neither axial nor polar. In mathematics, the cross-product of two vectors is a vector. There is no contradiction. Generalizations. There are several ways to generalize the cross product to higher dimensions. Lie algebra. 
The cross product can be seen as one of the simplest Lie products, and is thus generalized by Lie algebras, which are axiomatized as binary products satisfying the axioms of multilinearity, skew-symmetry, and the Jacobi identity. Many Lie algebras exist, and their study is a major field of mathematics, called Lie theory. For example, the Heisenberg algebra gives another Lie algebra structure on formula_114 In the basis formula_115 the product is formula_116 Quaternions. The cross product can also be described in terms of quaternions. In general, if a vector ["a"1, "a"2, "a"3] is represented as the quaternion "a"1"i" + "a"2"j" + "a"3"k", the cross product of two vectors can be obtained by taking their product as quaternions and deleting the real part of the result. The real part will be the negative of the dot product of the two vectors. Octonions. A cross product for 7-dimensional vectors can be obtained in the same way by using the octonions instead of the quaternions. The nonexistence of nontrivial vector-valued cross products of two vectors in other dimensions is related to the result from Hurwitz's theorem that the only normed division algebras are the ones with dimension 1, 2, 4, and 8. Exterior product. In general dimension, there is no direct analogue of the binary cross product that yields specifically a vector. There is however the exterior product, which has similar properties, except that the exterior product of two vectors is now a 2-vector instead of an ordinary vector. As mentioned above, the cross product can be interpreted as the exterior product in three dimensions by using the Hodge star operator to map 2-vectors to vectors. The Hodge dual of the exterior product yields an ("n" − 2)-vector, which is a natural generalization of the cross product in any number of dimensions. The exterior product and dot product can be combined (through summation) to form the geometric product in geometric algebra. External product. As mentioned above, the cross product can be interpreted in three dimensions as the Hodge dual of the exterior product. In any finite "n" dimensions, the Hodge dual of the exterior product of "n" − 1 vectors is a vector. So, instead of a binary operation, in arbitrary finite dimensions, the cross product is generalized as the Hodge dual of the exterior product of some given "n" − 1 vectors. This generalization is called external product. Commutator product. Interpreting the three-dimensional vector space of the algebra as the 2-vector (not the 1-vector) subalgebra of the three-dimensional geometric algebra, where formula_117, formula_118, and formula_119, the cross product corresponds exactly to the commutator product in geometric algebra and both use the same symbol formula_1. The commutator product is defined for 2-vectors formula_120 and formula_121 in geometric algebra as: formula_122 where formula_123 is the geometric product. The commutator product could be generalised to arbitrary multivectors in three dimensions, which results in a multivector consisting of only elements of grades 1 (1-vectors/true vectors) and 2 (2-vectors/pseudovectors). While the commutator product of two 1-vectors is indeed the same as the exterior product and yields a 2-vector, the commutator of a 1-vector and a 2-vector yields a true vector, corresponding instead to the left and right contractions in geometric algebra. The commutator product of two 2-vectors has no corresponding equivalent product, which is why the commutator product is defined in the first place for 2-vectors. 
Furthermore, the commutator triple product of three 2-vectors is the same as the vector triple product of the same three pseudovectors in vector algebra. However, the commutator triple product of three 1-vectors in geometric algebra is instead the negative of the vector triple product of the same three true vectors in vector algebra. Generalizations to higher dimensions is provided by the same commutator product of 2-vectors in higher-dimensional geometric algebras, but the 2-vectors are no longer pseudovectors. Just as the commutator product/cross product of 2-vectors in three dimensions correspond to the simplest Lie algebra, the 2-vector subalgebras of higher dimensional geometric algebra equipped with the commutator product also correspond to the Lie algebras. Also as in three dimensions, the commutator product could be further generalised to arbitrary multivectors. Multilinear algebra. In the context of multilinear algebra, the cross product can be seen as the (1,2)-tensor (a mixed tensor, specifically a bilinear map) obtained from the 3-dimensional volume form, a (0,3)-tensor, by raising an index. In detail, the 3-dimensional volume form defines a product formula_125 by taking the determinant of the matrix given by these 3 vectors. By duality, this is equivalent to a function formula_126 (fixing any two inputs gives a function formula_127 by evaluating on the third input) and in the presence of an inner product (such as the dot product; more generally, a non-degenerate bilinear form), we have an isomorphism formula_128 and thus this yields a map formula_129 which is the cross product: a (0,3)-tensor (3 vector inputs, scalar output) has been transformed into a (1,2)-tensor (2 vector inputs, 1 vector output) by "raising an index". Translating the above algebra into geometry, the function "volume of the parallelepiped defined by formula_130" (where the first two vectors are fixed and the last is an input), which defines a function formula_127, can be "represented" uniquely as the dot product with a vector: this vector is the cross product formula_131 From this perspective, the cross product is "defined" by the scalar triple product, formula_132 In the same way, in higher dimensions one may define generalized cross products by raising indices of the "n"-dimensional volume form, which is a formula_133-tensor. The most direct generalizations of the cross product are to define either: These products are all multilinear and skew-symmetric, and can be defined in terms of the determinant and parity. The formula_136-ary product can be described as follows: given formula_135 vectors formula_139 in formula_124 define their generalized cross product formula_140 as: This is the unique multilinear, alternating product which evaluates to formula_143, formula_144 and so forth for cyclic permutations of indices. In coordinates, one can give a formula for this formula_136-ary analogue of the cross product in R"n" by: formula_145 This formula is identical in structure to the determinant formula for the normal cross product in R3 except that the row of basis vectors is the last row in the determinant rather than the first. The reason for this is to ensure that the ordered vectors (v1, ..., v"n"−1, Λv"i") have a positive orientation with respect to (e1, ..., e"n"). If "n" is odd, this modification leaves the value unchanged, so this convention agrees with the normal definition of the binary product. In the case that "n" is even, however, the distinction must be kept. 
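As a concrete check of this determinant description, the following minimal Python sketch using NumPy (the function name generalized_cross is ours) expands the determinant along the last row of basis vectors, as specified above; for n = 3 it reduces to the ordinary cross product, and in R4 the product of three vectors is perpendicular to each of them.
import numpy as np
def generalized_cross(*vectors):
    # rows are the n-1 input vectors; expand the determinant along the
    # final row of basis vectors, as in the formula above
    V = np.array(vectors, dtype=float)
    m, n = V.shape
    assert m == n - 1
    return np.array([(-1) ** (n + 1 + i) * np.linalg.det(np.delete(V, i, axis=1))
                     for i in range(n)])
# n = 3: agrees with the ordinary cross product
assert np.allclose(generalized_cross([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]),
                   np.cross([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))
# n = 4: the result is perpendicular to all three arguments
v1, v2, v3 = np.array([1.0, 0.0, 0.0, 2.0]), np.array([0.0, 1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0, 0.0])
w = generalized_cross(v1, v2, v3)
assert all(abs(w @ v) < 1e-9 for v in (v1, v2, v3))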
This formula_136-ary form enjoys many of the same properties as the vector cross product: it is alternating and linear in its arguments, it is perpendicular to each argument, and its magnitude gives the hypervolume of the region bounded by the arguments. And just like the vector cross product, it can be defined in a coordinate independent way as the Hodge dual of the wedge product of the arguments. Moreover, the product formula_146 satisfies the Filippov identity, formula_147 and so it endows Rn+1 with a structure of n-Lie algebra (see Proposition 1 of ). History. In 1773, Joseph-Louis Lagrange used the component form of both the dot and cross products in order to study the tetrahedron in three dimensions. In 1843, William Rowan Hamilton introduced the quaternion product, and with it the terms "vector" and "scalar". Given two quaternions [0, u] and [0, v], where u and v are vectors in R3, their quaternion product can be summarized as [−u ⋅ v, u × v]. James Clerk Maxwell used Hamilton's quaternion tools to develop his famous electromagnetism equations, and for this and other reasons quaternions for a time were an essential part of physics education. In 1844, Hermann Grassmann published a geometric algebra not tied to dimension two or three. Grassmann developed several products, including a cross product represented then by [uv]. ("See also: exterior algebra.") In 1853, Augustin-Louis Cauchy, a contemporary of Grassmann, published a paper on algebraic keys which were used to solve equations and had the same multiplication properties as the cross product. In 1878, William Kingdon Clifford, known for a precursor to the Clifford algebra named in his honor, published "Elements of Dynamic", in which the term "vector product" is attested. In the book, this product of two vectors is defined to have magnitude equal to the area of the parallelogram of which they are two sides, and direction perpendicular to their plane. In lecture notes from 1881, Gibbs represented the cross product by formula_148 and called it the "skew product". In 1901, Gibb's student Edwin Bidwell Wilson edited and extended these lecture notes into the textbook "Vector Analysis". Wilson kept the term "skew product", but observed that the alternative terms "cross product" and "vector product" were more frequent. In 1908, Cesare Burali-Forti and Roberto Marcolongo introduced the vector product notation u ∧ v. This is used in France and other areas until this day, as the symbol formula_1 is already used to denote multiplication and the Cartesian product. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "E" }, { "math_id": 1, "text": "\\times" }, { "math_id": 2, "text": "\\mathbf{a} \\times \\mathbf{b} = \\| \\mathbf{a} \\| \\| \\mathbf{b} \\| \\sin(\\theta) \\, \\mathbf{n}," }, { "math_id": 3, "text": "\\begin{alignat}{2}\n \\mathbf{\\color{blue}{i}}&\\times\\mathbf{\\color{red}{j}} &&= \\mathbf{\\color{green}{k}}\\\\\n \\mathbf{\\color{red}{j}}&\\times\\mathbf{\\color{green}{k}} &&= \\mathbf{\\color{blue}{i}}\\\\\n \\mathbf{\\color{green}{k}}&\\times\\mathbf{\\color{blue}{i}} &&= \\mathbf{\\color{red}{j}}\n\\end{alignat}" }, { "math_id": 4, "text": "\\begin{alignat}{2}\n \\mathbf{\\color{red}{j}}&\\times\\mathbf{\\color{blue}{i}} &&= -\\mathbf{\\color{green}{k}}\\\\\n \\mathbf{\\color{green}{k}}&\\times\\mathbf{\\color{red}{j}} &&= -\\mathbf{\\color{blue}{i}}\\\\\n \\mathbf{\\color{blue}{i}}&\\times\\mathbf{\\color{green}{k}} &&= -\\mathbf{\\color{red}{j}}\n\\end{alignat}" }, { "math_id": 5, "text": "\\mathbf{\\color{blue}{i}}\\times\\mathbf{\\color{blue}{i}} = \\mathbf{\\color{red}{j}}\\times\\mathbf{\\color{red}{j}} = \\mathbf{\\color{green}{k}}\\times\\mathbf{\\color{green}{k}} = \\mathbf{0}" }, { "math_id": 6, "text": "\\begin{alignat}{3}\n \\mathbf{a} &= a_1\\mathbf{\\color{blue}{i}} &&+ a_2\\mathbf{\\color{red}{j}} &&+ a_3\\mathbf{\\color{green}{k}} \\\\\n \\mathbf{b} &= b_1\\mathbf{\\color{blue}{i}} &&+ b_2\\mathbf{\\color{red}{j}} &&+ b_3\\mathbf{\\color{green}{k}}\n\\end{alignat}" }, { "math_id": 7, "text": " \\begin{align}\n \\mathbf{a}\\times\\mathbf{b} = {} &(a_1\\mathbf{\\color{blue}{i}} + a_2\\mathbf{\\color{red}{j}} + a_3\\mathbf{\\color{green}{k}}) \\times (b_1\\mathbf{\\color{blue}{i}} + b_2\\mathbf{\\color{red}{j}} + b_3\\mathbf{\\color{green}{k}})\\\\\n = {} &a_1b_1(\\mathbf{\\color{blue}{i}} \\times \\mathbf{\\color{blue}{i}}) + a_1b_2(\\mathbf{\\color{blue}{i}} \\times \\mathbf{\\color{red}{j}}) + a_1b_3(\\mathbf{\\color{blue}{i}} \\times \\mathbf{\\color{green}{k}}) + {}\\\\\n &a_2b_1(\\mathbf{\\color{red}{j}} \\times \\mathbf{\\color{blue}{i}}) + a_2b_2(\\mathbf{\\color{red}{j}} \\times \\mathbf{\\color{red}{j}}) + a_2b_3(\\mathbf{\\color{red}{j}} \\times \\mathbf{\\color{green}{k}}) + {}\\\\\n &a_3b_1(\\mathbf{\\color{green}{k}} \\times \\mathbf{\\color{blue}{i}}) + a_3b_2(\\mathbf{\\color{green}{k}} \\times \\mathbf{\\color{red}{j}}) + a_3b_3(\\mathbf{\\color{green}{k}} \\times \\mathbf{\\color{green}{k}})\\\\\n\\end{align}" }, { "math_id": 8, "text": "\\begin{align}\n \\mathbf{a}\\times\\mathbf{b} = {} &\\quad\\ a_1b_1\\mathbf{0} + a_1b_2\\mathbf{\\color{green}{k}} - a_1b_3\\mathbf{\\color{red}{j}} \\\\\n &- a_2b_1\\mathbf{\\color{green}{k}} + a_2b_2\\mathbf{0} + a_2b_3\\mathbf{\\color{blue}{i}} \\\\\n &+ a_3b_1\\mathbf{\\color{red}{j}}\\ - a_3b_2\\mathbf{\\color{blue}{i}}\\ + a_3b_3\\mathbf{0} \\\\\n = {} &(a_2b_3 - a_3b_2)\\mathbf{\\color{blue}{i}} + (a_3b_1 - a_1b_3)\\mathbf{\\color{red}{j}} + (a_1b_2 - a_2b_1)\\mathbf{\\color{green}{k}}\\\\\n\\end{align}" }, { "math_id": 9, "text": "\\begin{align}\n s_1 &= a_2b_3-a_3b_2\\\\\n s_2 &= a_3b_1-a_1b_3\\\\\n s_3 &= a_1b_2-a_2b_1\n\\end{align}" }, { "math_id": 10, "text": "\\begin{bmatrix}s_1\\\\s_2\\\\s_3\\end{bmatrix}=\\begin{bmatrix}a_2b_3-a_3b_2\\\\a_3b_1-a_1b_3\\\\a_1b_2-a_2b_1\\end{bmatrix}" }, { "math_id": 11, "text": "\\mathbf{a\\times b} = \\begin{vmatrix}\n \\mathbf{i}&\\mathbf{j}&\\mathbf{k}\\\\\n a_1&a_2&a_3\\\\\n b_1&b_2&b_3\\\\\n\\end{vmatrix}" }, { "math_id": 12, "text": "\\begin{align}\n \\mathbf{a\\times b} &=(a_2b_3\\mathbf{i}+a_3b_1\\mathbf{j}+a_1b_2\\mathbf{k}) - 
(a_3b_2\\mathbf{i}+a_1b_3\\mathbf{j}+a_2b_1\\mathbf{k})\\\\\n &=(a_2b_3 - a_3b_2)\\mathbf{i} +(a_3b_1 - a_1b_3)\\mathbf{j} +(a_1b_2 - a_2b_1)\\mathbf{k}. \n\\end{align}" }, { "math_id": 13, "text": "a \\times b" }, { "math_id": 14, "text": "E_{ijk}a^ib^j" }, { "math_id": 15, "text": "E_{ijk} " }, { "math_id": 16, "text": " \\varepsilon_{ijk}a^ib^j" }, { "math_id": 17, "text": "\\varepsilon_{ijk}" }, { "math_id": 18, "text": "(-1)^B\\varepsilon_{ijk}a^ib^j" }, { "math_id": 19, "text": "(-1)^B = \\pm 1" }, { "math_id": 20, "text": " \\left\\| \\mathbf{a} \\times \\mathbf{b} \\right\\| = \\left\\| \\mathbf{a} \\right\\| \\left\\| \\mathbf{b} \\right\\| \\left| \\sin \\theta \\right| ." }, { "math_id": 21, "text": "\n \\mathbf{a}\\cdot(\\mathbf{b}\\times \\mathbf{c})=\n \\mathbf{b}\\cdot(\\mathbf{c}\\times \\mathbf{a})=\n \\mathbf{c}\\cdot(\\mathbf{a}\\times \\mathbf{b}).\n" }, { "math_id": 22, "text": "V = |\\mathbf{a} \\cdot (\\mathbf{b} \\times \\mathbf{c})|." }, { "math_id": 23, "text": "\\mathbf{a} \\times \\mathbf{a} = \\mathbf{0}." }, { "math_id": 24, "text": "\\mathbf{a} \\times \\mathbf{b} = -(\\mathbf{b} \\times \\mathbf{a})," }, { "math_id": 25, "text": "\\mathbf{a} \\times (\\mathbf{b} + \\mathbf{c}) = (\\mathbf{a} \\times \\mathbf{b}) + (\\mathbf{a} \\times \\mathbf{c})," }, { "math_id": 26, "text": "(r\\,\\mathbf{a}) \\times \\mathbf{b} = \\mathbf{a} \\times (r\\,\\mathbf{b}) = r\\,(\\mathbf{a} \\times \\mathbf{b})." }, { "math_id": 27, "text": "\\mathbf{a} \\times (\\mathbf{b} \\times \\mathbf{c}) + \\mathbf{b} \\times (\\mathbf{c} \\times \\mathbf{a}) + \\mathbf{c} \\times (\\mathbf{a} \\times \\mathbf{b}) = \\mathbf{0}." }, { "math_id": 28, "text": " \\begin{align}\n \\mathbf{0} &= (\\mathbf{a} \\times \\mathbf{b}) - (\\mathbf{a} \\times \\mathbf{c})\\\\\n &= \\mathbf{a} \\times (\\mathbf{b} - \\mathbf{c}).\\\\\n\\end{align}" }, { "math_id": 29, "text": "\\mathbf{c} = \\mathbf{b} + t\\,\\mathbf{a}," }, { "math_id": 30, "text": "\\begin{align}\n \\mathbf{a} \\times (\\mathbf{b} - \\mathbf{c}) &= \\mathbf{0} \\\\\n \\mathbf{a} \\cdot (\\mathbf{b} - \\mathbf{c}) &= 0,\n\\end{align}" }, { "math_id": 31, "text": "(R\\mathbf{a}) \\times (R\\mathbf{b}) = R(\\mathbf{a} \\times \\mathbf{b})" }, { "math_id": 32, "text": "R" }, { "math_id": 33, "text": "\\det(R)=1" }, { "math_id": 34, "text": "(M\\mathbf{a}) \\times (M\\mathbf{b}) = (\\det M) \\left(M^{-1}\\right)^\\mathrm{T}(\\mathbf{a} \\times \\mathbf{b}) = \\operatorname{cof} M (\\mathbf{a} \\times \\mathbf{b}) " }, { "math_id": 35, "text": "M" }, { "math_id": 36, "text": "\\left(M^{-1}\\right)^\\mathrm{T}" }, { "math_id": 37, "text": "\\operatorname{cof}" }, { "math_id": 38, "text": "\\mathbf{a} \\times \\mathbf{b}" }, { "math_id": 39, "text": "M(\\mathbf{a} \\times \\mathbf{b}) = \\operatorname{Tr}(M)(\\mathbf{a} \\times \\mathbf{b}) - \\mathbf{a} \\times M\\mathbf{b} + \\mathbf{b} \\times M\\mathbf{a}" }, { "math_id": 40, "text": "\\mathbf{a} \\times \\mathbf{b} \\in NS\\left(\\begin{bmatrix}\\mathbf{a} \\\\ \\mathbf{b}\\end{bmatrix}\\right)." }, { "math_id": 41, "text": "\\mathbf{a} \\times \\mathbf{b} + \\mathbf{c} \\times \\mathbf{d} = (\\mathbf{a} - \\mathbf{c}) \\times (\\mathbf{b} - \\mathbf{d}) + \\mathbf{a} \\times \\mathbf{d} + \\mathbf{c} \\times \\mathbf{b}." 
}, { "math_id": 42, "text": "\\frac{d}{dt}(\\mathbf{a} \\times \\mathbf{b}) = \\frac{d\\mathbf{a}}{dt} \\times \\mathbf{b} + \\mathbf{a} \\times \\frac{d\\mathbf{b}}{dt} ," }, { "math_id": 43, "text": "\\mathbf{a} \\cdot (\\mathbf{b} \\times \\mathbf{c}), " }, { "math_id": 44, "text": "\\mathbf{a} \\cdot (\\mathbf{b} \\times \\mathbf{c}) = \\mathbf{b} \\cdot (\\mathbf{c} \\times \\mathbf{a}) = \\mathbf{c} \\cdot (\\mathbf{a} \\times \\mathbf{b}), " }, { "math_id": 45, "text": "\\begin{align}\n\\mathbf{a} \\times (\\mathbf{b} \\times \\mathbf{c}) = \\mathbf{b}(\\mathbf{a} \\cdot \\mathbf{c}) - \\mathbf{c}(\\mathbf{a} \\cdot \\mathbf{b}) \\\\\n(\\mathbf{a} \\times \\mathbf{b}) \\times \\mathbf{c} = \\mathbf{b}(\\mathbf{c} \\cdot \\mathbf{a}) - \\mathbf{a} (\\mathbf{b} \\cdot \\mathbf{c})\n\\end{align}" }, { "math_id": 46, "text": "\\begin{align}\n \\nabla \\times (\\nabla \\times \\mathbf{f}) &= \\nabla (\\nabla \\cdot \\mathbf{f} ) - (\\nabla \\cdot \\nabla) \\mathbf{f} \\\\\n &= \\nabla (\\nabla \\cdot \\mathbf{f} ) - \\nabla^2 \\mathbf{f},\\\\\n\\end{align}" }, { "math_id": 47, "text": "\\begin{align}\n(\\mathbf{a}\\times \\mathbf{b})\\times (\\mathbf{a}\\times \\mathbf{c}) &= (\\mathbf{a}\\cdot(\\mathbf{b}\\times \\mathbf{c})) \\mathbf{a} \\\\\n(\\mathbf{a}\\times \\mathbf{b})\\cdot(\\mathbf{c}\\times \\mathbf{d}) &= \\mathbf{b}^\\mathrm{T} \\left( \\left( \\mathbf{c}^\\mathrm{T} \\mathbf{a}\\right)I - \\mathbf{c} \\mathbf{a}^\\mathrm{T} \\right) \\mathbf{d}\\\\ &= (\\mathbf{a}\\cdot \\mathbf{c})(\\mathbf{b}\\cdot \\mathbf{d})-(\\mathbf{a}\\cdot \\mathbf{d}) (\\mathbf{b}\\cdot \\mathbf{c})\n\\end{align}" }, { "math_id": 48, "text": " \\left\\| \\mathbf{a} \\times \\mathbf{b} \\right\\| ^2 = \\left\\| \\mathbf{a}\\right\\|^2 \\left\\|\\mathbf{b}\\right\\|^2 - (\\mathbf{a} \\cdot \\mathbf{b})^2 ." }, { "math_id": 49, "text": " \\mathbf{a \\cdot b} = \\left\\| \\mathbf a \\right\\| \\left\\| \\mathbf b \\right\\| \\cos \\theta , " }, { "math_id": 50, "text": " \\left\\| \\mathbf{a \\times b} \\right\\|^2 = \\left\\| \\mathbf{a} \\right\\| ^2 \\left\\| \\mathbf{b}\\right \\| ^2 \\left(1-\\cos^2 \\theta \\right) ." 
}, { "math_id": 51, "text": " \\left\\| \\mathbf{a} \\times \\mathbf{b} \\right\\| = \\left\\| \\mathbf{a} \\right\\| \\left\\| \\mathbf{b} \\right\\| \\left| \\sin \\theta \\right| ," }, { "math_id": 52, "text": " \\mathbf{b} = \\frac{\\mathbf{c} \\times \\mathbf{a}}{\\left\\| \\mathbf{a} \\right\\|^2} + t \\mathbf{a} ," }, { "math_id": 53, "text": "\n\\mathbf{c} \\times \\mathbf{a} = (\\mathbf{a} \\times \\mathbf{b}) \\times \\mathbf{a}\n= \\left\\| \\mathbf{a} \\right\\|^2 \\mathbf{b} - (\\mathbf{a} \\cdot \\mathbf{b})\\mathbf{a} " }, { "math_id": 54, "text": "\n\\mathbf{b} = \\frac{\\mathbf{c} \\times \\mathbf{a}}{\\left\\| \\mathbf{a} \\right\\|^2} + \\frac{\\mathbf{a}\\cdot \\mathbf{b}}{\\left\\| \\mathbf{a} \\right\\|^2}\\mathbf{a}\n" }, { "math_id": 55, "text": "\n \\left\\| \\mathbf{a} \\times \\mathbf{b} \\right\\|^2 \\equiv\n \\det \\begin{bmatrix}\n \\mathbf{a} \\cdot \\mathbf{a} & \\mathbf{a} \\cdot \\mathbf{b} \\\\\n \\mathbf{a} \\cdot \\mathbf{b} & \\mathbf{b} \\cdot \\mathbf{b}\\\\\n \\end{bmatrix} \\equiv\n \\left\\| \\mathbf{a} \\right\\| ^2 \\left\\| \\mathbf{b} \\right\\| ^2 - (\\mathbf{a} \\cdot \\mathbf{b})^2\n" }, { "math_id": 56, "text": "\n \\sum_{1 \\le i < j \\le n} \\left( a_ib_j - a_jb_i \\right)^2 \\equiv\n \\left\\| \\mathbf a \\right\\|^2 \\left\\| \\mathbf b \\right\\|^2 - ( \\mathbf{a \\cdot b } )^2,\n" }, { "math_id": 57, "text": "\\begin{align}\n \\|\\mathbf{a} \\times \\mathbf{b}\\|^2\n &\\equiv \\sum_{1 \\le i < j \\le 3} (a_ib_j - a_jb_i)^2 \\\\\n &\\equiv (a_1 b_2 - b_1 a_2)^2 + (a_2 b_3 - a_3 b_2)^2 + (a_3 b_1 - a_1 b_3)^2.\n\\end{align}" }, { "math_id": 58, "text": "\\mathbf{a} \\times \\mathbf{b} \\equiv \\det \\begin{bmatrix}\n \\hat\\mathbf{i} & \\hat\\mathbf{j} & \\hat\\mathbf{k} \\\\\n a_1 & a_2 & a_3 \\\\\n b_1 & b_2 & b_3 \\\\\n\\end{bmatrix}." }, { "math_id": 59, "text": "\n (\\mathbf{a} \\times \\mathbf{b}) \\cdot (\\mathbf{c} \\times \\mathbf{d}) \\equiv\n (\\mathbf{a} \\cdot \\mathbf{c})(\\mathbf{b} \\cdot \\mathbf{d}) - (\\mathbf{a} \\cdot \\mathbf{d})(\\mathbf{b} \\cdot \\mathbf{c}).\n" }, { "math_id": 60, "text": "\\left.{d\\over d\\phi} \\right|_{\\phi=0} R(\\phi,\\boldsymbol{n}) \\boldsymbol{x} = \\boldsymbol{n} \\times \\boldsymbol{x}" }, { "math_id": 61, "text": "\\begin{align}\n \\mathbf{a} \\times \\mathbf{b} = [\\mathbf{a}]_{\\times} \\mathbf{b}\n &= \\begin{bmatrix}\\,0&\\!-a_3&\\,\\,a_2\\\\ \\,\\,a_3&0&\\!-a_1\\\\-a_2&\\,\\,a_1&\\,0\\end{bmatrix}\\begin{bmatrix}b_1\\\\b_2\\\\b_3\\end{bmatrix} \\\\\n \\mathbf{a} \\times \\mathbf{b} = {[\\mathbf{b}]_\\times}^\\mathrm{\\!\\!T} \\mathbf{a}\n &= \\begin{bmatrix}\\,0&\\,\\,b_3&\\!-b_2\\\\ -b_3&0&\\,\\,b_1\\\\\\,\\,b_2&\\!-b_1&\\,0\\end{bmatrix}\\begin{bmatrix}a_1\\\\a_2\\\\a_3\\end{bmatrix},\n\\end{align}" }, { "math_id": 62, "text": "[\\mathbf{a}]_{\\times} \\stackrel{\\rm def}{=} \\begin{bmatrix}\\,\\,0&\\!-a_3&\\,\\,\\,a_2\\\\\\,\\,\\,a_3&0&\\!-a_1\\\\\\!-a_2&\\,\\,a_1&\\,\\,0\\end{bmatrix}." }, { "math_id": 63, "text": "[\\mathbf{a}]_{\\times, i} = \\mathbf{a} \\times \\mathbf{\\hat{e}_i}, \\; i\\in \\{1,2,3\\} " }, { "math_id": 64, "text": "[\\mathbf{a}]_{\\times} = \\sum_{i=1}^3\\left(\\mathbf{a} \\times \\mathbf{\\hat{e}_i}\\right)\\otimes\\mathbf{\\hat{e}_i}," }, { "math_id": 65, "text": "\\otimes" }, { "math_id": 66, "text": "\\mathbf{a} = \\mathbf{c} \\times \\mathbf{d}" }, { "math_id": 67, "text": "[\\mathbf{a}]_{\\times} = \\mathbf{d}\\mathbf{c}^\\mathrm{T} - \\mathbf{c}\\mathbf{d}^\\mathrm{T} ." 
}, { "math_id": 68, "text": " \\mathbf{a} = \\mathbf{c} \\times \\mathbf{d} = \\begin{pmatrix} \nc_2 d_3 - c_3 d_2 \\\\\nc_3 d_1 - c_1 d_3 \\\\\nc_1 d_2 - c_2 d_1 \\end{pmatrix}\n" }, { "math_id": 69, "text": " [\\mathbf{a}]_{\\times} = \\begin{bmatrix} \n 0 & c_2 d_1 - c_1 d_2 & c_3 d_1 - c_1 d_3 \\\\\nc_1 d_2 - c_2 d_1 & 0 & c_3 d_2 - c_2 d_3 \\\\\nc_1 d_3 - c_3 d_1 & c_2 d_3 - c_3 d_2 & 0 \\end{bmatrix}\n" }, { "math_id": 70, "text": " \\mathbf{c} \\mathbf{d}^{\\mathrm T} = \\begin{bmatrix}\nc_1 d_1 & c_1 d_2 & c_1 d_3 \\\\\nc_2 d_1 & c_2 d_2 & c_2 d_3 \\\\\nc_3 d_1 & c_3 d_2 & c_3 d_3 \\end{bmatrix}\n" }, { "math_id": 71, "text": " \\mathbf{d} \\mathbf{c}^{\\mathrm T} = \\begin{bmatrix}\nc_1 d_1 & c_2 d_1 & c_3 d_1 \\\\\nc_1 d_2 & c_2 d_2 & c_3 d_2 \\\\\nc_1 d_3 & c_2 d_3 & c_3 d_3 \\end{bmatrix}\n" }, { "math_id": 72, "text": " \\mathbf{d} \\mathbf{c}^{\\mathrm T} - \n\\mathbf{c} \\mathbf{d}^{\\mathrm T} = \\begin{bmatrix} \n0 & c_2 d_1 - c_1 d_2 & c_3 d_1 - c_1 d_3 \\\\\n c_1 d_2 - c_2 d_1 & 0 & c_3 d_2 - c_2 d_3 \\\\\nc_1 d_3 - c_3 d_1 & c_2 d_3 - c_3 d_2 & 0 \\end{bmatrix}\n" }, { "math_id": 73, "text": "[\\mathbf{a}]_{\\times} \\, \\mathbf{a} = \\mathbf{0}" }, { "math_id": 74, "text": "\\mathbf{a}^\\mathrm T \\, [\\mathbf{a}]_{\\times} = \\mathbf{0}" }, { "math_id": 75, "text": "\\mathbf{b}^\\mathrm T \\, [\\mathbf{a}]_{\\times} \\, \\mathbf{b} = 0. " }, { "math_id": 76, "text": "\\mathbf{c} = \\mathbf{a \\times b} \\Leftrightarrow\\ c^m = \\sum_{i=1}^3 \\sum_{j=1}^3 \\sum_{k=1}^3 \\eta^{mi} E_{ijk} a^j b^k" }, { "math_id": 77, "text": "i,j,k" }, { "math_id": 78, "text": "\\mathbf{c} = \\mathbf{a \\times b} \\Leftrightarrow\\ c^m = \\eta^{mi} E_{ijk} a^j b^k" }, { "math_id": 79, "text": " E_{ijk} = \\varepsilon_{ijk}" }, { "math_id": 80, "text": "[\\varepsilon_{ijk} a^j] = [\\mathbf{a}]_\\times." }, { "math_id": 81, "text": "\\mathbf{a} = \\mathbf{b} \\times \\mathbf{c}" }, { "math_id": 82, "text": "\n \\mathbf{a} = \\begin{bmatrix}a_x\\\\a_y\\\\a_z\\end{bmatrix},\\ \n \\mathbf{b} = \\begin{bmatrix}b_x\\\\b_y\\\\b_z\\end{bmatrix},\\ \n \\mathbf{c} = \\begin{bmatrix}c_x\\\\c_y\\\\c_z\\end{bmatrix}\n" }, { "math_id": 83, "text": "a_x = b_y c_z - b_z c_y " }, { "math_id": 84, "text": "a_y = b_z c_x - b_x c_z " }, { "math_id": 85, "text": "a_z = b_x c_y - b_y c_x. " }, { "math_id": 86, "text": "\n \\mathbf{a} =\n \\begin{bmatrix}b_x\\\\b_y\\\\b_z\\end{bmatrix} \\times\n \\begin{bmatrix}c_x\\\\c_y\\\\c_z\\end{bmatrix}.\n" }, { "math_id": 87, "text": "a_x" }, { "math_id": 88, "text": "b_x" }, { "math_id": 89, "text": "c_x" }, { "math_id": 90, "text": "\n a_x =\n \\begin{bmatrix}b_y\\\\b_z\\end{bmatrix} \\times\n \\begin{bmatrix}c_y\\\\c_z\\end{bmatrix}.\n" }, { "math_id": 91, "text": "a_y" }, { "math_id": 92, "text": "a_z" }, { "math_id": 93, "text": "\n a_y =\n \\begin{bmatrix}b_z\\\\b_x\\end{bmatrix} \\times\n \\begin{bmatrix}c_z\\\\c_x\\end{bmatrix},\\ \n a_z =\n \\begin{bmatrix}b_x\\\\b_y\\end{bmatrix} \\times\n \\begin{bmatrix}c_x\\\\c_y\\end{bmatrix}\n" }, { "math_id": 94, "text": "a_x = b_y c_z - b_z c_y." }, { "math_id": 95, "text": " p_1=(x_1,y_1), p_2=(x_2,y_2)" }, { "math_id": 96, "text": " p_3=(x_3,y_3)" }, { "math_id": 97, "text": "(p_1, p_2)" }, { "math_id": 98, "text": "(p_1, p_3)" }, { "math_id": 99, "text": " P = (x_2-x_1)(y_3-y_1)-(y_2-y_1)(x_3-x_1)," }, { "math_id": 100, "text": " p_1" }, { "math_id": 101, "text": " p_2" }, { "math_id": 102, "text": " p_3" }, { "math_id": 103, "text": "P" }, { "math_id": 104, "text": " p_1, p_2." 
}, { "math_id": 105, "text": "\\mathbf{L} = \\mathbf{r} \\times \\mathbf{p}," }, { "math_id": 106, "text": " \\mathbf{M}_\\mathrm{A} = \\mathbf{r}_\\mathrm{AB} \\times \\mathbf{F}_\\mathrm{B}\\," }, { "math_id": 107, "text": "\\mathbf{\\tau}" }, { "math_id": 108, "text": "\\mathbf{v}_P - \\mathbf{v}_Q = \\boldsymbol\\omega \\times \\left( \\mathbf{r}_P - \\mathbf{r}_Q \\right)\\," }, { "math_id": 109, "text": "\\mathbf{r}" }, { "math_id": 110, "text": "\\mathbf{v}" }, { "math_id": 111, "text": "\\boldsymbol\\omega" }, { "math_id": 112, "text": "\\mathbf{F} = q_e \\left( \\mathbf{E}+ \\mathbf{v} \\times \\mathbf{B} \\right)" }, { "math_id": 113, "text": "a \\times b = \\star (a \\wedge b)." }, { "math_id": 114, "text": "\\mathbf{R}^3." }, { "math_id": 115, "text": "\\{x,y,z\\}," }, { "math_id": 116, "text": "[x,y]=z, [x,z]=[y,z]=0." }, { "math_id": 117, "text": "\\mathbf{i} = \\mathbf{e_2} \\mathbf{e_3}" }, { "math_id": 118, "text": "\\mathbf{j} = \\mathbf{e_1} \\mathbf{e_3}" }, { "math_id": 119, "text": "\\mathbf{k} = \\mathbf{e_1} \\mathbf{e_2}" }, { "math_id": 120, "text": "A" }, { "math_id": 121, "text": "B" }, { "math_id": 122, "text": "A \\times B = \\tfrac{1}{2}(AB - BA)," }, { "math_id": 123, "text": "AB" }, { "math_id": 124, "text": "\\mathbf{R}^n," }, { "math_id": 125, "text": " V \\times V \\times V \\to \\mathbf{R}," }, { "math_id": 126, "text": " V \\times V \\to V^*," }, { "math_id": 127, "text": " V \\to \\mathbf{R}" }, { "math_id": 128, "text": " V \\to V^*," }, { "math_id": 129, "text": " V \\times V \\to V," }, { "math_id": 130, "text": " (a,b,-)" }, { "math_id": 131, "text": " a \\times b." }, { "math_id": 132, "text": "\\mathrm{Vol}(a,b,c) = (a\\times b)\\cdot c." }, { "math_id": 133, "text": " (0,n)" }, { "math_id": 134, "text": " (1,n-1)" }, { "math_id": 135, "text": " n-1" }, { "math_id": 136, "text": " (n-1)" }, { "math_id": 137, "text": " (n-2,2)" }, { "math_id": 138, "text": "(k,n-k)" }, { "math_id": 139, "text": " v_1,\\dots,v_{n-1}" }, { "math_id": 140, "text": " v_n = v_1 \\times \\cdots \\times v_{n-1}" }, { "math_id": 141, "text": " v_i," }, { "math_id": 142, "text": " v_1,\\dots,v_n" }, { "math_id": 143, "text": " e_1 \\times \\cdots \\times e_{n-1} = e_n" }, { "math_id": 144, "text": " e_2 \\times \\cdots \\times e_n = e_1," }, { "math_id": 145, "text": "\\bigwedge_{i=0}^{n-1}\\mathbf{v}_i =\n \\begin{vmatrix}\n v_1{}^1 &\\cdots &v_1{}^{n}\\\\\n \\vdots &\\ddots &\\vdots\\\\\n v_{n-1}{}^1 & \\cdots &v_{n-1}{}^{n}\\\\\n \\mathbf{e}_1 &\\cdots &\\mathbf{e}_{n}\n \\end{vmatrix}.\n" }, { "math_id": 146, "text": "[v_1,\\ldots,v_n]:=\\bigwedge_{i=0}^n v_i" }, { "math_id": 147, "text": "\n[[x_1,\\ldots,x_n],y_2,\\ldots,y_n]] = \\sum_{i=1}^n [x_1,\\ldots,x_{i-1},[x_i,y_2,\\ldots,y_n],x_{i+1},\\ldots,x_n],\n" }, { "math_id": 148, "text": "u \\times v" } ]
https://en.wikipedia.org/wiki?curid=157092
157093
Dot product
Algebraic operation on coordinate vectors In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors), and returns a single number. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used. It is often called the inner product (or rarely projection product) of Euclidean space, even though it is not the only inner product that can be defined on Euclidean space (see Inner product space for more). Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. These definitions are equivalent when using Cartesian coordinates. In modern geometry, Euclidean spaces are often defined by using vector spaces. In this case, the dot product is used for defining lengths (the length of a vector is the square root of the dot product of the vector by itself) and angles (the cosine of the angle between two vectors is the quotient of their dot product by the product of their lengths). The name "dot product" is derived from the dot operator " · " that is often used to designate this operation; the alternative name "scalar product" emphasizes that the result is a scalar, rather than a vector (as with the vector product in three-dimensional space). Definition. The dot product may be defined algebraically or geometrically. The geometric definition is based on the notions of angle and distance (magnitude) of vectors. The equivalence of these two definitions relies on having a Cartesian coordinate system for Euclidean space. In modern presentations of Euclidean geometry, the points of space are defined in terms of their Cartesian coordinates, and Euclidean space itself is commonly identified with the real coordinate space formula_0. In such a presentation, the notions of length and angle are defined by means of the dot product. The length of a vector is defined as the square root of the dot product of the vector by itself, and the cosine of the (non oriented) angle between two vectors of length one is defined as their dot product. So the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and the modern formulations of Euclidean geometry. Coordinate definition. The dot product of two vectors formula_1 and formula_2, specified with respect to an orthonormal basis, is defined as: formula_3 where formula_4 denotes summation and formula_5 is the dimension of the vector space. For instance, in three-dimensional space, the dot product of vectors formula_6 and formula_7 is: formula_8 Likewise, the dot product of the vector formula_9 with itself is: formula_10 If vectors are identified with column vectors, the dot product can also be written as a matrix product formula_11 where formula_12 denotes the transpose of formula_13. Expressing the above example in this way, a 1 × 3 matrix (row vector) is multiplied by a 3 × 1 matrix (column vector) to get a 1 × 1 matrix that is identified with its unique entry: formula_14 Geometric definition. In Euclidean space, a Euclidean vector is a geometric object that possesses both a magnitude and a direction. A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction to which the arrow points. The magnitude of a vector formula_15 is denoted by formula_16. 
The dot product of two Euclidean vectors formula_15 and formula_17 is defined by formula_18 where formula_19 is the angle between formula_15 and formula_17. In particular, if the vectors formula_15 and formula_17 are orthogonal (i.e., their angle is formula_20 or formula_21), then formula_22, which implies that formula_23 At the other extreme, if they are codirectional, then the angle between them is zero with formula_24 and formula_25 This implies that the dot product of a vector formula_15 with itself is formula_26 which gives formula_27 the formula for the Euclidean length of the vector. Scalar projection and first properties. The scalar projection (or scalar component) of a Euclidean vector formula_15 in the direction of a Euclidean vector formula_17 is given by formula_28 where formula_19 is the angle between formula_15 and formula_17. In terms of the geometric definition of the dot product, this can be rewritten as formula_29 where formula_30 is the unit vector in the direction of formula_17. The dot product is thus characterized geometrically by formula_31 The dot product, defined in this manner, is homogeneous under scaling in each variable, meaning that for any scalar formula_32, formula_33 It also satisfies the distributive law, meaning that formula_34 These properties may be summarized by saying that the dot product is a bilinear form. Moreover, this bilinear form is positive definite, which means that formula_35 is never negative, and is zero if and only if formula_36, the zero vector. Equivalence of the definitions. If formula_37 are the standard basis vectors in formula_0, then we may write formula_38 The vectors formula_39 are an orthonormal basis, which means that they have unit length and are at right angles to each other. Since these vectors have unit length, formula_40 and since they form right angles with each other, if formula_41, formula_42 Thus in general, we can say that: formula_43 where formula_44 is the Kronecker delta. Also, by the geometric definition, for any vector formula_39 and a vector formula_15, we note that formula_45 where formula_46 is the component of vector formula_15 in the direction of formula_39. The last step in the equality can be seen from the figure. Now applying the distributivity of the geometric version of the dot product gives formula_47 which is precisely the algebraic definition of the dot product. So the geometric dot product equals the algebraic dot product. Properties. The dot product fulfills the following properties if formula_15, formula_17, and formula_48 are real vectors and formula_49, formula_50 and formula_51 are scalars. It is commutative: formula_52 which follows from the definition (formula_19 is the angle between formula_15 and formula_17): formula_53 It is distributive over vector addition: formula_54 It is bilinear: formula_55 It is compatible with scalar multiplication: formula_56 It is not associative, because the dot product between a scalar formula_57 and a vector formula_48 is not defined; the expressions involved in the associative property, formula_58 and formula_59, are therefore both ill-defined. The scalar multiplication property is, however, sometimes called the "associative law for scalar and dot product", since formula_60. Two non-zero vectors formula_15 and formula_17 are "orthogonal" if and only if formula_61. Unlike multiplication of ordinary numbers, where if formula_62, then formula_63 always equals formula_64 unless formula_65 is zero, the dot product does not obey the cancellation law: If formula_66 and formula_67, then we can write: formula_68 by the distributive law; the result above says this just means that formula_15 is perpendicular to formula_69, which still allows formula_70, and therefore allows formula_71. If formula_15 and formula_17 are vector-valued differentiable functions, then the derivative (denoted by a prime formula_72) of formula_57 is given by the rule formula_73 Application to the law of cosines. 
Given two vectors formula_74 and formula_75 separated by angle formula_19 (see the upper image), they form a triangle with a third side formula_76. Let formula_65, formula_63 and formula_64 denote the lengths of formula_74, formula_75, and formula_77, respectively. The dot product of this third side with itself is: formula_78 which is the law of cosines. Triple product. There are two ternary operations involving dot product and cross product. The scalar triple product of three vectors is defined as formula_79 Its value is the determinant of the matrix whose columns are the Cartesian coordinates of the three vectors. It is the signed volume of the parallelepiped defined by the three vectors, and is isomorphic to the three-dimensional special case of the exterior product of three vectors. The vector triple product is defined by formula_80 This identity, also known as "Lagrange's formula", may be remembered as "ACB minus ABC", keeping in mind which vectors are dotted together. This formula has applications in simplifying vector calculations in physics. Physics. In physics, the dot product takes two vectors and returns a scalar quantity. It is also known as the "scalar product". The dot product of two vectors can be defined as the product of the magnitudes of the two vectors and the cosine of the angle between the two vectors. Thus, formula_81 Alternatively, it is defined as the product of the projection of the first vector onto the second vector and the magnitude of the second vector. For example, mechanical work is the dot product of the force and displacement vectors, power is the dot product of force and velocity, and magnetic flux is the dot product of the magnetic field and the vector area. Generalizations. Complex vectors. For vectors with complex entries, using the given definition of the dot product would lead to quite different properties. For instance, the dot product of a vector with itself could be zero without the vector being the zero vector (e.g. this would happen with the vector formula_82). This in turn would have consequences for notions like length and angle. Properties such as the positive-definite norm can be salvaged at the cost of giving up the symmetric and bilinear properties of the dot product, through the alternative definition formula_83 where formula_84 is the complex conjugate of formula_85. When vectors are represented by column vectors, the dot product can be expressed as a matrix product involving a conjugate transpose, denoted with the superscript H: formula_86 In the case of vectors with real components, this definition is the same as in the real case. The dot product of any vector with itself is a non-negative real number, and it is nonzero except for the zero vector. However, the complex dot product is sesquilinear rather than bilinear, as it is conjugate linear and not linear in formula_15. The dot product is not symmetric, since formula_87 The angle between two complex vectors is then given by formula_88 The complex dot product leads to the notions of Hermitian forms and general inner product spaces, which are widely used in mathematics and physics. The self dot product of a complex vector formula_89, involving the conjugate transpose of a row vector, is also known as the norm squared, formula_90, after the Euclidean norm; it is a vector generalization of the "absolute square" of a complex scalar (see also: squared Euclidean distance). Inner product. The inner product generalizes the dot product to abstract vector spaces over a field of scalars, being either the field of real numbers formula_91 or the field of complex numbers formula_92. It is usually denoted using angular brackets by formula_93. 
The inner product of two vectors over the field of complex numbers is, in general, a complex number, and is sesquilinear instead of bilinear. An inner product space is a normed vector space, and the inner product of a vector with itself is real and positive-definite. Functions. The dot product is defined for vectors that have a finite number of entries. Thus these vectors can be regarded as discrete functions: a length-formula_5 vector formula_94 is, then, a function with domain formula_95, and formula_96 is a notation for the image of formula_97 by the function/vector formula_94. This notion can be generalized to continuous functions: just as the inner product on vectors uses a sum over corresponding components, the inner product on functions is defined as an integral over some interval ["a", "b"]: formula_98 Generalized further to complex functions formula_99 and formula_100, by analogy with the complex inner product above, gives formula_101 Weight function. Inner products can have a weight function (i.e., a function which weights each term of the inner product with a value). Explicitly, the inner product of functions formula_102 and formula_103 with respect to the weight function formula_104 is formula_105 Dyadics and matrices. A double-dot product for matrices is the Frobenius inner product, which is analogous to the dot product on vectors. It is defined as the sum of the products of the corresponding components of two matrices formula_106 and formula_107 of the same size: formula_108 And for real matrices, formula_109 Writing a matrix as a dyadic, we can define a different double-dot product; however, it is not an inner product. Tensors. The inner product between a tensor of order formula_5 and a tensor of order formula_110 is a tensor of order formula_111, see Tensor contraction for details. Computation. Algorithms. The straightforward algorithm for calculating a floating-point dot product of vectors can suffer from catastrophic cancellation. To avoid this, approaches such as the Kahan summation algorithm are used. Libraries. A dot product function is included in many numerical libraries, for example BLAS (as a level-1 routine), Fortran (as the intrinsic dot_product), NumPy (as numpy.dot) and MATLAB (as dot). See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
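The following is a minimal Python sketch of the point made under "Algorithms" above: a straightforward accumulation loop can lose low-order contributions when the partial sums nearly cancel, while a compensated summation (here the Neumaier variant of the Kahan algorithm) recovers them. The example vectors are chosen purely for illustration and do not come from the article.

```python
def dot_naive(a, b):
    # Straightforward dot product: accumulate the pairwise products directly.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total


def dot_compensated(a, b):
    # Dot product using Neumaier's variant of Kahan (compensated) summation:
    # a separate correction term collects the low-order bits that rounding
    # would otherwise discard.
    total = 0.0
    correction = 0.0
    for x, y in zip(a, b):
        term = x * y
        partial = total + term
        if abs(total) >= abs(term):
            correction += (total - partial) + term
        else:
            correction += (term - partial) + total
        total = partial
    return total + correction


# Pairwise products are 1, 1e16, 1, -1e16; the exact dot product is 2.
a = [1.0, 1e16, 1.0, -1e16]
b = [1.0, 1.0, 1.0, 1.0]
print(dot_naive(a, b))        # 0.0 -- the two unit contributions are lost
print(dot_compensated(a, b))  # 2.0
```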
[ { "math_id": 0, "text": "\\mathbf{R}^n" }, { "math_id": 1, "text": "\\mathbf{a} = [a_1, a_2, \\cdots, a_n]" }, { "math_id": 2, "text": "\\mathbf{b} = [b_1, b_2, \\cdots, b_n]" }, { "math_id": 3, "text": "\\mathbf a \\cdot \\mathbf b = \\sum_{i=1}^n a_i b_i = a_1 b_1 + a_2 b_2 + \\cdots + a_n b_n" }, { "math_id": 4, "text": "\\Sigma" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": " [1,3,-5] " }, { "math_id": 7, "text": " [4,-2,-1] " }, { "math_id": 8, "text": "\n\\begin{align}\n\\ [1, 3, -5] \\cdot [4, -2, -1] &= (1 \\times 4) + (3\\times-2) + (-5\\times-1) \\\\\n&= 4 - 6 + 5 \\\\\n&= 3\n\\end{align}\n" }, { "math_id": 9, "text": "[1,3,-5]" }, { "math_id": 10, "text": "\n\\begin{align}\n\\ [1, 3, -5] \\cdot [1, 3, -5] &= (1 \\times 1) + (3\\times 3) + (-5\\times -5) \\\\\n&= 1 + 9 + 25 \\\\\n&= 35\n\\end{align}\n" }, { "math_id": 11, "text": "\\mathbf a \\cdot \\mathbf b = \\mathbf a^{\\mathsf T} \\mathbf b," }, { "math_id": 12, "text": "a{^\\mathsf T}" }, { "math_id": 13, "text": "\\mathbf a" }, { "math_id": 14, "text": "\n\\begin{bmatrix}\n 1 & 3 & -5\n\\end{bmatrix}\n\\begin{bmatrix}\n 4 \\\\ -2 \\\\ -1\n\\end{bmatrix} = 3 \\, .\n" }, { "math_id": 15, "text": "\\mathbf{a}" }, { "math_id": 16, "text": " \\left\\| \\mathbf{a} \\right\\| " }, { "math_id": 17, "text": "\\mathbf{b}" }, { "math_id": 18, "text": "\\mathbf{a}\\cdot\\mathbf{b}= \\left\\|\\mathbf{a}\\right\\| \\left\\|\\mathbf{b}\\right\\|\\cos\\theta ," }, { "math_id": 19, "text": "\\theta" }, { "math_id": 20, "text": "\\frac{\\pi}{2}" }, { "math_id": 21, "text": "90^\\circ" }, { "math_id": 22, "text": "\\cos \\frac \\pi 2 = 0" }, { "math_id": 23, "text": "\\mathbf a \\cdot \\mathbf b = 0 ." }, { "math_id": 24, "text": "\\cos 0 = 1" }, { "math_id": 25, "text": "\\mathbf a \\cdot \\mathbf b = \\left\\| \\mathbf a \\right\\| \\, \\left\\| \\mathbf b \\right\\| " }, { "math_id": 26, "text": "\\mathbf a \\cdot \\mathbf a = \\left\\| \\mathbf a \\right\\| ^2 ," }, { "math_id": 27, "text": " \\left\\| \\mathbf a \\right\\| = \\sqrt{\\mathbf a \\cdot \\mathbf a} ," }, { "math_id": 28, "text": " a_b = \\left\\| \\mathbf a \\right\\| \\cos \\theta ," }, { "math_id": 29, "text": "a_b = \\mathbf a \\cdot \\widehat{\\mathbf b} ," }, { "math_id": 30, "text": " \\widehat{\\mathbf b} = \\mathbf b / \\left\\| \\mathbf b \\right\\| " }, { "math_id": 31, "text": " \\mathbf a \\cdot \\mathbf b = a_b \\left\\| \\mathbf{b} \\right\\| = b_a \\left\\| \\mathbf{a} \\right\\| ." }, { "math_id": 32, "text": "\\alpha" }, { "math_id": 33, "text": " ( \\alpha \\mathbf{a} ) \\cdot \\mathbf b = \\alpha ( \\mathbf a \\cdot \\mathbf b ) = \\mathbf a \\cdot ( \\alpha \\mathbf b ) ." }, { "math_id": 34, "text": " \\mathbf a \\cdot ( \\mathbf b + \\mathbf c ) = \\mathbf a \\cdot \\mathbf b + \\mathbf a \\cdot \\mathbf c ." }, { "math_id": 35, "text": " \\mathbf a \\cdot \\mathbf a " }, { "math_id": 36, "text": " \\mathbf a = \\mathbf 0 " }, { "math_id": 37, "text": "\\mathbf{e}_1,\\cdots,\\mathbf{e}_n" }, { "math_id": 38, "text": "\\begin{align}\n\\mathbf a &= [a_1 , \\dots , a_n] = \\sum_i a_i \\mathbf e_i \\\\\n\\mathbf b &= [b_1 , \\dots , b_n] = \\sum_i b_i \\mathbf e_i.\n\\end{align}\n" }, { "math_id": 39, "text": "\\mathbf{e}_i" }, { "math_id": 40, "text": " \\mathbf e_i \\cdot \\mathbf e_i = 1 " }, { "math_id": 41, "text": "i\\neq j" }, { "math_id": 42, "text": " \\mathbf e_i \\cdot \\mathbf e_j = 0 ." 
}, { "math_id": 43, "text": " \\mathbf e_i \\cdot \\mathbf e_j = \\delta_ {ij} ," }, { "math_id": 44, "text": "\\delta_{ij}" }, { "math_id": 45, "text": " \\mathbf a \\cdot \\mathbf e_i = \\left\\| \\mathbf a \\right\\| \\, \\left\\| \\mathbf e_i \\right\\| \\cos \\theta_i = \\left\\| \\mathbf a \\right\\| \\cos \\theta_i = a_i ," }, { "math_id": 46, "text": "a_i" }, { "math_id": 47, "text": " \\mathbf a \\cdot \\mathbf b = \\mathbf a \\cdot \\sum_i b_i \\mathbf e_i = \\sum_i b_i ( \\mathbf a \\cdot \\mathbf e_i ) = \\sum_i b_i a_i= \\sum_i a_i b_i ," }, { "math_id": 48, "text": "\\mathbf{c}" }, { "math_id": 49, "text": "r" }, { "math_id": 50, "text": "c_1" }, { "math_id": 51, "text": "c_2" }, { "math_id": 52, "text": " \\mathbf{a} \\cdot \\mathbf{b} = \\mathbf{b} \\cdot \\mathbf{a} ," }, { "math_id": 53, "text": " \\mathbf{a} \\cdot \\mathbf{b} = \\left\\| \\mathbf{a} \\right\\| \\left\\| \\mathbf{b} \\right\\| \\cos \\theta = \\left\\| \\mathbf{b} \\right\\| \\left\\| \\mathbf{a} \\right\\| \\cos \\theta = \\mathbf{b} \\cdot \\mathbf{a} ." }, { "math_id": 54, "text": " \\mathbf{a} \\cdot (\\mathbf{b} + \\mathbf{c}) = \\mathbf{a} \\cdot \\mathbf{b} + \\mathbf{a} \\cdot \\mathbf{c} ." }, { "math_id": 55, "text": " \\mathbf{a} \\cdot ( r \\mathbf{b} + \\mathbf{c} ) = r ( \\mathbf{a} \\cdot \\mathbf{b} ) + ( \\mathbf{a} \\cdot \\mathbf{c} ) ." }, { "math_id": 56, "text": " ( c_1 \\mathbf{a} ) \\cdot ( c_2 \\mathbf{b} ) = c_1 c_2 ( \\mathbf{a} \\cdot \\mathbf{b} ) ." }, { "math_id": 57, "text": "\\mathbf{a}\\cdot\\mathbf{b}" }, { "math_id": 58, "text": "(\\mathbf{a}\\cdot\\mathbf{b})\\cdot\\mathbf{c}" }, { "math_id": 59, "text": "\\mathbf{a}\\cdot(\\mathbf{b}\\cdot\\mathbf{c})" }, { "math_id": 60, "text": "c (\\mathbf{a} \\cdot \\mathbf{b}) = (c\\mathbf{a})\\cdot\\mathbf{b} = \\mathbf{a}\\cdot(c\\mathbf{b})" }, { "math_id": 61, "text": "\\mathbf{a} \\cdot \\mathbf{b} = 0" }, { "math_id": 62, "text": "ab=ac" }, { "math_id": 63, "text": "b" }, { "math_id": 64, "text": "c" }, { "math_id": 65, "text": "a" }, { "math_id": 66, "text": "\\mathbf{a}\\cdot\\mathbf{b}=\\mathbf{a}\\cdot\\mathbf{c}" }, { "math_id": 67, "text": "\\mathbf{a}\\neq\\mathbf{0}" }, { "math_id": 68, "text": "\\mathbf{a}\\cdot(\\mathbf{b}-\\mathbf{c}) = 0" }, { "math_id": 69, "text": "(\\mathbf{b}-\\mathbf{c})" }, { "math_id": 70, "text": "(\\mathbf{b}-\\mathbf{c})\\neq\\mathbf{0}" }, { "math_id": 71, "text": "\\mathbf{b}\\neq\\mathbf{c}" }, { "math_id": 72, "text": "{}'" }, { "math_id": 73, "text": "(\\mathbf{a}\\cdot\\mathbf{b})' = \\mathbf{a}'\\cdot\\mathbf{b} + \\mathbf{a}\\cdot\\mathbf{b}'." 
}, { "math_id": 74, "text": "{\\color{red}\\mathbf{a}}" }, { "math_id": 75, "text": "{\\color{blue}\\mathbf{b}}" }, { "math_id": 76, "text": "{\\color{orange}\\mathbf{c}} = {\\color{red}\\mathbf{a}} - {\\color{blue}\\mathbf{b}}" }, { "math_id": 77, "text": "{\\color{orange}\\mathbf{c}}" }, { "math_id": 78, "text": "\n\\begin{align}\n\\mathbf{\\color{orange}c} \\cdot \\mathbf{\\color{orange}c} & = ( \\mathbf{\\color{red}a} - \\mathbf{\\color{blue}b}) \\cdot ( \\mathbf{\\color{red}a} - \\mathbf{\\color{blue}b} ) \\\\\n & = \\mathbf{\\color{red}a} \\cdot \\mathbf{\\color{red}a} - \\mathbf{\\color{red}a} \\cdot \\mathbf{\\color{blue}b} - \\mathbf{\\color{blue}b} \\cdot \\mathbf{\\color{red}a} + \\mathbf{\\color{blue}b} \\cdot \\mathbf{\\color{blue}b} \\\\\n & = {\\color{red}a}^2 - \\mathbf{\\color{red}a} \\cdot \\mathbf{\\color{blue}b} - \\mathbf{\\color{red}a} \\cdot \\mathbf{\\color{blue}b} + {\\color{blue}b}^2 \\\\\n & = {\\color{red}a}^2 - 2 \\mathbf{\\color{red}a} \\cdot \\mathbf{\\color{blue}b} + {\\color{blue}b}^2 \\\\\n{\\color{orange}c}^2 & = {\\color{red}a}^2 + {\\color{blue}b}^2 - 2 {\\color{red}a} {\\color{blue}b} \\cos \\mathbf{\\color{purple}\\theta} \\\\\n\\end{align}\n" }, { "math_id": 79, "text": " \\mathbf{a} \\cdot ( \\mathbf{b} \\times \\mathbf{c} ) = \\mathbf{b} \\cdot ( \\mathbf{c} \\times \\mathbf{a} )=\\mathbf{c} \\cdot ( \\mathbf{a} \\times \\mathbf{b} )." }, { "math_id": 80, "text": " \\mathbf{a} \\times ( \\mathbf{b} \\times \\mathbf{c} ) = ( \\mathbf{a} \\cdot \\mathbf{c} )\\, \\mathbf{b} - ( \\mathbf{a} \\cdot \\mathbf{b} )\\, \\mathbf{c} ." }, { "math_id": 81, "text": "\\mathbf{a} \\cdot \\mathbf{b} = |\\mathbf{a}| \\, |\\mathbf{b}| \\cos \\theta" }, { "math_id": 82, "text": "\\mathbf{a} = [1\\ i]" }, { "math_id": 83, "text": " \\mathbf{a} \\cdot \\mathbf{b} = \\sum_i {{a_i}\\,\\overline{b_i}} ," }, { "math_id": 84, "text": "\\overline{b_i}" }, { "math_id": 85, "text": "b_i" }, { "math_id": 86, "text": " \\mathbf{a} \\cdot \\mathbf{b} = \\mathbf{b}^\\mathsf{H} \\mathbf{a} ." }, { "math_id": 87, "text": " \\mathbf{a} \\cdot \\mathbf{b} = \\overline{\\mathbf{b} \\cdot \\mathbf{a}} ." }, { "math_id": 88, "text": " \\cos \\theta = \\frac{\\operatorname{Re} ( \\mathbf{a} \\cdot \\mathbf{b} )}{ \\left\\| \\mathbf{a} \\right\\| \\, \\left\\| \\mathbf{b} \\right\\| } ." }, { "math_id": 89, "text": "\\mathbf{a} \\cdot \\mathbf{a} = \\mathbf{a}^\\mathsf{H} \\mathbf{a} " }, { "math_id": 90, "text": "\\mathbf{a} \\cdot \\mathbf{a} = \\|\\mathbf{a}\\|^2" }, { "math_id": 91, "text": " \\R " }, { "math_id": 92, "text": " \\Complex " }, { "math_id": 93, "text": " \\left\\langle \\mathbf{a} \\, , \\mathbf{b} \\right\\rangle " }, { "math_id": 94, "text": "u" }, { "math_id": 95, "text": "\\{k\\in\\mathbb{N}:1\\leq k \\leq n\\}" }, { "math_id": 96, "text": "u_i" }, { "math_id": 97, "text": "i" }, { "math_id": 98, "text": " \\left\\langle u , v \\right\\rangle = \\int_a^b u(x) v(x) \\,dx." }, { "math_id": 99, "text": "\\psi(x)" }, { "math_id": 100, "text": "\\chi(x)" }, { "math_id": 101, "text": " \\left\\langle \\psi , \\chi \\right\\rangle = \\int_a^b \\psi(x) \\overline{\\chi(x)} \\,dx." }, { "math_id": 102, "text": "u(x)" }, { "math_id": 103, "text": "v(x)" }, { "math_id": 104, "text": "r(x)>0" }, { "math_id": 105, "text": " \\left\\langle u , v \\right\\rangle_r = \\int_a^b r(x) u(x) v(x) \\, d x." 
}, { "math_id": 106, "text": "\\mathbf{A}" }, { "math_id": 107, "text": "\\mathbf{B}" }, { "math_id": 108, "text": " \\mathbf{A} : \\mathbf{B} = \\sum_i \\sum_j A_{ij} \\overline{B_{ij}} = \\operatorname{tr} ( \\mathbf{B}^\\mathsf{H} \\mathbf{A} ) = \\operatorname{tr} ( \\mathbf{A} \\mathbf{B}^\\mathsf{H} ) ." }, { "math_id": 109, "text": " \\mathbf{A} : \\mathbf{B} = \\sum_i \\sum_j A_{ij} B_{ij} = \\operatorname{tr} ( \\mathbf{B}^\\mathsf{T} \\mathbf{A} ) = \\operatorname{tr} ( \\mathbf{A} \\mathbf{B}^\\mathsf{T} ) = \\operatorname{tr} ( \\mathbf{A}^\\mathsf{T} \\mathbf{B} ) = \\operatorname{tr} ( \\mathbf{B} \\mathbf{A}^\\mathsf{T} ) ." }, { "math_id": 110, "text": "m" }, { "math_id": 111, "text": "n+m-2" } ]
https://en.wikipedia.org/wiki?curid=157093
15710171
Guided local search
Guided local search is a metaheuristic search method. A metaheuristic method is a method that sits on top of a local search algorithm to change its behavior. Guided local search builds up penalties during a search. It uses penalties to help local search algorithms escape from local minima and plateaus. When the given local search algorithm settles in a local optimum, GLS modifies the objective function using a specific scheme (explained below). Then the local search will operate using an augmented objective function, which is designed to bring the search out of the local optimum. The key is in the way that the objective function is modified. The method in its current form was developed by Dr Christos Voudouris and detailed in his PhD thesis. GLS was inspired by and extended GENET, a neural network architecture for solving Constraint Satisfaction Problems, which was developed by Chang Wang, Edward Tsang and Andrew Davenport. Both GLS's and GENET's mechanisms for escaping from local minima resemble reinforcement learning. Overview. Solution features. To apply GLS, solution features must be defined for the given problem. Solution features are defined to distinguish between solutions with different characteristics, so that regions of similarity around local optima can be recognized and avoided. The choice of solution features depends on the type of problem, and also to a certain extent on the local search algorithm. For each feature formula_0 a cost function formula_1 is defined. Each feature is also associated with a penalty formula_2 (initially set to 0) to record the number of occurrences of the feature in local minima. The features and costs often come directly from the objective function. For example, in the traveling salesman problem, “whether the tour goes directly from city X to city Y” can be defined to be a feature. The distance between X and Y can be defined to be the cost. In the SAT and weighted MAX-SAT problems, the features can be “whether clause C is satisfied by the current assignments”. At the implementation level, we define for each feature formula_3 an indicator function formula_4 indicating whether the feature is present in the current solution or not. formula_4 is 1 when solution formula_5 exhibits property formula_3, 0 otherwise. Selective penalty modifications. GLS computes the utility of penalising each feature. When the local search algorithm returns a local minimum x, GLS penalizes (by incrementing their penalties) all those features present in that solution which have maximum utility, formula_6, as defined below. formula_7 The idea is to penalise features that have high costs, although the utility of doing so decreases as the feature is penalised more and more often. Searching through an augmented cost function. GLS uses an augmented cost function (defined below) to allow it to guide the local search algorithm out of the local minimum, through penalising features present in that local minimum. The idea is to make the local minimum more costly than the surrounding search space, where these features are not present. formula_8 The parameter λ may be used to alter the intensification of the search for solutions. A higher value for λ will result in a more diverse search, where plateaus and basins are searched more coarsely; a low value will result in a more intensive search for the solution, where the plateaus and basins in the search landscape are searched in finer detail. 
The coefficient formula_9 is used to make the penalty part of the objective function balanced relative to changes in the objective function and is problem-specific. A simple heuristic for setting formula_9 is to record the average change in the objective function up until the first local minimum, and then set formula_9 to this value divided by the number of GLS features in the problem instance. Extensions of guided local search. Mills (2002) has described an extended guided local search (EGLS) which utilises random moves and an aspiration criterion designed specifically for penalty-based schemes. The resulting algorithm improved the robustness of GLS over a range of parameter settings, particularly in the case of the quadratic assignment problem. A general version of the GLS algorithm, using a min-conflicts based hill climber (Minton et al. 1992) and based partly on GENET for constraint satisfaction and optimisation, has also been implemented in the Computer-Aided Constraint Programming project. Alsheddy (2011) extended guided local search to multi-objective optimization, and demonstrated its use in staff empowerment in scheduling. Related work. GLS was built on GENET, which was developed by Chang Wang, Edward Tsang and Andrew Davenport. The breakout method is very similar to GENET. It was designed for constraint satisfaction. Tabu search is a class of search methods which can be instantiated to specific methods. GLS can be seen as a special case of Tabu search. By sitting GLS on top of a genetic algorithm, Tung-leng Lau introduced the guided genetic algorithm (GGA). It was successfully applied to the general assignment problem (in scheduling), the processor configuration problem (in electronic design) and a set of radio-link frequency assignment problems (an abstracted military application). Choi et al. cast GENET as a Lagrangian search. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
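A compact Python sketch of the scheme described above, applied to a toy travelling salesman instance in the spirit of the feature example given earlier: features are tour edges, a feature's cost is the edge length, and the features of maximum utility are penalised each time the local search settles in a local minimum of the augmented objective. The random instance, the 2-opt local search, the value of λ and the crude stand-in used to set the coefficient a are illustrative assumptions, not details taken from the sources above.

```python
import itertools
import math
import random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(12)]
penalties = {}  # p_i for each feature (edge); implicitly 0 when absent


def dist(i, j):
    (x1, y1), (x2, y2) = cities[i], cities[j]
    return math.hypot(x1 - x2, y1 - y2)


def tour_edges(tour):
    # Features of a solution: the undirected edges used by the tour.
    return [tuple(sorted((tour[k], tour[(k + 1) % len(tour)])))
            for k in range(len(tour))]


def length(tour):
    # The original objective f: total tour length.
    return sum(dist(i, j) for i, j in tour_edges(tour))


def augmented(tour, lam, a):
    # g = f + lambda * a * (sum of penalties of the features present in the tour).
    return length(tour) + lam * a * sum(penalties.get(e, 0) for e in tour_edges(tour))


def two_opt(tour, lam, a):
    # A plain 2-opt descent, run on the augmented objective g.
    improved = True
    while improved:
        improved = False
        for k, l in itertools.combinations(range(1, len(tour)), 2):
            cand = tour[:k] + tour[k:l + 1][::-1] + tour[l + 1:]
            if augmented(cand, lam, a) + 1e-12 < augmented(tour, lam, a):
                tour, improved = cand, True
    return tour


def guided_local_search(iterations=30, lam=0.3):
    tour = list(range(len(cities)))
    a = length(tour) / len(tour)  # crude stand-in for the heuristic setting of a
    best = list(tour)
    for _ in range(iterations):
        tour = two_opt(tour, lam, a)             # settle in a local minimum of g
        if length(tour) < length(best):          # keep the best tour w.r.t. f
            best = list(tour)
        utilities = {e: dist(*e) / (1 + penalties.get(e, 0))  # util = I * c / (1 + p)
                     for e in tour_edges(tour)}
        top = max(utilities.values())
        for e, u in utilities.items():           # penalise features of maximum utility
            if u == top:
                penalties[e] = penalties.get(e, 0) + 1
    return best


print(length(guided_local_search()))
```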
[ { "math_id": 0, "text": "f_i" }, { "math_id": 1, "text": "c_i" }, { "math_id": 2, "text": "p_i" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "I_i" }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "\\operatorname{util}(x,i)" }, { "math_id": 7, "text": "\\operatorname{util}(x,i) = I_i(x)\\frac{c_i(x)}{1+p_i}." }, { "math_id": 8, "text": "g(x) = f(x) + \\lambda a \\sum_{1\\leq i\\leq m} I_i(x) p_i " }, { "math_id": 9, "text": "a" } ]
https://en.wikipedia.org/wiki?curid=15710171
15710792
Inverse mean curvature flow
Geometric flow In the mathematical fields of differential geometry and geometric analysis, inverse mean curvature flow (IMCF) is a geometric flow of submanifolds of a Riemannian or pseudo-Riemannian manifold. It has been used to prove a certain case of the Riemannian Penrose inequality, which is of interest in general relativity. Formally, given a pseudo-Riemannian manifold ("M", "g") and a smooth manifold S, an inverse mean curvature flow consists of an open interval I and a smooth map F from "I" × "S" into M such that formula_0 where H is the mean curvature vector of the immersion "F"("t", ⋅). If g is Riemannian, if S is closed with dim("M") = dim("S") + 1, and if a given smooth immersion f of S into M has mean curvature which is nowhere zero, then there exists a unique inverse mean curvature flow whose "initial data" is f. Gerhardt's convergence theorem. A simple example of inverse mean curvature flow is given by a family of concentric round hyperspheres in Euclidean space. If the dimension of such a sphere is n and its radius is r, then its mean curvature is n/r. Such a family of concentric spheres forms an inverse mean curvature flow if and only if formula_1 So a family of concentric round hyperspheres forms an inverse mean curvature flow when the radii grow exponentially. In 1990, Claus Gerhardt showed that this situation is characteristic of the more general case of mean-convex star-shaped smooth hypersurfaces of Euclidean space. In particular, for any such initial data, the inverse mean curvature flow exists for all positive time and consists only of mean-convex and star-shaped smooth hypersurfaces. Moreover, the surface area grows exponentially, and after a rescaling that fixes the surface area, the surfaces converge smoothly to a round sphere. The geometric estimates in Gerhardt's work follow from the maximum principle; the asymptotic roundness then becomes a consequence of the Krylov-Safonov theorem. In addition, Gerhardt's methods apply simultaneously to more general curvature-based hypersurface flows. As is typical of geometric flows, IMCF solutions in more general situations often have finite-time singularities, meaning that I often cannot be taken to be of the form ("a", ∞). Huisken and Ilmanen's weak solutions. Following the seminal works of Yun Gang Chen, Yoshikazu Giga, and Shun'ichi Goto, and of Lawrence Evans and Joel Spruck on the mean curvature flow, Gerhard Huisken and Tom Ilmanen replaced the IMCF equation, for hypersurfaces in a Riemannian manifold ("M", "g"), by the elliptic partial differential equation formula_2 for a real-valued function u on M. Weak solutions of this equation can be specified by a variational principle. Huisken and Ilmanen proved that for any complete and connected smooth Riemannian manifold ("M", "g") which is asymptotically flat or asymptotically conic, and for any precompact and open subset U of M whose boundary is a smooth embedded submanifold, there is a proper and locally Lipschitz function u on M which is a positive weak solution on the complement of U and which is nonpositive on U; moreover such a function is uniquely determined on the complement of U. The idea is that, as t increases, the boundary of {"x" : "u"("x") &lt; "t"} moves through the hypersurfaces arising in an inverse mean curvature flow, with the initial condition given by the boundary of U. 
However, the elliptic and weak setting gives a broader context, as such boundaries can have irregularities and can jump discontinuously, which is impossible in the usual inverse mean curvature flow. In the special case that M is three-dimensional and g has nonnegative scalar curvature, Huisken and Ilmanen showed that a certain geometric quantity known as the Hawking mass can be defined for the boundary of {"x" : "u"("x") &lt; "t"}, and is monotonically non-decreasing as t increases. In the simpler case of a smooth inverse mean curvature flow, this amounts to a local calculation and was shown in the 1970s by the physicist Robert Geroch. In Huisken and Ilmanen's setting, it is more subtle due to the possible irregularities and discontinuities of the surfaces involved. As a consequence of Huisken and Ilmanen's extension of Geroch's monotonicity, they were able to use the Hawking mass to interpolate between the surface area of an "outermost" minimal surface and the ADM mass of an asymptotically flat three-dimensional Riemannian manifold of nonnegative scalar curvature. This settled a certain case of the Riemannian Penrose inequality. Example: inverse mean curvature flow of m-dimensional spheres. A simple example of inverse mean curvature flow is given by a family of concentric round hyperspheres in formula_3. The mean curvature of an formula_4-dimensional sphere of radius formula_5 is formula_6. Due to the rotational symmetry of the sphere (or, in general, due to the invariance of mean curvature under isometries), the inverse mean curvature flow equation formula_7 reduces to the ordinary differential equation, for an initial sphere of radius formula_8, formula_9 The solution of this ODE (obtained, e.g., by separation of variables) is formula_10. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
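As a quick numerical illustration of the example above, the following Python sketch integrates the ODE for the radius with a simple Euler scheme and compares the result with the exact solution formula_10. The choice of dimension, initial radius, time horizon and step count is arbitrary and made only for illustration.

```python
import math

m, r0 = 2, 1.0            # 2-dimensional spheres in R^3, initial radius 1
T, steps = 1.0, 10_000
dt = T / steps

r = r0
for _ in range(steps):
    r += dt * r / m       # Euler step for dr/dt = r/m

exact = r0 * math.exp(T / m)
print(r, exact)           # the Euler value approaches r0 * exp(T/m) ≈ 1.6487
```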
[ { "math_id": 0, "text": "\\frac{\\partial F}{\\partial t}=\\frac{-\\mathbf H}{|\\mathbf H|^2}," }, { "math_id": 1, "text": "r'(t)=\\frac{r(t)}{n}." }, { "math_id": 2, "text": "\\operatorname{div}_g\\frac{du}{|du|_g}=|du|_g" }, { "math_id": 3, "text": "\\mathbb{R}^{m+1}" }, { "math_id": 4, "text": "m" }, { "math_id": 5, "text": "r" }, { "math_id": 6, "text": "H = \\frac{m}{r} \\in \\mathbb{R}" }, { "math_id": 7, "text": "\\partial_t F = H^{-1} \\nu" }, { "math_id": 8, "text": "r_0" }, { "math_id": 9, "text": "\\begin{align}\n\\frac{\\text{d}}{\\text{d}t} r(t) = & \\frac{r(t)}{m} , \\\\\nr(0) = & r_0 .\n\\end{align}" }, { "math_id": 10, "text": "r(t) = r_0 e^{t/m}" } ]
https://en.wikipedia.org/wiki?curid=15710792
15712191
Quasi-derivative
Generalization of a derivative of a function between two Banach spaces In mathematics, the quasi-derivative is one of several generalizations of the derivative of a function between two Banach spaces. The quasi-derivative is a slightly stronger version of the Gateaux derivative, though weaker than the Fréchet derivative. Let "f" : "A" → "F" be a continuous function from an open set "A" in a Banach space "E" to another Banach space "F". Then the quasi-derivative of "f" at "x"0 ∈ "A" is a linear transformation "u" : "E" → "F" with the following property: for every continuous function "g" : [0,1] → "A" with "g"(0) = "x"0 such that "g"′(0) ∈ "E" exists, formula_0 If such a linear map "u" exists, then "f" is said to be "quasi-differentiable" at "x"0. Continuity of "u" need not be assumed, but it follows instead from the definition of the quasi-derivative. If "f" is Fréchet differentiable at "x"0, then by the chain rule, "f" is also quasi-differentiable and its quasi-derivative is equal to its Fréchet derivative at "x"0. The converse is true provided "E" is finite-dimensional. Finally, if "f" is quasi-differentiable, then it is Gateaux differentiable and its Gateaux derivative is equal to its quasi-derivative. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
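The defining property can be illustrated numerically in the finite-dimensional case, where the quasi-derivative coincides with the ordinary (Fréchet) derivative. The Python sketch below takes a smooth function from the plane to the real line and a curve "g" with "g"(0) = "x"0, and checks that the one-sided difference quotient approaches "u"("g"′(0)); the particular function, base point and curve are illustrative choices, not taken from the article.

```python
import math

def f(x, y):
    # A smooth function from the plane to the real line (illustrative choice).
    return x * x * y + math.sin(y)

x0 = (1.0, 2.0)

def u(h):
    # Candidate linear map: the ordinary derivative (gradient) of f at x0.
    dfx = 2 * x0[0] * x0[1]              # df/dx = 2xy
    dfy = x0[0] ** 2 + math.cos(x0[1])   # df/dy = x^2 + cos y
    return dfx * h[0] + dfy * h[1]

def g(t):
    # A curve with g(0) = x0 and g'(0) = (3, -1).
    return (x0[0] + 3 * t + t * t, x0[1] - t)

g_prime_0 = (3.0, -1.0)

for t in (1e-2, 1e-4, 1e-6):
    quotient = (f(*g(t)) - f(*x0)) / t
    print(t, quotient, u(g_prime_0))     # the quotient tends to u(g'(0)) as t -> 0+
```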
[ { "math_id": 0, "text": "\\lim_{t\\to 0^+}\\frac{f(g(t))-f(x_0)}{t} = u(g'(0))." } ]
https://en.wikipedia.org/wiki?curid=15712191
1571246
Creep (Radiohead song)
1992 single by Radiohead "Creep" is the debut single by the English rock band Radiohead, released on 21 September 1992. It was included as the second track of Radiohead's debut album, "Pablo Honey" (1993). It features "blasts" of guitar noise by Jonny Greenwood and lyrics describing an obsessive unrequited attraction. Radiohead had not planned to release "Creep", and recorded it at the suggestion of the producers, Sean Slade and Paul Q. Kolderie, while they were working on other songs. They took elements from the 1972 song "The Air That I Breathe" by Albert Hammond and Mike Hazlewood. Following legal action, Hammond and Hazlewood were credited as co-writers. Kolderie convinced Radiohead's record label, EMI, to release "Creep" as a single. It was initially unsuccessful, but achieved radio play in Israel and became popular on American alternative rock radio. It was reissued in 1993 and became an international hit, likened to alt-rock "slacker anthems" such as "Smells Like Teen Spirit" by Nirvana and "Loser" by Beck. Reviews of "Creep" were mostly positive. EMI pressured Radiohead to match the success, which created tension during the recording of their second album, "The Bends" (1995). Radiohead departed from the style of "Creep" and grew weary of it, feeling it set narrow expectations of their music, and did not perform it for several years. Though they achieved greater commercial and critical success with later albums, "Creep" remains Radiohead's most successful single. It was named one of the greatest debut singles and one of the greatest songs by "Rolling Stone." In 2021, the singer, Thom Yorke, released a remixed version with synthesisers and time-stretched acoustic guitar. Recording. Radiohead formed in Oxfordshire in 1985 and signed a record contract with EMI in 1991. Their 1992 debut, the "Drill" EP, drew little attention. For their debut single, Radiohead hired the producers Sean Slade and Paul Q. Kolderie and recorded at Chipping Norton Recording Studios in Chipping Norton, Oxfordshire. They worked on the songs "Inside My Head" and "Lurgee", but without results. Between rehearsals, Radiohead spontaneously performed another song, "Creep", which the singer, Thom Yorke, had written at the University of Exeter in the late 1980s. Yorke jokingly described it as their "Scott Walker song", which the producers misinterpreted. As they left the studio that night, Slade told Kolderie, "Too bad their best song's a cover." After further recording sessions failed to produce results, Kolderie suggested Radiohead perform "Creep". After the first take, everyone in the studio burst into applause. Radiohead did not know they were being recorded; according to the drummer, Philip Selway, "The reason it sounds so powerful is because it’s completely unselfconscious." After Radiohead assured Kolderie that "Creep" was an original song, he called EMI and convinced them to release it as the single. According to Kolderie, "Everyone [at EMI] who heard 'Creep' just started going insane." Slade and Kolderie suggested that the lead guitarist, Jonny Greenwood, record a piano part. While mixing the song, Kolderie forgot to add the piano until the outro, but the band approved of the result. Lyrics. According to the critic Alex Ross, "Creep" has "obsessive" lyrics that depict the "self-lacerating rage" of an unrequited attraction. Greenwood said the lyrics were inspired by a woman who Yorke had "followed for a couple of days", and who unexpectedly attended a Radiohead performance. 
John Harris, then the Oxford correspondent for "Melody Maker", said "Creep" was about a girl who frequented the upmarket Little Clarendon Street in Oxford. According to Harris, Yorke preferred the more bohemian Jericho, and expressed his discomfort using the lines "What the hell am I doing here / I don't belong here". Asked if the lyrics were inspired by a real person who made him feel like a "creep", Yorke said: "Yeah. It was a pretty strange period in my life. When I was at college and stuff and I was really fucked up and wanted to leave and do proper things with my life like be in a rock band." Yorke said he was not happy with the lyrics, and thought they were "pretty crap". Asked about "Creep" in 1993, Yorke said: "I have a real problem being a man in the '90s... Any man with any sensitivity or conscience toward the opposite sex would have a problem. To actually assert yourself in a masculine way without looking like you're in a hard-rock band is a very difficult thing to do... It comes back to the music we write, which is not effeminate, but it's not brutal in its arrogance. It is one of the things I'm always trying: to assert a sexual persona and on the other hand trying desperately to negate it." Greenwood said "Creep" was in fact a happy song about "recognising what you are". Radiohead recorded a censored version of "Creep" for radio, which replaces the line "so fucking special" with "so very special". Radiohead worried that issuing a censored version would be selling out, but decided it was acceptable since their idols Sonic Youth had done the same thing; nonetheless, Greenwood said the British press "weren't impressed". During the recording session for the censored lyrics, Kolderie convinced Yorke to rewrite the first verse, saying he thought Yorke could do better. Composition. Like many Radiohead songs, "Creep" uses pivot notes, creating a "bittersweet, doomy" feeling. The G–B–C–Cm chord progression is repeated throughout, alternating between arpeggiated chords in the verses and last chorus and distorted power chords during the first two choruses. In G major, these may be interpreted as "I–V7/vi–IV–iv". According to Guy Capuzzo, the ostinato mirrors the lyrics. For example, the "highest pitches of the ostinato form a prominent chromatic line that 'creeps' up, then down, involving scale degrees formula_0– ♯formula_0– formula_1– ♭formula_1...[while] ascend[ing], the lyrics strain towards optimism... Descend[ing], the subject sinks back into the throes of self-pity ... The guitarist's fretting hand mirrors this contour." The middle eight originally featured a guitar solo from Greenwood. When the guitarist Ed O'Brien pointed out that the chord progression was the same as the 1972 song "The Air That I Breathe", Yorke wrote a new middle eight using the same vocal melody. According to Greenwood, "It was funny to us in a way, sort of feeding something like that into [it]. It's a bit of change." When the song shifts from the verse to the chorus, Jonny Greenwood plays three blasts of guitar noise ("dead notes" played by releasing fret-hand pressure and picking the strings). Greenwood said he did this because he did not like how quiet the song was, and so "I hit the guitar hard—really hard". O'Brien said: "That's the sound of Jonny trying to fuck the song up. He really didn't like it the first time we played it, so he tried spoiling it. And it made the song." Yorke said the sound was like the song was "slashing its wrists. 
Halfway through the song it suddenly starts killing itself off, which is the whole point of the song really. It's a real self-destruct song, there's a real self-destruction ethic in a lot of the things we do onstage." The producers boosted the sound so "it punched you in the face". According to the "Guardian" critic Alexis Petridis, "Creep" has an "almost complete lack of resemblance" to Radiohead's later music. Music video. The "Creep" music video was filmed at the Venue, Oxford. For the video, Radiohead performed a free short concert, playing "Creep" several times. They donated proceeds from audience members to the Oxford magazine "Curfew", which had covered their early work. In the audience was the electronic musician Four Tet, then a teenager, who years later supported Radiohead on tour and collaborated with Yorke. Release. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; Having a big hit like that wasn't in the game plan. We were giddy ... The first tour we sold out, and our American tour manager was going, "You know, I've toured with bands who have been doing this for seven, eight years, and this isn't usual." So it was really great on the one hand. But on the other hand we couldn't follow it up. The album had a couple of other songs that were OK, but we didn't have a body of work. We didn't know what we were doing. Ed O'Brien, guitarist EMI released "Creep" as a single on 21 September 1992. BBC Radio 1 found it "too depressing" and excluded it from playlists. It reached number 78 on the UK singles chart, selling 6,000 copies. Asked about the poor response, Yorke said he was "absolutely horribly gutted, pissed off, self-righteous". Radiohead's follow-up singles "Anyone Can Play Guitar" and "Pop Is Dead" were also unsuccessful. In late 1992, the Israeli DJ Yoav Kutner played "Creep" often on Israeli radio, having been introduced to it by an EMI representative, and it became a national hit. Radiohead quickly set up tour dates in Israel to capitalise on the success. "Creep" had similar success in New Zealand, Spain, and Scandinavia. In the US, "Creep" became an underground hit in California after it was added to an alternative rock radio playlist in San Francisco. A censored version was released to radio stations. "Creep" was included on Radiohead's debut album, "Pablo Honey," released on 22 February 1993. By mid-1993, it had become a hit in America, a "slacker anthem" in the vein of "Smells Like Teen Spirit" by Nirvana and "Loser" by Beck. Radiohead were surprised by the success; Yorke told "Melody Maker" in 1993 that many journalists misunderstood it, asking him if it was a joke. In September 1993, Radiohead performed "Creep" on "Late Night with Conan O'Brien" as the show's first musical guests. Radiohead did not want to reissue "Creep" in the UK, but relented following pressure from the music press, EMI and fans. The reissue was released in the UK on 6 September 1993 and reached number seven, promoted with an appearance on the music programme "Top of the Pops". In the US, "Creep" was aided by its appearance in a 1994 episode of the MTV animated series "Beavis and Butt-Head." Capitol, Radiohead's US label, used the endorsement in a marketing campaign with the slogan "Beavis and Butt-Head say [Radiohead] don't suck". An acoustic version of "Creep", taken from a live performance on KROQ-FM on 13 July 1993, was included on Radiohead's 1994 EP "My Iron Lung". In June 2008, "Creep" re-entered the UK singles chart at number 37 after its inclusion on "". 
As of April 2019, in the UK, it was the most streamed song released in 1992, with 10.1 million streams. On 23 April 2024, "Creep" surpassed 1 billion views on YouTube. It remains Radiohead's most successful single. Critical reception. Reviewing the 1993 reissue, Larry Flick of "Billboard" wrote: "Minimal cut, boosted with just a touch of noise, relies mainly on an appropriately languid, melodic vocal (which also vaults into Bono-esque falsetto range) to pull the whole thing together. A possible spinner for alternative and college radio." Troy J. Augusto from "Cash Box" described it as a song "for all those of the post-pimple set who just can't find their way in this big ol' world. Vocalist Thom Yorke is our too-self-aware hero who won't let a little disillusionment keep him down. Song's hook is the razor-sharp guitar play that frames Yorke's gnashing of teeth." Marisa Fox of "Entertainment Weekly" wrote that "Creep" was "the ultimate neurotic teen anthem", marrying the self-consciousness of the Smiths, the vocals and guitar of U2, and the "heavy but crunchy pop" of the Cure. Reviewing "Creep" for "Melody Maker" in September 1992, Sharon O'Connell described it as "a stormer, a perfect monster of a malevolent pop song ... Like all the best pop, it gently strokes the nape of your neck before it digs the bread knife in. Aggression is rarely this delicious." One year later, the "Melody Maker" critic Simon Price named "Creep" Single of the Week. Martin Aston from "Music Week" gave it four out of five, describing it as "stunning". Tom Doyle from "Smash Hits" also gave it four out of five and named it Best New Single, praising Yorke's lyrics, the "crunching guitar" and the "delirious" chorus. Edwin Pouncey of "NME" named "Creep" Reissue of the Week and wrote that it had "clout, class and truth proudly branded on its forearm". A reviewer from "People" called it a "startling pop song" and a "gripping descent into love's dark regrets". Later reviews. According to the journalist Alex Ross in 2001, "What set 'Creep' apart from the grunge of the early nineties was the grandeur of its chords—in particular, its regal turn from G major to B major. No matter how many times you hear the song, the second chord still sails beautifully out of the blue. The lyrics may be saying, 'I'm a creep,' but the music is saying, 'I am majestic.'" Stephen Thomas Erlewine wrote in 2001 that "Creep" achieved "a rare power that is both visceral and intelligent". In 2007, VH1 ranked "Creep" the 31st-greatest song of the 1990s. In 2020, "Rolling Stone" named it the 16th-greatest debut single; the journalist Andy Greene noted that though Radiohead had followed "Creep" with "some of the most innovative and acclaimed music of the past 30 years", it remained their most famous song. In the same year, "The Guardian" named "Creep" the 34th-greatest Radiohead song. "Rolling Stone" named "Creep" number 118 in its list of the 500 greatest songs in both 2021 and 2024. Legacy. Following the release of "Pablo Honey", Radiohead spent two years touring in support of Belly, PJ Harvey and James. They performed "Creep" at every show, and came to resent it. O'Brien recalled: "We seemed to be living out the same four and a half minutes of our lives over and over again. It was incredibly stultifying." Yorke told "Rolling Stone" in 1993: "It's like it's not our song any more ... It feels like we're doing a cover." During Radiohead's first American tour, audience members would scream for "Creep", then leave after it was performed. 
Yorke said the success "gagged" them and almost caused them to break up; they felt they were being judged on a single song. Radiohead were determined to move on rather than "repeat that small moment of [our] lives forevermore". The success of "Creep" meant that Radiohead were not in debt to EMI, and so had more freedom on their next album, "The Bends" (1995). The album title, a term for decompression sickness, references Radiohead's rapid rise to fame with "Creep"; Yorke said "we just came up too fast". John Leckie, who produced "The Bends", recalled that EMI hoped for a single "even better" than "Creep" but that Radiohead "didn't even know what was good about it in the first place". Radiohead wrote the "Bends" song "My Iron Lung" in response, with the lines: "This is our new song / just like the last one / a total waste of time". Yorke said in 1995: "People have defined our emotional range with that one song, 'Creep'. I saw reviews of 'My Iron Lung' that said it was just like 'Creep'. When you're up against things like that, it's like: 'Fuck you.' These people are never going to listen." In January 1996, Radiohead surpassed the UK chart performance of "Creep" with the "Bends" single "Street Spirit", which reached number five. This, alongside the critical success of "The Bends", established that Radiohead were not one-hit wonders. Over the following years, Radiohead departed further from the style of "Creep". During the promotion for their third album, "OK Computer" (1997), Yorke became hostile when "Creep" was mentioned in interviews and refused requests to play it, telling a Montréal audience: "Fuck off, we're tired of it." He dismissed fans demanding it as "anally retarded". After the tour, Radiohead did not perform "Creep" until the encore of their 2001 homecoming concert at South Park, Oxford, when an equipment failure halted a performance of another song. In a surprise move, Radiohead performed "Creep" as the opening song of their headline performance at the 2009 Reading Festival. They did not perform it again until their 2016 tour for "A Moon Shaped Pool," when a fan spent the majority of a concert shouting for it. Radiohead decided to play it to "see what the reaction is, just to see how it feels". They performed "Creep" again during the encore of their headline performance at the Glastonbury Festival that year. According to the "Guardian" critic Alexis Petridis, "Given Radiohead's famously fractious relationship with their first big hit ... the performance of 'Creep' [was] greeted with something approaching astonished delight." In 2020, the "Guardian" critic Jazz Monroe wrote: "In the end, the band's disavowal of the song sent its credibility full circle. Nowadays, 'Creep' is a joke, but we're all blissfully in on it." In 2017, O'Brien said: "It's nice to play for the right reasons. People like it and want to hear it. We do err towards not playing it because you don't want it to feel like show business." In the same interview, Yorke said: "It can be cool sometimes, but other times I want to stop halfway through and be like, 'Nah, this isn't happening'." In a 2020 interview, O'Brien was dismissive of "Pablo Honey" but cited "Creep" as the "standout track". In 2023, Yorke said that his vocal range had dropped with age and that he found "Creep" difficult to sing. 2021 remix. In July 2021, Yorke released "Creep (Very 2021 Rmx)", a remixed version of "Creep". 
The remix is based on a time-stretched version of the acoustic version of "Creep", extending it to nine minutes, with "eerie" synthesisers. Yorke contributed the remix to a show by the Japanese fashion designer Jun Takahashi, who provided artwork and an animated music video. "Vogue" described the remix as "haunting and spare", and "Classic Rock" described it as "woozy" and "discombobulating". "Rolling Stone" said it was a fitting track for the COVID-19 pandemic, when "a sense of time is warped and singular moments can seem both fleeting and drawn out simultaneously". Covers. In April 2008, the American musician Prince covered "Creep" at the Coachella Valley Music and Arts Festival. A bootleg recording was shared online, but removed at Prince's request. After being informed of the situation in an interview, Yorke said: "Well, tell him to unblock it. It's our song." In 2011, the Canadian actor Jim Carrey covered "Creep" at Arlene's Grocery in New York City. Pentatonix covered "Creep" on "The Masked Singer", and released a studio version the night after their unmasking. Other artists who have covered "Creep" include Postmodern Jukebox, Korn, Weezer, Damien Rice, Amanda Palmer, Moby, the Pretenders, Kelly Clarkson, Arlo Parks, Olivia Rodrigo, and Ernest. A cover by the choir group the Scala &amp; Kolacny Brothers was used in the trailer for the 2010 film "The Social Network", creating a trend for trailers using eerie, slowed-down versions of pop songs. A version sung by Diego Luna appears in the 2014 animated film "The Book of Life". According to the director, Jorge Gutierrez, Radiohead told him: "For the first time ever, the way I'm using the song is exactly how it's supposed to be used. They said it's for a teenager who feels like he doesn't fit in." In the 2021 video game "Life Is Strange: True Colors", the protagonist, Alex Chen (voiced by mxmtoon), sings "Creep" in the first chapter. The acoustic version of "Creep" was featured in the 2023 film "Guardians of the Galaxy Vol. 3." Copyright infringement. The chord progression and melody in "Creep" are similar to those of the 1972 song "The Air That I Breathe", written by Albert Hammond and Mike Hazlewood. After Rondor Music, the publisher of "The Air That I Breathe", took legal action, Hammond and Hazlewood received cowriting credits and a percentage of the royalties. Hammond said Radiohead were honest about having reused the composition, and so he and Hazlewood accepted only a small part of the royalties. In January 2018, the American singer Lana Del Rey said on Twitter that Radiohead were taking legal action against her for allegedly plagiarising "Creep" on her 2017 track "Get Free", and had asked for 100% of publishing royalties instead of Del Rey's offer of 40%. She denied that "Creep" had inspired "Get Free". Radiohead's publisher, Warner Chappell Music, confirmed it was seeking songwriting credits for "all writers" of "Creep", but denied that a lawsuit had been brought or that Radiohead had demanded 100% of royalties. In March, Del Rey told an audience: "My lawsuit's over, I guess I can sing that song any time I want." The writing credits for "Get Free" were not updated on the database of the American Society of Composers, Authors, and Publishers. Track listings. All tracks are written by Radiohead. &lt;templatestyles src="Col-begin/styles.css"/&gt; Credits and personnel. Adapted from the original release liner notes, except where noted: Radiohead Technical Artwork Charts. &lt;templatestyles src="Col-begin/styles.css"/&gt; Notes.
&lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\hat 5" }, { "math_id": 1, "text": "\\hat 6" } ]
https://en.wikipedia.org/wiki?curid=1571246
15713242
Superconducting radio frequency
Technique used to attain a high quality factor in resonant cavities Superconducting radio frequency (SRF) science and technology involves the application of electrical superconductors to radio frequency devices. The ultra-low electrical resistivity of a superconducting material allows an RF resonator to obtain an extremely high quality factor, "Q". For example, it is commonplace for a 1.3 GHz niobium SRF resonant cavity at 1.8 kelvins to obtain a quality factor of "Q"=5×1010. Such a very high "Q" resonator stores energy with very low loss and narrow bandwidth. These properties can be exploited for a variety of applications, including the construction of high-performance particle accelerator structures. Introduction. The amount of loss in an SRF resonant cavity is so minute that it is often explained with the following comparison: Galileo Galilei (1564–1642) was one of the first investigators of pendulous motion, a simple form of mechanical resonance. Had Galileo experimented with a 1 Hz resonator with a quality factor "Q" typical of today's SRF cavities and left it swinging in an entombed lab since the early 17th century, that pendulum would still be swinging today with about half of its original amplitude. The most common application of superconducting RF is in particle accelerators. Accelerators typically use resonant RF cavities formed from or coated with superconducting materials. Electromagnetic fields are excited in the cavity by coupling in an RF source with an antenna. When the RF fed by the antenna is the same as that of a cavity mode, the resonant fields build to high amplitudes. Charged particles passing through apertures in the cavity are then accelerated by the electric fields and deflected by the magnetic fields. The resonant frequency driven in SRF cavities typically ranges from 200 MHz to 3 GHz, depending on the particle species to be accelerated. The most common fabrication technology for such SRF cavities is to form thin walled (1–3 mm) shell components from high purity niobium sheets by stamping. These shell components are then welded together to form cavities. A simplified diagram of the key elements of an SRF cavity setup is shown below. The cavity is immersed in a saturated liquid helium bath. Pumping removes helium vapor boil-off and controls the bath temperature. The helium vessel is often pumped to a pressure below helium's superfluid lambda point to take advantage of the superfluid's thermal properties. Because superfluid has very high thermal conductivity, it makes an excellent coolant. In addition, superfluids boil only at free surfaces, preventing the formation of bubbles on the surface of the cavity, which would cause mechanical perturbations. An antenna is needed in the setup to couple RF power to the cavity fields and, in turn, any passing particle beam. The cold portions of the setup need to be extremely well insulated, which is best accomplished by a vacuum vessel surrounding the helium vessel and all ancillary cold components. The full SRF cavity containment system, including the vacuum vessel and many details not discussed here, is a cryomodule. Entry into superconducting RF technology can incur more complexity, expense, and time than normal-conducting RF cavity strategies. SRF requires chemical facilities for harsh cavity treatments, a low-particulate cleanroom for high-pressure water rinsing and assembly of components, and complex engineering for the cryomodule vessel and cryogenics. 
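The pendulum comparison above can be made quantitative. The short Python sketch below is illustrative only: it assumes the standard result that the amplitude of a lightly damped resonator decays as exp(−"ωt"/2"Q"), and plugs in the 1 Hz frequency, the quality factor quoted for a modern SRF cavity, and roughly four centuries of elapsed time.

```python
import math

# Amplitude of a lightly damped resonator decays as exp(-omega*t / (2*Q)).
Q = 5e10                        # quality factor quoted for a 1.3 GHz niobium cavity
omega = 2 * math.pi * 1.0       # 1 Hz pendulum, in rad/s
t = 400 * 365.25 * 24 * 3600    # roughly four centuries, in seconds

print(math.exp(-omega * t / (2 * Q)))   # ~0.45, i.e. about half the original amplitude
```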
A vexing aspect of SRF is the as-yet elusive ability to consistently produce high "Q" cavities in high volume production, which would be required for a large linear collider. Nevertheless, for many applications the capabilities of SRF cavities provide the only solution for a host of demanding performance requirements. Several extensive treatments of SRF physics and technology are available, many of them free of charge and online. There are the proceedings of CERN accelerator schools, a scientific paper giving a thorough presentation of the many aspects of an SRF cavity to be used in the International Linear Collider, bi-annual International Conferences on RF Superconductivity held at varying global locations in odd numbered years, and tutorials presented at the conferences. SRF cavity application in particle accelerators. A large variety of RF cavities are used in particle accelerators. Historically most have been made of copper – a good electrical conductor – and operated near room temperature with exterior water cooling to remove the heat generated by the electrical loss in the cavity. In the past two decades, however, accelerator facilities have increasingly found superconducting cavities to be more suitable (or necessary) for their accelerators than normal-conducting copper versions. The motivation for using superconductors in RF cavities is "not" to achieve a net power saving, but rather to increase the quality of the particle beam being accelerated. Though superconductors have little AC electrical resistance, the little power they do dissipate is radiated at very low temperatures, typically in a liquid helium bath at 1.6 K to 4.5 K, and maintaining such low temperatures takes a lot of energy. The refrigeration power required to maintain the cryogenic bath at low temperature in the presence of heat from small RF power dissipation is dictated by the Carnot efficiency, and can easily be comparable to the normal-conductor power dissipation of a room-temperature copper cavity. The principle motivations for using superconducting RF cavities, are: When future advances in superconducting material science allow higher superconducting critical temperatures "Tc" and consequently higher SRF bath temperatures, then the reduced thermocline between the cavity and the surrounding environment could yield a significant net power savings by SRF over the normal conducting approach to RF cavities. Other issues will need to be considered with a higher bath temperature, though, such as the fact that superfluidity (which is presently exploited with liquid helium) would not be present with (for example) liquid nitrogen. At present, none of the "high "Tc"" superconducting materials are suitable for RF applications. Shortcomings of these materials arise due to their underlying physics as well as their bulk mechanical properties not being amenable to fabricating accelerator cavities. However, depositing films of promising materials onto other mechanically amenable cavity materials may provide a viable option for exotic materials serving SRF applications. At present, the de facto choice for SRF material is still pure niobium, which has a critical temperature of 9.3 K and functions as a superconductor nicely in a liquid helium bath of 4.2 K or lower, and has excellent mechanical properties. Physics of SRF cavities. The physics of Superconducting RF can be complex and lengthy. 
A few simple approximations derived from the complex theories, though, can serve to provide some of the important parameters of SRF cavities. By way of background, some of the pertinent parameters of RF cavities are itemized as follows. A resonator's quality factor is defined by formula_0, where: "ω" is the resonant frequency in [rad/s], "U" is the energy stored in [J], and "Pd" is the power dissipated in [W] in the cavity to maintain the energy "U". The energy stored in the cavity is given by the integral of field energy density over its volume, formula_1 , where: "H" is the magnetic field in the cavity and "μ0" is the permeability of free space. The power dissipated is given by the integral of resistive wall losses over its surface, formula_2 , where: "Rs" is the surface resistance which will be discussed below. The integrals of the electromagnetic field in the above expressions are generally not solved analytically, since the cavity boundaries rarely lie along axes of common coordinate systems. Instead, the calculations are performed by any of a variety of computer programs that solve for the fields for non-simple cavity shapes, and then numerically integrate the above expressions. An RF cavity parameter known as the Geometry Factor ranks the cavity's effectiveness of providing accelerating electric field due to the influence of its shape alone, which excludes specific material wall loss. The Geometry Factor is given by formula_3 , and then formula_4 The geometry factor is quoted for cavity designs to allow comparison to other designs independent of wall loss, since wall loss for SRF cavities can vary substantially depending on material preparation, cryogenic bath temperature, electromagnetic field level, and other highly variable parameters. The Geometry Factor is also independent of cavity size, it is constant as a cavity shape is scaled to change its frequency. As an example of the above parameters, a typical 9-cell SRF cavity for the International Linear Collider (a.k.a. a TESLA cavity) would have "G"=270 Ω and "Rs"= 10 nΩ, giving "Qo"=2.7×1010. The critical parameter for SRF cavities in the above equations is the surface resistance "Rs", and is where the complex physics comes into play. For normal-conducting copper cavities operating near room temperature, "Rs" is simply determined by the empirically measured bulk electrical conductivity "σ" by formula_5 . For copper at 300 K, "σ"=5.8×107 (Ω·m)−1 and at 1.3 GHz, "Rs copper"= 9.4 mΩ. For Type II superconductors in RF fields, "Rs" can be viewed as the sum of the superconducting BCS resistance and temperature-independent "residual resistances", formula_6 . The "BCS resistance" derives from BCS theory. One way to view the nature of the BCS RF resistance is that the superconducting Cooper pairs, which have zero resistance for DC current, have finite mass and momentum which has to alternate sinusoidally for the AC currents of RF fields, thus giving rise to a small energy loss. The BCS resistance for niobium can be approximated when the temperature is less than half of niobium's superconducting critical temperature, "T"&lt;"Tc"/2, by formula_7 [Ω], where: "f" is the frequency in [Hz], "T" is the temperature in [K], and "Tc"=9.3 K for niobium, so this approximation is valid for "T"&lt;4.65 K. Note that for superconductors, the BCS resistance increases quadratically with frequency, ~"f" 2, whereas for normal conductors the surface resistance increases as the root of frequency, ~√"f". 
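These relations are straightforward to evaluate numerically. The following Python sketch is illustrative only (the function names are arbitrary); it reproduces the copper surface resistance quoted above for 1.3 GHz, the ideal quality factor "G"/"Rs" of the TESLA-style cavity example, and the different frequency scaling of the BCS and normal-conducting surface resistances.

```python
import math

MU0 = 4e-7 * math.pi   # permeability of free space [H/m]

def rs_normal(f, sigma):
    """Normal-conductor surface resistance, Rs = sqrt(omega*mu0/(2*sigma)) [ohm]."""
    return math.sqrt(2 * math.pi * f * MU0 / (2 * sigma))

def rs_bcs_niobium(f, T):
    """Approximate BCS surface resistance of niobium, valid for T < Tc/2 [ohm]."""
    return 2e-4 * (f / 1.5e9) ** 2 * math.exp(-17.67 / T) / T

print(rs_normal(1.3e9, 5.8e7))   # copper at 300 K and 1.3 GHz: ~9.4e-3 ohm (9.4 mOhm)
print(270 / 10e-9)               # TESLA-style cavity, Qo = G/Rs: ~2.7e10

# BCS loss grows as f**2, while normal-conducting loss grows only as sqrt(f):
print(rs_bcs_niobium(2.6e9, 2.0) / rs_bcs_niobium(1.3e9, 2.0))   # 4.0 on doubling f
print(rs_normal(2.6e9, 5.8e7) / rs_normal(1.3e9, 5.8e7))         # ~1.41, i.e. sqrt(2)
```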
For this reason, the majority of superconducting cavity applications favor lower frequencies, &lt;3 GHz, and normal-conducting cavity applications favor higher frequencies, &gt;0.5 GHz, there being some overlap depending on the application. The superconductor's "residual resistance" arises from several sources, such as random material defects, hydrides that can form on the surface due to hot chemistry and slow cool-down, and others that are yet to be identified. One of the quantifiable residual resistance contributions is due to an external magnetic field pinning magnetic fluxons in a Type II superconductor. The pinned fluxon cores create small normal-conducting regions in the niobium that can be summed to estimate their net resistance. For niobium, the magnetic field contribution to "Rs" can be approximated by formula_8 [Ω], where: "Hext" is any external magnetic field in [Oe], "Hc2" is the Type II superconductor magnetic quench field, which is 2400 Oe (190 kA/m) for niobium, and "Rn" is the normal-conducting resistance of niobium in ohms. The Earth's nominal magnetic flux of 0.5 gauss (50 μT) translates to a magnetic field of 0.5 Oe (40 A/m) and would produce a residual surface resistance in a superconductor that is orders of magnitude greater than the BCS resistance, rendering the superconductor too lossy for practical use. For this reason, superconducting cavities are surrounded by magnetic shielding to reduce the field permeating the cavity to typically &lt;10 mOe (0.8 A/m). Using the above approximations for a niobium SRF cavity at 1.8 K, 1.3 GHz, and assuming a magnetic field of 10 mOe (0.8 A/m), the surface resistance components would be "RBCS" = 4.55 nΩ and "Rres" = "RH" = 3.42 nΩ, giving a net surface resistance "Rs" = 7.97 nΩ. If for this cavity "G" = 270 Ω then the ideal quality factor would be "Qo" = 3.4×1010. The "Qo" just described can be further improved by up to a factor of 2 by performing a mild vacuum bake of the cavity. Empirically, the bake seems to reduce the BCS resistance by 50%, but increases the residual resistance by 30%. The plot below shows the ideal "Qo" values for a range of residual magnetic field for a baked and unbaked cavity. In general, much care and attention to detail must be exercised in the experimental setup of SRF cavities so that there is no "Qo" degradation due to RF losses in ancillary components, such as stainless steel vacuum flanges that are too close to the cavity's evanescent fields. However, careful SRF cavity preparation and experimental configuration have achieved the ideal "Qo" not only for low field amplitudes, but up to cavity fields that are typically 75% of the magnetic field quench limit. Few cavities make it to the magnetic field quench limit since residual losses and vanishingly small defects heat up localized spots, which eventually exceed the superconducting critical temperature and lead to a thermal quench. "Q" vs "E". When using superconducting RF cavities in particle accelerators, the field level in the cavity should generally be as high as possible to most efficiently accelerate the beam passing through it. The "Qo" values described by the above calculations tend to degrade as the fields increase, which is plotted for a given cavity as a "Q" vs "E" curve, where "E" refers to the accelerating electric field of the TM01 mode.
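The worked example above follows directly from the two approximations. The sketch below is a minimal illustration, with arbitrary helper names and the constants taken from the text (1.3 GHz, 1.8 K, a 10 mOe residual field and "G" = 270 Ω).

```python
import math

def r_bcs_niobium(f, T):
    """Approximate BCS surface resistance of niobium for T < Tc/2 [ohm]."""
    return 2e-4 * (f / 1.5e9) ** 2 * math.exp(-17.67 / T) / T

def r_trapped_flux(H_ext_oe, f):
    """Residual resistance from an external field trapped as fluxons [ohm]."""
    return 9.49e-12 * H_ext_oe * math.sqrt(f)

f, T, H = 1.3e9, 1.8, 0.010        # 1.3 GHz, 1.8 K, 10 mOe
R_bcs = r_bcs_niobium(f, T)        # ~4.55e-9 ohm
R_res = r_trapped_flux(H, f)       # ~3.42e-9 ohm
R_s = R_bcs + R_res                # ~7.97e-9 ohm
print(270 / R_s)                   # ideal Qo = G/Rs: ~3.4e10
```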
Ideally, the cavity "Qo" would remain constant as the accelerating field is increased all the way up to the point of a magnetic quench field, as indicated by the "ideal" dashed line in the plot below. In reality, though, even a well prepared niobium cavity will have a "Q" vs "E" curve that lies beneath the ideal, as shown by the "good" curve in the plot. There are many phenomena that can occur in an SRF cavity to degrade its "Q" vs "E" performance, such as impurities in the niobium, hydrogen contamination due to excessive heat during chemistry, and a rough surface finish. After a couple decades of development, a necessary prescription for successful SRF cavity production is emerging. This includes: There remains some uncertainty as to the root cause of why some of these steps lead to success, such as the electropolish and vacuum bake. However, if this prescription is not followed, the "Q" vs "E" curve often shows an excessive degradation of "Qo" with increasing field, as shown by the ""Q" slope" curve in the plot below. Finding the root causes of "Q" slope phenomena is the subject of ongoing fundamental SRF research. The insight gained could lead to simpler cavity fabrication processes as well as benefit future material development efforts to find higher "Tc" alternatives to niobium. In 2012, the Q(E) dependence on SRF cavities discovered for the first time in such a way that the Q-rise phenomenon was observed in Ti doped SRF cavity. The quality factor increases with increase in accelerating field and was explained by the presence of sharper peaks in the electronic density of states at the gap edges in doped cavities and such peaks being broadened by the rf current. Later the similar phenomenon was observed with nitrogen doping and which has been the current state-of-art cavity preparation for high performance. Wakefields and higher order modes (HOMs). One of the main reasons for using SRF cavities in particle accelerators is that their large apertures result in low beam impedance and higher thresholds of deleterious beam instabilities. As a charged particle beam passes through a cavity, its electromagnetic radiation field is perturbed by the sudden increase of the conducting wall diameter in the transition from the small-diameter beampipe to the large hollow RF cavity. A portion of the particle's radiation field is then "clipped off" upon re-entrance into the beampipe and left behind as wakefields in the cavity. The wakefields are simply superimposed upon the externally driven accelerating fields in the cavity. The spawning of electromagnetic cavity modes as wakefields from the passing beam is analogous to a drumstick striking a drumhead and exciting many resonant mechanical modes. The beam wakefields in an RF cavity excite a subset of the spectrum of the many electromagnetic modes, including the externally driven TM01 mode. There are then a host of beam instabilities that can occur as the repetitive particle beam passes through the RF cavity, each time adding to the wakefield energy in a collection of modes. 
For a particle bunch with charge "q", a length much shorter than the wavelength of a given cavity mode, and traversing the cavity at time "t"=0, the amplitude of the wakefield voltage left behind in the cavity "in a given mode" is given by formula_9 , where: "R" is the shunt impedance of the cavity "mode" defined by formula_10 , "E" is the electric field of the RF mode, "Pd" is the power dissipated in the cavity to produce the electric field "E", "QL" is the "loaded "Q"" of the cavity, which takes into account energy leakage out of the coupling antenna, "ωo" is the angular frequency of the mode, the imaginary exponential is the mode's sinusoidal time variation, the real exponential term quantifies the decay of the wakefield with time, and formula_11 is termed the "loss parameter" of the RF mode. The shunt impedance "R" can be calculated from the solution of the electromagnetic fields of a mode, typically by a computer program that solves for the fields. In the equation for "Vwake", the ratio "R"/"Qo" serves as a good comparative measure of wakefield amplitude for various cavity shapes, since the other terms are typically dictated by the application and are fixed. Mathematically, formula_12 , where relations defined above have been used. "R"/"Qo" is then a parameter that factors out cavity dissipation and is viewed as a measure of the cavity geometry's effectiveness of producing accelerating voltage per stored energy in its volume. The wakefield being proportional to "R"/"Qo" can be seen intuitively since a cavity with small beam apertures concentrates the electric field on axis and has high "R"/"Qo", but also clips off more of the particle bunch's radiation field as deleterious wakefields. The calculation of electromagnetic field buildup in a cavity due to wakefields can be complex and depends strongly on the specific accelerator mode of operation. For the straightforward case of a storage ring with repetitive particle bunches spaced by time interval "Tb" and a bunch length much shorter than the wavelength of a given mode, the long term steady state wakefield voltage presented to the beam by the mode is given by formula_13 , where: formula_14 is the decay of the wakefield between bunches, and "δ" is the phase shift of the wakefield mode between bunch passages through the cavity. As an example calculation, let the phase shift "δ=0", which would be close to the case for the TM01 mode by design and unfortunately likely to occur for a few HOMs. Having "δ=0" (or an integer multiple of an RF mode's period, "δ=n2π") gives the worst-case wakefield build-up, where successive bunches are maximally decelerated by previous bunches' wakefields and give up even more energy than with only their "self wake". Then, taking "ωo" = 2"π" 500 MHz, "Tb"=1 μs, and "QL"=106, the buildup of wakefields would be "Vss wake"=637×"Vwake". A pitfall for any accelerator cavity would be the presence of what is termed a "trapped mode". This is an HOM that does not leak out of the cavity and consequently has a "QL" that can be orders of magnitude larger than used in this example. In this case, the buildup of wakefields of the trapped mode would likely cause a beam instability. The beam instability implications due to the "Vss wake" wakefields are thus addressed differently for the fundamental accelerating mode TM01 and all other RF modes, as described next.
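The build-up factor in this example is easy to check numerically. The following sketch simply codes the steady-state formula above; the function name is arbitrary, the parameter values are those of the example, and the second call uses an arbitrarily chosen higher loaded "Q" to suggest how much worse a trapped mode would be.

```python
import cmath
import math

def buildup_factor(omega_o, T_b, Q_L, delta):
    """|V_ss_wake / V_wake| for evenly spaced bunches, per the formula above."""
    tau = omega_o * T_b / (2 * Q_L)          # wakefield decay between bunch passages
    return abs(1 / (1 - cmath.exp(-tau + 1j * delta)) - 0.5)

omega_o = 2 * math.pi * 500e6                # 500 MHz mode
print(buildup_factor(omega_o, T_b=1e-6, Q_L=1e6, delta=0.0))   # ~637, as in the text

# An illustrative "trapped" HOM with a loaded Q two orders of magnitude larger:
print(buildup_factor(omega_o, T_b=1e-6, Q_L=1e8, delta=0.0))   # ~6.4e4
```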
Fundamental accelerating mode TM010. The complex calculations treating wakefield-related beam stability for the TM010 mode in accelerators show that there are specific regions of phase between the beam bunches and the driven RF mode that allow stable operation at the highest possible beam currents. At some point of increasing beam current, though, just about any accelerator configuration will become unstable. As pointed out above, the beam wakefield amplitude is proportional to the cavity parameter "R"/"Qo", so this is typically used as a comparative measure of the likelihood of TM01-related beam instabilities. A comparison of "R"/"Qo" and "R" for a 500 MHz superconducting cavity and a 500 MHz normal-conducting cavity is shown below. The accelerating voltage provided by both cavities is comparable for a given net power consumption when including refrigeration power for SRF. The "R"/"Qo" for the SRF cavity is 15 times less than the normal-conducting version, so the SRF cavity is less susceptible to beam instabilities. This is one of the main reasons such SRF cavities are chosen for use in high-current storage rings. Higher order modes (HOMs). In addition to the fundamental accelerating TM010 mode of an RF cavity, numerous higher frequency modes and a few lower-frequency dipole modes are excited by charged particle beam wakefields, all generally denoted higher order modes (HOMs). These modes serve no useful purpose for accelerator particle beam dynamics, only giving rise to beam instabilities, and are best heavily damped to have as low a "QL" as possible. The damping is accomplished by preferentially allowing dipole and all HOMs to leak out of the SRF cavity, and then coupling them to resistive RF loads. The leaking out of undesired RF modes occurs along the beampipe, and results from a careful design of the cavity aperture shapes. The aperture shapes are tailored to keep the TM01 mode "trapped" with high "Qo" inside of the cavity and allow HOMs to propagate away. The propagation of HOMs is sometimes facilitated by having a larger diameter beampipe on one side of the cavity, beyond the smaller diameter cavity iris, as seen in the SRF cavity CAD cross-section at the top of this wiki page. The larger beampipe diameter allows the HOMs to easily propagate away from the cavity to an HOM antenna or beamline absorber. The resistive load for HOMs can be implemented by having loop antennas located at apertures on the side of the beampipe, with coaxial lines routing the RF to outside of the cryostat to standard RF loads. Another approach is to place the HOM loads directly on the beampipe as hollow cylinders with RF lossy material attached to the interior surface, as shown in the adjacent image. This "beamline load" approach can be more technically challenging, since the load must absorb high RF power while preserving a high-vacuum beamline environment in close proximity to a contamination-sensitive SRF cavity. Further, such loads must sometimes operate at cryogenic temperatures to avoid large thermal gradients along the beampipe from the cold SRF cavity. The benefit of the beamline HOM load configuration, however, is a greater absorptive bandwidth and HOM attenuation as compared to antenna coupling. This benefit can be the difference between a stable vs. an unstable particle beam for high current accelerators. Cryogenics. A significant part of SRF technology is cryogenic engineering. The SRF cavities tend to be thin-walled structures immersed in a bath of liquid helium having temperature 1.6 K to 4.5 K.
Careful engineering is then required to insulate the helium bath from the room-temperature external environment. This is accomplished by: The major cryogenic engineering challenge is the refrigeration plant for the liquid helium. The small power that is dissipated in an SRF cavity and the heat leak to the vacuum vessel are both heat loads at very low temperature. The refrigerator must remove this heat with an inherently poor efficiency, given by the product of the Carnot efficiency "ηC" and a "practical" efficiency "ηp". The Carnot efficiency derives from the second law of thermodynamics and can be quite low. It is given by formula_15 where "Tcold" is the temperature of the cold load, which is the helium vessel in this case, and "Twarm" is the temperature of the refrigeration heat sink, usually room temperature. In most cases "Twarm ="300 K, so for "Tcold ≥"150 K the Carnot efficiency is unity. The practical efficiency is a catch-all term that accounts for the many mechanical non-idealities that come into play in a refrigeration system aside from the fundamental physics of the Carnot efficiency. For a large refrigeration installation there is some economy of scale, and it is possible to achieve "ηp" in the range of 0.2–0.3. The wall-plug power consumed by the refrigerator is then formula_16 , where "Pcold" is the power dissipated at temperature "Tcold". As an example, if the refrigerator delivers 1.8 K helium to the cryomodule where the cavity and heat leak dissipate "Pcold"=10 W, then the refrigerator having "Twarm"=300 K and "ηp"=0.3 would have "ηC"=0.006 and a wall-plug power of "Pwarm"=5.5 kW. Of course, most accelerator facilities have numerous SRF cavities, so the refrigeration plants can get to be very large installations. The temperature of operation of an SRF cavity is typically selected as a minimization of wall-plug power for the entire SRF system. The plot to the right then shows the pressure to which the helium vessel must be pumped to obtain the desired liquid helium temperature. Atmospheric pressure is 760 Torr (101.325 kPa), corresponding to 4.2 K helium. The superfluid "λ" point occurs at about 38 Torr (5.1 kPa), corresponding to 2.18 K helium. Most SRF systems either operate at atmospheric pressure, 4.2 K, or below the λ point at a system efficiency optimum usually around 1.8 K, corresponding to about 12 Torr (1.6 kPa).
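The refrigeration arithmetic can be scripted in a few lines. The sketch below is illustrative only; it codes the Carnot expression and the assumed practical efficiency of 0.3, reproduces the roughly 5.5 kW wall-plug figure for 10 W dissipated at 1.8 K, and shows the corresponding figure for a 4.2 K bath.

```python
def carnot_efficiency(T_cold, T_warm):
    """Carnot efficiency of a refrigerator rejecting heat at T_warm [K]."""
    return 1.0 if T_cold >= T_warm - T_cold else T_cold / (T_warm - T_cold)

def wall_plug_power(P_cold, T_cold, T_warm=300.0, eta_practical=0.3):
    """Room-temperature power needed to remove P_cold watts at T_cold kelvin."""
    return P_cold / (carnot_efficiency(T_cold, T_warm) * eta_practical)

print(wall_plug_power(10.0, 1.8))   # ~5.5e3 W for the 1.8 K example above
print(wall_plug_power(10.0, 4.2))   # the same load at 4.2 K needs only ~2.3e3 W
```

References. &lt;templatestyles src="Reflist/styles.css" /&gt;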
[ { "math_id": 0, "text": " Q_o = \\frac{\\omega U} {P_d} " }, { "math_id": 1, "text": " U = \\frac{\\mu_0}{2}\\int{|\\overrightarrow{H}|^2 dV}" }, { "math_id": 2, "text": " P_d = \\frac{R_s}{2}\\int{|\\overrightarrow{H}|^2 dS} " }, { "math_id": 3, "text": " G = \\frac{\\omega \\mu_0 \\int{|\\overrightarrow{H}|^2 dV}}{\\int{|\\overrightarrow{H}|^2 dS}} " }, { "math_id": 4, "text": " Q_o = \\frac{G} {R_s} \\cdot " }, { "math_id": 5, "text": " R_{s\\ normal} = \\sqrt{ \\frac{\\omega \\mu_0} {2 \\sigma} }" }, { "math_id": 6, "text": " R_s = R_{BCS} + R_{res}" }, { "math_id": 7, "text": " R_{BCS} \\simeq 2 \\times 10^{-4} \\left( \\frac{f}{1.5 \\times 10^{9}} \\right)^2 \\frac {e^{-17.67 / T}} {T} " }, { "math_id": 8, "text": " R_{H} = \\frac{H_{ext}}{2 H_{c2}} R_n \\approx 9.49 \\times 10^{-12} H_{ext}\\sqrt{f} " }, { "math_id": 9, "text": " V_{wake} = \\frac{q \\omega_o R} {2 Q_o} \\ e^{j \\omega_o t} \\ e^{-\\frac{\\omega_o t}{2 Q_L}} = k q \\ e^{j \\omega_o t} \\ e^{-\\frac{\\omega_o t}{2 Q_L}}" }, { "math_id": 10, "text": " R = \\frac{ \\left( \\int{\\overrightarrow{E} \\cdot dl} \\right)^2}{P_d} = \\frac{V^2}{P_d} " }, { "math_id": 11, "text": " k = \\frac{\\omega_o R} {2 Q_o} " }, { "math_id": 12, "text": " \\frac{R} {Q_o} = \\frac{V^2}{\\omega_o U} = \\frac{2 \\left( \\int{\\overrightarrow{E} \\cdot dl} \\right)^2}{ \\omega_o \\mu_o\\int{|\\overrightarrow{H}|^2 dV} } = \\frac {2k}{\\omega_o}" }, { "math_id": 13, "text": " V_{ss \\ wake} = V_{wake} \\left( \\frac{1} {1 - e^{-\\tau} e^{j\\delta}} - \\frac{1}{2} \\right) " }, { "math_id": 14, "text": " \\tau = \\frac{\\omega_o T_b}{2 Q_L} " }, { "math_id": 15, "text": " \\eta_C =\n\\begin{cases}\n \\frac{T_{cold}} {T_{warm} - T_{cold}}, & \\mbox{if } T_{cold} < T_{warm} - T_{cold} \\\\\n 1, & \\mbox{otherwise}\n\\end{cases}\n" }, { "math_id": 16, "text": " P_{warm} = \\frac{P_{cold}} {\\eta_C \\ \\eta_{p}} " } ]
https://en.wikipedia.org/wiki?curid=15713242
15713616
SUHA (computer science)
In computer science, SUHA (Simple Uniform Hashing Assumption) is a basic assumption that facilitates the mathematical analysis of hash tables. The assumption states that a hypothetical hashing function will evenly distribute items into the slots of a hash table. Moreover, each item to be hashed has an equal probability of being placed into a slot, regardless of the other elements already placed. This assumption generalizes the details of the hash function and allows for certain assumptions about the stochastic system. Applications. SUHA is most commonly used as a foundation for mathematical proofs describing the properties and behavior of hash tables in theoretical computer science. Minimizing hashing collisions can be achieved with a uniform hashing function. These functions often rely on the specific input data set and can be quite difficult to implement. Assuming uniform hashing allows hash table analysis to be made without exact knowledge of the input or the hash function used. Mathematical implications. Certain properties of hash tables can be derived once uniform hashing is assumed. Uniform distribution. Under the assumption of uniform hashing, given a hash function h, and a hash table of size m, the probability that two non-equal elements will hash to the same slot is formula_0 Collision chain length. Under the assumption of uniform hashing, the load factor formula_1 and the average chain length of a hash table of size m with n elements will be formula_2 Successful lookup. Under the assumption of uniform hashing, the average time (in big-O notation) to successfully find an element in a hash table using chaining is formula_3 Unsuccessful lookup. Under the assumption of uniform hashing, the average time (in big-O notation) to unsuccessfully find an element in a hash table using chaining is formula_3 Example. A simple example of using SUHA can be seen while observing an arbitrary hash table of size 10 and a data set of 30 unique elements. If chaining is used to deal with collisions, the average chain length of this hash table may be a desirable value. Without any assumptions and with no more additional information about the data or hash function, the chain length cannot be estimated. With SUHA however, we can state that because of an assumed uniform hashing, each element has an equal probability of mapping to a slot. Since no particular slot should be favored over another, the 30 elements should hash into the 10 slots uniformly. This will produce a hash table with, on average, 10 chains each of length 3 formula_2 formula_4 formula_5
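The example can also be illustrated by a small simulation. The sketch below is a toy, with arbitrary names and trial counts: each item is placed in a uniformly random slot, which is exactly what SUHA assumes, and the run reports the average chain length together with the empirical probability that two particular items collide.

```python
import random

def simulate_suha(n_items, n_slots, trials=10_000, seed=1):
    """Monte-Carlo illustration of simple uniform hashing with chaining:
    every item lands in a uniformly random slot, independently of the rest."""
    rng = random.Random(seed)
    mean_chain = 0.0
    pair_collisions = 0
    for _ in range(trials):
        chains = [0] * n_slots
        for _ in range(n_items):
            chains[rng.randrange(n_slots)] += 1   # place one item
        mean_chain += sum(chains) / n_slots       # load factor alpha = n/m
        # probability that two particular (non-equal) items share a slot:
        pair_collisions += rng.randrange(n_slots) == rng.randrange(n_slots)
    return mean_chain / trials, pair_collisions / trials

print(simulate_suha(30, 10))   # (3.0, ~0.1): ten chains of average length 3, P = 1/m
```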
[ { "math_id": 0, "text": "P(h(a) = h(b)) = \\frac{1}{m}." }, { "math_id": 1, "text": "\\alpha" }, { "math_id": 2, "text": "\\alpha = \\tfrac{n}{m}" }, { "math_id": 3, "text": "O(\\alpha + 1)\\," }, { "math_id": 4, "text": "\\alpha = \\tfrac{30}{10}" }, { "math_id": 5, "text": "\\alpha = 3\\," } ]
https://en.wikipedia.org/wiki?curid=15713616
1571448
Gimel
Third letter of many Semitic alphabets Gimel is the third (in alphabetical order; fifth in spelling order) letter of the Semitic abjads, including Phoenician "gīml" 𐤂, Hebrew "gīmel" &lt;templatestyles src="Script/styles_hebrew.css" /&gt;ג‎, Aramaic "gāmal" 𐡂, Syriac "gāmal" ܓ, and Arabic "ǧīm" ج‎ . Its sound value in the original Phoenician and in all derived alphabets, except Arabic, is a voiced velar plosive ; in Modern Standard Arabic, it represents either a or for most Arabic speakers except in Northern Egypt, the southern parts of Yemen and some parts of Oman where it is pronounced as the voiced velar plosive (see below). In its Proto-Canaanite form, the letter may have been named after a weapon that was either a staff sling or a throwing stick (spear thrower), ultimately deriving from a Proto-Sinaitic glyph based on the hieroglyph below: T14 The Phoenician letter gave rise to the Greek gamma (Γ), the Latin C, G, Ɣ and Ȝ, and the Cyrillic Г, Ґ, and Ғ. Arabic ǧīm. The Arabic letter is named "" ج . It is written in several ways depending on its position in the word: !scope="row" style="line-height:1.5;text-align:left"|Position in word !scope="col"|Isolated !scope="col"|Final !scope="col"|Medial !scope="col"|Initial !scope="row" style="text-align:left"|Glyph form:() Pronunciation. In all varieties of Arabic, cognate words will have consistent differences in pronunciation of the letter. The standard pronunciation taught outside the Arabic speaking world is an affricate , which was the agreed-upon pronunciation by the end of the nineteenth century to recite the Qur'an. It is pronounced as a fricative in most of Northern Africa and the Levant, and is the prestigious and most common pronunciation in Egypt, which is also found in Southern Arabian Peninsula. Differences in pronunciation occur because readers of Modern Standard Arabic pronounce words following their native dialects. Egyptians always use the letter to represent as well as in names and loanwords, such as "golf". However, may be used in Egypt to transcribe ~ (normally pronounced ) or if there is a need to distinguish them completely, then is used to represent , which is also a proposal for Mehri and Soqotri languages. Historical pronunciation. While in most Semitic languages, e.g. Aramaic, Hebrew, Ge'ez, Old South Arabian the equivalent letter represents a , Arabic is considered unique among them where the "Jīm" ⟨ج⟩ was palatalized to an affricate or a fricative in most dialects from classical times. While there is variation in Modern Arabic varieties, most of them reflect this palatalized pronunciation except in coastal Yemeni and Omani dialects as well as in Egypt, where it is pronounced . It is not well known when palatalization occurred or the probability of it being connected to the pronunciation of "Qāf" ⟨ق⟩ as a , but in most of the Arabian peninsula (Saudi Arabia, Kuwait, Qatar, Bahrain, UAE and parts of Yemen and Oman), the ⟨⟩ represents a and ⟨⟩ represents a , except in coastal Yemen and southern Oman where ⟨⟩ represents a and ⟨⟩ represents a , which shows a strong correlation between the palatalization of ⟨⟩ to and the pronunciation of the ⟨⟩ as a as shown in the table below: Notes: Variant. A variant letter named "che" is used in Persian, with three dots below instead having just one dot below. However, it is not included on one of the 28 letters on the Arabic alphabet. 
It is thus written as: !scope="row" style="line-height:1.5;text-align:left"|Position in word !scope="col"|Isolated !scope="col"|Final !scope="col"|Medial !scope="col"|Initial !scope="row" style="text-align:left"|Glyph form:() This form is used to denote four letters, the other three being xe, jim, and he. Hebrew gimel. Variations. Hebrew spelling: Bertrand Russell posits that the letter's form is a conventionalized image of a camel. The letter may be the shape of the walking animal's head, neck, and forelegs. Barry B. Powell, a specialist in the history of writing, states “It is hard to imagine how gimel = ‘camel’ can be derived from the picture of a camel (it may show his hump, or his head and neck!)”. Gimel is one of the six letters which can receive a dagesh qal. The two functions of dagesh are distinguished as either qal (light) or hazaq (strong). The six letters that can receive a dagesh qal are bet, gimel, daled, kaph, pe, and taf. Three of them (bet, kaph, and pe) have their sound value changed in modern Hebrew from the fricative to the plosive by adding a dagesh. The other three represent the same pronunciation in modern Hebrew, but have had alternate pronunciations at other times and places. They are essentially pronounced in the fricative as ג gh غ, dh ذ and th ث. In the Temani pronunciation, gimel represents , , or when with a dagesh, and without a dagesh. In modern Hebrew, the combination &lt;templatestyles src="Script/styles_hebrew.css" /&gt;ג׳‎ (gimel followed by a geresh) is used in loanwords and foreign names to denote . Significance. In gematria, gimel represents the number three. It is written like a "vav" with a "yud" as a "foot", and is traditionally believed to resemble a person in motion; symbolically, a rich man running after a poor man to give him charity. In the Hebrew alphabet "gimel" directly precedes "dalet", which signifies a poor or lowly man, given its similarity to the Hebrew word "dal" (b. "Shabbat", 104a). Gimel is also one of the seven letters which receive special crowns (called "tagin") when written in a Sefer Torah. See "shin", "ayin", "teth", "nun", "zayin", and "tsadi". The letter gimel is the electoral symbol for the United Torah Judaism party, and the party is often nicknamed "Gimmel". In Modern Hebrew, the frequency of usage of gimel, out of all the letters, is 1.26%. Syriac gamal/gomal. In the Syriac alphabet, the third letter is — Gamal in eastern pronunciation, Gomal in western pronunciation (). It is one of six letters that represent two associated sounds (the others are Bet, Dalet, Kaph, Pe and Taw). When Gamal/Gomal has a hard pronunciation ("qûššāyâ ") it represents , like "goat". When Gamal/Gomal has a soft pronunciation ("rûkkāḵâ ") it traditionally represents (), or "Ghamal/Ghomal". The letter, renamed "Jamal/Jomal", is written with a tilde/tie either below or within it to represent the borrowed phoneme (), which is used in Garshuni and some Neo-Aramaic languages to write loan and foreign words from Arabic or Persian. Other uses. Mathematics. The serif form formula_0 of the Hebrew letter gimel is occasionally used for the gimel function in mathematics. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\gimel" } ]
https://en.wikipedia.org/wiki?curid=1571448
15714607
Varimax rotation
Concept in statistics In statistics, a varimax rotation is used to simplify the expression of a particular sub-space in terms of just a few major items each. The actual coordinate system is unchanged; it is the orthogonal basis that is being rotated to align with those coordinates. The sub-space found with principal component analysis or factor analysis is expressed as a dense basis with many non-zero weights which makes it hard to interpret. Varimax is so called because it maximizes the sum of the variances of the squared loadings (squared correlations between variables and factors). Preserving orthogonality requires that it is a rotation that leaves the sub-space invariant. Intuitively, this is achieved if (a) any given variable has a high loading on a single factor but near-zero loadings on the remaining factors and if (b) any given factor is constituted by only a few variables with very high loadings on this factor while the remaining variables have near-zero loadings on this factor. If these conditions hold, the factor loading matrix is said to have "simple structure," and varimax rotation brings the loading matrix closer to such simple structure (as much as the data allow). From the perspective of individuals measured on the variables, varimax seeks a basis that most economically represents each individual; that is, each individual can be well described by a linear combination of only a few basis functions. One way of expressing the varimax criterion formally is this: formula_0 Suggested by Henry Felix Kaiser in 1958, it is a popular scheme for orthogonal rotation (where all factors remain uncorrelated with one another). Rotation in factor analysis. A summary of the use of varimax rotation and of other types of factor rotation is presented in this article on factor analysis.
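The criterion can be maximized with a short iterative procedure based on the singular value decomposition. The NumPy sketch below is illustrative only; its function name, convergence settings and toy loading matrix are arbitrary choices rather than part of any particular statistical package.

```python
import numpy as np

def varimax(Phi, gamma=1.0, max_iter=100, tol=1e-6):
    """Rotate a p x k loading matrix Phi by an orthogonal matrix R chosen to
    (approximately) maximize the varimax criterion; returns Phi @ R."""
    p, k = Phi.shape
    R = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        L = Phi @ R
        # SVD of the derivative of the criterion with respect to the rotation
        u, s, vt = np.linalg.svd(
            Phi.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0)))
        )
        R = u @ vt
        d = np.sum(s)
        if d_old != 0.0 and d / d_old < 1 + tol:
            break
        d_old = d
    return Phi @ R

# Start from a matrix with simple structure, mix it with a 45-degree rotation,
# and check that varimax recovers the simple structure (up to sign and order).
B = np.array([[0.9, 0.0], [0.8, 0.1], [0.0, 0.9], [0.1, 0.8]])
theta = np.pi / 4
mix = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
print(varimax(B @ mix).round(2))   # each row again loads mainly on a single factor
```

Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. This article incorporates public domain material from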
[ { "math_id": 0, "text": " R_\\mathrm{VARIMAX} = \\operatorname{arg}\\max_R \\left(\\frac{1}{p}\\sum_{j=1}^k \\sum_{i=1}^p (\\Lambda R)^4_{ij} - \\sum_{j=1}^k \\left(\\frac{1}{p}\\sum_{i=1}^p (\\Lambda R)^2_{ij}\\right)^2\\right)." } ]
https://en.wikipedia.org/wiki?curid=15714607
15715699
Cauchy-continuous function
In mathematics, a Cauchy-continuous, or Cauchy-regular, function is a special kind of continuous function between metric spaces (or more general spaces). Cauchy-continuous functions have the useful property that they can always be (uniquely) extended to the Cauchy completion of their domain. Definition. Let formula_0 and formula_1 be metric spaces, and let formula_2 be a function from formula_0 to formula_3 Then formula_4 is Cauchy-continuous if and only if, given any Cauchy sequence formula_5 in formula_6 the sequence formula_7 is a Cauchy sequence in formula_3 Properties. Every uniformly continuous function is also Cauchy-continuous. Conversely, if the domain formula_0 is totally bounded, then every Cauchy-continuous function is uniformly continuous. More generally, even if formula_0 is not totally bounded, a function on formula_0 is Cauchy-continuous if and only if it is uniformly continuous on every totally bounded subset of formula_8 Every Cauchy-continuous function is continuous. Conversely, if the domain formula_0 is complete, then every continuous function is Cauchy-continuous. More generally, even if formula_0 is not complete, as long as formula_1 is complete, then any Cauchy-continuous function from formula_0 to formula_1 can be extended to a continuous (and hence Cauchy-continuous) function defined on the Cauchy completion of formula_9 this extension is necessarily unique. Combining these facts, if formula_0 is compact, then continuous maps, Cauchy-continuous maps, and uniformly continuous maps on formula_0 are all the same. Examples and non-examples. Since the real line formula_10 is complete, continuous functions on formula_10 are Cauchy-continuous. On the subspace formula_11 of rational numbers, however, matters are different. For example, define a two-valued function so that formula_12 is formula_13 when formula_14 is less than formula_15 but formula_16 when formula_14 is greater than formula_17 (Note that formula_14 is never equal to formula_15 for any rational number formula_18) This function is continuous on formula_11 but not Cauchy-continuous, since it cannot be extended continuously to formula_19 On the other hand, any uniformly continuous function on formula_11 must be Cauchy-continuous. For a non-uniform example on formula_20 let formula_12 be formula_21; this is not uniformly continuous (on all of formula_11), but it is Cauchy-continuous. (This example works equally well on formula_19) A Cauchy sequence formula_22 in formula_1 can be identified with a Cauchy-continuous function from formula_23 to formula_24 defined by formula_25 If formula_1 is complete, then this can be extended to formula_26 formula_12 will be the limit of the Cauchy sequence. Generalizations. Cauchy continuity makes sense in situations more general than metric spaces, but then one must move from sequences to nets (or equivalently filters). The definition above applies, as long as the Cauchy sequence formula_5 is replaced with an arbitrary Cauchy net. Equivalently, a function formula_4 is Cauchy-continuous if and only if, given any Cauchy filter formula_27 on formula_6 then formula_28 is a Cauchy filter base on formula_3 This definition agrees with the above on metric spaces, but it also works for uniform spaces and, most generally, for Cauchy spaces. Any directed set formula_29 may be made into a Cauchy space. 
Then given any space formula_24 the Cauchy nets in formula_1 indexed by formula_29 are the same as the Cauchy-continuous functions from formula_29 to formula_3 If formula_1 is complete, then the extension of the function to formula_30 will give the value of the limit of the net. (This generalizes the example of sequences above, where 0 is to be interpreted as formula_31)
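The two-valued function on the rationals described in the examples section can be explored with exact rational arithmetic. The sketch below is illustrative only; it uses the continued-fraction convergents of the square root of 2, which form a Cauchy sequence of rationals landing alternately below and above it.

```python
from fractions import Fraction

def f(x):
    """0 below sqrt(2), 1 above it; x is rational, so x*x is never exactly 2."""
    return 0 if x * x < 2 else 1

# Convergents of the continued fraction of sqrt(2): 1, 3/2, 7/5, 17/12, 41/29, ...
# They form a Cauchy sequence in the rationals, alternating around sqrt(2).
p, q = 1, 1
convergents = []
for _ in range(10):
    convergents.append(Fraction(p, q))
    p, q = p + 2 * q, p + q

print([float(x) for x in convergents])   # 1.0, 1.5, 1.4, 1.4167, ... -> 1.41421...
print([f(x) for x in convergents])       # 0, 1, 0, 1, ...: not a Cauchy sequence, so
                                         # f is continuous but not Cauchy-continuous
```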
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "Y" }, { "math_id": 2, "text": "f : X \\to Y" }, { "math_id": 3, "text": "Y." }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "\\left(x_1, x_2, \\ldots\\right)" }, { "math_id": 6, "text": "X," }, { "math_id": 7, "text": "\\left(f\\left(x_1\\right), f\\left(x_2\\right), \\ldots\\right)" }, { "math_id": 8, "text": "X." }, { "math_id": 9, "text": "X;" }, { "math_id": 10, "text": "\\R" }, { "math_id": 11, "text": "\\Q" }, { "math_id": 12, "text": "f(x)" }, { "math_id": 13, "text": "0" }, { "math_id": 14, "text": "x^2" }, { "math_id": 15, "text": "2" }, { "math_id": 16, "text": "1" }, { "math_id": 17, "text": "2." }, { "math_id": 18, "text": "x." }, { "math_id": 19, "text": "\\R." }, { "math_id": 20, "text": "\\Q," }, { "math_id": 21, "text": "2^x" }, { "math_id": 22, "text": "\\left(y_1, y_2, \\ldots\\right)" }, { "math_id": 23, "text": "\\left\\{1, 1/2, 1/3, \\ldots\\right\\}" }, { "math_id": 24, "text": "Y," }, { "math_id": 25, "text": "f\\left(1/n\\right) = y_n." }, { "math_id": 26, "text": "\\left\\{1, 1/2, 1/3, \\ldots\\right\\};" }, { "math_id": 27, "text": "\\mathcal{F}" }, { "math_id": 28, "text": "f(\\mathcal{F})" }, { "math_id": 29, "text": "A" }, { "math_id": 30, "text": "A \\cup \\{\\infty\\}" }, { "math_id": 31, "text": "\\frac{1}{\\infty}." } ]
https://en.wikipedia.org/wiki?curid=15715699
157178
Van der Waerden's theorem
Theorem in Ramsey theory Van der Waerden's theorem is a theorem in the branch of mathematics called Ramsey theory. Van der Waerden's theorem states that for any given positive integers "r" and "k", there is some number "N" such that if the integers {1, 2, ..., "N"} are colored, each with one of "r" different colors, then there are at least "k" integers in arithmetic progression whose elements are of the same color. The least such "N" is the Van der Waerden number "W"("r", "k"), named after the Dutch mathematician B. L. van der Waerden. This was conjectured by Pierre Joseph Henry Baudet in 1921. Van der Waerden heard of it in 1926 and published his proof in 1927, titled "Beweis einer Baudetschen Vermutung [Proof of Baudet's conjecture]". Example. For example, when "r" = 2, you have two colors, say red and blue. "W"(2, 3) is bigger than 8, because you can color the integers from {1, ..., 8} like this: 1, 4, 5, and 8 blue; 2, 3, 6, and 7 red. No three integers of the same color then form an arithmetic progression. But you can't add a ninth integer to the end without creating such a progression. If you add a red 9, then the red 3, 6, and 9 are in arithmetic progression. Alternatively, if you add a blue 9, then the blue 1, 5, and 9 are in arithmetic progression. In fact, there is no way of coloring 1 through 9 without creating such a progression (this can be verified by checking every possible coloring). Therefore, "W"(2, 3) is 9. Open problem. It is an open problem to determine the values of "W"("r", "k") for most values of "r" and "k". The proof of the theorem provides only an upper bound. For the case of "r" = 2 and "k" = 3, for example, the argument given below shows that it is sufficient to color the integers {1, ..., 325} with two colors to guarantee there will be a single-colored arithmetic progression of length 3. But in fact, the bound of 325 is very loose; the minimum required number of integers is only 9. Any coloring of the integers {1, ..., 9} will have three evenly spaced integers of one color. For "r" = 3 and "k" = 3, the bound given by the theorem is 7(2·37 + 1)(2·37·(2·37 + 1) + 1), or approximately 4.22·1014616. But actually, you don't need that many integers to guarantee a single-colored progression of length 3; you only need 27. (And it is possible to color {1, ..., 26} with three colors so that there is no single-colored arithmetic progression of length 3.) An open problem is the attempt to reduce the general upper bound to any 'reasonable' function. Ronald Graham offered a prize of US$1000 for showing "W"(2, "k") &lt; 2"k"2. In addition, he offered a US$250 prize for a proof of his conjecture involving more general "off-diagonal" van der Waerden numbers, stating "W"(2; 3, "k") ≤ "k""O(1)", while mentioning numerical evidence suggests "W"(2; 3, "k") = "k"2 + "o(1)". Ben Green disproved this latter conjecture and proved super-polynomial counterexamples to "W"(2; 3, "k") &lt; "k"r for any "r". The best upper bound currently known is due to Timothy Gowers, who establishes formula_0 by first establishing a similar result for Szemerédi's theorem, which is a stronger version of Van der Waerden's theorem. The previously best-known bound was due to Saharon Shelah and proceeded via first proving a result for the Hales–Jewett theorem, which is another strengthening of Van der Waerden's theorem. The best lower bound currently known for formula_1 is that for all positive formula_2 we have formula_3, for all sufficiently large formula_4. Proof of Van der Waerden's theorem (in a special case). The following proof is due to Ron Graham, B.L.
Rothschild, and Joel Spencer. Khinchin gives a fairly simple proof of the theorem without estimating "W"("r", "k"). Proof in the case of "W"(2, 3). We will prove the special case mentioned above, that "W"(2, 3) ≤ 325. Let "c"("n") be a coloring of the integers {1, ..., 325}. We will find three elements of {1, ..., 325} in arithmetic progression that are the same color. Divide {1, ..., 325} into the 65 blocks {1, ..., 5}, {6, ..., 10}, ... {321, ..., 325}, thus each block is of the form {5"b" + 1, ..., 5"b" + 5} for some "b" in {0, ..., 64}. Since each integer is colored either red or blue, each block is colored in one of 32 different ways. By the pigeonhole principle, there are two blocks among the first 33 blocks that are colored identically. That is, there are two integers "b"1 and "b"2, both in {0, ..., 32}, such that "c"(5"b"1 + "k") = "c"(5"b"2 + "k") for all "k" in {1, ..., 5}. Among the three integers 5"b"1 + 1, 5"b"1 + 2, 5"b"1 + 3, there must be at least two that are of the same color. (The pigeonhole principle again.) Call these 5"b"1 + "a"1 and 5"b"1 + "a"2, where the "a""i" are in {1,2,3} and "a"1 &lt; "a"2. Suppose (without loss of generality) that these two integers are both red. (If they are both blue, just exchange 'red' and 'blue' in what follows.) Let "a"3 = 2"a"2 − "a"1. If 5"b"1 + "a"3 is red, then we have found our arithmetic progression: 5"b"1 + "a""i" are all red. Otherwise, 5"b"1 + "a"3 is blue. Since "a"3 ≤ 5, 5"b"1 + "a"3 is in the "b"1 block, and since the "b"2 block is colored identically, 5"b"2 + "a"3 is also blue. Now let "b"3 = 2"b"2 − "b"1. Then "b"3 ≤ 64. Consider the integer 5"b"3 + "a"3, which must be ≤ 325. What color is it? If it is red, then 5"b"1 + "a"1, 5"b"2 + "a"2, and 5"b"3 + "a"3 form a red arithmetic progression. But if it is blue, then 5"b"1 + "a"3, 5"b"2 + "a"3, and 5"b"3 + "a"3 form a blue arithmetic progression. Either way, we are done. Proof in the case of "W"(3, 3). A similar argument can be advanced to show that "W"(3, 3) ≤ 7(2·3^7 + 1)(2·3^(7(2·3^7 + 1)) + 1). One begins by dividing the integers into 2·3^(7(2·3^7 + 1)) + 1 groups of 7(2·3^7 + 1) integers each; of the first 3^(7(2·3^7 + 1)) + 1 groups, two must be colored identically. Divide each of these two groups into 2·3^7 + 1 subgroups of 7 integers each; of the first 3^7 + 1 subgroups in each group, two of the subgroups must be colored identically. Within each of these identical subgroups, two of the first four integers must be the same color, say red; this implies either a red progression or an element of a different color, say blue, in the same subgroup. Since we have two identically-colored subgroups, there is a third subgroup, still in the same group that contains an element which, if either red or blue, would complete a red or blue progression, by a construction analogous to the one for "W"(2, 3). Suppose that this element is green. Since there is a group that is colored identically, it must contain copies of the red, blue, and green elements we have identified; we can now find a pair of red elements, a pair of blue elements, and a pair of green elements that 'focus' on the same integer, so that whatever color it is, it must complete a progression. Proof in general case. The proof for "W"(2, 3) depends essentially on proving that "W"(32, 2) ≤ 33. We divide the integers {1, ..., 325} into 65 'blocks', each of which can be colored in 32 different ways, and then show that two blocks of the first 33 must be the same color, and there is a block colored the opposite way. 
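The small values quoted above can also be confirmed by exhaustive search. The following Python sketch (an illustration added here, not part of the original argument) checks every 2-coloring directly and confirms that "W"(2, 3) = 9:

```python
from itertools import product

def has_mono_ap(coloring, k=3):
    """Return True if some k-term arithmetic progression is single-colored.

    coloring is a tuple whose i-th entry is the color (0 or 1) of integer i+1."""
    n = len(coloring)
    color_of = {i + 1: c for i, c in enumerate(coloring)}
    for a in range(1, n + 1):
        for d in range(1, n):
            terms = [a + i * d for i in range(k)]
            if terms[-1] > n:
                break
            if len({color_of[t] for t in terms}) == 1:
                return True
    return False

def every_coloring_has_mono_ap(n, r=2, k=3):
    """True if every r-coloring of {1,...,n} contains a monochromatic k-term AP."""
    return all(has_mono_ap(c, k) for c in product(range(r), repeat=n))

if __name__ == "__main__":
    print(every_coloring_has_mono_ap(8))  # False: e.g. blue 1,4,5,8 / red 2,3,6,7 avoids one
    print(every_coloring_has_mono_ap(9))  # True: hence W(2, 3) = 9
```

The same brute-force approach becomes hopeless for even slightly larger parameters, which is why block-colouring arguments like the one above are needed.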
Similarly, the proof for "W"(3, 3) depends on proving that formula_5 By a double induction on the number of colors and the length of the progression, the theorem is proved in general. Proof. A "D-dimensional arithmetic progression" (AP) consists of numbers of the form: formula_6 where "a" is the basepoint, the "s"'s are positive step-sizes, and the "i"'s range from 0 to "L" − 1. A "d"-dimensional AP is "homogeneous" for some coloring when it is all the same color. A "D-dimensional arithmetic progression with benefits" is all numbers of the form above, but where you add on some of the "boundary" of the arithmetic progression, i.e. some of the indices "i"'s can be equal to "L". The sides you tack on are ones where the first "k" "i"'s are equal to "L", and the remaining "i"'s are less than "L". The boundaries of a D-dimensional AP with benefits are these additional arithmetic progressions of dimension formula_7, down to 0. The 0-dimensional arithmetic progression is the single point at index value formula_8. A D-dimensional AP with benefits is "homogeneous" when each of the boundaries is individually homogeneous, but different boundaries do not necessarily have to have the same color. Next define the quantity MinN("L", "D", "N") to be the least integer so that any assignment of N colors to an interval of length MinN or more necessarily contains a homogeneous D-dimensional arithmetical progression with benefits. The goal is to bound the size of MinN. Note that MinN("L",1,"N") is an upper bound for Van der Waerden's number. There are two induction steps, as follows: Base case: MinN(1,"d","n") = 1, i.e. if you want a length 1 homogeneous d-dimensional arithmetic sequence, with or without benefits, you have nothing to do. So this forms the base of the induction. The Van der Waerden theorem itself is the assertion that MinN("L",1,"N") is finite, and it follows from the base case and the induction steps. Ergodic theory. Furstenberg and Weiss proved an equivalent form of the theorem in 1978, using ergodic theory. <templatestyles src="Math_theorem/styles.css" /> multiple Birkhoff recurrence theorem (Furstenberg and Weiss, 1978) — If formula_9 is a compact metric space, and formula_10 are homeomorphisms that commute, then formula_11, and an increasing sequence formula_12, such that formula_13 The proof of the above theorem is delicate, and the reader is referred to. With this recurrence theorem, the van der Waerden theorem can be proved in the ergodic-theoretic style.<templatestyles src="Math_theorem/styles.css" /> Theorem (van der Waerden, 1927) — If formula_14 is partitioned into finitely many subsets formula_15, then one of them formula_16 contains infinitely many arithmetic progressions of arbitrary length formula_17 <templatestyles src="Math_proof/styles.css" />Proof It suffices to show that for each length formula_18, there exists at least one partition that contains at least one arithmetic progression of length formula_18. Once this is proved, we can cut out that arithmetic progression into formula_18 singleton sets, and repeat the process to create another arithmetic progression, and so one of the partitions contains infinitely many arithmetic progressions of length formula_18. Once this is proved, we can repeat this process to find that there exists at least one partition that contains infinitely many progressions of length formula_18, for infinitely many formula_18, and that is the partition we want. 
Consider the state space formula_19, with compact metric formula_20 In other words, let formula_21 be the index closest to formula_22 where formula_23 differ, and then their distance is formula_24. (In fact, this is an ultrametric.) Since each integer falls in exactly one of the partitions formula_15, we can code the partition into a sequence formula_25. Each formula_26 is the name of the partition that formula_21 falls in. In other words, we can draw the sets formula_15 horizontally, and connect the dots, into the sequence formula_25. Let the map formula_27 be the shift map: formula_28 and then, let the closure of all shifts of the sequence formula_25 be formula_9: formula_29 By the multiple Birkhoff recurrence theorem, there exists some sequence formula_30 and an integer formula_31 such that formula_32 Since formula_9 is the closure of shifts of formula_33, and formula_34 is continuous, there exists a shift formula_35 such that simultaneously, formula_36 is very close to formula_35, and formula_37 is very close to formula_38, and so on: formula_39 By the triangle inequality, all pairs in the set formula_40 are close to each other: formula_41 which implies formula_42, meaning that the arithmetic sequence formula_43 is in the partition formula_44. Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "W(r,k) \\leq 2^{2^{r^{2^{2^{k + 9}}}}}," }, { "math_id": 1, "text": "W(2, k)" }, { "math_id": 2, "text": "\\varepsilon" }, { "math_id": 3, "text": "W(2, k) > 2^k/k^\\varepsilon" }, { "math_id": 4, "text": "k" }, { "math_id": 5, "text": "W(3^{7(2 \\cdot 3^7+1)},2) \\leq 3^{7(2 \\cdot 3^7+1)}+1." }, { "math_id": 6, "text": " a + i_1 s_1 + i_2 s_2 + \\cdots + i_D s_D " }, { "math_id": 7, "text": "d-1, d-2, d-3, d-4" }, { "math_id": 8, "text": "(L, L, L, L, \\ldots, L)" }, { "math_id": 9, "text": "X" }, { "math_id": 10, "text": "T_1, \\dots, T_N: X \\to X" }, { "math_id": 11, "text": "\\exists x\\in X" }, { "math_id": 12, "text": "n_1 < n_2 < \\cdots" }, { "math_id": 13, "text": "\\lim_{j} d(T_i^{n_j} x, x) = 0, \\quad \\forall i \\in 1:N" }, { "math_id": 14, "text": "\\Z" }, { "math_id": 15, "text": "S_1, \\dots, S_n" }, { "math_id": 16, "text": "S_k" }, { "math_id": 17, "text": "\n\\forall N, N', \\; \\exists |a| \\geq N', \\exists r \\geq 1: \\{a + ir\\}_{i \\in 1:N} \\subset S_k" }, { "math_id": 18, "text": "N" }, { "math_id": 19, "text": "\\Omega := (1:N)^\\Z" }, { "math_id": 20, "text": "\n d( (x_n), (y_n)) = \\max\\{2^{-|n|} : x_n \\neq y_n\\}\n " }, { "math_id": 21, "text": "n" }, { "math_id": 22, "text": "0" }, { "math_id": 23, "text": "x, y" }, { "math_id": 24, "text": "1/2^{|n|}" }, { "math_id": 25, "text": "(z_n)_n" }, { "math_id": 26, "text": "z_n" }, { "math_id": 27, "text": "T: \\Omega \\to \\Omega" }, { "math_id": 28, "text": "T((x_n)_n) = (x_{n+1})_n" }, { "math_id": 29, "text": "\n X := cl(\\{T^n z : n \\in \\Z\\})\n " }, { "math_id": 30, "text": "x \\in X" }, { "math_id": 31, "text": "n \\geq 1" }, { "math_id": 32, "text": "d(T^n x, x) < 1/4, \\quad d(T^{2n} x, x) < 1/4, \\quad \\cdots, \\quad d(T^{Nn} x, x) < 1/4" }, { "math_id": 33, "text": "z" }, { "math_id": 34, "text": "T" }, { "math_id": 35, "text": "T^m z" }, { "math_id": 36, "text": "x" }, { "math_id": 37, "text": "T^n x" }, { "math_id": 38, "text": "T^{n+m} z" }, { "math_id": 39, "text": "d( x, T^{m} z) < 1/4, \\quad d(T^{n} x, T^{m+n} z) < 1/4, \\quad \\cdots, \\quad d(T^{Nn} x, T^{m + Nn} z) < 1/4" }, { "math_id": 40, "text": "\\{T^mz, T^{m+n}z, \\dots, T^{m+Nn}z\\}" }, { "math_id": 41, "text": "\n d(T^{m + in} z, T^{m + jn} z) < 3/4 \\quad \\forall i, j \\in 0:N\n " }, { "math_id": 42, "text": "z_m = z_{m+n} = \\dots = z_{m+Nn}" }, { "math_id": 43, "text": "m, m+n, \\dots, m+Nn" }, { "math_id": 44, "text": "S_{z_m}" } ]
https://en.wikipedia.org/wiki?curid=157178
1571780
Condensation algorithm
The condensation algorithm (Conditional Density Propagation) is a computer vision algorithm. The principal application is to detect and track the contour of objects moving in a cluttered environment. Object tracking is one of the more basic and difficult aspects of computer vision and is generally a prerequisite to object recognition. Being able to identify which pixels in an image make up the contour of an object is a non-trivial problem. Condensation is a probabilistic algorithm that attempts to solve this problem. The algorithm itself is described in detail by Isard and Blake in a publication in the "International Journal of Computer Vision" in 1998. One of the most interesting facets of the algorithm is that it does not compute on every pixel of the image. Rather, pixels to process are chosen at random, and only a subset of the pixels end up being processed. Multiple hypotheses about what is moving are supported naturally by the probabilistic nature of the approach. The evaluation functions come largely from previous work in the area and include many standard statistical approaches. The original part of this work is the application of particle filter estimation techniques. The algorithm’s creation was inspired by the inability of Kalman filtering to perform object tracking well in the presence of significant background clutter. The presence of clutter tends to produce probability distributions for the object state which are multi-modal and therefore poorly modeled by the Kalman filter. The condensation algorithm in its most general form requires no assumptions about the probability distributions of the object or measurements. Algorithm overview. The condensation algorithm seeks to solve the problem of estimating the conformation of an object described by a vector formula_0 at time formula_1, given observations formula_2 of the detected features in the images up to and including the current time. The algorithm outputs an estimate of the state conditional probability density formula_3 by applying a nonlinear filter based on factored sampling and can be thought of as a development of a Monte-Carlo method. formula_3 is a representation of the probability of possible conformations for the objects based on previous conformations and measurements. The condensation algorithm is a generative model since it models the joint distribution of the object state and the observations. The conditional density of the object at the current time formula_3 is estimated as a weighted, time-indexed sample set formula_4 with weights formula_5. N is a parameter determining the number of samples in the sample set. A realization of formula_3 is obtained by sampling with replacement from the set formula_6 with probability equal to the corresponding element of formula_7. The assumptions that object dynamics form a temporal Markov chain and that observations are independent of each other and the dynamics facilitate the implementation of the condensation algorithm. The first assumption allows the dynamics of the object to be entirely determined by the conditional density formula_8. The model of the system dynamics determined by formula_8 must also be selected for the algorithm, and generally includes both deterministic and stochastic dynamics. The algorithm can be summarized by initialization at time formula_9 and three steps at each time "t": Initialization. Form the initial sample set formula_11 and weights formula_12 by sampling according to the prior distribution. For example, specify the prior as a Gaussian and set the weights equal to each other. 
Iterative procedure. At each time step formula_1 three operations are applied to the weighted sample set carried over from the previous step: first, select a new set of N samples by sampling with replacement from the old sample set, choosing each sample with probability equal to its weight; second, predict by propagating each selected sample through the dynamical model formula_8, which yields the new sample set formula_13; third, measure, using the current observation formula_14 to assign the new normalized weights formula_15. This algorithm outputs the probability distribution formula_3 which can be directly used to calculate the mean position of the tracked object, as well as the other moments of the tracked object. Cumulative weights can instead be used to achieve a more efficient sampling. Implementation considerations. Since object-tracking can be a real-time objective, consideration of algorithm efficiency becomes important. The condensation algorithm is relatively simple when compared to the computational intensity of the Riccati equation required for Kalman filtering. The parameter formula_10, which determines the number of samples in the sample set, clearly involves a trade-off between efficiency and performance. One way to increase efficiency of the algorithm is by selecting a low degree of freedom model for representing the shape of the object. The model used by Isard 1998 is a linear parameterization of B-splines in which the splines are limited to certain configurations. Suitable configurations were found by analytically determining combinations of contours from multiple views, of the object in different poses, and through principal component analysis (PCA) on the deforming object. Isard and Blake model the object dynamics formula_8 as a second order difference equation with deterministic and stochastic components: formula_16 where formula_17 is the mean value of the state, and formula_18, formula_19 are matrices representing the deterministic and stochastic components of the dynamical model respectively. formula_18, formula_19, and formula_17 are estimated via Maximum Likelihood Estimation while the object performs typical movements. The observation model formula_20 cannot be directly estimated from the data, requiring assumptions to be made in order to estimate it. Isard 1998 assumes that the clutter which may make the object not visible is a Poisson random process with spatial density formula_21 and that any true target measurement is unbiased and normally distributed with standard deviation formula_22. The basic condensation algorithm is used to track a single object in time. It is possible to extend the condensation algorithm to track multiple objects in a scene at the same time by using a single probability distribution to describe the likely states of the multiple objects. Since clutter can cause the object probability distribution to split into multiple peaks, each peak represents a hypothesis about the object configuration. Smoothing is a statistical technique of conditioning the distribution based on both past and future measurements once the tracking is complete in order to reduce the effects of multiple peaks. Smoothing cannot be directly done in real-time since it requires information of future measurements. Applications. The algorithm can be used for vision-based robot localization of mobile robots. Instead of tracking the position of an object in the scene, however, the position of the camera platform is tracked. This allows the camera platform to be globally localized given a visual map of the environment. Extensions of the condensation algorithm have also been used to recognize human gestures in image sequences. This application of the condensation algorithm impacts the range of human–computer interactions possible. It has been used to recognize simple gestures of a user at a whiteboard to control actions such as selecting regions of the boards to print or save them. Other extensions have also been used for tracking multiple cars in the same scene. 
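As a concrete illustration of the select, predict and measure loop described under "Iterative procedure" above, the following Python sketch implements a generic condensation-style particle filter for a one-dimensional state. It is not the contour-tracking system of Isard and Blake: the linear-Gaussian dynamics, the observation density and all parameter values are placeholder assumptions chosen only to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def condensation_step(samples, weights, z, A=1.0, B=0.5, sigma=1.0):
    """One iteration of the condensation (factored-sampling) loop.

    samples, weights : arrays of length N describing the previous posterior
    z                : the new measurement for this time step
    A, B, sigma      : placeholder dynamics/observation parameters (assumed)
    """
    N = len(samples)
    # 1. Select: resample with replacement, probability proportional to weight.
    picked = rng.choice(samples, size=N, p=weights)
    # 2. Predict: propagate through the (assumed) dynamical model
    #    x_t = A * x_{t-1} + B * noise.
    new_samples = A * picked + B * rng.standard_normal(N)
    # 3. Measure: weight by the (assumed Gaussian) observation density p(z | x).
    likelihood = np.exp(-0.5 * ((z - new_samples) / sigma) ** 2)
    new_weights = likelihood / likelihood.sum()
    return new_samples, new_weights

# Initialization: sample from a Gaussian prior with equal weights.
N = 500
samples = rng.normal(0.0, 2.0, size=N)
weights = np.full(N, 1.0 / N)
for z in [0.2, 0.7, 1.1, 1.6]:          # a made-up measurement sequence
    samples, weights = condensation_step(samples, weights, z)
print("estimated mean state:", np.sum(weights * samples))
```

The weighted mean printed at the end is the kind of moment estimate mentioned above; in a real tracker the state vector would be the B-spline contour parameterization and the observation density would be computed from image measurements.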
The condensation algorithm has also been used for face recognition in a video sequence. Resources. An implementation of the condensation algorithm in C can be found on Michael Isard’s website. An implementation in MATLAB can be found on the Mathworks File Exchange. An example of an implementation using the OpenCV library can be found on the OpenCV forums. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathbf{x_t}" }, { "math_id": 1, "text": "t" }, { "math_id": 2, "text": "\\mathbf{z_1, ..., z_t} " }, { "math_id": 3, "text": "p(\\mathbf{x_t}|\\mathbf{z_1,...,z_t})" }, { "math_id": 4, "text": "\\{s_t^{(n)},n=1,...,N\\}" }, { "math_id": 5, "text": "\\pi_t^{(n)}" }, { "math_id": 6, "text": "s_t" }, { "math_id": 7, "text": "\\pi_t" }, { "math_id": 8, "text": "p(\\mathbf{x_t}|\\mathbf{x_{t-1}})" }, { "math_id": 9, "text": "t = 0" }, { "math_id": 10, "text": "N" }, { "math_id": 11, "text": "\\{s_0^{(n)},n=1,...,N\\}" }, { "math_id": 12, "text": "\\{\\pi_0^{(n)},n=1,...,N\\}" }, { "math_id": 13, "text": "\\{s_t^{(n)}\\}" }, { "math_id": 14, "text": "\\mathbf{z_t}" }, { "math_id": 15, "text": "\\pi_t^{(n)} = \\frac{p(\\mathbf{z_t}|s^{(n)})}{\\sum_{j=1}^N p(\\mathbf{z_t}|s^{(j)})}" }, { "math_id": 16, "text": "p(\\mathbf{x_t}|\\mathbf{x_{t-1}}) \\propto e^{-\\frac{1}{2}||B^{-1}((\\mathbf{x_t}-\\mathbf{\\bar{x}})-A(\\mathbf{x_{t-1}}-\\mathbf{\\bar{x}}))||^2)}" }, { "math_id": 17, "text": "\\mathbf{\\bar{x}}" }, { "math_id": 18, "text": "A" }, { "math_id": 19, "text": "B" }, { "math_id": 20, "text": "p(\\mathbf{z}|\\mathbf{x})" }, { "math_id": 21, "text": "\\lambda" }, { "math_id": 22, "text": "\\sigma" } ]
https://en.wikipedia.org/wiki?curid=1571780
1572078
Kripke–Platek set theory with urelements
System of mathematical set theory The Kripke–Platek set theory with urelements (KPU) is an axiom system for set theory with urelements, based on the traditional (urelement-free) Kripke–Platek set theory. It is considerably weaker than the (relatively) familiar system ZFU. The purpose of allowing urelements is to allow large or high-complexity objects (such as the set of all reals) to be included in the theory's transitive models without disrupting the usual well-ordering and recursion-theoretic properties of the constructible universe; KP is so weak that this is hard to do by traditional means. Preliminaries. The usual way of stating the axioms presumes a two-sorted first-order language formula_0 with a single binary relation symbol formula_1. Letters of the sort formula_2 designate urelements, of which there may be none, whereas letters of the sort formula_3 designate sets. The letters formula_4 may denote both sets and urelements. The letters for sets may appear on both sides of formula_1, while those for urelements may only appear on the left, i.e. the following are examples of valid expressions: formula_5, formula_6. The statement of the axioms also requires reference to a certain collection of formulae called formula_7-formulae. The collection formula_7 consists of those formulae that can be built using the constants, formula_1, formula_8, formula_9, formula_10, and bounded quantification. That is, quantification of the form formula_11 or formula_12 where formula_13 is a given set. Axioms. The axioms of KPU are the universal closures of the following formulae: Extensionality, formula_14; Foundation, an axiom schema giving, for every formula formula_15, the axiom formula_16; Pairing, formula_17; Union, formula_18; formula_7-Separation, for every formula_7 formula formula_15 the axiom formula_19; formula_7-Collection, for every formula_7 formula formula_20 the axiom formula_21; and Set Existence, formula_22. Additional assumptions. Technically these are axioms that describe the partition of objects into sets and urelements: formula_23 (no urelement is equal to a set) and formula_24 (urelements have no elements). Applications. KPU can be applied to the model theory of infinitary languages. Models of KPU considered as sets inside a maximal universe that are transitive as such are called admissible sets.
[ { "math_id": 0, "text": "L^*" }, { "math_id": 1, "text": "\\in" }, { "math_id": 2, "text": "p,q,r,..." }, { "math_id": 3, "text": "a,b,c,..." }, { "math_id": 4, "text": "x,y,z,..." }, { "math_id": 5, "text": "p\\in a" }, { "math_id": 6, "text": "b\\in a" }, { "math_id": 7, "text": "\\Delta_0" }, { "math_id": 8, "text": "\\neg" }, { "math_id": 9, "text": "\\wedge" }, { "math_id": 10, "text": "\\vee" }, { "math_id": 11, "text": "\\forall x \\in a" }, { "math_id": 12, "text": " \\exists x \\in a" }, { "math_id": 13, "text": "a" }, { "math_id": 14, "text": "\\forall x (x \\in a \\leftrightarrow x\\in b)\\rightarrow a=b" }, { "math_id": 15, "text": "\\phi(x)" }, { "math_id": 16, "text": "\\exists a. \\phi(a) \\rightarrow \\exists a (\\phi(a) \\wedge \\forall x\\in a\\,(\\neg \\phi(x)))" }, { "math_id": 17, "text": "\\exists a\\, (x\\in a \\land y\\in a )" }, { "math_id": 18, "text": "\\exists a \\forall c \\in b. \\forall y\\in c (y \\in a)" }, { "math_id": 19, "text": "\\exists a \\forall x \\,(x\\in a \\leftrightarrow x\\in b \\wedge \\phi(x) )" }, { "math_id": 20, "text": "\\phi(x,y)" }, { "math_id": 21, "text": "\\forall x \\in a.\\exists y. \\phi(x,y)\\rightarrow \\exists b\\forall x \\in a.\\exists y\\in b. \\phi(x,y) " }, { "math_id": 22, "text": "\\exists a\\, (a=a)" }, { "math_id": 23, "text": "\\forall p \\forall a (p \\neq a)" }, { "math_id": 24, "text": "\\forall p \\forall x (x \\notin p)" } ]
https://en.wikipedia.org/wiki?curid=1572078
1572371
Allan J. C. Cunningham
British-Indian mathematician Allan Joseph Champneys Cunningham (1842–1928) was a British-Indian mathematician. Biography. Born in Delhi, Cunningham was the son of Sir Alexander Cunningham, archaeologist and the founder of the Archaeological Survey of India. He started a military career with the East India Company's Bengal Engineers at a young age. From 1871 to 1881, he was instructor in mathematics at the Indian Institute Of Technology Roorkee (IIT Roorkee). Upon returning to the United Kingdom in 1881, he continued teaching at military institutes in Chatham, Dublin and Shorncliffe. He left the army in 1891. He spent the rest of his life studying number theory. He applied his expertise to finding factors of large numbers of the form "an" ± "bn", such as Mersenne numbers (formula_0) and Fermat numbers (formula_1) which have "b" = 1. His work is continued in the Cunningham project. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "2^p-1" }, { "math_id": 1, "text": "2^{2^n}+1" } ]
https://en.wikipedia.org/wiki?curid=1572371
1572446
Cunningham Project
Mathematical project in integer factorization The Cunningham Project is a collaborative effort started in 1925 to factor numbers of the form "b"^"n" ± 1 for "b" = 2, 3, 5, 6, 7, 10, 11, 12 and large "n". The project is named after Allan Joseph Champneys Cunningham, who published the first version of the table together with Herbert J. Woodall. There are three printed versions of the table, the most recent published in 2002, as well as an online version by Samuel Wagstaff. The current limits of the exponents are: Factors of Cunningham number. Two types of factors can be derived from a Cunningham number without having to use a factorization algorithm: algebraic factors of binomial numbers (e.g. difference of two squares and sum of two cubes), which depend on the exponent, and aurifeuillean factors, which depend on both the base and the exponent. Algebraic factors. From elementary algebra, formula_0 for all "k", and formula_1 for odd "k". In addition, "b"^(2"n") − 1 = ("b"^"n" − 1)("b"^"n" + 1). Thus, when "m" divides "n", "b"^"m" − 1 and "b"^"m" + 1 are factors of "b"^"n" − 1 if the quotient of "n" over "m" is even; only the first number is a factor if the quotient is odd. "b"^"m" + 1 is a factor of "b"^"n" + 1, if "m" divides "n" and the quotient is odd. In fact, formula_2 and formula_3 See this page for more information. Aurifeuillean factors. When the number is of a particular form (the exact expression varies with the base), aurifeuillean factorization may be used, which gives a product of two or three numbers. The following equations give aurifeuillean factors for the Cunningham project bases as a product of "F", "L" and "M": Let "b" = "s"^2 × "k" with squarefree "k"; if one of the conditions holds, then formula_4 have aurifeuillean factorization. (i) formula_5 and formula_6 (ii) formula_7 and formula_8 Other factors. Once the algebraic and aurifeuillean factors are removed, the other factors of "b"^"n" ± 1 are always of the form 2"kn" + 1, since the factors of "b"^"n" − 1 are all factors of formula_4, and the factors of "b"^"n" + 1 are all factors of formula_9. When "n" is prime, both algebraic and aurifeuillean factors are not possible, except the trivial factors ("b" − 1 for "b"^"n" − 1 and "b" + 1 for "b"^"n" + 1). For Mersenne numbers, the trivial factors are not possible for prime "n", so all factors are of the form 2"kn" + 1. In general, all factors of ("b"^"n" − 1)/("b" − 1) are of the form 2"kn" + 1, where "b" ≥ 2 and "n" is prime, except when "n" divides "b" − 1, in which case ("b"^"n" − 1)/("b" − 1) is divisible by "n" itself. Cunningham numbers of the form "b"^"n" − 1 can only be prime if "b" = 2 and "n" is prime, assuming that "n" ≥ 2; these are the Mersenne numbers. Numbers of the form "b"^"n" + 1 can only be prime if "b" is even and "n" is a power of 2, again assuming "n" ≥ 2; these are the generalized Fermat numbers, which are Fermat numbers when "b" = 2. Any factor of a Fermat number 2^(2^"n") + 1 is of the form "k"·2^("n"+2) + 1. Notation. "b"^"n" − 1 is denoted as "b","n"−. Similarly, "b"^"n" + 1 is denoted as "b","n"+. When dealing with numbers of the form required for aurifeuillean factorization, "b","n"L and "b","n"M are used to denote L and M in the products above. References to "b","n"− and "b","n"+ are to the number with all algebraic and aurifeuillean factors removed. For example, Mersenne numbers are of the form 2,"n"− and Fermat numbers are of the form 2,2^"n"+; the number Aurifeuille factored in 1871 was the product of 2,58L and 2,58M. References. 
<templatestyles src="Reflist/styles.css" />
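As an illustration of the algebraic factors and the 2"kn" + 1 form discussed above, the following Python sketch (not part of the project's own tables or software; it assumes the SymPy library is available) splits b^n − 1 into its cyclotomic factors and checks the form of the prime factors of (b^n − 1)/(b − 1) for an odd prime n:

```python
from sympy import cyclotomic_poly, divisors, factorint, isprime

def algebraic_factors(b, n):
    """Split b**n - 1 into the cyclotomic factors Phi_d(b) for each d dividing n."""
    return {d: int(cyclotomic_poly(d, b)) for d in divisors(n)}

def check_factor_form(b, n):
    """For prime n, check that every prime factor q of (b**n - 1)//(b - 1)
    is either n itself (the exceptional case) or of the form 2*k*n + 1."""
    assert isprime(n)
    value = (b**n - 1) // (b - 1)
    return all(q == n or q % (2 * n) == 1 for q in factorint(value))

if __name__ == "__main__":
    print(algebraic_factors(2, 12))   # the product of Phi_d(2) over d | 12 is 2**12 - 1 = 4095
    print(check_factor_form(2, 11))   # 2047 = 23 * 89, both of the form 22k + 1
    print(check_factor_form(7, 5))    # 2801 is prime and equals 2*280*5 + 1
```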
[ { "math_id": 0, "text": "(b^{kn}-1) = (b^n-1) \\sum_{r=0}^{k-1} b^{rn}" }, { "math_id": 1, "text": "(b^{kn}+1) = (b^n+1) \\sum_{r=0}^{k-1} (-1)^r \\cdot b^{rn}" }, { "math_id": 2, "text": "b^n-1 = \\prod_{d \\mid n} \\Phi_d(b)" }, { "math_id": 3, "text": "b^n+1 = \\prod_{d \\mid 2n,\\, d \\nmid n} \\Phi_d(b)" }, { "math_id": 4, "text": "\\Phi_n(b)" }, { "math_id": 5, "text": "k\\equiv 1 \\mod 4" }, { "math_id": 6, "text": "n\\equiv k \\pmod{2k};" }, { "math_id": 7, "text": "k\\equiv 2, 3 \\pmod 4" }, { "math_id": 8, "text": "n\\equiv 2k \\pmod{4k}." }, { "math_id": 9, "text": "\\Phi_{2n}(b)" } ]
https://en.wikipedia.org/wiki?curid=1572446
15730240
Haller index
The Haller index, created in 1987 by J. Alex Haller, S. S. Kramer, and S. A. Lietman, is a mathematical relationship that exists in a human chest section observed with a CT scan. It is defined as the ratio of the transverse diameter (the horizontal distance of the inside of the ribcage) and the anteroposterior diameter (the shortest distance between the vertebrae and sternum). formula_0 where: HI is the Haller Index "distance 1" is the distance of the inside ribcage (at the level of maximum deformity or at the lower third of the sternum) "distance 2" is the distance between the sternal notch and vertebrae. More recent studies show that simple chest x-rays are just as effective as CT scans for calculating the Haller index and recommend replacing CT scans with CXR to reduce radiation exposure in all but gross deformities. A normal Haller index should be about 2.5. Chest wall deformities such as pectus excavatum can cause the sternum to invert, thus increasing the index. In severe asymmetric cases, where the sternum dips below the level of the vertebra, the index can be a negative value. Sources. &lt;templatestyles src="Reflist/styles.css" /&gt;
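A minimal sketch of the computation described by formula_0, with made-up measurements chosen only for illustration:

```python
def haller_index(transverse_mm: float, anteroposterior_mm: float) -> float:
    """Ratio of the inside transverse diameter of the ribcage to the
    sternum-to-vertebrae (anteroposterior) distance, in the same units."""
    return transverse_mm / anteroposterior_mm

# Example with hypothetical measurements: a normal chest gives a value near 2.5.
print(haller_index(transverse_mm=250.0, anteroposterior_mm=100.0))  # 2.5
```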
[ { "math_id": 0, "text": "\\ HI = \\frac {\\text{distance 1}}{\\text{distance 2}}" } ]
https://en.wikipedia.org/wiki?curid=15730240
15732228
Stress functions
In linear elasticity, the equations describing the deformation of an elastic body subject only to surface forces (or body forces that could be expressed as potentials) on the boundary are (using index notation) the equilibrium equation: formula_0 where formula_1 is the stress tensor, and the Beltrami-Michell compatibility equations: formula_2 A general solution of these equations may be expressed in terms of the Beltrami stress tensor. Stress functions are derived as special cases of this Beltrami stress tensor which, although less general, sometimes will yield a more tractable method of solution for the elastic equations. Beltrami stress functions. It can be shown that a complete solution to the equilibrium equations may be written as formula_3 Using index notation: formula_4 where formula_5 is an arbitrary second-rank tensor field that is at least twice differentiable, and is known as the "Beltrami stress tensor". Its components are known as Beltrami stress functions. formula_6 is the Levi-Civita pseudotensor, with all values equal to zero except those in which the indices are not repeated. For a set of non-repeating indices the component value will be +1 for even permutations of the indices, and -1 for odd permutations. And formula_7 is the Nabla operator. For the Beltrami stress tensor to satisfy the Beltrami-Michell compatibility equations in addition to the equilibrium equations, it is further required that formula_5 is at least four times continuously differentiable. Maxwell stress functions. The Maxwell stress functions are defined by assuming that the Beltrami stress tensor formula_5 is restricted to be of the form. formula_8 The stress tensor which automatically obeys the equilibrium equation may now be written as: The solution to the elastostatic problem now consists of finding the three stress functions which give a stress tensor which obeys the Beltrami–Michell compatibility equations for stress. Substituting the expressions for the stress into the Beltrami–Michell equations yields the expression of the elastostatic problem in terms of the stress functions: formula_9 These must also yield a stress tensor which obeys the specified boundary conditions. Airy stress function. The Airy stress function is a special case of the Maxwell stress functions, in which it is assumed that A=B=0 and C is a function of x and y only. This stress function can therefore be used only for two-dimensional problems. In the elasticity literature, the stress function formula_10 is usually represented by formula_11 and the stresses are expressed as formula_12 Where formula_13 and formula_14 are values of body forces in relevant direction. In polar coordinates the expressions are: formula_15 Morera stress functions. The Morera stress functions are defined by assuming that the Beltrami stress tensor formula_5 tensor is restricted to be of the form formula_16 The solution to the elastostatic problem now consists of finding the three stress functions which give a stress tensor which obeys the Beltrami-Michell compatibility equations. Substituting the expressions for the stress into the Beltrami-Michell equations yields the expression of the elastostatic problem in terms of the stress functions: Prandtl stress function. The Prandtl stress function is a special case of the Morera stress functions, in which it is assumed that A=B=0 and C is a function of x and y only. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
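The defining property of a stress function, namely that the stresses derived from it satisfy the equilibrium equations identically, can be checked symbolically. The following Python sketch (an illustration using the SymPy library, not part of the original text) does this for an arbitrary Airy stress function in two dimensions with the body forces set to zero:

```python
import sympy as sp

x, y = sp.symbols("x y")
phi = sp.Function("phi")(x, y)   # an arbitrary, sufficiently smooth Airy stress function

# Stresses derived from the Airy stress function (body forces taken as zero).
sigma_x = sp.diff(phi, y, 2)
sigma_y = sp.diff(phi, x, 2)
sigma_xy = -sp.diff(phi, x, y)

# Two-dimensional equilibrium: d(sigma_x)/dx + d(sigma_xy)/dy = 0 and
# d(sigma_xy)/dx + d(sigma_y)/dy = 0, which hold for any phi.
eq1 = sp.simplify(sp.diff(sigma_x, x) + sp.diff(sigma_xy, y))
eq2 = sp.simplify(sp.diff(sigma_xy, x) + sp.diff(sigma_y, y))
print(eq1, eq2)   # both print 0: equilibrium is satisfied identically
```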
[ { "math_id": 0, "text": "\\sigma_{ij,i}=0\\," }, { "math_id": 1, "text": "\\sigma" }, { "math_id": 2, "text": "\\sigma_{ij,kk}+\\frac{1}{1+\\nu}\\sigma_{kk,ij}=0" }, { "math_id": 3, "text": "\\sigma=\\nabla \\times \\Phi \\times \\nabla" }, { "math_id": 4, "text": "\\sigma_{ij}=\\varepsilon_{ikm}\\varepsilon_{jln}\\Phi_{kl,mn}" }, { "math_id": 5, "text": "\\Phi_{mn}" }, { "math_id": 6, "text": "\\varepsilon" }, { "math_id": 7, "text": "\\nabla " }, { "math_id": 8, "text": "\\Phi_{ij}=\n\\begin{bmatrix}\nA&0&0\\\\\n0&B&0\\\\\n0&0&C\n\\end{bmatrix}\n" }, { "math_id": 9, "text": "\\nabla^4 A+\\nabla^4 B+\\nabla^4 C=3\\left(\n\\frac{\\partial^2 A}{\\partial x^2}+\n\\frac{\\partial^2 B}{\\partial y^2}+\n\\frac{\\partial^2 C}{\\partial z^2}\\right)/(2-\\nu)," }, { "math_id": 10, "text": "C" }, { "math_id": 11, "text": "\\varphi" }, { "math_id": 12, "text": "\n \\sigma_x = \\frac{\\partial^2\\varphi}{\\partial y^2} ~;~~\n \\sigma_y = \\frac{\\partial^2\\varphi}{\\partial x^2} ~;~~\n \\sigma_{xy} = -\\frac{\\partial^2\\varphi}{\\partial x \\partial y}-(f_{x}y+f_{y}x)\n " }, { "math_id": 13, "text": "f_{x}" }, { "math_id": 14, "text": "f_{y}" }, { "math_id": 15, "text": "\n\\sigma_{rr} = \\frac{1}{r}\\frac{\\partial \\varphi}{\\partial r} + \\frac{1}{r^2}\\frac{\\partial^2\\varphi}{\\partial \\theta^2} ~;~~\n\\sigma_{\\theta\\theta} = \\frac{\\partial^2\\varphi}{\\partial r^2} ~;~~\n\\sigma_{r\\theta}=\\sigma_{\\theta r} = - \\frac{\\partial}{\\partial r}\\left( \\frac{1}{r}\\frac{\\partial \\varphi}{\\partial\\theta} \\right)\n" }, { "math_id": 16, "text": "\\Phi_{ij}=\n\\begin{bmatrix}\n0&C&B\\\\\nC&0&A\\\\\nB&A&0\n\\end{bmatrix}\n" } ]
https://en.wikipedia.org/wiki?curid=15732228
1573372
Scale (map)
Ratio of distance on a map to the corresponding distance on the ground The scale of a map is the ratio of a distance on the map to the corresponding distance on the ground. This simple concept is complicated by the curvature of the Earth's surface, which forces scale to vary across a map. Because of this variation, the concept of scale becomes meaningful in two distinct ways. The first way is the ratio of the size of the generating globe to the size of the Earth. The generating globe is a conceptual model to which the Earth is shrunk and from which the map is projected. The ratio of the generating globe's size to the Earth's size is called the nominal scale (also called principal scale or representative fraction). Many maps state the nominal scale and may even display a bar scale (sometimes merely called a "scale") to represent it. The second distinct concept of scale applies to the variation in scale across a map. It is the ratio of the mapped point's scale to the nominal scale. In this case 'scale' means the scale factor (also called point scale or particular scale). If the region of the map is small enough to ignore Earth's curvature, such as in a town plan, then a single value can be used as the scale without causing measurement errors. In maps covering larger areas, or the whole Earth, the map's scale may be less useful or even useless in measuring distances. The map projection becomes critical in understanding how scale varies throughout the map. When scale varies noticeably, it can be accounted for as the scale factor. Tissot's indicatrix is often used to illustrate the variation of point scale across a map. History. The foundations for quantitative map scaling go back to ancient China, with textual evidence that the idea of map scaling was understood by the second century BC. Ancient Chinese surveyors and cartographers had ample technical resources used to produce maps, such as counting rods, carpenter's squares, plumb lines, compasses for drawing circles, and sighting tubes for measuring inclination. Reference frames postulating a nascent coordinate system for identifying locations were hinted at by ancient Chinese astronomers who divided the sky into various sectors or lunar lodges. The Chinese cartographer and geographer Pei Xiu of the Three Kingdoms period created a set of large-area maps that were drawn to scale. He produced a set of principles that stressed the importance of consistent scaling, directional measurements, and adjustments in land measurements in the terrain that was being mapped. Terminology. Representation of scale. Map scales may be expressed in words (a lexical scale), as a ratio, or as a fraction. Examples are: 'one centimetre to one hundred metres' or 1:10,000 or 1/10,000; 'one inch to one mile' or 1:63,360 or 1/63,360; 'one centimetre to one thousand kilometres' or 1:100,000,000 or 1/100,000,000 (the ratio would usually be abbreviated to 1:100M). Bar scale vs. lexical scale. In addition to the above, many maps carry one or more "(graphical)" bar scales. For example, some modern British maps have three bar scales, one each for kilometres, miles and nautical miles. A lexical scale in a language known to the user may be easier to visualise than a ratio: if the scale is an inch to two miles and the map user can see two villages that are about two inches apart on the map, then it is easy to work out that the villages are about four miles apart on the ground. 
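The arithmetic in the last example (turning a distance measured on the map into a ground distance using the stated scale) can be written out as a short Python sketch; it is illustrative only, and the unit conversions are the only assumptions made:

```python
def ground_distance(map_distance, representative_fraction):
    """Ground distance in the same units as map_distance.

    representative_fraction is the map scale written as a fraction,
    e.g. 1/10000 for a 1:10,000 town plan."""
    return map_distance / representative_fraction

# 'One inch to two miles' is 1:126,720, because two miles = 126,720 inches.
# Two inches on the map therefore correspond to four miles on the ground.
ground_inches = ground_distance(2.0, 1.0 / 126720.0)
print(ground_inches / 63360.0)   # ground distance converted to miles: 4.0
```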
A lexical scale may cause problems if it is expressed in a language that the user does not understand or in obsolete or ill-defined units. For example, a scale of one inch to a furlong (1:7920) will be understood by many older people in countries where Imperial units used to be taught in schools. But a scale of one pouce to one league may be about 1:144,000, depending on the cartographer's choice of the many possible definitions for a league, and only a minority of modern users will be familiar with the units used. "Contrast to spatial scale." Large scale, medium scale, small scale. A small-scale map covers large regions, such as world maps, continents or large nations. In other words, it shows large areas of land in a small space. Such maps are called small scale because the representative fraction is relatively small. Large-scale maps, such as county maps or town plans, show smaller areas in more detail. Such maps are called large scale because the representative fraction is relatively large. For instance a town plan, which is a large-scale map, might be on a scale of 1:10,000, whereas a world map, which is a small-scale map, might be on a scale of 1:100,000,000. The following table describes typical ranges for these scales but should not be considered authoritative because there is no standard: The terms are sometimes used in the absolute sense of the table, but other times in a relative sense. For example, a map reader whose work refers solely to large-scale maps (as tabulated above) might refer to a map at 1:500,000 as small-scale. In the English language, the term "large scale" is often used to mean "extensive". However, as explained above, cartographers use the term to refer to "less" extensive maps – those that show a smaller area. Maps that show an extensive area are "small scale" maps. This can be a cause of confusion. Scale variation. Mapping large areas causes noticeable distortions because it significantly flattens the curved surface of the earth. How distortion gets distributed depends on the map projection. Scale varies across the map, and the stated map scale is only an approximation. This is discussed in detail below. Large-scale maps with curvature neglected. The region over which the earth can be regarded as flat depends on the accuracy of the survey measurements. If measured only to the nearest metre, then curvature of the earth is undetectable over a meridian distance of about 100 km and over an east-west line of about 80 km (at a latitude of 45 degrees). If surveyed to the nearest millimetre, then curvature is undetectable over a meridian distance of about 10 km and over an east-west line of about 8 km. Thus a plan of New York City accurate to one metre or a building site plan accurate to one millimetre would both satisfy the above conditions for the neglect of curvature. They can be treated by plane surveying and mapped by scale drawings in which any two points at the same distance on the drawing are at the same distance on the ground. True ground distances are calculated by measuring the distance on the map and then multiplying by the inverse of the scale fraction or, equivalently, simply using dividers to transfer the separation between the points on the map to a bar scale on the map. Point scale (or particular scale). 
The only true representation of a sphere at constant scale is another sphere such as a globe. Given the limited practical size of globes, we must use maps for detailed mapping. Maps require projections. A projection implies distortion: A constant separation on the map does not correspond to a constant separation on the ground. While a map may display a graphical bar scale, the scale must be used with the understanding that it will be accurate on only some lines of the map. (This is discussed further in the examples in the following sections.) Let "P" be a point at latitude formula_0 and longitude formula_1 on the sphere (or ellipsoid). Let Q be a neighbouring point and let formula_2 be the angle between the element PQ and the meridian at P: this angle is the azimuth angle of the element PQ. Let P' and Q' be corresponding points on the projection. The angle between the direction P'Q' and the projection of the meridian is the bearing formula_3. In general formula_4. Comment: this precise distinction between azimuth (on the Earth's surface) and bearing (on the map) is not universally observed, many writers using the terms almost interchangeably. Definition: the point scale at P is the ratio of the two distances P'Q' and PQ in the limit that Q approaches P. We write this as formula_5 where the notation indicates that the point scale is a function of the position of P and also the direction of the element PQ. Definition: if P and Q lie on the same meridian formula_6, the meridian scale is denoted by formula_7 . Definition: if P and Q lie on the same parallel formula_8, the parallel scale is denoted by formula_9. Definition: if the point scale depends only on position, not on direction, we say that it is isotropic and conventionally denote its value in any direction by the parallel scale factor formula_10. Definition: A map projection is said to be conformal if the angle between a pair of lines intersecting at a point P is the same as the angle between the projected lines at the projected point P', for all pairs of lines intersecting at point P. A conformal map has an isotropic scale factor. Conversely isotropic scale factors across the map imply a conformal projection. Isotropy of scale implies that "small" elements are stretched equally in all directions, that is the shape of a small element is preserved. This is the property of orthomorphism (from Greek 'right shape'). The qualification 'small' means that at some given accuracy of measurement no change can be detected in the scale factor over the element. Since conformal projections have an isotropic scale factor they have also been called orthomorphic projections. For example, the Mercator projection is conformal since it is constructed to preserve angles and its scale factor is isotropic, a function of latitude only: Mercator "does" preserve shape in small regions. Definition: on a conformal projection with an isotropic scale, points which have the same scale value may be joined to form the isoscale lines. These are not plotted on maps for end users but they feature in many of the standard texts. (See Snyder pages 203—206.) The representative fraction (RF) or principal scale. There are two conventions used in setting down the equations of any given projection. For example, the equirectangular cylindrical projection may be written as cartographers:        formula_11      formula_12 mathematicians:       formula_13      formula_14 Here we shall adopt the first of these conventions (following the usage in the surveys by Snyder). 
Clearly the above projection equations define positions on a huge cylinder wrapped around the Earth and then unrolled. We say that these coordinates define the projection map which must be distinguished logically from the actual printed (or viewed) maps. If the definition of point scale in the previous section is in terms of the projection map then we can expect the scale factors to be close to unity. For normal tangent cylindrical projections the scale along the equator is k=1 and in general the scale changes as we move off the equator. Analysis of scale on the projection map is an investigation of the change of k away from its true value of unity. Actual printed maps are produced from the projection map by a "constant" scaling denoted by a ratio such as 1:100M (for whole world maps) or 1:10,000 (for maps such as town plans). To avoid confusion in the use of the word 'scale' this constant scale fraction is called the representative fraction (RF) of the printed map and it is to be identified with the ratio printed on the map. The actual printed map coordinates for the equirectangular cylindrical projection are printed map: formula_15 formula_16 This convention allows a clear distinction of the intrinsic projection scaling and the reduction scaling. From this point we ignore the RF and work with the projection map. Visualisation of point scale: the Tissot indicatrix. Consider a small circle on the surface of the Earth centred at a point P at latitude formula_0 and longitude formula_1. Since the point scale varies with position and direction, the projection of the circle will be distorted. Tissot proved that, as long as the distortion is not too great, the circle will become an ellipse on the projection. In general the dimension, shape and orientation of the ellipse will change over the projection. Superimposing these distortion ellipses on the map projection conveys the way in which the point scale is changing over the map. The distortion ellipse is known as Tissot's indicatrix. The example shown here is the Winkel tripel projection, the standard projection for world maps made by the National Geographic Society. The minimum distortion is on the central meridian at latitudes of 30 degrees (North and South). Point scale for normal cylindrical projections of the sphere. The key to a "quantitative" understanding of scale is to consider an infinitesimal element on the sphere. The figure shows a point P at latitude formula_0 and longitude formula_1 on the sphere. The point Q is at latitude formula_17 and longitude formula_18. The lines PK and MQ are arcs of meridians of length formula_19 where formula_20 is the radius of the sphere and formula_0 is in radian measure. The lines PM and KQ are arcs of parallel circles of length formula_21 with formula_1 in radian measure. In deriving a "point" property of the projection "at" P it suffices to take an infinitesimal element PMQK of the surface: in the limit of Q approaching P such an element tends to an infinitesimally small planar rectangle. Normal cylindrical projections of the sphere have formula_11 and formula_22 equal to a function of latitude only. Therefore, the infinitesimal element PMQK on the sphere projects to an infinitesimal element P'M'Q'K' which is an "exact" rectangle with a base formula_23 and height formula_24. By comparing the elements on sphere and projection we can immediately deduce expressions for the scale factors on parallels and meridians. 
(The treatment of scale in a general direction may be found below.) parallel scale factor   formula_25 meridian scale factor  formula_26 Note that the parallel scale factor formula_27 is independent of the definition of formula_28 so it is the same for all normal cylindrical projections. It is useful to note that at latitude 30 degrees the parallel scale is formula_29 at latitude 45 degrees the parallel scale is formula_30 at latitude 60 degrees the parallel scale is formula_31 at latitude 80 degrees the parallel scale is formula_32 at latitude 85 degrees the parallel scale is formula_33 The following examples illustrate three normal cylindrical projections and in each case the variation of scale with position and direction is illustrated by the use of Tissot's indicatrix. Three examples of normal cylindrical projection. The equirectangular projection. The equirectangular projection, also known as the "Plate Carrée" (French for "flat square") or (somewhat misleadingly) the equidistant projection, is defined by formula_34   formula_35 where formula_20 is the radius of the sphere, formula_1 is the longitude from the central meridian of the projection (here taken as the Greenwich meridian at formula_36) and formula_0 is the latitude. Note that formula_1 and formula_0 are in radians (obtained by multiplying the degree measure by a factor of formula_37/180). The longitude formula_1 is in the range formula_38 and the latitude formula_0 is in the range formula_39. Since formula_40 the previous section gives parallel scale,  formula_25 meridian scale formula_41 For the calculation of the point scale in an arbitrary direction see addendum. The figure illustrates the Tissot indicatrix for this projection. On the equator h=k=1 and the circular elements are undistorted on projection. At higher latitudes the circles are distorted into an ellipse given by stretching in the parallel direction only: there is no distortion in the meridian direction. The ratio of the major axis to the minor axis is formula_42. Clearly the area of the ellipse increases by the same factor. It is instructive to consider the use of bar scales that might appear on a printed version of this projection. The scale is true (k=1) on the equator so that multiplying its length on a printed map by the inverse of the RF (or principal scale) gives the actual circumference of the Earth. The bar scale on the map is also drawn at the true scale so that transferring a separation between two points on the equator to the bar scale will give the correct distance between those points. The same is true on the meridians. On a parallel other than the equator the scale is formula_42 so when we transfer a separation from a parallel to the bar scale we must divide the bar scale distance by this factor to obtain the distance between the points when measured along the parallel (which is not the true distance along a great circle). On a line at a bearing of say 45 degrees (formula_43) the scale is continuously varying with latitude and transferring a separation along the line to the bar scale does not give a distance related to the true distance in any simple way. (But see addendum). Even if a distance along this line of constant planar angle could be worked out, its relevance is questionable since such a line on the projection corresponds to a complicated curve on the sphere. For these reasons bar scales on small-scale maps must be used with extreme caution. Mercator projection. 
The Mercator projection maps the sphere to a rectangle (of infinite extent in the formula_22-direction) by the equations formula_44 formula_45 where a, formula_46 and formula_47 are as in the previous example. Since formula_48 the scale factors are: parallel scale formula_49 meridian scale formula_50 In the mathematical addendum it is shown that the point scale in an arbitrary direction is also equal to formula_42 so the scale is isotropic (same in all directions), its magnitude increasing with latitude as formula_42. In the Tissot diagram each infinitesimal circular element preserves its shape but is enlarged more and more as the latitude increases. Lambert's equal area projection. Lambert's equal area projection maps the sphere to a finite rectangle by the equations formula_51 where a, formula_1 and formula_0 are as in the previous example. Since formula_52 the scale factors are parallel scale formula_25 meridian scale formula_53 The calculation of the point scale in an arbitrary direction is given below. The vertical and horizontal scales now compensate each other (hk=1) and in the Tissot diagram each infinitesimal circular element is distorted into an ellipse of the "same" area as the undistorted circles on the equator. Graphs of scale factors. The graph shows the variation of the scale factors for the above three examples. The top plot shows the isotropic Mercator scale function: the scale on the parallel is the same as the scale on the meridian. The other plots show the meridian scale factor for the Equirectangular projection (h=1) and for the Lambert equal area projection. These last two projections have a parallel scale identical to that of the Mercator plot. For the Lambert note that the parallel scale (as Mercator A) increases with latitude and the meridian scale (C) decreases with latitude in such a way that hk=1, guaranteeing area conservation. Scale variation on the Mercator projection. The Mercator point scale is unity on the equator because it is such that the auxiliary cylinder used in its construction is tangential to the Earth at the equator. For this reason the usual projection should be called a tangent projection. The scale varies with latitude as formula_27. Since formula_42 tends to infinity as we approach the poles the Mercator map is grossly distorted at high latitudes and for this reason the projection is totally inappropriate for world maps (unless we are discussing navigation and rhumb lines). However, at a latitude of about 25 degrees the value of formula_42 is about 1.1 so Mercator "is" accurate to within 10% in a strip of width 50 degrees centred on the equator. Narrower strips are better: a strip of width 16 degrees (centred on the equator) is accurate to within 1% or 1 part in 100. A standard criterion for good large-scale maps is that the accuracy should be within 4 parts in 10,000, or 0.04%, corresponding to formula_54. Since formula_42 attains this value at formula_55 degrees (see figure below, red line), the tangent Mercator projection is highly accurate within a strip of width 3.24 degrees centred on the equator. This corresponds to a north-south distance of about 360 km. Within this strip Mercator is "very" good, highly accurate and shape preserving because it is conformal (angle preserving). These observations prompted the development of the transverse Mercator projections in which a meridian is treated 'like an equator' of the projection so that we obtain an accurate map within a narrow distance of that meridian. 
Such maps are good for countries aligned nearly north-south (like Great Britain) and a set of 60 such projections is used for the Universal Transverse Mercator (UTM). Note that in both these projections (which are based on various ellipsoids) the transformation equations for x and y and the expression for the scale factor are complicated functions of both latitude and longitude. Secant, or modified, projections. The basic idea of a secant projection is that the sphere is projected to a cylinder which intersects the sphere at two parallels, say formula_56 north and south. Clearly the scale is now true at these latitudes whereas parallels beneath these latitudes are contracted by the projection and their (parallel) scale factor must be less than one. The result is that deviation of the scale from unity is reduced over a wider range of latitudes. As an example, one possible secant Mercator projection is defined by formula_57 The numeric multipliers do not alter the shape of the projection but it does mean that the scale factors are modified: secant Mercator scale, formula_58 Thus the scale on the equator is 0.9996, it rises to k = 1 at a latitude of approximately 1.62 degrees (where the cosine of the latitude equals 0.9996), and it reaches k = 1.0004 at a latitude of approximately 2.29 degrees, so the 0.04% accuracy criterion is now met over a strip roughly 4.6 degrees wide centred on the equator. This is illustrated by the lower (green) curve in the figure of the previous section. Such narrow zones of high accuracy are used in the UTM and the British OSGB projection, both of which are secant, transverse Mercator on the ellipsoid with the scale on the central meridian constant at formula_65. The isoscale lines with formula_66 are slightly curved lines approximately 180 km east and west of the central meridian. The maximum value of the scale factor is 1.001 for UTM and 1.0007 for OSGB. The lines of unit scale at latitude formula_56 (north and south), where the cylindrical projection surface intersects the sphere, are the standard parallels of the secant projection. Whilst a narrow band with formula_67 is important for high accuracy mapping at a large scale, for world maps much wider spaced standard parallels are used to control the scale variation. Examples are the Behrmann projection, with standard parallels at 30 degrees north and south, and the Gall equal area projection, with standard parallels at 45 degrees north and south. The scale plots for the latter are shown below compared with the Lambert equal area scale factors. In the latter the equator is a single standard parallel and the parallel scale increases from k=1 to compensate the decrease in the meridian scale. For the Gall the parallel scale is reduced at the equator (to k=0.707) whilst the meridian scale is increased (to k=1.414). This gives rise to the gross distortion of shape in the Gall-Peters projection. (On the globe Africa is about as long as it is broad). Note that the meridian and parallel scales are both unity on the standard parallels. Mathematical addendum. For normal cylindrical projections the geometry of the infinitesimal elements gives formula_68 formula_69 The relationship between the angles formula_3 and formula_2 is formula_70 For the Mercator projection formula_48 giving formula_71: angles are preserved. (Hardly surprising since this is the relation used to derive Mercator). For the equidistant and Lambert projections we have formula_72 and formula_73 respectively so the relationship between formula_2 and formula_3 depends upon the latitude formula_0. Denote the point scale at P when the infinitesimal element PQ makes an angle formula_74 with the meridian by formula_75 It is given by the ratio of distances: formula_76 Setting formula_23 and substituting formula_77 and formula_24 from equations (a) and (b) respectively gives formula_78 For the projections other than Mercator we must first calculate formula_3 from formula_2 and formula_0 using equation (c), before we can find formula_79. 
For example, the equirectangular projection has formula_80 so that formula_81 If we consider a line of constant slope formula_3 on the projection both the corresponding value of formula_2 and the scale factor along the line are complicated functions of formula_0. There is no simple way of transferring a general finite separation to a bar scale and obtaining meaningful results. Ratio symbol. While the colon is often used to express ratios, Unicode can express a symbol specific to ratios, being slightly raised: . Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\varphi" }, { "math_id": 1, "text": "\\lambda" }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "\\beta" }, { "math_id": 4, "text": "\\alpha\\ne\\beta" }, { "math_id": 5, "text": "\\mu(\\lambda,\\,\\varphi,\\,\\alpha)=\\lim_{Q\\to P}\\frac{P'Q'}{PQ}," }, { "math_id": 6, "text": "(\\alpha=0)" }, { "math_id": 7, "text": "h(\\lambda,\\,\\varphi)" }, { "math_id": 8, "text": "(\\alpha=\\pi/2)" }, { "math_id": 9, "text": "k(\\lambda,\\,\\varphi)" }, { "math_id": 10, "text": "k(\\lambda,\\varphi)" }, { "math_id": 11, "text": "x=a\\lambda" }, { "math_id": 12, "text": "y=a\\varphi" }, { "math_id": 13, "text": "x=\\lambda" }, { "math_id": 14, "text": "y=\\varphi" }, { "math_id": 15, "text": "x=(RF)a\\lambda" }, { "math_id": 16, "text": "y=(RF)a\\varphi" }, { "math_id": 17, "text": "\\varphi+\\delta\\varphi" }, { "math_id": 18, "text": "\\lambda+\\delta\\lambda" }, { "math_id": 19, "text": "a\\,\\delta\\varphi" }, { "math_id": 20, "text": "a" }, { "math_id": 21, "text": "(a\\cos\\varphi)\\delta\\lambda" }, { "math_id": 22, "text": "y" }, { "math_id": 23, "text": "\\delta x=a\\,\\delta\\lambda" }, { "math_id": 24, "text": "\\delta y" }, { "math_id": 25, "text": "\\quad k\\;=\\;\\dfrac{\\delta x}{a\\cos\\varphi\\,\\delta\\lambda\\,}=\\,\\sec\\varphi\\qquad\\qquad{}" }, { "math_id": 26, "text": "\\quad h\\;=\\;\\dfrac{\\delta y}{a\\,\\delta\\varphi\\,} = \\dfrac{y'(\\varphi)}{a}" }, { "math_id": 27, "text": "k=\\sec\\varphi" }, { "math_id": 28, "text": "y(\\varphi)" }, { "math_id": 29, "text": "k=\\sec30^{\\circ}=2/\\sqrt{3}=1.15" }, { "math_id": 30, "text": "k=\\sec45^{\\circ}=\\sqrt{2}=1.414" }, { "math_id": 31, "text": "k=\\sec60^{\\circ}=2" }, { "math_id": 32, "text": "k=\\sec80^{\\circ}=5.76" }, { "math_id": 33, "text": "k=\\sec85^{\\circ}=11.5" }, { "math_id": 34, "text": "x = a\\lambda," }, { "math_id": 35, "text": "y = a\\varphi," }, { "math_id": 36, "text": "\\lambda =0" }, { "math_id": 37, "text": "\\pi" }, { "math_id": 38, "text": "[-\\pi,\\pi]" }, { "math_id": 39, "text": "[-\\pi/2,\\pi/2]" }, { "math_id": 40, "text": "y'(\\varphi)=1" }, { "math_id": 41, "text": "\\quad h\\;=\\;\\dfrac{\\delta y}{a\\,\\delta\\varphi\\,}=\\,1" }, { "math_id": 42, "text": "\\sec\\varphi" }, { "math_id": 43, "text": "\\beta=45^{\\circ}" }, { "math_id": 44, "text": "x = a\\lambda\\," }, { "math_id": 45, "text": "y = a\\ln \\left[\\tan \\left(\\frac{\\pi}{4} + \\frac{\\varphi}{2} \\right) \\right]" }, { "math_id": 46, "text": "\\lambda\\," }, { "math_id": 47, "text": "\\varphi \\," }, { "math_id": 48, "text": "y'(\\varphi)=a\\sec\\varphi" }, { "math_id": 49, "text": "k\\;=\\;\\dfrac{\\delta x}{a\\cos\\varphi\\,\\delta\\lambda\\,}=\\,\\sec\\varphi." }, { "math_id": 50, "text": "h\\;=\\;\\dfrac{\\delta y}{a\\,\\delta\\varphi\\,}=\\,\\sec\\varphi." }, { "math_id": 51, "text": "x = a\\lambda \\qquad\\qquad y = a\\sin\\varphi" }, { "math_id": 52, "text": "y'(\\varphi)=\\cos\\varphi" }, { "math_id": 53, "text": "\\quad h\\;=\\;\\dfrac{\\delta y}{a\\,\\delta\\varphi\\,} = \\,\\cos\\varphi" }, { "math_id": 54, "text": "k=1.0004" }, { "math_id": 55, "text": "\\varphi=1.62" }, { "math_id": 56, "text": "\\varphi_1" }, { "math_id": 57, "text": "x = 0.9996a\\lambda \\qquad\\qquad y = 0.9996a\\ln \\left(\\tan \\left(\\frac{\\pi}{4} + \\frac{\\varphi}{2} \\right) \\right)." }, { "math_id": 58, "text": "\\quad k\\;=0.9996\\sec\\varphi." 
}, { "math_id": 59, "text": "\\sec\\varphi_1=1/0.9996=1.00004" }, { "math_id": 60, "text": "\\varphi_1=1.62" }, { "math_id": 61, "text": "\\varphi_2" }, { "math_id": 62, "text": "\\sec\\varphi_2=1.0004/0.9996=1.0008" }, { "math_id": 63, "text": "\\varphi_2=2.29" }, { "math_id": 64, "text": "1<k<1.0004" }, { "math_id": 65, "text": "k_0=0.9996" }, { "math_id": 66, "text": "k=1" }, { "math_id": 67, "text": "|k-1|<0.0004" }, { "math_id": 68, "text": "\n \\text{(a)}\\quad\n \\tan\\alpha=\\frac{a\\cos\\varphi\\,\\delta\\lambda}{a\\,\\delta\\varphi}, " }, { "math_id": 69, "text": "\n \\text{(b)}\\quad\n \\tan\\beta=\\frac{\\delta x}{\\delta y}\n =\\frac{a\\,\\delta \\lambda}{\\delta y}.\n" }, { "math_id": 70, "text": " \\text{(c)}\\quad\n\\tan\\beta=\\frac{a\\sec\\varphi}{y'(\\varphi)} \\tan\\alpha.\\," }, { "math_id": 71, "text": "\\alpha=\\beta" }, { "math_id": 72, "text": "y'(\\varphi)=a" }, { "math_id": 73, "text": "y'(\\varphi)=a\\cos\\varphi" }, { "math_id": 74, "text": "\\alpha \\," }, { "math_id": 75, "text": "\\mu_{\\alpha}." }, { "math_id": 76, "text": " \n\\mu_\\alpha = \\lim_{Q\\to P}\\frac{P'Q'}{PQ}\n= \\lim_{Q\\to P}\\frac{\\sqrt{\\delta x^2 +\\delta y^2}}\n{\\sqrt{ a^2\\, \\delta\\varphi^2+a^2\\cos^2 \\varphi\\, \\delta\\lambda^2}}.\n" }, { "math_id": 77, "text": "\\delta\\varphi" }, { "math_id": 78, "text": "\\mu_\\alpha(\\varphi) = \\sec\\varphi \\left[\\frac{\\sin\\alpha}{\\sin\\beta}\\right]." }, { "math_id": 79, "text": "\\mu_{\\alpha}" }, { "math_id": 80, "text": "y'=a" }, { "math_id": 81, "text": "\\tan\\beta=\\sec\\varphi \\tan\\alpha.\\," } ]
https://en.wikipedia.org/wiki?curid=1573372
1573393
Conversion (chemistry)
Conversion and its related terms yield and selectivity are important terms in chemical reaction engineering. They are described as ratios of how much of a reactant has reacted ("X" — conversion, normally between zero and one), how much of a desired product was formed ("Y" — yield, normally also between zero and one) and how much desired product was formed in ratio to the undesired product(s) ("S" — selectivity). There are conflicting definitions in the literature for selectivity and yield, so each author's intended definition should be verified. Conversion can be defined for (semi-)batch and continuous reactors and as instantaneous and overall conversion. Assumptions. The following assumptions are made: formula_0, where formula_1 and formula_2 are the stoichiometric coefficients. For multiple parallel reactions, the definitions can also be applied, either per reaction or using the limiting reaction. Conversion. Conversion can be separated into instantaneous conversion and overall conversion. For continuous processes the two are the same, for batch and semi-batch there are important differences. Furthermore, for multiple reactants, conversion can be defined overall or per reactant. Instantaneous conversion. Semi-batch. In this setting there are different definitions. One definition regards the instantaneous conversion as the ratio of the instantaneously converted amount to the amount fed at any point in time: formula_3. with formula_4 as the change of moles with time of species i. This ratio can become larger than 1. It can be used to indicate whether reservoirs are built up and it is ideally close to 1. When the feed stops, its value is not defined. In semi-batch polymerisation, the instantaneous conversion is defined as the total mass of polymer divided by the total mass of monomer fed: formula_5. formula_6 Overall conversion. Semi-batch. Total conversion of the formulation: formula_7 Total conversion of the fed reactants: formula_8 formula_9 Yield. Yield in general refers to the amount of a specific product ("p" in 1.."m") formed per mole of reactant consumed (Definition 1). However, it is also defined as the amount of product produced per amount of product that could be produced (Definition 2). If not all of the limiting reactant has reacted, the two definitions contradict each other. Combining those two also means that stoichiometry needs to be taken into account and that yield has to be based on the limiting reactant ("k" in 1.."n"): formula_10 Continuous. The version normally found in the literature: formula_11 Selectivity. Instantaneous selectivity is the production rate of one component per production rate of another component. For overall selectivity the same problem of the conflicting definitions exists. Generally, it is defined as the number of moles of desired product per the number of moles of undesired product (Definition 1). However, the definitions of the total amount of reactant to form a product per total amount of reactant consumed is used (Definition 2) as well as the total amount of desired product formed per total amount of limiting reactant consumed (Definition 3). This last definition is the same as definition 1 for yield. Batch or semi-batch. The version normally found in the literature: formula_12 Continuous. The version normally found in the literature: formula_13 Combination. 
For batch and continuous reactors (semi-batch needs to be checked more carefully) and the definitions marked as the ones generally found in the literature, the three concepts can be combined: formula_14 For a process with only the reaction A → B this means that "S"=1 and "Y"="X". Abstract example. For the following abstract example and the amounts depicted on the right, the following calculation can be performed with the above definitions, either in batch or a continuous reactor.
A → B
A → C
B is the desired product. There are 100 mol of A at the beginning or at the entry to the continuous reactor and 10 mol A, 72 mol B and 18 mol C at the end of the reaction or the exit of the continuous reactor. The three properties are found to be: formula_15 formula_16 formula_17 The property formula_14 holds. In this reaction, 90% of substance A is converted (consumed), but only 80% of the 90% is converted to the desired substance B and 20% to undesired by-products C. So, conversion of A is 90%, selectivity for B 80% and yield of substance B 72%. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
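As a quick numerical cross-check of the abstract example above (a small illustrative sketch; the function names are not from the article, and the stoichiometric ratios are 1 for these reactions):

def conversion(n_in, n_out):
    """Conversion X of a reactant, batch or continuous definition."""
    return (n_in - n_out) / n_in

def product_yield(n_p_out, n_p_in, n_k_in):
    """Yield Y of a product per amount of limiting reactant fed
    (stoichiometric ratio taken as 1, as for A -> B and A -> C)."""
    return (n_p_out - n_p_in) / n_k_in

def selectivity(n_p_out, n_p_in, n_k_in, n_k_out):
    """Selectivity S: product formed per limiting reactant consumed."""
    return (n_p_out - n_p_in) / (n_k_in - n_k_out)

# 100 mol A fed; 10 mol A, 72 mol B and 18 mol C at the end.
X = conversion(100, 10)           # 0.90
Y = product_yield(72, 0, 100)     # 0.72
S = selectivity(72, 0, 100, 10)   # 0.80
print(X, Y, S, abs(Y - X * S) < 1e-12)   # 0.9 0.72 0.8 True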
[ { "math_id": 0, "text": " \\sum_{i=1}^n \\nu_i A_i = \\sum_{j=1}^m \\mu_j B_j" }, { "math_id": 1, "text": "\\nu_{i}" }, { "math_id": 2, "text": "\\mu_{j}" }, { "math_id": 3, "text": "X_{i,\\text{inst}}=\\frac{\\dot{n}_{i,\\text{react}}}{\\dot{n}_{i,\\text{in}}}" }, { "math_id": 4, "text": "\\dot{n}_{i}" }, { "math_id": 5, "text": "X_{\\text{poly}}=\\frac{m_{\\text{Pol}}}{\\sum_i\\int_0^t\\dot{m}_{i,\\text{in}}(\\tau)d\\tau}" }, { "math_id": 6, "text": "X_{i}=\\frac{n_{i}(t=0)-n_i(t)}{n_{i}(t=0)}=1-\\frac{n_i(t)}{n_{i}(t=0)}" }, { "math_id": 7, "text": "X_{i}=\\frac{n_{i}(t=0)+\\int_0^t\\dot{n}_{i,\\text{in}}(\\tau)d\\tau-n_i(t)}\n {n_{i}(t=0)+\\int_0^{t_{\\text{end}}}\\dot{n}_{i,\\text{in}}(\\tau)d\\tau}" }, { "math_id": 8, "text": "X_{i}=\\frac{n_{i}(t=0)+\\int_0^t\\dot{n}_{i,\\text{in}}(\\tau)d\\tau-n_i(t)}\n {n_{i}(t=0)+\\int_0^{t}\\dot{n}_{i,\\text{in}}(\\tau)d\\tau}" }, { "math_id": 9, "text": "X_{i}=\\frac{\\dot{n}_{i,in}-\\dot{n}_{i,out}}{\\dot{n}_{i,in}}=1-\\frac{\\dot{n}_{i,out}}{\\dot{n}_{i,in}}" }, { "math_id": 10, "text": "Y_{p}=\\frac{\\dot{n}_{p,\\text{out}}-\\dot{n}_{p,\\text{in}}}{\\dot{n}_{k,\\text{in}}\\underbrace{-n_{k,\\text{out}}}_{\\text{only for Definition 1}}}\\left |\\frac{\\mu_k}{\\nu_p}\\right|" }, { "math_id": 11, "text": "Y_{p}=\\frac{\\dot{n}_{p,\\text{out}}-\\dot{n}_{p,\\text{in}}}{\\dot{n}_{k,\\text{in}}}\\left |\\frac{\\mu_k}{\\nu_p}\\right|" }, { "math_id": 12, "text": "S_{p}=\\frac{n_{p}(t=0)-n_{p}(t)}{n_{k}(t=0)+\\int_0^t\\dot{n}_{k,\\text{in}}(\\tau)d\\tau-n_k(t)}\\left |\\frac{\\mu_k}{\\nu_p}\\right|" }, { "math_id": 13, "text": "S_{p}=\\frac{\\dot{n}_{p,\\text{out}}-\\dot{n}_{p,\\text{in}}}{\\dot{n}_{k,\\text{in}}-\\dot{n}_{k,\\text{out}}}\\left |\\frac{\\mu_k}{\\nu_p}\\right|" }, { "math_id": 14, "text": " Y_p = X_i \\cdot S_p " }, { "math_id": 15, "text": "X_\\ce{A}=\\frac{n_\\ce{A}(t=0)-n_A(t)}{n_\\ce{A}(t=0)}=1-\\frac{n_\\ce{A}(t)}{n_\\ce{A}(t=0)}=\\frac{100-10}{100}=0.9=90\\%" }, { "math_id": 16, "text": "Y_\\ce{B}=\\frac{n_\\ce{B}(t)-n_\\ce{B}(t=0)}{n_\\ce{A}(t=0)+\\int_0^t\\dot{n}_\\ce{A,{in}}(\\tau)d\\tau}\\left |\\frac{\\mu_k}{\\nu_p}\\right|=\\frac{72-0}{100+0}\\cdot\\frac{1}{1}=0.72=72\\%" }, { "math_id": 17, "text": "S_\\ce{B}=\\frac{{n}_\\ce{B}(t=0)-\\dot{n}_\\ce{B}(t)}{\\dot{n}_\\ce{A}(t=0)-n_\\ce{A}(t)}\\left |\\frac{\\mu_k}{\\nu_p}\\right|=\\frac{0-72}{100-10}\\cdot\\frac{1}{1}=0.8=80\\%" } ]
https://en.wikipedia.org/wiki?curid=1573393
15737471
Indirect DNA damage
Theory of damage from ultraviolet light Indirect DNA damage occurs when a UV-photon is absorbed in the human skin by a chromophore that does not have the ability to convert the energy into harmless heat very quickly. Molecules that do not have this ability have a long-lived excited state. This long lifetime leads to a high probability for reactions with other molecules—so-called bimolecular reactions. Melanin and DNA have extremely short excited state lifetimes in the range of a few femtoseconds (10−15s). The excited state lifetime of compounds used in sunscreens such as menthyl anthranilate, avobenzone or padimate O is 1,000 to 1,000,000 times longer than that of melanin, and therefore they may cause damage to living cells that come in contact with them. The molecule that originally absorbs the UV-photon is called a "chromophore". Bimolecular reactions can occur either between the excited chromophore and DNA or between the excited chromophore and another species, to produce free radicals and reactive oxygen species. These reactive chemical species can reach DNA by diffusion and the bimolecular reaction damages the DNA (oxidative stress). It is important to note that, unlike direct DNA damage which causes sunburn, indirect DNA damage does not result in any warning signal or pain in the human body. The bimolecular reactions that cause the indirect DNA damage are illustrated in the figure: formula_0 1O2 is reactive harmful singlet oxygen: formula_1 Location of the damage. Unlike direct DNA damage, which occurs in areas directly exposed to UV-B light, reactive chemical species can travel through the body and affect other areas—possibly even inner organs. The traveling nature of the indirect DNA damage can be seen in the fact that the malignant melanoma can occur in places that are not directly illuminated by the sun—in contrast to basal-cell carcinoma and squamous cell carcinoma, which appear only on directly illuminated locations on the body. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathrm{(Chromophore)^* + {}^3O_2 \\ \\xrightarrow{} \\ Chromophore + {}^1O_2}" }, { "math_id": 1, "text": "\\mathrm{{}^1O_2 + intact\\ DNA \\ \\xrightarrow{} \\ {}^3O_2 + damaged\\ DNA}" } ]
https://en.wikipedia.org/wiki?curid=15737471
1573991
Hilbert's fourth problem
Construct all metric spaces where lines resemble those on a sphere In mathematics, Hilbert's fourth problem in the 1900 list of Hilbert's problems is a foundational question in geometry. In one statement derived from the original, it was to find — up to an isomorphism — all geometries that have an axiomatic system of the classical geometry (Euclidean, hyperbolic and elliptic), with those axioms of congruence that involve the concept of the angle dropped, and `triangle inequality', regarded as an axiom, added. If one assumes the continuity axiom in addition, then, in the case of the Euclidean plane, we come to the problem posed by Jean Gaston Darboux: "To determine all the calculus of variation problems in the plane whose solutions are all the plane straight lines." There are several interpretations of the original statement of David Hilbert. Nevertheless, a solution was sought, with the German mathematician Georg Hamel being the first to contribute to the solution of Hilbert's fourth problem. A recognized solution was given by Soviet mathematician Aleksei Pogorelov in 1973. In 1976, Armenian mathematician Rouben V. Ambartzumian proposed another proof of Hilbert's fourth problem. Original statement. Hilbert discusses the existence of non-Euclidean geometry and non-Archimedean geometry ...a geometry in which all the axioms of ordinary euclidean geometry hold, and in particular all the congruence axioms except the one of the congruence of triangles (or all except the theorem of the equality of the base angles in the isosceles triangle), and in which, besides, the proposition that in every triangle the sum of two sides is greater than the third is assumed as a particular axiom. Due to the idea that a 'straight line' is defined as the shortest path between two points, he mentions how congruence of triangles is necessary for Euclid's proof that a straight line in the plane is the shortest distance between two points. He summarizes as follows: The theorem of the straight line as the shortest distance between two points and the essentially equivalent theorem of Euclid about the sides of a triangle, play an important part not only in number theory but also in the theory of surfaces and in the calculus of variations. For this reason, and because I believe that the thorough investigation of the conditions for the validity of this theorem will throw a new light upon the idea of distance, as well as upon other elementary ideas, e. g., upon the idea of the plane, and the possibility of its definition by means of the idea of the straight line, "the construction and systematic treatment of the geometries here possible seem to me desirable." Flat metrics. Desargues's theorem: "If two triangles lie on a plane such that the lines connecting corresponding vertices of the triangles meet at one point, then the three points, at which the prolongations of three pairs of corresponding sides of the triangles intersect, lie on one line." The necessary condition for solving Hilbert's fourth problem is the requirement that a metric space that satisfies the axioms of this problem should be Desarguesian, i.e.,: For Desarguesian spaces Georg Hamel proved that every solution of Hilbert's fourth problem can be represented in a real projective space formula_0 or in a convex domain of formula_0 if one determines the congruence of segments by equality of their lengths in a special metric for which the lines of the projective space are geodesics. Metrics of this type are called flat or projective. 
Thus, the solution of Hilbert's fourth problem was reduced to the solution of the problem of constructive determination of all complete flat metrics. Hamel solved this problem under the assumption of high regularity of the metric. However, as simple examples show, the class of regular flat metrics is smaller than the class of all flat metrics. The axioms of geometries under consideration imply only a continuity of the metrics. Therefore, to solve Hilbert's fourth problem completely it is necessary to determine constructively all the continuous flat metrics. Prehistory of Hilbert's fourth problem. Before 1900, the Cayley–Klein model of Lobachevsky geometry in the unit disk was already known, according to which geodesic lines are chords of the disk and the distance between points is defined as a logarithm of the cross-ratio of a quadruple. For two-dimensional Riemannian metrics, Eugenio Beltrami (1835–1900) proved that flat metrics are the metrics of constant curvature. For multidimensional Riemannian metrics this statement was proved by E. Cartan in 1930. In 1890, for solving problems in the theory of numbers, Hermann Minkowski introduced a notion of the space that nowadays is called the finite-dimensional Banach space. Minkowski space. Let formula_1 be a compact convex hypersurface in a Euclidean space defined by formula_2 where the function formula_3 satisfies the following conditions: formula_4 formula_5 formula_6 formula_7 The length of the vector "OA" is defined by: formula_8 A space with this metric is called Minkowski space. The hypersurface formula_9 is convex and can be irregular. The defined metric is flat. Finsler spaces. Let "M" and formula_10 be a smooth finite-dimensional manifold and its tangent bundle, respectively. The function formula_11 is called a Finsler metric if formula_12 and, for any point formula_13, the restriction of formula_14 to the tangent space formula_15 is a Minkowski norm; formula_16 is then a Finsler space. Hilbert's geometry. Let formula_17 be a bounded open convex set with the boundary of class "C2" and positive normal curvatures. Similarly to the Lobachevsky space, the hypersurface formula_18 is called the absolute of Hilbert's geometry. Hilbert's distance (see fig.) is defined by formula_19 The distance formula_20 induces the Hilbert–Finsler metric formula_21 on "U". For any formula_22 and formula_23 (see fig.), we have formula_24 The metric is symmetric and flat. In 1895, Hilbert introduced this metric as a generalization of the Lobachevsky geometry. If the hypersurface formula_25 is an ellipsoid, then we have the Lobachevsky geometry. Funk metric. In 1930, Funk introduced a non-symmetric metric. It is defined in a domain bounded by a closed convex hypersurface and is also flat. "σ"-metrics. Sufficient condition for flat metrics. Georg Hamel was the first to contribute to the solution of Hilbert's fourth problem. He proved the following statement. Theorem. A regular Finsler metric formula_26 is flat if and only if it satisfies the conditions: formula_27 Crofton formula. Consider the set of all oriented lines on a plane. Each line is defined by the parameters formula_28 and formula_29 where formula_28 is the distance from the origin to the line, and formula_30 is the angle between the line and the "x"-axis. Then the set of all oriented lines is homeomorphic to a circular cylinder of radius 1 with the area element formula_31. Let formula_32 be a rectifiable curve on a plane. Then the length of formula_32 is formula_33 where formula_34 is the set of lines that intersect the curve formula_32, and formula_35 is the number of intersections of the line with formula_32. Crofton proved this statement in 1870.
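The Crofton formula is easy to test numerically. The sketch below is illustrative only (not from the original text); it samples random oriented lines, parametrised as in the article by the angle formula_30 and the distance formula_28 from the origin (taken here with a sign, so that each oriented line corresponds to one parameter pair), and estimates the length of a straight segment from the average number of intersections. The same measure on lines underlies the Blaschke–Busemann construction described next.

import math, random

def crofton_length_estimate(segment_length=1.0, n_lines=200_000, seed=0):
    """Monte Carlo check of L = (1/4) * double integral of n(p, phi) dp dphi.

    Oriented lines are parametrised by the direction angle phi in [0, 2*pi)
    and a signed distance p from the origin, sampled uniformly from [-R, R]
    with R large enough to cover every line meeting the test segment.  The
    test curve is the segment from (0, 0) to (segment_length, 0), so a line
    meets it at most once and n is simply an indicator.
    """
    rng = random.Random(seed)
    R = 2.0 * segment_length
    hits = 0
    for _ in range(n_lines):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        p = rng.uniform(-R, R)
        s = math.sin(phi)
        if abs(s) < 1e-12:          # line (nearly) parallel to the segment
            continue
        # Points (x, y) on the line satisfy -x*sin(phi) + y*cos(phi) = p,
        # so the intersection with y = 0 lies at x = -p / sin(phi).
        x = -p / s
        if 0.0 <= x <= segment_length:
            hits += 1
    box_measure = (2.0 * math.pi) * (2.0 * R)      # measure of the sampled (phi, p) box
    integral = box_measure * hits / n_lines         # estimate of the double integral
    return integral / 4.0

print(crofton_length_estimate())    # close to 1.0, the true length of the segment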
A similar statement holds for a projective space. Blaschke–Busemann measure. In 1966, in his talk at the International Mathematical Congress in Moscow, Herbert Busemann introduced a new class of flat metrics. On a set of lines on the projective plane formula_36 he introduced a completely additive non-negative measure formula_37, which satisfies the following conditions: 1) formula_38, where formula_39 is the set of straight lines passing through a point "P"; 2) formula_40, where formula_41 is the set of straight lines passing through some set "X" that contains a straight line segment; 3) formula_42 is finite. If we consider a formula_37-metric in an arbitrary convex domain formula_34 of a projective space formula_36, then condition 3) should be replaced by the following: for any set "H" such that "H" is contained in formula_34 and the closure of "H" does not intersect the boundary of formula_34, the inequality formula_43 holds. Using this measure, the formula_37-metric on formula_36 is defined by formula_44 where formula_45 is the set of straight lines that intersect the segment formula_46. The triangle inequality for this metric follows from Pasch's theorem. Theorem. formula_37-metric on formula_36 is flat, i.e., the geodesics are the straight lines of the projective space. But Busemann was far from the idea that formula_37-metrics exhaust all flat metrics. He wrote, "The freedom in the choice of a metric with given geodesics is for non-Riemannian metrics so great that it may be doubted whether there really exists a convincing characterization of all Desarguesian spaces". Two-dimensional case. Pogorelov's theorem. The following theorem was proved by Pogorelov in 1973. Theorem. "Any two-dimensional continuous complete flat metric is a formula_37-metric." Thus Hilbert's fourth problem for the two-dimensional case was completely solved. A consequence of this is that if you glue two copies of the same planar convex shape boundary to boundary, with an angle twist between them, you will get a 3D object without crease lines, the two faces being developable. Ambartsumian's proofs. In 1976, Ambartsumian proposed another proof of Hilbert's fourth problem. His proof uses the fact that in the two-dimensional case the whole measure can be restored by its values on biangles, and thus be defined on triangles in the same way as the area of a triangle is defined on a sphere. Since the triangle inequality holds, it follows that this measure is positive on non-degenerate triangles and is determined on all Borel sets. However, this structure cannot be generalized to higher dimensions because of Hilbert's third problem solved by Max Dehn. In the two-dimensional case, polygons with the same area are scissors-congruent. As was shown by Dehn this is not true for a higher dimension. Three dimensional case. For the three-dimensional case Pogorelov proved the following theorem. Theorem. "Any three-dimensional regular complete flat metric is a formula_37-metric." However, in the three-dimensional case formula_37-measures can take either positive or negative values. Flatness of the regular metric defined by the set function formula_37 is characterized by three necessary and sufficient conditions on formula_37. Moreover, Pogorelov showed that any complete continuous flat metric in the three-dimensional case is the limit of regular formula_37-metrics with the uniform convergence on any compact sub-domain of the metric's domain. He called them generalized formula_37-metrics. Thus Pogorelov could prove the following statement. Theorem. "In the three-dimensional case any complete continuous flat metric is a formula_37-metric in generalized meaning."
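To make the notion of a flat metric concrete, the following sketch (illustrative code, not part of the original text) evaluates Hilbert's cross-ratio distance from the section above in the simplest case, the open unit disk, and checks that the distance is additive along chords, so that straight chords behave as geodesics, exactly as flatness requires. The boundary points are labelled here so that the logarithm is non-negative.

import math

def chord_endpoints(p, q):
    """Intersections of the line through p and q with the unit circle.

    Returns (a, b) with a beyond p and b beyond q on the chord; p and q are
    points strictly inside the unit disk, given as (x, y) tuples.
    """
    px, py = p
    dx, dy = q[0] - p[0], q[1] - p[1]
    # Solve |p + t*(q - p)|^2 = 1 for t.
    A = dx * dx + dy * dy
    B = 2.0 * (px * dx + py * dy)
    C = px * px + py * py - 1.0
    disc = math.sqrt(B * B - 4.0 * A * C)
    t1 = (-B - disc) / (2.0 * A)     # t1 < 0, beyond p
    t2 = (-B + disc) / (2.0 * A)     # t2 > 1, beyond q
    return (px + t1 * dx, py + t1 * dy), (px + t2 * dx, py + t2 * dy)

def hilbert_distance(p, q):
    """Hilbert distance in the unit disk: half the log of the cross-ratio."""
    if p == q:
        return 0.0
    dist = lambda u, v: math.hypot(u[0] - v[0], u[1] - v[1])
    a, b = chord_endpoints(p, q)
    return 0.5 * math.log((dist(q, a) * dist(p, b)) / (dist(p, a) * dist(q, b)))

# Additivity along a chord: p, q, r collinear with q between p and r.
p, q, r = (-0.5, 0.2), (0.1, 0.2), (0.7, 0.2)
print(hilbert_distance(p, q) + hilbert_distance(q, r))   # equals the next value
print(hilbert_distance(p, r))
# The triangle inequality is strict once the middle point leaves the chord.
s = (0.1, 0.5)
print(hilbert_distance(p, s) + hilbert_distance(s, r) > hilbert_distance(p, r))  # True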
Busemann, in his review of Pogorelov’s book "Hilbert’s Fourth Problem", wrote, "In the spirit of the time Hilbert restricted himself to "n" = 2, 3 and so does Pogorelov. However, this has doubtless pedagogical reasons, because he addresses a wide class of readers. The real difference is between "n" = 2 and "n"&gt;2. Pogorelov's method works for "n"&gt;3, but requires greater technicalities". Multidimensional case. The multi-dimensional case of Hilbert's fourth problem was studied by Szabo. In 1986, he proved, as he wrote, the generalized Pogorelov theorem. Theorem. Each "n"-dimensional Desarguesian space of the class formula_47 is generated by the Blaschke–Busemann construction. A formula_37-measure that generates a flat metric has several characteristic properties. An example was also given of a flat metric not generated by the Blaschke–Busemann construction. Szabo described all continuous flat metrics in terms of generalized functions. Hilbert's fourth problem and convex bodies. Hilbert's fourth problem is also closely related to the properties of convex bodies. A convex polyhedron is called a zonotope if it is the Minkowski sum of segments. A convex body which is a limit of zonotopes in the Blaschke–Hausdorff metric is called a zonoid. For zonoids, the support function is represented by an integral of the form (1) over the sphere, where formula_48 is an even positive Borel measure on a sphere formula_49. The Minkowski space is generated by the Blaschke–Busemann construction if and only if the support function of the indicatrix has the form of (1), where formula_48 is an even but not necessarily positive Borel measure. The bodies bounded by such hypersurfaces are called generalized zonoids. The octahedron formula_50 in the Euclidean space formula_51 is not a generalized zonoid. From the above statement it follows that the flat metric of Minkowski space with the norm formula_52 is not generated by the Blaschke–Busemann construction. Generalizations of Hilbert's fourth problem. A correspondence was found between the flat "n"-dimensional Finsler metrics and special symplectic forms on the Grassmann manifold formula_53 in formula_54. Periodic solutions of Hilbert's fourth problem have also been considered. Another exposition of Hilbert's fourth problem can be found in the work of Paiva. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "RP^{n}" }, { "math_id": 1, "text": "F_{0}\\subset \\mathbb{E}^{n}" }, { "math_id": 2, "text": "F_{0}=\\{y\\in E^{n}:F(y)=1\\}," }, { "math_id": 3, "text": "F=F(y)" }, { "math_id": 4, "text": "F(y)\\geqslant 0, \\qquad F(y)=0 \\Leftrightarrow y=0;" }, { "math_id": 5, "text": "F(\\lambda y)=\\lambda F(y), \\qquad \\lambda\\geqslant 0;" }, { "math_id": 6, "text": "F(y)\\in C^{k}(E^{n}\\setminus \\{0\\}), \\qquad k\\geqslant 3;" }, { "math_id": 7, "text": " \\frac{\\partial^2 F^2}{\\partial y^i \\, \\partial y^j}\\xi^i\\xi^j>0" }, { "math_id": 8, "text": "\\|OA\\|_M=\\frac{\\|OA\\|_{\\mathbb{E}}}{\\|OL\\|_{\\mathbb{E}}}." }, { "math_id": 9, "text": "F_{0}" }, { "math_id": 10, "text": "TM=\\{(x,y)|x\\in M, y\\in T_xM\\}" }, { "math_id": 11, "text": "F(x,y)\\colon TM \\rightarrow [0, +\\infty)" }, { "math_id": 12, "text": "F(x,y)\\in C^{k}(TM\\setminus \\{0\\}), \\qquad k\\geqslant 3" }, { "math_id": 13, "text": "x\\in M" }, { "math_id": 14, "text": "F(x, y)" }, { "math_id": 15, "text": "T_{x}M" }, { "math_id": 16, "text": "(M, F)" }, { "math_id": 17, "text": "U\\subset (\\mathbb{E}^{n+1}, \\| \\cdot \\|_{\\mathbb{E}})" }, { "math_id": 18, "text": "\\partial U" }, { "math_id": 19, "text": "d_U(p, q)=\\frac{1}{2} \\ln \\frac{\\|q-q_1\\|_E}{\\|q-p_1\\|_E}\\times \\frac{\\|p-p_1\\|_E}{\\|p-q_1\\|_E}." }, { "math_id": 20, "text": "d_{U}" }, { "math_id": 21, "text": "F_{U}" }, { "math_id": 22, "text": "x\\in U" }, { "math_id": 23, "text": "y\\in T_{x}U" }, { "math_id": 24, "text": "F_U(x, y)=\\frac{1}{2}\\|y\\|_{\\mathbb{E}} \\left( \\frac{1}{\\|x-x_{+}\\|_{\\mathbb{E}}}+\\frac{1}{\\|x-x_{-}\\|_{\\mathbb{E}}} \\right). " }, { "math_id": 25, "text": "\\partial U " }, { "math_id": 26, "text": "F(x,y)=F(x_1,\\ldots,x_n,y_1,\\ldots,y_n)" }, { "math_id": 27, "text": "\\frac{\\partial^2 F^2}{\\partial x^i \\, \\partial y^j} = \\frac{\\partial^2 F^2}{\\partial x^j \\, \\partial y^i}, \\, i,j=1,\\ldots,n." }, { "math_id": 28, "text": "\\rho" }, { "math_id": 29, "text": "\\varphi," }, { "math_id": 30, "text": "\\varphi" }, { "math_id": 31, "text": "dS = d\\rho \\, d\\varphi " }, { "math_id": 32, "text": "\\gamma" }, { "math_id": 33, "text": "L = \\frac{1}{4} \\iint_\\Omega n( \\rho, \\varphi) \\, dp \\, d\\varphi" }, { "math_id": 34, "text": "\\Omega" }, { "math_id": 35, "text": "n(p, \\varphi)" }, { "math_id": 36, "text": "RP^{2}" }, { "math_id": 37, "text": "\\sigma" }, { "math_id": 38, "text": "\\sigma (\\tau P)=0" }, { "math_id": 39, "text": "\\tau P" }, { "math_id": 40, "text": "\\sigma (\\tau X)>0" }, { "math_id": 41, "text": "\\tau X" }, { "math_id": 42, "text": "\\sigma (RP^{n})" }, { "math_id": 43, "text": "\\sigma(\\pi H)<\\infty" }, { "math_id": 44, "text": "|x,y|=\\sigma \\left( \\tau [x,y] \\right)," }, { "math_id": 45, "text": "\\tau [x,y]" }, { "math_id": 46, "text": "[x,y]" }, { "math_id": 47, "text": "C^{n+2}, n>2" }, { "math_id": 48, "text": "\\sigma (u)" }, { "math_id": 49, "text": "S^{n-1}" }, { "math_id": 50, "text": "|x_1| + |x_2| + |x_3| \\leq 1" }, { "math_id": 51, "text": "E^3" }, { "math_id": 52, "text": "\\|x\\| = \\max\\{|x_1|, |x_2|, |x_3|\\}" }, { "math_id": 53, "text": "G(n+1,2)" }, { "math_id": 54, "text": "E^{n+1}" }, { "math_id": 55, "text": "C^2" } ]
https://en.wikipedia.org/wiki?curid=1573991
1574901
Cardinality of the continuum
Cardinality of the set of real numbers In set theory, the cardinality of the continuum is the cardinality or "size" of the set of real numbers formula_0, sometimes called the continuum. It is an infinite cardinal number and is denoted by formula_1 (lowercase Fraktur "c") or formula_2 The real numbers formula_0 are more numerous than the natural numbers formula_3. Moreover, formula_0 has the same number of elements as the power set of formula_3. Symbolically, if the cardinality of formula_3 is denoted as formula_4, the cardinality of the continuum is &lt;templatestyles src="Block indent/styles.css"/&gt;formula_5 This was proven by Georg Cantor in his uncountability proof of 1874, part of his groundbreaking study of different infinities. The inequality was later stated more simply in his diagonal argument in 1891. Cantor defined cardinality in terms of bijective functions: two sets have the same cardinality if, and only if, there exists a bijective function between them. Between any two real numbers "a" &lt; "b", no matter how close they are to each other, there are always infinitely many other real numbers, and Cantor showed that they are as many as those contained in the whole set of real numbers. In other words, the open interval ("a","b") is equinumerous with formula_0, as well as with several other infinite sets, such as any "n"-dimensional Euclidean space formula_6 (see space filling curve). That is, &lt;templatestyles src="Block indent/styles.css"/&gt;formula_7 The smallest infinite cardinal number is formula_4 (aleph-null). The second smallest is formula_8 (aleph-one). The continuum hypothesis, which asserts that there are no sets whose cardinality is strictly between formula_4 and formula_9, means that formula_10. The truth or falsity of this hypothesis is undecidable and cannot be proven within the widely used Zermelo–Fraenkel set theory with axiom of choice (ZFC). Properties. Uncountability. Georg Cantor introduced the concept of cardinality to compare the sizes of infinite sets. He famously showed that the set of real numbers is uncountably infinite. That is, formula_11 is strictly greater than the cardinality of the natural numbers, formula_4: &lt;templatestyles src="Block indent/styles.css"/&gt;formula_12 In practice, this means that there are strictly more real numbers than there are integers. Cantor proved this statement in several different ways. For more information on this topic, see Cantor's first uncountability proof and Cantor's diagonal argument. Cardinal equalities. A variation of Cantor's diagonal argument can be used to prove Cantor's theorem, which states that the cardinality of any set is strictly less than that of its power set. That is, formula_13 (and so that the power set formula_14 of the natural numbers formula_3 is uncountable). In fact, the cardinality of formula_14, by definition formula_15, is equal to formula_11. This can be shown by providing one-to-one mappings in both directions between subsets of a countably infinite set and real numbers, and applying the Cantor–Bernstein–Schroeder theorem according to which two sets with one-to-one mappings in both directions have the same cardinality. In one direction, reals can be equated with Dedekind cuts, sets of rational numbers, or with their binary expansions. 
In the other direction, the binary expansions of numbers in the half-open interval formula_16, viewed as sets of positions where the expansion is one, almost give a one-to-one mapping from subsets of a countable set (the set of positions in the expansions) to real numbers, but it fails to be one-to-one for numbers with terminating binary expansions, which can also be represented by a non-terminating expansion that ends in a repeating sequence of 1s. This can be made into a one-to-one mapping by adding one to the non-terminating repeating-1 expansions, mapping them into formula_17. Thus, we conclude that &lt;templatestyles src="Block indent/styles.css"/&gt;formula_18 The cardinal equality formula_19 can be demonstrated using cardinal arithmetic: &lt;templatestyles src="Block indent/styles.css"/&gt;formula_20 By using the rules of cardinal arithmetic, one can also show that &lt;templatestyles src="Block indent/styles.css"/&gt;formula_21 where "n" is any finite cardinal ≥ 2 and &lt;templatestyles src="Block indent/styles.css"/&gt;formula_22 where formula_23 is the cardinality of the power set of R, and formula_24. Alternative explanation for formula_11 = formula_15. Every real number has at least one infinite decimal expansion. For example, &lt;templatestyles src="Block indent/styles.css"/&gt;1/2 = 0.50000... &lt;templatestyles src="Block indent/styles.css"/&gt;1/3 = 0.33333... &lt;templatestyles src="Block indent/styles.css"/&gt;π = 3.14159... In any given case, the number of decimal places is countable since they can be put into a one-to-one correspondence with the set of natural numbers formula_25. This makes it sensible to talk about, say, the first, the one-hundredth, or the millionth decimal place of π. Since the natural numbers have cardinality formula_26 each real number has formula_4 digits in its expansion. Since each real number can be broken into an integer part and a decimal fraction, we get: &lt;templatestyles src="Block indent/styles.css"/&gt;formula_27 where we used the fact that &lt;templatestyles src="Block indent/styles.css"/&gt;formula_28 On the other hand, if we map formula_29 to formula_30 and consider that decimal fractions containing only 3 or 7 are only a part of the real numbers, then we get &lt;templatestyles src="Block indent/styles.css"/&gt;formula_31 and thus &lt;templatestyles src="Block indent/styles.css"/&gt;formula_32 Beth numbers. The sequence of beth numbers is defined by setting formula_33 and formula_34. So formula_11 is the second beth number, beth-one: &lt;templatestyles src="Block indent/styles.css"/&gt;formula_35 The third beth number, beth-two, is the cardinality of the power set of formula_36 (i.e. the set of all subsets of the real line): &lt;templatestyles src="Block indent/styles.css"/&gt;formula_37 The continuum hypothesis. The continuum hypothesis asserts that formula_11 is also the second aleph number, formula_8. In other words, the continuum hypothesis states that there is no set formula_38 whose cardinality lies strictly between formula_4 and formula_11 &lt;templatestyles src="Block indent/styles.css"/&gt;formula_39 This statement is now known to be independent of the axioms of Zermelo–Fraenkel set theory with the axiom of choice (ZFC), as shown by Kurt Gödel and Paul Cohen. That is, both the hypothesis and its negation are consistent with these axioms. In fact, for every nonzero natural number "n", the equality formula_11 = formula_40 is independent of ZFC (case formula_41 being the continuum hypothesis).
The same is true for most other alephs, although in some cases, equality can be ruled out by König's theorem on the grounds of cofinality (e.g. formula_42). In particular, formula_43 could be either formula_8 or formula_44, where formula_45 is the first uncountable ordinal, so it could be either a successor cardinal or a limit cardinal, and either a regular cardinal or a singular cardinal. Sets with cardinality of the continuum. A great many sets studied in mathematics have cardinality equal to formula_11. Some common examples are the following: Sets with greater cardinality. Sets with cardinality greater than formula_11 include: These all have cardinality formula_50 (beth two) Bibliography. "This article incorporates material from cardinality of the continuum on PlanetMath, which is licensed under the ."
[ { "math_id": 0, "text": "\\mathbb R" }, { "math_id": 1, "text": "\\bold\\mathfrak c" }, { "math_id": 2, "text": "\\bold|\\bold\\mathbb R\\bold|" }, { "math_id": 3, "text": "\\mathbb N" }, { "math_id": 4, "text": "\\aleph_0" }, { "math_id": 5, "text": "\\mathfrak c = 2^{\\aleph_0} > \\aleph_0. " }, { "math_id": 6, "text": "\\mathbb R^n" }, { "math_id": 7, "text": "|(a,b)| = |\\mathbb R| = |\\mathbb R^n|." }, { "math_id": 8, "text": "\\aleph_1" }, { "math_id": 9, "text": "\\mathfrak c" }, { "math_id": 10, "text": "\\mathfrak c = \\aleph_1" }, { "math_id": 11, "text": "{\\mathfrak c}" }, { "math_id": 12, "text": "\\aleph_0 < \\mathfrak c." }, { "math_id": 13, "text": "|A| < 2^{|A|}" }, { "math_id": 14, "text": "\\wp(\\mathbb N)" }, { "math_id": 15, "text": "2^{\\aleph_0}" }, { "math_id": 16, "text": "[0,1)" }, { "math_id": 17, "text": "[1,2)" }, { "math_id": 18, "text": "\\mathfrak c = |\\wp(\\mathbb{N})| = 2^{\\aleph_0}." }, { "math_id": 19, "text": "\\mathfrak{c}^2 = \\mathfrak{c}" }, { "math_id": 20, "text": "\\mathfrak{c}^2 = (2^{\\aleph_0})^2 = 2^{2\\times{\\aleph_0}} = 2^{\\aleph_0} = \\mathfrak{c}." }, { "math_id": 21, "text": "\\mathfrak c^{\\aleph_0} = {\\aleph_0}^{\\aleph_0} = n^{\\aleph_0} = \\mathfrak c^n = \\aleph_0 \\mathfrak c = n \\mathfrak c = \\mathfrak c" }, { "math_id": 22, "text": " \\mathfrak c ^{\\mathfrak c} = (2^{\\aleph_0})^{\\mathfrak c} = 2^{\\mathfrak c\\times\\aleph_0} = 2^{\\mathfrak c}" }, { "math_id": 23, "text": "2 ^{\\mathfrak c}" }, { "math_id": 24, "text": "2 ^{\\mathfrak c} > \\mathfrak c " }, { "math_id": 25, "text": "\\mathbb{N}" }, { "math_id": 26, "text": "\\aleph_0," }, { "math_id": 27, "text": "{\\mathfrak c} \\leq \\aleph_0 \\cdot 10^{\\aleph_0} \\leq 2^{\\aleph_0} \\cdot {(2^4)}^{\\aleph_0} = 2^{\\aleph_0 + 4 \\cdot \\aleph_0} = 2^{\\aleph_0} " }, { "math_id": 28, "text": "\\aleph_0 + 4 \\cdot \\aleph_0 = \\aleph_0 \\," }, { "math_id": 29, "text": "2 = \\{0, 1\\}" }, { "math_id": 30, "text": "\\{3, 7\\}" }, { "math_id": 31, "text": "2^{\\aleph_0} \\leq {\\mathfrak c} \\," }, { "math_id": 32, "text": "{\\mathfrak c} = 2^{\\aleph_0} \\,." }, { "math_id": 33, "text": "\\beth_0 = \\aleph_0" }, { "math_id": 34, "text": "\\beth_{k+1} = 2^{\\beth_k}" }, { "math_id": 35, "text": "\\mathfrak c = \\beth_1." }, { "math_id": 36, "text": "\\mathbb{R}" }, { "math_id": 37, "text": "2^\\mathfrak c = \\beth_2." }, { "math_id": 38, "text": "A" }, { "math_id": 39, "text": "\\nexists A \\quad:\\quad \\aleph_0 < |A| < \\mathfrak c." }, { "math_id": 40, "text": "\\aleph_n" }, { "math_id": 41, "text": "n=1" }, { "math_id": 42, "text": "\\mathfrak{c}\\neq\\aleph_\\omega" }, { "math_id": 43, "text": "\\mathfrak{c}" }, { "math_id": 44, "text": "\\aleph_{\\omega_1}" }, { "math_id": 45, "text": "\\omega_1" }, { "math_id": 46, "text": "\\mathcal{P}(\\mathbb{R})" }, { "math_id": 47, "text": "2^{\\mathbb{R}}" }, { "math_id": 48, "text": "\\mathbb{R}^\\mathbb{R}" }, { "math_id": 49, "text": "\\mathbb{Q}" }, { "math_id": 50, "text": "2^\\mathfrak c = \\beth_2" } ]
https://en.wikipedia.org/wiki?curid=1574901
1575447
Shear modulus
Ratio of shear stress to shear strain In materials science, shear modulus or modulus of rigidity, denoted by "G", or sometimes "S" or "μ", is a measure of the elastic shear stiffness of a material and is defined as the ratio of shear stress to the shear strain: formula_0 where formula_1 = shear stress formula_2 is the force which acts formula_3 is the area on which the force acts formula_4 = shear strain. In engineering formula_5, elsewhere formula_6 formula_7 is the transverse displacement formula_8 is the initial length of the area. The derived SI unit of shear modulus is the pascal (Pa), although it is usually expressed in gigapascals (GPa) or in thousand pounds per square inch (ksi). Its dimensional form is M1L−1T−2, replacing "force" by "mass" times "acceleration". Explanation. The shear modulus is one of several quantities for measuring the stiffness of materials. All of them arise in the generalized Hooke's law: These moduli are not independent, and for isotropic materials they are connected via the equations formula_9 The shear modulus is concerned with the deformation of a solid when it experiences a force parallel to one of its surfaces while its opposite face experiences an opposing force (such as friction). In the case of an object shaped like a rectangular prism, it will deform into a parallelepiped. Anisotropic materials such as wood, paper and also essentially all single crystals exhibit differing material response to stress or strain when tested in different directions. In this case, one may need to use the full tensor-expression of the elastic constants, rather than a single scalar value. One possible definition of a fluid would be a material with zero shear modulus. Shear waves. In homogeneous and isotropic solids, there are two kinds of waves, pressure waves and shear waves. The velocity of a shear wave, formula_10 is controlled by the shear modulus, formula_11 where G is the shear modulus formula_12 is the solid's density. Shear modulus of metals. The shear modulus of metals is usually observed to decrease with increasing temperature. At high pressures, the shear modulus also appears to increase with the applied pressure. Correlations between the melting temperature, vacancy formation energy, and the shear modulus have been observed in many metals. Several models exist that attempt to predict the shear modulus of metals (and possibly that of alloys). Shear modulus models that have been used in plastic flow computations include: Varshni-Chen-Gray model. The Varshni-Chen-Gray model (sometimes referred to as the Varshni equation) has the form: formula_13 where formula_14 is the shear modulus at formula_15, and formula_16 and formula_17 are material constants. SCG model. The Steinberg-Cochran-Guinan (SCG) shear modulus model is pressure dependent and has the form formula_18 where, μ0 is the shear modulus at the reference state ("T" = 300 K, "p" = 0, η = 1), "p" is the pressure, and "T" is the temperature. NP model. The Nadal-Le Poac (NP) shear modulus model is a modified version of the SCG model. The empirical temperature dependence of the shear modulus in the SCG model is replaced with an equation based on Lindemann melting theory. The NP shear modulus model has the form: formula_19 where formula_20 and μ0 is the shear modulus at absolute zero and ambient pressure, ζ is an area, "m" is the atomic mass, and "f" is the Lindemann constant. Shear relaxation modulus. 
The shear relaxation modulus formula_21 is the time-dependent generalization of the shear modulus formula_22: formula_23. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
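As a quick illustration of the defining ratio and of the shear-wave relation above, the following sketch uses made-up numbers (they are not data from the article) to compute G from a force and displacement measurement and the corresponding shear-wave speed:

import math

def shear_modulus(force_N, area_m2, dx_m, length_m):
    """G = (F/A) / (dx/l): shear stress divided by shear strain."""
    return (force_N / area_m2) / (dx_m / length_m)

def shear_wave_speed(G_Pa, rho_kg_m3):
    """v_s = sqrt(G / rho) for a homogeneous, isotropic solid."""
    return math.sqrt(G_Pa / rho_kg_m3)

# Hypothetical sample: 1 kN over 1 cm^2, 0.02 mm displacement on a 10 mm height.
G = shear_modulus(force_N=1.0e3, area_m2=1.0e-4, dx_m=2.0e-5, length_m=1.0e-2)
print(f"G   = {G / 1e9:.1f} GPa")                          # 5.0 GPa

# Illustrative metal-like values: G ~ 26 GPa, rho ~ 2700 kg/m^3.
print(f"v_s = {shear_wave_speed(26e9, 2700):.0f} m/s")     # about 3100 m/s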
[ { "math_id": 0, "text": "G \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac {\\tau_{xy}} {\\gamma_{xy}} = \\frac{F/A}{\\Delta x/l} = \\frac{F l}{A \\Delta x} " }, { "math_id": 1, "text": "\\tau_{xy} = F/A \\," }, { "math_id": 2, "text": "F" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "\\gamma_{xy}" }, { "math_id": 5, "text": ":=\\Delta x/l = \\tan \\theta " }, { "math_id": 6, "text": " := \\theta" }, { "math_id": 7, "text": "\\Delta x" }, { "math_id": 8, "text": "l" }, { "math_id": 9, "text": " E = 2G(1+\\nu) = 3K(1-2\\nu)" }, { "math_id": 10, "text": "(v_s)" }, { "math_id": 11, "text": "v_s = \\sqrt{\\frac {G} {\\rho} }" }, { "math_id": 12, "text": "\\rho" }, { "math_id": 13, "text": " \n \\mu(T) = \\mu_0 - \\frac{D}{\\exp(T_0/T) - 1}\n " }, { "math_id": 14, "text": " \\mu_0 " }, { "math_id": 15, "text": " T=0K " }, { "math_id": 16, "text": "D" }, { "math_id": 17, "text": " T_0 " }, { "math_id": 18, "text": " \n \\mu(p,T) = \\mu_0 + \\frac{\\partial \\mu}{\\partial p} \\frac{p}{\\eta^\\frac{1}{3}} +\n \\frac{\\partial \\mu}{\\partial T}(T - 300) ; \\quad\n \\eta := \\frac{\\rho}{\\rho_0}\n " }, { "math_id": 19, "text": "\n \\mu(p,T) = \\frac{1}{\\mathcal{J}\\left(\\hat{T}\\right)}\n \\left[\n \\left(\\mu_0 + \\frac{\\partial \\mu}{\\partial p} \\frac{p}{\\eta^\\frac{1}{3}} \\right)\n \\left(1 - \\hat{T}\\right) + \\frac{\\rho}{Cm}~T\n \\right]; \\quad\n C := \\frac{\\left(6\\pi^2\\right)^\\frac{2}{3}}{3} f^2\n " }, { "math_id": 20, "text": "\n \\mathcal{J}(\\hat{T}) := 1 + \\exp\\left[-\\frac{1 + 1/\\zeta}\n {1 + \\zeta/\\left(1 - \\hat{T}\\right)}\\right] \\quad\n \\text{for} \\quad \\hat{T} := \\frac{T}{T_m}\\in[0, 6+ \\zeta],\n " }, { "math_id": 21, "text": "G(t)" }, { "math_id": 22, "text": "G" }, { "math_id": 23, "text": "G=\\lim_{t\\to \\infty} G(t)" } ]
https://en.wikipedia.org/wiki?curid=1575447
157550
Karl Schwarzschild
German physicist (1873–1916) Karl Schwarzschild (; 9 October 1873 – 11 May 1916) was a German physicist and astronomer. Schwarzschild provided the first exact solution to the Einstein field equations of general relativity, for the limited case of a single spherical non-rotating mass, which he accomplished in 1915, the same year that Einstein first introduced general relativity. The Schwarzschild solution, which makes use of Schwarzschild coordinates and the Schwarzschild metric, leads to a derivation of the Schwarzschild radius, which is the size of the event horizon of a non-rotating black hole. Schwarzschild accomplished this while serving in the German army during World War I. He died the following year from the autoimmune disease pemphigus, which he developed while at the Russian front. Various forms of the disease particularly affect people of Ashkenazi Jewish origin. Asteroid 837 Schwarzschilda is named in his honour, as is the large crater "Schwarzschild", on the far side of the Moon. Life. Karl Schwarzschild was born on 9 October 1873 in Frankfurt on Main, the eldest of six boys and one girl, to Jewish parents. His father was active in the business community of the city, and the family had ancestors in Frankfurt from the sixteenth century onwards. The family owned two fabric stores in Frankfurt. His brother Alfred became a painter. The young Schwarzschild attended a Jewish primary school until 11 years of age and then the Lessing-Gymnasium (secondary school). He received an all-encompassing education, including subjects like Latin, Ancient Greek, music and art, but developed a special interest in astronomy early on. In fact he was something of a child prodigy, having two papers on binary orbits (celestial mechanics) published before the age of sixteen. After graduation in 1890, he attended the University of Strasbourg to study astronomy. After two years he transferred to the Ludwig Maximilian University of Munich where he obtained his doctorate in 1896 for a work on Henri Poincaré's theories. From 1897, he worked as assistant at the Kuffner Observatory in Vienna. His work here concentrated on the photometry of star clusters and laid the foundations for a formula linking the intensity of the starlight, exposure time, and the resulting contrast on a photographic plate. An integral part of that theory is the Schwarzschild exponent (astrophotography). In 1899, he returned to Munich to complete his Habilitation. From 1901 until 1909, he was a professor at the prestigious Göttingen Observatory within the University of Göttingen, where he had the opportunity to work with some significant figures, including David Hilbert and Hermann Minkowski. Schwarzschild became the director of the observatory. He married Else Rosenbach, a great-granddaughter of Friedrich Wöhler and daughter of a professor of surgery at Göttingen, in 1909. Later that year they moved to Potsdam, where he took up the post of director of the Astrophysical Observatory. This was then the most prestigious post available for an astronomer in Germany. From 1912, Schwarzschild was a member of the Prussian Academy of Sciences. At the outbreak of World War I in 1914, Schwarzschild volunteered for service in the German army despite being over 40 years old. He served on both the western and eastern fronts, specifically helping with ballistic calculations and rising to the rank of second lieutenant in the artillery. 
While serving on the front in Russia in 1915, he began to suffer from pemphigus, a rare and painful autoimmune skin-disease. Nevertheless, he managed to write three outstanding papers, two on the theory of relativity and one on quantum theory. His papers on relativity produced the first exact solutions to the Einstein field equations, and a minor modification of these results gives the well-known solution that now bears his name — the Schwarzschild metric. In March 1916, Schwarzschild left military service because of his illness and returned to Göttingen. Two months later, on May 11, 1916, his struggle with pemphigus may have led to his death at the age of 42. He rests in his family grave at the Stadtfriedhof Göttingen. With his wife Else he had three children: Work. Thousands of dissertations, articles, and books have since been devoted to the study of Schwarzschild's solutions to the Einstein field equations. However, although his best known work lies in the area of general relativity, his research interests were extremely broad, including work in celestial mechanics, observational stellar photometry, quantum mechanics, instrumental astronomy, stellar structure, stellar statistics, Halley's comet, and spectroscopy. Some of his particular achievements include measurements of variable stars, using photography, and the improvement of optical systems, through the perturbative investigation of geometrical aberrations. Physics of photography. While at Vienna in 1897, Schwarzschild developed a formula, now known as the Schwarzschild law, to calculate the optical density of photographic material. It involved an exponent now known as the Schwarzschild exponent, which is the formula_0 in the formula: formula_1 (where formula_2 is optical density of exposed photographic emulsion, a function of formula_3, the intensity of the source being observed, and formula_4, the exposure time, with formula_0 a constant). This formula was important for enabling more accurate photographic measurements of the intensities of faint astronomical sources. Electrodynamics. According to Wolfgang Pauli, Schwarzschild is the first to introduce the correct Lagrangian formalism of the electromagnetic field as formula_5 where formula_6 are the electric and applied magnetic fields, formula_7 is the vector potential and formula_8 is the electric potential. He also introduced a field free variational formulation of electrodynamics (also known as "action at distance" or "direct interparticle action") based only on the world line of particles as formula_9 where formula_10 are the world lines of the particle, formula_11 the (vectorial) arc element along the world line. Two points on two world lines contribute to the Lagrangian (are coupled) only if they are a zero Minkowskian distance (connected by a light ray), hence the term formula_12. The idea was further developed by Hugo Tetrode and Adriaan Fokker in the 1920s and John Archibald Wheeler and Richard Feynman in the 1940s and constitutes an alternative but equivalent formulation of electrodynamics. Relativity. Einstein himself was pleasantly surprised to learn that the field equations admitted exact solutions, because of their "prima facie" complexity, and because he himself had produced only an approximate solution. Einstein's approximate solution was given in his famous 1915 article on the advance of the perihelion of Mercury. There, Einstein used rectangular coordinates to approximate the gravitational field around a spherically symmetric, non-rotating, non-charged mass. 
Schwarzschild, in contrast, chose a more elegant "polar-like" coordinate system and was able to produce an exact solution which he first set down in a letter to Einstein of 22 December 1915, written while he was serving in the war stationed on the Russian front. He concluded the letter by writing: "As you see, the war is kindly disposed toward me, allowing me, despite fierce gunfire at a decidedly terrestrial distance, to take this walk into this your land of ideas." In 1916, Einstein wrote to Schwarzschild on this result: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;I have read your paper with the utmost interest. I had not expected that one could formulate the exact solution of the problem in such a simple way. I liked very much your mathematical treatment of the subject. Next Thursday I shall present the work to the Academy with a few words of explanation. Schwarzschild's second paper, which gives what is now known as the "Inner Schwarzschild solution" (in German: "innere Schwarzschild-Lösung"), is valid within a sphere of homogeneously and isotropically distributed molecules within a shell of radius r=R. It is applicable to solids; incompressible fluids; the sun and stars viewed as a quasi-isotropic heated gas; and any homogeneously and isotropically distributed gas. Schwarzschild's first (spherically symmetric) solution "does not" contain a coordinate singularity on a surface that is now named after him. In his coordinates, this singularity lies on the sphere of points at a particular radius, called the Schwarzschild radius: formula_13 where "G" is the gravitational constant, "M" is the mass of the central body, and "c" is the speed of light in vacuum. In cases where the radius of the central body is less than the Schwarzschild radius, formula_14 represents the radius within which all massive bodies, and even photons, must inevitably fall into the central body (ignoring quantum tunnelling effects near the boundary). When the mass density of this central body exceeds a particular limit, it triggers a gravitational collapse which, if it occurs with spherical symmetry, produces what is known as a Schwarzschild black hole. This occurs, for example, when the mass of a neutron star exceeds the Tolman–Oppenheimer–Volkoff limit (about three solar masses). Cultural references. Karl Schwarzschild appears as a character in the science fiction short story "Schwarzschild Radius" (1987) by Connie Willis. Karl Schwarzschild appears as a fictionalized character in the story “Schwarzschild’s Singularity” in the collection "When We Cease to Understand the World" (2020) by Benjamín Labatut. Works. The entire scientific estate of Karl Schwarzschild is stored in a special collection of the Lower Saxony National- and University Library of Göttingen. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
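As a simple numerical companion to the Schwarzschild radius formula above (an illustrative sketch, not part of the article; the constants are standard rounded values):

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light in vacuum, m/s

def schwarzschild_radius(mass_kg):
    """R_s = 2*G*M / c^2."""
    return 2.0 * G * mass_kg / c**2

M_sun = 1.989e30     # kg
M_earth = 5.972e24   # kg
print(f"Sun:   R_s = {schwarzschild_radius(M_sun) / 1e3:.2f} km")    # about 2.95 km
print(f"Earth: R_s = {schwarzschild_radius(M_earth) * 1e3:.2f} mm")  # about 8.9 mm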
[ { "math_id": 0, "text": "p" }, { "math_id": 1, "text": "i = f ( I\\cdot t^p )" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "I" }, { "math_id": 4, "text": "t" }, { "math_id": 5, "text": " S = (1/2) \\int (H^2-E^2) dV + \\int \\rho(\\phi - \\vec{A} \\cdot \\vec{u}) dV " }, { "math_id": 6, "text": " \\vec{E},\\vec{H} " }, { "math_id": 7, "text": "\\vec{A}" }, { "math_id": 8, "text": "\\phi" }, { "math_id": 9, "text": "\nS=\\sum_{i}m_{i}\\int_{C_{i}}ds_{i}+\\frac{1}{2}\\sum_{i,j}\\iint_{C_{i},C_{j}}q_{i}q_{j}\\delta\\left(\\left\\Vert P_{i}P_{j}\\right\\Vert \\right)d\\mathbf{s}_{i}d\\mathbf{s}_{j}\n" }, { "math_id": 10, "text": " C_\\alpha " }, { "math_id": 11, "text": " d\\mathbf{s}_{\\alpha} " }, { "math_id": 12, "text": " \\delta\\left(\\left\\Vert P_{i}P_{j}\\right\\Vert \\right) " }, { "math_id": 13, "text": "\nR_{s} = \\frac{2GM}{c^{2}}\n" }, { "math_id": 14, "text": "R_{s}" } ]
https://en.wikipedia.org/wiki?curid=157550
1575643
Mass flow meter
A mass flow meter, also known as an inertial flow meter, is a device that measures the mass flow rate of a fluid traveling through a tube. The mass flow rate is the mass of the fluid traveling past a fixed point per unit time. The mass flow meter does not measure the volume per unit time (e.g. cubic meters per second) passing through the device; it measures the mass per unit time (e.g. kilograms per second) flowing through the device. Volumetric flow rate is the mass flow rate divided by the fluid density. If the density is constant, then the relationship is simple. If the fluid has varying density, then the relationship is not simple. For example, the density of the fluid may change with temperature, pressure, or composition. The fluid may also be a combination of phases such as a fluid with entrained bubbles. The actual density can be determined owing to the dependency of sound velocity on the concentration of the liquid being measured. Operating principle of a Coriolis flow meter. There are two basic configurations of Coriolis flow meter: the "curved tube flow meter" and the "straight tube flow meter". This article discusses the curved tube design. The animations on the right do not represent an actually existing Coriolis flow meter design. The purpose of the animations is to illustrate the operating principle, and to show the connection with rotation. Fluid is being pumped through the mass flow meter. When there is mass flow, the tube twists slightly. The arm through which fluid flows away from the axis of rotation must exert a force on the fluid, to increase its angular momentum, so it bends backwards. The arm through which fluid is pushed back to the axis of rotation must exert a force on the fluid to decrease the fluid's angular momentum again, hence that arm will bend forward. In other words, the inlet arm (containing an outwards directed flow) is lagging behind the overall rotation, the part which at rest is parallel to the axis is now skewed, and the outlet arm (containing an inwards directed flow) leads the overall rotation. The animation on the right represents how curved tube mass flow meters are designed. The fluid is led through two parallel tubes. An actuator (not shown) induces equal counter vibrations on the sections parallel to the axis, to make the measuring device less sensitive to outside vibrations. The actual frequency of the vibration depends on the size of the mass flow meter, and ranges from 80 to 1000 Hz. The amplitude of the vibration is too small to be seen, but it can be felt by touch. When no fluid is flowing, the motion of the two tubes is symmetrical, as shown in the left animation. The animation on the right illustrates what happens during mass flow: some twisting of the tubes. The arm carrying the flow away from the axis of rotation must exert a force on the fluid to accelerate the flowing mass to the vibrating speed of the tubes at the outside (increase of absolute angular momentum), so it is lagging behind the overall vibration. The arm through which fluid is pushed back towards the axis of movement must exert a force on the fluid to decrease the fluid's absolute angular speed (angular momentum) again, hence that arm leads the overall vibration. The inlet arm and the outlet arm vibrate with the same frequency as the overall vibration, but when there is mass flow the two vibrations are out of sync: the inlet arm is behind, the outlet arm is ahead. 
The two vibrations are shifted in phase with respect to each other, and the degree of phase shift is a measure of the amount of mass that is flowing through the tubes. Density and volume measurements. The mass flow of a U-shaped Coriolis flow meter is given as: formula_0 where "Ku" is the temperature-dependent stiffness of the tube, "K" is a shape-dependent factor, "d" is the width, "τ" is the time lag, "ω" is the vibration frequency, and "Iu" is the inertia of the tube. As the inertia of the tube depends on its contents, knowledge of the fluid density is needed for the calculation of an accurate mass flow rate. If the density changes too often for manual calibration to be sufficient, the Coriolis flow meter can be adapted to measure the density as well. The natural vibration frequency of the flow tubes depends on the combined mass of the tube and the fluid contained in it. By setting the tube in motion and measuring the natural frequency, the mass of the fluid contained in the tube can be deduced. Dividing the mass by the known volume of the tube gives the "density" of the fluid. An instantaneous density measurement allows the calculation of flow in volume per time by dividing the mass flow by the density. Calibration. Both mass flow and density measurements depend on the vibration of the tube. Calibration is affected by changes in the rigidity of the flow tubes. Changes in temperature and pressure will cause the tube rigidity to change, but these can be compensated for through pressure and temperature zero and span compensation factors. Additional effects on tube rigidity will cause shifts in the calibration factor over time due to degradation of the flow tubes. These effects include pitting, cracking, coating, erosion or corrosion. It is not possible to compensate for these changes dynamically, but efforts to monitor the effects may be made through regular meter calibration or verification checks. If a change is deemed to have occurred, but is considered to be acceptable, the offset may be added to the existing calibration factor to ensure continued accurate measurement. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
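As a minimal numerical sketch of the two relations described above, namely the mass flow computed from the measured time lag and the density deduced from the measured natural frequency, the following Python fragment may help. Every constant, the simple mass-spring model used for the density, and the function names are illustrative assumptions for this sketch, not calibration data or part of the article.

import math

# All constants below are made-up illustrative values, not calibration data.
K_U = 1.2e4                  # temperature-dependent tube stiffness K_u
K_SHAPE = 0.8                # shape-dependent factor K
D = 0.05                     # tube width d, in metres
I_U = 1.5e-3                 # tube inertia I_u (depends on the fluid inside)
OMEGA = 2 * math.pi * 200.0  # drive (vibration) frequency, rad/s

def mass_flow(tau):
    """Q_m = (K_u - I_u * omega^2) * tau / (2 * K * d^2), in kg/s."""
    return (K_U - I_U * OMEGA**2) * tau / (2 * K_SHAPE * D**2)

def fluid_density(f_natural, k_spring=5.0e5, m_tube=0.40, v_tube=2.0e-4):
    """Deduce the fluid density from the natural frequency of the filled tube,
    modelling tube plus contents as a simple mass-spring oscillator."""
    omega_n = 2 * math.pi * f_natural
    m_total = k_spring / omega_n**2        # total vibrating mass, kg
    return (m_total - m_tube) / v_tube     # fluid density, kg/m^3

print(mass_flow(tau=3.0e-6))           # mass flow rate for a 3 microsecond time lag
print(fluid_density(f_natural=145.0))  # roughly 1000 kg/m^3 for these made-up numbers

An instantaneous volumetric flow rate would then follow by dividing the first result by the second, as described above.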
[ { "math_id": 0, "text": "Q_m=\\frac{ K_u -I_u\\omega^2 }{2Kd^2}\\tau" } ]
https://en.wikipedia.org/wiki?curid=1575643
15756448
Church's thesis (constructive mathematics)
Axiom In constructive mathematics, Church's thesis formula_0 is the principle stating that all total functions are computable functions. The similarly named Church–Turing thesis states that every "effectively calculable function" is a "computable function", thus collapsing the former notion into the latter. formula_0 is stronger in the sense that with it every function is computable. The constructivist principle is however also given, in different theories and incarnations, as a fully formal axiom. The formalizations depend on the definition of "function" and "computable" of the theory at hand. A common context is recursion theory as established since the 1930s. If formula_0 is adopted as a principle, then for a predicate of the form of a family of existence claims (e.g. formula_1 below) that is proven not to be validated for all formula_2 in a computable manner, the contrapositive of the axiom implies that this is then not validated by "any" total function (i.e. no mapping corresponding to formula_3). It thus collapses the possible scope of the notion of "function" compared to the underlying theory, restricting it to the defined notion of "computable function". In turn, the axiom is incompatible with systems that validate total functional value associations and evaluations that are also proven not to be computable. More concretely, it affects one's proof calculus in a way that it makes provable the negations of some common classically provable propositions. For example, Peano arithmetic formula_4 is such a system. Instead of it, one may consider the constructive theory of Heyting arithmetic formula_5 with the thesis in its first-order formulation formula_6 as an additional axiom, concerning associations between natural numbers. This theory disproves some universally closed variants of instances of the principle of the excluded middle. It is in this way that the axiom is shown incompatible with formula_4. However, formula_5 is equiconsistent with both formula_4 as well as with the theory given by formula_5 plus formula_6. That is, adding either the law of the excluded middle or Church's thesis does not make Heyting arithmetic inconsistent, but adding both does. Formal statement. This principle has formalizations in various mathematical frameworks. Let formula_7 denote Kleene's T predicate, so that e.g. validity of the predicate formula_8 expresses that formula_9 is the index of a total computable function. Note that there are also variations on formula_7 and the value extracting formula_10, as functions with return values. Here they are expressed as primitive recursive predicates. Write formula_11 to abbreviate formula_12, as the value formula_13 plays a role in the principle's formulations. So the computable function with index formula_9 terminates on formula_2 with value formula_13 iff formula_14. This formula_15-predicate on triples formula_16 may be expressed by formula_17, at the cost of introducing abbreviating notation involving the sign already used for arithmetic equality. In first-order theories such as formula_5, which cannot quantify over relations and functions directly, formula_0 may be stated as an axiom schema saying that for any definable total relation, which comprises a family of valid existence claims formula_18, the latter are computable in the sense of formula_19. 
For each formula formula_20 of two variables, the schema formula_6 includes the axiom formula_21 In words: If for every formula_2 there is a formula_13 satisfying formula_22, then there is in fact an formula_9 that is the Gödel number of a partial recursive function that will, for every formula_2, produce such a formula_13 satisfying the formula, and with some formula_23 being a Gödel number encoding a verifiable computation bearing witness to the fact that formula_13 is in fact the value of that function at formula_2. Relatedly, implications of this form may instead also be established as constructive meta-theoretical properties of theories. I.e. the theory need not necessarily prove the implication (a formula with formula_24), but the existence of formula_9 is meta-logically validated. A theory is then said to be closed under the rule. Variants. Extended Church's thesis. The statement formula_25 extends the claim to relations which are defined and total over a certain type of domain. This may be achieved by narrowing the scope of the universal quantifier, and so it can be formally stated by the schema: formula_26 In the above, formula_27 is restricted to be "almost-negative". For first-order arithmetic (where the schema is designated formula_25), this means formula_27 cannot contain any disjunction, and existential quantifiers can only appear in front of formula_28 (decidable) formulas. In the presence of Markov's principle formula_29, the syntactical restrictions may be somewhat loosened. When considering the domain of all numbers (e.g. when taking formula_30 to be the trivial formula_31), the above reduces to the previous form of Church's thesis. These first-order formulations are fairly strong in that they also constitute a form of function choice: Total relations contain total recursive functions. The extended Church's thesis is used by the school of constructive mathematics founded by Andrey Markov. Functional premise. formula_32 denotes the weaker variant of the principle in which the premise demands unique existence (of formula_13), i.e. the return value already has to be determined. Higher order formulation. The first formulation of the thesis above is also called the arithmetical form of the principle, since only quantifiers over numbers appear in its formulation. It uses a general relation formula_22 in its antecedent. In a framework like recursion theory, a function may be represented as a functional relation, granting a unique output value for every input. In higher-order systems that can quantify over (total) functions directly, a form of formula_0 can be stated as a single axiom, saying that every function from the natural numbers to the natural numbers is computable. In terms of the primitive recursive predicates, formula_33 This postulates that all functions formula_34 are computable, in the Kleene sense, with an index formula_9 in the theory. Thus, so are all values formula_35. One may write formula_36 with formula_37 denoting extensional equality formula_38. For example, in set theory functions are elements of function spaces and are total functional relations by definition. A total function has a unique return value for every input in its domain. Since functions are sets, set theory has quantifiers that range over them. The principle can be regarded as the identification of the space formula_39 with the collection of total recursive functions. 
In realizability topoi, this exponential object of the natural numbers object can also be identified with less restrictive collections of maps. Weaker statements. There are weaker forms of the thesis, variously called formula_40. By inserting a double negation before the index existence claim in the higher order version, it is asserted that there are no non-recursive functions. This still restricts the space of functions while not constituting a function choice axiom. A related statement is that any decidable subset of naturals cannot be ruled out to be computable, in the sense that formula_41 The contrapositive of this puts any non-computable predicate in violation of excluded middle, so this is still generally anti-classical. Unlike formula_42, as a principle this is compatible with formulations of the fan theorem. Variants for related premises formula_43 may be defined, for example a principle always granting existence of a total recursive function formula_44 into some discrete binary set that validates one of the disjuncts. Without the double negation, this may be denoted formula_45. Relationship to classical logic. The schema formula_6, when added to constructive systems such as formula_5, implies the negation of the universally quantified version of the law of the excluded middle for some predicates. As an example, the halting problem is provenly not "computably" decidable, but assuming classical logic it is a tautology that every Turing machine either halts or does not halt on a given input. Further assuming Church's thesis, one in turn concludes that this is computable, a contradiction. In slightly more detail: In sufficiently strong systems, such as formula_5, it is possible to express the relation formula_46 associated with the halting question, relating any code from an enumeration of Turing machines and values from formula_47. Assuming the classical tautology above, this relation can be proven total, i.e. it constitutes a function that returns formula_48 if the machine halts and formula_49 if it does not halt. But formula_5 plus formula_6 disproves this consequence of the law of the excluded middle, for example. Principles like the double negation shift (commutativity of universal quantification with a double negation) are also rejected by the principle. The single axiom form of formula_0 with formula_50 above is consistent with some weak classical systems that do not have the strength to form functions such as the function of the previous paragraph. For example, the classical weak second-order arithmetic formula_51 is consistent with this single axiom, because formula_51 has a model in which every function is computable. However, the single-axiom form becomes inconsistent with excluded middle in any system that has axioms sufficient to prove existence of functions such as the function formula_46. E.g., adoption of variants of countable choice, such as unique choice for the numerical quantifiers, formula_52 where formula_53 denotes a sequence, spoils this consistency. The first-order formulation formula_6 already subsumes the power of such a function comprehension principle via enumerated functions. Constructively formulated subtheories of formula_54 can typically be shown to be closed under Church's rule, and the corresponding principle is thus compatible. But as an implication (formula_24) it cannot be proven by such theories, as that would render the stronger, classical theory inconsistent. Realizers and metalogic. 
The above thesis can be characterized as saying that a sentence is true iff it is computably realisable. This is captured by the following metatheoretic equivalences: formula_55 and formula_56 Here "formula_57" is just the equivalence in the arithmetic theory, while "formula_58" denotes the metatheoretical equivalence. For given formula_22, the predicate formula_59 is read as "formula_60". In words, the first result above states that it is provable in formula_5 plus formula_25 that a sentence is true iff it is realisable. But also, the second result above states that formula_22 is provable in formula_5 plus formula_25 iff formula_22 is provably realisable in just formula_5. For the next metalogical theorem, recall that formula_4 is non-constructive and lacks the existence property, whereas Heyting arithmetic exhibits it: formula_61 The second equivalence above can be extended with formula_29 as follows: formula_62 The existential quantifier needs to be outside formula_4 in this case. In words, formula_22 is provable in formula_5 plus formula_25 as well as formula_29 iff one can metatheoretically establish that there is some number formula_63 such that the corresponding standard numeral in formula_4, denoted formula_64, provably realises formula_22. Assuming formula_29 together with alternative variants of Church's thesis, more such metalogical existence statements have been obtained. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
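To make the role of the predicate formula_7 and the value extraction more concrete, the following toy Python sketch treats "programs" as generators indexed in a list, uses a step bound in place of the computation witness formula_23, and uses the generator's return value in place of the extracted output. This model and all names in it are illustrative assumptions only, not a formal rendering of Kleene's predicates.

# Toy stand-in for an enumeration of programs: the index e picks a generator
# function; each `yield` counts as one computation step, and the generator's
# return value is the program's output.
def double(x):       # program 0: total, returns 2*x
    for _ in range(x):
        yield
    return 2 * x

def diverge(x):      # program 1: never halts
    while True:
        yield

PROGRAMS = [double, diverge]

def run(e, x, w):
    """Simulate program e on input x for at most w steps.
    The boolean plays the role of T_1(e, x, w); the value plays the role of U."""
    gen = PROGRAMS[e](x)
    try:
        for _ in range(w + 1):
            next(gen)
    except StopIteration as stop:
        return True, stop.value
    return False, None

print(run(0, 3, 10))   # (True, 6): program 0 halts within 10 steps on input 3
print(run(1, 3, 10))   # (False, None): program 1 has not halted after 10 steps

In this picture, the schema formula_6 asserts that whenever every input formula_2 provably has some output formula_13 with formula_22, a single index formula_9 as above produces such outputs together with a halting witness.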
[ { "math_id": 0, "text": "{\\mathrm{CT}}" }, { "math_id": 1, "text": "\\exists! y. \\varphi(x,y)" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "x\\mapsto y" }, { "math_id": 4, "text": "{\\mathsf{PA}}" }, { "math_id": 5, "text": "{\\mathsf{HA}}" }, { "math_id": 6, "text": "{\\mathrm{CT}}_0" }, { "math_id": 7, "text": "T_1" }, { "math_id": 8, "text": "\\forall x\\,\\exists w\\,T_1(e, x, w)" }, { "math_id": 9, "text": "e" }, { "math_id": 10, "text": "U" }, { "math_id": 11, "text": "TU(e, x, w, y)" }, { "math_id": 12, "text": "T_1(e, x, w)\\land U(w, y)" }, { "math_id": 13, "text": "y" }, { "math_id": 14, "text": "\\exists w\\,TU(e, x, w, y)" }, { "math_id": 15, "text": "\\Sigma_1^0" }, { "math_id": 16, "text": "e, x, y" }, { "math_id": 17, "text": "\\{e\\}(x)=y" }, { "math_id": 18, "text": "\\exists y" }, { "math_id": 19, "text": "TU" }, { "math_id": 20, "text": "\\varphi(x,y)" }, { "math_id": 21, "text": "\\big(\\forall x \\; \\exists y \\; \\varphi(x,y)\\big)\\; \\to\\; \\exists e\\,\\big(\\forall x \\; \\exists y\\; \\exists w \\; TU(e, x, w, y) \\wedge \\varphi(x, y)\\big)" }, { "math_id": 22, "text": "\\varphi" }, { "math_id": 23, "text": "w" }, { "math_id": 24, "text": "\\to" }, { "math_id": 25, "text": "{\\mathrm{ECT_0}}" }, { "math_id": 26, "text": "\\big(\\forall x \\; \\psi(x) \\to \\exists y \\; \\varphi(x,y)\\big)\\; \\to\\; \\exists e\\, \\big(\\forall x \\; \\psi(x) \\to \\exists y\\; \\exists w \\; TU(e, x, w, y) \\wedge \\varphi(x,y)\\big)" }, { "math_id": 27, "text": "\\psi" }, { "math_id": 28, "text": "\\Delta^0_0" }, { "math_id": 29, "text": "{\\mathrm{MP}}" }, { "math_id": 30, "text": "\\psi(x)" }, { "math_id": 31, "text": "x=x" }, { "math_id": 32, "text": "{\\mathrm{CT}}_0!" }, { "math_id": 33, "text": "\\forall f\\;\\exists e\\,\\big(\\forall x\\;\\exists w\\;TU(e, x, w, f(x))\\big)" }, { "math_id": 34, "text": "f" }, { "math_id": 35, "text": "y = f(x)" }, { "math_id": 36, "text": "\\forall f\\;\\exists e\\, f\\cong \\{e\\}" }, { "math_id": 37, "text": "f\\cong g" }, { "math_id": 38, "text": "\\forall x. f(x)=g(x)" }, { "math_id": 39, "text": "{\\mathbb N}^{\\mathbb N}" }, { "math_id": 40, "text": "{\\mathrm {WCT}}" }, { "math_id": 41, "text": "\\big(\\forall x\\ \\chi(x)\\lor\\neg\\chi(x)\\big)\\; \\to\\; \\neg\\neg\\exists e\\,\\big(\\forall x\\, \\big(\\exists w \\; T_1(e, x, w)\\big) \\leftrightarrow \\chi(x)\\big)" }, { "math_id": 42, "text": "\\mathrm {CT}_0" }, { "math_id": 43, "text": "\\forall x\\ \\psi_\\mathrm{left}(x)\\lor \\psi_\\mathrm{right}(x)" }, { "math_id": 44, "text": "{\\mathbb N}\\to\\{\\mathrm{left},\\mathrm{right}\\}" }, { "math_id": 45, "text": "\\mathrm {CT}_0^\\lor" }, { "math_id": 46, "text": "h" }, { "math_id": 47, "text": "\\{0,1\\}" }, { "math_id": 48, "text": "1" }, { "math_id": 49, "text": "0" }, { "math_id": 50, "text": "\\forall f" }, { "math_id": 51, "text": "{\\mathsf{RCA_0}}" }, { "math_id": 52, "text": "\\forall n\\;\\exists! 
m\\;\\phi(n, m)\\to \\exists a\\;\\forall k\\;\\phi(k, a_k)," }, { "math_id": 53, "text": "a" }, { "math_id": 54, "text": "{\\mathsf{ZF}}" }, { "math_id": 55, "text": "{\\mathsf{HA}} + {\\mathrm{ECT_0}} \\vdash \\varphi \\leftrightarrow \\exists n \\, (n \\Vdash \\varphi)" }, { "math_id": 56, "text": "{\\mathsf{HA}} + {\\mathrm{ECT_0}} \\vdash \\varphi \\;\\iff\\; {\\mathsf{HA}} \\vdash \\exists n \\, (n \\Vdash \\varphi) " }, { "math_id": 57, "text": "\\leftrightarrow" }, { "math_id": 58, "text": "\\iff" }, { "math_id": 59, "text": "n \\Vdash \\varphi" }, { "math_id": 60, "text": "n \\text{ realises } \\varphi" }, { "math_id": 61, "text": "{\\mathsf{HA}}\\vdash\\exists n.\\phi(n)\\implies\\text{exists}\\;{\\mathrm n}\\ {\\mathsf{HA}}\\vdash\\phi({\\underline{\\mathrm n}})" }, { "math_id": 62, "text": "{\\mathsf{HA}} + {\\mathrm{ECT_0}} + {\\mathrm{MP}} \\vdash \\varphi \\;\\iff\\; \\text{exists}\\; {\\mathrm n}\\ {\\mathsf{PA}} \\vdash ({\\underline{\\mathrm n}} \\Vdash \\varphi)" }, { "math_id": 63, "text": "{\\mathrm n}" }, { "math_id": 64, "text": "{\\underline{\\mathrm n}}" } ]
https://en.wikipedia.org/wiki?curid=15756448
15757039
Z-group
In mathematics, especially in the area of algebra known as group theory, the term Z-group refers to a number of distinct types of groups: Groups whose Sylow subgroups are cyclic. In the study of finite groups, a Z-group is a finite group whose Sylow subgroups are all cyclic. The Z originates both from the German word for cyclic and from their classification in the older literature. In many standard textbooks these groups have no special name, other than metacyclic groups, but that term is often used more generally today. See metacyclic group for more on the general, modern definition, which includes non-cyclic "p"-groups; the stricter, classical definition is more closely related to Z-groups. Every group whose Sylow subgroups are cyclic is itself metacyclic, so supersolvable. In fact, such a group has a cyclic derived subgroup with cyclic maximal abelian quotient. Such a group has the presentation: formula_2, where "mn" is the order of "G"("m","n","r"), the greatest common divisor gcd(("r"-1)"n", "m") = 1, and "r""n" ≡ 1 (mod "m"). The character theory of Z-groups is well understood, as they are monomial groups. The derived length of a Z-group is at most 2, so Z-groups may be insufficient for some uses. A generalization due to Hall is given by the A-groups, those groups with abelian Sylow subgroups. These groups behave similarly to Z-groups, but can have arbitrarily large derived length. Another generalization allows the Sylow 2-subgroup more flexibility, including dihedral and generalized quaternion groups. Group with a generalized central series. The definition of central series used for Z-group is somewhat technical. A series of "G" is a collection "S" of subgroups of "G", linearly ordered by inclusion, such that for every "g" in "G", the subgroups "A""g" = ∩ { "N" in "S" : "g" in "N" } and "B""g" = ∪ { "N" in "S" : "g" not in "N" } are both in "S". A (generalized) central series of "G" is a series such that every "N" in "S" is normal in "G" and such that for every "g" in "G", the quotient "A""g"/"B""g" is contained in the center of "G"/"B""g". A Z-group is a group with such a (generalized) central series. Examples include the hypercentral groups whose transfinite upper central series form such a central series, as well as the hypocentral groups whose transfinite lower central series form such a central series. Special 2-transitive groups. A (Z)-group is a group faithfully represented as a doubly transitive permutation group in which no non-identity element fixes more than two points. A (ZT)-group is a (Z)-group that is of odd degree and not a Frobenius group, that is a Zassenhaus group of odd degree, also known as one of the groups PSL(2,2"k"+1) or Sz(22"k"+1), for "k" any positive integer.
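To illustrate the presentation formula_2 above, the short Python sketch below checks the stated conditions on ("m", "n", "r") and multiplies group elements written in the normal form a^i b^j, using the relation b a b^(-1) = a^r. The example parameters and helper names are illustrative choices only.

from math import gcd

def is_valid_zgroup_data(m, n, r):
    """Check gcd((r-1)*n, m) == 1 and r^n = 1 (mod m), as required above."""
    return gcd((r - 1) * n, m) == 1 and pow(r, n, m) == 1

def multiply(x, y, m, n, r):
    """Multiply a^i b^j by a^k b^l in G(m, n, r); elements are pairs (i, j).
    Uses b^j a^k b^(-j) = a^(k * r^j), which follows from b a b^(-1) = a^r."""
    i, j = x
    k, l = y
    return ((i + k * pow(r, j, m)) % m, (j + l) % n)

# Example: m = 7, n = 3, r = 2 gives a non-abelian Z-group of order 21,
# since 2^3 = 8 = 1 (mod 7) and gcd((2-1)*3, 7) = 1.
m, n, r = 7, 3, 2
assert is_valid_zgroup_data(m, n, r)
a, b = (1, 0), (0, 1)
print(multiply(b, a, m, n, r))   # (2, 1), i.e. b*a = a^2*b, reflecting b a b^(-1) = a^2
print(multiply(a, b, m, n, r))   # (1, 1), i.e. a*b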
[ { "math_id": 0, "text": "\\mathbb Z" }, { "math_id": 1, "text": "(\\mathbb Z,+,<)" }, { "math_id": 2, "text": "G(m,n,r) = \\langle a,b | a^m = b^n = 1, bab^{-1} = a^r \\rangle " } ]
https://en.wikipedia.org/wiki?curid=15757039
1575704
Neil Robertson (mathematician)
Canadian-American graph theorist (b.1938) George Neil Robertson (born November 30, 1938) is a mathematician working mainly in topological graph theory, currently a distinguished professor emeritus at the Ohio State University. Education. Robertson earned his B.Sc. from Brandon College in 1959 and his Ph.D. in 1969 at the University of Waterloo under his doctoral advisor William Tutte. Biography. In 1969, Robertson joined the faculty of the Ohio State University, where he was promoted to Associate Professor in 1972 and Professor in 1984. He was a consultant with Bell Communications Research from 1984 to 1996. He has held visiting faculty positions in many institutions, most extensively at Princeton University from 1996 to 2001, and at Victoria University of Wellington, New Zealand, in 2002. He also holds an adjunct position at King Abdulaziz University in Saudi Arabia. Research. Robertson is known for his work in graph theory, and particularly for a long series of papers co-authored with Paul Seymour and published over a span of many years, in which they proved the Robertson–Seymour theorem (formerly Wagner's Conjecture). This states that families of graphs closed under the graph minor operation may be characterized by a finite set of forbidden minors. As part of this work, Robertson and Seymour also proved the graph structure theorem describing the graphs in these families. Additional major results in Robertson's research include the following: Awards and honors. Robertson has won the Fulkerson Prize three times, in 1994 for his work on the Hadwiger conjecture, in 2006 for the Robertson–Seymour theorem, and in 2009 for his proof of the strong perfect graph theorem. He also won the Pólya Prize (SIAM) in 2004, the OSU Distinguished Scholar Award in 1997, and the Waterloo Alumni Achievement Medal in 2002. In 2012 he became a fellow of the American Mathematical Society. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "K_6\n" } ]
https://en.wikipedia.org/wiki?curid=1575704
1575813
Series expansion
Expression of a function as an infinite sum of simpler functions In mathematics, a series expansion is a technique that expresses a function as an infinite sum, or series, of simpler functions. It is a method for calculating a function that cannot be expressed by just elementary operators (addition, subtraction, multiplication and division). The resulting so-called "series" often can be limited to a finite number of terms, thus yielding an approximation of the function. The fewer terms of the sequence are used, the simpler this approximation will be. Often, the resulting inaccuracy (i.e., the sum of the omitted terms) can be described by an equation involving Big O notation (see also asymptotic expansion). The series expansion on an open interval will also be an approximation for non-analytic functions. Types of series expansions. There are several kinds of series expansions, listed below. Taylor series. A "Taylor series" is a power series based on a function's derivatives at a single point. More specifically, if a function formula_0 is infinitely differentiable around a point formula_1, then the Taylor series of "f" around this point is given by formula_2 under the convention formula_3. The "Maclaurin series" of "f" is its Taylor series about formula_4. Laurent series. A "Laurent series" is a generalization of the Taylor series, allowing terms with negative exponents; it takes the form formula_5 and converges in an annulus. In particular, a Laurent series can be used to examine the behavior of a complex function near a singularity by considering the series expansion on an annulus centered at the singularity. Dirichlet series. A "general Dirichlet series" is a series of the form formula_6 One important special case of this is the "ordinary Dirichlet series" formula_7 which is used in number theory. Fourier series. A "Fourier series" is an expansion of periodic functions as a sum of many sine and cosine functions. More specifically, the Fourier series of a function formula_8 of period formula_9 is given by the expression formula_10 where the coefficients are given by the formulae formula_11 Examples. The following is the Taylor series of formula_13: formula_14 The Dirichlet series of the Riemann zeta function is formula_15
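To illustrate how truncating a series expansion yields an approximation, the sketch below sums the first few terms of the Taylor series of formula_13 given above and compares them with the library exponential. The choice of point and the function name are illustrative only.

import math

def exp_taylor(x, n_terms):
    """Partial sum of the Maclaurin series of e^x: sum over n < n_terms of x^n / n!."""
    return sum(x**n / math.factorial(n) for n in range(n_terms))

x = 1.5
for n_terms in (2, 4, 8, 16):
    approx = exp_taylor(x, n_terms)
    error = abs(approx - math.exp(x))
    print(f"{n_terms:2d} terms: {approx:.10f}  (error {error:.2e})")

The error shrinks rapidly as terms are added, which is the behaviour that the Big O description of the omitted tail captures.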
[ { "math_id": 0, "text": "f: U\\to\\R" }, { "math_id": 1, "text": "x_0" }, { "math_id": 2, "text": "\\sum_{n=0}^{\\infty}\\frac{f^{(n)}(x_0)}{n!}(x - x_0)^n" }, { "math_id": 3, "text": "0^0 := 1" }, { "math_id": 4, "text": "x_0 = 0" }, { "math_id": 5, "text": "\\sum_{k = -\\infty}^{\\infty} c_k (z - a)^k" }, { "math_id": 6, "text": "\\sum_{n = 1}^{\\infty} a_ne^{-\\lambda_n s}." }, { "math_id": 7, "text": "\\sum_{n = 1}^{\\infty}\\frac{a_n}{n^s}." }, { "math_id": 8, "text": "f(t)" }, { "math_id": 9, "text": "2L" }, { "math_id": 10, "text": "a_0 + \\sum_{n = 1}^{\\infty} \\left[a_n\\cos\\left(\\frac{n\\pi t}{L}\\right) + b_n\\sin\\left(\\frac{n\\pi t}{L}\\right)\\right]" }, { "math_id": 11, "text": "\\begin{align}\na_n &:= \\frac{1}{L}\\int_{-L}^L f(t)\\cos\\left(\\frac{n\\pi t}{L}\\right)dt, \\\\\nb_n &:= \\frac{1}{L}\\int_{-L}^L f(t)\\sin\\left(\\frac{n\\pi t}{L}\\right)dt.\n\\end{align}" }, { "math_id": 12, "text": "\\text{Ln}\\Gamma\\left(z\\right)\\sim\\left(z-\\tfrac{1}{2}\\right)\\ln z-z+\\tfrac{1}{2}\\ln\\left(2\\pi\\right)+\\sum_{k=1}^{\\infty}\\frac{B_{2k}}{2k(2k-1)z^{2k-1}}" }, { "math_id": 13, "text": "e^x" }, { "math_id": 14, "text": "e^x=\\sum^{\\infty}_{n=0}\\frac{x^n}{n!}= 1 + x + \\frac{x^2}{2} + \\frac{x^3}{6}..." }, { "math_id": 15, "text": "\\zeta(s) := \\sum_{n = 1}^{\\infty} \\frac{1}{n^s} = \\frac{1}{1^s} + \\frac{1}{2^s} + \\cdots" } ]
https://en.wikipedia.org/wiki?curid=1575813
1575825
Hyperbolic partial differential equation
Type of partial differential equations In mathematics, a hyperbolic partial differential equation of order formula_0 is a partial differential equation (PDE) that, roughly speaking, has a well-posed initial value problem for the first formula_1 derivatives. More precisely, the Cauchy problem can be locally solved for arbitrary initial data along any non-characteristic hypersurface. Many of the equations of mechanics are hyperbolic, and so the study of hyperbolic equations is of substantial contemporary interest. The model hyperbolic equation is the wave equation. In one spatial dimension, this is formula_2 The equation has the property that, if "u" and its first time derivative are arbitrarily specified initial data on the line "t" = 0 (with sufficient smoothness properties), then there exists a solution for all time t. The solutions of hyperbolic equations are "wave-like". If a disturbance is made in the initial data of a hyperbolic differential equation, then not every point of space feels the disturbance at once. Relative to a fixed time coordinate, disturbances have a finite propagation speed. They travel along the characteristics of the equation. This feature qualitatively distinguishes hyperbolic equations from elliptic partial differential equations and parabolic partial differential equations. A perturbation of the initial (or boundary) data of an elliptic or parabolic equation is felt at once by essentially all points in the domain. Although the definition of hyperbolicity is fundamentally a qualitative one, there are precise criteria that depend on the particular kind of differential equation under consideration. There is a well-developed theory for linear differential operators, due to Lars Gårding, in the context of microlocal analysis. Nonlinear differential equations are hyperbolic if their linearizations are hyperbolic in the sense of Gårding. There is a somewhat different theory for first order systems of equations coming from systems of conservation laws. Definition. A partial differential equation is hyperbolic at a point formula_3 provided that the Cauchy problem is uniquely solvable in a neighborhood of formula_3 for any initial data given on a non-characteristic hypersurface passing through formula_3. Here the prescribed initial data consist of all (transverse) derivatives of the function on the surface up to one less than the order of the differential equation. Examples. By a linear change of variables, any equation of the form formula_4 with formula_5 can be transformed to the wave equation, apart from lower order terms which are inessential for the qualitative understanding of the equation. This definition is analogous to the definition of a planar hyperbola. The one-dimensional wave equation: formula_6 is an example of a hyperbolic equation. The two-dimensional and three-dimensional wave equations also fall into the category of hyperbolic PDE. This type of second-order hyperbolic partial differential equation may be transformed to a hyperbolic system of first-order differential equations. Hyperbolic system of partial differential equations. The following is a system of formula_7 first order partial differential equations for formula_7 unknown functions formula_8, formula_9, where formula_10: where formula_11 are once continuously differentiable functions, nonlinear in general. 
Next, for each formula_12 define the formula_13 Jacobian matrix formula_14 The system (∗) is hyperbolic if for all formula_15 the matrix formula_16 has only real eigenvalues and is diagonalizable. If the matrix formula_17 has s "distinct" real eigenvalues, it follows that it is diagonalizable. In this case the system (∗) is called strictly hyperbolic. If the matrix formula_17 is symmetric, it follows that it is diagonalizable and the eigenvalues are real. In this case the system (∗) is called symmetric hyperbolic. Hyperbolic system and conservation laws. There is a connection between a hyperbolic system and a conservation law. Consider a hyperbolic system of one partial differential equation for one unknown function formula_18. Then the system (∗) has the form Here, formula_19 can be interpreted as a quantity that moves around according to the flux given by formula_20. To see that the quantity formula_19 is conserved, integrate (∗∗) over a domain formula_21 formula_22 If formula_19 and formula_23 are sufficiently smooth functions, we can use the divergence theorem and change the order of the integration and formula_24 to get a conservation law for the quantity formula_19 in the general form formula_25 which means that the time rate of change of formula_19 in the domain formula_21 is equal to the net flux of formula_19 through its boundary formula_26. Since this is an equality, it can be concluded that formula_19 is conserved within formula_21. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
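As a small numerical check of the definition above, the sketch below writes the one-dimensional wave equation as a first-order system in the variables (u_t, u_x) and verifies that the resulting coefficient matrix has real, distinct eigenvalues ±c, so the system is strictly hyperbolic. The chosen value of c and the variable names are illustrative assumptions.

import numpy as np

c = 2.0  # wave speed, illustrative value

# The wave equation u_tt = c^2 u_xx, rewritten as U_t + A U_x = 0 with U = (u_t, u_x):
#   (u_t)_t - c^2 (u_x)_x = 0
#   (u_x)_t -      (u_t)_x = 0
A = np.array([[0.0, -c**2],
              [-1.0, 0.0]])

eigenvalues = np.linalg.eigvals(A)
print("eigenvalues:", np.sort(eigenvalues.real))            # [-c, c]
print("all real:", bool(np.allclose(eigenvalues.imag, 0)))  # True, so the system is hyperbolic
print("distinct:", len(set(np.round(eigenvalues.real, 12))) == len(eigenvalues))  # strictly hyperbolic

The two real eigenvalues are the characteristic speeds along which disturbances propagate, matching the finite propagation speed described earlier in the article.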
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "n - 1" }, { "math_id": 2, "text": "\\frac{\\partial^2 u}{\\partial t^2} = c^2 \\frac{\\partial^2 u}{\\partial x^2} " }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": " A\\frac{\\partial^2 u}{\\partial x^2} + 2B\\frac{\\partial^2 u}{\\partial x\\partial y} + C\\frac{\\partial^2u}{\\partial y^2} + \\text{(lower order derivative terms)} = 0" }, { "math_id": 5, "text": " B^2 - A C > 0" }, { "math_id": 6, "text": "\\frac{\\partial^2 u}{\\partial t^2} - c^2\\frac{\\partial^2 u}{\\partial x^2} = 0" }, { "math_id": 7, "text": "s" }, { "math_id": 8, "text": " \\vec u = (u_1, \\ldots, u_s) " }, { "math_id": 9, "text": " \\vec u = \\vec u (\\vec x,t)" }, { "math_id": 10, "text": "\\vec x \\in \\mathbb{R}^d" }, { "math_id": 11, "text": "\\vec {f}^j \\in C^1(\\mathbb{R}^s, \\mathbb{R}^s), j = 1, \\ldots, d" }, { "math_id": 12, "text": "\\vec {f}^j" }, { "math_id": 13, "text": "s \\times s" }, { "math_id": 14, "text": "A^j :=\n\\begin{pmatrix}\n\\frac{\\partial f_1^j}{\\partial u_1} & \\cdots & \\frac{\\partial f_1^j}{\\partial u_s} \\\\\n\\vdots & \\ddots & \\vdots \\\\\n\\frac{\\partial f_s^j}{\\partial u_1} & \\cdots & \\frac{\\partial f_s^j}{\\partial u_s}\n\\end{pmatrix}\n,\\text{ for }j = 1, \\ldots, d." }, { "math_id": 15, "text": "\\alpha_1, \\ldots, \\alpha_d \\in \\mathbb{R}" }, { "math_id": 16, "text": "A := \\alpha_1 A^1 + \\cdots + \\alpha_d A^d" }, { "math_id": 17, "text": "A" }, { "math_id": 18, "text": "u = u(\\vec x, t)" }, { "math_id": 19, "text": "u" }, { "math_id": 20, "text": "\\vec f = (f^1, \\ldots, f^d)" }, { "math_id": 21, "text": "\\Omega" }, { "math_id": 22, "text": "\\int_{\\Omega} \\frac{\\partial u}{\\partial t} \\, d\\Omega + \\int_{\\Omega} \\nabla \\cdot \\vec f(u)\\, d\\Omega = 0." }, { "math_id": 23, "text": "\\vec f" }, { "math_id": 24, "text": "\\partial / \\partial t" }, { "math_id": 25, "text": "\n\\frac{ d}{ dt} \\int_{\\Omega} u \\, d\\Omega \n+ \\int_{\\partial\\Omega} \\vec f(u) \\cdot \\vec n \\, d\\Gamma = 0,\n" }, { "math_id": 26, "text": "\\partial\\Omega" } ]
https://en.wikipedia.org/wiki?curid=1575825
157616
Composite material
Material made from a combination of two or more unlike substances A composite material (also called a composition material or shortened to composite, which is the common name) is a material which is produced from two or more constituent materials. These constituent materials have notably dissimilar chemical or physical properties and are merged to create a material with properties unlike the individual elements. Within the finished structure, the individual elements remain separate and distinct, distinguishing composites from mixtures and solid solutions. Composite materials with more than one distinct layer are called composite laminates. Typical engineered composite materials include reinforced concrete and masonry, composite wood such as plywood, reinforced plastics such as fibre-reinforced polymer, ceramic matrix composites, and metal matrix composites. There are various reasons why new materials can be favoured. Typical examples include materials which are less expensive, lighter, stronger or more durable when compared with common materials, as well as composite materials inspired by animals and natural sources with a low carbon footprint. More recently, researchers have also begun to actively include sensing, actuation, computation, and communication into composites, which are known as robotic materials. Composite materials are generally used for buildings, bridges, and structures such as boat hulls, swimming pool panels, racing car bodies, shower stalls, bathtubs, storage tanks, imitation granite, and cultured marble sinks and countertops. They are also being increasingly used in general automotive applications. The most advanced examples perform routinely on spacecraft and aircraft in demanding environments. History. The earliest composite materials were made from straw and mud combined to form bricks for building construction. Ancient brick-making was documented by Egyptian tomb paintings. Wattle and daub is one of the oldest composite materials, at over 6000 years old. Concrete is also a composite material, and is used more than any other synthetic material in the world. As of 2009, about 7.5 billion cubic metres of concrete are made each year. Examples. Composite materials. Concrete is the most common artificial composite material of all and typically consists of loose stones (aggregate) held with a matrix of cement. Concrete is an inexpensive material, and will not compress or shatter even under quite a large compressive force. However, concrete cannot survive tensile loading (i.e., if stretched it will quickly break apart). Therefore, to give concrete the ability to resist being stretched, steel bars, which can resist high stretching (tensile) forces, are often added to concrete to form reinforced concrete. Fibre-reinforced polymers include carbon-fiber-reinforced polymers and glass-reinforced plastic. If classified by matrix then there are thermoplastic composites, short fibre thermoplastics, long fibre thermoplastics or long-fiber-reinforced thermoplastics. There are numerous thermoset composites, including paper composite panels. Many advanced thermoset polymer matrix systems usually incorporate aramid fibre and carbon fibre in an epoxy resin matrix. Shape-memory polymer composites are high-performance composites, formulated using fibre or fabric reinforcements and shape-memory polymer resin as the matrix. Since a shape-memory polymer resin is used as the matrix, these composites have the ability to be easily manipulated into various configurations when they are heated above their activation temperatures and will exhibit high strength and stiffness at lower temperatures. 
They can also be reheated and reshaped repeatedly without losing their material properties. These composites are ideal for applications such as lightweight, rigid, deployable structures; rapid manufacturing; and dynamic reinforcement. High strain composites are another type of high-performance composites that are designed to perform in a high deformation setting and are often used in deployable systems where structural flexing is advantageous. Although high strain composites exhibit many similarities to shape-memory polymers, their performance is generally dependent on the fibre layout as opposed to the resin content of the matrix. Composites can also use metal fibres reinforcing other metals, as in metal matrix composites (MMC) or ceramic matrix composites (CMC), which includes bone (hydroxyapatite reinforced with collagen fibres), cermet (ceramic and metal), and concrete. Ceramic matrix composites are built primarily for fracture toughness, not for strength. Another class of composite materials involve woven fabric composite consisting of longitudinal and transverse laced yarns. Woven fabric composites are flexible as they are in form of fabric. Organic matrix/ceramic aggregate composites include asphalt concrete, polymer concrete, mastic asphalt, mastic roller hybrid, dental composite, syntactic foam, and mother of pearl. Chobham armour is a special type of composite armour used in military applications. Additionally, thermoplastic composite materials can be formulated with specific metal powders resulting in materials with a density range from 2 g/cm3 to 11 g/cm3 (same density as lead). The most common name for this type of material is "high gravity compound" (HGC), although "lead replacement" is also used. These materials can be used in place of traditional materials such as aluminium, stainless steel, brass, bronze, copper, lead, and even tungsten in weighting, balancing (for example, modifying the centre of gravity of a tennis racquet), vibration damping, and radiation shielding applications. High density composites are an economically viable option when certain materials are deemed hazardous and are banned (such as lead) or when secondary operations costs (such as machining, finishing, or coating) are a factor. There have been several studies indicating that interleaving stiff and brittle epoxy-based carbon-fiber-reinforced polymer laminates with flexible thermoplastic laminates can help to make highly toughened composites that show improved impact resistance. Another interesting aspect of such interleaved composites is that they are able to have shape memory behaviour without needing any shape-memory polymers or shape-memory alloys e.g. balsa plies interleaved with hot glue, aluminium plies interleaved with acrylic polymers or PVC and carbon-fiber-reinforced polymer laminates interleaved with polystyrene. A sandwich-structured composite is a special class of composite material that is fabricated by attaching two thin but stiff skins to a lightweight but thick core. The core material is normally low strength material, but its higher thickness provides the sandwich composite with high bending stiffness with overall low density. Wood is a naturally occurring composite comprising cellulose fibres in a lignin and hemicellulose matrix. 
Engineered wood includes a wide variety of different products such as wood fibre board, plywood, oriented strand board, wood plastic composite (recycled wood fibre in polyethylene matrix), Pykrete (sawdust in ice matrix), plastic-impregnated or laminated paper or textiles, Arborite, Formica (plastic), and Micarta. Other engineered laminate composites, such as Mallite, use a central core of end grain balsa wood, bonded to surface skins of light alloy or GRP. These generate low-weight, high-rigidity materials. Particulate composites have particles as filler material dispersed in a matrix, which may be a nonmetal such as glass or epoxy. An automobile tire is an example of a particulate composite. Advanced diamond-like carbon (DLC) coated polymer composites have been reported where the coating increases the surface hydrophobicity, hardness and wear resistance. Ferromagnetic composites include those with a polymer matrix consisting, for example, of a nanocrystalline filler of Fe-based powders dispersed in a polymer matrix. Amorphous and nanocrystalline powders obtained, for example, from metallic glasses can be used. Their use makes it possible to obtain ferromagnetic nanocomposites with controlled magnetic properties. Products. Fibre-reinforced composite materials have gained popularity (despite their generally high cost) in high-performance products that need to be lightweight, yet strong enough to take harsh loading conditions such as aerospace components (tails, wings, fuselages, propellers), boat and scull hulls, bicycle frames, and racing car bodies. Other uses include fishing rods, storage tanks, swimming pool panels, and baseball bats. The Boeing 787 and Airbus A350 structures, including the wings and fuselage, are composed largely of composites. Composite materials are also becoming more common in the realm of orthopedic surgery, and they are the most common hockey stick material. Carbon composite is a key material in today's launch vehicles and heat shields for the re-entry phase of spacecraft. It is widely used in solar panel substrates, antenna reflectors and yokes of spacecraft. It is also used in payload adapters, inter-stage structures and heat shields of launch vehicles. Furthermore, disk brake systems of airplanes and racing cars use carbon/carbon material, and the composite material with carbon fibres and silicon carbide matrix has been introduced in luxury vehicles and sports cars. In 2006, a fibre-reinforced composite pool panel was introduced for in-ground swimming pools, residential as well as commercial, as a non-corrosive alternative to galvanized steel. In 2007, an all-composite military Humvee was introduced by TPI Composites Inc and Armor Holdings Inc, the first all-composite military vehicle. By using composites the vehicle is lighter, allowing higher payloads. In 2008, carbon fibre and DuPont Kevlar (five times stronger than steel) were combined with enhanced thermoset resins to make military transit cases by ECS Composites, creating 30-percent lighter cases with high strength. Pipes and fittings for various purposes such as transportation of potable water, fire-fighting, irrigation, seawater, desalinated water, chemical and industrial waste, and sewage are now manufactured in glass reinforced plastics. Composite materials used in tensile structures for facade applications provide the advantage of being translucent. The woven base cloth combined with the appropriate coating allows better light transmission. 
This provides a very comfortable level of illumination compared to the full brightness outside. The wings of wind turbines, in growing sizes on the order of 50 m in length, have been fabricated in composites for several years. Double lower-leg amputees run on carbon-composite spring-like artificial feet as quickly as non-amputee athletes. High-pressure gas cylinders for firefighters, typically of about 7–9 litres volume at 300 bar pressure, are nowadays constructed from carbon composite. Type-4 cylinders include metal only as a boss that carries the thread for screwing in the valve. On 5 September 2019, HMD Global unveiled the Nokia 6.2 and Nokia 7.2, which are claimed to use polymer composite for the frames. Overview. Composite materials are created from individual materials. These individual materials are known as constituent materials, and there are two main categories: one is the matrix (binder) and the other the reinforcement. At least a portion of each kind is needed. The reinforcement receives support from the matrix as the matrix surrounds the reinforcement and maintains its relative positions. The properties of the matrix are improved as the reinforcements impart their exceptional physical and mechanical properties. Synergism produces mechanical properties that are unavailable from the individual constituent materials. At the same time, the designer of the product or structure receives options to choose an optimum combination from the variety of matrix and strengthening materials. The engineered composite must be formed to shape. The reinforcement is placed onto the mould surface or into the mould cavity. Before or after this, the matrix can be introduced to the reinforcement. The matrix undergoes a melding event which sets the part shape. This melding event can happen in several ways, depending upon the matrix nature, such as solidification from the melted state for a thermoplastic polymer matrix composite or chemical polymerization for a thermoset polymer matrix. According to the requirements of end-item design, various methods of moulding can be used. The natures of the chosen matrix and reinforcement are the key factors influencing the methodology. The gross quantity of material to be made is another main factor. Vast quantities can justify high capital investments in rapid and automated manufacturing technology. Small production quantities are accommodated with lower capital investments but higher labour and tooling expenses, at a correspondingly slower rate. Many commercially produced composites use a polymer matrix material often called a resin solution. There are many different polymers available depending upon the starting raw ingredients. There are several broad categories, each with numerous variations. The most common are known as polyester, vinyl ester, epoxy, phenolic, polyimide, polyamide, polypropylene, PEEK, and others. The reinforcement materials are often fibres but also commonly ground minerals. The various methods described below have been developed to reduce the resin content of the final product or, equivalently, increase the fibre content. As a rule of thumb, lay-up results in a product containing 60% resin and 40% fibre, whereas vacuum infusion gives a final product with 40% resin and 60% fibre content. The strength of the product is greatly dependent on this ratio. Martin Hubbe and Lucian A Lucia consider wood to be a natural composite of cellulose fibres in a matrix of lignin. Cores in composites. 
Several layup designs of composite also involve a co-curing or post-curing of the prepreg with many other media, such as foam or honeycomb. Generally, this is known as a sandwich structure. This is a more general layup for the production of cowlings, doors, radomes or non-structural parts. Open- and closed-cell-structured foams like polyvinyl chloride, polyurethane, polyethylene, or polystyrene foams, balsa wood, syntactic foams, and honeycombs are generally utilized as core materials. Open- and closed-cell metal foam can also be utilized as core materials. Recently, 3D graphene structures (also called graphene foam) have also been employed as core structures. A recent review by Khurram and Xu et al. has provided a summary of the state-of-the-art techniques for fabrication of the 3D structure of graphene and examples of the use of these foam-like structures as a core for their respective polymer composites. Semi-crystalline polymers. Although the two phases are chemically equivalent, semi-crystalline polymers can be described both quantitatively and qualitatively as composite materials. The crystalline portion has a higher elastic modulus and provides reinforcement for the less stiff, amorphous phase. Polymeric materials can range from 0% to 100% crystallinity (i.e., volume fraction of crystalline material) depending on molecular structure and thermal history. Different processing techniques can be employed to vary the percent crystallinity in these materials and thus their mechanical properties, as described in the physical properties section. This effect is seen in a variety of places from industrial plastics like polyethylene shopping bags to spiders which can produce silks with different mechanical properties. In many cases these materials act like particle composites with randomly dispersed crystals known as spherulites. However they can also be engineered to be anisotropic and act more like fiber reinforced composites. In the case of spider silk, the properties of the material can even be dependent on the size of the crystals, independent of the volume fraction. Ironically, single-component polymeric materials are some of the most easily tunable composite materials known. Methods of fabrication. Normally, the fabrication of a composite includes wetting, mixing or saturating the reinforcement with the matrix. The matrix is then induced to bind together (with heat or a chemical reaction) into a rigid structure. Usually, the operation is done in an open or closed forming mould. However, the order and ways of introducing the constituents vary considerably. Composites fabrication is achieved by a wide variety of methods, including advanced fibre placement (automated fibre placement), fibreglass spray lay-up process, filament winding, lanxide process, tailored fibre placement, tufting, and z-pinning. Overview of mould. The reinforcing and matrix materials are merged, compacted, and cured (processed) within a mould to undergo a melding event. The part shape is fundamentally set after the melding event. However, under particular process conditions, it can deform. The melding event for a thermoset polymer matrix material is a curing reaction that is initiated by the application of extra heat or chemical reactivity such as an organic peroxide. The melding event for a thermoplastic polymeric matrix material is a solidification from the melted state. The melding event for a metal matrix material such as titanium foil is a fusing at high pressure and a temperature near the melting point. 
For many moulding methods, it is convenient to refer to one mould piece as the "lower" mould and another mould piece as the "upper" mould. Lower and upper do not refer to the mould's configuration in space, but to the different faces of the moulded panel. There is always a lower mould, and sometimes an upper mould in this convention. Part construction commences by applying materials to the lower mould. Lower mould and upper mould are more generalized descriptors than more common and specific terms such as male side, female side, a-side, b-side, tool side, bowl, hat, mandrel, etc. Continuous manufacturing utilizes a different nomenclature. Usually, the moulded product is referred to as a panel. It can be referred to as a casting for certain geometries and material combinations. It can be referred to as a profile for certain continuous processes. Some of the processes are autoclave moulding, vacuum bag moulding, pressure bag moulding, resin transfer moulding, and light resin transfer moulding. Other fabrication methods. Other types of fabrication include casting, centrifugal casting, braiding (onto a former), continuous casting, filament winding, press moulding, transfer moulding, pultrusion moulding, and slip forming. There are also forming capabilities including CNC filament winding, vacuum infusion, wet lay-up, compression moulding, and thermoplastic moulding, to name a few. The use of curing ovens and paint booths is also required for some projects. Finishing methods. The finishing of composite parts is also crucial in the final design. Many of these finishes will involve rain-erosion coatings or polyurethane coatings. Tooling. The mould and mould inserts are referred to as "tooling". The mould/tooling can be built from different materials. Tooling materials include aluminium, carbon fibre, invar, nickel, reinforced silicone rubber and steel. The tooling material selection is normally based on, but not limited to, the coefficient of thermal expansion, expected number of cycles, end item tolerance, desired or expected surface condition, cure method, glass transition temperature of the material being moulded, moulding method, matrix, cost, and other various considerations. Physical properties. Usually, the composite's physical properties are not isotropic (independent of the direction of applied force), but rather anisotropic (different depending on the direction of the applied force or load). For instance, the composite panel's stiffness will usually depend upon the orientation of the applied forces and/or moments. The composite's strength is bounded by two loading conditions, as shown in the plot to the right. Isostrain rule of mixtures. If both the fibres and matrix are aligned parallel to the loading direction, the deformation of both phases will be the same (assuming there is no delamination at the fibre-matrix interface). This isostrain condition provides the upper bound for composite strength, and is determined by the rule of mixtures: formula_0 where "EC" is the effective composite Young's modulus, and "Vi" and "Ei" are the volume fraction and Young's modulus, respectively, of the composite phases. For example, for a composite material made up of α and β phases as shown in the figure to the right, under isostrain the Young's modulus would be as follows: formula_1 where Vα and Vβ are the respective volume fractions of each phase. 
This can be derived by considering that in the isostrain case, formula_2 Assuming that the composite has a uniform cross section, the stress on the composite is a weighted average between the two phases, formula_3 The stresses in the individual phases are given by Hooke's law, formula_4 formula_5 Combining these equations gives that the overall stress in the composite is formula_6 Then it can be shown that formula_7 Isostress rule of mixtures. The lower bound is dictated by the isostress condition, in which the fibres and matrix are oriented perpendicularly to the loading direction: formula_8 and now the strains become a weighted average formula_9 Rewriting Hooke's law for the individual phases formula_10 formula_11 This leads to formula_12 From the definition of Hooke's law formula_13 and, in general, formula_14 Following the example above, if one had a composite material made up of α and β phases under isostress conditions as shown in the figure to the right, the composite Young's modulus would be: formula_15 The isostrain condition implies that under an applied load, both phases experience the same strain but will feel different stress. Comparatively, under isostress conditions both phases will feel the same stress but the strains will differ between each phase. A generalized equation for any loading condition between isostrain and isostress can be written as: formula_16 where X is a material property such as modulus or stress, c, m, and r stand for the properties of the composite, matrix, and reinforcement materials respectively, and n is a value between 1 and −1. The above equation can be further generalized beyond a two-phase composite to an m-component system: formula_17 Though composite stiffness is maximized when fibres are aligned with the loading direction, so is the possibility of fibre tensile fracture, assuming the tensile strength exceeds that of the matrix. When a fibre has some angle of misorientation θ, several fracture modes are possible. For small values of θ the stress required to initiate fracture is increased by a factor of (cos θ)−2 due to the increased cross-sectional area ("A"/cos θ) of the fibre and the reduced force ("F" cos θ) experienced by the fibre, leading to a composite tensile strength of "σparallel"/cos²θ, where "σparallel" is the tensile strength of the composite with fibres aligned parallel with the applied force. Intermediate angles of misorientation θ lead to matrix shear failure. Again the cross-sectional area is modified, but since shear stress is now the driving force for failure, the area of the matrix parallel to the fibres is of interest; this area increases by a factor of 1/sin θ. Similarly, the force parallel to this area again decreases ("F" cos θ), leading to a total tensile strength of "τmy"/(sin θ cos θ), where "τmy" is the matrix shear strength. Finally, for large values of θ (near π/2) transverse matrix failure is the most likely to occur, since the fibres no longer carry the majority of the load. Still, the tensile strength will be greater than for the purely perpendicular orientation, since the force normal to the fibres decreases by a factor of sin θ while the area on which it acts increases by a factor of 1/sin θ, producing a composite tensile strength of "σperp"/sin²θ, where "σperp" is the tensile strength of the composite with fibres aligned perpendicular to the applied force.
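As a concrete illustration of the two bounds, the short Python sketch below evaluates the isostrain (upper-bound) and isostress (lower-bound) composite moduli for a hypothetical two-phase material over a range of volume fractions; the moduli used are placeholder values of roughly the right order for a stiff fibre-like phase in a compliant matrix, not measured data.

import numpy as np

def isostrain_modulus(E_alpha, E_beta, V_alpha):
    """Upper bound (rule of mixtures): E_C = V_a*E_a + V_b*E_b."""
    V_beta = 1.0 - V_alpha
    return V_alpha * E_alpha + V_beta * E_beta

def isostress_modulus(E_alpha, E_beta, V_alpha):
    """Lower bound (inverse rule of mixtures): 1/E_C = V_a/E_a + V_b/E_b."""
    V_beta = 1.0 - V_alpha
    return 1.0 / (V_alpha / E_alpha + V_beta / E_beta)

# Placeholder moduli in GPa: stiff phase 230, compliant phase 3.
E_alpha, E_beta = 230.0, 3.0
for V_alpha in np.linspace(0.0, 0.6, 4):
    upper = isostrain_modulus(E_alpha, E_beta, V_alpha)
    lower = isostress_modulus(E_alpha, E_beta, V_alpha)
    print(f"V_alpha = {V_alpha:.1f}: isostrain {upper:7.1f} GPa, isostress {lower:6.2f} GPa")

The gap between the two printed curves is what the generalized mixing rule with exponent n interpolates across.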
The majority of commercial composites are formed with random dispersion and orientation of the strengthening fibres, in which case the composite Young's modulus will fall between the isostrain and isostress bounds. However, in applications where the strength-to-weight ratio is engineered to be as high as possible (such as in the aerospace industry), fibre alignment may be tightly controlled. Panel stiffness is also dependent on the design of the panel, for instance the fibre reinforcement and matrix used, the method of panel build, thermoset versus thermoplastic, and the type of weave. In contrast to composites, isotropic materials (for example, aluminium or steel), in standard wrought forms, typically possess the same stiffness regardless of the directional orientation of the applied forces and/or moments. The relationship between forces/moments and strains/curvatures for an isotropic material can be described with the following material properties: Young's modulus, the shear modulus, and Poisson's ratio, in relatively simple mathematical relationships. For an anisotropic material, the mathematics of a second-order tensor and up to 21 material property constants are required. For the special case of orthogonal isotropy, there are three distinct material property constants for each of Young's modulus, shear modulus and Poisson's ratio, giving a total of 9 constants to express the relationship between forces/moments and strains/curvatures. Techniques that take advantage of the anisotropic properties of the materials include mortise and tenon joints (in natural composites such as wood) and pi joints in synthetic composites. Mechanical properties of composites. Particle reinforcement. In general, particle reinforcement strengthens composites less than fiber reinforcement does. It is used to enhance the stiffness of composites while increasing strength and toughness. Because of their mechanical properties, particle-reinforced composites are used in applications in which wear resistance is required. For example, the hardness of cement can be increased drastically by reinforcing it with gravel particles. Particle reinforcement is a highly advantageous method of tuning the mechanical properties of materials since it is very easy to implement while being low cost. The elastic modulus of particle-reinforced composites can be expressed as, formula_18 where E is the elastic modulus and V is the volume fraction. The subscripts c, p and m indicate composite, particle and matrix, respectively. formula_19 is a constant that can be found empirically. Similarly, the tensile strength of particle-reinforced composites can be expressed as, formula_20 where T.S. is the tensile strength, and formula_21 is a constant (not equal to formula_19) that can be found empirically. Continuous fiber reinforcement. In general, continuous fiber reinforcement is implemented by incorporating a fiber, as the strong phase, into a weak phase, the matrix. The reason for the popularity of fibers is that materials with extraordinary strength can be obtained in fiber form. Non-metallic fibers usually show a much higher strength-to-density ratio than metal fibers because of the covalent nature of their bonds. The most famous example of this is carbon fibers, which have many applications extending from sports gear to protective equipment to the space industry. The stress on the composite can be expressed in terms of the volume fractions of the fiber and the matrix. formula_22 where formula_23 is the stress and V is the volume fraction.
The subscripts c, f and m indicate composite, fiber and matrix, respectively. Although the stress–strain behavior of fiber composites can only be determined by testing, there is an expected trend: three stages of the stress–strain curve. The first stage is the region of the stress–strain curve where both the fiber and the matrix are elastically deformed. This linearly elastic region can be expressed in the following form. formula_24 where formula_23 is the stress, formula_25 is the strain, E is the elastic modulus, and V is the volume fraction. The subscripts c, f, and m indicate composite, fiber, and matrix, respectively. After the elastic region is passed for both the fiber and the matrix, the second region of the stress–strain curve can be observed. In the second region, the fiber is still elastically deformed while the matrix is plastically deformed, since the matrix is the weak phase. The instantaneous modulus can be determined from the slope of the stress–strain curve in the second region. The relationship between stress and strain can be expressed as, formula_26 where formula_23 is the stress, formula_25 is the strain, E is the elastic modulus, and V is the volume fraction. The subscripts c, f, and m indicate composite, fiber, and matrix, respectively. To find the modulus in the second region, the derivative of this equation can be used, since the slope of the curve is equal to the modulus. formula_27 In most cases it can be assumed that formula_28 since the second term is much smaller than the first one. In reality, the derivative of stress with respect to strain does not always return the modulus, because of the binding interaction between the fiber and matrix. The strength of the interaction between these two phases can result in changes in the mechanical properties of the composite. The compatibility of the fiber and matrix is a measure of internal stress. Covalently bonded high-strength fibers (e.g. carbon fibers) experience mostly elastic deformation before fracture, since plastic deformation, which occurs through dislocation motion, is restricted in them. In contrast, metallic fibers have more scope to deform plastically, so their composites exhibit a third stage where both the fiber and the matrix are plastically deforming. Metallic fibers have many applications at cryogenic temperatures, which is one of the advantages of composites with metal fibers over nonmetallic ones. The stress in this region of the stress–strain curve can be expressed as, formula_29 where formula_23 is the stress, formula_25 is the strain, E is the elastic modulus, and V is the volume fraction. The subscripts c, f, and m indicate composite, fiber, and matrix, respectively. formula_30 and formula_31 are the fiber and matrix flow stresses, respectively. Just after the third region the composite exhibits necking. The necking strain of the composite lies between the necking strains of the fiber and the matrix, just like the other mechanical properties of the composite. The necking strain of the weak phase is delayed by the strong phase, and the amount of the delay depends upon the volume fraction of the strong phase. Thus, the tensile strength of the composite can be expressed in terms of the volume fraction. formula_32 where T.S. is the tensile strength, formula_23 is the stress, formula_25 is the strain, E is the elastic modulus, and V is the volume fraction. The subscripts c, f, and m indicate composite, fiber, and matrix, respectively.
The composite tensile strength can be expressed as formula_33 for formula_34 less than or equal to formula_35 (an arbitrary critical value of the volume fraction), and as formula_36 for formula_34 greater than or equal to formula_35. The critical value of the volume fraction can be expressed as, formula_37 Evidently, the composite tensile strength is higher than that of the matrix only if formula_38 is greater than formula_39. Thus, the minimum volume fraction of the fiber can be expressed as, formula_40 Although this minimum value is very low in practice, it is very important to know, since the reason for incorporating continuous fibers is to improve the mechanical properties of the material, and this value of the volume fraction is the threshold of that improvement. The effect of fiber orientation. Aligned fibers. A change in the angle between the applied stress and the fiber orientation will affect the mechanical properties of fiber-reinforced composites, especially the tensile strength. This angle, formula_41, can be used to predict the dominant tensile fracture mechanism. At small angles, formula_42, the dominant fracture mechanism is the same as with load-fiber alignment, tensile fracture. The force resolved along the length of the fibers is reduced by a factor of formula_43 from rotation, formula_44, while the resolved area on which the fiber experiences the force is increased by a factor of 1/formula_43 from rotation, formula_45. Taking the effective tensile strength to be formula_46 and the aligned tensile strength formula_47, formula_48 At moderate angles, formula_49, the material experiences shear failure. The force resolved along the fibre direction is again reduced, formula_44. The resolved area on which the force acts is formula_50. The resulting tensile strength depends on the shear strength of the matrix, formula_51. formula_52 At extreme angles, formula_53, the dominant mode of failure is tensile fracture in the matrix in the perpendicular direction. As in the isostress case of layered composite materials, the strength in this direction is lower than in the aligned direction. The force resolved perpendicular to the fibres scales with formula_54, while the area on which it acts scales with 1/formula_54, and the resolved tensile strength is proportional to the transverse strength, formula_55. formula_56 The critical angles at which the dominant fracture mechanism changes can be calculated as, formula_57 formula_58 where formula_59 is the critical angle between longitudinal fracture and shear failure, and formula_60 is the critical angle between shear failure and transverse fracture. Because it ignores length effects, this model is most accurate for continuous fibers and does not effectively capture the strength-orientation relationship of short-fiber-reinforced composites. Furthermore, most realistic systems do not exhibit the local maxima predicted at the critical angles. The Tsai-Hill criterion provides a more complete description of fiber composite tensile strength as a function of orientation angle by coupling the contributing yield stresses: formula_61, formula_62, and formula_51. formula_63 Randomly oriented fibers. Anisotropy in the tensile strength of fiber-reinforced composites can be removed by randomly orienting the fiber directions within the material. This sacrifices the ultimate strength in the aligned direction in exchange for an overall, isotropically strengthened material. formula_64 Where K is an empirically determined reinforcement factor, similar to the constant in the particle reinforcement equation; typical values of K are given just after the sketch below.
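The three failure regimes and the Tsai-Hill envelope described above are easy to evaluate numerically. The sketch below computes the predicted composite tensile strength as a function of the misorientation angle θ; the strength values used for σparallel, σperp and τmy are hypothetical placeholders chosen only to be of a plausible order for a unidirectional fibre composite, not measured data.

import numpy as np

# Placeholder strengths in MPa (illustrative only).
sigma_par = 1500.0   # tensile strength with fibres aligned to the load
sigma_perp = 40.0    # transverse tensile strength
tau_m = 60.0         # matrix shear strength

def strength_piecewise(theta):
    """Governing (minimum) of the three failure-mode strengths at angle theta (radians)."""
    longitudinal = sigma_par / np.cos(theta) ** 2
    shear = tau_m / (np.sin(theta) * np.cos(theta))
    transverse = sigma_perp / np.sin(theta) ** 2
    return min(longitudinal, shear, transverse)

def strength_tsai_hill(theta):
    """Tsai-Hill estimate coupling the three contributing strengths."""
    c, s = np.cos(theta), np.sin(theta)
    term = (c ** 4 / sigma_par ** 2
            + c ** 2 * s ** 2 * (1.0 / tau_m ** 2 - 1.0 / sigma_par ** 2)
            + s ** 4 / sigma_perp ** 2)
    return term ** -0.5

for deg in (5, 15, 30, 45, 60, 85):
    th = np.radians(deg)
    print(f"{deg:2d} deg: piecewise {strength_piecewise(th):8.1f} MPa, "
          f"Tsai-Hill {strength_tsai_hill(th):8.1f} MPa")

With these placeholder numbers the longitudinal-fracture regime survives only up to roughly arctan(τmy/σparallel), a couple of degrees, which illustrates how quickly matrix-dominated failure takes over; the Tsai-Hill curve also smooths out the artificial local maxima that the piecewise model predicts at the critical angles.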
For fibers with randomly distributed orientations in a plane, formula_65, and for a random distribution in 3D, formula_66. Stiffness and Compliance Elasticity. For real applications, most composites are anisotropic or orthotropic materials, and the three-dimensional stress tensor is required for stress and strain analysis. The stiffness and compliance can be written as follows formula_67 and formula_68 In order to simplify the 3D stress state, the plane stress assumption is applied, namely that the out-of-plane stress and out-of-plane strain are insignificant or zero. That is formula_69 and formula_70. formula_71 The stiffness matrix and compliance matrix can then be reduced to formula_72 and formula_73 For a fiber-reinforced composite, the fiber orientation in the material affects the anisotropic properties of the structure. In a characterization technique such as tensile testing, the material properties are measured in the sample (1-2) coordinate system, and the tensors above express the stress-strain relationship in that (1-2) coordinate system, while the known material properties are given in the principal coordinate system (x-y) of the material. Transforming the tensors between the two coordinate systems helps identify the material properties of the tested sample. The transformation matrices for a rotation by an angle formula_74 are formula_75 for formula_76 and formula_77 for formula_78 Types of fibers and their mechanical properties. The most common types of fibers used in industry are glass fibers, carbon fibers, and Kevlar, due to their ease of production and availability. Their mechanical properties are very important to know; therefore a table of their mechanical properties is given below to compare them with S97 steel. The angle of fiber orientation is very important because of the anisotropy of fiber composites (please see the section "Physical properties" for a more detailed explanation). The mechanical properties of the composites can be tested using standard mechanical testing methods by positioning the samples at various angles (the standard angles are 0°, 45°, and 90°) with respect to the orientation of fibers within the composites. In general, 0° axial alignment makes composites resistant to longitudinal bending and axial tension/compression, 90° hoop alignment is used to obtain resistance to internal/external pressure, and ±45° is the ideal choice to obtain resistance against pure torsion. Mechanical properties of aerospace grade & commercial grade carbon fiber composites, fiberglass composite, and aluminum alloy and steel. This table demonstrates one of the most important features and advantages of fiber composites over metals, namely specific strength and specific stiffness. Although the steel and the aluminum alloy have strength and stiffness comparable with those of fiber composites, the specific strength and stiffness of composites are considerably higher than those of steel and the aluminum alloy. Failure. Shock, impact of varying speed, or repeated cyclic stresses can cause the laminate to separate at the interface between two layers, a condition known as delamination. Individual fibres can separate from the matrix, for example by fibre pull-out. Composites can fail on the macroscopic or microscopic scale. Compression failures can happen at both the macro scale and at each individual reinforcing fibre, in the form of compression buckling.
Tension failures can be net-section failures of the part, or degradation of the composite at a microscopic scale where one or more of the layers in the composite fail in tension of the matrix or through failure of the bond between the matrix and fibres. Some composites are brittle and possess little reserve strength beyond the initial onset of failure, while others may undergo large deformations and retain reserve energy-absorbing capacity past the onset of damage. The variety of fibres and matrices that are available, and the mixtures that can be made with blends, leave a very broad range of properties that can be designed into a composite structure. The most famous failure of a brittle ceramic matrix composite occurred when the carbon-carbon composite panel on the leading edge of the wing of the Space Shuttle Columbia fractured when impacted during take-off. This led to the catastrophic break-up of the vehicle when it re-entered the Earth's atmosphere on 1 February 2003. Composites have relatively poor bearing strength compared to metals. Testing. Composites are tested before and after construction to assist in predicting and preventing failures. Pre-construction testing may adopt finite element analysis (FEA) for ply-by-ply analysis of curved surfaces and for predicting wrinkling, crimping and dimpling of composites. Materials may be tested during manufacturing and after construction by various non-destructive methods including ultrasonics, thermography, shearography and X-ray radiography, as well as laser bond inspection for NDT of relative bond strength integrity in a localized area. References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "E_C = \\sum_{i=1} V_iE_i" }, { "math_id": 1, "text": "E_C=V_\\alpha E_\\alpha+V_\\beta E_\\beta" }, { "math_id": 2, "text": " \\epsilon_C = \\epsilon_\\alpha = \\epsilon_\\beta = \\epsilon" }, { "math_id": 3, "text": " \\sigma_C = \\sigma_\\alpha V_\\alpha + \\sigma_\\beta V_\\beta" }, { "math_id": 4, "text": " \\sigma_\\beta = E_\\beta \\epsilon" }, { "math_id": 5, "text": " \\sigma_\\alpha = E_\\alpha \\epsilon" }, { "math_id": 6, "text": " \\sigma_C = E_\\alpha V_\\alpha \\epsilon + E_\\beta V_\\beta \\epsilon = (E_\\alpha V_\\alpha + E_\\beta V_\\beta) \\epsilon" }, { "math_id": 7, "text": " E_C = (E_\\alpha V_\\alpha + E_\\beta V_\\beta)" }, { "math_id": 8, "text": " \\sigma_C = \\sigma_\\alpha = \\sigma_\\beta = \\sigma" }, { "math_id": 9, "text": " \\epsilon_C = \\epsilon_\\alpha V_\\alpha + \\epsilon_\\beta V_\\beta" }, { "math_id": 10, "text": " \\epsilon_\\beta = \\frac{\\sigma}{E_\\beta}" }, { "math_id": 11, "text": " \\epsilon_\\alpha = \\frac{\\sigma}{E_\\alpha}" }, { "math_id": 12, "text": " \\epsilon_c = V_\\beta \\frac{\\sigma}{\\epsilon_\\beta} + V_\\alpha \\frac{\\sigma}{\\epsilon_\\alpha} = (\\frac{V_\\alpha}{\\epsilon_\\alpha} + \\frac{V_\\beta}{\\epsilon_\\beta}) \\sigma" }, { "math_id": 13, "text": " \\frac{1}{E_C} = \\frac{V_\\alpha}{E_\\alpha} + \\frac{V_\\beta}{E_\\beta}" }, { "math_id": 14, "text": "\\frac{1}{E_C} = \\sum_{i=1}\\frac{V_i}{E_i}" }, { "math_id": 15, "text": "E_C=(E_\\alpha E_\\beta)/(V_\\alpha E_\\beta+V_\\beta E_\\alpha)" }, { "math_id": 16, "text": "(X_c)^n = V_m(X_m)^n + V_r(X_r)^n" }, { "math_id": 17, "text": "(X_c)^n = \\sum_{i=1}^{m}V_i(X_i)^n" }, { "math_id": 18, "text": "E_c = V_m E_m + K_c V_p E_p" }, { "math_id": 19, "text": "K_c" }, { "math_id": 20, "text": "(T.S.)_c = V_m (T.S.)_m + K_s V_p (T.S.)_p" }, { "math_id": 21, "text": "K_s" }, { "math_id": 22, "text": "\\sigma_c = V_f \\sigma_f + V_m \\sigma_m" }, { "math_id": 23, "text": "\\sigma" }, { "math_id": 24, "text": "\\sigma_c - E_c \\epsilon_c = \\epsilon_c (V_f E_f + V_m E_m)" }, { "math_id": 25, "text": "\\epsilon" }, { "math_id": 26, "text": "\\sigma_c = V_f E_f \\epsilon_c + V_m \\sigma_m (\\epsilon_c)" }, { "math_id": 27, "text": "E_c' = \\frac{d \\sigma_c}{d \\epsilon_c} = V_f E_f + V_m \\left(\\frac{d \\sigma_c}{d \\epsilon_c}\\right)" }, { "math_id": 28, "text": "E_c'= V_f E_f" }, { "math_id": 29, "text": "\\sigma_c (\\epsilon_c) = V_f \\sigma_f \\epsilon_c + V_m \\sigma_m (\\epsilon_c)" }, { "math_id": 30, "text": "\\sigma_f (\\epsilon_c)" }, { "math_id": 31, "text": "\\sigma_m (\\epsilon_c)" }, { "math_id": 32, "text": "(T.S.)_c=V_f(T.S.)_f+V_m \\sigma_m(\\epsilon_m)" }, { "math_id": 33, "text": "(T.S.)_c=V_m(T.S.)_m" }, { "math_id": 34, "text": "V_f" }, { "math_id": 35, "text": "V_c" }, { "math_id": 36, "text": "(T.S.)_c= V_f(T.S.)_f + V_m(\\sigma_m)" }, { "math_id": 37, "text": "V_c= \\frac{[(T.S.)_m - \\sigma_m(\\epsilon_f)]}{[(T.S.)_f + (T.S.)_m - \\sigma_m(\\epsilon_f)]}" }, { "math_id": 38, "text": "(T.S.)_c" }, { "math_id": 39, "text": "(T.S.)_m\n" }, { "math_id": 40, "text": "V_c= \\frac{[(T.S.)_m - \\sigma_m(\\epsilon_f)]}{[(T.S.)_f - \\sigma_m(\\epsilon_f)]}" }, { "math_id": 41, "text": "\\theta" }, { "math_id": 42, "text": "\\theta \\approx 0^{\\circ}" }, { "math_id": 43, "text": "\\cos \\theta" }, { "math_id": 44, "text": "F_{\\mbox{res}}=F\\cos\\theta" }, { "math_id": 45, "text": "A_{\\mbox{res}}=A_{0}/\\cos\\theta" }, { "math_id": 46, "text": "(\\mbox{T.S.})_{\\mbox{c}}=F_{\\mbox{res}}/A_{\\mbox{res}}" }, { "math_id": 47, "text": 
"\\sigma^*_\\parallel=F/A" }, { "math_id": 48, "text": "(\\mbox{T.S.})_{\\mbox{c}}\\;(\\mbox{longitudinal fracture})=\\frac{\\sigma^*_\\parallel}{\\cos^2\\theta}" }, { "math_id": 49, "text": "\\theta \\approx 45^{\\circ}" }, { "math_id": 50, "text": "A_{\\mbox{res}}=A_m/\\sin\\theta" }, { "math_id": 51, "text": "\\tau_m" }, { "math_id": 52, "text": "(\\mbox{T.S.})_{\\mbox{c}}\\;(\\mbox{shear failure})=\\frac{\\tau_m}{\\sin{\\theta}\\cos{\\theta}}" }, { "math_id": 53, "text": "\\theta \\approx 90^{\\circ}" }, { "math_id": 54, "text": "\\sin\\theta" }, { "math_id": 55, "text": "\\sigma^{*}_{\\perp}" }, { "math_id": 56, "text": "(\\mbox{T.S.})_{\\mbox{c}}\\;(\\mbox{transverse fracture})=\\frac{\\sigma^*_{\\perp}}{\\sin^2\\theta}" }, { "math_id": 57, "text": "\\theta_{c_1}=\\tan^{-1}\\left({\\frac{\\tau_m}{\\sigma^*_\\parallel}}\\right)" }, { "math_id": 58, "text": "\\theta_{c_2}=\\tan^{-1}\\left({\\frac{\\sigma^*_\\perp}{\\tau_m}}\\right)" }, { "math_id": 59, "text": "\\theta_{c_1}" }, { "math_id": 60, "text": "\\theta_{c_2}" }, { "math_id": 61, "text": "\\sigma^{*}_\\parallel" }, { "math_id": 62, "text": "\\sigma^{*}_\\perp" }, { "math_id": 63, "text": "(\\mbox{T.S.})_{\\mbox{c}}\\;(\\mbox{Tsai-Hill})=\\bigg[{\\frac{\\cos^4\\theta}{({\\sigma^*_\\parallel})^2}}+\\cos^2\\theta\\sin^2\\theta\\left({\\frac{1}{({\\tau_m})^2}}-{\\frac{1}{({\\sigma^*_\\parallel})^2}}\\right)+{\\frac{\\sin^4\\theta}{({\\sigma^*_\\perp})^2}}\\bigg]^{-1/2}" }, { "math_id": 64, "text": "E_c=KV_{f}E_{f}+V_{m}E_{m}" }, { "math_id": 65, "text": "K \\approx 0.38" }, { "math_id": 66, "text": "K \\approx 0.20" }, { "math_id": 67, "text": "\\begin{bmatrix} \\sigma_1 \\\\ \\sigma_2 \\\\ \\sigma_3 \\\\ \\sigma_4 \\\\ \\sigma_5 \\\\ \\sigma_6 \\end{bmatrix} =\n \\begin{bmatrix}\n C_{11} & C_{12} & C_{13} & C_{14} & C_{15} & C_{16} \\\\\nC_{12} & C_{22} & C_{23} & C_{24} & C_{25} & C_{26} \\\\\nC_{13} & C_{23} & C_{33} & C_{34} & C_{35} & C_{36} \\\\\nC_{14} & C_{24} & C_{34} & C_{44} & C_{45} & C_{46} \\\\\nC_{15} & C_{25} & C_{35} & C_{45} & C_{55} & C_{56} \\\\\nC_{16} & C_{26} & C_{36} & C_{46} & C_{56} & C_{66} \\end{bmatrix}\n \\begin{bmatrix} \\varepsilon_1 \\\\ \\varepsilon_2 \\\\ \\varepsilon_3 \\\\ \\varepsilon_4 \\\\ \\varepsilon_5 \\\\ \\varepsilon_6 \\end{bmatrix}" }, { "math_id": 68, "text": "\\begin{bmatrix} \\varepsilon_1 \\\\ \\varepsilon_2 \\\\ \\varepsilon_3 \\\\ \\varepsilon_4 \\\\ \\varepsilon_5 \\\\ \\varepsilon_6 \\end{bmatrix} =\n \\begin{bmatrix}\n S_{11} & S_{12} & S_{13} & S_{14} & S_{15} & S_{16} \\\\\nS_{12} & S_{22} & S_{23} & S_{24} & S_{25} & S_{26} \\\\\nS_{13} & S_{23} & S_{33} & S_{34} & S_{35} & S_{36} \\\\\nS_{14} & S_{24} & S_{34} & S_{44} & S_{45} & S_{46} \\\\\nS_{15} & S_{25} & S_{35} & S_{45} & S_{55} & S_{56} \\\\\nS_{16} & S_{26} & S_{36} & S_{46} & S_{56} & S_{66} \\end{bmatrix}\n \\begin{bmatrix} \\sigma_1 \\\\ \\sigma_2 \\\\ \\sigma_3 \\\\ \\sigma_4 \\\\ \\sigma_5 \\\\ \\sigma_6 \\end{bmatrix}" }, { "math_id": 69, "text": "\\sigma_3 = \\sigma_4 = \\sigma_5 = 0" }, { "math_id": 70, "text": "\\varepsilon_4 = \\varepsilon_5 = 0" }, { "math_id": 71, "text": "\\begin{bmatrix} \\varepsilon_1 \\\\ \\varepsilon_2 \\\\ \\varepsilon_3 \\\\ \\varepsilon_4 \\\\ \\varepsilon_5 \\\\ \\varepsilon_6 \\end{bmatrix} = \n \\begin{bmatrix}\n \\tfrac{1}{E_{\\rm 1}} & - \\tfrac{\\nu_{\\rm 21}}{E_{\\rm 2}} & - \\tfrac{\\nu_{\\rm 31}}{E_{\\rm 3}} & 0 & 0 & 0 \\\\\n -\\tfrac{\\nu_{\\rm 12}}{E_{\\rm 1}} & \\tfrac{1}{E_{\\rm 2}} & - \\tfrac{\\nu_{\\rm 32}}{E_{\\rm 3}} & 0 & 0 & 0 \\\\\n -\\tfrac{\\nu_{\\rm 
13}}{E_{\\rm 1}} & - \\tfrac{\\nu_{\\rm 23}}{E_{\\rm 2}} & \\tfrac{1}{E_{\\rm 3}} & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & \\tfrac{1}{G_{\\rm 23}} & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & \\tfrac{1}{G_{\\rm 31}} & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & \\tfrac{1}{G_{\\rm 12}} \\\\\n \\end{bmatrix}\n \\begin{bmatrix} \\sigma_1 \\\\ \\sigma_2 \\\\ \\sigma_3 \\\\ \\sigma_4 \\\\ \\sigma_5 \\\\ \\sigma_6 \\end{bmatrix}" }, { "math_id": 72, "text": "\\begin{bmatrix} \\sigma_1 \\\\ \\sigma_2 \\\\ \\sigma_6 \\end{bmatrix} = \n \\begin{bmatrix}\n \\tfrac{E_{\\rm 1}}{1-{\\nu_{\\rm 12}}{\\nu_{\\rm 21}}} & \\tfrac{E_{\\rm 2}{\\nu_{\\rm 12}}}{1-{\\nu_{\\rm 12}}{\\nu_{\\rm 21}}} & 0 \\\\\n \\tfrac{E_{\\rm 2}{\\nu_{\\rm 12}}}{1-{\\nu_{\\rm 12}}{\\nu_{\\rm 21}}} & \\tfrac{E_{\\rm 2}}{1-{\\nu_{\\rm 12}}{\\nu_{\\rm 21}}} & 0 \\\\\n 0 & 0 & G_{\\rm 12} \\\\\n \\end{bmatrix}\n\\begin{bmatrix} \\varepsilon_1 \\\\ \\varepsilon_2 \\\\ \\varepsilon_6 \\end{bmatrix} " }, { "math_id": 73, "text": "\\begin{bmatrix} \\varepsilon_1 \\\\ \\varepsilon_2 \\\\ \\varepsilon_6 \\end{bmatrix} = \n \\begin{bmatrix}\n \\tfrac{1}{E_{\\rm 1}} & - \\tfrac{\\nu_{\\rm 21}}{E_{\\rm 2}} & 0 \\\\\n -\\tfrac{\\nu_{\\rm 12}}{E_{\\rm 1}} & \\tfrac{1}{E_{\\rm 2}} & 0 \\\\\n 0 & 0 & \\tfrac{1}{G_{\\rm 12}} \\\\\n \\end{bmatrix}\n \\begin{bmatrix} \\sigma_1 \\\\ \\sigma_2 \\\\ \\sigma_6 \\end{bmatrix}" }, { "math_id": 74, "text": "\\theta " }, { "math_id": 75, "text": "T(\\theta)_\\epsilon = \n\\begin{bmatrix} \\cos^2 \\theta & \\sin^2 \\theta & \\cos \\theta\\sin \\theta\n\\\\ sin^2 \\theta & \\cos^2 \\theta & -\\cos \\theta\\sin \\theta\n\\\\ -2\\cos \\theta\\sin \\theta & 2\\cos \\theta\\sin \\theta & \\cos^2 \\theta - \\sin^2 \\theta \\end{bmatrix} " }, { "math_id": 76, "text": "\\begin{bmatrix} \\acute{\\epsilon} \\end{bmatrix} = T(\\theta)_\\epsilon \\begin{bmatrix} \\epsilon \\end{bmatrix}\n\n " }, { "math_id": 77, "text": "T(\\theta)_\\sigma = \n\\begin{bmatrix} \\cos^2 \\theta & \\sin^2 \\theta & 2\\cos \\theta\\sin \\theta\n\\\\ sin^2 \\theta & \\cos^2 \\theta & -2\\cos \\theta\\sin \\theta\n\\\\ -\\cos \\theta\\sin \\theta & \\cos \\theta\\sin \\theta & \\cos^2 \\theta - \\sin^2 \\theta \\end{bmatrix} " }, { "math_id": 78, "text": "\\begin{bmatrix} \\acute{\\sigma} \\end{bmatrix} = T(\\theta)_\\sigma \\begin{bmatrix} \\sigma \\end{bmatrix} " } ]
https://en.wikipedia.org/wiki?curid=157616
15761992
Galvani potential
In electrochemistry, the Galvani potential (also called Galvani potential difference, or inner potential difference, Δφ, delta phi) is the electric potential difference between two points in the bulk of two phases. These phases can be two different solids (e.g., two metals joined), or a solid and a liquid (e.g., a metal electrode submerged in an electrolyte). The Galvani potential is named after Luigi Galvani. Galvani potential between two metals. First, consider the Galvani potential between two metals. When two metals are electrically isolated from each other, an arbitrary voltage difference may exist between them. However, when two different metals are brought into electronic contact, electrons will flow from the metal with a lower voltage to the metal with the higher voltage until the Fermi levels of the electrons in the bulk of both phases are equal. The actual number of electrons that passes between the two phases is small (it depends on the capacitance between the objects), and the occupancies of the electron bands are practically unaffected. Rather, this small increase or decrease in charge results in a shift of all the energy levels in the metals. An electrical double layer is formed at the interface between the two phases. The equality of the electrochemical potential between the two different phases in contact can be written as: formula_0 where: formula_1 denotes the electrochemical potential of the charged species j, and the superscripts (1) and (2) label the two phases. Now, the electrochemical potential of a species is defined as the sum of its chemical potential and the local electrostatic potential: formula_2 where: μj is the chemical potential of species j, zj is its charge number, F is the Faraday constant, and φ is the Galvani (electrostatic) potential of the phase. From the two equations above: formula_3 where the difference on the left-hand side is the Galvani potential difference between the phases (1) and (2). Thus, the Galvani potential difference is determined entirely by the chemical difference of the two phases; specifically, by the difference of the chemical potential of the charge carriers in the two phases. The Galvani potential difference between an electrode and an electrolyte (or between any other two electrically conductive phases) forms in an analogous fashion, although the chemical potentials in the equation above may need to include all species involved in the electrochemical reaction at the interface. Relation to measured cell potential. The Galvani potential difference is not directly measurable using voltmeters. The measured potential difference between two metal electrodes assembled into a cell does not equal the difference of the Galvani potentials of the two metals (or their combination with the solution Galvani potential) because the cell needs to contain another metal-metal interface, as in the following schematic of a galvanic cell: M(1) | S | M(2) | M(1)' where: M(1) and M(2) are the two metals, S is the electrolyte solution, and M(1)' is a second piece of metal M(1) that closes the circuit at the measuring instrument. Instead, the measured cell potential can be written as: formula_4 where: E(1) and E(2) are the electrode potentials of the two metals and φ(S) is the Galvani potential of the solution. From the above equation, two metals in electronic contact (i.e., under electronic equilibrium) must have the same electrode potential. Also, the electrochemical potentials of the electrons within the two metals will be the same. However, their Galvani potentials will be different (unless the metals are identical). Moreover, if we define formula_5, the "electric potential" (or the "electromotive potential" in [6]), as formula_6, this is effectively the negative of the reduced electrochemical potential of the electrons, given in units of volts. It is noted that what one experimentally measures using an inert metallic probe and a voltmeter is formula_5. References. <templatestyles src="Reflist/styles.css" />
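As a rough numerical illustration of the relation derived above, Δφ = (μj(1) − μj(2))/(zjF), the short sketch below uses invented chemical potentials for the electrons in two metallic phases; the numbers are hypothetical and only their difference matters.

F = 96485.0  # Faraday constant, C/mol

# Hypothetical (invented) chemical potentials of electrons in two metals, J/mol.
mu_electron_phase1 = -430_000.0
mu_electron_phase2 = -410_000.0
z_electron = -1  # charge number of the electron

# Galvani potential difference phi(2) - phi(1) from equality of the
# electrochemical potentials: delta_phi = (mu1 - mu2) / (z F)
delta_phi = (mu_electron_phase1 - mu_electron_phase2) / (z_electron * F)
print(f"phi(2) - phi(1) = {delta_phi:.3f} V")   # about +0.21 V for these numbers

As the text notes, this inner potential difference is fixed entirely by the chemistry of the two phases and is not what a voltmeter connected across a real cell reports.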
[ { "math_id": 0, "text": "\\overline{\\mu}_j^{(1)} = \\overline{\\mu}_j^{(2)}" }, { "math_id": 1, "text": "\\overline{\\mu}" }, { "math_id": 2, "text": "\\overline{\\mu}_j = \\mu_j + z_j F \\phi" }, { "math_id": 3, "text": "\\phi^{(2)} - \\phi^{(1)} = \\frac {\\mu_j^{(1)} - \\mu_j^{(2)}} {z_j F}" }, { "math_id": 4, "text": "E^{(2)} - E^{(1)} = \\left(\\phi^{(2)} - \\phi^{(S)} - \\frac {\\mu_j^{(2)}} {z_j F}\\right) - \\left(\\phi^{(1)} - \\phi^{(S)} - \\frac {\\mu_j^{(1)}} {z_j F}\\right) = \\left(\\phi^{(2)} - \\phi^{(1)}\\right) - \\left(\\frac {\\mu_j^{(2)} - \\mu_j^{(1)}} {z_j F}\\right)\n" }, { "math_id": 5, "text": "\\pi" }, { "math_id": 6, "text": "\\pi=-\\frac{\\mu_e}{F}+\\phi" } ]
https://en.wikipedia.org/wiki?curid=15761992
157620
Electrochemical potential
Intensive physical property of substances In electrochemistry, the electrochemical potential (ECP), "μ", is a thermodynamic measure of chemical potential that does not omit the energy contribution of electrostatics. Electrochemical potential is expressed in the unit of J/mol. Introduction. Each chemical species (for example, "water molecules", "sodium ions", "electrons", etc.) has an electrochemical potential (a quantity with units of energy) at any given point in space, which represents how easy or difficult it is to add more of that species to that location. If possible, a species will move from areas with higher electrochemical potential to areas with lower electrochemical potential; in equilibrium, the electrochemical potential will be constant everywhere for each species (it may have a different value for different species). For example, if a glass of water has sodium ions (Na+) dissolved uniformly in it, and an electric field is applied across the water, then the sodium ions will tend to get pulled by the electric field towards one side. We say the ions have electric potential energy, and are moving to lower their potential energy. Likewise, if a glass of water has a lot of dissolved sugar on one side and none on the other side, each sugar molecule will randomly diffuse around the water, until there is equal concentration of sugar everywhere. We say that the sugar molecules have a "chemical potential", which is higher in the high-concentration areas, and the molecules move to lower their chemical potential. These two examples show that an electrical potential and a chemical potential can both give the same result: A redistribution of the chemical species. Therefore, it makes sense to combine them into a single "potential", the "electrochemical potential", which can directly give the "net" redistribution taking "both" into account. It is (in principle) easy to measure whether or not two regions (for example, two glasses of water) have the same electrochemical potential for a certain chemical species (for example, a solute molecule): Allow the species to freely move back and forth between the two regions (for example, connect them with a semi-permeable membrane that lets only that species through). If the chemical potential is the same in the two regions, the species will occasionally move back and forth between the two regions, but on average there is just as much movement in one direction as the other, and there is zero net migration (this is called "diffusive equilibrium"). If the chemical potentials of the two regions are different, more molecules will move to the lower chemical potential than the other direction. Moreover, when there is "not" diffusive equilibrium, i.e., when there is a tendency for molecules to diffuse from one region to another, then there is a certain free energy released by each net-diffusing molecule. This energy, which can sometimes be harnessed (a simple example is a concentration cell), and the free-energy per mole is exactly equal to the electrochemical potential difference between the two regions. Conflicting terminologies. It is common in electrochemistry and solid-state physics to discuss both the chemical potential and the electrochemical potential of the electrons. However, in the two fields, the definitions of these two terms are sometimes swapped. 
In electrochemistry, the "electrochemical potential" of electrons (or any other species) is the total potential, including both the (internal, nonelectrical) chemical potential and the electric potential, and is by definition constant across a device in equilibrium, whereas the "chemical potential" of electrons is equal to the electrochemical potential minus the local electric potential energy per electron. In solid-state physics, the definitions are normally compatible with this, but occasionally the definitions are swapped. This article uses the electrochemistry definitions. Definition and usage. In generic terms, electrochemical potential is the mechanical work done in bringing 1 mole of an ion from a standard state to a specified concentration and electrical potential. According to the IUPAC definition, it is the partial molar Gibbs energy of the substance at the specified electric potential, where the substance is in a specified phase. Electrochemical potential can be expressed as formula_0 where: "μ̄i" is the electrochemical potential of species i, "μi" is its chemical potential, "zi" is the charge number (valency) of the ion, F is the Faraday constant, and Φ is the local electrostatic potential. In the special case of an uncharged atom, "zi" = 0, and so "μ̄i" = "μi". Electrochemical potential is important in biological processes that involve molecular diffusion across membranes, in electroanalytical chemistry, and in industrial applications such as batteries and fuel cells. It represents one of the many interchangeable forms of potential energy through which energy may be conserved. In cell membranes, the electrochemical potential is the sum of the chemical potential and the membrane potential. Incorrect usage. The term "electrochemical potential" is sometimes used to mean an electrode potential (either of a corroding electrode, an electrode with a non-zero net reaction or current, or an electrode at equilibrium). In some contexts, the electrode potential of corroding metals is called the "electrochemical corrosion potential", which is often abbreviated as ECP, and the word "corrosion" is sometimes omitted. This usage can lead to confusion. The two quantities have different meanings and different dimensions: the dimension of electrochemical potential is energy per mole while that of electrode potential is voltage (energy per charge). References. <templatestyles src="Reflist/styles.css" />
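To make the definition concrete, the following sketch evaluates the electrochemical potential difference of an ion across a cell membrane as the sum of a chemical (concentration) term and an electrical term. The concentrations and membrane potential used are illustrative example values, not a statement about any particular cell.

import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol
T = 310.0      # temperature, K

def delta_mu_bar(z, c_in, c_out, delta_phi_volts):
    """Electrochemical potential difference (inside minus outside), J/mol.

    Chemical term: R*T*ln(c_in/c_out); electrical term: z*F*delta_phi,
    where delta_phi is the inside-minus-outside electric potential."""
    return R * T * math.log(c_in / c_out) + z * F * delta_phi_volts

# Illustrative example: a monovalent cation (z = +1), 12 mM inside vs
# 145 mM outside, and a membrane potential of -70 mV (inside negative).
d = delta_mu_bar(z=+1, c_in=0.012, c_out=0.145, delta_phi_volts=-0.070)
print(f"Delta mu_bar = {d/1000:.1f} kJ/mol")  # negative: inward movement is favourable

Both contributions are negative here, so the ion tends to move inward; at equilibrium the two terms would cancel, which is the familiar Nernst condition.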
[ { "math_id": 0, "text": "\\bar{\\mu}_i = \\mu_i + z_i F\\Phi," } ]
https://en.wikipedia.org/wiki?curid=157620
1576293
Rössler attractor
Attractor for chaotic Rössler system The Rössler attractor () is the attractor for the Rössler system, a system of three non-linear ordinary differential equations originally studied by Otto Rössler in the 1970s. These differential equations define a continuous-time dynamical system that exhibits chaotic dynamics associated with the fractal properties of the attractor. Rössler interpreted it as a formalization of a taffy-pulling machine. Some properties of the Rössler system can be deduced via linear methods such as eigenvectors, but the main features of the system require non-linear methods such as Poincaré maps and bifurcation diagrams. The original Rössler paper states the Rössler attractor was intended to behave similarly to the Lorenz attractor, but also be easier to analyze qualitatively. An orbit within the attractor follows an outward spiral close to the formula_3 plane around an unstable fixed point. Once the graph spirals out enough, a second fixed point influences the graph, causing a rise and twist in the formula_4-dimension. In the time domain, it becomes apparent that although each variable is oscillating within a fixed range of values, the oscillations are chaotic. This attractor has some similarities to the Lorenz attractor, but is simpler and has only one manifold. Otto Rössler designed the Rössler attractor in 1976, but the originally theoretical equations were later found to be useful in modeling equilibrium in chemical reactions. Definition. The defining equations of the Rössler system are: formula_5 Rössler studied the chaotic attractor with formula_6, formula_7, and formula_8, though properties of formula_9, formula_10, and formula_11 have been more commonly used since. Another line of the parameter space was investigated using the topological analysis. It corresponds to formula_12, formula_13, and formula_14 was chosen as the bifurcation parameter. How Rössler discovered this set of equations was investigated by Letellier and Messager. Stability analysis. Some of the Rössler attractor's elegance is due to two of its equations being linear; setting formula_16, allows examination of the behavior on the formula_3 plane formula_17 The stability in the formula_3 plane can then be found by calculating the eigenvalues of the Jacobian formula_18, which are formula_19. From this, we can see that when formula_20, the eigenvalues are complex and both have a positive real component, making the origin unstable with an outwards spiral on the formula_3 plane. Now consider the formula_4 plane behavior within the context of this range for formula_14. So as long as formula_21 is smaller than formula_22, the formula_22 term will keep the orbit close to the formula_3 plane. As the orbit approaches formula_21 greater than formula_22, the formula_4-values begin to climb. As formula_4 climbs, though, the formula_23 in the equation for formula_24 stops the growth in formula_21. Fixed points. In order to find the fixed points, the three Rössler equations are set to zero and the (formula_21,formula_25,formula_4) coordinates of each fixed point were determined by solving the resulting equations. This yields the general equations of each of the fixed point coordinates: formula_26 Which in turn can be used to show the actual fixed points for a given set of parameter values: formula_27 formula_28 As shown in the general plots of the Rössler Attractor above, one of these fixed points resides in the center of the attractor loop and the other lies relatively far from the attractor. 
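A minimal numerical sketch of the system defined above: the code below integrates the Rössler equations with a basic fourth-order Runge–Kutta step (a library ODE solver could be used instead) and also evaluates the two fixed points from the expressions just given. It assumes the classic parameter values a = 0.2, b = 0.2, c = 5.7; the initial condition and step size are arbitrary choices.

import numpy as np

a, b, c = 0.2, 0.2, 5.7

def rossler(state):
    x, y, z = state
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def rk4_step(state, dt):
    k1 = rossler(state)
    k2 = rossler(state + 0.5 * dt * k1)
    k3 = rossler(state + 0.5 * dt * k2)
    k4 = rossler(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate a single trajectory onto the attractor.
state = np.array([1.0, 1.0, 1.0])
dt, n_steps = 0.01, 100_000
trajectory = np.empty((n_steps, 3))
for i in range(n_steps):
    state = rk4_step(state, dt)
    trajectory[i] = state
print("x range on the attractor:", trajectory[:, 0].min(), trajectory[:, 0].max())

# Fixed points from x = (c +/- sqrt(c^2 - 4ab)) / 2, y = -x/a, z = x/a.
disc = np.sqrt(c ** 2 - 4 * a * b)
for x_fp in ((c - disc) / 2, (c + disc) / 2):
    print("fixed point:", (x_fp, -x_fp / a, x_fp / a))

With these parameters the central fixed point comes out at roughly (0.007, −0.035, 0.035) and the outlier near (5.69, −28.5, 28.5), consistent with the description above of one fixed point inside the attractor loop and one far from it.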
Eigenvalues and eigenvectors. The stability of each of these fixed points can be analyzed by determining their respective eigenvalues and eigenvectors. Beginning with the Jacobian: formula_29 the eigenvalues can be determined by solving the following cubic: formula_30 For the centrally located fixed point, Rössler's original parameter values of a=0.2, b=0.2, and c=5.7 yield eigenvalues of: formula_31 formula_32 formula_33 The magnitude of a negative eigenvalue characterizes the level of attraction along the corresponding eigenvector. Similarly the magnitude of a positive eigenvalue characterizes the level of repulsion along the corresponding eigenvector. The eigenvectors corresponding to these eigenvalues are: formula_34 formula_35 formula_36 These eigenvectors have several interesting implications. First, the two eigenvalue/eigenvector pairs (formula_37 and formula_38) are responsible for the steady outward slide that occurs in the main disk of the attractor. The last eigenvalue/eigenvector pair is attracting along an axis that runs through the center of the manifold and accounts for the z motion that occurs within the attractor. This effect is roughly demonstrated with the figure below. The figure examines the central fixed point eigenvectors. The blue line corresponds to the standard Rössler attractor generated with formula_0, formula_1, and formula_15. The red dot in the center of this attractor is formula_39. The red line intersecting that fixed point is an illustration of the repulsing plane generated by formula_37 and formula_38. The green line is an illustration of the attracting formula_40. The magenta line is generated by stepping backwards through time from a point on the attracting eigenvector which is slightly above formula_39 – it illustrates the behavior of points that become completely dominated by that vector. Note that the magenta line nearly touches the plane of the attractor before being pulled upwards into the fixed point; this suggests that the general appearance and behavior of the Rössler attractor is largely a product of the interaction between the attracting formula_40 and the repelling formula_37 and formula_38 plane. Specifically it implies that a sequence generated from the Rössler equations will begin to loop around formula_39, start being pulled upwards into the formula_40 vector, creating the upward arm of a curve that bends slightly inward toward the vector before being pushed outward again as it is pulled back towards the repelling plane. For the outlier fixed point, Rössler's original parameter values of formula_0, formula_1, and formula_15 yield eigenvalues of: formula_41 formula_42 formula_43 The eigenvectors corresponding to these eigenvalues are: formula_44 formula_45 formula_46 Although these eigenvalues and eigenvectors exist in the Rössler attractor, their influence is confined to iterations of the Rössler system whose initial conditions are in the general vicinity of this outlier fixed point. Except in those cases where the initial conditions lie on the attracting plane generated by formula_47 and formula_48, this influence effectively involves pushing the resulting system towards the general Rössler attractor. As the resulting sequence approaches the central fixed point and the attractor itself, the influence of this distant fixed point (and its eigenvectors) will wane. Poincaré map. The Poincaré map is constructed by plotting the value of the function every time it passes through a set plane in a specific direction. 
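For instance, a section can be assembled numerically by recording each directed crossing of a chosen plane. The sketch below is a rough illustration of this, using the a = 0.1, b = 0.1, c = 14 parameter values quoted later in this section and the x = 0.1 plane; the step size, trajectory length and initial condition are arbitrary choices, and no transient is discarded.

import numpy as np

a, b, c = 0.1, 0.1, 14.0

def deriv(s):
    x, y, z = s
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def rk4(s, dt):
    k1 = deriv(s)
    k2 = deriv(s + 0.5 * dt * k1)
    k3 = deriv(s + 0.5 * dt * k2)
    k4 = deriv(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Collect Poincaré-section points on the plane x = 0.1, keeping only
# crossings in the direction of increasing x.
plane_x, dt = 0.1, 0.005
s = np.array([1.0, 1.0, 1.0])
section = []
for _ in range(400_000):
    s_new = rk4(s, dt)
    if s[0] < plane_x <= s_new[0]:                     # upward crossing
        f = (plane_x - s[0]) / (s_new[0] - s[0])       # linear interpolation
        section.append(s + f * (s_new - s))
    s = s_new
section = np.array(section)
print(section.shape[0], "crossings; the (y, z) pairs form the Poincaré map")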
An example would be plotting the formula_51 value every time the trajectory passes through the formula_52 plane where formula_21 is changing from negative to positive, as is commonly done when studying the Lorenz attractor. In the case of the Rössler attractor, the formula_52 plane is uninteresting, as the map always crosses the formula_53 plane at formula_16 due to the nature of the Rössler equations. In the formula_54 plane for formula_49, formula_50, formula_2, the Poincaré map shows the upswing in formula_4 values as formula_21 increases, as is to be expected due to the upswing and twist section of the Rössler plot. The number of points in this specific Poincaré plot is infinite, but when a different formula_22 value is used, the number of points can vary. For example, with a formula_22 value of 4, there is only one point on the Poincaré map, because the function yields a periodic orbit of period one, while if the formula_22 value is set to 12.8, there would be six points corresponding to a period-six orbit. Lorenz map. The Lorenz map is the relation between successive maxima of a coordinate in a trajectory. Consider a trajectory on the attractor, and let formula_55 be the n-th maximum of its x-coordinate. Then the formula_55-formula_56 scatterplot is almost a curve, meaning that knowing formula_55 one can almost exactly predict formula_56. Mapping local maxima. In the original paper on the Lorenz attractor, Edward Lorenz analyzed the local maxima of formula_4 against the immediately preceding local maxima. When visualized, the plot resembled the tent map, implying that similar analysis can be used between the map and the attractor. For the Rössler attractor, when the formula_57 local maximum is plotted against the next local formula_4 maximum, formula_58, the resulting plot (shown here for formula_0, formula_1, formula_15) is unimodal, resembling a skewed Hénon map. Since the Rössler attractor can be used to create a pseudo-1-D map in this way, similar analysis methods can be applied. The bifurcation diagram is a particularly useful analysis method. Variation of parameters. The Rössler attractor's behavior is largely a function of the values of its constant parameters formula_14, formula_59, and formula_22. In general, varying each parameter has a comparable effect, causing the system to converge toward a periodic orbit, a fixed point, or escape towards infinity; however, the specific ranges and behaviors induced vary substantially for each parameter. Periodic orbits, or "unit cycles," of the Rössler system are defined by the number of loops around the central point that occur before the loop series begins to repeat itself. Bifurcation diagrams are a common tool for analyzing the behavior of dynamical systems, of which the Rössler attractor is one. They are created by running the equations of the system, holding all but one of the parameters constant and varying the remaining one. Then, for each value of the varied parameter, a graph is plotted of the points that the system visits after transient behavior has died out. Chaotic regions are indicated by filled-in regions of the plot. Varying a. Here, formula_59 is fixed at 0.2, formula_22 is fixed at 5.7 and formula_14 changes. Numerical examination of the attractor's behavior over changing formula_14 suggests that this parameter has a disproportionate influence over the attractor's behavior. The results of the analysis are: Varying b. Here, formula_14 is fixed at 0.2, formula_22 is fixed at 5.7 and formula_59 changes.
As shown in the accompanying diagram, as formula_59 approaches 0 the attractor approaches infinity (note the upswing for very small values of formula_59). Compared with the other parameters, varying formula_59 generates a greater range over which period-3 and period-6 orbits occur. In contrast to formula_14 and formula_22, higher values of formula_59 converge to period-1, not to a chaotic state. Varying c. Here, formula_66 and formula_22 changes. The bifurcation diagram reveals that for low values of formula_22 the system is periodic, but it quickly becomes chaotic as formula_22 increases. This pattern repeats itself as formula_22 increases – there are sections of periodicity interspersed with periods of chaos, and the trend is towards higher-period orbits as formula_22 increases. For example, the period-one orbit only appears for values of formula_22 around 4 and is never found again in the bifurcation diagram. The same phenomenon is seen with period three; until formula_67, period-three orbits can be found, but thereafter they do not appear. A graphical depiction of the changing attractor over a range of formula_22 values illustrates the general behavior seen for all of these parameter analyses – the frequent transitions between periodicity and aperiodicity. The above set of images illustrates the variations in the post-transient Rössler system as formula_22 is varied over a range of values. These images were generated with formula_68. Periodic orbits. The attractor is filled densely with periodic orbits: solutions for which there exists a nonzero value of formula_78 such that formula_79. These interesting solutions can be derived numerically using Newton's method. Periodic orbits are the roots of the function formula_80, where formula_81 is the evolution by time formula_82 and formula_83 is the identity. As the majority of the dynamics occurs in the x-y plane, the periodic orbits can then be classified by their winding number around the central equilibrium after projection. It seems from numerical experimentation that there is a unique periodic orbit for every positive winding number. This lack of degeneracy likely stems from the problem's lack of symmetry. The attractor can be dissected into easier-to-digest invariant manifolds: 1D periodic orbits and the 2D stable and unstable manifolds of periodic orbits. These invariant manifolds are a natural skeleton of the attractor, just as the rational numbers are to the real numbers. For the purposes of dynamical systems theory, one might be interested in topological invariants of these manifolds. Periodic orbits are copies of formula_84 embedded in formula_85, so their topological properties can be understood with knot theory. The periodic orbits with winding numbers 1 and 2 form a Hopf link, showing that no diffeomorphism can separate these orbits. Links to other topics. The banding evident in the Rössler attractor is similar to a Cantor set rotated about its midpoint. Additionally, the half-twist that occurs in the Rössler attractor only affects a part of the attractor. Rössler showed that his attractor was in fact the combination of a "normal band" and a Möbius strip. References. <templatestyles src="Reflist/styles.css" />
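The bifurcation structure described above can be explored with a few lines of code. The sketch below is a rough numerical experiment, not a reproduction of the figures discussed here: it integrates the system with a plain Euler scheme for several values of c with a = b = 0.1, discards the transient, and counts the approximate number of distinct local maxima of x, which serves as a crude estimate of the orbit's period. The step size, time spans and rounding tolerance are arbitrary choices.

import numpy as np

a, b = 0.1, 0.1

def simulate_x(c, dt=0.002, t_total=600.0, t_transient=300.0):
    """Integrate the Rössler system with a plain Euler scheme and
    return the post-transient x(t) samples (crude but sufficient here)."""
    state = np.array([1.0, 1.0, 1.0])
    xs = []
    n_steps = int(t_total / dt)
    n_skip = int(t_transient / dt)
    for i in range(n_steps):
        x, y, z = state
        state = state + dt * np.array([-y - z, x + a * y, b + z * (x - c)])
        if i >= n_skip:
            xs.append(state[0])
    return np.array(xs)

for c in (4.0, 6.0, 8.5, 9.0, 12.8, 18.0):
    xs = simulate_x(c)
    # strict local maxima of the sampled x(t)
    peaks = xs[1:-1][(xs[1:-1] > xs[:-2]) & (xs[1:-1] > xs[2:])]
    distinct = np.unique(np.round(peaks, 1)).size
    print(f"c = {c:5.1f}: {len(peaks)} maxima, ~{distinct} distinct peak heights")

A small count of distinct peak heights indicates a low-period orbit, while chaotic parameter values produce many; plotting the peak heights against c reproduces, qualitatively, the alternation between periodic windows and chaos described above.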
[ { "math_id": 0, "text": "a=0.2" }, { "math_id": 1, "text": "b=0.2" }, { "math_id": 2, "text": "c=14" }, { "math_id": 3, "text": "x, y" }, { "math_id": 4, "text": "z" }, { "math_id": 5, "text": " \\begin{cases} \\frac{dx}{dt} = -y - z \\\\ \\frac{dy}{dt} = x + ay \\\\ \\frac{dz}{dt} = b + z(x-c) \\end{cases} " }, { "math_id": 6, "text": "a = 0.2" }, { "math_id": 7, "text": "b = 0.2" }, { "math_id": 8, "text": "c = 5.7" }, { "math_id": 9, "text": "a = 0.1" }, { "math_id": 10, "text": "b = 0.1" }, { "math_id": 11, "text": "c = 14" }, { "math_id": 12, "text": "b = 2" }, { "math_id": 13, "text": "c = 4" }, { "math_id": 14, "text": "a" }, { "math_id": 15, "text": "c=5.7" }, { "math_id": 16, "text": "z = 0" }, { "math_id": 17, "text": " \\begin{cases} \\frac{dx}{dt} = -y \\\\ \\frac{dy}{dt} = x + ay \\end{cases} " }, { "math_id": 18, "text": "\\begin{pmatrix}0 & -1 \\\\ 1 & a\\\\\\end{pmatrix}" }, { "math_id": 19, "text": "(a \\pm \\sqrt{a^2 - 4})/2" }, { "math_id": 20, "text": "0 < a < 2" }, { "math_id": 21, "text": "x" }, { "math_id": 22, "text": "c" }, { "math_id": 23, "text": "-z" }, { "math_id": 24, "text": "dx/dt" }, { "math_id": 25, "text": "y" }, { "math_id": 26, "text": " \\begin{cases} x = \\frac{c\\pm\\sqrt{c^2-4ab}}{2} \\\\ y=-\\left(\\frac{c\\pm\\sqrt{c^2-4ab}}{2a}\\right) \\\\ z=\\frac{c\\pm\\sqrt{c^2-4ab}}{2a} \\end{cases} " }, { "math_id": 27, "text": "\\left(\\frac{c+\\sqrt{c^2-4ab}}{2}, \\frac{-c-\\sqrt{c^2-4ab}}{2a}, \\frac{c+\\sqrt{c^2-4ab}}{2a}\\right)" }, { "math_id": 28, "text": "\\left(\\frac{c-\\sqrt{c^2-4ab}}{2}, \\frac{-c+\\sqrt{c^2-4ab}}{2a}, \\frac{c-\\sqrt{c^2-4ab}}{2a}\\right)" }, { "math_id": 29, "text": "\\begin{pmatrix}0 & -1 & -1 \\\\ 1 & a & 0 \\\\ z & 0 & x-c\\\\\\end{pmatrix}" }, { "math_id": 30, "text": "-\\lambda^3+\\lambda^2(a+x-c) + \\lambda(ac-ax-1-z)+x-c+az =0\\," }, { "math_id": 31, "text": "\\lambda_{1}= 0.0971028 + 0.995786i \\," }, { "math_id": 32, "text": "\\lambda_{2}= 0.0971028 - 0.995786i \\," }, { "math_id": 33, "text": "\\lambda_{3}= -5.68718 \\," }, { "math_id": 34, "text": "v_{1}= \\begin{pmatrix} 0.7073 \\\\ -0.07278 - 0.7032i \\\\ 0.0042 - 0.0007i \\\\\\end{pmatrix}" }, { "math_id": 35, "text": "v_{2}= \\begin{pmatrix}0.7073 \\\\ 0.07278 + 0.7032i \\\\ 0.0042 + 0.0007i \\\\\\end{pmatrix}" }, { "math_id": 36, "text": "v_{3}= \\begin{pmatrix}0.1682 \\\\ -0.0286 \\\\ 0.9853 \\\\\\end{pmatrix}" }, { "math_id": 37, "text": "v_{1}" }, { "math_id": 38, "text": "v_{2}" }, { "math_id": 39, "text": "FP_{1}" }, { "math_id": 40, "text": "v_{3}" }, { "math_id": 41, "text": "\\lambda_{1}= -0.0000046 + 5.4280259i" }, { "math_id": 42, "text": "\\lambda_{2}= -0.0000046 - 5.4280259i " }, { "math_id": 43, "text": "\\lambda_{3}= 0.1929830" }, { "math_id": 44, "text": "v_{1}= \\begin{pmatrix}0.0002422 + 0.1872055i \\\\ 0.0344403 - 0.0013136i \\\\ 0.9817159 \\\\\\end{pmatrix}" }, { "math_id": 45, "text": "v_{2}= \\begin{pmatrix}0.0002422 - 0.1872055i \\\\ 0.0344403 + 0.0013136i \\\\ 0.9817159 \\\\\\end{pmatrix}" }, { "math_id": 46, "text": "v_{3}= \\begin{pmatrix}0.0049651 \\\\ -0.7075770 \\\\ 0.7066188 \\\\\\end{pmatrix}" }, { "math_id": 47, "text": "\\lambda_{1}" }, { "math_id": 48, "text": "\\lambda_{2}" }, { "math_id": 49, "text": "a=0.1" }, { "math_id": 50, "text": "b=0.1" }, { "math_id": 51, "text": "y, z" }, { "math_id": 52, "text": "x = 0" }, { "math_id": 53, "text": "x = 0 " }, { "math_id": 54, "text": "x=0.1" }, { "math_id": 55, "text": "x_{max}(n)" }, { "math_id": 56, "text": "x_{max}(n+1)" }, { "math_id": 57, "text": "z_n" }, { "math_id": 58, 
"text": "z_{n+1}" }, { "math_id": 59, "text": "b" }, { "math_id": 60, "text": "a \\leq 0" }, { "math_id": 61, "text": "a = 0.1 " }, { "math_id": 62, "text": "a = 0.2 " }, { "math_id": 63, "text": "a = 0.3" }, { "math_id": 64, "text": "a = 0.35" }, { "math_id": 65, "text": "a = 0.38" }, { "math_id": 66, "text": "a = b = 0.1" }, { "math_id": 67, "text": "c=12" }, { "math_id": 68, "text": "a=b=.1" }, { "math_id": 69, "text": "c = 6" }, { "math_id": 70, "text": "c = 8.5" }, { "math_id": 71, "text": "c = 8.7" }, { "math_id": 72, "text": "c = 9" }, { "math_id": 73, "text": "c = 12" }, { "math_id": 74, "text": "c = 12.6" }, { "math_id": 75, "text": "c = 13" }, { "math_id": 76, "text": "c = 15.4" }, { "math_id": 77, "text": "c = 18" }, { "math_id": 78, "text": "T" }, { "math_id": 79, "text": "\\vec{x}(t+T) = \\vec{x}(t)" }, { "math_id": 80, "text": " \\Phi_t - Id " }, { "math_id": 81, "text": "\\Phi_t" }, { "math_id": 82, "text": "t" }, { "math_id": 83, "text": "Id" }, { "math_id": 84, "text": "S^1" }, { "math_id": 85, "text": "\\mathbb{R}^3" } ]
https://en.wikipedia.org/wiki?curid=1576293
1576323
Incidence geometry
Field of mathematics which studies incidence structures In mathematics, incidence geometry is the study of incidence structures. A geometric structure such as the Euclidean plane is a complicated object that involves concepts such as length, angles, continuity, betweenness, and incidence. An "incidence structure" is what is obtained when all other concepts are removed and all that remains is the data about which points lie on which lines. Even with this severe limitation, theorems can be proved and interesting facts emerge concerning this structure. Such fundamental results remain valid when additional concepts are added to form a richer geometry. It sometimes happens that authors blur the distinction between a study and the objects of that study, so it is not surprising to find that some authors refer to incidence structures as incidence geometries. Incidence structures arise naturally and have been studied in various areas of mathematics. Consequently, there are different terminologies to describe these objects. In graph theory they are called hypergraphs, and in combinatorial design theory they are called block designs. Besides the difference in terminology, each area approaches the subject differently and is interested in questions about these objects relevant to that discipline. Using geometric language, as is done in incidence geometry, shapes the topics and examples that are normally presented. It is, however, possible to translate the results from one discipline into the terminology of another, but this often leads to awkward and convoluted statements that do not appear to be natural outgrowths of the topics. In the examples selected for this article we use only those with a natural geometric flavor. A special case that has generated much interest deals with finite sets of points in the Euclidean plane and what can be said about the number and types of (straight) lines they determine. Some results of this situation can extend to more general settings since only incidence properties are considered. Incidence structures. An "incidence structure" ("P", "L", I) consists of a set "P" whose elements are called "points", a disjoint set "L" whose elements are called "lines" and an "incidence relation" I between them, that is, a subset of "P" × "L" whose elements are called "flags". If ("A", "l") is a flag, we say that "A" is "incident with" "l" or that "l" is incident with "A" (the terminology is symmetric), and write "A" I "l". Intuitively, a point and line are in this relation if and only if the point is "on" the line. Given a point "B" and a line "m" which do not form a flag, that is, the point is not on the line, the pair ("B", "m") is called an "anti-flag". Distance in an incidence structure. There is no natural concept of distance (a metric) in an incidence structure. However, a combinatorial metric does exist in the corresponding incidence graph (Levi graph), namely the length of the shortest path between two vertices in this bipartite graph. The distance between two objects of an incidence structure – two points, two lines or a point and a line – can be defined to be the distance between the corresponding vertices in the incidence graph of the incidence structure. Another way to define a distance again uses a graph-theoretic notion in a related structure, this time the "collinearity graph" of the incidence structure. The vertices of the collinearity graph are the points of the incidence structure and two points are joined if there exists a line incident with both points. 
The distance between two points of the incidence structure can then be defined as their distance in the collinearity graph. When distance is considered in an incidence structure, it is necessary to mention how it is being defined. Partial linear spaces. Incidence structures that are most studied are those that satisfy some additional properties (axioms), such as projective planes, affine planes, generalized polygons, partial geometries and near polygons. Very general incidence structures can be obtained by imposing "mild" conditions, such as: A partial linear space is an incidence structure for which the following axioms are true: (1) every pair of distinct points is incident with at most one line; (2) every line is incident with at least two points. In a partial linear space it is also true that every pair of distinct lines meet in at most one point. This statement does not have to be assumed as it is readily proved from axiom one above. Further constraints are provided by the regularity conditions: RLk: Each line is incident with the same number of points. If finite this number is often denoted by "k". RPr: Each point is incident with the same number of lines. If finite this number is often denoted by "r". The second axiom of a partial linear space implies that "k" &gt; 1. Neither regularity condition implies the other, so it has to be assumed that "r" &gt; 1. A finite partial linear space satisfying both regularity conditions with "k", "r" &gt; 1 is called a "tactical configuration". Some authors refer to these simply as "configurations", or "projective configurations". If a tactical configuration has "n" points and "m" lines, then, by double counting the flags, the relationship "nr" = "mk" is established. A common notation refers to ("n""r", "m""k")-"configurations". In the special case where "n" = "m" (and hence, "r" = "k") the notation ("n""k", "n""k") is often simply written as ("n""k"). A "linear space" is a partial linear space such that: every pair of distinct points is incident with exactly one line. Some authors add a "non-degeneracy" (or "non-triviality") axiom to the definition of a (partial) linear space, such as: there exist at least two distinct lines. This is used to rule out some very small examples (mainly when the sets "P" or "L" have fewer than two elements) that would normally be exceptions to general statements made about the incidence structures. An alternative to adding the axiom is to refer to incidence structures that do not satisfy the axiom as being "trivial" and those that do as "non-trivial". Each non-trivial linear space contains at least three points and three lines, so the simplest non-trivial linear space that can exist is a triangle. A linear space having at least three points on every line is a Sylvester–Gallai design. Fundamental geometric examples. Some of the basic concepts and terminology arise from geometric examples, particularly projective planes and affine planes. Projective planes. A "projective plane" is a linear space in which: every pair of distinct lines meets in exactly one point, and that satisfies the non-degeneracy condition: there exist four points, no three of which are collinear. There is a bijection between "P" and "L" in a projective plane. If "P" is a finite set, the projective plane is referred to as a "finite" projective plane. The order of a finite projective plane is "n" = "k" – 1, that is, one less than the number of points on a line. All known projective planes have orders that are prime powers. A projective plane of order "n" is an (("n"2 + "n" + 1)"n" + 1) configuration. The smallest projective plane has order two and is known as the "Fano plane". Fano plane. This famous incidence geometry was developed by the Italian mathematician Gino Fano.
In his work on proving the independence of the set of axioms for projective "n"-space that he developed, he produced a finite three-dimensional space with 15 points, 35 lines and 15 planes, in which each line had only three points on it. The planes in this space consisted of seven points and seven lines and are now known as Fano planes. The Fano plane cannot be represented in the Euclidean plane using only points and straight line segments (i.e., it is not realizable). This is a consequence of the Sylvester–Gallai theorem, according to which every realizable incidence geometry must include an "ordinary line", a line containing only two points. The Fano plane has no such line (that is, it is a Sylvester–Gallai configuration), so it is not realizable. A complete quadrangle consists of four points, no three of which are collinear. In the Fano plane, the three points not on a complete quadrangle are the diagonal points of that quadrangle and are collinear. This contradicts the "Fano axiom", often used as an axiom for the Euclidean plane, which states that the three diagonal points of a complete quadrangle are never collinear. Affine planes. An "affine plane" is a linear space satisfying Playfair's axiom: given a line "l" and a point "A" not incident with "l", there is exactly one line "m" incident with "A" which does not meet "l"; and satisfying the non-degeneracy condition: there exist three non-collinear points. The lines "l" and "m" in the statement of Playfair's axiom are said to be "parallel". Every affine plane can be uniquely extended to a projective plane. The "order" of a finite affine plane is "k", the number of points on a line. An affine plane of order "n" is an (("n"2)"n" + 1, ("n"2 + "n")"n") configuration. Hesse configuration. The affine plane of order three is a (94, 123) configuration. When embedded in some ambient space it is called the Hesse configuration. It is not realizable in the Euclidean plane but is realizable in the complex projective plane as the nine inflection points of an elliptic curve with the 12 lines incident with triples of these. The 12 lines can be partitioned into four classes of three lines apiece, where in each class the lines are mutually disjoint. These classes are called "parallel classes" of lines. Adding four new points, each being added to all the lines of a single parallel class (so all of these lines now intersect), and one new line containing just these four new points produces the projective plane of order three, a (134) configuration. Conversely, starting with the projective plane of order three (it is unique) and removing any single line and all the points on that line produces this affine plane of order three (it is also unique). Removing one point and the four lines that pass through that point (but not the other points on them) produces the (83) Möbius–Kantor configuration. Partial geometries. Given an integer "α" ≥ 1, a tactical configuration satisfying the condition that for every anti-flag ("B", "m") there are exactly "α" flags ("A", "l") such that "B" I "l" and "A" I "m", is called a "partial geometry". If there are "s" + 1 points on a line and "t" + 1 lines through a point, the notation for a partial geometry is pg("s", "t", "α"). If "α" = 1 these partial geometries are generalized quadrangles. If "α" = "s" + 1 these are called Steiner systems. Generalized polygons. For "n" &gt; 2, a generalized "n"-gon is a partial linear space whose incidence graph Γ has the property: the girth of Γ (the length of a shortest cycle) is twice the diameter of Γ (the largest distance between two vertices, "n" in this case). A "generalized 2-gon" is an incidence structure, which is not a partial linear space, consisting of at least two points and two lines with every point being incident with every line. The incidence graph of a generalized 2-gon is a complete bipartite graph.
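Because finite incidence structures are just finite set systems, the definitions above can be checked mechanically. The following Python sketch (the labelling of the seven points and lines is one standard choice, used here purely for illustration) encodes the Fano plane, verifies that it is a projective plane, and confirms the generalized 3-gon property of its incidence graph, namely diameter 3 and girth 6, the latter checked via the absence of 4-cycles in a bipartite graph.

from itertools import combinations
from collections import deque

# One standard labelling of the Fano plane: 7 points, 7 lines of 3 points each.
points = list(range(1, 8))
lines = [frozenset(s) for s in
         ((1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6), (2, 5, 7), (3, 4, 7), (3, 5, 6))]

# Projective plane checks: two distinct points lie on exactly one common line,
# and two distinct lines meet in exactly one point.
assert all(sum(p in l and q in l for l in lines) == 1 for p, q in combinations(points, 2))
assert all(len(l & m) == 1 for l, m in combinations(lines, 2))

# Incidence (Levi) graph: vertices are the points and the lines, edges are the flags.
verts = points + lines
adj = {v: set() for v in verts}
for l in lines:
    for p in l:
        adj[p].add(l)
        adj[l].add(p)

def eccentricity(src):
    # breadth-first-search distances from src, returning the largest one
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return max(dist.values())

assert max(eccentricity(v) for v in verts) == 3                       # diameter 3
assert all(len(adj[u] & adj[v]) <= 1 for u, v in combinations(verts, 2))
# No 4-cycles in a bipartite graph means the girth is at least 6; any triangle of the
# plane yields a 6-cycle, so the girth is 6, twice the diameter (a generalized 3-gon).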
A generalized "n"-gon contains no ordinary "m"-gon for 2 ≤ "m" &lt; "n" and for every pair of objects (two points, two lines or a point and a line) there is an ordinary "n"-gon that contains them both. Generalized 3-gons are projective planes. Generalized 4-gons are called generalized quadrangles. By the Feit-Higman theorem the only finite generalized "n"-gons with at least three points per line and three lines per point have "n" = 2, 3, 4, 6 or 8. Near polygons. For a non-negative integer "d" a near 2"d"-gon is an incidence structure such that: A near 0-gon is a point, while a near 2-gon is a line. The collinearity graph of a near 2-gon is a complete graph. A near 4-gon is a generalized quadrangle (possibly degenerate). Every finite generalized polygon except the projective planes is a near polygon. Any connected bipartite graph is a near polygon and any near polygon with precisely two points per line is a connected bipartite graph. Also, all dual polar spaces are near polygons. Many near polygons are related to finite simple groups like the Mathieu groups and the Janko group J2. Moreover, the generalized 2"d"-gons, which are related to Groups of Lie type, are special cases of near 2"d"-gons. Möbius planes. An abstract Möbius plane (or inversive plane) is an incidence structure where, to avoid possible confusion with the terminology of the classical case, the lines are referred to as "cycles" or "blocks". Specifically, a Möbius plane is an incidence structure of points and cycles such that: The incidence structure obtained at any point "P" of a Möbius plane by taking as points all the points other than "P" and as lines only those cycles that contain "P" (with "P" removed), is an affine plane. This structure is called the "residual" at "P" in design theory. A finite Möbius plane of "order" "m" is a tactical configuration with "k" = "m" + 1 points per cycle that is a 3-design, specifically a 3-("m"2 + 1, "m" + 1, 1) block design. Incidence theorems in the Euclidean plane. The Sylvester-Gallai theorem. A question raised by J.J. Sylvester in 1893 and finally settled by Tibor Gallai concerned incidences of a finite set of points in the Euclidean plane. Theorem (Sylvester-Gallai): A finite set of points in the Euclidean plane is either collinear or there exists a line incident with exactly two of the points. A line containing exactly two of the points is called an "ordinary line" in this context. Sylvester was probably led to the question while pondering about the embeddability of the Hesse configuration. The de Bruijn–Erdős theorem. A related result is the de Bruijn–Erdős theorem. Nicolaas Govert de Bruijn and Paul Erdős proved the result in the more general setting of projective planes, but it still holds in the Euclidean plane. The theorem is: In a projective plane, every non-collinear set of "n" points determines at least "n" distinct lines. As the authors pointed out, since their proof was combinatorial, the result holds in a larger setting, in fact in any incidence geometry in which there is a unique line through every pair of distinct points. They also mention that the Euclidean plane version can be proved from the Sylvester-Gallai theorem using induction. The Szemerédi–Trotter theorem. 
A bound on the number of flags determined by a finite set of points and the lines they determine is given by: Theorem (Szemerédi–Trotter): given "n" points and "m" lines in the plane, the number of flags (incident point-line pairs) is: formula_0 and this bound cannot be improved, except in terms of the implicit constants. This result can be used to prove Beck's theorem. A similar bound for the number of incidences is conjectured for point-circle incidences, but only weaker upper bounds are known. Beck's theorem. Beck's theorem says that finite collections of points in the plane fall into one of two extremes: one where a large fraction of points lie on a single line, and one where a large number of lines are needed to connect all the points. The theorem asserts the existence of positive constants "C", "K" such that given any "n" points in the plane, at least one of the following statements is true: (1) there is a line which contains at least "n"/"C" of the points; (2) there exist at least "n"2/"K" lines, each of which contains at least two of the points. In Beck's original argument, "C" is 100 and "K" is an unspecified constant; it is not known what the optimal values of "C" and "K" are. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "O \\left ( n^{\\frac{2}{3}} m^{\\frac{2}{3}} + n + m \\right )," } ]
https://en.wikipedia.org/wiki?curid=1576323
1576696
Reaction rate constant
Coefficient of rate of a chemical reaction In chemical kinetics, a reaction rate constant or reaction rate coefficient ("k") is a proportionality constant which quantifies the rate and direction of a chemical reaction by relating it with the concentration of reactants. For a reaction between reactants A and B to form a product C, &lt;templatestyles src="Block indent/styles.css"/&gt;"a" A + "b" B → "c" C where A and B are reactants, C is a product, and "a", "b", and "c" are stoichiometric coefficients, the reaction rate is often found to have the form: formula_0 Here "k" is the reaction rate constant that depends on temperature, and [A] and [B] are the molar concentrations of substances A and B in moles per unit volume of solution, assuming the reaction is taking place throughout the volume of the solution. (For a reaction taking place at a boundary, one would use moles of A or B per unit area instead.) The exponents "m" and "n" are called partial orders of reaction and are "not" generally equal to the stoichiometric coefficients "a" and "b". Instead they depend on the reaction mechanism and can be determined experimentally. The sum of "m" and "n", that is, ("m" + "n"), is called the overall order of reaction. Elementary steps. For an elementary step, there "is" a relationship between stoichiometry and rate law, as determined by the law of mass action. Almost all elementary steps are either unimolecular or bimolecular. For a unimolecular step &lt;templatestyles src="Block indent/styles.css"/&gt;A → P the reaction rate is described by formula_1, where formula_2 is a unimolecular rate constant. Since a reaction requires a change in molecular geometry, unimolecular rate constants cannot be larger than the frequency of a molecular vibration. Thus, in general, a unimolecular rate constant has an upper limit of "k"1 ≤ ~1013 s−1. For a bimolecular step &lt;templatestyles src="Block indent/styles.css"/&gt;A + B → P the reaction rate is described by formula_3, where formula_4 is a bimolecular rate constant. Bimolecular rate constants have an upper limit that is determined by how frequently molecules can collide, and the fastest such processes are limited by diffusion. Thus, in general, a bimolecular rate constant has an upper limit of "k"2 ≤ ~1010 M−1s−1. For a termolecular step &lt;templatestyles src="Block indent/styles.css"/&gt;A + B + C → P the reaction rate is described by formula_5, where formula_6 is a termolecular rate constant. There are few examples of elementary steps that are termolecular or higher order, due to the low probability of three or more molecules colliding in their reactive conformations and in the right orientation relative to each other to reach a particular transition state. There are, however, some termolecular examples in the gas phase. Most involve the recombination of two atoms or small radicals or molecules in the presence of an inert third body which carries off excess energy, such as O + O2 + N2 → O3 + N2. One well-established example is the termolecular step 2 I + H2 → 2 HI in the hydrogen-iodine reaction. In cases where a termolecular step might plausibly be proposed, one of the reactants is generally present in high concentration (e.g., as a solvent or diluent gas). Relationship to other parameters. For a first-order reaction (including a unimolecular one-step process), there is a direct relationship between the unimolecular rate constant and the half-life of the reaction: formula_7.
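Since the rate law and the half-life relation above are simple algebraic expressions, they are easy to evaluate numerically. The following Python sketch uses invented values for the rate constant, concentrations and partial orders, chosen purely for illustration.

import math

# Empirical rate law r = k [A]^m [B]^n (illustrative numbers only).
k = 2.5e-3          # rate constant; its units depend on the overall order (here L mol^-1 s^-1)
A, B = 0.10, 0.20   # molar concentrations in mol L^-1
m, n = 1, 1         # experimentally determined partial orders (overall order m + n = 2)

rate = k * A**m * B**n
print(f"rate = {rate:.2e} mol L^-1 s^-1")

# First-order (unimolecular) kinetics: the half-life depends only on k1.
k1 = 1.0e-4                  # s^-1
t_half = math.log(2) / k1    # t_1/2 = ln 2 / k1, about 6.9e3 s (roughly 2 hours)
print(f"t_1/2 = {t_half:.0f} s")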
Transition state theory gives a relationship between the rate constant formula_8 and the Gibbs free energy of activation formula_9, a quantity that can be regarded as the free energy change needed to reach the transition state. In particular, this energy barrier incorporates both enthalpic (formula_10) and entropic (formula_11) changes that need to be achieved for the reaction to take place: The result from transition state theory is formula_12, where "h" is the Planck constant and "R" the molar gas constant. As useful rules of thumb, a first-order reaction with a rate constant of 10−4 s−1 will have a half-life ("t"1/2) of approximately 2 hours. For a one-step process taking place at room temperature, the corresponding Gibbs free energy of activation (Δ"G"‡) is approximately 23 kcal/mol. Dependence on temperature. The Arrhenius equation is an elementary treatment that gives the quantitative basis of the relationship between the activation energy and the reaction rate at which a reaction proceeds. The rate constant as a function of thermodynamic temperature is then given by: formula_13 The reaction rate is given by: formula_14 where "E"a is the activation energy, and "R" is the gas constant, and "m" and "n" are experimentally determined partial orders in [A] and [B], respectively. Since at temperature "T" the molecules have energies according to a Boltzmann distribution, one can expect the proportion of collisions with energy greater than "E"a to vary with "e"&lt;templatestyles src="Fraction/styles.css" /&gt;−"E"a⁄"RT". The constant of proportionality "A" is the pre-exponential factor, or frequency factor (not to be confused here with the reactant A) takes into consideration the frequency at which reactant molecules are colliding and the likelihood that a collision leads to a successful reaction. Here, "A" has the same dimensions as an ("m" + "n")-order rate constant ("see" Units "below"). Another popular model that is derived using more sophisticated statistical mechanical considerations is the Eyring equation from transition state theory: formula_15 where Δ"G"‡ is the free energy of activation, a parameter that incorporates both the enthalpy and entropy change needed to reach the transition state. The temperature dependence of Δ"G"‡ is used to compute these parameters, the enthalpy of activation Δ"H"‡ and the entropy of activation Δ"S"‡, based on the defining formula Δ"G"‡ = Δ"H"‡ − "T"Δ"S"‡. In effect, the free energy of activation takes into account both the activation energy and the likelihood of successful collision, while the factor "k"B"T"/"h" gives the frequency of molecular collision. The factor ("c"⊖)1-"M" ensures the dimensional correctness of the rate constant when the transition state in question is bimolecular or higher. Here, "c"⊖ is the standard concentration, generally chosen based on the unit of concentration used (usually "c"⊖ = 1 mol L−1 = 1 M), and "M" is the molecularity of the transition state. Lastly, κ, usually set to unity, is known as the transmission coefficient, a parameter which essentially serves as a "fudge factor" for transition state theory. The biggest difference between the two theories is that Arrhenius theory attempts to model the reaction (single- or multi-step) as a whole, while transition state theory models the individual elementary steps involved. Thus, they are not directly comparable, unless the reaction in question involves only a single elementary step. 
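Both temperature dependences described above can be evaluated directly. The Python sketch below uses invented values for the Arrhenius parameters and for the activation enthalpy and entropy (chosen only for illustration), and takes the unimolecular case "M" = 1 with κ = 1, so the standard-concentration factor drops out.

import math

R = 8.314462618       # molar gas constant, J mol^-1 K^-1
kB = 1.380649e-23     # Boltzmann constant, J K^-1
h = 6.62607015e-34    # Planck constant, J s

def k_arrhenius(T, A, Ea):
    # k(T) = A exp(-Ea / RT); A and Ea are empirical parameters
    return A * math.exp(-Ea / (R * T))

def k_eyring(T, dH, dS, kappa=1.0):
    # k(T) = kappa (kB T / h) exp(dS / R) exp(-dH / RT) for a unimolecular transition state
    return kappa * (kB * T / h) * math.exp(dS / R) * math.exp(-dH / (R * T))

T = 298.15  # K
print(k_arrhenius(T, A=1.0e13, Ea=80e3))   # Ea = 80 kJ/mol, A near a molecular vibration frequency
print(k_eyring(T, dH=77.5e3, dS=-5.0))     # dH in J/mol, dS in J mol^-1 K^-1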
Finally, in the past, collision theory, in which reactants are viewed as hard spheres with a particular cross-section, provided yet another common way to rationalize and model the temperature dependence of the rate constant, although this approach has gradually fallen into disuse. The equation for the rate constant is similar in functional form to both the Arrhenius and Eyring equations: formula_16 where "P" is the steric (or probability) factor and "Z" is the collision frequency, and Δ"E" is energy input required to overcome the activation barrier. Of note, formula_17, making the temperature dependence of "k" different from both the Arrhenius and Eyring models. Comparison of models. All three theories model the temperature dependence of "k" using an equation of the form formula_18 for some constant "C", where α = 0, &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2, and 1 give Arrhenius theory, collision theory, and transition state theory, respectively, although the imprecise notion of Δ"E", the energy needed to overcome the activation barrier, has a slightly different meaning in each theory. In practice, experimental data does not generally allow a determination to be made as to which is "correct" in terms of best fit. Hence, it must be remembered that all three are conceptual frameworks that make numerous assumptions, both realistic and unrealistic, in their derivations. As a result, they are capable of providing different insights into a system. Units. The units of the rate constant depend on the overall order of reaction. If concentration is measured in units of mol·L−1 (sometimes abbreviated as M), then Plasma and gases. Calculation of rate constants of the processes of generation and relaxation of electronically and vibrationally excited particles are of significant importance. It is used, for example, in the computer simulation of processes in plasma chemistry or microelectronics. First-principle based models should be used for such calculation. It can be done with the help of computer simulation software. Rate constant calculations. Rate constant can be calculated for elementary reactions by molecular dynamics simulations. One possible approach is to calculate the mean residence time of the molecule in the reactant state. Although this is feasible for small systems with short residence times, this approach is not widely applicable as reactions are often rare events on molecular scale. One simple approach to overcome this problem is Divided Saddle Theory. Such other methods as the Bennett Chandler procedure, and Milestoning have also been developed for rate constant calculations. Divided saddle theory. The theory is based on the assumption that the reaction can be described by a reaction coordinate, and that we can apply Boltzmann distribution at least in the reactant state. A new, especially reactive segment of the reactant, called the "saddle domain", is introduced, and the rate constant is factored: formula_19 where "α" is the conversion factor between the reactant state and saddle domain, while "k"SD is the rate constant from the saddle domain. The first can be simply calculated from the free energy surface, the latter is easily accessible from short molecular dynamics simulations References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "r = k[\\mathrm{A}]^m [\\mathrm{B}]^{n}" }, { "math_id": 1, "text": "r = k_1[\\mathrm{A}]" }, { "math_id": 2, "text": "k_1" }, { "math_id": 3, "text": "r=k_2[\\mathrm{A}][\\mathrm{B}]" }, { "math_id": 4, "text": "k_2" }, { "math_id": 5, "text": "r=k_3[\\mathrm{A}][\\mathrm{B}][\\mathrm{C}]" }, { "math_id": 6, "text": "k_3" }, { "math_id": 7, "text": "t_{1/2} = \\frac{\\ln 2}{k}" }, { "math_id": 8, "text": "k(T)" }, { "math_id": 9, "text": "{\\Delta G^{\\ddagger} = \\Delta H^{\\ddagger} - T\\Delta S^{\\ddagger}} " }, { "math_id": 10, "text": "\\Delta H^{\\ddagger}" }, { "math_id": 11, "text": "\\Delta S^{\\ddagger}" }, { "math_id": 12, "text": "k(T) = \\frac{k_{\\mathrm{B}}T}{h}e^{-\\Delta G^{\\ddagger}/RT}" }, { "math_id": 13, "text": "k(T) = Ae^{- E_\\mathrm{a}/RT}" }, { "math_id": 14, "text": "r = Ae^{ - E_\\mathrm{a}/RT}[\\mathrm{A}]^m[\\mathrm{B}]^n," }, { "math_id": 15, "text": "k(T)\n= \\kappa\\frac{k_{\\mathrm{B}}T}{h}(c^{\\ominus})^{1-M}e^{-\\Delta G^{\\ddagger}/RT}\n= \\left(\\kappa\\frac{k_{\\mathrm{B}}T}{h}(c^{\\ominus})^{1-M}\\right)e^{\\Delta S^{\\ddagger}/R}\ne^{-\\Delta H^{\\ddagger}/RT}," }, { "math_id": 16, "text": "k(T)=PZe^{-\\Delta E/RT}," }, { "math_id": 17, "text": "Z\\propto T^{1/2}" }, { "math_id": 18, "text": "k(T)=CT^\\alpha e^{-\\Delta E/RT}" }, { "math_id": 19, "text": "k= k_\\mathrm{SD}\\cdot \\alpha^\\mathrm{SD}_\\mathrm{RS} " } ]
https://en.wikipedia.org/wiki?curid=1576696
15769518
Image functors for sheaves
In mathematics, especially in sheaf theory—a domain applied in areas such as topology, logic and algebraic geometry—there are four image functors for sheaves that belong together in various senses. Given a continuous mapping "f": "X" → "Y" of topological spaces, and writing Sh(–) for the category of sheaves of abelian groups on a topological space, the functors in question are the direct image, the inverse image, the direct image with compact support and the exceptional inverse image. The exclamation mark is often pronounced "shriek" (slang for exclamation mark), and the maps called ""f" shriek" or ""f" lower shriek" and ""f" upper shriek"—see also shriek map. The exceptional inverse image is in general defined on the level of derived categories only. Similar considerations apply to étale sheaves on schemes. Adjointness. The functors are adjoint to each other in pairs, where, as usual, formula_0 means that "F" is left adjoint to "G" (equivalently "G" right adjoint to "F"), i.e. Hom("F"("A"), "B") ≅ Hom("A", "G"("B")) for any two objects "A", "B" in the two categories related by "F" and "G". For example, "f"∗ is the left adjoint of "f"*. By the standard reasoning with adjointness relations, there are natural unit and counit morphisms formula_1 and formula_2 for formula_3 on "Y" and formula_4 on "X", respectively. However, these are "almost never" isomorphisms—see the localization example below. Verdier duality. Verdier duality gives another link between them: morally speaking, it exchanges "∗" and "!". For example the direct image is dual to the direct image with compact support. This phenomenon is studied and used in the theory of perverse sheaves. Base Change. Another useful property of the image functors is base change. Given continuous maps formula_5 and formula_6, which induce morphisms formula_7 and formula_8, there exists a canonical isomorphism formula_9. Localization. In the particular situation of a closed subspace "i": "Z" ⊂ "X" and the complementary open subset "j": "U" ⊂ "X", the situation simplifies insofar as "j"∗ = "j"! and "i"! = "i"∗, and for any sheaf "F" on "X" one gets exact sequences 0 → "j"!"j"∗ "F" → "F" → "i"∗"i"∗ "F" → 0 Its Verdier dual reads "i"∗"Ri"! "F" → "F" → "Rj"∗"j"∗ "F" → "i"∗"Ri"! "F"[1], a distinguished triangle in the derived category of sheaves on "X". The adjointness relations read in this case formula_10 and formula_11.
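A simple example, chosen purely for illustration, shows how far the unit and counit can be from isomorphisms. Take the open inclusion "j": "U" → "X" with "X" the real line and "U" the complement of the origin, and let "F" be the constant sheaf with stalk Z on "X". The global sections of "j"∗"j"∗ "F" form the group Z ⊕ Z, one copy of Z for each connected component of "U", while the global sections of "F" form a single copy of Z; the unit morphism "F" → "j"∗"j"∗ "F" is the diagonal map on global sections and is therefore not an isomorphism.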
[ { "math_id": 0, "text": "F \\leftrightarrows G" }, { "math_id": 1, "text": "\\mathcal{G} \\rightarrow f_*f^{*}\\mathcal{G}" }, { "math_id": 2, "text": "f^{*}f_*\\mathcal{F} \\rightarrow \\mathcal{F}" }, { "math_id": 3, "text": "\\mathcal G" }, { "math_id": 4, "text": "\\mathcal F" }, { "math_id": 5, "text": "f:X \\rightarrow Z" }, { "math_id": 6, "text": "g:Y \\rightarrow Z" }, { "math_id": 7, "text": "\\bar f:X\\times_Z Y \\rightarrow Y" }, { "math_id": 8, "text": "\\bar g:X\\times_Z Y \\rightarrow X" }, { "math_id": 9, "text": "R \\bar f_* R\\bar g^! \\cong Rf^! Rg_*" }, { "math_id": 10, "text": "i^* \\leftrightarrows i_*=i_! \\leftrightarrows i^!" }, { "math_id": 11, "text": "j_! \\leftrightarrows j^!=j^* \\leftrightarrows j_*" } ]
https://en.wikipedia.org/wiki?curid=15769518
157700
Moment of inertia
Scalar measure of the rotational inertia with respect to a fixed axis of rotation &lt;templatestyles src="Hlist/styles.css"/&gt; The moment of inertia, otherwise known as the mass moment of inertia, angular/rotational mass, second moment of mass, or most accurately, rotational inertia, of a rigid body is a quantity that determines the torque needed for a desired angular acceleration about a rotational axis, akin to how mass determines the force needed for a desired acceleration. It depends on the body's mass distribution and the axis chosen, with larger moments requiring more torque to change the body's rate of rotation by a given amount. It is an extensive (additive) property: for a point mass the moment of inertia is simply the mass times the square of the perpendicular distance to the axis of rotation. The moment of inertia of a rigid composite system is the sum of the moments of inertia of its component subsystems (all taken about the same axis). Its simplest definition is the second moment of mass with respect to distance from an axis. For bodies constrained to rotate in a plane, only their moment of inertia about an axis perpendicular to the plane, a scalar value, matters. For bodies free to rotate in three dimensions, their moments can be described by a symmetric 3-by-3 matrix, with a set of mutually perpendicular principal axes for which this matrix is diagonal and torques around the axes act independently of each other. In mechanical engineering, simply "inertia" is often used to refer to "inertial mass" or "moment of inertia". Introduction. When a body is free to rotate around an axis, torque must be applied to change its angular momentum. The amount of torque needed to cause any given angular acceleration (the rate of change in angular velocity) is proportional to the moment of inertia of the body. Moments of inertia may be expressed in units of kilogram metre squared (kg·m2) in SI units and pound-foot-second squared (lbf·ft·s2) in imperial or US units. The moment of inertia plays the role in rotational kinetics that mass (inertia) plays in linear kinetics—both characterize the resistance of a body to changes in its motion. The moment of inertia depends on how mass is distributed around an axis of rotation, and will vary depending on the chosen axis. For a point-like mass, the moment of inertia about some axis is given by formula_0, where formula_1 is the distance of the point from the axis, and formula_2 is the mass. For an extended rigid body, the moment of inertia is just the sum of all the small pieces of mass multiplied by the square of their distances from the axis in rotation. For an extended body of a regular shape and uniform density, this summation sometimes produces a simple expression that depends on the dimensions, shape and total mass of the object. In 1673, Christiaan Huygens introduced this parameter in his study of the oscillation of a body hanging from a pivot, known as a compound pendulum. The term "moment of inertia" ("momentum inertiae" in Latin) was introduced by Leonhard Euler in his book "Theoria motus corporum solidorum seu rigidorum" in 1765, and it is incorporated into Euler's second law. The natural frequency of oscillation of a compound pendulum is obtained from the ratio of the torque imposed by gravity on the mass of the pendulum to the resistance to acceleration defined by the moment of inertia. 
Comparison of this natural frequency to that of a simple pendulum consisting of a single point of mass provides a mathematical formulation for moment of inertia of an extended body. The moment of inertia also appears in momentum, kinetic energy, and in Newton's laws of motion for a rigid body as a physical parameter that combines its shape and mass. There is an interesting difference in the way moment of inertia appears in planar and spatial movement. Planar movement has a single scalar that defines the moment of inertia, while for spatial movement the same calculations yield a 3 × 3 matrix of moments of inertia, called the inertia matrix or inertia tensor. The moment of inertia of a rotating flywheel is used in a machine to resist variations in applied torque to smooth its rotational output. The moment of inertia of an airplane about its longitudinal, horizontal and vertical axes determine how steering forces on the control surfaces of its wings, elevators and rudder(s) affect the plane's motions in roll, pitch and yaw. Definition. The moment of inertia is defined as the product of mass of section and the square of the distance between the reference axis and the centroid of the section. The moment of inertia I is also defined as the ratio of the net angular momentum L of a system to its angular velocity ω around a principal axis, that is formula_3 If the angular momentum of a system is constant, then as the moment of inertia gets smaller, the angular velocity must increase. This occurs when spinning figure skaters pull in their outstretched arms or divers curl their bodies into a tuck position during a dive, to spin faster. If the shape of the body does not change, then its moment of inertia appears in Newton's law of motion as the ratio of an applied torque τ on a body to the angular acceleration α around a principal axis, that is formula_4 For a simple pendulum, this definition yields a formula for the moment of inertia I in terms of the mass m of the pendulum and its distance r from the pivot point as, formula_5 Thus, the moment of inertia of the pendulum depends on both the mass m of a body and its geometry, or shape, as defined by the distance r to the axis of rotation. This simple formula generalizes to define moment of inertia for an arbitrarily shaped body as the sum of all the elemental point masses "dm" each multiplied by the square of its perpendicular distance r to an axis k. An arbitrary object's moment of inertia thus depends on the spatial distribution of its mass. In general, given an object of mass m, an effective radius k can be defined, dependent on a particular axis of rotation, with such a value that its moment of inertia around the axis is formula_6 where k is known as the radius of gyration around the axis. Examples. Simple pendulum. Mathematically, the moment of inertia of a simple pendulum is the ratio of the torque due to gravity about the pivot of a pendulum to its angular acceleration about that pivot point. For a simple pendulum this is found to be the product of the mass of the particle formula_2 with the square of its distance formula_1 to the pivot, that is formula_5 This can be shown as follows: The force of gravity on the mass of a simple pendulum generates a torque formula_7 around the axis perpendicular to the plane of the pendulum movement. Here formula_8 is the distance vector from the torque axis to the pendulum center of mass, and formula_9 is the net force on the mass. 
Associated with this torque is an angular acceleration, formula_10, of the string and mass around this axis. Since the mass is constrained to a circle the tangential acceleration of the mass is formula_11. Since formula_12 the torque equation becomes: formula_13 where formula_14 is a unit vector perpendicular to the plane of the pendulum. (The second to last step uses the vector triple product expansion with the perpendicularity of formula_10 and formula_8.) The quantity formula_15 is the "moment of inertia" of this single mass around the pivot point. The quantity formula_15 also appears in the angular momentum of a simple pendulum, which is calculated from the velocity formula_16 of the pendulum mass around the pivot, where formula_17 is the angular velocity of the mass about the pivot point. This angular momentum is given by formula_18 using a similar derivation to the previous equation. Similarly, the kinetic energy of the pendulum mass is defined by the velocity of the pendulum around the pivot to yield formula_19 This shows that the quantity formula_15 is how mass combines with the shape of a body to define rotational inertia. The moment of inertia of an arbitrarily shaped body is the sum of the values formula_0 for all of the elements of mass in the body. Compound pendulums. A compound pendulum is a body formed from an assembly of particles of continuous shape that rotates rigidly around a pivot. Its moment of inertia is the sum of the moments of inertia of each of the particles that it is composed of. The natural frequency (formula_20) of a compound pendulum depends on its moment of inertia, formula_21, formula_22 where formula_2 is the mass of the object, formula_23 is local acceleration of gravity, and formula_1 is the distance from the pivot point to the center of mass of the object. Measuring this frequency of oscillation over small angular displacements provides an effective way of measuring moment of inertia of a body. Thus, to determine the moment of inertia of the body, simply suspend it from a convenient pivot point formula_24 so that it swings freely in a plane perpendicular to the direction of the desired moment of inertia, then measure its natural frequency or period of oscillation (formula_25), to obtain formula_26 where formula_25 is the period (duration) of oscillation (usually averaged over multiple periods). Center of oscillation. A simple pendulum that has the same natural frequency as a compound pendulum defines the length formula_27 from the pivot to a point called the center of oscillation of the compound pendulum. This point also corresponds to the center of percussion. The length formula_27 is determined from the formula, formula_28 or formula_29 The seconds pendulum, which provides the "tick" and "tock" of a grandfather clock, takes one second to swing from side-to-side. This is a period of two seconds, or a natural frequency of formula_30 for the pendulum. In this case, the distance to the center of oscillation, formula_27, can be computed to be formula_31 Notice that the distance to the center of oscillation of the seconds pendulum must be adjusted to accommodate different values for the local acceleration of gravity. Kater's pendulum is a compound pendulum that uses this property to measure the local acceleration of gravity, and is called a gravimeter. Measuring moment of inertia. 
The moment of inertia of a complex system such as a vehicle or airplane around its vertical axis can be measured by suspending the system from three points to form a trifilar pendulum. A trifilar pendulum is a platform supported by three wires designed to oscillate in torsion around its vertical centroidal axis. The period of oscillation of the trifilar pendulum yields the moment of inertia of the system. Moment of inertia of area. Moment of inertia of area is also known as the second moment of area. These calculations are commonly used in civil engineering for structural design of beams and columns. Cross-sectional areas calculated for vertical moment of the x-axis formula_32 and horizontal moment of the y-axis formula_33.&lt;br&gt; Height ("h") and breadth ("b") are the linear measures, except for circles, which are effectively half-breadth derived, formula_1 Motion in a fixed plane. Point mass. The moment of inertia about an axis of a body is calculated by summing formula_0 for every particle in the body, where formula_1 is the perpendicular distance to the specified axis. To see how moment of inertia arises in the study of the movement of an extended body, it is convenient to consider a rigid assembly of point masses. (This equation can be used for axes that are not principal axes provided that it is understood that this does not fully describe the moment of inertia.) Consider the kinetic energy of an assembly of formula_39 masses formula_40 that lie at the distances formula_41 from the pivot point formula_24, which is the nearest point on the axis of rotation. It is the sum of the kinetic energy of the individual masses, formula_42 This shows that the moment of inertia of the body is the sum of each of the formula_0 terms, that is formula_43 Thus, moment of inertia is a physical property that combines the mass and distribution of the particles around the rotation axis. Notice that rotation about different axes of the same body yield different moments of inertia. The moment of inertia of a continuous body rotating about a specified axis is calculated in the same way, except with infinitely many point particles. Thus the limits of summation are removed, and the sum is written as follows: formula_44 Another expression replaces the summation with an integral, formula_45 Here, the function formula_46 gives the mass density at each point formula_47, formula_8 is a vector perpendicular to the axis of rotation and extending from a point on the rotation axis to a point formula_47 in the solid, and the integration is evaluated over the volume formula_48 of the body formula_49. The moment of inertia of a flat surface is similar with the mass density being replaced by its areal mass density with the integral evaluated over its area. Note on second moment of area: The moment of inertia of a body moving in a plane and the second moment of area of a beam's cross-section are often confused. The moment of inertia of a body with the shape of the cross-section is the second moment of this area about the formula_50-axis perpendicular to the cross-section, weighted by its density. This is also called the "polar moment of the area", and is the sum of the second moments about the formula_51- and formula_52-axes. The stresses in a beam are calculated using the second moment of the cross-sectional area around either the formula_51-axis or formula_52-axis depending on the load. Examples. 
The calculation of the moment of inertia of a compound pendulum, constructed from a thin disc mounted at the end of a thin rod that oscillates around a pivot at the other end of the rod, begins with the moment of inertia of the thin rod and thin disc about their respective centers of mass. A list of moments of inertia formulas for standard body shapes provides a way to obtain the moment of inertia of a complex body as an assembly of simpler shaped bodies. The parallel axis theorem is used to shift the reference point of the individual bodies to the reference point of the assembly. As one more example, consider the moment of inertia of a solid sphere of constant density about an axis through its center of mass. This is determined by summing the moments of inertia of the thin discs that can form the sphere whose centers are along the axis chosen for consideration. If the surface of the sphere is defined by the equation formula_62 then the square of the radius formula_1 of the disc at the cross-section formula_50 along the formula_50-axis is formula_63 Therefore, the moment of inertia of the sphere is the sum of the moments of inertia of the discs along the formula_50-axis, formula_64 where formula_65 is the mass of the sphere. Rigid body. If a mechanical system is constrained to move parallel to a fixed plane, then the rotation of a body in the system occurs around an axis formula_14 parallel to this plane. In this case, the moment of inertia of the mass in this system is a scalar known as the "polar moment of inertia". The definition of the polar moment of inertia can be obtained by considering momentum, kinetic energy and Newton's laws for the planar movement of a rigid system of particles. If a system of formula_66 particles, formula_67, is assembled into a rigid body, then the momentum of the system can be written in terms of positions relative to a reference point formula_68, and absolute velocities formula_69: formula_70 where formula_17 is the angular velocity of the system and formula_71 is the velocity of formula_68. For planar movement the angular velocity vector is directed along the unit vector formula_72 which is perpendicular to the plane of movement. Introduce the unit vectors formula_73 from the reference point formula_68 to a point formula_74, and the unit vector formula_75, so formula_76 This defines the relative position vector and the velocity vector for the rigid system of the particles moving in a plane. Note on the cross product: When a body moves parallel to a ground plane, the trajectories of all the points in the body lie in planes parallel to this ground plane. This means that any rotation that the body undergoes must be around an axis perpendicular to this plane. Planar movement is often presented as projected onto this ground plane so that the axis of rotation appears as a point. In this case, the angular velocity and angular acceleration of the body are scalars and the fact that they are vectors along the rotation axis is ignored. This is usually preferred for introductions to the topic. But in the case of moment of inertia, the combination of mass and geometry benefits from the geometric properties of the cross product. For this reason, in this section on planar movement the angular velocity and accelerations of the body are vectors perpendicular to the ground plane, and the cross product operations are the same as used for the study of spatial rigid body movement. Angular momentum.
The angular momentum vector for the planar movement of a rigid system of particles is given by formula_77 Use the center of mass formula_78 as the reference point so formula_79 and define the moment of inertia relative to the center of mass formula_80 as formula_81 then the equation for angular momentum simplifies to formula_82 The moment of inertia formula_80 about an axis perpendicular to the movement of the rigid system and through the center of mass is known as the "polar moment of inertia". Specifically, it is the second moment of mass with respect to the orthogonal distance from an axis (or pole). For a given amount of angular momentum, a decrease in the moment of inertia results in an increase in the angular velocity. Figure skaters can change their moment of inertia by pulling in their arms. Thus, a skater who begins spinning with outstretched arms rotates faster when the arms are pulled in, because of the reduced moment of inertia. A figure skater is not, however, a rigid body. Kinetic energy. The kinetic energy of a rigid system of particles moving in the plane is given by formula_83 Let the reference point be the center of mass formula_78 of the system so the second term becomes zero, and introduce the moment of inertia formula_80 so the kinetic energy is given by formula_84 The moment of inertia formula_80 is the "polar moment of inertia" of the body. Newton's laws. Newton's laws for a rigid system of formula_66 particles, formula_67, can be written in terms of a resultant force and torque at a reference point formula_68, to yield formula_85 where formula_74 denotes the trajectory of each particle. The kinematics of a rigid body yields the formula for the acceleration of the particle formula_86 in terms of the position formula_68 and acceleration formula_87 of the reference particle as well as the angular velocity vector formula_17 and angular acceleration vector formula_10 of the rigid system of particles as, formula_88 For systems that are constrained to planar movement, the angular velocity and angular acceleration vectors are directed along formula_14 perpendicular to the plane of movement, which simplifies this acceleration equation. In this case, the acceleration vectors can be simplified by introducing the unit vectors formula_89 from the reference point formula_68 to a point formula_74 and the unit vectors formula_75, so formula_90 This yields the resultant torque on the system as formula_91 where formula_92, and formula_93 is the unit vector perpendicular to the plane for all of the particles formula_86. Use the center of mass formula_78 as the reference point and define the moment of inertia relative to the center of mass formula_80, then the equation for the resultant torque simplifies to formula_94 Motion in space of a rigid body, and the inertia matrix. The scalar moments of inertia appear as elements in a matrix when a system of particles is assembled into a rigid body that moves in three-dimensional space. This inertia matrix appears in the calculation of the angular momentum, kinetic energy and resultant torque of the rigid system of particles. Let the system of formula_66 particles, formula_67 be located at the coordinates formula_74 with velocities formula_69 relative to a fixed reference frame.
For a (possibly moving) reference point formula_68, the relative positions are formula_95 and the (absolute) velocities are formula_96 where formula_17 is the angular velocity of the system, and formula_97 is the velocity of formula_68. Angular momentum. Note that the cross product can be equivalently written as matrix multiplication by combining the first operand and the operator into a skew-symmetric matrix, formula_98, constructed from the components of formula_99: formula_100 The inertia matrix is constructed by considering the angular momentum, with the reference point formula_68 of the body chosen to be the center of mass formula_78: formula_101 where the terms containing formula_97 (formula_102) sum to zero by the definition of center of mass. Then, the skew-symmetric matrix formula_103 obtained from the relative position vector formula_104, can be used to define, formula_105 where formula_106 defined by formula_107 is the symmetric inertia matrix of the rigid system of particles measured relative to the center of mass formula_78. Kinetic energy. The kinetic energy of a rigid system of particles can be formulated in terms of the center of mass and a matrix of mass moments of inertia of the system. Let the system of formula_66 particles formula_67 be located at the coordinates formula_74 with velocities formula_69, then the kinetic energy is formula_108 where formula_104 is the position vector of a particle relative to the center of mass. This equation expands to yield three terms formula_109 Since the center of mass is defined by formula_110 , the second term in this equation is zero. Introduce the skew-symmetric matrix formula_103 so the kinetic energy becomes formula_111 Thus, the kinetic energy of the rigid system of particles is given by formula_112 where formula_106 is the inertia matrix relative to the center of mass and formula_113 is the total mass. Resultant torque. The inertia matrix appears in the application of Newton's second law to a rigid assembly of particles. The resultant torque on this system is, formula_114 where formula_115 is the acceleration of the particle formula_86. The kinematics of a rigid body yields the formula for the acceleration of the particle formula_86 in terms of the position formula_68 and acceleration formula_116 of the reference point, as well as the angular velocity vector formula_17 and angular acceleration vector formula_10 of the rigid system as, formula_117 Use the center of mass formula_78 as the reference point, and introduce the skew-symmetric matrix formula_118 to represent the cross product formula_119, to obtain formula_120 The calculation uses the identity formula_121 obtained from the Jacobi identity for the triple cross product as shown in the proof below: &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof formula_122 In the last statement, formula_123 because formula_68 is either at rest or moving at a constant velocity but not accelerated, or the origin of the fixed (world) coordinate reference system is placed at the center of mass formula_78. 
And distributing the cross product over the sum, we get formula_124 Then, the following Jacobi identity is used on the last term: formula_125 The result of applying Jacobi identity can then be continued as follows: formula_126 The final result can then be substituted to the main proof as follows: formula_127 formula_128 Notice that for any vector formula_129, the following holds: formula_130 Finally, the result is used to complete the main proof as follows: formula_131 Thus, the resultant torque on the rigid system of particles is given by formula_132 where formula_106 is the inertia matrix relative to the center of mass. Parallel axis theorem. The inertia matrix of a body depends on the choice of the reference point. There is a useful relationship between the inertia matrix relative to the center of mass formula_78 and the inertia matrix relative to another point formula_68. This relationship is called the parallel axis theorem. Consider the inertia matrix formula_133 obtained for a rigid system of particles measured relative to a reference point formula_68, given by formula_134 Let formula_78 be the center of mass of the rigid system, then formula_135 where formula_136 is the vector from the center of mass formula_78 to the reference point formula_68. Use this equation to compute the inertia matrix, formula_137 Distribute over the cross product to obtain formula_138 The first term is the inertia matrix formula_106 relative to the center of mass. The second and third terms are zero by definition of the center of mass formula_78. And the last term is the total mass of the system multiplied by the square of the skew-symmetric matrix formula_139 constructed from formula_136. The result is the parallel axis theorem, formula_140 where formula_136 is the vector from the center of mass formula_78 to the reference point formula_68. Note on the minus sign: By using the skew symmetric matrix of position vectors relative to the reference point, the inertia matrix of each particle has the form formula_141, which is similar to the formula_0 that appears in planar movement. However, to make this to work out correctly a minus sign is needed. This minus sign can be absorbed into the term formula_142, if desired, by using the skew-symmetry property of formula_143. Scalar moment of inertia in a plane. The scalar moment of inertia, formula_144, of a body about a specified axis whose direction is specified by the unit vector formula_14 and passes through the body at a point formula_68 is as follows: formula_145 where formula_133 is the moment of inertia matrix of the system relative to the reference point formula_68, and formula_103 is the skew symmetric matrix obtained from the vector formula_146. This is derived as follows. Let a rigid assembly of formula_66 particles, formula_67, have coordinates formula_74. Choose formula_68 as a reference point and compute the moment of inertia around a line L defined by the unit vector formula_14 through the reference point formula_68, formula_147. The perpendicular vector from this line to the particle formula_86 is obtained from formula_148 by removing the component that projects onto formula_14. formula_149 where formula_150 is the identity matrix, so as to avoid confusion with the inertia matrix, and formula_151 is the outer product matrix formed from the unit vector formula_14 along the line formula_27. 
To relate this scalar moment of inertia to the inertia matrix of the body, introduce the skew-symmetric matrix formula_152 such that formula_153, then we have the identity formula_154 noting that formula_14 is a unit vector. The magnitude squared of the perpendicular vector is formula_155 The simplification of this equation uses the triple scalar product identity formula_156 where the dot and the cross products have been interchanged. Exchanging products, and simplifying by noting that formula_148 and formula_14 are orthogonal: formula_157 Thus, the moment of inertia around the line formula_27 through formula_68 in the direction formula_14 is obtained from the calculation formula_158 where formula_133 is the moment of inertia matrix of the system relative to the reference point formula_68. This shows that the inertia matrix can be used to calculate the moment of inertia of a body around any specified rotation axis in the body. Inertia tensor. For the same object, different axes of rotation will have different moments of inertia about those axes. In general, the moments of inertia are not equal unless the object is symmetric about all axes. The moment of inertia tensor is a convenient way to summarize all moments of inertia of an object with one quantity. It may be calculated with respect to any point in space, although for practical purposes the center of mass is most commonly used. Definition. For a rigid object of formula_39 point masses formula_159, the moment of inertia tensor is given by formula_160 Its components are defined as formula_161 where Note that, by the definition, formula_167 is a symmetric tensor. The diagonal elements are more succinctly written as formula_168 while the off-diagonal elements, also called the products of inertia, are formula_169 Here formula_32 denotes the moment of inertia around the formula_51-axis when the objects are rotated around the x-axis, formula_170 denotes the moment of inertia around the formula_52-axis when the objects are rotated around the formula_51-axis, and so on. These quantities can be generalized to an object with distributed mass, described by a mass density function, in a similar fashion to the scalar moment of inertia. One then has formula_171 where formula_172 is their outer product, E3 is the 3×3 identity matrix, and "V" is a region of space completely containing the object. Alternatively it can also be written in terms of the angular momentum operator formula_173: formula_174 The inertia tensor can be used in the same way as the inertia matrix to compute the scalar moment of inertia about an arbitrary axis in the direction formula_175, formula_176 where the dot product is taken with the corresponding elements in the component tensors. A product of inertia term such as formula_177 is obtained by the computation formula_178 and can be interpreted as the moment of inertia around the formula_51-axis when the object rotates around the formula_52-axis. The components of tensors of degree two can be assembled into a matrix. For the inertia tensor this matrix is given by, formula_179 It is common in rigid body mechanics to use notation that explicitly identifies the formula_51, formula_52, and formula_50-axes, such as formula_32 and formula_170, for the components of the inertia tensor. Alternate inertia convention. There are some CAD and CAE applications such as SolidWorks, Unigraphics NX/Siemens NX and MSC Adams that use an alternate convention for the products of inertia. 
According to this convention, the minus sign is removed from the product of inertia formulas and instead inserted in the inertia matrix: formula_180 Determine inertia convention (Principal axes method). If one has the inertia data formula_181 without knowing which inertia convention that has been used, it can be determined if one also has the principal axes. With the principal axes method, one makes inertia matrices from the following two assumptions: Next, one calculates the eigenvectors for the two matrices. The matrix whose eigenvectors are parallel to the principal axes corresponds to the inertia convention that has been used. Derivation of the tensor components. The distance formula_1 of a particle at formula_184 from the axis of rotation passing through the origin in the formula_185 direction is formula_186, where formula_185 is unit vector. The moment of inertia on the axis is formula_187 Rewrite the equation using matrix transpose: formula_188 where E3 is the 3×3 identity matrix. This leads to a tensor formula for the moment of inertia formula_189 For multiple particles, we need only recall that the moment of inertia is additive in order to see that this formula is correct. Inertia tensor of translation. Let formula_190 be the inertia tensor of a body calculated at its center of mass, and formula_68 be the displacement vector of the body. The inertia tensor of the translated body respect to its original center of mass is given by: formula_191 where formula_2 is the body's mass, E3 is the 3 × 3 identity matrix, and formula_192 is the outer product. Inertia tensor of rotation. Let formula_68 be the matrix that represents a body's rotation. The inertia tensor of the rotated body is given by: formula_193 Inertia matrix in different reference frames. The use of the inertia matrix in Newton's second law assumes its components are computed relative to axes parallel to the inertial frame and not relative to a body-fixed reference frame. This means that as the body moves the components of the inertia matrix change with time. In contrast, the components of the inertia matrix measured in a body-fixed frame are constant. Body frame. Let the body frame inertia matrix relative to the center of mass be denoted formula_194, and define the orientation of the body frame relative to the inertial frame by the rotation matrix formula_87, such that, formula_195 where vectors formula_196 in the body fixed coordinate frame have coordinates formula_184 in the inertial frame. Then, the inertia matrix of the body measured in the inertial frame is given by formula_197 Notice that formula_87 changes as the body moves, while formula_194 remains constant. Principal axes. Measured in the body frame, the inertia matrix is a constant real symmetric matrix. A real symmetric matrix has the eigendecomposition into the product of a rotation matrix formula_198 and a diagonal matrix formula_199, given by formula_200 where formula_201 The columns of the rotation matrix formula_198 define the directions of the principal axes of the body, and the constants formula_202, formula_203, and formula_204 are called the principal moments of inertia. This result was first shown by J. J. Sylvester (1852), and is a form of Sylvester's law of inertia. The principal axis with the highest moment of inertia is sometimes called the figure axis or axis of figure. A toy top is an example of a rotating rigid body, and the word "top" is used in the names of types of rigid bodies. 
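The eigendecomposition just described is exactly what a numerical routine produces when applied to an inertia matrix. The following Python sketch (the masses and coordinates are invented solely for illustration) builds the inertia matrix of a small set of point masses about their center of mass and extracts the principal moments and principal axes.

import numpy as np

# Illustrative rigid body: point masses (kg) at fixed positions (m).
masses = np.array([1.0, 2.0, 1.5, 0.5])
positions = np.array([[0.3, 0.0, 0.1],
                      [-0.2, 0.4, 0.0],
                      [0.0, -0.3, 0.2],
                      [0.5, 0.1, -0.4]])

com = (masses[:, None] * positions).sum(axis=0) / masses.sum()
d = positions - com                          # positions relative to the center of mass

# Inertia matrix about the center of mass: I_C = sum_i m_i (|d_i|^2 E3 - d_i d_i^T)
I_C = sum(m * (np.dot(r, r) * np.eye(3) - np.outer(r, r)) for m, r in zip(masses, d))

# Principal moments (eigenvalues) and principal axes (columns of Q, an orthogonal matrix;
# flip the sign of one column if a proper rotation matrix is required).
principal_moments, Q = np.linalg.eigh(I_C)
print(principal_moments)
np.testing.assert_allclose(Q @ np.diag(principal_moments) @ Q.T, I_C, atol=1e-10)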
When all principal moments of inertia are distinct, the principal axes through center of mass are uniquely specified and the rigid body is called an asymmetric top. If two principal moments are the same, the rigid body is called a symmetric top and there is no unique choice for the two corresponding principal axes. If all three principal moments are the same, the rigid body is called a spherical top (although it need not be spherical) and any axis can be considered a principal axis, meaning that the moment of inertia is the same about any axis. The principal axes are often aligned with the object's symmetry axes. If a rigid body has an axis of symmetry of order formula_2, meaning it is symmetrical under rotations of 360°/"m" about the given axis, that axis is a principal axis. When formula_205, the rigid body is a symmetric top. If a rigid body has at least two symmetry axes that are not parallel or perpendicular to each other, it is a spherical top, for example, a cube or any other Platonic solid. The motion of vehicles is often described in terms of yaw, pitch, and roll which usually correspond approximately to rotations about the three principal axes. If the vehicle has bilateral symmetry then one of the principal axes will correspond exactly to the transverse (pitch) axis. A practical example of this mathematical phenomenon is the routine automotive task of balancing a tire, which basically means adjusting the distribution of mass of a car wheel such that its principal axis of inertia is aligned with the axle so the wheel does not wobble. Rotating molecules are also classified as asymmetric, symmetric, or spherical tops, and the structure of their rotational spectra is different for each type. Ellipsoid. The moment of inertia matrix in body-frame coordinates is a quadratic form that defines a surface in the body called Poinsot's ellipsoid. Let formula_199 be the inertia matrix relative to the center of mass aligned with the principal axes, then the surface formula_206 or formula_207 defines an ellipsoid in the body frame. Write this equation in the form, formula_208 to see that the semi-principal diameters of this ellipsoid are given by formula_209 Let a point formula_184 on this ellipsoid be defined in terms of its magnitude and direction, formula_210, where formula_175 is a unit vector. Then the relationship presented above, between the inertia matrix and the scalar moment of inertia formula_211 around an axis in the direction formula_175, yields formula_212 Thus, the magnitude of a point formula_184 in the direction formula_175 on the inertia ellipsoid is formula_213 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
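The following short numerical sketch (an addition, not part of the original article) illustrates the point-mass definition of the inertia tensor given above, the scalar moment of inertia formula_176 about an axis in the direction formula_175, and the principal moments obtained from the eigendecomposition described under "Principal axes". It assumes Python with NumPy; the function name, the unit masses and the square arrangement of the points are illustrative choices, not data from the article.

import numpy as np

def inertia_tensor(masses, positions):
    # Inertia tensor of point masses about the origin: sum of m * (|r|^2 * E3 - outer(r, r)).
    I = np.zeros((3, 3))
    for m, r in zip(masses, np.asarray(positions, dtype=float)):
        I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
    return I

# Example: four unit masses at the corners of a square in the x-y plane.
masses = [1.0, 1.0, 1.0, 1.0]
positions = [(1, 1, 0), (1, -1, 0), (-1, 1, 0), (-1, -1, 0)]
I = inertia_tensor(masses, positions)

# Scalar moment of inertia about the z-axis through the origin: I_n = n . (I n).
n = np.array([0.0, 0.0, 1.0])
I_n = n @ I @ n                      # 8.0 here, since each mass contributes x^2 + y^2 = 2

# Principal moments and principal axes from the eigendecomposition I = Q diag(I1, I2, I3) Q^T.
principal_moments, Q = np.linalg.eigh(I)
print(I_n, principal_moments)        # prints 8.0 and [4. 4. 8.]

For this flat square the eigenvalues come out as (4, 4, 8): two equal in-plane principal moments and a larger one about the symmetry axis, consistent with the "symmetric top" classification discussed above.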
[ { "math_id": 0, "text": "mr^2" }, { "math_id": 1, "text": "r" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "I = \\frac{L}{\\omega}." }, { "math_id": 4, "text": "\\tau = I \\alpha." }, { "math_id": 5, "text": "I = mr^2." }, { "math_id": 6, "text": "I = m k^2," }, { "math_id": 7, "text": "\\boldsymbol{\\tau} = \\mathbf{r} \\times \\mathbf{F}" }, { "math_id": 8, "text": "\\mathbf{r}" }, { "math_id": 9, "text": "\\mathbf{F}" }, { "math_id": 10, "text": "\\boldsymbol{\\alpha}" }, { "math_id": 11, "text": "\\mathbf{a} = \\boldsymbol{\\alpha} \\times \\mathbf{r}" }, { "math_id": 12, "text": "\\mathbf F = m \\mathbf a" }, { "math_id": 13, "text": "\\begin{align}\n \\boldsymbol{\\tau}\n &= \\mathbf{r} \\times \\mathbf{F}\n = \\mathbf{r} \\times (m \\boldsymbol{\\alpha} \\times \\mathbf{r}) \\\\\n &= m \\left(\\left(\\mathbf{r} \\cdot \\mathbf{r}\\right) \\boldsymbol{\\alpha} - \\left(\\mathbf{r} \\cdot \\boldsymbol{\\alpha}\\right) \\mathbf{r}\\right) \\\\\n &= mr^2 \\boldsymbol{\\alpha}\n = I\\alpha \\mathbf{\\hat{k}},\n\\end{align}" }, { "math_id": 14, "text": "\\mathbf{\\hat{k}}" }, { "math_id": 15, "text": "I = mr^2" }, { "math_id": 16, "text": "\\mathbf{v} = \\boldsymbol{\\omega} \\times \\mathbf{r}" }, { "math_id": 17, "text": "\\boldsymbol{\\omega}" }, { "math_id": 18, "text": "\\begin{align}\n \\mathbf{L}\n &= \\mathbf{r} \\times \\mathbf{p}\n = \\mathbf{r} \\times \\left(m\\boldsymbol{\\omega} \\times \\mathbf{r}\\right) \\\\\n & = m\\left(\\left(\\mathbf{r} \\cdot \\mathbf{r}\\right)\\boldsymbol{\\omega} - \\left(\\mathbf{r} \\cdot \\boldsymbol{\\omega}\\right)\\mathbf{r}\\right) \\\\\n &= mr^2 \\boldsymbol{\\omega}\n = I\\omega\\mathbf{\\hat{k}},\n\\end{align}" }, { "math_id": 19, "text": "E_\\text{K} = \\frac{1}{2} m \\mathbf{v} \\cdot \\mathbf{v} = \\frac{1}{2} \\left(mr^2\\right)\\omega^2 = \\frac{1}{2}I\\omega^2." }, { "math_id": 20, "text": "\\omega_\\text{n}" }, { "math_id": 21, "text": "I_P" }, { "math_id": 22, "text": "\\omega_\\text{n} = \\sqrt{\\frac{mgr}{I_P}}," }, { "math_id": 23, "text": "g" }, { "math_id": 24, "text": "P" }, { "math_id": 25, "text": "t" }, { "math_id": 26, "text": "I_P = \\frac{mgr}{\\omega_\\text{n}^2} = \\frac{mgrt^2}{4\\pi^2}," }, { "math_id": 27, "text": "L" }, { "math_id": 28, "text": "\\omega_\\text{n} = \\sqrt{\\frac{g}{L}} = \\sqrt{\\frac{mgr}{I_P}}," }, { "math_id": 29, "text": "L = \\frac{g}{\\omega_\\text{n}^2} = \\frac{I_P}{mr}." }, { "math_id": 30, "text": "\\pi \\ \\mathrm{rad/s}" }, { "math_id": 31, "text": "L = \\frac{g}{\\omega_\\text{n}^2} \\approx \\frac{9.81 \\ \\mathrm{m/s^2}}{(3.14 \\ \\mathrm{rad/s})^2} \\approx 0.99 \\ \\mathrm{m}." }, { "math_id": 32, "text": "I_{xx}" }, { "math_id": 33, "text": "I_{yy}" }, { "math_id": 34, "text": "I_{xx}=I_{yy}=\\frac{b^4}{12}" }, { "math_id": 35, "text": "I_{xx}=\\frac{bh^3}{12}" }, { "math_id": 36, "text": "I_{yy}=\\frac{hb^3}{12}" }, { "math_id": 37, "text": "I_{xx}=\\frac{bh^3}{36}" }, { "math_id": 38, "text": "I_{xx}=I_{yy}=\\frac{1}{4} {\\pi} r^4=\\frac{1}{64} {\\pi} d^4" }, { "math_id": 39, "text": "N" }, { "math_id": 40, "text": "m_i" }, { "math_id": 41, "text": "r_i" }, { "math_id": 42, "text": "\n E_\\text{K} =\n \\sum_{i=1}^N \\frac{1}{2}\\,m_i \\mathbf{v}_i \\cdot \\mathbf{v}_i =\n \\sum_{i=1}^N \\frac{1}{2}\\,m_i \\left(\\omega r_i\\right)^2 =\n \\frac12\\, \\omega^2 \\sum_{i=1}^N m_i r_i^2.\n" }, { "math_id": 43, "text": "I_P = \\sum_{i=1}^N m_i r_i^2." 
}, { "math_id": 44, "text": "I_P = \\sum_i m_i r_i^2" }, { "math_id": 45, "text": "I_P = \\iiint_{Q} \\rho(x, y, z) \\left\\|\\mathbf{r}\\right\\|^2 dV" }, { "math_id": 46, "text": "\\rho" }, { "math_id": 47, "text": "(x, y, z)" }, { "math_id": 48, "text": "V" }, { "math_id": 49, "text": "Q" }, { "math_id": 50, "text": "z" }, { "math_id": 51, "text": "x" }, { "math_id": 52, "text": "y" }, { "math_id": 53, "text": "s" }, { "math_id": 54, "text": "\\ell" }, { "math_id": 55, "text": "\n I_{C, \\text{rod}} = \\iiint_Q \\rho\\,x^2 \\, dV =\n \\int_{-\\frac{\\ell}{2}}^\\frac{\\ell}{2} \\rho\\,x^2 s\\, dx =\n \\left. \\rho s\\frac{x^3}{3}\\right|_{-\\frac{\\ell}{2}}^\\frac{\\ell}{2} =\n \\frac{\\rho s}{3} \\left(\\frac{\\ell^3}{8} + \\frac{\\ell^3}{8}\\right) =\n \\frac{m\\ell^2}{12},\n" }, { "math_id": 56, "text": "m = \\rho s \\ell" }, { "math_id": 57, "text": "R" }, { "math_id": 58, "text": "dV = sr \\, dr\\, d\\theta" }, { "math_id": 59, "text": "\n I_{C, \\text{disc}} = \\iiint_Q \\rho \\, r^2\\, dV =\n \\int_0^{2\\pi} \\int_0^R \\rho r^2 s r\\, dr\\, d\\theta =\n 2\\pi \\rho s \\frac{R^4}{4} =\n \\frac{1}{2}mR^2,\n" }, { "math_id": 60, "text": "m = \\pi R^2 \\rho s" }, { "math_id": 61, "text": " I_P = I_{C, \\text{rod}} + M_\\text{rod}\\left(\\frac{L}{2}\\right)^2 + I_{C, \\text{disc}} + M_\\text{disc}(L + R)^2," }, { "math_id": 62, "text": " x^2 + y^2 + z^2 = R^2," }, { "math_id": 63, "text": "r(z)^2 = x^2 + y^2 = R^2 - z^2." }, { "math_id": 64, "text": "\\begin{align}\n I_{C, \\text{sphere}}\n &= \\int_{-R}^R \\tfrac{1}{2} \\pi \\rho r(z)^4\\, dz = \\int_{-R}^R \\tfrac{1}{2} \\pi \\rho \\left(R^2 - z^2\\right)^2\\,dz \\\\[1ex]\n &= \\tfrac{1}{2} \\pi \\rho \\left[R^4z - \\tfrac{2}{3} R^2 z^3 + \\tfrac{1}{5} z^5\\right]_{-R}^R \\\\[1ex]\n &= \\pi \\rho\\left(1 - \\tfrac{2}{3} + \\tfrac{1}{5}\\right)R^5 \\\\[1ex]\n &= \\tfrac{2}{5} mR^2,\n\\end{align}" }, { "math_id": 65, "text": "m = \\frac{4}{3}\\pi R^3 \\rho" }, { "math_id": 66, "text": "n" }, { "math_id": 67, "text": "P_i, i = 1, \\dots, n" }, { "math_id": 68, "text": "\\mathbf{R}" }, { "math_id": 69, "text": "\\mathbf{v}_i" }, { "math_id": 70, "text": "\\begin{align}\n \\Delta\\mathbf{r}_i &= \\mathbf{r}_i - \\mathbf{R}, \\\\\n \\mathbf{v}_i &= \\boldsymbol{\\omega} \\times \\left(\\mathbf{r}_i - \\mathbf{R}\\right) + \\mathbf{V}\n = \\boldsymbol{\\omega} \\times \\Delta\\mathbf{r}_i + \\mathbf{V},\n\\end{align}" }, { "math_id": 71, "text": "\\mathbf{V}" }, { "math_id": 72, "text": "\\mathbf{k}" }, { "math_id": 73, "text": "\\mathbf{e}_i" }, { "math_id": 74, "text": "\\mathbf{r}_i" }, { "math_id": 75, "text": "\\mathbf{\\hat{t}}_i = \\mathbf{\\hat{k}} \\times \\mathbf{\\hat{e}}_i" }, { "math_id": 76, "text": "\\begin{align}\n \\mathbf{\\hat{e}}_i &= \\frac{\\Delta\\mathbf{r}_i}{\\Delta r_i},\\quad\n \\mathbf{\\hat{k}} = \\frac{\\boldsymbol{\\omega}}{\\omega},\\quad\n \\mathbf{\\hat{t}}_i = \\mathbf{\\hat{k}} \\times \\mathbf{\\hat{e}}_i, \\\\\n \\mathbf{v}_i &= \\boldsymbol{\\omega} \\times \\Delta\\mathbf{r}_i + \\mathbf{V}\n = \\omega\\mathbf{\\hat{k}} \\times \\Delta r_i\\mathbf{\\hat{e}}_i + \\mathbf{V}\n = \\omega\\, \\Delta r_i\\mathbf{\\hat{t}}_i + \\mathbf{V}\n\\end{align}" }, { "math_id": 77, "text": "\\begin{align}\n \\mathbf{L} &= \\sum_{i=1}^n m_i \\Delta\\mathbf{r}_i \\times \\mathbf{v}_i \\\\\n &= \\sum_{i=1}^n m_i \\,\\Delta r_i\\mathbf{\\hat{e}}_i \\times \\left(\\omega\\, \\Delta r_i\\mathbf{\\hat{t}}_i + \\mathbf{V}\\right) \\\\\n &= \\left(\\sum_{i=1}^n m_i \\,\\Delta r_i^2\\right)\\omega \\mathbf{\\hat{k}} +\n 
\\left(\\sum_{i=1}^n m_i\\,\\Delta r_i\\mathbf{\\hat{e}}_i\\right) \\times \\mathbf{V}.\n\\end{align}" }, { "math_id": 78, "text": "\\mathbf{C}" }, { "math_id": 79, "text": "\\begin{align}\n \\Delta r_i \\mathbf{\\hat{e}}_i &= \\mathbf{r}_i - \\mathbf{C}, \\\\\n \\sum_{i=1}^n m_i\\,\\Delta r_i \\mathbf{\\hat{e}}_i &= 0,\n\\end{align}" }, { "math_id": 80, "text": "I_\\mathbf{C}" }, { "math_id": 81, "text": "I_\\mathbf{C} = \\sum_{i} m_i\\,\\Delta r_i^2," }, { "math_id": 82, "text": "\\mathbf{L} = I_\\mathbf{C} \\omega \\mathbf{\\hat{k}}." }, { "math_id": 83, "text": "\\begin{align}\n E_\\text{K}\n &= \\frac{1}{2} \\sum_{i=1}^n m_i \\mathbf{v}_i \\cdot \\mathbf{v}_i, \\\\\n &= \\frac{1}{2} \\sum_{i=1}^n m_i\n \\left(\\omega \\,\\Delta r_i\\mathbf{\\hat{t}}_i + \\mathbf{V}\\right) \\cdot\n \\left(\\omega \\,\\Delta r_i\\mathbf{\\hat{t}}_i + \\mathbf{V}\\right), \\\\\n &= \\frac{1}{2}\\omega^2 \\left(\\sum_{i=1}^n m_i\\, \\Delta r_i^2 \\mathbf{\\hat{t}}_i \\cdot \\mathbf{\\hat{t}}_i\\right) +\n \\omega\\mathbf{V} \\cdot \\left(\\sum_{i=1}^n m_i \\,\\Delta r_i\\mathbf{\\hat{t}}_i\\right) +\n \\frac{1}{2}\\left(\\sum_{i=1}^n m_i\\right) \\mathbf{V} \\cdot \\mathbf{V}.\n\\end{align}" }, { "math_id": 84, "text": "E_\\text{K} = \\frac{1}{2} I_\\mathbf{C} \\omega^2 + \\frac{1}{2} M\\mathbf{V} \\cdot \\mathbf{V}." }, { "math_id": 85, "text": "\\begin{align}\n \\mathbf{F} &= \\sum_{i=1}^n m_i\\mathbf{A}_i, \\\\\n \\boldsymbol\\tau &= \\sum_{i=1}^n \\Delta\\mathbf{r}_i \\times m_i\\mathbf{A}_i,\n\\end{align}" }, { "math_id": 86, "text": "P_i" }, { "math_id": 87, "text": "\\mathbf{A}" }, { "math_id": 88, "text": "\n \\mathbf{A}_i =\n \\boldsymbol\\alpha \\times \\Delta\\mathbf{r}_i + \\boldsymbol{\\omega} \\times \\boldsymbol{\\omega} \\times \\Delta\\mathbf{r}_i + \\mathbf{A}.\n" }, { "math_id": 89, "text": "\\mathbf{\\hat{e}}_i" }, { "math_id": 90, "text": "\\begin{align}\n \\mathbf{A}_i\n &= \\alpha\\mathbf{\\hat{k}} \\times \\Delta r_i\\mathbf{\\hat{e}}_i - \\omega\\mathbf{\\hat{k}} \\times \\omega\\mathbf{\\hat{k}} \\times \\Delta r_i\\mathbf{\\hat{e}}_i + \\mathbf{A} \\\\\n &= \\alpha \\Delta r_i\\mathbf{\\hat{t}}_i - \\omega^2 \\Delta r_i\\mathbf{\\hat{e}}_i + \\mathbf{A}.\n\\end{align}" }, { "math_id": 91, "text": "\\begin{align}\n \\boldsymbol{\\tau} &= \\sum_{i=1}^n m_i\\,\\Delta r_i\\mathbf{\\hat{e}}_i \\times \\left(\\alpha\\Delta r_i\\mathbf{\\hat{t}}_i - \\omega^2\\Delta r_i\\mathbf{\\hat{e}}_i + \\mathbf{A}\\right) \\\\\n &= \\left(\\sum_{i=1}^n m_i\\,\\Delta r_i^2\\right)\\alpha \\mathbf{\\hat{k}} + \\left(\\sum_{i=1}^n m_i\\,\\Delta r_i\\mathbf{\\hat{e}}_i\\right) \\times\\mathbf{A},\n\\end{align}" }, { "math_id": 92, "text": "\\mathbf{\\hat{e}}_i \\times \\mathbf{\\hat{e}}_i = \\mathbf{0}" }, { "math_id": 93, "text": "\\mathbf{\\hat{e}}_i \\times \\mathbf{\\hat{t}}_i = \\mathbf{\\hat{k}}" }, { "math_id": 94, "text": "\\boldsymbol{\\tau} = I_\\mathbf{C}\\alpha\\mathbf{\\hat{k}}." 
}, { "math_id": 95, "text": "\\Delta\\mathbf{r}_i = \\mathbf{r}_i - \\mathbf{R}" }, { "math_id": 96, "text": "\\mathbf{v}_i = \\boldsymbol{\\omega} \\times \\Delta\\mathbf{r}_i + \\mathbf{V}_\\mathbf{R}" }, { "math_id": 97, "text": "\\mathbf{V_R}" }, { "math_id": 98, "text": "\\left[\\mathbf{b}\\right]" }, { "math_id": 99, "text": "\\mathbf{b} = (b_x, b_y, b_z)" }, { "math_id": 100, "text": "\\begin{align}\n \\mathbf{b} \\times \\mathbf{y}\n &\\equiv \\left[\\mathbf{b}\\right] \\mathbf{y} \\\\\n \\left[\\mathbf{b}\\right] &\\equiv \\begin{bmatrix}\n 0 & -b_z & b_y \\\\\n b_z & 0 & -b_x \\\\\n -b_y & b_x & 0\n \\end{bmatrix}.\n\\end{align}" }, { "math_id": 101, "text": "\\begin{align}\n \\mathbf{L}\n &= \\sum_{i=1}^n m_i\\,\\Delta\\mathbf{r}_i \\times \\mathbf{v}_i \\\\\n &= \\sum_{i=1}^n m_i\\,\\Delta\\mathbf{r}_i \\times \\left(\\boldsymbol{\\omega} \\times \\Delta\\mathbf{r}_i + \\mathbf{V}_\\mathbf{R}\\right) \\\\\n &= \\left(-\\sum_{i=1}^n m_i\\,\\Delta\\mathbf{r}_i \\times \\left(\\Delta\\mathbf{r}_i \\times \\boldsymbol{\\omega}\\right)\\right) + \\left(\\sum_{i=1}^n m_i \\,\\Delta\\mathbf{r}_i \\times \\mathbf{V}_\\mathbf{R}\\right),\n\\end{align}" }, { "math_id": 102, "text": "= \\mathbf{C}" }, { "math_id": 103, "text": "[\\Delta\\mathbf{r}_i]" }, { "math_id": 104, "text": "\\Delta\\mathbf{r}_i = \\mathbf{r}_i - \\mathbf{C}" }, { "math_id": 105, "text": "\n \\mathbf{L} =\n \\left(-\\sum_{i=1}^n m_i \\left[\\Delta\\mathbf{r}_i\\right]^2\\right)\\boldsymbol{\\omega} =\n \\mathbf{I}_\\mathbf{C} \\boldsymbol{\\omega},\n" }, { "math_id": 106, "text": "\\mathbf{I_C}" }, { "math_id": 107, "text": "\\mathbf{I}_\\mathbf{C} = -\\sum_{i=1}^n m_i \\left[\\Delta\\mathbf{r}_i\\right]^2," }, { "math_id": 108, "text": "\n E_\\text{K} = \\frac{1}{2} \\sum_{i=1}^n m_i \\mathbf{v}_i \\cdot \\mathbf{v}_i =\n \\frac{1}{2} \\sum_{i=1}^n\n m_i \\left(\\boldsymbol{\\omega} \\times \\Delta\\mathbf{r}_i + \\mathbf{V}_\\mathbf{C}\\right) \\cdot\n \\left(\\boldsymbol{\\omega} \\times \\Delta\\mathbf{r}_i + \\mathbf{V}_\\mathbf{C}\\right),\n" }, { "math_id": 109, "text": "\n E_\\text{K} =\n \\frac{1}{2}\\left(\\sum_{i=1}^n m_i \\left(\\boldsymbol{\\omega} \\times \\Delta\\mathbf{r}_i\\right) \\cdot \\left(\\boldsymbol{\\omega} \\times \\Delta\\mathbf{r}_i\\right)\\right) +\n \\left(\\sum_{i=1}^n m_i \\mathbf{V}_\\mathbf{C} \\cdot \\left(\\boldsymbol{\\omega} \\times \\Delta\\mathbf{r}_i\\right)\\right) +\n \\frac{1}{2}\\left(\\sum_{i=1}^n m_i \\mathbf{V}_\\mathbf{C} \\cdot \\mathbf{V}_\\mathbf{C}\\right).\n" }, { "math_id": 110, "text": "\n \\sum_{i=1}^n m_i \\Delta\\mathbf{r}_i =0" }, { "math_id": 111, "text": "\\begin{align}\n E_\\text{K}\n &= \\frac{1}{2}\\left(\\sum_{i=1}^n m_i \\left(\\left[\\Delta\\mathbf{r}_i\\right] \\boldsymbol{\\omega}\\right) \\cdot \\left(\\left[\\Delta\\mathbf{r}_i\\right] \\boldsymbol{\\omega}\\right)\\right) +\n \\frac{1}{2}\\left(\\sum_{i=1}^n m_i\\right) \\mathbf{V}_\\mathbf{C} \\cdot \\mathbf{V}_\\mathbf{C} \\\\\n &= \\frac{1}{2}\\left(\\sum_{i=1}^n m_i \\left(\\boldsymbol{\\omega}^\\mathsf{T}\\left[\\Delta\\mathbf{r}_i\\right]^\\mathsf{T} \\left[\\Delta\\mathbf{r}_i\\right] \\boldsymbol{\\omega}\\right)\\right) +\n \\frac{1}{2}\\left(\\sum_{i=1}^n m_i\\right) \\mathbf{V}_\\mathbf{C} \\cdot \\mathbf{V}_\\mathbf{C} \\\\\n &= \\frac{1}{2}\\boldsymbol{\\omega} \\cdot \\left(-\\sum_{i=1}^n m_i \\left[\\Delta\\mathbf{r}_i\\right]^2\\right) \\boldsymbol{\\omega} +\n \\frac{1}{2}\\left(\\sum_{i=1}^n m_i\\right) \\mathbf{V}_\\mathbf{C} \\cdot \\mathbf{V}_\\mathbf{C}.\n\\end{align}" 
}, { "math_id": 112, "text": "E_\\text{K} = \\frac{1}{2} \\boldsymbol{\\omega} \\cdot \\mathbf{I}_\\mathbf{C} \\boldsymbol{\\omega} + \\frac{1}{2} M\\mathbf{V}_\\mathbf{C}^2." }, { "math_id": 113, "text": "M" }, { "math_id": 114, "text": "\\boldsymbol{\\tau} = \\sum_{i=1}^n \\left(\\mathbf{r_i} - \\mathbf{R}\\right) \\times m_i\\mathbf{a}_i," }, { "math_id": 115, "text": "\\mathbf{a}_i" }, { "math_id": 116, "text": "\\mathbf{A}_\\mathbf{R}" }, { "math_id": 117, "text": "\\mathbf{a}_i = \\boldsymbol{\\alpha} \\times \\left(\\mathbf{r}_i - \\mathbf{R}\\right) + \\boldsymbol{\\omega} \\times \\left( \\boldsymbol{\\omega} \\times \\left(\\mathbf{r}_i - \\mathbf{R}\\right) \\right) + \\mathbf{A}_\\mathbf{R}." }, { "math_id": 118, "text": "\\left[\\Delta\\mathbf{r}_i\\right] = \\left[\\mathbf{r}_i - \\mathbf{C}\\right]" }, { "math_id": 119, "text": "(\\mathbf{r}_i - \\mathbf{C}) \\times" }, { "math_id": 120, "text": "\n \\boldsymbol{\\tau} = \\left(-\\sum_{i=1}^n m_i\\left[\\Delta\\mathbf{r}_i\\right]^2\\right)\\boldsymbol{\\alpha} +\n \\boldsymbol{\\omega} \\times \\left(-\\sum_{i=1}^n m_i \\left[\\Delta\\mathbf{r}_i\\right]^2\\right)\\boldsymbol{\\omega}\n" }, { "math_id": 121, "text": "\n \\Delta\\mathbf{r}_i \\times \\left(\\boldsymbol{\\omega} \\times \\left(\\boldsymbol{\\omega} \\times \\Delta\\mathbf{r}_i\\right)\\right) + \n \\boldsymbol{\\omega} \\times \\left(\\left(\\boldsymbol{\\omega} \\times \\Delta\\mathbf{r}_i\\right) \\times \\Delta\\mathbf{r}_i\\right)\n = 0,\n" }, { "math_id": 122, "text": "\\begin{align}\n \\boldsymbol{\\tau}\n &= \\sum_{i=1}^n (\\mathbf{r_i} - \\mathbf{R})\\times (m_i\\mathbf{a}_i) \\\\\n &= \\sum_{i=1}^n \\Delta\\mathbf{r}_i\\times (m_i\\mathbf{a}_i) \\\\\n &= \\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times \\mathbf{a}_i]\\;\\ldots\\text{ cross-product scalar multiplication} \\\\\n &= \\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\mathbf{a}_{\\text{tangential},i} + \\mathbf{a}_{\\text{centripetal},i} + \\mathbf{A}_\\mathbf{R})] \\\\\n &= \\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\mathbf{a}_{\\text{tangential},i} + \\mathbf{a}_{\\text{centripetal},i} + 0)] \\\\\n\\end{align}" }, { "math_id": 123, "text": "\\mathbf{A}_\\mathbf{R} = 0" }, { "math_id": 124, "text": "\\begin{align}\n\\boldsymbol{\\tau} &= \\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times \\mathbf{a}_{\\text{tangential},i} + \\Delta\\mathbf{r}_i\\times \\mathbf{a}_{\\text{centripetal},i}] \\\\\n\\boldsymbol{\\tau} &= \\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\boldsymbol{\\alpha} \\times \\Delta\\mathbf{r}_i) + \\Delta\\mathbf{r}_i\\times (\\boldsymbol{\\omega} \\times \\mathbf{v}_{\\text{tangential},i})] \\\\\n\\boldsymbol{\\tau} &= \\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\boldsymbol{\\alpha} \\times \\Delta\\mathbf{r}_i) + \\Delta\\mathbf{r}_i \\times (\\boldsymbol{\\omega} \\times (\\boldsymbol{\\omega} \\times \\Delta\\mathbf{r}_i))]\n\\end{align}" }, { "math_id": 125, "text": "\\begin{align}\n 0 &=\n \\Delta\\mathbf{r}_i\\times (\\boldsymbol\\omega \\times(\\boldsymbol\\omega\\times \\Delta\\mathbf{r}_i)) + \\boldsymbol\\omega\\times((\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)\\times\\Delta\\mathbf{r}_i) + (\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)\\times(\\Delta\\mathbf{r}_i\\times\\boldsymbol\\omega)\\\\\n &= \\Delta\\mathbf{r}_i\\times (\\boldsymbol\\omega\\times(\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)) + \\boldsymbol\\omega\\times((\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)\\times\\Delta\\mathbf{r}_i) + 
(\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)\\times -(\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)\\;\\ldots\\text{ cross-product anticommutativity} \\\\\n &= \\Delta\\mathbf{r}_i \\times (\\boldsymbol\\omega\\times(\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)) + \\boldsymbol\\omega\\times((\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)\\times\\Delta\\mathbf{r}_i) + -[(\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)\\times(\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)]\\;\\ldots\\text{ cross-product scalar multiplication} \\\\\n &= \\Delta\\mathbf{r}_i\\times (\\boldsymbol\\omega\\times(\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)) + \\boldsymbol\\omega\\times((\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)\\times\\Delta\\mathbf{r}_i) + -[0]\\;\\ldots\\text{ self cross-product} \\\\\n 0 &= \\Delta\\mathbf{r}_i\\times (\\boldsymbol\\omega\\times(\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)) + \\boldsymbol\\omega\\times((\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)\\times\\Delta\\mathbf{r}_i)\n\\end{align}" }, { "math_id": 126, "text": "\\begin{align}\n \\Delta\\mathbf{r}_i\\times (\\boldsymbol\\omega\\times(\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i))\n &= -[\\boldsymbol\\omega\\times((\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)\\times\\Delta\\mathbf{r}_i)] \\\\\n &= -[(\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)(\\boldsymbol\\omega\\cdot\\Delta\\mathbf{r}_i) - \\Delta\\mathbf{r}_i(\\boldsymbol\\omega\\cdot(\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i))]\\;\\ldots\\text{ vector triple product} \\\\\n &= -[(\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)(\\boldsymbol\\omega\\cdot\\Delta\\mathbf{r}_i) - \\Delta\\mathbf{r}_i(\\Delta\\mathbf{r}_i\\cdot(\\boldsymbol\\omega\\times\\boldsymbol\\omega))]\\;\\ldots\\text{ scalar triple product} \\\\\n &= -[(\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)(\\boldsymbol\\omega\\cdot\\Delta\\mathbf{r}_i) - \\Delta\\mathbf{r}_i(\\Delta\\mathbf{r}_i\\cdot(0))]\\;\\ldots\\text{ self cross-product} \\\\\n &= -[(\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i)(\\boldsymbol\\omega\\cdot\\Delta\\mathbf{r}_i)] \\\\\n &= -[\\boldsymbol\\omega\\times(\\Delta\\mathbf{r}_i (\\boldsymbol\\omega\\cdot\\Delta\\mathbf{r}_i))]\\;\\ldots\\text{ cross-product scalar multiplication} \\\\\n &= \\boldsymbol\\omega\\times -(\\Delta\\mathbf{r}_i (\\boldsymbol\\omega\\cdot\\Delta\\mathbf{r}_i))\\;\\ldots\\text{ cross-product scalar multiplication} \\\\\n \\Delta\\mathbf{r}_i\\times (\\boldsymbol\\omega\\times(\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i))\n &= \\boldsymbol\\omega\\times -(\\Delta\\mathbf{r}_i (\\Delta\\mathbf{r}_i \\cdot \\boldsymbol\\omega))\\;\\ldots\\text{ dot-product commutativity} \\\\\n\\end{align}\n" }, { "math_id": 127, "text": "\\begin{align}\n \\boldsymbol\\tau\n &= \\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\boldsymbol\\alpha\\times\\Delta\\mathbf{r}_i) + \\Delta\\mathbf{r}_i\\times (\\boldsymbol\\omega\\times(\\boldsymbol\\omega\\times\\Delta\\mathbf{r}_i))] \\\\\n &= \\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\boldsymbol\\alpha\\times\\Delta\\mathbf{r}_i) + \\boldsymbol\\omega\\times -(\\Delta\\mathbf{r}_i (\\Delta\\mathbf{r}_i \\cdot \\boldsymbol\\omega))] \\\\\n &= \\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\boldsymbol\\alpha\\times\\Delta\\mathbf{r}_i) + \\boldsymbol\\omega\\times \\{0 - \\Delta\\mathbf{r}_i (\\Delta\\mathbf{r}_i \\cdot \\boldsymbol\\omega)\\}] \\\\\n &= \\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\boldsymbol\\alpha\\times\\Delta\\mathbf{r}_i) + 
\\boldsymbol\\omega\\times \\{[\\boldsymbol\\omega(\\Delta\\mathbf{r}_i\\cdot\\Delta\\mathbf{r}_i) - \\boldsymbol\\omega(\\Delta\\mathbf{r}_i\\cdot\\Delta\\mathbf{r}_i)] - \\Delta\\mathbf{r}_i (\\Delta\\mathbf{r}_i \\cdot \\boldsymbol\\omega)\\}]\\;\\ldots\\;\\boldsymbol\\omega(\\Delta\\mathbf{r}_i\\cdot\\Delta\\mathbf{r}_i) - \\boldsymbol\\omega(\\Delta\\mathbf{r}_i\\cdot\\Delta\\mathbf{r}_i) = 0 \\\\\n &= \\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\boldsymbol\\alpha\\times\\Delta\\mathbf{r}_i) + \\boldsymbol\\omega\\times \\{[\\boldsymbol\\omega(\\Delta\\mathbf{r}_i\\cdot\\Delta\\mathbf{r}_i) - \\Delta\\mathbf{r}_i (\\Delta\\mathbf{r}_i \\cdot \\boldsymbol\\omega)] - \\boldsymbol\\omega(\\Delta\\mathbf{r}_i\\cdot\\Delta\\mathbf{r}_i)\\}]\\;\\ldots\\text{ addition associativity} \\\\\n\\end{align}" }, { "math_id": 128, "text": "\\begin{align}\n \\boldsymbol{\\tau}\n &= \\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\boldsymbol\\alpha\\times\\Delta\\mathbf{r}_i) + \\boldsymbol\\omega\\times \\{\\boldsymbol\\omega(\\Delta\\mathbf{r}_i\\cdot\\Delta\\mathbf{r}_i) - \\Delta\\mathbf{r}_i (\\Delta\\mathbf{r}_i \\cdot \\boldsymbol\\omega)\\} - \\boldsymbol\\omega\\times\\boldsymbol\\omega(\\Delta\\mathbf{r}_i\\cdot\\Delta\\mathbf{r}_i)]\\;\\ldots\\text{ cross-product distributivity over addition} \\\\\n &= \\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\boldsymbol\\alpha\\times\\Delta\\mathbf{r}_i) + \\boldsymbol\\omega\\times \\{\\boldsymbol\\omega(\\Delta\\mathbf{r}_i\\cdot\\Delta\\mathbf{r}_i) - \\Delta\\mathbf{r}_i (\\Delta\\mathbf{r}_i \\cdot \\boldsymbol\\omega)\\} - (\\Delta\\mathbf{r}_i\\cdot\\Delta\\mathbf{r}_i)(\\boldsymbol\\omega\\times\\boldsymbol\\omega)]\\;\\ldots\\text{ cross-product scalar multiplication} \\\\\n &= \\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\boldsymbol\\alpha\\times\\Delta\\mathbf{r}_i) + \\boldsymbol\\omega\\times \\{\\boldsymbol\\omega(\\Delta\\mathbf{r}_i\\cdot\\Delta\\mathbf{r}_i) - \\Delta\\mathbf{r}_i (\\Delta\\mathbf{r}_i \\cdot \\boldsymbol\\omega)\\} - (\\Delta\\mathbf{r}_i\\cdot\\Delta\\mathbf{r}_i)(0)]\\;\\ldots\\text{ self cross-product} \\\\\n &= \\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\boldsymbol\\alpha\\times\\Delta\\mathbf{r}_i) + \\boldsymbol\\omega\\times \\{\\boldsymbol\\omega(\\Delta\\mathbf{r}_i\\cdot\\Delta\\mathbf{r}_i) - \\Delta\\mathbf{r}_i (\\Delta\\mathbf{r}_i \\cdot \\boldsymbol\\omega)\\}] \\\\\n &= \\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\boldsymbol\\alpha\\times\\Delta\\mathbf{r}_i) + \\boldsymbol\\omega\\times \\{\\Delta\\mathbf{r}_i \\times (\\boldsymbol\\omega \\times \\Delta\\mathbf{r}_i)\\}]\\;\\ldots\\text{ vector triple product} \\\\\n &= \\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times -(\\Delta\\mathbf{r}_i \\times \\boldsymbol\\alpha) + \\boldsymbol\\omega\\times \\{\\Delta\\mathbf{r}_i \\times -(\\Delta\\mathbf{r}_i \\times \\boldsymbol\\omega)\\}]\\;\\ldots\\text{ cross-product anticommutativity} \\\\\n &= -\\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\Delta\\mathbf{r}_i \\times \\boldsymbol\\alpha) + \\boldsymbol\\omega\\times \\{\\Delta\\mathbf{r}_i \\times (\\Delta\\mathbf{r}_i \\times \\boldsymbol\\omega)\\}]\\;\\ldots\\text{ cross-product scalar multiplication} \\\\\n &= -\\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\Delta\\mathbf{r}_i \\times \\boldsymbol\\alpha)] + -\\sum_{i=1}^n m_i [\\boldsymbol\\omega\\times \\{\\Delta\\mathbf{r}_i \\times (\\Delta\\mathbf{r}_i \\times \\boldsymbol\\omega)\\}]\\;\\ldots\\text{ summation distributivity} \\\\\n\\boldsymbol\\tau &= 
-\\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\Delta\\mathbf{r}_i \\times \\boldsymbol\\alpha)] + \\boldsymbol\\omega\\times -\\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i \\times (\\Delta\\mathbf{r}_i \\times \\boldsymbol\\omega)]\\;\\ldots\\;\\boldsymbol\\omega\\text{ is not characteristic of particle } P_i\n\\end{align}" }, { "math_id": 129, "text": "\\mathbf{u}" }, { "math_id": 130, "text": "\\begin{align}\n -\\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i\\times (\\Delta\\mathbf{r}_i \\times \\mathbf{u})]\n &= -\\sum_{i=1}^n m_i \\left(\\begin{bmatrix}\n 0 & -\\Delta r_{3,i} & \\Delta r_{2,i} \\\\\n \\Delta r_{3,i} & 0 & -\\Delta r_{1,i} \\\\\n -\\Delta r_{2,i} & \\Delta r_{1,i} & 0\n \\end{bmatrix} \\left(\\begin{bmatrix}\n 0 & -\\Delta r_{3,i} & \\Delta r_{2,i} \\\\\n \\Delta r_{3,i} & 0 & -\\Delta r_{1,i} \\\\\n -\\Delta r_{2,i} & \\Delta r_{1,i} & 0\n \\end{bmatrix} \\begin{bmatrix} u_1 \\\\ u_2 \\\\ u_3 \\end{bmatrix}\n \\right)\\right)\\;\\ldots\\text{ cross-product as matrix multiplication} \\\\[6pt]\n &= -\\sum_{i=1}^n m_i \\left(\\begin{bmatrix}\n 0 & -\\Delta r_{3,i} & \\Delta r_{2,i} \\\\\n \\Delta r_{3,i} & 0 & -\\Delta r_{1,i} \\\\\n -\\Delta r_{2,i} & \\Delta r_{1,i} & 0\n \\end{bmatrix} \\begin{bmatrix}\n -\\Delta r_{3,i}\\,u_2 + \\Delta r_{2,i}\\,u_3 \\\\\n +\\Delta r_{3,i}\\,u_1 - \\Delta r_{1,i}\\,u_3 \\\\\n -\\Delta r_{2,i}\\,u_1 + \\Delta r_{1,i}\\,u_2\n \\end{bmatrix}\\right) \\\\[6pt]\n &= -\\sum_{i=1}^n m_i \\begin{bmatrix}\n -\\Delta r_{3,i}(+\\Delta r_{3,i}\\,u_1 - \\Delta r_{1,i}\\,u_3) + \\Delta r_{2,i}(-\\Delta r_{2,i}\\,u_1 + \\Delta r_{1,i}\\,u_2) \\\\\n +\\Delta r_{3,i}(-\\Delta r_{3,i}\\,u_2 + \\Delta r_{2,i}\\,u_3) - \\Delta r_{1,i}(-\\Delta r_{2,i}\\,u_1 + \\Delta r_{1,i}\\,u_2) \\\\\n -\\Delta r_{2,i}(-\\Delta r_{3,i}\\,u_2 + \\Delta r_{2,i}\\,u_3) + \\Delta r_{1,i}(+\\Delta r_{3,i}\\,u_1 - \\Delta r_{1,i}\\,u_3)\n \\end{bmatrix} \\\\[6pt]\n &= -\\sum_{i=1}^n m_i \\begin{bmatrix}\n -\\Delta r_{3,i}^2\\,u_1 + \\Delta r_{1,i}\\Delta r_{3,i}\\,u_3 - \\Delta r_{2,i}^2\\,u_1 + \\Delta r_{1,i}\\Delta r_{2,i}\\,u_2 \\\\\n -\\Delta r_{3,i}^2\\,u_2 + \\Delta r_{2,i}\\Delta r_{3,i}\\,u_3 + \\Delta r_{2,i}\\Delta r_{1,i}\\,u_1 - \\Delta r_{1,i}^2\\,u_2 \\\\\n +\\Delta r_{3,i}\\Delta r_{2,i}\\,u_2 - \\Delta r_{2,i}^2\\,u_3 + \\Delta r_{3,i}\\Delta r_{1,i}\\,u_1 - \\Delta r_{1,i}^2\\,u_3\n \\end{bmatrix} \\\\[6pt]\n &= -\\sum_{i=1}^n m_i \\begin{bmatrix}\n -(\\Delta r_{2,i}^2 + \\Delta r_{3,i}^2)\\,u_1 + \\Delta r_{1,i}\\Delta r_{2,i}\\,u_2 + \\Delta r_{1,i}\\Delta r_{3,i}\\,u_3 \\\\\n +\\Delta r_{2,i}\\Delta r_{1,i}\\,u_1 - (\\Delta r_{1,i}^2 + \\Delta r_{3,i}^2)\\,u_2 + \\Delta r_{2,i}\\Delta r_{3,i}\\,u_3 \\\\\n +\\Delta r_{3,i}\\Delta r_{1,i}\\,u_1 + \\Delta r_{3,i}\\Delta r_{2,i}\\,u_2 - (\\Delta r_{1,i}^2 + \\Delta r_{2,i}^2)\\,u_3\n \\end{bmatrix} \\\\[6pt]\n &= -\\sum_{i=1}^n m_i \\begin{bmatrix}\n -(\\Delta r_{2,i}^2 + \\Delta r_{3,i}^2) & \\Delta r_{1,i}\\Delta r_{2,i} & \\Delta r_{1,i}\\Delta r_{3,i} \\\\\n \\Delta r_{2,i}\\Delta r_{1,i} & -(\\Delta r_{1,i}^2 + \\Delta r_{3,i}^2) & \\Delta r_{2,i}\\Delta r_{3,i} \\\\\n \\Delta r_{3,i}\\Delta r_{1,i} & \\Delta r_{3,i}\\Delta r_{2,i} & -(\\Delta r_{1,i}^2 + \\Delta r_{2,i}^2)\n \\end{bmatrix} \\begin{bmatrix} u_1 \\\\ u_2 \\\\ u_3 \\end{bmatrix} \\\\\n &= -\\sum_{i=1}^n m_i [\\Delta r_i]^2 \\mathbf{u} \\\\[6pt]\n -\\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i \\times (\\Delta\\mathbf{r}_i \\times \\mathbf{u})]\n &= \\left(-\\sum_{i=1}^n m_i [\\Delta r_i]^2\\right) \\mathbf{u}\\;\\ldots\\;\\mathbf{u}\\text{ is not 
characteristic of } P_i\n\\end{align}" }, { "math_id": 131, "text": "\\begin{align}\n \\boldsymbol{\\tau}\n &= -\\sum_{i=1}^n m_i [\\Delta\\mathbf{r}_i \\times (\\Delta\\mathbf{r}_i \\times \\boldsymbol{\\alpha})] + \\boldsymbol{\\omega} \\times -\\sum_{i=1}^n m_i \\Delta\\mathbf{r}_i \\times (\\Delta\\mathbf{r}_i \\times \\boldsymbol{\\omega})] \\\\\n &= \\left(-\\sum_{i=1}^n m_i [\\Delta r_i]^2\\right) \\boldsymbol{\\alpha} + \\boldsymbol{\\omega} \\times \\left(-\\sum_{i=1}^n m_i [\\Delta r_i]^2\\right) \\boldsymbol{\\omega}\n\\end{align}" }, { "math_id": 132, "text": "\\boldsymbol{\\tau} = \\mathbf{I}_\\mathbf{C} \\boldsymbol{\\alpha} + \\boldsymbol{\\omega} \\times \\mathbf{I}_\\mathbf{C} \\boldsymbol{\\omega}," }, { "math_id": 133, "text": "\\mathbf{I_R}" }, { "math_id": 134, "text": "\\mathbf{I}_\\mathbf{R} = -\\sum_{i=1}^n m_i\\left[\\mathbf{r}_i - \\mathbf{R}\\right]^2." }, { "math_id": 135, "text": "\\mathbf{R} = (\\mathbf{R} - \\mathbf{C}) + \\mathbf{C} = \\mathbf{d} + \\mathbf{C}," }, { "math_id": 136, "text": "\\mathbf{d}" }, { "math_id": 137, "text": "\n \\mathbf{I}_\\mathbf{R}\n = -\\sum_{i=1}^n m_i[\\mathbf{r}_i - \\left(\\mathbf{C} + \\mathbf{d}\\right)]^2\n = -\\sum_{i=1}^n m_i[\\left(\\mathbf{r}_i - \\mathbf{C}\\right) - \\mathbf{d}]^2.\n" }, { "math_id": 138, "text": "\n \\mathbf{I}_\\mathbf{R} =\n - \\left(\\sum_{i=1}^n m_i [\\mathbf{r}_i - \\mathbf{C}]^2\\right)\n + \\left(\\sum_{i=1}^n m_i [\\mathbf{r}_i - \\mathbf{C}]\\right)[\\mathbf{d}]\n + [\\mathbf{d}]\\left(\\sum_{i=1}^n m_i [\\mathbf{r}_i - \\mathbf{C}]\\right) \n - \\left(\\sum_{i=1}^n m_i\\right) [\\mathbf{d}]^2.\n" }, { "math_id": 139, "text": "[\\mathbf{d}]" }, { "math_id": 140, "text": "\\mathbf{I}_\\mathbf{R} = \\mathbf{I}_\\mathbf{C} - M[\\mathbf{d}]^2," }, { "math_id": 141, "text": "-m\\left[\\mathbf{r}\\right]^2" }, { "math_id": 142, "text": "m\\left[\\mathbf{r}\\right]^\\mathsf{T} \\left[\\mathbf{r}\\right]" }, { "math_id": 143, "text": "[\\mathbf{r}]" }, { "math_id": 144, "text": "I_L" }, { "math_id": 145, "text": "I_L\n = \\mathbf{\\hat{k}} \\cdot \\left(-\\sum_{i=1}^N m_i \\left[\\Delta\\mathbf{r}_i\\right]^2 \\right) \\mathbf{\\hat{k}}\n = \\mathbf{\\hat{k}} \\cdot \\mathbf{I}_\\mathbf{R} \\mathbf{\\hat{k}}\n = \\mathbf{\\hat{k}}^\\mathsf{T} \\mathbf{I}_\\mathbf{R} \\mathbf{\\hat{k}}," }, { "math_id": 146, "text": "\\Delta\\mathbf{r}_i = \\mathbf{r}_i - \\mathbf{R}" }, { "math_id": 147, "text": "\\mathbf{L}(t) = \\mathbf{R} + t\\mathbf{\\hat{k}}" }, { "math_id": 148, "text": "\\Delta\\mathbf{r}_i" }, { "math_id": 149, "text": "\n \\Delta\\mathbf{r}_i^\\perp =\n \\Delta\\mathbf{r}_i - \\left(\\mathbf{\\hat{k}} \\cdot \\Delta\\mathbf{r}_i\\right)\\mathbf{\\hat{k}} =\n \\left(\\mathbf{E} - \\mathbf{\\hat{k}}\\mathbf{\\hat{k}}^\\mathsf{T}\\right) \\Delta\\mathbf{r}_i,\n" }, { "math_id": 150, "text": "\\mathbf{E}" }, { "math_id": 151, "text": "\\mathbf{\\hat{k}}\\mathbf{\\hat{k}}^\\mathsf{T}" }, { "math_id": 152, "text": "\\left[\\mathbf{\\hat{k}}\\right]" }, { "math_id": 153, "text": "\\left[\\mathbf{\\hat{k}}\\right]\\mathbf{y} = \\mathbf{\\hat{k}} \\times \\mathbf{y}" }, { "math_id": 154, "text": "\n -\\left[\\mathbf{\\hat{k}}\\right]^2 \\equiv\n \\left|\\mathbf{\\hat{k}}\\right|^2\\left(\\mathbf{E} - \\mathbf{\\hat{k}}\\mathbf{\\hat{k}}^\\mathsf{T}\\right) =\n \\mathbf{E} - \\mathbf{\\hat{k}}\\mathbf{\\hat{k}}^\\mathsf{T},\n" }, { "math_id": 155, "text": "\\begin{align}\n \\left|\\Delta\\mathbf{r}_i^\\perp\\right|^2\n &= \\left(-\\left[\\mathbf{\\hat{k}}\\right]^2 \\Delta\\mathbf{r}_i\\right) 
\\cdot \\left(-\\left[\\mathbf{\\hat{k}}\\right]^2 \\Delta\\mathbf{r}_i\\right) \\\\\n &= \\left(\\mathbf{\\hat{k}} \\times \\left(\\mathbf{\\hat{k}} \\times \\Delta\\mathbf{r}_i\\right)\\right) \\cdot\n \\left(\\mathbf{\\hat{k}} \\times \\left(\\mathbf{\\hat{k}} \\times \\Delta\\mathbf{r}_i\\right)\\right)\n\\end{align}" }, { "math_id": 156, "text": "\n \\left(\\mathbf{\\hat{k}} \\times \\left(\\mathbf{\\hat{k}} \\times \\Delta\\mathbf{r}_i\\right)\\right) \\cdot\n \\left(\\mathbf{\\hat{k}} \\times \\left(\\mathbf{\\hat{k}} \\times \\Delta\\mathbf{r}_i\\right)\\right) \\equiv\n \\left(\\left(\\mathbf{\\hat{k}} \\times \\left(\\mathbf{\\hat{k}} \\times \\Delta\\mathbf{r}_i\\right)\\right) \\times \\mathbf{\\hat{k}}\\right) \\cdot\n \\left(\\mathbf{\\hat{k}} \\times \\Delta\\mathbf{r}_i\\right),\n" }, { "math_id": 157, "text": "\\begin{align}\n &\\left(\\mathbf{\\hat{k}} \\times \\left(\\mathbf{\\hat{k}} \\times \\Delta\\mathbf{r}_i\\right)\\right) \\cdot\n \\left(\\mathbf{\\hat{k}} \\times \\left(\\mathbf{\\hat{k}} \\times \\Delta\\mathbf{r}_i\\right)\\right) \\\\\n ={} &\\left(\\left(\\mathbf{\\hat{k}} \\times \\left(\\mathbf{\\hat{k}} \\times \\Delta\\mathbf{r}_i\\right)\\right) \\times \\mathbf{\\hat{k}}\\right) \\cdot\n \\left(\\mathbf{\\hat{k}} \\times \\Delta\\mathbf{r}_i\\right) \\\\\n ={} &\\left(\\mathbf{\\hat{k}} \\times \\Delta\\mathbf{r}_i\\right) \\cdot \\left(-\\Delta\\mathbf{r}_i \\times \\mathbf{\\hat{k}}\\right) \\\\\n ={} &-\\mathbf{\\hat{k}} \\cdot \\left(\\Delta\\mathbf{r}_i \\times \\Delta\\mathbf{r}_i \\times \\mathbf{\\hat{k}}\\right) \\\\\n ={} &-\\mathbf{\\hat{k}} \\cdot \\left[\\Delta\\mathbf{r}_i\\right]^2 \\mathbf{\\hat{k}}.\n\\end{align}" }, { "math_id": 158, "text": "\\begin{align}\n I_L &= \\sum_{i=1}^N m_i \\left|\\Delta\\mathbf{r}_i^\\perp\\right|^2 \\\\\n &= -\\sum_{i=1}^N m_i \\mathbf{\\hat{k}} \\cdot \\left[\\Delta\\mathbf{r}_i\\right]^2\\mathbf{\\hat{k}}\n = \\mathbf{\\hat{k}} \\cdot \\left(-\\sum_{i=1}^N m_i \\left[\\Delta\\mathbf{r}_i\\right]^2 \\right) \\mathbf{\\hat{k}} \\\\\n &= \\mathbf{\\hat{k}} \\cdot \\mathbf{I}_\\mathbf{R} \\mathbf{\\hat{k}}\n = \\mathbf{\\hat{k}}^\\mathsf{T} \\mathbf{I}_\\mathbf{R} \\mathbf{\\hat{k}},\n\\end{align}" }, { "math_id": 159, "text": "m_{k}" }, { "math_id": 160, "text": "\n\\mathbf{I} = \\begin{bmatrix}\nI_{11} & I_{12} & I_{13} \\\\\nI_{21} & I_{22} & I_{23} \\\\\nI_{31} & I_{32} & I_{33}\n\\end{bmatrix}.\n" }, { "math_id": 161, "text": "I_{ij} \\ \\stackrel{\\mathrm{def}}{=}\\ \\sum_{k=1}^{N} m_{k}\\left(\\left\\|\\mathbf{r}_k\\right\\|^{2}\\delta_{ij} - x_{i}^{(k)}x_{j}^{(k)}\\right)" }, { "math_id": 162, "text": "i" }, { "math_id": 163, "text": "j" }, { "math_id": 164, "text": "\\mathbf{r}_k = \\left(x_1^{(k)}, x_2^{(k)}, x_3^{(k)}\\right)" }, { "math_id": 165, "text": "m_k" }, { "math_id": 166, "text": "\\delta_{ij}" }, { "math_id": 167, "text": "\\mathbf{I}" }, { "math_id": 168, "text": "\\begin{align}\n I_{xx} \\ &\\stackrel{\\mathrm{def}}{=}\\ \\sum_{k=1}^{N} m_{k} \\left(y_{k}^{2} + z_{k}^{2}\\right), \\\\\n I_{yy} \\ &\\stackrel{\\mathrm{def}}{=}\\ \\sum_{k=1}^{N} m_{k} \\left(x_{k}^{2} + z_{k}^{2}\\right), \\\\\n I_{zz} \\ &\\stackrel{\\mathrm{def}}{=}\\ \\sum_{k=1}^{N} m_{k} \\left(x_{k}^{2} + y_{k}^{2}\\right),\n\\end{align}" }, { "math_id": 169, "text": "\\begin{align}\n I_{xy} = I_{yx} \\ &\\stackrel{\\mathrm{def}}{=}\\ -\\sum_{k=1}^{N} m_{k} x_{k} y_{k}, \\\\\n I_{xz} = I_{zx} \\ &\\stackrel{\\mathrm{def}}{=}\\ -\\sum_{k=1}^{N} m_{k} x_{k} z_{k}, \\\\\n I_{yz} = I_{zy} \\ 
&\\stackrel{\\mathrm{def}}{=}\\ -\\sum_{k=1}^{N} m_{k} y_{k} z_{k}.\n\\end{align}" }, { "math_id": 170, "text": "I_{xy}" }, { "math_id": 171, "text": "\\mathbf{I} = \\iiint_V \\rho(x,y,z) \\left( \\|\\mathbf{r}\\|^2 \\mathbf{E}_{3} - \\mathbf{r}\\otimes \\mathbf{r}\\right)\\, dx \\, dy \\, dz," }, { "math_id": 172, "text": "\\mathbf{r}\\otimes \\mathbf{r}" }, { "math_id": 173, "text": "[\\mathbf r]\\mathbf x = \\mathbf r\\times\\mathbf x" }, { "math_id": 174, "text": "\\mathbf{I} = \\iiint_V \\rho(\\mathbf{r}) [\\mathbf r]^\\textsf{T}[\\mathbf r] \\, dV = -\\iiint_{Q} \\rho(\\mathbf{r}) [\\mathbf r]^2 \\, dV " }, { "math_id": 175, "text": "\\mathbf{n}" }, { "math_id": 176, "text": "I_n = \\mathbf{n}\\cdot\\mathbf{I}\\cdot\\mathbf{n}," }, { "math_id": 177, "text": "I_{12}" }, { "math_id": 178, "text": "I_{12} = \\mathbf{e}_1\\cdot\\mathbf{I}\\cdot\\mathbf{e}_2," }, { "math_id": 179, "text": "\\mathbf{I} = \\begin{bmatrix}\n I_{11} & I_{12} & I_{13} \\\\[1.8ex]\n I_{21} & I_{22} & I_{23} \\\\[1.8ex]\n I_{31} & I_{32} & I_{33}\n \\end{bmatrix} = \\begin{bmatrix}\n I_{xx} & I_{xy} & I_{xz} \\\\[1.8ex]\n I_{yx} & I_{yy} & I_{yz} \\\\[1.8ex]\n I_{zx} & I_{zy} & I_{zz}\n \\end{bmatrix} = \\begin{bmatrix}\n \\sum_{k=1}^{N} m_{k} \\left(y_{k}^2 + z_{k}^2\\right) &\n -\\sum_{k=1}^{N} m_{k} x_{k} y_{k} &\n -\\sum_{k=1}^{N} m_{k} x_{k} z_{k} \\\\[1ex]\n -\\sum_{k=1}^{N} m_{k} x_{k} y_{k} &\n \\sum_{k=1}^{N} m_{k} \\left(x_{k}^2 + z_{k}^2\\right) &\n -\\sum_{k=1}^{N} m_{k} y_{k} z_{k} \\\\[1ex]\n -\\sum_{k=1}^{N} m_{k} x_{k} z_{k} &\n -\\sum_{k=1}^{N} m_{k} y_{k} z_{k} &\n \\sum_{k=1}^{N} m_{k} \\left(x_{k}^2 + y_{k}^2\\right)\n \\end{bmatrix}.\n" }, { "math_id": 180, "text": "\\begin{align}\n I_{xy} = I_{yx} \\ &\\stackrel{\\mathrm{def}}{=}\\ \\sum_{k=1}^{N} m_{k} x_{k} y_{k}, \\\\\n I_{xz} = I_{zx} \\ &\\stackrel{\\mathrm{def}}{=}\\ \\sum_{k=1}^{N} m_{k} x_{k} z_{k}, \\\\\n I_{yz} = I_{zy} \\ &\\stackrel{\\mathrm{def}}{=}\\ \\sum_{k=1}^{N} m_{k} y_{k} z_{k}, \\\\[3pt]\n \\mathbf{I} = \\begin{bmatrix}\n I_{11} & I_{12} & I_{13} \\\\[1.8ex]\n I_{21} & I_{22} & I_{23} \\\\[1.8ex]\n I_{31} & I_{32} & I_{33}\n \\end{bmatrix} &= \\begin{bmatrix}\n I_{xx} & -I_{xy} & -I_{xz} \\\\[1.8ex]\n -I_{yx} & I_{yy} & -I_{yz} \\\\[1.8ex]\n -I_{zx} & -I_{zy} & I_{zz}\n \\end{bmatrix} = \\begin{bmatrix}\n \\sum_{k=1}^{N} m_{k} \\left(y_{k}^{2} + z_{k}^{2}\\right) & -\\sum_{k=1}^{N} m_{k} x_{k} y_{k} & -\\sum_{k=1}^{N} m_{k} x_{k} z_{k} \\\\[1ex]\n -\\sum_{k=1}^{N} m_{k} x_{k} y_{k} & \\sum_{k=1}^{N} m_{k} \\left(x_{k}^{2} + z_{k}^{2}\\right) & -\\sum_{k=1}^{N} m_{k} y_{k} z_{k} \\\\[1ex]\n -\\sum_{k=1}^{N} m_{k} x_{k} z_{k} & -\\sum_{k=1}^{N} m_{k} y_{k} z_{k} & \\sum_{k=1}^{N} m_{k} \\left(x_{k}^{2} + y_{k}^{2}\\right)\n \\end{bmatrix}.\n\\end{align}" }, { "math_id": 181, "text": "(I_{xx}, I_{yy}, I_{zz}, I_{xy}, I_{xz}, I_{yz})" }, { "math_id": 182, "text": "(I_{12} = I_{xy}, I_{13} = I_{xz}, I_{23} = I_{yz})" }, { "math_id": 183, "text": "(I_{12} = -I_{xy}, I_{13} = -I_{xz}, I_{23} = -I_{yz})" }, { "math_id": 184, "text": "\\mathbf{x}" }, { "math_id": 185, "text": "\\mathbf{\\hat{n}}" }, { "math_id": 186, "text": "\\left|\\mathbf{x} - \\left(\\mathbf{x} \\cdot \\mathbf{\\hat{n}}\\right) \\mathbf{\\hat{n}}\\right|" }, { "math_id": 187, "text": "I = mr^2 =\n m\\left(\\mathbf{x} - \\left(\\mathbf{x}\\cdot\\mathbf{\\hat{n}}\\right) \\mathbf{\\hat{n}}\\right)\\cdot\\left(\\mathbf{x} - \\left(\\mathbf{x}\\cdot\\mathbf{\\hat{n}}\\right) \\mathbf{\\hat{n}}\\right) =\n m\\left(\\mathbf{x}^2 - 
2\\mathbf{x}\\left(\\mathbf{x}\\cdot\\mathbf{\\hat{n}}\\right)\\mathbf{\\hat{n}} + \\left(\\mathbf{x}\\cdot\\mathbf{\\hat{n}}\\right)^2\\mathbf{\\hat{n}}^2\\right) =\n m\\left(\\mathbf{x}^2 - \\left(\\mathbf{x}\\cdot\\mathbf{\\hat{n}}\\right)^2\\right).\n" }, { "math_id": 188, "text": "I =\n m\\left(\\mathbf{x}^\\textsf{T}\\mathbf{x} - \\mathbf{\\hat{n}}^\\textsf{T}\\mathbf{x}\\mathbf{x}^\\textsf{T}\\mathbf{\\hat{n}}\\right) =\n m\\cdot\\mathbf{\\hat{n}}^\\textsf{T}\\left(\\mathbf{x}^\\textsf{T}\\mathbf{x}\\cdot\\mathbf{E_3} - \\mathbf{x}\\mathbf{x}^\\textsf{T}\\right)\\mathbf{\\hat{n}},\n" }, { "math_id": 189, "text": "I =\n m \\begin{bmatrix} n_1 & n_2 & n_3 \\end{bmatrix} \\begin{bmatrix}\n y^2 + z^2 & -xy & -xz \\\\[0.5ex]\n -yx & x^2 + z^2 & -yz \\\\[0.5ex]\n -zx & -zy & x^2 + y^2\n \\end{bmatrix} \\begin{bmatrix}\n n_1 \\\\[0.7ex]\n n_2 \\\\[0.7ex]\n n_3\n \\end{bmatrix}.\n" }, { "math_id": 190, "text": "\\mathbf{I}_0" }, { "math_id": 191, "text": "\\mathbf{I} = \\mathbf{I}_0 + m[(\\mathbf{R}\\cdot\\mathbf{R})\\mathbf{E}_3 - \\mathbf{R}\\otimes\\mathbf{R}]" }, { "math_id": 192, "text": "\\otimes" }, { "math_id": 193, "text": "\\mathbf{I} = \\mathbf{R}\\mathbf{I_0}\\mathbf{R}^\\textsf{T}" }, { "math_id": 194, "text": "\\mathbf{I}_\\mathbf{C}^B" }, { "math_id": 195, "text": "\\mathbf{x} = \\mathbf{A}\\mathbf{y}," }, { "math_id": 196, "text": "\\mathbf{y}" }, { "math_id": 197, "text": "\\mathbf{I}_\\mathbf{C} = \\mathbf{A} \\mathbf{I}_\\mathbf{C}^B \\mathbf{A}^\\mathsf{T}." }, { "math_id": 198, "text": "\\mathbf{Q}" }, { "math_id": 199, "text": "\\boldsymbol{\\Lambda}" }, { "math_id": 200, "text": "\\mathbf{I}_\\mathbf{C}^B = \\mathbf{Q}\\boldsymbol{\\Lambda}\\mathbf{Q}^\\mathsf{T}," }, { "math_id": 201, "text": "\\boldsymbol{\\Lambda} = \\begin{bmatrix}\n I_1 & 0 & 0 \\\\\n 0 & I_2 & 0 \\\\\n 0 & 0 & I_3\n\\end{bmatrix}." }, { "math_id": 202, "text": "I_1" }, { "math_id": 203, "text": "I_2" }, { "math_id": 204, "text": "I_3" }, { "math_id": 205, "text": "m > 2" }, { "math_id": 206, "text": "\\mathbf{x}^\\mathsf{T}\\boldsymbol{\\Lambda}\\mathbf{x} = 1," }, { "math_id": 207, "text": "I_1x^2 + I_2y^2 + I_3z^2 =1," }, { "math_id": 208, "text": " \\left(\\frac{x}{1/\\sqrt{I_1}}\\right)^2 + \\left(\\frac{y}{1/\\sqrt{I_2}}\\right)^2 + \\left(\\frac{z}{1/\\sqrt{I_3}}\\right)^2 = 1," }, { "math_id": 209, "text": "a = \\frac{1}{\\sqrt{I_1}}, \\quad b=\\frac{1}{\\sqrt{I_2}}, \\quad c=\\frac{1}{\\sqrt{I_3}}." }, { "math_id": 210, "text": "\\mathbf{x} = \\|\\mathbf{x}\\|\\mathbf{n}" }, { "math_id": 211, "text": "I_\\mathbf{n}" }, { "math_id": 212, "text": "\\mathbf{x}^\\mathsf{T}\\boldsymbol{\\Lambda}\\mathbf{x} = \\|\\mathbf{x}\\|^2\\mathbf{n}^\\mathsf{T}\\boldsymbol{\\Lambda}\\mathbf{n} = \\|\\mathbf{x}\\|^2 I_\\mathbf{n} = 1. " }, { "math_id": 213, "text": " \\|\\mathbf{x}\\| = \\frac{1}{\\sqrt{I_\\mathbf{n}}}." } ]
https://en.wikipedia.org/wiki?curid=157700
15772894
Dual linear program
Mathematical optimization concept The dual of a given linear program (LP) is another LP that is derived from the original (the primal) LP in the following schematic way: each constraint in the primal LP becomes a variable in the dual LP, each variable in the primal LP becomes a constraint in the dual LP, and the direction of optimization is reversed – a maximum in the primal becomes a minimum in the dual and vice versa. The weak duality theorem states that the objective value of the dual LP at any feasible solution is always a bound on the objective of the primal LP at any feasible solution (upper or lower bound, depending on whether it is a maximization or minimization problem). In fact, this bounding property holds for the optimal values of the dual and primal LPs. The strong duality theorem states that, moreover, if the primal has an optimal solution then the dual has an optimal solution too, "and the two optima are equal". These theorems belong to a larger class of duality theorems in optimization. The strong duality theorem is one of the cases in which the duality gap (the gap between the optimum of the primal and the optimum of the dual) is 0. Form of the dual LP. Suppose we have the linear program: Maximize cTx subject to "A"x ≤ b, x ≥ 0. We would like to construct an upper bound on the solution. So we create a linear combination of the constraints, with positive coefficients, such that the coefficients of x in the constraints are at least cT. This linear combination gives us an upper bound on the objective. The variables y of the dual LP are the coefficients of this linear combination. The dual LP tries to find such coefficients that "minimize" the resulting upper bound. This gives the following LP: Minimize bTy subject to "A"Ty ≥ c, y ≥ 0. This LP is called the "dual of" the original LP. Interpretation. The duality theorem has an economic interpretation. If we interpret the primal LP as a classical "resource allocation" problem, its dual LP can be interpreted as a "resource valuation" problem. Consider a factory that is planning its production of goods. Let formula_0 be its production schedule (make formula_1 amount of good formula_2), let formula_3 be the list of market prices (a unit of good formula_2 can sell for formula_4). The constraints it has are formula_5 (it cannot produce negative goods) and raw-material constraints. Let formula_6 be the raw material it has available, and let formula_7 be the matrix of material costs (producing one unit of good formula_2 requires formula_8 units of raw material formula_9). Then, the constrained revenue maximization is the primal LP: Maximize cTx subject to "A"x ≤ b, x ≥ 0. Now consider another factory that has no raw material, and wishes to purchase the entire stock of raw material from the previous factory. It offers a price vector of formula_10 (a unit of raw material formula_2 for formula_11). For the offer to be accepted, it should be the case that formula_12, since otherwise the factory could earn more cash by producing a certain product than by selling off the raw material used to produce the good. It should also be the case that formula_13, since the factory would not sell any raw material at a negative price. Then, the second factory's optimization problem is the dual LP: Minimize bTy subject to "A"Ty ≥ c, y ≥ 0. The weak duality theorem states that the duality gap between the two LP problems is at least zero. Economically, it means that if the first factory is given an offer to buy its entire stock of raw material, at a per-item price of y, such that "A"Ty ≥ c, y ≥ 0, then it should take the offer. It will make at least as much revenue as it could by producing finished goods. The strong duality theorem further states that the duality gap is zero.
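To make the resource-allocation/resource-valuation picture concrete, here is a small numerical sketch (an addition, not part of the original article) that solves a made-up factory instance and its dual with SciPy's linprog. The prices, material requirements and stock levels are invented purely for illustration; since linprog minimizes, the primal objective is negated and the "≥" dual constraints are rewritten as "≤" constraints.

import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0])            # market price of each good
A = np.array([[1.0, 2.0],           # raw material needed per unit of each good
              [3.0, 1.0]])
b = np.array([10.0, 15.0])          # stock of each raw material

# Primal: maximize c^T x subject to A x <= b, x >= 0 (negate c because linprog minimizes).
primal = linprog(-c, A_ub=A, b_ub=b)

# Dual: minimize b^T y subject to A^T y >= c, y >= 0, rewritten as -A^T y <= -c for linprog.
dual = linprog(b, A_ub=-A.T, b_ub=-c)

print(-primal.fun, dual.fun)        # both 27.0: zero duality gap, as strong duality predicts
print(dual.x)                       # [2.4, 0.2]: per-unit prices of the two raw materials

The dual variables printed at the end are precisely the per-unit material prices that the next paragraph interprets as equilibrium (shadow) prices.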
With strong duality, the dual solution formula_14 is, economically speaking, the "equilibrium price" (see shadow price) for the raw material that a factory with production matrix formula_15 and raw material stock formula_6 would accept for raw material, given the market price for finished goods formula_16. (Note that formula_14 may not be unique, so the equilibrium price may not be fully determined by formula_15, formula_6, and formula_16.) To see why, suppose the raw material prices formula_13 are such that formula_17 for some formula_2; then the factory would purchase more raw material to produce more of good formula_2, since the prices are "too low". Conversely, if the raw material prices satisfy formula_18 but do not minimize formula_19, then the factory would make more money by selling its raw material than by producing goods, since the prices are "too high". At the equilibrium price formula_14, the factory cannot increase its profit by purchasing or selling off raw material. The duality theorem has a physical interpretation too. Constructing the dual LP. In general, given a primal LP, the following algorithm can be used to construct its dual LP. The primal LP is defined by: a set of "n" variables formula_20; for each variable formula_1, a sign constraint – it should be either non-negative (formula_21), non-positive (formula_22), or unconstrained (formula_23); an objective function formula_24; and a list of "m" constraints, where each constraint formula_9 is formula_25, in which the symbol before formula_26 can be one of formula_27, formula_28 or formula_29. The dual LP is constructed as follows: each primal constraint becomes a dual variable, so there are "m" variables formula_30, and the sign constraint of each dual variable is "opposite" to the sign of its primal constraint – "formula_31" gives formula_32, "formula_33" gives formula_34, and "formula_35" gives formula_36; the dual objective function is formula_37; and each primal variable becomes a dual constraint, so there are "n" constraints of the form formula_38, in which the symbol before formula_4 mirrors the sign constraint of the corresponding primal variable – "formula_39" gives "formula_40", "formula_41" gives "formula_42", and "formula_43" gives "formula_44". From this algorithm, it is easy to see that the dual of the dual is the primal. Vector formulations. If all constraints have the same sign, it is possible to present the above recipe in a shorter way using matrices and vectors. The following table shows the relation between various kinds of primals and duals. The duality theorems. Below, suppose the primal LP is "maximize cTx subject to [constraints]" and the dual LP is "minimize bTy subject to [constraints]". Weak duality. The weak duality theorem says that, for each feasible solution x of the primal and each feasible solution y of the dual: cTx ≤ bTy. In other words, the objective value in each feasible solution of the dual is an upper bound on the objective value of the primal, and the objective value in each feasible solution of the primal is a lower bound on the objective value of the dual. Here is a proof for the primal LP "Maximize cTx subject to "A"x ≤ b, x ≥ 0": since x ≥ 0 and "A"Ty ≥ c, we have cTx ≤ ("A"Ty)Tx = yT("A"x), and since y ≥ 0 and "A"x ≤ b, this is at most yTb = bTy. Weak duality implies: maxx cTx ≤ miny bTy. In particular, if the primal is unbounded (from above) then the dual has no feasible solution, and if the dual is unbounded (from below) then the primal has no feasible solution. Strong duality. The strong duality theorem says that if one of the two problems has an optimal solution, so does the other one and that the bounds given by the weak duality theorem are tight, i.e.: maxx cTx = miny bTy. The strong duality theorem is harder to prove; the proofs usually use the weak duality theorem as a sub-routine. One proof uses the simplex algorithm and relies on the proof that, with a suitable pivot rule, it provides a correct solution. The proof establishes that, once the simplex algorithm finishes with a solution to the primal LP, it is possible to read off a solution to the dual LP from the final tableau. So, by running the simplex algorithm, we obtain solutions to both the primal and the dual simultaneously. Another proof uses the Farkas lemma. Theoretical implications. 1. The weak duality theorem implies that finding a "single" feasible solution is as hard as finding an "optimal" feasible solution. Suppose we have an oracle that, given an LP, finds an arbitrary feasible solution (if one exists). Given the LP "Maximize cTx subject to "A"x ≤ b, x ≥ 0", we can construct another LP by combining this LP with its dual.
The combined LP has both x and y as variables:Maximize 1 subject to "Ax ≤ b, "A"Ty ≥ c, cTx ≥ bTy, x ≥ 0, y ≥ 0If the combined LP has a feasible solution (x,y), then by weak duality, cTx = bTy. So x must be a maximal solution of the primal LP and y must be a minimal solution of the dual LP. If the combined LP has no feasible solution, then the primal LP has no feasible solution either. 2. The strong duality theorem provides a "good characterization" of the optimal value of an LP in that it allows us to easily prove that some value "t" is the optimum of some LP. The proof proceeds in two steps: Examples. Tiny example. Consider the primal LP, with two variables and one constraint: formula_45 Applying the recipe above gives the following dual LP, with one variable and two constraints: formula_46 It is easy to see that the maximum of the primal LP is attained when "x"1 is minimized to its lower bound (0) and "x"2 is maximized to its upper bound under the constraint (7/6). The maximum is 4 ⋅ 7/6 = 14/3. Similarly, the minimum of the dual LP is attained when "y"1 is minimized to its lower bound under the constraints: the first constraint gives a lower bound of 3/5 while the second constraint gives a stricter lower bound of 4/6, so the actual lower bound is 4/6 and the minimum is 7 ⋅ 4/6 = 14/3. In accordance with the strong duality theorem, the maximum of the primal equals the minimum of the dual. We use this example to illustrate the proof of the weak duality theorem. Suppose that, in the primal LP, we want to get an upper bound on the objective formula_47. We can use the constraint multiplied by some coefficient, say formula_48. For any formula_48 we get: formula_49. Now, if formula_50and formula_51, then formula_52, so formula_53. Hence, the objective of the dual LP is an upper bound on the objective of the primal LP. Farmer example. Consider a farmer who may grow wheat and barley with the set provision of some "L" land, "F" fertilizer and "P" pesticide. To grow one unit of wheat, one unit of land, formula_54 units of fertilizer and formula_55 units of pesticide must be used. Similarly, to grow one unit of barley, one unit of land, formula_56 units of fertilizer and formula_57 units of pesticide must be used. The primal problem would be the farmer deciding how much wheat (formula_58) and barley (formula_59) to grow if their sell prices are formula_60 and formula_61 per unit. In matrix form this becomes: Maximize: formula_62 subject to: formula_63 For the dual problem assume that "y" unit prices for each of these means of production (inputs) are set by a planning board. The planning board's job is to minimize the total cost of procuring the set amounts of inputs while providing the farmer with a floor on the unit price of each of his crops (outputs), "S"1 for wheat and "S"2 for barley. This corresponds to the following LP: In matrix form this becomes: Minimize: formula_64 subject to: formula_65 The primal problem deals with physical quantities. With all inputs available in limited quantities, and assuming the unit prices of all outputs is known, what quantities of outputs to produce so as to maximize total revenue? The dual problem deals with economic values. With floor guarantees on all output unit prices, and assuming the available quantity of all inputs is known, what input unit pricing scheme to set so as to minimize total expenditure? To each variable in the primal space corresponds an inequality to satisfy in the dual space, both indexed by output type. 
To each inequality to satisfy in the primal space corresponds a variable in the dual space, both indexed by input type. The coefficients that bound the inequalities in the primal space are used to compute the objective in the dual space, input quantities in this example. The coefficients used to compute the objective in the primal space bound the inequalities in the dual space, output unit prices in this example. Both the primal and the dual problems make use of the same matrix. In the primal space, this matrix expresses the consumption of physical quantities of inputs necessary to produce set quantities of outputs. In the dual space, it expresses the creation of the economic values associated with the outputs from set input unit prices. Since each inequality can be replaced by an equality and a slack variable, this means each primal variable corresponds to a dual slack variable, and each dual variable corresponds to a primal slack variable. This relation allows us to speak about complementary slackness. Infeasible program. A LP can also be unbounded or infeasible. Duality theory tells us that: However, it is possible for both the dual and the primal to be infeasible. Here is an example: Viewing the solution to a linear programming problem as a (generalized) eigenvector. There is a close connection between linear programming problems, eigenequations, and von Neumann's general equilibrium model. The solution to a linear programming problem can be regarded as a generalized eigenvector. The eigenequations of a square matrix are as follows: formula_66 where formula_67 and formula_68 are the left and right eigenvectors of the square matrix formula_69, respectively, and formula_70 is the eigenvalue. The above eigenequations for the square matrix can be extended to von Neumann's general equilibrium model: formula_71 where the economic meanings of formula_72 and formula_68 are the equilibrium prices of various goods and the equilibrium activity levels of various economic agents, respectively. The von Neumann's equilibrium model can be further extended to the following structural equilibrium model with formula_69 and formula_73 as matrix-valued functions: formula_74 where the economic meaning of formula_75 is the utility levels of various consumers. A special case of the above model is formula_76 This form of the structural equilibrium model and linear programming problems can often be converted to each other, that is, the solutions to these two types of problems are often consistent. If we define formula_77, formula_78, formula_79, formula_80, then the structural equilibrium model can be written as formula_81 formula_82 Let us illustrate the structural equilibrium model with the previously discussed tiny example. In this example, we have formula_83, formula_84 and formula_85. To solve the structural equilibrium model, we obtain formula_86 These are consistent with the solutions to the linear programming problems. We substitute the above calculation results into the structural equilibrium model, obtaining formula_87 Applications. The max-flow min-cut theorem is a special case of the strong duality theorem: flow-maximization is the primal LP, and cut-minimization is the dual LP. See Max-flow min-cut theorem#Linear program formulation. Other graph-related theorems can be proved using the strong duality theorem, in particular, Konig's theorem. The Minimax theorem for zero-sum games can be proved using the strong-duality theorem. Alternative algorithm. 
Sometimes, one may find it more intuitive to obtain the dual program without looking at the program matrix. Consider the following linear program: We have "m" + "n" conditions and all variables are non-negative. We shall define "m" + "n" dual variables: "y"j and "s"i. We get: Since this is a minimization problem, we would like to obtain a dual program that is a lower bound of the primal. In other words, we would like the weighted sum of the right-hand sides of the constraints to be maximal, under the condition that for each primal variable the sum of its coefficients does not exceed its coefficient in the objective function. For example, x1 appears in "n" + 1 constraints. If we sum its constraints' coefficients we get "a"1,1y1 + "a"1,2y2 + ... + "a"1,"n"y"n" + "f"1s1. This sum must be at most c1. As a result, we get: Note that we assume in our calculation steps that the program is in standard form. However, any linear program may be transformed to standard form and it is therefore not a limiting factor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
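To make the tiny example above concrete, the following sketch (an editorial illustration, not part of the original article) solves both the primal and the dual numerically with SciPy's linprog and checks that the two optimal values agree, as the strong duality theorem predicts. The variable names primal_opt and dual_opt are illustrative choices.

```python
from scipy.optimize import linprog

# Primal: maximize 3*x1 + 4*x2  subject to  5*x1 + 6*x2 = 7,  x >= 0.
# linprog minimizes, so the objective is negated.
primal = linprog(c=[-3, -4], A_eq=[[5, 6]], b_eq=[7], bounds=[(0, None)] * 2)

# Dual: minimize 7*y1  subject to  5*y1 >= 3 and 6*y1 >= 4,  y1 free.
# Rewrite ">=" constraints as "<=" for linprog: -5*y1 <= -3, -6*y1 <= -4.
dual = linprog(c=[7], A_ub=[[-5], [-6]], b_ub=[-3, -4], bounds=[(None, None)])

primal_opt = -primal.fun          # undo the sign flip
dual_opt = dual.fun

print("primal optimum:", primal_opt)   # 14/3 ≈ 4.6667
print("dual optimum:  ", dual_opt)     # 14/3 ≈ 4.6667
assert abs(primal_opt - dual_opt) < 1e-7     # strong duality
assert abs(primal_opt - 14 / 3) < 1e-7       # matches the value derived by hand
```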
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "x_i" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "c \\geq 0" }, { "math_id": 4, "text": "c_i" }, { "math_id": 5, "text": "x \\geq 0" }, { "math_id": 6, "text": "b" }, { "math_id": 7, "text": "A\\geq 0" }, { "math_id": 8, "text": "A_{ji}" }, { "math_id": 9, "text": "j" }, { "math_id": 10, "text": "y" }, { "math_id": 11, "text": "y_i" }, { "math_id": 12, "text": "A^T y \\geq c" }, { "math_id": 13, "text": "y \\geq 0" }, { "math_id": 14, "text": "y^*" }, { "math_id": 15, "text": "A" }, { "math_id": 16, "text": "c" }, { "math_id": 17, "text": "(A^Ty)_i < c_i" }, { "math_id": 18, "text": "A^Ty \\geq c, y\\geq 0" }, { "math_id": 19, "text": "b^T y" }, { "math_id": 20, "text": "x_1 ,\\ldots, x_n" }, { "math_id": 21, "text": "x_i \\geq 0" }, { "math_id": 22, "text": "x_i \\leq 0" }, { "math_id": 23, "text": "x_i \\in \\mathbb{R}" }, { "math_id": 24, "text": "\\text{maximize} ~~~ c_1 x_1 +\\cdots + c_n x_n" }, { "math_id": 25, "text": "a_{j 1} x_1 +\\cdots + a_{j n} x_n \\lesseqqgtr b_j" }, { "math_id": 26, "text": "b_j" }, { "math_id": 27, "text": "\\geq" }, { "math_id": 28, "text": "\\leq" }, { "math_id": 29, "text": "=" }, { "math_id": 30, "text": "y_1 ,\\ldots, y_m" }, { "math_id": 31, "text": "\\geq b_j " }, { "math_id": 32, "text": "y_j \\leq 0 " }, { "math_id": 33, "text": "\\leq b_j " }, { "math_id": 34, "text": "y_j \\geq 0 " }, { "math_id": 35, "text": "= b_j " }, { "math_id": 36, "text": "y_j \\in \\mathbb{R} " }, { "math_id": 37, "text": "\\text{minimize } ~~~ b_1 y_1 +\\cdots + b_m y_m" }, { "math_id": 38, "text": "a_{1 i} y_1 +\\cdots + a_{m i} y_m \\lesseqqgtr c_i" }, { "math_id": 39, "text": "x_i \\leq 0 " }, { "math_id": 40, "text": "\\leq c_i " }, { "math_id": 41, "text": "x_i \\geq 0 " }, { "math_id": 42, "text": "\\geq c_i " }, { "math_id": 43, "text": "x_i \\in \\mathbb{R} " }, { "math_id": 44, "text": "= c_i " }, { "math_id": 45, "text": "\\begin{align}\n\\text{maximize } & 3 x_1 + 4 x_2 \n\\\\\n\\text{subject to } & 5 x_1 + 6 x_2 = 7\n\\\\\n& x_1\\geq 0, x_2\\geq 0\n\\end{align}\n" }, { "math_id": 46, "text": "\\begin{align}\n\\text{minimize } & 7 y_1\n\\\\\n\\text{subject to } \n & 5 y_1 \\geq 3\n\\\\\n & 6 y_1 \\geq 4\n\\\\\n & y_1\\in \\mathbb{R}\n\\end{align}\n" }, { "math_id": 47, "text": "3 x_1 + 4 x_2" }, { "math_id": 48, "text": "y_1" }, { "math_id": 49, "text": "y_1\\cdot (5 x_1 + 6 x_2) = 7 y_1" }, { "math_id": 50, "text": "y_1\\cdot 5 x_1 \\geq 3 x_1" }, { "math_id": 51, "text": "y_1\\cdot 6 x_2 \\geq 4 x_2" }, { "math_id": 52, "text": "y_1\\cdot (5 x_1 + 6 x_2) \\geq 3 x_1 + 4 x_2" }, { "math_id": 53, "text": "7 y_1 \\geq 3 x_1 + 4 x_2" }, { "math_id": 54, "text": "F_1" }, { "math_id": 55, "text": "P_1" }, { "math_id": 56, "text": "F_2" }, { "math_id": 57, "text": "P_2" }, { "math_id": 58, "text": "x_1" }, { "math_id": 59, "text": "x_2" }, { "math_id": 60, "text": "S_1" }, { "math_id": 61, "text": "S_2" }, { "math_id": 62, "text": "\\begin{bmatrix}S_1 & S_2 \\end{bmatrix} \\begin{bmatrix} x_1 \\\\ x_2 \\end{bmatrix} " }, { "math_id": 63, "text": "\\begin{bmatrix} 1 & 1 \\\\ F_1 & F_2\\\\ P_1 & P_2 \\end{bmatrix} \\begin{bmatrix} x_1 \\\\ x_2 \\end{bmatrix} \\leq \\begin{bmatrix} L \\\\ F \\\\ P \\end{bmatrix}, \\, \\begin{bmatrix} x_1 \\\\ x_2 \\end{bmatrix} \\ge 0. 
" }, { "math_id": 64, "text": "\\begin{bmatrix} L & F & P \\end{bmatrix} \\begin{bmatrix} y_L \\\\ y_F \\\\ y_P \\end{bmatrix} " }, { "math_id": 65, "text": "\\begin{bmatrix} 1 & F_1 & P_1 \\\\ 1 & F_2 & P_2 \\end{bmatrix} \\begin{bmatrix} y_L \\\\ y_F \\\\ y_P \\end{bmatrix} \\ge \\begin{bmatrix} S_1 \\\\ S_2 \\end{bmatrix}, \\, \\begin{bmatrix} y_L \\\\ y_F \\\\ y_P \\end{bmatrix} \\ge 0. " }, { "math_id": 66, "text": "\n\\begin{matrix}\n\\mathbf p^{T}\\mathbf A\n=\\rho \\mathbf p^{T} \\\\\n\\mathbf A \\mathbf z\n=\\rho {\\mathbf z} \\\\\n \\end{matrix}\n" }, { "math_id": 67, "text": "\\mathbf p^{T}" }, { "math_id": 68, "text": "\\mathbf z" }, { "math_id": 69, "text": "\\mathbf A" }, { "math_id": 70, "text": "\\rho" }, { "math_id": 71, "text": "\\begin{matrix}\n \\mathbf p^{T}\\mathbf A\n\\geq \\rho\\mathbf p^{T}\\mathbf B \\\\\n \\mathbf A \\mathbf z\n\\leq \\rho \\mathbf B {\\mathbf z} \\\\\n \\end{matrix}\n" }, { "math_id": 72, "text": "\\mathbf p" }, { "math_id": 73, "text": "\\mathbf B" }, { "math_id": 74, "text": "\n\\begin{matrix} \n\\mathbf p^{T}\\mathbf A(\\mathbf p, \\mathbf u, \\mathbf z)\n\\geq \\rho\\mathbf p^{T}\\mathbf B(\\mathbf p, \\mathbf u, \\mathbf z)\\\\\n\\mathbf A(\\mathbf p, \\mathbf u, \\mathbf z) \\mathbf z\n\\leq \\rho \\mathbf B(\\mathbf p, \\mathbf u, \\mathbf z) {\\mathbf z}\\\\\n\\end{matrix}\n" }, { "math_id": 75, "text": "\\mathbf u" }, { "math_id": 76, "text": "\n\\begin{matrix} \n\\mathbf p^{T}\\mathbf A(u)\n\\geq\\mathbf p^{T}\\mathbf B\\\\\n\\mathbf A(u) \\mathbf z\n\\leq \\mathbf B {\\mathbf z}\n\\end{matrix}\n" }, { "math_id": 77, "text": "\n\\mathbf A(u)=\\begin{bmatrix}\n\\mathbf 0& u\\\\\n\\mathbf A& \\mathbf 0 \\\\\n\\end{bmatrix}\n" }, { "math_id": 78, "text": "\n\\mathbf B=\\begin{bmatrix}\n\\mathbf c^T & 0\\\\\n\\mathbf 0 & \\mathbf b \\\\\n\\end{bmatrix}\n" }, { "math_id": 79, "text": "\\mathbf p=\\begin{bmatrix}\n 1 \\\\\n \\mathbf y \\\\\n \\end{bmatrix}\n" }, { "math_id": 80, "text": "\\mathbf z=\\begin{bmatrix}\n \\mathbf x \\\\\n 1 \\\\\n \\end{bmatrix}\n" }, { "math_id": 81, "text": "\n\\begin{bmatrix}\n \\mathbf y^{T}\\mathbf A & u \\\\\n\\end{bmatrix}\n\\geq\n\\begin{bmatrix}\n \\mathbf c^T & \\mathbf y^{T}\\mathbf b \\\\\n\\end{bmatrix}\n" }, { "math_id": 82, "text": "\n\\begin{bmatrix}\n u \\\\\n \\mathbf A \\mathbf x \\\\\n\\end{bmatrix}\n\\leq\n\\begin{bmatrix}\n \\mathbf c^T \\mathbf x \\\\\n \\mathbf b\\\\\n\\end{bmatrix}\n" }, { "math_id": 83, "text": "\n\\mathbf A=\\begin{bmatrix}\n5&6\n\\end{bmatrix}\n" }, { "math_id": 84, "text": "\n\\mathbf A(u)=\\begin{bmatrix}\n0& 0& u\\\\\n5&6&0 \\\\\n\\end{bmatrix}\n" }, { "math_id": 85, "text": "\n\\mathbf B=\\begin{bmatrix}\n3& 4& 0\\\\\n0 & 0&7 \\\\\n\\end{bmatrix}\n" }, { "math_id": 86, "text": "\\mathbf p^*=(1, 2/3)^T, \\quad \\mathbf z^*=(0, 7/6, 1)^T, \\quad u^*=14/3" }, { "math_id": 87, "text": "\n\\begin{matrix}\n\\mathbf p^{T}\\mathbf A(u)=(10/3, 4, 14/3)\n\\geq (3, 4, 14/3)= \\mathbf p^{T}\\mathbf B \\\\\n\\mathbf A(u) \\mathbf z = (14/3, 7)^T\n\\leq (14/3, 7)^T=\\mathbf B {\\mathbf z}\n\\end{matrix}\n" } ]
https://en.wikipedia.org/wiki?curid=15772894
1577298
Decile
Quantile dividing data into 10 equal parts In descriptive statistics, a decile is any of the nine values that divide the sorted data into ten equal parts, so that each part represents 1/10 of the sample or population. A decile is one possible form of a quantile; others include the quartile and percentile. A decile rank arranges the data in order from lowest to highest and is done on a scale of one to ten where each successive number corresponds to an increase of 10 percentage points. Special usage: The decile mean. A moderately robust measure of central tendency - known as the decile mean - can be computed by making use of a sample's deciles formula_0 to formula_1 (formula_0 = 10th percentile, formula_2 = 20th percentile and so on). It is calculated as follows: formula_3 Apart from serving as an alternative for the mean and the truncated mean, it also forms the basis for robust measures of skewness and kurtosis, and even a normality test. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
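As a concrete illustration of the decile mean described above, the following short Python sketch (an editorial addition; the helper name decile_mean is ours, not a standard function) computes the nine deciles of a sample with NumPy and averages them.

```python
import numpy as np

def decile_mean(sample):
    """Average of the nine deciles D1..D9 (the 10th, 20th, ..., 90th percentiles)."""
    deciles = np.percentile(sample, np.arange(10, 100, 10))
    return deciles.mean()

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=1_000)

print("deciles:", np.round(np.percentile(data, np.arange(10, 100, 10)), 3))
print("decile mean:", round(decile_mean(data), 3))   # close to 5 for this symmetric sample
print("ordinary mean:", round(data.mean(), 3))
```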
[ { "math_id": 0, "text": "D_{1}" }, { "math_id": 1, "text": "D_{9}" }, { "math_id": 2, "text": "D_{2}" }, { "math_id": 3, "text": " DM = \\frac{\\sum_{i=1}^9 D_{i}} {9} " } ]
https://en.wikipedia.org/wiki?curid=1577298
15773113
Dividend discount model
Method of valuing a stock In financial economics, the dividend discount model (DDM) is a method of valuing the price of a company's capital stock or business value based on the assertion that intrinsic value is determined by the sum of future cash flows from dividend payments to shareholders, discounted back to their present value. The constant-growth form of the DDM is sometimes referred to as the Gordon growth model (GGM), after Myron J. Gordon of the Massachusetts Institute of Technology, the University of Rochester, and the University of Toronto, who published it along with Eli Shapiro in 1956 and made reference to it in 1959. Their work borrowed heavily from the theoretical and mathematical ideas found in John Burr Williams 1938 book "The Theory of Investment Value," which put forth the dividend discount model 18 years before Gordon and Shapiro. When dividends are assumed to grow at a constant rate, the variables are: formula_0 is the current stock price. formula_1 is the constant growth rate in perpetuity expected for the dividends. formula_2 is the constant cost of equity capital for that company. formula_3 is the value of dividends at the end of the first period. formula_4 Derivation of equation. The model uses the fact that the current value of the dividend payment formula_5 at (discrete) time formula_6 is formula_7, and so the current value of all the future dividend payments, which is the current price formula_0, is the sum of the infinite series formula_8 This summation can be rewritten as formula_9 where formula_10 The series in parentheses is the geometric series with common ratio formula_11 so it sums to formula_12 if formula_13. Thus, formula_14 Substituting the value for formula_11 leads to formula_15, which is simplified by multiplying by formula_16, so that formula_17 Income plus capital gains equals total return. The DDM equation can also be understood to state simply that a stock's total return equals the sum of its income and capital gains. formula_18 is rearranged to give formula_19 So the dividend yield formula_20 plus the growth formula_21 equals cost of equity formula_22. Consider the dividend growth rate in the DDM model as a proxy for the growth of earnings and by extension the stock price and capital gains. Consider the DDM's cost of equity capital as a proxy for the investor's required total return. formula_23 Growth cannot exceed cost of equity. From the first equation, one might notice that formula_24 cannot be negative. When growth is expected to exceed the cost of equity in the short run, then usually a two-stage DDM is used: formula_25 Therefore, formula_26 where formula_1 denotes the short-run expected growth rate, formula_27 denotes the long-run growth rate, and formula_28 is the period (number of years), over which the short-run growth rate is applied. Even when "g" is very close to "r", P approaches infinity, so the model becomes meaningless. Some properties of the model. a) When the growth "g" is zero, the dividend is capitalized. formula_29. b) This equation is also used to estimate the cost of capital by solving for formula_2. formula_30 c) which is equivalent to the formula of the Gordon Growth Model "(or Yield-plus-growth Model)": formula_31 = formula_32 where “formula_31” stands for the present stock value, “formula_3” stands for expected dividend per share one year from the present time, “g” stands for rate of growth of dividends, and “k” represents the required return rate for the equity investor. 
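The constant-growth formula and the two-stage variant above are straightforward to evaluate directly. The following sketch (an editorial addition; the function names and the numeric inputs are made-up illustrations, not data from the article) prices a stock under the constant-growth model, checks the "dividend yield plus growth equals cost of equity" identity, and evaluates the two-stage formula by summing discounted dividends plus a discounted terminal value.

```python
def gordon_price(d1, r, g):
    """Constant-growth DDM: P = D1 / (r - g); requires r > g."""
    if r <= g:
        raise ValueError("cost of equity r must exceed growth g")
    return d1 / (r - g)

# Hypothetical inputs: next year's dividend, cost of equity, growth rate.
d1, r, g = 2.00, 0.08, 0.03
p0 = gordon_price(d1, r, g)
print(f"price P0 = {p0:.2f}")                      # 2 / 0.05 = 40.00

# Identity: dividend yield + growth = cost of equity.
assert abs(d1 / p0 + g - r) < 1e-12

def two_stage_price(d0, r, g, g_inf, n):
    """Two-stage DDM: growth g for n years, then g_inf in perpetuity."""
    stage1 = sum(d0 * (1 + g) ** t / (1 + r) ** t for t in range(1, n + 1))
    d_n1 = d0 * (1 + g) ** n * (1 + g_inf)         # first dividend of the second stage
    terminal = d_n1 / (r - g_inf) / (1 + r) ** n   # discounted terminal value
    return stage1 + terminal

print(f"two-stage price = {two_stage_price(d0=1.9, r=0.08, g=0.06, g_inf=0.03, n=5):.2f}")
```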
Problems with the constant-growth form of the model. The following shortcomings have been noted. Related methods. The dividend discount model does not include a forecast of the price at which the stock under consideration could be sold at the end of the investment time horizon. A related approach, known as discounted cash flow analysis, can be used to calculate the intrinsic value of a stock including all cash payments to the investor, consisting of both expected future dividends and the expected sale price at the end of the holding period, discounted at an appropriate risk-adjusted interest rate. If the intrinsic value exceeds the stock’s current market price, the stock is an attractive investment. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "g" }, { "math_id": 2, "text": "r" }, { "math_id": 3, "text": "D_1" }, { "math_id": 4, "text": "P = \\frac{D_1}{r-g}" }, { "math_id": 5, "text": "D_0 (1+g)^t" }, { "math_id": 6, "text": "t" }, { "math_id": 7, "text": "\\frac{D_0 (1+g)^t}{{(1+r)}^t}" }, { "math_id": 8, "text": " P_0= \\sum_{t=1}^{\\infty} {D_0} \\frac{(1+g)^{t}}{(1+r)^t}" }, { "math_id": 9, "text": "P_0={D_0} r' (1+r'+{r'}^2+{r'}^3+....)" }, { "math_id": 10, "text": "r'=\\frac{(1+g)}{(1+r)}." }, { "math_id": 11, "text": "r'" }, { "math_id": 12, "text": "\\frac{1}{1-r'}" }, { "math_id": 13, "text": " \\mid r'\\mid<1" }, { "math_id": 14, "text": " P_0 = \\frac{D_0 r'}{1-r'} " }, { "math_id": 15, "text": " P_0 = \\frac{D_0 \\frac{1+g}{1+r}}{1-\\frac{1+g}{1+r}}" }, { "math_id": 16, "text": " \\frac {1+r}{1+r}" }, { "math_id": 17, "text": "P_0 = \\frac{D_0(1+g)}{r-g} = \\frac{D_1}{r-g}" }, { "math_id": 18, "text": "\\frac{D_1}{r -g} = P_0" }, { "math_id": 19, "text": "\\frac{D_1}{P_0} + g = r" }, { "math_id": 20, "text": "(D_1/P_0)" }, { "math_id": 21, "text": "(g)" }, { "math_id": 22, "text": "(r)" }, { "math_id": 23, "text": "\\text{Income} + \\text{Capital Gain} = \\text{Total Return}" }, { "math_id": 24, "text": "r-g" }, { "math_id": 25, "text": "P = \\sum_{t=1}^N \\frac{D_0 \\left( 1+g \\right)^t}{\\left( 1+r\\right)^t} + \\frac{P_N}{\\left( 1 +r\\right)^N}" }, { "math_id": 26, "text": "P = \\frac{D_0 \\left( 1 + g \\right)}{r-g} \\left[ 1- \\frac{\\left( 1+g \\right)^N}{\\left( 1 + r \\right)^N} \\right]\n+ \\frac{D_0 \\left( 1 + g \\right)^N \\left( 1 + g_\\infty \\right)}{\\left( 1 + r \\right)^N \\left( r - g_\\infty \\right)}," }, { "math_id": 27, "text": "g_\\infty" }, { "math_id": 28, "text": "N" }, { "math_id": 29, "text": "P_0 = \\frac{D_1}{r}" }, { "math_id": 30, "text": "r = \\frac{D_1}{P_0} + g." }, { "math_id": 31, "text": "P_0" }, { "math_id": 32, "text": "\\frac{D_1}{k - g}" } ]
https://en.wikipedia.org/wiki?curid=15773113
157755
Fermat primality test
Probabilistic primality test The Fermat primality test is a probabilistic test to determine whether a number is a probable prime. Concept. Fermat's little theorem states that if "p" is prime and "a" is not divisible by "p", then formula_0 If we want to test whether "p" is prime, then we can pick random integers "a" not divisible by "p" and see whether the congruence holds. If it does not hold for a value of "a", then "p" is composite. This congruence is unlikely to hold for a random "a" if "p" is composite. Therefore, if the equality does hold for one or more values of "a", then we say that "p" is probably prime. However, note that the above congruence holds trivially for formula_1, because the congruence relation is compatible with exponentiation. It also holds trivially for formula_2 if "p" is odd, for the same reason. That is why one usually chooses a random "a" in the interval formula_3. Any "a" such that formula_4 when "n" is composite is known as a "Fermat liar". In this case "n" is called a Fermat pseudoprime to base "a". If we do pick an "a" such that formula_5 then "a" is known as a "Fermat witness" for the compositeness of "n". Example. Suppose we wish to determine whether "n" = 221 is prime. Randomly pick 1 < "a" < 220, say "a" = 38. We check the above congruence and find that it holds: formula_6 Either 221 is prime, or 38 is a Fermat liar, so we take another "a", say 24: formula_7 So 221 is composite and 38 was indeed a Fermat liar. Furthermore, 24 is a Fermat witness for the compositeness of 221. Algorithm. The algorithm can be written as follows: Inputs: "n": a value to test for primality, "n">3; "k": a parameter that determines the number of times to test for primality Output: "composite" if "n" is composite, otherwise "probably prime" Repeat "k" times: Pick "a" randomly in the range [2, "n" − 2] If formula_8, then return "composite" If composite is never returned: return "probably prime" The "a" values 1 and "n"-1 are not used as the equality holds for all "n" and all odd "n" respectively, hence testing them adds no value. Complexity. Using fast algorithms for modular exponentiation and multiprecision multiplication, the running time of this algorithm is O("k" log2"n" log log "n") = Õ("k" log2"n"), where "k" is the number of times we test a random "a", and "n" is the value we want to test for primality; see Miller–Rabin primality test for details. Flaw. There are infinitely many Fermat pseudoprimes to any given base "a" > 1. Even worse, there are infinitely many Carmichael numbers. These are numbers formula_9 for which all values of formula_10 with formula_11 are Fermat liars. For these numbers, repeated application of the Fermat primality test performs the same as a simple random search for factors. While Carmichael numbers are substantially rarer than prime numbers (Erdős' upper bound for the number of Carmichael numbers below "n" is lower than "n"/log("n"), the approximate number of primes below "n"), there are enough of them that Fermat's primality test is not often used in the above form. Instead, other more powerful extensions of the Fermat test, such as Baillie–PSW, Miller–Rabin, and Solovay–Strassen, are more commonly used. In general, if formula_9 is a composite number that is not a Carmichael number, then at least half of all formula_12 (i.e. formula_13) are Fermat witnesses. For proof of this, let formula_10 be a Fermat witness and formula_14, formula_15, ..., formula_16 be Fermat liars. Then formula_17 and so all formula_18 for formula_19 are Fermat witnesses. Applications.
As mentioned above, most applications use a Miller–Rabin or Baillie–PSW test for primality. Sometimes a Fermat test (along with some trial division by small primes) is performed first to improve performance. GMP since version 3.0 uses a base-210 Fermat test after trial division and before running Miller–Rabin tests. Libgcrypt uses a similar process with base 2 for the Fermat test, but OpenSSL does not. In practice with most big number libraries such as GMP, the Fermat test is not noticeably faster than a Miller–Rabin test, and can be slower for many inputs. As an exception, OpenPFGW uses only the Fermat test for probable prime testing. The program is typically used with multi-thousand digit inputs with a goal of maximum speed with very large inputs. Another well known program that relies only on the Fermat test is PGP where it is only used for testing of self-generated large random values (an open source counterpart, GNU Privacy Guard, uses a Fermat pretest followed by Miller–Rabin tests). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
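A direct implementation of the algorithm described above takes only a few lines of Python. The sketch below (an editorial illustration) uses the built-in three-argument pow for modular exponentiation, reproduces the n = 221 example, and shows a Carmichael number, 561, fooling the test for every base coprime to it.

```python
import random
from math import gcd

def fermat_test(n, k=20):
    """Return 'composite' or 'probably prime' using k random bases."""
    if n <= 3:
        return "probably prime" if n in (2, 3) else "composite"
    for _ in range(k):
        a = random.randint(2, n - 2)
        if pow(a, n - 1, n) != 1:          # fast modular exponentiation
            return "composite"
    return "probably prime"

# The worked example: 38 is a Fermat liar for 221, 24 is a Fermat witness.
print(pow(38, 220, 221))   # 1  -> liar
print(pow(24, 220, 221))   # 81 -> witness, so 221 is composite

# 561 = 3 * 11 * 17 is a Carmichael number: every base coprime to 561 is a liar.
assert all(pow(a, 560, 561) == 1 for a in range(2, 561) if gcd(a, 561) == 1)

print(fermat_test(104729))   # 104729 is prime -> 'probably prime'
```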
[ { "math_id": 0, "text": "a^{p-1} \\equiv 1 \\pmod{p}." }, { "math_id": 1, "text": "a \\equiv 1 \\pmod{p}" }, { "math_id": 2, "text": "a \\equiv -1 \\pmod{p}" }, { "math_id": 3, "text": "1 < a < p - 1" }, { "math_id": 4, "text": "a^{n-1} \\equiv 1 \\pmod{n}" }, { "math_id": 5, "text": "a^{n-1} \\not\\equiv 1 \\pmod{n}" }, { "math_id": 6, "text": "a^{n-1} = 38^{220} \\equiv 1 \\pmod{221}." }, { "math_id": 7, "text": "a^{n-1} = 24^{220} \\equiv 81 \\not\\equiv 1 \\pmod{221}." }, { "math_id": 8, "text": "a^{n-1}\\not\\equiv1 \\pmod n" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "a" }, { "math_id": 11, "text": "\\operatorname{gcd}(a, n) = 1" }, { "math_id": 12, "text": "a\\in(\\mathbb{Z}/n\\mathbb{Z})^*" }, { "math_id": 13, "text": "\\operatorname{gcd}(a,n)=1" }, { "math_id": 14, "text": "a_1" }, { "math_id": 15, "text": "a_2" }, { "math_id": 16, "text": "a_s" }, { "math_id": 17, "text": "(a\\cdot a_i)^{n-1} \\equiv a^{n-1}\\cdot a_i^{n-1} \\equiv a^{n-1} \\not\\equiv 1\\pmod{n}" }, { "math_id": 18, "text": "a \\cdot a_i" }, { "math_id": 19, "text": "i = 1, 2, ..., s" } ]
https://en.wikipedia.org/wiki?curid=157755
1577800
Schwinger function
Euclidean Wightman distributions In quantum field theory, the Wightman distributions can be analytically continued to analytic functions in Euclidean space with the domain restricted to the ordered set of points in Euclidean space with no coinciding points. These functions are called the Schwinger functions (named after Julian Schwinger) and they are real-analytic, symmetric under the permutation of arguments (antisymmetric for fermionic fields), Euclidean covariant and satisfy a property known as reflection positivity. Properties of Schwinger functions are known as Osterwalder–Schrader axioms (named after Konrad Osterwalder and Robert Schrader). Schwinger functions are also referred to as Euclidean correlation functions. Osterwalder–Schrader axioms. Here we describe Osterwalder–Schrader (OS) axioms for a Euclidean quantum field theory of a Hermitian scalar field formula_0, formula_1. Note that a typical quantum field theory will contain infinitely many local operators, including also composite operators, and their correlators should also satisfy OS axioms similar to the ones described below. The Schwinger functions of formula_2 are denoted as formula_3 OS axioms from are numbered (E0)-(E4) and have the following meaning: Temperedness. Temperedness axiom (E0) says that Schwinger functions are tempered distributions away from coincident points. This means that they can be integrated against Schwartz test functions which vanish with all their derivatives at configurations where two or more points coincide. It can be shown from this axiom and other OS axioms (but not the linear growth condition) that Schwinger functions are in fact real-analytic away from coincident points. Euclidean covariance. Euclidean covariance axiom (E1) says that Schwinger functions transform covariantly under rotations and translations, namely: formula_4 for an arbitrary rotation matrix formula_5 and an arbitrary translation vector formula_6. OS axioms can be formulated for Schwinger functions of fields transforming in arbitrary representations of the rotation group. Symmetry. Symmetry axiom (E3) says that Schwinger functions are invariant under permutations of points: formula_7, where formula_8 is an arbitrary permutation of formula_9. Schwinger functions of fermionic fields are instead antisymmetric; for them this equation would have a ± sign equal to the signature of the permutation. Cluster property. Cluster property (E4) says that Schwinger function formula_10 reduces to the product formula_11 if two groups of points are separated from each other by a large constant translation: formula_12. The limit is understood in the sense of distributions. There is also a technical assumption that the two groups of points lie on two sides of the formula_13 hyperplane, while the vector formula_14 is parallel to it: formula_15 Reflection positivity. Positivity axioms (E2) asserts the following property called (Osterwalder–Schrader) reflection positivity. Pick any arbitrary coordinate τ and pick a test function "f""N" with "N" points as its arguments. Assume "f""N" has its support in the "time-ordered" subset of "N" points with 0 &lt; τ1 &lt; ... &lt; τ"N". Choose one such "f""N" for each positive "N", with the f's being zero for all "N" larger than some integer "M". Given a point formula_16, let formula_17 be the reflected point about the τ = 0 hyperplane. Then, formula_18 where * represents complex conjugation. 
Sometimes in theoretical physics literature reflection positivity is stated as the requirement that the Schwinger function of arbitrary even order should be non-negative if points are inserted symmetrically with respect to the formula_19 hyperplane: formula_20. This property indeed follows from the reflection positivity but it is weaker than full reflection positivity. Intuitive understanding. One way of (formally) constructing Schwinger functions which satisfy the above properties is through the Euclidean path integral. In particular, Euclidean path integrals (formally) satisfy reflection positivity. Let "F" be any polynomial functional of the field "φ" which only depends upon the value of "φ"("x") for those points "x" whose "τ" coordinates are nonnegative. Then formula_21 Since the action "S" is real and can be split into formula_22, which only depends on "φ" on the positive half-space (formula_23), and formula_24 which only depends upon "φ" on the negative half-space (formula_25), and if "S" also happens to be invariant under the combined action of taking a reflection and complex conjugating all the fields, then the previous quantity has to be nonnegative. Osterwalder–Schrader theorem. The Osterwalder–Schrader theorem states that Euclidean Schwinger functions which satisfy the above axioms (E0)-(E4) and an additional property (E0') called linear growth condition can be analytically continued to Lorentzian Wightman distributions which satisfy Wightman axioms and thus define a quantum field theory. Linear growth condition. This condition, called (E0') in, asserts that when the Schwinger function of order formula_26 is paired with an arbitrary Schwartz test function formula_27 which vanishes at coincident points, we have the following bound: formula_28 where formula_29 is an integer constant, formula_30 is the Schwartz-space seminorm of order formula_31, i.e. formula_32 and formula_33 a sequence of constants of factorial growth, i.e. formula_34 with some constants formula_35. Linear growth condition is subtle as it has to be satisfied for all Schwinger functions simultaneously. It also has not been derived from the Wightman axioms, so that the system of OS axioms (E0)-(E4) plus the linear growth condition (E0') appears to be stronger than the Wightman axioms. History. At first, Osterwalder and Schrader claimed a stronger theorem that the axioms (E0)-(E4) by themselves imply the Wightman axioms, however their proof contained an error which could not be corrected without adding extra assumptions. Two years later they published a new theorem, with the linear growth condition added as an assumption, and a correct proof. The new proof is based on a complicated inductive argument (proposed also by Vladimir Glaser), by which the region of analyticity of Schwinger functions is gradually extended towards the Minkowski space, and Wightman distributions are recovered as a limit. The linear growth condition (E0') is crucially used to show that the limit exists and is a tempered distribution. Osterwalder's and Schrader's paper also contains another theorem replacing (E0') by yet another assumption called formula_36. This other theorem is rarely used, since formula_36 is hard to check in practice. Other axioms for Schwinger functions. Axioms by Glimm and Jaffe. An alternative approach to axiomatization of Euclidean correlators is described by Glimm and Jaffe in their book. In this approach one assumes that one is given a measure formula_37 on the space of distributions formula_38. 
One then considers a generating functional formula_39 which is assumed to satisfy properties OS0-OS4: formula_40 is an entire-analytic function of formula_41 for any collection of formula_42 compactly supported test functions formula_43. Intuitively, this means that the measure formula_44 decays faster than any exponential. Relation to Osterwalder–Schrader axioms. Although the above axioms were named by Glimm and Jaffe (OS0)-(OS4) in honor of Osterwalder and Schrader, they are not equivalent to the Osterwalder–Schrader axioms. Given (OS0)-(OS4), one can define Schwinger functions of formula_2 as moments of the measure formula_37, and show that these moments satisfy Osterwalder–Schrader axioms (E0)-(E4) and also the linear growth condition (E0'). Then one can appeal to the Osterwalder–Schrader theorem to show that Wightman functions are tempered distributions. Alternatively, and much more easily, one can derive the Wightman axioms directly from (OS0)-(OS4). Note however that the full quantum field theory will contain infinitely many other local operators apart from formula_2, such as formula_53, formula_54 and other composite operators built from formula_2 and its derivatives. It is not easy to extract these Schwinger functions from the measure formula_37 and show that they satisfy OS axioms, as should be the case. To summarize, the axioms called (OS0)-(OS4) by Glimm and Jaffe are stronger than the OS axioms as far as the correlators of the field formula_2 are concerned, but weaker than the full set of OS axioms, since they do not say much about correlators of composite operators. Nelson's axioms. These axioms were proposed by Edward Nelson. See also their description in the book of Barry Simon. As in the above axioms by Glimm and Jaffe, one assumes that the field formula_38 is a random distribution with a measure formula_37. This measure is sufficiently regular so that the field formula_55 has the regularity of a Sobolev space of negative derivative order. The crucial feature of these axioms is to consider the field restricted to a surface. One of the axioms is the Markov property, which formalizes the intuitive notion that the state of the field inside a closed surface depends only on the state of the field on the surface. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
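Reflection positivity is easiest to see in a Gaussian example. The sketch below is an editorial illustration, not part of the article: it takes the Euclidean two-point function of a free massive field in one dimension, C(x, y) = e^{-m|x - y|}/(2m), and checks numerically that the matrix of correlators between a set of points with positive "time" and their reflections is positive semidefinite, which is the simplest (two-point, point-like test function) consequence of axiom (E2) for this Gaussian theory. The mass value and the chosen points are arbitrary.

```python
import numpy as np

m = 1.3                                    # arbitrary mass parameter
x = np.array([0.2, 0.5, 1.1, 2.4, 3.0])    # points with positive "time" coordinate x > 0

def schwinger_2pt(a, b, m):
    """Euclidean two-point function of the free massive field in d = 1."""
    return np.exp(-m * np.abs(a - b)) / (2 * m)

# Matrix of correlators between reflected and unreflected points:
# M[i, j] = S2(theta x_i, x_j) = S2(-x_i, x_j).
M = schwinger_2pt(-x[:, None], x[None, :], m)

eigenvalues = np.linalg.eigvalsh(M)
print(np.round(eigenvalues, 10))
assert np.all(eigenvalues > -1e-12)        # reflection positivity: M is positive semidefinite

# In this Gaussian case M factorizes as a rank-1 matrix v v^T / (2m) with v_i = exp(-m x_i).
v = np.exp(-m * x)
assert np.allclose(M, np.outer(v, v) / (2 * m))
```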
[ { "math_id": 0, "text": "\\phi(x)" }, { "math_id": 1, "text": " x\\in \\mathbb{R}^d" }, { "math_id": 2, "text": "\\phi" }, { "math_id": 3, "text": "S_n(x_1,\\ldots,x_n) \\equiv \\langle \\phi(x_1) \\phi(x_2)\\ldots \\phi(x_n)\\rangle,\\quad x_k \\in \\mathbb{R}^d." }, { "math_id": 4, "text": "S_n(x_1,\\ldots,x_n)=S_n(R x_1+b,\\ldots,Rx_n+b)" }, { "math_id": 5, "text": "R\\in SO(d)" }, { "math_id": 6, "text": "b\\in \\mathbb{R}^d" }, { "math_id": 7, "text": "S_n(x_1,\\ldots,x_n)=S_n(x_{\\pi(1)},\\ldots,x_{\\pi(n)})" }, { "math_id": 8, "text": "\\pi" }, { "math_id": 9, "text": "\\{1,\\ldots,n\\}" }, { "math_id": 10, "text": "S_{p+q}" }, { "math_id": 11, "text": "S_{p}S_q" }, { "math_id": 12, "text": "\\lim_{b\\to \\infty} S_{p+q}(x_1,\\ldots,x_p,x_{p+1}+b,\\ldots, x_{p+q}+b)\n=S_{p}(x_1,\\ldots,x_p) S_q(x_{p+1},\\ldots, x_{p+q})" }, { "math_id": 13, "text": "x^0=0" }, { "math_id": 14, "text": "b" }, { "math_id": 15, "text": "x^0_1,\\ldots,x^0_p>0,\\quad x^0_{p+1},\\ldots,x^0_{p+q}<0,\\quad b^0=0." }, { "math_id": 16, "text": "x" }, { "math_id": 17, "text": "x^\\theta" }, { "math_id": 18, "text": "\\sum_{m,n}\\int d^dx_1 \\cdots d^dx_m\\, d^dy_1 \\cdots d^dy_n S_{m+n}(x_1,\\dots,x_m,y_1,\\dots,y_n)f_m(x^\\theta_1,\\dots,x^\\theta_m)^* f_n(y_1,\\dots,y_n)\\geq 0" }, { "math_id": 19, "text": "\\tau=0" }, { "math_id": 20, "text": "S_{2n}(x_1,\\dots,x_n,x^\\theta_n,\\dots,x^\\theta_1)\\geq 0" }, { "math_id": 21, "text": " \\int \\mathcal{D}\\phi F[\\phi(x)]F[\\phi(x^\\theta)]^* e^{-S[\\phi]}=\\int \\mathcal{D}\\phi_0 \\int_{\\phi_+(\\tau=0)=\\phi_0} \\mathcal{D}\\phi_+ F[\\phi_+]e^{-S_+[\\phi_+]}\\int_{\\phi_-(\\tau=0)=\\phi_0} \\mathcal{D}\\phi_- F[(\\phi_-)^\\theta]^* e^{-S_-[\\phi_-]}. " }, { "math_id": 22, "text": " S_+ " }, { "math_id": 23, "text": " \\phi_+ " }, { "math_id": 24, "text": " S_- " }, { "math_id": 25, "text": " \\phi_- " }, { "math_id": 26, "text": " n" }, { "math_id": 27, "text": "f" }, { "math_id": 28, "text": "|S_{n}(f)|\\leq \\sigma_n |f|_{C\\cdot n}," }, { "math_id": 29, "text": " C\\in \\mathbb{N}" }, { "math_id": 30, "text": "|f|_{C\\cdot n}" }, { "math_id": 31, "text": "N=C\\cdot n" }, { "math_id": 32, "text": "|f|_{N} = \\sup_{|\\alpha|\\leq N, x\\in \\mathbb{R}^d} |(1+|x|)^N D^\\alpha f(x)|," }, { "math_id": 33, "text": "\\sigma_n " }, { "math_id": 34, "text": "\\sigma_n \\leq A (n!)^B " }, { "math_id": 35, "text": "A,B" }, { "math_id": 36, "text": "\\check{\\text{(E0)}}" }, { "math_id": 37, "text": "d\\mu " }, { "math_id": 38, "text": " \\phi \\in D'(\\mathbb{R}^d)" }, { "math_id": 39, "text": " S(f) =\\int e^{\\phi(f)} d\\mu,\\quad f\\in D(\\mathbb{R}^d)" }, { "math_id": 40, "text": "z=(z_1,\\ldots,z_n)\\mapsto S\\left(\\sum_{i=1}^n z_i f_i\\right)" }, { "math_id": 41, "text": "z\\in \\mathbb{R}^n" }, { "math_id": 42, "text": "n" }, { "math_id": 43, "text": "f_i\\in D(\\mathbb{R}^d)" }, { "math_id": 44, "text": "d\\mu" }, { "math_id": 45, "text": "S(f)" }, { "math_id": 46, "text": "|S(f)|\\leq \\exp\\left(C \\int d^dx |f(x)|\\right)" }, { "math_id": 47, "text": "f(x)\\mapsto f(R x+b)" }, { "math_id": 48, "text": "x^0>0" }, { "math_id": 49, "text": "\\theta f_i(x)=f_i(\\theta x)" }, { "math_id": 50, "text": "\\theta" }, { "math_id": 51, "text": "M_{ij}=S(f_i+\\theta f_j)" }, { "math_id": 52, "text": " (D'(\\mathbb{R}^d),d\\mu)" }, { "math_id": 53, "text": "\\phi^2" }, { "math_id": 54, "text": "\\phi^4" }, { "math_id": 55, "text": " \\phi" } ]
https://en.wikipedia.org/wiki?curid=1577800
1577896
Symmetric graph
Graph in which all ordered pairs of linked nodes are automorphic In the mathematical field of graph theory, a graph G is symmetric (or arc-transitive) if, given any two pairs of adjacent vertices "u"1—"v"1 and "u"2—"v"2 of G, there is an automorphism formula_0 such that formula_1 and formula_2 In other words, a graph is symmetric if its automorphism group acts transitively on ordered pairs of adjacent vertices (that is, upon edges considered as having a direction). Such a graph is sometimes also called 1-arc-transitive or flag-transitive. By definition (ignoring "u"1 and "u"2), a symmetric graph without isolated vertices must also be vertex-transitive. Since the definition above maps one edge to another, a symmetric graph must also be edge-transitive. However, an edge-transitive graph need not be symmetric, since a—b might map to c—d, but not to d—c. Star graphs are a simple example of being edge-transitive without being vertex-transitive or symmetric. As a further example, semi-symmetric graphs are edge-transitive and regular, but not vertex-transitive. Every connected symmetric graph must thus be both vertex-transitive and edge-transitive, and the converse is true for graphs of odd degree. However, for even degree, there exist connected graphs which are vertex-transitive and edge-transitive, but not symmetric. Such graphs are called half-transitive. The smallest connected half-transitive graph is Holt's graph, with degree 4 and 27 vertices. Confusingly, some authors use the term "symmetric graph" to mean a graph which is vertex-transitive and edge-transitive, rather than an arc-transitive graph. Such a definition would include half-transitive graphs, which are excluded under the definition above. A distance-transitive graph is one where instead of considering pairs of adjacent vertices (i.e. vertices a distance of 1 apart), the definition covers two pairs of vertices, each the same distance apart. Such graphs are automatically symmetric, by definition. A "t"-arc is defined to be a sequence of "t" + 1 vertices, such that any two consecutive vertices in the sequence are adjacent, and with any repeated vertices being more than 2 steps apart. A "t"-transitive graph is a graph such that the automorphism group acts transitively on "t"-arcs, but not on ("t" + 1)-arcs. Since 1-arcs are simply edges, every symmetric graph of degree 3 or more must be "t"-transitive for some "t", and the value of "t" can be used to further classify symmetric graphs. The cube is 2-transitive, for example. Note that conventionally the term "symmetric graph" is not complementary to the term "asymmetric graph," as the latter refers to a graph that has no nontrivial symmetries at all. Examples. Two basic families of symmetric graphs for any number of vertices are the cycle graphs (of degree 2) and the complete graphs. Further symmetric graphs are formed by the vertices and edges of the regular and quasiregular polyhedra: the cube, octahedron, icosahedron, dodecahedron, cuboctahedron, and icosidodecahedron. Extension of the cube to "n" dimensions gives the hypercube graphs (with 2^"n" vertices and degree "n"). Similarly, extension of the octahedron to "n" dimensions gives the graphs of the cross-polytopes; this family of graphs (with 2"n" vertices and degree 2"n" − 2) is sometimes referred to as the cocktail party graphs - they are complete graphs with a set of edges making a perfect matching removed. Additional families of symmetric graphs with an even number of vertices 2"n" are the evenly split complete bipartite graphs K"n","n" and the crown graphs on 2"n" vertices.
Many other symmetric graphs can be classified as circulant graphs (but not all). The Rado graph forms an example of a symmetric graph with infinitely many vertices and infinite degree Cubic symmetric graphs. Combining the symmetry condition with the restriction that graphs be cubic (i.e. all vertices have degree 3) yields quite a strong condition, and such graphs are rare enough to be listed. They all have an even number of vertices. The Foster census and its extensions provide such lists. The Foster census was begun in the 1930s by Ronald M. Foster while he was employed by Bell Labs, and in 1988 (when Foster was 92) the then current Foster census (listing all cubic symmetric graphs up to 512 vertices) was published in book form. The first thirteen items in the list are cubic symmetric graphs with up to 30 vertices (ten of these are also distance-transitive; the exceptions are as indicated): Other well known cubic symmetric graphs are the Dyck graph, the Foster graph and the Biggs–Smith graph. The ten distance-transitive graphs listed above, together with the Foster graph and the Biggs–Smith graph, are the only cubic distance-transitive graphs. Properties. The vertex-connectivity of a symmetric graph is always equal to the degree "d". In contrast, for vertex-transitive graphs in general, the vertex-connectivity is bounded below by 2("d" + 1)/3. A "t"-transitive graph of degree 3 or more has girth at least 2("t" – 1). However, there are no finite "t"-transitive graphs of degree 3 or more for "t" ≥ 8. In the case of the degree being exactly 3 (cubic symmetric graphs), there are none for "t" ≥ 6. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
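For small graphs one can verify arc-transitivity by brute force. The sketch below is an editorial illustration (it is exponential in the number of vertices and only sensible for tiny graphs): it enumerates all automorphisms of the 3-dimensional cube graph and checks that they act transitively on its ordered adjacent pairs (arcs).

```python
from itertools import permutations

# Vertices of the 3-cube: 3-bit strings; edges join strings differing in exactly one bit.
vertices = list(range(8))
edges = {(u, v) for u in vertices for v in vertices
         if u != v and bin(u ^ v).count("1") == 1}    # both orientations of each edge

def is_automorphism(p):
    """p is a tuple giving the image of each vertex; check it preserves adjacency."""
    return all((p[u], p[v]) in edges for (u, v) in edges)

# Brute force over all 8! vertex permutations (fine for 8 vertices).
automorphisms = [p for p in permutations(vertices) if is_automorphism(p)]
print("order of automorphism group:", len(automorphisms))   # 48 for the cube

# Arc-transitivity: the orbit of a single arc under the group is the set of all arcs.
arcs = sorted(edges)
u0, v0 = arcs[0]
orbit = {(p[u0], p[v0]) for p in automorphisms}
print("graph is arc-transitive:", orbit == set(arcs))        # True
```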
[ { "math_id": 0, "text": "f : V(G) \\rightarrow V(G)" }, { "math_id": 1, "text": "f(u_1) = u_2" }, { "math_id": 2, "text": "f(v_1) = v_2." } ]
https://en.wikipedia.org/wiki?curid=1577896
15779815
Birnbaum–Saunders distribution
The Birnbaum–Saunders distribution, also known as the fatigue life distribution, is a probability distribution used extensively in reliability applications to model failure times. There are several alternative formulations of this distribution in the literature. It is named after Z. W. Birnbaum and S. C. Saunders. Theory. This distribution was developed to model failures due to cracks. A material is placed under repeated cycles of stress. The "j"th cycle leads to an increase in the crack by "X"j amount. The sum of the "X"j is assumed to be normally distributed with mean "nμ" and variance "nσ"2. The probability that the crack does not exceed a critical length "ω" is formula_0 where "Φ"() is the cdf of normal distribution. If "T" is the number of cycles to failure then the cumulative distribution function (cdf) of "T" is formula_1 The more usual form of this distribution is: formula_2 Here "α" is the shape parameter and "β" is the scale parameter. Properties. The Birnbaum–Saunders distribution is unimodal with a median of "β". The mean ("μ"), variance (σ2), skewness ("γ") and kurtosis ("κ") are as follows: formula_3 formula_4 formula_5 formula_6 Given a data set that is thought to be Birnbaum–Saunders distributed the parameters' values are best estimated by maximum likelihood. If "T" is Birnbaum–Saunders distributed with parameters "α" and "β" then "T"−1 is also Birnbaum-Saunders distributed with parameters "α" and "β"−1. Transformation. Let "T" be a Birnbaum-Saunders distributed variate with parameters "α" and "β". A useful transformation of "T" is formula_7. Equivalently formula_8. "X" is then distributed normally with a mean of zero and a variance of "α"2 / 4. Probability density function. The general formula for the probability density function (pdf) is formula_9 where γ is the shape parameter, μ is the location parameter, β is the scale parameter, and formula_10 is the probability density function of the standard normal distribution. Standard fatigue life distribution. The case where μ = 0 and β = 1 is called the standard fatigue life distribution. The pdf for the standard fatigue life distribution reduces to formula_11 Since the general form of probability functions can be expressed in terms of the standard distribution, all of the subsequent formulas are given for the standard form of the function. Cumulative distribution function. The formula for the cumulative distribution function is formula_12 where Φ is the cumulative distribution function of the standard normal distribution. Quantile function. The formula for the quantile function is formula_13 where Φ −1 is the quantile function of the standard normal distribution. External links.  This article incorporates public domain material from
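The transformation described above gives a simple way to sample from the distribution. The sketch below is an editorial illustration with arbitrary parameter values: it draws X from a normal distribution with mean zero and variance α²/4, maps it to T = β(1 + 2X² + 2X√(1 + X²)), and compares the sample mean, variance and median with the closed-form expressions quoted earlier.

```python
import numpy as np

alpha, beta = 0.5, 2.0
rng = np.random.default_rng(42)

# Sample via the transformation: X ~ N(0, alpha^2 / 4), T = beta*(1 + 2X^2 + 2X*sqrt(1 + X^2)).
x = rng.normal(0.0, alpha / 2, size=1_000_000)
t = beta * (1 + 2 * x**2 + 2 * x * np.sqrt(1 + x**2))

mean_exact = beta * (1 + alpha**2 / 2)
var_exact = (alpha * beta) ** 2 * (1 + 5 * alpha**2 / 4)

print("mean:     sample %.4f  exact %.4f" % (t.mean(), mean_exact))
print("variance: sample %.4f  exact %.4f" % (t.var(), var_exact))
print("median:   sample %.4f  exact %.4f" % (np.median(t), beta))
```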
[ { "math_id": 0, "text": " P( X \\le \\omega ) = \\Phi\\left( \\frac{ \\omega - n \\mu }{ \\sigma \\sqrt{ n } } \\right)" }, { "math_id": 1, "text": " \nP( T \\le t ) = 1 - \\Phi\\left( \\frac{ \\omega - t \\mu }{ \\sigma \\sqrt{ t } } \\right)\n= \\Phi\\left( \\frac{ t \\mu - \\omega }{ \\sigma \\sqrt{ t } } \\right)\n= \\Phi\\left( \\frac{ \\mu \\sqrt{ t } }{ \\sigma } - \\frac{ \\omega }{ \\sigma \\sqrt{t} } \\right)\n= \\Phi\\left( \\frac{ \\sqrt{ \\mu \\omega } }{ \\sigma } \\left[ \\left( \\frac{ t }{ \\omega / \\mu } \\right)^{ 0.5 } - \\left( \\frac{ \\omega / \\mu }{ t } \\right)^{ 0.5 } \\right] \\right)\n" }, { "math_id": 2, "text": " F( x; \\alpha, \\beta ) = \\Phi\\left( \\frac{ 1 }{ \\alpha } \\left[ \\left( \\frac{ x }{ \\beta } \\right)^{0.5} - \\left( \\frac{ \\beta }{ x } \\right)^{0.5} \\right] \\right) " }, { "math_id": 3, "text": " \\mu = \\beta\\left( 1 + \\frac{ \\alpha^2 }{ 2 } \\right)" }, { "math_id": 4, "text": " \\sigma^2 = ( \\alpha \\beta )^2 \\left( 1 + \\frac{ 5 \\alpha^2 }{ 4 } \\right)" }, { "math_id": 5, "text": " \\gamma = \\frac{ 4 \\alpha ( 11 \\alpha^2 + 6 ) }{ ( 5 \\alpha^2 + 4 )^{\\frac{3}{2}}}" }, { "math_id": 6, "text": " \\kappa = 3 + \\frac{ 6 \\alpha^2 ( 93 \\alpha^2 + 40 ) }{ ( 5 \\alpha^2 + 4 )^2 }" }, { "math_id": 7, "text": " X = \\frac{ 1 }{ 2 } \\left[ \\left( \\frac{ T }{ \\beta } \\right)^{ 0.5 } - \\left( \\frac{ T }{ \\beta } \\right)^{ -0.5 } \\right]" }, { "math_id": 8, "text": " T = \\beta\\left( 1 + 2X^2 + 2X( 1 + X^2 )^{ 0.5 } \\right) " }, { "math_id": 9, "text": "\nf(x) = \\frac{\\sqrt{\\frac{x-\\mu}{\\beta}}+\\sqrt{\\frac{\\beta}{x-\\mu}}}{2\\gamma\\left(x-\\mu\\right)}\\phi\\left(\\frac{\\sqrt{\\frac{x-\\mu}{\\beta}}-\\sqrt{\\frac{\\beta}{x-\\mu}}}{\\gamma}\\right)\\quad x > \\mu; \\gamma,\\beta>0\n" }, { "math_id": 10, "text": "\\phi" }, { "math_id": 11, "text": "\nf(x) = \\frac{\\sqrt{x}+\\sqrt{\\frac{1}{x}}}{2\\gamma x}\\phi\\left(\\frac{\\sqrt{x}-\\sqrt{\\frac{1}{x}}}{\\gamma}\\right)\\quad x > 0; \\gamma >0\n" }, { "math_id": 12, "text": "\nF(x) = \\Phi\\left(\\frac{\\sqrt{x} - \\sqrt{\\frac{1}{x}}}{\\gamma}\\right)\\quad x > 0; \\gamma > 0\n" }, { "math_id": 13, "text": "\nG(p) = \\frac{1}{4}\\left[\\gamma\\Phi^{-1}(p) + \\sqrt{4+\\left(\\gamma\\Phi^{-1}(p)\\right)^2}\\right]^2\n" } ]
https://en.wikipedia.org/wiki?curid=15779815
15780233
Tukey lambda distribution
Symmetric probability distribution Formalized by John Tukey, the Tukey lambda distribution is a continuous, symmetric probability distribution defined in terms of its quantile function. It is typically used to identify an appropriate distribution (see the comments below) and not used in statistical models directly. The Tukey lambda distribution has a single shape parameter, λ, and as with other probability distributions, it can be transformed with a location parameter, μ, and a scale parameter, σ. Since the general form of the probability distribution can be expressed in terms of the standard distribution, the subsequent formulas are given for the standard form of the function. Quantile function. For the standard form of the Tukey lambda distribution, the quantile function, formula_0 (i.e. the inverse function to the cumulative distribution function) and the quantile density function, formula_1 are formula_2 formula_3 For most values of the shape parameter, λ, the probability density function (PDF) and cumulative distribution function (CDF) must be computed numerically. The Tukey lambda distribution has a simple, closed form for the CDF and / or PDF only for a few exceptional values of the shape parameter, for example λ ∈ { 2, 1, 1/2, 0 } (see the uniform distribution for λ = 1 and λ = 2, and the logistic distribution for λ = 0). However, for any value of λ both the CDF and PDF can be tabulated for any number of cumulative probabilities, p, using the quantile function Q to calculate the value x for each cumulative probability p, with the probability density given by 1/"q"("p"), the reciprocal of the quantile density function. As is the usual case with statistical distributions, the Tukey lambda distribution can readily be used by looking up values in a prepared table. Moments. The Tukey lambda distribution is symmetric around zero, therefore the expected value of this distribution, if it exists, is equal to zero. The variance exists for λ > −1/2 and, except when λ = 0, is given by the formula formula_4 More generally, the n-th order moment is finite when λ > −1/"n" and is expressed (except when λ = 0) in terms of the beta function formula_5 Note that due to symmetry of the density function, all moments of odd orders, if they exist, are equal to zero. L-moments. Differently from the central moments, L-moments can be expressed in closed form. For formula_6 the formula_7th L-moment, formula_8 is given by formula_9 The first six L-moments can be presented as follows: formula_10 formula_11 formula_12 formula_13 formula_14 formula_15 Comments. The Tukey lambda distribution is actually a family of distributions that can approximate a number of common distributions; for example, it is exactly uniform for λ = 1, exactly logistic for λ = 0, approximately normal near λ = 0.14, and approximates the Cauchy distribution near λ = −1. The most common use of this distribution is to generate a Tukey lambda PPCC plot of a data set. Based on the value for λ with the best correlation, as shown on the PPCC plot, an appropriate model for the data is suggested. For example, if the best fit of the curve to the data occurs for a value of λ at or near 0.14, then empirically the data could be well modeled with a normal distribution. Values of λ less than 0.14 suggest a heavier-tailed distribution. A milepost at λ = 0 (logistic) would indicate quite fat tails, with the extreme limit at λ = −1 approximating Cauchy and small-sample versions of Student's "t". That is, as the best-fit value of λ varies from thin tails at 0.14 towards fat tails at −1, a bell-shaped PDF with increasingly heavy tails is suggested.
Similarly, an optimal curve-fit value of λ greater than 0.14 suggests a distribution with "exceptionally" thin tails (based on the point of view that the normal distribution itself is thin-tailed to begin with; the exponential distribution is often chosen as the exemplar of tails intermediate between fat and thin). Except for values of λ approaching 0 and those below, all the PDF functions discussed have finite support, between −1/|λ| and +1/|λ|. Since the Tukey lambda distribution is a symmetric distribution, the use of the Tukey lambda PPCC plot to determine a reasonable distribution to model the data only applies to symmetric distributions. A histogram of the data should provide evidence as to whether the data can be reasonably modeled with a symmetric distribution.
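Because the distribution is defined through its quantile function, working with it numerically means going through Q and its derivative. The sketch below is an editorial illustration: it implements Q(p; λ) and the quantile density q(p; λ), recovers the density as 1/q at the corresponding x = Q(p), and spot-checks the closed-form cases λ = 1 (uniform on [−1, 1]) and λ = 0 (logistic).

```python
import numpy as np

def Q(p, lam):
    """Quantile function of the standard Tukey lambda distribution."""
    p = np.asarray(p, dtype=float)
    if lam == 0:
        return np.log(p / (1 - p))                 # logistic case
    return (p**lam - (1 - p) ** lam) / lam

def q(p, lam):
    """Quantile density dQ/dp."""
    p = np.asarray(p, dtype=float)
    return p ** (lam - 1) + (1 - p) ** (lam - 1)

# Tabulate the distribution for one lambda: x = Q(p), CDF(x) = p, PDF(x) = 1/q(p).
lam = 0.14
for pi in np.linspace(0.01, 0.99, 5):
    x_val = float(Q(pi, lam))
    pdf_val = float(1 / q(pi, lam))
    print(f"p={pi:.2f}  x={x_val:+.4f}  pdf={pdf_val:.4f}")

# Sanity checks against the closed-form special cases.
grid = np.linspace(0.05, 0.95, 19)
assert np.allclose(Q(grid, 1), 2 * grid - 1)                  # uniform on [-1, 1]
assert np.allclose(Q(grid, 0), np.log(grid / (1 - grid)))     # logistic
assert np.allclose(1 / q(grid, 1), 0.5)                       # uniform density = 1/2
```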
[ { "math_id": 0, "text": "~ Q(p) ~," }, { "math_id": 1, "text": "~ q = \\frac{\\ \\operatorname{d}Q\\ }{ \\operatorname{d}p }\\ ," }, { "math_id": 2, "text": "\n\\ Q\\left(\\ p\\ ; \\lambda\\ \\right) ~=~ \n\\begin{cases} \n\\tfrac{ 1 }{\\ \\lambda\\ } \\left[\\ p^\\lambda - (1 - p)^\\lambda\\ \\right]\\ , &\\ \\mbox{ if }\\ \\lambda \\ne 0~, \\\\ {} \\\\\n\\ln\\left( \\frac{ p }{\\ 1 - p\\ } \\right)~, &\\ \\mbox{ if }\\ \\lambda = 0 ~. \n\\end{cases}" }, { "math_id": 3, "text": " q \\left(\\ p\\ ; \\lambda\\ \\right) ~=~ \\frac{\\ \\operatorname{d}Q\\ }{ \\operatorname{d}p } ~=~ p^{ \\lambda - 1 } + \\left(\\ 1 - p\\ \\right)^{ \\lambda - 1 } ~." }, { "math_id": 4, "text": "\n \\operatorname{Var}[\\ X\\ ] = \\frac{2}{\\lambda^2}\\bigg(\\ \\frac{ 1 }{\\ 1 + 2\\lambda\\ } ~ - ~ \\frac{\\ \\Gamma(\\lambda+1)^2\\ }{\\ \\Gamma(2\\lambda+2)\\ }\\ \\bigg) ~.\n" }, { "math_id": 5, "text": "\n\\mu_n \\equiv \\operatorname{E}[\\ X^n\\ ] = \\frac{1}{\\lambda^n} \\sum_{k=0}^n\\ (-1)^k\\ {n \\choose k}\\ \\Beta(\\ \\lambda\\ k + 1\\ ,\\ (n - k)\\ \\lambda + 1\\ ) ~.\n" }, { "math_id": 6, "text": "\\lambda > -1\\ ," }, { "math_id": 7, "text": "\\ r" }, { "math_id": 8, "text": "\\ \\ell_r\\ ," }, { "math_id": 9, "text": "\n\\begin{align}\n\\ell_{r} &= \\frac{\\ 1 + (-1)^{r}\\ }{ \\lambda }\\ \\sum_{k=0}^{r-1}\\ (-1)^{ r - 1 - k }\\ \\binom{r-1}{k}\\ \\binom{ r + k -1 }{ k }\\ \\left(\\frac{ 1 }{\\ k + 1 + \\lambda\\ } \\right) \\\\ {} \\\\\n&= \\bigl( 1 + (-1)^{r} \\bigr) \\frac{\\ \\Gamma( 1 + \\lambda )\\ \\Gamma( r - 1 - \\lambda )\\ }{\\ \\Gamma( 1 - \\lambda )\\ \\Gamma( r + 1 + \\lambda)\\ } ~.\n\\end{align}\n" }, { "math_id": 10, "text": " \\ell_{1} = ~~ 0\\ ," }, { "math_id": 11, "text": " \\ell_2 = \\frac{ 2 }{\\ \\lambda\\ }\\ \\left[\\ -\\frac{ 1 }{\\ 1 + \\lambda\\ } + \\frac{ 2 }{\\ 2 + \\lambda\\ }\\ \\right]\\ ," }, { "math_id": 12, "text": " \\ell_3 = ~~ 0\\ ," }, { "math_id": 13, "text": " \\ell_4 = \\frac{2}{\\ \\lambda\\ }\\ \\left[ - \\frac{ 1 }{\\ 1 + \\lambda\\ } + \\frac{ 12 }{\\ 2 + \\lambda\\ } - \\frac{ 3 0}{\\ 3 + \\lambda\\ } + \\frac{ 20 }{\\ 4 + \\lambda\\ }\\ \\right]\\ ," }, { "math_id": 14, "text": " \\ell_5 = ~~ 0\\ ," }, { "math_id": 15, "text": " \\ell_6 = \\frac{ 2 }{\\ \\lambda\\ }\\ \\left[\\ -\\frac{ 1 }{\\ 1 + \\lambda\\ } + \\frac{ 30 }{\\ 2 + \\lambda\\ } - \\frac{ 210 }{\\ 3 + \\lambda\\ } +\\frac{ 560 }{\\ 4 + \\lambda\\ } - \\frac{ 630 }{\\ 5 + \\lambda\\ } + \\frac{ 252 }{\\ 6 + \\lambda\\ }\\ \\right] ~. " } ]
https://en.wikipedia.org/wiki?curid=15780233
1578059
Desargues graph
Distance-transitive cubic graph with 20 nodes and 30 edges In the mathematical field of graph theory, the Desargues graph is a distance-transitive, cubic graph with 20 vertices and 30 edges. It is named after Girard Desargues, arises from several different combinatorial constructions, has a high level of symmetry, is the only known non-planar cubic partial cube, and has been applied in chemical databases. The name "Desargues graph" has also been used to refer to a ten-vertex graph, the complement of the Petersen graph, which can also be formed as the bipartite half of the 20-vertex Desargues graph. Constructions. There are several different ways of constructing the Desargues graph: Algebraic properties. The Desargues graph is a symmetric graph: it has symmetries that take any vertex to any other vertex and any edge to any other edge. Its symmetry group has order 240, and is isomorphic to the product of a symmetric group on 5 points with a group of order 2. One can interpret this product representation of the symmetry group in terms of the constructions of the Desargues graph: the symmetric group on five points is the symmetry group of the Desargues configuration, and the order-2 subgroup swaps the roles of the vertices that represent points of the Desargues configuration and the vertices that represent lines. Alternatively, in terms of the bipartite Kneser graph, the symmetric group on five points acts separately on the two-element and three-element subsets of the five points, and complementation of subsets forms a group of order two that transforms one type of subset into the other. The symmetric group on five points is also the symmetry group of the Petersen graph, and the order-2 subgroup swaps the vertices within each pair of vertices formed in the double cover construction. The generalized Petersen graph "G"("n", "k") is vertex-transitive if and only if "n" = 10 and "k" = 2 or if "k"2 ≡ ±1 (mod "n") and is edge-transitive only in the following seven cases: ("n", "k") = (4, 1), (5, 2), (8, 3), (10, 2), (10, 3), (12, 5), (24, 5). So the Desargues graph is one of only seven symmetric Generalized Petersen graphs. Among these seven graphs are the cubical graph "G"(4, 1), the Petersen graph "G"(5, 2), the Möbius–Kantor graph "G"(8, 3), the dodecahedral graph "G"(10, 2) and the Nauru graph "G"(12, 5). The characteristic polynomial of the Desargues graph is formula_0 Therefore, the Desargues graph is an integral graph: its spectrum consists entirely of integers. Applications. In chemistry, the Desargues graph is known as the Desargues–Levi graph; it is used to organize systems of stereoisomers of 5-ligand compounds. In this application, the thirty edges of the graph correspond to pseudorotations of the ligands. Other properties. The Desargues graph has rectilinear crossing number 6, and is the smallest cubic graph with that crossing number (sequence in the OEIS). It is the only known nonplanar cubic partial cube. The Desargues graph has chromatic number 2, chromatic index 3, radius 5, diameter 5 and girth 6. It is also a 3-vertex-connected and a 3-edge-connected Hamiltonian graph. It has book thickness 3 and queue number 2. All the cubic distance-regular graphs are known. The Desargues graph is one of the 13 such graphs. The Desargues graph can be embedded as a self-Petrie dual regular map in the non-orientable manifold of genus 6, with decagonal faces. Erv Wilson used this diagram to show the combination product sets (CPS) of the 3 out of 6 set. 
He called this structure the Eikosany (https://www.anaphoria.com/eikosanystructures.pdf).
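The bipartite Kneser description mentioned above is easy to realize in code. The sketch below is an editorial illustration (it builds the graph from scratch rather than relying on a library generator): it takes the 2-element and 3-element subsets of a 5-element set, joins a pair when one contains the other, and checks a few of the properties quoted in the article: 20 vertices, 30 edges, 3-regularity, and an integral spectrum.

```python
import numpy as np
from itertools import combinations

ground = range(5)
twos = [frozenset(c) for c in combinations(ground, 2)]
threes = [frozenset(c) for c in combinations(ground, 3)]
vertices = twos + threes                      # 10 + 10 = 20 vertices

# Edge when the 2-element set is contained in the 3-element set (bipartite Kneser construction).
index = {v: i for i, v in enumerate(vertices)}
A = np.zeros((20, 20), dtype=int)
for s in twos:
    for t in threes:
        if s < t:                             # proper subset
            A[index[s], index[t]] = A[index[t], index[s]] = 1

print("vertices:", len(vertices))             # 20
print("edges:", A.sum() // 2)                 # 30
print("3-regular:", set(A.sum(axis=0)) == {3})

# Spectrum should be integral: -3, -2 (x4), -1 (x5), 1 (x5), 2 (x4), 3.
eig = np.sort(np.linalg.eigvalsh(A.astype(float)))
print("eigenvalues:", np.round(eig, 6))
assert np.allclose(eig, np.round(eig))        # integral spectrum, as the article states
```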
[ { "math_id": 0, "text": "(x-3) (x-2)^4 (x-1)^5 (x+1)^5 (x+2)^4 (x+3). \\, " } ]
https://en.wikipedia.org/wiki?curid=1578059
1578212
Hille–Yosida theorem
In functional analysis, the Hille–Yosida theorem characterizes the generators of strongly continuous one-parameter semigroups of linear operators on Banach spaces. It is sometimes stated for the special case of contraction semigroups, with the general case being called the Feller–Miyadera–Phillips theorem (after William Feller, Isao Miyadera, and Ralph Phillips). The contraction semigroup case is widely used in the theory of Markov processes. In other scenarios, the closely related Lumer–Phillips theorem is often more useful in determining whether a given operator generates a strongly continuous contraction semigroup. The theorem is named after the mathematicians Einar Hille and Kōsaku Yosida who independently discovered the result around 1948. Formal definitions. If "X" is a Banach space, a one-parameter semigroup of operators on "X" is a family of operators indexed on the non-negative real numbers The semigroup is said to be strongly continuous, also called a ("C"0) semigroup, if and only if the mapping formula_2 is continuous for all "x ∈ X", where "[0, ∞)" has the usual topology and "X" has the norm topology. The infinitesimal generator of a one-parameter semigroup "T" is an operator "A" defined on a possibly proper subspace of "X" as follows: formula_3 has a limit as "h" approaches "0" from the right. formula_4 The infinitesimal generator of a strongly continuous one-parameter semigroup is a closed linear operator defined on a dense linear subspace of "X". The Hille–Yosida theorem provides a necessary and sufficient condition for a closed linear operator "A" on a Banach space to be the infinitesimal generator of a strongly continuous one-parameter semigroup. Statement of the theorem. Let "A" be a linear operator defined on a linear subspace "D"("A") of the Banach space "X", "ω" a real number, and "M" &gt; 0. Then "A" generates a strongly continuous semigroup "T" that satisfies formula_5 if and only if formula_6 Hille-Yosida theorem for contraction semigroups. In the general case the Hille–Yosida theorem is mainly of theoretical importance since the estimates on the powers of the resolvent operator that appear in the statement of the theorem can usually not be checked in concrete examples. In the special case of contraction semigroups ("M" = 1 and "ω" = 0 in the above theorem) only the case "n" = 1 has to be checked and the theorem also becomes of some practical importance. The explicit statement of the Hille–Yosida theorem for contraction semigroups is: Let "A" be a linear operator defined on a linear subspace "D"("A") of the Banach space "X". Then "A" generates a contraction semigroup if and only if formula_7 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
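In finite dimensions every matrix A generates the semigroup T(t) = exp(tA), and the contraction case of the theorem can be checked directly. The sketch below is an editorial illustration with an arbitrarily chosen matrix: it takes a symmetric negative semidefinite A, confirms that the operator norm of exp(tA) stays at most 1 for several t, and verifies the resolvent bound from the contraction form of the theorem for several λ > 0.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = -B @ B.T                         # symmetric negative semidefinite generator

def spectral_norm(M):
    return np.linalg.norm(M, 2)      # largest singular value

# T(t) = exp(tA) is a contraction semigroup: ||T(t)|| <= 1 for all t >= 0.
for t in [0.0, 0.1, 1.0, 10.0]:
    assert spectral_norm(expm(t * A)) <= 1 + 1e-12

# Hille-Yosida bound for contraction semigroups: ||(lambda*I - A)^{-1}|| <= 1/lambda.
I = np.eye(5)
for lam in [0.01, 0.5, 1.0, 7.0]:
    resolvent = np.linalg.inv(lam * I - A)
    assert spectral_norm(resolvent) <= 1 / lam + 1e-12
    print(f"lambda={lam:5.2f}  resolvent norm={spectral_norm(resolvent):.4f}  bound={1/lam:.4f}")
```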
[ { "math_id": 0, "text": " T(0)= I \\quad " }, { "math_id": 1, "text": " T(s+t)= T(s) \\circ T(t), \\quad \\forall t,s \\geq 0. " }, { "math_id": 2, "text": " t \\mapsto T(t) x " }, { "math_id": 3, "text": " h^{-1}\\bigg(T(h) x - x\\bigg) " }, { "math_id": 4, "text": " t \\mapsto T(t)x. " }, { "math_id": 5, "text": "\\|T(t)\\|\\leq M{\\rm e}^{\\omega t}" }, { "math_id": 6, "text": "\\|(\\lambda I-A)^{-n}\\|\\leq\\frac{M}{(\\lambda-\\omega)^n}." }, { "math_id": 7, "text": "\\|(\\lambda I-A)^{-1}\\|\\leq\\frac{1}{\\lambda}." } ]
https://en.wikipedia.org/wiki?curid=1578212
15782871
Lattice (discrete subgroup)
Discrete subgroup in a locally compact topological group In Lie theory and related areas of mathematics, a lattice in a locally compact group is a discrete subgroup with the property that the quotient space has finite invariant measure. In the special case of subgroups of R"n", this amounts to the usual geometric notion of a lattice as a periodic subset of points, and both the algebraic structure of lattices and the geometry of the space of all lattices are relatively well understood. The theory is particularly rich for lattices in semisimple Lie groups or more generally in semisimple algebraic groups over local fields. In particular there is a wealth of rigidity results in this setting, and a celebrated theorem of Grigory Margulis states that in most cases all lattices are obtained as arithmetic groups. Lattices are also well-studied in some other classes of groups, in particular groups associated to Kac–Moody algebras and automorphism groups of regular trees (the latter are known as "tree lattices"). Lattices are of interest in many areas of mathematics: in geometric group theory (as particularly nice examples of discrete groups), in differential geometry (through the construction of locally homogeneous manifolds), in number theory (through arithmetic groups), in ergodic theory (through the study of homogeneous flows on the quotient spaces) and in combinatorics (through the construction of expanding Cayley graphs and other combinatorial objects). Generalities on lattices. Informal discussion. Lattices are best thought of as discrete approximations of continuous groups (such as Lie groups). For example, it is intuitively clear that the subgroup formula_0 of integer vectors "looks like" the real vector space formula_1 in some sense, while both groups are essentially different: one is finitely generated and countable, while the other is not finitely generated and has the cardinality of the continuum. Rigorously defining the meaning of "approximation of a continuous group by a discrete subgroup" from the previous paragraph, in order to get a notion generalising the example formula_2, is a matter of what the definition is designed to achieve. Perhaps the most obvious way to say that a subgroup "approximates" a larger group is to ask that the larger group can be covered by the translates of a "small" subset by all elements of the subgroup. In a locally compact topological group there are two immediately available notions of "small": topological (a compact, or relatively compact subset) or measure-theoretical (a subset of finite Haar measure). Since the Haar measure is a Radon measure it gives finite mass to compact subsets, so the second definition is more general. The definition of a lattice used in mathematics relies upon the second meaning (in particular to include such examples as formula_3) but the first also has its own interest (such lattices are called uniform). Other notions are coarse equivalence and the stronger quasi-isometry. Uniform lattices are quasi-isometric to their ambient groups, but non-uniform ones are not even coarsely equivalent to them. Definition. Let formula_4 be a locally compact group and formula_5 a discrete subgroup (this means that there exists a neighbourhood formula_6 of the identity element formula_7 of formula_4 such that formula_8). Then formula_5 is called a lattice in formula_4 if in addition there exists a Borel measure formula_9 on the quotient space formula_10 which is finite (i.e. 
formula_11) and formula_4-invariant (meaning that for any formula_12 and any open subset formula_13 the equality formula_14 is satisfied). A slightly more sophisticated formulation is as follows: suppose in addition that formula_4 is unimodular, then since formula_5 is discrete it is also unimodular and by general theorems there exists a unique formula_4-invariant Borel measure on formula_10 up to scaling. Then formula_5 is a lattice if and only if this measure is finite. In the case of discrete subgroups this invariant measure coincides locally with the Haar measure and hence a discrete subgroup in a locally compact group formula_4 being a lattice is equivalent to it having a fundamental domain (for the action on formula_4 by left-translations) of finite volume for the Haar measure. A lattice formula_15 is called uniform (or cocompact) when the quotient space formula_16 is compact (and "non-uniform" otherwise). Equivalently a discrete subgroup formula_15 is a uniform lattice if and only if there exists a compact subset formula_17 with formula_18. Note that if formula_5 is any discrete subgroup in formula_4 such that formula_19 is compact then formula_5 is automatically a lattice in formula_4. First examples. The fundamental, and simplest, example is the subgroup formula_0 which is a lattice in the Lie group formula_1. A slightly more complicated example is given by the discrete Heisenberg group inside the continuous Heisenberg group. If formula_4 is a discrete group then a lattice in formula_4 is exactly a subgroup formula_5 of finite index (i.e. the quotient set formula_16 is finite). All of these examples are uniform. A non-uniform example is given by the modular group formula_20 inside formula_21, and also by the higher-dimensional analogues formula_22. Any finite-index subgroup of a lattice is also a lattice in the same group. More generally, a subgroup commensurable to a lattice is a lattice. Which groups have lattices? Not every locally compact group contains a lattice, and there is no general group-theoretical sufficient condition for this. On the other hand, there are plenty of more specific settings where such criteria exist. For example, the existence or non-existence of lattices in Lie groups is a well-understood topic. As we mentioned, a necessary condition for a group to contain a lattice is that the group must be unimodular. This allows for the easy construction of groups without lattices, for example the group of invertible upper triangular matrices or the affine groups. It is also not very hard to find unimodular groups without lattices, for example certain nilpotent Lie groups as explained below. A stronger condition than unimodularity is simplicity. This is sufficient to imply the existence of a lattice in a Lie group, but in the more general setting of locally compact groups there exist simple groups without lattices, for example the "Neretin groups". Lattices in solvable Lie groups. Nilpotent Lie groups. For nilpotent groups the theory simplifies much from the general case, and stays similar to the case of Abelian groups. All lattices in a nilpotent Lie group are uniform, and if formula_23 is a connected simply connected nilpotent Lie group (equivalently it does not contain a nontrivial compact subgroup) then a discrete subgroup is a lattice if and only if it is not contained in a proper connected subgroup (this generalises the fact that a discrete subgroup in a vector space is a lattice if and only if it spans the vector space). 
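The parenthetical fact above about discrete subgroups of vector spaces can be made concrete. In the sketch below (plain numpy; the function names are ad hoc, not from any library), the columns of "B" are assumed to be linearly independent, so the subgroup they generate over the integers is automatically discrete; it is a lattice in formula_1 exactly when the columns span the whole space, and its covolume is then the absolute value of the determinant of a basis matrix. A finite-index sublattice multiplies the covolume by the index.

```python
# Sketch for the vector-space case: a discrete subgroup of R^n generated by
# linearly independent vectors is a lattice iff those vectors span R^n.
import numpy as np

def is_lattice(B: np.ndarray) -> bool:
    """Columns of B (assumed linearly independent) generate a lattice in R^n
    iff they span R^n, i.e. iff there are n of them."""
    n = B.shape[0]
    return np.linalg.matrix_rank(B) == n

def covolume(B: np.ndarray) -> float:
    """Covolume (Haar measure of a fundamental domain) of the lattice spanned
    by the columns of a square basis matrix B."""
    return abs(np.linalg.det(B))

Zn = np.eye(2)                       # the standard lattice Z^2 in R^2
sub = np.array([[2.0, 0.0],
                [0.0, 3.0]])         # the index-6 sublattice 2Z x 3Z

print(is_lattice(Zn), covolume(Zn))      # True 1.0
print(is_lattice(sub), covolume(sub))    # True 6.0  (covolume = index * 1)
print(is_lattice(np.array([[1.0],
                           [0.0]])))     # False: Z x {0} does not span R^2
```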
A nilpotent Lie group formula_23 contains a lattice if and only if the Lie algebra formula_24 of formula_23 can be defined over the rationals. That is, if and only if the structure constants of formula_24 are rational numbers. More precisely: if formula_23 is a nilpotent simply connected Lie group whose Lie algebra formula_24 has only rational structure constants, and formula_25 is a lattice in formula_24 (in the more elementary sense of Lattice (group)) then formula_26 generates a lattice in formula_23; conversely, if formula_5 is a lattice in formula_23 then formula_27 generates a lattice in formula_24. A lattice in a nilpotent Lie group formula_23 is always finitely generated (and hence finitely presented since it is itself nilpotent); in fact it is generated by at most formula_28 elements. Finally, a nilpotent group is isomorphic to a lattice in a nilpotent Lie group if and only if it contains a subgroup of finite index which is torsion-free and finitely generated. The general case. The criterion for nilpotent Lie groups to have a lattice given above does not apply to more general solvable Lie groups. It remains true that any lattice in a solvable Lie group is uniform and that lattices in solvable groups are finitely presented. Not all finitely generated solvable groups are lattices in a Lie group. An algebraic criterion is that the group be polycyclic. Lattices in semisimple Lie groups. Arithmetic groups and existence of lattices. If formula_4 is a semisimple linear algebraic group in formula_29 which is defined over the field formula_30 of rational numbers (i.e. the polynomial equations defining formula_4 have their coefficients in formula_30) then it has a subgroup formula_31. A fundamental theorem of Armand Borel and Harish-Chandra states that formula_5 is always a lattice in formula_4; the simplest example of this is the subgroup formula_3. Generalising the construction above one gets the notion of an "arithmetic lattice" in a semisimple Lie group. Since all semisimple Lie groups can be defined over formula_30 a consequence of the arithmetic construction is that any semisimple Lie group contains a lattice. Irreducibility. When the Lie group formula_4 splits as a product formula_32 there is an obvious construction of lattices in formula_4 from the smaller groups: if formula_33 are lattices then formula_34 is a lattice as well. Roughly, a lattice is then said to be "irreducible" if it does not come from this construction. More formally, if formula_35 is the decomposition of formula_4 into simple factors, a lattice formula_15 is said to be irreducible if either of the following equivalent conditions holds: An example of an irreducible lattice is given by the subgroup formula_37 which we view as a subgroup of formula_38 via the map formula_39 where formula_40 is the Galois map sending a matrix with coefficients formula_41 to formula_42. Rank 1 versus higher rank. The real rank of a Lie group formula_4 is the maximal dimension of a formula_43-split torus of formula_4 (an abelian subgroup containing only semisimple elements with at least one real eigenvalue distinct from formula_44). The semisimple Lie groups of real rank 1 without compact factors are (up to isogeny) those in the following list (see List of simple Lie groups): formula_45 (the isometry groups of real hyperbolic spaces, for formula_47), formula_48 (the isometry groups of complex hyperbolic spaces), formula_49 (the isometry groups of quaternionic hyperbolic spaces), and the exceptional group formula_50 (a real form of formula_51). The real rank of a Lie group has a significant influence on the behaviour of the lattices it contains. 
In particular the behaviour of lattices in the first two families of groups (and to a lesser extent that of lattices in the latter two) differs much from that of irreducible lattices in groups of higher rank. For example: Kazhdan's property (T). The property known as (T) was introduced by Kazhdan to study the algebraic structure of lattices in certain Lie groups when the classical, more geometric methods failed or at least were not as efficient. The fundamental result when studying lattices is the following: "A lattice in a locally compact group has property (T) if and only if the group itself has property (T)." Using harmonic analysis it is possible to classify semisimple Lie groups according to whether or not they have the property. As a consequence we get the following result, further illustrating the dichotomy of the previous section: Finiteness properties. Lattices in semisimple Lie groups are always finitely presented, and actually satisfy stronger finiteness conditions. For uniform lattices this is a direct consequence of cocompactness. In the non-uniform case this can be proved using reduction theory. It is easier to prove finite presentability for groups with property (T); however, there is a geometric proof which works for all semisimple Lie groups. Riemannian manifolds associated to lattices in Lie groups. Left-invariant metrics. If formula_4 is a Lie group then from an inner product formula_55 on the tangent space formula_56 (the Lie algebra of formula_4) one can construct a Riemannian metric on formula_4 as follows: if formula_57 belong to the tangent space at a point formula_58 put formula_59 where formula_60 indicates the tangent map (at formula_61) of the diffeomorphism formula_62 of formula_4. The maps formula_63 for formula_58 are by definition isometries for this metric formula_64. In particular, if formula_5 is any discrete subgroup in formula_4 (so that it acts freely and properly discontinuously by left-translations on formula_4) the quotient formula_65 is a Riemannian manifold locally isometric to formula_4 with the metric formula_64. The Riemannian volume form associated to formula_66 defines a Haar measure on formula_4 and we see that the quotient manifold is of finite Riemannian volume if and only if formula_5 is a lattice. Interesting examples in this class of Riemannian spaces include compact flat manifolds and nilmanifolds. Locally symmetric spaces. A natural bilinear form on formula_56 is given by the Killing form. If formula_4 is not compact it is not definite and hence not an inner product: however when formula_4 is semisimple and formula_67 is a maximal compact subgroup it can be used to define a formula_4-invariant metric on the homogeneous space formula_68: such Riemannian manifolds are called symmetric spaces of non-compact type without Euclidean factors. A subgroup formula_15 acts freely, properly discontinuously on formula_69 if and only if it is discrete and torsion-free. The quotients formula_70 are called locally symmetric spaces. There is thus a bijective correspondence between complete locally symmetric spaces locally isomorphic to formula_69 and of finite Riemannian volume, and torsion-free lattices in formula_4. This correspondence can be extended to all lattices by adding orbifolds on the geometric side. Lattices in p-adic Lie groups. A class of groups with similar properties (with respect to lattices) to real semisimple Lie groups are semisimple algebraic groups over local fields of characteristic 0, for example the p-adic fields formula_71. 
There is an arithmetic construction similar to the real case, and the dichotomy between higher rank and rank one also holds in this case, in a more marked form. Let formula_4 be an algebraic group over formula_71 of split-formula_71-rank "r". Then: In the latter case all lattices are in fact free groups (up to finite index). S-arithmetic groups. More generally one can look at lattices in groups of the form formula_72 where formula_73 is a semisimple algebraic group over formula_71. Usually formula_74 is allowed, in which case formula_75 is a real Lie group. An example of such a lattice is given by formula_76. This arithmetic construction can be generalised to obtain the notion of an "S-arithmetic group". The Margulis arithmeticity theorem applies to this setting as well. In particular, if at least two of the factors formula_73 are noncompact then any irreducible lattice in formula_4 is S-arithmetic. Lattices in adelic groups. If formula_77 is a semisimple algebraic group over a number field formula_78 and formula_79 its adèle ring then the group formula_80 of adélic points is well-defined (modulo some technicalities) and it is a locally compact group which naturally contains the group formula_81 of formula_78-rational points as a discrete subgroup. The Borel–Harish-Chandra theorem extends to this setting, and formula_82 is a lattice. The strong approximation theorem relates the quotient formula_83 to more classical S-arithmetic quotients. This fact makes the adèle groups very effective as tools in the theory of automorphic forms. In particular modern forms of the trace formula are usually stated and proven for adélic groups rather than for Lie groups. Rigidity. Rigidity results. Another group of phenomena concerning lattices in semisimple algebraic groups is collectively known as "rigidity". Here are three classical examples of results in this category. Local rigidity results state that in most situations every subgroup which is sufficiently "close" to a lattice (in the intuitive sense, formalised by Chabauty topology or by the topology on a character variety) is actually conjugate to the original lattice by an element of the ambient Lie group. A consequence of local rigidity and the Kazhdan–Margulis theorem is Wang's theorem: in a given group (with a fixed Haar measure), for any "v" &gt; 0 there are only finitely many (up to conjugation) lattices with covolume bounded by "v". The Mostow rigidity theorem states that for lattices in simple Lie groups not locally isomorphic to formula_21 (the group of 2 by 2 matrices with determinant 1) any isomorphism of lattices is essentially induced by an isomorphism between the groups themselves. In particular, a lattice in a Lie group "remembers" the ambient Lie group through its group structure. The first statement is sometimes called "strong rigidity" and is due to George Mostow and Gopal Prasad (Mostow proved it for cocompact lattices and Prasad extended it to the general case). "Superrigidity" provides (for Lie groups and algebraic groups over local fields of higher rank) a strengthening of both local and strong rigidity, dealing with arbitrary homomorphisms from a lattice in an algebraic group "G" into another algebraic group "H". It was proven by Grigori Margulis and is an essential ingredient in the proof of his arithmeticity theorem. Nonrigidity in low dimensions. The only semisimple Lie groups for which Mostow rigidity does not hold are all groups locally isomorphic to formula_84. 
In this case there are in fact continuously many lattices and they give rise to Teichmüller spaces. Nonuniform lattices in the group formula_85 are not locally rigid. In fact they are accumulation points (in the Chabauty topology) of lattices of smaller covolume, as demonstrated by hyperbolic Dehn surgery. As lattices in rank-one p-adic groups are virtually free groups they are very non-rigid. Tree lattices. Definition. Let formula_86 be a tree with a cocompact group of automorphisms; for example, formula_86 can be a regular or biregular tree. The group of automorphisms formula_87 of formula_86 is a locally compact group (when endowed with the compact-open topology, in which a basis of neighbourhoods of the identity is given by the stabilisers of finite subtrees, which are compact). Any group which is a lattice in some formula_87 is then called a "tree lattice". The discreteness in this case is easy to see from the group action on the tree: a subgroup of formula_87 is discrete if and only if all vertex stabilisers are finite groups. It is easily seen from the basic theory of group actions on trees that uniform tree lattices are virtually free groups. Thus the more interesting tree lattices are the non-uniform ones, equivalently those for which the quotient graph formula_88 is infinite. The existence of such lattices is not easy to see. Tree lattices from algebraic groups. If formula_78 is a local field of positive characteristic (i.e. a completion of a function field of a curve over a finite field, for example the field of formal Laurent power series formula_89) and formula_4 an algebraic group defined over formula_78 of formula_78-split rank one, then any lattice in formula_4 is a tree lattice through its action on the Bruhat–Tits building which in this case is a tree. In contrast to the characteristic 0 case such lattices can be nonuniform, and in this case they are never finitely generated. Tree lattices from Bass–Serre theory. If formula_5 is the fundamental group of an infinite graph of groups, all of whose vertex groups are finite, then under additional necessary assumptions on the index of the edge groups and the size of the vertex groups, the action of formula_5 on the Bass–Serre tree associated to the graph of groups realises it as a tree lattice. Existence criterion. More generally one can ask the following question: if formula_90 is a closed subgroup of formula_87, under which conditions does formula_90 contain a lattice? The existence of a uniform lattice is equivalent to formula_90 being unimodular and the quotient formula_91 being finite. The general existence theorem is more subtle: it is necessary and sufficient that formula_90 be unimodular, and that the quotient formula_91 be of "finite volume" in a suitable sense (which can be expressed combinatorially in terms of the action of formula_90), more general than the stronger condition that the quotient be finite (as proven by the very existence of nonuniform tree lattices). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
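As a concrete check of the finite-covolume condition for the non-uniform lattice formula_20 mentioned earlier (see the first examples and the locally symmetric spaces above), the following sketch integrates the hyperbolic area element over the standard fundamental domain of the modular group acting on the upper half-plane: the domain is unbounded, yet its hyperbolic area is finite and equal to π/3. This is only an illustrative computation assuming numpy and scipy; it is not part of the original article.

```python
# Numerical check that the modular group is a non-uniform lattice: the
# standard fundamental domain {|x| <= 1/2, x^2 + y^2 >= 1} for its action on
# the hyperbolic plane is unbounded but has finite hyperbolic area (= pi/3).
import numpy as np
from scipy.integrate import quad

# The hyperbolic area element is dx dy / y^2; integrating y from
# sqrt(1 - x^2) to infinity leaves the integrand 1 / sqrt(1 - x^2).
area, _ = quad(lambda x: 1.0 / np.sqrt(1.0 - x * x), -0.5, 0.5)

print(area, np.pi / 3)   # both are approximately 1.0471976
```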
[ { "math_id": 0, "text": "\\mathbb Z^n" }, { "math_id": 1, "text": "\\mathbb R^n" }, { "math_id": 2, "text": "\\mathbb Z^n \\subset \\mathbb R^n" }, { "math_id": 3, "text": "\\mathrm{SL}_2(\\mathbb Z) \\subset \\mathrm{SL}_2(\\mathbb R)" }, { "math_id": 4, "text": "G" }, { "math_id": 5, "text": "\\Gamma" }, { "math_id": 6, "text": "U" }, { "math_id": 7, "text": "e_G" }, { "math_id": 8, "text": "\\Gamma \\cap U = \\{ e_G \\}" }, { "math_id": 9, "text": "\\mu" }, { "math_id": 10, "text": "G / \\Gamma" }, { "math_id": 11, "text": " \\mu(G / \\Gamma) < +\\infty" }, { "math_id": 12, "text": "g \\in G" }, { "math_id": 13, "text": " W \\subset G / \\Gamma" }, { "math_id": 14, "text": "\\mu(gW) = \\mu(W)" }, { "math_id": 15, "text": "\\Gamma \\subset G" }, { "math_id": 16, "text": "G/\\Gamma" }, { "math_id": 17, "text": "C \\subset G" }, { "math_id": 18, "text": "G = \\bigcup {}_{\\gamma \\in \\Gamma} \\, C\\gamma" }, { "math_id": 19, "text": "<math>G/\\Gamma</matH>" }, { "math_id": 20, "text": "\\mathrm{SL}_2(\\mathbb Z)" }, { "math_id": 21, "text": "\\mathrm{SL}_2(\\mathbb R)" }, { "math_id": 22, "text": "\\mathrm{SL}_n(\\mathbb Z) \\subset \\mathrm{SL}_n(\\mathbb R)" }, { "math_id": 23, "text": "N" }, { "math_id": 24, "text": "\\mathfrak n" }, { "math_id": 25, "text": "L" }, { "math_id": 26, "text": "\\exp(L)" }, { "math_id": 27, "text": "\\exp^{-1}(\\Gamma)" }, { "math_id": 28, "text": "\\dim(N)" }, { "math_id": 29, "text": "\\mathrm{GL}_n(\\mathbb R)" }, { "math_id": 30, "text": "\\mathbb Q" }, { "math_id": 31, "text": "\\Gamma = G \\cap \\mathrm{GL}_n(\\mathbb Z)" }, { "math_id": 32, "text": "G = G_1 \\times G_2" }, { "math_id": 33, "text": "\\Gamma_1 \\subset G_1, \\Gamma_2 \\subset G_2" }, { "math_id": 34, "text": "\\Gamma_1 \\times \\Gamma_2 \\subset G" }, { "math_id": 35, "text": "G = G_1 \\times \\ldots \\times G_r" }, { "math_id": 36, "text": "G_{i_1} \\times \\ldots \\times G_{i_k}" }, { "math_id": 37, "text": "\\mathrm{SL}_2(\\mathbb Z[\\sqrt 2])" }, { "math_id": 38, "text": "\\mathrm{SL}_2(\\mathbb R) \\times \\mathrm{SL}_2(\\mathbb R)" }, { "math_id": 39, "text": "g \\mapsto (g, \\sigma(g))" }, { "math_id": 40, "text": "\\sigma" }, { "math_id": 41, "text": "a_i+b_i\\sqrt 2" }, { "math_id": 42, "text": "<matH>a_i - b_i \\sqrt 2</math>" }, { "math_id": 43, "text": "\\mathbb R" }, { "math_id": 44, "text": "\\pm 1" }, { "math_id": 45, "text": "\\mathrm{SO}(n,1)" }, { "math_id": 46, "text": "(n, 1)" }, { "math_id": 47, "text": "n \\ge 2" }, { "math_id": 48, "text": "\\mathrm{SU}(n,1)" }, { "math_id": 49, "text": "\\mathrm{Sp}(n,1)" }, { "math_id": 50, "text": "F_4^{-20}" }, { "math_id": 51, "text": "F_4" }, { "math_id": 52, "text": "\\mathrm{SU}(2,1),\\mathrm{SU}(3,1)" }, { "math_id": 53, "text": "\\mathrm{SU}(n,1), n \\ge 4" }, { "math_id": 54, "text": "\\mathrm{SO}(n,1), \\mathrm{SU}(n,1)" }, { "math_id": 55, "text": "g_e" }, { "math_id": 56, "text": "\\mathfrak g" }, { "math_id": 57, "text": "v, w" }, { "math_id": 58, "text": "\\gamma \\in G" }, { "math_id": 59, "text": "g_\\gamma(v, w) = g_e(\\gamma^*v, \\gamma^*w)" }, { "math_id": 60, "text": "\\gamma^*" }, { "math_id": 61, "text": "\\gamma" }, { "math_id": 62, "text": "x \\mapsto \\gamma^{-1}x" }, { "math_id": 63, "text": "x \\mapsto \\gamma x " }, { "math_id": 64, "text": "g" }, { "math_id": 65, "text": "\\Gamma \\backslash G" }, { "math_id": 66, "text": "<math>g</Math>" }, { "math_id": 67, "text": "K" }, { "math_id": 68, "text": "X = G/K" }, { "math_id": 69, "text": "X" }, { "math_id": 70, "text": "\\Gamma \\backslash X" }, { 
"math_id": 71, "text": "\\mathbb Q_p" }, { "math_id": 72, "text": "G = \\prod_{p \\in S} G_p " }, { "math_id": 73, "text": "G_p" }, { "math_id": 74, "text": "p=\\infty" }, { "math_id": 75, "text": "G_\\infty" }, { "math_id": 76, "text": "\\mathrm{SL}_2 \\left( \\mathbb Z \\left[\\frac 1 p \\right] \\right) \\subset \\mathrm{SL}_2(\\mathbb R) \\times \\mathrm{SL}_2(\\mathbb Q_p)" }, { "math_id": 77, "text": "\\mathrm G" }, { "math_id": 78, "text": "F" }, { "math_id": 79, "text": "\\mathbb A" }, { "math_id": 80, "text": "G = \\mathrm G(\\mathbb A)" }, { "math_id": 81, "text": "\\mathrm G(F)" }, { "math_id": 82, "text": "\\mathrm G(F) \\subset \\mathrm G(\\mathbb A)" }, { "math_id": 83, "text": "\\mathrm G(F) \\backslash \\mathrm G(\\mathbb A)" }, { "math_id": 84, "text": "\\mathrm{PSL}_2(\\mathbb R)" }, { "math_id": 85, "text": "\\mathrm{PSL}_2(\\mathbb C)" }, { "math_id": 86, "text": "T" }, { "math_id": 87, "text": "\\mathrm{Aut}(T)" }, { "math_id": 88, "text": "\\Gamma \\backslash T" }, { "math_id": 89, "text": "\\mathbb F_p((t))" }, { "math_id": 90, "text": "H" }, { "math_id": 91, "text": "H \\backslash T" } ]
https://en.wikipedia.org/wiki?curid=15782871
15784796
Shifted log-logistic distribution
The shifted log-logistic distribution is a probability distribution also known as the generalized log-logistic or the three-parameter log-logistic distribution. It has also been called the generalized logistic distribution, but this conflicts with other uses of the term: see generalized logistic distribution. Definition. The shifted log-logistic distribution can be obtained from the log-logistic distribution by addition of a shift parameter formula_1. Thus if formula_2 has a log-logistic distribution then formula_3 has a shifted log-logistic distribution. So formula_4 has a shifted log-logistic distribution if formula_5 has a logistic distribution. The shift parameter adds a location parameter to the scale and shape parameters of the (unshifted) log-logistic. The properties of this distribution are straightforward to derive from those of the log-logistic distribution. However, an alternative parameterisation, similar to that used for the generalized Pareto distribution and the generalized extreme value distribution, gives more interpretable parameters and also aids their estimation. In this parameterisation, the cumulative distribution function (CDF) of the shifted log-logistic distribution is formula_6 for formula_7, where formula_8 is the location parameter, formula_9 the scale parameter and formula_10 the shape parameter. Note that some references use formula_11 to parameterise the shape. The probability density function (PDF) is formula_12 again, for formula_13 The shape parameter formula_0 is often restricted to lie in [−1, 1], in which case the probability density function is bounded. When formula_14, it has an asymptote at formula_15. Reversing the sign of formula_0 reflects the PDF and the CDF about formula_16. Applications. The three-parameter log-logistic distribution is used in hydrology for modelling flood frequency. Alternate parameterization. An alternate parameterization with simpler expressions for the PDF and CDF is as follows. For the shape parameter formula_20, scale parameter formula_21 and location parameter formula_22, the PDF is given by formula_23 The CDF is given by formula_24 The mean is formula_25 and the variance is formula_26, where formula_27. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
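A small numerical cross-check of the two parameterisations given above is sketched below (assuming numpy and scipy are available; the specific parameter values are arbitrary choices). For positive shape parameter formula_10, one consistent way to match the parameterisations, derived here rather than stated in the article, is to take α = 1/ξ, β = σ/ξ and γ = μ − σ/ξ; under this identification the two CDF formulas coincide, and the stated mean formula_25 can be compared with a numerical integral of x times the PDF.

```python
# Cross-check of the (mu, sigma, xi) and (alpha, beta, gamma) parameterisations.
import numpy as np
from scipy.integrate import quad

mu, sigma, xi = 2.0, 1.5, 0.4            # arbitrary values with 0 < xi < 1

def cdf_msx(x):
    """CDF in the (mu, sigma, xi) parameterisation."""
    return 1.0 / (1.0 + (1.0 + xi * (x - mu) / sigma) ** (-1.0 / xi))

# Matching parameters (a derived identification, valid for xi > 0).
alpha, beta, gamma = 1.0 / xi, sigma / xi, mu - sigma / xi

def cdf_abg(x):
    """CDF in the alternate (alpha, beta, gamma) parameterisation."""
    return 1.0 / (1.0 + (beta / (x - gamma)) ** alpha)

def pdf_abg(x):
    z = (x - gamma) / beta
    return (alpha / beta) * z ** (alpha - 1.0) / (1.0 + z ** alpha) ** 2

xs = gamma + np.array([0.5, 1.0, 2.0, 5.0, 20.0])     # points in the support
print(np.allclose(cdf_msx(xs), cdf_abg(xs)))          # expect True

theta = np.pi / alpha                     # mean exists here since alpha > 1
mean_formula = beta * theta / np.sin(theta) + gamma
mean_numeric, _ = quad(lambda x: x * pdf_abg(x), gamma, np.inf)
print(mean_formula, mean_numeric)         # the two values should agree closely
```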
[ { "math_id": 0, "text": "\\xi" }, { "math_id": 1, "text": "\\delta" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "X+\\delta" }, { "math_id": 4, "text": "Y" }, { "math_id": 5, "text": "\\log(Y-\\delta)" }, { "math_id": 6, "text": "F(x; \\mu,\\sigma,\\xi) = \\frac{1}{ 1 + \\left(1+ \\frac{\\xi(x-\\mu)}{\\sigma}\\right)^{-1/\\xi}}" }, { "math_id": 7, "text": " 1 + \\xi(x-\\mu)/\\sigma \\geqslant 0" }, { "math_id": 8, "text": "\\mu\\in\\mathbb R" }, { "math_id": 9, "text": "\\sigma>0\\," }, { "math_id": 10, "text": "\\xi\\in\\mathbb R" }, { "math_id": 11, "text": " \\kappa = - \\xi\\,\\!" }, { "math_id": 12, "text": " f(x; \\mu,\\sigma,\\xi) = \\frac{\\left(1+\\frac{\\xi(x-\\mu)}{\\sigma}\\right)^{-(1/\\xi +1)}}\n{\\sigma\\left[1 + \\left(1+\\frac{\\xi(x-\\mu)}{\\sigma}\\right)^{-1/\\xi}\\right]^2}, " }, { "math_id": 13, "text": " 1 + \\xi(x-\\mu)/\\sigma \\geqslant 0. " }, { "math_id": 14, "text": "|\\xi|>1" }, { "math_id": 15, "text": "x = \\mu - \\sigma/\\xi" }, { "math_id": 16, "text": "x=\\mu." }, { "math_id": 17, "text": "\\mu = \\sigma/\\xi," }, { "math_id": 18, "text": "\\xi=1" }, { "math_id": 19, "text": "\\xi=1." }, { "math_id": 20, "text": "\\alpha" }, { "math_id": 21, "text": "\\beta" }, { "math_id": 22, "text": "\\gamma" }, { "math_id": 23, "text": "f(x) = \\frac{\\alpha}{\\beta} \\bigg(\\frac{x-\\gamma}{\\beta} \\bigg) ^{\\alpha -1}\\bigg(1+\\bigg(\\frac{x-\\gamma}{\\beta}\\bigg)^\\alpha\\bigg)^{-2}" }, { "math_id": 24, "text": "F(x) = \\bigg(1+\\bigg(\\frac{\\beta}{x-\\gamma}\\bigg)^\\alpha\\bigg)^{-1}" }, { "math_id": 25, "text": "\\beta \\theta \\csc(\\theta) + \\gamma" }, { "math_id": 26, "text": "\\beta^2\\theta[2\\csc(2\\theta)-\\theta \\csc^2(\\theta)]" }, { "math_id": 27, "text": "\\theta = \\frac{\\pi}{\\alpha}" } ]
https://en.wikipedia.org/wiki?curid=15784796