id | title | text | formulas | url |
---|---|---|---|---|
922464 | Bunched logic | Bunched logic is a variety of substructural logic proposed by Peter O'Hearn and David Pym. Bunched logic provides primitives for reasoning about "resource composition", which aid in the compositional analysis of computer and other systems. It has category-theoretic and truth-functional semantics, which can be understood in terms of an abstract concept of resource, and a proof theory in which the contexts Γ in an entailment judgement Γ ⊢ A are tree-like structures (bunches) rather than lists or (multi)sets as in most proof calculi. Bunched logic has an associated type theory, and its first application was in providing a way to control the aliasing and other forms of interference in imperative programs.
The logic has seen further applications in program verification, where it is the basis of the assertion language of separation logic, and in systems modelling, where it provides a way to decompose the resources used by components of a system.
Foundations.
The deduction theorem of classical logic relates conjunction and implication:
formula_0
Bunched logic has two versions of the deduction theorem:
formula_1
formula_2 and formula_3 are forms of conjunction and implication that take resources into account (explained below). In addition to these connectives,
bunched logic has a formula, sometimes written I or emp, which is the unit of *. In the original version of bunched logic, formula_4 and formula_5 were the connectives of intuitionistic logic, while a boolean variant takes formula_4 and formula_5 (and formula_6) from traditional boolean logic. Thus, bunched logic is compatible with constructive principles, but is in no way dependent on them.
Truth-functional semantics (resource semantics).
The easiest way to understand these formulae is in terms of the logic's truth-functional semantics. In this semantics, a formula is true or false with respect to given resources.
formula_7 asserts that the resource at hand can be decomposed into resources that satisfy formula_8 and formula_9.
formula_10 says that if we compose the resource at hand with additional resource that satisfies formula_9, then the combined resource satisfies formula_11. formula_4 and formula_5 have their familiar meanings.
The foundation for this reading of formulae was provided by
a forcing semantics formula_12 advanced by Pym, where the forcing relation means '"A" holds of resource "r"'. The semantics is analogous to Kripke's semantics of intuitionistic or modal logic, but where the elements of the model are regarded as resources that can be composed and decomposed, rather than as possible worlds that are accessible from one another. For example, the forcing semantics for the conjunction is of the form
formula_13
where formula_14 is a way of combining resources and formula_15 is a relation of approximation.
This semantics of bunched logic draws on prior work in relevance logic (especially the operational semantics of Routley–Meyer), but differs from it by not requiring formula_16 and by accepting the semantics of standard intuitionistic or classical versions of formula_4 and formula_5. The property formula_16 is justified when thinking about relevance but denied by considerations of resource; having two copies of a resource is not the same as having one, and in some models (e.g. heap models) formula_17 might not even be defined. The standard semantics of formula_5 (or of negation) is often rejected by relevantists in their bid to escape the "paradoxes of material implication", which are not a problem from the perspective of modelling resources and so not rejected by bunched logic. The semantics is also related to the 'phase semantics' of linear logic, but again is differentiated by accepting the standard (even boolean) semantics of formula_4 and formula_5, which in linear logic is rejected in a bid to be constructive. These considerations are discussed in detail in an article on resource semantics by Pym, O'Hearn and Yang.
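As an illustration of the resource reading, the following Python sketch (a toy model constructed for illustration, not code from the sources above) takes resources to be count vectors of two token types, with composition given by addition and the approximation order formula_15 given by componentwise comparison. The clause for formula_7 follows formula_13; the clause used for formula_10 is a simplified rendering of the informal reading above, and the finite carrier is only a convenience of the sketch. It also shows that having two copies of a resource differs from having one: A * A is strictly stronger than A in this model.

```python
from itertools import product

# Toy resource model: resources are count vectors (a, b) of two token types;
# composition is addition, and the approximation order is componentwise <=.
def compose(r1, r2):
    return (r1[0] + r2[0], r1[1] + r2[1])

def leq(r1, r2):
    return r1[0] <= r2[0] and r1[1] <= r2[1]

RESOURCES = [(a, b) for a in range(4) for b in range(4)]   # finite carrier, for enumeration only

def sat(r, phi):
    kind = phi[0]
    if kind == "atom":    # phi = ("atom", set_of_resources)
        return r in phi[1]
    if kind == "and":     # additive conjunction
        return sat(r, phi[1]) and sat(r, phi[2])
    if kind == "star":    # r |= A * B iff some rA, rB with rA . rB <= r satisfy A and B
        return any(leq(compose(ra, rb), r) and sat(ra, phi[1]) and sat(rb, phi[2])
                   for ra, rb in product(RESOURCES, repeat=2))
    if kind == "wand":    # r |= A -* B iff composing r with any A-resource gives a B-resource
        return all(not sat(rp, phi[1]) or sat(compose(r, rp), phi[2])
                   for rp in RESOURCES if compose(r, rp) in RESOURCES)
    raise ValueError(kind)

# "Having two copies of a resource is not the same as having one":
A = ("atom", {r for r in RESOURCES if r[0] >= 1})        # owns at least one token of the first kind
print(sat((1, 0), A), sat((1, 0), ("star", A, A)))       # True False
print(sat((2, 0), ("star", A, A)))                       # True
```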
Categorical semantics (doubly closed categories).
The double version of the deduction theorem of bunched logic has a corresponding category-theoretic structure. Proofs in intuitionistic logic can be interpreted in
cartesian closed categories, that is, categories with finite products satisfying the (natural in "A" and "C") adjunction correspondence relating hom sets:
formula_18
Bunched logic can be interpreted in categories possessing two such structures:
a categorical model of bunched logic is a single category possessing two closed structures, one symmetric monoidal closed and the other cartesian closed.
A host of categorical models can be given using Day's tensor product construction.
Additionally, the implicational fragment of bunched logic has been given a game semantics.
Algebraic semantics.
The algebraic semantics of bunched logic is a special case of its categorical semantics, but is simple to state and can be more approachable.
An algebraic model of bunched logic is a poset that is a Heyting algebra and that carries an additional commutative residuated lattice structure (for the same lattice as the Heyting algebra): that is, an ordered commutative monoid with an associated implication satisfying formula_19.
The boolean version of bunched logic has models as follows.
An algebraic model of boolean bunched logic is a poset that is a Boolean algebra and that carries an additional residuated commutative monoid structure.
Proof theory and type theory (bunches).
The proof calculus of bunched logic
differs from usual sequent calculi in having a tree-like context of hypotheses instead of a flat list-like structure. In its sequent-based proof theories, the
context formula_20 in an entailment judgement formula_21
is a finite rooted tree whose leaves are propositions and whose internal nodes are labelled with modes of composition corresponding to the two conjunctions.
The two combining operators, comma and semicolon, are used (for instance) in the introduction rules for the two implications.
formula_22
The difference between the two composition rules comes from additional rules that apply to them.
The structural rules and other operations on bunches are often applied deep within a tree-context, and not only at the top level: it is thus in a sense a calculus of deep inference.
Corresponding to bunched logic is a type theory having two kinds of function type. Following the Curry–Howard correspondence, introduction rules for implications correspond to introduction rules for function types.
formula_25
Here, there are two distinct binders, formula_26 and formula_27, one for each kind of function type.
The proof theory of bunched logic has an historical debt to the use of bunches in relevance logic. But the bunched structure can in a sense be derived from the categorical and algebraic semantics:
to formulate an introduction rule for formula_28 we should mimic formula_29 on the left in sequents, and to introduce formula_30 we should mimic formula_4. This consideration leads to the use of two combining operators.
James Brotherston has done further significant work on a unified proof theory for bunched logic and variants, employing Belnap's notion of display logic.
Galmiche, Méry, and Pym have provided a comprehensive treatment of bunched logic, including completeness and other meta-theory, based on labelled tableaux.
Applications.
Interference control.
In perhaps the first use of substructural type theory to control resources, John C. Reynolds showed how to use an affine type theory to control aliasing and other forms of interference in Algol-like programming languages. O'Hearn used bunched type theory to extend Reynolds' system by allowing interference and non-interference to be more flexibly mixed. This resolved open problems concerning recursion and jumps in Reynolds' system.
Separation logic.
Separation logic is an extension of Hoare logic that facilitates reasoning about mutable data structures that use pointers. Following Hoare logic the formulae of separation logic are of the form
formula_31, but the preconditions and postconditions are formulae interpreted in a model of bunched logic.
The original version of the logic was based on models as follows: the resources are heaps, formula_32, i.e. finite partial functions from locations to values, with composition formula_33 the union of two heaps whose domains are disjoint, undefined when the domains overlap.
It is the undefinedness of the composition on overlapping heaps that models the separation idea. This is a model of the boolean variant of bunched logic.
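To make the heap model concrete, here is a small Python sketch (an illustration written for this article, not code from the separation logic literature): heaps are finite maps from locations to values, composition is defined only when the domains are disjoint, and a formula formula_7 holds of a heap exactly when the heap splits into two disjoint parts satisfying the two conjuncts.

```python
# Heaps as Python dicts from locations to values; composition is the union of
# two heaps with disjoint domains, and is undefined (None here) when they overlap.
def compose(h0, h1):
    if set(h0) & set(h1):
        return None            # overlapping heaps do not compose
    return {**h0, **h1}

def points_to(loc, val):
    """Assertion loc |-> val: the heap is exactly the single cell loc holding val."""
    return lambda h: h == {loc: val}

def star(p, q):
    """h satisfies p * q iff h splits into two disjoint sub-heaps satisfying p and q."""
    def holds(h):
        locs = list(h)
        for mask in range(2 ** len(locs)):          # enumerate all splits of the domain
            h0 = {l: h[l] for i, l in enumerate(locs) if mask >> i & 1}
            h1 = {l: h[l] for l in locs if l not in h0}
            if p(h0) and q(h1):
                return True
        return False
    return holds

spec = star(points_to("x", 3), points_to("y", 4))
print(spec({"x": 3, "y": 4}))          # True: splits into two disjoint cells
print(spec({"x": 3}))                  # False: no sub-heap satisfies y |-> 4
print(compose({"x": 3}, {"x": 5}))     # None: overlapping heaps have no composition
```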
Separation logic was used originally to prove properties of sequential programs, but then was extended to concurrency using a proof rule
formula_34
that divides the storage accessed by parallel threads.
Later, the greater generality of the resource semantics was utilized: an abstract version of separation logic works for Hoare triples
where the preconditions and postconditions are formulae interpreted over an arbitrary partial commutative monoid instead of a particular heap model.
By suitable choice of commutative monoid, it was surprisingly found that the proof rules of abstract versions of concurrent separation logic could be used to reason about interfering concurrent processes, for example by encoding rely-guarantee and trace-based reasoning.
Separation logic is the basis of a number of tools for automatic and semi-automatic reasoning about programs, and is used in the Infer program analyzer currently deployed at Facebook.
Resources and processes.
Bunched logic has been used in connection with the (synchronous) resource-process calculus SCRP in order to give a (modal) logic that characterizes, in the sense of Hennessy–Milner, the compositional structure of concurrent systems.
SCRP is notable for interpreting formula_35 in terms of "both" parallel composition of systems and composition of their associated resources.
The semantic clause of SCRP's process logic that corresponds to separation logic's rule for concurrency asserts that a formula formula_35 is true in resource-process state formula_36, formula_37 just in case there are decompositions of the resource formula_38 and process
formula_39 ~ formula_40, where ~ denotes bisimulation, such that formula_8 is true in the resource-process state formula_41, formula_42 and formula_9 is true in the resource-process state formula_43, formula_44; that is formula_45 iff formula_46 and formula_47.
The system SCRP is based directly on bunched logic's resource semantics; that is, on ordered monoids of resource elements. While direct and intuitively appealing, this choice leads to a specific technical problem: the Hennessy–Milner completeness theorem holds only for fragments of the modal logic that exclude the multiplicative implication and multiplicative modalities. This problem is solved by basing resource-process calculus on a resource semantics in which resource elements are combined using two combinators, one corresponding to concurrent composition and one corresponding to choice.
Spatial logics.
Cardelli, Caires, Gordon and others have investigated a series of logics of process calculi, where a conjunction is interpreted in terms of parallel composition.
Unlike the work of Pym et al. in SCRP, they do not distinguish between parallel composition of systems and composition of resources accessed by the systems.
Their logics are based on instances of the resource semantics that give rise to models of the boolean variant of bunched logic.
Although these logics give rise to instances of boolean bunched logic, they appear to have been arrived at independently, and in any case have significant additional structure in the way of modalities and binders. Related logics have been proposed as well for modelling XML data.
References.
| [
{
"math_id": 0,
"text": "A \\wedge B \\vdash C \\quad \\mbox{iff} \\quad A \\vdash B \\Rightarrow C "
},
{
"math_id": 1,
"text": "A * B \\vdash C \\quad \\mbox{iff} \\quad A \\vdash B {-\\!\\!*} C \\qquad \\mbox{and also} \\qquad A \\wedge B \\vdash C \\quad \\mbox{iff} \\quad A \\vdash B \\Rightarrow C "
},
{
"math_id": 2,
"text": "A * B "
},
{
"math_id": 3,
"text": "B {-\\!\\!*} C"
},
{
"math_id": 4,
"text": " \\wedge "
},
{
"math_id": 5,
"text": " \\Rightarrow "
},
{
"math_id": 6,
"text": " \\neg "
},
{
"math_id": 7,
"text": "A*B "
},
{
"math_id": 8,
"text": "A"
},
{
"math_id": 9,
"text": "B"
},
{
"math_id": 10,
"text": " B {-\\!\\!*} C "
},
{
"math_id": 11,
"text": "C"
},
{
"math_id": 12,
"text": " r \\models A "
},
{
"math_id": 13,
"text": "r \\models A * B \\quad \\mbox{iff} \\quad \\exists r_Ar_B.\\,r_A \\models A,\\, r_B \\models B,\\,\\mbox{and}\\,r_A \\bullet r_B \\leq r "
},
{
"math_id": 14,
"text": " r_A \\bullet r_B "
},
{
"math_id": 15,
"text": " \\leq "
},
{
"math_id": 16,
"text": " r \\bullet r \\leq r "
},
{
"math_id": 17,
"text": " r \\bullet r "
},
{
"math_id": 18,
"text": "Hom(A \\wedge B, C) \\quad \\mbox{is isomorphic to} \\quad Hom(A, B \\Rightarrow C) "
},
{
"math_id": 19,
"text": "A * B \\leq C \\quad \\mbox{iff} \\quad A \\leq B {-\\!\\!*} C"
},
{
"math_id": 20,
"text": "\\Delta "
},
{
"math_id": 21,
"text": "\\Delta \\vdash A "
},
{
"math_id": 22,
"text": "\\frac{\\Gamma,A \\vdash B}{\\Gamma \\vdash A{-\\!\\!*} B} \\qquad \\qquad \\frac{\\Gamma;A \\vdash B}{\\Gamma \\vdash A{\\Rightarrow} B} "
},
{
"math_id": 23,
"text": " \\Delta , \\Gamma "
},
{
"math_id": 24,
"text": " \\Delta ; \\Gamma "
},
{
"math_id": 25,
"text": "\\frac{\\Gamma,x:A \\vdash M:B}{\\Gamma \\vdash \\lambda x.M: A{-\\!\\!*} B} \\qquad \\qquad \\frac{\\Gamma;x:A \\vdash M: B}{\\Gamma \\vdash \\alpha x.M:A{\\Rightarrow} B} "
},
{
"math_id": 26,
"text": "\\lambda"
},
{
"math_id": 27,
"text": "\\alpha"
},
{
"math_id": 28,
"text": " {-\\!\\!*} "
},
{
"math_id": 29,
"text": " * "
},
{
"math_id": 30,
"text": " \\Rightarrow"
},
{
"math_id": 31,
"text": "\\{Pre\\} program \\{Post\\}"
},
{
"math_id": 32,
"text": " Heaps = L \\rightharpoonup_f V \\qquad "
},
{
"math_id": 33,
"text": " h_0 \\bullet h_1 = "
},
{
"math_id": 34,
"text": " \\frac{\\{P_1\\} C_1 \\{Q_1\\} \\quad \\{P_2\\} C_2 \\{Q_2\\}}{\\{P_1 * P_2\\} C_1 \\parallel C_2 \\{Q_1 * Q_2\\}}"
},
{
"math_id": 35,
"text": " A * B "
},
{
"math_id": 36,
"text": " R "
},
{
"math_id": 37,
"text": " E "
},
{
"math_id": 38,
"text": "R = S \\bullet T"
},
{
"math_id": 39,
"text": "E"
},
{
"math_id": 40,
"text": "F \\times G"
},
{
"math_id": 41,
"text": " S "
},
{
"math_id": 42,
"text": " F "
},
{
"math_id": 43,
"text": " T "
},
{
"math_id": 44,
"text": " G "
},
{
"math_id": 45,
"text": " R, E \\models A "
},
{
"math_id": 46,
"text": " S, F \\models A "
},
{
"math_id": 47,
"text": " T, G \\models B "
}
] | https://en.wikipedia.org/wiki?curid=922464 |
922505 | Receiver operating characteristic | Diagnostic plot of binary classifier ability
A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the performance of a binary classifier model (it can also be extended to multi-class classification) at varying threshold values.
The ROC curve is the plot of the true positive rate (TPR) against the false positive rate (FPR) at each threshold setting.
The ROC can also be thought of as a plot of the statistical power as a function of the Type I error of the decision rule (when the performance is calculated from just a sample of the population, the plotted quantities can be thought of as estimators of the population values). The ROC curve is thus the sensitivity or recall as a function of the false positive rate.
Given that the probability distributions for both true positive and false positive are known, the ROC curve is obtained as the cumulative distribution function (CDF, area under the probability distribution from formula_0 to the discrimination threshold) of the detection probability in the "y"-axis versus the CDF of the false positive probability on the "x"-axis.
ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones independently from (and prior to specifying) the cost context or the class distribution. ROC analysis is related in a direct and natural way to the cost/benefit analysis of diagnostic decision making.
Terminology.
The true-positive rate is also known as sensitivity, recall or "probability of detection". The false-positive rate is also known as the "probability of false alarm" and equals (1 − specificity).
The ROC is also known as a relative operating characteristic curve, because it is a comparison of two operating characteristics (TPR and FPR) as the criterion changes.
History.
The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting enemy objects in battlefields, starting in 1941, which led to its name ("receiver operating characteristic").
It was soon introduced to psychology to account for the perceptual detection of stimuli. ROC analysis has been used in medicine, radiology, biometrics, forecasting of natural hazards, meteorology, model performance assessment, and other areas for many decades and is increasingly used in machine learning and data mining research.
Basic concept.
A classification model (classifier or diagnosis) is a mapping of instances between certain classes/groups. Because the classifier or diagnosis result can be an arbitrary real value (continuous output), the classifier boundary between classes must be determined by a threshold value (for instance, to determine whether a person has hypertension based on a blood pressure measure). Or it can be a discrete class label, indicating one of the classes.
Consider a two-class prediction problem (binary classification), in which the outcomes are labeled either as positive ("p") or negative ("n"). There are four possible outcomes from a binary classifier. If the outcome from a prediction is "p" and the actual value is also "p", then it is called a "true positive" (TP); however if the actual value is "n" then it is said to be a "false positive" (FP). Conversely, a "true negative" (TN) has occurred when both the prediction outcome and the actual value are "n", and a "false negative" (FN) is when the prediction outcome is "n" while the actual value is "p".
To get an appropriate example in a real-world problem, consider a diagnostic test that seeks to determine whether a person has a certain disease. A false positive in this case occurs when the person tests positive, but does not actually have the disease. A false negative, on the other hand, occurs when the person tests negative, suggesting they are healthy, when they actually do have the disease.
Consider an experiment with P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 "contingency table" or "confusion matrix", as follows:

                        actual positive (p)     actual negative (n)
predicted positive (p)  true positive (TP)      false positive (FP)
predicted negative (n)  false negative (FN)     true negative (TN)
ROC space.
Several evaluation "metrics" can be derived from the contingency table (see infobox). To draw a ROC curve, only the true positive rate (TPR) and false positive rate (FPR) are needed (as functions of some classifier parameter). The TPR defines how many correct positive results occur among all positive samples available during the test. FPR, on the other hand, defines how many incorrect positive results occur among all negative samples available during the test.
A ROC space is defined by FPR and TPR as "x" and "y" axes, respectively, which depicts relative trade-offs between true positive (benefits) and false positive (costs). Since TPR is equivalent to sensitivity and FPR is equal to 1 − specificity, the ROC graph is sometimes called the sensitivity vs (1 − specificity) plot. Each prediction result or instance of a confusion matrix represents one point in the ROC space.
The best possible prediction method would yield a point in the upper left corner or coordinate (0,1) of the ROC space, representing 100% sensitivity (no false negatives) and 100% specificity (no false positives). The (0,1) point is also called a "perfect classification". A random guess would give a point along a diagonal line (the so-called "line of no-discrimination") from the bottom left to the top right corners (regardless of the positive and negative base rates). An intuitive example of random guessing is a decision by flipping coins. As the size of the sample increases, a random classifier's ROC point tends towards the diagonal line. In the case of a balanced coin, it will tend to the point (0.5, 0.5).
The diagonal divides the ROC space. Points above the diagonal represent good classification results (better than random); points below the line represent bad results (worse than random). Note that the output of a consistently bad predictor could simply be inverted to obtain a good predictor.
Consider four prediction results from 100 positive and 100 negative instances:
Plots of the four results above in the ROC space are given in the figure. The result of method A clearly shows the best predictive power among A, B, and C. The result of B lies on the random guess line (the diagonal line), and it can be seen in the table that the accuracy of B is 50%. However, when C is mirrored across the center point (0.5,0.5), the resulting method C′ is even better than A. This mirrored method simply reverses the predictions of whatever method or test produced the C contingency table. Although the original C method has negative predictive power, simply reversing its decisions leads to a new predictive method C′ which has positive predictive power. When the C method predicts p or n, the C′ method would predict n or p, respectively. In this manner, the C′ test would perform the best. The closer a result from a contingency table is to the upper left corner, the better it predicts, but the distance from the random guess line in either direction is the best indicator of how much predictive power a method has. If the result is below the line (i.e. the method is worse than a random guess), all of the method's predictions must be reversed in order to utilize its power, thereby moving the result above the random guess line.
Curves in ROC space.
In binary classification, the class prediction for each instance is often made based on a continuous random variable formula_1, which is a "score" computed for the instance (e.g. the estimated probability in logistic regression). Given a threshold parameter formula_2, the instance is classified as "positive" if formula_3, and "negative" otherwise. formula_1 follows a probability density formula_4 if the instance actually belongs to class "positive", and formula_5 if otherwise. Therefore, the true positive rate is given by formula_6 and the false positive rate is given by formula_7.
The ROC curve plots parametrically formula_8 versus formula_9 with formula_10 as the varying parameter.
For example, imagine that the blood protein levels in diseased people and healthy people are normally distributed with means of 2 g/dL and 1 g/dL respectively. A medical test might measure the level of a certain protein in a blood sample and classify any number above a certain threshold as indicating disease. The experimenter can adjust the threshold (green vertical line in the figure), which will in turn change the false positive rate. Increasing the threshold would result in fewer false positives (and more false negatives), corresponding to a leftward movement on the curve. The actual shape of the curve is determined by how much overlap the two distributions have.
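A minimal numerical sketch of this example (written for illustration; the parameter values simply follow the means quoted above, with unit standard deviations assumed) sweeps the threshold, evaluates formula_8 and formula_9 for the two Gaussians, and compares the resulting area under the curve with the closed form for Gaussian scores given in the probabilistic-interpretation section below.

```python
import numpy as np
from scipy.stats import norm

mu1, sigma1 = 2.0, 1.0   # diseased (positive) protein level, g/dL
mu0, sigma0 = 1.0, 1.0   # healthy (negative) protein level, g/dL

thresholds = np.linspace(-4.0, 8.0, 1000)
tpr = norm.sf(thresholds, loc=mu1, scale=sigma1)   # TPR(T) = P(X > T | positive)
fpr = norm.sf(thresholds, loc=mu0, scale=sigma0)   # FPR(T) = P(X > T | negative)

# area under the parametric curve (reverse so that FPR increases along the x-axis)
auc_numeric = np.trapz(tpr[::-1], fpr[::-1])

# closed form for Gaussian scores: AUC = Phi((mu1 - mu0) / sqrt(sigma1**2 + sigma0**2))
auc_exact = norm.cdf((mu1 - mu0) / np.hypot(sigma1, sigma0))
print(f"numerical AUC = {auc_numeric:.4f}, closed form = {auc_exact:.4f}")   # both ~0.76
```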
Criticisms.
Several studies criticize certain applications of the ROC curve and its area under the curve as measurements for assessing binary classifications when they do not capture the information relevant to the application.
The main criticism of the ROC curve described in these studies concerns the incorporation of areas with low sensitivity and low specificity (both lower than 0.5) in the calculation of the total area under the curve (AUC), as described in the plot on the right.
According to the authors of these studies, that portion of the area under the curve (with low sensitivity and low specificity) corresponds to confusion matrices where binary predictions obtain bad results, and therefore should not be included in the assessment of overall performance.
Moreover, that portion of the AUC corresponds to very high or very low classification thresholds, which are rarely of interest for scientists performing a binary classification in any field.
Another criticism of the ROC and its area under the curve is that they say nothing about precision and negative predictive value.
A high ROC AUC, such as 0.9 for example, might correspond to low values of precision and negative predictive value, such as 0.2 and 0.1 in the [0, 1] range.
If one performed a binary classification, obtained an ROC AUC of 0.9 and decided to focus only on this metric, they might overoptimistically believe their binary test was excellent. However, if this person took a look at the values of precision and negative predictive value, they might discover their values are low.
The ROC AUC summarizes sensitivity and specificity, but does not inform regarding precision and negative predictive value.
Further interpretations.
Sometimes, the ROC is used to generate a summary statistic. Common versions include the area under the ROC curve (AUC) and the sensitivity index "d'" (d prime).
However, any attempt to summarize the ROC curve into a single number loses information about the pattern of tradeoffs of the particular discriminator algorithm.
Probabilistic interpretation.
The area under the curve (often referred to as simply the AUC) is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative'). In other words, when given one randomly selected positive instance and one randomly selected negative instance, AUC is the probability that the classifier will be able to tell which one is which.
This can be seen as follows: the area under the curve is given by (the integral boundaries are reversed as large threshold formula_2 has a lower value on the "x"-axis)
formula_12
formula_13
formula_14
where formula_15 is the score for a positive instance and formula_16 is the score for a negative instance, and formula_17 and formula_18 are probability densities as defined in previous section.
If formula_16 and formula_15 follow two Gaussian distributions, then formula_19.
Area under the curve.
It can be shown that the AUC is closely related to the Mann–Whitney U, which tests whether positives are ranked higher than negatives. For a predictor formula_20, an unbiased estimator of its AUC can be expressed by the following "Wilcoxon-Mann-Whitney" statistic:
formula_21
where formula_22 denotes an "indicator function" which returns 1 if formula_23 and 0 otherwise; formula_24 is the set of negative examples, and formula_25 is the set of positive examples.
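A direct implementation of this estimator (an illustrative sketch on synthetic Gaussian scores; ties are ignored here, although in practice they are often counted as one half) follows, and its last line also applies the credit-scoring rescaling formula_26 described just below.

```python
import numpy as np

def auc_wmw(scores_neg, scores_pos):
    """AUC via the Wilcoxon-Mann-Whitney statistic: the fraction of
    (negative, positive) pairs in which the positive example scores higher.
    Ties are ignored here; they are often counted as one half instead."""
    neg = np.asarray(scores_neg)
    pos = np.asarray(scores_pos)
    correct = (neg[:, None] < pos[None, :]).sum()    # indicator 1[f(t0) < f(t1)] over all pairs
    return correct / (len(neg) * len(pos))

rng = np.random.default_rng(0)
neg = rng.normal(1.0, 1.0, size=2000)    # scores of negative examples
pos = rng.normal(2.0, 1.0, size=2000)    # scores of positive examples
auc = auc_wmw(neg, pos)
print(f"AUC ~ {auc:.3f}, rescaled Gini G1 = 2*AUC - 1 ~ {2 * auc - 1:.3f}")
```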
In the context of credit scoring, a rescaled version of AUC is often used:
formula_26.
formula_27 is referred to as Gini index or Gini coefficient, but it should not be confused with the measure of statistical dispersion that is also called Gini coefficient. formula_27 is a special case of Somers' D.
It is also common to calculate the Area Under the ROC Convex Hull (ROC AUCH = ROCH AUC) as any point on the line segment between two prediction results can be achieved by randomly using one or the other system with probabilities proportional to the relative length of the opposite component of the segment. It is also possible to invert concavities – just as in the figure the worse solution can be reflected to become a better solution; concavities can be reflected in any line segment, but this more extreme form of fusion is much more likely to overfit the data.
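The following sketch (illustrative code with hypothetical operating points) computes the upper convex hull of a set of ROC points, whose segments correspond to such random interpolation between classifiers, and compares the AUC of the raw points with the ROCH AUC.

```python
import numpy as np

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def roc_convex_hull(points):
    """Upper convex hull of ROC points (FPR, TPR), with the trivial classifiers
    (0,0) and (1,1) included; a point strictly below the hull can be matched or
    beaten by randomly mixing the classifiers at the ends of the covering segment."""
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
    hull = []
    for p in pts:
        while len(hull) >= 2 and _cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()          # drop points lying on or below the chord
        hull.append(p)
    return hull

def auc_of(points):
    xs, ys = zip(*sorted(points))
    return np.trapz(ys, xs)

points = [(0.1, 0.4), (0.3, 0.7), (0.5, 0.75), (0.7, 0.9)]   # hypothetical operating points
hull = roc_convex_hull(points)
print(hull)                                   # (0.5, 0.75) lies under the hull and is dropped
print(auc_of(set(points) | {(0.0, 0.0), (1.0, 1.0)}), auc_of(hull))
```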
The machine learning community most often uses the ROC AUC statistic for model comparison. This practice has been questioned because AUC estimates are quite noisy and suffer from other problems. Nonetheless, the coherence of AUC as a measure of aggregated classification performance has been vindicated, in terms of a uniform rate distribution, and AUC has been linked to a number of other performance metrics such as the Brier score.
Another problem with ROC AUC is that reducing the ROC Curve to a single number ignores the fact that it is about the tradeoffs between the different systems or performance points plotted and not the performance of an individual system, as well as ignoring the possibility of concavity repair, so that related alternative measures such as Informedness or DeltaP are recommended. These measures are essentially equivalent to the Gini for a single prediction point with DeltaP' = Informedness = 2AUC-1, whilst DeltaP = Markedness represents the dual (viz. predicting the prediction from the real class) and their geometric mean is the Matthews correlation coefficient.
Whereas ROC AUC varies between 0 and 1 — with an uninformative classifier yielding 0.5 — the alternative measures known as Informedness, Certainty and Gini Coefficient (in the single parameterization or single system case) all have the advantage that 0 represents chance performance whilst 1 represents perfect performance, and −1 represents the "perverse" case of full informedness always giving the wrong response. Bringing chance performance to 0 allows these alternative scales to be interpreted as Kappa statistics. Informedness has been shown to have desirable characteristics for Machine Learning versus other common definitions of Kappa such as Cohen Kappa and Fleiss Kappa.
Sometimes it can be more useful to look at a specific region of the ROC Curve rather than at the whole curve. It is possible to compute partial AUC. For example, one could focus on the region of the curve with low false positive rate, which is often of prime interest for population screening tests. Another common approach for classification problems in which P ≪ N (common in bioinformatics applications) is to use a logarithmic scale for the "x"-axis.
The ROC area under the curve is also called c-statistic or c statistic.
Other measures.
The Total Operating Characteristic (TOC) also characterizes diagnostic ability while revealing more information than the ROC. For each threshold, ROC reveals two ratios, TP/(TP + FN) and FP/(FP + TN). In other words, ROC reveals formula_28 and formula_29. On the other hand, TOC shows the total information in the contingency table for each threshold. The TOC method reveals all of the information that the ROC method provides, plus additional important information that ROC does not reveal, i.e. the size of every entry in the contingency table for each threshold. TOC also provides the popular AUC of the ROC.
These figures are the TOC and ROC curves using the same data and thresholds.
Consider the point that corresponds to a threshold of 74. The TOC curve shows the number of hits, which is 3, and hence the number of misses, which is 7. Additionally, the TOC curve shows that the number of false alarms is 4 and the number of correct rejections is 16. At any given point in the ROC curve, it is possible to glean values for the ratios of formula_29 and formula_28. For example, at threshold 74, it is evident that the x coordinate is 0.2 and the y coordinate is 0.3. However, these two values are insufficient to construct all entries of the underlying two-by-two contingency table.
Detection error tradeoff graph.
An alternative to the ROC curve is the detection error tradeoff (DET) graph, which plots the false negative rate (missed detections) vs. the false positive rate (false alarms) on non-linearly transformed x- and y-axes. The transformation function is the quantile function of the normal distribution, i.e., the inverse of the cumulative normal distribution. It is, in fact, the same transformation as zROC, below, except that the complement of the hit rate, the miss rate or false negative rate, is used. This alternative spends more graph area on the region of interest. Most of the ROC area is of little interest; one primarily cares about the region tight against the "y"-axis and the top left corner – which, because of using miss rate instead of its complement, the hit rate, is the lower left corner in a DET plot. Furthermore, DET graphs have the useful property of linearity and a linear threshold behavior for normal distributions. The DET plot is used extensively in the automatic speaker recognition community, where the name DET was first used. The analysis of the ROC performance in graphs with this warping of the axes was used by psychologists in perception studies halfway through the 20th century, where this was dubbed "double probability paper".
Z-score.
If a standard score is applied to the ROC curve, the curve will be transformed into a straight line. This z-score is based on a normal distribution with a mean of zero and a standard deviation of one. In memory strength theory, one must assume that the zROC is not only linear, but has a slope of 1.0. The assumption that the strengths of targets (studied objects that the subjects need to recall) and lures (non-studied objects that the subjects attempt to recall) are normally distributed is what causes the zROC to be linear.
The linearity of the zROC curve depends on the standard deviations of the target and lure strength distributions. If the standard deviations are equal, the slope will be 1.0. If the standard deviation of the target strength distribution is larger than the standard deviation of the lure strength distribution, then the slope will be smaller than 1.0. In most studies, it has been found that the zROC curve slopes consistently fall below 1, usually between 0.5 and 0.9. Many experiments yielded a zROC slope of 0.8. A slope of 0.8 implies that the variability of the target strength distribution is 25% larger than the variability of the lure strength distribution.
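A small sketch of this relationship (illustrative parameters; Gaussian strength distributions are assumed, as in the discussion above): with the target standard deviation set 25% larger than the lure standard deviation, the z-transformed ROC is a straight line with slope 0.8.

```python
import numpy as np
from scipy.stats import norm

mu_lure, sd_lure = 0.0, 1.0
mu_target, sd_target = 1.0, 1.25        # target variability 25% larger than lure variability

thresholds = np.linspace(-2.0, 3.0, 50)
hit_rate = norm.sf(thresholds, mu_target, sd_target)
false_alarm_rate = norm.sf(thresholds, mu_lure, sd_lure)

# z-transform both rates; for Gaussian strength distributions the points fall on a line
z_hit = norm.ppf(hit_rate)
z_fa = norm.ppf(false_alarm_rate)
slope, intercept = np.polyfit(z_fa, z_hit, 1)
print(f"zROC slope ~ {slope:.2f}")      # equals sd_lure / sd_target = 0.8
```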
Another variable used is "d'" (d prime) (discussed above in "Other measures"), which can easily be expressed in terms of z-values. Although "d"' is a commonly used parameter, it must be recognized that it is only relevant when strictly adhering to the very strong assumptions of strength theory made above.
The z-score of an ROC curve is always linear, as assumed, except in special situations. The Yonelinas familiarity-recollection model is a two-dimensional account of recognition memory. Instead of the subject simply answering yes or no to a specific input, the subject gives the input a feeling of familiarity, which operates like the original ROC curve. What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, zROC would have a predicted slope of 1. However, when adding the recollection component, the zROC curve will be concave up, with a decreased slope. This difference in shape and slope results from an added element of variability due to some items being recollected. Patients with anterograde amnesia are unable to recollect, so their Yonelinas zROC curve would have a slope close to 1.0.
History.
The ROC curve was first used during World War II for the analysis of radar signals before it was employed in signal detection theory. Following the attack on Pearl Harbor in 1941, the United States military began new research to increase the prediction of correctly detected Japanese aircraft from their radar signals. For these purposes they measured the ability of a radar receiver operator to make these important distinctions, which was called the Receiver Operating Characteristic.
In the 1950s, ROC curves were employed in psychophysics to assess human (and occasionally non-human animal) detection of weak signals. In medicine, ROC analysis has been extensively used in the evaluation of diagnostic tests. ROC curves are also used extensively in epidemiology and medical research and are frequently mentioned in conjunction with evidence-based medicine. In radiology, ROC analysis is a common technique to evaluate new radiology techniques. In the social sciences, ROC analysis is often called the ROC Accuracy Ratio, a common technique for judging the accuracy of default probability models.
ROC curves are widely used in laboratory medicine to assess the diagnostic accuracy of a test, to choose the optimal cut-off of a test and to compare diagnostic accuracy of several tests.
ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms.
ROC curves are also used in verification of forecasts in meteorology.
Radar in detail.
As mentioned, ROC curves are critical to radar operation and theory. The signals received at a receiver station, as reflected by a target, are often of very low energy, in comparison to the noise floor. The ratio of signal to noise is an important metric when determining if a target will be detected. This signal to noise ratio is directly correlated to the receiver operating characteristics of the whole radar system, which are used to quantify the ability of a radar system.
Consider the development of a radar system. A specification for the abilities of the system may be provided in terms of the probability of detection, formula_30, with a certain tolerance for false alarms, formula_31. A simplified approximation of the required signal to noise ratio at the receiver station can be calculated by solving
formula_32
for the signal to noise ratio formula_33. Here, formula_33 is not in decibels, as is common in many radar applications. Conversion to decibels is through formula_34. From this figure, the common entries in the radar range equation (with noise factors) may be solved, to estimate the required effective radiated power.
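Because the expression above can be inverted in closed form, a short sketch suffices (the specification values formula_30 = 0.9 and formula_31 = 1e-6 are arbitrary illustrations, not requirements of any particular system):

```python
import numpy as np
from scipy.special import erfc, erfcinv

P_D, P_FA = 0.9, 1e-6    # illustrative specification values

# P_D = 0.5 * erfc(erfcinv(2*P_FA) - sqrt(X))  =>  sqrt(X) = erfcinv(2*P_FA) - erfcinv(2*P_D)
snr = (erfcinv(2 * P_FA) - erfcinv(2 * P_D)) ** 2
snr_db = 10 * np.log10(snr)

p_d_check = 0.5 * erfc(erfcinv(2 * P_FA) - np.sqrt(snr))   # substitute back as a check
print(f"required SNR ~ {snr:.1f} (~{snr_db:.1f} dB), recovered P_D = {p_d_check:.3f}")
```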
ROC curves beyond binary classification.
The extension of ROC curves for classification problems with more than two classes is cumbersome. Two common approaches for when there are multiple classes are (1) average over all pairwise AUC values and (2) compute the volume under surface (VUS). To average over all pairwise classes, one computes the AUC for each pair of classes, using only the examples from those two classes as if there were no other classes, and then averages these AUC values over all possible pairs. When there are "c" classes there will be "c"("c" − 1) / 2 possible pairs of classes.
The volume under surface approach has one plot a hypersurface rather than a curve and then measure the hypervolume under that hypersurface. Every possible decision rule that one might use for a classifier for "c" classes can be described in terms of its true positive rates (TPR1, . . . , TPR"c"). It is this set of rates that defines a point, and the set of all possible decision rules yields a cloud of points that define the hypersurface. With this definition, the VUS is the probability that the classifier will be able to correctly label all "c" examples when it is given a set that has one randomly selected example from each class. The implementation of a classifier that knows that its input set consists of one example from each class might first compute a goodness-of-fit score for each of the "c"2 possible pairings of an example to a class, and then employ the Hungarian algorithm to maximize the sum of the "c" selected scores over all "c"! possible ways to assign exactly one example to each class.
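The following Monte Carlo sketch (with a toy classifier invented purely for illustration) estimates this probability for c = 3 classes: each trial draws one example per class, scores every pairing of example and class, and uses the Hungarian algorithm (scipy's linear_sum_assignment) to pick the assignment with the maximum total score.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
c = 3
class_means = 2.0 * np.eye(c)      # toy classifier: class k shifts feature k of its examples

def trial():
    # one randomly drawn example per class (rows), scored against every class (columns)
    examples = np.array([rng.normal(class_means[k], 1.0) for k in range(c)])
    scores = examples @ class_means.T
    _, assignment = linear_sum_assignment(scores, maximize=True)   # Hungarian algorithm
    return np.array_equal(assignment, np.arange(c))                # all c examples labelled correctly?

vus_estimate = np.mean([trial() for _ in range(5000)])
print(f"estimated probability of labelling all {c} examples correctly ~ {vus_estimate:.2f}")
```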
Given the success of ROC curves for the assessment of classification models, the extension of ROC curves for other supervised tasks has also been investigated. Notable proposals for regression problems are the so-called regression error characteristic (REC) Curves and the Regression ROC (RROC) curves. In the latter, RROC curves become extremely similar to ROC curves for classification, with the notions of asymmetry, dominance and convex hull. Also, the area under RROC curves is proportional to the error variance of the regression model.
See also.
References.
| [
{
"math_id": 0,
"text": "-\\infty"
},
{
"math_id": 1,
"text": " X "
},
{
"math_id": 2,
"text": " T "
},
{
"math_id": 3,
"text": " X>T "
},
{
"math_id": 4,
"text": " f_1 (x) "
},
{
"math_id": 5,
"text": " f_0 (x) "
},
{
"math_id": 6,
"text": " \\mbox{TPR}(T)= \\int_{T}^\\infty f_1(x) \\, dx "
},
{
"math_id": 7,
"text": " \\mbox{FPR}(T)= \\int_{T}^\\infty f_0(x) \\, dx "
},
{
"math_id": 8,
"text": "\\mbox{TPR}(T)"
},
{
"math_id": 9,
"text": "\\mbox{FPR}(T)"
},
{
"math_id": 10,
"text": "T"
},
{
"math_id": 11,
"text": "(tpr,fpr)"
},
{
"math_id": 12,
"text": "\\operatorname{TPR}(T): T \\to y(x)"
},
{
"math_id": 13,
"text": "\\operatorname{FPR}(T): T \\to x"
},
{
"math_id": 14,
"text": "\n\\begin{align}\nA & = \\int_{x=0}^1 \\mbox{TPR}(\\mbox{FPR}^{-1}(x)) \\, dx \\\\[5pt]\n& = \\int_{\\infty}^{-\\infty} \\mbox{TPR}(T) \\mbox{FPR}'(T) \\, dT \\\\[5pt]\n& = \\int_{-\\infty}^\\infty \\int_{-\\infty}^\\infty I(T' \\ge T)f_1(T') f_0(T) \\, dT' \\, dT = P(X_1 \\ge X_0)\n\\end{align}\n"
},
{
"math_id": 15,
"text": " X_1 "
},
{
"math_id": 16,
"text": " X_0 "
},
{
"math_id": 17,
"text": "f_0"
},
{
"math_id": 18,
"text": "f_1"
},
{
"math_id": 19,
"text": " A = \\Phi\\left((\\mu_1-\\mu_0)/\\sqrt{\\sigma_1^2 + \\sigma_0^2}\\right) "
},
{
"math_id": 20,
"text": "f"
},
{
"math_id": 21,
"text": "\\text{AUC}(f) = \n \\frac{\\sum _{t_0 \\in \\mathcal{D}^0} \\sum _{t_1 \\in \\mathcal{D}^1} \n \\textbf{1}[f(t_0) < f(t_1)]}{|\\mathcal{D}^0| \\cdot |\\mathcal{D}^1|},\n"
},
{
"math_id": 22,
"text": "\\textbf{1}[f(t_0) < f(t_1)]"
},
{
"math_id": 23,
"text": "f(t_0) < f(t_1)"
},
{
"math_id": 24,
"text": "\\mathcal{D}^0"
},
{
"math_id": 25,
"text": "\\mathcal{D}^1"
},
{
"math_id": 26,
"text": "G_1 = 2 \\operatorname{AUC} - 1"
},
{
"math_id": 27,
"text": "G_1"
},
{
"math_id": 28,
"text": "\\frac{\\text{hits}}{\\text{hits}+\\text{misses}}"
},
{
"math_id": 29,
"text": "\\frac{\\text{false alarms}}{\\text{false alarms} + \\text{correct rejections}}"
},
{
"math_id": 30,
"text": "P_{D}"
},
{
"math_id": 31,
"text": "P_{FA}"
},
{
"math_id": 32,
"text": "P_{D}=\\frac{1}{2}\\operatorname{erfc}\\left(\\operatorname{erfc}^{-1}\\left(2P_{FA}\\right)-\\sqrt{\\mathcal{X}}\\right)"
},
{
"math_id": 33,
"text": "\\mathcal{X}"
},
{
"math_id": 34,
"text": "\\mathcal{X}_{dB}=10\\log_{10}\\mathcal{X}"
}
] | https://en.wikipedia.org/wiki?curid=922505 |
922554 | Discharge (hydrology) | Flow rate of water that is transported through a given cross-sectional area
In hydrology, discharge is the volumetric flow rate (volume per time, in units of m3/h or ft3/h) of a stream. It equals the product of average flow velocity (with dimension of length per time, in m/h or ft/h) and the cross-sectional area (in m2 or ft2). It includes any suspended solids (e.g. sediment), dissolved chemicals like CaCO3(aq), or biologic material (e.g. diatoms) in addition to the water itself. Terms may vary between disciplines. For example, a fluvial hydrologist studying natural river systems may define discharge as streamflow, whereas an engineer operating a reservoir system may equate it with outflow, contrasted with inflow.
Formulation.
A discharge is a measure of the quantity of any fluid flow over unit time. The quantity may be either volume or mass. Thus the water discharge of a tap (faucet) can be measured with a measuring jug and a stopwatch. Here the discharge might be 1 litre per 15 seconds, equivalent to 67 ml/second or 4 litres/minute. This is an average measure. For measuring the discharge of a river we need a different method and the most common is the 'area-velocity' method. The area is the cross sectional area across a river and the average velocity across that section needs to be measured for a unit time, commonly a minute. Measurement of cross sectional area and average velocity, although simple in concept, are frequently non-trivial to determine.
The units that are typically used to express discharge in streams or rivers include m3/s (cubic meters per second), ft3/s (cubic feet per second or cfs) and/or acre-feet per day.
A commonly applied methodology for measuring, and estimating, the discharge of a river is based on a simplified form of the continuity equation. The equation implies that for any incompressible fluid, such as liquid water, the discharge (Q) is equal to the product of the stream's cross-sectional area (A) and its mean velocity (formula_0), and is written as:
formula_1
where formula_2 is the discharge, formula_3 is the cross-sectional area, and formula_0 is the mean flow velocity.
For example, the average discharge of the Rhine river in Europe is on the order of a few thousand cubic metres per second, or a few hundred million cubic metres per day.
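A minimal numerical sketch of the area-velocity calculation (all numbers are hypothetical, chosen only to illustrate the arithmetic and the unit conversions):

```python
# area-velocity method: discharge = cross-sectional area * mean velocity (hypothetical numbers)
width_m = 25.0            # stream width
mean_depth_m = 1.8        # mean depth across the section
mean_velocity_mps = 0.9   # section-averaged velocity

area_m2 = width_m * mean_depth_m
Q = area_m2 * mean_velocity_mps
print(f"Q = {Q:.1f} m^3/s = {Q * 86400:.0f} m^3/day")

# the tap example above: 1 litre per 15 seconds
print(f"tap: {1000 / 15:.0f} ml/s = {60 / 15:.1f} litres/minute")
```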
Because of the difficulties of measurement, a stream gauge is often used at a fixed location on the stream or river.
Catchment discharge.
The catchment of a river above a certain location is determined by the surface area of all land which drains toward the river from above that point. The river's discharge at that location depends on the rainfall on the catchment or drainage area and the inflow or outflow of groundwater to or from the area, stream modifications such as dams and irrigation diversions, as well as evaporation and evapotranspiration from the area's land and plant surfaces. In storm hydrology, an important consideration is the stream's discharge hydrograph, a record of how the discharge varies over time after a precipitation event. The stream rises to a peak flow after each precipitation event, then falls in a slow recession. Because the peak flow also corresponds to the maximum water level reached during the event, it is of interest in flood studies. Analysis of the relationship between precipitation intensity and duration and the response of the stream discharge are aided by the concept of the unit hydrograph, which represents the response of stream discharge over time to the application of a hypothetical "unit" amount and duration of rainfall (e.g., half an inch over one hour). The amount of precipitation correlates to the volume of water (depending on the area of the catchment) that subsequently flows out of the river. Using the unit hydrograph method, actual historical rainfalls can be modeled mathematically to confirm characteristics of historical floods, and hypothetical "design storms" can be created for comparison to observed stream responses.
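A short sketch of the unit-hydrograph superposition described above (all ordinates and rainfall amounts are hypothetical): the storm hydrograph is obtained by convolving the sequence of effective-rainfall increments with the unit hydrograph.

```python
import numpy as np

# hypothetical unit hydrograph: response (m^3/s) to one unit of effective rain, hour by hour
unit_hydrograph = np.array([0.0, 0.1, 0.3, 0.25, 0.18, 0.1, 0.05, 0.02])
rainfall_units = np.array([0.5, 1.2, 0.8])    # effective rain in successive hours, in the same units

storm_hydrograph = np.convolve(rainfall_units, unit_hydrograph)   # linear superposition
print(np.round(storm_hydrograph, 3))
print(f"peak discharge ~ {storm_hydrograph.max():.2f} at hour {int(np.argmax(storm_hydrograph))}")
```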
The relationship between the discharge in the stream at a given cross-section and the level of the stream is described by a rating curve. Average velocities and the cross-sectional area of the stream are measured for a given stream level. The velocity and the area give the discharge for that level. After measurements are made for several different levels, a rating table or rating curve may be developed. Once rated, the discharge in the stream may be determined by measuring the level, and determining the corresponding discharge from the rating curve. If a continuous level-recording device is located at a rated cross-section, the stream's discharge may be continuously determined.
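The sketch below illustrates developing a rating curve from a handful of paired stage and discharge measurements (synthetic data; the power-law form Q = a(h - h0)^b is a commonly used choice, but the appropriate functional form and its validity range depend on the site).

```python
import numpy as np
from scipy.optimize import curve_fit

def rating(h, a, h0, b):
    """Power-law rating curve Q = a * (h - h0)**b (a common but not universal choice)."""
    return a * (h - h0) ** b

# synthetic stage (m) / discharge (m^3/s) measurement pairs
stage = np.array([0.42, 0.55, 0.71, 0.90, 1.15, 1.40, 1.80])
discharge = np.array([0.84, 1.97, 3.74, 6.29, 10.25, 14.80, 23.16])

# bound h0 below the smallest observed stage so (h - h0) stays positive during the fit
params, _ = curve_fit(rating, stage, discharge, p0=(10.0, 0.2, 1.5),
                      bounds=([0.1, 0.0, 0.5], [100.0, 0.4, 3.0]))
a, h0, b = params
print(f"fitted rating curve: Q ~ {a:.1f} * (h - {h0:.2f})^{b:.2f}")
print(f"discharge estimated from a stage reading of 1.0 m: {rating(1.0, *params):.1f} m^3/s")
```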
Larger flows (higher discharges) can transport more sediment and larger particles downstream than smaller flows due to their greater force. Larger flows can also erode stream banks and damage public infrastructure.
Catchment effects on discharge and morphology.
G. H. Dury and M. J. Bradshaw are two geographers who devised models showing the relationship between discharge and other variables in a river. The Bradshaw model described how pebble size and other variables change from source to mouth, while Dury considered the relationships between discharge and variables such as stream slope and friction. These follow from the ideas presented by Leopold, Wolman and Miller in "Fluvial Processes in Geomorphology", and from work on land use affecting river discharge and bedload supply.
Inflow.
Inflow is the sum of processes within the hydrologic cycle that increase the water levels of bodies of water.
Most precipitation falls directly over bodies of water such as the oceans; precipitation that falls on land partly becomes surface runoff. A portion of the runoff enters streams and rivers, another portion soaks into the ground as groundwater seepage, and the rest soaks into the ground as infiltration, some of which percolates deep enough to replenish aquifers. | [
{
"math_id": 0,
"text": "\\bar{u}"
},
{
"math_id": 1,
"text": "Q=A\\,\\bar{u}"
},
{
"math_id": 2,
"text": "Q"
},
{
"math_id": 3,
"text": "A"
}
] | https://en.wikipedia.org/wiki?curid=922554 |
922567 | Fermi's golden rule | Transition rate formula
In quantum physics, Fermi's golden rule is a formula that describes the transition rate (the probability of a transition per unit time) from one energy eigenstate of a quantum system to a group of energy eigenstates in a continuum, as a result of a weak perturbation. This transition rate is effectively independent of time (so long as the strength of the perturbation is independent of time) and is proportional to the strength of the coupling between the initial and final states of the system (described by the square of the matrix element of the perturbation) as well as the density of states. It is also applicable when the final state is discrete, i.e. it is not part of a continuum, if there is some decoherence in the process, like relaxation or collision of the atoms, or like noise in the perturbation, in which case the density of states is replaced by the reciprocal of the decoherence bandwidth.
Historical background.
Although the rule is named after Enrico Fermi, most of the work leading to it is due to Paul Dirac, who twenty years earlier had formulated a virtually identical equation, including the three components of a constant, the matrix element of the perturbation and an energy difference. It was given this name because, on account of its importance, Fermi called it "golden rule No. 2".
Most uses of the term Fermi's golden rule are referring to "golden rule No. 2", but Fermi's "golden rule No. 1" is of a similar form and considers the probability of indirect transitions per unit time.
The rate and its derivation.
Fermi's golden rule describes a system that begins in an eigenstate formula_0 of an unperturbed Hamiltonian "H"0 and considers the effect of a perturbing Hamiltonian H' applied to the system. If H' is time-independent, the system goes only into those states in the continuum that have the same energy as the initial state. If H' is oscillating sinusoidally as a function of time (i.e. it is a harmonic perturbation) with an angular frequency ω, the transition is into states with energies that differ by "ħω" from the energy of the initial state.
In both cases, the "transition probability per unit of time" from the initial state formula_0 to a set of final states formula_1 is essentially constant. It is given, to first-order approximation, by
formula_2
where formula_3 is the matrix element (in bra–ket notation) of the perturbation H' between the final and initial states, and formula_4 is the density of states (number of continuum states divided by formula_5 in the infinitesimally small energy interval formula_6 to formula_7) at the energy formula_8 of the final states. This transition probability is also called "decay probability" and is related to the inverse of the mean lifetime. Thus, the probability of finding the system in state formula_0 is proportional to formula_9.
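As a purely illustrative order-of-magnitude sketch (the coupling strength and density of states below are invented numbers, not values for any particular system), the rate and the corresponding mean lifetime can be evaluated directly from the formula:

```python
import numpy as np
from scipy.constants import hbar, eV

# Gamma = (2*pi/hbar) * |<f|H'|i>|^2 * rho(E_f), with invented illustrative values
matrix_element = 1e-4 * eV               # |<f|H'|i>| in joules (hypothetical coupling strength)
density_of_states = 1.0 / (1e-3 * eV)    # rho(E_f): one state per meV (hypothetical)

gamma = 2 * np.pi / hbar * matrix_element ** 2 * density_of_states   # transitions per second
print(f"Gamma ~ {gamma:.2e} s^-1, mean lifetime ~ {1 / gamma:.2e} s")
```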
The standard way to derive the equation is to start with time-dependent perturbation theory and to take the limit for absorption under the assumption that the time of the measurement is much larger than the time needed for the transition.
Only the magnitude of the matrix element formula_3 enters the Fermi's golden rule. The phase of this matrix element, however, contains separate information about the transition process.
It appears in expressions that complement the golden rule in the semiclassical Boltzmann equation approach to electron transport.
While the Golden rule is commonly stated and derived in the terms above, the final state (continuum) wave function is often rather vaguely described, and not normalized correctly (and the normalisation is used in the derivation). The problem is that in order to produce a continuum there can be no spatial confinement (which would necessarily discretise the spectrum), and therefore the continuum wave functions must have infinite extent, and in turn this means that the normalisation formula_14 is infinite, not unity. If the interactions depend on the energy of the continuum state, but not any other quantum numbers, it is usual to normalise continuum wave-functions with energy formula_15 labelled formula_16, by writing formula_17 where formula_18 is the Dirac delta function, and effectively a factor of the square-root of the density of states is included into formula_19. In this case, the continuum wave function has dimensions of formula_20, and the Golden Rule is now
formula_21
where formula_22 refers to the continuum state with the same energy as the discrete state formula_10. For example, correctly normalized continuum wave functions for the case of a free electron in the vicinity of a hydrogen atom are available in Bethe and Salpeter.
Normalized derivation in time-dependent perturbation theory.
The following paraphrases the treatment of Cohen-Tannoudji. As before, the total Hamiltonian is the sum of an “original” Hamiltonian "H"0 and a perturbation: formula_23. We can still expand an arbitrary quantum state’s time evolution in terms of energy eigenstates of the unperturbed system, but these now consist of discrete states and continuum states. We assume that the interactions depend on the energy of the continuum state, but not any other quantum numbers. The expansion in the relevant states in the Dirac picture is
formula_24
where formula_25, formula_26 and formula_27 are the energies of states formula_28, respectively. The integral is over the continuum formula_29, i.e. formula_30 is in the continuum.
Substituting into the time-dependent Schrödinger equation
formula_31
and premultiplying by formula_32 produces
formula_33
where formula_34, and premultiplying by formula_35 produces
formula_36
We made use of the normalisation formula_37.
Integrating the latter and substituting into the former,
formula_38
It can be seen here that formula_39 at time formula_40 depends on formula_41 at all earlier times formula_42, i.e. it is non-Markovian. We make the Markov approximation, i.e. that it only depends on formula_41 at time formula_40 (which is less restrictive than the approximation that formula_43 used above, and allows the perturbation to be strong)
formula_44
where formula_45 and formula_46. Integrating over formula_47,
formula_48
The fraction on the right is a nascent Dirac delta function, meaning it tends to formula_49 as formula_50 (ignoring its imaginary part, which leads to an unimportant energy shift, while the real part produces decay). Finally
formula_51
which can have solutions:
formula_52, i.e., the decay of population in the initial discrete state is
formula_53
where
formula_54
Applications.
Semiconductors.
The Fermi's golden rule can be used for calculating the transition probability rate for an electron that is excited by a photon from the valence band to the conduction band in a direct band-gap semiconductor, and also for when the electron recombines with the hole and emits a photon. Consider a photon of frequency formula_12 and wavevector formula_55, where the light dispersion relation is formula_56 and formula_13 is the index of refraction.
Using the Coulomb gauge where formula_57 and formula_58, the vector potential of light is given by formula_59 where the resulting electric field is
formula_60
For an electron in the valence band, the Hamiltonian is
formula_61
where formula_62 is the potential of the crystal, formula_63 and formula_64 are the charge and mass of an electron, and formula_65 is the momentum operator. Here we consider a process involving one photon, to first order in formula_66. The resulting Hamiltonian is
formula_67
where formula_11 is the perturbation of light.
From here on we consider a vertical optical dipole transition, and thus, from time-dependent perturbation theory, have the transition probability
formula_68
with formula_69
where formula_70 is the light polarization vector. formula_0 and formula_1 are the Bloch wavefunction of the initial and final states. Here the transition probability needs to satisfy the energy
conservation given by formula_71. From perturbation it is evident that the heart of the calculation lies in the matrix elements shown in the bracket.
For the initial and final states in the valence and conduction bands, we have formula_72 and formula_73, respectively, and if the formula_11 operator does not act on the spin, the electron stays in the same spin state; hence we can write the Bloch wavefunctions of the initial and final states as
formula_74
formula_75
where formula_76 is the number of unit cells with volume formula_77. Calculating using these wavefunctions, and focusing on emission (photoluminescence) rather than absorption, we are led to the transition rate
formula_78
where formula_79, defined as the optical transition dipole moment, is qualitatively the expectation value formula_80 and in this situation takes the form
formula_81
Finally, we want to know the total transition rate formula_82. Hence we need to sum over all possible initial and final states that can satisfy energy conservation (i.e. an integral over the Brillouin zone in "k"-space), and take into account spin degeneracy, which after calculation results in
formula_83
where formula_84 is the joint valence-conduction density of states (i.e. the density of pairs of states: one occupied valence state and one empty conduction state). In 3D, this is
formula_85
but the joint DOS is different for 2D, 1D, and 0D.
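As a rough numerical illustration of the square-root onset implied by the 3D joint density of states, the following Python sketch evaluates the expression above in reduced units; the band gap and reduced effective mass below are assumed, illustrative values, not material parameters taken from the text.

```python
# Illustrative only: reduced units (hbar = 1), assumed gap and reduced effective mass.
from math import pi, sqrt

E_g = 1.0          # band gap (assumed)
m_r = 0.06         # reduced effective mass m* (assumed)

def joint_dos_3d(hw):
    """rho_cv = 2*pi*(2*m_r)**1.5 * sqrt(hw - E_g) above the gap, 0 below (hbar = 1)."""
    return 0.0 if hw <= E_g else 2 * pi * (2 * m_r) ** 1.5 * sqrt(hw - E_g)

for hw in [0.95, 1.00, 1.01, 1.04, 1.16]:
    print(f"hbar*omega = {hw:.2f}  ->  rho_cv = {joint_dos_3d(hw):.4f}")
# Gamma(omega) is proportional to |eps . mu_cv|^2 * rho_cv(omega), so emission and
# absorption switch on as sqrt(hbar*omega - E_g) at the band edge in 3D.
```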
More generally, Fermi's golden rule for semiconductors can be expressed as
formula_86
In the same manner, the stationary DC photocurrent, whose amplitude is proportional to the square of the light field, is
formula_87
where formula_88 is the relaxation time, and formula_89 and formula_90 are the differences in group velocity and Fermi–Dirac distribution between the possible initial and final states. Here formula_91 defines the optical transition dipole. Due to the commutation relation between the position formula_92 and the Hamiltonian, we can also rewrite the transition dipole and the photocurrent in terms of position-operator matrix elements using formula_93. This effect can only exist in systems with broken inversion symmetry, and the nonzero components of the photocurrent can be obtained by symmetry arguments.
Scanning tunneling microscopy.
In a scanning tunneling microscope, Fermi's golden rule is used to derive the tunneling current. It takes the form
formula_94
where formula_95 is the tunneling matrix element.
Quantum optics.
When considering energy level transitions between two discrete states, Fermi's golden rule is written as
formula_96
where formula_97 is the density of photon states at a given energy, formula_98 is the photon energy, and formula_12 is the angular frequency. This alternative expression relies on the fact that there is a continuum of final (photon) states, i.e. the range of allowed photon energies is continuous.
Drexhage experiment.
Fermi's golden rule predicts that the probability that an excited state will decay depends on the density of states. This can be seen experimentally by measuring the decay rate of a dipole near a mirror: as the presence of the mirror creates regions of higher and lower density of states, the measured decay rate depends on the distance between the mirror and the dipole.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "|i\\rangle"
},
{
"math_id": 1,
"text": "|f\\rangle"
},
{
"math_id": 2,
"text": "\\Gamma_{i \\to f} = \\frac{2 \\pi}{\\hbar} \\left| \\langle f|H'|i \\rangle \\right|^2 \\rho(E_f),"
},
{
"math_id": 3,
"text": "\\langle f|H'|i \\rangle"
},
{
"math_id": 4,
"text": "\\rho(E_f)"
},
{
"math_id": 5,
"text": "dE"
},
{
"math_id": 6,
"text": "E"
},
{
"math_id": 7,
"text": "E + dE"
},
{
"math_id": 8,
"text": "E_f"
},
{
"math_id": 9,
"text": "e^{-\\Gamma_{i \\to f} t}"
},
{
"math_id": 10,
"text": "i"
},
{
"math_id": 11,
"text": "H'"
},
{
"math_id": 12,
"text": "\\omega"
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": "\\langle f|f \\rangle = \\int d^3\\mathbf{r} \\left|f(\\mathbf{r})\\right|^2"
},
{
"math_id": 15,
"text": " \\varepsilon "
},
{
"math_id": 16,
"text": "| \\varepsilon\\rangle"
},
{
"math_id": 17,
"text": "\\langle \\varepsilon|\\varepsilon ' \\rangle=\\delta(\\varepsilon-\\varepsilon ')"
},
{
"math_id": 18,
"text": "\\delta"
},
{
"math_id": 19,
"text": "|\\varepsilon_i\\rangle"
},
{
"math_id": 20,
"text": "1/\\sqrt{\\text{[energy]}}"
},
{
"math_id": 21,
"text": " \\Gamma_{i \\to \\varepsilon_i} = \\frac{2\\pi}{\\hbar} |\\langle \\varepsilon_i|H'|i\\rangle|^2 ."
},
{
"math_id": 22,
"text": "\\varepsilon_i"
},
{
"math_id": 23,
"text": "H = H_0 + H'"
},
{
"math_id": 24,
"text": " |\\psi(t)\\rang = a_i e^{-\\mathrm{i}\\omega_i t}|i\\rang + \\int_C d\\varepsilon a_\\varepsilon e^{-\\mathrm{i}\\omega t} |\\varepsilon\\rangle,"
},
{
"math_id": 25,
"text": "\\omega_i = \\varepsilon_i / \\hbar"
},
{
"math_id": 26,
"text": "\\omega = \\varepsilon / \\hbar"
},
{
"math_id": 27,
"text": "\\varepsilon_i,\\varepsilon "
},
{
"math_id": 28,
"text": "|i\\rangle, |\\varepsilon\\rangle"
},
{
"math_id": 29,
"text": " \\varepsilon \\in C"
},
{
"math_id": 30,
"text": " |\\varepsilon\\rangle"
},
{
"math_id": 31,
"text": " H |\\psi(t)\\rang = \\mathrm{i}\\hbar \\frac{\\partial}{\\partial t} |\\psi(t)\\rang"
},
{
"math_id": 32,
"text": "\\langle i|"
},
{
"math_id": 33,
"text": " \\frac{da_i(t)}{dt} = -\\mathrm{i} \\int_C d\\varepsilon \\Omega_{i\\varepsilon} e^{-\\mathrm{i}(\\omega - \\omega_i)t}a_\\varepsilon(t),"
},
{
"math_id": 34,
"text": "\\Omega_{i \\varepsilon }=\\langle i|H'|\\varepsilon\\rangle/\\hbar"
},
{
"math_id": 35,
"text": "\\langle \\varepsilon '|"
},
{
"math_id": 36,
"text": " \\frac{da_\\varepsilon(t)}{dt} = -\\mathrm{i} \\Omega_{\\varepsilon i} e^{\\mathrm{i}(\\omega - \\omega_i)t} a_i(t)."
},
{
"math_id": 37,
"text": " \\langle \\varepsilon' |\\varepsilon\\rangle = \\delta(\\varepsilon'-\\varepsilon) "
},
{
"math_id": 38,
"text": "\n \\frac{da_i(t)}{dt} = - \\int_C d\\varepsilon \\Omega_{i\\varepsilon}\\Omega_{\\varepsilon i} \\int_0^t dt' e^{-\\mathrm{i}(\\omega - \\omega_i)(t-t')} a_i(t').\n"
},
{
"math_id": 39,
"text": "da_i/dt"
},
{
"math_id": 40,
"text": "t"
},
{
"math_id": 41,
"text": "a_i"
},
{
"math_id": 42,
"text": "t'"
},
{
"math_id": 43,
"text": "a_i \\approx 1"
},
{
"math_id": 44,
"text": "\n \\frac{da_i(t)}{dt} = \\int_C d\\varepsilon |\\Omega_{i\\varepsilon}|^2 a_i(t) \\int_0^t dT e^{-\\mathrm{i}\\Delta T},\n"
},
{
"math_id": 45,
"text": "T=t-t'"
},
{
"math_id": 46,
"text": "\\Delta=\\omega -\\omega_i"
},
{
"math_id": 47,
"text": "T"
},
{
"math_id": 48,
"text": "\n \\frac{da_i(t)}{dt} = - 2\\pi\\hbar \\int_C d\\varepsilon |\\Omega_{i\\varepsilon}|^2 a_i(t) \\frac{ e^{-\\mathrm{i}\\Delta t/2}\\sin(\\Delta t/2)}{\\pi\\hbar\\Delta} , "
},
{
"math_id": 49,
"text": "\\delta(\\varepsilon-\\varepsilon_i)"
},
{
"math_id": 50,
"text": "t\\to\\infty"
},
{
"math_id": 51,
"text": " \\frac{da_i(t)}{dt} = - 2\\pi\\hbar |\\Omega_{i\\varepsilon_i}|^2 a_i(t), "
},
{
"math_id": 52,
"text": " a_i(t) = \\exp(-\\Gamma_{i\\to \\varepsilon_i} t/2) "
},
{
"math_id": 53,
"text": " P_i(t) = |a_i(t)|^2 = \\exp(-\\Gamma_{i\\to \\varepsilon_i} t) "
},
{
"math_id": 54,
"text": " \\Gamma_{i\\to \\varepsilon_i} = 2\\pi\\hbar |\\Omega_{i\\varepsilon_i}|^2 = \\frac{2\\pi}{\\hbar} |\\langle i|H'|\\varepsilon\\rangle|^2. "
},
{
"math_id": 55,
"text": "\\textbf{q}"
},
{
"math_id": 56,
"text": "\\omega = (c/n)\\left|\\textbf{q}\\right|"
},
{
"math_id": 57,
"text": "\\nabla\\cdot \\textbf{A}=0"
},
{
"math_id": 58,
"text": "V=0"
},
{
"math_id": 59,
"text": "\\textbf{A} = A_0\\boldsymbol{\\varepsilon}e^{\\mathrm{i}(\\textbf{q}\\cdot\\textbf{r}-\\omega t)} +C "
},
{
"math_id": 60,
"text": "\\textbf{E} = -\\frac{\\partial\\textbf{A}}{\\partial t} = \\mathrm{i} \\omega A_0 \\boldsymbol{\\varepsilon} e^{\\mathrm{i}.(\\textbf{q}\\cdot\\textbf{r}-\\omega t)}."
},
{
"math_id": 61,
"text": "H = \\frac{(\\textbf{p} +e\\textbf{A})^2}{2m_0} + V(\\textbf{r}),"
},
{
"math_id": 62,
"text": "V(\\textbf{r})"
},
{
"math_id": 63,
"text": "e"
},
{
"math_id": 64,
"text": "m_0"
},
{
"math_id": 65,
"text": "\\textbf{p}"
},
{
"math_id": 66,
"text": "\\textbf{A}"
},
{
"math_id": 67,
"text": "H = H_0 + H' = \\left[ \\frac{\\textbf{p}^2}{2m_0} + V(\\textbf{r}) \\right] + \n\\left[ \\frac{e}{2m_0}(\\textbf{p}\\cdot \\textbf{A} + \\textbf{A}\\cdot \\textbf{p}) \\right],"
},
{
"math_id": 68,
"text": "\\Gamma_{if} = \\frac{2\\pi}{\\hbar} \\left|\\langle f|H'|i\\rangle \\right|^2\\delta (E_f-E_i \\pm \\hbar \\omega),"
},
{
"math_id": 69,
"text": "H' \\approx \\frac{eA_0}{m_0}\\boldsymbol{\\varepsilon}\\cdot \\mathbf{p},"
},
{
"math_id": 70,
"text": "\\boldsymbol{\\varepsilon}"
},
{
"math_id": 71,
"text": "\\delta (E_f-E_i \\pm \\hbar \\omega)"
},
{
"math_id": 72,
"text": "|i\\rangle =\\Psi_{v,\\textbf{k}_i,s_i}(\\textbf{r})"
},
{
"math_id": 73,
"text": "|f\\rangle =\\Psi_{c,\\textbf{k}_f,s_f}(\\textbf{r})"
},
{
"math_id": 74,
"text": "\\Psi_{v,\\textbf{k}_i}(\\textbf{r})=\n\\frac{1}{\\sqrt{N\\Omega_0}}u_{n_v,\\textbf{k}_i}(\\textbf{r})e^{i\\textbf{k}_i\\cdot\\textbf{r}},\n"
},
{
"math_id": 75,
"text": "\\Psi_{c,\\textbf{k}_f}(\\textbf{r})=\n\\frac{1}{\\sqrt{N\\Omega_0}}u_{n_c,\\textbf{k}_f}(\\textbf{r})e^{i\\textbf{k}_f\\cdot\\textbf{r}},\n"
},
{
"math_id": 76,
"text": "N"
},
{
"math_id": 77,
"text": "\\Omega_0"
},
{
"math_id": 78,
"text": "\n\\Gamma_{cv}=\\frac{2\\pi}{\\hbar}\\left(\\frac{eA_0}{m_0}\\right)^2\n|\\boldsymbol{\\varepsilon} \\cdot \\boldsymbol{\\mu}_{cv}(\\textbf{k})|^2 \\delta (E_c - E_v - \\hbar \\omega),\n"
},
{
"math_id": 79,
"text": "\\boldsymbol{\\mu}_{cv}"
},
{
"math_id": 80,
"text": "\\langle c| (\\text{charge}) \\times (\\text{distance})|v\\rangle"
},
{
"math_id": 81,
"text": "\\boldsymbol{\\mu}_{cv} = \n-\\frac{i\\hbar}{\\Omega_0} \\int_{\\Omega_0} d\\textbf{r}' u^*_{n_c,\\textbf{k}}(\\textbf{r}')\n\\nabla u_{n_v,\\textbf{k}}(\\textbf{r}').\n"
},
{
"math_id": 82,
"text": "\\Gamma(\\omega)"
},
{
"math_id": 83,
"text": "\n\\Gamma(\\omega) = \\frac{4\\pi}{\\hbar}\\left( \\frac{eA_0}{m_0} \\right)^2\n|\\boldsymbol{\\varepsilon}\\cdot \\boldsymbol{\\mu}_{cv}|^2 \\rho_{cv}(\\omega)\n"
},
{
"math_id": 84,
"text": "\\rho_{cv}(\\omega)"
},
{
"math_id": 85,
"text": "\\rho_{cv}(\\omega) = 2\\pi \\left( \\frac{2m^*}{\\hbar^2}\\right)^{3/2}\\sqrt{\\hbar \\omega - E_g},"
},
{
"math_id": 86,
"text": "\n\\Gamma_{vc}=\n\\frac{2\\pi}{\\hbar}\\int_\\text{BZ} \\frac{d\\textbf{k}}{4\\pi^3}|H_{vc}'|^2\n\\delta(E_c(\\textbf{k}) - E_v(\\textbf{k}) - \\hbar\\omega).\n"
},
{
"math_id": 87,
"text": "\n\\textbf{J}=\n-\\frac{2\\pi e \\tau}{\\hbar}\\sum_{i,f}\\int_\\text{BZ} \\frac{d\\textbf{k}}{(2\\pi)^D} |\\textbf{v}_i-\\textbf{v}_f|(f_i(\\textbf{k})-f_f(\\textbf{k}))|H_{if}'|^2\n\\delta(E_f(\\textbf{k}) - E_i(\\textbf{k}) - \\hbar\\omega),\n"
},
{
"math_id": 88,
"text": "\\tau"
},
{
"math_id": 89,
"text": "\\textbf{v}_i-\\textbf{v}_f"
},
{
"math_id": 90,
"text": "f_i(\\textbf{k})-f_f(\\textbf{k})"
},
{
"math_id": 91,
"text": "|H_{if}'|^2"
},
{
"math_id": 92,
"text": "\\textbf{r}"
},
{
"math_id": 93,
"text": "\\langle i|\\textbf{p}|f\\rangle= -im_0\\omega\\langle i|\\textbf{r}|f\\rangle"
},
{
"math_id": 94,
"text": " w = \\frac{2 \\pi}{\\hbar} |M|^2 \\delta (E_{\\psi} - E_{\\chi} ), "
},
{
"math_id": 95,
"text": "M"
},
{
"math_id": 96,
"text": "\\Gamma_{i \\to f} = \\frac{2 \\pi}{\\hbar} \\left|\\langle f| H' |i \\rangle\\right|^2 g(\\hbar\\omega),"
},
{
"math_id": 97,
"text": "g(\\hbar\\omega)"
},
{
"math_id": 98,
"text": "\\hbar\\omega"
}
] | https://en.wikipedia.org/wiki?curid=922567 |
9226345 | Hat notation | Mathematical notation
A "hat" (circumflex (ˆ)), placed over a symbol is a mathematical notation with various uses.
Estimated value.
In statistics, a circumflex (ˆ), called a "hat", is used to denote an estimator or an estimated value. For example, in the context of errors and residuals, the "hat" over the letter formula_0 indicates an observable estimate (the residuals) of an unobservable quantity called formula_1 (the statistical errors).
Another example of the hat operator denoting an estimator occurs in simple linear regression. Assuming a model of formula_2, with observations of independent variable data formula_3 and dependent variable data formula_4, the estimated model is of the form formula_5 where formula_6 is commonly minimized via least squares by finding optimal values of formula_7 and formula_8 for the observed data.
Hat matrix.
In statistics, the hat matrix "H" projects the observed values y of response variable to the predicted values ŷ:
formula_9
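As a concrete illustration with synthetic data, the following Python sketch computes the hatted quantities of simple linear regression and applies the hat matrix; the ordinary-least-squares expression H = X(XᵀX)⁻¹Xᵀ used below is an assumption of this sketch rather than something stated above.

```python
# Illustrative only: synthetic data; H = X (X^T X)^{-1} X^T is the usual OLS hat matrix.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.7 * x + rng.normal(scale=0.5, size=x.size)   # "true" beta0 = 2, beta1 = 0.7 plus noise

X = np.column_stack([np.ones_like(x), x])                # design matrix
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)         # estimators beta0_hat, beta1_hat
y_hat = X @ beta_hat                                     # fitted (estimated) values
residuals = y - y_hat                                    # estimated errors (epsilon_hat)

H = X @ np.linalg.inv(X.T @ X) @ X.T                     # hat matrix: maps y to y_hat
print(np.allclose(H @ y, y_hat))                         # True: y_hat = H y
print(beta_hat)                                          # close to (2, 0.7)
```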
Cross product.
In screw theory, one use of the hat operator is to represent the cross product operation. Since the cross product with a fixed vector is a linear transformation, it can be represented by a matrix. The hat operator takes a vector and transforms it into its equivalent skew-symmetric matrix.
formula_10
For example, in three dimensions,
formula_11
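A small numerical check of this correspondence (the vectors below are arbitrary illustrative choices) can be written as follows.

```python
# Check that the hat operator's skew-symmetric matrix reproduces the cross product.
import numpy as np

def hat(a):
    """Return the 3x3 skew-symmetric matrix such that hat(a) @ b = np.cross(a, b)."""
    ax, ay, az = a
    return np.array([[0.0, -az,  ay],
                     [az,  0.0, -ax],
                     [-ay,  ax, 0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-4.0, 0.5, 2.0])
print(hat(a) @ b)        # matrix form
print(np.cross(a, b))    # same result
```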
Unit vector.
In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat", as in formula_12 (pronounced "v-hat"). This notation is especially common in physics.
Fourier transform.
The Fourier transform of a function formula_13 is traditionally denoted by formula_14.
Operator.
In quantum mechanics, operators are denoted with hat notation. For instance, see the time-independent Schrödinger equation, where the Hamiltonian operator is denoted formula_15.
formula_16
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\hat{\\varepsilon}"
},
{
"math_id": 1,
"text": "\\varepsilon"
},
{
"math_id": 2,
"text": "y_i = \\beta_0+\\beta_1 x_i+\\varepsilon_i"
},
{
"math_id": 3,
"text": "x_i"
},
{
"math_id": 4,
"text": "y_i"
},
{
"math_id": 5,
"text": "\\hat{y}_i = \\hat{\\beta}_0+\\hat{\\beta}_1 x_i"
},
{
"math_id": 6,
"text": "\\sum_i (y_i-\\hat{y}_i)^2"
},
{
"math_id": 7,
"text": "\\hat{\\beta}_0"
},
{
"math_id": 8,
"text": "\\hat{\\beta}_1"
},
{
"math_id": 9,
"text": "\\hat{\\mathbf{y}} = H \\mathbf{y}."
},
{
"math_id": 10,
"text": "\\mathbf{a} \\times \\mathbf{b} = \\mathbf{\\hat{a}} \\mathbf{b} "
},
{
"math_id": 11,
"text": "\\mathbf{a} \\times \\mathbf{b} = \\begin{bmatrix} a_x \\\\ a_y \\\\ a_z \\end{bmatrix} \\times \\begin{bmatrix} b_x \\\\ b_y \\\\ b_z \\end{bmatrix} = \\begin{bmatrix} 0 & -a_z & a_y \\\\ a_z & 0 & -a_x \\\\ -a_y & a_x & 0 \\end{bmatrix} \\begin{bmatrix} b_x \\\\ b_y \\\\ b_z \\end{bmatrix} = \\mathbf{\\hat{a}} \\mathbf{b} "
},
{
"math_id": 12,
"text": "\\hat {\\mathbf {v} }"
},
{
"math_id": 13,
"text": "f"
},
{
"math_id": 14,
"text": "\\hat{f}"
},
{
"math_id": 15,
"text": "\\hat{H} "
},
{
"math_id": 16,
"text": "\\hat{H}\\psi = E\\psi "
}
] | https://en.wikipedia.org/wiki?curid=9226345 |
9228246 | Quadratic growth | In mathematics, a function or sequence is said to exhibit quadratic growth when its values are proportional to the square of the function argument or sequence position. "Quadratic growth" often means more generally "quadratic growth in the limit", as the argument or sequence position goes to infinity – in big Theta notation, formula_0. This can be defined both continuously (for a real-valued function of a real variable) or discretely (for a sequence of real numbers, i.e., real-valued function of an integer or natural number variable).
Examples.
Examples of quadratic growth include any quadratic polynomial, and certain integer sequences such as the triangular numbers: the formula_1th triangular number has value formula_2, which is approximately formula_3.
For a real function of a real variable, quadratic growth is equivalent to the second derivative being constant (i.e., the third derivative being zero), and thus functions with quadratic growth are exactly the quadratic polynomials, as these are the kernel of the third derivative operator formula_4. Similarly, for a sequence (a real function of an integer or natural number variable), quadratic growth is equivalent to the second finite difference being constant (the third finite difference being zero), and thus a sequence with quadratic growth is also a quadratic polynomial. Indeed, an integer-valued sequence with quadratic growth is a polynomial in the zeroth, first, and second binomial coefficient with integer values. The coefficients can be determined by taking the Taylor polynomial (if continuous) or Newton polynomial (if discrete).
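The constant-second-difference characterisation can be checked with a short script; the triangular numbers used below are just one convenient example sequence.

```python
# A short check that a quadratically growing integer sequence, here the
# triangular numbers T(n) = n*(n+1)/2, has constant second finite differences
# and can be written with integer coefficients in binomial form.
from math import comb

n = list(range(12))
T = [k * (k + 1) // 2 for k in n]                       # triangular numbers
first = [T[k + 1] - T[k] for k in range(len(T) - 1)]
second = [first[k + 1] - first[k] for k in range(len(first) - 1)]

print(second)                                           # all 1: constant second difference
assert all(T[k] == comb(k, 1) + comb(k, 2) for k in n)  # Newton/binomial form of T(n)
```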
Algorithmic examples include the worst-case running time of simple quadratic-time algorithms such as insertion sort and bubble sort, which perform a number of elementary operations roughly proportional to formula_3 on inputs of length formula_1.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(x)=\\Theta(x^2)"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "n(n+1)/2"
},
{
"math_id": 3,
"text": "n^2/2"
},
{
"math_id": 4,
"text": "D^3"
}
] | https://en.wikipedia.org/wiki?curid=9228246 |
9228588 | Florimond de Beaune | French jurist and mathematician
Florimond de Beaune (7 October 1601, Blois – 18 August 1652, Blois) was a French jurist and mathematician, and an early follower of René Descartes. R. Taton calls him "a typical example of the erudite amateurs" active in 17th-century science.
In a 1638 letter to Descartes, de Beaune posed the problem of solving the differential equation
formula_0
now seen as the first example of the inverse tangent method of deducing properties of a curve from its tangents.
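The equation can be explored numerically; in the sketch below the value of the constant and the initial condition (alpha = 1, y(0) = 2) are illustrative assumptions, not taken from de Beaune's letter.

```python
# A quick numerical look at de Beaune's equation dy/dx = alpha/(y - x).
import numpy as np
from scipy.integrate import solve_ivp

alpha = 1.0
sol = solve_ivp(lambda x, y: [alpha / (y[0] - x)], (0.0, 10.0), [2.0],
                dense_output=True, rtol=1e-8)
for x in np.linspace(0.0, 10.0, 6):
    y = sol.sol(x)[0]
    print(f"x = {x:4.1f}   y = {y:.4f}   y - x = {y - x:.4f}")
# For these values, y - x decreases toward alpha, so the curve approaches a line of slope 1.
```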
His "Tractatus de limitibus aequationum" was reprinted in England in 1807; in it, he finds upper and lower bounds for the solutions to quadratic equations and cubic equations, as simple functions of the coefficients of these equations. His "Doctrine de l'angle solide" and "Inventaire de sa bibliothèque" were also reprinted, in Paris in 1975. Another of his writings was "Notae breves", the introduction to a 1649 edition of Descartes' "La Géométrie".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\operatorname{d}y}{\\operatorname{d}x}=\\frac{\\alpha}{y-x}"
}
] | https://en.wikipedia.org/wiki?curid=9228588 |
92290 | Mechanical equilibrium | When the net force on a particle is zero
In classical mechanics, a particle is in mechanical equilibrium if the net force on that particle is zero. By extension, a physical system made up of many parts is in mechanical equilibrium if the net force on each of its individual parts is zero.
In addition to defining mechanical equilibrium in terms of force, there are many alternative definitions for mechanical equilibrium which are all mathematically equivalent.
More generally in conservative systems, equilibrium is established at a point in configuration space where the gradient of the potential energy with respect to the generalized coordinates is zero.
If a particle in equilibrium has zero velocity, that particle is in static equilibrium. Since all particles in equilibrium have constant velocity, it is always possible to find an inertial reference frame in which the particle is stationary with respect to the frame.
Stability.
An important property of systems at mechanical equilibrium is their stability.
Potential energy stability test.
In a function which describes the system's potential energy, the system's equilibria can be determined using calculus. A system is in mechanical equilibrium at the critical points of the function describing the system's potential energy. These points can be located using the fact that the derivative of the function is zero at these points. To determine whether or not the system is stable or unstable, the second derivative test is applied. With formula_0 denoting the potential energy of a system with a single degree of freedom, the equilibria can be classified as follows. If the second derivative is negative, the potential energy is at a local maximum and the equilibrium is unstable, since a small displacement produces forces that drive the system further away. If the second derivative is positive, the potential energy is at a local minimum and the equilibrium is stable, since a small displacement produces restoring forces. If the second derivative is zero, the test is inconclusive and higher-order derivatives must be examined; indeed, for a potential such as formula_1 (taken to be zero at the origin) every derivative vanishes at the equilibrium point even though the point is a genuine minimum, so no finite number of derivatives settles the question.
When considering more than one dimension, it is possible to get different results in different directions, for example stability with respect to displacements in the "x"-direction but instability in the "y"-direction, a case known as a saddle point. Generally an equilibrium is only referred to as stable if it is stable in all directions.
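As a small worked example, the following sympy sketch applies the potential-energy test to a pendulum; the potential and the numerical values of the parameters are assumptions made for illustration, not taken from the article.

```python
# Classify the equilibria of a pendulum with V(theta) = -m*g*L*cos(theta).
import sympy as sp

theta = sp.symbols('theta')
m, g, L = 1, 10, 1                          # assumed numerical values
V = -m * g * L * sp.cos(theta)              # potential energy

dV = sp.diff(V, theta)
d2V = sp.diff(V, theta, 2)
for eq in sp.solveset(dV, theta, sp.Interval(0, sp.pi)):   # equilibria: V'(theta) = 0
    curvature = float(d2V.subs(theta, eq))
    if curvature > 0:
        kind = "stable (local minimum of V)"
    elif curvature < 0:
        kind = "unstable (local maximum of V)"
    else:
        kind = "inconclusive; examine higher derivatives"
    print(f"theta = {eq}: V'' = {curvature} -> {kind}")
# Expected: theta = 0 is stable, theta = pi is unstable.
```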
Statically indeterminate system.
Sometimes the equilibrium equations – force and moment equilibrium conditions – are insufficient to determine the forces and reactions. Such a situation is described as "statically indeterminate".
Statically indeterminate situations can often be solved by using information from outside the standard equilibrium equations.
Examples.
A stationary object (or set of objects) is in "static equilibrium," which is a special case of mechanical equilibrium. A paperweight on a desk is an example of static equilibrium. Other examples include a rock balance sculpture, or a stack of blocks in the game of Jenga, so long as the sculpture or stack of blocks is not in the state of collapsing.
Objects in motion can also be in equilibrium. A child sliding down a slide at constant speed would be in mechanical equilibrium, but not in static equilibrium (in the reference frame of the earth or slide).
Another example of mechanical equilibrium is a person pressing a spring to a defined point. He or she can push it to an arbitrary point and hold it there, at which point the compressive load and the spring reaction are equal. In this state the system is in mechanical equilibrium. When the compressive force is removed the spring returns to its original state.
The minimal number of static equilibria of homogeneous, convex bodies (when resting under gravity on a horizontal surface) is of special interest. In the planar case, the minimal number is 4, while in three dimensions one can build an object with just one stable and one unstable balance point. Such an object is called a gömböc.
Notes and references.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V "
},
{
"math_id": 1,
"text": "e^{-1/x^2}"
}
] | https://en.wikipedia.org/wiki?curid=92290 |
923015 | Theory of descriptions | Russelian view
The theory of descriptions is the philosopher Bertrand Russell's most significant contribution to the philosophy of language. It is also known as Russell's theory of descriptions (commonly abbreviated as RTD). In short, Russell argued that the syntactic form of descriptions (phrases that took the form of "The aardvark" and "An aardvark") is misleading, as it does not correlate their logical and/or semantic architecture. While descriptions may seem like fairly uncontroversial phrases, Russell argued that providing a satisfactory analysis of the linguistic and logical properties of a description is vital to clarity in important philosophical debates, particularly in semantic arguments, epistemology and metaphysical elements.
Since the first development of the theory in Russell's 1905 paper "On Denoting", RTD has been hugely influential and well-received within the philosophy of language. However, it has not been without its critics. In particular, the philosophers P. F. Strawson and Keith Donnellan have given notable, well known criticisms of the theory. Most recently, RTD has been defended by various philosophers and even developed in promising ways to bring it into harmony with generative grammar in Noam Chomsky's sense, particularly by Stephen Neale. Such developments have themselves been criticised, and debate continues.
Russell viewed his theory of descriptions as a kind of analysis that is now called propositional analysis (not to be confused with propositional calculus).
Overview.
Bertrand Russell's theory of descriptions was initially put forth in his 1905 essay "On Denoting", published in the philosophy journal "Mind". Russell's theory is focused on the logical form of expressions involving denoting phrases, which he divides into three groups: phrases that denote nothing (for example, "the present King of France"), phrases that denote one definite object (for example, "the present King of England"), and phrases that denote ambiguously (for example, "a man").
"Indefinite descriptions" constitute Russell's third group. Descriptions most frequently appear in the standard subject–predicate form.
Russell put forward his theory of descriptions to solve a number of problems in the philosophy of language. The two major problems are (1) co-referring expressions and (2) non-referring expressions.
The problem of co-referring expressions originated primarily with Gottlob Frege as the problem of informative identities. For example, if the morning star and the evening star are the same planet in the sky seen at different times of day (indeed, they are both the planet Venus: the morning star is the planet Venus seen in the morning sky and the evening star is the planet Venus seen in the evening sky), how is it that someone can think that the morning star rises in the morning but the evening star does not? This is apparently problematic because although the two expressions seem to denote the same thing, one cannot substitute one for the other, which one ought to be able to do with identical or synonymous expressions.
The problem of non-referring expressions is that certain expressions that are meaningful do not truly refer to anything. For example, by "any dog is annoying" it is not meant that there is a particular individual dog, namely "any dog", that has the property of being annoying (similar considerations go for "some dog", "every dog", "a dog", and so on). Likewise, by "the current Emperor of Kentucky is gray" it is not meant that there is some individual, namely "the current Emperor of Kentucky ", who has the property of being gray; Kentucky was never a monarchy, so there is currently no Emperor. Thus, what Russell wants to avoid is admitting mysterious non-existent objects into his ontology. Furthermore, the law of the excluded middle requires that one of the following propositions, for example, must be true: either "the current Emperor of Kentucky is gray" or "it is not the case that the current Emperor of Kentucky is gray". Normally, propositions of the subject-predicate form are said to be true if and only if the subject is in the extension of the predicate. But, there is currently no Emperor of Kentucky. So, since the subject does not exist, it is not in the extension of either predicate (it is not on the list of gray people or non-gray people). Thus, it appears that this is a case in which the law of excluded middle is violated, which is also an indication that something has gone wrong.
Definite descriptions.
Russell analyzes definite descriptions similarly to indefinite descriptions, except that the individual is now uniquely specified. Take as an example of a definite description the sentence "the current Emperor of Kentucky is gray". Russell analyses this phrase into the following component parts (with 'x' and 'y' representing variables): there is an x such that x is a current Emperor of Kentucky; for every y, if y is a current Emperor of Kentucky then y is identical to x (that is, there is at most one current Emperor of Kentucky); and x is gray.
Thus, a definite description (of the general form 'the F is G') becomes the following existentially quantified phrase in classic symbolic logic (where 'x' and 'y' are variables and 'F' and 'G' are predicates – in the example above, F would be "is an emperor of Kentucky", and G would be "is gray"):
formula_0
Informally, this reads as follows: something exists with the property F, there is only one such thing, and this unique thing also has the property G.
This analysis, according to Russell, solves the two problems noted above as related to definite descriptions: because the description is analysed away into quantifiers and predicates rather than treated as a referring term, the sentence is meaningful, and simply false, even though nothing satisfies the description, and the law of excluded middle is preserved once the scope of the negation is made explicit. Likewise, co-referring descriptions can differ in informational content, since each contributes its own descriptive condition rather than merely its referent.
Russell says that all propositions in which the Emperor of Kentucky has a primary occurrence are false. The denials of such propositions are true, but in these cases the Emperor of Kentucky has a secondary occurrence (the truth value of the proposition is not a function of the truth of the existence of the Emperor of Kentucky).
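The quantified truth conditions can be made vivid with a toy finite model; the tiny "domain" and predicates below are invented purely for illustration and are not part of Russell's text.

```python
# Evaluate Russell's analysis of "the F is G" over a finite domain:
# there is exactly one F, and it is G.
def the_F_is_G(domain, F, G):
    """Ex(Fx & Ay(Fy -> y=x) & Gx), evaluated by enumeration."""
    Fs = [x for x in domain if F(x)]
    return len(Fs) == 1 and G(Fs[0])

people = ["Alice", "Bob"]
is_emperor_of_kentucky = lambda x: False          # nobody is; the description fails to denote
is_gray = lambda x: x == "Alice"

p = the_F_is_G(people, is_emperor_of_kentucky, is_gray)
print(p)          # False: "the current Emperor of Kentucky is gray" comes out false
print(not p)      # True: its wide-scope negation is true, so excluded middle is preserved
```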
Indefinite descriptions.
Take as an example of an indefinite description the sentence "some dog is annoying". Russell analyses this phrase into the following component parts (with 'x' and 'y' representing variables): there is an x such that x is a dog, and x is annoying.
Thus, an indefinite description (of the general form 'a D is A') becomes the following existentially quantified phrase in classic symbolic logic (where 'x' and 'y' are variables and 'D' and 'A' are predicates):
formula_1
Informally, this reads as follows: there is something such that it is D and A.
This analysis, according to Russell, solves the second problem noted above as related to indefinite descriptions. Since the phrase "some dog is annoying" is not a referring expression, according to Russell's theory, it need not refer to a mysterious non-existent entity. Furthermore, the law of excluded middle need not be violated (i.e. it remains a law), because "some dog is annoying" comes out true: there is a thing that is both a dog and annoying. Thus, Russell's theory seems to be a better analysis insofar as it solves several problems.
Criticism of Russell's analysis.
P. F. Strawson.
P. F. Strawson argued that Russell had failed to correctly represent what one means when one says a sentence in the form of "the current Emperor of Kentucky is gray." According to Strawson, this sentence is not contradicted by "No one is the current Emperor of Kentucky", for the former sentence contains not an existential assertion, but attempts to "use" "the current Emperor of Kentucky" as a referring (or denoting) phrase. Since there is no current Emperor of Kentucky, the phrase fails to refer to anything, and so the sentence is neither true nor false.
Another kind of counter-example that Strawson and philosophers since have raised concerns that of "incomplete" definite descriptions, that is sentences which have the form of a definite description but which do not uniquely denote an object. Strawson gives the example "the table is covered with books". Under Russell's theory, for such a sentence to be true there would have to be only one table in all of existence. But by uttering a phrase such as "the table is covered with books", the speaker is referring to a particular table: for instance, one that is in the vicinity of the speaker. Two broad responses have been constructed to this failure: a semantic and a pragmatic approach. The semantic approach of philosophers like Stephen Neale suggests that the sentence does in fact have the appropriate meaning as to make it true. Such meaning is added to the sentence by the particular context of the speaker—that, say, the context of standing next to a table "completes" the sentence. Ernie Lepore suggests that this approach treats "definite descriptions as harboring hidden indexical expressions, so that whatever descriptive meaning alone leaves unfinished its context of use can complete".
Pragmatist responses deny this intuition and say instead that the sentence itself, following Russell's analysis, is not true but that the act of uttering the false sentence communicated true information to the listener.
Keith Donnellan.
According to Keith Donnellan, there are two distinct ways we may use a definite description such as "the current Emperor of Kentucky is gray", and thus makes his distinction between the referential and the attributive use of a definite description. He argues that both Russell and Strawson make the mistake of attempting to analyse sentences removed from their context. We can mean different and distinct things while using the same sentence in different situations.
For example, suppose Smith has been brutally murdered. When the person who discovers Smith's body says, "Smith's murderer is insane", we may understand this as the attributive use of the definite description "Smith's murderer", and analyse the sentence according to Russell. This is because the discoverer might equivalently have worded the assertion, "Whoever killed Smith is insane." Now consider another speaker: suppose Jones, though innocent, has been arrested for the murder of Smith, and is now on trial. When a reporter sees Jones talking to himself outside the courtroom, and describes what she sees by saying, "Smith's murderer is insane", we may understand this as the referring use of the definite description, for we may equivalently reword the reporter's assertion thus: "That person who I see talking to himself, and who I believe murdered Smith, is insane." In this case, we should not accept Russell's analysis as correctly representing the reporter's assertion. On Russell's analysis, the sentence is to be understood as an existential quantification of the conjunction of three components: there is an x such that x murdered Smith; x is the only individual who murdered Smith; and x is insane.
If this analysis of the reporter's assertion were correct, then since Jones is innocent, we should take her to mean what the discoverer of Smith's body meant, that whoever murdered Smith is insane. We should then take her observation of Jones talking to himself to be irrelevant to the truth of her assertion. This clearly misses her point.
Thus the same sentence, "Smith's murderer is insane", can be used to mean quite different things in different contexts. There are, accordingly, contexts in which "the current Emperor of Kentucky is not gray" is false because no one is the current Emperor of Kentucky, and contexts in which it is a sentence referring to a person whom the speaker takes to be the current Emperor of Kentucky, true or false according to the hair of the pretender.
Saul Kripke.
In "Reference and Existence", Saul Kripke argues that while Donnellan is correct to point out two uses of the phrase, it does not follow that the phrase is ambiguous between two meanings. For example, when the reporter "finds out" that Jones, the person she has been calling "Smith's murderer" did not murder Smith, she will admit that her use of the name was incorrect. Kripke defends Russell's analysis of definite descriptions, and argues that Donnellan does not adequately distinguish meaning from use, or, speaker's meaning from sentence meaning.
Other Objections.
The theory of descriptions is regarded as a redundant and cumbersome method. The theory claims that ‘The present King of France is bald’ means ‘One and only one entity is the present King of France, and that one is bald’. L. Susan Stebbing suggests that if ‘that’ is used referentially, ‘that one is bald’ is logically equivalent to the entire conjunction. Hence, the conjunction of three propositions is unnecessary as one proposition is already adequate.
P. T. Geach maintains that Russell's theory commits the fallacy of too many questions. Such a sentence as "The present President of Sealand is bald" involves two questions: (1) Is anybody at the moment a President of Sealand? (2) Are there at the moment different people each of whom is a President of Sealand? Unless the answer to (1) is affirmative and the answer to (2) negative, the affirmative answer "yes, the President of Sealand is bald" is not false but indeterminate. In addition, Russell's theory involves unnecessary logical complications.
Furthermore, Honcques Laus contends that Russell's analysis rests on the mistaken assumption that every sentence must be either true or false. Russell's nonacceptance of multiple-valued logic leaves him unable to assign a proper truth value to unverifiable and unfalsifiable sentences and causes the puzzle of the laws of thought. The third truth value, namely 'indeterminate' or 'undefined', should be accepted in the event that both truth and falsity are absent or inapplicable.
William G. Lycan argues that Russell's theory intrinsically applies solely to one extraordinary subclass of singular terms but an adequate solution to the puzzles must be generalized. His theory merely addresses the principal use of the definite article "the", but fails to deal with plural uses or the generic use. Russell also fails to consider anaphoric uses of singular referential expressions.
Arthur Pap argues that the theory of descriptions must be rejected because, according to it, 'the present king of France is bald' and 'the present king of France is not bald' are both false and therefore not contradictories, as otherwise the law of excluded middle would be violated.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\exists x ((Fx \\land \\forall y(Fy \\rightarrow x=y)) \\land Gx) "
},
{
"math_id": 1,
"text": "\\exists x (Dx \\land Ax)"
}
] | https://en.wikipedia.org/wiki?curid=923015 |
9230960 | Minimum efficient scale | In industrial organization, the minimum efficient scale (MES) or efficient scale of production is the lowest point where the plant (or firm) can produce such that its long run average costs are minimized with production remaining effective. It is also the point at which the firm can achieve necessary economies of scale for it to compete effectively within the market.
Measurement of the MES.
Economies of scale refers to the cost advantages that arise from increasing the amount of production. Mathematically, it is a situation in which the firm can double its output for less than double the cost, which brings cost advantages. Usually, economies of scale can be represented in connection with a cost-production elasticity, E.
formula_0
The cost-production elasticity equation can be rewritten to express the relationship between marginal cost and average cost.
formula_1
The minimum efficient scale can be computed by equating average cost (AC) with marginal cost (MC): formula_2 The rationale behind this is that if a firm were to produce a small number of units, its average cost per unit would be high because the bulk of the costs would come from fixed costs. But if the firm produces more units, the average cost incurred per unit will be lower as the fixed costs are spread over a larger number of units; the marginal cost is below the average cost, pulling the latter down. The efficient scale of production is then reached when the average cost is at its minimum and therefore the same as the marginal cost.
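A minimal sketch of this computation, using an assumed quadratic cost function rather than anything from the article, looks as follows.

```python
# Assumed cost function C(q) = F + c*q + d*q**2, all parameters positive.
import sympy as sp

q, F, c, d = sp.symbols('q F c d', positive=True)
C = F + c * q + d * q ** 2      # total cost
AC = C / q                      # average cost
MC = sp.diff(C, q)              # marginal cost

mes = sp.solve(sp.Eq(AC, MC), q)
print(mes)                                   # MES: q = sqrt(F/d)
print(sp.solve(sp.diff(AC, q), q))           # the same q minimises average cost
```

For this cost function the MES is sqrt(F/d), which is also the output level at which average cost is lowest, as the text describes.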
Relationship to market structure.
The concept of minimum efficient scale is useful in determining the likely market structure of a market. For instance, if the minimum efficient scale is small relative to the overall size of the market (demand for the good), there will be a large number of firms. The firms in this market will be likely to behave in a perfectly competitive manner due to the large number of competitors. However, if the minimum efficient scale can only be achieved at significantly high levels of output relative to the overall size of the market, the number of firms will be small and the market is likely to be an oligopoly or a monopoly.
MES in L-shaped cost curve.
Modern cost theory and recent empirical studies suggest that, instead of a U-shaped curve due to the presence of diseconomies of scale, the long-run average cost curve is more likely to be L-shaped. In the L-shaped cost curve, the long-run average cost remains roughly constant as the scale of output increases significantly once the firm reaches the minimum efficient scale (MES).
However, in an L-shaped curve the average cost may decrease further, even though most economies of scale have already been exploited when the firm reaches the MES, because of technical and production economies.
For instance, the firm may obtain further economies of scale from skill improvement by training employees and from decentralization of management. Second, repair costs and scrap rates decrease when the firm reaches a certain size. Third, the firm may improve its vertical integration, producing for itself some of the materials and equipment it needs for its production process at a lower cost instead of buying them from other firms.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Ec = \\frac{\\Delta C/ C}{\\Delta q/ q}."
},
{
"math_id": 1,
"text": "\nEc = \\frac{\\Delta C/ C}{\\Delta q/ q} = \\frac{\\Delta C/\\Delta q}{C/q} = Marginal Cost(MC)/Average Cost(AC)\n"
},
{
"math_id": 2,
"text": "Ec = MC / AC = 1."
}
] | https://en.wikipedia.org/wiki?curid=9230960 |
9232272 | Geometrical properties of polynomial roots | Geometry of the location of polynomial roots
In mathematics, a univariate polynomial of degree n with real or complex coefficients has n complex roots, if counted with their multiplicities. They form a multiset of n points in the complex plane. This article concerns the geometry of these points, that is the information about their localization in the complex plane that can be deduced from the degree and the coefficients of the polynomial.
Some of these geometrical properties are related to a single polynomial, such as upper bounds on the absolute values of the roots, which define a disk containing all roots, or lower bounds on the distance between two roots. Such bounds are widely used for root-finding algorithms for polynomials, either for tuning them, or for computing their computational complexity.
Some other properties are probabilistic, such as the expected number of real roots of a random polynomial of degree n with real coefficients, which is less than formula_0 for n sufficiently large.
In this article, a polynomial that is considered is always denoted
formula_1
where formula_2 are real or complex numbers and formula_3; thus n is the degree of the polynomial.
Continuous dependence on coefficients.
The "n" roots of a polynomial of degree "n" depend continuously on the coefficients. For simple roots, this results immediately from the implicit function theorem. This is true also for multiple roots, but some care is needed for the proof.
A small change of coefficients may induce a dramatic change of the roots, including the change of a real root into a complex root with a rather large imaginary part (see Wilkinson's polynomial). A consequence is that, for classical numeric root-finding algorithms, the problem of approximating the roots given the coefficients can be ill-conditioned for many inputs.
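The classic numerical illustration of this ill-conditioning uses Wilkinson's polynomial; the sketch below is a standard example and is not specific to this article.

```python
# A tiny perturbation of one coefficient of Wilkinson's polynomial
# (x-1)(x-2)...(x-20) moves several roots dramatically, some into the complex plane.
import numpy as np

coeffs = np.poly(np.arange(1, 21))        # coefficients, highest degree first
perturbed = coeffs.copy()
perturbed[1] -= 2.0 ** -23                # tiny change to the x**19 coefficient

print(np.sort(np.roots(coeffs).real)[-3:])        # still very close to 18, 19, 20
print(np.max(np.abs(np.roots(perturbed).imag)))   # imaginary parts of order 1 appear
```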
Conjugation.
The complex conjugate root theorem states that if the coefficients
of a polynomial are real, then the non-real roots appear in pairs of the form ("a" + "ib", "a" – "ib").
It follows that the roots of a polynomial with real coefficients are mirror-symmetric with respect to the real axis.
This can be extended to algebraic conjugation: the roots of a polynomial with rational coefficients are "conjugate" (that is, invariant) under the action of the Galois group of the polynomial. However, this symmetry can rarely be interpreted geometrically.
Bounds on all roots.
Upper bounds on the absolute values of polynomial roots are widely used for root-finding algorithms, either for limiting the regions where roots should be searched, or for the computation of the computational complexity of these algorithms.
Many such bounds have been given, and the sharper one depends generally on the specific sequence of coefficient that are considered. Most bounds are greater or equal to one, and are thus not sharp for a polynomial which have only roots of absolute values lower than one. However, such polynomials are very rare, as shown below.
Any upper bound on the absolute values of roots provides a corresponding lower bound. In fact, if formula_4 and U is an upper bound of the absolute values of the roots of
formula_5
then 1/"U" is a lower bound of the absolute values of the roots of
formula_6
since the roots of either polynomial are the multiplicative inverses of the roots of the other. "Therefore, in the remainder of the article lower bounds will not be given explicitly".
Lagrange's and Cauchy's bounds.
Lagrange and Cauchy were the first to provide upper bounds on all complex roots. Lagrange's bound is
formula_7
and Cauchy's bound is
formula_8
Lagrange's bound is sharper (smaller) than Cauchy's bound only when 1 is larger than the sum of all formula_9 but the largest. This is relatively rare in practice, and explains why Cauchy's bound is more widely used than Lagrange's.
Both bounds result from the Gershgorin circle theorem applied to the companion matrix of the polynomial and its transpose. They can also be proved by elementary methods.
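The two bounds are easy to compare numerically; the polynomial below is an arbitrary illustrative choice.

```python
# Compare Lagrange's and Cauchy's bounds with the actual largest root modulus.
import numpy as np

a = np.array([-6.0, 11.0, -6.0, 1.0])        # a_0..a_3 of (x-1)(x-2)(x-3)
ratios = np.abs(a[:-1] / a[-1])

lagrange = max(1.0, ratios.sum())            # max(1, sum_i |a_i/a_n|)
cauchy = 1.0 + ratios.max()                  # 1 + max_i |a_i/a_n|
largest = np.abs(np.roots(a[::-1])).max()    # np.roots expects highest degree first

print(lagrange, cauchy, largest)             # 23.0, 12.0, 3.0 (approximately)
```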
These bounds are not invariant by scaling. That is, the roots of the polynomial "p"("sx") are the quotient by s of the root of p, and the bounds given for the roots of "p"("sx") are not the quotient by s of the bounds of p. Thus, one may get sharper bounds by minimizing over possible scalings. This gives
formula_10
and
formula_11
for Lagrange's and Cauchy's bounds respectively.
Another bound, originally given by Lagrange, but attributed to Zassenhaus by Donald Knuth, is
formula_12
This bound is invariant by scaling.
Lagrange improved this latter bound into the sum of the two largest values (possibly equal) in the sequence
formula_13
Lagrange also provided the bound
formula_14
where formula_15 denotes the ith "nonzero" coefficient when the terms of the polynomials are sorted by increasing degrees.
Using Hölder's inequality.
Hölder's inequality allows the extension of Lagrange's and Cauchy's bounds to every h-norm. The h-norm of a sequence
formula_16
is
formula_17
for any real number "h" ≥ 1, and
formula_18
If formula_19 with 1 ≤ "h", "k" ≤ ∞, and 1 / ∞ = 0, an upper bound on the absolute values of the roots of p is
formula_20
For "k" = 1 and "k" = ∞, one gets respectively Cauchy's and Lagrange's bounds.
For "h" = "k" = 2, one has the bound
formula_21
This is not only a bound of the absolute values of the roots, but also a bound of the product of their absolute values larger than 1; see , below.
Other bounds.
Many other upper bounds for the magnitudes of all roots have been given.
Fujiwara's bound
formula_22
slightly improves the bound given above by dividing the last argument of the maximum by two.
Kojima's bound is
formula_23
where formula_15 denotes the ith "nonzero" coefficient when the terms of the polynomials are sorted by increasing degrees. If all coefficients are nonzero, Fujiwara's bound is sharper, since each element in Fujiwara's bound is the geometric mean of first elements in Kojima's bound.
Sun and Hsieh obtained another improvement on Cauchy's bound. Assume the polynomial is monic with general term "a"i"x"i. Sun and Hsieh showed that upper bounds 1 + "d"1 and 1 + "d"2 could be obtained from the following equations.
formula_24
"d"2 is the positive root of the cubic equation
formula_25
They also noted that "d"2 ≤ "d"1.
Landau's inequality.
The previous bounds are upper bounds for each root separately. Landau's inequality provides an upper bound for the absolute values of the product of the roots that have an absolute value greater than one. This inequality, discovered in 1905 by Edmund Landau, has been forgotten and rediscovered at least three times during the 20th century.
This bound of the product of roots is not much greater than the best preceding bounds of each root separately.
Let formula_26 be the n roots of the polynomial p. If
formula_27
is the Mahler measure of p,
then
formula_28
Surprisingly, this bound on the product of the absolute values larger than 1 of the roots is not much larger than the best bounds given above for a single root. It is even exactly equal to one of the bounds obtained using Hölder's inequality.
This bound is also useful to bound the coefficients of a divisor of a polynomial with integer coefficients: if
formula_29
is a divisor of "p", then
formula_30
and, by Vieta's formulas,
formula_31
for "i" = 0, ..., "m", where formula_32 is a binomial coefficient. Thus
formula_33
and
formula_34
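A quick numerical check of Landau's inequality, on an arbitrary illustrative polynomial, can be written as follows.

```python
# M(p) = |a_n| * prod_j max(1, |z_j|) is at most the 2-norm of the coefficients.
import numpy as np

a = np.array([-6.0, 11.0, -6.0, 1.0])        # a_0..a_3 of (x-1)(x-2)(x-3), roots 1, 2, 3
roots = np.roots(a[::-1])

mahler = abs(a[-1]) * np.prod(np.maximum(1.0, np.abs(roots)))
coeff_norm = np.linalg.norm(a)

print(mahler, coeff_norm)                    # 6.0 <= about 13.93
```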
Discs containing some roots.
From Rouché theorem.
Rouché's theorem allows defining discs centered at zero and containing a given number of roots. More precisely, if there is a positive real number "R" and an integer 0 ≤ "k" ≤ "n" such that
formula_35
then there are exactly "k" roots, counted with multiplicity, of absolute value less than "R".
The above result may be applied if the polynomial
formula_36
takes a negative value for some positive real value of x.
In the remainder of this section, suppose that "a""0" ≠ 0. If it is not the case, zero is a root, and the localization of the other roots may be studied by dividing the polynomial by a power of the indeterminate, getting a polynomial with a nonzero constant term.
For "k" = 0 and "k" = "n", Descartes' rule of signs shows that the polynomial has exactly one positive real root. If formula_37 and formula_38 are these roots, the above result shows that all the roots satisfy
formula_39
As these inequalities apply also to formula_40 and formula_41 these bounds are optimal for polynomials with a given sequence of the absolute values of their coefficients. They are thus sharper than all bounds given in the preceding sections.
For 0 < "k" < "n", Descartes' rule of signs implies that formula_42 either has two positive real roots that are not multiple, or is nonnegative for every positive value of x. So, the above result may be applied only in the first case. If formula_43 are these two roots, the above result implies that
formula_44
for k roots of p, and that
formula_45
for the "n" – "k" other roots.
Instead of explicitly computing formula_46 and formula_47 it is generally sufficient to compute a value formula_48 such that formula_49 (necessarily formula_50). These formula_48 have the property of separating roots in terms of their absolute values: if, for "h" < "k", both formula_51 and formula_48 exist, there are exactly "k" – "h" roots z such that formula_52
For computing formula_53 one can use the fact that formula_54 is a convex function (its second derivative is positive). Thus formula_48 exists if and only if formula_54 is negative at its unique minimum. For computing this minimum, one can use any optimization method, or, alternatively, Newton's method for computing the unique positive zero of the derivative of formula_54 (it converges rapidly, as the derivative is a monotonic function).
One can increase the number of existing formula_48's by applying the root squaring operation of the Dandelin–Graeffe iteration. If the roots have distinct absolute values, one can eventually completely separate the roots in terms of their absolute values, that is, compute "n" + 1 positive numbers formula_55 such that there is exactly one root with an absolute value in the open interval formula_56 for "k" = 1, ..., "n".
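The counting criterion can be tested numerically; the polynomial used in the sketch below is an illustrative choice whose roots split into a small group and a large group.

```python
# Whenever h_k(R) < 0 for some R > 0, exactly k roots satisfy |z| < R.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.0, 0.0, -30.0, 0.0, 1.0])      # a_0..a_4 of p(x) = x**4 - 30x**2 + 1
n = len(a) - 1
roots = np.roots(a[::-1])

def h(k, x):
    """h_k(x): absolute values of all coefficients, except a_k which enters with a minus."""
    total = 0.0
    for i in range(n + 1):
        sign = -1.0 if i == k else 1.0
        total += sign * abs(a[i]) * x ** i
    return total

for k in range(1, n):
    res = minimize_scalar(lambda x, k=k: h(k, x) / x ** k,
                          bounds=(1e-6, 100.0), method="bounded")
    if res.fun < 0:                              # an R_k with h_k(R_k) < 0 exists
        R = res.x
        inside = int(np.sum(np.abs(roots) < R))
        print(f"k = {k}: h_k({R:.3f}) < 0 and exactly {inside} roots have |z| < {R:.3f}")
```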
From Gershgorin circle theorem.
The Gershgorin circle theorem applies the companion matrix of the polynomial on a basis related to Lagrange interpolation to define discs centered at the interpolation points, each containing a root of the polynomial; see for details.
If the interpolation points are close to the roots of the polynomial, the radii of the discs are small, and this is a key ingredient of the Durand–Kerner method for computing polynomial roots.
Bounds of real roots.
For polynomials with real coefficients, it is often useful to bound only the real roots. It suffices to bound the positive roots, as the negative roots of "p"("x") are the positive roots of "p"(–"x").
Clearly, every bound of all roots applies also for real roots. But in some contexts, tighter bounds of real roots are useful. For example, the efficiency of the method of continued fractions for real-root isolation strongly depends on tightness of a bound of positive roots. This has led to establishing new bounds that are tighter than the general bounds of all roots. These bounds are generally expressed not only in terms of the absolute values of the coefficients, but also in terms of their signs.
Other bounds apply only to polynomials whose all roots are reals (see below).
Bounds of positive real roots.
To give a bound of the positive roots, one can assume formula_57 without loss of generality, as changing the signs of all coefficients does not change the roots.
Every upper bound of the positive roots of
formula_58
is also a bound of the real zeros of
formula_59.
In fact, if B is such a bound, for all "x" > "B", one has "p"("x") ≥ "q"("x") > 0.
Applied to Cauchy's bound, this gives the upper bound
formula_60
for the real roots of a polynomial with real coefficients. If this bound is not greater than 1, this means that all nonzero coefficients have the same sign, and that there is no positive root.
Similarly, another upper bound of the positive roots is
formula_61
If all nonzero coefficients have the same sign, there is no positive root, and the maximum must be zero.
Other bounds have been recently developed, mainly for the method of continued fractions for real-root isolation.
Polynomials whose roots are all real.
If all roots of a polynomial are real, Laguerre proved the following lower and upper bounds of the roots, by using what is now called Samuelson's inequality.
Let formula_62 be a polynomial with all real roots. Then its roots are located in the interval with endpoints
formula_63
For example, the roots of the polynomial formula_64 satisfy
formula_65
Root separation.
The root separation of a polynomial is the minimal distance between two roots, that is the minimum of the absolute values of the difference of two roots:
formula_66
The root separation is a fundamental parameter of the computational complexity of root-finding algorithms for polynomials. In fact, the root separation determines the precision of number representation that is needed for being certain of distinguishing distinct roots. Also, for real-root isolation, it allows bounding the number of interval divisions that are needed for isolating all roots.
For polynomials with real or complex coefficients, it is not possible to express a lower bound of the root separation in terms of the degree and the absolute values of the coefficients only, because a small change of a single coefficient transforms a polynomial with multiple roots into a square-free polynomial with a small root separation and essentially the same absolute values of the coefficients. However, involving the discriminant of the polynomial allows a lower bound.
For square-free polynomials with integer coefficients, the discriminant is a nonzero integer, and thus has an absolute value that is not smaller than 1. This allows lower bounds for root separation that are independent of the discriminant.
Mignotte's separation bound is
formula_67
where formula_68 is the discriminant, and formula_69
For a square free polynomial with integer coefficients, this implies
formula_70
where s is the bit size of p, that is the sum of the bitsize of its coefficients.
Gauss–Lucas theorem.
The Gauss–Lucas theorem states that the convex hull of the roots of a polynomial contains the roots of the derivative of the polynomial.
A sometimes useful corollary is that, if all roots of a polynomial have positive real part, then so do the roots of all derivatives of the polynomial.
A related result is Bernstein's inequality. It states that for a polynomial "P" of degree "n" with derivative "P′" we have
formula_71
Statistical distribution of the roots.
If the coefficients "a""i" of a random polynomial are independently and identically distributed with a mean of zero, most complex roots are on the unit circle or close to it. In particular, the real roots are mostly located near ±1, and, moreover, their expected number is, for a large degree, less than the natural logarithm of the degree.
If the coefficients are Gaussian distributed with a mean of zero and variance of "σ" then the mean density of real roots is given by the Kac formula
formula_72
where
formula_73
When the coefficients are Gaussian distributed with a non-zero mean and variance of "σ", a similar but more complex formula is known.
Real roots.
For large "n", the mean density of real roots near x is asymptotically
formula_74
if formula_75
and
formula_76
It follows that the expected number of real roots is, using big O notation,
formula_77
where "C" is a constant approximately equal to .
In other words, "the expected number of real roots of a random polynomial of high degree is lower than the natural logarithm of the degree".
Kac, Erdős and others have shown that these results are insensitive to the distribution of the coefficients, if they are independent and have the same distribution with mean zero. However, if the variance of the ith coefficient is equal to formula_78, the expected number of real roots is formula_79
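The logarithmic growth of the expected number of real roots can be checked by simulation; the degree and sample size below are arbitrary illustrative choices.

```python
# Monte Carlo estimate of the expected number of real roots for i.i.d. standard
# Gaussian coefficients, compared with (2/pi) ln n.
import numpy as np

rng = np.random.default_rng(1)
n, trials = 50, 2000
counts = []
for _ in range(trials):
    c = rng.standard_normal(n + 1)               # coefficients a_0, ..., a_n
    r = np.roots(c[::-1])
    counts.append(int(np.sum(np.abs(r.imag) < 1e-8)))
print(np.mean(counts))                           # about 3.1 for n = 50
print(2 / np.pi * np.log(n))                     # about 2.49; the gap is roughly the constant C
```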
Geometry of multiple roots.
A polynomial formula_80 can be written in the form of
formula_81
with distinct roots formula_82 and corresponding multiplicities formula_83. A root formula_84 is a simple root if formula_85 or a multiple root if formula_86. Simple roots are Lipschitz continuous with respect to coefficients but multiple roots are not. In other words, simple roots have bounded sensitivities but multiple roots are infinitely sensitive if the coefficients are perturbed arbitrarily. As a result, most root-finding algorithms suffer substantial loss of accuracy on multiple roots in numerical computation.
In 1972, William Kahan proved that there is an inherent stability of multiple roots. Kahan discovered that polynomials with a particular set of multiplicities form what he called a "pejorative manifold" and proved that a multiple root is Lipschitz continuous if the perturbation maintains its multiplicity.
This geometric property of multiple roots is crucial in numerical computation of multiple roots.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " 1+\\frac 2\\pi \\ln (n) "
},
{
"math_id": 1,
"text": "\n p(x)=a_0 + a_1 x + \\cdots + a_n x^n,\n"
},
{
"math_id": 2,
"text": "a_0, \\dots, a_n"
},
{
"math_id": 3,
"text": "a_n \\neq 0"
},
{
"math_id": 4,
"text": "a_n\\ne 0,"
},
{
"math_id": 5,
"text": "\na_0 + a_1 x + \\cdots + a_n x^n,\n"
},
{
"math_id": 6,
"text": "\na_n + a_{n-1} x + \\cdots + a_0 x^n,\n"
},
{
"math_id": 7,
"text": "\\max\\left\\{1,\\sum_{i=0}^{n-1} \\left|\\frac{a_i}{a_n}\\right|\\right\\},"
},
{
"math_id": 8,
"text": "1+\\max\\left\\{ \\left|\\frac{a_{n-1}}{a_n}\\right|, \\left|\\frac{a_{n-2}}{a_n}\\right|, \\ldots, \\left|\\frac{a_0}{a_n}\\right|\\right\\}"
},
{
"math_id": 9,
"text": "\\left|\\frac{a_i}{a_n}\\right|"
},
{
"math_id": 10,
"text": "\\min_{s\\in \\mathbb R_+}\\left(\\max\\left\\{ s,\\sum_{i=0}^{n-1} \\left|\\frac{a_i}{a_n}\\right|s^{i-n+1}\\right\\}\\right),"
},
{
"math_id": 11,
"text": "\\min_{s\\in \\mathbb R_+}\\left(s+\\max_{0\\le i\\le n-1}\\left(\\left|\\frac{a_i}{a_n}\\right| s^{i-n+1}\\right)\\right),"
},
{
"math_id": 12,
"text": "2\\max\\left\\{ \\left|\\frac{a_{n-1}}{a_n}\\right|, \\left|\\frac{a_{n-2}}{a_{n}}\\right|^{1/2}, \\ldots, \\left|\\frac{a_0}{a_n}\\right|^{1/n}\\right\\}."
},
{
"math_id": 13,
"text": "\\left[ \\left|\\frac{a_{n-1}}{a_n}\\right|, \\left|\\frac{a_{n-2}}{a_{n}}\\right|^{1/2}, \\ldots, \\left|\\frac{a_0}{a_n}\\right|^{1/n}\\right]."
},
{
"math_id": 14,
"text": "\\sum_i \\left|\\frac{a_i}{a_{i+1}}\\right|,"
},
{
"math_id": 15,
"text": "a_i"
},
{
"math_id": 16,
"text": "s=(a_0, \\ldots, a_n)"
},
{
"math_id": 17,
"text": "\\|s\\|_h = \\left(\\sum_{i=0}^n |a_i|^h\\right)^{1/h},"
},
{
"math_id": 18,
"text": "\\|s\\|_\\infty = \\textstyle{\\max_{i=0}^n} |a_i|."
},
{
"math_id": 19,
"text": "\\frac 1h+ \\frac 1k=1,"
},
{
"math_id": 20,
"text": "\\frac 1{|a_n|}\\left\\|(|a_n|, \\left\\|(|a_{n-1}|, \\ldots, |a_0| \\right)\\|_h\\right)\\|_k."
},
{
"math_id": 21,
"text": "\\frac 1{|a_n|}\\sqrt{|a_n|^2+|a_{n-1}|^2+ \\cdots +|a_0|^2 }."
},
{
"math_id": 22,
"text": "2\\, \\max \\left\\{ \\left|\\frac{a_{n-1}}{a_n}\\right|, \\left|\\frac{a_{n-2}}{a_n}\\right|^{\\frac{1}{2}}, \\ldots, \\left|\\frac{a_1}{a_n}\\right|^\\frac{1}{n-1}, \\left|\\frac{a_0}{2a_n}\\right|^\\frac 1 n\\right\\}, "
},
{
"math_id": 23,
"text": "2\\,\\max \\left\\{ \\left|\\frac{a_{n-1}}{a_n}\\right|,\\left|\\frac{a_{n-2}}{a_{n-1}}\\right|, \\ldots, \\left|\\frac{a_0}{2a_1}\\right|\\right\\},"
},
{
"math_id": 24,
"text": "d_1 = \\tfrac{1}{2} \\left((| a_{n-1}| - 1) + \\sqrt{(|a_{n-1}| - 1 )^2 + 4a } \\right), \\qquad a = \\max \\{ |a_i | \\}."
},
{
"math_id": 25,
"text": "Q(x) = x^3 + (2 - |a_{n-1}|) x^2 + (1 - |a_{n-1}| - |a_{n-2}| ) x - a, \\qquad a = \\max \\{ |a_i | \\}"
},
{
"math_id": 26,
"text": "z_1, \\ldots, z_n"
},
{
"math_id": 27,
"text": "M(p)=|a_n|\\prod_{j=1}^n \\max(1,|z_j|)"
},
{
"math_id": 28,
"text": "M(p)\\le \\sqrt{\\sum_{k=0}^n |a_k|^2}."
},
{
"math_id": 29,
"text": "q= \\sum_{k=0}^m b_k x^k"
},
{
"math_id": 30,
"text": "|b_m|\\le|a_n|,"
},
{
"math_id": 31,
"text": "\\frac{|b_i|}{|b_m|}\\le \\binom mi \\frac{M(p)}{|a_n|},"
},
{
"math_id": 32,
"text": "\\binom mi"
},
{
"math_id": 33,
"text": "|b_i|\\le \\binom mi M(p)\\le \\binom mi \\sqrt{\\sum_{k=0}^n |a_k|^2},"
},
{
"math_id": 34,
"text": "\\sum_{i=0}^m |b_i| \\le 2^m M(p) \\le 2^m \\sqrt{\\sum_{k=0}^n |a_k|^2}."
},
{
"math_id": 35,
"text": "|a_k| R^k > |a_0|+\\cdots+|a_{k-1}| R^{k-1}+|a_{k+1}| R^{k+1}+\\cdots+|a_n| R^n,"
},
{
"math_id": 36,
"text": "h_k(x)=|a_0| +\\cdots+|a_{k-1}| x^{k-1}-|a_k|x^k+|a_{k+1}| x^{k+1}+\\cdots+|a_n| x^n."
},
{
"math_id": 37,
"text": "R_0"
},
{
"math_id": 38,
"text": "R_n"
},
{
"math_id": 39,
"text": "R_0\\le |z| \\le R_1."
},
{
"math_id": 40,
"text": "h_0"
},
{
"math_id": 41,
"text": "h_n,"
},
{
"math_id": 42,
"text": "h_k(x)"
},
{
"math_id": 43,
"text": "R_{k,1}<R_{k,2}"
},
{
"math_id": 44,
"text": "|z| \\le R_{k,1}"
},
{
"math_id": 45,
"text": "|z| \\ge R_{k,2}"
},
{
"math_id": 46,
"text": "R_{k,1}"
},
{
"math_id": 47,
"text": "R_{k,2},"
},
{
"math_id": 48,
"text": "R_k"
},
{
"math_id": 49,
"text": "h_k(R_k)<0"
},
{
"math_id": 50,
"text": "R_{k,1}<R_k<R_{k,2}"
},
{
"math_id": 51,
"text": "R_h"
},
{
"math_id": 52,
"text": "R_h < |z| < R_k."
},
{
"math_id": 53,
"text": "R_k,"
},
{
"math_id": 54,
"text": "\\frac{h(x)}{x^k}"
},
{
"math_id": 55,
"text": "R_0 < R_1 <\\dots <R_n"
},
{
"math_id": 56,
"text": "(R_{k-1},R_k),"
},
{
"math_id": 57,
"text": "a_n >0"
},
{
"math_id": 58,
"text": "q(x)=a_nx^n + \\sum_{i=0}^{n-1} \\min(0,a_i)x^i"
},
{
"math_id": 59,
"text": "p(x)=\\sum_{i=0}^n a_ix^i"
},
{
"math_id": 60,
"text": "1+{\\textstyle\\max_{i=0}^{n-1}} \\frac{-a_i}{a_n}"
},
{
"math_id": 61,
"text": "2\\,{\\max_{a_ia_n<0}}\\left(\\frac{-a_i}{a_n}\\right)^{\\frac 1{n-i}}."
},
{
"math_id": 62,
"text": "\\sum_{k=0}^n a_k x^k"
},
{
"math_id": 63,
"text": "-\\frac{a_{n-1}}{na_n} \\pm \\frac{n-1}{na_n}\\sqrt{a^2_{n-1} - \\frac{2n}{n-1}a_n a_{n-2}}."
},
{
"math_id": 64,
"text": "x^4+5x^3+5x^2-5x-6=(x+3)(x+2)(x+1)(x-1)"
},
{
"math_id": 65,
"text": "-3.8118<-\\frac{5}{4} - \\frac{3}{4}\\sqrt{\\frac{35}{3}}\\le x\\le -\\frac{5}{4} + \\frac{3}{4}\\sqrt{\\frac{35}{3}}<1.3118."
},
{
"math_id": 66,
"text": "\\operatorname{sep}(p) = \\min\\{|\\alpha-\\beta|\\;;\\; \\alpha \\neq \\beta \\text{ and } p(\\alpha)=p(\\beta)=0\\}"
},
{
"math_id": 67,
"text": "\\operatorname{sep}(p) > \\frac {\\sqrt{3|\\Delta(p)|}}{n^{(n+1)/2}(\\|p\\|_2)^{n-1}},"
},
{
"math_id": 68,
"text": "\\Delta(p)"
},
{
"math_id": 69,
"text": "\\textstyle\\|p\\|_2=\\sqrt{a_0^2+a_1^2+\\dots+a_n^2}."
},
{
"math_id": 70,
"text": "\\operatorname{sep}(p) > \\frac {\\sqrt 3}{n^{n/2+1}(\\|p\\|_2)^{n-1}}> \\frac 1{2^{2s^2}},"
},
{
"math_id": 71,
"text": "\\max_{|z| \\leq 1} \\big|P'(z)\\big| \\le n \\max_{|z| \\leq 1} \\big|P(z)\\big|."
},
{
"math_id": 72,
"text": " m( x ) = \\frac { \\sqrt{ A( x ) C( x ) - B( x )^2 }} {\\pi A( x )} "
},
{
"math_id": 73,
"text": " \\begin{align}\nA(x) &= \\sigma \\sum_{i=0}^{n-1} x^{2i} = \\sigma \\frac{x^{2n} - 1}{x^2-1}, \\\\\nB(x) &= \\frac 1 2 \\frac{ d } { dx } A( x ), \\\\\nC(x) &= \\frac 1 4 \\frac{d^2} {dx^2} A(x) + \\frac 1 { 4x } \\frac d {dx} A(x).\n\\end{align} "
},
{
"math_id": 74,
"text": " m( x ) = \\frac{ 1 } { \\pi | 1 - x^2 | } "
},
{
"math_id": 75,
"text": "x^2-1\\ne 0,"
},
{
"math_id": 76,
"text": " m(\\pm 1) = \\frac 1 \\pi \\sqrt {\\frac{n^2 - 1}{12}} "
},
{
"math_id": 77,
"text": " N_n = \\frac 2 \\pi \\ln n + C + \\frac 2 {\\pi n} +O( n^{ -2 } ) "
},
{
"math_id": 78,
"text": "\\binom ni,"
},
{
"math_id": 79,
"text": "\\sqrt n."
},
{
"math_id": 80,
"text": "p"
},
{
"math_id": 81,
"text": "p(x) = a (x-z_1)^{m_1} \\cdots (x-z_k)^{m_k} "
},
{
"math_id": 82,
"text": "z_1,\\ldots,z_k"
},
{
"math_id": 83,
"text": "m_1,\\ldots,m_k"
},
{
"math_id": 84,
"text": "z_j"
},
{
"math_id": 85,
"text": "m_j=1"
},
{
"math_id": 86,
"text": "m_j\\ge 2"
}
] | https://en.wikipedia.org/wiki?curid=9232272 |
92340 | Multistatic radar | A multistatic radar system contains multiple spatially diverse monostatic radar or bistatic radar components with a shared area of coverage. An important distinction of systems based on these individual radar geometries is the added requirement for some level of data fusion to take place between component parts.
The spatial diversity afforded by multistatic systems allows different aspects of a target to be viewed simultaneously. The potential for information gain can give rise to a number of advantages over conventional systems.
Multistatic radar is often referred to as "multisite" or "netted" radar and is comparable with the idea of macrodiversity in communications. A further subset of multistatic radar with roots in communications is that of MIMO radar.
Characteristics.
Since multistatic radar may contain both monostatic and bistatic components, the advantages and disadvantages of each radar arrangement will also apply to multistatic systems. A system with formula_0 transmitters and formula_1 receivers will contain formula_2 of these component pairs, each of which may involve a differing bistatic angle and target radar cross section. The following characteristics are unique to the multistatic arrangement, where multiple transmitter-receiver pairs are present:
Detection.
Increased coverage in multistatic radar may be obtained by spreading the radar geometry throughout the surveillance area, so that targets are more likely to be physically close to a transmitter–receiver pair and thus to return a higher signal-to-noise ratio.
Spatial diversity may also be beneficial when combining information from multiple transmitter-receiver pairs which have a shared coverage. By weighting and integrating individual returns (such as through likelihood ratio based detectors), detection can be optimised to place more emphasis on stronger returns obtained from certain monostatic or bistatic radar cross section values, or from favourable propagation paths, when making a decision as to whether a target is present. This is analogous to the use of antenna diversity in an attempt to improve links in wireless communications.
This is useful where multipath or shadowing effects might otherwise lead to the potential for poor detection performance if only a single radar is used. One notable area of interest is in sea clutter, and how diversity in reflectivity and Doppler shift might prove beneficial for detection in a maritime environment.
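As a toy illustration of such weighting (the per-pair signal-to-noise ratios and the Gaussian measurement model below are invented for the example), the optimal combination of independent Gaussian returns simply sums the per-pair log-likelihood ratios, so that pairs with favourable geometry contribute more to the decision than weak ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative per-pair SNRs (power ratios) for three transmitter-receiver pairs.
snr = np.array([4.0, 3.0, 2.0])
amp = np.sqrt(snr)                       # signal amplitude in unit-variance noise

def fused_statistic(z):
    """Sum of per-pair Gaussian log-likelihood ratios; stronger pairs carry more weight."""
    return np.sum(amp * z - 0.5 * snr)

trials = 20000
h0 = np.array([fused_statistic(rng.standard_normal(3)) for _ in range(trials)])
h1 = np.array([fused_statistic(amp + rng.standard_normal(3)) for _ in range(trials)])
thr = np.quantile(h0, 0.999)                         # roughly 1e-3 false-alarm probability
print("fused detection probability:     ", np.mean(h1 > thr))

# Compare with using only the strongest single pair.
thr1 = np.quantile(rng.standard_normal(trials), 0.999)
print("best single-pair detection prob.:", np.mean(amp[0] + rng.standard_normal(trials) > thr1))
# The fused detector typically detects noticeably more often than the best single pair.
```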
Many stealth vehicles are designed to reflect radar energy away from expected radar sources in order to present as small a return to a monostatic system as possible. This leads to more energy being radiated in directions that are only available to multistatic receivers.
Resolution.
Resolution may benefit from spatial diversity, due to the availability of multiple spatially diverse down-range profiles. Conventional radar typically has a much poorer cross-range resolution compared to down-range resolution, thus there is potential for gains through the intersection of constant bistatic range ellipses.
This involves a process of associating individual target detections to form a joint detection. Due to the un-cooperative nature of the targets, there is potential, if several targets are present, for ambiguities or "ghost targets" to be formed. These can be reduced through an increase in information (e.g. use of Doppler information, increase in down-range resolution or addition of further spatially diverse radars to the multistatic system).
Classification.
Target features such as variation in the radar cross section or jet engine modulation may be observed by transmitter-receiver pairs within a multistatic system. The gain in information through observation of different aspects of a target may improve classification of the target. Most existing air defence systems utilize a series of networked monostatic radars, without making use of bistatic pairs within the system.
Robustness.
Increased survivability and "graceful degradation" may result from the spatially distributed nature of multistatic radar. A fault in either transmitter or receiver for a monostatic or bistatic system will lead to a complete loss of radar functionality. From a tactical point of view, a single large transmitter will be easier to locate and destroy compared to several distributed transmitters. Likewise, it may be increasingly difficult to successfully focus jamming on multiple receivers compared to a single site.
Spatio-temporal synchronization.
To deduce the range or velocity of a target relative to a multistatic system, knowledge of the spatial location of transmitters and receivers is required. A shared time and frequency standard also must be maintained if the receiver has no direct line of sight of the transmitter. As in bistatic radar, without this knowledge there would be inaccuracy in the information reported by the radar. For systems exploiting data fusion before detection, there is a need for accurate time and or phase synchronisation of the different receivers. For plot level fusion, time tagging using a standard GPS clock (or similar) is more than sufficient.
Communications bandwidth.
The increase in information from the multiple monostatic or bistatic pairs in the multistatic system must be combined for benefits to be realised. This fusion process may range from the simple case of selecting plots from the receiver closest to a target (ignoring others), increasing in complexity to effectively beamforming through radio signal fusion. Dependent on this, a wide communications bandwidth may be required to pass the relevant data to a point where it can be fused.
Processing requirements.
Data fusion will always mean an increase in processing compared to a single radar. However it may be particularly computationally expensive if significant processing is involved in data fusion, such as attempts to increase resolution.
Examples of multistatic radar systems.
Several passive radar systems make use of multiple spatially diverse transmitters and hence may be considered to operate multistatically.
References.
| [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "NM"
}
] | https://en.wikipedia.org/wiki?curid=92340 |
923556 | Classifying space | Quotient of a weakly contractible space by a free action
In mathematics, specifically in homotopy theory, a classifying space "BG" of a topological group "G" is the quotient of a weakly contractible space "EG" (i.e., a topological space all of whose homotopy groups are trivial) by a proper free action of "G". It has the property that any "G" principal bundle over a paracompact manifold is isomorphic to a pullback of the principal bundle formula_0. As explained later, this means that classifying spaces represent a set-valued functor on the homotopy category of topological spaces. The term classifying space can also be used for spaces that represent a set-valued functor on the category of topological spaces, such as Sierpiński space. This notion is generalized by the notion of classifying topos. However, the rest of this article discusses the more commonly used notion of classifying space up to homotopy.
For a discrete group "G", "BG" is, roughly speaking, a path-connected topological space "X" such that the fundamental group of "X" is isomorphic to "G" and the higher homotopy groups of "X" are trivial, that is, "BG" is an Eilenberg–MacLane space, or a "K"("G", 1).
Motivation.
An example of a classifying space for the infinite cyclic group "G" is the circle as "X". When "G" is a discrete group, another way to specify the condition on "X" is that the universal cover "Y" of "X" is contractible. In that case the projection map
formula_1
becomes a fiber bundle with structure group "G", in fact a principal bundle for "G". The interest in the classifying space concept really arises from the fact that in this case "Y" has a universal property with respect to principal "G"-bundles, in the homotopy category. This is actually more basic than the condition that the higher homotopy groups vanish: the fundamental idea is, given "G", to find such a contractible space "Y" on which "G" acts "freely". (The weak equivalence idea of homotopy theory relates the two versions.) In the case of the circle example, what is being said is that we remark that an infinite cyclic group "C" acts freely on the real line "R", which is contractible. Taking "X" as the quotient space circle, we can regard the projection π from "R" = "Y" to "X" as a helix in geometrical terms, undergoing projection from three dimensions to the plane. What is being claimed is that π has a universal property amongst principal "C"-bundles; that any principal "C"-bundle in a definite way 'comes from' π.
Formalism.
A more formal statement takes into account that "G" may be a topological group (not simply a "discrete group"), and that group actions of "G" are taken to be continuous; in the absence of continuous actions the classifying space concept can be dealt with, in homotopy terms, via the Eilenberg–MacLane space construction. In homotopy theory the definition of a topological space "BG", the classifying space for principal "G"-bundles, is given, together with the space "EG" which is the total space of the universal bundle over "BG". That is, what is provided is in fact a continuous mapping
formula_2
Assume that the homotopy category of CW complexes is the underlying category, from now on. The "classifying" property required of "BG" in fact relates to π. We must be able to say that given any principal "G"-bundle
formula_3
over a space "Z", there is a classifying map φ from "Z" to "BG", such that formula_4 is the pullback of π along φ. In less abstract terms, the construction of formula_4 by 'twisting' should be reducible via φ to the twisting already expressed by the construction of π.
For this to be a useful concept, there evidently must be some reason to believe such spaces "BG" exist. The early work on classifying spaces introduced constructions (for example, the bar construction), that gave concrete descriptions of "BG" as a simplicial complex for an arbitrary discrete group. Such constructions make evident the connection with group cohomology.
Specifically, let "EG" be the weak simplicial complex whose "n-" simplices are the ordered ("n"+1)-tuples formula_5 of elements of "G". Such an "n-"simplex attaches to the (n−1) simplices formula_6 in the same way a standard simplex attaches to its faces, where formula_7 means this vertex is deleted. The complex EG is contractible. The group "G" acts on "EG" by left multiplication,
formula_8
and only the identity "e" takes any simplex to itself. Thus the action of "G" on "EG" is a covering space action and the quotient map formula_9 is the universal cover of the orbit space formula_10, and "BG" is a formula_11.
In abstract terms (which are not those originally used around 1950 when the idea was first introduced) this is a question of whether a certain functor is representable: the contravariant functor from the homotopy category to the category of sets, defined by
"h"("Z") = set of isomorphism classes of principal "G"-bundles on "Z."
The abstract conditions being known for this (Brown's representability theorem) ensure that the result, as an existence theorem, is affirmative and not too difficult.
Applications.
This still leaves the question of doing effective calculations with "BG"; for example, the theory of characteristic classes is essentially the same as computing the cohomology groups of "BG", at least within the restrictive terms of homotopy theory, for interesting groups "G" such as Lie groups (H. Cartan's theorem). As was shown by the Bott periodicity theorem, the homotopy groups of "BG" are also of fundamental interest.
An example of a classifying space is that when "G" is cyclic of order two; then "BG" is real projective space of infinite dimension, corresponding to the observation that "EG" can be taken as the contractible space resulting from removing the origin in an infinite-dimensional Hilbert space, with "G" acting via "v" going to −"v", and allowing for homotopy equivalence in choosing "BG". This example shows that classifying spaces may be complicated.
In relation with differential geometry (Chern–Weil theory) and the theory of Grassmannians, a much more hands-on approach to the theory is possible for cases such as the unitary groups that are of greatest interest. The construction of the Thom complex "MG" showed that the spaces "BG" were also implicated in cobordism theory, so that they assumed a central place in geometric considerations coming out of algebraic topology. Since group cohomology can (in many cases) be defined by the use of classifying spaces, they can also be seen as foundational in much homological algebra.
Generalizations include those for classifying foliations, and the classifying toposes for logical theories of the predicate calculus in intuitionistic logic that take the place of a 'space of models'. | [
{
"math_id": 0,
"text": "EG \\to BG"
},
{
"math_id": 1,
"text": "\\pi\\colon Y\\longrightarrow X\\ "
},
{
"math_id": 2,
"text": "\\pi\\colon EG\\longrightarrow BG. "
},
{
"math_id": 3,
"text": "\\gamma\\colon Y\\longrightarrow Z\\ "
},
{
"math_id": 4,
"text": "\\gamma"
},
{
"math_id": 5,
"text": "[g_0,\\ldots,g_n]"
},
{
"math_id": 6,
"text": "[g_0,\\ldots,\\hat g_i,\\ldots,g_n]"
},
{
"math_id": 7,
"text": "\\hat g_i"
},
{
"math_id": 8,
"text": "g\\cdot[g_0,\\ldots,g_n ]=[gg_0,\\ldots,gg_n],"
},
{
"math_id": 9,
"text": "EG\\to EG/G"
},
{
"math_id": 10,
"text": "BG = EG/G"
},
{
"math_id": 11,
"text": "K(G,1)"
},
{
"math_id": 12,
"text": "S^1"
},
{
"math_id": 13,
"text": "\\Z."
},
{
"math_id": 14,
"text": "E\\Z =\\R. "
},
{
"math_id": 15,
"text": "\\mathbb T^n"
},
{
"math_id": 16,
"text": "\\Z^n"
},
{
"math_id": 17,
"text": "E\\Z^n=\\R^n."
},
{
"math_id": 18,
"text": "\\pi_1(S)."
},
{
"math_id": 19,
"text": "\\pi_1(M)"
},
{
"math_id": 20,
"text": "\\mathbb{RP}^\\infty"
},
{
"math_id": 21,
"text": "\\Z_2 = \\Z /2\\Z."
},
{
"math_id": 22,
"text": "E\\Z_2 = S^\\infty"
},
{
"math_id": 23,
"text": "S^n."
},
{
"math_id": 24,
"text": "B\\Z_n = S^\\infty / \\Z_n"
},
{
"math_id": 25,
"text": "\\Z_n."
},
{
"math_id": 26,
"text": "S^\\infty"
},
{
"math_id": 27,
"text": "\\Complex^\\infty"
},
{
"math_id": 28,
"text": "\\operatorname{UConf}_n(\\R^2)"
},
{
"math_id": 29,
"text": "B_n"
},
{
"math_id": 30,
"text": "\\operatorname{Conf}_n(\\R^2)"
},
{
"math_id": 31,
"text": "P_n."
},
{
"math_id": 32,
"text": "\\operatorname{UConf}_n(\\R^\\infty)"
},
{
"math_id": 33,
"text": "S_n."
},
{
"math_id": 34,
"text": "\\mathbb{CP}^\\infty"
},
{
"math_id": 35,
"text": " Gr(n, \\R^\\infty)"
},
{
"math_id": 36,
"text": "\\R^\\infty"
},
{
"math_id": 37,
"text": "EO(n) = V(n, \\R^\\infty)"
},
{
"math_id": 38,
"text": "\\R^\\infty."
}
] | https://en.wikipedia.org/wiki?curid=923556 |
9236652 | Néron–Tate height | In number theory, the Néron–Tate height (or canonical height) is a quadratic form on the Mordell–Weil group of rational points of an abelian variety defined over a global field. It is named after André Néron and John Tate.
Definition and properties.
Néron defined the Néron–Tate height as a sum of local heights. Although the global Néron–Tate height is quadratic, the constituent local heights are not quite quadratic. Tate (unpublished) defined it globally by observing that the logarithmic height formula_0 associated to a symmetric invertible sheaf formula_1 on an abelian variety formula_2 is “almost quadratic,” and used this to show that the limit
formula_3
exists, defines a quadratic form on the Mordell–Weil group of rational points, and satisfies
formula_4
where the implied formula_5 constant is independent of formula_6. If formula_1 is anti-symmetric, that is formula_7, then the analogous limit
formula_8
converges and satisfies formula_9, but in this case formula_10 is a linear function on the Mordell-Weil group. For general invertible sheaves, one writes formula_11 as a product of a symmetric sheaf and an anti-symmetric sheaf, and then
formula_12
is the unique quadratic function satisfying
formula_13
The Néron–Tate height depends on the choice of an invertible sheaf on the abelian variety, although the associated bilinear form depends only on the image of formula_1 in
the Néron–Severi group of formula_2. If the abelian variety formula_2 is defined over a number field "K" and the invertible sheaf is symmetric and ample, then the Néron–Tate height is positive definite in the sense that it vanishes only on torsion elements of the Mordell–Weil group formula_14. More generally, formula_10 induces a positive definite quadratic form on the real vector space formula_15.
On an elliptic curve, the Néron–Severi group is of rank one and has a unique ample generator, so this generator is often used to define the Néron–Tate height, which is denoted formula_16 without reference to a particular line bundle. (However, the height that naturally appears in the statement of the Birch and Swinnerton-Dyer conjecture is twice this height.) On abelian varieties of higher dimension, there need not be a particular choice of smallest ample line bundle to be used in defining the Néron–Tate height, and the height used in the statement of the Birch–Swinnerton-Dyer conjecture is the Néron–Tate height associated to the Poincaré line bundle on formula_17, the product of formula_2 with its dual.
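Tate's limit can be watched converge on a concrete example. In the Python sketch below (an illustration; the curve "y"2 = "x"3 − 2 and the point (3, 5) are arbitrary choices), the naive logarithmic height of the "x"-coordinate plays the role of "h""L" for the symmetric line bundle attached to the divisor 2("O"), and the quotients "h"(2"k""P")/4"k" settle down after a few doublings.

```python
from fractions import Fraction
from math import log

a = Fraction(0)                 # the curve y^2 = x^3 + a*x + b with a = 0, b = -2
P = (Fraction(3), Fraction(5))  # a rational point on it: 5^2 = 3^3 - 2

def double(pt):
    """Affine duplication on y^2 = x^3 + a*x + b (pt assumed not of order 2)."""
    x, y = pt
    lam = (3 * x * x + a) / (2 * y)
    x2 = lam * lam - 2 * x
    return (x2, lam * (x - x2) - y)

def naive_height(pt):
    """log max(|p|, q) for the x-coordinate p/q written in lowest terms."""
    x = pt[0]
    return log(max(abs(x.numerator), x.denominator))

Q = P
for k in range(1, 9):
    Q = double(Q)
    print(k, naive_height(Q) / 4 ** k)    # the quotients h(2^k P) / 4^k stabilize
```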
The elliptic and abelian regulators.
The bilinear form associated to the canonical height formula_16 on an elliptic curve "E" is
formula_18
The elliptic regulator of "E"/"K" is
formula_19
where "P"1...,"P""r" is a basis for the Mordell–Weil group "E"("K") modulo torsion (cf. Gram determinant). The elliptic regulator does not depend on the choice of basis.
More generally, let "A"/"K" be an abelian variety, let "B" ≅ Pic0("A") be the dual abelian variety to "A", and let "P" be the Poincaré line bundle on "A" × "B". Then the abelian regulator of "A"/"K" is defined by choosing a basis "Q"1...,"Q""r" for the Mordell–Weil group "A"("K") modulo torsion and a basis "η"1...,"η""r" for the Mordell–Weil group "B"("K") modulo torsion and setting
formula_20
The elliptic and abelian regulators appear in the Birch–Swinnerton-Dyer conjecture.
Lower bounds for the Néron–Tate height.
There are two fundamental conjectures that give lower bounds for the Néron–Tate height. In the first, the field "K" is fixed and the elliptic curve "E"/"K" and point "P" ∈ "E"("K") vary, while in the second, the elliptic Lehmer conjecture, the curve "E"/"K" is fixed while the field of definition of the point "P" varies.
Lang's conjecture asserts that formula_21 for all formula_22 and all nontorsion formula_23
The elliptic Lehmer conjecture asserts that formula_24 for all nontorsion formula_25
In both conjectures, the constants are positive and depend only on the indicated quantities. (A stronger form of Lang's conjecture asserts that formula_26 depends only on the degree formula_27.) It is known that the "abc" conjecture implies Lang's conjecture, and that the analogue of Lang's conjecture over one dimensional characteristic 0 function fields is unconditionally true. The best general result on Lehmer's conjecture is the weaker estimate formula_28 due to Masser. When the elliptic curve has complex multiplication, this has been improved to formula_29 by Laurent. There are analogous conjectures for abelian varieties, with the nontorsion condition replaced by the condition that the multiples of formula_6 form a Zariski dense subset of formula_2, and the lower bound in Lang's conjecture replaced by formula_30, where formula_31 is the Faltings height of formula_32.
Generalizations.
A polarized algebraic dynamical system is a triple formula_33 consisting of a (smooth projective) algebraic variety formula_34, an endomorphism formula_35, and a line bundle formula_36 with the property that formula_37 for some integer formula_38. The associated canonical height is given by the Tate limit
formula_39
where formula_40 is the "n"-fold iteration of formula_41. For example, any morphism formula_42 of degree formula_38 yields a canonical height associated to the line bundle relation formula_43. If formula_34 is defined over a number field and formula_1 is ample, then the canonical height is non-negative, and
formula_44
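For example, for the degree-2 map φ("x") = "x"2 − 1 on the projective line, the Tate limit can be computed directly with exact rational arithmetic. The sketch below is an illustration with arbitrarily chosen starting points: it returns 0 for the preperiodic point 0 → −1 → 0 and a positive value for a wandering point.

```python
from fractions import Fraction
from math import log

def height(x):
    """Naive logarithmic height of a rational number written in lowest terms."""
    return log(max(abs(x.numerator), x.denominator))

def canonical_height(x, steps=12, d=2):
    """Tate limit h(phi^n(x)) / d^n for the degree-2 map phi(x) = x^2 - 1."""
    for _ in range(steps):
        x = x * x - 1
    return height(x) / d ** steps

print(canonical_height(Fraction(0)))       # 0 -> -1 -> 0 is periodic, so the limit is 0
print(canonical_height(Fraction(1, 2)))    # not preperiodic: strictly positive limit
```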
References.
General references for the theory of canonical heights | [
{
"math_id": 0,
"text": "h_L"
},
{
"math_id": 1,
"text": "L"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "\\hat h_L(P) = \\lim_{N\\rightarrow\\infty}\\frac{h_L(NP)}{N^2}"
},
{
"math_id": 4,
"text": "\\hat h_L(P) = h_L(P) + O(1),"
},
{
"math_id": 5,
"text": "O(1)"
},
{
"math_id": 6,
"text": "P"
},
{
"math_id": 7,
"text": "[-1]^*L=L^{-1}"
},
{
"math_id": 8,
"text": "\\hat h_L(P) = \\lim_{N\\rightarrow\\infty}\\frac{h_L(NP)}{N}"
},
{
"math_id": 9,
"text": "\\hat h_L(P) = h_L(P) + O(1)"
},
{
"math_id": 10,
"text": "\\hat h_L"
},
{
"math_id": 11,
"text": "L^{\\otimes2} = (L\\otimes[-1]^*L)\\otimes(L\\otimes[-1]^*L^{-1})"
},
{
"math_id": 12,
"text": "\\hat h_L(P) = \\frac12 \\hat h_{L\\otimes[-1]^*L}(P) + \\frac12 \\hat h_{L\\otimes[-1]^*L^{-1}}(P)"
},
{
"math_id": 13,
"text": "\\hat h_L(P) = h_L(P) + O(1) \\quad\\mbox{and}\\quad \\hat h_L(0)=0."
},
{
"math_id": 14,
"text": "A(K)"
},
{
"math_id": 15,
"text": "A(K)\\otimes\\mathbb{R}"
},
{
"math_id": 16,
"text": "\\hat h"
},
{
"math_id": 17,
"text": "A\\times\\hat A"
},
{
"math_id": 18,
"text": " \\langle P,Q\\rangle = \\frac{1}{2} \\bigl( \\hat h(P+Q) - \\hat h(P) - \\hat h(Q) \\bigr) ."
},
{
"math_id": 19,
"text": " \\operatorname{Reg}(E/K) = \\det\\bigl( \\langle P_i,P_j\\rangle \\bigr)_{1\\le i,j\\le r},"
},
{
"math_id": 20,
"text": " \\operatorname{Reg}(A/K) = \\det\\bigl( \\langle Q_i,\\eta_j\\rangle_{P} \\bigr)_{1\\le i,j\\le r}."
},
{
"math_id": 21,
"text": " \\hat h(P) \\ge c(K) \\log\\max\\bigl\\{\\operatorname{Norm}_{K/\\mathbb{Q}}\\operatorname{Disc}(E/K),h(j(E))\\bigr\\}\\quad"
},
{
"math_id": 22,
"text": "E/K"
},
{
"math_id": 23,
"text": "P\\in E(K)."
},
{
"math_id": 24,
"text": "\\hat h(P) \\ge \\frac{c(E/K)}{[K(P):K]}"
},
{
"math_id": 25,
"text": "P\\in E(\\bar K)."
},
{
"math_id": 26,
"text": "c"
},
{
"math_id": 27,
"text": "[K:\\mathbb Q]"
},
{
"math_id": 28,
"text": "\\hat h(P)\\ge c(E/K)/[K(P):K]^{3+\\varepsilon}"
},
{
"math_id": 29,
"text": "\\hat h(P)\\ge c(E/K)/[K(P):K]^{1+\\varepsilon}"
},
{
"math_id": 30,
"text": "\\hat h(P)\\ge c(K)h(A/K)"
},
{
"math_id": 31,
"text": "h(A/K)"
},
{
"math_id": 32,
"text": "A/K"
},
{
"math_id": 33,
"text": "(V,\\varphi, L)"
},
{
"math_id": 34,
"text": "V"
},
{
"math_id": 35,
"text": "\\varphi:V \\to V"
},
{
"math_id": 36,
"text": "L \\to V"
},
{
"math_id": 37,
"text": "\\varphi^*L = L^{\\otimes d}"
},
{
"math_id": 38,
"text": "d > 1"
},
{
"math_id": 39,
"text": " \\hat h_{V,\\varphi,L}(P) = \\lim_{n\\to\\infty} \\frac{h_{V,L}(\\varphi^{(n)}(P))}{d^n}, "
},
{
"math_id": 40,
"text": "\\varphi^{(n)} = \\varphi\\circ \\cdots \\circ \\varphi"
},
{
"math_id": 41,
"text": "\\varphi"
},
{
"math_id": 42,
"text": "\\varphi: \\mathbb{P}^n \\to \\mathbb{P}^n"
},
{
"math_id": 43,
"text": "\\varphi^*\\mathcal{O}(1) = \\mathcal{O}(n)"
},
{
"math_id": 44,
"text": " \\hat h_{V,\\varphi,L}(P) = 0 ~~ \\Longleftrightarrow ~~ P \\text{ is preperiodic for } \\varphi."
},
{
"math_id": 45,
"text": "P, \\varphi(P), \\varphi^2(P), \\varphi^3(P),\\ldots"
}
] | https://en.wikipedia.org/wiki?curid=9236652 |
92377 | Electromagnet | Magnet created with an electric current
An electromagnet is a type of magnet in which the magnetic field is produced by an electric current. Electromagnets usually consist of wire wound into a coil. A current through the wire creates a magnetic field which is concentrated in the hole in the center of the coil. The magnetic field disappears when the current is turned off. The wire turns are often wound around a magnetic core made from a ferromagnetic or ferrimagnetic material such as iron; the magnetic core concentrates the magnetic flux and makes a more powerful magnet.
The main advantage of an electromagnet over a permanent magnet is that the magnetic field can be quickly changed by controlling the amount of electric current in the winding. However, unlike a permanent magnet that needs no power, an electromagnet requires a continuous supply of current to maintain the magnetic field.
Electromagnets are widely used as components of other electrical devices, such as motors, generators, electromechanical solenoids, relays, loudspeakers, hard disks, MRI machines, scientific instruments, and magnetic separation equipment. Electromagnets are also employed in industry for picking up and moving heavy iron objects such as scrap iron and steel.
History.
Danish scientist Hans Christian Ørsted discovered in 1820 that electric currents create magnetic fields. In the same year, the French scientist André-Marie Ampère showed that iron can be magnetized by inserting it in an electrically fed solenoid.
British scientist William Sturgeon invented the electromagnet in 1824.
His first electromagnet was a horseshoe-shaped piece of iron that was wrapped with about 18 turns of bare copper wire. (Insulated wire did not then exist.) The iron was varnished to insulate it from the windings. When a current was passed through the coil, the iron became magnetized and attracted other pieces of iron; when the current was stopped, it lost magnetization. Sturgeon displayed its power by showing that although it only weighed seven ounces (roughly 200 grams), it could lift nine pounds (roughly 4 kilos) when the current of a single-cell power supply was applied. However, Sturgeon's magnets were weak because the uninsulated wire he used could only be wrapped in a single spaced-out layer around the core, limiting the number of turns.
Beginning in 1830, US scientist Joseph Henry systematically improved and popularised the electromagnet. By using wire insulated by silk thread and inspired by Schweigger's use of multiple turns of wire to make a galvanometer, he was able to wind multiple layers of wire onto cores, creating powerful magnets with thousands of turns of wire, including one that could support . The first major use for electromagnets was in telegraph sounders.
The magnetic domain theory of how ferromagnetic cores work was first proposed in 1906 by French physicist Pierre-Ernest Weiss, and the detailed modern quantum mechanical theory of ferromagnetism was worked out in the 1920s by Werner Heisenberg, Lev Landau, Felix Bloch and others.
Applications of electromagnets.
A "portative electromagnet" is one designed to just hold material in place; an example is a lifting magnet. A "tractive electromagnet" applies a force and moves something.
Electromagnets are very widely used in electric and electromechanical devices, including motors and generators, transformers, relays, loudspeakers and headphones, actuators such as solenoid valves, magnetic recording and data storage equipment, MRI machines, scientific instruments such as mass spectrometers and particle accelerators, magnetic locks, and lifting and separation magnets.
Simple solenoid.
A common tractive electromagnet is a uniformly-wound solenoid and plunger. The solenoid is a coil of wire, and the plunger is made of a material such as soft iron. Applying a current to the solenoid applies a force to the plunger and may make it move. The plunger stops moving when the forces upon it are balanced. For example, the forces are balanced when the plunger is centered in the solenoid.
The maximum uniform pull happens when one end of the plunger is at the middle of the solenoid. An approximation for the force F is
formula_0
where C is a proportionality constant, A is the cross-sectional area of the plunger, N is the number of turns in the solenoid, I is the current through the solenoid wire, and ℓ is the length of the solenoid. For units using inches, pounds force, and amperes with long, slender, solenoids, the value of C is around 0.009 to 0.010 psi (maximum pull pounds per square inch of plunger cross-sectional area). For example, a 12-inch long coil ("ℓ" = 12 in) with a long plunger of 1-square inch cross section ("A" = 1 in2) and 11,200 ampere-turns ("N I" = 11,200 Aturn) had a maximum pull of 8.75 pounds (corresponding to "C" = 0.0094 psi).
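A small Python sketch (using the empirical constant and the worked example quoted above) shows the formula in use:

```python
def solenoid_pull_lbf(area_in2, ampere_turns, length_in, c=0.0094):
    """Maximum pull of a long, slender solenoid from F = C*A*N*I/l
    (inch / pound-force / ampere units, with C as quoted in the text)."""
    return c * area_in2 * ampere_turns / length_in

# The worked example above: 12 in coil, 1 in^2 plunger, 11,200 ampere-turns.
print(solenoid_pull_lbf(area_in2=1.0, ampere_turns=11200, length_in=12.0))  # close to the 8.75 lbf quoted
```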
The maximum pull is increased when a magnetic stop is inserted into the solenoid. The stop becomes a magnet that will attract the plunger; it adds little to the solenoid pull when the plunger is far away but dramatically increases the pull when they are close. An approximation for the pull P is
formula_1
Here "ℓ"a is the distance between the end of the stop and the end of the plunger. The additional constant "C"1 for units of inches, pounds, and amperes with slender solenoids is about 2660. The second term within the bracket represents the same force as the stop-less solenoid above; the first term represents the attraction between the stop and the plunger.
Some improvements can be made on the basic design. The ends of the stop and plunger are often conical. For example, the plunger may have a pointed end that fits into a matching recess in the stop. The shape makes the solenoid's pull more uniform as a function of separation. Another improvement is to add a magnetic return path around the outside of the solenoid (an "iron-clad solenoid"). The magnetic return path, just as the stop, has little impact until the air gap is small.
Physics.
An electric current flowing in a wire creates a magnetic field around the wire, due to Ampere's law "(see drawing of wire with magnetic field)". To concentrate the magnetic field in an electromagnet, the wire is wound into a coil with many turns of wire lying side by side. The magnetic field of all the turns of wire passes through the center of the coil, creating a strong magnetic field there. A coil forming the shape of a straight tube (a helix) is called a solenoid.
The direction of the magnetic field through a coil of wire can be found from a form of the right-hand rule. If the fingers of the right hand are curled around the coil in the direction of current flow (conventional current, flow of positive charge) through the windings, the thumb points in the direction of the field inside the coil. The side of the magnet that the field lines emerge from is defined to be the "north pole".
Magnetic core.
In the formulas below, "B" is the magnetic flux density, "F" the force, "A" the cross-sectional area of the core, "N" the number of turns of wire, "I" the current in the winding, "L" the length of the magnetic field path, "μ" the permeability of the core material, "μ"0 the permeability of free space and "μ"r = "μ"/"μ"0 the relative permeability.
Much stronger magnetic fields can be produced if a "magnetic core" of a soft ferromagnetic (or ferrimagnetic) material, such as iron, is placed inside the coil. A core can increase the magnetic field to thousands of times the strength of the field of the coil alone, due to the high magnetic permeability μ of the material. Not all electromagnets use cores, so this is called a ferromagnetic-core or iron-core electromagnet.
This is because the material of a magnetic core (often made of iron or steel) is composed of small regions called magnetic domains that act like tiny magnets (see ferromagnetism). Before the current in the electromagnet is turned on, the domains in the soft iron core point in random directions, so their tiny magnetic fields cancel each other out, and the iron has no large-scale magnetic field. When a current is passed through the wire wrapped around the iron, its magnetic field penetrates the iron, and causes the domains to turn, aligning parallel to the magnetic field, so their tiny magnetic fields add to the wire's field, creating a large magnetic field that extends into the space around the magnet. The effect of the core is to concentrate the field, since the magnetic field passes preferentially through the core, which presents a much lower reluctance than the surrounding air.
The larger the current passed through the wire coil, the more the domains align, and the stronger the magnetic field is. Finally, all the domains are lined up, and further increases in current only cause slight increases in the magnetic field: this phenomenon is called saturation. This is why the very strongest electromagnets, such as superconducting and very high current electromagnets, cannot use cores.
The main nonlinear feature of ferromagnetic materials is that the B field saturates at a certain value, which is around 1.6 to 2 teslas (T) for most high permeability core steels. The B field increases quickly with increasing current up to that value, but above that value the field levels off and becomes almost constant, regardless of how much current is sent through the windings. The maximum strength of the magnetic field possible from an iron core electromagnet is limited to around 1.6 to 2 T.
When the current in the coil is turned off, in the magnetically soft materials that are nearly always used as cores, most of the domains lose alignment and return to a random state and the field disappears. However, some of the alignment persists, because the domains have difficulty turning their direction of magnetization, leaving the core magnetized as a weak permanent magnet. This phenomenon is called hysteresis and the remaining magnetic field is called remanent magnetism. The residual magnetization of the core can be removed by degaussing. In alternating current electromagnets, such as are used in motors, the core's magnetization is constantly reversed, and the remanence contributes to the motor's losses.
Ampere's law.
The magnetic field of electromagnets in the general case is given by Ampere's Law:
formula_2
which says that the integral of the magnetizing field formula_3 around any closed loop is equal to the sum of the current flowing through the loop. Another equation used, that gives the magnetic field due to each small segment of current, is the Biot–Savart law.
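For instance, applying Ampere's law to a long, tightly wound air-core solenoid gives the familiar estimate "B" ≈ "μ"0"NI"/"L" for the field inside it; the following sketch, with illustrative numbers, evaluates it.

```python
from math import pi

MU0 = 4e-7 * pi          # vacuum permeability in T*m/A

def long_solenoid_field(turns, current_a, length_m):
    """Applying Ampere's law to a long air-core solenoid gives B = mu0 * N * I / L."""
    return MU0 * turns * current_a / length_m

print(long_solenoid_field(turns=500, current_a=2.0, length_m=0.10))   # about 0.013 T
```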
Force exerted by magnetic field.
Likewise to the simple solenoid above, the force exerted by the magnetic field of an electromagnet on a section of core material is "F" = "B"2"A" / (2"μ"0)   (1), where "B" is the magnetic field and "A" is the cross-sectional area of the core.
The force equation can be derived from the energy stored in a magnetic field. Energy is force times distance. Rearranging terms yields the equation above.
The 1.6 T limit on the field mentioned above sets a limit on the maximum force per unit core area, or magnetic pressure, an iron-core electromagnet can exert; roughly:
formula_4
where "B"sat is the saturation flux density of the core material.
In more intuitive units it is useful to remember that at 1 T the magnetic pressure is approximately 4 atmospheres, or about 4 kg of force per square centimetre (kg/cm2).
Given a core geometry, the B field needed for a given force can be calculated from (1); if it comes out to much more than 1.6 T, a larger core must be used.
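A short sketch (with illustrative pole areas and forces) shows both directions of the calculation: the maximum pull available from a saturated pole face, and the field needed for a required force.

```python
from math import pi, sqrt

MU0 = 4e-7 * pi

def max_pull_newtons(b_tesla, area_m2):
    """Magnetic pressure times pole area: F = B^2 * A / (2 * mu0), equation (1)."""
    return b_tesla ** 2 * area_m2 / (2 * MU0)

def field_needed(force_n, area_m2):
    """B required for a given pull; compare the result with the ~1.6 T saturation limit."""
    return sqrt(2 * MU0 * force_n / area_m2)

print(max_pull_newtons(1.6, 0.01))     # a 100 cm^2 pole face at saturation: roughly 10 kN
print(field_needed(2000.0, 0.0025))    # ~1.4 T, so a 25 cm^2 pole face can supply 2 kN
```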
However, computing the magnetic field and force exerted by ferromagnetic materials in general is difficult for two reasons. First, because the strength of the field varies from point to point in a complicated way, particularly outside the core and in air gaps, where "fringing fields" and "leakage flux" must be considered. Second, because the magnetic field B and force are nonlinear functions of the current, depending on the nonlinear relation between B and H for the particular core material used. For precise calculations, computer programs that can produce a model of the magnetic field using the finite element method are employed.
Magnetic circuit.
In many practical applications of electromagnets, such as motors, generators, transformers, lifting magnets, and loudspeakers, the iron core is in the form of a loop or magnetic circuit, possibly broken by a few narrow air gaps. Iron presents much less "resistance" (reluctance) to the magnetic field than air, so a stronger field can be obtained if most of the magnetic field's path is within the core. This is why the core and magnetic field lines are in the form of closed loops.
Since most of the magnetic field is confined within the outlines of the core loop, this allows a simplification of the mathematical analysis. See the drawing at right. A common simplifying assumption satisfied by many electromagnets, which will be used in this section, is that the magnetic field strength "B" is constant around the magnetic circuit (within the core and air gaps) and zero outside it. Most of the magnetic field will be concentrated in the core material ("C"). Within the core the magnetic field ("B") will be approximately uniform across any cross-section, so if in addition, the core has roughly constant area throughout its length, the field in the core will be constant.
This leaves the air gaps ("G"), if any, between core sections. In the gaps, the magnetic field lines are no longer confined by the core. So they 'bulge' out beyond the outlines of the core before curving back to enter the next piece of core material, reducing the field strength in the gap. The bulges ("BF") are called "fringing fields". However, as long as the length of the gap is smaller than the cross-section dimensions of the core, the field in the gap will be approximately the same as in the core.
In addition, some of the magnetic field lines ("BL") will take 'short cuts' and not pass through the entire core circuit, and thus will not contribute to the force exerted by the magnet. This also includes field lines that encircle the wire windings but do not enter the core. This is called "leakage flux".
The equations in this section are valid for electromagnets for which the magnetic circuit is a single loop of core material, possibly broken by a few air gaps; the core has roughly the same cross-sectional area throughout its length; any air gaps are small compared with the cross-sectional dimensions of the core; and the leakage flux is negligible.
Magnetic field in magnetic circuit.
The magnetic field created by an electromagnet is proportional to both "N" and "I", hence this product, "NI", is given the name magnetomotive force. For an electromagnet with a single magnetic circuit, Ampere's Law reduces to:
formula_5
This is a nonlinear equation (referred to below as equation (2)), because "μ" varies with "B". For an exact solution, the value of "μ" at the "B" value used must be obtained from the core material hysteresis curve. If "B" is unknown, the equation must be solved by numerical methods.
Moreover, if the magnetomotive force is well above saturation, so the core material is in saturation, the magnetic field will be approximately the saturation value "Bsat" for the material, and would not vary much with changes in "NI". For a closed magnetic circuit (no air gap) most core materials saturate at a magnetomotive force of roughly 800 ampere-turns per meter of flux path.
For most core materials, formula_6. So in equation (2) above, the second term dominates. Therefore, in magnetic circuits with an air gap, "B" depends strongly on the length of the air gap, and the length of the flux path in the core does not matter much. Given an air gap of 1mm, a magnetomotive force of about 796 Ampere-turns is required to produce a magnetic field of 1T.
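The following sketch (with an illustrative core length and relative permeability) evaluates "B" from the magnetomotive force using equation (2) with "H" = "B"/"μ", and reproduces the 1 mm air-gap figure quoted above to within a few percent.

```python
from math import pi

MU0 = 4e-7 * pi

def gap_field(ampere_turns, core_len_m, gap_len_m, mu_r=4000.0):
    """B from NI = H_core*L_core + H_gap*L_gap with H = B/mu; because mu_r is
    a few thousand, the air-gap term usually dominates."""
    return MU0 * ampere_turns / (core_len_m / mu_r + gap_len_m)

# The figure quoted above: about 796 ampere-turns across a 1 mm gap gives roughly 1 T
# (slightly less here because a little of the magnetomotive force is spent on the core).
print(gap_field(796, core_len_m=0.20, gap_len_m=0.001))    # ~0.95 T
```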
Closed magnetic circuit.
For a closed magnetic circuit (no air gap), such as would be found in an electromagnet lifting a piece of iron bridged across its poles, equation (2) becomes "B" = "NIμ" / "L".
Substituting into (1), the force is "F" = "μ"2"N"2"I"2"A" / (2"μ"0"L"2).
It can be seen that to maximize the force, a core with a short flux path "L" and a wide cross-sectional area "A" is preferred (this also applies to magnets with an air gap). To achieve this, in applications like lifting magnets (see photo above) and loudspeakers a flat cylindrical design is often used. The winding is wrapped around a short wide cylindrical core that forms one pole, and a thick metal housing that wraps around the outside of the windings forms the other part of the magnetic circuit, bringing the magnetic field to the front to form the other pole.
Force between electromagnets.
The above methods are applicable to electromagnets with a magnetic circuit and do not apply when a large part of the magnetic field path is outside the core. A non-circuit example would be a magnet with a straight cylindrical core like the one shown at the top of this article. For the force between two electromagnets (or permanent magnets) with well-defined "poles" where the field lines emerge from the core, a special analogy called the magnetic-charge model can be used, which assumes the magnetic field is produced by fictitious 'magnetic charges' on the surface of the poles. This model treats the poles as point-like rather than as the finite surfaces they really are, and thus it only yields a good approximation when the distance between the magnets is much larger than their diameter, so it is useful only for estimating the force between them.
Magnetic pole strength of electromagnets can be found from:
formula_7
The force between two poles is:
formula_8
Each electromagnet has two poles, so the total force on a given magnet due to another magnet is equal to the vector sum of the forces of the other magnet's poles acting on each pole of the given magnet.
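As a rough numerical illustration (all values below are invented for the example, and the point-pole model is only meaningful at separations much larger than the pole faces):

```python
from math import pi

MU0 = 4e-7 * pi

def pole_strength(turns, current_a, area_m2, length_m):
    """Pole strength m = N*I*A/L (units of ampere-metres)."""
    return turns * current_a * area_m2 / length_m

def pole_force(m1, m2, r_m):
    """Point-pole approximation F = mu0*m1*m2/(4*pi*r^2); valid only when r is
    much larger than the pole faces themselves."""
    return MU0 * m1 * m2 / (4 * pi * r_m ** 2)

m = pole_strength(turns=1000, current_a=2.0, area_m2=1e-4, length_m=0.1)   # 2 A*m
print(pole_force(m, m, r_m=0.1))     # force between two such poles 10 cm apart
```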
Side effects.
There are several side effects which occur in electromagnets which must be provided for in their design. These generally become more significant in larger electromagnets.
Ohmic heating.
The only power consumed in a DC electromagnet under steady-state conditions is due to the resistance of the windings, and is dissipated as heat. Some large electromagnets require water cooling systems in the windings to carry off the waste heat.
Since the magnetic field is proportional to the product "NI", the number of turns in the windings "N" and the current "I" can be chosen to minimize heat losses, as long as their product is constant. Since the power dissipation, "P" = "I"2"R", increases with the square of the current but only increases approximately linearly with the number of windings, the power lost in the windings can be minimized by reducing "I" and increasing the number of turns "N" proportionally, or using thicker wire to reduce the resistance. For example, halving "I" and doubling "N" halves the power loss, as does doubling the area of the wire. In either case, increasing the amount of wire reduces the ohmic losses. For this reason, electromagnets often have a significant thickness of windings.
However, the limit to increasing "N" or lowering the resistance is that the windings take up more room between the magnet's core pieces. If the area available for the windings is filled up, more turns require going to a smaller diameter of wire, which has higher resistance, which cancels the advantage of using more turns. So in large magnets there is a minimum amount of heat loss that cannot be reduced. This increases with the square of the magnetic flux "B"2.
Inductive voltage spikes.
An electromagnet has significant inductance, and resists changes in the current through its windings. Any sudden changes in the winding current cause large voltage spikes across the windings. This is because when the current through the magnet is increased, such as when it is turned on, energy from the circuit must be stored in the magnetic field. When it is turned off the energy in the field is returned to the circuit.
If an ordinary switch is used to control the winding current, this can cause sparks at the terminals of the switch. This does not occur when the magnet is switched on, because the limited supply voltage causes the current through the magnet and the field energy to increase slowly, but when it is switched off, the energy in the magnetic field is suddenly returned to the circuit, causing a large voltage spike and an arc across the switch contacts, which can damage them. With small electromagnets, a capacitor is sometimes used across the contacts, which reduces arcing by temporarily storing the current. More often a diode is used to prevent voltage spikes by providing a path for the current to recirculate through the winding until the energy is dissipated as heat. The diode is connected across the winding, oriented so it is reverse-biased during steady state operation and does not conduct. When the supply voltage is removed, the voltage spike forward-biases the diode and the reactive current continues to flow through the winding, through the diode, and back into the winding. A diode used in this way is called a freewheeling diode or flyback diode.
Large electromagnets are usually powered by variable current electronic power supplies, controlled by a microprocessor, which prevent voltage spikes by accomplishing current changes slowly, in gentle ramps. It may take several minutes to energize or deenergize a large magnet.
Lorentz forces.
In powerful electromagnets, the magnetic field exerts a force on each turn of the windings, due to the Lorentz force formula_9 acting on the moving charges within the wire. The Lorentz force is perpendicular to both the axis of the wire and the magnetic field. It can be visualized as a pressure between the magnetic field lines, pushing them apart. It has two effects on an electromagnet's windings: the field exerts an outward radial pressure on each turn, placing the wire under tension (hoop stress), and adjacent turns, which carry current in the same direction, attract one another, tending to compress the winding axially.
The Lorentz forces increase with "B2". In large electromagnets the windings must be firmly clamped in place, to prevent motion on power-up and power-down from causing metal fatigue in the windings. In the Bitter design, below, used in very high-field research magnets, the windings are constructed as flat disks to resist the radial forces, and clamped in an axial direction to resist the axial ones.
Core losses.
In alternating current (AC) electromagnets, used in transformers, inductors, and AC motors and generators, the magnetic field is constantly changing. This causes energy losses in their magnetic cores that are dissipated as heat in the core. The losses stem from two processes: eddy currents, which are circulating currents induced in the conductive core material by the changing field and dissipated by its electrical resistance, and hysteresis losses, the energy expended in repeatedly reversing the magnetization of the core, which is proportional to the area of the core material's hysteresis loop.
High-field electromagnets.
Superconducting electromagnets.
When a magnetic field higher than the ferromagnetic limit of 1.6 T is needed, superconducting electromagnets can be used. Instead of using ferromagnetic materials, these use superconducting windings cooled with liquid helium, which conduct current without electrical resistance. These allow enormous currents to flow, which generate intense magnetic fields. Superconducting magnets are limited by the field strength at which the winding material ceases to be superconducting. Current designs are limited to 10–20 T, with the current (2017) record of 32 T. The necessary refrigeration equipment and cryostat make them much more expensive than ordinary electromagnets. However, in high power applications this can be offset by lower operating costs, since after startup no power is required for the windings, since no energy is lost to ohmic heating. They are used in particle accelerators and MRI machines.
Bitter electromagnets.
Both iron-core and superconducting electromagnets have limits to the field they can produce. Therefore, the most powerful man-made magnetic fields have been generated by "air-core" nonsuperconducting electromagnets of a design invented by Francis Bitter in 1933, called Bitter electromagnets. Instead of wire windings, a Bitter magnet consists of a solenoid made of a stack of conducting disks, arranged so that the current moves in a helical path through them, with a hole through the center where the maximum field is created. This design has the mechanical strength to withstand the extreme Lorentz forces of the field, which increase with "B"2. The disks are pierced with holes through which cooling water passes to carry away the heat caused by the high current. The strongest continuous field achieved solely with a resistive magnet is 41.5 tesla as of 22 August 2017, produced by a Bitter electromagnet at the National High Magnetic Field Laboratory in Tallahassee, Florida. The previous record was 37.5 T. The strongest continuous magnetic field overall, 45 T, was achieved in June 2000 with a hybrid device consisting of a Bitter magnet inside a superconducting magnet.
The factor limiting the strength of electromagnets is the inability to dissipate the enormous waste heat, so more powerful fields, up to 100 T, have been obtained from resistive magnets by sending brief pulses of high current through them; the inactive period after each pulse allows the heat produced during the pulse to be removed, before the next pulse.
Explosively pumped flux compression.
The most powerful manmade magnetic fields have been created by using explosives to compress the magnetic field inside an electromagnet as it is pulsed; these are called explosively pumped flux compression generators. The implosion compresses the magnetic field to values of around 1000 T for a few microseconds. While this method may seem very destructive, charge shaping redirects the blast outwardly to minimize harm to the experiment. These devices are known as destructive pulsed electromagnets. They are used in physics and materials science research to study the properties of materials at high magnetic fields.
References.
| [
{
"math_id": 0,
"text": "F = \\frac{C A N I}{\\ell}"
},
{
"math_id": 1,
"text": "P = A N I \\left[\\frac{N I}{(\\ell_\\mathrm{a})^2 (C_1)^2} + \\frac C \\ell\\right] = \\frac{A N^2 I^2}{(\\ell_\\mathrm{a})^2 (C_1)^2} + \\frac{C A N I}{\\ell}"
},
{
"math_id": 2,
"text": "\\int \\mathbf{J}\\cdot d\\mathbf{A} = \\oint \\mathbf{H}\\cdot d\\boldsymbol{\\ell}"
},
{
"math_id": 3,
"text": "\\mathbf{H}"
},
{
"math_id": 4,
"text": "\\frac{F}{A} = \\frac {(B_\\text{sat})^2}{2 \\mu_0} \\approx 1000\\ \\mathrm{kPa} = 10^6 \\mathrm{N/m^2} = 145\\ \\mathrm{{lbf/in^2}}"
},
{
"math_id": 5,
"text": "NI = H_{\\mathrm{core}} L_{\\mathrm{core}} + H_{\\mathrm{gap}} L_{\\mathrm{gap}}"
},
{
"math_id": 6,
"text": "\\mu_r \\approx 2000 \\text{–} 6000\\,"
},
{
"math_id": 7,
"text": "m = \\frac{NIA}{L}"
},
{
"math_id": 8,
"text": "F = \\frac{\\mu_0 m_1 m_2}{4\\pi r^2}"
},
{
"math_id": 9,
"text": "q\\mathbf{v}\\times\\mathbf{B}"
}
] | https://en.wikipedia.org/wiki?curid=92377 |
924025 | Centered cube number | Centered figurate number that counts points in a three-dimensional pattern
A centered cube number is a centered figurate number that counts the points in a three-dimensional pattern formed by a point surrounded by concentric cubical layers of points, with "i"2 points on the square faces of the ith layer. Equivalently, it is the number of points in a body-centered cubic pattern within a cube that has "n" + 1 points along each of its edges.
The first few centered cube numbers are
1, 9, 35, 91, 189, 341, 559, 855, 1241, 1729, 2331, 3059, 3925, 4941, 6119, 7471, 9009, ... (sequence in the OEIS).
Formulas.
The centered cube number for a pattern with n concentric layers around the central point is given by the formula
formula_0
The same number can also be expressed as a trapezoidal number (difference of two triangular numbers), or a sum of consecutive numbers, as
formula_1
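A few lines of Python (illustrative) confirm that the expressions agree and reproduce the list of values given above:

```python
def centered_cube(n):
    """The n-th centered cube number, via both closed forms given above."""
    value = n ** 3 + (n + 1) ** 3
    assert value == (2 * n + 1) * (n * n + n + 1)
    return value

print([centered_cube(n) for n in range(10)])
# -> [1, 9, 35, 91, 189, 341, 559, 855, 1241, 1729]

# The trapezoidal form: the sum of the consecutive integers n^2 + 1, ..., (n + 1)^2.
assert all(centered_cube(n) == sum(range(n * n + 1, (n + 1) ** 2 + 1)) for n in range(100))
```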
Properties.
Because of the factorization (2"n" + 1)("n"2 + "n" + 1), it is impossible for a centered cube number to be a prime number.
The only centered cube numbers which are also square numbers are 1 and 9. This can be shown by solving "y"2 = "x"3 + 3"x", whose only integer solutions are ("x", "y") = (0, 0), (1, 2), (3, 6) and (12, 42). Substituting "x" = 2"n" + 1 and "y" = 2"m" turns this equation into "m"2 = 2"n"3 + 3"n"2 + 3"n" + 1, the condition for the "n"th centered cube number to be a perfect square; among the four solutions only ("x", "y") = (1, 2) and (3, 6) give integer values ("n", "m") = (0, 1) and (1, 3), corresponding to the centered cube numbers 1 and 9.
References.
| [
{
"math_id": 0,
"text": "n^3 + (n + 1)^3 = (2n+1)\\left(n^2+n+1\\right)."
},
{
"math_id": 1,
"text": "\\binom{(n+1)^2+1}{2}-\\binom{n^2+1}{2} = (n^2+1)+(n^2+2)+\\cdots+(n+1)^2."
}
] | https://en.wikipedia.org/wiki?curid=924025 |
924193 | Volterra's function | In mathematics, Volterra's function, named for Vito Volterra, is a real-valued function "V" defined on the real line R with the following curious combination of properties:
Definition and construction.
The function is defined by making use of the Smith–Volterra–Cantor set and "copies" of the function defined by formula_0 for formula_1 and formula_2. The construction of "V" begins by determining the largest value of "x" in the interval [0, 1/8] for which "f" ′("x") = 0. Once this value (say "x"0) is determined, extend the function to the right with a constant value of "f"("x"0) up to and including the point 1/8. Once this is done, a mirror image of the function can be created starting at the point 1/4 and extending downward towards 0. This function will be defined to be 0 outside of the interval [0, 1/4]. We then translate this function to the interval [3/8, 5/8] so that the resulting function, which we call "f"1, is nonzero only on the middle interval of the complement of the Smith–Volterra–Cantor set. To construct "f"2, "f" ′ is then considered on the smaller interval [0,1/32], truncated at the last place the derivative is zero, extended, and mirrored the same way as before, and two translated copies of the resulting function are added to "f"1 to produce the function "f"2. Volterra's function then results by repeating this procedure for every interval removed in the construction of the Smith–Volterra–Cantor set; in other words, the function "V" is the limit of the sequence of functions "f"1, "f"2, ...
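The first step of this construction, locating the largest zero of "f" ′ in [0, 1/8], can be reproduced numerically. The sketch below is an illustrative addition; the grid resolution and the use of SciPy's brentq root finder are arbitrary choices, not part of Volterra's construction:

```python
import numpy as np
from scipy.optimize import brentq

def f_prime(x):
    # derivative of x^2 sin(1/x) for x != 0
    return 2*x*np.sin(1/x) - np.cos(1/x)

# scan [1e-3, 1/8] from the right and bisect at the first sign change
xs = np.linspace(0.125, 1e-3, 20000)
x0 = None
for a, b in zip(xs[:-1], xs[1:]):
    if f_prime(a) * f_prime(b) < 0:
        x0 = brentq(f_prime, b, a)   # brentq needs the smaller endpoint first
        break

print(x0)            # approximately 0.092, the value used to truncate f
print(f_prime(x0))   # approximately 0
```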
Further properties.
Volterra's function is differentiable everywhere just as "f" (as defined above) is. One can show that "f" ′("x") = 2"x" sin(1/"x") - cos(1/"x") for "x" ≠ 0, which means that in any neighborhood of zero, there are points where "f" ′ takes values 1 and −1. Thus there are points where "V" ′ takes values 1 and −1 in every neighborhood of each of the endpoints of intervals removed in the construction of the Smith–Volterra–Cantor set "S". In fact, "V" ′ is discontinuous at every point of "S", even though "V" itself is differentiable at every point of "S", with derivative 0. However, "V" ′ is continuous on each interval removed in the construction of "S", so the set of discontinuities of "V" ′ is equal to "S".
Since the Smith–Volterra–Cantor set "S" has positive Lebesgue measure, this means that "V" ′ is discontinuous on a set of positive measure. By Lebesgue's criterion for Riemann integrability, "V" ′ is not Riemann integrable. If one were to repeat the construction of Volterra's function with the ordinary measure-0 Cantor set "C" in place of the "fat" (positive-measure) Cantor set "S", one would obtain a function with many similar properties, but the derivative would then be discontinuous on the measure-0 set "C" instead of the positive-measure set "S", and so the resulting function would have a Riemann integrable derivative.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(x) = x^2 \\sin(1/x)"
},
{
"math_id": 1,
"text": "x \\neq 0"
},
{
"math_id": 2,
"text": "f(0) = 0"
}
] | https://en.wikipedia.org/wiki?curid=924193 |
924298 | Measuring network throughput | Throughput of a network can be measured using various tools available on different platforms. This page explains the theory behind what these tools set out to measure and the issues regarding these measurements.
Reasons for measuring throughput in networks.
People are often concerned about measuring the maximum data throughput in bits per second of a communications link or network access. A typical method of performing a measurement is to transfer a 'large' file from one system to another system and measure the time required to complete the transfer or copy of the file. The throughput is then calculated by dividing the file size by the time to get the throughput in megabits, kilobits, or bits per second.
Unfortunately, such an exercise often measures the goodput, which is less than the maximum theoretical data throughput, leading people to believe that their communications link is not operating correctly.
In fact, many factors in addition to the transmission overheads limit the achievable transfer rate, including latency, the TCP Receive Window size and system limitations, which means the calculated goodput does not reflect the maximum achievable throughput.
Theory: Short Summary.
The Maximum bandwidth can be calculated as follows:
formula_0
where RWIN is the TCP Receive Window and RTT is the round-trip time for the path.
The Max TCP Window size in the absence of the TCP window scale option is 65,535 bytes. Example: Max Bandwidth = 65,535 bytes / 0.220 s = 297886.36 B/s * 8 = 2.383 Mbit/s. Over a single TCP connection between those endpoints, the tested bandwidth will therefore be restricted to about 2.383 Mbit/s even if the contracted bandwidth is greater.
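A minimal calculation of this window-imposed ceiling, using the numbers from the example above (the function name is illustrative, not from any networking library):

```python
def tcp_window_limit(rwin_bytes, rtt_seconds):
    """Upper bound on single-connection TCP throughput, in bits per second."""
    return rwin_bytes * 8 / rtt_seconds

limit = tcp_window_limit(65_535, 0.220)
print(f"{limit / 1e6:.3f} Mbit/s")   # about 2.383 Mbit/s
```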
Bandwidth test software.
Bandwidth test software is used to determine the maximum bandwidth of a network or internet connection. The test typically attempts to download or upload the maximum amount of data in a certain period of time, or a certain amount of data in the minimum amount of time. For this reason, bandwidth tests can delay internet transmissions through the internet connection as they are undertaken, and can cause inflated data charges.
Nomenclature.
The throughput of communications links is measured in bits per second (bit/s), kilobits per second (kbit/s), megabits per second (Mbit/s) and gigabits per second (Gbit/s). In this application, kilo, mega and giga are the standard S.I. prefixes indicating multiplication by 1,000 (kilo), 1,000,000 (mega), and 1,000,000,000 (giga).
File sizes are typically measured in bytes — kilobytes, megabytes, and gigabytes being usual, where a byte is eight bits. In modern textbooks one kilobyte is defined as 1,000 bytes, one megabyte as 1,000,000 bytes, etc., in accordance with the 1998 International Electrotechnical Commission (IEC) standard. However, the convention adopted by Windows systems is to define 1 kilobyte as 1,024 (or 2^10) bytes, which is equal to 1 kibibyte. Similarly, a file size of "1 megabyte" is 1,024 × 1,024 bytes (equal to 1 mebibyte), and "1 gigabyte" is 1,024 × 1,024 × 1,024 bytes (equal to 1 gibibyte).
Confusing and inconsistent use of Suffixes.
It is usual for people to abbreviate commonly used expressions. For file sizes, it is usual for someone to say that they have a '64 k' file (meaning 64 kilobytes), or a '100 meg' file (meaning 100 megabytes). When talking about circuit bit rates, people will interchangeably use the terms throughput, bandwidth and speed, and refer to a circuit as being a '64 k' circuit, or a '2 meg' circuit — meaning 64 kbit/s or 2 Mbit/s (see also the List of connection bandwidths). However, a '64 k' circuit will not transmit a '64 k' file in one second. This may not be obvious to those unfamiliar with telecommunications and computing, so misunderstandings sometimes arise. In actuality, a 64 kilobyte file is 64 × 1,024 × 8 bits in size and the 64 k circuit will transmit bits at a rate of 64 × 1,000 bit/s, so the amount of time taken to transmit a 64 kilobyte file over the 64 k circuit will be at least (64 × 1,024 × 8)/(64 × 1,000) seconds, which works out to be 8.192 seconds.
Compression.
Some equipment can improve matters by compressing the data as it is sent. This is a feature of most analog modems and of several popular operating systems. If the 64 k file can be shrunk by compression, the time taken to transmit can be reduced. This can be done invisibly to the user, so a highly compressible file may be transmitted considerably faster than expected. As this 'invisible' compression cannot easily be disabled, it therefore follows that when measuring throughput by using files and timing the time to transmit, one should use files that cannot be compressed. Typically, this is done using a file of random data, which becomes harder to compress the closer to truly random it is.
Assuming your data cannot be compressed, the 8.192 seconds to transmit a 64 kilobyte file over a 64 kilobit/s communications link is a theoretical minimum time which will not be achieved in practice. This is due to the effect of overheads which are used to format the data in an agreed manner so that both ends of a connection have a consistent view of the data.
There are at least two issues that aren't immediately obvious for transmitting compressed files:
Overheads and data formats.
A common communications link used by many people is the asynchronous start-stop, or just "asynchronous", serial link. If you have an external modem attached to your home or office computer, the chances are that the connection is over an asynchronous serial connection. Its advantage is that it is simple — it can be implemented using only three wires: Send, Receive and Signal Ground (or Signal Common). In an RS-232 interface, an idle connection has a continuous negative voltage applied. A 'zero' bit is represented as a positive voltage difference with respect to the Signal Ground and a 'one' bit is a negative voltage with respect to signal ground, thus indistinguishable from the idle state. Since a 'one' bit looks the same as the idle line, the receiver needs to know when a character starts in order to distinguish data bits from the idle state. This is done by agreeing in advance how fast data will be transmitted over a link, then using a start bit to signal the start of a byte — this start bit will be a 'zero' bit. Stop bits are 'one' bits i.e. negative voltage.
Actually, more things will have been agreed in advance — the speed of bit transmission, the number of bits per character, the parity and the number of stop bits (signifying the end of a character). So a designation of 9600-8-E-2 would be 9,600 bits per second, with eight bits per character, even parity and two stop bits.
A common set-up of an asynchronous serial connection would be 9600-8-N-1 (9,600 bit/s, 8 bits per character, no parity and 1 stop bit) - a total of 10 bits transmitted to send one 8 bit character (one start bit, the 8 bits making up the byte transmitted and one stop bit). This is an overhead of 20%, so a 9,600 bit/s asynchronous serial link will not transmit data at 9600/8 bytes per second (1200 byte/s) but actually, in this case 9600/10 bytes per second (960 byte/s), which is considerably slower than expected.
It can get worse. If parity is specified and we use 2 stop bits, the overhead for carrying one 8 bit character is 4 bits (one start bit, one parity bit and two stop bits) - or 50%! In this case a 9600 bit/s connection will carry 9600/12 byte/s (800 byte/s). Asynchronous serial interfaces commonly will support bit transmission speeds of up to 230.4 kbit/s. If it is set up to have no parity and one stop bit, this means the byte transmission rate is 23.04 kbyte/s.
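The framing arithmetic above can be captured in a few lines of Python (an illustrative sketch; the parameter names are not taken from any serial-port API):

```python
def async_byte_rate(bit_rate, data_bits=8, parity_bits=0, stop_bits=1, start_bits=1):
    """Effective payload bytes per second on an asynchronous start-stop link."""
    bits_per_char = start_bits + data_bits + parity_bits + stop_bits
    return bit_rate / bits_per_char

print(async_byte_rate(9600))                              # 960.0   (9600-8-N-1)
print(async_byte_rate(9600, parity_bits=1, stop_bits=2))  # 800.0   (9600-8-E-2)
print(async_byte_rate(230_400))                           # 23040.0 (230.4 kbit/s, 8-N-1)
```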
The advantage of the asynchronous serial connection is its simplicity. One disadvantage is its low efficiency in carrying data. This can be overcome by using a synchronous interface. In this type of interface, a clock signal is added on a separate wire, and the bits are transmitted in synchrony with the clock — the interface no longer has to look for the start and stop bits of each individual character — however, it is necessary to have a mechanism to ensure the sending and receiving clocks are kept in synchrony, so data is divided up into frames of multiple characters separated by known delimiters. There are three common coding schemes for framed communications — HDLC, PPP, and Ethernet.
HDLC.
When using HDLC, rather than each byte having a start, optional parity, and one or two stop bits, the bytes are gathered together into a frame. The start and end of the frame are signalled by the 'flag', and error detection is carried out by the frame check sequence. If the frame has a maximum sized address of 32 bits, a maximum sized control part of 16 bits and a maximum sized frame check sequence of 16 bits, the overhead per frame could be as high as 64 bits. If each frame carried but a single byte, the data throughput efficiency would be extremely low. However, the bytes are normally gathered together, so that even with a maximal overhead of 64 bits, frames carrying more than 24 bytes are more efficient than asynchronous serial connections. As frames can vary in size because they can have different numbers of bytes being carried as data, this means the overhead of an HDLC connection is not fixed.
PPP.
The "point-to-point protocol" (PPP) is defined by the Internet Request For Comment documents RFC 1570, RFC 1661 and RFC 1662. With respect to the framing of packets, PPP is quite similar to HDLC, but supports both bit-oriented as well as byte-oriented ("octet-stuffed") methods of delimiting frames while maintaining data transparency.
Ethernet.
Ethernet is a "local area network" (LAN) technology, which is also framed. The way the frame is electrically defined on a connection between two systems is different from the typically wide-area networking technology that uses HDLC or PPP implemented, but these details are not important for throughput calculations. Ethernet is a shared medium, so that it is not guaranteed that only the two systems that are transferring a file between themselves will have exclusive access to the connection. If several systems are attempting to communicate simultaneously, the throughput between any pair can be substantially lower than the nominal bandwidth available.
Other low-level protocols.
Dedicated point-to-point links are not the only option for many connections between systems. Frame Relay, ATM, and MPLS based services can also be used. When calculating or estimating data throughputs, the details of the frame/cell/packet format and the technology's detailed implementation need to be understood.
Frame Relay.
Frame Relay uses a modified HDLC format to define the frame format that carries data.
ATM.
Asynchronous Transfer Mode (ATM) uses a radically different method of carrying data. Rather than using variable length frames or packets, data is carried in fixed size cells. Each cell is 53 bytes long, with the first 5 bytes defined as the header, and the following 48 bytes as payload. Data networking commonly requires packets of data that are larger than 48 bytes, so there is a defined adaptation process that specifies how larger packets of data should be divided up in a standard manner to be carried by the smaller cells. This process varies according to the data carried, so in ATM nomenclature, there are different ATM Adaptation Layers. The process defined for most data is named ATM Adaptation Layer No. 5 or AAL5.
Understanding throughput on ATM links requires a knowledge of which ATM adaptation layer has been used for the data being carried.
MPLS.
Multiprotocol Label Switching (MPLS) adds a standard tag or header known as a 'label' to existing packets of data. In certain situations it is possible to use MPLS in a 'stacked' manner, so that labels are added to packets that have already been labelled. Connections between MPLS systems can also be 'native', with no underlying transport protocol, or MPLS labelled packets can be carried inside frame relay or HDLC packets as payloads. Correct throughput calculations need to take such configurations into account. For example, a data packet could have two MPLS labels attached via 'label-stacking', then be placed as payload inside an HDLC frame. This generates more overhead that has to be taken into account than a single MPLS label attached to a packet that is then sent 'natively', with no underlying protocol, to a receiving system.
Higher-level protocols.
Few systems transfer files and data by simply copying the contents of the file into the 'Data' field of HDLC or PPP frames — another protocol layer is used to format the data inside the 'Data' field of the HDLC or PPP frame. The most commonly used such protocol is Internet Protocol (IP), defined by RFC 791. This imposes its own overheads.
Again, few systems simply copy the contents of files into IP packets, but use yet another protocol that manages the connection between two systems — TCP (Transmission Control Protocol), defined by RFC 793. This adds its own overhead.
Finally, another protocol layer manages the actual data transfer process. A commonly used protocol for this is the "file transfer protocol" (FTP).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathrm{Throughput} \\le \\frac {\\mathrm{RWIN}} {\\mathrm{RTT}} \\,\\!"
}
] | https://en.wikipedia.org/wiki?curid=924298 |
9243099 | Competitive equilibrium | Economic equilibrium concept
Competitive equilibrium (also called: Walrasian equilibrium) is a concept of economic equilibrium, introduced by Kenneth Arrow and Gérard Debreu in 1951, appropriate for the analysis of commodity markets with flexible prices and many traders, and serving as the benchmark of efficiency in economic analysis. It relies crucially on the assumption of a competitive environment where each trader decides upon a quantity that is so small compared to the total quantity traded in the market that their individual transactions have no influence on the prices. Competitive markets are an ideal standard by which other market structures are evaluated.
Definitions.
A competitive equilibrium (CE) consists of two elements: a price function formula_0, which assigns a price to every commodity bundle, and an allocation matrix formula_1, where for every formula_2, formula_3 is the bundle allocated to agent formula_4.
These elements should satisfy the following requirement:
formula_5 and for every bundle "Y": if formula_6 then formula_7. In other words, every agent weakly prefers its allocated bundle to any bundle that costs at most as much.
Often, there is an initial endowment matrix formula_8: for every formula_2, formula_9 is the initial endowment of agent formula_4. Then, a CE should satisfy some additional requirements:
Market clearance - the total allocation equals the total endowment: formula_10.
Individual rationality - every agent weakly prefers its allocation to its initial endowment: formula_11.
Budget feasibility - no agent's allocation costs more than its initial endowment: formula_12.
Definition 2.
This definition explicitly allows for the possibility that there may be multiple commodity arrays that are equally appealing, as well as for zero prices. An alternative definition relies on the concept of a "demand-set". Given a price function P and an agent with a utility function U, a certain bundle of goods x is in the demand-set of the agent if: formula_13 for every other bundle y. A "competitive equilibrium" is a price function P and an allocation matrix X such that the bundle allocated to every agent is in that agent's demand-set for the prices P, and every item with a positive price is fully allocated (equivalently, any unallocated item has price zero).
Approximate equilibrium.
In some cases it is useful to define an equilibrium in which the rationality condition is relaxed. Given a positive value formula_14 (measured in monetary units, e.g., dollars), a price vector formula_0 and a bundle formula_15, define formula_16 as a price vector in which all items in x have the same price they have in P, and all items not in x are priced formula_14 more than their price in P.
In a "formula_14-competitive-equilibrium", the bundle x allocated to an agent should be in that agent's demand-set for the "modified" price vector, formula_16.
This approximation is realistic when there are buy/sell commissions. For example, suppose that an agent has to pay formula_14 dollars for buying a unit of an item, in addition to that item's price. That agent will keep his current bundle as long as it is in the demand-set for price vector formula_16. This makes the equilibrium more stable.
Examples.
The following examples involve an exchange economy with two agents, Jane and Kelvin, two goods e.g. bananas (x) and apples (y), and no money.
1. Graphical example: Suppose that the initial allocation is at point X, where Jane has more apples than Kelvin does and Kelvin has more bananas than Jane does.
By looking at their indifference curves formula_17 of Jane and formula_18 of Kelvin, we can see that this is not an equilibrium - both agents are willing to trade with each other at the prices formula_19 and formula_20. After trading, both Jane and Kelvin move to an indifference curve which depicts a higher level of utility, formula_21 and formula_22. The new indifference curves intersect at point E. The slope of the tangent of both curves equals -formula_23.
And the formula_24;
formula_25.
The marginal rate of substitution (MRS) of Jane equals that of Kelvin. Therefore, the two-person society reaches Pareto efficiency, where there is no way to make Jane or Kelvin better off without making the other worse off.
2. Arithmetic example: suppose that both agents have Cobb–Douglas utilities:
formula_26
formula_27
where formula_28 are constants.
Suppose the initial endowment is formula_29.
The demand function of Jane for x is:
formula_30
The demand function of Kelvin for x is:
formula_31
The market clearance condition for x is:
formula_32
This equation yields the equilibrium price ratio:
formula_33
We could do a similar calculation for y, but this is not needed, since Walras' law guarantees that the results will be the same. Note that in CE, only relative prices are determined; we can normalize the prices, e.g., by requiring that formula_34. Then we get formula_35. But any other normalization will also work.
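The equilibrium just derived can be checked numerically. The following sketch is an illustrative addition; the exponents a = 0.4 and b = 0.7 are arbitrary sample values:

```python
a, b = 0.4, 0.7          # Cobb-Douglas exponents for Jane and Kelvin
p_x = b / (1 + b - a)    # equilibrium prices, normalized so that p_x + p_y = 1
p_y = (1 - a) / (1 + b - a)

# incomes from the endowments (1,0) and (0,1)
I_J, I_K = p_x, p_y

# Marshallian demands for Cobb-Douglas utilities
x_J, y_J = a * I_J / p_x, (1 - a) * I_J / p_y
x_K, y_K = b * I_K / p_x, (1 - b) * I_K / p_y

print(round(x_J + x_K, 10))   # 1.0 -> the market for x clears
print(round(y_J + y_K, 10))   # 1.0 -> the market for y clears (Walras' law)
```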
3. Non-existence example: Suppose the agents' utilities are:
formula_36
and the initial endowment is [(2,1),(2,1)].
In CE, every agent must have either only x or only y (the other product does not contribute anything to the utility so the agent would like to exchange it away). Hence, the only possible CE allocations are [(4,0),(0,2)] and [(0,2),(4,0)]. Since the agents have the same income, necessarily formula_37. But then, the agent holding 2 units of y will want to exchange them for 4 units of x.
4. For existence and non-existence examples involving linear utilities, see Linear utility#Examples.
Indivisible items.
When there are indivisible items in the economy, it is common to assume that there is also money, which is divisible. The agents have quasilinear utility functions: their utility is the amount of money they have plus the utility from the bundle of items they hold.
A. Single item: Alice has a car which she values as 10. Bob has no car, and he values Alice's car as 20. A possible CE is: the price of the car is 15, Bob gets the car and pays 15 to Alice. This is an equilibrium because the market is cleared and both agents prefer their final bundle to their initial bundle. In fact, every price between 10 and 20 will be a CE price, with the same allocation. The same situation holds when the car is not initially held by Alice but rather in an auction in which both Alice and Bob are buyers: the car will go to Bob and the price will be anywhere between 10 and 20.
On the other hand, any price below 10 is not an equilibrium price because there is an excess demand (both Alice and Bob want the car at that price), and any price above 20 is not an equilibrium price because there is an excess supply (neither Alice nor Bob want the car at that price).
This example is a special case of a double auction.
B. Substitutes: A car and a horse are sold in an auction. Alice only cares about transportation, so for her these are perfect substitutes: she gets utility 8 from the horse, 9 from the car, and if she has both of them then she uses only the car so her utility is 9. Bob gets a utility of 5 from the horse and 7 from the car, but if he has both of them then his utility is 11 since he also likes the horse as a pet. In this case it is more difficult to find an equilibrium (see below). A possible equilibrium is that Alice buys the horse for 5 and Bob buys the car for 7. This is an equilibrium since Bob wouldn't like to pay 5 for the horse which will give him only 4 additional utility, and Alice wouldn't like to pay 7 for the car which will give her only 1 additional utility.
C. Complements: A horse and a carriage are sold in an auction. There are two potential buyers: AND and XOR. AND wants only the horse and the carriage together - they receive a utility of formula_38 from holding both of them but a utility of 0 for holding only one of them. XOR wants either the horse or the carriage but doesn't need both - they receive a utility of formula_39 from holding one of them and the same utility for holding both of them. Here, when formula_40, a competitive equilibrium does NOT exist, i.e., no price will clear the market. "Proof": consider the following options for the sum of the prices (horse-price + carriage-price). If the sum is less than formula_38, then AND demands both items; but the cheaper of the two items costs less than formula_39 (since formula_40), so XOR demands that item as well, and there is excess demand. If the sum is more than formula_38, then AND demands nothing and XOR demands at most one item, so at least one item is unsold; an unsold item must have price zero, so the other item alone costs more than formula_38, and then neither agent demands it, so it too is unsold at a positive price - a contradiction. If the sum is exactly formula_38, then AND is indifferent between buying both items and buying nothing, and each of these options leads to one of the two previous contradictions.
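Both the equilibrium in example B and the non-existence in example C can be explored with a brute-force search over a grid of integer prices. The sketch below is an illustrative addition: it only checks the definition (every buyer receives a bundle from its demand set, allocated bundles are disjoint, and unsold items have price zero) on a bounded integer grid, so it is a sanity check rather than a proof; the values 9 and 5 for the AND and XOR utilities are arbitrary choices satisfying formula_40.

```python
from itertools import combinations, product

ITEMS = ("horse", "car")
BUNDLES = [frozenset(c) for r in range(len(ITEMS) + 1) for c in combinations(ITEMS, r)]

def demand_set(value, prices):
    """Utility-maximizing bundles of a quasilinear buyer: value(bundle) minus total price."""
    net = {b: value(b) - sum(prices[i] for i in b) for b in BUNDLES}
    best = max(net.values())
    return [b for b, v in net.items() if v == best]

def equilibria(values, max_price=12):
    found = []
    for p in product(range(max_price + 1), repeat=len(ITEMS)):
        prices = dict(zip(ITEMS, p))
        for chosen in product(*(demand_set(v, prices) for v in values)):
            allocated = set().union(*chosen)
            disjoint = sum(len(b) for b in chosen) == len(allocated)
            unsold_free = all(prices[i] == 0 for i in ITEMS if i not in allocated)
            if disjoint and unsold_free:
                found.append((prices, chosen))
    return found

# Example B (substitutes): Alice and Bob as described above.
alice = lambda b: {frozenset(): 0, frozenset({"horse"}): 8,
                   frozenset({"car"}): 9, frozenset(ITEMS): 9}[b]
bob   = lambda b: {frozenset(): 0, frozenset({"horse"}): 5,
                   frozenset({"car"}): 7, frozenset(ITEMS): 11}[b]
print(len(equilibria([alice, bob])) > 0)   # True, e.g. horse at 5 for Alice, car at 7 for Bob

# Example C (complements) with v_and = 9 < 2 * v_xor = 10: nothing found on the grid.
v_and, v_xor = 9, 5
and_buyer = lambda b: v_and if b == frozenset(ITEMS) else 0
xor_buyer = lambda b: 0 if b == frozenset() else v_xor
print(len(equilibria([and_buyer, xor_buyer])) == 0)   # True
```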
D. Unit-demand consumers: There are "n" consumers. Each consumer has an index formula_41. There is a single type of good. Each consumer formula_4 wants at most a single unit of the good, which gives him a utility of formula_42. The consumers are ordered such that formula_43 is a weakly increasing function of formula_4. If the supply is formula_44 units, then any price formula_45 satisfying formula_46 is an equilibrium price, since there are "k" consumers that either want to buy the product or are indifferent between buying and not buying it. Note that an increase in supply causes a decrease in price.
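For example D, the equilibrium price range follows directly from the sorted valuations; a small illustrative computation with arbitrary numbers:

```python
u = [1, 3, 4, 7, 9]   # valuations u(1)..u(n), weakly increasing in the consumer index
n, k = len(u), 2      # n consumers, supply of k units

# equilibrium prices: u(n-k) <= p <= u(n-k+1)  (1-based indices as in the text)
low, high = u[n - k - 1], u[n - k]
print(low, high)      # 4 7 -> any price in [4, 7] clears the market for 2 units;
                      # consumers who are exactly indifferent may buy or abstain
```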
Existence of a competitive equilibrium.
Divisible resources.
The Arrow–Debreu model shows that a CE exists in every exchange economy with divisible goods satisfying the following conditions: all agents have strictly convex preferences, and all goods are desirable, meaning that if some good formula_47 is given for free (formula_48), then every agent wants as much of it as possible.
The proof proceeds in several steps.
A. For concreteness, assume that there are formula_49 agents and formula_50 divisible goods. Normalize the prices such that their sum is 1, i.e. formula_51. Then the space of all possible prices is the formula_52-dimensional unit simplex in formula_53. We call this simplex the "price simplex".
B. Let formula_54 be the excess demand function. This is a function of the price vector formula_45 when the initial endowment formula_8 is kept constant:
formula_55
It is known that, when the agents have strictly convex preferences, the Marshallian demand function is continuous. Hence, formula_54 is also a continuous function of formula_45.
C. Define the following function from the price simplex to itself:
formula_56
This is a continuous function, so by the Brouwer fixed-point theorem there is a price vector formula_57 such that:
formula_58
so,
formula_59
D. Using Walras' law and some algebra, it is possible to show that for this price vector, there is no excess demand in any product, i.e:
formula_60
E. The desirability assumption implies that all products have strictly positive prices:
formula_61
By Walras' law, formula_62. But this implies that the inequality above must be an equality:
formula_63
This means that formula_57 is a price vector of a competitive equilibrium.
Note that Linear utilities are only weakly convex, so they do not qualify for the Arrow–Debreu model. However, David Gale proved that a CE exists in every linear exchange economy satisfying certain conditions. For details see Linear utilities#Existence of competitive equilibrium.
Algorithms for computing the market equilibrium are described in market equilibrium computation.
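For the two-agent Cobb–Douglas economy of the arithmetic example above, the fixed-point property of the map g from step C can be verified directly. The sketch below is an illustrative addition, using the same arbitrary exponents a = 0.4 and b = 0.7 as before:

```python
import numpy as np

a, b = 0.4, 0.7
E = np.array([[1.0, 0.0],   # Jane's endowment
              [0.0, 1.0]])  # Kelvin's endowment
alphas = [(a, 1 - a), (b, 1 - b)]

def excess_demand(p):
    z = -E.sum(axis=0)                      # subtract total endowment
    for (ax, ay), e in zip(alphas, E):
        income = p @ e
        z += np.array([ax * income / p[0], ay * income / p[1]])
    return z

def g(p):
    t = np.maximum(0.0, excess_demand(p))
    return (p + t) / (1.0 + t.sum())

p_star = np.array([b, 1 - a]) / (1 + b - a)    # equilibrium prices from the example
print(np.allclose(g(p_star), p_star))           # True: p* is a fixed point of g
print(np.allclose(excess_demand(p_star), 0.0))  # True: both markets clear

p = np.array([0.5, 0.5])                        # a non-equilibrium price vector
print(np.allclose(g(p), p))                     # False
```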
Indivisible items.
In the examples above, a competitive equilibrium existed when the items were substitutes but not when the items were complements. This is not a coincidence.
Given a utility function on two goods "X" and "Y", say that the goods are weakly gross-substitute (GS) if they are either independent goods or gross substitute goods, but "not" complementary goods. This means that formula_64. I.e., if the price of "Y" increases, then the demand for "X" either remains constant or increases, but does "not" decrease. If the price of "Y" decreases, then the demand for "X" either remains constant or decreases.
A utility function is called GS if, according to this utility function, all pairs of different goods are GS. With a GS utility function, if an agent has a demand set at a given price vector, and the prices of some items increase, then the agent has a demand set which includes all the items whose price remained constant. He may decide that he doesn't want an item which has become more expensive; he may also decide that he wants another item instead (a substitute); but he may not decide that he doesn't want a third item whose price hasn't changed.
When the utility functions of all agents are GS, a competitive equilibrium always exists.
Moreover, the set of GS valuations is the largest set containing unit demand valuations for which the existence of competitive equilibrium is guaranteed: for any non-GS valuation, there exist unit-demand valuations such that a competitive equilibrium does not exist for these unit-demand valuations coupled with the given non-GS valuation.
For the computational problem of finding a competitive equilibrium in a special kind of a market, see Fisher market#indivisible.
The competitive equilibrium and allocative efficiency.
By the fundamental theorems of welfare economics, any CE allocation is Pareto efficient, and any efficient allocation can be sustained by a competitive equilibrium. Furthermore, by Varian's theorems, a CE allocation in which all agents have the same income is also envy-free.
At the competitive equilibrium, the value society places on a good is equivalent to the value of the resources given up to produce it (marginal benefit equals marginal cost). This ensures allocative efficiency: the additional value society places on another unit of the good is equal to what society must give up in resources to produce it.
Note that microeconomic analysis does not assume additive utility, nor does it assume any interpersonal utility tradeoffs. Efficiency, therefore, refers to the absence of Pareto improvements. It does not in any way opine on the fairness of the allocation (in the sense of distributive justice or equity). In an extreme example, an efficient equilibrium could be one where one player has all the goods and the other players have none; this is efficient in the sense that no reallocation makes every player (including the one holding everything) better off, or makes some players better off without making any player worse off.
Welfare theorems for indivisible item assignment.
In the case of indivisible items, we have the following strong versions of the two welfare theorems:
Finding an equilibrium.
In the case of indivisible item assignment, when the utility functions of all agents are GS (and thus an equilibrium exists), it is possible to find a competitive equilibrium using an "ascending auction". In an ascending auction, the auctioneer publishes a price vector, initially zero, and the buyers declare their favorite bundle under these prices. In case each item is desired by at most a single bidder, the items are divided and the auction is over. In case there is an excess demand on one or more items, the auctioneer increases the price of an over-demanded item by a small amount (e.g. a dollar), and the buyers bid again.
Several different ascending-auction mechanisms have been suggested in the literature. Such mechanisms are often called Walrasian auction, "Walrasian tâtonnement" or English auction.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "i\\in 1,\\dots,n"
},
{
"math_id": 3,
"text": "X_i"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "\\forall i\\in 1,\\dots,n"
},
{
"math_id": 6,
"text": "P(Y) \\leq P(X_i)"
},
{
"math_id": 7,
"text": "Y \\preceq_i X_i"
},
{
"math_id": 8,
"text": "E"
},
{
"math_id": 9,
"text": "E_i"
},
{
"math_id": 10,
"text": "\\sum_{i=1}^n X_i = \\sum_{i=1}^n E_i"
},
{
"math_id": 11,
"text": "\\forall i\\in 1,\\dots,n: X_i \\succeq_i E_i"
},
{
"math_id": 12,
"text": "\\forall i\\in 1,\\dots,n: P(X_i) \\leq P(E_i)"
},
{
"math_id": 13,
"text": "U(x)-P(x) \\geq U(y) - P(y)"
},
{
"math_id": 14,
"text": "\\epsilon"
},
{
"math_id": 15,
"text": "x"
},
{
"math_id": 16,
"text": "P^x_\\epsilon"
},
{
"math_id": 17,
"text": "J_1"
},
{
"math_id": 18,
"text": "K_1"
},
{
"math_id": 19,
"text": "P_x"
},
{
"math_id": 20,
"text": "P_y"
},
{
"math_id": 21,
"text": "J_2"
},
{
"math_id": 22,
"text": "K_2"
},
{
"math_id": 23,
"text": "P_x / P_y"
},
{
"math_id": 24,
"text": "MRS_{Jane} = P_x / P_y"
},
{
"math_id": 25,
"text": "MRS_{Kelvin} = P_x / P_y"
},
{
"math_id": 26,
"text": "u_J(x,y) = x^a y^{1-a}"
},
{
"math_id": 27,
"text": "u_K(x,y) = x^b y^{1-b}"
},
{
"math_id": 28,
"text": "a,b"
},
{
"math_id": 29,
"text": "E=[(1,0), (0,1)]"
},
{
"math_id": 30,
"text": "x_J(p_x,p_y,I_J) = \\frac{a\\cdot I_J}{p_x} = \\frac{a\\cdot (1\\cdot p_x)}{p_x} = a"
},
{
"math_id": 31,
"text": "x_K(p_x,p_y,I_K) = \\frac{b\\cdot I_K}{p_x} = \\frac{b\\cdot p_y}{p_x}"
},
{
"math_id": 32,
"text": "x_J + x_K = E_{J,x} + E_{K,x} = 1"
},
{
"math_id": 33,
"text": "\\frac{p_y}{p_x} = \\frac{1-a}{b}"
},
{
"math_id": 34,
"text": "p_x+p_y=1"
},
{
"math_id": 35,
"text": "p_x=\\frac{b}{1+b-a}, p_y=\\frac{1-a}{1+b-a}"
},
{
"math_id": 36,
"text": "u_J(x,y)=u_K(x,y) = \\max(x,y)"
},
{
"math_id": 37,
"text": "p_y = 2 p_x"
},
{
"math_id": 38,
"text": "v_{and}"
},
{
"math_id": 39,
"text": "v_{xor}"
},
{
"math_id": 40,
"text": "v_{and} < 2 v_{xor}"
},
{
"math_id": 41,
"text": "i=1,...,n"
},
{
"math_id": 42,
"text": "u(i)"
},
{
"math_id": 43,
"text": "u"
},
{
"math_id": 44,
"text": "k\\leq n"
},
{
"math_id": 45,
"text": "p"
},
{
"math_id": 46,
"text": "u(n-k)\\leq p\\leq u(n-k+1)"
},
{
"math_id": 47,
"text": "j"
},
{
"math_id": 48,
"text": "p_j=0"
},
{
"math_id": 49,
"text": "n"
},
{
"math_id": 50,
"text": "k"
},
{
"math_id": 51,
"text": "\\sum_{j=1}^k p_j = 1"
},
{
"math_id": 52,
"text": "k-1"
},
{
"math_id": 53,
"text": "\\mathbb{R}^k"
},
{
"math_id": 54,
"text": "z"
},
{
"math_id": 55,
"text": "z(p) = \\sum_{i=1}^n {x_i(p, p\\cdot E_i) - E_i}"
},
{
"math_id": 56,
"text": "g_i(p) = \\frac{p_i + \\max(0, z_i(p))}{1 + \\sum_{j=1}^k \\max(0,z_j(p))}, \\forall i\\in 1,\\dots,k"
},
{
"math_id": 57,
"text": "p^*"
},
{
"math_id": 58,
"text": "p^* = g(p^*)"
},
{
"math_id": 59,
"text": "p^*_i = \\frac{p_i + \\max(0, z_i(p))}{1 + \\sum_{j=1}^k \\max(0,z_j(p))}, \\forall i\\in 1,\\dots,k"
},
{
"math_id": 60,
"text": "z_j(p^*) \\leq 0, \\forall j\\in 1,\\dots,k"
},
{
"math_id": 61,
"text": "p_j > 0, \\forall j\\in 1,\\dots,k"
},
{
"math_id": 62,
"text": "p^* \\cdot z(p^*) = 0"
},
{
"math_id": 63,
"text": "z_j(p^*) = 0, \\forall j\\in 1,\\dots,k"
},
{
"math_id": 64,
"text": "\\frac{\\Delta \\text{demand}(X)}{\\Delta \\text{price}(Y)}\\geq 0"
}
] | https://en.wikipedia.org/wiki?curid=9243099 |
9245509 | Milnor conjecture (knot theory) | Theorem that the slice genus of the (p, q) torus knot is (p-1)(q-1)/2
In knot theory, the Milnor conjecture says that the slice genus of the formula_0 torus knot is
formula_1
It is in a similar vein to the Thom conjecture.
It was first proved by gauge theoretic methods by Peter Kronheimer and Tomasz Mrowka. Jacob Rasmussen later gave a purely combinatorial proof using Khovanov homology, by means of the s-invariant.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(p, q)"
},
{
"math_id": 1,
"text": "(p-1)(q-1)/2."
}
] | https://en.wikipedia.org/wiki?curid=9245509 |
92465 | Lambert W function | Multivalued function in mathematics
In mathematics, the Lambert W function, also called the omega function or product logarithm, is a multivalued function, namely the branches of the converse relation of the function "f"("w") = "we""w", where w is any complex number and "e""w" is the exponential function. The function is named after Johann Lambert, who considered a related problem in 1758. Building on Lambert's work, Leonhard Euler described the W function per se in 1783.
For each integer "k" there is one branch, denoted by "W""k"("z"), which is a complex-valued function of one complex argument. "W"0 is known as the principal branch. These functions have the following property: if "z" and "w" are any complex numbers, then
formula_0
holds if and only if
formula_1
When dealing with real numbers only, the two branches "W"0 and "W"−1 suffice: for real numbers "x" and "y" the equation
formula_2
can be solved for "y" only if "x" ≥ −; gets "y" = "W"0("x") if "x" ≥ 0 and the two values "y" = "W"0("x") and "y" = "W"−1("x") if − ≤ "x" < 0.
The Lambert W function's branches cannot be expressed in terms of elementary functions. It is useful in combinatorics, for instance, in the enumeration of trees. It can be used to solve various equations involving exponentials (e.g. the maxima of the Planck, Bose–Einstein, and Fermi–Dirac distributions) and also occurs in the solution of delay differential equations, such as "y"′("t") = "a" "y"("t" − 1). In biochemistry, and in particular enzyme kinetics, a closed-form solution for the time-course kinetics analysis of Michaelis–Menten kinetics is described in terms of the Lambert W function.
Terminology.
The notation convention chosen here (with "W"0 and "W"−1) follows the canonical reference on the Lambert W function by Corless, Gonnet, Hare, Jeffrey and Knuth.
The name "product logarithm" can be understood as this: Since the inverse function of "f"("w") = "e""w" is called the logarithm, it makes sense to call the inverse "function" of the product "we""w" as "product logarithm". (Technical note: like the complex logarithm, it is multivalued and thus W is described as the converse relation rather than inverse function.) It is related to the omega constant, which is equal to "W"0(1).
History.
Lambert first considered the related "Lambert's Transcendental Equation" in 1758, which led to an article by Leonhard Euler in 1783 that discussed the special case of "we""w".
The equation Lambert considered was
formula_3
Euler transformed this equation into the form
formula_4
Both authors derived a series solution for their equations.
Once Euler had solved this equation, he considered the case "a" = "b". Taking limits, he derived the equation
formula_5
He then put "a" = 1 and obtained a convergent series solution for the resulting equation, expressing "x" in terms of "c".
After taking derivatives with respect to "x" and some manipulation, the standard form of the Lambert function is obtained.
In 1993, it was reported that the Lambert "W" function provides an exact solution to the quantum-mechanical double-well Dirac delta function model for equal charges—a fundamental problem in physics. Prompted by this, Rob Corless and developers of the Maple computer algebra system realized that "the Lambert W function has been widely used in many fields, but because of differing notation and the absence of a standard name, awareness of the function was not as high as it should have been."
Another example where this function is found is in Michaelis–Menten kinetics.
Although it was widely believed that the Lambert "W" function cannot be expressed in terms of elementary (Liouvillian) functions, the first published proof did not appear until 2008.
Elementary properties, branches and range.
There are countably many branches of the W function, denoted by "Wk"("z"), for integer k; "W"0("z") being the main (or principal) branch. "W"0("z") is defined for all complex numbers "z" while "Wk"("z") with "k" ≠ 0 is defined for all non-zero "z", with "W"0(0) = 0 and lim"z"→0 "W""k"("z") = −∞ for all "k" ≠ 0.
The branch point for the principal branch is at "z" = −1/"e", with a branch cut that extends to −∞ along the negative real axis. This branch cut separates the principal branch from the two branches "W"−1 and "W"1. In all branches "Wk" with "k" ≠ 0, there is a branch point at "z" = 0 and a branch cut along the entire negative real axis.
The functions "Wk"("z"), "k" ∈ Z are all injective and their ranges are disjoint. The range of the entire multivalued function W is the complex plane. The image of the real axis is the union of the real axis and the quadratrix of Hippias, the parametric curve "w" = −"t" cot "t" + "it".
Inverse.
The range plot above also delineates the regions in the complex plane where the simple inverse relationship "W""n"("ze""z") = "z" is true. The relation "f" = "ze""z" implies that there exists an "n" such that "z" = "W""n"("f"), where "n" depends upon the value of "z". The value of the integer "n" changes abruptly when "ze""z" is at the branch cut of "W""n", which means that "ze""z" ≤ 0, except for "n" = 0 where it is "ze""z" ≤ −1/"e".
Defining "z" = "x" + "iy", where "x" and "y" are real, and expressing "e""z" in polar coordinates, it is seen that
formula_6
For formula_7, the branch cut for "W""n" is the non-positive real axis, so that
formula_8
and
formula_9
For formula_10, the branch cut for "W"0 is the real axis with formula_11, so that the inequality becomes
formula_12
Inside the regions bounded by the above, there are no discontinuous changes in "W""n"("ze""z"), and those regions specify where the "W" function is simply invertible, i.e. "W""n"("ze""z") = "z".
Calculus.
Derivative.
By implicit differentiation, one can show that all branches of W satisfy the differential equation
formula_13
(W is not differentiable for "z" = −1/"e".) As a consequence, one gets the following formula for the derivative of "W":
formula_14
Using the identity "e""W"("z") = "z"/"W"("z") gives the following equivalent formula:
formula_15
At the origin we have
formula_16
Integral.
The function "W"("x"), and many other expressions involving "W"("x"), can be integrated using the substitution "w" = "W"("x"), i.e. "x" = "we""w":
formula_17
(The last equation is more common in the literature but is undefined at "x" = 0). One consequence of this (using the fact that "W"0("e") = 1) is the identity
formula_18
Asymptotic expansions.
The Taylor series of "W"0 around 0 can be found using the Lagrange inversion theorem and is given by
formula_19
The radius of convergence is 1/"e", as may be seen by the ratio test. The function defined by this series can be extended to a holomorphic function defined on all complex numbers with a branch cut along the interval (−∞, −1/"e"]; this holomorphic function defines the principal branch of the Lambert W function.
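A quick numerical comparison of a truncated series with a library value, for a point well inside the radius of convergence (an illustrative addition; the truncation order and the sample point are arbitrary):

```python
from math import factorial
from scipy.special import lambertw

def w0_series(x, terms=30):
    """Partial sum of the Taylor series of W_0 around 0."""
    return sum((-n) ** (n - 1) / factorial(n) * x**n for n in range(1, terms + 1))

x = 0.1
print(w0_series(x))      # approximately 0.09128
print(lambertw(x).real)  # agrees to many digits, since |x| < 1/e
```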
For large values of x, "W"0 is asymptotic to
formula_20
where "L"1 = ln "x", "L"2 = ln ln "x", and [] is a non-negative Stirling number of the first kind. Keeping only the first two terms of the expansion,
formula_21
The other real branch, "W"−1, defined in the interval [−1/"e", 0), has an approximation of the same form as x approaches zero, with in this case "L"1 = ln(−"x") and "L"2 = ln(−ln(−"x")).
Integer and complex powers.
Integer powers of "W"0 also admit simple Taylor (or Laurent) series expansions at zero:
formula_22
More generally, for "r" ∈ Z, the Lagrange inversion formula gives
formula_23
which is, in general, a Laurent series of order r. Equivalently, the latter can be written in the form of a Taylor expansion of powers of "W"0("x") / "x":
formula_24
which holds for any "r" ∈ C and |"x"| < 1/"e".
Bounds and inequalities.
A number of non-asymptotic bounds are known for the Lambert function.
Hoorfar and Hassani showed that the following bound holds for "x" ≥ "e":
formula_25
They also showed the general bound
formula_26
for every formula_27 and formula_28, with equality only for formula_29.
The bound allows many other bounds to be made, such as taking formula_30 which gives the bound
formula_31
In 2013 it was proven that the branch "W"−1 can be bounded as follows:
formula_32
Roberto Iacono and John P. Boyd enhanced the bounds as follows:
formula_33
Identities.
A few identities follow from the definition:
formula_34
Note that, since "f"("x") = "xex" is not injective, it does not always hold that "W"("f"("x")) = "x", much like with the inverse trigonometric functions. For fixed "x" < 0 and "x" ≠ −1, the equation "xex" = "yey" has two real solutions in y, one of which is of course "y" = "x". Then, for "i" = 0 and "x" < −1, as well as for "i" = −1 and "x" ∈ (−1, 0), "y" = "Wi"("xex") is the other solution.
Some other identities:
formula_35
formula_36
formula_37
formula_38
formula_39
(which can be extended to other n and x if the correct branch is chosen).
formula_40
Substituting −ln "x" in the definition:
formula_41
With Euler's iterated exponential "h"("x"):
formula_42
Special values.
The following are special values of the principal branch:
formula_43
formula_44
formula_45
formula_46
formula_47
formula_48
formula_49 (the omega constant)
formula_50
formula_51
formula_52
formula_53
formula_54
formula_55
Special values of the branch "W"−1:
formula_56
Representations.
The principal branch of the Lambert function can be represented by a proper integral, due to Poisson:
formula_57
Another representation of the principal branch was found by Kalugin–Jeffrey–Corless:
formula_58
The following continued fraction representation also holds for the principal branch:
formula_59
Also, if :
formula_60
In turn, if , then
formula_61
Other formulas.
Definite integrals.
There are several useful definite integral formulas involving the principal branch of the W function, including the following:
formula_62
The first identity can be found by writing the Gaussian integral in polar coordinates.
The second identity can be derived by making the substitution "u" = "W"0 ("x"), which gives
formula_63
Thus
formula_64
The third identity may be derived from the second by making the substitution "u" = "x"−2 and the first can also be derived from the third by the substitution "z" = tan "x".
Except for z along the branch cut (−∞, −1/"e"] (where the integral does not converge), the principal branch of the Lambert W function can be computed by the following integral:
formula_65
where the two integral expressions are equivalent due to the symmetry of the integrand.
Indefinite integrals.
formula_66
<templatestyles src="Math_proof/styles.css" />1st proof
Introduce substitution variable formula_67
formula_68
formula_69
formula_70
formula_71
formula_72
formula_73
<templatestyles src="Math_proof/styles.css" />2nd proof
formula_74
formula_75
formula_76
formula_77
formula_78
formula_70
formula_71
formula_72
formula_73
formula_79
<templatestyles src="Math_proof/styles.css" />Proof
formula_80
formula_81
formula_82
formula_83
formula_84
formula_85
formula_86
formula_87
formula_88
formula_89
formula_90
formula_91
formula_92
formula_93
formula_94
formula_95
formula_96
formula_97
formula_98
formula_99
formula_100
<templatestyles src="Math_proof/styles.css" />Proof
Introduce substitution variable formula_101, which gives us formula_102 and formula_103
formula_104
formula_105
formula_106
formula_107
formula_108
formula_109
formula_110
formula_111
Applications.
Solving equations.
The Lambert W function is used to solve equations in which the unknown quantity occurs both in the base and in the exponent, or both inside and outside of a logarithm. The strategy is to convert such an equation into one of the form "ze""z" = "w" and then to solve for z using the W function.
For example, the equation
formula_112
(where x is an unknown real number) can be solved by rewriting it as
formula_113
This last equation has the desired form and the solutions for real "x" are:
formula_114
and thus:
formula_115
Generally, the solution to
formula_116
is:
formula_117
where "a", "b", and "c" are complex constants, with "b" and "c" not equal to zero, and the "W" function is of any integer order.
Viscous flows.
Granular and debris flow fronts and deposits, and the fronts of viscous fluids in natural events and in laboratory experiments can be described by using the Lambert–Euler omega function as follows:
formula_118
where "H"("x") is the debris flow height, x is the channel downstream position, L is the unified model parameter consisting of several physical and geometrical parameters of the flow, flow height and the hydraulic pressure gradient.
In pipe flow, the Lambert W function is part of the explicit formulation of the Colebrook equation for finding the Darcy friction factor. This factor is used to determine the pressure drop through a straight run of pipe when the flow is turbulent.
Time-dependent flow in simple branch hydraulic systems.
The principal branch of the Lambert W function is employed in the field of mechanical engineering, in the study of time dependent transfer of Newtonian fluids between two reservoirs with varying free surface levels, using centrifugal pumps. The Lambert W function provided an exact solution to the flow rate of fluid in both the laminar and turbulent regimes:
formula_119
where formula_120 is the initial flow rate and formula_121 is time.
Neuroimaging.
The Lambert W function is employed in the field of neuroimaging for linking cerebral blood flow and oxygen consumption changes within a brain voxel, to the corresponding blood oxygenation level dependent (BOLD) signal.
Chemical engineering.
The Lambert W function is employed in the field of chemical engineering for modeling the porous electrode film thickness in a glassy carbon based supercapacitor for electrochemical energy storage. The Lambert W function provides an exact solution for a gas phase thermal activation process where growth of carbon film and combustion of the same film compete with each other.
Crystal growth.
In crystal growth, the negative principal branch of the Lambert W-function can be used to calculate the distribution coefficient, formula_122, and the solute concentration in the melt, formula_123, from the Scheil equation:
formula_124
Materials science.
The Lambert W function is employed in the field of epitaxial film growth for the determination of the critical dislocation onset film thickness. This is the calculated thickness of an epitaxial film, where due to thermodynamic principles the film will develop crystallographic dislocations in order to minimise the elastic energy stored in the films. Prior to application of Lambert W for this problem, the critical thickness had to be determined via solving an implicit equation. Lambert W turns it into an explicit equation for analytical handling with ease.
Porous media.
The Lambert W function has been employed in the field of fluid flow in porous media to model the tilt of an interface separating two gravitationally segregated fluids in a homogeneous tilted porous bed of constant dip and thickness where the heavier fluid, injected at the bottom end, displaces the lighter fluid that is produced at the same rate from the top end. The principal branch of the solution corresponds to stable displacements while the −1 branch applies if the displacement is unstable with the heavier fluid running underneath the lighter fluid.
Bernoulli numbers and Todd genus.
The equation (linked with the generating functions of Bernoulli numbers and Todd genus):
formula_125
can be solved by means of the two real branches "W"0 and "W"−1:
formula_126
This application shows that the branch difference of the W function can be employed in order to solve other transcendental equations.
Statistics.
The centroid of a set of histograms defined with respect to the symmetrized Kullback–Leibler divergence (also called the Jeffreys divergence) has a closed form using the Lambert W function.
Pooling of tests for infectious diseases.
Solving for the optimal group size to pool tests so that at least one individual is infected involves the Lambert "W" function.
Exact solutions of the Schrödinger equation.
The Lambert W function appears in a quantum-mechanical potential, which affords the fifth – next to those of the harmonic oscillator plus centrifugal, the Coulomb plus inverse square, the Morse, and the inverse square root potential – exact solution to the stationary one-dimensional Schrödinger equation in terms of the confluent hypergeometric functions. The potential is given as
formula_127
A peculiarity of the solution is that each of the two fundamental solutions that compose the general solution of the Schrödinger equation is given by a combination of two confluent hypergeometric functions of an argument proportional to
formula_128
The Lambert W function also appears in the exact solution for the bound state energy of the one dimensional Schrödinger equation with a Double Delta Potential.
Exact solution of QCD coupling constant.
In Quantum chromodynamics, the quantum field theory of the Strong interaction, the coupling constant formula_129 is computed perturbatively, the order n corresponding to Feynman diagrams including n quantum loops. The first order, n=1, solution is exact (at that order) and analytical. At higher orders, n>1, there is no exact and analytical solution and one typically uses an iterative method to furnish an approximate solution. However, for second order, n=2, the Lambert function provides an exact (if non-analytical) solution.
Exact solutions of the Einstein vacuum equations.
In the Schwarzschild metric solution of the Einstein vacuum equations, the W function is needed to go from the Eddington–Finkelstein coordinates to the Schwarzschild coordinates. For this reason, it also appears in the construction of the Kruskal–Szekeres coordinates.
Resonances of the delta-shell potential.
The s-wave resonances of the delta-shell potential can be written exactly in terms of the Lambert W function.
Thermodynamic equilibrium.
If a reaction involves reactants and products having heat capacities that are constant with temperature then the equilibrium constant K obeys
formula_130
for some constants a, b, and c. When c (equal to ) is not zero the value or values of T can be found where K equals a given value as follows, where L can be used for ln "T".
formula_131
If a and c have the same sign there will be either two solutions or none (or one if the argument of W is exactly −1/"e"). (The upper solution may not be relevant.) If they have opposite signs, there will be one solution.
Phase separation of polymer mixtures.
In the calculation of the phase diagram of thermodynamically incompatible polymer mixtures according to the Edmond-Ogston model, the solutions for binodal and tie-lines are formulated in terms of Lambert W functions.
Wien's displacement law in a "D"-dimensional universe.
Wien's displacement law is expressed as formula_132. With formula_133 and formula_134, where formula_135 is the spectral energy density, one finds formula_136, where formula_137 is the number of degrees of freedom for spatial translation. The solution formula_138 shows that the spectral energy density is dependent on the dimensionality of the universe.
AdS/CFT correspondence.
The classical finite-size corrections to the dispersion relations of giant magnons, single spikes and GKP strings can be expressed in terms of the Lambert W function.
Epidemiology.
In the "t" → ∞ limit of the SIR model, the proportion of susceptible and recovered individuals has a solution in terms of the Lambert W function.
Determination of the time of flight of a projectile.
The total time of the journey of a projectile which experiences air resistance proportional to its velocity can be determined in exact form by using the Lambert "W" function.
Electromagnetic surface wave propagation.
The transcendental equation that appears in the determination of the propagation wave number of an electromagnetic axially symmetric surface wave (a low-attenuation single TM01 mode) propagating in a cylindrical metallic wire gives rise to an equation like "u" ln "u" = "v" (where u and v clump together the geometrical and physical factors of the problem), which is solved by the Lambert W function. The first solution to this problem, due to Sommerfeld "circa" 1898, already contained an iterative method to determine the value of the Lambert W function.
Orthogonal trajectories of real ellipses.
The family of ellipses formula_139 centered at formula_140 is parameterized by eccentricity formula_141. The orthogonal trajectories of this family are given by the differential equation formula_142 whose general solution is the family formula_143formula_144.
Generalizations.
The standard Lambert W function expresses exact solutions to "transcendental algebraic" equations (in x) of the form:
where "a"0, c and r are real constants. The solution is
formula_145
Generalizations of the Lambert W function include:
Applications of the Lambert W function in fundamental physical problems are not exhausted even for the standard case expressed in (1) as seen recently in the area of atomic, molecular, and optical physics.
Numerical evaluation.
The W function may be approximated using Newton's method, with successive approximations to "w" = "W"("z") (so "z" = "wew") being
formula_146
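A minimal implementation of this Newton iteration (the starting guess ln(1 + "z") for "z" ≥ 0 and the stopping tolerance are illustrative choices, not part of the source):

```python
import math

def lambert_w_newton(z, tol=1e-12, max_iter=100):
    """Principal branch W_0(z) for z >= 0, via Newton's method on w e^w - z = 0."""
    w = math.log(1.0 + z)                    # rough starting guess
    for _ in range(max_iter):
        e = math.exp(w)
        step = (w * e - z) / (e * (w + 1.0))  # f(w) / f'(w)
        w -= step
        if abs(step) < tol:
            break
    return w

print(lambert_w_newton(1.0))      # 0.5671432904... (the omega constant)
print(lambert_w_newton(math.e))   # 1.0, since W_0(e) = 1
```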
The W function may also be approximated using Halley's method,
formula_147
given in Corless et al. to compute W.
For real formula_148, it may be approximated by the quadratic-rate recursive formula of R. Iacono and J.P. Boyd:
formula_149
Lajos Lóczi proves that by using this iteration with an appropriate starting value formula_150, one can determine the maximum number of iteration steps in advance for any precision.
Software.
The Lambert W function is implemented as codice_0 in Maple, codice_1 in GP (and codice_2 in PARI), codice_1 in Matlab, also codice_1 in Octave with the codice_5 package, as codice_6 in Maxima, as codice_7 (with a silent alias codice_0) in Mathematica, as codice_1 in Python scipy's special function package, as codice_0 in Perl's codice_11 module, and as codice_12, codice_13 functions in the special functions section of the GNU Scientific Library (GSL). In the Boost C++ libraries, the calls are codice_14, codice_15, codice_16, and codice_17. In R, the Lambert W function is implemented as the codice_18 and codice_19 functions in the codice_20 package.
C++ code for all the branches of the complex Lambert W function is available on the homepage of István Mező.
| [
{
"math_id": 0,
"text": "w e^{w} = z"
},
{
"math_id": 1,
"text": "w=W_k(z) \\ \\ \\text{ for some integer } k."
},
{
"math_id": 2,
"text": "y e^{y} = x"
},
{
"math_id": 3,
"text": "x = x^m + q."
},
{
"math_id": 4,
"text": "x^a - x^b = (a - b) c x^{a + b}."
},
{
"math_id": 5,
"text": "\\ln x = c x^a."
},
{
"math_id": 6,
"text": "\n\\begin{align}\n ze^z &= (x + iy) e^{x} (\\cos y + i \\sin y) \\\\\n &= e^{x} (x \\cos y - y \\sin y) + i e^{x} (x \\sin y + y \\cos y) \\\\\n\\end{align}\n"
},
{
"math_id": 7,
"text": "n \\neq 0"
},
{
"math_id": 8,
"text": "x \\sin y + y \\cos y = 0 \\Rightarrow x = -y/\\tan(y),"
},
{
"math_id": 9,
"text": "(x \\cos y - y \\sin y) e^x \\leq 0."
},
{
"math_id": 10,
"text": "n = 0"
},
{
"math_id": 11,
"text": "-\\infty < z \\leq -1/e"
},
{
"math_id": 12,
"text": "(x \\cos y - y \\sin y) e^x \\leq -1/e."
},
{
"math_id": 13,
"text": "z(1 + W) \\frac{dW}{dz} = W \\quad \\text{for } z \\neq -\\frac{1}{e}."
},
{
"math_id": 14,
"text": "\\frac{dW}{dz} = \\frac{W(z)}{z(1 + W(z))} \\quad \\text{for } z \\not\\in \\left\\{0, -\\frac{1}{e}\\right\\}."
},
{
"math_id": 15,
"text": "\\frac{dW}{dz} = \\frac{1}{z + e^{W(z)}} \\quad \\text{for } z \\neq -\\frac{1}{e}."
},
{
"math_id": 16,
"text": "W'_0(0)=1."
},
{
"math_id": 17,
"text": " \\begin{align}\n\\int W(x)\\,dx &= x W(x) - x + e^{W(x)} + C\\\\\n& = x \\left( W(x) - 1 + \\frac{1}{W(x)} \\right) + C.\n\\end{align}"
},
{
"math_id": 18,
"text": "\\int_{0}^{e} W_0(x)\\,dx = e - 1."
},
{
"math_id": 19,
"text": "W_0(x)=\\sum_{n=1}^\\infty \\frac{(-n)^{n-1}}{n!}x^n =x-x^2+\\tfrac{3}{2}x^3-\\tfrac{8}{3}x^4+\\tfrac{125}{24}x^5-\\cdots."
},
{
"math_id": 20,
"text": "\\begin{align}\nW_0(x) &= L_1 - L_2 + \\frac{L_2}{L_1} + \\frac{L_2\\left(-2 + L_2\\right)}{2L_1^2} + \\frac{L_2\\left(6 - 9L_2 + 2L_2^2\\right)}{6L_1^3} + \\frac{L_2\\left(-12 + 36L_2 - 22L_2^2 + 3L_2^3\\right)}{12L_1^4} + \\cdots \\\\[5pt]\n&= L_1 - L_2 + \\sum_{l=0}^\\infty \\sum_{m=1}^\\infty \\frac{(-1)^l \\left[ \\begin{smallmatrix} l + m \\\\ l + 1 \\end{smallmatrix} \\right]}{m!} L_1^{-l-m} L_2^m,\n\\end{align}"
},
{
"math_id": 21,
"text": "W_0(x) = \\ln x - \\ln \\ln x + \\mathcal{o}(1)."
},
{
"math_id": 22,
"text": "\nW_0(x)^2 = \\sum_{n=2}^\\infty \\frac{-2\\left(-n\\right)^{n-3}}{(n - 2)!} x^n = x^2 - 2x^3 + 4x^4 - \\tfrac{25}{3}x^5 + 18x^6 - \\cdots.\n"
},
{
"math_id": 23,
"text": "\nW_0(x)^r = \\sum_{n=r}^\\infty \\frac{-r\\left(-n\\right)^{n - r - 1}}{(n - r)!} x^n,\n"
},
{
"math_id": 24,
"text": "\n\\left(\\frac{W_0(x)}{x}\\right)^r = e^{-r W_0(x)} = \\sum_{n=0}^\\infty \\frac{r\\left(n + r\\right)^{n - 1}}{n!} \\left(-x\\right)^n,\n"
},
{
"math_id": 25,
"text": "\\ln x -\\ln \\ln x + \\frac{\\ln \\ln x}{2\\ln x} \\le W_0(x) \\le \\ln x - \\ln\\ln x + \\frac{e}{e - 1} \\frac{\\ln \\ln x}{\\ln x}."
},
{
"math_id": 26,
"text": "W_0(x) \\le \\ln\\left(\\frac{x+y}{1+\\ln(y)}\\right),"
},
{
"math_id": 27,
"text": "y>1/e"
},
{
"math_id": 28,
"text": "x\\ge-1/e"
},
{
"math_id": 29,
"text": "x = y \\ln(y)"
},
{
"math_id": 30,
"text": "y=x+1"
},
{
"math_id": 31,
"text": "W_0(x) \\le \\ln\\left(\\frac{2x+1}{1+\\ln(x+1)}\\right)."
},
{
"math_id": 32,
"text": "-1 - \\sqrt{2u} - u < W_{-1}\\left(-e^{-u-1}\\right) < -1 - \\sqrt{2u} - \\tfrac{2}{3}u \\quad \\text{for } u > 0."
},
{
"math_id": 33,
"text": "\\ln \\left(\\frac{x}{\\ln x}\\right) -\\frac{\\ln \\left(\\frac{x}{\\ln x}\\right)}{1+\\ln \\left(\\frac{x}{\\ln x}\\right)} \\ln \\left(1-\\frac{\\ln \\ln x}{\\ln x}\\right) \\le W_0(x) \\le \\ln \\left(\\frac{x}{\\ln x}\\right) - \\ln \\left(\\left(1-\\frac{\\ln \\ln x}{\\ln x}\\right)\\left(1-\\frac{\\ln\\left(1-\\frac{\\ln \\ln x}{\\ln x}\\right)}{1+\\ln \\left(\\frac{x}{\\ln x}\\right)}\\right)\\right)."
},
{
"math_id": 34,
"text": "\\begin{align}\nW_0(x e^x) &= x & \\text{for } x &\\geq -1,\\\\\nW_{-1}(x e^x) &= x & \\text{for } x &\\leq -1.\n\\end{align}"
},
{
"math_id": 35,
"text": "\n\\begin{align}\n& W(x)e^{W(x)} = x, \\quad\\text{therefore:}\\\\[5pt]\n& e^{W(x)} = \\frac{x}{W(x)}, \\qquad e^{-W(x)} = \\frac{W(x)}{x}, \\qquad e^{n W(x)} = \\left(\\frac{x}{W(x)}\\right)^n.\n\\end{align}\n"
},
{
"math_id": 36,
"text": "\\ln W_0(x) = \\ln x - W_0(x) \\quad \\text{for } x > 0."
},
{
"math_id": 37,
"text": "W_0\\left(x \\ln x\\right) = \\ln x \\quad\\text{and}\\quad e^{W_0\\left(x \\ln x\\right)} = x \\quad \\text{for } \\frac1e \\leq x . "
},
{
"math_id": 38,
"text": "W_{-1}\\left(x \\ln x\\right) = \\ln x \\quad\\text{and}\\quad e^{W_{-1}\\left(x \\ln x\\right)} = x \\quad \\text{for } 0 < x \\leq \\frac1e . "
},
{
"math_id": 39,
"text": "\n\\begin{align}\n& W(x) = \\ln \\frac{x}{W(x)} &&\\text{for } x \\geq -\\frac1e, \\\\[5pt]\n& W\\left( \\frac{nx^n}{W\\left(x\\right)^{n-1}} \\right) = n W(x) &&\\text{for } n, x > 0\n\\end{align}\n"
},
{
"math_id": 40,
"text": "W(x) + W(y) = W\\left(x y \\left(\\frac{1}{W(x)} + \\frac{1}{W(y)}\\right)\\right) \\quad \\text{for } x, y > 0."
},
{
"math_id": 41,
"text": "\\begin{align}\nW_0\\left(-\\frac{\\ln x}{x}\\right) &= -\\ln x &\\text{for } 0 &< x \\leq e,\\\\[5pt]\nW_{-1}\\left(-\\frac{\\ln x}{x}\\right) &= -\\ln x &\\text{for } x &> e.\n\\end{align}"
},
{
"math_id": 42,
"text": "\\begin{align}h(x) & = e^{-W(-\\ln x)}\\\\\n & = \\frac{W(-\\ln x)}{-\\ln x} \\quad \\text{for } x \\neq 1.\n\\end{align}"
},
{
"math_id": 43,
"text": "W_0\\left(-\\frac{\\pi}{2}\\right) = \\frac{i\\pi}{2}"
},
{
"math_id": 44,
"text": "W_0\\left(-\\frac{1}{e}\\right) = -1"
},
{
"math_id": 45,
"text": "W_0\\left(2 \\ln 2 \\right) = \\ln 2"
},
{
"math_id": 46,
"text": "W_0\\left(x \\ln x \\right) = \\ln x \\quad \\left(x \\geqslant \\tfrac{1}{e} \\approx 0.36788\\right)"
},
{
"math_id": 47,
"text": "W_0\\left(x^{x+1} \\ln x \\right) = x \\ln x \\quad \\left(x > 0\\right)"
},
{
"math_id": 48,
"text": "W_0(0) = 0"
},
{
"math_id": 49,
"text": "W_0(1) = \\Omega = \\left(\\int_{-\\infty}^{\\infty} \\frac{dt}{\\left(e^t-t\\right)^2 + \\pi^2}\\right)^{\\!-1}\\!\\!\\!\\!-\\,1\\approx 0.56714329 \\quad"
},
{
"math_id": 50,
"text": "W_0(1) = e^{-W_0(1)} = \\ln\\frac{1}{W_0(1)} = -\\ln W_0(1)"
},
{
"math_id": 51,
"text": "W_0(e) = 1"
},
{
"math_id": 52,
"text": "W_0\\left(e^{1+e}\\right) = e"
},
{
"math_id": 53,
"text": "W_0\\left(\\frac{\\sqrt{e}}{2}\\right) = \\frac{1}{2}"
},
{
"math_id": 54,
"text": "W_0\\left(\\frac{\\sqrt[n]{e}}{n}\\right) = \\frac{1}{n}"
},
{
"math_id": 55,
"text": "W_0(-1) \\approx -0.31813+1.33723i"
},
{
"math_id": 56,
"text": "W_{-1}\\left(-\\frac{\\ln 2}{2}\\right) = -\\ln 4"
},
{
"math_id": 57,
"text": "-\\frac{\\pi}{2}W_0(-x)=\\int_0^\\pi\\frac{\\sin\\left(\\tfrac32 t\\right)-xe^{\\cos t}\\sin\\left(\\tfrac52 t-\\sin t\\right)}{1-2xe^{\\cos t}\\cos(t-\\sin t)+x^2e^{2\\cos t}}\\sin\\left(\\tfrac12 t\\right)\\,dt \\quad \\text{for } |x| < \\frac1{e}.\n"
},
{
"math_id": 58,
"text": "W_0(x)=\\frac{1}{\\pi}\\int_0^\\pi\\ln\\left(1+x\\frac{\\sin t}{t}e^{t\\cot t}\\right)dt."
},
{
"math_id": 59,
"text": "\nW_0(x) = \\cfrac{x}{1+\\cfrac{x}{1+\\cfrac{x}{2+\\cfrac{5x}{3+\\cfrac{17x}{10+\\cfrac{133x}{17+\\cfrac{1927x}{190+\\cfrac{13582711x}{94423+\\ddots}}}}}}}}.\n"
},
{
"math_id": 60,
"text": "W_0(x) = \\cfrac{x}{\\exp \\cfrac{x}{\\exp \\cfrac{x}{\\ddots}}}."
},
{
"math_id": 61,
"text": "W_0(x) = \\ln \\cfrac{x}{\\ln \\cfrac{x}{\\ln \\cfrac{x}{\\ddots}}}."
},
{
"math_id": 62,
"text": "\\begin{align}\n& \\int_0^\\pi W_0\\left( 2\\cot^2x \\right)\\sec^2 x\\,dx = 4\\sqrt{\\pi}. \\\\[5pt]\n& \\int_0^\\infty \\frac{W_0(x)}{x\\sqrt{x}}\\,dx = 2\\sqrt{2\\pi}. \\\\[5pt]\n& \\int_0^\\infty W_0\\left(\\frac{1}{x^2}\\right)\\,dx = \\sqrt{2\\pi}.\n\\end{align}"
},
{
"math_id": 63,
"text": "\\begin{align}\nx & =ue^u, \\\\[5pt]\n\\frac{dx}{du} & =(u+1)e^u.\n\\end{align}"
},
{
"math_id": 64,
"text": "\\begin{align}\n\\int_0^\\infty \\frac{W_0(x)}{x\\sqrt{x}}\\,dx &=\\int_0^\\infty \\frac{u}{ue^{u}\\sqrt{ue^{u}}}(u+1)e^u \\, du \\\\[5pt]\n&=\\int_0^\\infty \\frac{u+1}{\\sqrt{ue^u}}du \\\\[5pt]\n&=\\int_0^\\infty \\frac{u+1}{\\sqrt{u}}\\frac{1}{\\sqrt{e^u}}du\\\\[5pt]\n&=\\int_0^\\infty u^\\tfrac12 e^{-\\frac{u}{2}}du+\\int_0^\\infty u^{-\\tfrac12} e^{-\\frac{u}{2}}du\\\\[5pt]\n&=2\\int_0^\\infty (2w)^\\tfrac12 e^{-w} \\, dw+2\\int_0^\\infty (2w)^{-\\tfrac12} e^{-w} \\, dw && \\quad (u =2w) \\\\[5pt]\n&=2\\sqrt{2}\\int_0^\\infty w^\\tfrac12 e^{-w} \\, dw + \\sqrt{2} \\int_0^\\infty w^{-\\tfrac12} e^{-w} \\, dw \\\\[5pt]\n&=2\\sqrt{2} \\cdot \\Gamma \\left (\\tfrac32 \\right )+\\sqrt{2} \\cdot \\Gamma \\left (\\tfrac12 \\right ) \\\\[5pt]\n&=2\\sqrt{2} \\left (\\tfrac12\\sqrt{\\pi} \\right )+\\sqrt{2}\\left(\\sqrt{\\pi}\\right) \\\\[5pt]\n&=2\\sqrt{2\\pi}.\n\\end{align}"
},
{
"math_id": 65,
"text": "\\begin{align}\nW_0(z)&=\\frac{z}{2\\pi}\\int_{-\\pi}^\\pi\\frac{\\left(1-\\nu\\cot\\nu\\right)^2+\\nu^2}{z+\\nu\\csc\\nu e^{-\\nu\\cot\\nu}} \\, d\\nu \\\\[5pt]\n&= \\frac{z}{\\pi} \\int_0^\\pi \\frac{\\left(1-\\nu\\cot\\nu\\right)^2+\\nu^2}{z + \\nu \\csc\\nu e^{-\\nu\\cot\\nu}} \\, d\\nu,\n\\end{align}"
},
{
"math_id": 66,
"text": "\\int \\frac{ W(x) }{x} \\, dx \\; = \\; \\frac{ W(x)^2}{2} + W(x) + C "
},
{
"math_id": 67,
"text": " u= W(x) \\rightarrow ue^u=x \\;\\;\\;\\; \\frac{ d }{ du } ue^u = (u+1)e^u "
},
{
"math_id": 68,
"text": "\\int \\frac{ W(x) }{x} \\, dx \\; = \\; \\int \\frac{u}{ue^u}(u+1)e^u \\, du "
},
{
"math_id": 69,
"text": "\\int \\frac{ W(x) }{x} \\, dx \\; = \\; \\int \\frac{ \\cancel{\\color{OliveGreen}{u}} }{ \\cancel{\\color{OliveGreen}{u}} \\cancel{\\color{BrickRed}{e^u}} }\\left(u+1\\right)\\cancel{\\color{BrickRed}{e^u}} \\, du "
},
{
"math_id": 70,
"text": "\\int \\frac{ W(x) }{x} \\, dx \\; = \\; \\int (u+1) \\, du "
},
{
"math_id": 71,
"text": "\\int \\frac{ W(x) }{x} \\, dx \\; = \\; \\frac{u^2}{2} + u + C "
},
{
"math_id": 72,
"text": " u= W(x) "
},
{
"math_id": 73,
"text": "\\int \\frac{ W(x) }{x} \\, dx \\; = \\; \\frac{ W(x) ^2}{2} + W(x) + C "
},
{
"math_id": 74,
"text": " W(x) e^{ W(x) } = x \\rightarrow \\frac{ W(x) }{ x } = e^{ - W(x) } "
},
{
"math_id": 75,
"text": "\\int \\frac{ W(x) }{x} \\, dx \\; = \\; \\int e^{ - W(x) } \\, dx "
},
{
"math_id": 76,
"text": " u= W(x) \\rightarrow ue^u=x \\;\\;\\;\\; \\frac{ d }{ \\, du } ue^u = \\left(u+1\\right)e^u "
},
{
"math_id": 77,
"text": "\\int \\frac{ W(x) }{x} \\, dx \\; = \\; \\int e^{-u} (u+1) e^u \\, du "
},
{
"math_id": 78,
"text": "\\int \\frac{ W(x) }{x} \\, dx \\; = \\; \\int \\cancel{\\color{OliveGreen}{e^{ -u } }} \\left( u+1 \\right) \\cancel{\\color{OliveGreen}{ e^u }} \\, du "
},
{
"math_id": 79,
"text": "\\int W\\left(A e^{Bx}\\right) \\, dx \\; = \\; \\frac{ W\\left(A e^{Bx}\\right) ^2}{2B} + \\frac{ W\\left(A e^{Bx}\\right) }{B} + C "
},
{
"math_id": 80,
"text": "\\int W\\left(A e^{Bx}\\right) \\, dx \\; = \\; \\int W\\left(A e^{Bx}\\right) \\, dx "
},
{
"math_id": 81,
"text": " u = Bx \\rightarrow \\frac{u}{B} = x \\;\\;\\;\\; \\frac{ d }{ du } \\frac{u}{B} = \\frac{1}{B} "
},
{
"math_id": 82,
"text": "\\int W\\left(A e^{Bx}\\right) \\, dx \\; = \\; \\int W\\left(A e^{u}\\right) \\frac{1}{B} du "
},
{
"math_id": 83,
"text": " v = e^u \\rightarrow \\ln\\left(v\\right) = u \\;\\;\\;\\; \\frac{ d }{ dv } \\ln\\left(v\\right) = \\frac{1}{v} "
},
{
"math_id": 84,
"text": "\\int W\\left(A e^{Bx}\\right) \\, dx \\; = \\; \\frac{1}{B} \\int \\frac{W\\left(A v\\right)}{v} dv "
},
{
"math_id": 85,
"text": " w = Av \\rightarrow \\frac{w}{A} = v \\;\\;\\;\\; \\frac{ d }{ dw } \\frac{w}{A} = \\frac{1}{A} "
},
{
"math_id": 86,
"text": "\\int W\\left(A e^{Bx}\\right) \\, dx \\; = \\; \\frac{1}{B} \\int \\frac{\\cancel{\\color{OliveGreen}{A}} W(w)}{w} \\cancel{\\color{OliveGreen}{ \\frac{1}{A} }} dw "
},
{
"math_id": 87,
"text": " t = W\\left(w\\right) \\rightarrow te^t=w \\;\\;\\;\\; \\frac{ d }{ dt } te^t = \\left(t+1\\right)e^t "
},
{
"math_id": 88,
"text": "\\int W\\left(A e^{Bx}\\right) \\, dx \\; = \\; \\frac{1}{B} \\int \\frac{t}{te^t}\\left(t+1\\right)e^t dt "
},
{
"math_id": 89,
"text": "\\int W\\left(A e^{Bx}\\right) \\, dx \\; = \\; \\frac{1}{B} \\int \\frac{ \\cancel{\\color{OliveGreen}{t}} }{ \\cancel{\\color{OliveGreen}{t}} \\cancel{\\color{BrickRed}{e^t}} }\\left(t+1\\right) \\cancel{\\color{BrickRed}{e^t}} dt "
},
{
"math_id": 90,
"text": "\\int W\\left(A e^{Bx}\\right) \\, dx \\; = \\; \\frac{1}{B} \\int (t+1) dt "
},
{
"math_id": 91,
"text": " \\int W\\left(A e^{Bx}\\right) \\, dx \\; = \\; \\frac{t^2}{2B} + \\frac{t}{B} + C "
},
{
"math_id": 92,
"text": " t = W\\left(w\\right) "
},
{
"math_id": 93,
"text": " \\int W\\left(A e^{Bx}\\right) \\, dx \\; = \\; \\frac{ W\\left(w\\right) ^2}{2B} + \\frac{ W\\left(w\\right) }{B} + C "
},
{
"math_id": 94,
"text": " w = Av "
},
{
"math_id": 95,
"text": " \\int W\\left(A e^{Bx}\\right) \\, dx \\; = \\; \\frac{ W\\left(Av\\right) ^2}{2B} + \\frac{ W\\left(Av\\right) }{B} + C "
},
{
"math_id": 96,
"text": " v = e^u "
},
{
"math_id": 97,
"text": " \\int W\\left(A e^{Bx}\\right) \\, dx \\; = \\; \\frac{ W\\left(Ae^u\\right) ^2}{2B} + \\frac{ W\\left(Ae^u\\right) }{B} + C "
},
{
"math_id": 98,
"text": " u = Bx "
},
{
"math_id": 99,
"text": " \\int W\\left(A e^{Bx}\\right) \\, dx \\; = \\; \\frac{ W\\left(Ae^{Bx}\\right) ^2}{2B} + \\frac{ W\\left(Ae^{Bx}\\right) }{B} + C "
},
{
"math_id": 100,
"text": " \\int \\frac{ W(x) }{x^2} \\, dx \\; = \\; \\operatorname{Ei}\\left(- W(x) \\right) - e^{ - W(x) } + C "
},
{
"math_id": 101,
"text": " u = W(x)"
},
{
"math_id": 102,
"text": " ue^u = x "
},
{
"math_id": 103,
"text": "\\frac{ d }{ du } ue^u = \\left(u+1\\right)e^u "
},
{
"math_id": 104,
"text": "\\begin{align}\n\\int \\frac{ W(x) }{x^2} \\, dx \\;\n& = \\; \\int \\frac{ u }{ \\left(ue^u\\right)^2 } \\left(u+1\\right)e^u du \\\\\n& = \\; \\int \\frac{ u+1 }{ ue^u } du \\\\\n& = \\; \\int \\frac{ u }{ ue^u } du \\; + \\; \\int \\frac{ 1 }{ ue^u } du \\\\\n& = \\; \\int e^{-u} du \\; + \\; \\int \\frac{ e^{-u} }{ u } du\n\\end{align}"
},
{
"math_id": 105,
"text": " v = -u \\rightarrow -v = u \\;\\;\\;\\; \\frac{ d }{ dv } -v = -1 "
},
{
"math_id": 106,
"text": " \\int \\frac{ W(x) }{x^2} \\, dx \\; = \\; \\int e^{v} \\left(-1\\right) dv \\; + \\; \\int \\frac{ e^{-u} }{ u } du "
},
{
"math_id": 107,
"text": " \\int \\frac{ W(x) }{x^2} \\, dx \\; = \\; - e^v + \\operatorname{Ei}\\left(-u\\right) + C "
},
{
"math_id": 108,
"text": " v = -u "
},
{
"math_id": 109,
"text": " \\int \\frac{ W(x) }{x^2} \\, dx \\; = \\; - e^{-u} + \\operatorname{Ei}\\left(-u\\right) + C "
},
{
"math_id": 110,
"text": " u = W(x) "
},
{
"math_id": 111,
"text": "\\begin{align}\n \\int \\frac{ W(x) }{x^2} \\, dx \\; &= \\; - e^{- W(x) } + \\operatorname{Ei}\\left(- W(x) \\right) + C \\\\\n &= \\; \\operatorname{Ei}\\left(- W(x) \\right) - e^{- W(x) } + C\n\\end{align}"
},
{
"math_id": 112,
"text": "3^x=2x+2"
},
{
"math_id": 113,
"text": "\\begin{align} &(x+1)\\ 3^{-x}=\\frac{1}{2} & (\\mbox{multiply by } 3^{-x}/2) \\\\\n\\Leftrightarrow\\ &(-x-1)\\ 3^{-x-1} = -\\frac{1}{6} & (\\mbox{multiply by } {-}1/3) \\\\\n\\Leftrightarrow\\ &(\\ln 3) (-x-1)\\ e^{(\\ln 3)(-x-1)} = -\\frac{\\ln 3}{6} & (\\mbox{multiply by } \\ln 3)\n\\end{align}"
},
{
"math_id": 114,
"text": "(\\ln 3) (-x-1) = W_0\\left(\\frac{-\\ln 3}{6}\\right) \\ \\ \\ \\textrm{ or }\\ \\ \\ (\\ln 3) (-x-1) = W_{-1}\\left(\\frac{-\\ln 3}{6}\\right) "
},
{
"math_id": 115,
"text": "x= -1-\\frac{W_0\\left(-\\frac{\\ln 3}{6}\\right)}{\\ln 3} = -0.79011\\ldots \\ \\ \\textrm{ or }\\ \\ x= -1-\\frac{W_{-1}\\left(-\\frac{\\ln 3}{6}\\right)}{\\ln 3} = 1.44456\\ldots"
},
{
"math_id": 116,
"text": "x = a+b\\,e^{cx}"
},
{
"math_id": 117,
"text": "x=a-\\frac{1}{c}W(-bc\\,e^{ac})"
},
{
"math_id": 118,
"text": "H(x)= 1 + W \\left((H(0) -1) e^{(H(0)-1)-\\frac{x}{L}}\\right),"
},
{
"math_id": 119,
"text": "\\begin{align}\nQ_\\text{turb} &= \\frac{Q_i}{\\zeta_i} W_0\\left[\\zeta_i \\, e^{(\\zeta_i+\\beta t/b)}\\right]\\\\\nQ_\\text{lam} &= \\frac{Q_i}{\\xi_i} W_0\\left[\\xi_i \\, e^{\\left(\\xi_i+\\beta t/(b-\\Gamma_1)\\right)}\\right]\n\\end{align}"
},
{
"math_id": 120,
"text": "Q_i"
},
{
"math_id": 121,
"text": "t"
},
{
"math_id": 122,
"text": "k"
},
{
"math_id": 123,
"text": "C_L"
},
{
"math_id": 124,
"text": "\\begin{align}\n& k = \\frac{W_0(Z)}{\\ln(1-fs)} \\\\\n& C_L=\\frac{C_0}{(1-fs)} e^{W_0(Z)}\\\\\n& Z = \\frac{C_S}{C_0} (1-fs) \\ln(1-fs) \n\\end{align}\n"
},
{
"math_id": 125,
"text": " Y = \\frac{X}{1-e^X}"
},
{
"math_id": 126,
"text": " X(Y) = \\begin{cases}\nW_{-1}\\left( Y e^Y\\right) - W_0\\left( Y e^Y\\right) = Y - W_0\\left( Y e^Y\\right) &\\text{for }Y < -1,\\\\\nW_0\\left( Y e^Y\\right) - W_{-1}\\left( Y e^Y\\right) = Y - W_{-1}\\left(Y e^Y\\right) &\\text{for }-1 < Y < 0.\n\\end{cases}"
},
{
"math_id": 127,
"text": " V = \\frac{V_0}{1+W \\left(e^{-\\frac{x}{\\sigma}}\\right)}."
},
{
"math_id": 128,
"text": " z = W \\left(e^{-\\frac{x}{\\sigma}}\\right)."
},
{
"math_id": 129,
"text": "\\alpha_\\text{s}"
},
{
"math_id": 130,
"text": "\\ln K=\\frac{a}{T}+b+c\\ln T"
},
{
"math_id": 131,
"text": "\\begin{align}\n-a&=(b-\\ln K)T+cT\\ln T\\\\\n&=(b-\\ln K)e^L+cLe^L\\\\[5pt]\n-\\frac{a}{c}&=\\left(\\frac{b-\\ln K}{c}+L\\right)e^L\\\\[5pt]\n-\\frac{a}{c}e^\\frac{b-\\ln K}{c}&=\\left(L+\\frac{b-\\ln K}{c}\\right)e^{L+\\frac{b-\\ln K}{c}}\\\\[5pt]\nL&=W\\left(-\\frac{a}{c}e^\\frac{b-\\ln K}{c}\\right)+\\frac{\\ln K-b}{c}\\\\[5pt]\nT&=\\exp\\left(W\\left(-\\frac{a}{c}e^\\frac{b-\\ln K}{c}\\right)+\\frac{\\ln K-b}{c}\\right).\n\\end{align}"
},
{
"math_id": 132,
"text": "\\nu _{\\max }/T=\\alpha =\\mathrm{const}"
},
{
"math_id": 133,
"text": "x=h\\nu _{\\max } / k_\\mathrm{B}T"
},
{
"math_id": 134,
"text": "d\\rho _{T}\\left( x\\right) /dx=0"
},
{
"math_id": 135,
"text": "\\rho_{T}"
},
{
"math_id": 136,
"text": "e^{-x}=1-\\frac{x}{D}"
},
{
"math_id": 137,
"text": "D"
},
{
"math_id": 138,
"text": "x=D+W\\left( -De^{-D}\\right)"
},
{
"math_id": 139,
"text": "x^2+(1-\\varepsilon^2)y^2 =\\varepsilon^2"
},
{
"math_id": 140,
"text": "(0, 0)"
},
{
"math_id": 141,
"text": "\\varepsilon"
},
{
"math_id": 142,
"text": "\\left ( \\frac{1}{y}+y \\right )dy=\\left ( \\frac{1}{x}-x \\right )dx"
},
{
"math_id": 143,
"text": "y^2="
},
{
"math_id": 144,
"text": "W_0(x^2\\exp(-2C-x^2))"
},
{
"math_id": 145,
"text": " x = r + \\frac{1}{c} W\\left( \\frac{c\\,e^{-c r}}{a_0} \\right). "
},
{
"math_id": 146,
"text": "w_{j+1}=w_j-\\frac{w_j e^{w_j}-z}{e^{w_j}+w_j e^{w_j}}."
},
{
"math_id": 147,
"text": "\nw_{j+1}=w_j-\\frac{w_j e^{w_j}-z}{e^{w_j}\\left(w_j+1\\right)-\\dfrac{\\left(w_j+2\\right)\\left(w_je^{w_j}-z\\right)}{2w_j+2}}\n"
},
{
"math_id": 148,
"text": "x \\ge -1/e"
},
{
"math_id": 149,
"text": "w_{n+1} (x) = \\frac{w_{n} (x)}{1 + w_{n} (x)} \\left( 1 + \\log \\left(\\frac{x}{w_{n} (x)} \\right) \\right)."
},
{
"math_id": 150,
"text": "w_0 (x)"
},
{
"math_id": 151,
"text": "W_0:"
},
{
"math_id": 152,
"text": "x \\in (e,\\infty)"
},
{
"math_id": 153,
"text": "w_0 (x) = \\log(x) - \\log(\\log(x)),"
},
{
"math_id": 154,
"text": "x \\in (0, e):"
},
{
"math_id": 155,
"text": "w_0 (x) = x/e,"
},
{
"math_id": 156,
"text": "x \\in (-1/e, 0):"
},
{
"math_id": 157,
"text": "w_0 (x) = \\frac{ ex \\log(1+\\sqrt{1+ex}) }{ 1+ ex + \\sqrt{1+ex} },"
},
{
"math_id": 158,
"text": "W_{-1}:"
},
{
"math_id": 159,
"text": "x \\in (-1/4, 0):"
},
{
"math_id": 160,
"text": "w_0 (x) = \\log(-x) - \\log(-\\log(-x)),"
},
{
"math_id": 161,
"text": "x \\in (-1/e, -1/4]:"
},
{
"math_id": 162,
"text": "w_0 (x) = -1 - \\sqrt{2}\\sqrt{1+ex},"
},
{
"math_id": 163,
"text": "0 < W_0 (x) - w_n(x) < \\left( \\log(1+1/e) \\right)^{2^n},"
},
{
"math_id": 164,
"text": "x \\in (0, e)"
},
{
"math_id": 165,
"text": "0 < W_0 (x) - w_n(x) < \\frac{\\left( 1 - 1/e \\right)^{2^n-1}}{5},"
},
{
"math_id": 166,
"text": "W_0"
},
{
"math_id": 167,
"text": "0 < w_n(x) - W_0 (x) < \\left( 1/10 \\right)^{2^n},"
},
{
"math_id": 168,
"text": "W_{-1}"
},
{
"math_id": 169,
"text": "0 < W_{-1} (x) - w_n(x) < \\left( 1/2 \\right)^{2^n}."
}
] | https://en.wikipedia.org/wiki?curid=92465 |
9247 | Epistemology | Philosophical study of knowledge
Epistemology is the branch of philosophy that examines the nature, origin, and limits of knowledge. Also called theory of knowledge, it explores different types of knowledge, such as propositional knowledge about facts, practical knowledge in the form of skills, and knowledge by acquaintance as a familiarity through experience. Epistemologists study the concepts of belief, truth, and justification to understand the nature of knowledge. To discover how knowledge arises, they investigate sources of justification, such as perception, introspection, memory, reason, and testimony.
The school of skepticism questions the human ability to attain knowledge while fallibilism says that knowledge is never certain. Empiricists hold that all knowledge comes from sense experience, whereas rationalists believe that some knowledge does not depend on it. Coherentists argue that a belief is justified if it coheres with other beliefs. Foundationalists, by contrast, maintain that the justification of basic beliefs does not depend on other beliefs. Internalism and externalism disagree about whether justification is determined solely by mental states or also by external circumstances.
Separate branches of epistemology are dedicated to knowledge found in specific fields, like scientific, mathematical, moral, and religious knowledge. Naturalized epistemology relies on empirical methods and discoveries, whereas formal epistemology uses formal tools from logic. Social epistemology investigates the communal aspect of knowledge and historical epistemology examines its historical conditions. Epistemology is closely related to psychology, which describes the beliefs people hold, while epistemology studies the norms governing the evaluation of beliefs. It also intersects with fields such as decision theory, education, and anthropology.
Early reflections on the nature, sources, and scope of knowledge are found in ancient Greek and ancient Indian philosophy. The relation between reason and faith was a central topic in the medieval period. The modern era was characterized by the contrasting perspectives of empiricism and rationalism. Epistemologists in the 20th century examined the components, structure, and value of knowledge while integrating insights from the natural sciences and linguistics.
Definition.
Epistemology is the philosophical study of knowledge. Also called "theory of knowledge", it examines what knowledge is and what types of knowledge there are. It further investigates the sources of knowledge, like perception, inference, and testimony, to determine how knowledge is created. Another topic is the extent and limits of knowledge, confronting questions about what people can and cannot know. Other central concepts include belief, truth, justification, evidence, and reason. Epistemology is one of the main branches of philosophy besides fields like ethics, logic, and metaphysics. The term is also used in a slightly different sense to refer not to the branch of philosophy but to a particular position within that branch, as in Plato's epistemology and Immanuel Kant's epistemology.
As a normative field of inquiry, epistemology explores how people should acquire beliefs. This way, it determines which beliefs fulfill the standards or epistemic goals of knowledge and which ones fail, thereby providing an evaluation of beliefs. Descriptive fields of inquiry, like psychology and cognitive sociology, are also interested in beliefs and related cognitive processes. Unlike epistemology, they study the beliefs people have and how people acquire them instead of examining the evaluative norms of these processes. Epistemology is relevant to many descriptive and normative disciplines, such as the other branches of philosophy and the sciences, by exploring the principles of how they may arrive at knowledge.
The word "epistemology" comes from the ancient Greek terms (episteme, meaning "knowledge" or "understanding") and (logos, meaning "study of" or "reason"), literally, the study of knowledge. Yet, even though ancient Greek philosophers practiced what is today considered epistemology, they did not understand their investigations in these terms. The word was only coined in the 19th century to label this field and conceive it as a distinct branch of philosophy.
Central concepts.
Knowledge.
Knowledge is an awareness, familiarity, understanding, or skill. Its various forms all involve a cognitive success through which a person establishes epistemic contact with reality. Knowledge is typically understood as an aspect of individuals, generally as a cognitive mental state that helps them understand, interpret, and interact with the world. While this core sense is of particular interest to epistemologists, the term also has other meanings. Understood on a social level, knowledge is a characteristic of a group of people that share ideas, understanding, or culture in general. The term can also refer to information stored in documents, such as "knowledge housed in the library" or knowledge stored in computers in the form of the knowledge base of an expert system.
Knowledge contrasts with ignorance, which is often simply defined as the absence of knowledge. Knowledge is usually accompanied by ignorance since people rarely have complete knowledge of a field, forcing them to rely on incomplete or uncertain information when making decisions. Even though many forms of ignorance can be mitigated through education and research, there are certain limits to human understanding that are responsible for inevitable ignorance. Some limitations are inherent in the human cognitive faculties themselves, such as the inability to know facts too complex for the human mind to conceive. Others depend on external circumstances when no access to the relevant information exists.
Epistemologists disagree on how much people know, for example, whether fallible beliefs about everyday affairs can amount to knowledge or whether absolute certainty is required. The most stringent position is taken by radical skeptics, who argue that there is no knowledge at all.
Types.
Epistemologists distinguish between different types of knowledge. Their primary interest is in knowledge of facts, called "propositional knowledge". It is a theoretical knowledge that can be expressed in declarative sentences using a that-clause, like "Ravi knows that kangaroos hop". For this reason, it is also called "knowledge-that". Epistemologists often understand it as a relation between a knower and a known proposition, in the case above between the person Ravi and the proposition "kangaroos hop". It is use-independent since it is not tied to one specific purpose. It is a mental representation that relies on concepts and ideas to depict reality. Because of its theoretical nature, it is often held that only relatively sophisticated creatures, such as humans, possess propositional knowledge.
Propositional knowledge contrasts with non-propositional knowledge in the form of knowledge-how and knowledge by acquaintance. Knowledge-how is a practical ability or skill, like knowing how to read or how to prepare lasagna. It is usually tied to a specific goal and not mastered in the abstract without concrete practice. To know something by acquaintance means to be familiar with it as a result of experiential contact. Examples are knowing the city of Perth, knowing the taste of tsampa, and knowing Marta Vieira da Silva personally.
Another influential distinction is between "a posteriori" and "a priori" knowledge. "A posteriori" knowledge is knowledge of empirical facts based on sensory experience, like seeing that the sun is shining and smelling that a piece of meat has gone bad. Knowledge belonging to the empirical sciences and knowledge of everyday affairs belong to "a posteriori" knowledge. "A priori" knowledge is knowledge of non-empirical facts and does not depend on evidence from sensory experience. It belongs to fields such as mathematics and logic, like knowing that formula_0. The contrast between "a posteriori" and "a priori" knowledge plays a central role in the debate between empiricists and rationalists on whether all knowledge depends on sensory experience.
A closely related contrast is between analytic and synthetic truths. A sentence is analytically true if its truth depends only on the meaning of the words it uses. For instance, the sentence "all bachelors are unmarried" is analytically true because the word "bachelor" already includes the meaning "unmarried". A sentence is synthetically true if its truth depends on additional facts. For example, the sentence "snow is white" is synthetically true because its truth depends on the color of snow in addition to the meanings of the words "snow" and "white". "A priori" knowledge is primarily associated with analytic sentences while "a posteriori" knowledge is primarily associated with synthetic sentences. However, it is controversial whether this is true for all cases. Some philosophers, such as Willard Van Orman Quine, reject the distinction, saying that there are no analytic truths.
Analysis.
The analysis of knowledge is the attempt to identify the essential components or conditions of all and only propositional knowledge states. According to the so-called "traditional analysis", knowledge has three components: it is a belief that is justified and true. In the second half of the 20th century, this view was put into doubt by a series of thought experiments that aimed to show that some justified true beliefs do not amount to knowledge. In one of them, a person is unaware of all the fake barns in their area. By coincidence, they stop in front of the only real barn and form a justified true belief that it is a real barn. Many epistemologists agree that this is not knowledge because the justification is not directly relevant to the truth. More specifically, this and similar counterexamples involve some form of epistemic luck, that is, a cognitive success that results from fortuitous circumstances rather than competence.
Following these thought experiments, philosophers proposed various alternative definitions of knowledge by modifying or expanding the traditional analysis. According to one view, the known fact has to cause the belief in the right way. Another theory states that the belief is the product of a reliable belief formation process. Further approaches require that the person would not have the belief if it was false, that the belief is not inferred from a falsehood, that the justification cannot be undermined, or that the belief is infallible. There is no consensus on which of the proposed modifications and reconceptualizations is correct. Some philosophers, such as Timothy Williamson, reject the basic assumption underlying the analysis of knowledge by arguing that propositional knowledge is a unique state that cannot be dissected into simpler components.
Value.
The value of knowledge is the worth it holds by expanding understanding and guiding action. Knowledge can have instrumental value by helping a person achieve their goals. For example, knowledge of a disease helps a doctor cure their patient, and knowledge of when a job interview starts helps a candidate arrive on time. The usefulness of a known fact depends on the circumstances. Knowledge of some facts may have little to no use, like memorizing random phone numbers from an outdated phone book. Being able to assess the value of knowledge matters in choosing what information to acquire and transmit to others. It affects decisions like which subjects to teach at school and how to allocate funds to research projects.
Of particular interest to epistemologists is the question of whether knowledge is more valuable than a mere opinion that is true. Knowledge and true opinion often have a similar usefulness since both are accurate representations of reality. For example, if a person wants to go to Larissa, a true opinion about how to get there may help them in the same way as knowledge does. Plato already considered this problem and suggested that knowledge is better because it is more stable. Another suggestion focuses on practical reasoning. It proposes that people put more trust in knowledge than in mere true beliefs when drawing conclusions and deciding what to do. A different response says that knowledge has intrinsic value, meaning that it is good in itself independent of its usefulness.
Belief and truth.
Beliefs are mental states about what is the case, like believing that snow is white or that God exists. In epistemology, they are often understood as subjective attitudes that affirm or deny a proposition, which can be expressed in a declarative sentence. For instance, to believe that snow is white is to affirm the proposition "snow is white". According to this view, beliefs are representations of what the world is like. They are kept in memory and can be retrieved when actively thinking about reality or when deciding how to act. A different view understands beliefs as behavioral patterns or dispositions to act rather than as representational items stored in the mind. This view says that to believe that there is mineral water in the fridge is nothing more than a group of dispositions related to mineral water and the fridge. Examples are the dispositions to answer questions about the presence of mineral water affirmatively and to go to the fridge when thirsty. Some theorists deny the existence of beliefs, saying that this concept borrowed from folk psychology is an oversimplification of much more complex psychological processes. Beliefs play a central role in various epistemological debates, which cover their status as a component of propositional knowledge, the question of whether people have control over and are responsible for their beliefs, and the issue of whether there are degrees of beliefs, called credences.
As propositional attitudes, beliefs are true or false depending on whether they affirm a true or a false proposition. According to the correspondence theory of truth, to be true means to stand in the right relation to the world by accurately describing what it is like. This means that truth is objective: a belief is true if it corresponds to a fact. The coherence theory of truth says that a belief is true if it belongs to a coherent system of beliefs. A result of this view is that truth is relative since it depends on other beliefs. Further theories of truth include pragmatist, semantic, pluralist, and deflationary theories. Truth plays a central role in epistemology as a goal of cognitive processes and a component of propositional knowledge.
Justification.
In epistemology, justification is a property of beliefs that fulfill certain norms about what a person should believe. According to a common view, this means that the person has sufficient reasons for holding this belief because they have information that supports it. Another view states that a belief is justified if it is formed by a reliable belief formation process, such as perception. The terms "reasonable", "warranted", and "supported" are closely related to the idea of justification and are sometimes used as synonyms. Justification is what distinguishes justified beliefs from superstition and lucky guesses. However, justification does not guarantee truth. For example, if a person has strong but misleading evidence, they may form a justified belief that is false.
Epistemologists often identify justification as one component of knowledge. Usually, they are not only interested in whether a person has a sufficient reason to hold a belief, known as "propositional justification", but also in whether the person holds the belief because or based on this reason, known as "doxastic justification". For example, if a person has sufficient reason to believe that a neighborhood is dangerous but forms this belief based on superstition then they have propositional justification but lack doxastic justification.
Sources.
Sources of justification are ways or cognitive capacities through which people acquire justification. Often-discussed sources include perception, introspection, memory, reason, and testimony, but there is no universal agreement to what extent they all provide valid justification. Perception relies on sensory organs to gain empirical information. There are various forms of perception corresponding to different physical stimuli, such as visual, auditory, haptic, olfactory, and gustatory perception. Perception is not merely the reception of sense impressions but an active process that selects, organizes, and interprets sensory signals. Introspection is a closely related process focused not on external physical objects but on internal mental states. For example, seeing a bus at a bus station belongs to perception while feeling tired belongs to introspection.
Rationalists understand reason as a source of justification for non-empirical facts. It is often used to explain how people can know about mathematical, logical, and conceptual truths. Reason is also responsible for inferential knowledge, in which one or several beliefs are used as premises to support another belief. Memory depends on information provided by other sources, which it retains and recalls, like remembering a phone number perceived earlier. Justification by testimony relies on information one person communicates to another person. This can happen by talking to each other but can also occur in other forms, like a letter, a newspaper, and a blog.
Other concepts.
Rationality is closely related to justification and the terms "rational belief" and "justified belief" are sometimes used as synonyms. However, rationality has a wider scope that encompasses both a theoretical side, covering beliefs, and a practical side, covering decisions, intentions, and actions. There are different conceptions about what it means for something to be rational. According to one view, a mental state is rational if it is based on or responsive to good reasons. Another view emphasizes the role of coherence, stating that rationality requires that the different mental states of a person are consistent and support each other. A slightly different approach holds that rationality is about achieving certain goals. Two goals of theoretical rationality are accuracy and comprehensiveness, meaning that a person has as few false beliefs and as many true beliefs as possible.
Epistemic norms are criteria to assess the cognitive quality of beliefs, like their justification and rationality. Epistemologists distinguish between deontic norms, which are prescriptions about what people should believe or which beliefs are correct, and axiological norms, which identify the goals and values of beliefs. Epistemic norms are closely related to intellectual or epistemic virtues, which are character traits like open-mindedness and conscientiousness. Epistemic virtues help individuals form true beliefs and acquire knowledge. They contrast with epistemic vices and act as foundational concepts of virtue epistemology.
Evidence for a belief is information that favors or supports it. Epistemologists understand evidence primarily in terms of mental states, for example, as sensory impressions or as other propositions that a person knows. But in a wider sense, it can also include physical objects, like bloodstains examined by forensic analysts or financial records studied by investigative journalists. Evidence is often understood in terms of probability: evidence for a belief makes it more likely that the belief is true. A defeater is evidence against a belief or evidence that undermines another piece of evidence. For instance, witness testimony connecting a suspect to a crime is evidence for their guilt while an alibi is a defeater. Evidentialists analyze justification in terms of evidence by saying that to be justified, a belief needs to rest on adequate evidence.
The presence of evidence usually affects doubt and certainty, which are subjective attitudes toward propositions that differ regarding their level of confidence. Doubt involves questioning the validity or truth of a proposition. Certainty, by contrast, is a strong affirmative conviction, meaning that the person is free of doubt that the proposition is true. In epistemology, doubt and certainty play central roles in attempts to find a secure foundation of all knowledge and in skeptical projects aiming to establish that no belief is immune to doubt.
While propositional knowledge is the main topic in epistemology, some theorists focus on understanding rather than knowledge. Understanding is a more holistic notion that involves a wider grasp of a subject. To understand something, a person requires awareness of how different things are connected and why they are the way they are. For example, knowledge of isolated facts memorized from a textbook does not amount to understanding. According to one view, understanding is a special epistemic good that, unlike knowledge, is always intrinsically valuable. Wisdom is similar in this regard and is sometimes considered the highest epistemic good. It encompasses a reflective understanding with practical applications. It helps people grasp and evaluate complex situations and lead a good life.
Schools of thought.
Skepticism, fallibilism, and relativism.
Philosophical skepticism questions the human ability to arrive at knowledge. Some skeptics limit their criticism to certain domains of knowledge. For example, religious skeptics say that it is impossible to have certain knowledge about the existence of deities or other religious doctrines. Similarly, moral skeptics challenge the existence of moral knowledge and metaphysical skeptics say that humans cannot know ultimate reality.
Global skepticism is the widest form of skepticism, asserting that there is no knowledge in any domain. In ancient philosophy, this view was accepted by academic skeptics while Pyrrhonian skeptics recommended the suspension of belief to achieve a state of tranquility. Overall, not many epistemologists have explicitly defended global skepticism. The influence of this position derives mainly from attempts by other philosophers to show that their theory overcomes the challenge of skepticism. For example, René Descartes used methodological doubt to find facts that cannot be doubted.
One consideration in favor of global skepticism is the dream argument. It starts from the observation that, while people are dreaming, they are usually unaware of this. This inability to distinguish between dream and regular experience is used to argue that there is no certain knowledge since a person can never be sure that they are not dreaming. Some critics assert that global skepticism is a self-refuting idea because denying the existence of knowledge is itself a knowledge claim. Another objection says that the abstract reasoning leading to skepticism is not convincing enough to overrule common sense.
Fallibilism is another response to skepticism. Fallibilists agree with skeptics that absolute certainty is impossible. Most fallibilists disagree with skeptics about the existence of knowledge, saying that there is knowledge since it does not require absolute certainty. They emphasize the need to keep an open and inquisitive mind since doubt can never be fully excluded, even for well-established knowledge claims like thoroughly tested scientific theories.
Epistemic relativism is a related view. It does not question the existence of knowledge in general but rejects the idea that there are universal epistemic standards or absolute principles that apply equally to everyone. This means that what a person knows depends on the subjective criteria or social conventions used to assess epistemic status.
Empiricism and rationalism.
The debate between empiricism and rationalism centers on the origins of human knowledge. Empiricism emphasizes that sense experience is the primary source of all knowledge. Some empiricists express this view by stating that the mind is a blank slate that only develops ideas about the external world through the sense data it receives from the sensory organs. According to them, the mind can arrive at various additional insights by comparing impressions, combining them, generalizing to arrive at more abstract ideas, and deducing new conclusions from them. Empiricists say that all these mental operations depend on material from the senses and do not function on their own.
Even though rationalists usually accept sense experience as one source of knowledge, they also say that important forms of knowledge come directly from reason without sense experience, like knowledge of mathematical and logical truths. According to some rationalists, the mind possesses inborn ideas which it can access without the help of the senses. Others hold that there is an additional cognitive faculty, sometimes called rational intuition, through which people acquire nonempirical knowledge. Some rationalists limit their discussion to the origin of concepts, saying that the mind relies on inborn categories to understand the world and organize experience.
Foundationalism and coherentism.
Foundationalists and coherentists disagree about the structure of knowledge. Foundationalism distinguishes between basic and non-basic beliefs. A belief is basic if it is justified directly, meaning that its validity does not depend on the support of other beliefs. A belief is non-basic if it is justified by another belief. For example, the belief that it rained last night is a non-basic belief if it is inferred from the observation that the street is wet. According to foundationalism, basic beliefs are the foundation on which all other knowledge is built while non-basic beliefs constitute the superstructure resting on this foundation.
Coherentists reject the distinction between basic and non-basic beliefs, saying that the justification of any belief depends on other beliefs. They assert that a belief must be in tune with other beliefs to amount to knowledge. This is the case if the beliefs are consistent and support each other. According to coherentism, justification is a holistic aspect determined by the whole system of beliefs, which resembles an interconnected web.
The view of foundherentism is an intermediary position combining elements of both foundationalism and coherentism. It accepts the distinction between basic and non-basic beliefs while asserting that the justification of non-basic beliefs depends on coherence with other beliefs.
Infinitism presents another approach to the structure of knowledge. It agrees with coherentism that there are no basic beliefs while rejecting the view that beliefs can support each other in a circular manner. Instead, it argues that beliefs form infinite justification chains, in which each link of the chain supports the belief following it and is supported by the belief preceding it.
Internalism and externalism.
The disagreement between internalism and externalism is about the sources of justification. Internalists say that justification depends only on factors within the individual. Examples of such factors include perceptual experience, memories, and the possession of other beliefs. This view emphasizes the importance of the cognitive perspective of the individual in the form of their mental states. It is commonly associated with the idea that the relevant factors are accessible, meaning that the individual can become aware of their reasons for holding a justified belief through introspection and reflection.
Externalism rejects this view, saying that at least some relevant factors are external to the individual. This means that the cognitive perspective of the individual is less central while other factors, specifically the relation to truth, become more important. For instance, when considering the belief that a cup of coffee stands on the table, externalists are not only interested in the perceptual experience that led to this belief but also consider the quality of the person's eyesight, their ability to differentiate coffee from other beverages, and the circumstances under which they observed the cup.
Evidentialism is an influential internalist view. It says that justification depends on the possession of evidence. In this context, evidence for a belief is any information in the individual's mind that supports the belief. For example, the perceptual experience of rain is evidence for the belief that it is raining. Evidentialists have suggested various other forms of evidence, including memories, intuitions, and other beliefs. According to evidentialism, a belief is justified if the individual's evidence supports the belief and they hold the belief on the basis of this evidence.
Reliabilism is an externalist theory asserting that a reliable connection between belief and truth is required for justification. Some reliabilists explain this in terms of reliable processes. According to this view, a belief is justified if it is produced by a reliable belief-formation process, like perception. A belief-formation process is reliable if most of the beliefs it causes are true. A slightly different view focuses on beliefs rather than belief-formation processes, saying that a belief is justified if it is a reliable indicator of the fact it presents. This means that the belief tracks the fact: the person believes it because it is a fact but would not believe it otherwise.
Virtue epistemology is another type of externalism and is sometimes understood as a form of reliabilism. It says that a belief is justified if it manifests intellectual virtues. Intellectual virtues are capacities or traits that perform cognitive functions and help people form true beliefs. Suggested examples include faculties like vision, memory, and introspection.
Others.
In the epistemology of perception, direct and indirect realists disagree about the connection between the perceiver and the perceived object. Direct realists say that this connection is direct, meaning that there is no difference between the object present in perceptual experience and the physical object causing this experience. According to indirect realism, the connection is indirect since there are mental entities, like ideas or sense data, that mediate between the perceiver and the external world. The contrast between direct and indirect realism is important for explaining the nature of illusions.
Constructivism in epistemology is the theory that how people view the world is not a simple reflection of external reality but an invention or a social construction. This view emphasizes the creative role of interpretation while undermining objectivity since social constructions may differ from society to society.
According to contrastivism, knowledge is a comparative term, meaning that to know something involves distinguishing it from relevant alternatives. For example, if a person spots a bird in the garden, they may know that it is a sparrow rather than an eagle but they may not know that it is a sparrow rather than an indistinguishable sparrow hologram.
Epistemic conservatism is a view about belief revision. It gives preference to the beliefs a person already has, asserting that a person should only change their beliefs if they have a good reason to. One motivation for adopting epistemic conservatism is that the cognitive resources of humans are limited, meaning that it is not feasible to constantly reexamine every belief.
Pragmatist epistemology is a form of fallibilism that emphasizes the close relation between knowing and acting. It sees the pursuit of knowledge as an ongoing process guided by common sense and experience while always open to revision.
Bayesian epistemology is a formal approach based on the idea that people have degrees of belief representing how certain they are. It uses probability theory to define norms of rationality that govern how certain people should be about their beliefs.
Phenomenological epistemology emphasizes the importance of first-person experience. It distinguishes between the natural and the phenomenological attitudes. The natural attitude focuses on objects belonging to common sense and natural science. The phenomenological attitude focuses on the experience of objects and aims to provide a presuppositionless description of how objects appear to the observer.
Particularism and generalism disagree about the right method of conducting epistemological research. Particularists start their inquiry by looking at specific cases. For example, to find a definition of knowledge, they rely on their intuitions about concrete instances of knowledge and particular thought experiments. They use these observations as methodological constraints that any theory of more general principles needs to follow. Generalists proceed in the opposite direction. They give preference to general epistemic principles, saying that it is not possible to accurately identify and describe specific cases without a grasp of these principles. Other methods in contemporary epistemology aim to extract philosophical insights from ordinary language or look at the role of knowledge in making assertions and guiding actions.
Postmodern epistemology criticizes the conditions of knowledge in advanced societies. This concerns in particular the metanarrative of a constant progress of scientific knowledge leading to a universal and foundational understanding of reality. Feminist epistemology critiques the effect of gender on knowledge. Among other topics, it explores how preconceptions about gender influence who has access to knowledge, how knowledge is produced, and which types of knowledge are valued in society. Decolonial scholarship criticizes the global influence of Western knowledge systems, often with the aim of decolonizing knowledge to undermine Western hegemony.
Various schools of epistemology are found in traditional Indian philosophy. Many of them focus on the different sources of knowledge, called "pramāṇas". Perception, inference, and testimony are sources discussed by most schools. Other sources only considered by some schools are non-perception, which leads to knowledge of absences, and presumption. Buddhist epistemology tends to focus on immediate experience, understood as the presentation of unique particulars without the involvement of secondary cognitive processes, like thought and desire. Nyāya epistemology discusses the causal relation between the knower and the object of knowledge, which happens through reliable knowledge-formation processes. It sees perception as the primary source of knowledge, drawing a close connection between it and successful action. Mīmāṃsā epistemology understands the holy scriptures known as the Vedas as a key source of knowledge while discussing the problem of their right interpretation. Jain epistemology states that reality is many-sided, meaning that no single viewpoint can capture the entirety of truth.
Branches.
Some branches of epistemology focus on the problems of knowledge within specific academic disciplines. The epistemology of science examines how scientific knowledge is generated and what problems arise in the process of validating, justifying, and interpreting scientific claims. A key issue concerns the problem of how individual observations can support universal scientific laws. Further topics include the nature of scientific evidence and the aims of science. The epistemology of mathematics studies the origin of mathematical knowledge. In exploring how mathematical theories are justified, it investigates the role of proofs and whether there are empirical sources of mathematical knowledge.
Epistemological problems are found in most areas of philosophy. The epistemology of logic examines how people know that an argument is valid. For example, it explores how logicians justify that modus ponens is a correct rule of inference or that all contradictions are false. Epistemologists of metaphysics investigate whether knowledge of ultimate reality is possible and what sources this knowledge could have. Knowledge of moral statements, like the claim that lying is wrong, belongs to the epistemology of ethics. It studies the role of ethical intuitions, coherence among moral beliefs, and the problem of moral disagreement. The ethics of belief is a closely related field covering the interrelation between epistemology and ethics. It examines the norms governing belief formation and asks whether violating them is morally wrong.
Religious epistemology studies the role of knowledge and justification for religious doctrines and practices. It evaluates the weight and reliability of evidence from religious experience and holy scriptures while also asking whether the norms of reason should be applied to religious faith. Social epistemology focuses on the social dimension of knowledge. While traditional epistemology is mainly interested in knowledge possessed by individuals, social epistemology covers knowledge acquisition, transmission, and evaluation within groups, with specific emphasis on how people rely on each other when seeking knowledge. Historical epistemology examines how the understanding of knowledge and related concepts has changed over time. It asks whether the main issues in epistemology are perennial and to what extent past epistemological theories are relevant to contemporary debates. It is particularly concerned with scientific knowledge and practices associated with it. It contrasts with the history of epistemology, which presents, reconstructs, and evaluates epistemological theories of philosophers in the past.
Naturalized epistemology is closely associated with the natural sciences, relying on their methods and theories to examine knowledge. Naturalistic epistemologists focus on empirical observation to formulate their theories and are often critical of approaches to epistemology that proceed by "a priori" reasoning. Evolutionary epistemology is a naturalistic approach that understands cognition as a product of evolution, examining knowledge and the cognitive faculties responsible for it from the perspective of natural selection. Epistemologists of language explore the nature of linguistic knowledge. One of their topics is the role of tacit knowledge, for example, when native speakers have mastered the rules of grammar but are unable to explicitly articulate those rules. Epistemologists of modality examine knowledge about what is possible and necessary. Epistemic problems that arise when two people have diverging opinions on a topic are covered by the epistemology of disagreement. Epistemologists of ignorance are interested in epistemic faults and gaps in knowledge.
There are distinct areas of epistemology dedicated to specific sources of knowledge. Examples are the epistemology of perception, the epistemology of memory, and the epistemology of testimony.
Some branches of epistemology are characterized by their research method. Formal epistemology employs formal tools found in logic and mathematics to investigate the nature of knowledge. Experimental epistemologists rely in their research on empirical evidence about common knowledge practices. Applied epistemology focuses on the practical application of epistemological principles to diverse real-world problems, like the reliability of knowledge claims on the internet, how to assess sexual assault allegations, and how racism may lead to epistemic injustice.
Metaepistemologists examine the nature, goals, and research methods of epistemology. As a metatheory, it does not directly defend a position about which epistemological theories are correct but examines their fundamental concepts and background assumptions.
Related fields.
Epistemology and psychology were not defined as distinct fields until the 19th century; earlier investigations about knowledge often do not fit neatly into today's academic categories. Both contemporary disciplines study beliefs and the mental processes responsible for their formation and change. One important contrast is that psychology describes what beliefs people have and how they acquire them, thereby explaining why someone has a specific belief. The focus of epistemology is on evaluating beliefs, leading to a judgment about whether a belief is justified and rational in a particular case. Epistemology has a similar intimate connection to cognitive science, which understands mental events as processes that transform information. Artificial intelligence relies on the insights of epistemology and cognitive science to implement concrete solutions to problems associated with knowledge representation and automatic reasoning.
Logic is the study of correct reasoning. For epistemology, it is relevant to inferential knowledge, which arises when a person reasons from one known fact to another. This is the case, for example, if a person does not know directly that formula_1 but comes to infer it based on their knowledge that formula_2, formula_3, and formula_4. Whether an inferential belief amounts to knowledge depends on the form of reasoning used, in particular, that the process does not violate the laws of logic. Another overlap between the two fields is found in the epistemic approach to fallacy theory. Fallacies are faulty arguments based on incorrect reasoning. The epistemic approach to fallacies explains why they are faulty, stating that arguments aim to expand knowledge. According to this view, an argument is a fallacy if it fails to do so. A further intersection is found in epistemic logic, which uses formal logical devices to study epistemological concepts like "knowledge" and "belief".
Both decision theory and epistemology are interested in the foundations of rational thought and the role of beliefs. Unlike many approaches in epistemology, the main focus of decision theory lies less in the theoretical and more in the practical side, exploring how beliefs are translated into action. Decision theorists examine the reasoning involved in decision-making and the standards of good decisions. They identify beliefs as a central aspect of decision-making. One of their innovations is to distinguish between weaker and stronger beliefs. This helps them take the effect of uncertainty on decisions into consideration.
Epistemology and education have a shared interest in knowledge, with one difference being that education focuses on the transmission of knowledge, exploring the roles of both learner and teacher. Learning theory examines how people acquire knowledge. Behavioral learning theories explain the process in terms of behavior changes, for example, by associating a certain response with a particular stimulus. Cognitive learning theories study how the cognitive processes that affect knowledge acquisition transform information. Pedagogy looks at the transmission of knowledge from the teacher's side, exploring the teaching methods they may employ. In teacher-centered methods, the teacher takes the role of the main authority delivering knowledge and guiding the learning process. In student-centered methods, the teacher mainly supports and facilitates the learning process while the students take a more active role. The beliefs students have about knowledge, called "personal epistemology", affect their intellectual development and learning success.
The anthropology of knowledge examines how knowledge is acquired, stored, retrieved, and communicated. It studies the social and cultural circumstances that affect how knowledge is reproduced and changes, covering the role of institutions like university departments and scientific journals as well as face-to-face discussions and online communications. It understands knowledge in a wide sense that encompasses various forms of understanding and culture, including practical skills. Unlike epistemology, it is not interested in whether a belief is true or justified but in how understanding is reproduced in society. The sociology of knowledge is a closely related field with a similar conception of knowledge. It explores how physical, demographic, economic, and sociocultural factors impact knowledge. It examines in what sociohistorical contexts knowledge emerges and the effects it has on people, for example, how socioeconomic conditions are related to the dominant ideology in a society.
History.
Early reflections on the nature and sources of knowledge are found in ancient history. In ancient Greek philosophy, Plato (427–347 BCE) studied what knowledge is, examining how it differs from true opinion by being based on good reasons. According to him, the process of learning something is a form of recollection in which the soul remembers what it already knew before. Aristotle (384–322 BCE) was particularly interested in scientific knowledge, exploring the role of sensory experience and how to make inferences from general principles. The Hellenistic schools began to arise in the 4th century BCE. The Epicureans had an empiricist outlook, stating that sensations are always accurate and act as the supreme standard of judgments. The Stoics defended a similar position but limited themselves to lucid and specific sensations, which they regarded as true. The skeptics questioned whether knowledge is possible, recommending instead suspension of judgment to arrive at a state of tranquility.
The Upanishads, philosophical scriptures composed in ancient India between 700 and 300 BCE, examined how people acquire knowledge, including the role of introspection, comparison, and deduction. In the 6th century BCE, the school of Ajñana developed a radical skepticism questioning the possibility and usefulness of knowledge. The school of Nyaya emerged in the 2nd century BCE and provided a systematic treatment of how people acquire knowledge, distinguishing between valid and invalid sources. When Buddhist philosophers later became interested in epistemology, they relied on concepts developed in Nyaya and other traditions. Buddhist philosopher Dharmakirti (6th or 7th century CE) analyzed the process of knowing as a series of causally related events.
The relation between reason and faith was a central topic in the medieval period. In Islamic philosophy, al-Farabi (c. 870–950) and Averroes (1126–1198) discussed how philosophy and theology interact and which is the better vehicle to truth. Al-Ghazali (c. 1056–1111) criticized many of the core teachings of previous Islamic philosophers, saying that they rely on unproven assumptions that do not amount to knowledge. In Western philosophy, Anselm of Canterbury (1033–1109) proposed that theological teaching and philosophical inquiry are in harmony and complement each other. Peter Abelard (1079–1142) argued against unquestioned theological authorities and said that all things are open to rational doubt. Influenced by Aristotle, Thomas Aquinas (1225–1274) developed an empiricist theory, stating that "nothing is in the intellect unless it first appeared in the senses". According to an early form of direct realism proposed by William of Ockham (c. 1285–1349), perception of mind-independent objects happens directly without intermediaries.
The course of modern philosophy was shaped by René Descartes (1596–1650), who claimed that philosophy must begin from a position of indubitable knowledge of first principles. Inspired by skepticism, he aimed to find absolutely certain knowledge by encountering truths that cannot be doubted. He thought that this is the case for the assertion "I think, therefore I am", from which he constructed the rest of his philosophical system. Descartes, together with Baruch Spinoza (1632–1677) and Gottfried Wilhelm Leibniz (1646–1716), belonged to the school of rationalism, which asserts that the mind possesses innate ideas independent of experience. John Locke (1632–1704) rejected this view in favor of an empiricism according to which the mind is a blank slate. This means that all ideas depend on sense experience, either as "ideas of sense", which are directly presented through the senses, or as "ideas of reflection", which the mind creates by reflecting on ideas of sense. David Hume (1711–1776) used this idea to explore the limits of what people can know. He said that knowledge of facts is never certain, adding that knowledge of relations between ideas, like mathematical truths, can be certain but contains no information about the world. Immanuel Kant (1724–1804) tried to find a middle position between rationalism and empiricism by identifying a type of knowledge that Hume had missed. For Kant, this is knowledge about principles that underlie all experience and structure it, such as spatial and temporal relations and fundamental categories of understanding.
In the 19th century, Georg Wilhelm Friedrich Hegel (1770–1831) argued against empiricism, saying that sensory impressions on their own cannot amount to knowledge since all knowledge is actively structured by the knowing subject. John Stuart Mill (1806–1873) defended a wide-sweeping form of empiricism and explained knowledge of general truths through inductive reasoning. Charles Peirce (1839–1914) thought that all knowledge is fallible, emphasizing that knowledge seekers should always be ready to revise their beliefs if new evidence is encountered. He used this idea to argue against Cartesian foundationalism and its search for absolutely certain truths.
In the 20th century, fallibilism was further explored by J. L. Austin (1911–1960) and Karl Popper (1902–1994). In continental philosophy, Edmund Husserl (1859–1938) applied the skeptic idea of suspending judgment to the study of experience. By not judging whether an experience is accurate or not, he tried to describe the internal structure of experience instead. Logical positivists, like A. J. Ayer (1910–1989), said that all knowledge is either empirical or analytic. Bertrand Russell (1872–1970) developed an empiricist sense-datum theory, distinguishing between direct knowledge by acquaintance of sense data and indirect knowledge by description, which is inferred from knowledge by acquaintance. Common sense had a central place in G. E. Moore's (1873–1958) epistemology. He used trivial observations, like the fact that he has two hands, to argue against abstract philosophical theories that deviate from common sense. Ordinary language philosophy, as practiced by the late Ludwig Wittgenstein (1889–1951), is a similar approach that tries to extract epistemological insights from how ordinary language is used.
Edmund Gettier (1927–2021) conceived counterexamples against the idea that knowledge is the same as justified true belief. These counterexamples prompted many philosophers to suggest alternative definitions of knowledge. One of the alternatives considered was reliabilism, which says that knowledge requires reliable sources, shifting the focus away from justification. Virtue epistemology, a closely related response, analyses belief formation in terms of the intellectual virtues or cognitive competencies involved in the process. Naturalized epistemology, as conceived by Willard Van Orman Quine (1908–2000), employs concepts and ideas from the natural sciences to formulate its theories. Other developments in late 20th-century epistemology were the emergence of social, feminist, and historical epistemology.
References.
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "2 + 2=4"
},
{
"math_id": 1,
"text": "572+382=954"
},
{
"math_id": 2,
"text": "2+2=4"
},
{
"math_id": 3,
"text": "8+7=15"
},
{
"math_id": 4,
"text": "5+3=8"
}
] | https://en.wikipedia.org/wiki?curid=9247 |
924869 | Longitude of the ascending node | Defining the orbit of an object in space
The longitude of the ascending node (symbol ☊) is one of the orbital elements used to specify the orbit of an object in space. It is the angle from a specified reference direction, called the "origin of longitude", to the direction of the ascending node (☊), as measured in a specified reference plane. The ascending node is the point where the orbit of the object passes through the plane of reference, as seen in the adjacent image.
Types.
Commonly used reference planes and origins of longitude include: for geocentric orbits, Earth's equatorial plane as the reference plane and the First Point of Aries (the vernal equinox) as the origin of longitude; for heliocentric orbits, the ecliptic as the reference plane and the First Point of Aries as the origin of longitude; and for orbits outside the Solar System, the plane tangent to the celestial sphere at the object (the plane of the sky) as the reference plane, with north (the direction to the north celestial pole projected onto that plane) as the origin of longitude.
In the case of a binary star known only from visual observations, it is not possible to tell which node is ascending and which is descending. In this case the orbital parameter which is recorded is simply labeled longitude of the node, ☊, and represents the longitude of whichever node has a longitude between 0 and 180 degrees.
Calculation from state vectors.
In astrodynamics, the longitude of the ascending node can be calculated from the specific relative angular momentum vector h as follows:
formula_0
Here, n = ⟨"n"x, "n"y, "n"z⟩ is a vector pointing towards the ascending node. The reference plane is assumed to be the "xy"-plane, and the origin of longitude is taken to be the positive "x"-axis. k is the unit vector (0, 0, 1), which is the normal vector to the "xy" reference plane.
For non-inclined orbits (with inclination equal to zero), ☊ is undefined. For computation it is then, by convention, set equal to zero; that is, the ascending node is placed in the reference direction, which is equivalent to letting n point towards the positive "x"-axis.
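A minimal numerical sketch of this computation in Python (using NumPy; the function name and the example state vector are illustrative, not taken from any particular library):

```python
import numpy as np

def longitude_of_ascending_node(h):
    """Longitude of the ascending node (radians) from the specific relative
    angular momentum vector h, with the xy-plane as reference plane and the
    positive x-axis as the origin of longitude."""
    k = np.array([0.0, 0.0, 1.0])      # unit normal to the reference plane
    n = np.cross(k, h)                 # node vector (-h_y, h_x, 0)
    n_norm = np.linalg.norm(n)
    if n_norm == 0:                    # non-inclined orbit: undefined,
        return 0.0                     # set to zero by convention
    omega = np.arccos(n[0] / n_norm)
    if n[1] < 0:                       # quadrant correction for n_y < 0
        omega = 2.0 * np.pi - omega
    return omega

# Example: h computed from an (illustrative) position r and velocity v
r = np.array([7000.0, 100.0, 1300.0])   # km, made-up values
v = np.array([-1.0, 7.5, 0.5])          # km/s, made-up values
h = np.cross(r, v)
print(np.degrees(longitude_of_ascending_node(h)))
```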
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n \\mathbf{n} &= \\mathbf{k} \\times \\mathbf{h} = (-h_y, h_x, 0) \\\\\n \\Omega &= \\begin{cases}\n \\arccos { {n_x} \\over { \\mathbf{\\left |n \\right |}}}, &n_y \\ge 0; \\\\\n 2\\pi-\\arccos { {n_x} \\over { \\mathbf{\\left |n \\right |}}}, &n_y < 0.\n \\end{cases}\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=924869 |
924973 | Vampire number | In recreational mathematics, a vampire number (or true vampire number) is a composite natural number with an even number of digits, that can be factored into two natural numbers each with half as many digits as the original number, where the two factors contain precisely all the digits of the original number, in any order, counting multiplicity. The two factors cannot both have trailing zeroes. The first vampire number is 1260 = 21 × 60.
Definition.
Let formula_0 be a natural number with formula_1 digits:
formula_2
Then formula_0 is a vampire number if and only if there exist two natural numbers formula_3 and formula_4, each with formula_5 digits:
formula_6
formula_7
such that formula_8, formula_9 and formula_10 are not both zero, and the formula_1 digits of the concatenation of formula_3 and formula_4, formula_11, are a permutation of the formula_1 digits of formula_0. The two numbers formula_3 and formula_4 are called the "fangs" of formula_0.
Vampire numbers were first described in a 1994 post by Clifford A. Pickover to the Usenet group sci.math, and the article he later wrote was published in chapter 30 of his book "Keys to Infinity".
Examples.
1260 is a vampire number, with 21 and 60 as fangs, since 21 × 60 = 1260 and the digits of the concatenation of the two factors (2160) are a permutation of the digits of the original number (1260).
However, 126000 (which can be expressed as 21 × 6000 or 210 × 600) is not a vampire number, since although 126000 = 21 × 6000 and the digits (216000) are a permutation of the original number, the two factors 21 and 6000 do not have the correct number of digits. Furthermore, although 126000 = 210 × 600, both factors 210 and 600 have trailing zeroes.
The first few vampire numbers are:
1260 = 21 × 60
1395 = 15 × 93
1435 = 35 × 41
1530 = 30 × 51
1827 = 21 × 87
2187 = 27 × 81
6880 = 80 × 86
102510 = 201 × 510
104260 = 260 × 401
105210 = 210 × 501
The sequence of vampire numbers is:
1260, 1395, 1435, 1530, 1827, 2187, 6880, 102510, 104260, 105210, 105264, 105750, 108135, 110758, 115672, 116725, 117067, 118440, 120600, 123354, 124483, 125248, 125433, 125460, 125500, ... (sequence in the OEIS)
There are many known sequences of infinitely many vampire numbers following a pattern, such as:
1530 = 30 × 51, 150300 = 300 × 501, 15003000 = 3000 × 5001, ...
Al Sweigart calculated all the vampire numbers that have at most 10 digits.
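The definition translates directly into a brute-force test. The following Python sketch (function names and the search loop are illustrative) checks whether a number is a vampire number and returns its fang pairs:

```python
from itertools import count

def fangs(n):
    """Return all fang pairs (a, b) with a <= b, or [] if n is not a vampire number."""
    digits = sorted(str(n))
    if len(digits) % 2 != 0:
        return []                           # needs an even number of digits
    k = len(digits) // 2
    pairs = []
    a = 10 ** (k - 1)                       # smallest k-digit number
    while a * a <= n:
        if n % a == 0:
            b = n // a
            if (len(str(b)) == k                     # both fangs have k digits
                    and not (a % 10 == 0 and b % 10 == 0)   # not both ending in 0
                    and sorted(str(a) + str(b)) == digits):
                pairs.append((a, b))
        a += 1
    return pairs

# First few vampire numbers
found = []
for n in count(1000):
    if fangs(n):
        found.append(n)
        if len(found) == 7:
            break
print(found)            # [1260, 1395, 1435, 1530, 1827, 2187, 6880]
print(fangs(125460))    # [(204, 615), (246, 510)] -- two fang pairs
```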
Multiple fang pairs.
A vampire number can have multiple distinct pairs of fangs. The first of infinitely many vampire numbers with 2 pairs of fangs:
125460 = 204 × 615 = 246 × 510
The first with 3 pairs of fangs:
13078260 = 1620 × 8073 = 1863 × 7020 = 2070 × 6318
The first with 4 pairs of fangs:
16758243290880 = 1982736 × 8452080 = 2123856 × 7890480 = 2751840 × 6089832 = 2817360 × 5948208
The first with 5 pairs of fangs:
24959017348650 = 2947050 × 8469153 = 2949705 × 8461530 = 4125870 × 6049395 = 4129587 × 6043950 = 4230765 × 5899410
Other bases.
Vampire numbers also exist for bases other than base 10. For example, a vampire number in base 12 is 10392BA45768 = 105628 × BA3974, where A means ten and B means eleven. Another example in the same base is a vampire number with three fangs, 572164B9A830 = 8752 × 9346 × A0B1. An example with four fangs is 3715A6B89420 = 763 × 824 × 905 × B1A. In these examples, all 12 digits are used exactly once.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "2k"
},
{
"math_id": 2,
"text": "N = {n_{2k}}{n_{2k-1}}...{n_1}"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "B"
},
{
"math_id": 5,
"text": "k"
},
{
"math_id": 6,
"text": "A = {a_k}{a_{k-1}}...{a_1}"
},
{
"math_id": 7,
"text": "B = {b_k}{b_{k-1}}...{b_1}"
},
{
"math_id": 8,
"text": "A \\times B = N"
},
{
"math_id": 9,
"text": "a_1"
},
{
"math_id": 10,
"text": "b_1"
},
{
"math_id": 11,
"text": "({a_k}{a_{k-1}}...{a_2}{a_1}{b_k}{b_{k-1}}...{b_2}{b_1})"
}
] | https://en.wikipedia.org/wiki?curid=924973 |
9249813 | Forward kinematics | Computing a robot's end-effector position from joint values and kinematic equations
In robot kinematics, forward kinematics refers to the use of the kinematic equations of a robot to compute the position of the end-effector from specified values for the joint parameters.
The kinematics equations of the robot are used in robotics, computer games, and animation. The reverse process, that computes the joint parameters that achieve a specified position of the end-effector, is known as inverse kinematics.
Kinematics equations.
The kinematics equations for the series chain of a robot are obtained using a rigid transformation [Z] to characterize the relative movement allowed at each joint and separate rigid transformation [X] to define the dimensions of each link. The result is a sequence of rigid transformations alternating joint and link transformations from the base of the chain to its end link, which is equated to the specified position for the end link,
formula_0
where [T] is the transformation locating the end-link. These equations are called the kinematics equations of the serial chain.
Link transformations.
In 1955, Jacques Denavit and Richard Hartenberg introduced a convention for the definition of the joint matrices [Z] and link matrices [X] to standardize the coordinate frame for spatial linkages. This convention positions the joint frame so that it consists of a screw displacement along the Z-axis
formula_1
and it positions the link frame so it consists of a screw displacement along the X-axis,
formula_2
Using this notation, each transformation from one link to the next along the serial chain can be described by the coordinate transformation,
formula_3
where "θi", "di", "αi,i+1" and "ai,i+1" are known as the Denavit-Hartenberg parameters.
Kinematics equations revisited.
The kinematics equations of a serial chain of "n" links, with joint parameters "θi" are given by
formula_4
where formula_5 is the transformation matrix from the frame of link formula_6 to link formula_7. In robotics, these are conventionally described by Denavit–Hartenberg parameters.
Denavit-Hartenberg matrix.
The matrices associated with these operations are:
formula_8
Similarly,
formula_9
The use of the Denavit-Hartenberg convention yields the link transformation matrix, ["i-1Ti"] as
formula_10
known as the "Denavit-Hartenberg matrix."
Computer animation.
The forward kinematic equations can be used as a method in 3D computer graphics for animating models.
The essential concept of forward kinematic animation is that the positions of particular parts of the model at a specified time are calculated from the position and orientation of the object, together with any information on the joints of an articulated model. So for example if the object to be animated is an arm with the shoulder remaining at a fixed location, the location of the tip of the thumb would be calculated from the angles of the shoulder, elbow, wrist, thumb and knuckle joints. Three of these joints (the shoulder, wrist and the base of the thumb) have more than one degree of freedom, all of which must be taken into account. If the model were an entire human figure, then the location of the shoulder would also have to be calculated from other properties of the model.
Forward kinematic animation can be distinguished from inverse kinematic animation by this means of calculation - in inverse kinematics the orientation of articulated parts is calculated from the desired position of certain points on the model. It is also distinguished from other animation systems by the fact that the motion of the model is defined directly by the animator - no account is taken of any physical laws that might be in effect on the model, such as gravity or collision with other models.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "[T] = [Z_1][X_1][Z_2][X_2]\\ldots[X_{n-1}][Z_n],\\!"
},
{
"math_id": 1,
"text": " [Z_i] = \\operatorname{Trans}_{Z_{i}}(d_i) \\operatorname{Rot}_{Z_{i}}(\\theta_i),"
},
{
"math_id": 2,
"text": " [X_i]=\\operatorname{Trans}_{X_i}(a_{i,i+1})\\operatorname{Rot}_{X_i}(\\alpha_{i,i+1})."
},
{
"math_id": 3,
"text": "{}^{i-1}T_{i} = [Z_i][X_i] =\n \\operatorname{Trans}_{Z_{i}}(d_i)\n \\operatorname{Rot}_{Z_{i}}(\\theta_i)\n \\operatorname{Trans}_{X_i}(a_{i,i+1})\n \\operatorname{Rot}_{X_i}(\\alpha_{i,i+1}),"
},
{
"math_id": 4,
"text": "[T] = {}^{0}T_n = \\prod_{i=1}^n {}^{i - 1}T_i(\\theta_i),"
},
{
"math_id": 5,
"text": "{}^{i - 1}T_i(\\theta_i)"
},
{
"math_id": 6,
"text": "i"
},
{
"math_id": 7,
"text": " i-1"
},
{
"math_id": 8,
"text": "\\operatorname{Trans}_{Z_{i}}(d_i)\n = \\begin{bmatrix}\n 1 & 0 & 0 & 0 \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 1 & d_i \\\\\n 0 & 0 & 0 & 1\n \\end{bmatrix}, \\quad\n\\operatorname{Rot}_{Z_{i}}(\\theta_i)\n = \n\\begin{bmatrix}\n \\cos\\theta_i & -\\sin\\theta_i & 0 & 0 \\\\\n \\sin\\theta_i & \\cos\\theta_i & 0 & 0 \\\\\n 0 & 0 & 1 & 0 \\\\\n 0 & 0 & 0 & 1\n \\end{bmatrix}.\n"
},
{
"math_id": 9,
"text": "\\operatorname{Trans}_{X_i}(a_{i,i+1})\n = \n\\begin{bmatrix}\n 1 & 0 & 0 & a_{i,i+1} \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 1 & 0 \\\\\n 0 & 0 & 0 & 1\n \\end{bmatrix},\\quad\n\\operatorname{Rot}_{X_i}(\\alpha_{i,i+1})\n = \n\\begin{bmatrix}\n 1 & 0 & 0 & 0 \\\\\n 0 & \\cos\\alpha_{i,i+1} & -\\sin\\alpha_{i,i+1} & 0 \\\\\n 0 & \\sin\\alpha_{i,i+1} & \\cos\\alpha_{i,i+1} & 0 \\\\\n 0 & 0 & 0 & 1\n \\end{bmatrix}.\n"
},
{
"math_id": 10,
"text": "\\operatorname{}^{i-1}T_i\n = \n\\begin{bmatrix}\n \\cos\\theta_i & -\\sin\\theta_i \\cos\\alpha_{i,i+1} & \\sin\\theta_i \\sin\\alpha_{i,i+1} & a_{i,i+1} \\cos\\theta_i \\\\\n \\sin\\theta_i & \\cos\\theta_i \\cos\\alpha_{i,i+1} & -\\cos\\theta_i \\sin\\alpha_{i,i+1} & a_{i,i+1} \\sin\\theta_i \\\\\n 0 & \\sin\\alpha_{i,i+1} & \\cos\\alpha_{i,i+1} & d_i \\\\\n 0 & 0 & 0 & 1\n \\end{bmatrix},\n"
}
] | https://en.wikipedia.org/wiki?curid=9249813 |
9252215 | Drain-induced barrier lowering | Effect in MOSFETs
Drain-induced barrier lowering (DIBL) is a short-channel effect in MOSFETs referring originally to a reduction of threshold voltage of the transistor at higher drain voltages.
In a classic planar field-effect transistor with a long channel, the bottleneck in channel formation occurs far enough from the drain contact that it is electrostatically shielded from the drain by the combination of the substrate and gate, and so classically the threshold voltage was independent of drain voltage.
In short-channel devices this is no longer true: The drain is close enough to gate the channel, and so a high drain voltage can open the bottleneck and turn on the transistor prematurely.
The origin of the threshold decrease can be understood as a consequence of charge neutrality: the Yau charge-sharing model.
The combined charge in the depletion region of the device and that in the channel of the device is balanced by three electrode charges: the gate, the source and the drain. As drain voltage is increased, the depletion region of the p-n junction between the drain and body increases in size and extends under the gate, so the drain assumes a greater portion of the burden of balancing depletion region charge, leaving a smaller burden for the gate. As a result, the charge present on the gate retains charge balance by attracting more carriers into the channel, an effect equivalent to lowering the threshold voltage of the device.
In effect, the channel becomes more attractive for electrons. In other words, the potential energy barrier for electrons in the channel is lowered. Hence the term "barrier lowering" is used to describe these phenomena. Unfortunately, it is not easy to come up with accurate analytical results using the barrier lowering concept.
Barrier lowering increases as channel length is reduced, even at zero applied drain bias, because the source and drain form pn junctions with the body, and so have associated built-in depletion layers associated with them that become significant partners in charge balance at short channel lengths, even with no reverse bias applied to increase depletion widths.
The term DIBL has expanded beyond the notion of simple threshold adjustment, however, and refers to a number of drain-voltage effects upon MOSFET "I-V" curves that go beyond description in terms of simple threshold voltage changes, as described below.
As channel length is reduced, the effects of DIBL in the subthreshold region (weak inversion) show up initially as a simple translation of the subthreshold current vs. gate bias curve with change in drain-voltage, which can be modeled as a simple change in threshold voltage with drain bias. However, at shorter lengths the slope of the current vs. gate bias curve is reduced, that is, it requires a larger change in gate bias to effect the same change in drain current. At extremely short lengths, the gate entirely fails to turn the device off. These effects cannot be modeled as a threshold adjustment.
DIBL also affects the current vs. drain bias curve in the active mode, causing the current to increase with drain bias, lowering the MOSFET output resistance. This increase is additional to the normal channel length modulation effect on output resistance, and cannot always be modeled as a threshold adjustment.
In practice, the DIBL can be calculated as follows:
formula_0
where formula_1 or Vtsat is the threshold voltage measured at the supply voltage (the high drain voltage), and formula_2 or Vtlin is the threshold voltage measured at a very low drain voltage, typically 0.05 V or 0.1 V. formula_3 is the supply voltage (the high drain voltage) and formula_4 is the low drain voltage (in the linear part of the device's I-V characteristics). The minus sign in front of the formula ensures a positive DIBL value, because the high drain threshold voltage, formula_1, is always smaller than the low drain threshold voltage, formula_2. Typical units of DIBL are mV/V.
DIBL can reduce the device operating frequency as well, as described by the following equation:
formula_5
where formula_3 is the supply voltage and formula_6 is the threshold voltage.
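A small Python sketch of the two formulas above (the numerical values in the example are made up purely for illustration):

```python
def dibl(vt_dd, vt_lin, v_dd, v_d_low):
    """Drain-induced barrier lowering in V/V (multiply by 1000 for mV/V).
    vt_dd:   threshold voltage at the high (supply) drain voltage
    vt_lin:  threshold voltage at the low drain voltage
    v_dd:    supply (high) drain voltage
    v_d_low: low drain voltage"""
    return -(vt_dd - vt_lin) / (v_dd - v_d_low)

def relative_frequency_change(dibl_value, v_dd, v_th):
    """Fractional change in operating frequency caused by DIBL."""
    return -2.0 * dibl_value / (v_dd - v_th)

# Illustrative numbers: Vt drops from 0.30 V (at Vd = 0.05 V) to 0.25 V (at Vdd = 1.0 V)
d = dibl(0.25, 0.30, 1.0, 0.05)
print(d * 1000, "mV/V")                        # about 52.6 mV/V
print(relative_frequency_change(d, 1.0, 0.25)) # fractional frequency change
```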
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathrm{DIBL} = - \\frac{V_{Th}^{DD} - V_{Th}^{\\mathrm{low}}}{V_{DD} - V_{D}^{\\mathrm{low}}}, "
},
{
"math_id": 1,
"text": "V_{Th}^{DD}"
},
{
"math_id": 2,
"text": "V_{Th}^{\\mathrm{low}}"
},
{
"math_id": 3,
"text": "V_{DD}"
},
{
"math_id": 4,
"text": "V_{D}^{\\mathrm{low}}"
},
{
"math_id": 5,
"text": "\\frac{\\Delta f}{f} = -\\frac{2 \\mathrm{DIBL}}{V_{DD}-V_{Th}},"
},
{
"math_id": 6,
"text": "V_{Th}"
}
] | https://en.wikipedia.org/wiki?curid=9252215 |
9252619 | Qualitative variation | Statistical dispersion in nominal distributions
An index of qualitative variation (IQV) is a measure of statistical dispersion in nominal distributions. Examples include the variation ratio or the information entropy.
Properties.
There are several types of indices used for the analysis of nominal data. Several are standard statistics that are used elsewhere - range, standard deviation, variance, mean deviation, coefficient of variation, median absolute deviation, interquartile range and quartile deviation.
In addition to these, several statistics have been developed with nominal data in mind. A number have been summarized and devised by Wilcox, who requires the following standardization properties to be satisfied: variation should range between 0 and 1; variation should be 0 if and only if all cases belong to a single category; and variation should be 1 if and only if the cases are evenly divided across all the categories.
In particular, the value of these standardized indices does not depend on the number of categories or number of samples.
For any index, the closer to uniform the distribution, the larger the variance, and the larger the differences in frequencies across categories, the smaller the variance.
Indices of qualitative variation are then analogous to information entropy, which is minimized when all cases belong to a single category and maximized in a uniform distribution. Indeed, information entropy can be used as an index of qualitative variation.
One characterization of a particular index of qualitative variation (IQV) is as a ratio of observed differences to maximum differences.
Wilcox's indexes.
Wilcox gives a number of formulae for various indices of QV. The first, which he designates DM for "Deviation from the Mode", is a standardized form of the variation ratio, and is analogous to variance as deviation from the mean.
ModVR.
The formula for the variation around the mode (ModVR) is derived as follows:
formula_0
where "f""m" is the modal frequency, "K" is the number of categories and "f""i" is the frequency of the "i"th group.
This can be simplified to
formula_1
where "N" is the total size of the sample.
Freeman's index (or variation ratio) is
formula_2
This is related to "M" as follows:
formula_3
The ModVR is defined as
formula_4
where "v" is Freeman's index.
Low values of ModVR correspond to small amount of variation and high values to larger amounts of variation.
When "K" is large, ModVR is approximately equal to Freeman's index "v".
RanVR.
This is based on the range around the mode. It is defined to be
formula_5
where "f"m is the modal frequency and "f"l is the lowest frequency.
AvDev.
This is an analog of the mean deviation. It is defined as the arithmetic mean of the absolute differences of each value from the mean.
formula_6
MNDif.
This is an analog of the mean difference - the average of the differences of all the possible pairs of variate values, taken regardless of sign. The mean difference differs from the mean and standard deviation because it is dependent on the spread of the variate values among themselves and not on the deviations from some central value.
formula_7
where "f""i" and "f""j" are the "i"th and "j"th frequencies respectively.
The MNDif is the Gini coefficient applied to qualitative data.
VarNC.
This is an analog of the variance.
formula_8
It is the same index as Mueller and Schussler's Index of Qualitative Variation and Gibbs' M2 index.
It is distributed as a chi square variable with "K" – 1 degrees of freedom.
StDev.
Wilson has suggested two versions of this statistic.
The first is based on AvDev.
formula_9
The second is based on MNDif
formula_10
HRel.
This index was originally developed by Claude Shannon for use in specifying the properties of communication channels.
formula_11
where "p""i" = "f""i" / "N".
This is equivalent to information entropy divided by the formula_12 and is useful for comparing relative variation between frequency tables of multiple sizes.
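A minimal Python sketch of HRel, assuming the Shannon entropy is taken in base 2 and normalized by log2("K") (this reading of formula_12 is an assumption, not a quotation):

```python
import math

def h_rel(freqs):
    """Relative entropy HRel: Shannon entropy of the category proportions
    divided by its maximum value log2(K), so 0 <= HRel <= 1 (assumed form)."""
    N = sum(freqs)
    K = len(freqs)
    p = [f / N for f in freqs]
    H = -sum(pi * math.log2(pi) for pi in p if pi > 0)
    return H / math.log2(K)

print(h_rel([25, 25, 25, 25]))   # 1.0 for a uniform distribution
print(h_rel([97, 1, 1, 1]))      # small when one category dominates
```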
B index.
Wilcox adapted a proposal of Kaiser based on the geometric mean and created the "B" index. The "B" index is defined as
formula_13
R packages.
Several of these indices have been implemented in the R language.
Gibbs' indices and related formulae.
Gibbs and Poston proposed six indexes.
"M"1.
The unstandardized index ("M"1) is
formula_14
where "K" is the number of categories and formula_15 is the proportion of observations that fall in a given category "i".
"M"1 can be interpreted as one minus the likelihood that a random pair of samples will belong to the same category, so this formula for IQV is a standardized likelihood of a random pair falling in the same category. This index has also referred to as the index of differentiation, the index of sustenance differentiation and the geographical differentiation index depending on the context it has been used in.
"M"2.
A second index is the "M2" is:
formula_16
where "K" is the number of categories and formula_15 is the proportion of observations that fall in a given category "i". The factor of formula_17 is for standardization.
"M"1 and "M"2 can be interpreted in terms of variance of a multinomial distribution (there called an "expanded binomial model"). "M"1 is the variance of the multinomial distribution and "M"2 is the ratio of the variance of the multinomial distribution to the variance of a binomial distribution.
"M"4.
The "M"4 index is
formula_18
where "m" is the mean.
"M"6.
The formula for "M"6 is
formula_19
where "K" is the number of categories, "X""i" is the number of data points in the "i"th category, "N" is the total number of data points, || is the absolute value (modulus) and
formula_20
This formula can be simplified
formula_21
where "p""i" is the proportion of the sample in the "i"th category.
In practice "M"1 and "M"6 tend to be highly correlated which militates against their combined use.
Related indices.
The sum
formula_22
has also found application. This is known as the Simpson index in ecology and as the Herfindahl index or the Herfindahl-Hirschman index (HHI) in economics. A variant of this is known as the Hunter–Gaston index in microbiology
In linguistics and cryptanalysis this sum is known as the repeat rate. The index of coincidence ("IC") is an unbiased estimator of this statistic
formula_23
where "f""i" is the count of the "i"th grapheme in the text and "n" is the total number of graphemes in the text.
The "M"1 statistic defined above has been proposed several times in a number of different settings under a variety of names. These include Gini's index of mutability, Simpson's measure of diversity, Bachi's index of linguistic homogeneity, Mueller and Schuessler's index of qualitative variation, Gibbs and Martin's index of industry diversification, Lieberson's index. and Blau's index in sociology, psychology and management studies. The formulation of all these indices are identical.
Simpson's "D" is defined as
formula_24
where "n" is the total sample size and "n""i" is the number of items in the ith category.
For large "n" we have
formula_25
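A Python sketch of Simpson's "D", assuming the standard finite-sample form D = Σ n_i(n_i − 1) / (n(n − 1)) together with its large-sample approximation Σ p_i^2 (both forms are assumptions stated here for illustration):

```python
def simpson_d(counts):
    """Probability that two items drawn at random without replacement
    belong to the same category: sum n_i (n_i - 1) / (n (n - 1))."""
    n = sum(counts)
    return sum(ni * (ni - 1) for ni in counts) / (n * (n - 1))

def simpson_d_large_n(counts):
    """Large-sample approximation: sum of squared proportions."""
    n = sum(counts)
    return sum((ni / n) ** 2 for ni in counts)

counts = [50, 30, 20]
print(simpson_d(counts), simpson_d_large_n(counts))
```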
Another statistic that has been proposed is the coefficient of unalikeability which ranges between 0 and 1.
formula_26
where "n" is the sample size and "c"("x","y") = 1 if "x" and "y" are unalike and 0 otherwise.
For large "n" we have
formula_25
where "K" is the number of categories.
Another related statistic is the quadratic entropy
formula_27
which is itself related to the Gini index.
Greenberg's monolingual non weighted index of linguistic diversity is the "M"2 statistic defined above.
Another index – the "M"7 – was created based on the "M"4 index described above; it is defined as
formula_28
where
formula_29
and
formula_30
where "K" is the number of categories, "L" is the number of subtypes, "O""ij" and "E""ij" are the number observed and expected respectively of subtype "j" in the "i"th category, "n""i" is the number in the "i"th category and "p""j" is the proportion of subtype "j" in the complete sample.
Note: This index was designed to measure women's participation in the work place: the two subtypes it was developed for were male and female.
Other single sample indices.
These indices are summary statistics of the variation within the sample.
Berger–Parker index.
The Berger–Parker index, named after Wolfgang H. Berger and Frances Lawrence Parker, equals the maximum formula_31 value in the dataset, i.e. the proportional abundance of the most abundant type. This corresponds to the weighted generalized mean of the formula_31 values when "q" approaches infinity, and hence equals the inverse of true diversity of order infinity (1/∞"D").
Brillouin index of diversity.
This index is strictly applicable only to entire populations rather than to finite samples. It is defined as
formula_32
where "N" is total number of individuals in the population, "n""i" is the number of individuals in the "i"th category and "N"! is the factorial of "N".
Brillouin's index of evenness is defined as
formula_33
where "I""B"(max) is the maximum value of "I"B.
Hill's diversity numbers.
Hill suggested a family of diversity numbers
formula_34
For given values of "a", several of the other indices can be computed: "a" = 0 gives the number of categories "K", "a" = 1 gives the exponential of the Shannon entropy, "a" = 2 gives the reciprocal of Simpson's index, and as "a" tends to infinity the result is the reciprocal of the Berger–Parker index.
Hill also suggested a family of evenness measures
formula_35
where "a" > "b".
Hill's "E"4 is
formula_36
Hill's "E"5 is
formula_37
Margalef's index.
formula_38
where "S" is the number of data types in the sample and "N" is the total size of the sample.
Menhinick's index.
formula_39
where "S" is the number of data types in the sample and "N" is the total size of the sample.
In linguistics this index is the identical with the Kuraszkiewicz index (Guiard index) where "S" is the number of distinct words (types) and "N" is the total number of words (tokens) in the text being examined. This index can be derived as a special case of the Generalised Torquist function.
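A small Python sketch of these two richness indices, assuming the usual forms Margalef = ("S" − 1)/ln "N" and Menhinick = "S"/√"N" (assumed forms given for illustration, not quoted from the formulas above):

```python
import math

def margalef(S, N):
    """Margalef's richness index, assumed form (S - 1) / ln(N)."""
    return (S - 1) / math.log(N)

def menhinick(S, N):
    """Menhinick's richness index, assumed form S / sqrt(N)."""
    return S / math.sqrt(N)

# Illustrative text sample: S distinct word types among N word tokens
S, N = 120, 1000
print(margalef(S, N), menhinick(S, N))
```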
Q statistic.
This is a statistic invented by Kempton and Taylor, and involves the quartiles of the sample. It is defined as
formula_40
where "R"1 and "R"2 are the 25% and 75% quartiles respectively on the cumulative species curve, "n""j" is the number of species in the "j"th category, "n"Ri is the number of species in the class where "R""i" falls ("i" = 1 or 2).
Shannon–Wiener index.
This is taken from information theory
formula_41
where "N" is the total number in the sample and "p""i" is the proportion in the "i"th category.
In ecology, where this index is commonly used, "H" usually lies between 1.5 and 3.5 and only rarely exceeds 4.0.
An approximate formula for the standard deviation (SD) of "H" is
formula_42
where "p""i" is the proportion made up by the "i"th category and "N" is the total in the sample.
A more accurate approximate value of the variance of "H"(var("H")) is given by
formula_43
where "N" is the sample size and "K" is the number of categories.
A related index is the Pielou "J" defined as
formula_44
One difficulty with this index is that "S" is unknown for a finite sample. In practice "S" is usually set to the maximum present in any category in the sample.
Rényi entropy.
The Rényi entropy is a generalization of the Shannon entropy to other values of "q" than unity. It can be expressed:
formula_45
which equals
formula_46
This means that taking the logarithm of true diversity based on any value of "q" gives the Rényi entropy corresponding to the same value of "q".
The value of formula_47 is also known as the Hill number.
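A Python sketch of the Rényi entropy and the corresponding Hill number (true diversity); the q → 1 limit is handled explicitly as the Shannon entropy. The function names and example counts are illustrative:

```python
import math

def renyi_entropy(freqs, q):
    """Renyi entropy of order q; the q -> 1 limit is the Shannon entropy."""
    N = sum(freqs)
    p = [f / N for f in freqs if f > 0]
    if abs(q - 1.0) < 1e-9:
        return -sum(pi * math.log(pi) for pi in p)
    return math.log(sum(pi ** q for pi in p)) / (1.0 - q)

def hill_number(freqs, q):
    """Hill number (true diversity) of order q: exponential of the Renyi entropy."""
    return math.exp(renyi_entropy(freqs, q))

counts = [50, 30, 15, 5]
for q in (0, 1, 2):
    print(q, hill_number(counts, q))   # q = 0 gives the number of categories
```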
McIntosh's D and E.
McIntosh proposed a measure of diversity:
formula_48
where "n""i" is the number in the "i"th category and "K" is the number of categories.
He also proposed several normalized versions of this index. First is "D":
formula_49
where "N" is the total sample size.
This index has the advantage of expressing the observed diversity as a proportion of the absolute maximum diversity at a given "N".
Another proposed normalization is "E" — ratio of observed diversity to maximum possible diversity of a given "N" and "K" (i.e., if all species are equal in number of individuals):
formula_50
Fisher's alpha.
This was the first index to be derived for diversity.
formula_51
where "K" is the number of categories and "N" is the number of data points in the sample. Fisher's "α" has to be estimated numerically from the data.
The expected number of individuals in the "r"th category where the categories have been placed in increasing size is
formula_52
where "X" is an empirical parameter lying between 0 and 1. While X is best estimated numerically an approximate value can be obtained by solving the following two equations
formula_53
formula_54
where "K" is the number of categories and "N" is the total sample size.
The variance of "α" is approximately
formula_55
Strong's index.
This index ("D"w) is the distance between the Lorenz curve of species distribution and the 45 degree line. It is closely related to the Gini coefficient.
In symbols it is
formula_56
where max() is the maximum value taken over the "N" data points, "K" is the number of categories (or species) in the data set and "c""i" is the cumulative total up and including the "i"th category.
Simpson's E.
This is related to Simpson's "D" and is defined as
formula_57
where "D" is Simpson's "D" and "K" is the number of categories in the sample.
Smith & Wilson's indices.
Smith and Wilson suggested a number of indices based on Simpson's "D".
formula_58
formula_59
where "D" is Simpson's "D" and "K" is the number of categories.
Heip's index.
formula_60
where "H" is the Shannon entropy and "K" is the number of categories.
This index is closely related to Sheldon's index which is
formula_61
where "H" is the Shannon entropy and "K" is the number of categories.
Camargo's index.
This index was created by Camargo in 1993.
formula_62
where "K" is the number of categories and "p""i" is the proportion in the "i"th category.
Smith and Wilson's B.
This index was proposed by Smith and Wilson in 1996.
formula_63
where "θ" is the slope of the log(abundance)-rank curve.
Nee, Harvey, and Cotgreave's index.
This is the slope of the log(abundance)-rank curve.
Bulla's E.
There are two versions of this index - one for continuous distributions ("E"c) and the other for discrete ("E"d).
formula_64
formula_65
where
formula_66
is the Schoener–Czekanoski index, "K" is the number of categories and "N" is the sample size.
Horn's information theory index.
This index ("R""ik") is based on Shannon's entropy. It is defined as
formula_67
where
formula_68
formula_69
formula_70
formula_71
formula_72
formula_73
formula_74
In these equations "x""ij" and "x""kj" are the number of times the "j"th data type appears in the "i"th or "k"th sample respectively.
Rarefaction index.
In a rarefied sample, a random subsample of "n" items is chosen from the total of "N" items. Let formula_75 be the number of groups still present in the subsample of "n" items. formula_75 is less than "K", the number of categories, whenever at least one group is missing from this subsample.
The rarefaction curve, formula_76 is defined as:
formula_77
Note that 0 ≤ "f"("n") ≤ "K".
Furthermore,
formula_78
Despite being defined at discrete values of "n", these curves are most frequently displayed as continuous functions.
This index is discussed further in Rarefaction (ecology).
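A Python sketch of the rarefaction curve, assuming the standard hypergeometric form E[X_n] = Σ_i (1 − C("N" − "N"i, "n")/C("N", "n")); this assumed form is an illustration, not a quotation of formula_77:

```python
from math import comb

def rarefaction(counts, n):
    """Expected number of groups present in a random subsample of n items,
    under the standard hypergeometric form (assumed here)."""
    N = sum(counts)
    return sum(1.0 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

counts = [50, 30, 15, 4, 1]        # illustrative group sizes, K = 5 groups
for n in (1, 10, 50, 100):
    print(n, round(rarefaction(counts, n), 3))   # f(1) = 1, f(N) = K
```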
Caswell's V.
This is a "z" type statistic based on Shannon's entropy.
formula_79
where "H" is the Shannon entropy, "E"("H") is the expected Shannon entropy for a neutral model of distribution and "SD"("H") is the standard deviation of the entropy. The standard deviation is estimated from the formula derived by Pielou
formula_80
where "p""i" is the proportion made up by the "i"th category and "N" is the total in the sample.
Lloyd & Ghelardi's index.
This is
formula_81
where "K" is the number of categories and "K"' is the number of categories according to MacArthur's broken stick model yielding the observed diversity.
Average taxonomic distinctness index.
This index is used to compare the relationship between hosts and their parasites. It incorporates information about the phylogenetic relationship amongst the host species.
formula_82
where "s" is the number of host species used by a parasite and "ω""ij" is the taxonomic distinctness between host species "i" and "j".
Index of qualitative variation.
Several indices with this name have been proposed.
One of these is
formula_83
where "K" is the number of categories and "p"i is the proportion of the sample that lies in the ith category.
Theil's H.
This index is also known as the multigroup entropy index or the information theory index. It was proposed by Theil in 1972. The index is a weighted average of the entropies of the samples.
Let
formula_84
and
formula_85
where "p"i is the proportion of type "i" in the "a"th sample, "r" is the total number of samples, "n"i is the size of the "i"th sample, "N" is the size of the population from which the samples were obtained and "E" is the entropy of the population.
Indices for comparison of two or more data types within a single sample.
Several of these indexes have been developed to document the degree to which different data types of interest may coexist within a geographic area.
Index of dissimilarity.
Let "A" and "B" be two types of data item. Then the index of dissimilarity is
formula_86
where
formula_87
formula_88
"A""i" is the number of data type "A" at sample site "i", "B""i" is the number of data type "B" at sample site "i", "K" is the number of sites sampled and || is the absolute value.
This index is probably better known as the index of dissimilarity ("D"). It is closely related to the Gini index.
This index is biased as its expectation under a uniform distribution is > 0.
A modification of this index has been proposed by Gorard and Taylor. Their index (GT) is
formula_89
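A Python sketch of the index of dissimilarity, assuming the usual form D = ½ Σ |"A"i/"A" − "B"i/"B"| with "A" and "B" the totals of each type over all sites; this is an illustration of the standard index, not a transcription of formula_86:

```python
def dissimilarity_index(a_counts, b_counts):
    """Index of dissimilarity between two data types across K sites,
    assumed form 0.5 * sum_i |A_i/A - B_i/B|."""
    A = sum(a_counts)
    B = sum(b_counts)
    return 0.5 * sum(abs(ai / A - bi / B) for ai, bi in zip(a_counts, b_counts))

# Illustrative counts of types A and B at four sample sites
a = [90, 10, 40, 60]
b = [10, 90, 50, 50]
print(dissimilarity_index(a, b))   # 0.45
```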
Index of segregation.
The index of segregation ("IS") is
formula_90
where
formula_87
formula_91
and "K" is the number of units, "A""i" and "t""i" is the number of data type "A" in unit "i" and the total number of all data types in unit "i".
Hutchen's square root index.
This index ("H") is defined as
formula_92
where "p""i" is the proportion of the sample composed of the "i"th variate.
Lieberson's isolation index.
This index ( "L""xy" ) was invented by Lieberson in 1981.
formula_93
where "X""i" and "Y""i" are the variables of interest at the "i"th site, "K" is the number of sites examined and "X"tot is the total number of variate of type "X" in the study.
Bell's index.
This index is defined as
formula_94
where "p""x" is the proportion of the sample made up of variates of type "X" and
formula_95
where "N"x is the total number of variates of type "X" in the study, "K" is the number of samples in the study and "x""i" and "p""i" are the number of variates and the proportion of variates of type "X" respectively in the "i"th sample.
Index of isolation.
The index of isolation is
formula_96
where "K" is the number of units in the study, "A""i" and "t""i" is the number of units of type "A" and the number of all units in "i"th sample.
A modified index of isolation has also been proposed
formula_97
The "MII" lies between 0 and 1.
Gorard's index of segregation.
This index (GS) is defined as
formula_98
where
formula_87
formula_91
and "A""i" and "t""i" are the number of data items of type "A" and the total number of items in the "i"th sample.
Index of exposure.
This index is defined as
formula_99
where
formula_87
and "A""i" and "B""i" are the number of types "A" and "B" in the "i"th category and "t""i" is the total number of data points in the "i"th category.
Ochiai index.
This is a binary form of the cosine index. It is used to compare presence/absence data of two data types (here "A" and "B"). It is defined as
formula_100
where "a" is the number of sample units where both "A" and "B" are found, "b" is number of sample units where "A" but not "B" occurs and "c" is the number of sample units where type "B" is present but not type "A".
Kulczyński's coefficient.
This coefficient was invented by Stanisław Kulczyński in 1927 and is an index of association between two types (here "A" and "B"). It varies in value between 0 and 1. It is defined as
formula_101
where "a" is the number of sample units where type "A" and type "B" are present, "b" is the number of sample units where type "A" but not type "B" is present and "c" is the number of sample units where type "B" is present but not type "A".
Yule's Q.
This index was invented by Yule in 1900. It concerns the association of two different types (here "A" and "B"). It is defined as
formula_102
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present. "Q" varies in value between -1 and +1. In the ordinal case "Q" is known as the Goodman-Kruskal "γ".
Because the denominator potentially may be zero, Leinhert and Sporer have recommended adding +1 to "a", "b", "c" and "d".
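Many of the presence/absence coefficients in this section are simple functions of the four cell counts "a", "b", "c" and "d". The following Python sketch illustrates two of them, assuming the standard forms Ochiai = "a"/√(("a"+"b")("a"+"c")) and Yule's Q = ("ad" − "bc")/("ad" + "bc"); the counts in the example are made up:

```python
import math

def ochiai(a, b, c):
    """Ochiai index (binary cosine similarity), assumed form a / sqrt((a+b)(a+c))."""
    return a / math.sqrt((a + b) * (a + c))

def yule_q(a, b, c, d):
    """Yule's Q, assumed form (ad - bc) / (ad + bc); ranges from -1 to +1."""
    return (a * d - b * c) / (a * d + b * c)

# Illustrative 2x2 presence/absence counts for types A and B over 100 sample units
a, b, c, d = 40, 10, 15, 35
print(ochiai(a, b, c))     # about 0.76
print(yule_q(a, b, c, d))  # about 0.81
```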
Yule's Y.
This index is defined as
formula_103
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present.
Baroni–Urbani–Buser coefficient.
This index was invented by Baroni-Urbani and Buser in 1976. It varies between 0 and 1 in value. It is defined as
formula_104
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present. "N" is the sample size.
When "d" = 0, this index is identical to the Jaccard index.
Hamman coefficient.
This coefficient is defined as
formula_105
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present. "N" is the sample size.
Rogers–Tanimoto coefficient.
This coefficient is defined as
formula_106
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present. "N" is the sample size
Sokal–Sneath coefficient.
This coefficient is defined as
formula_107
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present. "N" is the sample size.
Sokal's binary distance.
This coefficient is defined as
formula_108
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present. "N" is the sample size.
Russel–Rao coefficient.
This coefficient is defined as
formula_109
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present. "N" is the sample size.
Phi coefficient.
This coefficient is defined as
formula_110
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present.
Soergel's coefficient.
This coefficient is defined as
formula_111
where "b" is the number of samples where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present. "N" is the sample size.
Simpson's coefficient.
This coefficient is defined as
formula_112
where "b" is the number of samples where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A".
Dennis' coefficient.
This coefficient is defined as
formula_113
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present. "N" is the sample size.
Forbes' coefficient.
This coefficient was proposed by Stephen Alfred Forbes in 1907. It is defined as
formula_114
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present. "N" is the sample size ("N = a + b + c + d").
A modification of this coefficient which does not require the knowledge of "d" has been proposed by Alroy
formula_115
Where "n = a + b + c".
Simple match coefficient.
This coefficient is defined as
formula_116
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present. "N" is the sample size.
Fossum's coefficient.
This coefficient is defined as
formula_117
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present. "N" is the sample size.
Stile's coefficient.
This coefficient is defined as
formula_118
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A", "d" is the sample count where neither type "A" nor type "B" are present, "n" equals "a" + "b" + "c" + "d" and || is the modulus (absolute value) of the difference.
Michael's coefficient.
This coefficient is defined as
formula_119
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present.
Peirce's coefficient.
In 1884 Charles Peirce suggested the following coefficient
formula_120
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present.
Hawkin–Dotson coefficient.
In 1975 Hawkin and Dotson proposed the following coefficient
formula_121
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present. "N" is the sample size.
Benini coefficient.
In 1901 Benini proposed the following coefficient
formula_122
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B" and "c" is the number of samples where type "B" is present but not type "A". Min("b", "c") is the minimum of "b" and "c".
Gilbert coefficient.
Gilbert proposed the following coefficient
formula_123
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the sample count where neither type "A" nor type "B" are present. "N" is the sample size.
Gini index.
The Gini index is
formula_124
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B" and "c" is the number of samples where type "B" is present but not type "A".
Modified Gini index.
The modified Gini index is
formula_125
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B" and "c" is the number of samples where type "B" is present but not type "A".
Kuhn's index.
Kuhn proposed the following coefficient in 1965
formula_126
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B" and "c" is the number of samples where type "B" is present but not type "A". "K" is a normalizing parameter. "N" is the sample size.
This index is also known as the coefficient of arithmetic means.
Eyraud index.
Eyraud proposed the following coefficient in 1936
formula_127
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the number of samples where both "A" and "B" are not present.
Soergel distance.
This is defined as
formula_128
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the number of samples where both "A" and "B" are not present. "N" is the sample size.
Tanimoto index.
This is defined as
formula_129
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A" and "d" is the number of samples where both "A" and "B" are not present. "N" is the sample size.
Piatetsky–Shapiro's index.
This is defined as
formula_130
where "a" is the number of samples where types "A" and "B" are both present, "b" is where type "A" is present but not type "B", "c" is the number of samples where type "B" is present but not type "A".
Indices for comparison between two or more samples.
Czekanowski's quantitative index.
This is also known as the Bray–Curtis index, Schoener's index, least common percentage index, index of affinity or proportional similarity. It is related to the Sørensen similarity index.
formula_131
where "x""i" and "x""j" are the number of species in sites "i" and "j" respectively and the minimum is taken over the number of species in common between the two sites.
Canberra metric.
The Canberra distance is a weighted version of the "L"1 metric. It was introduced in 1966 and refined in 1967 by G. N. Lance and W. T. Williams. It is used to define a distance between two vectors – here two sites with "K" categories within each site.
The Canberra distance "d" between vectors p and q in a "K"-dimensional real vector space is
formula_132
where "p""i" and "q""i" are the values of the "i"th category of the two vectors.
Sorensen's coefficient of community.
This is used to measure similarities between communities.
formula_133
where "s"1 and "s"2 are the number of species in community 1 and 2 respectively and "c" is the number of species common to both areas.
Jaccard's index.
This is a measure of the similarity between two samples:
formula_134
where "A" is the number of data points shared between the two samples and "B" and "C" are the data points found only in the first and second samples respectively.
This index was invented in 1902 by the Swiss botanist Paul Jaccard.
Under a random distribution the expected value of "J" is
formula_135
The standard error of this index with the assumption of a random distribution is
formula_136
where "N" is the total size of the sample.
Dice's index.
This is a measure of the similarity between two samples:
formula_137
where "A" is the number of data points shared between the two samples and "B" and "C" are the data points found only in the first and second samples respectively.
Match coefficient.
This is a measure of the similarity between two samples:
formula_138
where "N" is the number of data points in the two samples and "B" and "C" are the data points found only in the first and second samples respectively.
Morisita's index.
Masaaki Morisita's index of dispersion ( "I""m" ) is the scaled probability that two points chosen at random from the whole population are in the same sample. Higher values indicate a more clumped distribution.
formula_139
An alternative formulation is
formula_140
where "n" is the total sample size, "m" is the sample mean and "x" are the individual values with the sum taken over the whole sample. It is also equal to
formula_141
where "IMC" is Lloyd's index of crowding.
This index is relatively independent of the population density but is affected by the sample size.
Morisita showed that the statistic
formula_142
is distributed as a chi-squared variable with "n" − 1 degrees of freedom.
An alternative significance test for this index has been developed for large samples.
formula_143
where "m" is the overall sample mean, "n" is the number of sample units and "z" is the normal distribution abscissa. Significance is tested by comparing the value of "z" against the values of the normal distribution.
Morisita's overlap index.
Morisita's overlap index is used to compare overlap among samples. The index is based on the assumption that increasing the size of the samples will increase the diversity because it will include different habitats
formula_144
"x""i" is the number of times species "i" is represented in the total "X" from one sample.
"y""i" is the number of times species "i" is represented in the total "Y" from another sample.
"D""x" and "D""y" are the Simpson's index values for the "x" and "y" samples respectively.
"S" is the number of unique species
"C""D" = 0 if the two samples do not overlap in terms of species, and "C""D" = 1 if the species occur in the same proportions in both samples.
Horn introduced a modification of the index
formula_145
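A minimal Python sketch (not from the source) of Horn's modified overlap index, computed directly from the expression above with invented species counts:

```python
# Illustrative sketch: Horn's modification of Morisita's overlap index.
x = [10, 4, 0, 6, 1]   # counts of each species in sample 1 (invented)
y = [8, 2, 3, 5, 0]    # counts of each species in sample 2 (invented)

X = sum(x)
Y = sum(y)

numerator = 2 * sum(xi * yi for xi, yi in zip(x, y))
denominator = (sum(xi * xi for xi in x) / X ** 2 +
               sum(yi * yi for yi in y) / Y ** 2) * X * Y
C_H = numerator / denominator
print(C_H)
```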
Standardised Morisita's index.
Smith-Gill developed a statistic based on Morisita's index which is independent of both sample size and population density and bounded by −1 and +1. This statistic is calculated as follows
First determine Morisita's index ( "I""d" ) in the usual fashion. Then let "k" be the number of units the population was sampled from. Calculate the two critical values
formula_146
formula_147
where χ2 is the chi square value for "n" − 1 degrees of freedom at the 97.5% and 2.5% levels of confidence.
The standardised index ( "I""p" ) is then calculated from one of the formulae below
When "I""d" ≥ "M""c" > 1
formula_148
When "M""c" > "I""d" ≥ 1
formula_149
When 1 > "I""d" ≥ "M""u"
formula_150
When 1 > "M""u" > "I""d"
formula_151
"I""p" ranges between +1 and −1 with 95% confidence intervals of ±0.5. "I""p" has the value of 0 if the pattern is random; if the pattern is uniform, "I""p" < 0 and if the pattern shows aggregation, "I""p" > 0.
Peet's evenness indices.
These indices are a measure of evenness between samples.
formula_152
formula_153
where "I" is an index of diversity, "I"max and "I"min are the maximum and minimum values of "I" between the samples being compared.
Loevinger's coefficient.
Loevinger has suggested a coefficient "H" defined as follows:
formula_154
where "p"max and "p"min are the maximum and minimum proportions in the sample.
Tversky index.
The Tversky index is an asymmetric measure that lies between 0 and 1.
For samples "A" and "B" the Tversky index ("S") is
formula_155
The values of "α" and "β" are arbitrary. Setting both "α" and "β" to 0.5 gives Dice's coefficient. Setting both to 1 gives Tanimoto's coefficient.
A symmetrical variant of this index has also been proposed.
formula_156
where
formula_157
formula_158
Several similar indices have been proposed.
Monostori "et al." proposed the SymmetricSimilarity index
formula_159
where "d"("X") is some measure of derived from "X".
Bernstein and Zobel have proposed the S2 and S3 indexes
formula_160
formula_161
S3 is simply twice the SymmetricSimilarity index. Both are related to Dice's coefficient.
Metrics used.
A number of metrics (distances between samples) have been proposed.
Euclidean distance.
While this is usually used in quantitative work it may also be used in qualitative work. This is defined as
formula_162
where "d""jk" is the distance between "x""ij" and "x""ik".
Gower's distance.
This is defined as
formula_163
where "d"i is the distance between the "i"th samples and "w"i is the weighing give to the "i"th distance.
Manhattan distance.
While this is more commonly used in quantitative work it may also be used in qualitative work. This is defined as
formula_164
where "d""jk" is the distance between "x""ij" and "x""ik" and || is the absolute value of the difference between "x""ij" and "x""ik".
A modified version of the Manhattan distance can be used to find a zero (root) of a polynomial of any degree using Lill's method.
Prevosti's distance.
This is related to the Manhattan distance. It was described by Prevosti "et al." and was used to compare differences between chromosomes. Let "P" and "Q" be two collections of "r" finite probability distributions. Let these distributions have values that are divided into "k" categories. Then the distance "D"PQ is
formula_165
where "r" is the number of discrete probability distributions in each population, "k""j" is the number of categories in distributions "P""j" and "Q""j" and "p""ji" (respectively "q""ji") is the theoretical probability of category "i" in distribution "P""j" ("Q""j") in population "P"("Q").
Its statistical properties were examined by Sanchez "et al." who recommended a bootstrap procedure to estimate confidence intervals when testing for differences between samples.
Other metrics.
Let
formula_166
formula_167
formula_168
where min("x","y") is the lesser value of the pair "x" and "y".
Then
formula_169
is the Manhattan distance,
formula_170
is the Bray−Curtis distance,
formula_171
is the Jaccard (or Ruzicka) distance and
formula_172
is the Kulczynski distance.
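A brief Python sketch (not from the source) computing the four distances above from the quantities "A", "B" and "J" for two invented sites:

```python
# Illustrative sketch: Manhattan, Bray-Curtis, Jaccard (Ruzicka) and Kulczynski distances.
site_j = [12, 0, 3, 7, 1]   # invented counts at site j
site_k = [10, 2, 0, 9, 1]   # invented counts at site k

A = sum(site_j)
B = sum(site_k)
J = sum(min(x, y) for x, y in zip(site_j, site_k))

manhattan   = A + B - 2 * J
bray_curtis = (A + B - 2 * J) / (A + B)
jaccard     = (A + B - 2 * J) / (A + B - J)
kulczynski  = 1 - 0.5 * (J / A + J / B)
print(manhattan, bray_curtis, jaccard, kulczynski)
```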
Similarities between texts.
HaCohen-Kerner et al. have proposed a variety of metrics for comparing two or more texts.
Ordinal data.
If the categories are at least ordinal then a number of other indices may be computed.
Leik's D.
Leik's measure of dispersion ("D") is one such index. Let there be "K" categories and let "p""i" be "f""i"/"N" where "f""i" is the number in the "i"th category and let the categories be arranged in ascending order. Let
formula_173
where "a" ≤ "K". Let "d"a = "c"a if "c"a ≤ 0.5 and 1 − "c"a ≤ 0.5 otherwise. Then
formula_174
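A short Python sketch (not from the source) of Leik's "D" for invented frequencies over ordered categories, following the definitions above:

```python
# Illustrative sketch: Leik's measure of ordinal dispersion.
freqs = [5, 12, 20, 8, 5]          # invented counts for K ordered categories
N = sum(freqs)
K = len(freqs)

p = [f / N for f in freqs]
D = 0.0
c = 0.0
for pi in p:
    c += pi                        # cumulative proportion c_a
    d = c if c <= 0.5 else 1 - c   # folded cumulative proportion d_a
    D += d
D = 2 * D / (K - 1)
print(D)
```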
Normalised Herfindahl measure.
This is the square of the coefficient of variation divided by "N" − 1 where "N" is the sample size.
formula_175
where "m" is the mean and "s" is the standard deviation.
Potential-for-conflict Index.
The potential-for-conflict Index (PCI) describes the ratio of scoring on either side of a rating scale's centre point. This index requires at least ordinal data. This ratio is often displayed as a bubble graph.
The PCI uses an ordinal scale with an odd number of rating points (−"n" to +"n") centred at 0. It is calculated as follows
formula_176
where "Z" = 2"n", |·| is the absolute value (modulus), "r"+ is the number of responses in the positive side of the scale, "r"− is the number of responses in the negative side of the scale, "X"+ are the responses on the positive side of the scale, "X"− are the responses on the negative side of the scale and
formula_177
Theoretical difficulties are known to exist with the PCI. The PCI can be computed only for scales with a neutral center point and an equal number of response options on either side of it. Also a uniform distribution of responses does not always yield the midpoint of the PCI statistic but rather varies with the number of possible responses or values in the scale. For example, five-, seven- and nine-point scales with a uniform distribution of responses give PCIs of 0.60, 0.57 and 0.50 respectively.
The first of these problems is relatively minor, as most ordinal scales with an even number of responses can be extended (or reduced) by a single value to give an odd number of possible responses. Scales can usually be recentred if this is required. The second problem is more difficult to resolve and may limit the PCI's applicability.
The PCI has been extended
formula_178
where "K" is the number of categories, "k""i" is the number in the "i"th category, "d""ij" is the distance between the "i"th and "i"th categories, and "δ" is the maximum distance on the scale multiplied by the number of times it can occur in the sample. For a sample with an even number of data points
formula_179
and for a sample with an odd number of data points
formula_180
where "N" is the number of data points in the sample and "d"max is the maximum distance between points on the scale.
Vaske "et al." suggest a number of possible distance measures for use with this index.
formula_181
if the signs (+ or −) of "r""i" and "r""j" differ. If the signs are the same "d""ij" = 0.
formula_182
formula_183
where "p" is an arbitrary real number > 0.
formula_184
if sign("r""i" ) ≠ sign("r""i" ) and "p" is a real number > 0. If the signs are the same then "d""ij" = 0. "m" is "D"1, "D"2 or "D"3.
The difference between "D"1 and "D"2 is that the first does not include neutrals in the distance while the latter does. For example, respondents scoring −2 and +1 would have a distance of 2 under "D"1 and 3 under "D"2.
The use of a power ("p") in the distances allows for the rescaling of extreme responses. These differences can be highlighted with "p" > 1 or diminished with "p" < 1.
In simulations with variates drawn from a uniform distribution the PCI2 has a symmetric unimodal distribution. The tails of its distribution are larger than those of a normal distribution.
Vaske "et al." suggest the use of a t test to compare the values of the PCI between samples if the PCIs are approximately normally distributed.
van der Eijk's A.
This measure is a weighted average of the degree of agreement in the frequency distribution. "A" ranges from −1 (perfect bimodality) to +1 (perfect unimodality). It is defined as
formula_185
where "U" is the unimodality of the distribution, "S" the number of categories that have nonzero frequencies and "K" the total number of categories.
The value of "U" is 1 if the distribution has any of the three following characteristics:
With distributions other than these the data must be divided into 'layers'. Within a layer the responses are either equal or zero. The categories do not have to be contiguous. A value for "A" for each layer ("A""i") is calculated and a weighted average for the distribution is determined. The weights ("w""i") for each layer are the number of responses in that layer. In symbols
formula_186
A uniform distribution has "A" = 0: when all the responses fall into one category "A" = +1.
One theoretical problem with this index is that it assumes that the intervals are equally spaced. This may limit its applicability.
Related statistics.
Birthday problem.
If there are "n" units in the sample and they are randomly distributed into "k" categories ("n" ≤ "k"), this can be considered a variant of the birthday problem. The probability ("p") of all the categories having only one unit is
formula_187
If "c" is large and "n" is small compared with "k"2/3 then to a good approximation
formula_188
This approximation follows from the exact formula as follows:
formula_189
For "p" = 0.5 and "p" = 0.05 respectively the following estimates of "n" may be useful
formula_190
formula_191
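A small Python sketch (not from the source) comparing the exact probability, written as the product above, with the exponential approximation, and printing the rule-of-thumb sample sizes; "k" = 365 is chosen only as a familiar example.

```python
# Illustrative sketch: exact birthday-problem probability versus its approximation.
from math import exp, prod, sqrt

def p_all_singletons(n, k):
    """Probability that every category holds at most one unit, as written above."""
    return prod(1 - i / k for i in range(1, n + 1))

k = 365
for n in (10, 23, 40):
    exact = p_all_singletons(n, k)
    approx = exp(-n * n / (2 * k))
    print(n, round(exact, 4), round(approx, 4))

print(1.2 * sqrt(k))   # sample size giving roughly p = 0.5
print(2.5 * sqrt(k))   # sample size giving roughly p = 0.05
```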
This analysis can be extended to multiple categories. For "p" = 0.5 and "p" = 0.05 we have respectively
formula_192
formula_193
where "c""i" is the size of the "i"th category. This analysis assumes that the categories are independent.
If the data is ordered in some fashion, then for at least one event to occur in two categories lying within "j" categories of each other, a probability of 0.5 or 0.05 requires a sample size ("n") respectively of
formula_194
formula_195
where "k" is the number of categories.
Birthday-death day problem.
Whether or not there is a relation between birthdays and death days has been investigated with the statistic
formula_196
where "d" is the number of days in the year between the birthday and the death day.
Rand index.
The Rand index is used to test whether two or more classification systems agree on a data set.
Given a set of formula_197 elements formula_198 and two partitions of formula_199 to compare, formula_200, a partition of "S" into "r" subsets, and formula_201, a partition of "S" into "s" subsets, define the following:
formula_202, the number of pairs of elements in formula_199 that are in the same subset in formula_203 and in the same subset in formula_204
formula_205, the number of pairs of elements in formula_199 that are in different subsets in formula_203 and in different subsets in formula_204
formula_206, the number of pairs of elements in formula_199 that are in the same subset in formula_203 and in different subsets in formula_204
formula_207, the number of pairs of elements in formula_199 that are in different subsets in formula_203 and in the same subset in formula_204
The Rand index - formula_208 - is defined as
formula_209
Intuitively, formula_210 can be considered as the number of agreements between formula_203 and formula_204 and formula_211 as the number of disagreements between formula_203 and formula_204.
Adjusted Rand index.
The adjusted Rand index is the corrected-for-chance version of the Rand index. Though the Rand Index may only yield a value between 0 and +1, the adjusted Rand index can yield negative values if the index is less than the expected index.
The contingency table.
Given a set formula_199 of formula_197 elements, and two groupings or partitions ("e.g." clusterings) of these points, namely formula_212 and formula_213, the overlap between formula_203 and formula_204 can be summarized in a contingency table formula_214 where each entry formula_215 denotes the number of objects in common between formula_216 and formula_217 : formula_218.
Definition.
The adjusted form of the Rand Index, the Adjusted Rand Index, is
formula_219
more specifically
formula_220
where formula_221 are values from the contingency table.
Since the denominator is the total number of pairs, the Rand index represents the "frequency of occurrence" of agreements over the total pairs, or the probability that formula_203 and formula_204 will agree on a randomly chosen pair.
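A compact Python sketch (not from the source) that computes the Rand index and the adjusted Rand index from two invented label vectors by pair counting; variable names mirror the notation above.

```python
# Illustrative sketch: Rand index and adjusted Rand index from two partitions.
from math import comb
from collections import Counter

X = [0, 0, 0, 1, 1, 1, 2, 2]   # partition X as labels over n elements (invented)
Y = [0, 0, 1, 1, 1, 2, 2, 2]   # partition Y over the same elements (invented)
n = len(X)

contingency = Counter(zip(X, Y))   # n_ij table
a_i = Counter(X)                   # row sums
b_j = Counter(Y)                   # column sums

sum_ij = sum(comb(n_ij, 2) for n_ij in contingency.values())
sum_a  = sum(comb(v, 2) for v in a_i.values())
sum_b  = sum(comb(v, 2) for v in b_j.values())
total  = comb(n, 2)

agreements = total + 2 * sum_ij - sum_a - sum_b   # a + b in the notation above
rand = agreements / total

expected = sum_a * sum_b / total
ari = (sum_ij - expected) / (0.5 * (sum_a + sum_b) - expected)
print(rand, ari)
```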
Evaluation of indices.
Different indices give different values of variation, and may be used for different purposes: several are used and critiqued in the sociology literature especially.
If one wishes to simply make ordinal comparisons between samples (is one sample more or less varied than another), the choice of IQV is relatively less important, as they will often give the same ordering.
Where the data is ordinal a method that may be of use in comparing samples is ORDANOVA.
In some cases it is useful not to standardize an index to run from 0 to 1, regardless of the number of categories or samples, but one generally does standardize it.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " M = \\sum_{i = 1}^K ( f_m - f_i ) "
},
{
"math_id": 1,
"text": " M = Kf_m - N"
},
{
"math_id": 2,
"text": " v = 1 - \\frac{ f_m }{ N } "
},
{
"math_id": 3,
"text": " \\frac{ ( \\frac{ f_m }{ N } ) - \\frac 1 K }{ \\frac N K \\frac{ ( K - 1 )} N } = \\frac M { N( K - 1 ) }"
},
{
"math_id": 4,
"text": " \\operatorname{ModVR} = 1 - \\frac{ Kf_m - N }{ N( K - 1 ) } = \\frac{ K( N - f_m ) }{ N ( K - 1 ) } = \\frac{ K v }{ K - 1 } "
},
{
"math_id": 5,
"text": " \\operatorname{RanVR} = 1 - \\frac{ f_m - f_l }{ f_m } = \\frac{ f_l }{ f_m }"
},
{
"math_id": 6,
"text": " \\operatorname{AvDev} = 1 - \\frac 1 {2N} \\frac K {K - 1} \\sum^K_{i = 1} \\left| f_i - \\frac N K \\right| "
},
{
"math_id": 7,
"text": " \\operatorname{MNDif} = 1 - \\frac 1 { N( K - 1 ) } \\sum_{i = 1}^{K - 1} \\sum_{ j = i + 1 }^K | f_i - f_j | "
},
{
"math_id": 8,
"text": " \\operatorname{VarNC} = 1 - \\frac 1 {N^2} \\frac K {K - 1} \\sum \\left( f_i - \\frac N K \\right)^2 "
},
{
"math_id": 9,
"text": " \\operatorname{StDev}_1 = 1 - \\sqrt{ \\frac{ \\sum_{ i = 1 }^K \\left( f_i - \\frac N K \\right)^2 }{ \\left( N - \\frac N K \\right)^2 + ( K - 1 ) \\left( \\frac N K \\right)^2 } }"
},
{
"math_id": 10,
"text": " \\operatorname{StDev}_2 = 1 - \\sqrt{ \\frac{ \\sum^{K - 1}_{i = 1} \\sum^K_{j = i + 1 } ( f_i - f_j )^2 }{ N^2 ( K - 1 )} }"
},
{
"math_id": 11,
"text": " \\operatorname{HRel} = \\frac{ - \\sum p_i \\log_2 p_i }{ \\log_2 K } "
},
{
"math_id": 12,
"text": "\\log_2(K)"
},
{
"math_id": 13,
"text": " B = 1 - \\sqrt{ 1 - \\left[ \\sqrt[k] { \\prod_{ i = 1 }^k \\frac{ f_i K }{ N } } \\, \\right]^2 } "
},
{
"math_id": 14,
"text": "M1 = 1 - \\sum_{ i = 1 }^K p_i^2 "
},
{
"math_id": 15,
"text": " p_i = f_i / N "
},
{
"math_id": 16,
"text": " M2 = \\frac{ K }{ K - 1 } \\left( 1 - \\sum_{ i = 1 }^K p_i^2 \\right)"
},
{
"math_id": 17,
"text": " \\frac{ K }{ K - 1 } "
},
{
"math_id": 18,
"text": " M4 = \\frac{ \\sum_{ i = 1 }^K | X_i - m | }{ 2 \\sum_{ i = 1 }^K X_i } "
},
{
"math_id": 19,
"text": " M6 = K \\left[ 1 - \\frac{ \\sum_{ i = 1 }^K | X_i - m | }{ 2 N } \\right] "
},
{
"math_id": 20,
"text": " m = \\frac{ \\sum_{ i = 1 }^K X_i }{ N } "
},
{
"math_id": 21,
"text": " M6 = K\\left[ 1 - \\frac{ \\sum_{ i = 1 }^K \\left| p_i - \\frac 1 N \\right| } 2 \\right] "
},
{
"math_id": 22,
"text": " \\sum_{ i = 1 }^K p_i^2 "
},
{
"math_id": 23,
"text": " \\operatorname{IC} = \\sum \\frac{ f_i ( f_i - 1 ) }{ n ( n - 1 ) } "
},
{
"math_id": 24,
"text": " D = 1 - \\sum_{i = 1}^K { \\frac{ n_i ( n_i - 1 ) }{ n( n - 1 ) } }"
},
{
"math_id": 25,
"text": " u \\sim 1 - \\sum_{ i = 1 }^K p_i^2 "
},
{
"math_id": 26,
"text": " u = \\frac{ c( x, y ) }{ n^2 - n } "
},
{
"math_id": 27,
"text": " H^2 = 2 \\left( 1 - \\sum_{ i = 1 }^K p_i^2 \\right) "
},
{
"math_id": 28,
"text": " M7 = \\frac{ \\sum_{ i = 1 }^K \\sum_{ j = 1 }^L | R_i - R | }{ 2 \\sum R_i }"
},
{
"math_id": 29,
"text": "R_{ ij } = \\frac{ O_{ ij } } { E_{ ij } } = \\frac{ O_{ ij } }{ n_i p_j }"
},
{
"math_id": 30,
"text": " R = \\frac{ \\sum_{ i = 1 }^K \\sum_{ j = 1 }^L R_{ ij } }{ \\sum_{ i = 1 }^K n_i } "
},
{
"math_id": 31,
"text": "p_i"
},
{
"math_id": 32,
"text": " I_B = \\frac{ \\log( N! ) - \\sum_{ i = 1 }^K ( \\log( n_i! ) ) }{ N }"
},
{
"math_id": 33,
"text": " E_B = I_B / I_{B( \\max )} "
},
{
"math_id": 34,
"text": " N_a = \\frac{1}{ \\left[ \\sum_{ i = 1 }^K p_i^a \\right]^{ a - 1 } }"
},
{
"math_id": 35,
"text": " E_{ a, b } = \\frac{ N_a }{ N_b }"
},
{
"math_id": 36,
"text": " E_4 = \\frac{ N_2 } { N_1 } "
},
{
"math_id": 37,
"text": " E_5 = \\frac{ N_2 - 1 } { N_1 - 1 } "
},
{
"math_id": 38,
"text": " I_\\text{Marg} = \\frac{ S - 1 } { \\log_e N} "
},
{
"math_id": 39,
"text": " I_\\mathrm{Men} = \\frac{ S }{ \\sqrt{ N } } "
},
{
"math_id": 40,
"text": " Q = \\frac{ \\frac{ 1 }{ 2 } ( n_{ R1 } + n_{ R2 } ) + \\sum_{ j = R_1 + 1 }^{ R_2 - 1 } n_j } { \\log( R_2 / R_1 ) } "
},
{
"math_id": 41,
"text": " H = \\log_e N - \\frac{ 1 }{ N } \\sum n_i p_i \\log( p_i ) "
},
{
"math_id": 42,
"text": " \\operatorname{SD}( H ) = \\frac{ 1 }{ N } \\left[ \\sum p_i [ \\log_e( p_i ) ]^2 - H^2 \\right] "
},
{
"math_id": 43,
"text": " \\operatorname{var}(H) = \\frac{ \\sum p_i [\\log(p_i)]^2 - \\left[ \\sum p_i \\log( p_i ) \\right]^2 } N + \\frac{K - 1}{2N^2} + \\frac{ -1 + \\sum p_i^2 - \\sum p_i^{-1} \\log(p_i) + \\sum p_i^{-1} \\sum p_i \\log(p_i) }{6N^3} "
},
{
"math_id": 44,
"text": " J = \\frac{ H } {\\log_e( S ) } "
},
{
"math_id": 45,
"text": "{}^qH = \\frac{ 1 }{ 1 - q } \\; \\ln\\left ( \\sum_{ i = 1 }^K p_i^q \\right ) "
},
{
"math_id": 46,
"text": "{}^qH = \\ln\\left ( { 1 \\over \\sqrt[ q - 1 ]{{ \\sum_{ i = 1 }^K p_i p_i^{ q - 1 } } } } \\right ) = \\ln( {}^q\\!D )"
},
{
"math_id": 47,
"text": "{}^q\\!D"
},
{
"math_id": 48,
"text": " I = \\sqrt{ \\sum_{ i = 1 }^K n_i^2 } "
},
{
"math_id": 49,
"text": " D = \\frac{ N - I }{ N - \\sqrt{ N } } "
},
{
"math_id": 50,
"text": " E = \\frac{ N - I }{ N - \\frac{ N }{ K } } "
},
{
"math_id": 51,
"text": " K = \\alpha \\ln( 1 + \\frac{ N }{ \\alpha } ) "
},
{
"math_id": 52,
"text": " \\operatorname E( n_r ) = \\alpha \\frac{ X^r }{ r } "
},
{
"math_id": 53,
"text": " N = \\frac{ \\alpha X }{ 1 - X } "
},
{
"math_id": 54,
"text": " K = - \\alpha \\ln( 1 - X ) "
},
{
"math_id": 55,
"text": " \\operatorname{var}( \\alpha ) = \\frac{ \\alpha }{ \\ln( X )( 1 - X ) } "
},
{
"math_id": 56,
"text": " D_w = max[ \\frac{ c_i } K - \\frac i N ] "
},
{
"math_id": 57,
"text": " E = \\frac{1/D} K "
},
{
"math_id": 58,
"text": " E_1 = \\frac{ 1 - D }{ 1 - \\frac{ 1 }{ K } } "
},
{
"math_id": 59,
"text": " E_2 = \\frac{ \\log_e( D ) }{ \\log_e ( K ) } "
},
{
"math_id": 60,
"text": " E = \\frac{ e^H - 1 }{ K - 1 } "
},
{
"math_id": 61,
"text": " E = \\frac{ e^H }{ K } "
},
{
"math_id": 62,
"text": " E = 1 - \\sum_{ i = 1 }^K \\sum_{ j = i + 1 }^K \\frac{ p_i - p_j }{ K } "
},
{
"math_id": 63,
"text": " B = 1 - \\frac{ 2 }{ \\pi } \\arctan( \\theta ) "
},
{
"math_id": 64,
"text": " E_c = \\frac{ O - \\frac{ 1 }{ K } }{ 1 - \\frac{ 1 }{ K } } "
},
{
"math_id": 65,
"text": " E_d = \\frac{ O - \\frac{ 1 }{ K } - \\frac{ K - 1 }{ N } }{ 1 - \\frac{ 1 }{ K } - \\frac{ K - 1 }{ N } } "
},
{
"math_id": 66,
"text": " O = 1 - \\frac 1 2 \\left| p_i - \\frac 1 K \\right| "
},
{
"math_id": 67,
"text": " R_{ik} = \\frac{ H_\\max - H_\\mathrm{obs}}{ H_\\max - H_\\min }"
},
{
"math_id": 68,
"text": " X = \\sum x_{ ij } "
},
{
"math_id": 69,
"text": " X = \\sum x_{ kj } "
},
{
"math_id": 70,
"text": " H( X ) = \\sum \\frac{ x_{ ij } }{ X } \\log \\frac{ X }{ x_{ ij } }"
},
{
"math_id": 71,
"text": " H( Y ) = \\sum \\frac{ x_{ kj } }{ Y } \\log \\frac{ Y }{ x_{ kj } }"
},
{
"math_id": 72,
"text": "H_\\min = \\frac{ X }{ X + Y } H( X ) + \\frac{ Y }{ X + Y } H( Y )"
},
{
"math_id": 73,
"text": " H_\\max = \\sum \\left( \\frac{ x_{ ij } }{ X + Y } \\log \\frac{ X + Y }{ x_{ ij } } + \\frac{ x_{ kj }}{ X + Y } \\log \\frac{ X + Y }{ x_{ kj } } \\right)"
},
{
"math_id": 74,
"text": " H_\\mathrm{obs} = \\sum \\frac{ x_{ ij } + x_{ kj } }{ X + Y } \\log \\frac{ X + Y }{ x_{ ij } + x_{ kj } }"
},
{
"math_id": 75,
"text": "X_n"
},
{
"math_id": 76,
"text": "f_n"
},
{
"math_id": 77,
"text": "f_n = \\operatorname E[ X_n ] = K - \\binom{ N }{ n }^{ -1 } \\sum_{ i = 1 }^K \\binom{ N - N_i }{ n } "
},
{
"math_id": 78,
"text": "f(0) = 0,\\ f(1) = 1,\\ f(N) = K. "
},
{
"math_id": 79,
"text": " V = \\frac{ H - \\operatorname E( H ) }{ \\operatorname{SD}( H ) } "
},
{
"math_id": 80,
"text": " SD( H ) = \\frac{ 1 }{ N } \\left[ \\sum p_i [ \\log_e( p_i ) ]^2 - H^2 \\right] "
},
{
"math_id": 81,
"text": " I_{ LG } = \\frac{ K }{ K' } "
},
{
"math_id": 82,
"text": " S_{TD} = 2 \\frac{ \\sum \\sum_{ i < j } \\omega_{ ij } }{ s( s - 1 ) }"
},
{
"math_id": 83,
"text": " IQV = \\frac{ K ( 100^2 - \\sum_{ i = 1 }^K p_i^2 ) }{ 100^2 ( K - 1 ) } = \\frac{ K }{ K - 1 } ( 1 - \\sum_{ i = 1 }^K ( p_i / 100 )^2 )"
},
{
"math_id": 84,
"text": " E_a = \\sum_{i = 1}^a p_i log ( p_i ) "
},
{
"math_id": 85,
"text": " H = \\sum_{ i = 1 }^r \\frac{ n_i ( E - E_i ) }{ NE } "
},
{
"math_id": 86,
"text": " D = \\frac{ 1 }{ 2 } \\sum_{ i = 1 }^K \\left| \\frac{ A_i }{ A } - \\frac{ B_i }{ B } \\right| "
},
{
"math_id": 87,
"text": " A = \\sum_{ i = 1 }^K A_i "
},
{
"math_id": 88,
"text": " B = \\sum_{ i = 1 }^K B_i "
},
{
"math_id": 89,
"text": " GT = D \\left( 1 - \\frac{ A }{ A + B } \\right)"
},
{
"math_id": 90,
"text": " SI = \\frac{ 1 }{ 2 }\\sum_{ i = 1 }^K \\left| \\frac{ A_i }{ A } - \\frac{ t_i - A_i }{ T - A } \\right| "
},
{
"math_id": 91,
"text": " T = \\sum_{ i = 1 }^K t_i "
},
{
"math_id": 92,
"text": " H = 1 - \\sum_{ i = 1}^K \\sum_{ j = 1 }^i \\sqrt{ p_i p_j } "
},
{
"math_id": 93,
"text": " L_{xy} = \\frac 1 N \\sum_{i = 1}^K \\frac{ X_i Y_i }{ X_\\mathrm{tot} } "
},
{
"math_id": 94,
"text": " I_R = \\frac{ p_{ xx } - p_x } { 1 - p_x } "
},
{
"math_id": 95,
"text": " p_{xx} = \\frac{ \\sum_{ i = 1 }^K x_i p_i }{ N_x } "
},
{
"math_id": 96,
"text": " II = \\sum_{ i = 1 }^K \\frac{ A_i }{ A } \\frac{ A_i }{ t_i } "
},
{
"math_id": 97,
"text": " MII = \\frac{ II - \\frac{ A }{ T } }{ 1 - \\frac{ A }{ T } } "
},
{
"math_id": 98,
"text": " GS = \\frac 1 2 \\sum_{i = 1}^K \\left| \\frac{A_i} A - \\frac{t_i} T \\right| "
},
{
"math_id": 99,
"text": " IE = \\sum_{ i = 1 }^K \\frac{ A_i }{ A } \\frac{ B_i }{ t_i } "
},
{
"math_id": 100,
"text": " O = \\frac{ a }{ \\sqrt{ ( a + b )( a + c ) } } "
},
{
"math_id": 101,
"text": " K = \\frac{ a }{ 2 } \\left( \\frac{ 1 }{ a + b } + \\frac{ 1 }{ a + c } \\right) "
},
{
"math_id": 102,
"text": " Q = \\frac{ ad - bc }{ ad + bc }"
},
{
"math_id": 103,
"text": " Y = \\frac{ \\sqrt{ ad } - \\sqrt{ bc } }{ \\sqrt{ ad } + \\sqrt{ bc } }"
},
{
"math_id": 104,
"text": " BUB = \\frac{ \\sqrt{ ad } + a }{ \\sqrt{ ad } + a + b + c } = \\frac{ \\sqrt{ ad } + a }{ N + \\sqrt{ ad } - d } = \n1 - \\frac{ N - ( a - d ) }{ N + \\sqrt{ ad } - d } "
},
{
"math_id": 105,
"text": " H = \\frac{ ( a + d ) - ( b + c ) }{ a + b + c + d } = \\frac{ ( a + d ) - ( b + c ) }{ N } "
},
{
"math_id": 106,
"text": " RT = \\frac{ a + d }{ a + 2( b + c ) + d } = \\frac{ a + d }{ N + b + c }"
},
{
"math_id": 107,
"text": " SS = \\frac{ 2( a + d ) }{ 2( a + d ) + b + c } = \\frac{ 2( a + d ) }{ N + a + d }"
},
{
"math_id": 108,
"text": " SBD = \\sqrt{ \\frac{ b + c }{ a + b + c + d } } = \\sqrt{ \\frac{ b + c }{ N } }"
},
{
"math_id": 109,
"text": " RR = \\frac{ a }{ a + b + c + d } = \\frac{ a }{ N } "
},
{
"math_id": 110,
"text": " \\varphi = \\frac{ ad - bc }{ \\sqrt{ ( a + b ) ( a + c ) ( b + c ) ( c + d ) } } "
},
{
"math_id": 111,
"text": " S = \\frac{ b + c }{ b + c + d } = \\frac{ b + c }{ N - a }"
},
{
"math_id": 112,
"text": " S = \\frac{ a }{ a + \\min( b, c ) } "
},
{
"math_id": 113,
"text": " D = \\frac{ ad - bc }{ \\sqrt{ ( a + b + c + d ) ( a + b ) ( a + c ) } } = \\frac{ ad - bc }{ \\sqrt{ N ( a + b ) ( a + c ) } }"
},
{
"math_id": 114,
"text": " F = \\frac{ a N }{ ( a + b ) ( a + c ) } "
},
{
"math_id": 115,
"text": " F_{ A } = \\frac{ a ( n + \\sqrt{ n } ) } { a ( n + \\sqrt{ n } ) + \\frac{ 3 }{ 2 } bc } = 1 - \\frac{ 3 bc } { 2 a ( n + \\sqrt{ n } ) + 3 bc }"
},
{
"math_id": 116,
"text": " SM = \\frac{ a + d }{ a + b + c + d } = \\frac{ a + d }{ N }"
},
{
"math_id": 117,
"text": " F = \\frac{ ( a + b + c + d ) ( a - 0.5 )^2 }{ ( a + b ) ( a + c ) } = \\frac{ N ( a - 0.5 )^2 }{ ( a + b ) ( a + c ) }"
},
{
"math_id": 118,
"text": " S = \\log \\left[ \\frac{ n ( | ad - bc | - \\frac{ n }{ 2 } )^2 }{ ( a + b ) ( a + c ) ( b + d )( c + d ) } \\right] "
},
{
"math_id": 119,
"text": " M = \\frac{ 4 ( ad - bc ) }{ ( a + d )^2 + ( b + c )^2 } "
},
{
"math_id": 120,
"text": " P = \\frac{ ab + bc }{ ab + 2bc + cd } "
},
{
"math_id": 121,
"text": " HD = \\frac{ 1 }{ 2 } \\left( \\frac{ a }{ a + b + c } + \\frac{ d }{ b + c + d }\n\\right) = \\frac{ 1 }{ 2 } \\left( \\frac{ a }{ N - d } + \\frac{ d }{ N - a }\n\\right) "
},
{
"math_id": 122,
"text": " B = \\frac{ a - ( a + b )( a + c ) } { a + \\min( b, c ) - ( a + b )( a + c ) } "
},
{
"math_id": 123,
"text": " G = \\frac{ a - ( a + b )( a + c ) } { a + b + c - ( a + b )( a + c ) } = \\frac{ a - ( a + b )( a + c ) } { N - ( a + b )( a + c ) - d }"
},
{
"math_id": 124,
"text": " G = \\frac{ a - ( a + b )( a + c ) } { \\sqrt{ ( 1 - ( a + b )^2 ) ( 1 - ( a + c )^2 ) } } "
},
{
"math_id": 125,
"text": " G_M = \\frac{ a - ( a + b )( a + c ) } { 1 - \\frac{ | b - c | }{ 2 } - ( a + b )( a + c ) } "
},
{
"math_id": 126,
"text": " I = \\frac{ 2 ( ad - bc ) }{ K ( 2a + b + c ) } = \\frac{ 2 ( ad - bc ) }{ K ( N + a - d ) }"
},
{
"math_id": 127,
"text": " I = \\frac{ a - ( a + b )( a + c ) }{ ( a + c )( a + d )( b + d )( c + d ) } "
},
{
"math_id": 128,
"text": " \\operatorname{SD} = \\frac{ b + c }{ b + c + d } = \\frac{ b + c }{ N - a } "
},
{
"math_id": 129,
"text": "TI = 1 - \\frac{ a }{ b + c + d } = 1 - \\frac{ a }{ N - a }= \\frac{ N - 2a }{ N - a }"
},
{
"math_id": 130,
"text": " PSI = a - bc "
},
{
"math_id": 131,
"text": " CZI = \\frac{ \\sum \\min( x_i, x_j ) }{ \\sum ( x_i + x_j ) }"
},
{
"math_id": 132,
"text": " d ( \\mathbf{ p }, \\mathbf{ q } ) = \\sum_{ i = 1 }^n \\frac{ |p_i - q_i |}{ | p_i| + |q_i | } "
},
{
"math_id": 133,
"text": " CC = \\frac{ 2c } { s_1 + s_2 } "
},
{
"math_id": 134,
"text": " J = \\frac A { A + B + C } "
},
{
"math_id": 135,
"text": " J = \\frac 1 A \\left( \\frac 1 { A + B + C } \\right)"
},
{
"math_id": 136,
"text": " SE( J ) = \\sqrt{ \\frac{ A ( B + C ) } { N ( A + B + C )^3 } }"
},
{
"math_id": 137,
"text": " D = \\frac{ 2A }{ 2A + B + C } "
},
{
"math_id": 138,
"text": " M = \\frac{ N - B - C }{ N } = 1 - \\frac{ B + C }{ N }"
},
{
"math_id": 139,
"text": " I_m = \\frac { \\sum x ( x - 1 ) } { n m ( m - 1 ) } "
},
{
"math_id": 140,
"text": " I_m = n \\frac{ \\sum x^2 - \\sum x } { \\left( \\sum x \\right)^2 - \\sum x } "
},
{
"math_id": 141,
"text": " I_m = \\frac { n\\ IMC } { nm - 1 } "
},
{
"math_id": 142,
"text": " I_m \\left( \\sum x - 1 \\right) + n - \\sum x "
},
{
"math_id": 143,
"text": " z = \\frac { I_m - 1 } { 2 / n m^2 } "
},
{
"math_id": 144,
"text": "\n\nC_D = \\frac{ 2 \\sum_{ i = 1 }^S x_i y_i } { ( D_x + D_y ) XY }\n\n"
},
{
"math_id": 145,
"text": "\n\nC_H = \\frac{ 2 \\sum_{ i = 1 }^S x_i y_i }{ \\left( { \\sum_{ i = 1}^S x_i^2 \\over X^2 } + {\\sum_{ i = 1 }^S y_i^2 \\over Y^2 } \\right) X Y }\n\n"
},
{
"math_id": 146,
"text": " M_u = \\frac { \\chi^2_{ 0.975 } - k + \\sum x } { \\sum x - 1 } "
},
{
"math_id": 147,
"text": " M_c = \\frac { \\chi^2_{ 0.025 } - k + \\sum x } { \\sum x - 1 } "
},
{
"math_id": 148,
"text": " I_p = 0.5 + 0.5 \\left( \\frac { I_d - M_c } { k - M_c } \\right) "
},
{
"math_id": 149,
"text": " I_p = 0.5 \\left( \\frac { I_d - 1 } { M_u - 1 } \\right) "
},
{
"math_id": 150,
"text": " I_p = -0.5 \\left( \\frac { I_d - 1 } { M_u - 1 } \\right) "
},
{
"math_id": 151,
"text": " I_p = -0.5 + 0.5 \\left( \\frac { I_d - M_u } { M_u } \\right) "
},
{
"math_id": 152,
"text": " E_1 = \\frac{ I - I_\\min }{ I_\\max - I_\\min }"
},
{
"math_id": 153,
"text": " E_2 = \\frac{ I }{ I_\\max } "
},
{
"math_id": 154,
"text": " H = \\sqrt{ \\frac{ p_{\\max} ( 1- p_{\\min} ) } { p_{\\min} (1-p_{\\max} ) } }"
},
{
"math_id": 155,
"text": " S = \\frac{ | A \\cap B | }{ | A \\cap B | + \\alpha | A - B | + \\beta | B - A | } "
},
{
"math_id": 156,
"text": " S_1 = \\frac{ | A \\cap B | }{ | A \\cap B |+ \\beta \\left( \\alpha a + ( 1 - \\alpha )b \\right) }"
},
{
"math_id": 157,
"text": " a = \\min \\left( | X - Y |, | Y - X |\\right ) "
},
{
"math_id": 158,
"text": " b = \\max \\left( | X - Y |, | Y - X | \\right) "
},
{
"math_id": 159,
"text": " SS( A, B ) = \\frac{ | d( A ) \\cap d( B ) | }{ | d( A ) + d( B ) | } "
},
{
"math_id": 160,
"text": " S2 = \\frac{ | d( A ) \\cap d( B ) | }{ \\min ( | d( A ) |, | d( B ) ) | } "
},
{
"math_id": 161,
"text": " S3 = \\frac{ 2 | d( A ) \\cap d( B ) | }{ | d( A ) + d( B ) | } "
},
{
"math_id": 162,
"text": " d_{ jk } = \\sqrt { \\sum_{ i = 1 }^N ( x_{ ij } - x_{ ik } )^2 } "
},
{
"math_id": 163,
"text": " GD = \\frac{ \\Sigma_{ i = 1 }^n w_i d_i }{ \\Sigma_{ i = 1 }^n w_i } "
},
{
"math_id": 164,
"text": " d_{ jk } = \\sum_{ i = 1 }^N | x_{ ij } - x_{ ik } | "
},
{
"math_id": 165,
"text": " D_{PQ} = \\frac{ 1 }{ r } \\sum_{ j = 1 }^r \\sum_{ i = 1 }^k | p_{ ji } - q_{ ji } |"
},
{
"math_id": 166,
"text": " A = \\sum x_{ ij } "
},
{
"math_id": 167,
"text": " B = \\sum x_{ ik } "
},
{
"math_id": 168,
"text": " J = \\sum \\min ( x_{ ij }, x_{ jk } ) "
},
{
"math_id": 169,
"text": " d_{ jk } = A + B - 2J "
},
{
"math_id": 170,
"text": " d_{ jk } = \\frac{ A + B - 2J }{ A + B } "
},
{
"math_id": 171,
"text": " d_{ jk } = \\frac{ A + B - 2J }{ A + B - J } "
},
{
"math_id": 172,
"text": " d_{ jk } = 1 - \\frac{ 1 }{ 2 } \\left( \\frac{ J }{ A } + \\frac{ J }{ B } \\right) "
},
{
"math_id": 173,
"text": " c_a = \\sum^a_{ i = 1 } p_i "
},
{
"math_id": 174,
"text": " D = 2 \\sum_{ a = 1 }^K \\frac{ d_a }{ K - 1 }"
},
{
"math_id": 175,
"text": " H = \\frac{ 1 }{ N - 1 } \\frac{ s^2 }{ m^2 } "
},
{
"math_id": 176,
"text": " PCI = \\frac{ X_t }{ Z } \\left[ 1 - \\left| \\frac{ \\sum_{ i = 1 }^{ r_+ } X_+ }{ X_t } - \\frac{ \\sum _{ i = 1 }^{ r_- } X_-} { X_t } \\right| \\right] "
},
{
"math_id": 177,
"text": " X_t = \\sum_{ i = 1 }^{ r_+ } | X_+ | + \\sum_{ i = 1 }^{ r_- } | X_- | "
},
{
"math_id": 178,
"text": " PCI_2 = \\frac{ \\sum_{ i = 1 }^K \\sum_{ j = 1 }^i k_i k_j d_{ ij } }{ \\delta }"
},
{
"math_id": 179,
"text": " \\delta = \\frac{ N^2 }{ 2 } d_\\max "
},
{
"math_id": 180,
"text": " \\delta = \\frac{ N^2 - 1 }{ 2 } d_\\max "
},
{
"math_id": 181,
"text": "D_1: d_{ ij } = | r_i - r_j | - 1 "
},
{
"math_id": 182,
"text": " D_2: d_{ ij } = | r_i - r_j | "
},
{
"math_id": 183,
"text": " D_3: d_{ ij } = | r_i - r_j |^p "
},
{
"math_id": 184,
"text": " Dp_{ ij }: d_{ ij } = [ | r_i - r_j | - ( m - 1 ) ]^p "
},
{
"math_id": 185,
"text": " A = U \\left( 1 - \\frac{ S - 1 }{ K - 1 } \\right) "
},
{
"math_id": 186,
"text": " A_\\mathrm{overall} = \\sum w_i A_i "
},
{
"math_id": 187,
"text": " p = \\prod_{ i = 1 }^n \\left( 1 - \\frac{ i }{ k } \\right) "
},
{
"math_id": 188,
"text": " p = \\exp\\left( \\frac{ -n^2 } { 2k } \\right) "
},
{
"math_id": 189,
"text": " \\log_e \\left( 1 - \\frac{ i }{ k } \\right) \\approx - \\frac{ i }{ k } "
},
{
"math_id": 190,
"text": " n = 1.2 \\sqrt{ k } "
},
{
"math_id": 191,
"text": " n = 2.448 \\sqrt{ k } \\approx 2.5 \\sqrt{ k } "
},
{
"math_id": 192,
"text": " n = 1.2 \\sqrt{ \\frac{ 1 }{ \\sum_{ i = 1 }^k \\frac{ 1 }{ c_i } } } "
},
{
"math_id": 193,
"text": " n \\approx 2.5 \\sqrt{ \\frac{ 1 }{ \\sum_{ i = 1 }^k \\frac{ 1 }{ c_i } } } "
},
{
"math_id": 194,
"text": " n = 1.2 \\sqrt { \\frac{ k }{ 2j + 1 } }"
},
{
"math_id": 195,
"text": " n \\approx 2.5 \\sqrt { \\frac{ k }{ 2j + 1 } }"
},
{
"math_id": 196,
"text": " - \\log_{10} \\left( \\frac{ 1 + 2 d }{ 365 } \\right),"
},
{
"math_id": 197,
"text": "n"
},
{
"math_id": 198,
"text": "S = \\{o_1, \\ldots, o_n\\}"
},
{
"math_id": 199,
"text": "S"
},
{
"math_id": 200,
"text": "X = \\{X _1, \\ldots, X_r \\}"
},
{
"math_id": 201,
"text": "Y = \\{ Y_1, \\ldots, Y_s \\}"
},
{
"math_id": 202,
"text": "a"
},
{
"math_id": 203,
"text": "X"
},
{
"math_id": 204,
"text": "Y"
},
{
"math_id": 205,
"text": "b"
},
{
"math_id": 206,
"text": "c"
},
{
"math_id": 207,
"text": "d"
},
{
"math_id": 208,
"text": "R"
},
{
"math_id": 209,
"text": " R = \\frac{ a + b }{ a + b + c + d } = \\frac{ a+ b }{ { n \\choose 2 } }"
},
{
"math_id": 210,
"text": "a + b"
},
{
"math_id": 211,
"text": "c + d"
},
{
"math_id": 212,
"text": "X = \\{ X_1, X_2, \\ldots , X_r \\}"
},
{
"math_id": 213,
"text": "Y = \\{ Y_1, Y_2, \\ldots , Y_s \\}"
},
{
"math_id": 214,
"text": " \\left[ n_{ ij } \\right] "
},
{
"math_id": 215,
"text": "n_{ ij }"
},
{
"math_id": 216,
"text": "X_i"
},
{
"math_id": 217,
"text": "Y_j"
},
{
"math_id": 218,
"text": "n_{ ij }= |X_i \\cap Y_j | "
},
{
"math_id": 219,
"text": "\\text{AdjustedIndex} = \\frac{ \\text{Index} - \\text{ExpectedIndex} }{ \\text{MaxIndex} - \\text{ExpectedIndex}}, "
},
{
"math_id": 220,
"text": " \\text{ARI} = \\frac{ \\sum_{ij} \\binom{n_{ij}} 2 - \\left. \\left[ \\sum_i \\binom{a_i} 2 \\sum_j \\binom{b_j} 2\\right] \\right/ \\binom n 2} {\\frac 1 2 \\left[ \\sum_i \\binom{a_i}2 + \\sum_j \\binom{b_j} 2 \\right] - \\left. \\left[ \\sum_i \\binom{a_i} 2 \\sum_j \\binom{b_j} 2 \\right] \\right/ \\binom n 2} "
},
{
"math_id": 221,
"text": " n_{ij}, a_i, b_j "
}
] | https://en.wikipedia.org/wiki?curid=9252619 |
9252911 | Deviation (statistics) | Difference between a variable's observed value and a reference value
In mathematics and statistics, deviation serves as a measure to quantify the disparity between an observed value of a variable and another designated value, frequently the mean of that variable. Deviations with respect to the sample mean and the population mean (or "true value") are called "residuals" and "errors", respectively. The sign of the deviation reports the direction of that difference: the deviation is positive when the observed value exceeds the reference value. The absolute value of the deviation indicates the size or magnitude of the difference. In a given sample, there are as many deviations as sample points. Summary statistics can be derived from a set of deviations, such as the "standard deviation" and the "mean absolute deviation", measures of dispersion, and the "mean signed deviation", a measure of bias.
The deviation of each data point is calculated by subtracting the mean of the data set from the individual data point. Mathematically, the deviation d of a data point x in a data set is given by
formula_0
This calculation represents the "distance" of a data point from the mean and provides information about how much individual values vary from the average. Positive deviations indicate values above the mean, while negative deviations indicate values below the mean.
The sum of squared deviations is a key component in the calculation of variance, another measure of the spread or dispersion of a data set. Variance is calculated by averaging the squared deviations. Deviation is a fundamental concept in understanding the distribution and variability of data points in statistical analysis.
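A minimal Python sketch (not from the source) of these quantities for a small invented data set; the deviations about the sample mean sum to zero, and squaring and averaging them gives the population-form variance.

```python
# Illustrative sketch: deviations from the mean and summary quantities built from them.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # invented observations
mean = sum(data) / len(data)

deviations = [x - mean for x in data]
mean_signed = sum(deviations) / len(deviations)               # always 0 about the sample mean
mean_absolute = sum(abs(d) for d in deviations) / len(deviations)
variance = sum(d * d for d in deviations) / len(deviations)   # population form
print(deviations, mean_signed, mean_absolute, variance)
```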
Types.
A deviation that is a difference between an observed value and the "true value" of a quantity of interest (where "true value" denotes the Expected Value, such as the population mean) is an error.
Signed deviations.
A deviation that is the difference between the observed value and an estimate of the true value (e.g. the sample mean) is a "residual". These concepts are applicable for data at the interval and ratio levels of measurement.
Unsigned or absolute deviation.
formula_1
where "D""i" is the absolute deviation, "x""i" is the data element and "m"("X") is the chosen measure of central tendency of the data set, often the mean (formula_2) or the median.
The average absolute deviation (AAD) in statistics is a measure of the dispersion or spread of a set of data points around a central value, usually the mean or median. It is calculated by taking the average of the absolute differences between each data point and the chosen central value. AAD provides a measure of the typical magnitude of deviations from the central value in a dataset, giving insights into the overall variability of the data.
Least absolute deviation (LAD) is a statistical method used in regression analysis to estimate the coefficients of a linear model. Unlike the more common least squares method, which minimizes the sum of squared vertical distances (residuals) between the observed and predicted values, the LAD method minimizes the sum of the absolute vertical distances.
In the context of linear regression, if ("x"1,"y"1), ("x"2,"y"2), ... are the data points, and "a" and "b" are the coefficients to be estimated for the linear model
formula_3
the least absolute deviation estimates ("a" and "b") are obtained by minimizing the sum of the absolute residuals |"y""i" − ("b" + "a""x""i")|.
The LAD method is less sensitive to outliers compared to the least squares method, making it a robust regression technique in the presence of skewed or heavy-tailed residual distributions.
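A rough Python sketch (not from the source) of a least absolute deviation fit; LAD problems are usually solved by linear programming or iteratively reweighted least squares, and the general-purpose optimiser below is used only to keep the illustration short. The data are invented, with one deliberate outlier.

```python
# Illustrative sketch: LAD fit of y = b + a*x by minimising the sum of absolute residuals.
import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.2, 12.0])   # last point is an outlier

def lad_loss(params):
    a, b = params
    return np.sum(np.abs(y - (b + a * x)))

res = minimize(lad_loss, x0=[1.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = res.x
print(a_hat, b_hat)   # the outlier pulls these much less than least squares would
```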
Summary statistics.
Mean signed deviation.
For an unbiased estimator, the signed deviations of the observations from the unobserved population parameter value average zero over an arbitrarily large number of samples. However, by construction the average of signed deviations of values from the sample mean value is always zero, though the average signed deviation from another measure of central tendency, such as the sample median, need not be zero.
Mean Signed Deviation is a statistical measure used to assess the average deviation of a set of values from a central point, usually the mean. It is calculated by taking the arithmetic mean of the signed differences between each data point and the mean of the dataset.
The term "signed" indicates that the deviations are considered with their respective signs, meaning whether they are above or below the mean. Positive deviations (above the mean) and negative deviations (below the mean) are included in the calculation. The mean signed deviation provides a measure of the average distance and direction of data points from the mean, offering insights into the overall trend and distribution of the data.
Dispersion.
Statistics of the distribution of deviations are used as measures of statistical dispersion.
Normalization.
Deviations, which measure the difference between observed values and some reference point, inherently carry units corresponding to the measurement scale used. For example, if lengths are being measured, deviations would be expressed in units like meters or feet. To make deviations unitless and facilitate comparisons across different datasets, one can nondimensionalize.
One common method involves dividing deviations by a measure of scale (statistical dispersion), with the population standard deviation used for standardizing or the sample standard deviation for studentizing (e.g., Studentized residual).
Another approach to nondimensionalization focuses on scaling by location rather than dispersion. The percent deviation offers an illustration of this method, calculated as the difference between the observed value and the accepted value, divided by the accepted value, and then multiplied by 100%. By scaling the deviation based on the accepted value, this technique allows for expressing deviations in percentage terms, providing a clear perspective on the relative difference between the observed and accepted values. Both methods of nondimensionalization serve the purpose of making deviations comparable and interpretable beyond the specific measurement units.
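A small Python sketch (not from the source) of both forms of nondimensionalization for invented measurements: standard scores obtained by dividing deviations by the sample standard deviation, and percent deviations relative to an accepted value.

```python
# Illustrative sketch: making deviations unitless by dispersion and by location.
import statistics

measurements = [340.0, 345.0, 342.0, 339.0, 344.0]   # invented measurements
accepted = 343.0                                     # accepted reference value

mean = statistics.mean(measurements)
sd = statistics.stdev(measurements)                  # sample standard deviation

standard_scores = [(x - mean) / sd for x in measurements]
percent_deviations = [(x - accepted) / accepted * 100 for x in measurements]
print(standard_scores)
print(percent_deviations)
```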
Examples.
In one example, a series of measurements of the speed of sound in a particular medium is taken. The accepted or expected value for the speed of sound in this medium, based on theoretical calculations, is 343 meters per second.
Now, during an experiment, multiple measurements are taken by different researchers. Researcher A measures the speed of sound as 340 meters per second, resulting in a deviation of −3 meters per second from the expected value. Researcher B, on the other hand, measures the speed as 345 meters per second, resulting in a deviation of +2 meters per second.
In this scientific context, deviation helps quantify how individual measurements differ from the theoretically predicted or accepted value. It provides insights into the accuracy and precision of experimental results, allowing researchers to assess the reliability of their data and potentially identify factors contributing to discrepancies.
In another example, suppose a chemical reaction is expected to yield 100 grams of a specific compound based on stoichiometry. However, in an actual laboratory experiment, several trials are conducted with different conditions.
In Trial 1, the actual yield is measured to be 95 grams, resulting in a deviation of −5 grams from the expected yield. In Trial 2, the actual yield is measured to be 102 grams, resulting in a deviation of +2 grams. These deviations from the expected value provide valuable information about the efficiency and reproducibility of the chemical reaction under different conditions.
Scientists can analyze these deviations to optimize reaction conditions, identify potential sources of error, and improve the overall yield and reliability of the process. The concept of deviation is crucial in assessing the accuracy of experimental results and making informed decisions to enhance the outcomes of scientific experiments.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d = x - mean"
},
{
"math_id": 1,
"text": "D_i = |x_i - m(X)|,"
},
{
"math_id": 2,
"text": "\\overline{x}"
},
{
"math_id": 3,
"text": "y= b + (a * x)"
}
] | https://en.wikipedia.org/wiki?curid=9252911 |
92550 | Omar Khayyam | Persian polymath and poet (1048–1131 CE)
Ghiyāth al-Dīn Abū al-Fatḥ ʿUmar ibn Ibrāhīm Nīsābūrī (18 May 1048 – 4 December 1131), commonly known as Omar Khayyam, was a Persian polymath, known for his contributions to mathematics, astronomy, philosophy, and poetry. He was born in Nishapur, the initial capital of the Seljuk Empire, and lived during the period of the Seljuk dynasty, around the time of the First Crusade.
As a mathematician, he is most notable for his work on the classification and solution of cubic equations, where he provided a geometric formulation based on the intersection of conics. He also contributed to a deeper understanding of Euclid's parallel axiom. As an astronomer, he calculated the duration of the solar year with remarkable precision and accuracy, and designed the Jalali calendar, a solar calendar with a very precise 33-year intercalation cycle
which provided the basis for the Persian calendar that is still in use after nearly a millennium.
There is a tradition of attributing poetry to Omar Khayyam, written in the form of quatrains ("rubāʿiyāt" ). This poetry became widely known to the English-reading world in a translation by Edward FitzGerald ("Rubaiyat of Omar Khayyam", 1859), which enjoyed great success in the Orientalism of the "fin de siècle".
Life.
Omar Khayyam was born in 1048 in Nishapur, a metropolis in Khorasan province, to a family of Persian stock. In medieval Persian texts he is usually simply called "Omar Khayyam". Although open to doubt, it has often been assumed that his forebears followed the trade of tent-making, since "Khayyam" means 'tent-maker' in Arabic. The historian Bayhaqi, who was personally acquainted with Khayyam, provides the full details of his horoscope: "he was Gemini, the sun and Mercury being in the ascendant[...]". This was used by modern scholars to establish his date of birth as 18 May 1048.
Khayyam's boyhood was spent in Nishapur, a leading metropolis under the Great Seljuq Empire, which had earlier been a major center of the Zoroastrian religion. His full name, as it appears in Arabic sources, was "Abu’l Fath Omar ibn Ibrahim al-Khayyam". His gifts were recognized by his early tutors who sent him to study under Imam Muwaffaq Nishaburi, the greatest teacher of the Khorasan region who tutored the children of the highest nobility, and Khayyam developed a firm friendship with him through the years. Khayyam might have met and studied with Bahmanyar, a disciple of Avicenna. After studying science, philosophy, mathematics and astronomy at Nishapur, about the year 1068 he traveled to the province of Bukhara, where he frequented the renowned library of the Ark. In about 1070 he moved to Samarkand, where he started to compose his famous "Treatise on Algebra" under the patronage of Abu Tahir Abd al-Rahman ibn ʿAlaq, the governor and chief judge of the city. Khayyam was kindly received by the Karakhanid ruler Shams al-Mulk Nasr, who according to Bayhaqi, would "show him the greatest honour, so much so that he would seat [Khayyam] beside him on his throne".
In 1073–4 peace was concluded with Sultan Malik-Shah I who had made incursions into Karakhanid dominions. Khayyam entered the service of Malik-Shah in 1074 when he was invited by the Grand Vizier Nizam al-Mulk to meet Malik-Shah in the city of Marv. Khayyam was subsequently commissioned to set up an observatory in Isfahan and lead a group of scientists in carrying out precise astronomical observations aimed at the revision of the Persian calendar. The undertaking probably began with the opening of the observatory in 1074 and ended in 1079, when Omar Khayyam and his colleagues concluded their measurements of the length of the year, reporting it as 365.24219858156 days. Given that the length of the year is changing in the sixth decimal place over a person's lifetime, this is outstandingly accurate. For comparison, the length of the year at the end of the 19th century was 365.242196 days, while today it is 365.242190 days.
After the death of Malik-Shah and his vizier (murdered, it is thought, by the Ismaili order of Assassins), Khayyam fell from favor at court, and as a result, he soon set out on his pilgrimage to Mecca. A possible ulterior motive for his pilgrimage reported by Al-Qifti, was a public demonstration of his faith with a view to allaying suspicions of skepticism and confuting the allegations of unorthodoxy (including possible sympathy or adherence to Zoroastrianism) levelled at him by a hostile clergy. He was then invited by the new Sultan Sanjar to Marv, possibly to work as a court astrologer. He was later allowed to return to Nishapur owing to his declining health. Upon his return, he seems to have lived the life of a recluse.
Omar Khayyam died at the age of 83 in his hometown of Nishapur on 4 December 1131, and he is buried in what is now the Mausoleum of Omar Khayyam. One of his disciples Nizami Aruzi relates the story that sometime during 1112–3 Khayyam was in Balkh in the company of Isfizari (one of the scientists who had collaborated with him on the Jalali calendar) when he made a prophecy that "my tomb shall be in a spot where the north wind may scatter roses over it". Four years after his death, Aruzi located his tomb in a cemetery in a then large and well-known quarter of Nishapur on the road to Marv. As it had been foreseen by Khayyam, Aruzi found the tomb situated at the foot of a garden-wall over which pear trees and peach trees had thrust their heads and dropped their flowers so that his tombstone was hidden beneath them.
Mathematics.
Khayyam was famous during his life as a mathematician. His surviving mathematical works include (i) "Commentary on the Difficulties Concerning the Postulates of Euclid's Elements", completed in December 1077, (ii) "Treatise On the Division of a Quadrant of a Circle", undated but completed prior to the "Treatise on Algebra", and (iii) "Treatise on Algebra", most likely completed in 1079. He furthermore wrote a treatise on the binomial theorem and extracting the nth root of natural numbers, which has been lost.
Theory of parallels.
Part of Khayyam's "Commentary on the Difficulties Concerning the Postulates of Euclid's Elements" deals with the parallel axiom. The treatise of Khayyam can be considered the first treatment of the axiom not based on petitio principii, but on a more intuitive postulate. Khayyam refutes the previous attempts by other mathematicians to "prove" the proposition, mainly on grounds that each of them had postulated something that was by no means easier to admit than the Fifth Postulate itself. Drawing upon Aristotle's views, he rejects the usage of movement in geometry and therefore dismisses the different attempt by Ibn al-Haytham. Unsatisfied with the failure of mathematicians to prove Euclid's statement from his other postulates, Khayyam tried to connect the axiom with the Fourth Postulate, which states that all right angles are equal to one another.
Khayyam was the first to consider the three distinct cases of acute, obtuse, and right angle for the summit angles of a Khayyam-Saccheri quadrilateral. After proving a number of theorems about them, he showed that Postulate V follows from the right angle hypothesis, and refuted the obtuse and acute cases as self-contradictory. His elaborate attempt to prove the parallel postulate was significant for the further development of geometry, as it clearly shows the possibility of non-Euclidean geometries. The hypotheses of acute, obtuse, and right angles are now known to lead respectively to the non-Euclidean hyperbolic geometry of Gauss-Bolyai-Lobachevsky, to that of Riemannian geometry, and to Euclidean geometry.
Tusi's commentaries on Khayyam's treatment of parallels made their way to Europe. John Wallis, professor of geometry at Oxford, translated Tusi's commentary into Latin. Jesuit geometer Girolamo Saccheri, whose work ("Euclides ab omni naevo vindicatus", 1733) is generally considered the first step in the eventual development of non-Euclidean geometry, was familiar with the work of Wallis. The American historian of mathematics David Eugene Smith mentions that Saccheri "used the same lemma as the one of Tusi, even lettering the figure in precisely the same way and using the lemma for the same purpose". He further says that "Tusi distinctly states that it is due to Omar Khayyam, and from the text, it seems clear that the latter was his inspirer."
Real number concept.
This treatise on Euclid contains another contribution dealing with the theory of proportions and with the compounding of ratios. Khayyam discusses the relationship between the concept of ratio and the concept of number and explicitly raises various theoretical difficulties. In particular, he contributes to the theoretical study of the concept of irrational number. Displeased with Euclid's definition of equal ratios, he redefined the concept of a number by the use of a continued fraction as the means of expressing a ratio. Youschkevitch and Rosenfeld argue that "by placing irrational quantities and numbers on the same operational scale, [Khayyam] began a true revolution in the doctrine of number." Likewise, it was noted by D. J. Struik that Omar was "on the road to that extension of the number concept which leads to the notion of the real number."
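The idea of expressing a ratio by a continued fraction can be made concrete with a short sketch. The code below is a modern illustration, not a rendering of Khayyam's own procedure or notation: two pairs of magnitudes stand in the same ratio exactly when the Euclidean algorithm applied to them yields the same sequence of quotients.
<syntaxhighlight lang="python">
from fractions import Fraction

def continued_fraction(a: int, b: int) -> list:
    """Expand the ratio a : b as a simple continued fraction [q0; q1, q2, ...]
    by repeated division, the same process of repeated subtraction that
    underlies Euclid's treatment of ratios."""
    terms = []
    while b:
        q, r = divmod(a, b)
        terms.append(q)
        a, b = b, r
    return terms

print(continued_fraction(30, 7))    # [4, 3, 2]: 30/7 = 4 + 1/(3 + 1/2)
print(continued_fraction(60, 14))   # [4, 3, 2]: the same ratio, the same expansion
assert Fraction(30, 7) == Fraction(60, 14)
</syntaxhighlight>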
Geometric algebra.
Rashed and Vahabzadeh (2000) have argued that because of his thoroughgoing geometrical approach to algebraic equations, Khayyam can be considered the precursor of Descartes in the invention of analytic geometry. In the "Treatise on the Division of a Quadrant of a Circle" Khayyam applied algebra to geometry. In this work, he devoted himself mainly to investigating whether it is possible to divide a circular quadrant into two parts such that the line segments projected from the dividing point to the perpendicular diameters of the circle form a specific ratio. His solution, in turn, employed several curve constructions that led to equations containing cubic and quadratic terms.
Solution of cubic equations.
Khayyam seems to have been the first to conceive a general theory of cubic equations, and the first to geometrically solve every type of cubic equation, so far as positive roots are concerned. The "Treatise on Algebra" contains his work on cubic equations. It is divided into three parts: (i) equations which can be solved with compass and straight edge, (ii) equations which can be solved by means of conic sections, and (iii) equations which involve the inverse of the unknown.
Khayyam produced an exhaustive list of all possible equations involving lines, squares, and cubes. He considered three binomial equations, nine trinomial equations, and seven tetranomial equations. For the first- and second-degree polynomials, he provided numerical solutions by geometric construction. He concluded that there are fourteen different types of cubics that cannot be reduced to an equation of a lesser degree. For these he could not accomplish the construction of his unknown segment with compass and straight edge. He proceeded to present geometric solutions to all types of cubic equations using the properties of conic sections. The prerequisite lemmas for Khayyam's geometrical proof include Euclid VI, Prop 13, and Apollonius II, Prop 12. The positive root of a cubic equation was determined as the abscissa of a point of intersection of two conics, for instance, the intersection of two parabolas, or the intersection of a parabola and a circle, etc. However, he acknowledged that the arithmetic problem of these cubics was still unsolved, adding that "possibly someone else will come to know it after us". This task remained open until the sixteenth century, when an algebraic solution of the cubic equation was found in its generality by Cardano, Del Ferro, and Tartaglia in Renaissance Italy.
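One construction of this kind is easy to check numerically. A cubic of the form x^3 + px = q (with p, q > 0) is solved by intersecting the parabola x^2 = sqrt(p)·y with the circle of diameter q/p erected on the x-axis: the abscissa of their common point other than the origin is the required root. The sketch below is a modern reconstruction with invented parameters, not Khayyam's own notation or procedure.
<syntaxhighlight lang="python">
import math

def cubic_root_by_conics(p: float, q: float, steps: int = 200_000) -> float:
    """Positive root of x^3 + p*x = q (p, q > 0), located as the x-coordinate
    where the parabola x^2 = sqrt(p)*y meets the circle x^2 + y^2 = (q/p)*x."""
    d = q / p                                   # diameter of the circle on the x-axis
    best_x, best_gap = 0.0, float("inf")
    for i in range(1, steps):
        x = d * i / steps                       # scan across the circle's span
        y_parabola = x * x / math.sqrt(p)       # height of the parabola at x
        gap = abs(y_parabola ** 2 - (d * x - x * x))   # zero exactly at the intersection
        if gap < best_gap:
            best_x, best_gap = x, gap
    return best_x

p, q = 2.0, 5.0
x = cubic_root_by_conics(p, q)
print(x, x**3 + p * x)   # x is about 1.33, and x^3 + 2x is about 5
</syntaxhighlight>
Substituting the parabola into the circle gives p·x^2 + x^4 = q·x, that is, x^3 + px = q for x ≠ 0, which is why the intersection delivers the root.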
<templatestyles src="Template:Quote_box/styles.css" />
Whoever thinks algebra is a trick in obtaining unknowns has thought it in vain. No attention should be paid to the fact that algebra and geometry are different in appearance. Algebras are geometric facts which are proved by propositions five and six of Book two of Elements.
—Omar Khayyam
In effect, Khayyam's work is an effort to unify algebra and geometry. This particular geometric solution of cubic equations was further investigated by M. Hachtroudi and extended to solving fourth-degree equations. Although similar methods had appeared sporadically since Menaechmus, and had been further developed by the 10th-century mathematician Abu al-Jud, Khayyam's work can be considered the first systematic study and the first exact method of solving cubic equations. The mathematician Woepcke (1851), who offered translations of Khayyam's algebra into French, praised him for his "power of generalization and his rigorously systematic procedure."
Binomial theorem and extraction of roots.
<templatestyles src="Template:Quote_box/styles.css" />
From the Indians one has methods for obtaining square and cube roots, methods based on knowledge of individual cases – namely the knowledge of the squares of the nine digits 1², 2², 3² (etc.) and their respective products, i.e. 2 × 3 etc. We have written a treatise on the proof of the validity of those methods and that they satisfy the conditions. In addition we have increased their types, namely in the form of the determination of the fourth, fifth, sixth roots up to any desired degree. No one preceded us in this and those proofs are purely arithmetic, founded on the arithmetic of "The Elements".
—Omar Khayyam, "Treatise on Algebra"
In his algebraic treatise, Khayyam alludes to a book he had written on the extraction of the formula_0th root of natural numbers using a law he had discovered which did not depend on geometric figures. This book was most likely titled the "Difficulties of Arithmetic" (), and is not extant. Based on the context, some historians of mathematics, such as D. J. Struik, believe that Omar must have known the formula for the expansion of the binomial formula_1, where "n" is a positive integer. The case of power 2 is explicitly stated in Euclid's Elements and the case of at most power 3 had been established by Indian mathematicians. Khayyam was the mathematician who noticed the importance of a general binomial theorem. The argument supporting the claim that Khayyam had a general binomial theorem is based on his ability to extract roots. One of Khayyam's predecessors, al-Karaji, had already discovered the triangular arrangement of the coefficients of binomial expansions that Europeans later came to know as Pascal's triangle; Khayyam popularized this triangular array in Iran, so that it is now known as Omar Khayyam's triangle.
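Since the treatise is lost, any reconstruction is conjectural, but the connection between the binomial expansion and root extraction can be illustrated with the classical digit-by-digit method, in which each trial digit d of the root is tested by expanding (10a + d)^n with binomial coefficients. The function below is a modern illustrative sketch, not Khayyam's procedure.
<syntaxhighlight lang="python">
from math import comb

def nth_root_digits(N: int, n: int) -> int:
    """Integer part of the n-th root of N, built one digit at a time: extend the
    partial root a by the largest digit d with (10*a + d)**n not exceeding the
    prefix of N processed so far."""
    digits = str(N)
    while len(digits) % n:            # pad so the digits split into groups of n
        digits = "0" + digits
    a, value = 0, 0
    for i in range(0, len(digits), n):
        value = value * 10**n + int(digits[i:i + n])
        for d in range(9, -1, -1):
            # (10a + d)^n expanded with binomial coefficients
            expanded = sum(comb(n, k) * (10 * a) ** (n - k) * d ** k for k in range(n + 1))
            if expanded <= value:
                a = 10 * a + d
                break
    return a

print(nth_root_digits(1860867, 3))   # 123, since 123**3 == 1860867
print(nth_root_digits(2, 2))         # 1, the integer part of the square root of 2
</syntaxhighlight>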
Astronomy.
In 1074–5, Omar Khayyam was commissioned by Sultan Malik-Shah to build an observatory at Isfahan and reform the Persian calendar. There was a panel of eight scholars working under the direction of Khayyam to make large-scale astronomical observations and revise the astronomical tables. Recalibrating the calendar fixed the first day of the year at the exact moment of the passing of the Sun's center across the vernal equinox. This marks the beginning of spring, or Nowrūz, a day on which the Sun enters the first degree of Aries before noon. The resultant calendar was named in Malik-Shah's honor as the Jalālī calendar, and was inaugurated on 15 March 1079. The observatory itself was disused after the death of Malik-Shah in 1092.
The Jalālī calendar was a true solar calendar where the duration of each month is equal to the time of the passage of the Sun across the corresponding sign of the Zodiac. The calendar reform introduced a unique 33-year intercalation cycle. As indicated by the works of Khazini, Khayyam's group implemented an intercalation system based on quadrennial and quinquennial leap years. Therefore, the calendar consisted of 25 ordinary years that included 365 days, and 8 leap years that included 366 days. The calendar remained in use across Greater Iran from the 11th to the 20th centuries. In 1911 the Jalali calendar became the official national calendar of Qajar Iran. In 1925 this calendar was simplified and the names of the months were modernized, resulting in the modern Iranian (Solar Hijri) calendar. The Jalali calendar is more accurate than the Gregorian calendar of 1582, with an error of one day accumulating over 5,000 years, compared to one day every 3,330 years in the Gregorian calendar. Moritz Cantor considered it the most perfect calendar ever devised.
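The quoted accuracy can be checked with a rough calculation. The year lengths below are modern approximate values used only for illustration, and the result depends on which mean year (tropical or vernal-equinox) is taken as the reference.
<syntaxhighlight lang="python">
jalali = 365 + 8 / 33          # 8 leap years per 33-year cycle -> 365.2424... days
gregorian = 365 + 97 / 400     # 97 leap years per 400 years    -> 365.2425 days
tropical = 365.24219           # approximate mean tropical year (modern value)

for name, year in [("Jalali", jalali), ("Gregorian", gregorian)]:
    drift = abs(year - tropical)   # days of error accumulated per year
    print(f"{name}: {year:.5f} days/year, about 1 day of drift in {1 / drift:,.0f} years")

# The Gregorian drift comes out near the often-quoted ~3,300 years; the Jalali
# drift is smaller, and its exact figure in the literature depends on the
# reference year length used for the comparison.
</syntaxhighlight>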
One of his pupils, Nizami Aruzi of Samarcand, relates that Khayyam apparently did not have a belief in astrology and divination: "I did not observe that he ("scil." Omar Khayyam) had any great belief in astrological predictions, nor have I seen or heard of any of the great [scientists] who had such belief." While working for Sultan Sanjar as an astrologer, he was asked to predict the weather – a job that he apparently did not do well. George Saliba explains that the term , used in various sources in which references to Khayyam's life and work could be found, has sometimes been incorrectly translated to mean astrology. He adds: "from at least the middle of the tenth century, according to Farabi's "Enumeration of the Sciences", that this science, , was already split into two parts, one dealing with astrology and the other with theoretical mathematical astronomy."
Other works.
Khayyam has a short treatise devoted to Archimedes' principle (in full title, "On the Deception of Knowing the Two Quantities of Gold and Silver in a Compound Made of the Two"). For a compound of gold adulterated with silver, he describes a method to measure more exactly the weight per capacity of each element. It involves weighing the compound both in air and in water, since weights are easier to measure exactly than volumes. By repeating the same procedure with pure gold and pure silver, one finds exactly how much heavier than water gold, silver, and the compound are. This treatise was extensively examined by Eilhard Wiedemann, who believed that Khayyam's solution was more accurate and sophisticated than that of Khazini and Al-Nayrizi, who also dealt with the subject elsewhere.
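The arithmetic behind such a hydrostatic weighing can be sketched as follows. The densities and weights are invented illustrative figures, not values from Khayyam's treatise, and the usual simplifying assumption is made that the volumes of the two metals simply add.
<syntaxhighlight lang="python">
def gold_fraction(weight_air: float, weight_water: float,
                  rho_gold: float = 19.3, rho_silver: float = 10.5) -> float:
    """Mass fraction of gold in a gold-silver compound, from its weight in air and
    in water. The loss of weight in water equals the weight of displaced water
    (Archimedes' principle), which gives the volume and hence the alloy's density."""
    volume = weight_air - weight_water          # in units where water has density 1
    rho_alloy = weight_air / volume
    # 1/rho_alloy = x/rho_gold + (1 - x)/rho_silver for gold mass-fraction x
    return (1 / rho_alloy - 1 / rho_silver) / (1 / rho_gold - 1 / rho_silver)

# A 1000 g compound that weighs 925 g in water displaces 75 g of water:
print(round(gold_fraction(1000.0, 925.0), 3))   # about 0.47, i.e. roughly half gold by mass
</syntaxhighlight>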
Another short treatise is concerned with music theory in which he discusses the connection between music and arithmetic. Khayyam's contribution was in providing a systematic classification of musical scales, and discussing the mathematical relationship among notes, minor, major and tetrachords.
Poetry.
The earliest allusion to Omar Khayyam's poetry is from the historian Imad ad-Din al-Isfahani, a younger contemporary of Khayyam, who explicitly identifies him as both a poet and a scientist (, 1174). One of the earliest specimens of Omar Khayyam's Rubaiyat is from Fakhr al-Din Razi. In his work (c. 1160), he quotes one of Khayyam's poems (corresponding to quatrain LXII of FitzGerald's first edition). Daya in his writings (, c. 1230) quotes two quatrains, one of which is the same as the one already reported by Razi. An additional quatrain is quoted by the historian Juvayni (, c. 1226–1283). In 1340 Jajarmi includes thirteen quatrains of Khayyam in his work containing an anthology of the works of famous Persian poets (), two of which have hitherto been known from the older sources. A comparatively late manuscript is the Bodleian MS. Ouseley 140, written in Shiraz in 1460, which contains 158 quatrains on 47 folia. The manuscript belonged to William Ouseley (1767–1842) and was purchased by the Bodleian Library in 1844.
There are occasional quotes of verses attributed to Khayyam in texts attributed to authors of the 13th and 14th centuries, but these are of doubtful authenticity, so that skeptical scholars point out that the entire tradition may be pseudepigraphic. Hans Heinrich Schaeder in 1934 commented that the name of Omar Khayyam "is to be struck out from the history of Persian literature" due to the lack of any material that could confidently be attributed to him. De Blois presents a bibliography of the manuscript tradition, concluding pessimistically that the situation has not changed significantly since Schaeder's time.:307
Five of the quatrains later attributed to Omar Khayyam are found as early as 30 years after his death, quoted in "Sindbad-Nameh". While this establishes that these specific verses were in circulation in Omar's time or shortly later, it does not imply that the verses must be his. De Blois concludes that at the least the process of attributing poetry to Omar Khayyam appears to have begun already in the 13th century.:305 Edward Granville Browne (1906) notes the difficulty of disentangling authentic from spurious quatrains: "while it is certain that Khayyam wrote many quatrains, it is hardly possible, save in a few exceptional cases, to assert positively that he wrote any of those ascribed to him".
In addition to the Persian quatrains, there are twenty-five Arabic poems attributed to Khayyam which are attested by historians such as al-Isfahani, Shahrazuri (, c. 1201–1211), Qifti (, 1255), and Hamdallah Mustawfi (, 1339).
Boyle emphasized that there are a number of other Persian scholars who occasionally wrote quatrains, including Avicenna, Ghazali, and Tusi. He concludes that it is also possible that for Khayyam poetry was an amusement of his leisure hours: "these brief poems seem often to have been the work of scholars and scientists who composed them, perhaps, in moments of relaxation to edify or amuse the inner circle of their disciples".
The poetry attributed to Omar Khayyam has contributed greatly to his popular fame in the modern period as a direct result of the extreme popularity of the translation of such verses into English by Edward FitzGerald (1859). FitzGerald's "Rubaiyat of Omar Khayyam" contains loose translations of quatrains from the Bodleian manuscript. It enjoyed such success in the fin de siècle period that a bibliography compiled in 1929 listed more than 300 separate editions, and many more have been published since.:312
Philosophy.
Khayyam considered himself intellectually to be a student of Avicenna. According to Al-Bayhaqi, he was reading the metaphysics section of Avicenna's "Book of Healing" before he died. There are six philosophical papers believed to have been written by Khayyam. One of them, "On existence" (), was written originally in Persian and deals with the subject of existence and its relationship to universals. Another paper, titled "The necessity of contradiction in the world, determinism and subsistence" (), is written in Arabic and deals with free will and determinism. The titles of his other works are "On being and necessity" (), "The Treatise on Transcendence in Existence" (), "On the knowledge of the universal principles of existence" (), and "Abridgement concerning natural phenomena" ().
Khayyam himself once said:<templatestyles src="Template:Blockquote/styles.css" />We are the victims of an age when men of science are discredited, and only a few remain who are capable of engaging in scientific research. Our philosophers spend all their time in mixing true with false and are interested in nothing but outward show; such little learning as they have they extend on material ends. When they see a man sincere and unremitting in his search for the truth, one who will have nothing to do with falsehood and pretence, they mock and despise him.
Religious views.
A literal reading of Khayyam's quatrains leads to the interpretation of his philosophic attitude toward life as a combination of pessimism, nihilism, Epicureanism, fatalism, and agnosticism. This view is taken by Iranologists such as Arthur Christensen, Hans Heinrich Schaeder, John Andrew Boyle, Edward Denison Ross, Edward Henry Whinfield and George Sarton. Conversely, the Khayyamic quatrains have also been described as mystical Sufi poetry. In addition to his Persian quatrains, J. C. E. Bowen mentions that Khayyam's Arabic poems also "express a pessimistic viewpoint which is entirely consonant with the outlook of the deeply thoughtful rationalist philosopher that Khayyam is known historically to have been." Edward FitzGerald emphasized the religious skepticism he found in Khayyam. In his preface to the "Rubáiyát" he claimed that he "was hated and dreaded by the Sufis", and denied any pretense at divine allegory: "his Wine is the veritable Juice of the Grape: his Tavern, where it was to be had: his "Saki", the Flesh and Blood that poured it out for him." Sadegh Hedayat is one of the most notable proponents of Khayyam's philosophy as agnostic skepticism, and according to Jan Rypka (1934), he even considered Khayyam an atheist. Hedayat (1923) states that "while Khayyam believes in the transmutation and transformation of the human body, he does not believe in a separate soul; if we are lucky, our bodily particles would be used in the making of a jug of wine." Omar Khayyam's poetry has been cited in the context of New Atheism, such as in "The Portable Atheist" by Christopher Hitchens.
Al-Qifti (c. 1172–1248) appears to confirm this view of Khayyam's philosophy. In his work "The History of Learned Men" he reports that Khayyam's poems were only outwardly in the Sufi style, but were written with an anti-religious agenda. He also mentions that he was at one point indicted for impiety, but went on a pilgrimage to prove he was pious. The report has it that upon returning to his native city he concealed his deepest convictions and practised a strictly religious life, going morning and evening to the place of worship. Khayyam on the Koran (quote 84):
<templatestyles src="Template:Blockquote/styles.css" />The Koran! well, come put me to the test, Lovely old book in hideous error drest, Believe me, I can quote the Koran too, The unbeliever knows his Koran best. And do you think that unto such as you, A maggot-minded, starved, fanatic crew, God gave the Secret, and denied it me? Well, well, what matters it! believe that too.
<templatestyles src="Template:Blockquote/styles.css" />
An account of him, written in the thirteenth century, shows him as "versed in all the wisdom of the Greeks," and as wont to insist on the necessity of studying science on Greek lines. Of his prose works, two, which were standard authority, dealt respectively with precious stones and climatology. Beyond question the poet-astronomer was undevout; and his astronomy doubtless helped to make him so. One contemporary writes: "I did not observe that he had any great belief in astrological predictions; nor have I seen or heard of any of the great (scientists) who had such belief. He gave his adherence to no religious sect. Agnosticism, not faith, is the keynote of his works. Among the sects he saw everywhere strife and hatred in which he could have no part..."
Persian novelist Sadegh Hedayat says Khayyám "from his youth to his death remained a materialist, pessimist, agnostic. Khayyam looked at all religious questions with a skeptical eye", continues Hedayat, "and hated the fanaticism, narrow-mindedness, and the spirit of vengeance of the mullas, the so-called religious scholars."
In the context of a piece entitled "On the Knowledge of the Principles of Existence", Khayyam endorses the Sufi path. Csillik suggests the possibility that Omar Khayyam could see in Sufism an ally against orthodox religiosity. Other commentators do not accept that Khayyam's poetry has an anti-religious agenda and interpret his references to wine and drunkenness in the conventional metaphorical sense common in Sufism. The French translator J. B. Nicolas held that Khayyam's constant exhortations to drink wine should not be taken literally, but should be regarded rather in the light of Sufi thought where rapturous intoxication by "wine" is to be understood as a metaphor for the enlightened state or divine rapture of "baqaa". The view of Omar Khayyam as a Sufi was defended by Bjerregaard, Idries Shah, and Dougan, who attributes the reputation of hedonism to the failings of FitzGerald's translation, arguing that Khayyam's poetry is to be understood as "deeply esoteric". On the other hand, Iranian experts such as Mohammad Ali Foroughi and Mojtaba Minovi rejected the hypothesis that Omar Khayyam was a Sufi. Foroughi stated that Khayyam's ideas may have been consistent with those of Sufis at times but there is no evidence that he was formally a Sufi. Aminrazavi states that "Sufi interpretation of Khayyam is possible only by reading into his "Rubāʿīyyāt" extensively and by stretching the content to fit the classical Sufi doctrine." Furthermore, Boyle emphasizes that Khayyam was intensely disliked by a number of celebrated Sufi mystics who belonged to the same century. This includes Shams Tabrizi (spiritual guide of Rumi), Najm al-Din Daya, who described Omar Khayyam as "an unhappy philosopher, atheist, and materialist", and Attar, who regarded him not as a fellow-mystic but as a free-thinking scientist who awaited punishments hereafter.
Seyyed Hossein Nasr argues that it is "reductive" to use a literal interpretation of his verses (many of which are of uncertain authenticity to begin with) to establish Omar Khayyam's philosophy. Instead, he adduces Khayyam's interpretive translation of Avicenna's treatise "Discourse on Unity" (), where he expresses orthodox views on Divine Unity in agreement with the author. The prose works believed to be Khayyam's are written in the Peripatetic style and are explicitly theistic, dealing with subjects such as the existence of God and theodicy. As noted by Bowen these works indicate his involvement in the problems of metaphysics rather than in the subtleties of Sufism. As evidence of Khayyam's faith and/or conformity to Islamic customs, Aminrazavi mentions that in his treatises he offers salutations and prayers, praising God and Muhammad. In most biographical extracts, he is referred to with religious honorifics such as , "The Patron of Faith" (), and "The Evidence of Truth" (). He also notes that biographers who praise his religiosity generally avoid making reference to his poetry, while the ones who mention his poetry often do not praise his religious character. For instance, Al-Bayhaqi's account, which antedates by some years other biographical notices, speaks of Omar as a very pious man who professed orthodox views down to his last hour.
On the basis of all the existing textual and biographical evidence, the question remains somewhat open, and as a result Khayyam has received sharply conflicting appreciations and criticisms.
Reception.
The various biographical extracts referring to Omar Khayyam describe him as unequalled in scientific knowledge and achievement during his time. Many called him by the epithet "King of the Wise" (). Shahrazuri (d. 1300) esteems him highly as a mathematician, and claims that he may be regarded as "the successor of Avicenna in the various branches of philosophic learning". Al-Qifti (d. 1248), even though disagreeing with his views, concedes he was "unrivalled in his knowledge of natural philosophy and astronomy". Despite being hailed as a poet by a number of biographers, according to Richard N. Frye "it is still possible to argue that Khayyam's status as a poet of the first rank is a comparatively late development."
Thomas Hyde was the first European to call attention to Khayyam and to translate one of his quatrains into Latin ("Historia religionis veterum Persarum eorumque magorum", 1700). Western interest in Persia grew with the Orientalism movement in the 19th century. Joseph von Hammer-Purgstall (1774–1856) translated some of Khayyam's poems into German in 1818, and Gore Ouseley (1770–1844) into English in 1846, but Khayyam remained relatively unknown in the West until after the publication of Edward FitzGerald's "Rubaiyat of Omar Khayyam" in 1859. FitzGerald's work at first was unsuccessful but was popularised by Whitley Stokes from 1861 onward, and the work came to be greatly admired by the Pre-Raphaelites. In 1872 FitzGerald had a third edition printed which increased interest in the work in America. By the 1880s, the book was extremely well known throughout the English-speaking world, to the extent of the formation of numerous "Omar Khayyam Clubs" and a "fin de siècle cult of the Rubaiyat". Khayyam's poems have been translated into many languages; many of the more recent ones are more literal than that of FitzGerald.
FitzGerald's translation was a factor in rekindling interest in Khayyam as a poet even in his native Iran. Sadegh Hedayat in his "Songs of Khayyam" ("Taranehha-ye Khayyam", 1934) reintroduced Khayyam's poetic legacy to modern Iran. Under the Pahlavi dynasty, a new monument of white marble, designed by the architect Houshang Seyhoun, was erected over his tomb. A statue by Abolhassan Sadighi was erected in Laleh Park, Tehran in the 1960s, and a bust by the same sculptor was placed near Khayyam's mausoleum in Nishapur. In 2009, the state of Iran donated a pavilion to the United Nations Office in Vienna, inaugurated at Vienna International Center. In 2016, three statues of Khayyam were unveiled: one at the University of Oklahoma, one in Nishapur and one in Florence, Italy. Over 150 composers have used the "Rubaiyat" as their source of inspiration. The earliest such composer was Liza Lehmann.
FitzGerald rendered Khayyam's name as "Tentmaker", and the anglicized name of "Omar the Tentmaker" resonated in English-speaking popular culture for a while. Thus, Nathan Haskell Dole published a novel called "Omar, the Tentmaker: A Romance of Old Persia" in 1898. "Omar the Tentmaker of Naishapur" is a historical novel by John Smith Clarke, published in 1910. "Omar the Tentmaker" is also the title of a 1914 play by Richard Walton Tully in an oriental setting, adapted as a silent film in 1922. US General Omar Bradley was given the nickname "Omar the Tent-Maker" in World War II.
The diverse talents and intellectual pursuits of Khayyam captivated many Ottoman and Turkish writers throughout history. Scholars often viewed Khayyam as a means to enhance their own poetic prowess and intellectual depth, drawing inspiration and recognition from his writings. For many Muslim reformers, Khayyam's verses provided a counterpoint to the conservative norms prevalent in Islamic societies, allowing room for independent thought and a libertine lifestyle. Figures like Abdullah Cevdet, Rıza Tevfik, and Yahya Kemal utilized Khayyam's themes to justify their progressive ideologies or to celebrate liberal aspects of their lives, portraying him as a cultural, political, and intellectual role model who demonstrated Islam's compatibility with modern conventions. Similarly, Turkish leftist poets and intellectuals, including Nâzım Hikmet, Sabahattin Eyüboğlu, A. Kadir, and Gökçe, appropriated Khayyam to champion their socialist worldview, imbuing his voice with a humanistic tone in the vernacular. Khayyam's resurgence in spoken Turkish since the 1980s has transformed him into a poet of the people, with numerous books and translations revitalizing his historical significance. Conversely, scholars like Dāniş, Tevfik, and Gölpınarlı advocated for source criticism and the identification of authentic quatrains to discern the genuine Khayyam amidst historical perceptions of his sociocultural image.
The Moving Finger quatrain.
The quatrain by Omar Khayyam known as "The Moving Finger", in the form of its translation by the English poet Edward FitzGerald, is one of the most popular quatrains in the Anglosphere. It reads:
<templatestyles src="Template:Blockquote/styles.css" />The Moving Finger writes; and having writ,
Moves on: nor all your Piety nor Wit
Shall lure it back to cancel half a Line,
Nor all your Tears wash out a Word of it.
The title of the novel "The Moving Finger", written by Agatha Christie and published in 1942, was inspired by this quatrain from Edward FitzGerald's translation of the "Rubaiyat of Omar Khayyam". Martin Luther King also cites this quatrain of Omar Khayyam in one of his speeches, "":
<templatestyles src="Template:Blockquote/styles.css" />“We may cry out desperately for time to pause in her passage, but time is adamant to every plea and rushes on. Over the bleached bones and jumbled residues of numerous civilizations are written the pathetic words, ‘Too late.’ There is an invisible book of life that faithfully records our vigilance or our neglect. Omar Khayyam is right: ‘The moving finger writes, and having writ moves on.’”
In one of his apologetic speeches about the Clinton–Lewinsky scandal, Bill Clinton, the 42nd president of the US, also cites this quatrain.
Other popular culture references.
In 1934 Harold Lamb published a historical novel "Omar Khayyam". The French-Lebanese writer Amin Maalouf based the first half of his historical fiction novel "Samarkand" on Khayyam's life and the creation of his Rubaiyat. The sculptor Eduardo Chillida produced four massive iron pieces titled "Mesa de Omar Khayyam" (Omar Khayyam's Table) in the 1980s.
The lunar crater Omar Khayyam was named in his honour in 1970, as was the minor planet 3095 Omarkhayyam discovered by Soviet astronomer Lyudmila Zhuravlyova in 1980.
Google has released two Google Doodles commemorating him. The first was on his 964th birthday on 18 May 2012. The second was on his 971st birthday on 18 May 2019.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "(a+b)^n"
}
] | https://en.wikipedia.org/wiki?curid=92550 |
9256 | Enigma machine | German cipher machine
The Enigma machine is a cipher device developed and used in the early- to mid-20th century to protect commercial, diplomatic, and military communication. It was employed extensively by Nazi Germany during World War II, in all branches of the German military. The Enigma machine was considered so secure that it was used to encipher the most top-secret messages.
The Enigma has an electromechanical rotor mechanism that scrambles the 26 letters of the alphabet. In typical use, one person enters text on the Enigma's keyboard and another person writes down which of the 26 lights above the keyboard illuminated at each key press. If plain text is entered, the illuminated letters are the ciphertext. Entering ciphertext transforms it back into readable plaintext. The rotor mechanism changes the electrical connections between the keys and the lights with each keypress.
The security of the system depends on machine settings that were generally changed daily, based on secret key lists distributed in advance, and on other settings that were changed for each message. The receiving station would have to know and use the exact settings employed by the transmitting station to decrypt a message.
Although Nazi Germany introduced a series of improvements to the Enigma over the years that hampered decryption efforts, they did not prevent Poland from cracking the machine as early as December 1932 and reading messages prior to and into the war. Poland's sharing of their achievements enabled the Allies to exploit Enigma-enciphered messages as a major source of intelligence. Many commentators say the flow of Ultra communications intelligence from the decrypting of Enigma, Lorenz, and other ciphers shortened the war substantially and may even have altered its outcome.
<templatestyles src="Template:TOC limit/styles.css" />
History.
The Enigma machine was invented by German engineer Arthur Scherbius at the end of World War I. The German firm Scherbius & Ritter, co-founded by Scherbius, patented ideas for a cipher machine in 1918 and began marketing the finished product under the brand name "Enigma" in 1923, initially targeted at commercial markets. Early models were used commercially from the early 1920s, and adopted by military and government services of several countries, most notably Nazi Germany before and during World War II.
Several Enigma models were produced, but the German military models, having a plugboard, were the most complex. Japanese and Italian models were also in use. With its adoption (in slightly modified form) by the German Navy in 1926 and the German Army and Air Force soon after, the name "Enigma" became widely known in military circles. Pre-war German military planning emphasized fast, mobile forces and tactics, later known as blitzkrieg, which depend on radio communication for command and coordination. Since adversaries would likely intercept radio signals, messages had to be protected with secure encipherment. Compact and easily portable, the Enigma machine filled that need.
Breaking Enigma.
French spy Hans-Thilo Schmidt obtained access to German cipher materials that included the daily keys used in September and October 1932. Those keys included the plugboard settings. The French passed the material to the Poles. Around December 1932, Marian Rejewski, a Polish mathematician and cryptologist at the Polish Cipher Bureau, used the theory of permutations, and flaws in the German military-message encipherment procedures, to break message keys of the plugboard Enigma machine. Rejewski used the French-supplied material and the message traffic that took place in September and October to solve for the unknown rotor wiring. Consequently, the Polish mathematicians were able to build their own Enigma machines, dubbed "Enigma doubles". Rejewski was aided by fellow mathematician-cryptologists Jerzy Różycki and Henryk Zygalski, both of whom had been recruited with Rejewski from Poznań University, which had been selected for its students' knowledge of the German language, since that area was held by Germany prior to World War I. The Polish Cipher Bureau developed techniques to defeat the plugboard and find all components of the daily key, which enabled the Cipher Bureau to read German Enigma messages starting from January 1933.
Over time, the German cryptographic procedures improved, and the Cipher Bureau developed techniques and designed mechanical devices to continue reading Enigma traffic. As part of that effort, the Poles exploited quirks of the rotors, compiled catalogues, built a cyclometer (invented by Rejewski) to help make a catalogue with 100,000 entries, invented and produced Zygalski sheets, and built the electromechanical cryptologic "bomba" (invented by Rejewski) to search for rotor settings. In 1938 the Poles had six "bomby" (plural of "bomba"), but when that year the Germans added two more rotors, ten times as many "bomby" would have been needed to read the traffic.
On 26 and 27 July 1939, in Pyry, just south of Warsaw, the Poles initiated French and British military intelligence representatives into the Polish Enigma-decryption techniques and equipment, including Zygalski sheets and the cryptologic bomb, and promised each delegation a Polish-reconstructed Enigma (the devices were soon delivered).
In September 1939, British Military Mission 4, which included Colin Gubbins and Vera Atkins, went to Poland, intending to evacuate cipher-breakers Marian Rejewski, Jerzy Różycki, and Henryk Zygalski from the country. The cryptologists, however, had been evacuated by their own superiors into Romania, at the time a Polish-allied country. On the way, for security reasons, the Polish Cipher Bureau personnel had deliberately destroyed their records and equipment. From Romania they traveled on to France, where they resumed their cryptological work, collaborating by teletype with the British, who began work on decrypting German Enigma messages, using the Polish equipment and techniques.
Gordon Welchman, who became head of Hut 6 at Bletchley Park, wrote: "Hut 6 Ultra would never have got off the ground if we had not learned from the Poles, in the nick of time, the details both of the German military version of the commercial Enigma machine, and of the operating procedures that were in use." The Polish transfer of theory and technology at Pyry formed the crucial basis for the subsequent World War II British Enigma-decryption effort at Bletchley Park, where Welchman worked.
During the war, British cryptologists decrypted a vast number of messages enciphered on Enigma. The intelligence gleaned from this source, codenamed "Ultra" by the British, was a substantial aid to the Allied war effort.
Though Enigma had some cryptographic weaknesses, in practice it was German procedural flaws, operator mistakes, failure to systematically introduce changes in encipherment procedures, and Allied capture of key tables and hardware that, during the war, enabled Allied cryptologists to succeed.
The Abwehr used different versions of Enigma machines. In November 1942, during Operation Torch, a machine was captured which had no plugboard and the three rotors had been changed to rotate 11, 15, and 19 times rather than once every 26 letters, plus a plate on the left acted as a fourth rotor. From October 1944, the German Abwehr used the Schlüsselgerät 41.
The Abwehr code had been broken on 8 December 1941 by Dilly Knox. Agents sent messages to the Abwehr in a simple code which was then sent on using an Enigma machine. The simple codes were broken and helped break the daily Enigma cipher. This breaking of the code enabled the Double-Cross System to operate.
Design.
Like other rotor machines, the Enigma machine is a combination of mechanical and electrical subsystems. The mechanical subsystem consists of a keyboard; a set of rotating disks called "rotors" arranged adjacently along a spindle; one of various stepping components to turn at least one rotor with each key press; and a series of lamps, one for each letter. These design features are the reason the Enigma machine was originally referred to as a rotor-based cipher machine when it was conceived in 1915.
Electrical pathway.
An electrical pathway is a route for current to travel. By varying this pathway with every key press, the Enigma machine scrambled messages. The mechanical parts act by forming a varying electrical circuit. When a key is pressed, one or more rotors rotate on the spindle. On the sides of the rotors are a series of electrical contacts that, after rotation, line up with contacts on the other rotors or fixed wiring on either end of the spindle. When the rotors are properly aligned, each key on the keyboard is connected to a unique electrical pathway through the series of contacts and internal wiring. Current, typically from a battery, flows through the pressed key, into the newly configured set of circuits and back out again, ultimately lighting one display lamp, which shows the output letter. For example, when encrypting a message starting "ANX...", the operator would first press the "A" key, and the "Z" lamp might light, so "Z" would be the first letter of the ciphertext. The operator would next press "N", and then "X" in the same fashion, and so on.
Current flows from the battery (1) through a depressed bi-directional keyboard switch (2) to the plugboard (3). Next, it passes through the (unused in this instance, so shown closed) plug "A" (3) via the entry wheel (4), through the wiring of the three (Wehrmacht Enigma) or four ("Kriegsmarine" M4 and "Abwehr" variants) installed rotors (5), and enters the reflector (6). The reflector returns the current, via an entirely different path, back through the rotors (5) and entry wheel (4), proceeding through plug "S" (7) connected with a cable (8) to plug "D", and another bi-directional switch (9) to light the appropriate lamp.
The repeated changes of electrical path through an Enigma scrambler implement a polyalphabetic substitution cipher that provides Enigma's security. The diagram on the right shows how the electrical pathway changes with each key depression, which causes rotation of at least the right-hand rotor. Current passes into the set of rotors, into and back out of the reflector, and out through the rotors again. The greyed-out lines are other possible paths within each rotor; these are hard-wired from one side of each rotor to the other. The letter "A" encrypts differently with consecutive key presses, first to "G", and then to "C". This is because the right-hand rotor steps (rotates one position) on each key press, sending the signal on a completely different route. Eventually other rotors step with a key press.
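A minimal sketch of this round trip is given below. The rotor, reflector, and plugboard wirings are made-up illustrative permutations rather than the historical ones, and rotor stepping and ring settings are ignored, so the code captures only the path of a single key press.
<syntaxhighlight lang="python">
import string

ALPHA = string.ascii_uppercase

def make_map(wiring: str) -> dict:
    return dict(zip(ALPHA, wiring))

# Illustrative, non-historical wirings: each rotor is a permutation of A-Z,
# the reflector pairs the letters with no fixed point, the plugboard swaps two pairs.
rotors = [make_map("QWERTYUIOPASDFGHJKLZXCVBNM"),
          make_map("PLOKMIJNUHBYGVTFCRDXESZWAQ"),
          make_map("MNBVCXZLKJHGFDSAPOIUYTREWQ")]
reflector = make_map("BADCFEHGJILKNMPORQTSVUXWZY")
plugboard = {**{c: c for c in ALPHA}, "A": "J", "J": "A", "S": "O", "O": "S"}

def keypress(letter: str) -> str:
    c = plugboard[letter]                        # through the plugboard
    for rotor in rotors:                         # through the rotors, entry side first
        c = rotor[c]
    c = reflector[c]                             # bounced back by the reflector
    for rotor in reversed(rotors):               # back through the rotors, inverse wiring
        c = {v: k for k, v in rotor.items()}[c]
    return plugboard[c]                          # and out through the plugboard again

print(keypress("A"))   # lights some other lamp; never "A" itself
</syntaxhighlight>
With these toy wirings the lamp that lights for "A" is some other letter; it can never be "A" itself, a consequence of the reflector discussed below.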
Rotors.
The rotors (alternatively "wheels" or "drums", "Walzen" in German) form the heart of an Enigma machine. Each rotor is a disc approximately in diameter made from Ebonite or Bakelite with 26 brass, spring-loaded, electrical contact pins arranged in a circle on one face, with the other face housing 26 corresponding electrical contacts in the form of circular plates. The pins and contacts represent the alphabet — typically the 26 letters A–Z, as will be assumed for the rest of this description. When the rotors are mounted side by side on the spindle, the pins of one rotor rest against the plate contacts of the neighbouring rotor, forming an electrical connection. Inside the body of the rotor, 26 wires connect each pin on one side to a contact on the other in a complex pattern. Most of the rotors are identified by Roman numerals, and each issued copy of rotor I, for instance, is wired identically to all others. The same is true for the special thin beta and gamma rotors used in the M4 naval variant.
By itself, a rotor performs only a very simple type of encryption, a simple substitution cipher. For example, the pin corresponding to the letter "E" might be wired to the contact for letter "T" on the opposite face, and so on. Enigma's security comes from using several rotors in series (usually three or four) and the regular stepping movement of the rotors, thus implementing a polyalphabetic substitution cipher.
Each rotor can be set to one of 26 starting positions when placed in an Enigma machine. After insertion, a rotor can be turned to the correct position by hand, using the grooved finger-wheel which protrudes from the internal Enigma cover when closed. In order for the operator to know the rotor's position, each has an "alphabet tyre" (or letter ring) attached to the outside of the rotor disc, with 26 characters (typically letters); one of these is visible through the window for that slot in the cover, thus indicating the rotational position of the rotor. In early models, the alphabet ring was fixed to the rotor disc. A later improvement was the ability to adjust the alphabet ring relative to the rotor disc. The position of the ring was known as the "Ringstellung" ("ring setting"), and that setting was a part of the initial setup needed prior to an operating session. In modern terms it was a part of the initialization vector.
Each rotor contains one or more notches that control rotor stepping. In the military variants, the notches are located on the alphabet ring.
The Army and Air Force Enigmas were used with several rotors, initially three. On 15 December 1938, this changed to five, from which three were chosen for a given session. Rotors were marked with Roman numerals to distinguish them: I, II, III, IV and V, all with single turnover notches located at different points on the alphabet ring. This variation was probably intended as a security measure, but ultimately allowed the Polish Clock Method and British Banburismus attacks.
The Naval version of the "Wehrmacht" Enigma had always been issued with more rotors than the other services: At first six, then seven, and finally eight. The additional rotors were marked VI, VII and VIII, all with different wiring, and had two notches, resulting in more frequent turnover. The four-rotor Naval Enigma (M4) machine accommodated an extra rotor in the same space as the three-rotor version. This was accomplished by replacing the original reflector with a thinner one and by adding a thin fourth rotor. That fourth rotor was one of two types, "Beta" or "Gamma", and never stepped, but could be manually set to any of 26 positions. One of the 26 made the machine perform identically to the three-rotor machine.
Stepping.
To avoid merely implementing a simple (solvable) substitution cipher, every key press caused one or more rotors to step by one twenty-sixth of a full rotation, before the electrical connections were made. This changed the substitution alphabet used for encryption, ensuring that the cryptographic substitution was different at each new rotor position, producing a more formidable polyalphabetic substitution cipher. The stepping mechanism varied slightly from model to model. The right-hand rotor stepped once with each keystroke, and other rotors stepped less frequently.
Turnover.
The advancement of a rotor other than the left-hand one was called a "turnover" by the British. This was achieved by a ratchet and pawl mechanism. Each rotor had a ratchet with 26 teeth and every time a key was pressed, the set of spring-loaded pawls moved forward in unison, trying to engage with a ratchet. The alphabet ring of the rotor to the right normally prevented this. As this ring rotated with its rotor, a notch machined into it would eventually align itself with the pawl, allowing it to engage with the ratchet, and advance the rotor on its left. The right-hand pawl, having no rotor and ring to its right, stepped its rotor with every key depression. For a single-notch rotor in the right-hand position, the middle rotor stepped once for every 26 steps of the right-hand rotor. Similarly for rotors two and three. For a two-notch rotor, the rotor to its left would turn over twice for each rotation.
The first five rotors to be introduced (I–V) contained one notch each, while the additional naval rotors VI, VII and VIII each had two notches. The position of the notch on each rotor was determined by the letter ring which could be adjusted in relation to the core containing the interconnections. The points on the rings at which they caused the next wheel to move were as follows.
The design also included a feature known as "double-stepping". This occurred when each pawl aligned with both the ratchet of its rotor and the rotating notched ring of the neighbouring rotor. If a pawl engaged with a ratchet through alignment with a notch, as it moved forward it pushed against both the ratchet and the notch, advancing both rotors. In a three-rotor machine, double-stepping affected rotor two only. If, in moving forward, the ratchet of rotor three was engaged, rotor two would move again on the subsequent keystroke, resulting in two consecutive steps. Rotor two also pushes rotor one forward after 26 steps, but since rotor one moves forward with every keystroke anyway, there is no double-stepping. This double-stepping caused the rotors to deviate from odometer-style regular motion.
With three wheels and only single notches in the first and second wheels, the machine had a period of 26×25×26 = 16,900 (not 26×26×26, because of double-stepping). Historically, messages were limited to a few hundred letters, and so there was no chance of repeating any combined rotor position during a single session, denying cryptanalysts valuable clues.
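The pawl-and-ratchet behaviour, including double-stepping, is straightforward to simulate. The sketch below uses arbitrary notch positions rather than those of any historical rotor; counting the key presses until the rotor positions repeat confirms the 16,900-step period.
<syntaxhighlight lang="python">
def step(left, middle, right, middle_notch, right_notch):
    """One key press: the right rotor always steps; the middle rotor steps when the
    right rotor sits at its notch, or when the middle rotor itself sits at its
    notch (the double step, which also advances the left rotor)."""
    if middle == middle_notch:        # pawl engages the middle rotor's notch:
        middle = (middle + 1) % 26    # ...the middle rotor is dragged forward
        left = (left + 1) % 26        # ...and the left rotor advances
    elif right == right_notch:
        middle = (middle + 1) % 26
    right = (right + 1) % 26
    return left, middle, right

middle_notch, right_notch = 12, 5     # arbitrary single-notch positions
state = (0, 0, 0)
for _ in range(100_000):              # step past any transient positions first
    state = step(*state, middle_notch, right_notch)
anchor, count = state, 0
while True:
    state = step(*state, middle_notch, right_notch)
    count += 1
    if state == anchor:
        break
print(count)   # 16900 == 26 * 25 * 26: the middle rotor effectively shows 25 positions
</syntaxhighlight>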
To make room for the Naval fourth rotors, the reflector was made much thinner. The fourth rotor fitted into the space made available. No other changes were made, which eased the changeover. Since there were only three pawls, the fourth rotor never stepped, but could be manually set into one of 26 possible positions.
A device that was designed, but not implemented before the war's end, was the "Lückenfüllerwalze" (gap-fill wheel) that implemented irregular stepping. It allowed field configuration of notches in all 26 positions. If the number of notches was relatively prime to 26 and the number of notches was different for each wheel, the stepping would be more unpredictable. Like the Umkehrwalze-D, it also allowed the internal wiring to be reconfigured.
Entry wheel.
The current entry wheel ("Eintrittswalze" in German), or entry stator, connects the plugboard to the rotor assembly. If the plugboard is not present, the entry wheel instead connects the keyboard and lampboard to the rotor assembly. While the exact wiring used is of comparatively little importance to security, it proved an obstacle to Rejewski's progress during his study of the rotor wirings. The commercial Enigma connects the keys in the order of their sequence on a QWERTZ keyboard: "Q"→"A", "W"→"B", "E"→"C" and so on. The military Enigma connects them in straight alphabetical order: "A"→"A", "B"→"B", "C"→"C", and so on. It took inspired guesswork for Rejewski to penetrate the modification.
Reflector.
With the exception of models "A" and "B", the last rotor came before a 'reflector' (German: "Umkehrwalze", meaning 'reversal rotor'), a patented feature unique to Enigma among the period's various rotor machines. The reflector connected outputs of the last rotor in pairs, redirecting current back through the rotors by a different route. The reflector ensured that Enigma would be self-reciprocal; thus, with two identically configured machines, a message could be encrypted on one and decrypted on the other, without the need for a bulky mechanism to switch between encryption and decryption modes. The reflector allowed a more compact design, but it also gave Enigma the property that no letter ever encrypted to itself. This was a severe cryptological flaw that was subsequently exploited by codebreakers.
In Model 'C', the reflector could be inserted in one of two different positions. In Model 'D', the reflector could be set in 26 possible positions, although it did not move during encryption. In the "Abwehr" Enigma, the reflector stepped during encryption in a manner similar to the other wheels.
In the German Army and Air Force Enigma, the reflector was fixed and did not rotate; there were four versions. The original version was marked 'A', and was replaced by "Umkehrwalze B" on 1 November 1937. A third version, "Umkehrwalze C" was used briefly in 1940, possibly by mistake, and was solved by Hut 6. The fourth version, first observed on 2 January 1944, had a rewireable reflector, called "Umkehrwalze D", nick-named Uncle Dick by the British, allowing the Enigma operator to alter the connections as part of the key settings.
Plugboard.
The plugboard ("Steckerbrett" in German) permitted variable wiring that could be reconfigured by the operator. It was introduced on German Army versions in 1928, and was soon adopted by the "Reichsmarine" (German Navy). The plugboard contributed more cryptographic strength than an extra rotor, as it had 150 trillion possible settings (see below). Enigma without a plugboard (known as "unsteckered Enigma") could be solved relatively straightforwardly using hand methods; these techniques were generally defeated by the plugboard, driving Allied cryptanalysts to develop special machines to solve it.
A cable placed onto the plugboard connected letters in pairs; for example, "E" and "Q" might be a steckered pair. The effect was to swap those letters before and after the main rotor scrambling unit. For example, when an operator pressed "E", the signal was diverted to "Q" before entering the rotors. Up to 13 steckered pairs might be used at one time, although only 10 were normally used.
Current flowed from the keyboard through the plugboard, and proceeded to the entry-rotor or "Eintrittswalze". Each letter on the plugboard had two jacks. Inserting a plug disconnected the upper jack (from the keyboard) and the lower jack (to the entry-rotor) of that letter. The plug at the other end of the crosswired cable was inserted into another letter's jacks, thus switching the connections of the two letters.
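The figure of roughly 150 trillion plugboard settings follows from a short combinatorial count: choose which 20 of the 26 letters are plugged, then count the ways of joining them into 10 pairs. A quick check (the function is ours, written only to reproduce the count):
<syntaxhighlight lang="python">
from math import comb, factorial

def plugboard_settings(pairs: int, letters: int = 26) -> int:
    """Ways to connect `pairs` cables: choose the 2*pairs plugged letters, then
    pair them up ((2p)! / (p! * 2**p) perfect matchings)."""
    plugged = 2 * pairs
    matchings = factorial(plugged) // (factorial(pairs) * 2 ** pairs)
    return comb(letters, plugged) * matchings

print(plugboard_settings(10))                          # 150738274937250
print(sum(plugboard_settings(p) for p in range(14)))   # totals over 0 to 13 cables
</syntaxhighlight>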
Accessories.
Other features made various Enigma machines more secure or more convenient.
"Schreibmax".
Some M4 Enigmas used the "Schreibmax", a small printer that could print the 26 letters on a narrow paper ribbon. This eliminated the need for a second operator to read the lamps and transcribe the letters. The "Schreibmax" was placed on top of the Enigma machine and was connected to the lamp panel. To install the printer, the lamp cover and light bulbs had to be removed. It improved both convenience and operational security; the printer could be installed remotely such that the signal officer operating the machine no longer had to see the decrypted plaintext.
"Fernlesegerät".
Another accessory was the remote lamp panel "Fernlesegerät". For machines equipped with the extra panel, the wooden case of the Enigma was wider and could store the extra panel. A lamp panel version could be connected afterwards, but that required, as with the "Schreibmax", that the lamp panel and light bulbs be removed. The remote panel made it possible for a person to read the decrypted plaintext without the operator seeing it.
"Uhr".
In 1944, the "Luftwaffe" introduced a plugboard switch, called the "Uhr" (clock), a small box containing a switch with 40 positions. It replaced the standard plugs. After connecting the plugs, as determined in the daily key sheet, the operator turned the switch into one of the 40 positions, each producing a different combination of plug wiring. Most of these plug connections were, unlike the default plugs, not pair-wise. In one switch position, the "Uhr" did not swap letters, but simply emulated the 13 stecker wires with plugs.
Mathematical analysis.
The Enigma transformation for each letter can be specified mathematically as a product of permutations. Assuming a three-rotor German Army/Air Force Enigma, let P denote the plugboard transformation, U denote that of the reflector (so that U = U^{-1}), and L, M, R denote those of the left, middle and right rotors respectively. Then the encryption E can be expressed as
E = PRMLUL^{-1}M^{-1}R^{-1}P^{-1}
After each key press, the rotors turn, changing the transformation. For example, if the right-hand rotor R is rotated n positions, the transformation becomes
\rho^{n} R \rho^{-n}
where ρ is the cyclic permutation mapping A to B, B to C, and so forth. Similarly, the middle and left-hand rotors can be represented as j and k rotations of M and L. The encryption transformation can then be described as
E = P(\rho^{n}R\rho^{-n})(\rho^{j}M\rho^{-j})(\rho^{k}L\rho^{-k})U(\rho^{k}L^{-1}\rho^{-k})(\rho^{j}M^{-1}\rho^{-j})(\rho^{n}R^{-1}\rho^{-n})P^{-1}
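This composition can be checked directly with permutations represented as arrays. The wirings below are arbitrary illustrative permutations rather than the historical ones; the only structural requirement is that the reflector be a pairing with no fixed point, which already makes E self-reciprocal and prevents any letter from encrypting to itself.
<syntaxhighlight lang="python">
import random
random.seed(0)
N = 26

def rand_perm():
    p = list(range(N))
    random.shuffle(p)
    return p

def inverse(p):
    q = [0] * N
    for i, v in enumerate(p):
        q[v] = i
    return q

def compose(*perms):
    """compose(a, b)[x] == a[b[x]], so the factors read left to right as in the formula."""
    out = list(range(N))
    for p in reversed(perms):
        out = [p[x] for x in out]
    return out

U = list(range(N))                     # reflector: pair the letters, no fixed point
for i in range(0, N, 2):
    U[i], U[i + 1] = i + 1, i

P, R, M, L = rand_perm(), rand_perm(), rand_perm(), rand_perm()
E = compose(P, R, M, L, U, inverse(L), inverse(M), inverse(R), inverse(P))

assert all(E[E[x]] == x for x in range(N))   # self-reciprocal: the same setting deciphers
assert all(E[x] != x for x in range(N))      # no letter ever encrypts to itself
</syntaxhighlight>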
Combining three rotors chosen from a set of five, each of the three rotors set to one of 26 positions, and the plugboard with ten pairs of letters connected, the military Enigma has 158,962,555,217,826,360,000 different settings (nearly 159 quintillion, or about 67 bits).
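The quoted figure can be reproduced by multiplying the three independent choices together; the short check below reuses the plugboard count described earlier.
<syntaxhighlight lang="python">
from math import comb, factorial, log2

rotor_orders = 5 * 4 * 3                    # choose and order three rotors out of five
rotor_positions = 26 ** 3                   # starting position of each rotor
plugboard = comb(26, 20) * (factorial(20) // (factorial(10) * 2 ** 10))   # ten cables

total = rotor_orders * rotor_positions * plugboard
print(total)                   # 158962555217826360000
print(round(log2(total), 1))   # about 67.1 bits
</syntaxhighlight>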
Operation.
Basic operation.
A German Enigma operator would be given a plaintext message to encrypt. After setting up his machine, he would type the message on the Enigma keyboard. For each letter pressed, one lamp lit indicating a different letter according to a pseudo-random substitution determined by the electrical pathways inside the machine. The letter indicated by the lamp would be recorded, typically by a second operator, as the cyphertext letter. The action of pressing a key also moved one or more rotors so that the next key press used a different electrical pathway, and thus a different substitution would occur even if the same plaintext letter were entered again. For each key press there was rotation of at least the right hand rotor and less often the other two, resulting in a different substitution alphabet being used for every letter in the message. This process continued until the message was completed. The cyphertext recorded by the second operator would then be transmitted, usually by radio in Morse code, to an operator of another Enigma machine. This operator would type in the cyphertext and — as long as all the settings of the deciphering machine were identical to those of the enciphering machine — for every key press the reverse substitution would occur and the plaintext message would emerge.
Details.
In use, the Enigma required a list of daily key settings and auxiliary documents. In German military practice, communications were divided into separate networks, each using different settings. These communication nets were termed "keys" at Bletchley Park, and were assigned code names, such as "Red", "Chaffinch", and "Shark". Each unit operating in a network was given the same settings list for its Enigma, valid for a period of time. The procedures for German Naval Enigma were more elaborate and more secure than those in other services and employed auxiliary codebooks. Navy codebooks were printed in red, water-soluble ink on pink paper so that they could easily be destroyed if they were endangered or if the vessel was sunk.
An Enigma machine's setting (its cryptographic key in modern terms; "Schlüssel" in German) specified each operator-adjustable aspect of the machine: the selection and order of the rotors, the ring settings, the plugboard connections, and the starting positions of the rotors.
For a message to be correctly encrypted and decrypted, both sender and receiver had to configure their Enigma in the same way; rotor selection and order, ring positions, plugboard connections and starting rotor positions had to be identical. Except for the starting positions, these settings were established beforehand, distributed in key lists and changed daily. For example, German Luftwaffe Enigma key list number 649 specified a complete set of these values for each day of the month, including the 18th.
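A daily key can be thought of as a small record of exactly these values. The sketch below is purely illustrative: the rotor order, ring settings and plug pairs are invented, not taken from key list 649 or any other historical sheet.
<syntaxhighlight lang="python">
# One day's (invented) key settings, in the terms used above
daily_key = {
    "rotors": ["IV", "II", "V"],                   # Walzenlage: rotor selection and order
    "ring_settings": [6, 20, 11],                  # Ringstellung, one value per rotor
    "plug_pairs": ["AQ", "BJ", "CM", "DZ", "EX",
                   "FR", "GN", "HY", "KU", "LS"],  # Steckerverbindungen, ten cables
}

# The starting rotor positions were not on the sheet: they were chosen per message
# and handled by the indicator procedure described in the next section.
</syntaxhighlight>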
Enigma was designed to be secure even if the rotor wiring was known to an opponent, although in practice considerable effort protected the wiring configuration. If the wiring is secret, the total number of possible configurations has been calculated to correspond to approximately 380 bits of key; with known wiring and other operational constraints, this is reduced to about 76 bits. Because of the large number of possibilities, users of Enigma were confident of its security; it was not then feasible for an adversary to even begin to try a brute-force attack.
Indicator.
Most of the key was kept constant for a set time period, typically a day. A different initial rotor position was used for each message, a concept similar to an initialisation vector in modern cryptography. The reason is that encrypting many messages with identical or near-identical settings (termed "in depth" in cryptanalysis) would enable an attack using a statistical procedure such as Friedman's index of coincidence. The starting position for the rotors was transmitted just before the ciphertext, usually after having been enciphered. The exact method used was termed the "indicator procedure". Weaknesses in the design of these indicator procedures, and sloppiness by the operators who applied them, were two of the main flaws that made cracking Enigma possible.
One of the earliest "indicator procedures" for the Enigma was cryptographically flawed and allowed Polish cryptanalysts to make the initial breaks into the plugboard Enigma. The procedure had the operator set his machine in accordance with the secret settings that all operators on the net shared. The settings included an initial position for the rotors (the "Grundstellung"), say, "AOH". The operator turned his rotors until "AOH" was visible through the rotor windows. At that point, the operator chose his own arbitrary starting position for the message he would send. An operator might select "EIN", and that became the "message setting" for that encryption session. The operator then typed "EIN" into the machine twice, producing the encrypted indicator, for example "XHTLOA". This was then transmitted, at which point the operator would turn the rotors to his message settings, "EIN" in this example, and then type the plaintext of the message.
At the receiving end, the operator set the machine to the initial settings ("AOH") and typed in the first six letters of the message ("XHTLOA"). In this example, "EINEIN" emerged on the lamps, so the operator would learn the "message setting" that the sender used to encrypt this message. The receiving operator would set his rotors to "EIN", type in the rest of the ciphertext, and get the deciphered message.
This indicator scheme had two weaknesses. First, the use of a global initial position ("Grundstellung") meant all message keys used the same polyalphabetic substitution. In later indicator procedures, the operator selected his initial position for encrypting the indicator and sent that initial position in the clear. The second problem was the repetition of the indicator, which was a serious security flaw. The message setting was encoded twice, resulting in a relation between the first and fourth, the second and fifth, and the third and sixth characters. These security flaws enabled the Polish Cipher Bureau to break into the pre-war Enigma system as early as 1932. The early indicator procedure was subsequently described by German cryptanalysts as the "faulty indicator technique".
During World War II, codebooks were only used each day to set up the rotors, their ring settings and the plugboard. For each message, the operator selected a random start position, say "WZA", and a random message key, say "SXT". He moved the rotors to the "WZA" start position and encoded the message key "SXT". Assume the result was "UHL". He then set up the message key, "SXT", as the start position and encrypted the message. Next, he transmitted the start position, "WZA", the encoded message key, "UHL", and then the ciphertext. The receiver set up the start position according to the first trigram, "WZA", and decoded the second trigram, "UHL", to obtain the "SXT" message setting. Next, he used this "SXT" message setting as the start position to decrypt the message. This way, each ground setting was different and the new procedure avoided the security flaw of doubly encoded message settings.
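The flow of this procedure can be sketched independently of the cipher itself. In the Python sketch below, the hypothetical helper toy_machine stands in for an Enigma set to a given rotor start position: it builds a keyed, self-reciprocal letter pairing, so the same call both encrypts and decrypts, but it is not an Enigma implementation and does not model rotor stepping. The point is only to show that the start position travels in the clear while the message key travels enciphered, and how the receiver inverts the two steps.

import random

ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def toy_machine(start):
    # Hypothetical stand-in for an Enigma at rotor position `start`: a keyed,
    # self-reciprocal substitution (13 letter pairs), so encryption = decryption.
    letters = list(ALPHA)
    random.Random(start).shuffle(letters)
    pairing = {}
    for a, b in zip(letters[0::2], letters[1::2]):
        pairing[a], pairing[b] = b, a
    return lambda text: "".join(pairing[c] for c in text)

# Sender: random start position and random message key, as in the text above.
start, message_key = "WZA", "SXT"
encoded_key = toy_machine(start)(message_key)            # plays the role of "UHL"
ciphertext  = toy_machine(message_key)("ATTACKATDAWN")   # body encrypted under the message key
transmission = (start, encoded_key, ciphertext)          # start position is sent in the clear

# Receiver: recover the message key from the clear start position, then decrypt the body.
start_rx, encoded_key_rx, body_rx = transmission
recovered_key = toy_machine(start_rx)(encoded_key_rx)
plaintext = toy_machine(recovered_key)(body_rx)
assert recovered_key == message_key and plaintext == "ATTACKATDAWN"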
This procedure was used by "Wehrmacht" and "Luftwaffe" only. The "Kriegsmarine" procedures on sending messages with the Enigma were far more complex and elaborate. Prior to encryption the message was encoded using the "Kurzsignalheft" code book. The "Kurzsignalheft" contained tables to convert sentences into four-letter groups. A great many choices were included, for example, logistic matters such as refuelling and rendezvous with supply ships, positions and grid lists, harbour names, countries, weapons, weather conditions, enemy positions and ships, date and time tables. Another codebook contained the "Kenngruppen" and "Spruchschlüssel": the key identification and message key.
Additional details.
The Army Enigma machine used only the 26 letters of the alphabet. Punctuation was replaced with rare character combinations. A space was omitted or replaced with an X. The X was generally used as a full stop.
Some punctuation marks were different in other parts of the armed forces. The "Wehrmacht" replaced a comma with ZZ and the question mark with FRAGE or FRAQ.
The "Kriegsmarine" replaced the comma with Y and the question mark with UD. The combination CH, as in ""Acht" (eight) or "Richtung"" (direction), was replaced with Q (AQT, RIQTUNG). Two, three and four zeros were replaced with CENTA, MILLE and MYRIA.
The "Wehrmacht" and the "Luftwaffe" transmitted messages in groups of five characters and counted the letters.
The "Kriegsmarine" used four-character groups and counted those groups.
Frequently used names or words were varied as much as possible. Words like "Minensuchboot" (minesweeper) could be written as MINENSUCHBOOT, MINBOOT or MMMBOOT. To make cryptanalysis harder, messages were limited to 250 characters. Longer messages were divided into several parts, each using a different message key.
Example enciphering process.
The character substitutions by the Enigma machine as a whole can be expressed as a string of letters with each position occupied by the character that will replace the character at the corresponding position in the alphabet. For example, a given machine configuration that enciphered A to L, B to U, C to S, ..., and Z to J could be represented compactly as
LUSHQOXDMZNAIKFREPCYBWVGTJ
and the enciphering of a particular character by that configuration could be represented by highlighting the enciphered character as in
D > LUS(H)QOXDMZNAIKFREPCYBWVGTJ
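In Python, this notation amounts to a 26-letter lookup string indexed by the position of the plaintext letter in the alphabet. A minimal sketch (the helper names are illustrative):

ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encipher_with(config, letter):
    # Apply a whole-machine substitution written as a 26-letter string (image of A first).
    return config[ALPHA.index(letter)]

def highlight(config, letter):
    # Reproduce the notation used in this article: the image of `letter` in parentheses.
    i = ALPHA.index(letter)
    return f"{letter} > {config[:i]}({config[i]}){config[i + 1:]}"

config = "LUSHQOXDMZNAIKFREPCYBWVGTJ"
print(encipher_with(config, "D"))  # H
print(highlight(config, "D"))      # D > LUS(H)QOXDMZNAIKFREPCYBWVGTJ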
Since the operation of an Enigma machine enciphering a message is a series of such configurations, each associated with a single character being enciphered, a sequence of such representations can be used to represent the operation of the machine as it enciphers a message. For example, the process of enciphering the first sentence of the main body of the famous "Dönitz message" to
RBBF PMHP HGCZ XTDY GAHG UFXG EWKB LKGJ
can be represented as
0001 F > KGWNT(R)BLQPAHYDVJIFXEZOCSMU CDTK 25 15 16 26
0002 O > UORYTQSLWXZHNM(B)VFCGEAPIJDK CDTL 25 15 16 01
0003 L > HLNRSKJAMGF(B)ICUQPDEYOZXWTV CDTM 25 15 16 02
0004 G > KPTXIG(F)MESAUHYQBOVJCLRZDNW CDUN 25 15 17 03
0005 E > XDYB(P)WOSMUZRIQGENLHVJTFACK CDUO 25 15 17 04
0006 N > DLIAJUOVCEXBN(M)GQPWZYFHRKTS CDUP 25 15 17 05
0007 D > LUS(H)QOXDMZNAIKFREPCYBWVGTJ CDUQ 25 15 17 06
0008 E > JKGO(P)TCIHABRNMDEYLZFXWVUQS CDUR 25 15 17 07
0009 S > GCBUZRASYXVMLPQNOF(H)WDKTJIE CDUS 25 15 17 08
0010 I > XPJUOWIY(G)CVRTQEBNLZMDKFAHS CDUT 25 15 17 09
0011 S > DISAUYOMBPNTHKGJRQ(C)LEZXWFV CDUU 25 15 17 10
0012 T > FJLVQAKXNBGCPIRMEOY(Z)WDUHST CDUV 25 15 17 11
0013 S > KTJUQONPZCAMLGFHEW(X)BDYRSVI CDUW 25 15 17 12
0014 O > ZQXUVGFNWRLKPH(T)MBJYODEICSA CDUX 25 15 17 13
0015 F > XJWFR(D)ZSQBLKTVPOIEHMYNCAUG CDUY 25 15 17 14
0016 O > FSKTJARXPECNUL(Y)IZGBDMWVHOQ CDUZ 25 15 17 15
0017 R > CEAKBMRYUVDNFLTXW(G)ZOIJQPHS CDVA 25 15 18 16
0018 T > TLJRVQHGUCXBZYSWFDO(A)IEPKNM CDVB 25 15 18 17
0019 B > Y(H)LPGTEBKWICSVUDRQMFONJZAX CDVC 25 15 18 18
0020 E > KRUL(G)JEWNFADVIPOYBXZCMHSQT CDVD 25 15 18 19
0021 K > RCBPQMVZXY(U)OFSLDEANWKGTIJH CDVE 25 15 18 20
0022 A > (F)CBJQAWTVDYNXLUSEZPHOIGMKR CDVF 25 15 18 21
0023 N > VFTQSBPORUZWY(X)HGDIECJALNMK CDVG 25 15 18 22
0024 N > JSRHFENDUAZYQ(G)XTMCBPIWVOLK CDVH 25 15 18 23
0025 T > RCBUTXVZJINQPKWMLAY(E)DGOFSH CDVI 25 15 18 24
0026 Z > URFXNCMYLVPIGESKTBOQAJZDH(W) CDVJ 25 15 18 25
0027 U > JIOZFEWMBAUSHPCNRQLV(K)TGYXD CDVK 25 15 18 26
0028 G > ZGVRKO(B)XLNEIWJFUSDQYPCMHTA CDVL 25 15 18 01
0029 E > RMJV(L)YQZKCIEBONUGAWXPDSTFH CDVM 25 15 18 02
0030 B > G(K)QRFEANZPBMLHVJCDUXSOYTWI CDWN 25 15 19 03
0031 E > YMZT(G)VEKQOHPBSJLIUNDRFXWAC CDWO 25 15 19 04
0032 N > PDSBTIUQFNOVW(J)KAHZCEGLMYXR CDWP 25 15 19 05
where the letters following each mapping are the letters that appear at the windows at that stage (the only state changes visible to the operator) and the numbers show the underlying physical position of each rotor.
The character mappings for a given configuration of the machine are in turn the result of a series of such mappings applied by each pass through a component of the machine: the enciphering of a character resulting from the application of a given component's mapping serves as the input to the mapping of the subsequent component. For example, the 4th step in the enciphering above can be expanded to show each of these stages using the same representation of mappings and highlighting for the enciphered character:
G > ABCDEF(G)HIJKLMNOPQRSTUVWXYZ
P EFMQAB(G)UINKXCJORDPZTHWVLYS AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW
1 OFRJVM(A)ZHQNBXPYKCULGSWETDI N 03 VIII
2 (N)UKCHVSMDGTZQFYEWPIALOXRJB U 17 VI
3 XJMIYVCARQOWH(L)NDSUFKGBEPZT D 15 V
4 QUNGALXEPKZ(Y)RDSOFTVCMBIHWJ C 25 β
R RDOBJNTKVEHMLFCWZAXGYIPS(U)Q c
4 EVTNHQDXWZJFUCPIAMOR(B)SYGLK β
3 H(V)GPWSUMDBTNCOKXJIQZRFLAEY V
2 TZDIPNJESYCUHAVRMXGKB(F)QWOL VI
1 GLQYW(B)TIZDPSFKANJCUXREVMOH VIII
P E(F)MQABGUINKXCJORDPZTHWVLYS AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW
F < KPTXIG(F)MESAUHYQBOVJCLRZDNW
Here the enciphering begins trivially with the first "mapping" representing the keyboard (which has no effect), followed by the plugboard, configured as AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW which has no effect on 'G', followed by the VIII rotor in the 03 position, which maps G to A, then the VI rotor in the 17 position, which maps A to N, ..., and finally the plugboard again, which maps B to F, producing the overall mapping indicated at the final step: G to F.
This model has four rotors (lines 1 through 4), and the reflector (line R) also permutes letters.
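The expansion can be checked mechanically: removing the parenthesised highlights and applying each line's 26-letter mapping in turn to the running letter reproduces the overall result G to F. A minimal Python sketch, with the mapping strings copied from the table above:

ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

# Stage mappings for step 0004 above, in signal order (plugboard, rotors 1-4,
# reflector, rotors 4-1 on the return path, plugboard), highlights removed.
STAGES = [
    "EFMQABGUINKXCJORDPZTHWVLYS",  # P  plugboard
    "OFRJVMAZHQNBXPYKCULGSWETDI",  # 1  rotor VIII at position 03
    "NUKCHVSMDGTZQFYEWPIALOXRJB",  # 2  rotor VI at position 17
    "XJMIYVCARQOWHLNDSUFKGBEPZT",  # 3  rotor V at position 15
    "QUNGALXEPKZYRDSOFTVCMBIHWJ",  # 4  rotor beta at position 25
    "RDOBJNTKVEHMLFCWZAXGYIPSUQ",  # R  thin reflector c
    "EVTNHQDXWZJFUCPIAMORBSYGLK",  # 4  return path
    "HVGPWSUMDBTNCOKXJIQZRFLAEY",  # 3
    "TZDIPNJESYCUHAVRMXGKBFQWOL",  # 2
    "GLQYWBTIZDPSFKANJCUXREVMOH",  # 1
    "EFMQABGUINKXCJORDPZTHWVLYS",  # P  plugboard again
]

letter = "G"
for stage in STAGES:
    letter = stage[ALPHA.index(letter)]
print(letter)  # F, matching the final line of the expansion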
Models.
The Enigma family included multiple designs. The earliest were commercial models dating from the early 1920s. Starting in the mid-1920s, the German military began to use Enigma, making a number of security-related changes. Various nations either adopted or adapted the design for their own cipher machines.
An estimated 40,000 Enigma machines were constructed. After the end of World War II, the Allies sold captured Enigma machines, still widely considered secure, to developing countries.
Commercial Enigma.
On 23 February 1918, Arthur Scherbius applied for a patent for a ciphering machine that used rotors. Scherbius and E. Richard Ritter founded the firm of Scherbius & Ritter. They approached the German Navy and Foreign Office with their design, but neither agency was interested. Scherbius & Ritter then assigned the patent rights to Gewerkschaft Securitas, who founded the "Chiffriermaschinen Aktien-Gesellschaft" (Cipher Machines Stock Corporation) on 9 July 1923; Scherbius and Ritter were on the board of directors.
Enigma Handelsmaschine (1923).
Chiffriermaschinen AG began advertising a rotor machine, "Enigma Handelsmaschine", which was exhibited at the Congress of the International Postal Union in 1924. The machine was heavy and bulky, incorporating a typewriter, and measured 65×45×38 cm.
Schreibende Enigma (1924).
This was also a model with a typewriter. There were a number of problems associated with the printer, and the construction was not stable until 1926. Both early versions of Enigma lacked the reflector and had to be switched between enciphering and deciphering.
Glühlampenmaschine, Enigma A (1924).
The reflector, suggested by Scherbius' colleague Willi Korn, was introduced with the glow lamp version.
The machine was also known as the military Enigma. It had two rotors and a manually rotatable reflector. The typewriter was omitted and glow lamps were used for output. The operation was somewhat different from later models. Before the next key press, the operator had to press a button to advance the right rotor one step.
Enigma B (1924).
Enigma "model B" was introduced late in 1924, and was of a similar construction. While bearing the Enigma name, both models "A" and "B" were quite unlike later versions: They differed in physical size and shape, but also cryptographically, in that they lacked the reflector. This model of Enigma machine was referred to as the Glowlamp Enigma or "Glühlampenmaschine" since it produced its output on a lamp panel rather than paper. This method of output was much more reliable and cost effective. Hence this machine was 1/8th the price of its predecessor.
Enigma C (1926).
"Model C" was the third model of the so-called ″glowlamp Enigmas″ (after A and B) and it again lacked a typewriter.
Enigma D (1927).
The "Enigma C" quickly gave way to "Enigma D" (1927). This version was widely used, with shipments to Sweden, the Netherlands, United Kingdom, Japan, Italy, Spain, United States and Poland. In 1927 Hugh Foss at the British Government Code and Cypher School was able to show that commercial Enigma machines could be broken, provided suitable cribs were available. Soon, the Enigma D would pioneer the use of a standard keyboard layout to be used in German computing. This "QWERTZ" layout is very similar to the American QWERTY keyboard format used in many languages.
"Navy Cipher D".
Other countries used Enigma machines. The Italian Navy adopted the commercial Enigma as "Navy Cipher D". The Spanish also used commercial Enigma machines during their Civil War. British codebreakers succeeded in breaking these machines, which lacked a plugboard. Enigma machines were also used by diplomatic services.
Enigma H (1929).
There was also a large, eight-rotor printing model, the "Enigma H", called "Enigma II" by the "Reichswehr". In 1933 the Polish Cipher Bureau detected that it was in use for high-level military communication, but it was soon withdrawn, as it was unreliable and jammed frequently.
Enigma K.
The Swiss used a version of Enigma called "Model K" or "Swiss K" for military and diplomatic use, which was very similar to commercial Enigma D. The machine's code was cracked by Poland, France, the United Kingdom and the United States; the latter code-named it INDIGO. An "Enigma T" model, code-named "Tirpitz", was used by Japan.
Military Enigma.
The various services of the Wehrmacht used various Enigma versions, and replaced them frequently, sometimes with ones adapted from other services. Enigma seldom carried high-level strategic messages, which when not urgent went by courier, and when urgent went by other cryptographic systems including the Geheimschreiber.
Funkschlüssel C.
The Reichsmarine was the first military branch to adopt Enigma. This version, named "Funkschlüssel C" ("Radio cipher C"), had been put into production by 1925 and was introduced into service in 1926.
The keyboard and lampboard contained 29 letters — A-Z, Ä, Ö and Ü — that were arranged alphabetically, as opposed to the QWERTZUI ordering. The rotors had 28 contacts, with the letter "X" wired to bypass the rotors unencrypted. Three rotors were chosen from a set of five and the reflector could be inserted in one of four different positions, denoted α, β, γ and δ. The machine was revised slightly in July 1933.
Enigma G (1928–1930).
By 15 July 1928, the German Army ("Reichswehr") had introduced their own exclusive version of the Enigma machine, the "Enigma G".
The "Abwehr" used the "Enigma G". This Enigma variant was a four-wheel unsteckered machine with multiple notches on the rotors. This model was equipped with a counter that incremented upon each key press, and so is also known as the "counter machine" or the "Zählwerk" Enigma.
Wehrmacht Enigma I (1930–1938).
Enigma machine G was modified to the "Enigma I" by June 1930. Enigma I is also known as the "Wehrmacht", or "Services" Enigma, and was used extensively by German military services and other government organisations (such as the railways) before and during World War II.
The major difference between "Enigma I" (German Army version from 1930), and commercial Enigma models was the addition of a plugboard to swap pairs of letters, greatly increasing cryptographic strength.
Other differences included the use of a fixed reflector and the relocation of the stepping notches from the rotor body to the movable letter rings.
In August 1935, the Air Force introduced the Wehrmacht Enigma for their communications.
M3 (1934).
By 1930, the Reichswehr had suggested that the Navy adopt their machine, citing the benefits of increased security (with the plugboard) and easier interservice communications. The Reichsmarine eventually agreed and in 1934 brought into service the Navy version of the Army Enigma, designated "Funkschlüssel M" or "M3". While the Army used only three rotors at that time, the Navy specified a choice of three from a possible five.
Two extra rotors (1938).
In December 1938, the Army issued two extra rotors so that the three rotors were chosen from a set of five. In 1938, the Navy added two more rotors, and then another in 1939 to allow a choice of three rotors from a set of eight.
M4 (1942).
A four-rotor Enigma was introduced by the Navy for U-boat traffic on 1 February 1942, called "M4" (the network was known as "Triton", or "Shark" to the Allies). The extra rotor was fitted in the same space by splitting the reflector into a combination of a thin reflector and a thin fourth rotor.
Surviving machines.
The effort to break the Enigma was not disclosed until the 1970s. Since then, interest in the Enigma machine has grown. Enigmas are on public display in museums around the world, and several are in the hands of private collectors and computer history enthusiasts.
The "Deutsches Museum" in Munich has both the three- and four-rotor German military variants, as well as several civilian versions. The "Deutsches Spionagemuseum" in Berlin also showcases two military variants. Enigma machines are also exhibited at the National Codes Centre in Bletchley Park, the Government Communications Headquarters, the Science Museum in London, Discovery Park of America in Tennessee, the Polish Army Museum in Warsaw, the Swedish Army Museum ("Armémuseum") in Stockholm, the Military Museum of A Coruña in Spain, the Nordland Red Cross War Memorial Museum in Narvik, Norway, The Artillery, Engineers and Signals Museum in Hämeenlinna, Finland the Technical University of Denmark in Lyngby, Denmark, in Skanderborg Bunkerne at Skanderborg, Denmark, and at the Australian War Memorial and in the foyer of the Australian Signals Directorate, both in Canberra, Australia. The Jozef Pilsudski Institute in London exhibited a rare Polish Enigma double assembled in France in 1940. In 2020, thanks to the support of the Ministry of Culture and National Heritage, it became the property of the Polish History Museum.
In the United States, Enigma machines can be seen at the Computer History Museum in Mountain View, California, and at the National Security Agency's National Cryptologic Museum in Fort Meade, Maryland, where visitors can try their hand at enciphering and deciphering messages. Two machines that were acquired after the capture of the German submarine U-505 during World War II are on display alongside the submarine at the Museum of Science and Industry in Chicago, Illinois. A three-rotor Enigma is on display at Discovery Park of America in Union City, Tennessee. A four-rotor device is on display in the ANZUS Corridor of the Pentagon on the second floor, A ring, between corridors 8 and 9. This machine is on loan from Australia. The United States Air Force Academy in Colorado Springs has a machine on display in the Computer Science Department. There is also a machine located at The National WWII Museum in New Orleans. The International Museum of World War II near Boston has seven Enigma machines on display, including a U-boat four-rotor model, one of three surviving examples of an Enigma machine with a printer, one of fewer than ten surviving ten-rotor code machines, an example blown up by a retreating German Army unit, and two three-rotor Enigmas that visitors can operate to encode and decode messages. Computer Museum of America in Roswell, Georgia has a three-rotor model with two additional rotors. The machine is fully restored and CMoA has the original paperwork for the purchase on 7 March 1936 by the German Army. The National Museum of Computing also contains surviving Enigma machines in Bletchley, England.
In Canada, a Swiss Army issue Enigma-K, is in Calgary, Alberta. It is on permanent display at the Naval Museum of Alberta inside the Military Museums of Calgary. A four-rotor Enigma machine is on display at the Military Communications and Electronics Museum at Canadian Forces Base (CFB) Kingston in Kingston, Ontario.
Occasionally, Enigma machines are sold at auction; prices have ranged in recent years from US$40,000 to US$547,500 (the latter in 2017). Replicas are available in various forms, including an exact reconstructed copy of the Naval M4 model, an Enigma implemented in electronics (Enigma-E), various simulators and paper-and-scissors analogues.
A rare "Abwehr" Enigma machine, designated G312, was stolen from the Bletchley Park museum on 1 April 2000. In September, a man identifying himself as "The Master" sent a note demanding £25,000 and threatening to destroy the machine if the ransom was not paid. In early October 2000, Bletchley Park officials announced that they would pay the ransom, but the stated deadline passed with no word from the blackmailer. Shortly afterward, the machine was sent anonymously to BBC journalist Jeremy Paxman, missing three rotors.
In November 2000, an antiques dealer named Dennis Yates was arrested after telephoning "The Sunday Times" to arrange the return of the missing parts. The Enigma machine was returned to Bletchley Park after the incident. In October 2001, Yates was sentenced to ten months in prison and served three months.
In October 2008, the Spanish daily newspaper "El País" reported that 28 Enigma machines had been discovered by chance in an attic of Army headquarters in Madrid. These four-rotor commercial machines had helped Franco's Nationalists win the Spanish Civil War, because, though the British cryptologist Alfred Dillwyn Knox broke the cipher generated by Franco's Enigma machines in 1937, this was not disclosed to the Republicans, who failed to break the cipher. The Nationalist government continued using its 50 Enigmas into the 1950s. Some machines have gone on display in Spanish military museums, including one at the National Museum of Science and Technology (MUNCYT) in La Coruña and one at the Spanish Army Museum. Two have been given to Britain's GCHQ.
The Bulgarian military used Enigma machines with a Cyrillic keyboard; one is on display in the National Museum of Military History in Sofia.
On 3 December 2020, German divers working on behalf of the World Wide Fund for Nature discovered a destroyed Enigma machine in Flensburg Firth (part of the Baltic Sea) which is believed to be from a scuttled U-boat. This Enigma machine will be restored by, and become the property of, the Archaeology Museum of Schleswig-Holstein.
An M4 Enigma was salvaged in the 1980s from the German minesweeper R15, which was sunk off the Istrian coast in 1945. The machine was put on display in the Pivka Park of Military History in Slovenia on 13 April 2023.
Derivatives.
The Enigma was influential in the field of cipher machine design, spinning off other rotor machines. Once the British discovered Enigma's principle of operation, they created the Typex rotor cipher, which the Germans believed to be unsolvable. Typex was originally derived from the Enigma patents; Typex even includes features from the patent descriptions that were omitted from the actual Enigma machine. The British paid no royalties for the use of the patents. In the United States, cryptologist William Friedman designed the M-325 machine, starting in 1936, which is logically similar to the Enigma.
Machines like the SIGABA, NEMA, Typex, and so forth, are not considered to be Enigma derivatives as their internal ciphering functions are not mathematically identical to the Enigma transform.
A unique rotor machine called Cryptograph was constructed in 2002 by Netherlands-based Tatjana van Vark. This device makes use of 40-point rotors, allowing letters, numbers and some punctuation to be used; each rotor contains 509 parts.
| [
{
"math_id": 0,
"text": "U=U^{-1}"
},
{
"math_id": 1,
"text": "E=PRMLUL^{-1}M^{-1}R^{-1}P^{-1}."
},
{
"math_id": 2,
"text": "\\rho^nR\\rho^{-n},"
},
{
"math_id": 3,
"text": "E=P\\left(\\rho^n R\\rho^{-n}\\right)\\left(\\rho^j M\\rho^{-j}\\right)\\left(\\rho^{k}L\\rho^{-k}\\right)U\\left(\\rho^kL^{-1}\\rho^{-k}\\right)\\left(\\rho^j M^{-1}\\rho^{-j}\\right)\\left(\\rho^n R^{-1}\\rho^{-n}\\right)P^{-1}."
}
] | https://en.wikipedia.org/wiki?curid=9256 |
9257 | Enzyme | Large biological molecule that acts as a catalyst
Enzymes () are proteins that act as biological catalysts by accelerating chemical reactions. The molecules upon which enzymes may act are called substrates, and the enzyme converts the substrates into different molecules known as products. Almost all metabolic processes in the cell need enzyme catalysis in order to occur at rates fast enough to sustain life. Metabolic pathways depend upon enzymes to catalyze individual steps. The study of enzymes is called "enzymology" and the field of pseudoenzyme analysis recognizes that during evolution, some enzymes have lost the ability to carry out biological catalysis, which is often reflected in their amino acid sequences and unusual 'pseudocatalytic' properties.
Enzymes are known to catalyze more than 5,000 biochemical reaction types.
Other biocatalysts are catalytic RNA molecules, also called ribozymes. They are sometimes described as a "type" of enzyme rather than being "like" an enzyme, but even in the decades since ribozymes' discovery in 1980-1982, the word "enzyme" alone often means the protein type specifically (as is used in this article).
An enzyme's specificity comes from its unique three-dimensional structure.
Like all catalysts, enzymes increase the reaction rate by lowering its activation energy. Some enzymes can make their conversion of substrate to product occur many millions of times faster. An extreme example is orotidine 5'-phosphate decarboxylase, which allows a reaction that would otherwise take millions of years to occur in milliseconds. Chemically, enzymes are like any catalyst and are not consumed in chemical reactions, nor do they alter the equilibrium of a reaction. Enzymes differ from most other catalysts by being much more specific. Enzyme activity can be affected by other molecules: inhibitors are molecules that decrease enzyme activity, and activators are molecules that increase activity. Many therapeutic drugs and poisons are enzyme inhibitors. An enzyme's activity decreases markedly outside its optimal temperature and pH, and many enzymes are (permanently) denatured when exposed to excessive heat, losing their structure and catalytic properties.
Some enzymes are used commercially, for example, in the synthesis of antibiotics. Some household products use enzymes to speed up chemical reactions: enzymes in biological washing powders break down protein, starch or fat stains on clothes, and enzymes in meat tenderizer break down proteins into smaller molecules, making the meat easier to chew.
<templatestyles src="Template:TOC limit/styles.css" />
Etymology and history.
By the late 17th and early 18th centuries, the digestion of meat by stomach secretions and the conversion of starch to sugars by plant extracts and saliva were known but the mechanisms by which these occurred had not been identified.
French chemist Anselme Payen was the first to discover an enzyme, diastase, in 1833. A few decades later, when studying the fermentation of sugar to alcohol by yeast, Louis Pasteur concluded that this fermentation was caused by a vital force contained within the yeast cells called "ferments", which were thought to function only within living organisms. He wrote that "alcoholic fermentation is an act correlated with the life and organization of the yeast cells, not with the death or putrefaction of the cells."
In 1877, German physiologist Wilhelm Kühne (1837–1900) first used the term "enzyme", which comes from the Ancient Greek "ἔνζυμον" (énzymon), 'leavened, in yeast', to describe this process. The word "enzyme" was used later to refer to nonliving substances such as pepsin, and the word "ferment" was used to refer to chemical activity produced by living organisms.
Eduard Buchner submitted his first paper on the study of yeast extracts in 1897. In a series of experiments at the University of Berlin, he found that sugar was fermented by yeast extracts even when there were no living yeast cells in the mixture. He named the enzyme that brought about the fermentation of sucrose "zymase". In 1907, he received the Nobel Prize in Chemistry for "his discovery of cell-free fermentation". Following Buchner's example, enzymes are usually named according to the reaction they carry out: the suffix "-ase" is combined with the name of the substrate (e.g., lactase is the enzyme that cleaves lactose) or to the type of reaction (e.g., DNA polymerase forms DNA polymers).
The biochemical identity of enzymes was still unknown in the early 1900s. Many scientists observed that enzymatic activity was associated with proteins, but others (such as Nobel laureate Richard Willstätter) argued that proteins were merely carriers for the true enzymes and that proteins "per se" were incapable of catalysis. In 1926, James B. Sumner showed that the enzyme urease was a pure protein and crystallized it; he did likewise for the enzyme catalase in 1937. The conclusion that pure proteins can be enzymes was definitively demonstrated by John Howard Northrop and Wendell Meredith Stanley, who worked on the digestive enzymes pepsin (1930), trypsin and chymotrypsin. These three scientists were awarded the 1946 Nobel Prize in Chemistry.
The discovery that enzymes could be crystallized eventually allowed their structures to be solved by x-ray crystallography. This was first done for lysozyme, an enzyme found in tears, saliva and egg whites that digests the coating of some bacteria; the structure was solved by a group led by David Chilton Phillips and published in 1965. This high-resolution structure of lysozyme marked the beginning of the field of structural biology and the effort to understand how enzymes work at an atomic level of detail.
Classification and nomenclature.
Enzymes can be classified by two main criteria: either amino acid sequence similarity (and thus evolutionary relationship) or enzymatic activity.
Enzyme activity. An enzyme's name is often derived from its substrate or the chemical reaction it catalyzes, with the word ending in "-ase". Examples are lactase, alcohol dehydrogenase and DNA polymerase. Different enzymes that catalyze the same chemical reaction are called isozymes.
The International Union of Biochemistry and Molecular Biology has developed a nomenclature for enzymes, the EC numbers (for "Enzyme Commission"). Each enzyme is described by "EC" followed by a sequence of four numbers which represent the hierarchy of enzymatic activity (from very general to very specific). That is, the first number broadly classifies the enzyme based on its mechanism, while the other digits add more and more specificity.
The top-level classification is: EC 1, oxidoreductases; EC 2, transferases; EC 3, hydrolases; EC 4, lyases; EC 5, isomerases; EC 6, ligases; and EC 7, translocases.
These sections are subdivided by other features such as the substrate, products, and chemical mechanism. An enzyme is fully specified by four numerical designations. For example, hexokinase (EC 2.7.1.1) is a transferase (EC 2) that adds a phosphate group (EC 2.7) to a hexose sugar, a molecule containing an alcohol group (EC 2.7.1).
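As a small illustration of how the four numbers narrow the classification down, the Python sketch below walks through the hexokinase example; the level descriptions simply paraphrase the sentence above, and the lookup table is a toy, not a copy of the official EC list.

# Toy lookup for the hexokinase example above; descriptions paraphrase the text.
EC_LEVELS = {
    "2":       "transferase (transfers a functional group)",
    "2.7":     "transfers a phosphate group",
    "2.7.1":   "phosphate group transferred to an acceptor with an alcohol group",
    "2.7.1.1": "hexokinase",
}

def describe(ec_number):
    parts = ec_number.split(".")
    for i in range(1, len(parts) + 1):
        prefix = ".".join(parts[:i])
        print(f"EC {prefix:<9}{EC_LEVELS.get(prefix, '(not in this toy table)')}")

describe("2.7.1.1")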
Sequence similarity. EC categories do not reflect sequence similarity. For instance, two ligases of the same EC number that catalyze exactly the same reaction can have completely different sequences. Independent of their function, enzymes, like any other proteins, have been classified by their sequence similarity into numerous families. These families have been documented in dozens of different protein and protein family databases such as Pfam.
Non-homologous isofunctional enzymes. Unrelated enzymes that have the same enzymatic activity have been called "non-homologous isofunctional enzymes". Horizontal gene transfer may spread these genes to unrelated species, especially bacteria, where they can replace endogenous genes of the same function, leading to non-homologous gene displacement.
Structure.
Enzymes are generally globular proteins, acting alone or in larger complexes. The sequence of the amino acids specifies the structure which in turn determines the catalytic activity of the enzyme. Although structure determines function, a novel enzymatic activity cannot yet be predicted from structure alone. Enzyme structures unfold (denature) when heated or exposed to chemical denaturants and this disruption to the structure typically causes a loss of activity. Enzyme denaturation is normally linked to temperatures above a species' normal level; as a result, enzymes from bacteria living in volcanic environments such as hot springs are prized by industrial users for their ability to function at high temperatures, allowing enzyme-catalysed reactions to be operated at a very high rate.
Enzymes are usually much larger than their substrates. Sizes range from just 62 amino acid residues, for the monomer of 4-oxalocrotonate tautomerase, to over 2,500 residues in the animal fatty acid synthase. Only a small portion of their structure (around 2–4 amino acids) is directly involved in catalysis: the catalytic site. This catalytic site is located next to one or more binding sites where residues orient the substrates. The catalytic site and binding site together compose the enzyme's active site. The remaining majority of the enzyme structure serves to maintain the precise orientation and dynamics of the active site.
In some enzymes, no amino acids are directly involved in catalysis; instead, the enzyme contains sites to bind and orient catalytic cofactors. Enzyme structures may also contain allosteric sites where the binding of a small molecule causes a conformational change that increases or decreases activity.
A small number of RNA-based biological catalysts called ribozymes exist, which again can act alone or in complex with proteins. The most common of these is the ribosome which is a complex of protein and catalytic RNA components.
Mechanism.
Substrate binding.
Enzymes must bind their substrates before they can catalyse any chemical reaction. Enzymes are usually very specific as to which substrates they bind and which chemical reactions they catalyse. Specificity is achieved by binding pockets with complementary shape, charge and hydrophilic/hydrophobic characteristics to the substrates. Enzymes can therefore distinguish between very similar substrate molecules to be chemoselective, regioselective and stereospecific.
Some of the enzymes showing the highest specificity and accuracy are involved in the copying and expression of the genome. Some of these enzymes have "proof-reading" mechanisms. Here, an enzyme such as DNA polymerase catalyzes a reaction in a first step and then checks that the product is correct in a second step. This two-step process results in average error rates of less than 1 error in 100 million reactions in high-fidelity mammalian polymerases. Similar proofreading mechanisms are also found in RNA polymerase, aminoacyl tRNA synthetases and ribosomes.
Conversely, some enzymes display enzyme promiscuity, having broad specificity and acting on a range of different physiologically relevant substrates. Many enzymes possess small side activities which arose fortuitously (i.e. neutrally), which may be the starting point for the evolutionary selection of a new function.
"Lock and key" model.
To explain the observed specificity of enzymes, in 1894 Emil Fischer proposed that both the enzyme and the substrate possess specific complementary geometric shapes that fit exactly into one another. This is often referred to as "the lock and key" model. This early model explains enzyme specificity, but fails to explain the stabilization of the transition state that enzymes achieve.
Induced fit model.
In 1958, Daniel Koshland suggested a modification to the lock and key model: since enzymes are rather flexible structures, the active site is continuously reshaped by its interactions with the substrate. As a result, the substrate does not simply bind to a rigid active site; the amino acid side-chains that make up the active site are molded into the precise positions that enable the enzyme to perform its catalytic function. In some cases, such as glycosidases, the substrate molecule also changes shape slightly as it enters the active site. The active site continues to change until the substrate is completely bound, at which point the final shape and charge distribution is determined.
Induced fit may enhance the fidelity of molecular recognition in the presence of competition and noise via the conformational proofreading mechanism.
Catalysis.
Enzymes can accelerate reactions in several ways, all of which lower the activation energy (ΔG‡, Gibbs free energy): for example, by stabilising the transition state, by providing an alternative reaction pathway, or by destabilising the ground state of the substrate.
Enzymes may use several of these mechanisms simultaneously. For example, proteases such as trypsin perform covalent catalysis using a catalytic triad, stabilize charge build-up on the transition states using an oxyanion hole, and complete hydrolysis using an oriented water substrate.
Dynamics.
Enzymes are not rigid, static structures; instead they have complex internal dynamic motions – that is, movements of parts of the enzyme's structure such as individual amino acid residues, groups of residues forming a protein loop or unit of secondary structure, or even an entire protein domain. These motions give rise to a conformational ensemble of slightly different structures that interconvert with one another at equilibrium. Different states within this ensemble may be associated with different aspects of an enzyme's function. For example, different conformations of the enzyme dihydrofolate reductase are associated with the substrate binding, catalysis, cofactor release, and product release steps of the catalytic cycle, consistent with catalytic resonance theory.
Substrate presentation.
Substrate presentation is a process where the enzyme is sequestered away from its substrate. Enzymes can be sequestered to the plasma membrane away from a substrate in the nucleus or cytosol. Or within the membrane, an enzyme can be sequestered into lipid rafts away from its substrate in the disordered region. When the enzyme is released it mixes with its substrate. Alternatively, the enzyme can be sequestered near its substrate to activate the enzyme. For example, the enzyme can be soluble and upon activation bind to a lipid in the plasma membrane and then act upon molecules in the plasma membrane.
Allosteric modulation.
Allosteric sites are pockets on the enzyme, distinct from the active site, that bind to molecules in the cellular environment. These molecules then cause a change in the conformation or dynamics of the enzyme that is transduced to the active site and thus affects the reaction rate of the enzyme. In this way, allosteric interactions can either inhibit or activate enzymes. Allosteric interactions with metabolites upstream or downstream in an enzyme's metabolic pathway cause feedback regulation, altering the activity of the enzyme according to the flux through the rest of the pathway.
Cofactors.
Some enzymes do not need additional components to show full activity. Others require non-protein molecules called cofactors to be bound for activity. Cofactors can be either inorganic (e.g., metal ions and iron–sulfur clusters) or organic compounds (e.g., flavin and heme). These cofactors serve many purposes; for instance, metal ions can help in stabilizing nucleophilic species within the active site. Organic cofactors can be either coenzymes, which are released from the enzyme's active site during the reaction, or prosthetic groups, which are tightly bound to an enzyme. Organic prosthetic groups can be covalently bound (e.g., biotin in enzymes such as pyruvate carboxylase).
An example of an enzyme that contains a cofactor is carbonic anhydrase, which uses a zinc cofactor bound as part of its active site. These tightly bound ions or molecules are usually found in the active site and are involved in catalysis. For example, flavin and heme cofactors are often involved in redox reactions.
Enzymes that require a cofactor but do not have one bound are called "apoenzymes" or "apoproteins". An enzyme together with the cofactor(s) required for activity is called a "holoenzyme" (or haloenzyme). The term "holoenzyme" can also be applied to enzymes that contain multiple protein subunits, such as the DNA polymerases; here the holoenzyme is the complete complex containing all the subunits needed for activity.
Coenzymes.
Coenzymes are small organic molecules that can be loosely or tightly bound to an enzyme. Coenzymes transport chemical groups from one enzyme to another. Examples include NADH, NADPH and adenosine triphosphate (ATP). Some coenzymes, such as flavin mononucleotide (FMN), flavin adenine dinucleotide (FAD), thiamine pyrophosphate (TPP), and tetrahydrofolate (THF), are derived from vitamins. These coenzymes cannot be synthesized by the body "de novo" and closely related compounds (vitamins) must be acquired from the diet. The chemical groups carried include the hydride ion (carried by NAD+ or NADP+), the phosphate group (carried by ATP), the acetyl group (carried by coenzyme A), formyl, methenyl or methyl groups (carried by folates) and the methyl group (carried by "S"-adenosylmethionine).
Since coenzymes are chemically changed as a consequence of enzyme action, it is useful to consider coenzymes to be a special class of substrates, or second substrates, which are common to many different enzymes. For example, about 1000 enzymes are known to use the coenzyme NADH.
Coenzymes are usually continuously regenerated and their concentrations maintained at a steady level inside the cell. For example, NADPH is regenerated through the pentose phosphate pathway and "S"-adenosylmethionine by methionine adenosyltransferase. This continuous regeneration means that small amounts of coenzymes can be used very intensively. For example, the human body turns over its own weight in ATP each day.
Thermodynamics.
As with all catalysts, enzymes do not alter the position of the chemical equilibrium of the reaction. In the presence of an enzyme, the reaction runs in the same direction as it would without the enzyme, just more quickly. For example, carbonic anhydrase catalyzes its reaction (CO2 + H2O ⇌ H2CO3) in either direction depending on the concentration of its reactants: towards carbonic acid in tissues, where the carbon dioxide concentration is high, and back towards carbon dioxide in the lungs, where its concentration is low.
The rate of a reaction is dependent on the activation energy needed to form the transition state which then decays into products. Enzymes increase reaction rates by lowering the energy of the transition state. First, binding forms a low energy enzyme-substrate complex (ES). Second, the enzyme stabilises the transition state such that it requires less energy to achieve compared to the uncatalyzed reaction (ES‡). Finally the enzyme-product complex (EP) dissociates to release the products.
Enzymes can couple two or more reactions, so that a thermodynamically favorable reaction can be used to "drive" a thermodynamically unfavourable one so that the combined energy of the products is lower than the substrates. For example, the hydrolysis of ATP is often used to drive other chemical reactions.
Kinetics.
Enzyme kinetics is the investigation of how enzymes bind substrates and turn them into products. The rate data used in kinetic analyses are commonly obtained from enzyme assays. In 1913 Leonor Michaelis and Maud Leonora Menten proposed a quantitative theory of enzyme kinetics, which is referred to as Michaelis–Menten kinetics. The major contribution of Michaelis and Menten was to think of enzyme reactions in two stages. In the first, the substrate binds reversibly to the enzyme, forming the enzyme-substrate complex. This is sometimes called the Michaelis–Menten complex in their honor. The enzyme then catalyzes the chemical step in the reaction and releases the product. This work was further developed by G. E. Briggs and J. B. S. Haldane, who derived kinetic equations that are still widely used today.
Enzyme rates depend on solution conditions and substrate concentration. To find the maximum speed of an enzymatic reaction, the substrate concentration is increased until a constant rate of product formation is seen; plotting rate against substrate concentration gives a hyperbolic saturation curve. Saturation happens because, as substrate concentration increases, more and more of the free enzyme is converted into the substrate-bound ES complex. At the maximum reaction rate ("V"max) of the enzyme, all the enzyme active sites are bound to substrate, and the amount of ES complex is the same as the total amount of enzyme.
"V"max is only one of several important kinetic parameters. The amount of substrate needed to achieve a given rate of reaction is also important. This is given by the Michaelis–Menten constant ("K"m), which is the substrate concentration required for an enzyme to reach one-half its maximum reaction rate; generally, each enzyme has a characteristic "K"M for a given substrate. Another useful constant is "k"cat, also called the "turnover number", which is the number of substrate molecules handled by one active site per second.
The efficiency of an enzyme can be expressed in terms of "k"cat/"K"m. This is also called the specificity constant and incorporates the rate constants for all steps in the reaction up to and including the first irreversible step. Because the specificity constant reflects both affinity and catalytic ability, it is useful for comparing different enzymes against each other, or the same enzyme with different substrates. The theoretical maximum for the specificity constant is called the diffusion limit and is about 10^8 to 10^9 M^−1 s^−1. At this point every collision of the enzyme with its substrate will result in catalysis, and the rate of product formation is not limited by the reaction rate but by the diffusion rate. Enzymes with this property are called "catalytically perfect" or "kinetically perfect". Examples of such enzymes are triose-phosphate isomerase, carbonic anhydrase, acetylcholinesterase, catalase, fumarase, β-lactamase, and superoxide dismutase. The turnover of such enzymes can reach several million reactions per second. But most enzymes are far from perfect: the average values of formula_0 and formula_1 are about formula_2 and formula_3, respectively.
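The saturation behaviour and the meaning of Km, kcat and the specificity constant can be illustrated numerically. The parameter values in the Python sketch below are arbitrary round numbers chosen for illustration, not measurements of any particular enzyme.

def michaelis_menten_rate(s, vmax, km):
    # Michaelis-Menten rate law: v = Vmax * [S] / (Km + [S]).
    return vmax * s / (km + s)

kcat = 100.0      # turnover number, per second (illustrative)
e_total = 1e-6    # total enzyme concentration, molar (illustrative)
km = 1e-4         # Michaelis constant, molar (illustrative)
vmax = kcat * e_total

for s in (0.1 * km, km, 10 * km, 100 * km):
    v = michaelis_menten_rate(s, vmax, km)
    print(f"[S] = {s:.1e} M   v = {v:.2e} M/s   ({100 * v / vmax:5.1f}% of Vmax)")

# At [S] = Km the rate is exactly half of Vmax.  The specificity constant here is
# kcat / Km = 1e6 per molar per second, two to three orders of magnitude below the
# diffusion limit of roughly 1e8 to 1e9 mentioned above.
print("kcat/Km =", kcat / km, "M^-1 s^-1")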
Michaelis–Menten kinetics relies on the law of mass action, which is derived from the assumptions of free diffusion and thermodynamically driven random collision. Many biochemical or cellular processes deviate significantly from these conditions, because of macromolecular crowding and constrained molecular movement. More recent, complex extensions of the model attempt to correct for these effects.
Inhibition.
Enzyme reaction rates can be decreased by various types of enzyme inhibitors.
Types of inhibition.
Competitive.
A competitive inhibitor and substrate cannot bind to the enzyme at the same time. Often competitive inhibitors strongly resemble the real substrate of the enzyme. For example, the drug methotrexate is a competitive inhibitor of the enzyme dihydrofolate reductase, which catalyzes the reduction of dihydrofolate to tetrahydrofolate; methotrexate is structurally very similar to dihydrofolate. This type of inhibition can be overcome with high substrate concentration. In some cases, the inhibitor can bind to a site other than the binding-site of the usual substrate and exert an allosteric effect to change the shape of the usual binding-site.
Non-competitive.
A non-competitive inhibitor binds to a site other than where the substrate binds. The substrate still binds with its usual affinity and hence Km remains the same. However, the inhibitor reduces the catalytic efficiency of the enzyme so that Vmax is reduced. In contrast to competitive inhibition, non-competitive inhibition cannot be overcome with high substrate concentration.
Uncompetitive.
An uncompetitive inhibitor cannot bind to the free enzyme, only to the enzyme-substrate complex; hence, these types of inhibitors are most effective at high substrate concentration. In the presence of the inhibitor, the enzyme-substrate complex is inactive. This type of inhibition is rare.
Mixed.
A mixed inhibitor binds to an allosteric site and the binding of the substrate and the inhibitor affect each other. The enzyme's function is reduced but not eliminated when bound to the inhibitor. This type of inhibitor does not follow the Michaelis–Menten equation.
Irreversible.
An irreversible inhibitor permanently inactivates the enzyme, usually by forming a covalent bond to the protein. Penicillin and aspirin are common drugs that act in this manner.
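The practical difference between the competitive and non-competitive cases described above can be seen directly from the standard textbook rate laws, sketched below in Python with arbitrary illustrative parameters (consistent but unspecified units).

def competitive(s, i, vmax, km, ki):
    # Competitive inhibition: apparent Km rises by (1 + [I]/Ki); Vmax is unchanged.
    return vmax * s / (km * (1 + i / ki) + s)

def noncompetitive(s, i, vmax, km, ki):
    # Pure non-competitive inhibition: Vmax falls by (1 + [I]/Ki); Km is unchanged.
    return (vmax / (1 + i / ki)) * s / (km + s)

vmax, km, ki, i = 1.0, 1.0, 0.5, 1.0   # arbitrary illustrative values

for s in (1.0, 10.0, 1000.0):
    print(f"[S] = {s:7.1f}   competitive v = {competitive(s, i, vmax, km, ki):.3f}"
          f"   non-competitive v = {noncompetitive(s, i, vmax, km, ki):.3f}")

# As [S] grows, the competitive rate approaches Vmax (the inhibition is overcome),
# while the non-competitive rate plateaus at Vmax / (1 + [I]/Ki), matching the text.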
Functions of inhibitors.
In many organisms, inhibitors may act as part of a feedback mechanism. If an enzyme produces too much of one substance in the organism, that substance may act as an inhibitor for the enzyme at the beginning of the pathway that produces it, causing production of the substance to slow down or stop when there is sufficient amount. This is a form of negative feedback. Major metabolic pathways such as the citric acid cycle make use of this mechanism.
Since inhibitors modulate the function of enzymes they are often used as drugs. Many such drugs are reversible competitive inhibitors that resemble the enzyme's native substrate, similar to methotrexate above; other well-known examples include statins used to treat high cholesterol, and protease inhibitors used to treat retroviral infections such as HIV. A common example of an irreversible inhibitor that is used as a drug is aspirin, which inhibits the COX-1 and COX-2 enzymes that produce the inflammation messenger prostaglandin. Other enzyme inhibitors are poisons. For example, the poison cyanide is an irreversible enzyme inhibitor that combines with the copper and iron in the active site of the enzyme cytochrome c oxidase and blocks cellular respiration.
Factors affecting enzyme activity.
As enzymes are made up of proteins, their actions are sensitive to changes in many physicochemical factors such as pH, temperature and substrate concentration.
The following table shows pH optima for various enzymes.
Biological function.
Enzymes serve a wide variety of functions inside living organisms. They are indispensable for signal transduction and cell regulation, often via kinases and phosphatases. They also generate movement, with myosin hydrolyzing adenosine triphosphate (ATP) to generate muscle contraction, and also transport cargo around the cell as part of the cytoskeleton. Other ATPases in the cell membrane are ion pumps involved in active transport. Enzymes are also involved in more exotic functions, such as luciferase generating light in fireflies. Viruses can also contain enzymes for infecting cells, such as the HIV integrase and reverse transcriptase, or for viral release from cells, like the influenza virus neuraminidase.
An important function of enzymes is in the digestive systems of animals. Enzymes such as amylases and proteases break down large molecules (starch or proteins, respectively) into smaller ones, so they can be absorbed by the intestines. Starch molecules, for example, are too large to be absorbed from the intestine, but enzymes hydrolyze the starch chains into smaller molecules such as maltose and eventually glucose, which can then be absorbed. Different enzymes digest different food substances. In ruminants, which have herbivorous diets, microorganisms in the gut produce another enzyme, cellulase, to break down the cellulose cell walls of plant fiber.
Metabolism.
Several enzymes can work together in a specific order, creating metabolic pathways. In a metabolic pathway, one enzyme takes the product of another enzyme as a substrate. After the catalytic reaction, the product is then passed on to another enzyme. Sometimes more than one enzyme can catalyze the same reaction in parallel; this can allow more complex regulation: with, for example, a low constant activity provided by one enzyme but an inducible high activity from a second enzyme.
Enzymes determine what steps occur in these pathways. Without enzymes, metabolism would neither progress through the same steps nor be regulated to serve the needs of the cell. Most central metabolic pathways are regulated at a few key steps, typically through enzymes whose activity involves the hydrolysis of ATP. Because this reaction releases so much energy, other reactions that are thermodynamically unfavorable can be coupled to ATP hydrolysis, driving the overall series of linked metabolic reactions.
Control of activity.
There are five main ways that enzyme activity is controlled in the cell.
Regulation.
Enzymes can be either activated or inhibited by other molecules. For example, the end product(s) of a metabolic pathway are often inhibitors for one of the first enzymes of the pathway (usually the first irreversible step, called committed step), thus regulating the amount of end product made by the pathways. Such a regulatory mechanism is called a negative feedback mechanism, because the amount of the end product produced is regulated by its own concentration. Negative feedback mechanism can effectively adjust the rate of synthesis of intermediate metabolites according to the demands of the cells. This helps with effective allocations of materials and energy economy, and it prevents the excess manufacture of end products. Like other homeostatic devices, the control of enzymatic action helps to maintain a stable internal environment in living organisms.
Post-translational modification.
Examples of post-translational modification include phosphorylation, myristoylation and glycosylation. For example, in the response to insulin, the phosphorylation of multiple enzymes, including glycogen synthase, helps control the synthesis or degradation of glycogen and allows the cell to respond to changes in blood sugar. Another example of post-translational modification is the cleavage of the polypeptide chain. Chymotrypsin, a digestive protease, is produced in inactive form as chymotrypsinogen in the pancreas and transported in this form to the small intestine, where it is activated. This stops the enzyme from digesting the pancreas or other tissues before it enters the gut. This type of inactive precursor to an enzyme is known as a zymogen or proenzyme.
Quantity.
Enzyme production (transcription and translation of enzyme genes) can be enhanced or diminished by a cell in response to changes in the cell's environment. This form of gene regulation is called enzyme induction. For example, bacteria may become resistant to antibiotics such as penicillin because enzymes called beta-lactamases are induced that hydrolyse the crucial beta-lactam ring within the penicillin molecule. Another example comes from enzymes in the liver called cytochrome P450 oxidases, which are important in drug metabolism. Induction or inhibition of these enzymes can cause drug interactions. Enzyme levels can also be regulated by changing the rate of enzyme degradation. The opposite of enzyme induction is enzyme repression.
Subcellular distribution.
Enzymes can be compartmentalized, with different metabolic pathways occurring in different cellular compartments. For example, fatty acids are synthesized by one set of enzymes in the cytosol, endoplasmic reticulum and Golgi and used by a different set of enzymes as a source of energy in the mitochondrion, through β-oxidation. In addition, trafficking of the enzyme to different compartments may change the degree of protonation (e.g., the neutral cytoplasm and the acidic lysosome) or oxidative state (e.g., oxidizing periplasm or reducing cytoplasm) which in turn affects enzyme activity. In contrast to partitioning into membrane bound organelles, enzyme subcellular localisation may also be altered through polymerisation of enzymes into macromolecular cytoplasmic filaments.
Organ specialization.
In multicellular eukaryotes, cells in different organs and tissues have different patterns of gene expression and therefore have different sets of enzymes (known as isozymes) available for metabolic reactions. This provides a mechanism for regulating the overall metabolism of the organism. For example, hexokinase, the first enzyme in the glycolysis pathway, has a specialized form called glucokinase expressed in the liver and pancreas that has a lower affinity for glucose yet is more sensitive to glucose concentration. This enzyme is involved in sensing blood sugar and regulating insulin production.
Involvement in disease.
Since the tight control of enzyme activity is essential for homeostasis, any malfunction (mutation, overproduction, underproduction or deletion) of a single critical enzyme can lead to a genetic disease. The malfunction of just one type of enzyme out of the thousands of types present in the human body can be fatal. An example of a fatal genetic disease due to enzyme insufficiency is Tay–Sachs disease, in which patients lack the enzyme hexosaminidase.
One example of enzyme deficiency is the most common type of phenylketonuria. Many different single amino acid mutations in the enzyme phenylalanine hydroxylase, which catalyzes the first step in the degradation of phenylalanine, result in build-up of phenylalanine and related products. Some mutations are in the active site, directly disrupting binding and catalysis, but many are far from the active site and reduce activity by destabilising the protein structure, or affecting correct oligomerisation. This can lead to intellectual disability if the disease is untreated. Another example is pseudocholinesterase deficiency, in which the body's ability to break down choline ester drugs is impaired.
Oral administration of enzymes can be used to treat some functional enzyme deficiencies, such as pancreatic insufficiency and lactose intolerance.
Another way enzyme malfunctions can cause disease comes from germline mutations in genes coding for DNA repair enzymes. Defects in these enzymes cause cancer because cells are less able to repair mutations in their genomes. This causes a slow accumulation of mutations and results in the development of cancers. An example of such a hereditary cancer syndrome is xeroderma pigmentosum, which causes the development of skin cancers in response to even minimal exposure to ultraviolet light.
Evolution.
Similar to any other protein, enzymes change over time through mutations and sequence divergence. Given their central role in metabolism, enzyme evolution plays a critical role in adaptation. A key question is therefore whether and how enzymes can change their enzymatic activities alongside these sequence changes. It is generally accepted that many new enzyme activities have evolved through gene duplication and mutation of the duplicate copies, although evolution can also happen without duplication. One example of an enzyme that has changed its activity is the ancestor of methionyl aminopeptidase (MAP) and creatine amidinohydrolase (creatinase), which are clearly homologous but catalyze very different reactions (MAP removes the amino-terminal methionine in new proteins while creatinase hydrolyses creatine to sarcosine and urea). In addition, MAP is metal-ion dependent while creatinase is not, hence this property was also lost over time. Small changes of enzymatic activity are extremely common among enzymes. In particular, substrate binding specificity (see above) can easily and quickly change with single amino acid changes in their substrate binding pockets. This is frequently seen in the main enzyme classes such as kinases.
Artificial (in vitro) evolution is now commonly used to modify enzyme activity or specificity for industrial applications (see below).
Industrial applications.
Enzymes are used in the chemical industry and other industrial applications when extremely specific catalysts are required. Enzymes in general are limited in the number of reactions they have evolved to catalyze and also by their lack of stability in organic solvents and at high temperatures. As a consequence, protein engineering is an active area of research and involves attempts to create new enzymes with novel properties, either through rational design or "in vitro" evolution. These efforts have begun to be successful, and a few enzymes have now been designed "from scratch" to catalyze reactions that do not occur in nature.
References.
Further reading.
| [
{
"math_id": 0,
"text": "k_{\\rm cat}/K_{\\rm m}"
},
{
"math_id": 1,
"text": "k_{\\rm cat}"
},
{
"math_id": 2,
"text": " 10^5 {\\rm s}^{-1}{\\rm M}^{-1}"
},
{
"math_id": 3,
"text": "10 {\\rm s}^{-1}"
}
] | https://en.wikipedia.org/wiki?curid=9257 |
9258361 | Ruppeiner geometry | Ruppeiner geometry is thermodynamic geometry (a type of information geometry) using the language of Riemannian geometry to study thermodynamics. George Ruppeiner proposed it in 1979. He claimed that thermodynamic systems can be represented by Riemannian geometry, and that statistical properties can be derived from the model.
This geometrical model is based on the inclusion of the theory of fluctuations into the axioms of equilibrium thermodynamics: there exist equilibrium states which can be represented by points on a two-dimensional surface (manifold), and the distance between these equilibrium states is related to the fluctuation between them. This concept is associated with probabilities, i.e. the less probable a fluctuation between states, the further apart they are. This can be recognized if one considers the metric tensor "g""ij" in the distance formula (line element) between the two equilibrium states
formula_0
where the matrix of coefficients "g""ij" is the symmetric metric tensor which is called a Ruppeiner metric, defined as a negative Hessian of the entropy function
formula_1
where "U" is the internal energy (mass) of the system and "N""a" refers to the extensive parameters of the system. Mathematically, the Ruppeiner geometry is one particular type of information geometry and it is similar to the Fisher–Rao metric used in mathematical statistics.
The Ruppeiner metric can be understood as the thermodynamic limit (large systems limit) of the more general Fisher information metric. For small systems (systems where fluctuations are large), the Ruppeiner metric may fail to exist as a Riemannian metric, since the Hessian of the entropy is not guaranteed to be negative definite.
The Ruppeiner metric is conformally related to the Weinhold metric via
formula_2
where "T" is the temperature of the system under consideration. The conformal relation can be proved easily by writing down the first law of thermodynamics ("dU" = "TdS" + ...) in differential form and performing a few manipulations. The Weinhold geometry is also considered a thermodynamic geometry. It is defined as the Hessian of the internal energy with respect to entropy and the other extensive parameters.
formula_3
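As a concrete illustration of the conformal relation, the sketch below (an added example, not part of the original presentation) checks it symbolically for a one-particle monatomic ideal gas, assuming units with the Boltzmann constant set to 1 and dropping additive constants in the entropy; all names in the script are only illustrative.

```python
import sympy as sp

# Ideal monatomic gas, one particle, units with k_B = 1, additive constants dropped.
U, V, S = sp.symbols('U V S', positive=True)

# Entropy fundamental relation S(U, V) and its inversion U(S, V).
S_of_UV = sp.Rational(3, 2) * sp.log(U) + sp.log(V)
U_of_SV = sp.exp(sp.Rational(2, 3) * (S - sp.log(V)))   # solves S = 3/2 ln U + ln V for U

# Ruppeiner metric: negative Hessian of S in the coordinates (U, V).
gR = -sp.hessian(S_of_UV, (U, V))

# Weinhold metric: Hessian of U in the coordinates (S, V); temperature T = dU/dS.
gW = sp.hessian(U_of_SV, (S, V))
T = sp.diff(U_of_SV, S)

# Compare the two line elements along the same physical displacement (dS, dV).
dS, dV = sp.symbols('dS dV')
dU = T * dS + sp.diff(U_of_SV, V) * dV          # first law: dU = T dS - p dV
vR = sp.Matrix([dU, dV])
vW = sp.Matrix([dS, dV])

ds2_R = (vR.T * gR.subs(U, U_of_SV) * vR)[0]
ds2_W = (vW.T * gW * vW)[0]

# The Ruppeiner line element should equal the Weinhold line element divided by T.
print(sp.simplify(ds2_R - ds2_W / T))   # expected: 0
```

Under these assumptions the printed difference simplifies to zero, in agreement with formula_2.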
It has long been observed that the Ruppeiner metric is flat for systems with noninteracting underlying statistical mechanics such as the ideal gas. Curvature singularities signal critical behaviors. In addition, it has been applied to a number of statistical systems including Van der Waals gas. Recently the anyon gas has been studied using this approach.
Application to black hole systems.
This geometry has been applied to black hole thermodynamics, with some physically relevant results. The most physically significant case is for the Kerr black hole in higher dimensions, where the curvature singularity signals thermodynamic instability, as found earlier by conventional methods.
The entropy of a black hole is given by the well-known Bekenstein–Hawking formula
formula_4
where formula_5 is the Boltzmann constant, formula_6 is the speed of light, formula_7 is the Newtonian constant of gravitation and formula_8 is the area of the event horizon of the black hole. Calculating the Ruppeiner geometry of the black hole's entropy is, in principle, straightforward, but it is important that the entropy should be written in terms of extensive parameters,
formula_9
where formula_10 is the ADM mass of the black hole and "N""a" are the conserved charges, with "a" running from 1 to "n". The signature of the metric reflects the sign of the hole's specific heat. For a Reissner–Nordström black hole, the Ruppeiner metric has a Lorentzian signature, which corresponds to the negative heat capacity it possesses, while for the BTZ black hole we have a Euclidean signature. This calculation cannot be done for the Schwarzschild black hole, because its entropy is
formula_11
which renders the metric degenerate.
References.
| [
{
"math_id": 0,
"text": " ds^2 = g^R_{ij} dx^i dx^j, \\, "
},
{
"math_id": 1,
"text": " g^R_{ij} = -\\partial_i \\partial_j S(U, N^a) "
},
{
"math_id": 2,
"text": " ds^2_R = \\frac{1}{T} ds^2_W \\, "
},
{
"math_id": 3,
"text": " g^W_{ij} = \\partial_i \\partial_j U(S, N^a) "
},
{
"math_id": 4,
"text": " S = \\frac{k_\\text{B} c^3 A}{4G \\hbar} "
},
{
"math_id": 5,
"text": " k_\\text{B} "
},
{
"math_id": 6,
"text": " c "
},
{
"math_id": 7,
"text": " G "
},
{
"math_id": 8,
"text": " A "
},
{
"math_id": 9,
"text": " S= S(M, N^a) "
},
{
"math_id": 10,
"text": " M "
},
{
"math_id": 11,
"text": " S = S(M)"
}
] | https://en.wikipedia.org/wiki?curid=9258361 |
9259 | Equivalence relation | Mathematical concept for comparing objects
In mathematics, an equivalence relation is a binary relation that is reflexive, symmetric and transitive. The equipollence relation between line segments in geometry is a common example of an equivalence relation. A simpler example is equality. Any number formula_0 is equal to itself (reflexive). If formula_1, then formula_2 (symmetric). If formula_1 and formula_3, then formula_4 (transitive).
Each equivalence relation provides a partition of the underlying set into disjoint equivalence classes. Two elements of the given set are equivalent to each other if and only if they belong to the same equivalence class.
Notation.
Various notations are used in the literature to denote that two elements formula_0 and formula_5 of a set are equivalent with respect to an equivalence relation formula_6 the most common are "formula_7" and ""a" ≡ "b"", which are used when formula_8 is implicit, and variations of "formula_9", ""a" ≡"R" "b"", or "formula_10" to specify formula_8 explicitly. Non-equivalence may be written ""a" ≁ "b"" or "formula_11".
Definition.
A binary relation formula_12 on a set formula_13 is said to be an equivalence relation, if and only if it is reflexive, symmetric and transitive. That is, for all formula_14 and formula_15 in formula_16 formula_17 (reflexivity); if formula_7 then formula_18 (symmetry); and if formula_7 and formula_19 then formula_20 (transitivity).
formula_13 together with the relation formula_12 is called a setoid. The equivalence class of formula_0 under formula_21 denoted formula_22 is defined as formula_23
Alternative definition using relational algebra.
In relational algebra, if formula_24 and formula_25 are relations, then the composite relation formula_26 is defined so that formula_27 if and only if there is a formula_28 such that formula_29 and formula_30. This definition is a generalisation of the definition of functional composition. The defining properties of an equivalence relation formula_8 on a set formula_13 can then be reformulated as follows: reflexivity means formula_31 (where formula_32 denotes the identity relation on formula_13), symmetry means formula_33, and transitivity means formula_34.
Examples.
Simple example.
On the set formula_35, the relation formula_36 is an equivalence relation. The following sets are equivalence classes of this relation:
formula_37
The set of all equivalence classes for formula_8 is formula_38 This set is a partition of the set formula_13 with respect to formula_8.
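A minimal computational sketch of this example (purely illustrative; the names used here are not part of the article) recovers the two classes directly from the definition formula_23:

```python
# The set X and relation R from the example above, given as explicit pairs.
X = {"a", "b", "c"}
R = {("a", "a"), ("b", "b"), ("c", "c"), ("b", "c"), ("c", "b")}

def equivalence_class(x):
    """[x] = { y in X : (x, y) in R }, assuming R is an equivalence relation."""
    return frozenset(y for y in X if (x, y) in R)

print({equivalence_class(x) for x in X})   # {frozenset({'a'}), frozenset({'b', 'c'})}
```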
Equivalence relations.
The following relations are all equivalence relations:
Well-definedness under an equivalence relation.
If formula_12 is an equivalence relation on formula_49 and formula_50 is a property of elements of formula_49 such that whenever formula_51 formula_50 is true if formula_52 is true, then the property formula_53 is said to be well-defined or a class invariant under the relation formula_54
A frequent particular case occurs when formula_43 is a function from formula_13 to another set formula_55 if formula_56 implies formula_57 then formula_43 is said to be a morphism for formula_21 a class invariant under formula_21 or simply invariant under formula_54 This occurs, e.g. in the character theory of finite groups. The latter case with the function formula_43 can be expressed by a commutative triangle. See also invariant. Some authors use "compatible with formula_58" or just "respects formula_58" instead of "invariant under formula_58".
More generally, a function may map equivalent arguments (under an equivalence relation formula_59) to equivalent values (under an equivalence relation formula_60). Such a function is known as a morphism from formula_59 to formula_61
Related important definitions.
Let formula_62, and formula_63 be an equivalence relation. Some key definitions and terminology follow:
Equivalence class.
A subset formula_64 of formula_13 such that formula_7 holds for all formula_0 and formula_5 in formula_64, and never for formula_0 in formula_64 and formula_5 outside formula_64, is called an equivalence class of formula_13 by formula_63. Let formula_65 denote the equivalence class to which formula_0 belongs. All elements of formula_13 equivalent to each other are also elements of the same equivalence class.
Quotient set.
The set of all equivalence classes of formula_13 by formula_66 denoted formula_67 is the quotient set of formula_13 by formula_68 If formula_13 is a topological space, there is a natural way of transforming formula_69 into a topological space; see "Quotient space" for the details.
Projection.
The projection of formula_12 is the function formula_70 defined by formula_71 which maps elements of formula_13 into their respective equivalence classes by formula_54
Theorem on projections: Let the function formula_72 be such that if formula_7 then formula_73 Then there is a unique function formula_74 such that formula_75 If formula_43 is a surjection and formula_76 then formula_77 is a bijection.
Equivalence kernel.
The equivalence kernel of a function formula_43 is the equivalence relation ~ defined by formula_78 The equivalence kernel of an injection is the identity relation.
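As a small added illustration, with a hypothetical function f given by string length, the classes of the equivalence kernel of f are exactly the fibers of f:

```python
from collections import defaultdict

# Hypothetical f(x) = len(x) on a small set of strings;
# x ~ y iff f(x) == f(y), so each class is one fiber of f.
words = ["ant", "bee", "cat", "horse", "zebra", "ox"]
fibers = defaultdict(list)
for w in words:
    fibers[len(w)].append(w)

print(dict(fibers))   # {3: ['ant', 'bee', 'cat'], 5: ['horse', 'zebra'], 2: ['ox']}
```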
Partition.
A partition of "X" is a set "P" of nonempty subsets of "X", such that every element of "X" is an element of a single element of "P". Each element of "P" is a "cell" of the partition. Moreover, the elements of "P" are pairwise disjoint and their union is "X".
Counting partitions.
Let "X" be a finite set with "n" elements. Since every equivalence relation over "X" corresponds to a partition of "X", and vice versa, the number of equivalence relations on "X" equals the number of distinct partitions of "X", which is the "n"th Bell number "Bn":
formula_79 (Dobinski's formula).
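The correspondence between equivalence relations and partitions can be checked computationally for small "n". The following sketch (illustrative only) computes Bell numbers from the standard recurrence B(m+1) = sum over k of C(m, k)·B(k), rather than from Dobinski's formula, and compares them with a brute-force count of reflexive, symmetric and transitive relations:

```python
from itertools import product
from math import comb

def bell(n):
    """Bell numbers via the recurrence B_{m+1} = sum_k C(m, k) * B_k."""
    B = [1]                                   # B_0 = 1
    for m in range(n):
        B.append(sum(comb(m, k) * B[k] for k in range(m + 1)))
    return B[n]

def count_equivalence_relations(n):
    """Brute force: count reflexive, symmetric, transitive relations on {0, ..., n-1}."""
    X = range(n)
    pairs = [(i, j) for i in X for j in X if i < j]   # reflexivity and symmetry enforced below
    count = 0
    for choice in product([False, True], repeat=len(pairs)):
        rel = {(i, i) for i in X}
        for (i, j), keep in zip(pairs, choice):
            if keep:
                rel |= {(i, j), (j, i)}
        if all((a, d) in rel for (a, b) in rel for (c, d) in rel if b == c):
            count += 1
    return count

for n in range(5):
    print(n, bell(n), count_equivalence_relations(n))   # both columns: 1, 1, 2, 5, 15
```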
Fundamental theorem of equivalence relations.
A key result links equivalence relations and partitions:
In both cases, the cells of the partition of "X" are the equivalence classes of "X" by ~. Since each element of "X" belongs to a unique cell of any partition of "X", and since each cell of the partition is identical to an equivalence class of "X" by ~, each element of "X" belongs to a unique equivalence class of "X" by ~. Thus there is a natural bijection between the set of all equivalence relations on "X" and the set of all partitions of "X".
Comparing equivalence relations.
If formula_63 and formula_80 are two equivalence relations on the same set formula_81, and formula_7 implies formula_82 for all formula_83 then formula_80 is said to be a coarser relation than formula_63, and formula_63 is a finer relation than formula_80. Equivalently,
The equality equivalence relation is the finest equivalence relation on any set, while the universal relation, which relates all pairs of elements, is the coarsest.
The relation "formula_63 is finer than formula_80" on the collection of all equivalence relations on a fixed set is itself a partial order relation, which makes the collection a geometric lattice.
Generating equivalence relations.
Given any binary relation formula_8 on formula_13, the equivalence relation generated by formula_8 is the smallest equivalence relation containing formula_8. Explicitly, formula_7 if there exists a natural number formula_41 and elements formula_88 such that formula_89, formula_90, and formula_91 or formula_92, for formula_93
The equivalence relation generated in this manner can be trivial. For instance, the equivalence relation generated by any total order on "X" has exactly one equivalence class, "X" itself.
Algebraic structure.
Much of mathematics is grounded in the study of equivalences, and order relations. Lattice theory captures the mathematical structure of order relations. Even though equivalence relations are as ubiquitous in mathematics as order relations, the algebraic structure of equivalences is not as well known as that of orders. The former structure draws primarily on group theory and, to a lesser extent, on the theory of lattices, categories, and groupoids.
Group theory.
Just as order relations are grounded in ordered sets, sets closed under pairwise supremum and infimum, equivalence relations are grounded in partitioned sets, which are sets closed under bijections that preserve partition structure. Since all such bijections map an equivalence class onto itself, such bijections are also known as permutations. Hence permutation groups (also known as transformation groups) and the related notion of orbit shed light on the mathematical structure of equivalence relations.
Let '~' denote an equivalence relation over some nonempty set "A", called the universe or underlying set. Let "G" denote the set of bijective functions over "A" that preserve the partition structure of "A", meaning that for all formula_99 and formula_100 Then the following three connected theorems hold:
In sum, given an equivalence relation ~ over "A", there exists a transformation group "G" over "A" whose orbits are the equivalence classes of "A" under ~.
This transformation group characterisation of equivalence relations differs fundamentally from the way lattices characterize order relations. The arguments of the lattice theory operations meet and join are elements of some universe "A". Meanwhile, the arguments of the transformation group operations composition and inverse are elements of a set of bijections, "A" → "A".
Moving to groups in general, let "H" be a subgroup of some group "G". Let ~ be an equivalence relation on "G", such that formula_101 The equivalence classes of ~—also called the orbits of the action of "H" on "G"—are the right cosets of "H" in "G". Interchanging "a" and "b" yields the left cosets.
Related thinking can be found in Rosen (2008: chpt. 10).
Categories and groupoids.
Let "G" be a set and let "~" denote an equivalence relation over "G". Then we can form a groupoid representing this equivalence relation as follows. The objects are the elements of "G", and for any two elements "x" and "y" of "G", there exists a unique morphism from "x" to "y" if and only if formula_102
The advantages of regarding an equivalence relation as a special case of a groupoid include:
Lattices.
The equivalence relations on any set "X", when ordered by set inclusion, form a complete lattice, called Con "X" by convention. The canonical map ker : "X"^"X" → Con "X" relates the monoid "X"^"X" of all functions on "X" and Con "X". ker is surjective but not injective. Less formally, the equivalence relation ker on "X" takes each function "f" : "X" → "X" to its kernel ker "f". Likewise, ker(ker) is an equivalence relation on "X"^"X".
Equivalence relations and mathematical logic.
Equivalence relations are a ready source of examples or counterexamples. For example, an equivalence relation with exactly two infinite equivalence classes is an easy example of a theory which is ω-categorical, but not categorical for any larger cardinal number.
An implication of model theory is that the properties defining a relation can be proved independent of each other (and hence necessary parts of the definition) if and only if, for each property, examples can be found of relations not satisfying the given property while satisfying all the other properties. Hence the three defining properties of equivalence relations can be proved mutually independent by the following three examples:
Properties definable in first-order logic that an equivalence relation may or may not possess include:
Notes.
| [
{
"math_id": 0,
"text": "a"
},
{
"math_id": 1,
"text": "a = b"
},
{
"math_id": 2,
"text": "b = a"
},
{
"math_id": 3,
"text": "b = c"
},
{
"math_id": 4,
"text": "a = c"
},
{
"math_id": 5,
"text": "b"
},
{
"math_id": 6,
"text": "R;"
},
{
"math_id": 7,
"text": "a \\sim b"
},
{
"math_id": 8,
"text": "R"
},
{
"math_id": 9,
"text": "a \\sim_R b"
},
{
"math_id": 10,
"text": "{a\\mathop{R}b}"
},
{
"math_id": 11,
"text": "a \\not\\equiv b"
},
{
"math_id": 12,
"text": "\\,\\sim\\,"
},
{
"math_id": 13,
"text": "X"
},
{
"math_id": 14,
"text": "a, b,"
},
{
"math_id": 15,
"text": "c"
},
{
"math_id": 16,
"text": "X:"
},
{
"math_id": 17,
"text": "a \\sim a"
},
{
"math_id": 18,
"text": "b \\sim a"
},
{
"math_id": 19,
"text": "b \\sim c"
},
{
"math_id": 20,
"text": "a \\sim c"
},
{
"math_id": 21,
"text": "\\,\\sim,"
},
{
"math_id": 22,
"text": "[a],"
},
{
"math_id": 23,
"text": "[a] = \\{x \\in X : x \\sim a\\}."
},
{
"math_id": 24,
"text": "R\\subseteq X\\times Y"
},
{
"math_id": 25,
"text": "S\\subseteq Y\\times Z"
},
{
"math_id": 26,
"text": "SR\\subseteq X\\times Z"
},
{
"math_id": 27,
"text": "x \\, SR \\, z"
},
{
"math_id": 28,
"text": "y\\in Y"
},
{
"math_id": 29,
"text": "x \\, R \\, y"
},
{
"math_id": 30,
"text": "y \\, S \\, z"
},
{
"math_id": 31,
"text": "\\operatorname{id} \\subseteq R"
},
{
"math_id": 32,
"text": "\\operatorname{id}"
},
{
"math_id": 33,
"text": "R=R^{-1}"
},
{
"math_id": 34,
"text": "RR\\subseteq R"
},
{
"math_id": 35,
"text": "X = \\{a, b, c\\}"
},
{
"math_id": 36,
"text": "R = \\{(a, a), (b, b), (c, c), (b, c), (c, b)\\}"
},
{
"math_id": 37,
"text": "[a] = \\{a\\}, ~~~~ [b] = [c] = \\{b, c\\}."
},
{
"math_id": 38,
"text": "\\{\\{a\\}, \\{b, c\\}\\}."
},
{
"math_id": 39,
"text": "\\tfrac{1}{2}"
},
{
"math_id": 40,
"text": "\\tfrac{4}{8}."
},
{
"math_id": 41,
"text": "n"
},
{
"math_id": 42,
"text": "f:X \\to Y"
},
{
"math_id": 43,
"text": "f"
},
{
"math_id": 44,
"text": "0"
},
{
"math_id": 45,
"text": "\\pi"
},
{
"math_id": 46,
"text": "\\sin"
},
{
"math_id": 47,
"text": "a,"
},
{
"math_id": 48,
"text": "b \\text{ such that } a \\sim b."
},
{
"math_id": 49,
"text": "X,"
},
{
"math_id": 50,
"text": "P(x)"
},
{
"math_id": 51,
"text": "x \\sim y,"
},
{
"math_id": 52,
"text": "P(y)"
},
{
"math_id": 53,
"text": "P"
},
{
"math_id": 54,
"text": "\\,\\sim."
},
{
"math_id": 55,
"text": "Y;"
},
{
"math_id": 56,
"text": "x_1 \\sim x_2"
},
{
"math_id": 57,
"text": "f\\left(x_1\\right) = f\\left(x_2\\right)"
},
{
"math_id": 58,
"text": "\\,\\sim"
},
{
"math_id": 59,
"text": "\\,\\sim_A"
},
{
"math_id": 60,
"text": "\\,\\sim_B"
},
{
"math_id": 61,
"text": "\\,\\sim_B."
},
{
"math_id": 62,
"text": "a, b \\in X"
},
{
"math_id": 63,
"text": "\\sim"
},
{
"math_id": 64,
"text": "Y"
},
{
"math_id": 65,
"text": "[a] := \\{x \\in X : a \\sim x\\}"
},
{
"math_id": 66,
"text": "\\sim,"
},
{
"math_id": 67,
"text": "X / \\mathord{\\sim} := \\{[x] : x \\in X\\},"
},
{
"math_id": 68,
"text": "\\sim."
},
{
"math_id": 69,
"text": "X / \\sim"
},
{
"math_id": 70,
"text": "\\pi : X \\to X/\\mathord{\\sim}"
},
{
"math_id": 71,
"text": "\\pi(x) = [x]"
},
{
"math_id": 72,
"text": "f : X \\to B"
},
{
"math_id": 73,
"text": "f(a) = f(b)."
},
{
"math_id": 74,
"text": "g : X / \\sim \\to B"
},
{
"math_id": 75,
"text": "f = g \\pi."
},
{
"math_id": 76,
"text": "a \\sim b \\text{ if and only if } f(a) = f(b),"
},
{
"math_id": 77,
"text": "g"
},
{
"math_id": 78,
"text": "x \\sim y \\text{ if and only if } f(x) = f(y)."
},
{
"math_id": 79,
"text": "B_n = \\frac{1}{e} \\sum_{k=0}^\\infty \\frac{k^n}{k!} \\quad"
},
{
"math_id": 80,
"text": "\\approx"
},
{
"math_id": 81,
"text": "S"
},
{
"math_id": 82,
"text": "a \\approx b"
},
{
"math_id": 83,
"text": "a, b \\in S,"
},
{
"math_id": 84,
"text": "[X \\to X]"
},
{
"math_id": 85,
"text": "X \\to X"
},
{
"math_id": 86,
"text": "\\pi : X \\to X / \\sim."
},
{
"math_id": 87,
"text": "X \\times X"
},
{
"math_id": 88,
"text": "x_0, \\ldots, x_n \\in X"
},
{
"math_id": 89,
"text": "a = x_0"
},
{
"math_id": 90,
"text": "b = x_n"
},
{
"math_id": 91,
"text": "x_{i-1} \\mathrel{R} x_i"
},
{
"math_id": 92,
"text": "x_i \\mathrel{R} x_{i-1}"
},
{
"math_id": 93,
"text": "i = 1, \\ldots, n."
},
{
"math_id": 94,
"text": "[0, 1] \\times [0, 1],"
},
{
"math_id": 95,
"text": "(a, 0) \\sim (a, 1)"
},
{
"math_id": 96,
"text": "a \\in [0, 1]"
},
{
"math_id": 97,
"text": "(0, b) \\sim (1, b)"
},
{
"math_id": 98,
"text": "b \\in [0, 1],"
},
{
"math_id": 99,
"text": "x \\in A"
},
{
"math_id": 100,
"text": "g \\in G, g(x) \\in [x]."
},
{
"math_id": 101,
"text": "a \\sim b \\text{ if and only if } a b^{-1} \\in H."
},
{
"math_id": 102,
"text": "x \\sim y."
}
] | https://en.wikipedia.org/wiki?curid=9259 |
9260 | Equivalence class | Mathematical concept
In mathematics, when the elements of some set formula_0 have a notion of equivalence (formalized as an equivalence relation), then one may naturally split the set formula_0 into equivalence classes. These equivalence classes are constructed so that elements formula_1 and formula_2 belong to the same equivalence class if, and only if, they are equivalent.
Formally, given a set formula_0 and an equivalence relation formula_3 on formula_4 the equivalence class of an element formula_1 in formula_0 is denoted formula_5 or, equivalently, formula_6 to emphasize its equivalence relation formula_7 The definition of equivalence relations implies that the equivalence classes form a partition of formula_8 meaning that every element of the set belongs to exactly one equivalence class.
The set of the equivalence classes is sometimes called the quotient set or the quotient space of formula_0 by formula_9 and is denoted by formula_10
When the set formula_0 has some structure (such as a group operation or a topology) and the equivalence relation formula_3 is compatible with this structure, the quotient set often inherits a similar structure from its parent set. Examples include quotient spaces in linear algebra, quotient spaces in topology, quotient groups, homogeneous spaces, quotient rings, quotient monoids, and quotient categories.
Definition and notation.
An equivalence relation on a set formula_11 is a binary relation formula_3 on formula_11 satisfying the three properties: formula_12 for all formula_13 (reflexivity); formula_14 implies formula_15 for all formula_16 (symmetry); and if formula_14 and formula_17 then formula_18 for all formula_19 (transitivity).
The equivalence class of an element formula_1 is defined as
formula_20
The word "class" in the term "equivalence class" may generally be considered as a synonym of "set", although some equivalence classes are not sets but proper classes. For example, "being isomorphic" is an equivalence relation on groups, and the equivalence classes, called isomorphism classes, are not sets.
The set of all equivalence classes in formula_11 with respect to an equivalence relation formula_21 is denoted as formula_22 and is called formula_11 modulo formula_21 (or the quotient set of formula_11 by formula_21). The surjective map formula_23 from formula_11 onto formula_22 which maps each element to its equivalence class, is called the canonical surjection, or the canonical projection.
Every element of an equivalence class characterizes the class, and may be used to "represent" it. When such an element is chosen, it is called a representative of the class. The choice of a representative in each class defines an injection from formula_24 to X. Since its composition with the canonical surjection is the identity of formula_22 such an injection is called a section, when using the terminology of category theory.
Sometimes, there is a section that is more "natural" than the other ones. In this case, the representatives are called canonical representatives. For example, in modular arithmetic, for every integer m greater than 1, the congruence modulo m is an equivalence relation on the integers, for which two integers a and b are equivalent—in this case, one says "congruent"—if m divides formula_25 this is denoted formula_26 Each class contains a unique non-negative integer smaller than formula_27 and these integers are the canonical representatives.
The use of representatives for representing classes allows one to avoid considering classes explicitly as sets. In this case, the canonical surjection that maps an element to its class is replaced by the function that maps an element to the representative of its class. In the preceding example, this function is denoted formula_28 and produces the remainder of the Euclidean division of a by m.
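For instance, with m = 4 the canonical representatives can be computed directly; the short sketch below (illustrative only, with an arbitrary sample of integers) groups integers by the remainder of Euclidean division, which Python's % operator returns in the range 0 to m - 1 even for negative arguments:

```python
m = 4
classes = {}
for a in range(-8, 9):                        # arbitrary sample of integers
    classes.setdefault(a % m, []).append(a)   # a % m is the canonical representative

for representative, members in sorted(classes.items()):
    print(f"class of {representative} contains {members}")
```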
Properties.
Every element formula_29 of formula_11 is a member of the equivalence class formula_30 Every two equivalence classes formula_31 and formula_32 are either equal or disjoint. Therefore, the set of all equivalence classes of formula_11 forms a partition of formula_11: every element of formula_11 belongs to one and only one equivalence class. Conversely, every partition of formula_11 comes from an equivalence relation in this way, according to which formula_33 if and only if formula_29 and formula_34 belong to the same set of the partition.
It follows from the properties in the previous section that if formula_3 is an equivalence relation on a set formula_35 and formula_29 and formula_34 are two elements of formula_35 the following statements are equivalent: formula_33, formula_36, and formula_37
Graphical representation.
An undirected graph may be associated to any symmetric relation on a set formula_35 where the vertices are the elements of formula_35 and two vertices formula_53 and formula_54 are joined if and only if formula_55 Among these graphs are the graphs of equivalence relations. These graphs, called cluster graphs, are characterized as the graphs such that the connected components are cliques.
Invariants.
If formula_3 is an equivalence relation on formula_35 and formula_56 is a property of elements of formula_11 such that whenever formula_57 formula_56 is true if formula_58 is true, then the property formula_59 is said to be an invariant of formula_9 or well-defined under the relation formula_60
A frequent particular case occurs when formula_61 is a function from formula_11 to another set formula_62; if formula_63 whenever formula_64 then formula_61 is said to be class invariant under formula_9 or simply invariant under formula_60 This occurs, for example, in the character theory of finite groups. Some authors use "compatible with formula_3" or just "respects formula_3" instead of "invariant under formula_3".
Any function formula_65 is "class invariant under" formula_9 according to which formula_66 if and only if formula_67 The equivalence class of formula_29 is the set of all elements in formula_11 which get mapped to formula_68 that is, the class formula_31 is the inverse image of formula_69 This equivalence relation is known as the kernel of formula_70
More generally, a function may map equivalent arguments (under an equivalence relation formula_71 on formula_11) to equivalent values (under an equivalence relation formula_72 on formula_62). Such a function is a morphism of sets equipped with an equivalence relation.
Quotient space in topology.
In topology, a quotient space is a topological space formed on the set of equivalence classes of an equivalence relation on a topological space, using the original space's topology to create the topology on the set of equivalence classes.
In abstract algebra, congruence relations on the underlying set of an algebra allow the algebra to induce an algebra on the equivalence classes of the relation, called a quotient algebra. In linear algebra, a quotient space is a vector space formed by taking a quotient group, where the quotient homomorphism is a linear map. By extension, in abstract algebra, the term quotient space may be used for quotient modules, quotient rings, quotient groups, or any quotient algebra. However, the use of the term for the more general cases can as often be by analogy with the orbits of a group action.
The orbits of a group action on a set may be called the quotient space of the action on the set, particularly when the orbits of the group action are the right cosets of a subgroup of a group, which arise from the action of the subgroup on the group by left translations, or respectively the left cosets as orbits under right translation.
A normal subgroup of a topological group, acting on the group by translation action, is a quotient space in the senses of topology, abstract algebra, and group actions simultaneously.
Although the term can be used for any equivalence relation's set of equivalence classes, possibly with further structure, the intent of using the term is generally to compare that type of equivalence relation on a set formula_35 either to an equivalence relation that induces some structure on the set of equivalence classes from a structure of the same kind on formula_35 or to the orbits of a group action. Both the sense of a structure preserved by an equivalence relation, and the study of invariants under group actions, lead to the definition of invariants of equivalence relations given above.
Notes.
| [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "a"
},
{
"math_id": 2,
"text": "b"
},
{
"math_id": 3,
"text": "\\,\\sim\\,"
},
{
"math_id": 4,
"text": "S,"
},
{
"math_id": 5,
"text": "[a]"
},
{
"math_id": 6,
"text": "[a]_{\\sim}"
},
{
"math_id": 7,
"text": "\\sim."
},
{
"math_id": 8,
"text": "S,\n"
},
{
"math_id": 9,
"text": "\\,\\sim\\,,"
},
{
"math_id": 10,
"text": "S /{\\sim}."
},
{
"math_id": 11,
"text": "X"
},
{
"math_id": 12,
"text": "a \\sim a"
},
{
"math_id": 13,
"text": "a \\in X"
},
{
"math_id": 14,
"text": "a \\sim b"
},
{
"math_id": 15,
"text": "b \\sim a"
},
{
"math_id": 16,
"text": "a, b \\in X"
},
{
"math_id": 17,
"text": "b \\sim c"
},
{
"math_id": 18,
"text": "a \\sim c"
},
{
"math_id": 19,
"text": "a, b, c \\in X"
},
{
"math_id": 20,
"text": "[a] = \\{ x \\in X : a \\sim x \\}."
},
{
"math_id": 21,
"text": "R"
},
{
"math_id": 22,
"text": "X / R,"
},
{
"math_id": 23,
"text": "x \\mapsto [x]"
},
{
"math_id": 24,
"text": "X / R"
},
{
"math_id": 25,
"text": "a-b;"
},
{
"math_id": 26,
"text": "a\\equiv b \\pmod m."
},
{
"math_id": 27,
"text": "m,"
},
{
"math_id": 28,
"text": "a \\bmod m,"
},
{
"math_id": 29,
"text": "x"
},
{
"math_id": 30,
"text": "[x]."
},
{
"math_id": 31,
"text": "[x]"
},
{
"math_id": 32,
"text": "[y]"
},
{
"math_id": 33,
"text": "x \\sim y"
},
{
"math_id": 34,
"text": "y"
},
{
"math_id": 35,
"text": "X,"
},
{
"math_id": 36,
"text": "[x] = [y]"
},
{
"math_id": 37,
"text": "[x] \\cap [y] \\ne \\emptyset."
},
{
"math_id": 38,
"text": "A,"
},
{
"math_id": 39,
"text": "A."
},
{
"math_id": 40,
"text": "\\Z,"
},
{
"math_id": 41,
"text": "x - y"
},
{
"math_id": 42,
"text": "[7], [9],"
},
{
"math_id": 43,
"text": "[1]"
},
{
"math_id": 44,
"text": "\\Z /{\\sim}."
},
{
"math_id": 45,
"text": "(a, b)"
},
{
"math_id": 46,
"text": "b,"
},
{
"math_id": 47,
"text": "(a, b) \\sim (c, d)"
},
{
"math_id": 48,
"text": "a d = b c,"
},
{
"math_id": 49,
"text": "a / b,"
},
{
"math_id": 50,
"text": "L \\sim M"
},
{
"math_id": 51,
"text": "L"
},
{
"math_id": 52,
"text": "M"
},
{
"math_id": 53,
"text": "s"
},
{
"math_id": 54,
"text": "t"
},
{
"math_id": 55,
"text": "s \\sim t."
},
{
"math_id": 56,
"text": "P(x)"
},
{
"math_id": 57,
"text": "x \\sim y,"
},
{
"math_id": 58,
"text": "P(y)"
},
{
"math_id": 59,
"text": "P"
},
{
"math_id": 60,
"text": "\\,\\sim."
},
{
"math_id": 61,
"text": "f"
},
{
"math_id": 62,
"text": "Y"
},
{
"math_id": 63,
"text": "f\\left(x_1\\right) = f\\left(x_2\\right)"
},
{
"math_id": 64,
"text": "x_1 \\sim x_2,"
},
{
"math_id": 65,
"text": "f : X \\to Y"
},
{
"math_id": 66,
"text": "x_1 \\sim x_2"
},
{
"math_id": 67,
"text": "f\\left(x_1\\right) = f\\left(x_2\\right)."
},
{
"math_id": 68,
"text": "f(x),"
},
{
"math_id": 69,
"text": "f(x)."
},
{
"math_id": 70,
"text": "f."
},
{
"math_id": 71,
"text": "\\sim_X"
},
{
"math_id": 72,
"text": "\\sim_Y"
}
] | https://en.wikipedia.org/wiki?curid=9260 |
9267447 | Gittins index | The Gittins index is a measure of the reward that can be achieved through a given stochastic process with certain properties, namely: the process has an ultimate termination state and evolves with an option, at each intermediate state, of terminating. Upon terminating at a given state, the reward achieved is the sum of the probabilistic expected rewards associated with every state from the actual terminating state to the ultimate terminal state, inclusive. The index is a real scalar.
Terminology.
To illustrate the theory we can take two examples from a developing sector, such as electricity generating technologies: wind power and wave power. If we are presented with the two technologies when they are both proposed as ideas we cannot say which will be better in the long run, as we have no data, as yet, to base our judgments on. It would be easy to say that wave power would be too problematic to develop, as it seems easier to put up many wind turbines than to make the long floating generators, tow them out to sea and lay the necessary cables.
If we were to make a judgment call at that early time in development we could be condemning one technology to being put on the shelf and the other would be developed and put into operation. If we develop both technologies we would be able to make a judgment call on each by comparing the progress of each technology at a set time interval such as every three months. The decisions we make about investment in the next stage would be based on those results.
In a paper in 1979 called "Bandit Processes and Dynamic Allocation Indices" John C. Gittins suggests a solution for problems such as this. He takes two basic problems, the "scheduling problem" and the "multi-armed bandit" problem, and shows how these problems can be solved using "dynamic allocation indices". He first takes the scheduling problem and reduces it to a machine which has to perform jobs and has a set time period, every hour or day for example, to finish each job in. The machine is given a reward value, based on finishing or not within the time period, and a probability value of whether it will finish or not is calculated for each job. The problem is "to decide which job to process next at each stage so as to maximize the total expected reward." He then moves on to the "multi-armed bandit" problem, where each pull on a "one-armed bandit" lever is allocated a reward function for a successful pull, and a zero reward for an unsuccessful pull. The sequence of successes forms a Bernoulli process and has an unknown probability of success. There are multiple "bandits" and the distribution of successful pulls is calculated and different for each machine. Gittins states that the problem here is "to decide which arm to pull next at each stage so as to maximize the total expected reward from an infinite sequence of pulls."
Gittins says that "Both the problems described above involve a sequence of decisions, each of which is based on more information than its predecessors, and both problems may be tackled by dynamic allocation indices."
Definition.
In applied mathematics, the "Gittins index" is a real scalar value associated to the state of a stochastic process with a reward function and with a probability of termination. It is a measure of the reward that can be achieved by the process evolving from that state on, under the probability that it will be terminated in future. The "index policy" induced by the Gittins index, consisting of choosing at any time the stochastic process with the currently highest Gittins index, is the solution of some "stopping problems" such as the one of dynamic allocation, where a decision-maker has to maximize the total reward by distributing a limited amount of effort to a number of competing projects, each returning a stochastic reward. If the projects are independent from each other and only one project at a time may evolve, the problem is called multi-armed bandit (one type of stochastic scheduling problem) and the Gittins index policy is optimal. If multiple projects can evolve, the problem is called "Restless bandit" and the Gittins index policy is a known good heuristic, but no optimal solution exists in general. In fact, in general this problem is NP-complete, and it is generally accepted that no computationally feasible general solution can be found.
History.
Questions about optimal stopping policies in the context of clinical trials have been open since the 1940s, and in the 1960s a few authors analyzed simple models leading to optimal index policies, but it was only in the 1970s that Gittins and his collaborators demonstrated in a Markovian framework that the optimal solution of the general case is an index policy whose "dynamic allocation index" is computable in principle for every state of each project as a function of the single project's dynamics. In parallel to Gittins, Martin Weitzman established the same result in the economics literature.
Soon after the seminal paper of Gittins, Peter Whittle demonstrated that the index emerges as a Lagrange multiplier from a dynamic programming formulation of the problem called "retirement process" and conjectured that the same index would be a good heuristic in a more general setup named "Restless bandit". The question of how to actually calculate the index for Markov chains was first addressed by Varaiya and his collaborators with an algorithm that computes the indexes from the largest first down to the smallest, and by Chen and Katehakis, who showed that standard LP could be used to calculate the index of a state without requiring its calculation for all states with higher index values.
LCM Kallenberg provided a parametric LP implementation to compute the indices for all states of a Markov chain. Further, Katehakis and Veinott demonstrated that the index is the expected reward of a Markov decision process constructed over the Markov chain and known as "Restart in State" and can be calculated exactly by solving that problem with the "policy iteration" algorithm, or approximately with the "value iteration" algorithm. This approach also has the advantage of calculating the index for one specific state without having to calculate all the greater indexes and it is valid under more general state space conditions. A faster algorithm for the calculation of all indices was obtained in 2004 by Sonin as a consequence of his "elimination algorithm" for the optimal stopping of a Markov chain. In this algorithm the termination probability of the process may depend on the current state rather than being a fixed factor. A faster algorithm was proposed in 2007 by Niño-Mora by exploiting the structure of a parametric simplex to reduce the computational effort of the pivot steps and thereby achieving the same complexity as the Gaussian elimination algorithm. Cowan and Katehakis (2014) provide a solution to the problem, with potentially non-Markovian, uncountable state space reward processes, under frameworks in which either the discount factors may be non-uniform and vary over time, or the periods of activation of each bandit may not be fixed or uniform, subject instead to a possibly stochastic duration of activation before a change to a different bandit is allowed. The solution is based on generalized restart-in-state indices.
Mathematical definition.
Dynamic allocation index.
The classical definition by Gittins et al. is:
formula_0
where formula_1 is a stochastic process, formula_2 is the utility (also called reward) associated to the discrete state formula_3, formula_4 is the probability that the stochastic process does not terminate, and formula_5 is the conditional expectation operator given "c":
formula_6
with formula_7 being the domain of "X".
Retirement process formulation.
The dynamic programming formulation in terms of retirement process, given by Whittle, is:
formula_8
where formula_9 is the "value function"
formula_10
with the same notation as above. It holds that
formula_11
Restart-in-state formulation.
If formula_1 is a Markov chain with rewards, the interpretation of Katehakis and Veinott (1987) associates to every state the action of restarting from one arbitrary state formula_3, thereby constructing a Markov decision process formula_12.
The Gittins index of that state formula_3 is the highest total reward which can be achieved on formula_12 if one can always choose to continue or restart from that state formula_3.
formula_13
where formula_14 indicates a policy over formula_12. It holds that
formula_15.
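The following sketch (an added illustration; the transition matrix, rewards and discount factor are arbitrary toy values) approximates the index of each state of a finite Markov chain by value iteration on the restart-in-state MDP described above, returning the index in the normalization ν(i) = (1 - β)·h(i):

```python
import numpy as np

def gittins_index(P, R, beta, i, n_iter=2000):
    """
    Gittins index of state i for a Markov chain with transition matrix P,
    reward vector R and discount factor beta, via value iteration on the
    restart-in-state MDP: in every state, either continue or restart from i.
    Returns (1 - beta) * h(i).
    """
    n = len(R)
    V = np.zeros(n)
    for _ in range(n_iter):
        continue_val = R + beta * P @ V          # keep evolving from each state j
        restart_val = R[i] + beta * P[i] @ V     # jump back to state i first
        V = np.maximum(continue_val, restart_val)
    return (1.0 - beta) * V[i]

# Toy two-state chain (purely illustrative numbers).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
R = np.array([1.0, 0.0])
beta = 0.9
print([round(gittins_index(P, R, beta, i), 4) for i in range(2)])
```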
Generalized index.
If the probability of survival formula_16 depends on the state formula_3, a generalization introduced by Sonin (2008) defines the Gittins index formula_17 as the maximum discounted total reward per chance of termination.
formula_18
where
formula_19
formula_20
If formula_21 is replaced by formula_22 in the definitions of formula_23, formula_24 and formula_25, then it holds that
formula_26
formula_27
this observation leads Sonin to conclude that formula_17 and not formula_23 is the "true meaning" of the Gittins index.
Queueing theory.
In queueing theory, the Gittins index is used to determine the optimal scheduling of jobs, e.g., in an M/G/1 queue. The mean completion time of jobs under a Gittins index schedule can be determined using the SOAP approach. Note that the dynamics of the queue are intrinsically Markovian, and stochasticity is due to the arrival and service processes. This is in contrast to most of the works in the learning literature, where stochasticity is explicitly accounted for through a noise term.
Fractional problems.
While conventional Gittins indices induce a policy to optimize the accrual of a reward, a common problem setting consists of optimizing the ratio of accrued rewards. For example, this is the case for systems that maximize bandwidth, consisting of data over time, or minimize power consumption, consisting of energy over time.
This class of problems is different from the optimization of a semi-Markov reward process, because the latter might select states with a disproportionate sojourn time just for accruing a higher reward. Instead, it corresponds to the class of linear-fractional Markov reward optimization problems.
However, a detrimental aspect of such ratio optimizations is that, once the achieved ratio in some state is high, the optimization might select states leading to a low ratio because they bear a high probability of termination, so that the process is likely to terminate before the ratio drops significantly. A problem setting to prevent such early terminations consists of defining the optimization as maximization of the future ratio seen by each state. An indexation is conjectured to exist for this problem, to be computable as a simple variation on existing restart-in-state or state elimination algorithms, and it has been evaluated to work well in practice. | [
{
"math_id": 0,
"text": "\\nu(i)=\\sup_{\\tau>0}\\frac{\n \\left\\langle\\sum_{t=0}^{\\tau-1}\\beta^t R[Z(t)]\\right\\rangle_{Z(0)=i}}{\n \\left\\langle\\sum_{t=0}^{\\tau-1}\\beta^t \\right\\rangle_{Z(0)=i}}\n "
},
{
"math_id": 1,
"text": "Z(\\cdot)"
},
{
"math_id": 2,
"text": "R(i)"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "\\beta<1"
},
{
"math_id": 5,
"text": "\\langle\\cdot\\rangle_c"
},
{
"math_id": 6,
"text": "\\langle X\\rangle_c \\doteq \\sum_{x\\in\\chi}x P\\{X=x|c\\}"
},
{
"math_id": 7,
"text": "\\chi"
},
{
"math_id": 8,
"text": "w(i)=\\inf\\{k:v(i,k)=k\\}"
},
{
"math_id": 9,
"text": "v(i,k)"
},
{
"math_id": 10,
"text": "v(i,k)=\\sup_{\\tau>0} \\left\\langle \\sum_{t=0}^{\\tau-1} \\beta^t R[Z(t)] + \\beta^t k \\right\\rangle_{Z(0) = i} "
},
{
"math_id": 11,
"text": "\\nu(i)=(1-\\beta)w(i)."
},
{
"math_id": 12,
"text": "M_i"
},
{
"math_id": 13,
"text": "h(i)=\\sup_\\pi \\left\\langle \\sum_{t=0}^{\\tau-1} \\beta^t R[Z^\\pi(t)] \\right\\rangle_{Z(0) = i}"
},
{
"math_id": 14,
"text": "\\pi"
},
{
"math_id": 15,
"text": "h(i)=w(i)"
},
{
"math_id": 16,
"text": "\\beta(i)"
},
{
"math_id": 17,
"text": "\\alpha(i)"
},
{
"math_id": 18,
"text": "\\alpha(i)=\\sup_{\\tau>0} \\frac{R^\\tau(i)}{Q^\\tau(i)}"
},
{
"math_id": 19,
"text": "R^\\tau(i)=\\left\\langle \\sum_{t=0}^{\\tau-1} R[Z(t)] \\right\\rangle_{Z(0) = i}"
},
{
"math_id": 20,
"text": "Q^\\tau(i)=\\left\\langle 1 - \\prod_{t=0}^{\\tau-1} \\beta[Z(t)] \\right\\rangle_{Z(0) = i}"
},
{
"math_id": 21,
"text": "\\beta^t"
},
{
"math_id": 22,
"text": "\\prod_{j=0}^{t-1} \\beta[Z(j)]"
},
{
"math_id": 23,
"text": "\\nu(i)"
},
{
"math_id": 24,
"text": "w(i)"
},
{
"math_id": 25,
"text": "h(i)"
},
{
"math_id": 26,
"text": "\\alpha(i)=h(i)=w(i)"
},
{
"math_id": 27,
"text": "\\alpha(i)\\neq k \\nu(i), \\forall k"
}
] | https://en.wikipedia.org/wiki?curid=9267447 |
9268061 | Novikov's condition | In probability theory, Novikov's condition is a sufficient condition for a stochastic process which takes the form of the Radon–Nikodym derivative in Girsanov's theorem to be a martingale. If satisfied together with other conditions, Girsanov's theorem may be applied to a Brownian motion stochastic process to change from the original measure to the new measure defined by the Radon–Nikodym derivative.
This condition was suggested and proved by Alexander Novikov. There are other results which may be used to show that the Radon–Nikodym derivative is a martingale, such as the more general Kazamaki's condition; however, Novikov's condition is the most well-known result.
Assume that formula_0 is a real-valued adapted process on the probability space formula_1 and formula_2 is an adapted Brownian motion.
If the condition
formula_3
is fulfilled then the process
formula_4
is a martingale under the probability measure formula_5 and the filtration formula_6. Here formula_7 denotes the Doléans-Dade exponential.
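As a numerical illustration (not part of the statement above), take the bounded process X = 1, for which Novikov's condition holds trivially; the Monte Carlo sketch below then shows the sample mean of the Doléans-Dade exponential staying near 1 at several times, as expected of a martingale started at 1. All simulation parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 200, 20_000
dt = T / n_steps

# With X_t = 1 the stochastic integral is just W_t, so no discretisation bias enters.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = dW.cumsum(axis=1)                  # int_0^t X_s dW_s with X = 1
t = dt * np.arange(1, n_steps + 1)
Z = np.exp(W - 0.5 * t)                # Doleans-Dade exponential of int X dW
print(Z[:, [n_steps // 4, n_steps // 2, -1]].mean(axis=0))   # each entry close to 1
```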
References.
| [
{
"math_id": 0,
"text": " (X_t)_{0\\leq t\\leq T}"
},
{
"math_id": 1,
"text": " \\left (\\Omega, (\\mathcal{F}_t), \\mathbb{P}\\right) "
},
{
"math_id": 2,
"text": "(W_t)_{0\\leq t\\leq T}"
},
{
"math_id": 3,
"text": " \n\\mathbb{E}\\left[e^{\\frac12\\int_0^T|X|_t^2\\,dt} \\right]<\\infty\n"
},
{
"math_id": 4,
"text": "\n \\ \\mathcal{E}\\left( \\int_0^t X_s \\; dW_s \\right) \\ = e^{\\int_0^t X_s\\, dW_s -\\frac{1}{2}\\int_0^t X_s^2\\, ds},\\quad 0\\leq t\\leq T\n"
},
{
"math_id": 5,
"text": "\\mathbb{P}"
},
{
"math_id": 6,
"text": "\\mathcal{F}"
},
{
"math_id": 7,
"text": "\\mathcal{E}"
}
] | https://en.wikipedia.org/wiki?curid=9268061 |
9268243 | Dihydropteroate synthase | Class of enzymes
Dihydropteroate synthase (DHPS) is an enzyme classified under EC 2.5.1.15. It produces dihydropteroate in bacteria, but it is not expressed in most eukaryotes including humans. This makes it a useful target for sulfonamide antibiotics, which compete with the PABA precursor.
All organisms require reduced folate cofactors for the synthesis of a variety of metabolites. Most microorganisms must synthesize folate de novo because they lack the active transport system of higher vertebrate cells that allows these organisms to use dietary folates. Proteins containing this domain include dihydropteroate synthase (EC 2.5.1.15) as well as a group of methyltransferase enzymes including methyltetrahydrofolate, corrinoid iron-sulphur protein methyltransferase (MeTr) that catalyses a key step in the Wood-Ljungdahl pathway of carbon dioxide fixation.
Dihydropteroate synthase (EC 2.5.1.15) (DHPS) catalyses the condensation of 6-hydroxymethyl-7,8-dihydropteridine pyrophosphate with para-aminobenzoic acid to form 7,8-dihydropteroate. This is the second step in the three-step pathway leading from 6-hydroxymethyl-7,8-dihydropterin to 7,8-dihydrofolate. DHPS is the target of sulfonamides, which are substrate analogues that compete with para-aminobenzoic acid. Bacterial DHPS (gene sul or folP) is a protein of about 275 to 315 amino acid residues that is either chromosomally encoded or found on various antibiotic resistance plasmids. In the fungus "Pneumocystis jirovecii" (previously "P. carinii") DHPS is the C-terminal domain of a multifunctional folate synthesis enzyme (gene fas).
References.
| [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=9268243 |
9268401 | Differential item functioning | Differential item functioning (DIF) is a statistical property of a test item that indicates how likely it is for individuals from distinct groups, possessing similar abilities, to respond differently to the item. It manifests when individuals from different groups, with comparable skill levels, do not have an equal likelihood of answering a question correctly. There are two primary types of DIF: uniform DIF, where one group consistently has an advantage over the other, and nonuniform DIF, where the advantage varies based on the individual's ability level. The presence of DIF requires review and judgment, but it does not always signify bias. DIF analysis provides an indication of unexpected behavior of items on a test. The DIF characteristic of an item is not solely determined by varying probabilities of selecting a specific response among individuals from different groups. Rather, DIF becomes pronounced when individuals from different groups, who possess the "same underlying true ability", exhibit differing probabilities of giving a certain response. Even when uniform bias is present, test developers sometimes resort to assumptions such as that DIF biases may offset each other, because of the extensive work required to address them, compromising test ethics and perpetuating systemic biases. Common procedures for assessing DIF are the Mantel-Haenszel procedure, logistic regression, item response theory (IRT) based methods, and confirmatory factor analysis (CFA) based methods.
Description.
DIF refers to differences in the functioning of items across groups, oftentimes demographic, which are matched on the latent trait or more generally the attribute being measured by the items or test. It is important to note that when examining items for DIF, the groups must be matched on the measured attribute, otherwise this may result in inaccurate detection of DIF. In order to create a general understanding of DIF or measurement bias, consider the following example offered by Osterlind and Everson (2009). In this case, Y refers to a response to a particular test item which is determined by the latent construct being measured. The latent construct of interest is referred to as theta (θ) where Y is an indicator of θ which can be arranged in terms of the probability distribution of Y on θ by the expression "f"(Y)|θ. Therefore, response Y is conditional on the latent trait (θ). Because DIF examines differences in the conditional probabilities of Y between groups, let us label the groups as the "reference" and "focal" groups. Although the designation does not matter, a typical practice in the literature is to designate the reference group as the group who is suspected to have an advantage while the focal group refers to the group anticipated to be disadvantaged by the test.[3] Therefore, given the functional relationship formula_0 and under the assumption that there are identical measurement error distributions for the reference and focal groups it can be concluded that under the null hypothesis:
formula_1
with G corresponding to the grouping variable, "r" the reference group, and "f" the focal group. This equation represents an instance where DIF is not present. In this case, the absence of DIF is determined by the fact that the conditional probability distribution of Y is not dependent on group membership. To illustrate, consider an item with response options 0 and 1, where Y = 0 indicates an incorrect response, and Y = 1 indicates a correct response. The probability of correctly responding to an item is the same for members of either group. This indicates that there is no DIF or item bias because members of the reference and focal group with the same underlying ability or attribute have the same probability of responding correctly. Therefore, there is no bias or disadvantage for one group over the other.
Consider the instance where the conditional probability of Y is not the same for the reference and focal groups. In other words, members of different groups with the same trait or ability level have unequal probability distributions on Y. Once controlling for θ, there is a clear dependency between group membership and performance on an item. For dichotomous items, this suggests that when the focal and reference groups are at the same location on θ, there is a different probability of getting a correct response or endorsing an item. Therefore, the group with the higher conditional probability of correctly responding to an item is the group advantaged by the test item. This suggests that the test item is biased and functions differently for the groups, therefore exhibits DIF.
It is important to draw the distinction between DIF or measurement bias and ordinary group differences. Whereas group differences indicate differing score distributions on Y, DIF explicitly involves conditioning on θ. For instance, consider the following equation:
formula_2
This indicates that an examinee's score depends on group membership, such that knowing group membership changes the probability of a correct response. Therefore, if the groups differ on θ and performance depends on θ, the above inequality can hold even in the absence of DIF, misleadingly suggesting item bias. For this reason, it is generally agreed in the measurement literature that differences on Y as a function of group membership alone are inadequate for establishing bias. In fact, differences on θ or ability are common between groups and form the basis for much research. Remember: to establish bias or DIF, groups must first be matched on θ and then shown to have differential probabilities on Y as a function of group membership.
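The distinction drawn here can be illustrated with a short simulation, shown below. The sketch is purely illustrative and rests on assumptions not taken from the sources above: a simple logistic response model that depends only on θ, normally distributed abilities, and arbitrary group labels. Because the item depends only on θ, the two groups show a large unconditional difference on Y (impact) but essentially no difference once examinees are matched on θ (no DIF).

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, b=0.0):
    """Probability of a correct response given ability theta; no group term, so no DIF."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# The groups differ in their ability distributions, but the item depends only on theta.
theta_ref = rng.normal(0.5, 1.0, 100_000)   # reference group: higher mean ability
theta_foc = rng.normal(-0.5, 1.0, 100_000)  # focal group: lower mean ability

y_ref = rng.random(theta_ref.size) < p_correct(theta_ref)
y_foc = rng.random(theta_foc.size) < p_correct(theta_foc)

# Unconditional difference on Y (ordinary group difference, i.e. impact):
print(y_ref.mean() - y_foc.mean())           # clearly non-zero

# Conditioning on theta: compare examinees matched in a narrow band around theta = 0.
band_ref = y_ref[np.abs(theta_ref) < 0.1]
band_foc = y_foc[np.abs(theta_foc) < 0.1]
print(band_ref.mean() - band_foc.mean())     # approximately zero, i.e. no DIF
```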
Forms.
Uniform DIF is the simplest type of DIF, in which the magnitude of conditional dependency is relatively invariant across the latent trait continuum (θ). The item of interest consistently gives one group an advantage across all levels of ability θ. Within an item response theory (IRT) framework, this would be evidenced when both item characteristic curves (ICCs) are equally discriminating yet exhibit differences in the difficulty parameters (i.e., "ar = af" and "br < bf") as depicted in Figure 1. However, nonuniform DIF presents an interesting case. Rather than a consistent advantage being given to the reference group across the ability continuum, the conditional dependency moves and changes direction at different locations on the θ continuum. For instance, an item may give the reference group a minor advantage at the lower end of the continuum and a major advantage at the higher end. Also, unlike uniform DIF, an item can simultaneously vary in discrimination for the two groups while also varying in difficulty (i.e., "ar ≠ af" and "br < bf"). Even more complex is "crossing" nonuniform DIF. As demonstrated in Figure 2, this occurs when an item gives an advantage to the reference group at one end of the θ continuum while favoring the focal group at the other end. Differences in ICCs indicate that examinees from the two groups with identical ability levels have unequal probabilities of correctly responding to an item. When the curves are different but do not intersect, this is evidence of uniform DIF. However, if the ICCs cross at any point along the θ scale, there is evidence of nonuniform DIF.
Procedures for detecting DIF.
Mantel-Haenszel.
A common procedure for detecting DIF is the Mantel-Haenszel (MH) approach. The MH procedure is a chi-squared contingency-table-based approach that examines differences between the reference and focal groups on all items of the test, one by one. The ability continuum, defined by total test scores, is divided into "k" intervals, which then serve as the basis for matching members of both groups. A 2 x 2 contingency table is used at each of the "k" intervals, comparing both groups on an individual item. The rows of the contingency table correspond to group membership (reference or focal) while the columns correspond to correct or incorrect responses. The following table presents the general form for a single item at the "k"th ability interval.
Odds ratio.
The next step in the calculation of the MH statistic is to use data from the contingency table to obtain an odds ratio for the two groups on the item of interest at a particular "k" interval. This is expressed in terms of "p" and "q", where "p" represents the proportion correct and "q" the proportion incorrect for both the reference (R) and focal (F) groups. For the MH procedure, the obtained odds ratio is represented by α, with possible values ranging from 0 to ∞. An α value of 1.0 indicates an absence of DIF and thus similar performance by both groups. Values greater than 1.0 suggest that the reference group outperformed or found the item less difficult than the focal group. On the other hand, if the obtained value is less than 1.0, this is an indication that the item was less difficult for the focal group.[8] Using variables from the contingency table above (where Ak and Bk denote the numbers of correct and incorrect responses for the reference group at interval "k", and Ck and Dk the corresponding counts for the focal group), the calculation is as follows:
α = (pRk / qRk) / (pFk / qFk)
= [(Ak / (Ak + Bk)) / (Bk / (Ak + Bk))] / [(Ck / (Ck + Dk)) / (Dk / (Ck + Dk))]
= (Ak / Bk) / (Ck / Dk)
= (AkDk) / (BkCk)
The above computation pertains to an individual item at a single ability interval. The population estimate α can be extended to reflect a common odds ratio across all ability intervals "k" for a specific item. The common odds ratio estimator is denoted αMH and can be computed by the following equation:
αMH = Σ(AkDk / Nk) / Σ(BkCk / Nk)
for all values of "k" and where Nk represents the total sample size at the "kth" interval.
The obtained αMH is often standardized through log transformation, centering the value around 0. The new transformed estimator MHD-DIF is computed as follows:
MHD-DIF = -2.35ln(αMH)
Thus an obtained value of 0 would indicate no DIF. In examining the equation, it is important to note that the minus sign changes the interpretation of values less than or greater than 0. Values less than 0 indicate a reference group advantage whereas values greater than 0 indicate an advantage for the focal group.
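The calculations above can be collected into a short routine. The sketch below is a minimal illustration rather than a reference implementation; the counts are hypothetical, and the variable names simply mirror Ak, Bk, Ck, Dk, and Nk from the formulas above.

```python
import numpy as np

def mantel_haenszel_dif(A, B, C, D):
    """
    A, B: correct/incorrect counts for the reference group at each ability interval k.
    C, D: correct/incorrect counts for the focal group at each ability interval k.
    Returns (alpha_MH, MH D-DIF).
    """
    A, B, C, D = map(np.asarray, (A, B, C, D))
    N = A + B + C + D                        # total sample size at each interval
    alpha_mh = np.sum(A * D / N) / np.sum(B * C / N)
    mh_d_dif = -2.35 * np.log(alpha_mh)      # log transformation centered at 0
    return alpha_mh, mh_d_dif

# Hypothetical counts over three ability intervals:
alpha, delta = mantel_haenszel_dif(A=[40, 55, 70], B=[20, 15, 10],
                                   C=[35, 50, 60], D=[25, 20, 20])
print(alpha, delta)   # alpha > 1 and delta < 0 would indicate a reference-group advantage
```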
Item response theory.
Item response theory (IRT) is another widely used method for assessing DIF. IRT allows for a critical examination of responses to particular items from a test or measure. As noted earlier, DIF examines the probability of correctly responding to or endorsing an item conditioned on the latent trait or ability. Because IRT examines the monotonic relationship between responses and the latent trait or ability, it is a fitting approach for examining DIF.
Three major advantages of using IRT in DIF detection are:
In relation to DIF, item parameter estimates are computed and graphically examined via item characteristic curves (ICCs), also referred to as trace lines or item response functions (IRFs). After examination of the ICCs and subsequent suspicion of DIF, statistical procedures are implemented to test differences between parameter estimates.
ICCs represent mathematical functions of the relationship between positioning on the latent trait continuum and the probability of giving a particular response. Figure 3 illustrates this relationship as a logistic function. Individuals lower on the latent trait or with less ability have a lower probability of getting a correct response or endorsing an item, especially as difficulty increases. Thus, those higher on the latent trait or in ability have a greater chance of a correct response or endorsing an item. For instance, on a depression inventory, highly depressed individuals would have a greater probability of endorsing an item than individuals with lower depression. Similarly, individuals with higher math ability have a greater probability of getting a math item correct than those with lesser ability. Another critical aspect of ICCs pertains to the inflection point. This is the point on the curve where the probability of a particular response is .5 and also represents the maximum value for the slope. This inflection point indicates where the probability of a correct response or endorsing an item becomes greater than 50%, except when a "c" parameter is greater than 0, which places the probability at the inflection point at (1 + "c")/2 (a description will follow below). The inflection point is determined by the difficulty of the item, which corresponds to values on the ability or latent trait continuum. Therefore, for an easy item, this inflection point may be lower on the ability continuum while for a difficult item it may be higher on the same scale.
Before presenting statistical procedures for testing differences of item parameters, it is important to first provide a general understanding of the different parameter estimation models and their associated parameters. These include the one-, two-, and three-parameter logistic (PL) models. All these models assume a single underlying latent trait or ability. All three of these models have an item difficulty parameter denoted "b". For the 1PL and 2PL models, the "b" parameter corresponds to the inflection point on the ability scale, as mentioned above. In the case of the 3PL model, the inflection corresponds to a probability of (1 + "c")/2, where "c" is a lower asymptote (discussed below). Difficulty values, in theory, can range from -∞ to +∞; however, in practice they rarely exceed ±3. Higher values are indicative of harder test items. Items exhibiting low "b" parameters are easy test items. Another parameter that is estimated is a discrimination parameter designated "a". This parameter pertains to an item's ability to discriminate among individuals. The "a" parameter is estimated in the 2PL and 3PL models. In the case of the 1PL model, this parameter is constrained to be equal between groups. In relation to ICCs, the "a" parameter is the slope of the ICC at the inflection point. As mentioned earlier, the slope is maximal at the inflection point. The "a" parameter, similar to the "b" parameter, can range from -∞ to +∞; however, typical values are less than 2. In this case, higher values indicate greater discrimination between individuals. The 3PL model has an additional parameter referred to as a "guessing" or pseudochance parameter and is denoted by "c". This corresponds to a lower asymptote, which essentially allows for the possibility of an individual to get a moderate or difficult item correct even if they are low in ability. Values for "c" range between 0 and 1 but typically fall below .3.
When applying statistical procedures to assess for DIF, the "a" and "b" parameters (discrimination and difficulty) are of particular interest. However, assume a 1PL model was used, where the "a" parameters are constrained to be equal for both groups leaving only the estimation of the "b" parameters. After examining the ICCs, there is an apparent difference in "b" parameters for both groups. Using a similar method to a Student's t-test, the next step is to determine if the difference in difficulty is statistically significant. Under the null hypothesis
H0: br = bf
Lord (1980) provides an easily computed and normally distributed test statistic.
d = (br - bf) / SE(br - bf)
The standard error of the difference between "b" parameters is calculated by
√([SE(br)]² + [SE(bf)]²)
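A minimal sketch of this test, assuming the difficulty estimates and their standard errors have already been obtained from an IRT calibration; all numbers shown are hypothetical.

```python
import math
from scipy.stats import norm

def lord_d(b_ref, b_foc, se_ref, se_foc):
    """Lord's d statistic for a difference in 1PL difficulty parameters."""
    se_diff = math.sqrt(se_ref**2 + se_foc**2)   # standard error of the difference
    d = (b_ref - b_foc) / se_diff
    p_value = 2 * norm.sf(abs(d))                # two-sided p-value from the normal distribution
    return d, p_value

print(lord_d(b_ref=0.20, b_foc=0.55, se_ref=0.08, se_foc=0.09))
```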
Wald statistic.
However, more often than not, a 2PL or 3PL model is more appropriate than a 1PL model, and thus both the "a" and "b" parameters should be tested for DIF. Lord (1980) proposed another method for testing differences in both the "a" and "b" parameters, where "c" parameters are constrained to be equal across groups. This test yields a Wald statistic which follows a chi-square distribution. In this case the null hypothesis being tested is
H0: ar = af and br = bf.
First, a 2 x 2 covariance matrix of the parameter estimates is calculated for each group; these are represented by Sr and Sf for the reference and focal groups, respectively. The covariance matrices are computed by inverting the obtained information matrices.
Next, the differences between the estimated parameters are put into a 2 x 1 vector, denoted by
V' = (ar - af, br - bf)
Next, covariance matrix S is estimated by summing Sr and Sf.
Using this information, the Wald statistic is computed as follows:
χ2 = V'S−1V
which is evaluated at 2 degrees of freedom.
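A minimal sketch of this computation, assuming the 2 x 2 covariance matrices have already been obtained by inverting the information matrices; the parameter estimates and matrices shown are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def lord_wald(est_ref, est_foc, S_ref, S_foc):
    """
    est_ref, est_foc: (a, b) estimates for the reference and focal groups.
    S_ref, S_foc: 2 x 2 covariance matrices of those estimates.
    Returns the Wald chi-square statistic and its p-value on 2 degrees of freedom.
    """
    v = np.asarray(est_ref) - np.asarray(est_foc)   # difference vector V
    S = np.asarray(S_ref) + np.asarray(S_foc)       # pooled covariance matrix
    chi_sq = float(v @ np.linalg.inv(S) @ v)        # V' S^-1 V
    return chi_sq, chi2.sf(chi_sq, df=2)

stat, p = lord_wald(est_ref=(1.10, 0.20), est_foc=(0.85, 0.55),
                    S_ref=[[0.010, 0.001], [0.001, 0.008]],
                    S_foc=[[0.012, 0.001], [0.001, 0.009]])
print(stat, p)
```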
Likelihood-ratio test.
The likelihood-ratio test is another IRT-based method for assessing DIF. This procedure involves comparing the ratio of the likelihoods of two models. Under the constrained model (Mc), item parameters are constrained to be equal, or invariant, between the reference and focal groups. Under the free model (Mv), item parameters are free to vary. The likelihood function under Mc is denoted (Lc) while the likelihood function under Mv is designated (Lv). The items constrained to be equal serve as anchor items for this procedure while items suspected of DIF are allowed to freely vary. By using anchor items and allowing remaining item parameters to vary, multiple items can be simultaneously assessed for DIF. However, if the likelihood ratio indicates potential DIF, an item-by-item analysis would be appropriate to determine which items, if not all, contain DIF. The likelihood ratio of the two models is computed by
G2 = 2ln[Lv / Lc]
Alternatively, the ratio can be expressed by
G2 = -2ln[Lc / Lv]
where the ratio is inverted (Lc over Lv) and the logarithm is multiplied by -2 rather than 2.
G2 approximately follows a chi-squared distribution, especially with larger samples. Therefore, it is evaluated with the degrees of freedom that correspond to the number of constraints necessary to derive the constrained model from the freely varying model. For instance, if a 2PL model is used and both "a" and "b" parameters are free to vary under Mv and these same two parameters are constrained under Mc, then the ratio is evaluated at 2 degrees of freedom.
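A minimal sketch of this evaluation, assuming the two log-likelihoods have already been obtained from an IRT estimation program; the values shown are hypothetical.

```python
from scipy.stats import chi2

def irt_likelihood_ratio(loglik_constrained, loglik_free, n_constraints):
    """G^2 = 2 ln(Lv / Lc) = 2 (ln Lv - ln Lc), evaluated on df = number of constraints."""
    g2 = 2.0 * (loglik_free - loglik_constrained)
    return g2, chi2.sf(g2, df=n_constraints)

# Hypothetical log-likelihoods for a 2PL item with both a and b constrained (df = 2):
print(irt_likelihood_ratio(loglik_constrained=-5312.4, loglik_free=-5306.9, n_constraints=2))
```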
Logistic regression.
Logistic regression approaches to DIF detection involve running a separate analysis for each item. The independent variables included in the analysis are group membership, an ability matching variable (typically a total score), and an interaction term between the two. The dependent variable of interest is the probability or likelihood of getting a correct response or endorsing an item. Because the outcome of interest is expressed in terms of probabilities, maximum likelihood estimation is the appropriate procedure. This set of variables can then be expressed by the following regression equation:
Y = β0 + β1M + β2G + β3MG
where β0 corresponds to the intercept or the probability of a response when M and G are equal to 0 with remaining βs corresponding to weight coefficients for each independent variable. The first independent variable, M, is the matching variable used to link individuals on ability, in this case a total test score, similar to that employed by the Mantel-Haenszel procedure. The group membership variable is denoted G and in the case of regression is represented through dummy coded variables. The final term MG corresponds to the interaction between the two above mentioned variables.
For this procedure, variables are entered hierarchically. Following the structure of the regression equation provided above, variables are entered by the following sequence: matching variable M, grouping variable G, and the interaction variable MG. Determination of DIF is made by evaluating the obtained chi-square statistic with 2 degrees of freedom. Additionally, parameter estimate significance is tested.
From the results of the logistic regression, DIF would be indicated if individuals matched on ability have significantly different probabilities of responding to an item and thus differing logistic regression curves. Conversely, if the curves for both groups are the same, then the item is unbiased and therefore DIF is not present. In terms of uniform and nonuniform DIF, if the intercepts and matching variable parameters for both groups are not equal, then there is evidence of uniform DIF. However, if there is a nonzero interaction parameter, this is an indication of nonuniform DIF.
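A rough sketch of this hierarchical procedure is shown below. It assumes the statsmodels package, and the simulated data, variable names, and effect sizes are illustrative assumptions rather than a prescribed implementation; the 2-degree-of-freedom chi-square compares the model containing only the matching variable against the model that adds the group and interaction terms.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def logistic_dif(y, total_score, group):
    """Hierarchical logistic-regression DIF test for one item (2 degrees of freedom)."""
    M, G = np.asarray(total_score, float), np.asarray(group, float)
    X_base = sm.add_constant(M)                                  # matching variable only
    X_full = sm.add_constant(np.column_stack([M, G, M * G]))     # adds group and interaction
    fit_base = sm.Logit(y, X_base).fit(disp=0)
    fit_full = sm.Logit(y, X_full).fit(disp=0)
    g2 = 2.0 * (fit_full.llf - fit_base.llf)                     # chi-square on 2 df
    return g2, chi2.sf(g2, df=2), fit_full.params                # params: const, M, G, MG

# Simulated example with uniform DIF injected through a group effect on the logit:
rng = np.random.default_rng(1)
n = 4000
group = rng.integers(0, 2, n)                                    # 0 = reference, 1 = focal
theta = rng.normal(0.0, 1.0, n)
total = theta + rng.normal(0.0, 0.3, n)                          # crude matching score
logit = 1.2 * theta - 0.4 * group                                # uniform disadvantage for focal group
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

print(logistic_dif(y, total, group))
```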
Considerations.
Sample size.
The first consideration pertains to issues of sample size, specifically with regard to the reference and focal groups. Prior to any analyses, information about the number of people in each group is typically known such as the number of males/females or members of ethnic/racial groups. However, the issue more closely revolves around whether the number of people per group is sufficient for there to be enough statistical power to identify DIF. In some instances such as ethnicity there may be evidence of unequal group sizes such that Whites represent a far larger group sample than each individual ethnic group being represented. Therefore, in such instances, it may be appropriate to modify or adjust data so that the groups being compared for DIF are in fact equal or closer in size. Dummy coding or recoding is a common practice employed to adjust for disparities in the size of the reference and focal group. In this case, all Non-White ethnic groups can be grouped together in order to have a relatively equal sample size for the reference and focal groups. This would allow for a "majority/minority" comparison of item functioning. If modifications are not made and DIF procedures are carried out, there may not be enough statistical power to identify DIF even if DIF exists between groups.
Another issue that pertains to sample size directly relates to the statistical procedure being used to detect DIF. Aside from sample size considerations of the reference and focal groups, certain characteristics of the sample itself must be met to comply with assumptions of each statistical test utilized in DIF detection. For instance, using IRT approaches may require larger samples than required for the Mantel-Haenszel procedure. This is important, as investigation of group size may direct one toward using one procedure over another. Within the logistic regression approach, high-leverage values and outliers are of particular concern and must be examined prior to DIF detection. Additionally, as with all analyses, statistical test assumptions must be met. Some procedures are more robust to minor violations, while others are less so. Thus, the distributional nature of sample responses should be investigated prior to implementing any DIF procedures.
Items.
The number of items being used for DIF detection must also be considered. No standard exists as to how many items should be used, as this changes from study to study. In some cases it may be appropriate to test all items for DIF, whereas in others it may not be necessary. If only certain items are suspected of DIF with adequate reasoning, then it may be more appropriate to test those items and not the entire set. However, oftentimes it is difficult to simply assume which items may be problematic. For this reason, it is often recommended to simultaneously examine all test items for DIF. This will provide information about all items, shedding light on problematic items as well as those that function similarly for both the reference and focal groups. With regard to statistical tests, some procedures such as IRT-Likelihood Ratio testing require the use of anchor items. Some items are constrained to be equal across groups while items suspected of DIF are allowed to freely vary. In this instance, only a subset would be identified as DIF items while the rest would serve as a comparison group for DIF detection. Once DIF items are identified, the anchor items can also be analyzed by then constraining the original DIF items and allowing the original anchor items to freely vary. Thus it seems that testing all items simultaneously may be a more efficient procedure. However, as noted, depending on the procedure implemented, different methods for selecting DIF items are used.
Aside from identifying the number of items being used in DIF detection, of additional importance is determining the number of items on the entire test or measure itself. The typical recommendation as noted by Zumbo (1999) is to have a minimum of 20 items. The reasoning for a minimum of 20 items directly relates to the formation of matching criteria. As noted in earlier sections, a total test score is typically used as a method for matching individuals on ability. The total test score is normally divided into 3–5 ability levels (k), which are then used to match individuals on ability prior to DIF analysis procedures. Using a minimum of 20 items allows for greater variance in the score distribution, which results in more meaningful ability-level groups. Although the psychometric properties of the instrument should have been assessed prior to being utilized, it is important that the validity and reliability of an instrument be adequate. Test items need to accurately tap into the construct of interest in order to derive meaningful ability-level groups. Of course, one does not want to inflate reliability coefficients by simply adding redundant items. The key is to have a valid and reliable measure with sufficient items to develop meaningful matching groups. Gadermann et al. (2012), Revelle and Zinbarg (2009), and John and Soto (2007) offer more information on modern approaches to structural validation and more precise and appropriate methods for assessing reliability.
Balancing statistics and reasoning.
As with all psychological research and psychometric evaluation, statistics play a vital role but should by no means be the sole basis for decisions and conclusions reached. Reasoned judgment is of critical importance when evaluating items for DIF. For instance, depending on the statistical procedure used for DIF detection, differing results may be yielded. Some procedures are more precise while others less so. For instance, the Mantel-Haenszel procedure requires the researcher to construct ability levels based on total test scores whereas IRT more effectively places individuals along the latent trait or ability continuum. Thus, one procedure may indicate DIF for certain items while others do not.
Another issue is that sometimes DIF may be indicated but there is no clear reason why it exists. This is where reasoned judgment comes into play, especially in understanding why uniform or nonuniform DIF occurs. The researcher must use common sense to derive meaning from DIF analyses. It is not enough to report that items function differently for groups; there needs to be qualitative reasoning for why this occurs.
Uniform DIF occurs when there's a consistent advantage for one group compared to another across all levels of ability. This type of bias can often be addressed by using separate test norms for different groups to ensure fairness in assessment. Nonuniform DIF, on the other hand, is more complex as the advantage varies based on individuals' ability levels. Factors such as socioeconomic status, cultural differences, language barriers, and disparities in knowledge access can contribute to nonuniform DIF. Identifying and addressing nonuniform DIF requires a deeper understanding of the underlying cognitive processes involved and may require tailored interventions to ensure fair assessment practices.
In DIF studies, uncovering certain items exhibiting DIF is common, indicating potential issues needing scrutiny. However, DIF evidence doesn't automatically imply the entire test is unfair. Instead, it signals specific items may be biased, requiring attention to maintain test integrity and fairness for all examinees. Identifying items with DIF offers an opportunity to review and potentially revise or remove problematic items, ensuring equitable assessment practices. Therefore, DIF analysis serves as a valuable tool for item analysis, particularly when supplemented with qualitative exploration of causal factors.
Statistical software.
Below are common statistical programs capable of performing the procedures discussed herein. See the list of statistical packages for a comprehensive list of open-source, public domain, freeware, and proprietary statistical software.
Mantel-Haenszel procedure
IRT-based procedures
Logistic regression
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(Y)|\\theta"
},
{
"math_id": 1,
"text": "f(Y=1|\\theta, G=r) = f(Y=1|\\theta, G=f)"
},
{
"math_id": 2,
"text": "p(Y=1|G=g)\\neq p(Y=1)"
}
] | https://en.wikipedia.org/wiki?curid=9268401 |
9269065 | Precision tests of QED | Verifying quantum electrodynamics via multiple measurements of the fine-structure constant
Quantum electrodynamics (QED), a relativistic quantum field theory of electrodynamics, is among the most stringently tested theories in physics. The most precise and specific tests of QED consist of measurements of the electromagnetic fine-structure constant, "α", in various physical systems. Checking the consistency of such measurements tests the theory.
Tests of a theory are normally carried out by comparing experimental results to theoretical predictions. In QED, there is some subtlety in this comparison, because theoretical predictions require as input an extremely precise value of "α", which can only be obtained from another precision QED experiment. Because of this, the comparisons between theory and experiment are usually quoted as independent determinations of "α". QED is then confirmed to the extent that these measurements of "α" from different physical sources agree with each other.
The agreement found this way is to within ten parts in a billion (10−8), based on the comparison of the electron anomalous magnetic dipole moment and the Rydberg constant from atom recoil measurements as described below. This makes QED one of the most accurate physical theories constructed thus far.
Besides these independent measurements of the fine-structure constant, many other predictions of QED have been tested as well.
Measurements of the fine-structure constant using different systems.
Precision tests of QED have been performed in low-energy atomic physics experiments, high-energy collider experiments, and condensed matter systems. The value of "α" is obtained in each of these experiments by fitting an experimental measurement to a theoretical expression (including higher-order radiative corrections) that includes "α" as a parameter. The uncertainty in the extracted value of "α" includes both experimental and theoretical uncertainties. This program thus requires both high-precision measurements and high-precision theoretical calculations. Unless noted otherwise, all results below are taken from.
Low-energy measurements.
Anomalous magnetic dipole moments.
The most precise measurement of "α" comes from the anomalous magnetic dipole moment, or "g"−2 (pronounced "g minus 2"), of the electron. To make this measurement, two ingredients are needed:
As of February 2007, the best measurement of the anomalous magnetic dipole moment of the electron was made by the group of Gerald Gabrielse at Harvard University, using a single electron caught in a Penning trap. The difference between the electron's cyclotron frequency and its spin precession frequency in a magnetic field is proportional to "g"−2. An extremely high precision measurement of the quantized energies of the cyclotron orbits, or "Landau levels", of the electron, compared to the quantized energies of the electron's two possible spin orientations, gives a value for the electron's spin "g"-factor:
"g"/2 = ,
a precision of better than one part in a trillion. (The digits in parentheses indicate the standard uncertainty in the last listed digits of the measurement.)
The current state-of-the-art theoretical calculation of the anomalous magnetic dipole moment of the electron includes QED diagrams with up to four loops. Combining this with the experimental measurement of "g" yields the most precise value of "α":
"α"−1 = ,
a precision of better than a part in a billion. This uncertainty is ten times smaller than the nearest rival method involving atom-recoil measurements.
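For orientation, the leading-order (one-loop) QED prediction for the electron anomaly is a_e ≈ α/(2π), which already accounts for most of the measured value; the precision comparison described above relies on the full series through four loops. The sketch below uses an approximate, well-known value of α that is not taken from this article.

```python
import math

alpha = 1 / 137.035999                # approximate fine-structure constant (illustrative value)
a_e_leading = alpha / (2 * math.pi)   # Schwinger's one-loop term
print(a_e_leading)                    # ~0.0011614, versus a measured anomaly of roughly 0.0011597
# Since g/2 = 1 + a_e, the leading term alone already gives g/2 ~ 1.0011614.
```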
A value of "α" can also be extracted from the anomalous magnetic dipole moment of the muon. The "g"-factor of the muon is extracted using the same physical principle as for the electron above – namely, that the difference between the cyclotron frequency and the spin precession frequency in a magnetic field is proportional to "g"−2. The most precise measurement comes from Brookhaven National Laboratory's muon g−2 experiment, in which polarized muons are stored in a cyclotron and their spin orientation is measured by the direction of their decay electrons. As of February 2007, the current world average muon "g"-factor measurement is,
"g"/2 = ,
a precision of better than one part in a billion. The difference between the "g"-factors of the muon and the electron is due to their difference in mass. Because of the muon's larger mass, contributions to the theoretical calculation of its anomalous magnetic dipole moment from Standard Model weak interactions and from contributions involving hadrons are important at the current level of precision, whereas these effects are not important for the electron. The muon's anomalous magnetic dipole moment is also sensitive to contributions from new physics beyond the Standard Model, such as supersymmetry. For this reason, the muon's anomalous magnetic moment is normally used as a probe for new physics beyond the Standard Model rather than as a test of QED.
"See" muon "g"–2 for current efforts to refine the measurement.
Atom-recoil measurements.
This is an indirect method of measuring "α", based on measurements of the masses of the electron, certain atoms, and the Rydberg constant. The Rydberg constant is known to seven parts in a trillion. The mass of the electron relative to that of caesium and rubidium atoms is also known with extremely high precision. If the mass of the electron can be measured with sufficiently high precision, then "α" can be found from the Rydberg constant according to
formula_0
To get the mass of the electron, this method actually measures the mass of an 87Rb atom by measuring the recoil speed of the atom after it emits a photon of known wavelength in an atomic transition. Combining this with the ratio of electron to 87Rb atom, the result for "α" is,
"α"−1 = .
Because this measurement is the next-most-precise after the measurement of "α" from the electron's anomalous magnetic dipole moment described above, their comparison provides the most stringent test of QED: the value of "α" obtained here is within one standard deviation of that found from the electron's anomalous magnetic dipole moment, an agreement to within ten parts in a billion.
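As an illustration of the relation above, the sketch below rearranges formula_0 to solve for α and evaluates it with approximate CODATA constants. These numbers are well-known reference values, not figures taken from this article, and the actual experiment determines the electron mass via the measured atom mass and mass ratios, as described above.

```python
import math

# Approximate CODATA values (illustrative; not from this article)
R_inf = 1.0973731568e7    # Rydberg constant, m^-1
m_e   = 9.1093837e-31     # electron mass, kg
c     = 2.99792458e8      # speed of light, m/s
hbar  = 1.054571817e-34   # reduced Planck constant, J s

# Rearranging R_inf = alpha^2 m_e c / (4 pi hbar):
alpha = math.sqrt(4 * math.pi * hbar * R_inf / (m_e * c))
print(1 / alpha)          # ~137.036
```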
Neutron Compton wavelength.
This method of measuring "α" is very similar in principle to the atom-recoil method. In this case, the accurately known mass ratio of the electron to the neutron is used. The neutron mass is measured with high precision through a very precise measurement of its Compton wavelength. This is then combined with the value of the Rydberg constant to extract "α". The result is,
"α"−1 = .
Hyperfine splitting.
Hyperfine splitting is a splitting in the energy levels of an atom caused by the interaction between the magnetic moment of the nucleus and the combined spin and orbital magnetic moment of the electron. The hyperfine splitting in hydrogen, measured using Ramsey's hydrogen maser, is known with great precision. Unfortunately, the influence of the proton's internal structure limits how precisely the splitting can be predicted theoretically. This leads to the extracted value of "α" being dominated by theoretical uncertainty:
"α"−1 = .
The hyperfine splitting in muonium, an "atom" consisting of an electron and an antimuon, provides a more precise measurement of "α" because the muon has no internal structure:
"α"−1 = .
Lamb shift.
The Lamb shift is a small difference in the energies of the 2 S1/2 and 2 P1/2 energy levels of hydrogen, which arises from a one-loop effect in quantum electrodynamics. The Lamb shift is proportional to "α"5 and its measurement yields the extracted value:
"α"−1 = .
Positronium.
Positronium is an "atom" consisting of an electron and a positron. Whereas the calculation of the energy levels of ordinary hydrogen is contaminated by theoretical uncertainties from the proton's internal structure, the particles that make up positronium have no internal structure so precise theoretical calculations can be performed. The measurement of the splitting between the 2 3S1 and the 1 3S1 energy levels of positronium yields
"α"−1 = .
Measurements of "α" can also be extracted from the positronium decay rate. Positronium decays through the annihilation of the electron and the positron into two or more gamma-ray photons. The decay rate of the singlet ("para-positronium") 1S0 state yields
"α"−1 = ,
and the decay rate of the triplet ("ortho-positronium") 3S1 state yields
"α"−1 = .
This last result is the only serious discrepancy among the numbers given here, but there is some evidence that uncalculated higher-order quantum corrections give a large correction to the value quoted here.
High-energy QED processes.
The cross sections of higher-order QED reactions at high-energy electron-positron colliders provide a determination of "α". In order to compare the extracted value of "α" with the low-energy results, higher-order QED effects including the running of "α" due to vacuum polarization must be taken into account. These experiments typically achieve only percent-level accuracy, but their results are consistent with the precise measurements available at lower energies.
The cross section for e+ e− → e+ e− e+ e− yields
"α"−1 = ,
and the cross section for e+ e− → e+ e− μ+ μ− yields
"α"−1 = .
Condensed matter systems.
The quantum Hall effect and the AC Josephson effect are exotic quantum interference phenomena in condensed matter systems. These two effects provide a standard electrical resistance and a standard frequency, respectively, which measure the charge of the electron with corrections that are strictly zero for macroscopic systems.
The quantum Hall effect yields
"α"−1 = ,
and the AC Josephson effect yields
"α"−1 = .
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R_\\infty = \\frac{\\alpha^2 m_\\text{e} c}{4 \\pi \\hbar}."
}
] | https://en.wikipedia.org/wiki?curid=9269065 |
926966 | String vibration | A wave
A vibration in a string is a wave. Resonance causes a vibrating string to produce a sound with constant frequency, i.e. constant pitch. If the length or tension of the string is correctly adjusted, the sound produced is a musical tone. Vibrating strings are the basis of string instruments such as guitars, cellos, and pianos.
Wave.
The velocity of propagation of a wave in a string (formula_0) is proportional to the square root of the force of tension of the string (formula_1) and inversely proportional to the square root of the linear density (formula_2) of the string:
formula_3
This relationship was discovered by Vincenzo Galilei in the late 1500s.
Derivation.
Let formula_4 be the length of a piece of string, formula_5 its mass, and formula_2 its linear density. If angles formula_6 and formula_7 are small, then the horizontal components of tension on either side can both be approximated by a constant formula_1, for which the net horizontal force is zero. Accordingly, using the small angle approximation, the horizontal tensions acting on both sides of the string segment are given by
formula_8
formula_9
From Newton's second law for the vertical component, the mass (which is the product of its linear density and length) of this piece times its acceleration, formula_10, will be equal to the net force on the piece:
formula_11
Dividing this expression by formula_1 and substituting the first and second equations yields (we can choose either the first or the second equation for formula_1, so we conveniently choose each one with the matching angle formula_7 and formula_6)
formula_12
According to the small-angle approximation, the tangents of the angles at the ends of the string piece are equal to the slopes at the ends, with an additional minus sign due to the definition of formula_6 and formula_7. Using this fact and rearranging provides
formula_13
In the limit that formula_4 approaches zero, the left hand side is the definition of the second derivative of formula_14:
formula_15
This is the wave equation for formula_16, and the coefficient of the second time derivative term is equal to formula_17; thus
formula_18
where formula_0 is the speed of propagation of the wave in the string (see the article on the wave equation for more about this). However, this derivation is only valid for small-amplitude vibrations; for those of large amplitude, formula_4 is not a good approximation for the length of the string piece, the horizontal component of tension is not necessarily constant, and the horizontal tensions are not well approximated by formula_1.
Frequency of the wave.
Once the speed of propagation is known, the frequency of the sound produced by the string can be calculated. The speed of propagation of a wave is equal to the wavelength formula_19 divided by the period formula_20, or multiplied by the frequency formula_21:
formula_22
If the length of the string is formula_23, the fundamental harmonic is the one produced by the vibration whose nodes are the two ends of the string, so formula_23 is half of the wavelength of the fundamental harmonic. Hence one obtains Mersenne's laws:
formula_24
where formula_1 is the tension (in newtons), formula_2 is the linear density (that is, the mass per unit length), and formula_23 is the length of the vibrating part of the string. Therefore, the shorter the string, the higher the fundamental frequency; the greater the tension, the higher the frequency; and the lighter the string, the higher the frequency.
Moreover, if we take the nth harmonic as having a wavelength given by formula_25, then we easily get an expression for the frequency of the nth harmonic:
formula_26
And for a string under a tension T with linear density formula_2, then
formula_27
Observing string vibrations.
One can see the waveforms on a vibrating string if the frequency is low enough and the vibrating string is held in front of a CRT screen such as one of a television or a computer ("not" of an analog oscilloscope).
This effect is called the stroboscopic effect, and the rate at which the string seems to vibrate is the difference between the frequency of the string and the refresh rate of the screen. The same can happen with a fluorescent lamp, at a rate that is the difference between the frequency of the string and the frequency of the alternating current.
In daylight and other non-oscillating light sources, this effect does not occur and the string appears still but thicker, and lighter or blurred, due to persistence of vision.
A similar but more controllable effect can be obtained using a stroboscope. This device allows the frequency of the xenon flash lamp to be matched to the frequency of vibration of the string. In a dark room, this clearly shows the waveform. Otherwise, one can use bending or, perhaps more easily, adjust the machine heads to bring the string's frequency to the AC frequency or a multiple of it, achieving the same effect. For example, in the case of a guitar, the 6th (lowest pitched) string pressed to the third fret gives a G at 97.999 Hz. A slight adjustment can alter it to 100 Hz, exactly one octave above the alternating current frequency in Europe and most countries in Africa and Asia, 50 Hz. In most countries of the Americas, where the AC frequency is 60 Hz, altering A# on the fifth string, first fret, from 116.54 Hz to 120 Hz produces a similar effect.
Real-world example.
A Wikipedia user's Jackson Professional Soloist XL electric guitar has a nut-to-bridge distance (corresponding to formula_23 above) of 25 5⁄8 in. and D'Addario XL Nickel-wound Super-light-gauge EXL-120 electric guitar strings with the following manufacturer specs:
Given the above specs, what would the computed vibrational frequencies (formula_21) of the above strings' fundamental harmonics be if the strings were strung at the tensions recommended by the manufacturer?
To answer this, we can start with the formula in the preceding section, with formula_30:
formula_31
The linear density formula_2 can be expressed in terms of the spatial (mass/volume) density formula_29 via the relation formula_32, where formula_33 is the radius of the string and formula_28 is the diameter (aka thickness) in the table above:
formula_34
For purposes of computation, we can substitute for the tension formula_1 above, via Newton's second law (Force = mass × acceleration), the expression formula_35, where formula_5 is the mass that, at the Earth's surface, would have the equivalent weight corresponding to the tension values formula_1 in the table above, as related through the standard acceleration due to gravity at the Earth's surface, formula_36 cm/s2. (This substitution is convenient here since the string tensions provided by the manufacturer above are in pounds of force, which can be most conveniently converted to equivalent masses in kilograms via the familiar conversion factor 1 lb. = 453.59237 g.) The above formula then explicitly becomes:
formula_37
Using this formula to compute formula_21 for string no. 1 above yields:
formula_38
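The arithmetic can be checked with a short script that evaluates the frequency formula above with the stated unit conversions. The function name below is illustrative, and only string no. 1 is evaluated because its specifications are the ones quoted explicitly above.

```python
import math

IN_TO_CM = 2.54
LB_TO_G = 453.59237
G0 = 980.665                        # standard gravity, cm/s^2

def fundamental_hz(length_in, diameter_in, tension_lb, density_g_cm3):
    """f = (1 / (L d)) * sqrt(T / (pi rho)), with all quantities converted to CGS units."""
    L = length_in * IN_TO_CM        # vibrating length in cm
    d = diameter_in * IN_TO_CM      # string diameter in cm
    T = tension_lb * LB_TO_G * G0   # tension in dynes (g cm/s^2)
    return (1.0 / (L * d)) * math.sqrt(T / (math.pi * density_g_cm3))

# String no. 1 from the example above:
print(fundamental_hz(25.625, 0.00899, 13.1, 7.726))   # ~330 Hz (the high E string, E4)
```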
Repeating this computation for all six strings results in the following frequencies. Shown next to each frequency is the musical note (in scientific pitch notation) in standard guitar tuning whose frequency is closest, confirming that stringing the above strings at the manufacturer-recommended tensions does indeed result in the standard pitches of a guitar: | [
{
"math_id": 0,
"text": "v"
},
{
"math_id": 1,
"text": "T"
},
{
"math_id": 2,
"text": "\\mu"
},
{
"math_id": 3,
"text": "v = \\sqrt{T \\over \\mu}."
},
{
"math_id": 4,
"text": "\\Delta x"
},
{
"math_id": 5,
"text": "m"
},
{
"math_id": 6,
"text": "\\alpha"
},
{
"math_id": 7,
"text": "\\beta"
},
{
"math_id": 8,
"text": "T_{1x}=T_1 \\cos(\\alpha) \\approx T."
},
{
"math_id": 9,
"text": "T_{2x}=T_2 \\cos(\\beta)\\approx T."
},
{
"math_id": 10,
"text": "a"
},
{
"math_id": 11,
"text": "\\Sigma F_y=T_{1y}-T_{2y}=-T_2 \\sin(\\beta)+T_1 \\sin(\\alpha)=\\Delta m a\\approx\\mu\\Delta x \\frac{\\partial^2 y}{\\partial t^2}."
},
{
"math_id": 12,
"text": "-\\frac{T_2 \\sin(\\beta)}{T_2 \\cos(\\beta)}+\\frac{T_1 \\sin(\\alpha)}{T_1 \\cos(\\alpha)}=-\\tan(\\beta)+\\tan(\\alpha)=\\frac{\\mu\\Delta x}{T}\\frac{\\partial^2 y}{\\partial t^2}."
},
{
"math_id": 13,
"text": "\\frac{1}{\\Delta x}\\left(\\left.\\frac{\\partial y}{\\partial x}\\right|^{x+\\Delta x}-\\left.\\frac{\\partial y}{\\partial x}\\right|^x\\right)=\\frac{\\mu}{T}\\frac{\\partial^2 y}{\\partial t^2}."
},
{
"math_id": 14,
"text": "y"
},
{
"math_id": 15,
"text": "\\frac{\\partial^2 y}{\\partial x^2}=\\frac{\\mu}{T}\\frac{\\partial^2 y}{\\partial t^2}."
},
{
"math_id": 16,
"text": "y(x,t)"
},
{
"math_id": 17,
"text": "\\frac{1}{v^{2}}"
},
{
"math_id": 18,
"text": "v=\\sqrt{T\\over\\mu},"
},
{
"math_id": 19,
"text": "\\lambda"
},
{
"math_id": 20,
"text": "\\tau"
},
{
"math_id": 21,
"text": "f"
},
{
"math_id": 22,
"text": "v = \\frac{\\lambda}{\\tau} = \\lambda f."
},
{
"math_id": 23,
"text": "L"
},
{
"math_id": 24,
"text": "f = \\frac{v}{2L} = { 1 \\over 2L } \\sqrt{T \\over \\mu}"
},
{
"math_id": 25,
"text": "\\lambda_n = 2L/n"
},
{
"math_id": 26,
"text": "f_n = \\frac{nv}{2L}"
},
{
"math_id": 27,
"text": "f_n = \\frac{n}{2L}\\sqrt{\\frac{T}{\\mu}}"
},
{
"math_id": 28,
"text": "d"
},
{
"math_id": 29,
"text": "\\rho"
},
{
"math_id": 30,
"text": "n = 1"
},
{
"math_id": 31,
"text": "f = \\frac{1}{2L}\\sqrt{\\frac{T}{\\mu}}"
},
{
"math_id": 32,
"text": "\\mu = \\pi r^2\\rho = \\pi d^2\\rho/4"
},
{
"math_id": 33,
"text": "r"
},
{
"math_id": 34,
"text": "f = \\frac{1}{2L}\\sqrt{\\frac{T}{\\pi d^2\\rho/4}}\n = \\frac{1}{2Ld}\\sqrt{\\frac{4T}{\\pi\\rho}}\n = \\frac{1}{Ld}\\sqrt{\\frac{T}{\\pi\\rho}}"
},
{
"math_id": 35,
"text": "T = ma"
},
{
"math_id": 36,
"text": "g_0 = 980.665"
},
{
"math_id": 37,
"text": "f_\\mathrm{Hz} = \\frac{1}{L_\\mathrm{in} \\times 2.54\\ \\mathrm{cm/in} \\times d_\\mathrm{in} \\times 2.54\\ \\mathrm{cm/in}} \\sqrt{\\frac{T_\\mathrm{lb} \\times 453.59237\\ \\mathrm{g/lb} \\times 980.665\\ \\mathrm{cm/s^2}}{\\pi \\times \\rho_\\mathrm{g/cm^3}}}"
},
{
"math_id": 38,
"text": "f_1 = \\frac{1}{25.625\\ \\mathrm{in} \\times 2.54\\ \\mathrm{cm/in} \\times 0.00899\\ \\mathrm{in} \\times 2.54\\ \\mathrm{cm/in}} \\sqrt{\\frac{13.1\\ \\mathrm{lb} \\times 453.59237\\ \\mathrm{g/lb} \\times 980.665\\ \\mathrm{cm/s^2}}{\\pi \\times 7.726\\ \\mathrm{g/cm^3}}} \\approx 330\\ \\mathrm{Hz}"
}
] | https://en.wikipedia.org/wiki?curid=926966 |
9270073 | Dimethyl oxalate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Dimethyl oxalate is an organic compound with the formula or . It is the dimethyl ester of oxalic acid. Dimethyl oxalate is a colorless or white solid that is soluble in water.
Production.
Dimethyl oxalate can be obtained by esterification of oxalic acid with methanol using sulfuric acid as a catalyst:
formula_0
Oxidative carbonylation route.
The preparation by oxidative carbonylation has attracted interest because it requires only C1 precursors:
formula_1
The reaction is catalyzed by Pd2+. The synthesis gas is mostly obtained from coal or biomass. The oxidation proceeds via dinitrogen trioxide, which is formed according to (1) from nitrogen monoxide and oxygen and then reacts according to (2) with methanol to form methyl nitrite:
In the dicarbonylation step (3), carbon monoxide reacts with methyl nitrite to give dimethyl oxalate in the vapor phase at atmospheric pressure and temperatures of 80-120 °C over a palladium catalyst:
The sum equation:
This method is lossless with respect to methyl nitrite, which acts practically as a carrier of oxidation equivalents. However, the water formed must be removed to prevent hydrolysis of the dimethyl oxalate product. With 1% Pd/α-Al2O3, dimethyl oxalate is produced selectively in a dicarbonylation reaction; under the same conditions with 2% Pd/C, dimethyl carbonate is produced by monocarbonylation:
Alternatively, the oxidative carbonylation of methanol can be carried out with high yield and selectivity with 1,4-benzoquinone as an oxidant in the system Pd(OAc)2/PPh3/benzoquinone with mass ratio 1/3/100 at 65 °C and 70 atm CO:
Reactions.
Dimethyl oxalate (and the related diethyl ester) is used in diverse condensation reactions. For example, diethyl oxalate condenses with cyclohexanone to give the diketo-ester, a precursor to pimelic acid. With diamines, the diesters of oxalic acid condense to give cyclic diamides. Quinoxalinedione is produced by condensation of dimethyl oxalate and o-phenylenediamine:
C2O2(OMe)2 + C6H4(NH2)2 → C6H4(NHCO)2 + 2 MeOH
Hydrogenation gives ethylene glycol. Dimethyl oxalate can be converted into ethylene glycol in high yields (94.7%).
The methanol formed is recycled in the process of oxidative carbonylation. Other plants with a total capacity of more than 1 million tons of ethylene glycol per year are planned.
Decarbonylation gives dimethyl carbonate.
Diphenyl oxalate is obtained by transesterification with phenol in the presence of titanium catalysts; it in turn is decarbonylated to diphenyl carbonate in the liquid or gas phase.
Dimethyl oxalate can also be used as a methylating agent. It is notably less toxic than other methylating agents such as methyl iodide or dimethyl sulfate.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rm 2\\ CH_3OH + (CO_2H)_2\\ \\xrightarrow{H_2SO_4}\\ (CO_2CH_3)_2 + 2\\ H_2O"
},
{
"math_id": 1,
"text": "\\rm 4 \\ CH_3OH + 4 \\ CO + O_2 \\xrightarrow{catalyst}\\ 2 \\ (CO_2CH_3)_2 + 2 \\ H_2O"
}
] | https://en.wikipedia.org/wiki?curid=9270073 |
9272073 | Option (finance) | Right to buy or sell a certain thing at a later date at an agreed price
In finance, an option is a contract which conveys to its owner, the "holder", the right, but not the obligation, to buy or sell a specific quantity of an underlying asset or instrument at a specified strike price on or before a specified date, depending on the style of the option.
Options are typically acquired by purchase, as a form of compensation, or as part of a complex financial transaction. Thus, they are also a form of asset and have a valuation that may depend on a complex relationship between underlying asset price, time until expiration, market volatility, the risk-free rate of interest, and the strike price of the option.
Options may be traded between private parties in "over-the-counter" (OTC) transactions, or they may be exchange-traded in live, public markets in the form of standardized contracts.
Definition and application.
An option is a contract that allows the holder the right to buy or sell an underlying asset or financial instrument at a specified strike price on or before a specified date, depending on the form of the option. Selling or exercising an option before expiry typically requires a buyer to pick the contract up at the agreed upon price. The strike price may be set by reference to the spot price (market price) of the underlying security or commodity on the day an option is issued, or it may be fixed at a discount or at a premium. The issuer has the corresponding obligation to fulfill the transaction (to sell or buy) if the holder "exercises" the option. An option that conveys to the holder the right to buy at a specified price is referred to as a call, while one that conveys the right to sell at a specified price is known as a put.
The issuer may grant an option to a buyer as part of another transaction (such as a share issue or as part of an employee incentive scheme), or the buyer may pay a premium to the issuer for the option. A call option would normally be exercised only when the strike price is below the market value of the underlying asset, while a put option would normally be exercised only when the strike price is above the market value. When an option is exercised, the cost to the option holder is the strike price of the asset acquired plus the premium, if any, paid to the issuer. If the option's expiration date passes without the option being exercised, the option expires, and the holder forfeits the premium paid to the issuer. In any case, the premium is income to the issuer, and normally a capital loss to the option holder.
An option holder may on-sell the option to a third party in a secondary market, in either an over-the-counter transaction or on an options exchange, depending on the option. The market price of an American-style option normally closely follows that of the underlying stock, being the difference between the market price of the stock and the strike price of the option. The actual market price of the option may vary depending on a number of factors, such as a significant option holder needing to sell the option due to the expiration date approaching and not having the financial resources to exercise the option, or a buyer in the market trying to amass a large option holding. The ownership of an option does not generally entitle the holder to any rights associated with the underlying asset, such as voting rights or any income from the underlying asset, such as a dividend.
History.
Historical uses of options.
Contracts similar to options have been used since ancient times. The first reputed option buyer was the ancient Greek mathematician and philosopher Thales of Miletus. On a certain occasion, it was predicted that the season's olive harvest would be larger than usual, and during the off-season, he acquired the right to use a number of olive presses the following spring. When spring came and the olive harvest was larger than expected, he exercised his options and then rented the presses out at a much higher price than he paid for his 'option'.
The 1688 book Confusion of Confusions describes the trading of "opsies" on the Amsterdam stock exchange (now Euronext), explaining that "there will be only limited risks to you, while the gain may surpass all your imaginings and hopes."
In London, puts and "refusals" (calls) first became well-known trading instruments in the 1690s during the reign of William and Mary. Privileges were options sold over the counter in nineteenth-century America, with both puts and calls on shares offered by specialized dealers. Their exercise price was fixed at a rounded-off market price on the day or week that the option was bought, and the expiry date was generally three months after purchase. They were not traded in secondary markets.
In the real estate market, call options have long been used to assemble large parcels of land from separate owners; e.g., a developer pays for the right to buy several adjacent plots, but is not obligated to buy these plots and might not unless they can buy all the plots in the entire parcel. Additionally, purchase of real property, like houses, requires a buyer paying the seller into an escrow account an earnest payment, which offers the buyer the right to buy the property at the set terms, including the purchase price.
In the motion picture industry, film or theatrical producers often buy an option giving the right – but not the obligation – to dramatize a specific book or script.
Lines of credit give the potential borrower the right – but not the obligation – to borrow within a specified time period.
Many choices, or embedded options, have traditionally been included in bond contracts. For example, many bonds are convertible into common stock at the buyer's option, or may be called (bought back) at specified prices at the issuer's option. Mortgage borrowers have long had the option to repay the loan early, which corresponds to a callable bond option.
Modern stock options.
Options contracts have been known for decades. The Chicago Board Options Exchange was established in 1973, which set up a regime using standardized forms and terms and trade through a guaranteed clearing house. Trading activity and academic interest have increased since then.
Today, many options are created in a standardized form and traded through clearing houses on regulated options exchanges. In contrast, other over-the-counter options are written as bilateral, customized contracts between a single buyer and seller, one or both of which may be a dealer or market-maker. Options are part of a larger class of financial instruments known as derivative products, or simply, derivatives.
Contract specifications.
A financial option is a contract between two counterparties with the terms of the option specified in a term sheet. Option contracts may be quite complicated; however, at minimum, they usually contain the following specifications:
Option trading.
Forms of trading.
Exchange-traded options.
Exchange-traded options (also called "listed options") are a class of exchange-traded derivatives. Exchange-traded options have standardized contracts and are settled through a clearing house with fulfillment guaranteed by the Options Clearing Corporation (OCC). Since the contracts are standardized, accurate pricing models are often available. Exchange-traded options include:
Over-the-counter options.
Over-the-counter options (OTC options, also called "dealer options") are traded between two private parties and are not listed on an exchange. The terms of an OTC option are unrestricted and may be individually tailored to meet any business need. In general, the option writer is a well-capitalized institution (to prevent credit risk). Option types commonly traded over the counter include:
By avoiding an exchange, users of OTC options can narrowly tailor the terms of the option contract to suit individual business requirements. In addition, OTC option transactions generally do not need to be advertised to the market and face little or no regulatory requirements. However, OTC counterparties must establish credit lines with each other and conform to each other's clearing and settlement procedures.
With few exceptions, there are no secondary markets for employee stock options. These must either be exercised by the original grantee or allowed to expire.
Exchange trading.
The most common way to trade options is via standardized options contracts listed by various futures and options exchanges. Listings and prices are tracked and can be looked up by ticker symbol. By publishing continuous, live markets for option prices, an exchange enables independent parties to engage in price discovery and execute transactions. As an intermediary to both sides of the transaction, the benefits the exchange provides to the transaction include:
Basic trades (American style).
These trades are described from the point of view of a speculator. If they are combined with other positions, they can also be used in hedging. An option contract in US markets usually represents 100 shares of the underlying security.
Long call.
A trader who expects a stock's price to increase can buy a call option to purchase the stock at a fixed price (strike price) at a later date, rather than purchase the stock outright. The cash outlay on the option is the premium. The trader would have no obligation to buy the stock, but only has the right to do so on or before the expiration date. The risk of loss would be limited to the premium paid, unlike the possible loss had the stock been bought outright.
The holder of an American-style call option can sell the option holding at any time until the expiration date and would consider doing so when the stock's spot price is above the exercise price, especially if the holder expects the price of the option to drop. By selling the option early in that situation, the trader can realise an immediate profit. Alternatively, the trader can exercise the option – for example, if there is no secondary market for the options – and then sell the stock, realising a profit. A trader would make a profit if the spot price of the shares rises by more than the premium. For example, if the exercise price is 100 and the premium paid is 10, then if the spot price rises from 100 to only 110, the transaction breaks even; an increase in the stock price above 110 produces a profit.
If the stock price at expiration is lower than the exercise price, the holder of the option at that time will let the call contract expire and lose only the premium (or the price paid on transfer).
Long put.
A trader who expects a stock's price to decrease can buy a put option to sell the stock at a fixed price (strike price) at a later date. The trader is not obligated to sell the stock, but has the right to do so on or before the expiration date. If the stock price at expiration is below the exercise price by more than the premium paid, the trader makes a profit. If the stock price at expiration is above the exercise price, the trader lets the put contract expire and loses only the premium paid. The premium paid also shifts the break-even point. For example, if the exercise price is 100 and the premium paid is 10, then a spot price between 90 and 100 is not profitable. The trader makes a profit only if the spot price is below 90.
The trader exercising a put option on a stock does not need to own the underlying asset, because most stocks can be shorted.
Short call.
A trader who expects a stock's price to decrease can sell the stock short or instead sell, or "write", a call. The trader selling a call has an obligation to sell the stock to the call buyer at a fixed price ("strike price"). If the seller does not own the stock when the option is exercised, they are obligated to purchase the stock in the market at the prevailing market price. If the stock price decreases, the seller of the call (call writer) makes a profit in the amount of the premium. If the stock price increases over the strike price by more than the amount of the premium, the seller loses money, with the potential loss being unlimited.
Short put.
A trader who expects a stock's price to increase can buy the stock or instead sell, or "write", a put. The trader selling a put has an obligation to buy the stock from the put buyer at a fixed price ("strike price"). If the stock price at expiration is above the strike price, the seller of the put (put writer) makes a profit in the amount of the premium. If the stock price at expiration is below the strike price by more than the amount of the premium, the trader loses money, with the potential loss being up to the strike price minus the premium. A benchmark index for the performance of a cash-secured short put option position is the CBOE S&P 500 PutWrite Index (ticker PUT).
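The profit profiles described above can be sketched numerically. The following Python snippet (illustrative only; it ignores interest, fees, dividends and early exercise) encodes the per-share profit at expiration of the four basic trades, using the strike of 100 and premium of 10 from the examples in this section.
    def long_call(spot, strike=100.0, premium=10.0):
        # right to buy at the strike; profit once the rise exceeds the premium
        return max(spot - strike, 0.0) - premium

    def long_put(spot, strike=100.0, premium=10.0):
        # right to sell at the strike; profit once the fall exceeds the premium
        return max(strike - spot, 0.0) - premium

    def short_call(spot, strike=100.0, premium=10.0):
        return -long_call(spot, strike, premium)

    def short_put(spot, strike=100.0, premium=10.0):
        return -long_put(spot, strike, premium)

    print(long_call(110))   # 0.0, the break-even point quoted for the long call
    print(long_put(90))     # 0.0, the break-even point quoted for the long put
    print(short_call(125))  # -15.0, the call writer's loss grows without bound as the stock rises
    print(short_put(80))    # -10.0, the put writer's loss is capped at strike minus premium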
Options strategies.
Combining any of the four basic kinds of option trades (possibly with different exercise prices and maturities) and the two basic kinds of stock trades (long and short) allows a variety of options strategies. Simple strategies usually combine only a few trades, while more complicated strategies can combine several.
Strategies are often used to engineer a particular risk profile to movements in the underlying security. For example, buying a butterfly spread (long one X1 call, short two X2 calls, and long one X3 call) allows a trader to profit if the stock price on the expiration date is near the middle exercise price, X2, and does not expose the trader to a large loss.
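A small sketch of that payoff, with hypothetical strikes X1 = 90, X2 = 100 and X3 = 110, shows the characteristic shape: the position is worth most if the stock expires at the middle strike and nothing outside the outer strikes (premiums are omitted here for clarity).
    def call_payoff(spot, strike):
        return max(spot - strike, 0.0)

    def butterfly_payoff(spot, x1=90.0, x2=100.0, x3=110.0):
        # long one X1 call, short two X2 calls, long one X3 call
        return call_payoff(spot, x1) - 2 * call_payoff(spot, x2) + call_payoff(spot, x3)

    for s in (80, 90, 95, 100, 105, 110, 120):
        print(s, butterfly_payoff(s))
    # 0 outside the outer strikes, rising to a peak of 10 at the middle strike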
A condor is a strategy similar to a butterfly spread, but with different strikes for the short options – offering a larger likelihood of profit but with a lower net credit compared to the butterfly spread.
Selling a straddle (selling both a put and a call at the same exercise price) would give a trader a greater profit than a butterfly if the final stock price is near the exercise price, but might result in a large loss.
Similar to the straddle is the strangle, which is also constructed from a call and a put, but whose strikes are different; this reduces the net debit of the trade, but also reduces the risk of loss in the trade.
One well-known strategy is the covered call, in which a trader buys a stock (or holds a previously purchased stock position), and sells a call. (This can be contrasted with a naked call. See also naked put.) If the stock price rises above the exercise price, the call will be exercised and the trader will get a fixed profit. If the stock price falls, the call will not be exercised, and any loss incurred to the trader will be partially offset by the premium received from selling the call. Overall, the payoffs match the payoffs from selling a put. This relationship is known as put–call parity and offers insights for financial theory. A benchmark index for the performance of a buy-write strategy is the CBOE S&P 500 BuyWrite Index (ticker symbol BXM).
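The parity claim can be checked with a short sketch. The numbers below are hypothetical, with equal call and put premiums, which is what put–call parity implies for an at-the-money pair when interest rates and dividends are ignored; the expiration payoffs of the covered call and the short put then coincide.
    def covered_call_profit(spot, purchase_price=100.0, strike=100.0, call_premium=10.0):
        # long stock bought at purchase_price plus one written call
        return (spot - purchase_price) + call_premium - max(spot - strike, 0.0)

    def short_put_profit(spot, strike=100.0, put_premium=10.0):
        return put_premium - max(strike - spot, 0.0)

    for s in (80, 90, 100, 110, 120):
        print(s, covered_call_profit(s), short_put_profit(s))
    # the two columns agree at every final stock price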
Another very common strategy is the protective put, in which a trader buys a stock (or holds a previously-purchased long stock position), and buys a put. This strategy acts as insurance when investing in the underlying stock, hedging the investor's potential losses, but also reducing an otherwise larger profit compared with simply holding the stock without the put. The maximum profit of a protective put is theoretically unlimited, as the strategy involves being long on the underlying stock. The maximum loss is limited to the purchase price of the underlying stock less the strike price of the put option, plus the premium paid. A protective put is also known as a married put.
Types.
Options can be classified in a few ways.
Other option types.
Another important class of options, particularly in the U.S., is employee stock options, which a company awards to its employees as a form of incentive compensation. Other types of options exist in many financial contracts. For example, real estate options are often used to assemble large parcels of land, and prepayment options are usually included in mortgage loans. However, many of the valuation and risk management principles apply across all financial options.
Option styles.
Options are classified into a number of styles, the most common of which are:
These are often described as vanilla options. Other styles include:
Valuation.
Because the values of option contracts depend on a number of different variables in addition to the value of the underlying asset, they are complex to value. There are many pricing models in use, although all essentially incorporate the concepts of rational pricing (i.e. risk neutrality), moneyness, option time value, and put–call parity.
The valuation itself combines a model of the behavior ("process") of the underlying price with a mathematical method which returns the premium as a function of the assumed behavior. The models range from the (prototypical) Black–Scholes model for equities, to the Heath–Jarrow–Morton framework for interest rates, to the Heston model where volatility itself is considered stochastic. See Asset pricing for a listing of the various models here.
Basic decomposition.
In its most basic terms, the value of an option is commonly decomposed into two parts: the intrinsic value, which is the payoff that would be obtained by exercising the option immediately, and the time value, which is the premium over the intrinsic value and reflects the remaining optionality before expiration.
Valuation models.
As above, the value of the option is estimated using a variety of quantitative techniques, all based on the principle of risk-neutral pricing and using stochastic calculus in their solution. The most basic model is the Black–Scholes model. More sophisticated models are used to model the volatility smile. These models are implemented using a variety of numerical techniques. In general, standard option valuation models depend on the following factors:
More advanced models can require additional factors, such as an estimate of how volatility changes over time and for various underlying price levels, or the dynamics of stochastic interest rates.
The following are some principal valuation techniques used in practice to evaluate option contracts.
Black–Scholes.
Following early work by Louis Bachelier and later work by Robert C. Merton, Fischer Black and Myron Scholes made a major breakthrough by deriving a differential equation that must be satisfied by the price of any derivative dependent on a non-dividend-paying stock. By employing the technique of constructing a risk-neutral portfolio that replicates the returns of holding an option, Black and Scholes produced a closed-form solution for a European option's theoretical price. At the same time, the model generates hedge parameters necessary for effective risk management of option holdings.
While the ideas behind the Black–Scholes model were ground-breaking and eventually led to Scholes and Merton receiving the Swedish central bank's prize in economic sciences (commonly called the Nobel Prize in Economics), the application of the model in actual options trading is clumsy because of the assumptions of continuous trading, constant volatility, and a constant interest rate. Nevertheless, the Black–Scholes model remains one of the most important methods and foundations of the existing financial market, and in practice its results fall within a reasonable range.
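For reference, the closed-form solution for a European call on a non-dividend-paying stock can be written down in a few lines. The sketch below uses the standard Black–Scholes formula; the inputs at the end are illustrative values only.
    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        # standard normal cumulative distribution function
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def black_scholes_call(S, K, T, r, sigma):
        # S: spot, K: strike, T: time to expiry in years, r: risk-free rate, sigma: volatility
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    print(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2))  # roughly 10.45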
Stochastic volatility models.
Since the market crash of 1987, it has been observed that market implied volatility for options of lower strike prices is typically higher than for higher strike prices, suggesting that volatility varies both for time and for the price level of the underlying security – a so-called volatility smile; and with a time dimension, a volatility surface.
The main approach here is to treat volatility as stochastic, with the resultant stochastic volatility models, of which the Heston model is a prototype. Other models include the CEV and SABR volatility models. One principal advantage of the Heston model is that it can be solved in closed form, while other stochastic volatility models require complex numerical methods.
An alternate, though related, approach is to apply a local volatility model, where volatility is treated as a "deterministic" function of both the current asset level formula_0 and of time formula_1. As such, a local volatility model is a generalisation of the Black–Scholes model, where the volatility is a constant. The concept was developed when Bruno Dupire and Emanuel Derman and Iraj Kani noted that there is a unique diffusion process consistent with the risk-neutral densities derived from the market prices of European options.
Short-rate models.
For the valuation of bond options, swaptions (i.e. options on swaps), and interest rate caps and floors (effectively options on the interest rate) various short-rate models have been developed (applicable, in fact, to interest rate derivatives generally). The best known of these are Black–Derman–Toy and Hull–White. These models describe the future evolution of interest rates by describing the future evolution of the short rate. The other major framework for interest rate modelling is the Heath–Jarrow–Morton framework (HJM). The distinction is that HJM gives an analytical description of the "entire" yield curve, rather than just the short rate. (The HJM framework incorporates the Brace–Gatarek–Musiela model and market models. And some of the short rate models can be straightforwardly expressed in the HJM framework.) For some purposes, e.g., valuation of mortgage-backed securities, this can be a big simplification; regardless, the framework is often preferred for models of higher dimension. Note that for the simpler options here, i.e. those mentioned initially, the Black model can instead be employed, with certain assumptions.
Model implementation.
Once a valuation model has been chosen, there are a number of different techniques used to implement the models.
Analytic techniques.
In some cases, one can take the mathematical model and, using analytical methods, develop closed-form solutions such as the Black–Scholes model and the Black model. The resulting solutions are readily computable, as are their "Greeks". Although the Roll–Geske–Whaley model applies to an American call with one dividend, for other cases of American options, closed-form solutions are not available; approximations here include Barone-Adesi and Whaley, Bjerksund and Stensland and others.
Binomial tree pricing model.
Closely following the derivation of Black and Scholes, John Cox, Stephen Ross and Mark Rubinstein developed the original version of the binomial options pricing model. It models the dynamics of the option's theoretical value for discrete time intervals over the option's life. The model starts with a binomial tree of discrete future possible underlying stock prices. By constructing a riskless portfolio of an option and stock (as in the Black–Scholes model) a simple formula can be used to find the option price at each node in the tree. This value can approximate the theoretical value produced by Black–Scholes, to the desired degree of precision. However, the binomial model is considered more accurate than Black–Scholes because it is more flexible; e.g., discrete future dividend payments can be modeled correctly at the proper forward time steps, and American options can be modeled as well as European ones. Binomial models are widely used by professional option traders. The trinomial tree is a similar model, allowing for an up, down or stable path; although considered more accurate, particularly when fewer time-steps are modelled, it is less commonly used as its implementation is more complex. For a more general discussion, as well as for application to commodities, interest rates and hybrid instruments, see Lattice model (finance).
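A compact sketch of the Cox–Ross–Rubinstein tree for an American put illustrates the backward induction: at every node the option is worth the greater of its continuation value and the value of immediate exercise. The parameters are illustrative.
    from math import exp, sqrt

    def crr_american_put(S, K, T, r, sigma, steps=500):
        dt = T / steps
        u = exp(sigma * sqrt(dt))           # up move factor
        d = 1.0 / u                         # down move factor
        p = (exp(r * dt) - d) / (u - d)     # risk-neutral probability of an up move
        disc = exp(-r * dt)
        # option values at expiration, indexed by the number of up moves j
        values = [max(K - S * u ** j * d ** (steps - j), 0.0) for j in range(steps + 1)]
        # walk backwards through the tree
        for i in range(steps - 1, -1, -1):
            for j in range(i + 1):
                continuation = disc * (p * values[j + 1] + (1.0 - p) * values[j])
                exercise = K - S * u ** j * d ** (i - j)
                values[j] = max(continuation, exercise)
        return values[0]

    print(crr_american_put(S=100, K=100, T=1.0, r=0.05, sigma=0.2))  # roughly 6.09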
Monte Carlo models.
For many classes of options, traditional valuation techniques are intractable because of the complexity of the instrument. In these cases, a Monte Carlo approach may often be useful. Rather than attempt to solve the differential equations of motion that describe the option's value in relation to the underlying security's price, a Monte Carlo model uses simulation to generate random price paths of the underlying asset, each of which results in a payoff for the option. The average of these payoffs can be discounted to yield an expectation value for the option. Note though, that despite its flexibility, using simulation for American styled options is somewhat more complex than for lattice based models.
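A minimal Monte Carlo sketch for a European call under geometric Brownian motion shows the idea: simulate terminal prices under the risk-neutral measure, average the payoffs and discount. With enough paths the estimate approaches the Black–Scholes value of about 10.45 for these illustrative inputs.
    import random
    from math import exp, sqrt

    def mc_european_call(S, K, T, r, sigma, n_paths=200000, seed=1):
        rng = random.Random(seed)
        total_payoff = 0.0
        for _ in range(n_paths):
            z = rng.gauss(0.0, 1.0)
            # terminal price of one simulated risk-neutral path
            ST = S * exp((r - 0.5 * sigma ** 2) * T + sigma * sqrt(T) * z)
            total_payoff += max(ST - K, 0.0)
        return exp(-r * T) * total_payoff / n_paths

    print(mc_european_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2))  # close to 10.45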
Finite difference models.
The equations used to model the option are often expressed as partial differential equations (see for example Black–Scholes equation). Once expressed in this form, a finite difference model can be derived, and the valuation obtained. A number of implementations of finite difference methods exist for option valuation, including: explicit finite difference, implicit finite difference and the Crank–Nicolson method. A trinomial tree option pricing model can be shown to be a simplified application of the explicit finite difference method. Although the finite difference approach is mathematically sophisticated, it is particularly useful where changes are assumed over time in model inputs – for example dividend yield, risk-free rate, or volatility, or some combination of these – that are not tractable in closed form.
Other models.
Other numerical implementations which have been used to value options include finite element methods.
Risks.
As with all securities, trading options entails the risk of the option's value changing over time. However, unlike traditional securities, the return from holding an option varies non-linearly with the value of the underlying and other factors. Therefore, the risks associated with holding options are more complicated to understand and predict.
Standard hedge parameters.
In general, the change in the value of an option can be derived from Itô's lemma as:
formula_6
where the Greeks formula_2, formula_3, formula_4 and formula_5 are the standard hedge parameters calculated from an option valuation model, such as Black–Scholes, and formula_7, formula_8 and formula_9 are unit changes in the underlying's price, the underlying's volatility and time, respectively.
Thus, at any point in time, one can estimate the risk inherent in holding an option by calculating its hedge parameters and then estimating the expected change in the model inputs, formula_7, formula_8 and formula_9, provided the changes in these values are small. This technique can be used effectively to understand and manage the risks associated with standard options. For instance, by offsetting a holding in an option with the quantity formula_10 of shares in the underlying, a trader can form a delta neutral portfolio that is hedged from loss for small changes in the underlying's price. The corresponding price sensitivity formula for this portfolio formula_11 is:
formula_12
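The expansion above can be used numerically. In the sketch below the hedge parameters of a Black–Scholes call are estimated by finite differences (the inputs are illustrative, and vega plays the role of kappa in the formulas), and the predicted change of the delta-hedged portfolio for a small market move is left with only the gamma, vega and theta terms, exactly as in the last formula.
    from math import log, sqrt, exp, erf

    def N(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def call_price(S, K, T, r, sigma):
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * N(d1) - K * exp(-r * T) * N(d2)

    S, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.2
    h = 1e-4
    delta = (call_price(S + h, K, T, r, sigma) - call_price(S - h, K, T, r, sigma)) / (2 * h)
    gamma = (call_price(S + h, K, T, r, sigma) - 2 * call_price(S, K, T, r, sigma)
             + call_price(S - h, K, T, r, sigma)) / h ** 2
    vega = (call_price(S, K, T, r, sigma + h) - call_price(S, K, T, r, sigma - h)) / (2 * h)
    theta = (call_price(S, K, T - h, r, sigma) - call_price(S, K, T, r, sigma)) / h  # change per year of calendar time

    dS, d_sigma, dt = 0.5, 0.01, 1.0 / 252     # a hypothetical one-day market move
    dC = delta * dS + 0.5 * gamma * dS ** 2 + vega * d_sigma + theta * dt
    dPi = dC - delta * dS                      # delta-hedged: the first-order price term cancels
    print(delta, gamma, vega, theta)
    print(dC, dPi)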
Pin risk.
A special situation called pin risk can arise when the underlying closes at or very close to the option's strike value on the last day the option is traded prior to expiration. The option writer (seller) may not know with certainty whether or not the option will actually be exercised or be allowed to expire. Therefore, the option writer may end up with a large, unwanted residual position in the underlying when the markets open on the next trading day after expiration, regardless of his or her best efforts to avoid such a residual.
Counterparty risk.
A further, often ignored, risk in derivatives such as options is counterparty risk. In an option contract this risk is that the seller will not sell or buy the underlying asset as agreed. The risk can be minimized by using a financially strong intermediary able to make good on the trade, but in a major panic or crash the number of defaults can overwhelm even the strongest intermediaries.
Options approval levels.
To limit risk, brokers use access control systems to restrict traders from executing certain options strategies that would not be suitable for them. Brokers generally offer about four or five approval levels, with the lowest level offering the lowest risk and the highest level offering the highest risk. The actual numbers of levels, and the specific options strategies permitted at each level, vary between brokers. Brokers may also have their own specific vetting criteria, but they are usually based on factors such as the trader's annual salary and net worth, trading experience, and investment goals (capital preservation, income, growth, or speculation). For example, a trader with a low salary and net worth, little trading experience, and only concerned about preserving capital generally would not be permitted to execute high-risk strategies like naked calls and naked puts. Traders can update their information when requesting permission to upgrade to a higher approval level.
Options exchanges.
Chicago Board Options Exchange (CBOE).
The Chicago Board Options Exchange (CBOE) is an options exchange located in Chicago, Illinois. Founded in 1973, the CBOE is the first options exchange in the United States. The CBOE offers options trading on various underlying securities including market indexes, exchange-traded funds (ETFs), stocks, and volatility indexes. Its flagship product is options on the S&P 500 Index (SPX), one of the most actively traded options globally. In addition to its floor-based open outcry trading, the CBOE also operates an all-electronic trading platform. The CBOE is regulated by the U.S. Securities and Exchange Commission (SEC).
NASDAQ OMX PHLX.
Founded in 1790, the NASDAQ OMX PHLX, also known as the Philadelphia Stock Exchange, is an options and futures exchange located in Philadelphia, Pennsylvania. It is the oldest stock exchange in the United States. The NASDAQ OMX PHLX allows trading of options on equities, indexes, ETFs, and foreign currencies. It is one of the few exchanges designated for trading currency options in the U.S. In 2008, NASDAQ acquired the Philadelphia Stock Exchange and renamed it NASDAQ OMX PHLX. It operates as a subsidiary of NASDAQ, Inc.
International Securities Exchange (ISE).
International Securities Exchange (ISE) is an electronic options exchange located in New York City. Launched in 2000, ISE was the first all-electronic U.S. options exchange. ISE provides options trading on U.S. equities, indexes, and ETFs. Its trading platform provides a maximum price improvement auction to allow market makers to compete for orders. ISE is regulated by the SEC and is owned by Nasdaq, Inc.
Eurex Exchange.
Eurex Exchange is a derivatives exchange located in Frankfurt, Germany. It offers trading in futures and options on interest rates, equities, indexes, and fixed-income products. Formed in 1998 from the merger of Deutsche Terminbörse (DTB) and Swiss Options and Financial Futures Exchange (SOFFEX), Eurex Exchange operates electronic and open outcry trading platforms. Eurex Exchange is owned by Eurex Frankfurt AG.
Tokyo Stock Exchange (TSE).
Founded in 1878, the Tokyo Stock Exchange (TSE) is a stock exchange located in Tokyo, Japan. In addition to equities, the TSE also provides trading in stock index futures and options. Trading is conducted electronically as well as through auction bidding by securities companies. The TSE is regulated by the Financial Services Agency of Japan. It is owned by the Japan Exchange Group.
| [
{
"math_id": 0,
"text": " S_t "
},
{
"math_id": 1,
"text": " t "
},
{
"math_id": 2,
"text": "\\Delta"
},
{
"math_id": 3,
"text": "\\Gamma"
},
{
"math_id": 4,
"text": "\\kappa"
},
{
"math_id": 5,
"text": "\\theta"
},
{
"math_id": 6,
"text": "dC=\\Delta dS + \\Gamma \\frac{dS^2}{2} + \\kappa d\\sigma + \\theta dt \\,"
},
{
"math_id": 7,
"text": "dS"
},
{
"math_id": 8,
"text": "d\\sigma"
},
{
"math_id": 9,
"text": "dt"
},
{
"math_id": 10,
"text": "-\\Delta"
},
{
"math_id": 11,
"text": "\\Pi"
},
{
"math_id": 12,
"text": "d\\Pi=\\Delta dS + \\Gamma \\frac{dS^2}{2} + \\kappa d\\sigma + \\theta dt - \\Delta dS = \\Gamma \\frac{dS^2}{2} + \\kappa d\\sigma + \\theta dt\\,"
}
] | https://en.wikipedia.org/wiki?curid=9272073 |
927236 | Convergence of Fourier series | Mathematical problem in classical harmonic analysis
In mathematics, the question of whether the Fourier series of a periodic function converges to a given function is researched by a field known as classical harmonic analysis, a branch of pure mathematics. Convergence is not necessarily given in the general case, and certain criteria must be met for convergence to occur.
Determination of convergence requires the comprehension of pointwise convergence, uniform convergence, absolute convergence, "L""p" spaces, summability methods and the Cesàro mean.
Preliminaries.
Consider "f" an integrable function on the interval [0, 2"π"]. For such an "f" the Fourier coefficients formula_0 are defined by the formula
formula_1
It is common to describe the connection between "f" and its Fourier series by
formula_2
The notation ~ here means that the sum represents the function in some sense. To investigate this more carefully, the partial sums must be defined:
formula_3
The question of whether a Fourier series converges is: Do the functions formula_4 (which are functions of the variable "t" we omitted in the notation) converge to "f" and in which sense? Are there conditions on "f" ensuring this or that type of convergence?
Before continuing, the Dirichlet kernel must be introduced. Taking the formula for formula_0, inserting it into the formula for formula_5 and doing some algebra gives that
formula_6
where ∗ stands for the periodic convolution and formula_7 is the Dirichlet kernel, which has an explicit formula,
formula_8
The Dirichlet kernel is "not" a positive kernel, and in fact, its norm diverges, namely
formula_9
a fact that plays a crucial role in the discussion. The norm of "D""n" in "L"1(T) coincides with the norm of the convolution operator with "D""n",
acting on the space "C"(T) of periodic continuous functions, or with the norm of the linear functional "f" → ("S""n""f")(0) on "C"(T). Hence, this family of linear functionals on "C"(T) is unbounded, when "n" → ∞.
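The objects just defined are easy to experiment with numerically. The sketch below (Python; the test function is an arbitrary continuous choice) approximates the Fourier coefficients by a Riemann sum and evaluates the partial sums at a point, where they converge to the value of the function.
    import cmath
    from math import pi

    def fourier_coefficient(f, n, samples=2048):
        # approximates (1/2pi) * integral over [0, 2pi] of f(t) exp(-i n t) dt
        h = 2 * pi / samples
        return sum(f(k * h) * cmath.exp(-1j * n * k * h) for k in range(samples)) * h / (2 * pi)

    def partial_sum(f, N, t):
        return sum(fourier_coefficient(f, n) * cmath.exp(1j * n * t)
                   for n in range(-N, N + 1)).real

    f = lambda t: t * (2 * pi - t)      # a continuous 2pi-periodic test function
    t0 = 1.0
    for N in (2, 8, 32):
        print(N, partial_sum(f, N, t0))
    print(f(t0))                         # the partial sums approach this value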
Magnitude of Fourier coefficients.
In applications, it is often useful to know the size of the Fourier coefficient.
If formula_10 is an absolutely continuous function,
formula_11
for formula_12 a constant that only depends on formula_10.
If formula_10 is a bounded variation function,
formula_13
If formula_14
formula_15
If formula_14 and formula_16 has modulus of continuity formula_17,
formula_18
and therefore, if formula_10 is in the α-Hölder class
formula_19
Pointwise convergence.
There are many known sufficient conditions for the Fourier series of a function to converge at a given point "x", for example if the function is differentiable at "x". Even a jump discontinuity does not pose a problem: if the function has left and right derivatives at "x", then the Fourier series converges to the average of the left and right limits (but see Gibbs phenomenon).
The Dirichlet–Dini Criterion (see Dirichlet conditions and Dini test) states that: if "ƒ" is 2π–periodic, locally integrable and satisfies
formula_20
then (S"n""f")("x"0) converges to ℓ. This implies that for any function "f" of any Hölder class "α" > 0, the Fourier series converges everywhere to "f"("x").
It is also known that for any periodic function of bounded variation, the Fourier series converges everywhere. See also Dini test.
In general, the most common criteria for pointwise convergence of a periodic function "f" are as follows:
There exist continuous functions whose Fourier series converges pointwise but not uniformly; see Antoni Zygmund, "Trigonometric Series", vol. 1, Chapter 8, Theorem 1.13, p. 300.
However, the Fourier series of a continuous function need not converge pointwise. Perhaps the easiest proof uses the non-boundedness of Dirichlet's kernel in "L"1(T) and the Banach–Steinhaus uniform boundedness principle. As typical for existence arguments invoking the Baire category theorem, this proof is nonconstructive. It shows that the family of continuous functions whose Fourier series converges at a given "x" is of first Baire category, in the Banach space of continuous functions on the circle.
So in some sense pointwise convergence is "atypical", and for most continuous functions the Fourier series does not converge at a given point. However Carleson's theorem shows that for a given continuous function the Fourier series converges almost everywhere.
It is also possible to give explicit examples of a continuous function whose Fourier series diverges at 0: for instance, the even and 2π-periodic function "f" defined for all "x" in [0,π] by
formula_21
In this example it is easy to show how the series behaves at zero. Because the function is even the Fourier series contains only cosines:
formula_22
The coefficients are:
formula_23
As m increases, the coefficients will be positive and increasing until they reach a value of about formula_24 at formula_25 for some n and then become negative (starting with a value around formula_26) and get smaller, before starting a new such wave. At formula_27 the Fourier series is simply the running sum of formula_28 and this builds up to around
formula_29
in the nth wave before returning to around zero, showing that the series does not converge at zero but reaches higher and higher peaks.
Uniform convergence.
Suppose formula_14, and formula_16 has modulus of continuity formula_30; then the partial sums of the Fourier series converge to the function with speed
formula_31
for a constant formula_12 that does not depend upon formula_32, nor formula_33, nor formula_34.
This theorem, first proved by D Jackson, tells, for example, that if formula_10 satisfies the formula_35-Hölder condition, then
formula_36
If formula_10 is formula_37 periodic and absolutely continuous on formula_38, then the Fourier series of formula_10 converges uniformly, but not necessarily absolutely, to formula_10.
Absolute convergence.
A function "ƒ" has an absolutely converging Fourier series if
formula_39
Obviously, if this condition holds then formula_40 converges absolutely for every "t" and on the other hand, it is enough that formula_40 converges absolutely for even one "t", then this
condition holds. In other words, for absolute convergence there is no issue of "where" the sum converges absolutely — if it converges absolutely at one point then it does so everywhere.
The family of all functions with absolutely converging Fourier series is a Banach algebra (the operation of multiplication in the algebra is a simple multiplication of functions). It is called the Wiener algebra, after Norbert Wiener, who proved that if "ƒ" has absolutely converging Fourier
series and is never zero, then 1/"ƒ" has absolutely converging Fourier series. The original proof of Wiener's theorem was difficult; a simplification using the theory of Banach algebras was given by Israel Gelfand. Finally, a short elementary proof was given by Donald J. Newman in 1975.
If formula_10 belongs to an α-Hölder class for α > 1/2 then
formula_41
for formula_42 the constant in the
Hölder condition, formula_43 a constant only dependent on formula_35; formula_44 is the norm of the Krein algebra. Notice that the 1/2 here is essential—there are 1/2-Hölder functions, which do not belong to the Wiener algebra. Besides, this theorem cannot improve the best known bound on the size of the Fourier coefficient of an α-Hölder function—that is only formula_45 and then not summable.
If "ƒ" is of bounded variation "and" belongs to an α-Hölder class for some α > 0, it belongs to the Wiener algebra.
Norm convergence.
The simplest case is that of "L"2, which is a direct transcription of general Hilbert space results. According to the Riesz–Fischer theorem, if "ƒ" is square-integrable then
formula_46
"i.e.", formula_47 converges to "ƒ" in the norm of "L"2. It is easy to see that the converse is also true: if the limit above is zero, "ƒ" must be in "L"2. So this is an if and only if condition.
If 2 in the exponents above is replaced with some "p", the question becomes much harder. It turns out that the convergence still holds if 1 < "p" < ∞. In other words, for "ƒ" in "L"p, formula_4 converges to "ƒ" in the "L""p" norm. The original proof uses properties of holomorphic functions and Hardy spaces, and another proof, due to Salomon Bochner relies upon the Riesz–Thorin interpolation theorem. For "p" = 1 and infinity, the result is not true. The construction of an example of divergence in "L"1 was first done by Andrey Kolmogorov (see below). For infinity, the result is a corollary of the uniform boundedness principle.
If the partial summation operator "SN" is replaced by a suitable summability kernel (for example the "Fejér sum" obtained by convolution with the Fejér kernel), basic functional analytic techniques can be applied to show that norm convergence holds for 1 ≤ "p" < ∞.
Convergence almost everywhere.
The problem whether the Fourier series of any continuous function converges almost everywhere was posed by Nikolai Lusin in the 1920s.
It was resolved positively in 1966 by Lennart Carleson. His result, now known as Carleson's theorem, tells the Fourier expansion of any function in "L"2 converges almost everywhere. Later on, Richard Hunt generalized this to "L""p" for any "p" > 1.
Contrariwise, Andrey Kolmogorov, as a student at the age of 19, in his very first scientific work, constructed an example of a function in "L"1 whose Fourier series diverges almost everywhere (later improved to diverge everywhere).
Jean-Pierre Kahane and Yitzhak Katznelson proved that for any given set "E" of measure zero, there exists a continuous function "ƒ" such that the Fourier series of "ƒ" fails to converge on any point
of "E".
Summability.
Does the sequence 0,1,0,1,0,1... (the partial sums of Grandi's series) converge to 1/2? This does not seem like a very unreasonable generalization of the notion of convergence. Hence we say that any sequence formula_48 is Cesàro summable to some "a" if
formula_49
where formula_50 denotes the kth partial sum:
formula_51
It is not difficult to see that if a sequence converges to some "a" then it is also Cesàro summable to it.
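A tiny numeric illustration of this in Python: the running averages of the sequence 0, 1, 0, 1, ... of partial sums of Grandi's series settle down to 1/2.
    def cesaro_means(sequence):
        means, running_total = [], 0.0
        for count, term in enumerate(sequence, start=1):
            running_total += term
            means.append(running_total / count)
        return means

    partial_sums = [k % 2 for k in range(40)]     # 0, 1, 0, 1, ...
    print(cesaro_means(partial_sums)[-4:])        # values hovering ever closer to 0.5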
To discuss summability of Fourier series, we must replace formula_5 with an appropriate notion. Hence we define
formula_52
and ask: does formula_53 converge to "f"? formula_54 is no longer
associated with Dirichlet's kernel, but with Fejér's kernel, namely
formula_55
where formula_56 is Fejér's kernel,
formula_57
The main difference is that Fejér's kernel is a positive kernel. Fejér's theorem states that the above sequence of partial sums converges uniformly to "ƒ". This implies much better convergence properties.
Results about summability can also imply results about regular convergence. For example, we learn that if "ƒ" is continuous at "t", then the Fourier series of "ƒ" cannot converge to a value different from "ƒ"("t"). It may either converge to "ƒ"("t") or diverge. This is because, if formula_59 converges to some value "x", it is also summable to it, so from the first summability property above, "x" = "ƒ"("t").
Order of growth.
The order of growth of Dirichlet's kernel is logarithmic, i.e.
formula_60
See Big O notation for the notation "O"(1). The actual value formula_61 is both difficult to calculate (see Zygmund 8.3) and of almost no use. The fact that for "some" constant "c" we have
formula_62
is quite clear when one examines the graph of Dirichlet's kernel. The integral over the "n"-th peak is bigger than "c"/"n" and therefore the estimate for the harmonic sum gives the logarithmic estimate.
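This growth is easy to check numerically. The sketch below (Python) approximates the normalised L1 norm of the Dirichlet kernel on a grid and compares it with (4/π²) log N; the difference between the two stays bounded as N grows, as the O(1) term indicates.
    from math import sin, log, pi

    def dirichlet_l1_norm(N, samples=200000):
        # midpoint-rule approximation of (1/2pi) * integral over [-pi, pi] of |D_N(t)| dt
        h = 2 * pi / samples
        total = 0.0
        for k in range(samples):
            t = -pi + (k + 0.5) * h                       # midpoints avoid t = 0
            total += abs(sin((N + 0.5) * t) / sin(t / 2)) * h
        return total / (2 * pi)

    for N in (10, 100, 1000):
        print(N, dirichlet_l1_norm(N), 4 / pi ** 2 * log(N))
    # the gap between the two columns settles near a constant while both keep growing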
This estimate entails quantitative versions of some of the previous results. For any continuous function "f" and any "t" one has
formula_63
However, for any order of growth ω("n") smaller than log, this no longer holds and it is possible to find a continuous function "f" such that for some "t",
formula_64
The equivalent problem for divergence everywhere is open. Sergei Konyagin managed to construct an integrable function such that for "every t" one has
formula_65
It is not known whether this example is best possible. The only bound from the other direction known is log "n".
Multiple dimensions.
Upon examining the equivalent problem in more than one dimension, it is necessary to specify the precise order of summation one uses. For example, in two dimensions, one may define
formula_66
which are known as "square partial sums". Replacing the sum above with
formula_67
lead to "circular partial sums". The difference between these two definitions is quite notable. For example, the norm of the corresponding Dirichlet kernel for square partial sums is of the order of formula_68 while for circular partial sums it is of the order of formula_69.
Many of the results true for one dimension are wrong or unknown in multiple dimensions. In particular, the equivalent of Carleson's theorem is still open for circular partial sums. Almost everywhere convergence of "square partial sums" (as well as more general polygonal partial sums) in multiple dimensions was established around 1970 by Charles Fefferman.
Notes.
<templatestyles src="Reflist/styles.css" />
"The Katznelson book is the one using the most modern terminology and style of the three. The original publishing dates are: Zygmund in 1935, Bari in 1961 and Katznelson in 1968. Zygmund's book was greatly expanded in its second publishing in 1959, however."
This is the first proof that the Fourier series of a continuous function might diverge. In German
The first is a construction of an integrable function whose Fourier series diverges almost everywhere. The second is a strengthening to divergence everywhere. In French.
This is the original paper of Carleson, where he proves that the Fourier expansion of any continuous function converges almost everywhere; the paper of Hunt where he generalizes it to formula_70 spaces; two attempts at simplifying the proof; and a book that gives a self contained exposition of it.
In this paper the authors show that for any set of zero measure there exists a continuous function on the circle whose Fourier series diverges on that set. In French.
The Konyagin paper proves the formula_71 divergence result discussed above. A simpler proof that gives only log log "n" can be found in Kahane's book. | [
{
"math_id": 0,
"text": "\\widehat{f}(n)"
},
{
"math_id": 1,
"text": "\\widehat{f}(n)=\\frac{1}{2\\pi}\\int_0^{2\\pi}f(t)e^{-int}\\,\\mathrm{d}t, \\quad n \\in \\Z."
},
{
"math_id": 2,
"text": "f \\sim \\sum_n \\widehat{f}(n) e^{int}."
},
{
"math_id": 3,
"text": "S_N(f;t) = \\sum_{n=-N}^N \\widehat{f}(n) e^{int}."
},
{
"math_id": 4,
"text": "S_N(f)"
},
{
"math_id": 5,
"text": "S_N"
},
{
"math_id": 6,
"text": "S_N(f)=f * D_N"
},
{
"math_id": 7,
"text": "D_N"
},
{
"math_id": 8,
"text": "D_n(t)=\\frac{\\sin((n+\\frac{1}{2})t)}{\\sin(t/2)}."
},
{
"math_id": 9,
"text": "\\int |D_n(t)|\\,\\mathrm{d}t \\to \\infty "
},
{
"math_id": 10,
"text": "f"
},
{
"math_id": 11,
"text": "\\left|\\widehat f(n)\\right|\\le {K \\over |n|}"
},
{
"math_id": 12,
"text": "K"
},
{
"math_id": 13,
"text": "\\left|\\widehat f(n)\\right|\\le {{\\rm var}(f)\\over 2\\pi|n|}."
},
{
"math_id": 14,
"text": "f\\in C^p"
},
{
"math_id": 15,
"text": "\\left|\\widehat{f}(n)\\right|\\le {\\| f^{(p)}\\|_{L_1}\\over |n|^p}."
},
{
"math_id": 16,
"text": "f^{(p)}"
},
{
"math_id": 17,
"text": "\\omega_p"
},
{
"math_id": 18,
"text": "\\left|\\widehat{f}(n)\\right|\\le {\\omega(2\\pi/n)\\over |n|^p} "
},
{
"math_id": 19,
"text": "\\left|\\widehat{f}(n)\\right|\\le {K\\over |n|^\\alpha}."
},
{
"math_id": 20,
"text": "\\int_0^{\\pi} \\left| \\frac{f(x_0 + t) + f(x_0 - t)}2 - \\ell \\right| \\frac{\\mathrm{d}t }{t} < \\infty,"
},
{
"math_id": 21,
"text": "f(x) = \\sum_{n=1}^{\\infty} \\frac{1}{n^2} \\sin\\left[ \\left( 2^{n^3} +1 \\right) \\frac{x}{2}\\right]."
},
{
"math_id": 22,
"text": "f(x) \\sim \\sum_{m=0}^\\infty C_m \\cos(mx)."
},
{
"math_id": 23,
"text": "\\begin{align}\nC_m&=\\frac 2\\pi\\sum_{n=1}^{\\infty} \\frac{1}{n^2}\\int_0^\\pi \\sin\\left[ \\left( 2^{n^3} +1 \\right) \\frac{x}{2}\\right]\\cos{mx}\\,\\mathrm{d}x\\\\\n& =\\frac 1\\pi\\sum_{n=1}^{\\infty} \\frac{1}{n^2}\\int_0^\\pi \\left\\{\\sin\\left[ \\left( 2^{n^3} +1-2m\\right) \\frac{x}{2}\\right]+\\sin\\left[ \\left( 2^{n^3} +1+2m\\right) \\frac{x}{2}\\right]\\right\\}\\,\\mathrm{d}x\\\\\n& =\\frac 1\\pi\\sum_{n=1}^{\\infty} \\frac{1}{n^2} \\left\\{\\frac 2{2^{n^3} +1-2m}+\\frac 2{2^{n^3} +1+2m}\\right\\}\\\\\n\\end{align}"
},
{
"math_id": 24,
"text": "C_m\\approx 2/(n^2\\pi)"
},
{
"math_id": 25,
"text": "m=2^{n^3}/2"
},
{
"math_id": 26,
"text": "-2/(n^2\\pi)"
},
{
"math_id": 27,
"text": "x=0"
},
{
"math_id": 28,
"text": "C_m,"
},
{
"math_id": 29,
"text": "\\frac 1{n^2\\pi}\\sum_{k=0}^{2^{n^3}/2}\\frac 2{2k+1}\\sim\\frac 1{n^2\\pi}\\ln 2^{n^3}=\\frac n\\pi\\ln 2"
},
{
"math_id": 30,
"text": "\\omega"
},
{
"math_id": 31,
"text": "|f(x)-(S_Nf)(x)|\\le K {\\ln N \\over N^p}\\omega(2\\pi/N)"
},
{
"math_id": 32,
"text": "f "
},
{
"math_id": 33,
"text": "p"
},
{
"math_id": 34,
"text": "N"
},
{
"math_id": 35,
"text": "\\alpha"
},
{
"math_id": 36,
"text": "|f(x)-(S_Nf)(x)|\\le K {\\ln N\\over N^\\alpha}."
},
{
"math_id": 37,
"text": "2\\pi"
},
{
"math_id": 38,
"text": "[0,2\\pi]"
},
{
"math_id": 39,
"text": "\\|f\\|_A:=\\sum_{n=-\\infty}^\\infty |\\widehat{f}(n)|<\\infty."
},
{
"math_id": 40,
"text": "(S_N f)(t)"
},
{
"math_id": 41,
"text": "\\|f\\|_A\\le c_\\alpha \\|f\\|_{{\\rm Lip}_\\alpha},\\qquad\n\\|f\\|_K:=\\sum_{n=-\\infty}^{+\\infty} |n| |\\widehat{f}(n)|^2\\le c_\\alpha \\|f\\|^2_{{\\rm Lip}_\\alpha}"
},
{
"math_id": 42,
"text": "\\|f\\|_{{\\rm Lip}_\\alpha}"
},
{
"math_id": 43,
"text": "c_\\alpha"
},
{
"math_id": 44,
"text": "\\|f\\|_K"
},
{
"math_id": 45,
"text": "O(1/n^\\alpha)"
},
{
"math_id": 46,
"text": "\\lim_{N\\rightarrow\\infty}\\int_0^{2\\pi}\\left|f(x)-S_N(f) (x)\n\\right|^2\\,\\mathrm{d}x=0"
},
{
"math_id": 47,
"text": "S_N f"
},
{
"math_id": 48,
"text": "(a_n)_{n=1}^\\infty"
},
{
"math_id": 49,
"text": "\\lim_{n\\to\\infty}\\frac{1}{n}\\sum_{k=1}^n s_k=a."
},
{
"math_id": 50,
"text": "s_k"
},
{
"math_id": 51,
"text": "s_k = a_1 + \\cdots + a_k= \\sum_{n=1}^k a_n"
},
{
"math_id": 52,
"text": "K_N(f;t)=\\frac{1}{N}\\sum_{n=0}^{N-1} S_n(f;t), \\quad N \\ge 1,"
},
{
"math_id": 53,
"text": "K_N(f)"
},
{
"math_id": 54,
"text": "K_N "
},
{
"math_id": 55,
"text": "K_N(f)=f*F_N\\,"
},
{
"math_id": 56,
"text": "F_N"
},
{
"math_id": 57,
"text": "F_N=\\frac{1}{N}\\sum_{n=0}^{N-1} D_n."
},
{
"math_id": 58,
"text": "L^1"
},
{
"math_id": 59,
"text": "S_N(f;t)"
},
{
"math_id": 60,
"text": "\\int |D_N(t)|\\,\\mathrm{d}t = \\frac{4}{\\pi^2}\\log N+O(1)."
},
{
"math_id": 61,
"text": "4/\\pi^2"
},
{
"math_id": 62,
"text": "\\int |D_N(t)|\\,\\mathrm{d}t > c\\log N+O(1)"
},
{
"math_id": 63,
"text": "\\lim_{N\\to\\infty} \\frac{S_N(f;t)}{\\log N}=0."
},
{
"math_id": 64,
"text": "\\varlimsup_{N\\to\\infty} \\frac{S_N(f;t)}{\\omega(N)}=\\infty."
},
{
"math_id": 65,
"text": "\\varlimsup_{N\\to\\infty} \\frac{S_N(f;t)}{\\sqrt{\\log N}}=\\infty."
},
{
"math_id": 66,
"text": "S_N(f;t_1,t_2)=\\sum_{|n_1|\\leq N,|n_2|\\leq N}\\widehat{f}(n_1,n_2)e^{i(n_1 t_1+n_2 t_2)}"
},
{
"math_id": 67,
"text": "\\sum_{n_1^2+n_2^2\\leq N^2}"
},
{
"math_id": 68,
"text": "\\log^2 N"
},
{
"math_id": 69,
"text": "\\sqrt{N}"
},
{
"math_id": 70,
"text": "L^p"
},
{
"math_id": 71,
"text": "\\sqrt{\\log n}"
}
] | https://en.wikipedia.org/wiki?curid=927236 |
927274 | Pentatope number | Number in the 5th cell of any row of Pascal's triangle
In number theory, a pentatope number is a number in the fifth cell of any row of Pascal's triangle starting with the 5-term row 1 4 6 4 1, either from left to right or from right to left. It is named because it represents the number of 3-dimensional unit spheres which can be packed into a pentatope (a 4-dimensional tetrahedron) of increasing side lengths.
The first few numbers of this kind are:
1, 5, 15, 35, 70, 126, 210, 330, 495, 715, 1001, 1365 (sequence in the OEIS)
Pentatope numbers belong to the class of figurate numbers, which can be represented as regular, discrete geometric patterns.
Formula.
The formula for the nth pentatope number is represented by the 4th rising factorial of n divided by the factorial of 4:
formula_0
The pentatope numbers can also be represented as binomial coefficients:
formula_1
which is the number of distinct quadruples that can be selected from "n" + 3 objects, and it is read aloud as ""n" plus three choose four".
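A short sketch in Python reproduces the list above from the rising-factorial formula and confirms that it agrees with the binomial-coefficient form.
    from math import comb

    def pentatope(n):
        # n(n+1)(n+2)(n+3)/24; the product of four consecutive integers is divisible by 24
        return n * (n + 1) * (n + 2) * (n + 3) // 24

    print([pentatope(n) for n in range(1, 13)])
    # [1, 5, 15, 35, 70, 126, 210, 330, 495, 715, 1001, 1365]
    print(all(pentatope(n) == comb(n + 3, 4) for n in range(1, 1000)))   # True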
Properties.
Two of every three pentatope numbers are also pentagonal numbers. To be precise, the (3"k" − 2)th pentatope number is always the formula_2th pentagonal number and the (3"k" − 1)th pentatope number is always the formula_3th pentagonal number. The (3"k")th pentatope number is the generalized pentagonal number obtained by taking the negative index formula_4 in the formula for pentagonal numbers. (These expressions always give integers).
The infinite sum of the reciprocals of all pentatope numbers is 4/3. This can be derived using telescoping series.
formula_5
Pentatope numbers can be represented as the sum of the first n tetrahedral numbers:
formula_6
and are also related to tetrahedral numbers themselves:
formula_7
No prime number is the predecessor of a pentatope number (only −1 and 4 = 2² need to be checked), and the largest semiprime which is the predecessor of a pentatope number is 1819.
Similarly, the only primes preceding a 6-simplex number are 83 and 461.
Test for pentatope numbers.
We can derive this test from the formula for the nth pentatope number.
Given a positive integer x, to test whether it is a pentatope number we can compute the positive root using Ferrari's method:
formula_8
The number x is pentatope if and only if n is a natural number. In that case x is the nth pentatope number.
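A sketch of this test in Python: the candidate n is computed from the closed form above and then confirmed with exact integer arithmetic, which guards against floating-point rounding.
    from math import sqrt

    def is_pentatope(x):
        n = round((sqrt(5 + 4 * sqrt(24 * x + 1)) - 3) / 2)
        # accept only if n is a positive integer that reproduces x exactly
        return n >= 1 and n * (n + 1) * (n + 2) * (n + 3) == 24 * x

    print([x for x in range(1, 1500) if is_pentatope(x)])
    # [1, 5, 15, 35, 70, 126, 210, 330, 495, 715, 1001, 1365]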
Generating function.
The generating function for pentatope numbers is
formula_9
Applications.
In biochemistry, the pentatope numbers represent the number of possible arrangements of "n" different polypeptide subunits in a tetrameric (tetrahedral) protein.
| [
{
"math_id": 0,
"text": "P_n = \\frac{n^{\\overline 4}}{4!} = \\frac{n(n+1)(n+2)(n+3)}{24} ."
},
{
"math_id": 1,
"text": "P_n = \\binom{n + 3}{4} ,"
},
{
"math_id": 2,
"text": "\\left(\\tfrac{3k^2 - k}{2}\\right)"
},
{
"math_id": 3,
"text": "\\left(\\tfrac{3k^2 + k}{2}\\right)"
},
{
"math_id": 4,
"text": "-\\tfrac{3k^2 + k}{2}"
},
{
"math_id": 5,
"text": "\\sum_{n=1}^\\infty \\frac{4!}{n(n+1)(n+2)(n+3)} = \\frac{4}{3}."
},
{
"math_id": 6,
"text": "P_n = \\sum_{ k =1}^n \\mathrm{Te}_k,"
},
{
"math_id": 7,
"text": "P_n = \\tfrac{1}{4}(n+3) \\mathrm{Te}_n."
},
{
"math_id": 8,
"text": "n = \\frac{\\sqrt{5+4\\sqrt{24x+1}} - 3}{2}."
},
{
"math_id": 9,
"text": "\\frac{x}{(1-x)^5} = x + 5x^2 + 15x^3 + 35x^4 + \\dots ."
}
] | https://en.wikipedia.org/wiki?curid=927274 |
927277 | Aquaplaning | Loss of traction due to water buildup under tires
Aquaplaning or hydroplaning by the tires of a road vehicle, aircraft or other wheeled vehicle occurs when a layer of water builds between the wheels of the vehicle and the road surface, leading to a loss of traction that prevents the vehicle from responding to control inputs. If it occurs to all wheels simultaneously, the vehicle becomes, in effect, an uncontrolled sled. Aquaplaning is a different phenomenon from when water on the surface of the roadway merely acts as a lubricant. Traction is diminished on wet pavement even when aquaplaning is not occurring.
Causes.
Every vehicle function that changes direction or speed relies on friction between the tires and the road surface. The grooves of a rubber tire are designed to disperse water from beneath the tire, providing high friction even in wet conditions. Aquaplaning occurs when a tire encounters more water than it can dissipate. Water pressure in front of the wheel forces a wedge of water under the leading edge of the tire, causing it to lift from the road. The tire then skates on a sheet of water with little, if any, direct road contact, and loss of control results. If multiple tires aquaplane, the vehicle may lose directional control and slide until it either collides with an obstacle, or slows enough that one or more tires contact the road again and friction is regained.
The risk of aquaplaning increases with the depth of standing water, higher speeds, and the sensitivity of a vehicle to that water depth.
Vehicle sensitivity factors.
There is no precise equation to determine the speed at which a vehicle will aquaplane. Existing efforts have derived rules of thumb from empirical testing. In general, cars start to aquaplane at speeds above .
Motorcycles.
Motorcycles benefit from narrow tires with round, canoe-shaped contact patches. Narrow tires are less vulnerable to aquaplaning because vehicle weight is distributed over a smaller area, and rounded tires more easily push water aside. These advantages diminish on lighter motorcycles with naturally wide tires, like those in the supersport class. Further, wet conditions reduce the lateral force that any tire can accommodate before sliding. While a slide in a four-wheeled vehicle may be corrected, the same slide on a motorcycle will generally cause the rider to fall. Thus, despite the relative lack of aquaplaning danger in wet conditions, motorcycle riders must be even more cautious because overall traction is reduced by wet roadways.
In motor vehicles.
Speed.
It is possible to approximate the speed at which total hydroplaning occurs, with the following equation:
formula_0
where formula_1 is the tire pressure in psi and the resulting formula_2 is the speed in mph for when the vehicle will begin to totally hydroplane. Considering an example vehicle with a tire pressure of 35 psi, one can approximate that 61 mph is the speed when the tires would lose contact with the road's surface.
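A one-line version of this rule of thumb (Python) reproduces the 35 psi example:
    from math import sqrt

    def hydroplane_speed_mph(tire_pressure_psi):
        return 10.35 * sqrt(tire_pressure_psi)    # rough rule of thumb only

    print(round(hydroplane_speed_mph(35)))         # 61 (mph)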
However, the above equation only gives a very rough approximation. Resistance to aquaplaning is governed by several different factors, chiefly vehicle weight, tire width and tread pattern, as all affect the surface pressure exerted on the road by the tire over a given area of the contact patch - a narrow tire with a lot of weight placed upon it and an aggressive tread pattern will resist aquaplaning at far higher speeds than a wide tire on a light vehicle with minimal tread. Furthermore, the likelihood of aquaplaning drastically increases with water depth.
Response.
What the driver experiences when a vehicle aquaplanes depends on which wheels have lost traction and the direction of travel.
If the vehicle is traveling straight, it may begin to feel slightly loose. If there was a high level of road feel in normal conditions, it may suddenly diminish. Small correctional control inputs have no effect.
If the drive wheels aquaplane, there may be a sudden audible rise in engine RPM and indicated speed as they begin to spin. In a broad highway turn, if the front wheels lose traction, the car will suddenly drift towards the outside of the bend. If the rear wheels lose traction, the back of the car will slew out sideways into a skid. If all four wheels aquaplane at once, the car will slide in a straight line, again towards the outside of the bend if in a turn. When any or all of the wheels regain traction, there may be a sudden jerk in whatever direction that wheel is pointed.
Recovery.
Control inputs tend to be counterproductive while aquaplaning. If the car is not in a turn, easing off the accelerator may slow it enough to regain traction. Steering inputs may put the car into a skid from which recovery would be difficult or impossible. If braking is unavoidable, the driver should do so smoothly and be prepared for instability.
If the rear wheels aquaplane and cause oversteer, the driver should steer in the direction of the skid until the rear tires regain traction, and then rapidly steer in the other direction to straighten the car.
Prevention by the driver.
The best strategy is to avoid contributors to aquaplaning. Proper tire pressure, narrow and unworn tires, and reduced speeds from those judged suitably moderate in the dry will mitigate the risk of aquaplaning, as will avoidance of standing water.
Electronic stability control systems cannot replace defensive driving techniques and proper tire selection. These systems rely on selective wheel braking, which depends in turn on road contact. While stability control may help recovery from a skid when a vehicle slows enough to regain traction, it cannot prevent aquaplaning.
Because pooled water and changes in road conditions can require a smooth and timely reduction in speed, cruise control should not be used on wet or icy roads.
In aircraft.
Aquaplaning, also known as hydroplaning, is a condition in which standing water, slush or snow causes the moving wheel of an aircraft to lose contact with the load-bearing surface on which it is rolling, with the result that braking action on the wheel is not effective in reducing the ground speed of the aircraft.
Aquaplaning may reduce the effectiveness of wheel braking in aircraft on landing or aborting a takeoff, when it can cause the aircraft to run off the end of the runway. Aquaplaning has been a factor in multiple aircraft accidents, including the destruction of TAM Airlines Flight 3054 which ran off the end of the runway in São Paulo in 2007 during heavy rain. Aircraft which can employ reverse thrust braking have the advantage over road vehicles in such situations, as this type of braking is not affected by aquaplaning, but it requires a considerable distance to operate as it is not as effective as wheel braking on a dry runway.
Aquaplaning is a condition that can exist when an aircraft is landed on a runway surface contaminated with standing water, slush, and/or wet snow. Aquaplaning can have serious adverse effects on ground controllability and braking efficiency. The three basic types of aquaplaning are dynamic aquaplaning, reverted rubber aquaplaning, and viscous aquaplaning. Any one of the three can render an aircraft partially or totally uncontrollable anytime during the landing roll.
However this can be prevented by grooves on runways. In 1965, a US delegation visited the Royal Aircraft Establishment at Farnborough to view their grooved runway for reduced aquaplaning and initiated a study by the FAA and NASA. Grooving has since been adopted by most major airports around the world. Thin grooves are cut in the concrete which allows for water to be dissipated and further reduces the potential to aquaplane.
Types.
Viscous.
Viscous aquaplaning is due to the viscous properties of water. A thin film of fluid no more than 0.025 mm in depth is all that is needed. The tire cannot penetrate the fluid and the tire rolls on top of the film. This can occur at a much lower speed than dynamic aquaplaning, but requires a smooth or smooth-acting surface such as asphalt or a touchdown area coated with the accumulated rubber of past landings. Such a surface can have the same friction coefficient as wet ice.
Dynamic.
Dynamic aquaplaning is a relatively high-speed phenomenon that occurs when there is a film of water on the runway that is at least deep. As the speed of the aircraft and the depth of the water increase, the water layer builds up an increasing resistance to displacement, resulting in the formation of a wedge of water beneath the tire. At some speed, termed the aquaplaning speed (Vp), the upward force generated by water pressure equals the weight of the aircraft and the tire is lifted off the runway surface. In this condition, the tires no longer contribute to directional control, and braking action is nil. Dynamic aquaplaning is generally related to tire inflation pressure. Tests have shown that for tires with significant loads and enough water depth for the amount of tread so that the "dynamic head" pressure from the speed is applied to the whole contact patch, the minimum speed for dynamic aquaplaning (Vp) in knots is about 9 times the square root of the tire pressure in pounds per square inch (PSI). For an aircraft tire pressure of 64 PSI, the calculated aquaplaning speed would be approximately 72 knots. This speed is for a rolling, non-slipping wheel; a locked wheel reduces the Vp to 7.7 times the square root of the pressure. Therefore, once a locked tire starts aquaplaning it will continue until the speed reduces by other means (air drag or reverse thrust).
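The two rules of thumb quoted above can be written as a short sketch (Python); they reproduce the 64 psi example and show why an already-aquaplaning locked wheel keeps aquaplaning down to a lower speed:
    from math import sqrt

    def vp_rolling_knots(tire_pressure_psi):
        return 9.0 * sqrt(tire_pressure_psi)    # rotating, non-slipping wheel

    def vp_locked_knots(tire_pressure_psi):
        return 7.7 * sqrt(tire_pressure_psi)    # locked wheel, once aquaplaning has started

    print(vp_rolling_knots(64.0), vp_locked_knots(64.0))   # 72.0 and about 61.6 knots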
Reverted rubber.
Reverted rubber (steam) aquaplaning occurs during heavy braking that results in a prolonged locked-wheel skid. Only a thin film of water on the runway is required to facilitate this type of aquaplaning. The tire skidding generates enough heat to change the water film into a cushion of steam which keeps the tire off the runway. A side effect of the heat is that it causes the rubber in contact with the runway to revert to its original uncured state. Indications of an aircraft having experienced reverted rubber aquaplaning are distinctive 'steam-cleaned' marks on the runway surface and a patch of reverted rubber on the tire.
Reverted rubber aquaplaning frequently follows an encounter with dynamic aquaplaning, during which time the pilot may have the brakes locked in an attempt to slow the aircraft. Eventually the aircraft slows enough to where the tires make contact with the runway surface and the aircraft begins to skid. The remedy for this type of aquaplane is for the pilot to release the brakes and allow the wheels to spin up and apply moderate braking. Reverted rubber aquaplaning is insidious in that the pilot may not know when it begins, and it can persist to very slow groundspeeds (20 knots or less).
Reducing risk.
Any aquaplaning tire reduces both braking effectiveness and directional control.
When confronted with the possibility of aquaplaning, pilots are advised to land on a grooved runway (if available). Touchdown speed should be as slow as possible consistent with safety. After the nosewheel is lowered to the runway, moderate braking should be applied. If deceleration is not detected and aquaplaning is suspected, the nose should be raised and aerodynamic drag utilized to decelerate to a point where the brakes do become effective.
Proper braking technique is essential. The brakes should be applied firmly until reaching a point just short of a skid. At the first sign of a skid, the pilot should release brake pressure and allow the wheels to spin up. Directional control should be maintained as far as possible with the rudder. In a crosswind, if aquaplaning should occur, the crosswind will cause the aircraft to simultaneously weathervane into the wind (i.e. the nose will turn toward the wind) as well as slide downwind (the plane will tend to slide in the direction the air is moving). For small aircraft, holding the nose up as if performing a soft field landing and using the rudder to aerodynamically maintain directional control while holding the upwind aileron in the best position to prevent lifting the wing should help. However, avoid landing in heavy rain where the crosswind component of the wind is higher than the maximum demonstrated crosswind listed in the Pilot Operations Handbook.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V_p=10.35\\sqrt{p}"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "V_p"
}
] | https://en.wikipedia.org/wiki?curid=927277 |
9274067 | Marangoni number | Concept in fluid dynamics
The Marangoni number (Ma) is, as usually defined, the dimensionless number that compares the rate of transport due to Marangoni flows with the rate of transport due to diffusion. The Marangoni effect is flow of a liquid due to gradients in the surface tension of the liquid. The relevant diffusion is that of whatever quantity is creating the gradient in the surface tension. Thus, as the Marangoni number compares flow and diffusion timescales, it is a type of Péclet number.
The Marangoni number is defined as:
formula_0
A common example is surface tension gradients caused by temperature gradients. Then the relevant diffusion process is that of thermal energy (heat). Another is surface tension gradients caused by variations in the concentration of surfactants, where the diffusion is now that of surfactant molecules.
The number is named after Italian scientist Carlo Marangoni, although its use dates from the 1950s and it was neither discovered nor used by Carlo Marangoni.
The Marangoni number for a simple liquid of viscosity formula_1 with a surface tension change formula_2 over a distance formula_3 parallel to the surface can be estimated as follows. Note that we assume that formula_3 is the only length scale in the problem, which in practice implies that the liquid is at least formula_3 deep. The transport rate is usually estimated using the equations of Stokes flow, where the fluid velocity is obtained by equating the stress gradient to the viscous dissipation. A surface tension is a force per unit length, so the resulting stress must scale as formula_4, while the viscous stress scales as formula_5, for formula_6 the speed of the Marangoni flow. Equating the two we have a flow speed formula_7. As Ma is a type of Péclet number, it is a velocity times a length, divided by a diffusion constant, formula_8. Here this is the diffusion constant of whatever is causing the surface tension difference. So,
formula_9
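As a rough numerical illustration of this estimate (a sketch only; the property values below are assumed, order-of-magnitude figures, not data from this article):
<syntaxhighlight lang="python">
def marangoni_number(delta_gamma, length, viscosity, diffusivity):
    """Estimate Ma = (delta_gamma * L) / (mu * D), as derived above."""
    return delta_gamma * length / (viscosity * diffusivity)

# Assumed, order-of-magnitude values: a surface tension difference of
# 1e-4 N/m over L = 1 mm of a water-like liquid (mu ~ 1e-3 Pa s), with a
# solute diffusion constant D ~ 1e-9 m^2/s.
print(marangoni_number(delta_gamma=1e-4, length=1e-3,
                       viscosity=1e-3, diffusivity=1e-9))  # ~1e5
</syntaxhighlight>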
Marangoni number due to thermal gradients.
A common application is to a layer of liquid, such as water, when there is a temperature difference formula_10 across this layer. This could be due to the liquid evaporating or being heated from below. There is a surface tension at the surface of a liquid that depends on temperature; typically, as the temperature increases the surface tension decreases. Thus if, due to a small temperature fluctuation, one part of the surface is hotter than another, there will be flow from the hotter part to the colder part, driven by this difference in surface tension; this flow is called the Marangoni effect. The flow transports thermal energy, and the Marangoni number compares the rate at which thermal energy is transported by this flow to the rate at which thermal energy diffuses.
For a liquid layer of thickness formula_3, viscosity formula_1 and thermal diffusivity formula_11, with a surface tension formula_12 which changes with temperature at a rate formula_13, the Marangoni number can be calculated using the following formula:
formula_14
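A similar sketch for the thermal case, again with assumed, order-of-magnitude property values for a thin water layer rather than values taken from this article:
<syntaxhighlight lang="python">
def thermal_marangoni_number(dgamma_dT, length, delta_T, viscosity, alpha):
    """Ma = -(dgamma/dT) * L * delta_T / (mu * alpha), as in the formula above."""
    return -dgamma_dT * length * delta_T / (viscosity * alpha)

# Assumed values for water: dgamma/dT ~ -1.5e-4 N/(m K), mu ~ 1e-3 Pa s,
# alpha ~ 1.4e-7 m^2/s, for a 1 mm deep layer with a 1 K temperature difference.
print(thermal_marangoni_number(-1.5e-4, 1e-3, 1.0, 1e-3, 1.4e-7))  # ~1e3
</syntaxhighlight>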
When Ma is small thermal diffusion dominates and there is no flow, but for large Ma, flow (convection) occurs, driven by the gradients in the surface tension. This is called Bénard-Marangoni convection. | [
{
"math_id": 0,
"text": "\\mathrm{Ma} = \\dfrac{ \\mbox{advective transport rate, due to surface tension gradient} }{ \\mbox{diffusive transport rate, of source of gradient} }"
},
{
"math_id": 1,
"text": "\\mu"
},
{
"math_id": 2,
"text": "\\Delta\\gamma"
},
{
"math_id": 3,
"text": "L"
},
{
"math_id": 4,
"text": "\\Delta\\gamma/ L"
},
{
"math_id": 5,
"text": "\\mu u/L"
},
{
"math_id": 6,
"text": "u"
},
{
"math_id": 7,
"text": "u=\\Delta\\gamma/\\mu"
},
{
"math_id": 8,
"text": "D"
},
{
"math_id": 9,
"text": "\\mathrm{Ma} = \\dfrac{ uL }{ D }=\\dfrac{\\Delta\\gamma L}{\\mu D}"
},
{
"math_id": 10,
"text": "\\Delta T"
},
{
"math_id": 11,
"text": "\\alpha"
},
{
"math_id": 12,
"text": "\\gamma"
},
{
"math_id": 13,
"text": "\\partial\\gamma/\\partial T"
},
{
"math_id": 14,
"text": "\\mathrm{Ma} = - (\\partial\\gamma/\\partial T).\\frac{L.\\Delta T}{\\mu.\\alpha} "
}
] | https://en.wikipedia.org/wiki?curid=9274067 |
927478 | Triacontagon | Polygon with 30 edges
In geometry, a triacontagon or 30-gon is a thirty-sided polygon. The sum of any triacontagon's interior angles is 5040 degrees.
Regular triacontagon.
The "regular triacontagon" is a constructible polygon, by an edge-bisection of a regular pentadecagon, and can also be constructed as a truncated pentadecagon, t{15}. A truncated triacontagon, t{30}, is a hexacontagon, {60}.
One interior angle in a regular triacontagon is 168 degrees, meaning that one exterior angle would be 12°. The triacontagon is the largest regular polygon whose interior angle is the sum of the interior angles of smaller polygons: 168° is the sum of the interior angles of the equilateral triangle (60°) and the regular pentagon (108°).
The area of a regular triacontagon is (with "t" = edge length)
formula_0
The inradius of a regular triacontagon is
formula_1
The circumradius of a regular triacontagon is
formula_2
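These closed forms are easy to check numerically; the sketch below evaluates both the trigonometric and the nested-radical expressions for edge length "t" = 1 (the value of "t" is arbitrary):
<syntaxhighlight lang="python">
import math

t, n = 1.0, 30  # edge length and number of sides

# Trigonometric forms from the text
area = (n / 4) * t**2 / math.tan(math.pi / n)      # (15/2) t^2 cot(pi/30)
inradius = t / (2 * math.tan(math.pi / n))         # (1/2) t cot(pi/30)
circumradius = t / (2 * math.sin(math.pi / n))     # (1/2) t csc(pi/30)

# Nested-radical forms from the text
s5 = math.sqrt(5)
area_radical = (15 / 4) * t**2 * (math.sqrt(15) + 3 * math.sqrt(3)
                                  + math.sqrt(2) * math.sqrt(25 + 11 * s5))
circumradius_radical = (t / 2) * (2 + s5 + math.sqrt(15 + 6 * s5))

print(area, area_radical)                  # both ~71.3577
print(inradius)                            # ~4.7572
print(circumradius, circumradius_radical)  # both ~4.7834
</syntaxhighlight>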
Construction.
As 30 = 2 × 3 × 5, where 3 and 5 are distinct Fermat primes, a regular triacontagon is constructible using a compass and straightedge.
Symmetry.
The "regular triacontagon" has Dih30 dihedral symmetry, order 60, represented by 30 lines of reflection. Dih30 has 7 dihedral subgroups: Dih15, (Dih10, Dih5), (Dih6, Dih3), and (Dih2, Dih1). It also has eight more cyclic symmetries as subgroups: (Z30, Z15), (Z10, Z5), (Z6, Z3), and (Z2, Z1), with Zn representing π/"n" radian rotational symmetry.
John Conway labels these lower symmetries with a letter, with the order of the symmetry following the letter. He gives d (diagonal) for mirror lines through vertices, p for mirror lines through edges (perpendicular), i for mirror lines through both vertices and edges, and g for rotational symmetry; a1 labels the case of no symmetry.
These lower symmetries allow degrees of freedom in defining irregular triacontagons. Only the g30 subgroup has no degrees of freedom, but its polygons can be seen as having directed edges.
Dissection.
Coxeter states that every zonogon (a 2"m"-gon whose opposite sides are parallel and of equal length) can be dissected into "m"("m"-1)/2 parallelograms.
In particular this is true for regular polygons with evenly many sides, in which case the parallelograms are all rhombi. For the "regular triacontagon", "m"=15, it can be divided into 105: 7 sets of 15 rhombs. This decomposition is based on a Petrie polygon projection of a 15-cube.
Triacontagram.
A triacontagram is a 30-sided star polygon (though the word is extremely rare). There are 3 regular forms given by Schläfli symbols {30/7}, {30/11}, and {30/13}, and 11 compound star figures with the same vertex configuration.
There are also isogonal triacontagrams constructed as deeper truncations of the regular pentadecagon {15} and pentadecagram {15/7}, and inverted pentadecagrams {15/11} and {15/13}. Other truncations form double coverings: t{15/14}={30/14}=2{15/7}, t{15/8}={30/8}=2{15/4}, t{15/4}={30/4}=2{15/2}, and t{15/2}={30/2}=2{15}.
Petrie polygons.
The regular triacontagon is the Petrie polygon for three 8-dimensional polytopes with E8 symmetry, shown in orthogonal projections in the E8 Coxeter plane. It is also the Petrie polygon for two 4-dimensional polytopes, shown in the H4 Coxeter plane.
The regular triacontagram {30/7} is also the Petrie polygon for the great grand stellated 120-cell and grand 600-cell.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A = \\frac{15}{2} t^2 \\cot \\frac{\\pi}{30} = \\frac{15}{4} t^2 \\left(\\sqrt{15} + 3\\sqrt{3} + \\sqrt{2}\\sqrt{25+11\\sqrt{5}}\\right)"
},
{
"math_id": 1,
"text": "r = \\frac{1}{2} t \\cot \\frac{\\pi}{30} = \\frac{1}{4} t \\left(\\sqrt{15} + 3\\sqrt{3} + \\sqrt{2}\\sqrt{25+11\\sqrt{5}}\\right)"
},
{
"math_id": 2,
"text": "R = \\frac{1}{2} t \\csc \\frac{\\pi}{30} = \\frac{1}{2} t \\left(2 + \\sqrt{5} + \\sqrt{15+6\\sqrt{5}}\\right)"
}
] | https://en.wikipedia.org/wiki?curid=927478 |
9275274 | Small but significant and non-transitory increase in price | Concept in competition law
In competition law, before deciding whether companies have significant market power which would justify government intervention, the test of small but significant and non-transitory increase in price (SSNIP) is used to define the relevant market in a consistent way. It is an alternative to ad hoc determination of the relevant market by arguments about product similarity.
The SSNIP test is crucial in competition law cases alleging abuse of dominance and in approving or blocking mergers. Competition-regulating authorities and other enforcers of antitrust law intend to prevent market failure caused by cartels, oligopoly, monopoly, or other forms of market dominance.
History.
In 1982 the U.S. Department of Justice Merger Guidelines introduced the SSNIP test as a new method for defining markets and for measuring market power directly. In the EU it was used for the first time in the "Nestlé/Perrier" case in 1992 and has been officially recognized by the European Commission in its "Commission's Notice for the Definition of the Relevant Market" in 1997.
The original concept is believed to have been proposed first in 1959 by economist Morris Adelman of the Massachusetts Institute of Technology. Several other individuals formulated, apparently independently, similar conceptual approaches during the 1970s. The SSNIP approach was implemented by F. M. Scherer in three antitrust cases: in a 1972 Justice Department attempt to enjoin the merger of Associated Brewing Co. and G. W. Heileman Co., in 1975 during hearings on the U.S. government's monopolization case against IBM, and in a 1981 proceeding precipitated by Marathon Oil Company's effort to avert takeover by Mobil Oil Corporation. Scherer also proposed the basic concept underlying SSNIP along with limitations posed by what has come to be known as "the cellophane fallacy" in the second (1980) edition of his industrial organization textbook. Historical retrospectives suggest that early proponents were unaware of other individuals' conceptual proposals.
Measurement.
The SSNIP test seeks to identify the smallest relevant market within which a hypothetical monopolist or cartel could impose a profitable significant increase in price. The relevant market consists of a "catalogue" of goods and/or services which are considered substitutes by the customer. Such a catalogue is considered "worth monopolizing" if, should only one single supplier provide it, that supplier could profitably increase its price without its customers turning away and choosing other goods and services from other suppliers.
The application of the SSNIP test involves interviewing consumers regarding buying decisions and determining whether a hypothetical monopolist or cartel could profit from a price increase of 5% for at least one year (assuming that "the terms of sale of all other products are held constant"). If a sufficient number of buyers are likely to switch to alternative products and the lost sales would make such a price increase unprofitable, then the hypothetical market should not be considered a relevant market for the basis of litigation or regulation. Therefore, another, larger basket of products is proposed for a hypothetical monopolist to control, and the SSNIP test is performed on that relevant market.
The SSNIP test can be applied by estimating empirically the critical elasticity of demand. In the case of linear demand, information on firms' price-cost margins is sufficient for the calculation. If the pre-merger elasticity of demand exceeds the critical elasticity, then the decline in sales arising from the price increase will be sufficiently large to render the price increase unprofitable and the products concerned do not constitute the relevant market.
An alternative method for applying the SSNIP test where demand elasticities cannot be estimated, involves estimating the "critical loss." The critical loss is defined as the maximum sales loss that could be sustained as a result of the price increase without making the price increase unprofitable. Where the likely loss of sales to the hypothetical monopolist (cartel) is less than the Critical Loss, then a 5% price increase would be profitable and the market is defined.
Example.
The test consists of observing whether a small increase in price (in the range of 5 to 10 percent) would provoke a significant number of consumers to switch to another product (in fact, substitute product). In other words, it is designed to analyse whether that increase in price would be profitable or if, instead, it would just induce substitution, making it unprofitable.
In general, one uses databases from the firms, which may include data on variables such as costs, prices, revenue or sales, over a sufficiently long period (generally at least two years).
In economic terms, what the SSNIP test does is to calculate the residual elasticity of demand of the firm. That is, how a change in prices by the firm affects its own demand.
First phase.
As an example, suppose a firm that sells 1000 units at a price of 10, with a variable cost per unit of 5.
In this case, the firm would make profits equal to 5000: formula_0.
Now suppose the firm decides to increase its price by 10 percent, so that the new price is 11. Suppose that at this new price the firm sells only 800 units, with the variable cost per unit still 5.
In this case, the firm would make profits equal to 4800: formula_0.
As can be seen, such an increase in prices would induce a certain amount of substitution away from our hypothetical firm: in fact, 200 fewer units will be sold. This may be because some consumers have started to buy a substitute product, because the same consumers have bought a smaller quantity of the product given its price increase, or because they have stopped buying that type of product altogether.
If we want to know whether such price increase has been profitable, we should solve the following equation:<br>
formula_1
In our example, the increase in price produces too much consumer substitution, which is not compensated for by the higher price or the reduction in costs. Overall, the firm would make lower profits (4800 compared to 5000). In other words, there are other substitute products that should be included in the relevant market, and the product of the firm does not by itself constitute a separate relevant market. The "market" formed by this product alone is not "worth monopolising", as an increase in prices would not be profitable. The investigation should continue by including new products which we may guess are substitutes of the one under investigation.
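The first-phase arithmetic can be reproduced in a few lines (a sketch using the illustrative numbers above; the function name is ours):
<syntaxhighlight lang="python">
def profit(price, sales, unit_cost):
    """Profit = Price x Sales - Variable cost per unit x Sales."""
    return (price - unit_cost) * sales

before = profit(price=10, sales=1000, unit_cost=5)  # 5000
after = profit(price=11, sales=800, unit_cost=5)    # 4800

# The 10 percent price increase is unprofitable for the single product,
# so the candidate market must be widened in a second phase.
print(before, after, after > before)  # 5000 4800 False
</syntaxhighlight>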
Second phase.
We already know that the previous product is not by itself a relevant market because there exist other substitute products. Let's suppose that the previous firm (A) tells us that it considers the products of B and C to be its competitors. In this case, in the second phase we should include these products in our analysis and repeat the exercise.
The situation can be described as follows: product A sells 1000 units at a price of 10 with a variable cost per unit of 5, product B sells 800 units at a price of 13 with a variable cost per unit of 4, and product C sells 1100 units at a price of 9 with a variable cost per unit of 4.
Given that we want to know if products A, B and C constitute a relevant market, the exercise consists in supposing that a hypothetical monopolist X controlled all three products. In that case, the monopolist would make profits of:<br>
formula_2
Now suppose that monopolist X decides to increase the price of product A by 10 percent, to 11, while keeping the prices of B and C constant. Suppose this provokes the following situation: A now sells 800 units at a price of 11, B sells 900 units at a price of 13, and C sells 1200 units at a price of 9, with variable costs per unit unchanged.
This means that the price increase of A causes 200 fewer units of A to be sold, while 100 more units each of B and C are sold instead.
formula_3
As can be seen, a monopolist controlling A, B and C could profitably increase the price of A by 10 percent; in other words, these three products do constitute a market "worth monopolising" and therefore constitute a relevant market. This result holds because X controls all three products, which are the only substitutes of A. Thus, X knows that even if the increase in the price of A generates some substitution, a significant share of these consumers will end up buying other products which it controls; therefore, overall, its profits will not be reduced but rather increased.
If we had found that such an increase would not have been profitable, we should further include new products which we may imagine are substitutes in a third phase until we arrive at a situation in which such an increase in price would have been profitable, indicating that those products do constitute a relevant market.
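The second-phase computation for the hypothetical monopolist X can be sketched in the same way (again using the illustrative numbers above):
<syntaxhighlight lang="python">
def profit(price, sales, unit_cost):
    return (price - unit_cost) * sales

# (price, sales, unit cost) for products A, B and C before the price increase
before = profit(10, 1000, 5) + profit(13, 800, 4) + profit(9, 1100, 4)

# After a 10 percent increase in the price of A, 200 units of A migrate to
# B and C (100 each), all of which the hypothetical monopolist X controls.
after = profit(11, 800, 5) + profit(13, 900, 4) + profit(9, 1200, 4)

print(before, after, after > before)  # 17700 18900 True
</syntaxhighlight>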
Limitations.
Despite its widespread usage, the SSNIP test is not without problems. Specifically:
Furthermore, many economists have noted an important pitfall in the use of demand elasticities when inferring both the market power and the relevant market. The problem arises from the fact that economic theory predicts that any profit-maximizing firm will set its prices at a level where demand for its product is elastic. Therefore, when a monopolist sets its prices at a monopoly level it may happen that two products appear to be close substitutes whereas at competitive prices they are not. In other words, it may happen that using the SSNIP test one defines the relevant market too broadly, including products which are not substitutes.
This problem is known in the literature as the cellophane paradox after the celebrated DuPont case ("U.S. v. E. I. du Pont"). In this case, DuPont (a cellophane producer) argued that cellophane was not a separate relevant market since it competed with flexible packaging materials such as aluminum foil, wax paper and polyethylene. The problem was that DuPont, being the sole producer of cellophane, had set prices at the monopoly level, and it was at this level that consumers viewed those other products as substitutes. Instead, at the competitive level, consumers viewed cellophane as a unique relevant market (a small but significant increase in prices would not have them switching to goods like wax or the others). In the case, the Supreme Court of the United States failed to recognise that a high own-price elasticity may mean that a firm is already exercising monopoly power.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{Price} \\times \\mathrm{Sales} - \\mathrm{Variable\\ cost} \\times \\mathrm{Sales}"
},
{
"math_id": 1,
"text": "\\mathrm{Profits} = \\mathrm{Price} \\times \\mathrm{Sales} - \\mathrm{Variable\\ cost\\ per\\ unit} \\times \\mathrm{Sales} = 4800."
},
{
"math_id": 2,
"text": "10 \\times 1000 - 5 \\times 1000 + 13 \\times 800 - 4 \\times 800 + 9 \\times 1100 - 4 \\times 1100 = 17700"
},
{
"math_id": 3,
"text": "11 \\times 800 - 5 \\times 800 + 13 \\times 900 - 4 \\times 900 + 9 \\times 1200 - 4 \\times 1200 = 18900"
}
] | https://en.wikipedia.org/wiki?curid=9275274 |
9277 | Ellipse | Plane curve: conic section
In mathematics, an ellipse is a plane curve surrounding two focal points, such that for all points on the curve, the sum of the two distances to the focal points is a constant. It generalizes a circle, which is the special type of ellipse in which the two focal points are the same. The elongation of an ellipse is measured by its eccentricity formula_0, a number ranging from formula_1 (the limiting case of a circle) to formula_2 (the limiting case of infinite elongation, no longer an ellipse but a parabola).
An ellipse has a simple algebraic solution for its area, but for its perimeter (also known as circumference), integration is required to obtain an exact solution.
Analytically, the equation of a standard ellipse centered at the origin with width formula_3 and height formula_4 is:
formula_5
Assuming formula_6, the foci are formula_7 for formula_8. The standard parametric equation is:
formula_9
Ellipses are the closed type of conic section: a plane curve tracing the intersection of a cone with a plane (see figure). Ellipses have many similarities with the other two forms of conic sections, parabolas and hyperbolas, both of which are open and unbounded. An angled cross section of a right circular cylinder is also an ellipse.
An ellipse may also be defined in terms of one focal point and a line outside the ellipse called the directrix: for all points on the ellipse, the ratio between the distance to the focus and the distance to the directrix is a constant. This constant ratio is the above-mentioned eccentricity:
formula_10
Ellipses are common in physics, astronomy and engineering. For example, the orbit of each planet in the Solar System is approximately an ellipse with the Sun at one focus point (more precisely, the focus is the barycenter of the Sun–planet pair). The same is true for moons orbiting planets and all other systems of two astronomical bodies. The shapes of planets and stars are often well described by ellipsoids. A circle viewed from a side angle looks like an ellipse: that is, the ellipse is the image of a circle under parallel or perspective projection. The ellipse is also the simplest Lissajous figure formed when the horizontal and vertical motions are sinusoids with the same frequency: a similar effect leads to elliptical polarization of light in optics.
The name, (, "omission"), was given by Apollonius of Perga in his "Conics".
Definition as locus of points.
An ellipse can be defined geometrically as a set or locus of points in the Euclidean plane:
<templatestyles src="Block indent/styles.css"/>Given two fixed points formula_11 called the foci and a distance formula_3 which is greater than the distance between the foci, the ellipse is the set of points formula_12 such that the sum of the distances formula_13 is equal to formula_3: formula_14
The midpoint formula_15 of the line segment joining the foci is called the "center" of the ellipse. The line through the foci is called the "major axis", and the line perpendicular to it through the center is the "minor axis". The major axis intersects the ellipse at two "vertices" formula_16, which have distance formula_17 to the center. The distance formula_18 of the foci to the center is called the "focal distance" or linear eccentricity. The quotient formula_19 is the "eccentricity".
The case formula_20 yields a circle and is included as a special type of ellipse.
The equation formula_21 can be viewed in a different way (see figure):
<templatestyles src="Block indent/styles.css"/>If formula_22 is the circle with center formula_23 and radius formula_3, then the distance of a point formula_12 to the circle formula_22 equals the distance to the focus formula_24: formula_25
formula_22 is called the "circular directrix" (related to focus formula_23) of the ellipse. This property should not be confused with the definition of an ellipse using a directrix line below.
Using Dandelin spheres, one can prove that any section of a cone with a plane is an ellipse, assuming the plane does not contain the apex and has slope less than that of the lines on the cone.
In Cartesian coordinates.
Standard equation.
The standard form of an ellipse in Cartesian coordinates assumes that the origin is the center of the ellipse, the "x"-axis is the major axis, and:
For an arbitrary point formula_27 the distance to the focus formula_28 is formula_29 and to the other focus formula_30. Hence the point formula_31 is on the ellipse whenever:
formula_32
Removing the radicals by suitable squarings and using formula_33 (see diagram) produces the standard equation of the ellipse:
formula_34
or, solved for "y":
formula_35
The width and height parameters formula_36 are called the semi-major and semi-minor axes. The top and bottom points formula_37 are the "co-vertices". The distances from a point formula_31 on the ellipse to the left and right foci are formula_38 and formula_39.
It follows from the equation that the ellipse is "symmetric" with respect to the coordinate axes and hence with respect to the origin.
Parameters.
Principal axes.
Throughout this article, the semi-major and semi-minor axes are denoted formula_17 and formula_40, respectively, i.e. formula_41
In principle, the canonical ellipse equation formula_42 may have formula_43 (and hence the ellipse would be taller than it is wide). This form can be converted to the standard form by transposing the variable names formula_44 and formula_45 and the parameter names formula_17 and formula_46
Linear eccentricity.
This is the distance from the center to a focus: formula_47.
Eccentricity.
The eccentricity can be expressed as:
formula_48
assuming formula_49 An ellipse with equal axes (formula_50) has zero eccentricity, and is a circle.
Semi-latus rectum.
The length of the chord through one focus, perpendicular to the major axis, is called the "latus rectum". One half of it is the "semi-latus rectum" formula_26. A calculation shows:
formula_51
The semi-latus rectum formula_26 is equal to the radius of curvature at the vertices (see section curvature).
Tangent.
An arbitrary line formula_52 intersects an ellipse at 0, 1, or 2 points, respectively called an "exterior line", "tangent" and "secant". Through any point of an ellipse there is a unique tangent. The tangent at a point formula_53 of the ellipse formula_54 has the coordinate equation:
formula_55
A vector parametric equation of the tangent is:
formula_56
Proof:
Let formula_53 be a point on an ellipse and formula_57 be the equation of any line formula_52 containing formula_53. Inserting the line's equation into the ellipse equation and respecting formula_58 yields:
formula_59
There are then cases:
Using (1) one finds that formula_66 is a tangent vector at point formula_53, which proves the vector equation.
If formula_67 and formula_68 are two points of the ellipse such that formula_69, then the points lie on two "conjugate diameters" (see below). (If formula_50, the ellipse is a circle and "conjugate" means "orthogonal".)
Shifted ellipse.
If the standard ellipse is shifted to have center formula_70, its equation is
formula_71
The axes are still parallel to the "x"- and "y"-axes.
General ellipse.
In analytic geometry, the ellipse is defined as a quadric: the set of points formula_31 of the Cartesian plane that, in non-degenerate cases, satisfy the implicit equation
formula_72
provided formula_73
To distinguish the degenerate cases from the non-degenerate case, let "∆" be the determinant
formula_74
Then the ellipse is a non-degenerate real ellipse if and only if "C∆" < 0. If "C∆" > 0, we have an imaginary ellipse, and if "∆" = 0, we have a point ellipse.
The general equation's coefficients can be obtained from known semi-major axis formula_17, semi-minor axis formula_40, center coordinates formula_70, and rotation angle formula_75 (the angle from the positive horizontal axis to the ellipse's major axis) using the formulae:
formula_76
These expressions can be derived from the canonical equation
formula_77
by a Euclidean transformation of the coordinates formula_78:
formula_79
Conversely, the canonical form parameters can be obtained from the general-form coefficients by the equations:
formula_80
where atan2 is the 2-argument arctangent function.
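The conversion can be checked numerically. The sketch below builds general-form coefficients from the semi-axes, center and rotation angle and then verifies that points taken from the canonical parametric form, rotated and shifted accordingly, satisfy the implicit equation; all numerical values are illustrative, and note that the coefficients of a conic are only determined up to an overall scale factor.
<syntaxhighlight lang="python">
import math

def general_coefficients(a, b, x0, y0, theta):
    """Coefficients (A, B, C, D, E, F) of A x^2 + B x y + C y^2 + D x + E y + F = 0
    for an ellipse with semi-axes a, b, center (x0, y0) and rotation angle theta."""
    A = a**2 * math.sin(theta)**2 + b**2 * math.cos(theta)**2
    B = 2 * (b**2 - a**2) * math.sin(theta) * math.cos(theta)
    C = a**2 * math.cos(theta)**2 + b**2 * math.sin(theta)**2
    D = -2 * A * x0 - B * y0
    E = -B * x0 - 2 * C * y0
    F = A * x0**2 + B * x0 * y0 + C * y0**2 - a**2 * b**2
    return A, B, C, D, E, F

a, b, x0, y0, theta = 3.0, 2.0, 1.0, -2.0, math.radians(30)
A, B, C, D, E, F = general_coefficients(a, b, x0, y0, theta)

# Points generated from the canonical form, rotated by theta and shifted to
# (x0, y0), should satisfy the implicit equation up to rounding error.
for t in (0.0, 0.7, 2.0, 4.5):
    X, Y = a * math.cos(t), b * math.sin(t)
    x = x0 + X * math.cos(theta) - Y * math.sin(theta)
    y = y0 + X * math.sin(theta) + Y * math.cos(theta)
    print(round(A*x*x + B*x*y + C*y*y + D*x + E*y + F, 9))  # ~0.0
</syntaxhighlight>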
Parametric representation.
Standard parametric representation.
Using trigonometric functions, a parametric representation of the standard ellipse formula_81 is:
formula_82
The parameter "t" (called the "eccentric anomaly" in astronomy) is not the angle of formula_83 with the "x"-axis, but has a geometric meaning due to Philippe de La Hire (see "" below).
Rational representation.
With the substitution formula_84 and trigonometric formulae one obtains
formula_85
and the "rational" parametric equation of an ellipse
formula_86
which covers any point of the ellipse formula_54 except the left vertex formula_87.
For formula_88 this formula represents the right upper quarter of the ellipse moving counter-clockwise with increasing formula_89 The left vertex is the limit formula_90
Alternately, if the parameter formula_91 is considered to be a point on the real projective line formula_92, then the corresponding rational parametrization is
formula_93
Then formula_94
Rational representations of conic sections are commonly used in computer-aided design (see Bezier curve).
Tangent slope as parameter.
A parametric representation, which uses the slope formula_95 of the tangent at a point of the ellipse, can be obtained from the derivative of the standard representation formula_96:
formula_97
With help of trigonometric formulae one obtains:
formula_98
Replacing formula_99 and formula_100 of the standard representation yields:
formula_101
Here formula_95 is the slope of the tangent at the corresponding ellipse point, formula_102 is the upper and formula_103 the lower half of the ellipse. The vertices formula_104, having vertical tangents, are not covered by the representation.
The equation of the tangent at point formula_105 has the form formula_106. The still unknown formula_107 can be determined by inserting the coordinates of the corresponding ellipse point formula_105:
formula_108
This description of the tangents of an ellipse is an essential tool for the determination of the orthoptic of an ellipse. The orthoptic article contains another proof, without differential calculus and trigonometric formulae.
General ellipse.
Another definition of an ellipse uses affine transformations:
Any "ellipse" is an affine image of the unit circle with equation formula_109.
An affine transformation of the Euclidean plane has the form formula_110, where formula_111 is a regular matrix (with non-zero determinant) and formula_112 is an arbitrary vector. If formula_113 are the column vectors of the matrix formula_111, the unit circle formula_114, formula_115, is mapped onto the ellipse:
formula_116
Here formula_112 is the center and formula_117 are the directions of two conjugate diameters, in general not perpendicular.
The four vertices of the ellipse are formula_118, for a parameter formula_119 defined by:
formula_120
(If formula_121, then formula_122.) This is derived as follows. The tangent vector at point formula_123 is:
formula_124
At a vertex parameter formula_119, the tangent is perpendicular to the major/minor axes, so:
formula_125
Expanding and applying the identities formula_126 gives the equation for formula_127
From Apollonios theorem (see below) one obtains:<br>
The area of an ellipse formula_128 is
formula_129
With the abbreviations
formula_130 the statements of Apollonios's theorem can be written as:
formula_131
Solving this nonlinear system for formula_132 yields the semiaxes:
formula_133
Solving the parametric representation for formula_134 by Cramer's rule and using formula_135, one obtains the implicit representation
formula_136
Conversely: If the equation
formula_137 with formula_138
of an ellipse centered at the origin is given, then the two vectors
formula_139
point to two conjugate points and the tools developed above are applicable.
"Example": For the ellipse with equation formula_140 the vectors are
formula_141
For formula_142 one obtains a parametric representation of the standard ellipse rotated by angle formula_75:
formula_143
The definition of an ellipse in this section gives a parametric representation of an arbitrary ellipse, even in space, if one allows formula_144 to be vectors in space.
Polar forms.
Polar form relative to center.
In polar coordinates, with the origin at the center of the ellipse and with the angular coordinate formula_75 measured from the major axis, the ellipse's equation is
formula_145
where formula_0 is the eccentricity, not Euler's number.
Polar form relative to focus.
If instead we use polar coordinates with the origin at one focus, with the angular coordinate formula_146 still measured from the major axis, the ellipse's equation is
formula_147
where the sign in the denominator is negative if the reference direction formula_146 points towards the center (as illustrated on the right), and positive if that direction points away from the center.
The angle formula_75 is called the true anomaly of the point. The numerator formula_148 is the semi-latus rectum.
Eccentricity and the directrix property.
Each of the two lines parallel to the minor axis, and at a distance of formula_149 from it, is called a "directrix" of the ellipse (see diagram).
For an arbitrary point formula_12 of the ellipse, the quotient of the distance to one focus and to the corresponding directrix (see diagram) is equal to the eccentricity: formula_150
The proof for the pair formula_151 follows from the fact that formula_152 and formula_153 satisfy the equation
formula_154
The second case is proven analogously.
The converse is also true and can be used to define an ellipse (in a manner similar to the definition of a parabola):
For any point formula_155 (focus), any line formula_156 (directrix) not through formula_155, and any real number formula_0 with formula_157 the ellipse is the locus of points for which the quotient of the distances to the point and to the line is formula_158 that is: formula_159
The extension to formula_1, which is the eccentricity of a circle, is not allowed in this context in the Euclidean plane. However, one may consider the directrix of a circle to be the line at infinity in the projective plane.
Let formula_161, and assume formula_162 is a point on the curve. The directrix formula_156 has equation formula_163. With formula_164, the relation formula_165 produces the equations
formula_166 and formula_167
The substitution formula_168 yields
formula_169
This is the equation of an "ellipse" (formula_170), or a "parabola" (formula_2), or a "hyperbola" (formula_160). All of these non-degenerate conics have, in common, the origin as a vertex (see diagram).
If formula_170, introduce new parameters formula_171 so that formula_172, and then the equation above becomes
formula_173
which is the equation of an ellipse with center formula_174, the "x"-axis as major axis, and
the major/minor semi axis formula_171.
Because of formula_175 point formula_176 of directrix formula_177 (see diagram) and focus formula_24 are inverse with respect to the circle inversion at circle formula_178 (in diagram green). Hence formula_176 can be constructed as shown in the diagram. Directrix formula_177 is the perpendicular to the main axis at point formula_176.
If the focus is formula_179 and the directrix formula_180, one obtains the equation
formula_181
Focus-to-focus reflection property.
An ellipse possesses the following property:
The normal at a point formula_12 bisects the angle between the lines formula_183.
Because the tangent line is perpendicular to the normal, an equivalent statement is that the tangent is the external angle bisector of the lines to the foci (see diagram).
Let formula_184 be the point on the line formula_185 with distance formula_3 to the focus formula_23, where formula_17 is the semi-major axis of the ellipse. Let line formula_186 be the external angle bisector of the lines formula_187 and formula_188 Take any other point formula_189 on formula_190 By the triangle inequality and the angle bisector theorem, formula_191 formula_192 formula_193 therefore formula_189 must be outside the ellipse. As this is true for every choice of formula_194 formula_186 only intersects the ellipse at the single point formula_12 and so must be the tangent line.
The rays from one focus are reflected by the ellipse to the second focus. This property has optical and acoustic applications similar to the reflective property of a parabola (see whispering gallery).
Additionally, because of the focus-to-focus reflection property of ellipses, if the rays are allowed to continue propagating, reflected rays will eventually align closely with the major axis.
Conjugate diameters.
Definition of conjugate diameters.
A circle has the following property:
The midpoints of parallel chords lie on a diameter.
An affine transformation preserves parallelism and midpoints of line segments, so this property is true for any ellipse. (Note that the parallel chords and the diameter are no longer orthogonal.)
Two diameters formula_195 of an ellipse are "conjugate" if the midpoints of chords parallel to formula_196 lie on formula_197
From the diagram one finds:
Two diameters formula_198 of an ellipse are conjugate whenever the tangents at formula_199 and formula_200 are parallel to formula_201.
Conjugate diameters in an ellipse generalize orthogonal diameters in a circle.
In the parametric equation for a general ellipse given above,
formula_202
any pair of points formula_203 belong to a diameter, and the pair formula_204 belong to its conjugate diameter.
For the common parametric representation formula_205 of the ellipse with equation formula_206 one gets: The points
formula_207 (signs: (+,+) or (−,−) )
formula_208 (signs: (−,+) or (+,−) )
are conjugate and
formula_209
In case of a circle the last equation collapses to formula_210
Theorem of Apollonios on conjugate diameters.
For an ellipse with semi-axes formula_171 the following is true:
Let formula_211 and formula_212 be halves of two conjugate diameters (see diagram) then
# formula_213.
# The "triangle" formula_214 with sides formula_215 (see diagram) has the constant area formula_216, which can be expressed by formula_217, too. formula_196 is the altitude of point formula_199 and formula_218 the angle between the half diameters. Hence the area of the ellipse (see section metric properties) can be written as formula_219.
# The parallelogram of tangents adjacent to the given conjugate diameters has the formula_220
Let the ellipse be in the canonical form with parametric equation
formula_221
The two points formula_222 are on conjugate diameters (see previous section). From trigonometric formulae one obtains formula_223 and
formula_224
The area of the triangle generated by formula_225 is
formula_226
and from the diagram it can be seen that the area of the parallelogram is 8 times that of formula_227. Hence
formula_228
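A short numerical check of both statements of the theorem (a sketch with arbitrarily chosen semi-axes and parameter value):
<syntaxhighlight lang="python">
import math

a, b, t = 3.0, 2.0, 0.6  # illustrative semi-axes and parameter

# Conjugate half-diameters at parameters t and t + pi/2
p1 = (a * math.cos(t), b * math.sin(t))
p2 = (-a * math.sin(t), b * math.cos(t))

sum_of_squares = p1[0]**2 + p1[1]**2 + p2[0]**2 + p2[1]**2
triangle_area = 0.5 * abs(p1[0] * p2[1] - p2[0] * p1[1])

print(sum_of_squares, a**2 + b**2)   # both 13.0
print(8 * triangle_area, 4 * a * b)  # parallelogram of tangents: both 24.0
</syntaxhighlight>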
Orthogonal tangents.
For the ellipse formula_54 the intersection points of "orthogonal" tangents lie on the circle formula_229.
This circle is called "orthoptic" or director circle of the ellipse (not to be confused with the circular directrix defined above).
Drawing ellipses.
Ellipses appear in descriptive geometry as images (parallel or central projection) of circles. There exist various tools to draw an ellipse. Computers provide the fastest and most accurate method for drawing an ellipse. However, technical tools ("ellipsographs") to draw an ellipse without a computer exist. The principle was known to the 5th century mathematician Proclus, and the tool now known as an elliptical trammel was invented by Leonardo da Vinci.
If there is no ellipsograph available, one can draw an ellipse using an approximation by the four osculating circles at the vertices.
For any method described below, knowledge of the axes and the semi-axes is necessary (or equivalently: the foci and the semi-major axis). If this presumption is not fulfilled one has to know at least two conjugate diameters. With help of Rytz's construction the axes and semi-axes can be retrieved.
de La Hire's point construction.
The following construction of single points of an ellipse is due to de La Hire. It is based on the standard parametric representation formula_230 of an ellipse:
Pins-and-string method.
The characterization of an ellipse as the locus of points so that sum of the distances to the foci is constant leads to a method of drawing one using two drawing pins, a length of string, and a pencil. In this method, pins are pushed into the paper at two points, which become the ellipse's foci. A string is tied at each end to the two pins; its length after tying is formula_3. The tip of the pencil then traces an ellipse if it is moved while keeping the string taut. Using two pegs and a rope, gardeners use this procedure to outline an elliptical flower bed—thus it is called the "gardener's ellipse". The Byzantine architect Anthemius of Tralles (c. 600) described how this method could be used to construct an elliptical reflector, and it was elaborated in a now-lost 9th-century treatise by Al-Ḥasan ibn Mūsā.
A similar method for drawing with a "closed" string is due to the Irish bishop Charles Graves.
Paper strip methods.
The two following methods rely on the parametric representation (see "Standard parametric representation", above):
formula_232
This representation can be modeled technically by two simple methods. In both cases center, the axes and semi axes formula_233 have to be known.
The first method starts with
a strip of paper of length formula_234.
The point, where the semi axes meet is marked by formula_12. If the strip slides with both ends on the axes of the desired ellipse, then point formula_12 traces the ellipse. For the proof one shows that point formula_12 has the parametric representation formula_230, where parameter formula_235 is the angle of the slope of the paper strip.
A technical realization of the motion of the paper strip can be achieved by a Tusi couple (see animation). The device is able to draw any ellipse with a "fixed" sum formula_234, which is the radius of the large circle. This restriction may be a disadvantage in real life. More flexible is the second paper strip method.
A variation of the paper strip method 1 uses the observation that the midpoint formula_236 of the paper strip is moving on the circle with center formula_237 (of the ellipse) and radius formula_238. Hence, the paper strip can be cut at point formula_236 into halves, connected again by a joint at formula_236 and the sliding end formula_239 fixed at the center formula_237 (see diagram). After this operation the movement of the unchanged half of the paper strip is unchanged. This variation requires only one sliding shoe.
The second method starts with
a strip of paper of length formula_17.
One marks the point, which divides the strip into two substrips of length formula_40 and formula_240. The strip is positioned onto the axes as described in the diagram. Then the free end of the strip traces an ellipse, while the strip is moved. For the proof, one recognizes that the tracing point can be described parametrically by formula_230, where parameter formula_235 is the angle of slope of the paper strip.
This method is the base for several "ellipsographs" (see section below).
Similar to the variation of the paper strip method 1 a "variation of the paper strip method 2" can be established (see diagram) by cutting the part between the axes into halves.
Most ellipsograph drafting instruments are based on the second paper strip method.
Approximation by osculating circles.
From "Metric properties" below, one obtains:
The diagram shows an easy way to find the centers of curvature formula_245 at vertex formula_246 and co-vertex formula_247, respectively:
The centers for the remaining vertices are found by symmetry.
With help of a French curve one draws a curve, which has smooth contact to the osculating circles.
Steiner generation.
The following method to construct single points of an ellipse relies on the Steiner generation of a conic section:
Given two pencils formula_251 of lines at two points formula_252 (all lines containing formula_253 and formula_254, respectively) and a projective but not perspective mapping formula_255 of formula_256 onto formula_257, then the intersection points of corresponding lines form a non-degenerate projective conic section.
For the generation of points of the ellipse formula_54 one uses the pencils at the vertices formula_241. Let formula_258 be an upper co-vertex of the ellipse and formula_259.
formula_12 is the center of the rectangle formula_260. The side formula_261 of the rectangle is divided into n equally spaced line segments, and this division is projected parallel to the diagonal formula_262 onto the line segment formula_263, assigning the division as shown in the diagram. The parallel projection, together with the reversal of orientation, is part of the projective mapping between the pencils at formula_246 and formula_264 that is needed. The intersection points of any two related lines formula_265 and formula_266 are points of the uniquely defined ellipse. With help of the points formula_267 the points of the second quarter of the ellipse can be determined. Analogously one obtains the points of the lower half of the ellipse.
Steiner generation can also be defined for hyperbolas and parabolas. It is sometimes called a "parallelogram method" because one can use other points rather than the vertices, which starts with a parallelogram instead of a rectangle.
As hypotrochoid.
The ellipse is a special case of the hypotrochoid when formula_268, as shown in the adjacent image. The special case of a moving circle with radius formula_269 inside a circle with radius formula_268 is called a Tusi couple.
Inscribed angles and three-point form.
Circles.
A circle with equation formula_270 is uniquely determined by three points formula_271 not on a line. A simple way to determine the parameters formula_272 uses the "inscribed angle theorem" for circles:
For four points formula_273 (see diagram) the following statement is true:
The four points are on a circle if and only if the angles at formula_274 and formula_275 are equal.
Usually one measures inscribed angles in degrees or radians "θ", but here the following measurement is more convenient:
In order to measure the angle between two lines with equations formula_276 one uses the quotient: formula_277
Inscribed angle theorem for circles.
For four points formula_273 no three of them on a line, we have the following (see diagram):
The four points are on a circle, if and only if the angles at formula_274 and formula_275 are equal. In terms of the angle measurement above, this means: formula_278
At first the measure is available only for chords not parallel to the y-axis, but the final formula works for any chord.
As a consequence, one obtains an equation for the circle determined by three non-collinear points formula_279: formula_280
Three-point form of circle equation.
For example, for formula_281 the three-point equation is:
formula_282, which can be rearranged to formula_283
Using vectors, dot products and determinants this formula can be arranged more clearly, letting formula_284:
formula_285
The center of the circle formula_70 satisfies:
formula_286
The radius is the distance between any of the three points and the center.
formula_287
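As a computational sketch of the same determination (solving the two equidistance conditions for the centre as a small linear system; the sample points are illustrative):
<syntaxhighlight lang="python">
import math

def circle_through(p1, p2, p3):
    """Centre and radius of the circle through three non-collinear points.

    The centre (x0, y0) is equidistant from the three points, which gives
    two linear equations (the perpendicular-bisector conditions)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    c1 = x2**2 + y2**2 - x1**2 - y1**2
    c2 = x3**2 + y3**2 - x1**2 - y1**2
    det = a11 * a22 - a12 * a21          # zero only for collinear points
    x0 = (c1 * a22 - c2 * a12) / det
    y0 = (a11 * c2 - a21 * c1) / det
    r = math.hypot(x1 - x0, y1 - y0)
    return (x0, y0), r

print(circle_through((2, 0), (0, 1), (0, 0)))  # centre (1.0, 0.5), r ~1.118
</syntaxhighlight>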
Ellipses.
This section considers the family of ellipses defined by equations formula_288 with a "fixed" eccentricity formula_0. It is convenient to use the parameter:
formula_289
and to write the ellipse equation as:
formula_290
where "q" is fixed and formula_291 vary over the real numbers. (Such ellipses have their axes parallel to the coordinate axes: if formula_292, the major axis is parallel to the "x"-axis; if formula_293, it is parallel to the "y"-axis.)
Like a circle, such an ellipse is determined by three points not on a line.
For this family of ellipses, one introduces the following q-analog angle measure, which is "not" a function of the usual angle measure "θ":
In order to measure an angle between two lines with equations formula_294 one uses the quotient: formula_295
Given four points formula_296, no three of them on a line (see diagram).
The four points are on an ellipse with equation formula_297 if and only if the angles at formula_274 and formula_275 are equal in the sense of the measurement above—that is, if formula_298
Inscribed angle theorem for ellipses.
At first the measure is available only for chords which are not parallel to the y-axis. But the final formula works for any chord. The proof follows from a straightforward calculation. For the direction of proof given that the points are on an ellipse, one can assume that the center of the ellipse is the origin.
As a consequence, one obtains an equation for the ellipse determined by three non-collinear points formula_279: formula_299
Three-point form of ellipse equation.
For example, for formula_300 and formula_301 one obtains the three-point form
formula_302 and after conversion formula_303
Analogously to the circle case, the equation can be written more clearly using vectors:
formula_304
where formula_305 is the modified dot product formula_306
Pole-polar relation.
Any ellipse can be described in a suitable coordinate system by an equation formula_54. The equation of the tangent at a point formula_307 of the ellipse is formula_308 If one allows point formula_307 to be an arbitrary point different from the origin, then
point formula_309 is mapped onto the line formula_310, not through the center of the ellipse.
This relation between points and lines is a bijection.
The inverse function maps
Such a relation between points and lines generated by a conic is called "pole-polar relation" or "polarity". The pole is the point; the polar the line.
By calculation one can confirm the following properties of the pole-polar relation of the ellipse:
Pole-polar relations exist for hyperbolas and parabolas as well.
Metric properties.
All metric properties given below refer to an ellipse with equation formula_54 (Eq. (1)), except for the section on the area enclosed by a tilted ellipse, where the generalized form of Eq. (1) will be given.
Area.
The area formula_322 enclosed by an ellipse is formula_323, where formula_17 and formula_40 are the lengths of the semi-major and semi-minor axes, respectively. The area formula formula_323 is intuitive: start with a circle of radius formula_40 (so its area is formula_324) and stretch it by a factor formula_325 to make an ellipse. This scales the area by the same factor: formula_326 However, using the same approach for the circumference would be fallacious – compare the integrals formula_327 and formula_328. It is also easy to rigorously prove the area formula using integration as follows. Equation (1) can be rewritten as formula_329 For formula_330 this curve is the top half of the ellipse. So twice the integral of formula_331 over the interval formula_332 will be the area of the ellipse:
formula_333
The second integral is the area of a circle of radius formula_334 that is, formula_335 So
formula_336
An ellipse defined implicitly by formula_337 has area formula_338
The area can also be expressed in terms of eccentricity and the length of the semi-major axis as formula_339 (obtained by solving for flattening, then computing the semi-minor axis).
So far we have dealt with "erect" ellipses, whose major and minor axes are parallel to the formula_44 and formula_340 axes. However, some applications require "tilted" ellipses. In charged-particle beam optics, for instance, the enclosed area of an erect or tilted ellipse is an important property of the beam, its "emittance". In this case a simple formula still applies, namely
where formula_341, formula_342 are intercepts and formula_343, formula_344 are maximum values. It follows directly from Apollonios's theorem.
Circumference.
The circumference formula_15 of an ellipse is:
formula_345
where again formula_17 is the length of the semi-major axis, formula_346 is the eccentricity, and the function formula_347 is the complete elliptic integral of the second kind,
formula_348
which is in general not an elementary function.
The circumference of the ellipse may be evaluated in terms of formula_349 using Gauss's arithmetic-geometric mean; this is a quadratically converging iterative method.
The exact infinite series is:
formula_350
where formula_351 is the double factorial (extended to negative odd integers by the recurrence relation formula_352, for formula_353). This series converges, but by expanding in terms of formula_354 James Ivory and Bessel derived an expression that converges much more rapidly:
formula_355
Srinivasa Ramanujan gave two close approximations for the circumference in §16 of "Modular Equations and Approximations to formula_255"; they are
formula_356
and
formula_357
where formula_358 takes on the same meaning as above. The errors in these approximations, which were obtained empirically, are of order formula_359 and formula_360 respectively.
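The arithmetic-geometric-mean evaluation and Ramanujan's second approximation are both short enough to sketch in code (the tolerance and sample semi-axes below are illustrative):
<syntaxhighlight lang="python">
import math

def circumference_agm(a, b, tol=1e-15):
    """C = 4 a E(e), with the complete elliptic integral E computed by the
    arithmetic-geometric-mean iteration (a0 = 1, b0 = b/a, c0 = e)."""
    if a < b:
        a, b = b, a
    x, y = 1.0, b / a
    c = math.sqrt(1.0 - (b / a) ** 2)     # eccentricity e
    csum, weight = 0.5 * c * c, 0.5       # accumulates sum of 2^(n-1) c_n^2
    while c > tol:
        x, y, c = (x + y) / 2, math.sqrt(x * y), (x - y) / 2
        weight *= 2
        csum += weight * c * c
    K = math.pi / (2 * x)                 # complete elliptic integral K(e)
    return 4 * a * K * (1.0 - csum)       # 4 a E(e)

def circumference_ramanujan(a, b):
    """Ramanujan's second approximation, quoted above."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

a, b = 3.0, 2.0
print(circumference_agm(a, b))        # ~15.8654
print(circumference_ramanujan(a, b))  # ~15.8654
</syntaxhighlight>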
Arc length.
More generally, the arc length of a portion of the circumference, as a function of the angle subtended (or of any two points on the upper half of the ellipse), is given by an incomplete elliptic integral. The upper half of an ellipse is parameterized by
formula_361
Then the arc length formula_362 from formula_363 to formula_364 is:
formula_365
This is equivalent to
formula_366
where formula_367 is the incomplete elliptic integral of the second kind with parameter formula_368
Some lower and upper bounds on the circumference of the canonical ellipse formula_369 with formula_370 are
formula_371
Here the upper bound formula_372 is the circumference of a circumscribed concentric circle passing through the endpoints of the ellipse's major axis, and the lower bound formula_373 is the perimeter of an inscribed rhombus with vertices at the endpoints of the major and the minor axes.
Curvature.
The curvature is given by formula_374 and the radius of curvature at point formula_27 is:
formula_375
Radius of curvature at the two "vertices" formula_376 and the centers of curvature:
formula_377
Radius of curvature at the two "co-vertices" formula_378 and the centers of curvature:
formula_379
In triangle geometry.
Ellipses appear in triangle geometry as
As plane sections of quadrics.
Ellipses appear as plane sections of the following quadrics:
Applications.
Physics.
Elliptical reflectors and acoustics.
If the water's surface is disturbed at one focus of an elliptical water tank, the circular waves of that disturbance, after reflecting off the walls, converge simultaneously to a single point: the "second focus". This is a consequence of the total travel length being the same along any wall-bouncing path between the two foci.
Similarly, if a light source is placed at one focus of an elliptic mirror, all light rays on the plane of the ellipse are reflected to the second focus. Since no other smooth curve has such a property, it can be used as an alternative definition of an ellipse. (In the special case of a circle with a source at its center all light would be reflected back to the center.) If the ellipse is rotated along its major axis to produce an ellipsoidal mirror (specifically, a prolate spheroid), this property holds for all rays out of the source. Alternatively, a cylindrical mirror with elliptical cross-section can be used to focus light from a linear fluorescent lamp along a line of the paper; such mirrors are used in some document scanners.
Sound waves are reflected in a similar way, so in a large elliptical room a person standing at one focus can hear a person standing at the other focus remarkably well. The effect is even more evident under a vaulted roof shaped as a section of a prolate spheroid. Such a room is called a "whisper chamber". The same effect can be demonstrated with two reflectors shaped like the end caps of such a spheroid, placed facing each other at the proper distance. Examples are the National Statuary Hall at the United States Capitol (where John Quincy Adams is said to have used this property for eavesdropping on political matters); the Mormon Tabernacle at Temple Square in Salt Lake City, Utah; at an exhibit on sound at the Museum of Science and Industry in Chicago; in front of the University of Illinois at Urbana–Champaign Foellinger Auditorium; and also at a side chamber of the Palace of Charles V, in the Alhambra.
Planetary orbits.
In the 17th century, Johannes Kepler discovered that the orbits along which the planets travel around the Sun are ellipses with the Sun [approximately] at one focus, in his first law of planetary motion. Later, Isaac Newton explained this as a corollary of his law of universal gravitation.
More generally, in the gravitational two-body problem, if the two bodies are bound to each other (that is, the total energy is negative), their orbits are similar ellipses with the common barycenter being one of the foci of each ellipse. The other focus of either ellipse has no known physical significance. The orbit of either body in the reference frame of the other is also an ellipse, with the other body at the same focus.
Keplerian elliptical orbits are the result of any radially directed attraction force whose strength is inversely proportional to the square of the distance. Thus, in principle, the motion of two oppositely charged particles in empty space would also be an ellipse. (However, this conclusion ignores losses due to electromagnetic radiation and quantum effects, which become significant when the particles are moving at high speed.)
For elliptical orbits, useful relations involving the eccentricity formula_0 are:
formula_380
where formula_381 is the radius at apoapsis (the farthest distance from the focus) and formula_382 is the radius at periapsis (the closest distance).
Also, in terms of formula_381 and formula_382, the semi-major axis formula_17 is their arithmetic mean, the semi-minor axis formula_40 is their geometric mean, and the semi-latus rectum formula_26 is their harmonic mean. In other words,
formula_383
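For illustration only (not part of the original article), these mean relations can be checked numerically; the sketch below assumes consistent distance units and uses approximate aphelion/perihelion values for Earth in millions of kilometres:

```python
import math

def orbit_elements(r_a, r_p):
    """Eccentricity, semi-major axis, semi-minor axis and semi-latus rectum
    of an elliptical orbit, from the apoapsis r_a and periapsis r_p distances."""
    e = (r_a - r_p) / (r_a + r_p)        # eccentricity
    a = (r_a + r_p) / 2.0                # arithmetic mean
    b = math.sqrt(r_a * r_p)             # geometric mean
    ell = 2.0 * r_a * r_p / (r_a + r_p)  # harmonic mean
    return e, a, b, ell

# Earth's orbit: aphelion ~152.1, perihelion ~147.1 (in 10^6 km, approximate)
print(orbit_elements(152.1, 147.1))      # eccentricity ~ 0.0167
```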
Harmonic oscillators.
The general solution for a harmonic oscillator in two or more dimensions is also an ellipse. Such is the case, for instance, of a long pendulum that is free to move in two dimensions; of a mass attached to a fixed point by a perfectly elastic spring; or of any object that moves under influence of an attractive force that is directly proportional to its distance from a fixed attractor. Unlike Keplerian orbits, however, these "harmonic orbits" have the center of attraction at the geometric center of the ellipse, and have fairly simple equations of motion.
Phase visualization.
In electronics, the relative phase of two sinusoidal signals can be compared by feeding them to the vertical and horizontal inputs of an oscilloscope. If the Lissajous figure display is an ellipse, rather than a straight line, the two signals are out of phase.
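As an illustrative sketch (not part of the original article; NumPy is assumed to be available), the classic oscilloscope reading can be simulated: for two equal-frequency sinusoids, the phase difference is recovered from where the elliptical trace crosses the vertical axis.

```python
import numpy as np

phi = np.deg2rad(40.0)                 # assumed phase difference between the two signals
t = np.linspace(0.0, 2.0 * np.pi, 100001)
x = np.sin(t)                          # horizontal oscilloscope input
y = np.sin(t + phi)                    # vertical oscilloscope input

# Classic estimate: sin(phi) = (|y| where the trace crosses x = 0) / max |y|
y_at_x0 = np.max(np.abs(y[np.isclose(x, 0.0, atol=1e-4)]))
phi_est = np.arcsin(y_at_x0 / np.max(np.abs(y)))
print(np.rad2deg(phi_est))             # approximately 40 degrees
```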
Elliptical gears.
Two non-circular gears with the same elliptical outline, each pivoting around one focus and positioned at the proper angle, turn smoothly while maintaining contact at all times. Alternatively, they can be connected by a link chain or timing belt, or in the case of a bicycle the main chainring may be elliptical, or an ovoid similar to an ellipse in form. Such elliptical gears may be used in mechanical equipment to produce variable angular speed or torque from a constant rotation of the driving axle, or in the case of a bicycle to allow a varying crank rotation speed with inversely varying mechanical advantage.
Elliptical bicycle gears make it easier for the chain to slide off the cog when changing gears.
An example gear application would be a device that winds thread onto a conical bobbin on a spinning machine. The bobbin would need to wind faster when the thread is near the apex than when it is near the base.
Statistics and finance.
In statistics, a bivariate random vector formula_384 is jointly elliptically distributed if its iso-density contours—loci of equal values of the density function—are ellipses. The concept extends to an arbitrary number of elements of the random vector, in which case in general the iso-density contours are ellipsoids. A special case is the multivariate normal distribution. The elliptical distributions are important in finance because if rates of return on assets are jointly elliptically distributed then all portfolios can be characterized completely by their mean and variance—that is, any two portfolios with identical mean and variance of portfolio return have identical distributions of portfolio return.
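As a hedged illustration (not part of the original article; the covariance values are assumptions), the elliptical shape of the iso-density contours of a bivariate normal can be checked numerically: points obtained by mapping the unit circle through a Cholesky factor of the covariance all share the same Mahalanobis distance.

```python
import numpy as np

# Covariance of a bivariate normal; its iso-density contours are the ellipses
# (x - mu)^T Sigma^{-1} (x - mu) = const. Values are illustrative assumptions.
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

# Map the unit circle through the Cholesky factor L (with L L^T = Sigma):
# the image is the 1-sigma contour of the distribution.
t = np.linspace(0.0, 2.0 * np.pi, 9)
circle = np.stack([np.cos(t), np.sin(t)])
contour = np.linalg.cholesky(Sigma) @ circle

# Every mapped point has Mahalanobis distance 1, confirming an elliptical contour.
print(np.einsum('ij,ik,kj->j', contour, Sigma_inv, contour))
```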
Computer graphics.
Drawing an ellipse as a graphics primitive is common in standard display libraries, such as the Macintosh QuickDraw API and Direct2D on Windows. Jack Bresenham at IBM is most famous for the invention of 2D drawing primitives, including line and circle drawing, using only fast integer operations such as addition and branch on carry bit. M. L. V. Pitteway extended Bresenham's algorithm for lines to conics in 1967. Another efficient generalization to draw ellipses was invented in 1984 by Jerry Van Aken.
In 1970 Danny Cohen presented at the "Computer Graphics 1970" conference in England a linear algorithm for drawing ellipses and circles. In 1971, L. B. Smith published similar algorithms for all conic sections and proved them to have good properties. These algorithms need only a few multiplications and additions to calculate each vector.
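The exact formulations of Bresenham, Pitteway, Van Aken, Cohen and Smith differ in their details; as a hedged sketch only, the widely taught midpoint variant below rasterizes the first quadrant of an axis-aligned ellipse with a few additions and multiplications per step (it is not a reproduction of any of those specific algorithms).

```python
def midpoint_ellipse(rx, ry):
    """First-quadrant integer points of the ellipse x^2/rx^2 + y^2/ry^2 = 1
    centred at the origin, chosen by the midpoint decision criterion.
    Reflect each (x, y) into the other three quadrants to draw the full curve."""
    points = []
    x, y = 0, ry

    # Region 1: the tangent slope is shallower than -1.
    p1 = ry * ry - rx * rx * ry + rx * rx / 4.0
    while 2 * ry * ry * x < 2 * rx * rx * y:
        points.append((x, y))
        if p1 < 0:
            x += 1
            p1 += 2 * ry * ry * x + ry * ry
        else:
            x += 1
            y -= 1
            p1 += 2 * ry * ry * x - 2 * rx * rx * y + ry * ry

    # Region 2: the tangent slope is steeper than -1.
    p2 = (ry * (x + 0.5)) ** 2 + (rx * (y - 1)) ** 2 - (rx * ry) ** 2
    while y >= 0:
        points.append((x, y))
        if p2 > 0:
            y -= 1
            p2 += -2 * rx * rx * y + rx * rx
        else:
            y -= 1
            x += 1
            p2 += 2 * ry * ry * x - 2 * rx * rx * y + rx * rx

    return points

print(midpoint_ellipse(8, 6))
```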
It is beneficial to use a parametric formulation in computer graphics because the density of points is greatest where there is the most curvature. Thus, the change in slope between each successive point is small, reducing the apparent "jaggedness" of the approximation.
Composite Bézier curves may also be used to draw an ellipse to sufficient accuracy, since any ellipse may be construed as an affine transformation of a circle. The spline methods used to draw a circle may be used to draw an ellipse, since the constituent Bézier curves behave appropriately under such transformations.
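As a hedged sketch (not part of the original article), the standard four-segment cubic Bézier approximation of the unit circle, built with the usual constant 4(√2 − 1)/3, can be mapped by the affine transformation (x, y) ↦ (ax, by) to obtain control points for an ellipse:

```python
import math

K = 4.0 * (math.sqrt(2.0) - 1.0) / 3.0   # ~0.5522848, the usual cubic-Bezier circle constant

def ellipse_beziers(a, b):
    """Control points of four cubic Bezier segments approximating the ellipse
    x^2/a^2 + y^2/b^2 = 1, obtained by scaling the unit-circle spline."""
    circle_segments = [                   # the four quadrants of the unit circle
        ((1, 0), (1, K), (K, 1), (0, 1)),
        ((0, 1), (-K, 1), (-1, K), (-1, 0)),
        ((-1, 0), (-1, -K), (-K, -1), (0, -1)),
        ((0, -1), (K, -1), (1, -K), (1, 0)),
    ]
    # An affine map sends Bezier control points to the control points of the image curve.
    return [tuple((a * x, b * y) for x, y in seg) for seg in circle_segments]

for segment in ellipse_beziers(3.0, 2.0):
    print(segment)
```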
Optimization theory.
It is sometimes useful to find the minimum bounding ellipse of a set of points. The ellipsoid method is quite useful for solving this problem.
See also.
Notes.
| [
{
"math_id": 0,
"text": "e"
},
{
"math_id": 1,
"text": "e = 0"
},
{
"math_id": 2,
"text": "e = 1"
},
{
"math_id": 3,
"text": "2a"
},
{
"math_id": 4,
"text": "2b"
},
{
"math_id": 5,
"text": "\\frac{x^2}{a^2}+\\frac{y^2}{b^2} = 1 ."
},
{
"math_id": 6,
"text": "a \\ge b"
},
{
"math_id": 7,
"text": "(\\pm c, 0)"
},
{
"math_id": 8,
"text": "c = \\sqrt{a^2-b^2}"
},
{
"math_id": 9,
"text": "(x,y) = (a\\cos(t),b\\sin(t)) \\quad \\text{for} \\quad 0\\leq t\\leq 2\\pi."
},
{
"math_id": 10,
"text": "e = \\frac{c}{a} = \\sqrt{1 - \\frac{b^2}{a^2}}."
},
{
"math_id": 11,
"text": "F_1, F_2"
},
{
"math_id": 12,
"text": "P"
},
{
"math_id": 13,
"text": "|PF_1|,\\ |PF_2|"
},
{
"math_id": 14,
"text": "E = \\left\\{P\\in \\R^2 \\,\\mid\\, \\left|PF_2\\right| + \\left|PF_1\\right| = 2a \\right\\} ."
},
{
"math_id": 15,
"text": "C"
},
{
"math_id": 16,
"text": "V_1,V_2"
},
{
"math_id": 17,
"text": "a"
},
{
"math_id": 18,
"text": "c"
},
{
"math_id": 19,
"text": "e = \\tfrac{c}{a}"
},
{
"math_id": 20,
"text": "F_1 = F_2"
},
{
"math_id": 21,
"text": "\\left|PF_2\\right| + \\left|PF_1\\right| = 2a"
},
{
"math_id": 22,
"text": "c_2"
},
{
"math_id": 23,
"text": "F_2"
},
{
"math_id": 24,
"text": "F_1"
},
{
"math_id": 25,
"text": "\\left|PF_1\\right| = \\left|Pc_2\\right|."
},
{
"math_id": 26,
"text": "\\ell"
},
{
"math_id": 27,
"text": "(x,y)"
},
{
"math_id": 28,
"text": "(c,0)"
},
{
"math_id": 29,
"text": "\\sqrt{(x - c)^2 + y^2 }"
},
{
"math_id": 30,
"text": "\\sqrt{(x + c)^2 + y^2}"
},
{
"math_id": 31,
"text": "(x,\\, y)"
},
{
"math_id": 32,
"text": "\\sqrt{(x - c)^2 + y^2} + \\sqrt{(x + c)^2 + y^2} = 2a\\ ."
},
{
"math_id": 33,
"text": "b^2 = a^2-c^2"
},
{
"math_id": 34,
"text": "\\frac{x^2}{a^2} + \\frac{y^2}{b^2} = 1,"
},
{
"math_id": 35,
"text": "y = \\pm\\frac{b}{a}\\sqrt{a^2 - x^2} = \\pm \\sqrt{\\left(a^2 - x^2\\right)\\left(1 - e^2\\right)}."
},
{
"math_id": 36,
"text": "a,\\; b"
},
{
"math_id": 37,
"text": "V_3 = (0,\\, b),\\; V_4 = (0,\\, -b)"
},
{
"math_id": 38,
"text": "a + ex"
},
{
"math_id": 39,
"text": "a - ex"
},
{
"math_id": 40,
"text": "b"
},
{
"math_id": 41,
"text": "a \\ge b > 0 \\ ."
},
{
"math_id": 42,
"text": "\\tfrac{x^2}{a^2} + \\tfrac{y^2}{b^2} = 1 "
},
{
"math_id": 43,
"text": "a < b"
},
{
"math_id": 44,
"text": "x"
},
{
"math_id": 45,
"text": " y"
},
{
"math_id": 46,
"text": " b."
},
{
"math_id": 47,
"text": "c = \\sqrt{a^2 - b^2}"
},
{
"math_id": 48,
"text": "e = \\frac{c}{a} = \\sqrt{1 - \\left(\\frac{b}{a}\\right)^2},"
},
{
"math_id": 49,
"text": "a > b."
},
{
"math_id": 50,
"text": "a = b"
},
{
"math_id": 51,
"text": "\\ell = \\frac{b^2}a = a \\left(1 - e^2\\right)."
},
{
"math_id": 52,
"text": "g"
},
{
"math_id": 53,
"text": "(x_1,\\, y_1)"
},
{
"math_id": 54,
"text": "\\tfrac{x^2}{a^2} + \\tfrac{y^2}{b^2} = 1"
},
{
"math_id": 55,
"text": "\\frac{x_1}{a^2}x + \\frac{y_1}{b^2}y = 1."
},
{
"math_id": 56,
"text": "\\vec x = \\begin{pmatrix} x_1 \\\\ y_1 \\end{pmatrix} + s \\left(\\begin{array}{r}\n-y_1 a^2 \\\\\n x_1 b^2\n\\end{array}\\right) , \\quad s \\in \\R. "
},
{
"math_id": 57,
"text": "\\vec{x} = \\begin{pmatrix} x_1 \\\\ y_1 \\end{pmatrix} + s \\begin{pmatrix} u \\\\ v \\end{pmatrix}"
},
{
"math_id": 58,
"text": "\\frac{x_1^2}{a^2} + \\frac{y_1^2}{b^2} = 1"
},
{
"math_id": 59,
"text": "\n \\frac{\\left(x_1 + su\\right)^2}{a^2} + \\frac{\\left(y_1 + sv\\right)^2}{b^2} = 1\\ \\quad\\Longrightarrow\\quad\n 2s\\left(\\frac{x_1u}{a^2} + \\frac{y_1v}{b^2}\\right) + s^2\\left(\\frac{u^2}{a^2} + \\frac{v^2}{b^2}\\right) = 0\\ ."
},
{
"math_id": 60,
"text": "\\frac{x_1}{a^2}u + \\frac{y_1}{b^2}v = 0."
},
{
"math_id": 61,
"text": "\\begin{pmatrix} \\frac{x_1}{a^2} & \\frac{y_1}{b^2} \\end{pmatrix}"
},
{
"math_id": 62,
"text": "\\frac{x_1}{a^2}x + \\tfrac{y_1}{b^2}y = k"
},
{
"math_id": 63,
"text": "k"
},
{
"math_id": 64,
"text": "k = 1"
},
{
"math_id": 65,
"text": "\\frac{x_ 1}{a^2}u + \\frac{y_1}{b^2}v \\ne 0."
},
{
"math_id": 66,
"text": "\\begin{pmatrix} -y_1 a^2 & x_1 b^2 \\end{pmatrix}"
},
{
"math_id": 67,
"text": "(x_1, y_1)"
},
{
"math_id": 68,
"text": "(u, v)"
},
{
"math_id": 69,
"text": "\\frac{x_1u}{a^2} + \\tfrac{y_1v}{b^2} = 0"
},
{
"math_id": 70,
"text": "\\left(x_\\circ,\\, y_\\circ\\right)"
},
{
"math_id": 71,
"text": "\\frac{\\left(x - x_\\circ\\right)^2}{a^2} + \\frac{\\left(y - y_\\circ\\right)^2}{b^2} = 1 \\ ."
},
{
"math_id": 72,
"text": "Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0"
},
{
"math_id": 73,
"text": "B^2 - 4AC < 0."
},
{
"math_id": 74,
"text": "\\Delta = \\begin{vmatrix}\n A & \\frac{1}{2}B & \\frac{1}{2}D \\\\\n \\frac{1}{2}B & C & \\frac{1}{2}E \\\\\n \\frac{1}{2}D & \\frac{1}{2}E & F\n \\end{vmatrix} = ACF + \\tfrac14 BDE - \\tfrac14(AE^2 + CD^2 + FB^2).\n"
},
{
"math_id": 75,
"text": "\\theta"
},
{
"math_id": 76,
"text": "\\begin{align}\n A &= a^2 \\sin^2\\theta + b^2 \\cos^2\\theta &\n B &= 2\\left(b^2 - a^2\\right) \\sin\\theta \\cos\\theta \\\\[1ex]\n C &= a^2 \\cos^2\\theta + b^2 \\sin^2\\theta &\n D &= -2A x_\\circ - B y_\\circ \\\\[1ex]\n E &= - B x_\\circ - 2C y_\\circ &\n F &= A x_\\circ^2 + B x_\\circ y_\\circ + C y_\\circ^2 - a^2 b^2.\n\\end{align}"
},
{
"math_id": 77,
"text": "\\frac{X^2}{a^2} + \\frac{Y^2}{b^2} = 1"
},
{
"math_id": 78,
"text": "(X,\\, Y)"
},
{
"math_id": 79,
"text": "\\begin{align}\n X &= \\left(x - x_\\circ\\right) \\cos\\theta + \\left(y - y_\\circ\\right) \\sin\\theta, \\\\\n Y &= -\\left(x - x_\\circ\\right) \\sin\\theta + \\left(y - y_\\circ\\right) \\cos\\theta.\n\\end{align}"
},
{
"math_id": 80,
"text": "\\begin{align}\n a, b &= \\frac{-\\sqrt{2 \\big(A E^2 + C D^2 - B D E + (B^2 - 4 A C) F\\big)\\big((A + C) \\pm \\sqrt{(A - C)^2 + B^2}\\big)}}{B^2 - 4 A C}, \\\\\n x_\\circ &= \\frac{2CD - BE}{B^2 - 4AC}, \\\\[5mu]\n y_\\circ &= \\frac{2AE - BD}{B^2 - 4AC}, \\\\[5mu]\n \\theta &= \\tfrac12 \\operatorname{atan2}(-B,\\, C-A),\n\\end{align}"
},
{
"math_id": 81,
"text": "\\tfrac{x^2}{a^2}+\\tfrac{y^2}{b^2} = 1"
},
{
"math_id": 82,
"text": "(x,\\, y) = (a \\cos t,\\, b \\sin t),\\ 0 \\le t < 2\\pi\\, ."
},
{
"math_id": 83,
"text": "(x(t),y(t))"
},
{
"math_id": 84,
"text": "u = \\tan\\left(\\frac{t}{2}\\right)"
},
{
"math_id": 85,
"text": "\\cos t = \\frac{1 - u^2}{1 + u^2}\\ ,\\quad \\sin t = \\frac{2u}{1 + u^2}"
},
{
"math_id": 86,
"text": "\\begin{cases}\nx(u) = a \\, \\dfrac{1 - u^2}{1 + u^2} \\\\[10mu]\ny(u) = b \\, \\dfrac{2u}{1 + u^2} \\\\[10mu]\n-\\infty < u < \\infty\n\\end{cases}"
},
{
"math_id": 87,
"text": "(-a,\\, 0)"
},
{
"math_id": 88,
"text": "u \\in [0,\\, 1],"
},
{
"math_id": 89,
"text": "u."
},
{
"math_id": 90,
"text": "\\lim_{u \\to \\pm \\infty} (x(u),\\, y(u)) = (-a,\\, 0)\\;."
},
{
"math_id": 91,
"text": "[u:v]"
},
{
"math_id": 92,
"text": "\\mathbf{P}(\\mathbf{R})"
},
{
"math_id": 93,
"text": "\n [u:v] \\mapsto \\left(a\\frac{v^2 - u^2}{v^2 + u^2}, b\\frac{2uv}{v^2 + u^2} \\right).\n"
},
{
"math_id": 94,
"text": "[1:0] \\mapsto (-a,\\, 0)."
},
{
"math_id": 95,
"text": "m"
},
{
"math_id": 96,
"text": "\\vec x(t) = (a \\cos t,\\, b \\sin t)^\\mathsf{T}"
},
{
"math_id": 97,
"text": "\\vec x'(t) = (-a\\sin t,\\, b\\cos t)^\\mathsf{T} \\quad \\rightarrow \\quad m = -\\frac{b}{a}\\cot t\\quad \\rightarrow \\quad \\cot t = -\\frac{ma}{b}."
},
{
"math_id": 98,
"text": "\\cos t = \\frac{\\cot t}{\\pm\\sqrt{1 + \\cot^2t}} = \\frac{-ma}{\\pm\\sqrt{m^2 a^2 + b^2}}\\ ,\\quad\\quad\n\\sin t = \\frac{1}{\\pm\\sqrt{1 + \\cot^2t}} = \\frac{b}{\\pm\\sqrt{m^2 a^2 + b^2}}."
},
{
"math_id": 99,
"text": "\\cos t"
},
{
"math_id": 100,
"text": "\\sin t"
},
{
"math_id": 101,
"text": "\\vec c_\\pm(m) = \\left(-\\frac{ma^2}{\\pm\\sqrt{m^2 a^2 + b^2}},\\;\\frac{b^2}{\\pm\\sqrt{m^2a^2 + b^2}}\\right),\\, m \\in \\R."
},
{
"math_id": 102,
"text": "\\vec c_+"
},
{
"math_id": 103,
"text": "\\vec c_-"
},
{
"math_id": 104,
"text": "(\\pm a,\\, 0)"
},
{
"math_id": 105,
"text": "\\vec c_\\pm(m)"
},
{
"math_id": 106,
"text": "y = mx + n"
},
{
"math_id": 107,
"text": "n"
},
{
"math_id": 108,
"text": "y = mx \\pm \\sqrt{m^2 a^2 + b^2}\\, ."
},
{
"math_id": 109,
"text": "x^2 + y^2 = 1"
},
{
"math_id": 110,
"text": "\\vec x \\mapsto \\vec f\\!_0 + A\\vec x"
},
{
"math_id": 111,
"text": "A"
},
{
"math_id": 112,
"text": "\\vec f\\!_0"
},
{
"math_id": 113,
"text": "\\vec f\\!_1, \\vec f\\!_2"
},
{
"math_id": 114,
"text": "(\\cos(t), \\sin(t))"
},
{
"math_id": 115,
"text": "0 \\leq t \\leq 2\\pi"
},
{
"math_id": 116,
"text": "\\vec x = \\vec p(t) = \\vec f\\!_0 + \\vec f\\!_1 \\cos t + \\vec f\\!_2 \\sin t \\, ."
},
{
"math_id": 117,
"text": "\\vec f\\!_1,\\; \\vec f\\!_2"
},
{
"math_id": 118,
"text": "\\vec p(t_0),\\;\\vec p\\left(t_0 \\pm \\tfrac{\\pi}{2}\\right),\\; \\vec p\\left(t_0 + \\pi\\right)"
},
{
"math_id": 119,
"text": "t = t_0"
},
{
"math_id": 120,
"text": "\\cot (2t_0) = \\frac{\\vec f\\!_1^{\\,2} - \\vec f\\!_2^{\\,2}}{2\\vec f\\!_1 \\cdot \\vec f\\!_2}."
},
{
"math_id": 121,
"text": "\\vec f\\!_1 \\cdot \\vec f\\!_2 = 0"
},
{
"math_id": 122,
"text": "t_0 = 0"
},
{
"math_id": 123,
"text": "\\vec p(t)"
},
{
"math_id": 124,
"text": "\\vec p\\,'(t) = -\\vec f\\!_1\\sin t + \\vec f\\!_2\\cos t \\ ."
},
{
"math_id": 125,
"text": "0 = \\vec p'(t) \\cdot \\left(\\vec p(t) -\\vec f\\!_0\\right) = \\left(-\\vec f\\!_1\\sin t + \\vec f\\!_2\\cos t\\right) \\cdot \\left(\\vec f\\!_1 \\cos t + \\vec f\\!_2 \\sin t\\right)."
},
{
"math_id": 126,
"text": "\\; \\cos^2 t -\\sin^2 t=\\cos 2t,\\ \\ 2\\sin t \\cos t = \\sin 2t\\;"
},
{
"math_id": 127,
"text": "t = t_0\\; ."
},
{
"math_id": 128,
"text": "\\;\\vec x = \\vec f_0 +\\vec f_1 \\cos t +\\vec f_2 \\sin t\\; "
},
{
"math_id": 129,
"text": "A=\\pi \\left|\\det(\\vec f_1, \\vec f_2)\\right| ."
},
{
"math_id": 130,
"text": "\\; M=\\vec f_1^2+\\vec f_2^2, \\ N = \\left|\\det(\\vec f_1,\\vec f_2)\\right| "
},
{
"math_id": 131,
"text": "a^2+b^2=M, \\quad ab=N \\ ."
},
{
"math_id": 132,
"text": "a,b"
},
{
"math_id": 133,
"text": "\\begin{align}\na &= \\frac{1}{2}(\\sqrt{M+2N}+\\sqrt{M-2N}) \\\\[1ex]\nb &= \\frac{1}{2}(\\sqrt{M+2N}-\\sqrt{M-2N})\\, .\n\\end{align}"
},
{
"math_id": 134,
"text": "\\; \\cos t,\\sin t\\;"
},
{
"math_id": 135,
"text": "\\;\\cos^2t+\\sin^2t -1=0\\; "
},
{
"math_id": 136,
"text": "\\det{\\left(\\vec x\\!-\\!\\vec f\\!_0,\\vec f\\!_2\\right)^2} + \\det{\\left(\\vec f\\!_1,\\vec x\\!-\\!\\vec f\\!_0\\right)^2} - \\det{\\left(\\vec f\\!_1,\\vec f\\!_2\\right)^2} = 0."
},
{
"math_id": 137,
"text": "x^2+2cxy+d^2y^2-e^2=0\\ ,"
},
{
"math_id": 138,
"text": "\\; d^2-c^2 >0 \\; ,"
},
{
"math_id": 139,
"text": "\\vec f_1={e \\choose 0},\\quad \\vec f_2=\\frac{e}{\\sqrt{d^2-c^2}}{-c\\choose 1} "
},
{
"math_id": 140,
"text": "\\;x^2+2xy+3y^2-1=0\\; "
},
{
"math_id": 141,
"text": "\\vec f_1={1 \\choose 0},\\quad \\vec f_2=\\frac{1}{\\sqrt{2}}{-1\\choose 1} ."
},
{
"math_id": 142,
"text": "\\vec f_0= {0\\choose 0},\\;\\vec f_1= a {\\cos \\theta\\choose \\sin \\theta},\\;\\vec f_2= b{-\\sin \\theta\\choose \\;\\cos \\theta}"
},
{
"math_id": 143,
"text": "\\begin{align}\nx &= x_\\theta(t) = a\\cos\\theta\\cos t - b\\sin\\theta\\sin t \\, , \\\\\ny &= y_\\theta(t) = a\\sin\\theta\\cos t + b\\cos\\theta\\sin t \\, .\n\\end{align}"
},
{
"math_id": 144,
"text": "\\vec f\\!_0, \\vec f\\!_1, \\vec f\\!_2"
},
{
"math_id": 145,
"text": "r(\\theta) = \\frac{ab}{\\sqrt{(b \\cos \\theta)^2 + (a\\sin \\theta)^2}}=\\frac{b}{\\sqrt{1 - (e\\cos\\theta)^2}}"
},
{
"math_id": 146,
"text": "\\theta = 0"
},
{
"math_id": 147,
"text": "r(\\theta)=\\frac{a (1-e^2)}{1\\pm e\\cos\\theta }"
},
{
"math_id": 148,
"text": "\\ell=a (1-e^2)"
},
{
"math_id": 149,
"text": "d = \\frac{a^2}{c} = \\frac{a}{e}"
},
{
"math_id": 150,
"text": "\\frac{\\left|PF_1\\right|}{\\left|Pl_1\\right|} = \\frac{\\left|PF_2\\right|}{\\left|Pl_2\\right|} = e = \\frac{c}{a}\\ ."
},
{
"math_id": 151,
"text": "F_1, l_1"
},
{
"math_id": 152,
"text": "\\left|PF_1\\right|^2 = (x - c)^2 + y^2,\\ \\left|Pl_1\\right|^2 = \\left(x - \\tfrac{a^2}{c}\\right)^2"
},
{
"math_id": 153,
"text": "y^2 = b^2 - \\tfrac{b^2}{a^2}x^2"
},
{
"math_id": 154,
"text": "\\left|PF_1\\right|^2 - \\frac{c^2}{a^2}\\left|Pl_1\\right|^2 = 0\\, ."
},
{
"math_id": 155,
"text": "F"
},
{
"math_id": 156,
"text": "l"
},
{
"math_id": 157,
"text": "0 < e < 1,"
},
{
"math_id": 158,
"text": "e,"
},
{
"math_id": 159,
"text": "E = \\left\\{P\\ \\left|\\ \\frac{|PF|}{|Pl|} = e\\right.\\right\\}."
},
{
"math_id": 160,
"text": "e > 1"
},
{
"math_id": 161,
"text": "F = (f,\\, 0),\\ e > 0"
},
{
"math_id": 162,
"text": "(0,\\, 0)"
},
{
"math_id": 163,
"text": "x = -\\tfrac{f}{e}"
},
{
"math_id": 164,
"text": "P = (x,\\, y)"
},
{
"math_id": 165,
"text": "|PF|^2 = e^2|Pl|^2"
},
{
"math_id": 166,
"text": "(x - f)^2 + y^2 = e^2\\left(x + \\frac{f}{e}\\right)^2 = (ex + f)^2"
},
{
"math_id": 167,
"text": "x^2\\left(e^2 - 1\\right) + 2xf(1 + e) - y^2 = 0."
},
{
"math_id": 168,
"text": "p = f(1 + e)"
},
{
"math_id": 169,
"text": "x^2\\left(e^2 - 1\\right) + 2px - y^2 = 0."
},
{
"math_id": 170,
"text": "e < 1"
},
{
"math_id": 171,
"text": "a,\\, b"
},
{
"math_id": 172,
"text": "1 - e^2 = \\tfrac{b^2}{a^2}, \\text{ and }\\ p = \\tfrac{b^2}{a}"
},
{
"math_id": 173,
"text": "\\frac{(x - a)^2}{a^2} + \\frac{y^2}{b^2} = 1\\, ,"
},
{
"math_id": 174,
"text": "(a,\\, 0)"
},
{
"math_id": 175,
"text": "c\\cdot\\tfrac{a^2}{c}=a^2"
},
{
"math_id": 176,
"text": "L_1"
},
{
"math_id": 177,
"text": "l_1"
},
{
"math_id": 178,
"text": "x^2+y^2=a^2"
},
{
"math_id": 179,
"text": "F = \\left(f_1,\\, f_2\\right)"
},
{
"math_id": 180,
"text": "ux + vy + w = 0"
},
{
"math_id": 181,
"text": "\\left(x - f_1\\right)^2 + \\left(y - f_2\\right)^2 = e^2 \\frac{\\left(ux + vy + w\\right)^2}{u^2 + v^2}\\ ."
},
{
"math_id": 182,
"text": "|Pl|"
},
{
"math_id": 183,
"text": "\\overline{PF_1},\\, \\overline{PF_2}"
},
{
"math_id": 184,
"text": "L"
},
{
"math_id": 185,
"text": "\\overline{PF_2}"
},
{
"math_id": 186,
"text": "w"
},
{
"math_id": 187,
"text": "\\overline{PF_1}"
},
{
"math_id": 188,
"text": "\\overline{PF_2}."
},
{
"math_id": 189,
"text": "Q"
},
{
"math_id": 190,
"text": "w."
},
{
"math_id": 191,
"text": "2a = \\left|LF_2\\right| < {}"
},
{
"math_id": 192,
"text": "\\left|QF_2\\right| + \\left|QL\\right| = {}"
},
{
"math_id": 193,
"text": "\\left|QF_2\\right| + \\left|QF_1\\right|,"
},
{
"math_id": 194,
"text": "Q,"
},
{
"math_id": 195,
"text": "d_1,\\, d_2"
},
{
"math_id": 196,
"text": "d_1"
},
{
"math_id": 197,
"text": "d_2\\ ."
},
{
"math_id": 198,
"text": "\\overline{P_1 Q_1},\\, \\overline{P_2 Q_2}"
},
{
"math_id": 199,
"text": "P_1"
},
{
"math_id": 200,
"text": "Q_1"
},
{
"math_id": 201,
"text": "\\overline{P_2 Q_2}"
},
{
"math_id": 202,
"text": "\\vec x = \\vec p(t) = \\vec f\\!_0 +\\vec f\\!_1 \\cos t + \\vec f\\!_2 \\sin t,"
},
{
"math_id": 203,
"text": "\\vec p(t),\\ \\vec p(t + \\pi)"
},
{
"math_id": 204,
"text": "\\vec p\\left(t + \\tfrac{\\pi}{2}\\right),\\ \\vec p\\left(t - \\tfrac{\\pi}{2}\\right)"
},
{
"math_id": 205,
"text": "(a\\cos t,b\\sin t)"
},
{
"math_id": 206,
"text": "\\tfrac{x^2}{a^2}+\\tfrac{y^2}{b^2}=1"
},
{
"math_id": 207,
"text": "(x_1,y_1)=(\\pm a\\cos t,\\pm b\\sin t)\\quad "
},
{
"math_id": 208,
"text": "(x_2,y_2)=({\\color{red}{\\mp}} a\\sin t,\\pm b\\cos t)\\quad "
},
{
"math_id": 209,
"text": "\\frac{x_1x_2}{a^2}+\\frac{y_1y_2}{b^2}=0\\ ."
},
{
"math_id": 210,
"text": "x_1x_2+y_1y_2=0\\ . "
},
{
"math_id": 211,
"text": "c_1 "
},
{
"math_id": 212,
"text": " c_2"
},
{
"math_id": 213,
"text": "c_1^2 + c_2^2 = a^2 + b^2"
},
{
"math_id": 214,
"text": "O,P_1,P_2"
},
{
"math_id": 215,
"text": "c_1,\\, c_2"
},
{
"math_id": 216,
"text": "A_\\Delta = \\frac{1}{2}ab"
},
{
"math_id": 217,
"text": "A_\\Delta=\\tfrac 1 2 c_2d_1=\\tfrac 1 2 c_1c_2\\sin\\alpha"
},
{
"math_id": 218,
"text": "\\alpha"
},
{
"math_id": 219,
"text": "A_{el}=\\pi ab=\\pi c_2d_1=\\pi c_1c_2\\sin\\alpha"
},
{
"math_id": 220,
"text": "\\text{Area}_{12} = 4ab\\ ."
},
{
"math_id": 221,
"text": "\\vec p(t) = (a\\cos t,\\, b\\sin t)."
},
{
"math_id": 222,
"text": "\\vec c_1 = \\vec p(t),\\ \\vec c_2 = \\vec p\\left(t + \\frac{\\pi}{2}\\right)"
},
{
"math_id": 223,
"text": "\\vec c_2 = (-a\\sin t,\\, b\\cos t)^\\mathsf{T}"
},
{
"math_id": 224,
"text": "\\left|\\vec c_1\\right|^2 + \\left|\\vec c_2\\right|^2 = \\cdots = a^2 + b^2\\, ."
},
{
"math_id": 225,
"text": "\\vec c_1,\\, \\vec c_2"
},
{
"math_id": 226,
"text": "A_\\Delta = \\tfrac{1}{2} \\det\\left(\\vec c_1,\\, \\vec c_2\\right) = \\cdots = \\tfrac{1}{2}ab"
},
{
"math_id": 227,
"text": "A_\\Delta"
},
{
"math_id": 228,
"text": "\\text{Area}_{12} = 4ab\\, ."
},
{
"math_id": 229,
"text": "x^2 + y^2 = a^2 + b^2"
},
{
"math_id": 230,
"text": "(a\\cos t,\\, b\\sin t)"
},
{
"math_id": 231,
"text": "B"
},
{
"math_id": 232,
"text": "(a\\cos t,\\, b\\sin t)"
},
{
"math_id": 233,
"text": " a,\\, b"
},
{
"math_id": 234,
"text": "a + b"
},
{
"math_id": 235,
"text": "t"
},
{
"math_id": 236,
"text": "N"
},
{
"math_id": 237,
"text": "M"
},
{
"math_id": 238,
"text": "\\tfrac{a + b}{2}"
},
{
"math_id": 239,
"text": "K"
},
{
"math_id": 240,
"text": "a - b"
},
{
"math_id": 241,
"text": "V_1,\\, V_2"
},
{
"math_id": 242,
"text": "\\tfrac{b^2}{a}"
},
{
"math_id": 243,
"text": "V_3,\\, V_4"
},
{
"math_id": 244,
"text": "\\tfrac{a^2}{b}\\ ."
},
{
"math_id": 245,
"text": "C_1 = \\left(a - \\tfrac{b^2}{a}, 0\\right),\\, C_3 = \\left(0, b - \\tfrac{a^2}{b}\\right)"
},
{
"math_id": 246,
"text": "V_1"
},
{
"math_id": 247,
"text": "V_3"
},
{
"math_id": 248,
"text": "H = (a,\\, b)"
},
{
"math_id": 249,
"text": "V_1 V_3\\ ,"
},
{
"math_id": 250,
"text": "H"
},
{
"math_id": 251,
"text": "B(U),\\, B(V)"
},
{
"math_id": 252,
"text": "U,\\, V"
},
{
"math_id": 253,
"text": "U"
},
{
"math_id": 254,
"text": "V"
},
{
"math_id": 255,
"text": "\\pi"
},
{
"math_id": 256,
"text": "B(U)"
},
{
"math_id": 257,
"text": "B(V)"
},
{
"math_id": 258,
"text": "P = (0,\\, b)"
},
{
"math_id": 259,
"text": "A = (-a,\\, 2b),\\, B = (a,\\,2b)"
},
{
"math_id": 260,
"text": "V_1,\\, V_2,\\, B,\\, A"
},
{
"math_id": 261,
"text": "\\overline{AB}"
},
{
"math_id": 262,
"text": "AV_2"
},
{
"math_id": 263,
"text": "\\overline{V_1B}"
},
{
"math_id": 264,
"text": "V_2"
},
{
"math_id": 265,
"text": "V_1 B_i"
},
{
"math_id": 266,
"text": "V_2 A_i"
},
{
"math_id": 267,
"text": "C_1,\\, \\dotsc"
},
{
"math_id": 268,
"text": "R = 2r"
},
{
"math_id": 269,
"text": "r"
},
{
"math_id": 270,
"text": "\\left(x - x_\\circ\\right)^2 + \\left(y - y_\\circ\\right)^2 = r^2"
},
{
"math_id": 271,
"text": "\\left(x_1, y_1\\right),\\; \\left(x_2,\\,y_2\\right),\\; \\left(x_3,\\, y_3\\right)"
},
{
"math_id": 272,
"text": "x_\\circ,y_\\circ,r"
},
{
"math_id": 273,
"text": "P_i = \\left(x_i,\\, y_i\\right),\\ i = 1,\\, 2,\\, 3,\\, 4,\\,"
},
{
"math_id": 274,
"text": "P_3"
},
{
"math_id": 275,
"text": "P_4"
},
{
"math_id": 276,
"text": "y = m_1x + d_1,\\ y = m_2x + d_2,\\ m_1 \\ne m_2,"
},
{
"math_id": 277,
"text": "\\frac{1 + m_1 m_2}{m_2 - m_1} = \\cot\\theta\\ ."
},
{
"math_id": 278,
"text": "\n\\frac{(x_4 - x_1)(x_4 - x_2) + (y_4 - y_1)(y_4 - y_2)}\n {(y_4 - y_1)(x_4 - x_2) - (y_4 - y_2)(x_4 - x_1)} =\n\\frac{(x_3 - x_1)(x_3 - x_2) + (y_3 - y_1)(y_3 - y_2)}\n {(y_3 - y_1)(x_3 - x_2) - (y_3 - y_2)(x_3 - x_1)}.\n"
},
{
"math_id": 279,
"text": "P_i = \\left(x_i,\\, y_i\\right)"
},
{
"math_id": 280,
"text": "\n \\frac{({\\color{red}x} - x_1)({\\color{red}x} - x_2) + ({\\color{red}y} - y_1)({\\color{red}y} - y_2)}\n {({\\color{red}y} - y_1)({\\color{red}x} - x_2) - ({\\color{red}y} - y_2)({\\color{red}x} - x_1)} =\n \\frac{(x_3 - x_1)(x_3 - x_2) + (y_3 - y_1)(y_3 - y_2)}\n {(y_3 - y_1)(x_3 - x_2) - (y_3 - y_2)(x_3 - x_1)}.\n"
},
{
"math_id": 281,
"text": "P_1 = (2,\\, 0),\\; P_2 = (0,\\, 1),\\; P_3 = (0,\\,0)"
},
{
"math_id": 282,
"text": "\\frac{(x - 2)x + y(y - 1)}{yx - (y - 1)(x - 2)} = 0"
},
{
"math_id": 283,
"text": "(x - 1)^2 + \\left(y - \\tfrac{1}{2}\\right)^2 = \\tfrac{5}{4}\\ ."
},
{
"math_id": 284,
"text": "\\vec x = (x,\\, y)"
},
{
"math_id": 285,
"text": "\n \\frac{\\left({\\color{red}\\vec x} - \\vec x_1\\right) \\cdot \\left({\\color{red}\\vec x} - \\vec x_2\\right)}\n {\\det\\left({\\color{red}\\vec x} - \\vec x_1,{\\color{red}\\vec x} - \\vec x_2\\right)} =\n \\frac{\\left(\\vec x_3 - \\vec x_1\\right) \\cdot \\left(\\vec x_3 - \\vec x_2\\right)}\n {\\det\\left(\\vec x_3 - \\vec x_1, \\vec x_3 - \\vec x_2\\right)}.\n"
},
{
"math_id": 286,
"text": "\\begin{bmatrix}\n 1 & \\dfrac{y_1 - y_2}{x_1 - x_2} \\\\[2ex]\n \\dfrac{x_1 - x_3}{y_1 - y_3} & 1\n \\end{bmatrix}\n \\begin{bmatrix} x_\\circ \\\\[1ex] y_\\circ \\end{bmatrix}\n =\n \\begin{bmatrix}\n \\dfrac{x_1^2 - x_2^2 + y_1^2 - y_2^2}{2(x_1 - x_2)} \\\\[2ex]\n \\dfrac{y_1^2 - y_3^2 + x_1^2 - x_3^2}{2(y_1 - y_3)}\n \\end{bmatrix}.\n"
},
{
"math_id": 287,
"text": "\n r = \\sqrt{\\left(x_1 - x_\\circ\\right)^2 + \\left(y_1 - y_\\circ\\right)^2}\n = \\sqrt{\\left(x_2 - x_\\circ\\right)^2 + \\left(y_2 - y_\\circ\\right)^2}\n = \\sqrt{\\left(x_3 - x_\\circ\\right)^2 + \\left(y_3 - y_\\circ\\right)^2}.\n"
},
{
"math_id": 288,
"text": "\\tfrac{\\left(x - x_\\circ\\right)^2}{a^2} + \\tfrac{\\left(y - y_\\circ\\right)^2}{b^2} = 1"
},
{
"math_id": 289,
"text": "{\\color{blue}q} = \\frac{a^2}{b^2} = \\frac{1}{1 - e^2},"
},
{
"math_id": 290,
"text": "\\left(x - x_\\circ\\right)^2 + {\\color{blue}q}\\, \\left(y - y_\\circ\\right)^2 = a^2,"
},
{
"math_id": 291,
"text": "x_\\circ,\\, y_\\circ,\\, a"
},
{
"math_id": 292,
"text": "q < 1"
},
{
"math_id": 293,
"text": "q > 1"
},
{
"math_id": 294,
"text": "y = m_1x + d_1,\\ y = m_2x + d_2,\\ m_1 \\ne m_2"
},
{
"math_id": 295,
"text": "\\frac{1 + {\\color{blue}q}\\; m_1 m_2}{m_2 - m_1}\\ ."
},
{
"math_id": 296,
"text": "P_i = \\left(x_i,\\, y_i\\right),\\ i = 1,\\, 2,\\, 3,\\, 4"
},
{
"math_id": 297,
"text": "(x - x_\\circ)^2 + {\\color{blue}q}\\, (y - y_\\circ)^2 = a^2"
},
{
"math_id": 298,
"text": "\n\\frac{(x_4 - x_1)(x_4 - x_2) + {\\color{blue}q}\\;(y_4 - y_1)(y_4 - y_2)}\n {(y_4 - y_1)(x_4 - x_2) - (y_4 - y_2)(x_4 - x_1)} =\n\\frac{(x_3 - x_1)(x_3 - x_2) + {\\color{blue}q}\\;(y_3 - y_1)(y_3 - y_2)}\n {(y_3 - y_1)(x_3 - x_2) - (y_3 - y_2)(x_3 - x_1)}\\ .\n"
},
{
"math_id": 299,
"text": "\n\\frac{({\\color{red}x} - x_1)({\\color{red}x} - x_2) + {\\color{blue}q}\\;({\\color{red}y} - y_1)({\\color{red}y} - y_2)}\n {({\\color{red}y} - y_1)({\\color{red}x} - x_2) - ({\\color{red}y} - y_2)({\\color{red}x} - x_1)} =\n\\frac{(x_3 - x_1)(x_3 - x_2) + {\\color{blue}q}\\;(y_3 - y_1)(y_3 - y_2)}\n {(y_3 - y_1)(x_3 - x_2) - (y_3 - y_2)(x_3 - x_1)}\\ .\n"
},
{
"math_id": 300,
"text": "P_1 = (2,\\, 0),\\; P_2 = (0,\\,1),\\; P_3 = (0,\\, 0)"
},
{
"math_id": 301,
"text": "q = 4"
},
{
"math_id": 302,
"text": "\\frac{(x - 2)x + 4y(y - 1)}{yx - (y - 1)(x - 2)} = 0"
},
{
"math_id": 303,
"text": "\\frac{(x - 1)^2}{2} + \\frac{\\left(y - \\frac{1}{2}\\right)^2}{\\frac{1}{2}} = 1."
},
{
"math_id": 304,
"text": "\n \\frac{\\left({\\color{red}\\vec x} - \\vec x_1\\right)*\\left({\\color{red}\\vec x} - \\vec x_2\\right)}\n {\\det\\left({\\color{red}\\vec x} - \\vec x_1,{\\color{red}\\vec x} - \\vec x_2\\right)}\n = \\frac{\\left(\\vec x_3 - \\vec x_1\\right)*\\left(\\vec x_3 - \\vec x_2\\right)}\n {\\det\\left(\\vec x_3 - \\vec x_1, \\vec x_3 - \\vec x_2\\right)},\n"
},
{
"math_id": 305,
"text": "*"
},
{
"math_id": 306,
"text": "\\vec u*\\vec v = u_x v_x + {\\color{blue}q}\\,u_y v_y."
},
{
"math_id": 307,
"text": "P_1 = \\left(x_1,\\, y_1\\right)"
},
{
"math_id": 308,
"text": "\\tfrac{x_1x}{a^2} + \\tfrac{y_1y}{b^2} = 1."
},
{
"math_id": 309,
"text": "P_1 = \\left(x_1,\\, y_1\\right) \\neq (0,\\, 0)"
},
{
"math_id": 310,
"text": "\\tfrac{x_1 x}{a^2} + \\tfrac{y_1 y}{b^2} = 1"
},
{
"math_id": 311,
"text": "y = mx + d,\\ d \\ne 0"
},
{
"math_id": 312,
"text": "\\left(-\\tfrac{ma^2}{d},\\, \\tfrac{b^2}{d}\\right)"
},
{
"math_id": 313,
"text": "x = c,\\ c \\ne 0"
},
{
"math_id": 314,
"text": "\\left(\\tfrac{a^2}{c},\\, 0\\right)."
},
{
"math_id": 315,
"text": "P_1,\\, p_1"
},
{
"math_id": 316,
"text": "P_2,\\, p_2"
},
{
"math_id": 317,
"text": "F_1,\\, l_1"
},
{
"math_id": 318,
"text": "(c,\\, 0)"
},
{
"math_id": 319,
"text": "(-c,\\, 0)"
},
{
"math_id": 320,
"text": "x = \\tfrac{a^2}{c}"
},
{
"math_id": 321,
"text": "x = -\\tfrac{a^2}{c}"
},
{
"math_id": 322,
"text": "A_\\text{ellipse}"
},
{
"math_id": 323,
"text": "\\pi a b"
},
{
"math_id": 324,
"text": "\\pi b^2"
},
{
"math_id": 325,
"text": "a/b"
},
{
"math_id": 326,
"text": "\\pi b^2(a/b) = \\pi a b."
},
{
"math_id": 327,
"text": "\\int f(x)\\, dx"
},
{
"math_id": 328,
"text": " \\int \\sqrt{1+f'^2(x)}\\, dx"
},
{
"math_id": 329,
"text": "y(x) = b \\sqrt{1 - x^2 / a^2}."
},
{
"math_id": 330,
"text": "x\\in[-a,a],"
},
{
"math_id": 331,
"text": "y(x)"
},
{
"math_id": 332,
"text": "[-a,a]"
},
{
"math_id": 333,
"text": "\\begin{align}\n A_\\text{ellipse} &= \\int_{-a}^a 2b\\sqrt{1 - \\frac{x^2}{a^2}}\\,dx\\\\\n &= \\frac ba \\int_{-a}^a 2\\sqrt{a^2 - x^2}\\,dx.\n\\end{align}"
},
{
"math_id": 334,
"text": "a,"
},
{
"math_id": 335,
"text": "\\pi a^2."
},
{
"math_id": 336,
"text": "A_\\text{ellipse} = \\frac{b}{a}\\pi a^2 = \\pi ab."
},
{
"math_id": 337,
"text": "Ax^2+ Bxy + Cy^2 = 1 "
},
{
"math_id": 338,
"text": "2\\pi / \\sqrt{4AC - B^2}."
},
{
"math_id": 339,
"text": "a^2\\pi\\sqrt{1-e^2}"
},
{
"math_id": 340,
"text": "y"
},
{
"math_id": 341,
"text": "y_{\\text{int}}"
},
{
"math_id": 342,
"text": "x_{\\text{int}}"
},
{
"math_id": 343,
"text": "x_{\\text{max}}"
},
{
"math_id": 344,
"text": "y_{\\text{max}}"
},
{
"math_id": 345,
"text": "C \\,=\\, 4a\\int_0^{\\pi/2}\\sqrt {1 - e^2 \\sin^2\\theta}\\ d\\theta \\,=\\, 4 a \\,E(e)"
},
{
"math_id": 346,
"text": "e=\\sqrt{1 - b^2/a^2}"
},
{
"math_id": 347,
"text": "E"
},
{
"math_id": 348,
"text": "E(e) \\,=\\, \\int_0^{\\pi/2}\\sqrt {1 - e^2 \\sin^2\\theta}\\ d\\theta"
},
{
"math_id": 349,
"text": "E(e)"
},
{
"math_id": 350,
"text": "\\begin{align}\n C &= 2\\pi a \\left[{1 - \\left(\\frac{1}{2}\\right)^2e^2 - \\left(\\frac{1\\cdot 3}{2\\cdot 4}\\right)^2\\frac{e^4}{3} - \\left(\\frac{1\\cdot 3\\cdot 5}{2\\cdot 4\\cdot 6}\\right)^2\\frac{e^6}{5} - \\cdots}\\right] \\\\\n &= 2\\pi a \\left[1 - \\sum_{n=1}^\\infty \\left(\\frac{(2n-1)!!}{(2n)!!}\\right)^2 \\frac{e^{2n}}{2n-1}\\right] \\\\\n &= -2\\pi a \\sum_{n=0}^\\infty \\left(\\frac{(2n-1)!!}{(2n)!!}\\right)^2 \\frac{e^{2n}}{2n-1},\n\\end{align}\n"
},
{
"math_id": 351,
"text": "n!!"
},
{
"math_id": 352,
"text": "(2n-1)!! = (2n+1)!!/(2n+1)"
},
{
"math_id": 353,
"text": "n \\le 0"
},
{
"math_id": 354,
"text": "h = (a-b)^2 / (a+b)^2,"
},
{
"math_id": 355,
"text": "\\begin{align}\n C &= \\pi (a+b) \\sum_{n=0}^\\infty \\left(\\frac{(2n-3)!!}{2^n n!}\\right)^2 h^n \\\\\n &= \\pi (a+b) \\left[1 + \\frac{h}{4} + \\sum_{n=2}^\\infty \\left(\\frac{(2n-3)!!}{2^n n!}\\right)^2 h^n\\right] \\\\\n &= \\pi (a+b) \\left[1 + \\sum_{n=1}^\\infty \\left(\\frac{(2n-1)!!}{2^n n!}\\right)^2 \\frac{h^n}{(2n-1)^2}\\right].\n\\end{align}"
},
{
"math_id": 356,
"text": "C \\approx \\pi \\biggl[3(a + b) - \\sqrt{(3a + b)(a + 3b)} \\biggr] = \\pi \\biggl[3(a + b) - \\sqrt{10ab + 3\\left(a^2 + b^2\\right)}\\biggr]"
},
{
"math_id": 357,
"text": "C\\approx\\pi\\left(a+b\\right)\\left(1+\\frac{3h}{10+\\sqrt{4-3h}}\\right),"
},
{
"math_id": 358,
"text": "h"
},
{
"math_id": 359,
"text": "h^3"
},
{
"math_id": 360,
"text": "h^5,"
},
{
"math_id": 361,
"text": " y = b\\ \\sqrt{ 1-\\frac{x^{2}}{a^{2}}\\ } ~."
},
{
"math_id": 362,
"text": "s"
},
{
"math_id": 363,
"text": "\\ x_{1}\\ "
},
{
"math_id": 364,
"text": "\\ x_{2}\\ "
},
{
"math_id": 365,
"text": "s = -b\\int_{\\arccos \\frac{x_1}{a}}^{\\arccos \\frac{x_2}{a}} \\sqrt{\\ 1 + \\left( \\tfrac{a^2}{b^2} - 1 \\right)\\ \\sin^2 z ~} \\; dz ~."
},
{
"math_id": 366,
"text": " s = b\\ \\left[ \\; E\\left(z \\;\\Biggl|\\; 1 - \\frac{a^2}{b^2} \\right) \\; \\right]^{\\arccos \\frac{x_1}{a}}_{z\\ =\\ \\arccos \\frac{x_2}{a}} "
},
{
"math_id": 367,
"text": "E(z \\mid m)"
},
{
"math_id": 368,
"text": "m=k^{2}."
},
{
"math_id": 369,
"text": "\\ x^2/a^2 + y^2/b^2 = 1\\ "
},
{
"math_id": 370,
"text": "\\ a \\geq b\\ "
},
{
"math_id": 371,
"text": "\\begin{align}\n 2\\pi b &\\le C \\le 2\\pi a\\ , \\\\\n \\pi (a+b) &\\le C \\le 4(a+b)\\ , \\\\\n 4\\sqrt{a^2+b^2\\ } &\\le C \\le \\sqrt{2\\ } \\pi \\sqrt{a^2 + b^2\\ } ~.\n\\end{align}"
},
{
"math_id": 372,
"text": "\\ 2\\pi a\\ "
},
{
"math_id": 373,
"text": "4\\sqrt{a^2+b^2}"
},
{
"math_id": 374,
"text": "\\kappa = \\frac{1}{a^2 b^2}\\left(\\frac{x^2}{a^4}+\\frac{y^2}{b^4}\\right)^{-\\frac{3}{2}}\\ ,"
},
{
"math_id": 375,
"text": "\\rho = a^2 b^2 \\left(\\frac{x^{2}}{a^4} + \\frac{y^{2}}{b^4}\\right)^\\frac{3}{2} = \\frac{1}{a^4 b^4} \\sqrt{\\left(a^4 y^{2} + b^4 x^{2}\\right)^3} \\ ."
},
{
"math_id": 376,
"text": "(\\pm a,0)"
},
{
"math_id": 377,
"text": "\\rho_0 = \\frac{b^2}{a}=p\\ , \\qquad \\left(\\pm\\frac{c^2}{a}\\,\\bigg|\\,0\\right)\\ ."
},
{
"math_id": 378,
"text": "(0,\\pm b)"
},
{
"math_id": 379,
"text": "\\rho_1 = \\frac{a^2}{b}\\ , \\qquad \\left(0\\,\\bigg|\\,\\pm\\frac{c^2}{b}\\right)\\ ."
},
{
"math_id": 380,
"text": "\\begin{align}\n e &= \\frac{r_a - r_p}{r_a + r_p} = \\frac{r_a - r_p}{2a} \\\\\n r_a &= (1 + e)a \\\\\n r_p &= (1 - e)a\n\\end{align}"
},
{
"math_id": 381,
"text": "r_a"
},
{
"math_id": 382,
"text": "r_p"
},
{
"math_id": 383,
"text": "\\begin{align}\n a &= \\frac{r_a + r_p}{2} \\\\[2pt]\n b &= \\sqrt{r_a r_p} \\\\[2pt]\n \\ell &= \\frac{2}{\\frac{1}{r_a} + \\frac{1}{r_p}} = \\frac{2r_ar_p}{r_a + r_p}.\n\\end{align}"
},
{
"math_id": 384,
"text": "(X, Y)"
}
] | https://en.wikipedia.org/wiki?curid=9277 |
9278167 | Angular velocity tensor | The angular velocity tensor is a skew-symmetric matrix defined by:
formula_0
The scalar elements above correspond to the angular velocity vector components formula_1.
This is an "infinitesimal rotation matrix".
The linear mapping Ω acts as a cross product formula_2:
formula_3
where formula_4 is a position vector.
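As a brief numerical check (a sketch, not part of the original article; NumPy is assumed), building Ω from an arbitrary vector ω and applying it to an arbitrary position vector reproduces the cross product:

```python
import numpy as np

def omega_tensor(w):
    """Skew-symmetric angular velocity tensor built from w = (w_x, w_y, w_z)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

w = np.array([0.3, -1.2, 2.0])     # arbitrary angular velocity vector (an assumption)
r = np.array([1.0, 0.5, -0.7])     # arbitrary position vector (an assumption)
print(omega_tensor(w) @ r)         # identical to the cross product below
print(np.cross(w, r))
```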
When multiplied by a time difference, it results in the "angular displacement tensor".
Calculation of angular velocity tensor of a rotating frame.
A vector formula_5 undergoing uniform circular motion around a fixed axis satisfies:
formula_6
Let formula_7 be the orientation matrix of a frame, whose columns formula_8, formula_9, and formula_10 are the moving orthonormal coordinate vectors of the frame. We can obtain the angular velocity tensor Ω("t") of "A"("t") as follows:
The angular velocity formula_11 must be the same for each of the column vectors formula_12, so we have:
formula_13
which holds even if "A"("t") does not rotate uniformly. Therefore, the angular velocity tensor is:
formula_14
since the inverse of an orthogonal matrix formula_15 is its transpose formula_16.
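For illustration only (an assumption-laden sketch, not from the original article), this relation can be verified numerically for a frame rotating about the z-axis at an assumed rate of 0.8 rad/s, using a central finite difference for the time derivative:

```python
import numpy as np

def rot_z(theta):
    """Orientation matrix of a frame rotated by theta about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

omega_z, t, h = 0.8, 1.3, 1e-6                  # assumed rate, time and step size
A = rot_z(omega_z * t)
dA_dt = (rot_z(omega_z * (t + h)) - rot_z(omega_z * (t - h))) / (2.0 * h)

Omega = dA_dt @ A.T                             # angular velocity tensor of the frame
print(np.round(Omega, 6))                       # skew-symmetric, off-diagonal entries +/- 0.8
```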
Properties.
In general, the angular velocity in an "n"-dimensional space is the time derivative of the angular displacement tensor, which is a second rank skew-symmetric tensor.
This tensor Ω will have "n"("n"−1)/2 independent components, which is the dimension of the Lie algebra of the Lie group of rotations of an "n"-dimensional inner product space.
Duality with respect to the velocity vector.
In three dimensions, angular velocity can be represented by a pseudovector because skew-symmetric second-rank tensors are dual to pseudovectors in three dimensions. Since the angular velocity tensor Ω = Ω("t") is a skew-symmetric matrix:
formula_17
its Hodge dual is a vector, which is precisely the previous angular velocity vector formula_18.
Exponential of Ω.
If we know an initial frame "A"(0) and we are given a "constant" angular velocity tensor Ω, we can obtain "A"("t") for any given "t". Recall the matrix differential equation:
formula_19
This equation can be integrated to give:
formula_20
which shows a connection with the Lie group of rotations.
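As a hedged sketch (not part of the original article; SciPy's matrix exponential is assumed to be available), the formula can be checked numerically: exponentiating a constant skew-symmetric tensor yields an orthogonal matrix at every time.

```python
import numpy as np
from scipy.linalg import expm

Omega = np.array([[0.0, -0.8, 0.0],
                  [0.8,  0.0, 0.0],
                  [0.0,  0.0, 0.0]])   # constant tensor: rotation about z at an assumed 0.8 rad/s
A0 = np.eye(3)                         # initial frame A(0)

t = 1.3
A_t = expm(Omega * t) @ A0             # A(t) = exp(Omega t) A(0)
print(np.round(A_t.T @ A_t, 6))        # the identity: A(t) remains orthogonal (a rotation)
```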
Ω is skew-symmetric.
We prove that the angular velocity tensor is skew-symmetric, i.e. formula_21 satisfies formula_22.
A rotation matrix "A" is orthogonal, so its inverse is its transpose, and we have formula_23. For formula_24 a frame matrix, taking the time derivative of the equation gives:
formula_25
Applying the formula formula_26,
formula_27
Thus, Ω is the negative of its transpose, which implies it is skew symmetric.
Coordinate-free description.
At any instant formula_28, the angular velocity tensor represents a linear map between the position vector formula_29 and the velocity vectors formula_30 of a point on a rigid body rotating around the origin:
formula_31
The relation between this linear map and the angular velocity pseudovector formula_32 is the following.
Because Ω is the derivative of an orthogonal transformation, the bilinear form
formula_33
is skew-symmetric. Thus we can apply the fact of exterior algebra that there is a unique linear form formula_34 on formula_35 such that
formula_36
where formula_37 is the exterior product of formula_38 and formula_39.
Taking the sharp "L"♯ of "L", we get
formula_40
Introducing formula_41 as the Hodge dual of "L"♯, and applying the definition of the Hodge dual twice, supposing that the preferred unit 3-vector is formula_42:
formula_43
where
formula_44
by definition.
Because formula_39 is an arbitrary vector, from the nondegeneracy of the scalar product it follows that
formula_45
Angular velocity as a vector field.
Since the spin angular velocity tensor of a rigid body (in its rest frame) is a linear transformation that maps positions to velocities (within the rigid body), it can be regarded as a constant vector field. In particular, the spin angular velocity is a Killing vector field belonging to an element of the Lie algebra so(3) of the 3-dimensional rotation group SO(3).
Also, it can be shown that the spin angular velocity vector field is exactly half of the curl of the linear velocity vector field v(r) of the rigid body. In symbols,
formula_46
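As an illustrative symbolic check (not part of the original article; SymPy is assumed), taking half the curl of the linear velocity field v = ω × r recovers the angular velocity vector:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
wx, wy, wz = sp.symbols('w_x w_y w_z')

# Linear velocity field of a body spinning about the origin: v = w x r.
v = sp.Matrix([wy * z - wz * y,
               wz * x - wx * z,
               wx * y - wy * x])

# Curl of v, written out component by component.
curl_v = sp.Matrix([sp.diff(v[2], y) - sp.diff(v[1], z),
                    sp.diff(v[0], z) - sp.diff(v[2], x),
                    sp.diff(v[1], x) - sp.diff(v[0], y)])

print(sp.simplify(curl_v / 2))   # gives back (w_x, w_y, w_z)
```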
Rigid body considerations.
The same equations for the angular speed can be obtained by reasoning over a rotating rigid body. Here it is not assumed that the rigid body rotates around the origin. Instead, it can be supposed to rotate around an arbitrary point that is moving with a linear velocity "V"("t") at each instant.
To obtain the equations, it is convenient to imagine a rigid body attached to the frames and consider a coordinate system that is fixed with respect to the rigid body. Then we will study the coordinate transformations between this coordinate and the fixed laboratory frame.
As shown in the figure on the right, the lab system's origin is at point "O", the rigid body system origin is at "O"′ and the vector from "O" to "O"′ is R. A particle ("i") in the rigid body is located at point P and the vector position of this particle is R"i" in the lab frame, and at position r"i" in the body frame. It is seen that the position of the particle can be written:
formula_47
The defining characteristic of a rigid body is that the distance between any two points in a rigid body is unchanging in time. This means that the length of the vector formula_48 is unchanging. By Euler's rotation theorem, we may replace the vector formula_48 with formula_49 where formula_50 is a 3×3 rotation matrix and formula_51 is the position of the particle at some fixed point in time, say "t" = 0. This replacement is useful, because now it is only the rotation matrix formula_50 that is changing in time and not the reference vector formula_51, as the rigid body rotates about point "O"′. Also, since the three columns of the rotation matrix represent the three versors of a reference frame rotating together with the rigid body, any rotation about any axis becomes now visible, while the vector formula_48 would not rotate if the rotation axis were parallel to it, and hence it would only describe a rotation about an axis perpendicular to it (i.e., it would not see the component of the angular velocity pseudovector parallel to it, and would only allow the computation of the component perpendicular to it). The position of the particle is now written as:
formula_52
Taking the time derivative yields the velocity of the particle:
formula_53
where V"i" is the velocity of the particle (in the lab frame) and V is the velocity of "O"′ (the origin of the rigid body frame). Since formula_50 is a rotation matrix its inverse is its transpose. So we substitute formula_54:
formula_55
formula_56
formula_57
or
formula_58
where formula_59 is the previous angular velocity tensor.
It can be proved that this is a skew-symmetric matrix, so we can take its dual to get a 3-dimensional pseudovector that is precisely the previous angular velocity vector formula_60:
formula_18
Substituting "ω" for Ω into the above velocity expression, and replacing matrix multiplication by an equivalent cross product:
formula_61
It can be seen that the velocity of a point in a rigid body can be divided into two terms – the velocity of a reference point fixed in the rigid body plus the cross product term involving the orbital angular velocity of the particle with respect to the reference point. This angular velocity is what physicists call the "spin angular velocity" of the rigid body, as opposed to the "orbital" angular velocity of the reference point "O"′ about the origin "O".
Consistency.
We have supposed that the rigid body rotates around an arbitrary point. We should prove that the spin angular velocity previously defined is independent of the choice of origin, which means that the spin angular velocity is an intrinsic property of the spinning rigid body. (Note the marked contrast of this with the "orbital" angular velocity of a point particle, which certainly "does" depend on the choice of origin.)
See the graph to the right: The origin of the lab frame is "O", while "O"1 and "O"2 are two fixed points on the rigid body, whose velocities are formula_62 and formula_63 respectively. Suppose the angular velocity with respect to "O"1 and "O"2 is formula_64 and formula_65 respectively. Since point "P" and "O"2 each have only one velocity,
formula_66
formula_67
The above two equations yield that
formula_68
Since the point "P" (and thus formula_69) is arbitrary, it follows that
formula_70
If the reference point is the instantaneous axis of rotation the expression of the velocity of a point in the rigid body will have just the angular velocity term. This is because the velocity of the instantaneous axis of rotation is zero. An example of the instantaneous axis of rotation is the hinge of a door. Another example is the point of contact of a purely rolling spherical (or, more generally, convex) rigid body.
References.
| [
{
"math_id": 0,
"text": "\n\n\\Omega =\n\\begin{pmatrix}\n0 & -\\omega_z & \\omega_y \\\\\n\\omega_z & 0 & -\\omega_x \\\\\n-\\omega_y & \\omega_x & 0 \\\\\n\\end{pmatrix}"
},
{
"math_id": 1,
"text": "\\boldsymbol\\omega=(\\omega_x,\\omega_y,\\omega_z)"
},
{
"math_id": 2,
"text": "(\\boldsymbol\\omega \\times)"
},
{
"math_id": 3,
"text": "\\boldsymbol\\omega \\times \\boldsymbol{r} = \\Omega \\boldsymbol{r}"
},
{
"math_id": 4,
"text": "\\boldsymbol{r}"
},
{
"math_id": 5,
"text": "\\mathbf r"
},
{
"math_id": 6,
"text": "\\frac {d \\mathbf r} {dt} = \\boldsymbol{\\omega} \\times\\mathbf{r} = \\Omega \\mathbf{r}"
},
{
"math_id": 7,
"text": "A(t) = [ \\mathbf{e}_1(t) \\ \\mathbf{e}_2(t) \\ \\mathbf{e}_3(t)]"
},
{
"math_id": 8,
"text": "\\mathbf e_1"
},
{
"math_id": 9,
"text": "\\mathbf e_2"
},
{
"math_id": 10,
"text": "\\mathbf e_3"
},
{
"math_id": 11,
"text": "\\omega"
},
{
"math_id": 12,
"text": "\\mathbf e_i"
},
{
"math_id": 13,
"text": "\\begin{aligned}\n\\frac {dA}{dt} \n&= \\begin{bmatrix} \n\\dfrac{d\\mathbf{e}_1}{dt} &\n\\dfrac{d\\mathbf{e}_2}{dt} &\n\\dfrac{d\\mathbf{e}_3}{dt}\n\\end{bmatrix} \\\\\n&= \\begin{bmatrix} \n\\omega \\times \\mathbf{e}_1 & \n\\omega \\times \\mathbf{e}_2 & \n\\omega \\times \\mathbf{e}_3 \n\\end{bmatrix} \\\\\n&= \\begin{bmatrix} \n\\Omega \\mathbf{e}_1 & \n\\Omega\\mathbf{e}_2 & \n\\Omega \\mathbf{e}_3 \n\\end{bmatrix} \\\\\n&= \\Omega A,\n\\end{aligned}"
},
{
"math_id": 14,
"text": "\\Omega = \\frac {dA} {dt} A^{-1} = \\frac {dA} {dt} A^{\\mathsf{T}},"
},
{
"math_id": 15,
"text": "A"
},
{
"math_id": 16,
"text": "A^{\\mathsf{T}}"
},
{
"math_id": 17,
"text": "\n\\Omega =\n\\begin{pmatrix}\n0 & -\\omega_z & \\omega_y \\\\\n\\omega_z & 0 & -\\omega_x \\\\\n-\\omega_y & \\omega_x & 0\\\\\n\\end{pmatrix},\n"
},
{
"math_id": 18,
"text": "\\boldsymbol\\omega=[\\omega_x,\\omega_y,\\omega_z]"
},
{
"math_id": 19,
"text": "\\frac {dA} {dt} = \\Omega \\cdot A ."
},
{
"math_id": 20,
"text": "A(t) = e^{\\Omega t}A(0) ,"
},
{
"math_id": 21,
"text": "\\Omega = \\frac {dA(t)}{dt} \\cdot A^\\text{T} "
},
{
"math_id": 22,
"text": "\\Omega^\\text{T} = -\\Omega"
},
{
"math_id": 23,
"text": "I=A\\cdot A^\\text{T}"
},
{
"math_id": 24,
"text": "A=A(t)"
},
{
"math_id": 25,
"text": "0=\\frac{dA}{dt}A^\\text{T}+A\\frac{dA^\\text{T}}{dt}"
},
{
"math_id": 26,
"text": "(A B)^\\text{T}=B^\\text{T}A^\\text{T}"
},
{
"math_id": 27,
"text": "0 = \\frac{dA}{dt}A^\\text{T}+\\left(\\frac{dA}{dt} A^\\text{T}\\right)^\\text{T} = \\Omega + \\Omega^\\text{T}"
},
{
"math_id": 28,
"text": "t"
},
{
"math_id": 29,
"text": "\\mathbf{r}(t)"
},
{
"math_id": 30,
"text": "\\mathbf{v}(t)"
},
{
"math_id": 31,
"text": " \\mathbf{v} = \\Omega\\mathbf{r} ."
},
{
"math_id": 32,
"text": "\\boldsymbol\\omega"
},
{
"math_id": 33,
"text": "B(\\mathbf{r},\\mathbf{s}) = (\\Omega\\mathbf{r}) \\cdot \\mathbf{s} "
},
{
"math_id": 34,
"text": "L"
},
{
"math_id": 35,
"text": "\\Lambda^2 V "
},
{
"math_id": 36,
"text": "L(\\mathbf{r}\\wedge \\mathbf{s}) = B(\\mathbf{r},\\mathbf{s})"
},
{
"math_id": 37,
"text": "\\mathbf{r}\\wedge \\mathbf{s} \\in \\Lambda^2 V "
},
{
"math_id": 38,
"text": "\\mathbf{r}"
},
{
"math_id": 39,
"text": "\\mathbf{s}"
},
{
"math_id": 40,
"text": " (\\Omega\\mathbf{r})\\cdot \\mathbf{s} = L^\\sharp \\cdot (\\mathbf{r}\\wedge \\mathbf{s}) "
},
{
"math_id": 41,
"text": " \\boldsymbol\\omega := {\\star} (L^\\sharp) "
},
{
"math_id": 42,
"text": " \\star 1"
},
{
"math_id": 43,
"text": " (\\Omega\\mathbf{r}) \\cdot \\mathbf{s} = {\\star} ( {\\star} ( L^\\sharp ) \\wedge \\mathbf{r} \\wedge \\mathbf{s}) = {\\star} (\\boldsymbol\\omega \\wedge \\mathbf{r} \\wedge \\mathbf{s}) = {\\star} (\\boldsymbol\\omega \\wedge \\mathbf{r} ) \\cdot \\mathbf{s} = (\\boldsymbol\\omega \\times \\mathbf{r} ) \\cdot \\mathbf{s} ,"
},
{
"math_id": 44,
"text": "\\boldsymbol\\omega \\times \\mathbf{r} := {\\star} (\\boldsymbol\\omega \\wedge \\mathbf{r}) "
},
{
"math_id": 45,
"text": " \\Omega\\mathbf{r} = \\boldsymbol\\omega \\times \\mathbf{r}"
},
{
"math_id": 46,
"text": " \\boldsymbol{\\omega} = \\frac{1}{2} \\nabla\\times\\mathbf{v}"
},
{
"math_id": 47,
"text": "\\mathbf{R}_i=\\mathbf{R}+\\mathbf{r}_i"
},
{
"math_id": 48,
"text": "\\mathbf{r}_i"
},
{
"math_id": 49,
"text": "\\mathcal{R}\\mathbf{r}_{io}"
},
{
"math_id": 50,
"text": "\\mathcal{R}"
},
{
"math_id": 51,
"text": "\\mathbf{r}_{io}"
},
{
"math_id": 52,
"text": "\\mathbf{R}_i=\\mathbf{R}+\\mathcal{R}\\mathbf{r}_{io}"
},
{
"math_id": 53,
"text": "\\mathbf{V}_i=\\mathbf{V}+\\frac{d\\mathcal{R}}{dt}\\mathbf{r}_{io}"
},
{
"math_id": 54,
"text": "\\mathcal{I}=\\mathcal{R}^\\text{T}\\mathcal{R}"
},
{
"math_id": 55,
"text": "\\mathbf{V}_i = \\mathbf{V}+\\frac{d\\mathcal{R}}{dt}\\mathcal{I}\\mathbf{r}_{io}"
},
{
"math_id": 56,
"text": "\\mathbf{V}_i = \\mathbf{V}+\\frac{d\\mathcal{R}}{dt}\\mathcal{R}^\\text{T}\\mathcal{R}\\mathbf{r}_{io}"
},
{
"math_id": 57,
"text": "\\mathbf{V}_i = \\mathbf{V}+\\frac{d\\mathcal{R}}{dt}\\mathcal{R}^\\text{T}\\mathbf{r}_{i}"
},
{
"math_id": 58,
"text": "\\mathbf{V}_i = \\mathbf{V}+\\Omega\\mathbf{r}_{i}"
},
{
"math_id": 59,
"text": "\\Omega = \\frac{d\\mathcal{R}}{dt}\\mathcal{R}^\\text{T}"
},
{
"math_id": 60,
"text": "\\boldsymbol \\omega"
},
{
"math_id": 61,
"text": "\\mathbf{V}_i=\\mathbf{V}+\\boldsymbol\\omega\\times\\mathbf{r}_i"
},
{
"math_id": 62,
"text": "\\mathbf{v}_1"
},
{
"math_id": 63,
"text": "\\mathbf{v}_2"
},
{
"math_id": 64,
"text": "\\boldsymbol{\\omega}_1"
},
{
"math_id": 65,
"text": "\\boldsymbol{\\omega}_2"
},
{
"math_id": 66,
"text": " \\mathbf{v}_1 + \\boldsymbol{\\omega}_1\\times\\mathbf{r}_1 = \\mathbf{v}_2 + \\boldsymbol{\\omega}_2\\times\\mathbf{r}_2 "
},
{
"math_id": 67,
"text": " \\mathbf{v}_2 = \\mathbf{v}_1 + \\boldsymbol{\\omega}_1\\times\\mathbf{r} = \\mathbf{v}_1 + \\boldsymbol{\\omega}_1\\times (\\mathbf{r}_1 - \\mathbf{r}_2) "
},
{
"math_id": 68,
"text": " (\\boldsymbol{\\omega}_2-\\boldsymbol{\\omega}_1) \\times \\mathbf{r}_2=0 "
},
{
"math_id": 69,
"text": " \\mathbf{r}_2 "
},
{
"math_id": 70,
"text": " \\boldsymbol{\\omega}_1 = \\boldsymbol{\\omega}_2 "
}
] | https://en.wikipedia.org/wiki?curid=9278167 |
9279963 | Thom–Mather stratified space | Topological space
In topology, a branch of mathematics, an abstract stratified space, or a Thom–Mather stratified space is a topological space "X" that has been decomposed into pieces called strata; these strata are manifolds and are required to fit together in a certain way. Thom–Mather stratified spaces provide a purely topological setting for the study of singularities analogous to the more differential-geometric theory of Whitney. They were introduced by René Thom, who showed that every Whitney stratified space was also a topologically stratified space, with the same strata. Another proof was given by John Mather in 1970, inspired by Thom's proof.
Basic examples of Thom–Mather stratified spaces include manifolds with boundary (top dimension and codimension 1 boundary) and manifolds with corners (top dimension, codimension 1 boundary, codimension 2 corners), real or complex analytic varieties, or orbit spaces of smooth transformation groups.
Definition.
A Thom–Mather stratified space is a triple formula_0 where formula_1 is a topological space (often we require that it is locally compact, Hausdorff, and second countable), formula_2 is a decomposition of formula_1 into strata,
formula_3
and formula_4 is the set of control data formula_5 where formula_6 is an open neighborhood of the stratum formula_7 (called the tubular neighborhood), formula_8 is a continuous retraction, and formula_9 is a continuous function. These data need to satisfy the following conditions.
# Each stratum formula_7 is a locally closed subset and the decomposition formula_10 is locally finite.
# The decomposition formula_10 satisfies the axiom of the frontier: if formula_11 and formula_12, then formula_13. This condition implies that there is a partial order among strata: formula_14 if and only if formula_15 and formula_16.
# Each stratum formula_7 is a smooth manifold.
# formula_17, so formula_18 can be viewed as a distance function from the stratum formula_7.
# For each pair of strata formula_14, the restriction formula_19 is a submersion.
# For each pair of strata formula_14, there holds formula_20 and formula_21 (both equations holding wherever both sides are defined).
Examples.
One of the original motivations for stratified spaces was the decomposition of singular spaces into smooth pieces. For example, given a singular variety formula_7, there is a naturally defined subvariety, formula_22, which is the singular locus. This may not be a smooth variety, so taking the iterated singular locus formula_23 will eventually give a natural stratification. A simple algebro-geometric example is the singular hypersurface
formula_24
where formula_25 is the prime spectrum. | [
{
"math_id": 0,
"text": "(V, {\\mathcal S}, {\\mathfrak J})"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "{\\mathcal S}"
},
{
"math_id": 3,
"text": " V = \\bigsqcup_{X\\in {\\mathcal S}} X, "
},
{
"math_id": 4,
"text": "{\\mathfrak J}"
},
{
"math_id": 5,
"text": "\\{ (T_X), (\\pi_X), (\\rho_X)\\ | X\\in S\\}"
},
{
"math_id": 6,
"text": "T_X"
},
{
"math_id": 7,
"text": "X"
},
{
"math_id": 8,
"text": "\\pi_X: T_X \\to X"
},
{
"math_id": 9,
"text": "\\rho_X: T_X \\to [0, +\\infty)"
},
{
"math_id": 10,
"text": "S"
},
{
"math_id": 11,
"text": "X, Y \\in {\\mathcal S}"
},
{
"math_id": 12,
"text": " Y \\cap \\overline{X} \\neq \\emptyset"
},
{
"math_id": 13,
"text": "Y \\subseteq \\overline{X}"
},
{
"math_id": 14,
"text": "Y<X"
},
{
"math_id": 15,
"text": "Y \\subset\\overline{X}"
},
{
"math_id": 16,
"text": "Y \\neq X"
},
{
"math_id": 17,
"text": "X = \\{ v \\in T_X\\ |\\ \\rho_X(v) = 0\\}"
},
{
"math_id": 18,
"text": "\\rho_X"
},
{
"math_id": 19,
"text": "(\\pi_Y, \\rho_Y): T_Y \\cap X \\to Y \\times (0, +\\infty)"
},
{
"math_id": 20,
"text": "\\pi_Y \\circ \\pi_X = \\pi_Y"
},
{
"math_id": 21,
"text": "\\rho_Y \\circ \\pi_X = \\rho_Y"
},
{
"math_id": 22,
"text": "\\mathrm{Sing}(X)"
},
{
"math_id": 23,
"text": "\\mathrm{Sing}(\\mathrm{Sing}(X))"
},
{
"math_id": 24,
"text": "\\text{Spec}\\left(\\Complex[x,y,z]/\\left(x^4 + y^4 + z^4\\right)\\right) \\xleftarrow{(0,0,0)} \\text{Spec}(\\Complex)"
},
{
"math_id": 25,
"text": "\\text{Spec}(-)"
}
] | https://en.wikipedia.org/wiki?curid=9279963 |
928 | Axiom | Statement that is taken to be true
An axiom, postulate, or assumption is a statement that is taken to be true, to serve as a premise or starting point for further reasoning and arguments. The word comes from the Ancient Greek word ἀξίωμα ("axíōma"), meaning 'that which is thought worthy or fit' or 'that which commends itself as evident'.
The precise definition varies across fields of study. In classic philosophy, an axiom is a statement that is so evident or well-established that it is accepted without controversy or question. In modern logic, an axiom is a premise or starting point for reasoning.
In mathematics, an "axiom" may be a "logical axiom" or a "non-logical axiom". Logical axioms are taken to be true within the system of logic they define and are often shown in symbolic form (e.g., ("A" and "B") implies "A"), while non-logical axioms are substantive assertions about the elements of the domain of a specific mathematical theory, for example "a" + 0 = "a" in integer arithmetic.
Non-logical axioms may also be called "postulates", "assumptions" or "proper axioms". In most cases, a non-logical axiom is simply a formal logical expression used in deduction to build a mathematical theory, and might or might not be self-evident in nature (e.g., the parallel postulate in Euclidean geometry). To axiomatize a system of knowledge is to show that its claims can be derived from a small, well-understood set of sentences (the axioms), and there are typically many ways to axiomatize a given mathematical domain.
Any axiom is a statement that serves as a starting point from which other statements are logically derived. Whether it is meaningful (and, if so, what it means) for an axiom to be "true" is a subject of debate in the philosophy of mathematics.
Etymology.
The word "axiom" comes from the Greek word ἀξίωμα ("axíōma"), a verbal noun from the verb ἀξιόειν ("axioein"), meaning "to deem worthy", but also "to require", which in turn comes from ἄξιος ("áxios"), meaning "being in balance", and hence "having (the same) value (as)", "worthy", "proper". Among the ancient Greek philosophers and mathematicians, axioms were taken to be immediately evident propositions, foundational and common to many fields of investigation, and self-evidently true without any further argument or proof.
The root meaning of the word "postulate" is to "demand"; for instance, Euclid demands that one agree that some things can be done (e.g., any two points can be joined by a straight line).
Ancient geometers maintained some distinction between axioms and postulates. While commenting on Euclid's books, Proclus remarks that "Geminus held that this [4th] Postulate should not be classed as a postulate but as an axiom, since it does not, like the first three Postulates, assert the possibility of some construction but expresses an essential property." Boethius translated 'postulate' as "petitio" and called the axioms "notiones communes" but in later manuscripts this usage was not always strictly kept.
Historical development.
Early Greeks.
The logico-deductive method whereby conclusions (new knowledge) follow from premises (old knowledge) through the application of sound arguments (syllogisms, rules of inference) was developed by the ancient Greeks, and has become the core principle of modern mathematics. Tautologies excluded, nothing can be deduced if nothing is assumed. Axioms and postulates are thus the basic assumptions underlying a given body of deductive knowledge. They are accepted without demonstration. All other assertions (theorems, in the case of mathematics) must be proven with the aid of these basic assumptions. However, the interpretation of mathematical knowledge has changed from ancient times to the modern, and consequently the terms "axiom" and "postulate" hold a slightly different meaning for the present day mathematician, than they did for Aristotle and Euclid.
The ancient Greeks considered geometry as just one of several sciences, and held the theorems of geometry on par with scientific facts. As such, they developed and used the logico-deductive method as a means of avoiding error, and for structuring and communicating knowledge. Aristotle's "Posterior Analytics" is a definitive exposition of the classical view.
An "axiom", in classical terminology, referred to a self-evident assumption common to many branches of science. A good example would be the assertion that:
When an equal amount is taken from equals, an equal amount results.
At the foundation of the various sciences lay certain additional hypotheses that were accepted without proof. Such a hypothesis was termed a "postulate". While the axioms were common to many sciences, the postulates of each particular science were different. Their validity had to be established by means of real-world experience. Aristotle warns that the content of a science cannot be successfully communicated if the learner is in doubt about the truth of the postulates.
The classical approach is well-illustrated by Euclid's "Elements", where a list of postulates is given (common-sensical geometric facts drawn from our experience), followed by a list of "common notions" (very basic, self-evident assertions).
;Postulates
# It is possible to draw a straight line from any point to any other point.
# It is possible to extend a line segment continuously in both directions.
# It is possible to describe a circle with any center and any radius.
# It is true that all right angles are equal to one another.
# ("Parallel postulate") It is true that, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, intersect on that side on which are the angles less than the two right angles.
;Common notions:
# Things which are equal to the same thing are also equal to one another.
# If equals are added to equals, the wholes are equal.
# If equals are subtracted from equals, the remainders are equal.
# Things which coincide with one another are equal to one another.
# The whole is greater than the part.
Modern development.
A lesson learned by mathematics in the last 150 years is that it is useful to strip the meaning away from the mathematical assertions (axioms, postulates, propositions, theorems) and definitions. One must concede the need for primitive notions, or undefined terms or concepts, in any study. Such abstraction or formalization makes mathematical knowledge more general, capable of multiple different meanings, and therefore useful in multiple contexts. Alessandro Padoa, Mario Pieri, and Giuseppe Peano were pioneers in this movement.
Structuralist mathematics goes further, and develops theories and axioms (e.g. field theory, group theory, topology, vector spaces) without "any" particular application in mind. The distinction between an "axiom" and a "postulate" disappears. The postulates of Euclid are profitably motivated by saying that they lead to a great wealth of geometric facts. The truth of these complicated facts rests on the acceptance of the basic hypotheses. However, by throwing out Euclid's fifth postulate, one can get theories that have meaning in wider contexts (e.g., hyperbolic geometry). As such, one must simply be prepared to use labels such as "line" and "parallel" with greater flexibility. The development of hyperbolic geometry taught mathematicians that it is useful to regard postulates as purely formal statements, and not as facts based on experience.
When mathematicians employ the field axioms, the intentions are even more abstract. The propositions of field theory do not concern any one particular application; the mathematician now works in complete abstraction. There are many examples of fields; field theory gives correct knowledge about them all.
It is not correct to say that the axioms of field theory are "propositions that are regarded as true without proof." Rather, the field axioms are a set of constraints. If any given system of addition and multiplication satisfies these constraints, then one is in a position to instantly know a great deal of extra information about this system.
Modern mathematics formalizes its foundations to such an extent that mathematical theories can be regarded as mathematical objects, and mathematics itself can be regarded as a branch of logic. Frege, Russell, Poincaré, Hilbert, and Gödel are some of the key figures in this development.
Another lesson learned in modern mathematics is to examine purported proofs carefully for hidden assumptions.
In the modern understanding, a set of axioms is any collection of formally stated assertions from which other formally stated assertions follow – by the application of certain well-defined rules. In this view, logic becomes just another formal system. A set of axioms should be consistent; it should be impossible to derive a contradiction from the axioms. A set of axioms should also be non-redundant; an assertion that can be deduced from other axioms need not be regarded as an axiom.
It was the early hope of modern logicians that various branches of mathematics, perhaps all of mathematics, could be derived from a consistent collection of basic axioms. An early success of the formalist program was Hilbert's formalization of Euclidean geometry, and the related demonstration of the consistency of those axioms.
In a wider context, there was an attempt to base all of mathematics on Cantor's set theory. Here, the emergence of Russell's paradox and similar antinomies of naïve set theory raised the possibility that any such system could turn out to be inconsistent.
The formalist project suffered a setback in 1931, when Gödel showed that it is possible, for any sufficiently large set of axioms (Peano's axioms, for example), to construct a statement whose truth is independent of that set of axioms. As a corollary, Gödel proved that the consistency of a theory like Peano arithmetic is an unprovable assertion within the scope of that theory.
It is reasonable to believe in the consistency of Peano arithmetic because it is satisfied by the system of natural numbers, an infinite but intuitively accessible formal system. However, at present, there is no known way of demonstrating the consistency of the modern Zermelo–Fraenkel axioms for set theory. Furthermore, using techniques of forcing (Cohen) one can show that the continuum hypothesis (Cantor) is independent of the Zermelo–Fraenkel axioms. Thus, even this very general set of axioms cannot be regarded as the definitive foundation for mathematics.
Other sciences.
Experimental sciences, as opposed to mathematics and logic, also have general founding assertions from which a deductive reasoning can be built so as to express propositions that predict properties, either still general or much more specialized to a specific experimental context. For instance, Newton's laws in classical mechanics, Maxwell's equations in classical electromagnetism, Einstein's equation in general relativity, Mendel's laws of genetics, Darwin's law of natural selection, etc. These founding assertions are usually called "principles" or "postulates" so as to distinguish them from mathematical "axioms".
As a matter of fact, the role of axioms in mathematics and of postulates in experimental sciences is different. In mathematics one neither "proves" nor "disproves" an axiom. A set of mathematical axioms gives a set of rules that fix a conceptual realm, in which the theorems logically follow. In contrast, in experimental sciences, a set of postulates should allow deducing results that match or do not match experimental results. If postulates do not allow deducing experimental predictions, they do not set a scientific conceptual framework and have to be completed or made more accurate. If the postulates allow deducing predictions of experimental results, the comparison with experiments makes it possible to falsify the theory that the postulates establish. A theory is considered valid as long as it has not been falsified.
The transition between the mathematical axioms and scientific postulates is always slightly blurred, especially in physics. This is due to the heavy use of mathematical tools to support the physical theories. For instance, the introduction of Newton's laws rarely establishes as a prerequisite either the Euclidean geometry or the differential calculus that they imply. This became more apparent when Albert Einstein first introduced special relativity, where the invariant quantity is no longer the Euclidean length formula_0 (defined as formula_1) but the Minkowski spacetime interval formula_2 (defined as formula_3), and then general relativity, where flat Minkowskian geometry is replaced with pseudo-Riemannian geometry on curved manifolds.
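As an illustration of the invariance just mentioned (a minimal symbolic check, not part of the original discussion; the boost formulas below are the standard Lorentz transformation along the x-axis), one can verify that the spacetime interval is unchanged under a boost while the Euclidean length is not:

```python
import sympy as sp

t, x, y, z, v, c = sp.symbols('t x y z v c', real=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)

# Lorentz boost with velocity v along the x-axis
t_p = gamma * (t - v * x / c**2)
x_p = gamma * (x - v * t)
y_p, z_p = y, z

interval = c**2 * t**2 - x**2 - y**2 - z**2
interval_p = c**2 * t_p**2 - x_p**2 - y_p**2 - z_p**2

print(sp.simplify(interval_p - interval))                              # 0: the interval is invariant
print(sp.simplify((x_p**2 + y_p**2 + z_p**2) - (x**2 + y**2 + z**2)))  # nonzero in general
```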
In quantum physics, two sets of postulates have coexisted for some time, which provide a very good example of falsification. The 'Copenhagen school' (Niels Bohr, Werner Heisenberg, Max Born) developed an operational approach with a complete mathematical formalism that involves the description of a quantum system by vectors ('states') in a separable Hilbert space, and physical quantities as linear operators that act on this Hilbert space. This approach is fully falsifiable and has so far produced the most accurate predictions in physics. But it has the unsatisfactory aspect of not allowing answers to questions one would naturally ask. For this reason, another 'hidden variables' approach was developed for some time by Albert Einstein, Erwin Schrödinger and David Bohm. It was created so as to try to give a deterministic explanation of phenomena such as entanglement. This approach assumed that the Copenhagen school description was not complete, and postulated that some yet unknown variable was to be added to the theory so as to allow answering some of the questions it does not answer (the founding elements of which were discussed as the EPR paradox in 1935). Taking this idea seriously, John Bell derived in 1964 a prediction that would lead to different experimental results (Bell's inequalities) in the Copenhagen and the hidden variable case. The experiment was conducted first by Alain Aspect in the early 1980s, and the result excluded the simple hidden variable approach (sophisticated hidden variables could still exist, but their properties would still be more disturbing than the problems they try to solve). This does not mean that the conceptual framework of quantum physics can be considered as complete now, since some open questions still exist (the limit between the quantum and classical realms, what happens during a quantum measurement, what happens in a completely closed quantum system such as the universe itself, etc.).
Mathematical logic.
In the field of mathematical logic, a clear distinction is made between two notions of axioms: "logical" and "non-logical" (somewhat similar to the ancient distinction between "axioms" and "postulates" respectively).
Logical axioms.
These are certain formulas in a formal language that are universally valid, that is, formulas that are satisfied by every assignment of values. Usually one takes as logical axioms "at least" some minimal set of tautologies that is sufficient for proving all tautologies in the language; in the case of predicate logic more logical axioms than that are required, in order to prove logical truths that are not tautologies in the strict sense.
Examples.
Propositional logic.
In propositional logic it is common to take as logical axioms all formulae of the following forms, where formula_4, formula_5, and formula_6 can be any formulae of the language and where the included primitive connectives are only "formula_7" for negation of the immediately following proposition and "formula_8" for implication from antecedent to consequent propositions:
# formula_9
# formula_10
# formula_11
Each of these patterns is an "axiom schema", a rule for generating an infinite number of axioms. For example, if formula_12, formula_13, and formula_14 are propositional variables, then formula_15 and formula_16 are both instances of axiom schema 1, and hence are axioms. It can be shown that with only these three axiom schemata and "modus ponens", one can prove all tautologies of the propositional calculus. It can also be shown that no pair of these schemata is sufficient for proving all tautologies with "modus ponens".
Other axiom schemata involving the same or different sets of primitive connectives can be alternatively constructed.
These axiom schemata are also used in the predicate calculus, but additional logical axioms are needed to include a quantifier in the calculus.
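As an informal check (a sympy sketch, not part of the standard treatment), one can confirm that instances of the three axiom schemata above are tautologies by testing that their negations are unsatisfiable:

```python
from sympy import symbols, Implies, Not
from sympy.logic.inference import satisfiable

A, B, C = symbols('A B C')

# One instance of each of the three axiom schemata
instances = [
    Implies(A, Implies(B, A)),
    Implies(Implies(A, Implies(B, C)), Implies(Implies(A, B), Implies(A, C))),
    Implies(Implies(Not(A), Not(B)), Implies(B, A)),
]

# A formula is a tautology exactly when its negation has no satisfying assignment
for i, f in enumerate(instances, start=1):
    print(f"schema {i} instance is a tautology:", satisfiable(Not(f)) is False)
```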
First-order logic.
Axiom of Equality.<br>Let formula_17 be a first-order language. For each variable formula_18, the below formula is universally valid.
formula_19
This means that, for any variable symbol formula_18, the formula formula_19 can be regarded as an axiom. Also, in this example, for this not to fall into vagueness and a never-ending series of "primitive notions", either a precise notion of what we mean by formula_19 (or, for that matter, "to be equal") has to be well established first, or a purely formal and syntactical usage of the symbol formula_20 has to be enforced, only regarding it as a string and only a string of symbols, and mathematical logic does indeed do that.
Another, more interesting, example of an axiom scheme is the one that provides us with what is known as Universal Instantiation:
Axiom scheme for Universal Instantiation.<br>Given a formula formula_4 in a first-order language formula_17, a variable formula_18 and a term formula_21 that is substitutable for formula_18 in formula_4, the below formula is universally valid.
formula_22
Where the symbol formula_23 stands for the formula formula_4 with the term formula_21 substituted for formula_18. (See Substitution of variables.) In informal terms, this example allows us to state that, if we know that a certain property formula_24 holds for every formula_18 and that formula_21 stands for a particular object in our structure, then we should be able to claim formula_25. Again, "we are claiming that the formula" formula_26 "is valid", that is, we must be able to give a "proof" of this fact, or more properly speaking, a "metaproof". These examples are "metatheorems" of our theory of mathematical logic since we are dealing with the very concept of "proof" itself. Aside from this, we can also have Existential Generalization:
Axiom scheme for Existential Generalization. Given a formula formula_4 in a first-order language formula_17, a variable formula_18 and a term formula_21 that is substitutable for formula_18 in formula_4, the below formula is universally valid.
formula_27
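As a purely illustrative sketch (the encoding of formulas as nested tuples is an assumption made here for concreteness, not standard notation), the substitution formula_23 used in both schemes can be implemented on a toy representation of first-order formulas; note that the sketch does not check that the term is substitutable, i.e. it does not guard against variable capture:

```python
# Toy encoding: a formula is either a string (a variable or constant) or a tuple
# ('forall', bound_variable, body), ('exists', bound_variable, body),
# or (connective_or_predicate, argument, ...).

def subst(phi, x, t):
    """Return phi with every free occurrence of the variable x replaced by the term t."""
    if isinstance(phi, str):                      # a variable or constant symbol
        return t if phi == x else phi
    op, *args = phi
    if op in ('forall', 'exists'):
        bound, body = args
        if bound == x:                            # x is bound here, so nothing is substituted
            return phi
        return (op, bound, subst(body, x, t))
    return (op, *(subst(a, x, t) for a in args))  # predicates and connectives

# Universal instantiation: from ('forall', 'x', P) one may pass to subst(P, 'x', t)
P = ('P', 'x')
print(subst(P, 'x', 't0'))                        # ('P', 't0')
print(subst(('forall', 'x', P), 'x', 't0'))       # unchanged, since x is bound
```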
Non-logical axioms.
Non-logical axioms are formulas that play the role of theory-specific assumptions. Reasoning about two different structures, for example, the natural numbers and the integers, may involve the same logical axioms; the non-logical axioms aim to capture what is special about a particular structure (or set of structures, such as groups). Thus non-logical axioms, unlike logical axioms, are not "tautologies". Another name for a non-logical axiom is "postulate".
Almost every modern mathematical theory starts from a given set of non-logical axioms, and it was thought that, in principle, every theory could be axiomatized in this way and formalized down to the bare language of logical formulas.
Non-logical axioms are often simply referred to as "axioms" in mathematical discourse. This does not mean that it is claimed that they are true in some absolute sense. For example, in some groups, the group operation is commutative, and this can be asserted with the introduction of an additional axiom, but without this axiom, we can do quite well developing (the more general) group theory, and we can even take its negation as an axiom for the study of non-commutative groups.
Thus, an "axiom" is an elementary basis for a formal logic system that together with the rules of inference define a deductive system.
Examples.
This section gives examples of mathematical theories that are developed entirely from a set of non-logical axioms (axioms, henceforth). A rigorous treatment of any of these topics begins with a specification of these axioms.
Basic theories, such as arithmetic, real analysis and complex analysis are often introduced non-axiomatically, but implicitly or explicitly there is generally an assumption that the axioms being used are the axioms of Zermelo–Fraenkel set theory with choice, abbreviated ZFC, or some very similar system of axiomatic set theory like Von Neumann–Bernays–Gödel set theory, a conservative extension of ZFC. Sometimes slightly stronger theories such as Morse–Kelley set theory, or set theory with a strongly inaccessible cardinal allowing the use of a Grothendieck universe, are used, but in fact, most mathematicians can actually prove all they need in systems weaker than ZFC, such as second-order arithmetic.
The study of topology in mathematics extends from point-set topology through algebraic topology, differential topology, and all the related paraphernalia, such as homology theory and homotopy theory. The development of "abstract algebra" brought with it group theory, rings, fields, and Galois theory.
This list could be expanded to include most fields of mathematics, including measure theory, ergodic theory, probability, representation theory, and differential geometry.
Arithmetic.
The Peano axioms are the most widely used "axiomatization" of first-order arithmetic. They are a set of axioms strong enough to prove many important facts about number theory and they allowed Gödel to establish his famous second incompleteness theorem.
We have a language formula_28 where formula_29 is a constant symbol and formula_30 is a unary function, and the following axioms:
# formula_31
# formula_32
# formula_33 for any formula formula_4 of formula_34 with one free variable.
The standard structure is formula_35 where formula_36 is the set of natural numbers, formula_30 is the successor function and formula_29 is naturally interpreted as the number 0.
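As an informal illustration only (a finite check on a sample of natural numbers, which of course is not a proof), the first two axioms can be tested in the standard structure, with 0 interpreted as the number zero and formula_30 as the ordinary successor map:

```python
def S(n):
    """The successor function of the standard structure."""
    return n + 1

sample = range(500)   # a finite sample of the natural numbers

# First axiom: the successor of a natural number is never zero
assert all(S(n) != 0 for n in sample)

# Second axiom: the successor map is injective
assert all(S(n) != S(m) for n in sample for m in sample if n != m)

print("both axioms hold on the sample")
```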
Euclidean geometry.
Probably the oldest, and most famous, list of axioms are the 4 + 1 Euclid's postulates of plane geometry. The axioms are referred to as "4 + 1" because for nearly two millennia the fifth (parallel) postulate ("through a point outside a line there is exactly one parallel") was suspected of being derivable from the first four. Ultimately, the fifth postulate was found to be independent of the first four. One can assume that exactly one parallel through a point outside a line exists, or that infinitely many exist. This choice gives us two alternative forms of geometry in which the interior angles of a triangle add up to exactly 180 degrees or less, respectively, and are known as Euclidean and hyperbolic geometries. If one also removes the second postulate ("a line can be extended indefinitely") then elliptic geometry arises, where there is no parallel through a point outside a line, and in which the interior angles of a triangle add up to more than 180 degrees.
Real analysis.
The objectives of the study are within the domain of real numbers. The real numbers are uniquely picked out (up to isomorphism) by the properties of a "Dedekind complete ordered field", meaning that any nonempty set of real numbers with an upper bound has a least upper bound. However, expressing these properties as axioms requires the use of second-order logic. The Löwenheim–Skolem theorems tell us that if we restrict ourselves to first-order logic, any axiom system for the reals admits other models, including both models that are smaller than the reals and models that are larger. Some of the latter are studied in non-standard analysis.
Role in mathematical logic.
Deductive systems and completeness.
A deductive system consists of a set formula_37 of logical axioms, a set formula_38 of non-logical axioms, and a set formula_39 of "rules of inference". A desirable property of a deductive system is that it be complete. A system is said to be complete if, for all formulas formula_4,
formula_40
that is, for any statement that is a "logical consequence" of formula_38 there actually exists a "deduction" of the statement from formula_38. This is sometimes expressed as "everything that is true is provable", but it must be understood that "true" here means "made true by the set of axioms", and not, for example, "true in the intended interpretation". Gödel's completeness theorem establishes the completeness of a certain commonly used type of deductive system.
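The notion of a deduction from formula_38 can be illustrated with a toy system (a sketch with an ad hoc encoding of formulas, not a full deductive system): the only rule of inference is modus ponens, and derivable formulas are obtained by saturating the set of non-logical axioms under that rule.

```python
# Formulas are strings (atoms) or tuples ('->', antecedent, consequent).

def derive(axioms, max_rounds=100):
    """Close a set of formulas under modus ponens."""
    derived = set(axioms)
    for _ in range(max_rounds):
        new = {f[2] for f in derived
               if isinstance(f, tuple) and f[0] == '->' and f[1] in derived}
        if new <= derived:
            break                      # nothing new: the closure has been reached
        derived |= new
    return derived

sigma = {'p', ('->', 'p', 'q'), ('->', 'q', 'r')}
print('r' in derive(sigma))            # True: r is deducible from sigma
```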
Note that "completeness" has a different meaning here than it does in the context of Gödel's first incompleteness theorem, which states that no "recursive", "consistent" set of non-logical axioms formula_38 of the Theory of Arithmetic is "complete", in the sense that there will always exist an arithmetic statement formula_4 such that neither formula_4 nor formula_41 can be proved from the given set of axioms.
There is thus, on the one hand, the notion of "completeness of a deductive system" and on the other hand that of "completeness of a set of non-logical axioms". The completeness theorem and the incompleteness theorem, despite their names, do not contradict one another.
Further discussion.
Early mathematicians regarded axiomatic geometry as a model of physical space, and obviously, there could only be one such model. The idea that alternative mathematical systems might exist was very troubling to mathematicians of the 19th century and the developers of systems such as Boolean algebra made elaborate efforts to derive them from traditional arithmetic. Galois showed just before his untimely death that these efforts were largely wasted. Ultimately, the abstract parallels between algebraic systems were seen to be more important than the details, and modern algebra was born. In the modern view, axioms may be any set of formulas, as long as they are not known to be inconsistent.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "l"
},
{
"math_id": 1,
"text": "l^2 = x^2 + y^2 + z^2"
},
{
"math_id": 2,
"text": "s"
},
{
"math_id": 3,
"text": "s^2 = c^2 t^2 - x^2 - y^2 - z^2"
},
{
"math_id": 4,
"text": "\\phi"
},
{
"math_id": 5,
"text": "\\chi"
},
{
"math_id": 6,
"text": "\\psi"
},
{
"math_id": 7,
"text": "\\neg"
},
{
"math_id": 8,
"text": "\\to"
},
{
"math_id": 9,
"text": "\\phi \\to (\\psi \\to \\phi)"
},
{
"math_id": 10,
"text": "(\\phi \\to (\\psi \\to \\chi)) \\to ((\\phi \\to \\psi) \\to (\\phi \\to \\chi))"
},
{
"math_id": 11,
"text": "(\\lnot \\phi \\to \\lnot \\psi) \\to (\\psi \\to \\phi)."
},
{
"math_id": 12,
"text": "A"
},
{
"math_id": 13,
"text": "B"
},
{
"math_id": 14,
"text": "C"
},
{
"math_id": 15,
"text": "A \\to (B \\to A)"
},
{
"math_id": 16,
"text": "(A \\to \\lnot B) \\to (C \\to (A \\to \\lnot B))"
},
{
"math_id": 17,
"text": "\\mathfrak{L}"
},
{
"math_id": 18,
"text": "x"
},
{
"math_id": 19,
"text": "x = x"
},
{
"math_id": 20,
"text": "="
},
{
"math_id": 21,
"text": "t"
},
{
"math_id": 22,
"text": "\\forall x \\, \\phi \\to \\phi^x_t"
},
{
"math_id": 23,
"text": "\\phi^x_t"
},
{
"math_id": 24,
"text": "P"
},
{
"math_id": 25,
"text": "P(t)"
},
{
"math_id": 26,
"text": "\\forall x \\phi \\to \\phi^x_t"
},
{
"math_id": 27,
"text": "\\phi^x_t \\to \\exists x \\, \\phi"
},
{
"math_id": 28,
"text": "\\mathfrak{L}_{NT} = \\{0, S\\}"
},
{
"math_id": 29,
"text": "0"
},
{
"math_id": 30,
"text": "S"
},
{
"math_id": 31,
"text": "\\forall x. \\lnot (Sx = 0) "
},
{
"math_id": 32,
"text": "\\forall x. \\forall y. (Sx = Sy \\to x = y) "
},
{
"math_id": 33,
"text": "(\\phi(0) \\land \\forall x.\\,(\\phi(x) \\to \\phi(Sx))) \\to \\forall x.\\phi(x)"
},
{
"math_id": 34,
"text": "\\mathfrak{L}_{NT}"
},
{
"math_id": 35,
"text": "\\mathfrak{N} = \\langle\\N, 0, S\\rangle"
},
{
"math_id": 36,
"text": "\\N"
},
{
"math_id": 37,
"text": "\\Lambda"
},
{
"math_id": 38,
"text": "\\Sigma"
},
{
"math_id": 39,
"text": "\\{(\\Gamma, \\phi)\\}"
},
{
"math_id": 40,
"text": "\\text{if }\\Sigma \\models \\phi\\text{ then }\\Sigma \\vdash \\phi"
},
{
"math_id": 41,
"text": "\\lnot\\phi"
}
] | https://en.wikipedia.org/wiki?curid=928 |
928060 | Jet bundle | In differential topology, the jet bundle is a certain construction that makes a new smooth fiber bundle out of a given smooth fiber bundle. It makes it possible to write differential equations on sections of a fiber bundle in an invariant form. Jets may also be seen as the coordinate-free versions of Taylor expansions.
Historically, jet bundles are attributed to Charles Ehresmann, and were an advance on the method (prolongation) of Élie Cartan, of dealing "geometrically" with higher derivatives, by imposing differential form conditions on newly introduced formal variables. Jet bundles are sometimes called sprays, although sprays usually refer more specifically to the associated vector field induced on the corresponding bundle (e.g., the geodesic spray on Finsler manifolds.)
Since the early 1980s, jet bundles have appeared as a concise way to describe phenomena associated with the derivatives of maps, particularly those associated with the calculus of variations. Consequently, the jet bundle is now recognized as the correct domain for a geometrical covariant field theory and much work is done in general relativistic formulations of fields using this approach.
Jets.
Suppose "M" is an "m"-dimensional manifold and that ("E", π, "M") is a fiber bundle. For "p" ∈ "M", let Γ(p) denote the set of all local sections whose domain contains "p". Let &NoBreak;&NoBreak; be a multi-index (an "m"-tuple of non-negative integers, not necessarily in ascending order), then define:
formula_0
Define the local sections σ, η ∈ Γ(p) to have the same "r"-jet at "p" if
formula_1
The relation that two maps have the same "r"-jet is an equivalence relation. An "r"-jet is an equivalence class under this relation, and the "r"-jet with representative σ is denoted formula_2. The integer "r" is also called the order of the jet, "p" is its source and σ("p") is its target.
Jet manifolds.
The "r"-th jet manifold of π is the set
formula_3
We may define projections "πr" and "π""r",0 called the source and target projections respectively, by
formula_4
If 1 ≤ "k" ≤ "r", then the "k"-jet projection is the function "πr,k" defined by
formula_5
From this definition, it is clear that "πr" = "π" ∘ "π""r",0 and that if 0 ≤ "m" ≤ "k", then "πr,m" = "πk,m" ∘ "πr,k". It is conventional to regard "πr,r" as the identity map on "J r"("π") and to identify "J" 0("π") with "E".
The functions "πr,k", "π""r",0 and "πr" are smooth surjective submersions.
A coordinate system on "E" will generate a coordinate system on "J r"("π"). Let ("U", "u") be an adapted coordinate chart on "E", where "u" = ("xi", "uα"). The induced coordinate chart ("Ur", "ur") on "J r"("π") is defined by
formula_6
where
formula_7
and the formula_8 functions known as the derivative coordinates:
formula_9
Given an atlas of adapted charts ("U", "u") on "E", the corresponding collection of charts ("U r", "u r") is a finite-dimensional "C"∞ atlas on "J r"("π").
Jet bundles.
Since the atlas on each formula_10 defines a manifold, the triples "formula_11", "formula_12" and "formula_13" all define fibered manifolds. In particular, if "formula_14" is a fiber bundle, the triple "formula_13" defines the "r"-th jet bundle of π.
If "W" ⊂ "M" is an open submanifold, then
formula_15
If "p" ∈ "M", then the fiber formula_16 is denoted formula_17.
Let σ be a local section of π with domain "W" ⊂ "M". The "r"-th jet prolongation of σ is the map formula_18 defined by
formula_19
Note that formula_20, so formula_21 really is a section. In local coordinates, formula_21 is given by
formula_22
We identify "formula_23" with formula_24 .
Algebro-geometric perspective.
An independently motivated construction of the sheaf of sections formula_25 is given.
Consider a diagonal map formula_26, where the smooth manifold formula_27 is a locally ringed space by formula_28 for each open formula_29. Let formula_30 be the ideal sheaf of formula_31, equivalently let formula_30 be the sheaf of smooth germs which vanish on formula_31 for all formula_32. The pullback of the quotient sheaf formula_33 from formula_34 to formula_27 by formula_35 is the sheaf of k-jets.
The direct limit of the sequence of injections given by the canonical inclusions formula_36 of sheaves, gives rise to the infinite jet sheaf formula_37. Observe that by the direct limit construction it is a filtered ring.
Example.
If π is the trivial bundle ("M" × R, pr1, "M"), then there is a canonical diffeomorphism between the first jet bundle formula_38 and "T*M" × R. To construct this diffeomorphism, for each σ in formula_39 write formula_40.
Then, whenever "p" ∈ "M"
formula_41
Consequently, the mapping
formula_42
is well-defined and is clearly injective. Writing it out in coordinates shows that it is a diffeomorphism, because if "(xi, u)" are coordinates on "M" × R, where "u" = idR is the identity coordinate, then the derivative coordinates "ui" on "J1(π)" correspond to the coordinates ∂"i" on "T*M".
Likewise, if π is the trivial bundle (R × "M", pr1, R), then there exists a canonical diffeomorphism between formula_38 and R × "TM".
Contact structure.
The space "Jr"(π) carries a natural distribution, that is, a sub-bundle of the tangent bundle "TJr"(π)), called the "Cartan distribution". The Cartan distribution is spanned by all tangent planes to graphs of holonomic sections; that is, sections of the form "jrφ" for "φ" a section of π.
The annihilator of the Cartan distribution is a space of differential one-forms called contact forms, on "Jr"(π). The space of differential one-forms on "Jr"(π) is denoted by formula_43 and the space of contact forms is denoted by formula_44. A one form is a contact form provided its pullback along every prolongation is zero. In other words, formula_45 is a contact form if and only if
formula_46
for all local sections σ of π over "M".
The Cartan distribution is the main geometrical structure on jet spaces and plays an important role in the geometric theory of partial differential equations. The Cartan distributions are completely non-integrable. In particular, they are not involutive. The dimension of the Cartan distribution grows with the order of the jet space. However, on the space of infinite jets "J∞" the Cartan distribution becomes involutive and finite-dimensional: its dimension coincides with the dimension of the base manifold "M".
Example.
Consider the case "(E, π, M)", where "E" ≃ R2 and "M" ≃ R. Then, "(J1(π), π, M)" defines the first jet bundle, and may be coordinated by "(x, u, u1)", where
formula_47
for all "p" ∈ "M" and σ in Γ"p"(π). A general 1-form on "J1(π)" takes the form
formula_48
A section σ in Γ"p"(π) has first prolongation
formula_49
Hence, "(j1σ)*θ" can be calculated as
formula_50
This will vanish for all sections σ if and only if "c" = 0 and "a" = −"bσ′(x)". Hence, θ = "b(x, u, u1)θ0" must necessarily be a multiple of the basic contact form θ0 = "du" − "u1dx". Proceeding to the second jet space "J2(π)" with additional coordinate "u2", such that
formula_51
a general 1-form has the construction
formula_52
This is a contact form if and only if
formula_53
which implies that "e" = 0 and "a" = −"bσ′(x)" − "cσ′′(x)". Therefore, θ is a contact form if and only if
formula_54
where θ1 = "du"1 − "u"2"dx" is the next basic contact form (Note that here we are identifying the form θ0 with its pull-back formula_55 to "J2(π)").
In general, providing "x, u" ∈ R, a contact form on "Jr+1(π)" can be written as a linear combination of the basic contact forms
formula_56
where
formula_57
Similar arguments lead to a complete characterization of all contact forms.
In local coordinates, every contact one-form on "Jr+1(π)" can be written as a linear combination
formula_58
with smooth coefficients formula_59 of the basic contact forms
formula_60
"|I|" is known as the order of the contact form formula_61. Note that contact forms on "Jr+1(π)" have orders at most "r". Contact forms provide a characterization of those local sections of "πr+1" which are prolongations of sections of π.
Let ψ ∈ Γ"W"("πr+1"), then "ψ" = "jr+1"σ where σ ∈ Γ"W"(π) if and only if formula_62
Vector fields.
A general vector field on the total space "E", coordinated by formula_63, is
formula_64
A vector field is called horizontal, meaning that all the vertical coefficients vanish, if formula_65 = 0.
A vector field is called vertical, meaning that all the horizontal coefficients vanish, if "ρi" = 0.
For fixed "(x, u)", we identify
formula_66
having coordinates "(x, u, ρi, φα)", with an element in the fiber "TxuE" of "TE" over "(x, u)" in "E", called a tangent vector in "TE". A section
formula_67
is called a vector field on "E" with
formula_68
and ψ in "Γ(TE)".
The jet bundle "Jr(π)" is coordinated by formula_69. For fixed "(x, u, w)", identify
formula_70
having coordinates
formula_71
with an element in the fiber formula_72 of "TJr(π)" over "(x, u, w)" ∈ "Jr(π)", called a tangent vector in "TJr(π)". Here,
formula_73
are real-valued functions on "Jr(π)". A section
formula_74
is a vector field on "Jr(π)", and we say formula_75
Partial differential equations.
Let "(E, π, M)" be a fiber bundle. An "r"-th order partial differential equation on π is a closed embedded submanifold "S" of the jet manifold "Jr(π)". A solution is a local section σ ∈ Γ"W"(π) satisfying formula_76, for all "p" in "M".
Consider an example of a first order partial differential equation.
Example.
Let π be the trivial bundle (R2 × R, pr1, R2) with global coordinates ("x"1, "x"2, "u"1). Then the map "F" : "J"1(π) → R defined by
formula_77
gives rise to the differential equation
formula_78
which can be written
formula_79
The particular
formula_80
has first prolongation given by
formula_81
and is a solution of this differential equation, because
formula_82
and so formula_83 for "every" "p" ∈ R2.
Jet prolongation.
A local diffeomorphism "ψ" : "Jr"("π") → "Jr"("π") defines a contact transformation of order "r" if it preserves the contact ideal, meaning that if θ is any contact form on "Jr"("π"), then "ψ*θ" is also a contact form.
The flow generated by a vector field "Vr" on the jet space "Jr(π)" forms a one-parameter group of contact transformations if and only if the Lie derivative formula_84 of any contact form θ preserves the contact ideal.
Let us begin with the first order case. Consider a general vector field "V"1 on "J"1("π"), given by
formula_85
We now apply formula_86 to the basic contact forms formula_87 and expand the exterior derivative of the functions in terms of their coordinates to obtain:
formula_88
Therefore, "V1" determines a contact transformation if and only if the coefficients of "dxi" and formula_89 in the formula vanish. The latter requirements imply the contact conditions
formula_90
The former requirements provide explicit formulae for the coefficients of the first derivative terms in "V1":
formula_91
where
formula_92
denotes the zeroth order truncation of the total derivative "Di".
Thus, the contact conditions uniquely prescribe the prolongation of any point or contact vector field. That is, if formula_93 satisfies these equations, "Vr" is called the "r"-th prolongation of "V" to a vector field on "Jr"(π).
These results are best understood when applied to a particular example. Hence, let us examine the following.
Example.
Consider the case "(E, π, M)", where "E" ≅ R2 and "M" ≃ R. Then, "(J1(π), π, E)" defines the first jet bundle, and may be coordinated by "(x, u, u1)", where
formula_94
for all "p" ∈ "M" and "σ" in Γ"p"("π"). A contact form on "J1(π)" has the form
formula_95
Consider a vector "V" on "E", having the form
formula_96
Then, the first prolongation of this vector field to "J1(π)" is
formula_97
If we now take the Lie derivative of the contact form with respect to this prolonged vector field, formula_98 we obtain
formula_99
Hence, for preservation of the contact ideal, we require
formula_100
And so the first prolongation of "V" to a vector field on "J1(π)" is
formula_101
Let us also calculate the second prolongation of "V" to a vector field on "J2(π)". We have formula_102 as coordinates on "J2(π)". Hence, the prolonged vector has the form
formula_103
The contact forms are
formula_104
To preserve the contact ideal, we require
formula_105
Now, "θ" has no "u"2 dependency. Hence, from this equation we will pick up the formula for "ρ", which will necessarily be the same result as we found for "V1". Therefore, the problem is analogous to prolonging the vector field "V1" to "J"2(π). That is to say, we may generate the "r"-th prolongation of a vector field by recursively applying the Lie derivative of the contact forms with respect to the prolonged vector fields, "r" times. So, we have
formula_106
and so
formula_107
Therefore, the Lie derivative of the second contact form with respect to "V2" is
formula_108
Hence, for formula_109 to preserve the contact ideal, we require
formula_110
And so the second prolongation of "V" to a vector field on "J"2(π) is
formula_111
Note that the first prolongation of "V" can be recovered by omitting the second derivative terms in "V2", or by projecting back to "J1(π)".
Infinite jet spaces.
The inverse limit of the sequence of projections formula_112 gives rise to the infinite jet space "J∞(π)". A point formula_113 is the equivalence class of sections of π that have the same "k"-jet in "p" as σ for all values of "k". The natural projection π∞ maps formula_113 into "p".
Just by thinking in terms of coordinates, "J∞(π)" appears to be an infinite-dimensional geometric object. In fact, the simplest way of introducing a differentiable structure on "J∞(π)", not relying on differentiable charts, is given by the differential calculus over commutative algebras. Dual to the sequence of projections formula_114 of manifolds is the sequence of injections formula_115 of commutative algebras. Let's denote formula_116 simply by formula_117. Take now the direct limit formula_118 of the formula_117's. It will be a commutative algebra, which can be assumed to be the smooth functions algebra over the geometric object "J∞(π)". Observe that formula_118, being born as a direct limit, carries an additional structure: it is a filtered commutative algebra.
Roughly speaking, a concrete element formula_119 will always belong to some formula_117, so it is a smooth function on the finite-dimensional manifold "Jk"(π) in the usual sense.
Infinitely prolonged PDEs.
Given a "k"-th order system of PDEs "E" ⊆ "Jk(π)", the collection "I(E)" of vanishing on "E" smooth functions on "J∞(π)" is an ideal in the algebra formula_117, and hence in the direct limit formula_118 too.
Enhance "I(E)" by adding all the possible compositions of total derivatives applied to all its elements. This way we get a new ideal "I" of formula_118 which is now closed under the operation of taking total derivative. The submanifold "E"(∞) of "J"∞(π) cut out by "I" is called the infinite prolongation of "E".
Geometrically, "E"(∞) is the manifold of formal solutions of "E". A point formula_113 of "E"(∞) can be easily seen to be represented by a section σ whose "k"-jet's graph is tangent to "E" at the point formula_120 with arbitrarily high order of tangency.
Analytically, if "E" is given by φ = 0, a formal solution can be understood as the set of Taylor coefficients of a section σ in a point "p" that make vanish the Taylor series of formula_121 at the point "p".
Most importantly, the closure properties of "I" imply that "E"(∞) is tangent to the infinite-order contact structure formula_122 on "J∞(π)", so that by restricting formula_122 to "E"(∞) one gets the diffiety formula_123, and can study the associated Vinogradov (C-spectral) sequence.
Remark.
This article has defined jets of local sections of a bundle, but it is possible to define jets of functions "f: M" → "N", where "M" and "N" are manifolds; the jet of "f" then just corresponds to the jet of the section
"grf: M" → "M" × "N"
"grf(p)" = "(p, f(p))"
("grf" is known as the graph of the function "f") of the trivial bundle ("M" × "N", π1, "M"). However, this restriction does not simplify the theory, as the global triviality of π does not imply the global triviality of π1.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n |I| &:= \\sum_{i=1}^m I(i) \\\\\n \\frac{\\partial^{|I|}}{\\partial x^I} &:= \\prod_{i=1}^m \\left( \\frac{\\partial}{\\partial x^i} \\right)^{I(i)}.\n\\end{align}"
},
{
"math_id": 1,
"text": "\n \\left.\\frac{\\partial^{|I|} \\sigma^\\alpha}{\\partial x^I}\\right|_{p} =\n \\left.\\frac{\\partial^{|I|} \\eta^\\alpha}{\\partial x^I}\\right|_{p}, \\quad 0 \\leq |I| \\leq r.\n"
},
{
"math_id": 2,
"text": "j^r_p\\sigma"
},
{
"math_id": 3,
"text": "J^r (\\pi) = \\left \\{j^r_p\\sigma:p \\in M, \\sigma \\in \\Gamma(p) \\right \\}."
},
{
"math_id": 4,
"text": "\\begin{cases}\n \\pi_r: J^r(\\pi) \\to M \\\\\n j^r_p\\sigma \\mapsto p\n\\end{cases}, \\qquad \\begin{cases}\n \\pi_{r, 0}: J^r(\\pi) \\to E \\\\\n j^r_p\\sigma \\mapsto \\sigma(p)\n\\end{cases}"
},
{
"math_id": 5,
"text": "\\begin{cases}\n \\pi_{r, k}: J^r(\\pi) \\to J^{k}(\\pi) \\\\\n j^r_p\\sigma \\mapsto j^{k}_p\\sigma\n\\end{cases}"
},
{
"math_id": 6,
"text": "\\begin{align}\n U^r &= \\left\\{j^r_p \\sigma: p \\in M, \\sigma(p) \\in U\\right\\} \\\\\n u^r &= \\left(x^i, u^\\alpha, u^\\alpha_I\\right)\n\\end{align}"
},
{
"math_id": 7,
"text": "\\begin{align}\n x^i\\left(j^r_p\\sigma\\right) &= x^i(p) \\\\\n u^\\alpha\\left(j^r_p\\sigma\\right) &= u^\\alpha(\\sigma(p))\n\\end{align}"
},
{
"math_id": 8,
"text": "n \\left(\\binom{m+r}{r} - 1\\right)"
},
{
"math_id": 9,
"text": "\\begin{cases}\n u^\\alpha_I:U^k \\to \\mathbf{R} \\\\\n u^\\alpha_I\\left(j^r_p\\sigma\\right) = \\left.\\frac{\\partial^{|I|} \\sigma^\\alpha}{\\partial x^I}\\right|_p\n\\end{cases}"
},
{
"math_id": 10,
"text": "J^r(\\pi)"
},
{
"math_id": 11,
"text": "(J^r(\\pi), \\pi_{r,k}, J^k(\\pi))"
},
{
"math_id": 12,
"text": "(J^r(\\pi), \\pi_{r,0}, E)"
},
{
"math_id": 13,
"text": "(J^r(\\pi), \\pi_{r}, M)"
},
{
"math_id": 14,
"text": "(E, \\pi, M)"
},
{
"math_id": 15,
"text": " J^r \\left(\\pi|_{\\pi^{-1}(W)}\\right) \\cong \\pi^{-1}_r(W).\\,"
},
{
"math_id": 16,
"text": "\\pi^{-1}_r(p)\\,"
},
{
"math_id": 17,
"text": "J^r_p(\\pi)"
},
{
"math_id": 18,
"text": "j^r\\sigma: W \\rightarrow J^r(\\pi)"
},
{
"math_id": 19,
"text": " (j^r \\sigma)(p) = j^r_p \\sigma. \\,"
},
{
"math_id": 20,
"text": "\\pi_r \\circ j^r \\sigma =\\mathbb{id}_W"
},
{
"math_id": 21,
"text": "j^r\\sigma"
},
{
"math_id": 22,
"text": " \\left(\\sigma^\\alpha, \\frac{\\partial^{|I|} \\sigma^\\alpha}{\\partial x^{I}}\\right) \\qquad 1 \\leq |I| \\leq r. \\,"
},
{
"math_id": 23,
"text": "j^ 0\\sigma"
},
{
"math_id": 24,
"text": "\\sigma"
},
{
"math_id": 25,
"text": "\\Gamma J^k\\left(\\pi_{TM}\\right)"
},
{
"math_id": 26,
"text": "\\Delta_n: M \\to \\prod_{i=1}^{n+1} M"
},
{
"math_id": 27,
"text": "M"
},
{
"math_id": 28,
"text": "C^k(U)"
},
{
"math_id": 29,
"text": "U"
},
{
"math_id": 30,
"text": "\\mathcal{I}"
},
{
"math_id": 31,
"text": "\\Delta_n(M)"
},
{
"math_id": 32,
"text": "0 < n \\leq k"
},
{
"math_id": 33,
"text": "{\\Delta_n}^*\\left(\\mathcal{I}/\\mathcal{I}^{n+1}\\right)"
},
{
"math_id": 34,
"text": "\\prod_{i=1}^{n+1} M"
},
{
"math_id": 35,
"text": "\\Delta_n"
},
{
"math_id": 36,
"text": "\\mathcal{I}^{n+1} \\hookrightarrow \\mathcal{I}^n"
},
{
"math_id": 37,
"text": "\\mathcal{J}^\\infty(TM)"
},
{
"math_id": 38,
"text": "J^1(\\pi)"
},
{
"math_id": 39,
"text": "\\Gamma_M(\\pi)"
},
{
"math_id": 40,
"text": "\\bar{\\sigma} = pr_2 \\circ \\sigma \\in C^\\infty(M)\\,"
},
{
"math_id": 41,
"text": "j^1_p \\sigma = \\left\\{ \\psi : \\psi \\in \\Gamma_p (\\pi); \\bar{\\psi}(p) = \\bar{\\sigma}(p); d\\bar{\\psi}_p = d\\bar{\\sigma}_p \\right\\}. \\,"
},
{
"math_id": 42,
"text": "\\begin{cases}\n J^1(\\pi) \\to T^*M \\times \\mathbf{R} \\\\\n j^1_p\\sigma \\mapsto \\left(d\\bar{\\sigma}_p, \\bar{\\sigma}(p)\\right)\n\\end{cases}"
},
{
"math_id": 43,
"text": "\\Lambda^1J^r(\\pi)"
},
{
"math_id": 44,
"text": "\\Lambda_C^r\\pi"
},
{
"math_id": 45,
"text": "\\theta\\in\\Lambda^1J^r\\pi"
},
{
"math_id": 46,
"text": "\\left(j^{r+1}\\sigma\\right)^*\\theta = 0"
},
{
"math_id": 47,
"text": "\\begin{align}\n x\\left(j^1_p\\sigma\\right) &= x(p) = x \\\\\n u\\left(j^1_p\\sigma\\right) &= u(\\sigma(p)) = u(\\sigma(x)) = \\sigma(x) \\\\\n u_1\\left(j^1_p\\sigma\\right) &= \\left.\\frac{\\partial \\sigma}{\\partial x}\\right|_p = \\sigma'(x)\n\\end{align}"
},
{
"math_id": 48,
"text": "\\theta = a(x, u, u_1)dx + b(x, u, u_1)du + c(x, u, u_1)du_1\\,"
},
{
"math_id": 49,
"text": "j^1\\sigma = (u, u_1) = \\left(\\sigma(p), \\left. \\frac{\\partial \\sigma}{\\partial x} \\right|_p \\right)."
},
{
"math_id": 50,
"text": "\\begin{align}\n \\left(j^1_p\\sigma\\right)^* \\theta\n &= \\theta \\circ j^1_p\\sigma \\\\\n &= a(x, \\sigma(x), \\sigma'(x))dx + b(x, \\sigma(x), \\sigma'(x))d(\\sigma(x)) + c(x, \\sigma(x),\\sigma'(x))d(\\sigma'(x)) \\\\\n &= a(x, \\sigma(x), \\sigma'(x))dx + b(x, \\sigma(x), \\sigma'(x))\\sigma'(x)dx + c(x, \\sigma(x), \\sigma'(x))\\sigma''(x)dx \\\\\n &= [a(x, \\sigma(x), \\sigma'(x)) + b(x, \\sigma(x), \\sigma'(x))\\sigma'(x) + c(x, \\sigma(x), \\sigma'(x))\\sigma''(x) ]dx \n\\end{align}"
},
{
"math_id": 51,
"text": "u_2(j^2_p\\sigma) = \\left.\\frac{\\partial^2 \\sigma}{\\partial x^2}\\right|_p = \\sigma''(x)\\,"
},
{
"math_id": 52,
"text": "\\theta = a(x, u, u_1,u_2)dx + b(x, u, u_1,u_2)du + c(x, u, u_1,u_2)du_1 + e(x, u, u_1,u_2)du_2\\,"
},
{
"math_id": 53,
"text": "\\begin{align}\n \\left(j^2_p\\sigma\\right)^* \\theta\n &= \\theta \\circ j^2_p\\sigma \\\\\n &= a(x, \\sigma(x), \\sigma'(x), \\sigma''(x))dx + b(x, \\sigma(x), \\sigma'(x),\\sigma''(x))d(\\sigma(x)) +{} \\\\\n &\\qquad\\qquad c(x, \\sigma(x), \\sigma'(x),\\sigma''(x))d(\\sigma'(x)) + e(x, \\sigma(x), \\sigma'(x),\\sigma''(x))d(\\sigma''(x)) \\\\\n &= adx + b\\sigma'(x)dx + c\\sigma''(x)dx + e\\sigma'''(x)dx \\\\\n &= [a + b\\sigma'(x) + c\\sigma''(x) + e\\sigma'''(x)]dx\\\\\n &= 0\n\\end{align}"
},
{
"math_id": 54,
"text": "\\theta = b(x, \\sigma(x), \\sigma'(x))\\theta_{0} + c(x, \\sigma(x), \\sigma'(x))\\theta_1,"
},
{
"math_id": 55,
"text": "\\left(\\pi_{2,1}\\right)^{*}\\theta_{0}"
},
{
"math_id": 56,
"text": "\\theta_k = du_k - u_{k+1}dx \\qquad k = 0, \\ldots, r - 1\\,"
},
{
"math_id": 57,
"text": " u_k\\left(j^k \\sigma\\right) = \\left.\\frac{\\partial^k \\sigma}{\\partial x^k}\\right|_p."
},
{
"math_id": 58,
"text": "\\theta = \\sum_{|I|=0}^r P_\\alpha^I \\theta_I^\\alpha"
},
{
"math_id": 59,
"text": "P^\\alpha_i(x^i, u^\\alpha, u^\\alpha_I)"
},
{
"math_id": 60,
"text": "\\theta_I^\\alpha = du^\\alpha_I - u^\\alpha_{I,i} dx^i\\,"
},
{
"math_id": 61,
"text": "\\theta_i^\\alpha"
},
{
"math_id": 62,
"text": "\\psi^* (\\theta|_{W}) = 0, \\forall \\theta \\in \\Lambda_C^1 \\pi_{r+1,r}.\\,"
},
{
"math_id": 63,
"text": "(x, u) \\mathrel\\stackrel{\\mathrm{def}}{=} \\left(x^i, u^\\alpha\\right)\\,"
},
{
"math_id": 64,
"text": "V \\mathrel\\stackrel{\\mathrm{def}}{=} \\rho^i(x, u)\\frac{\\partial}{\\partial x^i} + \\phi^{\\alpha}(x, u)\\frac{\\partial}{\\partial u^\\alpha}.\\,"
},
{
"math_id": 65,
"text": "\\phi^\\alpha"
},
{
"math_id": 66,
"text": "V_{(x, u)} \\mathrel\\stackrel{\\mathrm{def}}{=} \\rho^i(x, u) \\frac{\\partial}{\\partial x^i} + \\phi^{\\alpha}(x, u) \\frac{\\partial}{\\partial u^\\alpha}\\,"
},
{
"math_id": 67,
"text": "\\begin{cases}\n \\psi : E \\to TE \\\\\n (x, u) \\mapsto \\psi(x, u) = V\n\\end{cases}"
},
{
"math_id": 68,
"text": "V = \\rho^i(x, u) \\frac{\\partial}{\\partial x^i} + \\phi^\\alpha(x, u) \\frac{\\partial}{\\partial u^\\alpha}"
},
{
"math_id": 69,
"text": "(x, u, w) \\mathrel\\stackrel{\\mathrm{def}}{=} \\left(x^i, u^\\alpha, w_i^\\alpha\\right)\\,"
},
{
"math_id": 70,
"text": "\n V_{(x, u, w)} \\mathrel\\stackrel{\\mathrm{def}}{=}\n V^i(x, u, w) \\frac{\\partial}{\\partial x^i} +\n V^\\alpha(x, u, w) \\frac{\\partial}{\\partial u^\\alpha} +\n V^\\alpha_i(x, u, w) \\frac{\\partial}{\\partial w^\\alpha_i} +\n V^\\alpha_{i_1 i_2}(x, u, w) \\frac{\\partial}{\\partial w^\\alpha_{i_1 i_2}} + \\cdots +\n V^\\alpha_{i_1 \\cdots i_r}(x, u, w) \\frac{\\partial}{\\partial w^\\alpha_{i_1 \\cdots i_r}}\n"
},
{
"math_id": 71,
"text": "\\left(x, u, w, v^\\alpha_i, v^\\alpha_{i_1 i_2}, \\cdots, v^\\alpha_{i_1 \\cdots i_r}\\right),"
},
{
"math_id": 72,
"text": "T_{xuw}(J^r\\pi)"
},
{
"math_id": 73,
"text": "v^\\alpha_i, v^\\alpha_{i_1 i_2}, \\ldots, v^\\alpha_{i_1 \\cdots i_r}"
},
{
"math_id": 74,
"text": "\\begin{cases}\n \\Psi : J^r(\\pi) \\to TJ^r(\\pi) \\\\\n (x, u, w) \\mapsto \\Psi(u, w) = V\n\\end{cases}"
},
{
"math_id": 75,
"text": "\\Psi \\in \\Gamma(T\\left(J^r\\pi\\right))."
},
{
"math_id": 76,
"text": "j^{r}_p\\sigma \\in S"
},
{
"math_id": 77,
"text": "F = u^1_1 u^1_2 - 2x^2 u^1"
},
{
"math_id": 78,
"text": "S = \\left\\{j^1_p\\sigma \\in J^1\\pi\\ :\\ \\left(u^1_1u^1_2 - 2x^2u^1\\right)\\left(j^1_p\\sigma\\right) = 0\\right\\}"
},
{
"math_id": 79,
"text": "\\frac{\\partial \\sigma}{\\partial x^1}\\frac{\\partial \\sigma}{\\partial x^2} - 2x^2\\sigma = 0."
},
{
"math_id": 80,
"text": "\\begin{cases}\n \\sigma : \\mathbf{R}^2 \\to \\mathbf{R}^2 \\times \\mathbf{R} \\\\\n \\sigma(p_1, p_2) = \\left( p^1, p^2, p^1(p^2)^2 \\right)\n\\end{cases}"
},
{
"math_id": 81,
"text": "j^1\\sigma\\left(p_1, p_2\\right) = \\left( p^1, p^2, p^1\\left(p^2\\right)^2, \\left(p^2\\right)^2, 2p^1 p^2 \\right) "
},
{
"math_id": 82,
"text": "\\begin{align}\n \\left(u^1_1 u^1_2 - 2x^2 u^1 \\right)\\left(j^1_p\\sigma\\right)\n &= u^1_1\\left(j^1_p\\sigma\\right)u^1_2\\left(j^1_p\\sigma\\right) - 2x^2\\left(j^1_p\\sigma\\right)u^1\\left(j^1_p\\sigma\\right) \\\\\n &= \\left(p^2\\right)^2 \\cdot 2p^1 p^2 - 2 \\cdot p^2 \\cdot p^1\\left(p^2\\right)^2 \\\\\n &= 2p^1\\left(p^2\\right)^3 - 2p^1 \\left(p^2\\right)^3 \\\\\n &= 0\n\\end{align}"
},
{
"math_id": 83,
"text": "j^1_p\\sigma \\in S"
},
{
"math_id": 84,
"text": "\\mathcal{L}_{V^r}(\\theta)"
},
{
"math_id": 85,
"text": "V^1\\ \\stackrel{\\mathrm{def}}{=}\\ \\rho^i\\left(x^i, u^\\alpha, u_I^\\alpha\\right)\\frac{\\partial}{\\partial x^i} + \\phi^{\\alpha}\\left(x^i, u^\\alpha, u_I^\\alpha\\right)\\frac{\\partial}{\\partial u^{\\alpha}} + \\chi^{\\alpha}_i\\left(x^i, u^\\alpha, u_I^\\alpha\\right)\\frac{\\partial}{\\partial u^{\\alpha}_i}."
},
{
"math_id": 86,
"text": "\\mathcal{L}_{V^1}"
},
{
"math_id": 87,
"text": "\\theta^{\\alpha}_0 = du^{\\alpha} - u_i^{\\alpha}dx^i,"
},
{
"math_id": 88,
"text": "\\begin{align}\n \\mathcal{L}_{V^1}\\left(\\theta^{\\alpha}_0\\right)\n &= \\mathcal{L}_{V^1}\\left(du^{\\alpha} - u_i^{\\alpha}dx^i\\right) \\\\\n &= \\mathcal{L}_{V^1}du^{\\alpha} - \\left(\\mathcal{L}_{V^1}u_i^{\\alpha}\\right)dx^i - u_i^{\\alpha}\\left(\\mathcal{L}_{V^1}dx^i\\right) \\\\\n &= d\\left(V^1u^{\\alpha}\\right) - V^1u_i^{\\alpha}dx^i - u_i^{\\alpha}d\\left(V^1 x^i\\right) \\\\\n &= d\\phi^{\\alpha} - \\chi^{\\alpha}_idx^i - u_i^{\\alpha}d\\rho^i \\\\\n &= \\frac{\\partial \\phi^{\\alpha}}{\\partial x^i} dx^i + \\frac{\\partial \\phi^{\\alpha}}{\\partial u^k} du^k + \\frac{\\partial \\phi^{\\alpha}}{\\partial u^k_i} du^k_i - \\chi^{\\alpha}_i dx^i - u_i^{\\alpha}\\left[ \\frac{\\partial \\rho^i}{\\partial x^m} dx^m + \\frac{\\partial \\rho^i}{\\partial u^k} du^k + \\frac{\\partial \\rho^i}{\\partial u^k_m} du^k_m \\right] \\\\\n &= \\frac{\\partial \\phi^{\\alpha}}{\\partial x^i} dx^i +\n \\frac{\\partial \\phi^{\\alpha}}{\\partial u^k} \\left(\\theta^k + u_i^k dx^i\\right) +\n \\frac{\\partial \\phi^{\\alpha}}{\\partial u^k_i} du^k_i -\n \\chi^{\\alpha}_i dx^i - u_l^{\\alpha} \\left[\n \\frac{\\partial \\rho^l}{\\partial x^i} dx^i +\n \\frac{\\partial \\rho^l}{\\partial u^k} \\left(\\theta^k + u_i^k dx^i\\right) +\n \\frac{\\partial \\rho^l}{\\partial u^k_i} du^k_i\n \\right] \\\\\n &= \\left[\n \\frac{\\partial \\phi^{\\alpha}}{\\partial x^i} +\n \\frac{\\partial \\phi^{\\alpha}}{\\partial u^k}u_i^k -\n u_l^\\alpha \\left(\\frac{\\partial \\rho^l}{\\partial x^i} + \\frac{\\partial \\rho^l}{\\partial u^k}u_i^k\\right) -\n \\chi^{\\alpha}_i\n \\right] dx^i +\n \\left[ \\frac{\\partial \\phi^{\\alpha}}{\\partial u^k_i} - u_l^{\\alpha}\\frac{\\partial \\rho^l}{\\partial u^k_i}\\right] du^k_i +\n \\left( \\frac{\\partial \\phi^{\\alpha}}{\\partial u^k} - u_l^{\\alpha}\\frac{\\partial \\rho^l}{\\partial u^k} \\right)\\theta^k\n\\end{align}"
},
{
"math_id": 89,
"text": "du^k_i"
},
{
"math_id": 90,
"text": "\\frac{\\partial \\phi^{\\alpha}}{\\partial u^k_i} - u^{\\alpha}_l \\frac{\\partial \\rho^l}{\\partial u^k_i} = 0"
},
{
"math_id": 91,
"text": "\\chi^{\\alpha}_i = \\widehat{D}_i \\phi^{\\alpha} - u^{\\alpha}_l\\left(\\widehat{D}_i\\rho^l\\right)"
},
{
"math_id": 92,
"text": "\\widehat{D}_i = \\frac{\\partial}{\\partial x^i} + u^k_i\\frac{\\partial}{\\partial u^k}"
},
{
"math_id": 93,
"text": "\\mathcal{L}_{V^r}"
},
{
"math_id": 94,
"text": "\\begin{align}\n x(j^1_{p}\\sigma) &= x(p) = x \\\\\n u(j^1_{p}\\sigma) &= u(\\sigma(p)) = u(\\sigma(x)) = \\sigma(x) \\\\\n u_1(j^1_{p}\\sigma) &= \\left.\\frac{\\partial \\sigma}{\\partial x}\\right|_{p} = \\dot{\\sigma}(x)\n\\end{align}"
},
{
"math_id": 95,
"text": "\\theta = du - u_1 dx"
},
{
"math_id": 96,
"text": "V = x \\frac{\\partial}{\\partial u} - u \\frac{\\partial}{\\partial x}"
},
{
"math_id": 97,
"text": "\\begin{align}\n V^1 &= V + Z \\\\\n &= x \\frac{\\partial}{\\partial u} - u \\frac{\\partial}{\\partial x} + Z \\\\\n &= x \\frac{\\partial}{\\partial u} - u \\frac{\\partial}{\\partial x} + \\rho(x, u, u_1) \\frac{\\partial}{\\partial u_1}\n\\end{align}"
},
{
"math_id": 98,
"text": "\\mathcal{L}_{V^1}(\\theta),"
},
{
"math_id": 99,
"text": "\\begin{align}\n \\mathcal{L}_{V^1}(\\theta)\n &= \\mathcal{L}_{V^1}(du - u_1dx) \\\\\n &= \\mathcal{L}_{V^1}du - \\left(\\mathcal{L}_{V^1}u_1\\right)dx - u_1\\left(\\mathcal{L}_{V^1}dx\\right) \\\\\n &= d\\left(V^1u\\right) - V^1 u_1 dx - u_1 d\\left(V^1x\\right) \\\\\n &= dx - \\rho(x, u, u_1)dx + u_1 du \\\\\n &= (1 - \\rho(x, u, u_1))dx + u_1 du \\\\\n &= [1 - \\rho(x, u, u_1)]dx + u_1(\\theta + u_1 dx) && du = \\theta + u_1 dx \\\\\n &= [1 + u_1u_1 - \\rho(x, u, u_1)]dx + u_1\\theta \n\\end{align}"
},
{
"math_id": 100,
"text": "1 + u_1 u_1 - \\rho(x, u, u_1) = 0 \\quad \\Leftrightarrow \\quad \\rho(x, u, u_1) = 1 + u_1 u_1."
},
{
"math_id": 101,
"text": "V^1 = x \\frac{\\partial}{\\partial u} - u \\frac{\\partial}{\\partial x} + (1 + u_1u_1)\\frac{\\partial}{\\partial u_1}."
},
{
"math_id": 102,
"text": "\\{x, u, u_1, u_2\\}"
},
{
"math_id": 103,
"text": " V^2 = x \\frac{\\partial}{\\partial u} - u \\frac{\\partial}{\\partial x} + \\rho(x, u, u_1, u_2)\\frac{\\partial}{\\partial u_1} + \\phi(x, u, u_1, u_2)\\frac{\\partial}{\\partial u_2}."
},
{
"math_id": 104,
"text": "\\begin{align}\n \\theta &= du - u_1dx \\\\\n \\theta_1 &= du_1 - u_2dx\n\\end{align}"
},
{
"math_id": 105,
"text": "\\begin{align}\n \\mathcal{L}_{V^2}(\\theta) &= 0 \\\\\n \\mathcal{L}_{V^2}(\\theta_1) &= 0\n\\end{align}"
},
{
"math_id": 106,
"text": "\\rho(x, u, u_1) = 1 + u_1 u_1"
},
{
"math_id": 107,
"text": "\\begin{align}\n V^2 &= V^1 + \\phi(x, u, u_1, u_2)\\frac{\\partial}{\\partial u_2} \\\\\n &= x \\frac{\\partial}{\\partial u} - u \\frac{\\partial}{\\partial x} + (1 + u_1 u_1)\\frac{\\partial}{\\partial u_1} + \\phi(x, u, u_1, u_2)\\frac{\\partial}{\\partial u_2}\n\\end{align}"
},
{
"math_id": 108,
"text": "\\begin{align}\n \\mathcal{L}_{V^2}(\\theta_1)\n &= \\mathcal{L}_{V^2}(du_1 - u_2dx) \\\\\n &= \\mathcal{L}_{V^2}du_1 - \\left(\\mathcal{L}_{V^2}u_2\\right)dx - u_2\\left(\\mathcal{L}_{V^2}dx\\right) \\\\\n &= d(V^2 u_1) - V^2u_2dx - u_2d(V^2x) \\\\\n &= d(1 + u_1 u_1) - \\phi(x, u, u_1, u_2)dx + u_2du \\\\\n &= 2u_1du_1 - \\phi(x, u, u_1, u_2)dx + u_2du \\\\\n &= 2u_1du_1 - \\phi(x, u, u_1, u_2)dx + u_2 (\\theta + u_1dx) & du &= \\theta + u_1 dx \\\\\n &= 2u_1(\\theta_1 + u_2dx) - \\phi(x, u, u_1, u_2)dx + u_2(\\theta + u_1dx) & du_1 &= \\theta_1 + u_2 dx \\\\\n &= [3u_1u_2 - \\phi(x, u, u_1, u_2)]dx + u_2\\theta + 2u_1\\theta_1 \n\\end{align}"
},
{
"math_id": 109,
"text": "\\mathcal{L}_{V^2}(\\theta_1)"
},
{
"math_id": 110,
"text": "3u_1 u_2 - \\phi(x, u, u_1, u_2) = 0 \\quad \\Leftrightarrow \\quad \\phi(x, u, u_1, u_2) = 3u_1 u_2."
},
{
"math_id": 111,
"text": " V^2 = x \\frac{\\partial}{\\partial u} - u \\frac{\\partial}{\\partial x} + (1 + u_1 u_1)\\frac{\\partial}{\\partial u_1} + 3u_1 u_2\\frac{\\partial}{\\partial u_2}."
},
{
"math_id": 112,
"text": "\\pi_{k+1,k}:J^{k+1}(\\pi)\\to J^k(\\pi)"
},
{
"math_id": 113,
"text": "j_p^\\infty(\\sigma)"
},
{
"math_id": 114,
"text": "\\pi_{k+1,k}: J^{k+1}(\\pi) \\to J^k(\\pi)"
},
{
"math_id": 115,
"text": "\\pi_{k+1,k}^*: C^\\infty(J^{k}(\\pi)) \\to C^\\infty\\left(J^{k+1}(\\pi)\\right)"
},
{
"math_id": 116,
"text": "C^\\infty(J^{k}(\\pi))"
},
{
"math_id": 117,
"text": "\\mathcal{F}_k(\\pi)"
},
{
"math_id": 118,
"text": "\\mathcal{F}(\\pi)"
},
{
"math_id": 119,
"text": "\\varphi\\in\\mathcal{F}(\\pi)"
},
{
"math_id": 120,
"text": "j_p^k(\\sigma)"
},
{
"math_id": 121,
"text": "\\varphi\\circ j^k(\\sigma)"
},
{
"math_id": 122,
"text": "\\mathcal{C}"
},
{
"math_id": 123,
"text": "(E_{(\\infty)}, \\mathcal{C}|_{E_{(\\infty)}})"
}
] | https://en.wikipedia.org/wiki?curid=928060 |
9282128 | Hypoelliptic operator | In the theory of partial differential equations, a partial differential operator formula_0 defined on an open subset
formula_1
is called hypoelliptic if for every distribution formula_2 defined on an open subset formula_3 such that formula_4 is formula_5 (smooth), formula_2 must also be formula_5.
If this assertion holds with formula_5 replaced by real-analytic, then formula_0 is said to be "analytically hypoelliptic".
Every elliptic operator with formula_5 coefficients is hypoelliptic. In particular, the Laplacian is an example of a hypoelliptic operator (the Laplacian is also analytically hypoelliptic). In addition, the operator for the heat equation (formula_6)
formula_7
(where formula_8) is hypoelliptic but not elliptic. However, the operator for the wave equation (formula_9)
formula_10
(where formula_11) is not hypoelliptic.
References.
"This article incorporates material from Hypoelliptic on PlanetMath, which is licensed under the ." | [
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "U \\subset{\\mathbb{R}}^n"
},
{
"math_id": 2,
"text": "u"
},
{
"math_id": 3,
"text": "V \\subset U"
},
{
"math_id": 4,
"text": "Pu"
},
{
"math_id": 5,
"text": "C^\\infty"
},
{
"math_id": 6,
"text": "P(u)=u_t - k\\,\\Delta u\\,"
},
{
"math_id": 7,
"text": "P= \\partial_t - k\\,\\Delta_x\\,"
},
{
"math_id": 8,
"text": "k>0"
},
{
"math_id": 9,
"text": "P(u)=u_{tt} - c^2\\,\\Delta u\\,"
},
{
"math_id": 10,
"text": " P= \\partial^2_t - c^2\\,\\Delta_x\\,"
},
{
"math_id": 11,
"text": "c\\ne 0"
}
] | https://en.wikipedia.org/wiki?curid=9282128 |
9284 | Equation | Mathematical formula expressing equality
In mathematics, an equation is a mathematical formula that expresses the equality of two expressions, by connecting them with the equals sign =. The word "equation" and its cognates in other languages may have subtly different meanings; for example, in French an "équation" is defined as containing one or more variables, while in English, any well-formed formula consisting of two expressions related with an equals sign is an equation.
Solving an equation containing variables consists of determining which values of the variables make the equality true. The variables for which the equation has to be solved are also called unknowns, and the values of the unknowns that satisfy the equality are called solutions of the equation. There are two kinds of equations: identities and conditional equations. An identity is true for all values of the variables. A conditional equation is only true for particular values of the variables.
The "=" symbol, which appears in every equation, was invented in 1557 by Robert Recorde, who considered that nothing could be more equal than parallel straight lines with the same length.
Description.
An equation is written as two expressions, connected by an equals sign ("="). The expressions on the two sides of the equals sign are called the "left-hand side" and "right-hand side" of the equation. Very often the right-hand side of an equation is assumed to be zero. This does not reduce the generality, as this can be realized by subtracting the right-hand side from both sides.
The most common type of equation is a polynomial equation (also commonly called an "algebraic equation") in which the two sides are polynomials.
The sides of a polynomial equation contain one or more terms. For example, the equation
formula_0
has left-hand side formula_1, which has four terms, and right-hand side formula_2, consisting of just one term. The names of the variables suggest that "x" and "y" are unknowns, and that "A", "B", and "C" are parameters, but this is normally fixed by the context (in some contexts, y may be a parameter, or "A", "B", and "C" may be ordinary variables).
An equation is analogous to a scale into which weights are placed. When equal weights of something (e.g., grain) are placed into the two pans, the two weights cause the scale to be in balance and are said to be equal. If a quantity of grain is removed from one pan of the balance, an equal amount of grain must be removed from the other pan to keep the scale in balance. More generally, an equation remains in balance if the same operation is performed on both of its sides.
Properties.
Two equations or two systems of equations are "equivalent" if they have the same set of solutions. The following operations transform an equation or a system of equations into an equivalent one – provided that the operations are meaningful for the expressions they are applied to: adding or subtracting the same quantity to both sides of an equation; multiplying or dividing both sides of an equation by a nonzero quantity; applying an identity to transform one side of the equation; and, for a system, adding to both sides of an equation the corresponding side of another equation multiplied by the same quantity.
If some function is applied to both sides of an equation, the resulting equation has the solutions of the initial equation among its solutions, but may have further solutions called extraneous solutions. For example, the equation formula_3 has the solution formula_4 Raising both sides to the exponent of 2 (which means applying the function formula_5 to both sides of the equation) changes the equation to formula_6, which not only has the previous solution but also introduces the extraneous solution, formula_7 Moreover, if the function is not defined at some values (such as 1/"x", which is not defined for "x" = 0), solutions existing at those values may be lost. Thus, caution must be exercised when applying such a transformation to an equation.
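To illustrate, the following small sketch uses the SymPy library (an assumption here; the library is not part of the discussion above) to show the extraneous solution introduced by squaring:
from sympy import symbols, solveset, S

x = symbols('x')
print(solveset(x - 1, x, domain=S.Reals))       # {1}
print(solveset(x**2 - 1, x, domain=S.Reals))    # {-1, 1}: squaring added the extraneous root -1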
The above transformations are the basis of most elementary methods for equation solving, as well as some less elementary ones, like Gaussian elimination.
Examples.
Analogous illustration.
An equation is analogous to a weighing scale, balance, or seesaw.
Each side of the equation corresponds to one side of the balance. Different quantities can be placed on each side: if the weights on the two sides are equal, the scale balances, and in analogy, the equality that represents the balance is also balanced (if not, then the lack of balance corresponds to an inequality represented by an inequation).
In the illustration, "x", "y" and "z" are all different quantities (in this case real numbers) represented as circular weights, and each of "x", "y", and "z" has a different weight. Addition corresponds to adding weight, while subtraction corresponds to removing weight from what is already there. When equality holds, the total weight on each side is the same.
Parameters and unknowns.
Equations often contain terms other than the unknowns. These other terms, which are assumed to be "known", are usually called "constants", "coefficients" or "parameters".
An example of an equation involving "x" and "y" as unknowns and the parameter "R" is
formula_8
When "R "is chosen to have the value of 2 ("R "= 2), this equation would be recognized in Cartesian coordinates as the equation for the circle of radius of 2 around the origin. Hence, the equation with "R" unspecified is the general equation for the circle.
Usually, the unknowns are denoted by letters at the end of the alphabet, "x", "y", "z", "w", ..., while coefficients (parameters) are denoted by letters at the beginning, "a", "b", "c", "d", ... . For example, the general quadratic equation is usually written "ax"2 + "bx" + "c" = 0.
The process of finding the solutions, or, in case of parameters, expressing the unknowns in terms of the parameters, is called solving the equation. Such expressions of the solutions in terms of the parameters are also called "solutions".
A system of equations is a set of "simultaneous equations", usually in several unknowns for which the common solutions are sought. Thus, a "solution to the system" is a set of values for each of the unknowns, which together form a solution to each equation in the system. For example, the system
formula_9
has the unique solution "x" = −1, "y" = 1.
Identities.
An identity is an equation that is true for all possible values of the variable(s) it contains. Many identities are known in algebra and calculus. In the process of solving an equation, an identity is often used to simplify an equation, making it more easily solvable.
In algebra, an example of an identity is the difference of two squares:
formula_10
which is true for all "x" and "y".
Trigonometry is an area where many identities exist; these are useful in manipulating or solving trigonometric equations. Two of many that involve the sine and cosine functions are:
formula_11
and
formula_12
which are both true for all values of "θ".
For example, to solve for the value of "θ" that satisfies the equation:
formula_13
where "θ" is limited to between 0 and 45 degrees, one may use the above identity for the product to give:
formula_14
yielding the following solution for "θ":
formula_15
Since the sine function is a periodic function, there are infinitely many solutions if there are no restrictions on "θ". In this example, restricting "θ" to be between 0 and 45 degrees would restrict the solution to only one number.
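As a quick numerical check, the solution above may be evaluated with Python's standard math module (a minimal sketch, not part of the original derivation):
import math

theta = 0.5 * math.asin(2 / 3)                 # the solution derived above, in radians
print(math.degrees(theta))                     # approximately 20.9 degrees
print(3 * math.sin(theta) * math.cos(theta))   # approximately 1.0, satisfying the original equation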
Algebra.
Algebra studies two main families of equations: polynomial equations and, among them, the special case of linear equations. When there is only one variable, polynomial equations have the form "P"("x") = 0, where "P" is a polynomial, and linear equations have the form "ax" + "b" = 0, where "a" and "b" are parameters. To solve equations from either family, one uses algorithmic or geometric techniques that originate from linear algebra or mathematical analysis. Algebra also studies Diophantine equations where the coefficients and solutions are integers. The techniques used are different and come from number theory. These equations are difficult in general; one often searches just to find the existence or absence of a solution, and, if they exist, to count the number of solutions.
Polynomial equations.
In general, an "algebraic equation" or polynomial equation is an equation of the form
formula_16, or
formula_17
where "P" and "Q" are polynomials with coefficients in some field (e.g., rational numbers, real numbers, complex numbers). An algebraic equation is "univariate" if it involves only one variable. On the other hand, a polynomial equation may involve several variables, in which case it is called "multivariate" (multiple variables, x, y, z, etc.).
For example,
formula_18
is a univariate algebraic (polynomial) equation with integer coefficients and
formula_19
is a multivariate polynomial equation over the rational numbers.
Some polynomial equations with rational coefficients have a solution that is an algebraic expression, with a finite number of operations involving just those coefficients (i.e., can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but equations of degree five or more cannot always be solved in this way, as the Abel–Ruffini theorem demonstrates.
A large amount of research has been devoted to efficiently computing accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root finding of polynomials) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations).
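For instance, the real and complex roots of the quintic given above can be approximated numerically; this minimal sketch assumes the NumPy library:
import numpy as np

# Coefficients of x^5 - 3x + 1 in decreasing order of degree
print(np.roots([1, 0, 0, 0, -3, 1]))   # approximations of the five roots (three real, two complex conjugate)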
Systems of linear equations.
A system of linear equations (or "linear system") is a collection of linear equations involving one or more variables. For example,
formula_20
is a system of three equations in the three variables "x", "y", "z". A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by
formula_21
since it makes all three equations valid. The word "system" indicates that the equations are to be considered collectively, rather than individually.
In mathematics, the theory of linear systems is a fundamental part of linear algebra, a subject which is used in many parts of modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in physics, engineering, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.
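As an illustration of such an algorithm, the system above may be solved numerically; this is a minimal sketch that assumes the NumPy library:
import numpy as np

# Coefficient matrix and right-hand side of the system given above
A = np.array([[ 3.0,  2.0, -1.0],
              [ 2.0, -2.0,  4.0],
              [-1.0,  0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])

print(np.linalg.solve(A, b))   # [ 1. -2. -2.], matching the stated solution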
Geometry.
Analytic geometry.
In Euclidean geometry, it is possible to associate a set of coordinates to each point in space, for example by an orthogonal grid. This method allows one to characterize geometric figures by equations. A plane in three-dimensional space can be expressed as the solution set of an equation of the form formula_22, where formula_23 and formula_24 are real numbers and formula_25 are the unknowns that correspond to the coordinates of a point in the system given by the orthogonal grid. The values formula_23 are the coordinates of a vector perpendicular to the plane defined by the equation. A line is expressed as the intersection of two planes, that is as the solution set of a single linear equation with values in formula_26 or as the solution set of two linear equations with values in formula_27
A conic section is the intersection of a cone with equation formula_28 and a plane. In other words, in space, all conics are defined as the solution set of an equation of a plane and of the equation of a cone just given. This formalism allows one to determine the positions and the properties of the focuses of a conic.
The use of equations allows one to call on a large area of mathematics to solve geometric questions. The Cartesian coordinate system transforms a geometric problem into an analysis problem, once the figures are transformed into equations; thus the name analytic geometry. This point of view, outlined by Descartes, enriches and modifies the type of geometry conceived of by the ancient Greek mathematicians.
Currently, analytic geometry designates an active branch of mathematics. Although it still uses equations to characterize figures, it also uses other sophisticated techniques such as functional analysis and linear algebra.
Cartesian equations.
In Cartesian geometry, equations are used to describe geometric figures. As the equations that are considered, such as implicit equations or parametric equations, have infinitely many solutions, the objective is now different: instead of giving the solutions explicitly or counting them, which is impossible, one uses equations for studying properties of figures. This is the starting idea of algebraic geometry, an important area of mathematics.
One can use the same principle to specify the position of any point in three-dimensional space by the use of three Cartesian coordinates, which are the signed distances to three mutually perpendicular planes (or, equivalently, by its perpendicular projection onto three mutually perpendicular lines).
The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2 in a plane, centered on a particular point called the origin, may be described as the set of all points whose coordinates "x" and "y" satisfy the equation "x"2 + "y"2 = 4.
Parametric equations.
A parametric equation for a curve expresses the coordinates of the points of the curve as functions of a variable, called a parameter. For example,
formula_29
are parametric equations for the unit circle, where "t" is the parameter. Together, these equations are called a parametric representation of the curve.
The notion of "parametric equation" has been generalized to surfaces, manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is "one" and "one" parameter is used, for surfaces dimension "two" and "two" parameters, etc.).
Number theory.
Diophantine equations.
A Diophantine equation is a polynomial equation in two or more unknowns for which only the integer solutions are sought (an integer solution is a solution such that all the unknowns take integer values). A linear Diophantine equation is an equation between two sums of monomials of degree zero or one. An example of a linear Diophantine equation is "ax" + "by" = "c" where "a", "b", and "c" are constants. An exponential Diophantine equation is one for which exponents of the terms of the equation can be unknowns.
Diophantine problems have fewer equations than unknown variables and involve finding integers that work correctly for all equations. In more technical language, they define an algebraic curve, algebraic surface, or more general object, and ask about the lattice points on it.
The word "Diophantine" refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis.
Algebraic and transcendental numbers.
An algebraic number is a number that is a solution of a non-zero polynomial equation in one variable with rational coefficients (or equivalently — by clearing denominators — with integer coefficients). Numbers such as π that are not algebraic are said to be transcendental. Almost all real and complex numbers are transcendental.
Algebraic geometry.
Algebraic geometry is a branch of mathematics, classically studying solutions of polynomial equations. Modern algebraic geometry is based on more abstract techniques of abstract algebra, especially commutative algebra, with the language and the problems of geometry.
The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are: plane algebraic curves, which include lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves and quartic curves like lemniscates, and Cassini ovals. A point of the plane belongs to an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of the points of special interest like the singular points, the inflection points and the points at infinity. More advanced questions involve the topology of the curve and relations between the curves given by different equations.
Differential equations.
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. They are solved by finding an expression for the function that does not involve derivatives. Differential equations are used to model processes that involve the rates of change of the variable, and are used in areas such as physics, chemistry, biology, and economics.
In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions — the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form.
If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.
Ordinary differential equations.
An ordinary differential equation or ODE is an equation containing a function of one independent variable and its derivatives. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to "more than" one independent variable.
Linear differential equations, which have solutions that can be added and multiplied by coefficients, are well-defined and understood, and exact closed-form solutions are obtained. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions.
Partial differential equations.
A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a relevant computer model.
PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.
Types of equations.
Equations can be classified according to the types of operations and quantities involved. Important types include:
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " Ax^2 +Bx + C - y = 0 "
},
{
"math_id": 1,
"text": " Ax^2 +Bx + C - y "
},
{
"math_id": 2,
"text": " 0 "
},
{
"math_id": 3,
"text": "x=1"
},
{
"math_id": 4,
"text": "x=1."
},
{
"math_id": 5,
"text": "f(s)=s^2"
},
{
"math_id": 6,
"text": "x^2=1"
},
{
"math_id": 7,
"text": "x=-1."
},
{
"math_id": 8,
"text": " x^2 +y^2 = R^2 ."
},
{
"math_id": 9,
"text": "\\begin{align}\n3x+5y&=2\\\\\n5x+8y&=3\n\\end{align}\n"
},
{
"math_id": 10,
"text": "x^2 - y^2 = (x+y)(x-y) "
},
{
"math_id": 11,
"text": "\\sin^2(\\theta)+\\cos^2(\\theta) = 1 "
},
{
"math_id": 12,
"text": "\\sin(2\\theta)=2\\sin(\\theta) \\cos(\\theta) "
},
{
"math_id": 13,
"text": "3\\sin(\\theta) \\cos(\\theta)= 1\\,, "
},
{
"math_id": 14,
"text": "\\frac{3}{2}\\sin(2 \\theta) = 1\\,,"
},
{
"math_id": 15,
"text": "\\theta = \\frac{1}{2} \\arcsin\\left(\\frac{2}{3}\\right) \\approx 20.9^\\circ."
},
{
"math_id": 16,
"text": "P = 0"
},
{
"math_id": 17,
"text": "P = Q"
},
{
"math_id": 18,
"text": "x^5-3x+1=0"
},
{
"math_id": 19,
"text": "y^4+\\frac{xy}{2}=\\frac{x^3}{3}-xy^2+y^2-\\frac{1}{7}"
},
{
"math_id": 20,
"text": "\\begin{alignat}{7}\n3x &&\\; + \\;&& 2y &&\\; - \\;&& z &&\\; = \\;&& 1 & \\\\\n2x &&\\; - \\;&& 2y &&\\; + \\;&& 4z &&\\; = \\;&& -2 & \\\\\n-x &&\\; + \\;&& \\tfrac{1}{2} y &&\\; - \\;&& z &&\\; = \\;&& 0 &\n\\end{alignat}"
},
{
"math_id": 21,
"text": "\\begin{alignat}{2}\nx &\\,=\\,& 1 \\\\\ny &\\,=\\,& -2 \\\\\nz &\\,=\\,& -2\n\\end{alignat}"
},
{
"math_id": 22,
"text": " ax+by+cz+d=0"
},
{
"math_id": 23,
"text": "a,b,c"
},
{
"math_id": 24,
"text": "d"
},
{
"math_id": 25,
"text": "x,y,z"
},
{
"math_id": 26,
"text": "\\mathbb{R}^2"
},
{
"math_id": 27,
"text": "\\mathbb{R}^3."
},
{
"math_id": 28,
"text": "x^2+y^2=z^2"
},
{
"math_id": 29,
"text": "\\begin{align}\nx&=\\cos t\\\\\ny&=\\sin t\n\\end{align}"
},
{
"math_id": 30,
"text": "f'(x) = x^2"
},
{
"math_id": 31,
"text": "f'(x) = f(x-2)"
}
] | https://en.wikipedia.org/wiki?curid=9284 |
9286935 | Head (vessel) | End cap on a cylindrically shaped pressure vessel
A head is one of the end caps on a cylindrically shaped pressure vessel.
Principle.
Vessel dished ends are mostly used in storage or pressure vessels in industry. These ends, which in upright vessels are the bottom and the top, use less space than a hemisphere (which is the ideal form for pressure containments) while requiring only a slightly thicker wall.
Manufacturing.
The manufacturing of such an end is easier than that of a hemisphere. The starting material is first pressed to a radius r1 and then curled at the edge creating the second radius r2. Vessel dished ends can also be welded together from smaller pieces.
Shapes.
The shape of the heads used can vary. The most common head shapes are:
Hemispherical head.
A sphere is the ideal shape for a head, because the stresses are distributed evenly through the material of the head. The radius (r) of the head equals the radius of the cylindrical part of the vessel.
Ellipsoidal head.
This is also called an elliptical head. The shape of this head is more economical, because the height of the head is just a fraction of the diameter. Its radius varies between the major and minor axis; usually the ratio is 2:1.
Semi–Ellipsoidal Dished Heads.
2:1 Semi-Ellipsoidal dished heads are deeper and stronger than the more popular torispherical dished heads.
The greater depth results in the head being more difficult to form, and this makes them more expensive to manufacture. However, the cost is offset by a potential reduction in the specified thickness due to the dished head having greater overall strength and resistance to pressure.
Torispherical head (or flanged and dished head).
These heads have a dish with a fixed radius (r1), the size of which depends on the type of torispherical head. The transition between the cylinder and the dish is called the "knuckle". The knuckle has a toroidal shape. The most common types of torispherical heads are:
ASME F&D head.
Commonly used for ASME pressure vessels, these torispherical heads have a crown radius equal to the outside diameter of the head (formula_0), and a knuckle radius equal to 6% of the outside diameter (formula_1). The ASME design code does not allow the knuckle radius to be any less than 6% of the outside diameter.
Klöpper head.
This is a torispherical head. The dish has a radius that equals the diameter of the cylinder it is attached to (formula_0). The knuckle has a radius that equals a tenth of the diameter of the cylinder (formula_2), hence its alternative designation "decimal head".
Other characteristic dimensions are formula_3, with the remaining height (formula_4) given by formula_5.
Korbbogen head.
This is a torispherical head, also called a semi-ellipsoidal head (according to DIN 28013). The radius of the dish is 80% of the diameter of the cylinder (formula_6). The radius of the knuckle is formula_7.
Other characteristic dimensions are formula_8, with the remaining height (formula_4) given by formula_9. This shape finds its origin in architecture.
80-10 head.
These heads have a crown radius of 80% of outside diameter, and a knuckle radius of 10% of outside diameter.
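The crown and knuckle radii of the torispherical variants above all follow directly from the outside diameter. The following Python sketch is purely illustrative; the function name and ratio table are chosen here and are not taken from any design code:
def torispherical_radii(outside_diameter, head_type):
    """Return (crown radius r1, knuckle radius r2) for common torispherical heads."""
    ratios = {
        'asme_fd':   (1.00, 0.06),    # ASME F&D: r1 = Do, r2 = 0.06*Do
        'kloepper':  (1.00, 0.10),    # Kloepper: r1 = Do, r2 = 0.1*Do
        'korbbogen': (0.80, 0.154),   # Korbbogen: r1 = 0.8*Do, r2 = 0.154*Do
        '80-10':     (0.80, 0.10),    # 80-10: r1 = 0.8*Do, r2 = 0.1*Do
    }
    crown, knuckle = ratios[head_type]
    return crown * outside_diameter, knuckle * outside_diameter

print(torispherical_radii(2000, 'korbbogen'))   # (1600.0, 308.0) for a 2000 mm outside diameter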
Flat head.
This is a head consisting of a toroidal knuckle connecting to a flat plate. This type of head is typically used for the bottom of cookware.
Diffuser head.
This type of head is often found on the bottom of aerosol spray cans. It is an inverted torispherical head.
Conical head.
This is a cone-shaped head.
Heat treatment.
Heat treatment may be required after cold forming, but not for heads formed by hot forming.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r_1=Do"
},
{
"math_id": 1,
"text": "r_2=0.06\\times Do"
},
{
"math_id": 2,
"text": "r_2=0.1\\times Do"
},
{
"math_id": 3,
"text": "h \\ge3.5\\times t"
},
{
"math_id": 4,
"text": "h_2"
},
{
"math_id": 5,
"text": "h_2=0.1935\\times Do-0.455\\times t"
},
{
"math_id": 6,
"text": "r_1=0.8\\times Do"
},
{
"math_id": 7,
"text": "r_2=0.154\\times Do"
},
{
"math_id": 8,
"text": "h \\ge3\\times t"
},
{
"math_id": 9,
"text": "h_2=0.255\\times Do-0.635\\times t"
}
] | https://en.wikipedia.org/wiki?curid=9286935 |
9289827 | Issued shares | Shares of a corporation owned by shareholders
In economics and law, issued shares are the shares of a corporation which have been allocated (allotted) and are subsequently held by shareholders. The act of creating new issued shares is called "issuance". Allotment is simply the transfer of shares to a subscriber. After allotment, a subscriber becomes a shareholder, though usually that also requires formal entry in a share registry.
Overview.
The number of shares that can be issued is limited to the total authorized shares. Issued shares are those shares which the board of directors and/or shareholders have agreed to issue, and which have been issued. Issued shares are the sum of outstanding shares (held by shareholders) and treasury shares, which are shares that had been issued but have since been repurchased by the corporation. The latter generally have no voting rights or rights to dividends.
The issued shares of a corporation form the equity capital of the corporation, and some corporations are required by law to have a minimum value of equity capital, while others may need none or only a nominal amount. The value of the issued shares is determined at the time they are issued, and the value does not change in relation to the issuing corporation after that time.
Shares are most commonly issued fully paid, in which case the liability of the shareholders is limited to the amount paid on the shares; but they may also be issued shares that are partly paid, with unlimited liability, subject to guarantee, or some other form.
formula_0
formula_1
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rm{Shares\\ authorized} = \\rm{Shares\\ issued} + \\rm{Shares\\ unissued}"
},
{
"math_id": 1,
"text": "\\rm{Shares\\ issued} = \\rm{Shares\\ outstanding} + \\rm{Treasury\\ shares}"
}
] | https://en.wikipedia.org/wiki?curid=9289827 |
9292690 | Risk dominance | Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game.1 When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of attraction (i.e. is less risky). This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.
The payoff matrix in Figure 1 provides a simple two-player, two-strategy example of a game with two pure Nash equilibria. The strategy pair (Hunt, Hunt) is payoff dominant since payoffs are higher for both players compared to the other pure NE, (Gather, Gather). On the other hand, (Gather, Gather) risk dominates (Hunt, Hunt) since if uncertainty exists about the other player's action, gathering will provide a higher expected payoff. The game in Figure 1 is a well-known game-theoretic dilemma called stag hunt. The rationale behind it is that communal action (hunting) yields a higher return if all players combine their skills, but if it is unknown whether the other player helps in hunting, gathering might turn out to be the better individual strategy for food provision, since it does not depend on coordinating with the other player. In addition, gathering alone is preferred to gathering in competition with others. Like the Prisoner's dilemma, it provides a reason why collective action might fail in the absence of credible commitments.
Formal definition.
The game given in Figure 2 is a coordination game if the following payoff inequalities hold for player 1 (rows): A > B, D > C, and for player 2 (columns): a > b, d > c. The strategy pairs (H, H) and (G, G) are then the only pure Nash equilibria. In addition there is a mixed Nash equilibrium where player 1 plays H with probability p = (d-c)/(a-b-c+d) and G with probability 1–p; player 2 plays H with probability q = (D-C)/(A-B-C+D) and G with probability 1–q.
Strategy pair (H, H) payoff dominates (G, G) if A ≥ D, a ≥ d, and at least one of the two is a strict inequality: A > D or a > d.
Strategy pair (G, G) risk dominates (H, H) if the product of the deviation losses is higher for (G, G) (Harsanyi and Selten, 1988, Lemma 5.4.4). In other words, if the following inequality holds: (C – D)(c – d) ≥ (B – A)(b – a). If the inequality is strict then (G, G) strictly risk dominates (H, H) (that is, players have more incentive to deviate).
If the game is symmetric, so if A = a, B = b, etc., the inequality allows for a simple interpretation: We assume the players are unsure about which strategy the opponent will pick and assign probabilities for each strategy. If each player assigns probabilities ½ to H and G each, then (G, G) risk dominates (H, H) if the expected payoff from playing G exceeds the expected payoff from playing H: ½ B + ½ D ≥ ½ A + ½ C, or simply B + D ≥ A + C.
Another way to calculate the risk dominant equilibrium is to calculate the risk factor for all equilibria and to find the equilibrium with the smallest risk factor. To calculate the risk factor in our 2x2 game, consider the expected payoff to a player if they play H: formula_0 (where "p" is the probability that the other player will play H), and compare it to the expected payoff if they play G: formula_1. The value of "p" which makes these two expected values equal is the risk factor for the equilibrium (H, H), with formula_2 the risk factor for playing (G, G). You can also calculate the risk factor for playing (G, G) by doing the same calculation, but setting "p" as the probability the other player will play G. An interpretation of "p" is that it is the smallest probability with which the opponent must play that strategy for a player's payoff from copying the opponent's strategy to exceed the payoff from playing the other strategy.
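The following Python sketch applies these definitions to a symmetric coordination game; the payoff numbers are illustrative stag-hunt values chosen here, not the entries of Figure 1:
# Row player's payoffs: A = u(H,H), B = u(G,H), C = u(H,G), D = u(G,G)
A, B, C, D = 5, 4, 0, 2
a, b, c, d = A, B, C, D            # symmetric game, so the column player's payoffs match

payoff_dominant_HH = A >= D and a >= d and (A > D or a > d)
risk_dominant_GG = (C - D) * (c - d) >= (B - A) * (b - a)

# Risk factor of (H, H): the smallest probability of the opponent playing H
# for which playing H is at least as good as playing G
p_HH = (D - C) / (A - B - C + D)

print(payoff_dominant_HH, risk_dominant_GG, p_HH)   # True True 0.666..., so (G, G) is risk dominant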
Equilibrium selection.
A number of evolutionary approaches have established that when played in a large population, players might fail to play the payoff dominant equilibrium strategy and instead end up in the payoff dominated, risk dominant equilibrium. Two separate evolutionary models both support the idea that the risk dominant equilibrium is more likely to occur. The first model, based on replicator dynamics, predicts that a population is more likely to adopt the risk dominant equilibrium than the payoff dominant equilibrium. The second model, based on best response strategy revision and mutation, predicts that the risk dominant state is the only stochastically stable equilibrium. Both models assume that multiple two-player games are played in a population of N players. The players are matched randomly with opponents, with each player having equal likelihoods of drawing any of the N−1 other players. The players start with a pure strategy, G or H, and play this strategy against their opponent. In replicator dynamics, the population game is repeated in sequential generations where subpopulations change based on the success of their chosen strategies. In best response, players update their strategies to improve expected payoffs in the subsequent generations. The recognition of Kandori, Mailath & Rob (1993) and Young (1993) was that if the rule to update one's strategy allows for mutation, and the probability of mutation vanishes, i.e. asymptotically reaches zero over time, the likelihood that the risk dominant equilibrium is reached goes to one, even if it is payoff dominated. | [
{
"math_id": 0,
"text": "E[\\pi_H]=p A + (1-p) C"
},
{
"math_id": 1,
"text": "E[\\pi_G]=p B + (1-p) D"
},
{
"math_id": 2,
"text": "1-p"
}
] | https://en.wikipedia.org/wiki?curid=9292690 |
9292749 | Forward–backward algorithm | Inference algorithm for hidden Markov models
The forward–backward algorithm is an inference algorithm for hidden Markov models which computes the posterior marginals of all hidden state variables given a sequence of observations/emissions formula_0, i.e. it computes, for all hidden state variables formula_1, the distribution formula_2. This inference task is usually called smoothing. The algorithm makes use of the principle of dynamic programming to efficiently compute the values that are required to obtain the posterior marginal distributions in two passes. The first pass goes forward in time while the second goes backward in time; hence the name "forward–backward algorithm".
The term "forward–backward algorithm" is also used to refer to any algorithm belonging to the general class of algorithms that operate on sequence models in a forward–backward manner. In this sense, the descriptions in the remainder of this article refer only to one specific instance of this class.
Overview.
In the first pass, the forward–backward algorithm computes a set of forward probabilities which provide, for all formula_3, the probability of ending up in any particular state given the first formula_4 observations in the sequence, i.e. formula_5. In the second pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point formula_4, i.e. formula_6. These two sets of probability distributions can then be combined to obtain the distribution over states at any specific point in time given the entire observation sequence:
formula_7
The last step follows from an application of Bayes' rule and the conditional independence of formula_8 and formula_9 given formula_10.
As outlined above, the algorithm involves three steps:
The forward and backward steps may also be called "forward message pass" and "backward message pass" - these terms are due to the "message-passing" used in general belief propagation approaches. At each single observation in the sequence, probabilities to be used for calculations at the next observation are computed. The smoothing step can be calculated simultaneously during the backward pass. This step allows the algorithm to take into account any past observations of output for computing more accurate results.
The forward–backward algorithm can be used to find the most likely state for any point in time. It cannot, however, be used to find the most likely sequence of states (see Viterbi algorithm).
Forward probabilities.
The following description will use matrices of probability values rather than probability distributions, although in general the forward-backward algorithm can be applied to continuous as well as discrete probability models.
We transform the probability distributions related to a given hidden Markov model into matrix notation as follows.
The transition probabilities formula_11 of a given random variable formula_10 representing all possible states in the hidden Markov model will be represented by the matrix formula_12 where the column index formula_13 will represent the target state and the row index formula_14 represents the start state. A transition from row-vector state formula_15 to the incremental row-vector state formula_16 is written as formula_17. The example below represents a system where the probability of staying in the same state after each step is 70% and the probability of transitioning to the other state is 30%. The transition matrix is then:
formula_18
In a typical Markov model, we would multiply a state vector by this matrix to obtain the probabilities for the subsequent state. In a hidden Markov model the state is unknown, and we instead observe events associated with the possible states. An event matrix of the form:
formula_19
provides the probabilities for observing events given a particular state. In the above example, event 1 will be observed 90% of the time if we are in state 1 while event 2 has a 10% probability of occurring in this state. In contrast, event 1 will only be observed 20% of the time if we are in state 2 and event 2 has an 80% chance of occurring. Given an arbitrary row-vector describing the state of the system (formula_20), the probability of observing event j is then:
formula_21
The probability of a given state leading to the observed event j can be represented in matrix form by multiplying the state row-vector (formula_20) with an observation matrix (formula_22) containing only diagonal entries. Continuing the above example, the observation matrix for event 1 would be:
formula_23
This allows us to calculate the new unnormalized state probability vector formula_24 through Bayes' rule, weighting by the likelihood that each element of formula_20 generated event 1, as:
formula_25
We can now make this general procedure specific to our series of observations. Assuming an initial state vector formula_26 (which can be optimized as a parameter through repetitions of the forward-backward procedure), we begin with formula_27, and then update the state distribution, weighting by the likelihood of the first observation:
formula_28
This process can be carried forward with additional observations using:
formula_29
This value is the forward unnormalized probability vector. The i'th entry of this vector provides:
formula_30
Typically, we will normalize the probability vector at each step so that its entries sum to 1. A scaling factor is thus introduced at each step such that:
formula_31
where formula_32 represents the scaled vector from the previous step and formula_33 represents the scaling factor that causes the resulting vector's entries to sum to 1. The product of the scaling factors is the total probability for observing the given events irrespective of the final states:
formula_34
This allows us to interpret the scaled probability vector as:
formula_35
We thus find that the product of the scaling factors provides us with the total probability for observing the given sequence up to time t and that the scaled probability vector provides us with the probability of being in each state at this time.
Backward probabilities.
A similar procedure can be constructed to find backward probabilities. These intend to provide the probabilities:
formula_36
That is, we now assume that we start in a particular state (formula_37) and are interested in the probability of observing all future events from this state. Since the initial state is assumed as given (i.e. the prior probability of this state = 100%), we begin with:
formula_38
Notice that we are now using a column vector while the forward probabilities used row vectors. We can then work backwards using:
formula_39
While we could normalize this vector as well so that its entries sum to one, this is not usually done. Noting that each entry contains the probability of the future event sequence given a particular initial state, normalizing this vector would be equivalent to applying Bayes' theorem to find the likelihood of each initial state given the future events (assuming uniform priors for the final state vector). However, it is more common to scale this vector using the same formula_33 constants used in the forward probability calculations. formula_40 is not scaled, but subsequent operations use:
formula_41
where formula_42 represents the previous, scaled vector. The result is that the scaled probability vector is related to the backward probabilities by:
formula_43
This is useful because it allows us to find the total probability of being in each state at a given time, t, by multiplying these values:
formula_44
To understand this, we note that formula_45 provides the probability for observing the given events in a way that passes through state formula_46 at time t. This probability includes the forward probabilities covering all events up to time t as well as the backward probabilities which include all future events. This is the numerator we are looking for in our equation, and we divide by the total probability of the observation sequence to normalize this value and extract only the probability that formula_37. These values are sometimes called the "smoothed values" as they combine the forward and backward probabilities to compute a final probability.
The values formula_47 thus provide the probability of being in each state at time t. As such, they are useful for determining the most probable state at any time. The term "most probable state" is somewhat ambiguous. While the most probable state is the most likely to be correct at a given point, the sequence of individually probable states is not likely to be the most probable sequence. This is because the probabilities for each point are calculated independently of each other. They do not take into account the transition probabilities between states, and it is thus possible to get states at two moments (t and t+1) that are both most probable at those time points but which have very little probability of occurring together, i.e. formula_48. The most probable sequence of states that produced an observation sequence can be found using the Viterbi algorithm.
Example.
This example takes as its basis the umbrella world in Russell & Norvig 2010 Chapter 15 pp. 567 in which we would like to infer the weather given observation of another person either carrying or not carrying an umbrella. We assume two possible states for the weather: state 1 = rain, state 2 = no rain. We assume that the weather has a 70% chance of staying the same each day and a 30% chance of changing. The transition probabilities are then:
formula_18
We also assume each state generates one of two possible events: event 1 = umbrella, event 2 = no umbrella. The conditional probabilities for these occurring in each state are given by the probability matrix:
formula_19
We then observe the following sequence of events: {umbrella, umbrella, no umbrella, umbrella, umbrella} which we will represent in our calculations as:
formula_49
Note that formula_50 differs from the others because of the "no umbrella" observation.
In computing the forward probabilities we begin with:
formula_51
which is our prior state vector indicating that we don't know which state the weather is in before our observations. While a state vector should be given as a row vector, we will use the transpose of the matrix so that the calculations below are easier to read. Our calculations are then written in the form:
formula_52
instead of:
formula_53
Notice that the transformation matrix is also transposed, but in our example the transpose is equal to the original matrix. Performing these calculations and normalizing the results provides:
formula_54
formula_55
formula_56
formula_57
formula_58
For the backward probabilities, we start with:
formula_59
We are then able to compute (using the observations in reverse order and normalizing with different constants):
formula_60
formula_61
formula_62
formula_63
formula_64
Finally, we will compute the smoothed probability values. These results must also be scaled so that their entries sum to 1, because we did not scale the backward probabilities with the formula_33's found earlier. The backward probability vectors above thus actually represent the likelihood of each state at time t given the future observations. Because these vectors are proportional to the actual backward probabilities, the result has to be scaled an additional time.
formula_65
formula_66
formula_67
formula_68
formula_69
formula_70
Notice that the value of formula_71 is equal to formula_72 and that formula_73 is equal to formula_74. This follows naturally because both formula_74 and formula_72 begin with uniform priors over the initial and final state vectors (respectively) and take into account all of the observations. However, formula_71 will only be equal to formula_72 when our initial state vector represents a uniform prior (i.e. all entries are equal). When this is not the case formula_72 needs to be combined with the initial state vector to find the most likely initial state. We thus find that the forward probabilities by themselves are sufficient to calculate the most likely final state. Similarly, the backward probabilities can be combined with the initial state vector to provide the most probable initial state given the observations. The forward and backward probabilities need only be combined to infer the most probable states between the initial and final points.
The calculations above reveal that the most probable weather state on every day except for the third one was "rain". They tell us more than this, however, as they now provide a way to quantify the probabilities of each state at different times. Perhaps most importantly, our value at formula_73 quantifies our knowledge of the state vector at the end of the observation sequence. We can then use this to predict the probability of the various weather states tomorrow as well as the probability of observing an umbrella.
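The numbers above can be reproduced with a short matrix-based sketch; it assumes the NumPy library, whereas the article's own implementation below uses plain Python:
import numpy as np

T = np.array([[0.7, 0.3], [0.3, 0.7]])        # transition matrix from the example
B = np.array([[0.9, 0.1], [0.2, 0.8]])        # event (emission) matrix
events = [0, 0, 1, 0, 0]                      # umbrella, umbrella, no umbrella, umbrella, umbrella

f = np.array([0.5, 0.5])                      # uniform prior state vector
forwards = []
for e in events:
    f = np.diag(B[:, e]) @ T.T @ f
    f = f / f.sum()                           # normalize, i.e. divide by the scaling factor c_t
    forwards.append(f)

b = np.array([1.0, 1.0])
backwards = [b]
for e in reversed(events):
    b = T @ np.diag(B[:, e]) @ b
    b = b / b.sum()
    backwards.insert(0, b)

# Smoothed values gamma_t for t = 1..5; any rescaling of b cancels in the normalization
smoothed = [fv * bv / (fv * bv).sum() for fv, bv in zip(forwards, backwards[1:])]
print(np.round(smoothed, 4))                  # last row is [0.8673 0.1327], as computed above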
Performance.
The forward–backward algorithm runs with time complexity formula_75 in space formula_76, where formula_77 is the length of the time sequence and formula_78 is the number of symbols in the state alphabet. The algorithm can also run in constant space with time complexity formula_79 by recomputing values at each step. For comparison, a brute-force procedure would generate all possible formula_80 state sequences and calculate the joint probability of each state sequence with the observed series of events, which would have time complexity formula_81. Brute force is intractable for realistic problems, as the number of possible hidden node sequences typically is extremely high.
An enhancement to the general forward-backward algorithm, called the Island algorithm, trades smaller memory usage for longer running time, taking formula_82 time and formula_83 memory. Furthermore, it is possible to invert the process model to obtain an formula_84 space, formula_85 time algorithm, although the inverted process may not exist or be ill-conditioned.
In addition, algorithms have been developed to compute formula_86 efficiently through online smoothing such as the fixed-lag smoothing (FLS) algorithm.
Pseudocode.
algorithm forward_backward is
input: guessState
int "sequenceIndex"
output: "result"
if "sequenceIndex" is past the end of the sequence then
return 1
if ("guessState", "sequenceIndex") has been seen before then
return saved result
"result" := 0
for each neighboring state n:
"result" := result + (transition probability from "guessState" to
n given observation element at "sequenceIndex")
× Backward(n, "sequenceIndex" + 1)
save result for ("guessState", "sequenceIndex")
return "result"
Python example.
Given an HMM (just like in the Viterbi algorithm) represented in the Python programming language:
states = ('Healthy', 'Fever')
end_state = 'E'
observations = ('normal', 'cold', 'dizzy')
start_probability = {'Healthy': 0.6, 'Fever': 0.4}   # prior over the initial hidden state
transition_probability = {
    'Healthy': {'Healthy': 0.69, 'Fever': 0.3, 'E': 0.01},
    'Fever': {'Healthy': 0.4, 'Fever': 0.59, 'E': 0.01},
}
emission_probability = {
    'Healthy': {'normal': 0.5, 'cold': 0.4, 'dizzy': 0.1},
    'Fever': {'normal': 0.1, 'cold': 0.3, 'dizzy': 0.6},
}
We can write the implementation of the forward-backward algorithm like this:
def fwd_bkw(observations, states, start_prob, trans_prob, emm_prob, end_st):
    """Forward–backward algorithm."""
    # Forward part of the algorithm
    fwd = []
    for i, observation_i in enumerate(observations):
        f_curr = {}
        for st in states:
            if i == 0:
                # base case for the forward part
                prev_f_sum = start_prob[st]
            else:
                prev_f_sum = sum(f_prev[k] * trans_prob[k][st] for k in states)
            f_curr[st] = emm_prob[st][observation_i] * prev_f_sum
        fwd.append(f_curr)
        f_prev = f_curr
    p_fwd = sum(f_curr[k] * trans_prob[k][end_st] for k in states)  # total probability of the observations

    # Backward part of the algorithm
    bkw = []
    for i, observation_i_plus in enumerate(reversed(observations[1:] + (None,))):
        b_curr = {}
        for st in states:
            if i == 0:
                # base case for the backward part
                b_curr[st] = trans_prob[st][end_st]
            else:
                b_curr[st] = sum(trans_prob[st][l] * emm_prob[l][observation_i_plus] * b_prev[l] for l in states)
        bkw.insert(0, b_curr)
        b_prev = b_curr
    p_bkw = sum(start_prob[l] * emm_prob[l][observations[0]] * b_curr[l] for l in states)

    # Merging the two parts
    posterior = []
    for i in range(len(observations)):
        posterior.append({st: fwd[i][st] * bkw[i][st] / p_fwd for st in states})

    assert p_fwd == p_bkw
    return fwd, bkw, posterior
The function codice_0 takes the following arguments:
codice_1 is the sequence of observations, e.g. codice_2;
codice_3 is the set of hidden states;
codice_4 is the start probability;
codice_5 are the transition probabilities;
and codice_6 are the emission probabilities.
For simplicity of code, we assume that the observation sequence codice_1 is non-empty and that codice_8 and codice_9 are defined for all states i, j.
In the running example, the forward-backward algorithm is used as follows:
def example():
return fwd_bkw(observations,
states,
start_probability,
transition_probability,
emission_probability,
end_state)
>>> for line in example():
... print(*line)
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "o_{1:T}:= o_1,\\dots,o_T"
},
{
"math_id": 1,
"text": "X_t \\in \\{X_1, \\dots, X_T\\}"
},
{
"math_id": 2,
"text": "P(X_t\\ |\\ o_{1:T})"
},
{
"math_id": 3,
"text": "t \\in \\{1, \\dots, T\\}"
},
{
"math_id": 4,
"text": "t"
},
{
"math_id": 5,
"text": "P(X_t\\ |\\ o_{1:t})"
},
{
"math_id": 6,
"text": "P(o_{t+1:T}\\ |\\ X_t)"
},
{
"math_id": 7,
"text": "P(X_t\\ |\\ o_{1:T}) = P(X_t\\ |\\ o_{1:t}, o_{t+1:T}) \\propto P(o_{t+1:T}\\ |\\ X_t) P( X_t | o_{1:t})"
},
{
"math_id": 8,
"text": "o_{t+1:T}"
},
{
"math_id": 9,
"text": "o_{1:t}"
},
{
"math_id": 10,
"text": "X_t"
},
{
"math_id": 11,
"text": "\\mathbf{P}(X_t\\mid X_{t-1})"
},
{
"math_id": 12,
"text": "\\mathbf{T}"
},
{
"math_id": 13,
"text": "j"
},
{
"math_id": 14,
"text": "i"
},
{
"math_id": 15,
"text": "\\mathbf{\\pi_t}"
},
{
"math_id": 16,
"text": "\\mathbf{\\pi_{t+1}}"
},
{
"math_id": 17,
"text": "\\mathbf{\\pi_{t+1}} = \\mathbf{\\pi_{t}} \\mathbf{T}"
},
{
"math_id": 18,
"text": "\\mathbf{T} = \\begin{pmatrix}\n 0.7 & 0.3 \\\\\n 0.3 & 0.7\n\\end{pmatrix}\n"
},
{
"math_id": 19,
"text": "\\mathbf{B} = \\begin{pmatrix}\n 0.9 & 0.1 \\\\\n 0.2 & 0.8\n\\end{pmatrix}\n"
},
{
"math_id": 20,
"text": "\\mathbf{\\pi}"
},
{
"math_id": 21,
"text": "\\mathbf{P}(O = j)=\\sum_{i} \\pi_i B_{i,j}"
},
{
"math_id": 22,
"text": "\\mathbf{O_j} = \\mathrm{diag}(B_{*,o_j})"
},
{
"math_id": 23,
"text": "\\mathbf{O_1} = \\begin{pmatrix}\n 0.9 & 0.0 \\\\\n 0.0 & 0.2\n\\end{pmatrix}\n"
},
{
"math_id": 24,
"text": "\\mathbf{\\pi '}"
},
{
"math_id": 25,
"text": "\n\\mathbf{\\pi '} = \\mathbf{\\pi} \\mathbf{O_1}\n"
},
{
"math_id": 26,
"text": "\\mathbf{\\pi}_0"
},
{
"math_id": 27,
"text": "\\mathbf{f_{0:0}} = \\mathbf{\\pi}_0"
},
{
"math_id": 28,
"text": "\n\\mathbf{f_{0:1}} = \\mathbf{\\pi}_0 \\mathbf{T} \\mathbf{O_{o_1}}\n"
},
{
"math_id": 29,
"text": "\n\\mathbf{f_{0:t}} = \\mathbf{f_{0:t-1}} \\mathbf{T} \\mathbf{O_{o_t}} \n"
},
{
"math_id": 30,
"text": "\n\\mathbf{f_{0:t}}(i) = \\mathbf{P}(o_1, o_2, \\dots, o_t, X_t=x_i | \\mathbf{\\pi}_0 )\n"
},
{
"math_id": 31,
"text": "\n\\mathbf{\\hat{f}_{0:t}} = c_t^{-1}\\ \\mathbf{\\hat{f}_{0:t-1}} \\mathbf{T} \\mathbf{O_{o_t}}\n"
},
{
"math_id": 32,
"text": "\\mathbf{\\hat{f}_{0:t-1}}"
},
{
"math_id": 33,
"text": "c_t"
},
{
"math_id": 34,
"text": "\n\\mathbf{P}(o_1, o_2, \\dots, o_t|\\mathbf{\\pi}_0) = \\prod_{s=1}^t c_s\n"
},
{
"math_id": 35,
"text": "\n\\mathbf{\\hat{f}_{0:t}}(i) =\n\\frac{\\mathbf{f_{0:t}}(i)}{\\prod_{s=1}^t c_s} =\n\\frac{\\mathbf{P}(o_1, o_2, \\dots, o_t, X_t=x_i | \\mathbf{\\pi}_0 )}{\\mathbf{P}(o_1, o_2, \\dots, o_t|\\mathbf{\\pi}_0)} =\n\\mathbf{P}(X_t=x_i | o_1, o_2, \\dots, o_t, \\mathbf{\\pi}_0 )\n"
},
{
"math_id": 36,
"text": "\n\\mathbf{b_{t:T}}(i) = \\mathbf{P}(o_{t+1}, o_{t+2}, \\dots, o_{T} | X_t=x_i )\n"
},
{
"math_id": 37,
"text": "X_t=x_i"
},
{
"math_id": 38,
"text": "\n\\mathbf{b_{T:T}} = [1\\ 1\\ 1\\ \\dots]^T\n"
},
{
"math_id": 39,
"text": "\n\\mathbf{b_{t-1:T}} = \\mathbf{T}\\mathbf{O_t}\\mathbf{b_{t:T}}\n"
},
{
"math_id": 40,
"text": "\\mathbf{b_{T:T}}"
},
{
"math_id": 41,
"text": "\n\\mathbf{\\hat{b}_{t-1:T}} = c_t^{-1} \\mathbf{T}\\mathbf{O_t}\\mathbf{\\hat{b}_{t:T}}\n"
},
{
"math_id": 42,
"text": "\\mathbf{\\hat{b}_{t:T}}"
},
{
"math_id": 43,
"text": "\n\\mathbf{\\hat{b}_{t:T}}(i) =\n\\frac{\\mathbf{b_{t:T}}(i)}{\\prod_{s=t+1}^T c_s}\n"
},
{
"math_id": 44,
"text": "\n\\mathbf{\\gamma_t}(i) =\n\\mathbf{P}(X_t=x_i | o_1, o_2, \\dots, o_T, \\mathbf{\\pi}_0) =\n\\frac{ \\mathbf{P}(o_1, o_2, \\dots, o_T, X_t=x_i | \\mathbf{\\pi}_0 ) }{ \\mathbf{P}(o_1, o_2, \\dots, o_T | \\mathbf{\\pi}_0 ) } =\n\\frac{ \\mathbf{f_{0:t}}(i) \\cdot \\mathbf{b_{t:T}}(i) }{ \\prod_{s=1}^T c_s } =\n\\mathbf{\\hat{f}_{0:t}}(i) \\cdot \\mathbf{\\hat{b}_{t:T}}(i)\n"
},
{
"math_id": 45,
"text": "\\mathbf{f_{0:t}}(i) \\cdot \\mathbf{b_{t:T}}(i)"
},
{
"math_id": 46,
"text": "x_i"
},
{
"math_id": 47,
"text": "\\mathbf{\\gamma_t}(i)"
},
{
"math_id": 48,
"text": " \\mathbf{P}(X_t=x_i,X_{t+1}=x_j) \\neq \\mathbf{P}(X_t=x_i) \\mathbf{P}(X_{t+1}=x_j) "
},
{
"math_id": 49,
"text": "\n\\mathbf{O_1} = \\begin{pmatrix}0.9 & 0.0 \\\\ 0.0 & 0.2 \\end{pmatrix}~~\\mathbf{O_2} = \\begin{pmatrix}0.9 & 0.0 \\\\ 0.0 & 0.2 \\end{pmatrix}~~\\mathbf{O_3} = \\begin{pmatrix}0.1 & 0.0 \\\\ 0.0 & 0.8 \\end{pmatrix}~~\\mathbf{O_4} = \\begin{pmatrix}0.9 & 0.0 \\\\ 0.0 & 0.2 \\end{pmatrix}~~\\mathbf{O_5} = \\begin{pmatrix}0.9 & 0.0 \\\\ 0.0 & 0.2 \\end{pmatrix}\n"
},
{
"math_id": 50,
"text": "\\mathbf{O_3}"
},
{
"math_id": 51,
"text": "\n\\mathbf{f_{0:0}}= \\begin{pmatrix} 0.5 & 0.5 \\end{pmatrix}\n"
},
{
"math_id": 52,
"text": "\n(\\mathbf{\\hat{f}_{0:t}})^T = c_t^{-1}\\mathbf{O_t}(\\mathbf{T})^T(\\mathbf{\\hat{f}_{0:t-1}})^T\n"
},
{
"math_id": 53,
"text": "\n\\mathbf{\\hat{f}_{0:t}} = c_t^{-1}\\mathbf{\\hat{f}_{0:t-1}} \\mathbf{T} \\mathbf{O_t}\n"
},
{
"math_id": 54,
"text": "\n(\\mathbf{\\hat{f}_{0:1}})^T =\nc_1^{-1}\\begin{pmatrix}0.9 & 0.0 \\\\ 0.0 & 0.2 \\end{pmatrix}\\begin{pmatrix} 0.7 & 0.3 \\\\ 0.3 & 0.7 \\end{pmatrix}\\begin{pmatrix}0.5000 \\\\ 0.5000 \\end{pmatrix}=\nc_1^{-1}\\begin{pmatrix}0.4500 \\\\ 0.1000\\end{pmatrix}=\n\\begin{pmatrix}0.8182 \\\\ 0.1818 \\end{pmatrix}\n"
},
{
"math_id": 55,
"text": "\n(\\mathbf{\\hat{f}_{0:2}})^T =\nc_2^{-1}\\begin{pmatrix}0.9 & 0.0 \\\\ 0.0 & 0.2 \\end{pmatrix}\\begin{pmatrix} 0.7 & 0.3 \\\\ 0.3 & 0.7 \\end{pmatrix}\\begin{pmatrix}0.8182 \\\\ 0.1818 \\end{pmatrix}=\nc_2^{-1}\\begin{pmatrix}0.5645 \\\\ 0.0745\\end{pmatrix}=\n\\begin{pmatrix}0.8834 \\\\ 0.1166 \\end{pmatrix}\n"
},
{
"math_id": 56,
"text": "\n(\\mathbf{\\hat{f}_{0:3}})^T =\nc_3^{-1}\\begin{pmatrix}0.1 & 0.0 \\\\ 0.0 & 0.8 \\end{pmatrix}\\begin{pmatrix} 0.7 & 0.3 \\\\ 0.3 & 0.7 \\end{pmatrix}\\begin{pmatrix}0.8834 \\\\ 0.1166 \\end{pmatrix}=\nc_3^{-1}\\begin{pmatrix}0.0653 \\\\ 0.2772\\end{pmatrix}=\n\\begin{pmatrix}0.1907 \\\\ 0.8093 \\end{pmatrix}\n"
},
{
"math_id": 57,
"text": "\n(\\mathbf{\\hat{f}_{0:4}})^T =\nc_4^{-1}\\begin{pmatrix}0.9 & 0.0 \\\\ 0.0 & 0.2 \\end{pmatrix}\\begin{pmatrix} 0.7 & 0.3 \\\\ 0.3 & 0.7 \\end{pmatrix}\\begin{pmatrix}0.1907 \\\\ 0.8093 \\end{pmatrix}=\nc_4^{-1}\\begin{pmatrix}0.3386 \\\\ 0.1247\\end{pmatrix}=\n\\begin{pmatrix}0.7308 \\\\ 0.2692 \\end{pmatrix}\n"
},
{
"math_id": 58,
"text": "\n(\\mathbf{\\hat{f}_{0:5}})^T =\nc_5^{-1}\\begin{pmatrix}0.9 & 0.0 \\\\ 0.0 & 0.2 \\end{pmatrix}\\begin{pmatrix} 0.7 & 0.3 \\\\ 0.3 & 0.7 \\end{pmatrix}\\begin{pmatrix}0.7308 \\\\ 0.2692 \\end{pmatrix}=\nc_5^{-1}\\begin{pmatrix}0.5331 \\\\ 0.0815\\end{pmatrix}=\n\\begin{pmatrix}0.8673 \\\\ 0.1327 \\end{pmatrix}\n"
},
{
"math_id": 59,
"text": "\n\\mathbf{b_{5:5}} = \\begin{pmatrix} 1.0 \\\\ 1.0\\end{pmatrix}\n"
},
{
"math_id": 60,
"text": "\n\\mathbf{\\hat{b}_{4:5}} = \\alpha\\begin{pmatrix} 0.7 & 0.3 \\\\ 0.3 & 0.7 \\end{pmatrix}\\begin{pmatrix}0.9 & 0.0 \\\\ 0.0 & 0.2 \\end{pmatrix}\\begin{pmatrix}1.0000 \\\\ 1.0000 \\end{pmatrix}=\\alpha\\begin{pmatrix}0.6900 \\\\ 0.4100\\end{pmatrix}=\\begin{pmatrix}0.6273 \\\\ 0.3727 \\end{pmatrix}\n"
},
{
"math_id": 61,
"text": "\n\\mathbf{\\hat{b}_{3:5}} = \\alpha\\begin{pmatrix} 0.7 & 0.3 \\\\ 0.3 & 0.7 \\end{pmatrix}\\begin{pmatrix}0.9 & 0.0 \\\\ 0.0 & 0.2 \\end{pmatrix}\\begin{pmatrix}0.6273 \\\\ 0.3727 \\end{pmatrix}=\\alpha\\begin{pmatrix}0.4175 \\\\ 0.2215\\end{pmatrix}=\\begin{pmatrix}0.6533 \\\\ 0.3467 \\end{pmatrix}\n"
},
{
"math_id": 62,
"text": "\n\\mathbf{\\hat{b}_{2:5}} = \\alpha\\begin{pmatrix} 0.7 & 0.3 \\\\ 0.3 & 0.7 \\end{pmatrix}\\begin{pmatrix}0.1 & 0.0 \\\\ 0.0 & 0.8 \\end{pmatrix}\\begin{pmatrix}0.6533 \\\\ 0.3467 \\end{pmatrix}=\\alpha\\begin{pmatrix}0.1289 \\\\ 0.2138\\end{pmatrix}=\\begin{pmatrix}0.3763 \\\\ 0.6237 \\end{pmatrix}\n"
},
{
"math_id": 63,
"text": "\n\\mathbf{\\hat{b}_{1:5}} = \\alpha\\begin{pmatrix} 0.7 & 0.3 \\\\ 0.3 & 0.7 \\end{pmatrix}\\begin{pmatrix}0.9 & 0.0 \\\\ 0.0 & 0.2 \\end{pmatrix}\\begin{pmatrix}0.3763 \\\\ 0.6237 \\end{pmatrix}=\\alpha\\begin{pmatrix}0.2745 \\\\ 0.1889\\end{pmatrix}=\\begin{pmatrix}0.5923 \\\\ 0.4077 \\end{pmatrix}\n"
},
{
"math_id": 64,
"text": "\n\\mathbf{\\hat{b}_{0:5}} = \\alpha\\begin{pmatrix} 0.7 & 0.3 \\\\ 0.3 & 0.7 \\end{pmatrix}\\begin{pmatrix}0.9 & 0.0 \\\\ 0.0 & 0.2 \\end{pmatrix}\\begin{pmatrix}0.5923 \\\\ 0.4077 \\end{pmatrix}=\\alpha\\begin{pmatrix}0.3976 \\\\ 0.2170\\end{pmatrix}=\\begin{pmatrix}0.6469 \\\\ 0.3531 \\end{pmatrix}\n"
},
{
"math_id": 65,
"text": "\n(\\mathbf{\\gamma_0})^T = \\alpha\\begin{pmatrix}0.5000 \\\\ 0.5000 \\end{pmatrix}\\circ \\begin{pmatrix}0.6469 \\\\ 0.3531 \\end{pmatrix}=\\alpha\\begin{pmatrix}0.3235 \\\\ 0.1765\\end{pmatrix}=\\begin{pmatrix}0.6469 \\\\ 0.3531 \\end{pmatrix}\n"
},
{
"math_id": 66,
"text": "\n(\\mathbf{\\gamma_1})^T = \\alpha\\begin{pmatrix}0.8182 \\\\ 0.1818 \\end{pmatrix}\\circ \\begin{pmatrix}0.5923 \\\\ 0.4077 \\end{pmatrix}=\\alpha\\begin{pmatrix}0.4846 \\\\ 0.0741\\end{pmatrix}=\\begin{pmatrix}0.8673 \\\\ 0.1327 \\end{pmatrix}\n"
},
{
"math_id": 67,
"text": "\n(\\mathbf{\\gamma_2})^T = \\alpha\\begin{pmatrix}0.8834 \\\\ 0.1166 \\end{pmatrix}\\circ \\begin{pmatrix}0.3763 \\\\ 0.6237 \\end{pmatrix}=\\alpha\\begin{pmatrix}0.3324 \\\\ 0.0728\\end{pmatrix}=\\begin{pmatrix}0.8204 \\\\ 0.1796 \\end{pmatrix}\n"
},
{
"math_id": 68,
"text": "\n(\\mathbf{\\gamma_3})^T = \\alpha\\begin{pmatrix}0.1907 \\\\ 0.8093 \\end{pmatrix}\\circ \\begin{pmatrix}0.6533 \\\\ 0.3467 \\end{pmatrix}=\\alpha\\begin{pmatrix}0.1246 \\\\ 0.2806\\end{pmatrix}=\\begin{pmatrix}0.3075 \\\\ 0.6925 \\end{pmatrix}\n"
},
{
"math_id": 69,
"text": "\n(\\mathbf{\\gamma_4})^T = \\alpha\\begin{pmatrix}0.7308 \\\\ 0.2692 \\end{pmatrix}\\circ \\begin{pmatrix}0.6273 \\\\ 0.3727 \\end{pmatrix}=\\alpha\\begin{pmatrix}0.4584 \\\\ 0.1003\\end{pmatrix}=\\begin{pmatrix}0.8204 \\\\ 0.1796 \\end{pmatrix}\n"
},
{
"math_id": 70,
"text": "\n(\\mathbf{\\gamma_5})^T = \\alpha\\begin{pmatrix}0.8673 \\\\ 0.1327 \\end{pmatrix}\\circ \\begin{pmatrix}1.0000 \\\\ 1.0000 \\end{pmatrix}=\\alpha\\begin{pmatrix}0.8673 \\\\ 0.1327 \\end{pmatrix}=\\begin{pmatrix}0.8673 \\\\ 0.1327 \\end{pmatrix}\n"
},
{
"math_id": 71,
"text": "\\mathbf{\\gamma_0}"
},
{
"math_id": 72,
"text": "\\mathbf{\\hat{b}_{0:5}}"
},
{
"math_id": 73,
"text": "\\mathbf{\\gamma_5}"
},
{
"math_id": 74,
"text": "\\mathbf{\\hat{f}_{0:5}}"
},
{
"math_id": 75,
"text": " O(S^2 T) "
},
{
"math_id": 76,
"text": " O(S T) "
},
{
"math_id": 77,
"text": "T"
},
{
"math_id": 78,
"text": "S"
},
{
"math_id": 79,
"text": " O(S^2 T^2) "
},
{
"math_id": 80,
"text": "S^T"
},
{
"math_id": 81,
"text": " O(T \\cdot S^T) "
},
{
"math_id": 82,
"text": " O(S^2 T \\log T) "
},
{
"math_id": 83,
"text": " O(S \\log T) "
},
{
"math_id": 84,
"text": "O(S)"
},
{
"math_id": 85,
"text": "O(S^2 T)"
},
{
"math_id": 86,
"text": "\\mathbf{f_{0:t+1}}"
}
] | https://en.wikipedia.org/wiki?curid=9292749 |
929502 | Kantorovich inequality | In mathematics, the Kantorovich inequality is a particular case of the Cauchy–Schwarz inequality, which is itself a generalization of the triangle inequality.
The triangle inequality states that the lengths of any two sides of a triangle, added together, will be equal to or greater than the length of the third side. In simplest terms, the Kantorovich inequality translates the basic idea of the triangle inequality into the terms and notational conventions of linear programming. (See vector space, inner product, and normed vector space for other examples of how the basic ideas inherent in the triangle inequality—line segment and distance—can be generalized into a broader context.)
More formally, the Kantorovich inequality can be expressed this way:
Let
formula_0
Let formula_1
Then
formula_2
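As a minimal numerical sketch (the weights, interval and sample size below are arbitrary choices), the inequality can be checked on random data; for simplicity this verifies the weaker bound without the subtracted minimum term, which the statement above implies:

```python
import random

random.seed(0)
a, b, n = 0.5, 4.0, 10
for _ in range(1000):
    p = [random.random() for _ in range(n)]       # p_i >= 0
    x = [random.uniform(a, b) for _ in range(n)]  # a <= x_i <= b
    lhs = sum(pi * xi for pi, xi in zip(p, x)) * sum(pi / xi for pi, xi in zip(p, x))
    bound = (a + b) ** 2 / (4 * a * b) * sum(p) ** 2  # bound without the correction term
    assert lhs <= bound + 1e-9
print("weak Kantorovich bound verified on 1000 random samples")
```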
The Kantorovich inequality is used in convergence analysis; it bounds the convergence rate of Cauchy's steepest descent.
Equivalents of the Kantorovich inequality have arisen in a number of different fields. For instance, the Cauchy–Schwarz–Bunyakovsky inequality and the Wielandt inequality are equivalent to the Kantorovich inequality and all of these are, in turn, special cases of the Hölder inequality.
The Kantorovich inequality is named after Soviet economist, mathematician, and Nobel Prize winner Leonid Kantorovich, a pioneer in the field of linear programming.
There is also a matrix version of the Kantorovich inequality due to Marshall and Olkin (1990). Its extensions and their applications to statistics are available; see e.g. Liu and Neudecker (1999) and Liu et al. (2022). | [
{
"math_id": 0,
"text": "p_i \\geq 0,\\quad 0 < a \\leq x_i \\leq b\\text{ for }i=1, \\dots ,n."
},
{
"math_id": 1,
"text": "A_n=\\{1,2,\\dots ,n\\}."
},
{
"math_id": 2,
"text": "\n\\begin{align}\n& {} \\qquad \\left( \\sum_{i=1}^n p_ix_i \\right ) \\left (\\sum_{i=1}^n \\frac{p_i}{x_i} \\right) \\\\\n& \\leq \\frac{(a+b)^2}{4ab} \\left (\\sum_{i=1}^n p_i \\right )^2\n-\\frac{(a-b)^2}{4ab} \\cdot \\min \\left\\{ \\left (\\sum_{i \\in X}p_i-\\sum_{j \\in Y}p_j \\right )^2\\,:\\, {X \\cup Y=A_n},{X \\cap Y=\\varnothing} \\right\\}.\n\\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=929502 |
9295203 | Mayo–Lewis equation | The Mayo–Lewis equation or copolymer equation in polymer chemistry describes the distribution of monomers in a copolymer. It was proposed by Frank R. Mayo and Frederick M. Lewis.
The equation considers a monomer mix of two components formula_0 and formula_1 and the four different reactions that can take place at the reactive chain end terminating in either monomer (formula_2 and formula_3) with their reaction rate constants formula_4:
formula_5
formula_6
formula_7
formula_8
The reactivity ratio for each propagating chain end is defined as the ratio of the rate constant for addition of a monomer of the species already at the chain end to the rate constant for addition of the other monomer.
formula_9
formula_10
The copolymer equation is then:
formula_11
with the concentrations of the components in square brackets. The equation gives the relative instantaneous rates of incorporation of the two monomers.
Equation derivation.
Monomer 1 is consumed with reaction rate:
formula_12
with formula_13 the concentration of all the active chains terminating in monomer 1, summed over chain lengths. formula_14 is defined similarly for monomer 2.
Likewise the rate of disappearance for monomer 2 is:
formula_15
Division of both equations by formula_16 followed by division of the first equation by the second yields:
formula_17
The ratio of active center concentrations can be found using the steady state approximation, meaning that the concentration of each type of active center remains constant.
formula_18
The rate of formation of active centers of monomer 1 (formula_8) is equal to the rate of their destruction (formula_6) so that
formula_19
or
formula_20
Substituting into the ratio of monomer consumption rates yields the Mayo–Lewis equation after rearrangement:
formula_21
Mole fraction form.
It is often useful to alter the copolymer equation by expressing concentrations in terms of mole fractions. Mole fractions of monomers formula_0 and formula_1 in the feed are defined as formula_22 and formula_23 where
formula_24
Similarly, formula_25 represents the mole fraction of each monomer in the copolymer:
formula_26
These equations can be combined with the Mayo–Lewis equation to give
formula_27
This equation gives the composition of copolymer formed at each instant. However the feed and copolymer compositions can change as polymerization proceeds.
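A short sketch of how the mole-fraction form is used in practice (the reactivity ratios below are arbitrary illustrative values, not data for a real monomer pair):

```python
def mayo_lewis_F1(f1, r1, r2):
    """Instantaneous copolymer composition F1 for feed mole fraction f1."""
    f2 = 1.0 - f1
    return (r1 * f1 ** 2 + f1 * f2) / (r1 * f1 ** 2 + 2 * f1 * f2 + r2 * f2 ** 2)

# hypothetical monomer pair with r1 = 0.5, r2 = 2.0
for f1 in (0.2, 0.5, 0.8):
    print(f"f1 = {f1:.1f}  ->  F1 = {mayo_lewis_F1(f1, 0.5, 2.0):.3f}")
```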
Limiting cases.
Reactivity ratios indicate preference for propagation. Large formula_28 indicates a tendency for formula_2 to add formula_0, while small formula_28 corresponds to a tendency for formula_2 to add formula_1. Values of formula_29 describe the tendency of formula_3 to add formula_1 or formula_0. From the definition of reactivity ratios, several special cases can be derived:
Calculation of reactivity ratios.
Calculation of reactivity ratios generally involves carrying out several polymerizations at varying monomer ratios. The copolymer composition can be analysed with methods such as Proton nuclear magnetic resonance, Carbon-13 nuclear magnetic resonance, or Fourier transform infrared spectroscopy. The polymerizations are also carried out at low conversions, so monomer concentrations can be assumed to be constant. With all the other parameters in the copolymer equation known, formula_28 and formula_29 can be found.
Curve Fitting.
One of the simplest methods for finding reactivity ratios is plotting the copolymer equation and using nonlinear least squares analysis to find the formula_28, formula_29 pair that gives the best-fit curve. This approach is preferred over methods such as Kelen-Tüdős or Fineman-Ross (see below), which involve linearization of the Mayo–Lewis equation and therefore introduce bias into the results.
Mayo-Lewis Method.
The Mayo-Lewis method uses a form of the copolymer equation relating formula_28 to formula_29:
formula_37
For each different monomer composition, a line is generated using arbitrary formula_28 values. The intersection of these lines is the formula_28, formula_29 for the system. More frequently, the lines do not intersect in a single point and the area in which most lines intersect can be given as a range of formula_28, and formula_29 values.
Fineman-Ross Method.
Fineman and Ross rearranged the copolymer equation into a linear form:
formula_38
where formula_39 and formula_40
Thus, a plot of formula_41 versus formula_42 yields a straight line with slope formula_28 and intercept formula_43
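A rough sketch of the Fineman-Ross procedure (using synthetic, noise-free data generated from assumed reactivity ratios, so the fit recovers them exactly):

```python
import numpy as np

def mayo_lewis_F1(f1, r1, r2):
    f2 = 1.0 - f1
    return (r1 * f1 ** 2 + f1 * f2) / (r1 * f1 ** 2 + 2 * f1 * f2 + r2 * f2 ** 2)

r1_true, r2_true = 0.5, 2.0                # assumed "true" reactivity ratios
f1 = np.linspace(0.1, 0.9, 9)              # feed compositions
F1 = mayo_lewis_F1(f1, r1_true, r2_true)   # resulting copolymer compositions

G = f1 * (2 * F1 - 1) / ((1 - f1) * F1)
H = f1 ** 2 * (1 - F1) / ((1 - f1) ** 2 * F1)

slope, intercept = np.polyfit(H, G, 1)     # G = H*r1 - r2
print("r1 =", round(slope, 3), " r2 =", round(-intercept, 3))
```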
Kelen-Tüdős method.
The Fineman-Ross method can be biased towards points at low or high monomer concentration, so Kelen and Tüdős introduced an arbitrary constant,
formula_44
where formula_45 and formula_46 are the highest and lowest values of formula_41 from the Fineman-Ross method. The data can be plotted in a linear form
formula_47
where formula_48 and formula_49. Plotting formula_50 against formula_51 yields a straight line that gives formula_52 when formula_53 and formula_54 when formula_55. This distributes the data more symmetrically and can yield better results.
Q-e scheme.
A semi-empirical method for the prediction of reactivity ratios is called the Q-e scheme which was proposed by Alfrey and Price in 1947. This involves using two parameters for each monomer, formula_56 and formula_57. The reaction of formula_58
radical with formula_59 monomer is written as
formula_60
while the reaction of formula_58 radical with formula_58 monomer is written as
formula_61
where P is a proportionality constant, Q is the measure of the reactivity of a monomer via resonance stabilization, and e is the measure of the polarity of a monomer (molecule or radical) via the effect of functional groups on vinyl groups. Using these definitions, formula_54 and formula_34 can be found from the ratios of the corresponding rate constants. An advantage of this system is that reactivity ratios can be found using tabulated Q-e values of monomers regardless of what the monomer pair is in the system.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M_1\\,"
},
{
"math_id": 1,
"text": "M_2\\,"
},
{
"math_id": 2,
"text": "M_1^*\\,"
},
{
"math_id": 3,
"text": "M_2^*\\,"
},
{
"math_id": 4,
"text": "k\\,"
},
{
"math_id": 5,
"text": "M_1^* + M_1 \\xrightarrow{k_{11}} M_1M_1^* \\,"
},
{
"math_id": 6,
"text": "M_1^* + M_2 \\xrightarrow{k_{12}} M_1M_2^* \\,"
},
{
"math_id": 7,
"text": "M_2^* + M_2 \\xrightarrow{k_{22}} M_2M_2^* \\,"
},
{
"math_id": 8,
"text": "M_2^* + M_1 \\xrightarrow{k_{21}} M_2M_1^* \\,"
},
{
"math_id": 9,
"text": "r_1 = \\frac{k_{11}}{k_{12}} \\,"
},
{
"math_id": 10,
"text": "r_2 = \\frac{k_{22}}{k_{21}} \\,"
},
{
"math_id": 11,
"text": "\\frac {d\\left [M_1 \\right]}{d\\left [M_2\\right]}=\\frac{\\left [M_1\\right]\\left (r_1\\left[M_1\\right]+\\left [M_2\\right]\\right)}{\\left [M_2\\right]\\left (\\left [M_1\\right]+r_2\\left [M_2\\right]\\right)}"
},
{
"math_id": 12,
"text": "\\frac{-d[M_1]}{dt} = k_{11}[M_1]\\sum[M_1^*] + k_{21}[M_1]\\sum[M_2^*] \\,"
},
{
"math_id": 13,
"text": "\\sum[M_1^*]"
},
{
"math_id": 14,
"text": "\\sum[M_2^*]"
},
{
"math_id": 15,
"text": "\\frac{-d[M_2]}{dt} = k_{12}[M_2]\\sum[M_1^*] + k_{22}[M_2]\\sum[M_2^*] \\,"
},
{
"math_id": 16,
"text": "\\sum[M_2^*] \\,"
},
{
"math_id": 17,
"text": "\\frac{d[M_1]}{d[M_2]} = \\frac{[M_1]}{[M_2]} \\left( \\frac{k_{11}\\frac{\\sum[M_1^*]}{\\sum[M_2^*]} + k_{21}} {k_{12}\\frac{\\sum[M_1^*]}{\\sum[M_2^*]} + k_{22}} \\right) \\,"
},
{
"math_id": 18,
"text": "\\frac{d\\sum[M_1^*]}{dt} = \\frac{d\\sum[M_2^*]}{dt} \\approx 0\\,"
},
{
"math_id": 19,
"text": "k_{21}[M_1]\\sum[M_2^*] = k_{12}[M_2]\\sum[M_1^*] \\,"
},
{
"math_id": 20,
"text": " \\frac{\\sum[M_1^*]}{\\sum[M_2^*]} = \\frac{k_{21}[M_1]}{k_{12}[M_2]}\\,"
},
{
"math_id": 21,
"text": "\\frac{d[M_1]}{d[M_2]} = \\frac{[M_1]}{[M_2]} \\left( \\frac{k_{11}\\frac{k_{21}[M_1]}{k_{12}[M_2]} + k_{21}} {k_{12}\\frac{k_{21}[M_1]}{k_{12}[M_2]}+ k_{22}} \\right) = \\frac{[M_1]}{[M_2]} \\left( \\frac{\\frac{k_{11}[M_1]}{k_{12}[M_2]} + 1} {\\frac{[M_1]}{[M_2]}+ \\frac{k_{22}}{k_{21}}} \\right) = \\frac{[M_1]}{[M_2]} \\frac{\\left (r_1\\left[M_1\\right]+\\left [M_2\\right]\\right)}{\\left (\\left [M_1\\right]+r_2\\left [M_2\\right]\\right)}"
},
{
"math_id": 22,
"text": "f_1\\,"
},
{
"math_id": 23,
"text": "f_2\\,"
},
{
"math_id": 24,
"text": "f_1 = 1 - f_2 = \\frac{M_1}{(M_1 + M_2)} \\,"
},
{
"math_id": 25,
"text": "F\\,"
},
{
"math_id": 26,
"text": "F_1 = 1 - F_2 = \\frac{d M_1}{d (M_1 + M_2)} \\,"
},
{
"math_id": 27,
"text": "F_1=1-F_2=\\frac{r_1 f_1^2+f_1 f_2}{r_1 f_1^2+2f_1 f_2+r_2f_2^2}\\,"
},
{
"math_id": 28,
"text": "r_1\\,"
},
{
"math_id": 29,
"text": "r_2\\,"
},
{
"math_id": 30,
"text": "r_1 \\approx r_2 >> 1 \\,"
},
{
"math_id": 31,
"text": "r_1 \\approx r_2 > 1 \\,"
},
{
"math_id": 32,
"text": "r_1 \\approx r_2 \\approx 1 \\,"
},
{
"math_id": 33,
"text": "r_1 \\approx r_2 \\approx 0 \\,"
},
{
"math_id": 34,
"text": " r_2 "
},
{
"math_id": 35,
"text": "r_1 >> 1 >> r_2 \\,"
},
{
"math_id": 36,
"text": "r < 1 \\,"
},
{
"math_id": 37,
"text": "r_2 = \\frac{f_1}{f_2}\\left[\\frac{F_2}{F_1}(1+\\frac{f_1r_1}{f_2})-1\\right]\\,"
},
{
"math_id": 38,
"text": " G= Hr_1-r_2 \\,"
},
{
"math_id": 39,
"text": " G = \\frac{f_1(2F_1-1)}{(1-f_1)F_1} \\,"
},
{
"math_id": 40,
"text": " H = \\frac{f_1^2(1-F_1)}{(1-f_1)^2F_1}\\ "
},
{
"math_id": 41,
"text": " H \\,"
},
{
"math_id": 42,
"text": " G \\,"
},
{
"math_id": 43,
"text": "-r_2\\,"
},
{
"math_id": 44,
"text": " \\alpha = (H_{min}H_{max})^{0.5} \\,"
},
{
"math_id": 45,
"text": " H_{min} \\,"
},
{
"math_id": 46,
"text": " H_{max} \\,"
},
{
"math_id": 47,
"text": " \\eta = \\left[r_1+\\frac{r_2}{\\alpha}\\right]\\mu - \\frac{r_2}{\\alpha} \\,"
},
{
"math_id": 48,
"text": " \\eta= G/(\\alpha+H) \\,"
},
{
"math_id": 49,
"text": " \\mu= H/(\\alpha+H) \\,"
},
{
"math_id": 50,
"text": " \\eta "
},
{
"math_id": 51,
"text": " \\mu "
},
{
"math_id": 52,
"text": " -r_2/\\alpha "
},
{
"math_id": 53,
"text": " \\mu=0 "
},
{
"math_id": 54,
"text": " r_1 "
},
{
"math_id": 55,
"text": " \\mu = 1 "
},
{
"math_id": 56,
"text": " Q "
},
{
"math_id": 57,
"text": " e "
},
{
"math_id": 58,
"text": " M_1 "
},
{
"math_id": 59,
"text": " M_2 "
},
{
"math_id": 60,
"text": " k_{12} = P_1Q_2exp(-e_1e_2) "
},
{
"math_id": 61,
"text": " k_{11} = P_1Q_1exp(-e_1e_1) "
}
] | https://en.wikipedia.org/wiki?curid=9295203 |
9296238 | Chebyshev–Markov–Stieltjes inequalities | Mathematical theorem
In mathematical analysis, the Chebyshev–Markov–Stieltjes inequalities are inequalities related to the problem of moments that were formulated in the 1880s by Pafnuty Chebyshev and proved independently by Andrey Markov and (somewhat later) by Thomas Jan Stieltjes. Informally, they provide sharp bounds on a measure from above and from below in terms of its first moments.
Formulation.
Given "m"0...,"m"2"m"-1 ∈ R, consider the collection C of measures "μ" on R such that
formula_0
for "k" = 0,1...,2"m" − 1 (and in particular the integral is defined and finite).
Let "P"0,"P"1, ...,"P""m" be the first "m" + 1 orthogonal polynomials with respect to "μ" ∈ C, and let "ξ"1..."ξ""m" be the zeros of "P""m". It is not hard to see that the polynomials "P"0,"P"1, ...,"P""m"-1 and the numbers "ξ"1..."ξ""m" are the same for every "μ" ∈ C, and therefore are determined uniquely by "m"0...,"m"2"m"-1.
Denote
formula_1.
Theorem For "j" = 1,2...,"m", and any "μ" ∈ C,
formula_2
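As a concrete illustration under assumed choices (μ taken to be Lebesgue measure on [−1, 1], and the polynomials taken orthonormal), the ξ_j become the Gauss–Legendre nodes and the values ρ_{m−1}(ξ_j) become the corresponding quadrature weights (the Christoffel numbers), so the two-sided bounds can be checked numerically:

```python
import numpy as np

m = 5
nodes, weights = np.polynomial.legendre.leggauss(m)  # xi_j and rho_{m-1}(xi_j)
cdf = lambda x: np.clip(x + 1.0, 0.0, 2.0)           # mu(-inf, x] for Lebesgue measure on [-1, 1]

cumulative = np.cumsum(weights)                      # rho(xi_1) + ... + rho(xi_j)
upper = np.append(nodes[1:], np.inf)                 # xi_{j+1}, with xi_{m+1} read as +infinity
for j in range(m):
    assert cdf(nodes[j]) - 1e-12 <= cumulative[j] <= cdf(upper[j]) + 1e-12
print("Chebyshev-Markov-Stieltjes bounds hold for m =", m)
```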
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\int x^k d\\mu(x) = m_k"
},
{
"math_id": 1,
"text": "\\rho_{m-1}(z) = 1 \\Big/ \\sum_{k=0}^{m-1} |P_k(z)|^2"
},
{
"math_id": 2,
"text": "\\mu(-\\infty, \\xi_j] \\leq \\rho_{m-1}(\\xi_1) + \\cdots + \\rho_{m-1}(\\xi_j) \\leq \\mu(-\\infty,\\xi_{j+1})."
}
] | https://en.wikipedia.org/wiki?curid=9296238 |
929631 | Ski wax | Material for use on snow runners
Ski wax is a material applied to the bottom of snow runners, including skis, snowboards, and toboggans, to improve their coefficient of friction performance under varying snow conditions. The two main types of wax used on skis are glide waxes and grip waxes. They address kinetic friction—to be minimized with a glide wax—and static friction—to be achieved with a grip wax. Both types of wax are designed to be matched with the varying properties of snow, including crystal type and size, and moisture content of the snow surface, which vary with temperature and the temperature history of the snow. Glide wax is selected to minimize sliding friction for both alpine and cross-country skiing. Grip wax (also called "kick wax") provides on-snow traction for cross-country skiers, as they stride forward using classic technique.
Modern plastic materials (e.g. high-modulus polyethylene and Teflon), used on ski bases, have excellent gliding properties on snow, which in many circumstances diminish the added value of a glide wax. Likewise, uni-directional textures (e.g. fish scale or micro-scale hairs) underfoot on cross-country skis can offer a practical substitute for grip wax for those skiers, using the classic technique.
History.
Johannes Scheffer in "Argentoratensis Lapponiæ" (History of Lapland) in 1673 gave what is probably the first recorded instruction for ski wax application. He advised skiers to use pine tar pitch and rosin. Ski waxing was also documented in 1761. In 1733 the use of tar was described by Norwegian colonel Jens Henrik Emahusen. In the 1740s the Sami people's use of resin and tallow under their skis was recorded in writing.
Beginning around 1854, California gold rush miners held organized downhill ski races. They also discovered that ski bases, smeared with lubricants brewed from vegetable and/or animal compounds, increased speed. This led to some of the first commercial ski lubricants, such as "Black Dope" and "Sierra Lighting"; both were mainly composed of sperm oil, vegetable oil and pine pitch. However, some instead used paraffin candle wax that melted onto ski bases, and these worked better under colder conditions.
Pine tar on wooden ski bases proved effective for using skis as transport over the centuries, because it fills the pores of the wood and creates a hydrophobic surface that minimizes suction from water in the snow, yet has sufficient roughness to allow traction for forward motion. In the 1920s and 30s, new varnishes were developed by European companies as season-long ski bases. A significant advance for cross-country racing was the introduction of klister, which gives good traction in granular snow, especially in spring conditions; klister was invented and patented in 1913. In the early 1940s Astra AB, a Swedish chemical company, advised by Olympic cross-country skier Martin Matsbo, started the development of petroleum-based waxes, using paraffin wax and other admixtures. By 1952, such noted brands as Toko, Swix and Rex were providing an array of color-coded, temperature-tailored waxes.
In the last quarter of the 20th century, researchers addressed the twin problems of water and impurities adhering to skis during spring conditions. Terry Hertel addressed both problems, first with the novel use of a surfactant that interacted with the wax matrix in such a way as to repel water effectively, a product introduced in 1974 by Hertel Wax. Hertel also developed the first fluorocarbon product and the first springtime wax that repels water and keeps the running surface slick for springtime alpine skiing and snowboarding. This technology was introduced to the market in 1986 by Hertel Wax. In 1990, Hertel filed for a U.S. patent on a "ski wax for use with sintered-base snow skis", containing paraffin, a hardener wax, roughly 1% per-fluoroether diol, and 2% SDS surfactant. Trademarks for Hertel waxes are Super HotSauce, Racing FC739, SpringSolution and White Gold. In the 1990s, Swix chief chemist Leif Torgersen found a glide wax additive to repel pollen and other snow impurities—a problem with soft grip waxes during distance races—in the form of a fluorocarbon that could be ironed into the ski base. The solution was based on the work of Enrico Traverso, who had developed a fluorocarbon powder with a melting temperature just a few degrees below that of sintered polyethylene, patented in Italy as a "ski lubricant comprising paraffinic wax and hydrocarbon compounds containing a perfluorocarbon segment".
Science of sliding on snow.
The ability of a ski or other runner to slide over snow depends on both the properties of the snow and the ski to result in an optimum amount of lubrication from melting the snow by friction with the ski—too little and the ski interacts with solid snow crystals, too much and capillary attraction of meltwater retards the ski.
Friction.
Before a ski can slide, it must overcome the maximum value static friction, formula_0, for the ski/snow contact, where formula_1 is the coefficient of static friction and formula_2 is the normal force of the ski on snow. Kinetic (or dynamic) friction occurs when the ski is moving over the snow. The coefficient of kinetic friction, formula_3, is less than the coefficient of static friction for both ice and snow. The force required for sliding on snow is the product of the coefficient of kinetic friction and the normal force: formula_4. Both the static and kinetic coefficients of friction increase with colder snow temperatures (also true for ice).
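A small worked example of the two formulas (the mass and friction coefficients here are assumed, illustrative values only):

```python
g = 9.81                      # m/s^2
mass = 75.0                   # kg, skier plus equipment (assumed)
F_n = mass * g                # normal force on level snow

mu_s, mu_k = 0.20, 0.05       # illustrative static and kinetic coefficients
print(f"force to break static friction: {mu_s * F_n:.0f} N")
print(f"drag while gliding:             {mu_k * F_n:.0f} N")
```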
Snow properties.
Snowflakes have a wide range of shapes, even as they fall; among these are: six-sided star-like dendrites, hexagonal needles, platelets and icy pellets. Once snow accumulates on the ground, the flakes immediately begin to undergo transformation (called "metamorphism"), owing to temperature changes, sublimation, and mechanical action. Temperature changes may be from the ambient temperature, solar radiation, rainwater, wind, or the temperature of the material beneath the snow layer. Mechanical action includes wind and compaction. Over time, bulk snow tends to consolidate—its crystals become truncated from breaking apart or losing mass with sublimation directly from solid to gas and with freeze-thaw, causing them to combine as coarse and granular ice crystals. Colbeck reports that fresh, cold, and man-made snow all interact more directly with the base of a ski and increase friction, indicating the use of harder waxes. Conversely, older, warmer, and denser snows present lower friction, in part due to increased grain size, which better promotes a water film and a smoother surface of the snow crystals for which softer waxes are indicated.
Ski friction properties.
Colbeck offers an overview of the five friction processes of skis on snow. They are the: 1) resistance due to plowing of snow out of the way, 2) deformation of the snow over which the ski is traveling, 3) lubrication of the ski with a thin layer of melt water, 4) capillary attraction of water in the snow to the ski bottom, and 5) contamination of the snow with dust and other non-slippery elements. Plowing and deformation pertain to the interaction of the ski, as a whole, with the snow and are negligible on a firm surface. Lubrication, capillary attraction and contamination are issues for the ski bottom and the wax that is applied to reduce sliding friction or achieve adequate grip.
Typically, a sliding ski melts a thin and transitory film of lubricating layer of water, caused by the heat of friction between the ski and the snow in its passing. Colbeck suggests that the optimum water film thickness is in the range between 4 and 12 "μ"m. However, the heat generated by friction can be lost by conduction to a cold ski, thereby diminishing the production of the melt layer. At the other extreme, when the snow is wet and warm, heat generation creates a thicker film that can create increased capillary drag on the ski bottom. Kuzmin and Fuss suggest that the most favorable combination of ski base material properties to minimize ski sliding friction on snow include: increased hardness and lowered thermal conductivity of the base material to promote meltwater generation for lubrication, wear resistance in cold snow, and hydrophobicity to minimize capillary suction. These attributes are readily achievable with a PTFE base, which diminishes the value added by glide waxes. Lintzén reports that factors other than wax are much more important in reducing friction on cross-country skate skis—the curvature of the ski and snow conditions.
Glide wax.
Glide wax can be applied to alpine skis, snowboards, skate skis, classic skis, back-country skis, and touring skis. Traditional waxes comprise solid hydrocarbons. High-performance "fluorocarbon" waxes also contain fluorine, which substitutes some fraction of the hydrogen atoms in the hydrocarbons with fluorine atoms to achieve lower coefficients of friction and higher water repellency than the pure hydrocarbon wax can achieve. Wax is adjusted for hardness to minimize sliding friction as a function of snow properties, which include the effects of:
Properties.
A variety of glide waxes are tailored for specific temperature ranges and other snow properties, with varying hardness and additives that address repellence of moisture and dirt. The hardness of the glide wax affects both the melting of the snow that lubricates its passage over the surface and its ability to avoid suction from meltwater in the snow. With too little melting, the sharp edges of snow crystals impede the passage of the ski; with too much meltwater, suction does. A tipping point, at which the dominant influence on sliding friction shifts from crystal type to moisture content, occurs around . Harder waxes address colder, drier or more abrasive snow conditions, whereas softer waxes have a lower coefficient of friction, but abrade more readily. Wax formulations combine three types of wax to adjust coefficient of friction and durability. From hard to soft, they include synthetic waxes with 50 or more carbon atoms, microcrystalline waxes with 25 to 50 carbon atoms and paraffin waxes with 20 to 35 carbon atoms. Additives to such waxes include graphite, teflon, silicon, fluorocarbons, and molybdenum to improve glide and/or reduce dirt accumulation.
Application.
Glide wax can be applied cold or hot. Cold applications include, rubbing hard wax like a crayon, applying a liquid wax or a spray wax. Hot applications of wax include the use of heat from an iron, infrared lamp, or a "hot box" oven.
Base material.
The role of glide wax is to adapt and improve the friction properties of a ski base to the expected snow properties to be encountered on a spectrum from cold crystalline snow to saturated granular snow. Modern ski bases often are made from ultra-high-molecular-weight polyethylene (UHMWPE). Kuzmin asserts that UHMWPE is non-porous and can hold neither wax nor water, so there is no possibility for filling pores; furthermore, he asserts that UHMWPE is very hydrophobic, which means that wet snow does not appreciably retard the ski and that glide wax offers little additional ability to repel water. He notes that clear bases are more durable and hydrophobic than those with carbon content. The same author asserts that texture is more important than surface chemistry for creating the optimum balance between a running surface that's too dry (not slippery enough) and too wet (ski subject to suction forces). In warm, moist snow, texture can help break the retarding capillary attraction between the ski base and the snow. Giesbrecht agrees that low wetting angle of the ski base is key and also emphasizes the importance of the degree of surface roughness at the micrometre scale as a function of snow temperature—cold snow favoring a smoother surface and wetter, warmer snow favoring a textured surface. Some authors question the necessity to use any glide waxes on modern ski bases.
Grip wax.
Cross-country skiers use a grip wax (also called "kick wax") for classic-style waxable skis to provide traction with static friction on the snow that allows them to propel themselves forward on flats and up hills. They are applied in an area beneath the skier's foot and extending, somewhat forward, that is formed by the camber of the classic ski, called the "grip zone" (or "kick zone"). The presence of camber allows the skis to grip the snow, when the weight is on one ski and the ski is fully flexed, but minimize drag when the skis are weighted equally and are thus less than fully flexed. Grip waxes are designed for specific temperature ranges and types of snow; a correctly selected grip wax does not appreciably decrease the glide of skis that have proper camber for the skier's weight and for the snow conditions. There are two substances used for grip wax: hard wax and klister.
Some skis are "waxless", having a fish-scale or other texture to prevent the ski from sliding backwards. Ski mountaineers use temporarily adhered climbing skins to provide uphill grip, but typically remove them for descent.
Wax solvents.
Wax can be dissolved by non-polar solvents like mineral spirits. However, some commercial wax solvents are made from citrus oil, which is less toxic, harder to ignite, and gentler on the ski base.
Health and environmental effects.
Health.
Ski wax may contain chemicals with potential health effects, including per- and polyfluoroalkyl substances (PFASs). Levels of perfluorinated carboxylic acids, especially perfluorooctanoic acid (PFOA), have been shown to increase in ski wax technicians during the ski season.
Environment.
When skiing, the friction between the snow and skis causes wax to abrade and remain in the snow pack until spring thaw. Then the snowmelt drains into watersheds, streams, lakes and rivers, thereby changing the chemistry of the environment and the food chain. PFASs in ski wax are heat resistant, chemically and biologically stable, and thus environmentally persistent. They have been shown to accumulate in animals that are present at ski venues. The International Ski Federation (FIS) announced that it would introduce a ban on PFASs in waxes in all competitive ski disciplines from the 2020/21 winter season.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F_{max} = \\mu_\\mathrm{s} F_{n}\\,"
},
{
"math_id": 1,
"text": "\\mu_\\mathrm{s}"
},
{
"math_id": 2,
"text": " F_{n}\\,"
},
{
"math_id": 3,
"text": "\\mu_\\mathrm{k}"
},
{
"math_id": 4,
"text": "F_{k} = \\mu_\\mathrm{k} F_{n}\\,"
}
] | https://en.wikipedia.org/wiki?curid=929631 |
929699 | Friedman number | A Friedman number is an integer, which represented in a given numeral system, is the result of a non-trivial expression using all its own digits in combination with any of the four basic arithmetic operators (+, −, ×, ÷), additive inverses, parentheses, exponentiation, and concatenation. Here, non-trivial means that at least one operation besides concatenation is used. Leading zeros cannot be used, since that would also result in trivial Friedman numbers, such as 024 = 20 + 4. For example, 347 is a Friedman number in the decimal numeral system, since 347 = 73 + 4. The decimal Friedman numbers are:
25, 121, 125, 126, 127, 128, 153, 216, 289, 343, 347, 625, 688, 736, 1022, 1024, 1206, 1255, 1260, 1285, 1296, 1395, 1435, 1503, 1530, 1792, 1827, 2048, 2187, 2349, 2500, 2501, 2502, 2503, 2504, 2505, 2506, 2507, 2508, 2509, 2592, 2737, 2916, ... (sequence in the OEIS).
Friedman numbers are named after Erich Friedman, a now-retired mathematics professor at Stetson University and recreational mathematics enthusiast.
A Friedman prime is a Friedman number that is also prime. The decimal Friedman primes are:
127, 347, 2503, 12101, 12107, 12109, 15629, 15641, 15661, 15667, 15679, 16381, 16447, 16759, 16879, 19739, 21943, 27653, 28547, 28559, 29527, 29531, 32771, 32783, 35933, 36457, 39313, 39343, 43691, 45361, 46619, 46633, 46643, 46649, 46663, 46691, 48751, 48757, 49277, 58921, 59051, 59053, 59263, 59273, 64513, 74353, 74897, 78163, 83357, ... (sequence in the OEIS).
Results in base 10.
The expressions of the first few Friedman numbers are:
A nice Friedman number is a Friedman number where the digits in the expression can be arranged to be in the same order as in the number itself. For example, we can arrange 127 = 2^7 − 1 as 127 = −1 + 2^7. The first nice Friedman numbers are:
127, 343, 736, 1285, 2187, 2502, 2592, 2737, 3125, 3685, 3864, 3972, 4096, 6455, 11264, 11664, 12850, 13825, 14641, 15552, 15585, 15612, 15613, 15617, 15618, 15621, 15622, 15623, 15624, 15626, 15632, 15633, 15642, 15645, 15655, 15656, 15662, 15667, 15688, 16377, 16384, 16447, 16875, 17536, 18432, 19453, 19683, 19739 (sequence in the OEIS).
A nice Friedman prime is a nice Friedman number that's also prime. The first nice Friedman primes are:
127, 15667, 16447, 19739, 28559, 32771, 39343, 46633, 46663, 117619, 117643, 117763, 125003, 131071, 137791, 147419, 156253, 156257, 156259, 229373, 248839, 262139, 262147, 279967, 294829, 295247, 326617, 466553, 466561, 466567, 585643, 592763, 649529, 728993, 759359, 786433, 937577 (sequence in the OEIS).
Michael Brand proved that the density of Friedman numbers among the naturals is 1, which is to say that the probability that a number chosen randomly and uniformly between 1 and "n" is a Friedman number tends to 1 as "n" tends to infinity. This result extends to Friedman numbers under any base of representation. He also proved that the same holds for binary, ternary and quaternary nice Friedman numbers. The case of base-10 nice Friedman numbers is still open.
Vampire numbers are a subset of Friedman numbers where the only operation is a multiplication of two numbers with the same number of digits, for example 1260 = 21 × 60.
Finding 2-digit Friedman numbers.
There are usually fewer 2-digit Friedman numbers than 3-digit and longer ones in any given base, but the 2-digit ones are easier to find. If we represent a 2-digit number as "mb" + "n", where "b" is the base, "m" is an integer from 1 to "b"−1, and "n" is an integer from 0 to "b"−1, we need only check each possible combination of "m" and "n" against the equalities "mb" + "n" = "m"^"n" and "mb" + "n" = "n"^"m" to see which ones are true. We need not concern ourselves with "m" + "n" or "m" × "n", since these will always be smaller than "mb" + "n" when "n" < "b". The same clearly holds for "m" − "n" and "m" / "n".
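The check described above is easy to automate; a minimal sketch scanning all 2-digit numerals in bases 2 through 10:

```python
def two_digit_friedman(base):
    """2-digit Friedman numbers in the given base, testing only m^n and n^m."""
    found = []
    for m in range(1, base):          # leading digit m cannot be 0
        for n in range(base):
            value = m * base + n
            if value in (m ** n, n ** m):
                found.append(value)
    return found

for base in range(2, 11):
    print(base, two_digit_friedman(base))  # base 10 gives [25], i.e. 25 = 5^2
```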
Other bases.
Friedman numbers also exist for bases other than base 10. For example, 11001_2 = 25 is a Friedman number in the binary numeral system, since 11001 = 101^10.
The first few known Friedman numbers in other small bases are shown below, written in their respective bases. Numbers shown in bold are nice Friedman numbers.
General results.
In base formula_0,
formula_1
is a Friedman number (written in base formula_2 as 1"mk" = "k" × "m"1).
In base formula_3,
formula_4
is a Friedman number (written in base formula_2 as 100...00200...001 = 100...001^2, with formula_5 zeroes between each nonzero number).
In base formula_6,
formula_7
is a Friedman number (written in base formula_2 as 2"k" = "k"^2). From the observation that all numbers of the form 2"k" × b^(2"n") can be written as "k"000...000^2 with "n" 0's, we can find sequences of consecutive Friedman numbers which are arbitrarily long. For example, for formula_8, or in base 10, 250068 = 500^2 + 68, from which we can easily deduce the range of consecutive Friedman numbers from 250000 to 250099 in base 10.
Repdigit Friedman numbers:
There are an infinite number of prime Friedman numbers in all bases, because for base formula_9 the numbers
formula_10 in base 2
formula_11 in base 3
formula_12 in base 4
formula_13 in base 5
formula_14 in base 6
for base formula_15 the numbers
formula_16 in base 7,
formula_17 in base 8,
formula_18 in base 9,
formula_19 in base 10,
and for base formula_20
formula_21
are Friedman numbers for all formula_22. The numbers of this form are an arithmetic sequence formula_23, where formula_24 and formula_25 are relatively prime regardless of base as formula_2 and formula_26 are always relatively prime, and therefore, by Dirichlet's theorem on arithmetic progressions, the sequence contains an infinite number of primes.
Using Roman numerals.
In a trivial sense, all Roman numerals with more than one symbol are Friedman numbers. The expression is created by simply inserting + signs into the numeral, and occasionally the − sign with slight rearrangement of the order of the symbols.
Some research into Roman numeral Friedman numbers for which the expression uses some of the other operators has been done. The first such nice Roman numeral Friedman number discovered was 8, since VIII = (V - I) × II. Other such nontrivial examples have been found.
The difficulty of finding nontrivial Friedman numbers in Roman numerals increases not with the size of the number (as is the case with positional notation numbering systems) but with the number of symbols it has. For example, it is much tougher to figure out whether 147 (CXLVII) is a Friedman number in Roman numerals than it is to make the same determination for 1001 (MI). With Roman numerals, one can at least derive quite a few Friedman expressions from any new expression one discovers. Since 8 is a nontrivial nice Roman numeral Friedman number, it follows that any number ending in VIII is also such a Friedman number.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "b = mk - m"
},
{
"math_id": 1,
"text": "b^2 + mb + k = (mk - m + m)b + k = mbk + k = k(mb + 1)"
},
{
"math_id": 2,
"text": "b"
},
{
"math_id": 3,
"text": "b > 2"
},
{
"math_id": 4,
"text": "{(b^n + 1)}^2 = b^{2n} + 2{b^n} + 1"
},
{
"math_id": 5,
"text": "n - 1"
},
{
"math_id": 6,
"text": "b = \\frac{k(k - 1)}{2}"
},
{
"math_id": 7,
"text": "2b + k = 2\\left(\\frac{k(k - 1)}{2}\\right) + k = k^2 - k + k = k^2"
},
{
"math_id": 8,
"text": "k = 5"
},
{
"math_id": 9,
"text": "2 \\leq b \\leq 6"
},
{
"math_id": 10,
"text": "n \\times 10^{1111} + 11111111 = n \\times 10^{1111} + 10^{1000} - 1 + 0 + 0"
},
{
"math_id": 11,
"text": "n \\times 10^{102} + 1101221 = n \\times 10^{102} + 2^{101} + 0 + 0"
},
{
"math_id": 12,
"text": "n \\times 10^{20} + 310233 = n \\times 10^{20} + 33^{3} + 0"
},
{
"math_id": 13,
"text": "n \\times 10^{13} + 2443111 = n \\times 10^{4 + 4} + (2 \\times 3)^{11}"
},
{
"math_id": 14,
"text": "n \\times 10^{13} + 25352411 = n \\times 10^{2 \\times 5 - 1} + (5 + 2)^{(3 + 4)}"
},
{
"math_id": 15,
"text": "7 \\leq b \\leq 10"
},
{
"math_id": 16,
"text": "n \\times 10^{60} + 164351 = n \\times 10^{60} + (10 + 4 - 3)^5 + 0 + 0 + \\ldots"
},
{
"math_id": 17,
"text": "n \\times 10^{60} + 163251 = n \\times 10^{60} + (10 + 3 - 2)^5 + 0 + 0 + \\ldots"
},
{
"math_id": 18,
"text": "n \\times 10^{60} + 162151 = n \\times 10^{60} + (10 + 2 - 1)^5 + 0 + 0 + \\ldots"
},
{
"math_id": 19,
"text": "n \\times 10^{60} + 161051 = n \\times 10^{60} + (10 + 1 - 0)^5 + 0 + 0 + \\ldots"
},
{
"math_id": 20,
"text": "b > 10"
},
{
"math_id": 21,
"text": "n \\times 10^{50} + \\text{15AA51} = n \\times 10^{50} + (10 + \\text{A}/\\text{A})^5 + 0 + 0 + \\ldots"
},
{
"math_id": 22,
"text": "n"
},
{
"math_id": 23,
"text": "pn + q"
},
{
"math_id": 24,
"text": "p"
},
{
"math_id": 25,
"text": "q"
},
{
"math_id": 26,
"text": "b + 1"
}
] | https://en.wikipedia.org/wiki?curid=929699 |
92981 | Tensegrity | Structural design made of isolated members held in place by tension
Tensegrity, tensional integrity or floating compression is a structural principle based on a system of isolated components under compression inside a network of continuous tension, and arranged in such a way that the compressed members (usually bars or struts) do not touch each other while the prestressed tensioned members (usually cables or tendons) delineate the system spatially.
Tensegrity structures are found in both nature as well as human-made objects: in the human body, the bones are held in compression while the connective tissues are held in tension, and the same principles have been applied to furniture and architectural design and beyond.
The term was coined by Buckminster Fuller in the 1960s as a portmanteau of "tensional integrity".
Core Concept.
Tensegrity is characterized by several foundational principles that define its unique properties:
Because of these patterns, no structural member experiences a bending moment and there are no shear stresses within the system. This can produce exceptionally strong and rigid structures for their mass and for the cross section of the components.
These principles collectively enable tensegrity structures to achieve a balance of strength, resilience, and flexibility, making the concept widely applicable across disciplines including architecture, robotics, and biomechanics.
Early Example.
A conceptual building block of tensegrity is seen in the 1951 Skylon. Six cables, three at each end, hold the tower in position. The three cables connected to the bottom "define" its location. The other three cables are simply keeping it vertical.
A three-rod tensegrity structure (shown above in a spinning drawing of a T3-Prism) builds on this simpler structure: the ends of each green rod look like the top and bottom of the Skylon. As long as the angle between any two cables is smaller than 180°, the position of the rod is well defined. While three cables are the minimum required for stability, additional cables can be attached to each node for aesthetic purposes and for redundancy. For example, Snelson's Needle Tower uses a repeated pattern built using nodes that are connected to 5 cables each.
Eleanor Heartney points out visual transparency as an important aesthetic quality of these structures. Korkmaz "et al." has argued that lightweight tensegrity structures are suitable for adaptive architecture.
Applications.
Architecture.
Tensegrities saw increased application in architecture beginning in the 1960s, when Maciej Gintowt and Maciej Krasiński designed Spodek arena complex (in Katowice, Poland), as one of the first major structures to employ the principle of tensegrity. The roof uses an inclined surface held in check by a system of cables holding up its circumference. Tensegrity principles were also used in David Geiger's Seoul Olympic Gymnastics Arena (for the 1988 Summer Olympics), and the Georgia Dome (for the 1996 Summer Olympics). Tropicana Field, home of the Tampa Bay Rays major league baseball team, also has a dome roof supported by a large tensegrity structure.
On 4 October 2009, the Kurilpa Bridge opened across the Brisbane River in Queensland, Australia. A multiple-mast, cable-stay structure based on the principles of tensegrity, it is currently the world's largest tensegrity bridge.
Robotics.
Since the early 2000s, tensegrities have also attracted the interest of roboticists due to their potential for designing lightweight and resilient robots. Numerous studies have investigated tensegrity rovers, bio-mimicking robots, and modular soft robots. The most famous tensegrity robot is the Super Ball Bot, a rover for space exploration using a 6-bar tensegrity structure, currently under development at NASA Ames.
Anatomy.
Biotensegrity, a term coined by Stephen Levin, is an extended theoretical application of tensegrity principles to biological structures. Biological structures such as muscles, bones, fascia, ligaments and tendons, or rigid and elastic cell membranes, are made strong by the unison of tensioned and compressed parts. The musculoskeletal system consists of a continuous network of muscles and connective tissues, while the bones provide discontinuous compressive support and the nervous system maintains tension in vivo through electrical stimulus. Levin claims that the human spine is also a tensegrity structure, although there is no support for this theory from a structural perspective.
Biochemistry.
Donald E. Ingber has developed a theory of tensegrity to describe numerous phenomena observed in molecular biology. For instance, the expressed shapes of cells, whether it be their reactions to applied pressure, interactions with substrates, etc., all can be mathematically modelled by representing the cell's cytoskeleton as a tensegrity. Furthermore, geometric patterns found throughout nature (the helix of DNA, the geodesic dome of a volvox, Buckminsterfullerene, and more) may also be understood based on applying the principles of tensegrity to the spontaneous self-assembly of compounds, proteins, and even organs. This view is supported by how the tension-compression interactions of tensegrity minimize material needed to maintain stability and achieve structural resiliency, although the comparison with inert materials within a biological framework has no widely accepted premise within physiological science. Therefore, natural selection pressures would likely favor biological systems organized in a tensegrity manner.
As Ingber explains:
<templatestyles src="Template:Blockquote/styles.css" />The tension-bearing members in these structures – whether Fuller's domes or Snelson's sculptures – map out the shortest paths between adjacent members (and are therefore, by definition, arranged geodesically). Tensional forces naturally transmit themselves over the shortest distance between two points, so the members of a tensegrity structure are precisely positioned to best withstand stress. For this reason, tensegrity structures offer a maximum amount of strength.
In embryology, Richard Gordon proposed that embryonic differentiation waves are propagated by an 'organelle of differentiation' where the cytoskeleton is assembled in a bistable tensegrity structure at the apical end of cells called the 'cell state splitter'.
Origins and art history.
The origins of tensegrity are controversial. Many traditional structures, such as skin-on-frame kayaks and shōji, use tension and compression elements in a similar fashion.
Russian artist Viatcheslav Koleichuk claimed that the idea of tensegrity was invented first by Kārlis Johansons (known in Russian by the Germanized form of his name, Karl Ioganson), a Soviet avant-garde artist of Latvian descent, who contributed some works to the main exhibition of Russian constructivism in 1921. Koleichuk's claim was backed up by Maria Gough for one of the works at the 1921 constructivist exhibition. Snelson has acknowledged the constructivists as an influence for his work. French engineer David Georges Emmerich has also noted how Kārlis Johansons's work (and industrial design ideas) seemed to foresee tensegrity concepts.
Published research supports this claim, showing images of the first simplex structures (made with 3 bars and 9 tendons) developed by Ioganson.
In 1948, artist Kenneth Snelson produced his innovative "X-Piece" after artistic explorations at Black Mountain College (where Buckminster Fuller was lecturing) and elsewhere. Some years later, the term "tensegrity" was coined by Fuller, who is best known for his geodesic domes. Throughout his career, Fuller had experimented with incorporating tensile components in his work, such as in the framing of his dymaxion houses.
Snelson's 1948 innovation spurred Fuller to immediately commission a mast from Snelson. In 1949, Fuller developed a tensegrity-icosahedron based on the technology, and he and his students quickly developed further structures and applied the technology to building domes. After a hiatus, Snelson also went on to produce a plethora of sculptures based on tensegrity concepts. His main body of work began in 1959 when a pivotal exhibition at the Museum of Modern Art took place. At the MOMA exhibition, Fuller had shown the mast and some of his other work. At this exhibition, Snelson, after a discussion with Fuller and the exhibition organizers regarding credit for the mast, also displayed some work in a vitrine.
Snelson's best-known piece is his 26.5-meter-high (87 ft) "Needle Tower" of 1968.
Mathematics of Tensegrity.
The loading of at least some tensegrity structures causes an auxetic response and negative Poisson ratio, e.g. the T3-prism and 6-strut tensegrity icosahedron.
Tensegrity prisms.
The three-rod tensegrity structure (3-way prism) has the property that, for a given (common) length of compression member "rod" (there are three total) and a given (common) length of tension cable "tendon" (six total) connecting the rod ends together, there is a particular value for the (common) length of the tendon connecting the rod tops with the neighboring rod bottoms that causes the structure to hold a stable shape. For such a structure, it is straightforward to prove that the triangle formed by the rod tops and that formed by the rod bottoms are rotated with respect to each other by an angle of 5π/6 (radians).
The stability ("prestressability") of several 2-stage tensegrity structures are analyzed by Sultan, et al.
The T3-prism (also known as Triplex) can be obtained through form finding of a straight triangular prism. Its self-equilibrium state is given when the base triangles are in parallel planes separated by an angle of twist of π/6. The formula for its unique self-stress state is given by,formula_0Here, the first three negative values correspond to the inner components in compression, while the rest correspond to the cables in tension.
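One hedged way to reproduce the twist angle numerically is to hold the cable lengths fixed and look for the twist at which the struts reach their greatest length, a common way of locating the self-equilibrium of a prism with struts in compression and cables in tension. The sketch below assumes equal circumradii for the two triangles, "saddle" cables joining bottom vertex i to top vertex i, and struts joining bottom vertex i to top vertex i+1; with that labelling the cabled vertices end up twisted by π/6, equivalently 5π/6 between a vertex and the one its strut reaches (the two descriptions differ by the triangle's own 2π/3 symmetry).

```python
import numpy as np

r, c = 1.0, 1.6                                   # circumradius and saddle-cable length (assumed)
theta = np.linspace(0.01, np.pi / 2, 200001)      # twist between bottom vertex i and top vertex i

h_sq = c ** 2 - 2 * r ** 2 * (1 - np.cos(theta))                    # height fixed by the cable
strut_sq = h_sq + 2 * r ** 2 * (1 - np.cos(theta + 2 * np.pi / 3))  # strut: bottom i -> top i+1

best = theta[np.argmax(strut_sq)]
print(np.degrees(best), np.degrees(best) + 120.0)  # ~30 and ~150 degrees (= pi/6 and 5*pi/6)
```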
Tensegrity icosahedra.
The tensegrity icosahedron, first studied by Snelson in 1949, has struts and tendons along the edges of a polyhedron called Jessen's icosahedron. It is a stable construction, albeit with infinitesimal mobility. To see this, consider a cube of side length 2"d", centered at the origin. Place a strut of length 2"l" in the plane of each cube face, such that each strut is parallel to one edge of the face and is centered on the face. Moreover, each strut should be parallel to the strut on the opposite face of the cube, but orthogonal to all other struts. If the Cartesian coordinates of one strut are (0, "d", "l") and (0, "d", −"l"), those of its parallel strut will be, respectively, (0, −"d", −"l") and (0, −"d", "l"). The coordinates of the other strut ends (vertices) are obtained by permuting the coordinates, e.g., ("d", "l", 0) (rotational symmetry in the main diagonal of the cube).
The distance "s" between any two neighboring vertices (0, "d", "l") and ("d", "l", 0) is
formula_1
Imagine this figure built from struts of given length 2"l" and tendons (connecting neighboring vertices) of given length "s", with formula_2. The relation tells us there are two possible values for "d": one realized by pushing the struts together, the other by pulling them apart. In the particular case formula_3 the two extremes coincide, and formula_4, therefore the figure is the stable tensegrity icosahedron. This choice of parameters gives the vertices the positions of Jessen's icosahedron; they are different from the regular icosahedron, for which the ratio of formula_5 and formula_6 would be the golden ratio, rather than 2. However both sets of coordinates lie along a continuous family of positions ranging from the cuboctahedron to the octahedron (as limit cases), which are linked by a helical contractive/expansive transformation. This kinematics of the cuboctahedron is the "geometry of motion" of the tensegrity icosahedron. It was first described by H. S. M. Coxeter and later called the "jitterbug transformation" by Buckminster Fuller.
Since the tensegrity icosahedron represents an extremal point of the above relation, it has infinitesimal mobility: a small change in the length "s" of the tendon (e.g. by stretching the tendons) results in a much larger change of the distance 2"d" of the struts.
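The infinitesimal mobility can also be seen numerically from the relation above (a small sketch with l set to 1): solving for d shows that a small relative stretch of the tendons produces a much larger relative displacement of the struts near the extremal point d = l/2.

```python
import numpy as np

l = 1.0
s_min = np.sqrt(1.5) * l            # tendon length at the extremal configuration
d_star = 0.5 * l                    # corresponding strut offset

for rel in (1e-2, 1e-4, 1e-6):      # relative stretch of the tendons
    s = s_min * (1.0 + rel)
    d = d_star + np.sqrt((s ** 2 - 1.5 * l ** 2) / 2.0)   # the "pulled apart" branch
    print(f"stretch {rel:.0e}  ->  |d - l/2| / (l/2) = {abs(d - d_star) / d_star:.4f}")
# the strut displacement scales like the square root of the stretch, i.e. much faster
```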
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\omega = \\omega_1 [-\\sqrt{3}, -\\sqrt{3}, -\\sqrt{3}, \\sqrt{3}, \\sqrt{3}, \\sqrt{3}, 1, 1, 1, 1, 1, 1]^T"
},
{
"math_id": 1,
"text": "s^2 = (d - l)^2 + d^2 + l^2 = 2\\left(d - \\frac{1}{2} \\,l\\right)^2 + \\frac{3}{2} \\,l^2"
},
{
"math_id": 2,
"text": "s > \\sqrt\\frac{3}{2}\\,l"
},
{
"math_id": 3,
"text": "s = \\sqrt\\frac{3}{2}\\,l"
},
{
"math_id": 4,
"text": "d = \\frac{1}{2}\\,l"
},
{
"math_id": 5,
"text": "d"
},
{
"math_id": 6,
"text": "l"
}
] | https://en.wikipedia.org/wiki?curid=92981 |
929870 | Bishop–Gromov inequality | On volumes in complete Riemannian n-manifolds whose Ricci curvature has a lower bound
In mathematics, the Bishop–Gromov inequality is a comparison theorem in Riemannian geometry, named after Richard L. Bishop and Mikhail Gromov. It is closely related to Myers' theorem, and is the key point in the proof of Gromov's compactness theorem.
Statement.
Let formula_0 be a complete "n"-dimensional Riemannian manifold whose Ricci curvature satisfies the lower bound
formula_1
for a constant formula_2. Let formula_3 be the complete "n"-dimensional simply connected space of constant sectional curvature formula_4 (and hence of constant Ricci curvature formula_5); thus formula_3 is the "n"-sphere of radius formula_6 if formula_7, or "n"-dimensional Euclidean space if formula_8, or an appropriately rescaled version of "n"-dimensional hyperbolic space if formula_9. Denote by formula_10 the ball of radius "r" around a point "p", defined with respect to the Riemannian distance function.
Then, for any formula_11 and formula_12, the function
formula_13
is non-increasing on formula_14.
As "r" goes to zero, the ratio approaches one, so together with the monotonicity this implies that
formula_15
This is the version first proved by Bishop.
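A concrete numerical illustration (not part of the theorem's statement): take formula_0 to be the unit 2-sphere, whose Ricci curvature equals 1, so formula_4 = 0 is an admissible lower bound, and compare with the Euclidean plane. Geodesic balls have area 2π(1 − cos "r") on the sphere and π"r"² in the plane, and the sketch below checks that their ratio is non-increasing and tends to 1 as "r" → 0.

```python
# Sketch: monotonicity of phi(r) = Vol B(p, r) / Vol B(p_K, r) for M = unit 2-sphere
# (Ric = 1, so K = 0 is an admissible lower bound) against the Euclidean comparison
# space, using the closed forms 2*pi*(1 - cos r) and pi*r**2.
import numpy as np

r = np.linspace(1e-3, np.pi, 1000)
phi = (2 * np.pi * (1 - np.cos(r))) / (np.pi * r**2)

print(phi[0])                      # ~1 as r -> 0
print(np.all(np.diff(phi) <= 0))   # True: the ratio is non-increasing
print(phi[-1])                     # 4/pi**2 < 1, consistent with Vol B(p,r) <= Vol B(p_K,r)
```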
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "\\mathrm{Ric} \\geq (n-1) K"
},
{
"math_id": 2,
"text": "K\\in \\R"
},
{
"math_id": 3,
"text": "M_K^n"
},
{
"math_id": 4,
"text": "K"
},
{
"math_id": 5,
"text": "(n-1)K"
},
{
"math_id": 6,
"text": "1/\\sqrt{K}"
},
{
"math_id": 7,
"text": "K>0"
},
{
"math_id": 8,
"text": "K=0"
},
{
"math_id": 9,
"text": "K<0"
},
{
"math_id": 10,
"text": "B(p,r)"
},
{
"math_id": 11,
"text": "p\\in M"
},
{
"math_id": 12,
"text": "p_K\\in M_K^n"
},
{
"math_id": 13,
"text": " \\phi(r) = \\frac{\\mathrm{Vol} \\, B(p,r)}{\\mathrm{Vol}\\, B(p_K,r)} "
},
{
"math_id": 14,
"text": "(0,\\infty)"
},
{
"math_id": 15,
"text": "\\mathrm{Vol} \\,B(p,r) \\leq \\mathrm{Vol} \\, B(p_K,r)."
}
] | https://en.wikipedia.org/wiki?curid=929870 |
929925 | Normal extension | Algebraic field extension
In abstract algebra, a normal extension is an algebraic field extension "L"/"K" for which every irreducible polynomial over "K" that has a root in "L" splits into linear factors in "L". This is one of the conditions for an algebraic extension to be a Galois extension. Bourbaki calls such an extension a quasi-Galois extension. For finite extensions, a normal extension is identical to a splitting field.
Definition.
Let "formula_0" be an algebraic extension (i.e., "L" is an algebraic extension of "K"), such that formula_1 (i.e., "L" is contained in an algebraic closure of "K"). Then the following conditions, any of which can be regarded as a definition of normal extension, are equivalent:
Other properties.
Let "L" be an extension of a field "K". Then:
Equivalent conditions for normality.
Let formula_0 be algebraic. The field "L" is a normal extension if and only if any of the equivalent conditions below hold.
Examples and counterexamples.
For example, formula_8 is a normal extension of formula_9 since it is a splitting field of formula_10 On the other hand, formula_11 is not a normal extension of formula_12 since the irreducible polynomial formula_13 has one root in it (namely, formula_14), but not all of them (it does not have the non-real cubic roots of 2). Recall that the field formula_15 of algebraic numbers is the algebraic closure of formula_9 and thus it contains formula_16 Let formula_17 be a primitive cubic root of unity. Then since,
formula_18
the map
formula_19
is an embedding of formula_11 in formula_15 whose restriction to formula_20 is the identity. However, formula_21 is not an automorphism of formula_22
For any prime formula_23 the extension formula_24 is normal of degree formula_25 It is a splitting field of formula_26 Here formula_27 denotes any primitive formula_28th root of unity. The field formula_29 is the normal closure (see below) of formula_22
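The splitting behaviour in these examples can be checked with a computer algebra system. The sketch below uses SymPy, assuming its "factor" function with the "extension" keyword for algebraic number fields: "x"² − 2 splits completely over formula_8, while formula_13 acquires only one linear factor over formula_11.

```python
# Sketch using SymPy (the `extension` keyword of `factor` is assumed to be available).
from sympy import symbols, factor, sqrt, cbrt

x = symbols('x')

print(factor(x**2 - 2, extension=sqrt(2)))
# (x - sqrt(2))*(x + sqrt(2))  -> splits into linear factors: Q(sqrt(2))/Q is normal

print(factor(x**3 - 2, extension=cbrt(2)))
# (x - 2**(1/3))*(x**2 + 2**(1/3)*x + 2**(2/3))  -> an irreducible quadratic remains
```

Adjoining a primitive cube root of unity as well, i.e. passing to the normal closure formula_29, would make the cubic split completely.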
Normal closure.
If "K" is a field and "L" is an algebraic extension of "K", then there is some algebraic extension "M" of "L" such that "M" is a normal extension of "K". Furthermore, up to isomorphism there is only one such extension that is minimal, that is, the only subfield of "M" that contains "L" and that is a normal extension of "K" is "M" itself. This extension is called the normal closure of the extension "L" of "K".
If "L" is a finite extension of "K", then its normal closure is also a finite extension.
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L/K"
},
{
"math_id": 1,
"text": "L\\subseteq \\overline{K}"
},
{
"math_id": 2,
"text": "\\overline{K}"
},
{
"math_id": 3,
"text": "K[X]"
},
{
"math_id": 4,
"text": "S \\subseteq K[x]"
},
{
"math_id": 5,
"text": "K\\subseteq F\\subsetneq L"
},
{
"math_id": 6,
"text": "L \\to \\bar{K}"
},
{
"math_id": 7,
"text": "\\text{Aut}(L/K),"
},
{
"math_id": 8,
"text": "\\Q(\\sqrt{2})"
},
{
"math_id": 9,
"text": "\\Q,"
},
{
"math_id": 10,
"text": "x^2-2."
},
{
"math_id": 11,
"text": "\\Q(\\sqrt[3]{2})"
},
{
"math_id": 12,
"text": "\\Q"
},
{
"math_id": 13,
"text": "x^3-2"
},
{
"math_id": 14,
"text": "\\sqrt[3]{2}"
},
{
"math_id": 15,
"text": "\\overline{\\Q}"
},
{
"math_id": 16,
"text": "\\Q(\\sqrt[3]{2})."
},
{
"math_id": 17,
"text": "\\omega"
},
{
"math_id": 18,
"text": "\\Q (\\sqrt[3]{2})=\\left. \\left \\{a+b\\sqrt[3]{2}+c\\sqrt[3]{4}\\in\\overline{\\Q }\\,\\,\\right | \\,\\,a,b,c\\in\\Q \\right \\}"
},
{
"math_id": 19,
"text": "\\begin{cases} \\sigma:\\Q (\\sqrt[3]{2})\\longrightarrow\\overline{\\Q}\\\\ a+b\\sqrt[3]{2}+c\\sqrt[3]{4}\\longmapsto a+b\\omega\\sqrt[3]{2}+c\\omega^2\\sqrt[3]{4}\\end{cases}"
},
{
"math_id": 20,
"text": "\\Q "
},
{
"math_id": 21,
"text": "\\sigma"
},
{
"math_id": 22,
"text": "\\Q (\\sqrt[3]{2})."
},
{
"math_id": 23,
"text": "p,"
},
{
"math_id": 24,
"text": "\\Q (\\sqrt[p]{2}, \\zeta_p)"
},
{
"math_id": 25,
"text": "p(p-1)."
},
{
"math_id": 26,
"text": "x^p - 2."
},
{
"math_id": 27,
"text": "\\zeta_p"
},
{
"math_id": 28,
"text": "p"
},
{
"math_id": 29,
"text": "\\Q (\\sqrt[3]{2}, \\zeta_3)"
}
] | https://en.wikipedia.org/wiki?curid=929925 |
9299409 | Nucleic acid thermodynamics | Study of how temperature affects the nucleic acid structure
Nucleic acid thermodynamics is the study of how temperature affects the nucleic acid structure of double-stranded DNA (dsDNA). The melting temperature ("Tm") is defined as the temperature at which half of the DNA strands are in the random coil or single-stranded (ssDNA) state. "Tm" depends on the length of the DNA molecule and its specific nucleotide sequence. DNA, when in a state where its two strands are dissociated (i.e., the dsDNA molecule exists as two independent strands), is referred to as having been denatured by the high temperature.
Concepts.
Hybridization.
Hybridization is the process of establishing a non-covalent, sequence-specific interaction between two or more complementary strands of nucleic acids into a single complex, which in the case of two strands is referred to as a duplex. Oligonucleotides, DNA, or RNA will bind to their complement under normal conditions, so two perfectly complementary strands will bind to each other readily. In order to reduce the diversity and obtain the most energetically preferred complexes, a technique called annealing is used in laboratory practice. However, due to the different molecular geometries of the nucleotides, a single inconsistency between the two strands will make binding between them less energetically favorable. Measuring the effects of base incompatibility by quantifying the temperature at which two strands anneal can provide information as to the similarity in base sequence between the two strands being annealed. The complexes may be dissociated by thermal denaturation, also referred to as melting. In the absence of external negative factors, the processes of hybridization and melting may be repeated in succession indefinitely, which lays the ground for polymerase chain reaction. Most commonly, the pairs of nucleic bases A=T and G≡C are formed, of which the latter is more stable.
Denaturation.
DNA denaturation, also called DNA melting, is the process by which double-stranded deoxyribonucleic acid unwinds and separates into two single strands through the breaking of hydrophobic stacking attractions between the bases. See Hydrophobic effect. Both terms are used to refer to the process as it occurs when a mixture is heated, although "denaturation" can also refer to the separation of DNA strands induced by chemicals like formamide or urea.
The process of DNA denaturation can be used to analyze some aspects of DNA. Because cytosine / guanine base-pairing is generally stronger than adenine / thymine base-pairing, the amount of cytosine and guanine in a genome is called its GC-content and can be estimated by measuring the temperature at which the genomic DNA melts. Higher temperatures are associated with high GC content.
DNA denaturation can also be used to detect sequence differences between two different DNA sequences. DNA is heated and denatured into single-stranded state, and the mixture is cooled to allow strands to rehybridize. Hybrid molecules are formed between similar sequences and any differences between those sequences will result in a disruption of the base-pairing. On a genomic scale, the method has been used by researchers to estimate the genetic distance between two species, a process known as DNA-DNA hybridization. In the context of a single isolated region of DNA, denaturing gradient gels and temperature gradient gels can be used to detect the presence of small mismatches between two sequences, a process known as temperature gradient gel electrophoresis.
Methods of DNA analysis based on melting temperature have the disadvantage of being proxies for studying the underlying sequence; DNA sequencing is generally considered a more accurate method.
The process of DNA melting is also used in molecular biology techniques, notably in the polymerase chain reaction. Although the temperature of DNA melting is not diagnostic in the technique, methods for estimating "Tm" are important for determining the appropriate temperatures to use in a protocol. DNA melting temperatures can also be used as a proxy for equalizing the hybridization strengths of a set of molecules, e.g. the oligonucleotide probes of DNA microarrays.
Annealing.
Annealing, in genetics, means for complementary sequences of single-stranded DNA or RNA to pair by hydrogen bonds to form a double-stranded polynucleotide. Before annealing can occur, one of the strands may need to be phosphorylated by an enzyme such as kinase to allow proper hydrogen bonding to occur. The term annealing is often used to describe the binding of a DNA probe, or the binding of a primer to a DNA strand during a polymerase chain reaction. The term is also often used to describe the reformation (renaturation) of reverse-complementary strands that were separated by heat (thermally denatured). Proteins such as RAD52 can help DNA anneal. DNA strand annealing is a key step in pathways of homologous recombination. In particular, during meiosis, synthesis-dependent strand annealing is a major pathway of homologous recombination.
Stacking.
Stacking is the stabilizing interaction between the flat surfaces of adjacent bases. Stacking can happen with any face of the base, that is 5'-5', 3'-3', and vice versa.
Stacking in "free" nucleic acid molecules is contributed mainly by intermolecular forces, specifically electrostatic attraction among aromatic rings, a process also known as pi stacking. For biological systems with water as a solvent, the hydrophobic effect contributes and helps in the formation of a helix. Stacking is the main stabilizing factor in the DNA double helix.
Contribution of stacking to the free energy of the molecule can be experimentally estimated by observing the bent-stacked equilibrium in nicked DNA. Such stabilization is dependent on the sequence. The extent of the stabilization varies with salt concentrations and temperature.
Thermodynamics of the two-state model.
Several formulas are used to calculate "Tm" values. Some formulas are more accurate in predicting melting temperatures of DNA duplexes. For DNA oligonucleotides, i.e. short sequences of DNA, the thermodynamics of hybridization can be accurately described as a two-state process. In this approximation one neglects the possibility of intermediate partial binding states in the formation of a double strand state from two single stranded oligonucleotides. Under this assumption one can elegantly describe the thermodynamic parameters for forming double-stranded nucleic acid AB from single-stranded nucleic acids A and B.
AB ↔ A + B
The equilibrium constant for this reaction is formula_0. According to the van 't Hoff equation, the relation between free energy, Δ"G", and "K" is Δ"G"° = −"RT" ln "K", where "R" is the ideal gas constant and "T" is the absolute temperature (in kelvins) of the reaction. This gives, for the nucleic acid system,
formula_1.
The melting temperature, "T"m, occurs when half of the double-stranded nucleic acid has dissociated. If no additional nucleic acids are present, then [A], [B], and [AB] will be equal, and equal to half the initial concentration of double-stranded nucleic acid, [AB]initial. This gives an expression for the melting point of a nucleic acid duplex of
formula_2.
Because Δ"G"° = Δ"H"° -"T"Δ"S"°, "T"m is also given by
formula_3.
The terms Δ"H"° and Δ"S"° are usually given for the association and not the dissociation reaction (see the nearest-neighbor method for example). This formula then turns into:
formula_4, where [B]total ≤ [A]total.
As mentioned, this equation is based on the assumption that only two states are involved in melting: the double stranded state and the random-coil state. However, nucleic acids may melt via several intermediate states. To account for such complicated behavior, the methods of statistical mechanics must be used, which is especially relevant for long sequences.
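As a worked illustration of the last formula, the sketch below evaluates "T"m for the association form of the two-state model. The enthalpy, entropy, and strand concentrations used are placeholders chosen only to give a plausible order of magnitude, not measured values for any particular duplex.

```python
# Sketch of the two-state melting-temperature formula given above (association form).
# dH, dS and the concentrations are illustrative placeholders, not measured values.
import math

R = 8.314            # J/(mol K)
dH = -300e3          # J/mol, association enthalpy (placeholder)
dS = -800.0          # J/(mol K), association entropy (placeholder)

def melting_temperature(a_total, b_total):
    """T_m = dH / (dS + R*ln([A]_total - [B]_total/2)), with [A]_total >= [B]_total,
    concentrations in mol/L."""
    return dH / (dS + R * math.log(a_total - b_total / 2))

tm = melting_temperature(1e-6, 1e-6)       # equal 1 uM strand concentrations
print(tm, tm - 273.15)                     # kelvin and Celsius
```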
Estimating thermodynamic properties from nucleic acid sequence.
The previous paragraph shows how melting temperature and thermodynamic parameters (Δ"G"° or Δ"H"° & Δ"S"°) are related to each other. From the observation of melting temperatures one can experimentally determine the thermodynamic parameters. Vice versa, and important for applications, when the thermodynamic parameters of a given nucleic acid sequence are known, the melting temperature can be predicted. It turns out that for oligonucleotides, these parameters can be well approximated by the nearest-neighbor model.
Nearest-neighbor method.
The interaction between bases on different strands depends somewhat on the neighboring bases. Instead of treating a DNA helix as a string of interactions between base pairs, the nearest-neighbor model treats a DNA helix as a string of interactions between 'neighboring' base pairs. So, for example, the DNA shown below has nearest-neighbor interactions indicated by the arrows.
codice_0
codice_1
codice_2
The free energy of forming this DNA from the individual strands, Δ"G"°, is represented (at 37 °C) as
Δ"G"°37(predicted) = Δ"G"°37(C/G initiation) + Δ"G"°37(CG/GC) + Δ"G"°37(GT/CA) + Δ"G"°37(TT/AA) + Δ"G"°37(TG/AC) + Δ"G"°37(GA/CT) + Δ"G"°37(A/T initiation)
The first term, the C/G initiation term, represents the free energy of the first base pair, CG, in the absence of a nearest neighbor. The second term includes both the free energy of formation of the second base pair, GC, and the stacking interaction between this base pair and the previous base pair. The remaining terms are similarly defined. In general, the free energy of forming a nucleic acid duplex is
formula_5,
where formula_6 represents the free energy associated with one of the ten possible nearest-neighbor nucleotide pairs, and formula_7 represents its count in the sequence.
Each Δ"G"° term has enthalpic, Δ"H"°, and entropic, Δ"S"°, parameters, so the change in free energy is also given by
formula_8.
Values of Δ"H"° and Δ"S"° have been determined for the ten possible pairs of interactions. These are given in Table 1, along with the value of Δ"G"° calculated at 37 °C. Using these values, the value of Δ"G"37° for the DNA duplex shown above is calculated to be −22.4 kJ/mol. The experimental value is −21.8 kJ/mol.
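The structure of this calculation is easy to express programmatically. In the sketch below, the duplex of the worked example is read as 5'-CGTTGA-3' paired with its complement (as the listed terms suggest), and the parameter dictionaries are placeholders standing in for the Table 1 entries, so the printed number reproduces the form of the sum but not the −22.4 kJ/mol value quoted above.

```python
# Skeleton of the nearest-neighbor free-energy sum.  The numbers below are placeholders
# to be replaced by the Table 1 entries (in kJ/mol); they are NOT published parameters,
# so the output is only structurally meaningful.
NN_DG37 = {            # stacking terms, keyed by the 5'->3' top-strand dinucleotide
    "CG": -9.0, "GT": -6.0, "TT": -4.0, "TG": -6.0, "GA": -6.0,   # placeholders
}
INIT_DG37 = {"C": 4.0, "G": 4.0, "A": 4.0, "T": 4.0}              # placeholders

def duplex_dG37(top_strand):
    """Sum initiation terms for the two terminal base pairs plus one stacking term
    per adjacent pair of base pairs, read off the 5'->3' top strand."""
    dg = INIT_DG37[top_strand[0]] + INIT_DG37[top_strand[-1]]
    for i in range(len(top_strand) - 1):
        dg += NN_DG37[top_strand[i:i + 2]]
    return dg

print(duplex_dG37("CGTTGA"))   # same term structure as the worked example above
```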
The parameters associated with the ten groups of neighbors shown in table 1 are determined from melting points of short oligonucleotide duplexes. It works out that only eight of the ten groups are independent.
The nearest-neighbor model can be extended beyond the Watson-Crick pairs to include parameters for interactions between mismatches and neighboring base pairs. This allows the estimation of the thermodynamic parameters of sequences containing isolated mismatches, like e.g. (arrows indicating mismatch)
codice_3
codice_4
codice_5
These parameters have been fitted from melting experiments and an extension of Table 1 which includes mismatches can be found in literature.
A more realistic way of modeling the behavior of nucleic acids would seem to be to have parameters that depend on the neighboring groups on both sides of a nucleotide, giving a table with entries like "TCG/AGC". However, this would involve around 32 groups for Watson-Crick pairing and even more for sequences containing mismatches; the number of DNA melting experiments needed to get reliable data for so many groups would be inconveniently high. Nevertheless, other means exist to access thermodynamic parameters of nucleic acids: microarray technology allows hybridization monitoring of tens of thousands of sequences in parallel. These data, in combination with molecular adsorption theory, allow the determination of many thermodynamic parameters in a single experiment and go beyond the nearest-neighbor model. In general, the predictions from the nearest-neighbor method agree reasonably well with experimental results, but some unexpected outlying sequences, calling for further insight, do exist. Finally, single-molecule unzipping assays provide increased accuracy and a wealth of new insight into the thermodynamics of DNA hybridization and the validity of the nearest-neighbor model.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K=\\frac{[A][B]}{[AB]}"
},
{
"math_id": 1,
"text": "\\Delta G^\\circ = -RT\\ln\\frac{[A][B]}{[AB]}"
},
{
"math_id": 2,
"text": "T_m = -\\frac{\\Delta G^\\circ}{R\\ln\\frac{[AB]_{initial}}{2}}"
},
{
"math_id": 3,
"text": "T_m = \\frac{\\Delta H^\\circ}{\\Delta S^\\circ-R\\ln\\frac{[AB]_{initial}}{2}}"
},
{
"math_id": 4,
"text": "T_m = \\frac{\\Delta H^\\circ}{\\Delta S^\\circ+R\\ln([A]_{total} - [B]_{total}/2)}"
},
{
"math_id": 5,
"text": "\\Delta G_{37}^\\circ (\\mathrm{total}) = \\Delta G_{37}^\\circ (\\mathrm{initiations}) + \\sum_{i=1}^{10} n_i \\Delta G_{37}^\\circ (i)"
},
{
"math_id": 6,
"text": "\\Delta G_{37}^\\circ (i)"
},
{
"math_id": 7,
"text": "n_i"
},
{
"math_id": 8,
"text": "\\Delta G^\\circ (\\mathrm{total}) = \\Delta H_{\\mathrm{total}}^\\circ - T\\Delta S_{\\mathrm{total}}^\\circ"
}
] | https://en.wikipedia.org/wiki?curid=9299409 |
929998 | Structural proof theory | In mathematical logic, structural proof theory is the subdiscipline of proof theory that studies proof calculi that support a notion of analytic proof, a kind of proof whose semantic properties are exposed. When all the theorems of a logic formalised in a structural proof theory have analytic proofs, then the proof theory can be used to demonstrate such things as consistency, provide decision procedures, and allow mathematical or computational witnesses to be extracted as counterparts to theorems, the kind of task that is more often given to model theory.
Analytic proof.
The notion of analytic proof was introduced into proof theory by Gerhard Gentzen for the sequent calculus; the analytic proofs are those that are cut-free. His natural deduction calculus also supports a notion of analytic proof, as was shown by Dag Prawitz; the definition is slightly more complex—the analytic proofs are the normal forms, which are related to the notion of normal form in term rewriting.
Structures and connectives.
The term "structure" in structural proof theory comes from a technical notion introduced in the sequent calculus: the sequent calculus represents the judgement made at any stage of an inference using special, extra-logical operators called structural operators: in formula_0, the commas to the left of the turnstile are operators normally interpreted as conjunctions, those to the right as disjunctions, whilst the turnstile symbol itself is interpreted as an implication. However, it is important to note that there is a fundamental difference in behaviour between these operators and the logical connectives they are interpreted by in the sequent calculus: the structural operators are used in every rule of the calculus, and are not considered when asking whether the subformula property applies. Furthermore, the logical rules go one way only: logical structure is introduced by logical rules, and cannot be eliminated once created, while structural operators can be introduced and eliminated in the course of a derivation.
The idea of looking at the syntactic features of sequents as special, non-logical operators is not old, and was forced by innovations in proof theory: when the structural operators are as simple as in Gentzen's original sequent calculus there is little need to analyse them, but proof calculi of deep inference such as display logic (introduced by Nuel Belnap in 1982) support structural operators as complex as the logical connectives, and demand sophisticated treatment.
Hypersequents.
The hypersequent framework extends the ordinary sequent structure to a multiset of sequents, using an additional structural connective | (called the hypersequent bar) to separate different sequents. It has been used to provide analytic calculi for, e.g., modal, intermediate and substructural logics. A hypersequent is a structure
formula_1
where each formula_2 is an ordinary sequent, called a component of the hypersequent. As for sequents, hypersequents can be based on sets, multisets, or sequences, and the components can be single-conclusion or multi-conclusion sequents. The formula interpretation of the hypersequents depends on the logic under consideration, but is nearly always some form of disjunction. The most common interpretations are as a simple disjunction
formula_3
for intermediate logics, or as a disjunction of boxes
formula_4
for modal logics.
In line with the disjunctive interpretation of the hypersequent bar, essentially all hypersequent calculi include the external structural rules, in particular the external weakening rule
formula_5
and the external contraction rule
formula_6
The additional expressivity of the hypersequent framework is provided by rules manipulating the hypersequent structure. An important example is provided by the modalised splitting rule
formula_7
for modal logic S5, where formula_8 means that every formula in formula_9 is of the form formula_10.
Another example is given by the communication rule for the intermediate logic LC
formula_11
Note that in the communication rule the components are single-conclusion sequents.
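A minimal sketch of the hypersequent data structure and of the two external structural rules above (illustrative only; formulas are represented as strings and all names are invented for the example):

```python
# Sketch (not from the source): hypersequents as tuples of sequents, with external
# weakening and external contraction implemented as operations on the tuple.
from typing import Tuple, FrozenSet

Sequent = Tuple[FrozenSet[str], FrozenSet[str]]        # (antecedent Gamma, succedent Delta)
Hypersequent = Tuple[Sequent, ...]                     # components separated by "|"

def external_weakening(h: Hypersequent, component: Sequent) -> Hypersequent:
    """Add a fresh component Sigma |- Pi to the hypersequent."""
    return h + (component,)

def external_contraction(h: Hypersequent, i: int, j: int) -> Hypersequent:
    """Merge two identical components into one (requires h[i] == h[j], i < j)."""
    assert h[i] == h[j]
    return h[:j] + h[j + 1:]

g = (frozenset({"A"}), frozenset({"B"}))               # the sequent A |- B
h: Hypersequent = (g, g)                               # A |- B | A |- B
print(external_contraction(h, 0, 1))                   # A |- B
print(external_weakening(h, (frozenset(), frozenset({"C"}))))   # ... | |- C
```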
Nested sequent calculus.
The nested sequent calculus is a formalisation that resembles a 2-sided calculus of structures.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A_1, \\dots, A_m \\vdash B_1, \\dots, B_n"
},
{
"math_id": 1,
"text": "\\Gamma_1 \\vdash \\Delta_1 \\mid \\dots \\mid \\Gamma_n \\vdash \\Delta_n"
},
{
"math_id": 2,
"text": "\\Gamma_i \\vdash\\Delta_i"
},
{
"math_id": 3,
"text": "(\\bigwedge\\Gamma_1 \\rightarrow \\bigvee \\Delta_1) \\lor \\dots \\lor (\\bigwedge\\Gamma_n \\rightarrow \\bigvee \\Delta_n)"
},
{
"math_id": 4,
"text": "\\Box(\\bigwedge\\Gamma_1 \\rightarrow\\bigvee \\Delta_1) \\lor \\dots \\lor \\Box(\\bigwedge\\Gamma_n \\rightarrow \\bigvee\\Delta_n)"
},
{
"math_id": 5,
"text": "\\frac{\\Gamma_1 \\vdash \\Delta_1 \\mid \\dots \\mid \\Gamma_n \\vdash \\Delta_n}{\\Gamma_1 \\vdash \\Delta_1 \\mid \\dots \\mid \\Gamma_n \\vdash \\Delta_n \\mid \\Sigma \\vdash \\Pi}"
},
{
"math_id": 6,
"text": "\\frac{\\Gamma_1\\vdash \\Delta_1 \\mid \\dots \\mid\\Gamma_n \\vdash \\Delta_n \\mid \\Gamma_n \\vdash \\Delta_n}{\\Gamma_1\\vdash \\Delta_1 \\mid \\dots \\mid\\Gamma_n \\vdash \\Delta_n}"
},
{
"math_id": 7,
"text": "\\frac{\\Gamma_1 \\vdash \\Delta_1 \\mid \\dots \\mid \\Gamma_n \\vdash \\Delta_n \\mid \\Box \\Sigma, \\Omega \\vdash \\Box \\Pi, \\Theta}{\\Gamma_1 \\vdash \\Delta_1 \\mid \\dots \\mid \\Gamma_n \\vdash \\Delta_n \\mid \\Box \\Sigma \\vdash \\Box \\Pi \\mid \\Omega \\vdash \\Theta}"
},
{
"math_id": 8,
"text": "\\Box \\Sigma"
},
{
"math_id": 9,
"text": "\\Box\\Sigma"
},
{
"math_id": 10,
"text": "\\Box A"
},
{
"math_id": 11,
"text": "\\frac{\\Gamma_1 \\vdash \\Delta_1 \\mid \\dots \\mid \\Gamma_n \\vdash \\Delta_n \\mid \\Omega \\vdash A \\qquad \\Sigma_1 \\vdash \\Pi_1 \\mid \\dots \\mid \\Sigma_m \\vdash \\Pi_m \\mid \\Theta \\vdash B }{\\Gamma_1\\vdash \\Delta_1 \\mid \\dots \\mid \\Gamma_n \\vdash \\Delta_n \\mid \\Sigma_1 \\vdash \\Pi_1 \\mid \\dots \\mid \\Sigma_m \\vdash \\Pi_m \\mid \\Omega \\vdash B \\mid \\Theta \\vdash A}"
}
] | https://en.wikipedia.org/wiki?curid=929998 |
9302 | Existence | State of being real
Existence is the state of having being or reality in contrast to nonexistence and nonbeing. Existence is often contrasted with essence: the essence of an entity is its essential features or qualities, which can be understood even if one does not know whether the entity exists.
Ontology is the philosophical discipline studying the nature and types of existence. Singular existence is the existence of individual entities while general existence refers to the existence of concepts or universals. Entities present in space and time have concrete existence in contrast to abstract entities, like numbers and sets. Other distinctions are between possible, contingent, and necessary existence and between physical and mental existence. The common view is that an entity either exists or not with nothing in between, but some philosophers say that there are degrees of existence, meaning that some entities exist to a higher degree than others.
The orthodox position in ontology is that existence is a second-order property or a property of properties. For example, to say that lions exist means that the property of being a lion is possessed by an entity. A different view states that existence is a first-order property or a property of individuals. This means existence is similar to other properties of individuals, like color and shape. Alexius Meinong and his followers accept this idea and say that not all individuals have this property; they state that there are some individuals, such as Santa Claus, that do not exist. Universalists reject this view; they see existence as a universal property of every individual.
The concept of existence has been discussed throughout the history of philosophy and already played a role in ancient philosophy, including Presocratic philosophy in Ancient Greece, Hindu and Buddhist philosophy in Ancient India, and Daoist philosophy in ancient China. It is relevant to fields such as logic, mathematics, epistemology, philosophy of mind, philosophy of language, and existentialism.
Definition and related terms.
Dictionaries define "existence" as the state of being real and "to exist" as having being or participating in reality. Existence sets real entities apart from imaginary ones, and can refer both to individual entities and to the totality of reality. The word "existence" entered the English language in the late 14th century from Old French and has its roots in the medieval Latin term "existere", which means "to stand forth", "to appear", and "to arise". Existence is studied by the subdiscipline of metaphysics known as ontology.
The terms "being", "reality", and "actuality" are often used as synonyms of "existence", but the exact definition of "existence" and its connection to these terms is disputed. According to metaphysician Alexius Meinong (1853–1920), all entities have being but not all entities have existence. He argues merely possible objects like Santa Claus have being but lack existence. Ontologist Takashi Yagisawa (20th century–present) contrasts existence with reality; he sees "reality" as the more-fundamental term because it equally characterizes all entities and defines existence as a relative term that connects an entity to the world it inhabits. According to philosopher Gottlob Frege (1848–1925), actuality is narrower than existence because only actual entities can produce and undergo changes, in contrast to non-actual existing entities like numbers and sets. According to some philosophers, like Edmund Husserl (1859–1938), existence is an elementary concept, meaning it cannot be defined in other terms without involving circularity. This would imply characterizing existence or talking about its nature in a non-trivial manner may be difficult or impossible.
Disputes about the nature of existence are reflected in the distinction between thin and thick concepts of existence. Thin concepts of existence understand existence as a logical property that every existing thing shares; they do not include any substantial content about the metaphysical implications of having existence. According to one view, existence is the same as the logical property of self-identity. This view articulates a thin concept of existence because it merely states what exists is identical to itself without discussing any substantial characteristics of the nature of existence. Thick concepts of existence encompass a metaphysical analysis of what it means that something exists and what essential features existence implies. According to one proposal, to exist is to be present in space and time, and to have effects on other things. This definition is controversial because it implies abstract objects such as numbers do not exist. Philosopher George Berkeley (1685–1753) gave a different thick concept of existence; he stated: "to be is to be perceived", meaning all existence is mental.
Existence contrasts with nonexistence, a lack of reality. Whether objects can be divided into existent and nonexistent objects is a subject of controversy. This distinction is sometimes used to explain how it is possible to think of fictional objects like dragons and unicorns but the concept of nonexistent objects is not generally accepted; some philosophers say the concept is contradictory. Closely related contrasting terms are nothingness and nonbeing. Existence is commonly associated with mind-independent reality but this position is not universally accepted because there could also be forms of mind-dependent existence, such as the existence of an idea inside a person's mind. According to some idealists, this may apply to all of reality.
Another contrast is made between "existence" and "essence". Essence refers to the intrinsic nature or defining qualities of an entity. The essence of something determines what kind of entity it is and how it differs from other kinds of entities. Essence corresponds to what an entity is, while existence corresponds to the fact that it is. For instance, it is possible to understand what an object is and grasp its nature even if one does not know whether this object exists. According to some philosophers, there is a difference between entities and the fundamental characteristics that make them the entities they are. Martin Heidegger (1889–1976) introduced this concept; he calls it the ontological difference and contrasts individual beings with being. According to his response to the question of being, being is not an entity but the background context that makes all individual entities intelligible.
Types of existing entities.
Many discussions of the types of existing entities revolve around the definitions of different types, the existence or nonexistence of entities of a specific type, the ways entities of different types are related to each other, and whether some types are more fundamental than others. Examples are the existence or nonexistence of souls; whether there are abstract, fictional, and universal entities; and the existence or nonexistence of possible worlds and objects besides the actual world. These discussions cover the topics of the basic stuff or constituents underlying all reality and the most general features of entities.
Singular and general.
There is a distinction between singular existence and general existence. Singular existence is the existence of individual entities. For example, the sentence "Angela Merkel exists" expresses the existence of one particular person. General existence pertains to general concepts, properties, or universals. For instance, the sentence "politicians exist" states the general term "politician" has instances without referring to a particular politician.
Singular and general existence are closely related to each other, and some philosophers have tried to explain one as a special case of the other. For example, according to Frege, general existence is more basic than singular existence. One argument in favor of this position is that singular existence can be expressed in terms of general existence. For instance, the sentence "Angela Merkel exists" can be expressed as "entities that are identical to Angela Merkel exist", where the expression "being identical to Angela Merkel" is understood as a general term. Philosopher Willard Van Orman Quine (1908–2000) defends a different position by giving primacy to singular existence and arguing that general existence can be expressed in terms of singular existence.
A related question is whether there can be general existence without singular existence. According to philosopher Henry S. Leonard (1905–1967), a property only has general existence if there is at least one actual object that instantiates it. Philosopher Nicholas Rescher (1928–2024), by contrast, states that properties can exist if they have no actual instances, like the property of "being a unicorn". This question has a long philosophical tradition in relation to the existence of universals. According to Platonists, universals have general existence as Platonic forms independently of the particulars that exemplify them. According to this view, the universal of redness exists independently of the existence or nonexistence of red objects. Aristotelianism also accepts the existence of universals but says their existence depends on particulars that instantiate them and that they are unable to exist by themselves. According to this view, a universal that is not present in the space and time does not exist. According to nominalists, only particulars have existence and universals do not exist.
Concrete and abstract.
There is an influential distinction in ontology between concrete and abstract objects. Many concrete objects, like rocks, plants, and other people, are encountered in everyday life. They exist in space and time. They have effects on each other, like when a rock falls on a plant and damages it, or a plant grows through rock and breaks it. Abstract objects, like numbers, sets, and types, have no location in space and time, and lack causal powers. The distinction between concrete objects and abstract objects is sometimes treated as the most-general division of being.
The existence of concrete objects is widely agreed upon but opinions about abstract objects are divided. Realists such as Plato accept the idea that abstract objects have independent existence. Some realists say abstract objects have the same mode of existence as concrete objects; according to others, they exist in a different way. Anti-realists state abstract objects do not exist, a view that is often combined with the idea that existence requires a location in space and time or the ability to causally interact.
Possible, contingent, and necessary.
A further distinction is between merely possible, contingent, and necessary existence. An entity has necessary existence if it must exist or could not fail to exist. This means that it is not possible to newly create or destroy necessary entities. Entities that exist but could fail to exist are contingent; merely possible entities do not exist but could exist.
Most entities encountered in ordinary experience, like telephones, sticks, and flowers, have contingent existence. The contingent existence of telephones is reflected in the fact that they exist in the present but did not exist in the past, meaning that it is not necessary that they exist. It is an open question whether any entities have necessary existence. According to some nominalists, all concrete objects have contingent existence while all abstract objects have necessary existence.
According to some theorists, one or several necessary beings are required as the explanatory foundation of the cosmos. For instance, the philosophers Avicenna (980–1037) and Thomas Aquinas (1225–1274) say that God has necessary existence. A few philosophers, like Baruch Spinoza (1632–1677), see God and the world as the same thing, and say that all entities have necessary existence to provide a unified and rational explanation of everything.
There are many academic debates about the existence of merely possible objects. According to actualism, only actual entities have being; this includes both contingent and necessary entities but excludes merely possible entities. Possibilists reject this view and state there are also merely possible objects besides actual objects. For example, metaphysician David Lewis (1941–2001) states that possible objects exist in the same way as actual objects so as to provide a robust explanation of why statements about what is possible and necessary are true. According to him, possible objects exist in possible worlds while actual objects exist in the actual world. Lewis says the only difference between possible worlds and the actual world is the location of the speaker; the term "actual" refers to the world of the speaker, similar to the way the terms "here" and "now" refer to the spatial and temporal location of the speaker.
The problem of contingent and necessary existence is closely related to the ontological question of why there is anything at all or why is there something rather than nothing. According to one view, the existence of something is a contingent fact, meaning the world could have been totally empty. This is not possible if there are necessary entities, which could not have failed to exist. In this case, global nothingness is impossible because the world needs to contain at least all necessary entities.
Physical and mental.
Entities that exist on a physical level include objects encountered in everyday life, like stones, trees, and human bodies, as well as entities discussed in modern physics, like electrons and protons. Physical entities can be observed and measured; they possess mass and a location in space and time. Mental entities like perceptions, experiences of pleasure and pain as well as beliefs, desires, and emotions belong to the realm of the mind; they are primarily associated with conscious experiences but also include unconscious states like unconscious beliefs, desires, and memories.
The mind–body problem concerns the ontological status of and relation between physical and mental entities and is a frequent topic in metaphysics and philosophy of mind. According to materialists, only physical entities exist on the most-fundamental level. Materialists usually explain mental entities in terms of physical processes; for example, as brain states or as patterns of neural activation. Idealism, a minority view in contemporary philosophy, rejects matter as ultimate and views the mind as the most basic reality. Dualists like René Descartes (1596–1650) believe both physical and mental entities exist on the most-fundamental level. They state they are connected to one another in several ways but that one cannot be reduced to the other.
Other types.
Fictional entities are entities that exist as inventions inside works of fiction. For example, Sherlock Holmes is a fictional character in Arthur Conan Doyle's book "A Study in Scarlet" and flying carpets are fictional objects in the folktales "One Thousand and One Nights". According to anti-realism, fictional entities do not form part of reality in any substantive sense. Possibilists, by contrast, see fictional entities as a subclass of possible objects; creationists say that they are artifacts that depend for their existence on the authors who first conceived them.
Intentional inexistence is a similar phenomenon concerned with the existence of objects within mental states. This happens when a person perceives or thinks about an object. In some cases, the intentional object corresponds to a real object outside the mental state, like when accurately perceiving a tree in the garden. In other cases, the intentional object does not have a real counterpart, like when thinking about Bigfoot. The problem of intentional inexistence is the challenge of explaining how one can think about entities that do not exist since this seems to have the paradoxical implication that the thinker stands in a relation to a non-existing object.
Modes and degrees of existence.
Closely related to the problem of different types of entities is the question of whether their modes of existence also vary. This is the case according to ontological pluralism, which states entities belonging to different types differ in both their essential features and in the ways they exist. This position is sometimes found in theology; it states God is radically different from his creation and emphasizes his uniqueness by saying the difference affects both God's features and God's mode of existence.
Another form of ontological pluralism distinguishes the existence of material objects from the existence of space-time. According to this view, material objects have relative existence because they exist in space-time; the existence of space-time itself is not relative in this sense because it just exists without existing within another space-time.
The topic of degrees of existence is closely related to the problem of modes of existence. This topic is based on the idea that some entities exist to a higher degree or have more being than other entities, similar to the way some properties, such as heat and mass, have degrees. According to philosopher Plato (428/427–348/347 BCE), for example, unchangeable Platonic forms have a higher degree of existence than physical objects.
The view that there are different types of entities is common in metaphysics but the idea that they differ from each other in their modes or degrees of existence is often rejected, implying that a thing either exists or does not exist without in-between alternatives. Metaphysician Peter van Inwagen (1942–present) uses the idea that there is an intimate relationship between existence and quantification to argue against different modes of existence. Quantification is related to the counting of objects; according to Inwagen, if there were different modes of entities, people would need different types of numbers to count them. Because the same numbers can be used to count different types of entities, he concludes all entities have the same mode of existence.
Theories of the nature of existence.
Theories of the nature of existence aim to explain what it means for something to exist. A central dispute in the academic discourse about the nature of existence is whether existence is a property of individuals. An individual is a unique entity, like Socrates or a particular apple. A property is something that is attributed to an entity, like "being human" or "being red", and usually expresses a quality or feature of that entity. The two main theories of existence are first-order and second-order theories. First-order theories understand existence as a property of individuals while second-order theories say existence is a second-order property, that is, a property of properties.
A central challenge for theories of the nature of existence is an understanding of the possibility of coherently denying the existence of something, like the statement: "Santa Claus does not exist". One difficulty is explaining how the name "Santa Claus" can be meaningful even though there is no Santa Claus.
Second-order theories.
Second-order theories understand existence as a second-order property rather than a first-order property. They are often seen as the orthodox position in ontology. For instance, the Empire State Building is an individual object and "being tall" is a first-order property of it. "Being instantiated" is a property of "being 443.2 meters tall" and therefore a second-order property. According to second-order theories, to talk about existence is to talk about which properties have instances. For example, this view says that the sentence "God exists" means "Godhood is instantiated" rather than "God has the property of existing".
A key reason against characterizing existence as a property of individuals is that existence differs from regular properties. Regular properties, such as "being a building" and "being 443.2 meters tall", express what an object is like but do not directly describe whether or not that building exists. According to this view, existence is more fundamental than regular properties because an object cannot have any properties if it does not exist.
According to second-order theorists, quantifiers rather than predicates express existence. Predicates are expressions that apply to and classify objects, usually by attributing features to them, such as "is a butterfly" and "is happy". Quantifiers are terms that talk about the quantity of objects that have certain properties. Existential quantifiers express that there is at least one object, like the expressions "some" and "there exists", as in "some cows eat grass" and "there exists an even prime number". In this regard, existence is closely related to counting because to assert that something exists is to assert that the corresponding concept has one or more instances.
Second-order views imply a sentence like "egg-laying mammals exist" is misleading because the word "exist" is used as a predicate in them. These views say the true logical form is better expressed in reformulations like "there exist entities that are egg-laying mammals". This way, "existence" has the role of a quantifier and "egg-laying mammals" is the predicate. Quantifier constructions can also be used to express negative existential statements; for instance, the sentence "talking tigers do not exist" can be expressed as "it is not the case that there exist talking tigers".
Many ontologists accept that second-order theories provide a correct analysis of many types of existential sentences. It is, however, controversial whether it is correct for all cases. Some problems relate to assumptions associated with everyday language about sentences like "Ronald McDonald does not exist". This type of statement is called "negative singular existential" and the expression "Ronald McDonald" is a singular term that seems to refer to an individual. It is not clear how the expression can refer to an individual if, as the sentence asserts, this individual does not exist. According to a solution philosopher Bertrand Russell (1872–1970) proposed, singular terms do not refer to individuals but are descriptions of individuals. This theory states negative singular existentials deny an object matching the descriptions exists without referring to a non-existent individual. Following this approach, the sentence "Ronald McDonald does not exist" expresses the idea: "it is not the case there is a unique happy hamburger clown".
First-order theories.
According to first-order theories, existence is a property of individuals. These theories are less-widely accepted than second-order theories but also have some influential proponents. There are two types of first-order theories: Meinongianism and universalism.
Meinongianism.
Meinongianism, which describes existence as a property of some but not all entities, was first formulated by Alexius Meinong. Its main assertion is that there are some entities that do not exist, meaning objecthood is independent of existence. Proposed examples of nonexistent objects are merely possible objects such as flying pigs, as well as fictional and mythical objects like Sherlock Holmes and Zeus. According to this view, these objects are real and have being, even though they do not exist. Meinong states there is an object for any combination of properties. For example, there is an object that only has the single property of "being a singer" with no other properties. This means neither the attribute of "wearing a dress" nor the absence of it applies to this object. Meinong also includes impossible objects like round squares in this classification.
According to Meinongians, sentences describing Sherlock Holmes and Zeus refer to nonexisting objects. They are true or false depending on whether these objects have the properties ascribed to them. For instance, the sentence "Pegasus has wings" is true because having wings is a property of Pegasus, even though Pegasus lacks the property of existing.
One key motivation of Meinongianism is to explain how negative singular existentials like "Ronald McDonald does not exist" can be true. Meinongians accept the idea that singular terms like "Ronald McDonald" refer to individuals. For them, a negative singular existential is true if the individual it refers to does not exist.
Meinongianism has important implications for understandings of quantification. According to an influential view defended by Willard Van Orman Quine, the domain of quantification is restricted to existing objects. This view implies quantifiers carry ontological commitments about what exists and what does not exist. Meinongianism differs from this view by saying the widest domain of quantification includes both existing and nonexisting objects.
Some aspects of Meinongianism are controversial and have received substantial criticism. According to one objection, one cannot distinguish between being an object and being an existing object. A closely related criticism states objects cannot have properties if they do not exist. A further objection is that Meinongianism leads to an "overpopulated universe" because there is an object corresponding to any combination of properties. A more specific criticism rejects the idea that there are incomplete and impossible objects.
Universalism.
Universalists agree with Meinongians that existence is a property of individuals but deny there are nonexistent entities. Instead, universalists state existence is a universal property; all entities have it, meaning everything exists. One approach is to say existence is the same as self-identity. According to the law of identity, every object is identical to itself or has the property of self-identity. This can be expressed in predicate logic as formula_0.
An influential argument in favor of universalism is that the denial of the existence of something is contradictory. This conclusion follows from the premises that one can only deny the existence of something by referring to that entity and that one can only refer to entities that exist.
Universalists have proposed different ways of interpreting negative singular existentials. According to one view, names of fictional entities like "Ronald McDonald" refer to abstract objects, which exist even though they do not exist in space and time. This means, when understood in a strict sense, all negative singular existentials are false, including the assertion that "Ronald McDonald does not exist". Universalists can interpret such sentences slightly differently in relation to the context. In everyday life, for example, people use sentences like "Ronald McDonald does not exist" to express the idea that Ronald McDonald does not exist as a concrete object, which is true. Another approach is to understand negative singular existentials as neither true nor false but meaningless because their singular terms do not refer to anything.
History.
Western philosophy.
Western philosophy originated with the Presocratic philosophers, who aimed to replace earlier mythological accounts of the universe by providing rational explanations based on foundational principles of all existence. Some, like Thales (c. 624–545 BCE) and Heraclitus (c. 540–480 BCE), suggested concrete principles like water and fire are the root of existence. Anaximander (c. 610–545 BCE) opposed this position; he believed the source must lie in an abstract principle that is beyond the world of human perception.
Plato (428/427–348/347 BCE) argued that different types of entities have different degrees of existence and that shadows and images exist in a weaker sense than regular material objects. He said unchangeable Platonic forms have the highest type of existence, and saw material objects as imperfect and impermanent copies of Platonic forms.
Philosopher Aristotle (384–322 BCE) accepted Plato's idea that forms are different from matter, but he challenged the idea that forms have a higher type of existence. Instead, he believed forms cannot exist without matter. He stated: "being is said in many ways" and explored how different types of entities have different modes of existence. For example, he distinguished between substances and their accidents, and between potentiality and actuality.
Neoplatonists like Plotinus (204–270 CE) suggested reality has a hierarchical structure. They believed a transcendent entity, called "the One" or "the Good", is responsible for all existence. From it emerges the intellect, which in turn gives rise to the soul and the material world.
In medieval philosophy, Anselm of Canterbury (1033–1109 CE) formulated the influential ontological argument, which aims to deduce the existence of God from the concept of God. Anselm defined God as the greatest conceivable being. He reasoned that an entity that did not exist outside his mind would not be the greatest conceivable being, leading him to the conclusion God exists.
Thomas Aquinas (1224–1274 CE) distinguished between the essence of a thing and its existence. According to him, the essence of a thing constitutes its fundamental nature. He argued it is possible to understand what an object is and grasp its essence, even if one does not know whether the object exists. He concluded from this observation that existence is not part of the qualities of an object and should be understood as a separate property. Aquinas also considered the problem of creation from nothing and said only God has the power to truly bring new entities into existence. These ideas later inspired metaphysician Gottfried Wilhelm Leibniz's (1646–1716) theory of creation; Leibniz said to create is to confer actual existence to possible objects.
The philosophers David Hume (1711–1776) and Immanuel Kant (1724–1804) rejected the idea that existence is a property. According to Hume, objects are bundles of qualities. He said existence is not a property because there is no impression of existence besides the bundled qualities. Kant came to a similar conclusion in his criticism of the ontological argument; according to him, this proof fails because one cannot deduce from the definition of a concept whether entities described by this concept exist. Kant said existence does not add anything to the concept of the object; it only indicates this concept is exemplified. According to philosopher Georg Wilhelm Friedrich Hegel (1770–1831), there is no pure being or pure nothing, only becoming.
Philosopher and psychologist Franz Brentano (1838–1917) agreed with Kant's criticism and his position that existence is not a real predicate. Brentano used this idea to develop his theory of judgments, which states all judgments are existential judgments; they either affirm or deny the existence of something. He stated judgments like "some zebras are striped" have the logical form "there is a striped zebra" while judgments like "all zebras are striped" have the logical form "there is not a non-striped zebra".
Gottlob Frege (1848–1925) and Bertrand Russell (1872–1970) aimed to refine the idea of what it means that existence is not a regular property. They distinguished between regular first-order properties of individuals and second-order properties of other properties. According to their view, existence is the second-order property of "being instantiated". Russell further developed the idea that general sentences like "lions exist" are at their most fundamental form about individuals by stating that there is an individual that is a lion.
Willard Van Orman Quine (1908–2000) followed Frege and Russell in accepting existence as a second-order property. He drew a close link between existence and the role of quantification in formal logic. He applied this idea to scientific theories and stated a scientific theory is committed to the existence of an entity if the theory quantifies over this entity. For example, if a theory in biology asserts that "there are populations with genetic diversity", this theory has an ontological commitment to the existence of populations with genetic diversity. Alexius Meinong (1853–1920) was an influential critic of second-order theories and developed the alternative view that existence is a property of individuals and that not all individuals have this property.
Eastern philosophy.
Many schools of thought in Eastern philosophy discuss the problem of existence and its implications. For instance, the ancient Hindu school of Samkhya articulated a metaphysical dualism according to which the two types of existence are pure consciousness ("Purusha") and matter ("Prakriti"). Samkhya explains the manifestation of the universe as the interaction between these two principles. The Vedic philosopher Adi Shankara (c. 700–750 CE) developed a different approach in his school of Advaita Vedanta. Shankara defended a metaphysical monism by defining the divine ("Brahman") as the ultimate reality and the only existent. According to this view, the impression that there is a universe consisting of many distinct entities is an illusion ("Maya"). The essential features of ultimate reality are described as "Sat Chit Ananda"—meaning existence, consciousness, and bliss.
A central doctrine in Buddhist philosophy is called the "three marks of existence", which are "aniccā" (impermanence), "anattā" (absence of a permanent self), and "dukkha" (suffering). "Aniccā" is the doctrine that all of existence is subject to change, meaning everything changes at some point and nothing lasts forever. "Anattā" expresses a similar state in relation to persons by stating that people do not have a permanent identity or a separate self. Ignorance about "aniccā" and "anattā" is seen as the main cause of "dukkha" by leading people to form attachments that cause suffering.
A central idea in many schools of Chinese philosophy, like Laozi's (6th century BCE) Daoism, is that a fundamental principle known as "dao" is the source of all existence. The term is often translated as "the way" and is understood as a cosmic force that governs the natural order of the world. Chinese metaphysicians debated whether "dao" is a form of being or whether, as the source of being, it belongs to non-being.
The concept of existence played a central role in Arabic-Persian philosophy. The Islamic philosophers Avicenna (980–1037 CE) and Al-Ghazali (1058–1111 CE) discussed the relationship between existence and essence, and said the essence of an entity is prior to its existence. The additional step of instantiating the essence is required for the entity to come into existence. Philosopher Mulla Sadra (1571–1636 CE) rejected this priority of essence over existence, and said essence is only a concept that is used by the mind to grasp existence. Existence, by contrast, encompasses the whole of reality, according to his view.
Other traditions.
Indigenous American philosophies tend to emphasize the interconnectedness of all existence and the importance of maintaining balance and harmony with nature. This is often combined with an animist outlook that ascribes a spiritual essence to some or all entities, including plants, rocks, and places.
The interest in the relational aspect of existence is also found in African philosophy, which explores how all entities are causally linked to form an ordered world. African philosophy also examines the idea of an underlying and all-pervading life force responsible for animating entities and their influence on each other.
In various disciplines.
Formal logic.
Formal logic studies deductively valid arguments. In first-order logic, which is the most-commonly used system of formal logic, existence is expressed using the existential quantifier (formula_1). For example, the formula formula_2 can be used to state horses exist. The variable "x" ranges over all elements in the domain of quantification and the existential quantifier expresses that at least one element in this domain is a horse. In first-order logic, all singular terms like names refer to objects in the domain and imply the object exists. Because of this, one can deduce formula_3 (someone is honest) from formula_4 (Bill is honest). If only one object matching the description exists, the unique existential quantifier formula_5 can be used.
Many logical systems that are based on first-order logic also follow this idea. Free logic is an exception because it allows the presence of empty names that do not refer to an object in the domain. With this modification, it is possible to apply logical reasoning to fictional objects instead of limiting it to regular objects. In free logic one can express that Pegasus is a flying horse using the formula formula_6. As a consequence of this modification, one cannot infer from this type of statement that something exists. This means the inference from formula_6 to formula_7 is invalid in free logic, even though it is valid in first-order logic. Free logic uses an additional existence predicate (formula_8) to say a singular term refers to an existing object. For example, the formula formula_9 can be used to say Homer exists while the formula formula_10 states Pegasus does not exist.
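The quantifier semantics sketched above can be illustrated by a toy evaluation over a finite domain. The following Python sketch is purely illustrative: the domain, the predicate extensions, and the empty name used for the free-logic contrast are all invented for the example.

```python
# Evaluate existential quantification over a small finite domain.
# The domain and predicate extensions are invented purely for illustration.

domain = {"bill", "homer", "trigger"}   # the individuals that exist
honest = {"bill"}                       # extension of Honest(x)
horse = {"trigger"}                     # extension of Horse(x)

def exists(predicate, domain):
    """True if at least one element of the domain satisfies the predicate."""
    return any(predicate(x) for x in domain)

def exists_unique(predicate, domain):
    """True if exactly one element of the domain satisfies the predicate."""
    return sum(1 for x in domain if predicate(x)) == 1

# "Horses exist" and "someone is honest" over this domain:
print(exists(lambda x: x in horse, domain))          # True
print(exists(lambda x: x in honest, domain))         # True
print(exists_unique(lambda x: x in horse, domain))   # True: exactly one horse

# In classical first-order logic every name denotes, so Honest(bill) licenses
# the inference to "there exists an x such that Honest(x)".  A free-logic
# style check consults an explicit existence predicate E! before generalizing:
exists_pred = domain          # E! holds exactly of the denoting names
name = "pegasus"              # an empty name with no referent in the domain
print(name in exists_pred)    # False: the existential generalization is blocked
```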
Others.
The disciplines of epistemology, philosophy of mind, and philosophy of language deal with mental and linguistic representations in their attempt to understand the nature of knowledge, the mind, and language. This brings with it the problem of reference or how representations can refer to existing objects. Examples of such representations are beliefs, thoughts, perceptions, words, and sentences. For instance, in the sentence "Barack Obama is a Democrat", the name "Barack Obama" refers to a particular individual. The problem of reference also affects the epistemology of perception. In particular, this concerns the problem of whether perceptual impressions establish a direct contact with reality.
Closely related to the problem of reference is the relationship between true representations and existence. According to truthmaker theory, true representations require a truthmaker, i.e., an entity whose existence is responsible for the representation being true. For example, the sentence "kangaroos live in Australia" is true because there are kangaroos in Australia; the existence of these kangaroos is the truthmaker of the sentence. Truthmaker theory states there is a close relationship between truth and existence; there exists a truthmaker for every true representation.
Many of the individual sciences are concerned with the existence of particular types of entities and the laws governing them, such as physical things in physics and living entities in biology. The natural sciences employ a great variety of concepts to classify entities; these are known as natural kinds, and include categories like protons, gold, and elephants. According to scientific realists, these entities have mind-independent being; scientific anti-realists say the existence of these entities and categories is based on human perceptions, theories, and social constructs.
A similar problem concerns the existence of social kinds, which are basic concepts used in the social sciences, such as race, gender, disability, money, and nation state. Social kinds are often understood as social constructions that, while useful for describing the complexities of human social life, do not form part of objective reality on the most fundamental level. According to the controversial Sapir–Whorf hypothesis, the social institution of language influences or fully determines how people perceive and understand the world.
Existentialism is a school of thought that explores the nature of human existence. Among its key ideas is that existence precedes essence. This means that existence is more basic than essence. As a result, the nature and purpose of human beings are not pre-existing but develop in the process of living. According to this view, humans are thrown into a world that lacks pre-existing intrinsic meaning. They must determine for themselves their purpose and what meaning their life should have. Existentialists use this idea to explore the role of freedom and responsibility in actively shaping one's life. Feminist existentialists investigate the effects of gender on human existence, for example, on the experience of freedom. Influential existentialists include Søren Kierkegaard (1813–1855), Friedrich Nietzsche (1844–1900), Jean-Paul Sartre (1905–1980), and Simone de Beauvoir (1908–1986). Existentialism has influenced reflections on the role of human existence in sociology. Existentialist sociology examines the ways humans experience the social world and construct reality. Existence theory is a relatively recent approach that focuses on the temporal aspect of existence in society. It explores how the existential milestones to which people aspire influence their lives.
Mathematicians are often interested in the existence of certain mathematical objects. For example, number theorists ask how many prime numbers exist within a certain interval. The statement that at least one mathematical object matching a certain description exists is called an existence theorem. Metaphysicians of mathematics investigate whether mathematical objects exist not only in relation to mathematical axioms but also as part of the fundamental structure of reality. This position is affirmed by Platonists, while nominalists believe mathematical objects lack a more-substantial form of existence, for instance, because they are merely useful fictions.
Many debates in theology revolve around the existence of the divine, and arguments have been presented for and against God's existence. Cosmological arguments state that God must exist as the first cause to explain facts about the existence and aspects of the universe. According to teleological arguments, the only way to explain the order and complexity of the universe and human life is by reference to God as the intelligent designer. An influential argument against the existence of God relies on the problem of evil since it is not clear how evil could exist if there was an all-powerful, all-knowing, and benevolent God. Another argument points to a lack of concrete evidence for God's existence.
References.
| [
{
"math_id": 0,
"text": "\\forall x (x=x)"
},
{
"math_id": 1,
"text": "\\exists"
},
{
"math_id": 2,
"text": "\\exists x \\text{Horse}(x)"
},
{
"math_id": 3,
"text": "\\exists x \\text{Honest}(x)"
},
{
"math_id": 4,
"text": "\\text{Honest}(Bill)"
},
{
"math_id": 5,
"text": "\\exists !"
},
{
"math_id": 6,
"text": "\\text{Flyinghorse}(Pegasus)"
},
{
"math_id": 7,
"text": "\\exist x \\text{Flyinghorse}(x)"
},
{
"math_id": 8,
"text": "E!"
},
{
"math_id": 9,
"text": "E!(Homer)"
},
{
"math_id": 10,
"text": "\\lnot E!(Pegasus)"
}
] | https://en.wikipedia.org/wiki?curid=9302 |
930254 | Clinton Davisson | American physicist
Clinton Joseph Davisson (October 22, 1881 – February 1, 1958) was an American physicist who won the 1937 Nobel Prize in Physics for his discovery of electron diffraction in the famous Davisson–Germer experiment. Davisson shared the Nobel Prize with George Paget Thomson, who independently discovered electron diffraction at about the same time as Davisson.
Early life and education.
Davisson was born in Bloomington, Illinois. He graduated from Bloomington High School in 1902, and entered the University of Chicago on scholarship. Upon the recommendation of Robert A. Millikan, in 1905 Davisson was hired by Princeton University as Instructor of Physics. He completed the requirements for his B.S. degree from Chicago in 1908, mainly by working in the summers. While teaching at Princeton, he did doctoral thesis research with Owen Richardson. He received his Ph.D. in physics from Princeton in 1911; in the same year he married Richardson's sister, Charlotte.
Scientific career.
Davisson was then appointed as an assistant professor at the Carnegie Institute of Technology. In 1917, he took a leave from the Carnegie Institute to do war-related research with the engineering department of the Western Electric Company (later Bell Telephone Laboratories). At the end of the war, Davisson accepted a permanent position at Western Electric after receiving assurances of his freedom there to do basic research. He had found that his teaching responsibilities at the Carnegie Institute largely precluded him from doing research. Davisson remained at Western Electric (and Bell Telephone) until his formal retirement in 1946. He then accepted a research professor appointment at the University of Virginia that continued until his second retirement in 1954. Davisson was elected to the American Philosophical Society, the American Academy of Arts and Sciences, and the United States National Academy of Sciences in 1929.
Diffraction is a characteristic effect when a wave is incident upon an aperture or a grating, and is closely associated with the meaning of wave motion itself. In the 19th Century, diffraction was well established for light and for ripples on the surfaces of fluids. In 1927, while working for Bell Labs, Davisson and Lester Germer performed an experiment showing that electrons were diffracted at the surface of a crystal of nickel. This celebrated Davisson–Germer experiment confirmed the de Broglie hypothesis that particles of matter have a wave-like nature, which is a central tenet of quantum mechanics. In particular, their observation of diffraction allowed the first measurement of a wavelength for electrons. The measured wavelength formula_0 agreed well with de Broglie's equation formula_1, where formula_2 is the Planck constant and formula_3 is the electron's momentum.
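As a numerical illustration of the de Broglie relation just quoted, the following sketch computes the non-relativistic wavelength of electrons accelerated through 54 volts, the potential commonly associated with the Davisson–Germer experiment; the constants are rounded, so the result is only approximate.

```python
import math

# Physical constants (SI units, rounded)
h = 6.626e-34      # Planck constant, J*s
m_e = 9.109e-31    # electron mass, kg
e = 1.602e-19      # elementary charge, C

def de_broglie_wavelength(volts):
    """Non-relativistic de Broglie wavelength lambda = h / p for an electron
    accelerated from rest through the given potential difference."""
    kinetic_energy = e * volts              # joules
    p = math.sqrt(2 * m_e * kinetic_energy)  # momentum
    return h / p

# 54 V is the accelerating potential commonly quoted for Davisson-Germer.
print(de_broglie_wavelength(54))   # ~1.67e-10 m, i.e. about 0.167 nm
```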
Personal life.
While doing his graduate work at Princeton, Davisson met his wife and life companion Charlotte Sara Richardson, who was visiting her brother, Professor Richardson. Richardson is the sister-in-law of Oswald Veblen, a prominent mathematician. Clinton and Charlotte Davisson (d.1984) had four children, Owen Davisson, James Davisson, the American physicist Richard Davisson, and Elizabeth Davisson.
Death and legacy.
Davisson died on February 1, 1958, at the age of 76.
An impact crater on the far side of the Moon was named after Davisson in 1970 by the IAU.
References.
| [
{
"math_id": 0,
"text": "\\lambda"
},
{
"math_id": 1,
"text": "\\lambda = h/p"
},
{
"math_id": 2,
"text": " h "
},
{
"math_id": 3,
"text": " p "
}
] | https://en.wikipedia.org/wiki?curid=930254 |
930306 | Coherent space | In proof theory, a coherent space (also coherence space) is a concept introduced in the semantic study of linear logic.
Let a set "C" be given. Two subsets "S","T" ⊆ "C" are said to be "orthogonal", written "S" ⊥ "T", if "S" ∩ "T" is ∅ or a singleton. The "dual" of a family "F" ⊆ ℘("C") is the family "F" ⊥ of all subsets "S" ⊆ "C" orthogonal to every member of "F", i.e., such that "S" ⊥ "T" for all "T" ∈ "F". A coherent space "F" over "C" is a family of "C"-subsets for which "F" = ("F" ⊥) ⊥.
In "Proofs and Types" coherent spaces are called coherence spaces. A footnote explains that although in the French original they were "espaces cohérents", the coherence space translation was used because spectral spaces are sometimes called coherent spaces.
Definitions.
As defined by Jean-Yves Girard, a "coherence space" formula_0 is a collection of sets satisfying down-closure and binary completeness in the following sense: down-closure requires that formula_2, while binary completeness requires that for any subset formula_3 of formula_1 whose members have pairwise unions in formula_1, the union of all of formula_3 also belongs to formula_1, that is, formula_4.
The elements of the sets in formula_1 are known as "tokens", and they are the elements of the set formula_5.
Coherence spaces correspond one-to-one with (undirected) graphs (in the sense of a bijection from the set of coherence spaces to that of undirected graphs). The graph corresponding to formula_1 is called the "web" of formula_1 and is the graph induced by a reflexive, symmetric relation formula_6 over the token space formula_7 of formula_1, known as "coherence modulo" formula_1 and defined as: formula_8. In the web of formula_1, nodes are tokens from formula_7 and an edge is shared between nodes formula_9 and formula_10 when formula_11 (i.e. formula_12). This graph is unique for each coherence space, and in particular, elements of formula_1 are exactly the cliques of the web of formula_1, i.e. the sets of nodes whose elements are pairwise adjacent (share an edge).
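The correspondence between a coherence space and its web can be made concrete by a brute-force enumeration of cliques. The following sketch is only an illustration: the token set and coherence relation are invented, and the enumeration is exponential, so it is suited only to very small webs.

```python
from itertools import combinations

# A toy web: a set of tokens plus a reflexive, symmetric coherence relation,
# given here by listing the coherent unordered pairs of distinct tokens.
tokens = {"a", "b", "c"}
coherent_pairs = {frozenset({"a", "b"}), frozenset({"b", "c"})}  # a ~ b and b ~ c, but not a ~ c

def coherent(x, y):
    """Coherence modulo the space: reflexive by definition, symmetric by construction."""
    return x == y or frozenset({x, y}) in coherent_pairs

def is_clique(subset):
    """A set of tokens belongs to the coherence space iff it is pairwise coherent."""
    return all(coherent(x, y) for x, y in combinations(subset, 2))

# The coherence space is exactly the set of cliques of the web (including the empty set).
space = [set(s) for r in range(len(tokens) + 1)
         for s in combinations(sorted(tokens), r) if is_clique(s)]
print(space)  # [set(), {'a'}, {'b'}, {'c'}, {'a', 'b'}, {'b', 'c'}]

# Down-closure holds by construction: every subset of a clique is again a clique.
assert all(set(t) in space
           for s in space
           for r in range(len(s) + 1)
           for t in combinations(sorted(s), r))
```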
Coherence spaces as types.
Coherence spaces can act as an interpretation for types in type theory where points of a type formula_1 are points of the coherence space formula_1. This allows for some structure to be discussed on types. For instance, each term formula_9 of a type formula_1 can be given a set of finite approximations formula_13, which is, in fact, a directed set with respect to the subset relation. With formula_9 being a coherent subset of the token space formula_7 (i.e. an element of formula_1), any element of formula_13 is a finite subset of formula_9 and therefore also coherent, and we have formula_14
Stable functions.
Functions between types formula_15 are seen as "stable" functions between coherence spaces. A stable function is defined to be one which respects approximants and satisfies a certain stability axiom. Formally, formula_16 is a stable function when: 1. it is monotone with respect to approximants, formula_17; 2. it is continuous with respect to directed unions, formula_18, where formula_19 denotes the directed union; 3. it is stable, formula_20.
Product space.
In order to be considered stable, functions of two arguments must satisfy criterion 3 above in this form: formula_21, which would mean that in addition to stability in each argument alone, the pullback
is preserved with stable functions of two arguments. This leads to the definition of a product space formula_22, which gives a bijection between stable binary functions (functions of two arguments) and stable unary functions (one argument) over the product space. The product coherence space is a product in the categorical sense, i.e., it satisfies the universal property for products. It is defined by the equations: its token space is formula_23, and coherence modulo formula_24 is given by formula_26, formula_27, and formula_28, so that every token arising from formula_1 is coherent with every token arising from formula_25.
{
"math_id": 0,
"text": "\\mathcal{A}"
},
{
"math_id": 1,
"text": "\\mathcal A"
},
{
"math_id": 2,
"text": "a' \\subset a \\in \\mathcal A \\implies a' \\in \\mathcal A"
},
{
"math_id": 3,
"text": "M"
},
{
"math_id": 4,
"text": "\\forall M \\subset \\mathcal A, \\left((\\forall a_1, a_2 \\in M,\\ a_1 \\cup a_2 \\in \\mathcal A) \\implies \\bigcup M \\in \\mathcal A\\right)"
},
{
"math_id": 5,
"text": "|\\mathcal A| = \\bigcup \\mathcal A = \\{ \\alpha | \\{\\alpha\\} \\in \\mathcal A \\}"
},
{
"math_id": 6,
"text": "\\sim"
},
{
"math_id": 7,
"text": "|\\mathcal A|"
},
{
"math_id": 8,
"text": "a \\sim b \\iff \\{a, b\\} \\in \\mathcal A"
},
{
"math_id": 9,
"text": "a"
},
{
"math_id": 10,
"text": "b"
},
{
"math_id": 11,
"text": "a \\sim b"
},
{
"math_id": 12,
"text": "\\{a, b\\} \\in \\mathcal A"
},
{
"math_id": 13,
"text": "I"
},
{
"math_id": 14,
"text": "a = \\bigcup a_i, a_i \\in I."
},
{
"math_id": 15,
"text": "\\mathcal A \\to \\mathcal B"
},
{
"math_id": 16,
"text": "F : \\mathcal A \\to \\mathcal B"
},
{
"math_id": 17,
"text": "a' \\subset a \\in \\mathcal A \\implies F(a') \\subset F(a)."
},
{
"math_id": 18,
"text": "F(\\bigcup_{i\\in I}^{\\uparrow}a_i)= \\bigcup_{i\\in I}^\\uparrow F(a_i)"
},
{
"math_id": 19,
"text": "\\bigcup_{i\\in I}^\\uparrow"
},
{
"math_id": 20,
"text": "a_1 \\cup a_2 \\in \\mathcal A \\implies F(a_1 \\cap a_2) = F(a_1) \\cap F(a_2)."
},
{
"math_id": 21,
"text": "a_1 \\cup a_2 \\in \\mathcal A \\land b_1 \\cup b_2 \\in \\mathcal B \\implies F(a_1 \\cap a_2, b_1 \\cap b_2) = F(a_1, b_1) \\cap F(a_2, b_2)"
},
{
"math_id": 22,
"text": "\\mathcal A\\ \\&\\ \\mathcal B"
},
{
"math_id": 23,
"text": "| \\mathcal A \\ \\&\\ \\mathcal B| = |\\mathcal A| + |\\mathcal B| = (\\{1\\} \\times |\\mathcal A|) \\cup (\\{2\\} \\times |\\mathcal B|)"
},
{
"math_id": 24,
"text": "\\mathcal A \\ \\&\\ \\mathcal B"
},
{
"math_id": 25,
"text": "\\mathcal B"
},
{
"math_id": 26,
"text": "(1, \\alpha) \\sim_{\\mathcal A\\ \\&\\ \\mathcal B} (1, \\alpha') \\iff \\alpha \\sim_{\\mathcal A} \\alpha'"
},
{
"math_id": 27,
"text": "(2, \\beta) \\sim_{\\mathcal A\\ \\&\\ \\mathcal B} (2, \\beta') \\iff \\beta \\sim_{\\mathcal B} \\beta'"
},
{
"math_id": 28,
"text": "(1, \\alpha) \\sim_{\\mathcal A\\ \\&\\ \\mathcal B} (2, \\beta), \\forall \\alpha \\in |\\mathcal A|, \\beta \\in |\\mathcal B|"
}
] | https://en.wikipedia.org/wiki?curid=930306 |
9304783 | Consistent heuristic | Type of heuristic in path-finding problems
In the study of path-finding problems in artificial intelligence, a heuristic function is said to be consistent, or monotone, if its estimate is always less than or equal to the estimated distance from any neighbouring vertex to the goal, plus the cost of reaching that neighbour.
Formally, for every node "N" and each successor "P" of "N", the estimated cost of reaching the goal from "N" is no greater than the step cost of getting to "P" plus the estimated cost of reaching the goal from "P". That is:
formula_0 and
formula_1
where
* "h" is the consistent heuristic function
* "N" is any node in the graph
* "P" is any descendant of "N"
* "G" is any goal node
* c(N,P) is the cost of reaching node P from N
Informally, every node "i" will give an estimate that, accounting for the cost to reach the next node, is always less than or equal to the estimate at node "i+1".
A consistent heuristic is also admissible, i.e. it never overestimates the cost of reaching the goal (the converse, however, is not always true). Assuming non-negative edge costs, this can easily be proved by induction.
Let formula_2, where "N"0 is the goal node; the base case is then trivially true, as 0 ≤ 0. Since the heuristic is consistent, formula_3 by repeated expansion of each term. The summed cost terms equal the true cost, formula_4, so any consistent heuristic is also admissible, since it is upper-bounded by the true cost.
The converse is clearly not true, as we can always construct a heuristic that stays below the true cost but is nevertheless inconsistent: for instance, by increasing the heuristic estimate from the farthest node as we get closer and, once the estimate formula_5 becomes at most the true cost formula_6, making formula_7.
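The consistency condition and the admissibility it implies can be checked directly on a small explicit graph. The sketch below is a brute-force verifier, not a standard library routine: it tests the defining inequality formula_0 on every edge and compares the heuristic against true goal distances computed with Dijkstra's algorithm on the reversed graph; the graph and heuristic values are invented.

```python
import heapq

# A tiny directed graph: graph[u] = list of (successor, cost) edges.  Invented example.
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("G", 5)],
    "C": [("G", 3)],
    "G": [],
}
h = {"A": 4, "B": 4, "C": 3, "G": 0}   # candidate heuristic, goal node is "G"

def is_consistent(graph, h):
    """Check h(N) <= c(N, P) + h(P) for every edge (N, P)."""
    return all(h[n] <= c + h[p] for n in graph for p, c in graph[n])

def true_costs_to_goal(graph, goal):
    """Dijkstra on the reversed graph gives h*(n), the true cost from n to the goal."""
    rev = {n: [] for n in graph}
    for n in graph:
        for p, c in graph[n]:
            rev[p].append((n, c))
    dist = {n: float("inf") for n in graph}
    dist[goal] = 0
    queue = [(0, goal)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist[u]:
            continue
        for v, c in rev[u]:
            if d + c < dist[v]:
                dist[v] = d + c
                heapq.heappush(queue, (d + c, v))
    return dist

h_star = true_costs_to_goal(graph, "G")
print(is_consistent(graph, h))                # True for this example
print(all(h[n] <= h_star[n] for n in graph))  # admissible, as the argument above shows
```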
Consequences of monotonicity.
Consistent heuristics are called monotone because the estimated final cost of a partial solution, formula_8, is monotonically non-decreasing along any path, where formula_9 is the cost of the best path from start node formula_10 to formula_11. It is necessary and sufficient for a heuristic to obey the triangle inequality in order to be consistent.
In the A* search algorithm, using a consistent heuristic means that once a node is expanded, the cost by which it was reached is the lowest possible, under the same conditions that Dijkstra's algorithm requires in solving the shortest path problem (no negative cost edges). In fact, if the search graph is given cost formula_12 for a consistent formula_13, then A* is equivalent to best-first search on that graph using Dijkstra's algorithm. In the unusual event that an admissible heuristic is not consistent, a node will need repeated expansion every time a new best (so-far) cost is achieved for it.
If the given heuristic formula_13 is admissible but not consistent, one can artificially force the heuristic values along a path to be monotonically non-decreasing
by using
formula_14
as the heuristic value for formula_15 instead of formula_16, where formula_17 is the node immediately preceding formula_15 on the path and formula_18. This idea is due to László Mérō
and is now known as pathmax.
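A minimal sketch of the pathmax adjustment applied along a single path follows; the path, edge costs, and heuristic values are invented. As the next paragraph notes, the adjustment only forces the values to be non-decreasing along the explored path and does not make the heuristic consistent.

```python
def pathmax_along_path(path, costs, h):
    """Return adjusted heuristic values h' along a path.

    path  -- list of nodes [n1, n2, ..., nk]
    costs -- dict mapping (N, P) edges on the path to their costs c(N, P)
    h     -- original (admissible but possibly inconsistent) heuristic values
    """
    adjusted = {path[0]: h[path[0]]}
    for parent, child in zip(path, path[1:]):
        # h'(P) = max(h(P), h'(N) - c(N, P))
        adjusted[child] = max(h[child], adjusted[parent] - costs[(parent, child)])
    return adjusted

# Invented example: h drops by more than the edge cost between A and B.
path = ["A", "B", "C"]
costs = {("A", "B"): 1, ("B", "C"): 1}
h = {"A": 5, "B": 2, "C": 1}
print(pathmax_along_path(path, costs, h))   # {'A': 5, 'B': 4, 'C': 3}
```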
Contrary to common belief, pathmax does not turn an admissible heuristic into a consistent heuristic. For example, if A* uses pathmax and a heuristic that is admissible but not consistent, it is not guaranteed to have an optimal path to a node when it is first expanded. | [
{
"math_id": 0,
"text": "h(N) \\leq c(N,P) + h(P)"
},
{
"math_id": 1,
"text": "h(G) = 0.\\,"
},
{
"math_id": 2,
"text": "h(N_{0}) = 0"
},
{
"math_id": 3,
"text": "h(N_{i+1}) \\leq c(N_{i+1}, N_{i}) + h(N_{i}) \\leq c(N_{i+1}, N_{i}) + c(N_{i}, N_{i-1}) + ... + c(N_{1}, N_{0}) + h(N_{0})"
},
{
"math_id": 4,
"text": "\\sum_{i=1}^n c(N_{i}, N_{i-1})"
},
{
"math_id": 5,
"text": "h(N_{i})"
},
{
"math_id": 6,
"text": "h^*(N_{i})"
},
{
"math_id": 7,
"text": "h(N_{i-1}) = h(N_{i}) - c(N_{i}, N_{i-1})"
},
{
"math_id": 8,
"text": "f(N_j)=g(N_j)+h(N_j)"
},
{
"math_id": 9,
"text": "g(N_j)=\\sum_{i=2}^j c(N_{i-1},N_i)"
},
{
"math_id": 10,
"text": "N_1"
},
{
"math_id": 11,
"text": "N_j"
},
{
"math_id": 12,
"text": "c'(N,P)=c(N,P)+h(P)-h(N)"
},
{
"math_id": 13,
"text": "h"
},
{
"math_id": 14,
"text": " h'(P) \\gets \\max(h(P), h'(N) - c(N,P)) "
},
{
"math_id": 15,
"text": " P "
},
{
"math_id": 16,
"text": " h(P) "
},
{
"math_id": 17,
"text": " N "
},
{
"math_id": 18,
"text": " h'(start)=h(start) "
}
] | https://en.wikipedia.org/wiki?curid=9304783 |
9304834 | Pointclass | Descriptive set theory concept
In the mathematical field of descriptive set theory, a pointclass is a collection of sets of points, where a "point" is ordinarily understood to be an element of some perfect Polish space. In practice, a pointclass is usually characterized by some sort of "definability property"; for example, the collection of all open sets in some fixed collection of Polish spaces is a pointclass. (An open set may be seen as in some sense definable because it cannot be a purely arbitrary collection of points; for any point in the set, all points sufficiently close to that point must also be in the set.)
Pointclasses find application in formulating many important principles and theorems from set theory and real analysis. Strong set-theoretic principles may be stated in terms of the determinacy of various pointclasses, which in turn implies that sets in those pointclasses (or sometimes larger ones) have regularity properties such as Lebesgue measurability (and indeed universal measurability), the property of Baire, and the perfect set property.
Basic framework.
In practice, descriptive set theorists often simplify matters by working in a fixed Polish space such as Baire space or sometimes Cantor space, each of which has the advantage of being zero dimensional, and indeed homeomorphic to its finite or countable powers, so that considerations of dimensionality never arise. Yiannis Moschovakis provides greater generality by fixing once and for all a collection of underlying Polish spaces, including the set of all naturals, the set of all reals, Baire space, and Cantor space, and otherwise allowing the reader to throw in any desired perfect Polish space. Then he defines a "product space" to be any finite Cartesian product of these underlying spaces. Then, for example, the pointclass formula_0 of all open sets means the collection of all open subsets of one of these product spaces. This approach prevents formula_0 from being a proper class, while avoiding excessive specificity as to the particular Polish spaces being considered (given that the focus is on the fact that formula_0 is the collection of open sets, not on the spaces themselves).
Boldface pointclasses.
The pointclasses in the Borel hierarchy, and in the more complex projective hierarchy, are represented by sub- and super-scripted Greek letters in boldface fonts; for example, formula_1 is the pointclass of all closed sets, formula_2 is the pointclass of all Fσ sets, formula_3 is the collection of all sets that are simultaneously Fσ and Gδ, and formula_4 is the pointclass of all analytic sets.
Sets in such pointclasses need be "definable" only up to a point. For example, every singleton set in a Polish space is closed, and thus formula_1. Therefore, it cannot be that every formula_1 set must be "more definable" than an arbitrary element of a Polish space (say, an arbitrary real number, or an arbitrary countable sequence of natural numbers). Boldface pointclasses, however, may (and in practice ordinarily do) require that sets in the class be definable relative to some real number, taken as an oracle. In that sense, membership in a boldface pointclass is a definability property, even though it is not absolute definability, but only definability with respect to a possibly undefinable real number.
Boldface pointclasses, or at least the ones ordinarily considered, are closed under Wadge reducibility; that is, given a set in the pointclass, its inverse image under a continuous function (from a product space to the space of which the given set is a subset) is also in the given pointclass. Thus a boldface pointclass is a downward-closed union of Wadge degrees.
Lightface pointclasses.
The Borel and projective hierarchies have analogs in effective descriptive set theory in which the definability property is no longer relativized to an oracle, but is made absolute. For example, if one fixes some collection of basic open neighborhoods (say, in Baire space, the collection of sets of the form {"x"∈ωω formula_5 "s" is an initial segment of "x"} for each fixed finite sequence "s" of natural numbers), then the open, or formula_0, sets may be characterized as all (arbitrary) unions of basic open neighborhoods. The analogous formula_6 sets, with a lightface formula_7, are no longer "arbitrary" unions of such neighborhoods, but computable unions of them. That is, a set is lightface formula_6, also called "effectively open", if there is a computable set "S" of finite sequences of naturals such that the given set is the union of the sets {"x"∈ωω formula_5 "s" is an initial segment of "x"} for "s" in "S".
A set is lightface formula_8 if it is the complement of a formula_6 set. Thus each formula_6 set has at least one index, which describes the computable function enumerating the basic open sets from which it is composed; in fact it will have infinitely many such indices. Similarly, an index for a formula_8 set "B" describes the computable function enumerating the basic open sets in the complement of "B".
A set "A" is lightface formula_9 if it is a union of a computable sequence of formula_8 sets (that is, there is a computable enumeration of indices of formula_8 sets such that "A" is the union of these sets). This relationship between lightface sets and their indices is used to extend the lightface Borel hierarchy into the transfinite, via recursive ordinals. This produces the hyperarithmetic hierarchy, which is the lightface analog of the Borel hierarchy. (The finite levels of the hyperarithmetic hierarchy are known as the arithmetical hierarchy.)
A similar treatment can be applied to the projective hierarchy. Its lightface analog is known as the analytical hierarchy.
Summary.
Each class is at least as large as the classes above it. | [
{
"math_id": 0,
"text": "\\boldsymbol{\\Sigma}^0_1"
},
{
"math_id": 1,
"text": "\\boldsymbol{\\Pi}^0_1"
},
{
"math_id": 2,
"text": "\\boldsymbol{\\Sigma}^0_2"
},
{
"math_id": 3,
"text": "\\boldsymbol{\\Delta}^0_2"
},
{
"math_id": 4,
"text": "\\boldsymbol{\\Sigma}^1_1"
},
{
"math_id": 5,
"text": "\\mid"
},
{
"math_id": 6,
"text": "\\Sigma^0_1"
},
{
"math_id": 7,
"text": "\\Sigma"
},
{
"math_id": 8,
"text": "\\Pi^0_1"
},
{
"math_id": 9,
"text": "\\Sigma^0_2"
}
] | https://en.wikipedia.org/wiki?curid=9304834 |
9305752 | Fanno flow | Fanno flow is the adiabatic flow through a constant area duct where the effect of friction is considered. Compressibility effects often come into consideration, although the Fanno flow model certainly also applies to incompressible flow. For this model, the duct area remains constant, the flow is assumed to be steady and one-dimensional, and no mass is added within the duct. The Fanno flow model is considered an irreversible process due to viscous effects. The viscous friction causes the flow properties to change along the duct. The frictional effect is modeled as a shear stress at the wall acting on the fluid with uniform properties over any cross section of the duct.
For a flow with an upstream Mach number greater than 1.0 in a sufficiently long duct, deceleration occurs and the flow can become choked. On the other hand, for a flow with an upstream Mach number less than 1.0, acceleration occurs and the flow can become choked in a sufficiently long duct. It can be shown that for flow of calorically perfect gas the maximum entropy occurs at "M" = 1.0. Fanno flow is named after Gino Girolamo Fanno.
Theory.
The Fanno flow model begins with a differential equation that relates the change in Mach number with respect to the length of the duct, "dM/dx". Other terms in the differential equation are the heat capacity ratio, "γ", the Fanning friction factor, "f", and the hydraulic diameter, "D""h":
formula_0
Assuming the Fanning friction factor is a constant along the duct wall, the differential equation can be solved easily. One must keep in mind, however, that the value of the Fanning friction factor can be difficult to determine for supersonic and especially hypersonic flow velocities. The resulting relation is shown below where "L*" is the required duct length to choke the flow assuming the upstream Mach number is supersonic. The left-hand side is often called the Fanno parameter.
formula_1
Equally important to the Fanno flow model is the dimensionless ratio of the change in entropy over the heat capacity at constant pressure, "c""p".
formula_2
The above equation can be rewritten in terms of a static to stagnation temperature ratio, which, for a calorically perfect gas, is equal to the dimensionless enthalpy ratio, "H":
formula_3
formula_4
The equation above can be used to plot the Fanno line, which represents a locus of states for given Fanno flow conditions on an "H"-"ΔS" diagram. In the diagram, the Fanno line reaches maximum entropy at "H" = 0.833 and the flow is choked. According to the Second law of thermodynamics, entropy must always increase for Fanno flow. This means that a subsonic flow entering a duct with friction will have an increase in its Mach number until the flow is choked. Conversely, the Mach number of a supersonic flow will decrease until the flow is choked. Each point on the Fanno line corresponds with a different Mach number, and the movement to choked flow is shown in the diagram.
The Fanno line defines the possible states for a gas when the mass flow rate and total enthalpy are held constant, but the momentum varies. Each point on the Fanno line will have a different momentum value, and the change in momentum is attributable to the effects of friction.
Additional Fanno flow relations.
As was stated earlier, the area and mass flow rate in the duct are held constant for Fanno flow. Additionally, the stagnation temperature remains constant. These relations are shown below with the * symbol representing the throat location where choking can occur. A stagnation property contains a 0 subscript.
formula_5
Differential equations can also be developed and solved to describe Fanno flow property ratios with respect to the values at the choking location. The ratios for the pressure, density, temperature, velocity and stagnation pressure are shown below, respectively. They are represented graphically along with the Fanno parameter.
formula_6
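The closed-form relations above are easy to evaluate numerically. The following sketch implements the Fanno parameter and two of the property ratios for a calorically perfect gas; the function names and the sample Mach number are chosen for illustration only.

```python
import math

def fanno_parameter(M, gamma=1.4):
    """4 f L* / D_h as a function of Mach number for a calorically perfect gas."""
    term1 = (1 - M**2) / (gamma * M**2)
    term2 = (gamma + 1) / (2 * gamma) * math.log(
        M**2 / ((2 / (gamma + 1)) * (1 + (gamma - 1) / 2 * M**2))
    )
    return term1 + term2

def pressure_ratio(M, gamma=1.4):
    """p / p* for Fanno flow."""
    return (1 / M) / math.sqrt((2 / (gamma + 1)) * (1 + (gamma - 1) / 2 * M**2))

def temperature_ratio(M, gamma=1.4):
    """T / T* for Fanno flow."""
    return 1 / ((2 / (gamma + 1)) * (1 + (gamma - 1) / 2 * M**2))

# Example: a subsonic air flow entering the duct at M = 0.5 (gamma = 1.4)
M = 0.5
print(fanno_parameter(M))     # ~1.069: duct friction length needed to choke the flow
print(pressure_ratio(M))      # ~2.138
print(temperature_ratio(M))   # ~1.143
```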
Applications.
The Fanno flow model is often used in the design and analysis of nozzles. In a nozzle, the converging or diverging area is modeled with isentropic flow, while the constant area section afterwards is modeled with Fanno flow. For given upstream conditions at point 1 as shown in Figures 3 and 4, calculations can be made to determine the nozzle exit Mach number and the location of a normal shock in the constant area duct. Point 2 labels the nozzle throat, where "M" = 1 if the flow is choked. Point 3 labels the end of the nozzle where the flow transitions from isentropic to Fanno. With a high enough initial pressure, supersonic flow can be maintained through the constant area duct, similar to the desired performance of a blowdown-type supersonic wind tunnel. However, these figures show the shock wave before it has moved entirely through the duct. If a shock wave is present, the flow transitions from the supersonic portion of the Fanno line to the subsonic portion before continuing towards "M" = 1. The movement in Figure 4 is always from the left to the right in order to satisfy the second law of thermodynamics.
The Fanno flow model is also used extensively with the Rayleigh flow model. These two models intersect at points on the enthalpy-entropy and Mach number-entropy diagrams, which is meaningful for many applications. However, the entropy values for each model are not equal at the sonic state. The change in entropy is 0 at "M" = 1 for each model, but the previous statement means the change in entropy from the same arbitrary point to the sonic point is different for the Fanno and Rayleigh flow models. If initial values of "s""i" and "M""i" are defined, a new equation for dimensionless entropy versus Mach number can be defined for each model. These equations are shown below for Fanno and Rayleigh flow, respectively.
formula_7
Figure 5 shows the Fanno and Rayleigh lines intersecting with each other for initial conditions of "s""i" = 0 and "M""i" = 3. The intersection points are calculated by equating the new dimensionless entropy equations with each other, resulting in the relation below.
formula_8
The intersection points occur at the given initial Mach number and its post-normal shock value. For Figure 5, these values are "M" = 3 and 0.4752, which can be found in the normal shock tables listed in most compressible flow textbooks. A given flow with a constant duct area can switch between the Fanno and Rayleigh models at these points.
References.
| [
{
"math_id": 0,
"text": "\\ \\frac{dM^2}{M^2} = \\frac{\\gamma M^2}{1 - M^2}\\left(1 + \\frac{\\gamma - 1}{2}M^2\\right)\\frac{4f}{D_h}dx "
},
{
"math_id": 1,
"text": "\\ \\frac{4fL^*}{D_h} = \\left(\\frac{1 - M^2}{\\gamma M^2}\\right) + \\left(\\frac{\\gamma + 1}{2\\gamma}\\right)\\ln\\left[\\frac{M^2}{\\left(\\frac{2}{\\gamma + 1}\\right)\\left(1 + \\frac{\\gamma - 1}{2}M^2\\right)}\\right]"
},
{
"math_id": 2,
"text": "\\ \\Delta S = \\frac{\\Delta s}{c_p} = \\ln\\left[M^\\frac{\\gamma - 1}{\\gamma}\\left(\\left[\\frac{2}{\\gamma + 1}\\right]\\left[1 + \\frac{\\gamma - 1}{2}M^2\\right]\\right)^\\frac{-(\\gamma + 1)}{2\\gamma}\\right] "
},
{
"math_id": 3,
"text": "\\ H = \\frac{h}{h_0} = \\frac{c_pT}{c_pT_0} = \\frac{T}{T_0} "
},
{
"math_id": 4,
"text": "\\ \\Delta S = \\frac{\\Delta s}{c_p} = \\ln\\left[\\left(\\frac{1}{H} - 1\\right)^\\frac{\\gamma - 1}{2\\gamma}\\left(\\frac{2}{\\gamma - 1}\\right)^\\frac{\\gamma - 1}{2\\gamma}\\left(\\frac{\\gamma + 1}{2}\\right)^\\frac{\\gamma + 1}{2\\gamma}\\left(H\\right)^\\frac{\\gamma + 1}{2\\gamma}\\right] "
},
{
"math_id": 5,
"text": "\\begin{align}\nA &= A^* = \\mbox{constant} \\\\\nT_0 &= T_0^* = \\mbox{constant} \\\\\n\\dot{m} &= \\dot{m}^* = \\mbox{constant} \n\\end{align} "
},
{
"math_id": 6,
"text": "\\begin{align}\n\\frac{p}{p^*} &= \\frac{1}{M}\\frac{1}{\\sqrt{\\left(\\frac{2}{\\gamma + 1}\\right)\\left(1 + \\frac{\\gamma - 1}{2}M^2\\right)}} \\\\\n\\frac{\\rho}{\\rho^*} &= \\frac{1}{M}\\sqrt{\\left(\\frac{2}{\\gamma + 1}\\right)\\left(1 + \\frac{\\gamma - 1}{2}M^2\\right)} \\\\\n\\frac{T}{T^*} &= \\frac{1}{\\left(\\frac{2}{\\gamma + 1}\\right)\\left(1 + \\frac{\\gamma - 1}{2}M^2\\right)} \\\\\n\\frac{V}{V^*} &= M\\frac{1}{\\sqrt{\\left(\\frac{2}{\\gamma + 1}\\right)\\left(1 + \\frac{\\gamma - 1}{2}M^2\\right)}} \\\\\n\\frac{p_0}{p_0^*} &= \\frac{1}{M}\\left[\\left(\\frac{2}{\\gamma + 1}\\right)\\left(1 + \\frac{\\gamma - 1}{2}M^2\\right)\\right]^\\frac{\\gamma + 1}{2\\left(\\gamma - 1\\right)}\n\\end{align} "
},
{
"math_id": 7,
"text": "\\begin{align}\n\\Delta S_F &= \\frac{s - s_i}{c_p} = \\ln\\left[\\left(\\frac{M}{M_i}\\right)^\\frac{\\gamma - 1}{\\gamma}\\left(\\frac{1 + \\frac{\\gamma - 1}{2}M_i^2}{1 + \\frac{\\gamma - 1}{2}M^2}\\right)^\\frac{\\gamma + 1}{2\\gamma}\\right] \\\\\n\\Delta S_R &= \\frac{s - s_i}{c_p} = \\ln\\left[\\left(\\frac{M}{M_i}\\right)^2\\left(\\frac{1 + \\gamma M_i^2}{1 + \\gamma M^2}\\right)^\\frac{\\gamma + 1}{\\gamma}\\right]\n\\end{align} "
},
{
"math_id": 8,
"text": "\\ \\left(1 + \\frac{\\gamma - 1}{2}M_i^2\\right)\\left[\\frac{M_i^2}{\\left(1 + \\gamma M_i^2\\right)^2}\\right] = \\left(1 + \\frac{\\gamma - 1}{2}M^2\\right)\\left[\\frac{M^2}{\\left(1 + \\gamma M^2\\right)^2}\\right] "
}
] | https://en.wikipedia.org/wiki?curid=9305752 |
9306272 | Shock (fluid dynamics) | Shock is an abrupt discontinuity in the flow field and it occurs in flows when the local flow speed exceeds the local sound speed. More specifically, it is a flow whose Mach number exceeds 1.
Explanation of phenomena.
A shock is formed by the coalescence of many small pressure pulses. Sound waves are pressure waves, and it is at the speed of sound that disturbances are "communicated" through the medium. When an object moves in a flow field, it sends out disturbances which propagate at the speed of sound and "adjust" the remaining flow field accordingly. However, if the object itself travels at a speed greater than that of sound, the disturbances it creates cannot travel ahead and "communicate" with the rest of the flow field, and this results in an abrupt change of properties, which is termed a "shock" in gas dynamics terminology.
Shocks are characterized by discontinuous changes in flow properties such as velocity, pressure, temperature, etc. Typically, shock thickness is of a few mean free paths (of the order of 10−8 m). Shocks are irreversible occurrences in supersonic flows (i.e. the entropy increases).
formula_0
formula_1
formula_2
formula_3
formula_4
formula_5
formula_6
formula_7
Normal shock formulas.
Here, the index 1 refers to upstream properties and the index 2 refers to downstream properties. The subscript 0 refers to total or stagnation properties. T is temperature, M is the Mach number, P is pressure, ρ is density, and γ is the ratio of specific heats.
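The relations above can be evaluated directly; the following sketch computes the downstream Mach number and the static ratios across a normal shock for a calorically perfect gas, with an illustrative upstream Mach number of 2.

```python
import math

def normal_shock(M1, gamma=1.4):
    """Downstream Mach number and static ratios across a normal shock."""
    M2 = math.sqrt((2 / (gamma - 1) + M1**2) /
                   (2 * gamma / (gamma - 1) * M1**2 - 1))
    p_ratio = 2 * gamma / (gamma + 1) * M1**2 - (gamma - 1) / (gamma + 1)
    T_ratio = ((1 + (gamma - 1) / 2 * M1**2) *
               (2 * gamma / (gamma - 1) * M1**2 - 1)) / \
              ((gamma + 1)**2 * M1**2 / (2 * (gamma - 1)))
    rho_ratio = p_ratio / T_ratio
    return M2, p_ratio, T_ratio, rho_ratio

# Example: upstream Mach number of 2 in air (gamma = 1.4)
M2, p21, T21, rho21 = normal_shock(2.0)
print(M2)      # ~0.577
print(p21)     # 4.5
print(T21)     # ~1.687
print(rho21)   # ~2.667
```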
References.
| [
{
"math_id": 0,
"text": "\\mathbf{T_{02}}=\\mathbf{T_{01}}"
},
{
"math_id": 1,
"text": "M_{2}=(\\frac{\\frac{2}{\\gamma -1} + {M_{1}}^{2}}{\\frac{2\\gamma}{\\gamma-1}{M_{1}}^{2} - 1})^{0.5}"
},
{
"math_id": 2,
"text": "\\frac{p_{2}}{p_{1}}=\\frac{1+\\gamma M_{1}^{2}}{{1+\\gamma M_{2}^{2}}} = \\frac{2\\gamma}{\\gamma+1}M_{1}^2-\\frac{\\gamma-1}{\\gamma+1}"
},
{
"math_id": 3,
"text": "\\frac{T_{2}}{T_{1}}=\\frac{1+\\frac{\\gamma -1}{2} M_{1}^{2}}{{1+\\frac{\\gamma -1}{2} M_{2}^{2}}} = \\frac{(1+\\frac{\\gamma -1}{2} M_{1}^{2})(\\frac{2\\gamma}{\\gamma - 1}M_{1}^{2}-1)}{\\frac{(\\gamma+1)^2M_{1}^2}{2(\\gamma-1)}} "
},
{
"math_id": 4,
"text": "\\frac{a_{2}}{a_{1}}={(\\frac{T_{2}}{T_{1}})}^{0.5}"
},
{
"math_id": 5,
"text": "\\frac{\\rho_{2}}{\\rho_{1}}=\\frac{p_{2}}{p_{1}}\\frac{T_{1}}{T_{2}}"
},
{
"math_id": 6,
"text": "\\frac{p_{01}}{p_{1}}=(1+\\frac{\\gamma-1}{2}M_{1}^2)^{\\frac{\\gamma}{\\gamma-1}}"
},
{
"math_id": 7,
"text": "\\frac{p_{02}}{p_{2}}=(1+\\frac{\\gamma-1}{2}M_{2}^2)^{\\frac{\\gamma}{\\gamma-1}}"
}
] | https://en.wikipedia.org/wiki?curid=9306272 |
9309 | Extractor (mathematics) | Bipartite graph with nodes
An formula_0 -extractor is a bipartite graph with formula_1 nodes on the left and formula_2 nodes on the right such that each node on the left has formula_3 neighbors (on the right), which has the added property that
for any subset formula_4 of the left vertices of size at least formula_5, the distribution on right vertices obtained by choosing a random node in formula_4 and then following a random edge to get a node x on the right side is formula_6-close to the uniform distribution in terms of total variation distance.
A disperser is a related graph.
An equivalent way to view an extractor is as a bivariate function
formula_7
in the natural way. With this view it turns out that the extractor property is equivalent to: for any source of randomness formula_8 that gives formula_9 bits with min-entropy formula_10, the distribution formula_11 is formula_6-close to formula_12, where formula_13 denotes the uniform distribution on formula_14.
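For very small parameters, the extractor property can be verified by brute force, using the standard fact that it suffices to test flat sources, i.e. uniform distributions over sets of K inputs, since every source with min-entropy at least log K is a convex combination of such flat sources. The sketch below does this for a toy function; the function and parameters are arbitrary and are not claimed to give a good extractor.

```python
from itertools import combinations, product

def total_variation(p, q, support):
    """Total variation distance between two distributions given as dicts."""
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def is_extractor(E, N, M, D, K, eps):
    """Brute-force test of the (N, M, D, K, eps)-extractor property.

    Tests all flat sources (uniform over K-element subsets of the N left
    vertices); only feasible for tiny parameters.
    """
    uniform = {y: 1.0 / M for y in range(M)}
    for subset in combinations(range(N), K):
        out = {}
        for x, seed in product(subset, range(D)):
            y = E(x, seed)
            out[y] = out.get(y, 0.0) + 1.0 / (K * D)
        if total_variation(out, uniform, range(M)) > eps:
            return False
    return True

# Toy example: N = 8 left vertices, D = 2 edges each, M = 2 right vertices.
E = lambda x, seed: (x >> seed) & 1        # an arbitrary toy function
print(is_extractor(E, N=8, M=2, D=2, K=4, eps=0.3))   # True for these parameters
```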
Extractors are interesting when they can be constructed with small formula_15 relative to formula_1 and formula_2 is as close to formula_16 (the total randomness in the input sources) as possible.
Extractor functions were originally researched as a way to "extract" randomness from weakly random sources. "See" randomness extractor.
Using the probabilistic method it is easy to show that extractor graphs with really good parameters exist. The challenge is to find explicit or polynomial time computable examples of such graphs with good parameters. Algorithms that compute extractor (and disperser) graphs have found many applications in computer science. | [
{
"math_id": 0,
"text": "(N,M,D,K,\\epsilon)"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "D"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "K"
},
{
"math_id": 6,
"text": "\\epsilon"
},
{
"math_id": 7,
"text": "E : [N] \\times [D] \\rightarrow [M]"
},
{
"math_id": 8,
"text": "X"
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": "\\log K"
},
{
"math_id": 11,
"text": " E(X,U_D) "
},
{
"math_id": 12,
"text": "U_M"
},
{
"math_id": 13,
"text": "U_T"
},
{
"math_id": 14,
"text": "[T]"
},
{
"math_id": 15,
"text": "K,D,\\epsilon"
},
{
"math_id": 16,
"text": "KD"
}
] | https://en.wikipedia.org/wiki?curid=9309 |
9311111 | Closest pair of points problem | The closest pair of points problem or closest pair problem is a problem of computational geometry: given formula_0 points in metric space, find a pair of points with the smallest distance between them. The closest pair problem for points in the Euclidean plane was among the first geometric problems that were treated at the origins of the systematic study of the computational complexity of geometric algorithms.
Time bounds.
Randomized algorithms that solve the problem in linear time are known, in Euclidean spaces whose dimension is treated as a constant for the purposes of asymptotic analysis. This is significantly faster than the formula_1 time (expressed here in big O notation) that would be obtained by a naive algorithm of finding distances between all pairs of points and selecting the smallest.
It is also possible to solve the problem without randomization, in random-access machine models of computation with unlimited memory that allow the use of the floor function, in near-linear formula_2 time. In even more restricted models of computation, such as the algebraic decision tree, the problem can be solved in the somewhat slower formula_3 time bound, and this is optimal for this model, by a reduction from the element uniqueness problem. Both sweep line algorithms and divide-and-conquer algorithms with this slower time bound are commonly taught as examples of these algorithm design techniques.
Linear-time randomized algorithms.
A linear expected time randomized algorithm of Rabin, modified slightly by Richard Lipton to make its analysis easier, proceeds as follows, on an input set formula_4 consisting of formula_0 points in a formula_5-dimensional Euclidean space: select formula_0 pairs of points uniformly at random and let formula_6 be the minimum distance among the selected pairs; round the input points to a square grid whose spacing is formula_6, using a hash table to collect together points that round to the same grid point; then, for each input point, compute the distance to every other input point that rounds to the same grid point or to one of the formula_7 surrounding grid points, and return the smallest distance computed.
The algorithm will always correctly determine the closest pair, because it maps any pair closer than distance formula_6 to the same grid point or to adjacent grid points. The uniform sampling of pairs in the first step of the algorithm (compared to a different method of Rabin for sampling a similar number of pairs) simplifies the proof that the expected number of distances computed by the algorithm is linear.
Instead, a different algorithm goes through two phases: a random iterated filtering process that approximates the closest distance to within an approximation ratio of formula_8, together with a finishing step that turns this approximate distance into the exact closest distance. The filtering process repeats the following steps, until formula_4 becomes empty: choose a point formula_9 uniformly at random from formula_4; compute the distances from formula_9 to all the other points of formula_4 and let formula_6 be the minimum such distance; then round the points to a grid with cells of side formula_10 and remove from formula_4 every point whose surrounding grid neighborhood contains no other point.
The approximate distance found by this filtering process is the final value of formula_6, computed in the step before formula_4 becomes empty. Each step removes all points whose closest neighbor is at distance formula_6 or greater, at least half of the points in expectation, from which it follows that the total expected time for filtering is linear. Once an approximate value of formula_6 is known, it can be used for the final steps of Rabin's algorithm; in these steps each grid point has a constant number of inputs rounded to it, so again the time is linear.
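The grid-based finishing step can be sketched as follows: given a value d that upper-bounds the closest distance but is within a constant factor of it, points are hashed to cells of side d and only points in the same or neighbouring cells are compared. The helper below is illustrative only; a full implementation would also need the randomized estimation of d described above.

```python
import math
from collections import defaultdict
from itertools import product

def closest_pair_with_grid(points, d):
    """Exact closest pair, assuming d is an upper bound on the closest distance
    that is within a constant factor of it (so each cell holds O(1) points)."""
    k = len(points[0])                      # dimension
    cells = defaultdict(list)
    for p in points:
        cells[tuple(math.floor(c / d) for c in p)].append(p)

    best = (float("inf"), None, None)
    offsets = list(product((-1, 0, 1), repeat=k))   # the 3^k cells around (and including) a cell
    for cell, bucket in cells.items():
        for off in offsets:
            neighbour = tuple(c + o for c, o in zip(cell, off))
            for p in bucket:
                for q in cells.get(neighbour, []):
                    if p is q:
                        continue
                    dist = math.dist(p, q)
                    if dist < best[0]:
                        best = (dist, p, q)
    return best

points = [(0.0, 0.0), (3.0, 4.0), (0.5, 0.4), (7.0, 1.0)]
print(closest_pair_with_grid(points, d=1.0))   # distance ~0.64 between (0, 0) and (0.5, 0.4)
```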
Dynamic closest-pair problem.
The dynamic version for the closest-pair problem is stated as follows: given a dynamic set of objects, find algorithms and data structures for efficient recalculation of the closest pair of objects each time the objects are inserted or deleted.
If the bounding box for all points is known in advance and the constant-time floor function is available, then a data structure using expected formula_11 space has been suggested that supports expected-time formula_12 insertions and deletions and constant query time. When modified for the algebraic decision tree model, insertions and deletions would require formula_13 expected time. The complexity of the dynamic closest pair algorithm cited above is exponential in the dimension formula_6, and therefore such an algorithm becomes less suitable for high-dimensional problems.
An algorithm for the dynamic closest-pair problem in formula_6 dimensional space was developed by Sergey Bespamyatnikh in 1998. Points can be inserted and deleted in formula_12 time per point (in the worst case).
Notes.
| [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "O(n^2)"
},
{
"math_id": 2,
"text": "O(n\\log\\log n)"
},
{
"math_id": 3,
"text": "O(n\\log n)"
},
{
"math_id": 4,
"text": "S"
},
{
"math_id": 5,
"text": "k"
},
{
"math_id": 6,
"text": "d"
},
{
"math_id": 7,
"text": "3^k-1"
},
{
"math_id": 8,
"text": "2\\sqrt{k}"
},
{
"math_id": 9,
"text": "p"
},
{
"math_id": 10,
"text": "d/(2\\sqrt{k})"
},
{
"math_id": 11,
"text": "O(n)"
},
{
"math_id": 12,
"text": "O(\\log n)"
},
{
"math_id": 13,
"text": "O(\\log^2 n)"
}
] | https://en.wikipedia.org/wiki?curid=9311111 |
9312326 | Gabriel graph | In mathematics and computational geometry, the Gabriel graph of a set formula_0 of points in the Euclidean plane expresses one notion of proximity or nearness of those points. Formally, it is the graph formula_1 with vertex set formula_0 in which any two distinct points formula_2 and formula_3 are adjacent precisely when the closed disc having formula_4 as a diameter contains no other points. Another way of expressing the same adjacency criterion is that formula_5 and formula_6 should be the two closest given points to their midpoint, with no other given point being as close. Gabriel graphs naturally generalize to higher dimensions, with the empty disks replaced by empty closed balls. Gabriel graphs are named after K. Ruben Gabriel, who introduced them in a paper with Robert R. Sokal in 1969.
Percolation.
For Gabriel graphs of infinite random point sets, the finite site percolation threshold gives the fraction of points needed to support connectivity: if a random subset of fewer vertices than the threshold is given, the remaining graph will almost surely have only finite connected components, while if the size of the random subset is more than the threshold, then the remaining graph will almost surely have an infinite component (as well as finite components). This threshold was proved to exist by , and more precise values of both site and bond thresholds have been given by Norrenbrock.
Related geometric graphs.
The Gabriel graph is a subgraph of the Delaunay triangulation. It can be found in linear time if the Delaunay triangulation is given.
The Gabriel graph contains, as subgraphs, the Euclidean minimum spanning tree, the relative neighborhood graph, and the nearest neighbor graph.
It is an instance of a beta-skeleton. Like beta-skeletons, and unlike Delaunay triangulations, it is not a geometric spanner: for some point sets, distances within the Gabriel graph can be much larger than the Euclidean distances between points.
References.
| [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "p \\in S"
},
{
"math_id": 3,
"text": "q \\in S"
},
{
"math_id": 4,
"text": "pq"
},
{
"math_id": 5,
"text": "p"
},
{
"math_id": 6,
"text": "q"
}
] | https://en.wikipedia.org/wiki?curid=9312326 |
9312350 | Element distinctness problem | In computational complexity theory, the element distinctness problem or element uniqueness problem is the problem of determining whether all the elements of a list are distinct.
It is a well studied problem in many different models of computation. The problem may be solved by sorting the list and then checking if there are any consecutive equal elements; it may also be solved in linear expected time by a randomized algorithm that inserts each item into a hash table and compares only those elements that are placed in the same hash table cell.
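Both approaches just mentioned are short to state explicitly; the sketch below shows the sort-and-scan method and the hash-based method (Python sets are hash tables), purely as an illustration.

```python
def distinct_by_sorting(items):
    """O(n log n): sort, then check consecutive elements for equality."""
    ordered = sorted(items)
    return all(a != b for a, b in zip(ordered, ordered[1:]))

def distinct_by_hashing(items):
    """Linear expected time: insert each element into a hash table (a Python set)
    and report a repeat as soon as an element is already present."""
    seen = set()
    for x in items:
        if x in seen:
            return False
        seen.add(x)
    return True

print(distinct_by_sorting([3, 1, 4, 1, 5]))   # False: 1 appears twice
print(distinct_by_hashing([3, 1, 4, 5, 9]))   # True
```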
Several lower bounds in computational complexity are proved by reducing the element distinctness problem to the problem in question, i.e., by demonstrating that the solution of the element uniqueness problem may be quickly found after solving the problem in question.
Decision tree complexity.
The number of comparisons needed to solve the problem of size formula_0, in a comparison-based model of computation such as a decision tree or algebraic decision tree, is formula_1. Here, formula_2 invokes big theta notation, meaning that the problem can be solved in a number of comparisons proportional to formula_3 (a linearithmic function) and that all solutions require this many comparisons. In these models of computation, the input numbers may not be used to index the computer's memory (as in the hash table solution) but may only be accessed by computing and comparing simple algebraic functions of their values. For these models, an algorithm based on comparison sort solves the problem within a constant factor of the best possible number of comparisons. The same lower bound applies as well to the expected number of comparisons in the randomized algebraic decision tree model.
Real RAM Complexity.
If the elements in the problem are real numbers, the decision-tree lower bound extends to the real random-access machine model with an instruction set that includes addition, subtraction and multiplication of real numbers, as well as comparison and either division or remaindering ("floor"). It follows that the problem's complexity in this model is also formula_1. This RAM model covers more algorithms than the algebraic decision-tree model, as it encompasses algorithms that use indexing into tables. However, in this model all program steps are counted, not just decisions.
Turing Machine complexity.
A single-tape deterministic Turing machine can solve the problem, for "n" elements of "m" ≥ log "n" bits each, in time "O"("n"2"m"("m"+2–log "n")),
while on a nondeterministic machine the time complexity is "O"("nm"("n" + log "m")).
Quantum complexity.
Quantum algorithms can solve this problem faster, in formula_4 queries. The optimal algorithm is by Andris Ambainis. Yaoyun Shi first proved a tight lower bound when the size of the range is sufficiently large. Ambainis and Kutin independently (and via different proofs) extended his work to obtain the lower bound for all functions.
Generalization: Finding repeated elements.
Elements that occur more than formula_5 times in a multiset of size formula_0 may be found by a comparison-based algorithm, the Misra–Gries heavy hitters algorithm, in time formula_6. The element distinctness problem is a special case of this problem where formula_7. This time is optimal under the decision tree model of computation.
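A sketch of the Misra–Gries summary mentioned above, followed by a verification pass, is given below; the dictionary-based counters and the second counting pass are one common way of realizing the algorithm and are not taken from a particular source.

```python
def misra_gries_candidates(stream, k):
    """Return a set of at most k-1 candidates that contains every element
    occurring more than n/k times in the stream."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Decrement every counter; drop counters that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return set(counters)

def heavy_hitters(stream, k):
    """Elements occurring more than n/k times, confirmed by a second counting pass."""
    candidates = misra_gries_candidates(stream, k)
    n = len(stream)
    return {x for x in candidates if stream.count(x) > n / k}

data = [1, 1, 2, 1, 3, 1, 2, 1, 1, 4]
print(heavy_hitters(data, k=3))   # {1}: the only element occurring more than 10/3 times
```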
References.
| [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "\\Theta(n\\log n)"
},
{
"math_id": 2,
"text": "\\Theta"
},
{
"math_id": 3,
"text": "n\\log n"
},
{
"math_id": 4,
"text": "\\Theta(n^{2/3})"
},
{
"math_id": 5,
"text": "n/k"
},
{
"math_id": 6,
"text": "O(n\\log k)"
},
{
"math_id": 7,
"text": "k=n"
}
] | https://en.wikipedia.org/wiki?curid=9312350 |
9313 | Expander graph | Sparse graph with strong connectivity
In graph theory, an expander graph is a sparse graph that has strong connectivity properties, quantified using vertex, edge or spectral expansion. Expander constructions have spawned research in pure and applied mathematics, with several applications to complexity theory, design of robust computer networks, and the theory of error-correcting codes.
Definitions.
Intuitively, an expander graph is a finite, undirected multigraph in which every subset of the vertices that is not "too large" has a "large" boundary. Different formalisations of these notions give rise to different notions of expanders: "edge expanders", "vertex expanders", and "spectral expanders", as defined below.
A disconnected graph is not an expander, since the boundary of a connected component is empty. Every connected graph is an expander; however, different connected graphs have different expansion parameters. The complete graph has the best expansion property, but it has the largest possible degree. Informally, a graph is a good expander if it has low degree and high expansion parameters.
Edge expansion.
The "edge expansion" (also "isoperimetric number" or Cheeger constant) "h"("G") of a graph G on n vertices is defined as
formula_0
where formula_1
which can also be written as ∂"S" = "E"("S", "S̄") with "S̄" := "V"("G") \ "S" the complement of S and
formula_2
the edges between the subsets of vertices "A","B" ⊆ "V"("G").
In the equation, the minimum is over all nonempty sets S of at most "n"/2 vertices and ∂"S" is the "edge boundary" of S, i.e., the set of edges with exactly one endpoint in S.
Intuitively,
formula_3
is the minimum number of edges that need to be cut in order to split the graph in two.
The edge expansion normalizes this concept by dividing by the smaller number of vertices among the two parts.
To see how the normalization can drastically change the value, consider the following example.
Take two complete graphs with the same number of vertices n and add n edges between the two graphs by connecting their vertices one-to-one.
The minimum cut will be n but the edge expansion will be 1.
Notice that in formula_3, the optimization can be equivalently done either over subsets "S" with 0 < |"S"| ≤ "n"/2 or over any non-empty subset, since formula_4. The same is not true for "h"("G") because of the normalization by |"S"|.
If we want to write "h"("G") with an optimization over all non-empty subsets, we can rewrite it as
formula_5
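The definition can also be checked directly on very small graphs by exhaustive search. The following Python sketch (with an ad hoc vertex encoding) reproduces the two-cliques example above with n = 4:
```python
from itertools import combinations

def edge_expansion(vertices, edges):
    """h(G): minimise |boundary(S)| / |S| over nonempty S with |S| <= n/2 (brute force)."""
    n = len(vertices)
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for subset in combinations(vertices, size):
            S = set(subset)
            boundary = sum(1 for u, v in edges if (u in S) != (v in S))
            best = min(best, boundary / len(S))
    return best

# Two copies of K_4 joined by a perfect matching: the minimum cut is 4, but h(G) = 1.
m = 4
left = [("L", i) for i in range(m)]
right = [("R", i) for i in range(m)]
edges = {e for side in (left, right) for e in combinations(side, 2)}
edges |= {(left[i], right[i]) for i in range(m)}
print(edge_expansion(left + right, edges))  # 1.0
```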
Vertex expansion.
The "vertex isoperimetric number" "h"out("G") (also "vertex expansion" or "magnification") of a graph G is defined as
formula_6
where ∂out("S") is the "outer boundary" of S, i.e., the set of vertices in "V"("G") \ "S" with at least one neighbor in S. In a variant of this definition (called "unique neighbor expansion") ∂out("S") is replaced by the set of vertices in V with "exactly" one neighbor in S.
The "vertex isoperimetric number" "h"in("G") of a graph G is defined as
formula_7
where formula_8 is the "inner boundary" of S, i.e., the set of vertices in S with at least one neighbor in "V"("G") \ "S".
Spectral expansion.
When G is d-regular, a linear algebraic definition of expansion is possible based on the eigenvalues of the adjacency matrix "A" = "A"("G") of G, where Aij is the number of edges between vertices i and j. Because A is symmetric, the spectral theorem implies that A has n real-valued eigenvalues λ1 ≥ λ2 ≥ … ≥ λ"n". It is known that all these eigenvalues are in [−"d", "d"] and more specifically, it is known that λ"n" = −"d" if and only if G is bipartite.
More formally, we refer to an n-vertex, d-regular graph with
formula_9
as an ("n", "d", λ)-"graph". The bound given by an ("n", "d", λ)-graph on λ"i" for "i" ≠ 1 is useful many contexts, including the expander mixing lemma.
Spectral expansion can be "two-sided", as above, with formula_9, or it can be "one-sided", with formula_10. The latter is a weaker notion that holds also for bipartite graphs and is still useful for many applications, such as the Alon-Chung lemma.
Because G is regular, the uniform distribution formula_11 with "ui" = 1⁄"n" for all "i" = 1, …, "n" is the stationary distribution of G. That is, we have "Au" = "du", and u is an eigenvector of A with eigenvalue λ1 = "d", where d is the degree of the vertices of G. The "spectral gap" of G is defined to be "d" − λ2, and it measures the spectral expansion of the graph G.
If we set
formula_12
as this is the largest eigenvalue corresponding to an eigenvector orthogonal to u, it can be equivalently defined using the Rayleigh quotient:
formula_13
where
formula_14
is the 2-norm of the vector formula_15.
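As a small numerical illustration (a sketch using NumPy; the 3-dimensional hypercube is chosen only because its spectrum is easy to state):
```python
import numpy as np

# Adjacency matrix of the 3-dimensional hypercube Q_3: vertices are 3-bit strings,
# joined when they differ in exactly one bit (a 3-regular graph on 8 vertices).
A = np.zeros((8, 8))
for u in range(8):
    for b in range(3):
        A[u, u ^ (1 << b)] = 1

eig = np.sort(np.linalg.eigvalsh(A))[::-1]   # A is symmetric; eigenvalues in descending order
d = eig[0]                                   # largest eigenvalue equals the degree, 3
lam = max(abs(eig[1]), abs(eig[-1]))         # two-sided spectral expansion parameter
print(eig)               # approximately [3, 1, 1, 1, -1, -1, -1, -3]
print(lam, d - eig[1])   # lambda = 3 (Q_3 is bipartite, so lambda_n = -d); spectral gap d - lambda_2 = 2
```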
The normalized versions of these definitions are also widely used and more convenient in stating some results. Here one considers the matrix "A"/"d", which is the Markov transition matrix of the graph G. Its eigenvalues are between −1 and 1. For not necessarily regular graphs, the spectrum of a graph can be defined similarly using the eigenvalues of the Laplacian matrix. For directed graphs, one considers the singular values of the adjacency matrix A, which are equal to the square roots of the eigenvalues of the symmetric matrix "A"T"A".
Relationships between different expansion properties.
The expansion parameters defined above are related to each other. In particular, for any d-regular graph G,
formula_16
Consequently, for constant degree graphs, vertex and edge expansion are qualitatively the same.
Cheeger inequalities.
When G is d-regular, meaning each vertex is of degree d, there is a relationship between the isoperimetric constant "h"("G") and the gap "d" − λ2 in the spectrum of the adjacency operator of G. By standard spectral graph theory, the trivial eigenvalue of the adjacency operator of a d-regular graph is λ1 = "d" and the first non-trivial eigenvalue is λ2. If G is connected, then λ2 < "d". An inequality due to Dodziuk and independently Alon and Milman states that
formula_17
In fact, the lower bound is tight. The lower bound is achieved in the limit for the hypercube Qn, where "h"("G") = 1 and "d" – λ2 = 2. The upper bound is (asymptotically) achieved for a cycle, where "h"("Cn") = 4/"n" = Θ(1/"n") and "d" – λ2 = 2 – 2cos(2formula_18/"n") ≈ (2formula_18/"n")2 = Θ(1/"n"2). A better bound is given as
formula_19
These inequalities are closely related to the Cheeger bound for Markov chains and can be seen as a discrete version of Cheeger's inequality in Riemannian geometry.
Similar connections between vertex isoperimetric numbers and the spectral gap have also been studied:
formula_20
formula_21
Asymptotically speaking, the quantities , "h"out, and "h"in2 are all bounded above by the spectral gap "O"("d" – λ2).
Constructions.
There are four general strategies for explicitly constructing families of expander graphs. The first strategy is algebraic and group-theoretic, the second strategy is analytic and uses additive combinatorics, the third strategy is combinatorial and uses the zig-zag and related graph products, and the fourth strategy is based on lifts. Noga Alon showed that certain graphs constructed from finite geometries are the sparsest examples of highly expanding graphs.
Margulis–Gabber–Galil.
Algebraic constructions based on Cayley graphs are known for various variants of expander graphs. The following construction is due to Margulis and has been analysed by Gabber and Galil. For every natural number n, one considers the graph Gn with the vertex set formula_22, where formula_23: For every vertex formula_24, its eight adjacent vertices are
formula_25
Then the following holds:
Theorem. For all n, the graph Gn has second-largest eigenvalue formula_26.
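The bound can be checked numerically for small n. The following sketch builds Gn as a multigraph (loops and parallel edges are kept in the adjacency matrix) and reports its second-largest eigenvalue:
```python
import numpy as np

def margulis_adjacency(n):
    """Adjacency matrix of the Margulis graph G_n on Z_n x Z_n (an 8-regular multigraph)."""
    A = np.zeros((n * n, n * n))
    idx = lambda x, y: (x % n) * n + (y % n)
    for x in range(n):
        for y in range(n):
            u = idx(x, y)
            neighbours = [idx(x + 2 * y, y), idx(x - 2 * y, y),
                          idx(x + 2 * y + 1, y), idx(x - (2 * y + 1), y),
                          idx(x, y + 2 * x), idx(x, y - 2 * x),
                          idx(x, y + 2 * x + 1), idx(x, y - (2 * x + 1))]
            for v in neighbours:
                A[u, v] += 1            # loops and parallel edges contribute multiply
    return A

for n in (3, 5, 8):
    eig = np.sort(np.linalg.eigvalsh(margulis_adjacency(n)))
    # largest eigenvalue is the degree 8; the second-largest should stay below 5*sqrt(2) ~ 7.071
    print(n, round(eig[-1], 3), round(eig[-2], 3))
```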
Ramanujan graphs.
By a theorem of Alon and Boppana, all sufficiently large d-regular graphs satisfy formula_27, where λ2 is the second largest eigenvalue in absolute value. As a direct consequence, we know that for every fixed d and formula_28 , there are only finitely many ("n", "d", λ)-graphs. Ramanujan graphs are d-regular graphs for which this bound is tight, satisfying
formula_29
Hence Ramanujan graphs have an asymptotically smallest possible value of λ2. This makes them excellent spectral expanders.
Lubotzky, Phillips, and Sarnak (1988), Margulis (1988), and Morgenstern (1994) show how Ramanujan graphs can be constructed explicitly.
In 1985, Alon conjectured that most d-regular graphs on n vertices, for sufficiently large n, are almost Ramanujan. That is, for "ε" > 0, they satisfy
formula_30.
In 2003, Joel Friedman both proved the conjecture and specified what is meant by "most d-regular graphs" by showing that random d-regular graphs have formula_30 for every "ε" > 0 with probability 1 – "O"("n"-τ), where
formula_31
A simpler proof of a slightly weaker result was given by Puder.
Marcus, Spielman and Srivastava gave a construction of bipartite Ramanujan graphs based on lifts.
Zig-Zag product.
Reingold, Vadhan, and Wigderson introduced the zig-zag product in 2003. Roughly speaking, the zig-zag product of two expander graphs produces a graph with only slightly worse expansion. Therefore, a zig-zag product can also be used to construct families of expander graphs. If G is an ("n", "m", λ1)-graph and H is an ("m", "d", λ2)-graph, then the zig-zag product "G" ◦ "H" is an ("nm", "d"2, "φ"(λ1, λ2))-graph where φ has the following properties.
Specifically,
formula_32
Note that property (1) implies that the zig-zag product of two expander graphs is also an expander graph, thus zig-zag products can be used inductively to create a family of expander graphs.
Intuitively, the construction of the zig-zag product can be thought of in the following way. Each vertex of G is blown up to a "cloud" of m vertices, each associated to a different edge connected to the vertex. Each vertex is now labeled as ("v", "k") where v refers to an original vertex of G and k refers to the kth edge of v. Two vertices, ("v", "k") and ("w","l") are connected if it is possible to get from ("v", "k") to ("w", "l") through the following sequence of moves.
Lifts.
An r-lift of a graph is formed by replacing each vertex by r vertices, and each edge by a matching between the corresponding sets of formula_33 vertices. The lifted graph inherits the eigenvalues of the original graph, and has some additional eigenvalues. Bilu and Linial showed that every d-regular graph has a 2-lift in which the additional eigenvalues are at most formula_34 in magnitude. They also showed that if the starting graph is a good enough expander, then a good 2-lift can be found in polynomial time, thus giving an efficient construction of d-regular expanders for every d.
Bilu and Linial conjectured that the bound formula_34 can be improved to formula_35, which would be optimal due to the Alon-Boppana bound. This conjecture was proved in the bipartite setting by Marcus, Spielman and Srivastava, who used the method of interlacing polynomials. As a result, they obtained an alternative construction of bipartite Ramanujan graphs. The original non-constructive proof was turned into an algorithm by Michael B. Cohen. Later the method was generalized to r-lifts by Hall, Puder and Sawin.
Randomized constructions.
There are many results that show the existence of graphs with good expansion properties through probabilistic arguments. In fact, the existence of expanders was first proved by Pinsker who showed that for a randomly chosen n vertex left d regular bipartite graph, for all subsets of vertices with high probability, where cd is a constant depending on d that is "O"("d"-4). Alon and Roichman showed that for every 1 > "ε" > 0, there is some "c"("ε") > 0 such that the following holds: For a group G of order n, consider the Cayley graph on G with "c"("ε") log2 "n" randomly chosen elements from G. Then, in the limit of n getting to infinity, the resulting graph is almost surely an "ε"-expander.
Applications and useful properties.
The original motivation for expanders is to build economical robust networks (phone or computer): an expander with bounded degree is precisely an asymptotic robust graph with the number of edges growing linearly with size (number of vertices), for all subsets.
Expander graphs have found extensive applications in computer science, in designing algorithms, error correcting codes, extractors, pseudorandom generators, sorting networks () and robust computer networks. They have also been used in proofs of many important results in computational complexity theory, such as SL = L () and the PCP theorem (). In cryptography, expander graphs are used to construct hash functions.
In a 2006 survey of expander graphs, Hoory, Linial, and Wigderson split the study of expander graphs into four categories: extremal problems, typical behavior, explicit constructions, and algorithms. Extremal problems focus on the bounding of expansion parameters, while typical behavior problems characterize how the expansion parameters are distributed over random graphs. Explicit constructions focus on constructing graphs that optimize certain parameters, and algorithmic questions study the evaluation and estimation of parameters.
Expander mixing lemma.
The expander mixing lemma states that for an ("n", "d", λ)-graph, for any two subsets of the vertices "S", "T" ⊆ "V", the number of edges between S and T is approximately what you would expect in a random d-regular graph. The approximation is better the smaller λ is. In a random d-regular graph, as well as in an Erdős–Rényi random graph with edge probability "d"/"n", we expect "d"·|"S"|·|"T"|/"n" edges between S and T.
More formally, let "E"("S", "T") denote the number of edges between S and T. If the two sets are not disjoint, edges in their intersection are counted twice, that is,
formula_36
Then the expander mixing lemma says that the following inequality holds:
formula_37
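A brute-force check of this inequality on a small graph is straightforward. The sketch below uses the complete graph K8, which is an (8, 7, 1)-graph, and applies the double-counting convention above:
```python
import itertools

n, d, lam = 8, 7, 1                      # the complete graph K_8 is an (8, 7, 1)-graph
V = range(n)
E = list(itertools.combinations(V, 2))

def e_count(S, T):
    """E(S, T): edges with one endpoint in S and one in T; edges inside the intersection count twice."""
    return sum((u in S and v in T) + (v in S and u in T) for u, v in E)

worst = float("-inf")
for s in range(1, n + 1):
    S = set(range(s))                    # by symmetry of K_n, only |S| matters
    for t in range(1, n + 1):
        for T in map(set, itertools.combinations(V, t)):
            gap = abs(e_count(S, T) - d * len(S) * len(T) / n) - lam * (len(S) * len(T)) ** 0.5
            worst = max(worst, gap)
print(worst)   # negative: the mixing bound holds with room to spare on every pair of subsets
```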
Many properties of ("n", "d", λ)-graphs are corollaries of the expander mixing lemma, including the following.
Expander walk sampling.
The Chernoff bound states that, when sampling many independent samples from a random variable in the range [−1, 1], with high probability the average of our samples is close to the expectation of the random variable. The expander walk sampling lemma states that this also holds true when sampling from a walk on an expander graph. This is particularly useful in the theory of derandomization, since sampling according to an expander walk uses many fewer random bits than sampling independently.
AKS sorting network and approximate halvers.
Sorting networks take a set of inputs and perform a series of parallel steps to sort the inputs. A parallel step consists of performing any number of disjoint comparisons and potentially swapping pairs of compared inputs. The depth of a network is given by the number of parallel steps it takes. Expander graphs play an important role in the AKS sorting network, which achieves depth "O"(log "n"). While this is asymptotically the best known depth for a sorting network, the reliance on expanders makes the constant bound too large for practical use.
Within the AKS sorting network, expander graphs are used to construct bounded depth ε-halvers. An ε-halver takes as input a length n permutation of (1, …, "n") and halves the inputs into two disjoint sets A and B such that for each integer "k" ≤ "n"⁄2 at most εk of the k smallest inputs are in B and at most εk of the k largest inputs are in A. The sets A and B are an ε-halving.
Following , a depth d ε-halver can be constructed as follows. Take an n vertex, degree d bipartite expander with parts X and Y of equal size such that every subset of vertices of size at most εn has at least neighbors.
The vertices of the graph can be thought of as registers that contain inputs and the edges can be thought of as wires that compare the inputs of two registers. At the start, arbitrarily place half of the inputs in X and half of the inputs in Y and decompose the edges into d perfect matchings. The goal is to end with X roughly containing the smaller half of the inputs and Y containing roughly the larger half of the inputs. To achieve this, sequentially process each matching by comparing the registers paired up by the edges of this matching and correct any inputs that are out of order. Specifically, for each edge of the matching, if the larger input is in the register in X and the smaller input is in the register in Y, then swap the two inputs so that the smaller one is in X and the larger one is in Y. It is clear that this process consists of d parallel steps.
After all d rounds, take A to be the set of inputs in registers in X and B to be the set of inputs in registers in Y to obtain an ε-halving. To see this, notice that if a register u in X and v in Y are connected by an edge uv then after the matching with this edge is processed, the input in u is less than that of v. Furthermore, this property remains true throughout the rest of the process. Now, suppose for some "k" ≤ "n"⁄2 that more than εk of the inputs (1, …, "k") are in B. Then by expansion properties of the graph, the registers of these inputs in Y are connected with at least "k" registers in X. Altogether, this constitutes more than k registers so there must be some register A in X connected to some register B in Y such that the final input of A is not in (1, …, "k"), while the final input of B is. This violates the previous property however, and thus the output sets A and B must be an ε-halving.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "h(G) = \\min_{0 < |S| \\le \\frac{n}{2} } \\frac{|\\partial S|}{|S|},"
},
{
"math_id": 1,
"text": "\\partial S := \\{ \\{ u, v \\} \\in E(G) \\ : \\ u \\in S, v \\notin S \\},"
},
{
"math_id": 2,
"text": " E(A,B) = \\{ \\{ u, v \\} \\in E(G) \\ : \\ u \\in A , v \\in B \\}"
},
{
"math_id": 3,
"text": "\\min {|\\partial S|} = \\min E({S}, \\overline{S})"
},
{
"math_id": 4,
"text": "E(S, \\overline{S}) = E(\\overline{S}, S)"
},
{
"math_id": 5,
"text": "h(G) = \\min_{\\emptyset \\subsetneq S\\subsetneq V(G) } \\frac{E({S}, \\overline{S})}{\\min\\{|S|, |\\overline{S}|\\}}."
},
{
"math_id": 6,
"text": "h_{\\text{out}}(G) = \\min_{0 < |S|\\le \\frac{n}{2}} \\frac{|\\partial_{\\text{out}}(S)|}{|S|},"
},
{
"math_id": 7,
"text": "h_{\\text{in}}(G) = \\min_{0 < |S|\\le \\frac{n}{2}} \\frac{|\\partial_{\\text{in}}(S)|}{|S|},"
},
{
"math_id": 8,
"text": "\\partial_{\\text{in}}(S)"
},
{
"math_id": 9,
"text": "\\max_{i \\neq 1}|\\lambda_i| \\leq \\lambda"
},
{
"math_id": 10,
"text": "\\max_{i \\neq 1}\\lambda_i \\leq \\lambda"
},
{
"math_id": 11,
"text": "u\\in\\R^n"
},
{
"math_id": 12,
"text": "\\lambda=\\max\\{|\\lambda_2|, |\\lambda_n|\\}"
},
{
"math_id": 13,
"text": "\\lambda=\\max_{v \\perp u , v \\neq 0} \\frac{\\|Av\\|_2}{\\|v\\|_2},"
},
{
"math_id": 14,
"text": "\\|v\\|_2=\\left(\\sum_{i=1}^n v_i^2\\right)^{1/2}"
},
{
"math_id": 15,
"text": "v\\in\\R^n"
},
{
"math_id": 16,
"text": "h_{\\text{out}}(G) \\le h(G) \\le d \\cdot h_{\\text{out}}(G)."
},
{
"math_id": 17,
"text": "\\tfrac{1}{2}(d - \\lambda_2) \\le h(G) \\le \\sqrt{2d(d - \\lambda_2)}."
},
{
"math_id": 18,
"text": "\\pi"
},
{
"math_id": 19,
"text": " h(G) \\le \\sqrt{d^2 - \\lambda_2^2}."
},
{
"math_id": 20,
"text": "h_{\\text{out}}(G)\\le \\left(\\sqrt{4 (d-\\lambda_2)} + 1\\right)^2 -1"
},
{
"math_id": 21,
"text": "h_{\\text{in}}(G) \\le \\sqrt{8(d-\\lambda_2)}."
},
{
"math_id": 22,
"text": "\\mathbb Z_n \\times \\mathbb Z_n"
},
{
"math_id": 23,
"text": "\\mathbb Z_n=\\mathbb Z/n\\mathbb Z"
},
{
"math_id": 24,
"text": "(x,y)\\in\\mathbb Z_n \\times \\mathbb Z_n"
},
{
"math_id": 25,
"text": "(x \\pm 2y,y), (x \\pm (2y+1),y), (x,y \\pm 2x), (x,y \\pm (2x+1))."
},
{
"math_id": 26,
"text": "\\lambda(G)\\leq 5 \\sqrt{2}"
},
{
"math_id": 27,
"text": "\\lambda_2 \\ge 2\\sqrt{d-1} - o(1)"
},
{
"math_id": 28,
"text": "\\lambda< 2 \\sqrt{d-1}"
},
{
"math_id": 29,
"text": "\\lambda = \\max_{|\\lambda_i| < d} |\\lambda_i| \\le 2\\sqrt{d-1}."
},
{
"math_id": 30,
"text": "\\lambda \\le 2\\sqrt{d-1}+\\varepsilon"
},
{
"math_id": 31,
"text": "\\tau = \\left\\lceil\\frac{\\sqrt{d-1} +1}{2} \\right\\rceil."
},
{
"math_id": 32,
"text": "\\phi(\\lambda_1, \\lambda_2)=\\frac{1}{2}(1-\\lambda^2_2)\\lambda_2 +\\frac{1}{2}\\sqrt{(1-\\lambda^2_2)^2\\lambda_1^2 +4\\lambda^2_2}."
},
{
"math_id": 33,
"text": "r"
},
{
"math_id": 34,
"text": "O(\\sqrt{d\\log^3 d})"
},
{
"math_id": 35,
"text": "2\\sqrt{d-1}"
},
{
"math_id": 36,
"text": "E(S,T)=2|E(G[S\\cap T])| + E(S\\setminus T,T) + E(S,T\\setminus S). "
},
{
"math_id": 37,
"text": "\\left|E(S, T) - \\frac{d \\cdot |S| \\cdot |T|}{n}\\right| \\leq \\lambda \\sqrt{|S| \\cdot |T|}."
},
{
"math_id": 38,
"text": "\\chi(G) \\leq O \\left( \\frac{d}{\\log(1+d/\\lambda)} \\right)."
},
{
"math_id": 39,
"text": "\\left\\lceil \\log \\frac{n}{ \\log(d/\\lambda)} \\right\\rceil."
}
] | https://en.wikipedia.org/wiki?curid=9313 |
931381 | Problem of multiple generality | Failure in traditional logic to describe certain intuitively valid inferences
The problem of multiple generality names a failure in traditional logic to describe certain intuitively valid inferences. For example, it is intuitively clear that if:
"Some cat is feared by every mouse"
then it follows logically that:
"All mice are afraid of at least one cat".
The syntax of traditional logic (TL) permits exactly four sentence types: "All As are Bs", "No As are Bs", "Some As are Bs" and "Some As are not Bs". Each type is a quantified sentence containing exactly one quantifier. Since the sentences above each contain two quantifiers ('some' and 'every' in the first sentence and 'all' and 'at least one' in the second sentence), they cannot be adequately represented in TL. The best TL can do is to incorporate the second quantifier from each sentence into the second term, thus rendering the artificial-sounding terms 'feared-by-every-mouse' and 'afraid-of-at-least-one-cat'. This in effect "buries" these quantifiers, which are essential to the inference's validity, within the hyphenated terms. Hence the sentence "Some cat is feared by every mouse" is allotted the same logical form as the sentence "Some cat is hungry". And so the logical form in TL is:
"Some As are Bs"
"All Cs are Ds"
which is clearly invalid.
The first logical calculus capable of dealing with such inferences was Gottlob Frege's "Begriffsschrift" (1879), the ancestor of modern predicate logic, which dealt with quantifiers by means of variable bindings. Modestly, Frege did not argue that his logic was more expressive than extant logical calculi, but commentators on Frege's logic regard this as one of his key achievements.
Using modern predicate calculus, we quickly discover that the statement is ambiguous.
"Some cat is feared by every mouse"
could mean "(Some cat is feared) by every mouse" (paraphrasable as "Every mouse fears some cat"), i.e.
"For every mouse m, there exists a cat c, such that c is feared by m,"
formula_0
in which case the conclusion is trivial.
But it could also mean "Some cat is (feared by every mouse)" (paraphrasable as "There's a cat feared by all mice"), i.e.
"There exists one cat c, such that for every mouse m, c is feared by m."
formula_1
This example illustrates the importance of specifying the scope of such quantifiers as "for all" and "there exists". | [
{
"math_id": 0,
"text": "\\forall m \\, (\\, \\text{Mouse}(m) \\rightarrow \\exists c \\, (\\text{Cat}(c) \\land \\text{Fears}(m,c)) \\, )"
},
{
"math_id": 1,
"text": "\\exists c \\, ( \\, \\text{Cat}(c) \\land \\forall m \\, (\\text{Mouse}(m) \\rightarrow \\text{Fears}(m,c)) \\, )"
}
] | https://en.wikipedia.org/wiki?curid=931381 |
9315192 | Fractal sequence | Sequence that contains itself as a subsequence
In mathematics, a fractal sequence is one that contains itself as a proper subsequence. An example is
1, 1, 2, 1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, ...
If the first occurrence of each n is deleted, the remaining sequence is identical to the original. The process can be repeated indefinitely, so that actually, the original sequence contains not only one copy of itself, but rather, infinitely many.
Definition.
The precise definition of fractal sequence depends on a preliminary definition: a sequence "x = (xn)" is an infinitive sequence if for every "i",
(F1) "xn = i" for infinitely many "n".
Let "a(i,j)" be the "jth" index "n" for which "xn = i". An infinitive sequence "x" is a fractal sequence if two additional conditions hold:
(F2) if "i+1 = xn", then there exists "m < n" such that
formula_0
(F3) if "h < i" then for every "j" there is exactly one "k" such that
formula_1
According to (F2), the first occurrence of each "i > 1" in "x" must be preceded at least once by each of the numbers 1, 2, ..., i-1, and according to (F3), between consecutive occurrences of "i" in "x", each "h" less than "i" occurs exactly once.
Example.
Suppose θ is a positive irrational number. Let
S(θ) = the set of numbers c + dθ, where c and d are positive integers
and let
cn(θ) + θdn(θ)
be the sequence obtained by arranging the numbers in S(θ) in increasing order. The sequence cn(θ) is the "signature of θ", and it is a fractal sequence.
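A short Python sketch can generate an initial segment of a signature (the ranges and floating-point comparisons used here are only adequate for small prefixes):
```python
def signature(theta, terms=23, limit=40):
    """First `terms` values c_n of the signature of theta, by sorting the numbers c + d*theta."""
    pairs = sorted((c + d * theta, c) for c in range(1, limit) for d in range(1, limit))
    return [c for _, c in pairs[:terms]]

phi = (1 + 5 ** 0.5) / 2
print(signature(phi))        # 1, 2, 1, 3, 2, 4, 1, 3, 5, 2, 4, 1, 6, ...
print(signature(1 / phi))    # 1, 1, 2, 1, 2, 1, 3, 2, 1, 3, 2, 4, 1, ...
```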
For example, the signature of the golden ratio (i.e., θ = (1 + sqrt(5))/2) begins with
1, 2, 1, 3, 2, 4, 1, 3, 5, 2, 4, 1, 6, 3, 5, 2, 7, 4, 1, 6, 3, 8, 5, ...
and the signature of 1/θ = θ - 1 begins with
1, 1, 2, 1, 2, 1, 3, 2, 1, 3, 2, 4, 1, 3, 2, 4, 1, 3, 2, 4, 1, 3, 5, ...
These are sequences OEIS: and OEIS: in the On-Line Encyclopedia of Integer Sequences, where further examples from a variety of number-theoretic and combinatorial settings are given. | [
{
"math_id": 0,
"text": "i=x_m"
},
{
"math_id": 1,
"text": "a(i,j) < a(h,k) < a(i,j+1)."
}
] | https://en.wikipedia.org/wiki?curid=9315192 |
9315395 | Parametricity | Type theory concept
In programming language theory, parametricity is an abstract uniformity property enjoyed by parametrically polymorphic functions, which captures the intuition that all instances of a polymorphic function act the same way.
Idea.
Consider this example, based on a set "X" and the type "T"("X") = ["X" → "X"] of functions from "X" to itself. The higher-order function "twice""X" : "T"("X") → "T"("X") given by "twice""X"("f") = "f" ∘ "f", is intuitively independent of the set "X". The family of all such functions "twice""X", parametrized by sets "X", is called a "parametrically polymorphic function". We simply write twice for the entire family of these functions and write its type as formula_0"X". "T"("X") → "T"("X"). The individual functions "twice""X" are called the "components" or "instances" of the polymorphic function. Notice that all the component functions "twice""X" act "the same way" because they are given by the same rule. Other families of functions obtained by picking one arbitrary function from each "T"("X") → "T"("X") would not have such uniformity. They are called ""ad hoc" polymorphic functions". "Parametricity" is the abstract property enjoyed by the uniformly acting families such as twice, which distinguishes them from "ad hoc" families. With an adequate formalization of parametricity, it is possible to prove that the parametrically polymorphic functions of type formula_0"X". "T"("X") → "T"("X") are one-to-one with natural numbers. The function corresponding to the natural number "n" is given by the rule "f" formula_1 "f""n", i.e., the polymorphic Church numeral for "n". In contrast, the collection of all "ad hoc" families would be too large to be a set.
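As an informal illustration only (Python's type hints are not enforced at run time, so the sketch merely mimics the uniformity; the type variable X stands in for the set "X"):
```python
from typing import Callable, TypeVar

X = TypeVar("X")

def twice(f: Callable[[X], X]) -> Callable[[X], X]:
    """One rule for every X: twice(f) = f composed with f, independent of what X is."""
    return lambda x: f(f(x))

print(twice(lambda n: n + 1)(0))        # 2, instantiated at X = int
print(twice(lambda s: s + "!")("hi"))   # 'hi!!', instantiated at X = str

# twice corresponds to the Church numeral for 2 (f is applied 2 times); nesting it gives f^(2^k).
print(twice(twice(lambda n: n + 1))(0))  # 4
```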
History.
The "parametricity theorem" was originally stated by John C. Reynolds, who called it the "abstraction theorem". In his paper "Theorems for free!", Philip Wadler described an application of parametricity to derive theorems about parametrically polymorphic functions based on their types.
Programming language implementation.
Parametricity is the basis for many program transformations implemented in compilers for the Haskell programming language. These transformations were traditionally thought to be correct in Haskell because of Haskell's non-strict semantics. Despite being a lazy programming language, Haskell does support certain primitive operations—such as the operator codice_0—that enable so-called "selective strictness", allowing the programmer to force the evaluation of certain expressions. In their paper "Free theorems in the presence of "seq"", Patricia Johann and Janis Voigtlaender showed that because of the presence of these operations, the general parametricity theorem does not hold for Haskell programs; thus, these transformations are unsound in general.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\forall"
},
{
"math_id": 1,
"text": "\\mapsto"
}
] | https://en.wikipedia.org/wiki?curid=9315395 |
931668 | Susceptible individual | Member of a population who is at risk of becoming infected by a disease
In epidemiology a susceptible individual (sometimes known simply as a susceptible) is a member of a population who is at risk of becoming infected by a disease.
Susceptible individuals.
Susceptibles have been exposed to neither the wild strain of the disease nor a vaccination against it, and thus have not developed immunity. Those individuals who have antibodies against an antigen associated with a particular infectious disease will not be susceptible, even if they did not produce the antibody themselves (for example, infants younger than six months who still have maternal antibodies passed through the placenta and from the colostrum, and adults who have had a recent injection of antibodies). However, these individuals soon return to the susceptible state as the antibodies are broken down.
Some individuals may have a natural resistance to a particular infectious disease. However, except in some special cases such as malaria, these individuals make up such a small proportion of the total population that they can be ignored for the purposes of modelling an epidemic.
Mathematical model of susceptibility.
The proportion of the population who are susceptible to a particular disease is denoted "S". Due to the problems mentioned above, it is difficult to know this parameter for a given population. However, in a population with a rectangular population distribution (such as that of a developed country), it may be estimated by:
formula_0
Where "A" is the average age at which the disease is contracted and "L" is the life expectancy of the population. To understand the rationale behind this relation, think of "A" as the length/amount of time spent in the susceptible group (assuming an individual is susceptible before contracting the disease and immune afterwards) and "L" as the total length of time spent in the population. It thus follows that the proportion of time spent as a susceptible is A/L and, in a population with a rectangular distribution, the proportion of an individual's life spent in one group is representative of the proportion of the population in that group.
The advantage of estimating "S" in this way is that both the average age of infection and life expectancy will be well documented, and thus the other parameters needed to calculate "S" will be easily at hand.
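For instance, with purely hypothetical values of "A" = 5 years and "L" = 75 years, the estimate works out as follows:
```python
A = 5.0    # hypothetical average age at infection, in years
L = 75.0   # hypothetical life expectancy, in years
S = A / L
print(f"Estimated susceptible proportion: {S:.3f}")   # about 0.067, i.e. roughly 7% of the population
```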
The parameter "S" is important in the mathematical modelling of epidemics.
Susceptibility in virology.
Viruses are only able to cause disease or pathologies if they meet several criteria:
Hence susceptibility only refers to the fact that the virus is able to get into the cell, via having the proper receptor(s), and as a result, despite the fact that a host may be susceptible, the virus may still not be able to cause any pathologies within the host. Reasons for this are varied and may include suppression by the host immune system, or abortive measures taken by intrinsic cell defenses. | [
{
"math_id": 0,
"text": " \n{S} = \\frac {A} {L} \n"
}
] | https://en.wikipedia.org/wiki?curid=931668 |
9318403 | Markov brothers' inequality | In mathematics, the Markov brothers' inequality is an inequality, proved in the 1890s by brothers Andrey Markov and Vladimir Markov, two Russian mathematicians. This inequality bounds the maximum of the derivatives of a polynomial on an interval in terms of the maximum of the polynomial. For "k" = 1 it was proved by Andrey Markov, and for "k" = 2,3... by his brother Vladimir Markov.
The statement.
Let "P" be a polynomial of degree ≤ "n". Then for all nonnegative integers formula_0
formula_1
This inequality is tight, as equality is attained for Chebyshev polynomials of the first kind.
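The statement can be checked numerically on a dense grid (a sketch using NumPy's Chebyshev module; the grid resolution and the choice n = 5, k = 2 are arbitrary):
```python
import numpy as np
from numpy.polynomial import chebyshev as C
from math import prod

def markov_bound(n, k):
    """The constant n^2 (n^2 - 1^2) ... (n^2 - (k-1)^2) / (1 * 3 * ... * (2k-1))."""
    return prod(n**2 - j**2 for j in range(k)) / prod(range(1, 2 * k, 2))

n, k = 5, 2
Tn = C.Chebyshev.basis(n)           # Chebyshev polynomial T_n of the first kind on [-1, 1]
xs = np.linspace(-1.0, 1.0, 20001)  # dense grid including the endpoints
lhs = np.max(np.abs(Tn.deriv(k)(xs)))
rhs = markov_bound(n, k) * np.max(np.abs(Tn(xs)))
print(lhs, rhs)   # both close to 200: equality is (numerically) attained for T_5, k = 2
```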
Applications.
Markov's inequality is used to obtain lower bounds in computational complexity theory via the so-called "Polynomial Method".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "\\max_{-1 \\leq x \\leq 1} |P^{(k)}(x)| \\leq \\frac{n^2 (n^2 - 1^2) (n^2 - 2^2) \\cdots (n^2 - (k-1)^2)}{1 \\cdot 3 \\cdot 5 \\cdots (2k-1)} \\max_{-1 \\leq x \\leq 1} |P(x)|. "
}
] | https://en.wikipedia.org/wiki?curid=9318403 |
931880 | Delta-v budget | Estimate of total change in velocity of a space mission
In astrodynamics and aerospace, a delta-v budget is an estimate of the total change in velocity (delta-"v") required for a space mission. It is calculated as the sum of the delta-v required to perform each propulsive maneuver needed during the mission. As input to the Tsiolkovsky rocket equation, it determines how much propellant is required for a vehicle of given empty mass and propulsion system.
Delta-"v" is a scalar quantity dependent only on the desired trajectory and not on the mass of the space vehicle. For example, although more fuel is needed to transfer a heavier communication satellite from low Earth orbit to geosynchronous orbit than for a lighter one, the delta-"v" required is the same. Delta-"v" is also additive, as contrasted to rocket burn time, the latter having greater effect later in the mission when more fuel has been used up.
Tables of the delta-"v" required to move between different space regime are useful in the conceptual planning of space missions. In the absence of an atmosphere, the delta-"v" is typically the same for changes in orbit in either direction; in particular, gaining and losing speed cost an equal effort. An atmosphere can be used to slow a spacecraft by aerobraking.
A typical delta-"v" budget might enumerate various classes of maneuvers, delta-"v" per maneuver, and number of each maneuver required over the life of the mission, then simply sum the total delta-"v", much like a typical financial budget. Because the delta-v needed to achieve the mission usually varies with the relative position of the gravitating bodies, launch windows are often calculated from porkchop plots that show delta-"v" plotted against the launch time.
General principles.
The Tsiolkovsky rocket equation shows that the delta-v of a rocket (stage) is proportional to the logarithm of the fuelled-to-empty mass ratio of the vehicle, and to the specific impulse of the rocket engine. A key goal in designing space-mission trajectories is to minimize the required delta-v to reduce the size and expense of the rocket that would be needed to successfully deliver any particular payload to its destination.
The simplest delta-v budget can be calculated with Hohmann transfer, which moves from one circular orbit to another coplanar circular orbit via an elliptical transfer orbit. In some cases a bi-elliptic transfer can give a lower delta-v.
A more complex transfer occurs when the orbits are not coplanar. In that case there is an additional delta-v necessary to change the plane of the orbit. The velocity of the vehicle needs substantial burns at the intersection of the two orbital planes and the delta-v is usually extremely high. However, these plane changes can be almost free in some cases if the gravity and mass of a planetary body are used to perform the deflection. In other cases, boosting up to a relatively high altitude apoapsis gives low speed before performing the plane change, thus requiring lower total delta-v.
The slingshot effect can be used to give a boost of speed/energy; if a vehicle goes past a planetary or lunar body, it is possible to pick up (or lose) some of that body's orbital velocity relative to the Sun or another planet.
Another effect is the Oberth effect—this can be used to greatly decrease the delta-v needed, because using propellant at low potential energy/high speed multiplies the effect of a burn. Thus for example the delta-v for a Hohmann transfer from Earth's orbital radius to Mars's orbital radius (to overcome the Sun's gravity) is many kilometres per second, but the incremental burn from low Earth orbit (LEO) over and above the burn to overcome Earth's gravity is far less if the burn is done close to Earth than if the burn to reach a Mars transfer orbit is performed at Earth's orbit, but far away from Earth.
A less used effect is low energy transfers. These are highly nonlinear effects that work by orbital resonances and by choosing trajectories close to Lagrange points. They can be very slow, but use very little delta-v.
Because delta-v depends on the position and motion of celestial bodies, particularly when using the slingshot effect and Oberth effect, the delta-v budget changes with launch time. These can be plotted on a porkchop plot.
Course corrections usually also require some propellant budget. Propulsion systems never provide precisely the right propulsion in precisely the right direction at all times, and navigation also introduces some uncertainty. Some propellant needs to be reserved to correct variations from the optimum trajectory.
Budget.
Launch/landing.
The delta-v requirements for sub-orbital spaceflight are much lower than for orbital spaceflight. For the Ansari X Prize altitude of 100 km, Space Ship One required a delta-v of roughly 1.4 km/s. To reach the initial low Earth orbit of the International Space Station of 300 km (now 400 km), the delta-v is over six times higher, about 9.4 km/s. Because of the exponential nature of the rocket equation the orbital rocket needs to be considerably bigger.
Earth–Moon space—high thrust.
Delta-"v" values needed to move inside the Earth–Moon system (speeds lower than escape velocity) are given in km/s. This table assumes that the Oberth effect is being used—this is possible with high thrust chemical propulsion but not with current (as of 2018) electrical propulsion.
Earth–Moon space—low thrust.
Current electric ion thrusters produce a very low thrust (milli-newtons, yielding a small fraction of a "g)," so the Oberth effect cannot normally be used. This results in the journey requiring a higher delta-"v" and frequently a large increase in time compared to a high thrust chemical rocket. Nonetheless, the high specific impulse of electrical thrusters may significantly reduce the cost of the flight. For missions in the Earth–Moon system, an increase in journey time from days to months could be unacceptable for human space flight, but differences in flight time for interplanetary flights are less significant and could be favorable.
The table below presents delta-"v"'s in km/s, normally accurate to 2 significant figures and will be the same in both directions, unless aerobraking is used as described in the high thrust section above.
Earth Lunar Gateway—high thrust.
The Lunar Gateway space station is planned to be deployed in a highly elliptical seven-day near-rectilinear halo orbit (NRHO) around the Moon. Spacecraft launched from Earth would perform a powered flyby of the Moon followed by a NRHO orbit insertion burn to dock with the Gateway as it approaches the apoapsis point of its orbit.
Interplanetary.
The spacecraft is assumed to be using chemical propulsion and the Oberth effect.
According to Marsden and Ross, "The energy levels of the Sun–Earth L1 and L2 points differ from those of the Earth–Moon system by only 50 m/s (as measured by maneuver velocity)."
We may apply the formula
formula_0
(where μ = GM is the standard gravitational parameter of the sun, see Hohmann transfer orbit) to calculate the Δ"v" in km/s needed to arrive at various destinations from Earth (assuming circular orbits for the planets, and using perihelion distance for Pluto). In this table, the column labeled "Δ"v" to enter Hohmann orbit from Earth's orbit" gives the change from Earth's velocity to the velocity needed to get on a Hohmann ellipse whose other end will be at the desired distance from the Sun. The column labeled "v exiting LEO" gives the velocity needed (in a non-rotating frame of reference centred on Earth) when 300 km above Earth's surface. This is obtained by adding to the specific kinetic energy the square of the speed (7.73 km/s) of this low Earth orbit (that is, the depth of Earth's gravity well at this LEO). The column "Δ"v" from LEO" is simply the previous speed minus 7.73 km/s. The transit time is calculated as formula_1 years.
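A sketch of these two column computations for an Earth-to-Mars transfer (the constants are assumed textbook values, and circular coplanar orbits are assumed, as in the table):
```python
import math

MU_SUN = 1.32712440018e20   # standard gravitational parameter of the Sun, m^3/s^2 (assumed value)
AU = 1.495978707e11         # astronomical unit, in metres
V_LEO = 7.73e3              # circular speed of the 300 km reference orbit, m/s (from the text)

def hohmann_departure_dv(r1, r2, mu=MU_SUN):
    """Delta-v to leave a circular orbit of radius r1 onto a Hohmann ellipse reaching r2."""
    return math.sqrt(mu / r1) * (math.sqrt(2.0 * r2 / (r1 + r2)) - 1.0)

def dv_from_leo(v_inf, v_leo=V_LEO):
    """Delta-v needed in LEO to leave Earth with hyperbolic excess speed v_inf (Oberth effect)."""
    return math.sqrt(v_inf**2 + 2.0 * v_leo**2) - v_leo

v_inf = abs(hohmann_departure_dv(1.0 * AU, 1.524 * AU))           # Earth -> Mars transfer
print(f"heliocentric delta-v: {v_inf / 1e3:.2f} km/s")            # ~2.9 km/s
print(f"delta-v from LEO:     {dv_from_leo(v_inf) / 1e3:.2f} km/s")  # ~3.6 km/s
```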
Note that the values in the table only give the Δv needed to get to the orbital distance of the planet. The speed relative to the planet will still be considerable, and in order to go into orbit around the planet either aerocapture is needed using the planet's atmosphere, or more Δv is needed.
The "New Horizons" space probe to Pluto achieved a near-Earth speed of over 16 km/s which was enough to escape from the Sun. (It also got a boost from a fly-by of Jupiter.)
To get to the Sun, it is actually not necessary to use a Δ"v" of 24 km/s. One can use 8.8 km/s to go very far away from the Sun, then use a negligible Δ"v" to bring the angular momentum to zero, and then fall into the Sun. This sequence of two Hohmann transfers, one up and one down, is a special case of a bi-elliptic transfer. Also, the table does not give the values that would apply when using the Moon for a gravity assist. There are also possibilities of using one planet, like Venus which is the easiest to get to, to assist getting to other planets or the Sun. The "Galileo" spacecraft used Venus once and Earth twice in order to reach Jupiter. The "Ulysses" solar probe used Jupiter to attain polar orbit around the Sun.
Delta-vs between Earth, Moon and Mars.
Delta-v needed for various orbital manoeuvers using conventional rockets.
Near-Earth objects.
Near-Earth objects are asteroids whose orbits can bring them within about 0.3 astronomical units of the Earth. There are thousands of such objects that are easier to reach than the Moon or Mars. Their one-way delta-v budgets from LEO range upwards from , which is less than 2/3 of the delta-v needed to reach the Moon's surface. But NEOs with low delta-v budgets have long synodic periods, and the intervals between times of closest approach to the Earth (and thus most efficient missions) can be decades long.
The delta-v required to return from Near-Earth objects is usually quite small, sometimes as low as , with aerocapture using Earth's atmosphere. However, heat shields are required for this, which add mass and constrain spacecraft geometry. The orbital phasing can be problematic; once rendezvous has been achieved, low delta-v return windows can be fairly far apart (more than a year, often many years), depending on the body.
In general, bodies that are much further away or closer to the Sun than Earth, have more frequent windows for travel, but usually require larger delta-vs.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta v\n= \\sqrt{\\frac{\\mu}{r_1}}\n \\left( \\sqrt{\\frac{2 r_2}{r_1+r_2}} - 1 \\right)"
},
{
"math_id": 1,
"text": "((1+\\text{orbital radius})/2)^{3/2}/2"
}
] | https://en.wikipedia.org/wiki?curid=931880 |
931978 | Quasi-continuous function | In mathematics, the notion of a quasi-continuous function is similar to, but weaker than, the notion of a continuous function. All continuous functions are quasi-continuous but the converse is not true in general.
Definition.
Let formula_0 be a topological space. A real-valued function formula_1 is quasi-continuous at a point formula_2 if for any formula_3 and any open neighborhood formula_4 of formula_5 there is a non-empty open set formula_6 such that
formula_7
Note that in the above definition, it is not necessary that formula_8.
Properties.
If formula_9 is continuous, then formula_10 is quasi-continuous. If formula_10 is continuous and formula_11 is quasi-continuous, then formula_12 is quasi-continuous.
Example.
Consider the function formula_13 defined by formula_14 whenever formula_15 and formula_16 whenever formula_17. Clearly f is continuous everywhere except at x=0, thus quasi-continuous everywhere except (at most) at x=0. At x=0, take any open neighborhood U of x. Then there exists an open set formula_6 such that formula_18. Clearly this yields formula_19 thus f is quasi-continuous.
In contrast, the function formula_20 defined by formula_21 whenever formula_22 is a rational number and formula_23 whenever formula_22 is an irrational number is nowhere quasi-continuous, since every nonempty open set formula_24 contains some formula_25 with formula_26. | [
{
"math_id": 0,
"text": " X "
},
{
"math_id": 1,
"text": " f:X \\rightarrow \\mathbb{R} "
},
{
"math_id": 2,
"text": " x \\in X "
},
{
"math_id": 3,
"text": " \\epsilon > 0 "
},
{
"math_id": 4,
"text": " U "
},
{
"math_id": 5,
"text": " x "
},
{
"math_id": 6,
"text": " G \\subset U "
},
{
"math_id": 7,
"text": " |f(x) - f(y)| < \\epsilon \\;\\;\\;\\; \\forall y \\in G "
},
{
"math_id": 8,
"text": " x \\in G "
},
{
"math_id": 9,
"text": " f: X \\rightarrow \\mathbb{R} "
},
{
"math_id": 10,
"text": " f"
},
{
"math_id": 11,
"text": " g: X \\rightarrow \\mathbb{R} "
},
{
"math_id": 12,
"text": " f+g "
},
{
"math_id": 13,
"text": " f: \\mathbb{R} \\rightarrow \\mathbb{R} "
},
{
"math_id": 14,
"text": " f(x) = 0 "
},
{
"math_id": 15,
"text": " x \\leq 0 "
},
{
"math_id": 16,
"text": " f(x) = 1 "
},
{
"math_id": 17,
"text": " x > 0 "
},
{
"math_id": 18,
"text": " y < 0 \\; \\forall y \\in G "
},
{
"math_id": 19,
"text": " |f(0) - f(y)| = 0 \\; \\forall y \\in G"
},
{
"math_id": 20,
"text": " g: \\mathbb{R} \\rightarrow \\mathbb{R} "
},
{
"math_id": 21,
"text": " g(x) = 0 "
},
{
"math_id": 22,
"text": " x"
},
{
"math_id": 23,
"text": " g(x) = 1 "
},
{
"math_id": 24,
"text": "G"
},
{
"math_id": 25,
"text": "y_1, y_2"
},
{
"math_id": 26,
"text": "|g(y_1) - g(y_2)| = 1"
}
] | https://en.wikipedia.org/wiki?curid=931978 |
9320596 | Carleman's inequality | Carleman's inequality is an inequality in mathematics, named after Torsten Carleman, who proved it in 1923 and used it to prove the Denjoy–Carleman theorem on quasi-analytic classes.
Statement.
Let formula_0 be a sequence of non-negative real numbers, then
formula_1
The constant formula_2 (Euler's number) in the inequality is optimal, that is, the inequality does not always hold if formula_2 is replaced by a smaller number. The inequality is strict (it holds with "<" instead of "≤") if some element in the sequence is non-zero.
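The inequality can be checked numerically for a concrete sequence (a sketch; the prefix length is arbitrary, and the sequence a_n = 1/n² is chosen only because both sides converge):
```python
import math

def carleman_sides(a):
    """Partial sums of both sides of Carleman's inequality for a finite prefix a[0..N-1] (all a_n > 0)."""
    lhs, log_prod = 0.0, 0.0
    for n, an in enumerate(a, start=1):
        log_prod += math.log(an)          # running log of a_1 * ... * a_n
        lhs += math.exp(log_prod / n)     # (a_1 * ... * a_n)^(1/n)
    return lhs, math.e * sum(a)

a = [1.0 / k**2 for k in range(1, 10001)]
print(carleman_sides(a))   # left side stays below e * sum(a), which is close to e * pi^2 / 6 ~ 4.47
```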
Integral version.
Carleman's inequality has an integral version, which states that
formula_3
for any "f" ≥ 0.
Carleson's inequality.
A generalisation, due to Lennart Carleson, states the following:
for any convex function "g" with "g"(0) = 0, and for any -1 < "p" < ∞,
formula_4
Carleman's inequality follows from the case "p" = 0.
Proof.
An elementary proof is sketched below. From the inequality of arithmetic and geometric means applied to the numbers formula_5
formula_6
where MG stands for geometric mean, and MA — for arithmetic mean. The Stirling-type inequality formula_7 applied to formula_8 implies
formula_9 for all formula_10
Therefore,
formula_11
whence
formula_12
proving the inequality. Moreover, the inequality of arithmetic and geometric means of formula_13 non-negative numbers is known to be an equality if and only if all the numbers coincide, that is, in the present case, if and only if formula_14 for formula_15. As a consequence, Carleman's inequality is never an equality for a convergent series, unless all formula_16 vanish, just because the harmonic series is divergent.
One can also prove Carleman's inequality by starting with Hardy's inequality
formula_17
for the non-negative numbers "a"1,"a"2... and "p" > 1, replacing each "a""n" with "a""n"1/"p", and letting "p" → ∞.
Versions for specific sequences.
Christian Axler and Mehdi Hassani investigated Carleman's inequality for the specific cases of formula_18 where formula_19 is the formula_20th prime number. They also investigated the case where formula_21. They found that if formula_22 one can replace formula_23 with formula_24 in Carleman's inequality, but that if formula_21 then formula_23 remained the best possible constant. | [
{
"math_id": 0,
"text": "a_1,a_2,a_3,\\dots"
},
{
"math_id": 1,
"text": " \\sum_{n=1}^\\infty \\left(a_1 a_2 \\cdots a_n\\right)^{1/n} \\le \\mathrm{e} \\sum_{n=1}^\\infty a_n."
},
{
"math_id": 2,
"text": "\\mathrm{e}"
},
{
"math_id": 3,
"text": " \\int_0^\\infty \\exp\\left\\{ \\frac{1}{x} \\int_0^x \\ln f(t) \\,\\mathrm{d}t \\right\\} \\,\\mathrm{d}x \\leq \\mathrm{e} \\int_0^\\infty f(x) \\,\\mathrm{d}x "
},
{
"math_id": 4,
"text": " \\int_0^\\infty x^p \\mathrm{e}^{-g(x)/x} \\,\\mathrm{d}x \\leq \\mathrm{e}^{p+1} \\int_0^\\infty x^p \\mathrm{e}^{-g'(x)} \\,\\mathrm{d}x. "
},
{
"math_id": 5,
"text": "1\\cdot a_1,2\\cdot a_2,\\dots,n \\cdot a_n"
},
{
"math_id": 6,
"text": "\\mathrm{MG}(a_1,\\dots,a_n)=\\mathrm{MG}(1a_1,2a_2,\\dots,na_n)(n!)^{-1/n}\\le \\mathrm{MA}(1a_1,2a_2,\\dots,na_n)(n!)^{-1/n}"
},
{
"math_id": 7,
"text": "n!\\ge \\sqrt{2\\pi n}\\, n^n \\mathrm{e}^{-n}"
},
{
"math_id": 8,
"text": "n+1"
},
{
"math_id": 9,
"text": "(n!)^{-1/n} \\le \\frac{\\mathrm{e}}{n+1}"
},
{
"math_id": 10,
"text": "n\\ge1."
},
{
"math_id": 11,
"text": "MG(a_1,\\dots,a_n) \\le \\frac{\\mathrm{e}}{n(n+1)}\\, \\sum_{1\\le k \\le n} k a_k \\, ,"
},
{
"math_id": 12,
"text": "\\sum_{n\\ge1}MG(a_1,\\dots,a_n) \\le\\, \\mathrm{e}\\, \\sum_{k\\ge1} \\bigg( \\sum_{n\\ge k} \\frac{1}{n(n+1)}\\bigg) \\, k a_k =\\, \\mathrm{e}\\, \\sum_{k\\ge1}\\, a_k \\, ,"
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": "a_k= C/k"
},
{
"math_id": 15,
"text": "k=1,\\dots,n"
},
{
"math_id": 16,
"text": "a_n"
},
{
"math_id": 17,
"text": "\\sum_{n=1}^\\infty \\left (\\frac{a_1+a_2+\\cdots +a_n}{n}\\right )^p\\le \\left (\\frac{p}{p-1}\\right )^p\\sum_{n=1}^\\infty a_n^p"
},
{
"math_id": 18,
"text": "a_i= p_i"
},
{
"math_id": 19,
"text": "p_i"
},
{
"math_id": 20,
"text": "i"
},
{
"math_id": 21,
"text": "a_i=\\frac{1}{p_i}"
},
{
"math_id": 22,
"text": "a_i=p_i"
},
{
"math_id": 23,
"text": "e"
},
{
"math_id": 24,
"text": "\\frac{1}{e}"
}
] | https://en.wikipedia.org/wiki?curid=9320596 |
932217 | Classical unified field theories | Theoretical attempts to unify the forces of nature
Since the 19th century, some physicists, notably Albert Einstein, have attempted to develop a single theoretical framework that can account for all the fundamental forces of nature – a unified field theory. Classical unified field theories are attempts to create a unified field theory based on classical physics. In particular, unification of gravitation and electromagnetism was actively pursued by several physicists and mathematicians in the years between the two World Wars. This work spurred the purely mathematical development of differential geometry.
This article describes various attempts at formulating a classical (non-quantum), relativistic unified field theory. For a survey of classical relativistic field theories of gravitation that have been motivated by theoretical concerns other than unification, see Classical theories of gravitation. For a survey of current work toward creating a quantum theory of gravitation, see quantum gravity.
Overview.
The early attempts at creating a unified field theory began with the Riemannian geometry of general relativity, and attempted to incorporate electromagnetic fields into a more general geometry, since ordinary Riemannian geometry seemed incapable of expressing the properties of the electromagnetic field. Einstein was not alone in his attempts to unify electromagnetism and gravity; a large number of mathematicians and physicists, including Hermann Weyl, Arthur Eddington, and Theodor Kaluza also attempted to develop approaches that could unify these interactions. These scientists pursued several avenues of generalization, including extending the foundations of geometry and adding an extra spatial dimension.
Early work.
The first attempts to provide a unified theory were by G. Mie in 1912 and Ernst Reichenbacher in 1916. However, these theories were unsatisfactory, as they did not incorporate general relativity because general relativity had yet to be formulated. These efforts, along with those of Rudolf Förster, involved making the metric tensor (which had previously been assumed to be symmetric and real-valued) into an asymmetric and/or complex-valued tensor, and they also attempted to create a field theory for matter as well.
Differential geometry and field theory.
From 1918 until 1923, there were three distinct approaches to field theory: the gauge theory of Weyl, Kaluza's five-dimensional theory, and Eddington's development of affine geometry. Einstein corresponded with these researchers, and collaborated with Kaluza, but was not yet fully involved in the unification effort.
Weyl's infinitesimal geometry.
In order to include electromagnetism into the geometry of general relativity, Hermann Weyl worked to generalize the Riemannian geometry upon which general relativity is based. His idea was to create a more general infinitesimal geometry. He noted that in addition to a metric field there could be additional degrees of freedom along a path between two points in a manifold, and he tried to exploit this by introducing a basic method for comparison of local size measures along such a path, in terms of a gauge field. This geometry generalized Riemannian geometry in that there was a vector field "Q", in addition to the metric "g", which together gave rise to both the electromagnetic and gravitational fields. This theory was mathematically sound, albeit complicated, resulting in difficult and high-order field equations. The critical mathematical ingredients in this theory, the Lagrangians and curvature tensor, were worked out by Weyl and colleagues. Then Weyl carried out an extensive correspondence with Einstein and others as to its physical validity, and the theory was ultimately found to be physically unreasonable. However, Weyl's principle of gauge invariance was later applied in a modified form to quantum field theory.
Kaluza's fifth dimension.
Kaluza's approach to unification was to embed space-time into a five-dimensional cylindrical world, consisting of four space dimensions and one time dimension. Unlike Weyl's approach, Riemannian geometry was maintained, and the extra dimension allowed for the incorporation of the electromagnetic field vector into the geometry. Despite the relative mathematical elegance of this approach, in collaboration with Einstein and Einstein's aide Grommer it was determined that this theory did not admit a non-singular, static, spherically symmetric solution. This theory did have some influence on Einstein's later work and was further developed later by Klein in an attempt to incorporate relativity into quantum theory, in what is now known as Kaluza–Klein theory.
Eddington's affine geometry.
Sir Arthur Stanley Eddington was a noted astronomer who became an enthusiastic and influential promoter of Einstein's general theory of relativity. He was among the first to propose an extension of the gravitational theory based on the affine connection as the fundamental structure field rather than the metric tensor which was the original focus of general relativity. Affine connection is the basis for "parallel transport" of vectors from one space-time point to another; Eddington assumed the affine connection to be symmetric in its covariant indices, because it seemed plausible that the result of parallel-transporting one infinitesimal vector along another should produce the same result as transporting the second along the first. (Later workers revisited this assumption.)
Eddington emphasized what he considered to be epistemological considerations; for example, he thought that the cosmological constant version of the general-relativistic field equation expressed the property that the universe was "self-gauging". Since the simplest cosmological model (the De Sitter universe) that solves that equation is a spherically symmetric, stationary, closed universe (exhibiting a cosmological red shift, which is more conventionally interpreted as due to expansion), it seemed to explain the overall form of the universe.
Like many other classical unified field theorists, Eddington considered that in the Einstein field equations for general relativity the stress–energy tensor formula_0, which represents matter/energy, was merely provisional, and that in a truly unified theory the source term would automatically arise as some aspect of the free-space field equations. He also shared the hope that an improved fundamental theory would explain why the two elementary particles then known (proton and electron) have quite different masses.
The Dirac equation for the relativistic quantum electron caused Eddington to rethink his previous conviction that fundamental physical theory had to be based on tensors. He subsequently devoted his efforts into development of a "Fundamental Theory" based largely on algebraic notions (which he called "E-frames"). Unfortunately his descriptions of this theory were sketchy and difficult to understand, so very few physicists followed up on his work.
Einstein's geometric approaches.
When the equivalent of Maxwell's equations for electromagnetism is formulated within the framework of Einstein's theory of general relativity, the electromagnetic field energy (being equivalent to mass as defined by Einstein's equation E=mc2) contributes to the stress tensor and thus to the curvature of space-time, which is the general-relativistic representation of the gravitational field; or putting it another way, certain configurations of curved space-time "incorporate" effects of an electromagnetic field. This suggests that a purely geometric theory ought to treat these two fields as different aspects of the same basic phenomenon. However, ordinary Riemannian geometry is unable to describe the properties of the electromagnetic field as a purely geometric phenomenon.
Einstein tried to form a generalized theory of gravitation that would unify the gravitational and electromagnetic forces (and perhaps others), guided by a belief in a single origin for the entire set of physical laws. These attempts initially concentrated on additional geometric notions such as vierbeins and "distant parallelism", but eventually centered around treating both the metric tensor and the affine connection as fundamental fields. (Because they are not independent, the metric-affine theory was somewhat complicated.) In general relativity, these fields are symmetric (in the matrix sense), but since antisymmetry seemed essential for electromagnetism, the symmetry requirement was relaxed for one or both fields. Einstein's proposed unified-field equations (fundamental laws of physics) were generally derived from a variational principle expressed in terms of the Riemann curvature tensor for the presumed space-time manifold.
In field theories of this kind, particles appear as limited regions in space-time in which the field strength or the energy density is particularly high. Einstein and coworker Leopold Infeld managed to demonstrate that, in Einstein's final theory of the unified field, true singularities of the field did have trajectories resembling point particles. However, singularities are places where the equations break down, and Einstein believed that in an ultimate theory the laws should apply "everywhere", with particles being soliton-like solutions to the (highly nonlinear) field equations. Further, the large-scale topology of the universe should impose restrictions on the solutions, such as quantization or discrete symmetries.
The degree of abstraction, combined with a relative lack of good mathematical tools for analyzing nonlinear equation systems, makes it hard to connect such theories with the physical phenomena that they might describe. For example, it has been suggested that the torsion (antisymmetric part of the affine connection) might be related to isospin rather than electromagnetism; this is related to a discrete (or "internal") symmetry known to Einstein as "displacement field duality".
Einstein became increasingly isolated in his research on a generalized theory of gravitation, and most physicists consider his attempts ultimately unsuccessful. In particular, his pursuit of a unification of the fundamental forces ignored developments in quantum physics (and vice versa), most notably the discovery of the strong nuclear force and weak nuclear force.
Schrödinger's pure-affine theory.
Inspired by Einstein's approach to a unified field theory and Eddington's idea of the affine connection as the sole basis for differential geometric structure for space-time, Erwin Schrödinger from 1940 to 1951 thoroughly investigated pure-affine formulations of generalized gravitational theory. Although he initially assumed a symmetric affine connection, like Einstein he later considered the nonsymmetric field.
Schrödinger's most striking discovery during this work was that the metric tensor was "induced" upon the manifold via a simple construction from the Riemann curvature tensor, which was in turn formed entirely from the affine connection. Further, taking this approach with the simplest feasible basis for the variational principle resulted in a field equation having the form of Einstein's general-relativistic field equation with a cosmological term arising "automatically".
Skepticism from Einstein and published criticisms from other physicists discouraged Schrödinger, and his work in this area has been largely ignored.
Later work.
After the 1930s, progressively fewer scientists worked on classical unification, due to the continued development of quantum-theoretical descriptions of the non-gravitational fundamental forces of nature and the difficulties encountered in developing a quantum theory of gravity. Einstein pressed on with his attempts to theoretically unify gravity and electromagnetism, but he became increasingly isolated in this research, which he pursued until his death. Einstein's celebrity status brought much attention to his final quest, which ultimately saw limited success.
Most physicists, on the other hand, eventually abandoned classical unified theories. Current mainstream research on unified field theories focuses on the problem of creating a quantum theory of gravity and unifying with the other fundamental theories in physics, all of which are quantum field theories. (Some programs, such as string theory, attempt to solve both of these problems at once.) Of the four known fundamental forces, gravity remains the one force for which unification with the others proves problematic.
Although new "classical" unified field theories continue to be proposed from time to time, often involving non-traditional elements such as spinors or relating gravitation to an electromagnetic force, none have been generally accepted by physicists yet.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " T_{\\mu\\nu} "
}
] | https://en.wikipedia.org/wiki?curid=932217 |
932460 | Myers's theorem | Bounds the length of geodesic segments in Riemannian manifolds in terms of Ricci curvature
Myers's theorem, also known as the Bonnet–Myers theorem, is a celebrated, fundamental theorem in the mathematical field of Riemannian geometry. It was discovered by Sumner Byron Myers in 1941. It asserts the following:
<templatestyles src="Block indent/styles.css"/> Let formula_0 be a complete and connected Riemannian manifold of dimension formula_1 whose Ricci curvature satisfies for some fixed positive real number formula_2 the inequality formula_3 for every formula_4 and formula_5 of unit length. Then any two points of "M" can be joined by a geodesic segment of length at most formula_6.
In the special case of surfaces, this result was proved by Ossian Bonnet in 1855. For a surface, the Gauss, sectional, and Ricci curvatures are all the same, but Bonnet's proof easily generalizes to higher dimensions if one assumes a positive lower bound on the sectional curvature. Myers' key contribution was therefore to show that a Ricci lower bound is all that is needed to reach the same conclusion.
Corollaries.
The conclusion of the theorem says, in particular, that the diameter of formula_0 is finite. Therefore formula_7 must be compact, as a closed (and hence compact) ball of finite radius in any tangent space is carried onto all of formula_7 by the exponential map.
As a very particular case, this shows that any complete and noncompact smooth Riemannian manifold which is Einstein must have nonpositive Einstein constant.
Since formula_7 is connected, there exists the smooth universal covering map formula_8 One may consider the pull-back metric π*"g" on formula_9 Since formula_10 is a local isometry, Myers' theorem applies to the Riemannian manifold ("N",π*"g") and hence formula_11 is compact and the covering map is finite. This implies that the fundamental group of formula_7 is finite.
Cheng's diameter rigidity theorem.
The conclusion of Myers' theorem says that for any formula_12 one has "d""g"("p","q") ≤ π"r". In 1975, Shiu-Yuen Cheng proved:
<templatestyles src="Template:Blockquote/styles.css" /> If, in the setting of Myers' theorem, the diameter of formula_0 is exactly equal to formula_6, then formula_0 is isometric to the standard round "n"-sphere of constant sectional curvature 1/"r"².
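The bound formula_6 is sharp; the standard example showing this (added here as an illustrative aside, not drawn from the cited sources) is the round sphere, for which both the curvature hypothesis and the diameter bound hold with equality:
 % The round n-sphere of radius r has constant sectional curvature 1/r^2, so
 \[
   \operatorname{Ric}_p(v) = (n-1)\,\frac{1}{r^2} \quad \text{for every unit } v \in T_p S^n(r),
   \qquad
   \operatorname{diam}\bigl(S^n(r)\bigr) = \pi r .
 \]
 % By Cheng's rigidity theorem, the sphere is (up to isometry) the only manifold
 % attaining the diameter bound.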
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(M, g)"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "\\operatorname{Ric}_{p}(v)\\geq (n-1)\\frac{1}{r^2}"
},
{
"math_id": 4,
"text": "p\\in M"
},
{
"math_id": 5,
"text": "v\\in T_{p}M"
},
{
"math_id": 6,
"text": "\\pi r"
},
{
"math_id": 7,
"text": "M"
},
{
"math_id": 8,
"text": "\\pi : N \\to M."
},
{
"math_id": 9,
"text": "N."
},
{
"math_id": 10,
"text": "\\pi"
},
{
"math_id": 11,
"text": "N"
},
{
"math_id": 12,
"text": "p, q \\in M,"
}
] | https://en.wikipedia.org/wiki?curid=932460 |
932711 | Carmichael's theorem | On prime divisors in Fibonacci and Lucas sequences
In number theory, Carmichael's theorem, named after the American mathematician R. D. Carmichael,
states that, for any nondegenerate Lucas sequence of the first kind "U""n"("P", "Q") with relatively prime parameters "P", "Q" and positive discriminant, every element "U""n" with "n" ≠ 1, 2, 6 has at least one prime divisor that does not divide any earlier element; the only exceptions are the 12th Fibonacci number F(12) = "U"12(1, −1) = 144 and its equivalent "U"12(−1, −1) = −144.
In particular, for "n" greater than 12, the "n"th Fibonacci number F("n") has at least one prime divisor that does not divide any earlier Fibonacci number.
Carmichael (1913, Theorem 21) proved this theorem. Recently, Yabuta (2001) gave a simple proof.
Statement.
Given two relatively prime integers "P" and "Q", such that formula_0 and "PQ" ≠ 0, let "U""n"("P", "Q") be the Lucas sequence of the first kind defined by
formula_1
Then, for "n" ≠ 1, 2, 6, "U""n"("P", "Q") has at least one prime divisor that does not divide any "U""m"("P", "Q") with "m" < "n", except "U"12(1, −1) = F(12) = 144, "U"12(−1, −1) = −F(12) = −144.
Such a prime "p" is called a "characteristic factor" or a "primitive prime divisor" of "U""n"("P", "Q").
Indeed, Carmichael showed a slightly stronger theorem: For "n" ≠ 1, 2, 6, "U""n"("P", "Q") has at least one primitive prime divisor not dividing "D" except "U"3(1, −2) = "U"3(−1, −2) = 3, "U"5(1, −1) = "U"5(−1, −1) = F(5) = 5, "U"12(1, −1) = F(12) = 144, "U"12(−1, −1) = −F(12) = −144.
Note that "D" should be greater than 0; thus the cases "U"13(1, 2), "U"18(1, 2) and "U"30(1, 2), etc. are not included, since in this case "D" = −7 < 0.
Fibonacci and Pell cases.
The only exceptions in the Fibonacci case for "n" up to 12 are:
F(1) = 1 and F(2) = 1, which have no prime divisors
F(6) = 8, whose only prime divisor is 2 (which is F(3))
F(12) = 144, whose only prime divisors are 2 (which is F(3)) and 3 (which is F(4))
The smallest primitive prime divisors of F("n") are
1, 1, 2, 3, 5, 1, 13, 7, 17, 11, 89, 1, 233, 29, 61, 47, 1597, 19, 37, 41, 421, 199, 28657, 23, 3001, 521, 53, 281, 514229, 31, 557, 2207, 19801, 3571, 141961, 107, 73, 9349, 135721, 2161, 2789, 211, 433494437, 43, 109441, ... (sequence in the OEIS)
Carmichael's theorem says that every Fibonacci number, apart from the exceptions listed above, has at least one primitive prime divisor.
If "n" > 1, then the "n"th Pell number has at least one prime divisor that does not divide any earlier Pell number. The smallest primitive prime divisor of "n"th Pell number are
1, 2, 5, 3, 29, 7, 13, 17, 197, 41, 5741, 11, 33461, 239, 269, 577, 137, 199, 37, 19, 45697, 23, 229, 1153, 1549, 79, 53, 113, 44560482149, 31, 61, 665857, 52734529, 103, 1800193921, 73, 593, 9369319, 389, 241, ... (sequence in the OEIS)
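These lists can be checked by brute force for small indices. The following is a minimal Python sketch written here for illustration (it is not part of the cited sources); the helper names are arbitrary, and trial-division factoring is only adequate for small terms.
 def prime_factors(n):
     """Set of prime factors of |n|, by trial division (fine for small terms)."""
     n = abs(n)
     factors = set()
     d = 2
     while d * d <= n:
         while n % d == 0:
             factors.add(d)
             n //= d
         d += 1
     if n > 1:
         factors.add(n)
     return factors

 def lucas_U(P, Q, count):
     """First `count` terms U_0, U_1, ... of the Lucas sequence of the first kind."""
     seq = [0, 1]
     while len(seq) < count:
         seq.append(P * seq[-1] - Q * seq[-2])
     return seq[:count]

 def smallest_primitive_divisors(P, Q, count):
     """Smallest primitive prime divisor of U_n for n = 1 .. count-1 (1 means none)."""
     seq = lucas_U(P, Q, count)
     seen = set()
     out = []
     for n in range(1, count):
         fs = prime_factors(seq[n])
         new = fs - seen
         out.append(min(new) if new else 1)
         seen |= fs
     return out

 # Fibonacci numbers are U_n(1, -1); Pell numbers are U_n(2, -1).
 print(smallest_primitive_divisors(1, -1, 25))  # 1, 1, 2, 3, 5, 1, 13, 7, 17, 11, 89, 1, 233, ...
 print(smallest_primitive_divisors(2, -1, 15))  # 1, 2, 5, 3, 29, 7, 13, 17, 197, 41, ...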
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "D=P^2-4Q>0"
},
{
"math_id": 1,
"text": "\\begin{align}\nU_0(P,Q)&=0, \\\\\nU_1(P,Q)&=1, \\\\\nU_n(P,Q)&=P\\cdot U_{n-1}(P,Q)-Q\\cdot U_{n-2}(P,Q) \\qquad\\mbox{ for }n>1.\n\\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=932711 |
932822 | Delta Pavonis | Star in the constellation Pavo
Delta Pavonis, Latinized from δ Pavonis, is a single star in the southern constellation of Pavo. It has an apparent visual magnitude of 3.56, making it a fourth-magnitude star that is visible to the naked eye from the southern hemisphere. Parallax measurements yield an estimated distance of from Earth. This makes it one of the nearest bright stars to the Solar System. It is approaching the Sun with a radial velocity of −23.5 km/s, and is predicted to come as close as in around 49,200 years.
Observations.
This object is a subgiant of spectral type G8 IV; it will stop fusing hydrogen at its core relatively soon, starting the process of becoming a red giant. Hence, Delta Pavonis is 24% brighter than the Sun, but the effective temperature of its outer atmosphere is less: 5,571 K. Its mass is 105% of Sol's mass, with a mean radius 120% of Sol's radius. Delta Pavonis's surface convection zone extends downward to about 43.1% of the star's radius, but only contains 4.8% of the star's mass.
Spectroscopic examination of Delta Pavonis shows that it has a higher abundance of elements heavier than helium (metallicity) than does the Sun. This value is typically given in terms of the ratio of iron (chemical symbol Fe) to hydrogen (H) in a star's atmosphere, relative to that in Sol's atmosphere (iron being a good proxy for the presence of other heavy elements). The metallicity of Delta Pavonis is approximately
formula_0
This notation gives the logarithm of the iron-to-hydrogen ratio, relative to that of the Sun, meaning that Delta Pavonis's iron abundance is 214% of that of Sol. It is considered super metal-rich, and the high metallicity has slowed its evolution. Studies have shown a correlation between abundant heavy elements in stars, and the presence of a planetary system, so Delta Pavonis has a greater than average probability of harboring planets.
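As a quick check of the 214% figure, the bracket notation converts to a linear ratio by exponentiation; a one-line Python computation (added here purely for illustration) reproduces it:
 # [Fe/H] is a base-10 logarithm of the iron-to-hydrogen ratio relative to the Sun,
 # so the linear abundance ratio follows by exponentiation.
 fe_h = 0.33
 print(10 ** fe_h)  # ≈ 2.14, i.e. about 214% of the solar iron abundance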
The age of Delta Pavonis is approximately 6.6 to 6.9 billion years; in any case it is well into the billions of years. It appears to be rotating slowly, with a projected rotational velocity of 0.32 kilometers per second.
Possible planetary system.
The existence of a Jupiter-mass gas giant on a long-period orbit around Delta Pavonis is suspected, as of 2021, based on astrometric data. A study in 2023 detected a trend in the star's radial velocity, which may indicate the presence of a planetary companion, supporting the previous astrometric result. Such a planet would, at minimum, orbit with a period of 37 years at a distance of , and have a mass at least (0.22 MJ).
SETI.
Delta Pavonis has been identified by Maggie Turnbull and Jill Tarter of the SETI Institute as the "Best SETI target" among the 100 closest G-type stars. Properties in its favor include a high metallicity, minimal level of magnetic activity, low rotation rate, and kinematic membership in the thin disk population of the Milky Way. Gas giants orbiting in, near, or through a star's habitable zone may destabilize the orbits of terrestrial planets in that zone; the lack of detected radial velocity variation suggests that there are no such gas giants orbiting Delta Pavonis. However, observation has detected no artificial radio sources. Delta Pavonis, a close photometric match to the Sun, is the nearest solar analog that is not a member of a binary or multiple star system.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Div col/styles.css"/>
External links.
Coordinates: 20h 08m 43.6084s, −66° 10′ 55.446″ | [
{
"math_id": 0,
"text": "\\begin{smallmatrix}\\left [ \\frac{Fe}{H} \\right ]\\ =\\ 0.33\\end{smallmatrix}"
}
] | https://en.wikipedia.org/wiki?curid=932822 |
9328337 | Link/cut tree | A link/cut tree is a data structure for representing a forest, a set of rooted trees, and offers the following operations:
The represented forest may consist of very deep trees, so if we represent the forest as a plain collection of parent pointer trees, it might take us a long time to find the root of a given node. However, if we represent each tree in the forest as a link/cut tree, we can find which tree an element belongs to in O(log(n)) amortized time. Moreover, we can quickly adjust the collection of link/cut trees to changes in the represented forest. In particular, we can adjust it to merge (link) and split (cut) in "O"(log("n")) amortized time.
Link/cut trees divide each tree in the represented forest into vertex-disjoint paths, where each path is represented by an auxiliary data structure (often splay trees, though the original paper predates splay trees and thus uses biased binary search trees). The nodes in the auxiliary data structure are ordered by their depth in the corresponding represented tree. In one variation, "Naive Partitioning", the paths are determined by the most recently accessed paths and nodes, similar to Tango Trees. In "Partitioning by Size" the paths are determined by the heaviest child (the child whose subtree contains the most nodes) of the given node. This gives a more complicated structure, but reduces the cost of the operations from amortized O(log n) to worst case O(log n). It has uses in solving a variety of network flow problems.
In the original publication, Sleator and Tarjan referred to link/cut trees as "dynamic trees".
Structure.
We take a tree where each node has an arbitrary degree of unordered nodes and split it into paths. We call this the "represented tree". These paths are represented internally by auxiliary trees (here we will use splay trees), where the nodes from left to right represent the path from root to the last node on the path. Nodes that are connected in the represented tree that are not on the same preferred path (and therefore not in the same auxiliary tree) are connected via a "path-parent pointer". This pointer is stored in the root of the auxiliary tree representing the path.
Preferred paths.
When an access to a node "v" is made on the "represented tree", the path that is taken becomes the preferred path. The preferred child of a node is the last child that was on the access path, or null if the last access was to "v" or if no accesses were made to this particular branch of the tree. A preferred edge is the edge that connects the preferred child to "v".
In an alternate version, preferred paths are determined by the heaviest child.
Operations.
The operations we are interested in are FindRoot(Node v), Cut(Node v), Link(Node v, Node w), and Path(Node v).
Every operation is implemented using the Access(Node v) subroutine. When we "access" a vertex "v", the preferred path of the represented tree is changed to a path from the root "R" of the represented tree to the node "v". If a node on
the access path previously had a preferred child "u", and the path now goes to child "w", the old "preferred edge"
is deleted (changed to a "path-parent pointer"), and the new path now goes through "w".
Access.
After performing an access to node "v", it will no longer have any preferred children, and will be at the end of the path. Since nodes in the auxiliary tree are keyed by depth, this means that any nodes to the right of "v" in the auxiliary tree must be disconnected. In a splay tree this is a relatively simple procedure; we splay at "v", which brings "v" to the root of the auxiliary tree. We then disconnect the right subtree of "v", which is every node that came below it on the previous preferred path. The root of the disconnected tree will have a path-parent pointer, which we point to "v".
We now walk up the represented tree to the root "R", breaking and resetting the preferred path where necessary. To do this we follow the path-parent pointer from "v" (since "v" is now the root, we have direct access to the path-parent pointer). If the path that "v" is on already contains the root "R" (since the nodes are keyed by depth, it would be the left most node in the auxiliary tree), the path-parent pointer will be null, and we are done the access. Otherwise we follow the pointer to some node on another path "w". We want to break the old preferred path of "w" and reconnect it to the path "v" is on. To do this we splay at "w", and disconnect its right subtree, setting its path-parent pointer to "w". Since all nodes are keyed by depth, and every node in the path of "v" is deeper than every node in the path of "w" (since they are children of "w" in the represented tree), we simply connect the tree of "v" as the right child of "w". We splay at "v" again, which, since "v" is a child of the root "w", simply rotates "v" to root. We repeat this entire process until the path-parent pointer of "v" is null, at which point it is on the same preferred path as the root of the represented tree "R".
FindRoot.
FindRoot refers to finding the root of the represented tree that contains the node "v". Since the "access" subroutine puts "v" on the preferred path, we first execute an access. Now the node "v" is on the same preferred path, and thus the same auxiliary tree as the root "R". Since the auxiliary trees are keyed by depth, the root "R" will be the leftmost node of the auxiliary tree. So we simply choose the left child of "v" recursively until we can go no further, and this node is the
root "R". The root may be linearly deep (which is worst case for a splay tree), we therefore splay it so that the next access will be quick.
Cut.
Here we would like to cut the represented tree at node "v". First we access "v". This puts all the elements lower than "v" in the represented tree as the right child of "v" in the auxiliary tree. All the elements now in the left subtree of "v" are the nodes higher than "v" in the represented tree. We therefore disconnect the left child of "v" (which still maintains an attachment to the original represented tree through its path-parent pointer). Now "v" is the root of a represented tree. Accessing "v" breaks the preferred path below "v" as well, but that subtree maintains its connection to "v" through its path-parent pointer.
Link.
If "v" is a tree root and "w" is a vertex in another tree, link the trees
containing "v" and "w" by adding the edge(v, w), making "w" the parent of "v".
To do this we access both "v" and "w" in their respective trees, and make "w" the left
child of "v". Since "v" is the root, and nodes are keyed by depth in the auxiliary tree, accessing "v" means
that "v" will have no left child in the auxiliary tree (since as root it is the minimum depth). Adding "w" as a left
child effectively makes it the parent of "v" in the represented tree.
Path.
For this operation we wish to do some aggregate function over all the nodes (or edges) on the path from root "R" to node "v" (such as "sum" or "min" or "max" or "increase", etc.). To do this we access "v", which gives us an auxiliary tree with all the nodes on the path from root "R" to node "v". The data structure can be augmented with data we wish to retrieve, such as min or max values, or the sum of the costs in the subtree, which can then be returned from a given path in constant time.
Pseudocode of operations.
 Switch-Preferred-Child(x, y):
     if (right(x) is not null)
         path-parent(right(x)) = x
     right(x) = y
     if (y is not null)
         parent(y) = x

 Access(v):
     splay(v)
     Switch-Preferred-Child(v, null)
     if (path-parent(v) is not null)
         w = path-parent(v)
         splay(w)
         Switch-Preferred-Child(w, v)
         Access(v)

 Link(v, w):
     Access(v)
     Access(w)
     left(v) = w
     parent(w) = v

 Cut(v):
     Access(v)
     if (left(v) is not null)
         path-parent(left(v)) = path-parent(v)
         left(v) = null
     path-parent(v) = null
Analysis.
Cut and link have "O"(1) cost, plus that of the access. FindRoot has an "O"(log "n") amortized upper bound, plus the cost of the access. The data structure can be augmented with additional information (such as the min or max valued node in its subtrees, or the sum), depending on the implementation. Thus Path can return this information in constant time plus the access bound.
So it remains to bound the "access" to find our running time.
Access makes use of splaying, which we know has an "O"(log "n") amortized upper bound. So the remaining analysis deals with the number of times we need to splay. This is equal to the number of preferred child changes (the number of edges changed in the preferred path) as we traverse up the tree.
We bound "access" by using a technique called Heavy-Light Decomposition.
Heavy-light decomposition.
This technique calls an edge heavy or light depending on the number of nodes in the subtree. Here size("v") represents the number of nodes in the subtree of "v" in the represented tree. An edge is called "heavy" if size("v") > 1⁄2 size(parent("v")). Thus we can see that each node can have at most 1 "heavy" edge. An edge that is not a "heavy" edge is referred to as a "light" edge.
The "light-depth" refers to the number of light edges on a given path from root to vertex "v". "Light-depth" ≤ lg "n" because each time we traverse a light-edge we decrease the number of nodes by at least a factor of 2 (since it can have at most half the nodes of the parent).
So a given edge in the represented tree can be any of four possibilities: "heavy-preferred", "heavy-unpreferred", "light-preferred" or "light-unpreferred".
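The size-based classification itself is easy to compute for a fixed rooted tree. The following Python sketch is an illustration written for this article, not taken from the original publication (in a real link/cut tree the sizes refer to the changing represented tree); it labels every parent–child edge as heavy or light:
 def classify_edges(children, root):
     """Label each edge (parent, child) of a rooted tree as 'heavy' or 'light'.
     `children` maps a node to a list of its children."""
     size = {}

     def subtree_size(v):
         size[v] = 1
         for c in children.get(v, []):
             size[v] += subtree_size(c)
         return size[v]

     subtree_size(root)
     labels = {}
     for v, cs in children.items():
         for c in cs:
             # heavy iff the child's subtree holds more than half of the parent's nodes
             labels[(v, c)] = "heavy" if 2 * size[c] > size[v] else "light"
     return labels

 # Small example rooted at 0: at most one heavy edge leaves each node.
 tree = {0: [1, 2], 1: [3, 4, 5], 3: [6]}
 print(classify_edges(tree, 0))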
First we prove an formula_0 upper bound.
"O"(log 2 "n") upper bound.
The splay operation of the access gives us log "n", so we need to bound the number of accesses to log "n" to prove the "O"(log 2 "n") upper bound.
Every change of preferred edge results in a new preferred edge being formed. So we count the number of preferred edges formed. Since there are at most log "n" edges that are light on any given path, there are at most log "n" light edges changing to preferred.
The number of heavy edges becoming preferred can be as large as "n" − 1 for any given operation, but it is "O"(log "n") amortized. Over a series of executions we can have "n"-1 heavy edges become preferred (as there are at most "n"-1 heavy edges total in the represented tree), but from then on the number of heavy edges that become preferred is equal to the number of heavy edges that became unpreferred on a previous step. For every heavy edge that becomes unpreferred a light edge must become preferred. We have seen already that the number of light edges that can become preferred is at most log "n". So the number of heavy edges that become preferred for "m" operations is "O"("m" log "n" + "n"). Over enough operations ("m" ≥ "n") this averages to "O"(log "n").
Improving to "O"(log "n") upper bound.
We have bounded the number of preferred child changes at "O"(log "n") amortized, so if we can show that each preferred child change has cost O(1) amortized we can bound the "access" operation at "O"(log "n"). This is done using the potential method.
Let s(v) be the number of nodes under "v" in the tree of auxiliary trees. Then the potential function formula_1. We know that the amortized cost of splaying is bounded by:
formula_2
We know that after splaying, "v" is the child of its path-parent node "w". So we know that:
formula_3
We use this inequality and the amortized cost of access to achieve a telescoping sum that is bounded by:
formula_4
where "R" is the root of the represented tree, and we know the number of preferred child changes is &NoBreak;&NoBreak;. "s"("R") = "n", so we have &NoBreak;&NoBreak; amortized.
Application.
Link/cut trees can be used to solve the dynamic connectivity problem for acyclic graphs. Given two nodes x and y, they are connected if and only if FindRoot(x) = FindRoot(y). Another data structure that can be used for the same purpose is Euler tour tree.
In solving the maximum flow problem, link/cut trees can be used to improve the running time of Dinic's algorithm from formula_5 to formula_6.
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "O(\\log^{2} n)"
},
{
"math_id": 1,
"text": "\\Phi = \\sum_{v} \\log{s(v)}"
},
{
"math_id": 2,
"text": "cost(splay(v)) \\leq 3 \\left( \\log{s(root(v))} - \\log{s(v)} \\right) + 1"
},
{
"math_id": 3,
"text": " s(v) \\leq s(w) "
},
{
"math_id": 4,
"text": " 3\\left(\\log{s(R)} - \\log{s(v)}\\right) + O(\\text{number of preferred child changes})"
},
{
"math_id": 5,
"text": "O(V^2 E)"
},
{
"math_id": 6,
"text": "O(VE \\log V)"
}
] | https://en.wikipedia.org/wiki?curid=9328337 |
9328562 | Boltzmann's entropy formula | Equation in statistical mechanics
In statistical mechanics, Boltzmann's equation (also known as the Boltzmann–Planck equation) is a probability equation relating the entropy formula_0, also written as formula_1, of an ideal gas to the multiplicity (commonly denoted as formula_2 or formula_3), the number of real microstates corresponding to the gas's macrostate:
S = kB ln W        (1)
where formula_4 is the Boltzmann constant (also written as simply formula_5) and equal to 1.380649 × 10−23 J/K, and formula_6 is the natural logarithm function (log base "e").
In short, the Boltzmann formula shows the relationship between entropy and the number of ways the atoms or molecules of a certain kind of thermodynamic system can be arranged.
History.
The equation was originally formulated by Ludwig Boltzmann between 1872 and 1875, but later put into its current form by Max Planck in about 1900. To quote Planck, "the logarithmic connection between entropy and probability was first stated by L. Boltzmann in his kinetic theory of gases".
A 'microstate' is a state specified in terms of the constituent particles of a body of matter or radiation that has been specified as a macrostate in terms of such variables as internal energy and pressure. A macrostate is experimentally observable, with at least a finite extent in spacetime. A microstate can be instantaneous, or can be a trajectory composed of a temporal progression of instantaneous microstates. In experimental practice, such are scarcely observable. The present account concerns instantaneous microstates.
The value of W was originally intended to be proportional to the "Wahrscheinlichkeit" (the German word for probability) of a macroscopic state for some probability distribution of possible microstates—the collection of (unobservable microscopic single particle) "ways" in which the (observable macroscopic) thermodynamic state of a system can be realized by assigning different positions and momenta to the respective molecules.
There are many instantaneous microstates that apply to a given macrostate. Boltzmann considered collections of such microstates. For a given macrostate, he called the collection of all possible instantaneous microstates of a certain kind by the name "monode", for which Gibbs' term "ensemble" is used nowadays. For single particle instantaneous microstates, Boltzmann called the collection an "ergode". Subsequently, Gibbs called it a "microcanonical ensemble", and this name is widely used today, perhaps partly because Bohr was more interested in the writings of Gibbs than of Boltzmann.
Interpreted in this way, Boltzmann's formula is the most basic formula for the thermodynamic entropy. Boltzmann's paradigm was an ideal gas of N "identical" particles, of which Ni are in the i-th microscopic condition (range) of position and momentum. For this case, the probability of each microstate of the system is equal, so it was equivalent for Boltzmann to calculate the number of microstates associated with a macrostate. W was historically misinterpreted as literally meaning the number of microstates, and that is what it usually means today. W can be counted using the formula for permutations
W = N! / (N1! N2! ⋯ Ni! ⋯)        (2)
where i ranges over all possible molecular conditions and "!" denotes factorial. The "correction" in the denominator is due to the fact that identical particles in the same condition are indistinguishable. W is sometimes called the "thermodynamic probability" since it is an integer greater than one, while mathematical probabilities are always numbers between zero and one.
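For small occupation numbers the multiplicity and the corresponding entropy can be evaluated directly. The following Python sketch is an illustrative toy calculation, not taken from the sources; the occupation numbers are invented.
 import math

 k_B = 1.380649e-23  # Boltzmann constant in J/K

 def multiplicity(occupation):
     """W = N! / (N_1! N_2! ...) for a list of occupation numbers N_i."""
     N = sum(occupation)
     W = math.factorial(N)
     for n_i in occupation:
         W //= math.factorial(n_i)
     return W

 def boltzmann_entropy(occupation):
     """S = k_B ln W for the macrostate described by the occupation numbers."""
     return k_B * math.log(multiplicity(occupation))

 # Toy example: 10 particles spread over three molecular conditions
 print(multiplicity([5, 3, 2]))       # 2520 microstates
 print(boltzmann_entropy([5, 3, 2]))  # ≈ 1.08e-22 J/K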
Introduction of the natural logarithm.
In his 1877 paper, Boltzmann clarifies how molecular states are counted to determine the state distribution number, introducing the logarithm to simplify the equation.
Boltzmann writes:
“The first task is to determine the permutation number, previously designated by 𝒫, for any state distribution. Denoting by J the sum of the permutations 𝒫 for all possible state distributions, the quotient 𝒫/J is the state distribution’s probability, henceforth denoted by W. We would first like to calculate the permutations 𝒫 for the state distribution characterized by w0 molecules with kinetic energy 0, w1 molecules with kinetic energy ϵ, etc. …
“The most likely state distribution will be for those w0, w1 … values for which 𝒫 is a maximum or since the numerator is a constant, for which the denominator is a minimum. The values w0, w1 must simultaneously satisfy the two constraints (1) and (2). Since the denominator of 𝒫 is a product, it is easiest to determine the minimum of its logarithm, …”
Therefore, by making the denominator small, he maximizes the permutation number. To handle the product of factorials, he works with its natural logarithm, which turns the product into a sum. This is the reason for the natural logarithm in Boltzmann’s entropy formula.
Generalization.
Boltzmann's formula applies to microstates of a system, each possible microstate of which is presumed to be equally probable.
But in thermodynamics, the universe is divided into a system of interest, plus its surroundings; then the entropy of Boltzmann's microscopically specified system can be identified with the system entropy in classical thermodynamics. The microstates of such a thermodynamic system are "not" equally probable—for example, high energy microstates are less probable than low energy microstates for a thermodynamic system kept at a fixed temperature by allowing contact with a heat bath.
For thermodynamic systems where microstates of the system may not have equal probabilities, the appropriate generalization, called the Gibbs entropy, is:
S = −kB Σi pi ln pi        (3)
This reduces to equation (1) if the probabilities "p"i are all equal.
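The reduction is a one-line calculation (a standard step, written out here for completeness): with W equally probable microstates, each pi equals 1/W, so
 \[
   S \;=\; -k_\mathrm{B}\sum_{i=1}^{W} \frac{1}{W}\,\ln\frac{1}{W}
     \;=\; -k_\mathrm{B}\,\ln\frac{1}{W}
     \;=\; k_\mathrm{B}\ln W .
 \]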
Boltzmann used a formula_7 formula as early as 1866. He interpreted ρ as a density in phase space—without mentioning probability—but since this satisfies the axiomatic definition of a probability measure we can retrospectively interpret it as a probability anyway. Gibbs gave an explicitly probabilistic interpretation in 1878.
Boltzmann himself used an expression equivalent to (3) in his later work and recognized it as more general than equation (1). That is, equation (1) is a corollary of
equation (3)—and not vice versa. In every situation where equation (1) is valid,
equation (3) is valid also—and not vice versa.
Boltzmann entropy excludes statistical dependencies.
The term Boltzmann entropy is also sometimes used to indicate entropies calculated based on the approximation that the overall probability can be factored into an identical separate term for each particle—i.e., assuming each particle has an identical independent probability distribution, and ignoring interactions and correlations between the particles. This is exact for an ideal gas of identical particles that move independently apart from instantaneous collisions, and is an approximation, possibly a poor one, for other systems.
The Boltzmann entropy is obtained if one assumes one can treat all the component particles of a thermodynamic system as statistically independent. The probability distribution of the system as a whole then factorises into the product of "N" separate identical terms, one term for each particle; and when the summation is taken over each possible state in the 6-dimensional phase space of a "single" particle (rather than the 6"N"-dimensional phase space of the system as a whole), the Gibbs entropy
simplifies to the Boltzmann entropy formula_8.
This reflects the original statistical entropy function introduced by Ludwig Boltzmann in 1872. For the special case of an ideal gas it exactly corresponds to the proper thermodynamic entropy.
For anything but the most dilute of real gases, formula_8 leads to increasingly wrong predictions of entropies and physical behaviours, by ignoring the interactions and correlations between different molecules. Instead one must consider the ensemble of states of the system as a whole, called by Boltzmann a "holode", rather than single particle states. Gibbs considered several such kinds of ensembles; relevant here is the "canonical" one.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "S_\\mathrm{B}"
},
{
"math_id": 2,
"text": "\\Omega"
},
{
"math_id": 3,
"text": "W"
},
{
"math_id": 4,
"text": "k_\\mathrm B "
},
{
"math_id": 5,
"text": "k"
},
{
"math_id": 6,
"text": "\\ln"
},
{
"math_id": 7,
"text": "\\rho\\ln\\rho"
},
{
"math_id": 8,
"text": "S_{\\mathrm B}"
}
] | https://en.wikipedia.org/wiki?curid=9328562 |
9330285 | Skeletochronology | Set of methodologies in skeletology
Skeletochronology is a technique used to determine the individual, chronological ages of vertebrates by counting lines of arrested, annual growth, also known as LAGs, within skeletal tissues. Within the annual growth record of the bone there are broad and narrow lines: broad lines represent the growth period and narrow lines represent a growth pause. One narrow line therefore marks one growth year, which makes these lines suitable for determining the age of the specimen. Not all bones grow at the same rate, and the growth rate of an individual bone changes over a lifetime, so periodic growth marks can take irregular patterns; such irregularities indicate significant chronological events in an individual's life. The use of bone as a biomaterial is useful in investigating structure-property relationships, and, in addition to current research in skeletochronology, the ability of bone to adapt and change its structure in response to the external environment provides potential for further research in bone histomorphometry. Amphibians and reptiles are commonly age-determined using this method because they undergo discrete annual activity cycles such as winter dormancy or metamorphosis; however, it cannot be used for all species of bony animals. The different environmental and biological factors that influence bone growth and development can become a barrier to determining age, as a complete record may be rare.
Method.
The extraction and study of bone tissue varies depending on the taxa involved and the amount of material available. However, skeletochronology works best with LAGs that encircle the entire shaft in a ring form and have a regular pattern of deposition. These growths show a repeated pattern, 'described mathematically as a time series'. The tissues are sectioned using a microtome and stained with haematoxylin, then viewed under a microscope. The analysis is frequently performed on dry bones, with alcohol treatment or frozen preservation applied if needed, since the aim is to enhance the optical contrast that results from the tissues' differing optical properties.
It is important to consider potential problems when selecting particular bones to study. If there is a weak optical contrast, it makes counting the arrested growth rings difficult and often inaccurate. There is also a possible presence of additional growth marks that are created to supplement weaker areas of growth. In these circumstances, alternative bones must be considered that may present more accurate data. Another case is the doubling of lines of arrested growth, where two closely adjacent twin lines can be seen. However, when the pattern is widespread across several age classes in that species, the twin LAGs can be counted as a single year's growth. The most common issue to arise is the destruction of bone from biological processes, most frequently observed in mammals and birds. This causes age to be significantly underestimated. Over the lifespan of an individual, bone is constantly being reconstructed as specialised cells remove and deposit bone, leading to a constant renewal of the bone material. The continuous resorption and deposition leaves gaps in the record of growth, and missing bone tissue can occur at any stage of a vertebrate's life cycle; 'complete specimens that allow precise identification are extremely rare'.
Therefore, to account for any missing bone tissues in a specimen, retrocalculation of skeletal age is to be completed.
Three approaches to retrocalculation can be identified.
1) Retrocalculation of skeletal age, which involves identifying the major and minor axes of the bone's cross section; the circumferences of the bone are then calculated using Ramanujan's approximation for the circumference of an ellipse (a numerical sketch follows this list),
formula_0.
2) Retrocalculation through an arithmetic estimate, which requires sampling several parts of other bones and making an estimate of the number of missing tissues.
3) Retrocalculation by superimposition in an ontogenetic series, which requires a complete growth record for one individual so that its histological cross sections can be overlaid and reconstructed on another individual.
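A minimal Python sketch of the circumference step in approach 1 is given below; it was written for illustration only, and the semi-axis measurements are invented, not taken from any published specimen.
 import math

 def ellipse_circumference(a, b):
     """Ramanujan's approximation for the circumference of an ellipse with semi-axes a and b."""
     return math.pi * (3 * (a + b) - math.sqrt((a + 3 * b) * (3 * a + b)))

 # Hypothetical bone cross-section with semi-axes of 2.1 mm and 1.4 mm
 print(ellipse_circumference(2.1, 1.4))  # ≈ 11.1 mm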
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C=\\pi[3(a+b)-\\surd(a+3b)(3a+b)]"
}
] | https://en.wikipedia.org/wiki?curid=9330285 |
9331066 | Induction generator | Type of AC electrical generator
An induction generator or "asynchronous generator" is a type of alternating current (AC) electrical generator that uses the principles of induction motors to produce electric power. Induction generators operate by mechanically turning their rotors faster than synchronous speed. A regular AC induction motor usually can be used as a generator, without any internal modifications. Because they can recover energy with relatively simple controls, induction generators are useful in applications such as mini hydro power plants, wind turbines, or in reducing high-pressure gas streams to lower pressure.
An induction generator draws reactive excitation current from an external source. Induction generators have an AC rotor and cannot bootstrap using residual magnetization to black start a de-energized distribution system as synchronous machines do. Power factor correcting capacitors can be added externally to neutralize a constant amount of the variable reactive excitation current. After starting, an induction generator can use a capacitor bank to produce reactive excitation current, but the isolated power system's voltage and frequency are not self-regulating and destabilize readily.
Principle of operation.
An induction generator produces electrical power when its rotor is turned faster than the "synchronous speed". For a four-pole machine (two pairs of poles on the stator), the synchronous speed is 1800 revolutions per minute (rpm) when powered by a 60 Hz source and 1500 rpm at 50 Hz. In motor operation the machine always turns slightly slower than the synchronous speed. The difference between synchronous and operating speed is called "slip" and is usually expressed as a percentage of the synchronous speed. For example, a motor operating at 1450 rpm with a synchronous speed of 1500 rpm is running at a slip of +3.3%.
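These figures follow from the standard relations Ns = 120·f/p and s = (Ns − N)/Ns. A small Python sketch, added here purely for illustration, reproduces them:
 def synchronous_speed_rpm(f_hz, poles):
     """Synchronous speed Ns = 120 f / p for a machine with `poles` magnetic poles."""
     return 120.0 * f_hz / poles

 def slip(n_sync, n_rotor):
     """Slip as a fraction of synchronous speed; negative values mean generator operation."""
     return (n_sync - n_rotor) / n_sync

 print(synchronous_speed_rpm(60, 4))  # 1800 rpm
 print(slip(1500, 1450))              # ≈ +0.033, the +3.3% motoring example above
 print(slip(1800, 1860))              # ≈ -0.033, i.e. driven above synchronous speed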
In operation as a motor, the stator flux rotation is at the synchronous speed, which is faster than the rotor speed. This causes the stator flux to cycle past the rotor at the slip frequency, inducing rotor current through the mutual inductance between the stator and rotor. The induced current creates a rotor flux with magnetic polarity opposite to the stator. In this way, the rotor is dragged along behind the stator flux, with the currents in the rotor induced at the slip frequency. The motor runs at the speed where the induced rotor current gives rise to a torque equal to the shaft load.
In generator operation, a prime mover (turbine or engine) drives the rotor above the synchronous speed (negative slip). The stator flux induces current in the rotor, but the opposing rotor flux is now cutting the stator coils, so a current is induced in the stator coils 270° behind the magnetizing current, in phase with the magnetizing voltage. The machine delivers real (in-phase) power to the power system.
Excitation.
An induction motor requires an externally supplied current to the stator windings in order to induce a current in the rotor. Because the current in an inductor is integral of the voltage with respect to time, for a sinusoidal voltage waveform the current lags the voltage by 90°, and the induction motor always consumes reactive power, regardless of whether it is consuming electrical power and delivering mechanical power as a motor or consuming mechanical power and delivering electrical power to the system.
A source of excitation current for magnetizing flux (reactive power) for the stator is still required, to induce rotor current. This can be supplied from the electrical grid or, once it starts producing power, from a capacitive reactance. The generating mode for induction motors is complicated by the need to excite the rotor, which being induced by an alternating current is demagnetized at shutdown, with no residual magnetization to bootstrap a cold start. It is necessary to connect an external source of magnetizing current to initialize production. The power frequency and voltage are not self-regulating. The generator supplies current out of phase with the voltage, so additional external equipment is required to build a functional isolated power system. This is similar to operating an induction motor in parallel with a synchronous machine serving as a power factor compensator. A feature of generator mode in parallel with the grid is that the rotor speed is higher than in motoring mode, and active power is then delivered to the grid. Another disadvantage of the induction generator is that it draws a significant magnetizing current, I0 = 20–35%.
Active power.
Active power delivered to the line is proportional to slip above the synchronous speed. Full rated power of the generator is reached at very small slip values (motor dependent, typically 3%). At the synchronous speed of 1800 rpm, the generator produces no power. When the driving speed is increased to 1860 rpm (a typical example), full output power is produced. If the prime mover is unable to produce enough power to fully drive the generator, the speed will settle somewhere in the range between 1800 and 1860 rpm.
Required capacitance.
A capacitor bank must supply reactive power to the motor when used in stand-alone mode. The reactive power supplied should be equal or greater than the reactive power that the generator normally draws when operating as a motor.
Torque vs. slip.
The fundamental principle of induction generators is the conversion of mechanical energy to electrical energy. This requires an external torque applied to the rotor to turn it faster than the synchronous speed. However, indefinitely increasing the torque does not lead to an indefinite increase in power generation. The torque of the rotating magnetic field excited from the armature counters the motion of the rotor and opposes overspeed, because the induced currents act against the relative motion. As the speed of the machine increases, this counter torque reaches a maximum value (the breakdown torque) beyond which the operating conditions become unstable. Ideally, induction generators work best in the stable region between the no-load condition and the maximum torque point.
Rated current.
The maximum power that can be produced by an induction motor operated as a generator is limited by the rated current of the generator's windings.
Grid and stand-alone connections.
In induction generators, the reactive power required to establish the air gap magnetic flux is provided by a capacitor bank connected to the machine in case of stand-alone system and in case of grid connection it draws reactive power from the grid to maintain its air gap flux. For a grid-connected system, frequency and voltage at the machine will be dictated by the electric grid, since it is very small compared to the whole system. For stand-alone systems, frequency and voltage are complex function of machine parameters, capacitance used for excitation, and load value and type.
Uses.
Induction generators are often used in wind turbines and some micro hydro installations due to their ability to produce useful power at varying rotor speeds. Induction generators are mechanically and electrically simpler than other generator types. They are also more rugged, requiring no brushes or commutators.
Limitations.
An induction generator connected to a capacitor system can generate sufficient reactive power to operate independently. When the load current exceeds the capability of the generator to supply both magnetization reactive power and load power, the generator will immediately cease to produce power. The load must be removed and the induction generator restarted with either an external DC motor or, if present, residual magnetism in the core.
Induction generators are particularly suitable for wind generating stations as in this case speed is always a variable factor. Unlike synchronous motors, induction generators are load-dependent and cannot be used alone for grid frequency control.
Example application.
As an example, consider the use of a 10 hp, 1760 r/min, 440 V, three-phase induction motor (that is, an induction machine operated in the asynchronous generator regime) as an asynchronous generator. The full-load current of the motor is 10 A and the full-load power factor is 0.8.
Required capacitance per phase if capacitors are connected in delta:
Apparent power formula_0
Active power formula_1
Reactive power formula_2
For the machine to run as an asynchronous generator, the capacitor bank must supply a minimum of 4567 VAR / 3 phases = 1523 VAR per phase. The voltage across each capacitor is 440 V because the capacitors are connected in delta.
Capacitive current Ic = Q/E = 1523/440 = 3.46 A
Capacitive reactance per phase Xc = E/Ic = 127 Ω
Minimum capacitance per phase:
C = 1 / (2*π*f*Xc) = 1 / (2 * 3.141 * 60 * 127) = 21 μF.
If the load also absorbs reactive power, the capacitor bank must be increased in size to compensate.
The prime mover speed should be chosen so as to generate a frequency of 60 Hz:
Typically, the slip should be similar in magnitude to the full-load value when the machine runs as a motor, but negative (generator operation):
if Ns = 1800, one can choose N=Ns+40 rpm
Required prime mover speed N = 1800 + 40 = 1840 rpm.
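The arithmetic of this example can be checked with a short script. The Python sketch below was written for illustration; small differences from the figures above come only from rounding √3 to 1.73 in the text.
 import math

 E = 440.0   # line voltage, V
 I = 10.0    # full-load current, A
 pf = 0.8    # full-load power factor
 f = 60.0    # frequency, Hz

 S = math.sqrt(3) * E * I        # apparent power, VA (≈ 7621 VA)
 P = S * pf                      # active power, W
 Q = math.sqrt(S**2 - P**2)      # reactive power, VAR (≈ 4573 VAR)

 Q_phase = Q / 3                 # per phase, delta-connected capacitor bank
 Ic = Q_phase / E                # capacitive current per capacitor, A
 Xc = E / Ic                     # capacitive reactance per phase, ohms
 C = 1 / (2 * math.pi * f * Xc)  # minimum capacitance per phase, F
 print(round(C * 1e6, 1), "uF")  # ≈ 21 µF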
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S = \\sqrt{3} E I = 1.73 * 440 * 10 = 7612 VA "
},
{
"math_id": 1,
"text": " P = S \\cos(\\theta) = 7612 * 0.8 = 6090 W "
},
{
"math_id": 2,
"text": "Q = \\sqrt{S^2-P^2} = 4567 VAR "
}
] | https://en.wikipedia.org/wiki?curid=9331066 |
9332179 | A/B testing | Experiment methodology
A/B testing (also known as bucket testing, split-run testing, or split testing) is a user experience research method. A/B tests consist of a randomized experiment that usually involves two variants (A and B), although the concept can be also extended to multiple variants of the same variable. It includes application of statistical hypothesis testing or "two-sample hypothesis testing" as used in the field of statistics. A/B testing is a way to compare multiple versions of a single variable, for example by testing a subject's response to variant A against variant B, and determining which of the variants is more effective.
Overview.
"A/B testing" is a shorthand for a simple randomized controlled experiment, in which a number of samples (e.g. A and B) of a single vector-variable are compared. These values are similar except for one variation which might affect a user's behavior. A/B tests are widely considered the simplest form of controlled experiment, especially when they only involve two variants. However, by adding more variants to the test, its complexity grows.
A/B tests are useful for understanding user engagement and satisfaction with online features, such as a new feature or product. Large social media sites like LinkedIn, Facebook, and Instagram use A/B testing to make user experiences more successful and as a way to streamline their services.
Today, A/B tests are being used also for conducting complex experiments on subjects such as network effects when users are offline, how online services affect user actions, and how users influence one another. A/B testing is used by data engineers, marketers, designers, software engineers, and entrepreneurs, among others. Many positions rely on the data from A/B tests, as they allow companies to understand growth, increase revenue, and optimize customer satisfaction.
Version A might be used at present (thus forming the control group), while version B is modified in some respect vs. A (thus forming the treatment group). For instance, on an e-commerce website the purchase funnel is typically a good candidate for A/B testing, since even marginal decreases in drop-off rates can represent a significant gain in sales. Significant improvements can sometimes be seen through testing elements like copy text, layouts, images and colors, but not always. In these tests, users only see one of two versions, since the goal is to discover which of the two versions is preferable.
Multivariate testing or multinomial testing is similar to A/B testing, but may test more than two versions at the same time or use more controls. Simple A/B tests are not valid for observational, quasi-experimental or other non-experimental situations—commonplace with survey data, offline data, and other, more complex phenomena.
A/B testing is claimed by some to be a change in philosophy and business-strategy in certain niches, though the approach is identical to a between-subjects design, which is commonly used in a variety of research traditions. A/B testing as a philosophy of web development brings the field into line with a broader movement toward evidence-based practice. The benefits of A/B testing are considered to be that it can be performed continuously on almost anything, especially since most marketing automation software now typically comes with the ability to run A/B tests on an ongoing basis.
Common test statistics.
"Two-sample hypothesis tests" are appropriate for comparing the two samples where the samples are divided by the two control cases in the experiment. Z-tests are appropriate for comparing means under stringent conditions regarding normality and a known standard deviation. Student's t-tests are appropriate for comparing means under relaxed conditions when less is assumed. Welch's t test assumes the least and is therefore the most commonly used test in a two-sample hypothesis test where the mean of a metric is to be optimized. While the mean of the variable to be optimized is the most common choice of estimator, others are regularly used.
For a comparison of two binomial distributions such as a click-through rate one would use Fisher's exact test.
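As an illustration of how such tests are typically run in practice, the following Python sketch applies Welch's t-test to simulated per-user metrics for two variants using SciPy; all numbers are invented for illustration.
 import numpy as np
 from scipy.stats import ttest_ind

 rng = np.random.default_rng(0)
 # Simulated per-user metric (e.g. session length in minutes) for the two variants
 a = rng.normal(loc=10.0, scale=3.0, size=5000)
 b = rng.normal(loc=10.2, scale=3.5, size=5000)

 # equal_var=False selects Welch's t-test, which does not assume equal variances
 t_stat, p_value = ttest_ind(a, b, equal_var=False)
 print(t_stat, p_value)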
Challenges.
When conducting A/B testing, the user should evaluate the pros and cons to see if it aligns well with the hoped-for results.
Pros: Through A/B testing, it is easy to get a clear idea of what users prefer, since it is directly testing one thing over the other. It is based on real user behavior, so the data can be very helpful especially when determining what works better between two options. In addition, it can also provide answers to very specific design questions. One example of this is Google's A/B testing with hyperlink colors. In order to optimize revenue, they tested dozens of different hyperlink hues to see which color the users tend to click more on.
Cons: As mentioned above, A/B testing is good for specific design questions but this can also be a downside since it is mostly only good for specific design problems with very measurable outcomes. It could also be a very costly and timely process. Depending on the size of the company and/or team, there could be a lot of meetings and discussions about what exactly to test and what the impact of the A/B test is. If there's not a significant impact, it could end up as a waste of time and resources.
In December 2018, representatives with experience in large-scale A/B testing from thirteen different organizations (Airbnb, Amazon, Booking.com, Facebook, Google, LinkedIn, Lyft, Microsoft, Netflix, Twitter, Uber, and Stanford University) attended a summit and summarized the top challenges in a SIGKDD Explorations paper.
The challenges can be grouped into four areas: Analysis, Engineering and Culture, Deviations from Traditional A/B tests, and Data quality.
History.
It is difficult to definitively establish when A/B testing was first used. The first randomized double-blind trial, to assess the effectiveness of a homeopathic drug, occurred in 1835. Experimentation with advertising campaigns, which has been compared to modern A/B testing, began in the early twentieth century. The advertising pioneer Claude Hopkins used promotional coupons to test the effectiveness of his campaigns. However, this process, which Hopkins described in his Scientific Advertising, did not incorporate concepts such as statistical significance and the null hypothesis, which are used in statistical hypothesis testing. Modern statistical methods for assessing the significance of sample data were developed separately in the same period. This work was done in 1908 by William Sealy Gosset when he altered the Z-test to create Student's t-test.
With the growth of the internet, new ways to sample populations have become available. Google engineers ran their first A/B test in the year 2000 in an attempt to determine the optimum number of results to display on its search engine results page. The first test was unsuccessful due to glitches that resulted from slow loading times. Later A/B testing research would be more advanced, but the foundation and underlying principles generally remain the same, and in 2011, 11 years after Google's first test, Google ran over 7,000 different A/B tests.
In 2012, a Microsoft employee working on the search engine Microsoft Bing created an experiment to test different ways of displaying advertising headlines. Within hours, the alternative format produced a revenue increase of 12% with no impact on user-experience metrics. Today, companies like Microsoft and Google each conduct over 10,000 A/B tests annually.
Many companies now use the "designed experiment" approach to making marketing decisions, with the expectation that results from relevant samples can improve conversion outcomes. It is an increasingly common practice as the tools and expertise in this area grow.
Examples.
Email marketing.
A company with a customer database of 2,000 people decides to create an email campaign with a discount code in order to generate sales through its website. It creates two versions of the email with different calls to action (the part of the copy which encourages customers to do something; in the case of a sales campaign, to make a purchase) and identifying promotional codes.
All other elements of the emails' copy and layout are identical. The company then monitors which campaign has the higher success rate by analyzing the use of the promotional codes. The email using the code A1 has a 5% response rate (50 of the 1,000 people emailed used the code to buy a product), and the email using the code B1 has a 3% response rate (30 of the recipients used the code to buy a product). The company therefore determines that in this instance, the first Call To Action is more effective and will use it in future sales. A more nuanced approach would involve applying statistical testing to determine if the differences in response rates between A1 and B1 were statistically significant (that is, highly likely that the differences are real, repeatable, and not due to random chance).
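As a sketch of that statistical check, the following Python snippet applies Fisher's exact test and a pooled two-proportion z-test to the 50-of-1,000 versus 30-of-1,000 response counts above; either test is a reasonable choice here rather than the only valid one.

```python
# Sketch: testing whether the 5% vs. 3% response rates in the email example
# (50/1,000 for code A1 vs. 30/1,000 for code B1) differ significantly.
from math import sqrt
from scipy import stats

responded_a, sent_a = 50, 1000
responded_b, sent_b = 30, 1000

# Fisher's exact test on the 2x2 table of responders vs. non-responders.
table = [[responded_a, sent_a - responded_a],
         [responded_b, sent_b - responded_b]]
_, p_fisher = stats.fisher_exact(table)

# Equivalent large-sample check: pooled two-proportion z-test.
p_pool = (responded_a + responded_b) / (sent_a + sent_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
z = (responded_a / sent_a - responded_b / sent_b) / se
p_z = 2 * stats.norm.sf(abs(z))  # two-sided p-value

print(f"Fisher's exact p = {p_fisher:.4f}, two-proportion z = {z:.2f}, p = {p_z:.4f}")
# A p-value below the chosen significance level (commonly 0.05) suggests the
# difference between A1 and B1 is unlikely to be due to chance alone.
```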
In the example above, the purpose of the test is to determine which is the more effective way to encourage customers to make a purchase. If, however, the aim of the test had been to see which email would generate the higher click-rate—that is, the number of people who actually click onto the website after receiving the email—then the results might have been different.
For example, even though more of the customers receiving the code B1 accessed the website, because the Call To Action did not state the end date of the promotion, many of them may have felt no urgency to make an immediate purchase. Consequently, if the purpose of the test had been simply to see which email would bring more traffic to the website, then the email containing code B1 might well have been more successful. An A/B test should have a defined, measurable outcome such as the number of sales made, the click-through conversion rate, or the number of people signing up or registering.
A/B testing for product pricing.
A/B testing can be used to determine the right price for a product, which is perhaps one of the most difficult tasks when a new product or service is launched. A/B testing (especially valid for digital goods) is an excellent way to find out which price point and offering maximize the total revenue.
Political A/B testing.
A/B tests have also been used by political campaigns. In 2007, Barack Obama's presidential campaign used A/B testing as a way to garner online attention and understand what voters wanted to see from the presidential candidate. For example, Obama's team tested four distinct buttons on their website that led users to sign up for newsletters. Additionally, the team used six different accompanying images to draw in users. Through A/B testing, staffers were able to determine how to effectively draw in voters and garner additional interest.
HTTP Routing and API feature testing.
A/B testing is very common when deploying a newer version of an API. For real-time user experience testing, an HTTP reverse proxy is configured so that "N"% of the HTTP traffic goes to the newer version of the backend instance, while the remaining "100-N"% of HTTP traffic hits the (stable) older version of the backend HTTP application service. This is usually done to limit customers' exposure to the newer backend instance, so that if there is a bug in the newer version, only "N"% of the total user agents or clients are affected while the others are routed to the stable backend; this is a common ingress control mechanism.
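The following Python sketch shows the kind of deterministic, hash-based bucketing such a proxy or routing layer might apply; the backend URLs, the 10% split, and the function name are illustrative assumptions rather than any particular product's configuration.

```python
# Minimal sketch of deterministic traffic-splitting logic an HTTP reverse
# proxy or routing middleware might apply. Names and the 10% split are
# illustrative assumptions, not a specific product's API.
import hashlib

NEW_BACKEND = "https://api-v2.internal.example"     # hypothetical newer version
STABLE_BACKEND = "https://api-v1.internal.example"  # hypothetical stable version
ROLLOUT_PERCENT = 10  # "N"% of traffic sent to the newer backend

def choose_backend(client_id: str) -> str:
    """Route a client to a backend, deterministically, based on its ID.

    Hashing the client ID (rather than picking randomly per request) keeps a
    given user on the same backend for the duration of the rollout.
    """
    digest = hashlib.sha256(client_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100  # map the hash to a bucket in [0, 100)
    return NEW_BACKEND if bucket < ROLLOUT_PERCENT else STABLE_BACKEND

if __name__ == "__main__":
    sample = [f"user-{i}" for i in range(10000)]
    routed_new = sum(choose_backend(c) == NEW_BACKEND for c in sample)
    print(f"{routed_new / len(sample):.1%} of sampled clients hit the newer backend")
```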
Segmentation and targeting.
A/B tests most commonly apply the same variant (e.g., user interface element) with equal probability to all users. However, in some circumstances, responses to variants may be heterogeneous. That is, while a variant A might have a higher response rate overall, variant B may have an even higher response rate within a specific segment of the customer base.
For instance, in the above example, the breakdown of the response rates by gender could have been as follows: variant A with a 5% response rate overall (50 of 1,000), 2% among men (10 of 500) and 8% among women (40 of 500); variant B with a 3% response rate overall (30 of 1,000), 5% among men (25 of 500) and 1% among women (5 of 500).
In this case, we can see that while variant A had a higher response rate overall, variant B actually had a higher response rate with men.
As a result of the A/B test, the company might select a segmented strategy, sending variant B to men and variant A to women in the future. In this example, a segmented strategy would yield an increase in expected response rates from formula_0 to formula_1 – constituting a 30% increase.
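The arithmetic behind these figures can be checked with a short calculation, using the per-segment counts given in the breakdown above (500 men and 500 women per variant):

```python
# Sketch verifying the expected-response-rate arithmetic above, using the
# per-segment counts from the gender breakdown (500 men and 500 women).
men, women = 500, 500
resp_a_women, resp_a_men = 40, 10   # variant A responses by segment
resp_b_men = 25                     # variant B responses among men

# Send variant A to everyone (the unsegmented strategy):
unsegmented = (resp_a_women + resp_a_men) / (men + women)   # 0.05  -> 5%
# Send variant B to men and variant A to women (the segmented strategy):
segmented = (resp_a_women + resp_b_men) / (men + women)     # 0.065 -> 6.5%

relative_lift = segmented / unsegmented - 1                 # 0.30  -> 30%
print(f"{unsegmented:.1%} -> {segmented:.1%} ({relative_lift:.0%} relative increase)")
```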
If segmented results are expected from the A/B test, the test should be properly designed at the outset to be evenly distributed across key customer attributes, such as gender. That is, the test should both (a) contain a representative sample of men vs. women, and (b) assign men and women randomly to each "variant" (variant A vs. variant B). Failure to do so could introduce experiment bias and lead to inaccurate conclusions being drawn from the test.
This segmentation and targeting approach can be further generalized to include multiple customer attributes rather than a single customer attribute—for example, customers' age "and" gender—to identify more nuanced patterns that may exist in the test results.
| [
{
"math_id": 0,
"text": "5\\% = \\frac{40 + 10}{500 + 500}"
},
{
"math_id": 1,
"text": "6.5\\% = \\frac{40 + 25}{500+500}"
}
] | https://en.wikipedia.org/wiki?curid=9332179 |
9334818 | Range searching | In computer science, the range searching problem consists of processing a set "S" of objects, in order to determine which objects from "S" intersect with a query object, called the "range". For example, if "S" is a set of points corresponding to the coordinates of several cities, find the subset of cities within a given range of latitudes and longitudes.
The range searching problem and the data structures that solve it are a fundamental topic of computational geometry. Applications of the problem arise in areas such as geographical information systems (GIS), computer-aided design (CAD) and databases.
Variations.
There are several variations of the problem, and different data structures may be necessary for different variations. In order to obtain an efficient solution, several aspects of the problem need to be specified, including the type of objects stored (for example points, line segments or rectangles), the shape of the query ranges (such as axis-aligned rectangles, halfspaces, simplices or balls), whether a query must report all objects in the range, count them, or merely test whether the range is empty, and whether the set of objects is static or changes over time.
Data structures.
Orthogonal range searching.
In orthogonal range searching, the set "S" consists of formula_0 points in formula_1 dimensions, and the query consists of an interval in each of those dimensions. Thus, the query is a multi-dimensional axis-aligned rectangle. With an output size of formula_2, Jon Bentley used a k-d tree to achieve (in Big O notation) formula_3 space and formula_4 query time. Bentley also proposed using range trees, which improved query time to formula_5 but increased space to formula_6. Dan Willard used downpointers, a special case of fractional cascading, to reduce the query time further to formula_7.
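A minimal sketch of a k-d tree answering 2D orthogonal range reporting queries is shown below; it illustrates the idea behind Bentley's structure but is written for clarity rather than for the asymptotic bounds quoted above.

```python
# Minimal sketch of 2D orthogonal range reporting with a k-d tree, in the
# spirit of Bentley's approach described above; clarity over performance.
from typing import List, Optional, Tuple

Point = Tuple[float, float]

class KDNode:
    def __init__(self, point: Point, axis: int,
                 left: "Optional[KDNode]", right: "Optional[KDNode]"):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build(points: List[Point], depth: int = 0) -> Optional[KDNode]:
    """Build a k-d tree by splitting on the median, alternating axes."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return KDNode(points[mid], axis,
                  build(points[:mid], depth + 1),
                  build(points[mid + 1:], depth + 1))

def range_report(node: Optional[KDNode], lo: Point, hi: Point,
                 out: List[Point]) -> None:
    """Report every stored point p with lo[i] <= p[i] <= hi[i] for i = 0, 1."""
    if node is None:
        return
    x = node.point
    if lo[0] <= x[0] <= hi[0] and lo[1] <= x[1] <= hi[1]:
        out.append(x)
    axis = node.axis
    if lo[axis] <= x[axis]:          # query box overlaps the left half-plane
        range_report(node.left, lo, hi, out)
    if x[axis] <= hi[axis]:          # query box overlaps the right half-plane
        range_report(node.right, lo, hi, out)

if __name__ == "__main__":
    pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
    tree = build(pts)
    hits: List[Point] = []
    range_report(tree, lo=(3, 1), hi=(8, 5), out=hits)
    print(sorted(hits))  # points inside the axis-aligned rectangle [3,8] x [1,5]
```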
While the above results were achieved in the pointer machine model, further improvements have been made in the word RAM model of computation in low dimensions (2D, 3D, 4D). Bernard Chazelle used compressed range trees to achieve formula_8 query time and formula_3 space for range counting. Joseph JaJa and others later improved this query time to formula_9 for range counting, which matches a lower bound and is thus asymptotically optimal.
As of 2015, the best results for range reporting in low dimensions (2D, 3D, 4D), found by Timothy M. Chan, Kasper Larsen, and Mihai Pătrașcu, also using compressed range trees in the word RAM model of computation, are one of the following: formula_3 space with formula_10 query time; formula_11 space with formula_12 query time; or formula_13 space with formula_14 query time.
In the orthogonal case, if one of the bounds is infinity, the query is called three-sided. If two of the bounds are infinity, the query is two-sided, and if none of the bounds are infinity, then the query is four-sided.
Dynamic range searching.
While in static range searching the set "S" is known in advance, in dynamic range searching insertions and deletions of points are allowed. In the incremental version of the problem, only insertions are allowed, whereas the decremental version only allows deletions. For the orthogonal case, Kurt Mehlhorn and Stefan Näher created a data structure for dynamic range searching which uses dynamic fractional cascading to achieve formula_15 space and formula_16 query time. Both incremental and decremental versions of the problem can be solved with formula_17 query time, but it is unknown whether general dynamic range searching can be done with that query time.
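For intuition about the dynamic setting, the one-dimensional sketch below supports insertions, deletions, and range counting over a sorted Python list; it illustrates only the interface, since its updates take linear time and it does not achieve the bounds cited above.

```python
# One-dimensional dynamic range counting over a sorted list, kept deliberately
# simple: queries are O(log n), but insert/delete are O(n) due to list shifts.
import bisect
from typing import List

class Dynamic1DRangeCounter:
    def __init__(self) -> None:
        self.xs: List[float] = []          # points kept in sorted order

    def insert(self, x: float) -> None:
        bisect.insort(self.xs, x)

    def delete(self, x: float) -> None:
        i = bisect.bisect_left(self.xs, x)
        if i < len(self.xs) and self.xs[i] == x:
            self.xs.pop(i)

    def count(self, lo: float, hi: float) -> int:
        """Number of stored points x with lo <= x <= hi."""
        return bisect.bisect_right(self.xs, hi) - bisect.bisect_left(self.xs, lo)

if __name__ == "__main__":
    d = Dynamic1DRangeCounter()
    for x in [3, 1, 4, 1, 5, 9, 2, 6]:
        d.insert(x)
    print(d.count(2, 5))   # -> 4  (the points 2, 3, 4, 5)
    d.delete(4)
    print(d.count(2, 5))   # -> 3
```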
Colored range searching.
The problem of colored range counting considers the case where points have categorical attributes. If the categories are considered as colors of points in geometric space, then a query is for how many colors appear in a particular range. Prosenjit Gupta and others described a data structure in 1995 which solved 2D orthogonal colored range counting in formula_18 space and formula_19 query time.
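The brute-force sketch below spells out what a colored range counting query asks, counting distinct categories inside an axis-aligned box; the cited data structures answer the same query far faster after preprocessing.

```python
# Sketch of a colored range counting query, using a brute-force scan for
# clarity rather than the data structures cited above.
from typing import Iterable, Tuple

ColoredPoint = Tuple[float, float, str]  # (x, y, color/category)

def colored_range_count(points: Iterable[ColoredPoint],
                        lo: Tuple[float, float],
                        hi: Tuple[float, float]) -> int:
    """Count how many distinct colors appear among points in the query box."""
    colors = {c for (x, y, c) in points
              if lo[0] <= x <= hi[0] and lo[1] <= y <= hi[1]}
    return len(colors)

if __name__ == "__main__":
    pts = [(1, 1, "red"), (2, 3, "blue"), (4, 2, "red"),
           (6, 5, "green"), (3, 4, "blue")]
    # Four points fall in [1,4] x [1,4], but they carry only two distinct colors.
    print(colored_range_count(pts, lo=(1, 1), hi=(4, 4)))  # -> 2
```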
Applications.
In addition to being considered in computational geometry, range searching, and orthogonal range searching in particular, has applications for range queries in databases. Colored range searching is also used for, and motivated by, searching through categorical data. For example, determining the rows in a database of bank accounts that represent people whose age is between 25 and 40 and whose balance is between $10,000 and $20,000 might be an orthogonal range reporting problem where age and money are the two dimensions.
| [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "d"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "O(n)"
},
{
"math_id": 4,
"text": "O\\big(n^{1 - \\frac{1}{d}} + k\\big)"
},
{
"math_id": 5,
"text": "O(\\log^d n + k)"
},
{
"math_id": 6,
"text": "O(n\\log^{d - 1} n)"
},
{
"math_id": 7,
"text": "O(\\log^{d - 1} n + k)"
},
{
"math_id": 8,
"text": "O(\\log n)"
},
{
"math_id": 9,
"text": "O\\left(\\dfrac{\\log n}{\\log \\log n}\\right)"
},
{
"math_id": 10,
"text": "O(\\log ^\\epsilon n + k \\log ^\\epsilon n)"
},
{
"math_id": 11,
"text": "O(n \\log \\log n)"
},
{
"math_id": 12,
"text": "O(\\log \\log n + k \\log \\log n)"
},
{
"math_id": 13,
"text": "O(n \\log ^\\epsilon n)"
},
{
"math_id": 14,
"text": "O(\\log \\log n + k)"
},
{
"math_id": 15,
"text": "O(n \\log n)"
},
{
"math_id": 16,
"text": "O(\\log n \\log \\log n + k)"
},
{
"math_id": 17,
"text": "O(\\log n + k)"
},
{
"math_id": 18,
"text": "O(n^2\\log ^2 n)"
},
{
"math_id": 19,
"text": "O(\\log ^2 n)"
}
] | https://en.wikipedia.org/wiki?curid=9334818 |
9335064 | Ledinegg instability | In fluid dynamics, the Ledinegg instability occurs in two-phase flow, especially in a boiler tube, when the boiling boundary is within the tube. For a given mass flux J through the tube, the pressure drop per unit length (which typically varies as the square of the mass flux and inversely as the density, i.e., as formula_0) is much less when the flow is wholly of liquid than when the flow is wholly of steam. Thus, as the boiling boundary moves up the tube, the total pressure drop falls, potentially increasing the flow in an unstable manner. Boiler tubes normally overcome this (which is effectively a 'negative resistance' regime) by incorporating a narrow orifice at the entry, to give a stabilising pressure drop on entry. | [
{
"math_id": 0,
"text": "J^2/\\rho"
}
] | https://en.wikipedia.org/wiki?curid=9335064 |
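To illustrate the shape of the pressure-drop curve described in the Ledinegg instability entry above, the following Python sketch evaluates a deliberately crude model: the liquid length is taken proportional to the mass flux, the two-phase region is approximated as pure steam, and the tube length, densities, and friction constant are invented round numbers, so only the qualitative 'negative resistance' branch should be read from it.

```python
# Crude illustrative sketch (not a validated thermal-hydraulic model) of why
# the total pressure drop in a heated tube can fall as the mass flux rises.
# Assumptions: the boiling boundary z_b moves up the tube in proportion to the
# mass flux J, the two-phase region is treated as pure steam, and all physical
# constants are invented round numbers.
import numpy as np

L = 5.0          # tube length [m] (assumed)
RHO_LIQ = 750.0  # liquid density [kg/m^3] (assumed, hot pressurized water)
RHO_VAP = 20.0   # vapor density [kg/m^3] (assumed)
C_FRIC = 1.0e-3  # lumped friction constant (assumed)
A_BOIL = 5.0e-3  # boiling-boundary sensitivity: z_b = A_BOIL * J [m per kg/(m^2 s)]

def pressure_drop(J: np.ndarray) -> np.ndarray:
    """Total frictional pressure drop, with dP/dz taken proportional to J^2/rho."""
    z_b = np.minimum(L, A_BOIL * J)              # length of the liquid region
    return C_FRIC * J**2 * (z_b / RHO_LIQ + (L - z_b) / RHO_VAP)

J = np.linspace(10.0, 1500.0, 300)               # mass flux [kg/(m^2 s)]
dp = pressure_drop(J)
slope = np.gradient(dp, J)
unstable = J[slope < 0]                          # the 'negative resistance' branch
if unstable.size:
    print(f"dP falls with increasing J for J in about "
          f"[{unstable.min():.0f}, {unstable.max():.0f}] kg/(m^2 s)")
```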