id | title | text | formulas | url
---|---|---|---|---|
5837233
|
Diagnosis (artificial intelligence)
|
As a subfield in artificial intelligence, diagnosis is concerned with the development of algorithms and techniques that are able to determine whether the behaviour of a system is correct. If the system is not functioning correctly, the algorithm should be able to determine, as accurately as possible, which part of the system is failing, and which kind of fault it is facing. The computation is based on "observations", which provide information on the current behaviour.
The expression "diagnosis" also refers to the answer of the question of whether the system is malfunctioning or not, and to the process of computing the answer. This word comes from the medical context where a diagnosis is the process of identifying a disease by its symptoms.
Example.
An example of diagnosis is the process a garage mechanic follows with an automobile. The mechanic first tries to detect any abnormal behavior based on observations of the car and knowledge of this type of vehicle. If the behavior is found to be abnormal, the mechanic refines the diagnosis by using new observations and possibly by testing the system, until the faulty component is discovered; the mechanic plays an important role in the vehicle diagnosis.
Expert diagnosis.
The expert diagnosis (or diagnosis by expert system) is based on experience with the system. Using this experience, a mapping is built that efficiently associates the observations to the corresponding diagnoses.
The experience can be provided:
The main drawbacks of these methods are:
A slightly different approach is to build an expert system from a model of the system rather than directly from human expertise. An example is the computation of a diagnoser for the diagnosis of discrete event systems. This approach can be seen as model-based, but it shares some of the advantages and some of the drawbacks of the expert system approach.
Model-based diagnosis.
Model-based diagnosis is an example of abductive reasoning using a model of the system. In general, it works as follows:
We have a model that describes the behaviour of the system (or artefact). The model is an abstraction of the behaviour of the system and can be incomplete. In particular, the faulty behaviour is generally little known, and the fault model may thus not be represented. Given observations of the system, the diagnosis system simulates the system using the model and compares the observations actually made to the observations predicted by the simulation.
The modelling can be simplified by the following rules (where formula_0 is the "Ab"normal predicate):
formula_1
formula_2 (fault model)
The semantics of these formulae is the following: if the behaviour of the system is not abnormal (i.e. if it is normal), then the internal (unobservable) behaviour will be formula_3 and the observable behaviour formula_4. Otherwise, the internal behaviour will be formula_5 and the observable behaviour formula_6. Given the observations formula_7, the problem is to determine whether the system behaviour is normal or not (formula_8 or formula_9). This is an example of abductive reasoning.
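To make the abductive scheme above concrete, here is a minimal consistency-based diagnosis sketch in Python. It is not from the article: the two-adder circuit, the component names and the observed values are invented for illustration. A candidate set of components is assumed abnormal (formula_0 holds for them), the model predicts the remaining observable behaviour, and the minimal candidate sets consistent with the observations are returned as diagnoses.

```python
from itertools import combinations

# Toy consistency-based diagnosis: components assumed normal must behave as the
# model predicts; a diagnosis is a minimal set of "abnormal" components that
# restores consistency between predictions and observations.

def predictions(abnormal, inputs):
    """Predict observable outputs of a tiny two-adder circuit; abnormal components predict nothing."""
    out = {}
    if "adder1" not in abnormal:
        out["s1"] = inputs["a"] + inputs["b"]
    if "adder2" not in abnormal:
        out["s2"] = inputs["c"] + inputs["d"]
    return out

def consistent(abnormal, inputs, observed):
    pred = predictions(abnormal, inputs)
    return all(observed[k] == v for k, v in pred.items() if k in observed)

def diagnoses(components, inputs, observed):
    """Return all minimal sets of abnormal components consistent with the observations."""
    found = []
    for size in range(len(components) + 1):
        for candidate in combinations(components, size):
            if any(set(d) <= set(candidate) for d in found):
                continue                     # supersets of a diagnosis are never minimal
            if consistent(set(candidate), inputs, observed):
                found.append(candidate)
    return found

inputs = {"a": 1, "b": 2, "c": 3, "d": 4}
observed = {"s1": 3, "s2": 9}                # s2 should be 7, so adder2 is suspect
print(diagnoses(["adder1", "adder2"], inputs, observed))   # -> [('adder2',)]
```

The empty candidate set is inconsistent with the observations (the system is detected as faulty), and the smallest set whose abnormality explains the discrepancy is returned, mirroring the mechanic narrowing down the faulty component.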
Diagnosability.
A system is said to be diagnosable if, whatever the behavior of the system, a unique diagnosis can be determined without ambiguity.
The problem of diagnosability is very important when designing a system because on one hand one may want to reduce the number of sensors to reduce the cost, and on the other hand one may want to increase the number of sensors to increase the probability of detecting a faulty behavior.
Several algorithms for dealing with these problems exist. One class of algorithms answers the question of whether a system is diagnosable; another class looks for sets of sensors that make the system diagnosable, and optionally comply with criteria such as cost optimization.
The diagnosability of a system is generally computed from the model of the system. In applications using model-based diagnosis, such a model is already present and does not need to be built from scratch.
External links.
DX workshops.
DX is the annual International Workshop on Principles of Diagnosis that started in 1989.
|
[
{
"math_id": 0,
"text": "Ab\\,"
},
{
"math_id": 1,
"text": "\\neg Ab(S) \\Rightarrow Int1 \\wedge Obs1"
},
{
"math_id": 2,
"text": "Ab(S) \\Rightarrow Int2 \\wedge Obs2"
},
{
"math_id": 3,
"text": "Int1\\,"
},
{
"math_id": 4,
"text": "Obs1\\,"
},
{
"math_id": 5,
"text": "Int2\\,"
},
{
"math_id": 6,
"text": "Obs2\\,"
},
{
"math_id": 7,
"text": "Obs\\,"
},
{
"math_id": 8,
"text": "\\neg Ab(S)\\,"
},
{
"math_id": 9,
"text": "Ab(S)\\,"
}
] |
https://en.wikipedia.org/wiki?curid=5837233
|
583785
|
Tarski's undefinability theorem
|
Theorem that arithmetical truth cannot be defined in arithmetic
Tarski's undefinability theorem, stated and proved by Alfred Tarski in 1933, is an important limitative result in mathematical logic, the foundations of mathematics, and in formal semantics. Informally, the theorem states that "arithmetical truth cannot be defined in arithmetic".
The theorem applies more generally to any sufficiently strong formal system, showing that truth in the standard model of the system cannot be defined within the system.
History.
In 1931, Kurt Gödel published the incompleteness theorems, which he proved in part by showing how to represent the syntax of formal logic within first-order arithmetic. Each expression of the formal language of arithmetic is assigned a distinct number. This procedure is known variously as Gödel numbering, "coding" and, more generally, as arithmetization. In particular, various "sets" of expressions are coded as sets of numbers. For various syntactic properties (such as "being a formula", "being a sentence", etc.), these sets are computable. Moreover, any computable set of numbers can be defined by some arithmetical formula. For example, there are formulas in the language of arithmetic defining the set of codes for arithmetic sentences, and for provable arithmetic sentences.
The undefinability theorem shows that this encoding cannot be done for semantic concepts such as truth. It shows that no sufficiently rich interpreted language can represent its own semantics. A corollary is that any metalanguage capable of expressing the semantics of some object language (e.g. a predicate is definable in Zermelo-Fraenkel set theory for whether formulae in the language of Peano arithmetic are true in the standard model of arithmetic) must have expressive power exceeding that of the object language. The metalanguage includes primitive notions, axioms, and rules absent from the object language, so that there are theorems provable in the metalanguage not provable in the object language.
The undefinability theorem is conventionally attributed to Alfred Tarski. Gödel also discovered the undefinability theorem in 1930, while proving his incompleteness theorems published in 1931, and well before the 1933 publication of Tarski's work (Murawski 1998). While Gödel never published anything bearing on his independent discovery of undefinability, he did describe it in a 1931 letter to John von Neumann. Tarski had obtained almost all results of his 1933 monograph "The Concept of Truth in the Languages of the Deductive Sciences" between 1929 and 1931, and spoke about them to Polish audiences. However, as he emphasized in the paper, the undefinability theorem was the only result he did not obtain earlier. According to the footnote to the undefinability theorem (Twierdzenie I) of the 1933 monograph, the theorem and the sketch of the proof were added to the monograph only after the manuscript had been sent to the printer in 1931. Tarski reports there that, when he presented the content of his monograph to the Warsaw Academy of Science on March 21, 1931, he expressed at this place only some conjectures, based partly on his own investigations and partly on Gödel's short report on the incompleteness theorems ["Some metamathematical results on the definiteness of decision and consistency"], Austrian Academy of Sciences, Vienna, 1930.
Statement.
We will first state a simplified version of Tarski's theorem, then state and prove in the next section the theorem Tarski proved in 1933.
Let formula_0 be the language of first-order arithmetic. This is the theory of the natural numbers, including their addition and multiplication, axiomatized by the first-order Peano axioms. This is a "first-order" theory: the quantifiers extend over natural numbers, but not over sets or functions of natural numbers. The theory is strong enough to describe recursively defined integer functions such as exponentiation, factorials or the Fibonacci sequence.
Let formula_1 be the standard structure for formula_2 i.e. formula_1 consists of the ordinary set of natural numbers and their addition and multiplication. Each sentence in formula_0 can be interpreted in formula_1 and then becomes either true or false. Thus formula_3 is the "interpreted first-order language of arithmetic".
Each formula formula_4 in formula_0 has a Gödel number formula_5 This is a natural number that "encodes" formula_6 In that way, the language formula_0 can talk about formulas in formula_2 not just about numbers. Let formula_7 denote the set of formula_0-sentences true in formula_1, and formula_8 the set of Gödel numbers of the sentences in formula_9 The following theorem answers the question: Can formula_8 be defined by a formula of first-order arithmetic?
"Tarski's undefinability theorem": There is no formula_0-formula formula_10 that defines formula_11
That is, there is no formula_0-formula formula_10 such that for every formula_0-sentence formula_12 formula_13 holds in formula_1.
Informally, the theorem says that the concept of truth of first-order arithmetic statements cannot be defined by a formula in first-order arithmetic. This implies a major limitation on the scope of "self-representation". It "is" possible to define a formula formula_10 whose extension is formula_14 but only by drawing on a metalanguage whose expressive power goes beyond that of formula_0. For example, a truth predicate for first-order arithmetic can be defined in second-order arithmetic. However, this formula would only be able to define a truth predicate for formulas in the original language formula_0. To define a truth predicate for the metalanguage would require a still higher metametalanguage, and so on.
To prove the theorem, we proceed by contradiction and assume that an formula_0-formula formula_10 exists which is true for the natural number formula_15 in formula_1 if and only if formula_15 is the Gödel number of a sentence in formula_0 that is true in formula_1. We could then use formula_10 to define a new formula_0-formula formula_16 which is true for the natural number formula_17 if and only if formula_17 is the Gödel number of a formula formula_18 (with a free variable formula_19) such that formula_20 is false when interpreted in formula_1 (i.e. the formula formula_21 when applied to its own Gödel number, yields a false statement). If we now consider the Gödel number formula_22 of the formula formula_16, and ask whether the sentence formula_23 is true in formula_1, we obtain a contradiction. (This is known as a diagonal argument.)
The theorem is a corollary of Post's theorem about the arithmetical hierarchy, proved some years after Tarski (1933). A semantic proof of Tarski's theorem from Post's theorem is obtained by reductio ad absurdum as follows. Assuming formula_8 is arithmetically definable, there is a natural number formula_15 such that formula_8 is definable by a formula at level formula_24 of the arithmetical hierarchy. However, formula_8 is formula_25-hard for all formula_26 Thus the arithmetical hierarchy collapses at level formula_15, contradicting Post's theorem.
General form.
Tarski proved a stronger theorem than the one stated above, using an entirely syntactical method. The resulting theorem applies to any formal language with negation, and with sufficient capability for self-reference that the diagonal lemma holds. First-order arithmetic satisfies these preconditions, but the theorem applies to much more general formal systems, such as ZFC.
"Tarski's undefinability theorem (general form)": Let formula_3 be any interpreted formal language which includes negation and has a Gödel numbering formula_27 satisfying the diagonal lemma, i.e. for every formula_0-formula formula_28 (with one free variable formula_19) there is a sentence formula_29 such that formula_30 holds in formula_1. Then there is no formula_0-formula formula_10 with the following property: for every formula_0-sentence formula_12 formula_13 is true in formula_1.
The proof of Tarski's undefinability theorem in this form is again by reductio ad absurdum. Suppose that an formula_0-formula formula_10 as above existed, i.e., if formula_29 is a sentence of arithmetic, then formula_31 holds in formula_1 if and only if formula_29 holds in formula_1. Hence for all formula_29, the formula formula_13 holds in formula_1. But the diagonal lemma yields a counterexample to this equivalence, by giving a "liar" formula formula_32 such that formula_33 holds in formula_1. This is a contradiction. QED.
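The chain of equivalences in this proof can be displayed compactly; the following is only a restatement of the argument just given, writing True for the hypothetical truth predicate and g for the Gödel numbering, as above.

```latex
\begin{align*}
\text{assumed truth definition:}\quad & \mathcal{N} \models \mathrm{True}(g(A)) \iff A
   \quad \text{for every sentence } A,\\
\text{diagonal lemma applied to } \lnot\mathrm{True}(x)\text{:}\quad
   & \mathcal{N} \models S \iff \lnot\mathrm{True}(g(S)),\\
\text{truth definition instantiated at } A := S\text{:}\quad
   & \mathcal{N} \models \mathrm{True}(g(S)) \iff S,\\
\text{combining the last two lines:}\quad
   & \mathcal{N} \models S \iff \lnot S, \quad\text{a contradiction.}
\end{align*}
```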
Discussion.
The formal machinery of the proof given above is wholly elementary except for the diagonalization which the diagonal lemma requires. The proof of the diagonal lemma is likewise surprisingly simple; for example, it does not invoke recursive functions in any way. The proof does assume that every formula_0-formula has a Gödel number, but the specifics of a coding method are not required. Hence Tarski's theorem is much easier to motivate and prove than the more celebrated theorems of Gödel about the metamathematical properties of first-order arithmetic.
Smullyan (1991, 2001) has argued forcefully that Tarski's undefinability theorem deserves much of the attention garnered by Gödel's incompleteness theorems. That the latter theorems have much to say about all of mathematics and, more controversially, about a range of philosophical issues (e.g., Lucas 1961) is less than evident. Tarski's theorem, on the other hand, is not directly about mathematics but about the inherent limitations of any formal language sufficiently expressive to be of real interest. Such languages are necessarily capable of enough self-reference for the diagonal lemma to apply to them. The broader philosophical import of Tarski's theorem is more strikingly evident.
An interpreted language is "strongly-semantically-self-representational" exactly when the language contains predicates and function symbols defining all the semantic concepts specific to the language. Hence the required functions include the "semantic valuation function" mapping a formula formula_29 to its truth value formula_34 and the "semantic denotation function" mapping a term formula_35 to the object it denotes. Tarski's theorem then generalizes as follows: "No sufficiently powerful language is strongly-semantically-self-representational".
The undefinability theorem does not prevent truth in one theory from being defined in a stronger theory. For example, the set of (codes for) formulas of first-order Peano arithmetic that are true in formula_1 is definable by a formula in second order arithmetic. Similarly, the set of true formulas of the standard model of second order arithmetic (or formula_15-th order arithmetic for any formula_15) can be defined by a formula in first-order ZFC.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "L"
},
{
"math_id": 1,
"text": "\\mathcal N"
},
{
"math_id": 2,
"text": "L,"
},
{
"math_id": 3,
"text": "(L, \\mathcal N)"
},
{
"math_id": 4,
"text": "\\varphi"
},
{
"math_id": 5,
"text": "g(\\varphi)."
},
{
"math_id": 6,
"text": "\\varphi."
},
{
"math_id": 7,
"text": "T"
},
{
"math_id": 8,
"text": "T^*"
},
{
"math_id": 9,
"text": "T."
},
{
"math_id": 10,
"text": "\\mathrm{True}(n)"
},
{
"math_id": 11,
"text": "T^*."
},
{
"math_id": 12,
"text": "A,"
},
{
"math_id": 13,
"text": "\\mathrm{True}(g(A)) \\iff A"
},
{
"math_id": 14,
"text": "T^*,"
},
{
"math_id": 15,
"text": "n"
},
{
"math_id": 16,
"text": "S(m)"
},
{
"math_id": 17,
"text": "m"
},
{
"math_id": 18,
"text": "\\varphi(x)"
},
{
"math_id": 19,
"text": "x"
},
{
"math_id": 20,
"text": "\\varphi(m)"
},
{
"math_id": 21,
"text": "\\varphi(x),"
},
{
"math_id": 22,
"text": "g"
},
{
"math_id": 23,
"text": "S(g)"
},
{
"math_id": 24,
"text": "\\Sigma^0_n"
},
{
"math_id": 25,
"text": "\\Sigma^0_k"
},
{
"math_id": 26,
"text": "k."
},
{
"math_id": 27,
"text": "g(\\varphi)"
},
{
"math_id": 28,
"text": "B(x)"
},
{
"math_id": 29,
"text": "A"
},
{
"math_id": 30,
"text": "A \\iff B(g(A))"
},
{
"math_id": 31,
"text": "\\mathrm{True}(g(A))"
},
{
"math_id": 32,
"text": "S"
},
{
"math_id": 33,
"text": "S \\iff \\lnot \\mathrm{True}(g(S))"
},
{
"math_id": 34,
"text": "||A||,"
},
{
"math_id": 35,
"text": "t"
}
] |
https://en.wikipedia.org/wiki?curid=583785
|
5838194
|
Vilnius photometric system
|
Photometric system
The Vilnius photometric system is a medium-band seven-colour photometric system (UPXYZVS), created in 1963 by Vytautas Straižys and his coworkers. The system was highly optimized for the classification of stars from ground-based observations. Medium-band passbands were chosen to ensure that faint stars could also be measured.
Selection of bandpasses.
The temperature classification of early-type stars is based on the Balmer jump (Balmer discontinuity). To measure it, one must have two bandpasses placed in the ultraviolet: one before the Balmer jump (U magnitude) and another after the jump (X magnitude).
The Y bandpass is near the breakpoint of the interstellar extinction law (interstellar extinction in the 300–800 nm region can be approximated by two straight lines, which intersect at ~435.5 nm).
The P magnitude is placed exactly on the Balmer jump in order to provide separation for luminosity classes of B-A-F stars.
The Z magnitude is placed on the Mg I triplet and the MgH molecular band. It is sensitive to the luminosity classes of G-K-M stars.
The S bandpass coincides with H-alpha line position and provides information about emission or absorption phenomena in that line.
Finally, the V magnitude is chosen to coincide with a similar bandpass in the UBV system. It provides the possibility to relate these two photometric systems.
Normalization.
Colour indices of the system were normalized to satisfy the condition:
formula_0
for un-reddened O-type stars.
Mean wavelength and half-widths of response functions.
The following table shows the characteristics of each of the filters used (represented colors are only approximate):
See also.
UBV photometric system
|
[
{
"math_id": 0,
"text": "U-P = P-X = X-Y = Y-Z = Z-V = V-S =0"
}
] |
https://en.wikipedia.org/wiki?curid=5838194
|
58383744
|
Separation principle in stochastic control
|
The separation principle is one of the fundamental principles of stochastic control theory, which states that the problems of optimal control and state estimation can be decoupled under certain conditions. In its most basic formulation it deals with a linear stochastic system
formula_0
with a state process formula_1, an output process formula_2 and a control formula_3, where formula_4 is a vector-valued Wiener process, formula_5 is a zero-mean Gaussian random vector independent of formula_4, formula_6, and formula_7, formula_8, formula_9, formula_10, formula_11 are matrix-valued functions which generally are taken to be continuous of bounded variation. Moreover, formula_12 is nonsingular on some interval formula_13. The problem is to design an output feedback law formula_14 which maps the observed process formula_2 to the control input formula_3 in a nonanticipatory manner so as to minimize the functional
formula_15
where formula_16 denotes expected value, prime (formula_17) denotes transpose, and formula_18 and formula_19 are continuous matrix functions of bounded variation, formula_20 is positive semi-definite and formula_21 is positive definite for all formula_22. Under suitable conditions, which need to be properly stated, the optimal policy formula_23 can be chosen in the form
formula_24
where formula_25 is the linear least-squares estimate of the state vector formula_26 obtained from the Kalman filter
formula_27
where formula_28 is the gain of the optimal linear-quadratic regulator obtained by taking formula_29 and formula_5 deterministic, and where formula_30 is the Kalman gain. There is also a non-Gaussian version of this problem (to be discussed below) where the Wiener process formula_4 is replaced by a more general square-integrable martingale with possible jumps. In this case, the Kalman filter needs to be replaced by a nonlinear filter providing an estimate of the (strict sense) conditional mean
formula_31
where
formula_32
is the "filtration" generated by the output process; i.e., the family of increasing sigma fields representing the data as it is produced.
In the early literature on the separation principle it was common to allow as admissible controls formula_3 all processes that are "adapted" to the filtration formula_33. This is equivalent to allowing all non-anticipatory Borel functions as feedback laws, which raises the question of existence of a unique solution to the equations of the feedback loop. Moreover, one needs to exclude the possibility that a nonlinear controller extracts more information from the data than what is possible with a linear control law.
Choices of the class of admissible control laws.
Linear-quadratic control problems are often solved by a completion-of-squares argument. In our present context we have
formula_34
in which the first term takes the form
formula_35
where formula_36 is the covariance matrix
formula_37
The separation principle would now follow immediately if formula_38 were independent of the control. However this needs to be established.
The state equation can be integrated to take the form
formula_39
where formula_40 is the state process obtained by setting formula_41 and formula_42 is the transition matrix function. By linearity, formula_43 equals
formula_44
where formula_45. Consequently,
formula_46
but we need to establish that formula_47 does not depend on the control. This would be the case if
formula_48
where formula_49 is the output process obtained by setting formula_41. This issue was discussed in detail by Lindquist. In fact, since the control process formula_3 is in general a "nonlinear" function of the data and thus non-Gaussian, so is the output process formula_2. To avoid these problems one might begin by uncoupling the feedback loop and determining an optimal control process in the class of stochastic processes formula_3 that are adapted to the family formula_50 of sigma fields. This problem, where one optimizes over the class of all control processes adapted to a fixed filtration, is called a "stochastic open loop (SOL) problem". It is not uncommon in the literature to assume from the outset that the control is adapted to formula_51; see, e.g., Section 2.3 in Bensoussan, and also van Handel and Willems.
In Lindquist 1973 a procedure was proposed for how to embed the class of admissible controls in various SOL classes in a problem-dependent manner, and then construct the corresponding feedback law. The largest class formula_52 of admissible feedback laws formula_23 consists of the non-anticipatory functions formula_53 such that the feedback equation has a unique solution and the corresponding control process formula_54 is adapted to formula_55.
Next, we give a few examples of specific classes of feedback laws that belong to this general class, as well as some other strategies in the literature to overcome the problems described above.
Linear control laws.
The admissible class formula_52 of control laws could be restricted to contain only certain linear ones as in Davis. More generally, the linear class
formula_56
where formula_57 is a deterministic function and formula_58 is an formula_59 kernel, ensures that formula_36 is independent of the control. In fact, the Gaussian property will then be preserved, and formula_60 will be generated by the Kalman filter. Then the error process formula_61 is generated by
formula_62
which is clearly independent of the choice of control, and thus so is formula_36.
Lipschitz-continuous control laws.
Wonham proved a separation theorem for controls in the class formula_63, even for a more general cost functional than J(u). However, the proof is far from simple and there are many technical assumptions. For example, formula_64 must be square and have a determinant bounded away from zero, which is a serious restriction. A later proof by Fleming and Rishel is considerably simpler. They also prove the separation theorem with the quadratic cost functional formula_65 for a class of Lipschitz-continuous feedback laws, namely formula_66, where formula_67 is a non-anticipatory function of formula_2 which is Lipschitz continuous in this argument. Kushner proposed a more restricted class formula_68, where the modified state process formula_69 is given by
formula_70
leading to the identity formula_71.
Imposing delay.
If there is a delay in the processing of the observed data so that, for each formula_22, formula_72 is a function of formula_73, then formula_74, formula_75, see Example 3 in Georgiou and Lindquist. Consequently, formula_36 is independent of the control. Nevertheless, the control policy formula_23 must be such that the feedback equations have a unique solution.
Consequently, the problem with possibly control-dependent sigma fields does not occur in the usual discrete-time formulation. However, a procedure used in several textbooks to construct the continuous-time formula_36 as the limit of finite difference quotients of the discrete-time formula_36, which does not depend on the control, is circular or at best incomplete; see Remark 4 in Georgiou and Lindquist.
Weak solutions.
An approach introduced by Duncan and Varaiya and Davis and Varaiya, see also Section 2.4 in Bensoussan
is based on "weak solutions" of the stochastic differential equation. Considering such solutions of
formula_76
we can change the probability measure (that depends on formula_77) via a Girsanov transformation so that
formula_78
becomes a new Wiener process, which (under the new probability measure) can be assumed to be unaffected by the control. The question of how this could be implemented in an engineering system is left open.
Nonlinear filtering solutions.
Although a nonlinear control law will produce a non-Gaussian state process, it can be shown, using nonlinear filtering theory (Chapter 16.1 in Liptser and Shiryaev), that the state process is "conditionally Gaussian" given the filtration formula_79. This fact can be used to show that formula_80 is actually generated by a Kalman filter (see Chapters 11 and 12 in Liptser and Shiryaev). However, this requires quite a sophisticated analysis and is restricted to the case where the driving noise formula_81 is a Wiener process.
Additional historical perspective can be found in Mitter.
Issues on feedback in linear stochastic systems.
At this point it is suitable to consider a more general class of controlled linear stochastic systems that also covers systems with time delays, namely
formula_82
with formula_83 a stochastic vector process which does not depend on the control. The standard stochastic system is then obtained as a special case where formula_84, formula_85 and formula_86. We shall use the short-hand notation
formula_87
for the feedback system, where
formula_88
is a Volterra operator.
In this more general formulation the embedding procedure of Lindquist defines the class formula_52 of admissible feedback laws formula_23 as the class of non-anticipatory functions formula_53 such that the feedback equation formula_89 has a unique solution formula_90 and formula_91 is adapted to formula_55.
In Georgiou and Lindquist a new framework for the separation principle was proposed. This approach considers stochastic systems as well-defined maps between sample paths rather than between stochastic processes and allows us to extend the separation principle to systems driven by martingales with possible jumps. The approach is motivated by engineering thinking where systems and feedback loops process signals, and not stochastic processes "per se" or transformations of probability measures. Hence the purpose is to create a natural class of admissible control laws that make engineering sense, including those that are nonlinear and discontinuous.
The feedback equation formula_89 has a unique strong solution if there exists a non-anticipating function formula_58 such that formula_92 satisfies the equation with probability one and all other solutions coincide with formula_93 with probability one. However, in the sample-wise setting, more is required, namely that such a unique solution exists and that formula_89 holds for all formula_94, not just almost all. The resulting feedback loop is "deterministically well-posed" in the sense that the feedback equations admit a unique solution that causally depends on the input for "each" input sample path.
In this context, a "signal" is defined to be a sample path of a stochastic process with possible discontinuities. More precisely, signals will belong to the "Skorohod space" formula_11, i.e., the space of functions which are continuous on the right and have a left limit at all points (càdlàg functions). In particular, the space formula_10 of continuous functions is a proper subspace of formula_11. Hence the response of a typical nonlinear operation that involves thresholding and switching can be modeled as a signal. The same goes for sample paths of counting processes and other martingales. A "system" is defined to be a measurable non-anticipatory map formula_95 sending sample paths to sample paths so that their outputs at any time formula_22 is a measurable function of past values of the input and time. For example, stochastic differential equations with Lipschitz coefficients driven by a Wiener process
induce maps between corresponding path spaces, see page 127 in Rogers and Williams, and pages 126-128 in Klebaner. Also, under fairly general conditions (see e.g., Chapter V in Protter), stochastic differential equations driven by martingales with sample paths in formula_11 have strong solutions who are semi-martingales.
For the time setting formula_96, the feedback system formula_89 can be written formula_97, where formula_94 can be interpreted as an input.
Definition. A feedback loop formula_97 is "deterministically well-posed" if it has a unique solution formula_98 for all inputs formula_99 and formula_100 is a system.
This implies that the processes formula_93 and formula_94 define identical filtrations. Consequently, no new information is created by the loop. However, what we need is that formula_74 for formula_75. This is ensured by the following lemma (Lemma 8 in Georgiou and Lindquist).
Key Lemma. If the feedback loop formula_89 is deterministically well-posed, formula_101 is a system, and formula_102 is a linear system having a right inverse formula_103 that is also a system, then formula_104 is a system and formula_74 for formula_75.
The condition on formula_102 in this lemma is clearly satisfied in the standard linear stochastic system, for which formula_105, and hence formula_106. The remaining conditions are collected in the following definition.
Definition. A feedback law formula_23 is "deterministically well-posed" for the system formula_89 if formula_101 is a system and the feedback system formula_89 is deterministically well-posed.
Examples of simple systems that are not deterministically well-posed are given in Remark 12 in Georgiou and Lindquist.
A separation principle for physically realizable control laws.
By only considering feedback laws that are deterministically well-posed, all admissible control laws are physically realizable in the engineering sense that they induce a signal that travels through the feedback loop.
The proof of the following theorem can be found in Georgiou and Lindquist 2013.
Separation theorem.
Given the linear stochastic system
formula_107
where formula_4 is a vector-valued Wiener process, formula_5 is a zero-mean Gaussian random vector independent of formula_4, consider the problem of minimizing the quadratic functional J(u) over the class of all deterministically well-posed feedback laws formula_23. Then the unique optimal control law is given by formula_108 where formula_28 is defined as above and formula_60 is given by the Kalman filter. More generally, if formula_4 is a square-integrable martingale and formula_5 is an arbitrary zero mean random vector, formula_108, where formula_43, is the optimal control law provided it is deterministically well-posed.
In the general non-Gaussian case, which may involve counting processes, the Kalman filter needs to be replaced by a nonlinear filter.
A Separation principle for delay-differential systems.
Stochastic control for time-delay systems was first studied in Lindquist and Brooks, although Brooks relies on the strong assumption that the observation formula_2 is "functionally independent" of the control formula_3, thus avoiding the key question of feedback.
Consider the delay-differential system
formula_109
where formula_4 is now a (square-integrable) Gaussian (vector) martingale, and where formula_110 and formula_10 are of bounded variation in the first argument and continuous on the right in the second, formula_111 is deterministic for formula_112, and formula_6.
More precisely, formula_113 for formula_114, formula_115 for formula_116, and the total variation of formula_117 is bounded by an integrable function in the variable formula_22, and the same holds for formula_10.
We want to determine a control law which minimizes
formula_118
where formula_119 is a positive Stieltjes measure. The optimal solution of the corresponding deterministic problem, obtained by setting formula_120, is given by
formula_121
with formula_122.
The following separation principle for the delay system above can be found in Georgiou and Lindquist 2013 and generalizes the corresponding result in Lindquist 1973
Theorem. There is a unique feedback law formula_123 in the class of deterministically well-posed control laws that minimizes formula_124, and it is given by
formula_125
where formula_28 is the deterministic control gain and formula_126 is given by the linear (distributed) filter
formula_127
where formula_128 is the innovation process
formula_129
and the gain formula_1 is as defined on page 120 in Lindquist.
|
[
{
"math_id": 0,
"text": "\\begin{align}\n dx & =A(t)x(t)\\,dt+B_1(t)u(t)\\,dt+B_2(t)\\,dw \\\\\n dy & =C(t)x(t)\\,dt +D(t)\\,dw\n\\end{align}"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "y"
},
{
"math_id": 3,
"text": "u"
},
{
"math_id": 4,
"text": "w"
},
{
"math_id": 5,
"text": "x(0)"
},
{
"math_id": 6,
"text": "y(0)=0"
},
{
"math_id": 7,
"text": "A"
},
{
"math_id": 8,
"text": "B_1"
},
{
"math_id": 9,
"text": "B_2"
},
{
"math_id": 10,
"text": "C"
},
{
"math_id": 11,
"text": "D"
},
{
"math_id": 12,
"text": "DD'"
},
{
"math_id": 13,
"text": "[0,T]"
},
{
"math_id": 14,
"text": "\\pi:\\, y \\mapsto u"
},
{
"math_id": 15,
"text": "\nJ(u) = \\mathbb{E}\\left\\{ \\int_0^T x(t)'Q(t)x(t)\\,dt+\\int_0^Tu(t)'R(t)u(t)\\,dt +x(T)'Sx(T)\\right\\},\n"
},
{
"math_id": 16,
"text": "\\mathbb{E}"
},
{
"math_id": 17,
"text": "'"
},
{
"math_id": 18,
"text": "Q"
},
{
"math_id": 19,
"text": "R"
},
{
"math_id": 20,
"text": " Q(t)"
},
{
"math_id": 21,
"text": "R(t)"
},
{
"math_id": 22,
"text": "t"
},
{
"math_id": 23,
"text": "\\pi"
},
{
"math_id": 24,
"text": "\nu(t)=K(t)\\hat x(t),\n"
},
{
"math_id": 25,
"text": "\\hat x(t)"
},
{
"math_id": 26,
"text": "x(t)"
},
{
"math_id": 27,
"text": "\nd\\hat x=A(t)\\hat x(t)\\,dt+B_1(t)u(t)\\,dt +L(t)(dy-C(t)\\hat x(t)\\,dt),\\quad \\hat x(0)=0,\n"
},
{
"math_id": 28,
"text": "K"
},
{
"math_id": 29,
"text": "B_2=D=0"
},
{
"math_id": 30,
"text": "L"
},
{
"math_id": 31,
"text": "\n\\hat{x}(t)= \\operatorname E\\{ x(t)\\mid {\\cal Y}_t\\},\n"
},
{
"math_id": 32,
"text": "\n{\\cal Y}_t:=\\sigma\\{ y(\\tau), \\tau\\in [0,t]\\}, \\quad 0\\leq t\\leq T,\n"
},
{
"math_id": 33,
"text": "\\{{\\cal Y}_t, \\, 0\\leq t\\leq T\\}"
},
{
"math_id": 34,
"text": "\nJ(u)=\\operatorname{E}\\left\\{ \\int_0^T(u-Kx)'R(u-Kx) \\, dt\\right\\}\n+\\text{terms that do not depend on }u,\n"
},
{
"math_id": 35,
"text": "\\begin{align}\n\\operatorname{E}\\left\\{ \\int_0^T(u-Kx)'R(u-Kx)\\,dt\\right\\}=\\operatorname{E}\\left\\{\\int_0^T[(u-K\\hat{x})'R(u-K\\hat{x})+\\operatorname{tr}(K'RK\\Sigma)] \\, dt\\right\\},\n\\end{align}"
},
{
"math_id": 36,
"text": "\\Sigma"
},
{
"math_id": 37,
"text": "\n\\Sigma(t):=\\operatorname{E}\\{[x(t)-\\hat{x}(t)][x(t)-\\hat{x}(t)]'\\}.\n"
},
{
"math_id": 38,
"text": "\\begin{align}\\Sigma\\end{align}"
},
{
"math_id": 39,
"text": "\nx(t)=x_0(t)+\\int_0^t \\Phi(t,s)B_1(s)u(s) \\, ds,\n"
},
{
"math_id": 40,
"text": "x_0"
},
{
"math_id": 41,
"text": "u=0"
},
{
"math_id": 42,
"text": "\\Phi"
},
{
"math_id": 43,
"text": "\\hat{x}(t)=\\operatorname{E}\\{x(t)\\mid {\\cal Y}_t\\}"
},
{
"math_id": 44,
"text": "\n\\hat{x}(t)=\\hat{x}_0(t)+\\int_0^t \\Phi(t,s)B_1(s)u(s)\\,ds,\n"
},
{
"math_id": 45,
"text": "\\hat{x}_0(t)=\\operatorname{E}\\{x_0(t)\\mid {\\cal Y}_t\\}"
},
{
"math_id": 46,
"text": "\n\\Sigma(t):=\\mathbb{E}\\{[x_0(t)-\\hat{x}_0(t)][x_0(t)-\\hat{x}_0(t)]'\\},\n"
},
{
"math_id": 47,
"text": "\\begin{align}\\hat{x}_0\\end{align}"
},
{
"math_id": 48,
"text": "\n{\\cal Y}_t ={\\cal Y}_t^0:=\\sigma\\{ y_0(\\tau), \\tau\\in [0,t]\\}, \\quad 0\\leq t\\leq T,\n"
},
{
"math_id": 49,
"text": "y_0"
},
{
"math_id": 50,
"text": "\\{ {\\cal Y}_t^0\\}"
},
{
"math_id": 51,
"text": "\\{ {\\mathcal Y}_t^0\\}"
},
{
"math_id": 52,
"text": "\\Pi"
},
{
"math_id": 53,
"text": "u:=\\pi(y)"
},
{
"math_id": 54,
"text": "u_\\pi"
},
{
"math_id": 55,
"text": "\\{{\\mathcal Y}_t^0\\}"
},
{
"math_id": 56,
"text": "\n({\\mathcal L})\\quad u(t)=\\bar{u}(t)+\\int_0^tF(t,\\tau)\\,dy,\n"
},
{
"math_id": 57,
"text": "\\bar{u}"
},
{
"math_id": 58,
"text": "F"
},
{
"math_id": 59,
"text": "L_2"
},
{
"math_id": 60,
"text": "\\hat{x}"
},
{
"math_id": 61,
"text": "\\tilde{x}:= x-\\hat{x}"
},
{
"math_id": 62,
"text": "\nd\\tilde{x}=(A-LC)\\tilde{x}\\,dt +(B_2-LD)\\,dw, \\quad \\tilde{x}(0)=x(0),\n"
},
{
"math_id": 63,
"text": "\\begin{align}\\pi:\\, u(t)=\\psi(t,\\hat{x}(t))\\end{align}"
},
{
"math_id": 64,
"text": "\\begin{align}C(t)\\end{align}"
},
{
"math_id": 65,
"text": "J(u)"
},
{
"math_id": 66,
"text": "u(t)=\\phi(t,y)"
},
{
"math_id": 67,
"text": "\\phi:\\, [0,T]\\times C^n [0,T]\\to{\\mathbb R}^m"
},
{
"math_id": 68,
"text": "u(t)=\\psi(t,\\hat{\\xi}(t))"
},
{
"math_id": 69,
"text": "\\hat{\\xi}"
},
{
"math_id": 70,
"text": "\n\\hat{\\xi}(t)=\\operatorname{E}\\{ x_0(t)\\mid {\\mathcal Y}_t^0\\}+ \\int_0^t \\Phi(t,s)B_1(s)u(s)\\,ds,\n"
},
{
"math_id": 71,
"text": "\\begin{align}\\hat{x}=\\hat{\\xi}\\end{align}"
},
{
"math_id": 72,
"text": "u(t)"
},
{
"math_id": 73,
"text": "y(\\tau); \\, 0\\leq\\tau\\leq t-\\varepsilon"
},
{
"math_id": 74,
"text": "{\\cal Y}_t ={\\cal Y}_t^0"
},
{
"math_id": 75,
"text": "0\\leq t\\leq T"
},
{
"math_id": 76,
"text": "\ndx =A(t)x(t)\\,dt+B_1(t)u(t)\\,dt+B_2(t)\\,dw\n"
},
{
"math_id": 77,
"text": "\\begin{align}u\\end{align}"
},
{
"math_id": 78,
"text": "\nd\\tilde{w}:= B_1(t)u(t)\\,dt+B_2(t)\\,dw\n"
},
{
"math_id": 79,
"text": "\\begin{align}\\{{\\mathcal Y}_t\\}\\end{align}"
},
{
"math_id": 80,
"text": "\\begin{align}\\hat{x}\\end{align}"
},
{
"math_id": 81,
"text": "\\begin{align}w\\end{align}"
},
{
"math_id": 82,
"text": "\\begin{align}\n z(t) & =z_0(t) + \\int_0^t G(t,s)u(s)\\,ds \\\\\n y(t) & = Hz(t)\n\\end{align}"
},
{
"math_id": 83,
"text": "\\begin{align}z_0\\end{align}"
},
{
"math_id": 84,
"text": "z=[x',y']'"
},
{
"math_id": 85,
"text": "z_0=[x_0',y_0']'"
},
{
"math_id": 86,
"text": "H=[I,0]"
},
{
"math_id": 87,
"text": "\nz=z_0+g\\pi Hz\n"
},
{
"math_id": 88,
"text": "\ng\\;:\\; (t,u) \\mapsto \\int_0^t G(t,\\tau)u(\\tau)\\,d\\tau\n"
},
{
"math_id": 89,
"text": "z=z_0+g\\pi Hz"
},
{
"math_id": 90,
"text": "z_\\pi"
},
{
"math_id": 91,
"text": "u=\\pi(Hz_\\pi)"
},
{
"math_id": 92,
"text": "z=F(z_0)"
},
{
"math_id": 93,
"text": "z"
},
{
"math_id": 94,
"text": "z_0"
},
{
"math_id": 95,
"text": "D\\to D"
},
{
"math_id": 96,
"text": "f(z):=g\\pi Hz"
},
{
"math_id": 97,
"text": "z=z_0+f(z)"
},
{
"math_id": 98,
"text": "z\\in D"
},
{
"math_id": 99,
"text": "z_0\\in D"
},
{
"math_id": 100,
"text": "(1-f)^{-1}"
},
{
"math_id": 101,
"text": "g\\pi"
},
{
"math_id": 102,
"text": "H"
},
{
"math_id": 103,
"text": "H^{-R}"
},
{
"math_id": 104,
"text": "(1-Hg\\pi)^{-1}"
},
{
"math_id": 105,
"text": "H=[0,I]"
},
{
"math_id": 106,
"text": "H^{-R}=H'"
},
{
"math_id": 107,
"text": "\n\\begin{align}\n dx & =A(t)x(t)\\,dt+B_1(t)u(t)\\,dt+B_2(t)\\,dw \\\\\n dy & =C(t)x(t)\\,dt +D(t)\\,dw\n\\end{align}\n"
},
{
"math_id": 108,
"text": "u(t)=K(t)\\hat{x}(t)"
},
{
"math_id": 109,
"text": "\\begin{align}\ndx &=\\left(\\int_{t-h}^t d_s\\,A(t,s)x(s)\\right) \\,dt + B_1(t)u(t)\\,dt+B_2(t)\\,dw \\\\\n dy & =\\left(\\int_{t-h}^t d_s\\,C(t,s)x(s)\\right) \\,dt +D(t)\\,dw\n\\end{align}"
},
{
"math_id": 110,
"text": "\\begin{align}A\\end{align}"
},
{
"math_id": 111,
"text": "x(t)=\\xi(t)"
},
{
"math_id": 112,
"text": "-h\\leq t\\leq 0"
},
{
"math_id": 113,
"text": "A(t,s)=0"
},
{
"math_id": 114,
"text": "s\\geq t"
},
{
"math_id": 115,
"text": "A(t,s)=A(t,t-h)"
},
{
"math_id": 116,
"text": "t\\leq t-h"
},
{
"math_id": 117,
"text": "s\\mapsto A(t,s)"
},
{
"math_id": 118,
"text": "\nJ(u)=\\operatorname{E}\\left(\\int_0^T x(t)'Q(t)x(t)\\,d\\alpha(t)+\\int_0^Tu(t)'R(t)u(t)\\,dt\\right),\n"
},
{
"math_id": 119,
"text": "\\begin{align}d\\alpha\\end{align}"
},
{
"math_id": 120,
"text": "\\begin{align}w=0\\end{align}"
},
{
"math_id": 121,
"text": "\nu(t)=\\int_{t-h}^t d_\\tau \\, K(t,\\tau)x(\\tau),\n"
},
{
"math_id": 122,
"text": "\\begin{align}K\\end{align}"
},
{
"math_id": 123,
"text": "\\begin{align}\\pi:\\, y\\mapsto u\\end{align}"
},
{
"math_id": 124,
"text": "\\begin{align}J(u)\\end{align}"
},
{
"math_id": 125,
"text": "\nu(t)=\\int_{t-h}^t d_s \\, K(t,s)\\hat{x}(s\\mid t),\n"
},
{
"math_id": 126,
"text": "\\hat{x}(s\\mid t) := E\\{ x(s)\\mid {\\cal Y}_t\\}"
},
{
"math_id": 127,
"text": "\\begin{align}\n d\\hat{x}(t\\mid t) & =\\int_{t-h}^t d_s \\, A(t,s)\\hat{x}(s\\mid t) \\, dt +B_1u\\,dt+ X(t,t)\\,dv \\\\\n d\\hat{x}(t\\mid t) & =\\int_{t-h}^t d_s \\, A(t,s)\\hat{x}(s\\mid t) \\, dt +B_1u\\,dt+ X(t,t)\\,dv\n\\end{align}"
},
{
"math_id": 128,
"text": "v"
},
{
"math_id": 129,
"text": "\ndv=dy - \\int_{t-h}^t d_sC(t,s)\\hat{x}(s\\mid t)\\, dt, \\quad v(0)=0,\n"
}
] |
https://en.wikipedia.org/wiki?curid=58383744
|
58402395
|
Reachability analysis
|
Solution to the reachability problem in distributed systems (computer science)
Reachability analysis is a solution to the reachability problem in the particular context of distributed systems. It is used to determine which global states can be reached by a distributed system which consists of a certain number of local entities that communicate by the exchange of messages.
Overview.
Reachability analysis was introduced in a paper of 1978 for the analysis and verification of communication protocols. This paper was inspired by a 1968 paper by Bartlett et al. which presented the alternating bit protocol using finite-state modeling of the protocol entities, and also pointed out that a similar protocol described earlier had a design flaw. This protocol belongs to the Link layer and, under certain assumptions, provides as a service correct data delivery without loss or duplication, despite the occasional presence of message corruption or loss.
For reachability analysis, the local entities are modeled by their states and transitions. An entity changes state when it sends a message, consumes a received message, or performs an interaction at its local service interface. The global state formula_0 of a system with n entities is determined by the states formula_1 (i=1, ... n) of the entities and the state of the communication formula_2. In the simplest case, the medium between two entities is modeled by two FIFO queues in opposite directions, which contain the messages in transit (that are sent, but not yet consumed). Reachability analysis considers the possible behavior of the distributed system by analyzing all possible sequences of state transitions of the entities, and the corresponding global states reached.
The result of reachability analysis is a global state transition graph (also called a reachability graph) which shows all global states of the distributed system that are reachable from the initial global state, and all possible sequences of send, consume and service interactions performed by the local entities. However, in many cases this transition graph is unbounded and cannot be explored completely. The transition graph can be used for checking general design flaws of the protocol (see below), but also for verifying that the sequences of service interactions by the entities correspond to the requirements given by the global service specification of the system.
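The following Python sketch illustrates how such a reachability graph can be computed. It uses a made-up two-message toy protocol (not the example discussed further below): a global state combines the two entity states with the contents of the two FIFO queues, successor states are generated from the send ("!") and consume ("?") transitions, and the graph is explored breadth-first with a truncation bound on the number of messages in transit.

```python
from collections import deque

# Toy reachability analysis: two entities modelled as finite-state machines,
# connected by two FIFO queues (one per direction). A transition ('!', m, s)
# sends message m and moves to state s; ('?', m, s) consumes m and moves to s.

A_TRANS = {1: [('!', 'ma', 2)], 2: [('?', 'mb', 1)]}   # entity A (invented)
B_TRANS = {1: [('?', 'ma', 2)], 2: [('!', 'mb', 1)]}   # entity B (invented)
BOUND = 2   # truncate exploration when a queue already holds BOUND messages

def successors(state):
    a, b, ab, ba = state                 # states of A and B, queues A->B and B->A
    for kind, msg, nxt in A_TRANS.get(a, []):          # moves of entity A
        if kind == '!' and len(ab) < BOUND:
            yield (nxt, b, ab + (msg,), ba)
        elif kind == '?' and ba and ba[0] == msg:
            yield (nxt, b, ab, ba[1:])
    for kind, msg, nxt in B_TRANS.get(b, []):          # moves of entity B
        if kind == '!' and len(ba) < BOUND:
            yield (a, nxt, ab, ba + (msg,))
        elif kind == '?' and ab and ab[0] == msg:
            yield (a, nxt, ab[1:], ba)

def reachability_graph(initial=(1, 1, (), ())):
    """Breadth-first exploration of all global states reachable from the initial state."""
    graph, frontier = {initial: []}, deque([initial])
    while frontier:
        s = frontier.popleft()
        for t in successors(s):
            graph[s].append(t)
            if t not in graph:
                graph[t] = []
                frontier.append(t)
    return graph

g = reachability_graph()
deadlocks = [s for s, succ in g.items() if not succ]   # global states with no outgoing transition
print(len(g), "reachable global states; deadlocks:", deadlocks)
```

Design flaws such as global deadlocks then show up as properties of this graph, exactly as in the example discussed below.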
Protocol properties.
"Boundedness:" The global state transition graph is bounded if the number of messages that may be in transit is bounded and the number states of all entities is bounded. The question whether the number of messages remains bounded in the case of finite state entities is in general not decidable. One usually truncates the exploration of the transition graph when the number of messages in transit reaches a given threshold.
The following are design flaws:
An example.
As an example, we consider the system of two protocol entities that exchange the messages "ma", "mb", "mc" and "md" with one another, as shown in the first diagram. The protocol is defined by the behavior of the two entities, which is given in the second diagram in the form of two state machines. Here the symbol "!" means sending a message, and "?" means consuming a received message. The initial states are the states "1".
The third diagram shows the result of the reachability analysis for this protocol in the form of a global state machine. Each global state has four components: the state of protocol entity A (left), the state of the entity B (right) and the messages in transit in the middle (upper part: from A to B; lower part: from B to A). Each transition of this global state machine corresponds to one transition of protocol entity A or entity B. The initial state is [1, - - , 1] (no messages in transit).
One sees that this example has a bounded global state space - the maximum number of messages that may be in transit at the same time is two. This protocol has a global deadlock, which is the state [2, - - , 3]. If one removes the transition of A in state 2 for consuming message "mb", there will be an unspecified reception in the global states [2, "ma mb" ,3] and [2, - "mb" ,3].
Message transmission.
The design of a protocol has to be adapted to the properties of the underlying communication medium, to the possibility that the communication partner fails, and to the mechanism used by an entity to select the next message for consumption. The communication medium for protocols at the Link level is normally not reliable and allows for erroneous reception and message loss (modeled as a state transition of the medium). Protocols using the Internet IP service should also deal with the possibility of out-of-order delivery. Higher-level protocols normally use a session-oriented Transport service which means that the medium provides reliable FIFO transmission of messages between any pair of entities. However, in the analysis of distributed algorithms, one often takes into account the possibility that some entity fails completely, which is normally detected (like a loss of message in the medium) by a timeout mechanism when an expected message does not arrive.
Different assumptions have been made about whether an entity can select a particular message for consumption when several messages have arrived and are ready for consumption. The basic models are the following:
The original paper identifying the problem of unspecified receptions, and much of the subsequent work, assumed a single input queue. Sometimes, unspecified receptions are introduced by a race condition, which means that two messages are received and their order is not defined (which is often the case if they come from different partners). Many of these design flaws disappear when multiple queues or reception pools are used. With the systematic use of reception pools, reachability analysis should check for partial deadlocks and for messages remaining forever in the pool (without being consumed by the entity).
Practical issues.
Most of the work on protocol modeling uses finite-state machines (FSM) to model the behavior of the distributed entities (see also Communicating finite-state machines). However, this model is not powerful enough to model message parameters and local variables. Therefore, so-called extended FSM models are often used, such as those supported by languages like SDL or UML state machines. Unfortunately, reachability analysis becomes much more complex for such models.
A practical issue of reachability analysis is the so-called "state space explosion". If the two entities of a protocol have 100 states each, and the medium may include 10 types of messages, up to two in each direction, then the number of global states in the reachability graph is bounded by the number 100 x 100 x (10 x 10) x (10 x 10), which is 100 million. Therefore, a number of tools have been developed to automatically perform reachability analysis and model checking on the reachability graph. We mention only two examples: the SPIN model checker and a toolbox for the construction and analysis of distributed processes.
References and notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "s = (s_1, s_2, ..., s_n, medium)"
},
{
"math_id": 1,
"text": " s_i"
},
{
"math_id": 2,
"text": "medium"
}
] |
https://en.wikipedia.org/wiki?curid=58402395
|
58402686
|
Michell structures
|
Structures that are optimal based on the criteria defined by A.G.M. Michell
Michell structures are structures that are optimal based on the criteria defined by A.G.M. Michell in his frequently referenced 1904 paper.
Michell states that "a frame (today called truss) (is optimal) attains the limit of economy of material possible in any frame-structure under the same applied forces, if the space occupied by it can be subjected to an appropriate small deformation, such that the strains in all the bars of the frame are increased by equal fractions of their lengths, not less than the fractional change of length of any element of the space."
The above conclusion is based on the Maxwell load-path theorem:
formula_0
Where formula_1 is the tension value in any tension element of length formula_2, formula_3 is the compression value in any compression element of length formula_4 and formula_5 is a constant value which is based on external loads applied to the structure.
Based on the Maxwell load-path theorem, reducing the load path of tension members formula_6 will reduce by the same value the load path of compression elements formula_7 for a given set of external loads. A structure with minimum load path is one having minimum compliance (having minimum weighted deflection at the points of applied loads, weighted by the values of these loads). In consequence, Michell structures are minimum-compliance trusses.
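As a small numerical check of the Maxwell load-path theorem, the Python sketch below evaluates both sides of the identity for a made-up two-bar truss, taking the constant formula_5 to be the sum of the scalar products of all external forces (the applied load and the support reactions) with the position vectors of their points of application; the geometry and load values are illustrative only.

```python
import math

# Two-bar truss: supports at A and B, a 1000 N downward load at the apex C.
# By symmetry each member carries a compressive force of 1000/sqrt(2) N.

nodes = {"A": (0.0, 0.0), "B": (2.0, 0.0), "C": (1.0, 1.0)}
P = 1000.0
f = P / math.sqrt(2.0)                       # member force magnitude (compression)
length = math.dist(nodes["A"], nodes["C"])   # both members have length sqrt(2)

tension_load_path = 0.0                      # no tension members in this truss
compression_load_path = 2 * f * length       # two identical compression members

# External forces: the applied load at C and the two support reactions
# obtained from nodal equilibrium (500 N horizontal and 500 N vertical each).
external = [
    ((0.0, -P), nodes["C"]),          # applied load
    ((+500.0, +500.0), nodes["A"]),   # reaction at support A
    ((-500.0, +500.0), nodes["B"]),   # reaction at support B
]
C_const = sum(fx * x + fy * y for (fx, fy), (x, y) in external)

lhs = tension_load_path - compression_load_path
print(lhs, C_const)                   # both are approximately -2000 N*m
assert math.isclose(lhs, C_const)
```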
Special cases.
1. All bars of a truss are subject to a load of the same sign (tension or compression).
The required volume of material is the same for all possible cases for a given set of loads. Michell defines the minimum required volume of material to be:
formula_8
Where formula_9 is the allowable stress in the material.
2. Mixed tension and compression bars
A more general case is that of frames consisting of bars which, both before and after the appropriate deformation, form curves of orthogonal systems. A two-dimensional orthogonal system remains orthogonal after stretching one series of curves and compressing the other with equal strain if and only if the inclination between any two adjacent curves of the same series is constant throughout their length. This requirement results in the orthogonal series of curves being either:
a) systems of tangents and involutes or
b) systems of intersecting logarithmic spirals.
Note that a straight line and a circle are special cases of a logarithmic spiral.
Examples.
Michell provided several examples of optimum frames:
Prager trusses.
In recent years a lot of studies have been done on discrete optimum trusses. Although Michell trusses are defined for a continuum (an infinite number of members), discrete optimum trusses are sometimes called Michell trusses as well. A significant contribution to the topic of discrete optimum trusses was made by William Prager, who used the method of the circle of relative displacements to arrive at the optimal topology of such trusses (typically cantilevers). To recognize Prager's contribution, discrete Michell trusses are sometimes called Prager trusses. Later, the geometry of cantilevered Prager trusses was formalized by Mazurek, Baker and Tort, who noticed certain geometrical relationships between members of optimal discrete trusses for three-point or three-force problems.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sum_{} l_p f_p - \\sum_{} l_q f_q = C"
},
{
"math_id": 1,
"text": "f_p"
},
{
"math_id": 2,
"text": "l_p"
},
{
"math_id": 3,
"text": "f_q"
},
{
"math_id": 4,
"text": "l_q"
},
{
"math_id": 5,
"text": "C"
},
{
"math_id": 6,
"text": "\\textstyle \\sum_{} l_p f_p"
},
{
"math_id": 7,
"text": "\\textstyle \\sum_{} l_q f_q"
},
{
"math_id": 8,
"text": " V_m=\\frac{\\sum_{} l f}{P}"
},
{
"math_id": 9,
"text": "P"
}
] |
https://en.wikipedia.org/wiki?curid=58402686
|
58413
|
Chord (aeronautics)
|
Imaginary straight line joining the leading and trailing edges of an aerofoil
In aeronautics, the chord is an imaginary straight line joining the leading edge and trailing edge of an aerofoil. The chord length is the distance between the trailing edge and the point where the chord intersects the leading edge. The point on the leading edge used to define the chord may be the surface point of minimum radius. For a turbine aerofoil the chord may be defined by the line between points where the front and rear of a 2-dimensional blade section would touch a flat surface when laid convex-side up.
The wing, horizontal stabilizer, vertical stabilizer and propeller/rotor blades of an aircraft are all based on aerofoil sections, and the term "chord" or "chord length" is also used to describe their width. The chord of a wing, stabilizer and propeller is determined by measuring the distance between leading and trailing edges in the direction of the airflow. (If a wing has a rectangular planform, rather than tapered or swept, then the chord is simply the width of the wing measured in the direction of airflow.) The term "chord" is also applied to the width of wing flaps, ailerons and rudder on an aircraft.
The term is also applied to compressor and turbine aerofoils in gas turbine engines such as turbojet, turboprop, or turbofan engines for aircraft propulsion.
Many wings are not rectangular, so they have different chords at different positions. Usually, the chord length is greatest where the wing joins the aircraft's fuselage (called the root chord) and decreases along the wing toward the wing's tip (the tip chord). Most jet aircraft use a tapered swept wing design. To provide a characteristic figure that can be compared among various wing shapes, the mean aerodynamic chord (abbreviated MAC) is used, although it is complex to calculate. The mean aerodynamic chord is used for calculating pitching moments.
Standard mean chord.
Standard mean chord (SMC) is defined as wing area divided by wing span:
formula_0
where "S" is the wing area and "b" is the span of the wing. Thus, the SMC is the chord of a rectangular wing with the same area and span as those of the given wing. This is a purely geometric figure and is rarely used in aerodynamics.
Mean aerodynamic chord.
Mean aerodynamic chord (MAC) is defined as:
formula_1formula_2
where "y" is the coordinate along the wing span and "c" is the chord at the coordinate "y". Other terms are as for SMC.
The MAC is a two-dimensional representation of the whole wing. The pressure distribution over the entire wing can be reduced to a single lift force on and a moment around the aerodynamic center of the MAC. Therefore, not only the length but also the position of MAC is often important. In particular, the position of center of gravity (CG) of an aircraft is usually measured relative to the MAC, as the percentage of the distance from the leading edge of MAC to CG with respect to MAC itself.
Note that the figure to the right implies that the MAC occurs at a point where leading or trailing edge sweep changes. That is just a coincidence. In general, this is not the case. Any shape other than a simple trapezoid requires evaluation of the above integral.
The ratio of the length (or "span") of a rectangular-planform wing to its chord is known as the aspect ratio, an important indicator of the lift-induced drag the wing will create. (For wings with planforms that are not rectangular, the aspect ratio is calculated as the square of the span divided by the wing planform area.) Wings with higher aspect ratios will have less induced drag than wings with lower aspect ratios. Induced drag is most significant at low airspeeds. This is why gliders have long slender wings.
Tapered wing.
Knowing the area (Sw), taper ratio (formula_3) and the span (b) of the wing, the chord at any position on the span can be calculated by the formula:
formula_4
where
formula_5
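As an illustration of the tapered-wing formulas above, the short Python sketch below (the function names are illustrative only, not from any aeronautics library) evaluates the chord distribution and recovers the standard mean chord and mean aerodynamic chord for an example wing by numerical integration.

import numpy as np

def taper_chord(y, S_w, taper, b):
    # Chord c(y) of a linearly tapered wing at spanwise position y, |y| <= b/2.
    return 2.0 * S_w / ((1.0 + taper) * b) * (1.0 - (1.0 - taper) / b * np.abs(2.0 * y))

def mean_aerodynamic_chord(S_w, taper, b, n=20000):
    # MAC = (2/S) * integral of c(y)^2 over the half-span (trapezoidal rule).
    y = np.linspace(0.0, b / 2.0, n)
    c2 = taper_chord(y, S_w, taper, b) ** 2
    integral = np.sum((c2[:-1] + c2[1:]) / 2.0 * np.diff(y))
    return 2.0 / S_w * integral

# Example wing (assumed values): area 15 m^2, taper ratio 0.5, span 10 m.
S_w, taper, b = 15.0, 0.5, 10.0
print("SMC =", S_w / b)                                  # 1.5 m
print("root chord =", taper_chord(0.0, S_w, taper, b))   # 2.0 m
print("MAC =", mean_aerodynamic_chord(S_w, taper, b))    # ~1.556 m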
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mbox{SMC} = \\frac{S}{b},"
},
{
"math_id": 1,
"text": "\\mbox{MAC} = \\frac{2}{S}"
},
{
"math_id": 2,
"text": "\\int_{0}^{\\frac{b}{2}}c(y)^2 dy,"
},
{
"math_id": 3,
"text": "\\lambda"
},
{
"math_id": 4,
"text": "c(y)=\\frac{2\\,S_w}{(1+\\lambda)b}\\left[1-\\frac{1-\\lambda}{b}|2 y|\\right]."
},
{
"math_id": 5,
"text": "\\lambda=\\frac{C_{\\rm Tip}}{C_{\\rm Root}}"
}
] |
https://en.wikipedia.org/wiki?curid=58413
|
58415827
|
Shimansky equation
|
Formula for a liquid's heat of vaporization as a function of temperature
In thermodynamics, the Shimansky equation describes the temperature dependence of the heat of vaporization (also known as the enthalpy of vaporization or the heat of evaporation):
formula_0
where:
This equation was obtained in 1955 by Yu. I. Shimansky, at first empirically, and later derived theoretically. The Shimansky equation does not contain any arbitrary constants, since the value of TC can be determined experimentally and "L"0 can be calculated if L has been measured experimentally for at least one given value of temperature T. The Shimansky equation describes quite well the heat of vaporization for a wide variety of liquids. For chemical compounds that belong to the same class (e.g. alcohols) the value of the "L"0/TC ratio remains constant. For each such class of liquids, the Shimansky equation can be re-written in the form
formula_1
where formula_2
The latter formula is a mathematical expression of structural similarity of liquids. The value of TC plays a role of the parameter for a group of curves of temperature dependence of L.
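Because the equation is implicit in L, evaluating it at a given temperature requires a numerical root-finder. The Python sketch below uses a simple fixed-point iteration; the parameter values are arbitrary illustrative numbers, not fitted data, and the function name is chosen for illustration.

import math

def shimansky_L(T, T_c, L0, iterations=200):
    # Solve L = L0 * tanh(L * T_c / (L0 * T)) for the non-trivial root (valid for T < T_c).
    L = L0                      # starting from L0 avoids converging to the trivial root L = 0
    for _ in range(iterations):
        L = L0 * math.tanh(L * T_c / (L0 * T))
    return L

# Illustrative values only: T_c = 647 K, L0 = 50 kJ/mol.
for T in (300.0, 450.0, 600.0):
    print(T, round(shimansky_L(T, 647.0, 50.0), 3))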
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " L = L_0 \\tanh\\left( \\frac{L T_C}{L_0 T}\\right)"
},
{
"math_id": 1,
"text": " \\frac{L}{AT_C} = \\tanh\\frac{L}{AT}, "
},
{
"math_id": 2,
"text": " A = \\tfrac {L_0}{T_C} = \\text{const}."
}
] |
https://en.wikipedia.org/wiki?curid=58415827
|
58417911
|
Weibel's conjecture
|
In mathematics, Weibel's conjecture gives a criterion for the vanishing of negative algebraic K-theory groups. The conjecture was proposed by Charles Weibel (1980) and proven in full generality by Moritz Kerz, Florian Strunk and Georg Tamme using methods from derived algebraic geometry. Partial cases had been proven earlier by several authors.
Statement of the conjecture.
Weibel's conjecture asserts that for a Noetherian scheme "X" of finite Krull dimension "d", the "K"-groups vanish in degrees < −"d":
formula_0
and asserts moreover a homotopy invariance property for negative "K"-groups
formula_1
|
[
{
"math_id": 0,
"text": " K_i(X) = 0 \\text{ for } i<-d "
},
{
"math_id": 1,
"text": " K_i(X) = K_i(X\\times \\mathbb A^r) \\text{ for } i\\le -d \\text{ and arbitrary } r. "
}
] |
https://en.wikipedia.org/wiki?curid=58417911
|
5841813
|
Mordell–Weil group
|
Abelian group
In arithmetic geometry, the Mordell–Weil group is an abelian group associated to any abelian variety formula_0 defined over a number field formula_1. It is an arithmetic invariant of the abelian variety. It is simply the group of formula_1-points of formula_0, so formula_2 is the Mordell–Weil group. The main structure theorem about this group is the Mordell–Weil theorem, which shows that this group is in fact a finitely generated abelian group. Moreover, there are many conjectures related to this group, such as the Birch and Swinnerton-Dyer conjecture, which relates the rank of formula_2 to the order of vanishing of the associated L-function at a special point.
Examples.
Constructing explicit examples of the Mordell–Weil group of an abelian variety is a non-trivial process which is not always guaranteed to be successful, so we instead specialize to the case of a specific elliptic curve formula_3. Let formula_4 be defined by the Weierstrass equation formula_5 over the rational numbers. It has discriminant formula_6 (and this polynomial can be used to define a global model formula_7). It can be found that formula_8 through the following procedure. First, we find some obvious torsion points by plugging in some numbers, namely formula_9 In addition, after trying some small pairs of integers, we find that formula_10 is a point which is not obviously torsion. One useful result for finding the torsion part of formula_11 is that the torsion of order prime to formula_12, for formula_4 having good reduction at formula_12, denoted formula_13, injects into formula_14, so formula_15 We check at the two primes formula_16 and calculate the cardinalities of the sets formula_17 Since the torsion subgroup injects into both groups, its order must divide the factor formula_18 common to both cardinalities, so the four torsion points found above are all of the torsion. It follows that the point formula_10 has infinite order, so the rank is at least formula_19. Computing the rank itself is a more arduous process, consisting of calculating the group formula_20, where formula_21, using some long exact sequences from homological algebra and the Kummer map.
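The point counts used above are easy to reproduce by brute force. The following plain-Python sketch (the function name is ours) counts the points of the curve over small prime fields of good reduction; it only verifies the cardinalities, while computing the rank itself requires the descent machinery mentioned above.

def count_points(p):
    # Number of points of y^2 = x(x - 6)(x + 6) over F_p, including the point at infinity.
    roots_of = {}
    for y in range(p):
        roots_of.setdefault(y * y % p, []).append(y)
    total = 1                                    # the point at infinity
    for x in range(p):
        rhs = x * (x - 6) * (x + 6) % p
        total += len(roots_of.get(rhs, []))
    return total

for p in (5, 11):
    print(p, count_points(p))                    # expected: 8 and 12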
Theorems concerning special cases.
There are many theorems in the literature about the structure of the Mordell–Weil groups of abelian varieties of specific dimension, over specific fields, or having some other special property.
Abelian varieties over the rational function field "k"("t").
For a hyperelliptic curve formula_22 and an abelian variety formula_0 defined over a fixed field formula_23, we denote by formula_24 the twist of formula_25 (the pullback of formula_0 to the function field formula_26) by a 1-cocycle formula_27 for Galois cohomology of the field extension associated to the covering map formula_28. Note that formula_29, which follows from the map being hyperelliptic. More explicitly, this 1-cocycle is given as a map of groups formula_30 which, using universal properties, is the same as giving two maps formula_31; hence we can write it as a map formula_32 where formula_33 is the inclusion map and formula_34 is sent to negative formula_35. This can be used to define the twisted abelian variety formula_24 defined over formula_36 using the general theory of algebraic geometry. In particular, from universal properties of this construction, formula_24 is an abelian variety over formula_36 which is isomorphic to formula_37 after base-change to formula_38.
Theorem.
For the setup given above, there is an isomorphism of abelian groups formula_39 where formula_40 is the Jacobian of the curve formula_22, and formula_41 is the 2-torsion subgroup of formula_0.
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "K"
},
{
"math_id": 2,
"text": "A(K)"
},
{
"math_id": 3,
"text": "E/\\mathbb{Q}"
},
{
"math_id": 4,
"text": "E"
},
{
"math_id": 5,
"text": "y^2 = x(x-6)(x+6)"
},
{
"math_id": 6,
"text": "\\Delta_E = 2^{12}\\cdot 3^6"
},
{
"math_id": 7,
"text": "\\mathcal{E}/\\mathbb{Z}"
},
{
"math_id": 8,
"text": "E(\\mathbb{Q}) \\cong \\mathbb{Z}/2\\times \\mathbb{Z}/2 \\times \\mathbb{Z}"
},
{
"math_id": 9,
"text": "\\infty, (0,0), (6,0), (-6,0)"
},
{
"math_id": 10,
"text": "(-3,9)"
},
{
"math_id": 11,
"text": "E(\\mathbb{Q})"
},
{
"math_id": 12,
"text": "p"
},
{
"math_id": 13,
"text": "E(\\mathbb{Q})_{\\mathrm{tors},p}"
},
{
"math_id": 14,
"text": "E(\\mathbb{F}_p)"
},
{
"math_id": 15,
"text": "E(\\mathbb{Q})_{\\mathrm{tors},p} \\hookrightarrow E(\\mathbb{F}_p)"
},
{
"math_id": 16,
"text": "p = 5,7"
},
{
"math_id": 17,
"text": "\\begin{align}\n\\# E(\\mathbb{F}_5) &= 8 = 2^3 \\\\\n\\# E(\\mathbb{F}_{7}) &= 12 = 2^2\\cdot 3 \n\\end{align}"
},
{
"math_id": 18,
"text": "2^2"
},
{
"math_id": 19,
"text": "1"
},
{
"math_id": 20,
"text": "E(\\mathbb{Q})/2E(\\mathbb{Q}) \\cong (\\mathbb{Z}/2)^{r + 2}"
},
{
"math_id": 21,
"text": "r = \\operatorname{rank}(E(\\mathbb{Q}))"
},
{
"math_id": 22,
"text": "C"
},
{
"math_id": 23,
"text": "k"
},
{
"math_id": 24,
"text": "A_b"
},
{
"math_id": 25,
"text": "A|_{k(t)}"
},
{
"math_id": 26,
"text": "k(t) = k(\\mathbb{P}^1)"
},
{
"math_id": 27,
"text": "b \\in Z^1(\\operatorname{Gal}(k(C)/k(t)), \\text{Aut}(A))"
},
{
"math_id": 28,
"text": "f:C \\to \\mathbb{P}^1"
},
{
"math_id": 29,
"text": "G = \\operatorname{Gal}(k(C)/k(t) \\cong \\mathbb{Z}/2"
},
{
"math_id": 30,
"text": "G\\times G \\to \\operatorname{Aut}(A)"
},
{
"math_id": 31,
"text": "G \\to \\text{Aut}(A)"
},
{
"math_id": 32,
"text": "b = (b_{id}, b_{\\iota})"
},
{
"math_id": 33,
"text": "b_{id}"
},
{
"math_id": 34,
"text": "b_\\iota"
},
{
"math_id": 35,
"text": "\\operatorname{Id}_A"
},
{
"math_id": 36,
"text": "k(t)"
},
{
"math_id": 37,
"text": "A|_{k(C)}"
},
{
"math_id": 38,
"text": "k(C)"
},
{
"math_id": 39,
"text": "A_b(k(t)) \\cong \\operatorname{Hom}_k(J(C), A)\\oplus A_2(k)"
},
{
"math_id": 40,
"text": "J(C)"
},
{
"math_id": 41,
"text": "A_2"
}
] |
https://en.wikipedia.org/wiki?curid=5841813
|
58422881
|
Anderson–Kadec theorem
|
All infinite-dimensional, separable Banach spaces are homeomorphic
In mathematics, in the areas of topology and functional analysis, the Anderson–Kadec theorem states that any two infinite-dimensional, separable Banach spaces, or, more generally, Fréchet spaces, are homeomorphic as topological spaces. The theorem was proved by Mikhail Kadec (1966) and Richard Davis Anderson.
Statement.
Every infinite-dimensional, separable Fréchet space is homeomorphic to formula_0 the Cartesian product of countably many copies of the real line formula_1
Preliminaries.
Kadec norm: A norm formula_2 on a normed linear space formula_3 is called a "<templatestyles src="Template:Visible anchor/styles.css" />Kadec norm with respect to a total subset formula_4" of the dual space formula_5 if for each sequence formula_6 the following condition is satisfied: if formula_7 for each formula_8 and formula_9 then formula_10
Eidelheit theorem: A Fréchet space formula_11 is either isomorphic to a Banach space, or has a quotient space isomorphic to formula_12
Kadec renorming theorem: Every separable Banach space formula_3 admits a Kadec norm with respect to a countable total subset formula_4 of formula_13 The new norm is equivalent to the original norm formula_2 of formula_14 The set formula_15 can be taken to be any weak-star dense countable subset of the unit ball of formula_5
Sketch of the proof.
In the argument below formula_11 denotes an infinite-dimensional separable Fréchet space and formula_16 the relation of topological equivalence (existence of homeomorphism).
A starting point of the proof of the Anderson–Kadec theorem is Kadec's proof that any infinite-dimensional separable Banach space is homeomorphic to formula_12
From the Eidelheit theorem, it is enough to consider Fréchet spaces that are not isomorphic to a Banach space. In that case they have a quotient that is isomorphic to formula_12 A result of Bartle, Graves and Michael then proves that
formula_17
for some Fréchet space formula_18
On the other hand, formula_11 is a closed subspace of a countable infinite product formula_19 of separable Banach spaces. The same result of Bartle, Graves and Michael applied to formula_3 gives a homeomorphism
formula_20
for some Fréchet space formula_21 From Kadec's result, the countable product formula_3 of infinite-dimensional separable Banach spaces is homeomorphic to formula_12
The proof of Anderson–Kadec theorem consists of the sequence of equivalences
formula_22
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\R^{\\N},"
},
{
"math_id": 1,
"text": "\\R."
},
{
"math_id": 2,
"text": "\\|\\,\\cdot\\,\\|"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "A \\subseteq X^*"
},
{
"math_id": 5,
"text": "X^*"
},
{
"math_id": 6,
"text": "x_n\\in X"
},
{
"math_id": 7,
"text": "\\lim_{n\\to\\infty} x^*\\left(x_n\\right) = x^*(x_0)"
},
{
"math_id": 8,
"text": "x^* \\in A"
},
{
"math_id": 9,
"text": "\\lim_{n\\to\\infty} \\left\\|x_n\\right\\| = \\left\\|x_0\\right\\|,"
},
{
"math_id": 10,
"text": "\\lim_{n\\to\\infty} \\left\\|x_n - x_0\\right\\| = 0."
},
{
"math_id": 11,
"text": "E"
},
{
"math_id": 12,
"text": "\\R^{\\N}."
},
{
"math_id": 13,
"text": "X^*."
},
{
"math_id": 14,
"text": "X."
},
{
"math_id": 15,
"text": "A"
},
{
"math_id": 16,
"text": "\\simeq"
},
{
"math_id": 17,
"text": "E \\simeq Y \\times \\R^{\\N}"
},
{
"math_id": 18,
"text": "Y."
},
{
"math_id": 19,
"text": "X = \\prod_{n=1}^{\\infty} X_i"
},
{
"math_id": 20,
"text": "X \\simeq E \\times Z"
},
{
"math_id": 21,
"text": "Z."
},
{
"math_id": 22,
"text": "\\begin{align}\n\\R^{\\N}\n&\\simeq (E \\times Z)^{\\N}\\\\\n&\\simeq E^\\N \\times Z^{\\N}\\\\\n&\\simeq E \\times E^{\\N} \\times Z^{\\N}\\\\\n&\\simeq E \\times \\R^{\\N}\\\\\n&\\simeq Y \\times \\R^{\\N} \\times \\R^{\\N}\\\\\n&\\simeq Y \\times \\R^{\\N} \\\\\n&\\simeq E\n\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=58422881
|
58423715
|
Salt deformation
|
Change of shape of geological salt bodies submitted to stress
Salt deformation is the change of shape of natural salt bodies in response to forces and mechanisms that controls salt flow. Such deformation can generate large salt structures such as underground salt layers, salt diapirs or salt sheets at the surface. Strictly speaking, salt structures are formed by rock salt that is composed of pure halite (NaCl) crystal. However, most halite in nature appears in impure form, therefore rock salt usually refers to all rocks that composed mainly of halite, sometimes also as a mixture with other evaporites such as gypsum and anhydrite. Earth's salt deformation generally involves such mixed materials.
Due to the unique physical and chemical properties of rock salt such as its low density, high thermal conductivity and high solubility in water, it deforms distinctively in underground and surface environments compared with other rocks. Instability of rock salt is also given by its low viscosity, which allows rock salt to flow as a fluid. As the rock salt flows, a variety of salt structures are formed. Therefore, basins containing salt deform more easily than those lacking salt.
Physical properties of rock salt.
Density and buoyancy.
Rock salt has an effective porosity of nearly 50% at the surface, while the effective porosity decreases to less than 10% at a depth of 10 m. When the burial depth reaches about 45 m, the pore spaces are completely filled. After rock salt loses its porosity, it becomes almost incompressible and keeps a constant density of 2.2 g/cm3 as the depth continues to increase.
When rock salt reaches a depth of 6–8 km, where other rocks are metamorphosed into greenschist, its density decreases slightly as a result of thermal expansion. However, unlike rock salt, shale and most other sedimentary rocks progressively decrease in porosity and increase in density as the burial depth increases. In the first 1000 m of burial depth, rock salt has a higher density than other rocks such as shale. When the buried material reaches a critical depth of 1.2-1.3 km, the densities of rock salt and other rocks are roughly the same, and neutral buoyancy is reached. Starting from 1.3-1.5 km below the surface, the density of other rocks exceeds that of rock salt and density inversion takes place: salt buried under other rocks at around 1.3 km or deeper has positive buoyancy. At this depth, salt rises and intrudes into the overburden, forming a diapir.
Thermal conductivity and expansivity.
Rock salt is characterized by its high thermal conductivity. For example, at 43 °C, it has a thermal conductivity of 5.13 W/(m⋅K), while shale only has a thermal conductivity of 1.76 W/(m⋅K) at the same temperature.
The volume of rock salt can be greatly affected by the thermal gradient. When rock salt is buried at 5 km with a thermal gradient of 30 °C/km, its volume expands by 2% due to thermal expansion, while pressurization only causes a volume reduction of 0.5%. Therefore, the greater the burial depth of rock salt, the lower its density, which in turn favors the positive buoyancy induced by density inversion.
Heat can also lead to internal flow of rock salt. When the burial depth of rock salt is over 2.9 km at a thermal gradient of 30 °C/km, with a viscosity below 10^16 Pa·s, flow of rock salt by thermal convection occurs. However, thermal convection is not the dominant mechanism of salt flow in a sedimentary basin, in contrast to the flow of magma. Salt also flows at the surface if it is sufficiently wet, for instance in salt glaciers, exposed structures formed when a salt diapir pierces through its overburden.
Viscosity.
Viscosity is a measure of the resistance of a fluid to flow and can be represented by the ratio of shear stress to shear strain rate. High viscosity means a high resistance to flow and vice versa. Experimental results show that rock salt has a higher viscosity than bittern and rhyolite lava, but a lower viscosity than mudrock, shale, and the mantle. The viscosity of rock salt is also closely related to its water content: the higher the water content, the lower the viscosity.
When a salt glacier fed by a diapir is exposed at the surface and is infiltrated by meteoric water, the viscosity of the rock salt is reduced. Consequently, salt glaciers flow much faster than salt tongues spread or salt diapirs rise.
In general, fine-grained wet salt flows as a Newtonian fluid, unlike coarse-grained salt; the latter instead spreads under gravitational force as it extrudes at the surface.
Strength.
When stress is applied, rock salt behaves like a fluid, while other rocks of higher strength are brittle under such conditions. When the tensional and compressional strengths of wet and dry salt are compared with those of other typical rocks, such as shale and quartzite, at a strain rate of 10^−14 s^−1, both wet and dry salt show lower strength than the other rocks. Wet salt is even weaker than dry salt: when the water content of rock salt exceeds 0.01%, the rock salt behaves as a weak crystalline fluid. Therefore, wet salt deforms more easily than dry salt.
Salt deformation mechanism.
Subgrain rotation recrystallization.
Subgrain rotation recrystallization involves the formation of a new grain boundary as a subgrain gradually rotates and develops a misorientation angle with the surrounding crystals. A new crystal is thus created from the misorientation of a subgrain. The process is dominant in the top and middle parts of a salt glacier.
Grain-boundary migration.
Grain-boundary migration is a dominant deformation mechanism at the top and the middle part of the salt glacier. A subgrain is re-orientated to match the crystal lattice of the adjacent subgrain. Grain boundaries will move as the surrounding crystals are gradually consumed.
Pressure-solution.
Pressure-solution involves the dissolution of crystals and becomes the main deformation mechanism when the salt is wetted. This process is usually observed at the distal part of a salt glacier.
Salt dynamics.
Underground salt structure.
A subsurface salt layer, or a salt diapir that has not extruded at the surface, is considered an underground salt structure. Buoyancy, gravitational differential loading and tectonic stress are the three main types of force that can drive salt flow. However, salt flow can be restricted by the strength of the overlying sediments and by boundary friction within the salt layer.
Buoyancy.
At the critical depth of 1.2-1.3 km, the densities of rock salt and the surrounding rocks are roughly the same. At greater burial depths, density inversion occurs and rock salt becomes less dense than the overburden rocks, which gives the rock salt positive buoyancy and causes it to rise. As temperature increases with depth, the salt is heated and expands, which further increases its buoyancy.
However, when the overburden is thick enough, the salt will not be able to pierce the overburden by buoyancy.
Gravitational differential loading.
Gravitational differential loading is produced by a combination of gravitational forces acting on the overburden rocks and the underlying salt layer. The effect of gravitational loading on salt flow can simply be expressed by the concept of hydraulic head:
formula_0
where h is the hydraulic head, z is the elevation head, measured from a datum to the top of the salt layer, P is the pressure exerted on the salt layer by the overburden, formula_1 is the density of the salt and g is the gravitational acceleration. The pressure head is expressed as P over formula_2. P is also equal to formula_3, where formula_4 is the density of the overburden and "t" is its thickness.
Therefore, the equation can be rewritten as:
formula_5
Note that the pressure head is then expressed as formula_6.
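A minimal numerical sketch of the head calculation above (Python; the densities and thickness below are illustrative values only, with the salt density taken as 2.2 g/cm3 as stated earlier):

def hydraulic_head(z, rho_overburden, thickness, rho_salt=2200.0):
    # h = z + (rho_o / rho_s) * t; lengths in meters, densities in kg/m^3.
    return z + (rho_overburden / rho_salt) * thickness

# Salt-layer top at z = -1500 m beneath 1500 m of sediment of density 2500 kg/m^3.
print(hydraulic_head(z=-1500.0, rho_overburden=2500.0, thickness=1500.0))  # ~204.5 m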
Assuming the ratio of density of the overburden rock to that of the salt layer remains unchanged in the following three cases:
Tectonic stress.
Tensional stress.
Tensional stress affects salt structure deformation by (1) forming fractures in the overlying rocks, thinning the overburden and reducing its strength, and (2) developing a graben in the overburden that favors gravitational differential loading. Most salt diapirs in the world were initiated during regional extension, implying that salt diapirism is primarily activated by tensional stress.
Tensional stress leads to thin-skinned extension, which stretches the overburden but not the underlying salt layer. Deformation of salt structures by thin-skinned extension can be divided into three stages, although a diapir need not pass through all of them, depending on the amount and rate of extension, the density of the overburden, and other factors.
1) In the initial stage, regional extension thins and weakens the overburden, and salt begins to rise and fill the space created by the thinning. When regional extension stops, the rise of the diapirs also stops. This stage is called reactive diapirism, as the salt is reacting to the extension.
2) As the thinning and weakening continue, deformation proceeds to the second stage, in which the overlying rock becomes weak enough for the salt to pierce it and be pushed up. This occurs only when the overburden is denser than the salt, generally after the critical depth has been reached. This phase is termed active diapirism; the salt continues to rise even after regional extension stops.
3) In the third stage, the diapir pierces through the overlying rock and is exposed at the surface. This phase is called passive diapirism.
Compressional stress.
Compressional stress thickens and strengthens the overlying rock, which resists piercing by the rock salt and slows the formation of a diapir, except where an anticline formed by the compressional force is eroded to great depth. Where pre-existing salt diapir structures that are mechanically weaker are present, the diapirs are reactivated during regional compression; the rock salt then moves upward and is cut off from the source layer. Where there are no pre-existing rock salt diapirs, the salt mainly acts as a lubricant to form a décollement.
Shear stress.
Shear stress by itself does not much affect the salt layer, but salt will still flow if compressional or tensional stresses are induced by the shear, resulting in similar salt deformation behavior in the stressed zone. Deformation of salt structures can be classified into four types:
The strength of overlying sediments.
As the burial depth increases, the strength of sedimentary rocks increases with the rising pressure. Therefore, a thick overburden is more difficult for the underlying salt to pierce and deform. Overlying sediments with a thickness of a few hundred meters rarely deform if external forces such as compression and extension are absent.
Boundary friction within the salt layer.
Boundary friction along the top and bottom of the salt layer restricts the ability of the salt to flow. Where salt shears past the boundary between the salt layer and the surrounding rigid rocks, a drag force opposite to the direction of flow develops in the shear zone and resists salt flow. The thickness of this boundary shear zone affects the flow rate of the salt layer. If the flow has a constant dynamic viscosity, meaning it is Newtonian viscous, the boundary layer is thicker. For salt flow that is power-law viscous, such that the dynamic viscosity decreases as the rate of shear increases towards the fluid boundary, the boundary layer is thinner.
For Newtonian flow the volumetric flux is proportional to the cube of the salt-layer thickness, so doubling the thickness of the salt layer increases the flux by a factor of eight. Power-law flow is slowed comparatively less by boundary friction.
Surface salt structure.
Surface salt structures are formed when underground salt diapirs pierce through the overlying rock.
When salt extrudes and flows at the surface, it becomes a salt glacier (also known as a salt fountain). Unlike underground salt structures, exposed rock salt is subject to rainwater, wind and heat from the sun, which can lead to rapid deformation of the salt structure within a short time, on daily to seasonal scales.
Uplift of salt glaciers.
When an underground salt diapir rises and extrudes at the surface, it pushes up the overlying rock, resulting in uplift of the salt glacier together with the overlying rock. Uplift at rates of mm/yr has been observed in various locations such as Mount Sedom in Israel and the salt glaciers of Iran.
Salt diapirs that are exposed at the surface rise faster than the diapirs that remain in the subsurface, as the strength of overlying sediments is decreased.
Deformation by precipitation.
Different parts of a salt glacier deform by different mechanisms. A microstructural study shows that as salt flows from the summit of the salt fountain to its distal part, pressure solution becomes the dominant process, as a result of infiltrated rainwater and decreased grain size, instead of the subgrain rotation recrystallization and grain-boundary migration that dominate in the top and middle parts of the salt fountain. In other words, it has been suggested that the infiltration of rainwater into rock salt causes deformation at the grain scale.
Plastic flow of salt glaciers during rainy seasons and individual storm events, and shrinkage after drying of the glacier, have been observed at the Jashak salt dome (also known as the Dashti salt dome or Kuh-e-Namak) in Iran, suggesting seasonal movements of salt glaciers in response to weather conditions. However, another study, in the Kuqa fold-thrust belt, tested the seasonal responsiveness of glacier movement to rainfall but did not observe a correlation between salt deformation and precipitation; the authors noted that this result may be attributable to limited satellite and ground observation data.
Further investigation, using remote sensing techniques and especially field observations, is needed to confirm the relationship.
Deformation by temperature change.
Other than crystallization and hydration, thermal expansion is one of the most frequently mentioned mechanisms in salt weathering. Rock salt expands when heated. It is known that most salt weathering occurs in regions with arid climates. Due to the high thermal conductivity of salt glaciers, heat can be transmitted hundreds of meters through dry salt in a few minutes.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "h = z + \\frac{P}{\\rho_sg}"
},
{
"math_id": 1,
"text": "\\rho_s"
},
{
"math_id": 2,
"text": "\\rho_sg"
},
{
"math_id": 3,
"text": "\\rho_ot"
},
{
"math_id": 4,
"text": "\\rho_o"
},
{
"math_id": 5,
"text": "h = z + \\frac{\\rho_o}{\\rho_s}t "
},
{
"math_id": 6,
"text": "\\frac{\\rho_o}{\\rho_s}t"
}
] |
https://en.wikipedia.org/wiki?curid=58423715
|
5842384
|
Katětov–Tong insertion theorem
|
On existence of a continuous function between semicontinuous upper and lower bounds
The Katětov–Tong insertion theorem is a theorem of point-set topology proved independently by Miroslav Katětov and Hing Tong in the 1950s. The theorem states the following:
Let formula_0 be a normal topological space and let formula_1 be functions with g upper semicontinuous, h lower semicontinuous and formula_2. Then there exists a continuous function formula_3 with formula_4
This theorem has a number of applications and is the first of many classical insertion theorems. In particular it implies the Tietze extension theorem and consequently Urysohn's lemma, and so the conclusion of the theorem is equivalent to normality.
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "g, h\\colon X \\to \\mathbb{R}"
},
{
"math_id": 2,
"text": "g \\leq h"
},
{
"math_id": 3,
"text": "f\\colon X \\to \\mathbb{R}"
},
{
"math_id": 4,
"text": "g \\leq f \\leq h."
}
] |
https://en.wikipedia.org/wiki?curid=5842384
|
5842389
|
Gaussian filter
|
Filter in electronics and signal processing
In electronics and signal processing, mainly in digital signal processing, a Gaussian filter is a filter whose impulse response is a Gaussian function (or an approximation to it, since a true Gaussian response would have infinite impulse response). Gaussian filters have the properties of having no overshoot to a step function input while minimizing the rise and fall time. This behavior is closely connected to the fact that the Gaussian filter has the minimum possible group delay. A Gaussian filter will have the best combination of suppression of high frequencies while also minimizing spatial spread, being the critical point of the uncertainty principle. These properties are important in areas such as oscilloscopes and digital telecommunication systems.
Mathematically, a Gaussian filter modifies the input signal by convolution with a Gaussian function; this transformation is also known as the Weierstrass transform.
Definition.
The one-dimensional Gaussian filter has an impulse response given by
formula_0
and the frequency response is given by the Fourier transform
formula_1
with formula_2 the ordinary frequency. These equations can also be expressed with the standard deviation as parameter
formula_3
and the frequency response is given by
formula_4
By writing formula_5 as a function of formula_6 with the two equations for formula_7 and as a function of formula_8 with the two equations for formula_9 it can be shown that the product of the standard deviation and the standard deviation in the frequency domain is given by
formula_10,
where the standard deviations are expressed in their physical units, e.g. in the case of time and frequency in seconds and hertz, respectively.
In two dimensions, it is the product of two such Gaussians, one per direction:
formula_11
where "x" is the distance from the origin in the horizontal axis, "y" is the distance from the origin in the vertical axis, and "σ" is the standard deviation of the Gaussian distribution.
Synthesizing Gaussian filter polynomials.
The Gaussian transfer function polynomials may be synthesized using a Taylor series expansion of the square of the Gaussian function of the form formula_12, where formula_5 is set such that formula_13 (equivalent to -3.01 dB) at formula_14. The value of formula_5 may be calculated with this constraint to be formula_15, or 0.34657359 for an approximate -3.010 dB cutoff attenuation. If an attenuation other than -3.010 dB is desired, formula_5 may be recalculated using a different attenuation, formula_16.
To meet all above criteria, formula_17 must be of the form obtained below, with no stop band zeros,
formula_18
To complete the transfer function, formula_19 may be approximated with a Taylor series expansion about 0. The full Taylor series for formula_20 is shown below.
formula_21
The ability of the filter to simulate a true Gaussian function depends on how many terms are taken from the series. The number of terms taken beyond 0 establishes the order N of the filter.
formula_22
For the frequency axis, formula_23 is replaced with formula_24.
formula_25
Since only half the poles are located in the left half plane, selecting only those poles to build the transfer function also serves to take the square root of the equation, as seen above.
Simple 3rd order example.
A 3rd order Gaussian filter with a -3.010 dB cutoff attenuation at formula_26 = 1 requires the use of terms k=0 to k=3 in the Taylor series to produce the squared Gaussian function.
formula_27
Absorbing formula_5 into the coefficients, factoring using a root-finding algorithm, and building the polynomial using only the left-half-plane poles yields the transfer function for a third-order Gaussian filter with the required -3.010 dB cutoff attenuation.
formula_28
A quick sanity check of evaluating formula_29 yields a magnitude of -2.986 dB, which represents an error of only ~0.8% from the desired -3.010 dB. This error decreases as the filter order increases. In addition, the error at higher frequencies is more pronounced for all Gaussian filters, but it also decreases as the order of the filter increases.
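The factoring step can be checked numerically. The sketch below (NumPy only, with variable names of our choosing) builds the even polynomial in s = jω, keeps the left-half-plane roots, and should reproduce the third-order coefficients quoted above.

import numpy as np
from math import factorial

a = -np.log(np.sqrt(0.5))            # ~0.34657, the -3.010 dB constraint
N = 3

# Denominator of F_N(jw)^2 as a polynomial in s = jw (highest power first):
# the k-th term is (-1)^k (2a)^k s^(2k) / k!, i.e. w^2 replaced by -s^2.
coeffs = np.zeros(2 * N + 1)
for k in range(N + 1):
    coeffs[2 * (N - k)] = (-1) ** k * (2 * a) ** k / factorial(k)

poles = np.roots(coeffs)
lhp = poles[poles.real < 0]          # keep only the left-half-plane poles
den = np.real(np.poly(lhp))          # monic polynomial with those roots
den = den / den[-1]                  # normalize so the constant term is 1
print(den)                           # approx [0.2356, 1.0078, 1.6458, 1.0]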
Gaussian transitional filters.
Although Gaussian filters exhibit desirable group delay, as described in the opening description, the steepness of the cutoff attenuation may be less than desired. To work around this, tables have been developed and published that preserve the desirable Gaussian group delay response at the lower and mid frequencies, but switch to a steeper Chebyshev attenuation at the higher frequencies.
Digital implementation.
The Gaussian function is for formula_30 and would theoretically require an infinite window length. However, since it decays rapidly, it is often reasonable to truncate the filter window and implement the filter directly for narrow windows, in effect by using a simple rectangular window function. In other cases, the truncation may introduce significant errors. Better results can be achieved by instead using a different window function; see scale space implementation for details.
Filtering involves convolution. The filter function is said to be the kernel of an integral transform. The Gaussian kernel is continuous. Most commonly, the discrete equivalent is the sampled Gaussian kernel that is produced by sampling points from the continuous Gaussian. An alternate method is to use the discrete Gaussian kernel which has superior characteristics for some purposes. Unlike the sampled Gaussian kernel, the discrete Gaussian kernel is the solution to the discrete diffusion equation.
Since the Fourier transform of the Gaussian function yields a Gaussian function, the signal (preferably after being divided into overlapping windowed blocks) can be transformed with a fast Fourier transform, multiplied with a Gaussian function and transformed back. This is the standard procedure of applying an arbitrary finite impulse response filter, with the only difference being that the Fourier transform of the filter window is explicitly known.
Due to the central limit theorem (from statistics), the Gaussian can be approximated by several runs of a very simple filter such as the moving average. The simple moving average corresponds to convolution with the constant B-spline (a rectangular pulse). For example, four iterations of a moving average yield a cubic B-spline as a filter window, which approximates the Gaussian quite well. A moving average is quite cheap to compute, so levels can be cascaded quite easily.
In the discrete case, the filter's standard deviations (in the time and frequency domains) are related by
formula_31
where the standard deviations are expressed in a number of samples and "N" is the total number of samples. The standard deviation of a filter can be interpreted as a measure of its size. The cut-off frequency of a Gaussian filter might be defined by the standard deviation in the frequency domain:
formula_32
where all quantities are expressed in their physical units. If formula_33 is measured in samples, the cut-off frequency (in physical units) can be calculated with
formula_34
where formula_35 is the sample rate.
The response value of the Gaussian filter at this cut-off frequency equals exp(−0.5) ≈ 0.607.
However, it is more common to define the cut-off frequency as the half power point: where the filter response is reduced to 0.5 (−3 dB) in the power spectrum, or 1/√2 ≈ 0.707 in the amplitude spectrum (see e.g. Butterworth filter).
For an arbitrary cut-off value 1/"c" for the response of the filter, the cut-off frequency is given by
formula_36
For "c" = 2 the constant before the standard deviation in the frequency domain in the last equation equals approximately 1.1774, which is half the Full Width at Half Maximum (FWHM) (see Gaussian function). For "c" = √2 this constant equals approximately 0.8326. These values are quite close to 1.
A simple moving average corresponds to a uniform probability distribution and thus its filter width of size formula_37 has standard deviation formula_38. Thus the application of successive formula_39 moving averages with sizes formula_40 yield a standard deviation of
formula_41
A Gaussian kernel requires formula_42 values, e.g. for a formula_43 of 3, it needs a kernel of length 17. A running mean filter of 5 points will have a sigma of formula_44. Running it three times will give a formula_43 of about 2.45. It remains to be seen where the advantage is over using a Gaussian rather than a poor approximation.
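These figures can be verified numerically by convolving a discrete impulse with repeated box windows and measuring the standard deviation of the resulting kernel (a quick sketch; the window sizes are the ones from the example above, and the function name is illustrative):

import numpy as np

def cascaded_box_sigma(sizes, length=301):
    # Std. dev. of the kernel obtained by convolving uniform windows of the given (odd) sizes.
    kernel = np.zeros(length)
    kernel[length // 2] = 1.0                      # discrete impulse
    for n in sizes:
        kernel = np.convolve(kernel, np.ones(n) / n, mode="same")
    x = np.arange(len(kernel))
    mean = np.sum(x * kernel)
    return np.sqrt(np.sum((x - mean) ** 2 * kernel))

print(cascaded_box_sigma([5]))          # ~1.414, i.e. sqrt(2)
print(cascaded_box_sigma([5, 5, 5]))    # ~2.449, i.e. sqrt(6), per the formula above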
When applied in two dimensions, this formula produces a Gaussian surface that has a maximum at the origin, whose contours are concentric circles with the origin as center. A two-dimensional convolution matrix is precomputed from the formula and convolved with two-dimensional data. Each element in the resultant matrix is set to a weighted average of that element's neighborhood. The focal element receives the heaviest weight (having the highest Gaussian value), and neighboring elements receive smaller weights as their distance to the focal element increases. In image processing, each element in the matrix represents a pixel attribute such as brightness or color intensity, and the overall effect is called Gaussian blur.
The Gaussian filter is non-causal, which means the filter window is symmetric about the origin in the time domain. This makes the Gaussian filter physically unrealizable. This is usually of no consequence for applications where the filter bandwidth is much larger than the signal. In real-time systems, a delay is incurred because incoming samples need to fill the filter window before the filter can be applied to the signal. While no amount of delay can make a theoretical Gaussian filter causal (because the Gaussian function is non-zero everywhere), the Gaussian function converges to zero so rapidly that a causal approximation can achieve any required tolerance with a modest delay, even to the accuracy of floating point representation.
|
[
{
"math_id": 0,
"text": "g(x)= \\sqrt{\\frac{a}{\\pi}}e^{-ax^2}"
},
{
"math_id": 1,
"text": "\\hat g(f)= e^{-\\pi^2f^2/a}"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "g(x) = \\frac{1}{\\sqrt{2\\pi}\\sigma}e^{-x^2/(2\\sigma^2)}"
},
{
"math_id": 4,
"text": "\\hat g(f) = e^{-f^2/(2\\sigma_f^2)}"
},
{
"math_id": 5,
"text": "a"
},
{
"math_id": 6,
"text": "\\sigma"
},
{
"math_id": 7,
"text": "g(x)"
},
{
"math_id": 8,
"text": "\\sigma_f"
},
{
"math_id": 9,
"text": "\\hat g(f)"
},
{
"math_id": 10,
"text": "\\sigma\\sigma_f=\\frac{1}{2\\pi}"
},
{
"math_id": 11,
"text": "g(x,y) = \\frac{1}{2\\pi \\sigma^2}e^{-(x^2 + y^2)/(2 \\sigma^2)}"
},
{
"math_id": 12,
"text": "\\epsilon^{-a\\omega^2}"
},
{
"math_id": 13,
"text": "\\epsilon^{-a\\omega^2} = \\sqrt{1/2}"
},
{
"math_id": 14,
"text": "\\omega=1"
},
{
"math_id": 15,
"text": "a=-log\\bigg(\\sqrt{1/2}\\bigg)"
},
{
"math_id": 16,
"text": "a=log(10^{(|dB|/20)})"
},
{
"math_id": 17,
"text": "F(\\omega)"
},
{
"math_id": 18,
"text": "F(\\omega) = \n\\epsilon^{-a\\omega^2}\n\\sqrt{(\\epsilon^{-a\\omega^2})^2}\n=\n\\sqrt{\\frac{1}{\\epsilon^{2a\\omega^2}}}\n"
},
{
"math_id": 19,
"text": "\\epsilon^{2a\\omega^2}\n"
},
{
"math_id": 20,
"text": "\\epsilon^{2a\\omega^2}"
},
{
"math_id": 21,
"text": "\\epsilon^{2a\\omega^2} = \\sum_{k=0}^\\infty \\frac{(2a)^k\\omega^{2k}}{k!}"
},
{
"math_id": 22,
"text": "F_N(\\omega) = \\sqrt{\\frac{1}{\\sum_{k=0}^\\N \\frac{(2a)^k\\omega^{2k}}{k!}}}"
},
{
"math_id": 23,
"text": "\\omega\n"
},
{
"math_id": 24,
"text": "j\\omega\n"
},
{
"math_id": 25,
"text": "F_N(j\\omega) = \\sqrt{\\frac{1}{\\sum_{k=0}^\\N \\frac{(2a)^k(j\\omega)^{2k}}{k!}}}\\bigg |_{\\text{left half plane}}"
},
{
"math_id": 26,
"text": "\\omega"
},
{
"math_id": 27,
"text": "F_3((j\\omega)^2) \n= \\frac{1}{1.33333a^3(j\\omega)^6+2a^2(j\\omega^4)+2a(j\\omega)^2+1}\n= \\frac{1}{-1.33333a^3\\omega^6+2a^2\\omega^4-2a\\omega^2+1}"
},
{
"math_id": 28,
"text": "F_3(j\\omega) = \\frac{1}{ 0.2355931(j\\omega)^3+1.0078328(j\\omega)^2+1.6458471(j\\omega)+1}"
},
{
"math_id": 29,
"text": "|F_3(j)|"
},
{
"math_id": 30,
"text": " x \\in (-\\infty,\\infty) "
},
{
"math_id": 31,
"text": "\\sigma_t\\cdot\\sigma_f=\\frac{N}{2\\pi}"
},
{
"math_id": 32,
"text": "f_c = \\sigma_f = \\frac{1}{2\\pi\\sigma_t}"
},
{
"math_id": 33,
"text": "\\sigma_t"
},
{
"math_id": 34,
"text": "f_c = \\frac{F_s}{2\\pi\\sigma_t}"
},
{
"math_id": 35,
"text": "F_s"
},
{
"math_id": 36,
"text": "f_c = \\sqrt{2\\ln(c)}\\cdot\\sigma_f "
},
{
"math_id": 37,
"text": "n"
},
{
"math_id": 38,
"text": "\\sqrt{(n^2-1)/12}"
},
{
"math_id": 39,
"text": "m"
},
{
"math_id": 40,
"text": "{n}_1,\\dots,{n}_m"
},
{
"math_id": 41,
"text": "\\sigma = \\sqrt{\\frac{n_1^2+\\cdots+n_m^2-m}{12}}"
},
{
"math_id": 42,
"text": "6\\sigma_t-1"
},
{
"math_id": 43,
"text": "{\\sigma_t}"
},
{
"math_id": 44,
"text": "{\\sqrt{2}}"
}
] |
https://en.wikipedia.org/wiki?curid=5842389
|
5842560
|
Conical function
|
In mathematics, conical functions or Mehler functions are functions which can be expressed in terms of Legendre functions of the first and second kind,
formula_0 and formula_1
The functions formula_0 were introduced by Gustav Ferdinand Mehler in 1868, when expanding in series the distance of a point on the axis of a cone to a point located on the surface of the cone. Mehler used the notation formula_2 to represent these functions. He obtained integral representations and representations as series of functions for them. He also established an addition theorem
for the conical functions. Carl Neumann obtained an expansion of the functions formula_2 in terms
of the Legendre polynomials in 1881. Leonhardt introduced for the conical functions the equivalent of the spherical harmonics in 1882.
|
[
{
"math_id": 0,
"text": "P^\\mu_{-(1/2)+i\\lambda}(x)"
},
{
"math_id": 1,
"text": "Q^\\mu_{-(1/2)+i\\lambda}(x)."
},
{
"math_id": 2,
"text": "K^\\mu(x)"
}
] |
https://en.wikipedia.org/wiki?curid=5842560
|
58428725
|
Crypto-PAn
|
Cryptographic algorithm for anonymizing IP addresses
Crypto-PAn (Cryptography-based Prefix-preserving Anonymization) is a cryptographic algorithm for anonymizing IP addresses while preserving their subnet structure. That is, the algorithm encrypts any string of bits formula_0 to a new string formula_1, while ensuring that for any pair of bit-strings formula_2 which share a common prefix of length formula_3, their images formula_4 also share a common prefix of length formula_3. A mapping with this property is called "prefix-preserving". In this way, Crypto-PAn is a kind of format-preserving encryption.
The mathematical outline of Crypto-PAn was developed by Jinliang Fan, Jun Xu, Mostafa H. Ammar (all of Georgia Tech) and Sue B. Moon. It was inspired by the IP address anonymization done by Greg Minshall's TCPdpriv program circa 1996.
Algorithm.
Intuitively, Crypto-PAn encrypts a bit-string of length formula_5 by descending a binary tree of depth formula_5, one step for each bit in the string. Each of the binary tree's formula_6 non-leaf nodes has been given a value of "0" or "1", according to some pseudo-random function seeded by the encryption key. At each step formula_7 of the descent, the algorithm computes the formula_7th bit of the output by XORing the formula_7th bit of the input with the value of the current node.
The reference implementation takes a 256-bit key. The first 128 bits of the key material are used to initialize an AES-128 cipher in ECB mode. The second 128 bits of the key material are encrypted with the cipher to produce a 128-bit padding block formula_8.
Given a 32-bit IPv4 address formula_0, the reference implementation performs the following operation for each bit formula_9 of the input: Compose a 128-bit input block formula_10. Encrypt formula_11 with the cipher to produce a 128-bit output block formula_12. Finally, XOR the formula_7th bit of that output block with the formula_7th bit of formula_0, and append the result — formula_13 — onto the output bitstring. Once all 32 bits of the output bitstring have been computed, the result is returned as the anonymized output formula_1 which corresponds to the original input formula_0.
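The bit-by-bit walk described above can be sketched compactly in Python. The sketch below follows the description in this section (it is not the reference C++ code); the function names are chosen for illustration, and AES-128 in ECB mode is assumed to be provided by a recent version of the cryptography package.

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def _aes_block(key16: bytes, block16: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key16), modes.ECB()).encryptor()
    return enc.update(block16) + enc.finalize()

def anonymize_ipv4(addr: int, key: bytes) -> int:
    # Prefix-preserving anonymization of a 32-bit address with a 256-bit key.
    cipher_key, pad_seed = key[:16], key[16:32]
    pad = _aes_block(cipher_key, pad_seed)                 # 128-bit padding block
    pad_bits = [(pad[i // 8] >> (7 - i % 8)) & 1 for i in range(128)]
    out = 0
    for i in range(32):
        # I_i = the first i bits of the plaintext, then bits i..127 of the padding block
        bits = [(addr >> (31 - j)) & 1 for j in range(i)] + pad_bits[i:]
        block = bytes(
            sum(bit << (7 - k) for k, bit in enumerate(bits[8 * n:8 * n + 8]))
            for n in range(16)
        )
        o = _aes_block(cipher_key, block)
        o_ii = (o[i // 8] >> (7 - i % 8)) & 1              # i-th bit of the output block
        out = (out << 1) | (((addr >> (31 - i)) & 1) ^ o_ii)
    return out

# Two addresses sharing a /24 prefix keep a common 24-bit prefix after anonymization.
key = bytes(range(32))                                     # demo key only
a, b = anonymize_ipv4(0xC0A80101, key), anonymize_ipv4(0xC0A801FE, key)
print(hex(a), hex(b), (a ^ b) < (1 << 8))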
The reference implementation does not implement deanonymization; that is, it does not provide a function formula_14 such that formula_15. However, decryption can be implemented almost identically to encryption, just making sure to compose each input block formula_10 using the plaintext bits of formula_0 decrypted so far, rather than using the ciphertext bits: formula_16.
The reference implementation does not implement encryption of bitstrings of lengths other than 32; for example, it does not support the anonymization of 128-bit IPv6 addresses. In practice, the 32-bit Crypto-PAn algorithm can be used in "ECB mode" itself, so that a 128-bit string formula_17 might be anonymized as formula_18. This approach preserves the prefix structure of the 128-bit string, but does leak information about the lower-order chunks; for example, an anonymized IPv6 address consisting of the same 32-bit ciphertext repeated four times is likely the special address codice_0, which thus reveals the encryption of the 32-bit plaintext codice_1.
In principle, the reference implementation's approach (building 128-bit input blocks formula_11) can be extended up to 128 bits. Beyond 128 bits, a different approach would have to be used; but the fundamental algorithm (descending a binary tree whose nodes are marked with a pseudo-random function of the key material) remains valid.
Implementations.
Crypto-PAn's C++ reference implementation was written in 2002 by Jinliang Fan.
In 2005, David Stott of Lucent made some improvements to the C++ reference implementation, including a deanonymization routine. Stott also observed that the algorithm preserves prefix structure while destroying suffix structure; running the Crypto-PAn algorithm on a bit-reversed string will preserve any existing "suffix" structure while destroying "prefix" structure. Thus, running the algorithm first on the input string, and then again on the bit-reversed output of the first pass, destroys both prefix and suffix structure. (However, once the suffix structure has been destroyed, destroying the remaining prefix structure can be accomplished far more efficiently by simply feeding the non-reversed output to AES-128 in ECB mode. There is no particular reason to reuse Crypto-PAn in the second pass.)
A Perl implementation was written in 2005 by John Kristoff. Python and Ruby implementations also exist.
Versions of the Crypto-PAn algorithm are used for data anonymization in many applications, including NetSniff and CAIDA's CoralReef library.
|
[
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "E_k(x)"
},
{
"math_id": 2,
"text": "x, y"
},
{
"math_id": 3,
"text": "p"
},
{
"math_id": 4,
"text": "E_k(x), E_k(y)"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "2^n - 1"
},
{
"math_id": 7,
"text": "i"
},
{
"math_id": 8,
"text": "\\mathit{pad}"
},
{
"math_id": 9,
"text": "x_i"
},
{
"math_id": 10,
"text": "I_i = x_{[0,i)} \\mathit{pad}_{[i,128)}"
},
{
"math_id": 11,
"text": "I_i"
},
{
"math_id": 12,
"text": "O_i"
},
{
"math_id": 13,
"text": "x_i \\oplus O_{i,i}"
},
{
"math_id": 14,
"text": "D_k"
},
{
"math_id": 15,
"text": "D_k(E_k(x)) = x"
},
{
"math_id": 16,
"text": "I_i \\neq E_k(x)_{[0,i)} \\mathit{pad}_{[i,128)}"
},
{
"math_id": 17,
"text": "x_{[0,128)}"
},
{
"math_id": 18,
"text": "E_k(x_{[0,32)}) E_k(x_{[32,64)}) E_k(x_{[64,96)}) E_k(x_{[96,128)})"
}
] |
https://en.wikipedia.org/wiki?curid=58428725
|
584310
|
Octahedral number
|
In number theory, an octahedral number is a figurate number that represents the number of spheres in an octahedron formed from close-packed spheres. The formula_0th octahedral number formula_1 can be obtained by the formula:
formula_2
The first few octahedral numbers are:
1, 6, 19, 44, 85, 146, 231, 344, 489, 670, 891 (sequence in the OEIS).
Properties and applications.
The octahedral numbers have a generating function
formula_3
Sir Frederick Pollock conjectured in 1850 that every positive integer is the sum of at most 7 octahedral numbers. This statement, the Pollock octahedral numbers conjecture, has been proven true for all but finitely many numbers.
In chemistry, octahedral numbers may be used to describe the numbers of atoms in octahedral clusters; in this context they are called magic numbers.
Relation to other figurate numbers.
Square pyramids.
An octahedral packing of spheres may be partitioned into two square pyramids, one upside-down underneath the other, by splitting it along a square cross-section. Therefore,
the formula_0th octahedral number formula_1 can be obtained by adding two consecutive square pyramidal numbers together:
formula_4
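This identity is easy to check computationally. The short Python sketch below (function names are illustrative) generates the octahedral numbers from the formula above and verifies the square-pyramid decomposition, using the usual square pyramidal number formula P_n = n(n+1)(2n+1)/6.

def octahedral(n):
    return n * (2 * n * n + 1) // 3

def square_pyramidal(n):
    return n * (n + 1) * (2 * n + 1) // 6

print([octahedral(n) for n in range(1, 12)])
# [1, 6, 19, 44, 85, 146, 231, 344, 489, 670, 891]
assert all(octahedral(n) == square_pyramidal(n - 1) + square_pyramidal(n)
           for n in range(1, 100))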
Tetrahedra.
If formula_1 is the formula_0th octahedral number and formula_5 is the formula_0th tetrahedral number then
formula_6
This represents the geometric fact that gluing a tetrahedron onto each of four non-adjacent faces of an octahedron produces a tetrahedron of twice the size.
Another relation between octahedral numbers and tetrahedral numbers is also possible, based on the fact that an octahedron may be divided into four tetrahedra each having two adjacent original faces (or alternatively, based on the fact that each square pyramidal number is the sum of two tetrahedral numbers):
formula_7
Cubes.
If two tetrahedra are attached to opposite faces of an octahedron, the result is a rhombohedron. The number of close-packed spheres in the rhombohedron is a cube, justifying the equation
formula_8
Centered squares.
The difference between two consecutive octahedral numbers is a centered square number:
formula_9
Therefore, an octahedral number also represents the number of points in a square pyramid formed by stacking centered squares; for this reason, in his book "Arithmeticorum libri duo" (1575), Francesco Maurolico called these numbers "pyramides quadratae secundae".
The number of cubes in an octahedron formed by stacking centered squares is a centered octahedral number, the sum of two consecutive octahedral numbers. These numbers are
1, 7, 25, 63, 129, 231, 377, 575, 833, 1159, 1561, 2047, 2625, ... (sequence in the OEIS)
given by the formula
formula_10 for "n" = 1, 2, 3, ...
History.
The first study of octahedral numbers appears to have been by René Descartes, around 1630, in his "De solidorum elementis". Prior to Descartes, figurate numbers had been studied by the ancient Greeks and by Johann Faulhaber, but only for polygonal numbers, pyramidal numbers, and cubes. Descartes introduced the study of figurate numbers based on the Platonic solids and some of the semiregular polyhedra; his work included the octahedral numbers. However, "De solidorum elementis" was lost, and not rediscovered until 1860. In the meantime, octahedral numbers had been studied again by other mathematicians, including Friedrich Wilhelm Marpurg in 1774, Georg Simon Klügel in 1808, and Sir Frederick Pollock in 1850.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "O_n"
},
{
"math_id": 2,
"text": "O_n={n(2n^2 + 1) \\over 3}."
},
{
"math_id": 3,
"text": " \\frac{z(z+1)^2}{(z-1)^4} = \\sum_{n=1}^{\\infty} O_n z^n = z +6z^2 + 19z^3 + \\cdots ."
},
{
"math_id": 4,
"text": "O_n = P_{n-1} + P_n."
},
{
"math_id": 5,
"text": "T_n"
},
{
"math_id": 6,
"text": "O_n+4T_{n-1}=T_{2n-1}."
},
{
"math_id": 7,
"text": "O_n = T_n + 2T_{n-1} + T_{n-2}."
},
{
"math_id": 8,
"text": "O_n+2T_{n-1}=n^3."
},
{
"math_id": 9,
"text": "O_n - O_{n-1} = C_{4,n} = n^2 + (n-1)^2."
},
{
"math_id": 10,
"text": "O_n+O_{n-1}=\\frac{(2n-1)(2n^2-2n+3)}{3}"
}
] |
https://en.wikipedia.org/wiki?curid=584310
|
58433438
|
Optically stimulated luminescence thermochronometry
|
Optically stimulated luminescence (OSL) thermochronometry is a dating method used to determine the time since quartz and/or feldspar began to store charge as they cooled through the effective closure temperature. The closure temperatures for quartz and Na-rich K-feldspar are 30-35 °C and 25 °C respectively. When quartz and feldspar are deep beneath the Earth's surface, they are hot. They cool when a geological process, e.g. focused erosion, causes their exhumation to the Earth's surface. As they cool, they trap electron charges originating from within the crystal lattice. These charges are accommodated within crystallographic defects or vacancies in the crystal lattice as the mineral cools below the closure temperature.
During detrapping of these electrons, luminescence is produced. The luminescence or light emission from the mineral is assumed to be proportional to the trapped electron charge population. The age recorded in standard OSL method is determined by counting the number of these trapped charges in an OSL detection system. The OSL age is the cooling age of the quartz and/or feldspar. This cooling history is a record of the mineral's thermal history, which is used to reconstruct the geological event.
The sub-Quaternary period (10^4 to 10^5 years) is the geological interval for which OSL is a favourable dating technique, because of the low closure temperatures of the quartz and feldspar used in this technique. The Quaternary period is marked by intense crustal erosion, particularly within active mountain ranges, leading to high exhumation rates of crustal rocks and the formation of sub-Quaternary sediments. Previous techniques (e.g. apatite fission track, zircon fission track, and (uranium-thorium)/helium dating) could not adequately resolve the geological age record, particularly in the last ~300 thousand years. OSL dating is currently the only dating method that has been successfully applied to determine the cooling ages of these geological events.
Theoretical concepts of electron trapping and detrapping for OSL measurement.
In the natural environment, the crystal lattices of quartz and/or feldspar are bombarded with radiation released from radiogenic sources such as "in-situ" radioactive decay. As the crystals are irradiated, charges are stored in their crystallographic defects. The charge trapping process involves atomic-scale ionic substitution of both electrons and holes within the crystal lattices of quartz and feldspar. The electron diffusion happens in response to ionizing radiation as the mineral cools below its closure temperature.
If quartz or feldspar grains are exposed to a natural light source such as the sun, the trapped charges are evicted in the form of luminescence. This natural process is called bleaching. Any other process that heats the sample will also cause the trapped electrons to escape from the crystal lattice, which is known as thermal bleaching. Because optical bleaching of the mineral leads to eviction of the trapped charges, careful sampling and handling must be followed to avoid using bleached samples for OSL thermochronometry. To artificially produce luminescence in the laboratory for luminescence studies of the mineral, these same two processes are used.
Kinetic or rate equations for trapping and detrapping processes.
A wide range of kinetic models has been developed to explain the trapping and detrapping processes in quartz and feldspar crystals. Two of these models are particularly useful for determining the cooling histories of quartz or feldspar: the general-order kinetic model and the band-tail model. Both models consider three major processes to characterize the mineral luminescence: the trapping process, the thermal detrapping process and the athermal detrapping process. Each process is governed by a different equation, discussed below. These models are used to determine the cooling history of the mineral, which involves subtracting the sum of the thermal and athermal detrapping rates from the trapping rate (i.e. Trapping – (Thermal detrapping + Athermal detrapping)).
Determination of cooling history from the kinetic equations.
By combining the four equations above, a single differential equation is obtained to convert the measured luminescence into a cooling rate. We have:
formula_2 for the general-order kinetic model; and
formula_3 for the band-tail model.
Either model can be used, because the same series of laboratory experiments is followed to estimate all the parameters involved in the equations. The inversion of the measured formula_0 for a range of temperature–time histories (T–t paths) can be used to determine the cooling rate. A sufficient number of T–t paths is tested to build a probability density function, which helps to determine the most likely cooling histories undergone by the mineral.
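To make the inversion step concrete, the following Python sketch integrates the general-order kinetic model (formula_2) forward along one candidate T–t path with a simple explicit Euler scheme and returns the predicted trapped-charge fraction, which would then be compared with the measured value. All kinetic parameters, the dose rate and the cooling path used here are illustrative assumptions rather than published values.

```python
import numpy as np

K_BOLTZMANN = 8.617e-5  # Boltzmann constant, eV/K

def trapped_fraction(times_yr, temps_k, dose_rate=3e-3, d0=500.0,
                     s=3e19, energy_ev=1.5, alpha=1.0, beta=1.0,
                     rho_prime=1e-6, r_prime=1.0):
    """Forward-Euler integration of the general-order kinetic model (formula_2):
    dn/dt = (D_R/D_0)(1 - n)^alpha
            - s * n^beta * exp(-E / kT)
            - s * n^beta * exp(-rho'^(-1/3) * r')
    Times are in years, temperatures in kelvin, doses in Gy; the parameter
    values are illustrative placeholders, not calibrated measurements.
    """
    n = 0.0
    athermal = s * np.exp(-rho_prime ** (-1.0 / 3.0) * r_prime)
    for i in range(1, len(times_yr)):
        dt = times_yr[i] - times_yr[i - 1]
        trapping = (dose_rate / d0) * (1.0 - n) ** alpha
        thermal = s * n ** beta * np.exp(-energy_ev / (K_BOLTZMANN * temps_k[i - 1]))
        n += (trapping - thermal - n ** beta * athermal) * dt
        n = min(max(n, 0.0), 1.0)  # keep the fraction physical
    return n

# Candidate T-t path: linear cooling from 60 degC to 10 degC over 100 kyr (assumed).
t = np.linspace(0.0, 1e5, 5000)
T = 333.15 - 50.0 * t / t[-1]
print(trapped_fraction(t, T))  # predicted n, to be compared with the measured value
```

Repeating such forward runs over many candidate T–t paths, and weighting each path by its misfit to the measurement, is what builds the probability density function described above.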
Sample preparation.
Bedrock samples from the Earth's surface or from boreholes are the earth materials required for OSL dating. Minerals (quartz and/or feldspar) are usually separated from the rock or sediment samples under a regulated laboratory lighting system similar to procedures used in archaeological OSL dating. The light source is usually a controlled red light, to avoid resetting the luminescence signal.[7] Crushing of the sample is carried out gently to avoid generating heat strong enough to reset the OSL signal in the minerals. Crushed samples are sieved to obtain fine-grained fractions; grain-size ranges of 90–125, 100–200 or 180–212 microns can be used for OSL measurement. The selected grains are chemically treated with HCl to digest carbonates and with H2O2 to remove organic materials that could contaminate the OSL signal during measurement. Feldspar and quartz, with densities of < 2.62 g cm−3 and < 2.68 g cm−3 respectively, are separated from other heavier minerals by density separation. Inclusions of zircon, apatite and feldspar in quartz, as well as alpha-particle-irradiated grain edges that can contaminate the OSL signal, are removed by etching in hydrofluoric acid (HF).
OSL signal detection system.
OSL ages are commonly measured using an automated Risø Thermal Luminescence Reader (e.g. TL-DA-20). It contains an internal beta source (e.g. 90Sr/90Y), with optical stimulation provided by light-emitting diodes (LEDs), and a detection filter for transmission of the stimulated luminescence signal. During measurement, the mineral grains (quartz or feldspar) are glued onto stainless-steel discs, which rest on a heater strip, using an adhesive (commonly silicone spray). The grains are stimulated with light from an array of light-emitting diodes. This stimulation evicts the trapped electrons, which recombine in the crystal and in doing so emit the OSL signal, which is collected and recorded by a light-sensitive photomultiplier tube. The photomultiplier tube converts the incident photons (i.e. light) to an electronic charge. This is the basic principle by which the luminescence (light) emission from the minerals under investigation is measured.
OSL age determination.
To determine the OSL age of a sample, both the dose rate (formula_1) and the equivalent dose (formula_4) must be determined. A dose is the quantity of natural radiation or energy absorbed by a mineral. The dose rate is the effective radiation dose absorbed from naturally occurring ionizing sources per unit time.
The age is calculated as the ratio of the equivalent dose (formula_4) to the dose rate (formula_1) using the equation below.
formula_5
where formula_6 is the age (yr) and formula_4 is measured in gray (Gy); note that 1 Gy is equivalent to 1 J kg−1 (joule per kilogram), and formula_1 is in Gy year−1.
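As a minimal worked example of this relation (with made-up numbers), an equivalent dose of 120 Gy and a dose rate of 3 × 10−3 Gy per year give an age of 40,000 years:

```python
def osl_age_years(equivalent_dose_gy: float, dose_rate_gy_per_yr: float) -> float:
    """OSL age A = D_E / D_R, with D_E in gray and D_R in gray per year."""
    return equivalent_dose_gy / dose_rate_gy_per_yr

print(osl_age_years(120.0, 3.0e-3))  # 40000.0 years (hypothetical values)
```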
Dose rate determination.
For a single mineral grain, the dose rate (formula_1) can be determined by measuring the concentrations of uranium, thorium and potassium by direct mass-spectrometric analysis of the quartz or feldspar grains. Ge-gamma spectrometry, INAA, X-ray fluorescence and ICP-MS or ICP-OES are instruments that could be used. Other methods for the determination of the dose rate include: (1) overburden cosmic dose rate estimation, (2) water content attenuation, and (3) disequilibrium dose rate correction. An average value is usually calculated as the representative dose rate.
Equivalent dose determination.
The equivalent dose (formula_4), also known as the dose response, is determined from the dose-response curve (see Plot B). The single-aliquot regenerative (SAR) protocol is a commonly used method for determining the equivalent dose. The protocol involves a series of laboratory measurements of the OSL signal (see Plot A) emitted by an aliquot after it has been given a known beta dose and optically stimulated for a given time in seconds. The beta source may be 90Sr/90Y in an automated Risø Thermal Luminescence Reader. In the SAR protocol, the measurements for quartz and feldspar differ mainly in the preheat temperatures and durations used and in the stimulation source.
The first stage involves determination of the natural dose (see Plot B) by preheating the aliquot to about 130–160 °C for 10 s (for feldspar) or 160–300 °C (for quartz) while the natural luminescence signal (i.e. the natural dose) is still intact. This is done to remove unstable signals in the mineral. After preheating, the aliquot is optically stimulated by infrared light-emitting diodes (for feldspar) or blue light-emitting diodes (for quartz), depending on the mineral (see OSL detection system), for 40 s at 125 °C (feldspar) or 100 s at 125 °C (quartz), and the natural OSL signal (NL) is measured and recorded by the photomultiplier tube. In the second stage, the aliquot is irradiated with a fixed, known test dose (beta dose) and preheated at a temperature below 160 °C. The test-dose response (NT) is measured after the aliquot has been optically stimulated for 40 s at 125 °C (feldspar) or 100 s at 125 °C (quartz). At this stage the aliquot is completely bleached, and the regenerative-dose cycles are then started.
The same procedure as described above is followed, but a range of regenerative doses is given at different temperatures for sensitivity correction of the OSL signal (see Plot B). For each regenerative-dose measurement, the aliquot is irradiated with a known dose and then preheated at 130–160 °C for 10 s (feldspar) or 160–300 °C (quartz), and the signal response (Ri) is measured. A fixed test dose is then given by irradiating the aliquot, which is preheated at a temperature below 160 °C. The aliquot is optically stimulated in the same way and the test-dose signal (RT) is measured. These steps are repeated for a range of regenerative doses, including a zero dose. During each of the tests, the OSL signals are recorded by the photomultiplier tube, and the OSL counts are plotted against the OSL exposure time in seconds, as shown in the OSL signal curve (first graph).
For sensitivity correction, the natural signal NL is normalized by its test-dose response NT, while each regenerative signal Ri is normalized by its test-dose response RT (see Plot B). The natural dose lies along the vertical axis because no laboratory dose is given at that stage, whereas the regenerative-dose measurements vary according to the dose given at each step. The equivalent dose (formula_4) is determined by drawing a line (the red dashed line in Plot B) from the natural dose to its interception with the regenerative-dose curve; the dose value read off the horizontal axis at that interception point is recorded as the equivalent dose (formula_4).
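The interception step amounts to interpolating the dose axis at the corrected natural signal. The short sketch below does this with a linear interpolation over hypothetical sensitivity-corrected data; in practice a saturating growth curve is usually fitted to the regenerative points first.

```python
import numpy as np

# Hypothetical sensitivity-corrected data: regenerative doses (Gy) and their
# corrected signals (Ri/RT), plus the corrected natural signal (NL/NT).
regen_doses = np.array([0.0, 20.0, 40.0, 80.0, 160.0])
regen_signals = np.array([0.02, 0.85, 1.55, 2.60, 3.90])
natural_signal = 2.10

# Linear interpolation of dose as a function of signal stands in for the usual
# curve fit; it gives the dose at which the natural signal meets the curve.
equivalent_dose = np.interp(natural_signal, regen_signals, regen_doses)
print(f"Equivalent dose D_E ~ {equivalent_dose:.1f} Gy")

dose_rate = 3.0e-3  # Gy per year, hypothetical
print(f"OSL age ~ {equivalent_dose / dose_rate:.0f} yr")
```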
Applications.
General applications.
OSL finds application in all low-temperature (<50 °C) tectonic and sedimentary processes. These studies mostly fall within the sub-Quaternary period and include, but are not limited to, focused fluvial and/or glacial erosion, rock exhumation and the evolution of topography in tectonically active regions. Other applications include glacial deposits, lagoon deposits, storm-surge and tsunami deposits, lake deposits (including shoreline migration histories), fluvial erosion deposits and loess records. For example, the slip rate on a normal fault plane and the rate of glacial or fluvial erosion of the Earth's surface can be modelled, as can the ages of sedimentary deposits formed within the sub-Quaternary period.
In tectonically active regions, OSL dating is very useful for tracking the thermal history and rate of rock exhumation towards the Earth's surface: the closer the cooling ages, the higher the rate of erosion and/or exhumation of the rock unit under investigation. When the OSL age of the quartz or feldspar is known, the obtained ages are coupled with existing thermal-mechanical models, e.g. Pecube, to reconstruct the thermal-mechanical history.
The OSL ages (see diagram), cooling ages and elevation data are plotted against the horizontal distance along the transect where the samples were collected, to interpret the exhumation rate of the rock or the evolution of the relief system through time. For example, OSL dating has been applied to determine the cooling histories of some rapidly eroding, tectonically active regions at sub-Quaternary time scales (i.e. 10^4 to 10^5 years), such as the Whataroa-Perth catchment area in the Southern Alps of New Zealand and the Namche Barwa-Gyala Peri dome in the eastern Himalaya. In the Namche Barwa-Gyala Peri dome, river erosion was prevalent, while glacial erosion was the main active process in the Whataroa-Perth catchment area. In both studies, the rates of exhumation and the evolution of the relief systems were estimated by inversion of OSL thermochronological ages.
|
[
{
"math_id": 0,
"text": "\\tilde{n}"
},
{
"math_id": 1,
"text": "D_R"
},
{
"math_id": 2,
"text": "\\frac{d\\tilde{n}}{dt}=\\frac{D_R}{D_o}(1-\\tilde{n})^\\alpha-s\\tilde{n}^\\beta\\exp\\left ( \\frac{-E}{kT} \\right )-s\\tilde{n}^\\beta\\exp\\left ( -p'^{-\\tfrac{1}{3}}r'\\right )"
},
{
"math_id": 3,
"text": "\\frac{d\\tilde{n}}{dt}=\\frac{D_R}{D_o}(1-\\tilde{n})^\\alpha-s\\tilde{n}^\\beta\\exp\\left ( \\frac{-(E_t-E_b)}{kT} \\right )-s\\tilde{n}^\\beta\\exp\\left ( -p'^{-\\tfrac{1}{3}}r'\\right )"
},
{
"math_id": 4,
"text": "D_E"
},
{
"math_id": 5,
"text": "A = \\frac{D_E}{D_R}"
},
{
"math_id": 6,
"text": "A"
}
] |
https://en.wikipedia.org/wiki?curid=58433438
|
5843539
|
Bisection bandwidth
|
Measure of a network's bandwidth
In computer networking, if a network is bisected into two equal-sized partitions, the bisection bandwidth of the network topology is the bandwidth available between the two partitions. The bisection should be chosen so that the bandwidth between the two partitions is the minimum over all possible bisections. Bisection bandwidth therefore reflects the bandwidth genuinely available across the entire system: because it accounts for the bottleneck of the whole network, it represents the network's bandwidth characteristics better than metrics based on individual links.
Bisection bandwidth calculations.
For a linear array with n nodes, the bisection bandwidth is the bandwidth of one link, since only one link needs to be broken to bisect the network into two equal partitions.
For a ring topology with n nodes, two links must be broken to bisect the network, so the bisection bandwidth is the bandwidth of two links.
A tree topology with n nodes can be bisected at the root by breaking one link, so the bisection bandwidth is the bandwidth of one link.
For a two-dimensional mesh topology with n nodes, formula_0 links must be broken to bisect the network, so the bisection bandwidth is the bandwidth of formula_0 links.
For a hypercube topology with n nodes, n/2 links must be broken to bisect the network, so the bisection bandwidth is the bandwidth of n/2 links.
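The cases above can be summarised in a short sketch that multiplies the number of links crossing a minimal equal bisection by an assumed per-link bandwidth; the 64-node size and the 10 Gbit/s link bandwidth are arbitrary illustrative choices.

```python
import math

def bisection_links(topology: str, n: int) -> int:
    """Number of links crossing a minimal equal-sized bisection, per the cases above."""
    if topology == "linear_array":
        return 1
    if topology == "ring":
        return 2
    if topology == "tree":
        return 1
    if topology == "mesh_2d":    # n assumed to be a perfect square
        return math.isqrt(n)
    if topology == "hypercube":  # n assumed to be a power of two
        return n // 2
    raise ValueError(f"unknown topology: {topology}")

LINK_BANDWIDTH_GBPS = 10.0  # assumed bandwidth of a single link

for topo in ("linear_array", "ring", "tree", "mesh_2d", "hypercube"):
    n = 64
    bb = bisection_links(topo, n) * LINK_BANDWIDTH_GBPS
    print(f"{topo:12s} n={n}: bisection bandwidth = {bb:.0f} Gbit/s")
```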
Significance of bisection bandwidth.
Theoretical support for the importance of this measure of network performance was developed in the PhD research of Clark Thomborson (formerly Clark Thompson). Thomborson proved that important algorithms for sorting, fast Fourier transformation, and matrix-matrix multiplication become communication-limited—as opposed to CPU-limited or memory-limited—on computers with insufficient bisection bandwidth. F. Thomson Leighton's PhD research tightened Thomborson's loose bound on the bisection bandwidth of a computationally important variant of the De Bruijn graph known as the shuffle-exchange network. Based on Bill Dally's analysis of latency, average-case throughput, and hot-spot throughput of m-ary n-cube networks for various m, it can be observed that low-dimensional networks (e.g., tori), in comparison to high-dimensional networks (e.g., binary n-cubes) with the same bisection bandwidth, have reduced latency and higher hot-spot throughput.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sqrt{n}"
}
] |
https://en.wikipedia.org/wiki?curid=5843539
|
58435848
|
Barrier certificate
|
A barrier certificate or barrier function is used to prove that a given region is forward invariant for a given ordinary differential equation or hybrid dynamical system. That is, a barrier function can be used to show that if a solution starts in a given set, then it cannot leave that set.
Showing that a set is forward invariant is an aspect of "safety", which is the property where a system is guaranteed to avoid obstacles specified as an "unsafe set".
Barrier certificates play a role for safety analogous to the role of Lyapunov functions for stability. For every ordinary differential equation that robustly satisfies a safety property of a certain type, there is a corresponding barrier certificate.
History.
The first result in the field of barrier certificates was the Nagumo theorem, proved by Mitio Nagumo in 1942. The term "barrier certificate" was introduced later, based on a similar concept in convex optimization called the barrier function.
Barrier certificates were generalized to hybrid systems in 2004 by Stephen Prajna and Ali Jadbabaie.
Variants.
There are several different types of barrier functions. One distinguishing factor is the behavior of the barrier function at the boundary of the forward invariant set formula_0. A barrier function that goes to zero as the input approaches the boundary of formula_0 is called a "zeroing barrier function". A barrier function that goes to infinity as the input approaches the boundary of formula_0 is called a "reciprocal barrier function". Here, "reciprocal" refers to the fact that a reciprocal barrier function can be defined as the multiplicative inverse of a zeroing barrier function.
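As a minimal numerical illustration (not taken from the cited literature), consider the scalar system dx/dt = -x with safe set S = {x : 1 - x^2 ≥ 0}. A hypothetical zeroing barrier function h(x) = 1 - x^2 vanishes on the boundary of S, while its reciprocal 1/h(x) grows without bound there; the sketch below integrates the system from several initial conditions inside S and checks that h never becomes negative, i.e. that S is forward invariant along those trajectories.

```python
def f(x):
    """Dynamics dx/dt = -x (a simple asymptotically stable system)."""
    return -x

def h(x):
    """Zeroing barrier candidate: h >= 0 exactly on the safe set S = [-1, 1]."""
    return 1.0 - x ** 2

def reciprocal_barrier(x):
    """Reciprocal barrier candidate: blows up as x approaches the boundary of S."""
    return 1.0 / h(x)

# Integrate from several initial conditions inside S and confirm h(x(t)) >= 0,
# i.e. the trajectories never leave S (forward invariance).
dt, steps = 1e-3, 5000
for x0 in (-0.9, -0.3, 0.5, 0.99):
    x = x0
    min_h = h(x)
    for _ in range(steps):
        x += dt * f(x)  # forward-Euler step
        min_h = min(min_h, h(x))
    print(f"x0={x0:+.2f}: min h along trajectory = {min_h:.4f}, "
          f"reciprocal barrier at end = {reciprocal_barrier(x):.3f}")
```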
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "S"
}
] |
https://en.wikipedia.org/wiki?curid=58435848
|
58437613
|
Zeta diversity
|
Measure of biodiversity
In ecology, zeta diversity (ζ-diversity), first described in 2014, measures the degree of overlap in the types of taxa present among a set of observed communities. It was developed to provide a more generalized framework for describing various measures of diversity and can also be used to test hypotheses pertaining to biogeography.
Zeta diversity as an extension of other measures of diversity.
α-diversity.
The most basic measure of community diversity, alpha diversity (α-diversity), can be described as the average number of distinct taxonomic groups (e.g. unique genera or operational taxonomic units) present, independent of their abundances, on a per-sample basis. In the ζ-diversity framework this can be described as ζ1, the number of unique taxa present in a single sample.
β-diversity.
Beta diversity (β-diversity) is a measure that allows comparison between the diversity of local communities (α-diversity). The greater the similarity in community composition between multiple communities, the lower the value of β-diversity for that set of communities. Using the number of distinct taxonomic groups per community as a measure of α-diversity, one can then describe β-diversity between two communities in terms of the number of distinct taxonomic groups held in common between them. Given two communities, "A" and "B", a measure of β-diversity between them can be described in terms of their overlap "A" ∩ "B" (ζ2) and the average number of unique taxonomic categories found in "A" and "B" (ζ1). In the ζ-diversity framework, the average β-diversity for a set of samples, as described by the Jaccard index, is then formula_0
Multi-site assemblages.
The ζ-diversity framework can then be extended beyond measures of diversity within one community (α-diversity) or between two communities (β-diversity) to describe diversity across sets of three or more communities. If ζ1 describes the number of distinct taxa in community "A", and ζ2 describes the number of distinct taxa held in common between communities "A" and "B", then ζn describes the number of distinct taxa held in common across n communities.
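As a small illustrative sketch (the community data below are made up), ζk can be computed as the average number of shared taxa over all k-member subsets of the observed communities, and the two-site values recover the Jaccard-style β-diversity given above:

```python
from itertools import combinations

# Hypothetical communities: each is the set of taxa observed at one site.
communities = [
    {"oak", "maple", "birch", "pine"},
    {"oak", "maple", "spruce"},
    {"oak", "birch", "spruce", "fir"},
]

def zeta(communities, order):
    """Average number of taxa shared by every 'order'-sized subset of communities."""
    subsets = list(combinations(communities, order))
    return sum(len(set.intersection(*s)) for s in subsets) / len(subsets)

zeta1 = zeta(communities, 1)  # mean alpha-diversity
zeta2 = zeta(communities, 2)  # mean pairwise overlap
zeta3 = zeta(communities, 3)  # overlap across all three sites

jaccard_beta = zeta2 / (2 * zeta1 - zeta2)  # the expression formula_0 above
print(zeta1, zeta2, zeta3, jaccard_beta)
```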
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{\\zeta_{2}}{2\\zeta_{1}-\\zeta_{2}}"
}
] |
https://en.wikipedia.org/wiki?curid=58437613
|
58439
|
Transformational grammar
|
Part of the theory of generative grammar
In linguistics, transformational grammar (TG) or transformational-generative grammar (TGG) is part of the theory of generative grammar, especially of natural languages. It considers grammar to be a system of rules that generate exactly those combinations of words that form grammatical sentences in a given language and involves the use of defined operations (called transformations) to produce new sentences from existing ones.
The method is commonly associated with the American linguist Noam Chomsky's biologically oriented concept of language. But in logical syntax, Rudolf Carnap introduced the term "transformation" in his application of Alfred North Whitehead's and Bertrand Russell's "Principia Mathematica". In such a context, the addition of the values of one and two, for example, transforms into the value of three; many types of transformation are possible.
Generative algebra was first introduced to general linguistics by the structural linguist Louis Hjelmslev, although the method was described before him by Albert Sechehaye in 1908. Chomsky adopted the concept of transformation from his teacher Zellig Harris, who followed the American descriptivist separation of semantics from syntax. Hjelmslev's structuralist conception including semantics and pragmatics is incorporated into functional grammar.
Historical context.
Transformational analysis is a part of the classical Western grammatical tradition based on the metaphysics of Plato and Aristotle and on the grammar of Apollonius Dyscolus. These were joined to establish linguistics as a natural science in the Middle Ages. Transformational analysis was later developed by humanistic grammarians such as Thomas Linacre (1524), Julius Caesar Scaliger (1540), and Sanctius (Francisco Sánchez de las Brozas, 1587). The core observation is that grammatical rules alone do not constitute elegance, so learning to use a language correctly requires certain additional effects such as ellipsis. It is more desirable, for example, to say "Maggie and Alex went to the market" than to express the full underlying idea "Maggie went to the market and Alex went to the market". Such phenomena were described in terms of "understood elements". In modern terminology, the first expression is the surface structure of the second, and the second expression is the deep structure of the first. The notions of ellipsis and "restoration" are complementary: the deep structure is converted into the surface structure and "restored" from it by what were later known as transformational rules.
It was generally agreed that a degree of simplicity improves the quality of speech and writing, but closer inspection of the deep structures of different types of sentences led to many further insights, such as the concept of agent and patient in active and passive sentences. Transformations were given an explanatory role. Sanctius, among others, argued that surface structures pertaining to the choice of grammatical case in certain Latin expressions could not be understood without the restoration of the deep structure. His full transformational system included
Transformational analysis fell out of favor with the rise of historical-comparative linguistics in the 19th century, and the historical linguist Ferdinand de Saussure argued for limiting linguistic analysis to the surface structure. By contrast, Edmund Husserl, in his 1921 elaboration of the 17th-century Port-Royal Grammar, based his version of generative grammar on classical transformations ("Modifikationen"). Husserl's concept influenced Roman Jakobson, who advocated it in the Prague linguistic circle, which was likewise influenced by Saussure. Based on opposition theory, Jakobson developed his theory of markedness and, having moved to the United States, influenced Noam Chomsky, especially through Morris Halle. Chomsky and his colleagues, including Jerrold Katz and Jerry Fodor, developed what they called transformational generative grammar in the 1960s.
The transformational grammar of the 1960s differs from Renaissance linguistics in its relation to the theory of language. While the humanistic grammarians considered language manmade, Chomsky and his colleagues exploited markedness and transformation theory in their attempt to uncover innate grammar. It was later claimed that such grammar arises from a brain structure caused by a mutation in humans. In particular, generative linguists tried to reconstruct the underlying innate structure based on deep structure and unmarked forms. Thus, a modern notion of universal grammar, in contrast to the humanistic classics, suggested that the basic word order of "biological grammar" is unmarked, and unmodified in transformational terms.
Transformational generative grammar included two kinds of rules: phrase-structure rules and transformational rules. But scholars abandoned the project in the 1970s. Based on Chomsky's concept of I-language as the proper subject of linguistics as a cognitive science, Katz and Fodor had conducted their research on English grammar employing introspection. These findings could not be generalized cross-linguistically, and therefore could not belong to an innate universal grammar.
The concept of transformation was nevertheless not fully rejected. In Chomsky's 1990s Minimalist Program, transformations pertain to the lexicon and the move operation. This more lenient approach offers more prospects of universalizability. It is, for example, argued that the English SVO word-order (subject, verb, object) represents the initial state of the cognitive language faculty. However, in languages like Classical Arabic, which has a basic VSO order, sentences are automatically transformed by the move operation from the underlying SVO order on which the matrix of all sentences in all languages is reconstructed. Therefore, there is no longer a need for a separate surface and deep matrix and additional rules of conversion between the two levels. According to Chomsky, this solution allows sufficient descriptive and explanatory adequacy—descriptive because all languages are analyzed on the same matrix, and explanatory because the analysis shows in which particular way the sentence is derived from the (hypothesized) initial cognitive state.
Basic mechanisms.
Deep structure and surface structure.
While Chomsky's 1957 book "Syntactic Structures" followed Harris's distributionalistic practice of excluding semantics from structural analysis, his 1965 book "Aspects of the Theory of Syntax" developed the idea that each sentence in a language has two levels of representation: a deep structure and a surface structure. But these are not quite identical to Hjelmslev's content plane and expression plane. The deep structure represents the core semantic relations of a sentence and is mapped onto the surface structure, which follows the phonological form of the sentence very closely, via "transformations". The concept of transformations had been proposed before the development of deep structure to increase the mathematical and descriptive power of context-free grammars. Deep structure was developed largely for technical reasons related to early semantic theory. Chomsky emphasized the importance of modern formal mathematical devices in the development of grammatical theory:
Transformations.
The usual usage of the term "transformation" in linguistics refers to a rule that takes an input, typically called the deep structure (in the Standard Theory) or D-structure (in the extended standard theory or government and binding theory), and changes it in some restricted way to result in a surface structure (or S-structure). In TG, phrase structure rules generate deep structures. For example, a typical transformation in TG is subject-auxiliary inversion (SAI). That rule takes as its input a declarative sentence with an auxiliary, such as "John has eaten all the heirloom tomatoes", and transforms it into "Has John eaten all the heirloom tomatoes?" In the original formulation (Chomsky 1957), those rules were stated as rules that held over strings of terminals, constituent symbols or both.
X NP AUX Y formula_0 X AUX NP Y
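As an illustrative toy, not a description of any published implementation, the string-level rule can be coded directly: scan a tagged word sequence for an NP immediately followed by an AUX and swap the two constituents. The part-of-speech tags below are hand-assigned for this single example.

```python
# Toy string-level rendering of the SAI rule "X NP AUX Y -> X AUX NP Y".
def subject_aux_inversion(tagged):
    """tagged: list of (constituent, tag) pairs. Swap the first NP-AUX pair found."""
    for i in range(len(tagged) - 1):
        if tagged[i][1] == "NP" and tagged[i + 1][1] == "AUX":
            return tagged[:i] + [tagged[i + 1], tagged[i]] + tagged[i + 2:]
    return tagged  # no NP-AUX sequence: the rule does not apply

declarative = [("John", "NP"), ("has", "AUX"),
               ("eaten", "V"), ("all the heirloom tomatoes", "NP")]
question = subject_aux_inversion(declarative)
print(" ".join(word for word, _ in question) + "?")
# -> "has John eaten all the heirloom tomatoes?"
```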
In the 1970s, by the time of the Extended Standard Theory, following Joseph Emonds's work on structure preservation, transformations came to be viewed as holding over trees. By the end of government and binding theory, in the late 1980s, transformations were no longer structure-changing operations at all; instead, they add information to already existing trees by copying constituents.
The earliest conceptions of transformations were that they were construction-specific devices. For example, there was a transformation that turned active sentences into passive ones. A different transformation raised embedded subjects into main clause subject position in sentences such as "John seems to have gone", and a third reordered arguments in the dative alternation. With the shift from rules to principles and constraints in the 1970s, those construction-specific transformations morphed into general rules (all the examples just mentioned are instances of NP movement), which eventually changed into the single general rule move alpha or Move.
Transformations actually come in two types: the post-deep structure kind mentioned above, which are string- or structure-changing, and generalized transformations (GTs). GTs were originally proposed in the earliest forms of generative grammar (such as in Chomsky 1957). They take small structures, either atomic or generated by other rules, and combine them. For example, the generalized transformation of embedding would take the kernel "Dave said X" and the kernel "Dan likes smoking" and combine them into "Dave said Dan likes smoking." GTs are thus structure-building rather than structure-changing. In the Extended Standard Theory and government and binding theory, GTs were abandoned in favor of recursive phrase structure rules, but they are still present in tree-adjoining grammar as the Substitution and Adjunction operations, and have recently reemerged in mainstream generative grammar in Minimalism, as the operations Merge and Move.
In generative phonology, another form of transformation is the phonological rule, which describes a mapping between an underlying representation (the phoneme) and the surface form that is articulated during natural speech.
Formal definition.
Chomsky's advisor, Zellig Harris, took transformations to be relations between sentences such as "I finally met this talkshow host you always detested" and simpler (kernel) sentences "I finally met this talkshow host" and "You always detested this talkshow host." A transformational-generative (or simply transformational) grammar thus involved two types of productive rules: phrase structure rules, such as "S → NP VP" (a sentence may consist of a noun phrase followed by a verb phrase) etc., which could be used to generate grammatical sentences with associated parse trees (phrase markers, or P markers); and transformational rules, such as rules for converting statements to questions or active to passive voice, which acted on the phrase markers to produce other grammatically correct sentences. Hjelmslev had called word-order conversion rules "permutations".
In this context, transformational rules are not strictly necessary to generate the set of grammatical sentences in a language, since that can be done using phrase structure rules alone, but the use of transformations provides economy in some cases (the number of rules can be reduced), and it also provides a way of representing the grammatical relations between sentences, which would not be reflected in a system with phrase structure rules alone.
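To make this division of labour concrete, the toy sketch below generates sentences from a handful of invented phrase-structure rules and a made-up mini-lexicon; a transformational rule (such as the inversion sketch earlier) would then operate on strings or trees produced by rules of this kind.

```python
import random

# Toy phrase-structure grammar in the spirit of "S -> NP VP"; the rules and
# lexicon are invented for illustration, not taken from any published grammar.
RULES = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["linguist"], ["sentence"], ["grammar"]],
    "V":   [["generates"], ["analyzes"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol using randomly chosen phrase-structure rules."""
    if symbol not in RULES:  # terminal word
        return [symbol]
    expansion = random.choice(RULES[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))  # e.g. "the linguist generates a sentence"
```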
This notion of transformation proved adequate for subsequent versions, including the "extended", "revised extended", and Government-Binding (GB) versions of generative grammar, but it may no longer be sufficient for minimalist grammar, as merge may require a formal definition that goes beyond the tree manipulation characteristic of Move α.
Mathematical representation.
An important feature of all transformational grammars is that they are more powerful than context-free grammars. Chomsky formalized this idea in the Chomsky hierarchy. He argued that it is impossible to describe the structure of natural languages with context-free grammars. His general position on the non-context-freeness of natural language has held up since then, though his specific examples of the inadequacy of CFGs in terms of their weak generative capacity were disproved.
Core concepts.
Innate linguistic knowledge.
Chomsky is not the first person to suggest that all languages have certain fundamental things in common. He quoted philosophers who posited the same basic idea several centuries ago. But Chomsky helped make the innateness theory respectable after a period dominated by more behaviorist attitudes towards language. He made concrete and technically sophisticated proposals about the structure of language as well as important proposals about how grammatical theories' success should be evaluated.
Grammaticality.
Chomsky argued that "grammatical" and "ungrammatical" can be meaningfully and usefully defined. In contrast, an extreme behaviorist linguist would argue that language can be studied only through recordings or transcriptions of actual speech and that the role of the linguist is to look for patterns in such observed speech, not to hypothesize about why such patterns might occur or to label particular utterances grammatical or ungrammatical. Few linguists in the 1950s actually took such an extreme position, but Chomsky was on the opposite extreme, defining grammaticality in an unusually mentalistic way for the time. He argued that the intuition of a native speaker is enough to define the grammaticality of a sentence; that is, if a particular string of English words elicits a double-take or a feeling of wrongness in a native English speaker, with various extraneous factors affecting intuitions controlled for, it can be said that the string of words is ungrammatical. That, according to Chomsky, is entirely distinct from the question of whether a sentence is meaningful or can be understood. It is possible for a sentence to be both grammatical and meaningless, as in Chomsky's famous example, "colorless green ideas sleep furiously". But such sentences manifest a linguistic problem that is distinct from that posed by meaningful but ungrammatical (non)-sentences such as "man the bit sandwich the", the meaning of which is fairly clear, but which no native speaker would accept as well-formed.
The use of such intuitive judgments permitted generative syntacticians to base their research on a methodology in which studying language through a corpus of observed speech became downplayed since the grammatical properties of constructed sentences were considered appropriate data on which to build a grammatical model.
Theory evaluation.
In the 1960s, Chomsky introduced two central ideas relevant to the construction and evaluation of grammatical theories.
Competence versus performance.
One was the distinction between "competence" and "performance". Chomsky noted the obvious fact that when people speak in the real world, they often make linguistic errors, such as starting a sentence and then abandoning it midway through. He argued that such errors in linguistic "performance" are irrelevant to the study of linguistic "competence", the knowledge that allows people to construct and understand grammatical sentences. Consequently, the linguist can study an idealised version of language, which greatly simplifies linguistic analysis.
Descriptive versus explanatory adequacy.
The other idea related directly to evaluation of theories of grammar. Chomsky distinguished between grammars that achieve "descriptive adequacy" and those that go further and achieve "explanatory adequacy". A descriptively adequate grammar for a particular language defines the (infinite) set of grammatical sentences in that language; that is, it describes the language in its entirety. A grammar that achieves explanatory adequacy has the additional property that it gives insight into the mind's underlying linguistic structures. In other words, it does not merely describe the grammar of a language, but makes predictions about how linguistic knowledge is mentally represented. For Chomsky, such mental representations are largely innate and so if a grammatical theory has explanatory adequacy, it must be able to explain different languages' grammatical nuances as relatively minor variations in the universal pattern of human language.
Development of concepts.
Though transformations continue to be important in Chomsky's theories, he has now abandoned the original notion of deep structure and surface structure. Initially, two additional levels of representation were introduced—logical form (LF) and phonetic form (PF), but in the 1990s, Chomsky sketched a new program of research known at first as "Minimalism", in which deep structure and surface structure are no longer featured and PF and LF remain as the only levels of representation.
To complicate the understanding of the development of Chomsky's theories, the precise meanings of deep structure and surface structure have changed over time. By the 1970s, Chomskyan linguists normally called them D-Structure and S-Structure. In particular, Chomskyan linguists dropped for good the idea that a sentence's deep structure determined its meaning (taken to its logical conclusions by generative semanticists during the same period) when LF took over this role (previously, Chomsky and Ray Jackendoff had begun to argue that both deep and surface structure determined meaning).
"I-language" and "E-language".
In 1986, Chomsky proposed a distinction between I-language and E-language that is similar but not identical to the competence/performance distinction. "I-language" is internal language; "E-language" is external language. I-language is taken to be the object of study in linguistic theory; it is the mentally represented linguistic knowledge a native speaker of a language has and thus a mental object. From that perspective, most of theoretical linguistics is a branch of psychology. E-language encompasses all other notions of what a language is, such as a body of knowledge or behavioural habits shared by a community. Thus E-language is not a coherent concept by itself, and Chomsky argues that such notions of language are not useful in the study of innate linguistic knowledge or competence even though they may seem sensible and intuitive and useful in other areas of study. Competence, he argues, can be studied only if languages are treated as mental objects.
Minimalist program.
From the mid-1990s onward, much research in transformational grammar has been inspired by Chomsky's minimalist program. It aims to further develop ideas involving "economy of derivation" and "economy of representation", which had started to become significant in the early 1990s but were still rather peripheral aspects of transformational-generative grammar theory:
Both notions, as described here, are somewhat vague, and their precise formulation is controversial. An additional aspect of minimalist thought is the idea that the derivation of syntactic structures should be "uniform": rules should not be stipulated as applying at arbitrary points in a derivation but instead apply throughout derivations. Minimalist approaches to phrase structure have resulted in "Bare Phrase Structure", an attempt to eliminate X-bar theory. In 1998, Chomsky suggested that derivations proceed in phases. The distinction between deep structure and surface structure is absent in Minimalist theories of syntax, and the most recent phase-based theories also eliminate LF and PF as unitary levels of representation.
Critical reception.
In 1978, linguist and historian E. F. K. Koerner hailed transformational grammar as the third and last Kuhnian revolution in linguistics, arguing that it had brought about a shift from Ferdinand de Saussure's sociological approach to a Chomskyan conception of linguistics as analogous to chemistry and physics. Koerner also praised the philosophical and psychological value of Chomsky's theory.
In 1983 Koerner retracted his earlier statement suggesting that transformational grammar was a 1960s fad that had spread across the U.S. at a time when the federal government had invested heavily in new linguistic departments. But he claims Chomsky's work is unoriginal when compared to other syntactic models of the time. According to Koerner, Chomsky's rise to fame was orchestrated by Bernard Bloch, editor of "Language", the journal of the Linguistic Society of America, and Roman Jakobson, a personal friend of Chomsky's father. Koerner suggests that great sums of money were spent to fly foreign students to the 1962 International Congress at Harvard, where an exceptional opportunity was arranged for Chomsky to give a keynote speech making questionable claims of belonging to the rationalist tradition of Saussure, Humboldt and the Port-Royal Grammar, in order to win popularity among the Europeans. The transformational agenda was subsequently forced through at American conferences where students, instructed by Chomsky, regularly verbally attacked and ridiculed his potential opponents.
|
[
{
"math_id": 0,
"text": "\\Rightarrow"
}
] |
https://en.wikipedia.org/wiki?curid=58439
|
584406
|
Edge-transitive graph
|
Graph where all pairs of edges are automorphic
In the mathematical field of graph theory, an edge-transitive graph is a graph G such that, given any two edges "e"1 and "e"2 of G, there is an automorphism of G that maps "e"1 to "e"2.
In other words, a graph is edge-transitive if its automorphism group acts transitively on its edges.
Examples and properties.
The number of connected simple edge-transitive graphs on n vertices is 1, 1, 2, 3, 4, 6, 5, 8, 9, 13, 7, 19, 10, 16, 25, 26, 12, 28 ... (sequence in the OEIS)
Edge-transitive graphs include all symmetric graphs, such as the graph of the vertices and edges of the cube. Symmetric graphs are also vertex-transitive (if they are connected), but in general edge-transitive graphs need not be vertex-transitive. Every connected edge-transitive graph that is not vertex-transitive must be bipartite (and hence can be colored with only two colors) and either semi-symmetric or biregular.
Examples of edge- but not vertex-transitive graphs include the complete bipartite graphs formula_0 where m ≠ n, which include the star graphs formula_1. For graphs on n vertices, there are (n−1)/2 such graphs for odd n and (n−2)/2 for even n.
Additional edge-transitive graphs which are not symmetric can be formed as subgraphs of these complete bipartite graphs in certain cases. Subgraphs of complete bipartite graphs Km,n exist when m and n share a common factor greater than 2. When the greatest common factor is 2, subgraphs exist when 2n/m is even or if m = 4 and n is an odd multiple of 6. So edge-transitive subgraphs exist for K3,6, K4,6 and K5,10 but not K4,10. An alternative construction for some edge-transitive graphs is to add vertices at the midpoints of the edges of a symmetric graph with v vertices and e edges, creating a bipartite graph with e vertices of degree 2 and v vertices of degree 2e/v.
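For very small graphs, edge-transitivity can be verified by brute force over all vertex permutations. The sketch below, written purely for illustration, confirms that the star graph K1,3 is edge-transitive; the factorial cost of enumerating permutations makes it unsuitable beyond a handful of vertices.

```python
from itertools import permutations

def automorphisms(vertices, edges):
    """Yield all vertex permutations that map the edge set onto itself."""
    edge_set = {frozenset(e) for e in edges}
    for perm in permutations(vertices):
        mapping = dict(zip(vertices, perm))
        if {frozenset((mapping[u], mapping[v])) for u, v in edges} == edge_set:
            yield mapping

def is_edge_transitive(vertices, edges):
    """True if some automorphism maps any given edge onto any other edge."""
    edge_list = [frozenset(e) for e in edges]
    autos = list(automorphisms(vertices, edges))
    return all(
        any(frozenset((m[u], m[v])) == target for m in autos)
        for (u, v) in edges
        for target in edge_list
    )

# The star graph K_{1,3}: center 0 joined to leaves 1, 2, 3.
star_vertices = [0, 1, 2, 3]
star_edges = [(0, 1), (0, 2), (0, 3)]
print(is_edge_transitive(star_vertices, star_edges))  # True
```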
An edge-transitive graph that is also regular, but still not vertex-transitive, is called semi-symmetric. The Gray graph, a cubic graph on 54 vertices, is an example of a regular graph which is edge-transitive but not vertex-transitive. The Folkman graph, a quartic graph on 20 vertices, is the smallest such graph.
The vertex connectivity of an edge-transitive graph always equals its minimum degree.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K_{m,n}"
},
{
"math_id": 1,
"text": "K_{1,n}"
}
] |
https://en.wikipedia.org/wiki?curid=584406
|
5844519
|
Isaiah 53
|
53rd chapter of the Book of Isaiah in the Hebrew Bible
Isaiah 53 is the fifty-third chapter of the Book of Isaiah in the Hebrew Bible or the Old Testament of the Christian Bible. This book contains the prophecies attributed to the prophet Isaiah and is one of the Nevi'im. Chapters 40 to 55 are known as "Deutero-Isaiah" and date from the time of the Israelites' exile in Babylon.
The Fourth Servant Song: Isaiah 52:13 to 53:12.
<templatestyles src="Template:Blockquote/styles.css" />Yet it was our sickness that he was bearing,
Our suffering that he endured.
We accounted him plagued,
Smitten and afflicted by God;
But he was wounded because of our sins,
Crushed because of our iniquities.
He bore the chastisement that made us whole,
And by his bruises we were healed.
We all went astray like sheep,
Each going his own way;
And the LORD visited upon him
The guilt of all of us.”
-Isaiah 53:4-6, New Jewish Publication Society Translation
Isaiah 52:13–53:12 makes up the fourth of the "Servant Songs" of the Book of Isaiah, describing a "servant" of God who is abused but eventually vindicated. Major themes of the passage include:<br>
The passage's themes include a wide variety of ethical subjects, including guilt, innocence, violence, injustice, adherence to the divine will, repentance, and righteousness. Major interpretive options for the servant's identity will be discussed below.
Text.
The original text was written in Biblical Hebrew. This chapter is divided into 12 verses, although the pericope begins in Isaiah 52:13; the pericope thus encompasses 15 verses. The passage survives in a number of autonomous and parallel manuscript traditions in Hebrew, Greek, Latin, and other languages.
Hebrew<br>
The standard Hebrew edition that serves as the basis for most modern translations is Codex Leningradensis (1008). Other manuscripts of the Masoretic Text tradition include Codex Cairensis (895), the Petersburg Codex of the Prophets (916), and the Aleppo Codex (10th century).
Fragments containing all or parts of this chapter were found among the Dead Sea Scrolls. These are the earliest extant witnesses to the Hebrew text of the chapter:
Greek<br>
The translation into Koine Greek known as the Septuagint was made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century). Several passages of the text were included in the New Testament (see New Testament below) and serve as further witnesses to the Greek text in the first century. Origen's Hexapla preserved assorted Greek translations of the text from Aquila, Theodotion, and Symmachus, dating to the second century CE.
<br>
Latin<br>
Jerome translated his Vulgate from Hebrew manuscripts that were available to him in the 4th century CE. Retroversion of the Latin into Hebrew may recover what his Hebrew manuscripts said at the time.
<br>Other Languages<br>
Versions of Isaiah 53 exist in many other languages, but they are of limited use for establishing the critical text. The Aramaic Targum Isaiah often paraphrases and is loose in its translation. Many other early translations (e.g. Ethiopic, Slavonic, etc.), produced by Christians, were dependent upon the Septuagint and are of limited use for recovering the Hebrew.
Parashot.
The "parashah" sections listed here are based on the Aleppo Codex. Isaiah 53 is a part of the "Consolations (Isaiah 40–66)". {P}: open "parashah"; {S}: closed "parashah".
{S} 53:1-12 {P}
Interpretive options concerning the servant's identity.
The central interpretive question to be answered for the passage concerns Isaiah's intended referent for the servant. Important related questions include the Isaiah 53 servant's relationship with the servant(s) mentioned in the other servant songs, as well as the servant's relationship with the one preaching good news in Isaiah 52:7. Three major classes of interpretation have been proposed for the servant of Isaiah 53:<br>
Individual<br>
The individual interpretation states that the intended referent for the servant is a single Israelite man. The passage's third-person masculine singular nouns and verbs are cited as evidence for this position. Sometimes the entire pericope is interpreted concerning an individual, and in other cases only selected verses are so interpreted. Several individual referents have been proposed:
A Righteous Israelite Remnant<br>
Some interpretations state that the servant is representative of any Israelites who meet a particular standard of righteousness, such that the passage applies to some Israelites and not others. Examples include:
National<br>
This interpretation states that the servant is a metaphor for the entire nation of Israel. The sufferings of the servant are seen as sufferings of the nation as a whole while in exile. This interpretation first appears with unnamed Jews familiar to Origen in the third century CE (see below), and it subsequently became the majority position within Judaism from the medieval period until today. Sometimes this view is combined with the "righteous remnant" view (e.g. Rashi on 53:3 and 53:8). Representative commentaries include:
History of interpretation.
<templatestyles src="Template:Blockquote/styles.css" />Hardly any passage of the Hebrew Bible is and has been of such fundamental importance in the history of Jewish-Christian debate, or has played such a central role in it, as has the fourth Servant Song of Second Isaiah. Nor has any other passage experienced such different and sometimes mutually exclusive interpretations as this one.
- Stefan Schreiner, "The Suffering Servant"
A wide variety of sources across many centuries include interpretations of the chapter. This section will highlight some of the key interpretive sources organized by date of textual origin.
Dead Sea Scrolls (3rd century BCE – 1st century CE).
The Dead Sea Scrolls include both biblical and non-biblical scrolls that reflect the text and the themes of Isaiah 53.<br>
1QIsaa, the Great Isaiah Scroll.
In their article on the interpretation of Isaiah 53 in the pre-Christian period, Martin Hengel and Daniel P. Bailey noted a striking messianic reading in the Great Isaiah Scroll for Isaiah 52:14. They wrote,
<templatestyles src="Template:Blockquote/styles.css" />
The first line agrees with the MT (though the MT’s עָלֶיךָ is here spelled with the final ה "mater") and may be translated “Just as many were astonished at you.” But in the second line, instead of the MT’s unclear "hapax legomenon" מִשְׁחַת or מַשְׁחֵת, “marring,” “disfigurement,” 1QIsaa suffixes a yod to read the qal perfect first singular מָשַׁחְתִּי, “I have anointed.” To the last word אָדָם, 1QIsaa furthermore adds the article: “"the" human.” We may therefore translate:<br>
Just as many were astonished at you, so have I anointed his appearance beyond that of any (other) man, and his form beyond that of the sons of humanity [lit., of the human].
Because this reading indicates God anointed the servant "beyond that of any (other) man," it is likely that the scribe who penned the Great Isaiah Scroll interpreted the servant as Messiah.
Another variant is present in two Qumran manuscripts and the LXX. Hengel and Bailey write, "The most important variant that Scrolls A and B have in common (see also 4QIsad) is the phrase יראה אור (“he will see light”) in 53:11, attested also in the LXX." This variant adds a vivid descriptor to the servant's experience after his persecution and death.
It is likely that the Qumran community saw Isaiah 52:7 as the beginning of the pericope, and 52:13 starting a subsection within it. Second Temple Judaism scholar Craig Evans notes that 1QIsaa includes a siglum in the margin at 52:7, just as it does in other major breaks of thought. Evans writes, "Although of uncertain meaning, this manuscript feature likely indicates the beginning of a new section." He notes that the Masoretic Text includes a "samek" (for seder) at the same verse, and a small "samek" after 52:12. Evans writes, "Accordingly, both the Great Isaiah Scroll of Qumran and the MT appear to view Isaiah 52:7-12 and 52:13-53:12 as two related units, perhaps with 52:7-12 introducing the hymn." The Qumran community interpreted Isaiah 52:7 messianically (see below), which may have bearing on the servant's identity, if the passages are to be linked.
4Q541 Fragment 9.
A portion of 4Q541 includes themes about an individual that will atone for his generation, despite his generation being evil and opposing him. Hengel and Bailey reviewed this fragment and others, noting, "As early as 1963, Starcky suspected that these portions of 4Q540 and 541... 'seem to evoke a suffering Messiah in the perspective opened up by the Servant Songs.'" The text of 4Q541 Fragment 9 reads,
<templatestyles src="Template:Blockquote/styles.css" />2 And he will atone for all the children of his generation, and he will be sent to all the children of<br>
3 his [people]. His word is like the word of the heavens, and his teaching, according to the will of God. His eternal sun will shine<br>
4 and its fire will burn in all the ends of the earth; above the darkness it will shine. Then, darkness will vanish<br>
5 [fr]om the earth, and gloom from the dry land. They will utter many words against him, and an abundance of<br>
6 [lie]s; they will fabricate fables against him, and utter every kind of disparagement against him. His generation will be evil and changed<br>
7 [and …] will be, and its position of deceit and of violence. [And] the people will go astray in his days and they will be bewildered.
11Q13 (11QMelch).
11Q13, also 11QMelch or the Melchizedek document, is a fragmentary manuscript among the Dead Sea Scrolls (from Cave 11) which mentions Melchizedek as leader of God's angels in a war in Heaven against the angels of darkness instead of the more familiar Archangel Michael. The text is an apocalyptic commentary on the Jubilee year of Leviticus 25. The passage includes a quotation of Isaiah 52:7 and a messianic explanation that ties the passage with Daniel 9:25. The scroll reads,
<templatestyles src="Template:Blockquote/styles.css" />13 But, Melchizedek will carry out the vengeance of Go[d’s] judgments, [and on that day he will fr]e[e them from the hand of] Belial and from the hand of all the sp[irits of his lot.]<br>
14 To his aid (shall come) all «the gods of [justice»; and h]e is the one w[ho …] all the sons of God, and … […]<br>
15 This […] is the day of [peace about whi]ch he said [… through Isa]iah the prophet, who said: [Isa 52:7 «How] beautiful<br>
16 upon the mountains are the feet [of] the messen[ger who] announces peace, the mess[enger of good who announces salvati]on, [sa]ying to Zion: your God [reigns.»]<br>
17 Its interpretation: The mountains [are] the prophet[s …] … […] for all … […]<br>
18 And the messenger i[s] the anointed of the spir[it] as Dan[iel] said [about him: Dan 9:25 «Until an anointed, a prince, it is seven weeks.» And the messenger of]<br>
19 good who announ[ces salvation] is the one about whom it is written that...
Septuagint (2nd century BCE).
The Septuagint (LXX) translation of Isaiah 53, dated to roughly 140 BCE, is a relatively free translation with a complicated relationship with the MT. Emanuel Tov has provided LXX/MT word equivalences for the passage, and verse-by-verse commentaries on the LXX of Isaiah 53 are provided by Jobes and Silva, and Hengel and Bailey.
In the LXX the verbal aspect and subject of many verbs differ from the MT. In 53:8, the child/servant is "led to death," with the translator seeing "lamavet" (לַמָּוֶת) rather than "lamo" (לָֽמֹו). Verses 10-12 shift the narrative toward the "we" in the audience, beseeching the reader to perform a sin offering in order to "cleanse" and "justify" the righteous servant/child who was an innocent sufferer. Hengel and Bailey comment, "Therefore in the MT of verse 10, the Servant himself gives his life as an אָשָׁם or “guilt offering” (NASB; NIV; cf. NJPS), that is, an atoning sacrifice. By contrast, the Greek conditional sentence ἐὰν δῶτε περὶ ἁμαρτίας in verse 10b requires a “sin offering” from the members of the congregation who previously went astray and who were guilty in relationship to the Servant, in order that they might receive their share of the salvation promised to the Servant." Despite these differences with the MT, the "vicarious suffering" theme of the MT remains intact, as evidenced by the LXX of verses 4-6:
<templatestyles src="Template:Blockquote/styles.css" />This one carries our sins and suffers pain for us, and we regarded him as one who is in difficulty, misfortune, and affliction. But he was wounded because of our sins, and he became sick because of our lawless acts. The discipline of our peace was upon him; by his bruise we were healed. We all have been misled like sheep; each person was misled in his own path, and the Lord handed him over for our sins. Isaiah 53:4-6, Lexham English Septuagint
While the theme of vicarious "suffering" is strong in the LXX, the translation avoids saying that the servant actually "dies". In verse 4, the MT's imagery that could imply death (מֻכֵּה) is lessened to "misfortune/blow" (πληγῇ). Jobes and Silva also note, "This rendering is only one of several examples where the translator clearly avoids statements that attribute the servant’s sufferings to God’s action." In verse 8, the servant is "led to death," but in verse 9, God saves the servant before his execution by "giving" the wicked and the wealthy unto death instead of the servant. Hengel notes that the tendency to downplay the idea of vicarious suffering continued in Theodotion's Greek translation:
<templatestyles src="Template:Blockquote/styles.css" />Jewish interpretation sharpens the tendency toward a statement of judgment at the end of the song and can completely suppress the idea of vicarious suffering. This is shown by Theodotion in the last phrase of 53:12. Against the Septuagint’s expression of vicariousness, in which the Servant “was delivered up on account of their sins” (see below) (καὶ διὰ τὰς ἁμαρτίας αὐτῶν παρεδόθη), Theodotion reads "et impios torquebit", “and he will torture the impious.” The Septuagint is still a long way from this complete reversal of the thought. The Servant receives his authority to act as judge precisely “"because" he did not do wrong, nor was deceit found in his mouth” (ὅτι ἀνομίαν οὐκ ἐποίησεν οὐδὲ εὑρέθη δόλος ἐν τῷ στόματι αὐτοῦ, v. 9). The motif of the innocent and righteous sufferer is therefore even clearer in the Septuagint than in the MT.
Unlike with 1QIsaa, the identity of the Servant in Isaiah 53 LXX is unclear. F. Hahn concluded without elaboration, "A messianic interpretation cannot be recognized even in the Septuagint version of Isaiah 53." Hengel disagrees:
<templatestyles src="Template:Blockquote/styles.css" />But who is this “righteous one” in the eyes of the translator? A one-sided collective interpretation referring to Israel seems to me hardly possible. Israel must be identified rather with the confession of the “we” group, which can hardly refer to the Gentile nations, since the Gentiles have no “report” to proclaim, as in 53:1 (ἀκοὴ ἡμῶν). Nor are the Gentiles healed by his “wounds” or “bruises” (53:5: μώλωψ); this can only apply to the people of God. The Servant will rather judge the kings and the nations, the wicked and the rich (Isa. 52:15; 53:9, 12). “The many” in 53:11–12 are the same as the “we” who make their confession in the first person plural in verses 1–7. They represent the doubting, straying Israel, for which the Servant has sacrificed himself. If the people of Israel repent, acknowledging and confessing their sins—which is perhaps their spiritual “sin offering” (cf. ἐὰν δῶτε περὶ ἁμαρτίας, 53:10)—then on the basis of the Servant’s vicarious atoning suffering, they may share his exalted destiny... At least the possibility of a messianic interpretation must be kept open, though as the Qumran texts now show, in the second century B.C.E. the concept of what was “messianic” was not yet as clearly fixed on the eschatological saving king from the house of David as it was later to become in the post-Christian rabbinic tradition. We must not allow the narrow concept of the Messiah in the post-Christian rabbis to regulate the diverse pre-Christian messianic ideas.
New Testament (1st century CE).
<templatestyles src="Template:Blockquote/styles.css" />"For even the Son of Man came not to be served but to serve, and to give his life as a ransom for many." - Jesus of Nazareth, Mark 10:45 (ESV)
The New Testament portrays a consistent and singular interpretation of Isaiah 53 by identifying the suffering servant as Jesus of Nazareth. His experience of crucifixion and resurrection are portrayed as the fulfillment of the text.
Besides these direct quotations, there are many more allusions to Isaiah 53 throughout the New Testament.
Gospels and Acts.
The first recorded words of Jesus in the Gospel of Mark, believed by many to be the earliest Gospel, are the following: "The time is fulfilled, and the kingdom of God is at hand; repent and believe in the gospel ("euangelion", εὐαγγέλιον)" (Mark 1:15). Biblical scholars often point to Isaiah 52:7 as the background to Jesus' proclamation. The Isaiah passage speaks of a messenger who would bring "good news" (LXX: "euangelion") of God's kingdom and the announcement of salvation (Heb: "yeshuah"). Jesus (Heb: "Yeshua") identifies himself as both the messenger of Isaiah 52:7 and the suffering servant of Isaiah 53, a linkage that was not unique in Judaism. Craig Evans cites multiple sources that link the "good news" of Isaiah 52:7 with the "report" of Isaiah 53:1 (DSS, Targum, Paul, Peter). Thus there is good reason to conjecture that whenever the New Testament authors speak of "the gospel" or "good news," it is a reference to Isaiah 53 as they saw it fulfilled in the life, death, and resurrection of Jesus (e.g. Acts 8:35). The New Testament authors refer to the "good news" ("euangelion") 76 times.
Jesus directly quotes and applies Isaiah 53:12 to himself in Luke 22:37. Mark 10:45, quoted above, is not a direct quotation of Isaiah 53, but alludes to it with the theme of serving "many" through death. These two passages provide examples of Jesus' self-understanding as the servant of Isaiah 53.
Several other passages in the Gospels and Acts apply the chapter to Jesus, but not through his own lips. Matthew comments on Jesus's miracles in healing his fellow Israelites, saying that such miracles were a fulfillment of Isaiah 53:4 (Matthew 8:17). A prominent place is given to the chapter in Acts 8:26-40, where an Ethiopian eunuch reads the chapter in the Septuagint and asks Philip, "About whom, I ask you, does the prophet say this, about himself or about someone else?" (Acts 8:34, ESV). Without elaboration, Acts continues, "Then Philip opened his mouth, and beginning with this Scripture he told him the good news about Jesus" (Acts 8:35, ESV). I. Howard Marshall commented as follows on Philip's response: It "implies that even by this early date [30s CE] the recognition that the job description in Isa. 53 fit Jesus, and only Jesus, was current among Christians."
Epistles.
Paul alludes to the themes of Isaiah 53 in 2 Cor 5:19-21, where he identifies Jesus as the sinless one who delivers righteousness to sinners. He says, "in [Jesus] we might become the righteousness of God" (2 Corinthians 5:21). This closely parallels Isaiah 53:11, where it says that the righteous servant "makes the many righteous" (NJPS) and bears the many's punishment. Romans 5:19 follows the same logic about "the many" and righteousness through Christ. In Romans 10:15, Paul identifies the message of salvation in Christ as the "good news" of Isaiah 52:7. Immediately thereafter, he appeals to Isaiah 53:1 and equates the "good news" with the "message" that Israel had rejected (Romans 10:16). With this exegesis, Paul holds that the Jewish rejection of Christ was prophesied by Isaiah, although the rejection was not in full, with Israel coming to believe in Christ at his apocalyptic return (Romans 11). John 12:38 cites Isaiah 53:1 for the same purpose of explaining that the Jewish rejection of Christ had been foretold.
The epistle of 1 Peter gives a prominent place to the text of Isaiah 53. In 1 Peter 2:23-25, at least four quotations and four allusions to Isaiah 53 are present. Carson writes, "Arguably, Peter himself was the first of the apostles to develop Suffering Servant Christology." Peter claims that Jesus's maltreatment and death were foretold in Isaiah 53, and he calls for Jesus's followers to repeat his ethical example through nonresistance.
Hebrews 9:28 includes a reference to Isaiah 53 when it says, "Christ, having been offered once to bear the sins of many..." This use of "the many" and identifying Christ as the one who bears sins follows other NT applications of Isaiah 53:11-12.
Pseudepigraphal sources (1st century CE and later).
The first-century pseudepigraphal book 4 Ezra includes the line, "Behold, my people is led like a flock to the slaughter" (4 Ezra 15:9). This may be an allusion to Isaiah 53:7, where the singular servant is interpreted as "my people." However, it could refer instead to Psalm 44:22, which includes a plural subject. The 4 Ezra passage does not have any atonement overtones, and it is Israel's persecutors who are punished by God rather than Israel themselves (see LXX above).
Psalms of Solomon 16 includes a hymn attributed to Solomon that includes themes from Isaiah 53. Solomon confesses that he had sinned greatly, saying, "my soul was poured out to death" (Psa. Solomon 16:2) and was in danger of descending to Hades with "the sinner" (cf. Isaiah 53:12). Solomon then praises God that he saved him from this fate, saying that God "did not count me with the sinners for my destruction" (Psa. Solomon 16:5). This also has overtones of Isaiah 53:12. This application of Isaiah 53 to the sinning Solomon ignores the innocence of the servant.
Sibylline Oracles 8.251-336 includes a hymn about Christ that weaves in the themes of Isaiah 53. This Christian section of the oracle may have been added to an originally Jewish version in the second or third centuries.
Wisdom of Solomon (1st century CE).
The Wisdom of Solomon 2―5, and especially 2:12-24 and 5:1-8, are commonly cited as an early Jewish reworking of the themes of Isaiah 53. The wicked and the righteous are presented as opponents, with the wicked conspiring to oppose and destroy the righteous. After the innocent righteous suffer, the wicked confess their sin and accept the righteousness of the one they rejected.
In Wisdom 2:13, the righteous is called "the servant of the Lord (παῖς κυρίου)" (c.f. Isa. 52:13 LXX). The wicked say (Wisdom 2:14), "[The servant] has become a reproof to us of our thoughts; he is burdensome for us even to see," paralleling Isaiah 53:3. The solution of the wicked is, "Let us examine him by insult and torture, that we might know his gentleness and judge his patience. Let us condemn him to a shameful death; for his examination will be by his words” (Wisdom 2:19-20). In Wisdom 5, the wicked recognize their sin and confess. Bailey comments,
<templatestyles src="Template:Blockquote/styles.css" />The wicked condemn the righteous man to a shameful death in Wis. 2:20. At the final judgment, the wicked “will be amazed” (ekstēsontai) at the unexpected salvation of the righteous man (Wis. 5:2), just as Isaiah’s many “will be amazed” (ekstēsontai) at the Servant (Isa. 52:14). Moreover, Wisdom’s “we” confess, “it was we who strayed (eplanēthēmen) from the way (hodos) of truth” (Wis. 5:6), just as Isaiah’s “we” confess, “All we like sheep have gone astray (eplanēthēmen), each … in his own way (hodos)” (Isa. 53:6). Both groups realize an error in their thinking and the correct alternative.
Although the servant is called a "son of God" who calls God his "father" (Wisdom 2:16-18), the passage does not give any indication that an individual messianic or salvific figure is in view. Bailey comments, "There is no vicarious suffering or sin-bearing of Wisdom's righteous man on behalf of sinners." Wisdom 3:1 says, with an emphasis on the plural, ""Righteous souls" are in the hand of God, and torment will never touch "them"." Thus, it would appear that the singular "righteous man" in Wisdom stands as a paradigm for the way the wicked often treat righteous individuals within Israel. The individual represents the pattern of the righteous within the nation, rather than being a singular individual with a unique experience in the nation.
Patristic sources (1st through 5th centuries CE).
Isaiah 53 was extensively quoted and applied to Jesus by the church fathers. Patristic quotations and allusions to the chapter are innumerable. This section will highlight various important witnesses to the patristic and Jewish views of the chapter as reported in patristic sources.
The earliest example outside the New Testament may be found in 1 Clement 16, circa 95 CE. Another early example is Barnabas 5:2, circa 100 CE. Irenaeus quotes it of Christ in Against Heresies 2.28.5, and Tertullian in Adversus Judaeos 10.
Justin Martyr (mid-2nd century CE).
Justin Martyr, a second century Platonic philosopher who converted to Christianity, interpreted Isaiah 53 at length with reference to Jesus. Both Justin's First Apology 50-51 and his Dialogue with Trypho include extended quotations and explanations of the text.
The Dialogue with Trypho (ca. 155 CE) is a purported debate between Justin and the Jewish man Trypho. Scholars disagree on the historicity of the debate, but the Trypho in question may have been Rabbi Tarfon.
Daniel P. Bailey has provided a nearly 100-page chapter on Justin Martyr's use of Isaiah 53 in the Dialogue with Trypho. Bailey writes, "Justin Martyr's Dialogue with Trypho makes the greatest use of Isaiah 53 of any Christian work of the first two centuries." He counts up to 42 different passages that quote or directly allude to Isaiah 53. This includes an extended quotation of the Septuagint version of Isaiah 52:10 through 54:6 in Dialogue 13. According to Bailey, the debate between Justin and Trypho concerning Isaiah 53 was twofold: 1) Do the Hebrew Scriptures in general, and Isaiah 53 in specific, predict a suffering (παθητός, "pathetos") Messiah? 2) Does Jesus fit the criteria for being the suffering Messiah so predicted? The men agreed on the first point, and disagreed on the second.
Justin's contention is that the Scriptures do predict a suffering Messiah, and he quotes Isaiah 53 repeatedly to make that point. After much argument, Trypho eventually responds,
<templatestyles src="Template:Blockquote/styles.css" />‘You know very well,’ said Trypho, ‘that we Jews all look forward to the coming of the Christ, and we admit that all your Scriptural quotations refer to Him... But we doubt whether the Christ should be so shamefully crucified, for the Law declares that he who is crucified is to be accursed (Deuteronomy 21:23). Consequently, you will find me very difficult to convince on this point. It is indeed evident that the Scriptures state that Christ was to suffer, but you will have to show us, if you can, whether it was to be the form of suffering cursed by the Law.’ (Dialogue 89)
Trypho's response, if authentic, speaks to a second-century Jewish understanding of the meaning of Isaiah 53. Trypho agrees with the concept of a suffering Messiah but denies that Jesus could be the Messiah on the grounds of Deuteronomy 21:23. While the Messiah may be subject to suffering in Trypho's mind, a shameful crucifixion would be a step too far. A crucifixion (hanging) of the Messiah would entail God cursing his Messiah in accordance with the Torah, which Trypho could not accept. Bailey comments,
Rhetorically, by agreeing that the Messiah is to be παθητός, Trypho is already agreeing that the Messiah is prefigured by Isaiah 53. What he wants is a defense of the crucifixion against the curse of Deut 21:23. But here Justin simply offers him more of Isaiah 53, on the assumption that anyone already prepared to accept such a suffering Messiah will not be too offended at a crucified one and will be able to fit this into the picture of Isaiah 53.
In his response to this objection, Justin stressed that suffering is equivalent to crucifixion, so Isaiah 53's fulfillment in Jesus was self-evident (Dialogue 89). Trypho affirmed that the Messiah was to suffer, but denied that such suffering could include crucifixion, because God would not curse his Messiah with a shameful death (Deut 21:23). Trypho based his argument on the Torah, but Justin's response downplayed the Torah and ultimately failed to answer the argument.
In Bailey's judgment, Trypho's appeal to Deuteronomy 21:23 is a mark of authenticity for the debate on this matter, because Justin never satisfactorily answers the objection, and thus leaves his interpretation of Isaiah 53 without proper defense. Timothy J. Horner also points to Trypho's use of Deuteronomy as a mark of historicity:
[Trypho] is neither Justin’s puppet nor is he blindly obdurate. This examination reveals an individual voice with its own sensibility, style, and agenda. It is a voice which defies fiction. His personality is unique, consistent, and idiosyncratic. Perhaps more surprisingly, his function in the text actually weakens Justin’s argument in some places... It is implausible and inappropriate to imagine Justin crafting his Jewish disputant in such a way as to erode some of the basic tenets of his Christian argument.
In sum, the Dialogue with Trypho presents an argument between a second-century Christian and a Jew, both of whom agreed that Isaiah 53 predicts a suffering Messiah. They disagreed about whether the historical circumstances of Jesus' life, and especially his ignominious death, could be said to match Isaiah's predictions. Justin said yes; Trypho, based on an appeal to the Torah (Deuteronomy 21:23), said no.
Origen (early 3rd-century CE).
The church father, Platonist, textual critic, and theologian Origen preserved an early witness to the "national" identification of the servant in the Jewish circles of his acquaintance. In his works, he consistently interprets Isaiah 53 in reference to Christ. However, the pagan philosopher Celsus wrote a book criticizing Christianity, and in his arguments, Celsus often employed an anonymous Jew as the one delivering the objections. Circa 248 CE, Origen wrote a response entitled Contra Celsum, where he simultaneously argued against Celsus the man and the Jewish voice that Celsus incorporated. In Contra Celsum 1.55, Origen recalled a personal conversation he had with Jews he was acquainted with:
Now I remember that, on one occasion, at a disputation held with certain Jews, who were reckoned wise men, I quoted these prophecies; to which my Jewish opponent replied, that these predictions bore reference to the whole people, regarded as one individual, and as being in a state of dispersion and suffering, in order that many proselytes might be gained, on account of the dispersion of the Jews among numerous heathen nations. And in this way he explained the words, “Thy form shall be of no reputation among men;” and then, “They to whom no message was sent respecting him shall see;” and the expression, “A man under suffering.” Many arguments were employed on that occasion during the discussion to prove that these predictions regarding one particular person were not rightly applied by them to the whole nation... But we seemed to press them hardest with the expression, “Because of the iniquities of My people was He led away unto death.” [Isaiah 53:8 LXX] For if the people, according to them, are the subject of the prophecy, how is the man said to be led away to death because of the iniquities of the people of God, unless he be a different person from that people of God?
In this report, Origen's Jewish interlocutors interpreted Isaiah 53 as a description of the entire nation of Israel suffering in the diaspora. They cited the disrespect and ill repute of Jews in the eyes of the Gentile nations, as well as the suffering the entire nation endured as if it were one individual. In reference to the redemptive themes of Isaiah 53, the Jewish interlocutors said that Israel's suffering was for the purpose of an increase in proselytes to Judaism, a reference to pre-Constantinian Jewish missionization hopes. Origen's response, based on the Septuagint of Isaiah 53:8 (which has "unto death"), was that the reference to "my people" distinguishes the servant from the nation itself.
Midrash (Talmudic Era and later).
Midrash Tanchuma Buber interprets Isaiah 52:13 concerning the greatness of Messiah:
What is the meaning of "Who are you, O great mountain"? This is the Messianic King. Then why does it call him "great mountain"? Because he is greater than the ancestors, as stated (in Is. 52:13): "Behold, my servant shall bring low. He shall be exalted, lifted up, and become exceedingly tall. He shall be exalted" (rt.: RWM) more than Abraham, "lifted up" more than Moses, "and become exceedingly tall", more so than the ministering angels.
Ruth Rabbah 5.6 includes multiple interpretations of Boaz' statement to Ruth in Ruth 2.14. The fifth interpretation includes a reference to Isaiah 53:5, interpreted as describing the sufferings of Messiah: "The fifth interpretation makes it refer to the Messiah. "Come hither":' approach to royal state. "And eat of the bread" refers to the bread of royalty; "And dip thy morsel in the vinegar" refers to his sufferings, as it is said, "But he was wounded because of our transgressions" (Isaiah 53:5). "And she sat beside the reapers," for he will be deprived of his sovereignty for a time."
Although the allusion is not certain, it is possible that Sifre Numbers 131 identifies Pinchas (cf. Numbers 25:13) with the one in Isaiah 53:12 who makes atonement for the people of Israel.
Sifre Deuteronomy 355 interprets Isaiah 53:12 as an end-times description of Moses' honor at the head of Israel's scholars.
Numbers Rabbah, quoting Isaiah 53:12, interprets the verse in terms of Israel's final redemption: "Because Israel exposed their souls to death in exile-as you read, Because he "bared his soul unto death" (Isa. LIII, 12)- and busied themselves with the Torah which is sweeter than honey, the Holy One, blessed be He, will therefore in the hereafter give them to drink of the wine that is preserved in its grapes since the six days of Creation, and will let them bathe in rivers of milk."
Pesikta of Rav Kahana.
The Pesikta of Rav Kahana includes extended interpretations of the one who brings "good news" in Isaiah 52:7. Various interpretations are given, including Isaiah himself and the returned exiles of Israel in the era of redemption (Supplement 5.1-2). The verse is also interpreted of king Messiah in two places (Piska 5.9, Supplement 5.4). In Piska 5.9, Rabbi Johanan interprets as follows:
And the voice of the turtle (twr) is heard in our land (Song 2:12), words which mean, according to R. Johanan, that the voice of the king Messiah, the voice of the one who will lead us with great care through the final turnings (tyyr) of our journey is heard in the land: “How beautiful upon the mountains are the feet of the messenger of good tidings” (Isa. 52:7).
To the extent that the Midrash understood Isaiah 52:7 to connect with Isaiah 53, these interpretations of the former may have bearing on the latter.
In Piska 19.5, Rabbi Abbahu cites Isaiah 53:10 as evidence for why a sick person who sees a seminal emission should be encouraged that his health is improving.
Modern views.
According to modern scholarship (including Christian scholars), the suffering servant described in Isaiah chapter 53 is, in its original context, the Jewish people.
A number of Christian scholars present a different approach. Developed by Walter C. Kaiser and popularised by Raymond E. Brown, the Latin phrase sensus plenior has been used in biblical exegesis to describe the supposed deeper meaning intended by God but not by the human author.
Brown defines "sensus plenior" as
<templatestyles src="Template:Blockquote/styles.css" />That additional, deeper meaning, intended by God but not clearly intended by the human author, which is seen to exist in the words of a biblical text (or group of texts, or even a whole book) when they are studied in the light of further revelation or development in the understanding of revelation.
John Goldingay suggests that the citation of in is a "stock example" of "sensus plenior". In this view, the life and ministry of Jesus is considered the revelation of these deeper meanings, such as with Isaiah 53, regardless of the original context of passages quoted in the New Testament.
The legacy of Isaiah 53.
Jewish–Christian relations.
Before 1000.
The earliest known example of a Jew and a Christian debating the meaning of Isaiah 53 comes from the church father Origen's "Contra Celsum", written in 248, where he writes of Isaiah 53:
Now I remember that, on one occasion, at a disputation held with certain Jews, who were reckoned wise men, I quoted these prophecies; to which my Jewish opponent replied, that these predictions bore reference to the whole people, regarded as one individual, and as being in a state of dispersion and suffering, in order that many proselytes might be gained, on account of the dispersion of the Jews among numerous heathen nations.
The discourse between Origen and his Jewish counterpart does not seem to have had any consequences for either party. This was not the case for the majority of centuries that have passed since that time. In Ecclesiastes Rabbah 1:24, written in the 700s, a debate about a much less controversial topic results in the arrest of the Jew engaging in the debate.
1000–1500.
In 1263, at the Disputation of Barcelona, Nachmanides expressed the Jewish viewpoint of Isaiah 53 and other matters regarding Christian belief about Jesus's role in Hebrew Scripture. The disputation was awarded in his favor by James I of Aragon, and as a result the Dominican Order compelled him to flee from Spain for the remainder of his life. Passages of Talmud were also censored.
Modern era.
The use of Isaiah 53 in debates between Jews and Christians still often occurs in the context of Christian missionary work among Jews, and the topic is a source of frequent discussion that is often repetitive and heated. Some devout Christians view the use of the Christian interpretation of Isaiah 53 in targeted conversion of Jews as a special act of Christian love and a fulfillment of Jesus Christ's teaching of the Great Commission. The unchanged common view among many Jews today, including Karaites, is that if the entire book of Isaiah is read from start to finish, in Hebrew, then it is clear that Isaiah 53 is not talking about one individual but instead the nation of Israel as a whole. Some believe the individual to be Hezekiah, who, according to , lived another 15 years (i.e., "prolonging his days") after praying to God while ill (i.e., "acquainted with grief"). His son and successor, Manasseh, was born during this time, thereby allowing Hezekiah to see his "offspring."
The phrase "like sheep to the slaughter", used to describe alleged Jewish passivity during the Holocaust, derives from Isaiah 53:7.
Jewish counter-missionary view.
International Jewish counter-missionary organizations such as Outreach Judaism or Jews for Judaism respond directly to the issues raised by Christian missionaries concerning Isaiah 53 and explore Judaism in contradistinction to Christianity.
Christian music.
Verses 3–6 and 8 from this chapter, in the King James Version, are cited as texts in the English-language oratorio "Messiah" by George Frideric Handel (HWV 56).
Jewish literature.
Talmud.
The Talmud refers occasionally to Isaiah 53:
Six things are a good sign for a sick person, namely, sneezing, perspiration, open bowels, seminal emission, sleep and a dream. Sneezing, as it is written: His sneezings flash forth light.15 Perspiration, as it is written, In the sweat of thy face shalt thou eat bread.16 Open bowels, as it is written: If he that is bent down hasteneth to be loosed, he shall not go down dying to the pit.17 Seminal emission, as it is written: Seeing seed, he will prolong his days.18 Sleep, as it is written: I should have slept, then should I have been at rest.19 A dream, as it is written: Thou didst cause me to dream and make me to live.20
(15) Job XLI, 10.
(16) Gen. III, 19.
(17) Isa. LI, 14. E.V. "He that is bent down shall speedily, etc."
(18) Isa. LIII, 10.
(19) Job. III, 13.
(20) Isa. XXXVIII, 16. V.p. 335, n. 10.
Five things which are a favourable omen for an invalid, viz.: sneezing, perspiring, sleep, a dream, and semen. Sneezing, as it is written, His sneezings flash forth light (Job XLI, 10); sweat: In the Sweat of Thy Face Shalt Thou Eat Bread3; sleep: I had slept: then it were well with me (Job III, 13)4; a dream: Wherefore make me dream [E.V. 'recover Thou me'] and make me live (Isa. XXXVIII, 16); semen: He shall see seed [i.e. semen], and prolong his days (Isa. LIII,10)
Midrash.
The midrashic method of biblical exegesis is one that, "... going more deeply than the mere literal sense, attempts to penetrate into the spirit of the Scriptures, to examine the text from all sides, and thereby to derive interpretations which are not immediately obvious":
Midrash Rabbah—Exodus XIX:6
In this world, when Israel ate the paschal lamb in Egypt, they did so in haste, as it is said: And thus shall ye eat it, etc. (Ex. XII, 11), For in haste didst thou come forth out of the land of Egypt (Deut. XVI, 3), but in the Messianic era, we are told: For ye shall not go out in haste, neither shall ye go by flight (Isa. LII, 12).
Midrash Rabbah—Numbers XIII:2
Israel exposed (he'eru) their souls to death in exile-as you read, Because he bared (he'era) his soul unto death (Isa. LIII, 12)- and busied themselves with the Torah which is sweeter than honey, the Holy One, blessed be He, will therefore in the hereafter give them to drink of the wine that is preserved in its grapes since the six days of Creation, and will let them bathe in rivers of milk.
Midrash Rabbah—Ruth V:6
6. And Boaz said unto her at meal time: come hither, and eat of the bread, and dip thy morsel in the vinegar. And she sat beside the reapers; and they reached her parched corn, and she did eat and was satisfied and left thereof (II, 14). R. Jonathan interpreted this verse in six ways. The first refers it to David... The fifth interpretation makes it refer to the Messiah. "Come hither": approach to royal state. "And eat of the bread" refers to the bread of royalty; "And dip thy morsel in the vinegar" refers to his sufferings, as it is said, But he was wounded because of our transgressions (Isa. LIII, 5).
Zohar.
The Zohar is the foundational work in the literature of Jewish mystical Kabbalah. It refers to Isaiah 53 in a wide variety of contexts:
"The Lord trieth the righteous" (Ps. XI, 5). For what reason? Said R. Simeon: "Because when God finds delight in the righteous, He brings upon them sufferings, as it is written: 'Yet it pleased the Lord to crush him by disease'" (Is. LIII, 10), as explained elsewhere. God finds delight in the soul but not in the body, as the soul resembles the supernal soul, whereas the body is not worthy to be allied to the supernal essences, although the image of the body is part of the supernal symbolism.
Observe that when God takes delight in the soul of a man, He afflicts the body in order that the soul may gain full freedom. For so long as the soul is together with the body it cannot exercise its full powers, but only when the body is broken and crushed. Again, "He trieth the righteous", so as to make them firm like "a tried stone", the "costly corner-stone" mentioned by the prophet (Is. XXVIII, 16).
R. Simeon further discoursed on the text: Behold, My servant shall prosper, he shall be exalted and lifted up, and shall be very high (Is. LII, 13). "Happy is the portion of the righteous", he said, "to whom the Holy One reveals the ways of the Torah that they may walk in them."
Observe the Scriptural text: "And Abraham took another wife, and her name was Keturah" (Gen. xxv, 1). Herein is an allusion to the soul which after death comes to earth to be built up as before. Observe that of the body it is written: "And it pleased the Lord to crush him by disease; to see if his soul would offer itself in restitution, that he might see his seed, and prolong his days, and that the purpose of the Lord might prosper by his hand." (Is. LIII, 10). That is to say, if the soul desires to be rehabilitated then he must see seed, for the soul hovers round about and is ready to enter the seed of procreation, and thus "he will prolong his days, and the purpose of the Lord", namely the Torah, "will prosper in his hand". For although a man labours in the Torah day and night, yet if his source remains fruitless, he will find no place by which to enter within the Heavenly curtain.
R. Simeon quoted here the verse: "A voice is heard in Ramah, lamentation and bitter weeping, Rachel weeping for her children, because they were not" (Jer. XXXI, I5). 'The Community of Israel is called "Rachel", as it says, "As a sheep (rahel) before her shearers is dumb" (Isa. LIII, 7). Why dumb? Because when other nations rule over her the voice departs from her and she becomes dumb. "Ramah"
When the Messiah hears of the great suffering of Israel in their dispersion, and of the wicked amongst them who seek not to know their Master, he weeps aloud on account of those wicked ones amongst them, as it is written: "But he was wounded because of our transgression, he was crushed because of our iniquities" (Isa. LIII, 5). The souls then return to their place. The Messiah, on his part, enters a certain Hall in the Garden of Eden, called the Hall of the Afflicted. There he calls for all the diseases and pains and sufferings of Israel, bidding them settle on himself, which they do. And were it not that he thus eases the burden from Israel, taking it on himself, no one could endure the sufferings meted out to Israel in expiation on account of their neglect of the Torah. So Scripture says; "Surely our diseases he did bear," etc. (Isa. LIII, 4). A similar function was performed by R. Eleazar here on earth. For, indeed, beyond number are the chastisements awaiting every man daily for the neglect of the Torah, all of which descended into the world at the time when the Torah was given. As long as Israel were in the Holy Land, by means of the Temple service and sacrifices they averted all evil diseases and afflictions from the world. Now it is the Messiah who is the means of averting them from mankind until the time when a man quits this world and receives his punishment, as already said. When a man's sins are so numerous that he has to pass through the nethermost compartments of Gehinnom in order to receive heavier punishment corresponding to the contamination of his soul, a more intense fire is kindled in order to consume that contamination. The destroying angels make use for this purpose of fiery rods, so as to expel that contamination. Woe to the soul that is subjected to such punishment! Happy are those who guard the precepts of the Torah!
"It has been taught in the name of R. Jose that on this day of Atonement it has been instituted that this portion should be read to atone for Israel in captivity. Hence we learn that if the chastisements of the Lord come upon a man, they are an atonement for his sins, and whoever sorrows for the sufferings of the righteous obtains pardon for his sins. Therefore on this day we read the portion commencing 'after the death of the two sons of Aaron', that the people may hear and lament the loss of the righteous and obtain forgiveness for their sins. For whenever a man so laments and sheds tears for them, God proclaims of him, 'thine iniquity is taken away and thy sin purged' (Isa. Vl, 7). Also he may be assured that his sons will not die in his lifetime, and of him it is written, 'he shall see seed, he shall prolong days (Isa. LIII, 19).'"
When God desires to give healing to the world He smites one righteous man among them with disease and suffering, and through him gives healing to all, as it is written, "But he was wounded for our transgressions, he was bruised for our iniquities... and with his stripes we are healed" (Isa. LIII, 5)
Why is Israel subjected to all nations? In order that the world may be preserved through them.
Notes and references.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=5844519
|
5845706
|
Comparison triangle
|
Define formula_0 as the 2-dimensional metric space of constant curvature formula_1. So, for example, formula_2 is the Euclidean plane, formula_3 is the surface of the unit sphere, and formula_4 is the hyperbolic plane.
Let formula_5 be a metric space. Let formula_6 be a triangle in formula_5, with vertices formula_7, formula_8 and formula_9. A comparison triangle formula_10 in formula_0 for formula_6 is a triangle in formula_0 with vertices formula_11, formula_12 and formula_13 such that formula_14, formula_15 and formula_16.
Such a triangle is unique up to isometry.
The interior angle of formula_10 at formula_11 is called the comparison angle between formula_8 and formula_9 at formula_7. This is well-defined provided formula_8 and formula_9 are both distinct from formula_7.
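For the Euclidean case formula_2 (curvature zero), a comparison triangle and the comparison angle can be computed directly from the three pairwise distances using the law of cosines. The following is a minimal illustrative sketch, not part of the original article; the function names are ad hoc, and it assumes the three distances are positive and satisfy the triangle inequality.

import math

def comparison_triangle_euclidean(d_pq, d_pr, d_qr):
    # Return coordinates p', q', r' of a comparison triangle in the Euclidean plane.
    p = (0.0, 0.0)
    q = (d_pq, 0.0)
    # Place r' so that |p'r'| = d_pr and |q'r'| = d_qr (law of cosines).
    x = (d_pr**2 + d_pq**2 - d_qr**2) / (2 * d_pq)
    y = math.sqrt(max(d_pr**2 - x**2, 0.0))
    return p, q, (x, y)

def comparison_angle(d_pq, d_pr, d_qr):
    # Comparison angle between q and r at p, i.e. the interior angle of the comparison triangle at p'.
    cos_a = (d_pq**2 + d_pr**2 - d_qr**2) / (2 * d_pq * d_pr)
    return math.acos(max(-1.0, min(1.0, cos_a)))

# Example with distances d(p,q) = 3, d(p,r) = 4, d(q,r) = 5 taken from some metric space X
print(comparison_triangle_euclidean(3.0, 4.0, 5.0))   # ((0.0, 0.0), (3.0, 0.0), (0.0, 4.0))
print(math.degrees(comparison_angle(3.0, 4.0, 5.0)))  # 90.0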
|
[
{
"math_id": 0,
"text": "M_{k}^{2}"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "M_{0}^{2}"
},
{
"math_id": 3,
"text": "M_{1}^{2}"
},
{
"math_id": 4,
"text": "M_{-1}^{2}"
},
{
"math_id": 5,
"text": "X"
},
{
"math_id": 6,
"text": "T"
},
{
"math_id": 7,
"text": "p"
},
{
"math_id": 8,
"text": "q"
},
{
"math_id": 9,
"text": "r"
},
{
"math_id": 10,
"text": "T*"
},
{
"math_id": 11,
"text": "p'"
},
{
"math_id": 12,
"text": "q'"
},
{
"math_id": 13,
"text": "r'"
},
{
"math_id": 14,
"text": "d(p,q) = d(p',q')"
},
{
"math_id": 15,
"text": "d(p,r) = d(p',r')"
},
{
"math_id": 16,
"text": "d(r,q) = d(r',q')"
}
] |
https://en.wikipedia.org/wiki?curid=5845706
|
5845712
|
All-pass filter
|
Signal processing filter
An all-pass filter is a signal processing filter that passes all frequencies equally in gain, but changes the phase relationship among various frequencies. Most types of filter reduce the amplitude (i.e. the magnitude) of the signal applied to them for some values of frequency, whereas the all-pass filter allows all frequencies through without changes in level.
Common applications.
A common application in electronic music production is in the design of an effects unit known as a "phaser", where a number of all-pass filters are connected in sequence and the output mixed with the raw signal.
The all-pass filter does this by varying its phase shift as a function of frequency. Generally, the filter is described by the frequency at which the phase shift crosses 90° (i.e., when the input and output signals go into quadrature – when there is a quarter wavelength of delay between them).
They are generally used to compensate for other undesired phase shifts that arise in the system, or for mixing with an unshifted version of the original to implement a notch comb filter.
They may also be used to convert a mixed phase filter into a minimum phase filter with an equivalent magnitude response or an unstable filter into a stable filter with an equivalent magnitude response.
Active analog implementation.
Implementation using low-pass filter.
The operational amplifier circuit shown in the adjacent figure implements a single-pole active all-pass filter that features a low-pass filter at the non-inverting input of the opamp. The filter's transfer function is given by:
formula_0
which has one pole at -1/RC and one zero at 1/RC (i.e., they are "reflections" of each other across the imaginary axis of the complex plane). The magnitude and phase of H(iω) for some angular frequency ω are
formula_1
The filter has unity-gain magnitude for all ω. The filter introduces a different delay at each frequency and reaches input-to-output "quadrature" at ω=1/RC (i.e., phase shift is 90°).
This implementation uses a low-pass filter at the non-inverting input to generate the phase shift and negative feedback.
In fact, the phase shift of the all-pass filter is double the phase shift of the low-pass filter at its non-inverting input.
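As an illustrative numerical check (not from the original article), the transfer function above can be evaluated on the imaginary axis to confirm the unity-gain magnitude and the −2 arctan(ωRC) phase; the component values below are arbitrary, and a short Python sketch is used only for convenience.

import cmath, math

R, C = 10e3, 10e-9          # arbitrary example values: 10 kΩ and 10 nF
wc = 1.0 / (R * C)          # frequency of input-to-output quadrature, in rad/s

def H(w):
    # H(iw) = (1 - i*w*R*C) / (1 + i*w*R*C)
    s = 1j * w
    return (1 - s * R * C) / (1 + s * R * C)

for w in (0.1 * wc, wc, 10 * wc):
    h = H(w)
    print(f"|H| = {abs(h):.6f}, phase = {math.degrees(cmath.phase(h)):9.3f} deg, "
          f"-2*atan(wRC) = {math.degrees(-2 * math.atan(w * R * C)):9.3f} deg")
# The magnitude stays at exactly 1 for every frequency, and the phase falls from 0
# towards -180 degrees, passing through -90 degrees at w = 1/(RC).

The high-pass variant described below behaves in the same way, except that its phase starts at 180° and falls towards 0°, i.e. π − 2 arctan(ωRC).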
Interpretation as a Padé approximation to a pure delay.
The Laplace transform of a pure delay is given by
formula_2
where formula_3 is the delay (in seconds) and formula_4 is complex frequency. This can be approximated using a Padé approximant, as follows:
formula_5
where the last step was achieved via a first-order Taylor series expansion of the numerator and denominator. By setting formula_6 we recover formula_7 from above.
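A quick numerical comparison (an illustrative sketch, not part of the original article) shows how the first-order Padé approximant tracks the phase of a true delay at low frequencies while its magnitude remains exactly 1, which is precisely the all-pass property:

import cmath

T = 1e-3   # example delay of 1 ms

def exact(w):
    # pure delay e^{-sT} evaluated on the imaginary axis s = iw
    return cmath.exp(-1j * w * T)

def pade(w):
    # first-order Pade approximant (1 - sT/2) / (1 + sT/2)
    s = 1j * w
    return (1 - s * T / 2) / (1 + s * T / 2)

for wT in (0.1, 0.5, 1.0, 2.0):
    w = wT / T
    print(f"wT = {wT}: exact phase = {cmath.phase(exact(w)):+.4f} rad, "
          f"Pade phase = {cmath.phase(pade(w)):+.4f} rad, |Pade| = {abs(pade(w)):.3f}")
# The two phases agree closely for wT << 1 and diverge as wT grows;
# the magnitude of the approximant is always exactly 1.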
Implementation using high-pass filter.
The operational amplifier circuit shown in the adjacent figure implements a single-pole active all-pass filter that features a high-pass filter at the non-inverting input of the opamp. The filter's transfer function is given by:
formula_8
which has one pole at -1/RC and one zero at 1/RC (i.e., they are "reflections" of each other across the imaginary axis of the complex plane). The magnitude and phase of H(iω) for some angular frequency ω are
formula_9
The filter has unity-gain magnitude for all ω. The filter introduces a different delay at each frequency and reaches input-to-output "quadrature" at ω=1/RC (i.e., phase lead is 90°).
This implementation uses a high-pass filter at the non-inverting input to generate the phase shift and negative feedback.
In fact, the phase shift of the all-pass filter is double the phase shift of the high-pass filter at its non-inverting input.
Voltage controlled implementation.
The resistor can be replaced with a FET in its "ohmic mode" to implement a voltage-controlled phase shifter; the voltage on the gate adjusts the phase shift. In electronic music, a phaser typically consists of two, four or six of these phase-shifting sections connected in tandem and summed with the original. A low-frequency oscillator (LFO) ramps the control voltage to produce the characteristic swooshing sound.
Passive analog implementation.
The benefit of implementing all-pass filters with active components like operational amplifiers is that they do not require inductors, which are bulky and costly in integrated circuit designs. In other applications where inductors are readily available, all-pass filters can be implemented entirely without active components. There are a number of circuit topologies that can be used for this. The following are the most commonly used circuits.
Lattice filter.
The lattice phase equaliser, or lattice filter, is a filter composed of lattice sections (X-sections). With single-element branches it can produce a phase shift up to 180°, and with resonant branches it can produce phase shifts up to 360°. The filter is an example of a constant-resistance network (i.e., its image impedance is constant over all frequencies).
T-section filter.
The phase equaliser based on T topology is the unbalanced equivalent of the lattice filter and has the same phase response. While the circuit diagram may look like a low-pass filter, it is different in that the two inductor branches are mutually coupled. This results in transformer action between the two inductors and an all-pass response even at high frequency.
Bridged T-section filter.
The bridged T topology is used for delay equalisation, particularly the differential delay between two landlines being used for stereophonic sound broadcasts. This application requires that the filter has a linear phase response with frequency (i.e., constant group delay) over a wide bandwidth and is the reason for choosing this topology.
Digital implementation.
A Z-transform implementation of an all-pass filter with a complex pole at formula_10 is
formula_11
which has a zero at formula_12, where formula_13 denotes the complex conjugate. The pole and zero sit at the same angle but have reciprocal magnitudes (i.e., they are "reflections" of each other across the boundary of the complex unit circle). The placement of this pole-zero pair for a given formula_10 can be rotated in the complex plane by any angle and retain its all-pass magnitude characteristic. Complex pole-zero pairs in all-pass filters help control the frequency where phase shifts occur.
To create an all-pass implementation with real coefficients, the complex all-pass filter can be cascaded with an all-pass that substitutes formula_14 for formula_10, leading to the Z-transform implementation
formula_15
which is equivalent to the difference equation
formula_16
where formula_17 is the output and formula_18 is the input at discrete time step formula_19.
Filters such as the above can be cascaded with unstable or mixed-phase filters to create a stable or minimum-phase filter without changing the magnitude response of the system. For example, by proper choice of formula_10, a pole of an unstable system that is outside of the unit circle can be canceled and reflected inside the unit circle.
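The following sketch (with an arbitrarily chosen pole, using only the Python standard library, and not taken from the original article) evaluates the real-coefficient second-order all-pass on the unit circle to confirm its flat magnitude response, and runs the equivalent difference equation directly:

import cmath, math

z0 = 0.7 * cmath.exp(1j * math.pi / 4)   # example pole inside the unit circle (stable)
a1 = -2 * z0.real                        # -2*Re(z0)
a2 = abs(z0) ** 2                        # |z0|^2

def H(w):
    # H(z) evaluated at z = e^{jw}
    zm1 = cmath.exp(-1j * w)
    num = zm1**2 + a1 * zm1 + a2         # z^-2 - 2*Re(z0)*z^-1 + |z0|^2
    den = 1 + a1 * zm1 + a2 * zm1**2     # 1 - 2*Re(z0)*z^-1 + |z0|^2*z^-2
    return num / den

for w in (0.1, 1.0, 2.0, 3.0):
    print(f"w = {w}: |H(e^jw)| = {abs(H(w)):.6f}")   # prints 1.000000 at every frequency

def allpass(x):
    # difference equation: y[k] = x[k-2] + a1*x[k-1] + a2*x[k] - a1*y[k-1] - a2*y[k-2]
    y = []
    for k in range(len(x)):
        xk2 = x[k - 2] if k >= 2 else 0.0
        xk1 = x[k - 1] if k >= 1 else 0.0
        yk1 = y[k - 1] if k >= 1 else 0.0
        yk2 = y[k - 2] if k >= 2 else 0.0
        y.append(xk2 + a1 * xk1 + a2 * x[k] - a1 * yk1 - a2 * yk2)
    return y

print(allpass([1.0, 0.0, 0.0, 0.0, 0.0]))   # start of the (infinite) impulse response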
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "H(s) = - \\frac{ s - \\frac{1}{RC} }{ s + \\frac{1}{RC} } = \\frac {1-sRC} {1+sRC}, \\,"
},
{
"math_id": 1,
"text": "|H(i\\omega)|=1 \\quad \\text{and} \\quad \\angle H(i\\omega) = - 2\\arctan( \\omega RC ). \\,"
},
{
"math_id": 2,
"text": " e^{-sT},"
},
{
"math_id": 3,
"text": "T"
},
{
"math_id": 4,
"text": "s\\in\\mathbb{C}"
},
{
"math_id": 5,
"text": " e^{-sT} =\\frac{ e^{-sT/2}}{e^{sT/2} } \\approx \\frac{1-sT/2}{1+sT/2} ,"
},
{
"math_id": 6,
"text": "RC = T/2"
},
{
"math_id": 7,
"text": "H(s)"
},
{
"math_id": 8,
"text": "H(s) = \\frac{ s - \\frac{1}{RC} }{ s + \\frac{1}{RC} }, \\,"
},
{
"math_id": 9,
"text": "|H(i\\omega)|=1 \\quad \\text{and} \\quad \\angle H(i\\omega) = \\pi - 2\\arctan( \\omega RC ). \\,"
},
{
"math_id": 10,
"text": "z_0"
},
{
"math_id": 11,
"text": "H(z) = \\frac{z^{-1}-\\overline{z_0}}{1-z_0z^{-1}} \\ "
},
{
"math_id": 12,
"text": "1/\\overline{z_0}"
},
{
"math_id": 13,
"text": "\\overline{z}"
},
{
"math_id": 14,
"text": "\\overline{z_0}"
},
{
"math_id": 15,
"text": "H(z)\n= \n\\frac{z^{-1}-\\overline{z_0}}{1-z_0z^{-1}} \\times\n\\frac{z^{-1}-z_0}{1-\\overline{z_0}z^{-1}}\n=\n\\frac {z^{-2}-2\\Re(z_0)z^{-1}+\\left|{z_0}\\right|^2} {1-2\\Re(z_0)z^{-1}+\\left|z_0\\right|^2z^{-2}}, \\ "
},
{
"math_id": 16,
"text": "\ny[k] - 2\\Re(z_0) y[k-1] + \\left|z_0\\right|^2 y[k-2] =\nx[k-2] - 2\\Re(z_0) x[k-1] + \\left|z_0\\right|^2 x[k], \\,"
},
{
"math_id": 17,
"text": "y[k]"
},
{
"math_id": 18,
"text": "x[k]"
},
{
"math_id": 19,
"text": "k"
}
] |
https://en.wikipedia.org/wiki?curid=5845712
|
584602
|
Dinitrogen pentoxide
|
<templatestyles src="Chembox/styles.css"/>
Chemical compound
Dinitrogen pentoxide (also known as nitrogen pentoxide or nitric anhydride) is the chemical compound with the formula . It is one of the binary nitrogen oxides, a family of compounds that only contain nitrogen and oxygen. It exists as colourless crystals that sublime slightly above room temperature, yielding a colorless gas.
Dinitrogen pentoxide is an unstable and potentially dangerous oxidizer that once was used as a reagent when dissolved in chloroform for nitrations but has largely been superseded by nitronium tetrafluoroborate ().
is a rare example of a compound that adopts two structures depending on the conditions. The solid is a salt, nitronium nitrate, consisting of separate nitronium cations and nitrate anions ; but in the gas phase and under some other conditions it is a covalently-bound molecule.
History.
was first reported by Deville in 1840, who prepared it by treating silver nitrate () with chlorine.
Structure and physical properties.
Pure solid is a salt, consisting of separated linear nitronium ions and planar trigonal nitrate anions . Both nitrogen centers have oxidation state +5. It crystallizes in the space group "D" ("C"6/"mmc") with "Z" = 2, with the anions in the "D"3"h" sites and the cations in "D"3"d" sites.
The vapor pressure "P" (in atm) as a function of temperature "T" (in kelvin), in the range , is well approximated by the formula
formula_0
being about 48 torr at 0 °C, 424 torr at 25 °C, and 760 torr at 32 °C (9 °C below the melting point).
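As an illustrative check (not part of the original text), the fit can be evaluated at the temperatures quoted above; the computed pressures come out close to the stated values.

import math

def vapor_pressure_atm(T_kelvin):
    # ln P = 23.2348 - 7098.2 / T, with P in atm and T in kelvin
    return math.exp(23.2348 - 7098.2 / T_kelvin)

for t_celsius in (0, 25, 32):
    p_atm = vapor_pressure_atm(t_celsius + 273.15)
    print(f"{t_celsius:2d} degC: {p_atm:.3f} atm = {p_atm * 760:.0f} torr")
# roughly 0.064 atm (about 49 torr), 0.56 atm (about 430 torr) and 0.97 atm (about 740 torr)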
In the gas phase, or when dissolved in nonpolar solvents such as carbon tetrachloride, the compound exists as covalently-bonded molecules . In the gas phase, theoretical calculations for the minimum-energy configuration indicate that the angle in each wing is about 134° and the angle is about 112°. In that configuration, the two groups are rotated about 35° around the bonds to the central oxygen, away from the plane. The molecule thus has a propeller shape, with one axis of 180° rotational symmetry ("C"2)
When gaseous is cooled rapidly ("quenched"), one can obtain the metastable molecular form, which exothermically converts to the ionic form above −70 °C.
The gaseous compound absorbs ultraviolet light with dissociation into the free radicals nitrogen dioxide and nitrogen trioxide (uncharged nitrate). The absorption spectrum has a broad band with a maximum at a wavelength of 160 nm.
Preparation.
A recommended laboratory synthesis entails dehydrating nitric acid () with phosphorus(V) oxide:
Another laboratory process is the reaction of lithium nitrate and bromine pentafluoride , in the ratio exceeding 3:1. The reaction first forms nitryl fluoride that reacts further with the lithium nitrate:
The compound can also be created in the gas phase by reacting nitrogen dioxide or with ozone:
However, the product catalyzes the rapid decomposition of ozone:
Dinitrogen pentoxide is also formed when a mixture of oxygen and nitrogen is passed through an electric discharge. Another route is the reaction of phosphoryl chloride or nitryl chloride with silver nitrate.
Reactions.
Dinitrogen pentoxide reacts with water (hydrolyses) to produce nitric acid . Thus, dinitrogen pentoxide is the anhydride of nitric acid:
Solutions of dinitrogen pentoxide in nitric acid can be seen as nitric acid with more than 100% concentration. The phase diagram of the system − shows the well-known negative azeotrope at 60% (that is, 70% ), a positive azeotrope at 85.7% (100% ), and another negative one at 87.5% ("102% ").
The reaction with hydrogen chloride also gives nitric acid and nitryl chloride :
Dinitrogen pentoxide eventually decomposes at room temperature into nitrogen dioxide and oxygen. Decomposition is negligible if the solid is kept at 0 °C in suitably inert containers.
Dinitrogen pentoxide reacts with ammonia to give several products, including nitrous oxide , ammonium nitrate , nitramide and ammonium dinitramide , depending on reaction conditions.
Decomposition of dinitrogen pentoxide at high temperatures.
At high temperatures, dinitrogen pentoxide decomposes in two successive stoichiometric steps:
In the shock wave, dinitrogen pentoxide decomposes stoichiometrically into nitrogen dioxide and oxygen. At temperatures of 600 K and higher, nitrogen dioxide is unstable with respect to nitric oxide NO and oxygen. The thermal decomposition of 0.1 mM nitrogen dioxide at 1000 K is known to require about two seconds.
Decomposition of dinitrogen pentoxide in carbon tetrachloride at 30 °C.
Apart from the decomposition at high temperatures, dinitrogen pentoxide can also be decomposed in carbon tetrachloride at 30 °C. Both the dinitrogen pentoxide and its nitrogen oxide product are soluble in carbon tetrachloride and remain in solution, while oxygen is insoluble and escapes. The volume of the oxygen formed in the reaction can be measured in a gas burette, so the decomposition can be followed by measuring the volume of oxygen produced over time, since oxygen is released only by the decomposition. The equation below refers to the decomposition of dinitrogen pentoxide in carbon tetrachloride:
And this reaction follows the first order rate law that says:
formula_1
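For a first-order process this rate law integrates to an exponential decay, [A](t) = [A]0·e^(−kt), with half-life ln 2 / k. The sketch below is purely illustrative: the rate constant and initial concentration are placeholder values, not measured data for this system.

import math

k = 5.0e-5    # placeholder first-order rate constant, in s^-1 (illustrative only)
A0 = 0.10     # placeholder initial concentration, in mol/L (illustrative only)

def concentration(t_seconds):
    # integrated first-order rate law: [A](t) = [A]0 * exp(-k*t)
    return A0 * math.exp(-k * t_seconds)

print(f"half-life = {math.log(2) / k:.0f} s")
for t in (0, 3600, 7200, 14400):
    print(f"t = {t:6d} s: concentration = {concentration(t):.4f} mol/L")
# Because oxygen is evolved in a fixed stoichiometric ratio, the measured oxygen volume
# tracks this exponential decay and can be used to extract k.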
Decomposition of nitrogen pentoxide in the presence of nitric oxide.
can also be decomposed in the presence of nitric oxide :
The rate of the initial reaction between dinitrogen pentoxide and nitric oxide is determined by the elementary unimolecular decomposition of the dinitrogen pentoxide.
Applications.
Nitration of organic compounds.
Dinitrogen pentoxide, for example as a solution in chloroform, has been used as a reagent to introduce the functionality in organic compounds. This nitration reaction is represented as follows:
where Ar represents an arene moiety. The reactivity of the can be further enhanced with strong acids that generate the "super-electrophile" .
In this use, has been largely replaced by nitronium tetrafluoroborate . This salt retains the high reactivity of , but it is thermally stable, decomposing at about 180 °C (into and ).
Dinitrogen pentoxide is relevant to the preparation of explosives.
Atmospheric occurrence.
In the atmosphere, dinitrogen pentoxide is an important reservoir of the species that are responsible for ozone depletion: its formation provides a null cycle with which and are temporarily held in an unreactive state. Mixing ratios of several parts per billion by volume have been observed in polluted regions of the nighttime troposphere. Dinitrogen pentoxide has also been observed in the stratosphere at similar levels, the reservoir formation having been postulated in considering the puzzling observations of a sudden drop in stratospheric levels above 50 °N, the so-called 'Noxon cliff'.
Variations in reactivity in aerosols can result in significant losses in tropospheric ozone, hydroxyl radicals, and concentrations. Two important reactions of in atmospheric aerosols are hydrolysis to form nitric acid and reaction with halide ions, particularly , to form molecules which may serve as precursors to reactive chlorine atoms in the atmosphere.
Hazards.
is a strong oxidizer that forms explosive mixtures with organic compounds and ammonium salts. The decomposition of dinitrogen pentoxide produces the highly toxic nitrogen dioxide gas.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\ln P = 23.2348 - \\frac{7098.2}{T}"
},
{
"math_id": 1,
"text": "-\\frac{d[\\mathrm{A}]}{dt} = k [\\mathrm{A}]"
}
] |
https://en.wikipedia.org/wiki?curid=584602
|
58462412
|
Join-based tree algorithms
|
In computer science, join-based tree algorithms are a class of algorithms for self-balancing binary search trees. This framework aims at designing highly-parallelized algorithms for various balanced binary search trees. The algorithmic framework is based on a single operation "join". Under this framework, the "join" operation captures all balancing criteria of different balancing schemes, and all other functions, built on top of "join", have generic implementations across different balancing schemes. The "join-based algorithms" can be applied to at least four balancing schemes: AVL trees, red–black trees, weight-balanced trees and treaps.
The "join"formula_0 operation takes as input two binary balanced trees formula_1 and formula_2 of the same balancing scheme, and a key formula_3, and outputs a new balanced binary tree formula_4 whose in-order traversal is the in-order traversal of formula_1, then formula_3 then the in-order traversal of formula_2. In particular, if the trees are search trees, which means that the in-order of the trees maintain a total ordering on keys, it must satisfy the condition that all keys in formula_1 are smaller than formula_3 and all keys in formula_2 are greater than formula_3.
History.
The "join" operation was first defined by Tarjan on red–black trees, which runs in worst-case logarithmic time. Later Sleator and Tarjan described a "join" algorithm for splay trees which runs in amortized logarithmic time. Later Adams extended "join" to weight-balanced trees and used it for fast set–set functions including union, intersection and set difference. In 1998, Blelloch and Reid-Miller extended "join" on treaps, and proved the bound of the set functions to be formula_5 for two trees of size formula_6 and formula_7, which is optimal in the comparison model. They also brought up parallelism in Adams' algorithm by using a divide-and-conquer scheme. In 2016, Blelloch et al. formally proposed the join-based algorithms, and formalized the "join" algorithm for four different balancing schemes: AVL trees, red–black trees, weight-balanced trees and treaps. In the same work they proved that Adams' algorithms on union, intersection and difference are work-optimal on all the four balancing schemes.
Join algorithms.
The function "join"formula_8 considers rebalancing the tree, and thus depends on the input balancing scheme. If the two trees are balanced, "join" simply creates a new node with left subtree "t"1, root k and right subtree "t"2. Suppose that "t"1 is heavier (this "heavier" depends on the balancing scheme) than "t"2 (the other case is symmetric). "Join" follows the right spine of "t"1 until a node c which is balanced with "t"2. At this point a new node with left child c, root k and right child "t"2 is created to replace c. The new node may invalidate the balancing invariant. This can be fixed with rotations.
The following are the "join" algorithms for different balancing schemes.
The "join" algorithm for AVL trees:
function joinRightAVL(TL, k, TR)
(l, k', c) := expose(TL)
if h(c) ≤ h(TR) + 1
T' := Node(c, k, TR)
if h(T') ≤ h(l) + 1
return Node(l, k', T')
else
return rotateLeft(Node(l, k', rotateRight(T')))
else
T' := joinRightAVL(c, k, TR)
T := Node(l, k', T')
if h(T') ≤ h(l) + 1
return T
else
return rotateLeft(T)
function joinLeftAVL(TL, k, TR)
/* symmetric to joinRightAVL */
function join(TL, k, TR)
if h(TL) > h(TR) + 1
return joinRightAVL(TL, k, TR)
else if h(TR) > h(TL) + 1
return joinLeftAVL(TL, k, TR)
else
return Node(TL, k, TR)
Where h(v) denotes the height of the subtree rooted at node v.
The "join" algorithm for red–black trees:
function joinRightRB(TL, k, TR)
if TL.color = black and ĥ(TL) = ĥ(TR)
return Node(TL, ⟨k, red⟩, TR)
else
(L', ⟨k', c'⟩, R') := expose(TL)
T' := Node(L', ⟨k', c'⟩, joinRightRB(R', k, TR))
if c' = black and T'.right.color = T'.right.right.color = red
T'.right.right.color := black
return rotateLeft(T')
else
return T'
function joinLeftRB(TL, k, TR)
/* symmetric to joinRightRB */
function join(TL, k, TR)
if ĥ(TL) > ĥ(TR)
T' := joinRightRB(TL, k, TR)
if (T'.color = red) and (T'.right.color = red)
T'.color := black
return T'
else if ĥ(TR) > ĥ(TL)
/* symmetric */
else if TL.color = black and TR.color = black
return Node(TL, ⟨k, red⟩, TR)
else
return Node(TL, ⟨k, black⟩, TR)
Where ĥ(v) denotes the black height of the subtree rooted at node v.
The "join" algorithm for weight-balanced trees:
function joinRightWB(TL, k, TR)
(l, k', c) := expose(TL)
if w(TL) =α w(TR)
return Node(TL, k, TR)
else
T' := joinRightWB(c, k, TR)
(l1, k1, r1) := expose(T')
if w(l) =α w(T')
return Node(l, k', T')
else if w(l) =α w(l1) and w(l)+w(l1) =α w(r1)
return rotateLeft(Node(l, k', T'))
else
return rotateLeft(Node(l, k', rotateRight(T')))
function joinLeftWB(TL, k, TR)
/* symmetric to joinRightWB */
function join(TL, k, TR)
if w(TL) >α w(TR)
return joinRightWB(TL, k, TR)
else if w(TR) >α w(TL)
return joinLeftWB(TL, k, TR)
else
return Node(TL, k, TR)
Where w(v) denotes the weight (size) of the subtree rooted at node v, and =α and >α compare weights with respect to the weight-balance parameter α.
Join-based algorithms.
In the following, formula_11 extracts the left child formula_12, key formula_3, and right child formula_13 of node formula_10 into a tuple formula_14. formula_15 creates a node with left child formula_12, key formula_3 and right child formula_13. "formula_25" means that two statements formula_26 and formula_27 can run in parallel.
Split.
To split a tree into two trees, those smaller than key "x", and those larger than key "x", we first draw a path from the root by inserting "x" into the tree. After this insertion, all values less than "x" will be found on the left of the path, and all values greater than "x" will be found on the right. By applying "Join", all the subtrees on the left side are merged bottom-up using keys on the path as intermediate nodes from bottom to top to form the left tree, and the right tree is formed symmetrically. For some applications, "Split" also returns a boolean value denoting whether "x" appears in the tree. The cost of "Split" is formula_28, on the order of the height of the tree.
The split algorithm is as follows:
function split(T, k)
if (T = nil)
return (nil, false, nil)
else
(L, m, R) := expose(T)
if k < m
(L', b, R') := split(L, k)
return (L', b, join(R', m, R))
else if k > m
(L', b, R') := split(R, k)
return (join(L, m, L'), b, R')
else
return (L, true, R)
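A corresponding Python sketch of "split", reusing the expose and join helpers from the AVL sketch above (names are illustrative; an empty tree is None):
def split(t, k):
    if t is None:
        return None, False, None
    l, m, r = expose(t)
    if k < m:
        l1, b, r1 = split(l, k)
        return l1, b, join(r1, m, r)
    if k > m:
        l1, b, r1 = split(r, k)
        return join(l, m, l1), b, r1
    return l, True, r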
Join2.
This function is defined similarly to "join" but without the middle key. It first splits out the last key formula_3 of the left tree, and then joins the rest of the left tree with the right tree using formula_3. The algorithm is as follows:
function splitLast(T)
(L, k, R) := expose(T)
if R = nil
return (L, k)
else
(T', k') := splitLast(R)
return (join(L, k, T'), k')
function join2(L, R)
if L = nil
return R
else
(L', k) := splitLast(L)
return join(L', k, R)
The cost is formula_28 for a tree of size formula_29.
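A Python sketch of "splitLast" and "join2", again reusing expose and join from the AVL sketch above (illustrative names only):
def split_last(t):
    l, k, r = expose(t)
    if r is None:
        return l, k
    t1, last = split_last(r)
    return join(l, k, t1), last

def join2(l, r):
    if l is None:
        return r
    l1, k = split_last(l)
    return join(l1, k, r)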
Insert and delete.
The insertion and deletion algorithms, when making use of "join", can be made independent of the balancing scheme. For an insertion, the algorithm compares the key to be inserted with the key in the root, inserts it into the left/right subtree if the key is smaller/greater than the key in the root, and joins the two subtrees back with the root. A deletion compares the key to be deleted with the key in the root. If they are equal, return join2 on the two subtrees. Otherwise, delete the key from the corresponding subtree, and join the two subtrees back with the root. The algorithms are as follows:
function insert(T, k)
if T = nil
return Node(nil, k, nil)
else
(L, k', R) := expose(T)
if k < k'
return join(insert(L,k), k', R)
else if k > k'
return join(L, k', insert(R, k))
else
return T
function delete(T, k)
if T = nil
return nil
else
(L, k', R) := expose(T)
if k < k'
return join(delete(L, k), k', R)
else if k > k'
return join(L, k', delete(R, k))
else
return join2(L, R)
Both insertion and deletion require formula_28 time if formula_30.
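The same algorithms as Python sketches, built only on expose, Node, join and join2 from above (illustrative names):
def insert(t, k):
    if t is None:
        return Node(None, k, None)
    l, root, r = expose(t)
    if k < root:
        return join(insert(l, k), root, r)
    if k > root:
        return join(l, root, insert(r, k))
    return t

def delete(t, k):
    if t is None:
        return None
    l, root, r = expose(t)
    if k < root:
        return join(delete(l, k), root, r)
    if k > root:
        return join(l, root, delete(r, k))
    return join2(l, r)
For example, building a tree by repeated insertion and reading it back with an in-order traversal yields the keys in sorted order, regardless of which balancing scheme backs join.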
Set–set functions.
Several set operations have been defined on weight-balanced trees: union, intersection and set difference. The union of two weight-balanced trees "t"1 and "t"2, representing sets "A" and "B", is a tree "t" that represents "A" ∪ "B". The following recursive function computes this union:
function union(t1, t2)
if t1 = nil
return t2
else if t2 = nil
return t1
else
(l1, k1, r1) := expose(t1)
(t<, b, t>) := split(t2, k1)
l' := union(l1, t<) || r' := union(r1, t>)
return join(l', k1, r')
Similarly, the algorithms of intersection and set-difference are as follows:
function intersection(t1, t2)
if t1 = nil or t2 = nil
return nil
else
(l1, k1, r1) := expose(t1)
(t<, b, t>) := split(t2, k1)
l' := intersection(l1, t<) || r' := intersection(r1, t>)
if b
return join(l', k1, r')
else
return join2(l', r')
function difference(t1, t2)
if t1 = nil
return nil
else if t2 = nil
return t1
else
(l1, k1, r1) := expose(t1)
(t<, b, t>) := split(t2, k1)
l' := difference(l1, t<) || r' := difference(r1, t>)
if b
return join2(l', r')
else
return join(l', k1, r')
The complexity of each of union, intersection and difference is formula_31 for two weight-balanced trees of sizes formula_6 and formula_7. This complexity is optimal in terms of the number of comparisons. More importantly, since the recursive calls to union, intersection or difference are independent of each other, they can be executed in parallel with a parallel depth formula_32. When formula_33, the join-based implementation applies the same computation as in a single-element insertion or deletion if the root of the larger tree is used to split the smaller tree.
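A sequential Python sketch of the three set operations, reusing expose, join, join2 and split from above; the two recursive calls in each function are independent and could run in parallel, as the "||" in the pseudocode indicates:
def union(t1, t2):
    if t1 is None:
        return t2
    if t2 is None:
        return t1
    l1, k1, r1 = expose(t1)
    lo, _, hi = split(t2, k1)
    return join(union(l1, lo), k1, union(r1, hi))

def intersection(t1, t2):
    if t1 is None or t2 is None:
        return None
    l1, k1, r1 = expose(t1)
    lo, found, hi = split(t2, k1)
    l, r = intersection(l1, lo), intersection(r1, hi)
    return join(l, k1, r) if found else join2(l, r)

def difference(t1, t2):
    if t1 is None:
        return None
    if t2 is None:
        return t1
    l1, k1, r1 = expose(t1)
    lo, found, hi = split(t2, k1)
    l, r = difference(l1, lo), difference(r1, hi)
    return join2(l, r) if found else join(l, k1, r)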
Build.
The algorithm for building a tree can make use of the union algorithm, and use the divide-and-conquer scheme:
function build(A[], n)
if n = 0
return nil
else if n = 1
return Node(nil, A[0], nil)
else
l' := build(A, n/2) || r' := build(A+n/2, n-n/2)
return union(l', r')
This algorithm costs formula_34 work and has formula_35 depth. A more-efficient algorithm makes use of a parallel sorting algorithm.
function buildSorted(A[], n)
if n = 0
return nil
else if n = 1
return Node(nil, A[0], nil)
else
l' := buildSorted(A, n/2) || r' := buildSorted(A+n/2+1, n-n/2-1)
return join(l', A[n/2], r')
function build(A[], n)
A' := sort(A, n)
return buildSorted(A', n)
This algorithm costs formula_34 work and has formula_28 depth assuming the sorting algorithm has formula_34 work and formula_28 depth.
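A Python sketch of the sort-based build; Python's built-in sorted stands in for the parallel sorting algorithm assumed in the text, and array slicing stands in for the pointer arithmetic A+n/2 of the pseudocode (illustrative names, keys assumed distinct):
def build_sorted(a):
    if len(a) == 0:
        return None
    if len(a) == 1:
        return Node(None, a[0], None)
    mid = len(a) // 2
    return join(build_sorted(a[:mid]), a[mid], build_sorted(a[mid + 1:]))

def build(a):
    return build_sorted(sorted(a))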
Filter.
This function selects all entries in a tree satisfying a predicate formula_36, and returns a tree containing all selected entries. It recursively filters the two subtrees and joins them with the root if the root satisfies formula_36; otherwise it applies "join2" to the two filtered subtrees.
function filter(T, p)
if T = nil
return nil
else
(l, k, r) := expose(T)
l' := filter(l, p) || r' := filter(r, p)
if p(k)
return join(l', k, r')
else
return join2(l', r')
This algorithm costs work formula_37 and depth formula_38 on a tree of size formula_29, assuming formula_36 has constant cost.
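A Python sketch of "filter", reusing expose, join and join2 from above (the two recursive calls are again parallelizable; the name filter_tree avoids shadowing Python's built-in filter):
def filter_tree(t, p):
    if t is None:
        return None
    l, k, r = expose(t)
    l1, r1 = filter_tree(l, p), filter_tree(r, p)
    return join(l1, k, r1) if p(k) else join2(l1, r1)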
Used in libraries.
The join-based algorithms are used to support the interfaces for sets, maps, and augmented maps in libraries such as Hackage, SML/NJ, and PAM.
Notes.
References.
|
[
{
"math_id": 0,
"text": "(L,k,R)"
},
{
"math_id": 1,
"text": "L"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "t"
},
{
"math_id": 5,
"text": "O(m\\log (1+\\tfrac{n}{m}))"
},
{
"math_id": 6,
"text": "m"
},
{
"math_id": 7,
"text": "n(\\ge m)"
},
{
"math_id": 8,
"text": "(t_1,k,t_2)"
},
{
"math_id": 9,
"text": "h(v)"
},
{
"math_id": 10,
"text": "v"
},
{
"math_id": 11,
"text": " \\text{expose}(v) "
},
{
"math_id": 12,
"text": "l"
},
{
"math_id": 13,
"text": "r"
},
{
"math_id": 14,
"text": " (l,k,r) "
},
{
"math_id": 15,
"text": "\\text{Node}(l,k,r) "
},
{
"math_id": 16,
"text": "c"
},
{
"math_id": 17,
"text": " v "
},
{
"math_id": 18,
"text": "(l, \\langle k, c \\rangle, r) "
},
{
"math_id": 19,
"text": "w(v) "
},
{
"math_id": 20,
"text": "w_1 =_\\alpha w_2 "
},
{
"math_id": 21,
"text": "w_1"
},
{
"math_id": 22,
"text": "w_2"
},
{
"math_id": 23,
"text": "w_1 >_\\alpha w_2"
},
{
"math_id": 24,
"text": "w_2\n"
},
{
"math_id": 25,
"text": "s_1 || s_2"
},
{
"math_id": 26,
"text": "s_1"
},
{
"math_id": 27,
"text": "s_2"
},
{
"math_id": 28,
"text": "O(\\log n)"
},
{
"math_id": 29,
"text": "n"
},
{
"math_id": 30,
"text": "|T|=n"
},
{
"math_id": 31,
"text": "O\\left(m \\log \\left(\\tfrac{n}{m}+1\\right)\\right)"
},
{
"math_id": 32,
"text": "O(\\log m\\log n)"
},
{
"math_id": 33,
"text": "m=1"
},
{
"math_id": 34,
"text": "O(n\\log n)"
},
{
"math_id": 35,
"text": "O(\\log^3 n)"
},
{
"math_id": 36,
"text": "p"
},
{
"math_id": 37,
"text": "O(n)"
},
{
"math_id": 38,
"text": "O(\\log^2 n)"
}
] |
https://en.wikipedia.org/wiki?curid=58462412
|
58467746
|
Schlüsselgerät 39
|
The "Schlüsselgerät" 39 (SG-39) was an electrically operated rotor cipher machine, invented by the German Fritz Menzer during World War II. The device was the evolution of the Enigma rotors coupled with three Hagelin pin wheels to provide variable stepping of the rotors. All three wheels stepped once with each encipherment. Rotors stepped according to normal Enigma rules, except that an active pin at the reading station for a pin wheel prevented the coupled rotor from stepping. The cycle for a normal Enigma was 17,576 characters. When the "Schlüsselgerät" 39 was correctly configured, its cycle length was formula_0 characters, which was more than 15,000 times longer than a standard Enigma. The "Schlüsselgerät" 39 was fully automatic, in that when a key was pressed, the plain and cipher letters were printed on separate paper tapes, divided into five-digit groups. The "Schlüsselgerät" 39 was abandoned by German forces in favour of the "Schlüsselgerät" 41.
Technical description.
Note: Otto Buggisch gave the technical description of the cipher unit as part of TICOM "homework".
Gerät 39 is an electrically operated cipher machine. The cipher technique is derived from the Enigma cipher machine. A direct current passes through 3 or 4 wheels with 26 positions each, I, II, III, a reflector wheel U, and then again through the 3 wheels in reverse order, III, II and I. Unlike the Enigma, the wheels here do not control their own movement: this is done through 3 independent pin-wheels N1, N2 and N3 with periods 21, 23 and 25. These periods were distributed among N1, N2 and N3 in possibly two different configurations.
The pin wheels have a uniform motion, i.e. they move one position for every letter keyed. As for the movement of the key wheels and other details, the machine passed through different stages of development in the course of time, for which there were no specific names and which will be denoted here by a, b, c and d.
Investigations in Periodicity.
In the case of model a) above, the question of periodicity is elementary: there are 26² = 676 pure periods of the length
21 x 23 x 25 x 26 = 313,950
as long as the number of active pins on each of the pin wheels is prime to 2 and 13. This last condition should be laid down in the cipher regulations; otherwise the 676 periods would be further broken in a manner easily seen.
Things are much more complicated in the case of model b). Investigations into this problem in the winter of 1942/1943 were only partly successful; above all it was not possible to calculate exactly the lengths of the pure periods and the pre-periods. Estimates which were quite adequate for practical purposes were however given. Buggisch could not remember the details of these investigations. The extraordinary length of many pre-periods (lengths of some thousands were not uncommon) and the complication of their branches were remarkable. The general type can be illustrated by the following diagram:
In this diagram the circle represents the pure period and the straight lines the pre-periods. There were usually several pure periods, each one of them having a complicated system of pre-periods branching into it. Several separate figures of the above type side by side are then necessary to give a graphic representation of the periodicities. A lower limit for the lengths of the pure periods was, as far as Buggisch remembered: 26² x 21 x 23 x 25 = 8,162,700.
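The period arithmetic quoted in this section can be checked directly; a short Python calculation (illustrative only) reproduces the figures:
enigma_cycle = 26 ** 3                        # 17,576, the cycle of a normal three-rotor Enigma
model_a_period = 21 * 23 * 25 * 26            # 313,950, the pure period length of model a)
model_b_lower_bound = 26 ** 2 * 21 * 23 * 25  # 8,162,700, the lower bound recalled for model b)
print(enigma_cycle, model_a_period, model_b_lower_bound)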
The question of periodicity in the case of model c) was still more involved. It was just not possible to calculate the lengths of the pure periods and pre-periods, let alone give lower limits, which are themselves not inconsiderable.
Cipher Security.
The principal weaknesses of the Enigma were as follows:
Faults 1 to 4 had already been eliminated on model a) of the "Schlüsselgerät" 39; fault 5 then no longer appeared vital. On the other hand, however, the giving up of the adjustable rings and of the stecker gave rise to weaknesses which the Enigma did not have. In fact, the absence of the stecker S cannot be compensated for by making the reflector wheel U pluggable; investigations into the Enigma had shown that it was considerably more difficult to find out the steckering S than the wiring of the reflector wheel U.
In detail, the results of the investigations were as follows:
The above-mentioned weaknesses of model a) were eliminated by the introduction of steckering and adjustable rings in model b), although this had been done primarily for quite a different reason, namely to make interchangeable working with the Enigma possible. It was now no longer thought that there was a serious possibility of a break-in. As, however, the system of I and N1 still had a relatively small period of 21 x 26, it appeared desirable to destroy this too. This was done on model c) by making III react to I, and presented no technical difficulties.
Finally, in model c), the total number of periods was multiplied by 26 compared with a) and b) by the introduction of a fourth wheel; it was not, it is true, intended primarily for this purpose but was added to carry out interchangeable working with the Naval Enigma.
History of "Gerät" 39.
Model a) had been developed as early as the year 1939 or 1940 at "Wa Pruef 7" at the "Waffenamt". In the summer of 1942, a prototype was available at Dr. Pupp's laboratory at "Wa Pruef 7", and was made by the firm Telefonbau und Normalzeit (literally 'Telephone and Standard Time', later called Tenovis). A noteworthy feature was that when the clear-text letter was keyed, the corresponding cipher letter could be sent out simultaneously by the transmitter as a Morse character. This was, from a technical point of view, a fairly complicated operation. The machine thus was like a cipher teleprinter except that instead of the 5-element alphabet the ordinary Morse alphabet was used. The maximum keying speed was also the same as on a current (1940s) cipher teleprinter. It could not however be made use of when working on direct transmission, because reception at the other end was not automatic as in the case of a cipher teleprinter, but had to be done aurally by the operator. That was one of the many reasons why the automatic transmission part of the machine was omitted in later models. This was done when Oberst Kahn, the director of the Pruef 7 department of the Waffenamt, left, he having especially advocated this strange principle. The second model actually constructed was like the model designated c) above. It only printed clear text and cipher text on 2 separate strips. Buggisch saw it in January 1944 when he was visiting Wa Pruef 7 Section II at Planken.
The change from cipher-technical model a) to b) and also c) was made at the end of 1942. It was made at the instigation of the Kriegsmarine, who laid down the principle that any newly introduced cipher machine for higher HQs should permit interchangeable working with the Enigma. The Army (Heer and Wehrmacht) also adopted this standpoint: in the first instance only the highest authorities were to be issued with the new machine, e.g. OKW, OKH and the Army groups, and only gradually, as production permitted, was the Enigma machine to be replaced by the 39 at Armies, and finally perhaps at Army Corps. There were, during 1943 and 1944, between the various HQs interested, many and lengthy discussions and arguments for and against the introduction of the 39 machine. Special wishes of the Navy had to be taken into account. The industrial firm complained of lack of material and labour. Owing to these and similar difficulties, development stopped altogether at one time, though it was later resumed. At any rate the vagueness of the decisive authorities was, in addition to difficulties of production, the chief reason why the machine was never completed.
Citations.
References.
|
[
{
"math_id": 0,
"text": "2.7 x 10^8"
}
] |
https://en.wikipedia.org/wiki?curid=58467746
|
58472183
|
Richard W. Cottle
|
Richard W. Cottle (born 29 June 1934) is an American mathematician. He was a professor of Management Science and Engineering at Stanford University, starting as an Acting Assistant Professor of Industrial Engineering in 1966 and retiring in 2005. He is notable for his work on mathematical programming/optimization, “Nonlinear programs”, the proposal of the linear complementarity problem, and the general field of operations research.
Life and career.
Early life and family.
Cottle was born in Chicago on 29 June 1934 to Charles and Rachel Cottle. He started his elementary education in the neighboring village of Oak Park, Illinois, and graduated from Oak Park-River Forest High School. After that, admitted to Harvard, Cottle began by studying government (political science) and taking premedical courses. After the first semester, he changed his major to mathematics in which he earned his bachelor's (cum laude) and master's degrees. Around 1958, he became interested in teaching secondary-level mathematics. He joined the Mathematics Department at the Middlesex School in Concord, Massachusetts, where he spent two years. Midway through the latter period, he married his wife, Suzanne.
Career.
While teaching at Middlesex School, he applied and was admitted to the PhD program in mathematics at the University of California at Berkeley, with the intention of focusing on geometry. Meanwhile, he also received an offer from the Radiation Laboratory at Berkeley as a part-time computer programmer. Through that work, some of which involved linear and quadratic programming, he became aware of the work of George Dantzig and Philip Wolfe. Soon thereafter he became a member of Dantzig's team at UC Berkeley Operations Research Center (ORC). There he had the opportunity to investigate quadratic and convex programming. This developed into his doctoral dissertation under the guidance of Dantzig and Edmund Eisenberg. Cottle's first research contribution, "Symmetric Dual Quadratic Programs," was published in 1963. This was soon generalized in the joint paper "Symmetric Dual Nonlinear Programs," co-authored with Dantzig and Eisenberg. This led to the consideration of what is called a "composite problem," the first-order optimality conditions for symmetric dual programs. This in turn, was named "the fundamental problem" and still later (in a more general context) "the complementarity problem." A special case of this, called "the linear complementarity problem", is a major part of Cottle's research output. Also in 1963, he was a summer consultant at the RAND Corporation working under the supervision of Philip Wolfe. This resulted in the RAND Memo, RM-3858-PR, "A Theorem of Fritz John in Mathematical Programming."
In 1964, upon completion of his doctorate at Berkeley, he worked for Bell Telephone Laboratories in Holmdel, New Jersey. In 1965, he was invited to visit Stanford's OR Program, and in 1966, he became an Acting Assistant Professor of Industrial Engineering at Stanford. The next year he became an Assistant Professor in Stanford's new Department of Operations Research. He became an Associate Professor in 1969 and Full Professor in 1973. He chaired the department from 1990 to 1996. During 39 years on the active faculty at Stanford he had over 30 leadership roles in national and international conferences. He served on the editorial board of 8 scholarly journals, and was Editor-in-Chief of the journal Mathematical Programming. He served as the associate chair of the Engineering-Economic Systems & Operations Research Department (EES & OR) after the merger of the two departments. In 2000, EES & OR merged again, this time with the Industrial Engineering & Engineering Management Department to form Management Science and Engineering (MS&E). During his sabbatical year at Harvard and MIT (1970-1971), he wrote “Manifestations of the Schur Complement”, one of his most cited papers. In 1974, he started working on “The Linear Complementarity Problem,” one of his most noted publications. In the mid-1980s, two of his former students, Jong-Shi Pang and Richard E. Stone, joined him as co-authors of this book, which was published in 1992. “The Linear Complementarity Problem” won the Frederick W. Lanchester Prize of the Institute for Operations Research and the Management Sciences (INFORMS) in 1994. “The Linear Complementarity Problem” was republished by the Society for Industrial and Applied Mathematics in the “Classics in Applied Mathematics” series in 2009. During 1978–1979, he spent a sabbatical year at the University of Bonn and the University of Cologne. There he wrote the paper “Observations on a Class of Nasty Linear Complementarity Problems”, which relates the celebrated Klee-Minty result on the exponential time behavior of the simplex method of linear programming with the same sort of behavior in Lemke's algorithm for the LCP, and Hamiltonian paths on the n-cube with the binary Gray code representation of the integers from 0 to 2^n - 1. Also during this time he solved the problem of minimally triangulating the n-cube for n = 4 and worked with Mark Broadie to solve a restricted case for n = 5. In 2006 he was appointed a fellow of INFORMS and in 2018 received the Saul I. Gass Expository Writing Award.
Contributions.
Linear complementarity Problem.
Cottle is best known for his extensive publications on the Linear Complementarity Problem (LCP). This work includes analytical studies, algorithms, and the interaction of matrix theory and linear inequality theory with the LCP. Much of this is an outgrowth of his doctoral dissertation supervised by George Dantzig, with whom he collaborated in some of his earliest papers. The leading example is "Complementary pivot theory of mathematical programming," published in 1968.
Definitions.
The standard form of the LCP is a mapping:
formula_0 (1)
Given formula_1, find a vector formula_2, such that formula_3, formula_4 and formula_5, for formula_6
Because the affine mapping "f" is specified by the vector "q" and the matrix "M", the problem is ordinarily denoted LCP("q", "M") or sometimes just ("q", "M"). A system of the form (1) in which "f" is not affine is called a "nonlinear complementarity problem" and is denoted NCP(formula_1). The notation CP(formula_1) is meant to cover both cases.
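As an illustration of the definition, the following Python sketch solves a small LCP("q", "M") by brute-force enumeration of complementary index sets. It is purely didactic (exponential in "n") and is not one of the practical methods, such as Lemke's algorithm, studied by Cottle; all names are illustrative.
from itertools import combinations
import numpy as np

def solve_lcp_bruteforce(q, M, tol=1e-9):
    # Find x >= 0 with w = q + Mx >= 0 and x_i * w_i = 0 for every i.
    n = len(q)
    for size in range(n + 1):
        for alpha in combinations(range(n), size):
            alpha = list(alpha)
            x = np.zeros(n)
            if alpha:
                try:
                    # set w_alpha = 0 and x_i = 0 outside alpha, then solve for x_alpha
                    x[alpha] = np.linalg.solve(M[np.ix_(alpha, alpha)], -q[alpha])
                except np.linalg.LinAlgError:
                    continue
            w = q + M @ x
            if (x >= -tol).all() and (w >= -tol).all():
                return x
    return None

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite, so a unique solution exists
q = np.array([-5.0, -6.0])
x = solve_lcp_bruteforce(q, M)
print(x, q + M @ x)                      # x = [4/3, 7/3], w = [0, 0]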
Polyhedral sets having a least element.
According to a paper by Cottle and Veinott: "For a fixed "m" formula_7 "n" matrix "A", we consider the family of polyhedral sets formula_8, and prove a theorem characterizing, in terms of "A", the circumstances under which every nonempty "X_b" has a least element. In the special case where "A" contains all the rows of an "n formula_7 n" identity matrix, the conditions are equivalent to "A^T" being Leontief."
Publications and others.
Publications and Professional Activities.
This list has been retrieved from the website.
Further reading.
R. W. Cottle and G. B. Dantzig. Complementary pivot theory of mathematical programming. "Linear Algebra and its Applications", 1:103-125, 1968
References.
|
[
{
"math_id": 0,
"text": "f: R^n \\rightarrow R^n \\textvisiblespace f(x)=q +Mx"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "x\\in R^n"
},
{
"math_id": 3,
"text": "x_i\\geq 0"
},
{
"math_id": 4,
"text": "f_i(x)\\geq 0"
},
{
"math_id": 5,
"text": "x_if_i(x) = 0"
},
{
"math_id": 6,
"text": "i=1,2,...,n"
},
{
"math_id": 7,
"text": "\\times"
},
{
"math_id": 8,
"text": "X_b = \\{x|Ax \\geq b\\}, \\textvisiblespace b \\in R_m"
}
] |
https://en.wikipedia.org/wiki?curid=58472183
|
58472531
|
Maximum-entropy random graph model
|
Maximum-entropy random graph models are random graph models used to study complex networks subject to the principle of maximum entropy under a set of structural constraints, which may be global, distributional, or local.
Overview.
Any random graph model (at a fixed set of parameter values) results in a probability distribution on graphs, and those that are maximum entropy within the considered class of distributions have the special property of being maximally unbiased null models for network inference (e.g. biological network inference). Each model defines a family of probability distributions on the set of graphs of size formula_0 (for each formula_1 for some finite formula_2), parameterized by a collection of constraints on formula_3 observables formula_4 defined for each graph formula_5 (such as fixed expected average degree, degree distribution of a particular form, or specific degree sequence), enforced in the graph distribution alongside entropy maximization by the method of Lagrange multipliers. Note that in this context "maximum entropy" refers not to the entropy of a single graph, but rather the entropy of the whole probabilistic ensemble of random graphs.
Several commonly studied random network models are in fact maximum entropy, for example the ER graphs formula_6 and formula_7 (which each have one global constraint on the number of edges), as well as the configuration model (CM) and the soft configuration model (SCM) (which each have formula_0 local constraints, one for each nodewise degree-value). In the two pairs of models mentioned above, an important distinction is in whether the constraint is sharp (i.e. satisfied by every element of the set of size-formula_0 graphs with nonzero probability in the ensemble), or soft (i.e. satisfied on average across the whole ensemble). The former (sharp) case corresponds to a microcanonical ensemble, the condition of maximum entropy yielding all graphs formula_5 satisfying formula_8 as equiprobable; the latter (soft) case is canonical, producing an exponential random graph model (ERGM).
Canonical ensemble of graphs (general framework).
Suppose we are building a random graph model consisting of a probability distribution formula_9 on the set formula_10 of simple graphs with formula_0 vertices. The Gibbs entropy formula_11 of this ensemble will be given by
formula_12
We would like the ensemble-averaged values formula_13 of observables formula_4 (such as average degree, average clustering, or average shortest path length) to be tunable, so we impose formula_3 "soft" constraints on the graph distribution:
formula_14
where formula_15 label the constraints. Application of the method of Lagrange multipliers to determine the distribution formula_9 that maximizes formula_11 while satisfying formula_16, and the normalization condition formula_17 results in the following:
formula_18
where formula_19 is a normalizing constant (the partition function) and formula_20 are parameters (Lagrange multipliers) coupled to the correspondingly indexed graph observables, which may be tuned to yield graph samples with desired values of those properties, on average; the result is an exponential family and canonical ensemble; specifically yielding an ERGM.
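As a concrete illustration of how a multiplier is coupled to an observable, consider the single-constraint case in which the lone observable is the number of edges (a sketch with illustrative Python names). The distribution then factorizes over vertex pairs, each edge being present independently with probability 1/(1 + e^ψ), so the multiplier ψ can be solved in closed form from the desired expected edge count:
import math, random

def edge_probability(psi):
    return 1.0 / (1.0 + math.exp(psi))

def multiplier_for_expected_edges(n, m_bar):
    # choose psi so that <m> = (number of pairs) * p equals m_bar
    pairs = n * (n - 1) / 2
    p = m_bar / pairs
    return math.log((1 - p) / p)

def sample_graph(n, psi):
    p = edge_probability(psi)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < p]

psi = multiplier_for_expected_edges(100, 250)
mean_edges = sum(len(sample_graph(100, psi)) for _ in range(200)) / 200
print(mean_edges)   # close to the target of 250
This is the canonical (soft) counterpart formula_7 of the sharp model formula_6 discussed next.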
The Erdős–Rényi model formula_6.
In the canonical framework above, constraints were imposed on ensemble-averaged quantities formula_21. Although these properties will on average take on values specifiable by appropriate setting of formula_20, each specific instance formula_5 may have formula_22, which may be undesirable. Instead, we may impose a much stricter condition: every graph with nonzero probability must satisfy formula_23 exactly. Under these "sharp" constraints, the maximum-entropy distribution is determined. We exemplify this with the Erdős–Rényi model formula_6.
The sharp constraint in formula_6 is that of a fixed number of edges formula_24, that is formula_25, for all graphs formula_5 drawn from the ensemble (instantiated with a probability denoted formula_26). This restricts the sample space from formula_10 (all graphs on formula_0 vertices) to the subset formula_27. This is in direct analogy to the microcanonical ensemble in classical statistical mechanics, wherein the system is restricted to a thin manifold in the phase space of all states of a particular energy value.
Upon restricting our sample space to formula_28, we have no external constraints (besides normalization) to satisfy, and thus we select formula_26 to maximize formula_11 without making use of Lagrange multipliers. It is well known that the entropy-maximizing distribution in the absence of external constraints is the uniform distribution over the sample space (see maximum entropy probability distribution), from which we obtain:
formula_29
where the last expression in terms of binomial coefficients is the number of ways to place formula_24 edges among formula_30 possible edges, and thus is the cardinality of formula_28.
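A small Python sketch (illustrative names) of the resulting microcanonical ensemble: the probability of any particular formula_24-edge graph, and uniform sampling from formula_28 by choosing formula_24 of the formula_30 possible edges.
from math import comb
import random

def g_nm_probability(n, m):
    # probability of any single graph in G(n, m): 1 / C(C(n,2), m)
    return 1 / comb(comb(n, 2), m)

def sample_g_nm(n, m):
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return random.sample(pairs, m)      # uniform over all m-edge graphs

print(g_nm_probability(5, 3))           # 1 / C(10, 3) = 1/120
print(sample_g_nm(5, 3))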
Generalizations.
A variety of maximum-entropy ensembles have been studied on generalizations of simple graphs. These include, for example, ensembles of simplicial complexes, and weighted random graphs with a given expected degree sequence.
References.
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "n>n_0"
},
{
"math_id": 2,
"text": "n_0"
},
{
"math_id": 3,
"text": "J"
},
{
"math_id": 4,
"text": "\\{Q_j(G)\\}_{j=1}^J"
},
{
"math_id": 5,
"text": "G"
},
{
"math_id": 6,
"text": "G(n,m)"
},
{
"math_id": 7,
"text": "G(n,p)"
},
{
"math_id": 8,
"text": "Q_j(G)=q_j\\forall j"
},
{
"math_id": 9,
"text": "\\mathbb{P}(G)"
},
{
"math_id": 10,
"text": "\\mathcal{G}_n"
},
{
"math_id": 11,
"text": "S[G]"
},
{
"math_id": 12,
"text": " S[G]=-\\sum_{G\\in \\mathcal{G}_n}\\mathbb{P}(G)\\log\\mathbb{P}(G)."
},
{
"math_id": 13,
"text": "\\{\\langle Q_j \\rangle\\}_{j=1}^J"
},
{
"math_id": 14,
"text": " \\langle Q_j \\rangle = \\sum_{G\\in \\mathcal{G}_n}\\mathbb{P}(G)Q_j(G) = q_j, "
},
{
"math_id": 15,
"text": "j=1,...,J"
},
{
"math_id": 16,
"text": "\\langle Q_j \\rangle=q_j"
},
{
"math_id": 17,
"text": "\\sum_{G\\in \\mathcal{G}_n}\\mathbb{P}(G)=1"
},
{
"math_id": 18,
"text": " \\mathbb{P}(G) = \\frac{1}{Z}\\exp\\left[-\\sum_{j=1}^J\\psi_j Q_j(G)\\right],"
},
{
"math_id": 19,
"text": "Z"
},
{
"math_id": 20,
"text": "\\{\\psi_j\\}_{j=1}^J"
},
{
"math_id": 21,
"text": "\\langle Q_j \\rangle"
},
{
"math_id": 22,
"text": "Q_j(G)\\ne q_j"
},
{
"math_id": 23,
"text": "Q_j(G)= q_j"
},
{
"math_id": 24,
"text": "m"
},
{
"math_id": 25,
"text": "|\\operatorname E(G)|=m"
},
{
"math_id": 26,
"text": "\\mathbb{P}_{n,m}(G)"
},
{
"math_id": 27,
"text": "\\mathcal{G}_{n,m}=\\{g\\in\\mathcal{G}_n;|\\operatorname E(g)|=m\\}\\subset \\mathcal{G}_n"
},
{
"math_id": 28,
"text": "\\mathcal{G}_{n,m}"
},
{
"math_id": 29,
"text": "\\mathbb{P}_{n,m}(G)=\\frac{1}{|\\mathcal{G}_{n,m}|}=\\binom{\\binom{n}{2}}{m}^{-1},"
},
{
"math_id": 30,
"text": "\\binom{n}{2}"
}
] |
https://en.wikipedia.org/wiki?curid=58472531
|
5847302
|
Outline of algebraic structures
|
Overview of and topical guide to algebraic structures
In mathematics, many types of algebraic structures are studied. Abstract algebra is primarily the study of specific algebraic structures and their properties. Algebraic structures may be viewed in different ways, however the common starting point of algebra texts is that an algebraic object incorporates one or more sets with one or more binary operations or unary operations satisfying a collection of axioms.
Another branch of mathematics known as universal algebra studies algebraic structures in general. From the universal algebra viewpoint, most structures can be divided into varieties and quasivarieties depending on the axioms used. Some axiomatic formal systems that are neither varieties nor quasivarieties, called "nonvarieties", are sometimes included among the algebraic structures by tradition.
Concrete examples of each structure will be found in the articles listed.
Algebraic structures are so numerous today that this article will inevitably be incomplete. In addition to this, there are sometimes multiple names for the same structure, and sometimes one name will be defined by disagreeing axioms by different authors. Most structures appearing on this page will be common ones which most authors agree on. Other web lists of algebraic structures, organized more or less alphabetically, include Jipsen and PlanetMath. These lists mention many structures not included below, and may present more information about some structures than is presented here.
Study of algebraic structures.
Algebraic structures appear in most branches of mathematics, and one can encounter them in many different ways.
Types of algebraic structures.
In full generality, an algebraic structure may use any number of sets and any number of axioms in its definition. The most commonly studied structures, however, usually involve only one or two sets and one or two binary operations. The structures below are organized by how many sets are involved, and how many binary operations are used. Increased indentation is meant to indicate a more exotic structure, and the least indented levels are the most basic.
One binary operation on one set.
The following "group-like" structures consist of a set with a binary operation. The binary operation can be indicated by any symbol, or with no symbol (juxtaposition). The most common structure is that of a "group". Other structures involve weakening or strengthening the axioms for groups, and may additionally use unary operations.
Two binary operations on one set.
The main types of structures with one set having two binary operations are ring-like or "ringoids" and lattice-like or simply "lattices". Ringoids and lattices can be clearly distinguished despite both having two defining binary operations. In the case of ringoids, the two operations are linked by the distributive law; in the case of lattices, they are linked by the absorption law. Ringoids also tend to have numerical models, while lattices tend to have set-theoretic models.
In ring-like structures or ringoids, the two binary operations are often called addition and multiplication, with multiplication linked to addition by the distributive law.
Lattice-like structures have two binary operations called meet and join, connected by the absorption law.
Module-like structures on two sets.
The following "module-like" structures have the common feature of having two sets, "A" and "B", so that there is a binary operation from "A"×"A" into "A" and another operation from "A"×"B" into "A". Modules, counting the ring operations, have at least three binary operations.
Algebra-like structures on two sets.
These structures are defined over two sets, a ring "R" and an "R"-module "M" equipped with an operation called multiplication. This can be viewed as a system with five binary operations: two operations on "R", two on "M" and one involving both "R" and "M". Many of these structures are hybrid structures of the previously mentioned ones.
Algebraic structures with additional non-algebraic structure.
There are many examples of mathematical structures where algebraic structure exists alongside non-algebraic structure.
Algebraic structures in different disciplines.
Some algebraic structures find uses in disciplines outside of abstract algebra. The following is meant to demonstrate some specific applications in other fields.
In Physics:
In Mathematical logic:
In Computer science:
See also.
References.
A monograph available free online:
|
[
{
"math_id": 0,
"text": "\\Z_2"
}
] |
https://en.wikipedia.org/wiki?curid=5847302
|
58479429
|
Ilona Palásti
|
Hungarian mathematician
Ilona Palásti (1924–1991) was a Hungarian mathematician who worked at the Alfréd Rényi Institute of Mathematics. She is known for her research in discrete geometry, geometric probability, and the theory of random graphs.
With Alfréd Rényi and others, she was considered to be one of the members of the Hungarian School of Probability.
Contributions.
In connection with the Erdős distinct distances problem, Palásti studied the existence of point sets for which the formula_0th least frequent distance occurs formula_0 times. That is, in such a point set there is one distance that occurs only once, another distance that occurs exactly two times, a third distance that occurs exactly three times, etc. For instance, three points with this structure must form an isosceles triangle. Any formula_1 evenly-spaced points on a line or circular arc also have the same property, but Paul Erdős asked whether this is possible for points in general position (no three on a line, and no four on a circle). Palásti found an eight-point set with this property, and showed that for any number of points between three and eight (inclusive) there is a subset of the hexagonal lattice with this property. Palásti's eight-point example remains the largest known.[E]
Another of Palásti's results in discrete geometry concerns the number of triangular faces in an arrangement of lines. When no three lines may cross at a single point, she and Zoltán Füredi found sets of formula_1 lines, subsets of the diagonals of a regular formula_2-gon, having formula_3 triangles. This remains the best lower bound known for this problem, and differs from the upper bound by only formula_4 triangles.[D]
In geometric probability, Palásti is known for her conjecture on random sequential adsorption, also known in the one-dimensional case as "the parking problem". In this problem, one places non-overlapping balls within a given region, one at a time with random locations, until no more can be placed. Palásti conjectured that the average packing density in formula_5-dimensional space could be computed as the formula_5th power of the one-dimensional density. Although her conjecture led to subsequent research in the same area, it has been shown to be inconsistent with the actual average packing density in dimensions two through four.[A]
Palásti's results in the theory of random graphs include bounds on the probability that a random graph has a Hamiltonian circuit, and on the probability that a random directed graph is strongly connected.[B][C]
References.
|
[
{
"math_id": 0,
"text": "i"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "2n"
},
{
"math_id": 3,
"text": "n(n-3)/3"
},
{
"math_id": 4,
"text": "O(n)"
},
{
"math_id": 5,
"text": "d"
}
] |
https://en.wikipedia.org/wiki?curid=58479429
|
58481710
|
Lnx
|
Lnx or LNX may refer to:
formula_0, the natural logarithm
Topics referred to by the same term
This page lists articles associated with the title Lnx.
|
[
{
"math_id": 0,
"text": "\\operatorname{ln}(x)"
}
] |
https://en.wikipedia.org/wiki?curid=58481710
|
5848270
|
Band (algebra)
|
Semigroup in which every element is idempotent
In mathematics, a band (also called idempotent semigroup) is a semigroup in which every element is idempotent (in other words equal to its own square). Bands were first studied and named by A. H. Clifford (1954).
The lattice of varieties of bands was described independently in the early 1970s by Biryukov, Fennemore and Gerhard. Semilattices, left-zero bands, right-zero bands, rectangular bands, normal bands, left-regular bands, right-regular bands and regular bands are specific subclasses of bands that lie near the bottom of this lattice and which are of particular interest; they are briefly described below.
Varieties of bands.
A class of bands forms a variety if it is closed under formation of subsemigroups, homomorphic images and direct product. Each variety of bands can be defined by a single defining identity.
Semilattices.
Semilattices are exactly commutative bands; that is, they are the bands satisfying the equation
"xy" = "yx" for all "x" and "y".
Bands induce a preorder that may be defined as formula_0 if formula_1. Requiring commutativity implies that this preorder becomes a (semilattice) partial order.
Zero bands.
A left-zero band is a band satisfying the equation
"xy" = "x" for all "x" and "y",
whence its Cayley table has constant rows.
Symmetrically, a right-zero band is one satisfying
"xy" = "y" for all "x" and "y",
so that the Cayley table has constant columns.
Rectangular bands.
A rectangular band is a band "S" that satisfies
1. "xyx" = "x" for all "x", "y" ∈ "S", or equivalently
2. "xyz" = "xz" for all "x", "y", "z" ∈ "S".
In any semigroup, the first identity is sufficient to characterize a nowhere commutative semigroup: a semigroup is nowhere commutative if and only if it satisfies the first identity.
First, suppose a semigroup is nowhere commutative. In any flexible magma formula_2, so every element commutes with its square; hence in a nowhere commutative semigroup every element is idempotent, and such a semigroup is in fact a nowhere commutative band. Moreover, in any nowhere commutative semigroup
formula_3
so formula_4 commutes with formula_5, and thus formula_6, which is the first characteristic identity.
Conversely, in any semigroup the first identity implies idempotence, since formula_7 and therefore formula_8, so the semigroup is a band. Such a band is nowhere commutative: in any semigroup formula_9, and so in a band satisfying the first identity
formula_10
In any semigroup the first identity also implies the second, because "xyz" = "xy"("zxz") = ("x"("yz")"x")"z" = "xz".
The idempotents of a rectangular semigroup (one satisfying the second identity) form a sub-band that is a rectangular band, but a rectangular semigroup may have elements that are not idempotent. In a band the second identity obviously implies the first, but this implication requires idempotence: there exist semigroups that satisfy the second identity but are not bands and do not satisfy the first.
There is a complete classification of rectangular bands. Given arbitrary sets "I" and "J" one can define a magma operation on "I" × "J" by setting
formula_11
This operation is associative because for any three pairs ("i""x", "j""x"), ("i""y", "j""y"), ("i""z", "j""z") we have
formula_12 and likewise
formula_13
These two magma identities
"(xy)z" = "xz" and
"x(yz)" = "xz"
are together equivalent to the second characteristic identity above. The two together also imply associativity, "(xy)z" = "x(yz)"; hence any magma that satisfies these two rectangular identities together with idempotence is a rectangular band. So any magma that satisfies both of the characteristic identities (four separate magma identities) is a band, and therefore a rectangular band.
The magma operation defined above is a rectangular band, because for any pair ("i", "j") we have ("i", "j") · ("i", "j") = ("i", "j"), so every element is idempotent, and the first characteristic identity follows from the second together with idempotence.
A magma that satisfies only the first characteristic identity and idempotence, however, need not be associative, so the second characteristic identity follows from the first only in a semigroup.
Any rectangular band is isomorphic to one of the above form (either formula_14 is empty, or pick any element formula_15, and then (formula_16) defines an isomorphism formula_17). Left-zero and right-zero bands are rectangular bands, and in fact every rectangular band is isomorphic to a direct product of a left-zero band and a right-zero band. All rectangular bands of prime order are zero bands, either left or right. A rectangular band is said to be purely rectangular if it is not a left-zero or right-zero band.
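The classification can be checked mechanically on small examples; the following Python sketch (illustrative only) verifies idempotence, associativity and both characteristic identities for the "I" × "J" operation defined above:
from itertools import product

I, J = range(2), range(3)
S = list(product(I, J))
op = lambda a, b: (a[0], b[1])            # (i, j) . (k, l) = (i, l)

assert all(op(x, x) == x for x in S)                                                  # idempotent
assert all(op(op(x, y), z) == op(x, op(y, z)) for x, y, z in product(S, repeat=3))    # associative
assert all(op(op(x, y), x) == x for x, y in product(S, repeat=2))                     # xyx = x
assert all(op(op(x, y), z) == op(x, z) for x, y, z in product(S, repeat=3))           # xyz = xz
print("rectangular band axioms hold on the 2 x 3 example")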
In categorical language, one can say that the category of nonempty rectangular bands is equivalent to formula_18, where formula_19 is the category with nonempty sets as objects and functions as morphisms. This implies not only that every nonempty rectangular band is isomorphic to one coming from a pair of sets, but also these sets are uniquely determined up to a canonical isomorphism, and all homomorphisms between bands come from pairs of functions between sets. If the set "I" is empty in the above result, the rectangular band "I" × "J" is independent of "J", and vice versa. This is why the above result only gives an equivalence between nonempty rectangular bands and pairs of nonempty sets.
Rectangular bands are also the "T"-algebras, where "T" is the monad on Set with "T"("X")="X"×"X", "T"("f")="f"×"f", formula_20 being the diagonal map formula_21, and formula_22.
Normal bands.
A normal band is a band "S" satisfying
"zxyz" = "zyxz" for all "x", "y", and "z" ∈ "S".
We can also say a normal band is a band "S" satisfying
"axyb" = "ayxb" for all "a", "b", "x", and "y" ∈ "S".
This is the same equation used to define medial magmas, so a normal band may also be called a medial band, and normal bands are examples of medial magmas.
Left-regular bands.
A left-regular band is a band "S" satisfying
"xyx" = "xy" for all "x" and "y" ∈ "S".
If we take a semigroup and define "a" ≤ "b" if "ab = b", we obtain a partial ordering if and only if this semigroup is a left-regular band. Left-regular bands thus show up naturally in the study of posets.
Right-regular bands.
A right-regular band is a band "S" satisfying
"xyx" = "yx" for all "x" and "y" ∈ "S".
Any right-regular band becomes a left-regular band using the opposite product. Indeed, every variety of bands has an 'opposite' version; this gives rise to the reflection symmetry in the figure below.
Regular bands.
A regular band is a band "S" satisfying
"zxzyz" = "zxyz" for all "x", "y", and "z" ∈ "S".
Lattice of varieties.
When partially ordered by inclusion, varieties of bands naturally form a lattice, in which the meet of two varieties is their intersection and the join of two varieties is the smallest variety that contains both of them. The complete structure of this lattice is known; in particular, it is countable, complete, and distributive. The sublattice consisting of the 13 varieties of regular bands is shown in the figure. The varieties of left-zero bands, semilattices, and right-zero bands are the three atoms (non-trivial minimal elements) of this lattice.
Each variety of bands shown in the figure is defined by just one identity. This is not a coincidence: in fact, "every" variety of bands can be defined by a single identity.
Notes.
|
[
{
"math_id": 0,
"text": " x \\leq y "
},
{
"math_id": 1,
"text": " x y = x "
},
{
"math_id": 2,
"text": "(aa)a = a(aa)"
},
{
"math_id": 3,
"text": "x(xyx) = (xx)yx = xyx = xy(xx) = (xyx)x \n"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "xyx"
},
{
"math_id": 6,
"text": "xyx = x"
},
{
"math_id": 7,
"text": "a = aaa"
},
{
"math_id": 8,
"text": "aa = aaaa = a(aa)a = a"
},
{
"math_id": 9,
"text": "xy = yx \\implies (xy)(yx) = (yx)(xy)"
},
{
"math_id": 10,
"text": "xy = yx \\implies x = xyx = x(yy)x = (xy)(yx) = (yx)(xy) = y(xx)y = yxy = y\n"
},
{
"math_id": 11,
"text": "(i, j) \\cdot (k, \\ell) = (i, \\ell) \\, "
},
{
"math_id": 12,
"text": " ((i_x, j_x) \\cdot (i_y, j_y)) \\cdot (i_z, j_z) = (i_x, j_y) \\cdot (i_z, j_z) = (i_x, j_z) = (i_x, j_x) \\cdot (i_z, j_z) "
},
{
"math_id": 13,
"text": " (i_x, j_x) \\cdot ((i_y, j_y) \\cdot (i_z, j_z)) = (i_x, j_x) \\cdot (i_y, j_z) = (i_x, j_z) = (i_x, j_x) \\cdot (i_z, j_z) "
},
{
"math_id": 14,
"text": "S"
},
{
"math_id": 15,
"text": "e\\in S"
},
{
"math_id": 16,
"text": "s\\mapsto (se,es)"
},
{
"math_id": 17,
"text": "S\\cong Se\\times eS"
},
{
"math_id": 18,
"text": "\\mathrm{Set}_{\\ne \\emptyset} \\times \\mathrm{Set}_{\\ne \\emptyset}"
},
{
"math_id": 19,
"text": "\\mathrm{Set}_{\\ne \\emptyset}"
},
{
"math_id": 20,
"text": "\\eta_X"
},
{
"math_id": 21,
"text": "X \\to X \\times X"
},
{
"math_id": 22,
"text": "\\mu_X ((x_{11}, x_{12}), (x_{21}, x_{22}))=(x_{11}, x_{22})"
}
] |
https://en.wikipedia.org/wiki?curid=5848270
|
5848497
|
Zerosumfree monoid
|
In abstract algebra, an additive monoid formula_0 is said to be zerosumfree, conical, centerless or positive if nonzero elements do not sum to zero. Formally:
formula_1
This means that the only way zero can be expressed as a sum is as formula_2. For example, the natural numbers under addition form a zerosumfree monoid, whereas the integers under addition do not, since 1 + (−1) = 0.
|
[
{
"math_id": 0,
"text": "(M, 0, +)"
},
{
"math_id": 1,
"text": "(\\forall a,b\\in M)\\ a + b = 0 \\implies a = b = 0 \\!"
},
{
"math_id": 2,
"text": "0 + 0"
}
] |
https://en.wikipedia.org/wiki?curid=5848497
|
58486357
|
Soft configuration model
|
Random graph model in applied mathematics
In applied mathematics, the soft configuration model (SCM) is a random graph model subject to the principle of maximum entropy under constraints on the expectation of the degree sequence of sampled graphs. Whereas the configuration model (CM) uniformly samples random graphs of a specific degree sequence, the SCM only retains the specified degree sequence on average over all network realizations; in this sense the SCM has very relaxed constraints relative to those of the CM ("soft" rather than "sharp" constraints). The SCM for graphs of size formula_0 has a nonzero probability of sampling any graph of size formula_0, whereas the CM is restricted to only graphs having precisely the prescribed connectivity structure.
Model formulation.
The SCM is a statistical ensemble of random graphs formula_1 having formula_0 vertices (formula_2) labeled formula_3, producing a probability distribution on formula_4 (the set of graphs of size formula_0). Imposed on the ensemble are formula_0 constraints, namely that the ensemble average of the degree formula_5 of vertex formula_6 is equal to a designated value formula_7, for all formula_8. The model is fully parameterized by its size formula_0 and expected degree sequence formula_9. These constraints are both local (one constraint associated with each vertex) and soft (constraints on the ensemble average of certain observable quantities), and thus yield a canonical ensemble with an extensive number of constraints. The conditions formula_10 are imposed on the ensemble by the method of Lagrange multipliers (see Maximum-entropy random graph model).
Derivation of the probability distribution.
The probability formula_11 of the SCM producing a graph formula_1 is determined by maximizing the Gibbs entropy formula_12 subject to constraints formula_13 and normalization formula_14. This amounts to optimizing the multi-constraint Lagrange function below:
formula_15
where formula_16 and formula_17 are the formula_18 multipliers to be fixed by the formula_18 constraints (normalization and the expected degree sequence). Setting to zero the derivative of the above with respect to formula_11 for an arbitrary formula_19 yields
formula_20
the constant formula_21 being the partition function normalizing the distribution; the above exponential expression applies to all formula_22, and thus is the probability distribution. Hence we have an exponential family parameterized by formula_17, which are related to the expected degree sequence formula_9 by the following equivalent expressions:
formula_23
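Because the partition function factorizes over vertex pairs, each potential edge is present independently with probability 1/(e^(ψ_i+ψ_j)+1). The following Python sketch (illustrative names, not a library API) samples one realization for given multipliers and reports the expected degrees given by the last expression above:
import numpy as np

def expected_degrees(psi):
    n = len(psi)
    return np.array([sum(1.0 / (np.exp(psi[q] + psi[j]) + 1.0)
                         for j in range(n) if j != q) for q in range(n)])

def scm_sample(psi, rng=None):
    rng = rng or np.random.default_rng()
    n = len(psi)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            p_ij = 1.0 / (np.exp(psi[i] + psi[j]) + 1.0)
            A[i, j] = A[j, i] = int(rng.random() < p_ij)
    return A

psi = np.array([1.0, 0.5, 0.0, -0.5])
print(expected_degrees(psi))           # the expected degree sequence these multipliers encode
print(scm_sample(psi).sum(axis=1))     # degree sequence of one sampled realization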
References.
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "n=|V(G)|"
},
{
"math_id": 3,
"text": "\\{v_j\\}_{j=1}^n=V(G)"
},
{
"math_id": 4,
"text": "\\mathcal{G}_n"
},
{
"math_id": 5,
"text": "k_j"
},
{
"math_id": 6,
"text": "v_j"
},
{
"math_id": 7,
"text": "\\widehat{k}_j"
},
{
"math_id": 8,
"text": "v_j\\in V(G)"
},
{
"math_id": 9,
"text": "\\{\\widehat{k}_j\\}_{j=1}^n"
},
{
"math_id": 10,
"text": "\\langle k_j \\rangle = \\widehat{k}_j"
},
{
"math_id": 11,
"text": "\\mathbb{P}_\\text{SCM}(G)"
},
{
"math_id": 12,
"text": "S[G]"
},
{
"math_id": 13,
"text": "\\langle k_j \\rangle = \\widehat{k}_j, \\ j=1,\\ldots,n"
},
{
"math_id": 14,
"text": "\\sum_{G\\in \\mathcal{G}_n}\\mathbb{P}_\\text{SCM}(G)=1"
},
{
"math_id": 15,
"text": "\n\\begin{align}\n& \\mathcal{L}\\left(\\alpha,\\{\\psi_j\\}_{j=1}^n\\right) \\\\[6pt]\n= {} & -\\sum_{G\\in\\mathcal{G}_n}\\mathbb{P}_\\text{SCM}(G)\\log\\mathbb{P}_\\text{SCM}(G) + \\alpha\\left(1-\\sum_{G\\in \\mathcal{G}_n}\\mathbb{P}_\\text{SCM}(G) \\right)+\\sum_{j=1}^n\\psi_j\\left(\\widehat{k}_j-\\sum_{G\\in\\mathcal{G}_n}\\mathbb{P}_\\text{SCM}(G)k_j(G)\\right),\n\\end{align}\n"
},
{
"math_id": 16,
"text": "\\alpha"
},
{
"math_id": 17,
"text": "\\{\\psi_j\\}_{j=1}^n"
},
{
"math_id": 18,
"text": "n+1"
},
{
"math_id": 19,
"text": "G\\in \\mathcal{G}_n"
},
{
"math_id": 20,
"text": " 0 = \\frac{\\partial \\mathcal{L}\\left(\\alpha,\\{\\psi_j\\}_{j=1}^n\\right)}{\\partial \\mathbb{P}_\\text{SCM}(G)}= -\\log \\mathbb{P}_\\text{SCM}(G) -1-\\alpha-\\sum_{j=1}^n\\psi_j k_j(G) \\ \\Rightarrow \\ \\mathbb{P}_\\text{SCM}(G)=\\frac{1}{Z}\\exp\\left[-\\sum_{j=1}^n\\psi_jk_j(G)\\right],"
},
{
"math_id": 21,
"text": "Z:=e^{\\alpha+1}=\\sum_{G\\in\\mathcal{G}_n}\\exp\\left[-\\sum_{j=1}^n\\psi_jk_j(G)\\right]=\\prod_{1\\le i < j \\le n}\\left(1+e^{-(\\psi_i+\\psi_j)}\\right)"
},
{
"math_id": 22,
"text": "G\\in\\mathcal{G}_n"
},
{
"math_id": 23,
"text": " \\langle k_q \\rangle = \\sum_{G\\in \\mathcal{G}_n}k_q(G)\\mathbb{P}_\\text{SCM}(G) = -\\frac{\\partial \\log Z}{\\partial \\psi_q} =\\sum_{j\\ne q}\\frac{1}{e^{\\psi_q+\\psi_j}+1} = \\widehat{k}_q, \\ q=1,\\ldots,n."
}
] |
https://en.wikipedia.org/wiki?curid=58486357
|
58487522
|
Preconditioned Crank–Nicolson algorithm
|
In computational statistics, the preconditioned Crank–Nicolson algorithm (pCN) is a Markov chain Monte Carlo (MCMC) method for obtaining random samples – sequences of random observations – from a target probability distribution for which direct sampling is difficult.
The most significant feature of the pCN algorithm is its dimension robustness, which makes it well-suited for high-dimensional sampling problems. The pCN algorithm is well-defined, with non-degenerate acceptance probability, even for target distributions on infinite-dimensional Hilbert spaces. As a consequence, when pCN is implemented on a real-world computer in large but finite dimension "N", i.e. on an "N"-dimensional subspace of the original Hilbert space, the convergence properties (such as ergodicity) of the algorithm are independent of "N". This is in strong contrast to schemes such as Gaussian random walk Metropolis–Hastings and the Metropolis-adjusted Langevin algorithm, whose acceptance probability degenerates to zero as "N" tends to infinity.
The algorithm as named was highlighted in 2013 by Cotter, Roberts, Stuart and White, and its ergodicity properties were proved a year later by Hairer, Stuart and Vollmer. In the specific context of sampling diffusion bridges, the method was introduced in 2008.
Description of the algorithm.
Overview.
The pCN algorithm generates a Markov chain formula_0 on a Hilbert space formula_1 whose invariant measure is a probability measure formula_2 of the form
formula_3
for each measurable set formula_4, with normalising constant formula_5 given by
formula_6
where formula_7 is a Gaussian measure on formula_1 with covariance operator formula_8 and formula_9 is some function. Thus, the pCN method applies to target probability measures that are re-weightings of a reference Gaussian measure.
The Metropolis–Hastings algorithm is a general class of methods that try to produce such Markov chains formula_0, and do so by a two-step procedure of first "proposing" a new state formula_10 given the current state formula_11 and then "accepting" or "rejecting" this proposal, according to a particular acceptance probability, to define the next state formula_12. The idea of the pCN algorithm is that a clever choice of (non-symmetric) proposal for a new state formula_10 given formula_11 might have an associated acceptance probability function with very desirable properties.
The pCN proposal.
The special form of this pCN proposal is to take
formula_13
formula_14
or, equivalently,
formula_15
The parameter formula_16 is a step size that can be chosen freely (and even optimised for statistical efficiency). One then generates formula_17 and sets
formula_18
formula_19
The acceptance probability takes the simple form
formula_20
It can be shown that this method not only defines a Markov chain that satisfies detailed balance with respect to the target distribution formula_2, and hence has formula_2 as an invariant measure, but also possesses a spectral gap that is independent of the dimension of formula_1, and so the law of formula_11 converges to formula_2 as formula_21. Thus, although one may still have to tune the step size parameter formula_22 to achieve a desired level of statistical efficiency, the performance of the pCN method is robust to the dimension of the sampling problem being considered.
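A minimal finite-dimensional Python sketch of the pCN method (illustrative names; the reference measure is N(0, C0) on R^N and the target is proportional to exp(−Φ(x)) with respect to it):
import numpy as np

def pcn(Phi, C0, beta, n_steps, rng=None):
    rng = rng or np.random.default_rng()
    N = C0.shape[0]
    L = np.linalg.cholesky(C0)                 # to draw xi ~ N(0, C0)
    x = L @ rng.standard_normal(N)
    chain = []
    for _ in range(n_steps):
        xi = L @ rng.standard_normal(N)
        x_prop = np.sqrt(1.0 - beta ** 2) * x + beta * xi      # pCN proposal
        if rng.random() < min(1.0, np.exp(Phi(x) - Phi(x_prop))):
            x = x_prop                                          # accept
        chain.append(x)
    return np.array(chain)

# Example: C0 = I on R^10 and Phi(x) = 0.5*||x - 1||^2, so the target is
# Gaussian with mean 0.5 in every coordinate.
chain = pcn(lambda x: 0.5 * np.sum((x - 1.0) ** 2), np.eye(10), beta=0.3, n_steps=5000)
print(chain[1000:].mean(axis=0))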
Contrast with symmetric proposals.
This behaviour of pCN is in stark contrast to the Gaussian random walk proposal
formula_23
with any choice of proposal covariance formula_24, or indeed any symmetric proposal mechanism. It can be shown using the Cameron–Martin theorem that for infinite-dimensional formula_1 this proposal has acceptance probability zero for formula_2-almost all formula_25 and formula_26. In practice, when one implements the Gaussian random walk proposal in dimension formula_27, this phenomenon can be seen in the way that, as formula_28, one must take formula_29 in order to prevent the acceptance probability from degenerating to zero.
|
[
{
"math_id": 0,
"text": "(X_{n})_{n \\in \\mathbb{N}}"
},
{
"math_id": 1,
"text": "\\mathcal{H}"
},
{
"math_id": 2,
"text": "\\mu"
},
{
"math_id": 3,
"text": "\\mu(E) = \\frac{1}{Z} \\int_{E} \\exp(- \\Phi(x)) \\, \\mu_{0} (\\mathrm{d} x)"
},
{
"math_id": 4,
"text": "E \\subseteq \\mathcal{H}"
},
{
"math_id": 5,
"text": "Z"
},
{
"math_id": 6,
"text": "Z = \\int_{\\mathcal{H}} \\exp(- \\Phi(x)) \\, \\mu_{0} (\\mathrm{d} x) ,"
},
{
"math_id": 7,
"text": "\\mu_{0} = \\mathcal{N}(0, C_{0})"
},
{
"math_id": 8,
"text": "C_{0}"
},
{
"math_id": 9,
"text": "\\Phi \\colon \\mathcal{H} \\to \\mathbb{R}"
},
{
"math_id": 10,
"text": "X'_{n + 1}"
},
{
"math_id": 11,
"text": "X_{n}"
},
{
"math_id": 12,
"text": "X_{n + 1}"
},
{
"math_id": 13,
"text": "X'_{n + 1} = \\sqrt{ 1 - \\beta^{2} } X_{n} + \\beta \\Xi_{n + 1} , "
},
{
"math_id": 14,
"text": "\\Xi_{n + 1} \\sim \\mu_{0} \\text{ i.i.d.}"
},
{
"math_id": 15,
"text": "X'_{n + 1} | X_{n} \\sim \\mathcal{N} \\left( \\sqrt{ 1 - \\beta^{2} } X_{n} , \\beta^2 C_{0} \\right) ."
},
{
"math_id": 16,
"text": "0 < \\beta < 1"
},
{
"math_id": 17,
"text": "Z_{n + 1} \\sim \\mathrm{Unif}([0, 1])"
},
{
"math_id": 18,
"text": "X_{n + 1} = X'_{n + 1} \\text{ if } Z_{n + 1} \\leq \\alpha(X_{n}, X'_{n + 1}) ,"
},
{
"math_id": 19,
"text": "X_{n + 1} = X_{n} \\text{ if } Z_{n + 1} > \\alpha(X_{n}, X'_{n + 1}) ."
},
{
"math_id": 20,
"text": "\\alpha(x, x') = \\min ( 1 , \\exp ( \\phi(x) - \\phi(x') ) )."
},
{
"math_id": 21,
"text": "n \\to \\infty"
},
{
"math_id": 22,
"text": "\\beta"
},
{
"math_id": 23,
"text": "X'_{n + 1} \\mid X_n \\sim \\mathcal{N} \\left( X_n, \\beta \\Gamma \\right)"
},
{
"math_id": 24,
"text": "\\Gamma"
},
{
"math_id": 25,
"text": "X'_{n+1} "
},
{
"math_id": 26,
"text": "X_n"
},
{
"math_id": 27,
"text": "N"
},
{
"math_id": 28,
"text": "N \\to \\infty"
},
{
"math_id": 29,
"text": "\\beta \\to 0"
}
] |
https://en.wikipedia.org/wiki?curid=58487522
|
584887
|
Optical coating
|
Material which alters light reflection or transmission on optics
An optical coating is one or more thin layers of material deposited on an optical component such as a lens, prism or mirror, which alters the way in which the optic reflects and transmits light. These coatings have become a key technology in the field of optics. One type of optical coating is an anti-reflective coating, which reduces unwanted reflections from surfaces, and is commonly used on spectacle and camera lenses. Another type is the high-reflector coating, which can be used to produce mirrors that reflect greater than 99.99% of the light that falls on them. More complex optical coatings exhibit high reflection over some range of wavelengths, and anti-reflection over another range, allowing the production of dichroic thin-film filters.
Types of coating.
The simplest optical coatings are thin layers of metals, such as aluminium, which are deposited on glass substrates to make mirror surfaces, a process known as silvering. The metal used determines the reflection characteristics of the mirror; aluminium is the cheapest and most common coating, and yields a reflectivity of around 88%-92% over the visible spectrum. More expensive is silver, which has a reflectivity of 95%-99% even into the far infrared, but suffers from decreasing reflectivity (<90%) in the blue and ultraviolet spectral regions. Most expensive is gold, which gives excellent (98%-99%) reflectivity throughout the infrared, but limited reflectivity at wavelengths shorter than 550 nm, resulting in the typical gold colour.
By controlling the thickness and density of metal coatings, it is possible to decrease the reflectivity and increase the transmission of the surface, resulting in a "half-silvered mirror". These are sometimes used as "one-way mirrors".
The other major type of optical coating is the dielectric coating (i.e. using materials with a different refractive index to the substrate). These are constructed from thin layers of materials such as magnesium fluoride, calcium fluoride, and various metal oxides, which are deposited onto the optical substrate. By careful choice of the exact composition, thickness, and number of these layers, it is possible to tailor the reflectivity and transmissivity of the coating to produce almost any desired characteristic. Reflection coefficients of surfaces can be reduced to less than 0.2%, producing an "antireflection" (AR) coating. Conversely, the reflectivity can be increased to greater than 99.99%, producing a "high-reflector" (HR) coating. The level of reflectivity can also be tuned to any particular value, for instance to produce a mirror that reflects 90% and transmits 10% of the light that falls on it, over some range of wavelengths. Such mirrors are often used as beamsplitters, and as output couplers in lasers. Alternatively, the coating can be designed such that the mirror reflects light only in a narrow band of wavelengths, producing an optical filter.
The versatility of dielectric coatings leads to their use in many scientific optical instruments (such as lasers, optical microscopes, refracting telescopes, and interferometers) as well as consumer devices such as binoculars, spectacles, and photographic lenses.
Dielectric layers are sometimes applied over top of metal films, either to provide a protective layer (as in silicon dioxide over aluminium), or to enhance the reflectivity of the metal film. Metal and dielectric combinations are also used to make advanced coatings that cannot be made any other way. One example is the so-called "perfect mirror", which exhibits high (but not perfect) reflection, with unusually low sensitivity to wavelength, angle, and polarization.
Antireflection coatings.
Antireflection coatings are used to reduce reflection from surfaces. Whenever a ray of light moves from one medium to another (such as when light enters a sheet of glass after travelling through air), some portion of the light is reflected from the surface (known as the "interface") between the two media.
A number of different effects are used to reduce reflection. The simplest is to use a thin layer of material at the interface, with an index of refraction between those of the two media. The reflection is minimized when
formula_0,
where formula_1 is the index of the thin layer, and formula_2 and formula_3 are the indices of the two media. The optimum refractive indices for multiple coating layers at angles of incidence other than 0° are given by Moreno et al. (2005).
Such coatings can reduce the reflection for ordinary glass from about 4% per surface to around 2%. These were the first type of antireflection coating known, having been discovered by Lord Rayleigh in 1886. He found that old, slightly tarnished pieces of glass transmitted more light than new, clean pieces due to this effect.
Practical antireflection coatings rely not only on an intermediate layer's direct reduction of the reflection coefficient, but also on the interference effect of a thin layer. If the layer's thickness is controlled precisely such that it is exactly one-quarter of the wavelength of the light in the layer (a "quarter-wave coating"), the reflections from the front and back sides of the thin layer will destructively interfere and cancel each other.
In practice, the performance of a simple one-layer interference coating is limited by the fact that the reflections only exactly cancel for one wavelength of light at one angle, and by difficulties finding suitable materials. For ordinary glass ("n"≈1.5), the optimum coating index is "n"≈1.23. Few useful substances have the required refractive index. Magnesium fluoride (MgF2) is often used, since it is hard-wearing and can be easily applied to substrates using physical vapour deposition, even though its index is higher than desirable (n=1.38). With such coatings, reflection as low as 1% can be achieved on common glass, and better results can be obtained on higher index media.
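The numbers above can be illustrated with a short Python sketch that evaluates the standard normal-incidence result for a single lossless quarter-wave layer at its design wavelength, R = ((n0·nS − n1²)/(n0·nS + n1²))²; the function names are illustrative only.

```python
import math

def quarter_wave_reflectance(n0, n1, ns):
    """Normal-incidence reflectance of a single quarter-wave layer (index n1)
    on a substrate (index ns) in a medium of index n0, evaluated at the
    design wavelength and ignoring absorption."""
    num = n0 * ns - n1 ** 2
    den = n0 * ns + n1 ** 2
    return (num / den) ** 2

def bare_surface_reflectance(n0, ns):
    """Fresnel reflectance of the uncoated interface at normal incidence."""
    return ((n0 - ns) / (n0 + ns)) ** 2

n_air, n_glass = 1.0, 1.5
print(bare_surface_reflectance(n_air, n_glass))        # ~0.04: the 4% quoted above
print(quarter_wave_reflectance(n_air, 1.38, n_glass))  # MgF2: ~0.013
print(quarter_wave_reflectance(n_air, math.sqrt(n_air * n_glass), n_glass))  # ideal index: 0.0
```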
Further reduction is possible by using multiple coating layers, designed such that reflections from the surfaces undergo maximum destructive interference. By using two or more layers, broadband antireflection coatings which cover the visible range (400-700 nm) with maximum reflectivities of less than 0.5% are commonly achievable. Reflection in narrower wavelength bands can be as low as 0.1%. Alternatively, a series of layers with small differences in refractive index can be used to create a broadband antireflective coating by means of a refractive index gradient.
High-reflection coatings.
High-reflection (HR) coatings work the opposite way to antireflection coatings. The general idea is usually based on a periodic layer system composed of two materials, one with a high index, such as zinc sulfide ("n"=2.32) or titanium dioxide ("n"=2.4), and one with a low index, such as magnesium fluoride ("n"=1.38) or silicon dioxide ("n"=1.49). This periodic system significantly enhances the reflectivity of the surface in a certain wavelength range, called the band-stop, whose width (for quarter-wave systems) is determined only by the ratio of the two indices used, while the maximum reflectivity increases towards 100% with the number of layers in the "stack". The layers are generally of quarter-wave thickness, which yields the broadest high-reflection band compared with non-quarter-wave systems composed of the same materials, and are this time designed such that the reflected beams "constructively" interfere with one another to maximize reflection and minimize transmission. The best of these coatings, built up from deposited lossless dielectric materials on perfectly smooth surfaces, can reach reflectivities greater than 99.999% (over a fairly narrow range of wavelengths). Common HR coatings can achieve 99.9% reflectivity over a broad wavelength range (tens of nanometers in the visible spectral range).
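One common way to compute such stack reflectivities is the characteristic (transfer) matrix method. The Python sketch below applies it to a lossless quarter-wave stack at normal incidence and at the design wavelength, showing the reflectance climbing towards 100% as pairs are added; the material values and function names are illustrative, not taken from any particular product.

```python
import numpy as np

def stack_reflectance(indices, thicknesses, n0, ns, wavelength):
    """Normal-incidence reflectance of a lossless dielectric multilayer
    computed with the characteristic (transfer) matrix method.

    indices, thicknesses : per-layer refractive index and physical thickness,
                           ordered from the incident medium to the substrate.
    n0, ns               : indices of the incident medium and the substrate.
    """
    M = np.eye(2, dtype=complex)
    for n, d in zip(indices, thicknesses):
        delta = 2.0 * np.pi * n * d / wavelength          # phase thickness
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer
    B, C = M @ np.array([1.0, ns])
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r) ** 2

# Quarter-wave pairs of TiO2 (n=2.4) / SiO2 (n=1.49) on glass at 550 nm.
wl, n_hi, n_lo, n_glass = 550.0, 2.4, 1.49, 1.5
for pairs in (2, 4, 8):
    layers = [n_hi, n_lo] * pairs
    thick = [wl / (4 * n) for n in layers]                # quarter-wave thicknesses
    print(pairs, stack_reflectance(layers, thick, 1.0, n_glass, wl))
```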
As for AR coatings, HR coatings are affected by the incidence angle of the light. When used away from normal incidence, the reflective range shifts to shorter wavelengths, and becomes polarization dependent. This effect can be exploited to produce coatings that polarize a light beam.
By manipulating the exact thickness and composition of the layers in the reflective stack, the reflection characteristics can be tuned to a particular application, and may incorporate both high-reflective and anti-reflective wavelength regions. The coating can be designed as a long- or short-pass filter, a bandpass or notch filter, or a mirror with a specific reflectivity (useful in lasers). For example, the dichroic prism assembly used in some cameras requires two dielectric coatings, one long-wavelength pass filter reflecting light below 500 nm (to separate the blue component of the light), and one short-pass filter to reflect red light, above 600 nm wavelength. The remaining transmitted light is the green component.
Extreme ultraviolet coatings.
In the EUV portion of the spectrum (wavelengths shorter than about 30 nm) nearly all materials absorb strongly, making it difficult to focus or otherwise manipulate light in this wavelength range. Telescopes such as TRACE or EIT that form images with EUV light use multilayer mirrors that are constructed of hundreds of alternating layers of a high-mass metal such as molybdenum or tungsten, and a low-mass spacer such as silicon, vacuum deposited onto a substrate such as glass. Each layer pair is designed to have a thickness equal to half the wavelength of light to be reflected. Constructive interference between scattered light from each layer causes the mirror to reflect EUV light of the desired wavelength as would a normal metal mirror in visible light. Using multilayer optics it is possible to reflect up to 70% of incident EUV light (at a particular wavelength chosen when the mirror is constructed).
Transparent conductive coatings.
Transparent conductive coatings are used in applications where it is important that the coating conduct electricity or dissipate static charge. Conductive coatings are used to protect the aperture from electromagnetic interference, while dissipative coatings are used to prevent the build-up of static electricity. Transparent conductive coatings are also used extensively to provide electrodes in situations where light is required to pass, for example in flat panel display technologies and in many photoelectrochemical experiments. A common substance used in transparent conductive coatings is indium tin oxide (ITO). ITO is not very optically transparent, however. The layers must be thin to provide substantial transparency, particularly at the blue end of the spectrum. Using ITO, sheet resistances of 20 to 10,000 ohms per square can be achieved. An ITO coating may be combined with an antireflective coating to further improve transmittance. Other TCOs (Transparent Conductive Oxides) include AZO (Aluminium doped Zinc Oxide), which offers much better UV transmission than ITO.
A special class of transparent conductive coatings applies to infrared films for theater-air military optics where IR transparent windows need to have (Radar) stealth (Stealth technology) properties. These are known as RAITs (Radar Attenuating / Infrared Transmitting) and include materials such as boron doped DLC (Diamond-like carbon).
Phase correction coatings.
The multiple internal reflections in roof prisms cause a polarization-dependent phase-lag of the transmitted light, in a manner similar to a Fresnel rhomb. This must be suppressed by multilayer "phase-correction coatings" applied to one of the roof surfaces to avoid unwanted interference effects and a loss of contrast in the image. Dielectric phase-correction prism coatings are applied in a vacuum chamber with perhaps 30 different superimposed vapor-deposited layers, making this a complex production process.
In a roof prism without a phase-correcting coating, s-polarized and p-polarized light each acquire a different geometric phase as they pass through the upper prism. When the two polarized components are recombined, interference between the s-polarized and p-polarized light results in a different intensity distribution perpendicular to the roof edge as compared to that along the roof edge. This effect reduces contrast and resolution in the image perpendicular to the roof edge, producing an inferior image compared to that from a porro prism erecting system. This roof edge diffraction effect may also be seen as a diffraction spike perpendicular to the roof edge generated by bright points in the image. In technical optics, such a phase is also known as the Pancharatnam phase, and in quantum physics an equivalent phenomenon is known as the Berry phase.
This effect can be seen in the elongation of the Airy disk in the direction perpendicular to the crest of the roof as this is a diffraction from the discontinuity at the roof crest.
The unwanted interference effects are suppressed by vapour-depositing a special dielectric coating known as a phase-compensating coating on the roof surfaces of the roof prism. This "phase-correction coating" or "P-coating" on the roof surfaces was developed in 1988 by Adolf Weyrauch at Carl Zeiss. Other manufacturers soon followed, and since then phase-correction coatings have been used across the board in medium- and high-quality roof prism binoculars. This coating corrects for the difference in geometric phase between s- and p-polarized light so both have effectively the same phase shift, preventing image-degrading interference.
From a technical point of view, the phase-correction coating layer does not correct the actual phase shift, but rather the partial polarization of the light that results from total reflection. Such a correction can always only be made for a selected wavelength and for a specific angle of incidence; however, it is possible to approximately correct a roof prism for polychromatic light by superimposing several layers. In this way, since the 1990s, roof prism binoculars have also achieved resolution values that were previously only achievable with porro prisms. The presence of a phase-correction coating can be checked on unopened binoculars using two polarization filters.
Fano Resonant Optical Coatings.
Fano Resonant Optical Coatings (FROCs) represent a new category of optical coatings. FROCs exhibit the photonic Fano resonance by coupling a broadband nanocavity, which serves as the continuum, with a narrowband Fabry-Perot nanocavity, representing the discrete state. The interference between these two resonances manifests as an asymmetric Fano resonance line-shape. FROCs are considered a separate category of optical coatings because they exhibit optical properties that cannot be reproduced using other optical coatings. Mainly, semi-transparent FROCs act as a beam-splitting filter that reflects and transmits the same color, a property that cannot be achieved with transmission filters, dielectric mirrors, or semi-transparent metals.
FROCs also offer remarkable structural coloring properties, as they can produce colors across a wide color gamut with both high brightness and high purity. Moreover, the dependence of color on the angle of incident light can be controlled through the dielectric cavity material, making FROCs adaptable for applications requiring either angle-independent or angle-dependent coloring. This includes decorative purposes and anti-counterfeit measures.
FROCs were used as both monolithic spectrum splitters and selective solar absorbers which makes them suitable for hybrid solar-thermal energy generation. They can be designed to reflect specific wavelength ranges, aligning with the energy band gap of photovoltaic cells, while absorbing the remaining solar spectrum. This enables higher photovoltaic efficiency at elevated optical concentrations by reducing the photovoltaic's cell temperature. The reduced temperature also increases the cell's lifetime. Additionally, their low infrared emissivity minimizes thermal losses, increasing the system's overall optothermal efficiency.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n_1 = \\sqrt{n_0 n_S}"
},
{
"math_id": 1,
"text": "n_1"
},
{
"math_id": 2,
"text": "n_0"
},
{
"math_id": 3,
"text": "n_S"
}
] |
https://en.wikipedia.org/wiki?curid=584887
|
58490041
|
Bruun rule
|
Formula for estimating the magnitude of shoreline retreat due to changes in sea level
The Bruun rule is a formula for estimating the magnitude of the retreat of the shoreline of a sandy shore in response to changes in sea level. Originally published in 1962 by Per Bruun, the Bruun rule was the first to give a relationship between sea level rise and shoreline recession. The rule is a simple, two-dimensional mass-conservation relation, and remains in common use to estimate shoreline recession in response to sea level rise, despite criticism and modification, and the availability of more complex alternate models.
The rule.
The Bruun rule gives a linear relationship between sea level rise and shoreline recession based on equilibrium profile theory, which asserts that the shoreface profile maintains an equilibrium shape, and that as sea level rises the increasing accommodation space forces this equilibrium profile landward and upward to preserve its shape relative to the new sea level. As such, the Bruun rule analysis assumes that the upper beach is eroded as the shore profile moves landward, and that the volume of eroded material is deposited offshore, resulting in a rise of the nearshore bottom that maintains a constant water depth. The Bruun rule predicts coastal recession to be as much as 10 to 50 times the sea level rise, depending on the slope of the beach.
The mathematical notation of the Bruun rule is:
formula_0
Where, "R" is the shoreline recession caused by the rise in sea level, "S" is the rise in sea level, "L" is the cross-shore width of the active beach profile, "h" is the closure depth (the depth beyond which there is no significant sediment transport), "B" is the height of the berm or dune on the eroded upper beach, and tan "β" = ("h" + "B")/"L" is the average slope of the active profile.
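Because the rule is a single algebraic relation, applying it is trivial once the profile parameters have been estimated. A minimal Python sketch, using purely illustrative parameter values, is:

```python
def bruun_recession(sea_level_rise, closure_depth, berm_height, active_width):
    """Shoreline recession R = S * L / (h + B) from the Bruun rule.

    sea_level_rise : S, rise in sea level (m)
    closure_depth  : h, depth of closure (m)
    berm_height    : B, berm or dune height (m)
    active_width   : L, cross-shore width of the active profile (m)
    """
    return sea_level_rise * active_width / (closure_depth + berm_height)

# Illustrative numbers only: 0.5 m of sea level rise on a 500 m wide profile
# with an 8 m closure depth and a 2 m berm gives 25 m of recession, i.e.
# 50 times the rise, consistent with the 10-50x range quoted above.
print(bruun_recession(0.5, 8.0, 2.0, 500.0))  # 25.0
```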
History.
In 1954, Per Moller Bruun published a paper describing beach profiles and cross-shore equilibrium. Developing these ideas, in 1962 he published his paper "Sea-Level Rise as a Cause of Shore Erosion", which first gave a relationship between sea level rise and the shoreline recession of equilibrium beach profiles. His rule was not given a name until 1967, when Schwartz published "The Bruun Theory of Sea-Level Rise as a Cause of Shore Erosion", which detailed laboratory and field tests of the theory and concluded that "the concept henceforth be known as Bruun's Rule".
Per Bruun has since published works to clarify the limitations and practicality of his model, including "Review of Conditions for Uses of the Bruun Rule of Erosion" in 1983, and "The Bruun Rule of Erosion by Sea-Level Rise: A Discussion on Large-Scale Two- and Three-Dimensional Usages" in 1988. These publications stress that, though the model is intentionally two-dimensional, it is applied to three-dimensional environments in practice, and Bruun states that the rule "must be subjected to realistic adjustments" for practical use.
As Bruun acknowledges, the Bruun rule is a characteristically simple rule and has therefore been modified many times over, to account for more factors and to be accurately applicable in more cases. The Bruun rule has most often been modified to account for longshore transport, overwash, and aeolian sediment transport. For example, Dean and Maurmeyer in 1983 modified the rule to be applicable to barrier shores and islands, some of which experience accretion instead of erosion. Rosati et al. in 2013 and Dean and Houston in 2016 have modified the rule to account for onshore transport and cross-shore movement over the supposed depth of closure. Further, Ashton et al. in 2011 modified the Bruun rule for use on cliffed coasts, and Hinkel et al. in 2013 used the rule as part of a wider methodology to describe the effects of sea level rise on and around tidal inlets. The biggest challenge that remains seems to be isolating the effect of sea level rise on beach morphology from the coupled effects of wave energy, tidal currents, wind action, sediment supplies, sediment types and grain size, among others.
In various forms and coupled with other models, the Bruun rule has now been used to estimate shoreline responses to sea level rise worldwide. The Bruun rule has been applied to coasts including those of the Caspian Sea, the Korean Peninsula, Shuidong Bay, Norfolk, Rhode Island, Florida, Accra, and Hawaii.
Criticisms.
The effect of climate change on beaches is challenging to accurately model, as it is an interdisciplinary subject that involves ocean, earth, and atmospheric science as well as civil engineering and policy. Reliable coastal climate change impact assessments are needed to underpin effective strategies of adaptation in order to prepare growing coastal communities and high value coastal assets. As a result, models for estimating coastal erosion as a result of sea level rise - including the Bruun rule and models based on the Bruun rule - are constantly being reviewed and updated. The Bruun rule has become the centre of much academic debate.
In response to its inherent assumptions, the Bruun rule has been widely criticised. Some of the rule's most criticised assumptions include the nonexistence of gradients in longshore sediment transport, the existence of a depth of closure, a closed sediment budget, and the availability of sufficient sand sources. In 2015, Andersen et al. labelled the Bruun rule as "on its own ... virtually unusable in open-ocean coastal environments" due to its assumptions about the physical environmental setting.
One prominently criticised assumption of the Bruun rule is that it treats the net effects of longshore transport as negligible, since the rule is by definition a two-dimensional cross-shore model that does not account for the longshore third dimension. These longshore effects can, however, be the major cause of sediment erosion or deposition along beaches, dominating shoreline morphology and even masking the impacts of sea level rise as described by the Bruun model.
Another criticised assumption is the existence of a 'depth of closure'. The depth of closure is considered to be the water depth beyond which there are no significant changes in bed level, and is usually taken as the boundary between the upper shoreface, characterised by breaking waves and bars, and the lower shoreface, characterised by nonbreaking waves and a lack of bars. The Bruun rule stipulates that there is no significant sediment transfer across this boundary, however the strength of this concept in practice is debated. The alternative R-DA model, proposed by Davidson-Arnott, is based on the same assumptions as the Bruun rule, except it recognises significant sediment transfer between the upper and lower shoreface, hypothesising that sediment is eroded from the lower shoreface and transported to the upper shoreface to maintain an equilibrium profile, and that as a result there is an upward and landward migration of the depth of closure with sea level rise.
Cooper and Pilkey have been direct in their criticism of the Bruun rule, publishing a paper in 2004 titled "Sea-level rise and shoreline retreat: time to abandon the Bruun Rule", which is regularly cited in ensuing literature. They argue that despite widespread criticism, the original Bruun rule continues to be applied in contexts that Bruun himself and the experimental record mark as inappropriate, and that the Bruun rule and its accompanying controversial assumptions are embedded in later models claiming to offer more sophisticated insights into coastal behaviour. Cooper and Pilkey describe the use of the Bruun rule as "a "one model fits all" approach" to a range of complex coastal environments which it is unfit to describe. They list three main reasons that the rule "does not work": its restrictive assumptions, the omission of important variables, and its reliance on outdated and erroneous concepts. The restrictive assumptions are a closed materials balance, which ignores net longshore transport, and the lack of an accretionary component, which assumes that sea level rise always results in beach recession; together these limit the rule's use to a small number of coasts. The omitted variables they list are the presence of outcrops or bottom currents, the effect of continental shelf slope on retreat rate, site-specific feedback relationships, and the highly variable patterns of coastal evolution at millennial time scales, while the outdated and erroneous concepts relied upon are a universal equilibrium profile theory, a closure depth, and the unsupported idea that shoreface steepness has an effect on the rate of shore retreat. They "conclude that [the Bruun rule] has outlived its usefulness and should be abandoned"; however, they have been criticised for providing no "meaningful alternative" to the rule.
In his papers, Bruun did not present a rigorous mathematical derivation for his rule, which has caused confusion in the research community. For example, Rosen in 1978, Allison and Schwartz in 1981, Dean and Maurmeyer in 1983, and Zhang, Douglas and Leatherman in 2004 have all mathematically derived the Bruun rule differently, with disagreement on the assumptions and limitations of the Bruun rule unique to their own derivations. However, the last of these, by Zhang et al., presents an alternative derivation showing that "even though very simple, the Bruun model has considerable generality".
Some field and laboratory tests have supported the Bruun rule, although these publications have been criticised for claimed experimental flaws. Amongst others, Ranasinghe and Stive in 2009, and later Andersen et al. in 2015, have concluded that "no study has produced comprehensive, well-accepted verification of the Bruun model". However, there is a near consensus that the basic qualitative model of shoreline recession is valid, despite quantitative data gleaned from the Bruun rule being dubbed "very coarse approximations" or "broadly indicative estimates". Despite these criticisms, the Bruun rule is credited for its simplicity, and there remains "no simple, viable alternative".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R = \\frac{SL}{h+B} = \\frac{S}{\\tan\\beta}"
}
] |
https://en.wikipedia.org/wiki?curid=58490041
|
58495269
|
Self-similar solution
|
Concept in partial differential equations
In the study of partial differential equations, particularly in fluid dynamics, a self-similar solution is a form of solution which is similar to itself if the independent and dependent variables are appropriately scaled. Self-similar solutions appear whenever the problem lacks a characteristic length or time scale (for example, the Blasius boundary layer of an infinite plate, but not of a finite-length plate). These include, for example, the Blasius boundary layer or the Sedov–Taylor shell.
Concept.
A powerful tool in physics is the concept of dimensional analysis and scaling laws. By examining the physical effects present in a system, we may estimate their size and hence determine which, for example, may be neglected. In some cases, the system may have no fixed natural length or time scale, even though the solution depends on space or time. It is then necessary to construct a scale using space or time and the other dimensional quantities present, such as the viscosity formula_0. These constructs are not 'guessed' but are derived immediately from the scaling of the governing equations.
Classification.
The normal self-similar solution is also referred to as a self-similar solution of the first kind, since another type of self-similar solution exists for finite-sized problems, one which cannot be derived from dimensional analysis and is known as a self-similar solution of the second kind.
Self-similar solution of the second kind.
The early identification of self-similar solutions of the second kind can be found in problems of imploding shock waves (Guderley–Landau–Stanyukovich problem), analyzed by G. Guderley (1942) and Lev Landau and K. P. Stanyukovich (1944), and propagation of shock waves by a short impulse, analysed by Carl Friedrich von Weizsäcker and Yakov Borisovich Zel'dovich (1956), who also classified it as the second kind for the first time. A complete description was made in 1972 by Grigory Barenblatt and Yakov Borisovich Zel'dovich. The self-similar solution of the second kind also appears in different contexts such as in boundary-layer problems subjected to small perturbations, as was identified by Keith Stewartson, Paul A. Libby and Herbert Fox. Moffatt eddies are also a self-similar solution of the second kind.
Examples.
Rayleigh problem.
A simple example is a semi-infinite domain bounded by a rigid wall and filled with viscous fluid. At time formula_1 the wall is made to move with constant speed formula_2 in a fixed direction (for definiteness, say the formula_3 direction; consider only the formula_4 plane). One can see that there is no distinguished length scale given in the problem. This is known as the Rayleigh problem. The no-slip boundary condition is
formula_5
Also, the condition that the plate has no effect on the fluid at infinity is enforced as
formula_6
Now, from the Navier-Stokes equations
formula_7
one can observe that this flow will be rectilinear, with gradients in the formula_8 direction and flow in the formula_3 direction, and that the pressure term will have no tangential component so that
formula_9. The formula_3 component of the Navier-Stokes equations then becomes
formula_10
and the scaling arguments can be applied to show that
formula_11
which gives the scaling of the formula_8 co-ordinate as
formula_12.
This allows one to pose a self-similar ansatz such that, with formula_13 and formula_14 dimensionless,
formula_15
The above contains all the relevant physics, and the next step is to solve the governing equation, which in many cases requires numerical methods. Here the equation reduces to
formula_16
with solution satisfying the boundary conditions that
formula_17
which is a self-similar solution of the first kind.
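The self-similarity of this solution can be checked numerically: velocity profiles taken at different times collapse onto a single curve when expressed in terms of the similarity variable. A short Python sketch, using SciPy's error function and purely illustrative parameter values:

```python
import numpy as np
from scipy.special import erf

def rayleigh_velocity(y, t, U=1.0, nu=1e-6):
    """Velocity above an impulsively started plate:
    u = U * (1 - erf(y / sqrt(4 * nu * t)))."""
    return U * (1.0 - erf(y / np.sqrt(4.0 * nu * t)))

# Profiles at different times agree when plotted against eta = y / sqrt(nu * t).
nu = 1e-6
eta = np.linspace(0.0, 4.0, 9)
for t in (1.0, 10.0, 100.0):
    y = eta * np.sqrt(nu * t)                    # physical coordinate at this time
    u = rayleigh_velocity(y, t, U=1.0, nu=nu)
    print(np.allclose(u, 1.0 - erf(eta / 2.0)))  # True for every t
```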
Semi-infinite solid approximation.
In transient heat transfer applications, such as impingement heating on a ship deck during missile launches and the sizing of thermal protection systems, self-similar solutions can be found for semi-infinite solids. The governing equation when heat conduction is the primary heat transfer mechanism is the one-dimensional energy equation:
formula_18
where formula_19 is the material's density, formula_20 is the material's specific heat capacity, and formula_21 is the material's thermal conductivity. In the case when the material is assumed to be homogeneous and its properties constant, the energy equation is reduced to the heat equation:
formula_22
with formula_23 being the thermal diffusivity. By introducing the similarity variable formula_24 and assuming that formula_25, the PDE can be transformed into the ODE:
formula_26
If a simple model of thermal protection system sizing is assumed, where decomposition, pyrolysis gas flow, and surface recession are ignored, with the initial temperature formula_27 and a constant surface temperature formula_28, then the ODE can be solved for the temperature at a depth formula_3 and time formula_29:
formula_30
where formula_31 is the error function.
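A short Python sketch evaluating this error-function solution, with illustrative material values not taken from any particular application:

```python
import numpy as np
from scipy.special import erf

def semi_infinite_temperature(x, t, T_i, T_s, alpha):
    """T(t, x) = erf(x / (2*sqrt(alpha*t))) * (T_i - T_s) + T_s
    for a semi-infinite solid whose surface is held at T_s."""
    return erf(x / (2.0 * np.sqrt(alpha * t))) * (T_i - T_s) + T_s

# Illustrative values: alpha = 1e-6 m^2/s, initial temperature 300 K,
# surface suddenly held at 1000 K; temperatures after 60 s at several depths.
alpha, T_i, T_s = 1e-6, 300.0, 1000.0
for depth_mm in (1.0, 5.0, 10.0):
    x = depth_mm * 1e-3
    print(depth_mm, semi_infinite_temperature(x, t=60.0, T_i=T_i, T_s=T_s, alpha=alpha))
```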
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\nu"
},
{
"math_id": 1,
"text": "t=0"
},
{
"math_id": 2,
"text": "U"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "x-y"
},
{
"math_id": 5,
"text": "u{(y\\!=\\!0)} = U"
},
{
"math_id": 6,
"text": "u{(y\\!\\to\\!\\infty)} = 0."
},
{
"math_id": 7,
"text": "\\rho \\left( \\dfrac{\\partial \\vec{u}}{\\partial t} + \\vec{u} \\cdot \\nabla \\vec{u} \\right) =- \\nabla p + \\mu \\nabla^{2} \\vec{u}"
},
{
"math_id": 8,
"text": "y"
},
{
"math_id": 9,
"text": "\\dfrac{\\partial p}{\\partial y} = 0"
},
{
"math_id": 10,
"text": "\\dfrac{\\partial \\vec{u}}{\\partial t} = \\nu \\partial^{2}_{y} \\vec{u}"
},
{
"math_id": 11,
"text": " \\frac{U}{t} \\sim \\nu \\frac{U}{y^{2}}"
},
{
"math_id": 12,
"text": "y \\sim (\\nu t)^{1/2}"
},
{
"math_id": 13,
"text": "f"
},
{
"math_id": 14,
"text": "\\eta"
},
{
"math_id": 15,
"text": "u = U f \\left( \\eta \\equiv \\dfrac{y}{(\\nu t)^{1/2}} \\right)"
},
{
"math_id": 16,
"text": "- \\eta f'/2 = f''"
},
{
"math_id": 17,
"text": "f = 1 - \\operatorname{erf} (\\eta / 2) \\quad \\text{ or } \\quad u = U \\left(1 - \\operatorname{erf} \\left(y / (4 \\nu t)^{1/2} \\right)\\right)"
},
{
"math_id": 18,
"text": "\\rho c_{p} \\frac{\\partial T}{\\partial t} = \\frac{\\partial}{\\partial x}\\left( k \\frac{\\partial T}{\\partial x} \\right)"
},
{
"math_id": 19,
"text": "\\rho"
},
{
"math_id": 20,
"text": "c_{p}"
},
{
"math_id": 21,
"text": "k"
},
{
"math_id": 22,
"text": "\\frac{\\partial T}{\\partial t} = \\alpha \\frac{\\partial^{2}T}{\\partial x^{2}}, \\quad \\alpha = \\frac{k}{\\rho c_{p}}"
},
{
"math_id": 23,
"text": "\\alpha"
},
{
"math_id": 24,
"text": "\\eta = x/\\sqrt{t}"
},
{
"math_id": 25,
"text": "T(t,x) = f(\\eta)"
},
{
"math_id": 26,
"text": "f''(\\eta) + \\frac{1}{2\\alpha}\\eta f'(\\eta) = 0"
},
{
"math_id": 27,
"text": "T(0,x) = f(\\infty) = T_{i}"
},
{
"math_id": 28,
"text": "T(t,0) = f(0) = T_{s}"
},
{
"math_id": 29,
"text": "t"
},
{
"math_id": 30,
"text": "T(t,x) = \\text{erf}\\left( \\frac{x}{2\\sqrt{\\alpha t}} \\right) \\left( T_{i} - T_{s} \\right) + T_{s}"
},
{
"math_id": 31,
"text": "\\text{erf}(\\cdot)"
}
] |
https://en.wikipedia.org/wiki?curid=58495269
|
58496647
|
Nicolosi globular projection
|
The Nicolosi globular projection is a polyconic map projection invented about the year 1000 by the Iranian polymath al-Biruni. As a circular representation of a hemisphere, it is called "globular" because it evokes a globe. It can only display one hemisphere at a time and so normally appears as a "double hemispheric" presentation in world maps. The projection came into use in the Western world starting in 1660, reaching its most common use in the 19th century. As a "compromise" projection, it preserves no particular properties, instead giving a balance of distortions.
History.
Abū Rayḥān Muḥammad ibn Aḥmad Al-Bīrūnī, who was the foremost Muslim scholar of the Islamic Golden Age, invented the first recorded globular projection for use in celestial maps about the year 1000. Centuries later, as Europe entered its Age of Discovery, the demand for world maps increased rapidly, sparking a vast experimentation with diverse map projections. Globular projections were one category that received early attention, with inventions by Roger Bacon in the 13th century, Petrus Apianus in the 16th century, and also in the 16th century by French Jesuit priest Georges Fournier. In 1660, Giovanni Battista Nicolosi, a Sicilian chaplain in Rome, reinvented Al-Biruni's projection as a modification of Fournier's first projection. It is unlikely Nicolosi knew of al-Biruni's work, and Nicolosi's name is the one usually associated with the projection.
Nicolosi published a set of maps on the projection, one of the world in two hemispheres, and one each for the five known continents. Maps using the same projection appeared occasionally over the ensuing centuries, becoming relatively common in the 19th century as the stereographic projection fell out of common use for this purpose. Use of the Nicolosi projection continued into the early 20th century. It is rarely seen today.
Description.
The construction of the Nicolosi globular projection is fairly simple with compasses and straightedge.
Given a bounding circle to fit the map into, the poles are placed at the top and bottom of the circle, and the central meridian of the desired hemisphere is drawn as a straight vertical diameter between them.
The equator is drawn as a straight horizontal diameter.
Each remaining meridian is drawn as a circular arc going through both poles and the equator,
such that meridians are equally spaced along the equator.
Each remaining parallel is also drawn as a circular arc from the left edge through the central meridian to the right edge of the circle,
such that the parallels are equally spaced around the perimeter of the circle and also equally spaced along the central meridian.
A hemisphere shown with the Nicolosi globular projection closely resembles a hemisphere shown with the azimuthal equidistant projection centered on the same point.
In both projections of that hemisphere, the meridians are equally spaced along the equator, and the parallels are equally spaced along the central meridian and also equally spaced along the perimeter of the circle.
Nicolosi developed the projection as a drafting technique. Translating that into mathematical formulae yields:
formula_0
Here, formula_1 is the latitude, formula_2 is the longitude, formula_3 is the central longitude for the hemisphere, and formula_4 is the radius of the globe to be projected.
In the formula for formula_5, the formula_6 sign takes the sign of formula_7, i.e. take the positive root if formula_7 is positive, or the negative root if formula_7 is negative.
In the formula for formula_8, the formula_6 sign takes the opposite sign of formula_1, i.e. take the positive root if formula_1 is negative, or the negative root if formula_1 is positive.
Under certain circumstances, the full formulae fail. Use the following formulae instead:
When formula_9,
formula_10
When formula_11,
formula_12
When formula_13,
formula_14
When formula_15,
formula_10
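The formulae above, together with the special cases, translate directly into code. The following Python sketch is an illustrative implementation for a point given in radians; the function name and test values are arbitrary.

```python
import math

def nicolosi_globular(lat, lon, lon0, R=1.0, eps=1e-9):
    """Nicolosi globular projection of (lat, lon), in radians, on the
    hemisphere centred at longitude lon0; returns map coordinates (x, y)."""
    dlon = lon - lon0

    # Special cases where the general formulae are singular.
    if abs(dlon) < eps or abs(abs(lat) - math.pi / 2) < eps:
        return 0.0, R * lat
    if abs(lat) < eps:
        return R * dlon, 0.0
    if abs(abs(dlon) - math.pi / 2) < eps:
        return R * dlon * math.cos(lat), math.pi / 2 * R * math.sin(lat)

    # General case.
    b = math.pi / (2.0 * dlon) - 2.0 * dlon / math.pi
    c = 2.0 * lat / math.pi
    d = (1.0 - c * c) / (math.sin(lat) - c)
    r2 = (b / d) ** 2                        # b^2 / d^2
    M = (b * math.sin(lat) / d - b / 2.0) / (1.0 + r2)
    N = (d * d * math.sin(lat) / (b * b) + d / 2.0) / (1.0 + 1.0 / r2)
    sign_x = 1.0 if dlon > 0 else -1.0       # x-root takes the sign of (lon - lon0)
    sign_y = -1.0 if lat > 0 else 1.0        # y-root takes the opposite sign of lat
    x = (math.pi / 2.0) * R * (M + sign_x * math.sqrt(M * M + math.cos(lat) ** 2 / (1.0 + r2)))
    y = (math.pi / 2.0) * R * (N + sign_y * math.sqrt(
        N * N - (math.sin(lat) ** 2 / r2 + d * math.sin(lat) - 1.0) / (1.0 + 1.0 / r2)))
    return x, y

# A point at 30 degrees north, 40 degrees east of the central meridian.
print(nicolosi_globular(math.radians(30), math.radians(40), 0.0))
```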
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\begin{align}\nb &= \\frac{\\pi}{2 \\left(\\lambda-\\lambda_0 \\right)} - \\frac{2 \\left(\\lambda - \\lambda_0 \\right)}{\\pi} \\\\\nc &= \\frac{2 \\varphi}{\\pi} \\\\\nd &= \\frac{1 - c^2}{\\sin \\varphi - c} \\\\\nM &= \\frac{\\frac{b \\sin \\varphi}{d} - \\frac{b}{2}}{1+\\frac{b^2}{d^2}} \\\\\nN &= \\frac{\\frac{d^2 \\sin \\varphi}{b^2} + \\frac{d}{2}}{1+\\frac{d^2}{b^2}} \\\\\nx &= \\frac{\\pi}{2} R \\left(M \\pm \\sqrt{M^2 + \\frac{\\cos^2 \\varphi}{1 + \\frac{b^2}{d^2}}}\\right) \\\\\ny &= \\frac{\\pi}{2} R \\left(N \\pm \\sqrt{N^2 - \\frac{\\frac{d^2}{b^2}\\sin^2 \\varphi + d \\sin \\varphi - 1}{1 + \\frac{d^2}{b^2}}} \\right)\n\\end{align}"
},
{
"math_id": 1,
"text": "\\varphi"
},
{
"math_id": 2,
"text": "\\lambda"
},
{
"math_id": 3,
"text": "\\lambda_0"
},
{
"math_id": 4,
"text": "R"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "\\pm"
},
{
"math_id": 7,
"text": "\\lambda-\\lambda_0"
},
{
"math_id": 8,
"text": "y"
},
{
"math_id": 9,
"text": "\\lambda-\\lambda_0 = 0"
},
{
"math_id": 10,
"text": "\\begin{align}\nx &= 0 \\\\\ny &= R \\varphi\n\\end{align}"
},
{
"math_id": 11,
"text": "\\varphi = 0"
},
{
"math_id": 12,
"text": "\\begin{align}\nx &= R \\left(\\lambda - \\lambda_0 \\right) \\\\\ny &= 0\n\\end{align}"
},
{
"math_id": 13,
"text": "|\\lambda - \\lambda_0| = \\frac{\\pi}{2}"
},
{
"math_id": 14,
"text": "\\begin{align}\nx &= R \\left(\\lambda - \\lambda_0 \\right) \\cos \\varphi \\\\\ny &= \\frac{\\pi}{2} R \\sin \\varphi\n\\end{align}"
},
{
"math_id": 15,
"text": "|\\varphi| = \\frac{\\pi}{2}"
}
] |
https://en.wikipedia.org/wiki?curid=58496647
|
58498
|
Grover's algorithm
|
Quantum search algorithm
In quantum computing, Grover's algorithm, also known as the quantum search algorithm, is a quantum algorithm for unstructured search that finds with high probability the unique input to a black box function that produces a particular output value, using just formula_0 evaluations of the function, where formula_1 is the size of the function's domain. It was devised by Lov Grover in 1996.
The analogous problem in classical computation cannot be solved in fewer than formula_2 evaluations (because, on average, one has to check half of the domain to get a 50% chance of finding the right input). Charles H. Bennett, Ethan Bernstein, Gilles Brassard, and Umesh Vazirani proved that any quantum solution to the problem needs to evaluate the function formula_3 times, so Grover's algorithm is asymptotically optimal. Since classical algorithms for NP-complete problems require exponentially many steps, and Grover's algorithm provides at most a quadratic speedup over the classical solution for unstructured search, this suggests that Grover's algorithm by itself will not provide polynomial-time solutions for NP-complete problems (as the square root of an exponential function is an exponential, not polynomial, function).
Unlike other quantum algorithms, which may provide exponential speedup over their classical counterparts, Grover's algorithm provides only a quadratic speedup. However, even quadratic speedup is considerable when formula_1 is large, and Grover's algorithm can be applied to speed up broad classes of algorithms. Grover's algorithm could brute-force a 128-bit symmetric cryptographic key in roughly 2^64 iterations, or a 256-bit key in roughly 2^128 iterations. It may not be the case that Grover's algorithm poses a significantly increased risk to encryption over existing classical algorithms, however.
Applications and limitations.
Grover's algorithm, along with variants like amplitude amplification, can be used to speed up a broad range of algorithms. In particular, algorithms for NP-complete problems which contain exhaustive search as a subroutine can be sped up by Grover's algorithm. The current theoretical best algorithm, in terms of worst-case complexity, for 3SAT is one such example. Generic constraint satisfaction problems also see quadratic speedups with Grover. These algorithms do not require that the input be given in the form of an oracle, since Grover's algorithm is being applied with an explicit function, e.g. the function checking that a set of bits satisfies a 3SAT instance. However, it is unclear whether Grover's algorithm could speed up best practical algorithms for these problems.
Grover's algorithm can also give provable speedups for black-box problems in quantum query complexity, including element distinctness and the collision problem (solved with the Brassard–Høyer–Tapp algorithm). In these types of problems, one treats the oracle function "f" as a database, and the goal is to use the quantum query to this function as few times as possible.
Cryptography.
Grover's algorithm essentially solves the task of "function inversion". Roughly speaking, if we have a function formula_4 that can be evaluated on a quantum computer, Grover's algorithm allows us to calculate formula_5 when given formula_6. Consequently, Grover's algorithm gives broad asymptotic speed-ups to many kinds of brute-force attacks on symmetric-key cryptography, including collision attacks and pre-image attacks. However, this may not necessarily be the most efficient algorithm since, for example, the parallel rho algorithm is able to find a collision in SHA2 more efficiently than Grover's algorithm.
Limitations.
Grover's original paper described the algorithm as a database search algorithm, and this description is still common. The database in this analogy is a table of all of the function's outputs, indexed by the corresponding input. However, this database is not represented explicitly. Instead, an oracle is invoked to evaluate an item by its index. Reading a full database item by item and converting it into such a representation may take a lot longer than Grover's search. To account for such effects, Grover's algorithm can be viewed as solving an equation or satisfying a constraint. In such applications, the oracle is a way to check the constraint and is not related to the search algorithm. This separation usually prevents algorithmic optimizations, whereas conventional search algorithms often rely on such optimizations and avoid exhaustive search. Fortunately, a fast Grover oracle implementation is possible for many constraint satisfaction and optimization problems.
The major barrier to instantiating a speedup from Grover's algorithm is that the quadratic speedup achieved is too modest to overcome the large overhead of near-term quantum computers. However, later generations of fault-tolerant quantum computers with better hardware performance may be able to realize these speedups for practical instances of data.
Problem description.
As input for Grover's algorithm, suppose we have a function formula_7. In the "unstructured database" analogy, the domain represents indices to a database, and "f"("x") = 1 if and only if the data that "x" points to satisfies the search criterion. We additionally assume that only one index satisfies "f"("x") = 1, and we call this index "ω". Our goal is to identify "ω".
We can access "f" with a subroutine (sometimes called an oracle) in the form of a unitary operator "Uω" that acts as follows:
formula_8
This uses the formula_1-dimensional state space formula_9, which is supplied by a register with formula_10 qubits.
This is often written as
formula_11
Grover's algorithm outputs "ω" with probability at least "1/2" using formula_0 applications of "Uω". This probability can be made arbitrarily large by running Grover's algorithm multiple times. If one runs Grover's algorithm until "ω" is found, the expected number of applications is still formula_0, since it will only be run twice on average.
Alternative oracle definition.
This section compares the above oracle formula_12 with an oracle formula_13.
"Uω" is different from the standard quantum oracle for a function "f". This standard oracle, denoted here as "Uf", uses an ancillary qubit system. The operation then represents an inversion (NOT gate) on the main system conditioned by the value of "f"("x") from the ancillary system:
formula_14
or briefly,
formula_15
These oracles are typically realized using uncomputation.
If we are given "Uf" as our oracle, then we can also implement "Uω", since "Uω" is "Uf" when the ancillary qubit is in the state formula_16:
formula_17
So, Grover's algorithm can be run regardless of which oracle is given. If "Uf" is given, then we must maintain an additional qubit in the state formula_18 and apply "Uf" in place of "Uω".
Algorithm.
The steps of Grover's algorithm are given as follows: first, initialize the system to the uniform superposition over all states, formula_19; then perform the "Grover iteration" formula_20 times, where each iteration consists of applying the oracle operator formula_12 followed by the Grover diffusion operator formula_21; finally, measure the resulting quantum state in the computational basis.
For the correctly chosen value of formula_22, the output will be formula_23 with probability approaching 1 for "N" ≫ 1. Analysis shows that this eventual value for formula_20 satisfies formula_24.
Implementing the steps for this algorithm can be done using a number of gates linear in the number of qubits. Thus, the gate complexity of this algorithm is formula_25, or formula_26 per iteration.
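For small registers the algorithm is easy to simulate classically on the state vector, which makes the amplitude amplification visible directly. The following Python sketch (using NumPy, with illustrative parameters) applies the sign-flip oracle and the diffusion operator formula_21 the prescribed number of times and reports the success probability:

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Simulate Grover's algorithm on an n_qubits-qubit state vector with a
    single marked index, returning the final success probability."""
    N = 2 ** n_qubits
    s = np.full(N, 1.0 / np.sqrt(N))           # uniform superposition |s>
    state = s.copy()
    r = int(np.floor(np.pi / 4 * np.sqrt(N)))  # number of Grover iterations
    for _ in range(r):
        state[marked] *= -1.0                  # oracle U_w flips the sign of |w>
        state = 2.0 * s * (s @ state) - state  # diffusion U_s = 2|s><s| - I
    return state[marked] ** 2

# With 10 qubits (N = 1024), 25 iterations find the marked item with
# probability around 0.999, versus ~512 classical queries on average.
print(grover_search(10, marked=123))
```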
Geometric proof of correctness.
There is a geometric interpretation of Grover's algorithm, following from the observation that the quantum state of Grover's algorithm stays in a two-dimensional subspace after each step. Consider the plane spanned by formula_27 and formula_23; equivalently, the plane spanned by formula_23 and the perpendicular ket formula_28.
Grover's algorithm begins with the initial ket formula_27, which lies in the subspace. The operator formula_29 is a reflection at the hyperplane orthogonal to formula_23 for vectors in the plane spanned by formula_30 and formula_23, i.e. it acts as a reflection across formula_30. This can be seen by writing formula_12 in the form of a Householder reflection:
formula_31
The operator formula_32 is a reflection through formula_27. Both operators formula_33 and formula_29 take states in the plane spanned by formula_30 and formula_23 to states in the plane. Therefore, Grover's algorithm stays in this plane for the entire algorithm.
It is straightforward to check that the operator formula_34 of each Grover iteration step rotates the state vector by an angle of formula_35.
So, with enough iterations, one can rotate from the initial state formula_27 to the desired output state formula_23. The initial ket is close to the state orthogonal to formula_23:
formula_36
In geometric terms, the angle formula_37 between formula_27 and formula_30 is given by
formula_38
We need to stop when the state vector passes close to formula_23; after this, subsequent iterations rotate the state vector "away" from formula_23, reducing the probability of obtaining the correct answer. The exact probability of measuring the correct answer is
formula_39
where "r" is the (integer) number of Grover iterations. The earliest time that we get a near-optimal measurement is therefore formula_40.
Algebraic proof of correctness.
To complete the algebraic analysis, we need to find out what happens when we repeatedly apply formula_41. A natural way to do this is by eigenvalue analysis of a matrix. Notice that during the entire computation, the state of the algorithm is a linear combination of formula_42 and formula_43. We can write the action of formula_33 and formula_12 in the space spanned by formula_44 as:
formula_45
formula_46
So in the basis formula_47 (which is neither orthogonal nor a basis of the whole space) the action formula_48 of applying formula_12 followed by formula_33 is given by the matrix
formula_49
This matrix happens to have a very convenient Jordan form. If we define formula_50, it is
formula_51 where formula_52
It follows that "r"-th power of the matrix (corresponding to "r" iterations) is
formula_53
Using this form, we can use trigonometric identities to compute the probability of observing "ω" after "r" iterations mentioned in the previous section,
formula_54
Alternatively, one might reasonably imagine that a near-optimal time to distinguish would be when the angles 2"rt" and −2"rt" are as far apart as possible, which corresponds to formula_55, or formula_56. Then the system is in state
formula_57
A short calculation now shows that the observation yields the correct answer "ω" with error formula_58.
Extensions and variants.
Multiple matching entries.
If, instead of 1 matching entry, there are "k" matching entries, the same algorithm works, but the number of iterations must be formula_59instead of formula_60
There are several ways to handle the case if "k" is unknown. A simple solution performs optimally up to a constant factor: run Grover's algorithm repeatedly for increasingly small values of "k", e.g., taking "k" = "N", "N"/2, "N"/4, ..., and so on, taking formula_61 for iteration "t" until a matching entry is found.
With sufficiently high probability, a marked entry will be found by iteration formula_62 for some constant "c". Thus, the total number of iterations taken is at most
formula_63
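A small Python sketch of this schedule, with illustrative numbers, shows that the total number of iterations stays within a constant factor of the formula_59 iterations needed when "k" is known in advance:

```python
import math

def grover_iterations(N, k):
    """Iteration count pi/4 * sqrt(N/k) for k marked items among N."""
    return math.floor(math.pi / 4 * math.sqrt(N / k))

# Guess k = N, N/2, N/4, ... and run the corresponding number of iterations
# each round; stop once the guess falls at or below the true (unknown) k,
# by which point a marked item has been found with high probability.
N, true_k = 1 << 20, 37
guess, total = N, 0
while True:
    total += grover_iterations(N, guess)
    if guess <= true_k:
        break
    guess //= 2
print(total, grover_iterations(N, true_k))   # roughly 480 vs 132 iterations
```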
Another approach if "k" is unknown is to derive it via the quantum counting algorithm prior.
If formula_64 (or, for the traditional single-marked-state Grover's algorithm, if formula_65), the algorithm provides no amplification. If formula_66, increasing "k" will begin to increase the number of iterations necessary to obtain a solution. On the other hand, if formula_67, a classical run of the checking oracle on a single random choice of input will more likely than not give a correct solution.
A version of this algorithm is used in order to solve the collision problem.
Quantum partial search.
A modification of Grover's algorithm called quantum partial search was described by Grover and Radhakrishnan in 2004. In partial search, one is not interested in finding the exact address of the target item, only the first few digits of the address. Equivalently, we can think of "chunking" the search space into blocks, and then asking "in which block is the target item?". In many applications, such a search yields enough information if the target address contains the information wanted. For instance, to use the example given by L. K. Grover, if one has a list of students organized by class rank, we may only be interested in whether a student is in the lower 25%, 25–50%, 50–75% or 75–100% percentile.
To describe partial search, we consider a database separated into formula_68 blocks, each of size formula_69. The partial search problem is easier. Consider the approach we would take classically – we pick one block at random, and then perform a normal search through the rest of the blocks (in set theory language, the complement). If we don't find the target, then we know it's in the block we didn't search. The average number of iterations drops from formula_70 to formula_71.
Grover's algorithm requires formula_72 iterations. Partial search will be faster by a numerical factor that depends on the number of blocks formula_68. Partial search uses formula_73 global iterations and formula_74 local iterations. The global Grover operator is designated formula_75 and the local Grover operator is designated formula_76.
The global Grover operator acts on the blocks. Essentially, it is given as follows:
The optimal values of formula_77 and formula_78 are discussed in the paper by Grover and Radhakrishnan. One might also wonder what happens if one applies successive partial searches at different levels of "resolution". This idea was studied in detail by Vladimir Korepin and Xu, who called it binary quantum search. They proved that it is not in fact any faster than performing a single partial search.
Optimality.
Grover's algorithm is optimal up to sub-constant factors. That is, any algorithm that accesses the database only by using the operator "Uω" must apply "Uω" at least a formula_79 fraction as many times as Grover's algorithm. The extension of Grover's algorithm to "k" matching entries, π("N"/"k")^(1/2)/4, is also optimal. This result is important in understanding the limits of quantum computation.
If Grover's search problem were solvable with log^"c" "N" applications of "Uω", that would imply that NP is contained in BQP, by transforming problems in NP into Grover-type search problems. The optimality of Grover's algorithm suggests that quantum computers cannot solve NP-Complete problems in polynomial time, and thus NP is not contained in BQP.
It has been shown that a class of non-local hidden variable quantum computers could implement a search of an formula_1-item database in at most formula_80 steps. This is faster than the formula_0 steps taken by Grover's algorithm.
|
[
{
"math_id": 0,
"text": "O(\\sqrt{N})"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "O(N)"
},
{
"math_id": 3,
"text": "\\Omega(\\sqrt{N})"
},
{
"math_id": 4,
"text": "y = f(x)"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "y"
},
{
"math_id": 7,
"text": "f\\colon \\{0,1,\\ldots,N-1\\} \\to \\{0,1\\}"
},
{
"math_id": 8,
"text": "\\begin{cases}\n U_\\omega |x\\rang = -|x\\rang & \\text{for } x = \\omega \\text{, that is, } f(x) = 1, \\\\\n U_\\omega |x\\rang = |x\\rang & \\text{for } x \\ne \\omega \\text{, that is, } f(x) = 0.\n\\end{cases}"
},
{
"math_id": 9,
"text": "\\mathcal{H}"
},
{
"math_id": 10,
"text": "n = \\lceil \\log_{2} N \\rceil"
},
{
"math_id": 11,
"text": "U_\\omega|x\\rang = (-1)^{f(x)}|x\\rang."
},
{
"math_id": 12,
"text": "U_\\omega"
},
{
"math_id": 13,
"text": "U_f"
},
{
"math_id": 14,
"text": "\\begin{cases}\n U_f |x\\rang |y\\rang = |x\\rang |\\neg y\\rang & \\text{for } x = \\omega \\text{, that is, } f(x) = 1, \\\\\n U_f |x\\rang |y\\rang = |x\\rang |y\\rang & \\text{for } x \\ne \\omega \\text{, that is, } f(x) = 0,\n\\end{cases}"
},
{
"math_id": 15,
"text": "\nU_f |x\\rang |y\\rang = |x\\rang |y \\oplus f(x)\\rang.\n"
},
{
"math_id": 16,
"text": "|-\\rang = \\frac1{\\sqrt2}\\big(|0\\rang - |1\\rang\\big) = H|1\\rang"
},
{
"math_id": 17,
"text": "\n\\begin{align}\nU_f \\big( |x\\rang \\otimes |-\\rang \\big)\n&= \\frac1{\\sqrt2} \\left( U_f |x\\rang |0\\rang - U_f |x\\rang |1\\rang \\right)\\\\\n&= \\frac1{\\sqrt2} \\left(|x\\rang |f(x)\\rang - |x\\rang |1 \\oplus f(x)\\rang \\right)\\\\\n&= \\begin{cases}\n \\frac1{\\sqrt2} \\left(-|x\\rang |0\\rang + |x\\rang |1\\rang\\right) & \\text{if } f(x) = 1, \\\\\n \\frac1{\\sqrt2} \\left( |x\\rang |0\\rang - |x\\rang |1\\rang \\right) & \\text{if } f(x) = 0\n\\end{cases} \\\\\n&= (U_\\omega |x\\rang) \\otimes |-\\rang\n\\end{align}\n"
},
{
"math_id": 18,
"text": "|-\\rang"
},
{
"math_id": 19,
"text": "|s\\rangle = \\frac{1}{\\sqrt{N}} \\sum_{x=0}^{N-1} |x\\rangle. "
},
{
"math_id": 20,
"text": "r(N)"
},
{
"math_id": 21,
"text": "U_s = 2 \\left|s\\right\\rangle \\left\\langle s\\right| - I"
},
{
"math_id": 22,
"text": "r"
},
{
"math_id": 23,
"text": "|\\omega\\rang"
},
{
"math_id": 24,
"text": "r(N) \\leq \\Big\\lceil\\frac{\\pi}{4}\\sqrt{N}\\Big\\rceil"
},
{
"math_id": 25,
"text": "O(\\log(N)r(N))"
},
{
"math_id": 26,
"text": "O(\\log(N))"
},
{
"math_id": 27,
"text": "|s\\rang"
},
{
"math_id": 28,
"text": "\\textstyle |s'\\rang = \\frac{1}{\\sqrt{N - 1}}\\sum_{x \\neq \\omega} |x\\rang"
},
{
"math_id": 29,
"text": "U_{\\omega}"
},
{
"math_id": 30,
"text": "|s'\\rang"
},
{
"math_id": 31,
"text": " U_\\omega = I - 2|\\omega\\rangle\\langle \\omega|."
},
{
"math_id": 32,
"text": " U_s = 2 |s\\rangle \\langle s| - I"
},
{
"math_id": 33,
"text": "U_s"
},
{
"math_id": 34,
"text": "U_s U_{\\omega}"
},
{
"math_id": 35,
"text": "\\theta = 2\\arcsin\\tfrac{1}{\\sqrt{N}} "
},
{
"math_id": 36,
"text": " \\lang s'|s\\rang = \\sqrt{\\frac{N-1}{N}}. "
},
{
"math_id": 37,
"text": "\\theta/2"
},
{
"math_id": 38,
"text": " \\sin \\frac{\\theta}{2} = \\frac{1}{\\sqrt{N}}. "
},
{
"math_id": 39,
"text": " \\sin^2\\left( \\Big( r + \\frac{1}{2} \\Big)\\theta\\right),"
},
{
"math_id": 40,
"text": "r \\approx \\pi \\sqrt{N} / 4"
},
{
"math_id": 41,
"text": "U_s U_\\omega"
},
{
"math_id": 42,
"text": "s"
},
{
"math_id": 43,
"text": "\\omega"
},
{
"math_id": 44,
"text": "\\{|s\\rang, |\\omega\\rang\\}"
},
{
"math_id": 45,
"text": " U_s : a |\\omega \\rang + b |s \\rang \\mapsto [|\\omega \\rang \\, | s \\rang] \\begin{bmatrix}\n-1 & 0 \\\\\n2/\\sqrt{N} & 1 \\end{bmatrix}\\begin{bmatrix}a\\\\b\\end{bmatrix}."
},
{
"math_id": 46,
"text": " U_\\omega : a |\\omega \\rang + b |s \\rang \\mapsto [|\\omega \\rang \\, | s \\rang] \\begin{bmatrix}\n-1 & -2/\\sqrt{N} \\\\\n0 & 1 \\end{bmatrix}\\begin{bmatrix}a\\\\b\\end{bmatrix}."
},
{
"math_id": 47,
"text": "\\{ |\\omega\\rang, |s\\rang \\}"
},
{
"math_id": 48,
"text": "U_sU_\\omega"
},
{
"math_id": 49,
"text": " U_sU_\\omega = \\begin{bmatrix} -1 & 0 \\\\ 2/\\sqrt{N} & 1 \\end{bmatrix}\n\\begin{bmatrix}\n-1 & -2/\\sqrt{N} \\\\\n0 & 1 \\end{bmatrix}\n = \n\\begin{bmatrix}\n1 & 2/\\sqrt{N} \\\\\n-2/\\sqrt{N} & 1-4/N \\end{bmatrix}."
},
{
"math_id": 50,
"text": "t = \\arcsin(1/\\sqrt{N})"
},
{
"math_id": 51,
"text": " U_sU_\\omega = M \\begin{bmatrix} e^{2it} & 0 \\\\ 0 & e^{-2it}\\end{bmatrix} M^{-1}"
},
{
"math_id": 52,
"text": "M = \\begin{bmatrix}-i & i \\\\ e^{it} & e^{-it} \\end{bmatrix}."
},
{
"math_id": 53,
"text": " (U_sU_\\omega)^r = M \\begin{bmatrix} e^{2rit} & 0 \\\\ 0 & e^{-2rit}\\end{bmatrix} M^{-1}."
},
{
"math_id": 54,
"text": "\\left|\\begin{bmatrix}\\lang\\omega|\\omega\\rang & \\lang\\omega|s\\rang\\end{bmatrix}(U_sU_\\omega)^r \\begin{bmatrix}0 \\\\ 1\\end{bmatrix} \\right|^2 = \\sin^2\\left( (2r+1)t\\right)."
},
{
"math_id": 55,
"text": "2rt \\approx \\pi/2"
},
{
"math_id": 56,
"text": "r = \\pi/4t = \\pi/4\\arcsin(1/\\sqrt{N}) \\approx \\pi\\sqrt{N}/4"
},
{
"math_id": 57,
"text": " [|\\omega \\rang \\, | s \\rang] (U_sU_\\omega)^r \\begin{bmatrix}0\\\\1\\end{bmatrix} \\approx [|\\omega \\rang \\, | s \\rang] M \\begin{bmatrix} i & 0 \\\\ 0 & -i\\end{bmatrix} M^{-1} \\begin{bmatrix}0\\\\1\\end{bmatrix} = | \\omega \\rang \\frac{1}{\\cos(t)} - |s \\rang \\frac{\\sin(t)}{\\cos(t)}."
},
{
"math_id": 58,
"text": "O\\left (\\frac{1}{N} \\right)"
},
{
"math_id": 59,
"text": " \\frac{\\pi}{4}{\\left( \\frac{N}{k} \\right)^{1/2}} "
},
{
"math_id": 60,
"text": " \\frac{\\pi}{4}{N^{1/2}}."
},
{
"math_id": 61,
"text": "k = N/2^t"
},
{
"math_id": 62,
"text": "t = \\log_2(N/k) + c"
},
{
"math_id": 63,
"text": " \\frac{\\pi}{4} \\Big(1 + \\sqrt{2} + \\sqrt{4} + \\cdots + \\sqrt{\\frac{N}{k2^c}}\\Big) = O\\big(\\sqrt{N/k}\\big). "
},
{
"math_id": 64,
"text": "k = N/2"
},
{
"math_id": 65,
"text": "N = 2"
},
{
"math_id": 66,
"text": "k > N/2"
},
{
"math_id": 67,
"text": "k \\geq N/2"
},
{
"math_id": 68,
"text": "K"
},
{
"math_id": 69,
"text": "b = N/K"
},
{
"math_id": 70,
"text": "N/2"
},
{
"math_id": 71,
"text": "(N-b)/2"
},
{
"math_id": 72,
"text": "\\frac{\\pi}{4}\\sqrt{N}"
},
{
"math_id": 73,
"text": "n_1"
},
{
"math_id": 74,
"text": "n_2"
},
{
"math_id": 75,
"text": "G_1"
},
{
"math_id": 76,
"text": "G_2"
},
{
"math_id": 77,
"text": "j_1"
},
{
"math_id": 78,
"text": "j_2"
},
{
"math_id": 79,
"text": "1-o(1)"
},
{
"math_id": 80,
"text": "O(\\sqrt[3]{N})"
}
] |
https://en.wikipedia.org/wiki?curid=58498
|
58503701
|
Strebe 1995 projection
|
Pseudoazimuthal equal-area map projection
The Strebe 1995 projection, Strebe projection, Strebe lenticular equal-area projection, or Strebe equal-area polyconic projection is an equal-area map projection presented by Daniel "daan" Strebe in 1994. Strebe designed the projection to keep all areas proportionally correct in size; to push as much of the inevitable distortion as feasible away from the continental masses and into the Pacific Ocean; to keep a familiar equatorial orientation; and to do all this without slicing up the map.
Description.
Strebe first presented the projection at a joint meeting of the Canadian Cartographic Association and the North American Cartographic Information Society (NACIS) in August 1994. Its final formulation was completed in 1995. The projection has been available in the map projection software Geocart since Geocart 1.2, released in October 1994.
The projection is arrived at by a series of steps, each of which preserves areas. Because each step preserves areas, the entire process preserves areas. The steps use a technique invented by Strebe called "substitute deprojection" or "Strebe's transformation". First, the Eckert IV projection is computed. Then, pretending that the Eckert projection is actually a shrunken portion of the Mollweide projection, the Eckert is "deprojected" back onto the sphere using the inverse transformation of the Mollweide projection. This yields a full-sphere-to-partial-sphere map. Then this mapped sphere is projected back to the plane using the Hammer projection. While the projections named here are the ones that define the Strebe 1995 projection, the substitute deprojection principle is not constrained to particular projections.
The projection as described can be formulated as follows:
formula_0
where formula_1 is solved iteratively:
formula_2
In these formulae, formula_3 represents longitude and formula_4 represents latitude.
Strebe's preferred arrangement is to set formula_5, as shown, and 11°E as the central meridian to avoid dividing eastern Siberia's Chukchi Peninsula. However, "s" can be modified to change the appearance without destroying the equal-area property.
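For readers who want to experiment with the formulation above, the following is a minimal NumPy sketch of the forward mapping (the Eckert IV step, the Mollweide "deprojection", then the Hammer step), transcribed directly from the formulas in this article. The function name, the default central meridian, and the simple Newton iteration are illustrative choices; no attempt is made to handle the poles or out-of-range longitudes robustly.

```python
import numpy as np

def strebe_1995(lon_deg, lat_deg, s=1.35, central_meridian_deg=11.0):
    """Forward Strebe 1995 projection, following the formulas quoted above (a sketch)."""
    lam = np.radians(lon_deg - central_meridian_deg)
    phi = np.radians(lat_deg)

    # Solve theta + sin(theta)cos(theta) + 2 sin(theta) = (1/2)(4 + pi) sin(phi) by Newton iteration.
    theta = phi / 2.0
    target = 0.5 * (4.0 + np.pi) * np.sin(phi)
    for _ in range(25):
        f = theta + np.sin(theta) * np.cos(theta) + 2.0 * np.sin(theta) - target
        fprime = 1.0 + np.cos(2.0 * theta) + 2.0 * np.cos(theta)
        theta = theta - f / fprime

    # Scaled Eckert IV coordinates.
    x_e = s * lam * (1.0 + np.cos(theta)) / np.sqrt(4.0 * np.pi + np.pi ** 2)
    y_e = 2.0 * np.sqrt(np.pi) * np.sin(theta) / (s * np.sqrt(4.0 + np.pi))

    # "Deproject" (x_e, y_e) back onto the sphere via the inverse Mollweide step.
    r = np.sqrt(2.0 - y_e ** 2)
    phi_p = np.arcsin((2.0 * np.arcsin(np.sqrt(2.0) * y_e / 2.0) + r * y_e) / np.pi)
    lam_p = np.pi * x_e / (4.0 * r)

    # Project the intermediate sphere with the Hammer step.
    D = np.sqrt(2.0 / (1.0 + np.cos(phi_p) * np.cos(lam_p)))
    return (2.0 * D / s) * np.cos(phi_p) * np.sin(lam_p), s * D * np.sin(phi_p)

print(strebe_1995(0.0, 45.0))  # map coordinates of 0 deg E, 45 deg N (in projection units)
```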
References.
|
[
{
"math_id": 0,
"text": "\\begin{align}\nx &= \\frac {2D}{s} \\cos \\varphi_{\\text{p}} \\sin \\lambda_{\\text{p}} , \\quad\ny = s D \\sin \\varphi_{\\text{p}} \\\\\ns &= 1.35 \\\\\nD &= \\sqrt {\\frac {2}{1 + \\cos \\varphi_{\\text{p}} \\cos \\lambda_{\\text{p}}}} \\\\\n\\sin \\varphi_{\\text{p}} &= \\frac {2 \\arcsin \\frac{\\sqrt{2} y_{\\text{e}}}{2} + r y_{\\text{e}}}{\\pi} \\\\\n\\lambda_{\\text{p}} &= \\frac {\\pi x_{\\text{e}}}{4r} \\\\\nr &= \\sqrt {2 - y_{\\text{e}}^2} \\\\\nx_{\\text{e}} &= s \\frac {\\lambda (1+\\cos \\theta)}{\\sqrt {4\\pi + \\pi^2}} , \\quad\ny_{\\text{e}} = 2 \\frac {\\sqrt \\pi \\sin \\theta}{s \\sqrt {4+\\pi}},\n\\end{align}"
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "\\begin{align}\n\\theta + \\sin \\theta \\cos \\theta + 2 \\sin \\theta &= \\frac{1}{2} \\left(4+\\pi\\right) \\sin \\varphi .\n\\end{align}"
},
{
"math_id": 3,
"text": "\\lambda"
},
{
"math_id": 4,
"text": "\\varphi"
},
{
"math_id": 5,
"text": "s = 1.35"
}
] |
https://en.wikipedia.org/wiki?curid=58503701
|
58507539
|
Feldman–Hájek theorem
|
Theory in probability theory
In probability theory, the Feldman–Hájek theorem or Feldman–Hájek dichotomy is a fundamental result in the theory of Gaussian measures. It states that two Gaussian measures formula_0 and formula_1 on a locally convex space formula_2 are either equivalent measures or else mutually singular: there is no possibility of an intermediate situation in which, for example, formula_0 has a density with respect to formula_1 but not vice versa. In the special case that formula_2 is a Hilbert space, it is possible to give an explicit description of the circumstances under which formula_0 and formula_1 are equivalent: writing formula_3 and formula_4 for the means of formula_0 and formula_5 and formula_6 and formula_7 for their covariance operators, equivalence of formula_0 and formula_1 holds if and only if (i) formula_0 and formula_1 have the same Cameron–Martin space formula_8, (ii) the difference of their means lies in this common space, that is, formula_9, and (iii) the operator formula_10 is a Hilbert–Schmidt operator on the closure formula_11
A simple consequence of the Feldman–Hájek theorem is that dilating a Gaussian measure on an infinite-dimensional Hilbert space formula_2 (i.e. taking formula_12 for some scale factor formula_13) always yields two mutually singular Gaussian measures, except for the trivial dilation with formula_14 since formula_15 is Hilbert–Schmidt only when formula_16
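As a finite-dimensional numerical illustration of this dilation example (an aside, not part of the theorem's statement): the Hellinger affinity between the centred Gaussians N(0, I_d) and N(0, s²I_d) on R^d equals (2s/(1 + s²))^{d/2}, which tends to 0 as d grows whenever s ≠ 1, foreshadowing the mutual singularity of the infinite-dimensional dilations. A short Python check:

```python
def hellinger_affinity(s, d):
    """Hellinger (Bhattacharyya) affinity between N(0, I_d) and N(0, s^2 I_d) on R^d."""
    return (2.0 * s / (1.0 + s ** 2)) ** (d / 2.0)

for d in (1, 10, 100, 1000):
    print(d, hellinger_affinity(1.1, d))
# The affinity equals 1 for s = 1 and decays geometrically in d otherwise,
# consistent with mutual singularity of the infinite-dimensional dilations.
```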
References.
|
[
{
"math_id": 0,
"text": "\\mu"
},
{
"math_id": 1,
"text": "\\nu"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "m_{\\mu}"
},
{
"math_id": 4,
"text": "m_{\\nu}"
},
{
"math_id": 5,
"text": "\\nu,"
},
{
"math_id": 6,
"text": "C_\\mu"
},
{
"math_id": 7,
"text": "C_\\nu"
},
{
"math_id": 8,
"text": "H = C_\\mu^{1/2}(X) = C_\\nu^{1/2}(X)"
},
{
"math_id": 9,
"text": "m_\\mu - m_\\nu \\in H"
},
{
"math_id": 10,
"text": "(C_\\mu^{-1/2} C_\\nu^{1/2}) (C_\\mu^{-1/2} C_\\nu^{1/2})^{\\ast} - I"
},
{
"math_id": 11,
"text": "\\bar{H}."
},
{
"math_id": 12,
"text": "C_\\nu = s C_\\mu"
},
{
"math_id": 13,
"text": "s \\geq 0"
},
{
"math_id": 14,
"text": "s = 1,"
},
{
"math_id": 15,
"text": "(s^2 - 1) I"
},
{
"math_id": 16,
"text": "s = 1."
}
] |
https://en.wikipedia.org/wiki?curid=58507539
|
585102
|
Julius von Mayer
|
German physician, chemist, and physicist
Julius Robert von Mayer (25 November 1814 – 20 March 1878) was a German physician, chemist, and physicist and one of the founders of thermodynamics. He is best known for enunciating in 1841 one of the original statements of the conservation of energy or what is now known as one of the first versions of the first law of thermodynamics, namely that "energy can be neither created nor destroyed". In 1842, Mayer described the vital chemical process now referred to as oxidation as the primary source of energy for any living creature. He also proposed that plants convert light into chemical energy.
His achievements were overlooked and priority for the discovery in 1842 of the "mechanical equivalent of heat" was attributed to James Joule in the following year.
Early life.
Mayer was born on 25 November 1814 in Heilbronn, Württemberg (Baden-Württemberg, modern day Germany), the son of a pharmacist. He grew up in Heilbronn. After completing his "Abitur", he studied medicine at the University of Tübingen, where he was a member of the "Corps Guestphalia", a German Student Corps. During 1838 he attained his doctorate as well as passing the "Staatsexamen". After a stay in Paris (1839/40) he left as a ship's physician on a Dutch three-mast sailing ship for a journey to Jakarta.
Although he had hardly been interested before this journey in physical phenomena, his observation that storm-whipped waves are warmer than the calm sea started him thinking about the physical laws, in particular about the physical phenomenon of warmth and the question "whether the directly developed heat alone (the heat of combustion), or the sum of the quantities of heat developed in direct and indirect ways are to be accounted for in the burning process". After his return in February 1841 Mayer dedicated his efforts to solve this problem.
In 1841 he settled in Heilbronn and married.
Development of ideas.
Even as a child, Mayer showed an intense interest in mechanical mechanisms. As a young man he performed a variety of physical and chemical experiments, and one of his favorite hobbies was building electrical devices and air pumps. In May 1832 he entered Eberhard-Karls University, where he studied medicine.
In 1837, he and some of his friends were arrested for wearing the couleurs of a forbidden organization. The consequences of this arrest included a one-year expulsion from the university and a brief period of incarceration. This diversion sent Mayer traveling to Switzerland, France, and the Dutch East Indies. Mayer drew some additional interest in mathematics and engineering from his friend Carl Baur through private tutoring. In 1841, Mayer returned to Heilbronn to practice medicine, but physics became his new passion.
In June 1841 he completed his first scientific paper entitled "On the Quantitative and Qualitative Determination of Forces". It was largely ignored by other professionals in the area. Then, Mayer became interested in the area of heat and its motion. He presented a value in numerical terms for the mechanical equivalent of heat. He also was the first person to describe the vital chemical process now referred to as oxidation as the primary source of energy for any living creature.
In 1848 he calculated that in the absence of a source of energy the Sun would cool down in only 5000 years, and he suggested that the impact of meteorites kept it hot.
Since he was not taken seriously at the time, his achievements were overlooked and credit was given to James Joule. Mayer almost committed suicide after he discovered this fact. He spent some time in mental institutions to recover from this and the loss of some of his children. Several of his papers were published due to the advanced nature of the physics and chemistry. He was awarded an honorary doctorate in 1859 by the philosophical faculty at the University of Tübingen. His overlooked work was revived in 1862 by fellow physicist John Tyndall in a lecture at the London Royal Institution. In July 1867 Mayer published "Die Mechanik der Wärme." This publication dealt with the mechanics of heat and its motion. On 5 November 1867 Mayer was awarded personal nobility by the Kingdom of Württemberg (von Mayer), which is the German equivalent of a British knighthood. Von Mayer died in Germany in 1878.
After Sadi Carnot stated it for caloric, Mayer was the first person to state the law of the conservation of energy, one of the most fundamental tenets of modern day physics. The law of the conservation of energy states that the total mechanical energy of a system remains constant in any isolated system of objects that interact with each other only by way of forces that are conservative.
Mayer's first attempt at stating the conservation of energy was a paper he sent to Johann Christian Poggendorff's "Annalen der Physik", in which he postulated a conservation of force ("Erhaltungssatz der Kraft"). However, owing to Mayer's lack of advanced training in physics, it contained some fundamental mistakes and was not published. Mayer continued to pursue the idea steadfastly and argued with the Tübingen physics professor Johann Gottlieb Nörremberg, who rejected his hypothesis. Nörremberg did, however, give Mayer a number of valuable suggestions on how the idea could be examined experimentally; for example, if kinetic energy transforms into heat energy, water should be warmed by vibration.
Mayer not only performed this demonstration, but also determined the quantitative factor of the transformation, calculating the mechanical equivalent of heat. The result of his investigations was published in 1842 in the May edition of Justus von Liebig's "Annalen der Chemie und Pharmacie". It was translated as "Remarks on the Forces of Inorganic Nature". In his booklet "Die organische Bewegung im Zusammenhang mit dem Stoffwechsel" ("The Organic Movement in Connection with the Metabolism", 1845) he specified the numerical value of the mechanical equivalent of heat: at first as 365 kgf·m/kcal, later as 425 kgf·m/kcal; the modern values are 4.184 kJ/kcal (426.6 kgf·m/kcal) for the thermochemical calorie and 4.1868 kJ/kcal (426.9 kgf·m/kcal) for the international steam table calorie.
This relation implies that, although work and heat are different forms of energy, they can be transformed into one another. This law is now called the first law of thermodynamics, and led to the formulation of the general principle of conservation of energy, definitively stated by Hermann von Helmholtz in 1847.
Mayer's relation.
Mayer derived a relation between specific heat at constant pressure and the specific heat at constant volume for an ideal gas. The relation is:
formula_0,
where "CP,m" is the molar specific heat at constant pressure, "CV,m" is the molar specific heat at constant volume and "R" is the gas constant.
Later life.
Mayer was aware of the importance of his discovery, but his inability to express himself scientifically led to degrading speculation and resistance from the scientific establishment. Contemporary physicists rejected his principle of conservation of energy, and even acclaimed physicists Hermann von Helmholtz and James Prescott Joule viewed his ideas with hostility. The former doubted Mayer's qualifications in physical questions, and a bitter dispute over priority developed with the latter.
In 1848 two of his children died in rapid succession, and Mayer's mental health deteriorated. He attempted suicide on 18 May 1850 and was committed to a mental institution. After he was released, he was a broken man, and he only timidly re-entered public life in 1860. However, in the meantime, his scientific fame had grown and he received a late appreciation of his achievement, although perhaps at a stage where he was no longer able to enjoy it.
He continued to work vigorously as a physician until his death.
Honors.
In chemistry, he invented Mayer's reagent which is used in detecting alkaloids.
References.
|
[
{
"math_id": 0,
"text": "C_{P,m} - C_{V,m} = R"
}
] |
https://en.wikipedia.org/wiki?curid=585102
|
58512088
|
Jennifer Balakrishnan
|
American mathematician
Jennifer Shyamala Sayaka Balakrishnan is an American mathematician known for leading a team that solved the problem of the "cursed curve", a Diophantine equation that was known for being "famously difficult". More generally, Balakrishnan specializes in algorithmic number theory and arithmetic geometry. She is a Clare Boothe Luce Professor at Boston University.
Education and career.
Balakrishnan was born in Mangilao, Guam to Narayana and Shizuko Balakrishnan; her father is a professor of chemistry at the University of Guam.
As a junior at Harvest Christian Academy, Balakrishnan won an honorable mention in the 2001 Karl Menger Memorial Award competition, for the best mathematical project in the Intel International Science and Engineering Fair. Her project concerned elliptic coordinate systems. In the following year, she won the National High School Student Calculus Competition, given as part of the United States of America Mathematical Olympiad.
Balakrishnan graduated from Harvard University in 2006, with both a "magna cum laude" bachelor's degree and a master's degree in mathematics. She moved to the Massachusetts Institute of Technology for her doctoral studies, completing her Ph.D. in 2011. Her dissertation, "Coleman integration for hyperelliptic curves: algorithms and applications", was supervised by Kiran Kedlaya.
She returned to Harvard for her postdoctoral studies from 2011 to 2013, and then moved to the University of Oxford from 2013 to 2016, where she was a Junior Research Fellow in Balliol College and a Titchmarsh Research Fellow in the Mathematical Institute. She became Clare Boothe Luce Assistant Professor at Boston University in 2016, Clare Boothe Luce Associate Professor in 2021, and Clare Boothe Luce Professor in 2023.
Balakrishnan is one of the principal investigators in the Simons Collaboration on Arithmetic Geometry, Number Theory, and Computation, a large multi-university collaboration involving Boston University, Harvard, MIT, Brown University, and Dartmouth College, with additional collaborators from other universities in the US, England, Australia, the Netherlands, and Canada. She also serves on the board of directors of the Number Theory Foundation and the editorial boards of Research in Number Theory and Mathematics of Computation. She serves on the Scientific Advisory Board for the Institute for Computational and Experimental Research in Mathematics (ICERM).
Contributions.
In 2017, Balakrishnan led a team of mathematicians in settling the problem of the "cursed curve" formula_0. This curve is modeled by the equation
formula_1
and, as a Diophantine equation, the problem is to determine all rational solutions, i.e., assignments of rational numbers to the variables formula_2, formula_3, and formula_4 for which the equation is true.
Although as an explicit equation this curve has a complicated form, it is natural and conceptually significant in the number theory of elliptic curves. The equation describes a modular curve whose solutions characterize the one remaining unsolved case of a theorem on the Galois representations of elliptic curves without complex multiplication.
Earlier computations had identified seven solutions on the cursed curve (six corresponding to elliptic curves with complex multiplication, and one cusp), but those computational methods were unable to show that the list of solutions was complete. Following a suggestion of Oxford mathematician Minhyong Kim, Balakrishnan and her co-authors constructed a "Selmer variety" associated with the curve, such that the rational points of the curve all lie on the Selmer variety as well, and such that the number of points of intersection of the curve and the variety can be computed. Using this method, they proved that the seven known solutions to the cursed curve are the only ones possible. This work was initially reported in a 2017 arXiv preprint and was published in the journal "Annals of Mathematics" in 2019.
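For illustration only, the projective quartic above can be probed with a naive brute-force search for small integer solutions; this is emphatically not the Selmer-variety method just described (exhaustive search can find points but can never prove the list complete), and the helper names below are ad hoc.

```python
from math import gcd

def F(x, y, z):
    """Left-hand side of the defining equation of the cursed curve."""
    return (y**4 + 5*x**4 - 6*x**2*y**2 + 6*x**3*z + 26*x**2*y*z + 10*x*y**2*z
            - 10*y**3*z - 32*x**2*z**2 - 40*x*y*z**2 + 24*y**2*z**2
            + 32*x*z**3 - 16*y*z**3)

bound, found = 5, set()
for x in range(-bound, bound + 1):
    for y in range(-bound, bound + 1):
        for z in range(-bound, bound + 1):
            if (x, y, z) != (0, 0, 0) and F(x, y, z) == 0:
                g = gcd(gcd(abs(x), abs(y)), abs(z))
                found.add((x // g, y // g, z // g))

print(found)  # small projective solutions such as (0, 0, 1) and (1, 1, 1) appear,
              # possibly together with sign-reversed copies of the same point
```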
Balakrishnan has researched, with Ken Ono and others, Lehmer's question on whether the Ramanujan tau function formula_5 is ever zero for a positive integer n.
As well as for her work in number theory, Balakrishnan is known for her work implementing number-theoretical algorithms as part of the SageMath computer algebra system.
Recognition.
Balakrishnan received the Clare Boothe Luce Assistant Professorship in 2016. In 2018, Balakrishnan was selected as a Sloan Research Fellow. In 2020, she was selected for a National Science Foundation CAREER Award. She was named a Fellow of the American Mathematical Society, in the 2022 class of fellows, "for contributions to arithmetic geometry and computational number theory and service to the profession". She earned the 2022 AWM–Microsoft Research Prize in Algebra and Number Theory in recognition of her "outstanding contributions to explicit methods in number theory, particularly her advances in computing rational points on algebraic curves over number fields". She was selected as a Fellow of the Association for Women in Mathematics in the class of 2023 "for her support of women in mathematics through mentoring and advising; for organizing and supporting programs for women and girls, especially Women in Sage and Women in Numbers; for her work in outreach and education, including GirlsGetMath; and for working to improve diversity, equity, and inclusion in research communities." In 2023 she was awarded the 2023-2024 AMS-Birman Fellowship.
References.
|
[
{
"math_id": 0,
"text": "X_s(13)"
},
{
"math_id": 1,
"text": "y^4+5x^4-6x^2y^2+6x^3z+26x^2yz+10xy^2z-10y^3z-32x^2z^2-40xyz^2+24y^2z^2+32xz^3-16yz^3=0"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "y"
},
{
"math_id": 4,
"text": "z"
},
{
"math_id": 5,
"text": "\\tau(n)"
}
] |
https://en.wikipedia.org/wiki?curid=58512088
|
585143
|
Closed-form expression
|
Mathematical formula involving a given set of operations
In mathematics, an expression or equation is in closed form if it is formed with constants, variables and a finite set of basic functions connected by arithmetic operations (addition, subtraction, multiplication, division, and integer powers) and function composition. Commonly, the allowed functions are "n"th root, exponential function, logarithm, and trigonometric functions. However, the set of basic functions depends on the context.
The "closed-form problem" arises when new ways are introduced for specifying mathematical objects, such as limits, series and integrals: given an object specified with such tools, a natural problem is to find, if possible, a "closed-form expression" of this object, that is, an expression of this object in terms of previous ways of specifying it.
Example: roots of polynomials.
The quadratic formula
formula_0
is a "closed form" of the solutions to the general quadratic equation formula_1
More generally, in the context of polynomial equations, a closed form of a solution is a solution in radicals; that is, a closed-form expression for which the allowed functions are only nth-roots and field operations formula_2 In fact, field theory allows showing that if a solution of a polynomial equation has a closed form involving exponentials, logarithms or trigonometric functions, then it has also a closed form that does not involve these functions.
There are expressions in radicals for all solutions of cubic equations (degree 3) and quartic equations (degree 4). The size of these expressions increases significantly with the degree, limiting their usefulness.
In higher degrees, the Abel–Ruffini theorem states that there are equations whose solutions cannot be expressed in radicals, and, thus, have no closed forms. A simple example is the equation formula_3 Galois theory provides an algorithmic method for deciding whether a particular polynomial equation can be solved in radicals.
Symbolic integration.
Symbolic integration consists essentially of the search of closed forms for antiderivatives of functions that are specified by closed-form expressions. In this context, the basic functions used for defining closed forms are commonly logarithms, exponential function and polynomial roots. Functions that have a closed form for these basic functions are called elementary functions and include trigonometric functions, inverse trigonometric functions, hyperbolic functions, and inverse hyperbolic functions.
The fundamental problem of symbolic integration is thus, given an elementary function specified by a closed-form expression, to decide whether its antiderivative is an elementary function, and, if it is, to find a closed-form expression for this antiderivative.
For rational functions, that is, for fractions of two polynomial functions, antiderivatives are not always rational fractions, but they are always elementary functions that may involve logarithms and polynomial roots. This is usually proved with partial fraction decomposition. The need for logarithms and polynomial roots is illustrated by the formula
formula_4
which is valid if formula_5 and formula_6 are coprime polynomials such that formula_6 is square free and formula_7
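This formula can be checked symbolically for a small example with SymPy; the choice f = 1, g = x² − 1 is an arbitrary illustration satisfying the stated hypotheses.

```python
import sympy as sp

x = sp.symbols('x')
f, g = sp.Integer(1), x**2 - 1   # coprime, g square-free, deg f < deg g

# Right-hand side of the formula: sum over the roots of g of f(a)/g'(a) * ln(x - a).
rhs = sum(f.subs(x, a) / sp.diff(g, x).subs(x, a) * sp.log(x - a) for a in sp.roots(g, x))
lhs = sp.integrate(f / g, x)

print(sp.simplify(lhs - rhs))    # 0: the two antiderivatives agree (up to a constant)
```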
Alternative definitions.
Changing the definition of "well known" to include additional functions can change the set of equations with closed-form solutions. Many cumulative distribution functions cannot be expressed in closed form, unless one considers special functions such as the error function or gamma function to be well known. It is possible to solve the quintic equation if general hypergeometric functions are included, although the solution is far too complicated algebraically to be useful. For many practical computer applications, it is entirely reasonable to assume that the gamma function and other special functions are well known since numerical implementations are widely available.
Analytic expression.
An analytic expression (also known as expression in analytic form or analytic formula) is a mathematical expression constructed using well-known operations that lend themselves readily to calculation. Similar to closed-form expressions, the set of well-known functions allowed can vary according to context but always includes the basic arithmetic operations (addition, subtraction, multiplication, and division), exponentiation to a real exponent (which includes extraction of the "n"th root), logarithms, and trigonometric functions.
However, the class of expressions considered to be analytic expressions tends to be wider than that for closed-form expressions. In particular, special functions such as the Bessel functions and the gamma function are usually allowed, and often so are infinite series and continued fractions. On the other hand, limits in general, and integrals in particular, are typically excluded.
If an analytic expression involves only the algebraic operations (addition, subtraction, multiplication, division, and exponentiation to a rational exponent) and rational constants then it is more specifically referred to as an algebraic expression.
Comparison of different classes of expressions.
Closed-form expressions are an important sub-class of analytic expressions, which contain a finite number of applications of well-known functions. Unlike the broader analytic expressions, the closed-form expressions do not include infinite series or continued fractions; neither includes integrals or limits. Indeed, by the Stone–Weierstrass theorem, any continuous function on the unit interval can be expressed as a limit of polynomials, so any class of functions containing the polynomials and closed under limits will necessarily include all continuous functions.
Similarly, an equation or system of equations is said to have a closed-form solution if, and only if, at least one solution can be expressed as a closed-form expression; and it is said to have an analytic solution if and only if at least one solution can be expressed as an analytic expression. There is a subtle distinction between a "closed-form "function"" and a "closed-form "number"" in the discussion of a "closed-form solution", discussed below. A closed-form or analytic solution is sometimes referred to as an explicit solution.
Dealing with non-closed-form expressions.
Transformation into closed-form expressions.
The expression:
formula_8
is not in closed form because the summation entails an infinite number of elementary operations. However, by summing a geometric series this expression can be expressed in the closed form:
formula_9
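A quick numerical sanity check of this closed form (the value of "x" below is arbitrary):

```python
x = 0.37
partial_sum = sum(x / 2**n for n in range(60))  # finite truncation of the infinite series
print(partial_sum, 2 * x)                       # both are approximately 0.74
```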
Differential Galois theory.
The integral of a closed-form expression may or may not itself be expressible as a closed-form expression. This study is referred to as differential Galois theory, by analogy with algebraic Galois theory.
The basic theorem of differential Galois theory is due to Joseph Liouville in the 1830s and 1840s and hence referred to as Liouville's theorem.
A standard example of an elementary function whose antiderivative does not have a closed-form expression is: formula_10 whose one antiderivative is (up to a multiplicative constant) the error function:
formula_11
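A numerical check of this relationship; scipy's quadrature routine is assumed to be available here (any numerical integrator would do):

```python
import math
from scipy.integrate import quad

x = 1.3
numeric, _ = quad(lambda t: math.exp(-t**2), 0.0, x)      # integral of e^{-t^2} from 0 to x
closed_form = math.sqrt(math.pi) / 2 * math.erf(x)        # (sqrt(pi)/2) * erf(x)
print(numeric, closed_form)                               # the two values agree
```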
Mathematical modelling and computer simulation.
Equations or systems too complex for closed-form or analytic solutions can often be analysed by mathematical modelling and computer simulation, as is common in physics.
Closed-form number.
Three subfields of the complex numbers C have been suggested as encoding the notion of a "closed-form number"; in increasing order of generality, these are the Liouvillian numbers (not to be confused with Liouville numbers in the sense of rational approximation), EL numbers and elementary numbers. The Liouvillian numbers, denoted L, form the smallest "algebraically closed" subfield of C closed under exponentiation and logarithm (formally, intersection of all such subfields)—that is, numbers which involve "explicit" exponentiation and logarithms, but allow explicit and "implicit" polynomials (roots of polynomials). L was originally referred to as elementary numbers, but this term is now used more broadly to refer to numbers defined explicitly or implicitly in terms of algebraic operations, exponentials, and logarithms. A narrower definition, denoted E and referred to as EL numbers, is the smallest subfield of C closed under exponentiation and logarithm—this need not be algebraically closed, and corresponds to "explicit" algebraic, exponential, and logarithmic operations. "EL" stands both for "exponential–logarithmic" and as an abbreviation for "elementary".
Whether a number is a closed-form number is related to whether a number is transcendental. Formally, Liouvillian numbers and elementary numbers contain the algebraic numbers, and they include some but not all transcendental numbers. In contrast, EL numbers do not contain all algebraic numbers, but do include some transcendental numbers. Closed-form numbers can be studied via transcendental number theory, in which a major result is the Gelfond–Schneider theorem, and a major open question is Schanuel's conjecture.
Numerical computations.
For purposes of numeric computations, being in closed form is not in general necessary, as many limits and integrals can be efficiently computed. Some equations have no closed-form solution, such as those that represent the three-body problem or the Hodgkin–Huxley model. Therefore, the future states of these systems must be computed numerically.
Conversion from numerical forms.
There is software that attempts to find closed-form expressions for numerical values, including RIES, identify in Maple and SymPy, Plouffe's Inverter, and the Inverse Symbolic Calculator.
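In the same spirit, SymPy's nsimplify can sometimes recover a closed-form expression from a floating-point value when given candidate constants; it is a best-effort heuristic, shown here only as a sketch:

```python
import sympy as sp

print(sp.nsimplify(0.7071067811865476, [sp.sqrt(2)]))  # sqrt(2)/2
print(sp.nsimplify(0.7853981633974483, [sp.pi]))       # pi/4
```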
Notes.
References.
|
[
{
"math_id": 0,
"text": "x=\\frac{-b\\pm\\sqrt{b^2-4ac}}{2a}."
},
{
"math_id": 1,
"text": "ax^2+bx+c=0."
},
{
"math_id": 2,
"text": "(+, -, \\times ,/)."
},
{
"math_id": 3,
"text": "x^5-x-1=0."
},
{
"math_id": 4,
"text": "\\int\\frac{f(x)}{g(x)}\\,dx=\\sum_{\\alpha \\in \\operatorname{Roots}(g(x))} \\frac{f(\\alpha)}{g'(\\alpha)}\\ln(x-\\alpha),"
},
{
"math_id": 5,
"text": "f"
},
{
"math_id": 6,
"text": "g"
},
{
"math_id": 7,
"text": "\\deg f <\\deg g."
},
{
"math_id": 8,
"text": "f(x) = \\sum_{n=0}^\\infty \\frac{x}{2^n}"
},
{
"math_id": 9,
"text": "f(x) = 2x."
},
{
"math_id": 10,
"text": "e^{-x^2},"
},
{
"math_id": 11,
"text": "\\operatorname{erf}(x) = \\frac{2}{\\sqrt{\\pi}} \\int_{0}^x e^{-t^2} \\, dt."
}
] |
https://en.wikipedia.org/wiki?curid=585143
|
58519634
|
Schwarzschild's equation for radiative transfer
|
Formula for radiative heat transfer
In the study of heat transfer, Schwarzschild's equation is used to calculate radiative transfer (energy transfer via electromagnetic radiation) through a medium in local thermodynamic equilibrium that both absorbs and emits radiation.
The incremental change in spectral intensity, (dIλ, [W/sr/m2/μm]) at a given wavelength as radiation travels an incremental distance (ds) through a non-scattering medium is given by:
formula_0
where "n" is the number density of absorbing/emitting molecules, "σλ" is their absorption cross-section at wavelength "λ", "Bλ"("T") is the Planck function for temperature "T" and wavelength "λ", and "Iλ" is the spectral intensity of the radiation entering the increment.
This equation and various equivalent expressions are known as Schwarzschild's equation. The second term describes absorption of radiation by the molecules in a short segment of the radiation's path (ds) and the first term describes emission by those same molecules. In a non-homogeneous medium, these parameters can vary with altitude and location along the path, formally making these terms "n"("s"), "σλ"("s"), "T"("s"), and "Iλ"("s"). Additional terms are added when scattering is important. Integrating the change in spectral intensity [W/sr/m2/μm] over all relevant wavelengths gives the change in intensity [W/sr/m2]. Integrating over a hemisphere then affords the flux perpendicular to a plane (F, [W/m2]).
Schwarzschild's equation is the formula for calculating the intensity of a beam of electromagnetic radiation after passage through a non-scattering medium, provided the temperature, pressure, and composition of the medium are known.
If no other fluxes change, the law of conservation of energy demands that the Earth warm (from one steady state to another) until balance is restored between inward and outward fluxes. Schwarzschild's equation alone says nothing about how much warming would be required to restore balance. When meteorologists and climate scientists refer to "radiative transfer calculations" or "radiative transfer equations" (RTE), the phenomena of emission and absorption are handled by numerical integration of Schwarzschild's equation over a path through the atmosphere. Weather forecasting models and climate models use versions of Schwarzschild's equation optimized to minimize computation time. Online programs are available that perform computations using Schwarzschild's equation.
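The numerical integration just mentioned can be illustrated with a deliberately simple sketch: forward-Euler integration of dIλ = nσλ[Bλ(T) − Iλ] ds along a homogeneous, non-scattering path. The molecular density, cross-section, path length and temperatures below are illustrative placeholders, not a real atmospheric profile, and the function names are ad hoc.

```python
import math

def planck_B(lam, T):
    """Planck spectral radiance B_lambda(T), in W / (sr * m^2 * m of wavelength)."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2.0 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

def schwarzschild_path(I0, lam, n, sigma, T, length, steps=20000):
    """Integrate dI = n*sigma*(B - I) ds over a homogeneous path by forward Euler."""
    ds, I, B = length / steps, I0, planck_B(lam, T)
    for _ in range(steps):
        I += n * sigma * (B - I) * ds
    return I

lam = 15e-6                      # a thermal-infrared wavelength, 15 micrometres
I_in = planck_B(lam, 288.0)      # radiation entering the path from a warmer source
I_out = schwarzschild_path(I_in, lam, n=1.0e21, sigma=1.0e-25, T=250.0, length=5.0e4)
print(I_in, I_out, planck_B(lam, 250.0))   # I relaxes from the source value toward B(250 K)
```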
History.
The Schwarzschild equation first appeared in Karl Schwarzschild's 1906 paper “"Ueber das Gleichgewicht der Sonnenatmosphäre"” ("On the equilibrium of the solar atmosphere").
Background.
Radiative transfer refers to energy transfer through an atmosphere or other medium by means of electromagnetic waves or (equivalently) photons. The simplest form of radiative transfer involves a collinear beam of radiation traveling through a sample to a detector. That flux can be reduced by absorption, scattering or reflection, resulting in energy transmission over a path of less than 100%. The concept of radiative transfer extends beyond simple laboratory phenomena to include thermal emission of radiation by the medium - which can result in more photons arriving at the end of a path than entering it. It also deals with radiation arriving at a detector from a large source - such as the surface of the Earth or the sky. Since emission can occur in all directions, atmospheric radiative transfer (like Planck's Law) requires units involving a solid angle, such as W/sr/m2.
At the most fundamental level, the absorption and emission of radiation are controlled by the Einstein coefficients for absorption, emission and stimulated emission of a photon ("B"12, "A"21 and "B"21) and the density of molecules in the ground and excited states ("n"1 and "n"2). However, in the simplest physical situation – blackbody radiation – radiation and the medium through which it is passing are in thermodynamic equilibrium, and the rate of absorption and emission are equal. The spectral intensity [W/sr/m2/μm] and intensity [W/sr/m2] of blackbody radiation are given by the Planck function "Bλ"("T") and the Stefan–Boltzmann law. These expressions are independent of Einstein coefficients. Absorption and emission often reach equilibrium inside dense, non-transparent materials, so such materials often emit thermal infrared of nearly blackbody intensity. Some of that radiation is internally reflected or scattered at a surface, producing emissivity less than 1. The same phenomena makes the absorptivity of incoming radiation less than 1 and equal to emissivity (Kirchhoff's law).
When radiation has not passed far enough through a homogeneous medium for emission and absorption to reach thermodynamic equilibrium or when the medium changes with distance, Planck's Law and the Stefan-Boltzmann equation do not apply. This is often the case when dealing with atmospheres. If a medium is in Local Thermodynamic Equilibrium (LTE), then Schwarzschild's equation can be used to calculate how radiation changes as it travels through the medium. A medium is in LTE when the fraction of molecules in an excited state is determined by the Boltzmann distribution. LTE exists when collisional excitation and collisional relaxation of any excited state occur much faster than absorption and emission. (LTE does not require the rates of absorption and emission to be equal.) The vibrational and rotational excited states of greenhouse gases that emit thermal infrared radiation are in LTE up to about 60 km. Radiative transfer calculations show negligible change (0.2%) due to absorption and emission above about 50 km. Schwarzschild's equation therefore is appropriate for most problems involving thermal infrared in the Earth's atmosphere. The absorption cross-sections (σλ) used in Schwarzschild's equation arise from Einstein coefficients and processes that broaden absorption lines. In practice, these quantities have been measured in the laboratory; not derived from theory.
When radiation is scattered (the phenomenon that makes the sky appear blue) or when the fraction of molecules in an excited state is not determined by the Boltzmann distribution (and LTE doesn't exist), more complicated equations are required. For example, scattering from clear skies reflects about 32 W/m2 (about 13%) of incoming solar radiation back to space. Visible light is also reflected and scattered by aerosol particles and water droplets (clouds). Neither of these phenomena has a significant impact on the flux of thermal infrared through clear skies.
Schwarzschild's equation can not be used without first specifying the temperature, pressure, and composition of the medium through which radiation is traveling. When these parameters are first measured with a radiosonde, the observed spectrum of the downward flux of thermal infrared (DLR) agrees closely with calculations and varies dramatically with location. Where dI is negative, absorption is greater than emission, and net effect is to locally warm the atmosphere. Where dI is positive, the net effect is "radiative cooling". By repeated approximation, Schwarzschild's equation can be used to calculate the equilibrium temperature change caused by an increase in GHGs, but only in the upper atmosphere where heat transport by convection is unimportant.
Derivation.
Schwarzschild's equation can be derived from Kirchhoff's law of thermal radiation, which states that absorptivity must equal emissivity at a given wavelength. (Like Schwarzschild's equation, Kirchhoff's law only applies to media in LTE.) Given a thin slab of atmosphere of incremental thickness ds, by definition its absorptivity is dIa/I, where I is the incident radiation and dIa is the radiation absorbed by the slab. According to Beer's Law:
formula_1
Also by definition, emissivity is equal to dIe/"Bλ"("T"), where dIe is the radiation emitted by the slab and "Bλ"("T") is the maximum radiation any object in LTE can emit. Setting absorptivity equal to emissivity affords:
formula_2
The total change in radiation, dI, passing through the slab is given by:
formula_3
Schwarzschild's equation has also been derived from Einstein coefficients by assuming a Maxwell–Boltzmann distribution of energy between a ground and excited state (LTE). The oscillator strength for any transition between ground and excited state depends on these coefficients. The absorption cross-section (σλ) is empirically determined from this oscillator strength and the broadening of the absorption/emission line by collisions, the Doppler effect and the uncertainty principle.
Equivalent equations.
Schwarzschild's equation has been expressed in different forms and symbols by different authors. The quantity nσλ is known as the absorption coefficient (βa), a measure of attenuation with units of [cm−1]. The absorption coefficient is fundamentally the product of a quantity of absorbers per unit volume, [cm−3], times an efficiency of absorption (area/absorber, [cm2]). Several sources replace nσλ with kλr, where kλ is the absorption coefficient per unit density and r is the density of the gas. The absorption coefficient for spectral flux (a beam of radiation with a single wavelength, [W/m2/μm]) differs from the absorption coefficient for spectral intensity [W/sr/m2/μm] used in Schwarzschild's equation.
Integration of an absorption coefficient over a path from "s"1 to "s"2 affords the optical thickness (τ) of that path, a dimensionless quantity that is used in some variants of the Schwarzschild equation. When emission is ignored, the incoming radiation is reduced by a factor of 1/"e" when transmitted over a path with an optical thickness of 1.
formula_4
When expressed in terms of optical thickness, Schwarzschild's equation becomes:
formula_5
After integrating between a sensor located at "τ" = 0 and an arbitrary starting point in the medium, τ', the spectral intensity of the radiation reaching the sensor, "Iλ"(0), is:
formula_6
where "I"("τ"') is the spectral intensity of the radiation at the beginning of the path, formula_7 is the transmittance along the path, and the final term is the sum of all of the emission along the path attenuated by absorption along the path yet to be traveled.
Relationship to Planck's and Beer's laws.
Both Beer's Law and Planck's Law can be derived from Schwarzschild's equation. In a sense, they are corollaries of Schwarzschild's equation.
When the spectral intensity of radiation is not changing as it passes through a medium, "dIλ" = 0. In that situation, Schwarzschild's equation simplifies to Planck's law:
formula_8
When "Iλ" > "Bλ"("T"), dI is negative and when "Iλ" < "Bλ"("T"), dI is positive. As a consequence, the intensity of radiation traveling through any medium is always approaching the blackbody intensity given by Planck's law and the local temperature. The rate of approach depends on the density of absorbing/emitting molecules (n) and their absorption cross-section (σλ).
When the intensity of the incoming radiation, Iλ, is much greater than the intensity of blackbody radiation, "Bλ"("T"), the emission term can be neglected. This is usually the case when working with a laboratory spectrophotometer, where the sample is near 300 K and the light source is a filament at several thousand K.
formula_9
If the medium is homogeneous, nσλ doesn't vary with location. Integration over a path of length s affords the form of Beer's Law used most often in the laboratory experiments:
formula_10
Greenhouse effect.
Schwarzschild's equation provides a simple explanation for the existence of the greenhouse effect and demonstrates that it requires a non-zero lapse rate. Rising air in the atmosphere expands and cools as the pressure on it falls, producing a negative temperature gradient in the Earth's troposphere. When radiation travels upward through falling temperature, the incoming radiation, I, (emitted by the warmer surface or by GHGs at lower altitudes) is more intense than that emitted locally by "Bλ"("T"). ["Bλ"("T") − "I"] is generally less than zero throughout the troposphere, and the intensity of outward radiation decreases as it travels upward. According to Schwarzschild's equation, the rate of fall in outward intensity is proportional to the density of GHGs (n) in the atmosphere and their absorption cross-sections (σλ). Any anthropogenic increase in GHGs will slow down the rate of radiative cooling to space, "i.e." produce a radiative forcing until a saturation point is reached.
At steady state, incoming and outgoing radiation at the top of the atmosphere (TOA) must be equal. When the presence of GHGs in the atmosphere causes outward radiation to decrease with altitude, then the surface must be warmer than it would be without GHGs "- assuming nothing else changed". Some scientists quantify the greenhouse effect as the 150 W/m2 difference between the average outward flux of thermal IR from the surface (390 W/m2) and the average outward flux at the TOA.
If the Earth had an isothermal atmosphere, Schwarzschild's equation predicts that there would be no greenhouse effect or no enhancement of the greenhouse effect by rising GHGs. In fact, the troposphere over the Antarctic plateau is nearly isothermal. Both observations and calculations show a slight "negative greenhouse effect" – more radiation emitted from the TOA than the surface. Although records are limited, the central Antarctic Plateau has seen little or no warming.
Saturation.
In the absence of thermal emission, wavelengths that are strongly absorbed by GHGs can be significantly attenuated within 10 m in the lower atmosphere. Those same wavelengths, however, are the ones where emission is also strongest. In an extreme case, roughly 90% of 667.5 cm−1 photons are absorbed within 1 meter by 400 ppm of CO2 at surface density, but they are replaced by emission of an equal number of 667.5 cm−1 photons. The radiation field thereby maintains the blackbody intensity appropriate for the local temperature. At equilibrium, "Iλ" = "Bλ"("T") and therefore "dIλ" = 0 even when the density of the GHG (n) increases.
This has led some to falsely believe that Schwarzschild's equation predicts no radiative forcing at wavelengths where absorption is "saturated". However, such reasoning reflects what some refer to as the "surface budget fallacy". This fallacy involves reaching erroneous conclusions by focusing on energy exchange near the planetary surface rather than at the top of the atmosphere (TOA). At wavelengths where absorption is saturated, increasing the concentration of a greenhouse gas does not change thermal radiation levels at low altitudes, but there are still important differences at high altitudes where the air is thinner.
As density decreases with altitude, even the strongest absorption bands eventually become semi-transparent. Once that happens, radiation can travel far enough that the local emission, "Bλ"("T"), can differ from the absorption of incoming Iλ. The altitude where the transition to semi-transparency occurs is referred to as the "effective emission altitude" or "effective radiating level." Thermal radiation from this altitude is able to escape to space. Consequently, the temperature at this level sets the intensity of outgoing longwave radiation. This altitude varies depending on the particular wavelength involved.
Increasing concentration increases the "effective emission altitude" at which emitted thermal radiation is able to escape to space. The lapse rate (change in temperature with altitude) at the effective radiating level determines how a change in concentration will affect outgoing emissions to space. For most wavelengths, this level is in the troposphere, where temperatures decrease with increasing altitude. This means that increasing concentrations of greenhouse gas lead to decreasing emissions to space (a positive incremental greenhouse effect), creating an energy imbalance that makes the planet warmer than it would be otherwise. Thus, the presence or absence of absorption saturation at low altitudes does not necessarily indicate that absence of radiative forcing in response to increased concentrations.
The radiative forcing from doubling carbon dioxide occurs mostly on the flanks of the strongest absorption band.
Temperature rises with altitude in the lower stratosphere, and increasing CO2 there increases radiative cooling to space and is predicted by some to cause cooling above 14–20 km.
Application to climate science.
Schwarzschild's equation is used to calculate the outward radiative flux from the Earth (measured in W/m2 perpendicular to the surface) at any altitude, especially the "top of the atmosphere" or TOA. This flux originates at the surface ("I"0) for clear skies or cloud tops. dI increments are calculated for layers thin enough to be effectively homogeneous in composition and flux (I). These increments are numerically integrated from the surface to the TOA to give the flux of thermal infrared to space, commonly referred to as outgoing long-wavelength radiation (OLR). OLR is the only mechanism by which the Earth gets rid of the heat delivered continuously by the sun. The net downward radiative flux of thermal IR (DLR) produced by emission from GHGs in the atmosphere is obtained by integrating dI from the TOA (where I0 is zero) to the surface. DLR adds to the energy from the sun. Emission from each layer adds equally to the upward and downward fluxes. In contrast, different amounts of radiation are absorbed, because the upward flux entering any layer is usually greater than the downward flux.
In "line-by-line" methods, the change in spectral intensity (dIλ, W/sr/m2/μm) is numerically integrated using a wavelength increment small enough (less than 1 nm) to accurately describe the shape of each absorption line. The HITRAN database contains the parameters needed to describe 7.4 million absorption lines for 47 GHGs and 120 isotopologues. A variety of programs or radiative transfer codes can be used to process this data, including an online facility, SpectralCalc. To reduce the computational demand, weather forecast and climate models use broad-band methods that handle many lines as a single "band". MODTRAN is a broad-band method available online with a simple interface that anyone can use.
To convert intensity [W/sr/m2] to flux [W/m2], calculations usually invoke the "two-stream" and "plane parallel" approximations. The radiative flux is decomposed into three components, upward (+z), downward (-z), and parallel to the surface. This third component contributes nothing to heating or cooling the planet. formula_11, where formula_12 is the zenith angle (away from vertical). Then the upward and downward intensities are integrated over a forward hemisphere, a process that can be simplified by using a "diffusivity factor" or "average effective zenith angle" of 53°. Alternatively, one can integrate over all possible paths from the entire surface to a sensor positioned a specified height above surface for OLR, or over all possible paths from the TOA to a sensor on the surface for DLR.
References.
|
[
{
"math_id": 0,
"text": "dI _\\lambda = n\\sigma_\\lambda B_\\lambda(T) \\, ds - n\\sigma_\\lambda I_\\lambda \\, ds = n \\sigma_\\lambda[B_\\lambda(T) - I_\\lambda] \\, ds "
},
{
"math_id": 1,
"text": " \\frac{dI_a}{I} = n\\sigma_\\lambda \\, ds "
},
{
"math_id": 2,
"text": "\\begin{align} \n\\frac{dI_e}{B_\\lambda(T)} &= n\\sigma_\\lambda \\, ds \\\\[4pt]\ndI_e &= n\\sigma_\\lambda B_\\lambda(T) \\, ds \n\\end{align}"
},
{
"math_id": 3,
"text": " dI = dI_e - dI_a = n\\sigma_\\lambda B_\\lambda(T) \\, ds - n\\sigma_\\lambda I \\, ds "
},
{
"math_id": 4,
"text": " \\tau = \\int_{s_1}^{s_2} n(s)\\sigma_\\lambda(s)\\,ds "
},
{
"math_id": 5,
"text": " dI _\\lambda = [B _\\lambda(T) - I_\\lambda] \\, d\\tau "
},
{
"math_id": 6,
"text": " I_\\lambda(0) = I_\\lambda(\\tau')e^{-\\tau'} + \\int_0^{\\tau'} B_\\lambda(T)e^{-\\tau}\\,d\\tau "
},
{
"math_id": 7,
"text": "e^{-\\tau'}"
},
{
"math_id": 8,
"text": "\\begin{align}\n0 &= n\\sigma_\\lambda[B _\\lambda(T) - I_\\lambda] \\, ds \\\\[2pt]\nI_\\lambda &= B_\\lambda(T) \n\\end{align}"
},
{
"math_id": 9,
"text": " dI_\\lambda = -n\\sigma I_\\lambda \\, ds "
},
{
"math_id": 10,
"text": " dI_\\lambda = I_\\lambda(0)e^{-n\\sigma_\\lambda s} "
},
{
"math_id": 11,
"text": " ds = dz/\\cos\\theta "
},
{
"math_id": 12,
"text": "\\theta"
}
] |
https://en.wikipedia.org/wiki?curid=58519634
|
58527
|
Finitely generated abelian group
|
Commutative group where every element is the sum of elements from one finite subset
In abstract algebra, an abelian group formula_0 is called finitely generated if there exist finitely many elements formula_1 in formula_2 such that every formula_3 in formula_2 can be written in the form formula_4 for some integers formula_5. In this case, we say that the set formula_6 is a "generating set" of formula_2 or that formula_7 "generate" formula_2. So, finitely generated abelian groups can be thought of as a generalization of cyclic groups.
Every finite abelian group is finitely generated. The finitely generated abelian groups can be completely classified.
Examples.
The group of integers under addition and the finite cyclic groups of integers modulo "n" are finitely generated abelian groups, as is any direct sum of finitely many copies of such groups. There are no other examples (up to isomorphism). In particular, the group formula_11 of rational numbers is not finitely generated: if formula_12 are rational numbers, pick a natural number formula_13 coprime to all the denominators; then formula_14 cannot be generated by formula_12. The group formula_15 of non-zero rational numbers is also not finitely generated. The groups of real numbers under addition formula_16 and non-zero real numbers under multiplication formula_17 are also not finitely generated.
Classification.
The fundamental theorem of finitely generated abelian groups can be stated two ways, generalizing the two forms of the fundamental theorem of "finite" abelian groups. The theorem, in both forms, in turn generalizes to the structure theorem for finitely generated modules over a principal ideal domain, which in turn admits further generalizations.
Primary decomposition.
The primary decomposition formulation states that every finitely generated abelian group "G" is isomorphic to a direct sum of primary cyclic groups and infinite cyclic groups. A primary cyclic group is one whose order is a power of a prime. That is, every finitely generated abelian group is isomorphic to a group of the form
formula_18
where "n" ≥ 0 is the "rank", and the numbers "q"1, ..., "q""t" are powers of (not necessarily distinct) prime numbers. In particular, "G" is finite if and only if "n" = 0. The values of "n", "q"1, ..., "q""t" are (up to rearranging the indices) uniquely determined by "G", that is, there is one and only one way to represent "G" as such a decomposition.
The proof of this statement uses the basis theorem for finite abelian group: every finite abelian group is a direct sum of primary cyclic groups. Denote the torsion subgroup of "G" as "tG". Then, "G/tG" is a torsion-free abelian group and thus it is free abelian. "tG" is a direct summand of "G", which means there exists a subgroup "F" of "G" s.t. formula_19, where formula_20. Then, "F" is also free abelian. Since "tG" is finitely generated and each element of "tG" has finite order, "tG" is finite. By the basis theorem for finite abelian group, "tG" can be written as direct sum of primary cyclic groups.
Invariant factor decomposition.
We can also write any finitely generated abelian group "G" as a direct sum of the form
formula_21
where "k"1 divides "k"2, which divides "k"3 and so on up to "k""u". Again, the rank "n" and the "invariant factors" "k"1, ..., "k""u" are uniquely determined by "G" (here with a unique order). The rank and the sequence of invariant factors determine the group up to isomorphism.
Equivalence.
These statements are equivalent as a result of the Chinese remainder theorem, which implies that formula_22 if and only if "j" and "k" are coprime.
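The equivalence can be made concrete. The following Python sketch (illustrative only; it uses sympy for factoring, and the function names are not standard library API) converts a list of invariant factors into the prime-power orders of the primary decomposition and back.

```python
from collections import defaultdict
from sympy import factorint

def invariant_factors_to_primary(invariant_factors):
    """k_1 | k_2 | ... | k_u  ->  prime-power orders q_1, ..., q_t,
    splitting each Z/k via the Chinese remainder theorem."""
    primary = []
    for k in invariant_factors:
        primary.extend(p ** e for p, e in factorint(k).items())
    return sorted(primary)

def primary_to_invariant_factors(prime_powers):
    """Recombine prime powers: the largest invariant factor is the product of
    the largest power of each prime that occurs, and so on down."""
    by_prime = defaultdict(list)
    for q in prime_powers:
        by_prime[min(factorint(q))].append(q)   # group by the underlying prime
    for powers in by_prime.values():
        powers.sort(reverse=True)
    depth = max(len(v) for v in by_prime.values())
    factors = []
    for i in range(depth):
        k = 1
        for powers in by_prime.values():
            if i < len(powers):
                k *= powers[i]
        factors.append(k)
    return sorted(factors)                       # each divides the next

print(invariant_factors_to_primary([6, 12]))      # [2, 3, 3, 4]
print(primary_to_invariant_factors([2, 3, 3, 4])) # [6, 12]
```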
History.
The history and credit for the fundamental theorem are complicated by the fact that it was proven when group theory was not well-established, and thus early forms, while essentially the modern result and proof, are often stated for a specific case. Briefly, an early form of the finite case was proven by Gauss in 1801, the finite case was proven by Kronecker in 1870, and stated in group-theoretic terms by Frobenius and Stickelberger in 1878. The finitely "presented" case is solved by Smith normal form, and hence frequently credited to Smith, though the finitely "generated" case is sometimes instead credited to Poincaré in 1900; details follow.
Group theorist László Fuchs states:
<templatestyles src="Template:Blockquote/styles.css" />As far as the fundamental theorem on finite abelian groups is concerned, it is not clear how far back in time one needs to go to trace its origin. ... it took a long time to formulate and prove the fundamental theorem in its present form ...
The fundamental theorem for "finite" abelian groups was proven by Leopold Kronecker in 1870, using a group-theoretic proof, though without stating it in group-theoretic terms; a modern presentation of Kronecker's proof is given in , 5.2.2 Kronecker's Theorem, 176–177. This generalized an earlier result of Carl Friedrich Gauss from "Disquisitiones Arithmeticae" (1801), which classified quadratic forms; Kronecker cited this result of Gauss's. The theorem was stated and proved in the language of groups by Ferdinand Georg Frobenius and Ludwig Stickelberger in 1878. Another group-theoretic formulation was given by Kronecker's student Eugen Netto in 1882.
The fundamental theorem for "finitely presented" abelian groups was proven by Henry John Stephen Smith in , as integer matrices correspond to finite presentations of abelian groups (this generalizes to finitely presented modules over a principal ideal domain), and Smith normal form corresponds to classifying finitely presented abelian groups.
The fundamental theorem for "finitely generated" abelian groups was proven by Henri Poincaré in 1900, using a matrix proof (which generalizes to principal ideal domains). This was done in the context of computing the
homology of a complex, specifically the Betti number and torsion coefficients of a dimension of the complex, where the Betti number corresponds to the rank of the free part, and the torsion coefficients correspond to the torsion part.
Kronecker's proof was generalized to "finitely generated" abelian groups by Emmy Noether in 1926.
Corollaries.
Stated differently, the fundamental theorem says that a finitely generated abelian group is the direct sum of a free abelian group of finite rank and a finite abelian group, each of those being unique up to isomorphism. The finite abelian group is just the torsion subgroup of "G". The rank of "G" is defined as the rank of the torsion-free part of "G"; this is just the number "n" in the above formulas.
A corollary to the fundamental theorem is that every finitely generated torsion-free abelian group is free abelian. The finitely generated condition is essential here: formula_23 is torsion-free but not free abelian.
Every subgroup and factor group of a finitely generated abelian group is again finitely generated abelian. The finitely generated abelian groups, together with the group homomorphisms, form an abelian category which is a Serre subcategory of the category of abelian groups.
Non-finitely generated abelian groups.
Note that not every abelian group of finite rank is finitely generated; the rank 1 group formula_23 is one counterexample, and the rank-0 group given by a direct sum of countably infinitely many copies of formula_24 is another one.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "(G,+)"
},
{
"math_id": 1,
"text": "x_1,\\dots,x_s"
},
{
"math_id": 2,
"text": "G"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "x = n_1x_1 + n_2x_2 + \\cdots + n_sx_s"
},
{
"math_id": 5,
"text": "n_1,\\dots, n_s"
},
{
"math_id": 6,
"text": "\\{x_1,\\dots, x_s\\}"
},
{
"math_id": 7,
"text": "x_1,\\dots, x_s"
},
{
"math_id": 8,
"text": "\\left(\\mathbb{Z},+\\right)"
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": "\\left(\\mathbb{Z}/n\\mathbb{Z},+\\right)"
},
{
"math_id": 11,
"text": "\\left(\\mathbb{Q},+\\right)"
},
{
"math_id": 12,
"text": "x_1,\\ldots,x_n"
},
{
"math_id": 13,
"text": "k"
},
{
"math_id": 14,
"text": "1/k"
},
{
"math_id": 15,
"text": "\\left(\\mathbb{Q}^*,\\cdot\\right)"
},
{
"math_id": 16,
"text": " \\left(\\mathbb{R},+\\right)"
},
{
"math_id": 17,
"text": "\\left(\\mathbb{R}^*,\\cdot\\right)"
},
{
"math_id": 18,
"text": "\\mathbb{Z}^n \\oplus \\mathbb{Z}/q_1\\mathbb{Z} \\oplus \\cdots \\oplus \\mathbb{Z}/q_t\\mathbb{Z},"
},
{
"math_id": 19,
"text": "G=tG\\oplus F"
},
{
"math_id": 20,
"text": "F\\cong G/tG"
},
{
"math_id": 21,
"text": "\\mathbb{Z}^n \\oplus \\mathbb{Z}/{k_1}\\mathbb{Z} \\oplus \\cdots \\oplus \\mathbb{Z}/{k_u}\\mathbb{Z},"
},
{
"math_id": 22,
"text": "\\mathbb{Z}_{jk}\\cong \\mathbb{Z}_{j} \\oplus \\mathbb{Z}_{k}"
},
{
"math_id": 23,
"text": "\\mathbb{Q}"
},
{
"math_id": 24,
"text": "\\mathbb{Z}_{2}"
}
] |
https://en.wikipedia.org/wiki?curid=58527
|
585271
|
Perfect group
|
Mathematical group with trivial abelianization
In mathematics, more specifically in group theory, a group is said to be perfect if it equals its own commutator subgroup, or equivalently, if the group has no non-trivial abelian quotients (equivalently, its abelianization, which is the universal abelian quotient, is trivial). In symbols, a perfect group is one such that "G"(1) = "G" (the commutator subgroup equals the group), or equivalently one such that "G"ab = {1} (its abelianization is trivial).
Examples.
The smallest (non-trivial) perfect group is the alternating group "A"5. More generally, any non-abelian simple group is perfect since the commutator subgroup is a normal subgroup with abelian quotient. Conversely, a perfect group need not be simple; for example, the special linear group over the field with 5 elements, SL(2,5) (or the binary icosahedral group, which is isomorphic to it) is perfect but not simple (it has a non-trivial center containing formula_0).
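Perfectness of a small group can be checked by brute force. The following Python sketch (illustrative only) builds "A"5 as the even permutations of five points, forms all commutators, and verifies that they generate the whole group.

```python
from itertools import permutations

def parity(p):
    """0 if the permutation (given as a tuple) is even, 1 if odd."""
    seen, transpositions = set(), 0
    for start in range(len(p)):
        j, length = start, 0
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        transpositions += max(length - 1, 0)
    return transpositions % 2

def compose(p, q):                       # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

A5 = {p for p in permutations(range(5)) if parity(p) == 0}
commutators = {compose(compose(a, b), compose(inverse(a), inverse(b)))
               for a in A5 for b in A5}

# Close the set of commutators under composition to get the commutator subgroup.
subgroup, frontier = set(commutators), set(commutators)
while frontier:
    new = {compose(x, y) for x in frontier for y in commutators} - subgroup
    subgroup |= new
    frontier = new

print(len(A5), len(subgroup), subgroup == A5)  # 60 60 True
```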
The direct product of any two simple non-abelian groups is perfect but not simple; the commutator of two elements is [("a","b"),("c","d")] = (["a","c"],["b","d"]). Since commutators in each simple group form a generating set, pairs of commutators form a generating set of the direct product.
The fundamental group of formula_1 is a perfect group of order 120.
More generally, a quasisimple group (a perfect central extension of a simple group) that is a non-trivial extension (and therefore not a simple group itself) is perfect but not simple; this includes all the insoluble non-simple finite special linear groups SL("n","q") as extensions of the projective special linear group PSL("n","q") (SL(2,5) is an extension of PSL(2,5), which is isomorphic to "A"5). Similarly, the special linear group over the real and complex numbers is perfect, but the general linear group GL is never perfect (except when trivial or over formula_2, where it equals the special linear group), as the determinant gives a non-trivial abelianization and indeed the commutator subgroup is SL.
A non-trivial perfect group, however, is necessarily not solvable, and 4 divides its order (if finite); moreover, if 8 does not divide the order, then 3 does.
Every acyclic group is perfect, but the converse is not true: "A"5 is perfect but not acyclic (in fact, not even superperfect). In fact, for formula_3 the alternating group formula_4 is perfect but not superperfect, with formula_5 for formula_6.
Any quotient of a perfect group is perfect. A non-trivial finite perfect group that is not simple must then be an extension of at least one smaller simple non-abelian group. But it can be the extension of more than one simple group. In fact, the direct product of perfect groups is also perfect.
Every perfect group "G" determines another perfect group "E" (its universal central extension) together with a surjection "f": "E" → "G" whose kernel is in the center of "E,"
such that "f" is universal with this property. The kernel of "f" is called the Schur multiplier of "G" because it was first studied by Issai Schur in 1904; it is isomorphic to the homology group formula_7.
In the plus construction of algebraic K-theory, if we consider the group formula_8 for a commutative ring formula_9, then the subgroup of elementary matrices formula_10 forms a perfect subgroup.
Ore's conjecture.
As the commutator subgroup is "generated" by commutators, a perfect group may contain elements that are products of commutators but not themselves commutators. Øystein Ore proved in 1951 that every element of an alternating group on five or more elements is a commutator, and conjectured that the same holds for all the finite non-abelian simple groups. Ore's conjecture was finally proven in 2008. The proof relies on the classification of finite simple groups.
Grün's lemma.
A basic fact about perfect groups is Grün's lemma, due to Otto Grün: the quotient of a perfect group by its center is centerless (has trivial center).
Proof: If "G" is a perfect group, let "Z"1 and "Z"2 denote the first two terms of the upper central series of "G" (i.e., "Z"1 is the center of "G", and "Z"2/"Z"1 is the center of "G"/"Z"1). If "H" and "K" are subgroups of "G", denote the commutator of "H" and "K" by ["H", "K"] and note that ["Z"1, "G"] = 1 and ["Z"2, "G"] ⊆ "Z"1, and consequently (the convention that ["X", "Y", "Z"] = [["X", "Y"], "Z"] is followed):
formula_11
formula_12
By the three subgroups lemma (or equivalently, by the Hall-Witt identity), it follows that ["G", "Z"2] = [["G", "G"], "Z"2] = ["G", "G", "Z"2] = {1}. Therefore, "Z"2 ⊆ "Z"1 = "Z"("G"), and the center of the quotient group "G" / "Z"("G") is the trivial group.
As a consequence, all higher centers (that is, higher terms in the upper central series) of a perfect group equal the center.
Group homology.
In terms of group homology, a perfect group is precisely one whose first homology group vanishes: "H"1("G", Z) = 0, as the first homology group of a group is exactly the abelianization of the group, and perfect means trivial abelianization. An advantage of this definition is that it admits strengthening: a superperfect group is one whose first two homology groups vanish (formula_13), while an acyclic group is one all of whose reduced homology groups vanish (formula_14); this is equivalent to all homology groups other than formula_15 vanishing.
Quasi-perfect group.
Especially in the field of algebraic K-theory, a group is said to be quasi-perfect if its commutator subgroup is perfect; in symbols, a quasi-perfect group is one such that "G"(1) = "G"(2) (the commutator of the commutator subgroup is the commutator subgroup), while a perfect group is one such that "G"(1) = "G" (the commutator subgroup is the whole group).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\left(\\begin{smallmatrix}-1 & 0 \\\\ 0 & -1\\end{smallmatrix}\\right) = \\left(\\begin{smallmatrix}4 & 0 \\\\ 0 & 4\\end{smallmatrix}\\right)"
},
{
"math_id": 1,
"text": "SO(3)/I_{60}"
},
{
"math_id": 2,
"text": "\\mathbb{F}_2"
},
{
"math_id": 3,
"text": "n\\ge 5"
},
{
"math_id": 4,
"text": "A_n"
},
{
"math_id": 5,
"text": "H_2(A_n,\\Z) = \\Z/2"
},
{
"math_id": 6,
"text": "n \\ge 8"
},
{
"math_id": 7,
"text": "H_2(G)"
},
{
"math_id": 8,
"text": "\\operatorname{GL}(A) = \\text{colim} \\operatorname{GL}_n(A)"
},
{
"math_id": 9,
"text": "A"
},
{
"math_id": 10,
"text": "E(R)"
},
{
"math_id": 11,
"text": "[Z_2,G,G]=[[Z_2,G],G]\\subseteq [Z_1,G]=1"
},
{
"math_id": 12,
"text": "[G,Z_2,G]=[[G,Z_2],G]=[[Z_2,G],G]\\subseteq [Z_1,G]=1."
},
{
"math_id": 13,
"text": "H_1(G,\\Z)=H_2(G,\\Z)=0"
},
{
"math_id": 14,
"text": "\\tilde H_i(G;\\Z) = 0."
},
{
"math_id": 15,
"text": "H_0"
}
] |
https://en.wikipedia.org/wiki?curid=585271
|
5852751
|
Grothendieck connection
|
In algebraic geometry and synthetic differential geometry, a Grothendieck connection is a way of viewing connections in terms of descent data from infinitesimal neighbourhoods of the diagonal.
Introduction and motivation.
The Grothendieck connection is a generalization of the Gauss–Manin connection constructed in a manner analogous to that in which the Ehresmann connection generalizes the Koszul connection. The construction itself must satisfy a requirement of "geometric invariance", which may be regarded as the analog of covariance for a wider class of structures including the schemes of algebraic geometry. Thus the connection in a certain sense must live in a natural sheaf on a Grothendieck topology. In this section, we discuss how to describe an Ehresmann connection in sheaf-theoretic terms as a Grothendieck connection.
Let formula_0 be a manifold and formula_1 a surjective submersion, so that formula_2 is a manifold fibred over formula_3 Let formula_4 be the first-order jet bundle of sections of formula_5 This may be regarded as a bundle over formula_0 or a bundle over the total space of formula_5 With the latter interpretation, an Ehresmann connection is a section of the bundle (over formula_2) formula_6 The problem is thus to obtain an intrinsic description of the sheaf of sections of this vector bundle.
Grothendieck's solution is to consider the diagonal embedding formula_7 The sheaf formula_8 of ideals of formula_9 in formula_10 consists of functions on formula_10 which vanish along the diagonal. Much of the infinitesimal geometry of formula_0 can be realized in terms of formula_11 For instance, formula_12 is the sheaf of sections of the cotangent bundle. One may define a "first-order infinitesimal neighborhood" formula_13 of formula_9 in formula_10 to be the subscheme corresponding to the sheaf of ideals formula_14 (See below for a coordinate description.)
There are a pair of projections formula_15 given by projection onto the respective factors of the Cartesian product, which restrict to give projections formula_16 One may now form the pullback of the fibre space formula_2 along one or the other of formula_17 or formula_18 In general, there is no canonical way to identify formula_19 and formula_20 with each other. A Grothendieck connection is a specified isomorphism between these two spaces. One may proceed to define curvature and p-curvature of a connection in the same language.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "\\pi : E \\to M"
},
{
"math_id": 2,
"text": "E"
},
{
"math_id": 3,
"text": "M."
},
{
"math_id": 4,
"text": "J^1(M, E)"
},
{
"math_id": 5,
"text": "E."
},
{
"math_id": 6,
"text": "J^1(M, E) \\to E."
},
{
"math_id": 7,
"text": "\\Delta : M \\to M \\times M."
},
{
"math_id": 8,
"text": "I"
},
{
"math_id": 9,
"text": "\\Delta"
},
{
"math_id": 10,
"text": "M \\times M"
},
{
"math_id": 11,
"text": "I."
},
{
"math_id": 12,
"text": "\\Delta^*\\left(I, I^2\\right)"
},
{
"math_id": 13,
"text": "M^{(2)}"
},
{
"math_id": 14,
"text": "I^2."
},
{
"math_id": 15,
"text": "p_1, p_2 : M \\times M \\to M"
},
{
"math_id": 16,
"text": "p_1, p_2 : M^{(2)} \\to M."
},
{
"math_id": 17,
"text": "p_1"
},
{
"math_id": 18,
"text": "p_2."
},
{
"math_id": 19,
"text": "p_1^* E"
},
{
"math_id": 20,
"text": "p_2^* E"
}
] |
https://en.wikipedia.org/wiki?curid=5852751
|
58528658
|
Preventable fraction for the population
|
Measure used in epidemiology
In epidemiology, preventable fraction for the population (PF), is the proportion of incidents in the population that could be prevented by exposing the whole population. It is calculated as formula_0, where formula_1 is the incidence in the exposed group, formula_2 is the incidence in the population.
It is used when an exposure reduces the risk, as opposed to increasing it, in which case its symmetrical notion is attributable fraction for the population.
|
[
{
"math_id": 0,
"text": "PF_p = (I_p - I_e)/I_p"
},
{
"math_id": 1,
"text": "I_e"
},
{
"math_id": 2,
"text": "I_p"
}
] |
https://en.wikipedia.org/wiki?curid=58528658
|
58536963
|
Bartels–Stewart algorithm
|
Algorithm in numerical linear algebra
In numerical linear algebra, the Bartels–Stewart algorithm is used to numerically solve the Sylvester matrix equation formula_0. Developed by R.H. Bartels and G.W. Stewart in 1971, it was the first numerically stable method that could be systematically applied to solve such equations. The algorithm works by using the real Schur decompositions of formula_1 and formula_2 to transform formula_0 into a triangular system that can then be solved using forward or backward substitution. In 1979, G. Golub, C. Van Loan and S. Nash introduced an improved version of the algorithm, known as the Hessenberg–Schur algorithm. It remains a standard approach for solving Sylvester equations when formula_3 is of small to moderate size.
The algorithm.
Let formula_4, and assume that the eigenvalues of formula_1 are distinct from the eigenvalues of formula_2. Then, the matrix equation formula_0 has a unique solution. The Bartels–Stewart algorithm computes formula_3 by applying the following steps:
1. Compute the real Schur decompositions
formula_5
formula_6
The matrices formula_7 and formula_8 are block-upper triangular matrices, with diagonal blocks of size formula_9 or formula_10.
2. Set formula_11
3. Solve the simplified system formula_12, where formula_13. This can be done using forward substitution on the blocks. Specifically, if formula_14, then
formula_15
where formula_16 is the formula_17th column of formula_18. When formula_19, columns formula_20 should be concatenated and solved for simultaneously.
4. Set formula_21
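The following NumPy/SciPy sketch follows these four steps, using the complex Schur form so that all diagonal blocks are 1×1 and the 2×2-block case does not arise; the columns of formula_18 are found one at a time, last column first, per the recursion above. It is an illustration, not an optimized implementation. (For production use, SciPy's scipy.linalg.solve_sylvester addresses the same class of equations, with the sign convention AX + XB = Q.)

```python
import numpy as np
from scipy.linalg import schur

def bartels_stewart(A, B, C):
    """Solve AX - XB = C following steps 1-4, with the complex Schur form."""
    m, n = C.shape
    R, U = schur(A, output='complex')           # step 1: A = U R U^H
    S, V = schur(B.conj().T, output='complex')  #         B^T = V S V^H
    F = U.conj().T @ C @ V                      # step 2
    Y = np.zeros((m, n), dtype=complex)         # step 3: solve R Y - Y S^T = F
    for k in range(n - 1, -1, -1):              # last column first
        rhs = F[:, k] + Y[:, k + 1:] @ S[k, k + 1:]
        Y[:, k] = np.linalg.solve(R - S[k, k] * np.eye(m), rhs)
    X = U @ Y @ V.conj().T                      # step 4
    if np.isrealobj(A) and np.isrealobj(B) and np.isrealobj(C):
        X = X.real                              # real data gives a real solution
    return X

rng = np.random.default_rng(0)
A, B = rng.standard_normal((5, 5)), rng.standard_normal((3, 3))
C = rng.standard_normal((5, 3))
X = bartels_stewart(A, B, C)
print(np.linalg.norm(A @ X - X @ B - C))  # residual, near machine precision
```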
Computational cost.
Using the QR algorithm, the real Schur decompositions in step 1 require approximately formula_22 flops, so that the overall computational cost is formula_23.
Simplifications and special cases.
In the special case where formula_24 and formula_25 is symmetric, the solution formula_3 will also be symmetric. This symmetry can be exploited so that formula_18 is found more efficiently in step 3 of the algorithm.
The Hessenberg–Schur algorithm.
The Hessenberg–Schur algorithm replaces the decomposition formula_26 in step 1 with the decomposition formula_27, where formula_28 is an upper-Hessenberg matrix. This leads to a system of the form formula_29 that can be solved using forward substitution. The advantage of this approach is that formula_27 can be found using Householder reflections at a cost of formula_30 flops, compared to the formula_31 flops required to compute the real Schur decomposition of formula_1.
Software and implementation.
The subroutines required for the Hessenberg-Schur variant of the Bartels–Stewart algorithm are implemented in the SLICOT library. These are used in the MATLAB control system toolbox.
Alternative approaches.
For large systems, the formula_32 cost of the Bartels–Stewart algorithm can be prohibitive. When formula_1 and formula_2 are sparse or structured, so that linear solves and matrix vector multiplies involving them are efficient, iterative algorithms can potentially perform better. These include projection-based methods, which use Krylov subspace iterations, methods based on the alternating direction implicit (ADI) iteration, and hybridizations that involve both projection and ADI. Iterative methods can also be used to directly construct low rank approximations to formula_3 when solving formula_33.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " AX - XB = C"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "X, C \\in \\mathbb{R}^{m \\times n}"
},
{
"math_id": 5,
"text": "R = U^TAU,"
},
{
"math_id": 6,
"text": "S = V^TB^TV."
},
{
"math_id": 7,
"text": "R"
},
{
"math_id": 8,
"text": "S"
},
{
"math_id": 9,
"text": "1 \\times 1"
},
{
"math_id": 10,
"text": "2 \\times 2"
},
{
"math_id": 11,
"text": "F = U^TCV."
},
{
"math_id": 12,
"text": "RY - YS^T = F"
},
{
"math_id": 13,
"text": "Y = U^TXV"
},
{
"math_id": 14,
"text": "s_{k-1, k} = 0"
},
{
"math_id": 15,
"text": "(R - s_{kk}I)y_k = f_{k} + \\sum_{j = k+1}^n s_{kj}y_j,"
},
{
"math_id": 16,
"text": "y_k"
},
{
"math_id": 17,
"text": "k"
},
{
"math_id": 18,
"text": "Y"
},
{
"math_id": 19,
"text": "s_{k-1, k} \\neq 0"
},
{
"math_id": 20,
"text": "[ y_{k-1} \\mid y_{k}]"
},
{
"math_id": 21,
"text": "X = UYV^T."
},
{
"math_id": 22,
"text": "10(m^3 + n^3)"
},
{
"math_id": 23,
"text": "10(m^3 + n^3) + 2.5(mn^2 + nm^2)"
},
{
"math_id": 24,
"text": "B=-A^T"
},
{
"math_id": 25,
"text": "C"
},
{
"math_id": 26,
"text": "R = U^TAU"
},
{
"math_id": 27,
"text": "H = Q^TAQ"
},
{
"math_id": 28,
"text": "H"
},
{
"math_id": 29,
"text": " HY - YS^T = F"
},
{
"math_id": 30,
"text": "(5/3)m^3"
},
{
"math_id": 31,
"text": "10m^3"
},
{
"math_id": 32,
"text": "\\mathcal{O}(m^3 + n^3)"
},
{
"math_id": 33,
"text": "AX-XB = C"
}
] |
https://en.wikipedia.org/wiki?curid=58536963
|
585388
|
Homology sphere
|
Topological manifold whose homology coincides with that of a sphere
In algebraic topology, a homology sphere is an "n"-manifold "X" having the homology groups of an "n"-sphere, for some integer formula_0. That is,
formula_1
and
formula_2 for all other "i".
Therefore "X" is a connected space, with one non-zero higher Betti number, namely, formula_3. It does not follow that "X" is simply connected, only that its fundamental group is perfect (see Hurewicz theorem).
A rational homology sphere is defined similarly but using homology with rational coefficients.
Poincaré homology sphere.
The Poincaré homology sphere (also known as Poincaré dodecahedral space) is a particular example of a homology sphere, first constructed by Henri Poincaré. Being a spherical 3-manifold, it is the only homology 3-sphere (besides the 3-sphere itself) with a finite fundamental group. Its fundamental group is known as the binary icosahedral group and has order 120. Since the fundamental group of the 3-sphere is trivial, this shows that there exist 3-manifolds with the same homology groups as the 3-sphere that are not homeomorphic to it.
Construction.
A simple construction of this space begins with a dodecahedron. Each face of the dodecahedron is identified with its opposite face, using the minimal clockwise twist to line up the faces. Gluing each pair of opposite faces together using this identification yields a closed 3-manifold. (See Seifert–Weber space for a similar construction, using more "twist", that results in a hyperbolic 3-manifold.)
Alternatively, the Poincaré homology sphere can be constructed as the quotient space SO(3)/I where I is the icosahedral group (i.e., the rotational symmetry group of the regular icosahedron and dodecahedron, isomorphic to the alternating group A5). More intuitively, this means that the Poincaré homology sphere is the space of all geometrically distinguishable positions of an icosahedron (with fixed center and diameter) in Euclidean 3-space. One can also pass instead to the universal cover of SO(3) which can be realized as the group of unit quaternions and is homeomorphic to the 3-sphere. In this case, the Poincaré homology sphere is isomorphic to formula_4 where formula_5 is the binary icosahedral group, the perfect double cover of I embedded in formula_6.
Another approach is by Dehn surgery. The Poincaré homology sphere results from +1 surgery on the right-handed trefoil knot.
Cosmology.
In 2003, lack of structure on the largest scales (above 60 degrees) in the cosmic microwave background as observed for one year by the WMAP spacecraft led to the suggestion, by Jean-Pierre Luminet of the Observatoire de Paris and colleagues, that the shape of the universe is a Poincaré sphere. In 2008, astronomers found the best orientation on the sky for the model and confirmed some of the predictions of the model, using three years of observations by the WMAP spacecraft.
Data analysis from the Planck spacecraft suggests that there is no observable non-trivial topology to the universe.
Constructions and examples.
Suppose that formula_7 are integers, all at least 2, such that any two are coprime. Then the Seifert fiber space
formula_8
over the sphere with exceptional fibers of degrees "a"1, ..., "a""r" is a homology sphere, where the "b"'s are chosen so that
formula_9
(There is always a way to choose the "b"′s, and the homology sphere does not depend (up to isomorphism) on the choice of "b"′s.) If "r" is at most 2 this is just the usual 3-sphere; otherwise they are distinct non-trivial homology spheres. If the "a"′s are 2, 3, and 5 this gives the Poincaré sphere. If there are at least 3 "a"′s, not 2, 3, 5, then this is an aspherical homology 3-sphere with infinite fundamental group that has a Thurston geometry modeled on the universal cover of SL2(R).
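As an illustration of the existence of suitable "b"′s, the following Python sketch does a small brute-force search for coefficients satisfying the displayed condition; for ("a"1, "a"2, "a"3) = (2, 3, 5) it finds a valid choice. The search bound and function name are arbitrary.

```python
from fractions import Fraction
from itertools import product

def find_seifert_coefficients(a, bound=3):
    """Brute-force search for integers b, b_1, ..., b_r (all in [-bound, bound])
    with b + sum(b_i / a_i) == 1 / (a_1 * ... * a_r)."""
    target = Fraction(1, 1)
    for ai in a:
        target /= ai
    rng = range(-bound, bound + 1)
    for b, *bs in product(rng, repeat=len(a) + 1):
        if b + sum(Fraction(bi, ai) for bi, ai in zip(bs, a)) == target:
            return b, tuple(bs)
    return None

print(find_seifert_coefficients((2, 3, 5)))  # one valid choice for the Poincaré sphere
```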
Applications.
If "A" is a homology 3-sphere not homeomorphic to the standard 3-sphere, then the suspension of "A" is an example of a 4-dimensional homology manifold that is not a topological manifold. The double suspension of "A" is homeomorphic to the standard 5-sphere, but its triangulation (induced by some triangulation of "A") is not a PL manifold. In other words, this gives an example of a finite simplicial complex that is a topological manifold but not a PL manifold. (It is not a PL manifold because the link of a point is not always a 4-sphere.)
Galewski and Stern showed that all compact topological manifolds (without boundary) of dimension at least 5 are homeomorphic to simplicial complexes if and only if there is a homology 3 sphere Σ with Rokhlin invariant 1 such that the connected sum Σ#Σ of Σ with itself bounds a smooth acyclic 4-manifold. Ciprian Manolescu showed that there is no such homology sphere with the given property, and therefore, there are 5-manifolds not homeomorphic to simplicial complexes. In particular, the example originally given by Galewski and Stern is not triangulable.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n\\ge 1"
},
{
"math_id": 1,
"text": "H_0(X,\\Z) = H_n(X,\\Z) = \\Z"
},
{
"math_id": 2,
"text": "H_i(X,\\Z) = \\{0\\}"
},
{
"math_id": 3,
"text": "b_n=1"
},
{
"math_id": 4,
"text": "S^3/\\widetilde{I}"
},
{
"math_id": 5,
"text": "\\widetilde{I}"
},
{
"math_id": 6,
"text": "S^3"
},
{
"math_id": 7,
"text": "a_1, \\ldots, a_r"
},
{
"math_id": 8,
"text": "\\{b, (o_1,0);(a_1,b_1),\\dots,(a_r,b_r)\\}\\,"
},
{
"math_id": 9,
"text": "b+b_1/a_1+\\cdots+b_r/a_r=1/(a_1\\cdots a_r)."
},
{
"math_id": 10,
"text": "\\Z/2\\Z"
}
] |
https://en.wikipedia.org/wiki?curid=585388
|
58540359
|
List of unsolved problems in fair division
|
This page lists notable open problems related to fair division - a field in the intersection of mathematics, computer science, political science and economics.
Open problems in fair cake-cutting.
Query complexity of envy-free cake-cutting.
In the problem of envy-free cake-cutting, there is a cake modeled as an interval, and "formula_0" agents with different value measures over the cake. The value measures are accessible only via queries of the form "evaluate a given piece of cake" or "mark a piece of cake with a given value". With "formula_1" agents, an envy-free division can be found using two queries, via divide and choose. With "formula_2" agents, there are several open problems regarding the number of required queries.
1. First, assume that the entire cake must be allocated (i.e., there is "no disposal"), and pieces may be disconnected. "How many queries are required?"
2. Next, assume that some cake may be left unallocated (i.e., there is "free disposal"), but the allocation must be proportional (in addition to envy-free): each agent must get at least "formula_5" of the total cake value. Pieces may still be disconnected. "How many queries are required?"
3. Next, assume there is free disposal, the allocation must still be proportional, but the pieces must be "connected". "How many queries are required?"
4. Next, assume there is free disposal, the pieces must be connected, but the allocation may be only approximately proportional (i.e., some agents may get less than "formula_5" of the total cake value). "What value can be guaranteed to each agent using a finite envy-free protocol?"
5. Finally, assume the entire cake must be allocated, and pieces may be disconnected, but the number of cuts (or number of pieces per agent) should be as small as possible. "How many cuts do we need in order to find an envy-free allocation in a finite number of queries"?
Number of cuts for cake-cutting with different entitlements.
When all agents have equal entitlements, a proportional cake-cutting can be implemented using formula_12 cuts, which is optimal.
"How many cuts are required for implementing a proportional cake-cutting among formula_0 agents with different entitlements?"
"How many cuts are required for implementing an envy-free cake-cutting among formula_0 agents with different entitlements?"
Fair division of a partly burnt cake.
A "partly burnt cake" is a metaphor to a cake in which agents may have both positive and negative valuations.
A proportional division of such a cake always exists.
"What is the runtime complexity of calculating a connected-proportional allocation of partly burnt cake?"
Known cases:
An envy-free division of a partly burnt cake is guaranteed to exist if-and-only-if the number of agents is a power of a prime integer. However, it cannot be found by a finite protocol - it can only be approximated. Given a small positive number "D", an allocation is called "D"-envy-free if, for each agent, the allocation will become envy-free if we move the cuts by at most "D" (length units).
"What is the runtime complexity (as a function of D) of calculating a connected D-envy-free allocation of a partly burnt cake?"
Truthful cake-cutting.
"Truthful cake-cutting" is the design of truthful mechanisms for fair cake-cutting. The currently known algorithms and impossibility results are shown here. The main cases in which it is unknown whether a deterministic truthful fair mechanism exists are:
Open problems in fair allocation of indivisible items.
Approximate maximin-share fairness.
The "1-of-formula_0 maximin share (MMS)" of an agent is the largest utility the agent can secure by partitioning the items into formula_0 bundles and receiving the worst bundle. For two agents, divide and choose gives each agent at least his/her 1-of-2 MMS. For formula_2 agents, it is almost always, but not always, possible to give each agent his/her 1-of-formula_0 MMS. This raises several kinds of questions.
1. Computational complexity.
"What is the runtime complexity of deciding whether a given instance admits a 1-of-formula_0 MMS allocation?"
NOTE: the following related problems are known to be computationally hard:
"What is the largest fraction r such that there always exists an allocation giving each agent a utility of at least r times his 1-of-formula_0 maximin share?"
2. Cardinal approximation.
Known cases:
"What is the smallest integer formula_26 (as a function of formula_0) such that there always exists an allocation among formula_0 agents giving each agent at least his 1-of-formula_26 MMS?"
3. Ordinal approximation.
Known cases:
So the answer can be anything between formula_33 and formula_13, inclusive. Smallest open case:
"For formula_9 agents with additive valuations, does there always exist a 1-of-5 maximin-share allocation?"
"Note:" there always exists an Approximate Competitive Equilibrium from Equal Incomes that guarantees the "1-of-(formula_33") maximin-share. However, this allocation may have excess supply, and - more importantly - excess demand: the sum of the bundles allocated to all agents might be slightly larger than the set of all items. Such an error is reasonable when allocating course seats among students, since a small excess supply can be corrected by adding a small number of seats. But the classic fair division problem assumes that items may not be added.
Envy-free up to one item.
An allocation is called EF1 (envy-free up to one item) if, for any two agents formula_34 and formula_35, there exists a subset of size at most one contained in the bundle of formula_35, such that if we remove that subset from formula_35's bundle then formula_34 does not envy. An EF1 allocation always exists and can be found by the envy cycles algorithm. Combining it with other properties raises some open questions.
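The following Python sketch checks the EF1 condition for additive valuations, using the equivalent test that removing the highest-valued item from the envied bundle removes the envy (the example valuations are made up).

```python
def is_ef1(valuations, allocation):
    """EF1 check for additive valuations.
    valuations[i][g] is agent i's value for item g; allocation[i] is the list
    of items given to agent i."""
    n = len(allocation)
    for i in range(n):
        own = sum(valuations[i][g] for g in allocation[i])
        for j in range(n):
            if i == j:
                continue
            other = sum(valuations[i][g] for g in allocation[j])
            best_removal = max((valuations[i][g] for g in allocation[j]), default=0)
            if own < other - best_removal:
                return False  # envy survives even after removing the best item
    return True

# Two agents, three items: items 0 and 2 to agent 0, item 1 to agent 1.
vals = [[5, 4, 1], [3, 6, 2]]
print(is_ef1(vals, [[0, 2], [1]]))  # True
```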
Pareto-optimal EF1 allocation (goods and bads).
When all items are good and all valuations are additive, a PO+EF1 always exists: the allocation maximizing the product of utilities is PO+EF1. Finding this maximizing allocation is NP-hard, but in theory, it may be possible to find other PO+EF1 allocations (not maximizing the product).
"What is the run-time complexity of finding a PO+EF1 allocation of goods?"
A PO+EF1 allocation of "bads (chores)" is not known to exist, even when all valuations are additive.
"Does a PO+EF1 allocation of bads always exist?"
"What is the run-time complexity of finding" "such allocation, if it exists?"
Contiguous EF1 allocation.
Often the goods are ordered on a line, for example, houses in a street. Each agent wants to get a contiguous block.
"For three or more agents with additive valuations, does an EF1 allocation always exist?"
Known cases:
Even when a contiguous EF1 allocation exists, the runtime complexity of finding it is unclear:
"For three or more agents with binary additive valuations, what is the complexity of finding a contiguous EF1 allocation?"
Price of fairness.
The price of fairness is the ratio between the maximum social welfare (sum of utilities) in any allocation, and the maximum social welfare in a fair allocation.
"What is the price of EF1 fairness?"
Existence of EFx allocation.
An allocation is called EFx ("envy-free up to any good") if, for any two agents formula_34 and formula_35, and for any good in the bundle of formula_35, if we remove that good from formula_35's bundle then formula_34 does not envy.
"For three agents with additive valuations, does there always exist an EFx allocation?"
"For formula_0 agents with general monotone valuations, can we prove that there does not exist an EFx allocation?"
Known cases:
Existence of Pareto-efficient PROPx allocation of bads.
An allocation of bads is called PROPx (aka FSx) if, for any agent "formula_34", and for any bad owned by "formula_34", if we remove that bad from formula_34's bundle, then formula_34's disutility is less than formula_5 of the total disutility.
"Does there always exist an allocation of bads that is both PROPx and Pareto-efficient?"
Related known cases:
Competitive equilibrium for almost all incomes.
Competitive equilibrium (CE) is a very strong fairness notion - it implies Pareto-optimality and envy-freeness. When the incomes are equal, CE might not exist even when there are two agents and one good. But, in all other income-space, CE exists when there are two agents and one good. In other words, there is a competitive equilibrium for almost all income-vectors.
"For two agents with additive valuations over any number of goods, does there exist a competitive equilibrium for almost incomes?"
Known cases:
Open conjectures:
Fair division of partly divisible items.
Runtime complexity of fair allocation with bounded sharing.
"Given formula_38 agents, formula_39 items and an integer formula_40, suppose at most formula_40 items can be shared among two or more agents. What is the runtime complexity of deciding whether a proportional / envy-free allocation exists?"
Known cases:
Smallest open cases:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "n=2"
},
{
"math_id": 2,
"text": "n>2"
},
{
"math_id": 3,
"text": "\\Omega(n^2)"
},
{
"math_id": 4,
"text": "O\\left(n^{n^{n^{n^{n^{n}}}}}\\right)"
},
{
"math_id": 5,
"text": "1/n"
},
{
"math_id": 6,
"text": "(n!)^2 n^2"
},
{
"math_id": 7,
"text": "n=3"
},
{
"math_id": 8,
"text": "n\\geq 4"
},
{
"math_id": 9,
"text": "n=4"
},
{
"math_id": 10,
"text": "1/(3n)"
},
{
"math_id": 11,
"text": "n\\geq 3"
},
{
"math_id": 12,
"text": "n-1"
},
{
"math_id": 13,
"text": "2n-2"
},
{
"math_id": 14,
"text": "3n-4"
},
{
"math_id": 15,
"text": "1/7"
},
{
"math_id": 16,
"text": "2/7"
},
{
"math_id": 17,
"text": "4/7"
},
{
"math_id": 18,
"text": "O(n\\log n)"
},
{
"math_id": 19,
"text": "O(n^2)"
},
{
"math_id": 20,
"text": "{NP}^{NP} "
},
{
"math_id": 21,
"text": "\\Sigma_2^P"
},
{
"math_id": 22,
"text": "r=1"
},
{
"math_id": 23,
"text": "r<1"
},
{
"math_id": 24,
"text": "r\\ge 3/4"
},
{
"math_id": 25,
"text": "r\\ge 9/11"
},
{
"math_id": 26,
"text": "N"
},
{
"math_id": 27,
"text": "N=2"
},
{
"math_id": 28,
"text": "N=n"
},
{
"math_id": 29,
"text": "N>n"
},
{
"math_id": 30,
"text": "N\\le 2n-1"
},
{
"math_id": 31,
"text": "(2n-1)"
},
{
"math_id": 32,
"text": "N\\le 2n-2"
},
{
"math_id": 33,
"text": "n+1"
},
{
"math_id": 34,
"text": "i"
},
{
"math_id": 35,
"text": "j"
},
{
"math_id": 36,
"text": "O(n)"
},
{
"math_id": 37,
"text": "O(\\sqrt n)"
},
{
"math_id": 38,
"text": "n\\geq 2"
},
{
"math_id": 39,
"text": "m"
},
{
"math_id": 40,
"text": "k"
},
{
"math_id": 41,
"text": "k=0"
},
{
"math_id": 42,
"text": "k\\geq n-1"
},
{
"math_id": 43,
"text": "k=1"
},
{
"math_id": 44,
"text": "k\\in\\{1,2\\}"
}
] |
https://en.wikipedia.org/wiki?curid=58540359
|
585453
|
Capillary number
|
Ratio of viscous drag forces to surface tension in fluids
In fluid dynamics, the capillary number (Ca) is a dimensionless quantity representing the relative effect of viscous drag forces versus surface tension forces acting across an interface between a liquid and a gas, or between two immiscible liquids. Alongside the Bond number, commonly denoted formula_0, this term is useful to describe the forces acting on a fluid front in porous or granular media, such as soil. The capillary number is defined as:
formula_1
where formula_2 is the dynamic viscosity of the liquid, formula_3 is a characteristic velocity and formula_4 is the surface tension or interfacial tension between the two fluid phases.
Being a dimensionless quantity, the capillary number's value does not depend on the system of units. In the petroleum industry, capillary number is denoted formula_5 instead of formula_6.
For low capillary numbers (a rule of thumb says less than 10−5), flow in porous media is dominated by capillary forces, whereas for high capillary numbers the capillary forces are negligible compared to the viscous forces. Flow through the pores in an oil reservoir has capillary number values in the order of 10−6, whereas flow of oil through an oil well drill pipe has a capillary number in the order of unity.
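The following Python snippet evaluates the definition for two illustrative flow speeds; the fluid properties are assumed example values, not data from this article.

```python
def capillary_number(mu, velocity, sigma):
    """Ca = mu * V / sigma, all quantities in SI units."""
    return mu * velocity / sigma

# Assumed example properties: mu = 1e-3 Pa*s, sigma = 0.03 N/m.
print(capillary_number(1e-3, 1e-6, 0.03))  # ~3e-8: capillary forces dominate
print(capillary_number(1e-3, 1.0, 0.03))   # ~3e-2: viscous forces significant
```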
The capillary number plays a role in the dynamics of capillary flow; in particular, it governs the dynamic contact angle of a flowing droplet at an interface.
Multiphase formulation.
Multiphase flow forms when two or more partially miscible or immiscible fluids are brought into contact. The capillary number in multiphase flow has the same definition as in the single-phase formulation, the ratio of viscous to surface forces, but it is considered together with the ratio of the fluid viscosities:
formula_7
where formula_8 and formula_9 are the viscosity of the continuous and the dispersed phases respectively.
Multiphase microflows are characterized by the ratio of viscous to surface forces, the capillary number (Ca), and by the ratio of fluid viscosities:
formula_10 and formula_11
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Bo"
},
{
"math_id": 1,
"text": "\\mathrm{Ca} = \\frac{\\mu V}{\\sigma} "
},
{
"math_id": 2,
"text": "\\mu"
},
{
"math_id": 3,
"text": "V"
},
{
"math_id": 4,
"text": "\\sigma"
},
{
"math_id": 5,
"text": "N_c"
},
{
"math_id": 6,
"text": "\\mathrm{Ca}"
},
{
"math_id": 7,
"text": "\\mathrm{Ca} = \\frac{\\mu V}{\\sigma}, \\frac{\\mu }{\\hat{\\mu}} "
},
{
"math_id": 8,
"text": "\\mu "
},
{
"math_id": 9,
"text": "\\hat{\\mu} "
},
{
"math_id": 10,
"text": "\\mathrm{Ca} = \\frac{\\mu V}{\\sigma}"
},
{
"math_id": 11,
"text": "\\frac{\\mu }{\\hat{\\mu}}."
}
] |
https://en.wikipedia.org/wiki?curid=585453
|
5854674
|
Tool wear
|
Gradual failure of cutting tools due to regular use
In machining, tool wear is the gradual failure of cutting tools due to regular operation. Tools affected include tipped tools, tool bits, and drill bits that are used with machine tools.
Types of wear include:
Effects of tool wear.
Some general effects of tool wear include:
Reduction in tool wear can be accomplished by using lubricants and coolants while machining. These reduce friction and temperature, thus reducing the tool wear.
A more general form of the equation is
formula_0
where formula_1 is the cutting speed, "T" is the tool life, "D" is the depth of cut, "S" is the feed rate, "x" and "y" are exponents determined experimentally, and "n" and "C" are constants that depend on the tool material, the workpiece and the feed rate.
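Solving the equation for the tool life "T" gives a quick estimate, as in the following Python snippet; the constants used are purely illustrative placeholders, not values from this article.

```python
def taylor_tool_life(v_c, depth, feed, n, x, y, C):
    """Solve V_c * T**n * D**x * S**y = C for the tool life T."""
    return (C / (v_c * depth ** x * feed ** y)) ** (1.0 / n)

# Hypothetical constants; real values depend on tool and workpiece material.
print(taylor_tool_life(v_c=150, depth=2.0, feed=0.25, n=0.25, x=0.15, y=0.3, C=300))
```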
Temperature considerations.
At high temperature zones crater wear occurs.
The highest temperature of the tool can exceed 700 °C and occurs at the rake face, whereas the lowest temperature can be 500 °C or lower depending on the tool.
Energy considerations.
Energy comes in the form of heat from tool friction. It is a reasonable assumption that 80% of the energy from cutting is carried away in the chip. If not for this, the workpiece and the tool would be much hotter than what is experienced. The tool and the workpiece each carry approximately 10% of the energy. The percentage of energy carried away in the chip increases as the speed of the cutting operation increases. This somewhat offsets the tool wear found at increased cutting speeds: if not for the energy taken away in the chip increasing as cutting speed is increased, the tool would wear more quickly than is found.
Multi-criteria of machining operation.
Malakooti and Deviprasad (1989) introduced the multi-criteria metal cutting problem, where the criteria could be cost per part, production time per part, and quality of surface. Also, Malakooti et al. (1990) proposed a method to rank materials in terms of machinability. Malakooti (2013) presents a comprehensive discussion about tool life and its multi-criteria problem. As an example, the objectives can be minimizing total cost (which can be measured by the total cost of replacing all tools during a production period), maximizing productivity (which can be measured by the total number of parts produced per period), and maximizing the quality of cutting.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "V_c T^n \\times D^x S^y=C"
},
{
"math_id": 1,
"text": "V_c"
}
] |
https://en.wikipedia.org/wiki?curid=5854674
|
58550416
|
Fischer's inequality
|
Mathematical bound
In mathematics, Fischer's inequality gives an upper bound for the determinant of a positive-semidefinite matrix whose entries are complex numbers in terms of the determinants of its principal diagonal blocks.
Suppose "A", "C" are respectively "p"×"p", "q"×"q" positive-semidefinite complex matrices and "B" is a "p"×"q" complex matrix.
Let
formula_0
so that "M" is a ("p"+"q")×("p"+"q") matrix.
Then Fischer's inequality states that
formula_1
If "M" is positive-definite, equality is achieved in Fischer's inequality if and only if all the entries of "B" are 0. Inductively one may conclude that a similar inequality holds for a block decomposition of "M" with multiple principal diagonal blocks. Considering 1×1 blocks, a corollary is Hadamard's inequality. On the other hand, Fischer's inequality can also be proved by using Hadamard's inequality, see the proof of Theorem 7.8.5 in Horn and Johnson's Matrix Analysis.
Proof.
Assume that "A" and "C" are positive-definite. We have formula_2 and formula_3 are positive-definite. Let
formula_4
We note that
formula_5
Applying the AM-GM inequality to the eigenvalues of formula_6, we see
formula_7
By multiplicativity of determinant, we have
formula_8
In this case, equality holds if and only if "M" = "D", that is, all entries of "B" are 0.
For formula_9, as formula_10 and formula_11 are positive-definite, we have
formula_12
Taking the limit as formula_13 proves the inequality. From the inequality we note that if "M" is invertible, then both "A" and "C" are invertible and we get the desired equality condition.
Improvements.
If "M" can be partitioned in square blocks "Mij", then the following inequality by Thompson is valid:
formula_14
where [det("Mij")] is the matrix whose ("i","j") entry is det("Mij").
In particular, if the block matrices "B" and "C" are also square matrices, then the following inequality by Everett is valid:
formula_15
Thompson's inequality can also be generalized by an inequality in terms of the coefficients of the characteristic polynomial of the block matrices. Expressing the characteristic polynomial of the matrix "A" as
formula_16
and supposing that the blocks "Mij" are "m" x "m" matrices, the following inequality by Lin and Zhang is valid:
formula_17
Note that if "r" = "m", then this inequality is identical to Thompson's inequality.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "M := \\left[\\begin{matrix} A & B \\\\ B^* & C \\end{matrix}\\right]"
},
{
"math_id": 1,
"text": " \\det (M) \\le \\det(A) \\det(C)."
},
{
"math_id": 2,
"text": "A^{-1}"
},
{
"math_id": 3,
"text": "C^{-1}"
},
{
"math_id": 4,
"text": "D := \\left[\\begin{matrix} A & 0 \\\\ 0 & C \\end{matrix}\\right]."
},
{
"math_id": 5,
"text": "D^{-\\frac{1}{2}} M D^{-\\frac{1}{2} } = \\left[\\begin{matrix} A^{-\\frac{1}{2}} & 0 \\\\ 0 & C^{-\\frac{1}{2}} \\end{matrix}\\right] \\left[\\begin{matrix} A & B \\\\ B^* & C \\end{matrix}\\right] \\left[\\begin{matrix} A^{-\\frac{1}{2}} & 0 \\\\ 0 & C^{-\\frac{1}{2}} \\end{matrix}\\right] = \\left[\\begin{matrix} I_{p} & A^{-\\frac{1}{2}} BC^{-\\frac{1}{2}} \\\\ C^{-\\frac{1}{2}}B^*A^{-\\frac{1}{2}} & I_{q}\\end{matrix}\\right]"
},
{
"math_id": 6,
"text": "D^{-\\frac{1}{2}} M D^{-\\frac{1}{2} }"
},
{
"math_id": 7,
"text": "\\det (D^{-\\frac{1}{2}} M D^{-\\frac{1}{2}}) \\le \\left({1 \\over p + q} \\mathrm{tr} (D^{-\\frac{1}{2}} M D^{-\\frac{1}{2}}) \\right)^{p+q} = 1^{p+q} = 1."
},
{
"math_id": 8,
"text": "\n\\begin{align}\n\\det(D^{-\\frac{1}{2}} ) \\det(M) \\det(D^{-\\frac{1}{2}} ) \\le 1 \\\\\n\\Longrightarrow \\det(M) \\le \\det(D) = \\det(A) \\det(C).\n\\end{align}"
},
{
"math_id": 9,
"text": "\\varepsilon > 0"
},
{
"math_id": 10,
"text": "A + \\varepsilon I_p"
},
{
"math_id": 11,
"text": "C + \\varepsilon I_q"
},
{
"math_id": 12,
"text": "\\det(M+ \\varepsilon I_{p+q}) \\le \\det(A + \\varepsilon I_p) \\det(C + \\varepsilon I_q)."
},
{
"math_id": 13,
"text": "\\varepsilon \\rightarrow 0"
},
{
"math_id": 14,
"text": "\\det(M) \\leq \\det([\\det(M_{ij})]) "
},
{
"math_id": 15,
"text": "\\det(M) \\le \\det \\begin{bmatrix} \\det(A) && \\det(B) \\\\ \\det(B^*) && \\det(C) \\end{bmatrix}"
},
{
"math_id": 16,
"text": "p_A (t) = \\sum_{k=0}^n t^{n-k} (-1)^k \\operatorname{tr}(\\Lambda^k A)"
},
{
"math_id": 17,
"text": "\\det(M) \\le \\left(\\frac{\\det([\\operatorname{tr}(\\Lambda^r M_{ij}]))}{ \\binom{m}r} \\right)^{\\frac{m}{r}},\\quad r=1, \\ldots, m"
}
] |
https://en.wikipedia.org/wiki?curid=58550416
|
5855043
|
METEOR
|
Metric for machine translation output
METEOR (Metric for Evaluation of Translation with Explicit ORdering) is a metric for the evaluation of machine translation output. The metric is based on the harmonic mean of unigram precision and recall, with recall weighted higher than precision. It also has several features that are not found in other metrics, such as stemming and synonymy matching, along with the standard exact word matching. The metric was designed to fix some of the problems found in the more popular BLEU metric, and also produce good correlation with human judgement at the sentence or segment level. This differs from the BLEU metric in that BLEU seeks correlation at the corpus level.
Results have been presented which give correlation of up to 0.964 with human judgement at the corpus level, compared to BLEU's achievement of 0.817 on the same data set. At the sentence level, the maximum correlation with human judgement achieved was 0.403.[#endnote_]
Algorithm.
As with BLEU, the basic unit of evaluation is the sentence. The algorithm first creates an "alignment" (see illustrations) between two sentences, the candidate translation string and the reference translation string. The "alignment" is a set of "mappings" between unigrams. A mapping can be thought of as a line between a unigram in one string and a unigram in another string. The constraints are as follows: every unigram in the candidate translation must map to zero or one unigram in the reference. Mappings are selected to produce an "alignment" as defined above. If there are two alignments with the same number of mappings, the alignment with the fewest "crosses", that is, with fewer intersections of two mappings, is chosen. From the two alignments shown, alignment (a) would be selected at this point. Stages are run consecutively and each stage only adds to the alignment those unigrams which have not been matched in previous stages. Once the final alignment is computed, the score is computed as follows: Unigram precision P is calculated as:
formula_0
Where m is the number of unigrams in the candidate translation that are also found in the reference translation, and formula_1 is the number of unigrams in the candidate translation. Unigram recall R is computed as:
formula_2
Where m is as above, and formula_3 is the number of unigrams in the reference translation. Precision and recall are combined using the harmonic mean in the following fashion, with recall weighted 9 times more than precision:
formula_4
The measures that have been introduced so far only account for congruity with respect to single words but not with respect to larger segments that appear in both the reference and the candidate sentence. In order to take these into account, longer "n"-gram matches are used to compute a penalty p for the alignment. The more mappings there are that are not adjacent in the reference and the candidate sentence, the higher the penalty will be.
In order to compute this penalty, unigrams are grouped into the fewest possible "chunks", where a chunk is defined as a set of unigrams that are adjacent in the hypothesis and in the reference. The longer the adjacent mappings between the candidate and the reference, the fewer chunks there are. A translation that is identical to the reference will give just one chunk. The penalty p is computed as follows,
formula_5
Where "c" is the number of chunks, and formula_6 is the number of unigrams that have been mapped. The final score for a segment is calculated as M below. The penalty has the effect of reducing the formula_7 by up to 50% if there are no bigram or longer matches.
formula_8
To calculate a score over a whole corpus, or collection of segments, the aggregate values for P, R and p are taken and then combined using the same formula. The algorithm also works for comparing a candidate translation against more than one reference translations. In this case the algorithm compares the candidate against each of the references and selects the highest score.
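The following Python sketch computes a simplified METEOR-style score with exact unigram matching only (no stemming or synonymy modules) and a greedy stand-in for the alignment step; it is meant to illustrate the formulas above, not to reproduce reference implementations.

```python
def meteor_sketch(candidate, reference):
    """Simplified METEOR-style score: exact unigram matches only, with a
    greedy left-to-right alignment standing in for METEOR's alignment stages."""
    cand, ref = candidate.split(), reference.split()
    used, mapping = set(), []           # mapping: (candidate index, reference index)
    for i, w in enumerate(cand):
        for j, r in enumerate(ref):
            if j not in used and r == w:
                mapping.append((i, j))
                used.add(j)
                break
    m = len(mapping)
    if m == 0:
        return 0.0
    P = m / len(cand)                   # unigram precision
    R = m / len(ref)                    # unigram recall
    f_mean = 10 * P * R / (R + 9 * P)   # harmonic mean, recall weighted 9:1
    chunks = 1                          # maximal runs adjacent in both strings
    for (i1, j1), (i2, j2) in zip(mapping, mapping[1:]):
        if not (i2 == i1 + 1 and j2 == j1 + 1):
            chunks += 1
    penalty = 0.5 * (chunks / m) ** 3
    return f_mean * (1 - penalty)

print(round(meteor_sketch("the cat sat on the mat",
                          "the cat sat on the mat"), 3))  # about 0.998 (identical strings)
```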
|
[
{
"math_id": 0,
"text": "P = \\frac{m}{w_{t}}"
},
{
"math_id": 1,
"text": "w_{t}"
},
{
"math_id": 2,
"text": "R = \\frac{m}{w_{r}}"
},
{
"math_id": 3,
"text": "w_{r}"
},
{
"math_id": 4,
"text": "F_{mean} = \\frac{10PR}{R+9P}"
},
{
"math_id": 5,
"text": "p = 0.5 \\left ( \\frac{c}{u_{m}} \\right )^3"
},
{
"math_id": 6,
"text": "u_{m}"
},
{
"math_id": 7,
"text": "F_{mean}"
},
{
"math_id": 8,
"text": "M = F_{mean} (1 - p)"
}
] |
https://en.wikipedia.org/wiki?curid=5855043
|
58552301
|
Σ-Algebra of τ-past
|
Algebra of a branch of probability theory
The σ-algebra of τ-past, (also named stopped σ-algebra, stopped σ-field, or σ-field of τ-past) is a σ-algebra associated with a stopping time in the theory of stochastic processes, a branch of probability theory.
Definition.
Let formula_0 be a stopping time on the filtered probability space formula_1. Then the σ-algebra
formula_2
is called the σ-algebra of τ-past.
Properties.
Monotonicity.
If formula_3 are two stopping times and
formula_4
almost surely, then
formula_5
Measurability.
A stopping time formula_0 is always formula_6-measurable.
Intuition.
The same way formula_7 is all the information up to time formula_8, formula_9 is all the information up to time formula_10. The only difference is that formula_10 is random. For example, if you had a random walk, and you wanted to ask, “How many times did the random walk hit −5 before it first hit 10?”, then letting formula_10 be the first time the random walk hit 10, formula_9 would give you the information to answer that question.
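The random-walk question above can be illustrated by simulation: the quantity returned below is determined by the path up to formula_10, in the spirit of the intuition just described. The step cap is only a safety net to keep the demo fast (formula_10 is finite almost surely).

```python
import random

def hits_of_minus5_before_10(seed, max_steps=10**6):
    """Simple symmetric random walk started at 0. With tau the first hitting
    time of 10, count the visits to -5 strictly before tau."""
    rng = random.Random(seed)
    position, hits = 0, 0
    for _ in range(max_steps):
        if position == 10:
            break
        position += rng.choice((-1, 1))
        if position == -5:
            hits += 1
    return hits

print([hits_of_minus5_before_10(s) for s in range(5)])
```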
|
[
{
"math_id": 0,
"text": " \\tau "
},
{
"math_id": 1,
"text": " (\\Omega, \\mathcal A, (\\mathcal F_t)_{t \\in T}, P ) "
},
{
"math_id": 2,
"text": " \\mathcal F_\\tau:= \\{ A \\in \\mathcal A \\mid \\forall t \\in T \\colon \\{ \\tau \\leq t \\} \\cap A \\in \\mathcal F_t\\} "
},
{
"math_id": 3,
"text": " \\sigma, \\tau "
},
{
"math_id": 4,
"text": " \\sigma \\leq \\tau "
},
{
"math_id": 5,
"text": " \\mathcal F_\\sigma \\subset \\mathcal F_\\tau. "
},
{
"math_id": 6,
"text": " \\mathcal F_\\tau"
},
{
"math_id": 7,
"text": "\\mathcal{F}_t"
},
{
"math_id": 8,
"text": "t"
},
{
"math_id": 9,
"text": "\\mathcal{F}_\\tau"
},
{
"math_id": 10,
"text": "\\tau"
}
] |
https://en.wikipedia.org/wiki?curid=58552301
|
58552803
|
Supersingular isogeny graph
|
Class of expander graphs arising in computational number theory
In mathematics, the supersingular isogeny graphs are a class of expander graphs that arise in computational number theory and have been applied in elliptic-curve cryptography. Their vertices represent supersingular elliptic curves over finite fields and their edges represent isogenies between curves.
Definition and properties.
A supersingular isogeny graph is determined by choosing a large prime number formula_0 and a small prime number formula_1, and considering the class of all supersingular elliptic curves defined over the finite field formula_2. There are approximately formula_3 such curves, any two of which can be related by isogenies. The vertices in the supersingular isogeny graph represent these curves (or more concretely, their j-invariants, elements of formula_2) and the edges represent isogenies of degree formula_1 between two curves.
The supersingular isogeny graphs are formula_4-regular graphs, meaning that each vertex has exactly formula_4 neighbors. They were proven by Pizer to be Ramanujan graphs, graphs with optimal expansion properties for their degree. The proof is based on Pierre Deligne's proof of the Ramanujan–Petersson conjecture.
Cryptographic applications.
One proposal for a cryptographic hash function involves starting from a fixed vertex of a supersingular isogeny graph, using the bits of the binary representation of an input value to determine a sequence of edges to follow in a walk in the graph, and using the identity of the vertex reached at the end of the walk as the hash value for the input. The security of the proposed hashing scheme rests on the assumption that it is difficult to find paths in this graph that connect arbitrary pairs of vertices.
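Schematically, the hash-from-a-walk idea looks as follows in Python. The neighbors oracle is hypothetical (computing the actual formula_4 neighbours of a supersingular j-invariant requires modular-polynomial or isogeny computations not shown here), and the toy graph at the end is only a stand-in used to exercise the walk, not a Ramanujan graph.

```python
def walk_hash(start_vertex, bits, neighbors, out_degree):
    """Schematic hash-from-a-walk: consume the input bits in fixed-size chunks,
    each chunk selecting one of the out_degree outgoing edges at the current
    vertex; the final vertex is the hash value."""
    v = start_vertex
    step = max(1, (out_degree - 1).bit_length())    # bits consumed per step
    for i in range(0, len(bits), step):
        choice = int(bits[i:i + step], 2) % out_degree  # simple (biased) reduction
        v = neighbors(v)[choice]
    return v

# Toy demo on a 3-regular circulant graph, purely as a stand-in structure.
def toy_neighbors(v, n=101):
    return [(v + 1) % n, (v - 1) % n, (v + 10) % n]

print(walk_hash(0, "1011001110", toy_neighbors, 3))
```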
It has also been proposed to use walks in two supersingular isogeny graphs with the same vertex set but different edge sets (defined using different choices of the formula_1 parameter) to develop a key exchange primitive analogous to Diffie–Hellman key exchange, called supersingular isogeny key exchange, suggested as a form of post-quantum cryptography. However, a leading variant of supersingular isogeny key exchange was broken in 2022 using non-quantum methods.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "\\ell"
},
{
"math_id": 2,
"text": "\\mathbb{F}_{p^2}"
},
{
"math_id": 3,
"text": "(p+1)/12"
},
{
"math_id": 4,
"text": "\\ell+1"
}
] |
https://en.wikipedia.org/wiki?curid=58552803
|
58554329
|
Multi-agent pathfinding
|
Pathfinding problem
The problem of Multi-Agent Pathfinding (MAPF) is an instance of multi-agent planning and consists in the computation of collision-free paths for a group of agents from their location to an assigned target. It is an optimization problem, since the aim is to find those paths that optimize a given objective function, usually defined as the number of time steps until all agents reach their goal cells. MAPF is the multi-agent generalization of the pathfinding problem, and it is closely related to the shortest path problem in the context of graph theory.
Several algorithms have been proposed to solve the MAPF problem. Due to its complexity, optimal approaches are often infeasible in large environments with many agents. However, given the applications in which MAPF is involved, such as automated warehouses and airport management, it is important to reach a trade-off between the efficiency of the solution and its effectiveness.
Problem Formalization.
The elements of a classical MAPF problem are the following: a set formula_0 of formula_1 agents; a graph formula_2, where formula_3 is the set of nodes and formula_4 is the set of edges; a function formula_5 assigning to each agent a start node; and a function formula_6 assigning to each agent a target node.
It is assumed that time is discrete, and that each agent can perform one action at each time step. There are two possible types of actions: the wait action, in which the agent remains in its node, and the move action, which allows the agent to move to an adjacent node. An action is formalized as a function formula_7, meaning that formula_8 represents the action of moving from formula_9 to formula_10 if formula_10 is adjacent to formula_9 and different from formula_9, or of staying in node formula_9 if formula_11.
The agents perform sequences of actions to go from their starting point to their target location. A sequence of actions performed by agent formula_12 is denoted by formula_13 and is called a plan. If agent formula_12 starts from its location formula_14 and arrives at its target location formula_15 by performing plan formula_16, then formula_16 is called a single-agent plan for agent formula_12. A valid solution for the MAPF problem is a set of formula_1 single-agent plans (one for each agent) such that the plans do not collide with one another. Once an agent has reached its target, it can either remain in the target location or disappear.
Types of Collisions.
In order to have a valid solution for a MAPF problem, it is necessary that the single-agent plans of the formula_1 agents do not collide with one another. Given plan formula_16, the expression formula_17 denotes the position of agent formula_12 after having performed formula_18 steps of plan formula_16. It is possible to distinguish five different types of collisions between two plans formula_16 and formula_19.
When formalizing a MAPF problem, it is possible to decide which conflicts are allowed and which are forbidden. There is no unified standard about which conflicts are permitted and which are denied; however, vertex and edge conflicts are usually not allowed.
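As an illustration, the two conflicts that are almost always forbidden can be detected with a simple check; in the sketch below each plan is assumed to be given as the list of nodes occupied at successive time steps, and agents are assumed to remain at their targets once they arrive (both representation choices are assumptions for illustration).

```python
# Detect vertex and swapping (edge) conflicts between two single-agent plans,
# each given as the list of nodes occupied at time steps 0, 1, 2, ...

def has_conflict(plan_i, plan_j) -> bool:
    horizon = max(len(plan_i), len(plan_j))
    pos = lambda plan, x: plan[min(x, len(plan) - 1)]   # agents wait at their target
    for x in range(horizon - 1):
        # vertex conflict: both agents occupy the same node at the same step
        if pos(plan_i, x) == pos(plan_j, x):
            return True
        # swapping conflict: the agents traverse the same edge in opposite directions
        if pos(plan_i, x + 1) == pos(plan_j, x) and pos(plan_j, x + 1) == pos(plan_i, x):
            return True
    return pos(plan_i, horizon - 1) == pos(plan_j, horizon - 1)

print(has_conflict(['A', 'B', 'C'], ['C', 'B', 'A']))   # True: vertex conflict at step 1 (both at B)
print(has_conflict(['A', 'B'], ['B', 'A']))             # True: swapping conflict along edge A-B
print(has_conflict(['A', 'B', 'C'], ['B', 'C', 'D']))   # False: following is allowed here
```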
Objective Functions.
When computing single-agent plans, the aim is to optimize a user-defined objective function. There is no standard objective function to adopt; however, the two most common are the "sum of costs" (or flowtime) formula_24, which adds up the lengths of the single-agent plans formula_25, and the "makespan" formula_26, the length of the longest single-agent plan.
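Both objectives are straightforward to evaluate once the single-agent plans are known; a minimal sketch, assuming each plan is represented as a list of actions so that its cost is simply its length:

```python
# Sum of costs (flowtime) and makespan of a MAPF solution,
# where each single-agent plan is a list of actions.

def flowtime(plans) -> int:
    return sum(len(plan) for plan in plans)   # sum over agents of |pi_i|

def makespan(plans) -> int:
    return max(len(plan) for plan in plans)   # max over agents of |pi_i|

plans = [['move', 'move', 'wait'], ['move'], ['move', 'move']]
print(flowtime(plans), makespan(plans))       # 6 3
```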
Algorithms.
Several algorithms have been proposed to solve the MAPF problem. The issue is that finding makespan-optimal or flowtime-optimal solutions is NP-hard, even when considering planar graphs or grid-like graphs. As for bounded suboptimal solutions, it has been shown that it is NP-hard to find a solution whose makespan is within a factor smaller than formula_29 of the optimum. Optimal MAPF solvers return high-quality solutions, but their efficiency is low; bounded-suboptimal and suboptimal solvers are more efficient, but their solutions are less effective. Machine learning approaches have also been proposed to solve the MAPF problem.
Prioritized Planning.
One possible approach to cope with the computational complexity is prioritized planning. It consists in decoupling the MAPF problem into formula_1 single-agent pathfinding problems.
The first step is to assign to each agent a unique number formula_30 that corresponds to the priority given to the agent. Then, following the priority order, for each agent a plan is computed to reach the target location. When planning, agents have to avoid collisions with paths of other agents with a higher priority that have already computed their plans.
Finding a solution for the MAPF problem in such a setting corresponds to solving a shortest path problem in a time-expansion graph, that is, a graph that takes the passing of time into account. Each node is composed of two entries formula_31, where formula_9 is the node name and formula_32 is the time step. Each node formula_31 is linked to the nodes formula_33 such that formula_34 is adjacent to formula_9 and formula_34 is not occupied at time step formula_35.
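A compact sketch of prioritized planning over such a time-expansion graph is given below: the plans of higher-priority agents are stored as a reservation table of occupied (node, time step) pairs, and each agent then runs a breadth-first search over such pairs. The toy graph, the reservation-table representation, the explicit swap check and the fixed time horizon are illustrative assumptions, not part of any specific published algorithm.

```python
from collections import deque

def plan_single_agent(adj, start, goal, reserved, horizon=50):
    """Breadth-first search on the time-expansion graph: a state is (node, time step).
    `reserved` holds the (node, time) pairs already occupied by higher-priority agents."""
    queue = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while queue:
        node, t, path = queue.popleft()
        if node == goal:
            return path
        if t >= horizon:
            continue
        for nxt in adj[node] + [node]:                       # move to a neighbour or wait
            state = (nxt, t + 1)
            if state in reserved or state in seen:
                continue
            if (nxt, t) in reserved and (node, t + 1) in reserved:
                continue                                     # would swap along an edge
            seen.add(state)
            queue.append((nxt, t + 1, path + [nxt]))
    return None                                              # prioritized planning is incomplete

# Path graph a-b-c-d; agent 1 (higher priority) goes a->c, agent 2 goes b->d.
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
plan_1 = plan_single_agent(adj, 'a', 'c', reserved=set())
# (for simplicity, only the time steps of plan_1 itself are reserved)
reserved = {(v, t) for t, v in enumerate(plan_1)}
plan_2 = plan_single_agent(adj, 'b', 'd', reserved=reserved)
print(plan_1, plan_2)                                        # ['a', 'b', 'c'] ['b', 'c', 'd']
```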
The drawback of prioritized planning is that, even though it is a sound approach (it returns only valid solutions), it is neither optimal nor complete: the algorithm is not guaranteed to return a solution and, even when it does, the solution may not be optimal.
Optimal MAPF Solvers.
It is possible to distinguish four different categories of optimal MAPF solvers:
Bounded Suboptimal MAPF Solvers.
Bounded suboptimal algorithms offer a trade-off between the optimality and the cost of the solution. They are said to be bounded by a certain factor because they return solutions with a cost at most equal to the optimal solution cost times the factor. MAPF bounded suboptimal solvers can be divided following the same categorization presented for optimal MAPF solvers.
Variations.
The way in which MAPF problems are defined allows various aspects to be changed, for example the assumption of a grid environment or the assumption that time is discrete. This section reports some variations of the classical MAPF problem.
Anonymous MAPF.
It is a version of MAPF in which there is a set of target locations but agents are not assigned a specific target. It does not matter which agent reaches which target; the important thing is that all targets are reached. A slight modification of this version is the one in which agents are divided into groups and each group has to complete its own set of targets.
Multi-Agent Pick-up and Delivery.
The MAPF problem is not able to capture some aspects of real-world applications. For example, in automated warehouses robots have to complete several tasks one after the other. For this reason, an extended MAPF version has been proposed, called Multi-Agent Pick-up and Delivery (MAPD). In a MAPD setting, agents have to complete a stream of tasks, where each task is composed of a pick-up location and a delivery location. When planning for the completion of a task, the path has to start from the current position of the robot and end at the delivery position of the task, passing through the pick-up point. MAPD is considered a "lifelong" version of MAPF in which tasks arrive incrementally.
Beyond Classical MAPF.
The assumptions that the agents are in a grid environment, that their speed is constant and that time is discrete are simplifying hypotheses. Many works take into account the kinematic constraints of agents, such as velocity and orientation, or drop the assumption that the weights of the arcs are all equal to 1. Other works focus on eliminating the assumptions that time is discrete and that the duration of every action is exactly one time step. Another assumption that does not reflect reality is that agents occupy exactly one cell of the environment in which they are placed: some studies have been conducted to overcome this hypothesis. Notably, the shape and geometry of agents may introduce new types of conflicts, since agents may collide with one another even if they do not completely overlap.
Applications.
MAPF can be applied in several real-world scenarios, such as automated warehouses and airport management.
|
[
{
"math_id": 0,
"text": " A = \\{1,2,...,k\\} "
},
{
"math_id": 1,
"text": " k "
},
{
"math_id": 2,
"text": " G=(V,E) "
},
{
"math_id": 3,
"text": " V "
},
{
"math_id": 4,
"text": " E "
},
{
"math_id": 5,
"text": " s: A \\to V "
},
{
"math_id": 6,
"text": " t: A \\to V "
},
{
"math_id": 7,
"text": " a : V \\to V "
},
{
"math_id": 8,
"text": " a(v) = v' "
},
{
"math_id": 9,
"text": " v "
},
{
"math_id": 10,
"text": " v' "
},
{
"math_id": 11,
"text": " v = v' "
},
{
"math_id": 12,
"text": " i "
},
{
"math_id": 13,
"text": " \\pi_i = (a_1, a_2,..., a_n) "
},
{
"math_id": 14,
"text": " s(i) "
},
{
"math_id": 15,
"text": " t(i) "
},
{
"math_id": 16,
"text": " \\pi_i "
},
{
"math_id": 17,
"text": " \\pi_i[x] "
},
{
"math_id": 18,
"text": " x "
},
{
"math_id": 19,
"text": " \\pi_j "
},
{
"math_id": 20,
"text": " \\pi_i[x] = \\pi_j[x] "
},
{
"math_id": 21,
"text": " \\pi_i[x+1] = \\pi_j[x+1] "
},
{
"math_id": 22,
"text": " \\pi_i[x+1] = \\pi_j[x] "
},
{
"math_id": 23,
"text": " \\pi_j[x+1] = \\pi_i[x] "
},
{
"math_id": 24,
"text": " \\sum_{i \\in A} |\\pi_i| "
},
{
"math_id": 25,
"text": " \\pi_i, i \\in A "
},
{
"math_id": 26,
"text": " \\max_{i \\in A} |\\pi_i| "
},
{
"math_id": 27,
"text": " \\pi_i"
},
{
"math_id": 28,
"text": ", i \\in A "
},
{
"math_id": 29,
"text": " \\tfrac{4}{3}"
},
{
"math_id": 30,
"text": " \\{1,2,...,k\\} "
},
{
"math_id": 31,
"text": " (v,t) "
},
{
"math_id": 32,
"text": " t "
},
{
"math_id": 33,
"text": " (u,t+1) "
},
{
"math_id": 34,
"text": " u "
},
{
"math_id": 35,
"text": " t+1 "
}
] |
https://en.wikipedia.org/wiki?curid=58554329
|
58558530
|
Fuglede−Kadison determinant
|
In mathematics, the Fuglede−Kadison determinant of an invertible operator in a finite factor is a positive real number associated with it. It defines a multiplicative homomorphism from the set of invertible operators to the set of positive real numbers. The Fuglede−Kadison determinant of an operator formula_0 is often denoted by formula_1.
For a matrix formula_0 in formula_2, formula_3, which is the normalized form of the absolute value of the determinant of formula_0.
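For the matrix case this is easy to verify numerically; a small sketch using NumPy, with an arbitrarily chosen test matrix:

```python
import numpy as np

# For A in M_n(C) with the normalized trace, Delta(A) = exp(mean of log singular values of A),
# which coincides with |det A|**(1/n).

def fk_determinant(A: np.ndarray) -> float:
    singular_values = np.linalg.svd(A, compute_uv=False)   # eigenvalues of |A| = (A*A)^(1/2)
    return float(np.exp(np.mean(np.log(singular_values))))

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
n = A.shape[0]
print(fk_determinant(A), abs(np.linalg.det(A)) ** (1 / n))  # both equal sqrt(6) ~ 2.449
```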
Definition.
Let formula_4 be a finite factor with the canonical normalized trace formula_5 and let formula_6 be an invertible operator in formula_4. Then the Fuglede−Kadison determinant of formula_6 is defined as
formula_7
(cf. Relation between determinant and trace via eigenvalues). The number formula_8 is well-defined by continuous functional calculus.
Extensions to singular operators.
There are many possible extensions of the Fuglede−Kadison determinant to singular operators in formula_4. All of them must assign a value of 0 to operators with non-trivial nullspace. No extension of the determinant formula_13 from the invertible operators formula_14 to all operators in formula_4 is continuous in the uniform topology.
Algebraic extension.
The algebraic extension of formula_13 assigns a value of 0 to a singular operator in formula_4.
Analytic extension.
For an operator formula_0 in formula_4, the analytic extension of formula_13 uses the spectral decomposition of formula_16 to define formula_17 with the understanding that formula_18 if formula_19. This extension satisfies the continuity property
formula_20 for formula_21
Generalizations.
Although originally the Fuglede−Kadison determinant was defined for operators in finite factors, it carries over to the case of operators in von Neumann algebras with a tracial state (formula_5) in the case of which it is denoted by formula_22.
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "\\Delta(A)"
},
{
"math_id": 2,
"text": "M_n(\\mathbb{C})"
},
{
"math_id": 3,
"text": "\\Delta(A) = \\left| \\det (A) \\right|^{1/n}"
},
{
"math_id": 4,
"text": "\\mathcal{M}"
},
{
"math_id": 5,
"text": "\\tau"
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "\\Delta(X) := \\exp \\tau(\\log (X^*X)^{1/2}),"
},
{
"math_id": 8,
"text": "\\Delta(X)"
},
{
"math_id": 9,
"text": "\\Delta(XY) = \\Delta(X) \\Delta(Y)"
},
{
"math_id": 10,
"text": "X, Y \\in \\mathcal{M}"
},
{
"math_id": 11,
"text": "\\Delta (\\exp A) = \\left| \\exp \\tau(A) \\right| = \\exp \\Re \\tau(A)"
},
{
"math_id": 12,
"text": "A \\in \\mathcal{M}."
},
{
"math_id": 13,
"text": "\\Delta"
},
{
"math_id": 14,
"text": "GL_1(\\mathcal{M})"
},
{
"math_id": 15,
"text": "\\mathcal{M},"
},
{
"math_id": 16,
"text": "|A| = \\int \\lambda \\; dE_\\lambda"
},
{
"math_id": 17,
"text": "\\Delta(A) := \\exp \\left( \\int \\log \\lambda \\; d\\tau(E_\\lambda) \\right)"
},
{
"math_id": 18,
"text": "\\Delta(A) = 0"
},
{
"math_id": 19,
"text": "\\int \\log \\lambda \\; d\\tau(E_\\lambda) = -\\infty"
},
{
"math_id": 20,
"text": "\\lim_{\\varepsilon \\rightarrow 0} \\Delta(H + \\varepsilon I) = \\Delta(H)"
},
{
"math_id": 21,
"text": "H \\ge 0."
},
{
"math_id": 22,
"text": "\\Delta_\\tau(\\cdot)"
}
] |
https://en.wikipedia.org/wiki?curid=58558530
|
58559351
|
Brown measure
|
Probability measure on a complex plane
In mathematics, the Brown measure of an operator in a finite factor is a probability measure on the complex plane which may be viewed as an analog of the spectral counting measure (based on algebraic multiplicity) of matrices.
It is named after Lawrence G. Brown.
Definition.
Let formula_0 be a finite factor with the canonical normalized trace formula_1 and let formula_2 be the identity operator. For every operator formula_3 the function
formula_4
is a subharmonic function and its Laplacian in the distributional sense is a probability measure on formula_5
formula_6
which is called the Brown measure of formula_7 Here formula_8 denotes the Laplacian, taken with respect to the real and imaginary parts of the complex variable.
The subharmonic function can also be written in terms of the Fuglede−Kadison determinant formula_9 as follows
formula_10
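For a matrix with the normalized trace, the function above can be evaluated from singular values, and it coincides with the average of log|λ_i − λ| over the eigenvalues λ_i, i.e. the logarithmic potential of the normalized eigenvalue counting measure. A quick numerical check of this identity (the test matrix and the point λ are arbitrary choices):

```python
import numpy as np

# tau(log|A - lam*I|) computed from singular values vs. from eigenvalues.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
lam = 0.3 + 0.7j
n = A.shape[0]

B = A - lam * np.eye(n)
via_singular_values = np.mean(np.log(np.linalg.svd(B, compute_uv=False)))
via_eigenvalues = np.mean(np.log(np.abs(np.linalg.eigvals(A) - lam)))
print(np.isclose(via_singular_values, via_eigenvalues))     # True
```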
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathcal{M}"
},
{
"math_id": 1,
"text": "\\tau"
},
{
"math_id": 2,
"text": "I"
},
{
"math_id": 3,
"text": "A \\in \\mathcal{M},"
},
{
"math_id": 4,
"text": "\\lambda \\mapsto \\tau(\\log \\left|A-\\lambda I\\right|), \\; \\lambda \\in \\Complex,"
},
{
"math_id": 5,
"text": "\\Complex"
},
{
"math_id": 6,
"text": "\\mu_A(\\mathrm{d}(a+b\\mathbb{i})) := \\frac{1}{2\\pi}\\nabla^2 \\tau(\\log \\left|A-(a+b\\mathbb{i}) I\\right|)\\mathrm{d}a\\mathrm{d}b"
},
{
"math_id": 7,
"text": "A."
},
{
"math_id": 8,
"text": "\\nabla^2"
},
{
"math_id": 9,
"text": "\\Delta_{FK}"
},
{
"math_id": 10,
"text": "\\lambda \\mapsto \\log\\Delta_{FK}(A-\\lambda I), \\; \\lambda \\in \\Complex."
}
] |
https://en.wikipedia.org/wiki?curid=58559351
|
585797
|
Integer-valued polynomial
|
In mathematics, an integer-valued polynomial (also known as a numerical polynomial) formula_0 is a polynomial whose value formula_1 is an integer for every integer "n". Every polynomial with integer coefficients is integer-valued, but the converse is not true. For example, the polynomial
formula_2
takes on integer values whenever "t" is an integer. That is because one of "t" and formula_3 must be an even number. (The values this polynomial takes are the triangular numbers.)
Integer-valued polynomials are objects of study in their own right in algebra, and frequently appear in algebraic topology.
Classification.
The class of integer-valued polynomials was described fully by George Pólya (1915). Inside the polynomial ring formula_4 of polynomials with rational number coefficients, the subring of integer-valued polynomials is a free abelian group. It has as basis the polynomials
formula_5
for formula_6, i.e., the binomial coefficients. In other words, every integer-valued polynomial can be written as an integer linear combination of binomial coefficients in exactly one way. The proof is by the method of discrete Taylor series: binomial coefficients are integer-valued polynomials, and conversely, the discrete difference of an integer series is an integer series, so the discrete Taylor series of an integer series generated by a polynomial has integer coefficients (and is a finite series).
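The discrete Taylor series argument can be made concrete in a few lines of code; the sketch below recovers the binomial-basis coefficients of the triangular-number polynomial from its forward differences at 0 (the choice of polynomial and of the degree bound is purely for illustration).

```python
from math import comb

# Discrete Taylor series: an integer-valued polynomial P of degree d equals
# sum_k (k-th forward difference of P at 0) * C(t, k), for k = 0..d.

def binomial_basis_coefficients(P, degree):
    values = [P(n) for n in range(degree + 1)]
    coeffs = []
    for _ in range(degree + 1):
        coeffs.append(values[0])
        values = [b - a for a, b in zip(values, values[1:])]   # forward differences
    return coeffs

P = lambda t: t * (t + 1) // 2            # the triangular-number polynomial
coeffs = binomial_basis_coefficients(P, 2)
print(coeffs)                             # [0, 1, 1]  ->  P(t) = C(t,1) + C(t,2)

# sanity check on a few integers
assert all(P(n) == sum(c * comb(n, k) for k, c in enumerate(coeffs)) for n in range(10))
```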
Fixed prime divisors.
Integer-valued polynomials may be used effectively to solve questions about fixed divisors of polynomials. For example, the polynomials "P" with integer coefficients that always take on even number values are just those such that formula_7 is integer valued. Those in turn are the polynomials that may be expressed as a linear combination with even integer coefficients of the binomial coefficients.
In questions of prime number theory, such as Schinzel's hypothesis H and the Bateman–Horn conjecture, it is a matter of basic importance to understand the case when "P" has no fixed prime divisor (this has been called "Bunyakovsky's property", after Viktor Bunyakovsky). By writing "P" in terms of the binomial coefficients, we see the highest fixed prime divisor is also the highest prime common factor of the coefficients in such a representation. So Bunyakovsky's property is equivalent to coprime coefficients.
As an example, the pair of polynomials formula_8 and formula_9 violates this condition at formula_10: for every formula_8 the product
formula_11
is divisible by 3, which follows from the representation
formula_12
with respect to the binomial basis, where the highest common factor of the coefficients—hence the highest fixed divisor of formula_13—is 3.
Other rings.
Numerical polynomials can be defined over other rings and fields, in which case the integer-valued polynomials above are referred to as classical numerical polynomials.
Applications.
The K-theory of BU("n") consists of the numerical (symmetric) polynomials.
The Hilbert polynomial of a polynomial ring in "k" + 1 variables is the numerical polynomial formula_14.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P(t)"
},
{
"math_id": 1,
"text": "P(n)"
},
{
"math_id": 2,
"text": " P(t) = \\frac{1}{2} t^2 + \\frac{1}{2} t=\\frac{1}{2}t(t+1)"
},
{
"math_id": 3,
"text": "t + 1"
},
{
"math_id": 4,
"text": "\\Q[t]"
},
{
"math_id": 5,
"text": "P_k(t) = t(t-1)\\cdots (t-k+1)/k!"
},
{
"math_id": 6,
"text": "k = 0,1,2, \\dots"
},
{
"math_id": 7,
"text": "P/2"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "n^2 + 2"
},
{
"math_id": 10,
"text": "p = 3"
},
{
"math_id": 11,
"text": "n(n^2 + 2)"
},
{
"math_id": 12,
"text": " n(n^2 + 2) = 6 \\binom{n}{3} + 6 \\binom{n}{2} + 3 \\binom{n}{1} "
},
{
"math_id": 13,
"text": "n(n^2+2)"
},
{
"math_id": 14,
"text": "\\binom{t+k}{k}"
}
] |
https://en.wikipedia.org/wiki?curid=585797
|
58580000
|
Bernoulli polynomials of the second kind
|
Polynomial sequence
The Bernoulli polynomials of the second kind "ψn"("x"), also known as the Fontana–Bessel polynomials, are the polynomials defined by the following generating function:
formula_0
The first five polynomials are:
formula_1
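The list above can be reproduced directly from the generating function by a series expansion; a small sketch using SymPy (the truncation order is an arbitrary choice):

```python
import sympy as sp

# Recover the first Bernoulli polynomials of the second kind from the
# generating function  z*(1+z)**x / ln(1+z) = sum_{n>=0} psi_n(x) * z**n.
z, x = sp.symbols('z x')
N = 5
gen = z * (1 + z)**x / sp.log(1 + z)
expansion = sp.expand(gen.series(z, 0, N).removeO())
for n in range(N):
    print(f"psi_{n}(x) =", sp.expand(expansion.coeff(z, n)))
# Expected: 1,  x + 1/2,  x**2/2 - 1/12,  x**3/6 - x**2/4 + 1/24,  ...
```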
Some authors define these polynomials slightly differently
formula_2
so that
formula_3
and may also use a different notation for them (the most used alternative notation is "bn"("x")). Under this convention, the polynomials form a Sheffer sequence.
The Bernoulli polynomials of the second kind were largely studied by the Hungarian mathematician Charles Jordan, but their history may also be traced back to much earlier works.
Integral representations.
The Bernoulli polynomials of the second kind may be represented via these integrals
formula_4
as well as
formula_5
These polynomials are, therefore, up to a constant, the antiderivative of the binomial coefficient and also that of the falling factorial.
Explicit formula.
For an arbitrary "n", these polynomials may be computed explicitly via the following summation formula
formula_6
where "s"("n","l") are the signed Stirling numbers of the first kind and "G""n" are the Gregory coefficients.
The expansion of the Bernoulli polynomials of the second kind into a Newton series reads
formula_7
It can be shown using the second integral representation and Vandermonde's identity.
Recurrence formula.
The Bernoulli polynomials of the second kind satisfy the recurrence relation
formula_8
or equivalently
formula_9
The repeated difference produces
formula_10
Symmetry property.
The main symmetry property reads
formula_11
Some further properties and particular values.
Some properties and particular values of these polynomials include
formula_12
where "C""n" are the "Cauchy numbers of the second kind" and "M""n" are the "central difference coefficients".
Some series involving the Bernoulli polynomials of the second kind.
The digamma function Ψ("x") may be expanded into a series with the Bernoulli polynomials of the second kind in the following way
formula_13
and hence
formula_14
and
formula_15
where "γ" is Euler's constant. Furthermore, we also have
formula_16
where Γ("x") is the gamma function. The Hurwitz and Riemann zeta functions may be expanded into these polynomials as follows
formula_17
and
formula_18
and also
formula_19
The Bernoulli polynomials of the second kind are also involved in the following relationship
formula_20
between the zeta functions, as well as in various formulas for the Stieltjes constants, e.g.
formula_21
and
formula_22
which are both valid for formula_23 and formula_24.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\frac{z(1+z)^x}{\\ln(1+z)}= \\sum_{n=0}^\\infty z^n \\psi_n(x) ,\\qquad |z|<1.\n"
},
{
"math_id": 1,
"text": "\n\\begin{align}\n\\psi_0(x) &= 1 \\\\[2mm]\n\\psi_1(x) &= x + \\frac{1}{2} \\\\[2mm]\n\\psi_2(x) &= \\frac{1}{2} x^2 - \\frac{1}{12} \\\\[2mm]\n\\psi_3(x) &= \\frac{1}{6} x^3 - \\frac{1}{4} x^2 + \\frac{1}{24} \\\\[2mm]\n\\psi_4(x) &= \\frac{1}{24} x^4 - \\frac{1}{6} x^3 + \\frac{1}{6} x^2 - \\frac{19}{720}\n\\end{align}\n"
},
{
"math_id": 2,
"text": " \\frac{z \\left(1+z\\right)^x}{\\ln(1+z)} = \\sum_{n=0}^\\infty \\frac{z^n}{n!} \\psi^*_n(x) ,\\qquad |z|<1, "
},
{
"math_id": 3,
"text": " \\psi^*_n(x) = \\psi_n(x) \\, n! "
},
{
"math_id": 4,
"text": "\n\\psi_{n}(x) = \\int_x^{x+1}\\! \\binom{u}{n} \\, du = \\int_0^1 \\binom{x+u}{n} \\, du\n"
},
{
"math_id": 5,
"text": "\\begin{align}\n\\psi_{n}(x) &= \\frac{\\left(-1\\right)^{n+1}}{\\pi} \n\\int_0^\\infty \\frac{\\pi \\cos\\pi x - \\sin\\pi x \\ln z}{(1+z)^n} \\cdot\\frac{z^x dz}{\\ln^2 z +\\pi^2}\n,\\qquad -1\\leq x\\leq n-1\\, \\\\[3mm]\n\\psi_{n}(x) &= \\frac{\\left(-1\\right)^{n+1}}{\\pi} \n\\int_{-\\infty}^{+\\infty} \\frac{\\pi \\cos\\pi x - v\\sin\\pi x }{\\left(1+e^v\\right)^n} \\cdot \\frac{e^{v(x+1)} }{v^2 +\\pi^2}\\, dv ,\\qquad -1\\leq x\\leq n-1\\,\n\\end{align}"
},
{
"math_id": 6,
"text": " \\psi_{n}(x) = \\frac{1}{(n-1)!}\\sum_{l=0}^{n-1} \\frac{s(n-1,l)}{l+1} x^{l+1} + G_{n}, \\qquad n=1,2,3,\\ldots "
},
{
"math_id": 7,
"text": "\\psi_{n}(x) = G_0 \\binom{x}{n} + G_1 \\binom{x}{n-1} + G_2 \\binom{x}{n-2} + \\ldots + G_n"
},
{
"math_id": 8,
"text": "\\psi_{n}(x+1) - \\psi_{n}(x) = \\psi_{n-1}(x)"
},
{
"math_id": 9,
"text": "\\Delta\\psi_{n}(x) = \\psi_{n-1}(x)"
},
{
"math_id": 10,
"text": "\\Delta^m\\psi_{n}(x) = \\psi_{n-m}(x)"
},
{
"math_id": 11,
"text": " \\psi_{n}{\\left(\\tfrac{1}{2}n-1+x\\right)} = \\left(-1\\right)^n \\psi_{n}{\\left(\\tfrac{1}{2}n-1-x\\right)} "
},
{
"math_id": 12,
"text": "\\begin{align}\n&\\psi_n(0) = G_n \\\\[2mm]\n&\\psi_n(1) = G_{n-1} + G_{n} \\\\[2mm]\n&\\psi_n(-1) = \\left(-1\\right)^{n+1} \\sum_{m=0}^n \\left|G_m\\right| = \\left(-1\\right)^n C_n\\\\[2mm]\n&\\psi_n(n-2) = - \\left|G_n\\right| \\\\[2mm]\n&\\psi_n(n-1) = \\left(-1\\right)^n \\psi_n(-1) = 1 - \\sum_{m=1}^n \\left|G_m\\right| \\\\[2mm]\n&\\psi_{2n}(n-1) = M_{2n} \\\\[2mm]\n&\\psi_{2n}(n-1+y) = \\psi_{2n}(n-1-y) \\\\[2mm]\n&\\psi_{2n+1}(n-\\tfrac{1}{2}+y) = -\\psi_{2n+1}(n-\\tfrac{1}{2}-y) \\\\[2mm]\n&\\psi_{2n+1}(n-\\tfrac{1}{2}) = 0 \n\\end{align}"
},
{
"math_id": 13,
"text": "\n\\Psi(v) = \\ln(v+a) + \\sum_{n=1}^\\infty \\frac{(-1)^n\\psi_{n}(a)\\,(n-1)!}{(v)_{n}},\\qquad \\Re(v) > -a,\n"
},
{
"math_id": 14,
"text": "\\gamma= -\\ln(a+1) - \\sum_{n=1}^\\infty\\frac{(-1)^n \\psi_{n}(a)}{n},\\qquad \\Re(a)>-1 "
},
{
"math_id": 15,
"text": "\\gamma = \\sum_{n=1}^\\infty \\frac{(-1)^{n+1}}{2n}\\left\\{\\psi_{n}(a)+ \\psi_{n}\\left(-\\frac{a}{1+a}\\right)\\right\\},\n\\quad a>-1"
},
{
"math_id": 16,
"text": "\n\\Psi(v) = \\frac{1}{v + a - \\frac{1}{2}} \\left\\{\\ln\\Gamma(v+a) + v - \\frac{1}{2}\\ln(2\\pi) - \\frac{1}{2} + \\sum_{n=1}^\\infty \\frac{\\left(-1\\right)^n \\psi_{n+1}(a)}{(v)_{n}} \\left(n-1\\right)!\\right\\}, \\quad \\Re(v)>-a, \n"
},
{
"math_id": 17,
"text": "\n\\zeta(s,v) = \\frac{(v+a)^{1-s} }{s-1} + \\sum_{n=0}^\\infty (-1)^n \\psi_{n+1}(a)\n\\sum_{k=0}^{n} \\left(-1\\right)^k \\binom{n}{k} (k+v)^{-s}\n"
},
{
"math_id": 18,
"text": "\n\\zeta(s)= \\frac{(a+1)^{1-s} }{s-1} + \\sum_{n=0}^\\infty (-1)^n \\psi_{n+1}(a)\n\\sum_{k=0}^{n} \\left(-1\\right)^k \\binom{n}{k} (k+1)^{-s}\n"
},
{
"math_id": 19,
"text": "\n\\zeta(s) = 1 + \\frac{(a+2)^{1-s}}{s-1} + \\sum_{n=0}^\\infty (-1)^n \\psi_{n+1}(a)\n\\sum_{k=0}^{n} \\left(-1\\right)^k \\binom{n}{k} (k+2)^{-s}\n"
},
{
"math_id": 20,
"text": "\n\\big(v+a-\\tfrac{1}{2}\\big)\\zeta(s,v) = -\\frac{\\zeta(s-1,v+a)}{s-1} + \\zeta(s-1,v) + \n \\sum_{n=0}^\\infty \\left(-1\\right)^n \\psi_{n+2}(a) \\sum_{k=0}^{n} \\left(-1\\right)^k \\binom{n}{k} (k+v)^{-s} \n"
},
{
"math_id": 21,
"text": "\n\\gamma_m(v) = -\\frac{\\ln^{m+1}(v+a)}{m+1} + \\sum_{n=0}^\\infty (-1)^n \\psi_{n+1}(a)\n\\sum_{k=0}^{n} \\left(-1\\right)^k \\binom{n}{k}\\frac{\\ln^m (k+v)}{k+v}\n"
},
{
"math_id": 22,
"text": "\n \\gamma_m(v)=\\frac{1}{\\tfrac{1}{2}-v-a}\n\\left\\{\\frac{(-1)^m}{m+1}\\,\\zeta^{(m+1)}(0,v+a)- (-1)^m \\zeta^{(m)}(0,v)\n- \\sum_{n=0}^\\infty (-1)^n \\psi_{n+2}(a) \n\\sum_{k=0}^{n} (-1)^k \\binom{n}{k}\\frac{\\ln^m (k+v)}{k+v}\\right\\} \n"
},
{
"math_id": 23,
"text": "\\Re(a) > -1"
},
{
"math_id": 24,
"text": "v \\in \\mathbb{C}\\setminus\\!\\{0,-1,-2,\\ldots\\}"
}
] |
https://en.wikipedia.org/wiki?curid=58580000
|
585826
|
Invariant subspace
|
Subspace preserved by a linear mapping
In mathematics, an invariant subspace of a linear mapping "T" : "V" → "V " i.e. from some vector space "V" to itself, is a subspace "W" of "V" that is preserved by "T". More generally, an invariant subspace for a collection of linear mappings is a subspace preserved by each mapping individually.
For a single operator.
Consider a vector space formula_0 and a linear map formula_1 A subspace formula_2 is called an invariant subspace for formula_3, or equivalently, T-invariant, if T transforms any vector formula_4 back into W. In formulas, this can be written formula_5 or formula_6
In this case, T restricts to an endomorphism of W: formula_7
The existence of an invariant subspace also has a matrix formulation. Pick a basis "C" for "W" and complete it to a basis "B" of "V". With respect to "B", the operator T has the form formula_8 for some "T"12 and "T"22.
Examples.
Any linear map formula_9 admits the following invariant subspaces: the zero subspace formula_11, which is invariant since formula_12, and the whole space formula_10
These are the trivial and improper invariant subspaces, respectively. Certain linear operators have no proper non-trivial invariant subspace: for instance, rotation of a two-dimensional real vector space. However, the axis of a rotation in three dimensions is always an invariant subspace.
1-dimensional subspaces.
If U is a 1-dimensional invariant subspace for operator T with vector v ∈ "U", then the vectors v and "T"v must be linearly dependent. Thus formula_13 In fact, the scalar α does not depend on v.
The equation above formulates an eigenvalue problem. Any eigenvector for T spans a 1-dimensional invariant subspace, and vice-versa. In particular, a nonzero invariant vector (i.e. a fixed point of "T") spans an invariant subspace of dimension 1.
As a consequence of the fundamental theorem of algebra, every linear operator on a nonzero finite-dimensional complex vector space has an eigenvector. Therefore, every such linear operator in at least two dimensions has a proper non-trivial invariant subspace.
Diagonalization via projections.
Determining whether a given subspace "W" is invariant under "T" is ostensibly a problem of geometric nature. Matrix representation allows one to phrase this problem algebraically.
Write V as the direct sum "W" ⊕ "W"′; a suitable "W"′ can always be chosen by extending a basis of W. The associated projection operator "P" onto "W" has matrix representation
formula_14
A straightforward calculation shows that "W" is T-invariant if and only if "PTP" = "TP".
If 1 is the identity operator, then 1 − "P" is the projection onto "W"′. The equation "TP" = "PT" holds if and only if both im("P") and im(1 − "P") are invariant under "T". In that case, "T" has matrix representation formula_15
Colloquially, a projection that commutes with "T" "diagonalizes" "T".
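A small numerical illustration of the criterion "PTP" = "TP", with arbitrarily chosen matrices: the first operator is block upper-triangular with respect to "W" ⊕ "W"′ and so leaves "W" invariant, while the second is not.

```python
import numpy as np

# W = span{e1} in R^2, and P is the projection onto W along W'.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])

T_invariant = np.array([[2.0, 5.0],       # block upper-triangular: T(W) is contained in W
                        [0.0, 3.0]])
T_not_invariant = np.array([[2.0, 5.0],
                            [1.0, 3.0]])

for T in (T_invariant, T_not_invariant):
    print(np.allclose(P @ T @ P, T @ P))  # True, then False
```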
Lattice of subspaces.
As the above examples indicate, the invariant subspaces of a given linear transformation "T" shed light on the structure of "T". When "V" is a finite-dimensional vector space over an algebraically closed field, linear transformations acting on "V" are characterized (up to similarity) by the Jordan canonical form, which decomposes "V" into invariant subspaces of "T". Many fundamental questions regarding "T" can be translated to questions about invariant subspaces of "T".
The set of T-invariant subspaces of V is sometimes called the invariant-subspace lattice of T and written Lat("T"). As the name suggests, it is a (modular) lattice, with meets and joins given by (respectively) set intersection and linear span. A minimal element in Lat("T") is said to be a minimal invariant subspace.
In the study of infinite-dimensional operators, Lat("T") is sometimes restricted to only the closed invariant subspaces.
For multiple operators.
Given a collection of operators 𝒯, a subspace is called 𝒯-invariant if it is invariant under each "T" ∈ 𝒯.
As in the single-operator case, the invariant-subspace lattice of 𝒯, written Lat(𝒯), is the set of all 𝒯-invariant subspaces, and bears the same meet and join operations. Set-theoretically, it is the intersection formula_16
Examples.
Let End("V") be the set of all linear operators on V. Then Lat(End("V"))={0,"V"}.
Given a representation of a group "G" on a vector space "V", we have a linear transformation "T"("g") : "V" → "V" for every element "g" of "G". If a subspace "W" of "V" is invariant with respect to all these transformations, then it is a subrepresentation and the group "G" acts on "W" in a natural way. The same construction applies to representations of an algebra.
As another example, let "T" ∈ End("V") and Σ be the algebra generated by {1, "T" }, where 1 is the identity operator. Then Lat("T") = Lat(Σ).
Fundamental theorem of noncommutative algebra.
Just as the fundamental theorem of algebra ensures that every linear transformation acting on a finite-dimensional complex vector space has a non-trivial invariant subspace, the "fundamental theorem of noncommutative algebra" asserts that Lat(Σ) contains non-trivial elements for certain Σ.
<templatestyles src="Math_theorem/styles.css" />
Theorem (Burnside) — Assume V is a complex vector space of finite dimension. For every proper subalgebra Σ of End("V"), Lat("Σ") contains a non-trivial element.
One consequence is that every commuting family in "L"("V") can be simultaneously upper-triangularized. To see this, note that an upper-triangular matrix representation corresponds to a flag of invariant subspaces, that a commuting family generates a commuting algebra, and that End("V") is not commutative when dim("V") ≥ 2.
Left ideals.
If "A" is an algebra, one can define a "left regular representation" Φ on "A": Φ("a")"b" = "ab" is a homomorphism from "A" to "L"("A"), the algebra of linear transformations on "A"
The invariant subspaces of Φ are precisely the left ideals of "A". A left ideal "M" of "A" gives a subrepresentation of "A" on "M".
If "M" is a left ideal of "A" then the left regular representation Φ on "M" now descends to a representation Φ' on the quotient vector space "A"/"M". If ["b"] denotes an equivalence class in "A"/"M", Φ'("a")["b"] = ["ab"]. The kernel of the representation Φ' is the set {"a" ∈ "A" | "ab" ∈ "M" for all "b"}.
The representation Φ' is irreducible if and only if "M" is a maximal left ideal, since a subspace "V" ⊂ "A"/"M" is an invariant under {Φ'("a") | "a" ∈ "A"} if and only if its preimage under the quotient map, "V" + "M", is a left ideal in "A".
Invariant subspace problem.
The invariant subspace problem concerns the case where "V" is a separable Hilbert space over the complex numbers, of dimension > 1, and "T" is a bounded operator. The problem is to decide whether every such "T" has a non-trivial, closed, invariant subspace. It is unsolved.
In the more general case where "V" is assumed to be a Banach space, Per Enflo (1976) found an example of an operator without an invariant subspace. A concrete example of an operator without an invariant subspace was produced in 1985 by Charles Read.
Almost-invariant halfspaces.
Related to invariant subspaces are so-called almost-invariant-halfspaces (AIHS's). A closed subspace formula_17 of a Banach space formula_18 is said to be almost-invariant under an operator formula_19 if formula_20 for some finite-dimensional subspace formula_21; equivalently, formula_17 is almost-invariant under formula_3 if there is a finite-rank operator formula_22 such that formula_23, i.e. if formula_17 is invariant (in the usual sense) under formula_24. In this case, the minimum possible dimension of formula_21 (or rank of formula_25) is called the defect.
Clearly, every finite-dimensional and finite-codimensional subspace is almost-invariant under every operator. Thus, to make things non-trivial, we say that formula_17 is a halfspace whenever it is a closed subspace with infinite dimension and infinite codimension.
The AIHS problem asks whether every operator admits an AIHS. In the complex setting it has already been solved; that is, if formula_18 is a complex infinite-dimensional Banach space and formula_19 then formula_3 admits an AIHS of defect at most 1. It is not currently known whether the same holds if formula_18 is a real Banach space. However, some partial results have been established: for instance, any self-adjoint operator on an infinite-dimensional real Hilbert space admits an AIHS, as does any strictly singular (or compact) operator acting on a real infinite-dimensional reflexive space.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "T: V \\to V."
},
{
"math_id": 2,
"text": "W \\subseteq V"
},
{
"math_id": 3,
"text": "T"
},
{
"math_id": 4,
"text": "\\mathbf{v} \\in W"
},
{
"math_id": 5,
"text": "\\mathbf{v} \\in W \\implies T(\\mathbf{v}) \\in W"
},
{
"math_id": 6,
"text": "TW\\subseteq W\\text{.}"
},
{
"math_id": 7,
"text": "T|_W : W \\to W\\text{;}\\quad T|_W(\\mathbf{w}) = T(\\mathbf{w})\\text{.}"
},
{
"math_id": 8,
"text": " T = \\begin{bmatrix} T|_W & T_{12} \\\\ 0 & T_{22} \\end{bmatrix} "
},
{
"math_id": 9,
"text": "T : V \\to V"
},
{
"math_id": 10,
"text": "V."
},
{
"math_id": 11,
"text": "\\{0\\}"
},
{
"math_id": 12,
"text": "T(0) = 0"
},
{
"math_id": 13,
"text": " \\forall\\mathbf{v}\\in U\\;\\exists\\alpha\\in\\mathbb{R}: T\\mathbf{v}=\\alpha\\mathbf{v}\\text{.}"
},
{
"math_id": 14,
"text": " \nP = \\begin{bmatrix} 1 & 0 \\\\ 0 & 0 \\end{bmatrix} : \\begin{matrix}W \\\\ \\oplus \\\\ W' \\end{matrix} \\rightarrow \\begin{matrix}W \\\\ \\oplus \\\\ W' \\end{matrix}.\n"
},
{
"math_id": 15,
"text": " \nT = \\begin{bmatrix} T_{11} & 0 \\\\ 0 & T_{22} \\end{bmatrix} : \\begin{matrix} \\operatorname{im}(P) \\\\ \\oplus \\\\ \\operatorname{im}(1-P) \\end{matrix} \\rightarrow \\begin{matrix} \\operatorname{im}(P) \\\\ \\oplus \\\\ \\operatorname{im}(1-P) \\end{matrix} \\;.\n"
},
{
"math_id": 16,
"text": "\\mathrm{Lat}(\\mathcal{T})=\\bigcap_{T\\in\\mathcal{T}}{\\mathrm{Lat}(T)}\\text{.}"
},
{
"math_id": 17,
"text": "Y"
},
{
"math_id": 18,
"text": "X"
},
{
"math_id": 19,
"text": "T \\in \\mathcal{B}(X)"
},
{
"math_id": 20,
"text": "TY \\subseteq Y+E"
},
{
"math_id": 21,
"text": "E"
},
{
"math_id": 22,
"text": "F \\in \\mathcal{B}(X)"
},
{
"math_id": 23,
"text": "(T+F)Y \\subseteq Y"
},
{
"math_id": 24,
"text": "T+F"
},
{
"math_id": 25,
"text": "F"
}
] |
https://en.wikipedia.org/wiki?curid=585826
|
585882
|
Harris–Todaro model
|
Economic model
The Harris–Todaro model, named after John R. Harris and Michael Todaro, is an economic model developed in 1970 and used in development economics and welfare economics to explain some of the issues concerning rural-urban migration. The main assumption of the model is that the migration decision is based on "expected" income differentials between rural and urban areas rather than just wage differentials. This implies that rural-urban migration in a context of high urban unemployment can be economically rational if expected urban income exceeds expected rural income.
Overview.
In the model, an equilibrium is reached when the expected wage in urban areas (the actual wage adjusted for the unemployment rate) is equal to the marginal product of an agricultural worker. The model assumes that unemployment is non-existent in the rural agricultural sector. It is also assumed that rural agricultural production and the subsequent labor market are perfectly competitive. As a result, the agricultural rural wage is equal to agricultural marginal productivity. In equilibrium, the rural-to-urban migration rate will be zero since the expected rural income equals the expected urban income. However, in this equilibrium there will be positive unemployment in the urban sector. The model helps explain internal migration in China, as the regional income gap has been shown to be a primary driver of rural-urban migration, while urban unemployment is local governments' main concern in many cities.
Formalism.
The formal statement of the equilibrium condition of the Harris–Todaro model is as follows. Let formula_0 denote the agricultural (rural) wage, formula_3 the urban formal-sector wage, formula_4 the urban informal-sector wage, formula_1 the level of employment in the urban formal sector, and formula_2 the level of employment in the urban informal sector.
Rural to urban migration will take place if:
formula_5
Conversely, urban to rural migration will occur if:
formula_6
At equilibrium,
formula_7
With the random matching of workers to available jobs, the ratio of available jobs to total job seekers gives the probability that any person moving from the agricultural sector to the urban sector will be able to find a job. As a result, in equilibrium, the agricultural wage rate is equal to the expected urban wage rate, which is the urban wage multiplied by the employment rate.
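A minimal sketch of the equilibrium condition with made-up figures (all wages and employment numbers below are hypothetical):

```python
# Expected urban income under the Harris-Todaro model:
# probability of obtaining a formal-sector job = L_F / (L_F + L_I).

def expected_urban_wage(w_F, w_I, L_F, L_I):
    p_formal = L_F / (L_F + L_I)
    return p_formal * w_F + (1 - p_formal) * w_I

def migration_direction(w_A, w_F, w_I, L_F, L_I):
    expected = expected_urban_wage(w_F, w_I, L_F, L_I)
    if w_A < expected:
        return "rural-to-urban migration"
    if w_A > expected:
        return "urban-to-rural migration"
    return "equilibrium (no net migration)"

# Hypothetical numbers: formal wage 100, informal wage 20, 60% formal employment.
print(expected_urban_wage(100, 20, L_F=600, L_I=400))                   # 68.0
print(migration_direction(w_A=50, w_F=100, w_I=20, L_F=600, L_I=400))   # rural-to-urban migration
```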
Conclusions.
Therefore, migration from rural areas to urban areas will increase if:
However, even though this migration creates unemployment and induces informal sector growth, this behavior is economically rational and utility-maximizing in the context of the Harris–Todaro model. As long as the migrating economic agents have complete and accurate information concerning rural and urban wage rates and probabilities of obtaining employment, they will make an expected income-maximizing decision.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\ w_A "
},
{
"math_id": 1,
"text": "\\ L_F "
},
{
"math_id": 2,
"text": "\\ L_I "
},
{
"math_id": 3,
"text": "\\ w_F "
},
{
"math_id": 4,
"text": "\\ w_I "
},
{
"math_id": 5,
"text": "\\ w_A < \\frac{L_F}{L_F+L_I} w_F + \\frac{L_I}{{L_F}+{L_I}} w_I "
},
{
"math_id": 6,
"text": "\\ w_A > \\frac{L_F}{L_F+L_I} w_F + \\frac{L_I}{{L_F}+{L_I}} w_I "
},
{
"math_id": 7,
"text": "\\ w_A = \\frac{L_F}{L_F+L_I} w_F + \\frac{L_I}{{L_F}+{L_I}} w_I "
}
] |
https://en.wikipedia.org/wiki?curid=585882
|
58590950
|
Suzuki graph
|
The Suzuki graph is a strongly regular graph with parameters formula_0. Its automorphism group has order 896690995200 and contains the Suzuki sporadic group as a subgroup of index 2. It is named for Michio Suzuki.
|
[
{
"math_id": 0,
"text": "(1782, 416, 100, 96)"
}
] |
https://en.wikipedia.org/wiki?curid=58590950
|
58600169
|
Hill limit (solid-state)
|
In solid-state physics, the Hill limit is a critical distance defined in a lattice of actinide or rare-earth atoms. These atoms have partially filled formula_0 or formula_1 levels in their valence shell, which are therefore responsible for the main interaction between each atom and its environment. In this context, the Hill limit formula_2 is defined as twice the radius of the formula_3-orbital. Therefore, if two atoms of the lattice are separated by a distance greater than the Hill limit, the overlap of their formula_3-orbitals becomes negligible. A direct consequence is the absence of hopping for the f electrons, i.e. their localization on the ion sites of the lattice.
Localized f electrons lead to paramagnetic materials since the remaining unpaired spins are stuck in their orbitals. However, when the rare-earth lattice (or a single atom) is embedded in a metallic one (intermetallic compound), interactions with the conduction band allow the f electrons to move through the lattice even for interatomic distances above the Hill limit.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "4f"
},
{
"math_id": 1,
"text": "5f"
},
{
"math_id": 2,
"text": "r_H"
},
{
"math_id": 3,
"text": "f"
}
] |
https://en.wikipedia.org/wiki?curid=58600169
|
58603519
|
Post-conflict aid
|
Assistance granted to regions hit by war
Post-conflict aid is the monetary, material or technical assistance granted by other states, non-governmental organizations and private donors to regions that have recently been hit by either an international war, a civil war, or an armed conflict. The donation can take the form of food, financial investment, reconstruction materials and many others and aims to the re-attainment of sustainable socio-economic development as well as to the re-organization of the governmental and judicial structures and institutions in the war-torn region.
Distinction from development and humanitarian aid.
Post-conflict aid differs from conventional development aid in many aspects. It is to some extent comparable to the aid granted in response to certain environmental catastrophes with huge impacts on the humanitarian situation. Just like countries facing a natural disaster, post-conflict countries are mainly confronted with a humanitarian emergency due to the destruction of infrastructure, institutions, public services and labor force, as well as the death and injury of many people in the area. Hence, aid committed to the respective country can in both scenarios reach an extraordinarily high level but is likely to decline very rapidly as soon as the emergency has come under control. In both scenarios, technical help and human resources complement the material component of aid. In contrast, development aid is far more sustainable, not triggered by a single event, tied to certain conditions and mostly granted in the framework of an official international and national development program. Development aid is based on the principle of help for self-help, whereas post-conflict and humanitarian aid strive to provide the means to manage a current crisis without a long-term plan. Still, post-conflict aid is more comprehensive than pure humanitarian aid, which is aimed at meeting the basic needs of medical supply, shelter and food in the aftermath of a humanitarian catastrophe. In its most encompassing form, post-conflict aid occurs during the reconstruction phase and the reinstallation of a functioning political system and the rule of law.
History of the development of post-conflict aid.
Prior to the development of post-conflict aid, there was simply humanitarian aid and development aid. Yet, following a conflict, aid was often not provided to the state that had suffered devastating losses during the conflict. Instead, that state frequently came under the rule of the state that won the war. An example of this would be colonialism, where the state that lost becomes a colony of the state that won the conflict.
Post-conflict aid historically became distinct from other types of aid following World War II. The Marshall Plan, part of the resolution following World War II, aimed to rebuild, with America's assistance, what had been destroyed throughout Europe. While there was infrastructure to be rebuilt and basic human needs to be met, there was also the deterioration of Europe's economy to address. This is where the definition of post-conflict aid, as aid that plans for socioeconomic development, originates.
The focus on socioeconomic development is a direct side effect of globalized war. States understood how much ability they had to harm, and thus also to improve, the other states around the globe. The multiplier effect explains how a single dollar spent can produce large amounts of market productivity and growth.
Similarly, following the Cold War, providing aid to benefit socioeconomic growth continued. During the Cold War, the United States and the Soviet Union had provided heavily politicized aid to promote a political ideology; following the Cold War, aid was used as an economic multiplier more than as a political tool.
The third and final stage of the development of post-conflict aid took place following 9/11 and the war in Afghanistan, marking the end of the long peace. During this time, major arms producers provided security aid. The military-industrial complex shows how states were able to provide aid while increasing demand in one of the most expensive markets.
Furthermore, the United Nations developed the triple nexus approach for providing post-crisis and post-conflict aid. The triple nexus breaks down aid into a humanitarian component, a development component and a peacekeeping component, which largely focuses on the socioeconomic issues that commonly spark conflicts or around which the previous conflict was centered.
Economics.
A donor disbursing aid to a conflict-torn country offers either humanitarian or reconstruction aid, or splits its donation between the two. Reconstruction aid (R) enters the receiving country's production function, just like government spending would, since it is given to the country to be invested in the rebuilding of the economy: formula_0 Humanitarian aid, in contrast, is only given until the basic needs of the population are satisfied, that is, until the donor-defined level of consumption (c*) is reached. As soon as the actual consumption c equals c*, humanitarian aid stops. Until then, the humanitarian aid provided also depends on the short-run generosity α of the donor, such that the overall post-conflict aid A is: formula_1
Economic impacts.
According to neo-classical growth models, high growth rates are expected to take place in post-conflict societies until the steady-state capital stock is reached. Following the Collier–Dollar model of the impact of post-conflict aid on growth, aid is generally subject to diminishing marginal returns, and the impact of post-conflict aid on growth highly depends on the policy environment, meaning that in a better policy environment aid is more effective in boosting growth. This results in the following equation: formula_2 whereby X stands for any exogenous factors, P measures the quality of policy and A gives the amount of aid received by the conflict-torn country.
In order to determine the marginal impact of aid on growth, we differentiate the above equation with respect to A: formula_3 Hence, aid reaches its saturation point at formula_4 Aid given to states that are currently in a post-conflict phase also has a signaling effect on foreign investors, who would otherwise not be able to assess properly the situation on the ground due to the lack of information about credibility and economic potential. Aid reflects the support and trust that an economy enjoys. One should expect a rapid acceleration in economic growth after a short post-conflict period, since the uncertainty that is typical under war conditions is resolved once the reconstruction of political institutions has started. Thus, investment and (technical) innovation are more likely to take place. In addition, labor productivity might increase due to a more peaceful environment and the gradual recovery of human capital thanks to health care, growing opportunities for education and research, and better nutrition. If peace can be maintained, it is to be assumed that a phase of catch-up will take place, which will level off after a given time; the economy will then fall back to its long-term growth rate.
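A small numerical sketch of the saturation point formula_4, using made-up coefficients (all values below are hypothetical; the signs are chosen so that aid has diminishing returns and a better policy environment raises its effectiveness):

```python
# Marginal growth impact of aid in the Collier-Dollar specification:
# dG/dA = b3 + 2*b4*A + b5*P, which vanishes at the saturation point A_S.

def marginal_impact(A, P, b3, b4, b5):
    return b3 + 2 * b4 * A + b5 * P

def saturation_point(P, b3, b4, b5):
    return -(b3 + b5 * P) / (2 * b4)

# Hypothetical coefficients: diminishing returns (b4 < 0), policy raises effectiveness (b5 > 0).
b3, b4, b5 = 0.2, -0.01, 0.05
for P in (1.0, 3.0):                        # weak vs. strong policy environment
    A_S = saturation_point(P, b3, b4, b5)
    print(P, A_S, round(marginal_impact(A_S, P, b3, b4, b5), 12))   # marginal impact ~ 0 at A_S
```

Consistent with the model's message, the better policy environment (larger P) yields a higher saturation level of aid in this toy calculation.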
Effectiveness.
Addressing the question of the effectiveness and sustainability of post-conflict aid, evaluations have shown that it differs between sectors. All in all, post-conflict aid is effective in improving social infrastructure, like education, health, water, sanitation, and governance, but ineffective in rebuilding economic infrastructure, like transportation, communication, energy, and finance. Studies from villages in northern Liberia demonstrate that post-conflict reconstruction programs, as part of post-conflict aid, can have a measurable and lasting impact on social cohesion in these villages. The power of post-conflict aid to increase the overall level of foreign aid is rather small, suggesting that there are no spill-over effects in the long run, even if the respective economy is in need of long-term support. Thus, post-conflict aid is limited in time and has a rather small effect on long-term growth. According to the World Bank report, post-conflict aid should be phased in slowly so that it reaches its peak level in the middle of the first post-war decade. At this point, it should be doubled before being reduced.
Aid patterns.
Post-conflict aid is not granted without certain patterns but follows the donor's strategic rationales. The rationales can be of a humanitarian nature or be based on the interest in implementing a peace agreement, typically in accordance with a political agenda. Another motivation might be the targeted state itself, which might either be a client regime or a region over which the donor hopes to gain political or economic influence. Looking at the donor side, empirical research suggests multiple characteristics as useful predictors of post-conflict economic assistance: aid from OECD countries starts in most cases in the aftermath of a conflict and increases over time, especially if OECD or UN peacekeeping troops were involved during the conflict or its management. OECD countries provide more aid to countries with a high level of democracy and an open-market economy but are more reluctant to provide it to (highly) developed countries. They are also more likely to assist countries that are undergoing a regime transition or change, especially if this leads to a donor-favored regime or peace agreement. But even countries with a donor-favored regime or peace agreement can lose funding in cases of ineffective policy implementation, human rights considerations or a reduced sense of emergency. On average, this happens three to four years after the conflict.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Y=f(k,l.R)"
},
{
"math_id": 1,
"text": "A=H+R=\\alpha(c*-c)+R "
},
{
"math_id": 2,
"text": "G=c+b_{1}X+b_{2}P+b_{3}A+b_{4}A^2+b_{5}AP"
},
{
"math_id": 3,
"text": "{dG \\over dA}=b_{3}+2b_{4}A+b_{5}P"
},
{
"math_id": 4,
"text": "A^S={-(b_{3+b_{5}P)}\\over 2b_{4}}"
}
] |
https://en.wikipedia.org/wiki?curid=58603519
|
58610
|
Non-Euclidean geometry
|
Two geometries based on axioms closely related to those specifying Euclidean geometry
In mathematics, non-Euclidean geometry consists of two geometries based on axioms closely related to those that specify Euclidean geometry. As Euclidean geometry lies at the intersection of metric geometry and affine geometry, non-Euclidean geometry arises by either replacing the parallel postulate with an alternative, or relaxing the metric requirement. In the former case, one obtains hyperbolic geometry and elliptic geometry, the traditional non-Euclidean geometries. When the metric requirement is relaxed, then there are affine planes associated with the planar algebras, which give rise to kinematic geometries that have also been called non-Euclidean geometry.
Principles.
The essential difference between the metric geometries is the nature of parallel lines. Euclid's fifth postulate, the parallel postulate, is equivalent to Playfair's postulate, which states that, within a two-dimensional plane, for any given line l and a point "A", which is not on l, there is exactly one line through "A" that does not intersect l. In hyperbolic geometry, by contrast, there are infinitely many lines through "A" not intersecting l, while in elliptic geometry, any line through "A" intersects l.
Another way to describe the differences between these geometries is to consider two straight lines indefinitely extended in a two-dimensional plane that are both perpendicular to a third line (in the same plane):
History.
Background.
Euclidean geometry, named after the Greek mathematician Euclid, includes some of the oldest known mathematics, and geometries that deviated from this were not widely accepted as legitimate until the 19th century.
The debate that eventually led to the discovery of the non-Euclidean geometries began almost as soon as Euclid wrote "Elements". In the "Elements", Euclid begins with a limited number of assumptions (23 definitions, five common notions, and five postulates) and seeks to prove all the other results (propositions) in the work. The most notorious of the postulates is often referred to as "Euclid's Fifth Postulate", or simply the "parallel postulate", which in Euclid's original formulation is:
If a straight line falls on two straight lines in such a manner that the interior angles on the same side are together less than two right angles, then the straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles.
Other mathematicians have devised simpler forms of this property. Regardless of the form of the postulate, however, it consistently appears more complicated than Euclid's other postulates:
For at least a thousand years, geometers were troubled by the disparate complexity of the fifth postulate, and believed it could be proved as a theorem from the other four. Many attempted to find a proof by contradiction, including Ibn al-Haytham (Alhazen, 11th century), Omar Khayyám (12th century), Nasīr al-Dīn al-Tūsī (13th century), and Giovanni Girolamo Saccheri (18th century).
The theorems of Ibn al-Haytham, Khayyam and al-Tusi on quadrilaterals, including the Lambert quadrilateral and Saccheri quadrilateral, were "the first few theorems of the hyperbolic and the elliptic geometries". These theorems along with their alternative postulates, such as Playfair's axiom, played an important role in the later development of non-Euclidean geometry. These early attempts at challenging the fifth postulate had a considerable influence on its development among later European geometers, including Witelo, Levi ben Gerson, Alfonso, John Wallis and Saccheri. All of these early attempts made at trying to formulate non-Euclidean geometry, however, provided flawed proofs of the parallel postulate, depending on assumptions that are now recognized as essentially equivalent to the parallel postulate. These early attempts did, however, provide some early properties of the hyperbolic and elliptic geometries.
Khayyam, for example, tried to derive it from an equivalent postulate he formulated from "the principles of the Philosopher" (Aristotle): "Two convergent straight lines intersect and it is impossible for two convergent straight lines to diverge in the direction in which they converge." Khayyam then considered the three cases right, obtuse, and acute that the summit angles of a Saccheri quadrilateral can take and after proving a number of theorems about them, he correctly refuted the obtuse and acute cases based on his postulate and hence derived the classic postulate of Euclid, which he didn't realize was equivalent to his own postulate. Another example is al-Tusi's son, Sadr al-Din (sometimes known as "Pseudo-Tusi"), who wrote a book on the subject in 1298, based on al-Tusi's later thoughts, which presented another hypothesis equivalent to the parallel postulate. "He essentially revised both the Euclidean system of axioms and postulates and the proofs of many propositions from the "Elements"." His work was published in Rome in 1594 and was studied by European geometers, including Saccheri who criticised this work as well as that of Wallis.
Giordano Vitale, in his book "Euclide restituo" (1680, 1686), used the Saccheri quadrilateral to prove that if three points are equidistant on the base AB and the summit CD, then AB and CD are everywhere equidistant.
In a work titled "Euclides ab Omni Naevo Vindicatus" ("Euclid Freed from All Flaws"), published in 1733, Saccheri quickly discarded elliptic geometry as a possibility (some others of Euclid's axioms must be modified for elliptic geometry to work) and set to work proving a great number of results in hyperbolic geometry.
He finally reached a point where he believed that his results demonstrated the impossibility of hyperbolic geometry. His claim seems to have been based on Euclidean presuppositions, because no "logical" contradiction was present. In this attempt to prove Euclidean geometry he instead unintentionally discovered a new viable geometry, but did not realize it.
In 1766 Johann Lambert wrote, but did not publish, "Theorie der Parallellinien" in which he attempted, as Saccheri did, to prove the fifth postulate. He worked with a figure now known as a "Lambert quadrilateral", a quadrilateral with three right angles (can be considered half of a Saccheri quadrilateral). He quickly eliminated the possibility that the fourth angle is obtuse, as had Saccheri and Khayyam, and then proceeded to prove many theorems under the assumption of an acute angle. Unlike Saccheri, he never felt that he had reached a contradiction with this assumption. He had proved the non-Euclidean result that the sum of the angles in a triangle increases as the area of the triangle decreases, and this led him to speculate on the possibility of a model of the acute case on a sphere of imaginary radius. He did not carry this idea any further.
At this time it was widely believed that the universe worked according to the principles of Euclidean geometry.
Discovery of non-Euclidean geometry.
The beginning of the 19th century would finally witness decisive steps in the creation of non-Euclidean geometry.
Circa 1813 Carl Friedrich Gauss and, independently around 1818, the German professor of law Ferdinand Karl Schweikart had the germinal ideas of non-Euclidean geometry worked out, but neither published any results. Schweikart's nephew Franz Taurinus did publish important results of hyperbolic trigonometry in two papers in 1825 and 1826, yet while admitting the internal consistency of hyperbolic geometry, he still believed in the special role of Euclidean geometry.
Then, in 1829–1830 the Russian mathematician Nikolai Ivanovich Lobachevsky and in 1832 the Hungarian mathematician János Bolyai separately and independently published treatises on hyperbolic geometry. Consequently, hyperbolic geometry is called Lobachevskian or Bolyai-Lobachevskian geometry, as both mathematicians, independent of each other, are the basic authors of non-Euclidean geometry. Gauss mentioned to Bolyai's father, when shown the younger Bolyai's work, that he had developed such a geometry several years before, though he did not publish. While Lobachevsky created a non-Euclidean geometry by negating the parallel postulate, Bolyai worked out a geometry where both the Euclidean and the hyperbolic geometry are possible depending on a parameter "k". Bolyai ends his work by mentioning that it is not possible to decide through mathematical reasoning alone if the geometry of the physical universe is Euclidean or non-Euclidean; this is a task for the physical sciences.
Bernhard Riemann, in a famous lecture in 1854, founded the field of Riemannian geometry, discussing in particular the ideas now called manifolds, Riemannian metric, and curvature.
He constructed an infinite family of non-Euclidean geometries by giving a formula for a family of Riemannian metrics on the unit ball in Euclidean space. The simplest of these is called elliptic geometry and it is considered a non-Euclidean geometry due to its lack of parallel lines.
By formulating the geometry in terms of a curvature tensor, Riemann allowed non-Euclidean geometry to apply to higher dimensions. Beltrami (1868) was the first to apply Riemann's geometry to spaces of negative curvature.
Terminology.
It was Gauss who coined the term "non-Euclidean geometry". He was referring to his own work, which today we call "hyperbolic geometry" or "Lobachevskian geometry". Several modern authors still use the generic term "non-Euclidean geometry" to mean "hyperbolic geometry".
Arthur Cayley noted that distance between points inside a conic could be defined in terms of logarithm and the projective cross-ratio function. The method has become called the Cayley–Klein metric because Felix Klein exploited it to describe the non-Euclidean geometries in articles in 1871 and 1873 and later in book form. The Cayley–Klein metrics provided working models of hyperbolic and elliptic metric geometries, as well as Euclidean geometry.
Klein is responsible for the terms "hyperbolic" and "elliptic" (in his system he called Euclidean geometry "parabolic", a term that generally fell out of use). His influence has led to the current usage of the term "non-Euclidean geometry" to mean either "hyperbolic" or "elliptic" geometry.
There are some mathematicians who would extend the list of geometries that should be called "non-Euclidean" in various ways.
There are many kinds of geometry that are quite different from Euclidean geometry but are also not necessarily included in the conventional meaning of "non-Euclidean geometry", such as more general instances of Riemannian geometry.
Axiomatic basis of non-Euclidean geometry.
Euclidean geometry can be axiomatically described in several ways. However, Euclid's original system of five postulates (axioms) is not one of these, as his proofs relied on several unstated assumptions that should also have been taken as axioms. Hilbert's system, consisting of 20 axioms, most closely follows the approach of Euclid and provides the justification for all of Euclid's proofs. Other systems, using different sets of undefined terms, obtain the same geometry by different paths. All approaches, however, have an axiom that is logically equivalent to Euclid's fifth postulate, the parallel postulate. Hilbert uses the Playfair axiom form, while Birkhoff, for instance, uses the axiom that says that, "There exists a pair of similar but not congruent triangles." In any of these systems, removal of the one axiom equivalent to the parallel postulate, in whatever form it takes, and leaving all the other axioms intact, produces absolute geometry. As the first 28 propositions of Euclid (in "The Elements") do not require the use of the parallel postulate or anything equivalent to it, they are all true statements in absolute geometry.
To obtain a non-Euclidean geometry, the parallel postulate (or its equivalent) "must" be replaced by its negation. Negating the Playfair's axiom form, since it is a compound statement (... there exists one and only one ...), can be done in two ways: either more than one line parallel to the given line passes through the point, or no line parallel to it passes through the point. The first alternative yields hyperbolic geometry and the second yields elliptic geometry.
Models.
Models of non-Euclidean geometry are mathematical models of geometries which are non-Euclidean in the sense that it is not the case that exactly one line parallel to a given line "l" can be drawn through a point "A" that is not on "l". In hyperbolic geometric models there are infinitely many lines through "A" parallel to "l", and in elliptic geometric models parallel lines do not exist. (See the entries on hyperbolic geometry and elliptic geometry for more information.)
Euclidean geometry is modelled by our notion of a "flat plane."
The simplest model for elliptic geometry is a sphere, where lines are "great circles" (such as the equator or the meridians on a globe), and points opposite each other are identified (considered to be the same).
The pseudosphere has the appropriate curvature to model hyperbolic geometry.
Elliptic geometry.
The simplest model for elliptic geometry is a sphere, where lines are "great circles" (such as the equator or the meridians on a globe), and points opposite each other (called antipodal points) are identified (considered the same). This is also one of the standard models of the real projective plane. The difference is that as a model of elliptic geometry a metric is introduced permitting the measurement of lengths and angles, while as a model of the projective plane there is no such metric.
In the elliptic model, for any given line l and a point "A", which is not on l, all lines through "A" will intersect l.
Hyperbolic geometry.
Even after the work of Lobachevsky, Gauss, and Bolyai, the question remained: "Does such a model exist for hyperbolic geometry?". This question was answered by Eugenio Beltrami in 1868, who first showed that a surface called the pseudosphere has the appropriate curvature to model a portion of hyperbolic space and, in a second paper in the same year, defined the Klein model, which models the entirety of hyperbolic space, and used this to show that Euclidean geometry and hyperbolic geometry were equiconsistent, so that hyperbolic geometry is logically consistent if and only if Euclidean geometry is. (The reverse implication follows from the horosphere model of Euclidean geometry.)
In the hyperbolic model, within a two-dimensional plane, for any given line l and a point "A", which is not on l, there are infinitely many lines through "A" that do not intersect l.
In these models, the concepts of non-Euclidean geometries are represented by Euclidean objects in a Euclidean setting. This introduces a perceptual distortion wherein the straight lines of the non-Euclidean geometry are represented by Euclidean curves that visually bend. This "bending" is not a property of the non-Euclidean lines, only an artifice of the way they are represented.
Three-dimensional non-Euclidean geometry.
In three dimensions, there are eight models of geometries. There are Euclidean, elliptic, and hyperbolic geometries, as in the two-dimensional case; mixed geometries that are partially Euclidean and partially hyperbolic or spherical; twisted versions of the mixed geometries; and one unusual geometry that is completely anisotropic (i.e. every direction behaves differently).
Uncommon properties.
Euclidean and non-Euclidean geometries naturally have many similar properties, namely those that do not depend upon the nature of parallelism. This commonality is the subject of absolute geometry (also called "neutral geometry"). However, the properties that distinguish one geometry from others have historically received the most attention.
Besides the behavior of lines with respect to a common perpendicular, mentioned in the introduction, we also have the following:
Importance.
Before the models of a non-Euclidean plane were presented by Beltrami, Klein, and Poincaré, Euclidean geometry stood unchallenged as the mathematical model of space. Furthermore, since the substance of the subject in synthetic geometry was a chief exhibit of rationality, the Euclidean point of view represented absolute authority.
The discovery of the non-Euclidean geometries had a ripple effect which went far beyond the boundaries of mathematics and science. The philosopher Immanuel Kant's treatment of human knowledge had a special role for geometry. It was his prime example of synthetic a priori knowledge; not derived from the senses nor deduced through logic — our knowledge of space was a truth that we were born with. Unfortunately for Kant, his concept of this unalterably true geometry was Euclidean. Theology was also affected by the change from absolute truth to relative truth in the way that mathematics is related to the world around it, a change that resulted from this paradigm shift.
Non-Euclidean geometry is an example of a scientific revolution in the history of science, in which mathematicians and scientists changed the way they viewed their subjects. Some geometers called Lobachevsky the "Copernicus of Geometry" due to the revolutionary character of his work.
The existence of non-Euclidean geometries impacted the intellectual life of Victorian England in many ways and in particular was one of the leading factors that caused a re-examination of the teaching of geometry based on Euclid's Elements. This curriculum issue was hotly debated at the time and was even the subject of a book, "Euclid and his Modern Rivals", written by Charles Lutwidge Dodgson (1832–1898) better known as Lewis Carroll, the author of "Alice in Wonderland".
Planar algebras.
In analytic geometry a plane is described with Cartesian coordinates:
formula_0
The points are sometimes identified with complex numbers "z" = "x" + "y"ε where ε² ∈ { −1, 0, 1}.
The Euclidean plane corresponds to the case ε² = −1 since the modulus of z is given by
formula_1
and this quantity is the square of the Euclidean distance between z and the origin.
For instance, {"z" | "z z"* = 1} is the unit circle.
For planar algebra, non-Euclidean geometry arises in the other cases.
When ε² = +1, then z is a split-complex number and conventionally j replaces epsilon. Then
formula_2
and {"z" | "z z"* = 1} is the unit hyperbola.
When ε² = 0, then z is a dual number.
This approach to non-Euclidean geometry explains the non-Euclidean angles: the parameters of slope in the dual number plane and hyperbolic angle in the split-complex plane correspond to angle in Euclidean geometry. Indeed, they each arise in polar decomposition of a complex number z.
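As a quick illustration of the three cases, the following minimal sketch (not part of the original text; the function name is purely illustrative) evaluates the quadratic form "z z"* = "x"² − ε²"y"² for ε² = −1, +1 and 0:

```python
# Hedged sketch: the quadratic form z z* for the three planar algebras,
# with eps**2 = -1 (ordinary complex), +1 (split-complex) and 0 (dual numbers).
def modulus_squared(x, y, eps_squared):
    # z z* = (x + y*eps)(x - y*eps) = x**2 - eps_squared * y**2
    return x**2 - eps_squared * y**2

print(modulus_squared(3, 4, -1))   # 25 -> Euclidean: z z* = 1 is the unit circle
print(modulus_squared(3, 4, +1))   # -7 -> split-complex: z z* = 1 is the unit hyperbola
print(modulus_squared(3, 4,  0))   # 9  -> dual numbers: z z* = x**2
```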
Kinematic geometries.
Hyperbolic geometry found an application in kinematics with the physical cosmology introduced by Hermann Minkowski in 1908. Minkowski introduced terms like worldline and proper time into mathematical physics. He realized that the submanifold, of events one moment of proper time into the future, could be considered a hyperbolic space of three dimensions.
Already in the 1890s Alexander Macfarlane was charting this submanifold through his "Algebra of Physics" and hyperbolic quaternions, though Macfarlane did not use cosmological language as Minkowski did in 1908. The relevant structure is now called the hyperboloid model of hyperbolic geometry.
The non-Euclidean planar algebras support kinematic geometries in the plane. For instance, the split-complex number "z" = e^("a"j) can represent a spacetime event one moment into the future of a frame of reference of rapidity "a". Furthermore, multiplication by "z" amounts to a Lorentz boost mapping the frame with rapidity zero to that with rapidity "a".
Kinematic study makes use of the dual numbers formula_3 to represent the classical description of motion in absolute time and space:
The equations formula_4 are equivalent to a shear mapping in linear algebra:
formula_5
With dual numbers the mapping is formula_6
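A minimal sketch of this correspondence, assuming a small hand-rolled dual-number type (names are illustrative, not from the source): multiplying by 1 + "v"ε reproduces the Galilean shear "x"′ = "x" + "vt", "t"′ = "t".

```python
# Dual-number multiplication (1 + v*eps)(t + x*eps) = t + (x + v*t)*eps,
# matching the shear mapping given above.
from dataclasses import dataclass

@dataclass
class Dual:
    re: float   # real part
    ep: float   # coefficient of eps, with eps**2 = 0
    def __mul__(self, other):
        return Dual(self.re * other.re,
                    self.re * other.ep + self.ep * other.re)

v, t, x = 2.0, 3.0, 1.0
boostlike = Dual(1.0, v)          # 1 + v*eps
event = Dual(t, x)                # t + x*eps
print(boostlike * event)          # Dual(re=3.0, ep=7.0)  ->  t' = t, x' = x + v*t
```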
Another view of special relativity as a non-Euclidean geometry was advanced by E. B. Wilson and Gilbert Lewis in "Proceedings of the American Academy of Arts and Sciences" in 1912. They revamped the analytic geometry implicit in the split-complex number algebra into synthetic geometry of premises and deductions.
Fiction.
Non-Euclidean geometry often makes appearances in works of science fiction and fantasy.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C = \\{ (x,y) : x,y \\isin \\mathbb{R} \\}"
},
{
"math_id": 1,
"text": "z z^\\ast = (x + y \\epsilon) (x - y \\epsilon) = x^2 + y^2"
},
{
"math_id": 2,
"text": "z z^\\ast = (x + y\\mathbf{j}) (x - y\\mathbf{j}) = x^2 - y^2 \\!"
},
{
"math_id": 3,
"text": "z = x + y \\epsilon, \\quad \\epsilon^2 = 0,"
},
{
"math_id": 4,
"text": "x^\\prime = x + vt,\\quad t^\\prime = t"
},
{
"math_id": 5,
"text": "\\begin{pmatrix}x' \\\\ t' \\end{pmatrix} = \\begin{pmatrix}1 & v \\\\ 0 & 1 \\end{pmatrix}\\begin{pmatrix}x \\\\ t \\end{pmatrix}."
},
{
"math_id": 6,
"text": "t^\\prime + x^\\prime \\epsilon = (1 + v \\epsilon)(t + x \\epsilon) = t + (x + vt)\\epsilon."
}
] |
https://en.wikipedia.org/wiki?curid=58610
|
58615411
|
Turing's method
|
In mathematics, Turing's method is used to verify that for any given Gram point "g""m" there lie "m" + 1 zeros of "ζ"("s"), in the region 0 < Im("s") < Im("g""m"), where "ζ"("s") is the Riemann zeta function. It was discovered by Alan Turing and published in 1953, although that proof contained errors and a correction was published in 1970 by R. Sherman Lehman.
For every integer "i" with "i" < "n" we find a list of Gram points formula_0 and a complementary list formula_1, where "g""i" is the smallest number such that
formula_2
where "Z"("t") is the Hardy Z function. Note that gi may be negative or zero. Assuming that formula_3 and there exists some integer "k" such that formula_4, then if
formula_5
and
formula_6
then the bound is achieved and there are exactly "m" + 1 zeros of "ζ"("s") in the region 0 < Im("s") < Im("g""m").
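The sign condition above can be explored numerically. The following is a hedged, illustrative sketch only (not a full implementation of Turing's method): it uses mpmath's grampoint and siegelz (the Hardy Z function) to search for small shifts satisfying (−1)^i Z(g_i + h_i) > 0; the shift search grid and bounds are arbitrary assumptions.

```python
import mpmath

def gram_shifts(i_min, i_max, step=0.05, max_shift=2.0):
    shifts = []
    for i in range(i_min, i_max + 1):
        g = mpmath.grampoint(i)          # the i-th Gram point
        h = 0.0
        while (-1) ** i * mpmath.siegelz(g + h) <= 0:
            # try shifts 0, +step, -step, +2*step, -2*step, ... in turn
            h = -h + step if h <= 0 else -h
            if abs(h) > max_shift:
                raise RuntimeError(f"no suitable shift found at Gram point {i}")
        shifts.append((i, float(g), h))
    return shifts

for i, g, h in gram_shifts(0, 9):
    print(i, round(g, 4), h)   # Gram's law holds for these indices, so every h is 0.0
```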
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\{g_i \\mid 0\\leqslant i \\leqslant m \\} "
},
{
"math_id": 1,
"text": " \\{h_i \\mid 0\\leqslant i \\leqslant m \\} "
},
{
"math_id": 2,
"text": " (-1)^i Z(g_i + h_i) > 0, "
},
{
"math_id": 3,
"text": " h_m = 0 "
},
{
"math_id": 4,
"text": " h_k = 0 "
},
{
"math_id": 5,
"text": " 1 + \\frac{1.91 + 0.114\\log(g_{m+k}/2\\pi) + \\sum_{j=m+1}^{m+k-1}h_j}{g_{m+k} - g_m} < 2, "
},
{
"math_id": 6,
"text": " -1 - \\frac{1.91 + 0.114\\log(g_m/2\\pi) + \\sum_{j=1}^{k-1}h_{m-j}}{g_m - g_{m-k}} > -2, "
}
] |
https://en.wikipedia.org/wiki?curid=58615411
|
5861757
|
Weak localization
|
Weak localization is a physical effect which occurs in disordered electronic systems at very low temperatures. The effect manifests itself as a "positive" correction to the resistivity of a metal or semiconductor. The name emphasizes the fact that weak localization is a precursor of Anderson localization, which occurs at strong disorder.
General principle.
The effect is quantum-mechanical in nature and has the following origin: In a disordered electronic system, the electron motion is diffusive rather than ballistic. That is, an electron does not move along a straight line, but experiences a series of random scatterings off impurities which results in a random walk.
The resistivity of the system is related to the probability of an electron to propagate between two given points in space. Classical physics assumes that the total probability is just the sum of the probabilities of the paths connecting the two points. However quantum mechanics tells us that to find the total probability we have to sum up the quantum-mechanical amplitudes of the paths rather than the probabilities themselves. Therefore, the correct (quantum-mechanical) formula for the probability for an electron to move from a point A to a point B includes the classical part (individual probabilities of diffusive paths) and a number of interference terms (products of the amplitudes corresponding to different paths). These interference terms effectively make it more likely that a carrier will "wander around in a circle" than it would otherwise, which leads to an "increase" in the net resistivity. The usual formula for the conductivity of a metal (the so-called Drude formula) corresponds to the former classical terms, while the weak localization correction corresponds to the latter quantum interference terms averaged over disorder realizations.
The weak localization correction can be shown to come mostly from quantum interference between self-crossing paths in which an electron can propagate in the clock-wise and counter-clockwise direction around a loop. Due to the identical length of the two paths along a loop, the quantum phases cancel each other exactly and these (otherwise random in sign) quantum interference terms survive disorder averaging. Since it is much more likely to find a self-crossing trajectory in low dimensions, the weak localization effect manifests itself much more strongly in low-dimensional systems (films and wires).
Weak anti-localization.
In a system with spin–orbit coupling, the spin of a carrier is coupled to its momentum. The spin of the carrier rotates as it goes around a self-intersecting path, and the direction of this rotation is opposite for the two directions about the loop. Because of this, the two paths along any loop interfere "destructively" which leads to a "lower" net resistivity.
In two dimensions.
In two dimensions the change in conductivity from applying a magnetic field, due to either weak localization or weak anti-localization can be described by the Hikami-Larkin-Nagaoka equation:
formula_0
where formula_1, and formula_2 are various relaxation times. This theoretically derived equation was soon restated in terms of characteristic fields, which are more directly experimentally relevant quantities:
formula_3
where the characteristic fields are:
formula_4
formula_5
formula_6
formula_7
where formula_8 is potential scattering, formula_9 is inelastic scattering, formula_10 is magnetic scattering, and formula_11 is spin–orbit scattering. Under certain conditions, this can be rewritten as:
formula_12
formula_13
formula_14
formula_15 is the digamma function. formula_16 is the phase coherence characteristic field, which is roughly the magnetic field required to destroy phase coherence, formula_17 is the spin–orbit characteristic field which can be considered a measure of the strength of the spin–orbit interaction and formula_18 is the elastic characteristic field. The characteristic fields are better understood in terms of their corresponding characteristic lengths which are deduced from formula_19. formula_20 can then be understood as the distance traveled by an electron before it loses phase coherence, formula_21 can be thought of as the distance traveled before the spin of the electron undergoes the effect of the spin–orbit interaction, and finally formula_22 is the mean free path.
In the limit of strong spin–orbit coupling formula_23, the equation above reduces to:
formula_24
In this equation formula_25 is -1 for weak antilocalization and +1/2 for weak localization.
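For illustration, the simplified strong spin–orbit formula above can be evaluated directly. The sketch below uses SciPy's digamma function; the values of the phase-coherence field and of α are arbitrary illustrative choices, not taken from any experiment.

```python
# Magnetoconductance correction sigma(B) - sigma(0) in the strong spin-orbit
# limit quoted above: alpha * e^2/(2 pi^2 hbar) * [ln(B_phi/B) - psi(1/2 + B_phi/B)].
import numpy as np
from scipy.special import digamma
from scipy.constants import e, hbar

def hln_correction(B, B_phi, alpha):
    """Sheet-conductance correction (siemens) as a function of field B (tesla)."""
    x = B_phi / B
    return alpha * e**2 / (2 * np.pi**2 * hbar) * (np.log(x) - digamma(0.5 + x))

B = np.linspace(0.01, 1.0, 5)                       # tesla
print(hln_correction(B, B_phi=0.05, alpha=-1.0))    # weak antilocalization, per the text
print(hln_correction(B, B_phi=0.05, alpha=+0.5))    # weak localization, per the text
```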
Magnetic field dependence.
The strength of either weak localization or weak anti-localization falls off quickly in the presence of a magnetic field, which causes carriers to acquire an additional phase as they move around paths.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sigma(B) - \\sigma(0) = - {e^2 \\over 2 \\pi^2 \\hbar} \\left [ \\psi ({1 \\over 2} + {1 \\over \\tau a})- \\psi ({1 \\over 2}+{1 \\over \\tau_1 a})+ {1 \\over 2}\\psi ({1 \\over 2}+{1 \\over \\tau_2 a})-{1 \\over 2}\\psi ({1 \\over 2}+{1 \\over \\tau_3 a})\\right]"
},
{
"math_id": 1,
"text": "a=4DeH/\\hbar c"
},
{
"math_id": 2,
"text": "\\tau,\\tau_1,\\tau_2,\\tau_3"
},
{
"math_id": 3,
"text": "\\sigma(B) - \\sigma(0) =-{e^2 \\over 2 \\pi^2 \\hbar} \\left [ \\psi({1 \\over 2} + {H_1 \\over H}) - \\psi({1 \\over 2} + {H_2 \\over H})+{1 \\over 2} \\psi({1 \\over 2} + {H_3 \\over H}) - {1 \\over 2}\\psi({1 \\over 2} + {H_4 \\over H})\\right ]"
},
{
"math_id": 4,
"text": "H_1=H_0+H_{SO}+H_s"
},
{
"math_id": 5,
"text": "H_2={4 \\over 3}H_{SO}+{2 \\over 3}H_S+H_i "
},
{
"math_id": 6,
"text": "H_3=2H_S+H_i"
},
{
"math_id": 7,
"text": "H_4={2 \\over 3}H_S +{4 \\over 3}H_{SO}+H_i"
},
{
"math_id": 8,
"text": "H_0"
},
{
"math_id": 9,
"text": "H_i"
},
{
"math_id": 10,
"text": "H_S"
},
{
"math_id": 11,
"text": "H_{SO}"
},
{
"math_id": 12,
"text": "\\sigma(B) - \\sigma(0) = + {e^2 \\over 2 \\pi^2 \\hbar} \\left [ \\ln \\left ( {B_\\phi \\over B}\\right ) - \\psi \\left ({1 \\over 2} + {B_\\phi \\over B} \\right ) \\right] "
},
{
"math_id": 13,
"text": "+ {e^2 \\over \\pi^2 \\hbar} \\left [ \\ln \\left ( {B_\\text{SO} + B_e \\over B}\\right ) - \\psi \\left ({1 \\over 2} + {B_\\text{SO} + B_e \\over B} \\right ) \\right] "
},
{
"math_id": 14,
"text": "- {3e^2 \\over 2 \\pi^2 \\hbar} \\left [ \\ln \\left ( {(4/3)B_\\text{SO} + B_\\phi \\over B}\\right ) - \\psi \\left ({1 \\over 2} + {(4/3)B_\\text{SO}+B_\\phi \\over B} \\right ) \\right]"
},
{
"math_id": 15,
"text": "\\psi"
},
{
"math_id": 16,
"text": "B_\\phi"
},
{
"math_id": 17,
"text": "B_\\text{SO}"
},
{
"math_id": 18,
"text": "B_e"
},
{
"math_id": 19,
"text": "{B_i = \\hbar / 4 e l_i^2}"
},
{
"math_id": 20,
"text": "l_\\phi"
},
{
"math_id": 21,
"text": "l_\\text{SO}"
},
{
"math_id": 22,
"text": "l_e"
},
{
"math_id": 23,
"text": "B_\\text{SO} \\gg B_\\phi"
},
{
"math_id": 24,
"text": "\\sigma(B) - \\sigma(0) = \\alpha {e^2 \\over 2 \\pi^2 \\hbar} \\left [ \\ln \\left ( {B_\\phi \\over B}\\right ) - \\psi \\left ({1 \\over 2} + {B_\\phi \\over B} \\right ) \\right] "
},
{
"math_id": 25,
"text": "\\alpha"
}
] |
https://en.wikipedia.org/wiki?curid=5861757
|
58619229
|
Dino Cube
|
The Dino Cube is a cubic twisty puzzle in the style of the Rubik's Cube. It was invented in 1985 by Robert Webb, though it was not mass-produced until ten years later. It has a total of 12 external movable pieces to rearrange, compared to 20 movable pieces on the Rubik's Cube.
History.
Robert Webb designed and made the first prototype of what would become the Dino Cube in 1985; his original prototype was made entirely out of paper. The puzzle has since been reinvented twice, but full mass production did not start until 1995. The first mass-produced version had pictures of dinosaurs depicted on each piece, which led to the adoption of the puzzle's current name of "Dino Cube". It is not known what the puzzle had been called before this dinosaur version was introduced. Later versions, however, adopted the practice of using standard single-colour stickers, in common with most other twisty puzzles.
Overview.
The Dino Cube is a twisty puzzle in the shape of a cube. It consists of 12 movable pieces, all of which are located on the edges of the cube.
The puzzle can be thought of as twisting around its corners: each "move" changes the position of three edge pieces adjacent to the same corner, by rotating them around that corner. There are in fact eight more "hidden" pieces inside the puzzle, which are located at the corners and are fixed to the puzzle's core; these pieces only become visible in the middle of a move.
The vast majority of mass-produced Dino Cubes have the standard six-colour scheme, with one colour on each face of the cube in the solved state. This is in common with most other cubic twisty puzzles, including the Rubik's Cube. However, a few versions with other colour schemes also exist, including one with four colours (where each colour is centred around one corner of the cube in the solved state) and one with just two colours (where each colour is present on half of the puzzle).
The purpose of the puzzle is to scramble the colours and then restore them to their original configuration, usually of one colour per face.
Solving.
The Dino Cube is considered to be one of the easiest twisty puzzles to solve. One of the things that make it so easy is the fact that each move only affects three edge pieces at once, which means it is easy to solve one part of the puzzle without disturbing what is already solved. In addition, each edge piece only has one possible orientation, meaning that if a given piece is in the correct position, it will always be orientated the correct way as well. Therefore, the solver never has to worry about changing the orientation of any pieces.
Although not obvious at first glance, the six-colour Dino Cube actually has two distinct configurations that represent a solved puzzle. The two solutions are mirror images of each other, and the only visual difference between them is their colour schemes; for example, one solution has the colours "Blue - Yellow - Red" going clockwise around one vertex, while in the other these colours go anticlockwise.
Mathematically, the puzzle is identical to Hoberman's BrainTwist, which is a tetrahedral puzzle that can "flip" inside out and reveal another set of four faces. Like the Dino Cube, the BrainTwist has twelve movable pieces, and each move rotates three pieces around one corner. It likewise has two distinct solutions: one with the same colour on each of the faces and one with the same colour at each of the corners.
Number of combinations.
The Dino Cube has twelve edge pieces, so there are naturally twelve possible positions for the first given edge; however, due to the lack of visible fixed "reference" pieces, all of these positions are rotationally symmetrical to each other. Therefore, the position of the first given edge is not taken into account.
The remaining eleven edge pieces can be permuted in 11! different ways relative to the first edge piece. Only even permutations of these pieces are possible (i.e. it is impossible to swap one pair of pieces while leaving the rest of the puzzle solved), which divides this count by 2.
The edge pieces cannot be flipped or misorientated (See "Solving"), therefore this is also not taken into account.
The total number of possible combinations on the six-colour Dino Cube is therefore equal to:
formula_0 formula_1 formula_2
This number is low compared to the number of combinations of the Rubik's Cube (which has over 4.3×1019 combinations) but still larger than many other puzzles in the Rubik's Cube family, notably the Pocket Cube (over 3.6 million combinations) and the Pyraminx (just over 930 thousand combinations, excluding rotations of the trivial tips).
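The count stated above is easy to verify directly; the following one-line check is purely illustrative.

```python
# 11 freely permutable edges, halved because only even permutations occur.
from math import factorial

print(factorial(11) // 2)   # 19958400
```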
Optimal solutions.
The number of possible configurations, 19 958 400, is sufficiently small to allow a computer search for optimal solutions. The table below summarises the result of such a search, stating the number "p" of positions that require "n" moves to solve the six-colour Dino Cube ("p2" for either solution, "p1" for one specific solution):
This table shows that God's number for the six-colour Dino Cube is 10 (when solving into either solution) or 11 (when solving into one specific solution).
Variations.
Several shape modifications of the Dino Cube exist. These include the aforementioned "BrainTwist", in the shape of a tetrahedron that can flip inside out, the "Platypus", whose shape is also based on a tetrahedron, the "Redi Cube", a cubic version with shallower cuts, and the "Rainbow Cube", in the shape of a cuboctahedron. The latter three puzzles, unlike the Dino Cube, also have their "core" pieces visible.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{11!}{2} = 19"
},
{
"math_id": 1,
"text": "958"
},
{
"math_id": 2,
"text": "400"
}
] |
https://en.wikipedia.org/wiki?curid=58619229
|
58619302
|
Hanoi graph
|
Pattern of states and moves in the Tower of Hanoi puzzle
In graph theory and recreational mathematics, the Hanoi graphs are undirected graphs whose vertices represent the possible states of the Tower of Hanoi puzzle, and whose edges represent allowable moves between pairs of states.
Construction.
The puzzle consists of a set of disks of different sizes, placed in increasing order of size on a fixed set of towers.
The Hanoi graph for a puzzle with formula_0 disks on formula_1 towers is denoted formula_2. Each state of the puzzle is determined by the choice of one tower for each disk, so the graph has formula_3 vertices.
In the moves of the puzzle, the smallest disk on one tower is moved either to an unoccupied tower or to a tower whose smallest disk is larger. If there are formula_4 unoccupied towers, the number of allowable moves is
formula_5
which ranges from a maximum of formula_6
to formula_8 (when all disks are on one tower and formula_4 is formula_8). Therefore, the degrees of the vertices in the Hanoi graph range from a maximum of formula_6 to a minimum of formula_8.
The total number of edges is
formula_9
For formula_10 (no disks) there is only one state of the puzzle and one vertex of the graph.
For formula_11, the Hanoi graph formula_2 can be decomposed into formula_1 copies of the smaller Hanoi graph formula_12, one for each placement of the largest disk. These copies are connected to each other only at states where the largest disk is free to move: it is the only disk in its tower, and some other tower is unoccupied.
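The construction can be checked by brute force for small numbers of disks and towers. The sketch below (function and variable names are illustrative, not taken from the literature) enumerates states as tuples assigning a tower to each disk and compares the resulting edge count with the formula above.

```python
from itertools import product
from math import comb

def hanoi_graph_edges(n, k):
    """Edge set of the Hanoi graph H^n_k; disk 0 is the smallest disk."""
    edges = set()
    for state in product(range(k), repeat=n):
        # topmost (smallest) disk on each occupied tower
        top = {}
        for disk in range(n - 1, -1, -1):   # larger disks first, overwritten by smaller
            top[state[disk]] = disk
        for tower, disk in top.items():
            for target in range(k):
                # move the top disk to an empty tower or onto a larger top disk
                if target != tower and (target not in top or top[target] > disk):
                    new_state = list(state)
                    new_state[disk] = target
                    edges.add(frozenset((state, tuple(new_state))))
    return edges

n, k = 3, 3
edges = hanoi_graph_edges(n, k)
assert len(edges) == comb(k, 2) * (k**n - (k - 2)**n) // 2
print(len(edges))   # 39 edges for H^3_3
```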
General properties.
Every Hanoi graph contains a Hamiltonian cycle.
The Hanoi graph formula_13 is a complete graph on formula_1 vertices. Because they contain complete graphs, all larger Hanoi graphs formula_2 require at least formula_1 colors in any graph coloring. They may be colored with exactly formula_1 colors by summing the indexes of the towers containing each disk, and using the sum modulo formula_1 as the color.
Three towers.
A particular case of the Hanoi graphs that has been well studied is that of the three-tower Hanoi graphs, formula_14. These graphs have 3^n vertices and, by the edge-count formula above, 3(3^n − 1)/2 edges.
They are penny graphs (the contact graphs of non-overlapping unit disks in the plane), with an arrangement of disks that resembles the Sierpinski triangle. One way of constructing this arrangement is to arrange the numbers of Pascal's triangle on the points of a hexagonal lattice, with unit spacing, and place a unit disk on each point whose number is odd.
The diameter of these graphs, and the length of the solution to the standard form of the Tower of Hanoi puzzle (in which the disks all start on one tower and must all move to one other tower) is formula_15.
More than three towers.
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
What is the diameter of the graphs formula_2 for formula_16?
For formula_16, the structure of the Hanoi graphs is not as well understood, and the diameter of these graphs is unknown.
When formula_17 and formula_18 or when formula_19 and formula_20, these graphs are nonplanar.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "H^n_k"
},
{
"math_id": 3,
"text": "k^n"
},
{
"math_id": 4,
"text": "u"
},
{
"math_id": 5,
"text": "\\binom{k}{2}-\\binom{u}{2},"
},
{
"math_id": 6,
"text": "\\tbinom{k}{2}"
},
{
"math_id": 7,
"text": "\\tbinom{u}{2}"
},
{
"math_id": 8,
"text": "k-1"
},
{
"math_id": 9,
"text": "\\frac{1}{2}\\binom{k}{2}\\bigl(k^n-(k-2)^n\\bigr)."
},
{
"math_id": 10,
"text": "k=0"
},
{
"math_id": 11,
"text": "k > 0"
},
{
"math_id": 12,
"text": "H^{n-1}_k"
},
{
"math_id": 13,
"text": "H^1_k"
},
{
"math_id": 14,
"text": "H^n_3"
},
{
"math_id": 15,
"text": "2^{n}-1"
},
{
"math_id": 16,
"text": "k > 3"
},
{
"math_id": 17,
"text": "k > 4"
},
{
"math_id": 18,
"text": "n > 0"
},
{
"math_id": 19,
"text": "k = 4"
},
{
"math_id": 20,
"text": "n > 2"
}
] |
https://en.wikipedia.org/wiki?curid=58619302
|
58621737
|
Minkowski sausage
|
Fractal first proposed by Hermann Minkowski
The Minkowski sausage or Minkowski curve is a fractal first proposed by and named for Hermann Minkowski, as well as for its casual resemblance to a sausage or sausage links. The initiator is a line segment and the generator is a broken line of eight parts, each one fourth the length of the segment.
The Sausage has a Hausdorff dimension of formula_0. It is therefore often chosen when studying the physical properties of non-integer fractal objects. It is strictly self-similar. It never intersects itself. It is continuous everywhere, but differentiable nowhere. It is not rectifiable. It has a Lebesgue measure of 0. The type 1 curve has a dimension of ≈ 1.46.
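The curve can be generated with an L-system; the sketch below assumes the commonly used rewriting rule F → F+F−F−FF+F+F−F with 90° turns, which replaces each segment by eight segments of one quarter the length, consistent with the stated dimension ln 8 / ln 4.

```python
# Minimal sketch: points of the Minkowski sausage from the assumed L-system rule.
def minkowski_points(iterations=3):
    s = "F"
    for _ in range(iterations):
        s = s.replace("F", "F+F-F-FF+F+F-F")
    x, y, dx, dy = 0.0, 0.0, 1.0, 0.0
    step = 0.25 ** iterations          # each iteration shrinks segments by 1/4
    pts = [(x, y)]
    for c in s:
        if c == "F":
            x, y = x + dx * step, y + dy * step
            pts.append((x, y))
        elif c == "+":                  # turn left 90 degrees
            dx, dy = -dy, dx
        elif c == "-":                  # turn right 90 degrees
            dx, dy = dy, -dx
    return pts

print(len(minkowski_points(2)))        # 8**2 + 1 = 65 points
```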
Multiple Minkowski sausages may be arranged in a four-sided polygon or square to create a quadratic Koch island, also known as a Minkowski island or [snow]flake.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\left ( \\ln8/\\ln4\\ \\right ) = 1.5 = 3/2"
}
] |
https://en.wikipedia.org/wiki?curid=58621737
|
58622264
|
Williams number
|
Class of numbers in number theory
In number theory, a Williams number base "b" is a natural number of the form formula_0 for integers "b" ≥ 2 and "n" ≥ 1. The Williams numbers base 2 are exactly the Mersenne numbers.
A Williams prime is a Williams number that is prime. They were considered by Hugh C. Williams.
It is conjectured that for every "b" ≥ 2, there are infinitely many Williams primes for base "b".
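A hedged sketch of a naive search for Williams primes of a given base follows, using SymPy's primality test; the exponent bound is an arbitrary illustrative choice.

```python
# List the exponents n <= n_max for which (b-1)*b**n - 1 is prime.
from sympy import isprime

def williams_prime_exponents(b, n_max=60):
    return [n for n in range(1, n_max + 1) if isprime((b - 1) * b**n - 1)]

print(williams_prime_exponents(2, 20))   # base 2 gives Mersenne primes: [2, 3, 5, 7, 13, 17, 19]
```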
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(b-1) \\cdot b^n-1"
}
] |
https://en.wikipedia.org/wiki?curid=58622264
|
58628753
|
Blockbusting (disambiguation)
|
Blockbusting is an unethical business practice used in the United States real estate market.
Blockbusting may also refer to:
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This disambiguation page lists articles associated with the title Blockbusting.
|
[
{
"math_id": 0,
"text": " 1 \\times n "
}
] |
https://en.wikipedia.org/wiki?curid=58628753
|
58628757
|
Blockbusting (game)
|
A combinatorial game solved using overheating
Blockbusting is a two-player game in which players alternate choosing squares from a line of squares, with one player aiming to choose as many pairs of adjacent squares as possible and the other player aiming to thwart this goal. Elwyn Berlekamp introduced it in 1987, as an example for a theoretical construction in combinatorial game theory.
Rules.
Blockbusting is a partisan game for two players, meaning that the roles of the two players are not symmetric. These two players are often known as Red and Blue (or Right and Left); they play the game on an formula_0 strip of squares called "parcels". Each player, in turn, claims and colors one previously unclaimed parcel until all parcels have been claimed.
At the end, Left's score is the number of pairs of neighboring parcels both of which he has claimed. Left therefore tries to maximize that number while Right tries to minimize it. Adjacent Right-Right pairs do not affect the score.
Although the purpose of the game is to further the study of combinatorial game theory, Berlekamp provides an interpretation alluding to the practice of blockbusting by real estate agents: the players may be seen as rival agents buying up all the parcels on a street, where Left is a segregationist trying to place clients as neighbors of one another
while Right is an integrationist trying to break up these segregated groups.
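The rules are simple enough that small strips can be solved by brute force. The sketch below is not Berlekamp's analysis; names and the choice to report both possible first players are illustrative. It computes Left's score under optimal play on a strip of n parcels.

```python
from functools import lru_cache

def blockbusting_value(n):
    """Left's optimal score on n parcels, for Left moving first and Right moving first."""
    @lru_cache(maxsize=None)
    def solve(state, left_to_move):
        # state is a tuple of 'U' (unclaimed), 'L', 'R'
        if 'U' not in state:
            return sum(1 for a, b in zip(state, state[1:]) if a == b == 'L')
        results = []
        for i, c in enumerate(state):
            if c == 'U':
                claim = 'L' if left_to_move else 'R'
                child = state[:i] + (claim,) + state[i + 1:]
                results.append(solve(child, not left_to_move))
        return max(results) if left_to_move else min(results)

    start = ('U',) * n
    return solve(start, True), solve(start, False)

for n in range(1, 7):
    print(n, blockbusting_value(n))
```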
Theory.
In introducing the game of Blockbusting in 1987, Elwyn Berlekamp also introduced overheating, an operation for analyzing the theory of combinatorial games, and used Blockbusting as an example for that operation.
The operation of overheating was later adapted by Berlekamp and David Wolfe
to warming to analyze the end-game of Go.
The analysis of Blockbusting may be used as the basis of a strategy for the combinatorial game of Domineering.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " n \\times 1 "
}
] |
https://en.wikipedia.org/wiki?curid=58628757
|
58629308
|
Cooling and heating (combinatorial game theory)
|
Operations adjusting incentives of combinatorial games
In combinatorial game theory, cooling, heating, and overheating are operations on hot games to make them more amenable to the traditional methods of the theory,
which was originally devised for cold games in which the winner is the last player to have a legal move.
Overheating was generalised by Elwyn Berlekamp for the analysis of Blockbusting.
Chilling (or unheating) and warming are variants used in the analysis of the endgame of Go.
Cooling and chilling may be thought of as a tax on the player who moves, making them pay for the privilege of doing so,
while heating, warming and overheating are operations that more or less reverse cooling and chilling.
Basic operations: cooling, heating.
The cooled game formula_0 ("formula_1 cooled by formula_2") for a game formula_1 and a (surreal) number formula_2 is defined by
formula_3.
The amount formula_2 by which formula_1 is cooled is known as the "temperature"; the minimum formula_4 for which formula_5 is infinitesimally close to formula_6 is known as the "temperature" formula_7 "of" formula_1; formula_1 is said to "freeze" to formula_5; formula_6 is the "mean value" (or simply "mean") of formula_1.
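As a concrete illustration restricted to the simplest hot games, switches {a | b} with numbers a ≥ b (an assumption; this is not a general implementation of the theory), cooling by t gives {a − t | b + t} until the options meet, after which the game freezes to its mean value (a + b)/2 at temperature (a − b)/2:

```python
# Minimal sketch for switches G = {a | b} with numbers a >= b.
def cool_switch(a, b, t):
    mean, temperature = (a + b) / 2, (a - b) / 2
    if t >= temperature:
        # at t = temperature the cooled game is only infinitesimally close to
        # the mean; for larger t it is defined to equal the mean
        return mean
    return (a - t, b + t)               # still a hot game {a - t | b + t}

print(cool_switch(3, -1, 0.5))   # (2.5, -0.5)
print(cool_switch(3, -1, 2.0))   # 1.0 -> mean value; the temperature of {3 | -1} is 2
```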
Heating is the inverse of cooling and is defined as the "integral"
formula_8
Multiplication and overheating.
Norton multiplication is an extension of multiplication to a game formula_1 and a positive game formula_9 (the "unit")
defined by
formula_10
The incentives formula_11 of a game formula_9 are defined as formula_12.
Overheating is an extension of heating used in Berlekamp's solution of Blockbusting,
where formula_1 "overheated from" formula_13 "to" formula_2 is defined for arbitrary games formula_14 with formula_15 as
formula_16
"Winning Ways" also defines overheating of a game formula_1 by a positive game formula_17, as
formula_18
Note that in this definition numbers are not treated differently from arbitrary games.
Note that the "lower bound" 0 distinguishes this from the previous definition by Berlekamp
Operations for Go: chilling and warming.
Chilling is a variant of cooling by formula_19 used to analyse the endgame of Go, and is defined by
formula_20
This is equivalent to cooling by formula_21 when formula_1 is an "even elementary Go position in canonical form".
Warming is a special case of overheating, namely formula_22, normally written simply as formula_23, which inverts chilling when formula_1 is an "even elementary Go position in canonical form".
In this case the previous definition simplifies to the form
formula_24
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " G_t "
},
{
"math_id": 1,
"text": " G "
},
{
"math_id": 2,
"text": " t "
},
{
"math_id": 3,
"text": "\nG_t = \\begin{cases} \\{ G^L_t - t \\mid G^R_t + t \\} & \\text { for all numbers } t \\leq \\text{ any number } \\tau \\text{ for which } G_\\tau \\text{ is infinitesimally close to some number } m \\text{ , }\\\\\n m & \\text{ for } t > \\tau\n\\end{cases}\n"
},
{
"math_id": 4,
"text": " \\tau "
},
{
"math_id": 5,
"text": " G_\\tau "
},
{
"math_id": 6,
"text": " m "
},
{
"math_id": 7,
"text": " t(G) "
},
{
"math_id": 8,
"text": "\n\\int^t G = \\begin{cases} G & \\text{ if } G \\text{ is a number, } \\\\\n\\{ \\int^t (G^L) + t \\mid \\int^t (G^R) - t \\} & \\text{ otherwise. }\n\\end{cases}\n"
},
{
"math_id": 9,
"text": " U "
},
{
"math_id": 10,
"text": "\nG.U = \\begin{cases} G \\times U & \\text{ (i.e. the sum of } G \\text{ copies of } U \\text{) if } G \\text{ is a non-negative integer, } \\\\\n-G \\times -U & \\text{ if } G \\text{ is a negative integer, } \\\\\n\\{ G^L.U + (U + I) \\mid G^R.U - (U + I) \\} \\text { where } I \\text { ranges over } \\Delta (U) & \\text{ otherwise. }\n\\end{cases}\n"
},
{
"math_id": 11,
"text": " \\Delta (U) "
},
{
"math_id": 12,
"text": " \\{ u - U : u \\in U^L \\} \\cup \\{ U - u : u \\in U^R \\} "
},
{
"math_id": 13,
"text": " s "
},
{
"math_id": 14,
"text": " G, s, t "
},
{
"math_id": 15,
"text": " s > 0 "
},
{
"math_id": 16,
"text": "\n\\int_s^t G = \\begin{cases} G . s & \\text{ if } G \\text{ is an integer, } \\\\\n\\{ \\int_s^t (G^L) + t \\mid \\int_s^t (G^R) - t \\} & \\text{ otherwise. }\n\\end{cases}\n"
},
{
"math_id": 17,
"text": " X "
},
{
"math_id": 18,
"text": "\n\\int_0^t G = \\left\\{ \\int_0^t (G^L) + X \\mid \\int_0^t (G^R) - X \\right\\}\n"
},
{
"math_id": 19,
"text": " 1 "
},
{
"math_id": 20,
"text": "\nf(G) = \\begin{cases} m & \\text{ if } G \\text{ is of the form } m \\text{ or } m *, \\\\\n\\{ f(G^L) - 1 \\mid f(G^R) + 1 \\} & \\text{ otherwise.}\n\\end{cases}\n"
},
{
"math_id": 21,
"text": "1"
},
{
"math_id": 22,
"text": " \\int_{1*}^1 "
},
{
"math_id": 23,
"text": " \\int "
},
{
"math_id": 24,
"text": "\n\\int G = \\begin{cases} G & \\text{ if } G \\text{ is an even integer, } \\\\\nG * & \\text{ if } G \\text{ is an odd integer, } \\\\\n\\{ \\int (G^L) + 1 \\mid \\int (G^R) - 1 \\} & \\text{ otherwise. }\n\\end{cases}\n"
}
] |
https://en.wikipedia.org/wiki?curid=58629308
|
5863
|
Copenhagen interpretation
|
Interpretation of quantum mechanics
The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics, stemming from the work of Niels Bohr, Werner Heisenberg, Max Born, and others. While "Copenhagen" refers to the Danish city, the use as an "interpretation" was apparently coined by Heisenberg during the 1950s to refer to ideas developed in the 1925–1927 period, glossing over his disagreements with Bohr. Consequently, there is no definitive historical statement of what the interpretation entails.
Features common across versions of the Copenhagen interpretation include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states that objects have certain pairs of complementary properties that cannot all be observed or measured simultaneously. Moreover, the act of "observing" or "measuring" an object is irreversible, and no truth can be attributed to an object except according to the results of its measurement (that is, the Copenhagen interpretation rejects counterfactual definiteness). Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of physicists' personal beliefs and other arbitrary mental factors.
Over the years, there have been many objections to aspects of Copenhagen-type interpretations, including the discontinuous and stochastic nature of the "observation" or "measurement" process, the difficulty of defining what might count as a measuring device, and the seeming reliance upon classical physics in describing such devices. Still, including all the variations, the interpretation remains one of the most commonly taught.
Background.
Starting in 1900, investigations into atomic and subatomic phenomena forced a revision to the basic concepts of classical physics. However, it was not until a quarter-century had elapsed that the revision reached the status of a coherent theory. During the intervening period, now known as the time of the "old quantum theory", physicists worked with approximations and heuristic corrections to classical physics. Notable results from this period include Max Planck's calculation of the blackbody radiation spectrum, Albert Einstein's explanation of the photoelectric effect, Einstein and Peter Debye's work on the specific heat of solids, Niels Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, Bohr's model of the hydrogen atom and Arnold Sommerfeld's extension of the Bohr model to include relativistic effects. From 1922 through 1925, this method of heuristic corrections encountered increasing difficulties; for example, the Bohr–Sommerfeld model could not be extended from hydrogen to the next simplest case, the helium atom.
The transition from the old quantum theory to full-fledged quantum physics began in 1925, when Werner Heisenberg presented a treatment of electron behavior based on discussing only "observable" quantities, meaning to Heisenberg the frequencies of light that atoms absorbed and emitted. Max Born then realized that in Heisenberg's theory, the classical variables of position and momentum would instead be represented by matrices, mathematical objects that can be multiplied together like numbers with the crucial difference that the order of multiplication matters. Erwin Schrödinger presented an equation that treated the electron as a wave, and Born discovered that the way to successfully interpret the wave function that appeared in the Schrödinger equation was as a tool for calculating probabilities.
Quantum mechanics cannot easily be reconciled with everyday language and observation, and has often seemed counter-intuitive to physicists, including its inventors. The ideas grouped together as the Copenhagen interpretation suggest a way to think about how the mathematics of quantum theory relates to physical reality.
Origin and use of the term.
The 'Copenhagen' part of the term refers to the city of Copenhagen in Denmark. During the mid-1920s, Heisenberg had been an assistant to Bohr at his institute in Copenhagen. Together they helped originate quantum mechanical theory. At the 1927 Solvay Conference, a dual talk by Max Born and Heisenberg declared "we consider quantum mechanics to be a closed theory, whose fundamental physical and mathematical assumptions are no longer susceptible of any modification." In 1929, Heisenberg gave a series of invited lectures at the University of Chicago explaining the new field of quantum mechanics. The lectures then served as the basis for his textbook, "The Physical Principles of the Quantum Theory", published in 1930. In the book's preface, Heisenberg wrote:
On the whole, the book contains nothing that is not to be found in previous publications, particularly in the investigations of Bohr. The purpose of the book seems to me to be fulfilled if it contributes somewhat to the diffusion of that 'Kopenhagener Geist der Quantentheorie' [Copenhagen spirit of quantum theory] if I may so express myself, which has directed the entire development of modern atomic physics.
The term 'Copenhagen interpretation' suggests something more than just a spirit, such as some definite set of rules for interpreting the mathematical formalism of quantum mechanics, presumably dating back to the 1920s. However, no such text exists, and the writings of Bohr and Heisenberg contradict each other on several important issues. It appears that the particular term, with its more definite sense, was coined by Heisenberg around 1955, while criticizing alternative "interpretations" (e.g., David Bohm's) that had been developed. Lectures with the titles 'The Copenhagen Interpretation of Quantum Theory' and 'Criticisms and Counterproposals to the Copenhagen Interpretation', that Heisenberg delivered in 1955, are reprinted in the collection "Physics and Philosophy". Before the book was released for sale, Heisenberg privately expressed regret for having used the term, due to its suggestion of the existence of other interpretations, that he considered to be "nonsense". In a 1960 review of Heisenberg's book, Bohr's close collaborator Léon Rosenfeld called the term an "ambiguous expression" and suggested it be discarded. However, this did not come to pass, and the term entered widespread use.
Principles.
There is no uniquely definitive statement of the Copenhagen interpretation. The term encompasses the views developed by a number of scientists and philosophers during the second quarter of the 20th century. This lack of a single, authoritative source that establishes the Copenhagen interpretation is one difficulty with discussing it; another complication is that the philosophical background familiar to Einstein, Bohr, Heisenberg, and contemporaries is much less so to physicists and even philosophers of physics in more recent times. Bohr and Heisenberg never totally agreed on how to understand the mathematical formalism of quantum mechanics, and Bohr distanced himself from what he considered Heisenberg's more subjective interpretation. Bohr offered an interpretation that is independent of a subjective observer, or measurement, or collapse; instead, an "irreversible" or effectively irreversible process causes the decay of quantum coherence which imparts the classical behavior of "observation" or "measurement".
Different commentators and researchers have associated various ideas with the term. Asher Peres remarked that very different, sometimes opposite, views are presented as "the Copenhagen interpretation" by different authors. N. David Mermin coined the phrase "Shut up and calculate!" to summarize Copenhagen-type views, a saying often misattributed to Richard Feynman and which Mermin later found insufficiently nuanced. Mermin described the Copenhagen interpretation as coming in different "versions", "varieties", or "flavors".
Some basic principles generally accepted as part of the interpretation include the following:
Hans Primas and Roland Omnès give a more detailed breakdown that, in addition to the above, includes the following:
There are some fundamental agreements and disagreements between the views of Bohr and Heisenberg. For example, Heisenberg emphasized a sharp "cut" between the observer (or the instrument) and the system being observed, while Bohr offered an interpretation that is independent of a subjective observer or measurement or collapse, which relies on an "irreversible" or effectively irreversible process, which could take place within the quantum system.
Another issue of importance where Bohr and Heisenberg disagreed is wave–particle duality. Bohr maintained that the distinction between a wave view and a particle view was defined by a distinction between experimental setups, whereas Heisenberg held that it was defined by the possibility of viewing the mathematical formulas as referring to waves or particles. Bohr thought that a particular experimental setup would display either a wave picture or a particle picture, but not both. Heisenberg thought that every mathematical formulation was capable of both wave and particle interpretations.
Nature of the wave function.
A wave function is a mathematical entity that provides a probability distribution for the outcomes of each possible measurement on a system. Knowledge of the wave function together with the rules for the system's evolution in time exhausts all that can be predicted about the system's behavior. Generally, Copenhagen-type interpretations deny that the wave function provides a directly apprehensible image of an ordinary material body or a discernible component of some such, or anything more than a theoretical concept.
Probabilities via the Born rule.
The Born rule is essential to the Copenhagen interpretation. Formulated by Max Born in 1926, it gives the probability that a measurement of a quantum system will yield a given result. In its simplest form, it states that the probability density of finding a particle at a given point, when measured, is proportional to the square of the magnitude of the particle's wave function at that point.
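As a minimal numerical illustration (not from the source), the Born-rule probabilities for a two-state system are the squared magnitudes of its normalised amplitudes:

```python
import numpy as np

psi = np.array([1.0, 1.0j * np.sqrt(3)])      # unnormalised amplitudes (illustrative)
psi = psi / np.linalg.norm(psi)               # normalise the state
probs = np.abs(psi) ** 2                      # Born rule: probability = |amplitude|^2
print(probs)                                  # [0.25 0.75], summing to 1
```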
Collapse.
The concept of wave function collapse postulates that the wave function of a system can change suddenly and discontinuously upon measurement. Prior to a measurement, a wave function involves the various probabilities for the different potential outcomes of that measurement. But when the apparatus registers one of those outcomes, no traces of the others linger. Since Bohr did not view the wavefunction as something physical, he never talks about "collapse". Nevertheless, many physicists and philosophers associate collapse with the Copenhagen interpretation.
Heisenberg spoke of the wave function as representing available knowledge of a system, and did not use the term "collapse", but instead termed it "reduction" of the wave function to a new state representing the change in available knowledge which occurs once a particular phenomenon is registered by the apparatus.
Role of the observer.
Because they assert that the existence of an observed value depends upon the intercession of the observer, Copenhagen-type interpretations are sometimes called "subjective". All of the original Copenhagen protagonists considered the process of observation as mechanical and independent of the individuality of the observer. Wolfgang Pauli, for example, insisted that measurement results could be obtained and recorded by "objective registering apparatus". As Heisenberg wrote,
<templatestyles src="Template:Blockquote/styles.css" />
In the 1970s and 1980s, the theory of decoherence helped to explain the appearance of quasi-classical realities emerging from quantum theory, but was insufficient to provide a technical explanation for the apparent wave function collapse.
Completion by hidden variables?
In metaphysical terms, the Copenhagen interpretation views quantum mechanics as providing knowledge of phenomena, but not as pointing to 'really existing objects', which it regards as residues of ordinary intuition. This makes it an epistemic theory. This may be contrasted with Einstein's view, that physics should look for 'really existing objects', making itself an ontic theory.
The metaphysical question is sometimes asked: "Could quantum mechanics be extended by adding so-called "hidden variables" to the mathematical formalism, to convert it from an epistemic to an ontic theory?" The Copenhagen interpretation answers this with a strong 'No'. It is sometimes alleged, for example by J.S. Bell, that Einstein opposed the Copenhagen interpretation because he believed that the answer to that question of "hidden variables" was "yes". By contrast, Max Jammer writes "Einstein never proposed a hidden variable theory." Einstein explored the possibility of a hidden variable theory, and wrote a paper describing his exploration, but withdrew it from publication because he felt it was faulty.
Acceptance among physicists.
During the 1930s and 1940s, views about quantum mechanics attributed to Bohr and emphasizing complementarity became commonplace among physicists. Textbooks of the time generally maintained the principle that the numerical value of a physical quantity is not meaningful or does not exist until it is measured. Prominent physicists associated with Copenhagen-type interpretations have included Lev Landau, Wolfgang Pauli, Rudolf Peierls, Asher Peres, Léon Rosenfeld, and Ray Streater.
Throughout much of the 20th century, the Copenhagen tradition had overwhelming acceptance among physicists. According to a very informal poll (some people voted for multiple interpretations) conducted at a quantum mechanics conference in 1997, the Copenhagen interpretation remained the most widely accepted label that physicists applied to their own views. A similar result was found in a poll conducted in 2011.
Consequences.
The nature of the Copenhagen interpretation is exposed by considering a number of experiments and paradoxes.
Schrödinger's cat.
This thought experiment highlights the implications that accepting uncertainty at the microscopic level has on macroscopic objects. A cat is put in a sealed box, with its life or death made dependent on the state of a subatomic particle. Thus a description of the cat during the course of the experiment—having been entangled with the state of a subatomic particle—becomes a "blur" of "living and dead cat." But this cannot be accurate because it implies the cat is actually both dead and alive until the box is opened to check on it. But the cat, if it survives, will only remember being alive. Schrödinger resists "so naively accepting as valid a 'blurred model' for representing reality." "How can the cat be both alive and dead?"
In Copenhagen-type views, the wave function reflects our knowledge of the system. The wave function formula_0 means that, once the cat is observed, there is a 50% chance it will be dead, and 50% chance it will be alive. (Some versions of the Copenhagen interpretation reject the idea that a wave function can be assigned to a physical system that meets the everyday definition of "cat"; in this view, the correct quantum-mechanical description of the cat-and-particle system must include a superselection rule.)
Wigner's friend.
"Wigner's friend" is a thought experiment intended to make that of Schrödinger's cat more striking by involving two conscious beings, traditionally known as Wigner and his friend. (In more recent literature, they may also be known as Alice and Bob, per the convention of describing protocols in information theory.) Wigner puts his friend in with the cat. The external observer believes the system is in state formula_0. However, his friend is convinced that the cat is alive, i.e. for him, the cat is in the state formula_1. "How can Wigner and his friend see different wave functions?"
In a Heisenbergian view, the answer depends on the positioning of the Heisenberg cut, which can be placed arbitrarily (at least according to Heisenberg, though not to Bohr). If Wigner's friend is positioned on the same side of the cut as the external observer, his measurements collapse the wave function for both observers. If he is positioned on the cat's side, his interaction with the cat is not considered a measurement. Different Copenhagen-type interpretations take different positions as to whether observers can be placed on the quantum side of the cut.
Double-slit experiment.
In the basic version of this experiment, a light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles. However, the light is always found to be absorbed at the screen at discrete points, as individual particles (not waves); the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave). Such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through.
According to Bohr's complementarity principle, light is neither a wave nor a stream of particles. A particular experiment can demonstrate particle behavior (passing through a definite slit) or wave behavior (interference), but not both at the same time.
The same experiment has been performed for light, electrons, atoms, and molecules. The extremely small de Broglie wavelength of objects with larger mass makes experiments increasingly difficult, but in general quantum mechanics considers all matter as possessing both particle and wave behaviors.
Einstein–Podolsky–Rosen paradox.
This thought experiment involves a pair of particles prepared in what later authors would refer to as an entangled state. In a 1935 paper, Einstein, Boris Podolsky, and Nathan Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If instead the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is forbidden by the theory of relativity. They invoked a principle, later known as the "Einstein–Podolsky–Rosen (EPR) criterion of reality", positing that, "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity". From this, they inferred that the second particle must have a definite value of position and of momentum prior to either being measured.
Bohr's response to the EPR paper was published in the "Physical Review" later that same year. He argued that EPR had reasoned fallaciously. Because measurements of position and of momentum are complementary, making the choice to measure one excludes the possibility of measuring the other. Consequently, a fact deduced regarding one arrangement of laboratory apparatus could not be combined with a fact deduced by means of the other, and so, the inference of predetermined position and momentum values for the second particle was not valid. Bohr concluded that EPR's "arguments do not justify their conclusion that the quantum description turns out to be essentially incomplete."
Criticism.
Incompleteness and indeterminism.
Einstein was an early and persistent supporter of objective reality. Bohr and Heisenberg advanced the position that no physical property could be understood without an act of measurement, while Einstein refused to accept this. Abraham Pais recalled a walk with Einstein when the two discussed quantum mechanics: "Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it." While Einstein did not doubt that quantum mechanics was a correct physical theory in that it gave correct predictions, he maintained that it could not be a "complete" theory. The most famous product of his efforts to argue the incompleteness of quantum theory is the Einstein–Podolsky–Rosen thought experiment, which was intended to show that physical properties like position and momentum have values even if not measured. The argument of EPR was not generally persuasive to other physicists.
Carl Friedrich von Weizsäcker, while participating in a colloquium at Cambridge, denied that the Copenhagen interpretation asserted "What cannot be observed does not exist". Instead, he suggested that the Copenhagen interpretation follows the principle "What is observed certainly exists; about what is not observed we are still free to make suitable assumptions. We use that freedom to avoid paradoxes."
Einstein was likewise dissatisfied with the indeterminism of quantum theory. Regarding the possibility of randomness in nature, Einstein said that he was "convinced that He [God] does not throw dice." Bohr, in response, reputedly said that "it cannot be for us to tell God, how he is to run the world".
The Heisenberg cut.
Much criticism of Copenhagen-type interpretations has focused on the need for a classical domain where observers or measuring devices can reside, and the imprecision of how the boundary between quantum and classical might be defined. This boundary came to be termed the Heisenberg cut (while John Bell derisively called it the "shifty split"). As typically portrayed, Copenhagen-type interpretations involve two different kinds of time evolution for wave functions, the deterministic flow according to the Schrödinger equation and the probabilistic jump during measurement, without a clear criterion for when each kind applies. Why should these two different processes exist, when physicists and laboratory equipment are made of the same matter as the rest of the universe? And if there is somehow a split, where should it be placed? Steven Weinberg writes that the traditional presentation gives "no way to locate the boundary between the realms in which [...] quantum mechanics does or does not apply."
The problem of thinking in terms of classical measurements of a quantum system becomes particularly acute in the field of quantum cosmology, where the quantum system is the universe. How does an observer stand outside the universe in order to measure it, and who was there to observe the universe in its earliest stages? Advocates of Copenhagen-type interpretations have disputed the seriousness of these objections. Rudolf Peierls noted that "the observer does not have to be contemporaneous with the event"; for example, we study the early universe through the cosmic microwave background, and we can apply quantum mechanics to that just as well as to any electromagnetic field. Likewise, Asher Peres argued that physicists "are", conceptually, outside those degrees of freedom that cosmology studies, and applying quantum mechanics to the radius of the universe while neglecting the physicists in it is no different from quantizing the electric current in a superconductor while neglecting the atomic-level details.
<templatestyles src="Template:Blockquote/styles.css" />You may object that there is only one universe, but likewise there is only one SQUID in my laboratory.
Alternatives.
A large number of alternative interpretations have appeared, sharing some aspects of the Copenhagen interpretation while providing alternatives to other aspects.
The ensemble interpretation is similar; it offers an interpretation of the wave function, but not for single particles. The consistent histories interpretation advertises itself as "Copenhagen done right". More recently, interpretations inspired by quantum information theory like QBism and relational quantum mechanics have appeared. Experts on quantum foundational issues continue to favor the Copenhagen interpretation over other alternatives. Physicists who have suggested that the Copenhagen tradition needs to be built upon or extended include Rudolf Haag and Anton Zeilinger.
Under realism and determinism, if the wave function is regarded as ontologically real, and collapse is entirely rejected, a many-worlds interpretation results. If wave function collapse is regarded as ontologically real as well, an objective collapse theory is obtained. Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position, that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. The transactional interpretation is also explicitly nonlocal.
Some physicists espoused views in the "Copenhagen spirit" and then went on to advocate other interpretations. For example, David Bohm and Alfred Landé both wrote textbooks that put forth ideas in the Bohr–Heisenberg tradition, and later promoted nonlocal hidden variables and an ensemble interpretation respectively. John Archibald Wheeler began his career as an "apostle of Niels Bohr"; he then supervised the PhD thesis of Hugh Everett that proposed the many-worlds interpretation. After supporting Everett's work for several years, he began to distance himself from the many-worlds interpretation in the 1970s. Late in life, he wrote that while the Copenhagen interpretation might fairly be called "the fog from the north", it "remains the best interpretation of the quantum that we have".
Other physicists, while influenced by the Copenhagen tradition, have expressed frustration at how it took the mathematical formalism of quantum theory as given, rather than trying to understand how it might arise from something more fundamental. (E. T. Jaynes described the mathematical formalism of quantum physics as "a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature—all scrambled up together by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble".) This dissatisfaction has motivated new interpretative variants as well as technical work in quantum foundations.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(|\\text{dead}\\rangle + |\\text{alive}\\rangle)/\\sqrt 2"
},
{
"math_id": 1,
"text": "|\\text{alive}\\rangle"
}
] |
https://en.wikipedia.org/wiki?curid=5863
|
58635628
|
Star Thrust Experiment
|
Plasma Physics Experiment
The Star Thrust Experiment (STX) was a plasma physics experiment at the University of Washington's Redmond Plasma Physics Laboratory which ran from 1999 to 2001. The experiment studied magnetic plasma confinement to support controlled nuclear fusion experiments. Specifically, STX pioneered the possibility of forming a Field-reversed configuration (FRC) by using a Rotating Magnetic Field (RMF).
Background.
FRCs are of interest to the plasma physics community because of their confinement properties and their small size. While most large fusion experiments in the world are tokamaks, FRCs are seen as a viable alternative because of their higher Beta, meaning the same power output could be produced from a smaller volume of plasma, and their good plasma stability.
History.
The STX was built in 1998. The STX was motivated by a discovery from an unrelated experiment; a few years previously, the Large-S Experiment (LSX) had demonstrated the existence of a kinetically stabilized parameter regime which appeared advantageous for a fusion reactor. However, the LSX experiment formed FRCs in a power-hungry, violent way called a theta-pinch.
The US Department of Energy funded the Translation Confinement Sustainment (TCS) program as a follow-on to the LSX program, but it had not yet begun when the STX started operation. The purpose of TCS was to see whether Rotating Magnetic Fields could sustain FRCs born of the theta-pinch method, but the question remained as to whether RMF alone could form FRCs. If so, this was expected to be a lighter, more efficient means of FRC formation. This was the question that the STX was meant to answer.
The STX was contemporary with the following RMF-FRC experiments: The TCS, the PFRC, and the PV Rotamak.
Relevance to spacecraft propulsion.
NASA funded the construction of the experiment because FRC-based fusion reactors, especially those formed by RMF, appear to be well-suited to deep-space fusion rockets. This concept is similar to the Direct Fusion Drive, a current research project to create a fusion rocket from an RMF-driven FRC fusion reactor.
Apparatus.
The STX vacuum vessel was made of quartz, as it needed to be non-conductive to allow the RMF to pass through. It was 3 meters long and 40 centimeters in diameter. The axial magnetic field was created by electromagnetic coils and was 100 Gauss in strength. The RMF was created by a novel solid-state RF amplifier designed to be more powerful and more efficient than those used in preceding Rotamak experiments. The RMF system as run operated at 350 kHz and 2 MW of power, far below its design rating.
To measure the plasma's behavior, the STX experiment was fitted with an insertable magnetic probe, an array of diamagnetic loops, an interferometer, visible-light spectroscopy diagnostics, and a triple Langmuir probe.
Contributions.
The STX experiment was able to use RMF to achieve temperatures of 40 eV, which is hotter than the surface of the Sun but still a factor of roughly 500 below the temperatures necessary in a fusion reactor. The STX experiment was able to achieve a plasma density of formula_0 particles per cubic centimeter, which is a factor of roughly 200 below the densities necessary in a fusion reactor.
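A rough arithmetic sketch of the quoted shortfalls, using assumed reactor targets of roughly 20 keV and about 10^15 particles per cubic centimeter (order-of-magnitude figures assumed for illustration, not values taken from this article):

```python
achieved_T_eV = 40        # temperature reached on STX
achieved_n_cm3 = 5e12     # density reached on STX, particles per cm^3

reactor_T_eV = 20_000     # assumed reactor-grade temperature (~20 keV)
reactor_n_cm3 = 1e15      # assumed reactor-grade density

print(round(reactor_T_eV / achieved_T_eV))    # ~500: the temperature gap quoted above
print(round(reactor_n_cm3 / achieved_n_cm3))  # ~200: the density gap quoted above
```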
While the STX was designed to demonstrate the formation of an FRC using RMF, it had more success in demonstrating the build-up and sustainment of FRCs created via the theta-pinch method.
Shortcomings.
An FRC plasma is harder to heat at low temperatures, where radiative losses are largest. Because of this, the RMF system on the STX was designed to produce dozens of MW at the beginning of the discharge to rapidly heat the plasma beyond this so-called "radiation barrier" to hundreds of eV of temperature, where the plasma could be more easily sustained. However, problems with the novel solid-state RF amplifier led to only a fraction of this power being available for heating. As a result, rather than the hundreds of eV hoped for, only 40 eV of temperature was achieved.
Furthermore, it was initially hoped that the plasma could be kept away from the walls of the vacuum vessel by using low-resistance loops of copper that fit snugly around the vessel, called "flux conservers." However, the plasma was often observed to be in contact with the 40 cm inner diameter quartz vessel.
Legacy.
The findings of STX were used to improve the TCS experiment, which eventually did demonstrate FRC formation solely from RMF. The TCS went on to heat the plasma to 350 eV.
The idea of using an RMF-driven FRC to create a fusion rocket persists to this day. One example is the Direct Fusion Drive.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "5 \\times 10 ^ {12} "
}
] |
https://en.wikipedia.org/wiki?curid=58635628
|
58635754
|
BNR Prolog
|
Constraint logic programming language
BNR Prolog, also known as CLP(BNR), is a declarative constraint logic programming language based on relational interval arithmetic developed at Bell-Northern Research in the 1980s and 1990s. Embedding relational interval arithmetic in a logic programming language differs from other constraint logic programming
(CLP) systems like CLP(R) or Prolog-III in that it does not perform any symbolic processing. BNR Prolog was the first such implementation of interval arithmetic in a logic programming language. Since the constraint propagation is performed on real interval values, it is possible to express and partially solve non-linear equations.
Example rule.
The simultaneous equations:
formula_0
formula_1
are expressed in CLP(BNR) as:
?- {X>=0,Y>=0, tan(X)==Y, X**2 + Y**2 == 5}.
and a typical implementation's response would be:
<samp>
X = _58::real(1.0966681287054703,1.0966681287054718),<br>
Y = _106::real(1.9486710896099515,1.9486710896099542).
<br>
Yes<br>
</samp>
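As a quick cross-check outside CLP(BNR), the midpoints of the reported intervals can be verified numerically; the sketch below (plain Python, an illustration added here rather than part of BNR Prolog itself) confirms they satisfy both constraints:

```python
import math

x = (1.0966681287054703 + 1.0966681287054718) / 2   # midpoint of the X interval
y = (1.9486710896099515 + 1.9486710896099542) / 2   # midpoint of the Y interval

print(abs(math.tan(x) - y))      # effectively zero: tan(X) == Y holds
print(abs(x**2 + y**2 - 5))      # effectively zero: X**2 + Y**2 == 5 holds
```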
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \n\\tan x = y "
},
{
"math_id": 1,
"text": "\nx^2 + y^2 = 5\n"
}
] |
https://en.wikipedia.org/wiki?curid=58635754
|
58638781
|
Diagonal morphism (algebraic geometry)
|
In algebraic geometry, given a morphism of schemes formula_0, the diagonal morphism
formula_1
is a morphism determined by the universal property of the fiber product formula_2 of "p" and "p" applied to the identity formula_3 and the identity formula_4.
It is a special case of a graph morphism: given a morphism formula_5 over "S", its graph morphism is formula_6, induced by formula_7 and the identity formula_4. The diagonal embedding is the graph morphism of formula_4.
By definition, "X" is a separated scheme over "S" (formula_0 is a separated morphism) if the diagonal morphism is a closed immersion. Also, a morphism formula_0 locally of finite presentation is an unramified morphism if and only if the diagonal embedding is an open immersion.
Explanation.
As an example, consider an algebraic variety "X" over an algebraically closed field "k" and formula_8 the structure map. Then, identifying "X" with the set of its "k"-rational points, formula_9 and formula_10 is given as formula_11; whence the name diagonal morphism.
Separated morphism.
A separated morphism is a morphism formula_12 such that the fiber product of formula_12 with itself along formula_12 has its diagonal as a closed subscheme — in other words, the diagonal morphism is a "closed immersion".
As a consequence, a scheme formula_13 is separated when the diagonal of formula_13 within the "scheme product" of formula_13 with itself is a closed immersion. Emphasizing the relative point of view, one might equivalently define a scheme to be separated if the unique morphism formula_14 is separated.
Notice that a topological space "Y" is Hausdorff iff the diagonal embedding
formula_15
is closed. In algebraic geometry, the above formulation is used because a scheme which is a Hausdorff space is necessarily empty or zero-dimensional. The difference between the topological and algebro-geometric context comes from the topological structure of the fiber product (in the category of schemes) formula_16, which is different from the product of topological spaces.
Any "affine" scheme "Spec A" is separated, because the diagonal corresponds to the surjective map of rings (hence is a closed immersion of schemes):
"formula_17".
Let formula_18 be a scheme obtained by identifying two affine lines through the identity map except at the origins (see gluing scheme#Examples). It is not separated. Indeed, the image of the diagonal morphism formula_19 has two origins, while its closure contains four origins.
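By contrast, the affine case can be worked out concretely. Here is a sketch of the surjection displayed above, specialized to the affine line over a field "k" rather than over the integers:

```latex
% The affine line \mathbb{A}^1_k = \operatorname{Spec} k[t] over a field k.
% Its self-product is \operatorname{Spec}(k[t] \otimes_k k[t]) \cong \operatorname{Spec} k[t_1, t_2],
% and the diagonal corresponds to the surjection
\[
  k[t_1, t_2] \;\cong\; k[t] \otimes_k k[t] \;\twoheadrightarrow\; k[t],
  \qquad t_1 \mapsto t, \quad t_2 \mapsto t,
\]
% whose kernel is the ideal (t_1 - t_2).  The diagonal is therefore the closed
% subscheme V(t_1 - t_2) \subset \mathbb{A}^2_k, so \mathbb{A}^1_k is separated.
```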
Use in intersection theory.
A classic way to define the intersection product of algebraic cycles formula_20 on a smooth variety "X" is by intersecting (restricting) their cartesian product with (to) the diagonal: precisely,
formula_21
where formula_22 is the pullback along the diagonal embedding formula_23.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "p: X \\to S"
},
{
"math_id": 1,
"text": "\\delta: X \\to X \\times_S X"
},
{
"math_id": 2,
"text": "X \\times_S X"
},
{
"math_id": 3,
"text": "1_X : X \\to X"
},
{
"math_id": 4,
"text": "1_X"
},
{
"math_id": 5,
"text": "f: X \\to Y"
},
{
"math_id": 6,
"text": "X \\to X \\times_S Y"
},
{
"math_id": 7,
"text": "f"
},
{
"math_id": 8,
"text": "p: X \\to \\operatorname{Spec}(k)"
},
{
"math_id": 9,
"text": "X \\times_k X = \\{ (x, y) \\in X \\times X \\}"
},
{
"math_id": 10,
"text": "\\delta: X \\to X \\times_k X"
},
{
"math_id": 11,
"text": "x \\mapsto (x, x)"
},
{
"math_id": 12,
"text": " f "
},
{
"math_id": 13,
"text": " X "
},
{
"math_id": 14,
"text": "X \\rightarrow \\textrm{Spec} (\\mathbb{Z})"
},
{
"math_id": 15,
"text": "Y \\stackrel{\\Delta}{\\longrightarrow} Y \\times Y, \\, y \\mapsto (y, y)"
},
{
"math_id": 16,
"text": "X \\times_{\\textrm{Spec} (\\mathbb{Z})} X"
},
{
"math_id": 17,
"text": "A \\otimes_{\\mathbb Z} A \\rightarrow A, a \\otimes a' \\mapsto a \\cdot a'"
},
{
"math_id": 18,
"text": "S"
},
{
"math_id": 19,
"text": "S \\to S \\times S"
},
{
"math_id": 20,
"text": "A, B"
},
{
"math_id": 21,
"text": "A \\cdot B = \\delta^*(A \\times B)"
},
{
"math_id": 22,
"text": "\\delta^*"
},
{
"math_id": 23,
"text": "\\delta: X \\to X \\times X"
}
] |
https://en.wikipedia.org/wiki?curid=58638781
|
5864214
|
Yuktibhāṣā
|
Treatise on mathematics and astronomy
Yuktibhāṣā, also known as Gaṇita-yukti-bhāṣā (English: "Compendium of Astronomical Rationale"), is a major treatise on mathematics and astronomy, written by the Indian astronomer Jyesthadeva of the Kerala school of mathematics around 1530. The treatise, written in Malayalam, is a consolidation of the discoveries by Madhava of Sangamagrama, Nilakantha Somayaji, Parameshvara, Jyeshtadeva, Achyuta Pisharati, and other astronomer-mathematicians of the Kerala school. It also exists in a Sanskrit version, with unclear author and date, composed as a rough translation of the Malayalam original.
The work contains proofs and derivations of the theorems that it presents. Modern historians used to assert, based on the works of Indian mathematics that first became available, that early Indian scholars in astronomy and computation lacked proofs, but Yuktibhāṣā demonstrates otherwise.
Some of its important topics include the infinite series expansions of functions; power series, including of π and π/4; trigonometric series of sine, cosine, and arctangent; Taylor series, including second and third order approximations of sine and cosine; radii, diameters and circumferences.
Yuktibhāṣā mainly gives the rationale for the results in Nilakantha's "Tantra Samgraha". It is considered an early text to present some ideas of calculus such as Taylor and infinite series, predating Newton and Leibniz by two centuries. The treatise was largely unnoticed outside India, as it was written in the local language of Malayalam. In modern times, due to wider international cooperation in mathematics, the wider world has taken notice of the work. For example, both Oxford University and the Royal Society of Great Britain have given attribution to pioneering mathematical theorems of Indian origin that predate their Western counterparts.
Contents.
Yuktibhāṣā contains most of the developments of the earlier Kerala school, particularly Madhava and Nilakantha. The text is divided into two parts – the former deals with mathematical analysis and the latter with astronomy. Beyond this, the continuous text does not have any further division into subjects or topics, so published editions divide the work into chapters based on editorial judgment.
Mathematics.
The subjects treated in the mathematics part of the Yuktibhāṣā can be divided into seven chapters:
The first four chapters of the Yuktibhāṣā contain elementary mathematics, such as division, the Pythagorean theorem, square roots, etc. Novel ideas are not discussed until the sixth chapter, on the circumference of a circle. The Yuktibhāṣā contains a derivation and proof for the power series of the inverse tangent, discovered by Madhava. In the text, Jyesthadeva describes Madhava's series in the following manner:
In modern mathematical notation,
formula_0
or, expressed in terms of tangents,
formula_1
which in Europe was conventionally called "Gregory's series" after James Gregory, who rediscovered it in 1671.
The text also contains Madhava's infinite series expansion of π which he obtained from the expansion of the arc-tangent function.
formula_2
which in Europe was conventionally called "Leibniz's series", after Gottfried Leibniz who rediscovered it in 1673.
Using a rational approximation of this series, he gave values of the number π as 3.14159265359, correct to 11 decimals, and as 3.1415926535898, correct to 13 decimals.
The text describes two methods for computing the value of π. First, obtain a rapidly converging series by transforming the original infinite series of π. By doing so, the first 21 terms of the infinite series
formula_3
was used to compute the approximation to 11 decimal places. The other method was to add a remainder term to the original series of π. The remainder term formula_4 was used in the infinite series expansion of formula_5 to improve the approximation of π to 13 decimal places of accuracy when "n" = 76.
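A minimal computational sketch of the two methods just described. The correction is applied here after "n" terms of the π/4 series with sign (−1)^"n", which is one common reading of the remainder-term construction rather than a quotation from the text:

```python
from math import sqrt, pi

# Method 1: the transformed, rapidly converging series, summed over its first 21 terms.
approx1 = sqrt(12) * sum((-1) ** k / ((2 * k + 1) * 3 ** k) for k in range(21))

# Method 2: the slowly converging pi/4 series plus Madhava's remainder-term correction.
n = 76
partial = sum((-1) ** k / (2 * k + 1) for k in range(n))        # 1 - 1/3 + 1/5 - ...
correction = (-1) ** n * (n ** 2 + 1) / (4 * n ** 3 + 5 * n)    # the remainder term
approx2 = 4 * (partial + correction)

print(abs(approx1 - pi))   # on the order of 1e-12: about 11 correct decimal places
print(abs(approx2 - pi))   # on the order of 1e-14: about 13 correct decimal places
```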
Apart from these, the Yuktibhāṣā contains many elementary and complex mathematical topics, including:
Astronomy.
Chapters eight to seventeen deal with subjects of astronomy: planetary orbits, celestial spheres, ascension, declination, directions and shadows, spherical triangles, ellipses, and parallax correction. The planetary theory described in the book is similar to that later adopted by Danish astronomer Tycho Brahe.
The topics covered in the eight chapters are computation of mean and true longitudes of planets, Earth and celestial spheres, fifteen problems relating to ascension, declination, longitude, etc., determination of time, place, direction, etc., from gnomonic shadow, eclipses, Vyatipata (when the sun and moon have the same declination), visibility correction for planets and phases of the moon.
Specifically,
Modern editions.
The importance of Yuktibhāṣā was brought to the attention of modern scholarship by C. M. Whish in 1832 through a paper published in the "Transactions of the Royal Asiatic Society of Great Britain and Ireland". The mathematics part of the text, along with notes in Malayalam, was first published in 1948 by Rama Varma Thampuran and Akhileswara Aiyar.
The first critical edition of the entire Malayalam text, alongside an English translation and detailed explanatory notes, was published in two volumes by Springer in 2008. A third volume, containing a critical edition of the Sanskrit Ganitayuktibhasa, was published by the Indian Institute of Advanced Study, Shimla in 2009.
This edition of Yuktibhasa has been divided into two volumes: Volume I deals with mathematics and Volume II treats astronomy. Each volume is divided into three parts: First part is an English translation of the relevant Malayalam part of Yuktibhasa, second part contains detailed explanatory notes on the translation, and in the third part the text in the Malayalam original is reproduced. The English translation is by K.V. Sarma and the explanatory notes are provided by K. Ramasubramanian, M. D. Srinivas, and M. S. Sriram.
An open access edition of Yuktibhasa was published by the Sayahna Foundation in 2020.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " r\\theta={r\\frac{\\sin\\theta}{\\cos\\theta}}\n-\\frac{r}{3}\\frac{\\sin^3\\theta}{\\cos^3\\theta}\n+\\frac{r}{5}\\frac{\\sin^5\\theta}{\\cos^5\\theta}\n-\\frac{r}{7}\\frac{\\sin^7\\theta}{\\cos^7\\theta}\n+\\cdots"
},
{
"math_id": 1,
"text": "\\theta = \\tan\\theta - \\frac13 \\tan^3\\theta + \\frac15 \\tan^5\\theta - \\cdots \\ ,"
},
{
"math_id": 2,
"text": "\\frac{\\pi}{4} = 1 - \\frac{1}{3} + \\frac{1}{5} - \\frac{1}{7} + \\cdots + \\frac{(-1)^n}{2n + 1} + \\cdots \\ ,"
},
{
"math_id": 3,
"text": "\\pi = \\sqrt{12}\\left(1-{1\\over 3\\cdot3}+{1\\over5\\cdot 3^2}-{1\\over7\\cdot 3^3}+\\cdots\\right)"
},
{
"math_id": 4,
"text": "\\frac{n^2 + 1}{4n^3 + 5n}"
},
{
"math_id": 5,
"text": "\\frac{\\pi}{4}"
}
] |
https://en.wikipedia.org/wiki?curid=5864214
|
5864648
|
Natural filtration
|
In the theory of stochastic processes in mathematics and statistics, the generated filtration or natural filtration associated to a stochastic process is a filtration associated to the process which records its "past behaviour" at each time. It is in a sense the simplest filtration available for studying the given process: all information concerning the process, and only that information, is available in the natural filtration.
More formally, let (Ω, "F", P) be a probability space; let ("I", ≤) be a totally ordered index set; let ("S", Σ) be a measurable space; let "X" : "I" × Ω → "S" be a stochastic process. Then the natural filtration of "F" with respect to "X" is defined to be the filtration "F"•"X" = ("F""i""X")"i"∈"I" given by
formula_0
i.e., the smallest "σ"-algebra on Ω that contains all pre-images of Σ-measurable subsets of "S" for "times" "j" up to "i".
In many examples, the index set "I" is the natural numbers N (possibly including 0) or an interval [0, "T"] or [0, +∞); the state space "S" is often the real line R or Euclidean space R"n".
Any stochastic process "X" is an adapted process with respect to its natural filtration.
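A minimal sketch with a toy discrete setup (three coin flips; the uniform finite Ω and the names below are assumptions of this illustration): the σ-algebra "F""i""X" is generated by the partition of Ω into atoms of outcomes that agree on the flips observed up to time "i".

```python
from itertools import product

omega_space = list(product("HT", repeat=3))      # 8 elementary outcomes (three flips)

def atoms_of_natural_filtration(i):
    """Atoms of F_i^X: outcomes grouped by the observed trajectory (X_0, ..., X_i)."""
    atoms = {}
    for w in omega_space:
        atoms.setdefault(w[: i + 1], []).append(w)
    return list(atoms.values())

for i in range(3):
    print(f"F_{i}^X has {len(atoms_of_natural_filtration(i))} atoms")
# F_0^X has 2 atoms, F_1^X has 4, F_2^X has 8: the filtration refines over time,
# recording exactly the information revealed by the process up to each time.
```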
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "F_{i}^{X} = \\sigma \\left\\{ \\left. X_{j}^{-1} (A) \\right| j \\in I, j \\leq i, A \\in \\Sigma \\right\\},"
}
] |
https://en.wikipedia.org/wiki?curid=5864648
|
58646755
|
Commuting probability
|
The probability that two uniform random elements of a finite group commute with each other
In mathematics and more precisely in group theory, the commuting probability (also called degree of commutativity or commutativity degree) of a finite group is the probability that two randomly chosen elements commute. It can be used to measure how close to abelian a finite group is. It can be generalized to infinite groups equipped with a suitable probability measure, and can also be generalized to other algebraic structures such as rings.
Definition.
Let formula_0 be a finite group. We define formula_1 as the proportion of ordered pairs of elements of formula_0 that commute:
formula_2
where formula_3 denotes the cardinality of a finite set formula_4.
If one considers the uniform distribution on formula_5, formula_1 is the probability that two randomly chosen elements of formula_0 commute. That is why formula_1 is called the commuting probability of formula_0.
It follows from Burnside's counting lemma, applied to the action of formula_0 on itself by conjugation, that
formula_7
where formula_8 is the number of conjugacy classes of formula_0.
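A brute-force sketch for the symmetric group S₃ (the representation of elements as permutation tuples is an implementation choice of this illustration): it confirms that the direct count and the conjugacy-class formula agree, giving p(S₃) = 1/2.

```python
from itertools import permutations, product

G = list(permutations(range(3)))                     # the 6 elements of S_3

def compose(p, q):                                   # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

# Direct count of commuting ordered pairs.
commuting = sum(1 for x, y in product(G, G) if compose(x, y) == compose(y, x))
p_G = commuting / len(G) ** 2                        # 18 / 36 = 0.5

# Conjugacy classes as orbits of the conjugation action.
classes = {frozenset(compose(compose(g, x), inverse(g)) for g in G) for x in G}

print(p_G, len(classes) / len(G))                    # 0.5 0.5 -- the two formulas agree
```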
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "p(G)"
},
{
"math_id": 2,
"text": "p(G) := \\frac{1}{\\# G^2} \\#\\!\\left\\{ (x,y) \\in G^2 \\mid xy=yx \\right\\}"
},
{
"math_id": 3,
"text": "\\# X"
},
{
"math_id": 4,
"text": "X"
},
{
"math_id": 5,
"text": "G^2"
},
{
"math_id": 6,
"text": "p(G) = 1"
},
{
"math_id": 7,
"text": "p(G) = \\frac{k(G)}{\\# G}"
},
{
"math_id": 8,
"text": "k(G)"
},
{
"math_id": 9,
"text": "p(G) \\leq 5/8"
},
{
"math_id": 10,
"text": "p(G) = 5/8"
},
{
"math_id": 11,
"text": "n"
},
{
"math_id": 12,
"text": "p(G) = 1/n"
},
{
"math_id": 13,
"text": "p(G) \\leq 1/12"
},
{
"math_id": 14,
"text": "\\mathfrak{A}_5"
},
{
"math_id": 15,
"text": "\\omega^\\omega"
},
{
"math_id": 16,
"text": "\\omega^{\\omega^2}"
}
] |
https://en.wikipedia.org/wiki?curid=58646755
|
58648815
|
Similarity system of triangles
|
A similarity system of triangles is a specific configuration involving a set of triangles. A set of triangles is considered a configuration when all of the triangles share a minimum of one incidence relation with one of the other triangles present in the set. An incidence relation between triangles refers to when two triangles share a point. For example, the two triangles to the right, formula_0 and formula_1, are a configuration made up of two incidence relations, since points formula_2 and formula_3 are shared. The triangles that make up configurations are known as component triangles. Triangles must not only be a part of a configuration set to be in a similarity system, but must also be directly similar. Direct similarity implies that all angles are equal between two given triangles and that they share the same rotational sense. As is seen in the adjacent images, in the directly similar triangles, the rotation of formula_4 onto formula_2 and formula_5 onto formula_6 occurs in the same direction. In the oppositely similar triangles, the rotation of formula_4 onto formula_2 and formula_5 onto formula_6 occurs in the opposite direction. In sum, a configuration is a similarity system when all triangles in the set lie in the same plane and the following holds true: if there are "n" triangles in the set and "n" − 1 of them are directly similar, then all "n" triangles are directly similar.
Background.
J.G. Mauldon introduced the idea of similarity systems of triangles in his paper in Mathematics Magazine "Similar Triangles". Mauldon began his analyses by examining given triangles formula_7 for direct similarity through complex numbers, specifically the equation formula_8. He then furthered his analyses to equilateral triangles, showing that if a triangle formula_9 satisfied the equation formula_10 when formula_11, it was equilateral. As evidence of this work, he applied his conjectures on direct similarity and equilateral triangles in proving Napoleon's theorem. He then built on Napoleon's theorem by proving that if an equilateral triangle was constructed with equilateral triangles incident on each vertex, the midpoints of the connecting lines between the non-incident vertices of the outer three equilateral triangles create an equilateral triangle. Other similar work was done by the French geometer Thébault in his proof that given a parallelogram and squares that lie on each side of the parallelogram, the centers of the squares create a square. Mauldon then analyzed coplanar sets of triangles, determining whether they were similarity systems based on the criterion: if all but one of the triangles are directly similar, then all of the triangles are directly similar.
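A small numerical sketch of the determinant criterion described above (plain Python complex numbers; the sample triangles are arbitrary choices for illustration):

```python
def directly_similar(t1, t2, tol=1e-12):
    """Mauldon's test: triangles (a, b, c) and (x, y, z), with vertices as complex
    numbers, are directly similar exactly when det [[a, b, c], [x, y, z], [1, 1, 1]] = 0."""
    (a, b, c), (x, y, z) = t1, t2
    det = a * (y - z) + b * (z - x) + c * (x - y)    # cofactor expansion of the determinant
    return abs(det) < tol

T = (0, 1, 1j)                                # reference triangle
print(directly_similar(T, (0, 2, 2j)))        # True: scaled copy, same rotational sense
print(directly_similar(T, (0, 1, -1j)))       # False: mirror image, oppositely similar
```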
Examples.
Triangles appended to a rectangle.
Direct similarity.
If we construct a rectangle formula_12 with directly similar triangles formula_13 on each side of the rectangle that are similar to formula_14, then formula_15 is directly similar and the set of triangles formula_16 is a similarity system.
Indirect similarity.
However, if we acknowledge that the triangles can be degenerate and take points formula_4 and formula_17 to lie on each other and formula_18 and formula_19 to lie on each other, then the set of triangles is no longer a direct similarity system, since the second triangle has positive area while the others do not.
Rectangular parallelepiped.
Given a figure in which three sets of lines are parallel but not necessarily equal in length (formally known as a rectangular parallelepiped), with all points of order two labelled as follows:
formula_20
Then we can take the above points, analyze them as triangles, and show that they form a similarity system.
"Proof:"
In order for any given triangle, formula_21, to be directly similar to formula_22, the following equation must be satisfied:
formula_23 where "k", "ℓ", "m", "a"1, "b"1, and "c"1 are the vertices of the two triangles, regarded as complex numbers.
If the same pattern is followed for the rest of the triangles, one will notice that the summation of the equations for the first four triangles and the summation of the equations for the last four triangles provide the same result. Therefore, by the definition of a similarity system of triangles, no matter which seven of the triangles are taken to be directly similar, the eighth satisfies the equation, making them all directly similar.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "AHC"
},
{
"math_id": 1,
"text": "BHC"
},
{
"math_id": 2,
"text": "C"
},
{
"math_id": 3,
"text": "H"
},
{
"math_id": 4,
"text": "B"
},
{
"math_id": 5,
"text": "B^1"
},
{
"math_id": 6,
"text": "C^1"
},
{
"math_id": 7,
"text": "ABC, XYZ"
},
{
"math_id": 8,
"text": "\\begin{vmatrix} a & b & c \\\\ x & y & z \\\\ 1 & 1 & 1 \\end{vmatrix} = 0"
},
{
"math_id": 9,
"text": "ABC"
},
{
"math_id": 10,
"text": "a + wb + w^2c =0"
},
{
"math_id": 11,
"text": "w = \\frac{-1 + i\\surd3}{2}"
},
{
"math_id": 12,
"text": "ABCD"
},
{
"math_id": 13,
"text": "PAB, QBC, RCD, SDA"
},
{
"math_id": 14,
"text": "PQS"
},
{
"math_id": 15,
"text": "RQS"
},
{
"math_id": 16,
"text": "\\{PAB, QBC, RCD, SDA, PQS, RQS\\}"
},
{
"math_id": 17,
"text": "P"
},
{
"math_id": 18,
"text": "Q, R, D"
},
{
"math_id": 19,
"text": "S"
},
{
"math_id": 20,
"text": "\\{A_1B_1C_1, A_2B_2C_2, A_3B_3C_3, A_4B_4C_4, A_1B_4C_3, A_2B_3C_4, A_3B_2C_1, A_4B_1C_2\\}"
},
{
"math_id": 21,
"text": "KLM"
},
{
"math_id": 22,
"text": "A_1, B_1, C_1"
},
{
"math_id": 23,
"text": "(\ell-m)a_1+ (m-k)b_1+(k-\ell)c_1=0."
}
] |
https://en.wikipedia.org/wiki?curid=58648815
|
58653736
|
Unramified morphism
|
In algebraic geometry, an unramified morphism is a morphism formula_0 of schemes such that (a) it is locally of finite presentation and (b) for each formula_1 and formula_2, we have that the residue field formula_3 is a finite separable extension of formula_4, and formula_5 where formula_6 is the induced map on local rings and formula_7 are the respective maximal ideals.
A flat unramified morphism is called an étale morphism. Less strongly, if formula_8 satisfies the conditions when restricted to sufficiently small neighborhoods of formula_9 and formula_10, then formula_8 is said to be unramified near formula_9.
Some authors prefer to use weaker conditions, in which case they call a morphism satisfying the above a G-unramified morphism.
Simple example.
Let formula_11 be a ring and "B" the ring obtained by adjoining an integral element to "A"; i.e., formula_12 for some monic polynomial "F". Then formula_13 is unramified if and only if the polynomial "F" is separable (i.e., it and its derivative generate the unit ideal of formula_14).
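A small sketch of this separability test over a base field (SymPy assumed; over a general base ring the criterion is that "F" and its derivative generate the unit ideal, which over a field reduces to their gcd being a nonzero constant):

```python
from sympy import symbols, gcd, diff

t = symbols('t')

def is_separable(F):
    # Over QQ, F is separable iff gcd(F, F') is a nonzero constant.
    return gcd(F, diff(F, t)).is_number

print(is_separable(t**2 - 2))  # True:  Spec(Q[t]/(t^2 - 2)) -> Spec(Q) is unramified
print(is_separable(t**2))      # False: repeated root at t = 0, so the morphism is ramified
```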
Curve case.
Let formula_0 be a finite morphism between smooth connected curves over an algebraically closed field, "P" a closed point of "X" and formula_15. We then have the local ring homomorphism formula_16 where formula_17 and formula_18 are the local rings at "Q" and "P" of "Y" and "X". Since formula_19 is a discrete valuation ring, there is a unique integer formula_20 such that formula_21. The integer formula_22 is called the ramification index of formula_23 over formula_24. Since formula_25 as the base field is algebraically closed, formula_8 is unramified at formula_23 (in fact, étale) if and only if formula_26. Otherwise, formula_8 is said to be ramified at "P" and "Q" is called a branch point.
Characterization.
Given a morphism formula_0 that is locally of finite presentation, the following are equivalent: (i) formula_8 is unramified; (ii) the diagonal morphism formula_27 is an open immersion; (iii) the sheaf of relative differentials formula_28 vanishes.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f: X \\to Y"
},
{
"math_id": 1,
"text": "x \\in X"
},
{
"math_id": 2,
"text": "y = f(x)"
},
{
"math_id": 3,
"text": "k(x)"
},
{
"math_id": 4,
"text": "k(y)"
},
{
"math_id": 5,
"text": "f^{\\#}(\\mathfrak{m}_y) \\mathcal{O}_{x, X} = \\mathfrak{m}_x, "
},
{
"math_id": 6,
"text": "f^{\\#}: \\mathcal{O}_{y, Y} \\to \\mathcal{O}_{x, X}"
},
{
"math_id": 7,
"text": "\\mathfrak{m}_y, \\mathfrak{m}_x"
},
{
"math_id": 8,
"text": "f"
},
{
"math_id": 9,
"text": "x"
},
{
"math_id": 10,
"text": "y"
},
{
"math_id": 11,
"text": "A"
},
{
"math_id": 12,
"text": "B = A[t]/(F)"
},
{
"math_id": 13,
"text": "\\operatorname{Spec}(B) \\to \\operatorname{Spec}(A)"
},
{
"math_id": 14,
"text": "A[t]"
},
{
"math_id": 15,
"text": "Q = f(P)"
},
{
"math_id": 16,
"text": "f^{\\#} : \\mathcal{O}_Q \\to \\mathcal{O}_P"
},
{
"math_id": 17,
"text": "(\\mathcal{O}_Q, \\mathfrak{m}_Q)"
},
{
"math_id": 18,
"text": "(\\mathcal{O}_P, \\mathfrak{m}_P)"
},
{
"math_id": 19,
"text": "\\mathcal{O}_P"
},
{
"math_id": 20,
"text": "e_P > 0"
},
{
"math_id": 21,
"text": "f^{\\#} (\\mathfrak{m}_Q) \\mathcal{O}_P = {\\mathfrak{m}_P}^{e_P}"
},
{
"math_id": 22,
"text": "e_P"
},
{
"math_id": 23,
"text": "P"
},
{
"math_id": 24,
"text": "Q"
},
{
"math_id": 25,
"text": "k(P) = k(Q)"
},
{
"math_id": 26,
"text": "e_P = 1"
},
{
"math_id": 27,
"text": "\\delta_f: X \\to X \\times_Y X"
},
{
"math_id": 28,
"text": "\\Omega_{X/Y}"
}
] |
https://en.wikipedia.org/wiki?curid=58653736
|
58654129
|
Basic feasible solution
|
Concept from linear programming
In the theory of linear programming, a basic feasible solution (BFS) is a solution with a minimal set of non-zero variables. Geometrically, each BFS corresponds to a vertex of the polyhedron of feasible solutions. If there exists an optimal solution, then there exists an optimal BFS. Hence, to find an optimal solution, it is sufficient to consider the BFS-s. This fact is used by the simplex algorithm, which essentially travels from one BFS to another until an optimal solution is found.
Definitions.
Preliminaries: equational form with linearly-independent rows.
For the definitions below, we first present the linear program in the so-called "equational form":
maximize formula_0
subject to formula_1 and formula_2
where formula_3 is the (transposed) vector of objective-function coefficients, formula_4 is the vector of "n" decision variables, formula_6 is an "m"-by-"n" matrix of constraint coefficients, and formula_5 is the vector of "m" right-hand-side constants.
Any linear program can be converted into an equational form by adding slack variables.
As a preliminary clean-up step, we verify that the rows of formula_6 are linearly independent, i.e., that its rank is "m" (otherwise redundant rows can be removed), and that the system formula_1 is consistent (otherwise the LP has no feasible solution).
Feasible solution.
A feasible solution of the LP is any vector formula_2 such that formula_1. We assume that there is at least one feasible solution. If "m" = "n", then the system formula_1 has a unique solution, so there is at most one feasible solution. Typically "m" < "n", so the system formula_1 has many solutions; each such solution that also satisfies formula_2 is a feasible solution of the LP.
Basis.
A basis of the LP is a nonsingular submatrix of "A," with all "m" rows and only "m"<"n" columns.
Sometimes, the term basis is used not for the submatrix itself, but for the set of indices of its columns. Let "B" be a subset of "m" indices from {1...,"n"}. Denote by formula_7 the square "m"-by-"m" matrix made of the "m" columns of formula_6 indexed by "B". If formula_7 is nonsingular, the columns indexed by "B" are a basis of the column space of formula_6. In this case, we call "B" a basis of the LP.
Since the rank of formula_6 is "m", it has at least one basis; since formula_6 has "n" columns, it has at most formula_8 bases.
Basic feasible solution.
Given a basis "B", we say that a feasible solution formula_4 is a basic feasible solution with basis B if all its non-zero variables are indexed by "B", that is, for all formula_9.
Properties.
1. A BFS is determined only by the constraints of the LP (the matrix formula_6 and the vector formula_5); it does not depend on the optimization objective.
2. By definition, a BFS has at most "m" non-zero variables and at least "n"-"m" zero variables. A BFS can have less than "m" non-zero variables; in that case, it can have many different bases, all of which contain the indices of its non-zero variables.
3. A feasible solution formula_4 is basic if-and-only-if the columns of the matrix formula_10 are linearly independent, where "K" is the set of indices of the non-zero elements of formula_4.
4. Each basis determines a unique BFS: for each basis "B" of "m" indices, there is at most one BFS formula_11 with basis "B". This is because formula_11 must satisfy the constraint formula_12, and by definition of basis the matrix formula_7 is non-singular, so the constraint has a unique solution: formula_13 The opposite is not true: each BFS can come from many different bases. If the unique solution of formula_13 satisfies the non-negativity constraints formula_14, then "B" is called a feasible basis.
5. If a linear program has an optimal solution (i.e., it has a feasible solution, and the set of feasible solutions is bounded), then it has an optimal BFS. This is a consequence of the Bauer maximum principle: the objective of a linear program is convex; the set of feasible solutions is convex (it is an intersection of hyperplanes and half-spaces); therefore the objective attains its maximum in an extreme point of the set of feasible solutions.
Since the number of BFS-s is finite and bounded by formula_8, an optimal solution to any LP can be found in finite time by just evaluating the objective function in all formula_8 BFS-s. This is not the most efficient way to solve an LP; the simplex algorithm examines the BFS-s in a much more efficient way.
Examples.
Consider a linear program with the following constraints:
formula_15
The matrix "A" is:
formula_16
Here, "m"=2 and there are 10 subsets of 2 indices; however, not all of them are bases: the set {3,5} is not a basis since columns 3 and 5 are linearly dependent.
The set "B"={2,4} is a basis, since the matrix formula_17 is non-singular.
The unique BFS corresponding to this basis is formula_18.
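A brute-force numerical sketch of this example (NumPy assumed): enumerate every 2-element index set, discard the singular ones, solve formula_12 for the rest, and keep the solutions that are also non-negative.

```python
from itertools import combinations
import numpy as np

A = np.array([[1, 5, 3, 4, 6],
              [0, 1, 3, 5, 6]], dtype=float)
b = np.array([14, 7], dtype=float)
m, n = A.shape

for B in combinations(range(n), m):            # 0-based column indices
    A_B = A[:, B]
    if abs(np.linalg.det(A_B)) < 1e-9:
        print(B, "is not a basis (linearly dependent columns)")   # (2, 4), i.e. {3, 5} above
        continue
    x_B = np.linalg.solve(A_B, b)
    if np.all(x_B >= -1e-9):                   # feasible: this basis yields a BFS
        x = np.zeros(n)
        x[list(B)] = x_B
        print(B, "gives the BFS", x)           # (1, 3), i.e. {2, 4}, gives (0, 2, 0, 1, 0)
```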
Geometric interpretation.
The set of all feasible solutions is an intersection of hyperplanes and half-spaces. Therefore, it is a convex polyhedron. If it is bounded, then it is a convex polytope. Each BFS corresponds to a vertex of this polytope.
Basic feasible solutions for the dual problem.
As mentioned above, every basis "B" defines a unique basic feasible solution formula_13 . In a similar way, each basis defines a solution to the dual linear program:
minimize formula_19
subject to formula_20.
The solution is formula_21.
Finding an optimal BFS.
There are several methods for finding a BFS that is also optimal.
Using the simplex algorithm.
In practice, the easiest way to find an optimal BFS is to use the simplex algorithm. It keeps, at each point of its execution, a "current basis" "B" (a subset of "m" out of "n" variables), a "current BFS", and a "current tableau". The tableau is a representation of the linear program where the basic variables are expressed in terms of the non-basic ones: formula_22 where formula_23 is the vector of "m" basic variables, formula_24 is the vector of "n" non-basic variables, and formula_25 is the maximization objective. Since non-basic variables equal 0, the current BFS is formula_26, and the current maximization objective is formula_27.
If all coefficients in formula_28 are non-positive, then the current BFS is optimal (with value formula_27), since all variables (including all non-basic variables) must be at least 0, so the second line implies formula_29.
If some coefficients in formula_28 are positive, then it may be possible to increase the maximization target. For example, if formula_30 is non-basic and its coefficient in formula_28 is positive, then increasing it above 0 may make formula_25 larger. If it is possible to do so without violating other constraints, then the increased variable becomes basic (it "enters the basis"), while some basic variable is decreased to 0 to keep the equality constraints and thus becomes non-basic (it "exits the basis").
If this process is done carefully, then it is possible to guarantee that formula_25 increases until it reaches an optimal BFS.
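In practice one rarely runs the tableau by hand; the sketch below solves the equational-form example given earlier with SciPy's LP solver, using a made-up objective vector "c" (an assumption of this illustration, since that example specifies no objective). With the simplex-based solver the reported optimum is a basic solution, so it has at most "m" = 2 non-zero variables.

```python
import numpy as np
from scipy.optimize import linprog

A_eq = np.array([[1, 5, 3, 4, 6],
                 [0, 1, 3, 5, 6]], dtype=float)
b_eq = np.array([14, 7], dtype=float)
c = np.array([1, 2, 3, 1, 2], dtype=float)      # hypothetical objective, to be maximized

# linprog minimizes, so pass -c; "highs-ds" selects the HiGHS dual simplex method.
res = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs-ds")

print(res.x)                                    # an optimal vertex (basic feasible solution)
print(np.count_nonzero(res.x > 1e-9))           # at most m = 2 non-zero variables
```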
Converting any optimal solution to an optimal BFS.
In the worst case, the simplex algorithm may require exponentially many steps to complete. There are algorithms for solving an LP in weakly-polynomial time, such as the ellipsoid method; however, they usually return optimal solutions that are not basic.
However, given any optimal solution to the LP, it is easy to find an optimal feasible solution that is also basic.
Finding a basis that is both primal-optimal and dual-optimal.
A basis "B" of the LP is called dual-optimal if the solution formula_21 is an optimal solution to the dual linear program, that is, it minimizes formula_19. In general, a primal-optimal basis is not necessarily dual-optimal, and a dual-optimal basis is not necessarily primal-optimal (in fact, the solution of a primal-optimal basis may even be infeasible for the dual, and vice versa).
If both formula_13 is an optimal BFS of the primal LP, and formula_21 is an optimal BFS of the dual LP, then the basis "B" is called PD-optimal. Every LP with an optimal solution has a PD-optimal basis, and it is found by the Simplex algorithm. However, its run-time is exponential in the worst case. Nimrod Megiddo proved the following theorems:
Megiddo's algorithms can be executed using a tableau, just like the simplex algorithm.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{c^T} \\mathbf{x}"
},
{
"math_id": 1,
"text": "A\\mathbf{x} = \\mathbf{b}"
},
{
"math_id": 2,
"text": "\\mathbf{x} \\ge 0"
},
{
"math_id": 3,
"text": "\\mathbf{c^T}"
},
{
"math_id": 4,
"text": "\\mathbf{x}"
},
{
"math_id": 5,
"text": "\\mathbf{b}"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "A_B"
},
{
"math_id": 8,
"text": "\\binom{n}{m}"
},
{
"math_id": 9,
"text": "j\\not\\in B: ~~ x_j = 0"
},
{
"math_id": 10,
"text": "A_K"
},
{
"math_id": 11,
"text": "\\mathbf{x_B}"
},
{
"math_id": 12,
"text": "A_B \\mathbf{x_B} = b"
},
{
"math_id": 13,
"text": "\\mathbf{x_B} = {A_B}^{-1}\\cdot b"
},
{
"math_id": 14,
"text": "\\mathbf{x_B} \\geq 0"
},
{
"math_id": 15,
"text": "\\begin{align} \nx_1 + 5 x_2 + 3 x_3 + 4 x_4 + 6 x_5 &= 14\n\\\\\nx_2 + 3 x_3 + 5 x_4 + 6 x_5 &= 7\n\\\\\n\\forall i\\in\\{1,\\ldots,5\\}: x_i&\\geq 0\n\\end{align}"
},
{
"math_id": 16,
"text": "A = \n\\begin{pmatrix} \n1 & 5 & 3 & 4 & 6\n\\\\ \n0 & 1 & 3 & 5 & 6\n\\end{pmatrix}\n~~~~~\n\\mathbf{b} = (14~~7)"
},
{
"math_id": 17,
"text": "A_B = \n\\begin{pmatrix} \n5 & 4\n\\\\ \n1 & 5 \n\\end{pmatrix}\n"
},
{
"math_id": 18,
"text": "x_B = (0~~2~~0~~1~~0)\n"
},
{
"math_id": 19,
"text": "\\mathbf{b^T} \\mathbf{y}"
},
{
"math_id": 20,
"text": "A^T\\mathbf{y} \\geq \\mathbf{c}"
},
{
"math_id": 21,
"text": "\\mathbf{y_B} = {A^T_B}^{-1}\\cdot c"
},
{
"math_id": 22,
"text": "\\begin{align} \nx_B &= p + Q x_N\n\\\\\nz &= z_0 + r^T x_N\n\\end{align}"
},
{
"math_id": 23,
"text": "x_B"
},
{
"math_id": 24,
"text": "x_N"
},
{
"math_id": 25,
"text": "z"
},
{
"math_id": 26,
"text": "p"
},
{
"math_id": 27,
"text": "z_0"
},
{
"math_id": 28,
"text": "r"
},
{
"math_id": 29,
"text": "z\\leq z_0"
},
{
"math_id": 30,
"text": "x_5"
}
] |
https://en.wikipedia.org/wiki?curid=58654129
|
586599
|
Penning trap
|
Device for storing charged particles
A Penning trap is a device for the storage of charged particles using a homogeneous magnetic field and a quadrupole electric field. It is mostly found in the physical sciences and related fields of study as a tool for precision measurements of properties of ions and stable subatomic particles, like for example mass, fission yields and isomeric yield ratios. One initial object of study was the so-called geonium atoms, which represent a way to measure the electron magnetic moment by storing a single electron. These traps have been used in the physical realization of quantum computation and quantum information processing by trapping qubits. Penning traps are in use in many laboratories worldwide, including CERN, to store and investigate anti-particles such as antiprotons. The main advantages of Penning traps are the potentially long storage times and the existence of a multitude of techniques to manipulate and non-destructively detect the stored particles. This makes Penning traps versatile tools for the investigation of stored particles, but also for their selection, preparation or mere storage.
History.
The Penning trap was named after F. M. Penning (1894–1953) by Hans Georg Dehmelt (1922–2017) who built the first trap. Dehmelt got inspiration from the vacuum gauge built by F. M. Penning where a current through a discharge tube in a magnetic field is proportional to the pressure. Citing from H. Dehmelt's autobiography: "I began to focus on the magnetron/Penning discharge geometry, which, in the Penning ion gauge, had caught my interest already at Göttingen and at Duke. In their 1955 cyclotron resonance work on photoelectrons in vacuum Franken and Liebes had reported undesirable frequency shifts caused by accidental electron trapping. Their analysis made me realize that in a pure electric quadrupole field the shift would not depend on the location of the electron in the trap. This is an important advantage over many other traps that I decided to exploit. A magnetron trap of this type had been briefly discussed in J.R. Pierce's 1949 book, and I developed a simple description of the axial, magnetron, and cyclotron motions of an electron in it. With the help of the expert glassblower of the Department, Jake Jonson, I built my first high vacuum magnetron trap in 1959 and was soon able to trap electrons for about 10 sec and to detect axial, magnetron and cyclotron resonances." – H. Dehmelt
H. Dehmelt shared the Nobel Prize in Physics in 1989 for the development of the ion trap technique.
Operation.
Penning traps use a strong homogeneous axial magnetic field to confine particles radially and a quadrupole electric field to confine the particles axially. The static electric potential can be generated using a set of three electrodes: a ring and two endcaps. In an ideal Penning trap the ring and endcaps are hyperboloids of revolution. For trapping of positive (negative) ions, the endcap electrodes are kept at a positive (negative) potential relative to the ring. This potential produces a saddle point in the centre of the trap, which traps ions along the axial direction. The electric field causes ions to oscillate (harmonically in the case of an ideal Penning trap) along the trap axis. The magnetic field in combination with the electric field causes charged particles to move in the radial plane with a motion which traces out an epitrochoid.
The orbital motion of ions in the radial plane is composed of two modes at frequencies which are called the "magnetron" formula_0 and the "modified cyclotron" formula_1 frequencies. These motions are similar to the deferent and epicycle, respectively, of the Ptolemaic model of the solar system.
The sum of these two frequencies is the "cyclotron" frequency, which depends only on the ratio of electric charge to mass and on the strength of the magnetic field. This frequency can be measured very accurately and can be used to measure the masses of charged particles. Many of the highest-precision mass measurements (masses of the electron, proton, 2H, 20Ne and 28Si) come from Penning traps.
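A minimal sketch of the standard ideal-trap relations behind this statement (textbook formulas; the field strength and axial frequency below are assumed illustrative values, not parameters of any particular trap):

```python
import math

q = 1.602176634e-19              # proton charge, C
m = 1.67262192e-27               # proton mass, kg
B = 5.0                          # assumed magnetic field, T
omega_c = q * B / m              # free-space cyclotron (angular) frequency, rad/s

omega_z = 2 * math.pi * 600e3    # assumed axial oscillation frequency, rad/s

# Ideal-trap radial eigenfrequencies: modified cyclotron (+) and magnetron (-).
root = math.sqrt(omega_c**2 - 2 * omega_z**2)
omega_plus = (omega_c + root) / 2
omega_minus = (omega_c - root) / 2

print(math.isclose(omega_plus + omega_minus, omega_c))   # True: their sum is the cyclotron frequency
```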
Buffer gas cooling, resistive cooling, and laser cooling are techniques to remove energy from ions in a Penning trap. Buffer gas cooling relies on collisions between the ions and neutral gas molecules that bring the ion energy closer to the energy of the gas molecules. In resistive cooling, moving image charges in the electrodes are made to do work through an external resistor, effectively removing energy from the ions. Laser cooling can be used to remove energy from some kinds of ions in Penning traps. This technique requires ions with an appropriate electronic structure. Radiative cooling is the process by which the ions lose energy by creating electromagnetic waves by virtue of their acceleration in the magnetic field. This process dominates the cooling of electrons in Penning traps, but is very small and usually negligible for heavier particles.
Using the Penning trap can have advantages over the radio frequency trap (Paul trap). Firstly, in the Penning trap only static fields are applied and therefore there is no micro-motion and resultant heating of the ions due to the dynamic fields, even for extended 2- and 3-dimensional ion Coulomb crystals. Also, the Penning trap can be made larger whilst maintaining strong trapping. The trapped ion can then be held further away from the electrode surfaces. Interaction with patch potentials on the electrode surfaces can be responsible for heating and decoherence effects and these effects scale as a high power of the inverse distance between the ion and the electrode.
Fourier-transform mass spectrometry.
Fourier-transform ion cyclotron resonance mass spectrometry (also known as Fourier-transform mass spectrometry) is a type of mass spectrometry used for determining the mass-to-charge ratio (m/z) of ions based on the cyclotron frequency of the ions in a fixed magnetic field. The ions are trapped in a Penning trap where they are excited to a larger cyclotron radius by an oscillating electric field perpendicular to the magnetic field. The excitation also results in the ions moving in phase (in a packet). The signal is detected as an image current on a pair of plates which the packet of ions passes close to as they undergo cyclotron motion. The resulting signal is called a free induction decay (FID), transient or interferogram that consists of a superposition of sine waves. The useful signal is extracted from this data by performing a Fourier transform to give a mass spectrum.
Single ions can be investigated in a Penning trap held at a temperature of 4 K. For this the ring electrode is segmented and opposite electrodes are connected to a superconducting coil and the source and the gate of a field-effect transistor. The coil and the parasitic capacitances of the circuit form a LC circuit with a Q of about 50 000. The LC circuit is excited by an external electric pulse. The segmented electrodes couple the motion of the single electron to the LC circuit. Thus the energy in the LC circuit in resonance with the ion slowly oscillates between the many electrons (10000) in the gate of the field effect transistor and the single electron. This can be detected in the signal at the drain of the field effect transistor.
Geonium atom.
A geonium atom is a pseudo-atomic system that consists of a single electron or ion stored in a Penning trap which is 'bound' to the remaining Earth, hence the term 'geonium'. The name was coined by H.G. Dehmelt.
In the typical case, the trapped system consists of only one particle or ion. Such a quantum system is determined by quantum states of one particle, like in the hydrogen atom. Hydrogen consists of two particles, the nucleus and electron, but the electron motion relative to the nucleus is equivalent to one particle in an external field, see center-of-mass frame.
The properties of geonium are different from a typical atom. The charge undergoes cyclotron motion around the trap axis and oscillates along the axis. An inhomogeneous magnetic "bottle field" is applied to measure the quantum properties by the "continuous Stern-Gerlach" technique. Energy levels and g-factor of the particle can be measured with high precision. Van Dyck, "et al." explored the magnetic splitting of geonium spectra in 1978 and in 1987 published high-precision measurements of electron and positron g-factors, which constrained the electron radius.
Single particle.
In November 2017, an international team of scientists isolated a single proton in a Penning trap in order to measure its magnetic moment to the highest precision to date. It was found to be . The CODATA 2018 value matches this.
In science fiction.
Due to their ability to trap charged particles purely with electromagnetic forces, Penning traps are used in science fiction as a method to store large quantities of antimatter. Doing so in reality would require a vacuum of significantly higher quality than currently achievable.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\omega_-"
},
{
"math_id": 1,
"text": "\\omega_+"
}
] |
https://en.wikipedia.org/wiki?curid=586599
|
586610
|
Downburst
|
Strong surface-level winds that radiate from a single point
In meteorology, a downburst is a strong downward and outward gushing wind system that emanates from a point source above and blows radially, that is, in straight lines in all directions from the area of impact at surface level. It originates under deep, moist convective clouds such as cumulus congestus or cumulonimbus. Capable of producing damaging winds, it may sometimes be confused with a tornado, in which high-velocity winds circle a central area and air moves inward and upward. Downbursts usually last for seconds to minutes. They are particularly strong downdrafts within thunderstorms (or, more generally, within deep, moist convection, as downbursts sometimes emanate from cumulonimbus or even cumulus congestus clouds that are not producing lightning).
Downbursts are most often created by an area of significantly precipitation-cooled air that, after subsiding to the surface, spreads out in all directions, producing strong winds.
Dry downbursts are associated with thunderstorms that exhibit very little rain, while wet downbursts are created by thunderstorms with significant amounts of precipitation. Microbursts and macrobursts are downbursts at very small and larger scales, respectively. A rare variety of dry downburst, the "heat burst", is created by vertical currents on the backside of old outflow boundaries and squall lines where rainfall is lacking. Heat bursts generate significantly higher temperatures due to the lack of rain-cooled air in their formation and compressional heating during descent.
Downbursts are a topic of notable discussion in aviation because they create vertical wind shear, which can be dangerous to aircraft, especially during landing or takeoff, when the airspeed performance window is narrowest. Several fatal and historic crashes in past decades are attributed to the phenomenon, and flight crew training goes to great lengths to teach proper recognition of, and recovery from, a downburst/wind shear event; wind shear recovery, among other adverse weather scenarios, is a standard topic in the flight simulator training that flight crews around the world receive and must successfully complete. Detection and nowcasting technology has also been implemented in much of the world, particularly around major airports, many of which have wind shear detection equipment on the field. This equipment helps air traffic controllers and pilots decide whether it is safe and feasible to operate on or in the vicinity of the airport during storms.
Definition.
A downburst is created by a column of sinking air that after hitting the surface spreads out in all directions and is capable of producing damaging straight-line winds of over , often producing damage similar to, but distinguishable from, that caused by tornadoes. Downburst damage radiates from a central point as the descending column spreads out when hitting the surface, whereas tornado damage tends towards convergent damage consistent with rotating winds. To differentiate between tornado damage and damage from a downburst, the term straight-line winds is applied to damage from microbursts.
Downbursts in air that is precipitation free or contains virga are known as dry downbursts; those accompanied with precipitation are known as wet downbursts. These generally are formed by precipitation-cooled air rushing to the surface, but they perhaps also could be powered by strong winds aloft being deflected toward the surface by dynamical processes in a thunderstorm (see rear flank downdraft). Most downbursts are less than in extent: these are called microbursts. Downbursts larger than in extent are sometimes called macrobursts. Downbursts can occur over large areas. In the extreme case, a series of continuing downbursts results in a derecho, which covers huge areas of more than wide and over long, persisting for 12 hours or more, and which is associated with some of the most intense straight-line winds.
The term microburst was defined by mesoscale meteorology expert Ted Fujita as affecting an area in diameter or less, distinguishing them as a type of downburst and apart from common wind shear which can encompass greater areas. Fujita also coined the term macroburst for downbursts larger than .
Dry microbursts.
When rain falls below the cloud base or is mixed with dry air, it begins to evaporate and this evaporation process cools the air. The denser cool air descends and accelerates as it approaches the surface. When the cool air approaches the surface, it spreads out in all directions. High winds spread out in this type of pattern showing little or no curvature are known as straight-line winds.
Dry microbursts are typically produced by high-based thunderstorms that produce little to no surface rainfall. They occur in environments whose thermal and moisture profiles exhibit an inverted-V shape when viewed on a Skew-T log-P thermodynamic diagram. Wakimoto (1985) developed a conceptual model (over the High Plains of the United States) of a dry microburst environment that comprised three important variables: mid-level moisture, a cloud base in the mid troposphere, and low surface relative humidity. Under these conditions precipitation evaporates as it falls, cooling the air and making it sink faster because it becomes denser.
Wet microbursts.
Wet microbursts are downbursts accompanied by significant precipitation at the surface. These downbursts rely more on the drag of precipitation for the downward acceleration of parcels than on the negative buoyancy that tends to drive "dry" microbursts. As a result, higher mixing ratios are necessary for these downbursts to form (hence the name "wet" microbursts). Melting of ice, particularly hail, appears to play an important role in downburst formation (Wakimoto and Bringi, 1988), especially in the lowest above surface level (Proctor, 1989). These factors, among others, make forecasting wet microbursts difficult.
Straight-line winds.
Straight-line winds (also known as plough winds, thundergusts, and hurricanes of the prairie) are very strong winds that can produce damage, demonstrating a lack of the rotational damage pattern associated with tornadoes. Straight-line winds are common with the gust front of a thunderstorm or originate with a downburst from a thunderstorm. These events can cause considerable damage, even in the absence of a tornado. The winds can gust to and winds of or more can last for more than twenty minutes. In the United States, such straight-line wind events are most common during the spring when instability is highest and weather fronts routinely cross the country. Straight-line wind events in the form of derechos can take place throughout the eastern half of the U.S.
Straight-line winds may be damaging to marine interests. Small ships, cutters and sailboats are at risk from this meteorological phenomenon.
Formation.
The formation of a downburst starts with hail or large raindrops falling through drier air. Hailstones melt and raindrops evaporate, pulling latent heat from surrounding air and cooling it considerably. Cooler air has a higher density than the warmer air around it, so it sinks to the surface. As the cold air hits the ground or water it spreads out and a mesoscale front can be observed as a gust front. Areas under and immediately adjacent to the downburst receive the highest winds and rainfall, if any is present. Also, because the rain-cooled air is descending from the middle troposphere, a significant drop in temperatures is noticed. Due to interaction with the surface, the downburst quickly loses strength as it fans out and forms the distinctive "curl shape" that is commonly seen at the periphery of the microburst (see image). Downbursts usually last only a few minutes and then dissipate, except in the case of squall lines and derecho events. However, despite their short lifespan, microbursts are a serious hazard to aviation and property and can result in substantial damage to the area.
Downbursts go through three stages in their cycle: the downburst, outburst, and cushion stages.
Development stages of microbursts.
The evolution of microbursts is broken down into three stages: the contact stage, the outburst stage, and the cushion stage.
On a weather radar Doppler display, a downburst is seen as a couplet of radial winds in the outburst and cushion stages. The rightmost image shows such a display from the ARMOR Doppler Weather Radar in Huntsville, Alabama, in 2012. The radar is on the right side of the image and the downburst is along the line separating the velocity towards the radar (green), and the one moving away (red).
Physical processes of dry and wet microbursts.
Basic physical processes using simplified buoyancy equations.
Start by using the vertical momentum equation:
formula_0
By decomposing the variables into a basic state and a perturbation, defining the basic states, and using the ideal gas law (formula_1), the equation can be written in the form
formula_2
where B is buoyancy. The virtual temperature correction usually is rather small and, to a good approximation, can be ignored when computing buoyancy. Finally, the effects of precipitation loading on the vertical motion are parametrized by including a term that decreases buoyancy as the liquid water mixing ratio (formula_3) increases, leading to the final form of the parcel's momentum equation:
formula_4
The first term is the effect of perturbation pressure gradients on vertical motion. In some storms this term has a large effect on updrafts (Rotunno and Klemp, 1982) but there is not much reason to believe it has much of an impact on downdrafts (at least to a first approximation) and therefore will be ignored.
The second term is the effect of buoyancy on vertical motion. Clearly, in the case of microbursts, one expects to find that B is negative, meaning the parcel is cooler than its environment. This cooling typically takes place as a result of phase changes (evaporation, melting, and sublimation). Precipitation particles that are small, but are in great quantity, promote a maximum contribution to cooling and, hence, to creation of negative buoyancy. The major contribution to this process is from evaporation.
The last term is the effect of water loading. Whereas evaporation is promoted by large numbers of small droplets, it only requires a few large drops to contribute substantially to the downward acceleration of air parcels. This term is associated with storms having high precipitation rates. Comparing the effects of water loading to those associated with buoyancy, if a parcel has a liquid water mixing ratio of 1.0 g kg−1, this is roughly equivalent to about 0.3 K of negative buoyancy; the latter is a large (but not extreme) value. Therefore, in general terms, negative buoyancy is typically the major contributor to downdrafts.
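The equivalence quoted above between a liquid water mixing ratio of 1.0 g kg−1 and roughly 0.3 K of negative buoyancy can be checked with a one-line calculation; the environmental temperature of 300 K used below is an assumed, representative value.

```python
# Hedged back-of-the-envelope check: the drag term g*l for l = 1.0 g/kg matches the
# buoyancy of a parcel about 0.3 K colder than a ~300 K environment (g*dT/T = g*l).
g, T = 9.81, 300.0
l = 1.0e-3                     # 1.0 g/kg expressed as kg/kg
water_loading = g * l          # ~0.0098 m s^-2 downward acceleration
equivalent_dT = l * T          # dT = l*T ~ 0.3 K of negative buoyancy
print(water_loading, equivalent_dT)
```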
Negative vertical motion associated only with buoyancy.
Using pure "parcel theory" results in a prediction of the maximum downdraft of
formula_5
where NAPE is the negative available potential energy,
formula_6
and where LFS denotes the level of free sink for a descending parcel and SFC denotes the surface. This means that the maximum downward motion is associated with the integrated negative buoyancy. Even a relatively modest negative buoyancy can result in a substantial downdraft if it is maintained over a relatively large depth. A downward speed of 25 m s−1 results from the relatively modest NAPE value of 312.5 m2 s−2. To a first approximation, the maximum gust is roughly equal to the maximum downdraft speed.
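The quoted numbers follow directly from the parcel-theory formula; the snippet below is only a restatement of that arithmetic.

```python
# Hedged check of the parcel-theory estimate above.
nape = 312.5                      # m^2 s^-2
w_max = (2 * nape) ** 0.5         # -w_max = sqrt(2 * NAPE)
print(w_max)                      # 25.0 m/s maximum downdraft
```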
Heat bursts.
A special, and much rarer, kind of downburst is a heat burst, which results from precipitation-evaporated air compressionally heating as it descends from very high altitude, usually on the backside of a dying squall line or outflow boundary. Heat bursts are chiefly a nocturnal occurrence, can produce winds over , are characterized by exceptionally dry air, can suddenly raise the surface temperature to or more, and sometimes persist for several hours.
Danger to shipping.
The sinking of the yacht "Bayesian" in August 2024 was attributed, in part, to a downburst, after previously being attributed to a tornado.
Danger to aviation.
Downbursts, particularly microbursts, are exceedingly dangerous to aircraft which are taking off or landing due to the strong vertical wind shear caused by these events. Several fatal crashes are attributed to downbursts.
A number of fatal crashes and other aircraft incidents have been attributed to microbursts in the vicinity of airports.
A microburst often causes aircraft to crash when they are attempting to land or shortly after takeoff (American Airlines Flight 63 and Delta Air Lines Flight 318 are notable exceptions). The microburst is an extremely powerful gust of air that, once hitting the surface, spreads in all directions. As the aircraft is coming in to land, the pilots try to slow the plane to an appropriate speed. When the microburst hits, the pilots will see a large spike in their airspeed, caused by the force of the headwind created by the microburst. A pilot inexperienced with microbursts would try to decrease the speed. The plane would then travel through the microburst, and fly into the tailwind, causing a sudden decrease in the amount of air flowing across the wings. The decrease in airflow over the wings of the aircraft causes a drop in the amount of lift produced. This decrease in lift combined with a strong downward flow of air can cause the thrust required to remain at altitude to exceed what is available, thus causing the aircraft to stall. If the plane is at a low altitude shortly after takeoff or during landing, it will not have sufficient altitude to recover.
The strongest microburst recorded thus far occurred at Andrews Field, Maryland, on 1 August 1983, with wind speeds reaching .
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "{dw\\over dt} = -{1\\over\\rho} {\\partial p\\over\\partial z}-g"
},
{
"math_id": 1,
"text": "p = \\rho RT_v"
},
{
"math_id": 2,
"text": "B \\equiv -{\\rho^\\prime\\over\\bar\\rho}g = g{T^\\prime_v - \\bar T_v \\over \\bar T_v}"
},
{
"math_id": 3,
"text": "\\ell"
},
{
"math_id": 4,
"text": "{dw^\\prime\\over dt} = {1\\over\\bar\\rho}{\\partial p^\\prime\\over\\partial z} + B - g\\ell"
},
{
"math_id": 5,
"text": "-w_{\\rm max} = \\sqrt{2\\times\\hbox{NAPE}}"
},
{
"math_id": 6,
"text": "\\hbox{NAPE} = -\\int_{\\rm SFC}^{\\rm LFS} B\\,dz"
}
] |
https://en.wikipedia.org/wiki?curid=586610
|
58665819
|
Cristiane de Morais Smith
|
Brazilian theoretical physicist
Cristiane de Morais Smith Lehner is a Brazilian theoretical physicist and professor at the Institute for Theoretical Physics at the University of Utrecht, where she leads a research group studying condensed matter physics, cold atoms and strongly-correlated systems. In 2019, the European Physical Society awarded Morais Smith its Emmy Noether Distinction.
Morais Smith has authored or co-authored over 100 academic papers, including articles in "Nature Communications", "Physical Review", and "Physical Review B", where several of her papers have been recognized as "Editors Choice" and "Scientific Highlights".
As of March 2020, her work has received over 2900 citations.
In addition, Morais Smith is an editor of "The European Physical Journal B", which focuses on condensed matter and complex systems.
Education.
Morais Smith obtained a physics BSc from University of Campinas in 1985, continuing to complete a MSc with highest honors in 1989 entitled "The Effect of the Initial Preparation to Describe the Dynamics of a Quantum Brownian Particle", under her adviser Amir Caldeira.
She continued with Caldeira to complete a PhD in 1994, also at University of Campinas entitled "Quantum and Classical Creep of Vortices Intrinsically Pinned in High-Temperature Superconductors".
Much of the work for her PhD was completed at ETH Zurich where she worked with Gianni Blatter.
Career.
Starting in March 1986, Morais Smith was a French language teacher at the Brazilian Telecommunications Company (TELEBRAS) in Campinas, Brazil, until December 1988, when she briefly became the owner of, and a teacher at, a French language school.
From 1989 to 1994, during her PhD, Morais Smith accepted a permanent lecturer position in the Department of Physics at the State University of São Paulo (UNESP), in Bauru, Brazil.
During this time, she was also a visiting scientist in the Condensed Matter group at the International Center for Theoretical Physics in Trieste, Italy, as well as a guest PhD student at ETH Zurich.
Following the award of her PhD, she accepted a postdoctoral position at the Institute of Theoretical Physics, also at ETH Zurich.
In 1995, she moved to the Institute of Theoretical Physics at the University of Hamburg, Germany as a research assistant.
After finishing a post-doc in Hamburg, Morais Smith began working at the Institute of Theoretical Physics at the University of Fribourg, Switzerland, where the Swiss National Science Foundation awarded her the Professor Boursier Fellowship. At the University of Fribourg, she advanced to an associate professor position.
In 2004, Morais Smith was hired by Utrecht University as a full professor with a chair in condensed matter theory. She is chair of the Condensed Matter Physics department there.
Morais Smith is one of three directors of the Delta Institute for Theoretical Physics (a collaboration between University of Amsterdam, Leiden University, and Utrecht University), where she represents Utrecht University.
Other.
Morais Smith grew up in Paraguaçu Paulista, a village 500 km from São Paulo, Brazil. She was thirteen years old when she decided to become a physicist.
Morais Smith is fluent in Portuguese (her native language), English, French, Italian, German, Spanish, and Dutch.
In September 2002, Morais Smith married Stefan Lehner, a Swiss designer.
Morais Smith was featured in a Dutch newspaper article in 2018 for her work on chiral superconductors where the pairing function has a chirality e.g. formula_0. In layered chiral superconductors there is a geometric Meissner effect, which is a variation on the Meissner effect.
Honors and awards.
In 2001, Morais Smith was awarded a "Professeur Boursier du Fond National Suisse" or Swiss National Science Foundation Professorship for a project entitled "Spontaneous formation of charge patterns in two-dimensional strongly interacting electron systems".
This prize is awarded to only 2 or 3 researchers per year, and holds a monetary award of more than one million Swiss Francs.
Morais Smith received a People’s Republic of China 'High-End Foreign Expert' Professorship in 2014 and 2015 at the Wilczek Quantum Center, awarded by the State Administration of Foreign Experts Affairs.
In 2016, the Hamburg Centre for Ultrafast Imaging (CUI) awarded its Dresselhaus Prize to Morais Smith for "her outstanding contribution to the understanding of topological phases in two-dimensional atomic and electronic systems".
In 2019, the European Physical Society awarded Morais Smith its EPS Emmy Noether Distinction "for her outstanding contributions to the theory of condensed matter systems and ultracold atoms to unveil novel quantum states of matter." The EPS Emmy Noether Distinction for Women in Physics, established in 2013, is awarded "to enhance the recognition of noteworthy women physicists with a strong connection to Europe through nationality or work," particularly those who are "role models that will help to attract women to a career in physics."
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "p_x + ip_y"
}
] |
https://en.wikipedia.org/wiki?curid=58665819
|
58666321
|
Numerical certification
|
Numerical certification is the process of verifying the correctness of a candidate solution to a system of equations. In (numerical) computational mathematics, such as numerical algebraic geometry, candidate solutions are computed algorithmically, but there is the possibility that errors have corrupted the candidates. For instance, in addition to the inexactness of input data and candidate solutions, numerical errors or errors in the discretization of the problem may result in corrupted candidate solutions. The goal of numerical certification is to provide a certificate which proves which of these candidates are, indeed, approximate solutions.
Methods for certification can be divided into two flavors: "a priori" certification and "a posteriori" certification. "A posteriori" certification confirms the correctness of the final answers (regardless of how they are generated), while "a priori" certification confirms the correctness of each step of a specific computation. A typical example of "a posteriori" certification is Smale's alpha theory, while a typical example of "a priori" certification is interval arithmetic.
Certificates.
A certificate for a root is a computational proof of the correctness of a candidate solution. For instance, a certificate may consist of an approximate solution formula_0, a region formula_1 containing formula_0, and a proof that formula_1 contains exactly one solution to the system of equations.
In this context, an "a priori" numerical certificate is a certificate in the sense of correctness in computer science. On the other hand, an "a posteriori" numerical certificate operates only on solutions, regardless of how they are computed. Hence, "a posteriori" certification is different from algorithmic correctness – for an extreme example, an algorithm could randomly generate candidates and attempt to certify them as approximate roots using "a posteriori" certification.
"A posteriori" certification methods.
There are a variety of methods for "a posteriori" certification, including the approaches described in the following subsections.
Alpha theory.
The cornerstone of Smale's alpha theory is bounding the error for Newton's method. Smale's 1986 work introduced the quantity formula_2, which quantifies the convergence of Newton's method. More precisely, let formula_3 be a system of analytic functions in the variables formula_0, formula_4 the derivative operator, and formula_5 the Newton operator. The quantities
formula_6
formula_7
and
formula_8
are used to certify a candidate solution. In particular, if
formula_9
then formula_0 is an approximate solution for formula_10, i.e., the candidate is in the domain of quadratic convergence for Newton's method. In other words, if this inequality holds, then there is a root formula_11 of formula_3 so that iterates of the Newton operator converge as
formula_12
The software package alphaCertified provides an implementation of the alpha test for polynomials by estimating formula_13 and formula_14.
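For a univariate polynomial the quantities above reduce to ordinary derivative ratios, so the alpha test can be sketched in a few lines. The sketch below uses plain floating-point arithmetic for illustration only; a rigorous certificate, as produced by alphaCertified, would use exact rational or interval arithmetic. The function name and the example polynomial are invented for the illustration.

```python
# Hedged sketch of Smale's alpha test for a univariate polynomial.
import math
from numpy.polynomial import Polynomial

def alpha_test(coeffs, x0):
    """coeffs in increasing degree order; returns (alpha, beta, gamma, certified)."""
    f = Polynomial(coeffs)
    df = f.deriv()
    beta = abs(f(x0) / df(x0))                    # beta(f, x) = |f(x)/f'(x)|
    gamma = 0.0
    for k in range(2, len(coeffs)):               # derivatives beyond the degree vanish
        term = abs(f.deriv(k)(x0) / (math.factorial(k) * df(x0))) ** (1.0 / (k - 1))
        gamma = max(gamma, term)
    alpha = beta * gamma
    return alpha, beta, gamma, alpha < (13 - 3 * math.sqrt(17)) / 4

# x^2 - 2 near 1.4142: should be certified as an approximate root (of sqrt(2)).
print(alpha_test([-2, 0, 1], 1.4142))
```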
Interval Newton and Krawczyck methods.
Suppose formula_15 is a function whose fixed points correspond to the roots of formula_3. For example, the Newton operator has this property. Suppose that formula_16 is a region, say compact and convex. If formula_17 maps formula_16 into itself, i.e., formula_18, then by the Brouwer fixed-point theorem formula_17 has a fixed point in formula_16, so formula_3 has a root in formula_16. If, moreover, formula_17 is contractive on formula_16, then this root is unique.
There are versions of the following methods over the complex numbers, but both the interval arithmetic and conditions must be adjusted to reflect this case.
Interval Newton method.
In the univariate case, Newton's method can be directly generalized to certify a root over an interval. For an interval formula_19, let formula_20 be the midpoint of formula_19. Then, the interval Newton operator applied to formula_19 is
formula_21
In practice, any interval containing formula_22 can be used in this computation. If formula_0 is a root of formula_3, then by the mean value theorem, there is some formula_23 such that formula_24. In other words, formula_25. Since formula_22 contains the derivative of formula_3 at every point of formula_19, it follows that formula_26. Therefore, formula_27.
Furthermore, if formula_28, then either formula_20 is a root of formula_3 and formula_29 or formula_30. Therefore, formula_31 is at most half the width of formula_19. Therefore, if there is some root of formula_3 in formula_19, the iterative procedure of replacing formula_19 by formula_32 will converge to this root. If, on the other hand, there is no root of formula_3 in formula_19, this iterative procedure will eventually produce an empty interval, a witness to the nonexistence of roots.
See interval Newton method for higher dimensional analogues of this approach.
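A minimal sketch of the univariate iteration is given below for the example f(x) = x^2 - 2 on [1, 2]. It uses ad hoc interval arithmetic with plain floats; a rigorous implementation would use directed (outward) rounding, for example through an interval arithmetic library, and the helper name is invented.

```python
# Hedged sketch of one univariate interval Newton step for f(x) = x^2 - 2.
def interval_newton_step(lo, hi):
    m = 0.5 * (lo + hi)
    fm = m * m - 2.0                      # f evaluated at the midpoint (a point value)
    # Enclosure of f'(x) = 2x over [lo, hi]; f' is monotone here, so endpoints suffice.
    dlo, dhi = 2.0 * lo, 2.0 * hi
    assert dlo > 0 or dhi < 0, "0 in f'(J): split the interval instead"
    q = sorted([fm / dlo, fm / dhi])      # interval division fm / [dlo, dhi]
    cand_lo, cand_hi = m - q[1], m - q[0] # IN(J) = m - f(m)/f'(J)
    return max(lo, cand_lo), min(hi, cand_hi)   # intersect with J

J = (1.0, 2.0)
for _ in range(4):
    J = interval_newton_step(*J)
    print(J)          # the interval shrinks toward sqrt(2) ~ 1.41421356
```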
Krawczyck method.
Let formula_33 be any invertible formula_34 matrix in formula_35. Typically, one takes formula_33 to be an approximation to formula_36. Then, define the function formula_37 We observe that formula_0 is a fixed point of formula_17 if and only if formula_0 is a root of formula_3. Therefore, the approach above can be used to identify roots of formula_3. This approach is similar to a multivariate version of Newton's method, replacing the derivative with the fixed matrix formula_33.
We observe that if formula_19 is a compact and convex region and formula_38, then, for any formula_39, there exist formula_40 such that
formula_41
Let formula_42 be the Jacobian matrix of formula_17 evaluated on formula_19. In other words, the entry formula_43 consists of the image of formula_44 over formula_19. It then follows that formula_45
where the matrix-vector product is computed using interval arithmetic. Then, allowing formula_0 to vary in formula_19, it follows that the image of formula_17 on formula_19 satisfies the following containment: formula_46
where the calculations are, once again, computed using interval arithmetic. Combining this with the formula for formula_17, the result is the Krawczyck operator
formula_47
where formula_16 is the identity matrix.
If formula_48, then formula_17 has a fixed point in formula_19, i.e., formula_3 has a root in formula_19. On the other hand, if every matrix in formula_49 has matrix norm (induced by the supremum norm on vectors) less than formula_50, then formula_17 is contractive within formula_19, so the fixed point is unique.
A simpler test, when formula_19 is an axis-aligned parallelepiped, uses formula_51, i.e., the midpoint of formula_19. In this case, there is a unique root of formula_3 if
formula_52
where formula_53 is the length of the longest side of formula_19.
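For illustration, a one-dimensional analogue of the Krawczyck test can be sketched as follows, again for f(x) = x^2 - 2; the enclosures are computed with plain floats rather than rigorous outward-rounded interval arithmetic, and the function name is invented.

```python
# Hedged 1-D sketch of the Krawczyck containment test on J = [1.3, 1.5],
# with y = m(J) and Y an approximation to 1/f'(y).
def krawczyk_test(lo, hi):
    y = 0.5 * (lo + hi)
    Y = 1.0 / (2.0 * y)                          # approximate inverse of f'(y) = 2y
    # Enclosure of 1 - Y*f'(x) = 1 - 2*Y*x over J (linear, so endpoints suffice).
    s_lo, s_hi = sorted([1.0 - 2.0 * Y * lo, 1.0 - 2.0 * Y * hi])
    d_lo, d_hi = lo - y, hi - y                  # enclosure of (J - y)
    prod = [s * d for s in (s_lo, s_hi) for d in (d_lo, d_hi)]
    k_lo = y - Y * (y * y - 2.0) + min(prod)
    k_hi = y - Y * (y * y - 2.0) + max(prod)
    return (k_lo, k_hi), (k_lo >= lo and k_hi <= hi)

print(krawczyk_test(1.3, 1.5))   # containment of K(J) in J certifies a root (sqrt(2)) in J
```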
"A priori" certification methods.
Interval arithmetic.
Interval arithmetic can be used to provide an "a priori" numerical certificate by computing intervals containing unique solutions. By using intervals instead of plain numeric types during path tracking, resulting candidates are represented by intervals. The candidate solution-interval is itself the certificate, in the sense that the solution is guaranteed to be inside the interval.
Condition numbers.
Numerical algebraic geometry solves polynomial systems using homotopy continuation and path tracking methods. By monitoring the condition number for a tracked homotopy at every step, and ensuring that no two solution paths ever intersect, one can compute a numerical certificate along with a solution. This scheme is called "a priori" path tracking.
Non-certified numerical path tracking relies on heuristic methods for controlling time step size and precision. In contrast, "a priori" certified path tracking goes beyond heuristics to provide step size control that "guarantees" that for every step along the path, the current point is within the domain of quadratic convergence for the current path.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "\\alpha"
},
{
"math_id": 3,
"text": "F"
},
{
"math_id": 4,
"text": "D"
},
{
"math_id": 5,
"text": "N"
},
{
"math_id": 6,
"text": "\\beta(f,x) = \\|x - N(x)\\| = \\|Df(x)^{-1} f(x)\\|"
},
{
"math_id": 7,
"text": "\\gamma(f,x) = \\sup_{k\\geq 2}\\left\\| \\frac{Df(x)^{-1}D^kf(x)}{k!} \\right\\|^\\frac{1}{k-1}"
},
{
"math_id": 8,
"text": "\\alpha(f,x) = \\beta(f,x) \\gamma(f,x) "
},
{
"math_id": 9,
"text": "\\alpha(f,x) < \\frac{13 - 3\\sqrt{17}}{4},"
},
{
"math_id": 10,
"text": "f"
},
{
"math_id": 11,
"text": "x^\\ast"
},
{
"math_id": 12,
"text": "\\left\\|N^k(x)-x^\\ast\\right\\|\\leq\\frac{1}{2^{2^k-1}}\\|x-x^\\ast\\|."
},
{
"math_id": 13,
"text": "\\beta"
},
{
"math_id": 14,
"text": "\\gamma"
},
{
"math_id": 15,
"text": "G:\\mathbb{R}^n\\rightarrow\\mathbb{R}^n"
},
{
"math_id": 16,
"text": "I"
},
{
"math_id": 17,
"text": "G"
},
{
"math_id": 18,
"text": "G(I)\\subseteq I"
},
{
"math_id": 19,
"text": "J"
},
{
"math_id": 20,
"text": "m(J)"
},
{
"math_id": 21,
"text": "IN(J)=m(J)-F(m(J))/F'(J)."
},
{
"math_id": 22,
"text": "F'(J)"
},
{
"math_id": 23,
"text": "c\\in J"
},
{
"math_id": 24,
"text": "F(m(J))-F'(c)(m(J)-x)=F(x)=0"
},
{
"math_id": 25,
"text": "F(m(J))=F'(c)(m(J)-x)"
},
{
"math_id": 26,
"text": "m(J)-x\\in F(m(J))/F'(J)"
},
{
"math_id": 27,
"text": "x=m(J)-(m(J)-x)\\in IN(J)"
},
{
"math_id": 28,
"text": "0\\not\\in F'(J)"
},
{
"math_id": 29,
"text": "IN(J)=\\{m(J)\\}"
},
{
"math_id": 30,
"text": "m(J)\\not\\in IN(J)"
},
{
"math_id": 31,
"text": "J\\cap N(J)"
},
{
"math_id": 32,
"text": "J\\cap IN(J)"
},
{
"math_id": 33,
"text": "Y"
},
{
"math_id": 34,
"text": "n\\times n"
},
{
"math_id": 35,
"text": "GL(n,\\mathbb{R})"
},
{
"math_id": 36,
"text": "F'(y)^{-1}"
},
{
"math_id": 37,
"text": "G(x)=x-YF(x)."
},
{
"math_id": 38,
"text": "y\\in J"
},
{
"math_id": 39,
"text": "x\\in J"
},
{
"math_id": 40,
"text": "c_1,\\dots,c_n\\in J"
},
{
"math_id": 41,
"text": "G(y)-G(x)=\\begin{bmatrix}\\nabla g_1(c_1)^T\\\\\\vdots\\\\\\nabla g_n(c_n)^T\\end{bmatrix}(y-x)."
},
{
"math_id": 42,
"text": "G'(J)"
},
{
"math_id": 43,
"text": "(G'(J))_{ij}"
},
{
"math_id": 44,
"text": "\\frac{\\partial g_i}{\\partial x_j}"
},
{
"math_id": 45,
"text": "G(x)\\in G(y)+\\nabla G(J)(x-y),"
},
{
"math_id": 46,
"text": "G(J)\\subset G(y)+G'(J)(J-y),"
},
{
"math_id": 47,
"text": "K_{y,Y}(J)=y-YF(y)+(I-F'(J))(J-y),"
},
{
"math_id": 48,
"text": "K_{y,Y}(J)\\subset J"
},
{
"math_id": 49,
"text": "I-F'(J)"
},
{
"math_id": 50,
"text": "1"
},
{
"math_id": 51,
"text": "y=m(J)"
},
{
"math_id": 52,
"text": "\\|K(X)-m(X)\\|<\\frac{w(X)}{2},"
},
{
"math_id": 53,
"text": "w(X)"
}
] |
https://en.wikipedia.org/wiki?curid=58666321
|
58667660
|
Wiener's lemma
|
In mathematics, Wiener's lemma is a well-known identity which relates the asymptotic behaviour of the Fourier coefficients of a Borel measure on the circle to its atomic part. This result admits an analogous statement for measures on the real line. It was first discovered by Norbert Wiener.
Statement.
Let formula_0 be a complex Borel measure on the circle formula_1 and let formula_2 be its atomic part, meaning that formula_3 and formula_4 for formula_5. Then
formula_6
where formula_7 is the formula_8-th Fourier coefficient of formula_0.
Similarly, if formula_0 is a complex Borel measure on the real line formula_9 and formula_10 is its atomic part, then
formula_11
where formula_12 is the Fourier transform of formula_0.
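The identity can be illustrated numerically (this is not part of the proof). In the sketch below the measure consists of two atoms of mass 0.5 and 0.25 plus an absolutely continuous part with density (1 + cos θ)/(2π); the atom locations and masses are arbitrary choices for the example, and the Cesàro averages should approach 0.5^2 + 0.25^2 = 0.3125.

```python
# Hedged numerical illustration of Wiener's lemma on the circle.
import numpy as np

def mu_hat(n):
    z1, z2 = np.exp(1j * 1.0), np.exp(1j * 2.5)        # atom locations (assumed)
    atoms = 0.5 * z1 ** (-n) + 0.25 * z2 ** (-n)
    # Fourier coefficients of the a.c. part with density (1 + cos(theta)) / (2*pi)
    ac = 1.0 if n == 0 else (0.5 if abs(n) == 1 else 0.0)
    return atoms + ac

for N in (10, 100, 1000, 10000):
    ns = np.arange(-N, N + 1)
    avg = np.mean([abs(mu_hat(n)) ** 2 for n in ns])
    print(N, avg)     # tends to 0.3125 as N grows
```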
Proof.
We first prove the statement for the circle. For any complex Borel measure formula_13 on formula_1, one has
formula_14
with formula_15. The function formula_16 is bounded by formula_17 in absolute value and satisfies formula_18, while formula_19 for formula_20, which converges to formula_21 as formula_22. Hence, by the dominated convergence theorem,
formula_23
We now take formula_24 to be the pushforward of formula_25 under the inverse map on formula_1, namely formula_26 for any Borel set formula_27. This complex measure has Fourier coefficients formula_28. We are going to apply the above to the convolution between formula_0 and formula_24, namely we choose formula_29, meaning that formula_13 is the pushforward of the measure formula_30 (on formula_31) under the product map formula_32. By Fubini's theorem
formula_33
So, by the identity derived earlier,
formula_34
By Fubini's theorem again, the right-hand side equals
formula_35
This proves the statement for the circle. The proof for the real line is analogous: for any complex Borel measure formula_13 on formula_9 one similarly has
formula_36
(which follows from Fubini's theorem), where formula_37.
We observe that formula_38, formula_39 and formula_40 for formula_41, which converges to formula_21 as formula_42. So, by dominated convergence, we have the analogous identity
formula_43
The statement for the real line now follows by applying this identity, exactly as in the circle case, to the convolution of formula_0 with the pushforward of formula_25 under the map sending x to −x.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mu"
},
{
"math_id": 1,
"text": "\\mathbb T"
},
{
"math_id": 2,
"text": "\\mu_a=\\sum_j c_j\\delta_{z_j}"
},
{
"math_id": 3,
"text": "\\mu(\\{z_j\\})=c_j\\neq 0"
},
{
"math_id": 4,
"text": "\\mu(\\{z\\})=0"
},
{
"math_id": 5,
"text": "z\\not\\in\\{z_j\\}"
},
{
"math_id": 6,
"text": "\\lim_{N\\to\\infty}\\frac{1}{2N+1}\\sum_{n=-N}^N|\\widehat\\mu(n)|^2=\\sum_j|c_j|^2,"
},
{
"math_id": 7,
"text": "\\widehat{\\mu}(n)=\\int_{\\mathbb T}z^{-n}\\,d\\mu(z)"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "\\mathbb R"
},
{
"math_id": 10,
"text": "\\mu_a=\\sum_j c_j\\delta_{x_j}"
},
{
"math_id": 11,
"text": "\\lim_{R\\to\\infty}\\frac{1}{2R}\\int_{-R}^R|\\widehat\\mu(\\xi)|^2\\,d\\xi=\\sum_j|c_j|^2,"
},
{
"math_id": 12,
"text": "\\widehat{\\mu}(\\xi)=\\int_{\\mathbb R}e^{-2\\pi i\\xi x}\\,d\\mu(x)"
},
{
"math_id": 13,
"text": "\\nu"
},
{
"math_id": 14,
"text": "\\frac{1}{2N+1}\\sum_{n=-N}^N\\widehat{\\nu}(n)=\\int_{\\mathbb T}f_N(z)\\,d\\nu(z),"
},
{
"math_id": 15,
"text": "f_N(z)=\\frac{1}{2N+1}\\sum_{n=-N}^N z^{-n}"
},
{
"math_id": 16,
"text": "f_N"
},
{
"math_id": 17,
"text": "1"
},
{
"math_id": 18,
"text": "f_N(1)=1"
},
{
"math_id": 19,
"text": "f_N(z)=\\frac{z^{N+1}-z^{-N}}{(2N+1)(z-1)}"
},
{
"math_id": 20,
"text": "z\\in\\mathbb{T}\\setminus\\{1\\}"
},
{
"math_id": 21,
"text": "0"
},
{
"math_id": 22,
"text": "N\\to\\infty"
},
{
"math_id": 23,
"text": "\\lim_{N\\to\\infty}\\frac{1}{2N+1}\\sum_{n=-N}^N\\widehat{\\nu}(n)=\\int_{\\mathbb T}1_{\\{1\\}}(z)\\,d\\nu(z)=\\nu(\\{1\\})."
},
{
"math_id": 24,
"text": "\\mu'"
},
{
"math_id": 25,
"text": "\\overline\\mu"
},
{
"math_id": 26,
"text": "\\mu'(B)=\\overline{\\mu(B^{-1})}"
},
{
"math_id": 27,
"text": "B\\subseteq\\mathbb T"
},
{
"math_id": 28,
"text": "\\widehat{\\mu'}(n)=\\overline{\\widehat{\\mu}(n)}"
},
{
"math_id": 29,
"text": "\\nu=\\mu*\\mu'"
},
{
"math_id": 30,
"text": "\\mu\\times\\mu'"
},
{
"math_id": 31,
"text": "\\mathbb T\\times\\mathbb T"
},
{
"math_id": 32,
"text": "\\cdot:\\mathbb{T}\\times\\mathbb{T}\\to\\mathbb{T}"
},
{
"math_id": 33,
"text": "\\widehat{\\nu}(n)=\\int_{\\mathbb{T}\\times\\mathbb{T}}(zw)^{-n}\\,d(\\mu\\times\\mu')(z,w)\n=\\int_{\\mathbb T}\\int_{\\mathbb T}z^{-n}w^{-n}\\,d\\mu'(w)\\,d\\mu(z)=\\widehat{\\mu}(n)\\widehat{\\mu'}(n)=|\\widehat{\\mu}(n)|^2."
},
{
"math_id": 34,
"text": "\\lim_{N\\to\\infty}\\frac{1}{2N+1}\\sum_{n=-N}^N|\\widehat{\\mu}(n)|^2=\\nu(\\{1\\})=\\int_{\\mathbb T\\times\\mathbb T}1_{\\{zw=1\\}}\\,d(\\mu\\times\\mu')(z,w)."
},
{
"math_id": 35,
"text": "\\int_{\\mathbb T}\\mu'(\\{z^{-1}\\})\\,d\\mu(z)=\\int_{\\mathbb T}\\overline{\\mu(\\{z\\})}\\,d\\mu(z)=\\sum_j|\\mu(\\{z_j\\})|^2=\\sum_j|c_j|^2."
},
{
"math_id": 36,
"text": "\\frac{1}{2R}\\int_{-R}^R\\widehat\\nu(\\xi)\\,d\\xi=\\int_{\\mathbb R}f_R(x)\\,d\\nu(x)"
},
{
"math_id": 37,
"text": "f_R(x)=\\frac{1}{2R}\\int_{-R}^R e^{-2\\pi i\\xi x}\\,d\\xi"
},
{
"math_id": 38,
"text": "|f_R|\\le 1"
},
{
"math_id": 39,
"text": "f_R(0)=1"
},
{
"math_id": 40,
"text": "f_R(x)=\\frac{e^{2\\pi iRx}-e^{-2\\pi iRx}}{4\\pi iRx}"
},
{
"math_id": 41,
"text": "x\\neq 0"
},
{
"math_id": 42,
"text": "R\\to\\infty"
},
{
"math_id": 43,
"text": "\\lim_{R\\to\\infty}\\frac{1}{2R}\\int_{-R}^R\\widehat\\nu(\\xi)\\,d\\xi=\\nu(\\{0\\})."
},
{
"math_id": 44,
"text": "\\mu_a=0"
},
{
"math_id": 45,
"text": "\\lim_{N\\to\\infty}\\frac{1}{2N+1}\\sum_{n=-N}^N|\\widehat\\mu(n)|^2=0"
},
{
"math_id": 46,
"text": "\\lim_{N\\to\\infty}\\frac{1}{2N+1}\\sum_{n=-N}^N|\\widehat\\mu(n)|^2=1"
},
{
"math_id": 47,
"text": "c_j"
},
{
"math_id": 48,
"text": "1=\\sum_j c_j^2\\le\\sum_j c_j\\le 1"
},
{
"math_id": 49,
"text": "c_j^2=c_j"
},
{
"math_id": 50,
"text": "c_j=1"
}
] |
https://en.wikipedia.org/wiki?curid=58667660
|
586694
|
Signed number representations
|
Encoding of negative numbers in binary number systems
In computing, signed number representations are required to encode negative numbers in binary number systems.
In mathematics, negative numbers in any base are represented by prefixing them with a minus sign ("−"). However, in RAM or CPU registers, numbers are represented only as sequences of bits, without extra symbols. The four best-known methods of extending the binary numeral system to represent signed numbers are: sign–magnitude, ones' complement, two's complement, and offset binary. Some of the alternative methods use implicit instead of explicit signs, such as negative binary, using the base −2. Corresponding methods can be devised for other bases, whether positive, negative, fractional, or other elaborations on such themes.
There is no definitive criterion by which any of the representations is universally superior. For integers, the representation used in most current computing devices is two's complement, although the Unisys ClearPath Dorado series mainframes use ones' complement.
History.
The early days of digital computing were marked by competing ideas about both hardware technology and mathematics technology (numbering systems). One of the great debates was the format of negative numbers, with some of the era's top experts expressing very strong and differing opinions. One camp supported two's complement, the system that is dominant today. Another camp supported ones' complement, where a negative value is formed by inverting all of the bits in its positive equivalent. A third group supported sign–magnitude, where a value is changed from positive to negative simply by toggling the word's highest-order bit.
There were arguments for and against each of the systems. Sign–magnitude allowed for easier tracing of memory dumps (a common process in the 1960s) as small numeric values use fewer 1 bits. These systems did ones' complement math internally, so numbers would have to be converted to ones' complement values when they were transmitted from a register to the math unit and then converted back to sign–magnitude when the result was transmitted back to the register. The electronics required more gates than the other systems – a key concern when the cost and packaging of discrete transistors were critical. IBM was one of the early supporters of sign–magnitude, with their 704, 709 and 709x series computers being perhaps the best-known systems to use it.
Ones' complement allowed for somewhat simpler hardware designs, as there was no need to convert values when passed to and from the math unit. But it also shared an undesirable characteristic with sign–magnitude: the ability to represent negative zero (−0). Negative zero behaves exactly like positive zero: when used as an operand in any calculation, the result will be the same whether an operand is positive or negative zero. The disadvantage is that the existence of two forms of the same value necessitates two comparisons when checking for equality with zero. Ones' complement subtraction can also result in an end-around borrow (described below). It can be argued that this makes the addition and subtraction logic more complicated or that it makes it simpler, as a subtraction requires simply inverting the bits of the second operand as it is passed to the adder. The PDP-1, CDC 160 series, CDC 3000 series, CDC 6000 series, UNIVAC 1100 series, and LINC computer use ones' complement representation.
Two's complement is the easiest to implement in hardware, which may be the ultimate reason for its widespread popularity. Processors on the early mainframes often consisted of thousands of transistors, so eliminating a significant number of transistors was a significant cost savings. Mainframes such as the IBM System/360, the GE-600 series, and the PDP-6 and PDP-10 use two's complement, as did minicomputers such as the PDP-5 and PDP-8 and the PDP-11 and VAX machines. The architects of the early integrated-circuit-based CPUs (Intel 8080, etc.) also chose to use two's complement math. As IC technology advanced, two's complement technology was adopted in virtually all processors, including x86, m68k, Power ISA, MIPS, SPARC, ARM, Itanium, PA-RISC, and DEC Alpha.
Sign–magnitude.
In the sign–magnitude representation, also called sign-and-magnitude or signed magnitude, a signed number is represented by the bit pattern corresponding to the sign of the number for the sign bit (often the most significant bit, set to 0 for a positive number and to 1 for a negative number), and the magnitude of the number (or absolute value) for the remaining bits. For example, in an eight-bit byte, only seven bits represent the magnitude, which can range from 0000000 (0) to 1111111 (127). Thus numbers ranging from −12710 to +12710 can be represented once the sign bit (the eighth bit) is added. For example, −4310 encoded in an eight-bit byte is 10101011 while 4310 is 00101011. Using sign–magnitude representation has multiple consequences which make it more intricate to implement: there are two representations of zero (00000000, i.e. +0, and 10000000, i.e. −0), and addition and subtraction require different behaviour depending on the signs of the operands, so the sign bit must be handled separately from the magnitude.
This approach is directly comparable to the common way of showing a sign (placing a "+" or "−" next to the number's magnitude). Some early binary computers (e.g., IBM 7090) use this representation, perhaps because of its natural relation to common usage. Sign–magnitude is the most common way of representing the significand in floating-point values.
Ones' complement.
In the "ones' complement" representation, a negative number is represented by the bit pattern corresponding to the bitwise NOT (i.e. the "complement") of the positive number. Like sign–magnitude representation, ones' complement has two representations of 0: 00000000 (+0) and 11111111 (−0).
As an example, the ones' complement form of 00101011 (4310) becomes 11010100 (−4310). The range of signed numbers using ones' complement is represented by −(2"N"−1 − 1) to (2"N"−1 − 1) and ±0. A conventional eight-bit byte is −12710 to +12710 with zero being either 00000000 (+0) or 11111111 (−0).
To add two numbers represented in this system, one does a conventional binary addition, but it is then necessary to do an "end-around carry": that is, add any resulting carry back into the resulting sum. To see why this is necessary, consider the case of adding −1 (11111110) to +2 (00000010): the plain binary addition gives 00000000 together with a carry out of the most significant bit, which is incorrect. The correct result (00000001) only appears when the carry is added back in.
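The same end-around-carry computation can be written out programmatically; the sketch below emulates an 8-bit register with a mask, and the helper name is ad hoc rather than taken from any library.

```python
# Hedged illustration of ones'-complement addition with end-around carry on 8 bits.
def ones_complement_add(a, b, bits=8):
    mask = (1 << bits) - 1
    s = a + b
    if s > mask:                     # a carry out of the top bit occurred...
        s = (s & mask) + 1           # ...so add it back in (end-around carry)
    return s & mask

minus_one = 0b11111110              # ones' complement of 00000001
plus_two  = 0b00000010
print(format(ones_complement_add(minus_one, plus_two), '08b'))   # 00000001 == +1
```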
A remark on terminology: The system is referred to as "ones' complement" because the negation of a positive value x (represented as the bitwise NOT of x) can also be formed by subtracting x from the ones' complement representation of zero that is a long sequence of ones (−0). Two's complement arithmetic, on the other hand, forms the negation of x by subtracting x from a single large power of two that is congruent to +0. Therefore, ones' complement and two's complement representations of the same negative value will differ by one.
Note that the ones' complement representation of a negative number can be obtained from the sign–magnitude representation merely by bitwise complementing the magnitude (inverting all the bits after the first). For example, the decimal number −125 with its sign–magnitude representation 11111101 can be represented in ones' complement form as 10000010.
Two's complement.
In the "two's complement" representation, a negative number is represented by the bit pattern corresponding to the bitwise NOT (i.e. the "complement") of the positive number plus one, i.e. to the ones' complement plus one. It circumvents the problems of multiple representations of 0 and the need for the end-around carry of the ones' complement representation. This can also be thought of as the most significant bit representing the inverse of its value in an unsigned integer; in an 8-bit unsigned byte, the most significant bit represents the 128ths place, where in two's complement that bit would represent −128.
In two's-complement, there is only one zero, represented as 00000000. Negating a number (whether negative or positive) is done by inverting all the bits and then adding one to that result. This actually reflects the ring structure on all integers modulo 2"N": formula_0. Addition of a pair of two's-complement integers is the same as addition of a pair of unsigned numbers (except for detection of overflow, if that is done); the same is true for subtraction and even for "N" lowest significant bits of a product (value of multiplication). For instance, a two's-complement addition of 127 and −128 gives the same binary bit pattern as an unsigned addition of 127 and 128, as can be seen from the 8-bit two's complement table.
An easier method to get the negation of a number in two's complement is as follows: starting from the right, find the first "1" and invert all of the bits to the left of that "1". A second method is to invert all the bits in the number and then add one. Example: for +2, which is 00000010 in binary, the negation is ~00000010 + 1 = 11111101 + 1 = 11111110, i.e. −2 (the ~ character is the C bitwise NOT operator, so ~X means "invert all the bits in X").
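The invert-and-add-one rule is easy to reproduce programmatically; the sketch below emulates an 8-bit register with a mask (Python integers being unbounded), and the function name is invented for the example.

```python
# Hedged sketch: negation in 8-bit two's complement via "invert then add one".
def twos_complement_negate(x, bits=8):
    mask = (1 << bits) - 1
    return ((~x) + 1) & mask         # ~x inverts all bits, then add one

plus_two = 0b00000010
print(format(twos_complement_negate(plus_two), '08b'))   # 11111110, i.e. -2
print(format(twos_complement_negate(0), '08b'))          # 00000000: only one zero
```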
Offset binary.
In the "offset binary" representation, also called "excess-K" or "biased", a signed number is represented by the bit pattern corresponding to the unsigned number plus K, with K being the "biasing value" or "offset". Thus 0 is represented by K, and −K is represented by an all-zero bit pattern. This can be seen as a slight modification and generalization of the aforementioned two's-complement, which is virtually the excess-(2N−1) representation with negated most significant bit.
Biased representations are now primarily used for the exponent of floating-point numbers. The IEEE 754 floating-point standard defines the exponent field of a single-precision (32-bit) number as an 8-bit excess-127 field. The double-precision (64-bit) exponent field is an 11-bit excess-1023 field; see exponent bias. It also had use for binary-coded decimal numbers as excess-3.
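A minimal sketch of excess-K coding, using the excess-127 convention of the IEEE 754 single-precision exponent field as the example, is shown below; the function names are invented.

```python
# Hedged sketch of excess-K (offset binary) encoding with K = 127.
K = 127

def encode_excess(n, k=K):
    return n + k                       # stored (unsigned) field value

def decode_excess(field, k=K):
    return field - k

print(encode_excess(0))                # 127: zero is represented by the bias itself
print(encode_excess(-127))             # 0: the all-zero bit pattern
print(decode_excess(0b10000011))       # 131 - 127 = 4, the exponent of values in [16, 32)
```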
Base −2.
In the "base −2" representation, a signed number is represented using a number system with base −2. In conventional binary number systems, the base, or radix, is 2; thus the rightmost bit represents 20, the next bit represents 21, the next bit 22, and so on. However, a binary number system with base −2 is also possible. The rightmost bit represents (−2)0
+1, the next bit represents (−2)1
−2, the next bit (−2)2
+4 and so on, with alternating sign. The numbers that can be represented with four bits are shown in the comparison table below.
The range of numbers that can be represented is asymmetric. If the word has an even number of bits, the magnitude of the largest negative number that can be represented is twice as large as the largest positive number that can be represented, and vice versa if the word has an odd number of bits.
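Conversion of an integer to its base −2 digit string can be sketched with repeated division by −2, keeping each remainder in {0, 1}; the function name below is invented for the example.

```python
# Hedged sketch: convert an integer to its base -2 (negabinary) digit string.
def to_base_minus_two(n):
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:                      # keep the remainder in {0, 1}
            n, r = n + 1, r + 2
        digits.append(str(r))
    return "".join(reversed(digits))

for k in (-8, -2, -1, 0, 1, 2, 3, 6):
    print(k, to_base_minus_two(k))     # e.g. -2 -> "10", 3 -> "111", 6 -> "11010"
```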
Comparison table.
The following table shows the positive and negative integers that can be represented using four bits.
Same table, as viewed from "given these binary bits, what is the number as interpreted by the representation system":
Other systems.
Google's Protocol Buffers "zig-zag encoding" is a system similar to sign–magnitude, but uses the least significant bit to represent the sign and has a single representation of zero. This allows a variable-length quantity encoding intended for nonnegative (unsigned) integers to be used efficiently for signed integers.
A similar method is used in the Advanced Video Coding/H.264 and High Efficiency Video Coding/H.265 video compression standards to extend exponential-Golomb coding to negative numbers. In that extension, the least significant bit is almost a sign bit; zero has the same least significant bit (0) as all the negative numbers. This choice results in the largest magnitude representable positive number being one higher than the largest magnitude negative number, unlike in two's complement or the Protocol Buffers zig-zag encoding.
Another approach is to give each digit a sign, yielding the signed-digit representation. For instance, in 1726, John Colson advocated reducing expressions to "small numbers", numerals 1, 2, 3, 4, and 5. In 1840, Augustin Cauchy also expressed preference for such modified decimal numbers to reduce errors in computation.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb{Z}/2^N\\mathbb{Z}"
}
] |
https://en.wikipedia.org/wiki?curid=586694
|
5867217
|
Archie's law
|
Relationship between the electrical conductivity of a rock to its porosity
In petrophysics, Archie's law is a purely empirical law relating the measured electrical conductivity of a porous rock to its porosity and fluid saturation. It is named after Gus Archie (1907–1978) and laid the foundation for modern well log interpretation, as it relates borehole electrical conductivity measurements to hydrocarbon saturations.
Statement of the law.
The "in-situ" electrical conductivity (formula_0) of a fluid saturated, porous rock is described as
formula_1
where formula_2 is the porosity, formula_3 the electrical conductivity of the pore fluid (typically brine), formula_4 the fluid (water) saturation, formula_5 the cementation exponent, formula_6 the saturation exponent, and formula_7 the tortuosity factor, each discussed below.
This relationship attempts to describe ion flow (mostly sodium and chloride) in clean, consolidated sands, with varying intergranular porosity. Electrical conduction is assumed to be exclusively performed by ions dissolved in the pore-filling fluid. Electrical conduction is considered to be absent in the rock grains of the solid phase or in organic fluids other than water (oil, hydrocarbon, gas).
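A minimal sketch of the law as stated above is given below; the default exponents a = 1, m = 2 and n = 2 are illustrative assumptions, not recommended values, and the function names are invented.

```python
# Hedged sketch of Archie's law and its inversion for water saturation.
def archie_conductivity(c_w, phi, s_w, a=1.0, m=2.0, n=2.0):
    """In-situ conductivity C_t = (1/a) * C_w * phi**m * S_w**n."""
    return (1.0 / a) * c_w * phi ** m * s_w ** n

def archie_water_saturation(r_w, r_t, phi, a=1.0, m=2.0, n=2.0):
    """Invert the resistivity form R_t = a * R_w * phi**-m * S_w**-n for S_w."""
    return (a * r_w / (r_t * phi ** m)) ** (1.0 / n)

# Example: brine of 5 S/m (R_w = 0.2 ohm m) in a 20 %-porosity rock.
print(archie_conductivity(5.0, 0.20, 1.0))        # 0.2 S/m when fully water saturated
print(archie_water_saturation(0.2, 12.5, 0.20))   # ~0.63 water saturation
```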
Reformulated for resistivity measurements.
The electrical resistivity, the inverse of the electrical conductivity formula_8, is expressed as
formula_9
with formula_10 for the total fluid saturated rock resistivity, and formula_11 for the resistivity of the fluid itself (w meaning water or an aqueous solution containing dissolved salts with ions bearing electricity in solution).
The factor
formula_12
is also called the formation factor, where formula_10 (the index formula_13 standing for total) is the resistivity of the rock saturated with the fluid and formula_11 is the resistivity of the fluid (the index formula_14 standing for water) inside the porosity of the rock. Because the porosity is fully saturated with the fluid (often water, formula_14), formula_15.
In case the fluid filling the porosity is a mixture of water and hydrocarbon (petroleum, oil, gas), a resistivity index (formula_16) can be defined:
formula_17
Where formula_18 is the resistivity of the rock saturated in water only.
Parameters.
Cementation exponent, "m".
The cementation exponent models how much the pore network increases the resistivity, as the rock itself is assumed to be non-conductive. If the pore network were to be modelled as a set of parallel capillary tubes, a cross-section area average of the rock's resistivity would yield porosity dependence equivalent to a cementation exponent of 1. However, the tortuosity of the rock increases this to a higher number than 1. This relates the cementation exponent to the permeability of the rock: increasing permeability decreases the cementation exponent.
The exponent formula_5 has been observed near 1.3 for unconsolidated sands, and is believed to increase with cementation. Common values for this cementation exponent for consolidated sandstones are 1.8 < formula_5 < 2.0.
In carbonate rocks, the cementation exponent shows higher variance due to strong diagenetic affinity and complex pore structures. Values between 1.7 and 4.1 have been observed.
The cementation exponent is usually assumed not to be dependent on temperature.
Saturation exponent, "n".
The saturation exponent formula_6 usually is fixed to values close to 2. The saturation exponent models the dependency on the presence of non-conductive fluid (hydrocarbons) in the pore-space, and is related to the wettability of the rock. Water-wet rocks will, for low water saturation values, maintain a continuous film along the pore walls making the rock conductive. Oil-wet rocks will have discontinuous droplets of water within the pore space, making the rock less conductive.
Tortuosity factor, "a".
The constant formula_7, called the "tortuosity factor", "cementation intercept", "lithology factor", or "lithology coefficient", is sometimes used. It is meant to correct for variation in compaction, pore structure and grain size.
The parameter formula_7 is called the tortuosity factor and is related to the path length of the current flow. The value lies in the range 0.5 to 1.5, and it may be different in different reservoirs. However a typical value to start with for a sandstone reservoir might be 0.6, which then can be tuned during log data matching process with other sources of data such as core.
Measuring the exponents.
In petrophysics, the only reliable source for the numerical value of both exponents is experiments on sand plugs from cored wells. The fluid electrical conductivity can be measured directly on produced fluid (groundwater) samples. Alternatively, the fluid electrical conductivity and the cementation exponent can also be inferred from downhole electrical conductivity measurements across fluid-saturated intervals. For fluid-saturated intervals (formula_19) Archie's law can be written
formula_20
Hence, when the logarithm of the measured in-situ electrical conductivity is plotted against the logarithm of the measured in-situ porosity (a Pickett plot), Archie's law predicts a straight-line relationship with slope equal to the cementation exponent formula_5 and intercept equal to the logarithm of the in-situ fluid electrical conductivity.
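The fitting procedure implied by this relationship can be sketched as an ordinary least-squares line fit in log–log space; the data below are synthetic, and the parameter values are assumptions for the illustration.

```python
# Hedged sketch: estimate m and C_w from water-saturated intervals via
# log C_t = log C_w + m log phi, using synthetic data.
import numpy as np

rng = np.random.default_rng(0)
phi = np.array([0.08, 0.12, 0.18, 0.25, 0.30])
c_w_true, m_true = 5.0, 2.0
c_t = c_w_true * phi ** m_true * np.exp(rng.normal(0, 0.02, phi.size))  # noisy "log data"

m_fit, log_cw_fit = np.polyfit(np.log10(phi), np.log10(c_t), 1)
print("m ~", m_fit, " C_w ~", 10 ** log_cw_fit)   # recovers roughly 2.0 and 5.0
```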
Sands with clay/shaly sands.
Archie's law postulates that the rock matrix is non-conductive. For sandstone with clay minerals, this assumption is no longer true in general, due to the clay's structure and cation exchange capacity. The Waxman–Smits equation is one model that tries to correct for this.
|
[
{
"math_id": 0,
"text": "C_t"
},
{
"math_id": 1,
"text": "C_t = \\frac{1}{a} C_w \\phi^m S_w^n"
},
{
"math_id": 2,
"text": "\\phi\\,\\!"
},
{
"math_id": 3,
"text": "C_w"
},
{
"math_id": 4,
"text": "S_w"
},
{
"math_id": 5,
"text": "m"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "a"
},
{
"math_id": 8,
"text": "(R = \\frac{1}{C})"
},
{
"math_id": 9,
"text": "R_t = a R_w \\phi^{-m} S_w^{-n}"
},
{
"math_id": 10,
"text": "R_t"
},
{
"math_id": 11,
"text": "R_w"
},
{
"math_id": 12,
"text": "F = \\frac{a}{\\phi^m} = \\frac{R_t}{R_w}"
},
{
"math_id": 13,
"text": "t"
},
{
"math_id": 14,
"text": "w"
},
{
"math_id": 15,
"text": "S_w^{-n} = 1"
},
{
"math_id": 16,
"text": "I"
},
{
"math_id": 17,
"text": "I = \\frac{R_t}{R_o} = S_w^{-n}"
},
{
"math_id": 18,
"text": "R_o"
},
{
"math_id": 19,
"text": "S_w=1"
},
{
"math_id": 20,
"text": "\\log{C_t} = \\log{C_w} + m \\log{\\phi}\\,\\!"
}
] |
https://en.wikipedia.org/wiki?curid=5867217
|
586817
|
Mass spectrum
|
Tool in chemical analysis
A mass spectrum is a histogram plot of intensity vs. "mass-to-charge ratio" ("m/z") of the ions in a chemical sample, usually acquired using an instrument called a "mass spectrometer". Not all mass spectra of a given substance are the same; for example, some mass spectrometers break the analyte molecules into "fragments", while others observe the intact molecular masses with little fragmentation. A mass spectrum can represent many different types of information depending on the type of mass spectrometer and the specific experiment applied. Common fragmentation processes for organic molecules are the "McLafferty rearrangement" and "alpha cleavage". Straight-chain alkanes and alkyl groups produce a typical series of peaks at "m/z" 29 (CH3CH2+), 43 (CH3CH2CH2+), 57 (CH3CH2CH2CH2+), 71 (CH3CH2CH2CH2CH2+), etc.
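As a quick check of the alkyl series quoted above, the nominal masses of the CnH(2n+1)+ fragments can be computed directly; this short Python snippet is only an illustrative sketch using the integer nominal masses C = 12 and H = 1.

```python
# Nominal m/z of the alkyl cation series CnH(2n+1)+ (singly charged),
# using integer nominal masses C = 12 Da and H = 1 Da.
def alkyl_fragment_mz(n_carbons):
    return 12 * n_carbons + (2 * n_carbons + 1) * 1

print([alkyl_fragment_mz(n) for n in range(2, 6)])  # [29, 43, 57, 71]
```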
X-axis: "m/z" (mass-to-charge ratio).
The x-axis of a mass spectrum represents a relationship between the mass of a given ion and the number of elementary charges that it carries. This is written as the IUPAC standard "m/z", denoting the quantity formed by dividing the mass of an ion (in daltons) by the dalton unit and by its charge number (taken as a positive absolute value). Thus, "m/z" is a dimensionless quantity with no associated units. Despite carrying neither units of mass nor of charge, "m/z" is referred to as the mass-to-charge ratio of an ion. However, it is distinct from the mass-to-charge ratio m/Q (SI units kg/C) that is commonly used in physics. The "m/z" is used in applied mass spectrometry because convenient and intuitive numerical relationships naturally arise when interpreting spectra. A single "m/z" value alone does not contain sufficient information to determine the mass or charge of an ion, but mass information may be extracted by considering the whole spectrum, such as the spacing of isotope peaks or the observation of multiple charge states of the same molecule. These relationships, and the relationship to the mass of the ion in daltons, tend toward simple rational values in "m/z" space. For example, singly charged ions exhibit a spacing of 1 between isotope peaks, and the mass of such an ion in daltons is numerically equal to its "m/z". The IUPAC Gold Book gives an example of appropriate use: "for the ion C7H72+, m/z equals 45.5".
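The IUPAC example above can be reproduced with a few lines of Python; this is only a sketch, in which the monoisotopic masses are standard values quoted to make the snippet self-contained and the electron mass is neglected.

```python
# Monoisotopic masses in daltons (12C and 1H); electron mass is neglected.
MASS_C = 12.000000
MASS_H = 1.007825

def mz(n_carbon, n_hydrogen, charge):
    """m/z: ion mass in daltons, divided by the dalton and by |charge number|."""
    ion_mass = n_carbon * MASS_C + n_hydrogen * MASS_H
    return ion_mass / abs(charge)

print(round(mz(7, 7, 2), 1))  # C7H7 2+  ->  45.5
```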
Alternative x-axis notations.
There are several alternatives to the standard "m/z" notation that appear in the literature; however, these are not currently accepted by standards organizations or most journals. "m/e" appears in older historical literature. A label more consistent with the IUPAC green book and ISO 31 conventions is "m/Q" or "m/q", where "m" is the symbol for mass and "Q" or "q" the symbol for charge, with the units u/e or Da/e. This notation is not uncommon in the physics of mass spectrometry but is rarely used as the abscissa of a mass spectrum. It has also been suggested to introduce the thomson (Th) as a unit of "m/z", with 1 Th = 1 u/e. According to this convention, the x-axis of a mass spectrum could be labeled "m/z" (Th), and negative ions would have negative values. This notation is rare and not accepted by IUPAC or any other standards organization.
History of x-axis notation.
In 1897 the mass-to-charge ratio formula_0 of the electron was first measured by J. J. Thomson. In doing so he showed that the electron, which had previously been postulated to explain electricity, was in fact a particle with a mass and a charge, and that its mass-to-charge ratio is much smaller than that of the hydrogen ion H+. In 1913 he measured the mass-to-charge ratio of ions with an instrument he called a parabola spectrograph. Although these data were not represented as a modern mass spectrum, they were similar in meaning. Eventually the notation changed, with "m/e" giving way to the current standard "m/z".
Early in mass spectrometry research the resolution of mass spectrometers did not allow for accurate mass determination. Francis William Aston won the Nobel Prize in Chemistry in 1922 "for his discovery, by means of his mass spectrograph, of isotopes, in a large number of non-radioactive elements, and for his enunciation of the Whole Number Rule", in which he stated that all atoms (including isotopes) follow a whole-number rule. This implied that the masses of atoms were not spread over a continuous scale but could be expressed as integers (in fact, multiply charged ions were rare, so for the most part the ratio was whole as well). There have been several suggestions (e.g. the unit thomson) to change the official mass spectrometry nomenclature formula_1 to be more internally consistent.
Y-axis: signal intensity.
The "y"-axis of a mass spectrum represents signal intensity of the ions. When using counting detectors the intensity is often measured in counts per second (cps). When using analog detection electronics the intensity is typically measured in volts. In FTICR and Orbitraps the frequency domain signal (the "y"-axis) is related to the power (~amplitude squared) of the signal sine wave (often reduced to an rms power); however, the axis is usually not labeled as such for many reasons. In most forms of mass spectrometry, the intensity of ion current measured by the spectrometer does not accurately represent relative abundance, but correlates loosely with it. Therefore, it is common to label the "y"-axis with "arbitrary units".
Y-axis and relative abundance.
Signal intensity may depend on many factors, especially the nature of the molecules being analyzed and how they ionize. The efficiency of ionization varies from molecule to molecule and from ion source to ion source. For example, in electrospray sources in positive-ion mode a quaternary amine will ionize exceptionally well, whereas a large hydrophobic alcohol will most likely not be seen no matter how concentrated it is. In an EI source these molecules will behave very differently. Additionally, there may be factors that affect ion transmission disproportionately between ionization and detection.
On the detection side there are many factors that can also affect signal intensity in a non-proportional way. The size of the ion affects the velocity of impact, and with certain detectors the velocity is proportional to the signal output. In other detection systems, such as FTICR, the number of charges on the ion is more important to the signal intensity. In Fourier transform ion cyclotron resonance and Orbitrap-type mass spectrometers the signal intensity (Y-axis) is related to the amplitude of the free induction decay signal. This is fundamentally a power relationship (amplitude squared) but is often computed as an RMS value. For decaying signals the RMS is not equal to the average amplitude. Additionally, the damping constant (the decay rate of the signal in the FID) is not the same for all ions. Drawing conclusions about relative intensity therefore requires a great deal of knowledge and care.
A common way to get more quantitative information out of a mass spectrum is to create a standard curve against which to compare the sample. This requires knowing in advance what is to be quantified, having a standard available, and designing the experiment specifically for this purpose. A more advanced variation is the use of an internal standard that behaves very similarly to the analyte; this is often an isotopically labeled version of the analyte. There are forms of mass spectrometry, such as accelerator mass spectrometry, that are designed from the ground up to be quantitative.
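A minimal sketch of the standard-curve approach follows, assuming hypothetical calibration data and a linear response of the analyte/internal-standard intensity ratio over the calibrated range; the concentrations, ratios and units are invented solely for illustration.

```python
import numpy as np

# Hypothetical calibration data: known analyte concentrations (ng/mL) and the
# measured peak-intensity ratio analyte / internal standard (an isotopically
# labelled analogue spiked at a fixed level into every sample).
conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
ratio = np.array([0.021, 0.100, 0.205, 0.98, 2.02])

# Fit the standard curve (assumed linear over this range).
slope, intercept = np.polyfit(conc, ratio, 1)

# Quantify an unknown sample from its measured intensity ratio.
unknown_ratio = 0.62
unknown_conc = (unknown_ratio - intercept) / slope
print(f"estimated concentration ~ {unknown_conc:.1f} ng/mL")
```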
Spectral skewing.
Spectral skewing is the change in the relative intensity of mass spectral peaks due to changes in the concentration of the analyte in the ion source as the mass spectrum is scanned. This situation occurs routinely as chromatographic components elute into a continuous ion source. Spectral skewing is not observed in ion trap (quadrupole or magnetic) or time-of-flight (TOF) mass analyzers (although it has also been reported in quadrupole mass spectrometers), because potentially all of the ions formed in one operational cycle (a snapshot in time) of the instrument are available for detection.
|
[
{
"math_id": 0,
"text": "m/e"
},
{
"math_id": 1,
"text": "m/z"
}
] |
https://en.wikipedia.org/wiki?curid=586817
|