6172
|
Cantor set
|
Set of points on a line segment
In mathematics, the Cantor set is a set of points lying on a single line segment that has a number of unintuitive properties. It was discovered in 1874 by Henry John Stephen Smith and mentioned by German mathematician Georg Cantor in 1883.
Through consideration of this set, Cantor and others helped lay the foundations of modern point-set topology. The most common construction is the Cantor ternary set, built by removing the middle third of a line segment and then repeating the process with the remaining shorter segments. Cantor mentioned this ternary construction only in passing, as an example of a perfect set that is nowhere dense (Anmerkungen zu §10, p. 590).
More generally, in topology, "a" Cantor space is a topological space homeomorphic to the Cantor ternary set (equipped with its subspace topology). By a theorem of L. E. J. Brouwer, this is equivalent to being perfect, nonempty, compact, metrizable and zero dimensional.
Construction and formula of the ternary set.
The Cantor ternary set formula_0 is created by iteratively deleting the "open" middle third from a set of line segments. One starts by deleting the open middle third formula_1 from the interval formula_2, leaving two line segments: formula_3. Next, the open middle third of each of these remaining segments is deleted, leaving four line segments: formula_4.
The Cantor ternary set contains all points in the interval formula_5 that are not deleted at any step in this infinite process. The same facts can be described recursively by setting
formula_6
and
formula_7
for formula_8, so that
formula_9 formula_10 formula_11 for any formula_12.
The first six steps of this process are illustrated below.
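The recursion is also easy to carry out numerically. The following minimal Python sketch (the helper name is illustrative) prints the surviving intervals of the first few steps, using exact rational arithmetic:

```python
from fractions import Fraction

def cantor_step(intervals):
    """Replace each closed interval [a, b] by its two outer thirds."""
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out.append((a, a + third))   # left third survives
        out.append((b - third, b))   # right third survives
    return out

# C_0 = [0, 1]; C_n is a union of 2**n closed intervals of length 3**(-n).
intervals = [(Fraction(0), Fraction(1))]
for n in range(1, 4):
    intervals = cantor_step(intervals)
    print(f"C_{n}:", ", ".join(f"[{a}, {b}]" for a, b in intervals))
```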
Using the idea of self-similar transformations, formula_13 formula_14 and formula_15 the explicit closed formulas for the Cantor set are
formula_16
where every middle third is removed as the open interval formula_17 from the closed interval formula_18 surrounding it, or
formula_19
where the middle third formula_20 of the foregoing closed interval formula_21 is removed by intersecting with formula_22
This process of removing middle thirds is a simple example of a finite subdivision rule. The complement of the Cantor ternary set is an example of a fractal string.
In arithmetical terms, the Cantor set consists of all real numbers of the unit interval formula_5 that do not require the digit 1 in order to be expressed as a ternary (base 3) fraction. As the above diagram illustrates, each point in the Cantor set is uniquely located by a path through an infinitely deep binary tree, where the path turns left or right at each level according to which side of a deleted segment the point lies on. Representing each left turn with 0 and each right turn with 2 yields the ternary fraction for a point.
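This digit description yields a practical membership test for rational points: a point of [0,1] belongs to the Cantor set exactly when, as one repeatedly zooms into the surviving thirds, it never falls strictly inside a middle third. Below is a minimal sketch with exact rational arithmetic; for rational inputs the orbit is eventually periodic, so a moderate fixed depth suffices in practice (the function name and depth are illustrative):

```python
from fractions import Fraction

def in_cantor(x, depth=60):
    """Check that rational x in [0, 1] survives `depth` removal steps:
    a Cantor-set point never lies strictly inside a middle third as we
    zoom into the surviving thirds."""
    for _ in range(depth):
        if x <= Fraction(1, 3):
            x *= 3                    # zoom into the left third
        elif x >= Fraction(2, 3):
            x = 3 * x - 2             # zoom into the right third
        else:
            return False              # strictly inside a removed middle third
    return True                       # survived every step checked

print(in_cantor(Fraction(1, 4)))      # True:  1/4 = 0.020202..._3
print(in_cantor(Fraction(1, 2)))      # False: 1/2 = 0.111..._3
```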
Mandelbrot's construction by "curdling".
In The Fractal Geometry of Nature, mathematician Benoit Mandelbrot provides a whimsical thought experiment to assist non-mathematical readers in imagining the construction of formula_0. His narrative begins with imagining a bar, perhaps of lightweight metal, in which the bar's matter "curdles" by iteratively shifting towards its extremities. As the bar's segments become smaller, they become thin, dense slugs that eventually grow too small and faint to see. His accompanying caption reads:

"CURDLING: The construction of the Cantor bar results from the process I call curdling. It begins with a round bar. It is best to think of it as having a very low density. Then matter "curdles" out of this bar's middle third into the end thirds, so that the positions of the latter remain unchanged. Next matter curdles out of the middle third of each end third into its end thirds, and so on ad infinitum until one is left with an infinitely large number of infinitely thin slugs of infinitely high density. These slugs are spaced along the line in the very specific fashion induced by the generating process. In this illustration, curdling (which eventually requires hammering!) stops when both the printer's press and our eye cease to follow; the last line is indistinguishable from the last but one: each of its ultimate parts is seen as a gray slug rather than two parallel black slugs."
Composition.
Since the Cantor set is defined as the set of points not excluded, the proportion (i.e., measure) of the unit interval remaining can be found by computing the total length removed. This total is the geometric progression
formula_23
So the proportion left is 1 − 1 = 0.
This calculation suggests that the Cantor set cannot contain any interval of non-zero length. It may seem surprising that there should be anything left—after all, the sum of the lengths of the removed intervals is equal to the length of the original interval. However, a closer look at the process reveals that there must be something left, since removing the "middle third" of each interval involved removing open sets (sets that do not include their endpoints). So removing the line segment (1/3, 2/3) from the original interval [0, 1] leaves behind the points 1/3 and 2/3. Subsequent steps do not remove these (or other) endpoints, since the intervals removed are always internal to the intervals remaining. So the Cantor set is not empty, and in fact contains an uncountably infinite number of points (as follows from the above description in terms of paths in an infinite binary tree).
It may appear that "only" the endpoints of the construction segments are left, but that is not the case either. The number 1/4, for example, has the unique ternary form 0.020202...₃, with the digits 02 recurring. It is in the bottom third, and the top third of that third, and the bottom third of that top third, and so on. Since it is never in one of the middle segments, it is never removed. Yet it is also not an endpoint of any middle segment, because it is not a multiple of any power of 1/3.
All endpoints of segments are "terminating" ternary fractions and are contained in the set
formula_24
which is a countably infinite set.
As to cardinality, almost all elements of the Cantor set are not endpoints of intervals, nor rational points like 1/4. The whole Cantor set is in fact not countable.
Properties.
Cardinality.
It can be shown that there are as many points left behind in this process as there were to begin with, and that therefore, the Cantor set is uncountable. To see this, we show that there is a function "f" from the Cantor set formula_0 to the closed interval [0,1] that is surjective (i.e. "f" maps from formula_0 onto [0,1]) so that the cardinality of formula_0 is no less than that of [0,1]. Since formula_0 is a subset of [0,1], its cardinality is also no greater, so the two cardinalities must in fact be equal, by the Cantor–Bernstein–Schröder theorem.
To construct this function, consider the points of the [0, 1] interval in terms of base-3 (or ternary) notation. Recall that the proper ternary fractions, more precisely: the elements of formula_25, admit more than one representation in this notation. For example 1/3 can be written as 0.1₃ but also as 0.0222...₃, and 2/3 can be written as 0.2₃ but also as 0.1222...₃.
When we remove the middle third, it contains the numbers with ternary numerals of the form 0.1xxxxx...₃ where xxxxx...₃ is strictly between 00000...₃ and 22222...₃. So the numbers remaining after the first step consist of those of the form 0.0xxxxx...₃ and those of the form 0.2xxxxx...₃.
This can be summarized by saying that those numbers with a ternary representation such that the first digit after the radix point is not 1 are the ones remaining after the first step.
The second step removes numbers of the form 0.01xxxx...₃ and 0.21xxxx...₃, and (with appropriate care for the endpoints) it can be concluded that the remaining numbers are those with a ternary numeral where neither of the first "two" digits is 1.
Continuing in this way, for a number not to be excluded at step "n", it must have a ternary representation whose "n"th digit is not 1. For a number to be in the Cantor set, it must not be excluded at any step; that is, it must admit a numeral representation consisting entirely of 0s and 2s.
It is worth emphasizing that numbers like 1, 1/3 = 0.1₃ and 7/9 = 0.21₃ are in the Cantor set, as they also admit ternary numerals consisting entirely of 0s and 2s: 1 = 0.222...₃, 1/3 = 0.0222...₃ and 7/9 = 0.20222...₃.
All the latter numbers are "endpoints", and these examples are right limit points of formula_0. The same is true for the left limit points of formula_0, e.g. 2/3 = 0.1222...₃ = 0.2₃ and 8/9 = 0.21222...₃ = 0.22₃. All these endpoints are "proper ternary" fractions (elements of formula_26) of the form p/q, where the denominator q is a power of 3 when the fraction is in its irreducible form. The ternary representation of these fractions terminates (i.e., is finite) or — recall from above that each proper ternary fraction has two representations — is infinite and "ends" in either infinitely many recurring 0s or infinitely many recurring 2s. Such a fraction is a left limit point of formula_0 if its ternary representation contains no 1s and "ends" in infinitely many recurring 0s. Similarly, a proper ternary fraction is a right limit point of formula_0 if its ternary expansion again contains no 1s and "ends" in infinitely many recurring 2s.
This set of endpoints is dense in formula_0 (but not dense in [0, 1]) and makes up a countably infinite set. The numbers in formula_0 which are "not" endpoints also have only 0s and 2s in their ternary representation, but they cannot end in an infinite repetition of the digit 0, nor of the digit 2, because then the number would be an endpoint.
The function from formula_0 to [0,1] is defined by taking the ternary numerals that do consist entirely of 0s and 2s, replacing all the 2s by 1s, and interpreting the sequence as a binary representation of a real number. In a formula,
formula_27 where formula_28
For any number "y" in [0,1], its binary representation can be translated into a ternary representation of a number "x" in formula_0 by replacing all the 1s by 2s. With this, "f"("x") = "y", so that "y" is in the range of "f". For instance, if "y" = 3/5 = 0.100110011001...₂ (with the digits 1001 recurring), we write "x" = 0.200220022002...₃ = 7/10. Consequently, "f" is surjective. However, "f" is "not" injective — the pairs of values "x" for which "f"("x") coincides are those at opposing ends of one of the "middle thirds" removed. For instance, take
1/3 = 0.0222...₃ (which is a right limit point of formula_0 and a left limit point of the middle third [1/3, 2/3]) and
2/3 = 0.2₃ (which is a left limit point of formula_0 and a right limit point of the middle third [1/3, 2/3])
so
formula_29
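For illustration, "f" can be evaluated on truncated digit strings, with finite digit lists standing in for infinite ternary expansions; the following minimal sketch (not from the article) shows the two expansions above collapsing to the value 1/2:

```python
from fractions import Fraction

def f_from_digits(ternary_digits):
    """Map a Cantor-set point with ternary digits in {0, 2} to [0, 1]:
    halve each digit and read the result as a binary expansion."""
    return sum(Fraction(d, 2) * Fraction(1, 2) ** k
               for k, d in enumerate(ternary_digits, start=1))

# 1/3 = 0.0222..._3 and 2/3 = 0.2000..._3, truncated to 30 digits:
print(f_from_digits([0] + [2] * 29))   # 1/2 minus a 2**-30 truncation error
print(f_from_digits([2] + [0] * 29))   # exactly 1/2
```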
Thus there are as many points in the Cantor set as there are in the interval [0, 1] (which has the uncountable cardinality formula_30). However, the set of endpoints of the removed intervals is countable, so there must be uncountably many numbers in the Cantor set which are not interval endpoints. As noted above, one example of such a number is 1/4, which can be written as 0.020202...₃ (digits 02 recurring) in ternary notation. In fact, given any formula_31, there exist formula_32 such that formula_33. This was first demonstrated by Steinhaus in 1917, who proved, via a geometric argument, the equivalent assertion that formula_34 for every formula_31. Since this construction provides an injection from formula_35 to formula_36, we have formula_37 as an immediate corollary. Assuming that formula_38 for any infinite set formula_39 (a statement shown to be equivalent to the axiom of choice by Tarski), this provides another demonstration that formula_40.
The Cantor set contains as many points as the interval from which it is taken, yet itself contains no interval of nonzero length. The irrational numbers have the same property, but the Cantor set has the additional property of being closed, so it is not even dense in any interval, unlike the irrational numbers which are dense in every interval.
It has been conjectured that all algebraic irrational numbers are normal. Since members of the Cantor set are not normal, this would imply that all members of the Cantor set are either rational or transcendental.
Self-similarity.
The Cantor set is the prototype of a fractal. It is self-similar, because it is equal to two copies of itself, if each copy is shrunk by a factor of 3 and translated. More precisely, the Cantor set is equal to the union of its images under the two functions formula_41 and formula_14, the left and right self-similarity transformations, which leave the Cantor set invariant up to homeomorphism: formula_42
Repeated iteration of formula_43 and formula_44 can be visualized as an infinite binary tree. That is, at each node of the tree, one may consider the subtree to the left or to the right. Taking the set formula_45 together with function composition forms a monoid, the dyadic monoid.
The automorphisms of the binary tree are its hyperbolic rotations, and are given by the modular group. Thus, the Cantor set is a homogeneous space in the sense that for any two points formula_46 and formula_47 in the Cantor set formula_0, there exists a homeomorphism formula_48 with formula_49. An explicit construction of formula_50 can be described more easily if we see the Cantor set as a product space of countably many copies of the discrete space formula_51. Then the map formula_52 defined by formula_53 is an involutive homeomorphism exchanging formula_46 and formula_47.
Conservation law.
It has been found that some form of conservation law is always responsible for scaling and self-similarity. In the case of the Cantor set it can be seen that the formula_54th moment (where formula_55 is the fractal dimension) of all the surviving intervals at any stage of the construction process is equal to a constant, which is one in the case of the Cantor set.
We know that there are formula_56 intervals of size formula_57 present in the system at the formula_58th step of its construction. Then if we label the surviving intervals as formula_59 then the formula_54th moment is formula_60 since formula_61.
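Numerically the conservation is immediate, since the dimension satisfies 3^(d_f) = 2. A quick check, as a minimal sketch:

```python
from math import log

d_f = log(2) / log(3)   # fractal dimension of the Cantor set, ~0.6309

# At step n there are 2**n surviving intervals of length 3**(-n);
# their d_f-th moment 2**n * (3**(-n))**d_f should equal 1 at every step.
for n in range(8):
    print(n, (2 ** n) * (3 ** -n) ** d_f)   # 1.0 up to rounding
```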
The Hausdorff dimension of the Cantor set is equal to ln(2)/ln(3) ≈ 0.631.
Topological and analytical properties.
Although "the" Cantor set typically refers to the original, middle-thirds Cantor set described above, topologists often talk about "a" Cantor set, which means any topological space that is homeomorphic (topologically equivalent) to it.
As the above summation argument shows, the Cantor set is uncountable but has Lebesgue measure 0. Since the Cantor set is the complement of a union of open sets, it itself is a closed subset of the reals, and therefore a complete metric space. Since it is also totally bounded, the Heine–Borel theorem says that it must be compact.
For any point in the Cantor set and any arbitrarily small neighborhood of the point, there is some other number with a ternary numeral of only 0s and 2s, as well as numbers whose ternary numerals contain 1s. Hence, every point in the Cantor set is an accumulation point (also called a cluster point or limit point) of the Cantor set, but none is an interior point. A closed set in which every point is an accumulation point is also called a perfect set in topology, while a closed subset of the interval with no interior points is nowhere dense in the interval.
Every point of the Cantor set is also an accumulation point of the complement of the Cantor set.
For any two points in the Cantor set, there will be some ternary digit where they differ — one will have 0 and the other 2. By splitting the Cantor set into "halves" depending on the value of this digit, one obtains a partition of the Cantor set into two closed sets that separate the original two points. In the relative topology on the Cantor set, the points have been separated by a clopen set. Consequently, the Cantor set is totally disconnected. As a compact totally disconnected Hausdorff space, the Cantor set is an example of a Stone space.
As a topological space, the Cantor set is naturally homeomorphic to the product of countably many copies of the space formula_62, where each copy carries the discrete topology. This is the space of all sequences in two digits
formula_63
which can also be identified with the set of 2-adic integers. A basis for the open sets of the product topology is given by the cylinder sets; the homeomorphism maps these to the subspace topology that the Cantor set inherits from the natural topology on the real line. This characterization of the Cantor space as a product of compact spaces gives a second proof that Cantor space is compact, via Tychonoff's theorem.
From the above characterization, the Cantor set is homeomorphic to the "p"-adic integers, and, if one point is removed from it, to the "p"-adic numbers.
The Cantor set is a subset of the reals, which are a metric space with respect to the ordinary distance metric; therefore the Cantor set itself is a metric space, by using that same metric. Alternatively, one can use the "p"-adic metric on formula_64: given two sequences formula_65, the distance between them is formula_66, where formula_67 is the smallest index such that formula_68; if there is no such index, then the two sequences are the same, and one defines the distance to be zero. These two metrics generate the same topology on the Cantor set.
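The sequence metric is easy to compute on finite prefixes, which is all a computation can inspect; a minimal sketch:

```python
def seq_distance(x, y):
    """Distance 2**(-k) where k is the first index (1-based) at which the
    0/1 sequences x and y differ; 0 if the inspected prefixes agree."""
    for k, (a, b) in enumerate(zip(x, y), start=1):
        if a != b:
            return 2.0 ** (-k)
    return 0.0

print(seq_distance([0, 1, 1, 0], [0, 1, 0, 0]))   # differ at k = 3 -> 0.125
```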
We have seen above that the Cantor set is a totally disconnected perfect compact metric space. Indeed, in a sense it is the only one: every nonempty totally disconnected perfect compact metric space is homeomorphic to the Cantor set. See Cantor space for more on spaces homeomorphic to the Cantor set.
The Cantor set is sometimes regarded as "universal" in the category of compact metric spaces, since any compact metric space is a continuous image of the Cantor set; however this construction is not unique and so the Cantor set is not universal in the precise categorical sense. The "universal" property has important applications in functional analysis, where it is sometimes known as the "representation theorem for compact metric spaces".
For any integer "q" ≥ 2, the topology on the group G = Z_q^ω (the countable direct sum) is discrete. Although the Pontrjagin dual Γ is also Z_q^ω, the topology of Γ is compact. One can see that Γ is totally disconnected and perfect, and thus it is homeomorphic to the Cantor set. It is easiest to write out the homeomorphism explicitly in the case "q" = 2 (see Rudin 1962, p. 40).
Measure and probability.
The Cantor set can be seen as the compact group of binary sequences, and as such, it is endowed with a natural Haar measure. When normalized so that the measure of the set is 1, it is a model of an infinite sequence of coin tosses. Furthermore, one can show that the usual Lebesgue measure on the interval is an image of the Haar measure on the Cantor set, while the natural injection into the ternary set is a canonical example of a singular measure. It can also be shown that the Haar measure is an image of any probability, making the Cantor set a universal probability space in some ways.
In Lebesgue measure theory, the Cantor set is an example of a set which is uncountable and has zero measure. In contrast, the set has a Hausdorff measure of 1 in its dimension of log 2 / log 3.
Cantor numbers.
If we define a Cantor number as a member of the Cantor set, then every real number in [0, 2] is the sum of two Cantor numbers, and between any two Cantor numbers there is a number that is not a Cantor number.
Descriptive set theory.
The Cantor set is a meagre set (or a set of first category) as a subset of [0,1] (although not as a subset of itself, since it is a Baire space). The Cantor set thus demonstrates that notions of "size" in terms of cardinality, measure, and (Baire) category need not coincide. Like the set formula_69, the Cantor set formula_0 is "small" in the sense that it is a null set (a set of measure zero) and it is a meagre subset of [0,1]. However, unlike formula_69, which is countable and has a "small" cardinality, formula_70, the cardinality of formula_0 is the same as that of [0,1], the continuum formula_71, and is "large" in the sense of cardinality. In fact, it is also possible to construct a subset of [0,1] that is meagre but of positive measure and a subset that is non-meagre but of measure zero: By taking the countable union of "fat" Cantor sets formula_72 of measure formula_73 (see Smith–Volterra–Cantor set below for the construction), we obtain a set formula_74 which has a positive measure (equal to 1) but is meagre in [0,1], since each formula_72 is nowhere dense. Then consider the set formula_75. Since formula_76, formula_77 cannot be meagre, but since formula_78, formula_77 must have measure zero.
Variants.
Smith–Volterra–Cantor set.
Instead of repeatedly removing the middle third of every piece as in the Cantor set, we could also keep removing any other fixed percentage (other than 0% and 100%) from the middle. In the case where the middle 8/10 of the interval is removed, we get a remarkably accessible case: the set consists of all numbers in [0,1] that can be written as a decimal consisting entirely of 0s and 9s. If a fixed percentage is removed at each stage, then the limiting set will have measure zero, since the length of the remainder formula_79 as formula_80 for any formula_81 such that formula_82.
On the other hand, "fat Cantor sets" of positive measure can be generated by removal of smaller fractions of the middle of the segment in each iteration. Thus, one can construct sets homeomorphic to the Cantor set that have positive Lebesgue measure while still being nowhere dense. If an interval of length formula_83 (formula_84) is removed from the middle of each segment at the "n"th iteration, then the total length removed is formula_85, and the limiting set will have a Lebesgue measure of formula_86. Thus, in a sense, the middle-thirds Cantor set is a limiting case with formula_87. If formula_88, then the remainder will have positive measure with formula_89. The case formula_90 is known as the Smith–Volterra–Cantor set, which has a Lebesgue measure of formula_91.
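As a numerical check of these formulas, the partial sums of the removed lengths (a minimal sketch; the function name is illustrative) leave measure about 0 for r = 1/3 and about 1/2 for r = 1/4:

```python
def fat_cantor_measure(r, steps=40):
    """Length remaining after removing an open middle interval of length
    r**n from each of the 2**(n-1) segments at step n = 1..steps."""
    remaining = 1.0
    for n in range(1, steps + 1):
        remaining -= 2 ** (n - 1) * r ** n
    return remaining

print(fat_cantor_measure(1 / 3))   # ~0.0: the middle-thirds Cantor set
print(fat_cantor_measure(1 / 4))   # ~0.5: the Smith–Volterra–Cantor set
```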
Stochastic Cantor set.
One can modify the construction of the Cantor set by dividing randomly instead of equally. Moreover, to incorporate time, we can divide only one of the available intervals at each step instead of dividing all of them. In the case of the stochastic triadic Cantor set, the resulting process can be described by the following rate equation
formula_92
and for the stochastic dyadic Cantor set
formula_93
where formula_94 is the number of intervals of size between formula_46 and formula_95. In the case of the triadic Cantor set the fractal dimension is formula_96, which is less than that of its deterministic counterpart, formula_97. In the case of the stochastic dyadic Cantor set the fractal dimension is formula_98, which is again less than that of its deterministic counterpart, formula_99. In the stochastic dyadic case the solution for formula_100 exhibits dynamic scaling, as its solution in the long-time limit is formula_101, where the fractal dimension of the stochastic dyadic Cantor set is formula_102. In either case, as for the triadic Cantor set, the formula_54th moment (formula_103) of the stochastic triadic and dyadic Cantor sets is a conserved quantity.
Cantor dust.
Cantor dust is a multi-dimensional version of the Cantor set. It can be formed by taking a finite Cartesian product of the Cantor set with itself, making it a Cantor space. Like the Cantor set, Cantor dust has zero measure.
A different 2D analogue of the Cantor set is the Sierpinski carpet, where a square is divided up into nine smaller squares, and the middle one removed. The remaining squares are then further divided into nine each and the middle removed, and so on ad infinitum. One 3D analogue of this is the Menger sponge.
Historical remarks.
Cantor introduced what we call today the Cantor ternary set formula_104 as an example "of a perfect point-set, which is not everywhere-dense in any interval, however small." Cantor described formula_104 in terms of ternary expansions, as "the set of all real numbers given by the formula: formula_105 where the coefficients formula_106 arbitrarily take the two values 0 and 2, and the series can consist of a finite number or an infinite number of elements."
A topological space formula_107 is perfect if all its points are limit points or, equivalently, if it coincides with its derived set formula_108. Subsets of the real line, like formula_104, can be seen as topological spaces under the induced subspace topology.
Cantor was led to the study of derived sets by his results on uniqueness of trigonometric series. The latter did much to set him on the course for developing an abstract, general theory of infinite sets.
Benoit Mandelbrot wrote much on Cantor dusts and their relation to natural fractals and statistical physics. He further reflected on the puzzling or even upsetting nature of such structures to those in the mathematics and physics community. In The Fractal Geometry of Nature, he described how "When I started on this topic in 1962, everyone was agreeing that Cantor dusts are at least as monstrous as the Koch and Peano curves," and added that "every self-respecting physicist was automatically turned off by a mention of Cantor, ready to run a mile from anyone claiming formula_104 to be interesting in science."
|
[
{
"math_id": 0,
"text": "\\mathcal{C}"
},
{
"math_id": 1,
"text": "\\left(\\frac{1}{3}, \\frac{2}{3}\\right)"
},
{
"math_id": 2,
"text": "\\textstyle\\left[0, 1\\right]"
},
{
"math_id": 3,
"text": "\\left[0, \\frac{1}{3}\\right]\\cup\\left[\\frac{2}{3}, 1\\right]"
},
{
"math_id": 4,
"text": "\\left[0, \\frac{1}{9}\\right]\\cup\\left[\\frac{2}{9}, \\frac{1}{3}\\right]\\cup\\left[\\frac{2}{3}, \\frac{7}{9}\\right]\\cup\\left[\\frac{8}{9}, 1\\right]"
},
{
"math_id": 5,
"text": "[0,1]"
},
{
"math_id": 6,
"text": "C_0 := [0,1]"
},
{
"math_id": 7,
"text": "C_n := \\frac{C_{n-1}} 3 \\cup \\left(\\frac 2 {3} +\\frac{C_{n-1}} 3 \\right) = \\frac13 \\bigl(C_{n-1} \\cup \\left(2 + C_{n-1} \\right)\\bigr)"
},
{
"math_id": 8,
"text": "n \\ge 1"
},
{
"math_id": 9,
"text": "\\mathcal{C} :="
},
{
"math_id": 10,
"text": "{\\color{Blue}\\lim_{n\\to\\infty}C_n}"
},
{
"math_id": 11,
"text": "= \\bigcap_{n=0}^\\infty C_n = \\bigcap_{n=m}^\\infty C_n "
},
{
"math_id": 12,
"text": "m \\ge 0"
},
{
"math_id": 13,
"text": "T_L(x)=x/3,"
},
{
"math_id": 14,
"text": "T_R(x)=(2+x)/3"
},
{
"math_id": 15,
"text": "C_n =T_L(C_{n-1})\\cup T_R(C_{n-1}),"
},
{
"math_id": 16,
"text": "\\mathcal{C}=[0,1] \\,\\setminus\\, \\bigcup_{n=0}^\\infty \\bigcup_{k=0}^{3^n-1} \\left(\\frac{3k+1}{3^{n+1}},\\frac{3k+2}{3^{n+1}} \\right)\\!,"
},
{
"math_id": 17,
"text": "\\left(\\frac{3k+1}{3^{n+1}},\\frac{3k+2}{3^{n+1}}\\right)"
},
{
"math_id": 18,
"text": "\\left[\\frac{3k+0}{3^{n+1}},\\frac{3k+3}{3^{n+1}}\\right] = \\left[\\frac{k+0}{3^n},\\frac{k+1}{3^n}\\right]"
},
{
"math_id": 19,
"text": "\\mathcal{C}=\\bigcap_{n=1}^\\infty \\bigcup_{k=0}^{3^{n-1}-1} \\left( \\left[\\frac{3k+0}{3^n},\\frac{3k+1}{3^n}\\right] \\cup \\left[\\frac{3k+2}{3^n},\\frac{3k+3}{3^n}\\right] \\right)\\!,"
},
{
"math_id": 20,
"text": "\\left(\\frac{3k+1}{3^n},\\frac{3k+2}{3^n}\\right) "
},
{
"math_id": 21,
"text": "\\left[\\frac{k+0}{3^{n-1}},\\frac{k+1}{3^{n-1}}\\right] = \\left[\\frac{3k+0}{3^n},\\frac{3k+3}{3^n}\\right]"
},
{
"math_id": 22,
"text": "\\left[\\frac{3k+0}{3^n},\\frac{3k+1}{3^n}\\right] \\cup \\left[\\frac{3k+2}{3^n},\\frac{3k+3}{3^n}\\right]\\!."
},
{
"math_id": 23,
"text": "\\sum_{n=0}^\\infty \\frac{2^n}{3^{n+1}} = \\frac{1}{3} + \\frac{2}{9} + \\frac{4}{27} + \\frac{8}{81} + \\cdots = \\frac{1}{3}\\left(\\frac{1}{1-\\frac{2}{3}}\\right) = 1."
},
{
"math_id": 24,
"text": " \\left\\{x \\in [0,1] \\mid \\exists i \\in \\N_0: x \\, 3^i \\in \\Z \\right\\} \\qquad \\Bigl(\\subset \\N_0 \\, 3^{-\\N_0} \\Bigr) "
},
{
"math_id": 25,
"text": "\\bigl(\\Z \\setminus \\{0\\}\\bigr) \\cdot 3^{-\\N_0}"
},
{
"math_id": 26,
"text": "\\Z \\cdot 3^{-\\N_0}"
},
{
"math_id": 27,
"text": "f \\bigg( \\sum_{k\\in \\N} a_k 3^{-k} \\bigg) = \\sum_{k\\in \\N} \\frac{a_k}{2} 2^{-k}"
},
{
"math_id": 28,
"text": "\\forall k\\in \\N : a_k \\in \\{0,2\\} ."
},
{
"math_id": 29,
"text": "\\begin{array}{lcl}\nf\\bigl({}^1\\!\\!/\\!_3 \\bigr) = f(0.0\\overline{2}_3) = 0.0\\overline{1}_2 = \\!\\! & \\!\\! 0.1_2 \\!\\! & \\!\\! = 0.1\\overline{0}_2 = f(0.2\\overline{0}_3) = f\\bigl({}^2\\!\\!/\\!_3 \\bigr) . \\\\\n& \\parallel \\\\\n& {}^1\\!\\!/\\!_2\n\\end{array}"
},
{
"math_id": 30,
"text": "\\mathfrak{c} = 2^{\\aleph_0}"
},
{
"math_id": 31,
"text": "a\\in[-1,1]"
},
{
"math_id": 32,
"text": "x,y\\in\\mathcal{C}"
},
{
"math_id": 33,
"text": "a = y-x"
},
{
"math_id": 34,
"text": "\\{(x,y)\\in\\mathbb{R}^2 \\mid y=x+a\\} \\; \\cap \\; (\\mathcal{C}\\times\\mathcal{C}) \\neq\\emptyset"
},
{
"math_id": 35,
"text": "[-1,1]"
},
{
"math_id": 36,
"text": "\\mathcal{C}\\times\\mathcal{C}"
},
{
"math_id": 37,
"text": "|\\mathcal{C}\\times\\mathcal{C}|\\geq|[-1,1]|=\\mathfrak{c}"
},
{
"math_id": 38,
"text": "|A\\times A|=|A|"
},
{
"math_id": 39,
"text": "A"
},
{
"math_id": 40,
"text": "|\\mathcal{C}|=\\mathfrak{c}"
},
{
"math_id": 41,
"text": "T_L(x)=x/3"
},
{
"math_id": 42,
"text": "T_L(\\mathcal{C})\\cong T_R(\\mathcal{C})\\cong \\mathcal{C}=T_L(\\mathcal{C})\\cup T_R(\\mathcal{C})."
},
{
"math_id": 43,
"text": "T_L"
},
{
"math_id": 44,
"text": "T_R"
},
{
"math_id": 45,
"text": "\\{T_L, T_R\\}"
},
{
"math_id": 46,
"text": "x"
},
{
"math_id": 47,
"text": "y"
},
{
"math_id": 48,
"text": "h:\\mathcal{C}\\to \\mathcal{C}"
},
{
"math_id": 49,
"text": "h(x)=y"
},
{
"math_id": 50,
"text": "h"
},
{
"math_id": 51,
"text": "\\{0,1\\}"
},
{
"math_id": 52,
"text": "h:\\{0,1\\}^\\N\\to\\{0,1\\}^\\N "
},
{
"math_id": 53,
"text": "h_n(u):=u_n+x_n+y_n \\mod 2"
},
{
"math_id": 54,
"text": "d_f"
},
{
"math_id": 55,
"text": "d_f=\\ln(2)/\\ln(3)"
},
{
"math_id": 56,
"text": "N=2^n"
},
{
"math_id": 57,
"text": "1/3^n"
},
{
"math_id": 58,
"text": "n"
},
{
"math_id": 59,
"text": "x_1, x_2, \\ldots, x_{2^n}"
},
{
"math_id": 60,
"text": "x_1^{d_f}+x_2^{d_f}+\\cdots+x_{2^n}^{d_f}=1"
},
{
"math_id": 61,
"text": "x_1=x_2= \\cdots =x_{2^n}=1/3^n"
},
{
"math_id": 62,
"text": "\\{0, 1\\}"
},
{
"math_id": 63,
"text": "2^\\mathbb{N} = \\{(x_n) \\mid x_n \\in \\{0,1\\} \\text{ for } n \\in \\mathbb{N}\\},"
},
{
"math_id": 64,
"text": "2^\\mathbb{N}"
},
{
"math_id": 65,
"text": "(x_n),(y_n)\\in 2^\\mathbb{N}"
},
{
"math_id": 66,
"text": "d((x_n),(y_n)) = 2^{-k}"
},
{
"math_id": 67,
"text": "k"
},
{
"math_id": 68,
"text": "x_k \\ne y_k"
},
{
"math_id": 69,
"text": "\\mathbb{Q}\\cap[0,1]"
},
{
"math_id": 70,
"text": "\\aleph_0"
},
{
"math_id": 71,
"text": "\\mathfrak{c}"
},
{
"math_id": 72,
"text": "\\mathcal{C}^{(n)}"
},
{
"math_id": 73,
"text": "\\lambda = (n-1)/n"
},
{
"math_id": 74,
"text": "\\mathcal{A} := \\bigcup_{n=1}^{\\infty}\\mathcal{C}^{(n)}"
},
{
"math_id": 75,
"text": "\\mathcal{A}^{\\mathrm{c}} = [0,1] \\setminus\\bigcup_{n=1}^\\infty \\mathcal{C}^{(n)}"
},
{
"math_id": 76,
"text": "\\mathcal{A}\\cup\\mathcal{A}^{\\mathrm{c}} = [0,1]"
},
{
"math_id": 77,
"text": "\\mathcal{A}^{\\mathrm{c}}"
},
{
"math_id": 78,
"text": "\\mu(\\mathcal{A})=1"
},
{
"math_id": 79,
"text": "(1-f)^n\\to 0"
},
{
"math_id": 80,
"text": "n\\to\\infty"
},
{
"math_id": 81,
"text": "f"
},
{
"math_id": 82,
"text": "0<f\\leq 1"
},
{
"math_id": 83,
"text": "r^n"
},
{
"math_id": 84,
"text": "r\\leq 1/3"
},
{
"math_id": 85,
"text": "\\sum_{n=1}^\\infty 2^{n-1}r^n=r/(1-2r)"
},
{
"math_id": 86,
"text": "\\lambda=(1-3r)/(1-2r)"
},
{
"math_id": 87,
"text": "r=1/3"
},
{
"math_id": 88,
"text": "0<r<1/3"
},
{
"math_id": 89,
"text": "0<\\lambda<1"
},
{
"math_id": 90,
"text": "r=1/4"
},
{
"math_id": 91,
"text": "1/2"
},
{
"math_id": 92,
"text": "\\frac{\\partial c(x,t)}{\\partial t} =-\\frac{x^2}{2} c(x,t) + 2\\int_x^\\infty (y-x)c(y,t) \\, dy,"
},
{
"math_id": 93,
"text": "{{\\partial c(x,t)}\\over{\\partial t}}=-xc(x,t)+(1+p)\\int_x^\\infty c(y,t) \\, dy,"
},
{
"math_id": 94,
"text": "c(x,t)dx"
},
{
"math_id": 95,
"text": "x+dx"
},
{
"math_id": 96,
"text": "0.5616"
},
{
"math_id": 97,
"text": "0.6309"
},
{
"math_id": 98,
"text": "p"
},
{
"math_id": 99,
"text": "\\ln (1+p)/\\ln 2"
},
{
"math_id": 100,
"text": "c(x,t)"
},
{
"math_id": 101,
"text": "t^{-(1+d_f)}e^{-xt}"
},
{
"math_id": 102,
"text": "d_f=p"
},
{
"math_id": 103,
"text": "\\int x^{d_f} c(x,t) \\, dx = \\text{constant}"
},
{
"math_id": 104,
"text": "\\mathcal C"
},
{
"math_id": 105,
"text": "z=c_1/3 +c_2/3^2 + \\cdots + c_\\nu/3^\\nu +\\cdots "
},
{
"math_id": 106,
"text": "c_\\nu"
},
{
"math_id": 107,
"text": "P"
},
{
"math_id": 108,
"text": "P'"
}
] |
https://en.wikipedia.org/wiki?curid=6172
|
6172616
|
Control-Lyapunov function
|
In control theory, a control-Lyapunov function (CLF) is an extension of the idea of Lyapunov function formula_0 to systems with control inputs. The ordinary Lyapunov function is used to test whether a dynamical system is "(Lyapunov) stable" or (more restrictively) "asymptotically stable". Lyapunov stability means that if the system starts in a state formula_1 in some domain "D", then the state will remain in "D" for all time. For "asymptotic stability", the state is also required to converge to formula_2. A control-Lyapunov function is used to test whether a system is "asymptotically stabilizable", that is, whether for any state "x" there exists a control formula_3 such that the system can be brought to the zero state asymptotically by applying the control "u".
The theory and application of control-Lyapunov functions were developed by Zvi Artstein and Eduardo D. Sontag in the 1980s and 1990s.
Definition.
Consider an autonomous dynamical system with inputs
\dot{x} = f(x, u) \qquad (1)
where formula_4 is the state vector and formula_5 is the control vector. Suppose our goal is to drive the system to an equilibrium formula_6 from every initial state in some domain formula_7. Without loss of generality, suppose the equilibrium is at formula_8 (for an equilibrium formula_9, it can be translated to the origin by a change of variables).
Definition. A control-Lyapunov function (CLF) is a function formula_10 that is continuously differentiable, positive-definite (that is, formula_0 is positive for all formula_11 except at formula_12 where it is zero), and such that for all formula_13 there exists formula_14 such that
formula_15
where formula_16 denotes the inner product of formula_17.
The last condition is the key condition; in words it says that for each state "x" we can find a control "u" that will reduce the "energy" "V". Intuitively, if in each state we can always find a way to reduce the energy, we should eventually be able to bring the energy asymptotically to zero, that is to bring the system to a stop. This is made rigorous by Artstein's theorem.
Some results apply only to control-affine systems—i.e., control systems of the following form:
\dot{x} = f(x) + \sum_{i=1}^{m} g_i(x)\, u_i \qquad (2)
where formula_18 and formula_19 for formula_20.
Theorems.
Eduardo Sontag showed that for a given control system, there exists a continuous CLF if and only if the origin is asymptotically stabilizable. It was later shown by Francis H. Clarke, Yuri Ledyaev, Eduardo Sontag, and A.I. Subbotin that every asymptotically controllable system can be stabilized by a (generally discontinuous) feedback.
Artstein proved that the dynamical system (2) has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback "u"("x").
Constructing the Stabilizing Input.
It is often difficult to find a control-Lyapunov function for a given system, but if one is found, then the feedback stabilization problem simplifies considerably. For the control affine system (2), "Sontag's formula" (or "Sontag's universal formula") gives the feedback law formula_21 directly in terms of the derivatives of the CLF. In the special case of a single input system formula_22, Sontag's formula is written as
formula_23
where formula_24 and formula_25 are the Lie derivatives of formula_26 along formula_27 and formula_28, respectively.
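As an illustration, Sontag's formula is straightforward to implement for a single-input system. The sketch below applies it to a hypothetical scalar plant x' = x + u with CLF V(x) = x²/2, so that L_f V = x² and L_g V = x; the plant and CLF are illustrative assumptions, not taken from the sources above.

```python
import math

def sontag(LfV, LgV):
    """Sontag's universal formula for a single-input control-affine system."""
    if LgV == 0:
        return 0.0
    return -(LfV + math.sqrt(LfV ** 2 + LgV ** 4)) / LgV

# Hypothetical plant x' = x + u with V(x) = x**2 / 2:
# L_f V = x**2 and L_g V = x, giving the closed loop x' = -sqrt(2) * x.
for x in [-1.0, 0.5, 2.0]:
    u = sontag(x ** 2, x)
    print(x, u, x + u)   # x + u = -sqrt(2) * x, so V decreases along solutions
```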
For the general nonlinear system (1), the input formula_29 can be found by solving a static non-linear programming problem
formula_30
for each state "x".
Example.
Here is a characteristic example of applying a Lyapunov candidate function to a control problem.
Consider the non-linear system of a mass-spring-damper with spring hardening and position-dependent mass, described by
formula_31
Now given the desired state, formula_32, and actual state, formula_33, with error, formula_34, define a function formula_35 as
formula_36
A control-Lyapunov candidate is then
formula_37
which is positive for all formula_38.
Now taking the time derivative of formula_26
formula_39
formula_40
The goal is to get the time derivative to be
formula_41
which is globally exponentially stable if formula_26 is globally positive definite (which it is).
Hence we want the rightmost bracket of formula_42,
formula_43
to fulfill the requirement
formula_44
which upon substitution of the dynamics, formula_45, gives
formula_46
Solving for formula_29 yields the control law
formula_47
with formula_48 and formula_49, both greater than zero, as tunable parameters.
This control law will guarantee global exponential stability, since substituting it into the time derivative yields, as expected,
formula_41
which is a linear first-order differential equation with solution
formula_50
Hence the error and error rate, remembering that formula_51, decay exponentially to zero.
If you wish to tune a particular response from this, it is necessary to substitute back into the solution we derived for formula_26 and solve for formula_52. This is left as an exercise for the reader, but the first few steps of the solution are:
formula_53
formula_54
formula_55
formula_56
which can then be solved using any linear differential equation methods.
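For readers who want to check the example numerically, the following sketch simulates the closed loop with forward-Euler integration; the physical parameters, gains, and step size are arbitrary illustrative choices, not values from any source.

```python
# Illustrative parameters and gains (assumptions, not from the article).
m, b, K0, K1 = 1.0, 0.5, 1.0, 0.2
alpha, kappa = 2.0, 4.0
q_d = 1.0                         # constant target, so q_d' = q_d'' = 0

q, qdot, dt = 0.0, 0.0, 1e-4
for _ in range(int(4.0 / dt)):    # simulate 4 seconds
    e, edot = q_d - q, -qdot
    r = edot + alpha * e
    # Control law derived above, with the q_d'' term equal to zero:
    u = m * (1 + q ** 2) * (alpha * edot + 0.5 * kappa * r) \
        + K0 * q + K1 * q ** 3 + b * qdot
    qddot = (u - b * qdot - K0 * q - K1 * q ** 3) / (m * (1 + q ** 2))
    q, qdot = q + dt * qdot, qdot + dt * qddot

print(q)   # ~1.0: r = e' + alpha*e decays like exp(-kappa*t/2), so q -> q_d
```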
|
[
{
"math_id": 0,
"text": "V(x)"
},
{
"math_id": 1,
"text": "x \\ne 0"
},
{
"math_id": 2,
"text": "x = 0"
},
{
"math_id": 3,
"text": "u(x,t)"
},
{
"math_id": 4,
"text": "x\\in\\mathbb{R}^n"
},
{
"math_id": 5,
"text": "u\\in\\mathbb{R}^m"
},
{
"math_id": 6,
"text": "x_* \\in \\mathbb{R}^n"
},
{
"math_id": 7,
"text": "D\\subset\\mathbb{R}^n"
},
{
"math_id": 8,
"text": "x_*=0"
},
{
"math_id": 9,
"text": "x_*\\neq 0"
},
{
"math_id": 10,
"text": "V : D \\to \\mathbb{R}"
},
{
"math_id": 11,
"text": "x\\in D"
},
{
"math_id": 12,
"text": "x=0"
},
{
"math_id": 13,
"text": "x \\in \\mathbb{R}^n (x \\neq 0),"
},
{
"math_id": 14,
"text": "u\\in \\mathbb{R}^m"
},
{
"math_id": 15,
"text": "\n\\dot{V}(x, u) := \\langle \\nabla V(x), f(x,u)\\rangle < 0,\n"
},
{
"math_id": 16,
"text": "\\langle u, v\\rangle"
},
{
"math_id": 17,
"text": "u, v \\in\\mathbb{R}^n"
},
{
"math_id": 18,
"text": "f : \\mathbb{R}^n \\to \\mathbb{R}^n"
},
{
"math_id": 19,
"text": "g_i : \\mathbb{R}^n \\to \\mathbb{R}^{n}"
},
{
"math_id": 20,
"text": "i = 1, \\dots, m"
},
{
"math_id": 21,
"text": "k : \\mathbb{R}^n \\to \\mathbb{R}^m"
},
{
"math_id": 22,
"text": "(m=1)"
},
{
"math_id": 23,
"text": "k(x) = \\begin{cases} \\displaystyle -\\frac{L_{f} V(x)+\\sqrt{\\left[L_{f} V(x)\\right]^{2}+\\left[L_{g} V(x)\\right]^{4}}}{L_{g} V(x)} & \\text { if } L_{g} V(x) \\neq 0 \\\\ \n0 & \\text { if } L_{g} V(x)=0 \\end{cases} "
},
{
"math_id": 24,
"text": "L_f V(x) := \\langle \\nabla V(x), f(x)\\rangle"
},
{
"math_id": 25,
"text": "L_g V(x) := \\langle \\nabla V(x), g(x)\\rangle"
},
{
"math_id": 26,
"text": "V"
},
{
"math_id": 27,
"text": "f"
},
{
"math_id": 28,
"text": "g"
},
{
"math_id": 29,
"text": "u"
},
{
"math_id": 30,
"text": "\nu^*(x) = \\underset{u}{\\operatorname{arg\\,min}} \\nabla V(x) \\cdot f(x,u)\n"
},
{
"math_id": 31,
"text": "\nm(1+q^2)\\ddot{q}+b\\dot{q}+K_0q+K_1q^3=u\n"
},
{
"math_id": 32,
"text": "q_d"
},
{
"math_id": 33,
"text": "q"
},
{
"math_id": 34,
"text": "e = q_d - q"
},
{
"math_id": 35,
"text": "r"
},
{
"math_id": 36,
"text": "\nr=\\dot{e}+\\alpha e\n"
},
{
"math_id": 37,
"text": "\nr \\mapsto V(r) :=\\frac{1}{2}r^2\n"
},
{
"math_id": 38,
"text": " r \\ne 0"
},
{
"math_id": 39,
"text": "\n\\dot{V}=r\\dot{r}\n"
},
{
"math_id": 40,
"text": "\n\\dot{V}=(\\dot{e}+\\alpha e)(\\ddot{e}+\\alpha \\dot{e})\n"
},
{
"math_id": 41,
"text": "\n\\dot{V}=-\\kappa V\n"
},
{
"math_id": 42,
"text": "\\dot{V}"
},
{
"math_id": 43,
"text": "\n(\\ddot{e}+\\alpha \\dot{e})=(\\ddot{q}_d-\\ddot{q}+\\alpha \\dot{e})\n"
},
{
"math_id": 44,
"text": "\n(\\ddot{q}_d-\\ddot{q}+\\alpha \\dot{e}) = -\\frac{\\kappa}{2}(\\dot{e}+\\alpha e)\n"
},
{
"math_id": 45,
"text": "\\ddot{q}"
},
{
"math_id": 46,
"text": "\n\\left(\\ddot{q}_d-\\frac{u-K_0q-K_1q^3-b\\dot{q}}{m(1+q^2)}+\\alpha \\dot{e}\\right) = -\\frac{\\kappa}{2}(\\dot{e}+\\alpha e)\n"
},
{
"math_id": 47,
"text": "\nu= m(1+q^2)\\left(\\ddot{q}_d + \\alpha \\dot{e}+\\frac{\\kappa}{2}r\\right)+K_0q+K_1q^3+b\\dot{q}\n"
},
{
"math_id": 48,
"text": "\\kappa"
},
{
"math_id": 49,
"text": "\\alpha"
},
{
"math_id": 50,
"text": "\nV=V(0)\\exp(-\\kappa t)\n"
},
{
"math_id": 51,
"text": "V=\\frac{1}{2}(\\dot{e}+\\alpha e)^2"
},
{
"math_id": 52,
"text": "e"
},
{
"math_id": 53,
"text": "\nr\\dot{r}=-\\frac{\\kappa}{2}r^2\n"
},
{
"math_id": 54,
"text": "\n\\dot{r}=-\\frac{\\kappa}{2}r\n"
},
{
"math_id": 55,
"text": "\nr=r(0)\\exp\\left(-\\frac{\\kappa}{2} t\\right)\n"
},
{
"math_id": 56,
"text": "\n\\dot{e}+\\alpha e= (\\dot{e}(0)+\\alpha e(0))\\exp\\left(-\\frac{\\kappa}{2} t\\right) \n"
}
] |
https://en.wikipedia.org/wiki?curid=6172616
|
61728386
|
Krull's separation lemma
|
In abstract algebra, Krull's separation lemma is a lemma in ring theory. It was proved by Wolfgang Krull in 1928.
Statement of the lemma.
Let formula_0 be an ideal and let formula_1 be a multiplicative system ("i.e." formula_1 is closed under multiplication) in a ring formula_2, and suppose
formula_3. Then there exists a prime ideal formula_4 satisfying formula_5 and formula_6.
|
[
{
"math_id": 0,
"text": "I"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "I \\cap M = \\varnothing"
},
{
"math_id": 4,
"text": "P"
},
{
"math_id": 5,
"text": "I \\subseteq P"
},
{
"math_id": 6,
"text": "P \\cap M = \\varnothing"
}
] |
https://en.wikipedia.org/wiki?curid=61728386
|
6173
|
Cardinal number
|
Size of a possibly infinite set
In mathematics, a cardinal number, or cardinal for short, is what is commonly called the number of elements of a set. In the case of a finite set, its cardinal number, or cardinality, is therefore a natural number. For dealing with the case of infinite sets, the infinite cardinal numbers have been introduced, which are often denoted with the Hebrew letter formula_0 (aleph) marked with a subscript indicating their rank among the infinite cardinals.
Cardinality is defined in terms of bijective functions. Two sets have the same cardinality if, and only if, there is a one-to-one correspondence (bijection) between the elements of the two sets. In the case of finite sets, this agrees with the intuitive notion of number of elements. In the case of infinite sets, the behavior is more complex. A fundamental theorem due to Georg Cantor shows that it is possible for infinite sets to have different cardinalities, and in particular the cardinality of the set of real numbers is greater than the cardinality of the set of natural numbers. It is also possible for a proper subset of an infinite set to have the same cardinality as the original set—something that cannot happen with proper subsets of finite sets.
There is a transfinite sequence of cardinal numbers:
formula_1
This sequence starts with the natural numbers including zero (finite cardinals), which are followed by the aleph numbers. The aleph numbers are indexed by ordinal numbers. If the axiom of choice is true, this transfinite sequence includes every cardinal number. If the axiom of choice is not true, there are infinite cardinals that are not aleph numbers.
Cardinality is studied for its own sake as part of set theory. It is also a tool used in branches of mathematics including model theory, combinatorics, abstract algebra and mathematical analysis. In category theory, the cardinal numbers form a skeleton of the category of sets.
History.
The notion of cardinality, as now understood, was formulated by Georg Cantor, the originator of set theory, in 1874–1884. Cardinality can be used to compare an aspect of finite sets. For example, the sets {1,2,3} and {4,5,6} are not "equal", but have the "same cardinality", namely three. This is established by the existence of a bijection (i.e., a one-to-one correspondence) between the two sets, such as the correspondence {1→4, 2→5, 3→6}.
Cantor applied his concept of bijection to infinite sets (for example the set of natural numbers N = {0, 1, 2, 3, ...}). Thus, he called all sets having a bijection with N "denumerable (countably infinite) sets", which all share the same cardinal number. This cardinal number is called formula_2, aleph-null. He called the cardinal numbers of infinite sets transfinite cardinal numbers.
Cantor proved that any unbounded subset of N has the same cardinality as N, even though this might appear to run contrary to intuition. He also proved that the set of all ordered pairs of natural numbers is denumerable; this implies that the set of all rational numbers is also denumerable, since every rational can be represented by a pair of integers. He later proved that the set of all real algebraic numbers is also denumerable. Each real algebraic number "z" may be encoded as a finite sequence of integers, which are the coefficients in the polynomial equation of which it is a solution, i.e. the ordered n-tuple ("a"0, "a"1, ..., "an"), "ai" ∈ Z together with a pair of rationals ("b"0, "b"1) such that "z" is the unique root of the polynomial with coefficients ("a"0, "a"1, ..., "an") that lies in the interval ("b"0, "b"1).
In his 1874 paper "On a Property of the Collection of All Real Algebraic Numbers", Cantor proved that there exist higher-order cardinal numbers, by showing that the set of real numbers has cardinality greater than that of N. His proof used an argument with nested intervals, but in an 1891 paper, he proved the same result using his ingenious and much simpler diagonal argument. The new cardinal number of the set of real numbers is called the cardinality of the continuum and Cantor used the symbol formula_3 for it.
Cantor also developed a large portion of the general theory of cardinal numbers; he proved that there is a smallest transfinite cardinal number (formula_2, aleph-null), and that for every cardinal number there is a next-larger cardinal
formula_4
His continuum hypothesis is the proposition that the cardinality formula_3 of the set of real numbers is the same as formula_5. This hypothesis is independent of the standard axioms of mathematical set theory, that is, it can neither be proved nor disproved from them. This was shown in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940.
Motivation.
In informal use, a cardinal number is what is normally referred to as a "counting number", provided that 0 is included: 0, 1, 2, ... They may be identified with the natural numbers beginning with 0. The counting numbers are exactly what can be defined formally as the finite cardinal numbers. Infinite cardinals only occur in higher-level mathematics and logic.
More formally, a non-zero number can be used for two purposes: to describe the size of a set, or to describe the position of an element in a sequence. For finite sets and sequences it is easy to see that these two notions coincide, since for every number describing a position in a sequence we can construct a set that has exactly the right size. For example, 3 describes the position of 'c' in the sequence <'a','b','c','d'...>, and we can construct the set {a,b,c}, which has 3 elements.
However, when dealing with infinite sets, it is essential to distinguish between the two, since the two notions are in fact different for infinite sets. Considering the position aspect leads to ordinal numbers, while the size aspect is generalized by the cardinal numbers described here.
The intuition behind the formal definition of cardinal is the construction of a notion of the relative size or "bigness" of a set, without reference to the kind of members which it has. For finite sets this is easy; one simply counts the number of elements a set has. In order to compare the sizes of larger sets, it is necessary to appeal to more refined notions.
A set "Y" is at least as big as a set "X" if there is an injective mapping from the elements of "X" to the elements of "Y". An injective mapping identifies each element of the set "X" with a unique element of the set "Y". This is most easily understood by an example; suppose we have the sets "X" = {1,2,3} and "Y" = {a,b,c,d}, then using this notion of size, we would observe that there is a mapping:
1 → a
2 → b
3 → c
which is injective, and hence conclude that "Y" has cardinality greater than or equal to "X". The element d has no element mapping to it, but this is permitted as we only require an injective mapping, and not necessarily a bijective mapping. The advantage of this notion is that it can be extended to infinite sets.
We can then extend this to an equality-style relation. Two sets "X" and "Y" are said to have the same "cardinality" if there exists a bijection between "X" and "Y". By the Schroeder–Bernstein theorem, this is equivalent to there being "both" an injective mapping from "X" to "Y", "and" an injective mapping from "Y" to "X". We then write |"X"| = |"Y"|. The cardinal number of "X" itself is often defined as the least ordinal "a" with |"a"| = |"X"|. This is called the von Neumann cardinal assignment; for this definition to make sense, it must be proved that every set has the same cardinality as "some" ordinal; this statement is the well-ordering principle. It is however possible to discuss the relative cardinality of sets without explicitly assigning names to objects.
The classic example used is that of the infinite hotel paradox, also called Hilbert's paradox of the Grand Hotel. Supposing there is an innkeeper at a hotel with an infinite number of rooms. The hotel is full, and then a new guest arrives. It is possible to fit the extra guest in by asking the guest who was in room 1 to move to room 2, the guest in room 2 to move to room 3, and so on, leaving room 1 vacant. We can explicitly write a segment of this mapping:
1 → 2
2 → 3
3 → 4
"n" → "n" + 1
With this assignment, we can see that the set {1,2,3...} has the same cardinality as the set {2,3,4...}, since a bijection between the first and the second has been shown. This motivates the definition of an infinite set being any set that has a proper subset of the same cardinality (i.e., a Dedekind-infinite set); in this case {2,3,4...} is a proper subset of {1,2,3...}.
When considering these large objects, one might also want to see if the notion of counting order coincides with that of cardinal defined above for these infinite sets. It happens that it does not; by considering the above example we can see that if some object "one greater than infinity" exists, then it must have the same cardinality as the infinite set we started out with. It is possible to use a different formal notion for number, called ordinals, based on the ideas of counting and considering each number in turn, and we discover that the notions of cardinality and ordinality are divergent once we move out of the finite numbers.
It can be proved that the cardinality of the real numbers is greater than that of the natural numbers just described. This can be visualized using Cantor's diagonal argument; classic questions of cardinality (for instance the continuum hypothesis) are concerned with discovering whether there is some cardinal between some pair of other infinite cardinals. In more recent times, mathematicians have been describing the properties of larger and larger cardinals.
Since cardinality is such a common concept in mathematics, a variety of names are in use. Sameness of cardinality is sometimes referred to as "equipotence", "equipollence", or "equinumerosity". It is thus said that two sets with the same cardinality are, respectively, "equipotent", "equipollent", or "equinumerous".
Formal definition.
Formally, assuming the axiom of choice, the cardinality of a set "X" is the least ordinal number α such that there is a bijection between "X" and α. This definition is known as the von Neumann cardinal assignment. If the axiom of choice is not assumed, then a different approach is needed. The oldest definition of the cardinality of a set "X" (implicit in Cantor and explicit in Frege and "Principia Mathematica") is as the class ["X"] of all sets that are equinumerous with "X". This does not work in ZFC or other related systems of axiomatic set theory because if "X" is non-empty, this collection is too large to be a set. In fact, for "X" ≠ ∅ there is an injection from the universe into ["X"] by mapping a set "m" to {"m"} × "X", and so by the axiom of limitation of size, ["X"] is a proper class. The definition does work however in type theory and in New Foundations and related systems. However, if we restrict from this class to those equinumerous with "X" that have the least rank, then it will work (this is a trick due to Dana Scott: it works because the collection of objects with any given rank is a set).
Von Neumann cardinal assignment implies that the cardinal number of a finite set is the common ordinal number of all possible well-orderings of that set, and cardinal and ordinal arithmetic (addition, multiplication, power, proper subtraction) then give the same answers for finite numbers. However, they differ for infinite numbers. For example, formula_6 in ordinal arithmetic while formula_7 in cardinal arithmetic, although the von Neumann assignment puts formula_8. On the other hand, Scott's trick implies that the cardinal number 0 is formula_9, which is also the ordinal number 1, and this may be confusing. A possible compromise (to take advantage of the alignment in finite arithmetic while avoiding reliance on the axiom of choice and confusion in infinite arithmetic) is to apply von Neumann assignment to the cardinal numbers of finite sets (those which can be well ordered and are not equipotent to proper subsets) and to use Scott's trick for the cardinal numbers of other sets.
Formally, the order among cardinal numbers is defined as follows: |"X"| ≤ |"Y"| means that there exists an injective function from "X" to "Y". The Cantor–Bernstein–Schroeder theorem states that if |"X"| ≤ |"Y"| and |"Y"| ≤ |"X"| then |"X"| = |"Y"|. The axiom of choice is equivalent to the statement that given two sets "X" and "Y", either |"X"| ≤ |"Y"| or |"Y"| ≤ |"X"|.
A set "X" is Dedekind-infinite if there exists a proper subset "Y" of "X" with |"X"| = |"Y"|, and Dedekind-finite if such a subset does not exist. The finite cardinals are just the natural numbers, in the sense that a set "X" is finite if and only if |"X"| = |"n"| = "n" for some natural number "n". Any other set is infinite.
Assuming the axiom of choice, it can be proved that the Dedekind notions correspond to the standard ones. It can also be proved that the cardinal formula_2 (aleph null or aleph-0, where aleph is the first letter in the Hebrew alphabet, represented formula_0) of the set of natural numbers is the smallest infinite cardinal (i.e., any infinite set has a subset of cardinality formula_2). The next larger cardinal is denoted by formula_5, and so on. For every ordinal α, there is a cardinal number formula_10 and this list exhausts all infinite cardinal numbers.
Cardinal arithmetic.
We can define arithmetic operations on cardinal numbers that generalize the ordinary operations for natural numbers. It can be shown that for finite cardinals, these operations coincide with the usual operations for natural numbers. Furthermore, these operations share many properties with ordinary arithmetic.
Successor cardinal.
If the axiom of choice holds, then every cardinal κ has a successor, denoted κ^+, where κ^+ > κ and there are no cardinals between κ and its successor. (Without the axiom of choice, using Hartogs' theorem, it can be shown that for any cardinal number κ, there is a minimal cardinal κ^+ such that formula_11) For finite cardinals, the successor is simply κ + 1. For infinite cardinals, the successor cardinal differs from the successor ordinal.
Cardinal addition.
If "X" and "Y" are disjoint, addition is given by the union of "X" and "Y". If the two sets are not already disjoint, then they can be replaced by disjoint sets of the same cardinality (e.g., replace "X" by "X"×{0} and "Y" by "Y"×{1}).
formula_12
Zero is an additive identity: "κ" + 0 = 0 + "κ" = "κ".
Addition is associative: ("κ" + "μ") + "ν" = "κ" + ("μ" + "ν").
Addition is commutative: "κ" + "μ" = "μ" + "κ".
Addition is non-decreasing in both arguments:
formula_13
Assuming the axiom of choice, addition of infinite cardinal numbers is easy. If either "κ" or "μ" is infinite, then
formula_14
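For finite sets, the disjoint-union definition of addition can be checked directly. A minimal sketch in Python (the sets are illustrative, not from the article), using the tagging trick "X"×{0}, "Y"×{1} described above:

```python
# Cardinal addition on finite sets: |X| + |Y| = |X ⊔ Y|,
# where the disjoint union is built by tagging elements.
X = {1, 2, 3}
Y = {3, 4}  # deliberately not disjoint from X

disjoint_union = {(x, 0) for x in X} | {(y, 1) for y in Y}
assert len(disjoint_union) == len(X) + len(Y)  # 5 == 3 + 2
```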
Subtraction.
Assuming the axiom of choice and, given an infinite cardinal "σ" and a cardinal "μ", there exists a cardinal "κ" such that "μ" + "κ" = "σ" if and only if "μ" ≤ "σ". It will be unique (and equal to "σ") if and only if "μ" < "σ".
Cardinal multiplication.
The product of cardinals comes from the Cartesian product.
formula_15
"κ"·0 = 0·"κ" = 0.
"κ"·"μ" = 0 → ("κ" = 0 or "μ" = 0).
One is a multiplicative identity: "κ"·1 = 1·"κ" = "κ".
Multiplication is associative: ("κ"·"μ")·"ν" = "κ"·("μ"·"ν").
Multiplication is commutative: "κ"·"μ" = "μ"·"κ".
Multiplication is non-decreasing in both arguments:
"κ" ≤ "μ" → ("κ"·"ν" ≤ "μ"·"ν" and "ν"·"κ" ≤ "ν"·"μ").
Multiplication distributes over addition:
"κ"·("μ" + "ν") = "κ"·"μ" + "κ"·"ν" and
("μ" + "ν")·"κ" = "μ"·"κ" + "ν"·"κ".
Assuming the axiom of choice, multiplication of infinite cardinal numbers is also easy. If either "κ" or "μ" is infinite and both are non-zero, then
formula_16
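The Cartesian-product definition of multiplication can likewise be checked on finite sets; a minimal sketch with illustrative sets:

```python
# Cardinal multiplication on finite sets: |X|·|Y| = |X × Y|.
from itertools import product

X = {1, 2, 3}
Y = {'a', 'b'}
assert len(set(product(X, Y))) == len(X) * len(Y)  # 6 == 3·2
```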
Division.
Assuming the axiom of choice and, given an infinite cardinal "π" and a non-zero cardinal "μ", there exists a cardinal "κ" such that "μ" · "κ" = "π" if and only if "μ" ≤ "π". It will be unique (and equal to "π") if and only if "μ" < "π".
Cardinal exponentiation.
Exponentiation is given by
formula_17
where "XY" is the set of all functions from "Y" to "X". It is easy to check that the right-hand side depends only on formula_18 and formula_19.
"κ"^0 = 1 (in particular 0^0 = 1), see empty function.
If 1 ≤ "μ", then 0^"μ" = 0.
1^"μ" = 1.
"κ"^1 = "κ".
"κ"^("μ" + "ν") = "κ"^"μ"·"κ"^"ν".
"κ"^("μ"·"ν") = ("κ"^"μ")^"ν".
("κ"·"μ")^"ν" = "κ"^"ν"·"μ"^"ν".
Exponentiation is non-decreasing in both arguments:
(1 ≤ "ν" and "κ" ≤ "μ") → ("ν"^"κ" ≤ "ν"^"μ") and
("κ" ≤ "μ") → ("κ"^"ν" ≤ "μ"^"ν").
2|"X"| is the cardinality of the power set of the set "X" and Cantor's diagonal argument shows that 2|"X"| > |"X"| for any set "X". This proves that no largest cardinal exists (because for any cardinal "κ", we can always find a larger cardinal 2"κ"). In fact, the class of cardinals is a proper class. (This proof fails in some set theories, notably New Foundations.)
All the remaining propositions in this section assume the axiom of choice:
If "κ" and "μ" are both finite and greater than 1, and "ν" is infinite, then "κ""ν" = "μ""ν".
If "κ" is infinite and "μ" is finite and non-zero, then "κ""μ" = "κ".
If 2 ≤ "κ" and 1 ≤ "μ" and at least one of them is infinite, then:
Max ("κ", 2"μ") ≤ "κ""μ" ≤ Max (2"κ", 2"μ").
Using König's theorem, one can prove "κ" < "κ"cf("κ") and "κ" < cf(2"κ") for any infinite cardinal "κ", where cf("κ") is the cofinality of "κ".
Roots.
Assuming the axiom of choice and, given an infinite cardinal "κ" and a finite cardinal "μ" greater than 0, the cardinal "ν" satisfying formula_20 will be formula_21.
Logarithms.
Assuming the axiom of choice and, given an infinite cardinal "κ" and a finite cardinal "μ" greater than 1, there may or may not be a cardinal "λ" satisfying formula_22. However, if such a cardinal exists, it is infinite and less than "κ", and any finite cardinality "ν" greater than 1 will also satisfy formula_23.
The logarithm of an infinite cardinal number "κ" is defined as the least cardinal number "μ" such that "κ" ≤ 2"μ". Logarithms of infinite cardinals are useful in some fields of mathematics, for example in the study of cardinal invariants of topological spaces, though they lack some of the properties that logarithms of positive real numbers possess.
The continuum hypothesis.
The continuum hypothesis (CH) states that there are no cardinals strictly between formula_2 and formula_24 The latter cardinal number is also often denoted by formula_3; it is the cardinality of the continuum (the set of real numbers). In this case formula_25
Similarly, the generalized continuum hypothesis (GCH) states that for every infinite cardinal formula_21, there are no cardinals strictly between formula_21 and formula_26. Both the continuum hypothesis and the generalized continuum hypothesis have been proved to be independent of the usual axioms of set theory, the Zermelo–Fraenkel axioms together with the axiom of choice (ZFC).
Indeed, Easton's theorem shows that, for regular cardinals formula_21, the only restrictions ZFC places on the cardinality of formula_26 are that formula_27, and that the exponential function is non-decreasing.
See also.
<templatestyles src="Div col/styles.css"/>
References.
Notes
<templatestyles src="Reflist/styles.css" />
Bibliography
|
[
{
"math_id": 0,
"text": "\\aleph"
},
{
"math_id": 1,
"text": "0, 1, 2, 3, \\ldots, n, \\ldots ; \\aleph_0, \\aleph_1, \\aleph_2, \\ldots, \\aleph_{\\alpha}, \\ldots.\\ "
},
{
"math_id": 2,
"text": "\\aleph_0"
},
{
"math_id": 3,
"text": "\\mathfrak{c}"
},
{
"math_id": 4,
"text": "(\\aleph_1, \\aleph_2, \\aleph_3, \\ldots)."
},
{
"math_id": 5,
"text": "\\aleph_1"
},
{
"math_id": 6,
"text": "2^\\omega=\\omega<\\omega^2"
},
{
"math_id": 7,
"text": "2^{\\aleph_0}>\\aleph_0=\\aleph_0^2"
},
{
"math_id": 8,
"text": "\\aleph_0=\\omega"
},
{
"math_id": 9,
"text": "\\{\\emptyset\\}"
},
{
"math_id": 10,
"text": "\\aleph_{\\alpha},"
},
{
"math_id": 11,
"text": "\\kappa^+\\nleq\\kappa. "
},
{
"math_id": 12,
"text": "|X| + |Y| = | X \\cup Y|."
},
{
"math_id": 13,
"text": "(\\kappa \\le \\mu) \\rightarrow ((\\kappa + \\nu \\le \\mu + \\nu) \\mbox{ and } (\\nu + \\kappa \\le \\nu + \\mu))."
},
{
"math_id": 14,
"text": "\\kappa + \\mu = \\max\\{\\kappa, \\mu\\}\\,."
},
{
"math_id": 15,
"text": "|X|\\cdot|Y| = |X \\times Y|"
},
{
"math_id": 16,
"text": "\\kappa\\cdot\\mu = \\max\\{\\kappa, \\mu\\}."
},
{
"math_id": 17,
"text": "|X|^{|Y|} = \\left|X^Y\\right|,"
},
{
"math_id": 18,
"text": "{|X|}"
},
{
"math_id": 19,
"text": "{|Y|}"
},
{
"math_id": 20,
"text": "\\nu^\\mu = \\kappa"
},
{
"math_id": 21,
"text": "\\kappa"
},
{
"math_id": 22,
"text": "\\mu^\\lambda = \\kappa"
},
{
"math_id": 23,
"text": "\\nu^\\lambda = \\kappa"
},
{
"math_id": 24,
"text": "2^{\\aleph_0}."
},
{
"math_id": 25,
"text": "2^{\\aleph_0} = \\aleph_1."
},
{
"math_id": 26,
"text": "2^\\kappa"
},
{
"math_id": 27,
"text": " \\kappa < \\operatorname{cf}(2^\\kappa) "
}
] |
https://en.wikipedia.org/wiki?curid=6173
|
617327
|
Armstrong oscillator
|
The Armstrong oscillator (also known as the Meissner oscillator) is an electronic oscillator circuit which uses an inductor and capacitor to generate an oscillation. The Meissner patent from 1913 describes a device for generating electrical vibrations, a radio transmitter used for on–off keying. In 1915, Edwin Armstrong presented some recent developments in the Audion receiver; his circuits improved radio-frequency reception. Meissner used a Lieben-Reisz-Strauss tube, while Armstrong used a de Forest Audion tube. Both circuits are sometimes called tickler oscillators because their distinguishing feature is that the feedback signal needed to produce oscillations is magnetically coupled into the tank inductor by a "tickler coil" "(L2, right)". Assuming the coupling is weak but sufficient to sustain oscillation, the oscillation frequency "f" is determined primarily by the LC circuit (tank circuit L1 and C in the figure on the right) and is approximately given by
formula_0
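As a rough numerical illustration of this formula (the component values below are assumed for the sake of example, not taken from any particular Armstrong or Meissner design):

```python
# Resonant frequency of the LC tank: f = 1 / (2*pi*sqrt(L*C)).
import math

L = 100e-6   # tank inductance L1 in henries (assumed value)
C = 250e-12  # tank capacitance C in farads (assumed value)

f = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"f ≈ {f / 1e3:.0f} kHz")  # ≈ 1007 kHz, i.e. in the AM broadcast band
```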
This circuit was widely used in the regenerative radio receiver, popular until the 1940s. In that application, the input radio-frequency signal from the antenna is magnetically coupled into the LC circuit by an additional winding, and the feedback is reduced with an adjustable gain control in the feedback loop, so that the circuit is just short of oscillation. The result is a narrow-band radio-frequency filter and amplifier. The non-linear characteristic of the transistor or tube also demodulates the RF signal to produce the audio signal.
The circuit diagram shown is a modern implementation, using a field-effect transistor as the amplifying element. Armstrong's original design used a triode vacuum tube.
In the Meissner variant, the positions of the LC resonant circuit and the feedback coil are exchanged, placing the resonant circuit in the output path (vacuum tube plate, field-effect transistor drain, or bipolar transistor collector) of the amplifier (e.g., Grebennikov, Fig. 2.8). Many publications, however, use either name for both variants: English speakers tend to call it the "Armstrong oscillator", whereas German speakers call it the "Meißner oscillator".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f = \\frac{1}{2\\pi\\sqrt{LC}} \\,"
}
] |
https://en.wikipedia.org/wiki?curid=617327
|
61739855
|
Donna Testerman
|
Mathematician
Donna Marie Testerman (born 1960) is a mathematician specializing in the representation theory of algebraic groups. She is a professor of mathematics at the École Polytechnique Fédérale de Lausanne in Switzerland.
Testerman completed her Ph.D. at the University of Oregon in 1985. Her dissertation, "Certain Embeddings of Simple Algebraic Groups", was supervised by Gary Seitz. As a faculty member at Wesleyan University, she won a Sloan Research Fellowship in 1992.
Testerman is an author or editor of several books and book-length research monographs in mathematics including:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A_1"
}
] |
https://en.wikipedia.org/wiki?curid=61739855
|
6174
|
Cardinality
|
Definition of the number of elements in a set
In mathematics, cardinality describes a relationship between sets which compares their relative size. For example, the sets formula_0 and formula_1 are the same size as they each contain 3 elements. Beginning in the late 19th century, this concept was generalized to infinite sets, which allows one to distinguish between different types of infinity, and to perform arithmetic on them. There are two notions often used when referring to cardinality: one which compares sets directly using bijections and injections, and another which uses cardinal numbers.
The cardinality of a set may also be called its size, when no confusion with other notions of size is possible.
When two sets, "A" and "B", have the same cardinality, it is usually written as formula_2; however, if referring to the "cardinal number" of an individual set formula_3, it is simply denoted formula_4, with a vertical bar on each side; this is the same notation as absolute value, and the meaning depends on context. The cardinal number of a set formula_3 may alternatively be denoted by formula_5, formula_3, formula_6, or formula_7.
History.
A crude sense of cardinality, an awareness that groups of things or events compare with other groups by containing more, fewer, or the same number of instances, is observed in a variety of present-day animal species, suggesting an origin millions of years ago. Human expression of cardinality is seen as early as years ago, with the equating of the size of a group with a tally of recorded notches, or with a representative collection of other things, such as sticks and shells. The abstraction of cardinality as a number is evident by 3000 BCE, in Sumerian mathematics and the manipulation of numbers without reference to a specific group of things or events.
From the 6th century BCE, the writings of Greek philosophers show hints of the cardinality of infinite sets. While they considered the notion of infinity as an endless series of actions, such as adding 1 to a number repeatedly, they did not consider the size of an infinite set of numbers to be a thing. The ancient Greek notion of infinity also considered the division of things into parts repeated without limit. In Euclid's "Elements", commensurability was described as the ability to compare the length of two line segments, "a" and "b", as a ratio, as long as there were a third segment, no matter how small, that could be laid end-to-end a whole number of times into both "a" and "b". But with the discovery of irrational numbers, it was seen that even the infinite set of all rational numbers was not enough to describe the length of every possible line segment. Still, there was no concept of infinite sets as something that had cardinality.
To better understand infinite sets, a notion of cardinality was formulated c. 1880 by Georg Cantor, the originator of set theory. He examined the process of equating two sets with bijection, a one-to-one correspondence between the elements of two sets based on a unique relationship. In 1891, with the publication of Cantor's diagonal argument, he demonstrated that there are sets of numbers that cannot be placed in one-to-one correspondence with the set of natural numbers, i.e. uncountable sets that contain more elements than there are in the infinite set of natural numbers.
Comparing sets.
While the cardinality of a finite set is simply comparable to its number of elements, extending the notion to infinite sets usually starts with defining the notion of comparison of arbitrary sets (some of which are possibly infinite).
Definition 1: |"A"| = |"B"|.
Two sets have the same cardinality if there exists a bijection (a.k.a. one-to-one correspondence) from "A" to "B", that is, a function from "A" to "B" that is both injective and surjective. Such sets are said to be "equipotent", "equipollent", or "equinumerous".
For example, the set formula_8 of non-negative even numbers has the same cardinality as the set formula_9 of natural numbers, since the function formula_10 is a bijection from "N" to "E" (see picture).
For finite sets "A" and "B", if "some" bijection exists from "A" to "B", then "each" injective or surjective function from "A" to "B" is a bijection. This is no longer true for infinite "A" and "B". For example, the function "g" from "N" to "E", defined by formula_11, is injective but not surjective, since 2, for instance, is not mapped to; and "h" from "N" to "E", defined by formula_12 (see: modulo operation), is surjective but not injective, since 0 and 1, for instance, both map to 0. Neither "g" nor "h" can challenge formula_13, which was established by the existence of "f".
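A minimal sketch evaluating the three maps above on an initial segment of "N" makes the distinction concrete:

```python
# f is a bijection from N onto E; g is injective but misses 2;
# h is surjective onto E but sends both 0 and 1 to 0.
N = range(10)

f = [2 * n for n in N]        # [0, 2, 4, ..., 18]: each even number hit once
g = [4 * n for n in N]        # [0, 4, 8, ...]: 2 never appears as a value
h = [n - (n % 2) for n in N]  # [0, 0, 2, 2, ...]: 0 and 1 collide
print(f, g, h, sep="\n")
```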
Definition 2: |"A"| ≤ |"B"|.
"A" has cardinality less than or equal to the cardinality of "B", if there exists an injective function from "A" into "B".
If formula_14 and formula_15, then formula_2 (a fact known as the Schröder–Bernstein theorem). The axiom of choice is equivalent to the statement that formula_14 or formula_15 holds for every "A" and "B".
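The usual proof of the Schröder–Bernstein theorem is constructive, and on finite sets the construction can be sketched directly: given injections "f" : "A" → "B" and "g" : "B" → "A" (illustrative dictionaries below), trace each element's chain of preimages and use "f" or "g"⁻¹ accordingly. This is a sketch of the standard chain argument, not any particular textbook's code:

```python
# Build an explicit bijection h: A -> B from injections f: A -> B, g: B -> A.
def schroeder_bernstein(A, f, g):
    g_inv = {v: k for k, v in g.items()}  # well defined: g is injective
    f_inv = {v: k for k, v in f.items()}
    h = {}
    for a in A:
        # Walk the chain of preimages a <- g(b) <- f(a') <- ... until it
        # stops (a "stopper") or returns to a (a cycle, common on finite sets).
        x, in_A = a, True
        while True:
            if in_A:
                if x not in g_inv:
                    origin_in_A = True   # chain starts in A: use f
                    break
                x, in_A = g_inv[x], False
            else:
                if x not in f_inv:
                    origin_in_A = False  # chain starts in B: use g^-1
                    break
                x, in_A = f_inv[x], True
            if in_A and x == a:
                origin_in_A = True       # cycle: f works throughout
                break
        h[a] = f[a] if origin_in_A else g_inv[a]
    return h

f = {0: 'x', 1: 'y', 2: 'z'}
g = {'x': 1, 'y': 2, 'z': 0}
print(schroeder_bernstein({0, 1, 2}, f, g))  # a bijection {0,1,2} -> {x,y,z}
```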
Definition 3: |"A"| < |"B"|.
"A" has cardinality strictly less than the cardinality of "B", if there is an injective function, but no bijective function, from "A" to "B".
For example, the set "N" of all natural numbers has cardinality strictly less than its power set "P"("N"), because formula_16 is an injective function from "N" to "P"("N"), and it can be shown that no function from "N" to "P"("N") can be bijective (see picture). By a similar argument, "N" has cardinality strictly less than the cardinality of the set R of all real numbers. For proofs, see Cantor's diagonal argument or Cantor's first uncountability proof.
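A finite sketch of the diagonal argument behind the first claim: given any finite list of subsets standing in for a purported enumeration, the "diagonal" set differs from the "k"-th subset at "k", so the list cannot be exhaustive:

```python
# The diagonal set D = {k : k not in S_k} differs from every listed subset.
def diagonal(subsets):
    return {k for k, s in enumerate(subsets) if k not in s}

subsets = [{0, 1}, set(), {2}, {0, 3}]
d = diagonal(subsets)          # {1} for this list
assert all(d != s for s in subsets)
```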
Cardinal numbers.
In the above section, "cardinality" of a set was defined functionally. In other words, it was not defined as a specific object itself. However, such an object can be defined as follows.
The relation of having the same cardinality is called equinumerosity, and this is an equivalence relation on the class of all sets. The equivalence class of a set "A" under this relation, then, consists of all those sets which have the same cardinality as "A". There are two ways to define the "cardinality of a set":
* The cardinality of a set "A" is defined as its equivalence class under equinumerosity.
* A representative set is designated for each equivalence class; the most common choice is the initial ordinal of that class, as in the von Neumann cardinal assignment.
Assuming the axiom of choice, the cardinalities of the infinite sets are denoted
formula_17
For each ordinal formula_18, formula_19 is the least cardinal number greater than formula_20.
The cardinality of the natural numbers is denoted aleph-null (formula_21), while the cardinality of the real numbers is denoted by "formula_22" (a lowercase fraktur script "c"), and is also referred to as the cardinality of the continuum. Cantor showed, using the diagonal argument, that formula_23. We can show that formula_24, this also being the cardinality of the set of all subsets of the natural numbers.
The continuum hypothesis says that formula_25, i.e. formula_26 is the smallest cardinal number bigger than formula_21, i.e. there is no set whose cardinality is strictly between that of the integers and that of the real numbers. The continuum hypothesis is independent of ZFC, a standard axiomatization of set theory; that is, it is impossible to prove the continuum hypothesis or its negation from ZFC—provided that ZFC is consistent. For more detail, see § Cardinality of the continuum below.
Finite, countable and uncountable sets.
If the axiom of choice holds, the law of trichotomy holds for cardinality. Thus we can make the following definitions:
* Any set "X" with cardinality less than that of the natural numbers, that is, with |"X"| < |"N"|, is said to be a finite set.
* Any set "X" with the same cardinality as the set of natural numbers, that is, with |"X"| = |"N"| = formula_21, is said to be a countably infinite set.
* Any set "X" with cardinality greater than that of the natural numbers, such as the set R of real numbers with |R| = formula_22 > |"N"|, is said to be uncountable.
Infinite sets.
Our intuition gained from finite sets breaks down when dealing with infinite sets. In the late 19th century Georg Cantor, Gottlob Frege, Richard Dedekind and others rejected the view that the whole cannot be the same size as the part. One example of this is Hilbert's paradox of the Grand Hotel.
Indeed, Dedekind defined an infinite set as one that can be placed into a one-to-one correspondence with a strict subset (that is, having the same size in Cantor's sense); this notion of infinity is called Dedekind infinite. Cantor introduced the cardinal numbers, and showed—according to his bijection-based definition of size—that some infinite sets are greater than others. The smallest infinite cardinality is that of the natural numbers (formula_21).
Cardinality of the continuum.
One of Cantor's most important results was that the cardinality of the continuum (formula_28) is greater than that of the natural numbers (formula_21); that is, there are more real numbers R than natural numbers N. Namely, Cantor showed that formula_29 (see Beth one) satisfies:
formula_30
(see Cantor's diagonal argument or Cantor's first uncountability proof).
The continuum hypothesis states that there is no cardinal number between the cardinality of the reals and the cardinality of the natural numbers, that is,
formula_31
However, this hypothesis can neither be proved nor disproved within the widely accepted ZFC axiomatic set theory, if ZFC is consistent.
Cardinal arithmetic can be used to show not only that the number of points in a real number line is equal to the number of points in any segment of that line, but that this is equal to the number of points on a plane and, indeed, in any finite-dimensional space. These results are highly counterintuitive, because they imply that there exist proper subsets and proper supersets of an infinite set "S" that have the same size as "S", although "S" contains elements that do not belong to its subsets, and the supersets of "S" contain elements that are not included in it.
The first of these results is apparent by considering, for instance, the tangent function, which provides a one-to-one correspondence between the interval (−½π, ½π) and R (see also Hilbert's paradox of the Grand Hotel).
The second result was first demonstrated by Cantor in 1878, but it became more apparent in 1890, when Giuseppe Peano introduced the space-filling curves, curved lines that twist and turn enough to fill the whole of any square, or cube, or hypercube, or finite-dimensional space. These curves are not a direct proof that a line has the same number of points as a finite-dimensional space, but they can be used to obtain such a proof.
Cantor also showed that sets with cardinality strictly greater than formula_22 exist (see his generalized diagonal argument and theorem). They include, for instance:
* the set of all subsets of R, i.e., the power set of R, written "P"(R) or 2^R
* the set R^R of all functions from R to R
Both have cardinality
formula_32
(see Beth two).
The cardinal equalities formula_33 formula_34 and formula_35 can be demonstrated using cardinal arithmetic:
formula_36
formula_37
formula_38
Union and intersection.
If "A" and "B" are disjoint sets, then
formula_40
From this, one can show that in general, the cardinalities of unions and intersections are related by the following equation:
formula_41
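For finite sets this identity is ordinary inclusion–exclusion and is easy to check; the sets below are illustrative:

```python
# |C ∪ D| + |C ∩ D| = |C| + |D| on finite sets.
C = {1, 2, 3, 4}
D = {3, 4, 5}
assert len(C | D) + len(C & D) == len(C) + len(D)  # 5 + 2 == 4 + 3
```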
Definition of cardinality in class theory (NBG or MK).
Here formula_42 denotes the class of all sets, and formula_43 denotes the class of all ordinal numbers.
formula_44
Here we use the intersection of a class, defined by formula_45; consequently formula_46.
In this case
formula_47.
This definition also yields a cardinality for any proper class formula_48; in particular
formula_49
This definition is natural, since it agrees with the axiom of limitation of size, which implies a bijection between formula_42 and any proper class.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A = \\{1, 2, 3\\}"
},
{
"math_id": 1,
"text": "B = \\{2,4,6\\}"
},
{
"math_id": 2,
"text": "|A| = |B|"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "|A|"
},
{
"math_id": 5,
"text": "n(A)"
},
{
"math_id": 6,
"text": "\\operatorname{card}(A)"
},
{
"math_id": 7,
"text": "\\#A"
},
{
"math_id": 8,
"text": "E = \\{0, 2, 4, 6, \\text{...}\\}"
},
{
"math_id": 9,
"text": "\\N = \\{0, 1, 2, 3, \\text{...}\\}"
},
{
"math_id": 10,
"text": "f(n) = 2n"
},
{
"math_id": 11,
"text": "g(n) = 4n"
},
{
"math_id": 12,
"text": "h(n) = n - (n \\text{ mod } 2)"
},
{
"math_id": 13,
"text": "|E| = |\\N|"
},
{
"math_id": 14,
"text": "|A| \\leq |B|"
},
{
"math_id": 15,
"text": "|B| \\leq |A|"
},
{
"math_id": 16,
"text": "g(n) = \\{n\\}"
},
{
"math_id": 17,
"text": "\\aleph_0 < \\aleph_1 < \\aleph_2 < \\ldots . "
},
{
"math_id": 18,
"text": "\\alpha"
},
{
"math_id": 19,
"text": "\\aleph_{\\alpha + 1}"
},
{
"math_id": 20,
"text": "\\aleph_\\alpha"
},
{
"math_id": 21,
"text": "\\aleph_0"
},
{
"math_id": 22,
"text": "\\mathfrak c"
},
{
"math_id": 23,
"text": "{\\mathfrak c} >\\aleph_0"
},
{
"math_id": 24,
"text": "\\mathfrak c = 2^{\\aleph_0}"
},
{
"math_id": 25,
"text": "\\aleph_1 = 2^{\\aleph_0}"
},
{
"math_id": 26,
"text": "2^{\\aleph_0}"
},
{
"math_id": 27,
"text": "\\mathfrak c "
},
{
"math_id": 28,
"text": "\\mathfrak{c}"
},
{
"math_id": 29,
"text": "\\mathfrak{c} = 2^{\\aleph_0} = \\beth_1"
},
{
"math_id": 30,
"text": "2^{\\aleph_0} > \\aleph_0"
},
{
"math_id": 31,
"text": "2^{\\aleph_0} = \\aleph_1"
},
{
"math_id": 32,
"text": "2^\\mathfrak {c} = \\beth_2 > \\mathfrak c "
},
{
"math_id": 33,
"text": "\\mathfrak{c}^2 = \\mathfrak{c},"
},
{
"math_id": 34,
"text": "\\mathfrak c^{\\aleph_0} = \\mathfrak c,"
},
{
"math_id": 35,
"text": "\\mathfrak c ^{\\mathfrak c} = 2^{\\mathfrak c}"
},
{
"math_id": 36,
"text": "\\mathfrak{c}^2 = \\left(2^{\\aleph_0}\\right)^2 = 2^{2\\times{\\aleph_0}} = 2^{\\aleph_0} = \\mathfrak{c},"
},
{
"math_id": 37,
"text": "\\mathfrak c^{\\aleph_0} = \\left(2^{\\aleph_0}\\right)^{\\aleph_0} = 2^{{\\aleph_0}\\times{\\aleph_0}} = 2^{\\aleph_0} = \\mathfrak{c},"
},
{
"math_id": 38,
"text": " \\mathfrak c ^{\\mathfrak c} = \\left(2^{\\aleph_0}\\right)^{\\mathfrak c} = 2^{\\mathfrak c\\times\\aleph_0} = 2^{\\mathfrak c}."
},
{
"math_id": 39,
"text": "[0, 1]"
},
{
"math_id": 40,
"text": "\\left\\vert A \\cup B \\right\\vert = \\left\\vert A \\right\\vert + \\left\\vert B \\right\\vert."
},
{
"math_id": 41,
"text": " \\left\\vert C \\cup D \\right\\vert + \\left\\vert C \\cap D \\right\\vert = \\left\\vert C \\right\\vert + \\left\\vert D \\right\\vert."
},
{
"math_id": 42,
"text": "V"
},
{
"math_id": 43,
"text": "\\mbox{Ord}"
},
{
"math_id": 44,
"text": "|A|:=\\mbox{Ord}\\cap\\bigcap \\{\\alpha\\in\\mbox{Ord}|\\exists (f:A\\to\\alpha):(f\\mbox{ injective})\\}"
},
{
"math_id": 45,
"text": "(x\\in\\bigcap Q)\\iff(\\forall q\\in Q:x\\in q)"
},
{
"math_id": 46,
"text": "\\bigcap\\emptyset = V"
},
{
"math_id": 47,
"text": "(x\\mapsto|x|):V\\to\\mbox{Ord}"
},
{
"math_id": 48,
"text": "P"
},
{
"math_id": 49,
"text": "|P|=\\mbox{Ord}"
}
] |
https://en.wikipedia.org/wiki?curid=6174
|
61740916
|
Covariant (invariant theory)
|
In invariant theory, a branch of algebra, given a group "G", a covariant is a "G"-equivariant polynomial map formula_0 between linear representations "V", "W" of "G". It is a generalization of a classical covariant, which is a homogeneous polynomial map from the space of binary "m"-forms to the space of binary "p"-forms (over the complex numbers) that is formula_1-equivariant.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "V \\to W"
},
{
"math_id": 1,
"text": "SL_2(\\mathbb{C})"
}
] |
https://en.wikipedia.org/wiki?curid=61740916
|
61747300
|
2I/Borisov
|
Interstellar comet passing through the Solar System, discovered in 2019
2I/Borisov, originally designated C/2019 Q4 (Borisov), is the first observed rogue comet and the second observed interstellar interloper after ʻOumuamua. It was discovered by the Crimean amateur astronomer and telescope maker Gennadiy Borisov on 30 August 2019 UTC (29 August local time).
2I/Borisov has a heliocentric orbital eccentricity of 3.36 and is not bound to the Sun. The comet passed through the ecliptic of the Solar System at the end of October 2019, and made its closest approach to the Sun at just over on 8 December 2019. The comet passed closest to Earth on 28 December 2019. In November 2019, astronomers from Yale University said that the comet's tail was 14 times the size of Earth, and stated, "It's humbling to realize how small Earth is next to this visitor from another solar system." In mid-March 2020 the comet was observed to fragment, and in April further evidence of fragmentation was reported.
Nomenclature.
The comet is formally called "2I/Borisov" by the International Astronomical Union (IAU), with "2I" or "2I/2019 Q4" being its designation and "Borisov" being its name, but is sometimes referred to as "Comet Borisov", especially in the popular press. As the second observed interstellar interloper after 1I/ʻOumuamua, it was given the "2I" designation, where "I" stands for interstellar. The name Borisov follows the tradition of naming comets after their discoverers. Before final designation as 2I/Borisov, the object was referred to by other names:
Characteristics.
Unlike ʻOumuamua, which had an asteroidal appearance, 2I/Borisov's nucleus is surrounded by a coma, a cloud of dust and gas.
Size and shape.
Early estimates of the diameter of 2I/Borisov's nucleus ranged from . Unlike Solar System comets, 2I/Borisov shrank noticeably during its Solar System flyby, losing at least 0.4% of its mass before perihelion. The amplitude of its non-gravitational acceleration also places an upper limit of 0.4 km on the nucleus size, consistent with an earlier Hubble Space Telescope upper limit of 0.5 km. The comet did not come much closer to Earth than 300 million km, which prevented using radar to directly determine its size and shape. This could instead be done using the occultation of a star by 2I/Borisov, but an occultation would be difficult to predict, requiring a precise determination of its orbit, and the detection would necessitate a network of small telescopes.
Rotation.
A study using observations from Hubble could not find a variation in the light curve; according to this study the rotational period must be longer than 10 hours. A study with the CSA's NEOSSat found a period of 13.2 ± 0.2 days, which is unlikely to be the nuclear spin. Monte Carlo simulations based on the available orbit determinations suggest that the equatorial obliquity of 2I/Borisov could be about 59 degrees or 90 degrees; the latter is favored by the latest orbit determination.
Chemical makeup and nucleus structure.
David Jewitt and Jane Luu estimate from the size of its coma that the comet is producing 2 kg/s of dust and losing 60 kg/s of water. They extrapolate that it became active in June 2019, when it was between 4 and 5 au from the Sun. A search of image archives found precovery observations of 2I/Borisov as early as 13 December 2018, but not on 21 November 2018, indicating it became active between these dates.
2I/Borisov's composition appears uncommon yet not unseen among Solar System comets, being relatively depleted in water and diatomic carbon (C2), but enriched in carbon monoxide and amines (R-NH2). The molar ratio of carbon monoxide to water in 2I/Borisov's tail is 35–105%, resembling the unusual blue-tailed comet C/2016 R2 (PANSTARRS), in contrast to the average ratio of 4% for Solar System comets.
2I/Borisov has also produced a minor amount of neutral nickel emission, attributed to an unknown volatile compound of nickel. The nickel-to-iron abundance ratio is similar to that of Solar System comets.
Trajectory.
As seen from Earth, the comet was in the northern sky from September until mid-November. It crossed the ecliptic plane on 26 October near the star Regulus, and the celestial equator on 13 November 2019, entering the southern sky. On 8 December 2019, the comet reached perihelion (closest approach to the Sun) and was near the inner edge of the asteroid belt. In late December, it made its closest approach to Earth, 1.9 au, and had a solar elongation of about 80°. Due to its 44° orbital inclination, 2I/Borisov did not make any notable close approaches to the planets. 2I/Borisov entered the Solar System from the direction of Cassiopeia near the border with Perseus. This direction indicates that it originates from the galactic plane, rather than from the galactic halo. It will leave the Solar System in the direction of Telescopium. In interstellar space, 2I/Borisov takes roughly years to travel a light-year relative to the Sun.
2I/Borisov's trajectory is extremely hyperbolic, having an orbital eccentricity of 3.36. This is much higher than that of the 300+ known weakly hyperbolic comets, whose heliocentric eccentricities are only just over 1, and even than that of ʻOumuamua, with an eccentricity of 1.2. 2I/Borisov also has a hyperbolic excess velocity (formula_0) of , much higher than could be explained by perturbations, which produce velocities at infinite distance from the Sun of less than a few km/s. These two parameters are important indicators of 2I/Borisov's interstellar origin. For comparison, the "Voyager 1" spacecraft, which is leaving the Solar System, is traveling at . 2I/Borisov has a much larger eccentricity than ʻOumuamua due to its higher excess velocity and its significantly higher perihelion distance. At this larger distance, the Sun's gravity is less able to alter its path as it passes through the Solar System.
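The excess velocity follows from the orbital elements: for a hyperbolic orbit, formula_0 squared equals "GM"("e" − 1)/"q", where "q" is the perihelion distance. A rough sketch, in which "q" ≈ 2 au is an assumed value (the stripped figure above; the article places perihelion near the inner edge of the asteroid belt) and "e" = 3.36 is from the text:

```python
# Hyperbolic excess velocity from eccentricity and perihelion distance.
import math

GM_SUN = 1.32712e20  # gravitational parameter of the Sun, m^3/s^2
AU = 1.495978707e11  # astronomical unit, m

e = 3.36             # heliocentric eccentricity (from the article)
q = 2.0 * AU         # perihelion distance (assumed value), m

a = q / (1 - e)                 # semi-major axis, negative for a hyperbola
v_inf = math.sqrt(-GM_SUN / a)  # v_inf^2 = -GM/a = GM(e - 1)/q
print(f"v_inf ≈ {v_inf / 1000:.0f} km/s")  # ≈ 32 km/s
```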
Observation.
Discovery.
The comet was discovered on 30 August 2019 by amateur astronomer Gennadiy Borisov at his personal observatory MARGO in Nauchnyy, Crimea, using a 0.65 meter telescope he designed and built himself. The discovery has been compared to the discovery of Pluto by Clyde Tombaugh. Tombaugh was also an amateur astronomer who was building his own telescopes, although he discovered Pluto using Lowell Observatory's astrograph. At discovery, it was inbound from the Sun, from Earth, and had a solar elongation of 38°. Borisov described his discovery thus:
<templatestyles src="Template:Blockquote/styles.css" />I observed it on August 29, but it was August 30 Greenwich Time. I saw a moving object in the frame, it moved in a direction that was slightly different from that of main asteroids. I measured its coordinates and consulted the Minor Planet Center database. Turned out, it was a new object. Then I measured the near-Earth object rating, it is calculated from various parameters, and it turned out to be 100% – in other words, dangerous. In such cases I must immediately post the parameters to the world webpage for confirmation of dangerous asteroids. I posted it and wrote that the object was diffuse and that it was not an asteroid, but a comet.
2I/Borisov's interstellar origin required a couple of weeks to confirm. Early orbital solutions based on the initial observations included the possibility that the comet could be a near-Earth object 1.4 AU from the Sun in an elliptical orbit with an orbital period of less than 1 year. Later, using 151 observations over 12 days, the NASA Jet Propulsion Laboratory's Scout gave an eccentricity range of 2.9–4.5. But with an observation arc of only 12 days, there was still some doubt that it was interstellar, because the observations were at a low solar elongation, which could introduce biases in the data such as differential refraction. Using large non-gravitational forces on the highly eccentric orbit, a solution could be generated with an eccentricity of about 1, an Earth minimum orbit intersection distance (MOID) of , and a perihelion at 0.90 AU around 30 December 2019. However, based on the available observations, the orbit could only be parabolic if non-gravitational forces (thrust due to outgassing) affected its orbit more than for any previous comet. Eventually, with more observations, the orbit converged to the hyperbolic solution, indicating an interstellar origin that non-gravitational forces could not explain.
Observation.
The last observations were in July 2020, seven months after perihelion. Observation of 2I/Borisov was aided by the fact that the comet was detected while inbound towards the Solar System. ʻOumuamua had been discovered as it was leaving the system, and thus could only be observed for 80 days before it was out of range. Because of its closest approach occurring near traditional year-end holidays, and the capability to have extended observations, some astronomers have called 2I/Borisov a "Christmas comet". Observations using the Hubble Space Telescope began on 12 October, when the comet moved far enough from the Sun to be safely observed by the telescope. Hubble is less affected by the confounding effects of the coma than ground-based telescopes, which will allow it to study the rotational light curve of 2I/Borisov's nucleus. This should facilitate an estimate of its size and shape.
Comet chemistry.
A preliminary (low-resolution) visible spectrum of 2I/Borisov was similar to those of typical Oort Cloud comets. Its color indexes also resemble those of the Solar System's long-period comets. Emissions at indicated the presence of cyanide (formula CN), which is typically the first gas detected in Solar System comets, including comet Halley. This was the first detection of gas emissions from an interstellar object. A non-detection of diatomic carbon was also reported in October 2019, with the ratio of C2 to CN being less than either 0.095 or 0.3. Diatomic carbon was positively detected in November 2019, with a measured C2 to CN ratio of . This resembles a carbon-chain depleted group of comets, which are either Jupiter-family comets or the rare blue-colored carbon monoxide comets exemplified by C/2016 R2. By the end of November 2019, C2 production had increased dramatically, and the C2 to CN ratio reached 0.61, along with the appearance of bright amine (NH2) bands. Atomic oxygen has also been detected; from this, observers estimated an outgassing of water at a rate similar to Solar System comets. Initially, neither water nor OH lines were directly detected in September 2019. The first unambiguous detection of OH lines came on 1 November 2019, and OH production peaked in early December 2019.
Suspected nucleus fragmentation.
The comet did come within about 2 AU of the Sun, a distance at which many small comets have been found to disintegrate. The probability that a comet disintegrates strongly depends on the size of its nucleus; Guzik et al. estimated a probability of 10% that this would happen to 2I/Borisov. Jewitt and Luu compared 2I/Borisov to C/2019 J2 (Palomar), another comet of similar size that disintegrated in May 2019 at a distance of 1.9 AU from the Sun. In the event that the nucleus disintegrates, as is sometimes seen with small comets, Hubble can be used to study the evolution of the disintegration process.
The severe outburst of February–March 2020 led to suspicions of "ongoing nucleus fragmentation" by 12 March. Indeed, images from the Hubble Space Telescope taken on 30 March 2020 show a non-stellar core, indicating that 2I/Borisov had ejected a large fragment sunward. The ejection is estimated to have begun around 7 March, and may have occurred during one of the outbursts near that time. The ejected fragment appeared to have vanished by 6 April 2020.
A follow-up study, reported on 6 April 2020, observed only a single object and noted that the fragment component had disappeared. Later analysis of the event showed that the ejected dust and fragments had a combined mass of about 0.1% of the total mass of the nucleus, making the event a large outburst rather than fragmentation.
Exploration.
The high hyperbolic excess velocity of 2I/Borisov of makes it hard for a spacecraft to reach the comet with existing technology: according to a team of the Initiative for Interstellar Studies, a 202 kg (445 lb) spacecraft could theoretically have been sent in July 2018 to intercept 2I/Borisov using a Falcon Heavy-class launcher, or 765 kg (1687 lb) on a Space Launch System (SLS)-class booster, but only if the object had been discovered much earlier than it actually was, so as to meet the optimal launch date. Launches after the actual discovery date would rule out Falcon Heavy-class rockets, requiring Oberth maneuvers near Jupiter and near the Sun and a larger launch vehicle. Even an SLS-class launcher would only have been able to deliver a payload (such as a CubeSat) into a trajectory that would intercept 2I/Borisov in 2045 at a relative speed of . According to congressional testimony, NASA may need at least five years of preparation to launch such an intercepting mission.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "v_\\infty"
}
] |
https://en.wikipedia.org/wiki?curid=61747300
|
61751032
|
Compton generator
|
Physics apparatus to demonstrate rotation of Earth
A Compton generator or Compton tube is an apparatus for demonstrating the Earth's rotation, similar to the Foucault pendulum and to gyroscope devices. Arthur Compton (Nobel Prize in Physics, 1927) published it in 1913, during his fourth year at the College of Wooster.
Explanation of apparatus.
A Compton generator is a hollow circular glass tube shaped like a doughnut and filled with water. If the ring lies flat on the table, the water in the ring is stationary. The ring is then turned over by rotating it 180 degrees around a diameter, so that it again lies flat on the horizontal table surface. The result of the experiment is that, after the flip, the water drifts around the tube with a certain constant velocity. If there were no friction with the walls, the water would continue to circulate indefinitely.
The ring used in the initial experiment was made of one-inch brass tubing bent into a circle eighteen inches in diameter; where the windows were placed, the tube was constricted to a diameter of about 3/8 inch (9.5 mm).
Compton used small droplets of coal oil mixed in the water to measure the drift velocity under a microscope.
Analysis.
Assume the diameter of the glass tube is much smaller than the diameter of the ring, and let formula_0 be the radius of the ring, formula_1 the Earth's rotation rate, and formula_2 the latitude.
Initially the ring is horizontal and the water is stationary. The ring is then quickly rotated by 180° around its east–west diameter and stopped, so that it again lies flat on the horizontal table surface. At this moment, the velocity formula_3 of the water in the tube is given by
formula_4
Note that instead flipping the ring 180° about a vertical diameter, with the ring starting and ending in a vertical plane, produces a water velocity formula_5 given by
formula_6
This is derived by first integrating the torque due to the Coriolis force around the ring, then integrating the torque over the time it takes for the ring to flip, to obtain the change in angular momentum.
With these two equations, one can solve for both formula_7, thus finding both the Earth's rotation rate and the latitude of the apparatus, as sketched below.
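A short sketch of that inversion, using the example values given below (a 1 m ring at latitude 41°, with Earth's rotation rate 7.3 × 10^−5 rad/s); the "measured" velocities here are simply generated from the two formulas:

```python
# Recover omega and latitude from the two drift velocities.
import math

omega_true, lat_true, R = 7.3e-5, math.radians(41), 1.0
v_th = 2 * omega_true * R * math.sin(lat_true)  # horizontal flip
v_tv = 2 * omega_true * R * math.cos(lat_true)  # vertical flip

omega = math.sqrt(v_th**2 + v_tv**2) / (2 * R)  # combine the equations
lat = math.atan2(v_th, v_tv)
print(f"omega = {omega:.2e} rad/s, latitude = {math.degrees(lat):.1f} deg")
print(f"v_th ≈ {v_th * 1e3:.2f} mm/s")  # ≈ 0.1 mm/s, as noted below
```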
Experimental verification.
Compton used this measured drift velocity to determine his latitude to within 3% accuracy. He also used it to measure the rotational period of the Earth to an accuracy of 16 minutes per day (about 1%). With careful methods he could observe the effect in a ring with a radius of only 9 inches (23 cm).
Earth's rotation is 7.3 × 10^−5 radians/second. In the original report, Compton used a ring of 1 meter in radius at the College of Wooster (latitude 41 degrees). This translates to a velocity of about formula_8.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R"
},
{
"math_id": 1,
"text": "\\omega"
},
{
"math_id": 2,
"text": "\\lambda"
},
{
"math_id": 3,
"text": "v_{th}"
},
{
"math_id": 4,
"text": "v_{th} = 2 \\omega R \\sin \\lambda"
},
{
"math_id": 5,
"text": "v_{tv}"
},
{
"math_id": 6,
"text": "v_{tv} = 2 \\omega R \\cos \\lambda"
},
{
"math_id": 7,
"text": "\\omega, \\lambda"
},
{
"math_id": 8,
"text": "v_{th} \\approx 0.1 \\mathrm{mm/sec}"
}
] |
https://en.wikipedia.org/wiki?curid=61751032
|
617522
|
Issai Schur
|
German mathematician (1875-1941)
Issai Schur (10 January 1875 – 10 January 1941) was a Russian mathematician who worked in Germany for most of his life. He studied at the University of Berlin. He obtained his doctorate in 1901, became lecturer in 1903 and, after a stay at the University of Bonn, professor in 1919.
As a student of Ferdinand Georg Frobenius, he worked on group representations (the subject with which he is most closely associated), but also in combinatorics and number theory and even theoretical physics. He is perhaps best known today for his result on the existence of the Schur decomposition and for his work on group representations (Schur's lemma).
Schur published under the name of both I. Schur, and J. Schur, the latter especially in "Journal für die reine und angewandte Mathematik". This has led to some confusion.
Childhood.
Issai Schur was born into a Jewish family, the son of the businessman Moses Schur and his wife Golde Schur (née Landau). He was born in Mogilev on the Dnieper River in what was then the Russian Empire. Schur used the name "Schaia" ("Isaiah", as in the epitaph on his grave) rather than "Issai" until his mid-twenties. Schur's father may have been a wholesale merchant.
In 1888, at the age of 13, Schur went to Liepāja (Courland, now in Latvia), where his married sister and his brother lived, 640 km north-west of Mogilev. Courland was one of the three Baltic governorates of Tsarist Russia, and since the Middle Ages the Baltic Germans had been the upper social class there. The local Jewish community spoke mostly German, not Yiddish.
Schur attended the German-speaking Nicolai Gymnasium in Libau from 1888 to 1894, where he reached the top grade in his final examination and received a gold medal. Here he became fluent in German.
Education.
In October 1894, Schur entered the University of Berlin, concentrating in mathematics and physics. In 1901, he graduated summa cum laude under Frobenius and Lazarus Immanuel Fuchs with his dissertation "On a class of matrices that can be assigned to a given matrix", which contains a general theory of the representation of linear groups. According to Vogt, he began to use the name "Issai" at this time. Schur thought that his chances of success in the Russian Empire were rather poor, and because he spoke German so perfectly, he remained in Berlin. He completed his habilitation in 1903 and became a lecturer ("Privatdozent") at the University of Berlin, a position he held for the ten years from 1903 to 1913.
In 1913 he accepted an appointment as associate professor and successor of Felix Hausdorff at the University of Bonn. In the following years Frobenius tried various ways to get Schur back to Berlin. Among other things, Schur's name was mentioned in a letter dated 27 June 1913 from Frobenius to Robert Gnehm (the school board president of the ETH) as a possible successor to Carl Friedrich Geiser. Frobenius complained that they had never followed his advice before, and then said: "That is why I can't even recommend Prof. J. Schur (now in Bonn) to you. He's too good for Zurich, and should be my successor in Berlin". Hermann Weyl got the job in Zurich. The efforts of Frobenius were finally successful in 1916, when Schur succeeded Johannes Knoblauch as adjunct professor. Frobenius died a year later, on 3 August 1917. Schur and Carathéodory were both named as the frontrunners to succeed him, but in the end Constantin Carathéodory was chosen. In 1919 Schur finally received a personal professorship, and in 1921 he took over the chair of the retired Friedrich Hermann Schottky. In 1922, he was also elected to the Prussian Academy of Sciences.
During the time of Nazism.
After the Nazi takeover and the elimination of the parliamentary opposition, the Law for the Restoration of the Professional Civil Service of 7 April 1933 prescribed the dismissal of all civil servants who held unpopular political opinions or who were "Jewish" in origin; a subsequent regulation extended this to professors, and therefore also to Schur. Schur was suspended and excluded from the university system. His colleague Erhard Schmidt fought for his reinstatement, and since Schur had been a Prussian official before the First World War, he was allowed to give certain special lectures on teaching again in the winter semester of 1933/1934. Schur withdrew his application to the Science Minister for leave and passed up the offer of a visiting professorship at the University of Wisconsin–Madison for the academic year 1933–34. One element that likely played a role in the rejection of the offer was that Schur no longer felt able to cope with the demands that a new beginning in an English-speaking environment would have brought.
Already in 1932, Schur's daughter Hilde had married the doctor Chaim Abelin in Bern, and as a result Issai Schur visited his daughter there several times. In Zurich he often met George Pólya, with whom he had been on friendly terms since before the First World War.
On one such trip to Switzerland, in the summer of 1935, a letter from Ludwig Bieberbach, signed on behalf of the Rector, reached Schur, stating that he should urgently seek Bieberbach out at the University of Berlin: an important matter needed to be discussed with him. The matter was Schur's dismissal, effective 30 September 1935.
Schur remained a member of the Prussian Academy of Sciences after his dismissal as a professor, but a little later he lost this last remnant of his official position as well. Following an intervention by Bieberbach in the spring of 1938, he was forced to declare his resignation from the commissions of the Academy. His membership in the advisory board of Mathematische Zeitschrift was ended in early 1939.
Emigration.
Schur found himself lonely after the flight of many of his students and the expulsion of renowned scientists from his previous place of work. Only Dr. Helmut Grunsky remained friendly to him, as Schur reported in the late thirties to his expatriate student Max Menachem Schiffer. The Gestapo was everywhere. Since Schur had announced to his wife his intention to commit suicide in case of a summons to the Gestapo, in the summer of 1938 she intercepted his letters, among them such a summons; she sent Issai Schur for a relaxing stay in a home outside Berlin and, armed with a medical certificate, went to the Gestapo in place of her husband. There she was flatly asked why the family was still in Germany. But there were economic obstacles to the planned emigration: emigrating Germans had to pay the Reich Flight Tax before departure, amounting to a quarter of their assets. Schur's wife had inherited a mortgage on a house in Lithuania which, because of Lithuanian foreign-exchange restrictions, could not be repaid; on the other hand, Schur was forbidden to forgo the mortgage or to leave it to the German Reich. Thus the Schurs lacked cash and cash equivalents. Finally, the missing sum of money was somehow supplied, and to this day it does not seem to be clear who the donors were.
Schur was able to leave Germany in early 1939. His health, however, was already severely compromised. He traveled in the company of a nurse to his daughter in Bern, where his wife also followed a few days later. There they remained for several weeks and then emigrated to Palestine. Two years later, on his 66th birthday, on 10 January 1941, he died in Tel Aviv of a heart attack.
Work.
Schur continued the work of his teacher Frobenius with many important contributions to group theory and representation theory. In addition, he published important results, and elegant proofs of known results, in almost all branches of classical algebra and number theory. His collected works attest to this; there, his work on the theory of integral equations and infinite series can also be found.
Linear groups.
In his doctoral thesis "Über eine Klasse von Matrizen, die sich einer gegebenen Matrix zuordnen lassen" Issai Schur determined the polynomial representations of the general linear group formula_0 on the field formula_1 of complex numbers. The results and methods of this work are still relevant today. In his book, J.A. Green determined the polynomial representations of formula_2 over infinite fields formula_3 with arbitrary characteristic. It is mainly based on Schur's dissertation. Green writes, "This remarkable work (of Schur) contained many very original ideas, developed with superb algebraic skill. Schur showed that these (polynomial) representations are completely reducible, that each irreducible one is "homogeneous" of some degree formula_4, and that the equivalence types of irreducible polynomial representations of formula_5, of fixed homogeneous degree formula_6, are in one-one correspondence with the partitions formula_7 of formula_6 into not more than formula_8 parts. Moreover Schur showed that the character of an irreducible representation of type formula_9 is given by a certain symmetric function formula_10 in formula_8 variables (since described as a "Schur function")." According to Green, the methods of Schur's dissertation today are important for the theory of algebraic groups.
In 1927 Schur, in his work "On the rational representations of the general linear group", gave new proofs for the main results of his dissertation. If formula_11 is the natural formula_8-dimensional formula_1 vector space on which formula_0 operates, and if formula_6 is a natural number, then the formula_6-fold tensor product formula_12 over formula_1 is a formula_13-module, on which the symmetric group formula_14 of degree formula_6 also operates by permuting the tensor factors of each generator formula_15 of formula_12. By exploiting these formula_16-bimodule actions on formula_12, Schur found elegant proofs of his theorems. This work of Schur was once very well known.
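The Schur functions mentioned above can be computed directly from the bialternant formula, the ratio of the alternant det("x""i"^("λ""j" + "n" − "j")) to the Vandermonde determinant. A minimal sketch using sympy; the function name is ours, not a library API:

```python
# Schur polynomial s_lambda(x_1, ..., x_n) as a ratio of two determinants.
import sympy as sp

def schur_polynomial(partition, variables):
    n = len(variables)
    lam = list(partition) + [0] * (n - len(partition))  # pad to n parts
    numer = sp.Matrix(n, n, lambda i, j: variables[i] ** (lam[j] + n - 1 - j))
    denom = sp.Matrix(n, n, lambda i, j: variables[i] ** (n - 1 - j))  # Vandermonde
    return sp.cancel(numer.det() / denom.det())

x, y, z = sp.symbols('x y z')
s21 = sp.expand(schur_polynomial((2, 1), [x, y, z]))
print(s21)  # x**2*y + x**2*z + x*y**2 + 2*x*y*z + x*z**2 + y**2*z + y*z**2
```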
Professorship in Berlin.
Schur lived in Berlin as a highly respected member of the academic world, an apolitical scholar. A leading mathematician and an outstanding and very successful teacher, he held a prestigious chair at the University of Berlin for 16 years. Until 1933, his research group had an excellent reputation at the University of Berlin, in Germany and beyond. With Schur at its center, the group worked on representation theory, which his students extended in different directions (including solvable groups, combinatorics, and matrix theory). Schur made fundamental contributions to algebra and group theory which, according to Hermann Weyl, were comparable in scope and depth to those of Emmy Noether (1882–1935).
When Schur's lectures were canceled in 1933, there was an outcry among the students and professors who appreciated and liked him. Through the efforts of his colleague Erhard Schmidt, Schur was allowed to continue lecturing until the end of September 1935. He was the last Jewish professor at the university to lose his job.
Zurich lecture.
In Switzerland, Schur's colleagues Heinz Hopf and George Pólya were informed of Schur's dismissal in 1935. They tried to help as best they could. On behalf of Michel Plancherel, the head of the Mathematical Seminar, the school board president Arthur Rohn invited Schur on 12 December 1935 to give "une série de conférences sur la théorie de la représentation des groupes finis" (a series of lectures on the representation theory of finite groups). At the same time Plancherel asked that the formal invitation come from President Rohn, "comme le prof. Schur doit obtenir l'autorisation du ministère compétent de donner ces conférences" (since Prof. Schur had to obtain the authorization of the competent ministry to give these lectures). From this invitation of the Mathematical Seminar, George Pólya arranged the invitation of the Conference of the Department of Mathematics and Physics on 16 December. Meanwhile, the official invitation letter from President Rohn had already been dispatched to Schur on 14 December. Schur was promised a fee of CHF 500 for his guest lectures.
Schur did not reply until 28 January 1936, the day on which he first had the required approval of the local authorities in hand. He declared himself willing to accept the invitation and envisaged beginning the lectures on 4 February. Schur spent most of February in Switzerland. Before his return to Germany he visited his daughter in Bern for a few days, and on 27 February he returned to Berlin via Karlsruhe, where his sister lived. In a letter to Pólya from Bern, he concludes with the words: "From Switzerland I take farewell with a heavy heart".
In Berlin, meanwhile, the mathematician and Nazi Ludwig Bieberbach, in a letter dated 20 February 1936, informed the Reich Minister for Science, Art, and Education of Schur's journey, and announced that he wanted to find out the content of the Zurich lectures.
Significant students.
Schur had a total of 26 graduate students, some of whom acquired a mathematical reputation. Among them are
Legacy.
Concepts named after Schur.
Among others, the following concepts are named after Issai Schur:
<templatestyles src="Div col/styles.css"/>
Quotes.
In his commemorative speech, Alfred Brauer (a PhD student of Schur) spoke about Issai Schur as follows: "As a teacher, Schur was excellent. His lectures were very clear, but not always easy, and required cooperation. During the winter semester of 1930, the number of students who wanted to attend Schur's number theory lecture was such that the second-largest university lecture hall, with about 500 seats, was too small. His most human characteristics were probably his great modesty, his helpfulness and his human interest in his students."
Heinz Hopf, who before his appointment to the ETH in Zurich had been a Privatdozent in Berlin, held Issai Schur in high esteem as a mathematician and as a man, as is clear from oral statements and also from letters. The appreciation was entirely mutual: in a letter of 1930 to George Pólya on the occasion of the re-appointment of Hermann Weyl, Schur says of Hopf: "Hopf is a very excellent teacher, a mathematician of strong temperament and strong effect, a master of his discipline, with excellent training in other areas. – If I have to characterize him as a man, it may suffice if I say that I sincerely look forward to each time I meet with him".
Schur was, however, known for keeping a correct distance in personal affairs. The testimony of Hopf is in accordance with statements by Schur's former students in Berlin, Walter Ledermann and Bernhard Neumann.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "GL(n, \\mathbb{C})"
},
{
"math_id": 1,
"text": "\\mathbb{C}"
},
{
"math_id": 2,
"text": "GL (n, \\mathbb{K})"
},
{
"math_id": 3,
"text": "\\mathbb{K}"
},
{
"math_id": 4,
"text": "r \\geq 0"
},
{
"math_id": 5,
"text": "GL_n(\\mathbb{C})"
},
{
"math_id": 6,
"text": "r"
},
{
"math_id": 7,
"text": "\\lambda = (\\lambda_1, \\ldots, \\lambda_n)"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "\\lambda"
},
{
"math_id": 10,
"text": "{\\underline{S}}_{\\lambda}"
},
{
"math_id": 11,
"text": "V"
},
{
"math_id": 12,
"text": "V^{\\otimes r}"
},
{
"math_id": 13,
"text": "GL(n, \\mathbb {C})"
},
{
"math_id": 14,
"text": "S_r"
},
{
"math_id": 15,
"text": "v_1 \\otimes \\ldots \\otimes v_r"
},
{
"math_id": 16,
"text": "S_r - GL(n, \\mathbb{C})"
}
] |
https://en.wikipedia.org/wiki?curid=617522
|
617540
|
Mitsubishi Galant
|
The Mitsubishi Galant is an automobile which was produced by Japanese manufacturer Mitsubishi from 1969 until 2012. The model name was derived from the French word "galant", meaning "chivalrous". There have been nine distinct generations with total cumulative sales exceeding five million units. It began as a compact sedan, but over the course of its life evolved into a mid-size car. Initial production was based in Japan, with manufacturing later moved to other countries.
First generation (A50; 1969).
The first generation of the car, initially known as the Colt Galant, was released in December 1969 at a new Mitsubishi Japanese dealership called "Galant Shop". The design was dubbed "Dynawedge" by Mitsubishi, referring to the influence of aerodynamics on the silhouette. Three models were available, powered by the new 'Saturn' engine in 1.3- ("AI" model) or 1.5-liter ("AII" and "AIII") configurations. 1.4- and 1.6-liter versions (14L and 16L) replaced these in September 1971. A larger 1.7-liter arrived for the top GS model in January 1973. Initially only available as a four-door sedan, five-door estate and two-door hardtop (A53) variants were added in 1970. The hardtop was Mitsubishi's first production passenger car with full side windows and no side pillars. In March 1973, with only two months of production left, the cleaner "MCA-II" version of the 1.6 arrived; it was three horsepower down on the regular version.
The Galant was offered as a competitor to the Toyota Corona, Nissan Bluebird, Honda Accord, and Mazda Capella. It became Mitsubishi's first car to be sold in the United States in 1971 when the Chrysler Corporation, the company's new partner and stakeholder, began importing the car as the Dodge Colt. It was also produced by Chrysler Australia and sold alongside the larger Chrysler Valiant models as the Chrysler Valiant Galant. In South Africa, the A53 Colt Galant arrived in late 1972 as the Dodge Colt 1600 GS (AY series). The car had already been rallied there, in 1300 and 1600 forms, and only the Hardtop GS version was sold to capitalize on the car's sporty image. Claimed gross power peaked at 6,700 rpm, and the car was fitted with Rostyle wheels as also used on locally assembled Hillman Vogues.
From 1970, a fastback coupé model was developed, the Galant GTO. Fashioned after contemporary American muscle cars, the hardtop GTO was offered with a choice of two "Saturn" engines and the 2-litre "Astron 80", and remained on sale until 1975. The nameplate was sufficiently highly regarded in Japan for it to be resurrected for the 1990 Mitsubishi GTO coupé.
A third, more compact coupé, the Galant FTO, was introduced in 1971 on a chassis shortened by 12 cm. Powered by the 4G41 1.4 L engine, it too would leave a legacy for the company to return to in the 1990s with the Mitsubishi FTO.
New Zealand.
Although the earlier Colt had been imported in limited numbers, this generation, in 1.6-litre coupé form only, was the first model to establish the Mitsubishi brand in New Zealand. From 1971, the newly appointed distributor Todd Motors, which also imported and assembled Chrysler and Hillman vehicles, sold a small number of Japanese-assembled cars to supplement its mainstream Hillman Avenger and Hunter models.
The coupé was assembled in New Zealand from 1972, firstly at Todd's Petone factory, on the Avenger/Hunter line and, from 1974, at the brand-new purpose-built factory in Porirua (closed in 1998).
Second generation (A112, A114, A115; 1973).
The second generation Mitsubishi Colt Galant A11* series was built from 1973 and received a replacement in 1976. Introduced on 24 May 1973 (on sale 1 June) in the Japanese domestic market, the second generation Galant was more widely exported as Mitsubishi's ambitions grew. It was again sold by Chrysler in many different guises: as the Dodge Colt in the United States, as the Plymouth Colt and Plymouth Cricket in Canada (from 1974), as the Chrysler Valiant Galant and the Chrysler Galant in Australia, and in Europe as the Colt Galant. Transmissions were now all floor-mounted and included a four-speed manual and a five-speed unit for sportier models. A three-speed automatic transmission was also available. The smaller 1600 engine was also available in the cleaner "MCA-II" version right from 1973, a model which met Japan's 1975 emissions standards. This version was marginally less powerful than the engine seen in the previous model.
This new Galant model was more curvaceous, influenced by contemporary "coke bottle styling", and featured a range of larger 'Astron' engines developing up to 125 PS in 2000 cc form to complement the 'Saturn' units. During the second generation, the first Astron 80 engines were introduced in some markets, using Mitsubishi's newly developed "Silent Shaft" balance shaft technology for reduced vibration and noise. Body styles remained the same as the first generation Colt Galant's: sedan, wagon, and pillar-less two-door hardtop coupé, with the addition of a fixed-post coupé for some markets. New models were added to the lineup, including GL-II, SL-5, GT and GS-II. The Estate (A112V, sold as a commercial vehicle in Japan) was only available with the 100 PS 1600 engine, in Custom, GL, or SL-5 (with a five-speed manual transmission) trim. It had vestigial wood panelling, featuring a narrow strip on the tailgate only.
In New Zealand the hardtop, now with an 1855 cc engine, was again assembled by Todd Motors at Porirua. The sedan was not offered, as Todd was planning to assemble the larger Galant Sigma sedan and wagon range from late 1977 and was still importing the British Avenger and Hunter models.
In South Africa, the Dodge Colt 1600 GS arrived in late 1975 (YB series) to replace the earlier AY. Aside from the new body, with wider wheels and improved handling, it also benefitted from a new five-speed gearbox. In August 1976, the name was changed to Chrysler Colt, and the new GS II received a 2.0-liter engine. The 1600 also became available in less sporty GL trim, and a set of four-door models complemented the earlier hardtop. This new range signalled a move away from British and Australian sourced Chrysler products, with the four-door replacing the locally built Chrysler Vogue. Only three months later, Chrysler South Africa ceased operations. Mitsubishi production was continued by the new Sigma Motor Corporation.
Third generation (A120/A130; 1976).
The third generation of the car was introduced in 1976, and was known as Galant Σ (Sigma). In many export markets the car was simply known as the Galant. At that time, the Dodge Colt in America was actually a Mitsubishi Lancer, not the Galant anymore, but nonetheless the Galant Wagon variant was sold with the Dodge Colt label in the US and Canada. In Australia, where the car was made locally at Chrysler's Clovelly Park plant, it was marketed as the Chrysler Sigma and, after the 1980 buyout of Chrysler Australia by Mitsubishi, as the Mitsubishi Sigma. Australian content was quite high and included a locally-made 2.6-litre 'Astron' four (introduced 1980) which, in December 1985, replaced the 1.6, 1.85 and two-litre engines used in other export markets.
The wagon version was introduced in 1977, a little while after the sedans. A new two-door coupé was introduced in 1976 to replace the Galant GTO. It was known in Japan as the Galant Λ (Lambda). The coupé was sold in the United States between 1978 and 1980 as the Dodge Challenger and Plymouth Sapporo. In Australia the Lambda was marketed initially as the Chrysler Sigma Scorpion and later as the Mitsubishi Scorpion.
Mitsubishi introduced the "MCA-Jet" engine for Japan and other emissions-controlled markets with its latest Galant. This incorporated the "Jet Valve", a secondary intake valve which improved emissions without necessitating the need for a completely redesigned cylinder head. In 1978, Mitsubishi in Japan established a dedicated dealership sales channel called () to sell the Galant and other selected vehicles. After late 1977 the 1850 variant was discontinued, as Mitsubishi focussed their efforts on making the 1600 and the 2000 engines pass the new, stricter emissions standards.
In Japan, the Galant range received a new variant in March 1978, known as the Galant Sigma Eterna. This model had single rectangular headlights and different taillights, and was also sold as a facelift model in selected markets in Europe, New Zealand and South America. Seven months later the twin round headlight front design was replaced with one featuring twin square headlights, along with new taillights. Models with engines which passed the new 1978 standards changed from the A120 to the A130 range. Mitsubishi had limited resources, and the large choice of engines for the Galant lineup was reduced to one 1.6 and one 2.0 at the beginning of the 1979 model year.
Todd Motors initially assembled 1.6 GL, 1.85 GLX and two-litre GLS sedan models for New Zealand, with the GLS getting a five-speed manual transmission as standard with a three-speed automatic optional. These were the first NZ-assembled Mitsubishis to have rear screen demisters as standard. Early cars had conventional rod-suspended headliners, developed locally to meet local content rules, but these were notorious for collapsing onto the passengers' heads and were quickly replaced by newly developed, glued-in moulded foam liners. The range was later revised to add the wagon and drop the 1.85-litre engine.
The third generation Galant was the recipient of the Car of the Year award in South Africa in 1977. In South Africa, where it was built by the Sigma Motor Corporation, it was sold as the Colt Galant. Originally sold with the 1.6- and the 2.0-liter engines, the automatic-only, locally developed 2.6-liter engine arrived in the middle of 1979; the 2.6 arrived elsewhere only later. Mid-1979 was also when the facelifted (square headlights) model appeared in South Africa, with new "low-inertia" engines. Power output for the 2.0-liter remained unchanged, but period testers felt it more powerful than the previous version.
Fourth generation (A160; 1980).
Introduced in 1980, Mitsubishi's fourth iteration of the Galant Σ (Sigma)/Eterna Σ (Sigma) debuted many new innovations for Mitsubishi. The car was sold as the Mitsubishi Galant in most export markets, although in both Australia and New Zealand it was known as the Mitsubishi Sigma. The fourth generation sedan and coupé were both slightly larger than the third generation cars. Additional emphasis was given to ergonomics, aerodynamics, and safety. Shoulder room, leg room, and head space were all increased, and the trunk was slightly enlarged for more luggage capacity. The interior was made quieter with additional carpeting and other acoustic dampening materials and a double-thickness front bulkhead. The wagon version was also changed, although from the firewall back the vehicle remained the same as the previous version.
Their new 'Sirius' engine was offered in turbocharged form for performance enthusiasts in some markets, with export markets unencumbered by strict emissions rules receiving a more powerful version than Japanese market cars. A new electronic fuel injection system was introduced on some versions of the gasoline Astron engine. For economy, an 'Astron' 4D55, the first turbo-diesel engine in a Japanese passenger car, was also offered. Unusually, the fourth Galant was never offered with a naturally aspirated diesel engine. The 2.3 Turbo D produced enough power to be considered "sporty" at the time, and was first shown at the 1980 Paris Motor Show. The diesel had some initial reliability issues; a redesigned cylinder head which appeared in 1982 took care of the problems. This model proved very popular in some markets, such as the Benelux countries, where it helped establish Mitsubishi in general and the Galant in particular.
For the second generation in a row Mitsubishi could claim to be building an award-winning car, as this was chosen as Car of the Year in New Zealand in 1981. The cars sold there were again locally assembled with 1.6 and two-litre engines, and a choice of transmissions and trim. As elsewhere, the wagon versions carried over the old body style with a new nose and interior. Production of the wagon version continued in Australia until 1987 when it was replaced by the new Magna.
From 1982 to 1983, some of the Australian Sigmas, which had the carried-over 2.0 or 2.6-litre locally made inline-four engine, were exported to the United Kingdom with the Lonsdale badge, in an effort to circumvent the voluntary import quota restrictions adopted by Japanese manufacturers. However, the car was unsuccessful, and for 1983 and 1984 it carried Mitsubishi Sigma badges in the UK before imports were finally discontinued.
The two door coupé was also redesigned for 1980 and was sold through 1983. While continuing with the Galant Λ/Eterna Λ label for the domestic Japanese market, the fourth generation was known as the Mitsubishi Scorpion in Australia, and the Dodge Challenger and Plymouth Sapporo in the United States.
Fifth generation (E11-E19; 1983).
A fifth-generation model shifted to front-wheel drive in August 1983 as a four-door sedan and four-door hardtop (with different styling). The design continued the direction started with the Tredia, albeit with more harmonious proportions. Drag resistance was down to an average formula_0 of 0.36. All new chassis numbers, from E11A to E19A, marked the change. External dimensions all grew, but only marginally, while the wheelbase was longer. Thanks to the more compact drivetrain, however, passenger space increased noticeably, the boot grew, and the liftover edge was significantly lowered. Weight distribution was distinctly towards the front, with 64.47% of the car's weight over the front wheels for the turbodiesel. In the Japanese market there was also a parallel "Eterna" lineup with very minor differences in appearance and equipment. This generation formed the basis of the widened (by 4 inches/100 mm) Mitsubishi Magna produced in Australia from 1985, the same year in which Mitsubishi won "Bild am Sonntag's" "Das Goldene Lenkrad" (Golden Steering Wheel) award in Germany for the Galant and Wheels magazine's "Car of the Year" for the Magna. Mitsubishi Motors codenamed these cars "YF" and "YFW" respectively, with "W" standing for "wide".
The station wagon version was effectively replaced by the Chariot/Space Wagon in most markets. The Galant was the third Japanese car to adopt four-wheel anti-lock brakes, using Bosch's ABS system. Japanese-market vehicles fitted with the four-speed transmission were equipped with what Mitsubishi called Super Shift, essentially a transfer case without an additional driveshaft to the rear wheels. Super Shift was no longer offered after the introduction of the five-speed manual transmission.
Exports began about a year after introduction. European and rest-of-the-world trim levels were often engine-specific, depending on the market: At the time of introduction, GL and GLX models were offered with either 1.6-litre or 1.8-litre engines, GLS models had 2.0-litre engines (badged 2000 GLS; in some markets there was also a 2000 GLX) and Diesel versions had a 1.8-litre Sirius turbo-diesel engine. The diesel model received GL or GLX trim, although in some markets it was simply the 1800 TD. A fuel injected 2000 Turbo was also available in some export markets. The TD and the Turbo both received standard power steering.
Equipment levels in Japan had more interesting names, ranging from the LS and LG via Super Saloon, Super Touring, Super Exceed, and GSR-X up to the most luxurious Royal. The top models for Japan (the "Super Exceed" sedan or "VR" hardtop) were powered by the turbocharged and intercooled "Sirius Dash 3/2 valve" engine (rated in JIS gross terms; later only 170 PS were claimed). This engine switched between using two and three valves per cylinder to combine high top-end power with low-end drivability, as well as being economical in operation.
Beginning in October 1986, the all-new 2-liter Cyclone V6 engine was installed in the Galant/Eterna, sedans as well as hardtops. Some of the V6 variants received electrically retractable door mirrors and electronically controlled power steering.
Sales in the United States began with the 1985 model year; this was the first time that the Galant series was sold stateside since the station wagon was marketed as a Dodge Colt a few years earlier. New for 1987 (the last model year for this generation) were redesigned seats and the availability of a five-speed manual transmission as well as leather upholstery.
This generation was largely replaced in 1988 by the sixth generation Galant (see below). The widened Australian-made version, however, remained in production until 1991 when it was replaced by a new generation Magna, whereas the Japanese hardtop range was produced until it was replaced by the new Sigma/Diamante version in 1990. In addition, the taxi-spec sedan remained in production for Japanese commercial use until December 1999, when Mitsubishi abandoned that market. The taxi was only available with an LPG-powered 1.8-litre engine, originally the 4G37. From October 1986 the Taxi (and driving school model) was fitted with Mitsubishi's new "Cyclone" combustion chamber design.
At the end of October 1990, the Galant Σ Taxi received a light update and a reshuffle of the models. There was a base L model and a better equipped LG with body-colored bumpers. The modification included three-point belts in the rear seat, a high-mounted brake light, adjusted gearing, a flattened rear seat squab, larger radiator, and a larger washer fluid tank, amongst other detail improvements. A five-speed manual, or three- or four-speed automatics were on offer. Target production was around 1,200 units per year. For its last three years of production, this model received an LPG-version of the 1834 cc "4G93" engine.
The fifth-generation Galant was introduced to the New Zealand market in mid-1984, as the Mitsubishi Sigma. Assembled by Mitsubishi's New Zealand distributors, Todd Motors, the Sigma was available with the choice of 1.8- and 2.0-litre engines, the 2.0 having the option of automatic transmission and with a turbocharger available on certain models.
Several trim levels were offered: GL, GLX, GSR, Super Saloon and SE. The top SE versions notably featured 'Sigma' branded alloy wheels, digital instrumentation, climate controlled air conditioning, cruise control, speed-dependent intermittent wipers and a salmon-brown coloured interior treatment, the treatment changing to a deep red colour as a running change in 1985 on this model.
Further running changes concerned the rear styling. For the initial 1984 production run the rear numberplate was located above the bumper; for 1985 and 1986, however, the plate was relocated to below the bumper, in the manner of the Japanese domestic market Galant models. New taillights were fitted for 1987, with the rear numberplate reverting to its original place above the bumper.
1987 was a key year for Mitsubishi in New Zealand, when it bought out Todd Motors' automotive operations.
Although the sixth generation Galant was introduced for 1988, the older fifth generation bodyshell stayed in production alongside it. Mitsubishi Motors New Zealand deliberately retained the fifth generation sedan bodystyle for a new flagship model unique to New Zealand: the 3.0-liter V6 engined Mitsubishi V3000. The V3000 was developed specifically to give Mitsubishi New Zealand a six-cylinder family car, suitable for towing boats and caravans, to compete with the imported Ford Falcon (EA) and Holden Commodore (VN) models.
While the rear styling of the previous Sigma model was retained, the frontal treatment was changed to feature a more formal, upright chrome grille (the bonnet and grille were from the top-of-the-line Sigma SE), and the suspension was uprated. The V3000 was available in basic Executive, mid-range Super Saloon, and top-of-the-range SEi trim levels, the latter with luxury trim and a digital dashboard. Later a sports version, the Elante, was introduced, based on the Executive. The V6 engine combined with relatively low weight and gearing ensured excellent performance; New Zealand's traffic patrol selected them as patrol cars to replace the turbocharged Sigma GSR. These police cars had the Elante suspension pack, which was an option on other models. For 1990, the V3000 was further updated and now featured the front styling of the Eterna hardtop. New Zealand was the only market where this restyling was applied to the fifth generation four-door sedan bodyshell. Assembly of this model continued until 1991, when it was replaced by the second-generation Australian Mitsubishi Magna TR V6 range, which continued to be known as V3000 for the New Zealand market.
Hardtop sedan.
The hardtop sedan bodywork was used in export markets as well, where it received a six-window design unlike its Japanese market counterparts. It was marketed under different names: "Galant Σ" or "Eterna Σ" (Sigma) in Japan, "Sapporo" in Europe, and in the US "Galant Σ" (1988 model year) followed by plain "Sigma" (1989 to 1990 model years). The "Galant Σ" was released for the 1988 model year, but the "Sigma" version with updated alloy wheels began US sales in August 1988 for the 1989 model year and continued until 1990. In export markets these cars were available with a 3.0-liter V6 (North America, only with automatic transmission) or a 2.4-liter four-cylinder engine (Europe). In the domestic Japanese market the hardtops received 2.0-litre fours, or the smaller 2.0-litre "6G71" V6 engine from 1986, shared with the Mitsubishi Debonair limousine. The top-of-the-line VR models were fitted with an intercooled turbocharged "4G63T" "Sirius Dash 3x2" engine, which automatically switched between two and three valves per cylinder depending upon throttle response, thereby allowing both economy and performance, along with self-levelling suspension, climate-controlled air-conditioning, a blue velour interior, steering wheel-controlled audio functions, and 15-inch alloy wheels. From 1985, the powerplant was renamed "Cyclone Dash 3x2".
The hardtop range continued to be available until 1990 as Mitsubishi's most luxurious offering in most export markets, until the Sigma/Diamante replaced it. It also continued on sale in Japan, but only as the Eterna Sigma after a facelift in May 1989. In Japan the hardtop was available with a 1.8-liter four at the bottom of the range and with the large 3.0-liter V6 in the top "Duke" version after this makeover. The European market Sapporo took its bow at the 1987 Frankfurt Motor Show, with the large 2.4-liter "4G64" "Sirius" four-cylinder producing its peak power at 5,000 rpm (slightly less in catalyzed form).
Sixth generation (E31, E32, E33, E34, E35, E38, E39; 1987).
In October 1987 the same platform was used for a sixth-generation model which adopted taller, rounded styling. This generation won the Car of the Year Japan award in 1987 and the GS model became "Motor Trend"'s Import Car of the Year in 1989. This Galant began American sales in 1989, side by side with the previous generation Sigma.
Mitsubishi developed Dynamic ECS adaptive air suspension, the world's first production semi-active electronically controlled suspension system in passenger cars; the system was first incorporated in the 1987 Galant model.
The Galant range underwent a minor facelift in 1991, with new grilles and other modifications. Also in 1991, Mitsubishi Motors Company completed a new assembly facility at Barcelona, Venezuela, with the Galant being one of the first models produced. It was sold there until 1994 under the ZX, MF, MS and MX names, which identified the various levels of equipment and transmission.
The Sigma designation disappeared with the 1990 model. A new hardtop liftback model was added in 1988, called the Eterna; in Japan, the Eterna was only sold at a specific retail chain called "Car Plaza". This generation Galant was also sold in Canada as the Dodge 2000 GTX and Eagle 2000 GTX. The five-door liftback version was never offered in North America, where buyers prefer traditional sedans. In most of the world, the sixth generation Galant was replaced towards the end of 1992, but North American sales only ended in 1994, when the next generation Galant arrived there.
A limited edition based on the GTi-16v model was introduced in 1989, modified by German tuning company AMG (owned by Mercedes-Benz since 1999), with a mildly uprated engine, unique body kit, alloy wheels, and full leather interior. The AMG appearance treatment had also been applied to the Debonair for 1986; the Galant and the Debonair were the only Japanese cars that received the AMG treatment.
The sixth generation was also the first to see the introduction of the "VR-4" variant, which was the basis for Mitsubishi's participation in the 1988–1992 World Rally Championships. The Galant's "4G63" two-litre DOHC turbocharged engine and 4WD transmission were later adopted for the Mitsubishi Lancer Evolution with little modification and would remain in production for fifteen years. Starting in 1989, the Mitsubishi Galant V-series were produced for the Japanese market as a sporty alternative to the regular Galant range. The lineup consisted of Viento and VX-S/VZ-S models featuring the higher output 1.8 and 2.0 Turbo DOHC engines, with both automatic and manual transmissions available. The V-series featured the VR-4 interior, exterior design and updated bumpers (without side skirts), clear indicator lens covers, optional two-tone body paint, as well as standard air conditioning, full electrics, a rear windscreen wiper, spoiler and alloy wheels. Fans sometimes call this car the "Evo Zero", but this was never more than a nickname as the Evolution series is Lancer-based.
National Highway Traffic Safety Administration (NHTSA) crash test ratings for 1991–1992 Galant:
Seventh generation (E52, E53, E54, E55, E57, E64, E72, E74, E77, E84, E88; 1992).
A new Galant debuted in September 1992 at the Tokyo Motor Show (model year 1994 in the US), originally only available as a four-door sedan (which was the only model to be sold in the US). A five-door liftback derivative made its world premiere at the February 1993 Dutch Motor Show. A Japan-only hardtop derivative called the Emeraude (French for "emerald") was also launched in 1992. The width dimensions of the model sold in Japan no longer complied with Japanese government dimension regulations.
In October 1993, Mitsubishi introduced a trim level for this model called "VX-R", which offered a 2.0 L MIVEC version of the "6A12", a high-revving naturally aspirated V6 engine with more aggressive tuning. This engine is also found in the GP trim levels of Mitsubishi's midsize sports car, the FTO, introduced in 1994. Output was placed at 200 hp (149 kW) and 147 lb⋅ft (199 N⋅m) of torque.
This generation marked a substantial change in suspension design. The front switched from struts to a multi-link structure featuring two lower arms and one upper arm. The rear switched from a beam axle to a newly designed multi-link system. This was the world's first four-wheel multi-link suspension in a front-engine, front-wheel-drive (FF) car. Both designs would carry over to the second generation Mitsubishi Eclipse and its sister vehicles.
VR-4.
For 1992, the emergence of the homologated Lancer Evolution meant that the top-spec Galant VR-4 was no longer constrained by sporting regulations, and the new generation thus became a less overtly competition-oriented vehicle. The existing, proven 4WD transmission was carried over, in keeping with Mitsubishi's reputation for performance-enhancing technology, but the old inline-four was superseded by a smoother twin-turbo 2.0-litre V6, mated either to a conventional five-speed manual or to a four-speed "INVECS" automatic complete with "fuzzy logic", which allowed the transmission to adapt to the driver's style and road conditions "on the fly". The car was capable of accelerating from 0–60 mph (97 km/h) in about 6.5 seconds and, if derestricted, of reaching a considerably higher top speed.
Variants of the VR-4 using the same engine and drivetrain were sold in Japan as the Eterna XX-4 (1992) and Galant Sports GT liftback.
"Engine"
"Configuration" – DOHC 24v V type 6-cylinder 6A12TT
"Bore/stroke, capacity" – 78.4 x 69.0 mm, 1998 cc
"Compression ratio" – 8.5:1
"Fuelling" – ECI-MULTI, premium unleaded fuel
"Peak power" – at 6000 rpm
"Peak torque" – at 3500 rpm
"Suspension" – Multi-link (front & rear)
"Wheels/tyres" – 205/60 R15 91Vβ̞
Export.
Production in the United States began on 24 May 1993 when the first seventh generation Galant rolled off the assembly line in Normal, Illinois. In 1995, a slightly upgraded GS version was available with a twin cam engine, speed-sensitive steering, rear stabilizer bar, and an available manual transmission.
In Europe, a naturally aspirated 2.5 L 24-valve DOHC engine was also available, paired with four-wheel drive, a 5-speed manual transmission and four-wheel steering. Body styles were the four-door sedan and the five-door liftback. A limited-slip rear differential was not available. Options included a sunroof, air conditioning, cruise control, power windows, central locking, electrically heated seats and electrically adjustable side mirrors.
In the Philippines, the seventh generation Galant entered production in late October 1993. It was offered in two grades: VR and the top-spec Super Saloon. Two engine choices were offered: a 2.0 L V6 mated to a 5-speed manual for the VR, or a 2.0 L inline-four mated to either a 5-speed manual or a 4-speed automatic transmission for the Super Saloon grade.
National Highway Traffic Safety Administration (NHTSA) crash test ratings for 1997 and 1998 Galant:
Eighth generation (EA1, EA2, EA3, EA4, EA5, EA7, EA8, EC1, EC4, EC5, EC7; 1996).
The eighth-generation 1996 model continued the 1992 model's design themes, but a five-door station wagon (known in Japan as the Legnum, a name derived from the Latin word "regnum", meaning regal power or rank) was added while the five-door liftback was dropped. This model won the 1996–97 Car of the Year Japan award for the second time. Despite being superseded in the US and Europe from 2003, it remained on sale in other countries until 2006. In Japan, the Legnum was sold only at a specific retail chain called "Car Plaza", while the Galant remained exclusive to Galant Shop locations. The Japanese market model was the first mass-produced car to use a gasoline direct injection engine, when a GDI version of the "4G93" inline-four engine was introduced.
This model was also produced in Barcelona, Venezuela, at the only Mitsubishi plant in Latin America. At the beginning, the Galant was marketed in that country under the "MX" and "MF" names in 1997 and 1998 (featuring a manual or "INVECS-II" automatic transmission respectively), then kept the Galant name until the end of its production in 2006. Although the equipment options were limited, the VR-4 appearance package was offered in that market.
The American market Galant, introduced on 7 July 1998, graduated to the US Environmental Protection Agency's mid-size class. The front suspension design switched from multi-link to struts, though the rear was upgraded with a stabilizer bar standard on all but the base DE model. The ES, LS and GTZ models were offered with a V6 engine, the "6G72" 3.0 L, mated to a standard four-speed conventional automatic. Another difference between Asian and European models was the lack of ABS, which was only installed on 3.0 L models. It received a facelift for the 2002 model year.
In August 1998, Mitsubishi introduced the Aspire as the successor of the Eterna. It was externally identical to the Galant, which received a facelift at the same time.
Mitsubishi opted to further develop the technology in its range-topping VR-4, which was now powered by an enlarged 2.5 L V6 twin-turbo. The car features either a conventional five-speed manual or INVECS-II transmission. Some variants (all of the pre-facelift model and Type-S for the facelift model) were also fitted with the same advanced active yaw control (AYC) as the Lancer Evolution, to give it greater agility than would be expected of such a large vehicle. Finally, as with the rest of the range, the VR-4 could now be had either as a Galant sedan or as a Legnum station wagon.
The MIVEC version of the "6A12" was dropped from the Japanese market model, but some Asian markets offered this engine in trim levels called "VX-R" or "VR-M". The larger 2.5 L "6A13" was more common in the rest of the world.
National Highway Traffic Safety Administration crash test ratings for 2001 Galant without side airbags:
National Highway Traffic Safety Administration crash test ratings for 1999–2002 Galant with side airbags:
Ninth generation (2004).
North America.
The United States has had the sedan-only ninth-generation PS platform model since October 15, 2003. It was announced at the 2003 New York International Auto Show in April for the 2004 model year, following the exhibition of the SSS concept sedan at the North American International Auto Show three years before. The ninth-generation United States-sourced model is available for sale only in a few regional markets, namely the United States, Puerto Rico, Russia, Ukraine and Arabia. Russia began sourcing its Galants from the United States from 2006. The Arabian markets began sourcing their Galants from the United States from the 2007 model year. The Galant had also been available in Canada and Mexico until the 2010 and 2009 model years, respectively.
A size increase resulted in slightly more interior space and a weight gain of several hundred pounds. The four-cylinder engine, while still 2.4 liters in displacement, was upgraded from Mitsubishi's 4G64 design to the newer 4G69 design, resulting in an increase in horsepower. Likewise, the V6 grew from 3.0 to 3.8 liters, with corresponding gains in power and torque. All North American Galants gained all-wheel disc brakes but lost their rear stabilizer bar.
A Ralliart version joined for 2007, finally upgrading the V6 to class-competitive output while also adding a firmer suspension, front strut tower bar, rear stabilizer bar, and eighteen-inch alloy wheels. Furthermore, the Ralliart trim was the first Galant to receive Mitsubishi's updated infotainment system (MMCS) featuring a 7-inch touchscreen display with GPS navigation. The Ralliart was further distinguished from other Galant trims by a unique front aero bumper, sport mesh grille, projector-style ellipsoid headlamps, two-tone bumpers and color-keyed side air dams. For 2008, the trimming of models left the Ralliart as the only V6 model, and the Galant skipped the 2008 model year in Canada, only to return in 2009 with the facelifted model.
Four-cylinder Galant models sold in California, Maine, Massachusetts, New York and Vermont are certified as Partial Zero-Emissions Vehicles (PZEV).
This iteration of the Mitsubishi Galant only went on sale in the Middle East region for the 2007 model year, with a 2.4-liter engine and a 3.8-liter engine, imported from the United States.
Osamu Masuko, the CEO of Mitsubishi Motors, indicated that the ninth generation of the Galant would be the last to be manufactured in North America, to be replaced on the MMNA production line in Illinois by smaller vehicles which are more likely to appeal to export markets.
The final Mitsubishi Galant rolled off the assembly line in the United States on August 30, 2012. The Mitsubishi Concept-ZT that was unveiled in 2007 was initially expected to become the tenth generation Galant but this never materialized.
Facelifts.
2006
The Galant received some cosmetic changes, such as an AC adapter, a standard MP3 jack and upgrades to the interior.
2007
In 2007, the Galant was restyled: the interior and exterior were refreshed and an updated infotainment system was introduced.
2009
In 2009, the Galant was restyled for a third time during this generation. The 2009 Galant launched in February 2008.
A four-cylinder Sport Edition was added for the 2009 model year, with Galant Sport models including new factory value packages as standard. A Sportronic automatic transmission is standard in all models, with a four-speed for four-cylinder engines and a five-speed for V6 engines.
East Asia.
Mitsubishi also assembles and markets a Taiwan-made version of the ninth-generation Galant, known there as the Mitsubishi Grunder. Taiwan was one of the first regions outside the Americas to market the ninth generation vehicle, when the Galant Grunder was launched in December 2004 with a unique front end. It has a version of the 2.4-liter engine and a four-speed automatic (INVECS-II), and comes in either SEi format or as the better equipped EXi model.
This facelifted model was also sold in the Philippines from 2006 to 2009 as the Galant 240M, using Mitsubishi's 2.4 L 4G69 MIVEC engine with Mitsubishi's 4-speed INVECS-II automatic transmission with Sportronic. It came with leather seats, remote keyless entry, an eight-way power adjustable seat with variable lumbar support (driver side only), an automatic climate control system, and an MP3-ready audio system. It only came in two color options: "Merlin Black" or "Excalibur Silver". In 2009, Mitsubishi Philippines replaced this with the all-new "SE" trim, which featured a redesigned grille, a new 12-speaker audio system with Dolby 5.1 Surround and DTS support, a GPS-based navigation system, power adjustable mirrors, and a reverse camera, among other features.
The Taiwanese-made Galant has also been sold by Soueast Motor in the People's Republic of China as the Galant since 2006. Models in China receive a 2.0-liter or 2.4-liter petrol engine, each paired with an automatic transmission.
Australia.
From 2005 to 2008, a localized version called the Mitsubishi 380 was manufactured in Australia for the Australian and New Zealand markets. No four-cylinder engines were offered, the 380 being available only with the 3.8-liter 6G75 V6. This replaced the long-lived Magna line, and it was the last Mitsubishi car produced in Australia.
20 limited edition TMR models (Team Mitsubishi Ralliart) were made towards the end of the car's production at the Tonsley Park factory in Adelaide, running a supercharged version of the 3.8-liter 6G75 with 230 kW and 442 N⋅m.
Nameplate use with Lancer.
With the exception of the Lancer Evolution X, the ninth generation Lancer was marketed as the Galant Fortis (Latin for strong, brave and resolute) in the Japanese domestic market. It comes in three trim levels: Exceed, Super Exceed, and Ralliart.
Between August 2015 and August 2017, GHK Motors (Mitsubishi Brunei) offered a version of the Lancer Sportback hatchback model under the name "Galant" in Brunei. Production of this model ceased at the end of August 2017 due to poor sales; instead of increasing Mitsubishi's popularity, it took sales away from the Lancer.
References.
<templatestyles src="Reflist/styles.css" />
External links.
style="width:100%" class="wraplinks"
|
[
{
"math_id": 0,
"text": "\\scriptstyle C_\\mathrm x\\,"
}
] |
https://en.wikipedia.org/wiki?curid=617540
|
617573
|
Multiple choice
|
Assessment answered by choosing correct answers from a list of choices
Multiple choice (MC), objective response or MCQ (for multiple choice question) is a form of an objective assessment in which respondents are asked to select only correct answers from the choices offered as a list. The multiple choice format is most frequently used in educational testing, in market research, and in elections, when a person chooses between multiple candidates, parties, or policies.
Although E. L. Thorndike developed an early scientific approach to testing students, it was his assistant Benjamin D. Wood who developed the multiple-choice test. Multiple-choice testing increased in popularity in the mid-20th century when scanners and data-processing machines were developed to check the results. Christopher P. Sole created the first multiple-choice examination for computers on a Sharp MZ-80 computer in 1982. It was developed to help people with dyslexia cope with agricultural subjects, as Latin plant names can be difficult to understand and write.
Structure.
Multiple choice items consist of a stem and several alternative answers. The "stem" is the opening—a problem to be solved, a question asked, or an incomplete statement to be completed. The options are the possible answers that the examinee can choose from, with the correct answer called the "key" and the incorrect answers called "distractors". Only one answer may be keyed as correct. This contrasts with multiple response items in which more than one answer may be keyed as correct.
Usually, a correct answer earns a set number of points toward the total mark, and an incorrect answer earns nothing. However, tests may also award partial credit for unanswered questions or penalize students for incorrect answers, to discourage guessing. For example, the SAT Subject tests remove a quarter point from the test taker's score for an incorrect answer.
For advanced items, such as an applied knowledge item, the stem can consist of multiple parts. The stem can include extended or ancillary material such as a vignette, a case study, a graph, a table, or a detailed description which has multiple elements to it. Anything may be included as long as it is necessary to ensure the utmost validity and authenticity of the item. The stem ends with a lead-in question explaining how the respondent must answer. In medical multiple choice items, a lead-in question may ask "What is the most likely diagnosis?" or "What pathogen is the most likely cause?" in reference to a case study that was previously presented.
The items of a multiple choice test are often colloquially referred to as "questions," but this is a misnomer because many items are not phrased as questions. For example, they can be presented as incomplete statements, analogies, or mathematical equations. Thus, the more general term "item" is a more appropriate label. Items are stored in an item bank.
Examples.
Ideally, the multiple choice question (MCQ) should be asked as a "stem", with plausible options, for example:
If formula_0 and formula_1, what is formula_2?
In the equation formula_3, solve for "x".
The city known as the "IT Capital of India" is
A well written multiple-choice question avoids obviously wrong or implausible distractors (such as the non-Indian city of Detroit being included in the third example), so that the question makes sense when read with each of the distractors as well as with the correct answer.
A more difficult and well-written multiple choice question is as follows:
Consider the following:
Which of these can be tiled by two-by-one dominoes (with no overlaps or gaps, and every domino contained within the board)?
Advantages.
There are several advantages to multiple choice tests. If item writers are well trained and items are quality assured, it can be a very effective assessment technique. If students are instructed on the way in which the item format works and myths surrounding the tests are corrected, they will perform better on the test. On many assessments, reliability has been shown to improve with larger numbers of items on a test, and with good sampling and care over case specificity, overall test reliability can be further increased.
Multiple choice tests often require less time to administer for a given amount of material than would tests requiring written responses.
Multiple choice questions lend themselves to the development of objective assessment items, but without author training, questions can be subjective in nature. Because this style of test does not require a teacher to interpret answers, test-takers are graded purely on their selections, creating a lower likelihood of teacher bias in the results. Factors irrelevant to the assessed material (such as handwriting and clarity of presentation) do not come into play in a multiple-choice assessment, and so the candidate is graded purely on their knowledge of the topic. Finally, if test-takers are aware of how to use answer sheets or online examination tick boxes, their responses can be relied upon with clarity. Overall, multiple choice tests are the strongest predictors of overall student performance compared with other forms of evaluations, such as in-class participation, case exams, written assignments, and simulation games.
Disadvantages.
The most serious disadvantage is the limited types of knowledge that can be assessed by multiple choice tests. Multiple choice tests are best adapted for testing well-defined or lower-order skills. Problem-solving and higher-order reasoning skills are better assessed through short-answer and essay tests. However, multiple choice tests are often chosen not because of the type of knowledge being assessed, but because they are more affordable for testing a large number of students. This is especially true in the United States, where multiple choice tests are the preferred form of high-stakes testing, and in India, where the number of test-takers is very large.
Another disadvantage of multiple choice tests is possible ambiguity in the examinee's interpretation of the item. Failing to interpret information as the test maker intended can result in an "incorrect" response, even if the taker's response is potentially valid. The term "multiple guess" has been used to describe this scenario because test-takers may attempt to guess rather than determine the correct answer. A free response test allows the test taker to make an argument for their viewpoint and potentially receive credit.
In addition, even if students have some knowledge of a question, they receive no credit for knowing that information if they select the wrong answer and the item is scored dichotomously. However, free response questions may allow an examinee to demonstrate partial understanding of the subject and receive partial credit. Additionally, if more questions on a particular subject area or topic are asked to create a larger sample, then statistically the test takers' level of knowledge for that topic will be reflected more accurately in the number of correct answers and final results.
Another disadvantage of multiple choice examinations is that a student who is incapable of answering a particular question can simply select a random answer and still have a chance of receiving a mark for it. A student randomly guessing usually has a 25 percent chance of getting a four-answer choice question correct. It is common practice for students with no time left to give all remaining questions random answers in the hope that they will get at least some of them right. Many exams, such as the Australian Mathematics Competition and the SAT, have systems in place to negate this, in this case by making it no more beneficial to choose a random answer than to give none.
Another system of negating the effects of random selection is formula scoring, in which a score is proportionally reduced based on the number of incorrect responses and the number of possible choices. In this method, the score is reduced by the number of wrong answers divided by the average number of possible answers for all questions in the test, "w"/("c" – 1) where "w" is the "number of wrong responses on the test" and "c" is "the average number of possible choices for all questions on the test". All exams scored with the three-parameter model of item response theory also account for guessing. This is usually not a great issue, moreover, since the odds of a student receiving significant marks by guessing are very low when four or more selections are available.
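To make the arithmetic concrete, the following Python sketch applies the "w"/("c" − 1) penalty described above; the function and variable names are illustrative, not taken from any particular testing system.

```python
def formula_score(num_right, num_wrong, avg_choices):
    """Formula scoring: subtract w/(c - 1) from the number of right answers,
    where w is the number of wrong responses and c is the average number of
    choices per question, so random guessing gains nothing on average."""
    return num_right - num_wrong / (avg_choices - 1)

# On a 50-item test with four choices per item, a pure guesser expects
# 12.5 right and 37.5 wrong, so the expected formula score is zero:
print(formula_score(12.5, 37.5, 4))  # 0.0
# A test taker with 30 right and 20 wrong answers scores 30 - 20/3:
print(formula_score(30, 20, 4))      # 23.33...
```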
Additionally, it is important to note that questions phrased ambiguously may confuse test-takers. It is generally accepted that multiple choice questions allow for only one answer, where the one answer may encapsulate a collection of previous options. However, some test creators are unaware of this and might expect the student to select multiple answers without being given explicit permission, or without the trailing encapsulation options being provided.
Critics such as the philosopher and education proponent Jacques Derrida have said that while the demand for dispensing and checking basic knowledge is valid, there are other means to respond to this need than resorting to crib sheets.
Despite all the shortcomings, the format remains popular because MCQs are easy to create, score and analyse.
Changing answers.
The theory that students should trust their first instinct and stay with their initial answer on a multiple choice test is a myth worth dispelling. Researchers have found that although some people believe that changing answers is bad, it generally results in a higher test score. The data across twenty separate studies indicate that the percentage of "right to wrong" changes is 20.2%, whereas the percentage of "wrong to right" changes is 57.8%, nearly triple. Changing from "right to wrong" may be more painful and memorable (Von Restorff effect), but it is probably a good idea to change an answer after additional reflection indicates that a better choice could be made. In fact, a person's initial attraction to a particular answer choice could well derive from the surface plausibility that the test writer has intentionally built into a distractor (or incorrect answer choice). Test item writers are instructed to make their distractors plausible yet clearly incorrect. A test taker's first-instinct attraction to a distractor is thus often a reaction that probably should be revised in light of a careful consideration of each of the answer choices. Some test takers for some examination subjects might have accurate first instincts about a particular test item, but that does not mean that all test takers should trust their first instinct.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "a=1"
},
{
"math_id": 1,
"text": "b=2"
},
{
"math_id": 2,
"text": "a+b"
},
{
"math_id": 3,
"text": "2x+3=4"
}
] |
https://en.wikipedia.org/wiki?curid=617573
|
61760121
|
Rotor solidity
|
Rotor solidity is a dimensionless quantity used in the design and analysis of rotorcraft, propellers and wind turbines. Rotor solidity is a function of the aspect ratio and number of blades in the rotor and is widely used as a parameter for ensuring geometric similarity in rotorcraft experiments. It provides a measure of how close a lifting rotor system is to an ideal actuator disk in momentum theory. It also plays an important role in determining the fluid speed across the rotor disk when lift is generated, and consequently the performance of the rotor, the amount of downwash around it, and the noise levels the rotor generates. It is also used to compare performance characteristics between rotors of different sizes. Typical values of the rotor solidity ratio for helicopters fall in the range 0.05 to 0.12.
Definitions.
Rotor solidity is the ratio of area of the rotor blades to the area of the rotor disk.
For a rotor with formula_0 blades, each of radius formula_1 and chord formula_2, rotor solidity formula_3 is:
formula_4
where formula_5 is the blade area and formula_6 is the disk area.
For blades with a non-rectangular planform, solidity is often computed using an equivalent weighted form as
formula_7
where formula_8 is the weighing function, formula_9 is the local solidity, and formula_10 is the radial position along the blade.
The weighing function is determined by the aerodynamic performance parameter that is assumed to be constant in comparison to an equivalent rotor having a rectangular blade planform. For example, when rotor thrust coefficient is assumed to be constant, the weighing function comes out to be:
formula_11
and the corresponding weighted solidity ratio is known as the thrust-weighted solidity ratio.
When rotor power or torque coefficient is assumed constant, the weighing function is:
formula_12
and the corresponding weighted solidity ratio is known as the power or torque-weighted solidity ratio. This solidity ratio is analogous to the activity factor used in propeller design and is also used in wind turbine analysis. However, it is rarely used in helicopter design.
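The weighted forms lend themselves to direct numerical evaluation. The following Python sketch computes the simple, thrust-weighted and torque-weighted solidity ratios for a notional, linearly tapered helicopter blade; the function name, the taper law and the example numbers are illustrative assumptions, not values from the literature.

```python
import numpy as np

def solidities(num_blades, radius, chord_root, chord_tip, n=10000):
    """Integrate the local solidity sigma(r) = N*c(r)/(pi*R) against the
    weighing functions w(r) = 3 r^2 (thrust) and w(r) = 4 r^3 (torque)
    over the nondimensional radius r in [0, 1], using a midpoint rule."""
    r = (np.arange(n) + 0.5) / n                   # midpoint radial stations
    dr = 1.0 / n
    chord = chord_root + (chord_tip - chord_root) * r   # linear taper
    sigma_local = num_blades * chord / (np.pi * radius)
    simple = np.sum(sigma_local) * dr              # unweighted
    thrust = np.sum(3 * r**2 * sigma_local) * dr   # constant-thrust equivalence
    torque = np.sum(4 * r**3 * sigma_local) * dr   # constant-torque equivalence
    return simple, thrust, torque

# Notional 4-bladed rotor, 5 m radius, chord tapering from 0.35 m to 0.25 m.
# For a rectangular blade all three values reduce to N*c/(pi*R).
print(solidities(4, 5.0, 0.35, 0.25))
```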
Geometric significance.
A crude idea of what a rotor or propeller geometry looks like can be obtained from the rotor solidity ratio. Rotors with stubbier blades and/or a larger number of blades have a larger solidity ratio, since they cover a larger fraction of the rotor disk. Rotorcraft like helicopters typically use blades with very low solidity ratios compared to fixed-wing aircraft propellers and marine propellers.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "c"
},
{
"math_id": 3,
"text": "\\sigma"
},
{
"math_id": 4,
"text": "\\sigma \\equiv \\frac{A_b}{A_d} = \\frac{NcR}{\\pi R^2} = \\frac{Nc}{\\pi R}"
},
{
"math_id": 5,
"text": "A_b"
},
{
"math_id": 6,
"text": "A_d"
},
{
"math_id": 7,
"text": "\\sigma_e\\equiv \\frac{1}{R}\\int_0^R w(r)\\ \\sigma(r)dr"
},
{
"math_id": 8,
"text": " w"
},
{
"math_id": 9,
"text": " \\sigma"
},
{
"math_id": 10,
"text": " r"
},
{
"math_id": 11,
"text": " w(r) = 3 r^2 "
},
{
"math_id": 12,
"text": " w(r) = 4 r^3 "
}
] |
https://en.wikipedia.org/wiki?curid=61760121
|
61766433
|
Exsymmedian
|
Line tangent to a given triangle's circumcircle at one of its vertices
In Euclidean geometry, the exsymmedians are three lines associated with a triangle. More precisely, for a given triangle the exsymmedians are the tangent lines on the triangle's circumcircle through the three vertices of the triangle. The triangle formed by the three exsymmedians is the tangential triangle; its vertices, that is the three intersections of the exsymmedians, are called exsymmedian points.
For a triangle △"ABC" with ea, eb, ec being the exsymmedians and sa, sb, sc being the symmedians through the vertices A, B, C, two exsymmedians and one symmedian intersect in a common point:
formula_0
The length of the perpendicular line segment connecting a triangle side with its associated exsymmedian point is proportional to that triangle side. Specifically the following formulas apply:
formula_1
Here △ denotes the area of the triangle △"ABC", and ka, kb, kc denote the perpendicular line segments connecting the triangle sides a, b, c with the exsymmedian points Ea, Eb, Ec.
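As an illustrative check of these relations (a sketch added here, not part of the original text), the following sympy snippet constructs Ea as the intersection of the tangents at B and C and verifies the formula for ka on a concrete triangle:

```python
from sympy import Point, Triangle, Line, simplify

# A concrete, non-right triangle; labels follow the article's convention.
A, B, C = Point(0, 0), Point(4, 0), Point(1, 3)
tri = Triangle(A, B, C)
a, b, c = B.distance(C), C.distance(A), A.distance(B)  # sides opposite A, B, C
area = abs(tri.area)

circ = tri.circumcircle
e_b = circ.tangent_lines(B)[0]   # exsymmedian through B (tangent at B)
e_c = circ.tangent_lines(C)[0]   # exsymmedian through C (tangent at C)
E_a = e_b.intersection(e_c)[0]   # exsymmedian point opposite A

k_a = Line(B, C).distance(E_a)   # perpendicular distance from E_a to side a
predicted = a * 2 * area / (c**2 + b**2 - a**2)
assert simplify(k_a - predicted) == 0  # both evaluate to 9/sqrt(2) here
```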
|
[
{
"math_id": 0,
"text": "\\begin{align}\n E_a&=e_b \\cap e_c \\cap s_a \\\\\n E_b&=e_a \\cap e_c \\cap s_b \\\\\n E_c&=e_a \\cap e_b \\cap s_c\n\\end{align} "
},
{
"math_id": 1,
"text": "\\begin{align}\n k_a&=a\\cdot \\frac{2\\triangle}{c^2+b^2-a^2} \\\\[6pt]\n k_b&=b\\cdot \\frac{2\\triangle}{c^2+a^2-b^2} \\\\[6pt]\n k_c&=c\\cdot \\frac{2\\triangle}{a^2+b^2-c^2} \n\\end{align} "
}
] |
https://en.wikipedia.org/wiki?curid=61766433
|
6176811
|
Newman–Penrose formalism
|
Notation in general relativity
The Newman–Penrose (NP) formalism is a set of notation developed by Ezra T. Newman and Roger Penrose for general relativity (GR). Their notation is an effort to treat general relativity in terms of spinor notation, which introduces complex forms of the usual variables used in GR. The NP formalism is itself a special case of the tetrad formalism, where the tensors of the theory are projected onto a complete vector basis at each point in spacetime. Usually this vector basis is chosen to reflect some symmetry of the spacetime, leading to simplified expressions for physical observables. In the case of the NP formalism, the vector basis chosen is a null tetrad: a set of four null vectors—two real, and a complex-conjugate pair. The two real members often asymptotically point radially inward and radially outward, and the formalism is well adapted to treatment of the propagation of radiation in curved spacetime. The Weyl scalars, derived from the Weyl tensor, are often used. In particular, it can be shown that one of these scalars—formula_0 in the appropriate frame—encodes the outgoing gravitational radiation of an asymptotically flat system.
Newman and Penrose introduced the following functions as primary quantities using this tetrad:
In many situations—especially algebraically special spacetimes or vacuum spacetimes—the Newman–Penrose formalism simplifies dramatically, as many of the functions go to zero. This simplification allows for various theorems to be proven more easily than using the standard form of Einstein's equations.
In this article, we will only employ the tensorial rather than spinorial version of NP formalism, because the former is easier to understand and more popular in relevant papers. One can refer to ref. for a unified formulation of these two versions.
Null tetrad and sign convention.
The formalism is developed for four-dimensional spacetime, with a Lorentzian-signature metric. At each point, a tetrad (set of four vectors) is introduced. The first two vectors, formula_5 and formula_6, are just a pair of standard (real) null vectors such that formula_7. For example, we can think in terms of spherical coordinates, and take formula_8 to be the outgoing null vector, and formula_9 to be the ingoing null vector. A complex null vector is then constructed by combining a pair of real, orthogonal unit space-like vectors. In the case of spherical coordinates, the standard choice is
formula_10
The complex conjugate of this vector then forms the fourth element of the tetrad.
Two sets of signature and normalization conventions are in use for NP formalism: formula_11 and formula_12. The former is the original one that was adopted when NP formalism was developed and has been widely used in black-hole physics, gravitational waves and various other areas in general relativity. However, it is the latter convention that is usually employed in contemporary study of black holes from quasilocal perspectives (such as isolated horizons and dynamical horizons). In this article, we will utilize formula_12 for a systematic review of the NP formalism (see also the references).
It is important to note that, when switching from formula_13 to formula_14, the definitions of the spin coefficients, the Weyl-NP scalars formula_15 and the Ricci-NP scalars formula_16 need to change their signs; this way, the Einstein-Maxwell equations can be left unchanged.
In NP formalism, the complex null tetrad contains two real null (co)vectors formula_17 and two complex null (co)vectors formula_18. Being "null" (co)vectors, "self"-normalization of formula_17 naturally vanishes,
formula_19,
so the following two pairs of "cross"-normalization are adopted
formula_20
while contractions between the two pairs are also vanishing,
formula_21.
Here the indices can be raised and lowered by the global metric formula_22 which in turn can be obtained via
formula_23
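As a quick sanity check of these relations, the following Python sketch (using NumPy; the Cartesian-like flat-space tetrad is an illustrative choice) verifies the cross-normalizations and reconstructs the Minkowski metric, consistent with the convention formula_12.
import numpy as np

sq2 = np.sqrt(2.0)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-,+,+,+)
l = np.array([1, 1, 0, 0]) / sq2       # real null vector, outgoing
n = np.array([1, -1, 0, 0]) / sq2      # real null vector, ingoing
m = np.array([0, 0, 1, 1j]) / sq2      # complex null vector
mb = m.conj()                          # its complex conjugate

def lower(v):                          # lower an index with eta
    return eta @ v

# Self-normalizations vanish; cross-normalizations are l.n = -1, m.mbar = +1.
assert np.isclose(lower(l) @ l, 0) and np.isclose(lower(m) @ m, 0)
assert np.isclose(lower(l) @ n, -1) and np.isclose(lower(m) @ mb, 1)

# Completeness: g_ab = -l_a n_b - n_a l_b + m_a mbar_b + mbar_a m_b.
g = (-np.outer(lower(l), lower(n)) - np.outer(lower(n), lower(l))
     + np.outer(lower(m), lower(mb)) + np.outer(lower(mb), lower(m)))
assert np.allclose(g, eta)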
NP quantities and tetrad equations.
Four covariant derivative operators.
In keeping with the formalism's practice of using distinct unindexed symbols for each component of an object, the covariant derivative operator formula_24 is expressed using four separate symbols (formula_25) which name a directional covariant derivative operator for each tetrad direction. Given a linear combination of tetrad vectors, formula_26, the covariant derivative operator in the formula_27 direction is formula_28.
The operators are defined as
formula_29
formula_30
which reduce to formula_31 when acting on "scalar" functions.
Twelve spin coefficients.
In NP formalism, instead of using index notations as in orthogonal tetrads, each Ricci rotation coefficient formula_32 in the null tetrad is assigned a lower-case Greek letter; these constitute the 12 complex "spin coefficients" (in three groups),
formula_33
formula_34
formula_35
formula_36
formula_37
formula_38
formula_39
formula_40
Spin coefficients are the primary quantities in NP formalism, with which all other NP quantities (as defined below) can be calculated indirectly using the NP field equations. Thus, NP formalism is sometimes referred to as "spin-coefficient formalism" as well.
Transportation equations: covariant derivatives of tetrad vectors.
The sixteen directional covariant derivatives of tetrad vectors are sometimes called the "transportation/propagation equations," perhaps because the derivatives are zero when the tetrad vector is parallel propagated or transported in the direction of the derivative operator.
These results in this exact notation are given by O'Donnell:
formula_41 formula_42 formula_43 formula_44
formula_45 formula_46 formula_47 formula_48
formula_49 formula_50 formula_51 formula_52
formula_53 formula_54 formula_55 formula_56
Interpretation of formula_57 from formula_58 and formula_59.
The two equations for the covariant derivative of a real null tetrad vector in its own direction indicate whether or not the vector is tangent to a geodesic and if so, whether the geodesic has an affine parameter.
A null tangent vector formula_60 is tangent to an affinely parameterized null geodesic if formula_61, which is to say if the vector is unchanged by parallel propagation or transportation in its own direction.
formula_62 shows that formula_8 is tangent to a geodesic if and only if formula_63, and is tangent to an affinely parameterized geodesic if in addition formula_64. Similarly, formula_65 shows that formula_9 is geodesic if and only if formula_66, and has affine parameterization when formula_67.
Commutators.
The metric-compatibility or torsion-freeness of the covariant derivative is recast into the "commutators of the directional derivatives",
formula_72 formula_73 formula_74 formula_75
which imply that
formula_76
formula_77
formula_78
formula_79
Note: (i) The above equations can be regarded either as implications of the commutators or combinations of the transportation equations; (ii) In these implied equations, the vectors formula_80 can be replaced by the covectors and the equations still hold.
Weyl–NP and Ricci–NP scalars.
The 10 independent components of the Weyl tensor can be encoded into 5 complex Weyl-NP scalars,
formula_81
formula_82
formula_83
formula_84
formula_85
The 10 independent components of the Ricci tensor are encoded into 4 "real" scalars formula_86, formula_87, formula_88, formula_89 and 3 "complex" scalars formula_90 (with their complex conjugates),
formula_91
formula_92 formula_93 formula_94
In these definitions, formula_95 could be replaced by its trace-free part formula_96 or by the Einstein tensor formula_97 because of the normalization relations. Also, formula_87 is reduced to formula_98 for electrovacuum (formula_99).
Einstein–Maxwell–NP equations.
NP field equations.
In a complex null tetrad, Ricci identities give rise to the following NP field equations connecting spin coefficients, Weyl-NP and Ricci-NP scalars (recall that in an orthogonal tetrad, Ricci rotation coefficients would respect Cartan's first and second structure equations),
These equations, in various notations, can be found in several texts; the notation in Frolov and Novikov is identical.
formula_100 formula_101 formula_102
formula_103 formula_104 formula_105 formula_106 formula_107 formula_108 formula_109 formula_110 formula_111 formula_112
formula_113 formula_114 formula_115 formula_116 formula_117
Also, the Weyl-NP scalars formula_15 and the Ricci-NP scalars formula_16 can be calculated indirectly from the above NP field equations after obtaining the spin coefficients rather than directly using their definitions.
Maxwell–NP scalars, Maxwell equations in NP formalism.
The six independent components of the Faraday-Maxwell 2-form (i.e. the electromagnetic field strength tensor) formula_118 can be encoded into three complex Maxwell-NP scalars
formula_119
and therefore the eight real Maxwell equations formula_120 and formula_121 (as formula_122) can be transformed into four complex equations,
formula_123 formula_124 formula_125 formula_126
with the Ricci-NP scalars formula_16 related to Maxwell scalars by
formula_127
It is worthwhile to point out that the supplementary equation formula_128 is only valid for electromagnetic fields; for example, in the case of Yang-Mills fields there will be formula_129 where formula_130 are Yang-Mills-NP scalars.
To sum up, the aforementioned transportation equations, NP field equations and Maxwell-NP equations together constitute the Einstein-Maxwell equations in Newman–Penrose formalism.
Applications of NP formalism to gravitational radiation field.
The Weyl scalar formula_0 was defined by Newman & Penrose as
formula_131
(note, however, that the overall sign is arbitrary, and that Newman & Penrose worked with a "timelike" metric signature of formula_132).
In empty space, the Einstein field equations reduce to formula_133. From the definition of the Weyl tensor, we see that this means that the Weyl tensor equals the Riemann tensor, formula_134. We can make the standard choice for the tetrad at infinity:
formula_135
formula_136
formula_137
In transverse-traceless gauge, a simple calculation shows that linearized gravitational waves are related to components of the Riemann tensor as
formula_138
formula_139
assuming propagation in the formula_140 direction. Combining these, and using the definition of formula_0 above, we can write
formula_141
Far from a source, in nearly flat space, the fields formula_142 and formula_143 encode everything about gravitational radiation propagating in a given direction. Thus, we see that formula_0 encodes in a single complex field everything about (outgoing) gravitational waves.
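To illustrate, the following sympy sketch (the monochromatic waveforms are an illustrative assumption) computes formula_0 from hypothetical polarizations formula_142 and formula_143 via the relation just derived.
import sympy as sp

t, A, omega = sp.symbols('t A omega', positive=True)
h_plus = A * sp.cos(omega * t)    # hypothetical plus polarization
h_cross = A * sp.sin(omega * t)   # hypothetical cross polarization

# Psi_4 = -d^2 h_+/dt^2 + i d^2 h_x/dt^2
psi4 = -sp.diff(h_plus, t, 2) + sp.I * sp.diff(h_cross, t, 2)
# Result: A*omega**2*(cos(omega*t) - I*sin(omega*t)) = A*omega**2*exp(-I*omega*t),
# a single complex field rotating at the wave frequency.
print(sp.simplify(psi4))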
Radiation from a finite source.
Using the wave-generation formalism summarised by Thorne, we can write the radiation field quite compactly in terms of the mass multipole, current multipole, and spin-weighted spherical harmonics:
formula_144
Here, prefixed superscripts indicate time derivatives. That is, we define
formula_145
The components formula_146 and formula_147 are the mass and current multipoles, respectively. formula_148 is the spin-weight -2 spherical harmonic.
|
[
{
"math_id": 0,
"text": "\\Psi_4"
},
{
"math_id": 1,
"text": "\\kappa, \\rho, \\sigma, \\tau\\,; \\lambda, \\mu, \\nu, \\pi\\,; \\epsilon, \\gamma, \\beta, \\alpha. "
},
{
"math_id": 2,
"text": "\\Psi_0, \\ldots, \\Psi_4"
},
{
"math_id": 3,
"text": " \\Phi_{00}, \\Phi_{11}, \\Phi_{22}, \\Lambda "
},
{
"math_id": 4,
"text": " \\Phi_{01}, \\Phi_{10}, \\Phi_{02}, \\Phi_{20}, \\Phi_{12}, \\Phi_{21} "
},
{
"math_id": 5,
"text": "l^\\mu"
},
{
"math_id": 6,
"text": "n^\\mu"
},
{
"math_id": 7,
"text": "l^a n_a = -1"
},
{
"math_id": 8,
"text": "l^a"
},
{
"math_id": 9,
"text": "n^a"
},
{
"math_id": 10,
"text": "m^\\mu = \\frac{1}{\\sqrt{2}}\\left( \\hat{\\theta} + i \\hat{\\phi} \\right)^\\mu\\ ."
},
{
"math_id": 11,
"text": "\\{(+,-,-,-); l^a n_a=1\\,,m^a \\bar{m}_a=-1\\}"
},
{
"math_id": 12,
"text": "\\{(-,+,+,+); l^a n_a=-1\\,,m^a \\bar{m}_a=1\\}"
},
{
"math_id": 13,
"text": "\\{(+,-,-,-)\\,,l^a n_a=1\\,,m^a \\bar{m}_a=-1\\}"
},
{
"math_id": 14,
"text": "\\{(-,+,+,+)\\,,l^a n_a=-1\\,,m^a \\bar{m}_a=1\\}"
},
{
"math_id": 15,
"text": "\\Psi_{i}"
},
{
"math_id": 16,
"text": "\\Phi_{ij}"
},
{
"math_id": 17,
"text": "\\{\\ell\\,,n\\}"
},
{
"math_id": 18,
"text": "\\{m\\,, \\bar m\\}"
},
{
"math_id": 19,
"text": "l_a l^a=n_a n^a=m_a m^a=\\bar{m}_a \\bar{m}^a=0"
},
{
"math_id": 20,
"text": "l_a n^a=-1=l^a n_a\\,,\\quad m_a \\bar{m}^a=1=m^a \\bar{m}_a\\,,"
},
{
"math_id": 21,
"text": "l_a m^a=l_a \\bar{m}^a=n_a m^a=n_a \\bar{m}^a=0"
},
{
"math_id": 22,
"text": "g_{ab}"
},
{
"math_id": 23,
"text": "g_{ab}=-l_a n_b - n_a l_b +m_a \\bar{m}_b +\\bar{m}_a m_b\\,, \\quad g^{ab}=-l^a n^b - n^a l^b +m^a \\bar{m}^b +\\bar{m}^a m^b\\,."
},
{
"math_id": 24,
"text": "\\nabla_a"
},
{
"math_id": 25,
"text": "D, \\Delta, \\delta, \\bar{\\delta}"
},
{
"math_id": 26,
"text": "X^a=\\mathrm{a}l^a+\\mathrm{b}n^a+\\mathrm{c}m^a+\\mathrm{d}\\bar{m}^a"
},
{
"math_id": 27,
"text": "X^a"
},
{
"math_id": 28,
"text": "\\nabla_X = X^a\\nabla_a=(\\mathrm{a}D+\\mathrm{b}\\Delta+\\mathrm{c}\\delta+\\mathrm{d}\\bar{\\delta})"
},
{
"math_id": 29,
"text": "D:= \\nabla_\\boldsymbol{l}=l^a\\nabla_a\\,,\\; \\Delta:= \\nabla_\\boldsymbol{n}=n^a\\nabla_a\\,, "
},
{
"math_id": 30,
"text": "\\delta := \\nabla_\\boldsymbol{m}=m^a\\nabla_a\\,,\\; \\bar{\\delta} := \\nabla_\\boldsymbol{\\bar{m}}=\\bar{m}^a\\nabla_a\\,,"
},
{
"math_id": 31,
"text": "D=l^a\\partial_a\\,, \\Delta=n^a\\partial_a\\,,\\delta=m^a\\partial_a\\,,\\bar{\\delta}=\\bar{m}^a\\partial_a "
},
{
"math_id": 32,
"text": "\\gamma_{ijk}"
},
{
"math_id": 33,
"text": " \\kappa:= -m^aDl_a=-m^a l^b \\nabla_b l_a\\,,\\quad \\tau:= -m^a\\Delta l_a=-m^a n^b \\nabla_b l_a\\,,"
},
{
"math_id": 34,
"text": " \\sigma:= -m^a\\delta l_a=-m^a m^b\\nabla_b l_a\\,, \\quad \\rho := -m^a\\bar{\\delta} l_a=-m^a \\bar{m}^b \\nabla_b l_a\\,; "
},
{
"math_id": 35,
"text": "\\pi:= \\bar{m}^aDn_a=\\bar{m}^al^b\\nabla_b n_a\\,, \\quad \\nu:= \\bar{m}^a\\Delta n_a=\\bar{m}^a n^b\\nabla_b n_a\\,, "
},
{
"math_id": 36,
"text": "\\mu:= \\bar{m}^a\\delta n_a=\\bar{m}^a m^b\\nabla_b n_a\\,, \\quad \\lambda:= \\bar{m}^a\\bar{\\delta} n_a=\\bar{m}^a \\bar{m}^b \\nabla_b n_a\\,;"
},
{
"math_id": 37,
"text": "\\varepsilon:= -\\frac{1}{2}\\big(n^aDl_a-\\bar{m}^aDm_a \\big)=-\\frac{1}{2}\\big(n^al^b\\nabla_b l_a-\\bar{m}^al^b\\nabla_b m_a \\big)\\,,"
},
{
"math_id": 38,
"text": "\\gamma:= -\\frac{1}{2}\\big(n^a\\Delta l_a-\\bar{m}^a\\Delta m_a \\big)= -\\frac{1}{2}\\big(n^a n^b\\nabla_b l_a-\\bar{m}^a n^b\\nabla_b m_a \\big)\\,,"
},
{
"math_id": 39,
"text": "\\beta:= -\\frac{1}{2}\\big(n^a\\delta l_a-\\bar{m}^a\\delta m_a \\big)=-\\frac{1}{2}\\big(n^a m^b\\nabla_b l_a-\\bar{m}^am^b\\nabla_b m_a \\big)\\,,"
},
{
"math_id": 40,
"text": "\\alpha:= -\\frac{1}{2}\\big(n^a\\bar{\\delta} l_a-\\bar{m}^a\\bar{\\delta}m_a \\big)=-\\frac{1}{2}\\big(n^a\\bar{m}^b\\nabla_b l_a-\\bar{m}^a\\bar{m}^b\\nabla_b m_a \\big)\\,."
},
{
"math_id": 41,
"text": "D l^a=(\\varepsilon+\\bar{\\varepsilon})l^a-\\bar{\\kappa}m^a-\\kappa\\bar{m}^a\\,,"
},
{
"math_id": 42,
"text": "\\Delta l^a=(\\gamma+\\bar{\\gamma})l^a-\\bar{\\tau}m^a-\\tau\\bar{m}^a\\,,"
},
{
"math_id": 43,
"text": "\\delta l^a =(\\bar{\\alpha}+\\beta)l^a-\\bar{\\rho}m^a-\\sigma\\bar{m}^a\\,,"
},
{
"math_id": 44,
"text": "\\bar{\\delta} l^a=(\\alpha+\\bar{\\beta})l^a-\\bar{\\sigma}m^a-\\rho\\bar{m}^a\\,;"
},
{
"math_id": 45,
"text": "D n^a=\\pi m^a+\\bar{\\pi}\\bar{m}^a-(\\varepsilon+\\bar{\\varepsilon})n^a\\,,"
},
{
"math_id": 46,
"text": "\\Delta n^a=\\nu m^a+\\bar{\\nu}\\bar{m}^a-(\\gamma+\\bar{\\gamma})n^a\\,,"
},
{
"math_id": 47,
"text": "\\delta n^a=\\mu m^a+\\bar{\\lambda}\\bar{m}^a-(\\bar{\\alpha}+\\beta)n^a\\,,"
},
{
"math_id": 48,
"text": "\\bar{\\delta} n^a=\\lambda m^a+\\bar{\\mu}\\bar{m}^a-(\\alpha+\\bar{\\beta})n^a\\,;"
},
{
"math_id": 49,
"text": "D m^a=(\\varepsilon-\\bar{\\varepsilon})m^a+\\bar{\\pi}l^a-\\kappa n^a\\,,"
},
{
"math_id": 50,
"text": "\\Delta m^a=(\\gamma-\\bar{\\gamma})m^a+\\bar{\\nu}l^a-\\tau n^a\\,,"
},
{
"math_id": 51,
"text": "\\delta m^a=(\\beta-\\bar{\\alpha})m^a+\\bar{\\lambda}l^a-\\sigma n^a\\,,"
},
{
"math_id": 52,
"text": "\\bar{\\delta} m^a=(\\alpha-\\bar{\\beta})m^a+\\bar{\\mu}l^a-\\rho n^a\\,;"
},
{
"math_id": 53,
"text": "D \\bar{m}^a=(\\bar{\\varepsilon}-\\varepsilon)\\bar{m}^a+\\pi l^a-\\bar{\\kappa} n^a\\,,"
},
{
"math_id": 54,
"text": "\\Delta \\bar{m}^a=(\\bar{\\gamma}-\\gamma)\\bar{m}^a+\\nu l^a-\\bar{\\tau} n^a\\,,"
},
{
"math_id": 55,
"text": "\\delta \\bar{m}^a=(\\bar{\\alpha}-\\beta)\\bar{m}^a+\\mu l^a-\\bar{\\rho} n^a\\,,"
},
{
"math_id": 56,
"text": "\\bar{\\delta} \\bar{m}^a=(\\bar{\\beta}-\\alpha)\\bar{m}^a+\\lambda l^a-\\bar{\\sigma} n^a\\,."
},
{
"math_id": 57,
"text": "\\kappa, \\varepsilon, \\nu, \\gamma"
},
{
"math_id": 58,
"text": "D l^a"
},
{
"math_id": 59,
"text": "\\Delta n^a"
},
{
"math_id": 60,
"text": "T^a"
},
{
"math_id": 61,
"text": "T^b\\nabla_bT^a=0"
},
{
"math_id": 62,
"text": "D l^a=(\\varepsilon+\\bar{\\varepsilon})l^a-\\bar{\\kappa}m^a-\\kappa\\bar{m}^a"
},
{
"math_id": 63,
"text": "\\kappa=0"
},
{
"math_id": 64,
"text": "(\\varepsilon+\\bar{\\varepsilon})=0\n"
},
{
"math_id": 65,
"text": "\\Delta n^a=\\nu m^a+\\bar{\\nu}\\bar{m}^a-(\\gamma+\\bar{\\gamma})n^a"
},
{
"math_id": 66,
"text": "\\nu=0"
},
{
"math_id": 67,
"text": "(\\gamma+\\bar{\\gamma})=0"
},
{
"math_id": 68,
"text": "m^a=x^a+iy^a"
},
{
"math_id": 69,
"text": "\\bar{m}^a=x^a-iy^a"
},
{
"math_id": 70,
"text": "x^a"
},
{
"math_id": 71,
"text": "y^a"
},
{
"math_id": 72,
"text": "\\Delta D-D\\Delta=(\\gamma+\\bar{\\gamma})D+(\\varepsilon+\\bar{\\varepsilon})\\Delta-(\\bar{\\tau}+\\pi)\\delta-(\\tau+\\bar{\\pi})\\bar{\\delta}\\,,"
},
{
"math_id": 73,
"text": "\\delta D-D\\delta=(\\bar{\\alpha}+\\beta-\\bar{\\pi})D+\\kappa\\Delta-(\\bar{\\rho}+\\varepsilon-\\bar{\\varepsilon})\\delta-\\sigma\\bar{\\delta}\\,,"
},
{
"math_id": 74,
"text": "\\delta\\Delta-\\Delta\\delta=-\\bar{\\nu}D+(\\tau-\\bar{\\alpha}-\\beta)\\Delta+(\\mu-\\gamma+\\bar{\\gamma})\\delta+\\bar{\\lambda}\\bar{\\delta}\\,,"
},
{
"math_id": 75,
"text": "\\bar{\\delta}\\delta-\\delta\\bar{\\delta}=(\\bar{\\mu}-\\mu)D+(\\bar{\\rho}-\\rho)\\Delta+(\\alpha-\\bar{\\beta})\\delta-(\\bar{\\alpha}-\\beta)\\bar{\\delta}\\,,"
},
{
"math_id": 76,
"text": "\\Delta l^a-D n^a=(\\gamma+\\bar{\\gamma})l^a+(\\varepsilon+\\bar{\\varepsilon})n^a-(\\bar{\\tau}+\\pi)m^a-(\\tau+\\bar{\\pi})\\bar{m}^a\\,,"
},
{
"math_id": 77,
"text": "\\delta l^a-D m^a=(\\bar{\\alpha}+\\beta-\\bar{\\pi})l^a+\\kappa n^a-(\\bar{\\rho}+\\varepsilon-\\bar{\\varepsilon}) m^a-\\sigma\\bar{m}^a\\,,"
},
{
"math_id": 78,
"text": "\\delta n^a-\\Delta m^a=-\\bar{\\nu}l^a+(\\tau-\\bar{\\alpha}-\\beta)n^a+(\\mu-\\gamma+\\bar{\\gamma})m^a+\\bar{\\lambda}\\bar{m}^a\\,,"
},
{
"math_id": 79,
"text": "\\bar{\\delta}m^a-\\delta\\bar{m}^a=(\\bar{\\mu}-\\mu)l^a+(\\bar{\\rho}-\\rho)n^a+(\\alpha-\\bar{\\beta})m^a-(\\bar{\\alpha}-\\beta)\\bar{m}^a\\,."
},
{
"math_id": 80,
"text": "\\{l^a,n^a,m^a,\\bar{m}^a\\}"
},
{
"math_id": 81,
"text": "\\Psi_0:= C_{abcd} l^a m^b l^c m^d\\,,"
},
{
"math_id": 82,
"text": "\\Psi_1:= C_{abcd} l^a n^b l^c m^d\\,,"
},
{
"math_id": 83,
"text": "\\Psi_2:= C_{abcd} l^a m^b\\bar{m}^c n^d\\,,"
},
{
"math_id": 84,
"text": "\\Psi_3:= C_{abcd} l^a n^b\\bar{m}^c n^d\\,,"
},
{
"math_id": 85,
"text": "\\Psi_4:= C_{abcd} n^a \\bar{m}^b n^c \\bar{m}^d\\,."
},
{
"math_id": 86,
"text": "\\{\\Phi_{00}"
},
{
"math_id": 87,
"text": "\\Phi_{11}"
},
{
"math_id": 88,
"text": "\\Phi_{22}"
},
{
"math_id": 89,
"text": "\\Lambda\\}"
},
{
"math_id": 90,
"text": "\\{\\Phi_{10},\\Phi_{20},\\Phi_{21} \\}"
},
{
"math_id": 91,
"text": "\\Phi_{00}:=\\frac{1}{2}R_{ab}l^a l^b\\,, \\quad \\Phi_{11}:=\\frac{1}{4}R_{ab}(\\,l^a n^b+m^a\\bar{m}^b)\\,, \\quad\\Phi_{22}:=\\frac{1}{2}R_{ab}n^a n^b\\,, \\quad\\Lambda:=\\frac{R}{24}\\,;"
},
{
"math_id": 92,
"text": "\\Phi_{01}:=\\frac{1}{2}R_{ab}l^a m^b\\,, \\quad\\; \\Phi_{10}:=\\frac{1}{2}R_{ab}l^a \\bar{m}^b=\\overline{\\Phi_{01}}\\,,"
},
{
"math_id": 93,
"text": "\\Phi_{02}:=\\frac{1}{2}R_{ab}m^a m^b\\,, \\quad \\Phi_{20}:=\\frac{1}{2}R_{ab}\\bar{m}^a \\bar{m}^b=\\overline{\\Phi_{02}}\\,,"
},
{
"math_id": 94,
"text": "\\Phi_{12}:=\\frac{1}{2}R_{ab} m^a n^b\\,, \\quad\\; \\Phi_{21}:=\\frac{1}{2}R_{ab} \\bar{m}^a n^b=\\overline{\\Phi_{12}}\\,."
},
{
"math_id": 95,
"text": "R_{ab}"
},
{
"math_id": 96,
"text": "\\displaystyle Q_{ab}=R_{ab}-\\frac{1}{4}g_{ab}R"
},
{
"math_id": 97,
"text": "\\displaystyle G_{ab}=R_{ab}-\\frac{1}{2}g_{ab}R"
},
{
"math_id": 98,
"text": "\\Phi_{11}=\\frac{1}{2}R_{ab}l^a n^b=\\frac{1}{2}R_{ab}m^a\\bar{m}^b"
},
{
"math_id": 99,
"text": "\\Lambda=0"
},
{
"math_id": 100,
"text": "D\\rho -\\bar{\\delta}\\kappa=(\\rho^2+\\sigma\\bar{\\sigma})+(\\varepsilon+\\bar{\\varepsilon})\\rho-\\bar{\\kappa}\\tau-\\kappa(3\\alpha+\\bar{\\beta}-\\pi)+\\Phi_{00}\\,,"
},
{
"math_id": 101,
"text": "D\\sigma-\\delta\\kappa=(\\rho+\\bar{\\rho})\\sigma+(3\\varepsilon-\\bar{\\varepsilon})\\sigma-(\\tau-\\bar{\\pi}+\\bar{\\alpha}+3\\beta)\\kappa+\\Psi_0\\,,"
},
{
"math_id": 102,
"text": "D\\tau-\\Delta\\kappa=(\\tau+\\bar{\\pi})\\rho+(\\bar{\\tau}+\\pi)\\sigma+(\\varepsilon-\\bar{\\varepsilon})\\tau-(3\\gamma+\\bar{\\gamma})\\kappa+\\Psi_1+\\Phi_{01}\\,,"
},
{
"math_id": 103,
"text": "D\\alpha-\\bar{\\delta}\\varepsilon=(\\rho+\\bar{\\varepsilon}-2\\varepsilon)\\alpha+\\beta\\bar{\\sigma}-\\bar{\\beta}\\varepsilon-\\kappa\\lambda-\\bar{\\kappa}\\gamma+(\\varepsilon+\\rho)\\pi+\\Phi_{10}\\,,"
},
{
"math_id": 104,
"text": "D\\beta-\\delta\\varepsilon=(\\alpha+\\pi)\\sigma+(\\bar{\\rho}-\\bar{\\varepsilon})\\beta-(\\mu+\\gamma)\\kappa-(\\bar{\\alpha}-\\bar{\\pi})\\varepsilon+\\Psi_1\\,,"
},
{
"math_id": 105,
"text": "D\\gamma-\\Delta\\varepsilon=(\\tau+\\bar{\\pi})\\alpha+(\\bar{\\tau}+\\pi)\\beta-(\\varepsilon+\\bar{\\varepsilon})\\gamma-(\\gamma+\\bar{\\gamma})\\varepsilon+\\tau\\pi-\\nu\\kappa+\\Psi_2+\\Phi_{11}-\\Lambda\\,,"
},
{
"math_id": 106,
"text": "D\\lambda-\\bar{\\delta}\\pi=(\\rho\\lambda+\\bar{\\sigma}\\mu)+\\pi^2+(\\alpha-\\bar{\\beta})\\pi-\\nu\\bar{\\kappa}-(3\\varepsilon-\\bar{\\varepsilon})\\lambda+\\Phi_{20}\\,,"
},
{
"math_id": 107,
"text": "D\\mu-\\delta\\pi=(\\bar{\\rho}\\mu+\\sigma\\lambda)+\\pi\\bar{\\pi}-(\\varepsilon+\\bar{\\varepsilon})\\mu-(\\bar{\\alpha}-\\beta)\\pi-\\nu\\kappa+\\Psi_2+2\\Lambda\\,,"
},
{
"math_id": 108,
"text": "D\\nu-\\Delta\\pi=(\\pi+\\bar{\\tau})\\mu+(\\bar{\\pi}+\\tau)\\lambda+(\\gamma-\\bar{\\gamma})\\pi-(3\\varepsilon+\\bar{\\varepsilon})\\nu+\\Psi_3+\\Phi_{21}\\,,"
},
{
"math_id": 109,
"text": "\\Delta\\lambda-\\bar{\\delta}\\nu=-(\\mu+\\bar{\\mu})\\lambda-(3\\gamma-\\bar{\\gamma})\\lambda+(3\\alpha+\\bar{\\beta}+\\pi-\\bar{\\tau})\\nu-\\Psi_4\\,,"
},
{
"math_id": 110,
"text": "\\delta\\rho-\\bar{\\delta}\\sigma=\\rho(\\bar{\\alpha}+\\beta)-\\sigma(3\\alpha-\\bar{\\beta})+(\\rho-\\bar{\\rho})\\tau+(\\mu-\\bar{\\mu})\\kappa-\\Psi_1+\\Phi_{01}\\,,"
},
{
"math_id": 111,
"text": "\\delta\\alpha-\\bar{\\delta}\\beta=(\\mu\\rho-\\lambda\\sigma)+\\alpha\\bar{\\alpha}+\\beta\\bar{\\beta}-2\\alpha\\beta+\\gamma(\\rho-\\bar{\\rho})+\\varepsilon(\\mu-\\bar{\\mu})-\\Psi_2+\\Phi_{11}+\\Lambda\\,,"
},
{
"math_id": 112,
"text": "\\delta\\lambda-\\bar{\\delta}\\mu=(\\rho-\\bar{\\rho})\\nu+(\\mu-\\bar{\\mu})\\pi+(\\alpha+\\bar{\\beta})\\mu+(\\bar\\alpha-3\\beta)\\lambda-\\Psi_3+\\Phi_{21}\\,,"
},
{
"math_id": 113,
"text": "\\delta\\nu-\\Delta\\mu=(\\mu^2+\\lambda\\bar{\\lambda})+(\\gamma+\\bar{\\gamma})\\mu-\\bar{\\nu}\\pi+(\\tau-3\\beta-\\bar{\\alpha})\\nu+\\Phi_{22}\\,,"
},
{
"math_id": 114,
"text": "\\delta\\gamma-\\Delta\\beta=(\\tau-\\bar{\\alpha}-\\beta)\\gamma+\\mu\\tau-\\sigma\\nu-\\varepsilon\\bar{\\nu}-(\\gamma-\\bar{\\gamma}-\\mu)\\beta+\\alpha\\bar{\\lambda}+\\Phi_{12}\\,,"
},
{
"math_id": 115,
"text": "\\delta\\tau-\\Delta\\sigma=(\\mu\\sigma+\\bar{\\lambda}\\rho)+(\\tau+\\beta-\\bar{\\alpha})\\tau-(3\\gamma-\\bar{\\gamma})\\sigma-\\kappa\\bar{\\nu}+\\Phi_{02}\\,,"
},
{
"math_id": 116,
"text": "\\Delta\\rho-\\bar{\\delta}\\tau=-(\\rho\\bar{\\mu}+\\sigma\\lambda)+(\\bar{\\beta}-\\alpha-\\bar{\\tau})\\tau+(\\gamma+\\bar{\\gamma})\\rho+\\nu\\kappa-\\Psi_2-2\\Lambda\\,,"
},
{
"math_id": 117,
"text": "\\Delta\\alpha-\\bar{\\delta}\\gamma=(\\rho+\\varepsilon)\\nu-(\\tau+\\beta)\\lambda+(\\bar{\\gamma}-\\bar{\\mu})\\alpha+(\\bar{\\beta}-\\bar{\\tau})\\gamma-\\Psi_3\\,."
},
{
"math_id": 118,
"text": "F_{ab}"
},
{
"math_id": 119,
"text": "\\phi_0:= F_{ab}l^a m^b \\,,\\quad \\phi_1:= \\frac{1}{2} F_{ab}\\big(l^an^b + \\bar{m}^a m^b \\big)\\,, \\quad \\phi_2 := F_{ab} \\bar{m}^a n^b\\,,"
},
{
"math_id": 120,
"text": "d\\mathbf{F}=0"
},
{
"math_id": 121,
"text": "d^{\\star}\\mathbf{F}=0"
},
{
"math_id": 122,
"text": "\\mathbf{F}=dA"
},
{
"math_id": 123,
"text": "D\\phi_1 -\\bar{\\delta}\\phi_0=(\\pi-2\\alpha)\\phi_0+2\\rho\\phi_1-\\kappa\\phi_2\\,, "
},
{
"math_id": 124,
"text": " D\\phi_2 -\\bar{\\delta}\\phi_1=-\\lambda\\phi_0+2\\pi\\phi_1+(\\rho-2\\varepsilon)\\phi_2\\,, "
},
{
"math_id": 125,
"text": " \\Delta\\phi_0-\\delta\\phi_1=(2\\gamma-\\mu)\\phi_0-2\\tau\\phi_1+\\sigma\\phi_2\\,, "
},
{
"math_id": 126,
"text": "\\Delta\\phi_1-\\delta\\phi_2=\\nu\\phi_0-2\\mu\\phi_1+(2\\beta-\\tau)\\phi_2\\,, "
},
{
"math_id": 127,
"text": "\\Phi_{ij}=\\,2\\,\\phi_i\\,\\overline{\\phi_j}\\,,\\quad (i,j\\in\\{0,1,2\\})\\,."
},
{
"math_id": 128,
"text": "\\Phi_{ij}=2\\,\\phi_i\\, \\overline{\\phi_j}"
},
{
"math_id": 129,
"text": "\\Phi_{ij}=\\,\\text{Tr}\\,(\\digamma_i \\,\\bar{\\digamma}_j)"
},
{
"math_id": 130,
"text": "\\digamma_i (i\\in\\{0,1,2 \\})"
},
{
"math_id": 131,
"text": "\\Psi_4 = -C_{\\alpha\\beta\\gamma\\delta} n^\\alpha \\bar{m}^\\beta n^\\gamma \\bar{m}^\\delta"
},
{
"math_id": 132,
"text": "(+,-,-,-)"
},
{
"math_id": 133,
"text": "R_{\\alpha\\beta}=0"
},
{
"math_id": 134,
"text": "C_{\\alpha\\beta\\gamma\\delta} = R_{\\alpha\\beta\\gamma\\delta}"
},
{
"math_id": 135,
"text": "l^{\\mu} = \\frac{1}{\\sqrt{2}} \\left( \\hat{t} + \\hat{r} \\right)\\ ,"
},
{
"math_id": 136,
"text": "n^{\\mu} = \\frac{1}{\\sqrt{2}} \\left( \\hat{t} - \\hat{r} \\right)\\ ,"
},
{
"math_id": 137,
"text": "m^{\\mu} = \\frac{1}{\\sqrt{2}} \\left( \\hat{\\theta} + i\\hat{\\phi} \\right)\\ ."
},
{
"math_id": 138,
"text": " \\frac{1}{4}\\left( \\ddot{h}_{\\hat{\\theta}\\hat{\\theta}} - \\ddot{h}_{\\hat{\\phi}\\hat{\\phi}} \\right) = -R_{\\hat{t}\\hat{\\theta}\\hat{t}\\hat{\\theta}} = -R_{\\hat{t}\\hat{\\phi}\\hat{r}\\hat{\\phi}} = -R_{\\hat{r}\\hat{\\theta}\\hat{r}\\hat{\\theta}} = R_{\\hat{t}\\hat{\\phi}\\hat{t}\\hat{\\phi}} = R_{\\hat{t}\\hat{\\theta}\\hat{r}\\hat{\\theta}} = R_{\\hat{r}\\hat{\\phi}\\hat{r}\\hat{\\phi}}\\ ,"
},
{
"math_id": 139,
"text": " \\frac{1}{2} \\ddot{h}_{\\hat{\\theta}\\hat{\\phi}} = -R_{\\hat{t}\\hat{\\theta}\\hat{t}\\hat{\\phi}} = -R_{\\hat{r}\\hat{\\theta}\\hat{r}\\hat{\\phi}} = R_{\\hat{t}\\hat{\\theta}\\hat{r}\\hat{\\phi}} = R_{\\hat{r}\\hat{\\theta}\\hat{t}\\hat{\\phi}}\\ ,"
},
{
"math_id": 140,
"text": "\\hat{r}"
},
{
"math_id": 141,
"text": " \\Psi_4 = \\frac{1}{2}\\left( \\ddot{h}_{\\hat{\\theta} \\hat{\\theta}} - \\ddot{h}_{\\hat{\\phi} \\hat{\\phi}} \\right) + i \\ddot{h}_{\\hat{\\theta}\\hat{\\phi}} = -\\ddot{h}_+ + i \\ddot{h}_\\times\\ . "
},
{
"math_id": 142,
"text": "h_+"
},
{
"math_id": 143,
"text": "h_\\times"
},
{
"math_id": 144,
"text": "\\Psi_4(t,r,\\theta,\\phi) = - \\frac{1}{r\\sqrt{2}} \\sum_{l=2}^{\\infty} \\sum_{m=-l}^l \\left[ {}^{(l+2)}I^{lm}(t-r) -i\\ {}^{(l+2)}S^{lm}(t-r) \\right] {}_{-2}Y_{lm}(\\theta,\\phi)\\ . "
},
{
"math_id": 145,
"text": "{}^{(l)}G(t) = \\left( \\frac{d}{dt} \\right)^l G(t)\\ ."
},
{
"math_id": 146,
"text": "I^{lm}"
},
{
"math_id": 147,
"text": "S^{lm}"
},
{
"math_id": 148,
"text": "{}_{-2}Y_{lm}"
}
] |
https://en.wikipedia.org/wiki?curid=6176811
|
61768338
|
Stromquist–Woodall theorem
|
The Stromquist–Woodall theorem is a theorem in fair division and measure theory. Informally, it says that, for any cake, for any "n" people with different tastes, and for any fraction "w", there exists a subset of the cake that all people value at exactly a fraction "w" of the total cake value, and it can be cut using at most formula_0 cuts.
The theorem is about a circular 1-dimensional cake (a "pie"). Formally, it can be described as the interval [0,1] in which the two endpoints are identified. There are "n" continuous measures over the cake: formula_1; each measure represents the valuations of a different person over subsets of the cake. The theorem says that, for every weight formula_2, there is a subset formula_3, which all people value at exactly formula_4:
formula_5,
where formula_3 is a union of at most formula_6 intervals. This means that formula_0 cuts are sufficient for cutting the subset formula_3. If the cake is not circular (that is, the endpoints are not identified), then formula_3 may be the union of up to formula_7 intervals, in case one interval is adjacent to 0 and one other interval is adjacent to 1.
Proof sketch.
Let formula_8 be the subset of all weights for which the theorem is true. Then:
1. formula_9, since we can take formula_10.
2. If formula_11, then also formula_12, since we can take formula_13; if formula_3 is a union of at most formula_6 intervals, then so is formula_14.
3. formula_15 is a closed set.
4. If formula_11, then also formula_16; this is the interesting part of the proof (see below).
From 1-4, it follows that formula_17. In other words, the theorem is valid for "every" possible weight.
The proof of step 4 uses the ham sandwich theorem. Suppose formula_11, and let formula_3 be a corresponding subset that is a union of at most formula_6 intervals. Map the cake onto the moment curve in formula_20 via the function formula_18:
formula_19
For each person formula_23, define a measure on formula_20 by:
formula_21
Since formula_22, it holds that formula_24 for every formula_23. By the ham sandwich theorem, there exists a hyperplane that bisects all formula_7 measures simultaneously; writing formula_25 for the two half-spaces it determines, this means:
formula_26
Now let formula_27 and formula_28. By the definition of the formula_29:
formula_30
A hyperplane meets the moment curve in at most formula_7 points, so formula_34 and formula_35 together consist of at most formula_36 arcs of formula_31. Hence at least one of formula_37 and formula_28 is a union of at most formula_38 intervals, and one can take formula_39.
Tightness proof.
Stromquist and Woodall prove that the number formula_6 is tight if the weight formula_4 is either irrational, or rational with a reduced fraction formula_40 such that formula_41.
|
[
{
"math_id": 0,
"text": "2n-2"
},
{
"math_id": 1,
"text": "V_1,\\ldots,V_n"
},
{
"math_id": 2,
"text": "w \\in [0,1]"
},
{
"math_id": 3,
"text": "C_w"
},
{
"math_id": 4,
"text": "w"
},
{
"math_id": 5,
"text": "\\forall i = 1,\\ldots,n: \\,\\,\\,\\,\\, V_i(C_w)=w"
},
{
"math_id": 6,
"text": "n-1"
},
{
"math_id": 7,
"text": "n"
},
{
"math_id": 8,
"text": "W \\subseteq [0,1]"
},
{
"math_id": 9,
"text": "1 \\in W"
},
{
"math_id": 10,
"text": "C_1 := C"
},
{
"math_id": 11,
"text": "w\\in W"
},
{
"math_id": 12,
"text": "1-w \\in W"
},
{
"math_id": 13,
"text": "C_{1-w} := C\\smallsetminus C_w"
},
{
"math_id": 14,
"text": "C_{1-w}"
},
{
"math_id": 15,
"text": "W"
},
{
"math_id": 16,
"text": "w/2 \\in W"
},
{
"math_id": 17,
"text": "W=[0,1]"
},
{
"math_id": 18,
"text": "f: C \\to \\mathbb{R}^n"
},
{
"math_id": 19,
"text": "f(t) = (t, t^2, \\ldots, t^n)\\,\\,\\,\\,\\,\\,t\\in[0,1]"
},
{
"math_id": 20,
"text": "\\mathbb{R}^n"
},
{
"math_id": 21,
"text": "U_i(Y) = V_i(f^{-1}(Y) \\cap C_w)\\,\\,\\,\\,\\,\\,\\,\\,\\, Y\\subseteq \\mathbb{R}^n"
},
{
"math_id": 22,
"text": "f^{-1}(\\mathbb{R}^n) = C"
},
{
"math_id": 23,
"text": "i"
},
{
"math_id": 24,
"text": "U_i(\\mathbb{R}^n) = w"
},
{
"math_id": 25,
"text": "H, H'"
},
{
"math_id": 26,
"text": "\\forall i = 1,\\ldots,n: \\,\\,\\,\\,\\, U_i(H)=U_i(H')=w/2"
},
{
"math_id": 27,
"text": "M=f^{-1}(H)\\cap C_w"
},
{
"math_id": 28,
"text": "M'=f^{-1}(H')\\cap C_w"
},
{
"math_id": 29,
"text": "U_i"
},
{
"math_id": 30,
"text": "\\forall i = 1,\\ldots,n: \\,\\,\\,\\,\\, V_i(M)=V_i(M')=w/2"
},
{
"math_id": 31,
"text": "f(C_w)"
},
{
"math_id": 32,
"text": "H"
},
{
"math_id": 33,
"text": "H'"
},
{
"math_id": 34,
"text": "H\\cap f(C_w)"
},
{
"math_id": 35,
"text": "H'\\cap f(C_w)"
},
{
"math_id": 36,
"text": "2n-1"
},
{
"math_id": 37,
"text": "M"
},
{
"math_id": 38,
"text": "n-1 "
},
{
"math_id": 39,
"text": "C_{w/2}=M"
},
{
"math_id": 40,
"text": "r/s"
},
{
"math_id": 41,
"text": "s\\geq n"
},
{
"math_id": 42,
"text": "w=1/n"
},
{
"math_id": 43,
"text": "(n-1)(n+1)"
},
{
"math_id": 44,
"text": "P_1,\\ldots,P_{(n-1)(n+1)}"
},
{
"math_id": 45,
"text": "(n+1)"
},
{
"math_id": 46,
"text": "P_{i},P_{i+(n-1)},\\ldots,P_{i+n(n-1)}"
},
{
"math_id": 47,
"text": "P_{i+k(n-1)}"
},
{
"math_id": 48,
"text": "1/(n+1)"
},
{
"math_id": 49,
"text": "u_i"
},
{
"math_id": 50,
"text": "1/n"
},
{
"math_id": 51,
"text": "2(n-1)"
},
{
"math_id": 52,
"text": "1/\\big((n+1)(n-1)\\big)"
}
] |
https://en.wikipedia.org/wiki?curid=61768338
|
617831
|
Semicircle
|
Geometric shape
In mathematics (and more specifically geometry), a semicircle is a one-dimensional locus of points that forms half of a circle. It is a circular arc that measures 180° (equivalently, π radians, or a half-turn). It only has one line of symmetry (reflection symmetry).
In non-technical usage, the term "semicircle" is sometimes used to refer to either a closed curve that also includes the diameter segment from one end of the arc to the other or to the half-disk, which is a two-dimensional geometric region that further includes all the interior points.
By Thales' theorem, any triangle inscribed in a semicircle with a vertex at each of the endpoints of the semicircle and the third vertex elsewhere on the semicircle is a right triangle, with a right angle at the third vertex.
All lines intersecting the semicircle perpendicularly are concurrent at the center of the circle containing the given semicircle.
Arithmetic and geometric means.
A semicircle can be used to construct the arithmetic and geometric means of two lengths using straight-edge and compass. For a semicircle with a diameter of "a" + "b", the length of its radius is the arithmetic mean of "a" and "b" (since the radius is half of the diameter).
The geometric mean can be found by dividing the diameter into two segments of lengths "a" and "b", and then connecting their common endpoint to the semicircle with a segment perpendicular to the diameter. The length of the resulting segment is the geometric mean. This can be proven by applying the Pythagorean theorem to three similar right triangles, each having as vertices the point where the perpendicular touches the semicircle and two of the three endpoints of the segments of lengths "a" and "b".
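A minimal numeric sketch of this construction (the function name is illustrative): the radius is the arithmetic mean of "a" and "b", and the Pythagorean theorem applied to the triangle formed by the center, the foot of the perpendicular, and the point where it meets the arc yields the geometric mean.
import math

def semicircle_means(a, b):
    am = (a + b) / 2              # radius = arithmetic mean of a and b
    d = am - a                    # center-to-foot distance along the diameter
    gm = math.sqrt(am**2 - d**2)  # height of the perpendicular = geometric mean
    return am, gm

print(semicircle_means(2, 8))     # (5.0, 4.0); indeed sqrt(2*8) = 4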
The construction of the geometric mean can be used to transform any rectangle into a square of the same area, a problem called the quadrature of a rectangle. The side length of the square is the geometric mean of the side lengths of the rectangle. More generally, it is used as a lemma in a general method for transforming any polygonal shape into a similar copy of itself with the area of any other given polygonal shape.
Farey diagram.
The Farey sequence of order "n" is the sequence of completely reduced fractions which when in lowest terms have denominators less than or equal to "n", arranged in order of increasing size. With a restricted definition, each Farey sequence starts with the value 0, denoted by the fraction 0/1, and ends with the fraction 1/1. Ford circles can be constructed tangent to their neighbours, and to the x-axis at these points. Semicircles joining adjacent points on the x-axis pass through the points of contact at right angles.
Equation.
The equation of a semicircle with midpoint formula_0 on the diameter between its endpoints and which is entirely concave from below is
formula_1
If it is entirely concave from above, the equation is
formula_2
Arbelos.
An arbelos is a region in the plane bounded by three semicircles connected at the corners, all on the same side of a straight line (the "baseline") that contains their diameters.
|
[
{
"math_id": 0,
"text": "(x_0,y_0)"
},
{
"math_id": 1,
"text": "y=y_0+\\sqrt{r^2-(x-x_0)^2}"
},
{
"math_id": 2,
"text": "y=y_0-\\sqrt{r^2-(x-x_0)^2}"
}
] |
https://en.wikipedia.org/wiki?curid=617831
|
6178477
|
BB84
|
Quantum key distribution protocol
BB84 is a quantum key distribution scheme developed by Charles Bennett and Gilles Brassard in 1984. It is the first quantum cryptography protocol. The protocol is provably secure assuming a perfect implementation, relying on two conditions: (1) the quantum property that information gain is only possible at the expense of disturbing the signal if the two states one is trying to distinguish are not orthogonal (see no-cloning theorem); and (2) the existence of an authenticated public classical channel. It is usually explained as a method of securely communicating a private key from one party to another for use in one-time pad encryption.
The proof of BB84 depends on a perfect implementation. Side channel attacks exist, taking advantage of non-quantum sources of information. Since this information is non-quantum, it can be intercepted without measuring or cloning quantum particles.
Description.
In the BB84 scheme, Alice wishes to send a private key to Bob. She begins with two strings of bits, formula_0 and formula_1, each formula_2 bits long. She then encodes these two strings as a tensor product of formula_2 qubits:
formula_3
where formula_4 and formula_5 are the formula_6-th bits of formula_0 and formula_1 respectively. Together, formula_7 give us an index into the following four qubit states:
formula_8
formula_9
formula_10
formula_11
Note that the bit formula_5 is what decides which basis formula_4 is encoded in (either in the computational basis or the Hadamard basis). The qubits are now in states that are not mutually orthogonal, and thus it is impossible to distinguish all of them with certainty without knowing formula_1.
Alice sends formula_12 over a public and authenticated quantum channel formula_13 to Bob. Bob receives a state formula_14, where formula_13 represents both the effects of noise in the channel and eavesdropping by a third party we'll call Eve. After Bob receives the string of qubits, both Bob and Eve have their own states. However, since only Alice knows formula_1, it is virtually impossible for either Bob or Eve to distinguish the states of the qubits. Also, after Bob has received the qubits, we know that Eve cannot be in possession of a copy of the qubits sent to Bob, by the no-cloning theorem, unless she has made measurements. Her measurements, however, risk disturbing a particular qubit with probability 1/2 if she guesses the wrong basis.
Bob proceeds to generate a string of random bits formula_15 of the same length as formula_1 and then measures the qubits he has received from Alice, obtaining a bit string formula_16. At this point, Bob announces publicly that he has received Alice's transmission. Alice then knows she can now safely announce formula_1, i.e., the bases in which the qubits were prepared. Bob communicates over a public channel with Alice to determine which formula_5 and formula_17 are not equal. Both Alice and Bob now discard the bits in formula_0 and formula_16 where formula_1 and formula_15 do not match.
From the remaining formula_18 bits where both Alice and Bob measured in the same basis, Alice randomly chooses formula_19 bits and discloses her choices over the public channel. Both Alice and Bob announce these bits publicly and run a check to see whether more than a certain number of them agree. If this check passes, Alice and Bob proceed to use information reconciliation and privacy amplification techniques to create some number of shared secret keys. Otherwise, they cancel and start over.
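The sifting stage of the protocol can be sketched with purely classical bookkeeping. The following Python snippet is a minimal sketch (noiseless channel, no eavesdropper; measuring in the wrong basis is modeled as a uniformly random outcome, and the names bp and ap stand for formula_15 and formula_16); under these assumptions the sifted keys always agree, and a real run would continue with the public comparison, information reconciliation, and privacy amplification described above.
import random

n = 32                                          # number of qubits (illustrative)
a = [random.randint(0, 1) for _ in range(n)]    # Alice's data bits
b = [random.randint(0, 1) for _ in range(n)]    # Alice's basis choices
bp = [random.randint(0, 1) for _ in range(n)]   # Bob's random basis choices

# Measurement: the matching basis recovers Alice's bit exactly;
# the wrong basis yields a uniformly random outcome.
ap = [a[i] if b[i] == bp[i] else random.randint(0, 1) for i in range(n)]

# Sifting: both sides discard the positions where the bases disagree.
key_alice = [a[i] for i in range(n) if b[i] == bp[i]]
key_bob = [ap[i] for i in range(n) if b[i] == bp[i]]
assert key_alice == key_bob   # always holds without noise or eavesdropping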
|
[
{
"math_id": 0,
"text": "a"
},
{
"math_id": 1,
"text": "b"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "|\\psi\\rangle = \\bigotimes_{i=1}^{n}|\\psi_{a_ib_i}\\rangle,"
},
{
"math_id": 4,
"text": "a_i"
},
{
"math_id": 5,
"text": "b_i"
},
{
"math_id": 6,
"text": "i"
},
{
"math_id": 7,
"text": "a_ib_i"
},
{
"math_id": 8,
"text": "|\\psi_{00}\\rangle = |0\\rangle,"
},
{
"math_id": 9,
"text": "|\\psi_{10}\\rangle = |1\\rangle,"
},
{
"math_id": 10,
"text": "|\\psi_{01}\\rangle = |+\\rangle = \\frac{1}{\\sqrt{2}}|0\\rangle + \\frac{1}{\\sqrt{2}}|1\\rangle,"
},
{
"math_id": 11,
"text": "|\\psi_{11}\\rangle = |-\\rangle = \\frac{1}{\\sqrt{2}}|0\\rangle - \\frac{1}{\\sqrt{2}}|1\\rangle."
},
{
"math_id": 12,
"text": "|\\psi\\rangle"
},
{
"math_id": 13,
"text": "\\mathcal{E}"
},
{
"math_id": 14,
"text": "\\mathcal{E}(\\rho) = \\mathcal{E}(|\\psi\\rangle\\langle\\psi|)"
},
{
"math_id": 15,
"text": "b'"
},
{
"math_id": 16,
"text": "a'"
},
{
"math_id": 17,
"text": "b'_i"
},
{
"math_id": 18,
"text": "k"
},
{
"math_id": 19,
"text": "k/2"
}
] |
https://en.wikipedia.org/wiki?curid=6178477
|
6178951
|
Principle of maximum work
|
In the history of science, the principle of maximum work was a postulate concerning the relationship between chemical reactions, heat evolution, and the potential work produced therefrom. The principle was developed in approximate form in 1875 by French chemist Marcellin Berthelot, in the field of thermochemistry, and then in 1876 by American mathematical physicist Willard Gibbs, in the field of thermodynamics, in a more accurate form. Berthelot's version was essentially that every pure chemical reaction is accompanied by evolution of heat, and that this yields the maximum amount of work. The effects of irreversibility, however, showed this version to be incorrect. This was rectified, in thermodynamics, by incorporating the concept of entropy.
Overview.
Berthelot independently enunciated a generalization (commonly known as Berthelot's Third Principle, or Principle of Maximum Work), which may be briefly stated as: every pure chemical reaction is accompanied by evolution of heat. Whilst this principle is undoubtedly applicable to the great majority of chemical actions under ordinary conditions, it is subject to numerous exceptions, and cannot therefore be taken (as its authors originally intended) as a secure basis for theoretical reasoning on the connection between thermal effect and chemical affinity. The existence of reactions which are reversible on slight alteration of conditions at once invalidates the principle, for if the action proceeding in one direction evolves heat, it must absorb heat when proceeding in the reverse direction. As the principle was abandoned even by its authors, it is now only of historical importance, although for many years it exerted considerable influence on thermochemical research.
Thus, to summarize, the principle of maximum work was proposed in 1875 by the French chemist Marcellin Berthelot, stating that chemical reactions will tend to yield the maximum amount of chemical energy in the form of work as the reaction progresses.
In 1876, however, through the works of Willard Gibbs and others to follow, the work principle was found to be a particular case of a more general statement:
The principle of work was a precursor to the development of the thermodynamic concept of free energy.
Thermochemistry.
In thermodynamics, the Gibbs free energy or Helmholtz free energy is essentially the energy of a chemical reaction "free" or available to do external work. Historically, the "free energy" is a more advanced and accurate replacement for the thermochemistry term “affinity” used by chemists of olden days to describe the “force” that caused chemical reactions. The term dates back to at least the time of Albertus Magnus in 1250.
According to Nobelist and chemical engineering professor Ilya Prigogine: “as motion was explained by the Newtonian concept of force, chemists wanted a similar concept of ‘driving force’ for chemical change. Why do chemical reactions occur, and why do they stop at certain points? Chemists called the ‘force’ that caused chemical reactions affinity, but it lacked a clear definition.”
During the entire 18th century, the dominant view in regard to heat and light was that put forward by Isaac Newton, called the “Newtonian hypothesis”, which stated that light and heat are forms of matter attracted or repelled by other forms of matter, with forces analogous to gravitation or to chemical affinity.
In the 19th century, the French chemist Marcellin Berthelot and the Danish chemist Julius Thomsen had attempted to quantify chemical affinity using heats of reaction. In 1875, after quantifying the heats of reaction for a large number of compounds, Berthelot proposed the “principle of maximum work” in which all chemical changes occurring without intervention of outside energy tend toward the production of bodies or of a system of bodies which liberate heat.
Thermodynamics.
With the development of the first two laws of thermodynamics in the 1850s and 60s, heats of reaction and the work associated with these processes were given a more accurate mathematical basis. In 1876, Willard Gibbs unified all of this in his 300-page "On the Equilibrium of Heterogeneous Substances". Suppose, for example, we have a general thermodynamic system, called the "primary" system, and that we mechanically connect it to a "reversible work source". A reversible work source is a system which, when it does work, or has work done to it, does not change its entropy. It is therefore not a heat engine and does not suffer dissipation due to friction or heat exchanges. A simple example would be a frictionless spring, or a weight on a pulley in a gravitational field. Suppose further that we thermally connect the primary system to a third system, a "reversible heat source". A reversible heat source may be thought of as a heat source in which all transformations are reversible. For such a source, the heat energy δQ added will be equal to the temperature of the source (T) times the increase in its entropy. (If it were an irreversible heat source, the entropy increase would be larger than δQ/T.)
Define:
We may now make the following statements
Eliminating formula_0, formula_1, and formula_2 gives the following equation:
formula_3
When the primary system is reversible, the equality will hold and the amount of work delivered will be a maximum. Note that this will hold for "any" reversible system which has the same values of "dU" and "dS" .
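As a numeric sketch of this bound, consider the textbook case of a reversible isothermal expansion of an ideal gas (an illustrative assumption, not part of the original discussion): the internal energy does not change, so the deliverable work equals T dS.
import math

R = 8.314                  # gas constant, J/(mol*K)
n_mol, T = 1.0, 300.0      # amount of gas and temperature (illustrative)
V1, V2 = 1.0, 2.0          # only the volume ratio matters

dU = 0.0                                 # ideal gas at constant temperature
dS = n_mol * R * math.log(V2 / V1)       # entropy change of the gas
W_max = -(dU - T * dS)                   # the bound above, with equality
print(W_max)                             # about 1728.85 J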
|
[
{
"math_id": 0,
"text": "dS_w"
},
{
"math_id": 1,
"text": "\\delta Q"
},
{
"math_id": 2,
"text": "dS_h"
},
{
"math_id": 3,
"text": "\\delta W\\le -(dU-TdS)"
}
] |
https://en.wikipedia.org/wiki?curid=6178951
|
61793916
|
Strip packing problem
|
The strip packing problem is a 2-dimensional geometric minimization problem.
Given a set of axis-aligned rectangles and a strip of bounded width and infinite height, determine an overlapping-free packing of the rectangles into the strip, minimizing its height.
This problem is a cutting and packing problem and is classified as an "Open Dimension Problem" according to Wäscher et al.
This problem arises in the area of scheduling, where it models jobs that require a contiguous portion of the memory over a given time period. Another example is the area of industrial manufacturing, where rectangular pieces need to be cut out of a sheet of material (e.g., cloth or paper) that has a fixed width but infinite length, and one wants to minimize the wasted material.
This problem was first studied in 1980. It is strongly NP-hard, and there exists no polynomial-time approximation algorithm with a ratio smaller than formula_0 unless formula_1. However, the best approximation ratio achieved so far (by a polynomial-time algorithm by Harren et al.) is formula_2, leaving open the question of whether there is an algorithm with approximation ratio formula_0.
Definition.
An instance formula_3 of the strip packing problem consists of a strip with width formula_4 and infinite height, as well as a set formula_5 of rectangular items.
Each item formula_6 has a width formula_7 and a height formula_8.
A packing of the items is a mapping that maps each lower-left corner of an item formula_6 to a position formula_9 inside the strip.
An inner point of a placed item formula_6 is a point from the set formula_10.
Two (placed) items overlap if they share an inner point.
The height of the packing is defined as formula_11.
The objective is to find an overlapping-free packing of the items inside the strip while minimizing the height of the packing.
This definition is used for all polynomial-time algorithms. For pseudo-polynomial time and FPT algorithms, the definition is slightly changed to simplify notation. In this case, all appearing sizes are integral; in particular, the width of the strip is given by an arbitrary integer larger than 1. Note that these two definitions are equivalent.
Variants.
There are several variants of the strip packing problem that have been studied. These variants concern the objects' geometry, the problem's dimension, the rotateability of the items, and the structure of the packing.
Geometry: In the standard variant of this problem, the set of given items consists of rectangles.
In an often considered subcase, all the items have to be squares. This variant was already considered in the first paper about strip packing.
Additionally, variants where the shapes are circular or even irregular have been studied. In the latter case, it is referred to as "irregular strip packing".
Dimension:
When not mentioned differently, the strip packing problem is a 2-dimensional problem. However, it also has been studied in three or even more dimensions. In this case, the objects are hyperrectangles, and the strip is open-ended in one dimension and bounded in the residual ones.
Rotation: In the classical strip packing problem, the items are not allowed to be rotated. However, variants have been studied where rotating by 90 degrees or even an arbitrary angle is allowed.
Structure:
In the general strip packing problem, the structure of the packing is irrelevant.
However, there are applications that have explicit requirements on the structure of the packing. One of these requirements is to be able to cut the items from the strip by horizontal or vertical edge-to-edge cuts.
Packings that allow this kind of cutting are called guillotine packing.
Hardness.
The strip packing problem contains the bin packing problem as a special case when all the items have the same height 1.
For this reason, it is strongly NP-hard, and there can be no polynomial time approximation algorithm that has an approximation ratio smaller than formula_0 unless formula_1.
Furthermore, unless formula_1, there cannot be a pseudo-polynomial time algorithm that has an approximation ratio smaller than formula_12, which can be proven by a reduction from the strongly NP-complete 3-partition problem.
Note that both lower bounds formula_0 and formula_12 also hold for the case that a rotation of the items by 90 degrees is allowed.
Additionally, it was proven by Ashok et al. that strip packing is W[1]-hard when parameterized by the height of the optimal packing.
Properties of optimal solutions.
There are two trivial lower bounds on optimal solutions.
The first is the height of the largest item.
Define formula_13.
Then it holds that
formula_14.
Another lower bound is given by the total area of the items.
Define formula_15 then it holds that
formula_16.
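Both trivial bounds are immediate to compute. A minimal Python sketch (items given as (width, height) pairs; names are illustrative):
def trivial_lower_bound(items, W):
    h_max = max(h for (w, h) in items)           # height of the largest item
    area = sum(w * h for (w, h) in items) / W    # total item area over strip width
    return max(h_max, area)

# Example: the tall item dominates the area bound of 0.625.
print(trivial_lower_bound([(0.5, 0.4), (0.5, 0.4), (0.25, 0.9)], 1.0))   # 0.9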
The following two lower bounds take notice of the fact that certain items cannot be placed next to each other in the strip, and can be computed in formula_17.
For the first lower bound assume that the items are sorted by non-increasing height. Define formula_18. For each formula_19, define formula_20 to be the first index such that formula_21. Then it holds that
formula_22.
For the second lower bound, partition the set of items into three sets. Let formula_23 and define formula_24, formula_25, and formula_26. Then it holds that
formula_27, where formula_28 for each formula_29.
On the other hand, Steinberg has shown that the height of an optimal solution can be upper bounded by
formula_30
More precisely, he showed that, given formula_31 and formula_32, the items formula_5 can be placed inside a box with width formula_33 and height formula_34 if
formula_35, where formula_28.
Polynomial time approximation algorithms.
Since this problem is NP-hard, approximation algorithms have been studied for this problem.
Most of the heuristic approaches have an approximation ratio between formula_36 and formula_37.
Finding an algorithm with a ratio below formula_37 seems complicated, and the corresponding algorithms become increasingly complex in both their running time and their descriptions. The smallest approximation ratio achieved so far is formula_38.
Bottom-up left-justified (BL).
This algorithm was first described by Baker et al. It works as follows:
Let formula_39 be a sequence of rectangular items.
The algorithm iterates the sequence in the given order.
For each considered item formula_40, it searches for the bottom-most position to place it and then shifts it as far to the left as possible.
Hence, it places formula_41 at the bottom-most left-most possible coordinate formula_42 in the strip.
This algorithm has the following properties:
Next-fit decreasing-height (NFDH).
This algorithm was first described by Coffman et al. in 1980 and works as follows:
Let formula_55 be the given set of rectangular items.
First, the algorithm sorts the items by order of nonincreasing height.
Then, starting at position formula_56, the algorithm places the items next to each other in the strip until the next item would overlap the right border of the strip.
At this point, the algorithm defines a new level at the top of the tallest item in the current level and places the items next to each other in this new level.
This algorithm has the following properties:
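A minimal Python sketch of the level logic just described (items as (width, height) pairs, each of width at most W; names are illustrative):
def nfdh(items, W):
    items = sorted(items, key=lambda wh: wh[1], reverse=True)  # nonincreasing height
    placements, x, y, level_height = [], 0.0, 0.0, 0.0
    for w, h in items:
        if x + w > W:                    # item would overlap the right border:
            y += level_height            # open a new level on top of the tallest
            x, level_height = 0.0, 0.0   # (i.e. first) item of the current one
        placements.append((x, y))        # lower-left corner of the item
        level_height = max(level_height, h)
        x += w
    return y + level_height, placements

height, _ = nfdh([(0.25, 0.9), (0.5, 0.4), (0.5, 0.4)], 1.0)
print(height)   # 1.3: one level of height 0.9 plus one of height 0.4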
First-fit decreasing-height (FFDH).
This algorithm, first described by Coffman et al. in 1980, works similar to the NFDH algorithm.
However, when placing the next item, the algorithm scans the levels from bottom to top and places the item in the first level on which it will fit.
A new level is only opened if the item does not fit in any previous ones.
This algorithm has the following properties:
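The following sketch adapts the NFDH code above to the first-fit rule; each level only needs to remember the width already used in it and its height.
def ffdh(items, W):
    items = sorted(items, key=lambda wh: wh[1], reverse=True)  # nonincreasing height
    levels = []                      # one [used_width, height] pair per level
    for w, h in items:
        for level in levels:         # scan existing levels from bottom to top
            if level[0] + w <= W:    # first level with enough residual width
                level[0] += w
                break
        else:                        # no level fits: open a new one on top
            levels.append([w, h])
    return sum(h for _, h in levels) # packing height = sum of level heights

print(ffdh([(0.6, 0.5), (0.3, 0.5), (0.6, 0.4), (0.3, 0.2)], 1.0))   # 0.9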
The split-fit algorithm (SF).
This algorithm was first described by Coffman et al.
For a given set of items formula_55 and strip with width formula_74, it works as follows:
This algorithm has the following properties:
Sleator's algorithm.
For a given set of items formula_55 and strip with width formula_74, it works as follows:
This algorithm has the following properties:
The split algorithm (SP).
This algorithm is an extension of Sleator's approach and was first described by Golan.
It places the items in nonincreasing order of width.
The intuitive idea is to split the strip into sub-strips while placing some items.
Whenever possible, the algorithm places the current item formula_95 side by side with an already placed item formula_96.
In this case, it splits the corresponding sub-strip into two pieces: one containing the first item formula_96 and the other containing the current item formula_95.
If this is not possible, it places formula_95 on top of an already placed item and does not split the sub-strip.
This algorithm creates a set S of sub-strips. For each sub-strip s ∈ S, we know its lower left corner s.xposition and s.yposition, its width s.width, the horizontal lines parallel to the upper and lower border of the item placed last inside this sub-strip, s.upper and s.lower, as well as the width s.itemWidth of that item.
function Split Algorithm (SP) is
    input: items I, width of the strip W
    output: A packing of the items
    Sort I in nonincreasing order of widths;
    Define empty list S of sub-strips;
    Define a new sub-strip s with s.xposition = 0, s.yposition = 0, s.width = W, s.lower = 0, s.upper = 0, s.itemWidth = W;
    Add s to S;
    while I not empty do
        i := I.pop(); "Removes widest item from I"
        Define new list S_2 containing all the sub-strips with s.width - s.itemWidth ≥ i.width;
        "S_2 contains all sub-strips where i fits next to the already placed item"
        if S_2 is empty then
            "In this case, place the item on top of another one."
            Find the sub-strip s in S with smallest s.upper; "i.e. the least filled sub-strip"
            Place i at position (s.xposition, s.upper);
            Update s: s.lower := s.upper; s.upper := s.upper + i.height; s.itemWidth := i.width;
        else
            "In this case, place the item next to another one at the same level and split the corresponding sub-strip at this position."
            Find s ∈ S_2 with the smallest s.lower;
            Place i at position (s.xposition + s.itemWidth, s.lower);
            Remove s from S;
            Define two new sub-strips s1 and s2 with
                s1.xposition = s.xposition, s1.yposition = s.upper, s1.width = s.itemWidth, s1.lower = s.upper, s1.upper = s.upper, s1.itemWidth = s.itemWidth;
                s2.xposition = s.xposition + s.itemWidth, s2.yposition = s.lower, s2.width = s.width - s.itemWidth, s2.lower = s.lower, s2.upper = s.lower + i.height, s2.itemWidth = i.width;
            S.add(s1, s2);
    return
end function
This algorithm has the following properties:
Reverse-fit (RF).
This algorithm was first described by Schiermeyer.
The description of this algorithm needs some additional notation.
For a placed item formula_6, its lower left corner is denoted by formula_102 and its upper right corner by formula_103.
Given a set of items formula_5 and a strip of width formula_33, it works as follows:
This algorithm has the following properties:
Steinberg's algorithm (ST).
Steinberg's algorithm is recursive. Given a set of rectangular items formula_120 and a rectangular target region with width formula_74 and height formula_121, it proposes four reduction rules, each of which places some of the items and leaves a smaller rectangular region with the same properties as before with regard to the residual items.
Consider the following notation: given a set of items formula_120, we denote by formula_122 the tallest item height in formula_120, by formula_123 the largest item width appearing in formula_120, and by formula_124 the total area of these items.
Steinberg shows that if
formula_125, formula_126, and formula_127, where formula_128,
then all the items can be placed inside the target region of size formula_129.
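Assuming the condition takes its usual form — the widest item fits the box, the tallest item fits the box, and twice the total area is at most W·H minus the product of the clipped excesses (with x₊ denoting max(x, 0)) — a minimal Python checker reads:
def steinberg_condition(items, W, H):
    w_max = max(w for (w, h) in items)
    h_max = max(h for (w, h) in items)
    area = sum(w * h for (w, h) in items)
    plus = lambda x: max(x, 0.0)     # the (.)_+ operator
    return (w_max <= W and h_max <= H and
            2 * area <= W * H - plus(2 * w_max - W) * plus(2 * h_max - H))

print(steinberg_condition([(0.5, 0.5)] * 2, 1.0, 1.0))   # True: the two squares fit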
Each reduction rule produces a smaller target area and a subset of items that still have to be placed. If the condition above holds before a procedure starts, then it holds for the created subproblem as well.
Procedure 1: It can be applied if formula_130.
Procedure 2: It can be applied if the following conditions hold: formula_139, formula_140, and there exist two different items formula_141 with formula_142, formula_143, formula_144, formula_145 and formula_146.
Procedure 3: It can be applied if the following conditions hold: formula_139, formula_140, formula_150, and when sorting the items by decreasing width there exist an index formula_151 such that when defining formula_152 as the first formula_151 items it holds that
formula_153 as well as formula_154
Note that procedures 1 to 3 have a symmetric version when swapping the height and the width of the items and the target region.
Procedure 4: It can be applied if the following conditions hold: formula_139, formula_140, and there exists an item formula_6 such that formula_159.
This algorithm has the following properties:
Pseudo-polynomial time approximation algorithms.
To improve upon the lower bound of formula_0 for polynomial-time algorithms, pseudo-polynomial time algorithms for the strip packing problem have been considered.
When considering this type of algorithm, all sizes of the items and of the strip are given as integers. Furthermore, the width of the strip formula_33 is allowed to appear polynomially in the running time.
Note that this is no longer a polynomial running time, since in the given instance the width of the strip needs an encoding size of formula_163.
The pseudo-polynomial time algorithms that have been developed mostly use the same approach. It is shown that each optimal solution can be simplified and transformed into one that has one of a constant number of structures. The algorithm then iterates over all these structures and places the items inside using linear and dynamic programming. The best ratio accomplished so far is formula_164, while there cannot be a pseudo-polynomial time algorithm with a ratio better than formula_165 unless formula_1.
Online algorithms.
In the online variant of strip packing, the items arrive over time. When an item arrives, it has to be placed immediately before the next item is known. There are two types of online algorithms that have been considered. In the first variant, it is not allowed to alter the packing once an item is placed. In the second, items may be repacked when another item arrives. This variant is called the migration model.
The quality of an online algorithm is measured by the (absolute) competitive ratio
formula_166,
where formula_167 corresponds to the solution generated by the online algorithm and formula_168 corresponds to the size of the optimal solution.
In addition to the absolute competitive ratio, the asymptotic competitive ratio of online algorithms has been studied. For instances formula_169 with formula_170 it is defined as
formula_171.
Note that all the instances can be scaled such that formula_170.
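Since formula_168 is usually unknown in experiments, the ratio achieved on a concrete instance is often bounded from above by replacing the optimum with a lower bound. A minimal Python sketch, with an assumed function name and the standard lower bound max(h_max, AREA/W) on the optimum:

def ratio_upper_bound(packing_height, items, W):
    # items is a list of (width, height) pairs; packing_height is the
    # height A(I) produced by the (online) algorithm.
    h_max = max(h for w, h in items)
    area = sum(w * h for w, h in items)
    return packing_height / max(h_max, area / W)  # >= A(I)/OPT(I)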
The framework of Han et al. is applicable in the online setting if the online bin packing algorithm belongs to the class Super Harmonic. Thus, Seiden's online bin packing algorithm Harmonic++ implies an algorithm for online strip packing with asymptotic ratio 1.58889.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "3/2"
},
{
"math_id": 1,
"text": "P = NP"
},
{
"math_id": 2,
"text": "(5/3 + \\varepsilon)"
},
{
"math_id": 3,
"text": " I = (\\mathcal{I},W)"
},
{
"math_id": 4,
"text": "W = 1"
},
{
"math_id": 5,
"text": "\\mathcal{I}"
},
{
"math_id": 6,
"text": "i \\in \\mathcal{I}"
},
{
"math_id": 7,
"text": "w_i \\in (0,1] \\cap \\mathbb{Q}"
},
{
"math_id": 8,
"text": "h_i \\in (0,1] \\cap \\mathbb{Q}"
},
{
"math_id": 9,
"text": "(x_i,y_i) \\in ([0,1-w_i] \\cap \\mathbb{Q}) \\times \\mathbb{Q}_{\\geq 0} "
},
{
"math_id": 10,
"text": "\\mathrm{inn}(i) = \\{(x,y) \\in \\mathbb{Q} \\times \\mathbb{Q}| x_i < x < x_i + w_i, y_i < y < y_i + h_i\\}"
},
{
"math_id": 11,
"text": "\\max \\{y_i+h_i | i \\in \\mathcal{I}\\}"
},
{
"math_id": 12,
"text": "5/4"
},
{
"math_id": 13,
"text": "h_{\\max}(I) := \\max\\{h(i) | i \\in \\mathcal{I}\\}"
},
{
"math_id": 14,
"text": "OPT(I) \\geq h_{\\max}(I)"
},
{
"math_id": 15,
"text": "\\mathrm{AREA}(\\mathcal{I}) := \\sum_{i \\in \\mathcal{I}}h(i)w(i)"
},
{
"math_id": 16,
"text": "OPT(I) \\geq \\mathrm{AREA}(\\mathcal{I})/W"
},
{
"math_id": 17,
"text": "\\mathcal{O}(n \\log(n))"
},
{
"math_id": 18,
"text": "k := \\max \\{i : \\sum_{j = 1}^k w(j) \\leq W\\}"
},
{
"math_id": 19,
"text": "l > k "
},
{
"math_id": 20,
"text": "i(l) \\leq k"
},
{
"math_id": 21,
"text": " w(l) + \\sum_{j = 1}^{i(l)} w(j) > W"
},
{
"math_id": 22,
"text": "OPT(I) \\geq \\max \\{h(l) + h(i(l))| l >k \\wedge w(l) + \\sum_{j = 1}^{i(l)} w(j) > W\\}"
},
{
"math_id": 23,
"text": "\\alpha \\in [1, W/2]\\cap \\mathbb{N}"
},
{
"math_id": 24,
"text": "\\mathcal{I}_1(\\alpha) := \\{i \\in \\mathcal{I} | w(i) > W - \\alpha\\}"
},
{
"math_id": 25,
"text": "\\mathcal{I}_2(\\alpha) := \\{i \\in \\mathcal{I} | W - \\alpha \\geq w(i) > W/2\\}"
},
{
"math_id": 26,
"text": "\\mathcal{I}_3(\\alpha) := \\{i \\in \\mathcal{I} | W/2 \\geq w(i) > \\alpha \\}"
},
{
"math_id": 27,
"text": "\nOPT(I) \\geq \n\\max_{\\alpha \\in [1, W/2]\\cap \\mathbb{N}} \n\\Bigg\\{ \\sum_{i \\in \\mathcal{I}_1(\\alpha) \\cup \\mathcal{I}_2(\\alpha)} h(i) + \\left(\\frac{\\sum_{i \\in \\mathcal{I}_3(\\alpha) h(i)w(i) - \\sum_{i \\in \\mathcal{I}_2(\\alpha)}(W -w(i))h(i)}}{W}\\right)_+\n\\Bigg\\}"
},
{
"math_id": 28,
"text": "(x)_+ := \\max\\{x,0\\}"
},
{
"math_id": 29,
"text": "x \\in \\mathbb{R}"
},
{
"math_id": 30,
"text": "OPT(I) \\leq 2\\max\\{h_{\\max}(I),\\mathrm{AREA}(\\mathcal{I})/W\\}."
},
{
"math_id": 31,
"text": "W \\geq w_{\\max}(\\mathcal{I})"
},
{
"math_id": 32,
"text": "H \\geq h_{\\max}(I)"
},
{
"math_id": 33,
"text": "W"
},
{
"math_id": 34,
"text": "H"
},
{
"math_id": 35,
"text": " WH \\geq 2\\mathrm{AREA}(\\mathcal{I}) + (2w_{\\max}(\\mathcal{I}) - W)_+(2h_{\\max}(I) - H)_+"
},
{
"math_id": 36,
"text": "3"
},
{
"math_id": 37,
"text": "2"
},
{
"math_id": 38,
"text": "(5/3+\\varepsilon)"
},
{
"math_id": 39,
"text": " L "
},
{
"math_id": 40,
"text": " r \\in L "
},
{
"math_id": 41,
"text": " r "
},
{
"math_id": 42,
"text": " (x,y)"
},
{
"math_id": 43,
"text": " M > 0 "
},
{
"math_id": 44,
"text": " BL(L)/ OPT(L) > M "
},
{
"math_id": 45,
"text": " BL(L) "
},
{
"math_id": 46,
"text": " OPT(L) "
},
{
"math_id": 47,
"text": " BL(L)/ OPT(L) \\leq 3 "
},
{
"math_id": 48,
"text": " BL(L)/ OPT(L) \\leq 2 "
},
{
"math_id": 49,
"text": " \\delta > 0 "
},
{
"math_id": 50,
"text": " BL(L)/ OPT(L) > 3 - \\delta "
},
{
"math_id": 51,
"text": " BL(L)/ OPT(L) > 2 - \\delta "
},
{
"math_id": 52,
"text": " \\varepsilon \\in (0,1] "
},
{
"math_id": 53,
"text": " BL(L)/ OPT(L) > \\frac{12}{11 +\\varepsilon} "
},
{
"math_id": 54,
"text": " BL(L)/ OPT(L) > \\frac{4}{3 +\\varepsilon} "
},
{
"math_id": 55,
"text": " \\mathcal{I} "
},
{
"math_id": 56,
"text": " (0,0) "
},
{
"math_id": 57,
"text": " \\mathcal{O}(|\\mathcal{I}| \\log(|\\mathcal{I}|))"
},
{
"math_id": 58,
"text": "\\mathcal{O}(|\\mathcal{I}|)"
},
{
"math_id": 59,
"text": " NFDH(\\mathcal{I}) \\leq 2 OPT(\\mathcal{I}) + h_{\\max} \\leq 3 OPT(\\mathcal{I})"
},
{
"math_id": 60,
"text": " h_{\\max} "
},
{
"math_id": 61,
"text": " \\varepsilon > 0 "
},
{
"math_id": 62,
"text": " NFDH(\\mathcal{I}|) > (2-\\varepsilon) OPT(\\mathcal{I})."
},
{
"math_id": 63,
"text": " \\mathcal{O}(|\\mathcal{I}|^2)"
},
{
"math_id": 64,
"text": " |\\mathcal{I}|"
},
{
"math_id": 65,
"text": " FFDH(\\mathcal{I}) \\leq 1.7 OPT(\\mathcal{I}) + h_{\\max} \\leq 2.7 OPT(\\mathcal{I})"
},
{
"math_id": 66,
"text": " m \\geq 2 "
},
{
"math_id": 67,
"text": " w(i) \\leq W/m "
},
{
"math_id": 68,
"text": " i \\in \\mathcal{I} "
},
{
"math_id": 69,
"text": " FFDH(\\mathcal{I}) \\leq \\left(1 + 1/m\\right) OPT(\\mathcal{I}) + h_{\\max}"
},
{
"math_id": 70,
"text": " FFDH(\\mathcal{I}) > \\left(1 + 1/m -\\varepsilon\\right)OPT(\\mathcal{I})"
},
{
"math_id": 71,
"text": " FFDH(\\mathcal{I}) \\leq (3/2) OPT(\\mathcal{I}) + h_{\\max}"
},
{
"math_id": 72,
"text": " \\varepsilon >0"
},
{
"math_id": 73,
"text": " FFDH(\\mathcal{I}) > \\left(3/2-\\varepsilon\\right)OPT(\\mathcal{I})"
},
{
"math_id": 74,
"text": " W"
},
{
"math_id": 75,
"text": " m \\in \\mathbb{N} "
},
{
"math_id": 76,
"text": " W/m "
},
{
"math_id": 77,
"text": " \\mathcal{I}_{wide} "
},
{
"math_id": 78,
"text": " \\mathcal{I}_{narrow} "
},
{
"math_id": 79,
"text": " i \\in \\mathcal{I}"
},
{
"math_id": 80,
"text": " w(i) > W/(m+1) "
},
{
"math_id": 81,
"text": " w(i) \\leq W/(m+1) "
},
{
"math_id": 82,
"text": " W(m+1)/(m+2) "
},
{
"math_id": 83,
"text": " R "
},
{
"math_id": 84,
"text": " W/(m+2) "
},
{
"math_id": 85,
"text": " m "
},
{
"math_id": 86,
"text": " SF(\\mathcal{I}) \\leq (m+2)/(m+1)OPT(\\mathcal{I}) + 2h_{\\max}"
},
{
"math_id": 87,
"text": " m=1 "
},
{
"math_id": 88,
"text": " SF(\\mathcal{I}) \\leq (3/2) OPT(\\mathcal{I}) + 2h_{\\max}"
},
{
"math_id": 89,
"text": " SF(\\mathcal{I}) > \\left((m+2)/(m+1) -\\varepsilon\\right)OPT(\\mathcal{I})"
},
{
"math_id": 90,
"text": " W/2 "
},
{
"math_id": 91,
"text": " h_0 "
},
{
"math_id": 92,
"text": " h_l "
},
{
"math_id": 93,
"text": " h_r "
},
{
"math_id": 94,
"text": " A(\\mathcal{I}) \\leq 2 OPT(\\mathcal{I}) + h_{\\max}/2 \\leq 2.5 OPT(\\mathcal{I})"
},
{
"math_id": 95,
"text": " i "
},
{
"math_id": 96,
"text": " j "
},
{
"math_id": 97,
"text": " SP(\\mathcal{I}) \\leq 2 OPT(\\mathcal{I}) + h_{\\max} \\leq 3 OPT(\\mathcal{I})"
},
{
"math_id": 98,
"text": "\\varepsilon >0"
},
{
"math_id": 99,
"text": " SP(\\mathcal{I}) > (3-\\varepsilon) OPT(\\mathcal{I})"
},
{
"math_id": 100,
"text": "C>0"
},
{
"math_id": 101,
"text": " SP(\\mathcal{I}) > (2-\\varepsilon) OPT(\\mathcal{I})+C"
},
{
"math_id": 102,
"text": "(a_i,c_i)"
},
{
"math_id": 103,
"text": "(b_i,d_i)"
},
{
"math_id": 104,
"text": "W/2"
},
{
"math_id": 105,
"text": "H_0"
},
{
"math_id": 106,
"text": "h_{\\max}"
},
{
"math_id": 107,
"text": "h_1"
},
{
"math_id": 108,
"text": "H_0 + h_{\\max} + h_1"
},
{
"math_id": 109,
"text": "H_1"
},
{
"math_id": 110,
"text": "f"
},
{
"math_id": 111,
"text": "s"
},
{
"math_id": 112,
"text": "x_r := \\max(b_f,b_s)"
},
{
"math_id": 113,
"text": "x_r < W/2"
},
{
"math_id": 114,
"text": "f'"
},
{
"math_id": 115,
"text": "s'"
},
{
"math_id": 116,
"text": "h_2"
},
{
"math_id": 117,
"text": "h_2 \\leq h(s)"
},
{
"math_id": 118,
"text": "h_2 > h(s)"
},
{
"math_id": 119,
"text": " RF(\\mathcal{I}) \\leq 2 OPT(\\mathcal{I})"
},
{
"math_id": 120,
"text": " \\mathcal{I}"
},
{
"math_id": 121,
"text": " H"
},
{
"math_id": 122,
"text": " h_{\\max}(\\mathcal{I})"
},
{
"math_id": 123,
"text": " w_{\\max}(\\mathcal{I})"
},
{
"math_id": 124,
"text": " \\mathrm{AREA}(\\mathcal{I}) := \\sum_{i \\in \\mathcal{I}} w(i)h(i)"
},
{
"math_id": 125,
"text": " h_{\\max}(\\mathcal{I}) \\leq H "
},
{
"math_id": 126,
"text": " w_{\\max}(\\mathcal{I}) \\leq W "
},
{
"math_id": 127,
"text": " \\mathrm{AREA}(\\mathcal{I}) \\leq W\\cdot H - (2h_{\\max}(\\mathcal{I}) -h)_+(2w_{\\max}(\\mathcal{I}) - W)_+ "
},
{
"math_id": 128,
"text": "(a)_+ := \\max\\{0,a\\}"
},
{
"math_id": 129,
"text": " W \\times H "
},
{
"math_id": 130,
"text": " w_{\\max}(\\mathcal{I}') \\geq W/2"
},
{
"math_id": 131,
"text": " w(i) \\geq W/2"
},
{
"math_id": 132,
"text": "h_0"
},
{
"math_id": 133,
"text": " h(i) > H-h_0"
},
{
"math_id": 134,
"text": "\\mathcal{I}_H"
},
{
"math_id": 135,
"text": "H-h_0"
},
{
"math_id": 136,
"text": "w_0"
},
{
"math_id": 137,
"text": "W-w_0"
},
{
"math_id": 138,
"text": "H - h_0"
},
{
"math_id": 139,
"text": " w_{\\max}(\\mathcal{I}) \\leq W/2"
},
{
"math_id": 140,
"text": " h_{\\max}(\\mathcal{I}) \\leq H/2"
},
{
"math_id": 141,
"text": "i,i' \\in \\mathcal{I}"
},
{
"math_id": 142,
"text": " w(i) \\geq W/4"
},
{
"math_id": 143,
"text": " w(i') \\geq W/4"
},
{
"math_id": 144,
"text": " h(i) \\geq H/4"
},
{
"math_id": 145,
"text": " h(i') \\geq H/4"
},
{
"math_id": 146,
"text": " 2(\\mathrm{AREA}(\\mathcal{I}) - w(i)h(i) -w(i')h(i')) \\leq (W- \\max\\{w(i),w(i')\\})H"
},
{
"math_id": 147,
"text": "i"
},
{
"math_id": 148,
"text": "i'"
},
{
"math_id": 149,
"text": " W- \\max\\{w(i),w(i')\\}"
},
{
"math_id": 150,
"text": " |\\mathcal{I}| > 1"
},
{
"math_id": 151,
"text": "m "
},
{
"math_id": 152,
"text": "\\mathcal{I'}"
},
{
"math_id": 153,
"text": " \\mathrm{AREA}(\\mathcal{I})- WH/4 \\leq \\mathrm{AREA}(\\mathcal{I'}) \\leq 3WH/8"
},
{
"math_id": 154,
"text": "w(i_{m+1})\\leq W/4 "
},
{
"math_id": 155,
"text": " W_1 := \\max{W/2, 2\\mathrm{AREA}(\\mathcal{I'})/H}"
},
{
"math_id": 156,
"text": "W_1"
},
{
"math_id": 157,
"text": "W-W_1"
},
{
"math_id": 158,
"text": "\\mathcal{I}\\setminus\\mathcal{I'}"
},
{
"math_id": 159,
"text": "w(i) h(i) \\geq \\mathrm{AREA}(\\mathcal{I}) - WH/4"
},
{
"math_id": 160,
"text": "W-w(i)"
},
{
"math_id": 161,
"text": " \\mathcal{O}(|\\mathcal{I}| \\log(|\\mathcal{I}|)^2/\\log(\\log(|\\mathcal{I}|)))"
},
{
"math_id": 162,
"text": " ST(\\mathcal{I}) \\leq 2 OPT(\\mathcal{I})"
},
{
"math_id": 163,
"text": " \\log(W)"
},
{
"math_id": 164,
"text": "(5/4 +\\varepsilon) OPT(I) "
},
{
"math_id": 165,
"text": "5/4 "
},
{
"math_id": 166,
"text": "\\mathrm{sup}_I A(I)/OPT(I) "
},
{
"math_id": 167,
"text": " A(I) "
},
{
"math_id": 168,
"text": " OPT(I) "
},
{
"math_id": 169,
"text": "I"
},
{
"math_id": 170,
"text": "h_{\\max}(I)\\leq 1 "
},
{
"math_id": 171,
"text": "\\lim \\mathrm{sup}_{OPT(I) \\rightarrow \\infty} A(I)/OPT(I) "
}
] |
https://en.wikipedia.org/wiki?curid=61793916
|
61795792
|
Hand–eye calibration problem
|
Robotics problem on coordinating two parts of a robot
In robotics and mathematics, the hand–eye calibration problem (also called the robot–sensor or robot–world calibration problem) is the problem of determining the transformation between a robot end-effector and a sensor or sensors (camera or laser scanner) or between a robot base and the world coordinate system. It is conceptually analogous to biological hand–eye coordination (hence the name). It takes the form of AX = ZB, where "A" and "B" are two systems, usually a robot base and a camera, and X and Z are unknown transformation matrices. A highly studied special case of the problem occurs where X = Z, taking the form of the problem AX = XB. Solutions to the problem take the form of several types of methods, including separable closed-form solutions, simultaneous closed-form solutions, and iterative solutions. The covariance of X in the equation can be calculated for any randomly perturbed matrices A and B.
The problem is an important part of robot calibration, with the efficiency and accuracy of the solutions determining the speed and accuracy of the calibration of robots.
Methods.
Many different methods and solutions have been developed to solve the problem, broadly classified as separable, simultaneous, or iterative solutions. Each type of solution has specific advantages and disadvantages as well as formulations and applications to the problem. A common theme throughout all of the methods is the use of quaternions to represent rotations.
Separable solutions.
Given the equation AX = ZB, it is possible to decompose the equation into a purely rotational and a purely translational part; methods utilizing this are referred to as separable methods. Where R_A represents a 3×3 rotation matrix and t_A a 3×1 translation vector, the equation can be broken into two parts:
R_A R_X = R_Z R_B
R_A t_X + t_A = R_Z t_B + t_Z
The second equation becomes linear if R_Z is known. As such, the most frequent approach is to solve for R_X and R_Z using the first equation, then use R_Z to solve for the variables in the second equation. Rotation is represented using quaternions, allowing for a linear solution to be found. While separable methods are useful, any error in the estimation of the rotation matrices is compounded when applied to the translation vector. Other solutions avoid this problem.
Simultaneous solutions.
Simultaneous solutions solve for both X and Z at the same time rather than basing the solution of one part on the other, as in separable solutions; this significantly reduces the propagation of error. By formulating the matrices as dual quaternions, it is possible to get a linear equation by which X is solvable in a linear format. An alternative way applies the least-squares method to the Kronecker product of the matrices A⊗B. As confirmed by experimental results, simultaneous solutions have less error than separable quaternion solutions.
Iterative solutions.
Iterative solutions are another method used to solve the problem of error propagation. One example of an iterative solution is a program based on minimizing ||AX−XB||. As the program iterates, it will converge on a solution to X independent to the initial robot orientation of RB. Solutions can also be two-step iterative processes, and like simultaneous solutions can also decompose the equations into dual quaternions. However, while iterative solutions to the problem are generally simultaneous and accurate, they can be computationally taxing to carry out and may not always converge on the optimal solution.
The AX=XB case.
The matrix equation AX = XB, where X is unknown, has an infinite number of solutions that can be easily studied by a geometrical approach. To find X it is necessary to consider a simultaneous set of two equations A1X = XB1 and A2X = XB2; the matrices A1, A2, B1, B2 have to be determined by experiments to be performed in an optimized way.
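A minimal NumPy sketch of one common closed-form approach to AX = XB is shown below. It is an illustrative sketch rather than any specific published implementation: the rotation part R_A R_X = R_X R_B is solved through the Kronecker-product identity vec(AXB) = (B^T ⊗ A) vec(X), and the translation part through linear least squares. At least two motion pairs with non-parallel rotation axes are assumed, and the function name solve_ax_xb is introduced here.

import numpy as np

def solve_ax_xb(As, Bs):
    """Estimate X from motion pairs (A_i, B_i), each a 4x4 homogeneous transform."""
    # Rotation: vec(R_A R_X) = vec(R_X R_B) gives
    # (I kron R_A - R_B^T kron I) vec(R_X) = 0 with column-major vec.
    M = np.vstack([np.kron(np.eye(3), A[:3, :3]) - np.kron(B[:3, :3].T, np.eye(3))
                   for A, B in zip(As, Bs)])
    _, _, Vt = np.linalg.svd(M)
    Rx = Vt[-1].reshape(3, 3, order="F")  # null-space vector, column-major reshape
    U, _, Wt = np.linalg.svd(Rx)          # project onto the rotation group
    Rx = U @ Wt
    if np.linalg.det(Rx) < 0:             # resolve the sign so that det(Rx) = +1
        Rx = -Rx
    # Translation: (R_A - I) t_X = R_X t_B - t_A, stacked over all pairs.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t_x = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, t_x
    return X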
The 2D laser profile scanner case.
formula_0
where formula_1 represents the unknown coordinate of the point formula_2 in the robot base system, formula_3 represent the known relationship between the robot base system and end-effector, formula_4 are the unknown relationship between the end-effector and the scanner, and formula_5 is the known coordinate of the point formula_2 in the local scanner system. Methods are as follows,
There is a method using straight edges for hand-eye calibration.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\begin{bmatrix} p_b \\\\ 1 \\end{bmatrix}=\n\\begin{bmatrix} R_b & T_b \\\\ 0 & 1 \\end{bmatrix}\\centerdot\n\\begin{bmatrix} R_s & T_s \\\\ 0 & 1 \\end{bmatrix}\\centerdot\n\\begin{bmatrix} p_s \\\\ 1 \\end{bmatrix}"
},
{
"math_id": 1,
"text": "p_b"
},
{
"math_id": 2,
"text": "p"
},
{
"math_id": 3,
"text": "R_b, T_b"
},
{
"math_id": 4,
"text": "R_s,T_s"
},
{
"math_id": 5,
"text": "p_s"
}
] |
https://en.wikipedia.org/wiki?curid=61795792
|
61805833
|
Period-luminosity relation
|
Astronomical principle
In astronomy, a period-luminosity relation is a relationship linking the luminosity of pulsating variable stars with their pulsation period.
The best-known relation is the direct proportionality law holding for Classical Cepheid variables, sometimes called the Leavitt Law. Discovered in 1908 by Henrietta Swan Leavitt, the relation established Cepheids as foundational benchmarks for scaling galactic and extragalactic distances.
The physical model explaining Leavitt's law for classical Cepheids is called the kappa mechanism.
History.
Leavitt, a graduate of Radcliffe College, worked at the Harvard College Observatory as a "computer", tasked with examining photographic plates in order to measure and catalog the brightness of stars. Observatory Director Edward Charles Pickering assigned Leavitt to the study of variable stars of the Small and Large Magellanic Clouds, as recorded on photographic plates taken with the Bruce Astrograph of the Boyden Station of the Harvard Observatory in Arequipa, Peru. She identified 1777 variable stars, of which she classified 47 as Cepheids. In 1908 she published her results in the "Annals of the Astronomical Observatory of Harvard College", noting that the brighter variables had the longer periods. Building on this work, Leavitt looked carefully at the relation between the periods and the brightness of a sample of 25 of the Cepheid variables in the Small Magellanic Cloud, published in 1912. This paper was communicated and signed by Edward Pickering, but the first sentence indicates that it was "prepared by Miss Leavitt".
In the 1912 paper, Leavitt graphed the stellar magnitude versus the logarithm of the period and determined that, in her own words,
<templatestyles src="Template:Blockquote/styles.css" /> Using the simplifying assumption that all of the Cepheids within the Small Magellanic Cloud were at approximately the same distance, the apparent magnitude of each star is equivalent to its absolute magnitude offset by a fixed quantity depending on that distance. This reasoning allowed Leavitt to establish that the logarithm of the period is linearly related to the logarithm of the star's average intrinsic optical luminosity (which is the amount of power radiated by the star in the visible spectrum).
At the time, there was an unknown scale factor in this brightness since the distances to the Magellanic Clouds were unknown. Leavitt expressed the hope that parallaxes to some Cepheids would be measured; one year after she reported her results, Ejnar Hertzsprung determined the distances of several Cepheids in the Milky Way, and with this calibration the distance to any Cepheid could then be determined.
The relation was used by Harlow Shapley in 1918 to investigate the distances of globular clusters and the absolute magnitudes of the cluster variables found in them. It was hardly noted at the time that there was a discrepancy in the relations found for several types of pulsating variable all known generally as Cepheids. This discrepancy was confirmed by Edwin Hubble's 1931 study of the globular clusters around the Andromeda Galaxy. The solution was not found until the 1950s, when it was shown that population II Cepheids were systematically fainter than population I Cepheids. The cluster variables (RR Lyrae variables) were fainter still.
The relations.
Period-luminosity relations are known for several types of pulsating variable stars: type I Cepheids; type II Cepheids; RR Lyrae variables; Mira variables; and other long-period variable stars.
Classical Cepheids.
The Classical Cepheid period-luminosity relation has been calibrated by many astronomers throughout the twentieth century, beginning with Hertzsprung. Calibrating the period-luminosity relation has been problematic; however, a firm Galactic calibration was established by Benedict et al. 2007 using precise HST parallaxes for 10 nearby classical Cepheids. Also, in 2008, ESO astronomers estimated the distance to the Cepheid RS Puppis to within 1%, using light echoes from a nebula in which it is embedded. However, that latter finding has been actively debated in the literature.
The following relationship between a Population I Cepheid's period "P" and its mean absolute magnitude "M"v was established from Hubble Space Telescope trigonometric parallaxes for 10 nearby Cepheids:
formula_0
with "P" measured in days.
The following relations can also be used to calculate the distance to classical Cepheids.
Impact.
Classical Cepheids (also known as Population I Cepheids, type I Cepheids, or Delta Cepheid variables) undergo pulsations with very regular periods on the order of days to months. Cepheid variables were discovered in 1784 by Edward Pigott, first with the variability of Eta Aquilae, and a few months later by John Goodricke with the variability of Delta Cephei, the eponymous star for classical Cepheids. Most of the Cepheids were identified by the distinctive light curve shape with a rapid increase in brightness and a sharp turnover.
Classical Cepheids are 4–20 times more massive than the Sun and up to 100,000 times more luminous. These Cepheids are yellow bright giants and supergiants of spectral class F6 – K2, and their radii change by on the order of 10% during a pulsation cycle.
Leavitt's work on Cepheids in the Magellanic Clouds led her to discover the relation between the luminosity and the period of Cepheid variables.
Her discovery provided astronomers with the first "standard candle" with which to measure the distance to faraway galaxies. Cepheids were soon detected in other galaxies, such as Andromeda (notably by Edwin Hubble in 1923–24), and they became an important part of the evidence that "spiral nebulae" are independent galaxies located far outside of the Milky Way. Leavitt's discovery provided the basis for a fundamental shift in cosmology, as it prompted Harlow Shapley to move the Sun from the center of the galaxy in the "Great Debate" and Hubble to move the Milky Way galaxy from the center of the universe. With the period-luminosity relation providing a way to accurately measure distances on an inter-galactic scale, a new era in modern astronomy unfolded with an understanding of the structure and scale of the universe. The discovery of the expanding universe by Georges Lemaître and Hubble was made possible by Leavitt's groundbreaking research. Hubble often said that Leavitt deserved the Nobel Prize for her work, and indeed she was nominated by a member of the Swedish Academy of Sciences in 1924, although as she had died of cancer three years earlier she was not eligible. (The Nobel Prize is not awarded posthumously.)
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " M_\\mathrm{v} = (-2.43\\pm0.12) \\left(\\log_{10}P - 1\\right) - (4.05 \\pm 0.02) \\, "
}
] |
https://en.wikipedia.org/wiki?curid=61805833
|
61808035
|
Closed graph property
|
Graph of a map closed in the product space
In mathematics, particularly in functional analysis and topology, closed graph is a property of functions.
A function "f" : "X" → "Y" between topological spaces has a closed graph if its graph is a closed subset of the product space "X" × "Y".
A related property is open graph.
This property is studied because there are many theorems, known as closed graph theorems, giving conditions under which a function with a closed graph is necessarily continuous. One particularly well-known class of closed graph theorems are the closed graph theorems in functional analysis.
Definition and notation: The graph of a function "f" : "X" → "Y" is the set
Gr "f" := { ("x", "f"("x")) : "x" ∈ "X" } = { ("x", "y") ∈ "X" × "Y" : "y" = "f"("x") }.
Notation: If Y is a set then the power set of Y, which is the set of all subsets of Y, is denoted by 2"Y" or 𝒫("Y").
Definition: If X and Y are sets, a set-valued function in Y on X (also called a Y-valued multifunction on X) is a function "F" : "X" → 2"Y" with domain X that is valued in 2"Y". That is, F is a function on X such that for every "x" ∈ "X", "F"("x") is a subset of Y.
* Some authors call a function "F" : "X" → 2"Y" a set-valued function only if it satisfies the additional requirement that "F"("x") is not empty for every "x" ∈ "X"; this article does not require this.
Definition and notation: If "F" : "X" → 2"Y" is a set-valued function in a set Y then the graph of F is the set
Gr "F" := { ("x", "y") ∈ "X" × "Y" : "y" ∈ "F"("x") }.
Definition: A function "f" : "X" → "Y" can be canonically identified with the set-valued function "F" : "X" → 2"Y" defined by "F"("x") := { "f"("x") } for every "x" ∈ "X", where F is called the canonical set-valued function induced by (or associated with) f.
*Note that in this case, Gr "f" = Gr "F".
Definitions.
Open and closed graph.
We give the more general definition of when a Y-valued function or set-valued function defined on a "subset" S of X has a closed graph since this generality is needed in the study of closed linear operators that are defined on a dense subspace S of a topological vector space X (and not necessarily defined on all of X).
This particular case is one of the main reasons why functions with closed graphs are studied in functional analysis.
Assumptions: Throughout, X and Y are topological spaces, "S" ⊆ "X", and f is a Y-valued function or set-valued function on S (i.e. "f" : "S" → "Y" or "f" : "S" → 2"Y"). "X" × "Y" will always be endowed with the product topology.
Definition: We say that f has a closed graph (resp. open graph, sequentially closed graph, sequentially open graph) in "X" × "Y" if the graph of f, Gr "f", is a closed (resp. open, sequentially closed, sequentially open) subset of "X" × "Y" when "X" × "Y" is endowed with the product topology. If "S" = "X" or if X is clear from context, then we may omit writing "in "X" × "Y"".
Observation: If "g" : "S" → "Y" is a function and G is the canonical set-valued function induced by g (i.e. "G" : "S" → 2"Y" is defined by "G"("s") := { "g"("s") } for every "s" ∈ "S") then since Gr "g" = Gr "G", g has a closed (resp. sequentially closed, open, sequentially open) graph in "X" × "Y" if and only if the same is true of G.
Definition: We say that the function (resp. set-valued function) f is closable in "X" × "Y" if there exists a subset "D" ⊆ "X" containing S and a function (resp. set-valued function) "F" : "D" → "Y" whose graph is equal to the closure of the set Gr "f" in "X" × "Y". Such an F is called a closure of f in "X" × "Y", is denoted by , and necessarily extends f.
*Additional assumptions for linear maps: If in addition, S, X, and Y are topological vector spaces and "f" : "S" → "Y" is a linear map then to call f closable we also require that the set D be a vector subspace of X and the closure of f be a linear map.
Definition: If f is closable on S then a core or essential domain of f is a subset "D" ⊆ "S" such that the closure in "X" × "Y" of the graph of the restriction "f" |"D" : "D" → "Y" of f to D is equal to the closure of the graph of f in "X" × "Y" (i.e. the closure of Gr "f" in "X" × "Y" is equal to the closure of Gr "f" |"D" in "X" × "Y").
Definition and notation: When we write "f" : "D"("f") ⊆ "X" → "Y" then we mean that f is a Y-valued function with domain "D"("f") where "D"("f") ⊆ "X". If we say that "f" : "D"("f") ⊆ "X" → "Y" is closed (resp. sequentially closed) or has a closed graph (resp. has a sequentially closed graph) then we mean that the graph of f is closed (resp. sequentially closed) in "X" × "Y" (rather than in "D"("f") × "Y").
Closed maps and closed linear operators.
When reading literature in functional analysis, if "f" : "X" → "Y" is a linear map between topological vector spaces (TVSs) (e.g. Banach spaces) then "f is closed" will almost always mean the following:
Definition: A map "f" : "X" → "Y" is called closed if its graph is closed in "X" × "Y". In particular, the term "closed linear operator" will almost certainly refer to a linear map whose graph is closed.
Otherwise, especially in literature about point-set topology, "f is closed" may instead mean the following:
Definition: A map "f" : "X" → "Y" between topological spaces is called a closed map if the image of a closed subset of X is a closed subset of Y.
These two definitions of "closed map" are not equivalent.
If it is unclear, then it is recommended that a reader check how "closed map" is defined by the literature they are reading.
Characterizations.
Throughout, let X and Y be topological spaces.
If "f" : "X" → "Y" is a function then the following are equivalent:
and if Y is a Hausdorff compact space then we may add to this list:
and if both X and Y are first-countable spaces then we may add to this list:
If "f" : "X" → "Y" is a function then the following are equivalent:
If "F" : "X" → 2"Y" is a set-valued function between topological spaces X and Y then the following are equivalent:
and if Y is compact and Hausdorff then we may add to this list:
and if both X and Y are metrizable spaces then we may add to this list:
Characterizations of closed graphs (general topology).
Throughout, let formula_0 and formula_1 be topological spaces, and let formula_2 be endowed with the product topology.
Function with a closed graph.
If formula_3 is a function then it is said to have a closed graph if it satisfies any of the following equivalent conditions:
and if formula_1 is a Hausdorff compact space then we may add to this list:
and if both formula_0 and formula_1 are first-countable spaces then we may add to this list:
Function with a sequentially closed graph.
If formula_3 is a function then the following are equivalent:
Closed graph theorems: When a closed graph implies continuity.
Conditions that guarantee that a function with a closed graph is necessarily continuous are called closed graph theorems.
Closed graph theorems are of particular interest in functional analysis where there are many theorems giving conditions under which a linear map with a closed graph is necessarily continuous.
Examples.
"For examples in functional analysis, see continuous linear operator."
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "Y"
},
{
"math_id": 2,
"text": "X \\times Y"
},
{
"math_id": 3,
"text": "f : X \\to Y"
}
] |
https://en.wikipedia.org/wiki?curid=61808035
|
618086
|
Interpretability logic
|
Interpretability logics comprise a family of modal logics that extend provability logic to describe interpretability or various related metamathematical properties and relations such as weak interpretability, Π1-conservativity, cointerpretability, tolerance, cotolerance, and arithmetic complexities.
Main contributors to the field are Alessandro Berarducci, Petr Hájek, Konstantin Ignatiev, Giorgi Japaridze, Franco Montagna, Vladimir Shavrukov, Rineke Verbrugge, Albert Visser, and Domenico Zambella.
Examples.
Logic ILM.
The language of ILM extends that of classical propositional logic by adding the unary modal operator formula_0 and the binary modal operator formula_1 (as always, formula_2 is defined as formula_3). The arithmetical interpretation of formula_4 is “formula_5 is provable in Peano arithmetic (PA)”, and formula_6 is understood as “formula_7 is interpretable in formula_8”.
Axiom schemata:
1. All tautologies of classical propositional logic;
2. formula_9;
3. formula_10;
4. formula_11;
5. formula_12;
6. formula_13;
7. formula_14;
8. formula_15;
9. formula_16.
Rules of inference:
1. Modus ponens: from formula_17 and formula_5 conclude formula_18;
2. Necessitation: from formula_5 conclude formula_4.
The completeness of ILM with respect to its arithmetical interpretation was independently proven by Alessandro Berarducci and Vladimir Shavrukov.
Logic TOL.
The language of TOL extends that of classical propositional logic by adding the modal operator formula_19 which is allowed to take any nonempty sequence of arguments. The arithmetical interpretation of formula_20 is “formula_21 is a tolerant sequence of theories”.
Axioms (with formula_22 standing for any formulas, formula_23 for any sequences of formulas, and formula_24 identified with ⊤):
1. All tautologies of classical propositional logic;
2. formula_25;
3. formula_26;
4. formula_27;
5. formula_28;
6. formula_29;
7. formula_30.
Rules of inference:
1. Modus ponens: from formula_17 and formula_5 conclude formula_18;
2. From formula_31 conclude formula_32.
The completeness of TOL with respect to its arithmetical interpretation was proven by Giorgi Japaridze.
|
[
{
"math_id": 0,
"text": "\\Box"
},
{
"math_id": 1,
"text": "\\triangleright"
},
{
"math_id": 2,
"text": "\\Diamond p"
},
{
"math_id": 3,
"text": "\\neg \\Box\\neg p"
},
{
"math_id": 4,
"text": "\\Box p"
},
{
"math_id": 5,
"text": "p"
},
{
"math_id": 6,
"text": "p \\triangleright q"
},
{
"math_id": 7,
"text": "PA+q"
},
{
"math_id": 8,
"text": "PA+p"
},
{
"math_id": 9,
"text": "\\Box(p \\rightarrow q) \\rightarrow (\\Box p \\rightarrow \\Box q)"
},
{
"math_id": 10,
"text": "\\Box(\\Box p \\rightarrow p) \\rightarrow \\Box p"
},
{
"math_id": 11,
"text": " \\Box (p \\rightarrow q) \\rightarrow (p \\triangleright q)"
},
{
"math_id": 12,
"text": " (p \\triangleright q)\\wedge (q \\triangleright r)\\rightarrow (p\\triangleright r)"
},
{
"math_id": 13,
"text": " (p \\triangleright r)\\wedge (q \\triangleright r)\\rightarrow ((p\\vee q)\\triangleright r)"
},
{
"math_id": 14,
"text": " (p \\triangleright q)\\rightarrow (\\Diamond p \\rightarrow \\Diamond q) "
},
{
"math_id": 15,
"text": " \\Diamond p \\triangleright p "
},
{
"math_id": 16,
"text": " (p \\triangleright q)\\rightarrow((p\\wedge\\Box r)\\triangleright (q\\wedge\\Box r)) "
},
{
"math_id": 17,
"text": "p\\rightarrow q"
},
{
"math_id": 18,
"text": "q"
},
{
"math_id": 19,
"text": "\\Diamond"
},
{
"math_id": 20,
"text": "\\Diamond( p_1,\\ldots,p_n)"
},
{
"math_id": 21,
"text": "(PA+p_1,\\ldots,PA+p_n)"
},
{
"math_id": 22,
"text": "p,q"
},
{
"math_id": 23,
"text": "\\vec{r},\\vec{s}"
},
{
"math_id": 24,
"text": "\\Diamond()"
},
{
"math_id": 25,
"text": "\\Diamond (\\vec{r},p,\\vec{s})\\rightarrow \\Diamond (\\vec{r}, p\\wedge\\neg q,\\vec{s})\\vee \\Diamond (\\vec{r}, q,\\vec{s}) "
},
{
"math_id": 26,
"text": "\\Diamond (p)\\rightarrow \\Diamond (p\\wedge \\neg\\Diamond (p)) "
},
{
"math_id": 27,
"text": "\\Diamond (\\vec{r},p,\\vec{s})\\rightarrow \\Diamond (\\vec{r},\\vec{s})"
},
{
"math_id": 28,
"text": "\\Diamond (\\vec{r},p,\\vec{s})\\rightarrow \\Diamond (\\vec{r},p,p,\\vec{s})"
},
{
"math_id": 29,
"text": "\\Diamond (p,\\Diamond(\\vec{r}))\\rightarrow \\Diamond (p\\wedge\\Diamond(\\vec{r}))"
},
{
"math_id": 30,
"text": "\\Diamond (\\vec{r},\\Diamond(\\vec{s}))\\rightarrow \\Diamond (\\vec{r},\\vec{s})"
},
{
"math_id": 31,
"text": "\\neg p"
},
{
"math_id": 32,
"text": "\\neg \\Diamond( p)"
}
] |
https://en.wikipedia.org/wiki?curid=618086
|
618119
|
Provability logic
|
Provability logic is a modal logic, in which the box (or "necessity") operator is interpreted as 'it is provable that'. The point is to capture the notion of a proof predicate of a reasonably rich formal theory, such as Peano arithmetic.
Examples.
There are a number of provability logics, some of which are covered in the literature. The basic system is generally referred to as GL (for Gödel–Löb) or L or K4W (W stands for well-foundedness). It can be obtained by adding the modal version of Löb's theorem to the logic K (or K4).
Namely, the axioms of GL are all tautologies of classical propositional logic plus all formulas of one of the following forms:
1. Distribution axiom: □(p → q) → (□p → □q);
2. Löb's axiom: □(□p → p) → □p.
And the rules of inference are:
1. Modus ponens: from p → q and p conclude q;
2. Necessitation: from formula_0 p conclude formula_0 □p.
History.
The GL model was pioneered by Robert M. Solovay in 1976. Since then, until his death in 1996, the prime inspirer of the field was George Boolos. Significant contributions to the field have been made by Sergei N. Artemov, Lev Beklemishev, Giorgi Japaridze, Dick de Jongh, Franco Montagna, Giovanni Sambin, Vladimir Shavrukov, Albert Visser and others.
Generalizations.
Interpretability logics and Japaridze's polymodal logic present natural extensions of provability logic.
|
[
{
"math_id": 0,
"text": "\\vdash"
}
] |
https://en.wikipedia.org/wiki?curid=618119
|
618120
|
Electric power conversion
|
Conversion of electric energy from one form to another
In electrical engineering, power conversion is the process of converting electric energy from one form to another.
A power converter is an electrical device for converting electrical energy between alternating current (AC) and direct current (DC). It can also change the voltage or frequency of the current.
Power converters can be as simple as a transformer or as complex as a resonant converter. The term can also refer to a class of electrical machinery that is used to convert one frequency of alternating current into another. Power conversion systems often incorporate redundancy and voltage regulation.
Power converters are classified based on the type of power conversion they perform. One way of classifying power conversion systems is based on whether the input and output is alternating current or direct current.
DC power conversion.
DC to DC.
The following devices can convert DC to DC:
DC to AC.
The following devices can convert DC to AC:
AC power conversion.
AC to DC.
The following devices can convert AC to DC:
AC to AC.
The following devices can convert AC to AC:
Other systems.
There are also devices and methods to convert between power systems designed for single and three-phase operation.
The standard power voltage and frequency vary from country to country and sometimes within a country. In North America and northern South America, it is usually 120 volts, 60 hertz (Hz), but in Europe, Asia, Africa, and many other parts of the world, it is usually 230 volts, 50 Hz. Aircraft often use 400 Hz power internally, so 50 Hz or 60 Hz to 400 Hz frequency conversion is needed for use in the ground power unit used to power the airplane while it is on the ground. Conversely, internal 400 Hz power may be converted to 50 Hz or 60 Hz for convenience power outlets available to passengers during flight.
Certain specialized circuits can also be considered power converters, such as the flyback transformer subsystem powering a CRT, generating high voltage at approximately 15 kHz.
Consumer electronics usually include an AC adapter (a type of power supply) to convert mains-voltage AC current to low-voltage DC suitable for consumption by microchips. Consumer voltage converters (also known as "travel converters") are used when traveling between countries that use ~120 V versus ~240 V AC mains power. (There are also consumer "adapters" which merely form an electrical connection between two differently shaped AC power plugs and sockets, but these change neither voltage nor frequency.)
Why use transformers in power converters.
Transformers are used in power converters to provide electrical isolation and voltage step-up or step-down.
The secondary circuit is floating: touching it merely drags its potential to the body's or the earth's potential, so no current flows through the body. This is why a cellphone can be used safely while it is being charged, even if the phone has a metal shell connected to the secondary circuit.
Operating at high frequency and supplying low power, power converters have much smaller transformers compared with those of fundamental-frequency, high-power applications. In power systems, ordinary transformers transfer power from the primary to the secondary winding instantaneously, without storing energy.
The current in the primary winding of a transformer helps to set up the mutual flux in accordance with Ampère's law and balances the demagnetizing effect of the load current in the secondary winding.
A flyback converter's transformer works differently, like an inductor. In each cycle, the transformer first gets charged and then releases its energy to the load. Accordingly, the air gap of a flyback transformer has two functions: it not only determines the inductance but also stores the energy, which is transferred to the load through cycles of charging and discharging.
formula_0
The core's relative permeability formula_1 can be greater than 1,000, even greater than 10,000, while the air gap features a much lower permeability and accordingly a higher energy density.
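To put numbers on this, the energy density above can be evaluated directly. The following Python sketch uses assumed illustrative values (B = 0.3 T and a core with formula_1 = 2000), not figures from any particular datasheet:

import math

MU_0 = 4e-7 * math.pi  # vacuum permeability in H/m

def magnetic_energy_density(B, mu_r=1.0):
    # W_e = B^2 / (2 * mu_0 * mu_r), in J/m^3
    return B ** 2 / (2.0 * MU_0 * mu_r)

print(magnetic_energy_density(0.3, mu_r=2000.0))  # core: about 18 J/m^3
print(magnetic_energy_density(0.3))               # air gap: about 35,800 J/m^3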
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "W_{e}=\\frac{1}{2}BH=\\frac{1}{2} \\frac{B^2}{\\mu}"
},
{
"math_id": 1,
"text": "\\mu_r"
}
] |
https://en.wikipedia.org/wiki?curid=618120
|
61817525
|
Song of Songs 2
|
Second chapter of Song of Songs describing the intense love between a man and a woman
Song of Songs 2 (abbreviated as Song 2) is the second chapter of the Song of Songs in the Hebrew Bible or the Old Testament of the Christian Bible. This book is one of the Five Megillot, a collection of short books, together with Ruth, Lamentations, Ecclesiastes and Esther, within the Ketuvim, the third and the last part of the Hebrew Bible. Jewish tradition views Solomon as the author of this book (although this is now largely disputed), and this attribution influences the acceptance of this book as a canonical text. This chapter contains a dialogue in the open air and several female poems with the main imagery of flora and fauna.
Text.
The original text is written in the Hebrew language. This chapter is divided into 17 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). Some fragments containing parts of this chapter were found among the Dead Sea Scrolls, assigned as 4Q107 (4QCantb; 30 BCE–30 CE; extant verses 9–17).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Structure.
The Modern English Version (MEV) attributes the voices in this chapter as follows:
Female: Love in paradise (1:16–2:1).
Verse 1 closes a poetic section providing a 'picture of the bed as a spreading growth', using a theme of nature's floras, starting from the previous chapter with verses 1:16–17 focusing on the subject of trees and verse 2:1 on the subject of flowers.
"I am the rose of Sharon, and the lily of the valleys."
Male: My love is like a flower (2:2).
Verse 2 links to verse 1 on the use of "lily" (or "lotus"), and forms a parallel with verse 3 on the word order and the use of particles ("as" or "like", "so") as well as the 'terms of endearment' ("my love", "my beloved", or "my darling", "my lover").
"As the lily among thorns, so is my love among the daughters".
Female: A pastoral scene (2:3-7).
Verse 3 shows an 'excellent synonymous parallelism' with verse 2 on the word order and the use of certain words, such as "as" or "like", "so", "among" or "between", "my love"/"my beloved" or "my darling"/"my lover". Each verse begins with a preposition of comparison ("as"), followed by three Hebrew words consisting of a singular noun, a preposition ("among" or "between"; "be^n") and a plural common noun with a definite article.
"As the apple tree among the trees of the wood, so is my beloved among the sons."
"I sat down under his shadow with great delight, and his fruit was sweet to my taste."
Verse 3.
The sensual imagery of "apple tree" as a place of romance is still used in modern times in songs such as "In the Shade of the Old Apple Tree" and "Don't Sit Under the Apple Tree".
"He brought me to the banqueting house, and his banner over me was love."
"Sustain me with raisins,"
"refresh me with apples;"
"for I am faint with love."
Verse 5.
The first two lines of this verse form a 'distinctive structure', using verbs and preposition of the same ideas: "refresh (sustain) me"/"revive (refresh) me", "with raisins"/"with apples". The word "apple(s)" links to the first word of verse 3, while the word "love" links to the last word of verse 4.
"I charge you, O daughters of Jerusalem,"
"By the gazelles or by the does of the field,"
"Do not stir up nor awaken love"
"Until it pleases."
Verse 7.
The names of God are apparently substituted with similar sounding phrases depicting 'female gazelles' (, "tseḇā’ōṯ") for [God of] hosts ( "tseḇā’ōṯ"), and 'does of the field'/'wild does/female deer' (, "’ay-lōṯ ha-śā-ḏeh") for God Almighty (, "’êl shaddai").
Female: Her lover pursues her (2:8–9).
This section starts a poetic exposition of lovers who are joined and separated.
Verses 8–17 form a unity of a poem of the spring by the woman, beginning with 'the voice of my beloved' ("qōl dōḏî"; or 'the sound of his [approach]'), which signals his presence before he even speaks.
Andrew Harper suggests that the scene moves now from Jerusalem ("daughters of Jerusalem" in verse 7) to "some royal residence in the country", probably in the northern hills. Verse 8b refers to her beloved "leaping upon the mountains, bounding over the hills". St. Ambrose comments by way of a paraphrase,<templatestyles src="Template:Blockquote/styles.css" />Let us see him leaping; he leaped out of heaven into the virgin, out of the womb into the manger, out of the manger into Jordan, out of Jordan to the cross, from the cross into the tomb, out of the grave into heaven.
"The fig tree putteth forth her green figs, and the vines with the tender grape give a good smell. Arise, my love, my fair one, and come away."
"O my dove, that art in the clefts of the rock, in the secret places of the stairs, let me see thy countenance, let me hear thy voice; for sweet is thy voice, and thy countenance is comely."
"Catch the foxes for us,"
"the little foxes"
"that spoil the vineyards,"
"for our vineyards are in blossom."
Female: Love affirmed, gratification delayed (2:16-17).
Unlike the ambiguity of the speaker (or speakers) in the previous verse, the two verses in this section are no doubt spoken by the woman, affirming the mutual affection with her lover.
"My beloved is mine, and I am his: he feedeth among the lilies."
Verse 16.
In reversed order compared to .
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=61817525
|
6182553
|
Macrodiversity
|
In the field of wireless communication, macrodiversity is a kind of space diversity scheme using several receiver or transmitter antennas for transferring the same signal. The distance between the transmitters is much longer than the wavelength, as opposed to microdiversity where the distance is in the order of or shorter than the wavelength.
In a cellular network or a wireless LAN, macro-diversity implies that the antennas are typically situated in different base station sites or access points. Receiver macro-diversity is a form of antenna combining, and requires an infrastructure that mediates the signals from the local antennas or receivers to a central receiver or decoder. Transmitter macro-diversity may be a form of simulcasting, where the same signal is sent from several nodes. If the signals are sent over the same physical channel (e.g. the channel frequency and the spreading sequence), the transmitters are said to form a single-frequency network—a term used especially in the broadcasting world.
The aim is to combat fading and to increase the received signal strength and signal quality in exposed positions in between the base stations or access points. Macro diversity may also facilitate efficient multicast services, where the same frequency channel can be used for all transmitters sending the same information. The diversity scheme may be based on transmitter (downlink) macro-diversity and/or receiver (uplink) macro-diversity.
Forms.
The baseline form of macrodiversity is called single-user macrodiversity. In this form, a single user, which may have multiple antennas, communicates with several base stations. Therefore, depending on the spatial degrees of freedom (DoF) of the system, the user may transmit or receive multiple independent data streams to/from the base stations in the same time and frequency resource.
In the next, more advanced form of macrodiversity, multiple distributed users communicate with multiple distributed base stations in the same time and frequency resource. This configuration has been shown to utilize the available spatial DoF optimally, considerably increasing the cellular system capacity and user capacity.
Mathematical description.
The macrodiversity multi-user MIMO uplink communication system considered here consists of formula_0 distributed single-antenna users and formula_1 distributed single-antenna base stations (BS). Following the well-established narrowband flat-fading MIMO system model, the input-output relationship can be given as
formula_2
where formula_3 and formula_4 are the receive and transmit vectors, respectively, and formula_5 and formula_6 are the macrodiversity channel matrix and the spatially uncorrelated AWGN noise vector, respectively. The power spectral density of the AWGN noise is assumed to be formula_7. The formula_8th element of formula_5, formula_9, represents the fading coefficient (see Fading) of the formula_10th constituent link, which in this particular case is the link between the formula_11th user and the formula_12th base station. In the macrodiversity scenario, formula_13, where formula_14 is called the average link gain, giving an average link SNR of formula_15. The macrodiversity power profile matrix can thus be defined as
formula_16
The original input-output relationship may be rewritten in terms of the macrodiversity power profile and so-called normalized channel matrix, formula_17, as
formula_18.
where formula_19 is the element-wise square root of formula_20, and the operator formula_21 represents Hadamard multiplication (see Hadamard product). The formula_8th element of formula_17, formula_22, satisfies the condition given by
formula_23.
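The model is straightforward to simulate. The following NumPy sketch draws formula_17 with independent unit-variance complex Gaussian entries and applies an assumed power profile (the link gains below are illustrative, not taken from any measurement):

import numpy as np

rng = np.random.default_rng(seed=1)
n_R, N = 3, 4                              # base stations and users
G = rng.uniform(0.1, 1.0, size=(n_R, N))   # assumed average link gains
# Normalized channel: i.i.d. entries with E|h_w,ij|^2 = 1.
H_w = (rng.standard_normal((n_R, N))
       + 1j * rng.standard_normal((n_R, N))) / np.sqrt(2.0)
H = np.sqrt(G) * H_w                       # Hadamard product with the sqrt of G
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # transmit vector
N0 = 0.01                                  # AWGN power spectral density
n = np.sqrt(N0 / 2.0) * (rng.standard_normal(n_R)
                         + 1j * rng.standard_normal(n_R))
y = H @ x + n                              # received vector per the model above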
It has been shown that there exists a functional link between the permanent of the macrodiversity power profile matrix formula_20 and the performance of multi-user macrodiversity systems in fading. Although it appears as if the macrodiversity only manifests itself in the power profile, systems that rely on macrodiversity will typically have other types of transmit power constraints (e.g., each element of formula_24 has a limited average power) and different sets of coordinating transmitters/receivers when communicating with different users. Note that the input-output relationship above can be easily extended to the case where each transmitter and/or receiver has multiple antennas.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\scriptstyle N"
},
{
"math_id": 1,
"text": "\\scriptstyle n_{R}"
},
{
"math_id": 2,
"text": "\\mathbf{y} = \\mathbf{H}\\mathbf{x} + \\mathbf{n}"
},
{
"math_id": 3,
"text": "\\scriptstyle\\mathbf{y}"
},
{
"math_id": 4,
"text": "\\scriptstyle\\mathbf{x}"
},
{
"math_id": 5,
"text": "\\scriptstyle\\mathbf{H}"
},
{
"math_id": 6,
"text": "\\scriptstyle\\mathbf{n}"
},
{
"math_id": 7,
"text": "\\scriptstyle N_0"
},
{
"math_id": 8,
"text": "\\scriptstyle\ni,j"
},
{
"math_id": 9,
"text": "h_{ij}"
},
{
"math_id": 10,
"text": "\\scriptstyle i,j"
},
{
"math_id": 11,
"text": "\\scriptstyle j"
},
{
"math_id": 12,
"text": "\\scriptstyle\ni"
},
{
"math_id": 13,
"text": "E \\left \\{ \\left| h_{ij} \\right |^2 \\right \\} =\ng_{ij} \\quad \\forall i,j"
},
{
"math_id": 14,
"text": "\\scriptstyle g_{i,j}"
},
{
"math_id": 15,
"text": "\\scriptstyle\n\\frac{g_{ij}}{N_0}"
},
{
"math_id": 16,
"text": " \\mathbf{G} = \\begin{pmatrix}\n g_{11} & \\dots & g_{1N} \\\\\n g_{21} & \\dots & g_{2N} \\\\\n \\dots & \\dots & \\dots \\\\\n g_{n_R1} & \\dots & g_{n_RN} \\\\\n\\end{pmatrix}.\n"
},
{
"math_id": 17,
"text": "\\mathbf{H}_w"
},
{
"math_id": 18,
"text": "\\mathbf{y} = \\left( \\left( \\mathbf{G}^{\\circ\\frac{1}{2}} \\right) \\circ \\mathbf{H}_w \\right)\n\\mathbf{x} + \\mathbf{n}"
},
{
"math_id": 19,
"text": "\\mathbf{G}^{\\circ \\frac{1}{2}}"
},
{
"math_id": 20,
"text": "\\mathbf{G}"
},
{
"math_id": 21,
"text": "\\circ"
},
{
"math_id": 22,
"text": "h_{w,ij}"
},
{
"math_id": 23,
"text": "E \\left \\{ \\left| h_{w,ij} \\right |^2 \\right \\} = 1 \\quad \\forall i,j "
},
{
"math_id": 24,
"text": "\\mathbf{x}"
}
] |
https://en.wikipedia.org/wiki?curid=6182553
|
618291
|
Bohr effect
|
Concept in physiology
The Bohr effect is a phenomenon first described in 1904 by the Danish physiologist Christian Bohr. Hemoglobin's oxygen binding affinity (see oxygen–haemoglobin dissociation curve) is inversely related both to acidity and to the concentration of carbon dioxide. That is, the Bohr effect refers to the shift in the oxygen dissociation curve caused by changes in the concentration of carbon dioxide or the pH of the environment. Since carbon dioxide reacts with water to form carbonic acid, an increase in CO2 results in a decrease in blood pH, causing hemoglobin proteins to release their load of oxygen. Conversely, a decrease in carbon dioxide provokes an increase in pH, which results in hemoglobin picking up more oxygen.
Experimental discovery.
In the early 1900s, Christian Bohr was a professor at the University of Copenhagen in Denmark, already well known for his work in the field of respiratory physiology. He had spent the last two decades studying the solubility of oxygen, carbon dioxide, and other gases in various liquids, and had conducted extensive research on haemoglobin and its affinity for oxygen. In 1903, he began working closely with Karl Hasselbalch and August Krogh, two of his associates at the university, in an attempt to experimentally replicate the work of Gustav von Hüfner, using whole blood instead of haemoglobin solution. Hüfner had suggested that the oxygen-haemoglobin binding curve was hyperbolic in shape, but after extensive experimentation, the Copenhagen group determined that the curve was in fact sigmoidal. Furthermore, in the process of plotting out numerous dissociation curves, it soon became apparent that high partial pressures of carbon dioxide caused the curves to shift to the right. Further experimentation while varying the CO2 concentration quickly provided conclusive evidence, confirming the existence of what would soon become known as the Bohr effect.
Controversy.
There is some debate over whether Bohr was actually the first to discover the relationship between CO2 and oxygen affinity, or whether the Russian physiologist Bronislav Verigo beat him to it, allegedly discovering the effect in 1898, six years before Bohr. While this has never been proven, Verigo did in fact publish a paper on the haemoglobin-CO2 relationship in 1892. His proposed model was flawed, and Bohr harshly criticized it in his own publications.
Another challenge to Bohr's discovery comes from within his lab. Though Bohr was quick to take full credit, his associate Krogh, who invented the apparatus used to measure gas concentrations in the experiments, maintained throughout his life that he himself had actually been the first to demonstrate the effect. Though there is some evidence to support this, retroactively changing the name of a well-known phenomenon would be extremely impractical, so it remains known as the Bohr effect.
Physiological role.
The Bohr effect increases the efficiency of oxygen transportation through the blood. After hemoglobin binds to oxygen in the lungs due to the high oxygen concentrations, the Bohr effect facilitates its release in the tissues, particularly those tissues in most need of oxygen. When a tissue's metabolic rate increases, so does its carbon dioxide waste production. When released into the bloodstream, carbon dioxide forms bicarbonate and protons through the following reaction:
<chem>CO2 + H2O <=> H2CO3 <=> H+ + HCO3^-</chem>
Although this reaction usually proceeds very slowly, the enzyme carbonic anhydrase (which is present in red blood cells) drastically speeds up the conversion to bicarbonate and protons. This causes the pH of the blood to decrease, which promotes the dissociation of oxygen from haemoglobin, and allows the surrounding tissues to obtain enough oxygen to meet their demands. In areas where oxygen concentration is high, such as the lungs, binding of oxygen causes haemoglobin to release protons, which recombine with bicarbonate to eliminate carbon dioxide during exhalation. These opposing protonation and deprotonation reactions occur in equilibrium resulting in little overall change in blood pH.
The Bohr effect enables the body to adapt to changing conditions and makes it possible to supply extra oxygen to tissues that need it the most. For example, when muscles are undergoing strenuous activity, they require large amounts of oxygen to conduct cellular respiration, which generates CO2 (and therefore HCO3− and H+) as byproducts. These waste products lower the pH of the blood, which increases oxygen delivery to the active muscles. Carbon dioxide is not the only molecule that can trigger the Bohr effect. If muscle cells are not receiving enough oxygen for cellular respiration, they resort to lactic acid fermentation, which releases lactic acid as a byproduct. This increases the acidity of the blood far more than CO2 alone, which reflects the cells' even greater need for oxygen. In fact, under anaerobic conditions, muscles generate lactic acid so quickly that the pH of the blood passing through the muscles will drop to around 7.2, which causes haemoglobin to begin releasing roughly 10% more oxygen.
Strength of the effect and body size.
The magnitude of the Bohr effect is usually given by the slope of the formula_1 versus formula_2 curve, where P50 refers to the partial pressure of oxygen when 50% of haemoglobin's binding sites are occupied. The slope is denoted formula_0, where formula_3 denotes change; that is, formula_4 denotes the change in formula_1 and formula_5 the change in formula_2.
Bohr effect strength exhibits an inverse relationship with the size of an organism: the magnitude increases as size and weight decrease. For example, mice possess a very strong Bohr effect, with a formula_0 value of -0.96, requiring only relatively minor changes in H+ or CO2 concentration, while elephants require much larger changes in concentration to achieve a much weaker effect formula_6.
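As an illustrative computation (a minimal Python sketch; the pH and P50 values below are made-up example measurements, not data from the studies cited here), the Bohr coefficient can be estimated as the least-squares slope of log10(P50) against pH:

```python
import math

# Hypothetical (illustrative) measurements: blood pH and the corresponding
# P50, the partial pressure of O2 (in mmHg) at 50% haemoglobin saturation.
measurements = [(7.6, 22.0), (7.4, 26.5), (7.2, 31.5)]

# The Bohr coefficient is the slope of log10(P50) versus pH; it is expected
# to be negative, since lower pH shifts the dissociation curve to the right.
xs = [ph for ph, _ in measurements]
ys = [math.log10(p50) for _, p50 in measurements]
n = len(measurements)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))

print(f"Bohr coefficient (delta log10(P50) / delta pH) ~ {slope:.2f}")
```

For these example values the slope comes out near -0.4: P50 rises as the blood becomes more acidic.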
Mechanism.
Allosteric interactions.
The Bohr effect hinges around allosteric interactions between the hemes of the haemoglobin tetramer, a mechanism first proposed by Max Perutz in 1970. Haemoglobin exists in two conformations: a high-affinity R state and a low-affinity T state. When oxygen concentration levels are high, as in the lungs, the R state is favored, enabling the maximum amount of oxygen to be bound to the hemes. In the capillaries, where oxygen concentration levels are lower, the T state is favored, in order to facilitate the delivery of oxygen to the tissues. The Bohr effect is dependent on this allostery, as increases in CO2 and H+ help stabilize the T state and ensure greater oxygen delivery to muscles during periods of elevated cellular respiration. This is evidenced by the fact that myoglobin, a monomer with no allostery, does not exhibit the Bohr effect. Haemoglobin mutants with weaker allostery may exhibit a reduced Bohr effect. For example, in Hiroshima variant haemoglobinopathy, allostery in haemoglobin is reduced, and the Bohr effect is diminished. As a result, during periods of exercise, the mutant haemoglobin has a higher affinity for oxygen and tissue may suffer minor oxygen starvation.
T-state stabilization.
When hemoglobin is in its T state, the N-terminal amino groups of the α-subunits and the C-terminal histidine of the β-subunits are protonated, giving them a positive charge and allowing these residues to participate in ionic interactions with carboxyl groups on nearby residues. These interactions help hold the haemoglobin in the T state. Decreases in pH (increases in acidity) stabilize this state even more, since a decrease in pH makes these residues even more likely to be protonated, strengthening the ionic interactions. In the R state, the ionic pairings are absent, meaning that the R state's stability increases when the pH increases, as these residues are less likely to stay protonated in a more basic environment. The Bohr effect works by simultaneously destabilizing the high-affinity R state and stabilizing the low-affinity T state, which leads to an overall decrease in oxygen affinity. This can be visualized on an oxygen-haemoglobin dissociation curve by shifting the whole curve to the right.
Carbon dioxide can also react directly with the N-terminal amino groups to form carbamates, according to the following reaction:
<chem>R-NH2 + CO2 <=> R-NH-COO^- + H+</chem>
CO2 forms carbamates more frequently with the T state, which helps to stabilize this conformation. The process also creates protons, meaning that the formation of carbamates also contributes to the strengthening of ionic interactions, further stabilizing the T state.
Special cases.
Marine mammals.
An exception to the otherwise well-supported link between animal body size and the sensitivity of its haemoglobin to changes in pH was discovered in 1961. Based on their size and weight, many marine mammals were hypothesized to have a very low, almost negligible Bohr effect. However, when their blood was examined, this was not the case. Humpback whales weighing 41,000 kilograms had an observed formula_0 value of 0.82, which is roughly equivalent to the Bohr effect magnitude in a 0.57 kg guinea pig. This extremely strong Bohr effect is hypothesized to be one of marine mammals' many adaptations for deep, long dives, as it allows for virtually all of the bound oxygen on haemoglobin to dissociate and supply the whale's body while it is underwater. Examination of other marine mammal species supports this. In pilot whales and porpoises, which are primarily surface feeders and seldom dive for more than a few minutes, the formula_0 was 0.52, comparable to a cow, which is much closer to the expected Bohr effect magnitude for animals of their size.
Carbon monoxide.
Another special case of the Bohr effect occurs when carbon monoxide is present. This molecule serves as a competitive inhibitor for oxygen, and binds to haemoglobin to form carboxyhaemoglobin. Haemoglobin's affinity for CO is about 210 times stronger than its affinity for O2, meaning that it is very unlikely to dissociate, and once bound, it blocks the binding of O2 to that subunit. At the same time, CO is structurally similar enough to O2 to cause carboxyhemoglobin to favor the R state, raising the oxygen affinity of the remaining unoccupied subunits. This combination significantly reduces the delivery of oxygen to the tissues of the body, which is what makes carbon monoxide so toxic. This toxicity is reduced slightly by an increase in the strength of the Bohr effect in the presence of carboxyhemoglobin. This increase is ultimately due to differences in interactions between heme groups in carboxyhemoglobin relative to oxygenated hemoglobin. It is most pronounced when the oxygen concentration is extremely low, as a last-ditch effort when the need for oxygen delivery becomes critical. However, the physiological implications of this phenomenon remain unclear.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "{\\scriptstyle \\Delta \\log (P_{50}) \\over \\Delta \\text{pH}}"
},
{
"math_id": 1,
"text": "\\log (P_{50})"
},
{
"math_id": 2,
"text": "\\text{pH}"
},
{
"math_id": 3,
"text": " \\Delta "
},
{
"math_id": 4,
"text": "\\Delta \\log (P_{50})"
},
{
"math_id": 5,
"text": "\\Delta \\text{pH}"
},
{
"math_id": 6,
"text": "\\left({\\scriptstyle \\Delta \\log (P_{50}) \\over \\Delta \\text{pH}} = -0.38\\right)"
}
] |
https://en.wikipedia.org/wiki?curid=618291
|
6183392
|
Distributed minimum spanning tree
|
The distributed minimum spanning tree (MST) problem involves the construction of a minimum spanning tree by a distributed algorithm, in a network where nodes communicate by message passing. It is radically different from the classical sequential problem, although the most basic approach resembles Borůvka's algorithm. One important application of this problem is to find a tree that can be used for broadcasting. In particular, if the cost for a message to pass through an edge in a graph is significant, an MST can minimize the total cost for a source process to communicate with all the other processes in the network.
The problem was first suggested and solved in formula_0 time in 1983 by Gallager "et al.", where formula_1 is the number of vertices in the graph. Later, the solution was improved to formula_2 and finally
formula_3 where "D" is the network (graph) diameter. A lower bound on the time complexity of the solution was eventually shown to be
formula_4
Overview.
The input graph formula_5 is considered to be a network, where vertices formula_1 are independent computing nodes and edges formula_6 are communication links. Links are weighted as in the classical problem.
At the beginning of the algorithm, nodes know only the weights of the links which are connected to them. (It is possible to consider models in which they know more, for example their neighbors' links.)
As the output of the algorithm, every node knows which of its links belong to the minimum spanning tree and which do not.
MST in message-passing model.
The message-passing model is one of the most commonly used models in distributed computing. In this model, each process is modeled as a node of a graph. Each communication channel between two processes is an edge of the graph.
Two commonly used algorithms for the classical minimum spanning tree problem are Prim's algorithm and Kruskal's algorithm. However, it is difficult to apply these two algorithms in the distributed message-passing model. The main challenges are that both algorithms process one node or one edge at a time, which makes them hard to parallelize, and that both require knowledge of the state of the whole graph, which is very difficult to discover in the message-passing model.
Due to these difficulties, new techniques were needed for distributed MST algorithms in the message-passing model. Some bear similarities to Borůvka's algorithm for the classical MST problem.
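To fix the intuition of repeatedly joining fragments along minimum-weight outgoing edges, the following is a minimal sequential sketch of Borůvka's algorithm in Python (not the distributed GHS protocol itself; the function names and the example graph are illustrative, and a connected graph with distinct edge weights is assumed):

```python
def boruvka_mst(num_nodes, edges):
    """Sequential sketch of Boruvka's algorithm.

    edges: list of (weight, u, v) with distinct weights, so the MST is
    unique (the same assumption the GHS algorithm relies on)."""
    parent = list(range(num_nodes))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    num_fragments = num_nodes
    while num_fragments > 1:
        # For each fragment, find its minimum-weight outgoing edge.
        best = {}
        for w, u, v in edges:
            fu, fv = find(u), find(v)
            if fu == fv:
                continue  # internal edge, not outgoing
            for f in (fu, fv):
                if f not in best or w < best[f][0]:
                    best[f] = (w, u, v)
        # Join fragments along the chosen edges (one "level" of merging).
        for w, u, v in best.values():
            fu, fv = find(u), find(v)
            if fu != fv:
                parent[fu] = fv
                mst.append((w, u, v))
                num_fragments -= 1
    return mst

# Example: 4 nodes, 5 edges with distinct weights.
print(boruvka_mst(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)]))
```

Each pass of the outer loop at least halves the number of fragments, mirroring the level structure of GHS discussed below.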
GHS algorithm.
The GHS algorithm of Gallager, Humblet and Spira is one of the best-known algorithms in distributed computing theory. This algorithm constructs an MST in the asynchronous message-passing model.
Assumptions.
The GHS algorithm requires several assumptions: the graph is connected and its edge weights are distinct (which guarantees a unique MST); each node initially knows the weight of each edge incident to it; and messages sent over an edge arrive without error, in FIFO order, after a finite but unpredictable delay.
Properties of MSTs.
Define a fragment of an MST formula_7 to be a sub-tree of formula_7. That is, a fragment is a connected set of nodes and edges of formula_7. MSTs have two important properties in relation to fragments: if all edge weights are distinct, then the MST is unique; and given a fragment of an MST, adding its minimum-weight outgoing edge (an edge with exactly one endpoint inside the fragment) yields another fragment of an MST.
These two properties form the basis for proving correctness of the GHS algorithm. In general, the GHS algorithm is a bottom-up algorithm in the sense that it starts by letting each individual node be a fragment, and then joining fragments until a single fragment is left. The above properties imply that the remaining fragment must be an MST.
Description of the algorithm.
The GHS algorithm assigns a "level" to each fragment, which is a non-decreasing integer with initial value 0. Furthermore, each fragment with a non-zero level has an "ID", which is the ID of the core edge in the fragment, which is selected when the fragment is constructed. During the execution of the algorithm, each node can classify each of its incident edges into three categories: "branch" edges, which have already been determined to belong to the MST; "rejected" edges, which have already been determined not to belong to the MST; and "basic" edges, which are not yet classified.
In level-0 fragments, each awakened node will do the following: it chooses its minimum-weight incident edge, marks that edge as a branch edge, sends a message over it to the node on the other side, and waits for a message from that node.
The edge that is chosen by both of the two nodes it connects becomes the core edge, and is assigned level 1.
In non-zero-level fragments, a separate algorithm is executed in each level. This algorithm can be separated into three stages: broadcast, convergecast, and change core.
Broadcast.
The two nodes adjacent to the core broadcast messages to the rest of the nodes in the fragment. The messages are sent via the branch edge but not via the core. Each broadcast message contains the ID and level of the fragment. At the end of this stage, each node has received the new fragment ID and level.
Convergecast.
In this stage, all nodes in the fragment cooperate to find the minimum weight outgoing edge of the fragment. Outgoing edges are edges connecting to other fragments. The messages sent in this stage are in the opposite direction of the broadcast stage. Initialized by all the leaves (the nodes that have only one branch edge), a message is sent through the branch edge. The message contains the minimum weight of the incident outgoing edge it found (or infinity if no such edge was found). The way to find the minimum outgoing edge will be discussed later. For each non-leaf node, given the number of its branch edges as formula_9, after receiving formula_10 convergecast messages, it will pick the minimum weight from the messages and compare it to the weights of its incident outgoing edges. The smallest weight will be sent toward the branch it received the broadcast from.
Change core.
After the completion of the previous stage, the two nodes connected by the core can inform each other of the best edges they received. Then they can identify the minimum outgoing edge from the entire fragment. A message will be sent from the core to the minimum outgoing edge via a path of branch edges. Finally, a message will be sent out via the chosen outgoing edge to request to combine the two fragments that the edge connects. Depending on the levels of those two fragments, one of two combined operations are performed to form a new fragment; details discussed below.
Finding the minimum-weight incident outgoing edge.
As discussed above, every node needs to find its minimum weight outgoing incident edge after the receipt of a broadcast message from the core. If node formula_9 receives a broadcast, it will pick its minimum weight basic edge and send a message to the node formula_11 on the other side with its fragment's ID and level. Then, node formula_11 will decide whether the edge is an outgoing edge and send back a message to notify node formula_9 of the result. The decision is made according to the following: if formula_12, then the edge is not an outgoing edge and is rejected; if formula_13 and formula_14, then formula_11 replies immediately that the edge is an outgoing edge; and if formula_13 and formula_15, then formula_11 postpones its reply until its own fragment's level has risen to at least the level of node formula_9's fragment.
Combining two fragments.
Let formula_16 and formula_17 be the two fragments that need to be combined. There are two ways to do this: "Merge", which applies when formula_18 and both fragments have chosen the same minimum-weight outgoing edge, producing a combined fragment of level formula_19; and "Absorb", which applies when formula_20, in which case formula_16 is absorbed into formula_17 and the combined fragment keeps the level of formula_17.
Furthermore, when an "Absorb" operation occurs, formula_16 must be in the stage of changing the core, while formula_17 can be in an arbitrary stage. Therefore, "Absorb" operations may be done differently depending on the state of formula_17. Let formula_8 be the edge that formula_16 and formula_17 want to combine with, and let formula_9 and formula_11 be the two nodes connected by formula_8 in formula_16 and formula_17, respectively. There are two cases to consider:
Maximum number of levels.
As mentioned above, fragments are combined by either "Merge" or "Absorb" operation. The "Absorb" operation does not change the maximum level among all fragments. The "Merge" operation may increase the maximum level by 1. In the worst case, all fragments are combined with "Merge" operations, so the number of fragments decreases by half in each level. Therefore, the maximum number of levels is formula_21, where formula_1 is the number of nodes.
Progress property.
The GHS algorithm has the useful property that the lowest-level fragments are never blocked, although some operations in fragments of higher level may be blocked. This property implies that the algorithm eventually terminates with a minimum spanning tree.
Approximation algorithms.
An formula_22-approximation algorithm was developed by Maleq Khan and Gopal Pandurangan. This algorithm runs in formula_23 time, where formula_24 is the local shortest path diameter of the graph.
|
[
{
"math_id": 0,
"text": "O(V \\log V)"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "O(V)"
},
{
"math_id": 3,
"text": "O(\\sqrt V \\log^* V + D)"
},
{
"math_id": 4,
"text": "\\Omega\\left({\\frac{\\sqrt V}{\\log V}+D}\\right)."
},
{
"math_id": 5,
"text": "G(V,E)"
},
{
"math_id": 6,
"text": "E"
},
{
"math_id": 7,
"text": "T"
},
{
"math_id": 8,
"text": "e"
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": "n - 1"
},
{
"math_id": 11,
"text": "n'"
},
{
"math_id": 12,
"text": "\\mathit{Fragment}_\\mathit{ID}(n) = \\mathit{Fragment}_\\mathit{ID}(n')"
},
{
"math_id": 13,
"text": "\\mathit{Fragment}_\\mathit{ID}(n) \\neq \\mathit{Fragment}_\\mathit{ID}(n')"
},
{
"math_id": 14,
"text": "\\mathit{Level}(n) \\leq \\mathit{Level}(n')"
},
{
"math_id": 15,
"text": "\\mathit{Level}(n) > \\mathit{Level}(n')"
},
{
"math_id": 16,
"text": "F"
},
{
"math_id": 17,
"text": "F'"
},
{
"math_id": 18,
"text": "\\mathit{Level}(F) = \\mathit{Level}(F')"
},
{
"math_id": 19,
"text": "\\mathit{Level}(F) + 1"
},
{
"math_id": 20,
"text": "\\mathit{Level}(F) < \\mathit{Level}(F')"
},
{
"math_id": 21,
"text": "O(\\log V)"
},
{
"math_id": 22,
"text": "O(\\log n)"
},
{
"math_id": 23,
"text": "O(D+L\\log n)"
},
{
"math_id": 24,
"text": "L"
}
] |
https://en.wikipedia.org/wiki?curid=6183392
|
61839
|
Inner automorphism
|
Automorphism of a group, ring, or algebra given by the conjugation action of one of its elements
In abstract algebra an inner automorphism is an automorphism of a group, ring, or algebra given by the conjugation action of a fixed element, called the "conjugating element". They can be realized via operations from within the group itself, hence the adjective "inner". These inner automorphisms form a subgroup of the automorphism group, and the quotient of the automorphism group by this subgroup is defined as the outer automorphism group.
Definition.
If G is a group and g is an element of G (alternatively, if G is a ring, and g is a unit), then the function
formula_0
is called (right) conjugation by g (see also conjugacy class). This function is an endomorphism of G: for all formula_1
formula_2
where the second equality is given by the insertion of the identity between formula_3 and formula_4 Furthermore, it has a left and right inverse, namely formula_5 Thus, formula_6 is both a monomorphism and an epimorphism, and so an isomorphism of G with itself, i.e. an automorphism. An inner automorphism is any automorphism that arises from conjugation.
When discussing right conjugation, the expression formula_7 is often denoted exponentially by formula_8 This notation is used because composition of conjugations satisfies the identity: formula_9 for all formula_10 This shows that right conjugation gives a right action of G on itself.
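These identities are easy to verify computationally. The following Python sketch (the helper names and permutation conventions are choices made here for illustration) checks on the symmetric group S3 that every conjugation map is a homomorphism and that composition of conjugations behaves as a right action:

```python
from itertools import permutations

def mul(a, b):
    """Compose permutations: (a*b)(i) = a(b(i))."""
    return tuple(a[b[i]] for i in range(len(a)))

def inv(a):
    """Inverse permutation."""
    out = [0] * len(a)
    for i, ai in enumerate(a):
        out[ai] = i
    return tuple(out)

def conj(x, g):
    """Right conjugation x^g = g^{-1} x g."""
    return mul(mul(inv(g), x), g)

S3 = list(permutations(range(3)))  # the symmetric group on {0, 1, 2}

# phi_g is a homomorphism: (x1 x2)^g = x1^g x2^g ...
assert all(conj(mul(x1, x2), g) == mul(conj(x1, g), conj(x2, g))
           for g in S3 for x1 in S3 for x2 in S3)
# ... and conjugation gives a right action: (x^g1)^g2 = x^(g1 g2).
assert all(conj(conj(x, g1), g2) == conj(x, mul(g1, g2))
           for x in S3 for g1 in S3 for g2 in S3)
print("identities verified on S_3")
```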
A common example is the natural homomorphism formula_16, which associates to every formula_14 the (inner) automorphism formula_17 in formula_18; that is, formula_19. Its image formula_12 is the subgroup of inner automorphisms of the group formula_13, which is a normal subgroup of formula_18, and its kernel is the center of formula_13 (the set of all formula_14 for which conjugation by them returns the trivial automorphism); in other words, formula_15.
Let formula_20 (conjugation written on the left; the right-conjugation convention above works identically). Establishing this requires demonstrating that (1) formula_17 is a homomorphism, (2) formula_17 is also a bijection, and (3) formula_11 is a homomorphism. For (1), formula_21. For (2), formula_17 has the two-sided inverse formula_25, since applying formula_25 to formula_23 returns formula_22. For (3), formula_26 coincides with formula_27.
Inner and outer automorphism groups.
The composition of two inner automorphisms is again an inner automorphism, and with this operation, the collection of all inner automorphisms of G is a group, the inner automorphism group of G, denoted Inn("G").
Inn("G") is a normal subgroup of the full automorphism group Aut("G") of G. The outer automorphism group, Out("G") is the quotient group
formula_28
The outer automorphism group measures, in a sense, how many automorphisms of G are not inner. Every non-inner automorphism yields a non-trivial element of Out("G"), but different non-inner automorphisms may yield the same element of Out("G").
Saying that conjugation of x by a leaves x unchanged is equivalent to saying that a and x commute:
formula_29
Therefore the existence and number of inner automorphisms that are not the identity mapping is a kind of measure of the failure of the commutative law in the group (or ring).
An automorphism of a group G is inner if and only if it extends to every group containing G.
By associating the element "a" ∈ "G" with the inner automorphism "f"("x") = "x""a" in Inn("G") as above, one obtains an isomorphism between the quotient group "G" / Z("G") (where Z("G") is the center of G) and the inner automorphism group:
formula_30
This is a consequence of the first isomorphism theorem, because Z("G") is precisely the set of those elements of G that give the identity mapping as corresponding inner automorphism (conjugation changes nothing).
Non-inner automorphisms of finite p-groups.
A result of Wolfgang Gaschütz says that if G is a finite non-abelian p-group, then G has an automorphism of p-power order which is not inner.
It is an open problem whether every non-abelian p-group G has an automorphism of order p. The question has a positive answer whenever G satisfies any of several known sufficient conditions.
Types of groups.
The inner automorphism group of a group G, Inn("G"), is trivial (i.e., consists only of the identity element) if and only if G is abelian.
The group Inn("G") is cyclic only when it is trivial: if Inn("G") ≅ "G" / Z("G") were a non-trivial cyclic group, then G would be abelian, making Inn("G") trivial, a contradiction.
At the opposite end of the spectrum, the inner automorphisms may exhaust the entire automorphism group; a group whose automorphisms are all inner and whose center is trivial is called complete. This is the case for all of the symmetric groups on n elements when n is not 2 or 6. When "n" = 6, the symmetric group has a unique non-trivial class of non-inner automorphisms, and when "n" = 2, the symmetric group, despite having no non-inner automorphisms, is abelian, giving a non-trivial center, disqualifying it from being complete.
If the inner automorphism group of a perfect group G is simple, then G is called quasisimple.
Lie algebra case.
An automorphism of a Lie algebra 𝔊 is called an inner automorphism if it is of the form Ad"g", where Ad is the adjoint map and g is an element of a Lie group whose Lie algebra is 𝔊. The notion of inner automorphism for Lie algebras is compatible with the notion for groups in the sense that an inner automorphism of a Lie group induces a unique inner automorphism of the corresponding Lie algebra.
Extension.
If G is the group of units of a ring, A, then an inner automorphism on G can be extended to a mapping on the projective line over A by the group of units of the matrix ring, M2("A"). In particular, the inner automorphisms of the classical groups can be extended in that way.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\begin{align}\n \\varphi_g\\colon G&\\to G \\\\\n \\varphi_g(x)&:= g^{-1}xg\n \\end{align}"
},
{
"math_id": 1,
"text": "x_1,x_2\\in G,"
},
{
"math_id": 2,
"text": "\\varphi_g(x_1 x_2) = g^{-1} x_1 x_2g = g^{-1} x_1 \\left(g g^{-1}\\right) x_2 g = \\left(g^{-1} x_1 g\\right)\\left(g^{-1} x_2 g\\right) = \\varphi_g(x_1)\\varphi_g(x_2),"
},
{
"math_id": 3,
"text": "x_1"
},
{
"math_id": 4,
"text": "x_2."
},
{
"math_id": 5,
"text": "\\varphi_{g^{-1}}."
},
{
"math_id": 6,
"text": "\\varphi_g"
},
{
"math_id": 7,
"text": "g^{-1}xg"
},
{
"math_id": 8,
"text": "x^g."
},
{
"math_id": 9,
"text": "\\left(x^{g_1}\\right)^{g_2} = x^{g_1g_2}"
},
{
"math_id": 10,
"text": "g_1, g_2 \\in G."
},
{
"math_id": 11,
"text": "\\Phi"
},
{
"math_id": 12,
"text": "\\text{Im} (\\Phi)"
},
{
"math_id": 13,
"text": "G"
},
{
"math_id": 14,
"text": "g \\in G"
},
{
"math_id": 15,
"text": "\\text{Ker} (\\Phi) = \\text{Z}(G)"
},
{
"math_id": 16,
"text": "\\Phi : G \\to \\text{Aut}(G) "
},
{
"math_id": 17,
"text": "\\varphi_{g}"
},
{
"math_id": 18,
"text": "\\text{Aut}(G)"
},
{
"math_id": 19,
"text": "\\Phi : g \\mapsto \\varphi_{g}"
},
{
"math_id": 20,
"text": "\\varphi_{g}(x) := gxg^{-1}"
},
{
"math_id": 21,
"text": "\\varphi_{g}(xx')=gxx'g^{-1} =gx(g^{-1}g)x'g^{-1} = (gxg^{-1})(gx'g^{-1}) = \\varphi_{g}(x)\\varphi_{g}(x')"
},
{
"math_id": 22,
"text": "x"
},
{
"math_id": 23,
"text": "gxg^{-1}"
},
{
"math_id": 24,
"text": "g^{-1}"
},
{
"math_id": 25,
"text": "\\varphi_{g^{-1}}"
},
{
"math_id": 26,
"text": "\\Phi(gg')(x)=(gg')x(gg')^{-1}"
},
{
"math_id": 27,
"text": "\\Phi(g)\\circ \\Phi(g')(x)=\\Phi(g) \\circ (g'hg'^{-1}) = gg'hg'^{-1}g^{-1} = (gg')h(gg')^{-1}"
},
{
"math_id": 28,
"text": "\\operatorname{Out}(G) = \\operatorname{Aut}(G) / \\operatorname{Inn}(G)."
},
{
"math_id": 29,
"text": "a^{-1}xa = x \\iff xa = ax."
},
{
"math_id": 30,
"text": "G\\,/\\,\\mathrm{Z}(G) \\cong \\operatorname{Inn}(G)."
}
] |
https://en.wikipedia.org/wiki?curid=61839
|
61839625
|
Normal eigenvalue
|
Spectral theory eigenvalue
In mathematics, specifically in spectral theory, an eigenvalue of a closed linear operator is called normal if the space admits a decomposition into a direct sum of a finite-dimensional generalized eigenspace and an invariant subspace where formula_0 has a bounded inverse.
The set of normal eigenvalues coincides with the discrete spectrum.
Root lineal.
Let formula_1 be a Banach space. The root lineal formula_2 of a linear operator formula_3 with domain formula_4 corresponding to the eigenvalue formula_5 is defined as
formula_6
where formula_7 is the identity operator in formula_1.
This set is a linear manifold but is not necessarily closed in formula_1. If it is closed (for example, when it is finite-dimensional), it is called the generalized eigenspace of formula_8 corresponding to the eigenvalue formula_9.
Definition of a normal eigenvalue.
An eigenvalue formula_5 of a closed linear operator formula_3 in the Banach space formula_1 with domain formula_10 is called "normal" (in the original terminology, "formula_9 corresponds to a normally splitting finite-dimensional root subspace") if the following two conditions are satisfied: the algebraic multiplicity is finite, formula_11; and the space admits a direct-sum decomposition formula_12, where formula_13 is an invariant subspace of formula_8 in which formula_14 has a bounded inverse.
That is, the restriction formula_15 of formula_8 onto formula_13 is an operator with domain formula_16 and with the range formula_17 which has a bounded inverse.
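As a concrete finite-dimensional illustration (an example chosen here, not taken from the cited literature), consider a single Jordan block, for which the root lineal is the whole space and both conditions hold trivially:

```latex
% A 2x2 Jordan block with eigenvalue \lambda on \mathfrak{B} = \mathbb{C}^2:
\[
A = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix},
\qquad
(A - \lambda I)^2 = 0 ,
\]
% so every x in \mathbb{C}^2 satisfies (A - \lambda I)^2 x = 0, giving
\[
\mathfrak{L}_\lambda(A) = \mathbb{C}^2 ,
\qquad
\nu = \dim \mathfrak{L}_\lambda(A) = 2 ,
\]
% which is finite-dimensional (hence closed). The trivial decomposition
% \mathfrak{B} = \mathfrak{L}_\lambda(A) \oplus \{0\} satisfies the second
% condition, so \lambda is a normal eigenvalue, even though the eigenspace
% \ker(A - \lambda I) is only one-dimensional.
```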
Equivalent characterizations of normal eigenvalues.
Let formula_3 be a closed linear densely defined operator in the Banach space formula_1. The following statements are equivalent (Theorem III.88): formula_18 is a normal eigenvalue; formula_18 is an isolated point of the spectrum formula_19 and the corresponding Riesz projector formula_20 has finite rank formula_21.
If formula_9 is a normal eigenvalue, then the root lineal formula_2 coincides with the range of the Riesz projector, formula_22.
Relation to the discrete spectrum.
The above equivalence shows that the set of normal eigenvalues coincides with the discrete spectrum, defined as the set of isolated points of the spectrum with finite rank of the corresponding Riesz projector.
Decomposition of the spectrum of nonselfadjoint operators.
The spectrum of a closed operator formula_3 in the Banach space formula_1 can be decomposed into the union of two disjoint sets, the set of normal eigenvalues and the fifth type of the essential spectrum:
formula_23
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A-\\lambda I"
},
{
"math_id": 1,
"text": "\\mathfrak{B}"
},
{
"math_id": 2,
"text": "\\mathfrak{L}_\\lambda(A)"
},
{
"math_id": 3,
"text": "A:\\,\\mathfrak{B}\\to\\mathfrak{B}"
},
{
"math_id": 4,
"text": "\\mathfrak{D}(A)"
},
{
"math_id": 5,
"text": "\\lambda\\in\\sigma_p(A)"
},
{
"math_id": 6,
"text": "\\mathfrak{L}_\\lambda(A)=\\bigcup_{k\\in\\N}\\{x\\in\\mathfrak{D}(A):\\,(A-\\lambda I_{\\mathfrak{B}})^j x\\in\\mathfrak{D}(A)\\,\\forall j\\in\\N,\\,j\\le k;\\, (A-\\lambda I_{\\mathfrak{B}})^k x=0\\}\\subset\\mathfrak{B}, "
},
{
"math_id": 7,
"text": "I_{\\mathfrak{B}}"
},
{
"math_id": 8,
"text": "A"
},
{
"math_id": 9,
"text": "\\lambda"
},
{
"math_id": 10,
"text": "\\mathfrak{D}(A)\\subset\\mathfrak{B}"
},
{
"math_id": 11,
"text": "\\nu=\\dim\\mathfrak{L}_\\lambda(A)<\\infty"
},
{
"math_id": 12,
"text": "\\mathfrak{B}=\\mathfrak{L}_\\lambda(A)\\oplus \\mathfrak{N}_\\lambda"
},
{
"math_id": 13,
"text": "\\mathfrak{N}_\\lambda"
},
{
"math_id": 14,
"text": "A-\\lambda I_{\\mathfrak{B}}"
},
{
"math_id": 15,
"text": "A_2"
},
{
"math_id": 16,
"text": "\\mathfrak{D}(A_2)=\\mathfrak{N}_\\lambda\\cap\\mathfrak{D}(A)"
},
{
"math_id": 17,
"text": "\\mathfrak{R}(A_2-\\lambda I)\\subset\\mathfrak{N}_\\lambda"
},
{
"math_id": 18,
"text": "\\lambda\\in\\sigma(A)"
},
{
"math_id": 19,
"text": "\\sigma(A)"
},
{
"math_id": 20,
"text": "P_\\lambda"
},
{
"math_id": 21,
"text": "\\nu=\\dim\\mathfrak{L}_\\lambda(A)"
},
{
"math_id": 22,
"text": "\\mathfrak{R}(P_\\lambda)"
},
{
"math_id": 23,
"text": "\n\\sigma(A)=\\{\\text{normal eigenvalues of}\\ A\\}\\cup\\sigma_{\\mathrm{ess},5}(A).\n"
}
] |
https://en.wikipedia.org/wiki?curid=61839625
|
61840134
|
Recamán's sequence
|
Endless sequence of integers
In mathematics and computer science, Recamán's sequence is a well-known sequence defined by a recurrence relation. Because each element is related to the previous elements in a straightforward way, the sequence is often defined using recursion.
It is named after its inventor, Bernardo Recamán Santos, a Colombian mathematician.
Definition.
Recamán's sequence formula_0 is defined as:
formula_1
The first terms of the sequence are:
0, 1, 3, 6, 2, 7, 13, 20, 12, 21, 11, 22, 10, 23, 9, 24, 8, 25, 43, 62, 42, 63, 41, 18, 42, 17, 43, 16, 44, 15, 45, 14, 46, 79, 113, 78, 114, 77, 39, 78, 38, 79, 37, 80, 36, 81, 35, 82, 34, 83, 33, 84, 32, 85, 31, 86, 30, 87, 29, 88, 28, 89, 27, 90, 26, 91, 157, 224, 156, 225, 155, ...
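The recurrence translates directly into code. The following minimal Python sketch (the function name is chosen here for illustration) generates the terms above, keeping a set of the values seen so far to implement the membership test in the definition:

```python
def recaman(n_terms):
    """Generate the first n_terms of Recaman's sequence."""
    seq = [0]
    seen = {0}
    for n in range(1, n_terms):
        candidate = seq[-1] - n
        if candidate <= 0 or candidate in seen:  # back-step not allowed
            candidate = seq[-1] + n              # step forward instead
        seq.append(candidate)
        seen.add(candidate)
    return seq

print(recaman(18))
# [0, 1, 3, 6, 2, 7, 13, 20, 12, 21, 11, 22, 10, 23, 9, 24, 8, 25]
```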
On-line encyclopedia of integer sequences (OEIS).
Recamán's sequence was named after its inventor, Colombian mathematician Bernardo Recamán Santos, by Neil Sloane, creator of the On-Line Encyclopedia of Integer Sequences (OEIS). The OEIS entry for this sequence is .
Visual representation.
The most common visualization of Recamán's sequence is simply to plot its values, as in the figure at right.
On January 14, 2018, the Numberphile YouTube channel published a video titled The Slightly Spooky Recamán Sequence, showing a visualization using alternating semicircles, as shown in the figure at the top of this page.
Sound representation.
Values of the sequence can be associated with musical notes; in that case, running through the sequence can be heard as the performance of a musical tune.
Properties.
The sequence satisfies:
formula_2
formula_3
This is not a permutation of the integers: the first repeated term is formula_4. Another one is formula_5.
Conjecture.
Neil Sloane has conjectured that every number eventually appears, but this has not been proved. Even though 10^230 terms have been calculated (in 2018), the number 852,655 has not appeared on the list.
Uses.
Besides its mathematical and aesthetic properties, Recamán's sequence can be used to secure 2D images by steganography.
Alternate sequence.
This is the best-known sequence invented by Recamán. There is another, less well-known, sequence defined as:
formula_6
formula_7
This OEIS entry is .
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "a_0, a_1, a_2\\dots"
},
{
"math_id": 1,
"text": "a_n = \\begin{cases}\n0 && \\text{if } n = 0 \\\\\na_{n - 1} -n && \\text{if } a_{n - 1} -n > 0 \\text{ and is not already in the sequence} \\\\\na_{n - 1} + n && \\text{otherwise}\n\\end{cases}"
},
{
"math_id": 2,
"text": "a_n \\geq 0"
},
{
"math_id": 3,
"text": "|a_n - a_{n-1}| = n"
},
{
"math_id": 4,
"text": "42 = a_{24} = a_{20}"
},
{
"math_id": 5,
"text": "43 = a_{18} = a_{26}"
},
{
"math_id": 6,
"text": "a_1 = 1"
},
{
"math_id": 7,
"text": "a_{n + 1} = \\begin{cases}\na_n / n && \\text{if } n \\text{ divides } a_n \\\\\nn a_n && \\text{otherwise}\n\\end{cases}"
}
] |
https://en.wikipedia.org/wiki?curid=61840134
|
61852907
|
Song of Songs 3
|
Third chapter of the Song of Songs
Song of Songs 3 (abbreviated as Song 3) is the third chapter of the Song of Songs in the Hebrew Bible or the Old Testament of the Christian Bible. This book is one of the Five Megillot, a collection of short books, together with Ruth, Lamentations, Ecclesiastes and Esther, within the Ketuvim, the third and the last part of the Hebrew Bible. Jewish tradition views Solomon as the author of this book (although this is now largely disputed), and this attribution influences the acceptance of this book as a canonical text. This chapter contains a female song about her search for her lover at night and the poem describing King Solomon's procession.
Text.
The original text is written in the Hebrew language. This chapter is divided into 11 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century) and Codex Leningradensis (1008). Some fragments containing parts of this chapter were found among the Dead Sea Scrolls: 4Q106 (4QCanta; 30 BCE-30 CE; extant verses 3–5, 7–11), 4Q107 (4QCantb; 30 BCE-30 CE; extant verses 1–2, 5, 9–11), and 4Q108 (4QCantc; 30 BCE-30 CE; extant verses 7–8).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Structure.
New King James Version (NKJV) groups this chapter into:
Female: Search and seizure (3:1-5).
The first part of this chapter is "a tightly constructed song" of the female protagonist, describing how she looks for her lover at night (or in a dream) in the city streets, until she finds him and brings him into her mother's house. The setting of this poem progresses from the woman's bed (verse 1) to the public areas of the city (verses 2-4b) and finally to the privacy of her mother's bedroom (verses 4c-5). It closes with the second appeal to the 'daughters of Jerusalem'.
"On my bed by night I sought him whom my soul loves;"
"I sought him, but found him not."
Verse 1.
"By night" (, "ba-lê-lō-wṯ") can be read as "nightly" or "night after night": the word "refers to more nights than one".
The woman had expected her lover to return "before dawn"; Hudson Taylor notes that she might have regretted "lightly dismiss[ing] Him, with the thought: A little later I may enjoy His love ... Poor foolish bride!"
"I charge you, O daughters of Jerusalem,"
"By the gazelles or by the does of the field,"
"Do not stir up nor awaken love"
"Until it pleases."
Verse 5.
The names of God are apparently substituted with similar sounding phrases depicting 'female gazelles' (, "tseḇā’ōṯ") for [God of] hosts ( "tseḇā’ōṯ"), and 'does of the field'/'wild does/female deer' (, "’ay-lōṯ ha-śā-ḏeh") for God Almighty (, "’êl shaddai").
Male: Marriage scene (3:6-11).
This section starts a poetic exposition of love and marriage which forms the core of the book (Song 3:6-5:1). Hess applies these six verses to the man, whereas Fox prefers the daughters of Jerusalem as the speakers, and the New King James Version assigns them to "the Shulamite" (= the woman).
Solomon is the focus of this section, as his name is mentioned three times (verses 7, 9 and 11), and the suffix 'his' ("-o") refers to him once in verse 7, once in verse 9 and four times in the second part of verse 11. The last word of this part is 'his heart', referring directly to the essential aspect of King Solomon and the most relevant to the whole love poem. The mention of Solomon's mother in verse 11 is in line with the focus on mothers in the book, both the woman's and the man's.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=61852907
|
6185334
|
Discrimination testing
|
Discrimination testing is a technique employed in sensory analysis to determine whether there is a detectable difference among two or more products. The test uses a group of assessors (panellists) with a degree of training appropriate to the complexity of the test to discriminate one product from another through one of a variety of experimental designs. Though useful, these tests typically do not quantify or describe any differences; a panel trained more specifically, under a different study design, is required to describe the differences and assess their significance.
Statistical basis.
The statistical principle behind any discrimination test should be to reject a null hypothesis (H0) that states there is no detectable difference between two (or more) products. If there is sufficient evidence to reject H0 in favor of the alternative hypothesis, HA: There is a detectable difference, then a difference can be recorded. However, failure to reject H0 should not be assumed to be sufficient evidence to accept it. H0 is formulated on the premise that all of the assessors guessed when they made their response. The statistical test chosen should give a probability value that the result was arrived at through pure guesswork. If this probability is sufficiently low (usually below 0.05 or 5%) then H0 can be rejected in favor of HA.
Tests used to decide whether or not to reject H0 include the binomial test, the χ2 (chi-squared) test, and the t-test.
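As a sketch of how such a decision can be computed (plain Python; the panel size and number of correct responses are illustrative, and a triangle test with guessing probability 1/3 is assumed), the exact one-sided binomial tail gives the probability of the observed result arising through pure guesswork:

```python
from math import comb

def guess_probability(correct, panellists, p_guess):
    """P(at least `correct` right answers out of `panellists` by pure
    guessing): the one-sided binomial tail used to test H0."""
    return sum(comb(panellists, k) * p_guess**k * (1 - p_guess)**(panellists - k)
               for k in range(correct, panellists + 1))

# Triangle test: 20 panellists, 11 picked the odd sample, guessing p = 1/3.
p_value = guess_probability(11, 20, 1 / 3)
print(f"p = {p_value:.4f}")  # ~0.038, below 0.05, so H0 would be rejected
```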
Types of test.
A number of tests can be classified as discrimination tests; any test designed to detect a difference qualifies. The type of test determines the number of samples presented to each member of the panel and the question(s) they are asked to respond to.
Schematically, these tests may be described as follows; A & B are used for knowns, X and Y are used for different unknowns, while (AB) means that the order of presentation is unknown:
Paired comparison.
In this type of test the assessors are presented with two products and are asked to state which product fulfils a certain condition. This condition will usually be some attribute such as sweetness, sourness, intensity of flavor, etc. The probability for each assessor arriving at a correct response by guessing is formula_0
Advantages.
Requires the minimum number of samples. It is the most straightforward approach when the question is "Which sample is more ____?"
Disadvantages.
The attribute likely to change must be known in advance. The test is not statistically powerful, so large panel sizes are required to obtain sufficient confidence.
Duo-trio.
The assessors are presented with three products, one of which is identified as the control. Of the other two, one is identical to the control, the other is the test product. The assessors are asked to state which product more closely resembles the control.
The probability for each assessor arriving at a correct response by guessing is formula_0
Advantages.
Quick to set up and execute. No need to have prior knowledge of nature of difference.
Disadvantages.
Not statistically powerful; therefore, relatively large panel sizes are required to obtain sufficient confidence.
Triangle.
The assessors are presented with three products, two of which are identical and the other one different. The assessors are asked to state which product they believe is the odd one out.
The probability for each assessor arriving at a correct response by guessing is formula_1
Advantages.
Can be quick to execute and offers greater power than paired comparison or duo-trio.
Disadvantages.
Errors of several kinds might occur in running the test. It is therefore evident that randomization, control and professional conduct of the experiment are essential for obtaining the most accurate results.
Importantly, the triangle test is used to assist research and development in formulating and reformulating products: the triangle design can determine whether a particular ingredient change, or a change in processing, creates a detectable difference in the final product. Triangle taste testing is also used in quality control to determine whether a particular production run (or production from different factories) meets the quality-control standard (i.e., is not different from the product standard in a triangle taste test using discriminators).
ABX.
The assessors are presented with three products, two of which are identified as reference A and alternative B, the third is unknown X, and identical to either A or B. The assessors are asked to state which of A and B the unknown is; the test may also be described as "matching-to-sample", or "duo-trio in balanced reference mode" (both knowns are presented as reference, rather than only one).
ABX testing is widely used in comparison of audio compression algorithms, but less used in food science.
ABX testing differs from the other listed tests in that subjects are given two known different samples, and thus are able to compare them with an eye towards differences: there is an "inspection phase". While this might be hypothesized to make discrimination easier, no advantage in discrimination performance has been observed in ABX testing compared with other testing methods.
Duo-trio in constant reference mode.
Like triangle testing, but the third sample is known not to be the odd one out. This design is intermediate between ABX testing (where the identities of the two known samples are stated) and triangle testing (where any of the three samples could be the odd one out).
Notes and references.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "p = 0.5"
},
{
"math_id": 1,
"text": "p = 1/3"
}
] |
https://en.wikipedia.org/wiki?curid=6185334
|
6185898
|
Ridge detection
|
In image processing, ridge detection is the attempt, via software, to locate ridges in an image, defined as curves whose points are local maxima of the function, akin to geographical ridges.
For a function of "N" variables, its ridges are a set of curves whose points are local maxima in "N" − 1 dimensions. In this respect, the notion of ridge points extends the concept of a local maximum. Correspondingly, the notion of valleys for a function can be defined by replacing the condition of a local maximum with the condition of a local minimum. The union of ridge sets and valley sets, together with a related set of points called the connector set, form a connected set of curves that partition, intersect, or meet at the critical points of the function. This union of sets together is called the function's relative critical set.
Ridge sets, valley sets, and relative critical sets represent important geometric information intrinsic to a function. In a way, they provide a compact representation of important features of the function, but the extent to which they can be used to determine global features of the function is an open question. The primary motivation for the creation of ridge detection and valley detection procedures has come from image analysis and computer vision and is to capture the interior of elongated objects in the image domain. Ridge-related representations in terms of watersheds have been used for image segmentation. There have also been attempts to capture the shapes of objects by graph-based representations that reflect ridges, valleys and critical points in the image domain. Such representations may, however, be highly noise sensitive if computed at a single scale only. Because scale-space theoretic computations involve convolution with the Gaussian (smoothing) kernel, it has been hoped that use of multi-scale ridges, valleys and critical points in the context of scale space theory should allow for a more robust representation of objects (or shapes) in the image.
In this respect, ridges and valleys can be seen as a complement to natural interest points or local extremal points. With appropriately defined concepts, ridges and valleys in the intensity landscape (or in some other representation derived from the intensity landscape) may form a scale-invariant skeleton for organizing spatial constraints on local appearance, with a number of qualitative similarities to the way Blum's medial axis transform provides a shape skeleton for binary images. In typical applications, ridge and valley descriptors are often used for detecting roads in aerial images and for detecting blood vessels in retinal images or three-dimensional magnetic resonance images.
Differential geometric definition of ridges and valleys at a fixed scale in a two-dimensional image.
Let formula_0 denote a two-dimensional function, and let formula_1 be the scale-space representation of formula_0 obtained by convolving formula_0 with a Gaussian function
formula_2.
Furthermore, let formula_3 and formula_4 denote the eigenvalues of the Hessian matrix
formula_5
of the scale-space representation formula_1 with a coordinate transformation (a rotation) applied to local directional derivative operators,
formula_6
where p and q are coordinates of the rotated coordinate system.
It can be shown that the mixed derivative formula_7 in the transformed coordinate system is zero if we choose
formula_8, formula_9.
Then, a formal differential geometric definition of the ridges of formula_0 at a fixed scale formula_10 can be expressed as the set of points that satisfy
formula_11
Correspondingly, the valleys of formula_0 at scale formula_10 are the set of points
formula_12
In terms of a formula_13 coordinate system with the formula_14 direction parallel to the image gradient
formula_15
where
formula_16
it can be shown that this ridge and valley definition can instead be equivalently written as
formula_17
where
formula_18
formula_19
formula_20
and the sign of formula_21 determines the polarity; formula_22 for ridges and formula_23 for valleys.
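A minimal computational sketch of this fixed-scale definition is given below (Python with NumPy/SciPy; the synthetic test image and the scale value are illustrative choices). It computes the Hessian of the scale-space representation with Gaussian derivative filters and classifies points by the eigenvalue conditions; locating the zero-crossings of the first-order derivative in the p-direction is omitted for brevity:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(image, t):
    """Eigenvalues (Lpp, Lqq) of the scale-space Hessian at scale t (variance),
    computed with Gaussian derivative filters of standard deviation sqrt(t)."""
    sigma = np.sqrt(t)
    L00 = gaussian_filter(image, sigma, order=(2, 0))  # second derivative, axis 0
    L01 = gaussian_filter(image, sigma, order=(1, 1))  # mixed derivative
    L11 = gaussian_filter(image, sigma, order=(0, 2))  # second derivative, axis 1
    trace = L00 + L11
    disc = np.sqrt((L00 - L11) ** 2 + 4.0 * L01 ** 2)
    return 0.5 * (trace - disc), 0.5 * (trace + disc)  # Lpp <= Lqq pointwise

# Illustrative test image: a horizontal bright ridge about 4 pixels wide.
rows = np.arange(64)[:, None] * np.ones((1, 64))
image = np.exp(-((rows - 32.0) ** 2) / (2.0 * 4.0 ** 2))

Lpp, Lqq = hessian_eigenvalues(image, t=16.0)
# Pointwise ridge conditions Lpp <= 0 and |Lpp| >= |Lqq| (bright ridges).
mask = (Lpp <= 0) & (np.abs(Lpp) >= np.abs(Lqq))
print(mask[:, 32].nonzero()[0])  # rows near 32, the ridge centre
```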
Computation of variable scale ridges from two-dimensional images.
A main problem with the fixed scale ridge definition presented above is that it can be very sensitive to the choice of the scale level. Experiments show that the scale parameter of the Gaussian pre-smoothing kernel must be carefully tuned to the width of the ridge structure in the image domain, in order for the ridge detector to produce a connected curve reflecting the underlying image structures. To handle this problem in the absence of prior information, the notion of "scale-space ridges" has been introduced, which treats the scale parameter as an inherent property of the ridge definition and allows the scale levels to vary along a scale-space ridge. Moreover, the concept of a scale-space ridge also allows the scale parameter to be automatically tuned to the width of the ridge structures in the image domain, in fact as a consequence of a well-stated definition. In the literature, a number of different approaches have been proposed based on this idea.
Let formula_24 denote a measure of ridge strength (to be specified below). Then, for a two-dimensional image, a scale-space ridge is the set of points that satisfy
formula_25
where formula_10 is the scale parameter in the scale-space representation. Similarly, a "scale-space valley" is the set of points that satisfy
formula_26
An immediate consequence of this definition is that for a two-dimensional image the concept of scale-space ridges sweeps out a set of one-dimensional curves in the three-dimensional scale-space, where the scale parameter is allowed to vary along the scale-space ridge (or the scale-space valley). The ridge descriptor in the image domain will then be a projection of this three-dimensional curve into the two-dimensional image plane, where the attribute scale information at every ridge point can be used as a natural estimate of the width of the ridge structure in the image domain in a neighbourhood of that point.
In the literature, various measures of ridge strength have been proposed. When Lindeberg (1996, 1998) coined the term scale-space ridge, he considered three measures of ridge strength: the main principal curvature
formula_27
expressed in terms of "formula_28-normalized derivatives" with
formula_29;
the square of the formula_28-normalized square eigenvalue difference
formula_30
and the square of the formula_28-normalized eigenvalue difference
formula_31
The notion of formula_28-normalized derivatives is essential here, since it allows the ridge and valley detector algorithms to be calibrated properly. By requiring that for a one-dimensional Gaussian ridge embedded in two (or three) dimensions the detection scale should be equal to the width of the ridge structure when measured in units of length (a requirement of a match between the size of the detection filter and the image structure it responds to), it follows that one should choose formula_32. Out of these three measures of ridge strength, the first entity formula_33 is a general purpose ridge strength measure with many applications such as blood vessel detection and road extraction. Nevertheless, the entity formula_34 has been used in applications such as fingerprint enhancement, real-time hand tracking and gesture recognition as well as for modelling local image statistics for detecting and tracking humans in images and video.
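To illustrate the scale-selection behaviour (a self-contained Python sketch; the test image and the sampled scales are illustrative, while the choice γ = 3/4 follows the calibration argument above), the γ-normalized main principal curvature can be evaluated over a range of scales; for a Gaussian ridge of variance 16 the response at the ridge centre peaks at t = 16:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gamma_norm_ridge_strength(image, t, gamma=0.75):
    """Gamma-normalized main principal curvature t**gamma * |Lpp| at scale t
    (variance), with Lpp the smaller Hessian eigenvalue; bright ridges only."""
    s = np.sqrt(t)
    L00 = gaussian_filter(image, s, order=(2, 0))
    L01 = gaussian_filter(image, s, order=(1, 1))
    L11 = gaussian_filter(image, s, order=(0, 2))
    Lpp = 0.5 * ((L00 + L11) - np.sqrt((L00 - L11) ** 2 + 4.0 * L01 ** 2))
    return (t ** gamma) * np.abs(np.minimum(Lpp, 0.0))

# Gaussian ridge of variance 16 (standard deviation 4 pixels) along axis 1.
rows = np.arange(64)[:, None] * np.ones((1, 64))
image = np.exp(-((rows - 32.0) ** 2) / (2.0 * 16.0))

# Scale selection: the response at the ridge centre peaks at t = 16, i.e.
# the selected scale matches the ridge width, as predicted for gamma = 3/4.
for t in [4.0, 8.0, 16.0, 32.0, 64.0]:
    print(t, gamma_norm_ridge_strength(image, t)[32, 32])
```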
There are also other closely related ridge definitions that make use of normalized derivatives with the implicit assumption of formula_35. When detecting ridges with formula_35, however, the detection scale will be twice as large as for formula_32, resulting in more shape distortions and a lower ability to capture ridges and valleys with nearby interfering image structures in the image domain.
History.
The notion of ridges and valleys in digital images was introduced by Haralick in 1983 and by Crowley concerning difference of Gaussians pyramids in 1984. The application of ridge descriptors to medical image analysis has been extensively studied by Pizer and his co-workers resulting in their notion of M-reps. Ridge detection has also been furthered by Lindeberg with the introduction of formula_28-normalized derivatives and scale-space ridges defined from local maximization of the appropriately normalized main principal curvature of the Hessian matrix (or other measures of ridge strength) over space and over scale. These notions have later been developed with application to road extraction by Steger et al. and to blood vessel segmentation by Frangi et al. as well as to the detection of curvilinear and tubular structures by Sato et al. and Krissian et al. A review of several of the classical ridge definitions at a fixed scale including relations between them has been given by Koenderink and van Doorn. A review of vessel extraction techniques has been presented by Kirbas and Quek.
Definition of ridges and valleys in N dimensions.
In its broadest sense, the notion of ridge generalizes the idea of a local maximum of a real-valued function. A point formula_36 in the domain of a function formula_37 is a local maximum of the function if there is a distance formula_38 with the property that if formula_39 is within formula_40 units of formula_36, then formula_41. It is well known that critical points, of which local maxima are just one type, are isolated points in a function's domain in all but the most unusual situations ("i.e.", the nongeneric cases).
Consider relaxing the condition that formula_41 for formula_39 in an entire neighborhood of formula_36 slightly, requiring only that this hold on an formula_42 dimensional subset. Presumably this relaxation allows the set of points which satisfy the criteria, which we will call the ridge, to have a single degree of freedom, at least in the generic case. This means that the set of ridge points will form a 1-dimensional locus, or a ridge curve. Notice that the above can be modified to generalize the idea to local minima and result in what one might call 1-dimensional valley curves.
The following ridge definition follows the book by Eberly and can be seen as a generalization of some of the abovementioned ridge definitions. Let formula_43 be an open set, and formula_44 be smooth. Let formula_45. Let formula_46 be the gradient of formula_47 at formula_36, and let formula_48 be the formula_49 Hessian matrix of formula_47 at formula_36. Let formula_50 be the formula_51 ordered eigenvalues of formula_48 and let formula_52 be a unit eigenvector in the eigenspace for formula_53. (For this, one should assume that all the eigenvalues are distinct.)
The point formula_36 is a point on the 1-dimensional ridge of formula_47 if the following conditions hold: formula_54, and formula_55 for formula_56.
This makes precise the concept that formula_47 restricted to "this particular" formula_42-dimensional subspace has a local maximum at formula_36.
This definition naturally generalizes to the "k"-dimensional ridge as follows: the point formula_36 is a point on the "k"-dimensional ridge of formula_47 if the following conditions hold: formula_57, and formula_55 for formula_58.
In many ways, these definitions naturally generalize that of a local maximum of a function. Properties of maximal convexity ridges are put on a solid mathematical footing by Damon and Miller. Their properties in one-parameter families was established by Keller.
Maximal scale ridge.
The following definition can be traced to Fritsch who was interested in extracting geometric information about figures in two dimensional greyscale images. Fritsch filtered his image with a "medialness" filter that gave him information analogous to "distant to the boundary" data in scale-space. Ridges of this image, once projected to the original image, were to be analogous to a shape skeleton ("e.g.", the Blum medial axis) of the original image.
What follows is a definition for the maximal scale ridge of a function of three variables, one of which is a "scale" parameter. One thing that we want to be true in this definition is, if formula_59 is a point on this ridge, then the value of the function at the point is maximal in the scale dimension. Let formula_60 be a smooth differentiable function on formula_61. Then formula_59 is a point on the maximal scale ridge if and only if formula_62 and formula_63 (the function value is maximal in the scale direction), and formula_64 and formula_65 (the point is a spatial ridge point).
Relations between edge detection and ridge detection.
The purpose of ridge detection is usually to capture the major axis of symmetry of an elongated object, whereas the purpose of edge detection is usually to capture the boundary of the object. However, some literature on edge detection erroneously includes the notion of ridges into the concept of edges, which confuses the situation.
In terms of definitions, there is a close connection between edge detectors and ridge detectors. With the formulation of non-maximum suppression as given by Canny, it holds that edges are defined as the points where the gradient magnitude assumes a local maximum in the gradient direction. Following a differential geometric way of expressing this definition, we can in the above-mentioned formula_13-coordinate system state that the gradient magnitude of the scale-space representation, which is equal to the first-order directional derivative in the formula_14-direction formula_66, should have its first order directional derivative in the formula_14-direction equal to zero
formula_67
while the second-order directional derivative in the formula_14-direction of formula_66 should be negative, i.e.,
formula_68.
Written out as an explicit expression in terms of local partial derivatives formula_69, formula_70 ... formula_71, this edge definition can be expressed as the zero-crossing curves of the differential invariant
formula_72
that satisfy a sign-condition on the following differential invariant
formula_73
(see the article on edge detection for more information). Notably, the edges obtained in this way are the ridges of the gradient magnitude.
|
[
{
"math_id": 0,
"text": "f(x, y)"
},
{
"math_id": 1,
"text": "L"
},
{
"math_id": 2,
"text": "g(x, y, t) = \\frac{1}{2 \\pi t} e^{-(x^2+y^2)/2t}"
},
{
"math_id": 3,
"text": "L_{pp}"
},
{
"math_id": 4,
"text": "L_{qq}"
},
{
"math_id": 5,
"text": "H = \\begin{bmatrix}\nL_{xx} & L_{xy} \\\\ \nL_{xy} & L_{yy}\n\\end{bmatrix}"
},
{
"math_id": 6,
"text": "\\partial_p = \\sin \\beta \\partial_x - \\cos \\beta \\partial_y, \\partial_q = \\cos \\beta \\partial_x + \\sin \\beta \\partial_y "
},
{
"math_id": 7,
"text": "L_{pq}"
},
{
"math_id": 8,
"text": "\\cos \\beta = \\sqrt{\\frac{1}{2} \\left( 1 + \\frac{L_{xx}-L_{yy}}{\\sqrt{(L_{xx}-L_{yy})^2 + 4 L_{xy}^2}} \\right)}"
},
{
"math_id": 9,
"text": " \\sin \\beta = \\sgn(L_{xy}) \\sqrt{\\frac{1}{2} \\left( 1 - \\frac{L_{xx}-L_{yy}}{\\sqrt{(L_{xx}-L_{yy})^2 + 4 L_{xy}^2}} \\right)} "
},
{
"math_id": 10,
"text": "t"
},
{
"math_id": 11,
"text": "L_{p} = 0, L_{pp} \\leq 0, |L_{pp}| \\geq |L_{qq}|."
},
{
"math_id": 12,
"text": "L_{q} = 0, L_{qq} \\geq 0, |L_{qq}| \\geq |L_{pp}|."
},
{
"math_id": 13,
"text": "(u, v)"
},
{
"math_id": 14,
"text": "v"
},
{
"math_id": 15,
"text": "\\partial_u = \\sin \\alpha \\partial_x - \\cos \\alpha \\partial_y, \\partial_v = \\cos \\alpha \\partial_x + \\sin \\alpha \\partial_y "
},
{
"math_id": 16,
"text": "\\cos \\alpha = \\frac{L_x}{\\sqrt{L_x^2 + L_y^2}}, \\sin \\alpha = \\frac{L_y}{\\sqrt{L_x^2 + L_y^2}} "
},
{
"math_id": 17,
"text": " L_{uv} = 0, L_{uu}^2 - L_{vv}^2 \\geq 0 "
},
{
"math_id": 18,
"text": "L_v^2 L_{uu} = L_x^2 L_{yy} - 2 L_x L_y L_{xy} + L_y^2 L_{xx},"
},
{
"math_id": 19,
"text": "L_v^2 L_{uv} = L_x L_y (L_{xx} - L_{yy}) - (L_x^2 - L_y^2) L_{xy}, "
},
{
"math_id": 20,
"text": "L_v^2 L_{vv} = L_x^2 L_{xx} + 2 L_x L_y L_{xy} + L_y^2 L_{yy} "
},
{
"math_id": 21,
"text": "L_{uu}"
},
{
"math_id": 22,
"text": "L_{uu}<0"
},
{
"math_id": 23,
"text": "L_{uu}>0"
},
{
"math_id": 24,
"text": "R(x, y, t)"
},
{
"math_id": 25,
"text": "L_{p} = 0, L_{pp} \\leq 0, \\partial_t(R) = 0, \\partial_{tt}(R) \\leq 0,"
},
{
"math_id": 26,
"text": "L_{q} = 0, L_{qq} \\geq 0, \\partial_t(R) = 0, \\partial_{tt}(R) \\leq 0."
},
{
"math_id": 27,
"text": "L_{pp, \\gamma-norm} = \\frac{t^{\\gamma}}{2} \\left( L_{xx}+L_{yy} - \\sqrt{(L_{xx}-L_{yy})^2 + 4 L_{xy}^2} \\right)"
},
{
"math_id": 28,
"text": "\\gamma"
},
{
"math_id": 29,
"text": "\\partial_{\\xi} = t^{\\gamma/2} \\partial_x, \\partial_{\\eta} = t^{\\gamma/2} \\partial_y"
},
{
"math_id": 30,
"text": "N_{\\gamma-norm} = \\left( L_{pp, \\gamma-norm}^2 - L_{qq, \\gamma-norm}^2 \\right)^2 = t^{4 \\gamma} (L_{xx}+L_{yy})^2 \\left( (L_{xx}-L_{yy})^2 + 4 L_{xy}^2 \\right). "
},
{
"math_id": 31,
"text": "A_{\\gamma-norm} = \\left( L_{pp, \\gamma-norm} - L_{qq, \\gamma-norm} \\right)^2 = t^{2 \\gamma} \\left( (L_{xx}-L_{yy})^2 + 4 L_{xy}^2 \\right). "
},
{
"math_id": 32,
"text": "\\gamma = 3/4"
},
{
"math_id": 33,
"text": "L_{pp, \\gamma-norm}"
},
{
"math_id": 34,
"text": "A_{\\gamma-norm}"
},
{
"math_id": 35,
"text": "\\gamma = 1"
},
{
"math_id": 36,
"text": "\\mathbf{x}_0"
},
{
"math_id": 37,
"text": "f:\\mathbb{R}^n \\rightarrow \\mathbb{R}"
},
{
"math_id": 38,
"text": "\\delta>0"
},
{
"math_id": 39,
"text": "\\mathbf{x}"
},
{
"math_id": 40,
"text": "\\delta"
},
{
"math_id": 41,
"text": "f(\\mathbf{x}) < f(\\mathbf{x}_0)"
},
{
"math_id": 42,
"text": "n-1"
},
{
"math_id": 43,
"text": "U \\subset \\mathbb{R}^n"
},
{
"math_id": 44,
"text": "f:U \\rightarrow \\mathbb{R}"
},
{
"math_id": 45,
"text": "\\mathbf{x}_0 \\in U"
},
{
"math_id": 46,
"text": "\\nabla_{\\mathbf{x}_0}f"
},
{
"math_id": 47,
"text": "f"
},
{
"math_id": 48,
"text": "H_{\\mathbf{x}_0}(f)"
},
{
"math_id": 49,
"text": "n \\times n"
},
{
"math_id": 50,
"text": "\\lambda_1 \\leq \\lambda_2 \\leq \\cdots \\leq \\lambda_n"
},
{
"math_id": 51,
"text": "n"
},
{
"math_id": 52,
"text": "\\mathbf{e}_i"
},
{
"math_id": 53,
"text": "\\lambda_i"
},
{
"math_id": 54,
"text": "\\lambda_{n-1}<0"
},
{
"math_id": 55,
"text": "\\nabla_{\\mathbf{x}_0} f \\cdot \\mathbf{e}_i=0"
},
{
"math_id": 56,
"text": "i=1, 2, \\ldots, n-1"
},
{
"math_id": 57,
"text": "\\lambda_{n-k}<0"
},
{
"math_id": 58,
"text": "i=1, 2, \\ldots, n-k"
},
{
"math_id": 59,
"text": "(\\mathbf{x},\\sigma)"
},
{
"math_id": 60,
"text": "f(\\mathbf{x},\\sigma)"
},
{
"math_id": 61,
"text": "U \\subset \\mathbb{R}^2 \\times \\mathbb{R}_{+}"
},
{
"math_id": 62,
"text": "\\frac{\\partial f}{\\partial \\sigma}=0"
},
{
"math_id": 63,
"text": "\\frac{\\partial^2 f}{\\partial \\sigma^2}<0"
},
{
"math_id": 64,
"text": "\\nabla f \\cdot \\mathbf{e}_1=0"
},
{
"math_id": 65,
"text": "\\mathbf{e}_1^t H(f) \\mathbf{e}_1 <0"
},
{
"math_id": 66,
"text": "L_v"
},
{
"math_id": 67,
"text": "\\partial_v(L_v) = 0"
},
{
"math_id": 68,
"text": "\\partial_{vv}(L_v) \\leq 0"
},
{
"math_id": 69,
"text": "L_x"
},
{
"math_id": 70,
"text": "L_y"
},
{
"math_id": 71,
"text": "L_{yyy}"
},
{
"math_id": 72,
"text": "L_v^2 L_{vv} = L_x^2 \\, L_{xx} + 2 \\, L_x \\, L_y \\, L_{xy} + L_y^2 \\, L_{yy} = 0,"
},
{
"math_id": 73,
"text": "L_v^3 L_{vvv} = L_x^3 \\, L_{xxx} + 3 \\, L_x^2 \\, L_y \\, L_{xxy} + 3 \\, L_x \\, L_y^2 \\, L_{xyy} + L_y^3 \\, L_{yyy} \\leq 0"
}
] |
https://en.wikipedia.org/wiki?curid=6185898
|
61860066
|
Kieka Mynhardt
|
South African and Canadian mathematician
Christina Magdalena (Kieka) Mynhardt (née Steyn; born 1953) is a South African-born Canadian mathematician known for her work on dominating sets in graph theory, including domination versions of the eight queens puzzle. She is a professor of mathematics and statistics at the University of Victoria in Canada.
Education and career.
Mynhardt was born in Cape Town, and was a student at the Hoërskool Lichtenburg. She completed her Ph.D. at Rand Afrikaans University (now incorporated into the University of Johannesburg) in 1979, supervised by Izak Broere. Her dissertation, "The formula_0-constructability of graphs", gave a conjectured construction for the planar graphs by repeatedly adding vertices with prescribed neighborhoods.
She became a faculty member at the University of Pretoria and then the University of South Africa before moving to the University of Victoria.
Recognition.
In 1995, Mynhardt was selected as one of the founding members of the Academy of Science of South Africa.
She was a 2005 recipient of the Dignitas Award of the University of Johannesburg Alumni.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathcal{G}"
}
] |
https://en.wikipedia.org/wiki?curid=61860066
|
61860767
|
Hafnium–tungsten dating
|
Geochronological radiometric dating method using radioactive decay of hafnium-182
Hafnium–tungsten dating is a geochronological radiometric dating method utilizing the radioactive decay system of hafnium-182 to tungsten-182. The half-life of the system is 8.9 million years. Today hafnium-182 is an extinct radionuclide, but the hafnium–tungsten radioactive system is useful in studies of the early Solar system since hafnium is lithophilic while tungsten is moderately siderophilic, which allows the system to be used to date the differentiation of a planet's core. It is also useful in determining the formation times of the parent bodies of iron meteorites.
The use of the hafnium-tungsten system as a chronometer for the early Solar system was suggested in the 1980s, but did not come into widespread use until the mid-1990s when the development of multi-collector inductively coupled plasma mass spectrometry enabled the use of samples with low concentrations of tungsten.
Basic principle.
The radioactive system behind hafnium–tungsten dating is a two-stage decay as follows:
182Hf → 182Ta + e− + ν̄e
182Ta → 182W + e− + ν̄e
The first decay has a half-life of 8.9 million years, while the second has a half-life of only 114 days, such that the intermediate nuclide tantalum-182 (182Ta) can effectively be ignored.
Since hafnium-182 is an extinct radionuclide, hafnium–tungsten chronometry is performed by examining the abundance of tungsten-182 relative to other stable isotopes of tungsten, of which there are effectively five in total, including the extremely long-lived isotope tungsten-180, which has a half-life much longer than the current age of the universe.
The abundance of tungsten-182 can be influenced by processes other than the decay of hafnium-182, but the existence of a large number of stable isotopes is very helpful for disentangling variations in tungsten-182 due to a different cause. For example, while 182W, 183W, 184W and 186W are all produced by the r- and s-processes, the rare isotope tungsten-180 is only produced by the p-process. Variations in tungsten isotopes caused by r- and s-process nucleosynthetic contributions also result in correlated changes in the ratios 182W/184W and 183W/184W, which means that the 183W/184W ratio can be used to quantify how much of the tungsten-182 variation is due to nucleosynthetic contributions.
The influence of cosmic rays is more difficult to correct for since cosmic ray interactions affect the abundance of tungsten-182 much more than any of the other tungsten isotopes. Nonetheless, cosmic ray effects can be corrected for by examining other isotope systems such as platinum, osmium or the stable isotopes of hafnium, or simply by taking samples from the interior that have not been exposed to cosmic rays, though the latter requires large samples.
Tungsten isotopic data are usually plotted in terms of ε182W and ε183W, which represent deviations in the ratios 182W/184W and 183W/184W in parts per 10,000 relative to terrestrial standards. Since Earth is differentiated, the crust and mantle of Earth are enriched in tungsten-182 relative to the initial composition of the Solar system. Undifferentiated chondritic meteorites have ε182W = −1.9 relative to Earth, which is extrapolated to give a value of −3.45 for the initial ε182W of the Solar system.
Dating planetary core formation.
A primordial planet is undifferentiated, meaning that it is not layered according to density (with the densest material being towards the interior of the planet). When a planet undergoes differentiation the dense materials, particularly iron, separate from lighter components and sink to the interior forming the core of the planet. If this process took place relatively early in a planet's history, hafnium-182 would not have sufficient time to decay to tungsten-182. Since hafnium is a lithophile element the (undecayed) hafnium-182 would remain in the mantle (i.e. the outer layers of the planet). Then, after some time, the hafnium-182 would decay to tungsten-182 leaving an excess of tungsten-182 in the mantle. On the other hand, if differentiation occurred later in a planet's history, then most of the hafnium-182 would have decayed to tungsten-182 before differentiation began. Being moderately siderophilic, much of the tungsten-182 would sink towards the interior of the planet along with iron. In this scenario, not much tungsten-182 would subsequently be present in the outer layers of the planet. As such, by looking at how much tungsten-182 is present in the outer layers of a planet, relative to other isotopes of tungsten, the time of differentiation can be quantified.
Model ages.
If we have a sample from the mantle (or core) of a body and want to calculate a core formation age from the tungsten-182 abundance we need to also know the composition of the bulk planet. Since we do not have samples from the core of Earth (or any other intact planet) the composition of chondritic meteorites is generally substituted for that of the bulk planet. Hafnium and tungsten are both refractory elements so there is not expected to be any fractionation between hafnium and tungsten due to heating of the planet during or after formation. A model age for the time of core formation can then be calculated using the equation
formula_0,
where formula_1 is the decay constant for hafnium-182 (0.078±0.002 Ma−1), the ε182W values are those of the sample, chondritic meteorites (taken to represent the bulk planet) and the Solar System Initial value, and formula_2 accounts for any differences in the general abundance of hafnium between the sample and chondritic meteorites,
formula_3.
It is important to note that this equation assumes that core formation is instantaneous. This can be a reasonable assumption for small bodies, such as the parent bodies of iron meteorites, but is not true for large bodies like Earth, whose accretion likely took many millions of years. For such bodies, models that treat core formation as a continuous process are more realistic and should be used.
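As a minimal numerical sketch of the instantaneous model-age equation above (an added illustration, not a standard implementation; the defaults are the chondritic and Solar-system-initial ε182W values quoted earlier, and the sample values in the example are hypothetical):

```python
import math

def hf_w_model_age(eps_sample, f_hf_w, eps_chondrite=-1.9,
                   eps_ssi=-3.45, decay_const=0.078):
    """Instantaneous core-formation model age, in Ma after the start of the
    Solar system: t = (1/lambda) ln((eps_ch - eps_ssi) f / (eps_sample - eps_ch)).
    decay_const is in 1/Ma; the default epsilon values are those quoted above."""
    return math.log((eps_chondrite - eps_ssi) * f_hf_w
                    / (eps_sample - eps_chondrite)) / decay_const

# Hypothetical mantle sample with eps182W = +2.0 and Hf/W fractionation f = 15:
print(round(hf_w_model_age(2.0, 15.0), 1))  # ~22.9 Ma
```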
Core formation times for Solar system bodies.
The method of hafnium-tungsten dating has been applied to many samples from Solar system bodies and used to provide estimates for the date of core formation. For iron meteorites hafnium-tungsten dating yields ages ranging from less than a million years after the formation of the first solids (calcium-aluminium-rich inclusions, usually called CAIs) to around 3 million years for different meteorite groups. While chondritic meteorites are not differentiated as a whole, hafnium-tungsten dating can still be useful for constraining formation ages by applying it to smaller melt regions in which metals and silicates have separated. For the very well studied carbonaceous chondrite Allende this gives a formation age of around 2.2 million years after the formation of CAIs. Martian meteorites have been examined and indicate that Mars may have been fully formed within 10 million years of the formation of CAIs, which has been used to suggest that Mars is a primordial planetary embryo. For Earth, models of accretion and core formation are strongly dependent on how much giant impacts, like that presumed to have formed the Moon, re-mixed the core and mantle, yielding dates of between 30 and 100 million years after CAIs depending on assumptions.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "t=\\frac{1}{\\lambda}\\ln \\left ( \\frac{(\\epsilon^{182}{\\rm W_{chondrite}}-\\epsilon^{182}{\\rm W_{SSI}})f^{\\rm Hf/W}}{\\epsilon^{182}{\\rm W_{sample}}-\\epsilon^{182}{\\rm W_{chondrite}}} \\right )"
},
{
"math_id": 1,
"text": "\\lambda"
},
{
"math_id": 2,
"text": "f^{Hf/W}"
},
{
"math_id": 3,
"text": "f^{Hf/W}=\\frac{(^{180}{\\rm Hf}/^{184}{\\rm W})_{\\rm sample}}{(^{180}{\\rm Hf}/^{184}{\\rm W})_{\\rm chondrite}}-1"
}
] |
https://en.wikipedia.org/wiki?curid=61860767
|
61861688
|
Riesz projector
|
In mathematics, or more specifically in spectral theory, the Riesz projector is the projector onto the eigenspace corresponding to a particular eigenvalue of an operator (or, more generally, a projector onto an invariant subspace corresponding to an isolated part of the spectrum). It was introduced by Frigyes Riesz in 1912.
Definition.
Let formula_0 be a closed linear operator in the Banach space formula_1. Let formula_2 be a simple or composite rectifiable contour, which encloses some region formula_3 and lies entirely within the resolvent set formula_4 (formula_5) of the operator formula_0. Assuming that the contour formula_2 has a positive orientation with respect to the region formula_3, the Riesz projector corresponding to formula_2 is defined by
formula_6
here formula_7 is the identity operator in formula_1.
If formula_8 is the only point of the spectrum of formula_0 in formula_3, then formula_9 is denoted by formula_10.
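For a finite-dimensional operator (a matrix), the defining contour integral can be approximated numerically. The following sketch, an illustration assuming NumPy rather than anything from the source, discretizes a circular contour with the trapezoidal rule and checks the projector property:

```python
import numpy as np

def riesz_projector(A, center, radius, n=400):
    """Approximate P = -(1/(2 pi i)) * contour integral of (A - z I)^{-1} dz
    over the positively oriented circle |z - center| = radius, using the
    trapezoidal rule (spectrally accurate for this periodic integrand)."""
    m = A.shape[0]
    I = np.eye(m)
    P = np.zeros((m, m), dtype=complex)
    for k in range(n):
        w = np.exp(2j * np.pi * k / n)
        z = center + radius * w
        dz = 2j * np.pi * radius * w / n          # z'(theta) * dtheta
        P += np.linalg.solve(A - z * I, I) * dz   # resolvent (A - zI)^{-1} dz
    return -P / (2j * np.pi)

A = np.diag([1.0, 5.0])
P = riesz_projector(A, center=1.0, radius=1.0)  # circle encloses eigenvalue 1 only
assert np.allclose(P, np.diag([1.0, 0.0]))      # projector onto that eigenspace
assert np.allclose(P @ P, P)                    # idempotent
```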
Properties.
The operator formula_9 is a projector which commutes with formula_0, and hence in the decomposition
formula_11
both terms formula_12 and formula_13 are invariant subspaces of the operator formula_0.
Moreover, if formula_14 and formula_15 are two different contours having the properties indicated above, and the regions formula_16 and formula_17 have no points in common, then the projectors corresponding to them are mutually orthogonal:
formula_18
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "\\mathfrak{B}"
},
{
"math_id": 2,
"text": "\\Gamma"
},
{
"math_id": 3,
"text": "G_\\Gamma"
},
{
"math_id": 4,
"text": "\\rho(A)"
},
{
"math_id": 5,
"text": "\\Gamma\\subset\\rho(A)"
},
{
"math_id": 6,
"text": "\nP_\\Gamma=-\\frac{1}{2\\pi \\mathrm{i}}\\oint_\\Gamma(A-z I_{\\mathfrak{B}})^{-1}\\,\\mathrm{d}z;\n"
},
{
"math_id": 7,
"text": "I_{\\mathfrak{B}}"
},
{
"math_id": 8,
"text": "\\lambda\\in\\sigma(A)"
},
{
"math_id": 9,
"text": "P_\\Gamma"
},
{
"math_id": 10,
"text": "P_\\lambda"
},
{
"math_id": 11,
"text": "\\mathfrak{B}=\\mathfrak{L}_\\Gamma\\oplus\\mathfrak{N}_\\Gamma\n\\qquad\n\\mathfrak{L}_\\Gamma=P_\\Gamma\\mathfrak{B},\n\\quad\n\\mathfrak{N}_\\Gamma=(I_{\\mathfrak{B}}-P_\\Gamma)\\mathfrak{B},\n"
},
{
"math_id": 12,
"text": "\\mathfrak{L}_\\Gamma"
},
{
"math_id": 13,
"text": "\\mathfrak{N}_\\Gamma"
},
{
"math_id": 14,
"text": "\\Gamma_1"
},
{
"math_id": 15,
"text": "\\Gamma_2"
},
{
"math_id": 16,
"text": "G_{\\Gamma_1}"
},
{
"math_id": 17,
"text": "G_{\\Gamma_2}"
},
{
"math_id": 18,
"text": "\nP_{\\Gamma_1}P_{\\Gamma_2}\n=\nP_{\\Gamma_2}P_{\\Gamma_1}=0.\n"
}
] |
https://en.wikipedia.org/wiki?curid=61861688
|
61866
|
Max Born
|
German-British physicist and mathematician (1882–1970)
Max Born (; 11 December 1882 – 5 January 1970) was a German-British physicist and mathematician who was instrumental in the development of quantum mechanics. He also made contributions to solid-state physics and optics and supervised the work of a number of notable physicists in the 1920s and 1930s. Born was awarded the 1954 Nobel Prize in Physics for his "fundamental research in quantum mechanics, especially in the statistical interpretation of the wave function".
Born entered the University of Göttingen in 1904, where he met the three renowned mathematicians Felix Klein, David Hilbert, and Hermann Minkowski. He wrote his PhD thesis on the subject of "Stability of Elastica in a Plane and Space", winning the university's Philosophy Faculty Prize. In 1905, he began researching special relativity with Minkowski, and subsequently wrote his habilitation thesis on the Thomson model of the atom. A chance meeting with Fritz Haber in Berlin in 1918 led to discussion of how an ionic compound is formed when a metal reacts with a halogen, which is today known as the Born–Haber cycle.
In World War I he was originally placed as a radio operator, but his specialist knowledge led to his being moved to research duties on sound ranging. In 1921 Born returned to Göttingen, where he arranged another chair for his long-time friend and colleague James Franck. Under Born, Göttingen became one of the world's foremost centres for physics. In 1925 Born and Werner Heisenberg formulated the matrix mechanics representation of quantum mechanics. The following year, he formulated the now-standard interpretation of the probability density function for ψ*ψ in the Schrödinger equation, for which he was awarded the Nobel Prize in 1954. His influence extended far beyond his own research. Max Delbrück, Siegfried Flügge, Friedrich Hund, Pascual Jordan, Maria Goeppert-Mayer, Lothar Wolfgang Nordheim, Robert Oppenheimer, and Victor Weisskopf all received their PhD degrees under Born at Göttingen, and his assistants included Enrico Fermi, Werner Heisenberg, Gerhard Herzberg, Friedrich Hund, Wolfgang Pauli, Léon Rosenfeld, Edward Teller, and Eugene Wigner.
In January 1933, the Nazi Party came to power in Germany, and Born, who was Jewish, was suspended from his professorship at the University of Göttingen. He emigrated to the United Kingdom, where he took a job at St John's College, Cambridge, and wrote a popular science book, "The Restless Universe", as well as "Atomic Physics", which soon became a standard textbook. In October 1936, he became the Tait Professor of Natural Philosophy at the University of Edinburgh, where, working with German-born assistants E. Walter Kellermann and Klaus Fuchs, he continued his research into physics. Born became a naturalised British subject on 31 August 1939, one day before World War II broke out in Europe. He remained in Edinburgh until 1952. He retired to Bad Pyrmont, in West Germany, and died in a hospital in Göttingen on 5 January 1970.
Early life.
Max Born was born on 11 December 1882 in Breslau (now Wrocław, Poland), which at the time of Born's birth was part of the Prussian Province of Silesia in the German Empire, to a family of Jewish descent. He was one of two children born to Gustav Born, an anatomist and embryologist, who was a professor of embryology at the University of Breslau, and his wife Margarethe (Gretchen) née Kauffmann, from a Silesian family of industrialists. She died when Max was four years old, on 29 August 1886. Max had a sister, Käthe, who was born in 1884, and a half-brother, Wolfgang, from his father's second marriage, to Bertha Lipstein. Wolfgang later became Professor of Art History at the City College of New York.
Initially educated at the König-Wilhelm-Gymnasium in Breslau, Born entered the University of Breslau in 1901. The German university system allowed students to move easily from one university to another, so he spent summer semesters at Heidelberg University in 1902 and the University of Zurich in 1903. Fellow students at Breslau, Otto Toeplitz and Ernst Hellinger, told Born about the University of Göttingen, and Born went there in April 1904. At Göttingen he found three renowned mathematicians: Felix Klein, David Hilbert and Hermann Minkowski. Very soon after his arrival, Born formed close ties to the latter two men. From the first class he took with Hilbert, Hilbert identified Born as having exceptional abilities and selected him as the lecture scribe, whose function was to write up the class notes for the students' mathematics reading room at the University of Göttingen. Being class scribe put Born into regular, invaluable contact with Hilbert. Hilbert became Born's mentor after selecting him to be the first to hold the unpaid, semi-official position of assistant. Born's introduction to Minkowski came through Born's stepmother, Bertha, as she knew Minkowski from dancing classes in Königsberg. The introduction netted Born invitations to the Minkowski household for Sunday dinners. In addition, while performing his duties as scribe and assistant, Born often saw Minkowski at Hilbert's house.
Born's relationship with Klein was more problematic. Born attended a seminar conducted by Klein and professors of applied mathematics, Carl Runge and Ludwig Prandtl, on the subject of elasticity. Although not particularly interested in the subject, Born was obliged to present a paper. He presented one in which, taking the simple case of a curved wire with both ends fixed, he used Hilbert's calculus of variations to determine the configuration that would minimise potential energy and therefore be the most stable. Klein was impressed, and invited Born to submit a thesis on the subject of "Stability of Elastica in a Plane and Space" – a subject near and dear to Klein – which Klein had arranged to be the subject for the prestigious annual Philosophy Faculty Prize offered by the university. Entries could also qualify as doctoral dissertations. Born responded by turning down the offer, as applied mathematics was not his preferred area of study. Klein was greatly offended.
Klein had the power to make or break academic careers, so Born felt compelled to atone by submitting an entry for the prize. Because Klein refused to supervise him, Born arranged for Carl Runge to be his supervisor. Woldemar Voigt and Karl Schwarzschild became his other examiners. Starting from his paper, Born developed the equations for the stability conditions. As he became more interested in the topic, he had an apparatus constructed that could test his predictions experimentally. On 13 June 1906, the rector announced that Born had won the prize. A month later, he passed his oral examination and was awarded his PhD in mathematics "magna cum laude".
On graduation, Born was obliged to perform his military service, which he had deferred while a student. He found himself drafted into the German army, and posted to the 2nd Guards Dragoons "Empress Alexandra of Russia", which was stationed in Berlin. His service was brief, as he was discharged early after an asthma attack in January 1907. He then travelled to England, where he was admitted to Gonville and Caius College, Cambridge, and studied physics for six months at the Cavendish Laboratory under J. J. Thomson, George Searle and Joseph Larmor. After Born returned to Germany, the Army re-inducted him, and he served with the elite 1st (Silesian) Life Cuirassiers "Great Elector" until he was again medically discharged after just six weeks' service. He then returned to Breslau, where he worked under the supervision of Otto Lummer and Ernst Pringsheim, hoping to do his habilitation in physics. A minor accident involving Born's black body experiment, a ruptured cooling water hose, and a flooded laboratory, led to Lummer telling him that he would never become a physicist.
In 1905, Albert Einstein published his paper "On the Electrodynamics of Moving Bodies" about special relativity. Born was intrigued, and began researching the subject. He was devastated to discover that Minkowski was also researching special relativity along the same lines, but when he wrote to Minkowski about his results, Minkowski asked him to return to Göttingen and do his habilitation there. Born accepted. Toeplitz helped Born brush up on his matrix algebra so he could work with the four-dimensional Minkowski space matrices used in the latter's project to reconcile relativity with electrodynamics. Born and Minkowski got along well, and their work made good progress, but Minkowski died suddenly of appendicitis on 12 January 1909. The mathematics students had Born speak on their behalf at the funeral.
A few weeks later, Born attempted to present their results at a meeting of the Göttingen Mathematics Society. He did not get far before he was publicly challenged by Klein and Max Abraham, who rejected relativity, forcing him to terminate the lecture. However, Hilbert and Runge were interested in Born's work, and, after some discussion with Born, they became convinced of the veracity of his results and persuaded him to give the lecture again. This time he was not interrupted, and Voigt offered to sponsor Born's habilitation thesis. Born subsequently published his talk as an article on "The Theory of the Rigid Electron in the Kinematics of the Principle of Relativity" (), which introduced the concept of Born rigidity. On 23 October Born presented his habilitation lecture on the Thomson model of the atom.
Career.
Berlin and Frankfurt.
Born settled in as a young academic at Göttingen as a Privatdozent. In Göttingen, Born stayed at a boarding house run by Sister Annie at Dahlmannstraße 17, known as El BoKaReBo. The name was derived from the first letters of the last names of its boarders: "El" for Ella Philipson (a medical student), "Bo" for Born and Hans Bolza (a physics student), "Ka" for Theodore von Kármán (a Privatdozent), and "Re" for Albrecht Renner (another medical student). A frequent visitor to the boarding house was Paul Peter Ewald, a doctoral student of Arnold Sommerfeld on loan to Hilbert at Göttingen as a special assistant for physics. Richard Courant, a mathematician and Privatdozent, called these people the "in group".
In 1912, Born met Hedwig (Hedi) Ehrenberg, the daughter of a Leipzig University law professor, and a friend of Carl Runge's daughter Iris. She was of Jewish background on her father's side, although he had become a practising Lutheran when he got married, as did Max's sister Käthe. Despite never practising his religion, Born refused to convert, and his wedding on 2 August 1913 was a garden ceremony. However, he was baptised as a Lutheran in March 1914 by the same pastor who had performed his wedding ceremony. Born regarded "religious professions and churches as a matter of no importance". His decision to be baptised was made partly in deference to his wife, and partly due to his desire to assimilate into German society. The marriage produced three children: two daughters, Irene, born in 1914, and Margarethe (Gritli), born in 1915, and a son, Gustav, born in 1921. Through marriage, Born is related to jurists Victor Ehrenberg, his father-in-law, and Rudolf von Jhering, his wife's maternal grandfather, as well as to philosopher and theologian Hans Ehrenberg, and is a great uncle of British comedian Ben Elton.
By the end of 1913, Born had published 27 papers, including important work on relativity and the dynamics of crystal lattices (three with Theodore von Kármán), which became a book. In 1914, he received a letter from Max Planck explaining that a new professor extraordinarius chair of theoretical physics had been created at the University of Berlin. The chair had been offered to Max von Laue, but he had turned it down. Born accepted. The First World War was now raging. Soon after arriving in Berlin in 1915, he enlisted in an Army signals unit. In October, he joined the Artillerie Prüfungskommission, the Army's Berlin-based artillery research and development organisation, under Rudolf Ladenburg, who had established a special unit dedicated to the new technology of sound ranging. In Berlin, Born formed a lifelong friendship with Einstein, who became a frequent visitor to Born's home. Within days of the armistice in November 1918, Planck had the Army release Born. A chance meeting with Fritz Haber that month led to discussion of the manner in which an ionic compound is formed when a metal reacts with a halogen, which is today known as the Born–Haber cycle.
Even before Born had taken up the chair in Berlin, von Laue had changed his mind, and decided that he wanted it after all. He arranged with Born and the faculties concerned for them to exchange jobs. In April 1919, Born became professor ordinarius and Director of the Institute of Theoretical Physics on the science faculty at the University of Frankfurt am Main. While there, he was approached by the University of Göttingen, which was looking for a replacement for Peter Debye as Director of the Physical Institute. "Theoretical physics," Einstein advised him, "will flourish wherever "you" happen to be; there is no other Born to be found in Germany today." In negotiating for the position with the education ministry, Born arranged for another chair, of experimental physics, at Göttingen for his long-time friend and colleague James Franck.
In 1919 Elisabeth Bormann joined the Institut für Theoretische Physik as his assistant. She developed the first atomic beams. Working with Born, Bormann was the first to measure the free path of atoms in gases and the size of molecules.
Göttingen.
For the 12 years Born and Franck were at the University of Göttingen (1921 to 1933), Born had a collaborator with shared views on basic scientific concepts—a benefit for teaching and research. Born's collaborative approach with experimental physicists was similar to that of Arnold Sommerfeld at the University of Munich, who was ordinarius professor of theoretical physics and Director of the Institute of Theoretical Physics—also a prime mover in the development of quantum theory. Born and Sommerfeld collaborated with experimental physicists to test and advance their theories. In 1922, when lecturing in the United States at the University of Wisconsin–Madison, Sommerfeld sent his student Werner Heisenberg to be Born's assistant. Heisenberg returned to Göttingen in 1923, where he completed his habilitation under Born in 1924, and became a Privatdozent at Göttingen.
In 1919 and 1920, Born grew displeased with the large number of objections raised against Einstein's relativity, and in the winter of 1919 he gave speeches in support of Einstein. Born was paid for these relativity speeches, which helped with expenses through a year of rapid inflation. The German-language speeches became a book, published in 1920, of which Einstein received the proofs before publication. A third edition was published in 1922 and an English translation in 1924. Born represented the speed of light as a function of curvature: "the velocity of light is much greater for some directions of the light ray than its ordinary value c, and other bodies can also attain much greater velocities."
In 1925, Born and Heisenberg formulated the matrix mechanics representation of quantum mechanics. On 9 July, Heisenberg gave Born a paper entitled "Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen" ("Quantum-Theoretical Re-interpretation of Kinematic and Mechanical Relations") to review, and submit for publication. In the paper, Heisenberg formulated quantum theory, avoiding the concrete, but unobservable, representations of electron orbits by using parameters such as transition probabilities for quantum jumps, which necessitated using two indexes corresponding to the initial and final states. When Born read the paper, he recognized the formulation as one which could be transcribed and extended to the systematic language of matrices, which he had learned from his study under Jakob Rosanes at Breslau University.
Up until this time, matrices were seldom used by physicists; they were considered to belong to the realm of pure mathematics. Gustav Mie had used them in a paper on electrodynamics in 1912, and Born had used them in his work on the lattice theory of crystals in 1921. While matrices were used in these cases, the algebra of matrices with their multiplication did not enter the picture as they did in the matrix formulation of quantum mechanics. With the help of his assistant and former student Pascual Jordan, Born began immediately to make a transcription and extension, and they submitted their results for publication; the paper was received for publication just 60 days after Heisenberg's paper.
formula_0
where "p" and "q" were matrices for location and momentum, and "I" is the identity matrix. The left hand side of the equation is not zero because matrix multiplication is not commutative. This formulation was entirely attributable to Born, who also established that all the elements not on the diagonal of the matrix were zero. Born considered that his paper with Jordan contained "the most important principles of quantum mechanics including its extension to electrodynamics." The paper put Heisenberg's approach on a solid mathematical basis.
Born was surprised to discover that Paul Dirac had been thinking along the same lines as Heisenberg. Soon, Wolfgang Pauli used the matrix method to calculate the energy values of the hydrogen atom and found that they agreed with the Bohr model. Another important contribution was made by Erwin Schrödinger, who looked at the problem using wave mechanics. This had a great deal of appeal to many at the time, as it offered the possibility of returning to deterministic classical physics. Born would have none of this, as it ran counter to facts determined by experiment. He formulated the now-standard interpretation of the probability density function for ψ*ψ in the Schrödinger equation, which he published in July 1926.
In a letter to Born on 4 December 1926, Einstein made his famous remark regarding quantum mechanics: <templatestyles src="Template:Blockquote/styles.css" />Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the 'old one'. I, at any rate, am convinced that "He" is not playing at dice.
This quotation is often paraphrased as 'God does not play dice'.
In 1928, Einstein nominated Heisenberg, Born, and Jordan for the Nobel Prize in Physics, but Heisenberg alone won the 1932 Prize "for the creation of quantum mechanics, the application of which has led to the discovery of the allotropic forms of hydrogen", while Schrödinger and Dirac shared the 1933 Prize "for the discovery of new productive forms of atomic theory". On 25 November 1933, Born received a letter from Heisenberg in which he said he had been delayed in writing due to a "bad conscience" that he alone had received the Prize "for work done in Göttingen in collaboration—you, Jordan and I." Heisenberg went on to say that Born and Jordan's contribution to quantum mechanics cannot be changed by "a wrong decision from the outside." In 1954, Heisenberg wrote an article honouring Planck for his insight in 1900, in which he credited Born and Jordan for the final mathematical formulation of matrix mechanics and Heisenberg went on to stress how great their contributions were to quantum mechanics, which were not "adequately acknowledged in the public eye."
Those who received their PhD degrees under Born at Göttingen included Max Delbrück, Siegfried Flügge, Friedrich Hund, Pascual Jordan, Maria Goeppert-Mayer, Lothar Wolfgang Nordheim, Robert Oppenheimer, and Victor Weisskopf. Born's assistants at the University of Göttingen's Institute for Theoretical Physics included Enrico Fermi, Werner Heisenberg, Gerhard Herzberg, Friedrich Hund, Pascual Jordan, Wolfgang Pauli, Léon Rosenfeld, Edward Teller, and Eugene Wigner. Walter Heitler became an assistant to Born in 1928, and completed his habilitation under him in 1929. Born not only recognised talent to work with him, but he "let his superstars stretch past him; to those less gifted, he patiently handed out respectable but doable assignments." Delbrück, and Goeppert-Mayer went on to be awarded Nobel Prizes.
Later life.
In January 1933, the Nazi Party came to power in Germany. In May, Born became one of six Jewish professors at Göttingen who were suspended with pay; Franck had already resigned. In twelve years they had built Göttingen into one of the world's foremost centres for physics. Born began looking for a new job, writing to Maria Göppert-Mayer at Johns Hopkins University and Rudi Ladenburg at Princeton University. He accepted an offer from St John's College, Cambridge. At Cambridge, he wrote a popular science book, "The Restless Universe", and a textbook, "Atomic Physics", that soon became a standard text, going through seven editions. His family soon settled into life in England, with his daughters Irene and Gritli becoming engaged to Welshman Brinley (Bryn) Newton-John and Englishman Maurice Pryce respectively. Born's granddaughter Olivia Newton-John was the daughter of Irene.
Born's position at Cambridge was only a temporary one, and his tenure at Göttingen was terminated in May 1935. He therefore accepted an offer from C. V. Raman to go to Bangalore in 1935. Born considered taking a permanent position there, but the Indian Institute of Science did not create an additional chair for him. In November 1935, the Born family had their German citizenship revoked, rendering them stateless. A few weeks later Göttingen cancelled Born's doctorate. Born considered an offer from Pyotr Kapitsa in Moscow, and started taking Russian lessons from Rudolf Peierls's Russian-born wife Genia. But then Charles Galton Darwin asked Born if he would consider becoming his successor as Tait Professor of Natural Philosophy at the University of Edinburgh, an offer that Born promptly accepted, assuming the chair in October 1936.
In Edinburgh, Born promoted the teaching of mathematical physics. He had two German assistants, E. Walter Kellermann and Klaus Fuchs, and one Scottish assistant, Robert Schlapp, and together they continued to investigate the mysterious behaviour of electrons. Born became a Fellow of the Royal Society of Edinburgh in 1937, and of the Royal Society of London in March 1939. During 1939, he got as many of his remaining friends and relatives still in Germany as he could out of the country, including his sister Käthe, in-laws Kurt and Marga, and the daughters of his friend Heinrich Rausch von Traubenberg. Hedi ran a domestic bureau, placing young Jewish women in jobs. Born received his certificate of naturalisation as a British subject on 31 August 1939, one day before the Second World War broke out in Europe.
Born remained at Edinburgh until he reached the retirement age of 70 in 1952. He retired to Bad Pyrmont, in West Germany, in 1954. In October, he received word that he was being awarded the Nobel Prize. His fellow physicists had never stopped nominating him. Franck and Fermi had nominated him in 1947 and 1948 for his work on crystal lattices, and over the years, he had also been nominated for his work on solid state physics, quantum mechanics and other topics. In 1954, he received the prize for "fundamental research in Quantum Mechanics, especially in the statistical interpretation of the wave function"—something that he had worked on alone. In his Nobel lecture he reflected on the philosophical implications of his work:<templatestyles src="Template:Blockquote/styles.css" />I believe that ideas such as absolute certitude, absolute exactness, final truth, etc. are figments of the imagination which should not be admissible in any field of science. On the other hand, any assertion of probability is either right or wrong from the standpoint of the theory on which it is based. This loosening of thinking ("Lockerung des Denkens") seems to me to be the greatest blessing which modern science has given to us. For the belief in a single truth and in being the possessor thereof is the root cause of all evil in the world.
In retirement, he continued scientific work, and produced new editions of his books. In 1955 he became one of signatories to the Russell-Einstein Manifesto. He died at age 87 in hospital in Göttingen on 5 January 1970, and is buried in the "Stadtfriedhof" there, in the same cemetery as Walther Nernst, Wilhelm Weber, Max von Laue, Otto Hahn, Max Planck, and David Hilbert.
Global policy.
He was one of the signatories of the agreement to convene a convention for drafting a world constitution. As a result, for the first time in human history, a World Constituent Assembly convened to draft and adopt a Constitution for the Federation of Earth.
Personal life.
Born's wife Hedwig (Hedi) Martha Ehrenberg (1891–1972) was a daughter of the jurist Victor Ehrenberg and Elise von Jhering (a daughter of the jurist Rudolf von Jhering). Born was survived by his wife Hedi and their children Irene, Gritli and Gustav. Singer and actress Olivia Newton-John was a daughter of Irene (1914–2003), while Gustav is the father of musician and academic Georgina Born and actor Max Born ("Fellini Satyricon"), who are thus also Max's grandchildren. His great-grandchildren include songwriter Brett Goldsmith, singer Tottie Goldsmith, racing car driver Emerson Newton-John, and singer Chloe Rose Lattanzi. Born helped his nephew, the architect Otto Königsberger (1908–1999), obtain a commission in Mysore State.
Bibliography.
During his life, Born wrote several semi-popular and technical books. His volumes on topics like atomic physics and optics were very well received. They are considered classics in their fields, and are still in print. The following is a chronological listing of his major works:
For a full list of his published papers, see HistCite. For his published works, see Published Works – Berlin-Brandenburgische Akademie der Wissenschaften Akademiebibliothek.
Citations.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " p q - q p = { h \\over 2 \\pi i } I "
}
] |
https://en.wikipedia.org/wiki?curid=61866
|
61869767
|
Dirichlet series inversion
|
In analytic number theory, a Dirichlet series, or Dirichlet generating function (DGF), of a sequence is a common way of understanding and summing arithmetic functions in a meaningful way. A little-known, or at least often forgotten, way of expressing formulas for arithmetic functions and their summatory functions is to perform an integral transform that inverts the operation of forming the DGF of a sequence. This inversion is analogous to performing an inverse Z-transform on the generating function of a sequence to express formulas for the series coefficients of a given ordinary generating function.
For now, we will use this page as a compendium of "oddities" and oft-forgotten facts about transforming and inverting Dirichlet series, DGFs, and relating the inversion of a DGF of a sequence to the sequence's summatory function. We also use the notation for coefficient extraction usually applied to formal generating functions in some complex variable, by denoting formula_0 for any positive integer formula_1, whenever
formula_2
denotes the DGF (or Dirichlet series) of "f" which is taken to be absolutely convergent whenever the real part of "s" is greater than the abscissa of absolute convergence, formula_3.
The relation of the Mellin transformation of the summatory function of a sequence to the DGF of a sequence provides us with a way of expressing arithmetic functions formula_4 such that formula_5, and the corresponding Dirichlet inverse functions, formula_6, by inversion formulas involving the summatory function, defined by
formula_7
In particular, provided that the DGF of some arithmetic function "f" has an analytic continuation to formula_8, we can express the Mellin transform of the summatory function of "f" by the continued DGF formula as
formula_9
It is often also convenient to express formulas for the summatory functions over the Dirichlet inverse function of "f" using this construction of a Mellin inversion type problem.
Preliminaries: Notation, conventions and known results on DGFs.
DGFs for Dirichlet inverse functions.
Recall that an arithmetic function is Dirichlet invertible, or has an inverse formula_6 with respect to Dirichlet convolution such that formula_10, or equivalently formula_11, if and only if formula_5. It is not difficult to prove that if formula_12 is the DGF of "f" and is absolutely convergent for all complex "s" satisfying formula_13, then the DGF of the Dirichlet inverse is given by formula_14 and is also absolutely convergent for all formula_13. The positive real formula_15 associated with each invertible arithmetic function "f" is called the abscissa of convergence.
We also see the following identities related to the Dirichlet inverse of some function "g" that does not vanish at one:
formula_16
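As a concrete complement, the Dirichlet inverse can be computed termwise from the defining recurrence. A minimal sketch in exact rational arithmetic, recovering the Moebius function as the inverse of the constant function 1 (the function name is illustrative):

```python
from fractions import Fraction

def dirichlet_inverse(f, N):
    """Compute f^{-1}(n) for n = 1..N from the recurrence (valid when f(1) != 0):
    f^{-1}(1) = 1/f(1),  f^{-1}(n) = -(1/f(1)) * sum_{d | n, d < n} f(n/d) f^{-1}(d)."""
    inv = {1: Fraction(1) / f(1)}
    for n in range(2, N + 1):
        s = sum(f(n // d) * inv[d] for d in range(1, n) if n % d == 0)
        inv[n] = -s / f(1)
    return inv

# The Dirichlet inverse of the constant function 1 is the Moebius function:
inv = dirichlet_inverse(lambda n: Fraction(1), 10)
print([int(inv[n]) for n in range(1, 11)])   # -> [1, -1, -1, 0, -1, 1, -1, 0, 0, 1]
```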
Summatory functions.
Using the same convention as in the statement of Perron's formula, we assume that the summatory function of a (Dirichlet invertible) arithmetic function formula_17 is defined for all real formula_18 according to the formula
formula_19
We know the following relation between the Mellin transform of the summatory function of "f" and the DGF of "f" whenever formula_13:
formula_20
Some examples of this relation include the following identities involving the Mertens function, or summatory function of the Moebius function, the prime zeta function and the prime-counting function, and the Riemann prime-counting function:
formula_21
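The first of these identities can be checked numerically: the Mertens function is constant between consecutive integers, so the integral can be evaluated exactly piece by piece. A minimal sketch, assuming mpmath for the zeta comparison (everything else is self-contained):

```python
import mpmath as mp

def mobius_upto(N):
    """Moebius function mu(1..N) via a linear sieve."""
    mu = [0] * (N + 1)
    mu[1] = 1
    primes, is_comp = [], [False] * (N + 1)
    for i in range(2, N + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > N:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0    # i*p has a squared prime factor
                break
            mu[i * p] = -mu[i]   # one additional distinct prime factor
    return mu

def mellin_of_mertens(s, N=20000):
    """Right-hand side of 1/zeta(s) = s * int_1^oo M(x) x^{-s-1} dx, summed
    exactly over [n, n+1), where M(x) is constant and
    s * int_n^{n+1} x^{-s-1} dx = n^{-s} - (n+1)^{-s}."""
    mu = mobius_upto(N)
    M, total = 0, mp.mpf(0)
    for n in range(1, N):
        M += mu[n]
        total += M * (mp.power(n, -s) - mp.power(n + 1, -s))
    return total

print(mellin_of_mertens(2), 1 / mp.zeta(2))   # both ~ 0.60793 = 6/pi^2
```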
Statements of the integral formula for Dirichlet inversion.
Classical integral formula.
For any "s" such that formula_22, we have that
formula_23
If we write the DGF of "f" according to the Mellin transform formula of the summatory function of "f", then the stated integral formula simply corresponds to a special case of Perron's formula. Another variant of the previous formula stated in Apostol's book provides an integral formula for an alternate sum in the following form for formula_24 and any real formula_25 where we denote formula_26:
formula_27
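A direct numerical illustration of the classical formula, taking the DGF to be the Riemann zeta function (so that f(n) = 1 for all n). This is a sketch only: the trapezoidal discretization and the parameter choices are arbitrary assumptions, and convergence in "T" is slow.

```python
import mpmath as mp

def invert_dgf(D, x, sigma=2.0, T=300, N=6000):
    """Approximate f(x) = lim_{T->oo} (1/2T) int_{-T}^{T} x^{sigma + i t}
    D(sigma + i t) dt by the trapezoidal rule; sigma must exceed the
    abscissa of absolute convergence of D."""
    h = mp.mpf(2 * T) / N
    total = mp.mpf(0)
    for k in range(N + 1):
        t = -T + k * h
        w = mp.mpf(1) / 2 if k in (0, N) else 1
        total += w * mp.power(x, sigma + 1j * t) * D(sigma + 1j * t)
    return total * h / (2 * T)

# D_f = zeta is the DGF of f(n) = 1, so each coefficient should come out near 1:
print(mp.re(invert_dgf(mp.zeta, 3)))   # ~1.0, with O(1/T) error
```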
Direct proof: from Apostol's book.
This proof shows that the function formula_36 can be recovered from its associated Dirichlet series by means of an integral, which is known as the classical integral formula for Dirichlet inversion.
Special cases of the formula.
If we are interested in expressing formulas for the Dirichlet inverse of "f", denoted by formula_6 whenever formula_5, we write formula_37. Then we have by absolute convergence of the DGF for any formula_13 that
formula_38
Now we can call on integration by parts to see that if formula_39 denotes the formula_40 antiderivative of "F", then for any fixed non-negative integers formula_41, we have
formula_42
Thus we obtain that
formula_43
We can also relate the iterated integrals for the formula_44 antiderivatives of "F" to a finite sum of "k" single integrals of power-scaled versions of "F":
formula_45
In light of this expansion, we can then write the partially limiting "T"-truncated Dirichlet series inversion integrals at hand in the form of
formula_46
A formal generating-function-like convolution lemma.
Suppose that we wish to expand the integrand of the integral formula for Dirichlet coefficient inversion in powers of formula_59, where formula_60, and then proceed as if we were evaluating a traditional integral on the real line. Then we have that
formula_61
We require the result given by the following formula, which is proved rigorously by an application of integration by parts, for any non-negative integer formula_62:
formula_63
So the respective real and imaginary parts of our arithmetic function coefficients "f" at positive integers "x" satisfy:
formula_64
The last identities suggest an application of the Hadamard product formula for generating functions. In particular, we can work out the following identities which express the real and imaginary parts of our function "f" at "x" in the following forms:
formula_65
Notice that in the special case where the arithmetic function "f" is strictly real-valued, we expect that the inner terms in the previous limit formula are always zero (i.e., for any "T").
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "[n^{-s}] D_f(s) =: f(n)"
},
{
"math_id": 1,
"text": "n \\geq 1"
},
{
"math_id": 2,
"text": "D_f(s) := \\sum_{n \\geq 0} \\frac{f(n)}{n^s}, \\quad \\Re(s) > \\sigma_{0,f},"
},
{
"math_id": 3,
"text": "\\sigma_{0,f} \\in \\mathbb{R}"
},
{
"math_id": 4,
"text": "f(n)"
},
{
"math_id": 5,
"text": "f(1) \\neq 0"
},
{
"math_id": 6,
"text": "f^{-1}(n)"
},
{
"math_id": 7,
"text": "S_f(x) := {\\sum_{n \\leq x}}^\\prime f(n), \\quad \\forall x \\geq 1."
},
{
"math_id": 8,
"text": "s \\mapsto -s"
},
{
"math_id": 9,
"text": "\\mathcal{M}[S_f](s) = -\\frac{D_f(-s)}{s}."
},
{
"math_id": 10,
"text": "(f \\ast f^{-1})(n) = \\delta_{n,1}"
},
{
"math_id": 11,
"text": "f \\ast f^{-1} = \\mu \\ast 1 \\equiv \\varepsilon"
},
{
"math_id": 12,
"text": "D_f(s)"
},
{
"math_id": 13,
"text": "\\Re(s) > \\sigma_{0,f}"
},
{
"math_id": 14,
"text": "D_{f^{-1}}(s) = 1 / D_f(s)"
},
{
"math_id": 15,
"text": "\\sigma_{0,f}"
},
{
"math_id": 16,
"text": "\\begin{align}(g^{-1} \\ast \\mu)(n) & = [n^{-s}]\\left(\\frac{1}{\\zeta(s) D_g(s)}\\right) \\\\ (g^{-1} \\ast 1)(n) & = [n^{-s}]\\left(\\frac{\\zeta(s)}{D_g(s)}\\right).\\end{align}"
},
{
"math_id": 17,
"text": "f"
},
{
"math_id": 18,
"text": "x \\geq 0"
},
{
"math_id": 19,
"text": "S_f(x) := {\\sum_{n \\leq x}}^{\\prime} f(n) = \\begin{cases} 0, & 0 \\leq x < 1 \\\\ \\sum\\limits_{n < [x]} f(n), & x \\in \\mathbb{R} \\setminus \\mathbb{Z}^{+} \\wedge x \\geq 1 \\\\ \\sum\\limits_{n \\leq [x]} f(n) - \\frac{f(x)}{2}, & x \\in \\mathbb{Z}^{+}. \\end{cases}"
},
{
"math_id": 20,
"text": "D_f(s) = s \\cdot \\int_1^{\\infty} \\frac{S_f(x)}{x^{s+1}} dx."
},
{
"math_id": 21,
"text": "\\begin{align}\\frac{1}{\\zeta(s)} & = s \\cdot \\int_1^{\\infty} \\frac{M(x)}{x^{s+1}} dx \\\\ P(s) & = s \\cdot \\int_1^{\\infty} \\frac{\\pi(x)}{x^{s+1}} dx \\\\ \\log \\zeta(s) & = s \\cdot \\int_0^{\\infty} \\frac{\\Pi_0(x)}{x^{s+1}} dx.\\end{align}"
},
{
"math_id": 22,
"text": "\\sigma := \\Re(s) > \\sigma_{0,f}"
},
{
"math_id": 23,
"text": "f(x) \\equiv [x^{-s}] D_f(s) = \\lim_{T \\rightarrow \\infty} \\frac{1}{2T} \\int_{-T}^T x^{\\sigma+\\imath t} D_f(\\sigma+\\imath t) \\, dt."
},
{
"math_id": 24,
"text": "c,x > 0"
},
{
"math_id": 25,
"text": "\\sigma > \\sigma_{0,f}-c"
},
{
"math_id": 26,
"text": "\\Re(s) := \\sigma"
},
{
"math_id": 27,
"text": "{\\sum_{n \\leq x}}^{\\prime} \\frac{f(n)}{n^s} = \\frac{1}{2\\pi\\imath} \\int_{c-\\imath\\infty}^{c+\\imath\\infty} D_f(s+z) \\frac{x^z}{z} dz."
},
{
"math_id": 28,
"text": "f(s) = \\sum_{n=1}^\\infty a_n n^{-s}"
},
{
"math_id": 29,
"text": "F(x) = \\sum_{n=1}^x a_n"
},
{
"math_id": 30,
"text": "g(x) = \\sum_{n=1}^x a_n \\lfloor x/n \\rfloor"
},
{
"math_id": 31,
"text": "F(x) = \\int_1^x g(t) dt + a_1"
},
{
"math_id": 32,
"text": "F(x)"
},
{
"math_id": 33,
"text": "g(x)"
},
{
"math_id": 34,
"text": "R(u)"
},
{
"math_id": 35,
"text": "[1, x]"
},
{
"math_id": 36,
"text": "f(s)"
},
{
"math_id": 37,
"text": "D_f(s) = 1 + s \\cdot A_f(s)"
},
{
"math_id": 38,
"text": "\\begin{align}\\int \\frac{x^{\\imath t}}{D_f(\\sigma+\\imath t)} \\,dt & = \\int \\left(\\sum_{j \\geq 0} (\\sigma+\\imath t)^{j} \\times \\sum_{k=0}^{j} (-1)^k D_f(\\sigma+\\imath t)^k \\frac{\\log^{j-k}(x)}{(j-k)!}\\right) \\, dt.\\end{align}"
},
{
"math_id": 39,
"text": "F^{(-m)}(x) = \\sum_{n \\geq 0} \\frac{F^{(n)}(0)}{n!} x^{n+m} \\times \\frac{n!}{(n+m)!}"
},
{
"math_id": 40,
"text": "m^{th}"
},
{
"math_id": 41,
"text": "k \\geq 0"
},
{
"math_id": 42,
"text": "\\int (ax+b)^k \\cdot F(ax+b) \\, dx = \\sum_{j=0}^{k} \\frac{k!}{a \\cdot j!} (-1)^{k-j} t^j F^{(j+1-k)}(ax+b)."
},
{
"math_id": 43,
"text": "\\begin{align}\\int_{-T}^T \\frac{x^{\\imath t}}{D_f(\\sigma+\\imath t)} \\, dt & = \\frac{1}{\\imath} \\left(\\sum_{j \\geq 0} \\sum_{k=0}^j \\sum_{m=0}^k \\frac{k!}{m!} (-1)^m (\\sigma+\\imath t)^{m} \\left[D_f^k\\right]^{(j+1-k)}(\\sigma+\\imath t) \\frac{\\log^{j-k}(x)}{(j-k)!}\\right) \\Biggr|_{t=-T}^{t=+T}.\\end{align}"
},
{
"math_id": 44,
"text": "k^{th}"
},
{
"math_id": 45,
"text": "F^{(-k)}(x) = \\sum_{i=0}^{k-1} \\binom{k-1}{i} \\frac{(-1)^i}{(k-1)!} \\int \\frac{F(x)}{x^i} \\, dx."
},
{
"math_id": 46,
"text": "\\begin{align}\\frac{1}{2T} \\int_{-T}^T \\frac{x^{\\imath t}}{D_f(\\sigma+\\imath t)} \\, dt & = \\frac{1}{2T \\cdot \\imath} \\left(\\sum_{j \\geq 0} \\sum_{k=0}^{j} \\sum_{m=0}^k \\sum_{n=0}^{j-k} \\binom{j-k}{n} \\frac{(-1)^{k+n+m} \\cdot k!}{(j-k)! \\cdot m!} (\\sigma+\\imath t)^{m} \\frac{\\log^{j-k}(x)}{(j-k)!} \\int_{0}^{\\sigma+\\imath t} \\left[A_f^k\\right](v) \\frac{dv}{v^n} \\right) \\Biggr|_{t=-T}^{t=+T} \\\\ & = \\frac{1}{2T \\cdot \\imath} \\left(\\sum_{j \\geq 0} \\sum_{k=0}^{j} \\sum_{m=0}^k \\frac{(-1)^{j-k} \\cdot (-s)^m}{m!} \\frac{\\log^{k}(x)}{k!} \\int_0^1 s \\cdot D_f^{j-k}(rs) \\left(1-\\frac{1}{rs}\\right)^k \\, dv\\right) \\Biggr|_{s=\\sigma-\\imath T}^{s=\\sigma+\\imath T} \\\\ & = \\frac{s\\left(e^{-s}+O_s(1)\\right)}{2T \\cdot \\imath} \\int_0^{1} \\frac{x^{1-\\frac{1}{rs}}}{1+D_f(rs)} dr \\Biggr|_{s=\\sigma-\\imath T}^{s=\\sigma+\\imath T} \\\\ & = \\frac{\\left(e^{-s}+O_s(1)\\right)}{2T \\cdot \\imath} \\int_0^{s} \\frac{x^{1-\\frac{1}{v}}}{1+D_f(v)} \\, dv \\Biggr|_{s=\\sigma-\\imath T}^{s=\\sigma+\\imath T}.\\end{align}"
},
{
"math_id": 47,
"text": "a(n)"
},
{
"math_id": 48,
"text": "s=1"
},
{
"math_id": 49,
"text": "\\sum_{n=1}^\\infty \\frac{a_n}{n^s} = \\int_0^\\infty x^{s-1} \\left(\\sum_{n=1}^\\infty a_n e^{-nx}\\right) dx = \\mathcal{M}{a_n}(s)"
},
{
"math_id": 50,
"text": "a_n = \\frac{1}{2\\pi i} \\int_{c-i\\infty}^{c+i\\infty} \\frac{\\sum_{m=1}^\\infty \\frac{\\mu(m)}{m^{s}}\\mathcal{M}{a_n}(s)}{n^s} ds"
},
{
"math_id": 51,
"text": "c"
},
{
"math_id": 52,
"text": "\\sum_{n=1}^\\infty a_n/n^s"
},
{
"math_id": 53,
"text": "b(n)"
},
{
"math_id": 54,
"text": "\\mathcal{M}{(a*b)_n}(s) = \\mathcal{M}{a_n}(s) \\mathcal{M}{b_n}(s)"
},
{
"math_id": 55,
"text": "f(x)"
},
{
"math_id": 56,
"text": "\\int_0^\\infty \\frac{f(x)}{x^{s+1}}dx"
},
{
"math_id": 57,
"text": "s"
},
{
"math_id": 58,
"text": "a_n = \\frac{1}{n^s}\\int_0^\\infty f(x) e^{-nx}dx"
},
{
"math_id": 59,
"text": "(\\imath t)^k"
},
{
"math_id": 60,
"text": "[(\\imath t)^{k}] F(\\sigma+\\imath t) = F^{(k)})(\\sigma) / k!"
},
{
"math_id": 61,
"text": "\\begin{align}\\hat{D}_f(x; \\sigma, T) & := \\int_{-T}^T x^{\\sigma+\\imath t} D_f(\\sigma+\\imath t)\\, dt \\\\ & = \\sum_{m \\geq 0} \\int_{-T}^{T} x^{\\sigma+\\imath t} (\\imath t)^m \\frac{D_f^{(m)}(\\sigma)}{m!} \\\\ & = \\sum_{m \\geq 0} \\int_{-T}^T t^{2m} x^{\\sigma+\\imath t} \\frac{(-1)^m D_f^{(2m)}(\\sigma)}{(2m)!}\\,dt + \\imath \\times \\sum_{m \\geq 0} \\int_{-T}^T t^{2m+1} x^{\\sigma+\\imath t} \\frac{(-1)^m A^{(2m+1)}(\\sigma)}{(2m+1)!} \\,dt.\\end{align}"
},
{
"math_id": 62,
"text": "m \\geq 0"
},
{
"math_id": 63,
"text": "\\begin{align}\\hat{I}_m(T) & := \\int_{-T}^{T} \\frac{(\\imath t)^m}{m!} x^{\\imath t} \\,dt \\\\ & = \\sum_{r=0}^{m} \\frac{\\left(x^{\\imath T} + (-1)^{r+1} x^{-\\imath T}\\right) (-\\imath)^{r+1}}{\\log^{k+1-r}(x)} \\cdot \\frac{T^r}{r!} \\\\ & = \\frac{\\cos(T \\log x)}{T \\log x} \\times \\sum_{r=0}^{\\lfloor k/2 \\rfloor} \\frac{(-1)^{r+1} T^{2r}}{\\log^{k-2r}(x) (2r)!} + \\frac{\\sin(T \\log x)}{T \\log x} \\times \\sum_{r=0}^{\\lfloor k/2 \\rfloor} \\frac{(-1)^{r+1} T^{2r+1}}{\\log^{k-2r-1}(x) (2r+1)!}.\\end{align}"
},
{
"math_id": 64,
"text": "\\begin{align}\\operatorname{Re}(f(x)) & = x^{\\sigma} \\times \\lim_{T \\rightarrow \\infty} \\sum_{m \\geq 0} D_f^{(2m)}(\\sigma) \\cdot \\frac{\\hat{I}_{2m}(T)}{2T} \\\\ \\operatorname{Im}(f(x)) & = x^{\\sigma} \\times \\lim_{T \\rightarrow \\infty} \\sum_{m \\geq 0} D_f^{(2m+1)}(\\sigma) \\cdot \\frac{\\hat{I}_{2m+1}(T)}{2T}.\\end{align}"
},
{
"math_id": 65,
"text": "\\begin{align}\\operatorname{Re}(f(x)) & = \\lim_{T \\rightarrow \\infty} \\left[\\frac{x^{\\sigma}}{2T} \\times \\frac{1}{2\\pi} \\int_{-\\pi}^{\\pi} \\left(D_f(\\sigma+\\imath T \\cdot e^{\\imath s}) + D_f(\\sigma - \\imath T e^{\\imath s})\\right) \\left(FUNC(e^{-\\imath s})\\right) \\, ds\\right] \\\\ \n\\operatorname{Im}(f(x)) & = \\lim_{T \\rightarrow \\infty} \\left[\\frac{x^\\sigma}{2T} \\times \\frac{1}{2\\pi} \\int_{-\\pi}^\\pi \\left(D_f(\\sigma+\\imath T \\cdot e^{\\imath s}) - D_f(\\sigma - \\imath T e^{\\imath s}\\right) () \\, ds \\right].\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=61869767
|
6187083
|
Nucleon spin structure
|
Nucleon spin structure describes the partonic structure of nucleon (proton and neutron) intrinsic angular momentum (spin). The key question is how the nucleon's spin, whose magnitude is 1/2ħ, is carried by its constituent partons (quarks and gluons). Before the 1980s it was expected that quarks carry all of the nucleon spin, but later experiments contradicted this expectation. In the late 1980s, the European Muon Collaboration (EMC) conducted experiments that suggested the spin carried by quarks is not sufficient to account for the total spin of the nucleons. This finding astonished particle physicists at the time, and the problem of where the missing spin lies is sometimes referred to as the proton spin crisis.
Experimental research on these topics has been continued by the Spin Muon Collaboration (SMC) and the COMPASS experiment at CERN, experiments E142, E143, E154 and E155 at SLAC, HERMES at DESY, experiments at JLab and RHIC, and others. Global analysis of data from all major experiments confirmed the original EMC discovery and showed that the quark spin contributes only about 30% of the total spin of the nucleon. A major topic of modern particle physics is to find the missing angular momentum, which is believed to be carried either by gluon spin, or by gluon and quark orbital angular momentum. This fact is expressed by the sum rule,
formula_0
The gluon spin components formula_1 are being measured by many experiments. Quark and gluon angular momenta will be studied by measuring so-called generalized parton distributions (GPDs) through deeply virtual Compton scattering (DVCS) experiments, conducted at CERN (COMPASS) and at Jefferson Lab, among other laboratories.
|
[
{
"math_id": 0,
"text": "\\frac{1}{2} =\\frac{1}{2} \\Sigma_q + \\Sigma_g + L_q + L_g."
},
{
"math_id": 1,
"text": " \\Sigma_g"
}
] |
https://en.wikipedia.org/wiki?curid=6187083
|
61883319
|
Perfect digital invariant
|
In number theory, a perfect digital invariant (PDI) is a number in a given number base (formula_0) that is the sum of its own digits each raised to a given power (formula_1).
Definition.
Let formula_2 be a natural number. The perfect digital invariant function (also known as a happy function, from happy numbers) for base formula_3 and power formula_4 formula_5 is defined as:
formula_6
where formula_7 is the number of digits in the number in base formula_0, and
formula_8
is the value of each digit of the number. A natural number formula_2 is a perfect digital invariant if it is a fixed point for formula_9, which occurs if formula_10. formula_11 and formula_12 are trivial perfect digital invariants for all formula_0 and formula_1; all other perfect digital invariants are nontrivial.
For example, the number 4150 in base formula_13 is a perfect digital invariant with formula_14, because formula_15.
A natural number formula_2 is a sociable digital invariant if it is a periodic point for formula_9, where formula_16 for a positive integer formula_17 (here formula_18 is the formula_17th iterate of formula_19), and forms a cycle of period formula_17. A perfect digital invariant is a sociable digital invariant with formula_20, and an amicable digital invariant is a sociable digital invariant with formula_21.
All natural numbers formula_2 are preperiodic points for formula_9, regardless of the base. This is because if formula_22, formula_23, so any formula_2 will satisfy formula_24 until formula_25. There are a finite number of natural numbers less than formula_26, so the number is guaranteed to reach a periodic point or a fixed point less than formula_27, making it a preperiodic point.
Numbers in base formula_28 lead to fixed or periodic points of numbers formula_29.
<templatestyles src="Math_proof/styles.css" />Proof
If formula_28, then the formula_25 bound can be reduced.
Let formula_30 be the number for which the sum of squares of digits is largest among the numbers less than formula_31.
formula_32
formula_33 because formula_34
Let formula_35 be the number for which the sum of squares of digits is largest among the numbers less than formula_36.
formula_37
formula_38 because formula_39
Let formula_40 be the number for which the sum of squares of digits is largest among the numbers less than formula_41.
formula_42
formula_43
Let formula_44 be the number for which the sum of squares of digits is largest among the numbers less than formula_45.
formula_46
formula_47
formula_48. Thus, numbers in base formula_28 lead to cycles or fixed points of numbers formula_49.
The number of iterations formula_50 needed for formula_51 to reach a fixed point is the perfect digital invariant function's persistence of formula_2, and undefined if it never reaches a fixed point.
formula_52 is the digit sum. The only perfect digital invariants are the single-digit numbers in base formula_0, and there are no periodic points with prime period greater than 1.
formula_53 reduces to formula_54, as for any power formula_1, formula_55 and formula_56.
For every natural number formula_57, if formula_58, formula_59 and formula_60, then for every natural number formula_2, if formula_61, then formula_62, where formula_63 is Euler's totient function.
<templatestyles src="Math_proof/styles.css" />Proof
Let
formula_64
be a natural number with formula_65 digits, where formula_66, and formula_59, where formula_17 is a natural number greater than 1.
According to the divisibility rules of base formula_0, if formula_67, then if formula_61, then the digit sum
formula_68
If a digit formula_69, then formula_70. According to Euler's theorem, if formula_60, formula_71. Thus, if the digit sum formula_72, then formula_62.
Therefore, for any natural number formula_17, if formula_58, formula_59 and formula_60, then for every natural number formula_2, if formula_61, then formula_62.
No upper bound can be determined for the size of perfect digital invariants in a given base and arbitrary power, and it is not currently known whether the number of perfect digital invariants for an arbitrary base is finite or infinite.
"F"2,"b".
By definition, any three-digit perfect digital invariant formula_73 for formula_74 with natural number digits formula_75, formula_76, formula_77 has to satisfy the cubic Diophantine equation formula_78. formula_79 has to be equal to 0 or 1 for any formula_80, because the maximum value formula_2 can take is formula_81. As a result, there are actually two related quadratic Diophantine equations to solve:
formula_82 when formula_83, and
formula_84 when formula_85.
The two-digit natural number formula_86 is a perfect digital invariant in base
formula_87
This can be proven by taking the first case, where formula_83, and solving for formula_0. This means that for some values of formula_88 and formula_89, formula_2 is not a perfect digital invariant in any base, as formula_89 is not a divisor of formula_90. Moreover, formula_91, because if formula_92 or formula_93, then formula_94, which contradicts the earlier statement that formula_76.
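The base formula above lends itself to a direct search. The following sketch (the function name is ours, not from the article) enumerates digit pairs, keeps those where formula_89 divides formula_90, and reports the resulting base and invariant:
def two_digit_pdis(max_digit: int = 20) -> list[tuple[int, int]]:
    """Enumerate (base, invariant) pairs n = d1*b + d0 with d0**2 + d1**2 = n."""
    results = []
    for d1 in range(1, max_digit):
        for d0 in range(2, max_digit):  # d0 > 1, per the argument above
            num = d0 * (d0 - 1)
            if num % d1 == 0:  # d1 must divide d0*(d0 - 1)
                b = d1 + num // d1
                if d0 < b and d1 < b:  # both digits must be valid in base b
                    results.append((b, d1 * b + d0))
    return results
For example, d1 = 1 and d0 = 2 give base 3 and the invariant 5, which is written 12 in base 3, with 1^2 + 2^2 = 5.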
There are no three-digit perfect digital invariants for formula_74, which can be proven by taking the second case, where formula_85, and letting formula_95 and formula_96. Then the Diophantine equation for the three-digit perfect digital invariant becomes
formula_97
formula_98
formula_99
formula_100
formula_101 for all values of formula_102. Thus, there are no solutions to the Diophantine equation, and there are no three-digit perfect digital invariants for formula_74.
"F"3,"b".
"There are just four numbers, after unity, which are the sums of the cubes of their digits:"
formula_103
formula_104
formula_105
formula_106
"These are odd facts, very suitable for puzzle columns and likely to amuse amateurs, but there is nothing in them which appeals to the mathematician." (sequence in the OEIS) — G. H. Hardy, "A Mathematician's Apology"
By definition, any four-digit perfect digital invariant formula_2 for formula_107 with natural number digits formula_75, formula_76, formula_77, formula_108 has to satisfy the quartic Diophantine equation formula_109. formula_110 has to be equal to 0, 1, or 2 for any formula_111, because the maximum value formula_2 can take is formula_112. As a result, there are three related cubic Diophantine equations to solve:
formula_113 when formula_114
formula_115 when formula_116
formula_117 when formula_118
We take the first case, where formula_114.
"b" = 3"k" + 1.
Let formula_17 be a positive integer and the number base formula_119. Then formula_120, formula_126, and formula_130 are perfect digital invariants for formula_107 for all formula_17, as the following three proofs show.
<templatestyles src="Math_proof/styles.css" />Proof
Let the digits of formula_121 be formula_122, formula_123, and formula_92. Then
formula_124
Thus formula_125 is a perfect digital invariant for formula_107 for all formula_17.
<templatestyles src="Math_proof/styles.css" />Proof
Let the digits of formula_127 be formula_122, formula_123, and formula_93. Then
formula_128
Thus formula_129 is a perfect digital invariant for formula_107 for all formula_17.
<templatestyles src="Math_proof/styles.css" />Proof
Let the digits of formula_131 be formula_132, formula_133, and formula_134. Then
formula_135
Thus formula_136 is a perfect digital invariant for formula_107 for all formula_17.
"b" = 3"k" + 2.
Let formula_17 be a positive integer and the number base formula_137. Then formula_138 is a perfect digital invariant for formula_107 for all formula_17.
<templatestyles src="Math_proof/styles.css" />Proof
Let the digits of formula_121 be formula_122, formula_123, and formula_92. Then
formula_139
formula_140
formula_141
formula_142
formula_143
formula_144
formula_145
formula_146
formula_147
formula_148
formula_149
formula_150
formula_151
Thus formula_125 is a perfect digital invariant for formula_107 for all formula_17.
"b" = 6"k" + 4.
Let formula_17 be a positive integer and the number base formula_152. Then formula_153 is a perfect digital invariant for formula_107 for all formula_17.
<templatestyles src="Math_proof/styles.css" />Proof
Let the digits of formula_154 be formula_122, formula_155, and formula_134. Then
formula_156
formula_157
formula_158
formula_159
formula_160
formula_161
formula_162
formula_163
formula_164
formula_165
formula_166
formula_167
formula_168
formula_169
formula_150
formula_170
Thus formula_171 is a perfect digital invariant for formula_107 for all formula_17.
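These closed-form families can be spot-checked numerically. The following sketch (helper names are ours) verifies each family for the first several values of formula_17:
def digit_power_sum(n: int, p: int, b: int) -> int:
    """Sum of the base-b digits of n, each raised to the power p."""
    total = 0
    while n > 0:
        total += (n % b) ** p
        n //= b
    return total

def is_pdi(n: int, p: int, b: int) -> bool:
    return n == digit_power_sum(n, p, b)

for k in range(1, 50):
    b = 3 * k + 1
    assert is_pdi(k * b**2 + (2 * k + 1) * b, 3, b)      # n1
    assert is_pdi(k * b**2 + (2 * k + 1) * b + 1, 3, b)  # n2
    assert is_pdi((k + 1) * b**2 + (2 * k + 1), 3, b)    # n3
    b = 3 * k + 2
    assert is_pdi(k * b**2 + (2 * k + 1), 3, b)          # n1
    b = 6 * k + 4
    assert is_pdi(k * b**2 + (3 * k + 2) * b + (2 * k + 1), 3, b)  # n4
For k = 1, the last family gives base 10 and the well-known invariant 153 = 1^3 + 5^3 + 3^3.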
"F""p","b".
All numbers are represented in base formula_0.
Extension to negative integers.
Perfect digital invariants can be extended to the negative integers by use of a signed-digit representation to represent each integer.
Balanced ternary.
In balanced ternary, the digits are 1, −1 and 0. This results in the following:
Relation to happy numbers.
A happy number formula_2 for a given base formula_0 and a given power formula_1 is a preperiodic point for the perfect digital invariant function formula_9 such that the formula_177-th iteration of formula_9 is equal to the trivial perfect digital invariant formula_12, and an unhappy number is one such that there exists no such formula_177.
Programming example.
The example below implements the perfect digital invariant function described in the definition above to search for perfect digital invariants and cycles in Python. This can be used to find happy numbers.
def pdif(x: int, p: int, b: int) -> int:
    """Perfect digital invariant function."""
    total = 0
    while x > 0:
        total = total + pow(x % b, p)  # add the lowest base-b digit, raised to power p
        x = x // b  # strip the lowest digit
    return total

def pdif_cycle(x: int, p: int, b: int) -> list[int]:
    """Return the cycle (fixed point or periodic orbit) that x eventually reaches."""
    seen = []
    # Iterate until a value repeats; at that point x lies on the cycle.
    while x not in seen:
        seen.append(x)
        x = pdif(x, p, b)
    cycle = []
    # Walk around the cycle once, collecting its members.
    while x not in cycle:
        cycle.append(x)
        x = pdif(x, p, b)
    return cycle
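A brief usage example (the printed values follow from the definitions above):
print(pdif(4150, 5, 10))      # 4150, a nontrivial perfect digital invariant
print(pdif_cycle(19, 2, 10))  # [1], so 19 is a happy number
print(pdif_cycle(4, 2, 10))   # [4, 16, 37, 58, 89, 145, 42, 20], the unhappy cycle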
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "b"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "b > 1"
},
{
"math_id": 4,
"text": "p > 0"
},
{
"math_id": 5,
"text": "F_{p, b} : \\mathbb{N} \\rightarrow \\mathbb{N}"
},
{
"math_id": 6,
"text": "F_{p, b}(n) = \\sum_{i=0}^{k - 1} d_i^p. "
},
{
"math_id": 7,
"text": "k = \\lfloor \\log_{b}{n} \\rfloor + 1"
},
{
"math_id": 8,
"text": "d_i = \\frac{n \\bmod{b^{i+1}} - n \\bmod b^i}{b^i}"
},
{
"math_id": 9,
"text": "F_{p, b}"
},
{
"math_id": 10,
"text": "F_{p, b}(n) = n"
},
{
"math_id": 11,
"text": "0"
},
{
"math_id": 12,
"text": "1"
},
{
"math_id": 13,
"text": "b = 10"
},
{
"math_id": 14,
"text": "p = 5"
},
{
"math_id": 15,
"text": "4150 = 4^5 + 1^5 + 5^5 + 0^5"
},
{
"math_id": 16,
"text": "F_{p,b}^k(n) = n"
},
{
"math_id": 17,
"text": "k"
},
{
"math_id": 18,
"text": "F_{p,b}^k"
},
{
"math_id": 19,
"text": "F_{p,b}"
},
{
"math_id": 20,
"text": "k = 1"
},
{
"math_id": 21,
"text": "k = 2"
},
{
"math_id": 22,
"text": "k \\geq p + 2"
},
{
"math_id": 23,
"text": "n \\geq b^{k-1} > b^p k"
},
{
"math_id": 24,
"text": "n > F_{b,p}(n)"
},
{
"math_id": 25,
"text": "n < b^{p+1}"
},
{
"math_id": 26,
"text": "b^{p+1}"
},
{
"math_id": 27,
"text": " b^{p+1}"
},
{
"math_id": 28,
"text": "b > p"
},
{
"math_id": 29,
"text": "n \\leq (p - 2)^p + p (b - 1)^p"
},
{
"math_id": 30,
"text": "r"
},
{
"math_id": 31,
"text": "b^p"
},
{
"math_id": 32,
"text": "r = b^p - 1 = \\sum_{t=0}^p (b - 1)b^t"
},
{
"math_id": 33,
"text": "F_{p, b}(r) = (p + 1)(b - 1)^p < (p + 1) b^p \\leq b^{p + 1}"
},
{
"math_id": 34,
"text": "b \\geq (p + 1)"
},
{
"math_id": 35,
"text": "s"
},
{
"math_id": 36,
"text": "(p + 1)(b - 1)^p"
},
{
"math_id": 37,
"text": "s = (p + 1)b^p - 1 = p b^p + \\sum_{t=0}^{p - 1} (b - 1)b^t"
},
{
"math_id": 38,
"text": "F_{p, b}(s) = p^p + p (b - 1)^p < p b^p"
},
{
"math_id": 39,
"text": "b \\geq p"
},
{
"math_id": 40,
"text": "t"
},
{
"math_id": 41,
"text": "p b^p"
},
{
"math_id": 42,
"text": "t = (p - 1) b^p + \\sum_{t=0}^{p - 1} (b - 1)b^t"
},
{
"math_id": 43,
"text": "F_{p, b}(t) = (p - 1)^p + p (b - 1)^p"
},
{
"math_id": 44,
"text": "u"
},
{
"math_id": 45,
"text": "F_{p, b}(t) + 1"
},
{
"math_id": 46,
"text": "u = (p - 2) b^p + \\sum_{t=0}^{p - 1} (b - 1)b^t"
},
{
"math_id": 47,
"text": "F_{p, b}(u) = (p - 2)^p + p (b - 1)^p < (p - 1)^p + p (b - 1)^p = n_{\\ell+1}"
},
{
"math_id": 48,
"text": "u \\leq F_{p, b}(u) < F_{p, b}(t)"
},
{
"math_id": 49,
"text": "n \\leq F_{p, b}(u) = (p - 1)^p + p (b - 1)^p"
},
{
"math_id": 50,
"text": "i"
},
{
"math_id": 51,
"text": "F_{p,b}^{i}(n)"
},
{
"math_id": 52,
"text": "F_{1, b}"
},
{
"math_id": 53,
"text": "F_{p, 2}"
},
{
"math_id": 54,
"text": "F_{1, 2}"
},
{
"math_id": 55,
"text": "0^p = 0"
},
{
"math_id": 56,
"text": "1^p = 1"
},
{
"math_id": 57,
"text": "k > 1"
},
{
"math_id": 58,
"text": "p < b"
},
{
"math_id": 59,
"text": "(b - 1) \\equiv 0 \\bmod k"
},
{
"math_id": 60,
"text": "(p - 1) \\equiv 0 \\bmod \\phi(k)"
},
{
"math_id": 61,
"text": "n \\equiv m \\bmod k"
},
{
"math_id": 62,
"text": "F_{p, b}(n) \\equiv m \\bmod k"
},
{
"math_id": 63,
"text": "\\phi(k)"
},
{
"math_id": 64,
"text": "n = \\sum_{i = 0}^j d_i b^i"
},
{
"math_id": 65,
"text": "j"
},
{
"math_id": 66,
"text": "0 \\leq d_i < b"
},
{
"math_id": 67,
"text": "b - 1 \\equiv 0 \\bmod k"
},
{
"math_id": 68,
"text": "F_{1, b}(n) = \\sum_{i = 0}^j d_i \\equiv m \\bmod k"
},
{
"math_id": 69,
"text": "d_i \\equiv m \\bmod k"
},
{
"math_id": 70,
"text": "d_i^p \\equiv m^p \\bmod k"
},
{
"math_id": 71,
"text": "m^p \\bmod k = m \\bmod k"
},
{
"math_id": 72,
"text": "F_{1, b}(n) \\equiv m \\bmod k"
},
{
"math_id": 73,
"text": "n = d_2 d_1 d_0"
},
{
"math_id": 74,
"text": "F_{2, b}"
},
{
"math_id": 75,
"text": "0 \\leq d_0 < b"
},
{
"math_id": 76,
"text": "0 \\leq d_1 < b"
},
{
"math_id": 77,
"text": "0 \\leq d_2 < b"
},
{
"math_id": 78,
"text": "d_0^2 + d_1^2 + d_2^2 = d_2 b^2 + d_1 b + d_0"
},
{
"math_id": 79,
"text": "d_2"
},
{
"math_id": 80,
"text": "b > 2"
},
{
"math_id": 81,
"text": "n = (2 - 1)^2 + 2 (b - 1)^2 = 1 + 2 (b - 1)^2 < 2 b^2"
},
{
"math_id": 82,
"text": "d_0^2 + d_1^2 = d_1 b + d_0"
},
{
"math_id": 83,
"text": "d_2 = 0"
},
{
"math_id": 84,
"text": "d_0^2 + d_1^2 + 1 = b^2 + d_1 b + d_0"
},
{
"math_id": 85,
"text": "d_2 = 1"
},
{
"math_id": 86,
"text": "n = d_1 d_0"
},
{
"math_id": 87,
"text": "b = d_1 + \\frac{d_0 (d_0 - 1)}{d_1}."
},
{
"math_id": 88,
"text": "d_0"
},
{
"math_id": 89,
"text": "d_1"
},
{
"math_id": 90,
"text": "d_0 (d_0 - 1)"
},
{
"math_id": 91,
"text": "d_0 > 1"
},
{
"math_id": 92,
"text": "d_0 = 0"
},
{
"math_id": 93,
"text": "d_0 = 1"
},
{
"math_id": 94,
"text": "b = d_1"
},
{
"math_id": 95,
"text": "d_0 = b - a_0"
},
{
"math_id": 96,
"text": "d_1 = b - a_1"
},
{
"math_id": 97,
"text": "(b - a_0)^2 + (b - a_1)^2 + 1 = b^2 + (b - a_1) b + (b - a_0)"
},
{
"math_id": 98,
"text": "b^2 - 2 a_0 b + a_0^2 + b^2 - 2 a_1 b + a_1^2 + 1 = b^2 + (b - a_1) b + (b - a_0)"
},
{
"math_id": 99,
"text": "2 b^2 - 2 (a_0 + a_1) b + a_0^2 + a_1^2 + 1 = b^2 + (b - a_1) b + (b - a_0)"
},
{
"math_id": 100,
"text": "b^2 + (b - 2 (a_0 + a_1)) b + a_0^2 + a_1^2 + 1 = b^2 + (b - a_1) b + (b - a_0)"
},
{
"math_id": 101,
"text": "2 (a_0 + a_1) > a_1"
},
{
"math_id": 102,
"text": "0 < a_1 \\leq b"
},
{
"math_id": 103,
"text": "153=1^3+5^3+3^3"
},
{
"math_id": 104,
"text": "370=3^3+7^3+0^3"
},
{
"math_id": 105,
"text": " 371=3^3+7^3+1^3"
},
{
"math_id": 106,
"text": "407=4^3+0^3+7^3."
},
{
"math_id": 107,
"text": "F_{3, b}"
},
{
"math_id": 108,
"text": "0 \\leq d_3 < b"
},
{
"math_id": 109,
"text": "d_0^3 + d_1^3 + d_2^3 + d_3^3 = d_3 b^3 + d_2 b^2 + d_1 b + d_0"
},
{
"math_id": 110,
"text": "d_3"
},
{
"math_id": 111,
"text": "b > 3"
},
{
"math_id": 112,
"text": "n = (3 - 2)^3 + 3 (b - 1)^3 = 1 + 3 (b - 1)^3 < 3 b^3"
},
{
"math_id": 113,
"text": "d_0^3 + d_1^3 + d_2^3 = d_2 b^2 + d_1 b + d_0"
},
{
"math_id": 114,
"text": "d_3 = 0"
},
{
"math_id": 115,
"text": "d_0^3 + d_1^3 + d_2^3 + 1 = b^3 + d_2 b^2 + d_1 b + d_0"
},
{
"math_id": 116,
"text": "d_3 = 1"
},
{
"math_id": 117,
"text": "d_0^3 + d_1^3 + d_2^3 + 8 = 2 b^3 + d_2 b^2 + d_1 b + d_0"
},
{
"math_id": 118,
"text": "d_3 = 2"
},
{
"math_id": 119,
"text": "b = 3 k + 1"
},
{
"math_id": 120,
"text": "n_1 = kb^2 + (2k + 1)b"
},
{
"math_id": 121,
"text": "n_1 = d_2 b^2 + d_1 b + d_0"
},
{
"math_id": 122,
"text": "d_2 = k"
},
{
"math_id": 123,
"text": "d_1 = 2k + 1"
},
{
"math_id": 124,
"text": "\n\\begin{align}\nF_{3, b}(n_1) & = d_0^3 + d_1^3 + d_2^3 \\\\\n& = k^3 + (2k + 1)^3 + 0^3 \\\\\n& = (k^2 - k(2k + 1) + (2k + 1)^2)(k + (2k + 1)) \\\\\n& = (k^2 - 2k^2 - k + 4k^2 + 4k + 1)(3k + 1) \\\\\n& = (3k^2 + 3k + 1)(3k + 1) \\\\\n& = (3k^2 + 4k + 1)(3k + 1) - k(3k + 1) \\\\\n& = (k + 1)(3k + 1)(3k + 1) - k(3k + 1) \\\\\n& = k(3k + 1)(3k + 1) + (3k + 1)(3k + 1) - k(3k + 1) \\\\\n& = k(3k + 1)^2 + (2k + 1)(3k + 1) + 0 \\\\\n& = d_2 b^2 + d_1 b + d_0 \\\\\n& = n_1\n\\end{align}\n"
},
{
"math_id": 125,
"text": "n_1"
},
{
"math_id": 126,
"text": "n_2 = kb^2 + (2k + 1)b + 1"
},
{
"math_id": 127,
"text": "n_2 = d_2 b^2 + d_1 b + d_0"
},
{
"math_id": 128,
"text": "\n\\begin{align}\nF_{3, b}(n_2) & = d_0^3 + d_1^3 + d_2^3 \\\\\n& = k^3 + (2k + 1)^3 + 1^3 \\\\\n& = (k^2 - k(2k + 1) + (2k + 1)^2)(k + (2k + 1)) + 1 \\\\\n& = (k^2 - 2k^2 - k + 4k^2 + 4k + 1)(3k + 1) + 1 \\\\\n& = (3k^2 + 3k + 1)(3k + 1) + 1 \\\\\n& = (3k^2 + 4k + 1)(3k + 1) - k(3k + 1) + 1 \\\\\n& = (k + 1)(3k + 1)(3k + 1) - k(3k + 1) + 1 \\\\\n& = k(3k + 1)(3k + 1) + (3k + 1)(3k + 1) - k(3k + 1) + 1 \\\\\n& = k(3k + 1)^2 + (2k + 1)(3k + 1) + 1 \\\\\n& = d_2 b^2 + d_1 b + d_0 \\\\\n& = n_2\n\\end{align}\n"
},
{
"math_id": 129,
"text": "n_2"
},
{
"math_id": 130,
"text": "n_3 = (k + 1)b^2 + (2k + 1)"
},
{
"math_id": 131,
"text": "n_3 = d_2 b^2 + d_1 b + d_0"
},
{
"math_id": 132,
"text": "d_2 = k + 1"
},
{
"math_id": 133,
"text": "d_1 = 0"
},
{
"math_id": 134,
"text": "d_0 = 2k + 1"
},
{
"math_id": 135,
"text": "\n\\begin{align}\nF_{3, b}(n_3) & = d_0^3 + d_1^3 + d_2^3 \\\\\n& = (k + 1)^3 + 0^3 + (2k + 1)^3 \\\\\n& = ((k + 1)^2 - (k + 1)(2k + 1) + (2k + 1)^2)((k + 1) + (2k + 1)) \\\\\n& = ((k + 1)^2 + k(2k + 1)(3k + 2) \\\\\n& = (k^2 + 2k + 1 + 2k^2 + k)(3k + 2) \\\\\n& = (3k^2 + 3k + 1)(3k + 2) \\\\\n& = (3k^2 + 3k)(3k + 2) + (3k + 2) \\\\\n& = 3k(k + 1)(3k + 2) + (3k + 2) \\\\\n& = (k + 1)((3k + 1)^2 - 1) + (3k + 2) \\\\\n& = (k + 1)(3k + 1)^2 - (k + 1) + (3k + 2) \\\\\n& = (k + 1)(3k + 1)^2 + 0(3k + 1) + (2k + 1) \\\\\n& = d_2 b^2 + d_1 b + d_0 \\\\\n& = n_3\n\\end{align}\n"
},
{
"math_id": 136,
"text": "n_3"
},
{
"math_id": 137,
"text": "b = 3 k + 2"
},
{
"math_id": 138,
"text": "n_1 = kb^2 + (2k + 1)"
},
{
"math_id": 139,
"text": "F_{3, b}(n_1) = d_0^3 + d_1^3 + d_2^3"
},
{
"math_id": 140,
"text": " = k^3 + 0^3 + (2k + 1)^3"
},
{
"math_id": 141,
"text": " = (k^2 - k(2k + 1) + (2k + 1)^2)(k + (2k + 1))"
},
{
"math_id": 142,
"text": " = (k^2 - 2k^2 - k + 4k^2 + 4k + 1)(3k + 1)"
},
{
"math_id": 143,
"text": " = (3k^2 + 3k + 1)(3k + 1)"
},
{
"math_id": 144,
"text": " = (3k^2 + 3k + 1)(3k + 2) - (3k^2 + 3k + 1)"
},
{
"math_id": 145,
"text": " = (3k^2 + 3k + 1)(3k + 2) - (3k^2 + 2k + k + 1)"
},
{
"math_id": 146,
"text": " = (3k^2 + 3k + 1)(3k + 2) - k(3k + 2) - (k + 1)"
},
{
"math_id": 147,
"text": " = (3k^2 + 2k + 1)(3k + 2) - (k + 1)"
},
{
"math_id": 148,
"text": " = (3k^2 + 2k)(3k + 2) + (3k + 2) - (k + 1)"
},
{
"math_id": 149,
"text": " = k(3k + 2)^2 + (2k + 1)"
},
{
"math_id": 150,
"text": " = d_2 b^2 + d_1 b + d_0"
},
{
"math_id": 151,
"text": " = n_1"
},
{
"math_id": 152,
"text": "b = 6 k + 4"
},
{
"math_id": 153,
"text": "n_4 = kb^2 + (3k + 2)b + (2k + 1)"
},
{
"math_id": 154,
"text": "n_4 = d_2 b^2 + d_1 b + d_0"
},
{
"math_id": 155,
"text": "d_1 = 3k + 2"
},
{
"math_id": 156,
"text": "F_{3, b}(n_3) = d_0^3 + d_1^3 + d_2^3"
},
{
"math_id": 157,
"text": " = (k)^3 + (3k + 2)^3 + (2k + 1)^3"
},
{
"math_id": 158,
"text": " = k^3 + ((3k + 2)^2 - (3k + 2)(2k + 1) + (2k + 1)^2)((3k + 2) + (2k + 1))"
},
{
"math_id": 159,
"text": " = k^3 + ((3k + 2)(k + 1) + (2k + 1)^2)(5k + 3)"
},
{
"math_id": 160,
"text": " = k^3 + (3k^2 + 5k + 2 + 4k^2 + 4k + 1)(5k + 3)"
},
{
"math_id": 161,
"text": " = k^3 + (7k^2 + 9k + 3)(5k + 3)"
},
{
"math_id": 162,
"text": " = k^3 + 5k(7k^2 + 9k + 3) + 3(7k^2 + 9k + 3)"
},
{
"math_id": 163,
"text": " = k^3 + 35k^3 + 45k^2 + 15k + 21k^2 + 27k + 9"
},
{
"math_id": 164,
"text": " = 36k^3 + 66k^2 + 42k + 9"
},
{
"math_id": 165,
"text": " = (6k + 4)(6k^2) + 42k^2 + 42k + 9"
},
{
"math_id": 166,
"text": " = (6k + 4)(6k^2) + (6k + 4)(4k^2) + 18k^2 + 26k + 9"
},
{
"math_id": 167,
"text": " = (6k + 4)(6k^2 + 4k) + 18k^2 + 26k + 9"
},
{
"math_id": 168,
"text": " = k(6k + 4)^2 + (6k + 4)(3k) + 14k + 9"
},
{
"math_id": 169,
"text": " = k(6k + 4)^2 + (3k + 2)(6k + 4) + 2k + 1"
},
{
"math_id": 170,
"text": " = n_4"
},
{
"math_id": 171,
"text": "n_4"
},
{
"math_id": 172,
"text": "p \\equiv 1 \\bmod 2"
},
{
"math_id": 173,
"text": "F_{p, \\text{bal}3}"
},
{
"math_id": 174,
"text": "(-1)^p = -1"
},
{
"math_id": 175,
"text": "p \\equiv 0 \\bmod 2"
},
{
"math_id": 176,
"text": "(-1)^p = 1^p = 1"
},
{
"math_id": 177,
"text": "m"
}
] |
https://en.wikipedia.org/wiki?curid=61883319
|
61890679
|
Isolation forest
|
Algorithm for anomaly detection
Isolation Forest is an algorithm for data anomaly detection using binary trees. It was developed by Fei Tony Liu in 2008. It has linear time complexity and low memory use, which makes it well suited to high-volume data. It is based on the assumption that, because anomalies are few and different from other data, they can be isolated using few partitions. Like decision tree algorithms, it does not perform density estimation. Unlike decision tree algorithms, it uses only path length to output an anomaly score, and does not use leaf node statistics of class distribution or target value.
Isolation Forest is fast because it splits the data space using randomly selected attributes and split points. The anomaly score is inversely associated with the path length, because anomalies, being few and different, need fewer splits to be isolated.
History.
The Isolation Forest (iForest) algorithm was initially proposed by Fei Tony Liu, Kai Ming Ting and Zhi-Hua Zhou in 2008. In 2010, an extension of the algorithm, SCiforest, was published to address clustered and axis-parallel anomalies. In 2012 the same authors showed that iForest has linear time complexity, a small memory requirement, and is applicable to high-dimensional data.
Isolation trees.
The premise of the Isolation Forest algorithm is that anomalous data points are easier to separate from the rest of the sample. In order to isolate a data point, the algorithm recursively generates partitions on the sample by randomly selecting an attribute and then randomly selecting a split value between the minimum and maximum values allowed for that attribute.
An example of random partitioning in a 2D dataset of normally distributed points is shown in Fig. 2 for a non-anomalous point and Fig. 3 for a point that is more likely to be an anomaly. It is apparent from the pictures how anomalies require fewer random partitions to be isolated, compared to normal points.
Recursive partitioning can be represented by a tree structure named "Isolation Tree", while the number of partitions required to isolate a point can be interpreted as the length of the path, within the tree, to reach a terminating node starting from the root. For example, the path length of point formula_0 in Fig. 2 is greater than the path length of formula_1 in Fig. 3.
Let formula_2 be a set of d-dimensional points and formula_3. An Isolation Tree (iTree) is defined as a data structure with the following properties: each node formula_4 in the tree is either an external node with no children, or an internal node with one "test" and exactly two child nodes, formula_5 and formula_6; a test at an internal node consists of an attribute formula_7 and a split value formula_8 such that the test formula_9 determines whether a data point traverses to formula_5 or formula_6.
In order to build an iTree, the algorithm recursively divides formula_10 by randomly selecting an attribute formula_7 and a split value formula_8, until either the node contains only one instance or all data at the node have the same values.
When the iTree is fully grown, each point in formula_11 is isolated at one of the external nodes. Intuitively, the anomalous points, being easier to isolate, are those with the smaller "path length" in the tree, where the path length formula_12 of a point formula_13 is defined as the number of edges formula_0 traverses from the root node to reach an external node.
A probabilistic explanation of iTree is provided in the original iForest paper.
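The recursive partitioning can be sketched compactly in code. The following illustrative Python function (ours, not from any library) grows, on the fly, only the branch of a random isolation tree that a given point would follow, and returns the resulting path length; smaller depths indicate points that are easier to isolate:
import random

def path_length(x, data, depth=0, limit=50):
    """Depth at which point x is isolated from `data` (a list of tuples)
    by repeated random axis-aligned splits; `limit` caps the recursion."""
    if len(data) <= 1 or depth >= limit:
        return depth
    q = random.randrange(len(x))  # randomly selected attribute
    lo = min(row[q] for row in data)
    hi = max(row[q] for row in data)
    if lo == hi:  # cannot split on this attribute
        return depth
    p = random.uniform(lo, hi)  # randomly selected split value
    # Keep only the side of the split that contains x, then recurse.
    side = [row for row in data if (row[q] < p) == (x[q] < p)]
    return path_length(x, side, depth + 1, limit)
Averaging this path length over many random trees is what the anomaly score below normalizes.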
Anomaly detection.
Anomaly detection with Isolation Forest is done as follows: in the training stage, a collection of iTrees is built from subsamples of the data set; in the evaluation stage, each instance is passed through every iTree, and its average path length formula_14 over the forest is converted into an anomaly score as described below.
Anomaly score.
The algorithm for computing the anomaly score of a data point is based on the observation that the structure of iTrees is equivalent to that of Binary Search Trees (BST): a termination to an external node of the iTree corresponds to an unsuccessful search in the BST. Therefore the estimation of average formula_14 for external node terminations is the same as that of the unsuccessful searches in BST, that is
formula_15
where formula_16 is the test set size, formula_17 is the sample set size and formula_18 is the harmonic number, which can be estimated by formula_19, where formula_20 is the Euler-Mascheroni constant.
Above, formula_21 is the average formula_14 given formula_17, so we can use it to normalize formula_14 to get an estimate of the anomaly score for a given instance x:
formula_22
where formula_23 is the average value of formula_14 from a collection of iTrees. For any data point formula_24: a score formula_25 close to formula_26 indicates a likely anomaly; a score much smaller than formula_27 indicates a likely normal point; and if all points score around formula_27, the sample is unlikely to contain any distinct anomaly.
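These formulas translate directly into code. A minimal sketch (helper names are ours; the correction term here uses the sample size formula_17 throughout, a common simplification, and the mean path length is assumed to have been obtained by averaging over the forest):
import math

EULER_MASCHERONI = 0.5772156649

def c(m: int) -> float:
    """Average path length of an unsuccessful search in a BST over m points."""
    if m > 2:
        harmonic = math.log(m - 1) + EULER_MASCHERONI  # H(m-1) ~ ln(m-1) + gamma
        return 2.0 * harmonic - 2.0 * (m - 1) / m
    return 1.0 if m == 2 else 0.0

def anomaly_score(mean_path_length: float, m: int) -> float:
    """s(x, m) = 2**(-E(h(x)) / c(m)); values near 1 flag likely anomalies."""
    return 2.0 ** (-mean_path_length / c(m))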
Open source implementations.
The original implementation by Fei Tony Liu is Isolation Forest in R.
Other implementations (in alphabetical order):
Other variations of Isolation Forest algorithm implementations:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x_i"
},
{
"math_id": 1,
"text": "x_j"
},
{
"math_id": 2,
"text": "X = \\{x_1, \\dots, x_n\\}"
},
{
"math_id": 3,
"text": "X' \\subset X"
},
{
"math_id": 4,
"text": "T"
},
{
"math_id": 5,
"text": "T_l"
},
{
"math_id": 6,
"text": "T_r"
},
{
"math_id": 7,
"text": "q"
},
{
"math_id": 8,
"text": "p"
},
{
"math_id": 9,
"text": "q < p"
},
{
"math_id": 10,
"text": "X'"
},
{
"math_id": 11,
"text": "X"
},
{
"math_id": 12,
"text": "h(x_i)"
},
{
"math_id": 13,
"text": "x_i \\in X"
},
{
"math_id": 14,
"text": "h(x)"
},
{
"math_id": 15,
"text": "c(m) = \\begin{cases} 2H(m-1)-\\frac{2(m-1)}{n} & \\text{for }m>2 \\\\ 1 & \\text{for }m=2 \\\\ 0 & \\text{otherwise} \\end{cases}"
},
{
"math_id": 16,
"text": "n"
},
{
"math_id": 17,
"text": "m"
},
{
"math_id": 18,
"text": "H"
},
{
"math_id": 19,
"text": "H(i)=ln(i)+\\gamma"
},
{
"math_id": 20,
"text": "\\gamma=0.5772156649 "
},
{
"math_id": 21,
"text": "c(m)"
},
{
"math_id": 22,
"text": "s(x,m)=2^\\frac{-E(h(x))}{c(m)}"
},
{
"math_id": 23,
"text": "E(h(x))"
},
{
"math_id": 24,
"text": "x"
},
{
"math_id": 25,
"text": "s"
},
{
"math_id": 26,
"text": "1"
},
{
"math_id": 27,
"text": "0.5"
}
] |
https://en.wikipedia.org/wiki?curid=61890679
|
61891
|
Genus (mathematics)
|
Number of "holes" of a surface
In mathematics, genus (pl.: genera) has a few different, but closely related, meanings. Intuitively, the genus is the number of "holes" of a surface. A sphere has genus 0, while a torus has genus 1.
Topology.
Orientable surfaces.
The genus of a connected, orientable surface is an integer representing the maximum number of cuttings along non-intersecting closed simple curves without rendering the resultant manifold disconnected. It is equal to the number of handles on it. Alternatively, it can be defined in terms of the Euler characteristic "χ", via the relationship "χ" = 2 − 2"g" for closed surfaces, where "g" is the genus. For surfaces with "b" boundary components, the equation reads "χ" = 2 − 2"g" − "b".
In layman's terms, the genus is the number of "holes" an object has ("holes" interpreted in the sense of doughnut holes; a hollow sphere would be considered as having zero holes in this sense). A torus has 1 such hole, while a sphere has 0. The green surface pictured above has 2 holes of the relevant sort.
For instance:
Explicit construction of surfaces of the genus "g" is given in the article on the fundamental polygon.
Non-orientable surfaces.
The non-orientable genus, demigenus, or Euler genus of a connected, non-orientable closed surface is a positive integer representing the number of cross-caps attached to a sphere. Alternatively, it can be defined for a closed surface in terms of the Euler characteristic χ, via the relationship χ = 2 − "k", where "k" is the non-orientable genus.
For instance:
Knot.
The genus of a knot "K" is defined as the minimal genus of all Seifert surfaces for "K". A Seifert surface of a knot is however a manifold with boundary, the boundary being the knot, i.e.
homeomorphic to the unit circle. The genus of such a surface is defined to be the genus of the two-manifold, which is obtained by gluing the unit disk along the boundary.
Handlebody.
The genus of a 3-dimensional handlebody is an integer representing the maximum number of cuttings along embedded disks without rendering the resultant manifold disconnected. It is equal to the number of handles on it.
For instance:
Graph theory.
The genus of a graph is the minimal integer "n" such that the graph can be drawn without crossing itself on a sphere with "n" handles (i.e. an oriented surface of the genus "n"). Thus, a planar graph has genus 0, because it can be drawn on a sphere without self-crossing.
The non-orientable genus of a graph is the minimal integer "n" such that the graph can be drawn without crossing itself on a sphere with "n" cross-caps (i.e. a non-orientable surface of (non-orientable) genus "n"). (This number is also called the demigenus.)
The Euler genus is the minimal integer "n" such that the graph can be drawn without crossing itself on a sphere with "n" cross-caps or on a sphere with "n/2" handles.
In topological graph theory there are several definitions of the genus of a group. Arthur T. White introduced the following concept. The genus of a group "G" is the minimum genus of a (connected, undirected) Cayley graph for "G".
The graph genus problem is NP-complete.
Algebraic geometry.
There are two related definitions of genus of any projective algebraic scheme "X": the arithmetic genus and the geometric genus. When "X" is an algebraic curve with field of definition the complex numbers, and if "X" has no singular points, then these definitions agree and coincide with the topological definition applied to the Riemann surface of "X" (its manifold of complex points). For example, the definition of elliptic curve from algebraic geometry is "connected non-singular projective curve of genus 1 with a given rational point on it".
By the Riemann–Roch theorem, an irreducible plane curve of degree formula_0 given by the vanishing locus of a section formula_1 has geometric genus
formula_2
where formula_3 is the number of singularities when properly counted.
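As a small worked example (the function name is ours), the formula can be evaluated directly; a smooth plane cubic has genus 1, matching the definition of an elliptic curve above:
def geometric_genus(d: int, s: int = 0) -> int:
    """Genus-degree formula for an irreducible plane curve of degree d
    with s singularities, properly counted."""
    return (d - 1) * (d - 2) // 2 - s

assert geometric_genus(3) == 1  # smooth cubic: an elliptic curve
assert geometric_genus(2) == 0  # smooth conic: rational, genus 0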
Differential geometry.
In differential geometry, a genus of an oriented manifold formula_4 may be defined as a complex number formula_5 subject to the conditions formula_6, formula_7, and formula_8 whenever formula_9 and formula_10 are cobordant.
In other words, formula_11 is a ring homomorphism formula_12, where formula_13 is Thom's oriented cobordism ring.
The genus formula_11 is multiplicative for all bundles on spinor manifolds with a connected compact structure if formula_14 is an elliptic integral, such as formula_15 for some formula_16 Such a genus is called an elliptic genus.
The Euler characteristic formula_17 is not a genus in this sense, since it is not invariant under cobordism.
Biology.
Genus can be also calculated for the graph spanned by the net of chemical interactions in nucleic acids or proteins. In particular, one may study the growth of the genus along the chain. Such a function (called the genus trace) shows the topological complexity and domain structure of biomolecules.
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
<templatestyles src="Dmbox/styles.css" />
Index of articles associated with the same name
This includes a list of related items that share the same name (or similar names). <br> If an [ internal link] incorrectly led you here, you may wish to change the link to point directly to the intended article.
|
[
{
"math_id": 0,
"text": "d"
},
{
"math_id": 1,
"text": "s \\in \\Gamma(\\mathbb{P}^2, \\mathcal{O}_{\\mathbb{P}^2}(d))"
},
{
"math_id": 2,
"text": "g=\\frac{(d-1)(d-2)}{2}-s,"
},
{
"math_id": 3,
"text": "s"
},
{
"math_id": 4,
"text": "M"
},
{
"math_id": 5,
"text": "\\Phi(M)"
},
{
"math_id": 6,
"text": "\\Phi(M_{1}\\amalg M_{2})=\\Phi(M_{1})+\\Phi(M_{2})"
},
{
"math_id": 7,
"text": "\\Phi(M_{1}\\times M_{2})=\\Phi(M_{1})\\cdot \\Phi(M_{2})"
},
{
"math_id": 8,
"text": "\\Phi(M_{1})=\\Phi(M_{2})"
},
{
"math_id": 9,
"text": "M_{1}"
},
{
"math_id": 10,
"text": "M_{2}"
},
{
"math_id": 11,
"text": "\\Phi"
},
{
"math_id": 12,
"text": "R\\to\\mathbb{C}"
},
{
"math_id": 13,
"text": "R"
},
{
"math_id": 14,
"text": "\\log_{\\Phi}"
},
{
"math_id": 15,
"text": "\\log_{\\Phi}(x)=\\int^{x}_{0}(1-2\\delta t^{2}+\\varepsilon t^{4})^{-1/2}dt"
},
{
"math_id": 16,
"text": "\\delta,\\varepsilon\\in\\mathbb{C}."
},
{
"math_id": 17,
"text": "\\chi(M)"
}
] |
https://en.wikipedia.org/wiki?curid=61891
|
61891313
|
Roger D. Nussbaum
|
American mathematician
Roger David Nussbaum (born 29 January 1944, in Philadelphia) is an American mathematician, specializing in nonlinear functional analysis and differential equations.
Nussbaum graduated in 1965 with a bachelor's degree from Harvard University. He received his Ph.D. in 1969 from the University of Chicago with thesis "The Fixed Point Index and Fixed Point Theorems for K-Set Contractions" supervised by Felix Browder. At Rutgers University Nussbaum became in 1969 an assistant professor, in 1973 an associate professor, and in 1977 a full professor. He retired there as professor emeritus. He was elected in 2012 a Fellow of the American Mathematical Society.
|
[
{
"math_id": 0,
"text": "\\dot{x}(t)=-\\alpha f(x(t-1))"
}
] |
https://en.wikipedia.org/wiki?curid=61891313
|
61897288
|
Stephen M. Gersten
|
American mathematician (born 1940)
Stephen M. Gersten (born 2 December 1940) is an American mathematician, specializing in finitely presented groups and their geometric properties.
Gersten graduated in 1961 with an AB from Princeton University and in 1965 with a PhD from Trinity College, Cambridge. His doctoral thesis was "Class Groups of Supplemented Algebras" written under the supervision of John R. Stallings. In the late 1960s and early 1970s he taught at Rice University. In 1972–1973 he was a visiting scholar at the Institute for Advanced Study. In 1973 he became a professor at the University of Illinois at Urbana–Champaign. In 1974 he was an Invited Speaker at the International Congress of Mathematicians in Vancouver. At the University of Utah he became a professor in 1975 and is now semi-retired there. His PhD students include Roger C. Alperin and Edward W. Formanek.
Gersten's conjecture has motivated considerable research.
Gersten's theorem.
If φ is an automorphism of a finitely generated free group "F", then the fixed subgroup Fix(φ) = {"x" ∈ "F" : φ("x") = "x"} is finitely generated.
|
[
{
"math_id": 0,
"text": "="
}
] |
https://en.wikipedia.org/wiki?curid=61897288
|
61899188
|
Song of Songs 4
|
Fourth chapter of the Song of Songs
Song of Songs 4 (abbreviated as Song 4) is the fourth chapter of the Song of Songs in the Hebrew Bible or the Old Testament of the Christian Bible. This book is one of the Five Megillot, a collection of short books, together with Ruth, Lamentations, Ecclesiastes and Esther, within the Ketuvim, the third and the last part of the Hebrew Bible. Jewish tradition views Solomon as the author of this book (although this is now largely disputed), and this attribution influences the acceptance of this book as a canonical text. This chapter contains the man's descriptive poem of the woman's body and the invitation to be together which is accepted by the woman.
Text.
The original text is written in Hebrew language. This chapter is divided into 16 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Codex Leningradensis (1008). Some fragments containing parts of this chapter were found among the Dead Sea Scrolls: 4Q106 (4QCanta; 30 BCE–30 CE; extant verses 1–7) and 4Q107 (4QCantb; 30 BCE–30 CE; extant verses 1–3, 8–11, 14–16).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Structure.
The Modern English Version (MEV), along with other translations, sees verses 1 to 15 as the words of the man, and verse 16 as the words of the woman. Athalya Brenner treats verses 1 to 7 as the man's "waṣf" or descriptive poem, and verse 8 to 5:1 as a dialogue between the male and female lovers.
Analysis.
Male: First descriptive poem and call to come along (4:1-8).
The beginning (verse 1a) and the end (verse 8a) of this part contain repeated lines that "frame an address of endearment": "my darling/[my] bride." Verses 1-7 contain the man's "waṣf" or descriptive poem of his female lover from head to breast, using imagery of flora and fauna, with a few of "fortifications and military weapons". Verses 2 and 5 begin and end this imagery with comparisons with animals, such as sheep and fawns, whereas verses 6-8 focus on the desire of the male speaker to visit "the mountain of myrrh" and to be joined there by his partner, expressing his desire in terms of a sensual pursuit with his lover's body as a mountain on which he finds perfumes.
Verse 7 concludes with a summary statement of the woman's perfection and invitation to his bride to 'come away from the impregnable heights and to join him'.
This "waṣf" and the later ones (; ; 7:1-9) demonstrate theologically the heart of the Song, which values the body as not evil but good, even worthy of praise, and respects the body with an appreciative focus (rather than lurid). Hess notes that this reflects "the fundamental value of God's creation as good and the human body as a key part of that creation, whether at the beginning () or redeemed in the resurrection (, )". While verse 7a is in parallel with verse 1a, forming an "inclusio" as well as a sense of closure to this part of the poem, verse 7b follows the positive assertion of the woman's beauty with a more negative assertion that "she has no blemish or defect" ("mûm"; referring to physical imperfection; cf. the use in the sacrificial ritual, , : ), which is similar to the references to Absalom () and to Daniel and his three friends in the court of Nebuchadnezzar ().
"Thy neck is like the tower of David builded for an armoury, whereon there hang a thousand bucklers, all shields of mighty men."
"Thou art all fair, my love; there is no spot in thee."
"Come with me from Lebanon, my spouse, with me from Lebanon: look from the top of Amana, from the top of Shenir and Hermon, from the lions' dens, from the mountains of the leopards."
Verse 8.
This verse depicts the danger and the woman's inaccessibility (cf. ). The man is asking his bride not to go with him to Lebanon but to "come" with him "from" Lebanon, which is a 'figurative allusion to the general unapproachableness' of the woman. Verse 8b contains two parallel expressions that frame the central expression "from Hermon":
Travel
from the peak of Amana,
from the peak of Senir,
from Hermon,
from the dens of lions
from the mountain lairs of leopards.
A similar structure in verse 7 forms together the twin centers of "my darling" and "from Mount Hermon", which beautifully summarize the concern of the man for access to his bride.
Male: A walk in the garden (4:9-15).
This section is a part of a dialogue concerning "seduction and consummation" (until ), where here the man seduces the woman, with extravagant imagery of food and flowers/herbs.
"Spikenard and saffron; calamus and cinnamon, with all trees of frankincense; myrrh and aloes, with all the chief spices:"
Female: Invitation to her garden (4:16).
The woman consents to the man's call (), leading to a closure in .
Verse 16b.
The Vulgate version of the fourth chapter ends on "... " (transl. "... that its spices may flow out.") The next phrase, " ..." (transl. Let my beloved come ...) opens the fifth chapter in the Vulgate version, while most other versions and translations open that chapter with the man's response ("I have come into my garden").
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=61899188
|
6190932
|
Perifocal coordinate system
|
Frame of reference for an orbit
The perifocal coordinate (PQW) system is a frame of reference for an orbit. The frame is centered at the focus of the orbit, i.e. the celestial body about which the orbit is centered. The unit vectors formula_0 and formula_1 lie in the plane of the orbit. formula_0 is directed towards the periapsis of the orbit and formula_1 has a true anomaly (formula_2) of 90 degrees past the periapsis. The third unit vector formula_3 points along the orbital angular momentum, orthogonal to the orbital plane, such that:
formula_4
And, since formula_3 is the unit vector in the direction of the angular momentum vector, it may also be expressed as:
formula_5
where h is the specific relative angular momentum.
The position and velocity vectors can be determined for any location of the orbit. The position vector, r, can be expressed as:
formula_6
where formula_2 is the true anomaly and the radius formula_7 may be calculated from the orbit equation.
The velocity vector, v, is found by taking the time derivative of the position vector:
formula_8
A derivation from the orbit equation can be made to show that:
formula_9
where formula_10 is the gravitational parameter of the focus, "h" is the specific relative angular momentum of the orbital body, "e" is the eccentricity of the orbit, and formula_2 is the true anomaly. formula_11 is the radial component of the velocity vector and formula_12 is the tangential component. By substituting the equations for formula_11 and formula_12 into the velocity vector equation and simplifying, the final form of the velocity vector equation is obtained as:
formula_13
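The expressions above for position and velocity translate directly into code. A minimal sketch (function and variable names are ours; consistent units are assumed):
import numpy as np

def perifocal_state(mu: float, h: float, e: float, theta: float):
    """Position and velocity vectors in PQW coordinates for a Keplerian orbit.

    mu: gravitational parameter; h: specific relative angular momentum;
    e: eccentricity; theta: true anomaly in radians.
    """
    r = (h**2 / mu) / (1.0 + e * np.cos(theta))  # radius from the orbit equation
    r_pqw = r * np.array([np.cos(theta), np.sin(theta), 0.0])
    v_pqw = (mu / h) * np.array([-np.sin(theta), e + np.cos(theta), 0.0])
    return r_pqw, v_pqw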
Conversion between coordinate systems.
The perifocal coordinate system can also be defined using the orbital parameters inclination ("i"), right ascension of the ascending node (formula_14) and the argument of periapsis (formula_15). The following equations convert from perifocal coordinates to equatorial coordinates and vice versa.
Perifocal to equatorial.
formula_16
In most cases, formula_17.
Equatorial to perifocal.
formula_18
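Because the transformation matrix is orthogonal, the equatorial-to-perifocal matrix is simply the transpose of the perifocal-to-equatorial one. A sketch of the latter as code (function name ours):
import numpy as np

def pqw_to_equatorial_matrix(Omega: float, i: float, omega: float) -> np.ndarray:
    """Rotation matrix taking perifocal (PQW) vectors to equatorial ones;
    angles in radians. Its transpose performs the inverse transformation."""
    cO, sO = np.cos(Omega), np.sin(Omega)
    ci, si = np.cos(i), np.sin(i)
    co, so = np.cos(omega), np.sin(omega)
    return np.array([
        [cO * co - sO * ci * so, -cO * so - sO * ci * co,  sO * si],
        [sO * co + cO * ci * so, -sO * so + cO * ci * co, -cO * si],
        [si * so,                 si * co,                  ci],
    ])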
Applications.
Perifocal reference frames are most commonly used with elliptical orbits because the formula_0 coordinate must be aligned with the eccentricity vector. Circular orbits, having no eccentricity, give no means by which to orient the coordinate system about the focus.
The perifocal coordinate system may also be used as an inertial frame of reference because the axes do not rotate relative to the fixed stars. This allows the inertia of any orbital bodies within this frame of reference to be calculated. This is useful when attempting to solve problems like the two-body problem.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{\\hat{p}}"
},
{
"math_id": 1,
"text": "\\mathbf{\\hat{q}}"
},
{
"math_id": 2,
"text": "\\theta"
},
{
"math_id": 3,
"text": "\\mathbf{\\hat{w}}"
},
{
"math_id": 4,
"text": "\\mathbf{\\hat{w}} = \\mathbf{\\hat{p}} \\times \\mathbf{\\hat{q}}"
},
{
"math_id": 5,
"text": "\\mathbf{\\hat{w}} = \\frac{\\mathbf{h}}{\\|\\mathbf{h}\\|}"
},
{
"math_id": 6,
"text": "\\mathbf{r} = r \\cos \\theta \\mathbf{\\hat{p}} + r \\sin \\theta \\mathbf{\\hat{q}}"
},
{
"math_id": 7,
"text": "r = \\|\\mathbf{r}\\|"
},
{
"math_id": 8,
"text": "\\mathbf{v} = \\mathbf{\\dot{r}} = (\\dot{r} \\cos \\theta - r \\dot{\\theta} \\sin \\theta)\\mathbf{\\hat{p}} + (\\dot{r} \\sin \\theta + r \\dot{\\theta} \\cos \\theta)\\mathbf{\\hat{q}}"
},
{
"math_id": 9,
"text": "\\dot{r} = \\frac{\\mu}{h}e \\sin \\theta"
},
{
"math_id": 10,
"text": "\\mu"
},
{
"math_id": 11,
"text": "\\dot{r}"
},
{
"math_id": 12,
"text": "r \\dot{\\theta}"
},
{
"math_id": 13,
"text": "\\mathbf{v} = \\frac{\\mu}{h} \\left[-\\sin \\theta \\mathbf{\\hat{p}} + (e + \\cos \\theta) \\mathbf{\\hat{q}}\\right]"
},
{
"math_id": 14,
"text": "\\Omega"
},
{
"math_id": 15,
"text": "\\omega"
},
{
"math_id": 16,
"text": "\n\\begin{bmatrix}\n x_\\text{equatorial} \\\\\n y_\\text{equatorial} \\\\\n z_\\text{equatorial} \\\\\n \\end{bmatrix}\n =\n \\begin{bmatrix}\n \\cos\\Omega\\cos\\omega - \\sin\\Omega\\cos i\\sin\\omega & -\\cos\\Omega\\sin\\omega - \\sin\\Omega\\cos i\\cos \\omega & \\sin\\Omega \\sin i \\\\\n \\sin\\Omega\\cos\\omega + \\cos\\Omega\\cos i\\sin\\omega & -\\sin\\Omega\\sin\\omega + \\cos\\Omega\\cos i\\cos\\omega & -\\cos \\Omega \\sin i \\\\\n \\sin i \\sin\\omega & \\sin i \\cos\\omega & \\cos i \\\\\n \\end{bmatrix}\n \\begin{bmatrix}\n x_\\text{perifocal} \\\\\n y_\\text{perifocal} \\\\\n z_\\text{perifocal} \\\\\n \\end{bmatrix}\n"
},
{
"math_id": 17,
"text": " z_\\text{perifocal} = 0 "
},
{
"math_id": 18,
"text": "\n\\begin{bmatrix}\n x_\\text{perifocal} \\\\\n y_\\text{perifocal} \\\\\n z_\\text{perifocal} \\\\\n \\end{bmatrix}\n =\n \\begin{bmatrix}\n \\cos\\Omega\\cos\\omega - \\sin\\Omega\\cos i\\sin\\omega & \\sin\\Omega\\cos\\omega + \\cos\\Omega\\cos i\\sin\\omega & \\sin i \\sin\\omega \\\\\n -\\cos\\Omega\\sin\\omega - \\sin\\Omega\\cos i\\cos\\omega & -\\sin\\Omega\\sin\\omega + \\cos\\Omega\\cos i\\cos\\omega & \\sin i \\cos\\omega \\\\\n \\sin\\Omega \\sin i & -\\cos \\Omega \\sin i & \\cos i \\\\\n \\end{bmatrix}\n \\begin{bmatrix}\n x_\\text{equatorial} \\\\\n y_\\text{equatorial} \\\\\n z_\\text{equatorial} \\\\\n \\end{bmatrix}\n"
}
] |
https://en.wikipedia.org/wiki?curid=6190932
|
61920776
|
Catalytic resonance theory
|
In chemistry, catalytic resonance theory was developed to describe the kinetics of reaction acceleration using dynamic catalyst surfaces. Catalytic reactions occur on surfaces that undergo variation in surface binding energy and/or entropy, exhibiting overall increase in reaction rate when the surface binding energy frequencies are comparable to the natural frequencies of the surface reaction, adsorption, and desorption.
History.
Catalytic resonance theory is constructed on the Sabatier principle of catalysis developed by the French chemist Paul Sabatier. In the limit of maximum catalytic performance, the surface of a catalyst is neither too strong nor too weak. Strong binding results in an overall catalytic reaction rate limitation due to product desorption, while weak binding catalysts are limited in the rate of surface chemistry. Optimal catalyst performance is depicted as a 'volcano' peak using a descriptor of the chemical reaction defining different catalytic materials. Experimental evidence of the Sabatier principle was first demonstrated by Balandin in 1960.
The concept of catalytic resonance was proposed on dynamic interpretation of the Sabatier volcano reaction plot. As described, extension of either side of the volcano plot above the peak defines the timescales of the two rate-limiting phenomena such as surface reaction(s) or desorption. For binding energy oscillation amplitudes that extend across the volcano peak, the amplitude endpoints intersect the transiently accessible faster timescales of independent reaction phenomena. At the conditions of sufficiently fast binding energy oscillation, the transient binding energy variation frequency matches the natural frequencies of the reaction and the rate of overall reaction achieves turnover frequencies greatly in excess of the volcano plot peak. The single resonance frequency (1/s) of the reaction and catalyst at the selected temperature and oscillation amplitude is identified as the purple tie line; all other applied frequencies are either slower or less efficient.
Theory.
The basis of catalytic resonance theory utilizes the transient behavior of adsorption, surface reactions, and desorption as surface binding energy and surface transition states oscillate with time. The binding energy of a single species, "i", is described via a temporal function such as a square or sinusoidal wave of frequency, "fi", and amplitude, dUi:
formula_0
Other surface chemical species, "j", are related to the oscillating species, "i", by the constant linear parameter, "gamma" γi-j:
formula_1
The two surface species also share the common enthalpy of adsorption, "delta" δi-j. Specification of the oscillation frequency and amplitude of species "i" and relating γi-j and δi-j for all other surface species "j" permits determination of all chemical surface species adsorption enthalpy with time.
The transition state energy of a surface reaction between any two species "i" and "j" is predicted by the linear scaling relationship of the Bell–Evans–Polanyi principle, which relates the surface reaction enthalpy, ΔHi-j, to the transition state energy, Ea, by parameters α and β with the following relationship:
formula_2
The oscillating surface and transition state energies of chemical species alter the kinetic rate constants associated with surface reaction, adsorption, and desorption. The surface reaction rate constant of species "i" converting to surface species "j" includes the dynamic activation energy:
formula_3
The resulting surface chemistry kinetics are then described via a surface reaction rate expression containing dynamic kinetic parameters responding to the oscillation in surface binding energy:
formula_4,
with "k" reactions with dynamic activation energy. The desorption rate constant also varies with oscillating surface binding energy by:
formula_5.
Implementation of dynamic surface binding energy of a reversible A-to-B reaction on a heterogeneous catalyst in a continuous flow stirred tank reactor operating at 1% conversion of A produces a sinusoidal binding energy in species B as shown. In the transition between surface binding energy amplitude endpoints, the instantaneous reaction rate (i.e., turnover frequency) oscillates over an order of magnitude as a limit cycle solution.
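A minimal sketch of these dynamic kinetic parameters (names and parameters are illustrative; the oscillating binding energy difference is treated directly as the reaction enthalpy entering the Bell–Evans–Polanyi relation):
import math

R = 8.314  # gas constant, J/(mol K)

def binding_energy(t, dH0, dU, f):
    """Oscillating adsorption enthalpy of a surface species at time t."""
    return dH0 + dU * math.sin(f * t)

def rate_constant(t, T, A, alpha, beta, dH0, dU, f):
    """Arrhenius rate constant whose activation energy follows the
    Bell-Evans-Polanyi scaling Ea(t) = beta + alpha * dH(t)."""
    Ea = beta + alpha * binding_energy(t, dH0, dU, f)
    return A * math.exp(-Ea / (R * T))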
Implications for Chemistry.
Oscillating binding energies of all surface chemical species introduces periodic instances of transient behavior to the catalytic surface. For slow oscillation frequencies, the transient period is only a small quantity of the oscillation time scale, and the surface reaction achieves a new steady state. However, as the oscillation frequency increases, the surface transient period approaches the timescale of the oscillation and the catalytic surface remains in a constant transient condition. A plot of the averaged turnover frequency of a reaction with respect to applied oscillation frequency identifies the 'resonant' frequency range for which the transient conditions of the catalyst surface match the applied frequencies. The 'resonance band' exists above the Sabatier volcano plot maximum of a static system with average reaction rates as high as five orders of magnitude faster than that achievable by conventional catalysis.
Surface binding energy oscillation also occurs to different extent with the various chemical surface species as defined by the γi-j parameter. For any non-unity γi-j system, the asymmetry in the surface energy profile results in conducting work to bias the reaction to a steady state away from equilibrium. Similar to the controlled directionality of molecular machines, the resulting ratchet (device) energy mechanism selectively moves molecules through a catalytic reaction against a free energy gradient.
Application of dynamic binding energy to a surface with multiple catalytic reactions exhibits complex behavior derived from the differences in the natural frequencies of each chemistry; these frequencies are identified by the inverse of the adsorption, desorption, and surface kinetic rate parameters. Considering a system of two parallel elementary reactions of A-to-B and A-to-C that only occur on a surface, the performance of the catalyst under dynamic conditions will result in varying capability for selecting either reaction product (B or C). For the depicted system, both reactions have the same overall thermodynamics and will produce B and C in equal amounts (50% selectivity) at chemical equilibrium. Under normal static catalyst operation, only product B can be produced at selectivities greater than 50% and product C is never favored. However, as shown, the application of surface binding dynamics in the form of a square wave at varying frequency and fixed oscillation amplitude but varying endpoints exhibits the full range of possible reactant selectivity. In the range of 1-10 Hertz, there exists a small island of parameters for which product C is highly selective; this condition is only accessible via dynamics.
Characteristics of Dynamic Surface Reactions.
Catalytic reactions on surfaces exhibit an energy ratchet that biases the reaction away from equilibrium. In the simplest form, the catalyst oscillates between two states of stronger or weaker binding, referred to in this example as 'green' and 'blue', respectively. For a single elementary reaction on a catalyst oscillating between two states (green and blue), there exist four rate coefficients in total, one forward (k1) and one reverse (k-1) in each catalyst state. The catalyst switches between catalyst states (j of blue or green) with a frequency, f, with the time in each catalyst state, τj, such that the duty cycle, Dj, is defined for catalyst state j as the fraction of the time the catalyst exists in state j. For the catalyst in the 'blue' state:
formula_6
The bias of a catalytic ratchet under dynamic conditions can be predicted via a ratchet directionality metric, λ, that can be calculated from the rate coefficients, ki, and the time constants of the oscillation, τi (or the duty cycle). For a catalyst oscillating between two catalyst states (blue and green), the ratchet directionality metric can be calculated:
formula_7
For directionality metrics greater than 1, the reaction exhibits forward bias to conversion higher than equilibrium. Directionality metrics less than 1 indicate negative reaction bias to conversion less than equilibrium. For more complicated reactions oscillating between multiple catalyst states, j, the ratchet directionality metric can be calculated based on the rate constants and time scales of all states.
formula_8
The kinetic bias of an independent catalytic ratchet exists for sufficiently high catalyst oscillation frequencies, f, above the ratchet cutoff frequency, fc, calculated as:
formula_9
The reaction rate constant, kII, corresponds to the second fastest rate constant in the catalytic elementary step. The duty cycle, DII, corresponds to the duty cycle of the catalyst state, j, with the second fastest reaction rate constant.
For a single independent catalytic elementary step of a reaction on a surface (e.g., A* ↔ B*) at high frequency (f » fc), the A* surface coverage, θA, can be predicted from the ratchet directionality metric,
formula_10
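These ratchet relations are straightforward to evaluate numerically. A sketch (function names are ours):
import math

def directionality(k1_blue, km1_blue, k1_green, km1_green, D_blue):
    """Ratchet directionality metric for a two-state (blue/green) catalyst."""
    forward = k1_blue * D_blue + k1_green * (1.0 - D_blue)
    reverse = km1_blue * D_blue + km1_green * (1.0 - D_blue)
    return forward / reverse

def cutoff_frequency(k_second_fastest, D_second_fastest):
    """Oscillation frequency above which the kinetic bias appears."""
    return k_second_fastest * D_second_fastest / (4.0 * (math.sqrt(2.0) - 1.0))

def coverage_A(lam):
    """High-frequency A* surface coverage predicted from the metric."""
    return 1.0 / (1.0 + lam)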
Experiments and Evidence.
Catalytic rate enhancement via dynamic perturbation of surface active sites has been demonstrated experimentally with dynamic electrocatalysis and dynamic photocatalysis. Those results may be explained in the framework of catalytic resonance theory but conclusive evidence is still lacking:
Implementation of catalyst dynamics has been proposed to occur by additional methods using oscillating light, electric potential, and physical perturbation.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Delta H_\\text{i,ads}(t) = \\Delta H_\\text{0} + \\Delta U\\sin(f_i t),"
},
{
"math_id": 1,
"text": "\\gamma_\\text{i-j} = \\frac{\\Delta H_\\text{i,ads}(t)}{\\Delta H_\\text{j,ads}(t)} "
},
{
"math_id": 2,
"text": "E_a,_{i-j} (t) = \\beta + \\alpha\\Delta H_\\text{i,j} (t) "
},
{
"math_id": 3,
"text": "k_{i-j}(t,T) = Ae^\\frac{-E_{\\rm a, i-j}(t)}{RT},"
},
{
"math_id": 4,
"text": "\\frac{d\\theta_{i}(t)}{dt} = k_{ads,i}P_{i}\\theta_{*}(t) - k_{des,i}(t)\\theta_{i}(t) + \\sum_{k=1}^N {\\nu_{i,k}r_k(t,T)}"
},
{
"math_id": 5,
"text": " k_{des,i}(t) = A_{des,i}e^{\\left \\lbrack \\frac{\\Delta H_{ads,i}(t)}{RT} \\right \\rbrack} "
},
{
"math_id": 6,
"text": "D_B = \\frac{\\tau_B}{\\tau_{total}} "
},
{
"math_id": 7,
"text": " \\lambda_\\ = \\frac{k_{1,blue} D_B + k_{1,green} (1-D_B)}{k_{-1,blue} D_B + k_{-1,green} (1-D_B)} "
},
{
"math_id": 8,
"text": " \\lambda_\\ = \\frac{\\sum_{j} \\tau_j k_{1,j}}{\\sum_{j} \\tau_j k_{-1,j}}"
},
{
"math_id": 9,
"text": "f_c = \\frac{k_{II}D_{II}}{4(\\sqrt{2}-1)}"
},
{
"math_id": 10,
"text": "\\theta_A = \\frac{1}{1 + \\lambda}"
}
] |
https://en.wikipedia.org/wiki?curid=61920776
|
61933545
|
Koenigsberger ratio
|
The Koenigsberger ratio is the proportion of remanent magnetization relative to induced magnetization in natural rocks. It was first described by J.G. Koenigsberger. It is a dimensionless parameter often used in geophysical exploration to describe the magnetic character of a geological body and to aid interpretation of magnetic anomaly patterns.
formula_0
The total magnetization of a rock is the sum of its natural remanent magnetization and the magnetization induced by the ambient geomagnetic field. Thus, a Koenigsberger ratio, "Q", greater than 1 indicates that remanence contributes the majority of the rock's total magnetization.
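For illustration, a minimal Python sketch with hypothetical rock-property values (SI units):

def koenigsberger_ratio(M_rem, chi, H):
    # Q = M_rem / M_ind = M_rem / (chi * H); chi is dimensionless,
    # M_rem and H are in A/m.
    return M_rem / (chi * H)

Q = koenigsberger_ratio(M_rem=2.0, chi=0.05, H=40.0)
print(Q)  # 1.0: remanent and induced magnetization contribute equally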
|
[
{
"math_id": 0,
"text": "Q = \\frac{M_{rem}}{M_{ind}}= \\frac{M_{rem}}{\\chi H} "
}
] |
https://en.wikipedia.org/wiki?curid=61933545
|
61936551
|
Maximally matchable edge
|
In graph theory, a maximally matchable edge in a graph is an edge that is included in at least one maximum-cardinality matching in the graph. An alternative term is allowed edge.
A fundamental problem in matching theory is: given a graph "G", find the set of all maximally matchable edges in "G". This is equivalent to finding the union of "all" maximum matchings in "G" (this is different from the simpler problem of finding a "single" maximum matching in "G"). Several algorithms for this problem are known.
Motivation.
Consider a matchmaking agency with a pool of men and women. Given the preferences of the candidates, the agency constructs a bipartite graph where there is an edge between a man and a woman if they are compatible. The ultimate goal of the agency is to create as many compatible couples as possible, i.e., find a maximum-cardinality matching in this graph. Towards this goal, the agency first chooses an edge in the graph, and suggests to the man and woman on both ends of the edge to meet. Now, the agency must take care to only choose a maximally matchable edge. This is because, if it chooses a non-maximally matchable edge, it may get stuck with an edge that cannot be completed to a maximum-cardinality matching.
Definition.
Let "G" = ("V","E") be a graph, where "V" are the vertices and "E" are the edges. A "matching" in "G" is a subset "M" of "E", such that each vertex in "V" is adjacent to at most a single edge in "M". A "maximum matching" is a matching of maximum cardinality.
An edge "e" in "E" is called maximally matchable (or allowed) if there exists a maximum matching "M" that contains "e".
Algorithms for general graphs.
Currently, the best known deterministic algorithm for general graphs runs in time formula_0.
There is a randomized algorithm for general graphs running in time formula_1.
Algorithms for bipartite graphs.
In bipartite graphs, if a single maximum-cardinality matching is known, it is possible to find all maximally matchable edges in linear time, formula_2.
If a maximum matching is not known, it can be found by existing algorithms. In this case, the resulting overall runtime is formula_3 for general bipartite graphs and formula_4 for dense bipartite graphs with formula_5.
Bipartite graphs with a perfect matching.
The algorithm for finding maximally matchable edges is simpler when the graph admits a perfect matching.
Let the bipartite graph be formula_6, where formula_7 and formula_8. Let the perfect matching be formula_9.
"Theorem:" an edge "e" is maximally matchable if-and-only-if "e" is included in some "M-alternating cycle" - a cycle that alternates between edges in "M" and edges not in "M". "Proof":
Now, consider a directed graph formula_10, where formula_11 and there is an edge from formula_12 to formula_13 in "H" iff formula_14 and there is an edge between formula_15 and formula_16 in "G" (note that by assumption such edges are not in "M"). Each "M"-alternating cycle in "G" corresponds to a "directed cycle" in "H". A directed edge belongs to a directed cycle iff both its endpoints belong to the same strongly connected component. There are algorithms for finding all strongly connected components in linear time. Therefore, the set of all maximally matchable edges can be found as follows: construct "H" and compute its strongly connected components; every matching edge formula_17 is maximally matchable, and a non-matching edge formula_19 is maximally matchable if and only if formula_12 and formula_13 belong to the same strongly connected component of "H".
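A minimal Python sketch of this procedure, using the networkx library for the strongly-connected-component step (the encoding is hypothetical: index i stands for the matched pair x_i, y_i, and an input pair (i, j) stands for the edge between x_i and y_j):

import networkx as nx

def maximally_matchable(n, edges):
    # Build H: an arc i -> j for every non-matching edge x_i -- y_j.
    H = nx.DiGraph()
    H.add_nodes_from(range(n))
    H.add_edges_from((i, j) for (i, j) in edges if i != j)
    comp = {v: c for c, scc in enumerate(nx.strongly_connected_components(H))
            for v in scc}
    allowed = {(i, i) for i in range(n)}          # edges of M itself
    allowed |= {(i, j) for (i, j) in edges        # on an M-alternating cycle
                if i != j and comp[i] == comp[j]}
    return allowed

print(maximally_matchable(3, {(0, 0), (1, 1), (2, 2), (0, 1), (1, 0), (1, 2)}))
# (0,1) and (1,0) close an alternating cycle and are allowed; (1,2) is not.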
Bipartite graphs without a perfect matching.
Let the bipartite graph be formula_6, where formula_7 and formula_20 and formula_21. Let the given maximum matching be formula_22, where formula_23. The edges in "E" can be categorized into two classes: "M"-upper edges, in which both endpoints are saturated by "M", and "M"-lower edges, in which at least one endpoint is not saturated by "M".
"Theorem:" All formula_24-lower edges are maximally matchable. "Proof": suppose formula_25 where formula_26 is saturated and formula_27 is not. Then, removing formula_25 from formula_24 and adding formula_25 yields a new maximum-cardinality matching.
Hence, it remains to find the maximally matchable edges among the "M"-upper ones.
Let "H" be the subgraph of "G" induced by the "M"-saturated nodes. Note that "M" is a perfect matching in "H". Hence, using the algorithm of the previous subsection, it is possible to find all edges that are maximally matchable in "H". Tassa explains how to find the remaining maximally matchable edges, as well as how to dynamically update the set of maximally matchable edges when the graph changes.
|
[
{
"math_id": 0,
"text": "O(VE)"
},
{
"math_id": 1,
"text": "\\tilde{O}(V^{2.376}) "
},
{
"math_id": 2,
"text": "O(V+E)"
},
{
"math_id": 3,
"text": "O(V^{1/2}E)"
},
{
"math_id": 4,
"text": "O((V/\\log V)^{1/2}E)"
},
{
"math_id": 5,
"text": "E=\\Theta(V^2)"
},
{
"math_id": 6,
"text": "G=(X+Y, E)"
},
{
"math_id": 7,
"text": "X = (x_1,\\ldots,x_n)"
},
{
"math_id": 8,
"text": "Y=(y_1,\\ldots,y_n)"
},
{
"math_id": 9,
"text": "M = \\{(x_1,y_1),\\ldots,(x_n,y_n)\\}"
},
{
"math_id": 10,
"text": "H=(Z, E)"
},
{
"math_id": 11,
"text": "Z = (z_1,\\ldots,z_n)"
},
{
"math_id": 12,
"text": "z_i"
},
{
"math_id": 13,
"text": "z_j"
},
{
"math_id": 14,
"text": "i\\neq j"
},
{
"math_id": 15,
"text": "x_i"
},
{
"math_id": 16,
"text": "y_j"
},
{
"math_id": 17,
"text": "(x_i,y_i)"
},
{
"math_id": 18,
"text": "(z_i,z_j)"
},
{
"math_id": 19,
"text": "(x_i,y_j)"
},
{
"math_id": 20,
"text": "Y=(y_1,\\ldots,y_{n'})"
},
{
"math_id": 21,
"text": "n\\leq n'"
},
{
"math_id": 22,
"text": "M = \\{(x_1,y_1),\\ldots,(x_t,y_t)\\}"
},
{
"math_id": 23,
"text": "t \\leq n \\leq n'"
},
{
"math_id": 24,
"text": "M"
},
{
"math_id": 25,
"text": " e = (x_i,y_j)"
},
{
"math_id": 26,
"text": "x_i "
},
{
"math_id": 27,
"text": "y_i "
}
] |
https://en.wikipedia.org/wiki?curid=61936551
|
619424
|
Double
|
<templatestyles src="Template:TOC_right/styles.css" />
Double, The Double or Dubble may refer to:
See also.
|
[
{
"math_id": 0,
"text": "x+yj"
},
{
"math_id": 1,
"text": "j^2=+1"
},
{
"math_id": 2,
"text": "(a,b)"
}
] |
https://en.wikipedia.org/wiki?curid=619424
|
6194406
|
Rectangular potential barrier
|
Area where a potential exhibits a local maximum
In quantum mechanics, the rectangular (or, at times, square) potential barrier is a standard one-dimensional problem that demonstrates the phenomena of wave-mechanical tunneling (also called "quantum tunneling") and wave-mechanical reflection. The problem consists of solving the one-dimensional time-independent Schrödinger equation for a particle encountering a rectangular potential energy barrier. It is usually assumed, as here, that a free particle impinges on the barrier from the left.
Although classically a particle behaving as a point mass would be reflected if its energy is less than formula_0, a particle actually behaving as a matter wave has a non-zero probability of penetrating the barrier and continuing its travel as a wave on the other side. In classical wave-physics, this effect is known as evanescent wave coupling. The likelihood that the particle will pass through the barrier is given by the transmission coefficient, whereas the likelihood that it is reflected is given by the reflection coefficient. Schrödinger's wave-equation allows these coefficients to be calculated.
Calculation.
The time-independent Schrödinger equation for the wave function formula_1 reads
formula_2
where formula_3 is the Hamiltonian, formula_4 is the (reduced)
Planck constant, formula_5 is the mass, formula_6 the energy of the particle and
formula_7
is the barrier potential with height formula_8 and width formula_9. formula_10
is the Heaviside step function, i.e.,
formula_11
The barrier is positioned between formula_12 and formula_13. The barrier can be shifted to any formula_14 position without changing the results. The first term in the Hamiltonian, formula_15, is the kinetic energy.
The barrier divides the space in three parts (formula_16). In any of these parts, the potential is constant, meaning that the particle is quasi-free, and the solution of the Schrödinger equation can be written as a superposition of left and right moving waves (see free particle). If formula_17
formula_18
where the wave numbers are related to the energy via
formula_19
The index formula_20 on the coefficients formula_21 and formula_22 denotes the direction of the velocity vector. Note that, if the energy of the particle is below the barrier height, formula_23 becomes imaginary and the wave function is exponentially decaying within the barrier. Nevertheless, we keep the notation formula_20 even though the waves are not propagating anymore in this case. Here we assumed formula_24. The case formula_25 is treated below.
The coefficients formula_26 have to be found from the boundary conditions of the wave function at formula_12 and formula_13. The wave function and its derivative have to be continuous everywhere, so
formula_27
Inserting the wave functions, the boundary conditions give the following restrictions on the coefficients
formula_28
formula_29
formula_30
formula_31
Transmission and reflection.
At this point, it is instructive to compare the situation to the classical case. In both cases, the particle behaves as a free particle outside of the barrier region. A classical particle with energy formula_6 larger than the barrier height formula_0 would "always" pass the barrier, and a classical particle with formula_32 incident on the barrier would "always" get reflected.
To study the quantum case, consider the following situation: a particle incident on the barrier from the left side (formula_33). It may be reflected (formula_34) or transmitted (formula_35).
To find the amplitudes for reflection and transmission for incidence from the left, we put in the above equations formula_36 (incoming particle), formula_37 (reflection), formula_38 (no incoming particle from the right), and formula_39 (transmission). We then eliminate the coefficients formula_40 from the equation and solve for formula_41 and formula_42.
The result is:
formula_43
formula_44
Due to the mirror symmetry of the model, the amplitudes for incidence from the right are the same as those from the left. Note that these expressions hold for any energy formula_45 with formula_46. If formula_25, then formula_47, and both expressions reduce to the indeterminate form 0/0; this case is treated below.
Analysis of the obtained expressions.
"E" < "V"0.
The surprising result is that for energies less than the barrier height, formula_32 there is a non-zero probability
formula_48
for the particle to be transmitted through the barrier, with formula_49. This effect, which differs from the classical case, is called quantum tunneling. The transmission is exponentially suppressed with the barrier width, which can be understood from the functional form of the wave function: Outside of the barrier it oscillates with wave vector formula_50, whereas within the barrier it is exponentially damped over a distance formula_51. If the barrier is much wider than this decay length, the left and right part are virtually independent and tunneling as a consequence is suppressed.
"E" > "V"0.
In this case
formula_52
where formula_53.
Equally surprising is that for energies larger than the barrier height, formula_17, the particle may be reflected from the barrier with a non-zero probability
formula_54
The transmission and reflection probabilities are in fact oscillating with formula_55. The classical result of perfect transmission without any reflection (formula_56, formula_57) is reproduced not only in the limit of high energy formula_58 but also when the energy and barrier width satisfy formula_59, where formula_60 (see peaks near formula_61 and 1.8 in the above figure). Note that the probabilities and amplitudes as written are for any energy (above/below) the barrier height.
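These expressions are easy to evaluate numerically. The following Python sketch works in natural units (m = ħ = 1) with illustrative barrier parameters V0 = a = 1, and computes T from the closed-form amplitude derived above; treating k1 as a complex number makes the same formula cover both tunneling and over-barrier scattering:

import numpy as np

def transmission(E, V0=1.0, a=1.0, m=1.0, hbar=1.0):
    # T = |t|^2 from the closed-form amplitude t above; valid for E != V0.
    # With k1 complex, one formula covers E < V0 (tunneling) and E > V0.
    E = complex(E)
    k0 = np.sqrt(2 * m * E) / hbar
    k1 = np.sqrt(2 * m * (E - V0)) / hbar  # imaginary for E < V0
    t = (4 * k0 * k1 * np.exp(-1j * a * (k0 - k1))
         / ((k0 + k1) ** 2 - np.exp(2j * a * k1) * (k0 - k1) ** 2))
    return abs(t) ** 2

for E in (0.5, 1.2, 2.0):
    print(E, transmission(E))  # tunneling below V0, oscillation above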
"E" = "V"0.
The transmission probability at formula_62 is
formula_63
This expression can be obtained by calculating the transmission coefficient from the constants stated above as for the other cases or by taking the limit of formula_64 as formula_6 approaches formula_0. For this purpose the ratio
formula_65
is defined, which is used in the function formula_66:
formula_67
In the last equation formula_68 is defined as follows:
formula_69
These definitions can be inserted in the expression for formula_64 which was obtained for the case formula_70.
formula_71
Now, when calculating the limit of formula_66 as x approaches 1 (using L'Hôpital's rule),
formula_72
the limit of formula_73 as formula_14 approaches 1 can also be obtained:
formula_74
Plugging the above expression for formula_68 into the evaluated limit successfully reproduces the above expression for T.
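As a numerical sanity check, the transmission sketch given earlier reproduces this limit when evaluated just off formula_62 (with m = a = ħ = V0 = 1, so that v0² = 2):

print(transmission(1 + 1e-9))  # ~0.6667
print(1 / (1 + 0.5))           # 1/(1 + v0^2/4) = 2/3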
Remarks and applications.
The calculation presented above may at first seem unrealistic and hardly useful. However, it has proved to be a suitable model for a variety of real-life systems. One such example is the interface between two conducting materials. In the bulk of the materials, the motion of the electrons is quasi-free and can be described by the kinetic term in the above Hamiltonian with an effective mass formula_5. Often the surfaces of such materials are covered with oxide layers or are not ideal for other reasons. This thin, non-conducting layer may then be modeled by a barrier potential as above. Electrons may then tunnel from one material to the other, giving rise to a current.
The operation of a scanning tunneling microscope (STM) relies on this tunneling effect. In that case, the barrier is due to the gap between the tip of the STM and the underlying object. Since the tunnel current depends exponentially on the barrier width, this device is extremely sensitive to height variations on the examined sample.
The above model is one-dimensional, while space is three-dimensional. One should solve the Schrödinger equation in three dimensions. On the other hand, many systems only change along one coordinate direction and are translationally invariant along the others; they are separable. The Schrödinger equation may then be reduced to the case considered here by an ansatz for the wave function of the type: formula_75.
For another, related model of a barrier, see Delta potential barrier (QM), which can be regarded as a special case of the finite potential barrier. All results from this article immediately apply to the delta potential barrier by taking the limits formula_76 while keeping formula_77 constant.
|
[
{
"math_id": 0,
"text": "V_0"
},
{
"math_id": 1,
"text": "\\psi(x)"
},
{
"math_id": 2,
"text": "\\hat H\\psi(x)=\\left[-\\frac{\\hbar^2}{2m} \\frac{d^2}{dx^2}+V(x)\\right]\\psi(x)=E\\psi(x)"
},
{
"math_id": 3,
"text": "\\hat H"
},
{
"math_id": 4,
"text": "\\hbar"
},
{
"math_id": 5,
"text": "m"
},
{
"math_id": 6,
"text": "E"
},
{
"math_id": 7,
"text": "V(x) = V_0[\\Theta(x)-\\Theta(x-a)]"
},
{
"math_id": 8,
"text": "V_0 > 0"
},
{
"math_id": 9,
"text": "a"
},
{
"math_id": 10,
"text": "\\Theta(x)=0,\\; x < 0;\\; \\Theta(x)=1,\\; x > 0"
},
{
"math_id": 11,
"text": "V(x)= \\begin{cases}\n0 &\\text{if } x < 0 \\\\\nV_0 &\\text{if } 0 < x < a \\\\\n0 &\\text{if } a < x\n\\end{cases}"
},
{
"math_id": 12,
"text": "x=0"
},
{
"math_id": 13,
"text": "x=a"
},
{
"math_id": 14,
"text": "x"
},
{
"math_id": 15,
"text": "-\\frac{\\hbar^2}{2m} \\frac{d^2}{dx^2}\\psi"
},
{
"math_id": 16,
"text": "x<0, 0<x<a, x>a"
},
{
"math_id": 17,
"text": "E > V_0"
},
{
"math_id": 18,
"text": "\\begin{cases}\n\\psi_L(x) = A_r e^{i k_0 x} + A_l e^{-i k_0x} & x<0 \\\\\n\\psi_C(x) = B_r e^{i k_1 x} + B_l e^{-i k_1x} & 0<x<a \\\\\n\\psi_R(x) = C_r e^{i k_0 x} + C_l e^{-i k_0x} & x>a\n\\end{cases}"
},
{
"math_id": 19,
"text": "\\begin{cases}\nk_0 = \\sqrt{2m E/\\hbar^2} & x<0 \\quad \\text{or}\\quad x>a \\\\\nk_1 = \\sqrt{2m (E-V_0)/\\hbar^2} & 0<x<a .\n\\end{cases}"
},
{
"math_id": 20,
"text": "r/l"
},
{
"math_id": 21,
"text": "A"
},
{
"math_id": 22,
"text": "B"
},
{
"math_id": 23,
"text": "k_1"
},
{
"math_id": 24,
"text": "E\\neq V_0"
},
{
"math_id": 25,
"text": "E = V_0"
},
{
"math_id": 26,
"text": "A, B, C"
},
{
"math_id": 27,
"text": "\\begin{align}\n\\psi_L(0) &= \\psi_C(0) \\\\\n\\left.\\frac{d\\psi_L}{dx}\\right|_{x = 0} &= \\left.\\frac{d\\psi_C}{dx}\\right|_{x = 0} \\\\\n\\psi_C(a) &= \\psi_R(a) \\\\\n\\left.\\frac{d\\psi_C}{dx}\\right|_{x = a} &= \\left.\\frac{d\\psi_R}{dx}\\right|_{x = a}.\n\\end{align}"
},
{
"math_id": 28,
"text": "A_r+A_l=B_r+B_l"
},
{
"math_id": 29,
"text": "ik_0(A_r-A_l)=ik_1(B_r-B_l)"
},
{
"math_id": 30,
"text": "B_re^{iak_1}+B_le^{-iak_1} = C_re^{iak_0}+C_le^{-iak_0}"
},
{
"math_id": 31,
"text": "ik_1 \\left(B_re^{iak_1}-B_le^{-iak_1}\\right) = ik_0 \\left(C_re^{iak_0}-C_le^{-iak_0}\\right)."
},
{
"math_id": 32,
"text": "E < V_0"
},
{
"math_id": 33,
"text": "A_r"
},
{
"math_id": 34,
"text": "A_l"
},
{
"math_id": 35,
"text": "C_r"
},
{
"math_id": 36,
"text": "A_r = 1"
},
{
"math_id": 37,
"text": "A_l = r"
},
{
"math_id": 38,
"text": "C_l = 0"
},
{
"math_id": 39,
"text": "C_r = t"
},
{
"math_id": 40,
"text": "B_l, B_r"
},
{
"math_id": 41,
"text": "r"
},
{
"math_id": 42,
"text": "t"
},
{
"math_id": 43,
"text": "t=\\frac{4 k_0k_1 e^{-i a(k_0-k_1)}}{(k_0+k_1)^2-e^{2ia k_1}(k_0-k_1)^2}"
},
{
"math_id": 44,
"text": "r=\\frac{(k_0^2-k_1^2)\\sin(ak_1)}{2 i k_0k_1 \\cos(ak_1)+(k_0^2+k_1^2)\\sin(ak_1)}."
},
{
"math_id": 45,
"text": "E > 0"
},
{
"math_id": 46,
"text": "E \\neq V_0"
},
{
"math_id": 47,
"text": "k_1 = 0"
},
{
"math_id": 48,
"text": "T=|t|^2= \\frac{1}{1+\\frac{V_0^2\\sinh^2(k_1 a)}{4E(V_0-E)}}"
},
{
"math_id": 49,
"text": "k_1=\\sqrt{2m (V_0-E)/\\hbar^{2}}"
},
{
"math_id": 50,
"text": "k_0"
},
{
"math_id": 51,
"text": "1/k_1"
},
{
"math_id": 52,
"text": "T=|t|^2= \\frac{1}{1+\\frac{V_0^2\\sin^2(k_1 a)}{4E(E-V_0)}},"
},
{
"math_id": 53,
"text": "k_1=\\sqrt{2m (E-V_0)/\\hbar^2}"
},
{
"math_id": 54,
"text": "R=|r|^2=1-T."
},
{
"math_id": 55,
"text": "k_1 a"
},
{
"math_id": 56,
"text": "T = 1"
},
{
"math_id": 57,
"text": "R = 0"
},
{
"math_id": 58,
"text": "E \\gg V_0"
},
{
"math_id": 59,
"text": "k_1 a = n \\pi"
},
{
"math_id": 60,
"text": "n = 1, 2, \\dots"
},
{
"math_id": 61,
"text": "E / V_0 = 1.2 "
},
{
"math_id": 62,
"text": "E=V_0"
},
{
"math_id": 63,
"text": "T=\\frac{1}{1+ma^2V_0/2\\hbar^2}."
},
{
"math_id": 64,
"text": "T"
},
{
"math_id": 65,
"text": "x = \\frac{E}{V_0}"
},
{
"math_id": 66,
"text": "f(x)"
},
{
"math_id": 67,
"text": "f(x) = \\frac{\\sinh(v_0\\sqrt{1-x})}{\\sqrt{1-x}}"
},
{
"math_id": 68,
"text": "v_0"
},
{
"math_id": 69,
"text": "v_0 = \\sqrt{\\frac{2mV_0a^2}{\\hbar^2}}"
},
{
"math_id": 70,
"text": "E<V_0"
},
{
"math_id": 71,
"text": "T(x) = \\frac{1}{1+\\frac{f(x)^2}{4x}}"
},
{
"math_id": 72,
"text": "\\lim_{x \\to 1} f(x)= \\lim_{x \\to 1} \\frac{\\sinh(v_0\\sqrt{1-x})}{(1-x)} = \\lim_{x \\to 1} \\frac{\\frac{d}{dx}\\sinh(v_0\\sqrt{1-x})}{\\frac{d}{dx}\\sqrt{1-x}} = v_0\\cosh(0) = v_0"
},
{
"math_id": 73,
"text": "T(x)"
},
{
"math_id": 74,
"text": "\\lim_{x \\to 1} T(x)=\\lim_{x \\to 1} \\frac{1}{1+\\frac{f(x)^2}{4x}} = \\frac{1}{1+\\frac{v_0^2}{4}} "
},
{
"math_id": 75,
"text": "\\Psi(x,y,z)=\\psi(x)\\phi(y,z)"
},
{
"math_id": 76,
"text": "V_0\\to\\infty,\\; a\\to 0"
},
{
"math_id": 77,
"text": "V_0 a = \\lambda"
}
] |
https://en.wikipedia.org/wiki?curid=6194406
|
61948211
|
Earthenware ceramics in the Philippines
|
Philippine ceramics are mostly earthenware, pottery that has not been fired to the point of vitrification. Other types of pottery like tradeware and stoneware have been fired at high enough temperatures to vitrify. Earthenware ceramics in the Philippines are mainly differentiated from tradeware and stoneware by the materials used during the process and the temperature at which they are fired. Additionally, earthenware and stoneware pottery can generally be referred to as ceramics that are made with local materials, while tradeware ceramics can generally be referred to as ceramics that are made with non-local materials.
Functions.
Earthenware ceramics had many different functions in the Philippines. With regard to where it was produced, production seems to have occurred in a domestic setting. Most of the functions can be seen as utilitarian or ritualistic. According to Alice Yao, one of the main functions of earthenware was in feasting, meaning that the earthenware was mostly cooking pots, bowls, and goblets. Other functions were in the reinforcing of alliances amongst groups, as was the case between the lowlanders and highlanders in the Philippines when trading, whether political or economic. Earthenware was also used for burial, mainly secondary burial, in the form of jars, jarlets, and anthropomorphic vessels. Lastly, earthenware was used in ritualistic and ceremonial events, such as those associated with burial.
Techniques.
Earthenware vessels in the Philippines were formed by two main techniques: paddle and anvil, and coiling and scraping. Although a level of highly skilled craftsmanship is present in the Philippines, no evidence of kilns has been found, primarily because the type of clay found in the archipelago can only withstand relatively low firing temperatures.
Primary surface treatment.
There are five primary modes of surface treatment in the Philippines. First is the simple scraping and perhaps polishing of the surface, leaving it relatively smooth. Second is the application of liquid, which includes slipping, glazing, or painting (red hematite). Third is incising, or cutting various geometric patterns into the surface. Fourth is impressing, similar to stamping designs directly into the surface. Finally, the fifth method is appliqué treatment, the application of additional clay, which raises the surface of the earthenware and produces a design.
Timeline.
Prehistoric and pre-colonial Philippines.
The Neolithic period in the Philippines is assigned to the span between c. 4500 and 3000 years before present (yBP). During this period, red-slipped pottery with circular impressions seems to have been dominant.
'Metal period' c. 2500 to 1500 bp or Metal Age (500 BCE - AD 960 or 500 BCE - 500 AD).
In the pre-colonial period, there was more centralized production of pottery in certain areas. An example of one such site is Tanjay on Negros Island, which existed from AD 500 to 1600, extending a little into the colonial period as well; the period in which Tanjay was producing earthenware ran from the mid-1st to the mid-2nd millennium.
During this period, earthenware ceramics were developed to cook, store food, and practice ancient rituals.
Colonial Philippines.
The colonial period (c. 1521-1898) in the Philippines began in 1521, when Ferdinand Magellan arrived in the Philippines, and ended in 1898, when the Spanish government sold the Philippines to the United States.
With the establishment of the Manila-Acapulco Trade, or the Manila Galleon Trade, in 1565, trade in the Philippines started to decline as local populations and economies were disrupted and displaced. A direct consequence of this destruction of villages and prestige-goods-based economies was a decrease in the production of earthenware, as well as the fragmentation of the lowlanders from the highlanders.
Post-colonial and contemporary Philippines.
Post-colonial period (1946-1986).
Domestic production decreased even further during this period. Utilitarian use is the only type that survived.
Contemporary period (1986-present).
Traditional, non-industrial techniques are still used today in the Philippines. These include open-air firing and non-wheel production, found in Talibon and Valencia.
Earthenware sites in the Philippines.
When referring to the Northern Philippines, this will include the islands of Luzon, Babuyan, and Batanes.
Batanes Islands.
The Batanes Islands are a group of islands located in the northernmost region of Luzon in the Municipality of Calayan, Province of Cagayan, Cagayan Valley just below Taiwan (19° 31′ 20″ N, 121° 57′ 13″ E). Archeological sites contained boat-shaped stone (limestone and coral stone) burials which are unique to the region. Several earthenware bowls and "high-fired" sherds were found associated with burial remains.
Babuyan Island.
Babuyan Island is an island located in the northern region of Luzon in the Municipality of Calayan, Province of Cagayan, Cagayan Valley, below the Batanes Islands (19° 31′ 20″ N, 121° 57′ 13″ E). Archeological sites were discovered on Fuga Island by Wilhelm Solheim in 1952. Burial jars made of earthenware were the most significant find and were one of the sources of supporting evidence that Solheim utilized in his article on burial jars in ISEA.
Luzon Island.
Luzon is the largest island in the Philippines located in the northernmost region of the Philippines, north of Mindoro, Marinduque, and Masbate (16° 0′ 0″ N, 121° 0′ 0″ E). Significant sites are found at the Pintu Rockshelter in the Nueva Vizcaya Province, Dimolit on Palanan Bay in Isabela Province and Lal-lo in the Cagayan Province of Northern Luzon.
Pintu Rockshelter and Dimolit were excavated in 1969 by Warren Peterson, who believed hunter-gatherers seasonally occupied the sites at approximately 5,120 BP, 3,900 BP, and 3,280 BP. Evidence of coiling and of paddle-and-anvil earthenware production was found at both sites. Mainly shallow dishes with low pedestal feet were excavated at Pintu, and many contained impressed circles on their surfaces. Small and large post-holes were found at Dimolit, which aided its classification as an open habitation site. Earthenware pottery interiors and exteriors exhibited plain or red-slipped surface treatment. Pottery found was constructed into dishes which had perforations (circle or square) on the bottom. Globular and angled vessels were also found.
Lal-lo is considered by some archaeologists to be the most extensive shell midden site in Southeast Asia. Excavations have been conducted by Thiel in 1980, Aoyagi in 1983, Aoyagi and Tanaka in 1985, and Ogawa and Aguilera in 1987. A total of 21,664 earthenware sherds has been recovered, and multiple types of vessels (red-slipped jars, bowls with ring footing, and shallow bowls with paddle impressions) have been collected.
Kalinga.
Kalinga is a province situated in the central region of the Cordillera Administrative Region (CAR) in northern Luzon that is bordered by Mountain Province to the south, the Province of Abra to the west, the Province of Isabela to the east, the Province of Cagayan to the northeast, and the Province of Apayao to the north (17° 45′ 00″ N, 121° 15′ 00″ E). There is evidence indicating part-time production of earthenware pottery.
Batangas.
Batangas is a province situated in the Calabarzon region in southern Luzon that is bordered by the Province of Cavite and Laguna to the north and the Province of Quezon to the east (13° 50′ 00″ N, 121° 00′ 00″ E).
The most significant archaeological site is the Laurel site, also referred to as the Taal-Lemery complex. Multiple field teams from the National Museum have conducted excavations at this site. Primarily Metal Age pottery is found there, and it is considered by some to be the most exquisite earthenware produced in the region after the Neolithic. Vessels found are typically highly polished or red-slipped. Chinese ceramics heavily influenced earthenware production at this site. Unauthorized excavations and pothunters were detrimental to several sites; fortunately, Maharlika A. Cuevas (Research Assistant) found one site fully intact. Cuevas claims the site may also have previously been a burial site.
The most significant artifact found is the Calatagan Pot, found in Talisay, Calatagan, Batangas. Archeologist Eusebio Z. Dizon describes the Calatagan Pot as atypical, engraved with syllabic writing around the vessel's shoulder. Additionally, it is the only earthenware found in the Philippines with inscriptions and is indicative of ancient writing. The inscriptions have yet to be fully deciphered, and some experts believe the pot should be considered a National Cultural Treasure.
Quezon.
Quezon is a province situated in the Calabarzon region in southern Luzon that is bordered by the Province of Aurora to the north, the Provinces of Bulacan, Rizal, Laguna and Batangas to the west, and the Provinces of Camarines Norte and Camarines Sur to the east (14° 10′ 00″ N, 121° 50′ 00″ E).
Paradijon.
Paradijon is located within the Municipality of Gubat, Province of Sorsogon, Bicol (12.912° N, 124.1176° E). There is evidence indicating full-time production of earthenware pottery.
Marinduque Island.
Marinduque Island is an island located east of Mindoro and west of Luzon in the Municipality of Boac, Mimaropa (13°24′N 121°58′E).
Central Philippines.
The region, Central Philippines, refers to the regions below Luzon and above Mindanao, which includes the Visayas.
Palawan Island.
Palawan Island is the fifth largest island in the Philippines located in the westernmost region of the Philippines (9° 30′ 00" N 118° 30′ 00"E).
A few of the sites containing earthenware on Palawan Island are El Nido, specifically Ille Cave, and Lipuun Point, more specifically the Tabon Caves, where the Manunggul Jar was found by Dr. Robert Fox; the finds date to the late Neolithic.
Visayas Islands.
The Visayas Islands consist of six major islands and are located in the central region of the Philippines.
The specific islands on which sites have been found are Masbate, Bohol, and Negros. In Masbate, the main sites are located in the Batungan Mountain. On the island of Negros, in the region of Tanjay, low-fired earthenware pottery has been uncovered. Lastly, on Bohol Island, there is a relevant burial site in District Ubijan, Tagbilaran City, where earthenware was found and has been analyzed in order to assess the island as a likely center of production. This research was done through petrographic analysis; the results indicated that if Bohol were a center of production, other earthenwares in the region would have to have signatures similar to the clay and temper existent on the island of Bohol.
Panay Island.
Panay Island is the sixth largest island in the Philippines located to the west of Negros (11° 09′ 00" N, 122° 29′ 00" E). Islas de Gigantes was a site where presentation dishes were prominent.
Western Philippines underwater archaeological sites.
Comiran Island (Lumbucan Reef).
Comiran Island is an island located in the South China Sea south of Bugsuk Island and east of Balabac Island in the Province of Palawan, Mimaropa (7° 54' 57.6" N, 117° 13' 13.79" E).
Partial survey of the site revealed evidence of earthenware sherds and part of a stove.
Pandanan Island (Pandanan shipwreck).
Pandanan Island is an island located in the South China Sea in the Municipality of Balabac, Province of Palawan, Mimaropa (8° 17′ 25.22″ N, 117° 13′ 32.48″ E). The island is relatively small, with the shape of a quadrangle and dimensions measured to be approximately 9.6 km long and 4 km wide.
The Pandanan Shipwreck was discovered by accident in 1993 by Mr. Gordirilla, a pearl-farm diver working at Ecofarm Resource Inc., when he was looking for a missing pearl basket on the seabed. Initial investigation of the site by the National Museum of the Philippines in June 1993 pinpointed the wreckage of a seacraft under a coral reef about 250 meters northeast off the coast of Pandanan Island at a depth of 40 meters below sea level (8° 9′ 48″ N, 117° 3′ 6″ E). Geographically, this area resides in a strait that serves as a passageway connecting the South China Sea to the Sulu Sea. Underwater archaeological excavation proceeded from February to May 1995 and yielded relatively well-preserved remains of a wooden ship (25 to 30 meters long and 6 to 8 meters wide) with a cargo of Vietnamese, Thai, and Chinese ceramics.
4,722 artifacts were recovered and were divided into various categories:
About 72.4 percent of these findings were traced to be of Vietnamese origin or under the categorization of Vietnamese ceramics. Of the 4,722 artifacts, 301 were earthenware vessels and fragments.
Earthenware can be divided into five categories:
Comparisons among earthenwares excavated in the Pandanan Shipwreck Site and other sites in the Philippines reveal several similarities like the pouring vessels in the Calatagan Sites in Batangas Province, Luzon Island that utilize a pot with incised design and a pot with polishing marks at the bottom, the stoves in Sta Ana Site in Manila, and the Butuan Sites in Northeastern Mindanao, which may suggest that the ship may have been trading in those areas prior to its ill-fated trip. Probable causes of the wreckage can be attributed to abrupt changes in weather between "amihan", the prevailing wind from the northeast (December to April), and "habagat", the prevailing southwest wind (May to November), strong typhoons, or other hazardous navigational factors like coral reefs. Scholars estimate the date of the sinking to have been somewhere from the mid-15th century to the late 15th century based on the periods that a majority of these ceramics belonged to. The earliest artifact, a Chinese coin, has been identified to be from the time of Yung-le (1403-1424 AD). These findings would highlight the presence of an active network of trade and interaction between mainland and island Southeast Asia in Pre-Spanish Philippines.
Rasa Island.
Rasa Island is an island located in the Sulu Sea south of Arena Island near the Municipality of Narra, Province of Palawan, Mimaropa (9° 13′ 25″ N, 118° 26′ 35″E). Partial survey of the site revealed that it was probably a jar burial site with evidence of earthenware jars and pottery fragments present.
Ramos Island (Secam Island).
Ramos Island is an island located just above Balabac Island in the Municipality of Balabac, Mimaropa (8° 6′ 0″ N, 117° 2′ 0″ E). Partial survey of the site revealed evidence of an earthenware stove and jar fragments.
North Mangsee Island (Simanahan Reef).
North Mangsee Island is located between the South China Sea and Sulu Sea in the Municipality of Balabac, Mimaropa and resides by the international treaty limits that separates the Philippines from Malaysia (7° 30′ 36.8″ N, 117° 18′ 37.7″ E). Partial survey of the site revealed evidence of ceramic sherds and iron ingots.
Southern Philippines.
The region, Southern Philippines, refers to the region that includes the island of Mindanao and its associated islands, Surigao del Norte, Basilan, Sulu, and Tawi-Tawi.
Mindanao (Maitum).
Mindanao is the second-largest island in the Philippines located in the southernmost region of the Philippines, south of Negros, Siquijor, Bohol, Leyte, and Samar (8° 00′ 00″ N, 125° 00′ 00″ E).
The site of Maitum is where Maitum anthropomorphic pottery was discovered by Mr. Michael Spadafora, a consulting geologist, while treasure-hunting for Japanese World War II gold bars on June 3, 1991. Initial survey of the site by the National Museum of the Philippines later in 1991 pinpointed a Miocene limestone cave about 1,000 meters inland and 6 meters above sea level in Pinol, Municipality of Maitum, Province of Sarangani, South Cotabato, Soccsksargen (6.1303° N, 124.3816° E). The site showed indications of looting, most likely by treasure hunters, as the entrance of the cave had been damaged, various deposits disturbed, and artifacts carelessly left on the floor after being dug up. During this time, the region had also been relatively unstable due to the Moro Conflict (1969-2019) between the Government of the Philippines and the Moro National Liberation Front (MNLF). Despite this, archaeological excavation of the site continued through three phases: the first phase (November 6, 1991 to December 1991), the second phase (April 8, 1992 to May 3, 1992), and the third phase (January 17, 1995 to February 15, 1995).
The archaeological team headed by Dr. Eusebio Dizon recovered 200 artifacts: 29 complete with minor damage, 20 restorable, and the remainder fragments. The Maitum anthropomorphic burial jars of Mindanao are uniquely characterized by designs featuring human figures (arms, hands, breasts), facial features (heads, eyes, ears, noses, mouths), and facial expressions. The people depicted on this pottery are believed to be among the initial inhabitants of Mindanao and the "specific dead persons whose remains they guard".
The types of pottery decoration utilized are:
Associated materials of these burial jars would include iron blades, shell implements and ornaments, glass beads and bracelets, human teeth and phalanges, and earthenware jarlets and beads. Comparisons among earthenwares excavated at the Maitum site and other sites in Southeast Asia reveal several similarities, such as the earthenware shards from Tambler, the Manunggul Jar from Palawan, and Ban Chiang pottery from Thailand. However, as of now, the Philippines is the only area in Southeast Asia where this type of burial jar can be found. The general consensus among scholars dates these ceramics to the Metal Age in the Philippines, which ranges from 500 BC to 500 AD. These findings uncovered a part of Philippine prehistory that had been lost and forgotten for generations and highlight the importance of protecting, preserving, and conserving archaeological sites in the Philippines.
Tawi-Tawi (Balobok rockshelter).
Tawi-Tawi is a group of islands located in the southernmost region of the Philippines known as Bangsamoro and resides by the international treaty limits that separates the Philippines from Malaysia (5° 12′ 00″ N 120° 05′ 00″ E).
The Balobok Rockshelter is a multi-component habitation site located on Sanga Sanga Island in the Province of Tawi-Tawi (5° 4′ 21″ N, 119° 47′ 7″ E). It was first reported to the National Museum of the Philippines in 1966, and partial excavation of the site was made in 1969. Re-excavation of the site in 1992 yielded earthenware shards of thick-bodied wares and small vessels.
Radiocarbon dating of shell samples split the site into three distinct cultural layers:
Of the three distinct cultural layers, earthenware sherds were recovered from Layers II and III. Layer II suggests a hunting and gathering culture based on the lithic tools and debitage, refuse heaps of shells, bones of land and sea animals, and a sparse amount of pottery. Layer III suggests a different culture based on the advanced tools like adzes, gouges, and axes, debitage, an opaque glass bead, and the abundance of pottery.
The types of pottery decoration utilized are:
Nearby regions.
Prehistoric.
Based on the analysis of prehistoric pottery in Southeast Asia conducted by Wilhelm G. Solheim II in 2003, red-slipped and small stamp-impressed potteries would travel eastwards throughout island Southeast Asia into the western Pacific by the Nusantao by around 4000 BP.
Early pottery from island Southeast Asia can also be found in:
Contemporary.
Based on the analysis of contemporary pottery in mainland Southeast Asia conducted by John N. Miksic in 2003, there have been six different types of production techniques identified.
Of the 6 types, Type C shows similarities to earthenware potteries. Type C earthenware are generally constructed from a pre-form of coils added to a flat base up to the upper rim where it is shaped. The insides and outsides are then either smoothed or scraped by a spatula, paddle, or anvil. Regions that utilize Type C earthenware are situated along the peninsular Malaysia and Vietnam coast.
|
[
{
"math_id": 0,
"text": "\\pm"
}
] |
https://en.wikipedia.org/wiki?curid=61948211
|
6194872
|
Absorption (pharmacology)
|
Movement of a drug into the bloodstream or lymph
Absorption is the journey of a drug travelling from the site of administration to the site of action.
The drug travels by some route of administration (oral, topical-dermal, etc.) in a chosen dosage form (e.g., tablets, capsules, or in solution). Absorption by other routes, such as intravenous therapy, intramuscular injection, and enteral nutrition, is more straightforward: there is less variability in absorption, and bioavailability is often near 100%. Intravascular administration does not involve absorption, and there is no loss of drug. The fastest route of absorption is inhalation.
Absorption is a primary focus in drug development and medicinal chemistry, since a drug must be absorbed before any medicinal effects can take place. Moreover, the drug's pharmacokinetic profile can be easily and significantly changed by adjusting factors that affect absorption.
Dissolution.
Oral ingestion is the most common route of administration of pharmaceuticals. After passing through the esophagus to the stomach, the contents of the capsule or tablet are absorbed in the GI tract. The absorbed pharmaceutical then passes through the liver and kidneys.
The rate of dissolution is a key target for controlling the duration of a drug's effect, and as such, several dosage forms that contain the same active ingredient may be available, differing only in the rate of dissolution. If a drug is supplied in a form that is not readily dissolved, it may be released gradually and act for longer. Having a longer duration of action may improve compliance since the medication will not have to be taken as often. Additionally, slow-release dosage forms may maintain concentrations within an acceptable therapeutic range over a longer period, whereas quick-release dosage forms may have sharper peaks and troughs in serum concentration.
The rate of dissolution is described by the Noyes–Whitney equation as shown below:
formula_0
Where:
formula_1 is the rate of dissolution,
D is the diffusion coefficient,
A is the surface area of the solid,
formula_2 is the concentration of the solid in the diffusion layer surrounding the solid (its saturation solubility),
C is the concentration of the solid in the bulk dissolution medium, and
L is the diffusion layer thickness.
As can be inferred from the Noyes–Whitney equation, the rate of dissolution may be modified primarily by altering the surface area of the solid by altering the particle size (e.g., with micronization). For many drugs, reducing the particle size reduces the dose needed to achieve the same therapeutic effect. The particle size reduction increases the specific surface area and the dissolution rate and does not affect solubility.
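As an illustrative sketch (all parameter values hypothetical), the Noyes–Whitney equation can be evaluated directly to show the particle-size effect described above:

def dissolution_rate(D, A, Cs, C, L):
    # Noyes-Whitney: dW/dt = D * A * (Cs - C) / L
    return D * A * (Cs - C) / L

# For a fixed mass of roughly spherical particles, surface area scales as
# 1/radius, so halving the particle size doubles A while Cs is unchanged:
base = dissolution_rate(D=1e-9, A=1.0, Cs=1.0, C=0.0, L=1e-5)
micronized = dissolution_rate(D=1e-9, A=2.0, Cs=1.0, C=0.0, L=1e-5)
print(micronized / base)  # 2.0: the rate doubles; solubility is unaffected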
The rate of dissolution may also be altered by choosing a suitable polymorph of a compound. Different polymorphs have different solubility and dissolution rate characteristics. Specifically, crystalline forms dissolve more slowly than amorphous forms, since they require more energy to leave the lattice during dissolution. The most stable crystalline polymorph has the lowest dissolution rate. Dissolution also differs between anhydrous and hydrous forms of a drug. Anhydrous forms often dissolve faster but sometimes are less soluble.
Esterification is also used to control solubility. For example, stearate and estolate esters of drugs have decreased solubility in gastric fluid. Later, esterases in the gastrointestinal tract (GIT) wall and blood hydrolyze these esters to release the parent drug.
Coatings on a tablet or pellet may act as barriers to reducing the dissolution rate. Coatings may also be used to control where dissolution takes place. For example, enteric coatings only dissolve in the basic environment of the intestines.
Drugs held in solution do not need to be dissolved before being absorbed.
Lipid-soluble drugs are absorbed more rapidly than water-soluble drugs.
Ionization.
The gastrointestinal tract is lined with epithelial cells. Drugs must pass through or permeate these cells to be absorbed into the bloodstream. Cell membranes may act as barriers to some drugs. They are essentially lipid bilayers which form semipermeable membranes. Pure lipid bilayers are generally permeable only to small, uncharged solutes. Hence, whether or not a molecule is ionized will affect its absorption, since ionic molecules are charged. Solubility favors charged species, and permeability favors neutral species. Some molecules have special exchange proteins and channels to facilitate movement from the lumen into the circulation.
Ions cannot passively diffuse through the gastrointestinal tract because the epithelial cell membrane is made up of a phospholipid bilayer, comprising two layers of phospholipids in which the charged hydrophilic heads face outwards and the uncharged hydrophobic fatty acid chains are in the middle of the layer. The fatty acid chains repel ionized, charged molecules. This means that the ionized molecules cannot pass through the intestinal membrane and be absorbed.
The Henderson-Hasselbalch equation offers a way to determine the proportion of a substance that is ionized at a given pH. In the stomach, drugs that are weak acids (such as aspirin) will be present mainly in their non-ionic form, and weak bases will be in their ionic form. Since non-ionic species diffuse more readily through cell membranes, weak acids will have a higher absorption in the highly acidic stomach.
However, the reverse is true in the basic environment of the intestines—weak bases (such as caffeine) will diffuse more readily since they will be non-ionic.
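A minimal Python sketch of this estimate for a weak acid (the aspirin pKa below is an approximate literature value, used only for illustration):

def nonionized_fraction_weak_acid(pH, pKa):
    # Henderson-Hasselbalch: fraction of a weak acid in its neutral HA form.
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

pKa_aspirin = 3.5
for pH in (1.5, 6.5):  # roughly gastric vs. intestinal pH
    print(pH, nonionized_fraction_weak_acid(pH, pKa_aspirin))
# ~0.99 in the stomach vs. ~0.001 in the intestine: the weak acid is mostly
# non-ionic, and hence more readily absorbed, in the acidic stomach.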
This aspect of absorption has been targeted by medicinal chemists. For example, they may choose an analog that is more likely to be in a non-ionic form. Also, the chemists may develop prodrugs of a compound—these chemical variants may be more readily absorbed and then metabolized by the body into the active compound. However, changing the structure of a molecule is less predictable than altering dissolution properties, since changes in chemical structure may affect the pharmacodynamic properties of a drug.
The solubility and permeability of a drug candidate are important physicochemical properties the scientist wants to know as early as possible.
Other factors.
Absorption also varies depending on bioactivity, resonance, the inductive effect, isosterism, and bio-isosterism, amongst other factors.
Types.
Types of absorption in pharmacokinetics include the following:
|
[
{
"math_id": 0,
"text": "\\frac{dW}{dt} = \\frac{DA(C_{s}-C)}{L}"
},
{
"math_id": 1,
"text": "\\frac{dW}{dt}"
},
{
"math_id": 2,
"text": "C_{s}"
}
] |
https://en.wikipedia.org/wiki?curid=6194872
|
61949595
|
Direct function
|
Alternate way to define a function in APL
A direct function (dfn, pronounced "dee fun") is an alternative way to define a function and operator (a higher-order function) in the programming language APL. A direct operator can also be called a dop (pronounced "dee op"). They were invented by John Scholes in 1996. They are a unique combination of array programming, higher-order function, and functional programming, and are a major distinguishing advance of early 21st century APL over prior versions.
A dfn is a sequence of possibly guarded expressions (or just a guard) between { and }, separated by ⋄ or new-lines, wherein ⍺ denotes the left argument and ⍵ the right, and ∇ denotes recursion (function self-reference). For example, the function PT tests whether each row of ⍵ is a Pythagorean triplet (by testing whether the sum of squares equals twice the square of the maximum):
PT← {(+/⍵*2)=2×(⌈/⍵)*2}
PT 3 4 5
1
x
4 5 3
3 11 6
5 13 12
17 16 8
11 12 4
17 15 8
PT x
1 0 1 0 0 1
The factorial function as a dfn:
fact←{0=⍵:1 ⋄ ⍵×∇ ⍵-1}
fact 5
120
fact¨ ⍳10 ⍝ fact applied to each element of 0 to 9
1 1 2 6 24 120 720 5040 40320 362880
Description.
The rules for dfns are summarized by the following "reference card":
A dfn is a sequence of possibly guarded expressions (or just a guard) between { and }, separated by ⋄ or new-lines.
expression
guard: expression
guard:
The expressions and/or guards are evaluated in sequence. A guard must evaluate to a 0 or 1; its associated expression is evaluated if the value is 1. A dfn terminates after the first unguarded expression which does not end in assignment, or after the first guarded expression whose guard evaluates to 1, or if there are no more expressions. The result of a dfn is that of the last evaluated expression. If that last evaluated expression ends in assignment, the result is "shy"—not automatically displayed in the session.
Names assigned in a dfn are local by default, with lexical scope.
⍺ denotes the left function argument and ⍵ the right; ⍺⍺ denotes the left operand and ⍵⍵ the right. If ⍵⍵ occurs in the definition, then the dfn is a dyadic operator; if only ⍺⍺ occurs but not ⍵⍵, then it is a monadic operator; if neither ⍺⍺ nor ⍵⍵ occurs, then the dfn is a function.
The special syntax ⍺←expression is used to give a default value to the left argument if a dfn is called monadically, that is, called with no left argument. The expression is not evaluated otherwise.
∇ denotes recursion or self-reference by the function, and ∇∇ denotes self-reference by the operator. Such denotation permits anonymous recursion.
Error trapping is provided through error-guards, errnums::expression. When an error is generated, the system searches dynamically through the calling functions for an error-guard that matches the error. If one is found, the execution environment is unwound to its state immediately prior to the error-guard's execution and the associated expression of the error-guard is evaluated as the result of the dfn.
Additional descriptions, explanations, and tutorials on dfns are available in the cited articles.
Examples.
The examples here illustrate different aspects of dfns. Additional examples are found in the cited articles.
Default left argument.
The function {⍺+0j1×⍵} adds ⍺ to 0j1 (i or formula_0) times ⍵.
3 {⍺+0j1×⍵} 4
3J4
∘.{⍺+0j1×⍵}⍨ ¯2+⍳5
¯2J¯2 ¯2J¯1 ¯2 ¯2J1 ¯2J2
¯1J¯2 ¯1J¯1 ¯1 ¯1J1 ¯1J2
0J¯2 0J¯1 0 0J1 0J2
1J¯2 1J¯1 1 1J1 1J2
2J¯2 2J¯1 2 2J1 2J2
The significance of this function can be seen as follows:
<templatestyles src="Template:Blockquote/styles.css" />
Moreover, analogous to that monadic -⍵ ⇔ 0-⍵ ("negate") and monadic ÷⍵ ⇔ 1÷⍵ ("reciprocal"), a monadic definition of the function is useful, effected by specifying a default value of 0 for ⍺: if j←{⍺←0 ⋄ ⍺+0j1×⍵}, then j ⍵ ⇔ 0 j ⍵ ⇔ 0+0j1×⍵.
3 j 4 ¯5.6 7.89
3J4 3J¯5.6 3J7.89
j 4 ¯5.6 7.89
0J4 0J¯5.6 0J7.89
sin← 1∘○
cos← 2∘○
Euler← {(*j ⍵) = (cos ⍵) j (sin ⍵)}
Euler (¯0.5+?10⍴0) j (¯0.5+?10⍴0)
1 1 1 1 1 1 1 1 1 1
The last expression illustrates Euler's formula on ten random numbers with real and imaginary parts in the interval formula_1.
Single recursion.
The ternary construction of the Cantor set starts with the interval [0,1] and at each stage removes the middle third from each remaining subinterval:
formula_2
formula_3
formula_4
The Cantor set of order ⍵ defined as a dfn:
Cantor← {0=⍵:,1 ⋄ ,1 0 1 ∘.∧ ∇ ⍵-1}
Cantor 0
1
Cantor 1
1 0 1
Cantor 2
1 0 1 0 0 0 1 0 1
Cantor 3
1 0 1 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 1 0 1
Cantor 0 to Cantor 6 depicted as black bars:
The function sieve computes a bit vector of length ⍵ so that bit i (for 0≤i and i<⍵) is 1 if and only if i is a prime.
sieve←{
4≥⍵:⍵⍴0 0 1 1
r←⌊0.5*⍨n←⍵
p←2 3 5 7 11 13 17 19 23 29 31 37 41 43
p←(1+(n≤×⍀p)⍳1)↑p
b← 0@1 ⊃ {(m⍴⍵)>m⍴⍺↑1 ⊣ m←n⌊⍺×≢⍵}⌿ ⊖1,p
{r<q←b⍳1:b⊣b[⍵]←1 ⋄ b[q,q×⍸b↑⍨⌈n÷q]←0 ⋄ ∇ ⍵,q}p
}
10 10 ⍴ sieve 100
0 0 1 1 0 1 0 1 0 0
0 1 0 1 0 0 0 1 0 1
0 0 0 1 0 0 0 0 0 1
0 1 0 0 0 0 0 1 0 0
0 1 0 1 0 0 0 1 0 0
0 0 0 1 0 0 0 0 0 1
0 1 0 0 0 0 0 1 0 0
0 1 0 1 0 0 0 0 0 1
0 0 0 1 0 0 0 0 0 1
0 0 0 0 0 0 0 1 0 0
b←sieve 1e9
≢b
1000000000
(10*⍳10) (+⌿↑)⍤0 1 ⊢b
0 4 25 168 1229 9592 78498 664579 5761455 50847534
The last sequence, the number of primes less than powers of 10, is an initial segment of OEIS A006880. The last number, 50847534, is the number of primes less than formula_5. It is called Bertelsen's number, memorably described by MathWorld as "an erroneous name erroneously given the erroneous value of formula_6".
sieve uses two different methods to mark composites with 0s, both effected using local anonymous dfns: The first uses the sieve of Eratosthenes on an initial mask of 1 and a prefix of the primes 2 3...43, using the insert operator ⌿ (right fold). (The length of the prefix obtains by comparison with the primorial function ×⍀p.) The second finds the smallest new prime q remaining in b (q←b⍳1), and sets to 0 bit q itself and bits at q times the numbers at remaining 1 bits in an initial segment of b (b[q,q×⍸b↑⍨⌈n÷q]←0). This second dfn uses tail recursion.
Tail recursion.
Typically, the factorial function is defined recursively (as above), but it can be coded to exploit tail recursion by using an accumulator left argument:
fact←{⍺←1 ⋄ 0=⍵:⍺ ⋄ (⍺×⍵) ∇ ⍵-1}
Similarly, the determinant of a square complex matrix using Gaussian elimination can be computed with tail recursion:
det←{ ⍝ determinant of a square complex matrix
⍺←1 ⍝ product of co-factor coefficients so far
0=≢⍵:⍺ ⍝ result for 0-by-0
(i j)←(⍴⍵)⊤⊃⍒|,⍵ ⍝ row and column index of the maximal element
k←⍳≢⍵
(⍺×⍵[i;j]ׯ1*i+j) ∇ ⍵[k~i;k~j] - ⍵[k~i;j] ∘.× ⍵[i;k~j]÷⍵[i;j]
}
Multiple recursion.
A partition of a non-negative integer formula_7 is a vector formula_8 of positive integers whose sum is formula_7, where the order in formula_8 is not significant. For example, 4 and 3 1 and 2 2 and 2 1 1 and 1 1 1 1 are the partitions of 4, and 3 1 and 1 3 are considered to be the same partition.
The partition function formula_9 counts the number of partitions. The function is of interest in number theory, studied by Euler, Hardy, Ramanujan, Erdős, and others. The recurrence relation
formula_10
derives from Euler's pentagonal number theorem. Written as a dfn:
pn 10
42
pn¨ ⍳13 ⍝ OEIS A000041
1 1 2 3 5 7 11 15 22 30 42 56 77
The basis step states that for 1≥⍵, the result of the function is 0≤⍵, 1 if ⍵ is 0 or 1 and 0 otherwise. The recursive step is highly multiply recursive. For example, pn 200 would result in the function being applied to each element of rec 200, which are:
rec 200
199 195 188 178 165 149 130 108 83 55 24 ¯10
198 193 185 174 160 143 123 100 74 45 13 ¯22
and requires longer than the age of the universe to compute (formula_11 function calls to itself). The compute time can be reduced by memoization, here implemented as the direct operator (higher-order function) M:
M←{
f←⍺⍺
i←2+'⋄'⍳⍨t←2↓,⎕cr 'f'
⍎'{T←(1+⍵)⍴¯1 ⋄ ',(i↑t),'¯1≢T[⍵]:⊃T[⍵] ⋄ ⊃T[⍵]←⊂',(i↓t),'⍵}⍵'
}
pn M 200
3.973E12
0 ⍕ pn M 200 ⍝ format to 0 decimal places
3972999029388
This value of pn M 200 agrees with that computed by Hardy and Ramanujan in 1918.
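For comparison, the same computation in Python — a sketch assuming formula_10 is the standard pentagonal-number recurrence, with memoization via functools.lru_cache playing the role of the M operator:

from functools import lru_cache

@lru_cache(maxsize=None)
def pn(n):
    # Euler's pentagonal-number recurrence for the partition function.
    if n < 0:
        return 0
    if n <= 1:
        return 1  # basis: one partition each of 0 and 1
    total, k = 0, 1
    while k * (3 * k - 1) // 2 <= n:
        sign = 1 if k % 2 else -1
        total += sign * (pn(n - k * (3 * k - 1) // 2)
                         + pn(n - k * (3 * k + 1) // 2))
        k += 1
    return total

print(pn(10))   # 42
print(pn(200))  # 3972999029388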
The memo operator M defines a variant of its operand function to use a cache and then evaluates it. With the operand pn the variant is:
Direct operator (dop).
Quicksort on an array works by choosing a "pivot" at random among its major cells, then catenating the sorted major cells which strictly precede the pivot, the major cells equal to the pivot, and the sorted major cells which strictly follow the pivot, as determined by a comparison function ⍺⍺. Defined as a direct operator (dop) Q:
⍝ precedes ⍝ follows ⍝ equals
2 (×-) 8 8 (×-) 2 8 (×-) 8
¯1 1 0
x← 2 19 3 8 3 6 9 4 19 7 0 10 15 14
(×-) Q x
0 2 3 3 4 6 7 8 9 10 14 15 19 19
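Since the definition of Q itself is not reproduced here, the following Python sketch illustrates the same three-way, random-pivot scheme, with a comparison function in the role of the operand (×-):

import random

def quicksort(xs, cmp):
    # Catenate: sorted items preceding the pivot, items equal to it,
    # and sorted items following it, per the comparison function cmp.
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)
    s = [cmp(x, pivot) for x in xs]
    return (quicksort([x for x, c in zip(xs, s) if c < 0], cmp)
            + [x for x, c in zip(xs, s) if c == 0]
            + quicksort([x for x, c in zip(xs, s) if c > 0], cmp))

x = [2, 19, 3, 8, 3, 6, 9, 4, 19, 7, 0, 10, 15, 14]
print(quicksort(x, lambda a, b: (a > b) - (a < b)))  # signum of a-b, like (×-)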
Q3 is a variant that catenates the three parts enclosed by the function ⊂ instead of the parts "per se". The three parts generated at each recursive step are apparent in the structure of the final result. Applying the function derived from Q3 to the same argument multiple times gives different results because the pivots are chosen at random. In-order traversal of the results does yield the same sorted array.
(×-) Q3 x
│┌──────────────┬─┬─────────────────────────┐│19 19││
││┌──────┬───┬─┐│6│┌──────┬─┬──────────────┐││ ││
│││┌┬─┬─┐│3 3│4││ ││┌┬─┬─┐│9│┌┬──┬────────┐│││ ││
│││││0│2││ │ ││ ││││7│8││ │││10│┌──┬──┬┐││││ ││
│││└┴─┴─┘│ │ ││ ││└┴─┴─┘│ │││ ││14│15││││││ ││
││ │ ││ │ │└┴──┴────────┘│││ ││
│└──────────────┴─┴─────────────────────────┘│ ││
(×-) Q3 x
│┌┬─┬──────────────────────┐│7│┌────────────────────┬─────┬┐│
│││0│┌┬─┬─────────────────┐││ ││┌──────┬──┬────────┐│19 19│││
│││ │││2│┌────────────┬─┬┐│││ │││┌┬─┬─┐│10│┌──┬──┬┐││ │││
│││ │││ ││┌───────┬─┬┐│6│││││ │││││8│9││ ││14│15││││ │││
│││ │││ │││┌┬───┬┐│4│││ │││││ │││└┴─┴─┘│ │└──┴──┴┘││ │││
│││ │││ │││││3 3│││ │││ │││││ ││└──────┴──┴────────┘│ │││
│││ │││ ││└───────┴─┴┘│ │││││ │ │
│││ │└┴─┴─────────────────┘││ │ │
└───────────────────────────┴─┴─────────────────────────────┘
The above formulation is not new; see for example Figure 3.7 of the classic "The Design and Analysis of Computer Algorithms". However, unlike the pidgin ALGOL program in Figure 3.7, Q is executable, and the partial order used in the sorting is an operand, the (×-) in the examples above.
Dfns with operators and trains.
Dfns, especially anonymous dfns, work well with operators and trains. The following snippet solves a "Programming Pearls" puzzle: given a dictionary of English words, here represented as the character matrix , find all sets of anagrams.
a {⍵[⍋⍵]}⍤1 ⊢a ({⍵[⍋⍵]}⍤1 {⊂⍵}⌸ ⊢) a
pats apst ┌────┬────┬────┐
spat apst │pats│teas│star│
teas aest │spat│sate│ │
sate aest │taps│etas│ │
taps apst │past│seat│ │
etas aest │ │eats│ │
past apst │ │tase│ │
seat aest │ │east│ │
eats aest │ │seta│ │
tase aest └────┴────┴────┘
star arst
east aest
seta aest
The algorithm works by sorting the rows individually, and these sorted rows are used as keys ("signature" in the Programming Pearls description) to the "key" operator to group the rows of the matrix. The expression on the right is a "train", a syntactic form employed by APL to achieve tacit programming. Here, it is an isolated sequence of three functions (a fork) such that (f g h) ⍵ ⇔ (f ⍵) g (h ⍵), whence the expression on the right is equivalent to ({⍵[⍋⍵]}⍤1 a) {⊂⍵}⌸ a.
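The same signature idea, sketched in Python for readers unfamiliar with APL (names chosen here for clarity):
from collections import defaultdict

def anagram_sets(words):
    # Group words by their sorted-letter "signature"; each group is a set of anagrams.
    groups = defaultdict(list)
    for w in words:
        groups[''.join(sorted(w))].append(w)
    return list(groups.values())

anagram_sets(['pats', 'spat', 'teas', 'sate', 'star'])
# [['pats', 'spat'], ['teas', 'sate'], ['star']]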
Lexical scope.
When an inner (nested) dfn refers to a name, it is sought by looking outward through enclosing dfns rather than down the call stack. This regime is said to employ lexical scope instead of APL's usual dynamic scope. The distinction becomes apparent only if a call is made to a function defined at an outer level. For the more usual inward calls, the two regimes are indistinguishable.
For example, in the following function which, the variable ty is defined both in which itself and in the inner function f1. When f1 calls outward to f2 and f2 refers to ty, it finds the outer one (with value 'lexical') rather than the one defined in f1 (with value 'dynamic'):
which←{
ty←'lexical'
f2←{ty,⍵}
f1←{ty←'dynamic' ⋄ f2 ⍵}
f1 ⍵
}
which ' scope'
lexical scope
Error-guard.
The following function illustrates use of error guards:
plus←{
tx←'catch all' ⋄ 0::tx
tx←'domain' ⋄ 11::tx
tx←'length' ⋄ 5::tx
⍺+⍵
}
2 plus 3 ⍝ no errors
5
2 3 4 5 plus 'three' ⍝ argument lengths don't match
length
2 3 4 5 plus 'four' ⍝ can't add characters
domain
2 3 plus 3 4⍴5 ⍝ can't add vector to matrix
catch all
In APL, error number 5 is "length error"; error number 11 is "domain error"; and error number 0 is a "catch all" for error numbers 1 to 999.
The example shows the unwinding of the local environment before an error-guard's expression is evaluated. The local name tx is set to describe the purview of its following error-guard. When an error occurs, the environment is unwound to expose tx's statically correct value.
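Error guards behave loosely like prioritized exception handlers in other languages. A rough Python analogy (not a translation of the dfn; exception types stand in for APL error numbers, and the explicit length check is an assumption of this sketch):
def plus(a, b):
    try:
        if len(a) != len(b):
            raise ValueError('length mismatch')
        return [x + y for x, y in zip(a, b)]
    except ValueError:       # mismatched lengths, like APL error 5 "length"
        return 'length'
    except TypeError:        # e.g. number plus character, like APL error 11 "domain"
        return 'domain'
    except Exception:        # anything else, like the error-guard 0 "catch all"
        return 'catch all'

plus([2, 3, 4, 5], 'three')   # 'length'
plus([2, 3, 4, 5], 'four')    # 'domain'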
Dfns "versus" tradfns.
Since direct functions are dfns, APL functions defined in the traditional manner are referred to as tradfns, pronounced "trad funs". Here, dfns and tradfns are compared by consideration of the same function written three ways: On the left is a dfn (as defined above); in the middle is a tradfn using control structures; on the right is a tradfn using gotos and line labels.
History.
Kenneth E. Iverson, the inventor of APL, was dissatisfied with the way user functions (tradfns) were defined. In 1974, he devised "formal function definition" or "direct definition" for use in exposition. A direct definition has two or four parts, separated by colons:
name : expression
name : expression0 : proposition : expression1
Within a direct definition, ⍺ denotes the left argument and ⍵ the right argument. In the first instance, the result of expression is the result of the function; in the second instance, the result of the function is that of expression0 if proposition evaluates to 0, or of expression1 if it evaluates to 1. Assignments within a direct definition are dynamically local. Examples of using direct definition are found in the 1979 Turing Award Lecture and in books and application papers.
Direct definition was too limited for use in larger systems. The ideas were further developed by multiple authors in multiple works, but the results were unwieldy. Of these, the "alternative APL function definition" of Bunda in 1987 came closest to current facilities, but was flawed by conflicts with existing symbols and by its error handling, which would have caused practical difficulties, and it was never implemented. The main distillates from the different proposals were that (a) the function being defined is anonymous, with subsequent naming (if required) being effected by assignment; and (b) the function is denoted by a symbol, thereby enabling anonymous recursion.
In 1996, John Scholes of Dyalog Limited invented direct functions (dfns). The ideas originated in 1989 when he read a special issue of "The Computer Journal" on functional programming. He then proceeded to study functional programming and became strongly motivated ("sick with desire", like Yeats) to bring these ideas to APL. He initially operated in stealth because he was concerned the changes might be judged too radical and an unnecessary complication of the language; other observers say that he operated in stealth because Dyalog colleagues were not so enamored and thought he was wasting his time and causing trouble for people. Dfns were first presented in the Dyalog Vendor Forum at the APL '96 Conference and released in Dyalog APL in early 1997. Acceptance and recognition were slow in coming. As late as 2008, in "Dyalog at 25", a publication celebrating the 25th anniversary of Dyalog Limited, dfns were barely mentioned (mentioned twice as "dynamic functions" and without elaboration). As of 2019, dfns are implemented in Dyalog APL, NARS2000, and ngn/apl. They also play a key role in efforts to exploit the computing abilities of a graphics processing unit (GPU).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sqrt{-1}"
},
{
"math_id": 1,
"text": "\\left(-0.5,0.5\\right)"
},
{
"math_id": 2,
"text": "\\biggl[0,1\\biggr] \\to"
},
{
"math_id": 3,
"text": "\\left[0,\\frac{1}{3}\\right] \\cup \\left[\\frac{2}{3},1\\right] \\to"
},
{
"math_id": 4,
"text": "\\left[0,\\frac{1}{9}\\right] \\cup \\left[\\frac{2}{9},\\frac{1}{3}\\right] \\cup \\left[\\frac{2}{3},\\frac{7}{9}\\right] \\cup \\left[\\frac{8}{9},1\\right] \\to \\cdots "
},
{
"math_id": 5,
"text": "10^9"
},
{
"math_id": 6,
"text": "\\pi(10^9) = 50847478"
},
{
"math_id": 7,
"text": "n"
},
{
"math_id": 8,
"text": "v"
},
{
"math_id": 9,
"text": "P(n)"
},
{
"math_id": 10,
"text": "P(n)=\\sum_{k=1}^n (-1)^{k+1}[P(n-\\frac{1}{2}k(3k-1))+P(n-\\frac{1}{2}k(3k+1))]"
},
{
"math_id": 11,
"text": "7.57\\times10^{47}"
}
] |
https://en.wikipedia.org/wiki?curid=61949595
|
61951537
|
BED (file format)
|
File format used for genomes
The BED (Browser Extensible Data) format is a text file format used to store genomic regions as coordinates and associated annotations. The data are presented in the form of columns separated by spaces or tabs. This format was developed during the Human Genome Project and then adopted by other sequencing projects. As a result of this increasingly wide use, this format had already become a "de facto" standard in bioinformatics before a formal specification was written.
One of the advantages of this format is the manipulation of coordinates instead of nucleotide sequences, which optimizes the power and computation time when comparing all or part of genomes. In addition, its simplicity makes it easy to manipulate and parse coordinates or annotations using text-processing and scripting languages such as Python, Ruby, or Perl, or more specialized tools such as BEDTools.
History.
The end of the 20th century saw the emergence of the first projects to sequence complete genomes. Among these projects, the Human Genome Project was the most ambitious at the time, aiming to sequence for the first time a genome of several gigabases. This required the sequencing centres to carry out major methodological development in order to automate the processing of sequences and their analyses. Thus, many formats were created, such as FASTQ, GFF, and BED. However, no official specifications were published at the time, which affected some formats such as FASTQ when sequencing projects multiplied at the beginning of the 21st century.
Its wide use within genome browsers has made it possible to define this format in a relatively stable way as this description is used by many tools.
Format.
Initially the BED format did not have any official specification. Instead, the description provided by the UCSC Genome Browser has been widely used as a reference.
A formal BED specification was published in 2021 under the auspices of the Global Alliance for Genomics and Health.
Description.
A BED file consists of a minimum of three columns, to which nine optional columns can be added for a total of twelve. The first three columns contain the name of the chromosome or scaffold and the start and end coordinates of the sequence considered. The next nine columns contain annotations related to this sequence. The columns must be separated by spaces or tabs, the latter being recommended for reasons of compatibility between programs. Each row of a file must have the same number of columns, and the order of the columns must be respected: if a higher-numbered column is used, all intermediate columns must also be filled in.
Header.
A BED file can optionally contain a header, although there is no official description of the header format. It may contain one or more lines, and may play a functional role or be simply descriptive. A header line typically begins with a keyword such as "browser" or "track" (as in the example below) or with a comment symbol.
Coordinate system.
Unlike the coordinate system used by other standards such as GFF, the system used by the BED format is zero-based for the coordinate start and one-based for the coordinate end. Thus, the nucleotide with the coordinate 1 in a genome will have a value of 0 in column 2 and a value of 1 in column 3.
A thousand-base BED interval with the following start and end:
chr7 0 1000
would convert to the following 1-based "human" genome coordinates, as used by a genome browser such as UCSC:
chr7 1 1000
This choice is justified by the method of calculating the lengths of the genomic regions considered, this calculation being based on the simple subtraction of the end coordinates (column 3) by those of the start (column 2): formula_0. When the coordinate system is based on the use of 1 to designate the first position, the calculation becomes slightly more complex: formula_1. This slight difference can have a relatively large impact in terms of computation time when data sets with several thousand to hundreds of thousands of lines are used.
Alternatively, we can view both coordinates as zero-based, where the end position is non-inclusive. In other words, the zero-based end position denotes the index of the first position after the feature. For the example above, the zero-based end position of 1000 marks the first position after the feature including positions 0 through 999.
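As a small illustration (not part of the specification; names chosen here), the length computation and the conversion to 1-based browser coordinates can be sketched in Python:
def bed_fields(line):
    # First three BED columns: chromosome, 0-based start, exclusive end.
    chrom, start, end = line.split()[:3]
    return chrom, int(start), int(end)

chrom, start, end = bed_fields('chr7 0 1000')
length = end - start          # 1000, by simple subtraction
browser = (start + 1, end)    # (1, 1000): 1-based, inclusive coordinates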
Examples.
Here is a minimal example:
chr7 127471196 127472363
chr7 127472363 127473530
chr7 127473530 127474697
Here is a typical example with nine columns from the UCSC Genome Browser. The first three lines are settings for the UCSC Genome Browser and are unrelated to the data specified in BED format:
browser position chr7:127471196-127495720
browser hide all
track name="ItemRGBDemo" description="Item RGB demonstration" visibility=2 itemRgb="On"
chr7 127471196 127472363 Pos1 0 + 127471196 127472363 255,0,0
chr7 127472363 127473530 Pos2 0 + 127472363 127473530 255,0,0
chr7 127473530 127474697 Pos3 0 + 127473530 127474697 255,0,0
chr7 127474697 127475864 Pos4 0 + 127474697 127475864 255,0,0
chr7 127475864 127477031 Neg1 0 - 127475864 127477031 0,0,255
chr7 127477031 127478198 Neg2 0 - 127477031 127478198 0,0,255
chr7 127478198 127479365 Neg3 0 - 127478198 127479365 0,0,255
chr7 127479365 127480532 Pos5 0 + 127479365 127480532 255,0,0
chr7 127480532 127481699 Neg4 0 - 127480532 127481699 0,0,255
File extension.
There is currently no standard file extension for BED files, but the ".bed" extension is the most frequently used. The number of columns sometimes is noted in the file extension, for example: ".bed3", ".bed4", ".bed6", ".bed12".
Usage.
The use of BED files has spread rapidly with the emergence of new sequencing techniques and the manipulation of larger and larger sequence files. The comparison of genomic sequences or even entire genomes by comparing the sequences themselves can quickly require significant computational resources and become time-consuming. Handling BED files makes this work more efficient by using coordinates to extract sequences of interest from sequencing sets or to directly compare and manipulate two sets of coordinates.
To perform these tasks, various programs can be used to manipulate BED files, including but not limited to the following:
.genome Files.
BEDTools also uses ".genome" files to determine chromosome boundaries and to ensure that padding operations do not extend past the ends of chromosomes. Genome files are formatted as shown below: a two-column tab-separated file with a one-line header.
chrom size
chr1 248956422
chr2 242193529
chr3 198295559
chr4 190214555
chr5 181538259
chr6 170805979
chr7 159345973
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x_{end} - x_{start}"
},
{
"math_id": 1,
"text": "x_{end} - x_{start} + 1"
}
] |
https://en.wikipedia.org/wiki?curid=61951537
|
61954047
|
Anderson function
|
Set of basis functions
Anderson functions describe the projection of a magnetic dipole field in a given direction at points along an arbitrary line. They are useful in the study of magnetic anomaly detection, with historical applications in submarine hunting and underwater mine detection. They approximately describe the signal detected by a total field sensor as the sensor passes by a target (assuming the target's signature is small compared to the Earth's magnetic field).
Definition.
The magnetic field from a magnetic dipole along a given line, and in any given direction can be described by the following basis functions:
formula_0
which are known as Anderson functions.
Definitions: formula_1 is the magnetic dipole moment; formula_2 is the background (Earth's) magnetic field; formula_3 is the position along the line; formula_4 is a unit vector along the line; formula_5 is the vector of closest approach between the line and the dipole, with magnitude r; and formula_6 is the normalized position along the line.
The total magnetic field along the line is given by
formula_7
where formula_8 is the magnetic constant, and formula_9 are the Anderson coefficients, which depend on the geometry of the system. These are
formula_10
where formula_11 and formula_12 are unit vectors (given by formula_13 and formula_14, respectively).
Note that the antisymmetric portion of the function is represented by the second basis function. Correspondingly, the sign of formula_15 depends on how formula_16 is defined (e.g. which direction along the line is taken as 'forward').
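As an illustration only (function and parameter names are chosen here, not taken from the literature), the coefficients and the projected field can be evaluated numerically:
import numpy as np

def anderson_coefficients(m_hat, r_hat, v_hat, B_hat):
    # A1, A2, A3 from dot products of the four unit vectors defined above.
    A1 = 3 * (m_hat @ r_hat) * (r_hat @ B_hat) - (m_hat @ B_hat)
    A2 = 3 * (m_hat @ r_hat) * (v_hat @ B_hat) + 3 * (m_hat @ v_hat) * (r_hat @ B_hat)
    A3 = 3 * (m_hat @ v_hat) * (v_hat @ B_hat) - (m_hat @ B_hat)
    return A1, A2, A3

def anderson_field(theta, A, m_mag, r_mag, mu0=4e-7 * np.pi):
    # Sum of the three basis functions theta**(i-1) / (theta**2 + 1)**2.5, i = 1, 2, 3.
    basis = [theta ** i / (theta ** 2 + 1) ** 2.5 for i in range(3)]
    return mu0 / (4 * np.pi) * m_mag / r_mag ** 3 * sum(a * b for a, b in zip(A, basis))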
Total field measurements.
The total field measurement resulting from a dipole field formula_17 in the presence of a background field formula_2 (such as earth magnetic field) is
formula_18
The last line is an approximation that is accurate if the background field is much larger than contributions from the dipole. In such a case the total field reduces to the sum of the background field, and the projection of the dipole field onto the background field. This means that the total field can be accurately described as an Anderson function with an offset.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\frac{\\theta^{~i-1}}{(\\theta^2+1)^{\\frac{5}{2}}}, \\text{ for } i = 1,2,3\n"
},
{
"math_id": 1,
"text": "\\vec{m}"
},
{
"math_id": 2,
"text": "\\vec{B}_E"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "\\hat{v}"
},
{
"math_id": 5,
"text": "\\vec{r}"
},
{
"math_id": 6,
"text": "\\theta = x/r"
},
{
"math_id": 7,
"text": "\nB(\\theta) = \\frac{\\mu_0}{4\\pi}\\frac{|m|}{r^3}\\left(\\frac{A_1}{(\\theta^2+1)^{\\frac{5}{2}}} + \\frac{A_2\\theta^1}{(\\theta^2+1)^{\\frac{5}{2}}} + \\frac{A_3\\theta^2}{(\\theta^2+1)^{\\frac{5}{2}}} \\right)\n"
},
{
"math_id": 8,
"text": "\\mu_0"
},
{
"math_id": 9,
"text": "A_{1,2,3}"
},
{
"math_id": 10,
"text": "\n\\begin{align}\nA_1 &= 3(\\hat{m}\\cdot\\hat{r})(\\hat{r}\\cdot\\hat{B}_E) -~~(\\hat{m}\\cdot\\hat{B}_E) \\\\\nA_2 &= 3(\\hat{m}\\cdot\\hat{r})(\\hat{v}\\cdot\\hat{B}_E) + 3(\\hat{m}\\cdot\\hat{v})(\\hat{r}\\cdot\\hat{B}_E) \\\\\nA_3 &= 3(\\hat{m}\\cdot\\hat{v})(\\hat{v}\\cdot\\hat{B}_E) -~~(\\hat{m}\\cdot\\hat{B}_E)\n\\end{align}\n"
},
{
"math_id": 11,
"text": " \\hat{m},\\hat{r},"
},
{
"math_id": 12,
"text": "\\hat{B}_E "
},
{
"math_id": 13,
"text": "\\frac{\\vec{m}}{|\\vec{m}|}, \\frac{\\vec{r}}{|\\vec{r}|},"
},
{
"math_id": 14,
"text": "\\frac{\\vec{B}_E}{|\\vec{B}_E|}"
},
{
"math_id": 15,
"text": "A_2"
},
{
"math_id": 16,
"text": "\\vec{v}"
},
{
"math_id": 17,
"text": "\\vec{B}_D"
},
{
"math_id": 18,
"text": "\n\\begin{align}\n|B| &= \\sqrt{(\\vec{B}_D+\\vec{B}_E)\\cdot(\\vec{B}_D+\\vec{B}_E)} \\\\\n &= |B_E|\\sqrt{1+\\frac{2\\vec{B}_D\\cdot\\vec{B}_E}{|B_E|^2}+\\frac{\\vec{B}_D^2}{|B_E|^2}} \\\\\n &\\approx |B_E|+\\frac{\\vec{B}_D\\cdot\\vec{B}_E}{|B_E|}, &|B_E|\\gg |B_D|.\n\\end{align}\n"
}
] |
https://en.wikipedia.org/wiki?curid=61954047
|
61955363
|
Song of Songs 5
|
Fifth chapter of the Song of Songs
Song of Songs 5 (abbreviated as Song 5) is the fifth chapter of the Song of Songs in the Hebrew Bible or the Old Testament of the Christian Bible. This book is one of the Five Megillot, a collection of short books, together with Ruth, Lamentations, Ecclesiastes and Esther, within the Ketuvim, the third and the last part of the Hebrew Bible. Jewish tradition views Solomon as the author of this book (although this is now largely disputed), and this attribution influences the acceptance of this book as a canonical text.
This chapter opens with the man's response to his lover's consent in the closing verses of chapter 4, but the second part of the chapter relates the woman's refusal to welcome the man into her room at night; when she changes her mind, he has already disappeared. In the next part she looks for him in the city, and in the final section (verses 10 onwards) she describes to the daughters of Jerusalem how fair the man is.
Text.
The original text is written in Hebrew language. This chapter is divided into 16 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Codex Leningradensis (1008). One fragment containing a part of this chapter was found among the Dead Sea Scrolls, assigned as 4Q107 (4QCantb; 30 BCE-30 CE; extant: verse 1).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Structure.
The Modern English Version (MEV) identifies the speakers in this chapter as:
The start of the fifth chapter and the close of the fourth are not placed at the same verse in all versions of the Bible: the Vulgate version of chapter 5 starts with what is the end of the woman's speech in most other versions:
<templatestyles src="Verse translation/styles.css" />
Analysis.
Male and chorus: tasting and enjoy the garden (5:1).
This verse contains the man's closure of the dialogue at the end of the previous chapter; the call to eat and drink implies consummation. John Gill notes that the words closing the dialogue should not have been separated from the rest of the exchange in chapter 4.
Verse 1.
[The Beloved/the Man]
"I have come to my garden, my sister, my spouse;"
"I have gathered my myrrh with my spice;"
"I have eaten my honeycomb with my honey;"
"I have drunk my wine with my milk."
[To His Friends]
"Eat, O friends!"
"Drink, yes, drink deeply,"
"O beloved ones!"
Female: A second search at night for her dream lover (5:2-8).
In this part, the woman refuses to welcome her lover into her room at night (either in reality or in a dream), but when she changes her mind, the man has already disappeared. She looks for him in the city; the watchmen (the guards) find her and beat her. She appeals to the daughters of Jerusalem for help in her lovesick condition.
"I sleep, but my heart waketh: it is the voice of my beloved that knocketh, saying, Open to me, my sister, my love, my dove, my undefiled: for my head is filled with dew, and my locks with the drops of the night."
Chorus: Challenge to compare the male lover (5:9).
The "daughters of Jerusalem" want to know what the male lover looks like.
Female: descriptive poem for the male (5:10-16).
The woman describes her lover from head to toe in a waṣf, or descriptive poem, using the imagery of fauna and flora for his head, then metals and precious stones for the rest of his body. This "waṣf" and the others (such as 7:2-10a (7:1-9a English)) theologically demonstrate the heart of the Song, which values the body not as evil but as good, even worthy of praise, and which respects the body with an appreciative (rather than lurid) focus. Hess notes that this reflects 'the fundamental value of God's creation as good and the human body as a key part of that creation, whether at the beginning or redeemed in the resurrection'.
"His mouth is most sweet: yea, he is altogether lovely. This is my beloved, and this is my friend, O daughters of Jerusalem."
Musical settings.
The phrase "Veniat dilectus meus" and variant texts such as antiphons based on it have been set to music, for instance in Gregorian chant, and by composers including Alessandro Grandi and Pietro Torri.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=61955363
|
61956494
|
Unified scattering function
|
The unified scattering function was proposed in 1995 as a universal approach to describe small-angle X-ray and neutron scattering (and in some cases light scattering) from disordered systems that display hierarchical structure.
Concept.
The concept of universal descriptions of scattering, that is, scattering functions that do not depend on a specific structural model but whose parameters can be related back to specific structures, has existed since about 1950. The prominent examples of universal scattering functions are Guinier's Law,
and Porod's Law,
where "G", "R"g, and "B" are constants related to the scattering contrast, structural volume, surface area, and radius of gyration. "q" is the magnitude of the scattering vector which is related to the Bragg spacing, "d", "q" = 2π/"d" = 4π/λ sin(θ/2). λ is the wavelength and θ is the scattering angle (2θ in diffraction).
Both Guinier's Law and Porod's Law refer to an aspect of a single structural level. A structural level is composed of a size that can be expressed in "R"g, and a structure as reflected in a power-law decay, -4 in the case of Porod's Law for solid objects with smooth, sharp interfaces. For other structures the power-law decay yields the mass-fractal dimension, "d"f, which relates the mass and size of the object, thereby partially defining the object. For instance, a rod has "d"f = 1 and a disk has "d"f = 2. The prefactor to the power-law yields other details of the structure such as the surface to volume ratio for solid objects, the branch content for chain structures, the convolution or crumpled-ness of various objects. The prefactor to Guinier's Law yields the mass and volume fraction under dilute conditions. Above the overlap concentration (generally 1 to 5 volume percent) structural screening must be considered.
In addition to these universal functions that describe only a part of a structural level, a number of scattering functions that can describe a single "structural level" have been proposed for some disordered systems, most interestingly Debye's scattering function for a Gaussian polymer chain derived during World War II,
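In its standard form (supplied here for reference), Debye's function is
I(q) = \frac{2G}{x^2}\left(e^{-x} + x - 1\right),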
where "x" = "q"2"R"g2. Eq. 3 reverts to Eq. 1 at low-"q" and to a power-law, "I"("q") = "Bq"−2 at high-"q" reflecting the two dimensional nature of a random walk or a diffusion path. Eq. 3 refers to a single structural level, corresponding to a Guinier regime and a power-law regime. The Guinier regime reflecting the overall size of the object without reference to the internal or surface structure of the object and the power-law reflecting the details of the structure, in this case a linear (unbranched), mass-fractal object with mass-fractal dimension, "d"f = 2 (connectivity dimension of 1 reflecting a linear structure; and minimum dimension of 2 indicating a random conformation in 3d space).
In the 1990s it became apparent that single structural level functions similar to Eq. 3 would be of great use in describing complex, disordered structures such as branched mass-fractal aggregates, linear polymers in good solvents ("d"f ~ 5/3), branched polymers ("d"f > 2), cyclic polymers, and macromolecules of complex topology such as star, dendrimer, and comb polymers, as well as polyelectrolytes, micellar and colloidal materials such as worm-like micelles. Further, no analytically derived scattering functions could describe multiple structural levels in hierarchical materials. The observation of multiple structural levels is extremely common, even in the case of a simple linear Gaussian polymer chain described by Eq. 3, which is statistically composed of rod-like Kuhn units (level 1) which follow "I"("q") = "Bq"−1 at the highest "q". Common examples of hierarchical materials are silica, titania, and carbon black nano-aggregates composed of solid primary particles (level 1) displaying Porod scattering at highest "q", Eq. 2, which aggregate into fairly rigid mass-fractal structures at intermediate nanoscales (level 2), and which agglomerate into micron-scale solid or network structures (level 3). Since these structural levels overlap in a small-angle scattering pattern, it was not possible to accurately model these materials using Eq. 1 and various power-law functions such as Eq. 2. For these reasons, a global scattering function that could be expanded to multiple structural levels was of interest.
In 1995 Beaucage derived the Unified Scattering Function,
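In its commonly quoted form (supplied here for reference), the unified function for a hierarchy of structural levels reads
I(q) \approx \sum_i \left[ G_i \exp\!\left(\frac{-q^2 R_{g,i}^2}{3}\right) + B_i \exp\!\left(\frac{-q^2 R_{g,i+1}^2}{3}\right)\left(\frac{1}{q_i^*}\right)^{P_i} \right]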
where "i" refers to the structural level starting with the smallest size, highest "q". "q"i* is defined by,
and "k" has a value of 1 for solid structural levels (:formula_0) and approximately 1.06 for mass-fractal structural levels (:formula_1). Eq. 4 recognizes that all structures display the behavior of Eq. 1 at largest sizes, that is all structures exhibit a size, and if the structure is randomly arranged that size manifests as a Gaussian function in small-angle scattering governed by the radius of gyration with larger objects displaying a smaller standard deviation, or larger "R"g. At high-"q" Eq. 1 fails to describe the structure because it reflects an object with no surface or internal structure [8]. The second term in Eq. 4 gives the missing information concerning the surface or internal structure of the object by way of the power "P"i and the prefactor "B"i (as well as how "P"i and "B"i relate to "G"i, and "R"g,i). Beaucage realized that the problem of obtaining a generic multi-level scattering function lay in Eq. 2 since a power-law could not extend infinitely to low-"q" and yield a finite intensity at "q" => 0. Also, such a function would over power Eq. 1 in the range of "q" where Eq. 1 is appropriate.
One of several possible derivations of Eq. 4 uses Eq. 2 as an example of a power-law regime. A vector, r, can be visualized as the vector connecting interference points between an incident beam and the scattered beam; r = 2π/q, where q = (4π/λ) sin(θ/2) is the scattering vector in inverse space. Scattering occurs when two fringe points separated by r contain scattering material. If material is located at |r|/2, destructive interference occurs. So within a solid object there is always material at a position |r|/2 that negates scattering from material separated at |r|. Only at the surface do conditions of contrast occur.
Eq. 2 describes scattering from a smooth sharp interface which results in scattering that is proportional to the surface area and decays with "q"−4. The volume of a scattering element in this case scales with "V" ~ "r"3. Scattering involves binary interference so is proportional to (ρ"V")2 ~ "r"6. The number of these V domains is proportional to the surface area divided by the area of a domain, "N" ~ "S"/"r"2. So the scattering intensity follows "I"("q") ~ "SV"2/"r"2 ~ "Sr"4 ~ "Sq"−4.
At small size scales, at high q, for an oddly shaped object with a smooth/sharp interface, the structure appears to be a flat surface and the described approach is appropriate. As the size scale of observation, "r", approaches "R"g at low "q" this model fails because the surface is no longer planar. That is, the scattering even in figure 1 relies on both ends of the vector, r, being coplanar and arranged as indicated (the specular condition) with respect to the incident and scattered beams. In the absence of this orientation no scattering occurs. The curvature of the particle, which is related to the radius of gyration, extinguishes surface scattering at low-"q" in the Guinier regime. Incorporating this observation in Porod's law in the original derivation is not possible since it relies on a Fourier transform of a correlation function for surface scattering. Beaucage arrived at Eq. 4 through a new derivation of Eq. 1 based on randomly placed particles and adoption of this approach to modification of Eq. 2.
Beaucage derivation of Guinier's Law.
Consider a randomly placed vector r such that both ends of the vector are in the particle. If the vector were held constant in space, while the particle were translated and rotated to any position meeting this condition and an average of the structures were taken, any object would result in a Gaussian mass distribution that would display a Gaussian correlation function,
and would appear as an average cloud with no surface. The Fourier transform of Eq. 6 results in Eq. 1.
Limitations to power-law scattering at low-q.
Power-law scattering is restricted to sizes smaller than the object. For example, within a mass-fractal object such as a polymer chain described by Eq. 3, the normalized mass of the chain, "z", scales with the normalized size, "R" (the end-to-end distance normalized by the Kuhn length), with a scaling power of the mass-fractal dimension, "d"f: "z" ~ "R""d"f. Considering scattering elements of size "r", the number of such elements in a particle scales with "N" ~ "z"/"r""d"f, and the mass of such an element scales with "n" ~ "r""d"f, so the scattering is proportional to "Nn"2 ~ "r""d"f ~ "q"-"d"f. At low "q" the vector "r" ~ 2π/"q" approaches the size of the particle. For this reason the power-law regime ends at low "q". One way to consider this is to think of a vector "r"a beginning and ending in the particle, Figure 2 (a). This vector meets the mass-fractal condition if the particle is a mass fractal. In Figure 2 (b) the vector "r"b, separating two points, does not meet the mass-fractal condition, but with a translation of the particle by "d" the mass-fractal condition can be met for both ends of "r"b, (c).
In scattering we are considering all possible translations of the particle relative to one end of the vector r being located within the mass-fractal particle. The probability of moving the particle to meet the mass-fractal condition for both ends of the vector is less than 1 if r is close to the particle size. If the particle were of infinite size this probability would always be 1. For a finite particle Figure 2 shows that the reduction in probability for a scattering event at large sizes can be viewed as a reduction in the length of the vector r. This is the basis of the Unified Function. Rather than directly determining the scattering function, the reduction in r related to this translation is calculated. Since r is related to 2π/q we consider an effective increase in scattering vector q to q*. The relationship between q and q* is determined by first considering the consequence of the translation in Figure 2 on the correlation function based on the Gaussian derivation of Guinier's Law [8]. This analysis results in a modifying factor of,
Following the Debye relationship, this factor can be incorporated into q yielding the transform,
where,
as shown in Figure 2 in terms of q* = 2π/r*. It has been demonstrated that for strong power-law decays Eq. 8 is equivalent to,
which allows for the direct use of a modification of Eq. 2 as,
For mass-fractal power-laws this approximation is not perfect due to the shape of the correlation function at low "q". A good approximation is to include a constant "k", whose value is about 1.06 for "d"f = 2, so that Eq. 9 is replaced by,
In general for mass fractals it is found that k ~ 1.06 is a good approximation and k = 1 for surface fractal scattering.
With this modification, power-law scattering is compatible with Guinier scattering and the two terms can be summed in a Unified Equation,
Eq. 13 can describe a single structural level and can closely replicate Eq. 3, as well as equations for polydisperse spheres, rods, sheets, good-solvent polymers, branched polymers, and cyclic polymers, as demonstrated in related publications. A wide range of disordered materials, including mass- and surface-fractal structures, can therefore be described using the Unified Approach.
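A single structural level of the unified equation, in the commonly quoted form given above, can be sketched in Python (illustrative only; parameter names are chosen here, and the expression is valid for q > 0):
import numpy as np
from scipy.special import erf

def unified_level(q, G, Rg, B, P, k=1.0):
    # Guinier term plus a power law in q*; k is about 1.06 for mass fractals.
    guinier = G * np.exp(-(q ** 2) * Rg ** 2 / 3.0)
    q_star = q / erf(k * q * Rg / np.sqrt(6.0)) ** 3
    return guinier + B * q_star ** (-P)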
For hierarchical materials with multiple structural levels Eq. 13 can be extended using a Gaussian cutoff at high-q for the power-law function which is common to equations for rods, disks and other simple scattering functions such as described in Guinier and Fournet,
where it is taken that "R"g,0 = 0. This function has been used to describe persistence in polymer chains in good and theta solvents, branched polymers, polymers of complex topology such as star polymers, mass-fractal primary particles/aggregates/agglomerates, rod diameter/length, disk thickness/width, and other complex hierarchical structures. The lead cutoff term in Eq. 14 assumes that structural level i is composed of structural levels i-1. If this is not true, a free parameter can substitute for Rg,i-1.
Eq. 14 is quite flexible and it has been extended as a Hybrid Unified Function for micellar systems where the local structure is a perfect cylinder or other structure.
Implementation of Unified Function.
Jan Ilavsky of Argonne National Laboratory's Advanced Photon Source (USA) has provided open user code to perform fits using the Unified Function in the Igor Pro programming environment, including video tutorials and an instruction manual.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "3<P_i"
},
{
"math_id": 1,
"text": "3>P_i"
}
] |
https://en.wikipedia.org/wiki?curid=61956494
|
619578
|
Nicomachus
|
1st century AD Neopythagorean philosopher, mathematician, and music theorist
Nicomachus of Gerasa (Greek: ; c. 60 – c. 120 AD) was an Ancient Greek Neopythagorean philosopher from Gerasa, in the Roman province of Syria (now Jerash, Jordan). Like many Pythagoreans, Nicomachus wrote about the mystical properties of numbers; he is best known for his works "Introduction to Arithmetic" and "Manual of Harmonics", which are an important resource on Ancient Greek mathematics and Ancient Greek music in the Roman period. Nicomachus' work on arithmetic became a standard text for Neoplatonic education in Late antiquity, with philosophers such as Iamblichus and John Philoponus writing commentaries on it. A Latin paraphrase by Boethius of Nicomachus's works on arithmetic and music became standard textbooks in medieval education.
Life.
Little is known about the life of Nicomachus except that he was a Pythagorean who came from Gerasa. His "Manual of Harmonics" was addressed to a lady of noble birth, at whose request Nicomachus wrote the book, which suggests that he was a respected scholar of some status. He mentions his intent to write a more advanced work, and how the journeys he frequently undertakes leave him short of time. The approximate dates in which he lived (c. 100 AD) can only be estimated based on which other authors he refers to in his work, as well as which later mathematicians refer to him. He mentions Thrasyllus in his "Manual of Harmonics", and his "Introduction to Arithmetic" was apparently translated into Latin in the mid 2nd century by Apuleius, while he makes no mention at all of either Theon of Smyrna's work on arithmetic or Ptolemy's work on music, implying that they were either later contemporaries or lived in the time after he did.
Philosophy.
Historians consider Nicomachus a Neopythagorean based on his tendency to view numbers as having mystical properties rather than their mathematical properties, citing an extensive amount of Pythagorean literature in his work, including works by Philolaus, Archytas, and Androcydes. He writes extensively on numbers, especially on the significance of prime numbers and perfect numbers and argues that arithmetic is ontologically prior to the other mathematical sciences (music, geometry, and astronomy), and is their cause. Nicomachus distinguishes between the wholly conceptual immaterial number, which he regards as the 'divine number', and the numbers which measure material things, the 'scientific' number. Nicomachus provided one of the earliest Greco-Roman multiplication tables; the oldest extant Greek multiplication table is found on a wax tablet dated to the 1st century AD (now found in the British Museum).
Metaphysics.
Although Nicomachus is considered a Pythagorean, John M. Dillon says that Nicomachus's philosophy "fits comfortably within the spectrum of contemporary Platonism." In his work on arithmetic, Nicomachus quotes from Plato's "Timaeus" to make a distinction between the intelligible world of Forms and the sensible world, however, he also makes more Pythagorean distinctions, such as between Odd and even numbers. Unlike many other Neopythagoreans, such as Moderatus of Gades, Nicomachus makes no attempt to distinguish between the Demiurge, who acts on the material world, and The One which serves as the supreme first principle. For Nicomachus, God as the supreme first principle is both the demiurge and the Intellect (nous), which Nicomachus also equates to being the monad, the potentiality from which all actualities are created.
Works.
Two of Nicomachus' works, the "Introduction to Arithmetic" and the "Manual of Harmonics" are extant in a complete form, and two others, a work on "Theology of Arithmetic" and a "Life of Pythagoras" survive in fragments, epitomes, and summaries by later authors. The "Theology of Arithmetic" (), on the Pythagorean mystical properties of numbers in two books is mentioned by Photius. There is an extant work sometimes attributed to Iamblichus under this title written two centuries later which contains a great deal of material thought to have been copied or paraphrased from Nicomachus' work. Nicomachus's "Life of Pythagoras" was one of the main sources used by Porphyry and Iamblichus, for their (extant) "Lives" of Pythagoras. An "Introduction to Geometry", referred to by Nicomachus himself in the "Introduction to Arithmetic," has not survived. Among his known lost work is another larger work on music, promised by Nicomachus himself, and apparently referred to by Eutocius in his comment on the sphere and cylinder of Archimedes.
"Introduction to Arithmetic".
"Introduction to Arithmetic" (Greek: , ) is the only extant work on mathematics by Nicomachus. The work contains both philosophical prose and basic mathematical ideas. Nicomachus refers to Plato quite often, and writes that philosophy can only be possible if one knows enough about mathematics. Nicomachus also describes how natural numbers and basic mathematical ideas are eternal and unchanging, and in an abstract realm. The work consists of two books, twenty-three and twenty-nine chapters, respectively.
Nicomachus's presentation is much less rigorous than that of Euclid centuries earlier. Propositions are typically stated and illustrated with one example, but not proven through inference. In some instances this results in patently false assertions. For example, he states that from (a-b) ∶ (b-c) ∷ c ∶ a it can be concluded that ab=2bc, only because this is true for a=6, b=5 and c=3.
Boethius' "De institutione arithmetica" is in large part a Latin translation of this work.
"Manual of Harmonics".
"Manuale Harmonicum" (Ἐγχειρίδιον ἁρμονικῆς, "Encheiridion Harmonikes") is the first important music theory treatise since the time of Aristoxenus and Euclid. It provides the earliest surviving record of the legend of Pythagoras's epiphany outside of a smithy that pitch is determined by numeric ratios. Nicomachus also gives the first in-depth account of the relationship between music and the ordering of the universe via the "music of the spheres." Nicomachus's discussion of the governance of the ear and voice in understanding music unites Aristoxenian and Pythagorean concerns, normally regarded as antitheses. In the midst of theoretical discussions, Nicomachus also describes the instruments of his time, also providing a valuable resource. In addition to the "Manual", ten extracts survive from what appear to have originally been a more substantial work on music.
Legacy.
Late antiquity.
The "Introduction to Arithmetic" of Nicomachus was a standard textbook in Neoplatonic schools, and commentaries on it were written by Iamblichus (3rd century) and John Philoponus (6th century).
The "Arithmetic" (in Latin: "De Institutione Arithmetica") of Boethius was a Latin paraphrase and a partial translation of the "Introduction to Arithmetic". The "Manual of Harmonics" also became the basis of the Boethius' Latin treatise titled "De institutione musica".
Medieval European philosophy.
The work of Boethius on arithmetic and music was a core part of the "Quadrivium" liberal arts and had a great diffusion during the Middle Ages.
Nicomachus's theorem.
At the end of Chapter 20 of his "Introduction to Arithmetic", Nicomachus points out that if one writes a list of the odd numbers, the first is the cube of 1, the sum of the next two is the cube of 2, the sum of the next three is the cube of 3, and so on. He does not go further than this, but from this it follows that the sum of the first n cubes equals the sum of the first formula_0 odd numbers, that is, the odd numbers from 1 to formula_1. The average of these numbers is obviously formula_0, and there are formula_0 of them, so their sum is formula_2 Many early mathematicians have studied and provided proofs of Nicomachus's theorem.
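A quick numerical check of this identity can be sketched in Python (names chosen here):
def nicomachus_ok(n):
    # Sum of the first n cubes equals the square of the n-th triangular number.
    return sum(k ** 3 for k in range(1, n + 1)) == (n * (n + 1) // 2) ** 2

all(nicomachus_ok(n) for n in range(1, 100))   # True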
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n(n+1)/2"
},
{
"math_id": 1,
"text": "n(n+1)-1"
},
{
"math_id": 2,
"text": "\\bigl(n(n+1)/2\\bigr)^2."
}
] |
https://en.wikipedia.org/wiki?curid=619578
|
6196078
|
Cox process
|
Poisson point process
In probability theory, a Cox process, also known as a doubly stochastic Poisson process, is a point process which generalizes a Poisson process by letting the intensity that varies across the underlying mathematical space (often space or time) be itself a stochastic process. The process is named after the statistician David Cox, who first published the model in 1955.
Cox processes are used to generate simulations of spike trains (the sequence of action potentials generated by a neuron), and also in financial mathematics where they produce a "useful framework for modeling prices of financial instruments in which credit risk is a significant factor."
Definition.
Let formula_0 be a random measure.
A random measure formula_1 is called a Cox process directed by formula_0, if formula_2 is a Poisson process with intensity measure formula_3.
Here, formula_2 is the conditional distribution of formula_1, given formula_4.
Laplace transform.
If formula_1 is a Cox process directed by formula_0, then formula_1 has the Laplace transform
formula_5
for any positive, measurable function formula_6.
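As a minimal illustration (assuming the simplest case of a random constant intensity, sometimes called a mixed Poisson process), a Cox process on an interval can be simulated in Python:
import numpy as np

rng = np.random.default_rng(0)

def simulate_cox(T=1.0):
    # Draw a random intensity, then sample a Poisson process given that intensity.
    lam = rng.gamma(shape=2.0, scale=1.0)          # random (constant) directing intensity
    n = rng.poisson(lam * T)                       # number of points given the intensity
    return np.sort(rng.uniform(0.0, T, size=n))    # homogeneous Poisson arrival times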
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\xi "
},
{
"math_id": 1,
"text": " \\eta "
},
{
"math_id": 2,
"text": " \\mathcal L(\\eta \\mid \\xi=\\mu) "
},
{
"math_id": 3,
"text": " \\mu "
},
{
"math_id": 4,
"text": " \\{ \\xi=\\mu\\} "
},
{
"math_id": 5,
"text": " \\mathcal L_\\eta(f)=\\exp \\left(- \\int 1-\\exp(-f(x))\\; \\xi(\\mathrm dx)\\right) "
},
{
"math_id": 6,
"text": " f "
}
] |
https://en.wikipedia.org/wiki?curid=6196078
|
61965513
|
Vitaly Voloshinov
|
Soviet-Russian physicist (1947–2019)
Vitaly Borisovich Voloshinov (Russian: Виталий Борисович Волошинов; 20 March 1947 – 28 September 2019) was a Soviet and Russian physicist, one of the world's leading experts in the field of acousto-optics, and an honored teacher of Moscow State University.
He held a PhD in physics and mathematics and was an associate professor in the Physics Department of Moscow State University.
He was a member of the Corps of Experts in Natural Sciences, with 1683 citations for papers published after 1976 and an h-index of 19.
Biography.
Vitaly B. Voloshinov was born on 20 March 1947 in Berlin. Son of Boris E. Voloshinov and Nataliya K. Voloshinova.
In 1965 he graduated from English special school No.1 in Moscow and music school No.1 named after Sergey Prokofiev.
In 1971 he graduated from the Physics Department of Lomonosov Moscow State University with Highest Distinction and Prize named after Rem Khokhlov for best Graduate Student Research Projects.
In 1971–1973 he worked as an engineer and senior research engineer at the Scientific Research Institute of Space Engineering Instruments.
In 1977, upon completion of graduate school, defended his thesis on "Control of light beams using Bragg diffraction in an optically anisotropic medium", specialty No. 01.04.03, PhD in Radiophysics and Quantum Electronics.
Since 1976, a researcher at the physics department.
Since 1992, an associate professor at the physics department.
He died suddenly on 28 September 2019. The funeral service took place in the church of the Exaltation of the Holy Cross in Mitino. He was buried at Mitinsky cemetery in Moscow.
Achievements.
Achievements include 7 national patents for inventions in optical engineering.
Scientific supervisor of national and international research projects and grants (CRDF, RFBR, etc.).
Teaching activities.
He supervised 15 candidates of science (PhD).
Professor Voloshinov lectured to graduate and post-graduate students of MSU on several educational courses.
He also gave invited lectures in Russia, Bulgaria, Vietnam, Poland, Belgium, France, China, Colombia, Germany, and the United States.
Footnotes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "52^\\circ"
}
] |
https://en.wikipedia.org/wiki?curid=61965513
|
61970353
|
Mac Lane coherence theorem
|
In category theory, a branch of mathematics, Mac Lane's coherence theorem states, in the words of Saunders Mac Lane, that "every diagram commutes". More precisely (cf. the counter-example below), it states that every formal diagram commutes, where a "formal diagram" is an analog of the well-formed formulae and terms in proof theory.
Counter-example.
It is "not" reasonable to expect we can show literally every diagram commutes, due to the following example of Isbell.
Let formula_0 be a skeleton of the category of sets and "D" a unique countable set in it; note formula_1 by uniqueness. Let formula_2 be the projection onto the first factor. For any functions formula_3, we have formula_4. Now, suppose the natural isomorphisms formula_5 are the identity; in particular, that is the case for formula_6. Then for any formula_7, since formula_8 is the identity and is natural,
formula_9.
Since formula_10 is an epimorphism, this implies formula_11. Similarly, using the projection onto the second factor, we get formula_12 and so formula_13, which is absurd.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathsf{Set}_0 \\subset \\mathsf{Set}"
},
{
"math_id": 1,
"text": "D \\times D = D"
},
{
"math_id": 2,
"text": "p : D = D \\times D \\to D"
},
{
"math_id": 3,
"text": "f, g: D \\to D"
},
{
"math_id": 4,
"text": "f \\circ p = p \\circ (f \\times g)"
},
{
"math_id": 5,
"text": "\\alpha: X \\times (Y \\times Z) \\simeq (X \\times Y) \\times Z"
},
{
"math_id": 6,
"text": "X = Y = Z = D"
},
{
"math_id": 7,
"text": "f, g, h: D \\to D"
},
{
"math_id": 8,
"text": "\\alpha"
},
{
"math_id": 9,
"text": "f \\circ p = p \\circ (f \\times (g \\times h)) = p \\circ \\alpha \\circ (f \\times (g \\times h)) = p \\circ ((f \\times g) \\times h) \\circ \\alpha = (f \\times g) \\circ p"
},
{
"math_id": 10,
"text": "p"
},
{
"math_id": 11,
"text": "f = f \\times g"
},
{
"math_id": 12,
"text": "g = f \\times g"
},
{
"math_id": 13,
"text": "f = g"
}
] |
https://en.wikipedia.org/wiki?curid=61970353
|
61980052
|
Semi-global matching
|
Computer vision algorithm
Semi-global matching (SGM) is a computer vision algorithm for the estimation of a dense disparity map from a rectified stereo image pair, introduced in 2005 by Heiko Hirschmüller while working at the German Aerospace Center. Given its predictable run time, its favourable trade-off between quality of the results and computing time, and its suitability for fast parallel implementation in ASIC or FPGA, it has encountered wide adoption in real-time stereo vision applications such as robotics and advanced driver assistance systems.
Problem.
Pixelwise stereo matching makes it possible to compute disparity maps in real time, by measuring the similarity of each pixel in one stereo image to each pixel within a subset of the other stereo image. Given a rectified stereo image pair, for a pixel with coordinates formula_0 the set of candidate pixels in the other image is usually selected as formula_1, where formula_2 is the maximum allowed disparity shift.
A simple search for the best matching pixel produces many spurious matches, and this problem can be mitigated with the addition of a regularisation term that penalises jumps in disparity between adjacent pixels, with a cost function in the form
formula_3
where formula_4 is the pixel-wise dissimilarity cost at pixel formula_5 with disparity formula_6, and formula_7 is the regularisation cost between pixels formula_5 and formula_8 with disparities formula_6 and formula_9 respectively, for all pairs of neighbouring pixels formula_10. Such a constraint can be efficiently enforced on a per-scanline basis by using dynamic programming (e.g. the Viterbi algorithm), but such a limitation can still introduce streaking artefacts in the depth map, because little or no regularisation is performed across scanlines.
A possible solution is to perform global optimisation in 2D, which is however an NP-complete problem in the general case. For some families of cost functions (e.g. submodular functions) a solution with strong optimality properties can be found in polynomial time using graph cut optimization, however such global methods are generally too expensive for real-time processing.
Algorithm.
The idea behind SGM is to perform line optimisation along multiple directions and to compute an aggregated cost formula_11 by summing the costs to reach pixel formula_5 with disparity formula_12 from each direction. The number of directions affects the run time of the algorithm, and while 16 directions usually ensure good quality, a lower number can be used to achieve faster execution. A typical 8-direction implementation of the algorithm can compute the cost in two passes: a forward pass accumulating the cost from the left, top-left, top, and top-right, and a backward pass accumulating the cost from the right, bottom-right, bottom, and bottom-left. A single-pass algorithm can be implemented with only five directions.
The cost is composed by a matching term formula_13 and a binary regularisation term formula_14. The former can be in principle any local image dissimilarity measure, and commonly used functions are absolute or squared intensity difference (usually summed over a window around the pixel, and after applying a high-pass filter to the images to gain some illumination invariance), Birchfield–Tomasi dissimilarity, Hamming distance of the census transform, Pearson correlation (normalized cross-correlation). Even mutual information can be approximated as a sum over the pixels, and thus used as a local similarity metric. The regularisation term has the form
formula_15
where formula_16 and formula_17 are two constant parameters, with formula_18. The three-way comparison assigns a smaller penalty to unitary changes in disparity, thus allowing smooth transitions corresponding e.g. to slanted surfaces, and penalises larger jumps while preserving discontinuities, since the larger penalty is constant regardless of the size of the jump. To further preserve discontinuities, the gradient of the intensity can be used to adapt the penalty term, because discontinuities in depth usually correspond to a discontinuity in image intensity formula_19, by setting
formula_20
for each pair of pixels formula_5 and formula_8.
The accumulated cost formula_21 is the sum of all costs formula_22 to reach pixel formula_5 with disparity formula_12 along direction formula_23. Each term can be expressed recursively as
formula_24
where the minimum cost at the previous pixel formula_25 is subtracted for numerical stability, since it is constant for all values of disparity at the current pixel and therefore it does not affect the optimisation.
The value of disparity at each pixel is given by formula_26, and sub-pixel accuracy can be achieved by fitting a curve through formula_27 and its neighbouring costs and taking the minimum along the curve. Since the two images in the stereo pair are not treated symmetrically in the calculations, a consistency check can be performed by computing the disparity a second time in the opposite direction, swapping the role of the left and right image, and invalidating the result for the pixels where the result differs between the two calculations. Further post-processing techniques for the refinement of the disparity image include morphological filtering to remove outliers, intensity consistency checks to refine textureless regions, and interpolation to fill in pixels invalidated by consistency checks.
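A NumPy sketch of the recursion along a single (left-to-right) direction, assuming a precomputed cost volume C of shape (H, W, D); all names here are chosen for illustration:
import numpy as np

def aggregate_left_to_right(C, P1, P2):
    # L(p,d) = C(p,d) + min(L(q,d), L(q,d-1)+P1, L(q,d+1)+P1, min_k L(q,k)+P2)
    #          - min_k L(q,k),  where q is the left neighbour of p.
    H, W, D = C.shape
    L = np.empty((H, W, D), dtype=np.float64)
    L[:, 0, :] = C[:, 0, :]
    for x in range(1, W):
        prev = L[:, x - 1, :]                # (H, D) costs of the left neighbours
        m = prev.min(axis=1, keepdims=True)  # minimum over all disparities
        dm1 = np.pad(prev[:, :-1], ((0, 0), (1, 0)), constant_values=np.inf) + P1  # d-1
        dp1 = np.pad(prev[:, 1:], ((0, 0), (0, 1)), constant_values=np.inf) + P1   # d+1
        L[:, x, :] = C[:, x, :] + np.minimum(np.minimum(prev, m + P2), np.minimum(dm1, dp1)) - m
    return L
Summing such aggregated volumes over all directions gives the total cost S, and a winner-take-all disparity map is then S.argmin(axis=2).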
The cost volume formula_28 for all values of formula_29 and formula_12 can be precomputed, and in an implementation of the full algorithm, using formula_2 possible disparity shifts and formula_30 directions, each pixel is subsequently visited formula_30 times; therefore the computational complexity of the algorithm for an image of size formula_31 is formula_32.
Memory efficient variant.
The main drawback of SGM is its memory consumption. An implementation of the two-pass 8-direction version of the algorithm requires storing formula_33 elements, since the accumulated cost volume has a size of formula_34, and to compute the cost of a pixel during each pass it is necessary to keep track of the formula_2 path costs of its left or right neighbour along one direction and of the formula_35 path costs of the pixels in the row above or below along three directions. One solution to reduce memory consumption is to compute SGM on partially overlapping image tiles, interpolating the values over the overlapping regions. This method also makes it possible to apply SGM to very large images that would not fit in memory in the first place.
A memory-efficient approximation of SGM stores for each pixel only the costs for the disparity values that represent a minimum along some direction, instead of all possible disparity values. The true minimum is highly likely to be predicted by the minima along the eight directions, thus yielding similar quality of the results. The algorithm uses eight directions and three passes, and during the first pass it stores for each pixel the cost for the optimal disparity along the four top-down directions, plus the two closest lower and higher values (for sub-pixel interpolation). Since the cost volume is stored in a sparse fashion, the four values of optimal disparity need also to be stored. In the second pass, the other four bottom-up directions are computed, completing the calculations for the four disparity values selected in the first pass, that now have been evaluated along all eight directions. An intermediate value of cost and disparity is computed from the output of the first pass and stored, and the memory of the four outputs from the first pass is replaced with the four optimal disparity values and their costs from the directions in the second pass. A third pass goes again along the same directions used in the first pass, completing the calculations for the disparity values from the second pass. The final result is then selected among the four minima from the third pass and the intermediate result computed during the second pass.
In each pass four disparity values are stored, together with three cost values each (the minimum and its two closest neighbouring costs), plus the disparity and cost values of the intermediate result, for a total of eighteen values for each pixel, making the total memory consumption equal to formula_36, at the cost in time of an additional pass over the image.
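To see the scale of the savings, the two memory formulas can be evaluated for an illustrative image size (the dimensions below are made up, not from the source):

```python
W, H, D = 1920, 1080, 128   # illustrative image width, height and disparity range

full   = W * H * D + 3 * W * D + D    # two-pass 8-direction SGM
sparse = 18 * W * H + 3 * W * D + D   # memory-efficient variant

print(full, sparse, full / sparse)
# roughly 266 million vs 38 million stored elements: about a 7x reduction
```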
|
[
{
"math_id": 0,
"text": "(x, y)"
},
{
"math_id": 1,
"text": "\\{ (\\hat{x},y)|\\hat{x} \\ge x, \\hat{x} \\le x + D \\}"
},
{
"math_id": 2,
"text": "D"
},
{
"math_id": 3,
"text": "E(\\boldsymbol{d}) = \\sum_p D(p, d_p) + \\sum_{p,q \\in \\mathcal{N}} R(p, d_p, q, d_q)"
},
{
"math_id": 4,
"text": "D(p, d_p)"
},
{
"math_id": 5,
"text": "p"
},
{
"math_id": 6,
"text": "d_p"
},
{
"math_id": 7,
"text": "R(p, d_p, q, d_q)"
},
{
"math_id": 8,
"text": "q"
},
{
"math_id": 9,
"text": "d_q"
},
{
"math_id": 10,
"text": "\\mathcal{N}"
},
{
"math_id": 11,
"text": "S(p, d)"
},
{
"math_id": 12,
"text": "d"
},
{
"math_id": 13,
"text": "D(p, d)"
},
{
"math_id": 14,
"text": "R(d_p, d_q)"
},
{
"math_id": 15,
"text": "\nR(d_p, d_q) =\n\\begin{cases}\n 0 \\quad &d_p = d_q \\\\\n P_1 &|d_p - d_q| = 1 \\\\\n P_2 &|d_p - d_q| > 1\n\\end{cases}\n"
},
{
"math_id": 16,
"text": "P_1"
},
{
"math_id": 17,
"text": "P_2"
},
{
"math_id": 18,
"text": "P_1 < P_2"
},
{
"math_id": 19,
"text": "I"
},
{
"math_id": 20,
"text": "P_2 = \\max \\left\\{ P_1, \\frac{\\hat{P}_2}{|I(p) - I(q)|} \\right\\}"
},
{
"math_id": 21,
"text": "S(p, d) = \\sum_r L_r(p, d)"
},
{
"math_id": 22,
"text": "L_r(p, d)"
},
{
"math_id": 23,
"text": "r"
},
{
"math_id": 24,
"text": "\nL_r(p,d) =\n D(p,d)\n + \\min \\left\\{\n L_r(p - r, d),\n L_r(p - r, d - 1) + P_1,\n L_r(p - r, d + 1) + P_1,\n \\min_i L_r(p - r, i) + P_2\n \\right\\}\n - \\min_k L_r(p - r, k)\n"
},
{
"math_id": 25,
"text": "\\min_k L_r(p - r, k)"
},
{
"math_id": 26,
"text": "d^*(p) = \\operatorname{argmin}_d S(p, d)"
},
{
"math_id": 27,
"text": "d^*(p)"
},
{
"math_id": 28,
"text": "C(p,d)"
},
{
"math_id": 29,
"text": "p = (x, y)"
},
{
"math_id": 30,
"text": "R"
},
{
"math_id": 31,
"text": "W \\times H"
},
{
"math_id": 32,
"text": "O(WHD)"
},
{
"math_id": 33,
"text": "W \\times H \\times D + 3 \\times W \\times D + D"
},
{
"math_id": 34,
"text": "W \\times H \\times D"
},
{
"math_id": 35,
"text": "W \\times D"
},
{
"math_id": 36,
"text": "18 \\times W \\times H + 3 \\times W \\times D + D"
}
] |
https://en.wikipedia.org/wiki?curid=61980052
|
619801
|
DES-X
|
Block cipher
In cryptography, DES-X (or DESX) is a variant on the DES (Data Encryption Standard) symmetric-key block cipher intended to increase the complexity of a brute-force attack. The technique used to increase the complexity is called "key whitening".
The original DES algorithm was specified in 1976 with a 56-bit key size: 2^56 possibilities for the key. There was criticism that an exhaustive search might be within the capabilities of large governments, particularly the United States' National Security Agency (NSA). One scheme to increase the key size of DES without substantially altering the algorithm was DES-X, proposed by Ron Rivest in May 1984.
The algorithm has been included in RSA Security's BSAFE cryptographic library since the late 1980s.
DES-X augments DES by XORing an extra 64 bits of key (K1) to the plaintext "before" applying DES, and then XORing another 64 bits of key (K2) "after" the encryption:
formula_0
The key size is thereby increased to 56 + (2 × 64) = 184 bits.
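In code, the construction is a thin whitening wrapper around ordinary DES. The sketch below is illustrative only: `des_encrypt` stands in for any single-block DES primitive and is passed in as a callable, since no particular DES implementation is assumed here:

```python
def xor64(a: bytes, b: bytes) -> bytes:
    """XOR two 8-byte blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def desx_encrypt(des_encrypt, k: bytes, k1: bytes, k2: bytes, block: bytes) -> bytes:
    """DES-X(M) = K2 XOR DES_K(M XOR K1).

    des_encrypt: a hypothetical single-block DES primitive (any DES
    implementation could be substituted); k is the DES key, k1/k2 are the
    64-bit whitening keys, block is an 8-byte plaintext block.
    """
    return xor64(k2, des_encrypt(k, xor64(block, k1)))
```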
However, the effective key size (security) is only increased to 56 + 64 − 1 − lb("M") = 119 − lb("M") bits, where "M" is the number of chosen plaintext/ciphertext pairs the adversary can obtain, and lb denotes the binary logarithm. Moreover, the key size drops to 88 bits given 2^32.5 known plaintexts, using an advanced slide attack.
DES-X also increases the strength of DES against differential cryptanalysis and linear cryptanalysis, although the improvement is much smaller than in the case of brute force attacks. It is estimated that differential cryptanalysis would require 2^61 chosen plaintexts (vs. 2^47 for DES), while linear cryptanalysis would require 2^60 known plaintexts (vs. 2^43 for DES, or 2^61 for DES with independent subkeys). Note that with 2^64 plaintexts (known or chosen being the same in this case), DES (or indeed any other block cipher with a 64-bit block size) is totally broken, as the whole cipher's codebook becomes available.
Aside from the differential and linear attacks, the best currently known attack on DES-X is a known-plaintext slide attack discovered by Biryukov and Wagner, which has a complexity of 2^32.5 known plaintexts and 2^87.5 time of analysis. Moreover, the attack is easily converted into a ciphertext-only attack with the same data complexity and 2^95 offline time complexity.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mbox{DES-X}(M) = K_2 \\oplus \\mbox{DES}_K(M \\oplus K_1)"
}
] |
https://en.wikipedia.org/wiki?curid=619801
|
61982153
|
Magic state distillation
|
Quantum computing algorithm
Magic state distillation is a method for creating more accurate quantum states from multiple noisy ones, which is important for building fault tolerant quantum computers. It has also been linked to quantum contextuality, a concept thought to contribute to quantum computers' power.
The technique was first proposed by Emanuel Knill in 2004,
and further analyzed by Sergey Bravyi and Alexei Kitaev the same year.
Thanks to the Gottesman–Knill theorem, it is known that some quantum operations (operations in the Clifford group) can be perfectly simulated in polynomial time on a classical computer. In order to achieve universal quantum computation, a quantum computer must be able to perform operations outside this set. Magic state distillation achieves this, in principle, by concentrating the usefulness of imperfect resources, represented by mixed states, into states that are conducive for performing operations that are difficult to simulate classically.
A variety of qubit magic state distillation routines, as well as distillation routines for qudits, with various advantages have been proposed.
Stabilizer formalism.
The Clifford group consists of a set of formula_0-qubit operations generated by the gates {"H", "S", CNOT} (where "H" is Hadamard and "S" is formula_1) called Clifford gates. The Clifford group generates stabilizer states which can be efficiently simulated classically, as shown by the Gottesman–Knill theorem. This set of gates with a non-Clifford operation is universal for quantum computation.
Magic states.
Magic states are purified from formula_0 copies of a mixed state formula_2. These states are typically provided via an ancilla to the circuit. A magic state for the formula_3 rotation operator is formula_4, where formula_5. Combining (copies of) magic states with Clifford gates can be used to make a non-Clifford gate. Since Clifford gates combined with a non-Clifford gate are universal for quantum computation, magic states combined with Clifford gates are also universal.
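As a concrete illustration, the amplitudes of formula_4 can be computed directly from the definition above (a minimal NumPy sketch):

```python
import numpy as np

# beta = arccos(1/sqrt(3)), as defined in the text
beta = np.arccos(1 / np.sqrt(3))

# |M> = cos(beta/2)|0> + exp(i*pi/4) sin(beta/2)|1>
M = np.array([np.cos(beta / 2),
              np.exp(1j * np.pi / 4) * np.sin(beta / 2)])

assert np.isclose(np.linalg.norm(M), 1.0)  # |M> is a normalized single-qubit state
```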
Purification algorithm for distilling |"M"〉.
The first magic state distillation algorithm, invented by Sergey Bravyi and Alexei Kitaev, is as follows.
Input: Prepare 5 imperfect states.
Output: An almost pure state having a small error probability.
repeat
Apply the decoding operation of the five-qubit error correcting code and measure the syndrome.
If the measured syndrome is formula_6, the distillation attempt is successful.
else Get rid of the resulting state and restart the algorithm.
until The states have been distilled to the desired purity.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "\\begin{bmatrix} 1 & 0 \\\\ 0 & i \\end{bmatrix} "
},
{
"math_id": 2,
"text": "\\rho"
},
{
"math_id": 3,
"text": "\\pi/6"
},
{
"math_id": 4,
"text": "|M\\rangle = \\cos(\\beta/2)|0\\rangle + e^{i\\frac{\\pi}{4}}\\sin(\\beta/2)|1\\rangle"
},
{
"math_id": 5,
"text": "\\beta = \\arccos\\left(\\frac{1}{\\sqrt 3}\\right)"
},
{
"math_id": 6,
"text": "|0000\\rangle"
}
] |
https://en.wikipedia.org/wiki?curid=61982153
|
61989099
|
Frank-Olaf Schreyer
|
Frank-Olaf Schreyer is a German mathematician, specializing in algebraic geometry and algorithmic algebraic geometry.
Schreyer received his PhD in 1983 from Brandeis University with the thesis "Syzygies of Curves with Special Pencils" under the supervision of David Eisenbud. Schreyer was a professor at the University of Bayreuth and has been a professor at Saarland University since 2002.
He is involved in the development of (algorithmic) algebraic geometry advanced by David Eisenbud. Much of Schreyer's research deals with syzygy theory and the development of algorithms for the calculation of syzygies.
In 2010 he was an invited speaker (jointly with David Eisenbud) at the International Congress of Mathematicians in Hyderabad. In 2012 he was elected a Fellow of the American Mathematical Society.
|
[
{
"math_id": 0,
"text": "\\mathbb P^4"
}
] |
https://en.wikipedia.org/wiki?curid=61989099
|
61991533
|
Stereographic map projection
|
Type of conformal map projection
The stereographic projection, also known as the planisphere projection or the azimuthal conformal projection, is a conformal map projection whose use dates back to antiquity. Like the orthographic projection and gnomonic projection, the stereographic projection is an azimuthal projection, and when on a sphere, also a perspective projection.
On an ellipsoid, the perspective definition of the stereographic projection is not conformal, and adjustments must be made to preserve its azimuthal and conformal properties. The universal polar stereographic coordinate system uses one such ellipsoidal implementation.
History.
The stereographic projection was likely known in its polar aspect to the ancient Egyptians, though its invention is often credited to Hipparchus, who was the first Greek to use it. Its oblique aspect was used by Greek mathematician Theon of Alexandria in the fourth century, and its equatorial aspect was used by Arab astronomer Al-Zarkali in the eleventh century. The earliest written description of it is Ptolemy's "Planisphaerium", which calls it the "planisphere projection".
The stereographic projection was exclusively used for star charts until 1507, when Walther Ludd of St. Dié, Lorraine created the first known instance of a stereographic projection of the Earth's surface. Its popularity in cartography increased after Rumold Mercator used its equatorial aspect for his 1595 atlas. It subsequently saw frequent use throughout the seventeenth century with its equatorial aspect being used for maps of the Eastern and Western hemispheres.
In 1695, Edmond Halley, motivated by his interest in star charts, published the first mathematical proof that this map is conformal. He used the recently established tools of calculus, invented by his friend Isaac Newton.
Formulae.
The spherical form of the stereographic projection is usually expressed in polar coordinates:
formula_0
where formula_1 is the radius of the sphere, and formula_2 and formula_3 are the latitude and longitude, respectively.
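Translated into code, the polar-aspect formulas above read as follows (a minimal sketch; angles are assumed to be in radians):

```python
import math

def polar_stereographic(R, lat, lon):
    """Spherical polar stereographic projection: returns polar map coordinates (r, theta)."""
    r = 2 * R * math.tan(math.pi / 4 - lat / 2)  # diverges as lat approaches -pi/2
    theta = lon
    return r, theta

# e.g. a point at 80 degrees north on a unit sphere:
# polar_stereographic(1.0, math.radians(80), 0.0)
```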
The sphere is normally chosen to model the Earth when the extent of the mapped region exceeds a few hundred kilometers in length in both dimensions. For maps of smaller regions, an ellipsoidal model must be chosen if greater accuracy is required.
The ellipsoidal form of the polar stereographic projection uses conformal latitude. There are various forms of transverse or oblique stereographic projections of ellipsoids. One method uses double projection via a conformal sphere, while other methods do not.
Examples of transverse or oblique stereographic projections include the Miller Oblated Stereographic and the Roussilhe oblique stereographic projection.
Properties.
As an azimuthal projection, the stereographic projection faithfully represents the relative directions of all great circles passing through its center point. As a conformal projection, it faithfully represents angles everywhere. In addition, in its spherical form, the stereographic projection is the only map projection that renders all small circles as circles.
The spherical form of the stereographic projection is equivalent to a perspective projection where the point of perspective is on the point on the globe opposite the center point of the map.
Because the expression for formula_4 diverges as formula_2 approaches formula_5, the stereographic projection is infinitely large, and showing the South Pole (for a map centered on the North Pole) is impossible. However, it is possible to show points arbitrarily close to the South Pole as long as the boundaries of the map are extended far enough.
Derived projections.
The parallels on the Gall stereographic projection are distributed with the same spacing as those on the central meridian of the transverse stereographic projection.
The GS50 projection is formed by mapping the oblique stereographic projection to the complex plane and then transforming points on it via a tenth-order polynomial.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\begin{align}\n r &= 2 R \\tan\\left(\\frac{\\pi}{4} - \\frac{\\varphi}{2}\\right) \\\\\n \\theta &= \\lambda\n\\end{align}\n"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "\\varphi"
},
{
"math_id": 3,
"text": "\\lambda"
},
{
"math_id": 4,
"text": "r"
},
{
"math_id": 5,
"text": "-\\frac{\\pi}{2}"
}
] |
https://en.wikipedia.org/wiki?curid=61991533
|
61992323
|
Chandrasekhar–Page equations
|
A massive fermion wave equation in Kerr spacetime
Chandrasekhar–Page equations describe the wave function of spin-1/2 massive particles, obtained by seeking a separable solution to the Dirac equation in the Kerr metric or Kerr–Newman metric. In 1976, Subrahmanyan Chandrasekhar showed that a separable solution can be obtained from the Dirac equation in the Kerr metric. Later, Don Page extended this work to the Kerr–Newman metric, which is applicable to charged black holes. In his paper, Page notes that N. Toop also derived these results independently, as Chandrasekhar informed him.
By assuming a normal mode decomposition of the form formula_0 (with formula_1 being a half integer and with the convention formula_2) for the time and the azimuthal component of the spherical polar coordinates formula_3, Chandrasekhar showed that the four bispinor components of the wave function,
formula_4
can be expressed as product of radial and angular functions. The separation of variables is effected for the functions formula_5, formula_6, formula_7 and formula_8 (with formula_9 being the angular momentum per unit mass of the black hole) as in
formula_10
formula_11
Chandrasekhar–Page angular equations.
The angular functions satisfy the coupled eigenvalue equations,
formula_12
where formula_13 is the particle's rest mass (measured in units so that it is the inverse of the Compton wavelength),
formula_14
and formula_15. Eliminating formula_16 between the foregoing two equations, one obtains
formula_17
The function formula_18 satisfies the adjoint equation, which can be obtained from the above equation by replacing formula_19 with formula_20. The boundary conditions for these second-order differential equations are that formula_21 (and formula_18) be regular at formula_22 and formula_23. In general, the eigenvalue problem presented here requires numerical integration to be solved. Explicit solutions are available for the case where formula_24.
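Since the eigenvalue problem must in general be solved numerically, a standard approach is a shooting method: integrate the coupled first-order system implied by the two angular equations from one pole towards the other, and adjust the separation constant until the solution stays regular. The sketch below is only illustrative; the parameter values, the crude regular starting data near the pole, and the bisection bracket are all assumptions, not values from the source:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, mu, sigma, m = 0.5, 1.0, 1.5, 0.5   # illustrative parameters only

def rhs(theta, S, lam):
    """First-order system from the two coupled angular equations:
    S+' = -(Q + cot(theta)/2) S+ - (lam - a mu cos(theta)) S-
    S-' =  (Q - cot(theta)/2) S- + (lam + a mu cos(theta)) S+
    with Q = a sigma sin(theta) + m / sin(theta)."""
    Sp, Sm = S
    Q = a * sigma * np.sin(theta) + m / np.sin(theta)
    c = 0.5 / np.tan(theta)
    dSp = -(Q + c) * Sp - (lam - a * mu * np.cos(theta)) * Sm
    dSm = (Q - c) * Sm + (lam + a * mu * np.cos(theta)) * Sp
    return [dSp, dSm]

def shoot(lam, eps=1e-4):
    # crude regular starting data near theta = 0 (an assumption of this sketch)
    sol = solve_ivp(rhs, [eps, np.pi - eps], [0.0, 1.0], args=(lam,),
                    rtol=1e-8, atol=1e-10)
    # simple regularity indicator at the far pole; a zero crossing in lam
    # is taken as the eigenvalue criterion in this sketch
    return sol.y[0, -1]

# bisect on lam; the bracket is hand-picked and a sign change in it is assumed
lo, hi = 0.5, 5.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0:
        hi = mid
    else:
        lo = mid
print("approximate eigenvalue:", 0.5 * (lo + hi))
```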
Chandrasekhar–Page radial equations.
The corresponding radial equations are given by
formula_25
where formula_26 formula_27 is the black hole mass,
formula_28
and formula_29 Eliminating formula_30 from the two equations, we obtain
formula_31
The function formula_30 satisfies the corresponding complex-conjugate equation.
Reduction to one-dimensional scattering problem.
The problem of solving the radial functions for a particular eigenvalue of formula_32 of the angular functions can be reduced to a problem of reflection and transmission as in one-dimensional Schrödinger equation; see also Regge–Wheeler–Zerilli equations. Particularly, we end up with the equations
formula_33
where the Chandrasekhar–Page potentials formula_34 are defined by
formula_35
and formula_36, formula_37 is the tortoise coordinate and formula_38. The functions formula_39 are defined by formula_40, where
formula_41
Unlike the Regge–Wheeler–Zerilli potentials, the Chandrasekhar–Page potentials do not vanish for formula_42, but has the behaviour
formula_43
As a result, the corresponding asymptotic behaviours for formula_44 as formula_42 becomes
formula_45
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "e^{i(\\sigma t + m\\phi)}"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "\\mathrm{Re}\\{\\sigma\\}>0"
},
{
"math_id": 3,
"text": "(r,\\theta,\\phi)"
},
{
"math_id": 4,
"text": " \\begin{bmatrix} F_1(r,\\theta) \\\\ F_2(r,\\theta) \\\\ G_1(r,\\theta) \\\\ G_2(r,\\theta)\\end{bmatrix}e^{i(\\sigma t + m\\phi)}"
},
{
"math_id": 5,
"text": "f_1=(r-ia\\cos\\theta)F_1"
},
{
"math_id": 6,
"text": "f_2=(r-ia\\cos\\theta)F_2"
},
{
"math_id": 7,
"text": "g_1=(r+ia\\cos\\theta)G_1"
},
{
"math_id": 8,
"text": "g_2=(r+ia\\cos\\theta)G_2"
},
{
"math_id": 9,
"text": "a"
},
{
"math_id": 10,
"text": "f_1(r,\\theta) = R_{-\\frac{1}{2}}(r)S_{-\\frac{1}{2}}(\\theta), \\quad f_2(r,\\theta) = R_{+\\frac{1}{2}}(r)S_{+\\frac{1}{2}}(\\theta),"
},
{
"math_id": 11,
"text": "g_1(r,\\theta) = R_{+\\frac{1}{2}}(r)S_{-\\frac{1}{2}}(\\theta), \\quad g_2(r,\\theta) = R_{-\\frac{1}{2}}(r)S_{+\\frac{1}{2}}(\\theta)."
},
{
"math_id": 12,
"text": "\n\\begin{align}\n\\mathcal{L}_{\\frac{1}{2}} S_{+\\frac{1}{2}} &= -(\\lambda - a\\mu \\cos\\theta )S_{-\\frac{1}{2}}, \\\\\n\\mathcal{L}_{\\frac{1}{2}}^{\\dagger} S_{-\\frac{1}{2}} &= +(\\lambda + a\\mu \\cos\\theta )S_{+\\frac{1}{2}},\n\\end{align}\n"
},
{
"math_id": 13,
"text": "\\mu"
},
{
"math_id": 14,
"text": "\\mathcal{L}_n = \\frac{{d}}{{d}\\theta} + Q + n\\cot \\theta, \\quad \\mathcal{L}_n^{\\dagger} = \\frac{{d}}{{d}\\theta} - Q + n\\cot\\theta"
},
{
"math_id": 15,
"text": "Q= a\\sigma\\sin\\theta + m \\csc\\theta"
},
{
"math_id": 16,
"text": "S_{+1/2}(\\theta)"
},
{
"math_id": 17,
"text": "\\left(\\mathcal{L}_{\\frac{1}{2}}\\mathcal{L}_{\\frac{1}{2}}^{\\dagger} + \\frac{a\\mu\\sin\\theta}{\\lambda + a\\mu\\cos\\theta} \\mathcal{L}_{\\frac{1}{2}}^{\\dagger} + \\lambda^2 - a^2\\mu^2\\cos^2\\theta\\right) S_{-\\frac{1}{2}} = 0."
},
{
"math_id": 18,
"text": "S_{+\\frac{1}{2}}"
},
{
"math_id": 19,
"text": "\\theta"
},
{
"math_id": 20,
"text": "\\pi-\\theta"
},
{
"math_id": 21,
"text": "S_{-\\frac{1}{2}}"
},
{
"math_id": 22,
"text": "\\theta=0"
},
{
"math_id": 23,
"text": "\\theta=\\pi"
},
{
"math_id": 24,
"text": "\\sigma=\\mu"
},
{
"math_id": 25,
"text": "\n\\begin{align}\n\\Delta^{\\frac{1}{2}}\\mathcal{D}_{0} R_{-\\frac{1}{2}} &= (\\lambda +i\\mu r)\\Delta^{\\frac{1}{2}}R_{+\\frac{1}{2}}, \\\\\n\\Delta^{\\frac{1}{2}}\\mathcal{D}_{0}^\\dagger R_{+\\frac{1}{2}} &= (\\lambda -i\\mu r)R_{-\\frac{1}{2}},\n\\end{align}\n"
},
{
"math_id": 26,
"text": "\\Delta = r^2 - 2Mr + a^2,"
},
{
"math_id": 27,
"text": "M"
},
{
"math_id": 28,
"text": "\\mathcal{D}_n = \\frac{{d}}{{d}r} + \\frac{iK}{\\Delta} + 2n \\frac{r-M}{\\Delta}, \\quad \\mathcal{D}_n^\\dagger = \\frac{{d}}{{d}r} - \\frac{iK}{\\Delta} + 2n \\frac{r-M}{\\Delta},"
},
{
"math_id": 29,
"text": "K = (r^2+a^2)\\sigma + am."
},
{
"math_id": 30,
"text": "\\Delta^{\\frac{1}{2}} R_{+\\frac{1}{2}}"
},
{
"math_id": 31,
"text": "\\left(\\Delta\\mathcal{D}_{\\frac{1}{2}}^\\dagger\\mathcal{D}_{0} - \\frac{i\\mu \\Delta}{\\lambda + i\\mu r}\\mathcal{D}_0 -\\lambda^2 - \\mu^2r^2\\right) R_{-\\frac{1}{2}} = 0."
},
{
"math_id": 32,
"text": "\\lambda"
},
{
"math_id": 33,
"text": "\\left(\\frac{d^2}{d\\hat r_*^2} + \\sigma^2\\right) Z^{\\pm} = V^{\\pm} Z^{\\pm},"
},
{
"math_id": 34,
"text": "V^\\pm"
},
{
"math_id": 35,
"text": "V^{\\pm} = W^2 \\pm \\frac{dW}{d\\hat r_*}, \\quad W = \\frac{\\Delta^{\\frac{1}{2}}(\\lambda + \\mu^2r^2)^{3/2}}{\\varpi^2(\\lambda^2+\\mu^2r^2) + \\lambda\\mu\\Delta/2\\sigma},"
},
{
"math_id": 36,
"text": "\\hat r_*=r_*+\\tan^{-1}(\\mu r/\\lambda)/2\\sigma"
},
{
"math_id": 37,
"text": "r_*=r+2M\\ln(r/2M-1)"
},
{
"math_id": 38,
"text": "\\varpi^2 = r^2+a^2 + am/\\sigma"
},
{
"math_id": 39,
"text": "Z^{\\pm}(\\hat r_*)"
},
{
"math_id": 40,
"text": "Z^\\pm = \\psi^+ \\pm \\psi^-"
},
{
"math_id": 41,
"text": "\\psi^+ = \\Delta^{\\frac{1}{2}} R_{+\\frac{1}{2}} \\mathrm{exp}\\left(+\\frac{i}{2}\\tan^{-1} \\frac{\\mu r}{\\lambda}\\right), \\quad \\psi^- = R_{-\\frac{1}{2}} \\mathrm{exp}\\left(-\\frac{i}{2}\\tan^{-1} \\frac{\\mu r}{\\lambda}\\right)."
},
{
"math_id": 42,
"text": "r\\to\\infty"
},
{
"math_id": 43,
"text": "V^\\pm = \\mu^2\\left(1 - \\frac{2M}{r} + \\cdots\\right)."
},
{
"math_id": 44,
"text": "Z^\\pm"
},
{
"math_id": 45,
"text": "Z^\\pm = \\mathrm{exp}\\left\\{\\pm i \\left[(\\sigma^2-\\mu^2)^{1/2}r+ \\frac{M\\mu^2}{(\\sigma^2-\\mu^2)^{1/2}}\\ln \\frac{r}{2M}\\right]\\right\\}."
}
] |
https://en.wikipedia.org/wiki?curid=61992323
|
61992522
|
Discrete spectrum (mathematics)
|
In mathematics, specifically in spectral theory, a discrete spectrum of a closed linear operator is defined as the set of isolated points of its spectrum such that the rank of the corresponding Riesz projector is finite.
Definition.
A point formula_0
in the spectrum formula_1 of a closed linear operator formula_2 in the Banach space formula_3 with domain formula_4 is said to belong to the "discrete spectrum" formula_5 of formula_6 if the following two conditions are satisfied:
1. formula_7 is an isolated point in formula_1;
2. the rank of the corresponding Riesz projector formula_8 is finite.
Here formula_9 is the identity operator in the Banach space formula_3 and formula_10 is a smooth simple closed counterclockwise-oriented curve bounding an open region formula_11 such that formula_7 is the only point of the spectrum of formula_6 in the closure of formula_12; that is, formula_13
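For a matrix (finite-dimensional) operator, the Riesz projector formula_8 can be evaluated by discretizing the contour integral directly, which makes the rank condition concrete. A minimal numerical sketch, with a hand-picked matrix and contour radius (both illustrative):

```python
import numpy as np

A = np.diag([1.0, 1.0, 5.0])   # lambda = 1 is isolated with algebraic multiplicity 2
lam, radius, n = 1.0, 0.5, 400
I = np.eye(3)

# P = -(1 / (2 pi i)) * contour integral of (A - z I)^{-1} dz,
# approximated on a circle around lam by the trapezoid rule
P = np.zeros((3, 3), dtype=complex)
for k in range(n):
    w = np.exp(2j * np.pi * k / n)
    z = lam + radius * w
    dz = 2j * np.pi * radius * w / n
    P += -np.linalg.inv(A - z * I) * dz / (2j * np.pi)

print(np.round(P.real, 6))                 # projector onto the eigenspace of lambda = 1
print("rank:", np.linalg.matrix_rank(P))   # 2, finite: lambda = 1 is in the discrete spectrum
```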
Relation to normal eigenvalues.
The discrete spectrum formula_5 coincides with the set of normal eigenvalues of formula_6:
formula_14
Relation to isolated eigenvalues of finite algebraic multiplicity.
In general, the rank of the Riesz projector can be larger than the dimension of the root lineal formula_15 of the corresponding eigenvalue, and in particular it is possible to have formula_16, formula_17. So, there is the following inclusion:
formula_18
In particular, for a quasinilpotent operator
formula_19
one has
formula_20, formula_17,
formula_21,
formula_22.
Relation to the point spectrum.
The discrete spectrum formula_5 of an operator formula_6 is not to be confused with the point spectrum formula_23, which is defined as the set of eigenvalues of formula_6.
While each point of the discrete spectrum belongs to the point spectrum,
formula_24
the converse is not necessarily true: the point spectrum does not necessarily consist of isolated points of the spectrum, as one can see from the example of the "left shift operator",
formula_25
For this operator, the point spectrum is the unit disc of the complex plane, the spectrum is the closure of the unit disc, while the discrete spectrum is empty:
formula_26
|
[
{
"math_id": 0,
"text": "\\lambda\\in\\C"
},
{
"math_id": 1,
"text": "\\sigma(A)"
},
{
"math_id": 2,
"text": "A:\\,\\mathfrak{B}\\to\\mathfrak{B}"
},
{
"math_id": 3,
"text": "\\mathfrak{B}"
},
{
"math_id": 4,
"text": "\\mathfrak{D}(A)\\subset\\mathfrak{B}"
},
{
"math_id": 5,
"text": "\\sigma_{\\mathrm{disc}}(A)"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "\\lambda"
},
{
"math_id": 8,
"text": "P_\\lambda=\\frac{-1}{2\\pi\\mathrm{i}}\\oint_\\Gamma(A-z I_{\\mathfrak{B}})^{-1}\\,dz"
},
{
"math_id": 9,
"text": "I_{\\mathfrak{B}}"
},
{
"math_id": 10,
"text": "\\Gamma\\subset\\C"
},
{
"math_id": 11,
"text": "\\Omega\\subset\\C"
},
{
"math_id": 12,
"text": "\\Omega"
},
{
"math_id": 13,
"text": "\\sigma(A)\\cap\\overline{\\Omega}=\\{\\lambda\\}."
},
{
"math_id": 14,
"text": "\\sigma_{\\mathrm{disc}}(A)=\\{\\mbox{normal eigenvalues of }A\\}."
},
{
"math_id": 15,
"text": "\\mathfrak{L}_\\lambda"
},
{
"math_id": 16,
"text": "\\mathrm{dim}\\,\\mathfrak{L}_\\lambda<\\infty"
},
{
"math_id": 17,
"text": "\\mathrm{rank}\\,P_\\lambda=\\infty"
},
{
"math_id": 18,
"text": "\\sigma_{\\mathrm{disc}}(A)\\subset\\{\\mbox{isolated points of the spectrum of }A\\mbox{ with finite algebraic multiplicity}\\}."
},
{
"math_id": 19,
"text": "Q:\\,l^2(\\N)\\to l^2(\\N),\\qquad Q:\\,(a_1,a_2,a_3,\\dots)\\mapsto (0,a_1/2,a_2/2^2,a_3/2^3,\\dots),"
},
{
"math_id": 20,
"text": "\\mathfrak{L}_\\lambda(Q)=\\{0\\}"
},
{
"math_id": 21,
"text": "\\sigma(Q)=\\{0\\}"
},
{
"math_id": 22,
"text": "\\sigma_{\\mathrm{disc}}(Q)=\\emptyset"
},
{
"math_id": 23,
"text": "\\sigma_{\\mathrm{p}}(A)"
},
{
"math_id": 24,
"text": "\\sigma_{\\mathrm{disc}}(A)\\subset\\sigma_{\\mathrm{p}}(A),"
},
{
"math_id": 25,
"text": "\nL:\\,l^2(\\N)\\to l^2(\\N),\n\\quad\nL:\\,(a_1,a_2,a_3,\\dots)\\mapsto (a_2,a_3,a_4,\\dots).\n"
},
{
"math_id": 26,
"text": "\\sigma_{\\mathrm{p}}(L)=\\mathbb{D}_1,\n\\qquad\n\\sigma(L)=\\overline{\\mathbb{D}_1};\n\\qquad\n\\sigma_{\\mathrm{disc}}(L)=\\emptyset.\n"
}
] |
https://en.wikipedia.org/wiki?curid=61992522
|
61994831
|
Homotopy theory
|
Branch of mathematics
In mathematics, homotopy theory is a systematic study of situations in which maps can come with homotopies between them. It originated as a topic in algebraic topology, but nowadays is studied as an independent discipline.
Applications to other fields of mathematics.
Besides algebraic topology, the theory has also been used in other areas of mathematics such as:
Concepts.
Spaces and maps.
In homotopy theory and algebraic topology, the word "space" denotes a topological space. In order to avoid pathologies, one rarely works with arbitrary spaces; instead, one requires spaces to meet extra constraints, such as being compactly generated, or Hausdorff, or a CW complex.
In the same vein as above, a "map" is a continuous function, possibly with some extra constraints.
Often, one works with a pointed space—that is, a space with a "distinguished point", called a basepoint. A pointed map is then a map which preserves basepoints; that is, it sends the basepoint of the domain to that of the codomain. In contrast, a free map is one which needn't preserve basepoints.
Homotopy.
Let "I" denote the unit interval. A family of maps indexed by "I", formula_0 is called a homotopy from formula_1 to formula_2 if formula_3 is a map (e.g., it must be a continuous function). When "X", "Y" are pointed spaces, the formula_4 are required to preserve the basepoints. A homotopy can be shown to be an equivalence relation. Given a pointed space "X" and an integer formula_5, let formula_6 be the homotopy classes of based maps formula_7 from a (pointed) "n"-sphere formula_8 to "X". As it turns out, formula_9 are groups; in particular, formula_10 is called the fundamental group of "X".
If one prefers to work with a space instead of a pointed space, there is the notion of a fundamental groupoid (and higher variants): by definition, the fundamental groupoid of a space "X" is the category where the objects are the points of "X" and the morphisms are paths.
Cofibration and fibration.
A map formula_11 is called a cofibration if, given (1) a map formula_12 and (2) a homotopy formula_13, there exists a homotopy formula_14 that extends formula_1 and such that formula_15. In some loose sense, it is an analog of the defining diagram of an injective module in abstract algebra. The most basic example is a CW pair formula_16; since many work only with CW complexes, the notion of a cofibration is often implicit.
A fibration in the sense of Serre is the dual notion of a cofibration: that is, a map formula_17 is a fibration if given (1) a map formula_18 and (2) a homotopy formula_19, there exists a homotopy formula_20 such that formula_1 is the given one and formula_21. A basic example is a covering map (in fact, a fibration is a generalization of a covering map). If formula_22 is a principal "G"-bundle, that is, a space with a free (topological) action of a (topological) group "G", then the projection map formula_23 is an example of a fibration.
Classifying spaces and homotopy operations.
Given a topological group "G", the classifying space for principal "G"-bundles ("the" up to equivalence) is a space formula_24 such that, for each space "X",
formula_25 {principal "G"-bundle on "X"} / ~ formula_26
where the bijection sends the homotopy class of a map formula_27 to the pullback along it of the universal bundle formula_28.
Brown's representability theorem guarantees the existence of classifying spaces.
Spectrum and generalized cohomology.
The idea that a classifying space classifies principal bundles can be pushed further. For example, one might try to classify cohomology classes: given an abelian group "A" (such as formula_29),
formula_30
where formula_31 is the Eilenberg–MacLane space. The above equation leads to the notion of a generalized cohomology theory; i.e., a contravariant functor from the category of spaces to the category of abelian groups that satisfies the axioms generalizing ordinary cohomology theory. As it turns out, such a functor may not be representable by a space but it can always be represented by a sequence of (pointed) spaces with structure maps called a spectrum. In other words, to give a generalized cohomology theory is to give a spectrum.
A basic example of a spectrum is a sphere spectrum: formula_32
Obstruction theory and characteristic class.
See also: Characteristic class, Postnikov tower, Whitehead torsion
Specific theories.
There are several specific theories
Homotopy hypothesis.
One of the basic questions in the foundations of homotopy theory is the nature of a space. The homotopy hypothesis asks whether a space is something fundamentally algebraic.
|
[
{
"math_id": 0,
"text": "h_t : X \\to Y"
},
{
"math_id": 1,
"text": "h_0"
},
{
"math_id": 2,
"text": "h_1"
},
{
"math_id": 3,
"text": "h : I \\times X \\to Y, (t, x) \\mapsto h_t(x)"
},
{
"math_id": 4,
"text": "h_t"
},
{
"math_id": 5,
"text": "n \\ge 1"
},
{
"math_id": 6,
"text": "\\pi_n(X) = [S^n, X]_*"
},
{
"math_id": 7,
"text": "S^n \\to X"
},
{
"math_id": 8,
"text": "S^n"
},
{
"math_id": 9,
"text": "\\pi_n(X)"
},
{
"math_id": 10,
"text": "\\pi_1(X)"
},
{
"math_id": 11,
"text": "f: A \\to X"
},
{
"math_id": 12,
"text": "h_0 : X \\to Z"
},
{
"math_id": 13,
"text": "g_t : A \\to Z"
},
{
"math_id": 14,
"text": "h_t : X \\to Z"
},
{
"math_id": 15,
"text": "h_t \\circ f = g_t"
},
{
"math_id": 16,
"text": "(X, A)"
},
{
"math_id": 17,
"text": "p : X \\to B"
},
{
"math_id": 18,
"text": "Z \\to X"
},
{
"math_id": 19,
"text": "g_t : Z \\to B"
},
{
"math_id": 20,
"text": "h_t: Z \\to X"
},
{
"math_id": 21,
"text": "p \\circ h_t = g_t"
},
{
"math_id": 22,
"text": "E"
},
{
"math_id": 23,
"text": "p: E \\to X"
},
{
"math_id": 24,
"text": "BG"
},
{
"math_id": 25,
"text": "[X, BG] = "
},
{
"math_id": 26,
"text": ", \\,\\, [f] \\mapsto f^* EG"
},
{
"math_id": 27,
"text": "X \\to BG"
},
{
"math_id": 28,
"text": "EG"
},
{
"math_id": 29,
"text": "\\mathbb{Z}"
},
{
"math_id": 30,
"text": "[X, K(A, n)] = \\operatorname{H}^n(X; A)"
},
{
"math_id": 31,
"text": "K(A, n)"
},
{
"math_id": 32,
"text": "S^0 \\to S^1 \\to S^2 \\to \\cdots"
}
] |
https://en.wikipedia.org/wiki?curid=61994831
|
61995510
|
Triboracyclopropenyl
|
The triboracyclopropenyl fragment is a cyclic structural motif in boron chemistry, named for its geometric similarity to cyclopropene. In contrast to nonplanar borane clusters that exhibit higher coordination numbers at boron (e.g., through 3-center 2-electron bonds to bridging hydrides or cations), triboracyclopropenyl-type structures are rings of three boron atoms where substituents at each boron are also coplanar to the ring. Triboracyclopropenyl-containing compounds are extreme cases of inorganic aromaticity. They are the lightest and smallest cyclic structures known to display the bonding and magnetic properties that originate from fully delocalized electrons in orbitals of σ and π symmetry. Although three-membered rings of boron are frequently so highly strained as to be experimentally inaccessible, academic interest in their distinctive aromaticity and possible role as intermediates of borane pyrolysis motivated extensive computational studies by theoretical chemists. Beginning in the late 1980s with mass spectrometry work by Anderson "et al". on all-boron clusters, experimental studies of triboracyclopropenyls were for decades exclusively limited to gas-phase investigations of the simplest rings (ions of B3). However, more recent work has stabilized the triboracyclopropenyl moiety via coordination to donor ligands or transition metals, dramatically expanding the scope of its chemistry.
Synthesis.
For gas-phase spectroscopic studies, triboracyclopropenyl-containing compounds are obtained via laser ablation of boron targets and collimation of the resulting plasma cloud in a flow of inert carrier gas such as helium. The charged molecules of interest are then mass-selected by time-of-flight mass spectrometry. Addition of gases such as N2 or CO to the gas stream affords the corresponding adducts, while addition of metals such as iridium and vanadium to the B target yields the corresponding metal-doped clusters.
The sole isolable example of a triboracyclopropenyl anion that persists in solution and in the solid state was identified by Braunschweig and coworkers, who synthesized it by reducing the aminoborane Cl2B=NCy2 (Cy = cyclohexyl) with finely dispersed sodium metal in dimethoxyethane (DME). Cooling of the resulting orange-red solution of the dimeric species Na4[B3(NCy2)3]2 • 2 DME resulted in crystals suitable for X-ray diffraction, by which the structure was determined. Although the detailed reduction mechanism is unknown, it has been suggested that subvalent "R2N−B" intermediates are involved in the formation of such boron clusters.
Structure and bonding.
Due to their special status as the simplest aromatic cycles, the electronic structure of triboracyclopropenyl derivatives has been analyzed with a variety of techniques in computational chemistry. These have ranged from canonical molecular orbital theory to alternative formulations of bonding such as adaptive natural density partitioning theory, the quantum theory of atoms in molecules, natural bond orbital theory, natural orbitals for chemical valence and electron localization function analysis. NICS and ring current calculations have also been used to characterize the aromaticity in such systems by using magnetic criteria. In general, the extremely small size of these cycles implies that their bonding electrons experience substantial Coulomb repulsion, resulting in abnormally high ring strain. This effect is partially compensated for by the stabilization offered by aromatic delocalization.
B3+.
B3+ displays π aromaticity associated with its a2"-symmetric HOMO. In its singlet electronic ground state, it is a Hückel 2π electron system analogous to the cyclopropenium cation, but it is too reactive to be isolated. It is triangular, with D3h symmetry - all of its B atoms and B-B bond distances are chemically equivalent. The gas-phase adducts B3(N2)3+ and B3(CO)3+ have been computationally studied through ETS-NOCV (extended transition state - natural orbitals for chemical valence) theory, which dissects the changes in energy and electron density that result as a molecule is prepared from a reference state of noninteracting fragments. ETS-NOCV energy decomposition analysis suggests that the N2 and CO adducts are primarily stabilized (by -83.6 and -112.3 kcal/mol respectively) through σ donation of the exocyclic ligands into the highly electron-deficient boron ring. As a result, each was interpreted as a B3+ moiety supported by dative bonding from N2 or CO. The electron deformation density constructed from the NOCVs of this system, together with charges derived from natural bond orbital populations, indicate electron flow from the exocyclic ligand into the ring, which induces all the equivalent bonds of the B3+ core to shorten by approximately 4 pm. π-symmetry interactions are observed with both the weak σ donor N2 and the strong π acceptor ligand CO. However, the out-of-plane π backdonation (from the π system of the B3 ring to the π acceptor orbitals of each ligand) is less stabilizing than the in-plane π backdonation, with strengths of -26.7 and -19.6 kcal/mol for the [B3(CO)2+ + CO] system. This suggests that the minimum-energy configuration of the molecule is one which preserves maximal π aromaticity in the B3+ core.
Just as aromatic species like the cyclopentadienyl anion and the cyclopropenium cation can coordinate to transition metals, it was recently demonstrated that the B3+ ring can bind to metal centers. Laser ablation of a mixed B/Ir target produces two isomers of IrB3−, a B3+ ring coordinated to a formal Ir2- anion. These are a pseudo-planar η2 adduct and a tetrahedral η3 adduct, the latter of which contains an aromatic triboracyclopropenyl fragment. Both are nearly identical in energy and coexist in the generated cluster beam.
Computations suggest that B3+ may even bind inert noble-gas atoms to form an unusual family of compounds B3(Rg)3+ (Rg = rare/noble gas), with nonnegligible bond strengths (from 15 to 30 kcal/mol) that originate from Rg p-orbital σ donicity and a significant degree of charge transfer from Rg to B3+. The possibility of new noble-gas compounds that form exothermally and spontaneously is an opportunity for experimental work.
B3.
B3 possesses a singly occupied a1' HOMO (a SOMO) that consists of σ-symmetric orbitals oriented toward the core of the ring, associated with σ delocalization and slightly shorter B-B bond lengths as compared to B3+. It is paramagnetic with a doublet ground state. It is nonpolar, flat and triangular, having D3h symmetry.
B3−.
B3−, with a filled a1' HOMO in D3h symmetry, is considered to be "doubly" aromatic and relatively stable - it simultaneously possesses highly delocalized σ and π electrons in its HOMO and HOMO-1 respectively.
B3R32-.
B3R32-, formulated with electron-sharing B−R bonds rather than dative arrows, is isoelectronic to B3+. 8 electrons are assigned to the triboracyclopropenyl core, 6 in σ bonding orbitals and 2 in the π system, resulting in Hückel aromaticity. The only experimentally characterized compound of this class is Na4[B3(NCy2)3]2 • 2 DME, a dimer of stacked B3R32- units which are themselves aromatic. Natural bond orbital analysis indicates that this compound is highly stabilized (by roughly 45 kcal/mol) by a donor-acceptor interaction of localized B−B bond orbitals with the corresponding B−N antibonding orbitals across the ring, in addition to being bound together by electrostatic attraction to bridging Na+ cations identified in the crystal structure. DFT calculations show that the HOMO and HOMO-1 are antisymmetric and symmetric combinations of the π HOMO of an individual ring, respectively - a feature shared with metallocenes. As expected for a species with B−B bonds that have a formal MO bond order of formula_0 (the 8 delocalized electrons form 4 bonding pairs shared over 3 B−B linkages), the average B-B bond length of 1.62 Å is closer to those of diborene (R-B=B-R) radical cations than to B−B single bonds of roughly 1.75 Å.
Spectroscopy and spectrometry.
Triboracyclopropenyl-derived compounds were first identified by their mass-to-charge ratio, as transient species in the mass spectrometry of complex mixtures of cationic boron clusters. Reactive scattering studies with O2 soon followed, revealing the relatively strong bonding within light boron clusters. Subsequently, B3 was isolated in matrices of frozen noble gases and electron paramagnetic resonance spectra were recorded which confirmed its D3h geometry. Hyperfine coupling of the unpaired electron to the 11B nucleus provided an estimate of 15% s-orbital character for the a1' HOMO. The small and nonpolar B3 rings were able to tumble and rotate freely even when confined in the matrix.
In general, triboracyclopropenyl-containing species have been too short-lived and produced in insufficient quantity for transmission-mode infrared spectroscopy. However, dissociating B3(N2)3+ with infrared light and observing the decay of the corresponding mass-to-charge signal via mass spectrometry allowed an effective infrared spectrum of B3(N2)3+ to be recorded. This vibrational photodissociation spectrum contained only a single detectable vibration with a redshift of 98 cm−1 relative to gaseous N2, suggesting a highly symmetric B3(N2)3+ adduct with slightly weakened N≡N bonding.
Negatively charged ions containing triboracyclopropenyl have proven amenable to study by photoelectron spectroscopy. By Koopmans' theorem, neglecting the effects of strong electron correlation, the kinetic energies of electrons detached by X-rays can be mapped onto binding energies of individual orbitals and reveal the molecular electronic structure. Splitting of the resulting spectral peaks from "vibrational progression" (according to the Franck-Condon principle) indicates how ionization at different energies changes specific vibrational frequencies of the molecule, and such effects on bonding are interpreted in terms of changes to the electron configuration. In B3−, an unusually high-intensity and high energy band corresponding to a multielectron or "shake-up" transition (coupled electron detachment and electronic excitation) was observed, suggesting strong electron correlation present in the triboracyclopropenyl fragment. For IrB3−, vibrational progression from the stretching and breathing vibrations of IrB3 could be assigned in the overlaid spectra of both isomers present in the cluster beam. By comparison to computations, the minimum energy structure of IrB3 could then be formulated as a tetrahedron with an intact, aromatic B3+ moiety.
Reactivity.
The reactivity of triboracyclopropenyl-containing compounds is relatively under-explored, as only one example has been prepared in the solution phase. The compound reported by Braunschweig, Na4[B3(NCy2)3]2 • 2 DME, is an extremely potent reductant with an oxidation potential of -2.42 V vs. the ferrocene/ferrocenium couple. As a result, it is capable of reducing chloroboranes to afford tetrahedral B clusters, along with reducing PbCl2 directly to metallic Pb. In addition, it will undergo a ring-opening reaction at the B3 moiety by abstracting chlorine atoms from hexachloroethane. This level of reducing power is roughly comparable to an alkali metal, and has not been previously observed for any molecule based on an organic framework.
Although most examples of transition metal-doped trinuclear boron clusters do not contain an aromatic triboracyclopropenyl fragment, the reactivity of such species with small molecules is likely to attract increasing scientific interest. It has been demonstrated under the conditions of mass spectrometry that VB3+ dehydrogenates methane to afford the products VB3CH2+ and H2. A minor side reaction that produces VH+ and eliminates B3CH3 is also operative.
|
[
{
"math_id": 0,
"text": "4/3"
}
] |
https://en.wikipedia.org/wiki?curid=61995510
|
619984
|
Surface layer
|
Layer of a turbulent fluid affected by interaction with a surface
The surface layer is the layer of a turbulent fluid most affected by interaction with a solid surface or the surface separating a gas and a liquid where the characteristics of the turbulence depend on distance from the interface. Surface layers are characterized by large normal gradients of tangential velocity and large concentration gradients of any substances (temperature, moisture, sediments et cetera) transported to or from the interface.
The term boundary layer is used in meteorology and physical oceanography. The atmospheric surface layer is the lowest part of the atmospheric boundary layer (typically the bottom 10% where the log wind profile is valid). The ocean has two surface layers: the benthic, found immediately above the sea floor, and the marine surface layer, at the air-sea interface.
Mathematical formulation.
A simple model of the surface layer can be derived by first examining the turbulent momentum flux through a surface.
Using Reynolds decomposition to express the horizontal flow in the formula_0 direction as the sum of a slowly varying component, formula_1, and a turbulent component, formula_2,
formula_3
and the vertical flow, formula_4, in an analogous fashion,
formula_5
we can express the flux of turbulent momentum through a surface, formula_6, as the time-averaged magnitude of vertical turbulent transport of horizontal turbulent momentum, formula_7:
formula_8.
If the flow is homogeneous within the region, we can set the product of the vertical gradient of the mean horizontal flow and the eddy viscosity coefficient formula_9 equal to formula_10:
formula_11,
where formula_9 is defined in terms of Prandtl's mixing length hypothesis:
formula_12
where formula_13 is the mixing length.
We can then express formula_6 as:
formula_14.
Assumptions about the mixing length.
From the figure above, we can see that the size of a turbulent eddy near the surface is constrained by its proximity to the surface; turbulent eddies centered near the surface cannot be as large as those centered further from the surface. From this consideration, and in neutral conditions, it is reasonable to assume that the mixing length, formula_13 is proportional to the eddy's depth in the surface:
formula_15,
where formula_16 is the depth and formula_17 is known as the von Kármán constant. Thus the gradient can be integrated to solve for formula_1:
formula_18.
So, we see that the mean flow in the surface layer has a logarithmic relationship with depth. In non-neutral conditions the mixing length is also affected by buoyancy forces and Monin-Obukhov similarity theory is required to describe the horizontal-wind profile.
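The resulting logarithmic profile is straightforward to evaluate numerically. In the sketch below, the friction velocity and the roughness length (the constant of integration in the formula above, at which the mean flow vanishes) are illustrative values, not values from the source:

```python
import numpy as np

kappa = 0.40    # von Karman constant
u_star = 0.3    # friction velocity, m/s (illustrative)
z0 = 0.01       # roughness length, m (illustrative; the mean flow vanishes at z = z0)

z = np.array([0.1, 1.0, 10.0, 100.0])   # heights above the surface, m
u = (u_star / kappa) * np.log(z / z0)
print(u)        # the mean flow increases logarithmically with distance from the surface
```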
Surface layer in oceanography.
The surface layer is studied in oceanography, as both the wind stress and action of surface waves can cause turbulent mixing necessary for the formation of a surface layer.
The world's oceans are made up of many different water masses. Each has particular temperature and salinity characteristics as a result of the location in which it formed. Once formed at a particular source, a water mass will travel some distance via large-scale ocean circulation. Typically, the flow of water in the ocean is described as turbulent (i.e. it doesn't follow straight lines). Water masses can travel across the ocean as turbulent eddies, or parcels of water, usually along constant-density (isopycnic) surfaces where the expenditure of energy is smallest. When these turbulent eddies of different water masses interact, they will mix together. With enough mixing, some stable equilibrium is reached and a mixed layer is formed. Turbulent eddies can also be produced from wind stress by the atmosphere on the ocean. This kind of interaction and mixing through buoyancy at the surface of the ocean also plays a role in the formation of a surface mixed layer.
Discrepancies with traditional theory.
The logarithmic flow profile has long been observed in the ocean, but recent, highly sensitive measurements reveal a sublayer within the surface layer in which turbulent eddies are enhanced by the action of surface waves.
It is becoming clear that the surface layer of the ocean is only poorly modeled as being up against the "wall" of the air-sea interaction.
Observations of turbulence in Lake Ontario reveal that, under wave-breaking conditions, the traditional theory significantly underestimates the production of turbulent kinetic energy within the surface layer.
Diurnal cycle.
The depth of the surface mixed layer is affected by solar insolation and thus is related to the diurnal cycle. After nighttime convection over the ocean, the turbulent surface layer is found to completely decay and restratify. The decay is caused by the decrease in solar insolation, divergence of turbulent flux and relaxation of lateral gradients.
During the nighttime, the surface ocean cools because solar heating ceases after sunset. Cooler water is less buoyant and sinks. This buoyancy effect transports water masses to depths even lower than those reached during the daytime. During the following daytime, water at depth restratifies, or un-mixes, because the warming of the sea surface drives the warmed water upward. The entire cycle repeats, and the water is mixed again during the following nighttime.
In general, the surface mixed layer occupies only the first 100 meters of the ocean, but can reach 150 m at the end of winter. The diurnal cycle does not change the depth of the mixed layer significantly relative to the seasonal cycle, which produces much larger changes in sea surface temperature and buoyancy. With several vertical profiles, one can estimate the depth of the mixed layer by assigning a set temperature or density difference in water between surface and deep ocean observations – this is known as the "threshold method".
However, this diurnal cycle does not have the same effect in midlatitudes as it does at tropical latitudes. Tropical regions are less likely than midlatitude regions to have a mixed layer dependent on diurnal temperature changes. One study explored diurnal variability of the mixed layer depth in the Western Equatorial Pacific Ocean. Results suggested no appreciable change in the mixed layer depth with the time of day. The significant precipitation in this tropical area would lead to further stratification of the mixed layer. Another study which instead focused on the Central Equatorial Pacific Ocean found a tendency for increased depths of the mixed layer during nighttime. The extratropical or midlatitude mixed layer was shown in one study to be more affected by diurnal variability than the results of the two tropical ocean studies. Over a 15-day study period in Australia, the diurnal mixed layer cycle repeated in a consistent manner with decaying turbulence throughout the day.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "\\overline{u}"
},
{
"math_id": 2,
"text": "u'"
},
{
"math_id": 3,
"text": " u = \\overline{u} + u'"
},
{
"math_id": 4,
"text": "w"
},
{
"math_id": 5,
"text": " w = \\overline{w} + w' "
},
{
"math_id": 6,
"text": "u_*"
},
{
"math_id": 7,
"text": "u'w'"
},
{
"math_id": 8,
"text": " u_*^2 = \\left|\\overline{(u'w')_s} \\right|"
},
{
"math_id": 9,
"text": "K_m"
},
{
"math_id": 10,
"text": "u_*^2"
},
{
"math_id": 11,
"text": "K_m\\frac{\\partial \\overline{u}}{\\partial z} = u_*^2 "
},
{
"math_id": 12,
"text": "K_m = \\overline{\\xi'^2}\\left |\\frac{\\partial\\overline{u}}{\\partial z}\\right |"
},
{
"math_id": 13,
"text": "\\xi'"
},
{
"math_id": 14,
"text": "\\frac{\\partial \\overline{u}}{\\partial z} = \\frac{u_*}{\\overline{\\xi'}}"
},
{
"math_id": 15,
"text": "\\xi' = kz"
},
{
"math_id": 16,
"text": "z"
},
{
"math_id": 17,
"text": "k"
},
{
"math_id": 18,
"text": "\\overline{u} = \\frac{u_*}{k}\\ln\\frac{z}{z_o}"
}
] |
https://en.wikipedia.org/wiki?curid=619984
|
6200269
|
Mix network
|
Routing protocol
Mix networks are routing protocols that create hard-to-trace communications by using a chain of proxy servers known as "mixes" which take in messages from multiple senders, shuffle them, and send them back out in random order to the next destination (possibly another mix node). This breaks the link between the source of the request and the destination, making it harder for eavesdroppers to trace end-to-end communications. Furthermore, each mix knows only the node from which it immediately received a message and the immediate destination to send the shuffled messages to, making the network resistant to malicious mix nodes.
Each message is encrypted to each proxy using public key cryptography; the resulting encryption is layered like a Russian doll (except that each "doll" is of the same size) with the message as the innermost layer. Each proxy server strips off its own layer of encryption to reveal where to send the message next. If all but one of the proxy servers are compromised by the tracer, untraceability can still be achieved against some weaker adversaries.
The concept of mix networks was first described by David Chaum in 1981. Applications that are based on this concept include anonymous remailers (such as Mixmaster), onion routing, garlic routing, and key-based routing (including Tor, I2P, and Freenet).
History.
David Chaum first described the concept of mix networks in 1979; it was published in 1981 in his paper "Untraceable electronic mail, return addresses, and digital pseudonyms". The paper was for his master's degree thesis work, shortly after he was first introduced to the field of cryptography through the public-key work of Martin Hellman, Whitfield Diffie and Ralph Merkle. While public key cryptography secured the content of information, Chaum believed there to be personal privacy vulnerabilities in the metadata found in communications. Some vulnerabilities that enabled the compromise of personal privacy included the time of messages sent and received, the size of messages and the address of the original sender. He cites Whitfield Diffie and Martin Hellman's paper "New Directions in Cryptography" (1976) in his work.
Cypherpunk Movement (1990s).
Innovators like Ian Goldberg and Adam Back made huge contributions to mixnet technology. This era saw significant advancements in cryptographic methods, which were important for the practical implementation of mixnets. Mixnets began to draw attention in academic circles, leading to more research on improving their efficiency and security. However, widespread practical application was still limited, and mixnets stayed largely within experimental stages. A "cypherpunk remailer" software was developed to make it easier for individuals to send anonymous emails using mixnets.
2000s: Growing Practical Applications.
In the 2000s, the increasing concerns about internet privacy highlighted the significance of mix networks (mixnets). This era was marked by the emergence of Tor (The Onion Router) around the mid-2000s. Although Tor was not a straightforward implementation of a mixnet, it drew heavily from David Chaum's foundational ideas, particularly utilizing a form of onion routing akin to mixnet concepts. This period also witnessed the emergence of other systems that incorporated mixnet principles to various extents, all aimed at enhancing secure and anonymous communication.
2010s: Modernisation.
Entering the 2010s, there was a significant shift towards making mixnets more scalable and efficient. This change was driven by the introduction of new protocols and algorithms, which helped overcome some of the primary challenges that had previously hindered the widespread deployment of mixnets. The relevance of mixnets surged, especially after 2013, following Edward Snowden's disclosures about extensive global surveillance programs. This period saw a renewed focus on mixnets as vital tools for protecting privacy.
The anticipated arrival of quantum computing will have a significant impact on mixnets. On one hand, it brings new challenges, because quantum computers could break some of the cryptographic primitives that current mixnets rely on. On the other hand, it also offers opportunities to make mixnets stronger. It is therefore important to develop new security methods that can withstand quantum attacks, to ensure that mixnets can keep offering strong privacy and security as technology changes and grows.
How it works.
Participant "A" prepares a message for delivery to participant "B" by appending a random value R to the message, sealing it with the addressee's public key formula_0, appending B's address, and then sealing the result with the mix's public key formula_1.
The mix "M" opens the envelope with its private key; it now knows "B"'s address, and it sends formula_2 to "B".
Message format.
formula_3
To accomplish this, the sender takes the mix's public key (formula_1), and uses it to encrypt an envelope containing a random string (formula_4), a nested envelope addressed to the recipient, and the email address of the recipient ("B"). This nested envelope is encrypted with the recipient's public key (formula_0), and contains another random string ("R0"), along with the body of the message being sent. Upon receipt of the encrypted top-level envelope, the mix uses its secret key to open it. Inside, it finds the address of the recipient ("B") and an encrypted message bound for "B". The random string (formula_4) is discarded.
formula_5 is needed in the message in order to prevent an attacker from guessing messages. It is assumed that the attacker can observe all incoming and outgoing messages. If the random string is not used (i.e. only formula_6 is sent to formula_7) and an attacker has a good guess that the message formula_8 was sent, he can test whether formula_9 holds, whereby he can learn the content of the message. By appending the random string formula_5 the attacker is prevented from performing this kind of attack; even if he should guess the correct message (i.e. formula_10 is true) he won't learn if he is right since he doesn't know the secret value formula_5. Practically, formula_5 functions as a salt.
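To make the nesting concrete, the sketch below walks through the sender's and the mix's steps. It is a toy model only: `pk_seal` and `pk_open` are non-cryptographic stand-ins for public-key sealing, introduced here purely for illustration:

```python
import os

def pk_seal(public_key, payload):
    """Toy stand-in for public-key encryption (NOT real cryptography)."""
    return ("sealed", public_key, payload)

def pk_open(private_key, envelope):
    """Toy stand-in for decryption; assumes private_key matches the sealing key."""
    tag, _pk, payload = envelope
    assert tag == "sealed"
    return payload

def sender_prepare(pk_mix, pk_b, addr_b, message):
    """Build K1(R1, Kb(R0, M), B): the mix's layer wraps the recipient's layer."""
    R0, R1 = os.urandom(8), os.urandom(8)   # the random strings from the text
    inner = pk_seal(pk_b, (R0, message))    # Kb(R0, M), readable only by B
    return pk_seal(pk_mix, (R1, inner, addr_b))

def mix_process(sk_mix, envelope):
    """The mix strips its own layer, discards R1, and forwards Kb(R0, M) to B."""
    R1, inner, addr_b = pk_open(sk_mix, envelope)
    return addr_b, inner                    # deliver the inner envelope to B
```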
Return addresses.
What is needed now is a way for "B" to respond to "A" while still keeping the identity of "A" secret from "B".
A solution is for "A" to form an untraceable return address formula_11 where formula_12 is its own real address, formula_13 is a public one-time key chosen for the current occasion only, and formula_14 is a key that will also act as a random string for purposes of sealing. Then, "A" can send this return address to "B" as part of a message sent by the techniques already described.
B sends formula_15 to M, and M transforms it to formula_16.
This mix uses the string of bits formula_14 that it finds after decrypting the address part formula_17 as a key to re-encrypt the message part formula_18. Only the addressee, "A", can decrypt the resulting output because "A" created both formula_14 and formula_13.
The additional key formula_13 ensures that the mix cannot see the content of the reply message.
The following indicates how "B" uses this untraceable return address to form a response to "A", via a new kind of mix:
The message from "A" formula_19 "B":
formula_20
Reply message from "B"formula_19"A":
formula_21
Where: formula_0 = "B"’s public key, formula_1 = the mix's public key.
A destination can reply to a source without sacrificing source anonymity. The reply message shares all of the performance and security benefits with the anonymous messages from source to destination.
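A matching sketch of the untraceable return address, under the same PyNaCl assumption; here formula_14 (S1) doubles as a symmetric key with which the mix re-encrypts the reply, and the sealed box supplies its own randomness in place of an explicit S0:

```python
import json
from nacl.public import PrivateKey, SealedBox
from nacl.secret import SecretBox
from nacl.utils import random as nacl_random

mix_sk = PrivateKey.generate()          # the mix's long-term key pair
x_sk = PrivateKey.generate()            # A's one-time key pair (K_x)
s1 = nacl_random(SecretBox.KEY_SIZE)    # S1: key and random string

# A forms the return address K_m(S1, A) and sends it to B with K_x.
return_address = SealedBox(mix_sk.public_key).encrypt(
    json.dumps({"S1": s1.hex(), "A": "address-of-A"}).encode())

# B replies with K_m(S1, A), K_x(S0, response).
reply = SealedBox(x_sk.public_key).encrypt(b"response for A")

# The mix opens the address part, then re-encrypts the reply with S1,
# producing A, S1(K_x(S0, response)). Only A knows both S1 and K_x.
addr = json.loads(SealedBox(mix_sk).decrypt(return_address).decode())
outgoing = SecretBox(bytes.fromhex(addr["S1"])).encrypt(reply)

# A strips the S1 layer and opens the K_x layer.
inner = SecretBox(s1).decrypt(outgoing)
print(SealedBox(x_sk).decrypt(inner))   # -> b'response for A'
```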
Vulnerabilities.
Although mix networks provide security even if an adversary is able to view the entire path, mixing is not absolutely perfect. Adversaries can perform long-term correlation attacks to track the sender and receiver of packets.
Threat model.
An adversary can perform a passive attack by monitoring the traffic to and from the mix network. Analyzing the arrival times between multiple packets can reveal information. Since no changes are actively made to the packets, an attack like this is hard to detect. In the worst case, it is assumed that all the links of the network are observable by the adversary and that the strategies and infrastructure of the mix network are known.
A packet on an input link cannot be correlated to a packet on the output link based on information about the time the packet was received, the size of the packet, or the content of the packet. Correlation based on packet timing is prevented by batching, while correlation based on content and on packet size is prevented by encryption and packet padding, respectively.
Inter-packet intervals, that is, the time differences between observations of two consecutive packets on two network links, are used to infer whether the links carry the same connection. Encryption and padding do not affect the inter-packet intervals belonging to the same IP flow. Sequences of inter-packet intervals vary greatly between connections: in web browsing, for example, the traffic occurs in bursts. This fact can be used to identify a connection.
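A toy calculation shows how strongly inter-packet intervals can link two observations of the same flow (hypothetical traffic; real attacks use far more robust statistics than a plain Pearson correlation):

```python
# Correlating inter-packet interval sequences on two links.
import numpy as np

def interval_similarity(times_a, times_b):
    """Pearson correlation of inter-packet interval sequences."""
    gaps_a = np.diff(np.sort(times_a))
    gaps_b = np.diff(np.sort(times_b))
    n = min(len(gaps_a), len(gaps_b))
    return np.corrcoef(gaps_a[:n], gaps_b[:n])[0, 1]

rng = np.random.default_rng(0)
# Bursty, web-browsing-like flow: clustered arrival times.
flow = np.cumsum(rng.choice([0.01, 0.01, 0.01, 2.0], size=200))
# The same flow observed downstream with per-hop jitter added,
# versus an unrelated flow with uniform gaps.
downstream = flow + rng.normal(0, 0.002, size=flow.size)
unrelated = np.cumsum(rng.uniform(0.0, 0.5, size=200))

print(interval_similarity(flow, downstream))  # close to 1.0
print(interval_similarity(flow, unrelated))   # near 0.0
```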
Active attack.
Active attacks can be performed by injecting bursts of packets that contain unique timing signatures into the targeted flow. The attacker can perform attacks to attempt to identify these packets on other network links. The attacker might not be able to create new packets due to the required knowledge of symmetric keys on all the subsequent mixes. Replay packets cannot be used either as they are easily preventable through hashing and caching.
Artificial gap.
An attacker can create large gaps in the target flow by dropping large volumes of consecutive packets. For example, in a simulation sending 3000 packets through the target flow, the attacker drops packets 1 second after the start of the flow. As the number of consecutive packets dropped increases, the effectiveness of defensive dropping decreases significantly. Introducing a large gap will almost always create a recognizable feature.
Artificial bursts.
The attacker can also create artificial bursts, by holding packets on a link for a certain period of time and then releasing them all at once, creating a recognizable signature. Defensive dropping provides no defense in this scenario, and the attacker can identify the target flow. Other defense measures can prevent this attack, such as adaptive padding algorithms. The more the packets are delayed, the easier the injected behavior is to identify, and thus the better the defense performs.
Other time analysis attacks.
An attacker may also consider timing attacks other than inter-packet intervals. The attacker can actively modify packet streams and observe the changes caused in the network's behavior: for instance, packets can be corrupted to force retransmission of TCP packets, and the retransmission behavior is easily observable and reveals information.
Sleeper attack.
Assume an adversary can see messages being sent to and received from threshold mixes, but cannot see the internal working of these mixes or what each mix emits. If the adversary has left their own messages in the respective mixes and receives one of the two, they can determine the message sent and the corresponding sender. The adversary must place their messages (the active component) in the mix, and the messages must remain there before a target message is sent. This is not typically an active attack. Weaker adversaries can use this attack in combination with other attacks to cause more issues.
Mix networks derive security by changing the order of the messages they receive, to avoid creating a significant relation between incoming and outgoing messages. Mixes create interference between messages; this interference bounds the rate of information leaked to an observer of the mix. In a mix of size n, an adversary observing input to and output from the mix has an uncertainty of order n when determining a match. A sleeper attack can take advantage of this. In a layered network of threshold mixes with a sleeper in each mix, one layer receives inputs from senders and a second layer of mixes forwards messages to the final destination. From this, the attacker learns that the received message could not have come from a sender into any first-layer mix that did not fire. With these sleepers, there is a higher probability of matching the sent and received messages, so communication is not completely anonymous. Mixes may also be purely timed: they randomize the order of messages received in a particular interval, retain some of them, and forward them at the end of the interval regardless of what has been received during it. Messages that are available for mixing will interfere, but if no messages are available, there is no interference with received messages.
|
[
{
"math_id": 0,
"text": "K_b"
},
{
"math_id": 1,
"text": "K_m"
},
{
"math_id": 2,
"text": "K_b(message, R)"
},
{
"math_id": 3,
"text": "K_m(R1,K_b(R0,message),B)\\longrightarrow(K_b(R0,message),B)"
},
{
"math_id": 4,
"text": "R1"
},
{
"math_id": 5,
"text": "R0"
},
{
"math_id": 6,
"text": "(K_b(message))"
},
{
"math_id": 7,
"text": "B"
},
{
"math_id": 8,
"text": "message'"
},
{
"math_id": 9,
"text": "K_b(message')=K_b(message)"
},
{
"math_id": 10,
"text": "message'=message"
},
{
"math_id": 11,
"text": "K_m(S1, A), K_x"
},
{
"math_id": 12,
"text": "A"
},
{
"math_id": 13,
"text": "K_x"
},
{
"math_id": 14,
"text": "S1"
},
{
"math_id": 15,
"text": "K_m(S1, A), K_x (S0, response)"
},
{
"math_id": 16,
"text": "A, S1 (K_x (S0, response))"
},
{
"math_id": 17,
"text": "K_m(S1, A)"
},
{
"math_id": 18,
"text": "K_x(S0, response)"
},
{
"math_id": 19,
"text": "\\longrightarrow"
},
{
"math_id": 20,
"text": "K_m(R1, K_b(R0, message, K_m(S1, A), K_x ), B)\\longrightarrow K_b(R0, message, K_m(S1, A), K_x )"
},
{
"math_id": 21,
"text": "K_m(S1, A) , K_x(S0, response)\\longrightarrow A, S1(K_x(S0, response))"
}
] |
https://en.wikipedia.org/wiki?curid=6200269
|
62004450
|
Ring of modular forms
|
Algebraic object
In mathematics, the ring of modular forms associated to a subgroup Γ of the special linear group SL(2, Z) is the graded ring generated by the modular forms of Γ. The study of rings of modular forms describes the algebraic structure of the space of modular forms.
Definition.
Let Γ be a subgroup of SL(2, Z) that is of finite index and let Mk(Γ) be the vector space of modular forms of weight k. The ring of modular forms of Γ is the graded ring formula_0.
Example.
The ring of modular forms of the full modular group SL(2, Z) is freely generated by the Eisenstein series E4 and E6. In other words, M(SL(2, Z)) is isomorphic as a formula_1-algebra to formula_2, which is the polynomial ring of two variables over the complex numbers.
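This free generation can be checked numerically on q-expansions. The sketch below (an illustration, not part of the article's source) builds E4 and E6 from divisor sums and recovers the Ramanujan tau numbers from the weight-12 combination (E4^3 - E6^2)/1728:

```python
# Numeric check that Delta = (E4^3 - E6^2)/1728 has the Ramanujan
# tau numbers as its q-expansion coefficients.
from sympy import divisor_sigma

N = 8  # truncation order of the q-expansion

def eisenstein(weight, factor):
    """Coefficients of E_k = 1 + factor * sum sigma_{k-1}(n) q^n."""
    return [1] + [factor * divisor_sigma(n, weight - 1) for n in range(1, N)]

def mul(a, b):
    """Truncated product of two q-expansions."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

E4 = eisenstein(4, 240)
E6 = eisenstein(6, -504)
E4_cubed = mul(mul(E4, E4), E4)
E6_squared = mul(E6, E6)
delta = [(x - y) // 1728 for x, y in zip(E4_cubed, E6_squared)]
print(delta)  # [0, 1, -24, 252, -1472, 4830, -6048, -16744]
```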
Properties.
The ring of modular forms is a graded Lie algebra since the Lie bracket formula_3 of modular forms f and g of respective weights k and ℓ is a modular form of weight "k" + "ℓ" + 2. A bracket can be defined for the n-th derivative of modular forms and such a bracket is called a Rankin–Cohen bracket.
Congruence subgroups of SL(2, Z).
In 1973, Pierre Deligne and Michael Rapoport showed that the ring of modular forms M(Γ) is finitely generated when Γ is a congruence subgroup of SL(2, Z).
In 2003, Lev Borisov and Paul Gunnells showed that the ring of modular forms M(Γ) is generated in weight at most 3 when formula_4 is the congruence subgroup formula_5 of prime level N in SL(2, Z) using the theory of toric modular forms. In 2014, Nadim Rustom extended the result of Borisov and Gunnells for formula_5 to all levels N and also demonstrated that the ring of modular forms for the congruence subgroup formula_6 is generated in weight at most 6 for some levels N.
In 2015, John Voight and David Zureick-Brown generalized these results: they proved that the graded ring of modular forms of even weight for any congruence subgroup Γ of SL(2, Z) is generated in weight at most 6 with relations generated in weight at most 12. Building on this work, in 2016, Aaron Landesman, Peter Ruhm, and Robin Zhang showed that the same bounds hold for the full ring (all weights), with the improved bounds of 5 and 10 when Γ has some nonzero odd weight modular form.
General Fuchsian groups.
A Fuchsian group Γ corresponds to the orbifold obtained from the quotient formula_7 of the upper half-plane formula_8. By a stacky generalization of Riemann's existence theorem, there is a correspondence between the ring of modular forms of Γ and a particular section ring closely related to the canonical ring of a stacky curve.
There is a general formula for the weights of generators and relations of rings of modular forms due to the work of Voight and Zureick-Brown and the work of Landesman, Ruhm, and Zhang.
Let formula_9 be the stabilizer orders of the stacky points of the stacky curve (equivalently, the cusps of the orbifold formula_7) associated to Γ. If Γ has no nonzero odd weight modular forms, then the ring of modular forms is generated in weight at most formula_10 and has relations generated in weight at most formula_11. If Γ has a nonzero odd weight modular form, then the ring of modular forms is generated in weight at most formula_12 and has relations generated in weight at most formula_13.
Applications.
In string theory and supersymmetric gauge theory, the algebraic structure of the ring of modular forms can be used to study the structure of the Higgs vacua of four-dimensional gauge theories with N = 1 supersymmetry. The stabilizers of superpotentials in N = 4 supersymmetric Yang–Mills theory are rings of modular forms of the congruence subgroup Γ(2) of SL(2, Z).
|
[
{
"math_id": 0,
"text": "M(\\Gamma) = \\bigoplus_{k \\geq 0} M_k(\\Gamma)"
},
{
"math_id": 1,
"text": "\\mathbb{C}"
},
{
"math_id": 2,
"text": "\\mathbb{C}[E_4, E_6]"
},
{
"math_id": 3,
"text": "[f,g] = kfg' - \\ell f'g"
},
{
"math_id": 4,
"text": "\\Gamma"
},
{
"math_id": 5,
"text": "\\Gamma_1(N)"
},
{
"math_id": 6,
"text": "\\Gamma_0(N)"
},
{
"math_id": 7,
"text": "\\Gamma \\backslash \\mathbb{H}"
},
{
"math_id": 8,
"text": "\\mathbb{H}"
},
{
"math_id": 9,
"text": "e_i"
},
{
"math_id": 10,
"text": "6 \\max(1, e_1, e_2, \\ldots, e_r)"
},
{
"math_id": 11,
"text": "12 \\max(1, e_1, e_2, \\ldots, e_r)"
},
{
"math_id": 12,
"text": "\\max(5, e_1, e_2, \\ldots, e_r)"
},
{
"math_id": 13,
"text": "2\\max(5, e_1, e_2, \\ldots, e_r)"
}
] |
https://en.wikipedia.org/wiki?curid=62004450
|
62004667
|
Stacky curve
|
Object in algebraic geometry
In mathematics, a stacky curve is an object in algebraic geometry that is roughly an algebraic curve with potentially "fractional points" called stacky points. A stacky curve is a type of stack used in studying Gromov–Witten theory, enumerative geometry, and rings of modular forms.
Stacky curves are closely related to 1-dimensional orbifolds and therefore sometimes called orbifold curves or orbicurves.
Definition.
A stacky curve formula_0 over a field k is a smooth proper geometrically connected Deligne–Mumford stack of dimension 1 over k that contains a dense open subscheme.
Properties.
A stacky curve is uniquely determined (up to isomorphism) by its coarse space X (a smooth quasi-projective curve over k), a finite set of points xi (its stacky points) and integers ni (its ramification orders) greater than 1. The canonical divisor of formula_0 is linearly equivalent to the sum of the canonical divisor of X and a ramification divisor R:
formula_1
Letting g be the genus of the coarse space X, the degree of the canonical divisor of formula_0 is therefore:
formula_2
A stacky curve is called spherical if d is positive, Euclidean if d is zero, and hyperbolic if d is negative.
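As a quick worked illustration (not from the article's source), plugging small values into the degree formula above classifies each case:

```latex
% d = 2 - 2g - \sum_{i=1}^{r} (n_i - 1)/n_i for a few small signatures
\begin{align*}
g = 0,\ (n_1, n_2) = (2, 3):&\quad d = 2 - \tfrac{1}{2} - \tfrac{2}{3} = \tfrac{5}{6} > 0 \quad \text{(spherical)}\\
g = 0,\ (n_i) = (2, 2, 2, 2):&\quad d = 2 - 4 \cdot \tfrac{1}{2} = 0 \quad \text{(Euclidean)}\\
g = 1,\ n_1 = 2:&\quad d = 0 - \tfrac{1}{2} = -\tfrac{1}{2} < 0 \quad \text{(hyperbolic)}
\end{align*}
```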
Although the corresponding statement of Riemann–Roch theorem does not hold for stacky curves, there is a generalization of Riemann's existence theorem that gives an equivalence of categories between the category of stacky curves over the complex numbers and the category of complex orbifold curves.
Applications.
The generalization of GAGA for stacky curves is used in the derivation of algebraic structure theory of rings of modular forms.
The study of stacky curves is used extensively in equivariant Gromov–Witten theory and enumerative geometry.
|
[
{
"math_id": 0,
"text": "\\mathfrak{X}"
},
{
"math_id": 1,
"text": "K_\\mathfrak{X} \\sim K_X + R."
},
{
"math_id": 2,
"text": "d = \\deg K_\\mathfrak{X} = 2 - 2g - \\sum_{i=1}^r \\frac{n_i - 1}{n_i}."
}
] |
https://en.wikipedia.org/wiki?curid=62004667
|
620083
|
Sensitivity analysis
|
Study of uncertainty in the output of a mathematical model or system
Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system (numerical or otherwise) can be divided and allocated to different sources of uncertainty in its inputs. A related practice is uncertainty analysis, which has a greater focus on uncertainty quantification and propagation of uncertainty; ideally, uncertainty and sensitivity analysis should be run in tandem.
The process of recalculating outcomes under alternative assumptions to determine the impact of a variable under sensitivity analysis can be useful for a range of purposes, including:
Overview.
A mathematical model (for example in biology, climate change, economics, or engineering) can be highly complex, and as a result, its relationships between inputs and outputs may be poorly understood. In such cases, the model can be viewed as a black box, i.e. the output is an "opaque" function of its inputs. Quite often, some or all of the model inputs are subject to sources of uncertainty, including errors of measurement, absence of information and poor or partial understanding of the driving forces and mechanisms. This uncertainty imposes a limit on our confidence in the response or output of the model. Further, models may have to cope with the natural intrinsic variability of the system (aleatory), such as the occurrence of stochastic events.
In models involving many input variables, sensitivity analysis is an essential ingredient of model building and quality assurance. National and international agencies involved in impact assessment studies have included sections devoted to sensitivity analysis in their guidelines. Examples are the European Commission (see e.g. the guidelines for impact assessment), the White House Office of Management and Budget, the Intergovernmental Panel on Climate Change and US Environmental Protection Agency's modeling guidelines.
Settings, constraints, and related issues.
Settings and constraints.
The choice of method of sensitivity analysis is typically dictated by a number of problem constraints or settings. Some of the most common are
Computational expense is a problem in many practical sensitivity analyses. Some methods of reducing computational expense include the use of emulators (for large models) and screening methods (for reducing the dimensionality of the problem). Another method is to use an event-based sensitivity analysis method for variable selection in time-constrained applications. This input variable selection (IVS) method assembles information about the trace of the changes in system inputs and outputs using sensitivity analysis, producing an input/output trigger/event matrix designed to map the relationships between input data as causes that trigger events and the output data that describes the actual events. The cause–effect relationship between the causes of state change (i.e. the input variables) and the system output parameters determines which set of inputs has a genuine impact on a given output. The method has a clear advantage over analytical and computational IVS methods, since it tries to understand and interpret system state change in the shortest possible time with minimum computational overhead.
Assumptions vs. inferences.
In uncertainty and sensitivity analysis there is a crucial trade off between how scrupulous an analyst is in exploring the input assumptions and how wide the resulting inference may be. The point is well illustrated by the econometrician Edward E. Leamer:
I have proposed a form of organized sensitivity analysis that I call 'global sensitivity analysis' in which a neighborhood of alternative assumptions is selected and the corresponding interval of inferences is identified. Conclusions are judged to be sturdy only if the neighborhood of assumptions is wide enough to be credible and the corresponding interval of inferences is narrow enough to be useful.
Note Leamer's emphasis is on the need for 'credibility' in the selection of assumptions. The easiest way to invalidate a model is to demonstrate that it is fragile with respect to the uncertainty in the assumptions or to show that its assumptions have not been taken 'wide enough'. The same concept is expressed by Jerome R. Ravetz, for whom bad modeling is when "uncertainties in inputs must be suppressed lest outputs become indeterminate."
Pitfalls and difficulties.
Some common difficulties in sensitivity analysis include
Sensitivity analysis methods.
There are a large number of approaches to performing a sensitivity analysis, many of which have been developed to address one or more of the constraints discussed above. They are also distinguished by the type of sensitivity measure, be it based on (for example) variance decompositions, partial derivatives or elementary effects. In general, however, most procedures adhere to the following outline:
In some cases this procedure will be repeated, for example in high-dimensional problems where the user has to screen out unimportant variables before performing a full sensitivity analysis.
The various types of "core methods" (discussed below) are distinguished by the various sensitivity measures which are calculated. These categories can somehow overlap. Alternative ways of obtaining these measures, under the constraints of the problem, can be given.
One-at-a-time (OAT).
One of the simplest and most common approaches is that of changing one-factor-at-a-time (OAT), to see what effect this produces on the output. OAT customarily involves
Sensitivity may then be measured by monitoring changes in the output, e.g. by partial derivatives or linear regression. This appears a logical approach, as any change observed in the output will unambiguously be due to the single variable changed. Furthermore, by changing one variable at a time, one can keep all other variables fixed to their central or baseline values. This increases the comparability of the results (all 'effects' are computed with reference to the same central point in space) and minimizes the chances of computer program crashes, which are more likely when several input factors are changed simultaneously.
OAT is frequently preferred by modelers for practical reasons: in case of model failure under OAT analysis, the modeler immediately knows which input factor is responsible for the failure.
Despite its simplicity however, this approach does not fully explore the input space, since it does not take into account the simultaneous variation of input variables. This means that the OAT approach cannot detect the presence of interactions between input variables and is unsuitable for nonlinear models.
The proportion of input space which remains unexplored with an OAT approach grows superexponentially with the number of inputs. For example, a 3-variable parameter space which is explored one-at-a-time is equivalent to taking points along the x, y, and z axes of a cube centered at the origin. The convex hull bounding all these points is an octahedron which has a volume only 1/6th of the total parameter space. More generally, the convex hull of the axes of a hyperrectangle forms a hyperoctahedron which has a volume fraction of formula_0. With 5 inputs, the explored space already drops to less than 1% of the total parameter space. And even this is an overestimate, since the off-axis volume is not actually being sampled at all. Compare this to random sampling of the space, where the convex hull approaches the entire volume as more points are added. While the sparsity of OAT is theoretically not a concern for linear models, true linearity is rare in nature.
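The 1/n! collapse is easy to tabulate (a quick illustration):

```python
# How the volume fraction explored by OAT sampling shrinks with
# the number of inputs n.
from math import factorial

for n in range(2, 9):
    print(f"n = {n}: explored fraction = {1 / factorial(n):.5f}")
# n = 3 gives 1/6 (the octahedron inside the cube), and by n = 5
# the fraction is already below 1%.
```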
Derivative-based local methods.
Local derivative-based methods involve taking the partial derivative of the output "Y" with respect to an input factor "X""i" :
formula_1
where the subscript x0 indicates that the derivative is taken at some fixed point in the space of the input (hence the 'local' in the name of the class). Adjoint modelling and automated differentiation are methods that allow one to compute all partial derivatives at a cost of at most 4–6 times that of evaluating the original function. Similar to OAT, local methods do not attempt to fully explore the input space, since they examine small perturbations, typically one variable at a time. It is possible to select similar samples from derivative-based sensitivity through neural networks and perform uncertainty quantification.
One advantage of the local methods is that it is possible to make a matrix to represent all the sensitivities in a system, thus providing an overview that cannot be achieved with global methods if there is a large number of input and output variables.
Regression analysis.
Regression analysis, in the context of sensitivity analysis, involves fitting a linear regression to the model response and using standardized regression coefficients as direct measures of sensitivity. The regression is required to be linear with respect to the data (i.e. a hyperplane, hence with no quadratic terms, etc., as regressors) because otherwise it is difficult to interpret the standardised coefficients. This method is therefore most suitable when the model response is in fact linear; linearity can be confirmed, for instance, if the coefficient of determination is large. The advantages of regression analysis are that it is simple and has a low computational cost.
Variance-based methods.
Variance-based methods are a class of probabilistic approaches which quantify the input and output uncertainties as probability distributions, and decompose the output variance into parts attributable to input variables and combinations of variables. The sensitivity of the output to an input variable is therefore measured by the amount of variance in the output caused by that input. These can be expressed as conditional expectations, i.e., considering a model "Y" = "f"(X) for X = {"X""1", "X""2", ... "X""k"}, a measure of sensitivity of the "i"th variable "X""i" is given as,
formula_2
where "Var" and "E" denote the variance and expected value operators respectively, and X"~i" denotes the set of all input variables except "X""i". This expression essentially measures the contribution "X""i" alone to the uncertainty (variance) in "Y" (averaged over variations in other variables), and is known as the "first-order sensitivity index" or "main effect index". Importantly, it does not measure the uncertainty caused by interactions with other variables. A further measure, known as the "total effect index", gives the total variance in "Y" caused by "X""i" "and" its interactions with any of the other input variables. Both quantities are typically standardised by dividing by Var("Y").
Variance-based methods allow full exploration of the input space, accounting for interactions, and nonlinear responses. For these reasons they are widely used when it is feasible to calculate them. Typically this calculation involves the use of Monte Carlo methods, but since this can involve many thousands of model runs, other methods (such as emulators) can be used to reduce computational expense when necessary.
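A brute-force Monte Carlo estimate of the normalised quantity formula_2 / Var(Y) is short to write, though inefficient; the sketch below (a toy model and estimator of this example's choosing; production work would use Saltelli-type estimators or a library such as SALib) recovers the analytic indices of a simple three-input model:

```python
# Double-loop Monte Carlo estimate of first-order sensitivity indices.
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Toy model: Y = X1 + 2*X2 + X1*X3, inputs uniform on [-1, 1].
    return x[:, 0] + 2 * x[:, 1] + x[:, 0] * x[:, 2]

def first_order_index(i, n_outer=2000, n_inner=500):
    """Estimate S_i = Var(E[Y | X_i]) / Var(Y)."""
    cond_means = np.empty(n_outer)
    for k in range(n_outer):
        x = rng.uniform(-1, 1, size=(n_inner, 3))
        x[:, i] = rng.uniform(-1, 1)   # freeze X_i at a random value
        cond_means[k] = model(x).mean()
    y = model(rng.uniform(-1, 1, size=(100_000, 3)))
    return cond_means.var() / y.var()

for i in range(3):
    print(f"S_{i + 1} ~ {first_order_index(i):.2f}")
# Analytically Var(Y) = 1/3 + 4/3 + 1/9, so S_1 = 3/16 ~ 0.19,
# S_2 = 12/16 = 0.75, and S_3 = 0 (X3 acts only through interaction).
```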
Variogram analysis of response surfaces ("VARS").
One of the major shortcomings of the previous sensitivity analysis methods is that none of them considers the spatially ordered structure of the response surface/output of the model "Y"="f"(X) in the parameter space. By utilizing the concepts of directional variograms and covariograms, variogram analysis of response surfaces (VARS) addresses this weakness through recognizing a spatially continuous correlation structure to the values of "Y", and hence also to the values of formula_3.
Basically, the higher the variability, the more heterogeneous the response surface is along a particular direction/parameter, at a specific perturbation scale. Accordingly, in the VARS framework, the values of directional variograms for a given perturbation scale can be considered as a comprehensive illustration of sensitivity information, through linking variogram analysis to both direction and perturbation scale concepts. As a result, the VARS framework accounts for the fact that sensitivity is a scale-dependent concept, and thus overcomes the scale issue of traditional sensitivity analysis methods. More importantly, VARS is able to provide relatively stable and statistically robust estimates of parameter sensitivity with much lower computational cost than other strategies (about two orders of magnitude more efficient). Notably, it has been shown that there is a theoretical link between the VARS framework and the variance-based and derivative-based approaches.
Alternative methods.
A number of methods have been developed to overcome some of the constraints discussed above, which would otherwise make the estimation of sensitivity measures infeasible (most often due to computational expense). Generally, these methods focus on efficiently calculating variance-based measures of sensitivity.
Emulators.
Emulators (also known as metamodels, surrogate models or response surfaces) are data-modeling/machine learning approaches that involve building a relatively simple mathematical function, known as an "emulator", that approximates the input/output behavior of the model itself. In other words, it is the concept of "modeling a model" (hence the name "metamodel"). The idea is that, although computer models may be a very complex series of equations that can take a long time to solve, they can always be regarded as a function of their inputs "Y" = "f"(X). By running the model at a number of points in the input space, it may be possible to fit a much simpler emulator "η"(X), such that "η"(X) ≈ "f"(X) to within an acceptable margin of error. Then, sensitivity measures can be calculated from the emulator (either with Monte Carlo or analytically), which will have a negligible additional computational cost. Importantly, the number of model runs required to fit the emulator can be orders of magnitude less than the number of runs required to directly estimate the sensitivity measures from the model.
Clearly, the crux of an emulator approach is to find an "η" (emulator) that is a sufficiently close approximation to the model "f". This requires the following steps,
Sampling the model can often be done with low-discrepancy sequences, such as the Sobol sequence (due to mathematician Ilya M. Sobol) or Latin hypercube sampling, although random designs can also be used, at the loss of some efficiency. The selection of the emulator type and the training are intrinsically linked, since the training method depends on the class of emulator. Some types of emulators that have been used successfully for sensitivity analysis include,
The use of an emulator introduces a machine learning problem, which can be difficult if the response of the model is highly nonlinear. In all cases, it is useful to check the accuracy of the emulator, for example using cross-validation.
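As a concrete miniature of the emulator workflow (assuming scikit-learn is available; the "expensive" model here is a stand-in function), one fits a Gaussian-process surrogate on a few runs and validates it on held-out points before trusting it for sensitivity calculations:

```python
# Fit a cheap Gaussian-process surrogate to a handful of model runs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):
    # Stand-in for a slow simulator.
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(2)
X_train = rng.uniform(0, 1, size=(40, 2))     # only 40 model runs
y_train = expensive_model(X_train)

emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.3))
emulator.fit(X_train, y_train)

# Cross-check emulator accuracy on held-out points before using it.
X_test = rng.uniform(0, 1, size=(1000, 2))
err = np.abs(emulator.predict(X_test) - expensive_model(X_test))
print(f"max abs emulation error: {err.max():.3f}")
```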
High-dimensional model representations (HDMR).
A high-dimensional model representation (HDMR) (the term is due to H. Rabitz) is essentially an emulator approach, which involves decomposing the function output into a linear combination of input terms and interactions of increasing dimensionality. The HDMR approach exploits the fact that the model can usually be well-approximated by neglecting higher-order interactions (second or third-order and above). The terms in the truncated series can then each be approximated by e.g. polynomials or splines, and the response expressed as the sum of the main effects and interactions up to the truncation order. From this perspective, HDMRs can be seen as emulators which neglect high-order interactions; the advantage is that they are able to emulate models with higher dimensionality than full-order emulators.
Fourier amplitude sensitivity test (FAST).
The Fourier amplitude sensitivity test (FAST) uses the Fourier series to represent a multivariate function (the model) in the frequency domain, using a single frequency variable. Therefore, the integrals required to calculate sensitivity indices become univariate, resulting in computational savings.
Monte Carlo filtering.
Sensitivity analysis via Monte Carlo filtering is also a sampling-based approach, whose objective is to identify regions in the space of the input factors corresponding to particular values (e.g., high or low) of the output.
Shapley effects.
Shapley effects rely on Shapley values and represent the average marginal contribution of a given factor across all possible combinations of factors. These values are related to Sobol' indices, as each falls between the first-order Sobol' effect and the total-order effect.
Applications.
Examples of sensitivity analyses can be found in various areas of application, such as:
Sensitivity auditing.
It may happen that a sensitivity analysis of a model-based study is meant to underpin an inference, and to certify its robustness, in a context where the inference feeds into a policy or decision-making process. In these cases the framing of the analysis itself, its institutional context, and the motivations of its author may become a matter of great importance, and a pure sensitivity analysis – with its emphasis on parametric uncertainty – may be seen as insufficient. The emphasis on the framing may derive inter-alia from the relevance of the policy study to different constituencies that are characterized by different norms and values, and hence by a different story about 'what the problem is' and foremost about 'who is telling the story'. Most often the framing includes more or less implicit assumptions, which could be political (e.g. which group needs to be protected) all the way to technical (e.g. which variable can be treated as a constant).
In order to take these concerns into due consideration the instruments of SA have been extended to provide an assessment of the entire knowledge and model generating process. This approach has been called 'sensitivity auditing'. It takes inspiration from NUSAP, a method used to qualify the worth of quantitative information with the generation of 'Pedigrees' of numbers. Sensitivity auditing has been especially designed for an adversarial context, where not only the nature of the evidence, but also the degree of certainty and uncertainty associated to the evidence, will be the subject of partisan interests. Sensitivity auditing is recommended in the European Commission guidelines for impact assessment, as well as in the report Science Advice for Policy by European Academies.
Related concepts.
Sensitivity analysis is closely related with uncertainty analysis; while the latter studies the overall uncertainty in the conclusions of the study, sensitivity analysis tries to identify what source of uncertainty weighs more on the study's conclusions.
The problem setting in sensitivity analysis also has strong similarities with the field of design of experiments. In a design of experiments, one studies the effect of some process or intervention (the 'treatment') on some objects (the 'experimental units'). In sensitivity analysis one looks at the effect of varying the inputs of a mathematical model on the output of the model itself. In both disciplines one strives to obtain information from the system with a minimum of physical or numerical experiments.
|
[
{
"math_id": 0,
"text": "1/n!"
},
{
"math_id": 1,
"text": "\n\\left| \\frac{\\partial Y}{\\partial X_i} \\right |_{\\textbf {x}^0 },\n"
},
{
"math_id": 2,
"text": "\n\\operatorname{Var} \\left( E_{\\textbf{X}_{\\sim i}} \\left( Y \\mid X_i \\right)\n\\right)\n"
},
{
"math_id": 3,
"text": " \\frac{\\partial Y}{\\partial x_i} "
}
] |
https://en.wikipedia.org/wiki?curid=620083
|
62008819
|
Birchfield–Tomasi dissimilarity
|
In computer vision, the Birchfield–Tomasi dissimilarity is a pixelwise image dissimilarity measure that is robust with respect to sampling effects. In the comparison of two image elements, it fits the intensity of one pixel to the linearly interpolated intensity around a corresponding pixel on the other image. It is used as a dissimilarity measure in stereo matching, where one-dimensional search for correspondences is performed to recover a dense disparity map from a stereo image pair.
Description.
When performing pixelwise image matching, the measure of dissimilarity between pairs of pixels from different images is affected by differences in image acquisition, such as illumination bias and noise. Even when assuming no such differences between an image pair, additional inconsistencies are introduced by the pixel sampling process, because each pixel is a sample obtained by integrating the continuous light signal over a finite region of space. Two pixels matching the same feature of the image content may therefore correspond to slightly different regions of the real object, which can reflect light differently and can be subject to partial occlusion, depth discontinuity, or different lens defocus, thus generating different intensity signals.
The Birchfield–Tomasi measure compensates for the sampling effect by considering the linear interpolation of the samples. Pixel similarity is then determined by finding the best match between the intensity of a pixel sample in one image and the interpolated function in an interval around a location in the other image.
Considering the stereo matching problem for a rectified stereo pair, where the search for correspondences is performed in one dimension, given two columns formula_0 and formula_1 along the same scanline for the left and right image respectively, it is possible to define two symmetric functions
formula_2
where formula_3 and formula_4 are the linear interpolation functions of the left and right image intensity formula_5 and formula_6 along the scanline. The Birchfield–Tomasi dissimilarity can then be defined as
formula_7
In practice the measure can be computed with only a small and constant overhead with respect to the calculation of the simple intensity difference, because it is not necessary to reconstruct the interpolant function. Given that the interpolant is linear within each unit interval centred around a pixel, its minimum is located in one of its extremities. Therefore, formula_8 can be written as
formula_9
where
formula_10
denoting with formula_11 and formula_12 the values of the interpolated intensities at the rightmost and leftmost extremities of a one-pixel interval centred around formula_1:
formula_13
The other function formula_14 can be similarly rewritten, completing the expression for formula_15.
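The closed form above translates directly into code. Below is an unoptimized NumPy sketch (the clamped boundary handling is a choice of this example, not part of the original formulation):

```python
# Birchfield–Tomasi dissimilarity for two 1-D scanlines.
import numpy as np

def bt_dissimilarity(left, right, xl, xr):
    """d(xl, xr) for 1-D intensity arrays `left` and `right`."""
    def half_sided(i_fixed, other, x):
        # The linear interpolant on [x - 1/2, x + 1/2] attains its
        # extrema at x or at the interval's extremities.
        i_mid = other[x]
        i_plus = 0.5 * (other[x] + other[min(x + 1, len(other) - 1)])
        i_minus = 0.5 * (other[max(x - 1, 0)] + other[x])
        i_max = max(i_mid, i_plus, i_minus)
        i_min = min(i_mid, i_plus, i_minus)
        return max(0.0, i_fixed - i_max, i_min - i_fixed)

    d_l = half_sided(float(left[xl]), right.astype(float), xr)
    d_r = half_sided(float(right[xr]), left.astype(float), xl)
    return min(d_l, d_r)

left = np.array([10, 20, 30, 40], dtype=float)
right = np.array([10, 25, 30, 40], dtype=float)  # sub-pixel shift
print(bt_dissimilarity(left, right, 1, 1))  # 0.0 despite |20-25| = 5
```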
|
[
{
"math_id": 0,
"text": "x_l"
},
{
"math_id": 1,
"text": "x_r"
},
{
"math_id": 2,
"text": "\n\\begin{align}\nd_l(x_l, x_r) &= \\min_{x_r - \\frac{1}{2} \\le x \\le x_r + \\frac{1}{2}} \\left| I_l(x_l) - \\hat{I}_r(x) \\right| \\\\\nd_r(x_l, x_r) &= \\min_{x_l - \\frac{1}{2} \\le x \\le x_l + \\frac{1}{2}} \\left| \\hat{I}_l(x) - I_r(x_r) \\right|\n\\end{align}\n"
},
{
"math_id": 3,
"text": "\\hat{I}_l"
},
{
"math_id": 4,
"text": "\\hat{I}_r"
},
{
"math_id": 5,
"text": "I_l"
},
{
"math_id": 6,
"text": "I_r"
},
{
"math_id": 7,
"text": "\nd(x_l, x_r) = \\min \\left\\{ d_l(x_l, x_r), d_r(x_l, x_r) \\right\\}.\n"
},
{
"math_id": 8,
"text": "d_l(x_l, x_r)"
},
{
"math_id": 9,
"text": "\nd_l(x_l, x_r) = \\max \\left\\{ 0, I_l(x_l) - I_{max}, I_{min} - I_l(x_l) \\right\\}\n"
},
{
"math_id": 10,
"text": "\n\\begin{align}\nI_{max} &= \\max \\left\\{ I_r(x_r), I^{+}_{r}(x_r), I^{-}_{r}(x_r) \\right\\} \\\\\nI_{min} &= \\min \\left\\{ I_r(x_r), I^{+}_{r}(x_r), I^{-}_{r}(x_r) \\right\\}\n\\end{align}\n"
},
{
"math_id": 11,
"text": "I^{+}_{r}(x_r)"
},
{
"math_id": 12,
"text": "I^{-}_{r}(x_r)"
},
{
"math_id": 13,
"text": "\n\\begin{align}\nI^{+}_{r}(x_r) &= \\frac{1}{2} \\left( I_r(x_r) + I_r(x_r + 1) \\right) \\\\\nI^{-}_{r}(x_r) &= \\frac{1}{2} \\left( I_r(x_r - 1) + I_r(x_r) \\right) .\n\\end{align}\n"
},
{
"math_id": 14,
"text": "d_r(x_l, x_r)"
},
{
"math_id": 15,
"text": "d"
}
] |
https://en.wikipedia.org/wiki?curid=62008819
|
620134
|
Non-measurable set
|
Set which cannot be assigned a meaningful "volume"
In mathematics, a non-measurable set is a set which cannot be assigned a meaningful "volume". The existence of such sets is construed to provide information about the notions of length, area and volume in formal set theory. In Zermelo–Fraenkel set theory, the axiom of choice entails that non-measurable subsets of formula_0 exist.
The notion of a non-measurable set has been a source of great controversy since its introduction. Historically, this led Borel and Kolmogorov to formulate probability theory on sets which are constrained to be measurable. The measurable sets on the line are iterated countable unions and intersections of intervals (called Borel sets) plus-minus null sets. These sets are rich enough to include every conceivable definition of a set that arises in standard mathematics, but they require a lot of formalism to prove that sets are measurable.
In 1970, Robert M. Solovay constructed the Solovay model, which shows that it is consistent with standard set theory without uncountable choice that all subsets of the reals are measurable. However, Solovay's result depends on the existence of an inaccessible cardinal, whose existence and consistency cannot be proved within standard set theory.
Historical constructions.
The first indication that there might be a problem in defining length for an arbitrary set came from Vitali's theorem. A more recent combinatorial construction which is similar to the construction by Robin Thomas of a non-Lebesgue measurable set with some additional properties appeared in American Mathematical Monthly.
One would expect the measure of the union of two disjoint sets to be the sum of the measure of the two sets. A measure with this natural property is called "finitely additive". While a finitely additive measure is sufficient for most intuition of area, and is analogous to Riemann integration, it is considered insufficient for probability, because conventional modern treatments of sequences of events or random variables demand countable additivity.
In this respect, the plane is similar to the line; there is a finitely additive measure, extending Lebesgue measure, which is invariant under all isometries. For higher dimensions the picture gets worse. The Hausdorff paradox and Banach–Tarski paradox show that a three-dimensional ball of radius 1 can be dissected into 5 parts which can be reassembled to form two balls of radius 1.
Example.
Consider formula_1 the set of all points in the unit circle, and the action on formula_2 by a group formula_3 consisting of all rational rotations (rotations by angles which are rational multiples of formula_4). Here formula_3 is countable (more specifically, formula_3 is isomorphic to formula_5) while formula_2 is uncountable. Hence formula_2 breaks up into uncountably many orbits under formula_3 (the orbit of formula_6 is the countable set formula_7). Using the axiom of choice, we could pick a single point from each orbit, obtaining an uncountable subset formula_8 with the property that all of the rational translates (translated copies of the form formula_9 for some rational formula_10) of formula_11 by formula_3 are pairwise disjoint (meaning, disjoint from formula_11 and from each other). The set of those translates partitions the circle into a countable collection of disjoint sets, which are all pairwise congruent (by rational rotations). The set formula_11 will be non-measurable for any rotation-invariant countably additive probability measure on formula_2: if formula_11 has zero measure, countable additivity would imply that the whole circle has zero measure. If formula_11 has positive measure, countable additivity would show that the circle has infinite measure.
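The contradiction can be made explicit in one chain of equalities (a restatement of the argument above in symbols; here μ denotes a hypothetical rotation-invariant, countably additive probability measure on the circle and X the choice set formula_11):

```latex
1 = \mu(S)
  = \mu\Bigl(\bigsqcup_{q \in \mathbb{Q} \cap [0,2)} e^{iq\pi} X\Bigr)
  = \sum_{q \in \mathbb{Q} \cap [0,2)} \mu\bigl(e^{iq\pi} X\bigr)
  = \sum_{q \in \mathbb{Q} \cap [0,2)} \mu(X),
% and the rightmost sum equals 0 if \mu(X) = 0 and \infty if
% \mu(X) > 0, so no such \mu can assign X a measure.
```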
Consistent definitions of measure and probability.
The Banach–Tarski paradox shows that there is no way to define volume in three dimensions unless one of the following five concessions is made:
Standard measure theory takes the third option. One defines a family of measurable sets, which is very rich, and almost any set explicitly defined in most branches of mathematics will be among this family. It is usually very easy to prove that a given specific subset of the geometric plane is measurable. The fundamental assumption is that a countably infinite sequence of disjoint sets satisfies the sum formula, a property called σ-additivity.
In 1970, Solovay demonstrated that the existence of a non-measurable set for the Lebesgue measure is not provable within the framework of Zermelo–Fraenkel set theory in the absence of an additional axiom (such as the axiom of choice), by showing that (assuming the consistency of an inaccessible cardinal) there is a model of ZF, called Solovay's model, in which countable choice holds, every set is Lebesgue measurable and in which the full axiom of choice fails.
The axiom of choice is equivalent to a fundamental result of point-set topology, Tychonoff's theorem, and also to the conjunction of two fundamental results of functional analysis, the Banach–Alaoglu theorem and the Krein–Milman theorem. It also affects the study of infinite groups to a large extent, as well as ring and order theory (see Boolean prime ideal theorem). However, the axioms of determinacy and dependent choice together are sufficient for most geometric measure theory, potential theory, Fourier series and Fourier transforms, while making all subsets of the real line Lebesgue-measurable.
|
[
{
"math_id": 0,
"text": "\\mathbb{R}"
},
{
"math_id": 1,
"text": "S,"
},
{
"math_id": 2,
"text": "S"
},
{
"math_id": 3,
"text": "G"
},
{
"math_id": 4,
"text": "\\pi"
},
{
"math_id": 5,
"text": "\\Q/\\Z"
},
{
"math_id": 6,
"text": "s \\in S"
},
{
"math_id": 7,
"text": "\\{ s e^{i q \\pi} : q \\in \\Q \\}"
},
{
"math_id": 8,
"text": "X \\subset S"
},
{
"math_id": 9,
"text": "e^{i q \\pi} X := \\{ e^{i q \\pi} x : x \\in X \\}"
},
{
"math_id": 10,
"text": "q"
},
{
"math_id": 11,
"text": "X"
},
{
"math_id": 12,
"text": "[0,1]^3"
},
{
"math_id": 13,
"text": "0"
},
{
"math_id": 14,
"text": "\\infty"
}
] |
https://en.wikipedia.org/wiki?curid=620134
|
62016308
|
UVA method
|
The UVA method ("fr. Méthode des Unités de Valeur Ajoutée" - the Value-added Unit Method) is an accounting and decision-making tool, based on calculating the cost of sales. Unlike management accounting, which calculates product margins, the UVA method calculates the result (profit or loss) generated by each sale. The UVA method relies on a very detailed analysis of all costs related to products, customers, orders, and deliveries. It introduces the notion of a single measure unit (the UVA), which applies to all the operations in the company. The method relies on an equivalent-based approach.
Origins of the UVA Method.
The UVA method is an upgrade of the GP method, which traces back to the works of Georges Perrin (1891–1958), a French engineer and a graduate of the "Ecole Centrale". In 1953, he presented his product costing method, based on the introduction of a measure unit – the GP – distinct from currency, which allowed one to express the entire production of a company. The GP unit stands for the effort that the company needs to make in order to produce a representative good (a basic item). The validity of the method relies on the principle of the "occult constants", postulated by Georges Perrin, according to which the relationships between the production efforts made to produce various goods remain stable over time.
Jean Fiévez and Robert Zaya, from the "Les Ingénieurs Associés" (LIA) consulting office, developed it further. In 1995, its name changed to the Value-added Unit Method (UVA). Unlike the GP method, which focused only on product costing, the UVA method looks into almost every operation in the company.
Basic Concepts of the UVA Method.
The UVA method relies on a highly detailed analysis of all value-producing processes, which involves both product-related processes and customer-related ones.
The whole added-value production of a business is measured with the help of one unit: the UVA. The idea of using a measure unit shared by all the company's operations is based on the fact that only companies making a single product know its cost perfectly (namely, total expenses divided by the number of units produced). The introduction of the UVA changes every company into a “mono-product” one.
By analysing all of its processes, the company can envisage “determining its results per sale”. This is the main objective of the UVA method, since a transaction/sale (which translates into an invoice) concentrates all the efforts carried out through the company's various operations. This is clearly a new approach to management, in which one asks oneself whether a transaction between the company and its customer results in profit or loss. The customer is therefore at the centre of the decision-making mechanism. When the customer buys something for a given sum, has the company earned or lost money? Each sale contributes to the overall result of the company. The result of a sale is the difference between the cashed amount and the cost of the sale. The aggregated results of all sales constitute the company's EBIT (earnings before interest and taxes).
The question of precision in costing is linked to the presence of indirect costs. By placing the sale at the centre of the analysis, the UVA method does away with direct/indirect costs, because from a sale viewpoint, all costs are direct: sold products, transport, order processing, invoicing, etc. Thanks to that approach, the UVA method provides a high result precision.
The method presents two cost axes: product costs and customer costs. Product costs include design, processing, raw materials, production, control, storage, post-sale service, etc. Everything that the company has had to do to sell its products (or services) falls under the customer costs. Customer costs include market research, order processing, order preparation, shipment, invoicing, etc.
About 90 to 95 percent of resources fall under two categories of costs: products and customers. The rest of the costs have to do with the internal operation of the business: general management, financial accounting, etc.
Product costs have two distinct components: the cost of the value added by the company and the cost of integrated purchases. Integrated purchases (essentially, the raw materials required to produce a good) are called "Product-specific Expenses" (PSE), according to the UVA method. They can be found in the nomenclatures, so they are easy to calculate.
formula_0
Customer costs, too, have two distinct components: the cost of the added value and the "Customer-specific Expenses" (CSE). The CSE may include the commission to the representative, the customer rebate, or the price paid to the carrier. These are costs that are determined directly.
formula_1
The UVA method focuses on a precise calculation of the added value.
All the added value that the business produces is measured by a single unit: the UVA.
Main Notions.
The UVA method introduces a certain number of specific notions:
Attributable expenses.
These are expenses that can be attributed to the entries without any arbitrary allocation key; they represent the use of resources by the entries.
The main attributable expenses are the following:
Cost of a Sale.
The cost of a sale is the sum of product costs and customer costs incurred by the company in order to carry out the sale.
Index of a UVA Entry.
The index of a UVA entry is the ratio between its rate and the basic rate.
formula_2
In other words, the index of a UVA entry is given by its use of resources, expressed in added value units.
Process.
The process (or sequence of operations) is a succession of activities carried out in association with UVA entries within a given timeframe. The concept of the UVA entry allows one to define every process identified in a business – without distinguishing between manufacturing processes and any other processes that generate added value (management, marketing, etc.) – viewing them all as sequences of operations carried out at the UVA entries during a given timeframe.
Profitability Curve.
The profitability curve of the sales is a curve that represents the aggregated turnover (invoice by invoice) on its abscissa and the result rate (the result as a percentage of the turnover) on its ordinate. The invoices are classified in ascending order of the result rate.
The profitability curve shows a summarised structure of the result that the company has obtained during a given time period. It emphasises the heterogeneity of the transaction results obtained by the business. The company's global result may be positive, but the results of the sales that make it up will be highly-variable. One distinguishes four categories of sales: highly-deficient sales – the so-called “hemorrhagic” ones (which have a result rate below –20 percent), deficient sales (going from –20 to 0 percent), profitable sales (0 to 20 percent), and the so-called dangerously profitable sales (of over 20 percent). The graph displays a sales profit curve in its canonical form. Turnover percentages may vary a lot from one category to another; on the other hand, result variation is common in all companies. In addition to the base curve that groups all sales together, one may plot out curves to represent a higher level of data aggregation. A sale that is represented by an invoice is in fact a “foundation brick” that enables one to form various types of groups. For instance, by grouping together all the invoices per customer, one obtains a customer profitability curve.
"Profit curve in canonical form"
Rate of UVA Entry.
The rate of a UVA entry is the sum of the resources that have been used per unit of work. It includes labour, depreciation, floor space used, consumables, maintenance, etc. To calculate it, one must add up the unit expenses attributable to the entry.
Result of a sale.
The result of a sale is the difference between the amount paid and the cost of the sale.
UVA – Added Value Unit.
The added value unit (1 UVA) represents the use of resources required to carry out a typical process within the company's activity: generally speaking, it is the manufacturing of an item (in the case of a processing company) and execution of a service (in the case of a service company). The process is called basic process, and its rate will be the basic rate. By definition, the added-value unit corresponds to the amount of resources that have been used and which are required to carry out the basic process (producing a good or performing a service).
To calculate the current monetary cost of the unit, it is enough to divide the expenses that were incurred during a specific period of time by the number of UVAs produced during that period (UVAP). The product-specific expenses and customer-specific expenses (PSE and CSE) are deducted from the cost amount (C) provided by the general accounting, because these expenses are taken into account directly.
formula_3
UVA Equivalent of a Process.
The UVA equivalent of a process is given by its use of resources, expressed in added value units.
By multiplying the index of every UVA entry involved in a process execution by the usage time thereof, one obtains its use of resources (expressed in UVA) during that process. The sum of these uses constitutes the UVA equivalent of the process.
formula_4
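A worked numeric example (invented figures) combining the entry indices, usage times, and the UVA cost formula above:

```python
# UVA equivalent of a process, then its monetary added-value cost.
entries = [  # (entry index, usage time in hours)
    (1.0, 0.5),   # the basic entry itself
    (2.4, 0.2),   # a more resource-intensive entry
    (0.8, 1.0),
]
uva_equivalent = sum(index * time for index, time in entries)  # 1.78

# Monetary cost of one UVA for the period: (C - (PSE + CSE)) / UVAP.
C, pse_cse, uvap = 1_200_000.0, 450_000.0, 50_000.0
cost_of_uva = (C - pse_cse) / uvap          # 15.0
print(uva_equivalent * cost_of_uva)         # added-value cost: 26.7
```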
UVA Entry.
The UVA entry comprises all the material and human resources needed to carry out an operation, which are used in a clearly-defined technical and economic framework. In other words, a UVA entry is by definition a homogeneous package of resource uses. UVA entries are present in every company operation.
To do a precise analysis of a company's operations, one needs to start by reviewing all of its work entries. The number of UVA entries depends on the size and structure of the analysed company and may range from a few dozen to several hundred.
Implementation of the UVA Method.
The implementation of the UVA method is composed of two phases: construction and exploitation.
Construction.
The construction of the UVA method comprises the following steps:
Exploitation.
To exploit the method means:
Fields of application.
The UVA method can be applied in industrial, service and distribution enterprises that can be described as a network of repetitive processes in production, administration, logistics, etc., in contrast to companies that work on individual projects.
The best results are obtained for heterogeneous, complex enterprises, that is, those commercialising many different products and having many clients.
Innovations of the UVA Method.
The UVA method is not a method of cost allocation. In management accounting, one “cuts up” the whole (= the company) into “pieces” (sections/centres of analysis/activities). Then in the same way, all the expenses are discharged/allocated to each portion that was cut out. This approach may be described as “cost allocation” [as done in traditional bookkeeping]. It is a top-down approach. With the UVA method, one does the reverse – a re-composition: at a micro-scale (namely, at the level of one sale), one identifies each and every element that has helped carry out the sale. Then by adding up all the sale transactions, one recomposes almost all the resources of the company. The only resources that are not allocated directly here are those that are associated with the internal management processes; however, they are to be found in the cost of each transaction, via the UVA cost. This is a bottom-up approach.
The UVA method regards the company as a network of processes. The method makes a precise analysis of all these processes. The novelty of the UVA method is to analyse not only the processes related to sold products, but also the ones related to served customers.
According to the UVA method, the object of the costing is the sale, not the product. Thanks to this new way of viewing things, product-related costs that were indirect now become direct, as from the viewpoint of a sale, all costs are direct: the cost of sold products, the freight cost, the cost of order processing and invoicing, etc. The difference between the amount that is invoiced to the customer and the cost of the sale gives the result: profit or loss. The UVA method is the only one that allows one to calculate the profitability of each sale.
The profitability curve is one of the most valuable indicators provided by the UVA method. It has been made possible by choosing to view the sale as a cost object. The curve can be plotted out, since one calculates the profitability of each invoice. Thanks to the profitability curve, the account managers determine the structure of the company's result and can make precisely-targeted decisions.
|
[
{
"math_id": 0,
"text": "Cost\\ of\\ a\\ product = PSE + cost\\ of\\ the\\ added\\ value"
},
{
"math_id": 1,
"text": "Customer\\ cost = CSE + cost\\ of\\ the\\ added\\ value"
},
{
"math_id": 2,
"text": "Index\\ of\\ a\\ UVA\\ entry={rate\\ of\\ the\\ UVA\\ entry \\over base\\ rate}"
},
{
"math_id": 3,
"text": "Cost\\ of\\ the\\ UVA=\\frac{C -(PSE + CSE)}{UVAP}"
},
{
"math_id": 4,
"text": "UVA\\ equivalent\\ of\\ a\\ process=\\sum_{i=1}^N entry\\ index\\ i\\ \\times \\ usage\\ time\\ of\\ the\\ entry\\ i"
}
] |
https://en.wikipedia.org/wiki?curid=62016308
|
62017767
|
Superplan
|
Programming language with strong abstraction from details of hardware
Superplan was a high-level programming language developed between 1949 and 1951 by Heinz Rutishauser, the name being a reference to "Rechenplan" (i.e. computation plan), which in Konrad Zuse's terminology designates a single program.
The language was described in Rutishauser's 1951 publication, whose title translates as "Automatically Created Computation Plans for Program-Controlled Computing Machines".
Superplan introduced the keyword "Für" as its for loop, which had the following form (formula_0 being an array item):
Für i="base"("increment")"limit": formula_0 + "addend" = formula_0
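As a rough illustration, the loop can be rendered in a modern language. The Python sketch below is an assumed interpretation of the semantics (iterate i from "base" to "limit" in steps of "increment", updating each array element in place); it is not Rutishauser's notation.

def fuer(a, base, increment, limit, addend):
    """Assumed semantics of the Superplan Für loop shown above."""
    i = base
    while i <= limit:
        a[i] = a[i] + addend  # "a_i + addend = a_i" in Superplan's notation
        i += increment

a = [0, 10, 20, 30, 40]
fuer(a, base=1, increment=1, limit=3, addend=5)
print(a)  # [0, 15, 25, 35, 40]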
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "a_i"
}
] |
https://en.wikipedia.org/wiki?curid=62017767
|
62018
|
Comparative advantage
|
Lower relative opportunity cost in producing a good
Comparative advantage in an economic model is the advantage over others in producing a particular good. A good can be produced at a lower relative opportunity cost or autarky price, i.e. at a lower relative marginal cost prior to trade. Comparative advantage describes the economic reality of the gains from trade for individuals, firms, or nations, which arise from differences in their factor endowments or technological progress.
David Ricardo developed the classical theory of comparative advantage in 1817 to explain why countries engage in international trade even when one country's workers are more efficient at producing "every" single good than workers in other countries. He demonstrated that if two countries capable of producing two commodities engage in the free market (albeit with the assumption that the capital and labour do not move internationally), then each country will increase its overall consumption by exporting the good for which it has a comparative advantage while importing the other good, provided that there exist differences in labor productivity between both countries. Widely regarded as one of the most powerful yet counter-intuitive insights in economics, Ricardo's theory implies that comparative advantage rather than absolute advantage is responsible for much of international trade.
Classical theory and David Ricardo's formulation.
Adam Smith first alluded to the concept of "absolute advantage" as the basis for international trade in 1776, in "The Wealth of Nations":
<templatestyles src="Template:Blockquote/styles.css" />If a foreign country can supply us with a commodity cheaper than we ourselves can make it, better buy it off them with some part of the produce of our own industry employed in a way in which we have some advantage. The general industry of the country, being always in proportion to the capital which employs it, will not thereby be diminished [...] but only left to find out the way in which it can be employed with the greatest advantage.
Writing several decades after Smith in 1808, Robert Torrens articulated a preliminary definition of comparative advantage as the loss from the closing of trade:
<templatestyles src="Template:Blockquote/styles.css" />[I]f I wish to know the extent of the advantage, which arises to England, from her giving France a hundred pounds of broadcloth, in exchange for a hundred pounds of lace, I take the quantity of lace which she has acquired by this transaction, and compare it with the quantity which she might, at the same expense of labour and capital, have acquired by manufacturing it at home. The lace that remains, beyond what the labour and capital employed on the cloth, might have fabricated at home, is the amount of the advantage which England derives from the exchange.
In 1814 the anonymously published pamphlet "Considerations on the Importation of Foreign Corn" featured the earliest recorded formulation of the concept of comparative advantage. Torrens would later publish his work "External Corn Trade" in 1815 acknowledging this pamphlet author's priority.
In 1817, David Ricardo published what has since become known as the theory of comparative advantage in his book "On the Principles of Political Economy and Taxation".
Ricardo's example.
In a famous example, Ricardo considers a world economy consisting of two countries, Portugal and England, each producing two goods of identical quality. In Portugal, the "a priori" more efficient country, it is possible to produce wine and cloth with less labor than it would take to produce the same quantities in England. However, the relative costs or ranking of cost of producing those two goods differ between the countries.
In this illustration, England could commit 100 hours of labor to produce one unit of cloth, or produce 5/6 units of wine. Meanwhile, in comparison, Portugal could commit 100 hours of labor to produce 10/9 units of cloth, or produce 5/4 units of wine. Portugal possesses an "absolute advantage" in producing both cloth and wine due to more produced per hour (since 10/9 > 1 and 5/4 > 1). If the capital and labour were mobile, both wine and cloth should be made in Portugal, with the capital and labour of England moved there. If they were not mobile, as Ricardo believed them to be generally, then England's "comparative advantage" (due to lower opportunity cost) in producing cloth means that it has an incentive to produce more of that good which is relatively cheaper for them to produce than the other, assuming they have an advantageous opportunity to trade in the marketplace for the other more difficult to produce good.
In the absence of trade, England requires 220 hours of work to both produce and consume one unit each of cloth and wine while Portugal requires 170 hours of work to produce and consume the same quantities. England is more efficient at producing cloth than wine, and Portugal is more efficient at producing wine than cloth. So, if each country specializes in the good for which it has a comparative advantage, then the global production of both goods increases, for England can spend 220 labor hours to produce 2.2 units of cloth while Portugal can spend 170 hours to produce 2.125 units of wine. Moreover, if both countries specialize in the above manner and England trades a unit of its cloth for 5/6 to 9/8 units of Portugal's wine, then both countries can consume at least a unit each of cloth and wine, with 0 to 0.2 units of cloth and 0 to 0.125 units of wine remaining in each respective country to be consumed or exported. Consequently, both England and Portugal can consume more wine and cloth under free trade than in autarky.
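The arithmetic above can be checked mechanically. This Python sketch uses the labor figures implied by the totals quoted in the text (220 hours for England, 170 for Portugal); the exact-fraction bookkeeping is an illustrative choice, not part of Ricardo's presentation.

from fractions import Fraction as F

# Hours of labour per unit of each good, consistent with the text.
hours = {"England": {"cloth": 100, "wine": 120},
         "Portugal": {"cloth": 90, "wine": 80}}

# Autarky: each country makes one unit of each good.
print(sum(hours["England"].values()), sum(hours["Portugal"].values()))  # 220 170

# Full specialization: England makes only cloth, Portugal only wine.
print(F(220, hours["England"]["cloth"]))   # 11/5 = 2.2 units of cloth
print(F(170, hours["Portugal"]["wine"]))   # 17/8 = 2.125 units of wine

# Opportunity costs bound the mutually beneficial terms of trade.
print(F(hours["England"]["cloth"], hours["England"]["wine"]))    # 5/6
print(F(hours["Portugal"]["cloth"], hours["Portugal"]["wine"]))  # 9/8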
Ricardian model.
The Ricardian model is a general equilibrium mathematical model of international trade. Although the idea of the Ricardian model was first presented in the "Essay on Profits" (a single-commodity version) and then in the "Principles" (a multi-commodity version) by David Ricardo, the first mathematical Ricardian model was published by William Whewell in 1833. The earliest test of the Ricardian model was performed by G.D.A. MacDougall and published in the "Economic Journal" in 1951 and 1952. In the Ricardian model, trade patterns depend on productivity differences.
The following is a typical modern interpretation of the classical Ricardian model. In the interest of simplicity, it uses notation and definitions, such as opportunity cost, unavailable to Ricardo.
The world economy consists of two countries, Home and Foreign, which produce wine and cloth. Labor, the only factor of production, is mobile domestically but not internationally; there may be migration between sectors but not between countries. We denote the labor force in Home by formula_0, the amount of labor required to produce one unit of wine in Home by formula_1, and the amount of labor required to produce one unit of cloth in Home by formula_2. The total amount of wine and cloth produced in Home are formula_3 and formula_4 respectively. We denote the same variables for Foreign by appending a prime. For instance, formula_5 is the amount of labor needed to produce a unit of wine in Foreign.
We do not know if Home can produce cloth using fewer hours of work than Foreign. That is, we do not know if formula_6. Similarly, we do not know if Home can produce wine using fewer hours of work. However, we assume Home is "relatively" more productive than Foreign in making cloth vs. wine:
formula_7
Equivalently, we may assume that Home has a comparative advantage in cloth in the sense that it has a lower opportunity cost for cloth in terms of wine than Foreign:
formula_8
In the absence of trade, the relative price of cloth and wine in each country is determined solely by the relative labor cost of the goods. Hence the relative autarky price of cloth is formula_9 in Home and formula_10 in Foreign. With free trade, the price of cloth or wine in either country is the world price formula_11 or formula_12.
Instead of considering the world demand (or supply) for cloth and wine, we are interested in the world "relative demand" (or "relative supply") for cloth and wine, which we define as the ratio of the world demand (or supply) for cloth to the world demand (or supply) for wine. In general equilibrium, the world relative price formula_13 will be determined uniquely by the intersection of world relative demand formula_14 and world relative supply formula_15 curves.
We assume that the relative demand curve reflects substitution effects and is decreasing with respect to relative price. The behavior of the relative supply curve, however, warrants closer study. Recalling our original assumption that Home has a comparative advantage in cloth, we consider five possibilities for the relative quantity of cloth supplied at a given price, ordered by the world relative price of cloth: formula_19, formula_16, formula_20, formula_23, and formula_22. In the intermediate case formula_20, each country specializes completely, so the relative supply of cloth is formula_21.
As long as the relative demand is finite, the relative price is always bounded by the inequality
formula_24
In autarky, Home faces a production constraint of the form
formula_25
from which it follows that Home's cloth consumption at the production possibilities frontier is
formula_26.
With free trade, Home produces cloth exclusively, an amount of which it exports in exchange for wine at the prevailing rate. Thus Home's overall consumption is now subject to the constraint
formula_27
while its cloth consumption at the "consumption possibilities" frontier is given by
formula_28.
A symmetric argument holds for Foreign. Therefore, by trading and specializing in a good for which it has a comparative advantage, each country can expand its consumption possibilities. Consumers can choose from bundles of wine and cloth that they could not have produced themselves in closed economies.
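A small numerical sketch can make the frontier comparison concrete. The parameter values below are invented for illustration; they merely satisfy the stated assumptions (the world relative price of wine is assumed to lie in the admissible range).

L, a_LC, a_LW = 1000, 1.0, 2.0  # Home's labor force and unit labor requirements
pw_over_pc = 0.6                # assumed world relative price of wine

def cloth_autarky(q_wine):
    """Home's cloth consumption on the production possibilities frontier."""
    return L / a_LC - (a_LW / a_LC) * q_wine

def cloth_trade(q_wine):
    """Home's cloth consumption on the consumption possibilities frontier."""
    return L / a_LC - pw_over_pc * q_wine

for q in (0, 100, 200):
    print(q, cloth_autarky(q), cloth_trade(q))
# The trade frontier dominates whenever P_W/P_C < a_LW/a_LC (here 0.6 < 2).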
There is another way to prove the theory of comparative advantage, which requires fewer assumptions than the above-detailed proof; in particular, it does not require hourly wages to be equal in both industries, nor any equilibrium between supply and demand on the market. Such a proof can be extended to situations with many goods and many countries, non-constant returns, and more than one factor of production.
Terms of trade.
Terms of trade is the rate at which one good could be traded for another. If both countries specialize in the good for which they have a comparative advantage and then trade, the terms of trade for a good (that benefit both entities) will fall between each entity's opportunity costs. In the example above, one unit of cloth would trade for between formula_29 units of wine and formula_30 units of wine.
Haberler's opportunity costs formulation.
In 1930 Austrian-American economist Gottfried Haberler detached the doctrine of comparative advantage from Ricardo's labor theory of value and provided a modern opportunity cost formulation. Haberler's reformulation of comparative advantage revolutionized the theory of international trade and laid the conceptual groundwork of modern trade theories.
Haberler's innovation was to reformulate the theory of comparative advantage such that the value of good X is measured in terms of the forgone units of production of good Y rather than the labor units necessary to produce good X, as in the Ricardian formulation. Haberler implemented this opportunity-cost formulation of comparative advantage by introducing the concept of a production possibility curve into international trade theory.
Modern theories.
Since 1817, economists have attempted to generalize the Ricardian model and derive the principle of comparative advantage in broader settings, most notably in the neoclassical "specific factors" Ricardo-Viner (which allows for the model to include more factors than just labour) and "factor proportions" Heckscher–Ohlin models. Subsequent developments in the new trade theory, motivated in part by the empirical shortcomings of the H–O model and its inability to explain intra-industry trade, have provided an explanation for aspects of trade that are not accounted for by comparative advantage. Nonetheless, economists like Alan Deardorff, Avinash Dixit, Gottfried Haberler, and Victor D. Norman have responded with weaker generalizations of the principle of comparative advantage, in which countries will only "tend" to export goods for which they have a comparative advantage.
Dornbusch et al.'s continuum of goods formulation.
In both the Ricardian and H–O models, the comparative advantage theory is formulated for a 2 countries/2 commodities case. It can be extended to a 2 countries/many commodities case, or a many countries/2 commodities case. Adding commodities in order to have a smooth continuum of goods is the major insight of the seminal paper by Dornbusch, Fischer, and Samuelson. In fact, inserting an increasing number of goods into the chain of comparative advantage makes the gaps between the ratios of the labor requirements negligible, in which case the three types of equilibria around any good in the original model collapse to the same outcome. It notably allows for transportation costs to be incorporated, although the framework remains restricted to two countries. But in the case with many countries (more than 3 countries) and many commodities (more than 3 commodities), the notion of comparative advantage requires a substantially more complex formulation.
Deardorff's general law of comparative advantage.
Skeptics of comparative advantage have underlined that its theoretical implications hardly hold when applied to individual commodities or pairs of commodities in a world of multiple commodities. Deardorff argues that the insights of comparative advantage remain valid if the theory is restated in terms of averages across all commodities. His models provide multiple insights on the correlations between vectors of trade and vectors with relative-autarky-price measures of comparative advantage. "Deardorff's general law of comparative advantage" is a model incorporating multiple goods which takes into account tariffs, transportation costs, and other obstacles to trade.
Alternative approaches.
Recently, Y. Shiozawa succeeded in constructing a theory of international value in the tradition of Ricardo's cost-of-production theory of value. This was based on a wide range of assumptions: Many countries; Many commodities; Several production techniques for a product in a country; Input trade (intermediate goods are freely traded); Durable capital goods with constant efficiency during a predetermined lifetime; No transportation cost (extendable to positive cost cases).
In a famous comment, McKenzie pointed out that "A moment's consideration will convince one that Lancashire would be unlikely to produce cotton cloth if the cotton had to be grown in England." However, McKenzie and later researchers could not produce a general theory which includes traded input goods because of the mathematical difficulty. As John Chipman points out, McKenzie found that "introduction of trade in intermediate product necessitates a fundamental alteration in classical analysis." Durable capital goods such as machines and installations are inputs to production in the same way as parts and ingredients.
In view of the new theory, no physical criterion exists. Deardorff examines 10 versions of definitions in two groups but could not give a general formula for the case with intermediate goods. The competitive patterns are determined by traders' trials to find the cheapest products in the world. The search for the cheapest product is achieved by world optimal procurement. Thus the new theory explains how global supply chains are formed.
Empirical approach to comparative advantage.
Comparative advantage is a theory about the benefits that specialization and trade would bring, rather than a strict prediction about actual behavior. (In practice, governments restrict international trade for a variety of reasons; under Ulysses S. Grant, the US postponed opening up to free trade until its industries were up to strength, following the example set earlier by Britain.) Nonetheless there is a large amount of empirical work testing the predictions of comparative advantage. The empirical works usually involve testing predictions of a particular model. For example, the Ricardian model predicts that technological differences in countries result in differences in labor productivity. The differences in labor productivity in turn determine the comparative advantages across different countries. Testing the Ricardian model for instance involves looking at the relationship between relative labor productivity and international trade patterns. A country that is relatively efficient in producing shoes tends to export shoes.
Direct test: natural experiment of Japan.
Assessing the validity of comparative advantage on a global scale with the examples of contemporary economies is analytically challenging because of the multiple factors driving globalization: indeed, investment, migration, and technological change play a role in addition to trade. Even if we could isolate the workings of open trade from other processes, establishing its causal impact also remains complicated: it would require a comparison with a counterfactual world without open trade. Considering the durability of different aspects of globalization, it is hard to assess the sole impact of open trade on a particular economy.
Daniel Bernhofen and John Brown have attempted to address this issue, by using a natural experiment of a sudden transition to open trade in a market economy. They focus on the case of Japan. The Japanese economy indeed developed over several centuries under autarky and a quasi-isolation from international trade but was, by the mid-19th century, a sophisticated market economy with a population of 30 million. Under Western military pressure, Japan opened its economy to foreign trade through a series of unequal treaties.
In 1859, the treaties limited tariffs to 5% and opened trade to Westerners. Considering that the transition from autarky, or self-sufficiency, to open trade was abrupt, few changes to the fundamentals of the economy occurred in the first 20 years of trade. The general law of comparative advantage theorizes that an economy should, on average, export goods with low self-sufficiency prices and import goods with high self-sufficiency prices. Bernhofen and Brown found that by 1869, the price of Japan's main export, silk and derivatives, saw a 100% increase in real terms, while the prices of numerous imported goods declined by 30-75%. In the next decade, the ratio of imports to gross domestic product reached 4%.
Structural estimation.
Another important way of demonstrating the validity of comparative advantage has consisted in 'structural estimation' approaches. These approaches have built on the Ricardian formulation of two goods for two countries and subsequent models with many goods or many countries. The aim has been to reach a formulation accounting for both multiple goods and multiple countries, in order to reflect real-world conditions more accurately. Jonathan Eaton and Samuel Kortum underlined that a convincing model needed to incorporate the idea of a 'continuum of goods' developed by Dornbusch et al. for both goods and countries. They were able to do so by allowing for an arbitrary (integer) number i of countries, and dealing exclusively with unit labor requirements for each good (one for each point on the unit interval) in each country (of which there are i).
Earlier empirical work.
Two of the first tests of comparative advantage were by MacDougall (1951, 1952). A prediction of a two-country Ricardian comparative advantage model is that countries will export goods where output per worker (i.e. productivity) is higher. That is, we expect a positive relationship between output per worker and the number of exports. MacDougall tested this relationship with data from the US and UK, and did indeed find a positive relationship. The statistical test of this positive relationship was replicated with new data by Stern (1962) and Balassa (1963).
Dosi et al. (1988) conducted a book-length empirical examination that suggests that international trade in manufactured goods is largely driven by differences in national technological competencies.
One critique of the textbook model of comparative advantage is that there are only two goods. The results of the model are robust to this assumption. Dornbusch et al. (1977) generalized the theory to allow for such a large number of goods as to form a smooth continuum. Based in part on these generalizations of the model, Davis (1995) provides a more recent view of the Ricardian approach to explain trade between countries with similar resources.
More recently, Golub and Hsieh (2000) presents modern statistical analysis of the relationship between relative productivity and trade patterns, which finds reasonably strong correlations, and Nunn (2007) finds that countries that have greater enforcement of contracts specialize in goods that require relationship-specific investments.
Taking a broader perspective, there has been work about the benefits of international trade. Zimring & Etkes (2014) finds that the Blockade of the Gaza Strip, which substantially restricted the availability of imports to Gaza, saw labor productivity fall by 20% in three years. Markusen et al. (1994) reports the effects of moving away from autarky to free trade during the Meiji Restoration, with the result that national income increased by up to 65% in 15 years.
Criticism.
Several arguments have been advanced against using comparative advantage as a justification for advocating free trade, and they have gained an audience among economists. James Brander and Barbara Spencer demonstrated how, in a strategic setting where a few firms compete for the world market, export subsidies and import restrictions can keep foreign firms from competing with national firms, increasing welfare in the country implementing these so-called strategic trade policies.
There are some economists who dispute the claims of the benefit of comparative advantage. James K. Galbraith has stated that "free trade has attained the status of a god" and that "... none of the world's most successful trading regions, including Japan, Korea, Taiwan, and now mainland China, reached their current status by adopting neoliberal trading rules." He argues that comparative advantage relies on the assumption of constant returns, which he states is not generally the case. According to Galbraith, nations trapped into specializing in agriculture are condemned to perpetual poverty, as agriculture is dependent on land, a finite non-increasing natural resource.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\textstyle L"
},
{
"math_id": 1,
"text": "\\textstyle a_{LW}"
},
{
"math_id": 2,
"text": "\\textstyle a_{LC}"
},
{
"math_id": 3,
"text": "Q_W"
},
{
"math_id": 4,
"text": "Q_C"
},
{
"math_id": 5,
"text": "\\textstyle a'_{LW}"
},
{
"math_id": 6,
"text": "a_{LC}<a'_{LC}"
},
{
"math_id": 7,
"text": "a_{LC}/a'_{LC}<a_{LW}/a'_{LW}."
},
{
"math_id": 8,
"text": "a_{LC}/a_{LW}<a'_{LC}/a'_{LW}."
},
{
"math_id": 9,
"text": "a_{LC}/a_{LW}"
},
{
"math_id": 10,
"text": "a'_{LC}/a'_{LW}"
},
{
"math_id": 11,
"text": "P_C"
},
{
"math_id": 12,
"text": "P_W"
},
{
"math_id": 13,
"text": "\\textstyle P_C/P_W"
},
{
"math_id": 14,
"text": "\\textstyle RD"
},
{
"math_id": 15,
"text": "\\textstyle RS"
},
{
"math_id": 16,
"text": "\\textstyle P_C/P_W = a_{LC}/a_{LW}<a'_{LC}/a'_{LW}"
},
{
"math_id": 17,
"text": "P'_W/a'_{LW}"
},
{
"math_id": 18,
"text": "P'_C/a'_{LC}"
},
{
"math_id": 19,
"text": "\\textstyle P_C/P_W < a_{LC}/a_{LW}<a'_{LC}/a'_{LW}"
},
{
"math_id": 20,
"text": "\\textstyle a_{LC}/a_{LW}<P_C/P_W < a'_{LC}/a'_{LW}"
},
{
"math_id": 21,
"text": "\\textstyle \\frac{L/a_{LC}}{L'/a'_{LW}}"
},
{
"math_id": 22,
"text": "\\textstyle a_{LC}/a_{LW}<a'_{LC}/a'_{LW}<P_C/P_W"
},
{
"math_id": 23,
"text": "\\textstyle a_{LC}/a_{LW}<a'_{LC}/a'_{LW}=P_C/P_W"
},
{
"math_id": 24,
"text": " a_{LC}/a_{LW}\\leq {P_C/P_W}\\leq {a'_{LC}/a'_{LW}}."
},
{
"math_id": 25,
"text": " a_{LC}Q_C+a_{LW}Q_W\\leq L,"
},
{
"math_id": 26,
"text": "Q_C=L/a_{LC}-(a_{LW}/a_{LC})Q_W"
},
{
"math_id": 27,
"text": "a_{LC}Q_C+a_{LC}(P_W/P_C)Q_W\\leq L"
},
{
"math_id": 28,
"text": "Q_C=L/a_{LC}-(P_W/P_C)Q_W\\geq L/a_{LC}-(a_{LW}/a_{LC})Q_W"
},
{
"math_id": 29,
"text": "\\frac 56"
},
{
"math_id": 30,
"text": "\\frac 9 8"
}
] |
https://en.wikipedia.org/wiki?curid=62018
|
62026514
|
BERT (language model)
|
Series of language models developed by Google AI
Bidirectional Encoder Representations from Transformers (BERT) is a language model introduced in October 2018 by researchers at Google. It learned by self-supervised learning to represent text as a sequence of vectors. It had the transformer encoder architecture. It was notable for its dramatic improvement over previous state-of-the-art models, and as an early example of a large language model. As of 2020, BERT was a ubiquitous baseline in Natural Language Processing (NLP) experiments.
BERT is trained by masked token prediction and next sentence prediction. As a result of this training process, BERT learns contextual, latent representations of tokens in their context, similar to ELMo and GPT-2. It found applications for many natural language processing tasks, such as coreference resolution and polysemy resolution. It is an evolutionary step over ELMo, and spawned the study of "BERTology", which attempts to interpret what is learned by BERT.
BERT was originally implemented in the English language at two model sizes, BERTBASE (110 million parameters) and BERTLARGE (340 million parameters). Both were trained on the Toronto BookCorpus (800M words) and English Wikipedia (2,500M words). The weights were released on GitHub. On March 11, 2020, 24 smaller models were released, the smallest being BERTTINY with just 4 million parameters.
Architecture.
BERT is an "encoder-only" transformer architecture. At a high level, BERT consists of 4 modules:
The task head is necessary for pre-training, but it is often unnecessary for so-called "downstream tasks", such as question answering or sentiment classification. Instead, one removes the task head, replaces it with a newly initialized module suited for the task, and fine-tunes the new module. The latent vector representation of the model is directly fed into this new module, allowing for sample-efficient transfer learning.
Embedding.
This section describes the embedding used by BERTBASE. The other one, BERTLARGE, is similar, just larger.
The tokenizer of BERT is WordPiece, which is a sub-word strategy like byte pair encoding. Its vocabulary size is 30,000, and any token not appearing in its vocabulary is replaced by codice_0 ("unknown").
The first layer is the embedding layer, which contains three components: token embeddings, position embeddings, and segment type embeddings.
The three embedding vectors are added together to form the initial token representation as a function of these three pieces of information. After embedding, the vector representation is normalized using a LayerNorm operation, outputting a 768-dimensional vector for each input token. After this, the representation vectors are passed forward through 12 Transformer encoder blocks, and are decoded back to 30,000-dimensional vocabulary space using a basic affine transformation layer.
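To make the embedding step concrete, here is a minimal NumPy sketch with toy sizes and untrained random weights; the real BERTBASE uses a 30,000-token vocabulary and learned LayerNorm gain and bias, both omitted here.

import numpy as np

vocab_size, max_pos, n_segments, hidden = 1000, 512, 2, 768  # toy vocabulary
rng = np.random.default_rng(0)
tok_emb = rng.normal(size=(vocab_size, hidden)) * 0.02
pos_emb = rng.normal(size=(max_pos, hidden)) * 0.02
seg_emb = rng.normal(size=(n_segments, hidden)) * 0.02

def layer_norm(x, eps=1e-12):
    """Per-token normalization (learned gain/bias omitted)."""
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def embed(token_ids, segment_ids):
    """Sum the three embeddings per token, then normalize."""
    positions = np.arange(len(token_ids))
    x = tok_emb[token_ids] + pos_emb[positions] + seg_emb[segment_ids]
    return layer_norm(x)

print(embed([5, 42, 7, 6], [0, 0, 0, 0]).shape)  # (4, 768): one vector per token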
Architectural family.
The encoder stack of BERT has 2 free parameters: formula_0, the number of layers, and formula_1, the hidden size. There are always formula_2 self-attention heads, and the feed-forward/filter size is always formula_3. By varying these two numbers, one obtains an entire family of BERT models.
The notation for an encoder stack is written as L/H. For example, BERTBASE is written as 12L/768H, BERTLARGE as 24L/1024H, and BERTTINY as 2L/128H.
Training.
Pre-training.
BERT was pre-trained simultaneously on two tasks.
Masked Language Modeling.
In Masked Language Modeling, 15% of tokens would be randomly selected for the masked-prediction task, and the training objective was to predict the masked token given its context. In more detail, the selected token is replaced with a codice_2 token with probability 80%, replaced with a random token with probability 10%, and left unchanged with probability 10%.
The reason not all selected tokens are masked is to avoid the dataset shift problem. The dataset shift problem arises when the distribution of inputs seen during training differs significantly from the distribution encountered during inference. A trained BERT model might be applied to word representation (like Word2Vec), where it would be run over sentences not containing any codice_2 tokens. It was later found that more diverse training objectives are generally better.
As an illustrative example, consider the sentence "my dog is cute". It would first be divided into tokens like "my1 dog2 is3 cute4". Then a random token in the sentence would be picked. Let it be the 4th one "cute4". Next, there would be three possibilities: with 80% probability, "cute4" is replaced with codice_2; with 10% probability, it is replaced with a random token, say "happy4"; and with 10% probability, it is left unchanged.
After processing the input text, the model's 4th output vector is passed to its decoder layer, which outputs a probability distribution over its 30,000-dimensional vocabulary space.
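A minimal Python sketch of this corruption procedure follows; the token strings, the tiny vocabulary, and the selection rate used in the demo call are all made up for illustration.

import random

MASK = "[MASK]"
vocab = ["my", "dog", "is", "cute", "happy", "the"]

def mask_for_mlm(tokens, p_select=0.15, rng=random.Random(0)):
    """Select tokens, then: 80% -> MASK, 10% -> random token, 10% unchanged."""
    corrupted, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < p_select:
            targets[i] = tok  # the model must predict the original token here
            r = rng.random()
            if r < 0.8:
                corrupted[i] = MASK
            elif r < 0.9:
                corrupted[i] = rng.choice(vocab)
            # else: leave the token unchanged
    return corrupted, targets

# Demo with a higher selection rate so the tiny example shows an effect.
print(mask_for_mlm(["my", "dog", "is", "cute"], p_select=0.5))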
Next Sentence Prediction.
Given two spans of text, the model predicts if these two spans appeared sequentially in the training corpus, outputting either codice_5 or codice_6. The first span starts with a special token codice_7 (for "classify"). The two spans are separated by a special token codice_1 (for "separate"). After processing the two spans, the first output vector (the vector coding for codice_7) is passed to a separate neural network for the binary classification into codice_5 and codice_6.
Fine-tuning.
BERT is meant as a general pretrained model for various applications in natural language processing. That is, after pre-training, BERT can be fine-tuned with fewer resources on smaller datasets to optimize its performance on specific tasks such as natural language inference and text classification, and sequence-to-sequence-based language generation tasks such as question answering and conversational response generation.
The original BERT paper published results demonstrating that a small amount of finetuning (for BERTLARGE, 1 hour on 1 Cloud TPU) allowed it to achieve state-of-the-art performance on a number of natural language understanding tasks, including the GLUE (General Language Understanding Evaluation) benchmark, SQuAD (Stanford Question Answering Dataset) v1.1 and v2.0, and SWAG (Situations With Adversarial Generations).
Cost.
BERT was trained on the BookCorpus (800M words) and a filtered version of English Wikipedia (2,500M words) without lists, tables, and headers.
Training BERTBASE on 4 Cloud TPU (16 TPU chips total) took 4 days, at an estimated cost of 500 USD. Training BERTLARGE on 16 Cloud TPU (64 TPU chips total) took 4 days.
Interpretation.
Language models like ELMo, GPT-2, and BERT spawned the study of "BERTology", which attempts to interpret what is learned by these models. Their performance on these natural language understanding tasks is not yet well understood. Several research publications in 2018 and 2019 focused on investigating the relationship between BERT's output and its carefully chosen input sequences, analysis of internal vector representations through probing classifiers, and the relationships represented by attention weights.
The high performance of the BERT model could also be attributed to the fact that it is bidirectionally trained. This means that BERT, based on the Transformer model architecture, applies its self-attention mechanism to learn information from a text from the left and right side during training, and consequently gains a deep understanding of the context. For example, the word "fine" can have two different meanings depending on the context (I feel fine today, She has fine blond hair). BERT considers the words surrounding the target word "fine" from the left and right side.
However, it comes at a cost: due to its encoder-only architecture lacking a decoder, BERT can't be prompted and can't generate text, while bidirectional models in general do not work effectively without the right side, thus being difficult to prompt. As an illustrative example, if one wishes to use BERT to continue a sentence fragment "Today, I went to", then naively one would mask out all the tokens as "Today, I went to codice_2 codice_2 codice_2 ... codice_2 ." where the number of codice_2 is the length of the sentence one wishes to extend to. However, this constitutes a dataset shift, as during training, BERT has never seen sentences with that many tokens masked out. Consequently, its performance degrades. More sophisticated techniques allow text generation, but at a high computational cost.
History.
BERT was originally published by Google researchers Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. The design has its origins from pre-training contextual representations, including semi-supervised sequence learning, generative pre-training, ELMo, and ULMFit. Unlike previous models, BERT is a deeply bidirectional, unsupervised language representation, pre-trained using only a plain text corpus. Context-free models such as word2vec or GloVe generate a single word embedding representation for each word in the vocabulary, whereas BERT takes into account the context for each occurrence of a given word. For instance, whereas the vector for "running" will have the same word2vec vector representation for both of its occurrences in the sentences "He is running a company" and "He is running a marathon", BERT will provide a contextualized embedding that will be different according to the sentence.
On October 25, 2019, Google announced that they had started applying BERT models for English language search queries within the US. On December 9, 2019, it was reported that BERT had been adopted by Google Search for over 70 languages. In October 2020, almost every single English-based query was processed by a BERT model.
Variants.
The BERT models were influential and inspired many variants.
RoBERTa (2019) was an engineering improvement. It preserves BERT's architecture (slightly larger, at 355M parameters), but improves its training, changing key hyperparameters, removing the "next-sentence prediction" task, and using much larger mini-batch sizes.
DistilBERT (2019) distills BERTBASE to a model with just 60% of its parameters (66M), while preserving 95% of its benchmark scores. Similarly, TinyBERT (2019) is a distilled model with just 28% of its parameters.
ALBERT (2019) shared parameters across layers, and experimented with independently varying the hidden size and the word-embedding layer's output size as two hyperparameters. They also replaced the "next sentence prediction" task with the "sentence-order prediction" (SOP) task, where the model must distinguish the correct order of two consecutive text segments from their reversed order.
ELECTRA (2020) applied the idea of generative adversarial networks to the MLM task. Instead of masking out tokens, a small language model generates random plausible substitutions, and a larger network identifies these replaced tokens. The small model aims to fool the large model.
DeBERTa.
DeBERTa (2020) is a significant architectural variant, with "disentangled attention". Its key idea is to treat the positional and token encodings separately throughout the attention mechanism. Instead of combining the positional encoding (formula_4) and token encoding (formula_5) into a single input vector (formula_6), DeBERTa keeps them separate as a tuple: (formula_7). Then, at each self-attention layer, DeBERTa computes three distinct attention matrices, rather than the single attention matrix used in BERT: a content-to-content matrix, a content-to-position matrix, and a position-to-content matrix.
The three attention matrices are added together element-wise, then passed through a softmax layer and multiplied by a projection matrix.
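The following NumPy sketch illustrates the disentangled attention computation for a single head; the sizes and random weights are toy values, and details of the published model such as relative-position bucketing and scaling factors are omitted.

import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 8                          # sequence length, head dimension
content = rng.normal(size=(n, d))    # token encodings, kept separate from...
position = rng.normal(size=(n, d))   # ...positional encodings

Wqc, Wkc = rng.normal(size=(d, d)), rng.normal(size=(d, d))
Wqr, Wkr = rng.normal(size=(d, d)), rng.normal(size=(d, d))

Qc, Kc = content @ Wqc, content @ Wkc    # content projections
Qr, Kr = position @ Wqr, position @ Wkr  # position projections

A = (Qc @ Kc.T    # content-to-content
     + Qc @ Kr.T  # content-to-position
     + Qr @ Kc.T) # position-to-content

A = np.exp(A - A.max(-1, keepdims=True))
A = A / A.sum(-1, keepdims=True)  # row-wise softmax
print(A.shape)  # (4, 4) attention weights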
Absolute position encoding is included in the final self-attention layer as additional input.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "L"
},
{
"math_id": 1,
"text": "H"
},
{
"math_id": 2,
"text": "H/64"
},
{
"math_id": 3,
"text": "4H"
},
{
"math_id": 4,
"text": "x_{position}"
},
{
"math_id": 5,
"text": "x_{\\text{token}}"
},
{
"math_id": 6,
"text": "x_{input} = x_{position} + x_{token}"
},
{
"math_id": 7,
"text": "(x_{position}, x_{token})"
}
] |
https://en.wikipedia.org/wiki?curid=62026514
|
6202791
|
Tach timer
|
The tach(ometer) timer is an instrument used in aviation to accumulate the total number of revolutions performed by the engine. The unit of measure is equivalent to the number of hours of running at a certain, specific reference speed of rotation. If the reference speed of rotation is 2400 RPM then the timer runs in real time when the engine is running at 2400 RPM, half speed while the engine is run at 1200 RPM (a fast idle for some aviation engines) or at 5/6ths real time at 2000 RPM (a slow cruise speed).
The tach timer integrates over time the instantaneous rotation speed displayed by the tachometer. The displayed number is incremented by one if the engine is run at its reference speed for one hour. The quantity recorded is referred to as tach(ometer) hours. If the reference rotation speed is 2400 RPM then the tach timer records
formula_0
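As a worked example, the short Python sketch below applies the formula with the 2400 RPM reference speed used in the text.

def tach_hours(total_revolutions, reference_rpm=2400):
    """Accumulated tach hours for a given total number of engine revolutions."""
    return total_revolutions / (reference_rpm * 60)

print(tach_hours(2400 * 60))  # one real hour at 2400 RPM -> 1.0 tach hour
print(tach_hours(2000 * 60))  # one real hour at 2000 RPM -> 5/6 of a tach hour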
Uses.
The tach timer is usually used to schedule engine maintenance, although it is just an approximation of "Time in service" which is used to time and schedule aircraft maintenance. Time in service is defined in 14 CFR 1.1 as the actual time in the air, whereas tach time measures engine revolutions, which would still count time on the ground while the engine is idling (at a lower rate).
It can also be used as a basis for charging for aircraft rental as opposed to charging for elapsed time. This encourages the renter to properly warm the engine before takeoff and not to run the engine continuously at maximum speed.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{\\text{total}\\ \\text{revolutions}}{2400\\cdot 60}"
}
] |
https://en.wikipedia.org/wiki?curid=6202791
|
62035449
|
Parascorodite
|
Parascorodite is a rare, secondary iron-arsenate mineral. It has a chemical formula of FeAsO4·2H2O (as the dimorph of scorodite) and was discovered in 1967 using X-ray powder diffraction methods, when an unknown substance was found along with scorodite on medieval ore dumps in the Czech Republic. The holotype of parascorodite can be found in the mineralogical collection of the National Museum, Prague, Czech Republic under acquisition number P1p25/98.
Occurrence.
Parascorodite occurs at the Kank mine in the Kutna Hora ore district in Central Bohemia, Czech Republic. It is one of the rarest secondary minerals. Parascorodite is found in medieval ore dumps that were most likely used for silver and polymetallic ore waste. The dumps contain arsenic-rich ore, which in medieval times was considered waste.
Paragenesis.
The medieval ore dumps are heavily weathered, but it is assumed that parascorodite, along with other secondary iron arsenates and arsenosulfates, actually formed well before the waste material was dumped in this area, by natural weathering processes. Parascorodite formed as a product of arsenopyrite dissolution, followed by recrystallization of iron-arsenic-bearing solutions, under near-surface weathering conditions. Parascorodite is dimorphous with scorodite, and is also associated with pitticite, gypsum, jarosite, and amorphous ferric hydroxides.
Physical properties.
Parascorodite occurs in aggregates of somewhat hemispherical shapes. The aggregates grow to be about 2 cm across, consisting of extremely small crystals that can be arranged in fan-like or irregular masses. Parascorodite is cryptocrystalline, and has a luster that can vary from earthy to vitreous. It is a soft mineral, falling between 1-2 on the Mohs hardness scale. Aggregates can be white to yellowish, or more rarely green-grey in color, and have a yellow-white streak. The measured density of earthy aggregates in ethyl alcohol is 3.212 g/cm3. The rare green-grey variety of parascorodite aggregates may exhibit conchoidal fracture.
Individual crystal size varies between 0.1 μm and 0.5 μm, with some twinned crystals measuring 1.0 μm. Crystals occur as either prisms or thin flakes with a hexagonal outline.
Parascorodite dissolves slowly in 10% hydrochloric acid (HCl). In water, it will disintegrate rapidly into a powder. Under hydrothermal conditions parascorodite can re-crystallize back to scorodite.
Chemical composition.
The chemical composition of parascorodite was determined using qualitative spectral analysis. Two major elements were indicated: iron and arsenic. The quantitative composition was also determined using two wet chemical analyses.
Crystallography.
X-ray diffraction.
The crystal structure of parascorodite was determined using X-ray powder diffraction. From the diffraction data, the parascorodite unit cell was determined as hexagonal or trigonal. The unit cell parameters are formula_0 = 8.9327(5) Å and formula_1 = 9.9391(8) Å, with a cell volume of formula_2 = 686.83(8) Å3.
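As a quick consistency check, the quoted volume follows from the standard hexagonal-cell formula V = (sqrt(3)/2)·a²·c; the short Python sketch below reproduces it from the lattice parameters given above.

import math

a, c = 8.9327, 9.9391  # lattice parameters in angstroms
V = math.sqrt(3) / 2 * a * a * c
print(V)  # ~686.8 cubic angstroms, matching the quoted 686.83(8)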
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "a_1,a_2"
},
{
"math_id": 1,
"text": "c"
},
{
"math_id": 2,
"text": "V"
}
] |
https://en.wikipedia.org/wiki?curid=62035449
|
62045188
|
Quantum steering
|
In physics, in the area of quantum information theory and quantum computation, quantum steering is a special kind of nonlocal correlation, which is intermediate between Bell nonlocality and quantum entanglement. A state exhibiting Bell nonlocality must also exhibit quantum steering, and a state exhibiting quantum steering must also exhibit quantum entanglement. But for mixed quantum states, there exist examples which lie between these different quantum correlation sets. The notion was initially proposed by Erwin Schrödinger, and later made popular by Howard M. Wiseman, S. J. Jones, and A. C. Doherty.
Definition.
In the usual formulation of quantum steering, two distant parties, Alice and Bob, are considered. They share an unknown quantum state formula_0 with induced states formula_1 and formula_2 for Alice and Bob respectively. Alice and Bob can both perform local measurements on their own subsystems; for instance, Alice and Bob measure formula_3 and formula_4 and obtain the outcomes formula_5 and formula_6. After running the experiment many times, they will obtain the measurement statistics formula_7; this is just the symmetric scenario for nonlocal correlation. Quantum steering introduces some asymmetry between the two parties, viz., Bob's measurement devices are trusted (he knows what measurement his device carried out), while Alice's devices are untrusted. Bob's goal is to determine if Alice influences his states in a quantum mechanical way or just uses some of her prior knowledge of his partial states together with some classical means. The classical strategy available to Alice is known as the local hidden state model, which is an extension of the local hidden variable model for Bell nonlocality and also a restriction of the separable states model for quantum entanglement.
Mathematically, consider Alice having the measurement formula_8, where the elements formula_9 make up a POVM and the set formula_10 are the corresponding outcomes. Then Bob's local state assemblage (a set of positive operators) corresponding to Alice's measurement formula_11 is
formula_12 with formula_13 where the probability formula_14.
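As an illustration, the following NumPy sketch computes such a state assemblage for an example two-qubit state; the choice of the maximally entangled state and of a Z-basis projective measurement for Alice is made up for the example.

import numpy as np

# Two-qubit maximally entangled state (an illustrative choice).
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(phi, phi)

# Alice's POVM: projectors onto the Z-basis states.
povm = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

def assemblage(rho, povm):
    """Bob's unnormalized conditional states Tr_A[(E_a (x) I) rho]."""
    out = []
    for E in povm:
        M = (np.kron(E, np.eye(2)) @ rho).reshape(2, 2, 2, 2)
        out.append(np.trace(M, axis1=0, axis2=2))  # partial trace over Alice
    return out

sigma = assemblage(rho, povm)
print(sum(np.trace(s) for s in sigma))  # probabilities sum to 1
print(sum(sigma))                       # recovers Bob's reduced state rho_B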
Similar to the case of quantum entanglement, we first define un-steerable states. We introduce the local hidden state assemblage formula_15 for which formula_16 and formula_17.
We say that a state is un-steerable if for an arbitrary POVM measurement formula_8 and state assemblage formula_12, there exists a local hidden state assemblage formula_15 such that
formula_18 for all formula_19.
A state is called a steering state if it is not un-steerable.
Local hidden state model.
Let us do some comparison among Bell nonlocality, quantum steering, and quantum entanglement. By definition, a Bell nonlocal state is a state which does not admit a local hidden variable model for some measurement setting, i.e., its statistics cannot be decomposed as formula_20; a quantum steering state is a state which does not admit a local hidden state model for some measurement assemblage and state assemblage, i.e., its statistics cannot be decomposed as formula_21; and a quantum entangled state is a state which is not separable, i.e., its statistics cannot always be decomposed as formula_22. The three models share a great similarity.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rho"
},
{
"math_id": 1,
"text": "\\rho_A"
},
{
"math_id": 2,
"text": "\\rho_B"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "y"
},
{
"math_id": 5,
"text": "a"
},
{
"math_id": 6,
"text": "b"
},
{
"math_id": 7,
"text": "p(a,b|x,y)"
},
{
"math_id": 8,
"text": "M=\\{X_1,\\cdots,X_n\\}"
},
{
"math_id": 9,
"text": "X_i"
},
{
"math_id": 10,
"text": "\\{a_1, \\cdots,a_n\\}"
},
{
"math_id": 11,
"text": "M"
},
{
"math_id": 12,
"text": "\\mathcal{S}=\\{\\rho_{a_1|M},\\cdots,\\rho_{a_n|M}\\}"
},
{
"math_id": 13,
"text": "\\sum_{i=1}^{n}p(a_i|M)\\rho_{a_i|M}=\\rho_B"
},
{
"math_id": 14,
"text": "p(a_i|M)=\\mathrm{Tr}(\\rho_{a_i|M})"
},
{
"math_id": 15,
"text": "\\mathcal{A}=\\{\\sigma_{\\lambda}\\}"
},
{
"math_id": 16,
"text": "\\sum_{\\lambda}p(\\lambda) = \\sum_{\\lambda}\\mathrm{Tr}(\\sigma_{\\lambda}) = 1"
},
{
"math_id": 17,
"text": "\\sum_{\\lambda}p(\\lambda)\\sigma_{\\lambda}=\\rho_B"
},
{
"math_id": 18,
"text": "\\rho_{a_i|M}=\\sum_{\\lambda}p(\\lambda)p(a_i|M,\\lambda)\\sigma_{\\lambda}"
},
{
"math_id": 19,
"text": "a_i"
},
{
"math_id": 20,
"text": "p(a,b|x,y)=\\sum_{\\lambda}p(a|x,\\lambda)p(b|y,\\lambda)p(\\lambda)"
},
{
"math_id": 21,
"text": "p(a,b|x,y)=\\sum_{\\lambda}p(a|x,\\lambda)\\mathrm{Tr}(F_{b|y}\\sigma_{\\lambda})p(\\lambda)"
},
{
"math_id": 22,
"text": "p(a,b|x,y)=\\sum_{\\lambda}\\mathrm{Tr}(E_{a|x}\\chi_{\\lambda})\\mathrm{Tr}(F_{b|y}\\sigma_{\\lambda})p(\\lambda)"
}
] |
https://en.wikipedia.org/wiki?curid=62045188
|
62047
|
Mrs. Miniver's problem
|
Problem on areas of intersecting circles
Mrs. Miniver's problem is a geometry problem about the area of circles. It asks how to place two circles formula_0 and formula_1 of given radii in such a way that the lens formed by intersecting their two interiors has equal area to the symmetric difference of formula_0 and formula_1 (the area contained in one but not both circles). It was named for an analogy between geometry and social dynamics enunciated by fictional character Mrs. Miniver, who "saw every relationship as a pair of intersecting circles". Its solution involves a transcendental equation.
Origin.
The problem derives from "A Country House Visit", one of Jan Struther's newspaper articles appearing in the "Times of London" between 1937 and 1939 featuring her character Mrs. Miniver. According to the story:
She saw every relationship as a pair of intersecting circles. It would seem at first glance that the more they overlapped the better the relationship; but this is not so. Beyond a certain point the law of diminishing returns sets in, and there are not enough private resources left on either side to enrich the life that is shared. Probably perfection is reached when the area of the two outer crescents, added together, is exactly equal to that of the leaf-shaped piece in the middle. On paper there must be some neat mathematical formula for arriving at this; in life, none.
Louis A. Graham and Clifton Fadiman formalized the mathematics of the problem and popularized it among recreational mathematicians.
Solution.
The problem can be solved by cutting the lens along the line segment between the two crossing points of the circles into two circular segments, and using the formula for the area of a circular segment to relate the distance between the crossing points to the total area that the problem requires the lens to have. This gives a transcendental equation for the distance between crossing points, but it can be solved numerically. There are two boundary conditions whose distances between centers can be readily solved: the farthest apart the centers can be is when the circles have equal radii, and the closest they can be is when one circle is contained completely within the other, which happens when the ratio between radii is formula_2. If the ratio of radii falls beyond these limiting cases, the circles cannot satisfy the problem's area constraint.
In the case of two circles of equal size, these equations can be simplified somewhat. The rhombus formed by the two circle centers and the two crossing points, with side lengths equal to the radius, has an angle formula_3 radians at the circle centers, found by solving the equation formula_4 from which it follows that the ratio of the distance between their centers to their radius is formula_5.
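The equal-radius case can be solved numerically in a few lines; the bisection below is a minimal sketch (any standard root-finder would do equally well).

import math

def f(t):
    """Zero of t - sin(t) - 2*pi/3 gives the central angle theta."""
    return t - math.sin(t) - 2 * math.pi / 3

lo, hi = 0.0, math.pi  # f is increasing here, with a sign change
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        hi = mid
    else:
        lo = mid

theta = (lo + hi) / 2
print(theta)                    # ~2.605 radians
print(2 * math.cos(theta / 2))  # ~0.529864, distance between centers / radius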
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "\\sqrt2"
},
{
"math_id": 3,
"text": "\\theta\\approx 2.605"
},
{
"math_id": 4,
"text": "\\theta-\\sin\\theta=\\frac{2\\pi}{3},"
},
{
"math_id": 5,
"text": "2\\cos\\tfrac\\theta2\\approx0.529864"
}
] |
https://en.wikipedia.org/wiki?curid=62047
|
62047186
|
Kotzig's theorem
|
In graph theory and polyhedral combinatorics, areas of mathematics, Kotzig's theorem is the statement that every polyhedral graph has an edge whose two endpoints have total degree at most 13. An extreme case is the triakis icosahedron, where no edge has smaller total degree. The result is named after Anton Kotzig, who published it in 1955 in the dual form that every convex polyhedron has two adjacent faces with a total of at most 13 sides. It was named and popularized in the west in the 1970s by Branko Grünbaum.
More generally, every planar graph of minimum degree at least three either has an edge of total degree at most 12, or at least 60 edges that (like the edges in the triakis icosahedron) connect vertices of degrees 3 and 10.
If all triangular faces of a polyhedron are vertex-disjoint, there exists an edge with smaller total degree, at most eight.
Generalizations of the theorem are also known for graph embeddings onto surfaces with higher genus.
The theorem cannot be generalized to all planar graphs, as the complete bipartite graphs formula_0 and formula_1 have edges with unbounded total degree. However, for planar graphs that may have vertices of degree less than three, variants of the theorem have been proven, showing that either there is an edge of bounded total degree or some other special kind of subgraph.
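As a small illustration of the quantity the theorem bounds, the sketch below computes the minimum total degree over the edges of a graph given as an adjacency dictionary; the tetrahedron used here is an arbitrary example of a polyhedral graph.

# Tetrahedron (K4): every vertex has degree 3.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
edges = {(u, v) for u in adj for v in adj[u] if u < v}
min_total = min(len(adj[u]) + len(adj[v]) for u, v in edges)
print(min_total)  # 6, which is at most 13 as Kotzig's theorem guarantees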
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K_{1,n-1}"
},
{
"math_id": 1,
"text": "K_{2,n-2}"
}
] |
https://en.wikipedia.org/wiki?curid=62047186
|
62047700
|
Song of Songs 6
|
Sixth chapter of the Song of Songs
Song of Songs 6 (abbreviated as Song 6) is the sixth chapter of the Song of Songs in the Hebrew Bible or the Old Testament of the Christian Bible. This book is one of the Five Megillot, a collection of short books, together with Ruth, Lamentations, Ecclesiastes and Esther, within the Ketuvim, the third and the last part of the Hebrew Bible. Jewish tradition views Solomon as the author of this book (although this is now largely disputed), and this attribution influences the acceptance of this book as a canonical text. This chapter contains a dialogue between the daughters of Jerusalem and the woman about the man, followed by the man's descriptive poem of the woman, ending with a collective call to the woman to return.
Text.
The original text is written in Hebrew language. This chapter is divided into 13 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Codex Leningradensis (1008). Some fragments containing parts of this chapter were found among the Dead Sea Scrolls: 4Q106 (4QCanta; 30 BCE-30 CE; extant verses 11(?)-12).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Structure.
The Modern English Version (MEV) identifies the speakers in this chapter as:
Chorus: Inquiry for the male (6:1).
Continuing from chapter 5, the daughters of Jerusalem agree to look for the man.
"Where has your beloved gone,"
"O fairest among women?"
"Where has your beloved turned aside,"
"that we may seek him with you?"
Verse 1.
The words in this verse parallel those in .
Female: Reunites with her lover (6:2-3).
This part contains the woman's affirmation of her love, when she finds him enjoying his garden.
"My beloved is gone down into his garden, to the beds of spices, to feed in the gardens, and to gather lilies."
Verse 2.
This could be related to the passage where Solomon says, "I planted me vineyards; I made me gardens and parks, and I planted trees in them of all kinds of fruit; I made me pools of water, to water therefrom the forest where trees were reared." Franz Delitzsch suggests that she locates him in the garden because this is where he is inclined to spend his time, "where he delights most to tarry".
"I am my beloved's, and my beloved is mine: he feedeth among the lilies."
Verse 3.
In reversed order compared to . He feeds his flock among the lilies: reference to the flock is added in the New King James Version and other texts.
Male: Second descriptive poem for the female (6:4-10).
This descriptive poem by the man still belongs to a long section concerning the desire and love in the country which continues until 8:4, and partly parallel to the one in chapter 4. The man's "waṣf" and the other ones (; ; 7:1-9) theologically demonstrate the heart of the Song that values the body as not evil but good even worthy of praise, and respects the body with an appreciative focus (rather than lurid). Hess notes that this reflects 'the fundamental value of God's creation as good and the human body as a key part of that creation, whether at the beginning () or redeemed in the resurrection (, )'.
"You are beautiful as Tirzah, my love,"
"comely as Jerusalem,"
"awesome as an army with banners!"
Female: Lingering in the groves (6:11-12).
The woman's voice in this part contains ambiguity in the meaning of some words, which poses difficulty in assigning it to either of the main speakers (the NIV assigns this part to the man).
"I went down into the garden of nuts to see the fruits of the valley, and to see whether the vine flourished and the pomegranates budded."
"Or ever I was aware, my soul made me like the chariots of Amminadib."
Chorus: Call to return (6:13).
This verse does not indicate clearly who the speaker is, but there must be either multiple persons concerned in it or a quotation, because 'there is an evident interchange of question and answer'.
Verse 13.
[Friends of the Woman]
"Return, return, O Shulammite!"
"Return, return, that we may look upon you."
[The Man]
"Why should you look upon the Shulammite,"
"as upon a dance before two armies?"
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=62047700
|
6205
|
Chaitin's constant
|
Halting probability of a random computer program
In the computer science subfield of algorithmic information theory, a Chaitin constant (Chaitin omega number) or halting probability is a real number that, informally speaking, represents the probability that a randomly constructed program will halt. These numbers are formed from a construction due to Gregory Chaitin.
Although there are infinitely many halting probabilities, one for each (universal, see below) method of encoding programs, it is common to use the letter Ω to refer to them as if there were only one. Because Ω depends on the program encoding used, it is sometimes called Chaitin's construction when not referring to any specific encoding.
Each halting probability is a normal and transcendental real number that is not computable, which means that there is no algorithm to compute its digits. Each halting probability is Martin-Löf random, meaning there is not even any algorithm which can reliably guess its digits.
Background.
The definition of a halting probability relies on the existence of a prefix-free universal computable function. Such a function, intuitively, represents a programming language with the property that no valid program can be obtained as a proper extension of another valid program.
Suppose that "F" is a partial function that takes one argument, a finite binary string, and possibly returns a single binary string as output. The function "F" is called computable if there is a Turing machine that computes it, in the sense that for any finite binary strings "x" and "y," "F(x) = y" if and only if the Turing machine halts with "y" on its tape when given the input "x".
The function "F" is called universal if the following property holds: for every computable function "f" of a single variable there is a string "w" such that for all "x", "F"("w" "x") = "f"("x"); here "w" "x" represents the concatenation of the two strings "w" and "x". This means that "F" can be used to simulate any computable function of one variable. Informally, "w" represents a "script" for the computable function "f", and "F" represents an "interpreter" that parses the script as a prefix of its input and then executes it on the remainder of input.
The domain of "F" is the set of all inputs "p" on which it is defined. For "F" that are universal, such a "p" can generally be seen both as the concatenation of a program part and a data part, and as a single program for the function "F".
The function "F" is called prefix-free if there are no two elements "p", "p′" in its domain such that "p′" is a proper extension of "p". This can be rephrased as: the domain of "F" is a prefix-free code (instantaneous code) on the set of finite binary strings. A simple way to enforce prefix-free-ness is to use machines whose means of input is a binary stream from which bits can be read one at a time. There is no end-of-stream marker; the end of input is determined by when the universal machine decides to stop reading more bits, and the remaining bits are not considered part of the accepted string. Here, the difference between the two notions of program mentioned in the last paragraph becomes clear; one is easily recognized by some grammar, while the other requires arbitrary computation to recognize.
The domain of any universal computable function is a computably enumerable set but never a computable set. The domain is always Turing equivalent to the halting problem.
Definition.
Let "P"F be the domain of a prefix-free universal computable function "F". The constant ΩF is then defined as
formula_0,
where formula_1 denotes the length of a string "p". This is an infinite sum which has one summand for every "p" in the domain of "F". The requirement that the domain be prefix-free, together with Kraft's inequality, ensures that this sum converges to a real number between 0 and 1. If "F" is clear from context then ΩF may be denoted simply Ω, although different prefix-free universal computable functions lead to different values of Ω.
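To make the definition concrete, the following sketch dovetails program simulations to produce ever-better lower bounds on ΩF. It is a minimal sketch, assuming a hypothetical step-bounded simulator halts_within(p, t) for some fixed prefix-free universal machine "F" (returning True iff "F" halts on the bit-string program p within t steps); the simulator is assumed, not implemented, and programs beyond a cutoff length are ignored in this toy version.

```python
from itertools import count, product

# A minimal sketch, assuming a hypothetical simulator halts_within(p, t)
# for some fixed prefix-free universal machine F.
def omega_lower_bounds(halts_within, max_len=20):
    """Yield an increasing sequence of lower bounds on Omega_F."""
    halted = set()
    omega = 0.0
    for t in count(1):                      # dovetail over time bounds
        for n in range(1, min(t, max_len) + 1):
            for bits in product("01", repeat=n):
                p = "".join(bits)
                if p not in halted and halts_within(p, t):
                    halted.add(p)
                    # Prefix-freeness of the domain (Kraft's inequality)
                    # guarantees these contributions sum to at most 1.
                    omega += 2.0 ** -len(p)
                    yield omega
```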
Relationship to the halting problem.
Knowing the first "N" bits of Ω, one could calculate the halting problem for all programs of a size up to "N". Let the program "p" for which the halting problem is to be solved be "N" bits long. In dovetailing fashion, all programs of all lengths are run, until enough have halted to jointly contribute enough probability to match these first "N" bits. If the program "p" hasn't halted yet, then it never will, since its contribution to the halting probability would affect the first "N" bits. Thus, the halting problem would be solved for "p".
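The procedure just described can be sketched as follows, reusing the assumed halts_within simulator from the sketch above; exact rational arithmetic avoids rounding issues. The loop terminates only if omega_bits really are the first "N" bits of Ω.

```python
from fractions import Fraction
from itertools import count, product

# Sketch of the argument above: decide halting for a program p of length
# at most N, given the true first N bits of Omega (omega_bits) and the
# assumed step-bounded simulator halts_within.
def decide_halting(p, omega_bits, halts_within):
    N = len(omega_bits)
    assert len(p) <= N
    target = sum(Fraction(int(b), 2 ** (i + 1))
                 for i, b in enumerate(omega_bits))
    halted, lower = set(), Fraction(0)
    for t in count(1):                      # dovetail all programs
        for n in range(1, t + 1):
            for bits in product("01", repeat=n):
                q = "".join(bits)
                if q not in halted and halts_within(q, t):
                    halted.add(q)
                    lower += Fraction(1, 2 ** len(q))
        if lower >= target:
            # Any further halting program of length <= N would add at
            # least 2^-N to Omega, contradicting its known first N bits.
            return p in halted
```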
Because many outstanding problems in number theory, such as Goldbach's conjecture, are equivalent to solving the halting problem for special programs (which would basically search for counter-examples and halt if one is found), knowing enough bits of Chaitin's constant would also imply knowing the answer to these problems. But as the halting problem is not generally solvable, and therefore no more than the first few bits of Chaitin's constant (for any concise program encoding) can actually be calculated, this merely reduces hard problems to impossible ones, much like trying to build an oracle machine for the halting problem would.
Interpretation as a probability.
The Cantor space is the collection of all infinite sequences of 0s and 1s. A halting probability can be interpreted as the measure of a certain subset of Cantor space under the usual probability measure on Cantor space. It is from this interpretation that halting probabilities take their name.
The probability measure on Cantor space, sometimes called the fair-coin measure, is defined so that for any binary string "x" the set of sequences that begin with "x" has measure 2^−|"x"|. This implies that for each natural number "n", the set of sequences "f" in Cantor space such that "f"("n") = 1 has measure 1/2, and the set of sequences whose "n"th element is 0 also has measure 1/2.
Let "F" be a prefix-free universal computable function. The domain "P" of "F" consists of an infinite set of binary strings
formula_2.
Each of these strings "p""i" determines a subset "S""i" of Cantor space; the set "S""i" contains all sequences in Cantor space that begin with "p""i". These sets are disjoint because "P" is a prefix-free set. The sum
formula_3
represents the measure of the set
formula_4.
In this way, Ω"F" represents the probability that a randomly selected infinite sequence of 0s and 1s begins with a bit string (of some finite length) that is in the domain of "F". It is for this reason that Ω"F" is called a halting probability.
Properties.
Each Chaitin constant Ω has the following properties: it is algorithmically random; it is a normal and transcendental number; it is not computable; the set of rational numbers "q" such that "q" < Ω is computably enumerable (Ω is a "left-c.e." real); and it is Turing equivalent to the halting problem, placing it at level formula_5 of the arithmetical hierarchy.
Not every set that is Turing equivalent to the halting problem is a halting probability. A finer equivalence relation, Solovay equivalence, can be used to characterize the halting probabilities among the left-c.e. reals. One can show that a real number in [0,1] is a Chaitin constant (i.e. the halting probability of some prefix-free universal computable function) if and only if it is left-c.e. and algorithmically random. Ω is among the few definable algorithmically random numbers and is the best-known algorithmically random number, but it is not at all typical of all algorithmically random numbers.
Uncomputability.
A real number is called computable if there is an algorithm which, given "n", returns the first "n" digits of the number. This is equivalent to the existence of a program that enumerates the digits of the real number.
No halting probability is computable. The proof of this fact relies on an algorithm which, given the first "n" digits of Ω, solves Turing's halting problem for programs of length up to "n". Since the halting problem is undecidable, Ω cannot be computed.
The algorithm proceeds as follows. Given the first "n" digits of Ω and a "k" ≤ "n", the algorithm enumerates the domain of "F" until enough elements of the domain have been found so that the probability they represent is within 2^−("k"+1) of Ω. After this point, no additional program of length "k" can be in the domain, because each of these would add 2^−"k" to the measure, which is impossible. Thus the set of strings of length "k" in the domain is exactly the set of such strings already enumerated.
Algorithmic randomness.
A real number is random if the binary sequence representing the real number is an algorithmically random sequence. Calude, Hertling, Khoussainov, and Wang showed that a recursively enumerable real number is an algorithmically random sequence if and only if it is a Chaitin's Ω number.
Incompleteness theorem for halting probabilities.
For each specific consistent effectively represented axiomatic system for the natural numbers, such as Peano arithmetic, there exists a constant "N" such that no bit of Ω after the "N"th can be proven to be 1 or 0 within that system. The constant "N" depends on how the formal system is effectively represented, and thus does not directly reflect the complexity of the axiomatic system. This incompleteness result is similar to Gödel's incompleteness theorem in that it shows that no consistent formal theory for arithmetic can be complete.
Super Omega.
As mentioned above, the first n bits of Gregory Chaitin's constant Ω are random or incompressible in the sense that we cannot compute them by a halting algorithm with fewer than n-O(1) bits. However, consider the short but never halting algorithm which systematically lists and runs all possible programs; whenever one of them halts, its probability is added to the output (initialized to zero). After finite time the first n bits of the output no longer change (it does not matter that this time itself is not computable by a halting program). So there is a short non-halting algorithm whose output converges (after finite time) onto the first n bits of Ω. In other words, the enumerable first n bits of Ω are highly compressible in the sense that they are limit-computable by a very short algorithm; they are not random with respect to the set of enumerating algorithms. Jürgen Schmidhuber (2000) constructed a limit-computable "Super Ω" which in a sense is much more random than the original limit-computable Ω, as one cannot significantly compress the Super Ω by any enumerating non-halting algorithm.
For an alternative "Super Ω", the universality probability of a prefix-free Universal Turing Machine (UTM) – namely, the probability that it remains universal even when every input of it (as a binary string) is prefixed by a random binary string – can be seen as the non-halting probability of a machine with oracle the third iteration of the halting problem (i.e., formula_6using Turing Jump notation).
|
[
{
"math_id": 0,
"text": "\\Omega_F = \\sum_{p \\in P_F} 2^{-|p|}"
},
{
"math_id": 1,
"text": "\\left|p\\right|"
},
{
"math_id": 2,
"text": "P = \\{p_1,p_2,\\ldots\\}"
},
{
"math_id": 3,
"text": "\\sum_{p \\in P} 2^{-|p|}"
},
{
"math_id": 4,
"text": "\\bigcup_{i \\in \\mathbb{N}} S_i"
},
{
"math_id": 5,
"text": "\\Delta^0_2"
},
{
"math_id": 6,
"text": "O^{(3)}"
}
] |
https://en.wikipedia.org/wiki?curid=6205
|
62052484
|
Stahl's theorem
|
In matrix analysis, Stahl's theorem is a theorem proved in 2011 by Herbert Stahl concerning Laplace transforms for special matrix functions. It originated in 1975 as the Bessis-Moussa-Villani (BMV) conjecture by Daniel Bessis, Pierre Moussa, and Marcel Villani. In 2004 Elliott H. Lieb and Robert Seiringer gave two important reformulations of the BMV conjecture. In 2015, Alexandre Eremenko gave a simplified proof of Stahl's theorem.
In 2023, Otte Heinävaara proved a structure theorem for Hermitian matrices introducing tracial joint spectral measures that implies Stahl's theorem as a corollary.
Statement of the theorem.
Let formula_0 denote the trace of a matrix. If formula_1 and formula_2 are formula_3 Hermitian matrices and formula_2 is positive semidefinite, define formula_4, for all real formula_5. Then formula_6 can be represented as the Laplace transform of a non-negative Borel measure formula_7 on formula_8. In other words, for all real formula_5,
formula_6(t) = formula_9,
for some non-negative measure formula_7 depending upon formula_1 and formula_2.
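A numerical sanity check (not a proof) can illustrate the statement: the Laplace transform of a non-negative measure is completely monotone, so the forward finite differences of f(t) = tr(exp(A − tB)) should alternate in sign. The sketch below, using numpy and scipy with randomly generated matrices, checks this up to rounding error.

```python
import numpy as np
from scipy.linalg import expm

# Numerical illustration of Stahl's theorem: complete monotonicity of
# f(t) = tr(exp(A - tB)) shows up as alternating signs of its
# k-th forward differences, (-1)^k * diff^k(f) >= 0.
rng = np.random.default_rng(0)

def random_hermitian(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

n = 4
A = random_hermitian(n)
C = random_hermitian(n)
B = C @ C.conj().T                       # Hermitian positive semidefinite

ts = np.linspace(0.1, 5.0, 200)
f = np.array([np.trace(expm(A - t * B)).real for t in ts])

d = f.copy()
for k in range(1, 6):
    d = np.diff(d)                       # k-th forward difference
    ok = np.all((-1) ** k * d >= -1e-9)  # sign (-1)^k, up to rounding
    print(f"order {k}: alternating sign holds: {ok}")
```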
|
[
{
"math_id": 0,
"text": "\\operatorname{tr}"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "n\\times n"
},
{
"math_id": 4,
"text": "\\mathbf{f}(t) = \\operatorname{tr}(\\exp(A-tB))"
},
{
"math_id": 5,
"text": "t\\geq 0"
},
{
"math_id": 6,
"text": "\\mathbf{f}"
},
{
"math_id": 7,
"text": "\\mu"
},
{
"math_id": 8,
"text": "[0,\\infty)"
},
{
"math_id": 9,
"text": "\\int_{[0,\\infty)} e^{-ts}\\, d\\mu(s)"
}
] |
https://en.wikipedia.org/wiki?curid=62052484
|
6206
|
Computable number
|
Real number that can be computed within arbitrary precision
In mathematics, computable numbers are the real numbers that can be computed to within any desired precision by a finite, terminating algorithm. They are also known as the recursive numbers, the effective numbers, the computable reals, or the recursive reals. The concept of a computable real number was introduced by Émile Borel in 1912, using the intuitive notion of computability available at the time.
Equivalent definitions can be given using μ-recursive functions, Turing machines, or λ-calculus as the formal representation of algorithms. The computable numbers form a real closed field and can be used in the place of real numbers for many, but not all, mathematical purposes.
Informal definition using a Turing machine as example.
In the following, Marvin Minsky defines the numbers to be computed in a manner similar to those defined by Alan Turing in 1936; i.e., as "sequences of digits interpreted as decimal fractions" between 0 and 1:
<templatestyles src="Template:Blockquote/styles.css" />A computable number [is] one for which there is a Turing machine which, given "n" on its initial tape, terminates with the "n"th digit of that number [encoded on its tape].
The key notions in the definition are (1) that some "n" is specified at the start, and (2) that for any "n" the computation takes only a finite number of steps, after which the machine produces the desired output and terminates.
An alternate form of (2) – the machine successively prints all "n" of the digits on its tape, halting after printing the "n"th – emphasizes Minsky's observation: (3) That by use of a Turing machine, a "finite" definition – in the form of the machine's state table – is being used to define what is a potentially "infinite" string of decimal digits.
This is, however, not the modern definition, which requires only that the result be accurate to within any given precision. The informal definition above is subject to a rounding problem called the table-maker's dilemma, whereas the modern definition is not.
Formal definition.
A real number "a" is computable if it can be approximated by some computable function formula_0 in the following manner: given any positive integer "n", the function produces an integer "f"("n") such that:
formula_1
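As a concrete witness of this definition, the sketch below shows that the square root of 2 is computable: isqrt computes the exact integer square root, so f(n) is the integer part of n times the square root of 2, which satisfies the required inequality for every positive "n".

```python
from math import isqrt

# f(n) = floor(n * sqrt(2)), so (f(n)-1)/n <= sqrt(2) <= (f(n)+1)/n.
def f(n: int) -> int:
    return isqrt(2 * n * n)

for n in (10, 1000, 100000):
    print(f(n) / n)   # 1.4, 1.414, 1.41421 -> approximations of sqrt(2)
```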
There are two similar definitions that are equivalent: there is a computable function which, given any positive rational error bound formula_2, produces a rational number "r" such that formula_3; and there is a computable sequence of rational numbers formula_4 converging to formula_5 such that formula_6 for each "i".
There is another equivalent definition of computable numbers via computable Dedekind cuts. A computable Dedekind cut is a computable function formula_7 which when provided with a rational number formula_8 as input returns formula_9 or formula_10, satisfying the following conditions:
formula_11
formula_12
formula_13
formula_14
An example is given by a program "D" that defines the cube root of 3. Assuming formula_15, this is defined by:
formula_16
formula_17
A real number is computable if and only if there is a computable Dedekind cut "D" corresponding to it. The function "D" is unique for each computable number (although of course two different programs may provide the same function).
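A minimal sketch of this cut in code, using exact rational arithmetic (for a Fraction the denominator is always positive, matching the assumption formula_15); the bisection loop afterwards shows how the cut pins the number down to arbitrary precision.

```python
from fractions import Fraction

# The computable Dedekind cut for the cube root of 3 described above:
# D(p/q) is true iff p^3 < 3*q^3.
def D(r: Fraction) -> bool:
    return r.numerator ** 3 < 3 * r.denominator ** 3

# Bisection recovers arbitrarily good rational bounds on 3**(1/3).
lo, hi = Fraction(1), Fraction(2)
for _ in range(50):
    mid = (lo + hi) / 2
    if D(mid):
        lo = mid
    else:
        hi = mid
print(float(lo))   # 1.4422495703074083...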
A complex number is called computable if its real and imaginary parts are computable.
Properties.
Not computably enumerable.
Assigning a Gödel number to each Turing machine definition produces a subset formula_18 of the natural numbers corresponding to the computable numbers and identifies a surjection from formula_18 to the computable numbers. There are only countably many Turing machines, showing that the computable numbers are subcountable. The set formula_18 of these Gödel numbers, however, is not computably enumerable (and consequently, neither are subsets of formula_18 that are defined in terms of it). This is because there is no algorithm to determine which Gödel numbers correspond to Turing machines that produce computable reals. In order to produce a computable real, a Turing machine must compute a total function, but the corresponding decision problem is in Turing degree 0′′. Consequently, there is no surjective computable function from the natural numbers to the set formula_18 of machines representing computable reals, and Cantor's diagonal argument cannot be used constructively to demonstrate uncountably many of them.
While the set of real numbers is uncountable, the set of computable numbers is classically countable and thus almost all real numbers are not computable. Here, for any given computable number formula_19 the well ordering principle provides that there is a minimal element in formula_18 which corresponds to formula_20, and therefore there exists a subset consisting of the minimal elements, on which the map is a bijection. The inverse of this bijection is an injection into the natural numbers of the computable numbers, proving that they are countable. But, again, this subset is not computable, even though the computable reals are themselves ordered.
Properties as a field.
The arithmetical operations on computable numbers are themselves computable in the sense that whenever real numbers "a" and "b" are computable then the following real numbers are also computable: "a + b", "a - b", "ab", and "a/b" if "b" is nonzero.
These operations are actually "uniformly computable"; for example, there is a Turing machine which on input ("A","B",formula_21) produces output "r", where "A" is the description of a Turing machine approximating "a", "B" is the description of a Turing machine approximating "b", and "r" is an formula_21 approximation of "a"+"b".
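A sketch of this uniformity for the case of addition, in the formula_21-approximation sense: approx_a and approx_b stand in for the machines "A" and "B", each mapping a positive rational error bound to a rational approximation, and the two example approximators below are illustrative.

```python
from fractions import Fraction
from math import isqrt

# Uniform addition: approximate each summand to within eps/2, so the
# sum of the approximations is within eps of a + b.
def add(approx_a, approx_b):
    def approx_sum(eps: Fraction) -> Fraction:
        return approx_a(eps / 2) + approx_b(eps / 2)
    return approx_sum

def sqrt2(eps: Fraction) -> Fraction:          # illustrative approximator
    n = eps.denominator // eps.numerator + 1   # choose n >= 1/eps
    return Fraction(isqrt(2 * n * n), n)       # error at most 1/n <= eps

third = lambda eps: Fraction(1, 3)             # exact, so any eps works

s = add(sqrt2, third)
print(float(s(Fraction(1, 10**6))))            # ~1.747547 = sqrt(2) + 1/3
```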
The fact that computable real numbers form a field was first proved by Henry Gordon Rice in 1954.
Computable reals however do not form a computable field, because the definition of a computable field requires effective equality.
Non-computability of the ordering.
The order relation on the computable numbers is not computable. Let "A" be the description of a Turing machine approximating the number formula_5. Then there is no Turing machine which on input "A" outputs "YES" if formula_22 and "NO" if formula_23 To see why, suppose the machine described by "A" keeps outputting 0 as formula_21 approximations. It is not clear how long to wait before deciding that the machine will "never" output an approximation which forces "a" to be positive. Thus the machine will eventually have to guess that the number will equal 0, in order to produce an output; the sequence may later become different from 0. This idea can be used to show that the machine is incorrect on some sequences if it computes a total function. A similar problem occurs when the computable reals are represented as Dedekind cuts. The same holds for the equality relation: the equality test is not computable.
While the full order relation is not computable, the restriction of it to pairs of unequal numbers is computable. That is, there is a program that takes as input two Turing machines "A" and "B" approximating numbers formula_24 and formula_25, where formula_26, and outputs whether formula_27 or formula_28 It is sufficient to use formula_21-approximations where formula_29 so by taking increasingly small formula_21 (approaching 0), one eventually can decide whether formula_27 or formula_28
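A sketch of this comparison procedure, using the same assumed formula_21-approximators as in the previous sketch; the loop halts only because the two numbers are assumed unequal, which mirrors the non-computability of the full order and equality relations.

```python
from fractions import Fraction

# Compare two *unequal* computable reals by shrinking eps until their
# approximation intervals separate. If a == b, this never terminates.
def less_than(approx_a, approx_b) -> bool:
    eps = Fraction(1, 2)
    while True:
        ra, rb = approx_a(eps), approx_b(eps)
        if abs(ra - rb) > 2 * eps:   # the two intervals are disjoint
            return ra < rb
        eps /= 2
```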
Other properties.
The computable real numbers do not share all the properties of the real numbers used in analysis. For example, the least upper bound of a bounded increasing computable sequence of computable real numbers need not be a computable real number. A sequence with this property is known as a Specker sequence, as the first construction is due to Ernst Specker in 1949. Despite the existence of counterexamples such as these, parts of calculus and real analysis can be developed in the field of computable numbers, leading to the study of computable analysis.
Every computable number is arithmetically definable, but not vice versa. There are many arithmetically definable, noncomputable real numbers, including: any real number that encodes the solution of the halting problem (according to some fixed encoding of programs) in its binary expansion; and formula_30, Chaitin's constant.
Both of these examples in fact define an infinite set of definable, uncomputable numbers, one for each universal Turing machine.
A real number is computable if and only if the set of natural numbers it represents (when written in binary and viewed as a characteristic function) is computable.
The set of computable real numbers (as well as every countable, densely ordered subset of computable reals without ends) is order-isomorphic to the set of rational numbers.
Digit strings and the Cantor and Baire spaces.
Turing's original paper defined computable numbers as follows:
<templatestyles src="Template:Blockquote/styles.css" />A real number is computable if its digit sequence can be produced by some algorithm or Turing machine. The algorithm takes an integer formula_31 as input and produces the formula_32-th digit of the real number's decimal expansion as output.
Turing was aware that this definition is equivalent to the formula_21-approximation definition given above. The argument proceeds as follows: if a number is computable in the Turing sense, then it is also computable in the formula_21 sense: if formula_33, then the first "n" digits of the decimal expansion for "a" provide an formula_21 approximation of "a". For the converse, we pick an formula_21 computable real number "a" and generate increasingly precise approximations until the "n"th digit after the decimal point is certain. This always generates a decimal expansion equal to "a" but it may improperly end in an infinite sequence of 9's in which case it must have a finite (and thus computable) proper decimal expansion.
Unless certain topological properties of the real numbers are relevant, it is often more convenient to deal with elements of formula_34 (total 0,1 valued functions) instead of real numbers in formula_35. The members of formula_34 can be identified with infinite binary expansions, but since the expansions formula_36 and formula_37 denote the same real number, the interval formula_35 can only be bijectively (and homeomorphically under the subset topology) identified with the subset of formula_34 not ending in all 1's.
Note that this property of decimal expansions means that it is impossible to effectively identify the computable real numbers defined in terms of a decimal expansion and those defined in the formula_21 approximation sense. Hirst has shown that there is no algorithm which takes as input the description of a Turing machine which produces formula_21 approximations for the computable number "a", and produces as output a Turing machine which enumerates the digits of "a" in the sense of Turing's definition. Similarly, it means that the arithmetic operations on the computable reals are not effective on their decimal representations as when adding decimal numbers. In order to produce one digit, it may be necessary to look arbitrarily far to the right to determine if there is a carry to the current location. This lack of uniformity is one reason why the contemporary definition of computable numbers uses formula_21 approximations rather than decimal expansions.
However, from a computability theoretic or measure theoretic perspective, the two structures formula_34 and formula_35 are essentially identical. Thus, computability theorists often refer to members of formula_34 as reals. While formula_34 is totally disconnected, for questions about formula_38 classes or randomness it is easier to work in formula_34.
Elements of formula_39 are sometimes called reals as well and though containing a homeomorphic image of formula_40, formula_39 isn't even locally compact (in addition to being totally disconnected). This leads to genuine differences in the computational properties. For instance the formula_41 satisfying formula_42, with formula_43 quantifier free, must be computable while the unique formula_44 satisfying a universal formula may have an arbitrarily high position in the hyperarithmetic hierarchy.
Use in place of the reals.
The computable numbers include the specific real numbers which appear in practice, including all real algebraic numbers, as well as "e", "π", and many other transcendental numbers. Though the computable reals exhaust those reals we can calculate or approximate, the assumption that all reals are computable leads to substantially different conclusions about the real numbers. The question naturally arises of whether it is possible to dispose of the full set of reals and use computable numbers for all of mathematics. This idea is appealing from a constructivist point of view, and has been pursued by the Russian school of constructive mathematics.
To actually develop analysis over computable numbers, some care must be taken. For example, if one uses the classical definition of a sequence, the set of computable numbers is not closed under the basic operation of taking the supremum of a bounded sequence (for example, consider a Specker sequence, see the section above). This difficulty is addressed by considering only sequences which have a computable modulus of convergence. The resulting mathematical theory is called computable analysis.
Implementations of exact arithmetic.
Computer packages representing real numbers as programs computing approximations have been proposed as early as 1985, under the name "exact arithmetic". Modern examples include the CoRN library (Coq), and the RealLib package (C++). A related line of work is based on taking a real RAM program and running it with rational or floating-point numbers of sufficient precision, such as the iRRAM package.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "f:\\mathbb{N}\\to\\mathbb{Z}"
},
{
"math_id": 1,
"text": "{f(n)-1\\over n} \\leq a \\leq {f(n)+1\\over n}."
},
{
"math_id": 2,
"text": "\\varepsilon"
},
{
"math_id": 3,
"text": "|r - a| \\leq \\varepsilon."
},
{
"math_id": 4,
"text": "q_i"
},
{
"math_id": 5,
"text": "a"
},
{
"math_id": 6,
"text": "|q_i - q_{i+1}| < 2^{-i}\\,"
},
{
"math_id": 7,
"text": "D\\;"
},
{
"math_id": 8,
"text": "r"
},
{
"math_id": 9,
"text": "D(r)=\\mathrm{true}\\;"
},
{
"math_id": 10,
"text": "D(r)=\\mathrm{false}\\;"
},
{
"math_id": 11,
"text": "\\exists r D(r)=\\mathrm{true}\\;"
},
{
"math_id": 12,
"text": "\\exists r D(r)=\\mathrm{false}\\;"
},
{
"math_id": 13,
"text": "(D(r)=\\mathrm{true}) \\wedge (D(s)=\\mathrm{false}) \\Rightarrow r<s\\;"
},
{
"math_id": 14,
"text": "D(r)=\\mathrm{true} \\Rightarrow \\exist s>r, D(s)=\\mathrm{true}.\\;"
},
{
"math_id": 15,
"text": "q>0\\;"
},
{
"math_id": 16,
"text": "p^3<3 q^3 \\Rightarrow D(p/q)=\\mathrm{true}\\;"
},
{
"math_id": 17,
"text": "p^3>3 q^3 \\Rightarrow D(p/q)=\\mathrm{false}.\\;"
},
{
"math_id": 18,
"text": "S"
},
{
"math_id": 19,
"text": "x,"
},
{
"math_id": 20,
"text": "x"
},
{
"math_id": 21,
"text": "\\epsilon"
},
{
"math_id": 22,
"text": "a > 0"
},
{
"math_id": 23,
"text": "a \\le 0."
},
{
"math_id": 24,
"text": " a"
},
{
"math_id": 25,
"text": " b"
},
{
"math_id": 26,
"text": "a \\ne b"
},
{
"math_id": 27,
"text": "a < b"
},
{
"math_id": 28,
"text": "a > b."
},
{
"math_id": 29,
"text": " \\epsilon < |b-a|/2,"
},
{
"math_id": 30,
"text": "\\Omega"
},
{
"math_id": 31,
"text": "n \\ge 1"
},
{
"math_id": 32,
"text": "n"
},
{
"math_id": 33,
"text": "n > \\log_{10} (1/\\epsilon)"
},
{
"math_id": 34,
"text": "2^{\\omega}"
},
{
"math_id": 35,
"text": "[0,1]"
},
{
"math_id": 36,
"text": ".d_1d_2\\ldots d_n0111\\ldots"
},
{
"math_id": 37,
"text": ".d_1d_2\\ldots d_n10"
},
{
"math_id": 38,
"text": "\\Pi^0_1"
},
{
"math_id": 39,
"text": "\\omega^{\\omega}"
},
{
"math_id": 40,
"text": "\\mathbb{R}"
},
{
"math_id": 41,
"text": "x \\in \\mathbb{R}"
},
{
"math_id": 42,
"text": "\\forall(n \\in \\omega)\\phi(x,n)"
},
{
"math_id": 43,
"text": "\\phi(x,n)"
},
{
"math_id": 44,
"text": "x \\in \\omega^{\\omega}"
}
] |
https://en.wikipedia.org/wiki?curid=6206
|
6206635
|
Fineness modulus
|
The Fineness Modulus (FM) is an empirical figure obtained by adding the total percentage of the sample of an aggregate retained on each of a specified series of sieves, and dividing the sum by 100. Sieve sizes are: 150-μm (No. 100), 300-μm (No. 50), 600-μm (No. 30), 1.18-mm (No. 16), 2.36-mm (No. 8), 4.75-mm (No. 4), 9.5-mm (3/8-in.), 19.0-mm (3/4-in.), 37.5-mm (1 1/2-in.), and larger, increasing in the ratio of 2 to 1. The same value of fineness modulus may therefore be obtained from several different particle size distributions. In general, however, a smaller value indicates a finer aggregate. Fine aggregates range from an FM of 2.00 to 4.00, and coarse aggregates smaller than 38.1 mm range from 6.75 to 8.00. Combinations of fine and coarse aggregates have intermediate values.
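In code, the definition reduces to a one-line sum; the cumulative retained percentages below are illustrative values for a fine aggregate, not a real sieve analysis.

```python
# Fineness modulus: sum the cumulative percentages retained on the
# standard sieves and divide by 100. Illustrative data only.
cumulative_retained = {          # sieve size -> cumulative % retained
    "4.75 mm": 2, "2.36 mm": 14, "1.18 mm": 33,
    "600 um": 55, "300 um": 74, "150 um": 92,
}
fm = sum(cumulative_retained.values()) / 100
print(f"Fineness modulus = {fm:.2f}")   # 2.70 -> a medium fine aggregate
```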
Fineness modulus of combined aggregates.
The fineness modulus of a combined aggregate always lies between the fineness moduli of its constituent fine and coarse aggregates. It is given by the formula:
formula_0
where
formula_1 is resultant fineness modulus
formula_2 is fineness modulus of fine aggregate
formula_3 is fineness modulus of coarse aggregate
formula_4 is proportion of fine aggregate in combined aggregate
The ratio X of fine aggregate to coarse aggregate in the combined aggregate can be found by:
formula_5
The proportion of fine aggregate Y, as a percentage, can be calculated by:
formula_6
Substituting the value of X:
formula_7
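A quick numerical sketch of the blending calculation above, with illustrative fineness moduli:

```python
# Proportion of fine aggregate needed to hit a target fineness modulus:
# Y = (F2 - F) / (F2 - F1) * 100. Illustrative values only.
F1, F2 = 2.6, 7.0    # fineness moduli of the fine and coarse aggregates
F = 5.4              # desired fineness modulus of the combined aggregate

Y = (F2 - F) / (F2 - F1) * 100
print(f"fine aggregate: {Y:.1f}%, coarse aggregate: {100 - Y:.1f}%")
# fine aggregate: 36.4%, coarse aggregate: 63.6%
```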
References.
<templatestyles src="Reflist/styles.css" />
Fineness modulus
Fineness modulus and its calculation
ASTM C136, https://compass.astm.org/EDIT/html_annot.cgi?C136
|
[
{
"math_id": 0,
"text": "\nF=(F_1 \\times Y + F_2 \\times (1-Y))\n"
},
{
"math_id": 1,
"text": "F"
},
{
"math_id": 2,
"text": "F_1"
},
{
"math_id": 3,
"text": "F_2"
},
{
"math_id": 4,
"text": "Y"
},
{
"math_id": 5,
"text": "\nX = \\frac{F_2-F}{F-F_1}\n"
},
{
"math_id": 6,
"text": "\nY = \\frac{X}{1+X} \\times 100\n"
},
{
"math_id": 7,
"text": "\nY = \\frac{F_2-F}{F_2-F_1}\\times 100\n"
}
] |
https://en.wikipedia.org/wiki?curid=6206635
|
62068372
|
NewHope
|
Cryptographic protocol designed to resist quantum computer attacks
In post-quantum cryptography, NewHope is a key-agreement protocol by Erdem Alkim, Léo Ducas, Thomas Pöppelmann, and Peter Schwabe that is designed to resist quantum computer attacks.
NewHope is based on a mathematical problem, ring learning with errors (RLWE), that is believed to be difficult to solve. NewHope has been selected as a round-two contestant in the NIST Post-Quantum Cryptography Standardization competition, and was used in Google's CECPQ1 experiment as a quantum-secure algorithm, alongside the classical X25519 algorithm.
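The flavor of RLWE-based key agreement can be conveyed by a toy sketch. The ring dimension, noise sampler, and final step below are simplifications for illustration only; actual NewHope uses n = 512 or 1024 with q = 12289, binomially distributed noise, and an explicit reconciliation (or encoding) step so that both parties obtain identical key bits.

```python
import random

# Toy illustration of the RLWE key-agreement idea; NOT the NewHope spec.
n, q = 16, 12289   # toy ring dimension; NewHope uses n = 512 or 1024

def mul(f, g):
    """Negacyclic convolution: multiply in Z_q[x]/(x^n + 1)."""
    out = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                out[k] = (out[k] + f[i] * g[j]) % q
            else:
                out[k - n] = (out[k - n] - f[i] * g[j]) % q
    return out

def add(f, g):
    return [(x + y) % q for x, y in zip(f, g)]

def small():
    """Small noise polynomial (toy stand-in for the binomial sampler)."""
    return [random.randint(-2, 2) % q for _ in range(n)]

a = [random.randrange(q) for _ in range(n)]   # public, freshly generated

s, e = small(), small()          # Alice's secret and error
b = add(mul(a, s), e)            # Alice sends b = a*s + e

t, e2 = small(), small()         # Bob's secret and error
u = add(mul(a, t), e2)           # Bob sends u = a*t + e'

v_bob   = mul(b, t)              # Bob:   a*s*t + e*t
v_alice = mul(u, s)              # Alice: a*s*t + e'*s

# The two values agree up to small noise; real NewHope reconciles this
# noise so both sides extract identical key bits.
diff = max(min(x, q - x) for x in
           ((vb - va) % q for vb, va in zip(v_bob, v_alice)))
print("max coefficient difference:", diff)   # small compared to q
```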
Design choices.
The designers of NewHope made several choices in developing the algorithm:
Binomial sampling: although high-quality discrete Gaussian sampling is important in lattice-based signature schemes, it is not essential for key agreement, so the error polynomials are drawn from a simpler centered binomial distribution.
Error reconciliation: NewHope differs from its predecessors in its method of error reconciliation, by which the two parties derive identical key bits from their approximately equal shared values.
Fresh public parameter: the public polynomial formula_0 is derived from a random seed for every key exchange, to preclude backdoors and all-for-the-price-of-one precomputation attacks.
Security levels: parameter sets with polynomial dimensions of 512 and 1024 are specified, targeting different levels of post-quantum security.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "a "
}
] |
https://en.wikipedia.org/wiki?curid=62068372
|
62069340
|
De Brouckere mean diameter
|
The De Brouckere mean diameter is the mean of a particle size distribution weighted by volume (also called the volume-weighted mean diameter, volume moment mean diameter, or volume-weighted mean size). It is the mean diameter directly obtained in particle size measurements where the measured signal is proportional to the volume of the particles. The most prominent examples are laser diffraction, acoustic spectroscopy, and the Coulter counter.
The De Brouckere mean is defined in terms of the moment-ratio system as,
formula_0
where "n""i" is the frequency of occurrence of particles in size class "i", having a mean diameter "D""i". Usually, with logarithmically spaced classes, the geometric mean size of each class is taken.
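As a worked illustration, the sketch below evaluates D[4,3] for a hypothetical four-class distribution (the data is invented for illustration); note how the two largest classes dominate the result even though they contain the fewest particles.

```python
import numpy as np

# De Brouckere mean: D[4,3] = sum(n_i * D_i^4) / sum(n_i * D_i^3).
d = np.array([1.0, 2.0, 5.0, 10.0])    # mean diameter of each class (um)
n = np.array([400, 250, 100, 20])      # number frequency in each class

d43 = np.sum(n * d**4) / np.sum(n * d**3)
print(f"De Brouckere mean diameter D[4,3] = {d43:.2f} um")   # ~7.65 um
```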
Applications.
The De Brouckere mean has the advantage of being more sensitive to the larger particles, which take up the largest volume of the sample, therefore giving crucial information about the product in the mining and milling industries. It has also been used in combustion analysis, as the D[4,3] is less affected by the presence of very small particulate residuals, which has enabled the evaluation of the primary diesel spray.
See also.
Sauter mean diameter
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "D[4,3]= \\frac{\\Sigma n_iD_i^4}{\\Sigma n_iD_i^3}"
}
] |
https://en.wikipedia.org/wiki?curid=62069340
|
6207
|
Electric current
|
Flow of electric charge
An electric current is a flow of charged particles, such as electrons or ions, moving through an electrical conductor or space. It is defined as the net rate of flow of electric charge through a surface. The moving particles are called charge carriers, which may be one of several types of particles, depending on the conductor. In electric circuits the charge carriers are often electrons moving through a wire. In semiconductors they can be electrons or holes. In an electrolyte the charge carriers are ions, while in plasma, an ionized gas, they are ions and electrons.
In the International System of Units (SI), electric current is expressed in units of ampere (sometimes called an "amp", symbol A), which is equivalent to one coulomb per second. The ampere is an SI base unit and electric current is a base quantity in the International System of Quantities (ISQ). Electric current is also known as amperage and is measured using a device called an "ammeter".
Electric currents create magnetic forces, which are used in motors, generators, inductors, and transformers. In ordinary conductors, they cause Joule heating, which creates light in incandescent light bulbs. Time-varying currents emit electromagnetic waves, which are used in telecommunications to broadcast information.
Symbol.
The conventional symbol for current is "I", which originates from the French phrase , (current intensity). Current intensity is often referred to simply as "current". The "I" symbol was used by André-Marie Ampère, after whom the unit of electric current is named, in formulating Ampère's force law (1820). The notation travelled from France to Great Britain, where it became standard, although at least one journal did not change from using "C" to "I" until 1896.
Conventions.
The conventional direction of current, also known as "conventional current", is arbitrarily defined as the direction in which positive charges flow. In a conductive material, the moving charged particles that constitute the electric current are called charge carriers. In metals, which make up the wires and other conductors in most electrical circuits, the positively charged atomic nuclei of the atoms are held in a fixed position, and the negatively charged electrons are the charge carriers, free to move about in the metal. In other materials, notably the semiconductors, the charge carriers can be positive "or" negative, depending on the dopant used. Positive and negative charge carriers may even be present at the same time, as happens in an electrolyte in an electrochemical cell.
A flow of positive charges gives the same electric current, and has the same effect in a circuit, as an equal flow of negative charges in the opposite direction. Since current can be the flow of either positive or negative charges, or both, a convention is needed for the direction of current that is independent of the type of charge carriers. Negatively charged carriers, such as the electrons (the charge carriers in metal wires and many other electronic circuit components), therefore flow in the opposite direction of conventional current flow in an electrical circuit.
Reference direction.
A current in a wire or circuit element can flow in either of two directions. When defining a variable formula_0 to represent the current, the direction representing positive current must be specified, usually by an arrow on the circuit schematic diagram. This is called the "reference direction" of the current formula_0. When analyzing electrical circuits, the actual direction of current through a specific circuit element is usually unknown until the analysis is completed. Consequently, the reference directions of currents are often assigned arbitrarily. When the circuit is solved, a negative value for the current implies the actual direction of current through that circuit element is opposite that of the chosen reference direction.
Ohm's law.
Ohm's law states that the current through a conductor between two points is directly proportional to the potential difference across the two points. Introducing the constant of proportionality, the resistance, one arrives at the usual mathematical equation that describes this relationship:
formula_1
where "I" is the current through the conductor in units of amperes, "V" is the potential difference measured "across" the conductor in units of volts, and "R" is the resistance of the conductor in units of ohms. More specifically, Ohm's law states that the "R" in this relation is constant, independent of the current.
Alternating and direct current.
In alternating current (AC) systems, the movement of electric charge periodically reverses direction. AC is the form of electric power most commonly delivered to businesses and residences. The usual waveform of an AC power circuit is a sine wave, though certain applications use alternative waveforms, such as triangular or square waves. Audio and radio signals carried on electrical wires are also examples of alternating current. An important goal in these applications is recovery of information encoded (or "modulated") onto the AC signal.
In contrast, direct current (DC) refers to a system in which electric charge moves in only one direction (sometimes called unidirectional flow). Direct current is produced by sources such as batteries, thermocouples, solar cells, and commutator-type electric machines of the dynamo type. Alternating current can also be converted to direct current through use of a rectifier. Direct current may flow in a conductor such as a wire, but can also flow through semiconductors, insulators, or even through a vacuum as in electron or ion beams. An old name for direct current was "galvanic current".
Despite DC being mathematically and conceptually simpler than AC, it is less widely used. However, due to the versatility of electric circuits, there are many situations that call for one type of electric current over the other.
Power transmission.
AC is almost always used for power transmission to consumers. The reasons are a combination of historical and technological factors.
Besides solar power, most power generation methods produce AC. In order to distribute electricity in the form of DC, a rectifier must be used to convert from the initial AC to DC. However, rectification is a complex, expensive, and, until recently, fairly lossy conversion, especially on the scale of power plants. This has made it historically inefficient to convert the AC generated by power plants to DC for distribution.
Along the power lines connecting power stations to consumers, voltage is stepped up and down to reduce heat loss, often multiple times. While DC inherently experiences less heat loss than AC, it cannot be stepped up or down with transformers. This is because transformers work on the principle of induction: the alternating current in the primary winding creates a changing magnetic field, which induces an electromotive force (EMF) of higher or lower voltage in the connected power line. A steady DC current, however, produces an unchanging magnetic field, which induces no EMF, making transformer action impossible. While technology for DC voltage conversion now exists, it is more complex, massive, and expensive than AC transformers, and when infrastructural power grids were being built in much of the world, such technology either did not exist or was inefficient to use.
There are certain cases, however, specifically high-voltage, long-distance power transmission (HVDC), where DC is used. The main issue with using DC in commercial power transmission is the need to change voltage along the way, which happens multiple times between plant and consumer: for safety, voltage is stepped down the closer it gets to consumers, to prevent very high-voltage lines from running through densely populated residential areas. However, when constant-voltage transmission across long distances is needed, DC outperforms AC: it carries better over distance and experiences less heat loss, as the skin effect is only observed in AC systems. Additionally, out-of-phase AC systems can only be connected via DC. For example, the U.S. power grid is split into three asynchronous AC systems, and for power to be transferred between any of these systems, DC must be used to mediate the transfer.
Electronic devices.
Household electronic devices overwhelmingly use DC. This is due to the nature of basic circuit components. Many essential components in electronic devices, such as transistors, diodes, and logic gates, operate on a one-way basis; the diode, for example, only allows current to flow in one, fixed direction. Trying to use AC with such components would immensely complicate circuit design, requiring designs to function symmetrically with respect to current direction. Additionally, CPUs update billions of times per second, far faster than the 100/120 times per second at which AC reverses polarity. Running a CPU directly on AC would cause large inconsistencies in its supply voltage and performance, so all devices that use CPUs must also use DC.
There are, however, devices that are indifferent to the type of current because of their very simple functionality. These devices, such as incandescent lightbulbs or some toaster ovens, operate by running current through high-resistance elements, emitting electromagnetic radiation. Both AC and DC are affected by resistance, so the type of current used is irrelevant, although the two types may require different voltages to produce identical results.
Education.
In university-level electromagnetism courses, whether algebra- or calculus-based, DC is typically used as an introduction to electric current, because it is mathematically and conceptually simpler than AC. A constant, direct current is, arguably, much more intuitive than one that alternates many times a second. DC power sources are also trivial to add to circuits, both in theory and in practice. For AC, however, phase must be considered when dealing with multiple power sources; whether the phase is off by 0 or π makes a large difference in the behavior of a circuit. Additionally, circuits with capacitive and inductive elements are easy to represent mathematically with DC. The charge on a capacitor and the current through an inductor asymptotically stabilize with time, meaning DC systems eventually reach a steady state (usually fairly quickly in textbook problems). Such systems are easy to analyze once they reach steady state, and even systems that have not yet stabilized are not difficult to analyze, as the required formulas are relatively simple functions of time and of the initial and final voltages. AC, on the other hand, never reaches a steady state: the fluctuating current means there are always changing capacitive and inductive effects in the circuit, complicating analysis. While the required formulas for these effects are still relatively simple functions of time and voltage, the voltage is now a sinusoidal function of time, and capacitive and inductive elements introduce phase lead and lag.
Occurrences.
The occurrences of electric current can be divided into three main types: inorganic, biological and technological.
Inorganic occurrences of electricity in nature.
Electrostatic materials – piezoelectricity – the geophysical flow of molten iron
Natural observable examples of electric current include static electric discharge and electricity in the Earth's mantle. Static electricity was among the earliest forms of inorganic electricity discovered in nature: it was found early in history that certain types of stone and resin, like amber, become electrostatically charged when rubbed, with amber acquiring a negative charge. Piezoelectricity occurs when mechanical strain is applied to certain crystals, causing them to become electrically polarized. Iron monoxide (FeO), which makes up 9% of the Earth's mantle, conducts electricity (especially when molten); this is believed to contribute to Earth's rotation.
Atmospheric occurrence
Lightning flashes are also one of the earliest discovered natural observable examples of electric current. Electric charge can build up vertically in dense clouds, leading to lightning that discharges in flashes to the ground. The flashes follow a path of lowest electric resistance through the air.
In the second half of the 19th century and first half of the 20th century, it was discovered that the solar wind is the source of the polar auroras that occur in the atmosphere near the North and South Poles. The aurora borealis and aurora australis are generated by flows of charged particles emanating from the Sun, which occur when solar activity is high (temporarily strong solar flares).
Cosmic occurrences
The discovery of Birkeland currents by Kristian Birkeland has triggered further research into electric phenomena in the magnetosphere and in the cosmos. Thanks to the research of pioneering scientists and engineers like Hannes Alfvén, Irving Langmuir and Donald E. Scott, it was discovered that electricity is ubiquitous in the universe. Plasmas account for a large part of matter in the cosmos; plasmas are electrically charged. It is now believed that plasma-filled cosmic filaments are probably responsible for transfer of electricity through the universe. Recent astrophysical research indicates that such filaments can transfer electricity both between galaxies and between star clusters. Neutron stars, pulsars, magnetars, quasars and astrophysical jets play an important role in cosmic electric phenomena. It is hypothesized that star formation in clusters is in part powered by those filaments, as the star formation mostly occurs along strings, following the plasma filaments. It is believed that the intergalactic plasma filaments are slowly rotating, because of their electric charge that generates multi-layer, concentric electromagnetic fields. This filament rotation may account for the position and rotation of galaxies. This is a field of ongoing scientific research. The James Webb Space Telescope yields new insights that enable further astrophysical exploration into this subject.
Organic occurrences of electricity – biological use of electricity.
Nerve system and electromagnetic sensitivity
A biological example of current is the flow of ions in neurons and nerves, responsible for both brain activity and sensory perception. Neurons are found in all animals, even in very small animals like starfish and seahorses. Brain development is more substantial in higher organisms, enabling thought and memory. Consciousness is enabled in the highest organisms, including man. Some mammals, like whales and dolphins, are capable of communicating over long distances; a pod of these sea mammals can communicate, for example, to warn each other of predators. Migratory birds can sense the magnetic field of the Earth, using it to coordinate their flight. Some migratory sea animals, like salmon and sea turtles, also use Earth's magnetic field to navigate during long-distance migration.
Electroreception and electrogenesis
Electroreceptive animals have the ability to perceive natural electrical stimuli. Electrogenic animals, like eels, have organs that are capable of producing electric shocks – as a means of protection or as part of predator behaviour.
Photosynthesis and electrification of mechanical energy
Some species of plants can generate electricity at a microscopic level, either through photosynthesis or through electrification of mechanical energy. For instance, the oleander is able to capture mechanical energy from wind stirring its leaves, and transform it into electricity for use in the plant. Natural photosynthesis in chlorophyll is highly efficient at converting sunlight into electricity, which then drives the chemical formation of glucose. That electricity can also be used artificially in a photo-bioelectrochemical cell.
Technological usage of electric current – wired and wireless.
Man-made occurrences of electric current include the controlled flow of conduction electrons in metal wires, such as overhead power lines for long-distance energy delivery and the smaller wires within electrical and electronic devices. Eddy currents are electric currents that occur in conductors exposed to changing magnetic fields. Similarly, electric currents occur, particularly near the surface, in conductors exposed to electromagnetic waves. When oscillating electric currents flow at the correct voltages within radio antennas, radio waves are generated.
Pantographs are an example of electricity transfer through a sliding contact. The pantograph has enabled large-scale electrification of railways, subway and tram networks, as well as networks of trolleybuses.
Subsea interconnectors enable large-scale coast-to-coast electricity transfer between countries.
In electronics, other forms of electric current include the flow of electrons through resistors or through the vacuum in a vacuum tube, the flow of ions inside a battery, and the flow of holes within metals and semiconductors.
A newly developed use of electric current is wireless charging of batteries, for use in phones and electric vehicles, for instance. Poynting's theorem shows that electric energy can be transferred from A to B without carrying current from A to B. This is achieved by using one or more coils at A. The coils generate an electromagnetic field that carries the electric energy to B. Wireless charging devices could potentially be integrated into furniture, walls or road surfaces. Research is currently being done on long-distance wireless charging.
Measurement.
Current can be measured using an ammeter. While a galvanometer offers direct electric current measurement, it requires breaking the electrical circuit, which can be inconvenient for certain applications. Current can also be measured without breaking the circuit by detecting the magnetic field associated with the current. Devices, at the circuit level, use various techniques to measure current: shunt resistors, whose voltage drop is proportional to the current through them; Hall effect current sensor transducers; transformers and Rogowski coils, which are suitable for alternating current only; and magnetoresistive field sensors.
Resistive heating.
Joule heating, also known as "ohmic heating" and "resistive heating", is the process of power dissipation by which the passage of an electric current through a conductor increases the internal energy of the conductor, converting thermodynamic work into heat. The phenomenon was first studied by James Prescott Joule in 1841. Joule immersed a length of wire in a fixed mass of water and measured the temperature rise due to a known current through the wire over a 30-minute period. By varying the current and the length of the wire he deduced that the heat produced was proportional to the square of the current multiplied by the electrical resistance of the wire.
formula_2
This relationship is known as Joule's Law. The SI unit of energy was subsequently named the joule and given the symbol "J". The commonly known SI unit of power, the watt (symbol: W), is equivalent to one joule per second.
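As a quick numerical instance of Joule's law (the values are illustrative):

```python
# Joule's law: heat Q = I^2 * R * t.
I = 2.0      # current, amperes
R = 10.0     # resistance, ohms
t = 30 * 60  # 30 minutes, in seconds

Q = I**2 * R * t
print(f"heat dissipated: {Q:.0f} J ({Q/1000:.0f} kJ)")   # 72000 J
```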
Electromagnetism.
Electromagnet.
In an electromagnet a coil of wires behaves like a magnet when an electric current flows through it. When the current is switched off, the coil loses its magnetism immediately. Electric current produces a magnetic field. The magnetic field can be visualized as a pattern of circular field lines surrounding the wire that persists as long as there is current.
Electromagnetic induction.
Magnetic fields can also be used to make electric currents. When a changing magnetic field is applied to a conductor, an electromotive force (EMF) is induced, which starts an electric current, when there is a suitable path.
Radio waves.
When an electric current flows in a suitably shaped conductor at radio frequencies, radio waves can be generated. These travel at the speed of light and can cause electric currents in distant conductors.
Conduction mechanisms in various media.
In metallic solids, electric charge flows by means of electrons, from lower to higher electrical potential. In other media, any stream of charged objects (ions, for example) may constitute an electric current. To provide a definition of current independent of the type of charge carriers, "conventional current" is defined as moving in the same direction as the positive charge flow. So, in metals where the charge carriers (electrons) are negative, conventional current is in the opposite direction to the overall electron movement. In conductors where the charge carriers are positive, conventional current is in the same direction as the charge carriers.
In a vacuum, a beam of ions or electrons may be formed. In other conductive materials, the electric current is due to the flow of both positively and negatively charged particles at the same time. In still others, the current is entirely due to positive charge flow. For example, the electric currents in electrolytes are flows of positively and negatively charged ions. In a common lead-acid electrochemical cell, electric currents are composed of positive hydronium ions flowing in one direction, and negative sulfate ions flowing in the other. Electric currents in sparks or plasma are flows of electrons as well as positive and negative ions. In ice and in certain solid electrolytes, the electric current is entirely composed of flowing ions.
Metals.
In a metal, some of the outer electrons in each atom are not bound to the individual molecules as they are in molecular solids, or in full bands as they are in insulating materials, but are free to move within the metal lattice. These conduction electrons can serve as charge carriers, carrying a current. Metals are particularly conductive because there are many of these free electrons. With no external electric field applied, these electrons move about randomly due to thermal energy but, on average, there is zero net current within the metal. At room temperature, the average speed of these random motions is 10^6 metres per second. Given a surface through which a metal wire passes, electrons move in both directions across the surface at an equal rate. As George Gamow wrote in his popular science book, "One, Two, Three...Infinity" (1947), "The metallic substances differ from all other materials by the fact that the outer shells of their atoms are bound rather loosely, and often let one of their electrons go free. Thus the interior of a metal is filled up with a large number of unattached electrons that travel aimlessly around like a crowd of displaced persons. When a metal wire is subjected to electric force applied on its opposite ends, these free electrons rush in the direction of the force, thus forming what we call an electric current."
When a metal wire is connected across the two terminals of a DC voltage source such as a battery, the source places an electric field across the conductor. The moment contact is made, the free electrons of the conductor are forced to drift toward the positive terminal under the influence of this field. The free electrons are therefore the charge carrier in a typical solid conductor.
For a steady flow of charge through a surface, the current "I" (in amperes) can be calculated with the following equation:
formula_3
where "Q" is the electric charge transferred through the surface over a time "t". If "Q" and "t" are measured in coulombs and seconds respectively, "I" is in amperes.
More generally, electric current can be represented as the rate at which charge flows through a given surface as:
formula_4
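By way of illustration, the following minimal Python sketch computes a steady current from I = Q/t and approximates the general I = dQ/dt numerically; the numbers are invented for the example and are not from the article.

```python
# Minimal sketch (illustrative values): steady current from I = Q/t,
# and a numerical approximation of the general I = dQ/dt.

def steady_current(charge_coulombs: float, time_seconds: float) -> float:
    """Current in amperes for a steady flow of charge through a surface."""
    return charge_coulombs / time_seconds

print(steady_current(120.0, 60.0))  # 120 C in 60 s -> 2.0 A

def current_at(q, t: float, dt: float = 1e-6) -> float:
    """Approximate I = dQ/dt at time t by a central finite difference."""
    return (q(t + dt) - q(t - dt)) / (2.0 * dt)

# Example: Q(t) = 3t^2 coulombs gives I(t) = 6t amperes.
print(current_at(lambda t: 3.0 * t**2, 2.0))  # ~12.0 A
```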
Electrolytes.
Electric currents in electrolytes are flows of electrically charged particles (ions). For example, if an electric field is placed across a solution of Na+ and Cl− (and conditions are right) the sodium ions move towards the negative electrode (cathode), while the chloride ions move towards the positive electrode (anode). Reactions take place at both electrode surfaces, neutralizing each ion.
Water-ice and certain solid electrolytes called proton conductors contain positive hydrogen ions ("protons") that are mobile. In these materials, electric currents are composed of moving protons, as opposed to the moving electrons in metals.
In certain electrolyte mixtures, brightly coloured ions are the moving electric charges. The slow progress of the colour makes the current visible.
Gases and plasmas.
In air and other ordinary gases below the breakdown field, the dominant source of electrical conduction is via relatively few mobile ions produced by radioactive gases, ultraviolet light, or cosmic rays. Since the electrical conductivity is low, gases are dielectrics or insulators. However, once the applied electric field approaches the breakdown value, free electrons become sufficiently accelerated by the electric field to create additional free electrons by colliding with, and ionizing, neutral gas atoms or molecules in a process called avalanche breakdown. The breakdown process forms a plasma that contains enough mobile electrons and positive ions to make it an electrical conductor. In the process, it forms a light-emitting conductive path, such as a spark, arc or lightning.
Plasma is the state of matter where some of the electrons in a gas are stripped or "ionized" from their molecules or atoms. A plasma can be formed by high temperature, or by application of a high electric or alternating magnetic field as noted above. Due to their lower mass, the electrons in a plasma accelerate more quickly in response to an electric field than the heavier positive ions, and hence carry the bulk of the current. The free ions recombine to create new chemical compounds (for example, breaking atmospheric oxygen into single oxygen [O₂ → 2O], which then recombine to create ozone [O₃]).
Vacuum.
Since a "perfect vacuum" contains no charged particles, it normally behaves as a perfect insulator. However, metal electrode surfaces can cause a region of the vacuum to become conductive by injecting free electrons or ions through either field electron emission or thermionic emission. Thermionic emission occurs when the thermal energy exceeds the metal's work function, while field electron emission occurs when the electric field at the surface of the metal is high enough to cause tunneling, which results in the ejection of free electrons from the metal into the vacuum. Externally heated electrodes are often used to generate an electron cloud as in the filament or indirectly heated cathode of vacuum tubes. Cold electrodes can also spontaneously produce electron clouds via thermionic emission when small incandescent regions (called "cathode spots" or "anode spots") are formed. These are incandescent regions of the electrode surface that are created by a localized high current. These regions may be initiated by field electron emission, but are then sustained by localized thermionic emission once a vacuum arc forms. These small electron-emitting regions can form quite rapidly, even explosively, on a metal surface subjected to a high electrical field. Vacuum tubes and sprytrons are some of the electronic switching and amplifying devices based on vacuum conductivity.
Superconductivity.
Superconductivity is a phenomenon of exactly zero electrical resistance and expulsion of magnetic fields occurring in certain materials when cooled below a characteristic critical temperature. It was discovered by Heike Kamerlingh Onnes on April 8, 1911 in Leiden. Like ferromagnetism and atomic spectral lines, superconductivity is a quantum mechanical phenomenon. It is characterized by the Meissner effect, the complete ejection of magnetic field lines from the interior of the superconductor as it transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of "perfect conductivity" in classical physics.
Semiconductor.
In a semiconductor it is sometimes useful to think of the current as due to the flow of positive "holes" (the mobile positive charge carriers that are places where the semiconductor crystal is missing a valence electron). This is the case in a p-type semiconductor. A semiconductor has electrical conductivity intermediate in magnitude between that of a conductor and an insulator. This means a conductivity roughly in the range of 10⁻² to 10⁴ siemens per centimetre (S⋅cm⁻¹).
In the classic crystalline semiconductors, electrons can have energies only within certain bands (i.e. ranges of levels of energy). Energetically, these bands are located between the energy of the ground state, the state in which electrons are tightly bound to the atomic nuclei of the material, and the free electron energy, the latter describing the energy required for an electron to escape entirely from the material. The energy bands each correspond to many discrete quantum states of the electrons, and most of the states with low energy (closer to the nucleus) are occupied, up to a particular band called the "valence band". Semiconductors and insulators are distinguished from metals because the valence band in any given metal is nearly filled with electrons under usual operating conditions, while very few (semiconductor) or virtually none (insulator) of them are available in the "conduction band", the band immediately above the valence band.
The ease of exciting electrons in the semiconductor from the valence band to the conduction band depends on the band gap between the bands. The size of this energy band gap serves as an arbitrary dividing line (roughly 4 eV) between semiconductors and insulators.
With covalent bonds, an electron moves by hopping to a neighboring bond. The Pauli exclusion principle requires that the electron be lifted into the higher anti-bonding state of that bond. For delocalized states, for example in one dimension (that is, in a nanowire), for every energy there is a state with electrons flowing in one direction and another state with electrons flowing in the other. For a net current to flow, more states for one direction than for the other direction must be occupied. For this to occur, energy is required, as in the semiconductor the next higher states lie above the band gap. Often this is stated as: full bands do not contribute to the electrical conductivity. However, as a semiconductor's temperature rises above absolute zero, there is more energy in the semiconductor to spend on lattice vibration and on exciting electrons into the conduction band. The current-carrying electrons in the conduction band are known as "free electrons", though they are often simply called "electrons" if that is clear in context.
Current density and Ohm's law.
Current density is the rate at which charge passes through a chosen unit area. It is defined as a vector whose magnitude is the current per unit cross-sectional area. As discussed in Reference direction, the direction is arbitrary. Conventionally, if the moving charges are positive, then the current density has the same sign as the velocity of the charges. For negative charges, the sign of the current density is opposite to the velocity of the charges. In SI units, current density (symbol: j) is expressed in the SI base units of amperes per square metre.
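For a uniform current, the magnitude of the current density reduces to j = I/A. A minimal sketch, reusing the copper-wire figures from the Drift speed section below:

```python
# Minimal sketch: magnitude of a uniform current density, j = I / A.
# Figures match the copper-wire example in the Drift speed section.

I = 5.0        # current in amperes
A = 0.5e-6     # cross-sectional area: 0.5 mm^2 expressed in m^2

j = I / A      # amperes per square metre (A/m^2)
print(f"j = {j:.1e} A/m^2")  # j = 1.0e+07 A/m^2
```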
In linear materials such as metals, and under low frequencies, the current density across the conductor surface is uniform. In such conditions, Ohm's law states that the current is directly proportional to the potential difference between two ends (across) of that metal (ideal) resistor (or other ohmic device):
formula_5
where formula_0 is the current, measured in amperes; formula_6 is the potential difference, measured in volts; and formula_7 is the resistance, measured in ohms. For alternating currents, especially at higher frequencies, skin effect causes the current to spread unevenly across the conductor cross-section, with higher density near the surface, thus increasing the apparent resistance.
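As a minimal illustration of the relation (the component values below are invented for the example):

```python
# Minimal sketch of Ohm's law, I = V / R, for an ideal ohmic resistor.
# Illustrative values only.

def current(voltage_volts: float, resistance_ohms: float) -> float:
    """Current in amperes through an ohmic device."""
    return voltage_volts / resistance_ohms

print(current(12.0, 4.0))  # a 12 V source across 4 ohms drives 3.0 A

# The relation rearranges freely: V = I * R and R = V / I, so any one
# of the three quantities follows from the other two.
```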
Drift speed.
The mobile charged particles within a conductor move constantly in random directions, like the particles of a gas. (More accurately, a Fermi gas.) To create a net flow of charge, the particles must also move together with an average drift rate. Electrons are the charge carriers in most metals and they follow an erratic path, bouncing from atom to atom, but generally drifting in the opposite direction of the electric field. The speed they drift at can be calculated from the equation:
formula_8
where formula_9 is the drift velocity, formula_0 is the current, formula_10 is the number of charged particles per unit volume (the charge carrier density), formula_11 is the cross-sectional area of the conductor, and formula_12 is the charge on each particle.
Typically, electric charges in solids flow slowly. For example, in a copper wire of cross-section 0.5 mm2, carrying a current of 5 A, the drift velocity of the electrons is on the order of a millimetre per second. To take a different example, in the near-vacuum inside a cathode-ray tube, the electrons travel in near-straight lines at about a tenth of the speed of light.
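The copper-wire figure can be checked numerically. A sketch, assuming a standard textbook value of about 8.5 × 10²⁸ free electrons per cubic metre for copper (this carrier density is an assumption, not a figure stated in the article):

```python
# Minimal sketch: drift speed v = I / (n * A * Q) for the copper-wire
# example above. The carrier density n is a standard textbook value for
# copper and is an assumption here, not a figure from the article.

I = 5.0          # current, amperes
A = 0.5e-6       # cross-sectional area: 0.5 mm^2 in m^2
n = 8.5e28       # free electrons per cubic metre in copper (assumed)
Q = 1.602e-19    # magnitude of the electron charge, coulombs

v = I / (n * A * Q)                # metres per second
print(f"v = {v * 1000:.2f} mm/s")  # ~0.73 mm/s: of order 1 mm/s
```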
Any accelerating electric charge, and therefore any changing electric current, gives rise to an electromagnetic wave that propagates at very high speed outside the surface of the conductor. This speed is usually a significant fraction of the speed of light, as can be deduced from Maxwell's equations, and is therefore many times faster than the drift velocity of the electrons. For example, in AC power lines, the waves of electromagnetic energy propagate through the space between the wires, moving from a source to a distant load, even though the electrons in the wires only move back and forth over a tiny distance.
The ratio of the speed of the electromagnetic wave to the speed of light in free space is called the velocity factor, and depends on the electromagnetic properties of the conductor and the insulating materials surrounding it, and on their shape and size.
The magnitudes (not the natures) of these three velocities can be illustrated by an analogy with the three similar velocities associated with gases. (See also hydraulic analogy.)
|
[
{
"math_id": 0,
"text": "I"
},
{
"math_id": 1,
"text": "I = \\frac{V}{R},"
},
{
"math_id": 2,
"text": "P \\propto I^2 R. "
},
{
"math_id": 3,
"text": "I = {Q \\over t} \\, ,"
},
{
"math_id": 4,
"text": "I = \\frac{\\mathrm{d}Q}{\\mathrm{d}t} \\, ."
},
{
"math_id": 5,
"text": "I = {V \\over R} \\, ,"
},
{
"math_id": 6,
"text": "V"
},
{
"math_id": 7,
"text": "R"
},
{
"math_id": 8,
"text": "v = \\frac{I}{nAQ}"
},
{
"math_id": 9,
"text": "v"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "A"
},
{
"math_id": 12,
"text": "Q"
}
] |
https://en.wikipedia.org/wiki?curid=6207
|
62070387
|
Joris van der Hoeven
|
Dutch mathematician and computer scientist
Joris van der Hoeven (born 1971) is a Dutch mathematician and computer scientist, specializing in algebraic analysis and computer algebra. He is the primary developer of GNU TeXmacs.
Education and career.
Joris van der Hoeven received his doctorate in 1997 from Paris Diderot University (Paris 7) with the thesis "Asymptotique automatique". He is a "Directeur de recherche" at the CNRS and head of the team "Max Modélisation algébrique" at the Laboratoire d'informatique of the École Polytechnique.
Research.
His research deals with transseries ("i.e." generalizations of formal power series) with applications to algebraic analysis and asymptotic solutions of nonlinear differential equations. In addition to transseries' properties as part of differential algebra and model theory, he also examines their algorithmic aspects as well as those of classical complex function theory.
He is the main developer of GNU TeXmacs (a free scientific editing platform) and Mathemagix (free software, a computer algebra and analysis system).
In 2019, van der Hoeven and his coauthor David Harvey announced their discovery of the fastest known multiplication algorithm, allowing the multiplication of formula_0-bit binary numbers in time formula_1. Their paper was peer reviewed and published in the "Annals of Mathematics" in 2021.
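For orientation only, the sketch below shows the much older FFT-convolution idea on which fast integer multiplication methods are built; it is a toy floating-point version in the spirit of Schönhage–Strassen-era techniques and is emphatically not the Harvey–van der Hoeven construction, which requires a far more elaborate multidimensional transform.

```python
# Illustrative sketch ONLY: the classical FFT-convolution idea behind
# fast integer multiplication. This is NOT the Harvey-van der Hoeven
# O(n log n) algorithm.

import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = -1 if invert else 1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def multiply(x: int, y: int) -> int:
    """Multiply non-negative integers by convolving their decimal digits."""
    a = [int(d) for d in str(x)][::-1]   # little-endian digit lists
    b = [int(d) for d in str(y)][::-1]
    size = 1
    while size < len(a) + len(b):
        size *= 2
    fa = fft([complex(d) for d in a] + [0j] * (size - len(a)))
    fb = fft([complex(d) for d in b] + [0j] * (size - len(b)))
    conv = fft([u * v for u, v in zip(fa, fb)], invert=True)
    digits = [round((c / size).real) for c in conv]  # undo FFT scaling
    result, carry = 0, 0
    for i, d in enumerate(digits):       # propagate carries in base 10
        carry += d
        result += (carry % 10) * 10**i
        carry //= 10
    return result

assert multiply(31415926, 27182818) == 31415926 * 27182818
```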
Recognition.
In 2018, he was an Invited Speaker (with Matthias Aschenbrenner and Lou van den Dries) with the talk "On numbers, germs, and transseries" at the International Congress of Mathematicians in Rio de Janeiro. In 2018, the three received the Karp Prize.
In 2022, he received the N. G. de Bruijn Prize.
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "O(n\\log n)"
}
] |
https://en.wikipedia.org/wiki?curid=62070387
|