Dataset fields: id (string, length 2-8), title (string, length 1-130), text (string, length 0-252k), formulas (list, length 1-823), url (string, length 38-44).
11735693
Complex quadratic polynomial
A complex quadratic polynomial is a quadratic polynomial whose coefficients and variable are complex numbers. Properties. Quadratic polynomials have the following properties, regardless of the form: Forms. When the quadratic polynomial has only one variable (univariate), one can distinguish its four main forms: The monic and centered form has been studied extensively, and has the following properties: The lambda form formula_6 is: Conjugation. Between forms. Since formula_8 is affine conjugate to the general form of the quadratic polynomial, it is often used to study complex dynamics and to create images of Mandelbrot, Julia and Fatou sets. When one wants to change from formula_9 to formula_10: formula_11 When one wants to change from formula_12 to formula_10, the parameter transformation is formula_13 and the transformation between the variables in formula_14 and formula_15 is formula_16 With doubling map. There is a semi-conjugacy between the dyadic transformation (the doubling map) and the quadratic polynomial case of "c" = –2. Notation. Iteration. Here formula_17 denotes the "n"-th iterate of the function formula_18: formula_19 so formula_20 Because of the possible confusion with exponentiation, some authors write formula_21 for the "n"-th iterate of formula_18. Parameter. The monic and centered form formula_5 can be marked by: so: formula_22 formula_23 Examples: Map. The monic and centered form, sometimes called the Douady-Hubbard family of quadratic polynomials, is typically used with variable formula_27 and parameter formula_10: formula_28 When it is used as an evolution function of the discrete nonlinear dynamical system formula_29, it is named the quadratic map: formula_30 The Mandelbrot set is the set of values of the parameter "c" for which the initial condition "z"0 = 0 does not cause the iterates to diverge to infinity. Critical items. Critical points. Complex plane. A critical point of formula_31 is a point formula_32 on the dynamical plane such that the derivative vanishes: formula_33 Since formula_34 implies formula_35 we see that the only (finite) critical point of formula_31 is the point formula_36. formula_37 is the initial point for Mandelbrot set iteration. For the quadratic family formula_38 the critical point z = 0 is the center of symmetry of the Julia set Jc, so it is a convex combination of two points in Jc. Extended complex plane. On the Riemann sphere, a polynomial of degree "d" has 2"d" − 2 critical points. Here zero and infinity are the critical points. Critical value. A critical value formula_39 of formula_31 is the image of a critical point: formula_40 Since formula_41 we have formula_42 So the parameter formula_10 is the critical value of formula_43. Critical level curves. A critical level curve is a level curve which contains a critical point. It acts as a sort of skeleton of the dynamical plane. Example: level curves cross at a saddle point, which is a special type of critical point. Critical limit set. The critical limit set is the set of forward orbits of all critical points. Critical orbit. The forward orbit of a critical point is called a critical orbit. Critical orbits are very important because every attracting periodic orbit attracts a critical point, so studying the critical orbits helps us understand the dynamics in the Fatou set. formula_44 formula_45 formula_46 formula_47 formula_48 This orbit falls into an attracting periodic cycle if one exists. Critical sector. The critical sector is a sector of the dynamical plane containing the critical point. Critical set. The critical set is the set of critical points. Critical polynomial. The critical polynomials are defined by formula_49 so formula_50 formula_51 formula_52 formula_53 These polynomials are used for: formula_54 formula_56 Critical curves. Diagrams of critical polynomials are called critical curves. These curves create the skeleton (the dark lines) of a bifurcation diagram. Spaces, planes. 4D space. One can use the Julia-Mandelbrot 4-dimensional (4D) space for a global analysis of this dynamical system. In this space there are two basic types of 2D planes: There is also another plane used to analyze such dynamical systems, the "w"-plane: 2D Parameter plane. The phase space of a quadratic map is called its parameter plane. Here: formula_57 is constant and formula_10 is variable. There is no dynamics here. It is only a set of parameter values. There are no orbits on the parameter plane. The parameter plane consists of: There are many different subtypes of the parameter plane. See also: 2D Dynamical plane. "The polynomial Pc maps each dynamical ray to another ray doubling the angle (which we measure in full turns, i.e. 0 = 1 = 2π rad = 360°), and the dynamical rays of any polynomial "look like straight rays" near infinity. This allows us to study the Mandelbrot and Julia sets combinatorially, replacing the dynamical plane by the unit circle, rays by angles, and the quadratic polynomial by the doubling modulo one map." (Virpi Kauko) On the dynamical plane one can find: The dynamical plane consists of: Here, formula_10 is a constant and formula_27 is a variable. The two-dimensional dynamical plane can be treated as a Poincaré cross-section of the three-dimensional space of a continuous dynamical system. Dynamical "z"-planes can be divided into two groups: Riemann sphere. The Riemann sphere is the extended complex plane: the complex plane plus a point at infinity. Derivatives. First derivative with respect to "c". On the parameter plane: The first derivative of formula_62 with respect to "c" is formula_63 This derivative can be found by iteration starting with formula_64 and then replacing at every consecutive step formula_65 This can easily be verified by using the chain rule for the derivative. This derivative is used in the distance estimation method for drawing a Mandelbrot set. First derivative with respect to "z". On the dynamical plane: At a fixed point formula_37, formula_67 At a periodic point "z"0 of period "p", the first derivative of the "p"-th iterate is formula_68 It is often represented by formula_69 and referred to as the multiplier or the Lyapunov characteristic number. Its logarithm is known as the Lyapunov exponent. The absolute value of the multiplier is used to check the stability of periodic (also fixed) points. At a nonperiodic point, the derivative, denoted by formula_70, can be found by iteration starting with formula_71 and then using formula_72 This derivative is used for computing the external distance to the Julia set. Schwarzian derivative. The Schwarzian derivative (SD for short) of "f" is: formula_73 References. <templatestyles src="Reflist/styles.css" />
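The critical orbit and the parameter derivative described above can be illustrated with a short numerical sketch. The following Python fragment (illustrative only; the iteration cap, escape radius and the constant in the final estimate are common but arbitrary choices, and the Milnor-style distance formula is a standard approximation rather than anything stated in the article) iterates the critical orbit of formula_28 together with the recurrence formula_65 and uses both in an exterior distance estimate of the kind mentioned under "First derivative with respect to c".

```python
import math

def mandelbrot_distance_estimate(c, max_iter=500, escape_radius=1e6):
    """Exterior distance estimate from parameter c to the Mandelbrot set (sketch)."""
    z = 0j            # the critical point z_cr = 0 is the initial point of the iteration
    dz = 1 + 0j       # z_0' = 1, then z_{n+1}' = 2*z_n*z_n' + 1 as in the text
    for _ in range(max_iter):
        if abs(z) > escape_radius:
            # Milnor-style estimate: distance is roughly 2*|z|*ln|z| / |dz|
            return 2 * abs(z) * math.log(abs(z)) / abs(dz)
        dz = 2 * z * dz + 1   # derivative with respect to c (uses the old z_n)
        z = z * z + c         # the critical orbit z_{n+1} = z_n^2 + c
    return 0.0                # orbit did not escape: c is treated as inside the set

print(mandelbrot_distance_estimate(0.3 + 0.05j))  # inside the main cardioid: 0.0
print(mandelbrot_distance_estimate(1 + 1j))       # well outside: positive estimate
```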
[ { "math_id": 0, "text": " f(x) = a_2 x^2 + a_1 x + a_0 " }, { "math_id": 1, "text": " a_2 \\ne 0" }, { "math_id": 2, "text": "f_r(x) = r x (1-x)" }, { "math_id": 3, "text": "f_{\\theta}(x) = x^2 +\\lambda x" }, { "math_id": 4, "text": "\\lambda = e^{2 \\pi \\theta i}" }, { "math_id": 5, "text": "f_c(x) = x^2 +c" }, { "math_id": 6, "text": " f_{\\lambda}(z) = z^2 +\\lambda z" }, { "math_id": 7, "text": " z \\mapsto \\lambda z" }, { "math_id": 8, "text": "f_c(x)" }, { "math_id": 9, "text": "\\theta" }, { "math_id": 10, "text": "c" }, { "math_id": 11, "text": "c = c(\\theta) = \\frac {e^{2 \\pi \\theta i}}{2} \\left(1 - \\frac {e^{2 \\pi \\theta i}}{2}\\right). " }, { "math_id": 12, "text": "r" }, { "math_id": 13, "text": "\nc = c(r) = \\frac{1- (r-1)^2}{4} = -\\frac{r}{2} \\left(\\frac{r-2}{2}\\right)\n" }, { "math_id": 14, "text": "z_{t+1}=z_t^2+c" }, { "math_id": 15, "text": "x_{t+1}=rx_t(1-x_t)" }, { "math_id": 16, "text": "z=r\\left(\\frac{1}{2}-x\\right)." }, { "math_id": 17, "text": " f^n" }, { "math_id": 18, "text": "f" }, { "math_id": 19, "text": "f_c^n(z) = f_c^1(f_c^{n-1}(z))" }, { "math_id": 20, "text": "z_n = f_c^n(z_0)." }, { "math_id": 21, "text": "f^{\\circ n}" }, { "math_id": 22, "text": "f_c = f_{\\theta}" }, { "math_id": 23, "text": "c = c({\\theta})" }, { "math_id": 24, "text": " z \\to z^2+i" }, { "math_id": 25, "text": " z \\to z^2+ c" }, { "math_id": 26, "text": "c = -1.23922555538957 + 0.412602181602004*i" }, { "math_id": 27, "text": "z" }, { "math_id": 28, "text": "f_c(z) = z^2 +c." }, { "math_id": 29, "text": "z_{n+1} = f_c(z_n)" }, { "math_id": 30, "text": "f_c : z \\to z^2 + c." }, { "math_id": 31, "text": "f_c" }, { "math_id": 32, "text": "z_{cr}" }, { "math_id": 33, "text": "f_c'(z_{cr}) = 0." }, { "math_id": 34, "text": "f_c'(z) = \\frac{d}{dz}f_c(z) = 2z" }, { "math_id": 35, "text": "z_{cr} = 0," }, { "math_id": 36, "text": " z_{cr} = 0" }, { "math_id": 37, "text": "z_0" }, { "math_id": 38, "text": "f_c(z)=z^2+c" }, { "math_id": 39, "text": "z_{cv} " }, { "math_id": 40, "text": "z_{cv} = f_c(z_{cr})" }, { "math_id": 41, "text": "z_{cr} = 0" }, { "math_id": 42, "text": "z_{cv} = c" }, { "math_id": 43, "text": "f_c(z)" }, { "math_id": 44, "text": "z_0 = z_{cr} = 0" }, { "math_id": 45, "text": "z_1 = f_c(z_0) = c" }, { "math_id": 46, "text": "z_2 = f_c(z_1) = c^2 +c" }, { "math_id": 47, "text": "z_3 = f_c(z_2) = (c^2 + c)^2 + c" }, { "math_id": 48, "text": "\\ \\vdots" }, { "math_id": 49, "text": "P_n(c) = f_c^n(z_{cr}) = f_c^n(0)" }, { "math_id": 50, "text": "P_0(c)= 0" }, { "math_id": 51, "text": "P_1(c) = c" }, { "math_id": 52, "text": "P_2(c) = c^2 + c" }, { "math_id": 53, "text": "P_3(c) = (c^2 + c)^2 + c" }, { "math_id": 54, "text": "\\text{centers} = \\{ c : P_n(c) = 0 \\}" }, { "math_id": 55, "text": "P_n(c)" }, { "math_id": 56, "text": "M_{n,k} = \\{ c : P_k(c) = P_{k+n}(c) \\}" }, { "math_id": 57, "text": "z_0 = z_{cr}" }, { "math_id": 58, "text": "f_0" }, { "math_id": 59, "text": "c = 0" }, { "math_id": 60, "text": "c \\ne 0" }, { "math_id": 61, "text": "z_0 = 0 " }, { "math_id": 62, "text": "f_c^n(z_0)" }, { "math_id": 63, "text": "z_n' = \\frac{d}{dc} f_c^n(z_0)." }, { "math_id": 64, "text": "z_0' = \\frac{d}{dc} f_c^0(z_0) = 1" }, { "math_id": 65, "text": "z_{n+1}' = \\frac{d}{dc} f_c^{n+1}(z_0) = 2\\cdot{}f_c^n(z)\\cdot\\frac{d}{dc} f_c^n(z_0) + 1 = 2 \\cdot z_n \\cdot z_n' +1." }, { "math_id": 66, "text": "c " }, { "math_id": 67, "text": "f_c'(z_0) = \\frac{d}{dz}f_c(z_0) = 2z_0 ." 
}, { "math_id": 68, "text": "(f_c^p)'(z_0) = \\frac{d}{dz}f_c^p(z_0) = \\prod_{i=0}^{p-1} f_c'(z_i) = 2^p \\prod_{i=0}^{p-1} z_i = \\lambda " }, { "math_id": 69, "text": "\\lambda" }, { "math_id": 70, "text": "z'_n" }, { "math_id": 71, "text": "z'_0 = 1," }, { "math_id": 72, "text": "z'_n= 2*z_{n-1}*z'_{n-1}." }, { "math_id": 73, "text": " (Sf)(z) = \\frac{f'''(z)}{f'(z)} - \\frac{3}{2} \\left ( \\frac{f''(z)}{f'(z)}\\right ) ^2 . " } ]
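The parameter change formula_13 and variable change formula_16 between the logistic form formula_2 and the monic and centered form formula_5 can be sanity-checked numerically; a minimal Python sketch (the value of "r", the starting point and the tolerance are arbitrary illustrative choices):

```python
def check_conjugacy(r=3.3, x0=0.2, steps=20, tol=1e-9):
    """Verify that z = r*(1/2 - x) intertwines x -> r*x*(1-x) and z -> z^2 + c."""
    c = (1.0 - (r - 1.0) ** 2) / 4.0   # c = c(r) from the parameter transformation
    x = x0
    z = r * (0.5 - x)                  # variable transformation
    for _ in range(steps):
        x = r * x * (1.0 - x)          # logistic step
        z = z * z + c                  # quadratic-map step
        assert abs(z - r * (0.5 - x)) < tol, "conjugacy violated"
    return True

print(check_conjugacy())   # True: both orbits stay related by z = r*(1/2 - x)
```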
https://en.wikipedia.org/wiki?curid=11735693
11737468
Probabilistic bisimulation
In theoretical computer science, probabilistic bisimulation is an extension of the concept of bisimulation for fully probabilistic transition systems, first described by K.G. Larsen and A. Skou. A discrete probabilistic transition system is a triple formula_0 where formula_1 gives the probability of starting in the state "s", performing the action "a" and ending up in the state "t". The set of states is assumed to be countable. There is no attempt to assign probabilities to actions: it is assumed that the actions are chosen nondeterministically by an adversary or by the environment. Apart from that choice, this type of system is fully probabilistic; there is no other indeterminacy. A probabilistic bisimulation on a system "S" is defined as an equivalence relation "R" on the state space St such that, for every pair "s","t" in St with "sRt", for every action "a" in Act, and for every equivalence class "C" of "R", formula_2 (where the transition probability into a class is the sum of the probabilities into its members). Two states are said to be probabilistically bisimilar if there is some such "R" relating them. When applied to Markov chains, probabilistic bisimulation is the same concept as lumpability. Probabilistic bisimulation extends naturally to weighted bisimulation. References. <templatestyles src="Reflist/styles.css" />
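As an illustration of the definition, the following Python sketch checks whether a candidate partition of the state space is a probabilistic bisimulation, comparing formula_2 with the probability of jumping into a class taken as the sum of the probabilities of jumping to its members. The toy system and its numbers are invented for illustration:

```python
def is_probabilistic_bisimulation(actions, tau, partition):
    """tau maps (s, a, t) -> probability; partition is a list of blocks (sets of states)."""
    def tau_into_class(s, a, C):
        return sum(tau.get((s, a, t), 0.0) for t in C)

    for block in partition:                 # s R t iff s and t lie in the same block
        for s in block:
            for t in block:
                for a in actions:
                    for C in partition:
                        if abs(tau_into_class(s, a, C) - tau_into_class(t, a, C)) > 1e-12:
                            return False
    return True

# Toy fully probabilistic system: under action 'a', both s1 and s2 move into the
# block {u1, u2} with total probability 1, and u1, u2 loop on themselves.
tau = {('s1', 'a', 'u1'): 0.5, ('s1', 'a', 'u2'): 0.5,
       ('s2', 'a', 'u1'): 0.3, ('s2', 'a', 'u2'): 0.7,
       ('u1', 'a', 'u1'): 1.0, ('u2', 'a', 'u2'): 1.0}
print(is_probabilistic_bisimulation({'a'}, tau, [{'s1', 's2'}, {'u1', 'u2'}]))  # True
print(is_probabilistic_bisimulation({'a'}, tau, [{'s1', 'u1'}, {'s2', 'u2'}]))  # False
```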
[ { "math_id": 0, "text": "S = (\\operatorname{St}, \\operatorname{Act}, \\tau:\\operatorname{St} \\times \\operatorname{Act}\\times \\operatorname{St}\\rightarrow [0,1])" }, { "math_id": 1, "text": "\\tau(s,a,t)" }, { "math_id": 2, "text": "\\tau(s,a,C) = \\tau(t,a,C)." } ]
https://en.wikipedia.org/wiki?curid=11737468
11740035
Flavin-containing monooxygenase 3
Protein-coding gene in the species Homo sapiens Flavin-containing monooxygenase 3 (FMO3), also known as dimethylaniline monooxygenase [N-oxide-forming] 3 and trimethylamine monooxygenase, is a flavoprotein enzyme (EC 1.14.13.148) that in humans is encoded by the "FMO3" gene. This enzyme catalyzes the following chemical reaction, among others: trimethylamine + NADPH + H+ + O2 formula_0 trimethylamine "N"-oxide + NADP+ + H2O FMO3 is the main flavin-containing monooxygenase isoenzyme that is expressed in the liver of adult humans. The human FMO3 enzyme catalyzes several types of reactions, including: the "N"-oxygenation of primary, secondary, and tertiary amines; the "S"-oxygenation of nucleophilic sulfur-containing compounds; and the 6-methylhydroxylation of the anti-cancer agent dimethylxanthenone acetic acid (DMXAA). FMO3 is the primary enzyme in humans which catalyzes the "N"-oxidation of trimethylamine into trimethylamine "N"-oxide; FMO1 also does this, but to a much lesser extent than FMO3. Genetic deficiencies of the FMO3 enzyme cause primary trimethylaminuria, also known as "fish odor syndrome". FMO3 is also involved in the metabolism of many xenobiotics (i.e., exogenous compounds which are not normally present in the body), such as the oxidative deamination of amphetamine. Cancer. The "FMO3" gene has been observed to be progressively downregulated in human papillomavirus-positive neoplastic keratinocytes derived from uterine cervical preneoplastic lesions at different levels of malignancy. For this reason, "FMO3" is likely to be associated with tumorigenesis and may be a potential prognostic marker for progression of uterine cervical preneoplastic lesions. References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=11740035
11740178
Primes in arithmetic progression
In number theory, primes in arithmetic progression are any sequence of at least three prime numbers that are consecutive terms in an arithmetic progression. An example is the sequence of primes (3, 7, 11), which is given by formula_0 for formula_1. According to the Green–Tao theorem, there exist arbitrarily long arithmetic progressions in the sequence of primes. Sometimes the phrase may also be used about primes which belong to an arithmetic progression which also contains composite numbers. For example, it can be used about primes in an arithmetic progression of the form formula_2, where "a" and "b" are coprime, which according to Dirichlet's theorem on arithmetic progressions contains infinitely many primes, along with infinitely many composites. For integer "k" ≥ 3, an AP-"k" (also called PAP-"k") is any sequence of "k" primes in arithmetic progression. An AP-"k" can be written as "k" primes of the form "a"·"n" + "b", for fixed integers "a" (called the common difference) and "b", and "k" consecutive integer values of "n". An AP-"k" is usually expressed with "n" = 0 to "k" − 1. This can always be achieved by defining "b" to be the first prime in the arithmetic progression. Properties. Any given arithmetic progression of primes has a finite length. In 2004, Ben J. Green and Terence Tao settled an old conjecture by proving the Green–Tao theorem: The primes contain arbitrarily long arithmetic progressions. It follows immediately that there are infinitely many AP-"k" for any "k". If an AP-"k" does not begin with the prime "k", then the common difference is a multiple of the primorial "k"# = 2·3·5·...·"j", where "j" is the largest prime ≤ "k". "Proof": Let the AP-"k" be "a"·"n" + "b" for "k" consecutive values of "n". If a prime "p" does not divide "a", then modular arithmetic says that "p" will divide every "p"'th term of the arithmetic progression. (From H.J. Weber, Cor.10 in "Exceptional Prime Number Twins, Triplets and Multiplets," arXiv:1102.3075[math.NT]. See also Theor.2.3 in "Regularities of Twin, Triplet and Multiplet Prime Numbers," arXiv:1103.0447[math.NT], Global J.P.A.Math 8(2012), in press.) If the AP is prime for "k" consecutive values, then "a" must therefore be divisible by all primes "p" ≤ "k". This also shows that an AP with common difference "a" cannot contain more consecutive prime terms than the value of the smallest prime that does not divide "a". If "k" is prime then an AP-"k" can begin with "k" and have a common difference which is only a multiple of ("k"−1)# instead of "k"#. (From H. J. Weber, "Less Regular Exceptional and Repeating Prime Number Multiplets," arXiv:1105.4092[math.NT], Sect.3.) For example, the AP-3 with primes {3, 5, 7} and common difference 2# = 2, or the AP-5 with primes {5, 11, 17, 23, 29} and common difference 4# = 6. It is conjectured that such examples exist for all primes "k". As of 2018, the largest prime for which this is confirmed is "k" = 19, for this AP-19 found by Wojciech Iżykowski in 2013: 19 + 4244193265542951705·17#·n, for "n" = 0 to 18. It follows from widely believed conjectures, such as Dickson's conjecture and some variants of the prime k-tuple conjecture, that if "p" > 2 is the smallest prime not dividing "a", then there are infinitely many AP-("p"−1) with common difference "a". For example, 5 is the smallest prime not dividing 6, so there are expected to be infinitely many AP-4 with common difference 6, which is called a sexy prime quadruplet.
When "a" = 2, "p" = 3, it is the twin prime conjecture, with an "AP-2" of 2 primes ("b", "b" + 2). Minimal primes in AP. We minimize the last term. Largest known primes in AP. For prime "q", "q"# denotes the primorial 2·3·5·7·...·"q". As of 2019, the longest known AP-"k" is an AP-27. Several examples are known for AP-26. The first to be discovered was found on April 12, 2010, by Benoît Perichon on a PlayStation 3 with software by Jarosław Wróblewski and Geoff Reynolds, ported to the PlayStation 3 by Bryan Little, in a distributed PrimeGrid project: 43142746595714191 + 23681770·23#·"n", for "n" = 0 to 25. (23# = 223092870) (sequence in the OEIS) By the time the first AP-26 was found, the search was divided into 131,436,182 segments by PrimeGrid and processed by 32/64-bit CPUs, Nvidia CUDA GPUs, and Cell microprocessors around the world. Before that, the record was an AP-25 found by Raanan Chermoni and Jarosław Wróblewski on May 17, 2008: 6171054912832631 + 366384·23#·"n", for "n" = 0 to 24. (23# = 223092870) The AP-25 search was divided into segments taking about 3 minutes on Athlon 64 and Wróblewski reported "I think Raanan went through less than 10,000,000 such segments" (this would have taken about 57 CPU years on Athlon 64). The earlier record was an AP-24 found by Jarosław Wróblewski alone on January 18, 2007: 468395662504823 + 205619·23#·"n", for "n" = 0 to 23. For this Wróblewski reported he used a total of 75 computers: 15 64-bit Athlons, 15 dual core 64-bit Pentium D 805, 30 32-bit Athlons 2500, and 15 Durons 900. The following table shows the largest known AP-"k" with the year of discovery and the number of decimal digits in the ending prime. Note that the largest known AP-"k" may be the end of an AP-("k"+1). Some record setters choose to first compute a large set of primes of the form "c"·"p"#+1 with fixed "p", and then search for AP's among the values of "c" that produced a prime. This is reflected in the expression for some records. The expression can easily be rewritten as "a"·"n" + "b". Consecutive primes in arithmetic progression. Consecutive primes in arithmetic progression refers to at least three "consecutive" primes which are consecutive terms in an arithmetic progression. Note that unlike an AP-"k", all the other numbers between the terms of the progression must be composite. For example, the AP-3 {3, 7, 11} does not qualify, because 5 is also a prime. For an integer "k" ≥ 3, a CPAP-"k" is "k" consecutive primes in arithmetic progression. It is conjectured there are arbitrarily long CPAP's. This would imply infinitely many CPAP-"k" for all "k". The middle prime in a CPAP-3 is called a balanced prime. The largest known as of 2022 has 15004 digits. The first known CPAP-10 was found in 1998 by Manfred Toplic in the distributed computing project CP10 which was organized by Harvey Dubner, Tony Forbes, Nik Lygeros, Michel Mizony and Paul Zimmermann. This CPAP-10 has the smallest possible common difference, 7# = 210. The only other known CPAP-10 as of 2018 was found by the same people in 2008. If a CPAP-11 exists then it must have a common difference which is a multiple of 11# = 2310. The difference between the first and last of the 11 primes would therefore be a multiple of 23100. The requirement for at least 23090 composite numbers between the 11 primes makes it appear extremely hard to find a CPAP-11. Dubner and Zimmermann estimate it would be at least 10¹² times harder than a CPAP-10. Minimal consecutive primes in AP. 
The first occurrence of a CPAP-"k" is only known for "k" ≤ 6 (sequence in the OEIS). Largest known consecutive primes in AP. The table shows the largest known case of "k" consecutive primes in arithmetic progression, for "k" = 3 to 10. "x""d" is a "d"-digit number used in one of the above records to ensure a small factor in unusually many of the required composites between the primes.<br> Notes. <templatestyles src="Reflist/styles.css" />
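A brute-force Python sketch (purely illustrative, nothing like the large distributed searches described above) that lists small AP-"k"s and checks the primorial-divisibility property from the Properties section:

```python
def is_prime(n):
    """Trial division, good enough for this small illustration."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def find_ap_k(k, first_term_limit, diff_limit):
    """All AP-k's written as a*n + b for n = 0..k-1, with b <= first_term_limit, a <= diff_limit."""
    found = []
    for b in range(2, first_term_limit + 1):
        if not is_prime(b):
            continue
        for a in range(2, diff_limit + 1, 2):   # for k >= 3 the difference must be even
            if all(is_prime(b + a * n) for n in range(k)):
                found.append((b, a))
    return found

# The example from the lead, 3 + 4n, appears as (b, a) = (3, 4):
print(find_ap_k(3, first_term_limit=3, diff_limit=10))

# Every AP-5 either begins with the prime 5 (difference a multiple of 4# = 6)
# or has a common difference divisible by 5# = 30, as stated above.
for b, a in find_ap_k(5, first_term_limit=1000, diff_limit=300):
    assert (b == 5 and a % 6 == 0) or a % 30 == 0
print("primorial property holds for all AP-5 found")
```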
[ { "math_id": 0, "text": "a_n = 3 + 4n" }, { "math_id": 1, "text": "0 \\le n \\le 2" }, { "math_id": 2, "text": "an + b" } ]
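A companion sketch (again illustrative only) that lists the first few CPAP-3s, i.e. three consecutive primes in arithmetic progression, whose middle members are the balanced primes mentioned above:

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, flag in enumerate(sieve) if flag]

ps = primes_up_to(1000)
cpap3 = [(ps[i], ps[i + 1], ps[i + 2])
         for i in range(len(ps) - 2)
         if ps[i + 1] - ps[i] == ps[i + 2] - ps[i + 1]]
print(cpap3[:5])   # [(3, 5, 7), (47, 53, 59), (151, 157, 163), ...]
```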
https://en.wikipedia.org/wiki?curid=11740178
1174047
Rock magnetism
The study of magnetism in rocks Rock magnetism is the study of the magnetic properties of rocks, sediments and soils. The field arose out of the need in paleomagnetism to understand how rocks record the Earth's magnetic field. This remanence is carried by minerals, particularly certain strongly magnetic minerals like magnetite (the main source of magnetism in lodestone). An understanding of remanence helps paleomagnetists to develop methods for measuring the ancient magnetic field and correct for effects like sediment compaction and metamorphism. Rock magnetic methods are used to get a more detailed picture of the source of the distinctive striped pattern in marine magnetic anomalies that provides important information on plate tectonics. They are also used to interpret terrestrial magnetic anomalies in magnetic surveys as well as the strong crustal magnetism on Mars. Strongly magnetic minerals have properties that depend on the size, shape, defect structure and concentration of the minerals in a rock. Rock magnetism provides non-destructive methods for analyzing these minerals such as magnetic hysteresis measurements, temperature-dependent remanence measurements, Mössbauer spectroscopy, ferromagnetic resonance and so on. With such methods, rock magnetists can measure the effects of past climate change and human impacts on the mineralogy (see environmental magnetism). In sediments, a lot of the magnetic remanence is carried by minerals that were created by magnetotactic bacteria, so rock magnetists have made significant contributions to biomagnetism. History. Until the 20th century, the study of the Earth's field (geomagnetism and paleomagnetism) and of magnetic materials (especially ferromagnetism) developed separately. Rock magnetism had its start when scientists brought these two fields together in the laboratory. Koenigsberger (1938), Thellier (1938) and Nagata (1943) investigated the origin of remanence in igneous rocks. By heating rocks and archeological materials to high temperatures in a magnetic field, they gave the materials a thermoremanent magnetization (TRM), and they investigated the properties of this magnetization. Thellier developed a series of conditions (the Thellier laws) that, if fulfilled, would allow the intensity of the ancient magnetic field to be determined using the Thellier–Thellier method. In 1949, Louis Néel developed a theory that explained these observations, showed that the Thellier laws were satisfied by certain kinds of single-domain magnets, and introduced the concept of blocking of TRM. When paleomagnetic work in the 1950s lent support to the theory of continental drift, skeptics were quick to question whether rocks could carry a stable remanence for geological ages. Rock magnetists were able to show that rocks could have more than one component of remanence, some soft (easily removed) and some very stable. To get at the stable part, they took to "cleaning" samples by heating them or exposing them to an alternating field. However, later events, particularly the recognition that many North American rocks had been pervasively remagnetized in the Paleozoic, showed that a single cleaning step was inadequate, and paleomagnetists began to routinely use stepwise demagnetization to strip away the remanence in small bits. Fundamentals. Types of magnetic order. The contribution of a mineral to the total magnetism of a rock depends strongly on the type of magnetic order or disorder. 
Magnetically disordered minerals (diamagnets and paramagnets) contribute a weak magnetism and have no remanence. The more important minerals for rock magnetism are the minerals that can be magnetically ordered, at least at some temperatures. These are the ferromagnets, ferrimagnets and certain kinds of antiferromagnets. These minerals have a much stronger response to the field and can have a remanence. Diamagnetism. Diamagnetism is a magnetic response shared by all substances. In response to an applied magnetic field, electrons precess (see Larmor precession), and by Lenz's law they act to shield the interior of a body from the magnetic field. Thus, the moment produced is in the opposite direction to the field and the susceptibility is negative. This effect is weak but independent of temperature. A substance whose only magnetic response is diamagnetism is called a diamagnet. Paramagnetism. Paramagnetism is a weak positive response to a magnetic field due to rotation of electron spins. Paramagnetism occurs in certain kinds of iron-bearing minerals because the iron atoms contain unpaired electrons in one of their shells (see Hund's rules). Some are paramagnetic down to absolute zero and their susceptibility is inversely proportional to the temperature (see Curie's law); others are magnetically ordered below a critical temperature and the susceptibility increases as it approaches that temperature (see Curie–Weiss law). Ferromagnetism. Collectively, strongly magnetic materials are often referred to as ferromagnets. However, this magnetism can arise as the result of more than one kind of magnetic order. In the strict sense, ferromagnetism refers to magnetic ordering where neighboring electron spins are aligned by the exchange interaction. The classic ferromagnet is iron. Below a critical temperature called the Curie temperature, ferromagnets have a spontaneous magnetization and there is hysteresis in their response to a changing magnetic field. Most importantly for rock magnetism, they have remanence, so they can record the Earth's field. Iron does not occur widely in its pure form. It is usually incorporated into iron oxides, oxyhydroxides and sulfides. In these compounds, the iron atoms are not close enough for direct exchange, so they are coupled by indirect exchange or superexchange. The result is that the crystal lattice is divided into two or more sublattices with different moments. Ferrimagnetism. Ferrimagnets have two sublattices with opposing moments. One sublattice has a larger moment, so there is a net unbalance. Magnetite, the most important of the magnetic minerals, is a ferrimagnet. Ferrimagnets often behave like ferromagnets, but the temperature dependence of their spontaneous magnetization can be quite different. Louis Néel identified four types of temperature dependence, one of which involves a reversal of the magnetization. This phenomenon played a role in controversies over marine magnetic anomalies. Antiferromagnetism. Antiferromagnets, like ferrimagnets, have two sublattices with opposing moments, but now the moments are equal in magnitude. If the moments are exactly opposed, the magnet has no remanence. However, the moments can be tilted (spin canting), resulting in a moment nearly at right angles to the moments of the sublattices. Hematite has this kind of magnetism. Types of remanence. Magnetic remanence is often identified with a particular kind of remanence that is obtained after exposing a magnet to a field at room temperature. 
However, the Earth's field is not large, and this kind of remanence would be weak and easily overwritten by later fields. A central part of rock magnetism is the study of magnetic remanence, both as natural remanent magnetization (NRM) in rocks obtained from the field and remanence induced in the laboratory. Below are listed the important natural remanences and some artificially induced kinds. Thermoremanent magnetization (TRM). When an igneous rock cools, it acquires a "thermoremanent magnetization (TRM)" from the Earth's field. TRM can be much larger than it would be if exposed to the same field at room temperature (see isothermal remanence). This remanence can also be very stable, lasting without significant change for millions of years. TRM is the main reason that paleomagnetists are able to deduce the direction and magnitude of the ancient Earth's field. If a rock is later re-heated (as a result of burial, for example), part or all of the TRM can be replaced by a new remanence. If it is only part of the remanence, it is known as "partial thermoremanent magnetization (pTRM)". Because numerous experiments have been done modeling different ways of acquiring remanence, pTRM can have other meanings. For example, it can also be acquired in the laboratory by cooling in zero field to a temperature formula_0 (below the Curie temperature), applying a magnetic field and cooling to a temperature formula_1, then cooling the rest of the way to room temperature in zero field. The standard model for TRM is as follows. When a mineral such as magnetite cools below the Curie temperature, it becomes ferromagnetic but is not immediately capable of carrying a remanence. Instead, it is superparamagnetic, responding reversibly to changes in the magnetic field. For remanence to be possible there must be a strong enough magnetic anisotropy to keep the magnetization near a stable state; otherwise, thermal fluctuations make the magnetic moment wander randomly. As the rock continues to cool, there is a critical temperature at which the magnetic anisotropy becomes large enough to keep the moment from wandering: this temperature is called the "blocking temperature" and referred to by the symbol formula_2. The magnetization remains in the same state as the rock is cooled to room temperature and becomes a thermoremanent magnetization. Chemical (or crystallization) remanent magnetization (CRM). Magnetic grains may precipitate from a circulating solution, or be formed during chemical reactions, and may record the direction of the magnetic field at the time of mineral formation. The field is said to be recorded by "chemical remanent magnetization (CRM)". The mineral recording the field commonly is hematite, another iron oxide. Redbeds, clastic sedimentary rocks (such as sandstones) that are red primarily because of hematite formation during or after sedimentary diagenesis, may have useful CRM signatures, and magnetostratigraphy can be based on such signatures. Depositional remanent magnetization (DRM). Magnetic grains in sediments may align with the magnetic field during or soon after deposition; this is known as detrital remanent magnetization (DRM). If the magnetization is acquired as the grains are deposited, the result is a depositional detrital remanent magnetization (dDRM); if it is acquired soon after deposition, it is a "post-depositional detrital remanent magnetization (pDRM)". Viscous remanent magnetization. 
"Viscous remanent magnetization (VRM)", also known as viscous magnetization, is remanence that is acquired by ferromagnetic minerals by sitting in a magnetic field for some time. The natural remanent magnetization of an igneous rock can be altered by this process. To remove this component, some form of stepwise demagnetization must be used. Applications of rock magnetism. <templatestyles src="Div col/styles.css"/> Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "T_1" }, { "math_id": 1, "text": "T_2" }, { "math_id": 2, "text": "T_B" } ]
https://en.wikipedia.org/wiki?curid=1174047
11742
Finite set
Mathematical set containing a finite number of elements In mathematics, particularly set theory, a finite set is a set that has a finite number of elements. Informally, a finite set is a set which one could in principle count and finish counting. For example, <templatestyles src="Block indent/styles.css"/>formula_0 is a finite set with five elements. The number of elements of a finite set is a natural number (possibly zero) and is called the "cardinality (or the cardinal number)" of the set. A set that is not a finite set is called an "infinite set". For example, the set of all positive integers is infinite: <templatestyles src="Block indent/styles.css"/>formula_1 Finite sets are particularly important in combinatorics, the mathematical study of counting. Many arguments involving finite sets rely on the pigeonhole principle, which states that there cannot exist an injective function from a larger finite set to a smaller finite set. Definition and terminology. Formally, a set formula_2 is called finite if there exists a bijection <templatestyles src="Block indent/styles.css"/>formula_3 for some natural number formula_4 (natural numbers are defined as sets in Zermelo-Fraenkel set theory). The number formula_4 is the set's cardinality, denoted as formula_5. If a set is finite, its elements may be written — in many ways — in a sequence: <templatestyles src="Block indent/styles.css"/>formula_6 In combinatorics, a finite set with formula_4 elements is sometimes called an "formula_4-set" and a subset with formula_7 elements is called a "formula_7-subset". For example, the set formula_8 is a 3-set – a finite set with three elements – and formula_9 is a 2-subset of it. Basic properties. Any proper subset of a finite set formula_2 is finite and has fewer elements than "S" itself. As a consequence, there cannot exist a bijection between a finite set "S" and a proper subset of "S". Any set with this property is called Dedekind-finite. Using the standard ZFC axioms for set theory, every Dedekind-finite set is also finite, but this implication cannot be proved in ZF (Zermelo–Fraenkel axioms without the axiom of choice) alone. The axiom of countable choice, a weak version of the axiom of choice, is sufficient to prove this equivalence. Any injective function between two finite sets of the same cardinality is also a surjective function (a surjection). Similarly, any surjection between two finite sets of the same cardinality is also an injection. The union of two finite sets is finite, with <templatestyles src="Block indent/styles.css"/>formula_10 In fact, by the inclusion–exclusion principle: <templatestyles src="Block indent/styles.css"/>formula_11 More generally, the union of any finite number of finite sets is finite. The Cartesian product of finite sets is also finite, with: <templatestyles src="Block indent/styles.css"/>formula_12 Similarly, the Cartesian product of finitely many finite sets is finite. A finite set with formula_4 elements has formula_13 distinct subsets. That is, the power set formula_14 of a finite set "S" is finite, with cardinality formula_15. Any subset of a finite set is finite. The set of values of a function when applied to elements of a finite set is finite. All finite sets are countable, but not all countable sets are finite. (Some authors, however, use "countable" to mean "countably infinite", so do not consider finite sets to be countable.) The free semilattice over a finite set is the set of its non-empty subsets, with the join operation being given by set union. 
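The counting facts above (inclusion–exclusion for the union, the product rule for the Cartesian product, and the formula_13 count of subsets) are easy to confirm on concrete examples; a small Python check using the five-element set from the lead and an arbitrary second set:

```python
from itertools import product, chain, combinations

S = {2, 4, 6, 8, 10}          # the five-element example from the lead
T = {1, 2, 3}

assert len(S | T) == len(S) + len(T) - len(S & T)   # inclusion-exclusion principle
assert len(S | T) <= len(S) + len(T)                # subadditivity of the union
assert len(set(product(S, T))) == len(S) * len(T)   # |S x T| = |S| * |T|

def power_set(X):
    """All subsets of X, as tuples."""
    xs = list(X)
    return list(chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1)))

assert len(power_set(S)) == 2 ** len(S)             # a finite set has 2^n subsets
```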
Necessary and sufficient conditions for finiteness. In Zermelo–Fraenkel set theory without the axiom of choice (ZF), the following conditions are all equivalent: If the axiom of choice is also assumed (the axiom of countable choice is sufficient), then the following conditions are all equivalent: Other concepts of finiteness. In ZF set theory without the axiom of choice, the following concepts of finiteness for a set formula_2 are distinct. They are arranged in strictly decreasing order of strength, i.e. if a set formula_2 meets a criterion in the list then it meets all of the following criteria. In the absence of the axiom of choice the reverse implications are all unprovable, but if the axiom of choice is assumed then all of these concepts are equivalent. (Note that none of these definitions need the set of finite ordinal numbers to be defined first; they are all pure "set-theoretic" definitions in terms of the equality and membership relations, not involving ω.) The forward implications (from strong to weak) are theorems within ZF. Counter-examples to the reverse implications (from weak to strong) in ZF with urelements are found using model theory. Most of these finiteness definitions and their names are attributed to by . However, definitions I, II, III, IV and V were presented in , together with proofs (or references to proofs) for the forward implications. At that time, model theory was not sufficiently advanced to find the counter-examples. Each of the properties I-finite thru IV-finite is a notion of smallness in the sense that any subset of a set with such a property will also have the property. This is not true for V-finite thru VII-finite because they may have countably infinite subsets. Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\displaystyle \\{2,4,6,8,10\\}" }, { "math_id": 1, "text": "\\displaystyle \\{1,2,3,\\ldots\\}" }, { "math_id": 2, "text": "S" }, { "math_id": 3, "text": "\\displaystyle f\\colon S\\to n" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "|S|" }, { "math_id": 6, "text": "\\displaystyle x_1,x_2,\\ldots,x_n \\quad (x_i \\in S, \\ 1 \\le i \\le n)." }, { "math_id": 7, "text": "k" }, { "math_id": 8, "text": "\\{5,6,7\\}" }, { "math_id": 9, "text": "\\{6,7\\}" }, { "math_id": 10, "text": "\\displaystyle |S \\cup T| \\le |S| + |T|." }, { "math_id": 11, "text": "\\displaystyle |S \\cup T| = |S| + |T| - |S\\cap T|." }, { "math_id": 12, "text": "\\displaystyle |S \\times T| = |S|\\times|T|." }, { "math_id": 13, "text": "2^n" }, { "math_id": 14, "text": "\\wp(S)" }, { "math_id": 15, "text": "2^{|S|}" }, { "math_id": 16, "text": "\\wp\\bigl(\\wp(S)\\bigr)" }, { "math_id": 17, "text": "\\subseteq" }, { "math_id": 18, "text": "|S|=0" }, { "math_id": 19, "text": "2\\cdot|S|>|S|" }, { "math_id": 20, "text": "|S|=1" }, { "math_id": 21, "text": "|S|^2>|S|" } ]
https://en.wikipedia.org/wiki?curid=11742
11743104
Infinite alleles model
The infinite alleles model is a mathematical model for calculating genetic mutations. The Japanese geneticist Motoo Kimura and American geneticist James F. Crow (1964) introduced the "infinite alleles model", an attempt to determine for a finite diploid population what proportion of loci would be homozygous. This was, in part, motivated by assertions by other geneticists that more than 50 percent of "Drosophila" loci were heterozygous, a claim they initially doubted. In order to answer this question they assumed first, that there were a large enough number of alleles so that any mutation would lead to a different allele (that is the probability of back mutation to the original allele would be low enough to be negligible); and second, that the mutations would result in a number of different outcomes from neutral to . They determined that in the neutral case, the probability that an individual would be homozygous, "F", was: formula_0 where "u" is the mutation rate, and "N"e is the effective population size. The effective number of alleles "n" maintained in a population is defined as the inverse of the homozygosity, that is formula_1 which is a lower bound for the actual number of alleles in the population. If the effective population is large, then a large number of alleles can be maintained. However, this result only holds for the "neutral" case, and is not necessarily true for the case when some alleles are subject to selection, i.e. more or less fit than others, for example when the fittest genotype is a heterozygote (a situation often referred to as overdominance or heterosis). In the case of overdominance, Mendel's second law (the law of segregation) necessarily results in the production of homozygotes (which are, by definition in this case, less fit), which means that the population will always harbor a number of less fit individuals, leading to a decrease in the average fitness of the population. This is sometimes referred to as "genetic load"; in this case it is a special kind of load known as "segregational load". Crow and Kimura showed that, under equilibrium conditions, for a given strength of selection ("s"), there would be an upper limit to the number of fitter alleles (polymorphisms) that a population could harbor for a particular locus. Beyond this number of alleles, the selective advantage of the presence of those alleles in heterozygous genotypes would be cancelled out by the continual generation of less fit homozygous genotypes. These results became important in the formation of the neutral theory, because neutral (or nearly neutral) alleles create no such segregational load, and allow for the accumulation of a great deal of polymorphism. When Richard Lewontin and J. Hubby published their groundbreaking results in 1966, which showed high levels of genetic variation in Drosophila via protein electrophoresis, the theoretical results from the infinite alleles model were used by Kimura and others to support the idea that this variation would have to be neutral (or else it would result in an excess segregational load). References. <templatestyles src="Reflist/styles.css" />
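The neutral prediction formula_0 can be illustrated with a small simulation. The sketch below uses an idealized Wright–Fisher model (non-overlapping generations, 2"N" gene copies, infinite-alleles mutation at rate "u" per copy per generation, effective size taken equal to the census size "N"; all parameter values are arbitrary) and compares the long-run homozygosity with the formula:

```python
import random

def simulated_homozygosity(N=200, u=0.001, generations=10000, seed=1):
    """Neutral Wright-Fisher with infinite-alleles mutation; returns mean homozygosity."""
    random.seed(seed)
    pop = [0] * (2 * N)               # allele labels; every mutation creates a new label
    next_label = 1
    samples = []
    for g in range(generations):
        pop = [random.choice(pop) for _ in range(2 * N)]   # random sampling of parents
        for i in range(2 * N):
            if random.random() < u:   # infinite alleles: each mutation is a brand-new allele
                pop[i] = next_label
                next_label += 1
        if g > generations // 2:      # crude burn-in, then record sum of squared frequencies
            counts = {}
            for allele in pop:
                counts[allele] = counts.get(allele, 0) + 1
            samples.append(sum(c * c for c in counts.values()) / (2 * N) ** 2)
    return sum(samples) / len(samples)

N, u = 200, 0.001
print(simulated_homozygosity(N, u), 1 / (4 * N * u + 1))   # both roughly 0.55-0.56
```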
[ { "math_id": 0, "text": "F = {1 \\over 4 N_e u + 1}" }, { "math_id": 1, "text": "n = {1 \\over F} = 4N_e u + 1" } ]
https://en.wikipedia.org/wiki?curid=11743104
11743693
Fractional flow reserve
Fractional flow reserve (FFR) is a diagnostic technique used in coronary catheterization. FFR measures pressure differences across a coronary artery stenosis (narrowing, usually due to atherosclerosis) to determine the likelihood that the stenosis impedes oxygen delivery to the heart muscle (myocardial ischemia). Fractional flow reserve is defined as the pressure after (distal to) a stenosis relative to the pressure before the stenosis. The result is an absolute number; an FFR of 0.80 means that a given stenosis causes a 20% drop in blood pressure. In other words, FFR expresses the maximal flow down a vessel in the presence of a stenosis compared to the maximal flow in the hypothetical absence of the stenosis. Procedure. During coronary catheterization, a catheter is inserted into the femoral (groin) or radial arteries (wrist) using a sheath and guidewire. FFR uses a small sensor on the tip of the wire (commonly a transducer) to measure pressure, temperature and flow to determine the exact severity of the lesion. This is done during maximal blood flow (hyperemia), which can be induced by injecting products such as adenosine or papaverine. A pullback of the pressure wire is performed, and pressures are recorded across the vessel. When interpreting FFR measurements, higher values indicate a non-significant stenosis, whereas lower values indicate a significant lesion. There is no absolute cut-off point at which an FFR measurement is considered abnormal. However, reviews of clinical trials show that a cut-off range between 0.75 and 0.80 has been used when determining significance. Equation. Fractional flow reserve (FFR) is the ratio of maximum blood flow distal to a stenotic lesion to normal maximum flow in the same vessel. It is calculated using the pressure ratio formula_0 where formula_1 is the pressure distal to the lesion, and formula_2 is the pressure proximal to the lesion. Rationale. The decision to perform a percutaneous coronary intervention (PCI) is usually based on angiographic results alone. Angiography can be used for the visual evaluation of the inner diameter of a vessel. In ischemic heart disease, deciding which narrowing is the culprit lesion is not always clear-cut. Fractional flow reserve can provide a functional evaluation by measuring the pressure decline caused by a vessel narrowing. Advantages and disadvantages. FFR has certain advantages over other techniques to evaluate narrowed coronary arteries, such as coronary angiography, intravascular ultrasound or CT coronary angiography. For example, FFR takes into account collateral flow, which can render an anatomical blockage functionally unimportant. Also, standard angiography can underestimate or overestimate narrowing, because it only visualizes contrast inside a vessel. Finally, when compared to other indices of vessel narrowing, FFR seems to be less vulnerable to variability between patients. Other techniques can also provide information which FFR cannot. Intravascular ultrasound, for example, can provide information on plaque vulnerability, whereas FFR measures are only determined by plaque thickness. There are newly developed technologies that can assess both plaque vulnerability and FFR from CT by measuring the vasodilatory capacity of the arterial wall. FFR allows real-time estimation of the effects of a narrowed vessel, and allows for simultaneous treatment with balloon dilatation and stenting. 
On the other hand, FFR is an invasive procedure for which non-invasive (less drastic) alternatives exist, such as cardiac stress testing. In this test, physical exercise or intravenous medication (adenosine/dobutamine) is used to increase the workload and oxygen demand of the heart muscle, and ischemia is detected using ECG changes or nuclear imaging. DEFER study. In the DEFER study, fractional flow reserve was used to determine the need for stenting in patients with intermediate single vessel disease. In stenosis patients with an FFR of less than 0.75, outcomes were significantly worse. In patients with an FFR of 0.75 or more, however, stenting did not influence outcomes. FAME study. The "Fractional Flow Reserve versus Angiography for Multivessel Evaluation" (FAME) study evaluated the role of FFR in patients with multivessel coronary artery disease. In 20 centers in Europe and the United States, 1005 patients undergoing percutaneous coronary intervention with drug eluting stent implantation were randomized to intervention based on angiography or based on fractional flow reserve in addition to angiography. In the angiography arm of the study, all suspicious-looking lesions were stented. In the FFR arm, only angiographically suspicious lesions with an FFR of 0.80 or less were stented. In the patients whose care was guided by FFR, fewer stents were used (2.7±1.2 and 1.9±1.3, respectively). After one year, the primary endpoint of death, nonfatal myocardial infarction, and repeat revascularization was lower in the FFR group (13.2% versus 18.3%), largely attributable to fewer stenting procedures and their associated complications. There also was a non-significant higher number of patients with residual angina (81% versus 78%). In the FFR group, hospital stay was slightly shorter (3.4 vs 3.7 days) and procedural costs were less ($5,332 vs $6,007). FFR did not prolong the procedure (around 70 minutes in both groups). References. <templatestyles src="Reflist/styles.css" />
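As a purely arithmetical illustration of the definition formula_0 (the pressure values are invented and this is not clinical guidance), with the 0.75-0.80 cut-off range mentioned above treated as a grey zone:

```python
def fractional_flow_reserve(p_distal_mmHg, p_proximal_mmHg):
    """FFR = distal pressure / proximal pressure across the stenosis."""
    return p_distal_mmHg / p_proximal_mmHg

def interpret_ffr(ffr, lower=0.75, upper=0.80):
    """Illustrative classification using the cut-off range discussed above."""
    if ffr <= lower:
        return "haemodynamically significant stenosis"
    if ffr > upper:
        return "non-significant stenosis"
    return "grey zone (0.75-0.80): clinical judgement required"

ffr = fractional_flow_reserve(p_distal_mmHg=71, p_proximal_mmHg=89)
print(round(ffr, 2), interpret_ffr(ffr))   # 0.8, grey zone
```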
[ { "math_id": 0, "text": "\n FFR = \\frac{p_d}{p_a} \n" }, { "math_id": 1, "text": "p_d" }, { "math_id": 2, "text": "p_a" } ]
https://en.wikipedia.org/wiki?curid=11743693
11744449
Scattering from rough surfaces
Surface roughness scattering or interface roughness scattering is the elastic scattering of particles against a rough solid surface or an imperfect interface between two different materials. This effect has been observed in classical systems, such as microparticle scattering, as well as in quantum systems, where it arises in electronic devices, such as field effect transistors and quantum cascade lasers. Classical description. In the classical mechanics framework, a rough surface, such as a machined metal surface, randomizes the probability distribution function governing the incoming particles, leading to net momentum loss of the particle flux. Quantum description. In the quantum mechanical framework, this scattering is most noticeable in confined systems, in which the energies for charge carriers are determined by the locations of interfaces. An example of such a system is a quantum well, which may be constructed from a sandwich of different layers of semiconductor. Variations in the thickness of these layers therefore cause the energy of particles to be dependent on their in-plane location in the layer. Classification of the roughness at a given position, formula_0, is complex, but as in the classical models, it has been modeled as a Gaussian distribution by some researchers. This assumption may be formulated in terms of the ensemble average for some given characteristic height, formula_1, and correlation length, formula_2, such that formula_3 Types of scattering. Selective scattering: In selective scattering, the scattering depends upon the wavelength of light. Mie scattering: Mie theory can describe how electromagnetic waves interact with homogeneous spherical particles. However, a theory for homogeneous spheres will completely fail to predict polarization effects. When the size of the molecules is greater than the wavelength of light, the result is a non-uniform scattering of light. Lambertian scattering: This type of scattering occurs when a surface has microscopic irregularities that scatter light perfectly uniformly in all directions, causing it to appear equally bright from all viewing angles. Subsurface scattering: This type of scattering occurs when light scatters within a material before exiting the surface at a different point. Isotropic crystal scattering (aka powder diffraction): This type of scattering occurs when every crystalline orientation is represented equally in a powdered sample. Powder X-ray diffraction (PXRD) operates under the assumption that the sample is randomly arranged such that each plane will be represented in the signal. Notes. <templatestyles src="Reflist/styles.css" />
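The Gaussian correlation model formula_3 can be visualized by generating a synthetic rough profile. The sketch below (illustrative; it builds a one-dimensional profile by smoothing white noise with a Gaussian kernel of width formula_2/2, which makes the resulting height autocorrelation approximately of the stated Gaussian form) checks the autocorrelation at a lag of one correlation length:

```python
import numpy as np

def gaussian_rough_surface(n=4096, delta=1.0, corr_len=20.0, seed=0):
    """1-D profile whose autocorrelation is roughly delta^2 * exp(-d^2 / corr_len^2)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n)
    x = np.arange(n) - n // 2
    kernel = np.exp(-x**2 / (2.0 * (corr_len / 2.0) ** 2))   # Gaussian kernel, std = corr_len/2
    h = np.real(np.fft.ifft(np.fft.fft(noise) * np.fft.fft(np.fft.ifftshift(kernel))))
    return delta * (h - h.mean()) / h.std()                  # rms roughness set to delta

h = gaussian_rough_surface()
lag = 20                                   # one correlation length
acf = np.mean(h[:-lag] * h[lag:])
print(acf)                                 # roughly exp(-1) ~ 0.37 for delta = 1
```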
[ { "math_id": 0, "text": "\\Delta_z(\\mathbf{r})" }, { "math_id": 1, "text": "\\Delta" }, { "math_id": 2, "text": "\\Lambda" }, { "math_id": 3, "text": "\\langle\\Delta_z(\\mathbf{r})\\Delta_z(\\mathbf{r'})\\rangle = \\Delta^2\\exp\\left(-\\frac{|\\mathbf{r}-\\mathbf{r'}|^2}{\\Lambda^2}\\right)" } ]
https://en.wikipedia.org/wiki?curid=11744449
11746064
San Marino Scale
Proposed SETI measurement The San Marino Scale is a suggested scale for assessing risks associated with deliberate transmissions from Earth aimed at possible extraterrestrial intelligent life. The scale evaluates the significance of transmissions from Earth as a function of signal intensity and information content. The scale was suggested by Iván Almár at a conference in San Marino in 2005. The radio output of Jupiter, Saturn and Neptune is not considered in the model. The San Marino Scale was subsequently adopted by the SETI Permanent Study Group of the International Academy of Astronautics at its 2007 meeting in Hyderabad, India. Calculation. In the original presentation given by Almár, the San Marino Index, SMI, of a given event is calculated as the sum of two terms. formula_0 The first term, I, is based on the intensity of the signal relative to the background noise in the same frequency band. This term is logarithmic, and is calculated as: formula_1 For example, a signal which is 100 times more intense than the background noise at the same frequency and bandwidth would have an I value of two. The second term, C, is more subjective and relates to the content, aiming, timing, and character of the signal. A C rating of one is something like a stray radar pulse, lacking any information content and randomly directed. A C rating of five is a deliberate reply to an extraterrestrial signal. References. <templatestyles src="Reflist/styles.css" />
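A toy calculation of the index as described above (formula_0 with formula_1); the signal and background values are invented for illustration:

```python
import math

def san_marino_index(signal_intensity, background_intensity, character_term):
    """SMI = I + C, with I the base-10 log of the signal-to-background ratio."""
    i_term = math.log10(signal_intensity / background_intensity)
    return i_term + character_term

# A signal 100 times the background (I = 2) that is a deliberate reply (C = 5):
print(san_marino_index(100.0, 1.0, character_term=5))   # 7.0
```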
[ { "math_id": 0, "text": " SMI = I + C " }, { "math_id": 1, "text": " I = \\log_{10}\\left(\\frac{\\text{signal intensity}}{\\text{background intensity}}\\right)" } ]
https://en.wikipedia.org/wiki?curid=11746064
11748718
Superprocess
A formula_0-superprocess, formula_1, is, in probability theory, a stochastic process on formula_2 that is usually constructed as a special limit of near-critical branching diffusions. Informally, it can be seen as a branching process where each particle splits and dies at infinite rates, and evolves according to a diffusion equation, and we follow the rescaled population of particles, seen as a measure on formula_3. Scaling limit of a discrete branching process. Simplest setting. For any integer formula_4, consider a branching Brownian process formula_5 defined as follows: The notation formula_5 should be interpreted as: at each time formula_10, the number of particles in a set formula_11 is formula_12. In other words, formula_13 is a measure-valued random process. Now, define a renormalized process: formula_14 Then the finite-dimensional distributions of formula_15 converge as formula_16 to those of a measure-valued random process formula_1, which is called a formula_17-superprocess, with initial value formula_18, where formula_19 and where formula_20 is a Brownian motion (specifically, formula_21 where formula_22 is a measurable space, formula_23 is a filtration, and formula_24 under formula_25 has the law of a Brownian motion started at formula_26). As will be clarified in the next section, formula_27 encodes an underlying branching mechanism, and formula_20 encodes the motion of the particles. Here, since formula_20 is a Brownian motion, the resulting object is known as a super-Brownian motion. Generalization to (ξ, ϕ)-superprocesses. Our discrete branching system formula_5 can be much more sophisticated, leading to a variety of superprocesses: Add the following requirement that the expected number of offspring is bounded: formula_36 Define formula_14 as above, and define the following crucial function: formula_37 Add the requirement, for all formula_38, that formula_39 is Lipschitz continuous with respect to formula_40 uniformly on formula_41, and that formula_42 converges to some function formula_27 as formula_16 uniformly on formula_41. Provided all of these conditions hold, the finite-dimensional distributions of formula_43 converge to those of a measure-valued random process formula_1 which is called a formula_17-superprocess, with initial value formula_18. Commentary on ϕ. Provided formula_44, that is, the number of branching events becomes infinite, the requirement that formula_42 converges implies that, taking a Taylor expansion of formula_45, the expected number of offspring is close to 1, and therefore that the process is near-critical. Generalization to Dawson-Watanabe superprocesses. The branching particle system formula_5 can be further generalized as follows: Then, under suitable hypotheses, the finite-dimensional distributions of formula_43 converge to those of a measure-valued random process formula_1 which is called a Dawson-Watanabe superprocess, with initial value formula_18. Properties. A superprocess has a number of properties. It is a Markov process, and its Markov kernel formula_54 verifies the branching property: formula_55 where formula_56 is the convolution. A special class of superprocesses are the formula_57-superprocesses, with formula_58. A formula_57-superprocess is defined on formula_59. 
Its "branching mechanism" is defined by its factorial moment generating function (the definition of a branching mechanism varies slightly among authors: some use the definition of formula_60 in the previous section, while others use the factorial moment generating function): formula_61 and the spatial motion of individual particles (denoted formula_62 in the previous section) is given by the formula_63-symmetric stable process with infinitesimal generator formula_64. The formula_65 case means formula_62 is a standard Brownian motion and the formula_66-superprocess is called the super-Brownian motion. One of the most important properties of superprocesses is that they are intimately connected with certain nonlinear partial differential equations. The simplest such equation is formula_67 When the spatial motion (migration) is a diffusion process, one talks about a superdiffusion. The connection between superdiffusions and nonlinear PDEs is similar to the one between diffusions and linear PDEs.
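The scaling-limit construction described above can be mimicked with a crude particle simulation. The sketch below is a generic discrete approximation under standard assumptions (N initial particles of mass 1/N at the origin, Brownian displacements over time steps of length 1/N, and critical binary branching in which each particle either dies or splits in two with probability 1/2 at every step); it is meant only to convey the flavour of the limit, not the exact particle system of the construction:

```python
import math, random

def branching_brownian(N=200, t_max=1.0, seed=0):
    """Rescaled critical branching Brownian particles; returns final positions."""
    random.seed(seed)
    dt = 1.0 / N                       # one potential branching event per particle per step
    particles = [0.0] * N              # start: N particles at the origin, each of mass 1/N
    for _ in range(int(t_max / dt)):
        # Brownian displacement over a time step of length dt
        particles = [x + math.sqrt(dt) * random.gauss(0.0, 1.0) for x in particles]
        # critical binary branching: die or split into two, each with probability 1/2
        offspring = []
        for x in particles:
            if random.random() < 0.5:
                offspring += [x, x]
        particles = offspring
        if not particles:              # the whole population may die out
            break
    return particles

N = 200
final = branching_brownian(N)
print(len(final) / N)   # total rescaled mass at time t_max; random, of order 1 (0 if extinct)
```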
[ { "math_id": 0, "text": " (\\xi,d,\\beta)" }, { "math_id": 1, "text": "X(t,dx)" }, { "math_id": 2, "text": "\\mathbb{R} \\times \\mathbb{R}^d" }, { "math_id": 3, "text": "\\mathbb{R}" }, { "math_id": 4, "text": "N\\geq 1" }, { "math_id": 5, "text": "Y^N(t,dx)" }, { "math_id": 6, "text": "t=0" }, { "math_id": 7, "text": "N" }, { "math_id": 8, "text": "\\mu" }, { "math_id": 9, "text": "1/2" }, { "math_id": 10, "text": "t" }, { "math_id": 11, "text": "A\\subset \\mathbb{R}" }, { "math_id": 12, "text": "Y^N(t,A)" }, { "math_id": 13, "text": "Y" }, { "math_id": 14, "text": "X^N(t,dx):=\\frac{1}{N}Y^N(t,dx)" }, { "math_id": 15, "text": "X^N" }, { "math_id": 16, "text": "N\\to +\\infty" }, { "math_id": 17, "text": "(\\xi,\\phi)" }, { "math_id": 18, "text": "X(0) = \\mu" }, { "math_id": 19, "text": "\\phi(z):= \\frac{z^2}{2}" }, { "math_id": 20, "text": "\\xi" }, { "math_id": 21, "text": "\\xi=(\\Omega,\\mathcal{F},\\mathcal{F}_t,\\xi_t,\\textbf{P}_x)" }, { "math_id": 22, "text": "(\\Omega,\\mathcal{F})" }, { "math_id": 23, "text": "(\\mathcal{F}_t)_{t\\geq 0}" }, { "math_id": 24, "text": "\\xi_t" }, { "math_id": 25, "text": "\\textbf{P}_x" }, { "math_id": 26, "text": "x" }, { "math_id": 27, "text": "\\phi" }, { "math_id": 28, "text": "E" }, { "math_id": 29, "text": "\\gamma_N" }, { "math_id": 30, "text": "n_{t,\\xi_t}" }, { "math_id": 31, "text": "n_{t,x}" }, { "math_id": 32, "text": "(n_{t,x})_{t,x}" }, { "math_id": 33, "text": "p_k(x)=\\mathbb{P}[n_{t,x}=k]" }, { "math_id": 34, "text": "g" }, { "math_id": 35, "text": "g(x,z):=\\sum\\limits_{k=0}^\\infty p_k(x)z^k" }, { "math_id": 36, "text": "\\sup\\limits_{x\\in E}\\mathbb{E}[n_{t,x}]<+\\infty" }, { "math_id": 37, "text": "\\phi_N(x,z):=N\\gamma_N \\left[g_N\\Big(x,1-\\frac{z}{N}\\Big)\\,-\\,\\Big(1-\\frac{z}{N}\\Big)\\right]" }, { "math_id": 38, "text": "a\\geq 0" }, { "math_id": 39, "text": "\\phi_N(x,z)" }, { "math_id": 40, "text": "z" }, { "math_id": 41, "text": "E\\times [0,a]" }, { "math_id": 42, "text": "\\phi_N" }, { "math_id": 43, "text": "X^N(t)" }, { "math_id": 44, "text": "\\lim_{N\\to+\\infty}\\gamma_N = +\\infty" }, { "math_id": 45, "text": "g_N" }, { "math_id": 46, "text": "[r,t)" }, { "math_id": 47, "text": "(\\xi_t)_{t\\geq 0}" }, { "math_id": 48, "text": "\\exp\\left\\{-\\int_r^t\\alpha_N(\\xi_s)K(ds)\\right\\}" }, { "math_id": 49, "text": "\\alpha_N" }, { "math_id": 50, "text": "K" }, { "math_id": 51, "text": "F_N(\\xi_{t-},d\\nu)" }, { "math_id": 52, "text": "\\nu(1)" }, { "math_id": 53, "text": "\\sup\\limits_{x\\in E}\\int \\nu(1)F_N(x,d\\nu)<+\\infty" }, { "math_id": 54, "text": "Q_t(\\mu,d\\nu)" }, { "math_id": 55, "text": "Q_t(\\mu+\\mu',\\cdot) = Q_t(\\mu,\\cdot)*Q_t(\\mu',\\cdot)" }, { "math_id": 56, "text": "*" }, { "math_id": 57, "text": " (\\alpha,d,\\beta)" }, { "math_id": 58, "text": " \\alpha\\in (0,2],d\\in \\N,\\beta \\in (0,1]" }, { "math_id": 59, "text": " \\R^d" }, { "math_id": 60, "text": " \\phi" }, { "math_id": 61, "text": " \\Phi(s) = \\frac{1}{1+\\beta}(1-s)^{1+\\beta}+s" }, { "math_id": 62, "text": " \\xi" }, { "math_id": 63, "text": "\\alpha" }, { "math_id": 64, "text": "\\Delta_{\\alpha}" }, { "math_id": 65, "text": "\\alpha = 2" }, { "math_id": 66, "text": "(2,d,1)" }, { "math_id": 67, "text": "\\Delta u-u^2=0\\ on\\ \\mathbb{R}^d." } ]
https://en.wikipedia.org/wiki?curid=11748718
1174919
SL (complexity)
In computational complexity theory, SL (Symmetric Logspace or Sym-L) is the complexity class of problems log-space reducible to USTCON ("undirected s-t connectivity"), which is the problem of determining whether there exists a path between two vertices in an undirected graph, otherwise described as the problem of determining whether two vertices are in the same connected component. This problem is also called the undirected reachability problem. It does not matter whether many-one reducibility or Turing reducibility is used. Although originally described in terms of symmetric Turing machines, that equivalent formulation is very complex, and the reducibility definition is what is used in practice. USTCON is a special case of STCON ("directed reachability"), the problem of determining whether a directed path between two vertices in a directed graph exists, which is complete for NL. Because USTCON is SL-complete, most advances that impact USTCON have also impacted SL. Thus they are connected, and discussed together. In October 2004 Omer Reingold showed that SL = L. Origin. SL was first defined in 1982 by Harry R. Lewis and Christos Papadimitriou, who were looking for a class in which to place USTCON, which until this time could, at best, be placed only in NL, despite seeming not to require nondeterminism. They defined the symmetric Turing machine, used it to define SL, showed that USTCON was complete for SL, and proved that formula_0 where L is the more well-known class of problems solvable by an ordinary deterministic Turing machine in logarithmic space, and NL is the class of problems solvable by nondeterministic Turing machines in logarithmic space. The result of Reingold, discussed later, shows that in fact, when limited to log space, the symmetric Turing machine is equivalent in power to the deterministic Turing machine. Complete problems. By definition, USTCON is complete for SL (all problems in SL reduce to it, including itself). Many more interesting complete problems were found, most by reducing directly or indirectly from USTCON, and a compendium of them was made by Àlvarez and Greenlaw. Many of the problems are graph theory problems on undirected graphs. Some of the simplest and most important SL-complete problems they describe include: The complements of all these problems are in SL as well, since, as we will see, SL is closed under complement. From the fact that L = SL, it follows that many more problems are SL-complete with respect to log-space reductions: every non-trivial problem in L or in SL is SL-complete; moreover, even if the reductions are in some smaller class than L, L-completeness is equivalent to SL-completeness. In this sense this class has become somewhat trivial. Important results. There are well-known classical algorithms such as depth-first search and breadth-first search which solve USTCON in linear time and space. Their existence, shown long before SL was defined, proves that SL is contained in P. It's also not difficult to show that USTCON, and so SL, is in NL, since we can just nondeterministically guess at each vertex which vertex to visit next in order to discover a path if one exists. The first nontrivial result for SL, however, was Savitch's theorem, proved in 1970, which provided an algorithm that solves USTCON in log2 "n" space. Unlike depth-first search, however, this algorithm is impractical for most applications because of its potentially superpolynomial running time. One consequence of this is that USTCON, and so SL, is in . 
(Actually, Savitch's theorem gives the stronger result that NL is in .) Although there were no (uniform) "deterministic" space improvements on Savitch's algorithm for 22 years, a highly practical probabilistic log-space algorithm was found in 1979 by Aleliunas et al.: simply start at one vertex and perform a random walk until you find the other one (then accept) or until | · |3 time has passed (then reject). False rejections are made with a small bounded probability that shrinks exponentially the longer the random walk is continued. This showed that SL is contained in RLP, the class of problems solvable in polynomial time and logarithmic space with probabilistic machines that reject incorrectly less than 1/3 of the time. By replacing the random walk by a universal traversal sequence, Aleliunas et al. also showed that SL is contained in L/poly, a non-uniform complexity class of the problems solvable deterministically in logarithmic space with polynomial advice. In 1989, Borodin et al. strengthened this result by showing that the complement of USTCON, determining whether two vertices are in different connected components, is also in RLP. This placed USTCON, and SL, in co-RLP and in the intersection of RLP and co-RLP, which is ZPLP, the class of problems which have log-space, expected polynomial-time, no-error randomized algorithms. In 1992, Nisan, Szemerédi, and Wigderson finally found a new deterministic algorithm to solve USTCON using only log1.5 "n" space. This was improved slightly, but there would be no more significant gains until Reingold. In 1995, Nisan and Ta-Shma showed the surprising result that SL is closed under complement, which at the time was believed by many to be false; that is, SL = co-SL. Equivalently, if a problem can be solved by reducing it to a graph and asking if two vertices are in the "same" component, it can also be solved by reducing it to another graph and asking if two vertices are in "different" components. However, Reingold's paper would later make this result redundant. One of the most important corollaries of SL = co-SL is that LSL = SL; that is, a deterministic, log-space machine with an oracle for SL can solve problems in SL (trivially) but cannot solve any other problems. This means it does not matter whether we use Turing reducibility or many-one reducibility to show a problem is in SL; they are equivalent. In 2004, a breakthrough paper by Omer Reingold showed that USTCON is in fact in L. This paper used expander graphs to guide the search through the input graph. Since USTCON is SL-complete, Reingold's result implies that SL = L, essentially eliminating the usefulness of consideration of SL as a separate class. A few weeks later, graduate student Vladimir Trifonov showed that USTCON could be solved deterministically using formula_4 space—a weaker result—using different techniques. There has not been substantial effort into turning Reingold's algorithm for USTCON into a practical formulation. It is explicit in his paper (and those leading up to it) that they are primarily concerned with asymptotics; as a result, the algorithm he describes would actually take formula_5 memory, and formula_6 time. This means that even for formula_7, the algorithm would require more memory than contained on all computers in the world (a kiloexaexaexabyte). Consequences of L = SL. The collapse of L and SL has a number of significant consequences. 
Most obviously, all SL-complete problems are now in L, and can be gainfully employed in the design of deterministic log-space and polylogarithmic-space algorithms. In particular, we have a new set of tools to use in log-space reductions. It is also now known that a problem is in L if and only if it is log-space reducible to USTCON.
[ { "math_id": 0, "text": "\\mathsf{L} \\subseteq \\mathsf{SL} \\subseteq \\mathsf{NL}" }, { "math_id": 1, "text": "x_i" }, { "math_id": 2, "text": "x_j" }, { "math_id": 3, "text": "(x_i,x_j)" }, { "math_id": 4, "text": "O\\text{(log } n \\text {log log } n)" }, { "math_id": 5, "text": "64^{32}\\,\\log N" }, { "math_id": 6, "text": "O(N^{64^{32}})" }, { "math_id": 7, "text": "N=2" } ]
https://en.wikipedia.org/wiki?curid=1174919
1175180
Mathematics of three-phase electric power
Mathematics and basic principles of three-phase electric power In electrical engineering, three-phase electric power systems have at least three conductors carrying alternating voltages that are offset in time by one-third of the period. A three-phase system may be arranged in delta (∆) or star (Y) (also denoted as wye in some areas, as symbolically it is similar to the letter 'Y'). A wye system allows the use of two different voltages from all three phases, such as a 230/400 V system which provides 230 V between the neutral (centre hub) and any one of the phases, and 400 V across any two phases. A delta system arrangement provides only one voltage, but it has a greater redundancy as it may continue to operate normally with one of the three supply windings offline, albeit at 57.7% of total capacity. Harmonic current in the neutral may become very large if nonlinear loads are connected. Definitions. In a star (wye) connected topology, with rotation sequence L1 - L2 - L3, the time-varying instantaneous voltages can be calculated for each phase A,C,B respectively by: formula_0 formula_1 formula_2 where: formula_3 is the peak voltage, formula_4 is the phase angle in radians formula_5 is the time in seconds formula_6 is the frequency in cycles per second and voltages L1-N, L2-N and L3-N are referenced to the star connection point. Diagrams. The below images demonstrate how a system of six wires delivering three phases from an alternator may be replaced by just three. A three-phase transformer is also shown. Balanced loads. Generally, in electric power systems, the loads are distributed as evenly as is practical among the phases. It is usual practice to discuss a balanced system first and then describe the effects of unbalanced systems as deviations from the elementary case. Constant power transfer. An important property of three-phase power is that the instantaneous power available to a resistive load, formula_7, is constant at all times. Indeed, let formula_8 To simplify the mathematics, we define a nondimensionalized power for intermediate calculations, formula_9 formula_10 Hence (substituting back): formula_11 Since we have eliminated formula_12 we can see that the total power does not vary with time. This is essential for keeping large generators and motors running smoothly. Notice also that using the root mean square voltage formula_13, the expression for formula_14 above takes the following more classic form: formula_15. The load need not be resistive for achieving a constant instantaneous power since, as long as it is balanced or the same for all phases, it may be written as formula_16 so that the peak current is formula_17 for all phases and the instantaneous currents are formula_18 formula_19 formula_20 Now the instantaneous powers in the phases are formula_21 formula_22 formula_23 Using angle subtraction formulae: formula_24 formula_25 formula_26 which add up for a total instantaneous power formula_27 Since the three terms enclosed in square brackets are a three-phase system, they add up to zero and the total power becomes formula_28 or formula_29 showing the above contention. Again, using the root mean square voltage formula_13, formula_14 can be written in the usual form formula_30. No neutral current. For the case of equal loads on each of three phases, no net current flows in the neutral. The neutral current is the inverted vector sum of the line currents. See Kirchhoff's circuit laws. 
formula_31 We define a non-dimensionalized current, formula_32: formula_33 Since we have shown that the neutral current is zero we can see that removing the neutral core will have no effect on the circuit, provided the system is balanced. Such connections are generally used only when the load on the three phases is part of the same piece of equipment (for example a three-phase motor), as otherwise switching loads and slight imbalances would cause large voltage fluctuations. Unbalanced systems. In practice, systems rarely have perfectly balanced loads, currents, voltages and impedances in all three phases. The analysis of unbalanced cases is greatly simplified by the use of the techniques of symmetrical components. An unbalanced system is analysed as the superposition of three balanced systems, each with the positive, negative or zero sequence of balanced voltages. When specifying wiring sizes in a three-phase system, we only need to know the magnitude of the phase and neutral currents. The neutral current can be determined by adding the three phase currents together as complex numbers and then converting from rectangular to polar co-ordinates. If the three-phase root mean square (RMS) currents are formula_34, formula_35, and formula_36, the neutral RMS current is: formula_37 which resolves to formula_38 The polar magnitude of this is the square root of the sum of the squares of the real and imaginary parts, which reduces to formula_39 Non-linear loads. With linear loads, the neutral only carries the current due to imbalance between the phases. Devices that utilize rectifier-capacitor front ends (such as switch-mode power supplies for computers, office equipment and the like) introduce third order harmonics. Third harmonic currents are in-phase on each of the supply phases and therefore will add together in the neutral which can cause the neutral current in a wye system to exceed the phase currents. Revolving magnetic field. Any polyphase system, by virtue of the time displacement of the currents in the phases, makes it possible to easily generate a magnetic field that revolves at the line frequency. Such a revolving magnetic field makes polyphase induction motors possible. Indeed, where induction motors must run on single-phase power (such as is usually distributed in homes), the motor must contain some mechanism to produce a revolving field, otherwise the motor cannot generate any stand-still torque and will not start. The field produced by a single-phase winding can provide energy to a motor already rotating, but without auxiliary mechanisms the motor will not accelerate from a stop. A rotating magnetic field of steady amplitude requires that all three phase currents be equal in magnitude, and accurately displaced one-third of a cycle in phase. Unbalanced operation results in undesirable effects on motors and generators. Conversion to other phase systems. Provided two voltage waveforms have at least some relative displacement on the time axis, other than a multiple of a half-cycle, any other polyphase set of voltages can be obtained by an array of passive transformers. Such arrays will evenly balance the polyphase load between the phases of the source system. For example, balanced two-phase power can be obtained from a three-phase network by using two specially constructed transformers, with taps at 50% and 86.6% of the primary voltage. This "Scott T" connection produces a true two-phase system with 90° time difference between the phases. 
Another example is the generation of higher-phase-order systems for large rectifier systems, to produce a smoother DC output and to reduce the harmonic currents in the supply. When three-phase is needed but only single-phase is readily available from the electricity supplier, a phase converter can be used to generate three-phase power from the single phase supply. A motor–generator is often used in factory industrial applications. System measurements. In a three-phase system, at least two transducers are required to measure power when there is no neutral, or three transducers when there is a neutral. Blondel's theorem states that the number of measurement elements required is one less than the number of current-carrying conductors. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V_{L1-N} = V_P \\sin\\left(\\theta\\right)\\,\\!" }, { "math_id": 1, "text": "V_{L2-N} = V_P \\sin\\left(\\theta - \\frac{2}{3}\\pi\\right) = V_P \\sin\\left(\\theta + \\frac{4}{3}\\pi\\right)" }, { "math_id": 2, "text": "V_{L3-N} = V_P \\sin\\left(\\theta - \\frac{4}{3}\\pi\\right) = V_P \\sin\\left(\\theta + \\frac{2}{3}\\pi\\right)" }, { "math_id": 3, "text": "V_P" }, { "math_id": 4, "text": "\\theta = 2\\pi ft\\,\\!" }, { "math_id": 5, "text": "t" }, { "math_id": 6, "text": "f" }, { "math_id": 7, "text": "\\scriptstyle P \\,=\\, V I \\,=\\, \\frac{V^2}{R}" }, { "math_id": 8, "text": "\\begin{align}\n P_{Li} &= \\frac{V_{Li}^{2}}{R} \\\\\n P_{TOT} &= \\sum_i P_{Li}\n\\end{align}" }, { "math_id": 9, "text": "\\scriptstyle p \\,=\\, \\frac{1}{V_P^2}P_{TOT} R" }, { "math_id": 10, "text": "p=\\sin^{2} \\theta+\\sin^{2} \\left(\\theta-\\frac{2}{3} \\pi\\right)+\\sin^{2} \\left(\\theta-\\frac{4}{3} \\pi\\right)=\\frac{3}{2}" }, { "math_id": 11, "text": "P_{TOT}=\\frac{3 V_P^2}{2R}." }, { "math_id": 12, "text": "\\theta" }, { "math_id": 13, "text": "V = \\frac{V_p}{\\sqrt{2}}" }, { "math_id": 14, "text": "P_{TOT}" }, { "math_id": 15, "text": "P_{TOT} = \\frac{3V^2}{R}" }, { "math_id": 16, "text": "Z=|Z|e^{j\\varphi}" }, { "math_id": 17, "text": "I_P=\\frac{V_P}{|Z|}" }, { "math_id": 18, "text": "I_{L1}=I_P\\sin\\left(\\theta-\\varphi\\right)" }, { "math_id": 19, "text": "I_{L2}=I_P\\sin\\left(\\theta-\\frac{2}{3}\\pi-\\varphi\\right)" }, { "math_id": 20, "text": "I_{L3}=I_P\\sin\\left(\\theta-\\frac{4}{3}\\pi-\\varphi\\right)" }, { "math_id": 21, "text": "P_{L1}=V_{L1}I_{L1}=V_P I_P\\sin\\left(\\theta\\right)\\sin\\left(\\theta-\\varphi\\right)" }, { "math_id": 22, "text": "P_{L2}=V_{L2}I_{L2}=V_P I_P\\sin\\left(\\theta-\\frac{2}{3}\\pi\\right)\\sin\\left(\\theta-\\frac{2}{3}\\pi-\\varphi\\right)" }, { "math_id": 23, "text": "P_{L3}=V_{L3}I_{L3}=V_P I_P\\sin\\left(\\theta-\\frac{4}{3}\\pi\\right)\\sin\\left(\\theta-\\frac{4}{3}\\pi-\\varphi\\right)" }, { "math_id": 24, "text": "P_{L1}=\\frac{V_P I_P}{2}\\left[\\cos\\left(\\varphi\\right)-\\cos\\left(2\\theta-\\varphi\\right)\\right]" }, { "math_id": 25, "text": "P_{L2}=\\frac{V_P I_P}{2}\\left[\\cos\\left(\\varphi\\right)-\\cos\\left(2\\theta-\\frac{4}{3}\\pi-\\varphi\\right)\\right]" }, { "math_id": 26, "text": "P_{L3}=\\frac{V_P I_P}{2}\\left[\\cos\\left(\\varphi\\right)-\\cos\\left(2\\theta-\\frac{8}{3}\\pi-\\varphi\\right)\\right]" }, { "math_id": 27, "text": "P_{TOT}=\\frac{V_P I_P}{2}\\left\\{3\\cos\\varphi-\\left[\\cos\\left(2\\theta-\\varphi\\right)+\\cos\\left(2\\theta-\\frac{4}{3}\\pi-\\varphi\\right)+\\cos\\left(2\\theta-\\frac{8}{3}\\pi-\\varphi\\right)\\right]\\right\\}" }, { "math_id": 28, "text": "P_{TOT}=\\frac{3V_P I_P}{2}\\cos\\varphi" }, { "math_id": 29, "text": "P_{TOT}=\\frac{3V_P^2}{2|Z|}\\cos\\varphi" }, { "math_id": 30, "text": "P_{TOT}=\\frac{3V^2}{Z}\\cos\\varphi" }, { "math_id": 31, "text": "\\begin{align}\nI_{L1} &= \\frac{V_{L1-N}}{R},\\; I_{L2}=\\frac{V_{L2-N}}{R},\\; I_{L3}=\\frac{V_{L3-N}}{R}\\\\\n-I_{N} &= I_{L1} + I_{L2} + I_{L3}\n\\end{align}" }, { "math_id": 32, "text": "i=\\frac{I_{N}R}{V_P}" }, { "math_id": 33, "text": "\\begin{align}\n i &= \\sin\\left(\\theta\\right) + \\sin\\left(\\theta - \\frac{2\\pi}{3}\\right) + \\sin\\left(\\theta + \\frac{2\\pi}{3}\\right)\\\\\n &= \\sin\\left(\\theta\\right) + 2\\sin\\left(\\theta\\right) \\cos\\left(\\frac{2\\pi}{3}\\right)\\\\\n &= \\sin\\left(\\theta\\right) - \\sin\\left(\\theta\\right)\\\\\n &= 0\n\\end{align}" }, { "math_id": 34, 
"text": "I_{L1}" }, { "math_id": 35, "text": "I_{L2}" }, { "math_id": 36, "text": "I_{L3}" }, { "math_id": 37, "text": "I_{L1} + I_{L2} \\cos\\left(\\frac{2}{3}\\pi\\right) + j I_{L2} \\sin\\left(\\frac{2}{3}\\pi\\right) + I_{L3} \\cos\\left(\\frac{4}{3}\\pi\\right) + j I_{L3} \\sin\\left(\\frac{4}{3}\\pi\\right)" }, { "math_id": 38, "text": "I_{L1} - I_{L2} \\frac{1}{2} - I_{L3} \\frac{1}{2} + j \\frac{\\sqrt{3}}{2} \\left(I_{L2} - I_{L3}\\right)" }, { "math_id": 39, "text": "\\sqrt{I_{L1}^2 + I_{L2}^2 + I_{L3}^2 - I_{L1} I_{L2} - I_{L1} I_{L3} - I_{L2} I_{L3}}" } ]
https://en.wikipedia.org/wiki?curid=1175180
11752313
Millioctave
Unit of measurement for musical intervalsThe millioctave (moct) is a unit of measurement for musical intervals. As is expected from the prefix milli-, a millioctave is defined as 1/1000 of an octave. From this it follows that one millioctave is equal to the ratio 21/1000, the 1000th root of 2, or approximately 1.0006934 (). Given two frequencies "a" and "b", the measurement of the interval between them in millioctaves can be calculated by formula_0 Likewise, if you know a note "b" and the number "n" of millioctaves in the interval, then the other note "a" may be calculated by: formula_1 Like the more common cent, the millioctave is a linear measure of intervals, and thus the size of intervals can be calculated by adding their millioctave values, instead of multiplication, which is necessary for calculations of frequencies. A millioctave is exactly 1.2 cents. History and use. The millioctave was introduced by the German physicist Arthur von Oettingen in his book "Das duale Harmoniesystem" (1913). The invention goes back to John Herschel, who proposed a division of the octave into 1000 parts, which was published (with appropriate credit to Herschel) in George Biddell Airy's book on musical acoustics. Compared to the cent, the millioctave has not been as popular because it is not aligned with just intervals. It is however occasionally used by authors who wish to avoid the close association between the cent and twelve-tone equal temperament. Some considers that the millioctave introduces as well a bias for the less familiar 10-tone equal temperament however this bias is common in the decimal system. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n = 1000 \\log_2 \\left( \\frac{a}{b} \\right) \\approx 3322 \\log_{10} \\left( \\frac{a}{b} \\right)" }, { "math_id": 1, "text": "a = b \\times 2 ^ \\frac{n}{1000}" } ]
https://en.wikipedia.org/wiki?curid=11752313
1175262
Performance indicator
Measurement that evaluates the success of an organization A performance indicator or key performance indicator (KPI) is a type of performance measurement. KPIs evaluate the success of an organization or of a particular activity (such as projects, programs, products and other initiatives) in which it engages. KPIs provide a focus for strategic and operational improvement, create an analytical basis for decision making and help focus attention on what matters most. Often success is simply the repeated, periodic achievement of some levels of operational goal (e.g. zero defects, 10/10 customer satisfaction), and sometimes success is defined in terms of making progress toward strategic goals. Accordingly, choosing the right KPIs relies upon a good understanding of what is important to the organization. What is deemed important often depends on the department measuring the performance – e.g. the KPIs useful to finance will differ from the KPIs assigned to sales. Since there is a need to understand well what is important, various techniques to assess the present state of the business, and its key activities, are associated with the selection of performance indicators. These assessments often lead to the identification of potential improvements, so performance indicators are routinely associated with 'performance improvement' initiatives. A very common way to choose KPIs is to apply a management framework such as the balanced scorecard. The importance of such performance indicators is evident in the typical decision-making process (e.g. in management of organisations). When a decision-maker considers several options, they must be equipped to properly analyse the status quo to predict the consequences of future actions. Should they make their analysis on the basis of faulty or incomplete information, the predictions will not be reliable and consequently the decision made might yield an unexpected result. Therefore, the proper usage of performance indicators is vital to avoid such mistakes and minimise the risk. KPIs are used not only for business organizations but also for technical aspects such as machine performance. For example, a machine used for production in a factory would output various signals indicating how the current machine status is (e.g., machine sensor signals). Some signals or signals as a result of processing the existing signals may represent the high-level machine performance. These representative signals can be KPI for the machine. Categorization of indicators. Key performance indicators define a set of values against to which measure. These raw sets of values, which can be fed to systems that aggregate the data, are called "indicators". There are two categories of measurements for KPIs. An 'indicator' can only measure what 'has' happened, in the past tense, so the only type of measurement is descriptive or lagging. Any KPI that attempts to measure something in a future state as predictive, diagnostic or prescriptive is no longer an 'indicator', it is a 'prognosticator' – at this point, it is analytics (possibly based on a KPI) but leading KPIs are also used to indicate the amount of front end loading activities. Points of measurement. "Performance" focuses on measuring a particular "element" of an "activity". An activity can have four elements: input, output, control, and mechanism. At a minimum, activity is required to have at least an input and an output. 
Something goes into the activity as an "input"; the activity transforms the input by changing its "state", and the activity produces an "output". An activity can also enable "mechanisms" that are typically separated into "human" and "system" mechanisms. It can also be constrained in some way by a "control". Lastly, its actions can have a temporal construct of "time". Identifying indicators. Performance indicators differ from business drivers and aims (or goals). A school might consider the failure rate of its students as a key performance indicator which might help the school understand its position in the educational community, whereas a business might consider the percentage of income from returning customers as a potential KPI. The key stages in identifying KPIs are: Key performance indicators (KPIs) are ways to periodically assess the performances of organizations, business units, and their division, departments and employees. Accordingly, KPIs are most commonly defined in a way that is understandable, meaningful, and measurable. They are rarely defined in such a way that their fulfillment would be hampered by factors seen as non-controllable by the organizations or individuals responsible. Such KPIs are usually ignored by organizations. KPIs should follow the SMART criteria. This means the measure has a Specific purpose for the business, it is Measurable to really get a value of the KPI, the defined norms have to be Achievable, the improvement of a KPI has to be Relevant to the success of the organization, and finally it must be Time phased, which means the value or outcomes are shown for a predefined and relevant period. KPIs should be set at a senior level within an organization and cascaded through all levels of management. In order to be evaluated, KPIs are linked to target values, so that the value of the measure can be assessed as meeting expectations or not. Key performance indicators are mostly the non-financial measures of a company's performance – they do not have a monetary value but in a business context they do contribute to the company's profitability. Examples. Accounts. These are some of the examples: Marketing and sales. Many of these customer KPIs are developed and managed with customer relationship management software. Faster availability of data is a competitive issue for most organizations. For example, businesses that have higher operational/credit risk (involving for example credit cards or wealth management) may want weekly or even daily availability of KPI analysis, facilitated by appropriate IT systems and tools. Manufacturing. Overall equipment effectiveness (OEE) is a set of broadly accepted nonfinancial metrics that reflect manufacturing success. Professional services. Most professional services firms (for example, management consultancies, systems integration firms, or digital marketing agencies) use three key performance indicators to track the health of their businesses. They typically use professional services automation (PSA) software to keep track of and manage these metrics. Supply chain management. Businesses can utilize supply chain KPIs to establish and monitor progress toward a variety of goals, including lean manufacturing objectives, minority business enterprise and diversity spending, environmental "green" initiatives, cost avoidance programs and low-cost country sourcing targets. Suppliers can implement KPIs to gain a competitive advantage. 
Suppliers have instant access to a user-friendly portal for submitting standardized cost savings templates. Suppliers and their customers exchange vital supply chain performance data while gaining visibility to the exact status of cost improvement projects and cost savings documentation. Any business, regardless of size, can better manage supplier performance and overall supply chain performance, with the help of KPIs' robust capabilities, which include: Main KPIs for supply chain management will detail the following processes: In a warehouse, the manager will use KPIs that target best use of the facility, like the receiving and put away KPIs to measure the receiving efficiency and the putaway cost per line. Storage KPIs can also be used to determine the efficiency of the storage space and the carrying cost of the inventory. Government. The provincial government of Ontario, Canada has been using KPIs since 1998 to measure the performance of higher education institutions in the province. All post-secondary schools collect and report performance data in five areas – graduate satisfaction, student satisfaction, employer satisfaction, employment rate, and graduation rate. In England, Public Health England uses KPIs to provide a consistent measure of the performance of NHS population screening activities, and publication of up to four main KPIs for the most important contracts outsourced by each UK government department is seen as a measure helping to increase transparency in the delivery of public services. formula_0 Other performance indicators. Human Resource Management Problems. In practice, overseeing key performance indicators can prove expensive or difficult for organizations. Some indicators such as staff morale may be impossible to quantify. As such, dubious KPIs can be adopted that can be used as a rough guide rather than a precise benchmark. Key performance indicators can also lead to perverse incentives and unintended consequences as a result of employees working to the specific measurements at the expense of the actual quality or value of their work. Sometimes, collecting statistics can become a substitute for a better understanding of the problems, so the use of dubious KPIs can result in progress in aims and measured effectiveness becoming different. For example, during the Vietnam War, US soldiers were shown to be effective in kill ratios and high body counts, but this was misleading when used to measure aims as it did not show the lack of progress towards the US goal of increasing South Vietnamese government control of its territory. Another example would be to measure the productivity of a software development team in terms of lines of source code written. This approach can easily add large amounts of dubious code, thereby inflating the line count but adding little value in terms of systemic improvement. A similar problem arises when a footballer kicks a ball uselessly to build up their statistics. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{ROC} = \\frac{\\text{Close}-\\text{Close (Past)}}{\\text{Close (Past)}}\\times100" } ]
https://en.wikipedia.org/wiki?curid=1175262
117534
Optical microscope
Microscope that uses visible light The optical microscope, also referred to as a light microscope, is a type of microscope that commonly uses visible light and a system of lenses to generate magnified images of small objects. Optical microscopes are the oldest design of microscope and were possibly invented in their present compound form in the 17th century. Basic optical microscopes can be very simple, although many complex designs aim to improve resolution and sample contrast. The object is placed on a stage and may be directly viewed through one or two eyepieces on the microscope. In high-power microscopes, both eyepieces typically show the same image, but with a stereo microscope, slightly different images are used to create a 3-D effect. A camera is typically used to capture the image (micrograph). The sample can be lit in a variety of ways. Transparent objects can be lit from below and solid objects can be lit with light coming through (bright field) or around (dark field) the objective lens. Polarised light may be used to determine crystal orientation of metallic objects. Phase-contrast imaging can be used to increase image contrast by highlighting small details of differing refractive index. A range of objective lenses with different magnification are usually provided mounted on a turret, allowing them to be rotated into place and providing an ability to zoom-in. The maximum magnification power of optical microscopes is typically limited to around 1000x because of the limited resolving power of visible light. While larger magnifications are possible no additional details of the object are resolved. Alternatives to optical microscopy which do not use visible light include scanning electron microscopy and transmission electron microscopy and scanning probe microscopy and as a result, can achieve much greater magnifications. Types. There are two basic types of optical microscopes: simple microscopes and compound microscopes. A simple microscope uses the optical power of a single lens or group of lenses for magnification. A compound microscope uses a system of lenses (one set enlarging the image produced by another) to achieve a much higher magnification of an object. The vast majority of modern research microscopes are compound microscopes, while some cheaper commercial digital microscopes are simple single-lens microscopes. Compound microscopes can be further divided into a variety of other types of microscopes, which differ in their optical configurations, cost, and intended purposes. Simple microscope. A simple microscope uses a lens or set of lenses to enlarge an object through angular magnification alone, giving the viewer an erect enlarged virtual image. The use of a single convex lens or groups of lenses are found in simple magnification devices such as the magnifying glass, loupes, and eyepieces for telescopes and microscopes. Compound microscope. A compound microscope uses a lens close to the object being viewed to collect light (called the objective lens), which focuses a real image of the object inside the microscope (image 1). That image is then magnified by a second lens or group of lenses (called the eyepiece) that gives the viewer an enlarged inverted virtual image of the object (image 2). The use of a compound objective/eyepiece combination allows for much higher magnification. Common compound microscopes often feature exchangeable objective lenses, allowing the user to quickly adjust the magnification. 
A compound microscope also enables more advanced illumination setups, such as phase contrast. Other microscope variants. There are many variants of the compound optical microscope design for specialized purposes. Some of these are physical design differences allowing specialization for certain purposes: Other microscope variants are designed for different illumination techniques: Digital microscope. A digital microscope is a microscope equipped with a digital camera allowing observation of a sample via a computer. Microscopes can also be partly or wholly computer-controlled with various levels of automation. Digital microscopy allows greater analysis of a microscope image, for example, measurements of distances and areas and quantitation of a fluorescent or histological stain. Low-powered digital microscopes, USB microscopes, are also commercially available. These are essentially webcams with a high-powered macro lens and generally do not use transillumination. The camera is attached directly to a computer's USB port to show the images directly on the monitor. They offer modest magnifications (up to about 200×) without the need to use eyepieces and at a very low cost. High-power illumination is usually provided by an LED source or sources adjacent to the camera lens. Digital microscopy with very low light levels to avoid damage to vulnerable biological samples is available using sensitive photon-counting digital cameras. It has been demonstrated that a light source providing pairs of entangled photons may minimize the risk of damage to the most light-sensitive samples. In this application of ghost imaging to photon-sparse microscopy, the sample is illuminated with infrared photons, each spatially correlated with an entangled partner in the visible band for efficient imaging by a photon-counting camera. History. Invention. The earliest microscopes were single lens magnifying glasses with limited magnification, which date at least as far back as the widespread use of lenses in eyeglasses in the 13th century. Compound microscopes first appeared in Europe around 1620 including one demonstrated by Cornelis Drebbel in London (around 1621) and one exhibited in Rome in 1624. The actual inventor of the compound microscope is unknown although many claims have been made over the years. These include a claim 35 years after they appeared by Dutch spectacle-maker Johannes Zachariassen that his father, Zacharias Janssen, invented the compound microscope and/or the telescope as early as 1590. Johannes' testimony, which some claim is dubious, pushes the invention date so far back that Zacharias would have been a child at the time, leading to speculation that, for Johannes' claim to be true, the compound microscope would have to have been invented by Johannes' grandfather, Hans Martens. Another claim is that Janssen's competitor, Hans Lippershey (who applied for the first telescope patent in 1608) also invented the compound microscope. Other historians point to the Dutch innovator Cornelis Drebbel with his 1621 compound microscope. Galileo Galilei is sometimes cited as a compound microscope inventor. After 1610, he found that he could close focus his telescope to view small objects, such as flies, close up and/or could look through the wrong end in reverse to magnify small objects. The only drawback was that his 2 foot long telescope had to be extended out to 6 feet to view objects that close. 
After seeing the compound microscope built by Drebbel exhibited in Rome in 1624, Galileo built his own improved version. In 1625, Giovanni Faber coined the name "microscope" for the compound microscope Galileo submitted to the in 1624 (Galileo had called it the ""occhiolino" or "little eye""). Faber coined the name from the Greek words "μικρόν" (micron) meaning "small", and "σκοπεῖν" (skopein) meaning "to look at", a name meant to be analogous with "telescope", another word coined by the Linceans. Christiaan Huygens, another Dutchman, developed a simple 2-lens ocular system in the late 17th century that was achromatically corrected, and therefore a huge step forward in microscope development. The Huygens ocular is still being produced to this day, but suffers from a small field size, and other minor disadvantages. Popularization. Antonie van Leeuwenhoek (1632–1724) is credited with bringing the microscope to the attention of biologists, even though simple magnifying lenses were already being produced in the 16th century. Van Leeuwenhoek's home-made microscopes were simple microscopes, with a single very small, yet strong lens. They were awkward in use, but enabled van Leeuwenhoek to see detailed images. It took about 150 years of optical development before the compound microscope was able to provide the same quality image as van Leeuwenhoek's simple microscopes, due to difficulties in configuring multiple lenses. In the 1850s, John Leonard Riddell, Professor of Chemistry at Tulane University, invented the first practical binocular microscope while carrying out one of the earliest and most extensive American microscopic investigations of cholera. Lighting techniques. While basic microscope technology and optics have been available for over 400 years it is much more recently that techniques in sample illumination were developed to generate the high quality images seen today. In August 1893, August Köhler developed Köhler illumination. This method of sample illumination gives rise to extremely even lighting and overcomes many limitations of older techniques of sample illumination. Before development of Köhler illumination the image of the light source, for example a lightbulb filament, was always visible in the image of the sample. The Nobel Prize in physics was awarded to Dutch physicist Frits Zernike in 1953 for his development of phase contrast illumination which allows imaging of transparent samples. By using interference rather than absorption of light, extremely transparent samples, such as live mammalian cells, can be imaged without having to use staining techniques. Just two years later, in 1955, Georges Nomarski published the theory for differential interference contrast microscopy, another interference-based imaging technique. Fluorescence microscopy. Modern biological microscopy depends heavily on the development of fluorescent probes for specific structures within a cell. In contrast to normal transilluminated light microscopy, in fluorescence microscopy the sample is illuminated through the objective lens with a narrow set of wavelengths of light. This light interacts with fluorophores in the sample which then emit light of a longer wavelength. It is this emitted light which makes up the image. Since the mid-20th century chemical fluorescent stains, such as DAPI which binds to DNA, have been used to label specific structures within the cell. 
More recent developments include immunofluorescence, which uses fluorescently labelled antibodies to recognise specific proteins within a sample, and fluorescent proteins like GFP which a live cell can express making it fluorescent. Components. All modern optical microscopes designed for viewing samples by transmitted light share the same basic components of the light path. In addition, the vast majority of microscopes have the same 'structural' components (numbered below according to the image on the right): Eyepiece (ocular lens). The eyepiece, or ocular lens, is a cylinder containing two or more lenses; its function is to bring the image into focus for the eye. The eyepiece is inserted into the top end of the body tube. Eyepieces are interchangeable and many different eyepieces can be inserted with different degrees of magnification. Typical magnification values for eyepieces include 5×, 10× (the most common), 15× and 20×. In some high performance microscopes, the optical configuration of the objective lens and eyepiece are matched to give the best possible optical performance. This occurs most commonly with apochromatic objectives. Objective turret (revolver or revolving nose piece). Objective turret, revolver, or revolving nose piece is the part that holds the set of objective lenses. It allows the user to switch between objective lenses. Objective lens. At the lower end of a typical compound optical microscope, there are one or more objective lenses that collect light from the sample. The objective is usually in a cylinder housing containing a glass single or multi-element compound lens. Typically there will be around three objective lenses screwed into a circular nose piece which may be rotated to select the required objective lens. These arrangements are designed to be parfocal, which means that when one changes from one lens to another on a microscope, the sample stays in focus. Microscope objectives are characterized by two parameters, namely, magnification and numerical aperture. The former typically ranges from 5× to 100× while the latter ranges from 0.14 to 0.7, corresponding to focal lengths of about 40 to 2 mm, respectively. Objective lenses with higher magnifications normally have a higher numerical aperture and a shorter depth of field in the resulting image. Some high performance objective lenses may require matched eyepieces to deliver the best optical performance. Oil immersion objective. Some microscopes make use of oil-immersion objectives or water-immersion objectives for greater resolution at high magnification. These are used with index-matching material such as immersion oil or water and a matched cover slip between the objective lens and the sample. The refractive index of the index-matching material is higher than air allowing the objective lens to have a larger numerical aperture (greater than 1) so that the light is transmitted from the specimen to the outer face of the objective lens with minimal refraction. Numerical apertures as high as 1.6 can be achieved. The larger numerical aperture allows collection of more light making detailed observation of smaller details possible. An oil immersion lens usually has a magnification of 40 to 100×. Focus knobs. Adjustment knobs move the stage up and down with separate adjustment for coarse and fine focusing. The same controls enable the microscope to adjust to specimens of different thickness. 
In older designs of microscopes, the focus adjustment wheels move the microscope tube up or down relative to the stand and had a fixed stage. Frame. The whole of the optical assembly is traditionally attached to a rigid arm, which in turn is attached to a robust U-shaped foot to provide the necessary rigidity. The arm angle may be adjustable to allow the viewing angle to be adjusted. The frame provides a mounting point for various microscope controls. Normally this will include controls for focusing, typically a large knurled wheel to adjust coarse focus, together with a smaller knurled wheel to control fine focus. Other features may be lamp controls and/or controls for adjusting the condenser. Stage. The stage is a platform below the objective lens which supports the specimen being viewed. In the center of the stage is a hole through which light passes to illuminate the specimen. The stage usually has arms to hold slides (rectangular glass plates with typical dimensions of 25×75 mm, on which the specimen is mounted). At magnifications higher than 100× moving a slide by hand is not practical. A mechanical stage, typical of medium and higher priced microscopes, allows tiny movements of the slide via control knobs that reposition the sample/slide as desired. If a microscope did not originally have a mechanical stage it may be possible to add one. All stages move up and down for focus. With a mechanical stage slides move on two horizontal axes for positioning the specimen to examine specimen details. Focusing starts at lower magnification in order to center the specimen by the user on the stage. Moving to a higher magnification requires the stage to be moved higher vertically for re-focus at the higher magnification and may also require slight horizontal specimen position adjustment. Horizontal specimen position adjustments are the reason for having a mechanical stage. Due to the difficulty in preparing specimens and mounting them on slides, for children it is best to begin with prepared slides that are centered and focus easily regardless of the focus level used. Light source. Many sources of light can be used. At its simplest, daylight is directed via a mirror. Most microscopes, however, have their own adjustable and controllable light source – often a halogen lamp, although illumination using LEDs and lasers are becoming a more common provision. Köhler illumination is often provided on more expensive instruments. Condenser. The condenser is a lens designed to focus light from the illumination source onto the sample. The condenser may also include other features, such as a diaphragm and/or filters, to manage the quality and intensity of the illumination. For illumination techniques like dark field, phase contrast and differential interference contrast microscopy additional optical components must be precisely aligned in the light path. Magnification. The actual power or magnification of a compound optical microscope is the product of the powers of the eyepiece and the objective lens. For example a 10x eyepiece magnification and a 100x objective lens magnification gives a total magnification of 1,000×. Modified environments such as the use of oil or ultraviolet light can increase the resolution and allow for resolved details at magnifications larger than 1,000x. Operation. Illumination techniques. Many techniques are available which modify the light path to generate an improved contrast image from a sample. 
Major techniques for generating increased contrast from the sample include cross-polarized light, dark field, phase contrast and differential interference contrast illumination. A recent technique (Sarfus) combines cross-polarized light and specific contrast-enhanced slides for the visualization of nanometric samples. &lt;gallery caption="Four examples of transilumination techniques used to generate contrast in a sample of tissue paper. 1.559 μm/pixel." align="center"&gt; File:Paper Micrograph Bright.png|Bright field illumination, sample contrast comes from absorbance of light in the sample. File:Paper Micrograph Cross-Polarised.png|Cross-polarized light illumination, sample contrast comes from rotation of polarized light through the sample. File:Paper Micrograph Dark.png|Dark field illumination, sample contrast comes from light scattered by the sample. File:Paper Micrograph Phase.png|Phase contrast illumination, sample contrast comes from interference of different path lengths of light through the sample. &lt;/gallery&gt; Other techniques. Modern microscopes allow more than just observation of transmitted light image of a sample; there are many techniques which can be used to extract other kinds of data. Most of these require additional equipment in addition to a basic compound microscope. *Epifluorescence microscopy *Confocal microscopy Applications. Optical microscopy is used extensively in microelectronics, nanophysics, biotechnology, pharmaceutic research, mineralogy and microbiology. Optical microscopy is used for medical diagnosis, the field being termed histopathology when dealing with tissues, or in smear tests on free cells or tissue fragments. In industrial use, binocular microscopes are common. Aside from applications needing true depth perception, the use of dual eyepieces reduces eye strain associated with long workdays at a microscopy station. In certain applications, long-working-distance or long-focus microscopes are beneficial. An item may need to be examined behind a window, or industrial subjects may be a hazard to the objective. Such optics resemble telescopes with close-focus capabilities. Measuring microscopes are used for precision measurement. There are two basic types. One has a reticle graduated to allow measuring distances in the focal plane. The other (and older) type has simple crosshairs and a micrometer mechanism for moving the subject relative to the microscope. Very small, portable microscopes have found some usage in places where a laboratory microscope would be a burden. Limitations. At very high magnifications with transmitted light, point objects are seen as fuzzy discs surrounded by diffraction rings. These are called Airy disks. The "resolving power" of a microscope is taken as the ability to distinguish between two closely spaced Airy disks (or, in other words the ability of the microscope to reveal adjacent structural detail as distinct and separate). It is these impacts of diffraction that limit the ability to resolve fine details. The extent and magnitude of the diffraction patterns are affected by both the wavelength of light (λ), the refractive materials used to manufacture the objective lens and the numerical aperture (NA) of the objective lens. There is therefore a finite limit beyond which it is impossible to resolve separate points in the objective field, known as the diffraction limit. 
Assuming that optical aberrations in the whole optical set-up are negligible, the resolution "d", can be stated as: formula_0 Usually a wavelength of 550 nm is assumed, which corresponds to green light. With air as the external medium, the highest practical "NA" is 0.95, and with oil, up to 1.5. In practice the lowest value of "d" obtainable with conventional lenses is about 200 nm. A new type of lens using multiple scattering of light allowed to improve the resolution to below 100 nm. Surpassing the resolution limit. Multiple techniques are available for reaching resolutions higher than the transmitted light limit described above. Holographic techniques, as described by Courjon and Bulabois in 1979, are also capable of breaking this resolution limit, although resolution was restricted in their experimental analysis. Using fluorescent samples more techniques are available. Examples include Vertico SMI, near field scanning optical microscopy which uses evanescent waves, and stimulated emission depletion. In 2005, a microscope capable of detecting a single molecule was described as a teaching tool. Despite significant progress in the last decade, techniques for surpassing the diffraction limit remain limited and specialized. While most techniques focus on increases in lateral resolution there are also some techniques which aim to allow analysis of extremely thin samples. For example, sarfus methods place the thin sample on a contrast-enhancing surface and thereby allows to directly visualize films as thin as 0.3 nanometers. On 8 October 2014, the Nobel Prize in Chemistry was awarded to Eric Betzig, William Moerner and Stefan Hell for the development of super-resolved fluorescence microscopy. Structured illumination SMI. SMI (spatially modulated illumination microscopy) is a light optical process of the so-called point spread function (PSF) engineering. These are processes which modify the PSF of a microscope in a suitable manner to either increase the optical resolution, to maximize the precision of distance measurements of fluorescent objects that are small relative to the wavelength of the illuminating light, or to extract other structural parameters in the nanometer range. Localization microscopy SPDMphymod. SPDM (spectral precision distance microscopy), the basic localization microscopy technology is a light optical process of fluorescence microscopy which allows position, distance and angle measurements on "optically isolated" particles (e.g. molecules) well below the theoretical limit of resolution for light microscopy. "Optically isolated" means that at a given point in time, only a single particle/molecule within a region of a size determined by conventional optical resolution (typically approx. 200–250 nm diameter) is being registered. This is possible when molecules within such a region all carry different spectral markers (e.g. different colors or other usable differences in the light emission of different particles). Many standard fluorescent dyes like GFP, Alexa dyes, Atto dyes, Cy2/Cy3 and fluorescein molecules can be used for localization microscopy, provided certain photo-physical conditions are present. Using this so-called SPDMphymod (physically modifiable fluorophores) technology a single laser wavelength of suitable intensity is sufficient for nanoimaging. 3D super resolution microscopy. 
3D super resolution microscopy with standard fluorescent dyes can be achieved by a combination of localization microscopy for standard fluorescent dyes (SPDMphymod) and structured illumination (SMI). STED. Stimulated emission depletion is a simple example of how higher resolution surpassing the diffraction limit is possible, but it has major limitations. STED is a fluorescence microscopy technique which uses a combination of light pulses to induce fluorescence in a small sub-population of fluorescent molecules in a sample. Each molecule produces a diffraction-limited spot of light in the image, and the centre of each of these spots corresponds to the location of the molecule. As the number of fluorescing molecules is low, the spots of light are unlikely to overlap and therefore can be placed accurately. This process is then repeated many times to generate the image. Stefan Hell of the Max Planck Institute for Biophysical Chemistry was awarded the 10th German Future Prize in 2006 and the Nobel Prize in Chemistry in 2014 for his development of the STED microscope and associated methodologies. Alternatives. In order to overcome the limitations set by the diffraction limit of visible light, other microscopes have been designed which use other waves. It is important to note that higher-frequency waves have limited interaction with matter; for example, soft tissues are relatively transparent to X-rays, resulting in distinct sources of contrast and different target applications. The use of electrons and X-rays in place of light allows much higher resolution – the wavelength of the radiation is shorter so the diffraction limit is lower. To make the short-wavelength probe non-destructive, the atomic beam imaging system (atomic nanoscope) has been proposed and widely discussed in the literature, but it is not yet competitive with conventional imaging systems. STM and AFM are scanning probe techniques using a small probe which is scanned over the sample surface. Resolution in these cases is limited by the size of the probe; micromachining techniques can produce probes with tip radii of 5–10 nm. Additionally, methods such as electron or X-ray microscopy use a vacuum or partial vacuum, which limits their use for live and biological samples (with the exception of an environmental scanning electron microscope). The specimen chambers needed for all such instruments also limit sample size, and sample manipulation is more difficult. Color cannot be seen in images made by these methods, so some information is lost. They are, however, essential when investigating molecular or atomic effects, such as age hardening in aluminium alloys, or the microstructure of polymers. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "d = \\frac { \\lambda } { 2 NA }" } ]
https://en.wikipedia.org/wiki?curid=117534
11753597
Truncated normal distribution
Type of probability distribution In probability and statistics, the truncated normal distribution is the probability distribution derived from that of a normally distributed random variable by bounding the random variable from either below or above (or both). The truncated normal distribution has wide applications in statistics and econometrics. Definitions. Suppose formula_1 has a normal distribution with mean formula_2 and variance formula_3 and lies within the interval formula_4. Then formula_5 conditional on formula_6 has a truncated normal distribution. Its probability density function, formula_7, for formula_8, is given by formula_9 and by formula_10 otherwise. Here, formula_11 is the probability density function of the standard normal distribution and formula_12 is its cumulative distribution function formula_13 By definition, if formula_14, then formula_15, and similarly, if formula_16, then formula_17. The above formulae show that when formula_18 the scale parameter formula_3 of the truncated normal distribution is allowed to assume negative values. The parameter formula_19 is in this case imaginary, but the function formula_7 is nevertheless real, positive, and normalizable. The scale parameter formula_3 of the untruncated normal distribution must be positive because the distribution would not be normalizable otherwise. The doubly truncated normal distribution, on the other hand, can in principle have a negative scale parameter (which is different from the variance, see summary formulae), because no such integrability problems arise on a bounded domain. In this case the distribution cannot be interpreted as an untruncated normal conditional on formula_6, of course, but can still be interpreted as a maximum-entropy distribution with first and second moments as constraints, and has an additional peculiar feature: it presents "two" local maxima instead of one, located at formula_20 and formula_21. Properties. The truncated normal is one of two possible maximum entropy probability distributions for a fixed mean and variance constrained to the interval [a,b], the other being the truncated "U". Truncated normals with fixed support form an exponential family. Nielsen reported closed-form formula for calculating the Kullback-Leibler divergence and the Bhattacharyya distance between two truncated normal distributions with the support of the first distribution nested into the support of the second distribution. Moments. If the random variable has been truncated only from below, some probability mass has been shifted to higher values, giving a first-order stochastically dominating distribution and hence increasing the mean to a value higher than the mean formula_2 of the original normal distribution. Likewise, if the random variable has been truncated only from above, the truncated distribution has a mean less than formula_22 Regardless of whether the random variable is bounded above, below, or both, the truncation is a mean-preserving contraction combined with a mean-changing rigid shift, and hence the variance of the truncated distribution is less than the variance formula_3 of the original normal distribution. Two sided truncation. Let formula_23 and formula_24. Then: formula_25 and formula_26 Care must be taken in the numerical evaluation of these formulas, which can result in catastrophic cancellation when the interval formula_27 does not include formula_2. There are better ways to rewrite them that avoid this issue. One sided truncation (of lower tail). 
In this case formula_28 then formula_29 and formula_30 where formula_31 One sided truncation (of upper tail). In this case formula_32 then formula_33 and formula_34 A simpler expression for the variance of one sided truncations has also been given in terms of the chi-square CDF, which is implemented in standard software libraries. Formulas for (generalized) confidence intervals around the truncated moments are likewise available. A recursive formula. As for the non-truncated case, there is a recursive formula for the truncated moments. Multivariate. Computing the moments of a multivariate truncated normal is harder. Generating values from the truncated normal distribution. A random variate formula_0 defined as formula_35 with formula_36 the cumulative distribution function and formula_37 its inverse, formula_38 a uniform random number on formula_39, follows the distribution truncated to the range formula_40. This is simply the inverse transform method for simulating random variables. Although one of the simplest, this method can either fail when sampling in the tail of the normal distribution, or be much too slow. Thus, in practice, one has to find alternative methods of simulation. One such truncated normal generator (implemented in Matlab and in R (programming language) as trandn.R) is based on an acceptance-rejection idea due to Marsaglia. Despite a slightly suboptimal acceptance rate, Marsaglia's method is typically faster, because it does not require the costly numerical evaluation of the exponential function. The MSM package in R has a function, rtnorm, that calculates draws from a truncated normal. The truncnorm package in R also has functions to draw from a truncated normal. An algorithm inspired by the Ziggurat algorithm of Marsaglia and Tsang (1984, 2000), which is usually considered the fastest Gaussian sampler, has also been proposed (arXiv); it is very close to Ahrens's algorithm (1995). Implementations can be found in C, C++, Matlab and Python. Sampling from the "multivariate" truncated normal distribution is considerably more difficult. Exact or perfect simulation is only feasible in the case of truncation of the normal distribution to a polytope region. For more general cases, a methodology for sampling truncated densities within a Gibbs sampling framework has been introduced. It uses one latent variable and, within a Gibbs sampling framework, is more computationally efficient than earlier algorithms. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
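To make the inverse transform method described above concrete, the following is a minimal Python sketch (the function name and the use of SciPy's standard normal CDF and quantile function are illustrative choices, not part of the article); as noted above, this approach can become inaccurate or slow when the truncation interval lies far in the tails.

```python
import numpy as np
from scipy.stats import norm

def sample_truncnorm(mu, sigma, a, b, size=1, rng=None):
    """Inverse-transform sampling from N(mu, sigma^2) truncated to (a, b)."""
    rng = np.random.default_rng() if rng is None else rng
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    u = rng.uniform(size=size)                       # U ~ Uniform(0, 1)
    # x = Phi^{-1}( Phi(alpha) + U * (Phi(beta) - Phi(alpha)) ) * sigma + mu
    p = norm.cdf(alpha) + u * (norm.cdf(beta) - norm.cdf(alpha))
    return mu + sigma * norm.ppf(p)

# Check against the two-sided mean formula: for N(0, 1) truncated to (1, 3),
# E[X] = -(phi(3) - phi(1)) / (Phi(3) - Phi(1)), which is roughly 1.51.
x = sample_truncnorm(0.0, 1.0, 1.0, 3.0, size=100_000)
print(x.mean())
```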
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": " X " }, { "math_id": 2, "text": "\\mu" }, { "math_id": 3, "text": "\\sigma^2" }, { "math_id": 4, "text": "(a,b), \\text{with} \\; -\\infty \\leq a < b \\leq \\infty " }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": " a < X < b " }, { "math_id": 7, "text": "f" }, { "math_id": 8, "text": " a \\leq x \\leq b " }, { "math_id": 9, "text": "\nf(x;\\mu,\\sigma,a,b) = \\frac{1}{\\sigma}\\,\\frac{\\varphi(\\frac{x - \\mu}{\\sigma})}{\\Phi(\\frac{b - \\mu}{\\sigma}) - \\Phi(\\frac{a - \\mu}{\\sigma}) }" }, { "math_id": 10, "text": "f=0" }, { "math_id": 11, "text": "\\varphi(\\xi)=\\frac{1}{\\sqrt{2 \\pi}}\\exp\\left(-\\frac{1}{2}\\xi^2\\right)" }, { "math_id": 12, "text": "\\Phi(\\cdot)" }, { "math_id": 13, "text": "\\Phi(x) = \\frac{1}{2} \\left( 1+\\operatorname{erf}(x/\\sqrt{2}) \\right)." }, { "math_id": 14, "text": "b=\\infty" }, { "math_id": 15, "text": "\\Phi\\left(\\tfrac{b - \\mu}{\\sigma}\\right) =1" }, { "math_id": 16, "text": "a = -\\infty" }, { "math_id": 17, "text": "\\Phi\\left(\\tfrac{a - \\mu}{\\sigma}\\right) = 0" }, { "math_id": 18, "text": "-\\infty<a<b<+\\infty" }, { "math_id": 19, "text": "\\sigma" }, { "math_id": 20, "text": "x=a" }, { "math_id": 21, "text": "x=b" }, { "math_id": 22, "text": "\\mu." }, { "math_id": 23, "text": "\\alpha = (a-\\mu)/\\sigma" }, { "math_id": 24, "text": "\\beta = (b-\\mu)/\\sigma " }, { "math_id": 25, "text": " \\operatorname{E}(X \\mid a<X<b) = \\mu - \\sigma\\frac{\\varphi(\\beta) - \\varphi(\\alpha)}{\\Phi(\\beta)-\\Phi(\\alpha)} " }, { "math_id": 26, "text": " \n\\operatorname{Var}(X \\mid a<X<b) = \\sigma^2\\left[ 1 - \\frac{\\beta\\varphi(\\beta) - \\alpha\\varphi(\\alpha)}{\\Phi(\\beta)-\\Phi(\\alpha)}\n-\\left(\\frac{\\varphi(\\beta) - \\varphi(\\alpha)}{\\Phi(\\beta)-\\Phi(\\alpha)}\\right)^2\\right]" }, { "math_id": 27, "text": "[a,b]" }, { "math_id": 28, "text": "\\; b=\\infty, \\; \\varphi(\\beta)=0, \\; \\Phi(\\beta)=1," }, { "math_id": 29, "text": " \\operatorname{E}(X \\mid X>a) = \\mu +\\sigma \\varphi(\\alpha)/Z ,\\!" }, { "math_id": 30, "text": " \\operatorname{Var}(X \\mid X>a) = \\sigma^2[1+ \\alpha \\varphi(\\alpha)/Z- (\\varphi(\\alpha)/Z)^2 ]," }, { "math_id": 31, "text": " Z=1-\\Phi(\\alpha). " }, { "math_id": 32, "text": "\\; a=\\alpha=-\\infty, \\; \\varphi(\\alpha)=0, \\; \\Phi(\\alpha) = 0," }, { "math_id": 33, "text": " \\operatorname{E}(X \\mid X<b) = \\mu -\\sigma\\frac{\\varphi(\\beta)}{\\Phi(\\beta)} ," }, { "math_id": 34, "text": " \\operatorname{Var}(X \\mid X<b) = \\sigma^2\\left[1-\\beta \\frac{\\varphi(\\beta)}{\\Phi(\\beta)}- \\left(\\frac{\\varphi(\\beta)}{\\Phi(\\beta)} \\right)^2\\right]." }, { "math_id": 35, "text": " x = \\Phi^{-1}( \\Phi(\\alpha) + U\\cdot(\\Phi(\\beta)-\\Phi(\\alpha)))\\sigma + \\mu " }, { "math_id": 36, "text": "\\Phi" }, { "math_id": 37, "text": "\\Phi^{-1}" }, { "math_id": 38, "text": "U" }, { "math_id": 39, "text": "(0, 1)" }, { "math_id": 40, "text": "(a, b)" }, { "math_id": 41, "text": "(0, \\infty)" }, { "math_id": 42, "text": " f(x)= \\frac{2\\beta^{\\frac{\\alpha}{2}} x^{\\alpha-1} \\exp(-\\beta x^2+ \\gamma x )}{\\Psi{\\left(\\frac{\\alpha}{2}, \\frac{ \\gamma}{\\sqrt{\\beta}}\\right)}}" }, { "math_id": 43, "text": "\\Psi(\\alpha,z)={}_1\\Psi_1\\left(\\begin{matrix}\\left(\\alpha,\\frac{1}{2}\\right) \\\\ (1,0) \\end{matrix};z \\right)" } ]
https://en.wikipedia.org/wiki?curid=11753597
11754068
Knödel number
In number theory, an "n"-Knödel number for a given positive integer "n" is a composite number "m" with the property that each "i" &lt; "m" coprime to "m" satisfies formula_0. The concept is named after Walter Knödel. The set of all "n"-Knödel numbers is denoted "K""n". The special case "K"1 is the Carmichael numbers. There are infinitely many "n"-Knödel numbers for a given "n". Due to Euler's theorem every composite number "m" is an "n"-Knödel number for formula_1 where formula_2 is Euler's totient function. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "i^{m - n} \\equiv 1 \\pmod{m}" }, { "math_id": 1, "text": "n = m-\\varphi(m) " }, { "math_id": 2, "text": " \\varphi " } ]
https://en.wikipedia.org/wiki?curid=11754068
11754125
Heun's method
Procedure for solving ODEs In mathematics and computational science, Heun's method may refer to the improved or modified Euler's method (that is, the explicit trapezoidal rule), or a similar two-stage Runge–Kutta method. It is named after Karl Heun and is a numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. Both variants can be seen as extensions of the Euler method into two-stage second-order Runge–Kutta methods. The procedure for calculating the numerical solution to the initial value problem: formula_0 by way of Heun's method, is to first calculate the intermediate value formula_1 and then the final approximation formula_2 at the next integration point. formula_3 formula_4 where formula_5 is the step size and formula_6. Description. Euler's method is used as the foundation for Heun's method. Euler's method uses the line tangent to the function at the beginning of the interval as an estimate of the slope of the function over the interval, assuming that if the step size is small, the error will be small. However, even when extremely small step sizes are used, over a large number of steps the error starts to accumulate and the estimate diverges from the actual functional value. Where the solution curve is concave up, its tangent line will underestimate the vertical coordinate of the next point and vice versa for a concave down solution. The ideal prediction line would hit the curve at its next predicted point. In reality, there is no way to know whether the solution is concave-up or concave-down, and hence if the next predicted point will overestimate or underestimate its vertical value. The concavity of the curve cannot be guaranteed to remain consistent either and the prediction may overestimate and underestimate at different points in the domain of the solution. Heun's Method addresses this problem by considering the interval spanned by the tangent line segment as a whole. Taking a concave-up example, the left tangent prediction line underestimates the slope of the curve for the entire width of the interval from the current point to the next predicted point. If the tangent line at the right end point is considered (which can be estimated using Euler's Method), it has the opposite problem. The points along the tangent line of the left end point have vertical coordinates which all underestimate those that lie on the solution curve, including the right end point of the interval under consideration. The solution is to make the slope greater by some amount. Heun's Method considers the tangent lines to the solution curve at "both" ends of the interval, one which "overestimates", and one which "underestimates" the ideal vertical coordinates. A prediction line must be constructed based on the right end point tangent's slope alone, approximated using Euler's Method. If this slope is passed through the left end point of the interval, the result is evidently too steep to be used as an ideal prediction line and overestimates the ideal point. Therefore, the ideal point lies approximately halfway between the erroneous overestimation and underestimation, the average of the two slopes. Euler's Method is used to roughly estimate the coordinates of the next point in the solution, and with this knowledge, the original estimate is re-predicted or "corrected". 
Assuming that the quantity formula_7 on the right hand side of the equation can be thought of as the slope of the solution sought at any point formula_8, this can be combined with the Euler estimate of the next point to give the slope of the tangent line at the right end-point. Next, the average of both slopes is used to find the corrected coordinates of the right end of the interval. formula_9 formula_10 formula_11 Derivation. Using the principle that the slope of a line equates to the rise over the run, the coordinates at the end of the interval can be found using the following formula: formula_12 formula_13 formula_14, formula_15 formula_16 formula_17 formula_18 The accuracy of the Euler method improves only linearly as the step size is decreased, whereas Heun's method improves the accuracy quadratically. The scheme can be compared with the implicit trapezoidal method, but with formula_19 replaced by formula_20 in order to make it explicit. formula_1 is the result of one step of Euler's method on the same initial value problem. So, Heun's method is a predictor-corrector method with the forward Euler method as predictor and the trapezoidal rule as corrector. Runge–Kutta method. The improved Euler's method is a two-stage Runge–Kutta method, and can be written using a Butcher tableau (after John C. Butcher) with nodes 0 and 1, intermediate coefficient "a"21 = 1, and weights 1/2 and 1/2. The other method referred to as Heun's method (also known as Ralston's method) has the Butcher tableau with nodes 0 and 2/3, intermediate coefficient "a"21 = 2/3, and weights 1/4 and 3/4. This method minimizes the truncation error.
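The two formulas given for the intermediate value and the final approximation translate directly into code. The following Python sketch (function names and the test problem are illustrative) applies the Euler predictor followed by the trapezoidal corrector at each step:

```python
import numpy as np

def heun_step(f, t, y, h):
    """One step of Heun's method (the explicit trapezoidal rule)."""
    k1 = f(t, y)                      # slope at the left end point
    y_tilde = y + h * k1              # Euler predictor
    k2 = f(t + h, y_tilde)            # slope at the predicted right end point
    return y + 0.5 * h * (k1 + k2)    # trapezoidal corrector

def heun(f, t0, y0, h, n_steps):
    """Integrate y' = f(t, y), y(t0) = y0 over n_steps steps of size h."""
    t, y = t0, y0
    ts, ys = [t], [y]
    for _ in range(n_steps):
        y = heun_step(f, t, y, h)
        t += h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# Test problem y' = y, y(0) = 1, whose exact solution is exp(t).
ts, ys = heun(lambda t, y: y, 0.0, 1.0, h=0.1, n_steps=10)
print(ys[-1], np.exp(1.0))   # ~2.714 vs 2.718; the error shrinks quadratically in h
```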
[ { "math_id": 0, "text": "y'(t) = f(t,y(t)), \\qquad \\qquad y(t_0)=y_0, " }, { "math_id": 1, "text": "\\tilde{y}_{i+1}" }, { "math_id": 2, "text": "y_{i+1}" }, { "math_id": 3, "text": "\\tilde{y}_{i+1} = y_i + h f(t_i,y_i)" }, { "math_id": 4, "text": "y_{i+1} = y_i + \\frac{h}{2}[f(t_i, y_i) + f(t_{i+1},\\tilde{y}_{i+1})]," }, { "math_id": 5, "text": "h" }, { "math_id": 6, "text": "t_{i+1}=t_i+h" }, { "math_id": 7, "text": "\\textstyle f(x, y)" }, { "math_id": 8, "text": "\\textstyle (x, y) " }, { "math_id": 9, "text": "\\text{Slope}_{\\text{left}} = f(x_i, y_i)" }, { "math_id": 10, "text": "\\text{Slope}_{\\text{right}} = f(x_i + h, y_i + h f(x_i, y_i))" }, { "math_id": 11, "text": "\\text{Slope}_{\\text{ideal}} = \\frac{1}{2} (\\text{Slope}_{\\text{left}} + \\text{Slope}_{\\text{right}})" }, { "math_id": 12, "text": "\\text{Slope}_{\\text{ideal}} = \\frac{\\Delta y}{h} " }, { "math_id": 13, "text": "\\Delta y = h (\\text{Slope}_{\\text{ideal}})" }, { "math_id": 14, "text": "x_{i+1} = x_i + h" }, { "math_id": 15, "text": "\\textstyle y_{i+1} = y_i + \\Delta y" }, { "math_id": 16, "text": "y_{i+1} = y_i + h \\text{Slope}_{\\text{ideal}}" }, { "math_id": 17, "text": "y_{i+1} = y_{i} + \\frac{1}{2} h (\\text{Slope}_{\\text{left}} + \\text{Slope}_{\\text{right}})" }, { "math_id": 18, "text": "y_{i+1} = y_{i} + \\frac{h}{2}(f(x_i, y_i) + f(x_i + h, y_i + hf(x_i, y_i)))" }, { "math_id": 19, "text": "f(t_{i+1},y_{i+1})" }, { "math_id": 20, "text": "f(t_{i+1},\\tilde{y}_{i+1})" } ]
https://en.wikipedia.org/wiki?curid=11754125
1175666
Cograph
Graph formed by complementation and disjoint union In graph theory, a cograph, or complement-reducible graph, or "P"4-free graph, is a graph that can be generated from the single-vertex graph "K"1 by complementation and disjoint union. That is, the family of cographs is the smallest class of graphs that includes "K"1 and is closed under complementation and disjoint union. Cographs have been discovered independently by several authors since the 1970s. They have also been called D*-graphs, hereditary Dacey graphs (after the related work of James C. Dacey Jr. on orthomodular lattices), and 2-parity graphs. They have a simple structural decomposition involving disjoint union and complement graph operations that can be represented concisely by a labeled tree, and used algorithmically to efficiently solve many problems, such as finding a maximum clique, that are hard on more general graph classes. Special cases of the cographs include the complete graphs, complete bipartite graphs, cluster graphs, and threshold graphs. The cographs are, in turn, special cases of the distance-hereditary graphs, permutation graphs, comparability graphs, and perfect graphs. Definition. Recursive construction. Any cograph may be constructed using the following rules: any single-vertex graph is a cograph; if formula_0 is a cograph, then so is its complement formula_1; and if formula_0 and formula_2 are cographs, then so is their disjoint union formula_3. The cographs may be defined as the graphs that can be constructed using these operations, starting from the single-vertex graphs. Alternatively, instead of using the complement operation, one can use the join operation, which consists of forming the disjoint union formula_3 and then adding an edge between every pair of a vertex from formula_0 and a vertex from formula_2. Other characterizations. Several alternative characterizations of cographs can be given. Among them, a cograph is a graph that does not contain the path "P"4 on four vertices as an induced subgraph; that is, a graph in which, for every four vertices formula_4, if formula_5 and formula_6 are edges of the graph, then at least one of formula_7 and formula_8 is also an edge. Cotrees. A cotree is a tree in which the internal nodes are labeled with the numbers 0 and 1. Every cotree "T" defines a cograph "G" having the leaves of "T" as vertices, and in which the subtree rooted at each node of "T" corresponds to the induced subgraph in "G" defined by the set of leaves descending from that node: a subtree consisting of a single leaf corresponds to a single-vertex subgraph, a subtree rooted at a node labeled 0 corresponds to the disjoint union of the subgraphs defined by its children, and a subtree rooted at a node labeled 1 corresponds to the join of the subgraphs defined by its children (their disjoint union together with an edge between every two vertices coming from different children). An equivalent way of describing the cograph formed from a cotree is that two vertices are connected by an edge if and only if the lowest common ancestor of the corresponding leaves is labeled by 1. Conversely, every cograph can be represented in this way by a cotree. If we require the labels on any root-leaf path of this tree to alternate between 0 and 1, this representation is unique. Computational properties. Cographs may be recognized in linear time, and a cotree representation constructed, using modular decomposition, partition refinement, LexBFS, or split decomposition. Once a cotree representation has been constructed, many familiar graph problems may be solved via simple bottom-up calculations on the cotrees. For instance, to find the maximum clique in a cograph, compute in bottom-up order the maximum clique in each subgraph represented by a subtree of the cotree. For a node labeled 0, the maximum clique is the maximum among the cliques computed for that node's children. For a node labeled 1, the maximum clique is the union of the cliques computed for that node's children, and has size equal to the sum of the children's clique sizes. Thus, by alternately maximizing and summing values stored at each node of the cotree, we may compute the maximum clique size, and by alternately maximizing and taking unions, we may construct the maximum clique itself. 
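The bottom-up clique computation just described fits in a few lines of code. In the Python sketch below, a cotree is encoded as nested tuples, an illustrative encoding rather than a standard library format:

```python
def max_clique_size(node):
    """Maximum clique size of the cograph described by a cotree node.

    A node is either a leaf (any non-tuple value, representing one vertex)
    or a pair (label, children) with label 0 (disjoint union) or 1 (join).
    """
    if not isinstance(node, tuple):
        return 1                                   # a single vertex
    label, children = node
    sizes = [max_clique_size(child) for child in children]
    return sum(sizes) if label == 1 else max(sizes)

# Cotree of the complete bipartite graph K_{2,3}: the join (1-node) of two
# independent sets, each given by a 0-node over its leaves.
k23 = (1, [(0, ["a1", "a2"]), (0, ["b1", "b2", "b3"])])
print(max_clique_size(k23))   # 2: the largest clique in K_{2,3} is a single edge
```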
Similar bottom-up tree computations allow the maximum independent set, vertex coloring number, maximum clique cover, and Hamiltonicity (that is, the existence of a Hamiltonian cycle) to be computed in linear time from a cotree representation of a cograph. Because cographs have bounded clique-width, Courcelle's theorem may be used to test any property in the monadic second-order logic of graphs (MSO1) on cographs in linear time. The problem of testing whether a given graph is "k" vertices away and/or "t" edges away from a cograph is fixed-parameter tractable. Deciding if a graph can be "k"-edge-deleted to a cograph can be solved in O*(2.415"k") time, and "k"-edge-edited to a cograph in O*(4.612"k"). If the largest induced cograph subgraph of a graph can be found by deleting "k" vertices from the graph, it can be found in O*(3.30"k") time. Two cographs are isomorphic if and only if their cotrees (in the canonical form with no two adjacent vertices with the same label) are isomorphic. Because of this equivalence, one can determine in linear time whether two cographs are isomorphic, by constructing their cotrees and applying a linear time isomorphism test for labeled trees. If "H" is an induced subgraph of a cograph "G", then "H" is itself a cograph; the cotree for "H" may be formed by removing some of the leaves from the cotree for "G" and then suppressing nodes that have only one child. It follows from Kruskal's tree theorem that the relation of being an induced subgraph is a well-quasi-ordering on the cographs. Thus, if a subfamily of the cographs (such as the planar cographs) is closed under induced subgraph operations then it has a finite number of forbidden induced subgraphs. Computationally, this means that testing membership in such a subfamily may be performed in linear time, by using a bottom-up computation on the cotree of a given graph to test whether it contains any of these forbidden subgraphs. However, when the sizes of two cographs are both variable, testing whether one of them is an induced subgraph of the other is NP-complete. Cographs play a key role in algorithms for recognizing read-once functions. Some counting problems also become tractable when the input is restricted to be a cograph. For instance, there are polynomial-time algorithms to count the number of cliques or the number of maximum cliques in a cograph. Enumeration. The number of connected cographs with "n" vertices, for "n" = 1, 2, 3, ..., is: 1, 1, 2, 5, 12, 33, 90, 261, 766, 2312, 7068, 21965, 68954, ... (sequence in the OEIS) For "n" &gt; 1 there are the same number of disconnected cographs, because for every cograph exactly one of it or its complement graph is connected. Related graph families. Subclasses. Every complete graph "K""n" is a cograph, with a cotree consisting of a single 1-node and "n" leaves. Similarly, every complete bipartite graph "K""a","b" is a cograph. Its cotree is rooted at a 1-node which has two 0-node children, one with "a" leaf children and one with "b" leaf children. A Turán graph may be formed by the join of a family of equal-sized independent sets; thus, it is also a cograph, with a cotree rooted at a 1-node that has a child 0-node for each independent set. Every threshold graph is also a cograph. A threshold graph may be formed by repeatedly adding one vertex, either connected to all previous vertices or to none of them; each such operation is one of the disjoint union or join operations by which a cotree may be formed. Superclasses. 
The characterization of cographs by the property that every clique and maximal independent set have a nonempty intersection is a stronger version of the defining property of strongly perfect graphs, in which every induced subgraph contains an independent set that intersects all maximal cliques. In a cograph, every maximal independent set intersects all maximal cliques. Thus, every cograph is strongly perfect. The fact that cographs are "P"4-free implies that they are perfectly orderable. In fact, every vertex order of a cograph is a perfect order, which further implies that a maximum clique and a minimum colouring can be found in linear time with any greedy colouring and without the need for a cotree decomposition. Every cograph is a distance-hereditary graph, meaning that every induced path in a cograph is a shortest path. The cographs may be characterized among the distance-hereditary graphs as having diameter at most two in each connected component. Every cograph is also a comparability graph of a series-parallel partial order, obtained by replacing the disjoint union and join operations by which the cograph was constructed by disjoint union and ordinal sum operations on partial orders. Because strongly perfect graphs, perfectly orderable graphs, distance-hereditary graphs, and comparability graphs are all perfect graphs, cographs are also perfect. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "\\overline{G}" }, { "math_id": 2, "text": "H" }, { "math_id": 3, "text": "G\\cup H" }, { "math_id": 4, "text": "v_1,v_2,v_3,v_4" }, { "math_id": 5, "text": "\\{v_1,v_2\\},\\{v_2,v_3\\}" }, { "math_id": 6, "text": "\\{v_3,v_4\\}" }, { "math_id": 7, "text": "\\{v_1,v_3\\},\\{v_1,v_4\\}" }, { "math_id": 8, "text": "\\{v_2,v_4\\}" } ]
https://en.wikipedia.org/wiki?curid=1175666
11757994
Yamabe problem
The Yamabe problem refers to a conjecture in the mathematical field of differential geometry, which was resolved in the 1980s. It is a statement about the scalar curvature of Riemannian manifolds: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Let ("M","g") be a closed smooth Riemannian manifold. Then there exists a positive and smooth function "f" on "M" such that the Riemannian metric "fg" has constant scalar curvature. By computing a formula for how the scalar curvature of "fg" relates to that of g, this statement can be rephrased in the following form: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; The mathematician Hidehiko Yamabe gave the above statements as theorems and provided a proof; however, Neil Trudinger discovered an error in his proof. The problem of understanding whether the above statements are true or false became known as the Yamabe problem. The combined work of Yamabe, Trudinger, Thierry Aubin, and Richard Schoen provided an affirmative resolution to the problem in 1984. It is now regarded as a classic problem in geometric analysis, with the proof requiring new methods in the fields of differential geometry and partial differential equations. A decisive point in Schoen's ultimate resolution of the problem was an application of the positive energy theorem of general relativity, which is a purely differential-geometric mathematical theorem first proved (in a provisional setting) in 1979 by Schoen and Shing-Tung Yau. There has been more recent work due to Simon Brendle, Marcus Khuri, Fernando Codá Marques, and Schoen, dealing with the collection of all positive and smooth functions f such that, for a given Riemannian manifold ("M","g"), the metric "fg" has constant scalar curvature. Additionally, the Yamabe problem as posed in similar settings, such as for complete noncompact Riemannian manifolds, is not yet fully understood. The Yamabe problem in special cases. Here, we refer to a "solution of the Yamabe problem" on a Riemannian manifold formula_0 as a Riemannian metric g on M for which there is a positive smooth function formula_1 with formula_2 On a closed Einstein manifold. Let formula_0 be a smooth Riemannian manifold. Consider a positive smooth function formula_1 so that formula_3 is an arbitrary element of the smooth conformal class of formula_4 A standard computation shows formula_5 Taking the g-inner product with formula_6 results in formula_7 If formula_8 is assumed to be Einstein, then the left-hand side vanishes. If formula_9 is assumed to be closed, then one can do an integration by parts, recalling the Bianchi identity formula_10 to see formula_11 If g has constant scalar curvature, then the right-hand side vanishes. The consequent vanishing of the left-hand side proves the following fact, due to Obata (1971): &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Every solution to the Yamabe problem on a closed Einstein manifold is Einstein. Obata then went on to prove that, except in the case of the standard sphere with its usual constant-sectional-curvature metric, the only constant-scalar-curvature metrics in the conformal class of an Einstein metric (on a closed manifold) are constant multiples of the given metric. The proof proceeds by showing that the gradient of the conformal factor is actually a conformal Killing field. If the conformal factor is not constant, following flow lines of this gradient field, starting at a minimum of the conformal factor, then allows one to show that the manifold is conformally related to the cylinder formula_12, and hence has vanishing Weyl curvature. The non-compact case. 
A closely related question is the so-called "non-compact Yamabe problem", which asks: Is it true that on every smooth complete Riemannian manifold ("M","g") which is not compact, there exists a metric that is conformal to "g", has constant scalar curvature and is also complete? The answer is no, due to known counterexamples. Various additional criteria under which a solution to the Yamabe problem for a non-compact manifold can be shown to exist are known; however, obtaining a full understanding of when the problem can be solved in the non-compact case remains a topic of research.
[ { "math_id": 0, "text": "(M,\\overline{g})" }, { "math_id": 1, "text": "\\varphi:M\\to\\mathbb{R}," }, { "math_id": 2, "text": "g=\\varphi^{-2}\\overline{g}." }, { "math_id": 3, "text": "g=\\varphi^{-2}\\overline{g}" }, { "math_id": 4, "text": "\\overline{g}." }, { "math_id": 5, "text": "\\overline{R}_{ij}-\\frac{1}{n}\\overline{R}\\overline{g}_{ij}=R_{ij}-\\frac{1}{n}Rg_{ij}+\\frac{n-2}{\\varphi}\\Big(\\nabla_i\\nabla_j\\varphi+\\frac{1}{n}g_{ij}\\Delta\\varphi\\Big)." }, { "math_id": 6, "text": "\\textstyle\\varphi(\\operatorname{Ric}-\\frac{1}{n}Rg)" }, { "math_id": 7, "text": "\\varphi\\left\\langle\\overline{\\operatorname{Ric}}-\\frac{1}{n}\\overline{R}\\overline{g},\\operatorname{Ric}-\\frac{1}{n}Rg\\right\\rangle_g=\\varphi\\Big|\\operatorname{Ric}-\\frac{1}{n}Rg\\Big|_g^2+(n-2)\\Big(\\big\\langle\\operatorname{Ric},\\operatorname{Hess}\\varphi\\big\\rangle_g-\\frac{1}{n}R\\Delta\\varphi\\Big)." }, { "math_id": 8, "text": "\\overline{g}" }, { "math_id": 9, "text": "M" }, { "math_id": 10, "text": "\\textstyle\\operatorname{div}\\operatorname{Ric}=\\frac{1}{2}\\nabla R," }, { "math_id": 11, "text": "\\int_M \\varphi\\Big|\\operatorname{Ric}-\\frac{1}{n}Rg\\Big|^2\\,d\\mu_g=(n-2)\\Big(\\frac{1}{2}-\\frac{1}{n}\\Big)\\int_M \\langle\\nabla R,\\nabla\\varphi\\rangle\\,d\\mu_g." }, { "math_id": 12, "text": "S^{n-1}\\times \\mathbb{R}" } ]
https://en.wikipedia.org/wiki?curid=11757994
1176
Antisymmetric relation
Binary relation such that if A is related to B and is different from it then B is not related to A &lt;templatestyles src="Stack/styles.css"/&gt; In mathematics, a binary relation formula_0 on a set formula_1 is antisymmetric if there is no pair of "distinct" elements of formula_1 each of which is related by formula_0 to the other. More formally, formula_0 is antisymmetric precisely if for all formula_2 formula_3 or equivalently, formula_4 The definition of antisymmetry says nothing about whether formula_5 actually holds or not for any formula_6. An antisymmetric relation formula_0 on a set formula_1 may be reflexive (that is, formula_5 for all formula_7), irreflexive (that is, formula_5 for no formula_7), or neither reflexive nor irreflexive. A relation is asymmetric if and only if it is both antisymmetric and irreflexive. Examples. The divisibility relation on the natural numbers is an important example of an antisymmetric relation. In this context, antisymmetry means that the only way each of two numbers can be divisible by the other is if the two are, in fact, the same number; equivalently, if formula_8 and formula_9 are distinct and formula_8 is a factor of formula_10 then formula_9 cannot be a factor of formula_11 For example, 12 is divisible by 4, but 4 is not divisible by 12. The usual order relation formula_12 on the real numbers is antisymmetric: if for two real numbers formula_13 and formula_14 both inequalities formula_15 and formula_16 hold, then formula_13 and formula_14 must be equal. Similarly, the subset order formula_17 on the subsets of any given set is antisymmetric: given two sets formula_18 and formula_19 if every element in formula_18 also is in formula_20 and every element in formula_20 is also in formula_21 then formula_18 and formula_20 must contain all the same elements and therefore be equal: formula_22 A real-life example of a relation that is typically antisymmetric is "paid the restaurant bill of" (understood as restricted to a given occasion). Typically, some people pay their own bills, while others pay for their spouses or friends. As long as no two people pay each other's bills, the relation is antisymmetric. Properties. Partial and total orders are antisymmetric by definition. A relation can be both symmetric and antisymmetric (in this case, it must be coreflexive), and there are relations which are neither symmetric nor antisymmetric (for example, the "preys on" relation on biological species). Antisymmetry is different from asymmetry: a relation is asymmetric if and only if it is antisymmetric and irreflexive. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
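For a finite relation stored as a set of ordered pairs, the definition can be checked directly. The Python sketch below (names are illustrative) verifies the divisibility example from the text and contrasts it with a relation that is not antisymmetric:

```python
def is_antisymmetric(relation):
    """True if no two distinct elements are related to each other in both directions."""
    return all(a == b or (b, a) not in relation for (a, b) in relation)

# Divisibility on {1, ..., 6} is antisymmetric.
divides = {(a, b) for a in range(1, 7) for b in range(1, 7) if b % a == 0}
print(is_antisymmetric(divides))    # True

# "Differ by at most 1" relates distinct elements in both directions, so it is not.
near = {(a, b) for a in range(1, 7) for b in range(1, 7) if abs(a - b) <= 1}
print(is_antisymmetric(near))       # False
```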
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "a, b \\in X," }, { "math_id": 3, "text": "\\text{if } \\,aRb\\, \\text{ with } \\,a \\neq b\\, \\text{ then } \\,bRa\\, \\text{ must not hold}," }, { "math_id": 4, "text": "\\text{if } \\,aRb\\, \\text{ and } \\,bRa\\, \\text{ then } \\,a = b." }, { "math_id": 5, "text": "aRa" }, { "math_id": 6, "text": "a" }, { "math_id": 7, "text": "a \\in X" }, { "math_id": 8, "text": "n" }, { "math_id": 9, "text": "m" }, { "math_id": 10, "text": "m," }, { "math_id": 11, "text": "n." }, { "math_id": 12, "text": "\\,\\leq\\," }, { "math_id": 13, "text": "x" }, { "math_id": 14, "text": "y" }, { "math_id": 15, "text": "x \\leq y" }, { "math_id": 16, "text": "y \\leq x" }, { "math_id": 17, "text": "\\,\\subseteq\\," }, { "math_id": 18, "text": "A" }, { "math_id": 19, "text": "B," }, { "math_id": 20, "text": "B" }, { "math_id": 21, "text": "A," }, { "math_id": 22, "text": "A \\subseteq B \\text{ and } B \\subseteq A \\text{ implies } A = B" } ]
https://en.wikipedia.org/wiki?curid=1176
11763375
Concatenated error correction code
In coding theory, concatenated codes form a class of error-correcting codes that are derived by combining an inner code and an outer code. They were conceived in 1966 by Dave Forney as a solution to the problem of finding a code that has both exponentially decreasing error probability with increasing block length and polynomial-time decoding complexity. Concatenated codes became widely used in space communications in the 1970s. Background. The field of channel coding is concerned with sending a stream of data at the highest possible rate over a given communications channel, and then decoding the original data reliably at the receiver, using encoding and decoding algorithms that are feasible to implement in a given technology. Shannon's channel coding theorem shows that over many common channels there exist channel coding schemes that are able to transmit data reliably at all rates formula_0 less than a certain threshold formula_1, called the channel capacity of the given channel. In fact, the probability of decoding error can be made to decrease exponentially as the block length formula_2 of the coding scheme goes to infinity. However, the complexity of a naive optimum decoding scheme that simply computes the likelihood of every possible transmitted codeword increases exponentially with formula_2, so such an optimum decoder rapidly becomes infeasible. In his doctoral thesis, Dave Forney showed that concatenated codes could be used to achieve exponentially decreasing error probabilities at all data rates less than capacity, with decoding complexity that increases only polynomially with the code block length. Description. Let "C""in" be a ["n", "k", "d"] code, that is, a block code of length "n", dimension "k", minimum Hamming distance "d", and rate "r" = "k"/"n", over an alphabet "A": formula_3 Let "C""out" be a ["N", "K", "D"] code over an alphabet "B" with |"B"| = |"A"|"k" symbols: formula_4 The inner code "C""in" takes one of |"A"|"k" = |"B"| possible inputs, encodes into an "n"-tuple over "A", transmits, and decodes into one of |"B"| possible outputs. We regard this as a (super) channel which can transmit one symbol from the alphabet "B". We use this channel "N" times to transmit each of the "N" symbols in a codeword of "C""out". The "concatenation" of "C""out" (as outer code) with "C""in" (as inner code), denoted "C""out"∘"C""in", is thus a code of length "Nn" over the alphabet "A": formula_5 It maps each input message "m" = ("m"1, "m"2, ..., "m"K) to a codeword ("C""in"("m"'1), "C""in"("m"'2), ..., "C""in"("m"'N)), where ("m"'1, "m"'2, ..., "m"'N) = "C""out"("m"1, "m"2, ..., "m"K). The "key insight" in this approach is that if "C""in" is decoded using a maximum-likelihood approach (thus showing an exponentially decreasing error probability with increasing length), and "C""out" is a code with length "N" = 2"nr" that can be decoded in polynomial time of "N", then the concatenated code can be decoded in polynomial time of its combined length "n"2"nr" = "O"("N"⋅log("N")) and shows an exponentially decreasing error probability, even if "C""in" has exponential decoding complexity. This is discussed in more detail in section Decoding concatenated codes. In a generalization of above concatenation, there are "N" possible inner codes "C""in","i" and the "i"-th symbol in a codeword of "C""out" is transmitted across the inner channel using the "i"-th inner code. The Justesen codes are examples of generalized concatenated codes, where the outer code is a Reed–Solomon code. Properties. 1. 
The distance of the concatenated code "C""out"∘"C""in" is at least "dD", that is, it is a ["nN", "kK", "D"'] code with "D"' ≥ "dD". "Proof:" Consider two different messages "m"1 ≠ "m"2 ∈ "B""K". Let Δ denote the distance between two codewords. Then formula_6 Thus, there are at least "D" positions in which the sequence of "N" symbols of the codewords "C""out"("m"1) and "C""out"("m"2) differ. For these positions, denoted "i", we have formula_7 Consequently, there are at least "d"⋅"D" positions in the sequence of "n"⋅"N" symbols taken from the alphabet "A" in which the two codewords differ, and hence formula_8 2. If "C""out" and "C""in" are linear block codes, then "C""out"∘"C""in" is also a linear block code. This property can be easily shown based on the idea of defining a generator matrix for the concatenated code in terms of the generator matrices of "C""out" and "C""in". Decoding concatenated codes. A natural concept for a decoding algorithm for concatenated codes is to first decode the inner code and then the outer code. For the algorithm to be practical, it must be polynomial-time in the final block length. Consider that there is a polynomial-time unique decoding algorithm for the outer code. Now we have to find a polynomial-time decoding algorithm for the inner code. It is understood that polynomial running time here means that running time is polynomial in the final block length. The main idea is that if the inner block length is selected to be logarithmic in the size of the outer code then the decoding algorithm for the inner code may run in time exponential in the inner block length, and we can thus use an exponential-time but optimal maximum likelihood decoder (MLD) for the inner code. In detail, let the input to the decoder be the vector "y" = ("y"1, ..., "y""N") ∈ ("A""n")"N". Then the decoding algorithm is a two-step process: first, each block "y""i" is decoded with the maximum likelihood decoder of the inner code "C""in", producing a vector "y"' = ("y"'1, ..., "y"'"N") of outer symbols; second, the unique decoding algorithm for the outer code "C""out" is run on "y"'. Now, the time complexity of the first step is "O"("N"⋅exp("n")), where "n" = "O"(log("N")) is the inner block length. In other words, it is "N""O"(1) (i.e., polynomial-time) in terms of the outer block length "N". As the outer decoding algorithm in step two is assumed to run in polynomial time, the complexity of the overall decoding algorithm is polynomial-time as well. Remarks. The decoding algorithm described above can be used to correct all errors up to less than "dD"/4 in number. Using minimum distance decoding, the outer decoder can correct all inputs "y"' with less than "D"/2 symbols "y"'"i" in error. Similarly, the inner code can reliably correct an input "y""i" if less than "d"/2 inner symbols are erroneous. Thus, for an outer symbol "y"'"i" to be incorrect after inner decoding, at least "d"/2 inner symbols must have been in error, and for the outer code to fail this must have happened for at least "D"/2 outer symbols. Consequently, the total number of inner symbols that must be received incorrectly for the concatenated code to fail must be at least "d"/2⋅"D"/2 = "dD"/4. The algorithm also works if the inner codes are different, e.g., for Justesen codes. The generalized minimum distance algorithm, developed by Forney, can be used to correct up to "dD"/2 errors. It uses erasure information from the inner code to improve performance of the outer code, and was the first example of an algorithm using soft-decision decoding. Applications. 
Although a simple concatenation scheme was implemented already for the 1971 Mariner Mars orbiter mission, concatenated codes were starting to be regularly used for deep space communication with the Voyager program, which launched two space probes in 1977. Since then, concatenated codes became the workhorse for efficient error correction coding, and stayed so at least until the invention of turbo codes and LDPC codes. Typically, the inner code is not a block code but a soft-decision convolutional Viterbi-decoded code with a short constraint length. For the outer code, a longer hard-decision block code, frequently a Reed-Solomon code with eight-bit symbols, is used. The larger symbol size makes the outer code more robust to error bursts that can occur due to channel impairments, and also because erroneous output of the convolutional code itself is bursty. An interleaving layer is usually added between the two codes to spread error bursts across a wider range. The combination of an inner Viterbi convolutional code with an outer Reed–Solomon code (known as an RSV code) was first used in "Voyager 2", and it became a popular construction both within and outside of the space sector. It is still notably used today for satellite communications, such as the DVB-S digital television broadcast standard. In a looser sense, any (serial) combination of two or more codes may be referred to as a concatenated code. For example, within the DVB-S2 standard, a highly efficient LDPC code is combined with an algebraic outer code in order to remove any resilient errors left over from the inner LDPC code due to its inherent error floor. A simple concatenation scheme is also used on the compact disc (CD), where an interleaving layer between two Reed–Solomon codes of different sizes spreads errors across various blocks. Turbo codes: A parallel concatenation approach. The description above is given for what is now called a serially concatenated code. Turbo codes, as described first in 1993, implemented a parallel concatenation of two convolutional codes, with an interleaver between the two codes and an iterative decoder that passes information forth and back between the codes. This design has a better performance than any previously conceived concatenated codes. However, a key aspect of turbo codes is their iterated decoding approach. Iterated decoding is now also applied to serial concatenations in order to achieve higher coding gains, such as within serially concatenated convolutional codes (SCCCs). An early form of iterated decoding was implemented with two to five iterations in the "Galileo code" of the Galileo space probe. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
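As a toy illustration of the construction and of the two-step decoding described earlier, the sketch below concatenates two [3, 1, 3] binary repetition codes into a [9, 1, 9] code. This is a purely didactic Python example (all names are invented here), not the Reed–Solomon and convolutional construction used in practice:

```python
# Inner and outer codes are both [3, 1, 3] binary repetition codes, so the
# concatenated code is a [9, 1, 9] code (n = N = d = D = 3, k = K = 1, A = B = {0, 1}).

def rep_encode(symbol):
    return [symbol] * 3                       # C(b) = (b, b, b)

def rep_decode(word):
    return int(sum(word) > len(word) // 2)    # majority vote = ML decoding on a BSC

def concat_encode(bit):
    outer = rep_encode(bit)                                  # codeword of C_out over B
    return [a for sym in outer for a in rep_encode(sym)]     # inner-encode each symbol

def concat_decode(received):
    blocks = [received[i:i + 3] for i in range(0, 9, 3)]
    inner_decoded = [rep_decode(block) for block in blocks]  # step 1: inner MLD
    return rep_decode(inner_decoded)                         # step 2: outer decoding

codeword = concat_encode(1)
corrupted = list(codeword)
corrupted[0] ^= 1        # flip two of the nine transmitted bits
corrupted[4] ^= 1
print(concat_decode(corrupted))   # 1 -- any pattern of up to 2 < dD/4 errors is corrected
```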
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "C" }, { "math_id": 2, "text": "N" }, { "math_id": 3, "text": "C_{in}: A^k \\rightarrow A^n" }, { "math_id": 4, "text": "C_{out}: B^K \\rightarrow B^N" }, { "math_id": 5, "text": "C_{out} \\circ C_{in}: A^{kK} \\rightarrow A^{nN}" }, { "math_id": 6, "text": "\\Delta(C_{out}(m^1), C_{out}(m^2)) \\ge D." }, { "math_id": 7, "text": "\\Delta(C_{in}(C_{out}(m^1)_i), C_{in}(C_{out}(m^2)_i)) \\ge d." }, { "math_id": 8, "text": "\\Delta(C_{in}(C_{out}(m^1)), C_{in}(C_{out}(m^2))) \\ge dD." } ]
https://en.wikipedia.org/wiki?curid=11763375
11763521
Nucleate boiling
Type of boiling In fluid thermodynamics, nucleate boiling is a type of boiling that takes place when the surface temperature is hotter than the saturated fluid temperature by a certain amount but where the heat flux is below the critical heat flux. For water, as shown in the graph below, nucleate boiling occurs when the surface temperature is higher than the saturation temperature (TS) by between . The critical heat flux is the peak on the curve between nucleate boiling and transition boiling. The heat transfer from surface to liquid is greater than that in film boiling. Nucleate boiling is common in electric kettles and is responsible for the noise that is heard before bulk boiling occurs. It also occurs in water boilers where water is rapidly heated. Mechanism. Two different regimes may be distinguished in the nucleate boiling range. When the temperature difference is between approximately above TS, isolated bubbles form at nucleation sites and separate from the surface. This separation induces considerable fluid mixing near the surface, substantially increasing the convective heat transfer coefficient and the heat flux. In this regime, most of the heat transfer is through direct transfer from the surface to the liquid in motion at the surface and not through the vapor bubbles rising from the surface. Between above TS, a second flow regime may be observed. As more nucleation sites become active, increased bubble formation causes bubble interference and coalescence. In this region the vapor escapes as jets or columns which subsequently merge into plugs of vapor. Interference between the densely populated bubbles inhibits the motion of liquid near the surface. This is observed on the graph as a change in the direction of the gradient of the curve or an inflection in the boiling curve. After this point, the heat transfer coefficient starts to reduce as the surface temperature is further increased, although the product of the heat transfer coefficient and the temperature difference (the heat flux) is still increasing. When the relative increase in the temperature difference is balanced by the relative reduction in the heat transfer coefficient, a maximum heat flux is achieved, as observed at the peak in the graph. This is the critical heat flux. At this maximum, considerable vapor is being formed, making it difficult for the liquid to continuously wet the surface to receive heat from the surface. This causes the heat flux to reduce after this point. At extremes, film boiling, commonly known as the Leidenfrost effect, is observed. Steam bubbles form within the liquid in micro-cavities adjacent to the wall if the wall temperature at the heat transfer surface rises above the saturation temperature while the bulk of the liquid (in the heat exchanger) is subcooled. The bubbles grow until they reach some critical size, at which point they separate from the wall and are carried into the main fluid stream. There the bubbles collapse because the temperature of the bulk fluid is not as high as at the heat transfer surface, where the bubbles were created. This collapsing is also responsible for the sound a water kettle produces during heat-up, before the temperature at which bulk boiling occurs is reached. Heat transfer and mass transfer during nucleate boiling have a significant effect on the heat transfer rate. 
This heat transfer process helps to quickly and efficiently carry away the energy created at the heat transfer surface and is therefore sometimes desirable, for example in nuclear power plants, where liquid is used as a coolant. The effects of nucleate boiling take place at two locations: The nucleate boiling process has a complex nature. A limited number of experimental studies have provided valuable insights into boiling phenomena; however, these studies have often provided contradictory data, because the chaotic state of the fluid is not well captured by classical thermodynamic methods of calculation, and have not yet provided conclusive findings from which models and correlations could be developed. The nucleate boiling phenomenon still requires further study. Boiling heat transfer correlations. The nucleate boiling regime is important to engineers because of the high heat fluxes possible with moderate temperature differences. The data can be correlated by an equation of the form formula_0 where Nu is the Nusselt number, defined as: formula_1 in which the bubble Reynolds number is given by formula_2 Rohsenow has developed the first and most widely used correlation for nucleate boiling, formula_3 The variable n depends on the surface-fluid combination and typically has a value of 1.0 or 1.7. For example, water and nickel have a Csf of 0.006 and n of 1.0. Departure from nucleate boiling. If the heat flux of a boiling system is higher than the critical heat flux (CHF) of the system, the bulk fluid may boil, or in some cases, "regions" of the bulk fluid may boil where the fluid travels in small channels. Thus large bubbles form, sometimes blocking the passage of the fluid. This results in a departure from nucleate boiling (DNB) in which steam bubbles no longer break away from the solid surface of the channel, bubbles dominate the channel or surface, and the heat flux dramatically decreases. Vapor essentially insulates the bulk liquid from the hot surface. During DNB, the surface temperature must therefore increase substantially above the bulk fluid temperature in order to maintain a high heat flux. Avoiding the CHF is an engineering problem in heat transfer applications, such as nuclear reactors, where fuel plates must not be allowed to overheat. DNB may be avoided in practice by increasing the pressure of the fluid, increasing its flow rate, or by utilizing a lower temperature bulk fluid which has a higher CHF. If the bulk fluid temperature is too low or the pressure of the fluid is too high, nucleate boiling is, however, not possible. DNB is also known as transition boiling, unstable film boiling, and partial film boiling. For water boiling as shown on the graph, transition boiling occurs when the temperature difference between the surface and the boiling water is approximately above the TS. This corresponds to the high peak and the low peak on the boiling curve. The low point between transition boiling and film boiling is the Leidenfrost point. During transition boiling of water, the bubble formation is so rapid that a vapor film or blanket begins to form at the surface. However, at any point on the surface, the conditions may oscillate between film and nucleate boiling, but the fraction of the total surface covered by the film increases with increasing temperature difference. As the thermal conductivity of the vapor is much less than that of the liquid, the convective heat transfer coefficient and the heat flux reduce with increasing temperature difference. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathrm{Nu}_b = C_{fc} ( \\mathrm{Re}_b, \\mathrm{Pr}_L )" }, { "math_id": 1, "text": "\\mathrm{Nu}_b =\\frac{ (q/A)D_b }{ (T_s - T_\\mathrm{sat})k_L}" }, { "math_id": 2, "text": "\\mathrm{Re}_b = \\tfrac{D_bG_b}{\\mu _L}," }, { "math_id": 3, "text": "\\frac{q}{A} = \\mu_L h_{fg} \n\\left[ \\frac{ g(\\rho_L - \\rho_v) }{ \\sigma } \\right]^\\frac{1}{2}\n\\left[ \\frac{c_{pL}\\left( T_s -T_\\mathrm{sat} \\right)}{C_{sf}h_{fg} \\mathrm{Pr}_L^n} \\right]^3" } ]
https://en.wikipedia.org/wiki?curid=11763521
11764738
Bonnet's theorem
In classical mechanics, Bonnet's theorem states that if "n" different force fields each produce the same geometric orbit (say, an ellipse of given dimensions) albeit with different speeds "v"1, "v"2...,"v""n" at a given point "P", then the same orbit will be followed if the speed at point "P" equals formula_0 History. This theorem was first derived by Adrien-Marie Legendre in 1817, but it is named after Pierre Ossian Bonnet. Derivation. The shape of an orbit is determined only by the centripetal forces at each point of the orbit, which are the forces acting perpendicular to the orbit. By contrast, forces "along" the orbit change only the speed, but not the direction, of the velocity. Let the instantaneous radius of curvature at a point "P" on the orbit be denoted as "R". For the "k"th force field that produces that orbit, the force normal to the orbit "F""k" must provide the centripetal force formula_1 Adding all these forces together yields the equation formula_2 Hence, the combined force-field produces the same orbit if the speed at a point "P" is set equal to formula_0 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
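As a quick numerical illustration of the combined-speed formula: if two of the force fields produce the given orbit with speeds 3 and 4 at the point "P", then the combined field produces the same orbit with speed

$$v_{\mathrm{combined}} = \sqrt{3^{2} + 4^{2}} = 5$$

at "P", since the required centripetal force at "P" is simply the sum of the two individual centripetal forces.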
[ { "math_id": 0, "text": "\nv_{\\mathrm{combined}} = \\sqrt{v_{1}^{2} + v_{2}^{2} + \\cdots + v_{n}^{2}}\n" }, { "math_id": 1, "text": "\nF_{k} = \\frac{m}{R} v_{k}^{2}\n" }, { "math_id": 2, "text": "\n\\sum_{k=1}^{n} F_{k} = \\frac{m}{R} \\sum_{k=1}^{n} v_{k}^{2}\n" } ]
https://en.wikipedia.org/wiki?curid=11764738
11764750
Point diffraction interferometer
A point diffraction interferometer (PDI) is a type of common-path interferometer. Unlike an amplitude-splitting interferometer, such as a Michelson interferometer, which separates out an unaberrated beam and interferes this with the test beam, a common-path interferometer generates its own reference beam. In PDI systems, the test and reference beams travel the same or almost the same path. This design makes the PDI extremely useful when environmental isolation is not possible or a reduction in the number of precision optics is required. The reference beam is created from a portion of the test beam by diffraction from a small pinhole in a semitransparent coating. The principle of a PDI is shown in Figure 1. The device is similar to a spatial filter. Incident light is focused onto a semi-transparent mask (about 0.1% transmission). In the centre of the mask is a hole about the size of the Airy disc, and the beam is focused onto this hole with a Fourier-transforming lens. The zeroth order (the low frequencies in Fourier space) then passes through the hole and interferes with the rest of the beam. The transmission and the hole size are selected to balance the intensities of the test and reference beams. The device is similar in operation to phase-contrast microscopy. Development in PDI systems. PDI systems are a valuable tool for measuring the absolute surface characteristics of optical or reflective instruments non-destructively. The common path design eliminates the need for reference optics, which are known to superimpose their own surface form errors on the absolute surface form of a test object. This is a major disadvantage of double-path systems, such as Fizeau interferometers, as shown in Figure 2. Similarly, the common path design is resistant to ambient disturbances. The main criticisms of the original design are (1) that the required low transmission reduces the efficiency, and (2) when the beam becomes too aberrated, the intensity on-axis is reduced, and less light is available for the reference beam, leading to a loss of fringe contrast. The lowered transmission was also associated with a lower signal-to-noise ratio. These problems are largely overcome in the phase-shifting point diffraction interferometer designs, in which a grating or beamsplitter creates multiple, identical copies of the beam that is incident on an opaque mask. The test beam passes through a somewhat large hole or aperture in the membrane, without losses due to absorption; the reference beam is focused onto the pinhole for highest transmission. In the grating-based instance, phase-shifting is accomplished by translating the grating perpendicular to the rulings, while multiple images are recorded. Continued developments in phase-shifting PDI have achieved accuracies orders of magnitude better than those of standard Fizeau-based systems. Phase-shifting [see Interferometry] versions have been created to increase measurement resolution and efficiency. These include a diffraction grating interferometer by Kwon and the Phase-Shifting Point Diffraction Interferometer. Types of phase-shifting PDI systems. Phase-shifting PDI with single pinhole. Gary Sommargren proposed a point diffraction interferometer design which directly followed from the basic design, in which part of the diffracted wavefront was used for testing and the remaining part for detection, as shown in Figure 3. This design was a major upgrade to existing systems. The scheme could accurately measure the optical surface with variations of 1 nm. 
The phase shifting was obtained by moving the test part with a piezoelectric translation stage. An unwanted side effect of moving the test part is that the defocus also changes, distorting the fringes. Another downside of Sommargren's approach is that it produces low-contrast fringes, and any attempt to regulate the contrast also modifies the measured wavefront. PDI systems using optical fibres. In this type of point diffraction interferometer, the point source is a single-mode fiber. The end face is narrowed down to resemble a cone and is covered with metallic film to reduce the light spill. The fibres are arranged so that they generate spherical waves for both testing and referencing. The end of an optical fibre is known to generate spherical waves with an accuracy better than formula_0. Although optical-fibre-based PDIs provide some advancement over the single-pinhole-based system, they are difficult to manufacture and align. Two-beam phase-shifting PDI. Two-beam PDI offers a major advantage over other schemes by providing two independently steerable beams. Here, the test beam and reference beam are perpendicular to each other, and the intensity of the reference beam can be regulated. Similarly, arbitrary and stable phase shifts can be obtained relative to the test beam while keeping the test part static. The scheme, as shown in Figure 4, is easy to manufacture and provides user-friendly measuring conditions similar to Fizeau-type interferometers. At the same time, it offers additional benefits: the device is self-referencing, and can therefore be used in environments with a lot of vibrations or when no reference beam is available, such as in many adaptive optics and short-wavelength scenarios. Applications of PDI. Interferometry has been used for various quantitative characterisations of optical systems, indicating their overall performance. Traditionally, Fizeau interferometers have been used to measure optical or polished surface forms, but new advances in precision manufacturing have made industrial point diffraction interferometry possible. PDI is especially suited for high-resolution, high-accuracy measurements in environments ranging from laboratory conditions to noisy factory floors. The lack of reference optics makes the method suitable for visualising the absolute surface form of optical systems. Therefore, a PDI is uniquely suited to verifying the reference optics of other interferometers. It is also immensely useful in analysing optical assemblies used in laser-based systems. Characterising optics for UV lithography. Quality control of precision optics. Verifying the actual resolution of an optical assembly. Measuring the wavefront map produced by X-ray optics. PS-PDI can also be used to verify the rated resolution of space optics before deployment.
[ { "math_id": 0, "text": "\\lambda \\diagup 2000" } ]
https://en.wikipedia.org/wiki?curid=11764750
11764848
Penrose transform
In theoretical physics, the Penrose transform, introduced by Roger Penrose (1967, 1968, 1969), is a complex analogue of the Radon transform that relates massless fields on spacetime, or more precisely the space of solutions to massless field equations, to sheaf cohomology groups on complex projective space. The projective space in question is the twistor space, a geometrical space naturally associated to the original spacetime, and the twistor transform is also geometrically natural in the sense of integral geometry. The Penrose transform is a major component of classical twistor theory. Overview. Abstractly, the Penrose transform operates on a double fibration of a space "Y", over two spaces "X" and "Z" formula_0 In the classical Penrose transform, "Y" is the spin bundle, "X" is a compactified and complexified form of Minkowski space (which as a complex manifold is formula_1) and "Z" is the twistor space (which is formula_2). More generally examples come from double fibrations of the form formula_3 where "G" is a complex semisimple Lie group and "H"1 and "H"2 are parabolic subgroups. The Penrose transform operates in two stages. First, one pulls back the sheaf cohomology groups "H""r"("Z",F) to the sheaf cohomology "H""r"("Y",η−1F) on "Y"; in many cases where the Penrose transform is of interest, this pullback turns out to be an isomorphism. One then pushes the resulting cohomology classes down to "X"; that is, one investigates the direct image of a cohomology class by means of the Leray spectral sequence. The resulting direct image is then interpreted in terms of differential equations. In the case of the classical Penrose transform, the resulting differential equations are precisely the massless field equations for a given spin. Example. The classical example is given as follows The maps from "Y" to "X" and "Z" are the natural projections. Using spinor index notation, the Penrose transform gives a bijection between solutions to the spin formula_4 massless field equation formula_5 and the first sheaf cohomology group formula_6, where formula_7 is the Riemann sphere, formula_8 are the usual holomorphic line bundles over projective space, and the sheaves under consideration are the sheaves of sections of formula_8. Penrose–Ward transform. The Penrose–Ward transform is a nonlinear modification of the Penrose transform, introduced by , that (among other things) relates holomorphic vector bundles on 3-dimensional complex projective space CP3 to solutions of the self-dual Yang–Mills equations on S4. used this to describe instantons in terms of algebraic vector bundles on complex projective 3-space and explained how this could be used to classify instantons on a 4-sphere. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "Z\\xleftarrow{\\eta} Y \\xrightarrow{\\tau} X." }, { "math_id": 1, "text": "\\mathbf{Gr}(2,4)" }, { "math_id": 2, "text": "\\mathbb{P}^3" }, { "math_id": 3, "text": "G/H_1\\xleftarrow{\\eta} G/(H_1\\cap H_2) \\xrightarrow{\\tau} G/H_2" }, { "math_id": 4, "text": "\\pm n/2" }, { "math_id": 5, "text": "\\partial_A\\,^{A_1'}\\phi_{A_1'A_2'\\cdots A_n'} = 0" }, { "math_id": 6, "text": "H^1(\\mathbb{P}^1, \\mathcal{O}(\\pm n-2))" }, { "math_id": 7, "text": "\\mathbb{P}^1" }, { "math_id": 8, "text": "\\mathcal{O}(k)" } ]
https://en.wikipedia.org/wiki?curid=11764848
11766887
Curvature of a measure
In mathematics, the curvature of a measure defined on the Euclidean plane R2 is a quantification of how much the measure's "distribution of mass" is "curved". It is related to notions of curvature in geometry. In the form presented below, the concept was introduced in 1995 by the mathematician Mark S. Melnikov; accordingly, it may be referred to as the Melnikov curvature or Menger-Melnikov curvature. Melnikov and Verdera (1995) established a powerful connection between the curvature of measures and the Cauchy kernel. Definition. Let "μ" be a Borel measure on the Euclidean plane R2. Given three (distinct) points "x", "y" and "z" in R2, let "R"("x", "y", "z") be the radius of the Euclidean circle that joins all three of them, or +∞ if they are collinear. The Menger curvature "c"("x", "y", "z") is defined to be formula_0 with the natural convention that "c"("x", "y", "z") = 0 if "x", "y" and "z" are collinear. It is also conventional to extend this definition by setting "c"("x", "y", "z") = 0 if any of the points "x", "y" and "z" coincide. The Menger-Melnikov curvature "c"2("μ") of "μ" is defined to be formula_1 More generally, for "α" ≥ 0, define "c"2"α"("μ") by formula_2 One may also refer to the curvature of "μ" at a given point "x": formula_3 in which case formula_4 Relationship to the Cauchy kernel. In this section, R2 is thought of as the complex plane C. Melnikov and Verdera (1995) showed the precise relation of the boundedness of the Cauchy kernel to the curvature of measures. They proved that if there is some constant "C"0 such that formula_5 for all "x" in C and all "r" &gt; 0, then there is another constant "C", depending only on "C"0, such that formula_6 for all "ε" &gt; 0. Here "c""ε" denotes a truncated version of the Menger-Melnikov curvature in which the integral is taken only over those points "x", "y" and "z" such that formula_7 formula_8 formula_9 Similarly, formula_10 denotes a truncated Cauchy integral operator: for a measure "μ" on C and a point "z" in C, define formula_11 where the integral is taken over those points "ξ" in C with formula_12
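The pointwise quantity c(x, y, z) = 1/R(x, y, z) is easy to compute directly. The sketch below uses the elementary identity R = abc/(4·area) for the circumradius of a triangle with side lengths a, b, c, so that c = 4·area/(abc); it is an illustration of the definition only, following the stated conventions for collinear or coincident points.

```python
# Minimal sketch of the Menger curvature c(x, y, z) = 1/R(x, y, z) for three
# points in R^2 (math_id 0), using the circumradius identity R = abc / (4 * area),
# so that c = 4 * area / (a * b * c).  Collinear or coincident points give c = 0,
# as in the conventions stated above.
import math

def menger_curvature(x, y, z):
    a = math.dist(y, z)
    b = math.dist(x, z)
    c = math.dist(x, y)
    if a == 0.0 or b == 0.0 or c == 0.0:
        return 0.0  # coincident points
    # Twice the signed area of the triangle (x, y, z), via a cross product
    cross = (y[0] - x[0]) * (z[1] - x[1]) - (y[1] - x[1]) * (z[0] - x[0])
    area = abs(cross) / 2.0
    return 4.0 * area / (a * b * c)  # zero when the points are collinear

# Three points on the unit circle should give curvature 1 (circumradius 1).
print(menger_curvature((1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)))  # ~1.0
print(menger_curvature((0, 0), (1, 1), (2, 2)))               # collinear -> 0.0
```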
[ { "math_id": 0, "text": "c(x, y, z) = \\frac{1}{R(x, y, z)}," }, { "math_id": 1, "text": "c^{2} (\\mu) = \\iiint_{\\mathbb{R}^{2}} c(x, y, z)^{2} \\, \\mathrm{d} \\mu (x) \\mathrm{d} \\mu (y) \\mathrm{d} \\mu (z)." }, { "math_id": 2, "text": "c^{2 \\alpha} (\\mu) = \\iiint_{\\mathbb{R}^{2}} c(x, y, z)^{2 \\alpha} \\, \\mathrm{d} \\mu (x) \\mathrm{d} \\mu (y) \\mathrm{d} \\mu (z)." }, { "math_id": 3, "text": "c^{2} (\\mu; x) = \\iint_{\\mathbb{R}^{2}} c(x, y, z)^{2} \\, \\mathrm{d} \\mu (y) \\mathrm{d} \\mu (z)," }, { "math_id": 4, "text": "c^{2} (\\mu) = \\int_{\\mathbb{R}^{2}} c^{2} (\\mu; x) \\, \\mathrm{d} \\mu (x)." }, { "math_id": 5, "text": "\\mu(B_{r} (x)) \\leq C_{0} r" }, { "math_id": 6, "text": "\\left| 6 \\int_{\\mathbb{C}} | \\mathcal{C}_{\\varepsilon} (\\mu) (z) |^{2} \\, \\mathrm{d} \\mu (z) - c_{\\varepsilon}^{2} (\\mu) \\right| \\leq C \\| \\mu \\|" }, { "math_id": 7, "text": "| x - y | > \\varepsilon;" }, { "math_id": 8, "text": "| y - z | > \\varepsilon;" }, { "math_id": 9, "text": "| z - x | > \\varepsilon." }, { "math_id": 10, "text": "\\mathcal{C}_{\\varepsilon}" }, { "math_id": 11, "text": "\\mathcal{C}_{\\varepsilon} (\\mu) (z) = \\int \\frac{1}{\\xi - z} \\, \\mathrm{d} \\mu (\\xi)," }, { "math_id": 12, "text": "| \\xi - z | > \\varepsilon." } ]
https://en.wikipedia.org/wiki?curid=11766887
11771113
Gauss's continued fraction
In complex analysis, Gauss's continued fraction is a particular class of continued fractions derived from hypergeometric functions. It was one of the first analytic continued fractions known to mathematics, and it can be used to represent several important elementary functions, as well as some of the more complicated transcendental functions. History. Lambert published several examples of continued fractions in this form in 1768, and both Euler and Lagrange investigated similar constructions, but it was Carl Friedrich Gauss who utilized the algebra described in the next section to deduce the general form of this continued fraction, in 1813. Although Gauss gave the form of this continued fraction, he did not give a proof of its convergence properties. Bernhard Riemann and L.W. Thomé obtained partial results, but the final word on the region in which this continued fraction converges was not given until 1901, by Edward Burr Van Vleck. Derivation. Let formula_0 be a sequence of analytic functions so that formula_1 for all formula_2, where each formula_3 is a constant. Then formula_4 Setting formula_5 formula_6 So formula_7 Repeating this ad infinitum produces the continued fraction expression formula_8 In Gauss's continued fraction, the functions formula_9 are hypergeometric functions of the form formula_10, formula_11, and formula_12, and the equations formula_13 arise as identities between functions where the parameters differ by integer amounts. These identities can be proven in several ways, for example by expanding out the series and comparing coefficients, or by taking the derivative in several ways and eliminating it from the equations generated. The series 0F1. The simplest case involves formula_14 Starting with the identity formula_15 we may take formula_16 giving formula_17 or formula_18 This expansion converges to the meromorphic function defined by the ratio of the two convergent series (provided, of course, that "a" is neither zero nor a negative integer). The series 1F1. The next case involves formula_19 for which the two identities formula_20 formula_21 are used alternately. Let formula_22 formula_23 formula_24 formula_25 formula_26 etc. This gives formula_13 where formula_27, producing formula_28 or formula_29 Similarly formula_30 or formula_31 Since formula_32, setting "a" to 0 and replacing "b" + 1 with "b" in the first continued fraction gives a simplified special case: formula_33 The series 2F1. The final case involves formula_34 Again, two identities are used alternately. formula_35 formula_36 These are essentially the same identity with "a" and "b" interchanged. Let formula_37 formula_38 formula_39 formula_40 formula_41 etc. This gives formula_13 where formula_42, producing formula_43 or formula_44 Since formula_45, setting "a" to 0 and replacing "c" + 1 with "c" gives a simplified special case of the continued fraction: formula_46 Convergence properties. In this section, the cases where one or more of the parameters is a negative integer are excluded, since in these cases either the hypergeometric series are undefined or that they are polynomials so the continued fraction terminates. Other trivial exceptions are excluded as well. In the cases formula_10 and formula_11, the series converge everywhere so the fraction on the left hand side is a meromorphic function. The continued fractions on the right hand side will converge uniformly on any closed and bounded set that contains no poles of this function. 
In the case formula_12, the radius of convergence of the series is 1 and the fraction on the left hand side is a meromorphic function within this circle. The continued fractions on the right hand side will converge to the function everywhere inside this circle. Outside the circle, the continued fraction represents the analytic continuation of the function to the complex plane with the positive real axis, from +1 to the point at infinity removed. In most cases +1 is a branch point and the line from +1 to positive infinity is a branch cut for this function. The continued fraction converges to a meromorphic function on this domain, and it converges uniformly on any closed and bounded subset of this domain that does not contain any poles. Applications. The series 0"F"1. We have formula_47 formula_48 so formula_49 This particular expansion is known as Lambert's continued fraction and dates back to 1768. It easily follows that formula_50 The expansion of tanh can be used to prove that "e""n" is irrational for every non-zero integer "n" (which is alas not enough to prove that "e" is transcendental). The expansion of tan was used by both Lambert and Legendre to prove that π is irrational. The Bessel function formula_51 can be written formula_52 from which it follows formula_53 These formulas are also valid for every complex "z". The series 1F1. Since formula_54, formula_55 formula_56 formula_57 With some manipulation, this can be used to prove the simple continued fraction representation of "e", formula_58 The error function erf ("z"), given by formula_59 can also be computed in terms of Kummer's hypergeometric function: formula_60 By applying the continued fraction of Gauss, a useful expansion valid for every complex number "z" can be obtained: formula_61 A similar argument can be made to derive continued fraction expansions for the Fresnel integrals, for the Dawson function, and for the incomplete gamma function. A simpler version of the argument yields two useful continued fraction expansions of the exponential function. The series 2F1. From formula_62 formula_63 It is easily shown that the Taylor series expansion of arctan "z" in a neighborhood of zero is given by formula_64 The continued fraction of Gauss can be applied to this identity, yielding the expansion formula_65 which converges to the principal branch of the inverse tangent function on the cut complex plane, with the cut extending along the imaginary axis from "i" to the point at infinity, and from −"i" to the point at infinity. This particular continued fraction converges fairly quickly when "z" = 1, giving the value π/4 to seven decimal places by the ninth convergent. The corresponding series formula_66 converges much more slowly, with more than a million terms needed to yield seven decimal places of accuracy. Variations of this argument can be used to produce continued fraction expansions for the natural logarithm, the arcsin function, and the generalized binomial series. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
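As a numerical illustration of the applications above, the sketch below evaluates the continued fraction for arctan z (the expansion quoted in the 2F1 section) by truncating it at a finite depth and working from the bottom up, then compares the result with the library arctangent. The truncation depth is an arbitrary choice made for illustration.

```python
# A small numerical sketch of the continued fraction for arctan z quoted above
# (math_id 65): arctan z = z / (1 + (1z)^2 / (3 + (2z)^2 / (5 + ...))).
# The fraction is truncated at a finite depth and evaluated from the bottom up.
import math

def arctan_cf(z, depth=20):
    # Innermost partial denominator is 2*depth + 1; work outward from there.
    value = 2 * depth + 1
    for k in range(depth, 0, -1):
        value = (2 * k - 1) + (k * z) ** 2 / value
    return z / value

for z in (0.3, 1.0, 2.0):
    print(z, arctan_cf(z), math.atan(z))
```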
[ { "math_id": 0, "text": "f_0, f_1, f_2, \\dots" }, { "math_id": 1, "text": "f_{i-1} - f_i = k_i\\,z\\,f_{i+1}" }, { "math_id": 2, "text": "i > 0" }, { "math_id": 3, "text": "k_i" }, { "math_id": 4, "text": "\\frac{f_{i-1}}{f_i} = 1 + k_i z \\frac{f_{i+1}}{{f_i}}, \\text{ and so } \\frac{f_i}{f_{i-1}} = \\frac{1}{1 + k_i z \\frac{f_{i+1}}{{f_i}}}" }, { "math_id": 5, "text": "g_i = f_i / f_{i-1}," }, { "math_id": 6, "text": "g_i = \\frac{1}{1 + k_i z g_{i+1}}," }, { "math_id": 7, "text": "g_1 = \\frac{f_1}{f_0} = \\cfrac{1}{1 + k_1 z g_2} = \\cfrac{1}{1 + \\cfrac{k_1 z}{1 + k_2 z g_3}}\n = \\cfrac{1}{1 + \\cfrac{k_1 z}{1 + \\cfrac{k_2 z}{1 + k_3 z g_4}}} = \\cdots.\\ " }, { "math_id": 8, "text": "\\frac{f_1}{f_0} = \\cfrac{1}{1 + \\cfrac{k_1 z}{1 + \\cfrac{k_2 z}{1 + \\cfrac{k_3 z}{1 + {}\\ddots}}}}" }, { "math_id": 9, "text": "f_i" }, { "math_id": 10, "text": "{}_0F_1" }, { "math_id": 11, "text": "{}_1F_1" }, { "math_id": 12, "text": "{}_2F_1" }, { "math_id": 13, "text": "f_{i-1} - f_i = k_i z f_{i+1}" }, { "math_id": 14, "text": "\\,_0F_1(a;z) = 1 + \\frac{1}{a\\,1!}z + \\frac{1}{a(a+1)\\,2!}z^2 + \\frac{1}{a(a+1)(a+2)\\,3!}z^3 + \\cdots. " }, { "math_id": 15, "text": "\\,_0F_1(a-1;z)-\\,_0F_1(a;z) = \\frac{z}{a(a-1)}\\,_0F_1(a+1;z)," }, { "math_id": 16, "text": "f_i = {}_0F_1(a+i;z),\\,k_i = \\tfrac{1}{(a+i)(a+i-1)}," }, { "math_id": 17, "text": "\\frac{\\,_0F_1(a+1;z)}{\\,_0F_1(a;z)} = \\cfrac{1}{1 + \\cfrac{\\frac{1}{a(a+1)}z}\n{1 + \\cfrac{\\frac{1}{(a+1)(a+2)}z}{1 + \\cfrac{\\frac{1}{(a+2)(a+3)}z}{1 + {}\\ddots}}}}" }, { "math_id": 18, "text": "\\frac{\\,_0F_1(a+1;z)}{a\\,_0F_1(a;z)} = \\cfrac{1}{a + \\cfrac{z}\n{(a+1) + \\cfrac{z}{(a+2) + \\cfrac{z}{(a+3) + {}\\ddots}}}}." }, { "math_id": 19, "text": "{}_1F_1(a;b;z) = 1 + \\frac{a}{b\\,1!}z + \\frac{a(a+1)}{b(b+1)\\,2!}z^2 + \\frac{a(a+1)(a+2)}{b(b+1)(b+2)\\,3!}z^3 + \\cdots" }, { "math_id": 20, "text": "\\,_1F_1(a;b-1;z)-\\,_1F_1(a+1;b;z) = \\frac{(a-b+1)z}{b(b-1)}\\,_1F_1(a+1;b+1;z)" }, { "math_id": 21, "text": "\\,_1F_1(a;b-1;z)-\\,_1F_1(a;b;z) = \\frac{az}{b(b-1)}\\,_1F_1(a+1;b+1;z)" }, { "math_id": 22, "text": "f_0(z) = \\,_1F_1(a;b;z)," }, { "math_id": 23, "text": "f_1(z) = \\,_1F_1(a+1;b+1;z)," }, { "math_id": 24, "text": "f_2(z) = \\,_1F_1(a+1;b+2;z)," }, { "math_id": 25, "text": "f_3(z) = \\,_1F_1(a+2;b+3;z)," }, { "math_id": 26, "text": "f_4(z) = \\,_1F_1(a+2;b+4;z)," }, { "math_id": 27, "text": "k_1=\\tfrac{a-b}{b(b+1)}, k_2=\\tfrac{a+1}{(b+1)(b+2)}, k_3=\\tfrac{a-b-1}{(b+2)(b+3)}, k_4=\\tfrac{a+2}{(b+3)(b+4)}" }, { "math_id": 28, "text": "\\frac{{}_1F_1(a+1;b+1;z)}{{}_1F_1(a;b;z)} = \\cfrac{1}{1 + \\cfrac{\\frac{a-b}{b(b+1)} z}{1 + \\cfrac{\\frac{a+1}{(b+1)(b+2)} z}{1 + \\cfrac{\\frac{a-b-1}{(b+2)(b+3)} z}{1 + \\cfrac{\\frac{a+2}{(b+3)(b+4)} z}{1 + {}\\ddots}}}}}" }, { "math_id": 29, "text": "\\frac{{}_1F_1(a+1;b+1;z)}{b{}_1F_1(a;b;z)} = \\cfrac{1}{b + \\cfrac{(a-b) z}{(b+1) + \\cfrac{(a+1) z}{(b+2) + \\cfrac{(a-b-1) z}{(b+3) + \\cfrac{(a+2) z}{(b+4) + {}\\ddots}}}}}" }, { "math_id": 30, "text": "\\frac{{}_1F_1(a;b+1;z)}{{}_1F_1(a;b;z)} = \\cfrac{1}{1 + \\cfrac{\\frac{a}{b(b+1)} z}{1 + \\cfrac{\\frac{a-b-1}{(b+1)(b+2)} z}{1 + \\cfrac{\\frac{a+1}{(b+2)(b+3)} z}{1 + \\cfrac{\\frac{a-b-2}{(b+3)(b+4)} z}{1 + {}\\ddots}}}}}" }, { "math_id": 31, "text": "\\frac{{}_1F_1(a;b+1;z)}{b{}_1F_1(a;b;z)} = \\cfrac{1}{b + \\cfrac{a z}{(b+1) + \\cfrac{(a-b-1) z}{(b+2) + \\cfrac{(a+1) z}{(b+3) + \\cfrac{(a-b-2) z}{(b+4) + {}\\ddots}}}}}" }, { "math_id": 32, "text": "{}_1F_1(0;b;z)=1" }, { "math_id": 33, "text": "{}_1F_1(1;b;z) = 
\\cfrac{1}{1 + \\cfrac{-z}{b + \\cfrac{z}{(b+1) + \\cfrac{-b z}{(b+2) + \\cfrac{2z}{(b+3) + \\cfrac{-(b+1)z}{(b+4) + {}\\ddots}}}}}}" }, { "math_id": 34, "text": "{}_2F_1(a,b;c;z) = 1 + \\frac{ab}{c\\,1!}z + \\frac{a(a+1)b(b+1)}{c(c+1)\\,2!}z^2 + \\frac{a(a+1)(a+2)b(b+1)(b+2)}{c(c+1)(c+2)\\,3!}z^3 + \\cdots.\\," }, { "math_id": 35, "text": "\\,_2F_1(a,b;c-1;z)-\\,_2F_1(a+1,b;c;z) = \\frac{(a-c+1)bz}{c(c-1)}\\,_2F_1(a+1,b+1;c+1;z), " }, { "math_id": 36, "text": "\\,_2F_1(a,b;c-1;z)-\\,_2F_1(a,b+1;c;z) = \\frac{(b-c+1)az}{c(c-1)}\\,_2F_1(a+1,b+1;c+1;z). " }, { "math_id": 37, "text": "f_0(z) = \\,_2F_1(a,b;c;z)," }, { "math_id": 38, "text": "f_1(z) = \\,_2F_1(a+1,b;c+1;z)," }, { "math_id": 39, "text": "f_2(z) = \\,_2F_1(a+1,b+1;c+2;z)," }, { "math_id": 40, "text": "f_3(z) = \\,_2F_1(a+2,b+1;c+3;z)," }, { "math_id": 41, "text": "f_4(z) = \\,_2F_1(a+2,b+2;c+4;z)," }, { "math_id": 42, "text": "k_1=\\tfrac{(a-c)b}{c(c+1)},\nk_2=\\tfrac{(b-c-1)(a+1)}{(c+1)(c+2)}, k_3=\\tfrac{(a-c-1)(b+1)}{(c+2)(c+3)}, k_4=\\tfrac{(b-c-2)(a+2)}{(c+3)(c+4)}" }, { "math_id": 43, "text": "\\frac{{}_2F_1(a+1,b;c+1;z)}{{}_2F_1(a,b;c;z)} = \\cfrac{1}{1 + \\cfrac{\\frac{(a-c)b}{c(c+1)} z}{1 + \\cfrac{\\frac{(b-c-1)(a+1)}{(c+1)(c+2)} z}{1 + \\cfrac{\\frac{(a-c-1)(b+1)}{(c+2)(c+3)} z}{1 + \\cfrac{\\frac{(b-c-2)(a+2)}{(c+3)(c+4)} z}{1 + {}\\ddots}}}}}" }, { "math_id": 44, "text": "\\frac{{}_2F_1(a+1,b;c+1;z)}{c{}_2F_1(a,b;c;z)} = \\cfrac{1}{c + \\cfrac{(a-c)b z}{(c+1) + \\cfrac{(b-c-1)(a+1) z}{(c+2) + \\cfrac{(a-c-1)(b+1) z}{(c+3) + \\cfrac{(b-c-2)(a+2) z}{(c+4) + {}\\ddots}}}}}" }, { "math_id": 45, "text": "{}_2F_1(0,b;c;z)=1" }, { "math_id": 46, "text": "{}_2F_1(1,b;c;z) = \\cfrac{1}{1 + \\cfrac{-b z}{c + \\cfrac{(b-c) z}{(c+1) + \\cfrac{-c(b+1) z}{(c+2) + \\cfrac{2(b-c-1) z}{(c+3) + \\cfrac{-(c+1)(b+2) z}{(c+4) + {}\\ddots}}}}}}" }, { "math_id": 47, "text": "\\cosh(z) = \\,_0F_1({\\tfrac{1}{2}};{\\tfrac{z^2}{4}})," }, { "math_id": 48, "text": "\\sinh(z) = z\\,_0F_1({\\tfrac{3}{2}};{\\tfrac{z^2}{4}})," }, { "math_id": 49, "text": "\\tanh(z) = \\frac{z\\,_0F_1({\\tfrac{3}{2}};{\\tfrac{z^2}{4}})}{\\,_0F_1({\\tfrac{1}{2}};{\\tfrac{z^2}{4}})}\n= \\cfrac{z/2}{\\tfrac{1}{2} + \\cfrac{\\tfrac{z^2}{4}}{\\tfrac{3}{2} + \\cfrac{\\tfrac{z^2}{4}}{\\tfrac{5}{2} + \\cfrac{\\tfrac{z^2}{4}}{\\tfrac{7}{2} + {}\\ddots}}}} = \\cfrac{z}{1 + \\cfrac{z^2}{3 + \\cfrac{z^2}{5 + \\cfrac{z^2}{7 + {}\\ddots}}}}." }, { "math_id": 50, "text": "\\tan(z) = \\cfrac{z}{1 - \\cfrac{z^2}{3 - \\cfrac{z^2}{5 - \\cfrac{z^2}{7 - {}\\ddots}}}}." }, { "math_id": 51, "text": "J_\\nu" }, { "math_id": 52, "text": "J_\\nu(z) = \\frac{(\\tfrac{1}{2}z)^\\nu}{\\Gamma(\\nu+1)}\\,_0F_1(\\nu+1;-\\frac{z^2}{4})," }, { "math_id": 53, "text": "\\frac{J_\\nu(z)}{J_{\\nu-1}(z)}=\\cfrac{z}{2\\nu - \\cfrac{z^2}{2(\\nu+1) - \\cfrac{z^2}{2(\\nu+2) - \\cfrac{z^2}{2(\\nu+3) - {}\\ddots}}}}." }, { "math_id": 54, "text": "e^z = {}_1F_1(1;1;z)" }, { "math_id": 55, "text": "1/e^z = e^{-z}" }, { "math_id": 56, "text": "e^z = \\cfrac{1}{1 + \\cfrac{-z}{1 + \\cfrac{z}{2 + \\cfrac{-z}{3 + \\cfrac{2z}{4 + \\cfrac{-2z}{5 + {}\\ddots}}}}}}" }, { "math_id": 57, "text": "e^z = 1 + \\cfrac{z}{1 + \\cfrac{-z}{2 + \\cfrac{z}{3 + \\cfrac{-2z}{4 + \\cfrac{2z}{5 + {}\\ddots}}}}}." 
}, { "math_id": 58, "text": "e=2+\\cfrac{1}{1+\\cfrac{1}{2+\\cfrac{1}{1+\\cfrac{1}{1+\\cfrac{1}{4+{}\\ddots}}}}}" }, { "math_id": 59, "text": "\n\\operatorname{erf}(z) = \\frac{2}{\\sqrt{\\pi}}\\int_0^z e^{-t^2} \\, dt,\n" }, { "math_id": 60, "text": "\n\\operatorname{erf}(z) = \\frac{2z}{\\sqrt{\\pi}} e^{-z^2} \\,_1F_1(1;{\\scriptstyle\\frac{3}{2}};z^2).\n" }, { "math_id": 61, "text": "\n\\frac{\\sqrt{\\pi}}{2} e^{z^2} \\operatorname{erf}(z) = \\cfrac{z}{1 - \\cfrac{z^2}{\\frac{3}{2} +\n\\cfrac{z^2}{\\frac{5}{2} - \\cfrac{\\frac{3}{2}z^2}{\\frac{7}{2} + \\cfrac{2z^2}{\\frac{9}{2} -\n\\cfrac{\\frac{5}{2}z^2}{\\frac{11}{2} + \\cfrac{3z^2}{\\frac{13}{2} -\n\\cfrac{\\frac{7}{2}z^2}{\\frac{15}{2} + - \\ddots}}}}}}}}.\n" }, { "math_id": 62, "text": "(1-z)^{-b}={}_1F_0(b;;z)=\\,_2F_1(1,b;1;z)," }, { "math_id": 63, "text": "(1-z)^{-b} = \\cfrac{1}{1 + \\cfrac{-b z}{1 + \\cfrac{(b-1) z}{2 + \\cfrac{-(b+1) z}{3 + \\cfrac{2(b-2) z}{4 + {}\\ddots}}}}}" }, { "math_id": 64, "text": "\n\\arctan z = zF({\\scriptstyle\\frac{1}{2}},1;{\\scriptstyle\\frac{3}{2}};-z^2).\n" }, { "math_id": 65, "text": "\n\\arctan z = \\cfrac{z} {1+\\cfrac{(1z)^2} {3+\\cfrac{(2z)^2} {5+\\cfrac{(3z)^2} {7+\\cfrac{(4z)^2} {9+\\ddots}}}}},\n" }, { "math_id": 66, "text": "\n\\frac{\\pi}{4} = \\cfrac{1} {1+\\cfrac{1^2} {2+\\cfrac{3^2} {2+\\cfrac{5^2} {2+\\ddots}}}} \n= 1 - \\frac{1}{3} + \\frac{1}{5} - \\frac{1}{7} \\pm \\cdots\n" } ]
https://en.wikipedia.org/wiki?curid=11771113
11772928
Weibull modulus
The Weibull modulus is a dimensionless parameter of the Weibull distribution. It represents the width of a probability density function (PDF) in which a higher modulus is a characteristic of a narrower distribution of values. Use case examples include biological and brittle material failure analysis, where the modulus is used to describe the variability of failure strength for materials. Definition. The Weibull distribution, represented as a cumulative distribution function (CDF), is defined by: formula_0 in which "m" is the Weibull modulus. formula_1 is a parameter found during the fit of data to the Weibull distribution and represents an input value below which approximately 63% (that is, 1 − 1/e) of the data is encompassed. As "m" increases, the CDF distribution more closely resembles a step function at formula_1, which correlates with a sharper peak in the probability density function (PDF) defined by: formula_2 Failure analysis often uses this distribution, as a CDF of the probability of failure "F" of a sample, as a function of applied stress σ, in the form: formula_3 The failure stress of the sample, σ, is substituted for the formula_4 property in the above equation. The initial property formula_5 is assumed to be 0, an unstressed, equilibrium state of the material. In the plotted figure of the Weibull CDF, it is worth noting that the plotted functions all intersect at a stress value of 50 MPa, the characteristic strength for the distributions, even though the values of the Weibull moduli vary. It is also worth noting in the plotted figure of the Weibull PDF that a higher Weibull modulus results in a steeper slope within the plot. The Weibull distribution can also be multi-modal, in which case there would be multiple reported formula_1 values and multiple reported moduli, "m". The CDF for a bimodal Weibull distribution has the following form, when applied to materials failure analysis: formula_6 This represents a material which fails by two different modes. In this equation "m"1 is the modulus for the first mode, and "m2" is the modulus for the second mode. Φ is the fraction of the sample set which fails by the first mode. The corresponding PDF is defined by: formula_7 Examples of a bimodal Weibull PDF and CDF are plotted in the figures of this article with values of the characteristic strength being 40 and 120 MPa, the Weibull moduli being 4 and 10, and the value of Φ being 0.5, corresponding to 50% of the specimens failing by each failure mode. Linearization of the CDF. The complement of the cumulative Weibull distribution function can be expressed as: formula_8 where P corresponds to the probability of survival of a specimen for a given stress value. Thus, it follows that: formula_9 where m is the Weibull modulus. If the probability is plotted vs the stress, we find that the graph is sigmoidal, as shown in the figure above. Taking advantage of the fact that the natural logarithm is the inverse of the exponential function, the above equation can be rearranged to: formula_10 This, using the properties of logarithms, can also be expressed as: formula_11 When the left side of this equation is plotted as a function of the natural logarithm of stress, a linear plot can be created which has a slope equal to the Weibull modulus, m, and an x-intercept of formula_12. Looking at the plotted linearization of the CDFs from above, it can be seen that all of the lines intersect the x-axis at the same point because all of the functions have the same value of the characteristic strength. The slopes vary because of the differing values of the Weibull moduli. 
Measurement. Standards organizations have created multiple standards for measuring and reporting values of Weibull parameters, along with other statistical analyses of strength data: When applying a Weibull distribution to a set of data the data points must first be put in ranked order. For the use case of failure analysis specimens' failure strengths are ranked in ascending order, i.e. from lowest to greatest strength. A probability of failure is then assigned to each failure strength measured, ASTM C1239-13 uses the following formula: formula_13 where formula_14 is the specimen number as ranked and formula_15 is the total number of specimens in the sample. From there formula_16 can be plotted against failure strength to obtain a Weibull CDF. The Weibull parameters, modulus and characteristic strength, can be obtained from fitting or using the linearization method detailed above. Example uses from published work. Weibull statistics are often used for ceramics and other brittle materials. They have also been applied to other fields as well such as meteorology where wind speeds are often described using Weibull statistics. Ceramics and brittle materials. For ceramics and other brittle materials, the maximum stress that a sample can be measured to withstand before failure may vary from specimen to specimen, even under identical testing conditions. This is related to the distribution of physical flaws present in the surface or body of the brittle specimen, since brittle failure processes originate at these weak points. Much work has been done to describe brittle failure with the field of linear elastic fracture mechanics and specifically with the development of the ideas of the stress intensity factor and Griffith Criterion. When flaws are consistent and evenly distributed, samples will behave more uniformly than when flaws are clustered inconsistently. This must be taken into account when describing the strength of the material, so strength is best represented as a distribution of values rather than as one specific value. Consider strength measurements made on many small samples of a brittle ceramic material. If the measurements show little variation from sample to sample, the calculated Weibull modulus will be high, and a single strength value would serve as a good description of the sample-to-sample performance. It may be concluded that its physical flaws, whether inherent to the material itself or resulting from the manufacturing process, are distributed uniformly throughout the material. If the measurements show high variation, the calculated Weibull modulus will be low; this reveals that flaws are clustered inconsistently, and the measured strength will be generally weak and variable. Products made from components of low Weibull modulus will exhibit low reliability and their strengths will be broadly distributed. With careful manufacturing processes Weibull moduli of up to 98 have been seen for glass fibers tested in tension. A table is provided with the Weibull moduli for several common materials. However, it is important to note that the Weibull modulus is a fitting parameter from strength data, and therefore the reported value may vary from source to source. It also is specific to the sample preparation and testing method, and subject to change if the analysis or manufacturing process changes. Organic materials. 
Studies examining organic brittle materials highlight the consistency and variability of the Weibull modulus within naturally occurring ceramics such as human dentin and abalone nacre. Research on human dentin samples indicates that the Weibull modulus remains stable across different depths or locations within the tooth, with an average value of approximately 4.5 and a range between 3 and 6. Variations in the modulus suggest differences in flaw populations between individual teeth, thought to be caused by random defects introduced during specimen preparation. Speculation exists regarding a potential decrease in the Weibull modulus with age due to changes in flaw distribution and stress sensitivity. Failure in dentin typically initiates at these flaws, which can be intrinsic or extrinsic in origin, arising from factors such as cavity preparation, wear, damage, or cyclic loading. Studies on the abalone shell illustrate its unique structural adaptations, sacrificing tensile strength perpendicular to its structure to enhance strength parallel to the tile arrangement. The Weibull modulus of abalone nacre samples is determined to be 1.8, indicating a moderate degree of variability in strength among specimens. Quasi-brittle materials. The Weibull modulus of quasi-brittle materials correlates with the decline in the slope of the energy barrier spectrum, as established in fracture mechanics models. This relationship allows for the determination of both the fracture energy barrier spectrum decline slope and the Weibull modulus, while keeping factors like crack interaction and defect-induced degradation in consideration. Temperature dependence and variations due to crack interactions or stress field interactions are observed in the Weibull modulus of quasi-brittle materials. Damage accumulation leads to a rapid decrease in the Weibull modulus, resulting in a right-shifted distribution with a smaller Weibull modulus as damage increases. Quality analysis. Weibull analysis is also used in quality control and "life analysis" for products. A higher Weibull modulus allows for companies to more confidently predict the life of their product for use in determining warranty periods. Other methods of characterization for brittle materials. A further method to determine the strength of brittle materials has been described by the Wikibook contribution . References. &lt;templatestyles src="Reflist/styles.css" /&gt;
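The ranking formula and the linearization described above can be combined into a simple estimation procedure. The sketch below is a minimal illustration using synthetic strength data drawn from a known Weibull distribution: probabilities of failure are assigned with formula_13, and the modulus and characteristic strength are recovered from an ordinary least-squares line through ln(ln(1/(1−F))) versus ln σ. It is only a didactic sketch, not the procedure prescribed by the standards listed above, and the "true" parameters used to generate the data are assumptions for the example.

```python
# Minimal sketch: estimate the Weibull modulus from strength data using the
# ranking formula F_i = (i - 0.5) / N (math_id 13) and the linearization
# ln(ln(1/(1-F))) = m*ln(sigma) - m*ln(sigma_0).  The data are synthetic,
# drawn from an assumed "true" Weibull distribution purely for illustration.
import math
import random

def fit_weibull(strengths):
    s = sorted(strengths)                      # ascending failure strengths
    n = len(s)
    xs = [math.log(v) for v in s]
    ys = [math.log(math.log(1.0 / (1.0 - (i + 0.5) / n))) for i in range(n)]
    # Ordinary least-squares slope and intercept
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    m = slope                                  # Weibull modulus
    sigma_0 = math.exp(-intercept / slope)     # characteristic strength
    return m, sigma_0

random.seed(0)
true_m, true_sigma0 = 10.0, 50.0               # assumed parameters for the test data
sample = [true_sigma0 * (-math.log(1.0 - random.random())) ** (1.0 / true_m)
          for _ in range(200)]                 # inverse-CDF sampling
m_est, sigma0_est = fit_weibull(sample)
print(f"estimated m = {m_est:.2f}, sigma_0 = {sigma0_est:.1f} MPa")
```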
[ { "math_id": 0, "text": "F(x)=1-\\exp(-(\\frac{x-x_u}{x_0})^m)" }, { "math_id": 1, "text": "x_0" }, { "math_id": 2, "text": " f(x)=(\\frac{m}{x_0})(\\frac{x-x_u}{x_0})^{m-1}\\exp(-(\\frac{x-x_u}{x_0})^m)" }, { "math_id": 3, "text": "F(\\sigma)=1-\\exp\\left[-(\\frac{\\sigma}{\\sigma_0})^m\\right]" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "x_u" }, { "math_id": 6, "text": "F(\\sigma)=1-\\phi\\exp[(-(\\frac{\\sigma}{\\sigma_{01}})^{m_1}] -(1-\\phi)\\exp[-(\\frac{\\sigma}{\\sigma_{02}})^{m_2}]" }, { "math_id": 7, "text": "f(\\sigma)=\\phi(\\frac{m_1}{\\sigma_{01}})(\\frac{\\sigma}{\\sigma_{01}})^{m_1-1}\\exp[-(\\frac{\\sigma}{\\sigma_{01}})^{m_1}]+(1-\\phi)(\\frac{m_2}{\\sigma_{02}})(\\frac{\\sigma}{\\sigma_{02}})^{m_2-1}\\exp[-(\\frac{\\sigma}{\\sigma_{02}})^{m_2}]" }, { "math_id": 8, "text": "P=1-F" }, { "math_id": 9, "text": "P(\\sigma)=1 - \\left[ 1 - \\exp\\left[-(\\frac{\\sigma}{\\sigma_0})^m\\right]\\right]=\\exp\\left[-(\\frac{\\sigma}{\\sigma_0})^m\\right]" }, { "math_id": 10, "text": "\\ln \\left[\\ln\\left (\\frac{1}{1-F} \\right)\\right]= \\ln \\left[\\left (\\frac{\\sigma}{\\sigma_0} \\right)^m\\right]" }, { "math_id": 11, "text": "\\ln \\left[\\ln\\left (\\frac{1}{1-F} \\right)\\right]= m\\ln(\\sigma)-m\\ln(\\sigma_0)" }, { "math_id": 12, "text": "\\ln(\\sigma_0)" }, { "math_id": 13, "text": "F(\\sigma)=\\frac{i-0.5}{N}" }, { "math_id": 14, "text": "i" }, { "math_id": 15, "text": "N" }, { "math_id": 16, "text": "F" } ]
https://en.wikipedia.org/wiki?curid=11772928
11774498
Baum–Connes conjecture
In mathematics, specifically in operator K-theory, the Baum–Connes conjecture suggests a link between the K-theory of the reduced C*-algebra of a group and the K-homology of the classifying space of proper actions of that group. The conjecture sets up a correspondence between different areas of mathematics, with the K-homology of the classifying space being related to geometry, differential operator theory, and homotopy theory, while the K-theory of the group's reduced C*-algebra is a purely analytical object. The conjecture, if true, would have some older famous conjectures as consequences. For instance, the surjectivity part implies the Kadison–Kaplansky conjecture for discrete torsion-free groups, and the injectivity is closely related to the Novikov conjecture. The conjecture is also closely related to index theory, as the assembly map formula_0 is a sort of index, and it plays a major role in Alain Connes' noncommutative geometry program. The origins of the conjecture go back to Fredholm theory, the Atiyah–Singer index theorem and the interplay of geometry with operator K-theory as expressed in the works of Brown, Douglas and Fillmore, among many other motivating subjects. Formulation. Let Γ be a second countable locally compact group (for instance a countable discrete group). One can define a morphism formula_1 called the assembly map, from the equivariant K-homology with formula_2-compact supports of the classifying space of proper actions formula_3 to the K-theory of the reduced C*-algebra of Γ. The subscript index * can be 0 or 1. Paul Baum and Alain Connes introduced the following conjecture (1982) about this morphism: Baum-Connes Conjecture. The assembly map formula_0 is an isomorphism. As the left hand side tends to be more easily accessible than the right hand side, because there are hardly any general structure theorems of the formula_4-algebra, one usually views the conjecture as an "explanation" of the right hand side. The original formulation of the conjecture was somewhat different, as the notion of equivariant K-homology was not yet common in 1982. In case formula_2 is discrete and torsion-free, the left hand side reduces to the non-equivariant K-homology with compact supports of the ordinary classifying space formula_5 of formula_2. There is also more general form of the conjecture, known as Baum–Connes conjecture with coefficients, where both sides have coefficients in the form of a formula_4-algebra formula_6 on which formula_2 acts by formula_4-automorphisms. It says in KK-language that the assembly map formula_7 is an isomorphism, containing the case without coefficients as the case formula_8 However, counterexamples to the conjecture with coefficients were found in 2002 by Nigel Higson, Vincent Lafforgue and Georges Skandalis. However, the conjecture with coefficients remains an active area of research, since it is, not unlike the classical conjecture, often seen as a statement concerning particular groups or class of groups. Examples. Let formula_2 be the integers formula_9. Then the left hand side is the K-homology of formula_10 which is the circle. The formula_4-algebra of the integers is by the commutative Gelfand–Naimark transform, which reduces to the Fourier transform in this case, isomorphic to the algebra of continuous functions on the circle. So the right hand side is the topological K-theory of the circle. One can then show that the assembly map is KK-theoretic Poincaré duality as defined by Gennadi Kasparov, which is an isomorphism. Results. 
The conjecture without coefficients is still open, although the field has received great attention since 1982. The conjecture is proved for the following classes of groups: Injectivity is known for a much larger class of groups thanks to the Dirac-dual-Dirac method. This goes back to ideas of Michael Atiyah and was developed in great generality by Gennadi Kasparov in 1987. Injectivity is known for the following classes: The simplest example of a group for which it is not known whether it satisfies the conjecture is formula_24.
[ { "math_id": 0, "text": "\\mu" }, { "math_id": 1, "text": " \\mu: RK^\\Gamma_*(\\underline{E\\Gamma}) \\to K_*(C^*_r(\\Gamma))," }, { "math_id": 2, "text": "\\Gamma" }, { "math_id": 3, "text": "\\underline{E\\Gamma}" }, { "math_id": 4, "text": "C^*" }, { "math_id": 5, "text": "B\\Gamma" }, { "math_id": 6, "text": "A" }, { "math_id": 7, "text": " \\mu_{A,\\Gamma}: RKK^\\Gamma_*(\\underline{E\\Gamma},A) \\to K_*(A\\rtimes_\\lambda \\Gamma)," }, { "math_id": 8, "text": "A=\\C." }, { "math_id": 9, "text": "\\Z" }, { "math_id": 10, "text": "B\\Z" }, { "math_id": 11, "text": "SO(n,1)" }, { "math_id": 12, "text": "SU(n,1)" }, { "math_id": 13, "text": "H" }, { "math_id": 14, "text": "\\lim_{n\\to\\infty} g_n\\xi\\to\\infty" }, { "math_id": 15, "text": "\\xi\\in H" }, { "math_id": 16, "text": "g_n" }, { "math_id": 17, "text": "\\lim_{n\\to\\infty}g_n\\to\\infty" }, { "math_id": 18, "text": "CAT(0)" }, { "math_id": 19, "text": "SL(3,\\R), SL(3,\\C)" }, { "math_id": 20, "text": "SL(3,\\Q_p)" }, { "math_id": 21, "text": "SL(3,\\R)" }, { "math_id": 22, "text": "k" }, { "math_id": 23, "text": "k = \\Q_p" }, { "math_id": 24, "text": "SL_3(\\Z)" } ]
https://en.wikipedia.org/wiki?curid=11774498
1177592
Ellipsometry
Ellipsometry is an optical technique for investigating the dielectric properties (complex refractive index or dielectric function) of thin films. Ellipsometry measures the change of polarization upon reflection or transmission and compares it to a model. It can be used to characterize composition, roughness, thickness (depth), crystalline nature, doping concentration, electrical conductivity and other material properties. It is very sensitive to the change in the optical response of incident radiation that interacts with the material being investigated. A spectroscopic ellipsometer can be found in most thin film analytical labs. Ellipsometry is also becoming more interesting to researchers in other disciplines such as biology and medicine. These areas pose new challenges to the technique, such as measurements on unstable liquid surfaces and microscopic imaging. Etymology. The name "ellipsometry" stems from the fact that elliptical polarization of light is used. The term "spectroscopic" relates to the fact that the information gained is a function of the light's wavelength or energy (spectra). The technique has been known at least since 1888 through the work of Paul Drude and has many applications today. The first documented use of the term "ellipsometry" was in 1945. Basic principles. The measured signal is the change in polarization as the incident radiation (in a known state) interacts with the material structure of interest (reflected, absorbed, scattered, or transmitted). The polarization change is quantified by the amplitude ratio, Ψ, and the phase difference, Δ (defined below). Because the signal depends on the thickness as well as the material properties, ellipsometry can be a universal tool for contact-free determination of thickness and optical constants of films of all kinds. Upon the analysis of the change of polarization of light, ellipsometry can yield information about layers that are thinner than the wavelength of the probing light itself, even down to a single atomic layer. Ellipsometry can probe the complex refractive index or dielectric function tensor, which gives access to fundamental physical parameters like those listed above. It is commonly used to characterize film thickness for single layers or complex multilayer stacks ranging from a few angstroms or tenths of a nanometer to several micrometers with an excellent accuracy. Experimental details. Typically, ellipsometry is done only in the reflection setup. The exact nature of the polarization change is determined by the sample's properties (thickness, complex refractive index or dielectric function tensor). Although optical techniques are inherently diffraction-limited, ellipsometry exploits phase information (polarization state), and can achieve sub-nanometer resolution. In its simplest form, the technique is applicable to thin films with thicknesses from less than a nanometer to several micrometers. Most models assume the sample is composed of a small number of discrete, well-defined layers that are optically homogeneous and isotropic. Violation of these assumptions requires more advanced variants of the technique (see below). Methods of immersion or multiangular ellipsometry are applied to find the optical constants of materials with a rough sample surface or in the presence of inhomogeneous media. 
New methodological approaches allow the use of reflection ellipsometry to measure physical and technical characteristics of gradient elements in case the surface layer of the optical detail is inhomogeneous. Experimental setup. Electromagnetic radiation is emitted by a light source and linearly polarized by a polarizer. It can pass through an optional compensator (retarder, quarter wave plate) and falls onto the sample. After reflection the radiation passes a compensator (optional) and a second polarizer, which is called an analyzer, and falls into the detector. Instead of the compensators, some ellipsometers use a phase-modulator in the path of the incident light beam. Ellipsometry is a specular optical technique (the angle of incidence equals the angle of reflection). The incident and the reflected beam span the "plane of incidence". Light which is polarized parallel to this plane is named "p-polarized". A polarization direction perpendicular is called "s-polarized" ("s"-polarised), accordingly. The "s" is contributed from the German "" (perpendicular). Data acquisition. Ellipsometry measures the complex reflectance ratio formula_0 of a system, which may be parametrized by the amplitude component formula_1 and the phase difference formula_2. The polarization state of the light incident upon the sample may be decomposed into an "s" and a "p" component (the "s" component is oscillating perpendicular to the plane of incidence and parallel to the sample surface, and the "p" component is oscillating parallel to the plane of incidence). The amplitudes of the "s" and "p" components, after reflection and normalized to their initial value, are denoted by formula_3 and formula_4 respectively. The angle of incidence is chosen close to the Brewster angle of the sample to ensure a maximal difference in formula_4 and formula_3. Ellipsometry measures the complex reflectance ratio formula_0 (a complex quantity), which is the ratio of formula_4 over formula_3: formula_5 Thus, formula_6 is the amplitude ratio upon reflection, and formula_2 is the phase shift (difference). (Note that the right side of the equation is simply another way to represent a complex number.) Since ellipsometry is measuring the ratio (or difference) of two values (rather than the absolute value of either), it is very robust, accurate, and reproducible. For instance, it is relatively insensitive to scatter and fluctuations and requires no standard sample or reference beam. Data analysis. Ellipsometry is an indirect method, i.e. in general the measured formula_1 and formula_2 cannot be converted directly into the optical constants of the sample. Normally, a model analysis must be performed, for example the Forouhi Bloomer model. This is one weakness of ellipsometry. Models can be physically based on energy transitions or simply free parameters used to fit the data. Direct inversion of formula_1 and formula_2 is only possible in very simple cases of isotropic, homogeneous and infinitely thick films. In all other cases a layer model must be established, which considers the optical constants (refractive index or dielectric function tensor) and thickness parameters of all individual layers of the sample including the correct layer sequence. Using an iterative procedure (least-squares minimization) unknown optical constants and/or thickness parameters are varied, and formula_1 and formula_2 values are calculated using the Fresnel equations. 
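As a concrete illustration of this forward calculation, the sketch below computes the Fresnel reflection coefficients for the simplest possible model, a bare ambient/substrate interface with no film, forms the complex reflectance ratio ρ = rp/rs = tan(Ψ)exp(iΔ), and reads off Ψ and Δ. The substrate index used (roughly that of silicon at 632.8 nm) and the angle of incidence are illustrative assumptions; a real analysis would embed such a calculation in a layered model and fit it to the measured data.

```python
# Minimal sketch of the quantities ellipsometry measures: for a bare
# ambient/substrate interface, compute the Fresnel reflection coefficients
# r_p and r_s, form rho = r_p / r_s = tan(Psi) * exp(i * Delta), and read off
# Psi and Delta.  The substrate index (roughly silicon at 632.8 nm) and the
# 70 degree angle of incidence are illustrative assumptions only.
import cmath
import math

def psi_delta(n_ambient, n_substrate, angle_incidence_deg):
    theta_i = math.radians(angle_incidence_deg)
    cos_i = math.cos(theta_i)
    # Snell's law with a complex substrate index
    sin_t = n_ambient * math.sin(theta_i) / n_substrate
    cos_t = cmath.sqrt(1 - sin_t ** 2)
    r_s = (n_ambient * cos_i - n_substrate * cos_t) / \
          (n_ambient * cos_i + n_substrate * cos_t)
    r_p = (n_substrate * cos_i - n_ambient * cos_t) / \
          (n_substrate * cos_i + n_ambient * cos_t)
    rho = r_p / r_s
    psi = math.degrees(math.atan(abs(rho)))      # amplitude ratio -> Psi
    delta = math.degrees(cmath.phase(rho))       # phase difference -> Delta
    return psi, delta

psi, delta = psi_delta(1.0, 3.87 - 0.02j, 70.0)
print(f"Psi = {psi:.2f} deg, Delta = {delta:.2f} deg")
```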
The calculated formula_1 and formula_2 values which match the experimental data best provide the optical constants and thickness parameters of the sample. Definitions. Modern ellipsometers are complex instruments that incorporate a wide variety of radiation sources, detectors, digital electronics and software. The range of wavelength employed is far in excess of what is visible so strictly these are no longer optical instruments. Single-wavelength vs. spectroscopic ellipsometry. Single-wavelength ellipsometry employs a monochromatic light source. This is usually a laser in the visible spectral region, for instance, a HeNe laser with a wavelength of 632.8 nm. Therefore, single-wavelength ellipsometry is also called laser ellipsometry. The advantage of laser ellipsometry is that laser beams can be focused on a small spot size. Furthermore, lasers have a higher power than broad band light sources. Therefore, laser ellipsometry can be used for imaging (see below). However, the experimental output is restricted to one set of formula_1 and formula_2 values per measurement. Spectroscopic ellipsometry (SE) employs broad band light sources, which cover a certain spectral range in the infrared, visible or ultraviolet spectral region. By that the complex refractive index or the dielectric function tensor in the corresponding spectral region can be obtained, which gives access to a large number of fundamental physical properties. Infrared spectroscopic ellipsometry (IRSE) can probe lattice vibrational (phonon) and free charge carrier (plasmon) properties. Spectroscopic ellipsometry in the near infrared, visible up to ultraviolet spectral region studies the refractive index in the transparency or below-band-gap region and electronic properties, for instance, band-to-band transitions or excitons. Standard vs. generalized ellipsometry (anisotropy). Standard ellipsometry (or just short 'ellipsometry') is applied, when no "s" polarized light is converted into "p" polarized light nor vice versa. This is the case for optically isotropic samples, for instance, amorphous materials or crystalline materials with a cubic crystal structure. Standard ellipsometry is also sufficient for optically uniaxial samples in the special case, when the optical axis is aligned parallel to the surface normal. In all other cases, when "s" polarized light is converted into "p" polarized light and/or vice versa, the generalized ellipsometry approach must be applied. Examples are arbitrarily aligned, optically uniaxial samples, or optically biaxial samples. Jones matrix vs. Mueller matrix formalism (depolarization). There are typically two different ways of mathematically describing how an electromagnetic wave interacts with the elements within an ellipsometer (including the sample): the Jones matrix and the Mueller matrix formalisms. In the Jones matrix formalism, the electromagnetic wave is described by a Jones vector with two orthogonal complex-valued entries for the electric field (typically formula_7 and formula_8), and the effect that an optical element (or sample) has on it is described by the complex-valued 2×2 Jones matrix. In the Mueller matrix formalism, the electromagnetic wave is described by Stokes vectors with four real-valued entries, and their transformation is described by the real-valued 4x4 Mueller matrix. When no depolarization occurs both formalisms are fully consistent. Therefore, for non-depolarizing samples, the simpler Jones matrix formalism is sufficient. 
If the sample is depolarizing the Mueller matrix formalism should be used, because it also gives the amount of depolarization. Reasons for depolarization are, for instance, thickness non-uniformity or backside-reflections from a transparent substrate. Advanced experimental approaches. Imaging ellipsometry. Ellipsometry can also be done as imaging ellipsometry by using a CCD camera as a detector. This provides a real time contrast image of the sample, which provides information about film thickness and refractive index. Advanced imaging ellipsometer technology operates on the principle of classical null ellipsometry and real-time ellipsometric contrast imaging. Imaging ellipsometry is based on the concept of nulling. In ellipsometry, the film under investigation is placed onto a reflective substrate. The film and the substrate have different refractive indexes. In order to obtain data about film thickness, the light reflecting off of the substrate must be nulled. Nulling is achieved by adjusting the analyzer and polarizer so that all reflected light off of the substrate is extinguished. Due to the difference in refractive indexes, this will allow the sample to become very bright and clearly visible. The light source consists of a monochromatic laser of the desired wavelength. A common wavelength that is used is 532 nm green laser light. Since only intensity of light measurements are needed, almost any type of camera can be implemented as the CCD, which is useful if building an ellipsometer from parts. Typically, imaging ellipsometers are configured in such a way so that the laser (L) fires a beam of light which immediately passes through a linear polarizer (P). The linearly polarized light then passes through a quarter wavelength compensator (C) which transforms the light into elliptically polarized light. This elliptically polarized light then reflects off the sample (S), passes through the analyzer (A) and is imaged onto a CCD camera by a long working distance objective. The analyzer here is another polarizer identical to the P, however, this polarizer serves to help quantify the change in polarization and is thus given the name analyzer. This design is commonly referred to as a LPCSA configuration. The orientation of the angles of P and C are chosen in such a way that the elliptically polarized light is completely linearly polarized after it is reflected off the sample. For simplification of future calculations, the compensator can be fixed at a 45 degree angle relative to the plane of incidence of the laser beam. This set up requires the rotation of the analyzer and polarizer in order to achieve null conditions. The ellipsometric null condition is obtained when A is perpendicular with respect to the polarization axis of the reflected light achieving complete destructive interference, i.e., the state at which the absolute minimum of light flux is detected at the CCD camera. The angles of P, C, and A obtained are used to determine the Ψ and Δ values of the material. formula_9 and formula_10 where "A" and "P" are the angles of the analyzer and polarizer under null conditions respectively. By rotating the analyzer and polarizer and measuring the change in intensities of light over the image, analysis of the measured data by use of computerized optical modeling can lead to a deduction of spatially resolved film thickness and complex refractive index values. Due to the fact that the imaging is done at an angle, only a small line of the entire field of view is actually in focus. 
The strip in focus can be moved along the field of view by adjusting the focus. In order to analyze the entire region of interest, the focus must be incrementally moved along the region of interest with a photo taken at each position. All of the images are then compiled into a single, in-focus image of the sample. In situ ellipsometry. In situ ellipsometry refers to dynamic measurements during the modification process of a sample. This process can be used to study, for instance, the growth of a thin film, including calcium phosphate mineralization at the air-liquid interface, as well as the etching or cleaning of a sample. By in situ ellipsometry measurements it is possible to determine fundamental process parameters, such as growth or etch rates and the variation of optical properties with time. In situ ellipsometry measurements require a number of additional considerations: The sample spot is usually not as easily accessible as for ex situ measurements outside the process chamber. Therefore, the mechanical setup has to be adjusted, which can include additional optical elements (mirrors, prisms, or lenses) for redirecting or focusing the light beam. Because the environmental conditions during the process can be harsh, the sensitive optical elements of the ellipsometry setup must be separated from the hot zone. In the simplest case this is done by optical viewports, though strain-induced birefringence of the (glass) windows has to be taken into account or minimized. Furthermore, the samples can be at elevated temperatures, which implies different optical properties compared to samples at room temperature. Despite all these problems, in situ ellipsometry is becoming more and more important as a process control technique for thin film deposition and modification tools. In situ ellipsometers can be of single-wavelength or spectroscopic type. Spectroscopic in situ ellipsometers use multichannel detectors, for instance CCD detectors, which measure the ellipsometric parameters for all wavelengths in the studied spectral range simultaneously. Ellipsometric porosimetry. Ellipsometric porosimetry measures the change of the optical properties and thickness of the materials during adsorption and desorption of a volatile species at atmospheric pressure or under reduced pressure, depending on the application. The EP technique is unique in its ability to measure the porosity of very thin films down to 10 nm, as well as in its reproducibility and speed of measurement. Compared to traditional porosimeters, ellipsometric porosimeters are well suited to measuring the pore size and pore size distribution of very thin films. Film porosity is a key factor in silicon-based technology using low-κ materials, in the organic industry (encapsulated organic light-emitting diodes), as well as in the coating industry using sol-gel techniques. Magneto-optic generalized ellipsometry. Magneto-optic generalized ellipsometry (MOGE) is an advanced infrared spectroscopic ellipsometry technique for studying free charge carrier properties in conducting samples. By applying an external magnetic field it is possible to determine independently the density, the optical mobility parameter and the effective mass parameter of free charge carriers. Without the magnetic field only two out of the three free charge carrier parameters can be extracted independently. Applications. This technique has found applications in many different fields, from semiconductor physics to microelectronics and biology, from basic research to industrial applications.
Ellipsometry is a very sensitive measurement technique and provides unequaled capabilities for thin film metrology. As an optical technique, spectroscopic ellipsometry is non-destructive and contactless. Because the incident radiation can be focused, small sample sizes can be imaged and desired characteristics can be mapped over a larger area (m²). Advantages. Ellipsometry has a number of advantages compared to standard reflection intensity measurements: Ellipsometry is especially superior to reflectivity measurements when studying anisotropic samples. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
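As a minimal illustration of the null-condition relations quoted in the imaging ellipsometry section above (formula_9 and formula_10, i.e. Ψ = A and Δ = 2P + π/2 for the LPCSA configuration with the compensator fixed at 45°), the following Python sketch converts a pair of null angles into Ψ, Δ and the complex ratio ρ = tan Ψ · exp(iΔ). The function name and the example angles are illustrative assumptions, not part of any ellipsometer software.

```python
# Convert null-condition analyzer/polarizer angles (LPCSA configuration, compensator
# fixed at 45 degrees) into Psi, Delta and rho = tan(Psi) * exp(i * Delta).
# The angles used below are made-up example values, not measured data.
import cmath
import math

def null_angles_to_psi_delta(analyzer_deg, polarizer_deg):
    """Apply the nulling relations Psi = A and Delta = 2P + pi/2."""
    psi = math.radians(analyzer_deg)
    delta = 2.0 * math.radians(polarizer_deg) + math.pi / 2.0
    rho = math.tan(psi) * cmath.exp(1j * delta)
    return psi, delta, rho

psi, delta, rho = null_angles_to_psi_delta(analyzer_deg=30.0, polarizer_deg=20.0)
print(round(math.degrees(psi), 6), round(math.degrees(delta), 6))  # 30.0 130.0
print(rho)                                                         # complex ratio r_p / r_s
```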
[ { "math_id": 0, "text": "\\rho" }, { "math_id": 1, "text": "\\Psi" }, { "math_id": 2, "text": "\\Delta" }, { "math_id": 3, "text": "r_s" }, { "math_id": 4, "text": "r_p" }, { "math_id": 5, "text": "\\rho = \\frac{r_p}{r_s} = \\tan \\Psi \\cdot e^{i\\Delta}." }, { "math_id": 6, "text": "\\tan\\Psi" }, { "math_id": 7, "text": "E_x" }, { "math_id": 8, "text": "E_y" }, { "math_id": 9, "text": "\\Psi = A" }, { "math_id": 10, "text": "\\Delta = 2P + \\pi/2," } ]
https://en.wikipedia.org/wiki?curid=1177592
1177768
Zech's logarithm
Tool for fast finite-field arithmetic Zech logarithms are used to implement addition in finite fields when elements are represented as powers of a generator formula_0. Zech logarithms are named after Julius Zech, and are also called Jacobi logarithms, after Carl G. J. Jacobi who used them for number-theoretic investigations. Definition. Given a primitive element formula_0 of a finite field, the Zech logarithm relative to the base formula_0 is defined by the equation formula_1 which is often rewritten as formula_2 The choice of base formula_0 is usually dropped from the notation when it is clear from the context. To be more precise, formula_3 is a function on the integers modulo the multiplicative order of formula_0, and takes values in the same set. In order to describe every element, it is convenient to formally add a new symbol formula_4, along with the definitions formula_5 formula_6 formula_7 formula_8 where formula_9 is an integer satisfying formula_10, that is formula_11 for a field of characteristic 2, and formula_12 for a field of odd characteristic with formula_13 elements. Using the Zech logarithm, finite field arithmetic can be done in the exponential representation: formula_14 formula_15 formula_16 formula_17 formula_18 formula_19 These formulas remain true with our conventions with the symbol formula_4, with the caveat that subtraction of formula_4 is undefined. In particular, the addition and subtraction formulas need to treat formula_20 as a special case. This can be extended to arithmetic of the projective line by introducing another symbol formula_21 satisfying formula_22 and other rules as appropriate. For fields of characteristic 2, formula_23. Uses. For sufficiently small finite fields, a table of Zech logarithms allows an especially efficient implementation of all finite field arithmetic in terms of a small number of integer additions/subtractions and table look-ups. The utility of this method diminishes for large fields where one cannot efficiently store the table. This method is also inefficient when doing very few operations in the finite field, because one spends more time computing the table than one does in actual calculation. Examples. Let α ∈ GF(2^3) be a root of the primitive polynomial x^3 + x^2 + 1. The traditional representation of elements of this field is as polynomials in α of degree 2 or less. A table of Zech logarithms for this field is Z(−∞) = 0, Z(0) = −∞, Z(1) = 5, Z(2) = 3, Z(3) = 2, Z(4) = 6, Z(5) = 1, and Z(6) = 4. The multiplicative order of α is 7, so the exponential representation works with integers modulo 7. Since α is a root of x^3 + x^2 + 1, it follows that α^3 + α^2 + 1 = 0, or, recalling that all coefficients are in GF(2) so that subtraction is the same as addition, α^3 = α^2 + 1. The conversion from exponential to polynomial representations is given by formula_24 (as shown above) formula_25 formula_26 formula_27 Using Zech logarithms to compute α^6 + α^3: formula_28 or, more efficiently, formula_29 and verifying it in the polynomial representation: formula_30 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
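To make the table look-up arithmetic above concrete, here is a short Python sketch that builds the Zech logarithm table for GF(2^3) with the primitive polynomial x^3 + x^2 + 1 used in the example, and uses it to add elements given as exponents of α. The helper names and the use of None for the symbol −∞ are illustrative choices, not a standard interface.

```python
# Zech-logarithm arithmetic in GF(2^3) with primitive polynomial x^3 + x^2 + 1.
# Elements are represented as exponents of the primitive element alpha (0..6);
# None stands in for the symbol -infinity, i.e. the zero element of the field.

MOD = 0b1101   # bit pattern of x^3 + x^2 + 1
ORDER = 7      # multiplicative order of alpha in GF(2^3)

def build_tables():
    """Tabulate alpha^n as 3-bit polynomials, their discrete logs, and the Zech table."""
    power, value = {}, 1
    for n in range(ORDER):
        power[n] = value
        value <<= 1                 # multiply by alpha (i.e. by x)
        if value & 0b1000:          # reduce modulo x^3 + x^2 + 1
            value ^= MOD
    log = {v: n for n, v in power.items()}
    zech = {}
    for n in range(ORDER):
        s = power[n] ^ 1            # polynomial form of 1 + alpha^n (XOR = addition)
        zech[n] = None if s == 0 else log[s]
    return log, zech

LOG, ZECH = build_tables()

def add(m, n):
    """Return the exponent of alpha^m + alpha^n (None means the zero element)."""
    if m is None:
        return n
    if n is None:
        return m
    z = ZECH[(n - m) % ORDER]
    return None if z is None else (m + z) % ORDER

print(ZECH)        # {0: None, 1: 5, 2: 3, 3: 2, 4: 6, 5: 1, 6: 4}
print(add(6, 3))   # 5, matching alpha^6 + alpha^3 = alpha^5 in the worked example
```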
[ { "math_id": 0, "text": "\\alpha" }, { "math_id": 1, "text": "\\alpha^{Z_\\alpha(n)} = 1 + \\alpha^n," }, { "math_id": 2, "text": "Z_\\alpha(n) = \\log_\\alpha(1 + \\alpha^n)." }, { "math_id": 3, "text": "Z_\\alpha" }, { "math_id": 4, "text": "-\\infty" }, { "math_id": 5, "text": "\\alpha^{-\\infty} = 0" }, { "math_id": 6, "text": "n + (-\\infty) = -\\infty" }, { "math_id": 7, "text": "Z_\\alpha(-\\infty) = 0" }, { "math_id": 8, "text": "Z_\\alpha(e) = -\\infty" }, { "math_id": 9, "text": "e" }, { "math_id": 10, "text": "\\alpha^e = -1" }, { "math_id": 11, "text": "e=0" }, { "math_id": 12, "text": "e = \\frac{q-1}{2}" }, { "math_id": 13, "text": "q" }, { "math_id": 14, "text": "\\alpha^m + \\alpha^n = \\alpha^m \\cdot (1 + \\alpha^{n-m}) = \\alpha^m \\cdot \\alpha^{Z(n-m)} = \\alpha^{m + Z(n-m)} " }, { "math_id": 15, "text": "-\\alpha^n = (-1) \\cdot \\alpha^n = \\alpha^e \\cdot \\alpha^n = \\alpha^{e+n}" }, { "math_id": 16, "text": "\\alpha^m - \\alpha^n = \\alpha^m + (-\\alpha^n) = \\alpha^{m + Z(e+n-m)} " }, { "math_id": 17, "text": "\\alpha^m \\cdot \\alpha^n = \\alpha^{m+n}" }, { "math_id": 18, "text": "\\left( \\alpha^m \\right)^{-1} = \\alpha^{-m}" }, { "math_id": 19, "text": "\\alpha^m / \\alpha^n = \\alpha^m \\cdot \\left( \\alpha^n \\right)^{-1} = \\alpha^{m - n}" }, { "math_id": 20, "text": "m = -\\infty" }, { "math_id": 21, "text": "+\\infty" }, { "math_id": 22, "text": "\\alpha^{+\\infty} = \\infty" }, { "math_id": 23, "text": "Z_\\alpha(n) = m \\iff Z_\\alpha(m) = n" }, { "math_id": 24, "text": "\\alpha^3 = \\alpha^2 + 1" }, { "math_id": 25, "text": "\\alpha^4 = \\alpha^3 \\alpha = (\\alpha^2 + 1)\\alpha = \\alpha^3 + \\alpha = \\alpha^2 + \\alpha + 1" }, { "math_id": 26, "text": "\\alpha^5 = \\alpha^4 \\alpha = (\\alpha^2 + \\alpha + 1)\\alpha = \\alpha^3 + \\alpha^2 + \\alpha = \\alpha^2 + 1 + \\alpha^2 + \\alpha = \\alpha + 1" }, { "math_id": 27, "text": "\\alpha^6 = \\alpha^5 \\alpha = (\\alpha + 1)\\alpha = \\alpha^2 + \\alpha" }, { "math_id": 28, "text": "\\alpha^6 + \\alpha^3 = \\alpha^{6 + Z(-3)} = \\alpha^{6 + Z(4)} = \\alpha^{6 + 6} = \\alpha^{12} = \\alpha^5," }, { "math_id": 29, "text": "\\alpha^6 + \\alpha^3 = \\alpha^{3 + Z(3)} = \\alpha^{3 + 2} = \\alpha^5," }, { "math_id": 30, "text": "\\alpha^6 + \\alpha^3 = (\\alpha^2 + \\alpha) + (\\alpha^2 + 1) = \\alpha + 1 = \\alpha^5." } ]
https://en.wikipedia.org/wiki?curid=1177768
1177937
Elongated triangular pyramid
Polyhedron constructed with tetrahedra and a triangular prism In geometry, the elongated triangular pyramid is one of the Johnson solids ("J"7). As the name suggests, it can be constructed by elongating a tetrahedron by attaching a triangular prism to its base. Like any elongated pyramid, the resulting solid is topologically (but not geometrically) self-dual. Construction. The elongated triangular pyramid is constructed from a triangular prism by attaching a regular tetrahedron onto one of its bases, a process known as elongation. The tetrahedron covers an equilateral triangle, replacing it with three other equilateral triangles, so that the resulting polyhedron has four equilateral triangles and three squares as its faces. A convex polyhedron in which all of the faces are regular polygons is called a Johnson solid, and the elongated triangular pyramid is among them, enumerated as the seventh Johnson solid formula_0. Properties. An elongated triangular pyramid with edge length formula_1 has a height obtained by adding the heights of a regular tetrahedron and a triangular prism: formula_2 Its surface area can be calculated by adding the areas of all four equilateral triangles and three squares: formula_3 and its volume can be calculated by slicing it into a regular tetrahedron and a prism and adding their volumes: formula_4 Its three-dimensional symmetry group is the cyclic group formula_5 of order 6. Its dihedral angle can be calculated by adding the angle of the tetrahedron and the triangular prism: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
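The height, surface area, and volume formulas above can be checked numerically by assembling the solid from its tetrahedron and prism parts. The following Python sketch (variable names are illustrative) reproduces the approximations 1.816a, 4.732a^2, and 0.551a^3 for edge length a = 1.

```python
# Numerical check of the elongated triangular pyramid (J7) formulas quoted above,
# for edge length a = 1: height, surface area, and volume as tetrahedron + prism.
from math import sqrt

a = 1.0
tetra_height = sqrt(6) / 3 * a          # height of a regular tetrahedron
prism_height = a                        # height of the triangular prism
height = tetra_height + prism_height    # (1 + sqrt(6)/3) a

triangle_area = sqrt(3) / 4 * a**2
surface_area = 4 * triangle_area + 3 * a**2     # (3 + sqrt(3)) a^2

tetra_volume = sqrt(2) / 12 * a**3
prism_volume = sqrt(3) / 4 * a**3
volume = tetra_volume + prism_volume            # (sqrt(2) + 3*sqrt(3))/12 a^3

print(round(height, 3), round(surface_area, 3), round(volume, 3))  # 1.816 4.732 0.551
```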
[ { "math_id": 0, "text": " J_7 " }, { "math_id": 1, "text": " a " }, { "math_id": 2, "text": " \\left( 1 + \\frac{\\sqrt{6}}{3}\\right)a \\approx 1.816a. " }, { "math_id": 3, "text": " \\left(3+\\sqrt{3}\\right)a^2 \\approx 4.732a^2, " }, { "math_id": 4, "text": " \\left(\\frac{1}{12}\\left(\\sqrt{2}+3\\sqrt{3}\\right)\\right)a^3 \\approx 0.551a^3. " }, { "math_id": 5, "text": " C_{3\\mathrm{v}} " }, { "math_id": 6, "text": " \\arccos \\left(\\frac{1}{3}\\right) \\approx 70.5^\\circ " }, { "math_id": 7, "text": " \\frac{\\pi}{2} = 90^\\circ " }, { "math_id": 8, "text": " \\arccos \\left(\\frac{1}{3}\\right) + \\frac{\\pi}{2} \\approx 160.5^\\circ " }, { "math_id": 9, "text": " \\frac{\\pi}{3} = 60^\\circ " } ]
https://en.wikipedia.org/wiki?curid=1177937
1177939
Elongated square pyramid
Polyhedron with cube and square pyramid In geometry, the elongated square pyramid is a convex polyhedron constructed from a cube by attaching an equilateral square pyramid onto one of its faces. It is an example of a Johnson solid. Construction. The elongated square pyramid is a composite, since it can be constructed by attaching an equilateral square pyramid onto one face of a cube, a process known as elongation. This construction involves the removal of that square face and replacing it with the pyramid, resulting in four equilateral triangles and five squares as its faces. A convex polyhedron in which all of its faces are regular is a Johnson solid, and the elongated square pyramid is one of them, denoted as formula_1, the eighth Johnson solid. Properties. Let formula_2 be the edge length of an elongated square pyramid. The height of an elongated square pyramid can be calculated by adding the heights of an equilateral square pyramid and a cube. The height of a cube is the same as the given edge length formula_2, and the height of an equilateral square pyramid is formula_3. Therefore, the height of an elongated square pyramid is: formula_4 Its surface area can be calculated by adding the areas of four equilateral triangles and five squares: formula_5 Its volume is obtained by slicing it into an equilateral square pyramid and a cube, and then adding them: formula_6 The elongated square pyramid has the same three-dimensional symmetry group as the equilateral square pyramid, the cyclic group formula_0 of order eight. Its dihedral angle can be obtained by adding the angles of an equilateral square pyramid and a cube: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " C_{4v} " }, { "math_id": 1, "text": " J_{8} " }, { "math_id": 2, "text": " a " }, { "math_id": 3, "text": " (1/\\sqrt{2})a " }, { "math_id": 4, "text": " a + \\frac{1}{\\sqrt{2}}a = \\left(1 + \\frac{\\sqrt{2}}{2}\\right)a \\approx 1.707a. " }, { "math_id": 5, "text": " \\left(5 + \\sqrt{3}\\right)a^2 \\approx 6.732a^2. " }, { "math_id": 6, "text": " \\left(1 + \\frac{\\sqrt{2}}{6}\\right)a^3 \\approx 1.236a^3. " }, { "math_id": 7, "text": " \\arccos(-1/3) \\approx 109.47^\\circ " }, { "math_id": 8, "text": " \\pi/2 " }, { "math_id": 9, "text": " \\arctan \\left(\\sqrt{2}\\right) \\approx 54.74^\\circ " }, { "math_id": 10, "text": " \\arctan\\left(\\sqrt{2}\\right) + \\frac{\\pi}{2} \\approx 144.74^\\circ. " } ]
https://en.wikipedia.org/wiki?curid=1177939
1177950
Elongated triangular bipyramid
14th Johnson solid; triangular prism capped with tetrahedra In geometry, the elongated triangular bipyramid (or dipyramid) or triakis triangular prism is a polyhedron constructed from a triangular prism by attaching two tetrahedrons to its bases. It is an example of a Johnson solid. Construction. The elongated triangular bipyramid is constructed from a triangular prism by attaching two tetrahedrons onto its bases, a process known as elongation. These tetrahedrons cover the triangular faces so that the resulting polyhedron has nine faces (six of them are equilateral triangles and three of them are squares), fifteen edges, and eight vertices. A convex polyhedron in which all of the faces are regular polygons is a Johnson solid. The elongated triangular bipyramid is one of them, enumerated as the fourteenth Johnson solid formula_0. Properties. The surface area of an elongated triangular bipyramid formula_1 is the sum of the areas of all its polygonal faces: six equilateral triangles and three squares. The volume of an elongated triangular bipyramid formula_2 can be ascertained by slicing it into two tetrahedrons and a regular triangular prism and then adding their volumes. The height of an elongated triangular bipyramid formula_3 is the sum of the heights of two tetrahedrons and a regular triangular prism. Therefore, given the edge length formula_4, its surface area, volume, and height are formulated as: formula_5 It has the same three-dimensional symmetry group as the triangular prism, the dihedral group formula_6 of order twelve. The dihedral angle of an elongated triangular bipyramid can be calculated by adding the angle of the tetrahedron and the triangular prism: Appearances. The nirrosula, an African musical instrument woven out of strips of plant leaves, is made in the form of a series of elongated bipyramids with non-equilateral triangles as the faces of their end caps. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " J_{14} " }, { "math_id": 1, "text": " A " }, { "math_id": 2, "text": " V " }, { "math_id": 3, "text": " h " }, { "math_id": 4, "text": " a " }, { "math_id": 5, "text": " \\begin{align}\n A &= \\left(\\frac{3\\sqrt{3}}{2} + 3\\right)a^2 \\approx 5.598a^2, \\\\\n V &= \\left(\\frac{\\sqrt{2}}{6} + \\frac{\\sqrt{3}}{4} \\right) a^3 \\approx 0.669a^3, \\\\\n h &= \\left(\\frac{2\\sqrt{6}}{3} + 1 \\right)\\cdot a \\approx 2.633a.\n\\end{align}\n" }, { "math_id": 6, "text": " D_{3 \\mathrm{h}} " }, { "math_id": 7, "text": " \\arccos \\left(\\frac{1}{3}\\right) \\approx 70.5^\\circ " }, { "math_id": 8, "text": " \\frac{\\pi}{2} = 90^\\circ " }, { "math_id": 9, "text": " \\arccos \\left(\\frac{1}{3}\\right) + \\frac{\\pi}{2} \\approx 160.5^\\circ " }, { "math_id": 10, "text": " \\frac{\\pi}{3} = 60^\\circ " } ]
https://en.wikipedia.org/wiki?curid=1177950
1177952
Elongated square bipyramid
Cube capped by two square pyramids In geometry, the elongated square bipyramid (or elongated octahedron) is the polyhedron constructed by attaching two equilateral square pyramids onto a cube's faces that are opposite each other. It can also be seen as 4 lunes (squares with triangles on opposite sides) linked together with squares to squares and triangles to triangles. It has also been named the pencil cube or 12-faced pencil cube due to its shape. A zircon crystal is an example of an elongated square bipyramid. Construction. The elongated square bipyramid is constructed by attaching two equilateral square pyramids onto the faces of a cube that are opposite each other, a process known as elongation. This construction involves the removal of those two squares and replacing them with those pyramids, resulting in eight equilateral triangles and four squares as its faces. A convex polyhedron in which all of its faces are regular is a Johnson solid, and the elongated square bipyramid is one of them, denoted as formula_1, the fifteenth Johnson solid. Properties. Given that formula_2 is the edge length of an elongated square bipyramid, its height can be calculated by adding the heights of two equilateral square pyramids and a cube. The height of a cube is the same as the given edge length formula_2, and the height of an equilateral square pyramid is formula_3. Therefore, the height of an elongated square bipyramid is: formula_4 Its surface area can be calculated by adding the areas of all eight equilateral triangles and four squares: formula_5 Its volume is obtained by slicing it into two equilateral square pyramids and a cube, and then adding them: formula_6 Its dihedral angle can be obtained in a similar way as for the elongated square pyramid, by adding the angles of the square pyramids and a cube: The elongated square bipyramid has dihedral symmetry, the dihedral group formula_0 of order sixteen: it has an axis of symmetry passing through the apices of the square pyramids and the center of the cube, and its appearance is symmetrical by reflecting across a horizontal plane. Related polyhedra and honeycombs. The elongated square bipyramid is dual to the square bifrustum, which has eight trapezoids and two squares. A special kind of elongated square bipyramid "without" all regular faces allows a self-tessellation of Euclidean space. The triangles of this elongated square bipyramid are "not" regular; they have edges in the ratio 2:√3:√3. It can be considered a transitional phase between the cubic and rhombic dodecahedral honeycombs. Here, the cells are colored white, red, and blue based on their orientation in space. The square pyramid "caps" have shortened isosceles triangle faces, with six of these pyramids meeting together to form a cube. The dual of this honeycomb is composed of two kinds of octahedra (regular octahedra and triangular antiprisms), formed by superimposing octahedra into the cuboctahedra of the rectified cubic honeycomb. Both honeycombs have a symmetry of formula_11. Cross-sections of the honeycomb, through cell centers, produce a chamfered square tiling, with flattened horizontal and vertical hexagons, and squares on the perpendicular polyhedra. With regular faces, the elongated square bipyramid can form a tessellation of space with tetrahedra and octahedra. (The octahedra can be further decomposed into square pyramids.) This honeycomb can be considered an elongated version of the tetrahedral-octahedral honeycomb. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " D_{4h} " }, { "math_id": 1, "text": " J_{15} " }, { "math_id": 2, "text": " a " }, { "math_id": 3, "text": " (1/\\sqrt{2})a " }, { "math_id": 4, "text": " a + 2 \\cdot \\frac{1}{\\sqrt{2}}a = \\left(1 + \\sqrt{2}\\right)a \\approx 2.414a. " }, { "math_id": 5, "text": " \\left(4 + 2\\sqrt{3}\\right)a^2 \\approx 7.464a^2. " }, { "math_id": 6, "text": " \\left(1 + \\frac{\\sqrt{2}}{3}\\right)a^3 \\approx 1.471a^3. " }, { "math_id": 7, "text": " \\arccos(-1/3) \\approx 109.47^\\circ " }, { "math_id": 8, "text": " \\pi/2 " }, { "math_id": 9, "text": " \\arctan \\left(\\sqrt{2}\\right) \\approx 54.74^\\circ " }, { "math_id": 10, "text": " \\arctan\\left(\\sqrt{2}\\right) + \\frac{\\pi}{2} \\approx 144.74^\\circ. " }, { "math_id": 11, "text": " [4,3,4] " } ]
https://en.wikipedia.org/wiki?curid=1177952
1177953
Elongated pentagonal bipyramid
16th Johnson solid; pentagonal prism capped by pyramids In geometry, the elongated pentagonal bipyramid is a polyhedron constructed by attaching two pentagonal pyramids onto the bases of a pentagonal prism. It is an example of a Johnson solid. Construction. The elongated pentagonal bipyramid is constructed from a pentagonal prism by attaching two pentagonal pyramids onto its bases, a process called elongation. These pyramids cover the pentagonal faces, so that the resulting polyhedron has ten equilateral triangles and five squares as its faces. A convex polyhedron in which all of the faces are regular polygons is a Johnson solid. The elongated pentagonal bipyramid is among them, enumerated as the sixteenth Johnson solid formula_0. Properties. The surface area of an elongated pentagonal bipyramid formula_1 is the sum of all polygonal faces' areas: ten equilateral triangles and five squares. Its volume formula_2 can be ascertained by dissecting it into two pentagonal pyramids and one regular pentagonal prism and then adding their volumes. Given an elongated pentagonal bipyramid with edge length formula_3, they can be formulated as: formula_4 It has the same three-dimensional symmetry group as the pentagonal prism, the dihedral group formula_5 of order 20. Its dihedral angle can be calculated by adding the angle of the pentagonal pyramid and pentagonal prism: The dual of the elongated pentagonal bipyramid is a pentagonal bifrustum. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " J_{16} " }, { "math_id": 1, "text": " A " }, { "math_id": 2, "text": " V " }, { "math_id": 3, "text": " a " }, { "math_id": 4, "text": " \\begin{align}\n A &= \\frac{5}{2} \\left(2+\\sqrt{3}\\right)a^2 \\approx 9.330a^2, \\\\\n V &= \\frac{1}{12} \\left(5+\\sqrt{5}+3 \\sqrt{5 \\left(5+2 \\sqrt{5}\\right)}\\right)a^3 \\approx 2.324a^3.\n\\end{align} " }, { "math_id": 5, "text": " D_{5\\mathrm{h}} " } ]
https://en.wikipedia.org/wiki?curid=1177953
1177955
Elongated pentagonal cupola
20th Johnson solid In geometry, the elongated pentagonal cupola is one of the Johnson solids ("J"20). As the name suggests, it can be constructed by elongating a pentagonal cupola ("J"5) by attaching a decagonal prism to its base. The solid can also be seen as an elongated pentagonal orthobicupola ("J"38) with its "lid" (another pentagonal cupola) removed. A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. Formulas. The following formulas for the volume and surface area can be used if all faces are regular, with edge length "a": formula_0 formula_1 Dual polyhedron. The dual of the elongated pentagonal cupola has 25 faces: 10 isosceles triangles, 5 kites, and 10 quadrilaterals. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V=\\left(\\frac{1}{6}\\left(5+4\\sqrt{5}+15\\sqrt{5+2\\sqrt{5}}\\right)\\right)a^3\\approx10.0183...a^3" }, { "math_id": 1, "text": "A=\\left(\\frac{1}{4}\\left(60+\\sqrt{10\\left(80+31\\sqrt{5}+\\sqrt{2175+930\\sqrt{5}}\\right)}\\right)\\right)a^2\\approx26.5797...a^2" } ]
https://en.wikipedia.org/wiki?curid=1177955
1177960
Gyroelongated pentagonal cupola
In geometry, the gyroelongated pentagonal cupola is one of the Johnson solids ("J"24). As the name suggests, it can be constructed by gyroelongating a pentagonal cupola ("J"5) by attaching a decagonal antiprism to its base. It can also be seen as a gyroelongated pentagonal bicupola ("J"46) with one pentagonal cupola removed. A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. Area and Volume. With edge length a, the surface area is formula_0 and the volume is formula_1 Dual polyhedron. The dual of the gyroelongated pentagonal cupola has 25 faces: 10 kites, 5 rhombi, and 10 pentagons.
[ { "math_id": 0, "text": "A=\\frac{1}{4}\\left( 20+25\\sqrt{3}+\\left(10+\\sqrt{5}\\right)\\sqrt{5+2\\sqrt{5}}\\right)a^2\\approx25.240003791...a^2," }, { "math_id": 1, "text": "V=\\left(\\frac{5}{6}+\\frac{2}{3}\\sqrt{5} + \\frac{5}{6}\\sqrt{2\\sqrt{650+290\\sqrt{5}}-2\\sqrt{5}-2}\\right) a^3\\approx 9.073333194...a^3." } ]
https://en.wikipedia.org/wiki?curid=1177960
1177963
Gyrobifastigium
Polyhedron by attaching two triangular prisms In geometry, the gyrobifastigium is a polyhedron that is constructed by attaching a triangular prism to a square face of another one. It is an example of a Johnson solid. It is the only Johnson solid that can tile three-dimensional space. Construction and its naming. The gyrobifastigium can be constructed by attaching two triangular prisms along corresponding square faces, giving a quarter-turn to one prism. These prisms cover the square faces, so the resulting polyhedron has four equilateral triangles and four squares, making eight faces in total; in this general sense it is an octahedron. Because its faces are all regular polygons and it is convex, the gyrobifastigium is classified as a Johnson solid, enumerated as the twenty-sixth Johnson solid formula_0. The name of the gyrobifastigium comes from the Latin "fastigium", meaning a sloping roof. In the standard naming convention of the Johnson solids, "bi-" means two solids connected at their bases, and "gyro-" means the two halves are twisted with respect to each other. Cartesian coordinates for the gyrobifastigium with regular faces and unit edge lengths may easily be derived using the height formula_1 of each prism ridge above the shared square (the height of a unit equilateral triangle), as follows: formula_2 Properties. To calculate the formula for the surface area and volume of a gyrobifastigium with regular faces and with edge length formula_3, one may adapt the corresponding formulae for the triangular prism. Its surface area formula_4 can be obtained by summing the areas of four equilateral triangles and four squares, whereas its volume formula_5 is obtained by slicing it into two triangular prisms and adding their volumes. That is: formula_6 Related figures. The Schmitt–Conway–Danzer biprism (also called a SCD prototile) is a polyhedron topologically equivalent to the gyrobifastigium, but with parallelogram and irregular triangle faces instead of squares and equilateral triangles. Like the gyrobifastigium, it can fill space, but only aperiodically or with a screw symmetry, not with a full three-dimensional group of symmetries. Thus, it provides a partial solution to the three-dimensional einstein problem. The gyrated triangular prismatic honeycomb can be constructed by packing together large numbers of identical gyrobifastigiums. The gyrobifastigium is one of five convex polyhedra with regular faces capable of space-filling (the others being the cube, truncated octahedron, triangular prism, and hexagonal prism) and it is the only Johnson solid capable of doing so. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
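The Cartesian coordinates and face counts given above can be checked directly. The following Python sketch (illustrative, not taken from any library) verifies that the eight listed vertices are joined by the expected 14 unit edges and that the total height of the solid is √3, i.e. two prism roofs of height √3/2.

```python
# A small check of the Cartesian coordinates quoted above for the unit-edge
# gyrobifastigium: the eight vertices should be joined by 14 edges of length 1,
# and the solid's total height should equal sqrt(3).
from itertools import combinations
from math import isclose, sqrt, dist

h = sqrt(3) / 2
vertices = [(x, y, 0.0) for x in (-0.5, 0.5) for y in (-0.5, 0.5)]   # shared square
vertices += [(0.0, y, h) for y in (-0.5, 0.5)]                       # upper ridge
vertices += [(x, 0.0, -h) for x in (-0.5, 0.5)]                      # lower ridge (gyrated)

edges = [pair for pair in combinations(vertices, 2) if isclose(dist(*pair), 1.0)]
print(len(edges))   # 14 edges, as expected for J26 (8 vertices, 8 faces, 14 edges)
print(isclose(max(v[2] for v in vertices) - min(v[2] for v in vertices), sqrt(3)))  # True
```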
[ { "math_id": 0, "text": " J_{26} " }, { "math_id": 1, "text": " h = \\frac{\\sqrt{3}}{2} " }, { "math_id": 2, "text": "\\left(\\pm\\frac{1}{2},\\pm\\frac{1}{2},0\\right),\\left(0,\\pm\\frac{1}{2},\\frac{\\sqrt{3}}{2}\\right),\\left(\\pm\\frac{1}{2},0,-\\frac{\\sqrt{3}}{2}\\right)." }, { "math_id": 3, "text": " a " }, { "math_id": 4, "text": " A " }, { "math_id": 5, "text": " V " }, { "math_id": 6, "text": " \\begin{align}\n A &= \\left(4+\\sqrt{3}\\right)a^2 \\approx 5.73205a^2, \\\\\n V &= \\left(\\frac{\\sqrt{3}}{2}\\right)a^3 \\approx 0.86603a^3.\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=1177963
1177964
Pentagonal orthobicupola
30th Johnson solid; 2 pentagonal cupolae joined base-to-base In geometry, the pentagonal orthobicupola is one of the Johnson solids ("J"30). As the name suggests, it can be constructed by joining two pentagonal cupolae ("J"5) along their decagonal bases, matching like faces. A 36-degree rotation of one cupola before the joining yields a pentagonal gyrobicupola ("J"31). The "pentagonal orthobicupola" is the third in an infinite set of orthobicupolae. A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. Formulae. The following formulae for volume and surface area can be used if all faces are regular, with edge length "a": formula_0 formula_1 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V=\\frac{1}{3}\\left(5+4\\sqrt{5}\\right)a^3\\approx4.64809...a^3" }, { "math_id": 1, "text": "A=\\left(10+\\sqrt{\\frac{5}{2}\\left(10+\\sqrt{5}+\\sqrt{75+30\\sqrt{5}}\\right)}\\right)a^2\\approx17.7711...a^2" } ]
https://en.wikipedia.org/wiki?curid=1177964
1177965
Pentagonal gyrobicupola
31st Johnson solid; 2 pentagonal cupolae joined base-to-base The pentagonal gyrobicupola is a polyhedron that is constructed by attaching two pentagonal cupolas base-to-base, with one of the cupolas twisted by 36° relative to the other. It is an example of a Johnson solid and a composite polyhedron. Construction. The pentagonal gyrobicupola is a composite polyhedron: it is constructed by attaching two pentagonal cupolas base-to-base. This construction is similar to the pentagonal orthobicupola; the difference is that one of the cupolas in the pentagonal gyrobicupola is twisted at 36°, as suggested by the prefix "gyro-". The resulting polyhedron has the same faces as the pentagonal orthobicupola does: the cupolas' decagonal bases are joined and become internal, leaving ten equilateral triangles, ten squares, and two regular pentagons as its faces. A convex polyhedron in which all of its faces are regular polygons is a Johnson solid. The pentagonal gyrobicupola is one of them, enumerated as the thirty-first Johnson solid formula_0. Properties. Because it has a similar construction as the pentagonal orthobicupola, the surface area of a pentagonal gyrobicupola formula_1 is the sum of its polygonal faces' areas, and its volume formula_2 is twice the volume of a pentagonal cupola, obtained by slicing it into those two cupolas: formula_3 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " J_{31} " }, { "math_id": 1, "text": " A " }, { "math_id": 2, "text": " V " }, { "math_id": 3, "text": " \\begin{align}\n A &= \\frac{20 + \\sqrt{100 + 10 \\sqrt{5} + 10\\sqrt{75+30\\sqrt{5}}}}{2}a^2 \\approx 17.771a^2, \\\\\n V &= \\frac{5+4\\sqrt{5}}{3}a^3 \\approx 4.648a^3.\n\\end{align} " } ]
https://en.wikipedia.org/wiki?curid=1177965
1177967
Elongated pentagonal orthobicupola
38th Johnson solid In geometry, the elongated pentagonal orthobicupola or cantellated pentagonal prism is one of the Johnson solids ("J"38). As the name suggests, it can be constructed by elongating a pentagonal orthobicupola ("J"30) by inserting a decagonal prism between its two congruent halves. Rotating one of the cupolae through 36 degrees before inserting the prism yields an elongated pentagonal gyrobicupola ("J"39). A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. Formulae. The following formulae for volume and surface area can be used if all faces are regular, with edge length "a": formula_0 formula_1 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V=\\frac{1}{6}\\left(10+8\\sqrt{5}+15\\sqrt{5+2\\sqrt{5}}\\right)a^3\\approx12.3423...a^3" }, { "math_id": 1, "text": "A=\\left(20+\\sqrt{\\frac{5}{2}\\left(10+\\sqrt{5}+\\sqrt{75+30\\sqrt{5}}\\right)}\\right)a^2\\approx27.7711...a^2" } ]
https://en.wikipedia.org/wiki?curid=1177967
1177969
Elongated pentagonal gyrobicupola
39th Johnson solid In geometry, the elongated pentagonal gyrobicupola is one of the Johnson solids ("J"39). As the name suggests, it can be constructed by elongating a pentagonal gyrobicupola ("J"31) by inserting a decagonal prism between its congruent halves. Rotating one of the pentagonal cupolae ("J"5) through 36 degrees before inserting the prism yields an elongated pentagonal orthobicupola ("J"38). A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. Formulae. The following formulae for volume and surface area can be used if all faces are regular, with edge length "a": formula_0 formula_1 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V=\\frac{1}{6}\\left(10+8\\sqrt{5}+15\\sqrt{5+2\\sqrt{5}}\\right)a^3\\approx12.3423...a^3" }, { "math_id": 1, "text": "A=\\left(20+\\sqrt{\\frac{5}{2}\\left(10+\\sqrt{5}+\\sqrt{75+30\\sqrt{5}}\\right)}\\right)a^2\\approx27.7711...a^2" } ]
https://en.wikipedia.org/wiki?curid=1177969
1177970
Augmented triangular prism
49th Johnson solid In geometry, the augmented triangular prism is a polyhedron constructed by attaching an equilateral square pyramid onto one square face of a triangular prism. As a result, it is an example of a Johnson solid. It can be visualized in chemistry as the shape of the capped trigonal prismatic molecular geometry. Construction. The augmented triangular prism can be constructed from a triangular prism by attaching an equilateral square pyramid to one of its square faces, a process known as augmentation. This square pyramid covers one square face of the prism, so the resulting polyhedron has 6 equilateral triangles and 2 squares as its faces. A convex polyhedron in which all faces are regular is a Johnson solid, and the augmented triangular prism is among them, enumerated as the 49th Johnson solid formula_1. Properties. An augmented triangular prism with edge length formula_2 has a surface area, calculated by adding the areas of six equilateral triangles and two squares: formula_3 Its volume can be obtained by slicing it into a regular triangular prism and an equilateral square pyramid, and adding their volumes subsequently: formula_4 Its three-dimensional symmetry group is the cyclic group formula_0 of order 4. Its dihedral angle can be calculated by adding the angles of an equilateral square pyramid and a regular triangular prism in the following: Application. In the geometry of chemical compounds, a polyhedron is commonly used to visualize an atom cluster surrounding a central atom. The capped trigonal prismatic molecular geometry describes clusters for which this polyhedron is an augmented triangular prism. An example of such a compound is potassium heptafluorotantalate. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " C_{2\\mathrm{v}} " }, { "math_id": 1, "text": " J_{49} " }, { "math_id": 2, "text": " a " }, { "math_id": 3, "text": " \\frac{4 + 3\\sqrt{3}}{2}a^2 \\approx 4.598a^2. " }, { "math_id": 4, "text": " \\frac{2\\sqrt{2} + 3\\sqrt{3}}{12}a^3 \\approx 0.669a^3. " }, { "math_id": 5, "text": " \\arccos \\left(-1/3 \\right) \\approx 109.5^\\circ " }, { "math_id": 6, "text": " \\pi/3 = 60^\\circ " }, { "math_id": 7, "text": " \\pi/2 = 90^\\circ " }, { "math_id": 8, "text": " \\arctan \\left(\\sqrt{2}\\right) \\approx 54.7^\\circ " }, { "math_id": 9, "text": " \\begin{align}\n \\arctan \\left(\\sqrt{2}\\right) + \\frac{\\pi}{3} &\\approx 114.7^\\circ, \\\\\n 2 \\arctan \\left(\\sqrt{2}\\right) + \\frac{\\pi}{3} &\\approx 169.4^\\circ.\n\\end{align} \n" } ]
https://en.wikipedia.org/wiki?curid=1177970
1177972
Biaugmented triangular prism
50th Johnson solid In geometry, the biaugmented triangular prism is a polyhedron constructed from a triangular prism by attaching two equilateral square pyramids onto two of its square faces. It is an example of a Johnson solid. It can be found in stereochemistry as the bicapped trigonal prismatic molecular geometry. Construction. The biaugmented triangular prism can be constructed from a triangular prism by attaching two equilateral square pyramids onto two of its square faces, a process known as augmentation. These pyramids cover those square faces of the prism, so the resulting polyhedron has 10 equilateral triangles and 1 square as its faces. A convex polyhedron in which all faces are regular polygons is a Johnson solid. The biaugmented triangular prism is among them, enumerated as the 50th Johnson solid formula_1. Properties. A biaugmented triangular prism with edge length formula_2 has a surface area, calculated by adding the areas of ten equilateral triangles and one square: formula_3 Its volume can be obtained by slicing it into a regular triangular prism and two equilateral square pyramids, and adding their volumes subsequently: formula_4 Its three-dimensional symmetry group is the cyclic group formula_0 of order 4. Its dihedral angle can be calculated by adding the angles of an equilateral square pyramid and a regular triangular prism in the following: Appearance. The biaugmented triangular prism can be found in stereochemistry, as the structural shape of chemical compounds with bicapped trigonal prismatic molecular geometry. It is one of the three common shapes for transition metal complexes with eight vertices, along with the square antiprism and the snub disphenoid. An example of such a structure is plutonium(III) bromide, PuBr3, a structure adopted by the bromides and iodides of the lanthanides and actinides. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
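The dihedral-angle additions described above can be checked numerically. This short Python sketch (with illustrative variable names) adds the square-pyramid angle arctan √2 to the triangular-prism angles at the edges where the pyramids are attached and reproduces the approximate values quoted for the biaugmented triangular prism.

```python
# Dihedral angles of the biaugmented triangular prism (J50), obtained by adding
# the square-pyramid angle arctan(sqrt(2)) to the triangular-prism angles.
from math import acos, atan, sqrt, degrees, pi

pyramid_apex = acos(-1/3)          # triangle-triangle at a pyramid apex, ~109.5 deg
pyramid_base = atan(sqrt(2))       # pyramid lateral face to its (removed) base, ~54.7 deg
prism_square_square = pi / 3       # between two square faces of the prism, 60 deg
prism_square_triangle = pi / 2     # between a square and a triangular face, 90 deg

print(round(degrees(pyramid_apex), 1))                             # 109.5
print(round(degrees(pyramid_base + prism_square_square), 1))       # 114.7  pyramid triangle / square
print(round(degrees(2 * pyramid_base + prism_square_square), 1))   # 169.5  pyramid triangle / pyramid triangle
print(round(degrees(pyramid_base + prism_square_triangle), 1))     # 144.7  pyramid triangle / prism triangle
```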
[ { "math_id": 0, "text": " C_{2\\mathrm{v}} " }, { "math_id": 1, "text": " J_{50} " }, { "math_id": 2, "text": " a " }, { "math_id": 3, "text": "\\frac{2 + 5\\sqrt{3}}{2}a^2 \\approx 5.3301a^2. " }, { "math_id": 4, "text": "\\sqrt{\\frac{59}{144} + \\frac{1}{\\sqrt{6}}}a^3 \\approx 0.904a^3. " }, { "math_id": 5, "text": " \\arccos \\left(-1/3 \\right) \\approx 109.5^\\circ " }, { "math_id": 6, "text": " \\pi/2 = 90^\\circ " }, { "math_id": 7, "text": " \\arctan \\left(\\sqrt{2}\\right) \\approx 54.7^\\circ " }, { "math_id": 8, "text": " \\pi/3 = 60^\\circ " }, { "math_id": 9, "text": " \\begin{align}\n \\arctan \\left(\\sqrt{2}\\right) + \\frac{\\pi}{3} &\\approx 114.7^\\circ, \\\\\n 2 \\arctan \\left(\\sqrt{2}\\right) + \\frac{\\pi}{3} &\\approx 169.4^\\circ.\n\\end{align} \n" }, { "math_id": 10, "text": " \\arctan \\left(\\sqrt{2}\\right) + \\frac{\\pi}{2} \\approx 144.7^\\circ. " } ]
https://en.wikipedia.org/wiki?curid=1177972
1177974
Augmented pentagonal prism
52nd Johnson solid In geometry, the augmented pentagonal prism is a polyhedron that can be constructed by attaching an equilateral square pyramid onto a square face of a pentagonal prism. It is an example of a Johnson solid. Construction. The augmented pentagonal prism can be constructed from a pentagonal prism by attaching an equilateral square pyramid to one of its square faces, a process known as augmentation. This square pyramid covers the square face of the prism, so the resulting polyhedron has four equilateral triangles, four squares, and two regular pentagons as its faces. A convex polyhedron in which all faces are regular is a Johnson solid, and the augmented pentagonal prism is among them, enumerated as the 52nd Johnson solid formula_0. Properties. An augmented pentagonal prism with edge length formula_1 has a surface area, calculated by adding the areas of four equilateral triangles, four squares, and two regular pentagons: formula_2 Its volume can be obtained by slicing it into a regular pentagonal prism and an equilateral square pyramid, and adding their volumes subsequently: formula_3 The dihedral angle of an augmented pentagonal prism can be calculated by adding the dihedral angles of an equilateral square pyramid and the regular pentagonal prism: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " J_{52} " }, { "math_id": 1, "text": " a " }, { "math_id": 2, "text": " \\frac{8 + 2\\sqrt{3} + \\sqrt{25 + 10\\sqrt{5}}}{2}a^2 \\approx 9.173a^2. " }, { "math_id": 3, "text": " \\frac{\\sqrt{233 + 90\\sqrt{5} + 12\\sqrt{50 + 20\\sqrt{5}}}}{12}a^3 \\approx 1.9562a^3. " }, { "math_id": 4, "text": " \\arccos \\left(-\\frac{1}{3} \\right) \\approx 109.5^\\circ " }, { "math_id": 5, "text": " \\frac{3\\pi}{5} = 108^\\circ " }, { "math_id": 6, "text": " \\frac{\\pi}{2} = 90^\\circ " }, { "math_id": 7, "text": " \\arctan \\left(\\sqrt{2}\\right) + \\frac{\\pi}{2} \\approx 144.7^\\circ " }, { "math_id": 8, "text": " \\arctan \\left(\\sqrt{2}\\right) \\approx 54.7^\\circ " }, { "math_id": 9, "text": " \\arctan \\left(\\sqrt{2}\\right) + \\frac{3\\pi}{5} \\approx 162.7^\\circ " } ]
https://en.wikipedia.org/wiki?curid=1177974
1177977
Biaugmented pentagonal prism
53rd Johnson solid In geometry, the biaugmented pentagonal prism is a polyhedron constructed from a pentagonal prism by attaching two equilateral square pyramids onto two of its square faces. It is an example of a Johnson solid. Construction. The biaugmented pentagonal prism can be constructed from a pentagonal prism by attaching two equilateral square pyramids to two of its square faces, a process known as augmentation. These square pyramids cover those square faces of the prism, so the resulting polyhedron has eight equilateral triangles, three squares, and two regular pentagons as its faces. A convex polyhedron in which all faces are regular is a Johnson solid, and the biaugmented pentagonal prism is among them, enumerated as the 53rd Johnson solid formula_0. Properties. A biaugmented pentagonal prism with edge length formula_1 has a surface area, calculated by adding the areas of eight equilateral triangles, three squares, and two regular pentagons: formula_2 Its volume can be obtained by slicing it into a regular pentagonal prism and two equilateral square pyramids, and adding their volumes subsequently: formula_3 The dihedral angle of a biaugmented pentagonal prism can be calculated by adding the dihedral angles of an equilateral square pyramid and the regular pentagonal prism: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " J_{53} " }, { "math_id": 1, "text": " a " }, { "math_id": 2, "text": " \\frac{6 + 4\\sqrt{3} + \\sqrt{25 + 10\\sqrt{5}}}{2}a^2 \\approx 9.9051a^2. " }, { "math_id": 3, "text": " \\frac{\\sqrt{257 + 90\\sqrt{5} + 24\\sqrt{50 + 20\\sqrt{5}}}}{12}a^3 \\approx 2.1919a^3. " }, { "math_id": 4, "text": " \\arccos \\left(-\\frac{1}{3} \\right) \\approx 109.5^\\circ " }, { "math_id": 5, "text": " \\frac{3\\pi}{5} = 108^\\circ " }, { "math_id": 6, "text": " \\frac{\\pi}{2} = 90^\\circ " }, { "math_id": 7, "text": " \\arctan \\left(\\sqrt{2}\\right) + \\frac{\\pi}{2} \\approx 144.7^\\circ " }, { "math_id": 8, "text": " \\arctan \\left(\\sqrt{2}\\right) \\approx 54.7^\\circ " }, { "math_id": 9, "text": " \\arctan \\left(\\sqrt{2}\\right) + \\frac{3\\pi}{5} \\approx 162.7^\\circ " } ]
https://en.wikipedia.org/wiki?curid=1177977
1177979
Augmented hexagonal prism
54th Johnson solid In geometry, the augmented hexagonal prism is one of the Johnson solids ("J"54). As the name suggests, it can be constructed by augmenting a hexagonal prism by attaching a square pyramid ("J"1) to one of its equatorial faces. When two or three such pyramids are attached, the result may be a parabiaugmented hexagonal prism ("J"55), a metabiaugmented hexagonal prism ("J"56), or a triaugmented hexagonal prism ("J"57). Construction. The augmented hexagonal prism is constructed by attaching one equilateral square pyramid onto one of the square faces of a hexagonal prism, a process known as augmentation. This construction involves the removal of that prism square face and replacing it with the square pyramid, so that there are eleven faces: four equilateral triangles, five squares, and two regular hexagons. A convex polyhedron in which all of the faces are regular is a Johnson solid, and the augmented hexagonal prism is among them, enumerated as formula_0. Relatedly, attaching two or three equilateral square pyramids onto more square faces of the prism gives further Johnson solids; these are the parabiaugmented hexagonal prism formula_1, the metabiaugmented hexagonal prism formula_2, and the triaugmented hexagonal prism formula_3. Properties. An augmented hexagonal prism with edge length formula_4 has surface area formula_5, the sum of the areas of two hexagons, four equilateral triangles, and five squares. Its volume formula_6 can be obtained by slicing it into one equilateral square pyramid and one hexagonal prism, and adding their volumes. It has an axis of symmetry passing through the apex of the square pyramid and the centroid of the opposite square face of the prism, about which it is symmetric under a half-turn rotation. Its dihedral angle can be obtained by calculating the angles of a square pyramid and a hexagonal prism in the following: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " J_{54} " }, { "math_id": 1, "text": " J_{55} " }, { "math_id": 2, "text": " J_{56} " }, { "math_id": 3, "text": " J_{57} " }, { "math_id": 4, "text": " a " }, { "math_id": 5, "text": " \\left(5 + 4\\sqrt{3}\\right)a^2 \\approx 11.928a^2, " }, { "math_id": 6, "text": " \\frac{\\sqrt{2} + 9\\sqrt{3}}{6}a^3 \\approx 2.834a^3, " }, { "math_id": 7, "text": " \\arccos \\left(-1/3\\right) \\approx 109.5^\\circ " }, { "math_id": 8, "text": " 2\\pi/3 = 120^\\circ " }, { "math_id": 9, "text": " \\pi/2 " }, { "math_id": 10, "text": " \\arctan \\left(\\sqrt{2}\\right) \\approx 54.75^\\circ " }, { "math_id": 11, "text": " \\begin{align}\n \\arctan \\left(\\sqrt{2}\\right) + \\frac{2\\pi}{3} \\approx 174.75^\\circ, \\\\\n \\arctan \\left(\\sqrt{2}\\right) + \\frac{\\pi}{2} \\approx 144.75^\\circ.\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=1177979
1178048
Gyroelongated pentagonal bicupola
46th Johnson solid In geometry, the gyroelongated pentagonal bicupola is one of the Johnson solids ("J"46). As the name suggests, it can be constructed by gyroelongating a pentagonal bicupola ("J"30 or "J"31) by inserting a decagonal antiprism between its congruent halves. The gyroelongated pentagonal bicupola is one of five Johnson solids which are chiral, meaning that they have a "left-handed" and a "right-handed" form. In the illustration to the right, each square face on the bottom half of the figure is connected by a path of two triangular faces to a square face above it and to the right. In the figure of opposite chirality (the mirror image of the illustrated figure), each bottom square would be connected to a square face above it and to the left. The two chiral forms of "J"46 are not considered different Johnson solids. A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. Area and Volume. With edge length a, the surface area is formula_0 and the volume is formula_1
[ { "math_id": 0, "text": "A=\\frac{1}{2}\\left(20+15\\sqrt{3}+\\sqrt{25+10\\sqrt{5}}\\right)a^2\\approx26.431335858...a^2," }, { "math_id": 1, "text": "V=\\left(\\frac{5}{3}+\\frac{4}{3}\\sqrt{5} + \\frac{5}{6}\\sqrt{2\\sqrt{650+290\\sqrt{5}}-2\\sqrt{5}-2}\\right) a^3\\approx11.397378512...a^3." } ]
https://en.wikipedia.org/wiki?curid=1178048
11782257
Concordance correlation coefficient
In statistics, the concordance correlation coefficient measures the agreement between two variables, e.g., to evaluate reproducibility or for inter-rater reliability. Definition. The concordance correlation coefficient formula_0 has the form formula_1 where formula_2 and formula_3 are the means for the two variables and formula_4 and formula_5 are the corresponding variances. formula_6 is the correlation coefficient between the two variables. This follows from its definition as formula_7 When the concordance correlation coefficient is computed on a formula_8-length data set (i.e., formula_8 paired data values formula_9, for formula_10), the form is formula_11 where the mean is computed as formula_12 and the variance formula_13 and the covariance formula_14 Whereas the ordinary correlation coefficient (Pearson's) is immune to whether the biased or unbiased version of the variance estimator is used, the concordance correlation coefficient is not. In the original article Lin suggested the 1/N normalization, while in another article Nickerson appears to have used 1/(N-1), i.e., the concordance correlation coefficient may be computed slightly differently between implementations. Relation to other measures of correlation. The concordance correlation coefficient is nearly identical to some of the measures called intra-class correlations. Comparisons of the concordance correlation coefficient with an "ordinary" intraclass correlation on different data sets found only small differences between the two correlations, in one case on the third decimal. It has also been stated that the ideas for the concordance correlation coefficient "are quite similar to results already published by Krippendorff in 1970". In the original article Lin suggested a form for multiple classes (not just 2). Over ten years later a correction to this form was issued. One example of the use of the concordance correlation coefficient is in a comparison of analysis methods for functional magnetic resonance imaging brain scans. References. &lt;templatestyles src="Reflist/styles.css" /&gt; For a small Excel and VBA implementation by Peter Urbani see here
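The sample form of the coefficient given above translates directly into code. The following Python sketch (the function name is an illustrative choice) uses the 1/N normalization attributed to Lin, so results may differ slightly from implementations that use 1/(N-1), as noted in the text.

```python
# Concordance correlation coefficient for paired data, following the sample form
# above with the 1/N normalization (Lin).
def concordance_correlation(x, y):
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    var_x = sum((xi - mean_x) ** 2 for xi in x) / n
    var_y = sum((yi - mean_y) ** 2 for yi in y) / n
    cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / n
    return 2.0 * cov_xy / (var_x + var_y + (mean_x - mean_y) ** 2)

# Perfect agreement gives 1; a constant offset between the two raters lowers the
# value even though the Pearson correlation would still be 1.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
print(concordance_correlation(x, x))                       # 1.0
print(concordance_correlation(x, [xi + 1.0 for xi in x]))  # 0.8  (2*2 / (2 + 2 + 1))
```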
[ { "math_id": 0, "text": "\\rho_c" }, { "math_id": 1, "text": "\\rho_c = \\frac{2\\rho\\sigma_x\\sigma_y}{\\sigma_x^2 + \\sigma_y^2 + (\\mu_x - \\mu_y)^2}," }, { "math_id": 2, "text": "\\mu_x" }, { "math_id": 3, "text": "\\mu_y" }, { "math_id": 4, "text": "\\sigma^2_x" }, { "math_id": 5, "text": "\\sigma^2_y" }, { "math_id": 6, "text": "\\rho" }, { "math_id": 7, "text": "\\rho_c = 1 - \\frac{{\\rm Expected\\ orthogonal\\ squared\\ distance\\ from\\ the\\ diagonal\\ }x=y}\n{{\\rm Expected\\ orthogonal\\ squared\\ distance\\ from\\ the\\ diagonal\\ }x=y{\\rm \\ assuming\\ independence}}." }, { "math_id": 8, "text": "N" }, { "math_id": 9, "text": "(x_n, y_n)" }, { "math_id": 10, "text": "n=1,...,N" }, { "math_id": 11, "text": "\\hat{\\rho}_c = \\frac{2 s_{xy}}{s_x^2 + s_y^2 + (\\bar{x} - \\bar{y})^2}," }, { "math_id": 12, "text": "\\bar{x} = \\frac{1}{N} \\sum_{n=1}^N x_n" }, { "math_id": 13, "text": "s_x^2 = \\frac{1}{N} \\sum_{n=1}^N (x_n - \\bar{x})^2" }, { "math_id": 14, "text": "s_{xy} = \\frac{1}{N} \\sum_{n=1}^N (x_n - \\bar{x})(y_n - \\bar{y}) ." } ]
https://en.wikipedia.org/wiki?curid=11782257
1178438
Large eddy simulation
Mathematical model for turbulence Large eddy simulation (LES) is a mathematical model for turbulence used in computational fluid dynamics. It was initially proposed in 1963 by Joseph Smagorinsky to simulate atmospheric air currents, and first explored by Deardorff (1970). LES is currently applied in a wide variety of engineering applications, including combustion, acoustics, and simulations of the atmospheric boundary layer. The simulation of turbulent flows by numerically solving the Navier–Stokes equations requires resolving a very wide range of time and length scales, all of which affect the flow field. Such a resolution can be achieved with direct numerical simulation (DNS), but DNS is computationally expensive, and its cost prohibits simulation of practical engineering systems with complex geometry or flow configurations, such as turbulent jets, pumps, vehicles, and landing gear. The principal idea behind LES is to reduce the computational cost by ignoring the smallest length scales, which are the most computationally expensive to resolve, via low-pass filtering of the Navier–Stokes equations. Such a low-pass filtering, which can be viewed as a time- and spatial-averaging, effectively removes small-scale information from the numerical solution. This information is not irrelevant, however, and its effect on the flow field must be modelled, a task which is an active area of research for problems in which small-scales can play an important role, such as near-wall flows, reacting flows, and multiphase flows. Filter definition and properties. An LES filter can be applied to a spatial and temporal field formula_0 and perform a spatial filtering operation, a temporal filtering operation, or both. The filtered field, denoted with a bar, is defined as: formula_1 where formula_2 is the filter convolution kernel. This can also be written as: formula_3 The filter kernel formula_2 has an associated cutoff length scale formula_4 and cutoff time scale formula_5. Scales smaller than these are eliminated from formula_6. Using the above filter definition, any field formula_7 may be split up into a filtered and sub-filtered (denoted with a prime) portion, as formula_8 It is important to note that the large eddy simulation filtering operation does not satisfy the properties of a Reynolds operator. Filtered governing equations. The governing equations of LES are obtained by filtering the partial differential equations governing the flow field formula_9. There are differences between the incompressible and compressible LES governing equations, which lead to the definition of a new filtering operation. Incompressible flow. For incompressible flow, the continuity equation and Navier–Stokes equations are filtered, yielding the filtered incompressible continuity equation, formula_10 and the filtered Navier–Stokes equations, formula_11 where formula_12 is the filtered pressure field and formula_13 is the rate-of-strain tensor evaluated using the filtered velocity. The nonlinear filtered advection term formula_14 is the chief cause of difficulty in LES modeling. It requires knowledge of the unfiltered velocity field, which is unknown, so it must be modeled. The analysis that follows illustrates the difficulty caused by the nonlinearity, namely, that it causes interaction between large and small scales, preventing separation of scales. 
The filtered advection term can be split up, following Leonard (1975), as: formula_15 where formula_16 is the residual stress tensor, so that the filtered Navier-Stokes equations become formula_17 with the residual stress tensor formula_16 grouping all unclosed terms. Leonard decomposed this stress tensor as formula_18 and provided physical interpretations for each term. formula_19, the Leonard tensor, represents interactions among large scales, formula_20, the Reynolds stress-like term, represents interactions among the sub-filter scales (SFS), and formula_21, the Clark tensor, represents cross-scale interactions between large and small scales. Modeling the unclosed term formula_16 is the task of sub-grid scale (SGS) models. This is made challenging by the fact that the subgrid stress tensor formula_16 must account for interactions among all scales, including filtered scales with unfiltered scales. The filtered governing equation for a passive scalar formula_7, such as mixture fraction or temperature, can be written as formula_22 where formula_23 is the diffusive flux of formula_7, and formula_24 is the sub-filter flux for the scalar formula_7. The filtered diffusive flux formula_25 is unclosed, unless a particular form is assumed for it, such as a gradient diffusion model formula_26. formula_24 is defined analogously to formula_16, formula_27 and can similarly be split up into contributions from interactions between various scales. This sub-filter flux also requires a sub-filter model. Derivation. Using Einstein notation, the Navier–Stokes equations for an incompressible fluid in Cartesian coordinates are formula_28 formula_29 Filtering the momentum equation results in formula_30 If we assume that filtering and differentiation commute, then formula_31 This equation models the changes in time of the filtered variables formula_32. Since the unfiltered variables formula_33 are not known, it is impossible to directly calculate formula_34. However, the quantity formula_35 is known. A substitution is made: formula_36 Let formula_37. The resulting set of equations are the LES equations: formula_38 Compressible governing equations. For the governing equations of compressible flow, each equation, starting with the conservation of mass, is filtered. This gives: formula_39 which results in an additional sub-filter term. However, it is desirable to avoid having to model the sub-filter scales of the mass conservation equation. For this reason, Favre proposed a density-weighted filtering operation, called Favre filtering, defined for an arbitrary quantity formula_7 as: formula_40 which, in the limit of incompressibility, becomes the normal filtering operation. This makes the conservation of mass equation: formula_41 This concept can then be extended to write the Favre-filtered momentum equation for compressible flow. Following Vreman: formula_42 where formula_43 is the shear stress tensor, given for a Newtonian fluid by: formula_44 and the term formula_45 represents a sub-filter viscous contribution from evaluating the viscosity formula_46 using the Favre-filtered temperature formula_47. The subgrid stress tensor for the Favre-filtered momentum field is given by formula_48 By analogy, the Leonard decomposition may also be written for the residual stress tensor for a filtered triple product formula_49. 
The triple product can be rewritten using the Favre filtering operator as formula_50, which is an unclosed term (it requires knowledge of the fields formula_7 and formula_51, when only the fields formula_52 and formula_53 are known). It can be broken up in a manner analogous to formula_14 above, which results in a sub-filter stress tensor formula_54. This sub-filter term can be split up into contributions from three types of interactions: the Leonard tensor formula_55, representing interactions among resolved scales; the Clark tensor formula_56, representing interactions between resolved and unresolved scales; and the Reynolds tensor formula_57, which represents interactions among unresolved scales. Filtered kinetic energy equation. In addition to the filtered mass and momentum equations, filtering the kinetic energy equation can provide additional insight. The kinetic energy field can be filtered to yield the total filtered kinetic energy: formula_58 and the total filtered kinetic energy can be decomposed into two terms: the kinetic energy of the filtered velocity field formula_59, formula_60 and the residual kinetic energy formula_61, formula_62 such that formula_63. The conservation equation for formula_59 can be obtained by multiplying the filtered momentum transport equation by formula_64 to yield: formula_65 where formula_66 is the dissipation of kinetic energy of the filtered velocity field by viscous stress, and formula_67 represents the sub-filter scale (SFS) dissipation of kinetic energy. The terms on the left-hand side represent transport, and the terms on the right-hand side are sink terms that dissipate kinetic energy. The formula_68 SFS dissipation term is of particular interest, since it represents the transfer of energy from large resolved scales to small unresolved scales. On average, formula_68 transfers energy from large to small scales. However, instantaneously formula_68 can be positive "or" negative, meaning it can also act as a source term for formula_59, the kinetic energy of the filtered velocity field. The transfer of energy from unresolved to resolved scales is called backscatter (and likewise the transfer of energy from resolved to unresolved scales is called forward-scatter). Numerical methods for LES. Large eddy simulation involves the solution to the discrete filtered governing equations using computational fluid dynamics. LES resolves scales from the domain size formula_69 down to the filter size formula_4, and as such a substantial portion of high wave number turbulent fluctuations must be resolved. This requires either high-order numerical schemes, or fine grid resolution if low-order numerical schemes are used. Chapter 13 of Pope addresses the question of how fine a grid resolution formula_70 is needed to resolve a filtered velocity field formula_71. Ghosal found that for low-order discretization schemes, such as those used in finite volume methods, the truncation error can be the same order as the subfilter scale contributions, unless the filter width formula_4 is considerably larger than the grid spacing formula_70. While even-order schemes have truncation error, they are non-dissipative, and because subfilter scale models are dissipative, even-order schemes will not affect the subfilter scale model contributions as strongly as dissipative schemes. Filter implementation. The filtering operation in large eddy simulation can be implicit or explicit. 
Implicit filtering recognizes that the subfilter scale model will dissipate in the same manner as many numerical schemes. In this way, the grid, or the numerical discretization scheme, can be assumed to be the LES low-pass filter. While this takes full advantage of the grid resolution, and eliminates the computational cost of calculating a subfilter scale model term, it is difficult to determine the shape of the LES filter that is effectively applied, since it is tied to the numerical scheme. Additionally, truncation error can also become an issue. In explicit filtering, an LES filter is applied to the discretized Navier–Stokes equations, providing a well-defined filter shape and reducing the truncation error. However, explicit filtering requires a finer grid than implicit filtering, and the computational cost increases with formula_72. Chapter 8 of Sagaut (2006) covers LES numerics in greater detail. Boundary conditions of large eddy simulations. Inlet boundary conditions affect the accuracy of LES significantly, and the treatment of inlet conditions for LES is a complicated problem. Theoretically, a good boundary condition for LES should have the following features: (1) it provides accurate information about the flow characteristics, i.e. velocity and turbulence; (2) it satisfies the Navier–Stokes equations and other physics; (3) it is easy to implement and adjust to different cases. Currently, methods of generating inlet conditions for LES are broadly divided into two categories classified by Tabor et al.: The first method for generating turbulent inlets is to synthesize them according to particular cases, such as Fourier techniques, proper orthogonal decomposition (POD) and vortex methods. The synthesis techniques attempt to construct a turbulent field at the inlet that has suitable turbulence-like properties and makes it easy to specify parameters of the turbulence, such as turbulent kinetic energy and turbulent dissipation rate. In addition, inlet conditions generated by using random numbers are computationally inexpensive. However, the method has one serious drawback: the synthesized turbulence does not have the physical structure of a fluid flow governed by the Navier–Stokes equations. The second method involves a separate, precursor calculation to generate a turbulent database which can be introduced into the main computation at the inlets. The database (sometimes called a 'library') can be generated in a number of ways, such as cyclic domains, a pre-prepared library, and internal mapping. However, the method of generating turbulent inflow by precursor simulations requires a large computational capacity. Researchers examining the application of various types of synthetic and precursor calculations have found that the more realistic the inlet turbulence, the more accurately LES predicts the results. Modeling unresolved scales. To discuss the modeling of unresolved scales, first the unresolved scales must be classified. They fall into two groups: resolved sub-filter scales (SFS), and sub-grid scales (SGS). The resolved sub-filter scales represent the scales with wave numbers larger than the cutoff wave number formula_73, but whose effects are dampened by the filter. Resolved sub-filter scales only exist when filters non-local in wave-space are used (such as a box or Gaussian filter). These resolved sub-filter scales must be modeled using filter reconstruction. Sub-grid scales are any scales that are smaller than the cutoff filter width formula_4. The form of the SGS model depends on the filter implementation. 
As mentioned in the Numerical methods for LES section, if implicit LES is considered, no SGS model is implemented and the numerical effects of the discretization are assumed to mimic the physics of the unresolved turbulent motions. Sub-grid scale models. Without a universally valid description of turbulence, empirical information must be utilized when constructing and applying SGS models, supplemented with fundamental physical constraints such as Galilean invariance. Two classes of SGS models exist: functional models and structural models. Some models may be categorized as both. Functional (eddy-viscosity) models. Functional models are simpler than structural models, focusing only on dissipating energy at a rate that is physically correct. These are based on an artificial eddy viscosity approach, where the effects of turbulence are lumped into a turbulent viscosity. The approach treats dissipation of kinetic energy at sub-grid scales as analogous to molecular diffusion. In this case, the deviatoric part of formula_16 is modeled as: formula_74 where formula_75 is the turbulent eddy viscosity and formula_76 is the rate-of-strain tensor. Based on dimensional analysis, the eddy viscosity must have units of formula_77. Most eddy viscosity SGS models model the eddy viscosity as the product of a characteristic length scale and a characteristic velocity scale. Smagorinsky–Lilly model. The first SGS model to be developed was the Smagorinsky–Lilly SGS model, proposed by Smagorinsky and used in the first LES simulation by Deardorff. It models the eddy viscosity as: formula_78 where formula_4 is the grid size and formula_79 is a constant. This method assumes that the energy production and dissipation of the small scales are in equilibrium, that is, formula_80. The Dynamic Model (Germano et al. and beyond). Germano et al. identified a number of studies using the Smagorinsky model that each found different values for the Smagorinsky constant formula_79 for different flow configurations. In an attempt to formulate a more universal approach to SGS models, Germano et al. proposed a dynamic Smagorinsky model, which utilized two filters: a grid LES filter, denoted formula_81, and a test LES filter, denoted formula_82 for any turbulent field formula_83. The test filter is larger in size than the grid filter and adds an additional smoothing of the turbulence field over the already smoothed fields represented by the LES. Applying the test filter to the LES equations (which are obtained by applying the "grid" filter to the Navier–Stokes equations) results in a new set of equations that are identical in form but with the SGS stress formula_84 replaced by formula_85. Germano et al. noted that even though neither formula_16 nor formula_86 can be computed exactly because of the presence of unresolved scales, there is an exact relation connecting these two tensors. This relation, known as the Germano identity, is formula_87 Here formula_88 can be explicitly evaluated as it involves only the filtered velocities and the operation of test filtering. The significance of the identity is that if one assumes that turbulence is self-similar, so that the SGS stresses at the grid and test levels have the same form formula_89 and formula_90, then the Germano identity provides an equation from which the Smagorinsky coefficient formula_79 (which is no longer a 'constant') can potentially be determined. 
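As a concrete sketch of this procedure (assuming NumPy and SciPy; the resolved velocity field is synthetic and divergence-free, the test filter is a simple box filter of twice the grid width, and the volume-averaged form of Lilly's least-squares estimate discussed in the following paragraph is used), one can compute the resolved rate-of-strain tensor, the Smagorinsky eddy viscosity, and a dynamic estimate of the coefficient formula_79 from the Germano identity:

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Resolved (grid-filtered) velocity on a 2D periodic grid; synthetic, divergence-free.
N = 64
dx = 2.0 * np.pi / N
x = np.arange(N) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.cos(3 * X) * np.sin(2 * Y)            # stand-in for the resolved u_1
v = -1.5 * np.sin(3 * X) * np.cos(2 * Y)     # stand-in for the resolved u_2

def ddx(f, axis):
    """Spectral derivative of a periodic field along the given axis."""
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
    K = k.reshape(-1, 1) if axis == 0 else k.reshape(1, -1)
    return np.real(np.fft.ifft2(1j * K * np.fft.fft2(f)))

# Resolved rate-of-strain tensor and its magnitude |S| = sqrt(2 S_ij S_ij)
S11, S22 = ddx(u, 0), ddx(v, 1)
S12 = 0.5 * (ddx(u, 1) + ddx(v, 0))
Smag = np.sqrt(2.0 * (S11**2 + 2.0 * S12**2 + S22**2))

# Plain Smagorinsky eddy viscosity nu_t = C * dx^2 * |S| with an assumed coefficient
Cs = 0.17                                    # assumed Smagorinsky constant, C = Cs^2
nu_t = (Cs * dx) ** 2 * Smag
print("max Smagorinsky nu_t:", nu_t.max())

# Dynamic (Germano/Lilly) estimate of C with a box test filter of width 2*dx
def hat(f):
    return uniform_filter(f, size=2, mode="wrap")

ratio = 2.0                                  # assumed test-to-grid filter-width ratio
L11 = hat(u * u) - hat(u) * hat(u)           # resolved (Leonard-type) stress of the identity
L22 = hat(v * v) - hat(v) * hat(v)
L12 = hat(u * v) - hat(u) * hat(v)

S11h, S22h, S12h = hat(S11), hat(S22), hat(S12)
Smagh = np.sqrt(2.0 * (S11h**2 + 2.0 * S12h**2 + S22h**2))

def m_ij(Sij, Sijh):
    # m_ij = alpha_ij - hat(beta_ij), using the Smagorinsky form at both filter levels
    return -2.0 * ((ratio * dx) ** 2 * Smagh * Sijh - dx**2 * hat(Smag * Sij))

M11, M22, M12 = m_ij(S11, S11h), m_ij(S22, S22h), m_ij(S12, S12h)
num = L11 * M11 + L22 * M22 + 2.0 * L12 * M12
den = M11 * M11 + M22 * M22 + 2.0 * M12 * M12
C_dyn = np.mean(num) / np.mean(den)          # volume-averaged Lilly estimate of C
print("dynamic estimate of C:", C_dyn)
```

Averaging the numerator and denominator over the whole domain, as done here, is one of the stabilizing devices discussed below; a pointwise ratio fluctuates strongly and can become negative.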
Inherent in the procedure is the assumption that the coefficient formula_79 is invariant of scale. To determine formula_79 in this way, two additional steps were introduced in the original formulation. First, one assumed that even though formula_79 was in principle variable, the variation was sufficiently slow that it could be moved out of the filtering operation formula_91. Second, since formula_79 was a scalar, the Germano identity was contracted with a second-rank tensor (the rate-of-strain tensor was chosen) to convert it to a scalar equation from which formula_79 could be determined. Lilly found a less arbitrary and therefore more satisfactory approach for obtaining C from the tensor identity. He noted that the Germano identity required the satisfaction of nine equations at each point in space (of which only five are independent) for a single quantity formula_79. The problem of obtaining formula_79 was therefore over-determined. He proposed that formula_79 be determined using a least-squares fit, by minimizing the residuals. This results in formula_92 Here formula_93 and for brevity formula_94, formula_95 Initial attempts to implement the model in LES simulations proved unsuccessful. First, the computed coefficient was not at all "slowly varying" as assumed and varied as much as any other turbulent field. Second, the computed formula_79 could be positive as well as negative. The latter fact in itself should not be regarded as a shortcoming, as a priori tests using filtered DNS fields have shown that the local subgrid dissipation rate formula_96 in a turbulent field is almost as likely to be negative as it is positive, even though the integral over the fluid domain is always positive, representing a net dissipation of energy in the large scales. A slight preponderance of positive values, as opposed to strict positivity of the eddy viscosity, results in the observed net dissipation. This so-called "backscatter" of energy from small to large scales indeed corresponds to negative C values in the Smagorinsky model. Nevertheless, the Germano–Lilly formulation was found not to result in stable calculations. An ad hoc measure was adopted by averaging the numerator and denominator over homogeneous directions (where such directions exist in the flow) formula_97 When the averaging involved a large enough statistical sample that the computed formula_79 was positive (or at least only rarely negative), stable calculations were possible. Simply setting the negative values to zero (a procedure called "clipping"), with or without the averaging, also resulted in stable calculations. Meneveau proposed an averaging over Lagrangian fluid trajectories with an exponentially decaying "memory". This can be applied to problems lacking homogeneous directions and can be stable if the effective time over which the averaging is done is long enough, and yet not so long as to smooth out the spatial inhomogeneities of interest. Lilly's modification of the Germano method, followed by a statistical averaging or the synthetic removal of negative-viscosity regions, seems ad hoc, even if it could be made to "work". An alternative formulation of the least-squares minimization procedure, known as the "Dynamic Localization Model" (DLM), was suggested by Ghosal et al. In this approach, one first defines a quantity formula_98 with the tensors formula_16 and formula_86 replaced by the appropriate SGS model. This tensor then represents the amount by which the subgrid model fails to respect the Germano identity at each spatial location. 
In Lilly's approach, formula_79 is then pulled out of the hat operator formula_99, making formula_100 an algebraic function of formula_79, which is then determined by requiring that formula_101, considered as a function of C, have the least possible value. However, since the formula_79 thus obtained turns out to be just as variable as any other fluctuating quantity in turbulence, the original assumption of the constancy of formula_79 cannot be justified a posteriori. In the DLM approach, one avoids this inconsistency by not invoking the step of removing C from the test filtering operation. Instead, one defines a global error over the entire flow domain by the quantity formula_102 where the integral ranges over the whole fluid volume. This global error formula_103 is then a functional of the spatially varying function formula_104 (here the time instant, formula_105, is fixed and therefore appears just as a parameter) which is determined so as to minimize this functional. The solution to this variational problem is that formula_79 must satisfy a Fredholm integral equation of the second kind formula_106 where the functions formula_107 and formula_108 are defined in terms of the resolved fields formula_109 and are therefore known at each time step, and the integral ranges over the whole fluid domain. The integral equation is solved numerically by an iteration procedure, and convergence was found to be generally rapid if used with a pre-conditioning scheme. Even though this variational approach removes an inherent inconsistency in Lilly's approach, the formula_104 obtained from the integral equation still displayed the instability associated with negative viscosities. This can be resolved by insisting that formula_110 be minimized subject to the constraint formula_111. This leads to an equation for formula_79 that is nonlinear: formula_112 Here the suffix + indicates the "positive part of", that is, formula_113. Even though this superficially looks like "clipping", it is not an ad hoc scheme but a bona fide solution of the constrained variational problem. This DLM(+) model was found to be stable and yielded excellent results for forced and decaying isotropic turbulence, channel flows and a variety of other more complex geometries. If a flow happens to have homogeneous directions (say, the x and z directions), then one can introduce the ansatz formula_114. The variational approach then immediately yields Lilly's result with averaging over homogeneous directions, without any need for ad hoc modifications of a prior result. One shortcoming of the DLM(+) model was that it did not describe backscatter, which is known from analysis of DNS data to be a real effect. Two approaches were developed to address this. In one approach, due to Carati et al., a fluctuating force with an amplitude determined by the fluctuation-dissipation theorem is added, in analogy to Landau's theory of fluctuating hydrodynamics. In the second approach, one notes that any "backscattered" energy appears in the resolved scales only at the expense of energy in the subgrid scales. The DLM can be modified in a simple way to take this physical fact into account, so as to allow for backscatter while being inherently stable. This k-equation version of the DLM, DLM(k), replaces formula_115 in the Smagorinsky eddy viscosity model by formula_116 as an appropriate velocity scale. 
The procedure for determining formula_79 remains identical to the "unconstrained" version, except that the tensors are now formula_117 and formula_118, where the sub-test-scale kinetic energy K is related to the subgrid-scale kinetic energy k by formula_119 (which follows by taking the trace of the Germano identity). To determine k, we now use a transport equation formula_120 where formula_121 is the kinematic viscosity and formula_122 are positive coefficients representing kinetic energy dissipation and diffusion, respectively. These can be determined following the dynamic procedure with constrained minimization as in DLM(+). This approach, though more expensive to implement than the DLM(+), was found to be stable and resulted in good agreement with experimental data for a variety of flows tested. Furthermore, it is mathematically impossible for the DLM(k) to result in an unstable computation, as the sum of the large-scale and SGS energies is non-increasing by construction. Both of these approaches incorporating backscatter work well. They yield models that are slightly less dissipative, with somewhat improved performance over the DLM(+). The DLM(k) model additionally yields the subgrid kinetic energy, which may be a physical quantity of interest. These improvements are achieved at a somewhat increased cost in model implementation. The Dynamic Model originated at the 1990 Summer Program of the Center for Turbulence Research (CTR) at Stanford University. A series of "CTR-Tea" seminars celebrated the 30th anniversary of this important milestone in turbulence modeling. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\phi(\\boldsymbol{x},t)" }, { "math_id": 1, "text": "\n\\overline{\\phi(\\boldsymbol{x},t)} = \\displaystyle{\n\\int_{-\\infty}^{\\infty}} \\int_{-\\infty}^{\\infty} \\phi(\\boldsymbol{r},\\tau) G(\\boldsymbol{x}-\\boldsymbol{r},t - \\tau) d\\tau d \\boldsymbol{r}\n" }, { "math_id": 2, "text": "G" }, { "math_id": 3, "text": "\n\\overline{\\phi} = G \\star \\phi .\n" }, { "math_id": 4, "text": "\\Delta" }, { "math_id": 5, "text": "\\tau_{c}" }, { "math_id": 6, "text": "\\overline{\\phi}" }, { "math_id": 7, "text": "\\phi" }, { "math_id": 8, "text": "\n\\phi = \\bar{\\phi} + \\phi^{\\prime} .\n" }, { "math_id": 9, "text": "\\rho \\boldsymbol{u}(\\boldsymbol{x},t)" }, { "math_id": 10, "text": "\n\\frac{ \\partial \\bar{u_i} }{ \\partial x_i } = 0\n" }, { "math_id": 11, "text": "\n\\frac{ \\partial \\bar{u_i} }{ \\partial t }\n+ \\frac{ \\partial }{ \\partial x_j } \\left( \\overline{ u_i u_j } \\right) \n= - \\frac{1}{\\rho} \\frac{ \\partial \\overline{p} }{ \\partial x_i } \n+ \\nu \\frac{\\partial}{\\partial x_j} \\left( \\frac{ \\partial \\bar{u_i} }{ \\partial x_j } + \\frac{ \\partial \\bar{u_j} }{ \\partial x_i } \\right)\n= - \\frac{1}{\\rho} \\frac{ \\partial \\overline{p} }{ \\partial x_i } \n+ 2 \\nu \\frac{\\partial}{\\partial x_j} \\bar{S}_{ij},\n" }, { "math_id": 12, "text": "\\bar{p}" }, { "math_id": 13, "text": "\\bar{S}_{ij}" }, { "math_id": 14, "text": "\\overline{u_i u_j}" }, { "math_id": 15, "text": "\n\\overline{u_i u_j} = \\tau_{ij} + \\overline{u}_i \\overline{u}_j\n" }, { "math_id": 16, "text": "\\tau_{ij}" }, { "math_id": 17, "text": "\n\\frac{ \\partial \\bar{u_i} }{ \\partial t }\n+ \\frac{ \\partial }{ \\partial x_j } \\left( \\overline{u}_i \\overline{u}_j \\right) \n= - \\frac{1}{\\rho} \\frac{ \\partial \\overline{p} }{ \\partial x_i } \n+ 2 \\nu \\frac{\\partial}{\\partial x_j} \\bar{S}_{ij}\n- \\frac{ \\partial \\tau_{ij} }{ \\partial x_j }\n" }, { "math_id": 18, "text": "\\tau_{ij} = L_{ij} + C_{ij} + R_{ij}" }, { "math_id": 19, "text": "L_{ij} = \\overline{ \\bar{u}_{i} \\bar{u}_{j} } - \\bar{u}_{i} \\bar{u}_{j}" }, { "math_id": 20, "text": "R_{ij} = \\overline{u^{\\prime}_{i} u^{\\prime}_{j}}" }, { "math_id": 21, "text": "C_{ij} = \\overline{\\bar{u}_{i} u^{\\prime}_{j}} + \\overline{\\bar{u}_{j} u^{\\prime}_{i}}\n" }, { "math_id": 22, "text": "\n\\frac{ \\partial \\overline{\\phi} }{ \\partial t }\n+ \\frac{\\partial}{\\partial x_j} \\left( \\overline{u}_j \\overline{\\phi} \\right)\n= \\frac{\\partial \\overline{J_{\\phi}} }{\\partial x_j} \n+ \\frac{ \\partial q_j }{ \\partial x_j }\n" }, { "math_id": 23, "text": "J_{\\phi}" }, { "math_id": 24, "text": "q_j" }, { "math_id": 25, "text": "\\overline{J_{\\phi}}" }, { "math_id": 26, "text": "J_{\\phi} = D_{\\phi} \\frac{ \\partial \\phi }{ \\partial x_i }" }, { "math_id": 27, "text": "\nq_j = \\bar{\\phi} \\overline{u}_j - \\overline{\\phi u_j}\n" }, { "math_id": 28, "text": " \\frac{\\partial u_i}{\\partial x_i} = 0 " }, { "math_id": 29, "text": " \\frac{\\partial u_i}{\\partial t} + \\frac{\\partial u_iu_j}{\\partial x_j}\n= - \\frac{1}{\\rho} \\frac{\\partial p}{\\partial x_i}\n+ \\nu \\frac{\\partial^2 u_i}{\\partial x_j \\partial x_j}.\n" }, { "math_id": 30, "text": " \\overline{\\frac{\\partial u_i}{\\partial t}} + \\overline{\\frac{\\partial u_iu_j}{\\partial x_j}}\n= - \\overline{\\frac{1}{\\rho} \\frac{\\partial p}{\\partial x_i}}\n+ \\overline{\\nu \\frac{\\partial^2 u_i}{\\partial x_j \\partial x_j}}.\n" }, { "math_id": 31, "text": " \\frac{\\partial 
\\bar{u_i}}{\\partial t} + \\overline{\\frac{\\partial u_iu_j}{\\partial x_j}}\n= - \\frac{1}{\\rho} \\frac{\\partial \\bar{p}}{\\partial x_i}\n+ \\nu \\frac{\\partial^2 \\bar{u_i}}{\\partial x_j \\partial x_j}.\n" }, { "math_id": 32, "text": "\\bar{u_i}" }, { "math_id": 33, "text": "u_i" }, { "math_id": 34, "text": "\\overline{\\frac{\\partial u_iu_j}{\\partial x_j}}" }, { "math_id": 35, "text": " \\frac{\\partial \\bar{u_i}\\bar{u_j}}{\\partial x_j}" }, { "math_id": 36, "text": " \\frac{\\partial \\bar{u_i}}{\\partial t} + \\frac{\\partial \\bar{u_i}\\bar{u_j}}{\\partial x_j}\n= - \\frac{1}{\\rho} \\frac{\\partial \\bar{p}}{\\partial x_i}\n+ \\nu \\frac{\\partial^2 \\bar{u_i}}{\\partial x_j \\partial x_j}\n- \\left(\\overline{ \\frac{\\partial u_iu_j}{\\partial x_j}} - \\frac{\\partial \\bar{u_i}\\bar{u_j}}{\\partial x_j}\\right).\n" }, { "math_id": 37, "text": "\\tau_{ij} = \\overline{u_i u_j} - \\bar{u}_{i} \\bar{u}_{j}" }, { "math_id": 38, "text": " \\frac{\\partial \\bar{u_i}}{\\partial t} + \\bar{u_j} \\frac{\\partial \\bar{u_i}}{\\partial x_j}\n= - \\frac{1}{\\rho} \\frac{\\partial \\bar{p}}{\\partial x_i}\n+ \\nu \\frac{\\partial^2 \\bar{u_i}}{\\partial x_j \\partial x_j}\n- \\frac{\\partial\\tau_{ij}}{\\partial x_j}.\n" }, { "math_id": 39, "text": "\n\\frac{\\partial \\overline{\\rho}}{\\partial t} + \\frac{ \\partial \\overline{u_i \\rho} }{\\partial x_i} = 0\n" }, { "math_id": 40, "text": "\n\\tilde{\\phi} = \\frac{ \\overline{\\rho \\phi} }{ \\overline{\\rho} }\n" }, { "math_id": 41, "text": "\n\\frac{\\partial \\overline{\\rho}}{\\partial t} + \\frac{ \\partial \\overline{\\rho} \\tilde{u_i} }{ \\partial x_i } = 0.\n" }, { "math_id": 42, "text": "\n\\frac{ \\partial \\overline{\\rho} \\tilde{u_i} }{ \\partial t }\n+ \\frac{ \\partial \\overline{\\rho} \\tilde{u_i} \\tilde{u_j} }{ \\partial x_j }\n+ \\frac{ \\partial \\overline{p} }{ \\partial x_i }\n- \\frac{ \\partial \\tilde{\\sigma}_{ij} }{ \\partial x_j }\n= - \\frac{ \\partial \\overline{\\rho} \\tau_{ij}^{r} }{ \\partial x_j }\n+ \\frac{ \\partial }{ \\partial x_j } \\left( \\overline{\\sigma}_{ij} - \\tilde{\\sigma}_{ij} \\right)\n" }, { "math_id": 43, "text": "\\sigma_{ij}" }, { "math_id": 44, "text": "\n\\sigma_{ij} = 2 \\mu(T) S_{ij} - \\frac{2}{3} \\mu(T) \\delta_{ij} S_{kk}\n" }, { "math_id": 45, "text": "\\frac{ \\partial }{\\partial x_j} \\left( \\overline{\\sigma}_{ij} - \\tilde{\\sigma}_{ij} \\right)" }, { "math_id": 46, "text": "\\mu(T)" }, { "math_id": 47, "text": "\\tilde{T}" }, { "math_id": 48, "text": "\n\\tau_{ij}^{r} = \\widetilde{ u_i \\cdot u_j } - \\tilde{u_i} \\tilde{u_j}\n" }, { "math_id": 49, "text": "\\overline{\\rho \\phi \\psi}" }, { "math_id": 50, "text": "\\overline{\\rho} \\widetilde{\\phi \\psi}" }, { "math_id": 51, "text": "\\psi" }, { "math_id": 52, "text": "\\tilde{\\phi}" }, { "math_id": 53, "text": "\\tilde{\\psi}" }, { "math_id": 54, "text": "\\overline{\\rho} \\left( \\widetilde{\\phi \\psi} - \\tilde{\\phi} \\tilde{\\psi} \\right)" }, { "math_id": 55, "text": "L_{ij}" }, { "math_id": 56, "text": "C_{ij}" }, { "math_id": 57, "text": "R_{ij}" }, { "math_id": 58, "text": "\n\\overline{E} = \\frac{1}{2} \\overline{ u_i u_i }\n" }, { "math_id": 59, "text": "E_f" }, { "math_id": 60, "text": "\nE_f = \\frac{1}{2} \\overline{u_i} \\, \\overline{u_i}\n" }, { "math_id": 61, "text": "k_r" }, { "math_id": 62, "text": "\nk_r = \\frac{1}{2} \\overline{ u_i u_i } - \\frac{1}{2} \\overline{u_i} \\, \\overline{u_i} = \\frac{1}{2} \\tau_{ii}^{r}\n" }, { "math_id": 63, "text": "\\overline{E} = E_f + 
k_r" }, { "math_id": 64, "text": "\\overline{u_i}" }, { "math_id": 65, "text": "\n\\frac{\\partial E_f}{\\partial t} \n+ \\overline{u_j} \\frac{\\partial E_f}{\\partial x_j} \n+ \\frac{1}{\\rho} \\frac{\\partial \\overline{u_i} \\bar{p} }{ \\partial x_i }\n+ \\frac{\\partial \\overline{u_i} \\tau_{ij}^{r}}{\\partial x_j} \n- 2 \\nu \\frac{ \\partial \\overline{u_i} \\bar{S_{ij}} }{ \\partial x_j }\n= \n- \\epsilon_{f} \n- \\Pi\n" }, { "math_id": 66, "text": "\\epsilon_{f} = 2 \\nu \\bar{S_{ij}} \\bar{S_{ij}}" }, { "math_id": 67, "text": "\\Pi = -\\tau_{ij}^{r} \\bar{S_{ij}}" }, { "math_id": 68, "text": "\\Pi" }, { "math_id": 69, "text": "L" }, { "math_id": 70, "text": "\\Delta x" }, { "math_id": 71, "text": "\\overline{u}(\\boldsymbol{x})" }, { "math_id": 72, "text": "(\\Delta x)^4" }, { "math_id": 73, "text": "k_c" }, { "math_id": 74, "text": "\n\\tau_{ij}^r - \\frac{1}{3} \\tau_{kk} \\delta_{ij} = -2 \\nu_\\mathrm{t} \\bar{S}_{ij}\n" }, { "math_id": 75, "text": "\\nu_\\mathrm{t}" }, { "math_id": 76, "text": "\\bar{S}_{ij} = \\frac{1}{2} \\left( \\frac{\\partial \\bar{u}_i }{\\partial x_j} + \\frac{\\partial \\bar{u}_j}{ \\partial x_i} \\right)" }, { "math_id": 77, "text": "\\left[ \\nu_\\mathrm{t} \\right] = \\frac{\\mathrm{m^2}}{\\mathrm{s}}" }, { "math_id": 78, "text": "\\nu_\\mathrm{t} \n= C \\Delta^2\\sqrt{2\\bar{S}_{ij}\\bar{S}_{ij}} \n= C \\Delta^2 \\left| \\bar{S} \\right|\n" }, { "math_id": 79, "text": "C" }, { "math_id": 80, "text": "\\epsilon = \\Pi" }, { "math_id": 81, "text": "\\overline{f}" }, { "math_id": 82, "text": "\\hat{f}" }, { "math_id": 83, "text": "f" }, { "math_id": 84, "text": "\\tau_{ij} = \\overline{u_{i} u_{j}} - \\bar{u}_{i} \\bar{u}_{j}" }, { "math_id": 85, "text": "T_{ij} = \\widehat{\\overline{u_{i} u_{j}}} - \\hat{\\bar{u}}_{i} \\hat{\\bar{u}}_{j}" }, { "math_id": 86, "text": "T_{ij}" }, { "math_id": 87, "text": "\nL_{ij} = T_{ij} - \\hat{\\tau}_{ij}.\n" }, { "math_id": 88, "text": " L_{ij} = \\widehat{\\bar{u}_{i} \\bar{u}_{j}} - \\widehat{\\bar{u}_{i}} \\widehat{\\bar{u}_{j}}" }, { "math_id": 89, "text": "\\tau_{ij} - (\\tau_{kk}/3)\\delta_{ij} = - 2 C \\Delta^{2} |\\bar{S}_{ij}| \\bar{S}_{ij}" }, { "math_id": 90, "text": "T_{ij} - (T_{kk}/3)\\delta_{ij} = - 2 C \\hat{\\Delta}^{2} |\\hat{\\bar{S}}_{ij}| \\hat{\\bar{S}}_{ij}" }, { "math_id": 91, "text": " \\widehat{C (.)} = C \\widehat{(.)} " }, { "math_id": 92, "text": "\nC = \\frac{ L_{ij} m_{ij} }{ m_{kl} m_{kl} }.\n" }, { "math_id": 93, "text": "\nm_{ij} = \\alpha_{ij} - \\widehat{\\beta}_{ij} \n" }, { "math_id": 94, "text": " \\alpha_{ij} = - 2 \\hat{\\Delta}^{2} | \\hat{\\bar{S}} | \\hat{\\bar{S}}_{ij} " }, { "math_id": 95, "text": " \\beta_{ij} = - 2 \\Delta^2 | \\bar{S} | \\bar{S}_{ij} " }, { "math_id": 96, "text": " - \\tau_{ij} \\bar{S}_{ij}" }, { "math_id": 97, "text": "\nC = \\frac{ \n\\left\\langle L_{ij} m_{ij} \\right\\rangle\n}{ \n\\left\\langle m_{kl} m_{kl} \\right\\rangle\n}.\n" }, { "math_id": 98, "text": "\nE_{ij} = L_{ij} - T_{ij} + \\hat{\\tau}_{ij}\n" }, { "math_id": 99, "text": "\n \\widehat{C (.)} = C \\widehat{(.)}\n" }, { "math_id": 100, "text": "E_{ij}" }, { "math_id": 101, "text": "E_{ij} E_{ij}" }, { "math_id": 102, "text": "\n E [ C ] = \\int E_{ij} E_{ij} dV\n" }, { "math_id": 103, "text": "E[C(x,y,z,t)]" }, { "math_id": 104, "text": "C(x,y,z,t)" }, { "math_id": 105, "text": "t" }, { "math_id": 106, "text": " \nC (\\boldsymbol{x}) = f ( \\boldsymbol{x} ) + \n\\int K(\\boldsymbol{x}, \\boldsymbol{y}) C ( \\boldsymbol{y} ) d\\boldsymbol{y}\n" }, { "math_id": 107, "text": 
"K(\\boldsymbol{x}, \\boldsymbol{y})" }, { "math_id": 108, "text": "f ( \\boldsymbol{x} )" }, { "math_id": 109, "text": "L_{ij},\\alpha_{ij},\\beta_{ij}" }, { "math_id": 110, "text": "E[C]" }, { "math_id": 111, "text": "C(x,y,z,t) \\geq 0" }, { "math_id": 112, "text": " \nC (\\boldsymbol{x}) = \\left[ f ( \\boldsymbol{x} ) + \n\\int K(\\boldsymbol{x}, \\boldsymbol{y}) C ( \\boldsymbol{y} ) d\\boldsymbol{y} \\right]_{+}\n" }, { "math_id": 113, "text": " x_{+} = (x + |x|)/2 " }, { "math_id": 114, "text": " C = C(y,t) " }, { "math_id": 115, "text": " \\Delta | \\bar{S} | " }, { "math_id": 116, "text": " \\sqrt{k} " }, { "math_id": 117, "text": " \\alpha_{ij} = - 2 \\hat{\\Delta} \\sqrt{K} \\hat{\\bar{S}}_{ij} " }, { "math_id": 118, "text": " \\beta_{ij} = - 2 \\hat{\\Delta} \\sqrt{k} \\bar{S}_{ij} " }, { "math_id": 119, "text": " K = k + L_{ii}/2 " }, { "math_id": 120, "text": " \n\\frac{\\partial k}{\\partial t} + u_{j} \\frac{\\partial k}{\\partial x_{j}} = \n- \\tau_{ij} \\bar{S}_{ij} - \\frac{C_{*}}{\\Delta} k^{3/2} + \n\\frac{\\partial }{\\partial x_j} \\left( D \\Delta \\sqrt{k} \\frac{\\partial k }{\\partial x_j} \\right) \n+ \\nu \\frac{\\partial^{2} k }{\\partial x_j \\partial x_j}\n" }, { "math_id": 121, "text": "\\nu" }, { "math_id": 122, "text": "C_{*},D" } ]
https://en.wikipedia.org/wiki?curid=1178438
11785522
Weyl integral
In mathematics, the Weyl integral (named after Hermann Weyl) is an operator defined, as an example of fractional calculus, on functions "f" on the unit circle having integral 0 and a Fourier series. In other words, there is a Fourier series for "f" of the form formula_0 with "a"0 = 0. Then the Weyl integral operator of order "s" is defined on Fourier series by formula_1 where this is defined. Here "s" can take any real value, and for integer values "k" of "s" the series expansion is the expected "k"-th derivative, if "k" > 0, or the (−"k")th indefinite integral normalized by integration from "θ" = 0. The condition "a"0 = 0 here plays the obvious role of excluding the need to consider division by zero. The definition is due to Hermann Weyl (1917).
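A short numerical sketch of this definition (assuming NumPy, integer orders "s", and a sample function chosen only for illustration) multiplies the discrete Fourier coefficients by (in)^s and transforms back; s = 1 reproduces the ordinary derivative and s = −1 an antiderivative of the kind described above.

```python
import numpy as np

def weyl(f_values, s):
    """Apply the Weyl operator of order s to samples of a zero-mean periodic
    function on [0, 2*pi): multiply each Fourier coefficient by (i*n)**s,
    leaving the n = 0 coefficient at zero (a_0 = 0 is assumed)."""
    N = len(f_values)
    a = np.fft.fft(f_values)                 # unnormalized discrete Fourier coefficients
    n = np.fft.fftfreq(N, d=1.0 / N)         # integer frequencies n
    factor = np.zeros(N, dtype=complex)
    nz = n != 0
    factor[nz] = (1j * n[nz]) ** s           # (in)^s; the n = 0 term stays zero
    return np.fft.ifft(a * factor)

theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
f = np.cos(3 * theta)                        # zero-mean sample function

d1 = np.real(weyl(f, 1))                     # s = 1: ordinary derivative, -3 sin(3 theta)
i1 = np.real(weyl(f, -1))                    # s = -1: antiderivative, sin(3 theta)/3

print(np.max(np.abs(d1 + 3 * np.sin(3 * theta))))    # ~ 0
print(np.max(np.abs(i1 - np.sin(3 * theta) / 3)))    # ~ 0
```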
[ { "math_id": 0, "text": "\\sum_{n=-\\infty}^{\\infty} a_n e^{in \\theta}" }, { "math_id": 1, "text": "\\sum_{n=-\\infty}^{\\infty} (in)^s a_n e^{in\\theta}" } ]
https://en.wikipedia.org/wiki?curid=11785522
1178722
Paul Flory
American chemist (1910–1985) Paul John Flory (June 19, 1910 – September 9, 1985) was an American chemist and Nobel laureate who was known for his work in the field of polymers, or macromolecules. He was a pioneer in understanding the behavior of polymers in solution, and won the Nobel Prize in Chemistry in 1974 "for his fundamental achievements, both theoretical and experimental, in the physical chemistry of macromolecules". Biography. Personal life. Flory was born in Sterling, Illinois, on June 19, 1910, to Ezra Flory and Martha Brumbaugh. His father worked as a clergyman-educator, and his mother was a school teacher. His ancestors were German Huguenots, who traced their roots back to Alsace. He first gained an interest in science from Carl W. Holl, who was a chemistry professor at Manchester College, Indiana. In 1936, he married Emily Catherine Tabor. They had three children together: Susan Springer, Melinda Groom and Paul John Flory, Jr. His first position was at DuPont with Wallace Carothers. He was posthumously inducted into the Alpha Chi Sigma Hall of Fame in 2002. Flory died on September 9, 1985, following a heart attack. His wife Emily died in 2006, aged 94. Schooling. After graduating from Elgin High School in Elgin, Illinois, in 1927, Flory received a bachelor's degree from Manchester College (Indiana) (now Manchester University) in 1931 and a Ph.D. from the Ohio State University in 1934. He completed a year of master's study in organic chemistry under the supervision of Prof. Cecil E. Boord, before moving into physical chemistry. Flory's doctoral thesis was on the photochemistry of nitric oxide, supervised by Prof. Herrick L. Johnston. Work. In 1934 Flory joined the Central Department of DuPont and Company, working with Wallace H. Carothers. After Carothers' death in 1937, Flory worked for two years at the Basic Research Laboratory located at the University of Cincinnati. During World War II, there was a need for research to develop synthetic rubber, so Flory joined the Esso Laboratories of the Standard Oil Development Company. From 1943 to 1948 Flory worked in the polymer research team of the Goodyear Tire and Rubber Company. In 1948, Flory gave the George Fisher Baker lectures at Cornell University, and subsequently joined the university as a professor. In 1957, Flory and his family moved to Pittsburgh, Pennsylvania, where Flory was executive director of research at the Mellon Institute of Industrial Research. In 1961, he took up a professorship at Stanford University in the department of chemistry. After retirement, Flory remained active in the world of chemistry, running research labs at both Stanford and IBM. Research. Career and polymer science. Flory's earliest work in polymer science was in the area of polymerization kinetics at the DuPont Experimental Station. In condensation polymerization, he challenged the assumption that the reactivity of the end group decreased as the macromolecule grew, and by arguing that the reactivity was independent of the size, he was able to derive the result that the number of chains present decreased with size exponentially. In addition polymerization, he introduced the important concept of chain transfer to improve the kinetic equations and remove difficulties in understanding the polymer size distribution. In 1938, after Carothers' death, Flory moved to the Basic Science Research Laboratory at the University of Cincinnati. 
There he developed a mathematical theory for the polymerization of compounds with more than two functional groups and the theory of polymer networks or gels. This led to the Flory-Stockmayer theory of gelation, which was equivalent to percolation on the Bethe lattice and represents the first paper in the percolation field. In 1940 he joined the Linden, NJ laboratory of the Standard Oil Development Company where he developed a statistical mechanical theory for polymer mixtures. In 1943 he left to join the research laboratories of Goodyear as head of a group on polymer fundamentals. In the Spring of 1948 Peter Debye, then chairman of the chemistry department at Cornell University, invited Flory to give the annual Baker Lectures. He then was offered a position with the faculty in the Fall of the same year. He was initiated into the Tau chapter of Alpha Chi Sigma at Cornell in 1949. At Cornell he elaborated and refined his Baker Lectures into his magnum opus, "Principles of Polymer Chemistry" which was published in 1953 by Cornell University Press. This quickly became a standard text for all workers in the field of polymers, and is still widely used to this day. Flory introduced the concept of excluded volume, coined by Werner Kuhn in 1934, to polymers. Excluded volume refers to the idea that one part of a long chain molecule can not occupy space that is already occupied by another part of the same molecule. Excluded volume causes the ends of a polymer chain in a solution to be further apart (on average) than they would be were there no excluded volume. The recognition that excluded volume was an important factor in analyzing long-chain molecules in solutions provided an important conceptual breakthrough, and led to the explanation of several puzzling experimental results of the day. It also led to the concept of the theta point, the set of conditions at which an experiment can be conducted that causes the excluded volume effect to be neutralized. At the theta point, the chain reverts to ideal chain characteristics – the long-range interactions arising from excluded volume are eliminated, allowing the experimenter to more easily measure short-range features such as structural geometry, bond rotation potentials, and steric interactions between near-neighboring groups. Flory correctly identified that the chain dimension in polymer melts would have the size computed for a chain in ideal solution if excluded volume interactions were neutralized by experimenting at the theta point. Among his accomplishments are an original method for computing the probable size of a polymer in good solution, the Flory-Huggins Solution Theory, the extension of polymer physics concepts to the field of liquid crystals, and the derivation of the Flory exponent, which helps characterize the movement of polymers in solution. "see Flory convention for details." The Flory convention. In modeling the position vectors of atoms in macromolecules it is often necessary to convert from Cartesian coordinates (x,y,z) to generalized coordinates. The Flory convention for defining the variables involved is usually employed. For an example, a peptide bond can be described by the x,y,z positions of every atom in this bond or the Flory convention can be used. Here one must know the bond lengths formula_0, bond angles formula_1, and the dihedral angles formula_2. Applying a vector conversion from the Cartesian coordinates to the generalized coordinates will describe the same three-dimensional structure using the Flory convention. 
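As a concrete sketch of this conversion (assuming NumPy; the seed positions, bond lengths, angles and dihedrals are arbitrary illustrative values rather than parameters of any particular polymer), each new atom can be placed from the three preceding ones using exactly the generalized coordinates named above, a bond length formula_0, a bond angle formula_1 and a dihedral angle formula_2:

```python
import numpy as np

def place_atom(a, b, c, bond_length, bond_angle, dihedral):
    """Place atom d given three previously placed atoms a, b, c and the internal
    (generalized) coordinates: the bond length |c-d|, the bond angle b-c-d, and
    the dihedral angle a-b-c-d (angles in radians).  This is the usual
    internal-to-Cartesian construction; the dihedral sign convention used here
    is one common choice, not necessarily the one a given text adopts."""
    b1 = b - a
    b2 = c - b
    u2 = b2 / np.linalg.norm(b2)
    n = np.cross(b1, b2)
    n /= np.linalg.norm(n)
    m = np.cross(n, u2)
    local = np.array([-np.cos(bond_angle),
                      np.sin(bond_angle) * np.cos(dihedral),
                      np.sin(bond_angle) * np.sin(dihedral)])
    return c + bond_length * (local[0] * u2 + local[1] * m + local[2] * n)

# Grow a short chain from three seed atoms using constant internal coordinates,
# roughly what a (l_i, theta_i, phi_i) description of a backbone encodes.
atoms = [np.array([0.0, 0.0, 0.0]),
         np.array([1.5, 0.0, 0.0]),
         np.array([2.0, 1.4, 0.0])]
for i in range(5):
    new = place_atom(atoms[-3], atoms[-2], atoms[-1],
                     bond_length=1.5,
                     bond_angle=np.deg2rad(110.0),
                     dihedral=np.deg2rad(180.0 if i % 2 == 0 else 60.0))
    atoms.append(new)

print(np.round(np.array(atoms), 3))
```

Repeating the placement along the chain reconstructs the full set of Cartesian positions from the internal coordinates, which is the direction of the conversion used when generating chain conformations.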
Awards and honors. Flory was elected to the United States National Academy of Sciences in 1953 and the American Academy of Arts and Sciences in 1957. In 1968, he received the Charles Goodyear Medal. He also received the Priestley Medal and the Golden Plate Award of the American Academy of Achievement in 1974. He received the Carl-Dietrich-Harries-Medal for commendable scientific achievements in 1977. Flory received the Nobel Prize in Chemistry in 1974 "for his fundamental achievements both theoretical and experimental, in the physical chemistry of the macromolecules." Additionally in 1974 Flory was awarded the National Medal of Science in Physical Sciences. The medal was presented to him by President Gerald Ford. This award was given to him because of his research on the "formation and structure of polymeric substances". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "l_i" }, { "math_id": 1, "text": "\\theta_i" }, { "math_id": 2, "text": "\\phi_i" } ]
https://en.wikipedia.org/wiki?curid=1178722
11787643
Frettenheim
Frettenheim is an "Ortsgemeinde" – a municipality belonging to a "Verbandsgemeinde", a kind of collective municipality – in the Alzey-Worms district in Rhineland-Palatinate, Germany. Geography. Location. The municipality lies in Rhenish Hesse. It belongs to the "Verbandsgemeinde" of Wonnegau, whose seat is in Osthofen. Neighbouring municipalities. Frettenheim’s neighbours are Dittelsheim-Heßloch, Dorn-Dürkheim, Gau-Odernheim and Hillesheim. History. In 767, Frettenheim had its first documentary mention in the Lorsch codex. It then still bore the name "Frittenheim", as the founder who built his farm there was named "Frido". Only in 1402 did Frettenheim get its current name. In 1575, Frettenheim became part of Electoral Palatinate. Beginning in 1755, the Barons of Heddersdorf were tithe lords. In 1792, Frettenheim lay under French administration and belonged to the Department of Mont-Tonnerre (or Donnersberg in German). In 1816 came the transfer to the Grand Duchy of Hesse. Frettenheim became autonomous in 1868. Population development. In 1772, the population amounted to 90 persons. Since then, the figure has risen to almost fourfold. The municipality is among the smallest "Ortsgemeinden" in the district. Politics. Municipal council. The council is made up of 8 council members, who were elected by proportional representation at the municipal election held on 7 June 2009, and the honorary mayor as chairman. The municipal election held on 7 June 2009 yielded the following results: Mayors. Frettenheim’s "Ortsbürgermeister" (mayor) is Carsten Claß (independent), who was elected in May 2019. Previous mayors were: formula_0 Coat of arms. The municipality’s arms might be described thus: Per fess, sable a lion passant Or armed, langued and crowned gules, and lozengy argent and azure. Culture and sightseeing. Buildings. The little municipality has at its disposal two Baroque churches and a "Dorfgemeinschaftshaus" (village hall), which until 1967 was used as a school, having been given its current function in 1981. Economy and infrastructure. In 2006, the municipality counted seven full-time farmers. Transport. Beginning in 1897, the Osthofen–Gau-Odernheim railway line led through Frettenheim, over which there were links to Worms at Osthofen and to the Alzey–Bodenheim railway (known locally as the "Amiche") at Gau Odernheim; however, it has been out of service since the mid-1980s. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\vdots" } ]
https://en.wikipedia.org/wiki?curid=11787643
11790568
Percolation threshold
Threshold of percolation theory models The percolation threshold is a mathematical concept in percolation theory that describes the formation of long-range connectivity in random systems. Below the threshold a giant connected component does not exist, while above it there exists a giant component of the order of the system size. In engineering and coffee making, percolation represents the flow of fluids through porous media, but in the mathematics and physics worlds it generally refers to simplified lattice models of random systems or networks (graphs), and the nature of the connectivity in them. The percolation threshold is the critical value of the occupation probability "p", or more generally a critical surface for a group of parameters "p"1, "p"2, ..., such that infinite connectivity ("percolation") first occurs. Percolation models. The most common percolation model is to take a regular lattice, like a square lattice, and make it into a random network by randomly "occupying" sites (vertices) or bonds (edges) with a statistically independent probability "p". At a critical threshold "pc", large clusters and long-range connectivity first appear, and this is called the percolation threshold. Depending on the method for obtaining the random network, one distinguishes between the site percolation threshold and the bond percolation threshold. More general systems have several probabilities "p"1, "p"2, etc., and the transition is characterized by a "critical surface" or "manifold". One can also consider continuum systems, such as overlapping disks and spheres placed randomly, or the negative space ("Swiss-cheese" models). To understand the threshold, one can consider a quantity such as the probability that there is a continuous path from one boundary to another along occupied sites or bonds—that is, within a single cluster. For example, one can consider a square system, and ask for the probability "P" that there is a path from the top boundary to the bottom boundary. As a function of the occupation probability "p", one finds a sigmoidal plot that goes from "P=0" at "p=0" to "P=1" at "p=1". The larger the square is compared to the lattice spacing, the sharper the transition will be. When the system size goes to infinity, "P(p)" will be a step function at the threshold value "pc". For finite large systems, "P(pc)" is a constant whose value depends upon the shape of the system; for the square system discussed above, "P(pc)" = 1⁄2 exactly for any lattice by a simple symmetry argument. There are other signatures of the critical threshold. For example, the size distribution (number of clusters of size "s") drops off as a power law for large "s" at the threshold, "ns(pc) ~ s−τ", where τ is a dimension-dependent percolation critical exponent. For an infinite system, the critical threshold corresponds to the first point (as "p" increases) where the size of the clusters becomes infinite. In the systems described so far, it has been assumed that the occupation of a site or bond is completely random—this is the so-called "Bernoulli percolation." For a continuum system, random occupancy corresponds to the points being placed by a Poisson process. Further variations involve correlated percolation, such as percolation clusters related to Ising and Potts models of ferromagnets, in which the bonds are put down by the Fortuin–Kasteleyn method. 
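The crossing-probability picture described above translates directly into a small Monte Carlo experiment (assuming NumPy and SciPy; the lattice size, occupation probabilities and trial counts are arbitrary illustrative choices): occupy the sites of a square lattice independently with probability p, label the nearest-neighbor clusters, and record how often some cluster connects the top row to the bottom row. The estimated crossing probability rises from near 0 to near 1 over an increasingly narrow window around the site threshold p_c ≈ 0.5927 as the lattice grows.

```python
import numpy as np
from scipy.ndimage import label

def spans(occupied):
    """True if an occupied nearest-neighbor cluster touches both the top and
    bottom rows of the grid."""
    labels, _ = label(occupied)              # default structure = 4-connectivity
    top = set(labels[0, :][occupied[0, :]])
    bottom = set(labels[-1, :][occupied[-1, :]])
    return len(top & bottom) > 0

def crossing_probability(L, p, trials, rng):
    hits = 0
    for _ in range(trials):
        occupied = rng.random((L, L)) < p
        hits += spans(occupied)
    return hits / trials

rng = np.random.default_rng(1)
for p in [0.55, 0.58, 0.5927, 0.61, 0.64]:
    P = crossing_probability(L=64, p=p, trials=200, rng=rng)
    print(f"p = {p:.4f}  P(span) ~ {P:.2f}")   # rises from ~0 to ~1 through p_c ~ 0.5927
```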
In "bootstrap" or "k-sat" percolation, sites and/or bonds are first occupied and then successively culled from a system if a site does not have at least "k" neighbors. Another important model of percolation, in a different universality class altogether, is directed percolation, where connectivity along a bond depends upon the direction of the flow. Over the last several decades, a tremendous amount of work has gone into finding exact and approximate values of the percolation thresholds for a variety of these systems. Exact thresholds are only known for certain two-dimensional lattices that can be broken up into a self-dual array, such that under a triangle-triangle transformation, the system remains the same. Studies using numerical methods have led to numerous improvements in algorithms and several theoretical discoveries. Simple duality in two dimensions implies that all fully triangulated lattices (e.g., the triangular, union jack, cross dual, martini dual and asanoha or 3-12 dual, and the Delaunay triangulation) have site thresholds of 1⁄2, and self-dual lattices (square, martini-B) have bond thresholds of 1⁄2. The notation such as (4,82) comes from Grünbaum and Shephard, and indicates that around a given vertex, going in the clockwise direction, one encounters first a square and then two octagons. Besides the eleven Archimedean lattices composed of regular polygons with every site equivalent, many other more complicated lattices with sites of different classes have been studied. Error bars in the last digit or digits are shown by numbers in parentheses. Thus, 0.729724(3) signifies 0.729724 ± 0.000003, and 0.74042195(80) signifies 0.74042195 ± 0.00000080. The error bars variously represent one or two standard deviations in net error (including statistical and expected systematic error), or an empirical confidence interval, depending upon the source. Percolation on networks. For a random tree-like network (i.e., a connected network with no cycle) without degree-degree correlation, it can be shown that such a network can have a giant component, and the percolation threshold (transmission probability) is given by formula_0, where formula_1 is the generating function corresponding to the excess degree distribution, formula_2 is the average degree of the network and formula_3 is the second moment of the degree distribution. So, for example, for an ER network, since the degree distribution is a Poisson distribution, the threshold is at formula_4. In networks with low clustering, formula_5, the critical point gets scaled by formula_6 such that: formula_7 This indicates that for a given degree distribution, the clustering leads to a larger percolation threshold, mainly because for a fixed number of links, the clustering structure reinforces the core of the network at the price of diluting the global connections. For networks with high clustering, strong clustering could induce the core–periphery structure, in which the core and periphery might percolate at different critical points, and the above approximate treatment is not applicable. Percolation in 2D. Thresholds on Archimedean lattices. Note: sometimes "hexagonal" is used in place of honeycomb, although in some contexts a triangular lattice is also called a hexagonal lattice. "z" = bulk coordination number. 2D lattices with extended and complex neighborhoods. In this section, sq-1,2,3 corresponds to square (NN+2NN+3NN), etc. 
Equivalent to square-2N+3N+4N, sq(1,2,3). tri = triangular, hc = honeycomb. Here NN = nearest neighbor, 2NN = second nearest neighbor (or next nearest neighbor), 3NN = third nearest neighbor (or next-next nearest neighbor), etc. These are also called 2N, 3N, 4N respectively in some papers. 2D distorted lattices. Here, one distorts a regular lattice of unit spacing by moving vertices uniformly within the box formula_13, and considers percolation when sites are within Euclidean distance formula_14 of each other. Overlapping shapes on 2D lattices. Site threshold is number of overlapping objects per lattice site. "k" is the length (net area). Overlapping squares are shown in the complex neighborhood section. Here z is the coordination number to k-mers of either orientation, with formula_15 for formula_16 sticks. The coverage is calculated from formula_8 by formula_17 for formula_16 sticks, because there are formula_18 sites where a stick will cause an overlap with a given site. For aligned formula_16 sticks: formula_19 AB percolation and colored percolation in 2D. In AB percolation, a formula_20 is the proportion of A sites among B sites, and bonds are drawn between sites of opposite species. It is also called antipercolation. In colored percolation, occupied sites are assigned one of formula_21 colors with equal probability, and connection is made along bonds between neighbors of different colors. Site-bond percolation in 2D. Site bond percolation. Here formula_22 is the site occupation probability and formula_23 is the bond occupation probability, and connectivity is made only if both the sites and bonds along a path are occupied. The criticality condition becomes a curve formula_24 = 0, and some specific critical pairs formula_25 are listed below. Square lattice: Honeycomb (hexagonal) lattice: Kagome lattice: Approximate formula for site-bond percolation on a honeycomb lattice Archimedean duals (Laves lattices). Laves lattices are the duals to the Archimedean lattices. Drawings from. See also Uniform tilings. 2-uniform lattices. Top 3 lattices: #13 #12 #36 Bottom 3 lattices: #34 #37 #11 Top 2 lattices: #35 #30 Bottom 2 lattices: #41 #42 Top 4 lattices: #22 #23 #21 #20 Bottom 3 lattices: #16 #17 #15 Top 2 lattices: #31 #32 Bottom lattice: #33 Inhomogeneous 2-uniform lattice. This figure shows something similar to the 2-uniform lattice #37, except the polygons are not all regular—there is a rectangle in the place of the two squares—and the size of the polygons is changed. This lattice is in the isoradial representation in which each polygon is inscribed in a circle of unit radius. The two squares in the 2-uniform lattice must now be represented as a single rectangle in order to satisfy the isoradial condition. The lattice is shown by black edges, and the dual lattice by red dashed lines. The green circles show the isoradial constraint on both the original and dual lattices. The yellow polygons highlight the three types of polygons on the lattice, and the pink polygons highlight the two types of polygons on the dual lattice. The lattice has vertex types (1⁄2)(33,42) + (1⁄2)(3,4,6,4), while the dual lattice has vertex types (1⁄15)(46)+(6⁄15)(42,52)+(2⁄15)(53)+(6⁄15)(52,4). 
The critical point is where the longer bonds (on both the lattice and dual lattice) have occupation probability p = 2 sin (π/18) = 0.347296... which is the bond percolation threshold on a triangular lattice, and the shorter bonds have occupation probability 1 − 2 sin(π/18) = 0.652703..., which is the bond percolation on a hexagonal lattice. These results follow from the isoradial condition but also follow from applying the star-triangle transformation to certain stars on the honeycomb lattice. Finally, it can be generalized to having three different probabilities in the three different directions, p1, p2 and "p"3 for the long bonds, and 1 − "p"1, 1 − "p"2, and 1 − "p"3 for the short bonds, where "p"1, "p"2 and "p"3 satisfy the critical surface for the inhomogeneous triangular lattice. Thresholds on 2D bow-tie and martini lattices. To the left, center, and right are: the martini lattice, the martini-A lattice, the martini-B lattice. Below: the martini covering/medial lattice, same as the 2×2, 1×1 subnet for kagome-type lattices (removed). Some other examples of generalized bow-tie lattices (a-d) and the duals of the lattices (e-h): Thresholds on subnet lattices. The 2 x 2, 3 x 3, and 4 x 4 subnet kagome lattices. The 2 × 2 subnet is also known as the "triangular kagome" lattice. Thresholds of random sequentially adsorbed objects. The threshold gives the fraction of sites occupied by the objects when site percolation first takes place (not at full jamming). For longer k-mers see Ref. Thresholds of full dimer coverings of two dimensional lattices. Here, we are dealing with networks that are obtained by covering a lattice with dimers, and then consider bond percolation on the remaining bonds. In discrete mathematics, this problem is known as the 'perfect matching' or the 'dimer covering' problem. Thresholds of polymers (random walks) on a square lattice. System is composed of ordinary (non-avoiding) random walks of length l on the square lattice. Thresholds for 2D continuum models. For disks, formula_27 equals the critical number of disks per unit area, measured in units of the diameter formula_28, where formula_29 is the number of objects and formula_30 is the system size For disks, formula_31 equals critical total disk area. formula_32 gives the number of disk centers within the circle of influence (radius 2 r). formula_33 is the critical disk radius. formula_34 for ellipses of semi-major and semi-minor axes of a and b, respectively. Aspect ratio formula_35 with formula_36. formula_37 for rectangles of dimensions formula_26 and formula_38. Aspect ratio formula_39 with formula_40. formula_41 for power-law distributed disks with formula_42, formula_43. formula_44 equals critical area fraction. For disks, Ref. use formula_45 where formula_46 is the density of disks of radius formula_47. formula_48 equals number of objects of maximum length formula_49 per unit area. For ellipses, formula_50 For void percolation, formula_51 is the critical void fraction. For more ellipse values, see For more rectangle values, see Both ellipses and rectangles belong to the superellipses, with formula_52. For more percolation values of superellipses, see. For the monodisperse particle systems, the percolation thresholds of concave-shaped superdisks are obtained as seen in For binary dispersions of disks, see Thresholds on 2D correlated systems. Assuming power-law correlations formula_53 Thresholds on slabs. "h" is the thickness of the slab, "h" × ∞ × ∞. Boundary conditions (b.c.) 
refer to the top and bottom planes of the slab. Percolation in 3D. Filling factor = fraction of space filled by touching spheres at every lattice site (for systems with uniform bond length only). Also called Atomic Packing Factor. Filling fraction (or Critical Filling Fraction) = filling factor * pc(site). NN = nearest neighbor, 2NN = next-nearest neighbor, 3NN = next-next-nearest neighbor, etc. kxkxk cubes are cubes of occupied sites on a lattice, and are equivalent to extended-range percolation of a cube of length (2k+1), with edges and corners removed, with z = (2k+1)3-12(2k-1)-9 (center site not counted in z). Question: the bond thresholds for the hcp and fcc lattice agree within the small statistical error. Are they identical, and if not, how far apart are they? Which threshold is expected to be bigger? Similarly for the ice and diamond lattices. See 3D distorted lattices. Here, one distorts a regular lattice of unit spacing by moving vertices uniformly within the cube formula_54, and considers percolation when sites are within Euclidean distance formula_14 of each other. Overlapping shapes on 3D lattices. Site threshold is the number of overlapping objects per lattice site. The coverage φc is the net fraction of sites covered, and "v" is the volume (number of cubes). Overlapping cubes are given in the section on thresholds of 3D lattices. Here z is the coordination number to k-mers of either orientation, with formula_55 The coverage is calculated from formula_8 by formula_56 for sticks, and formula_57 for plaquettes. Thresholds for 3D continuum models. All overlapping except for jammed spheres and polymer matrix. formula_58 is the total volume (for spheres), where N is the number of objects and L is the system size. formula_44 is the critical volume fraction, valid for overlapping randomly placed objects. For disks and plates, these are effective volumes and volume fractions. For void ("Swiss-Cheese" model), formula_51 is the critical void fraction. For more results on void percolation around ellipsoids and elliptical plates, see. For more ellipsoid percolation values see. For spherocylinders, H/D is the ratio of the height to the diameter of the cylinder, which is then capped by hemispheres. Additional values are given in. For superballs, m is the deformation parameter, the percolation values are given in., In addition, the thresholds of concave-shaped superballs are also determined in For cuboid-like particles (superellipsoids), m is the deformation parameter, more percolation values are given in. Void percolation in 3D. Void percolation refers to percolation in the space around overlapping objects. Here formula_59 refers to the fraction of the space occupied by the voids (not of the particles) at the critical point, and is related to formula_60 by formula_51. formula_60 is defined as in the continuum percolation section above. Thresholds for other 3D models. formula_61 In drilling percolation, the site threshold formula_8 represents the fraction of columns in each direction that have not been removed, and formula_62. For the 1d drilling, we have formula_63(columns) formula_8(sites). † In tube percolation, the bond threshold represents the value of the parameter formula_64 such that the probability of putting a bond between neighboring vertical tube segments is formula_65, where formula_66 is the overlap height of two adjacent tube segments. Thresholds in different dimensional spaces. Continuum models in higher dimensions. formula_67 In 4d, formula_68. In 5d, formula_69. 
In 6d, formula_70. formula_44 is the critical volume fraction, valid for overlapping objects. For void models, formula_51 is the critical void fraction, and formula_71 is the total volume of the overlapping objects Thresholds on hypercubic lattices. For thresholds on high dimensional hypercubic lattices, we have the asymptotic series expansions formula_72 formula_73 where formula_74. For 13-dimensional bond percolation, for example, the error with the measured value is less than 10−6, and these formulas can be useful for higher-dimensional systems. Thresholds in one-dimensional long-range percolation. In a one-dimensional chain we establish bonds between distinct sites formula_75 and formula_76 with probability formula_77 decaying as a power-law with an exponent formula_78. Percolation occurs at a critical value formula_79 for formula_80. The numerically determined percolation thresholds are given by: Thresholds on hyperbolic, hierarchical, and tree lattices. In these lattices there may be two percolation thresholds: the lower threshold is the probability above which infinite clusters appear, and the upper is the probability above which there is a unique infinite cluster. Note: {m,n} is the Schläfli symbol, signifying a hyperbolic lattice in which n regular m-gons meet at every vertex For bond percolation on {P,Q}, we have by duality formula_81. For site percolation, formula_82 because of the self-matching of triangulated lattices. Cayley tree (Bethe lattice) with coordination number formula_83 Thresholds for directed percolation. nn = nearest neighbors. For a ("d" + 1)-dimensional hypercubic system, the hypercube is in d dimensions and the time direction points to the 2D nearest neighbors. Site-Bond Directed Percolation. p_b = bond threshold p_s = site threshold Site-bond percolation is equivalent to having different probabilities of connections: P_0 = probability that no sites are connected P_2 = probability that exactly one descendant is connected to the upper vertex (two connected together) P_3 = probability that both descendants are connected to the original vertex (all three connected together) Formulas: P_0 = (1-p_s) + p_s(1-p_b)^2 P_2 = p_s p_b (1-p_b) P_3 = p_s p_b^2 P_0 + 2P_2 + P_3 = 1 Exact critical manifolds of inhomogeneous systems. Inhomogeneous triangular lattice bond percolation formula_84 Inhomogeneous honeycomb lattice bond percolation = kagome lattice site percolation formula_85 Inhomogeneous (3,12^2) lattice, site percolation formula_86 or formula_87 Inhomogeneous union-jack lattice, site percolation with probabilities formula_88 formula_89 Inhomogeneous martini lattice, bond percolation formula_90 Inhomogeneous martini lattice, site percolation. "r" = site in the star formula_91 Inhomogeneous martini-A (3–7) lattice, bond percolation. Left side (top of "A" to bottom): formula_92. Right side: formula_93. Cross bond: formula_94. formula_95 Inhomogeneous martini-B (3–5) lattice, bond percolation Inhomogeneous martini lattice with outside enclosing triangle of bonds, probabilities formula_96 from inside to outside, bond percolation formula_97 Inhomogeneous checkerboard lattice, bond percolation formula_98 Inhomogeneous bow-tie lattice, bond percolation formula_99 where formula_100 are the four bonds around the square and formula_101 is the diagonal bond connecting the vertex between bonds formula_102 and formula_103. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
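As a rough illustration of how the asymptotic expansions quoted above can be used, the following Python snippet (a sketch added here, not part of the source tables) evaluates the series formula_72 and formula_73 with formula_74, i.e. with sigma = 2d - 1, for a few dimensions. The coefficients are taken directly from the expansions above; the truncation error grows as d decreases.

```python
# Evaluation of the asymptotic series quoted above for site and bond
# percolation thresholds on d-dimensional hypercubic lattices, with
# sigma = 2d - 1.  The coefficients are those of formula_72 and formula_73.
from fractions import Fraction

def pc_site(d):
    s = Fraction(1, 2 * d - 1)
    coeffs = [1, Fraction(3, 2), Fraction(15, 4), Fraction(83, 4),
              Fraction(6577, 48), Fraction(119077, 96)]
    return float(sum(c * s ** (k + 1) for k, c in enumerate(coeffs)))

def pc_bond(d):
    s = Fraction(1, 2 * d - 1)
    coeffs = [1, 0, Fraction(5, 2), Fraction(15, 2), 57, Fraction(4855, 12)]
    return float(sum(c * s ** (k + 1) for k, c in enumerate(coeffs)))

for d in (7, 10, 13):
    print(f"d = {d:2d}   site ~ {pc_site(d):.6f}   bond ~ {pc_bond(d):.6f}")
```

The continuum thresholds discussed above (for example the critical filling factor for overlapping disks, with the area fraction given by formula_44) are obtained numerically. The following Monte Carlo sketch is an illustration added here under the stated assumptions, not a reproduction of any published method: it places N disks of radius r in an L × L box, links disks whose centers are closer than 2r, and tests for a cluster spanning from the left edge to the right edge. The spanning probability increases rapidly around the filling factor of roughly 1.128 commonly quoted for disks in the literature.

```python
# Monte Carlo sketch of continuum percolation of overlapping disks: place N
# disks of radius r uniformly in an L x L box (eta = pi r^2 N / L^2), connect
# disks whose centers are closer than 2r, and test for a left-to-right
# spanning cluster.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def spans(centers, r, L):
    pairs = np.array(list(cKDTree(centers).query_pairs(2.0 * r)))
    if pairs.size == 0:
        return False
    n = len(centers)
    graph = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                       shape=(n, n))
    _, labels = connected_components(graph, directed=False)
    left = labels[centers[:, 0] < r]           # disks overlapping the left edge
    right = labels[centers[:, 0] > L - r]      # disks overlapping the right edge
    return bool(np.intersect1d(left, right).size)

rng = np.random.default_rng(0)
L, r = 20.0, 0.5
for eta in (0.9, 1.0, 1.1, 1.2, 1.3):
    N = int(eta * L * L / (np.pi * r * r))     # eta = pi r^2 N / L^2
    hits = sum(spans(rng.uniform(0, L, (N, 2)), r, L) for _ in range(20))
    print(f"eta = {eta:.2f}   spanning fraction = {hits / 20:.2f}")
```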
[ { "math_id": 0, "text": "p_c = \\frac{1}{g_1'(1)} = \\frac{\\langle k \\rangle}{\\langle k^2 \\rangle - \\langle k \\rangle}" }, { "math_id": 1, "text": "g_1(z)" }, { "math_id": 2, "text": "{\\langle k \\rangle}" }, { "math_id": 3, "text": "{\\langle k^2 \\rangle}" }, { "math_id": 4, "text": "p_c = {\\langle k \\rangle}^{-1}" }, { "math_id": 5, "text": "\n0 < C \\ll 1\n" }, { "math_id": 6, "text": "\n(1-C)^{-1}\n" }, { "math_id": 7, "text": "p_c = \\frac{1}{1-C}\\frac{1}{g_1'(1)}." }, { "math_id": 8, "text": "p_c" }, { "math_id": 9, "text": " \\phi_c " }, { "math_id": 10, "text": "1-(1-\\phi_c)^{1/4} = 0.196724(10)\\ldots" }, { "math_id": 11, "text": "\\phi_c= 0.58365(2)" }, { "math_id": 12, "text": "p_c=1-(1-\\phi_c)^{1/9} = 0.095765(5)\\ldots" }, { "math_id": 13, "text": "(x-\\alpha,x+\\alpha),(y-\\alpha,y+\\alpha)" }, { "math_id": 14, "text": "d" }, { "math_id": 15, "text": " z = k^2+10k-2" }, { "math_id": 16, "text": "1 \\times k" }, { "math_id": 17, "text": "\\phi_c = 1-(1-p_c)^{2 k} " }, { "math_id": 18, "text": "2k" }, { "math_id": 19, "text": "\\phi_c = 1-(1-p_c)^{k} " }, { "math_id": 20, "text": "p_\\mathrm{site}" }, { "math_id": 21, "text": "n" }, { "math_id": 22, "text": "p_s" }, { "math_id": 23, "text": "p_b" }, { "math_id": 24, "text": " f(p_{s},p_{b}) " }, { "math_id": 25, "text": "(p_{s},p_{b}) " }, { "math_id": 26, "text": "\\ell" }, { "math_id": 27, "text": "n_c = 4 r^2 N / L^2" }, { "math_id": 28, "text": " 2r " }, { "math_id": 29, "text": "N " }, { "math_id": 30, "text": "L" }, { "math_id": 31, "text": "\\eta_c = \\pi r^2 N / L^2 = (\\pi/4) n_c " }, { "math_id": 32, "text": "4 \\eta_c " }, { "math_id": 33, "text": "r_c = L \\sqrt{\\frac{\\eta_c}{\\pi N}} = \\frac{L}{2} \\sqrt{\\frac{n_c}{N}} " }, { "math_id": 34, "text": "\\eta_c = \\pi a b N / L^2" }, { "math_id": 35, "text": "\\epsilon = a / b " }, { "math_id": 36, "text": "a > b" }, { "math_id": 37, "text": "\\eta_c = \\ell m N / L^2" }, { "math_id": 38, "text": "m" }, { "math_id": 39, "text": "\\epsilon = \\ell/m" }, { "math_id": 40, "text": "\\ell > m" }, { "math_id": 41, "text": "\\eta_c = \\pi x N / (4 L^2 (x-2))" }, { "math_id": 42, "text": "\\hbox{Prob(radius}\\ge R) = R^{-x}" }, { "math_id": 43, "text": " R \\ge 1 " }, { "math_id": 44, "text": "\\phi_c = 1 - e^{-\\eta_c} " }, { "math_id": 45, "text": "\\phi_c = 1 - e^{-\\pi x / 2} " }, { "math_id": 46, "text": "x" }, { "math_id": 47, "text": " 1/\\sqrt{2} " }, { "math_id": 48, "text": "n_c = \\ell^2 N / L^2" }, { "math_id": 49, "text": "\\ell = 2 a " }, { "math_id": 50, "text": "n_c = (4 \\epsilon / \\pi)\\eta_c " }, { "math_id": 51, "text": "\\phi_c = e^{-\\eta_c} " }, { "math_id": 52, "text": "|x/a|^{2m}+|y/b|^{2m}=1 " }, { "math_id": 53, "text": " C(r) \\sim |r|^{-\\alpha} " }, { "math_id": 54, "text": "(x-\\alpha,x+\\alpha),(y-\\alpha,y+\\alpha),(z-\\alpha,z+\\alpha)" }, { "math_id": 55, "text": " z=6k^2+18k-4 " }, { "math_id": 56, "text": "\\phi_c = 1-(1-p_c)^{3 k} " }, { "math_id": 57, "text": "\\phi_c = 1-(1-p_c)^{3 k^2} " }, { "math_id": 58, "text": "\\eta_c = (4/3) \\pi r^3 N / L^3" }, { "math_id": 59, "text": "\\phi_c" }, { "math_id": 60, "text": "\\eta_c" }, { "math_id": 61, "text": "^*" }, { "math_id": 62, "text": "\\phi_c=p_c^3" }, { "math_id": 63, "text": "\\phi_c = p_c" }, { "math_id": 64, "text": "\\mu" }, { "math_id": 65, "text": "1-e^{-\\mu h_i}" }, { "math_id": 66, "text": " h_i " }, { "math_id": 67, "text": "\\eta_c = (\\pi^{d/2}/ \\Gamma[d/2 + 1]) r^d N / L^d." 
}, { "math_id": 68, "text": "\\eta_c = (1/2) \\pi^2 r^4 N / L^4" }, { "math_id": 69, "text": "\\eta_c = (8/15) \\pi^2 r^5 N / L^5" }, { "math_id": 70, "text": "\\eta_c = (1/6) \\pi^3 r^6 N / L^6" }, { "math_id": 71, "text": "\\eta_c " }, { "math_id": 72, "text": "p_c^\\mathrm{site}(d)=\\sigma^{-1}+\\frac{3}{2}\\sigma^{-2}+\\frac{15}{4}\\sigma^{-3}+\\frac{83}{4}\\sigma^{-4}+\\frac{6577}{48}\\sigma^{-5}+\\frac{119077}{96}\\sigma^{-6}+{\\mathcal O}(\\sigma^{-7})" }, { "math_id": 73, "text": "p_c^\\mathrm{bond}(d)=\\sigma^{-1}+\\frac{5}{2}\\sigma^{-3}+\\frac{15}{2}\\sigma^{-4}+57\\sigma^{-5}+\\frac{4855}{12}\\sigma^{-6}+{\\mathcal O}(\\sigma^{-7})" }, { "math_id": 74, "text": " \\sigma = 2 d - 1 " }, { "math_id": 75, "text": "i" }, { "math_id": 76, "text": "j" }, { "math_id": 77, "text": "p=\\frac{C}{|i-j|^{1+\\sigma}}" }, { "math_id": 78, "text": "\\sigma>0" }, { "math_id": 79, "text": "C_c<1" }, { "math_id": 80, "text": "\\sigma<1" }, { "math_id": 81, "text": "p_{c,\\ell}(P,Q) + p_{c,u}(Q,P) = 1" }, { "math_id": 82, "text": "p_{c,\\ell}(3,Q) + p_{c,u}(3,Q) = 1" }, { "math_id": 83, "text": "z : p_c = 1 / ( z - 1 )" }, { "math_id": 84, "text": "\n1 - p_1 - p_2 - p_3 + p_1 p_2 p_3 = 0\n" }, { "math_id": 85, "text": "\n1 - p_1 p_2 - p_1 p_3 - p_2 p_3+ p_1 p_2 p_3 = 0\n" }, { "math_id": 86, "text": "\n1 - 3(s_1s_2)^2 + (s_1s_2)^3 = 0,\n" }, { "math_id": 87, "text": "\ns_1 s_2 = 1 - 2 \\sin(\\pi/18)\n" }, { "math_id": 88, "text": " p_1, p_2, p_3, p_4" }, { "math_id": 89, "text": "\np_3 = 1 - p_1; \\qquad p_4 = 1 - p_2\n" }, { "math_id": 90, "text": "\n1 - (p_1 p_2 r_3 + p_2 p_3 r_1 + p_1 p_3 r_2)\n - (p_1 p_2 r_1 r_2 + p_1 p_3 r_1 r_3 + p_2 p_3 r_2 r_3)\n + p_1 p_2 p_3 ( r_1 r_2 + r_1 r_3 + r_2 r_3)\n + r_1 r_2 r_3 (p_1 p_2 + p_1 p_3 + p_2 p_3)\n - 2 p_1 p_2 p_3 r_1 r_2 r_3 = 0 \n" }, { "math_id": 91, "text": "\n1 - r (p_1 p_2 + p_1 p_3 + p_2 p_3 - p_1 p_2 p_3) = 0 \n" }, { "math_id": 92, "text": "r_2,\\ p_1" }, { "math_id": 93, "text": "r_1, \\ p_2" }, { "math_id": 94, "text": "\\ r_3" }, { "math_id": 95, "text": "\n1 - p_1 r_2 - p_2 r_1 - p_1 p_2 r_3 - p_1 r_1 r_3 \n- p_2 r_2 r_3 + p_1 p_2 r_1 r_3 + p_1 p_2 r_2 r_3\n+ p_1 r_1 r_2 r_3+ p_2 r_1 r_2 r_3 - p_1 p_2 r_1 r_2 r_3 = 0 \n" }, { "math_id": 96, "text": "y, x, z" }, { "math_id": 97, "text": "\n1 - 3 z + z^3-(1-z^2) [3 x^2 y (1 + y - y^2)(1 + z) + x^3 y^2 (3 - 2 y)(1 + 2 z) ] = 0\n" }, { "math_id": 98, "text": "\n1 - (p_1 p_2 + p_1 p_3 + p_1 p_4 + p_2 p_3 + p_2 p_4 + p_3 p_4) \n + p_1 p_2 p_3 + p_1 p_2 p_4 + p_1 p_3 p_4 + p_2 p_3 p_4 = 0\n" }, { "math_id": 99, "text": "\n1 - (p_1 p_2 + p_1 p_3 + p_1 p_4 + p_2 p_3 + p_2 p_4 + p_3 p_4) \n + p_1 p_2 p_3 + p_1 p_2 p_4 + p_1 p_3 p_4 + p_2 p_3 p_4\n - u(1 - p_1 p_2 - p_3 p_4 + p_1 p_2 p_3 p_4) = 0\n" }, { "math_id": 100, "text": "p_1, p_2, p_3, p_4" }, { "math_id": 101, "text": "u" }, { "math_id": 102, "text": "p_4, p_1" }, { "math_id": 103, "text": "p_2, p_3" } ]
https://en.wikipedia.org/wiki?curid=11790568
1179451
Change of basis
Coordinate change in linear algebra In mathematics, an ordered basis of a vector space of finite dimension n allows representing uniquely any element of the vector space by a coordinate vector, which is a sequence of n scalars called coordinates. If two different bases are considered, the coordinate vector that represents a vector v on one basis is, in general, different from the coordinate vector that represents v on the other basis. A change of basis consists of converting every assertion expressed in terms of coordinates relative to one basis into an assertion expressed in terms of coordinates relative to the other basis. Such a conversion results from the "change-of-basis formula" which expresses the coordinates relative to one basis in terms of coordinates relative to the other basis. Using matrices, this formula can be written formula_0 where "old" and "new" refer respectively to the initially defined basis and the other basis, formula_1 and formula_2 are the column vectors of the coordinates of the same vector on the two bases, and formula_3 is the change-of-basis matrix (also called transition matrix), which is the matrix whose columns are the coordinates of the new basis vectors on the old basis. This article deals mainly with finite-dimensional vector spaces. However, many of the principles are also valid for infinite-dimensional vector spaces. Change of basis formula. Let formula_4 be a basis of a finite-dimensional vector space V over a field F. For "j" = 1, ..., "n", one can define a vector "w""j" by its coordinates formula_5 over formula_6 formula_7 Let formula_8 be the matrix whose jth column is formed by the coordinates of "w""j". (Here and in what follows, the index i refers always to the rows of A and the formula_9 while the index j refers always to the columns of A and the formula_10 such a convention is useful for avoiding errors in explicit computations.) Setting formula_11 one has that formula_12 is a basis of V if and only if the matrix A is invertible, or equivalently if it has a nonzero determinant. In this case, A is said to be the "change-of-basis matrix" from the basis formula_13 to the basis formula_14 Given a vector formula_15 let formula_16 be the coordinates of formula_17 over formula_18 and formula_19 its coordinates over formula_20 that is formula_21 The "change-of-basis formula" expresses the coordinates over the old basis in terms of the coordinates over the new basis. With above notation, it is formula_22 In terms of matrices, the change of basis formula is formula_23 where formula_24 and formula_25 are the column vectors of the coordinates of z over formula_13 and formula_26 respectively. "Proof:" Using the above definition of the change-of basis matrix, one has formula_27 As formula_28 the change-of-basis formula results from the uniqueness of the decomposition of a vector over a basis. Example. Consider the Euclidean vector space formula_29 Its standard basis consists of the vectors formula_30 and formula_31 If one rotates them by an angle of t, one gets a "new basis" formed by formula_32 and formula_33 So, the change-of-basis matrix is formula_34 The change-of-basis formula asserts that, if formula_35 are the new coordinates of a vector formula_36 then one has formula_37 That is, formula_38 This may be verified by writing formula_39 In terms of linear maps. 
Normally, a matrix represents a linear map, and the product of a matrix and a column vector represents the function application of the corresponding linear map to the vector whose coordinates form the column vector. The change-of-basis formula is a specific case of this general principle, although this is not immediately clear from its definition and proof. When one says that a matrix "represents" a linear map, one refers implicitly to bases of implied vector spaces, and to the fact that the choice of a basis induces an isomorphism between a vector space and "F""n", where F is the field of scalars. When only one basis is considered for each vector space, it is often convenient to leave this isomorphism implicit, and to work up to an isomorphism. As several bases of the same vector space are considered here, a more accurate wording is required. Let F be a field; the set formula_40 of the n-tuples is an F-vector space whose addition and scalar multiplication are defined component-wise. Its standard basis is the basis that has as its ith element the tuple with all components equal to 0 except the ith that is 1. A basis formula_41 of an F-vector space V defines a linear isomorphism formula_42 by formula_43 Conversely, such a linear isomorphism defines a basis, which is the image by formula_44 of the standard basis of formula_45 Let formula_4 be the "old basis" of a change of basis, and formula_46 the associated isomorphism. Given a change-of-basis matrix A, one could consider it the matrix of an endomorphism formula_47 of formula_45 Finally, define formula_48 (where formula_49 denotes function composition), and formula_50 A straightforward verification shows that this definition of formula_51 is the same as that of the preceding section. Now, by composing the equation formula_48 with formula_52 on the left and formula_53 on the right, one gets formula_54 It follows that, for formula_55 one has formula_56 which is the change-of-basis formula expressed in terms of linear maps instead of coordinates. Function defined on a vector space. A function that has a vector space as its domain is commonly specified as a multivariate function whose variables are the coordinates on some basis of the vector on which the function is applied. When the basis is changed, the expression of the function is changed. This change can be computed by substituting for the "old" coordinates their expressions in terms of the "new" coordinates. More precisely, if "f"(x) is the expression of the function in terms of the old coordinates, and if x = "A"y is the change-of-basis formula, then "f"("A"y) is the expression of the same function in terms of the new coordinates. The fact that the change-of-basis formula expresses the old coordinates in terms of the new ones may seem unnatural, but it is useful, as no matrix inversion is needed here. As the change-of-basis formula involves only linear functions, many function properties are kept by a change of basis. This allows defining these properties as properties of functions of a variable vector that are not related to any specific basis. So, a function whose domain is a vector space or a subset of it is said to have one of these properties if the multivariate function that represents it on some basis—and thus on every basis—has the same property. This is especially useful in the theory of manifolds, as this allows extending the concepts of continuous, differentiable, smooth and analytic functions to functions that are defined on a manifold. 
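The change-of-basis formula x = "A"y and the substitution "f"("A"y) can be checked numerically. The following sketch (an illustration added here, not part of the original article) uses the rotation matrix of the example above as the change-of-basis matrix A, verifies that a vector with new coordinates y has old coordinates A y, and shows that composing an expression with y ↦ "A"y gives the expression of the same function in the new coordinates; the particular quadratic f is an arbitrary illustrative choice.

```python
# A small numerical check, not from the article: the change-of-basis formula
# x = A y for the rotation example, and the expression f(A y) of a function f
# in the new coordinates.  The quadratic f below is an arbitrary illustration.
import numpy as np

t = 0.7
w1 = np.array([np.cos(t), np.sin(t)])      # new basis vector w1
w2 = np.array([-np.sin(t), np.cos(t)])     # new basis vector w2
A = np.column_stack([w1, w2])              # columns = new basis on the old basis

y = np.array([2.0, -1.0])                  # coordinates of a vector z on the new basis
z = y[0] * w1 + y[1] * w2                  # the vector z itself (standard coordinates)
x = A @ y                                  # change-of-basis formula: old coordinates
print(np.allclose(z, x))                   # True

def f_old(x):                              # expression of a function in old coordinates
    return x[0] ** 2 + 3.0 * x[0] * x[1] - x[1]

def f_new(y):                              # same function expressed in new coordinates
    return f_old(A @ y)

print(np.isclose(f_old(x), f_new(y)))      # True: both evaluate f at the vector z
```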
Linear maps. Consider a linear map "T": "W" → "V" from a vector space W of dimension n to a vector space V of dimension m. It is represented on "old" bases of V and W by an "m"×"n" matrix M. A change of bases is defined by an "m"×"m" change-of-basis matrix P for V, and an "n"×"n" change-of-basis matrix Q for W. On the "new" bases, the matrix of T is formula_57 This is a straightforward consequence of the change-of-basis formula. Endomorphisms. Endomorphisms are linear maps from a vector space V to itself. For a change of basis, the formula of the preceding section applies, with the same change-of-basis matrix on both sides of the formula. That is, if M is the square matrix of an endomorphism of V over an "old" basis, and P is a change-of-basis matrix, then the matrix of the endomorphism on the "new" basis is formula_58 As every invertible matrix can be used as a change-of-basis matrix, this implies that two matrices are similar if and only if they represent the same endomorphism on two different bases. Bilinear forms. A "bilinear form" on a vector space "V" over a field F is a function "V" × "V" → F which is linear in both arguments. That is, "B" : "V" × "V" → F is bilinear if the maps formula_59 and formula_60 are linear for every fixed formula_61 The matrix B of a bilinear form B on a basis formula_62 (the "old" basis in what follows) is the matrix whose entry in the ith row and jth column is formula_63. It follows that if v and w are the column vectors of the coordinates of two vectors v and w, one has formula_64 where formula_65 denotes the transpose of the matrix v. If P is a change-of-basis matrix, then a straightforward computation shows that the matrix of the bilinear form on the new basis is formula_66 A symmetric bilinear form is a bilinear form B such that formula_67 for every v and w in V. It follows that the matrix of B on any basis is symmetric. This implies that the property of being a symmetric matrix must be kept by the above change-of-basis formula. One can also check this by noting that the transpose of a matrix product is the product of the transposes computed in the reverse order. In particular, formula_68 and the two members of this equation equal formula_69 if the matrix B is symmetric. If the characteristic of the ground field F is not two, then for every symmetric bilinear form there is a basis for which the matrix is diagonal. Moreover, the resulting nonzero entries on the diagonal are defined up to multiplication by a square. So, if the ground field is the field formula_70 of the real numbers, these nonzero entries can be chosen to be either 1 or –1. Sylvester's law of inertia is a theorem that asserts that the numbers of 1s and of –1s depend only on the bilinear form, and not on the change of basis. Symmetric bilinear forms over the reals are often encountered in geometry and physics, typically in the study of quadrics and of the inertia of a rigid body. In these cases, orthonormal bases are especially useful; this means that one generally prefers to restrict changes of basis to those that have an orthogonal change-of-basis matrix, that is, a matrix such that formula_71 Such matrices have the fundamental property that the change-of-basis formula is the same for a symmetric bilinear form and for the endomorphism that is represented by the same symmetric matrix. 
The spectral theorem asserts that, given such a symmetric matrix, there is an orthogonal change of basis such that the resulting matrix (of both the bilinear form and the endomorphism) is a diagonal matrix with the eigenvalues of the initial matrix on the diagonal. It follows that, over the reals, if the matrix of an endomorphism is symmetric, then it is diagonalizable. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
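As a concrete illustration (added here; not part of the original article), the following NumPy sketch checks the two transformation rules stated above, congruence for a bilinear form and similarity for an endomorphism, and then uses numpy.linalg.eigh to produce an orthogonal change of basis that diagonalizes a symmetric matrix, as the spectral theorem asserts.

```python
# A numerical illustration, not from the article: under a change of basis with
# matrix P, a bilinear form transforms by congruence (P^T B P) and an
# endomorphism by similarity (P^-1 M P); for a real symmetric matrix,
# numpy.linalg.eigh supplies an orthogonal matrix for which both transforms
# agree and the result is diagonal.
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((3, 3))                 # generic, hence invertible, matrix
B = rng.standard_normal((3, 3))                 # matrix of a bilinear form (old basis)
M = rng.standard_normal((3, 3))                 # matrix of an endomorphism (old basis)

v_new, w_new = rng.standard_normal(3), rng.standard_normal(3)
v_old, w_old = P @ v_new, P @ w_new             # x_old = P x_new

B_new = P.T @ B @ P                             # congruence
M_new = np.linalg.inv(P) @ M @ P                # similarity
print(np.isclose(v_old @ B @ w_old, v_new @ B_new @ w_new))   # same value of B(v, w)
print(np.allclose(P @ (M_new @ v_new), M @ v_old))            # same image vector

# Spectral theorem: an orthogonal change of basis diagonalizes a symmetric matrix.
S = B + B.T                                     # a real symmetric matrix
eigenvalues, Q = np.linalg.eigh(S)              # Q has orthonormal columns
print(np.allclose(Q.T @ Q, np.eye(3)))          # Q is orthogonal, so Q^T = Q^-1
print(np.allclose(Q.T @ S @ Q, np.diag(eigenvalues)))         # diagonal matrix
```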
[ { "math_id": 0, "text": "\\mathbf x_\\mathrm{old} = A \\,\\mathbf x_\\mathrm{new}," }, { "math_id": 1, "text": "\\mathbf x_\\mathrm{old}" }, { "math_id": 2, "text": "\\mathbf x_\\mathrm{new}" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "B_\\mathrm {old}=(v_1, \\ldots, v_n)" }, { "math_id": 5, "text": "a_{i,j}" }, { "math_id": 6, "text": "B_\\mathrm {old}\\colon" }, { "math_id": 7, "text": "w_j=\\sum_{i=1}^n a_{i,j}v_i." }, { "math_id": 8, "text": "A=\\left(a_{i,j}\\right)_{i,j}" }, { "math_id": 9, "text": "v_i," }, { "math_id": 10, "text": "w_j;" }, { "math_id": 11, "text": "B_\\mathrm {new}=(w_1, \\ldots, w_n)," }, { "math_id": 12, "text": "B_\\mathrm {new}" }, { "math_id": 13, "text": "B_\\mathrm {old}" }, { "math_id": 14, "text": "B_\\mathrm {new}." }, { "math_id": 15, "text": "z\\in V," }, { "math_id": 16, "text": "(x_1, \\ldots, x_n) " }, { "math_id": 17, "text": "z" }, { "math_id": 18, "text": "B_\\mathrm {old}," }, { "math_id": 19, "text": "(y_1, \\ldots, y_n) " }, { "math_id": 20, "text": "B_\\mathrm {new};" }, { "math_id": 21, "text": "z=\\sum_{i=1}^nx_iv_i = \\sum_{j=1}^ny_jw_j." }, { "math_id": 22, "text": "x_i = \\sum_{j=1}^n a_{i,j}y_j\\qquad\\text{for } i=1, \\ldots, n." }, { "math_id": 23, "text": "\\mathbf x = A\\,\\mathbf y," }, { "math_id": 24, "text": "\\mathbf x" }, { "math_id": 25, "text": "\\mathbf y" }, { "math_id": 26, "text": "B_\\mathrm {new}," }, { "math_id": 27, "text": "\\begin{align}\nz&=\\sum_{j=1}^n y_jw_j\\\\\n &=\\sum_{j=1}^n \\left(y_j\\sum_{i=1}^n a_{i,j}v_i\\right)\\\\\n &=\\sum_{i=1}^n \\left(\\sum_{j=1}^n a_{i,j} y_j \\right) v_i.\n\\end{align}" }, { "math_id": 28, "text": "z=\\textstyle \\sum_{i=1}^n x_iv_i," }, { "math_id": 29, "text": "\\mathbb R^2." }, { "math_id": 30, "text": "v_1= (1,0)" }, { "math_id": 31, "text": "v_2= (0,1)." }, { "math_id": 32, "text": "w_1=(\\cos t, \\sin t)" }, { "math_id": 33, "text": "w_2=(-\\sin t, \\cos t)." }, { "math_id": 34, "text": "\\begin{bmatrix}\n\\cos t& -\\sin t\\\\\n\\sin t& \\cos t\n\\end{bmatrix}." }, { "math_id": 35, "text": "y_1, y_2" }, { "math_id": 36, "text": "(x_1, x_2)," }, { "math_id": 37, "text": "\\begin{bmatrix}x_1\\\\x_2\\end{bmatrix}=\\begin{bmatrix}\n\\cos t& -\\sin t\\\\\n\\sin t& \\cos t\n\\end{bmatrix}\\,\\begin{bmatrix}y_1\\\\y_2\\end{bmatrix}." }, { "math_id": 38, "text": "x_1=y_1\\cos t - y_2\\sin t \\qquad\\text{and}\\qquad x_2=y_1\\sin t + y_2\\cos t." }, { "math_id": 39, "text": "\\begin{align}\nx_1v_1+x_2v_2 &= (y_1\\cos t - y_2\\sin t) v_1 + (y_1\\sin t + y_2\\cos t) v_2\\\\\n &= y_1 (\\cos (t) v_1 + \\sin(t)v_2) + y_2 (-\\sin(t) v_1 +\\cos(t) v_2)\\\\\n &=y_1w_1+y_2w_2.\n\\end{align}" }, { "math_id": 40, "text": "F^n" }, { "math_id": 41, "text": "B=(v_1, \\ldots, v_n)" }, { "math_id": 42, "text": "\\phi\\colon F^n\\to V" }, { "math_id": 43, "text": "\\phi(x_1,\\ldots,x_n)=\\sum_{i=1}^n x_i v_i." }, { "math_id": 44, "text": "\\phi" }, { "math_id": 45, "text": "F^n." }, { "math_id": 46, "text": "\\phi_\\mathrm {old}" }, { "math_id": 47, "text": "\\psi_A" }, { "math_id": 48, "text": "\\phi_\\mathrm{new}=\\phi_\\mathrm{old}\\circ\\psi_A" }, { "math_id": 49, "text": "\\circ" }, { "math_id": 50, "text": "B_\\mathrm{new}= \\phi_\\mathrm{new}(\\phi_\\mathrm{old}^{-1}(B_\\mathrm{old})). " }, { "math_id": 51, "text": "B_\\mathrm{new}" }, { "math_id": 52, "text": "\\phi_\\mathrm{old}^{-1}" }, { "math_id": 53, "text": "\\phi_\\mathrm{new}^{-1}" }, { "math_id": 54, "text": "\\phi_\\mathrm{old}^{-1} = \\psi_A \\circ \\phi_\\mathrm{new}^{-1}." 
}, { "math_id": 55, "text": "v\\in V," }, { "math_id": 56, "text": "\\phi_\\mathrm{old}^{-1}(v)= \\psi_A(\\phi_\\mathrm{new}^{-1}(v))," }, { "math_id": 57, "text": "P^{-1}MQ." }, { "math_id": 58, "text": "P^{-1}MP." }, { "math_id": 59, "text": "v \\mapsto B(v, w)" }, { "math_id": 60, "text": "v \\mapsto B(w, v)" }, { "math_id": 61, "text": "w\\in V." }, { "math_id": 62, "text": "(v_1, \\ldots, v_n) " }, { "math_id": 63, "text": "B(v_i, v_j)" }, { "math_id": 64, "text": "B(v, w)=\\mathbf v^{\\mathsf T}\\mathbf B\\mathbf w," }, { "math_id": 65, "text": "\\mathbf v^{\\mathsf T}" }, { "math_id": 66, "text": "P^{\\mathsf T}\\mathbf B P." }, { "math_id": 67, "text": "B(v,w)=B(w,v)" }, { "math_id": 68, "text": "(P^{\\mathsf T}\\mathbf B P)^{\\mathsf T} = P^{\\mathsf T}\\mathbf B^{\\mathsf T} P," }, { "math_id": 69, "text": "P^{\\mathsf T} \\mathbf B P" }, { "math_id": 70, "text": "\\mathbb R" }, { "math_id": 71, "text": "P^{\\mathsf T}=P^{-1}." } ]
https://en.wikipedia.org/wiki?curid=1179451
11795634
Fourier algebra
Algebras arising in harmonic analysis Fourier and related algebras occur naturally in the harmonic analysis of locally compact groups. They play an important role in the duality theories of these groups. The Fourier–Stieltjes algebra and the Fourier–Stieltjes transform on the Fourier algebra of a locally compact group were introduced by Pierre Eymard in 1964. Definition. Informal. Let G be a locally compact abelian group, and Ĝ the dual group of G. Then formula_0 is the space of all functions on Ĝ which are integrable with respect to the Haar measure on Ĝ, and it has a Banach algebra structure where the product of two functions is convolution. We define formula_1 to be the set of Fourier transforms of functions in formula_0, and it is a closed sub-algebra of formula_2, the space of bounded continuous complex-valued functions on G with pointwise multiplication. We call formula_1 the Fourier algebra of G. Similarly, we write formula_3 for the measure algebra on Ĝ, meaning the space of all finite regular Borel measures on Ĝ. We define formula_4 to be the set of Fourier-Stieltjes transforms of measures in formula_3. It is a closed sub-algebra of formula_2, the space of bounded continuous complex-valued functions on G with pointwise multiplication. We call formula_4 the Fourier-Stieltjes algebra of G. Equivalently, formula_4 can be defined as the linear span of the set formula_5 of continuous positive-definite functions on G. Since formula_0 is naturally included in formula_3, and since the Fourier-Stieltjes transform of an formula_0 function is just the Fourier transform of that function, we have that formula_6. In fact, formula_1 is a closed ideal in formula_4. Formal. Let formula_7 be a Fourier–Stieltjes algebra and formula_8 be a Fourier algebra such that the locally compact group formula_9 is abelian. Let formula_10 be the measure algebra of finite measures on formula_11 and let formula_12 be the convolution algebra of integrable functions on formula_11, where formula_13 is the character group of the abelian group formula_9. The Fourier–Stieltjes transform of a finite measure formula_14 on formula_13 is the function formula_15 on formula_9 defined by formula_16 The space formula_7 of these functions is an algebra under pointwise multiplication; it is isomorphic to the measure algebra formula_10. Restricted to formula_12, viewed as a subspace of formula_10, the Fourier–Stieltjes transform is the Fourier transform on formula_12 and its image is, by definition, the Fourier algebra formula_8. The generalized Bochner theorem states that a measurable function on formula_9 is equal, almost everywhere, to the Fourier–Stieltjes transform of a non-negative finite measure on formula_11 if and only if it is positive definite. Thus, formula_7 can be defined as the linear span of the set of continuous positive-definite functions on formula_9. This definition is still valid when formula_9 is not abelian. Helson–Kahane–Katznelson–Rudin theorem. Let A(G) be the Fourier algebra of a compact group G. Building upon the work of Wiener, Lévy, Gelfand, and Beurling, in 1959 Helson, Kahane, Katznelson, and Rudin proved that, when G is compact and abelian, a function f defined on a closed convex subset of the plane operates in A(G) if and only if f is real analytic. In 1969 Dunkl proved the result holds when G is compact and contains an infinite abelian subgroup. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
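For readers who want a concrete finite-dimensional picture, the following NumPy sketch (an illustrative toy model added here, not part of the theory above) takes G = Z/N, which is its own dual group. The discrete Fourier transform plays the role of the Fourier transform: convolution of functions on the dual corresponds to pointwise multiplication of their transforms, which is the product in the Fourier algebra, and a function built as the Fourier–Stieltjes transform of a nonnegative "measure" on the dual is positive definite, in line with the Bochner-type statement above; positive definiteness is checked here through the positive semidefiniteness of the associated circulant matrix.

```python
# A finite toy model (not from the article) with G = Z/N, which is self-dual.
# The DFT plays the role of the Fourier transform on this group.
import numpy as np

N = 8
rng = np.random.default_rng(0)
f, g = rng.standard_normal(N), rng.standard_normal(N)

# Convolution on the dual group <-> pointwise product of Fourier transforms.
conv = np.array([sum(f[k] * g[(n - k) % N] for k in range(N)) for n in range(N)])
print(np.allclose(np.fft.fft(conv), np.fft.fft(f) * np.fft.fft(g)))   # True

# Bochner in miniature: build phi from a nonnegative "measure" mu on the dual.
mu = rng.random(N)                      # nonnegative weights on the dual group
phi = np.array([np.sum(mu * np.exp(2j * np.pi * np.arange(N) * x / N))
                for x in range(N)])     # Fourier-Stieltjes transform of mu
gram = np.array([[phi[(j - k) % N] for k in range(N)] for j in range(N)])
eigs = np.linalg.eigvalsh(gram)         # gram is Hermitian since mu is real >= 0
print(np.all(eigs > -1e-9))             # True: phi is positive definite
```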
[ { "math_id": 0, "text": " L_1(\\hat{\\mathit{G}}) " }, { "math_id": 1, "text": "A(G) " }, { "math_id": 2, "text": "CB(G) " }, { "math_id": 3, "text": " M(\\hat{\\mathit{G}}) " }, { "math_id": 4, "text": "B(G) " }, { "math_id": 5, "text": "P(G) " }, { "math_id": 6, "text": "A(G) \\subset B(G) " }, { "math_id": 7, "text": " B(\\mathit{G}) " }, { "math_id": 8, "text": " A(\\mathit{G}) " }, { "math_id": 9, "text": " \\mathit{G} " }, { "math_id": 10, "text": " M(\\widehat{\\mathit{G}}) " }, { "math_id": 11, "text": " \\widehat{G} " }, { "math_id": 12, "text": " L_1(\\widehat{\\mathit{G}}) " }, { "math_id": 13, "text": " \\widehat{\\mathit{G}} " }, { "math_id": 14, "text": " \\mu " }, { "math_id": 15, "text": " \\widehat{\\mu} " }, { "math_id": 16, "text": " \\widehat{\\mu}(x) = \\int_{\\widehat{G}} \\overline{X(x)} \\, d \\mu(X), \\quad x \\in G " } ]
https://en.wikipedia.org/wiki?curid=11795634
11797347
Electrochemical equivalent
Mass of an element transported by 1 coulomb of electric charge In chemistry, the electrochemical equivalent (Eq or Z) of a chemical element is the mass of that element (in grams) transported by a specific quantity of electricity, usually expressed in grams per coulomb of electric charge. The electrochemical equivalent of an element is measured with a voltameter. Definition. The electrochemical equivalent of a substance is the mass of the substance deposited at one of the electrodes when a current of 1 ampere is passed for 1 second, i.e. when a quantity of electricity of one coulomb is passed. The formula for the electrochemical equivalent is as follows: formula_0 where formula_1 is the mass of the substance deposited and formula_2 is the charge passed. Since formula_3, where formula_4 is the current applied and formula_5 is the time, we also have formula_6 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
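A short worked example of formula_6, i.e. Z = M/(It), using made-up measurement values that are not from the article: depositing 0.3296 g with a current of 2 A for 500 s gives Z ≈ 3.3 × 10⁻⁴ g/C. The cross-check against Faraday's laws, Z = M_molar/(zF), is a standard relation added here for comparison and is not stated in the text above.

```python
# Worked example of Z = M / q = M / (I t) with made-up measurement values.
mass_g = 0.3296          # grams deposited at the electrode (hypothetical)
current_A = 2.0          # amperes
time_s = 500.0           # seconds

charge_C = current_A * time_s         # q = I t = 1000 C
Z = mass_g / charge_C                 # electrochemical equivalent, g/C
print(f"Z = {Z:.3e} g/C")             # ~3.30e-04 g/C

# Cross-check (standard Faraday-law relation, not stated in the text above):
# Z = M_molar / (z F), e.g. copper with M_molar ~ 63.546 g/mol and z = 2.
F = 96485.0                           # Faraday constant, C/mol
print(f"Z(Cu) = {63.546 / (2 * F):.3e} g/C")   # ~3.29e-04 g/C
```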
[ { "math_id": 0, "text": "Z = M/q " }, { "math_id": 1, "text": "M" }, { "math_id": 2, "text": "q" }, { "math_id": 3, "text": "q=It" }, { "math_id": 4, "text": "I" }, { "math_id": 5, "text": "t" }, { "math_id": 6, "text": "Z=M/It" } ]
https://en.wikipedia.org/wiki?curid=11797347
11797854
Double complex
Mathematical concept In mathematics, specifically homological algebra, a double complex is a generalization of a chain complex where instead of having a formula_0-grading, the objects in the bicomplex have a formula_1-grading. The most general definition of a double complex, or a bicomplex, is given with objects in an additive category formula_2. A bicomplex is a sequence of objects formula_3 with two differentials, the horizontal differential formula_4 and the vertical differential formula_5 which satisfy the compatibility relation formula_6 Hence a double complex is a commutative diagram of the form formula_7 where the rows and columns form chain complexes. Some authors instead require that the squares anticommute, that is, formula_8 This eases the definition of total complexes. By setting formula_9, we can switch between having commutativity and anticommutativity. If the commutative definition is used, this alternating sign will have to show up in the definition of total complexes. Examples. There are many natural examples of bicomplexes. In particular, for a Lie groupoid, there is a bicomplex associated to it which can be used to construct its de Rham complex. Another common example of a bicomplex occurs in Hodge theory, where on an almost complex manifold formula_10 there is a bicomplex of differential forms formula_11 whose components are linear or anti-linear. For example, if formula_12 are the complex coordinates of formula_13 and formula_14 are the complex conjugates of these coordinates, a formula_15-form is of the form formula_16 See also. &lt;templatestyles src="Reflist/styles.css" /&gt;
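The sign convention formula_9 can be checked on a concrete bicomplex. The sketch below (an illustration added here, not taken from the article) builds the standard bicomplex obtained from the tensor product of two two-term complexes of vector spaces, verifies that its squares commute, that the sign (-1)^p makes them anticommute, and that the resulting total differential squares to zero. For convenience the indices decrease rather than increase, a purely cosmetic difference from the grading used in the definition above.

```python
# A small numerical sketch (not from the article): take two two-term complexes
# with differentials d : A1 -> A0 and e : B1 -> B0, and form the bicomplex
# C_{p,q} = A_p (x) B_q with d_h = d (x) id and d_v = id (x) e.
import numpy as np

rng = np.random.default_rng(0)
d = rng.standard_normal((2, 3))        # d : A1 (dim 3) -> A0 (dim 2)
e = rng.standard_normal((4, 2))        # e : B1 (dim 2) -> B0 (dim 4)
I = np.eye

# Maps out of C_{1,1} = A1 (x) B1:
dh_11 = np.kron(d, I(2))               # C_{1,1} -> C_{0,1}
dv_11 = np.kron(I(3), e)               # C_{1,1} -> C_{1,0}
# Maps into C_{0,0} = A0 (x) B0:
dv_01 = np.kron(I(2), e)               # C_{0,1} -> C_{0,0}
dh_10 = np.kron(d, I(4))               # C_{1,0} -> C_{0,0}

# The square commutes: both compositions equal d (x) e.
print(np.allclose(dv_01 @ dh_11, dh_10 @ dv_11))          # True

# Anticommuting convention: f_{p,q} = (-1)^p d_v.
f_11 = -dv_11        # p = 1
f_01 = +dv_01        # p = 0
print(np.allclose(dh_10 @ f_11 + f_01 @ dh_11, 0))        # True

# Total complex: D : C_{1,1} -> C_{0,1} (+) C_{1,0} -> C_{0,0}; with the sign,
# the composition of consecutive total differentials vanishes.
D2 = np.vstack([dh_11, f_11])
D1 = np.hstack([f_01, dh_10])
print(np.allclose(D1 @ D2, 0))                             # True
```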
[ { "math_id": 0, "text": "\\mathbb{Z}" }, { "math_id": 1, "text": "\\mathbb{Z}\\times\\mathbb{Z}" }, { "math_id": 2, "text": "\\mathcal{A}" }, { "math_id": 3, "text": "C_{p,q} \\in \\text{Ob}(\\mathcal{A})" }, { "math_id": 4, "text": "d^h: C_{p,q} \\to C_{p+1,q}" }, { "math_id": 5, "text": "d^v:C_{p,q} \\to C_{p,q+1}" }, { "math_id": 6, "text": "d_h\\circ d_v = d_v\\circ d_h" }, { "math_id": 7, "text": "\\begin{matrix}\n & & \\vdots & & \\vdots & & \\\\\n & & \\uparrow & & \\uparrow & & \\\\\n\\cdots & \\to & C_{p,q+1} & \\to & C_{p+1,q+1} & \\to & \\cdots \\\\\n& & \\uparrow & & \\uparrow & & \\\\\n\\cdots & \\to & C_{p,q} & \\to & C_{p+1,q} & \\to & \\cdots \\\\\n& & \\uparrow & & \\uparrow & & \\\\\n & & \\vdots & & \\vdots & & \\\\\n\n\\end{matrix}" }, { "math_id": 8, "text": "d_h\\circ d_v + d_v\\circ d_h = 0." }, { "math_id": 9, "text": "f_{p,q} = (-1)^p d^v_{p,q} \\colon C_{p,q} \\to C_{p,q-1}" }, { "math_id": 10, "text": "X" }, { "math_id": 11, "text": "\\Omega^{p,q}(X)" }, { "math_id": 12, "text": "z_1,z_2" }, { "math_id": 13, "text": "\\mathbb{C}^2" }, { "math_id": 14, "text": "\\overline{z}_1,\\overline{z}_2" }, { "math_id": 15, "text": "(1,1)" }, { "math_id": 16, "text": "f_{a,b}dz_a\\wedge d\\overline{z}_b" } ]
https://en.wikipedia.org/wiki?curid=11797854
1179950
Feature selection
Procedure in machine learning and statistics &lt;templatestyles src="Machine learning/styles.css"/&gt; Feature selection is the process of selecting a subset of relevant features (variables, predictors) for use in model construction. Stylometry and DNA microarray analysis are two cases where feature selection is used. It should be distinguished from feature extraction. Feature selection techniques are used for several reasons: * simplification of models to make them easier to interpret by researchers/users, * shorter training times, * to avoid the curse of dimensionality, *improve data's compatibility with a learning model class, *encode inherent symmetries present in the input space. The central premise when using a feature selection technique is that the data contains some features that are either "redundant" or "irrelevant", and can thus be removed without incurring much loss of information. "Redundant" and "irrelevant" are two distinct notions, since one relevant feature may be redundant in the presence of another relevant feature with which it is strongly correlated. Feature extraction creates new features from functions of the original features, whereas feature selection returns a subset of the features. Feature selection techniques are often used in domains where there are many features and comparatively few samples (or data points). Introduction. A feature selection algorithm can be seen as the combination of a search technique for proposing new feature subsets, along with an evaluation measure which scores the different feature subsets. The simplest algorithm is to test each possible subset of features finding the one which minimizes the error rate. This is an exhaustive search of the space, and is computationally intractable for all but the smallest of feature sets. The choice of evaluation metric heavily influences the algorithm, and it is these evaluation metrics which distinguish between the three main categories of feature selection algorithms: wrappers, filters and embedded methods. In traditional regression analysis, the most popular form of feature selection is stepwise regression, which is a wrapper technique. It is a greedy algorithm that adds the best feature (or deletes the worst feature) at each round. The main control issue is deciding when to stop the algorithm. In machine learning, this is typically done by cross-validation. In statistics, some criteria are optimized. This leads to the inherent problem of nesting. More robust methods have been explored, such as branch and bound and piecewise linear network. Subset selection. Subset selection evaluates a subset of features as a group for suitability. Subset selection algorithms can be broken up into wrappers, filters, and embedded methods. Wrappers use a search algorithm to search through the space of possible features and evaluate each subset by running a model on the subset. Wrappers can be computationally expensive and have a risk of over fitting to the model. Filters are similar to wrappers in the search approach, but instead of evaluating against a model, a simpler filter is evaluated. Embedded techniques are embedded in, and specific to, a model. Many popular search approaches use greedy hill climbing, which iteratively evaluates a candidate subset of features, then modifies the subset and evaluates if the new subset is an improvement over the old. Evaluation of the subsets requires a scoring metric that grades a subset of features. 
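As an illustration of the wrapper approach described above (greedy hill climbing over feature subsets scored by cross-validation), here is a minimal sketch assuming scikit-learn is available; the synthetic data, the logistic-regression model and the fixed subset size are arbitrary choices for the example, not part of the original text.

```python
# A minimal wrapper-method sketch: greedy forward selection, scoring each
# candidate subset by cross-validated accuracy of a model fit on that subset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=15, n_informative=4,
                           n_redundant=3, random_state=0)

def forward_selection(X, y, n_select=4):
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_select:   # simple stopping criterion
        scores = {j: cross_val_score(LogisticRegression(max_iter=1000),
                                     X[:, selected + [j]], y, cv=5).mean()
                  for j in remaining}
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
        print(f"added feature {best:2d}  cv accuracy = {scores[best]:.3f}")
    return selected

print("selected subset:", forward_selection(X, y))
```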
Exhaustive search is generally impractical, so at some implementor (or operator) defined stopping point, the subset of features with the highest score discovered up to that point is selected as the satisfactory feature subset. The stopping criterion varies by algorithm; possible criteria include: a subset score exceeds a threshold, a program's maximum allowed run time has been surpassed, etc. Alternative search-based techniques are based on targeted projection pursuit which finds low-dimensional projections of the data that score highly: the features that have the largest projections in the lower-dimensional space are then selected. Search approaches include: Two popular filter metrics for classification problems are correlation and mutual information, although neither are true metrics or 'distance measures' in the mathematical sense, since they fail to obey the triangle inequality and thus do not compute any actual 'distance' – they should rather be regarded as 'scores'. These scores are computed between a candidate feature (or set of features) and the desired output category. There are, however, true metrics that are a simple function of the mutual information; see here. Other available filter metrics include: Optimality criteria. The choice of optimality criteria is difficult as there are multiple objectives in a feature selection task. Many common criteria incorporate a measure of accuracy, penalised by the number of features selected. Examples include Akaike information criterion (AIC) and Mallows's "Cp", which have a penalty of 2 for each added feature. AIC is based on information theory, and is effectively derived via the maximum entropy principle. Other criteria are Bayesian information criterion (BIC), which uses a penalty of formula_0 for each added feature, minimum description length (MDL) which asymptotically uses formula_0, Bonferroni / RIC which use formula_1, maximum dependency feature selection, and a variety of new criteria that are motivated by false discovery rate (FDR), which use something close to formula_2. A maximum entropy rate criterion may also be used to select the most relevant subset of features. Structure learning. Filter feature selection is a specific case of a more general paradigm called structure learning. Feature selection finds the relevant feature set for a specific target variable whereas structure learning finds the relationships between all the variables, usually by expressing these relationships as a graph. The most common structure learning algorithms assume the data is generated by a Bayesian Network, and so the structure is a directed graphical model. The optimal solution to the filter feature selection problem is the Markov blanket of the target node, and in a Bayesian Network, there is a unique Markov Blanket for each node. Information Theory Based Feature Selection Mechanisms. There are different Feature Selection mechanisms around that utilize mutual information for scoring the different features. They usually use all the same algorithm: The simplest approach uses the mutual information as the "derived" score. However, there are different approaches, that try to reduce the redundancy between features. Minimum-redundancy-maximum-relevance (mRMR) feature selection. Peng "et al." proposed a feature selection method that can use either mutual information, correlation, or distance/similarity scores to select features. The aim is to penalise a feature's relevancy by its redundancy in the presence of the other selected features. 
The relevance of a feature set S for the class c is defined by the average value of all mutual information values between the individual feature "fi" and the class c as follows: formula_7. The redundancy of all features in the set S is the average value of all mutual information values between the feature "fi" and the feature "fj": formula_8 The mRMR criterion is a combination of the two measures given above and is defined as follows: formula_9 Suppose that there are n full-set features. Let "xi" be the set membership indicator function for feature "fi", so that "xi"=1 indicates presence and "xi"=0 indicates absence of the feature "fi" in the globally optimal feature set. Let formula_10 and formula_11. The above may then be written as an optimization problem: formula_12 The mRMR algorithm is an approximation of the theoretically optimal maximum-dependency feature selection algorithm that maximizes the mutual information between the joint distribution of the selected features and the classification variable. As mRMR approximates the combinatorial estimation problem with a series of much smaller problems, each of which only involves two variables, it uses pairwise joint probabilities, which are more robust. In certain situations the algorithm may underestimate the usefulness of features, as it has no way to measure interactions between features, which can increase relevancy. This can lead to poor performance when the features are individually useless but are useful when combined (a pathological case is found when the class is a parity function of the features). Overall the algorithm is more efficient (in terms of the amount of data required) than the theoretically optimal max-dependency selection, yet produces a feature set with little pairwise redundancy. mRMR is an instance of a large class of filter methods which trade off between relevancy and redundancy in different ways. Quadratic programming feature selection. mRMR is a typical example of an incremental greedy strategy for feature selection: once a feature has been selected, it cannot be deselected at a later stage. While mRMR could be optimized using floating search to reduce some features, it might also be reformulated as a global quadratic programming optimization problem as follows: formula_13 where formula_14 is the vector of feature relevancy assuming there are n features in total, formula_15 is the matrix of feature pairwise redundancy, and formula_16 represents relative feature weights. QPFS is solved via quadratic programming. It has recently been shown that QPFS is biased towards features with smaller entropy, due to its placement of the feature self-redundancy term formula_17 on the diagonal of H. Conditional mutual information. Another score derived from the mutual information is based on the conditional relevancy: formula_18 where formula_19 and formula_20. An advantage of SPECCMI is that it can be solved simply via finding the dominant eigenvector of Q, and is thus very scalable. SPECCMI also handles second-order feature interaction. Joint mutual information. In a study of different scores, Brown et al. recommended the joint mutual information as a good score for feature selection. The score tries to find the feature that adds the most new information to the already selected features, in order to avoid redundancy. 
The score is formulated as follows: formula_21 The score uses the conditional mutual information and the mutual information to estimate the redundancy between the already selected features (formula_22) and the feature under investigation (formula_23). Hilbert-Schmidt Independence Criterion Lasso based feature selection. For high-dimensional and small sample data (e.g., dimensionality &gt; 105 and the number of samples &lt; 103), the Hilbert-Schmidt Independence Criterion Lasso (HSIC Lasso) is useful. HSIC Lasso optimization problem is given as formula_24 where formula_25 is a kernel-based independence measure called the (empirical) Hilbert-Schmidt independence criterion (HSIC), formula_26 denotes the trace, formula_27 is the regularization parameter, formula_28 and formula_29 are input and output centered Gram matrices, formula_30 and formula_31 are Gram matrices, formula_32 and formula_33 are kernel functions, formula_34 is the centering matrix, formula_35 is the m-dimensional identity matrix (m: the number of samples), formula_36 is the m-dimensional vector with all ones, and formula_37 is the formula_38-norm. HSIC always takes a non-negative value, and is zero if and only if two random variables are statistically independent when a universal reproducing kernel such as the Gaussian kernel is used. The HSIC Lasso can be written as formula_39 where formula_40 is the Frobenius norm. The optimization problem is a Lasso problem, and thus it can be efficiently solved with a state-of-the-art Lasso solver such as the dual augmented Lagrangian method. Correlation feature selection. The correlation feature selection (CFS) measure evaluates subsets of features on the basis of the following hypothesis: "Good feature subsets contain features highly correlated with the classification, yet uncorrelated to each other". The following equation gives the merit of a feature subset "S" consisting of "k" features: formula_41 Here, formula_42 is the average value of all feature-classification correlations, and formula_43 is the average value of all feature-feature correlations. The CFS criterion is defined as follows: formula_44 The formula_45 and formula_46 variables are referred to as correlations, but are not necessarily Pearson's correlation coefficient or Spearman's ρ. Hall's dissertation uses neither of these, but uses three different measures of relatedness, minimum description length (MDL), symmetrical uncertainty, and relief. Let "xi" be the set membership indicator function for feature "fi"; then the above can be rewritten as an optimization problem: formula_47 The combinatorial problems above are, in fact, mixed 0–1 linear programming problems that can be solved by using branch-and-bound algorithms. Regularized trees. The features from a decision tree or a tree ensemble are shown to be redundant. A recent method called regularized tree can be used for feature subset selection. Regularized trees penalize using a variable similar to the variables selected at previous tree nodes for splitting the current node. Regularized trees only need build one tree model (or one tree ensemble model) and thus are computationally efficient. Regularized trees naturally handle numerical and categorical features, interactions and nonlinearities. They are invariant to attribute scales (units) and insensitive to outliers, and thus, require little data preprocessing such as normalization. Regularized random forest (RRF) is one type of regularized trees. 
The guided RRF is an enhanced RRF which is guided by the importance scores from an ordinary random forest. Overview of metaheuristic methods. A metaheuristic is a general description of an algorithm dedicated to solving difficult (typically NP-hard) optimization problems for which there are no classical solving methods. Generally, a metaheuristic is a stochastic algorithm tending to reach a global optimum. There are many metaheuristics, from a simple local search to a complex global search algorithm. Main principles. Feature selection methods are typically presented in three classes based on how they combine the selection algorithm and the model building. Filter method. Filter type methods select variables regardless of the model. They are based only on general features like the correlation with the variable to predict. Filter methods suppress the least interesting variables. The other variables will be part of a classification or a regression model used to classify or to predict data. These methods are particularly efficient in computation time and robust to overfitting. However, filter methods tend to select redundant variables when they do not consider the relationships between variables; more elaborate filter methods try to minimize this problem by removing variables that are highly correlated with each other, such as the Fast Correlation Based Filter (FCBF) algorithm. Wrapper method. Wrapper methods evaluate subsets of variables, which allows them, unlike filter approaches, to detect possible interactions amongst variables. The two main disadvantages of these methods are the increased risk of overfitting when the number of observations is insufficient, and the significant computation time when the number of variables is large. Embedded method. Embedded methods have recently been proposed that try to combine the advantages of both previous methods. A learning algorithm takes advantage of its own variable selection process and performs feature selection and classification simultaneously, such as the FRMT algorithm. Application of feature selection metaheuristics. This is a survey of applications of feature selection metaheuristics in the recent literature, as compiled by J. Hammon in her 2013 thesis. Feature selection embedded in learning algorithms. Some learning algorithms perform feature selection as part of their overall operation. These include: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
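To make the earlier criteria concrete, the following sketch implements the greedy step of the mRMR rule described above (pick the feature maximizing relevance minus mean redundancy with the already selected features), using scikit-learn's nearest-neighbour mutual-information estimators. It is an approximation for illustration, not the authors' reference implementation, and the dataset is synthetic.

```python
# Greedy mRMR-style selection: relevance I(f_i; c) minus the mean pairwise
# redundancy I(f_i; f_j) with already selected features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

X, y = make_classification(n_samples=400, n_features=12, n_informative=4,
                           n_redundant=4, random_state=0)
n, d = X.shape

relevance = mutual_info_classif(X, y, random_state=0)          # I(f_i; c)
redundancy = np.zeros((d, d))                                   # I(f_i; f_j)
for j in range(d):
    redundancy[:, j] = mutual_info_regression(X, X[:, j], random_state=0)

selected = [int(np.argmax(relevance))]                          # most relevant first
while len(selected) < 5:
    candidates = [j for j in range(d) if j not in selected]
    score = {j: relevance[j] - redundancy[j, selected].mean() for j in candidates}
    selected.append(max(score, key=score.get))

print("mRMR-selected features:", selected)
```

An embedded method can be illustrated in the same setting: L1-regularized (lasso-type) logistic regression performs feature selection as part of model fitting by driving the coefficients of uninformative features to exactly zero. The particular solver and regularization strength below are arbitrary choices for the example.

```python
# Embedded selection via an L1 penalty: nonzero coefficients are the kept features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=20, n_informative=4,
                           n_redundant=2, random_state=0)

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)

selected = np.flatnonzero(model.coef_[0])
print("features kept by the L1 penalty:", selected)
```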
[ { "math_id": 0, "text": "\\sqrt{\\log{n}}" }, { "math_id": 1, "text": "\\sqrt{2\\log{p}}" }, { "math_id": 2, "text": "\\sqrt{2\\log{\\frac{p}{q}}}" }, { "math_id": 3, "text": " f_{i} \\in F " }, { "math_id": 4, "text": "\\underset{f_{i} \\in F}\\operatorname{argmax}(I(f_{i},c))" }, { "math_id": 5, "text": "\\underset{f_{i} \\in F}\\operatorname{argmax}(I_{derived}(f_{i},c))" }, { "math_id": 6, "text": "|S|=l" }, { "math_id": 7, "text": " D(S,c) = \\frac{1}{|S|}\\sum_{f_{i}\\in S}I(f_{i};c) " }, { "math_id": 8, "text": " R(S) = \\frac{1}{|S|^{2}}\\sum_{f_{i},f_{j}\\in S}I(f_{i};f_{j})" }, { "math_id": 9, "text": "\\mathrm{mRMR}= \\max_{S}\n\\left[\\frac{1}{|S|}\\sum_{f_{i}\\in S}I(f_{i};c) - \n\\frac{1}{|S|^{2}}\\sum_{f_{i},f_{j}\\in S}I(f_{i};f_{j})\\right]." }, { "math_id": 10, "text": "c_i=I(f_i;c)" }, { "math_id": 11, "text": "a_{ij}=I(f_i;f_j)" }, { "math_id": 12, "text": "\\mathrm{mRMR}= \\max_{x\\in \\{0,1\\}^{n}} \n\\left[\\frac{\\sum^{n}_{i=1}c_{i}x_{i}}{\\sum^{n}_{i=1}x_{i}} -\n\\frac{\\sum^{n}_{i,j=1}a_{ij}x_{i}x_{j}}\n{(\\sum^{n}_{i=1}x_{i})^{2}}\\right]." }, { "math_id": 13, "text": "\n\\mathrm{QPFS}: \\min_\\mathbf{x} \\left\\{ \\alpha \\mathbf{x}^T H \\mathbf{x} - \\mathbf{x}^T F\\right\\} \\quad \\mbox{s.t.} \\ \\sum_{i=1}^n x_i=1, x_i\\geq 0\n" }, { "math_id": 14, "text": "F_{n\\times1}=[I(f_1;c),\\ldots, I(f_n;c)]^T" }, { "math_id": 15, "text": "H_{n\\times n}=[I(f_i;f_j)]_{i,j=1\\ldots n}" }, { "math_id": 16, "text": "\\mathbf{x}_{n\\times 1}" }, { "math_id": 17, "text": "I(f_i;f_i)" }, { "math_id": 18, "text": "\n\\mathrm{SPEC_{CMI}}: \\max_{\\mathbf{x}} \\left\\{\\mathbf{x}^T Q \\mathbf{x}\\right\\} \\quad \\mbox{s.t.}\\ \\|\\mathbf{x}\\|=1, x_i\\geq 0\n" }, { "math_id": 19, "text": "Q_{ii}=I(f_i;c)" }, { "math_id": 20, "text": "Q_{ij}=(I(f_i;c|f_j)+I(f_j;c|f_i))/2, i\\ne j" }, { "math_id": 21, "text": "\n\\begin{align}\nJMI(f_i) &= \\sum_{f_j \\in S} (I(f_i;c) + I(f_i;c|f_j)) \\\\\n &= \\sum_{f_j \\in S} \\bigl[ I (f_j;c) + I (f_i;c) - \\bigl(I (f_i;f_j) - I (f_i;f_j|c)\\bigr)\\bigr]\n\\end{align}\n" }, { "math_id": 22, "text": " f_j \\in S " }, { "math_id": 23, "text": "f_i" }, { "math_id": 24, "text": "\n\\mathrm{HSIC_{Lasso}}: \\min_{\\mathbf{x}} \\frac{1}{2}\\sum_{k,l = 1}^n x_k x_l {\\mbox{HSIC}}(f_k,f_l) - \\sum_{k = 1}^n x_k {\\mbox{HSIC}}(f_k,c) + \\lambda \\|\\mathbf{x}\\|_1, \\quad \\mbox{s.t.} \\ x_1,\\ldots, x_n \\geq 0,\n" }, { "math_id": 25, "text": "{\\mbox{HSIC}}(f_k,c) =\\mbox{tr}(\\bar{\\mathbf{K}}^{(k)} \\bar{\\mathbf{L}})" }, { "math_id": 26, "text": "\\mbox{tr}(\\cdot)" }, { "math_id": 27, "text": "\\lambda" }, { "math_id": 28, "text": "\\bar{\\mathbf{K}}^{(k)} = \\mathbf{\\Gamma} \\mathbf{K}^{(k)} \\mathbf{\\Gamma}" }, { "math_id": 29, "text": "\\bar{\\mathbf{L}} = \\mathbf{\\Gamma} \\mathbf{L} \\mathbf{\\Gamma}" }, { "math_id": 30, "text": "K^{(k)}_{i,j} = K(u_{k,i},u_{k,j})" }, { "math_id": 31, "text": "L_{i,j} = L(c_i,c_j)" }, { "math_id": 32, "text": "K(u,u')" }, { "math_id": 33, "text": "L(c,c')" }, { "math_id": 34, "text": "\\mathbf{\\Gamma} = \\mathbf{I}_m - \\frac{1}{m}\\mathbf{1}_m \\mathbf{1}_m^T" }, { "math_id": 35, "text": "\\mathbf{I}_m" }, { "math_id": 36, "text": "\\mathbf{1}_m" }, { "math_id": 37, "text": "\\|\\cdot\\|_{1}" }, { "math_id": 38, "text": "\\ell_1" }, { "math_id": 39, "text": "\n\\mathrm{HSIC_{Lasso}}: \\min_{\\mathbf{x}} \\frac{1}{2}\\left\\|\\bar{\\mathbf{L}} - \\sum_{k = 1}^{n} x_k \\bar{\\mathbf{K}}^{(k)} \\right\\|^2_{F} + \\lambda \\|\\mathbf{x}\\|_1, \\quad \\mbox{s.t.} \\ x_1,\\ldots,x_n \\geq 0,\n" }, { 
"math_id": 40, "text": "\\|\\cdot\\|_{F}" }, { "math_id": 41, "text": " \\mathrm{Merit}_{S_{k}} = \\frac{k\\overline{r_{cf}}}{\\sqrt{k+k(k-1)\\overline{r_{ff}}}}." }, { "math_id": 42, "text": " \\overline{r_{cf}} " }, { "math_id": 43, "text": " \\overline{r_{ff}} " }, { "math_id": 44, "text": "\\mathrm{CFS} = \\max_{S_k}\n\\left[\\frac{r_{c f_1}+r_{c f_2}+\\cdots+r_{c f_k}}\n{\\sqrt{k+2(r_{f_1 f_2}+\\cdots+r_{f_i f_j}+ \\cdots\n+ r_{f_k f_{k-1} })}}\\right]." }, { "math_id": 45, "text": "r_{cf_{i}}" }, { "math_id": 46, "text": "r_{f_{i}f_{j}}" }, { "math_id": 47, "text": "\\mathrm{CFS} = \\max_{x\\in \\{0,1\\}^{n}} \n\\left[\\frac{(\\sum^{n}_{i=1}a_{i}x_{i})^{2}}\n{\\sum^{n}_{i=1}x_i + \\sum_{i\\neq j} 2b_{ij} x_i x_j }\\right]." } ]
https://en.wikipedia.org/wiki?curid=1179950
11800092
Shannon wavelet
In functional analysis, the Shannon wavelet (or sinc wavelets) is a decomposition that is defined by signal analysis with ideal bandpass filters. The Shannon wavelet may be of either real or complex type. The Shannon wavelet is not well localized (noncompact) in the time domain, but its Fourier transform is band-limited (has compact support). Hence the Shannon wavelet has poor time localization but good frequency localization. These characteristics are in stark contrast to those of the Haar wavelet. The Haar and sinc systems are Fourier duals of each other. Definition. The sinc function is the starting point for the definition of the Shannon wavelet. Scaling function. First, we define the scaling function to be the sinc function: formula_0 and define the dilated and translated instances to be formula_1 where the parameters formula_2 denote the dilation and the translation of the wavelet, respectively. Then we can derive the Fourier transform of the scaling function: formula_3 where the (normalised) gate function is defined by formula_4 Also, for the dilated and translated instances of the scaling function: formula_5 Mother wavelet. Using formula_6 and the multiresolution approximation, we can derive the Fourier transform of the mother wavelet: formula_7 And the dilated and translated instances: formula_8 Then the Shannon mother wavelet and the family of dilated and translated instances can be obtained by the inverse Fourier transform: formula_9 formula_10 Properties of the mother wavelet and scaling function. The dilated and translated instances of the mother wavelet form an orthonormal family: formula_11 In the case formula_12, the translated scaling functions are also orthonormal, formula_13 and the scaling functions are orthogonal to the mother wavelets: formula_14 Reconstruction of a function by Shannon wavelets. Suppose formula_15 is such that formula_16 and, for any dilation and translation parameters formula_17, formula_18 and formula_19 Then formula_20 is uniformly convergent, where formula_21 Real Shannon wavelet. The Fourier transform of the Shannon mother wavelet is given by: formula_22 where the (normalised) gate function is defined by formula_23 The analytical expression of the real Shannon wavelet can be found by taking the inverse Fourier transform: formula_24 or alternatively as formula_25 where formula_26 is the usual sinc function that appears in the Shannon sampling theorem. This wavelet belongs to the formula_27-class of differentiability, but it decreases slowly at infinity and has no bounded support, since band-limited signals cannot be time-limited. The scaling function for the Shannon MRA (or "Sinc"-MRA) is given by the sample function: formula_28 Complex Shannon wavelet. In the case of a complex continuous wavelet, the Shannon wavelet is defined by formula_29.
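The closed-form expressions above are straightforward to evaluate numerically. The sketch below, a minimal illustration rather than part of the article's derivation, uses NumPy, whose numpy.sinc is the normalized sinc sin(πt)/(πt) used here, and checks that the two expressions given for the real mother wavelet (formula_24 and formula_25) agree on a test grid; the grid itself is arbitrary.

```python
import numpy as np

def shannon_scaling(t):
    """Scaling function phi(t) = sinc(t) = sin(pi t) / (pi t)."""
    return np.sinc(t)   # numpy's sinc is already the normalized version

def shannon_wavelet(t):
    """Real Shannon mother wavelet psi(t) = 2 sinc(2t) - sinc(t)."""
    return 2.0 * np.sinc(2.0 * t) - np.sinc(t)

def shannon_wavelet_cos_form(t):
    """Equivalent form psi(t) = sinc(t/2) * cos(3 pi t / 2)."""
    return np.sinc(t / 2.0) * np.cos(1.5 * np.pi * t)

t = np.linspace(-8.0, 8.0, 2001)
# The two expressions for the mother wavelet agree to numerical precision:
assert np.allclose(shannon_wavelet(t), shannon_wavelet_cos_form(t), atol=1e-12)
print(shannon_scaling(0.0), shannon_wavelet(0.0))   # both equal 1.0
```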
[ { "math_id": 0, "text": "\\phi^{\\text{(Sha)}}(t) := \\frac {\\sin \\pi t} {\\pi t} = \\operatorname{sinc}(t)." }, { "math_id": 1, "text": "\\phi^n_k(t) := 2^{n/2}\\phi^{\\text{(Sha)}}(2^n t-k)" }, { "math_id": 2, "text": "n,k" }, { "math_id": 3, "text": "\\Phi^{\\text{(Sha)}}(\\omega) = \\frac{1}{2\\pi}\\Pi(\\frac{\\omega}{2\\pi})\n=\n\\begin{cases}\n\\frac{1}{2\\pi}, & \\mbox{if } {|\\omega| \\le \\pi}, \\\\\n0 & \\mbox{if } \\mbox{otherwise}. \\\\\n\\end{cases}" }, { "math_id": 4, "text": " \\Pi ( x):= \n\\begin{cases}\n1, & \\mbox{if } {|x| \\le 1/2}, \\\\\n0 & \\mbox{if } \\mbox{otherwise}. \\\\\n\\end{cases} " }, { "math_id": 5, "text": "\\Phi^n_k(\\omega) = \\frac{2^{-n/2}}{2\\pi}e^{-i\\omega(k+1)/2^n}\\Pi(\\frac{\\omega}{2^{n+1}\\pi})" }, { "math_id": 6, "text": "\\Phi^{\\text{(Sha)}}" }, { "math_id": 7, "text": "\\Psi^{\\text{(Sha)}}(\\omega) = \\frac{1}{2\\pi}e^{-i\\omega}\n\\bigg(\\Pi(\\frac{\\omega}{\\pi}-\\frac{3}{2})+\\Pi(\\frac{\\omega}{\\pi}+\\frac{3}{2})\\bigg)\n" }, { "math_id": 8, "text": "\\Psi^n_k(\\omega) = \\frac{2^{-n/2}}{2\\pi}e^{-i\\omega(k+1)/2^n}\n\\bigg(\\Pi(\\frac{\\omega}{2^n\\pi}-\\frac{3}{2})+\\Pi(\\frac{\\omega}{2^n\\pi}+\\frac{3}{2})\\bigg)\n" }, { "math_id": 9, "text": "\\psi^{\\text{(Sha)}}(t) = \\frac{\\sin\\pi(t-(1/2))-\\sin2\\pi(t-(1/2))}{\\pi(t-1/2)}\n=\\operatorname{sinc}\\bigg(t-\\frac{1}{2}\\bigg)-2\\operatorname{sinc}\\bigg(2(t-\\frac{1}{2})\\bigg)\n" }, { "math_id": 10, "text": "\\psi^n_k(t) = 2^{n/2}\\psi^{\\text{(Sha)}}(2^nt-k)\n" }, { "math_id": 11, "text": "<\\psi^n_k(t), \\psi^m_h(t)>=\\delta^{nm}\\delta_{hk}=\n\\begin{cases} 1, & \\text{if }h=k \\text{ and } n=m\\\\ 0, & \\text{otherwise} \\end{cases}\n" }, { "math_id": 12, "text": "n=0\n" }, { "math_id": 13, "text": "<\\phi^0_k(t), \\phi^0_h(t)>=\\delta^{kh}\n" }, { "math_id": 14, "text": "<\\phi^0_k(t), \\psi^m_h(t)>=0\n" }, { "math_id": 15, "text": "f(x)\\in L_2(\\mathbb{R})\n" }, { "math_id": 16, "text": "\\operatorname{supp}\\operatorname{FT}\\{f\\}\\subset[-\\pi,\\pi]\n" }, { "math_id": 17, "text": "n,k\n" }, { "math_id": 18, "text": "\\Bigg|\\int^\\infty_{-\\infty}f(t)\\phi^0_k(t)dt\\Bigg|<\\infty\n" }, { "math_id": 19, "text": "\\Bigg|\\int^\\infty_{-\\infty}f(t)\\psi^n_k(t)dt\\Bigg|<\\infty\n" }, { "math_id": 20, "text": "f(t)=\\sum^\\infty_{k=\\infty}\\alpha_k\\phi^0_k(t)\n" }, { "math_id": 21, "text": "\\alpha_k=f(k)\n" }, { "math_id": 22, "text": " \\Psi^{(\\operatorname{Sha}) }(w) = \\prod \\left( \\frac {w- 3 \\pi /2} {\\pi}\\right)+\\prod \\left( \\frac {w+ 3 \\pi /2} {\\pi}\\right). " }, { "math_id": 23, "text": " \\prod ( x):= \n\\begin{cases}\n1, & \\mbox{if } {|x| \\le 1/2}, \\\\\n0 & \\mbox{if } \\mbox{otherwise}. \\\\\n\\end{cases} " }, { "math_id": 24, "text": " \\psi^{(\\operatorname{Sha}) }(t) = \\operatorname{sinc} \\left( \\frac {t} {2}\\right)\\cdot \\cos \\left( \\frac {3 \\pi t} {2}\\right)" }, { "math_id": 25, "text": " \\psi^{(\\operatorname{Sha})}(t)=2 \\cdot \\operatorname{sinc}(2t)-\\operatorname{sinc}(t), " }, { "math_id": 26, "text": "\\operatorname{sinc}(t):= \\frac {\\sin {\\pi t}} {\\pi t}" }, { "math_id": 27, "text": "C^\\infty" }, { "math_id": 28, "text": "\\phi^{(Sha)}(t)= \\frac {\\sin \\pi t} {\\pi t} = \\operatorname{sinc}(t)." }, { "math_id": 29, "text": " \\psi^{(CSha) }(t)=\\operatorname{sinc}(t) \\cdot e^{-2\\pi i t}" } ]
https://en.wikipedia.org/wiki?curid=11800092
1180105
Ancient Egyptian mathematics
Mathematics developed and used in Ancient Egypt Ancient Egyptian mathematics is the mathematics that was developed and used in Ancient Egypt c. 3000 to c.  BCE, from the Old Kingdom of Egypt until roughly the beginning of Hellenistic Egypt. The ancient Egyptians utilized a numeral system for counting and solving written mathematical problems, often involving multiplication and fractions. Evidence for Egyptian mathematics is limited to a scarce amount of surviving sources written on papyrus. From these texts it is known that ancient Egyptians understood concepts of geometry, such as determining the surface area and volume of three-dimensional shapes useful for architectural engineering, and algebra, such as the false position method and quadratic equations. Overview. Written evidence of the use of mathematics dates back to at least 3200 BC with the ivory labels found in Tomb U-j at Abydos. These labels appear to have been used as tags for grave goods and some are inscribed with numbers. Further evidence of the use of the base 10 number system can be found on the Narmer Macehead which depicts offerings of 400,000 oxen, 1,422,000 goats and 120,000 prisoners. Archaeological evidence has suggested that the Ancient Egyptian counting system had origins in Sub-Saharan Africa. Also, fractal geometry designs which are widespread among Sub-Saharan African cultures are also found in Egyptian architecture and cosmological signs. The evidence of the use of mathematics in the Old Kingdom (c. 2690–2180 BC) is scarce, but can be deduced from inscriptions on a wall near a mastaba in Meidum which gives guidelines for the slope of the mastaba. The lines in the diagram are spaced at a distance of one cubit and show the use of that unit of measurement. The earliest true mathematical documents date to the 12th Dynasty (c. 1990–1800 BC). The Moscow Mathematical Papyrus, the Egyptian Mathematical Leather Roll, the Lahun Mathematical Papyri which are a part of the much larger collection of Kahun Papyri and the Berlin Papyrus 6619 all date to this period. The Rhind Mathematical Papyrus which dates to the Second Intermediate Period (c. 1650 BC) is said to be based on an older mathematical text from the 12th dynasty. The Moscow Mathematical Papyrus and Rhind Mathematical Papyrus are so called mathematical problem texts. They consist of a collection of problems with solutions. These texts may have been written by a teacher or a student engaged in solving typical mathematics problems. An interesting feature of ancient Egyptian mathematics is the use of unit fractions. The Egyptians used some special notation for fractions such as , and and in some texts for , but other fractions were all written as unit fractions of the form or sums of such unit fractions. Scribes used tables to help them work with these fractions. The Egyptian Mathematical Leather Roll for instance is a table of unit fractions which are expressed as sums of other unit fractions. The Rhind Mathematical Papyrus and some of the other texts contain tables. These tables allowed the scribes to rewrite any fraction of the form as a sum of unit fractions. During the New Kingdom (c. 1550–1070 BC) mathematical problems are mentioned in the literary Papyrus Anastasi I, and the Papyrus Wilbour from the time of Ramesses III records land measurements. In the workers village of Deir el-Medina several ostraca have been found that record volumes of dirt removed while quarrying the tombs. Sources. 
Current understanding of ancient Egyptian mathematics is impeded by the paucity of available sources. The sources that do exist include the following texts (which are generally dated to the Middle Kingdom and Second Intermediate Period): From the New Kingdom there are a handful of mathematical texts and inscriptions related to computations: According to Étienne Gilson, Abraham "taught the Egyptians arythmetic and astronomy". Numerals. Ancient Egyptian texts could be written in either hieroglyphs or in hieratic. In either representation the number system was always given in base 10. The number 1 was depicted by a simple stroke, the number 2 was represented by two strokes, etc. The numbers 10, 100, 1000, 10,000 and 100,000 had their own hieroglyphs. Number 10 is a hobble for cattle, number 100 is represented by a coiled rope, the number 1000 is represented by a lotus flower, the number 10,000 is represented by a finger, the number 100,000 is represented by a frog, and a million was represented by a god with his hands raised in adoration. Egyptian numerals date back to the Predynastic period. Ivory labels from Abydos record the use of this number system. It is also common to see the numerals in offering scenes to indicate the number of items offered. The king's daughter Neferetiabet is shown with an offering of 1000 oxen, bread, beer, etc. The Egyptian number system was additive. Large numbers were represented by collections of the glyphs and the value was obtained by simply adding the individual numbers together. The Egyptians almost exclusively used fractions of the form . One notable exception is the fraction , which is frequently found in the mathematical texts. Very rarely a special glyph was used to denote . The fraction was represented by a glyph that may have depicted a piece of linen folded in two. The fraction was represented by the glyph for a mouth with 2 (different sized) strokes. The rest of the fractions were always represented by a mouth super-imposed over a number. Notation. Steps of calculations were written in sentences in Egyptian languages. (e.g. "Multiply 10 times 100; it becomes 1000.") In Rhind Papyrus Problem 28, the hieroglyphs D54-and-D55 (D54, D55), symbols for feet, were used to mean "to add" and "to subtract." These were presumably shorthands for G35-D54 and O1:D21:D54 meaning "to go in" and "to go out." Multiplication and division. Egyptian multiplication was done by a repeated doubling of the number to be multiplied (the multiplicand), and choosing which of the doublings to add together (essentially a form of binary arithmetic), a method that links to the Old Kingdom. The multiplicand was written next to figure 1; the multiplicand was then added to itself, and the result written next to the number 2. The process was continued until the doublings gave a number greater than half of the multiplier. Then the doubled numbers (1, 2, etc.) would be repeatedly subtracted from the multiplier to select which of the results of the existing calculations should be added together to create the answer. As a shortcut for larger numbers, the multiplicand can also be immediately multiplied by 10, 100, 1000, 10000, etc. For example, Problem 69 on the Rhind Papyrus (RMP) provides the following illustration, as if Hieroglyphic symbols were used (rather than the RMP's actual hieratic script). The "" denotes the intermediate results that are added together to produce the final answer. The table above can also be used to divide 1120 by 80. 
We would solve this problem by finding the quotient (80) as the sum of those multipliers of 80 that add up to 1120. In this example that would yield a quotient of 10 + 4 = 14. A more complicated example of the division algorithm is provided by Problem 66. A total of 3200 ro of fat are to be distributed evenly over 365 days. First the scribe would double 365 repeatedly until the largest possible multiple of 365 is reached, which is smaller than 3200. In this case 8 times 365 is 2920 and further addition of multiples of 365 would clearly give a value greater than 3200. Next it is noted that  +  +  times 365 gives us the value of 280 we need. Hence we find that 3200 divided by 365 must equal 8 +  +  + . Algebra. Egyptian algebra problems appear in both the Rhind mathematical papyrus and the Moscow mathematical papyrus as well as several other sources. Aha problems involve finding unknown quantities (referred to as Aha) if the sum of the quantity and part(s) of it are given. The Rhind Mathematical Papyrus also contains four of these type of problems. Problems 1, 19, and 25 of the Moscow Papyrus are Aha problems. For instance problem 19 asks one to calculate a quantity taken times and added to 4 to make 10. In other words, in modern mathematical notation we are asked to solve the linear equation: formula_0 Solving these Aha problems involves a technique called method of false position. The technique is also called the method of false assumption. The scribe would substitute an initial guess of the answer into the problem. The solution using the false assumption would be proportional to the actual answer, and the scribe would find the answer by using this ratio. The mathematical writings show that the scribes used (least) common multiples to turn problems with fractions into problems using integers. In this connection red auxiliary numbers are written next to the fractions. The use of the Horus eye fractions shows some (rudimentary) knowledge of geometrical progression. Knowledge of arithmetic progressions is also evident from the mathematical sources. Quadratic equations. The ancient Egyptians were the first civilization to develop and solve second-degree (quadratic) equations. This information is found in the Berlin Papyrus fragment. Additionally, the Egyptians solve first-degree algebraic equations found in Rhind Mathematical Papyrus. Geometry. There are only a limited number of problems from ancient Egypt that concern geometry. Geometric problems appear in both the Moscow Mathematical Papyrus (MMP) and in the Rhind Mathematical Papyrus (RMP). The examples demonstrate that the Ancient Egyptians knew how to compute areas of several geometric shapes and the volumes of cylinders and pyramids. The Seqed. Problem 56 of the RMP indicates an understanding of the idea of geometric similarity. This problem discusses the ratio run/rise, also known as the seqed. Such a formula would be needed for building pyramids. In the next problem (Problem 57), the height of a pyramid is calculated from the base length and the "seked" (Egyptian for the reciprocal of the slope), while problem 58 gives the length of the base and the height and uses these measurements to compute the seqed. In Problem 59 part 1 computes the seqed, while the second part may be a computation to check the answer: "If you construct a pyramid with base side 12 [cubits] and with a seqed of 5 palms 1 finger; what is its altitude?" References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. 
&lt;templatestyles src="Refbegin/styles.css" /&gt;
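The doubling-based multiplication and division described above translate directly into a short program. The sketch below is a modern reconstruction of the procedure (essentially binary arithmetic) rather than a transcription of any papyrus problem; the function names are illustrative, and the example reproduces the 80 × 14 = 1120 computation and the division of 1120 by 80 discussed above.

```python
def egyptian_multiply(multiplier, multiplicand):
    """Multiply by repeated doubling, as in the method described above.

    A table of (1, multiplicand), (2, 2*multiplicand), (4, 4*multiplicand), ...
    is built, and the rows whose left-hand entries sum to the multiplier are
    added together.
    """
    table = []
    power, value = 1, multiplicand
    while power <= multiplier:
        table.append((power, value))
        power, value = 2 * power, 2 * value
    total, remaining = 0, multiplier
    for power, value in reversed(table):      # choose which doublings to add
        if power <= remaining:
            remaining -= power
            total += value
    return total

def egyptian_divide(dividend, divisor):
    """Divide by finding which doublings of the divisor sum to the dividend
    (exact division only, as in the 1120 / 80 example above)."""
    table = []
    power, value = 1, divisor
    while value <= dividend:
        table.append((power, value))
        power, value = 2 * power, 2 * value
    quotient, remaining = 0, dividend
    for power, value in reversed(table):
        if value <= remaining:
            remaining -= value
            quotient += power
    return quotient, remaining                # remaining is 0 for exact division

print(egyptian_multiply(14, 80))   # 1120
print(egyptian_divide(1120, 80))   # (14, 0)
```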
[ { "math_id": 0, "text": "\\frac 3 2 \\times x + 4 = 10.\\ " } ]
https://en.wikipedia.org/wiki?curid=1180105
11801199
Zero field NMR
Acquisition of NMR spectra of chemicals Zero- to ultralow-field (ZULF) NMR is the acquisition of nuclear magnetic resonance (NMR) spectra of chemicals with magnetically active nuclei (spins 1/2 and greater) in an environment carefully screened from magnetic fields (including from the Earth's field). ZULF NMR experiments typically involve the use of passive or active shielding to attenuate Earth’s magnetic field. This is in contrast to the majority of NMR experiments which are performed in high magnetic fields provided by superconducting magnets. In ZULF experiments the sample is moved through a low field magnet into the "zero field" region where the dominant interactions are nuclear spin-spin couplings, and the coupling between spins and the external magnetic field is a perturbation to this. There are a number of advantages to operating in this regime: magnetic-susceptibility-induced line broadening is attenuated which reduces inhomogeneous broadening of the spectral lines for samples in heterogeneous environments. Another advantage is that the low frequency signals readily pass through conductive materials such as metals due to the increased skin depth; this is not the case for high-field NMR for which the sample containers are usually made of glass, quartz or ceramic. High-field NMR employs inductive detectors to pick up the radiofrequency signals, but this would be inefficient in ZULF NMR experiments since the signal frequencies are typically much lower (on the order of hertz to kilohertz). The development of highly sensitive magnetic sensors in the early 2000s including SQUIDs, magnetoresistive sensors, and SERF atomic magnetometers made it possible to detect NMR signals directly in the ZULF regime. Previous ZULF NMR experiments relied on indirect detection where the sample had to be shuttled from the shielded ZULF environment into a high magnetic field for detection with a conventional inductive pick-up coil. One successful implementation was using atomic magnetometers at zero magnetic field working with rubidium vapor cells to detect zero-field NMR. Without a large magnetic field to induce nuclear spin polarization, the nuclear spins must be polarized externally using hyperpolarization techniques. This can be as simple as polarizing the spins in a magnetic field followed by shuttling to the ZULF region for signal acquisition, and alternative chemistry-based hyperpolarization techniques can also be used. It is sometimes but inaccurately referred to as nuclear quadrupole resonance (NQR). Zero-field NMR experiments. Spin Hamiltonians. Free evolution of nuclear spins is governed by a Hamiltonian (formula_0), which in the case of liquid-state nuclear magnetic resonance may be split into two major terms. The first term (formula_1) corresponds to the Zeeman interaction between spins and the external magnetic field, which includes chemical shift (formula_2). The second term (formula_3) corresponds to the indirect spin-spin, or J-coupling, interaction. formula_4, where: formula_5, and formula_6. Here the summation is taken over the whole system of coupled spins; formula_7 denotes the reduced Planck constant; formula_8 denotes the gyromagnetic ratio of spin a; formula_9 denotes the isotropic part of the chemical shift for the a-th spin; formula_10 denotes the spin operator of the a-th spin; formula_11 is the external magnetic field experienced by all considered spins, and; formula_12 is the J-coupling constant between spins a and b. 
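For a single pair of J-coupled heteronuclear spin-1/2 nuclei (for example 1H and 13C), the Hamiltonian above can be written as an explicit 4 × 4 matrix and diagonalized numerically. The sketch below works in frequency units (the Hamiltonian divided by Planck's constant), neglects chemical shift, and follows the sign conventions of the formulas above; the J value of 140 Hz and the field values are illustrative assumptions. At zero field the eigenstates are the singlet and triplet states, separated by J.

```python
import numpy as np

# Single-spin operators for spin 1/2 (in units of hbar)
Ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Iy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Iz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
E = np.eye(2, dtype=complex)

def two_spin_hamiltonian_hz(j_hz, b_tesla, gamma1_hz_per_t, gamma2_hz_per_t):
    """H / h for two J-coupled spins: Zeeman terms plus the J-coupling term,
    with the signs used in the formulas above and chemical shift neglected."""
    I1 = [np.kron(op, E) for op in (Ix, Iy, Iz)]
    I2 = [np.kron(E, op) for op in (Ix, Iy, Iz)]
    h_zeeman = -b_tesla * (gamma1_hz_per_t * I1[2] + gamma2_hz_per_t * I2[2])
    h_j = -j_hz * sum(I1[k] @ I2[k] for k in range(3))
    return h_zeeman + h_j

gH, gC = 42.577e6, 10.708e6   # gamma / 2pi for 1H and 13C, in Hz per tesla
J = 140.0                     # Hz, an illustrative one-bond 1H-13C coupling

for B in (0.0, 1e-9, 1.0):    # zero field, 1 nT, and a conventional 1 T field
    evals = np.linalg.eigvalsh(two_spin_hamiltonian_hz(J, B, gH, gC))
    print(f"B = {B:g} T: eigenvalues (Hz) =", np.round(evals, 3))
# At B = 0 the spectrum is {+3J/4, -J/4 (threefold degenerate)} under this sign
# convention, so the only nonzero singlet-triplet transition frequency is J itself.
```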
Importantly, the relative strength of formula_1 and formula_3 (and therefore the spin dynamics behavior of such a system) depends on the magnetic field. For example, in conventional NMR, formula_13 is typically larger than 1 T, so the Larmor frequency formula_14 of 1H exceeds tens of MHz. This is much larger than formula_15-coupling values, which are typically hertz to hundreds of hertz. In this limit, formula_3 is a perturbation to formula_1. In contrast, at nanotesla fields, Larmor frequencies can be much smaller than formula_15-couplings, and formula_3 dominates. Polarization. Before signals can be detected in a ZULF NMR experiment, it is first necessary to polarize the nuclear spin ensemble, since the signal is proportional to the nuclear spin magnetization. There are a number of methods to generate nuclear spin polarization. The most common is to allow the spins to thermally equilibrate in a magnetic field; the nuclear spin alignment with the magnetic field due to the Zeeman interaction leads to weak spin polarization. The polarization generated in this way is on the order of 10−6 for tesla field-strengths. An alternative approach is to use hyperpolarization techniques, which are chemical and physical methods to generate nuclear spin polarization. Examples include parahydrogen-induced polarization, spin-exchange optical pumping of noble gas atoms, dissolution dynamic nuclear polarization, and chemically-induced dynamic nuclear polarization. Excitation and spin manipulation. NMR experiments require creating a transient non-stationary state of the spin system. In conventional high-field experiments, radio frequency pulses tilt the magnetization from along the main magnetic field direction into the transverse plane. Once in the transverse plane, the magnetization is no longer in a stationary state (or eigenstate), and so it begins to precess about the main magnetic field, creating a detectable oscillating magnetic field. In ZULF experiments, constant magnetic field pulses are used to induce non-stationary states of the spin system. The two main strategies consist of (1) switching the magnetic field from pseudo-high field to zero (or ultralow) field, or (2) ramping the magnetic field experienced by the spins down to zero field in order to convert the Zeeman populations into zero-field eigenstates adiabatically, and subsequently applying a constant magnetic field pulse to induce a coherence between the zero-field eigenstates. In the simple case of a heteronuclear pair of J-coupled spins, both these excitation schemes induce a transition between the singlet and triplet-0 states, which generates a detectable oscillatory magnetic field. More sophisticated pulse sequences have been reported, including selective pulses, two-dimensional experiments and decoupling schemes. Signal detection. NMR signals are usually detected inductively, but the low frequencies of the electromagnetic radiation emitted by samples in a ZULF experiment make inductive detection impractical at low fields. Hence, the earliest approach for measuring zero-field NMR in solid samples was via field-cycling techniques. The field cycling involves three steps: preparation, evolution and detection. In the preparation stage, a field is applied in order to magnetize the nuclear spins. Then the field is suddenly switched to zero to initiate the evolution interval, and the magnetization evolves under the zero-field Hamiltonian. After a time period, the field is again switched on and the signal is detected inductively at high field.
In a single field cycle, the magnetization observed corresponds only to a single value of the zero-field evolution time. The time-varying magnetization can be detected by repeating the field cycle with incremented lengths of the zero-field interval, and hence the evolution and decay of the magnetization is measured point by point. The Fourier transform of this magnetization will result to the zero-field absorption spectrum. The emergence of highly sensitive magnetometry techniques has allowed for the detection of zero-field NMR signals in situ. Examples include superconducting quantum interference devices (SQUIDs), magnetoresistive sensors, and SERF atomic magnetometers. SQUIDs have high sensitivity, but require cryogenic conditions to operate, which makes them practically somewhat difficult to employ for the detection of chemical or biological samples. Magnetoresistive sensors are less sensitive, but are much easier to handle and to bring close to the NMR sample which is advantageous since proximity improves sensitivity. The most common sensors employed in ZULF NMR experiments are optically-pumped magnetometers, which have high sensitivity and can be placed in close proximity to an NMR sample. Definition of the ZULF regime. The boundaries between zero-, ultralow-, low- and high-field NMR are not rigorously defined, although approximate working definitions are in routine use for experiments involving small molecules in solution. The boundary between zero and ultralow field is usually defined as the field at which the nuclear spin precession frequency matches the spin relaxation rate, i.e., at zero field the nuclear spins relax faster than they precess about the external field. The boundary between ultralow and low field is usually defined as the field at which Larmor frequency differences between different nuclear spin species match the spin-spin (J or dipolar) couplings, i.e., at ultralow field spin-spin couplings dominate and the Zeeman interaction is a perturbation. The boundary between low and high field is more ambiguous and these terms are used differently depending on the application or research topic. In the context of ZULF NMR, the boundary is defined as the field at which chemical shift differences between nuclei of the same isotopic species in a sample match the spin-spin couplings. Note that these definitions strongly depend on the sample being studied, and the field regime boundaries can vary by orders of magnitude depending on sample parameters such as the nuclear spin species, spin-spin coupling strengths, and spin relaxation times. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
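The working definitions above can be turned into rough order-of-magnitude estimates. The sketch below computes, for an illustrative 1H–13C pair, the field below which spin precession is slower than relaxation (the zero-/ultralow-field boundary) and the field at which the difference in Larmor frequencies matches a J-coupling (the ultralow-/low-field boundary). The assumed coherence time of 1 s and coupling of 140 Hz are placeholders chosen only to make the numbers concrete.

```python
# Gyromagnetic ratios (gamma / 2 pi) in Hz per tesla
GAMMA_HZ_PER_T = {"1H": 42.577e6, "13C": 10.708e6}

def zero_to_ultralow_boundary(gamma_hz_per_t, t2_seconds):
    """Field at which the precession frequency equals the relaxation rate 1/T2."""
    return (1.0 / t2_seconds) / gamma_hz_per_t

def ultralow_to_low_boundary(gamma_a, gamma_b, j_hz):
    """Field at which the Larmor-frequency difference matches the J-coupling."""
    return j_hz / abs(gamma_a - gamma_b)

T2 = 1.0   # s, assumed coherence time
J = 140.0  # Hz, assumed one-bond 1H-13C coupling

b_zero = zero_to_ultralow_boundary(GAMMA_HZ_PER_T["1H"], T2)
b_low = ultralow_to_low_boundary(GAMMA_HZ_PER_T["1H"], GAMMA_HZ_PER_T["13C"], J)
print(f"zero-/ultralow-field boundary ~ {b_zero:.1e} T")  # roughly 2e-8 T (tens of nT)
print(f"ultralow-/low-field boundary  ~ {b_low:.1e} T")   # roughly 4e-6 T (a few microtesla)
```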
[ { "math_id": 0, "text": "\\hat{H}" }, { "math_id": 1, "text": "\\hat{H}_z" }, { "math_id": 2, "text": "\\sigma" }, { "math_id": 3, "text": "\\hat{H}_J" }, { "math_id": 4, "text": "\\hat{H}=\\hat{H}_z+\\hat{H}_J" }, { "math_id": 5, "text": "\\hat{H}_z=-\\hbar\\sum_a\\gamma_a(1-\\sigma_a)\\hat{I}_a\\cdot B_0" }, { "math_id": 6, "text": "\\hat{H}_J=-\\hbar 2\\pi\\sum_{a>b} J_{ab}\\hat{I}_a\\cdot\\hat{I}_b" }, { "math_id": 7, "text": "\\hbar" }, { "math_id": 8, "text": "\\gamma_a" }, { "math_id": 9, "text": "\\sigma_a" }, { "math_id": 10, "text": "I_a" }, { "math_id": 11, "text": "B_0" }, { "math_id": 12, "text": "J_{ab}" }, { "math_id": 13, "text": "|B_0|" }, { "math_id": 14, "text": "\\nu_0=-\\gamma B_0/2\\pi" }, { "math_id": 15, "text": "J" } ]
https://en.wikipedia.org/wiki?curid=11801199
1180641
Stochastic gradient descent
Optimization algorithm &lt;templatestyles src="Machine learning/styles.css"/&gt; Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data). Especially in high-dimensional optimization problems this reduces the very high computational burden, achieving faster iterations in exchange for a lower convergence rate. The basic idea behind stochastic approximation can be traced back to the Robbins–Monro algorithm of the 1950s. Today, stochastic gradient descent has become an important optimization method in machine learning. Background. Both statistical estimation and machine learning consider the problem of minimizing an objective function that has the form of a sum: formula_0 where the parameter formula_1 that minimizes formula_2 is to be estimated. Each summand function formula_3 is typically associated with the formula_4-th observation in the data set (used for training). In classical statistics, sum-minimization problems arise in least squares and in maximum-likelihood estimation (for independent observations). The general class of estimators that arise as minimizers of sums are called M-estimators. However, in statistics, it has been long recognized that requiring even local minimization is too restrictive for some problems of maximum-likelihood estimation. Therefore, contemporary statistical theorists often consider stationary points of the likelihood function (or zeros of its derivative, the score function, and other estimating equations). The sum-minimization problem also arises for empirical risk minimization. There, formula_5 is the value of the loss function at formula_4-th example, and formula_2 is the empirical risk. When used to minimize the above function, a standard (or "batch") gradient descent method would perform the following iterations: formula_6 The step size is denoted by formula_7 (sometimes called the "learning rate" in machine learning) and here "formula_8" denotes the update of a variable in the algorithm. In many cases, the summand functions have a simple form that enables inexpensive evaluations of the sum-function and the sum gradient. For example, in statistics, one-parameter exponential families allow economical function-evaluations and gradient-evaluations. However, in other cases, evaluating the sum-gradient may require expensive evaluations of the gradients from all summand functions. When the training set is enormous and no simple formulas exist, evaluating the sums of gradients becomes very expensive, because evaluating the gradient requires evaluating all the summand functions' gradients. To economize on the computational cost at every iteration, stochastic gradient descent samples a subset of summand functions at every step. This is very effective in the case of large-scale machine learning problems. Iterative method. In stochastic (or "on-line") gradient descent, the true gradient of formula_2 is approximated by a gradient at a single sample: formula_9 As the algorithm sweeps through the training set, it performs the above update for each training sample. Several passes can be made over the training set until the algorithm converges. 
If this is done, the data can be shuffled for each pass to prevent cycles. Typical implementations may use an adaptive learning rate so that the algorithm converges. In pseudocode, stochastic gradient descent can be presented as: &lt;templatestyles src="Framebox/styles.css" /&gt; Choose an initial vector of parameters formula_1 and learning rate formula_7. Repeat until an approximate minimum is obtained: randomly shuffle samples in the training set; for formula_10, do: formula_11 A compromise between computing the true gradient and the gradient at a single sample is to compute the gradient against more than one training sample (called a "mini-batch") at each step. This can perform significantly better than the "true" stochastic gradient descent described above, because the code can make use of vectorization libraries rather than computing each step separately, as was first shown in the work where it was called "the bunch-mode back-propagation algorithm". It may also result in smoother convergence, as the gradient computed at each step is averaged over more training samples. The convergence of stochastic gradient descent has been analyzed using the theories of convex minimization and of stochastic approximation. Briefly, when the learning rates formula_7 decrease at an appropriate rate, and subject to relatively mild assumptions, stochastic gradient descent converges almost surely to a global minimum when the objective function is convex or pseudoconvex, and otherwise converges almost surely to a local minimum. This is in fact a consequence of the Robbins–Siegmund theorem. Example. Suppose we want to fit a straight line formula_12 to a training set with observations formula_13 and corresponding estimated responses formula_14 using least squares. The objective function to be minimized is formula_15 The last line in the above pseudocode for this specific problem will become: formula_16 Note that in each iteration or update step, the gradient is only evaluated at a single formula_17. This is the key difference between stochastic gradient descent and batch gradient descent. History. In 1951, Herbert Robbins and Sutton Monro introduced the earliest stochastic approximation methods, preceding stochastic gradient descent. Building on this work one year later, Jack Kiefer and Jacob Wolfowitz published an optimization algorithm very close to stochastic gradient descent, using central differences as an approximation of the gradient. Later in the 1950s, Frank Rosenblatt used SGD to optimize his perceptron model, demonstrating the first applicability of stochastic gradient descent to neural networks. Backpropagation was first described in 1986, with stochastic gradient descent being used to efficiently optimize parameters across neural networks with multiple hidden layers. Soon after, another improvement was developed: mini-batch gradient descent, where small batches of data are substituted for single samples. In 1997, the practical performance benefits from vectorization achievable with such small batches were first explored, paving the way for efficient optimization in machine learning. As of 2023, this mini-batch approach remains the norm for training neural networks, balancing the benefits of stochastic gradient descent with those of gradient descent. By the 1980s, momentum had already been introduced, and was added to SGD optimization techniques in 1986. However, these optimization techniques assumed constant hyperparameters, i.e. a fixed learning rate and momentum parameter. In the 2010s, adaptive approaches to applying SGD with a per-parameter learning rate were introduced with AdaGrad (for "Adaptive Gradient") in 2011 and RMSprop (for "Root Mean Square Propagation") in 2012.
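The straight-line example above can be coded in a few lines. The sketch below applies the per-sample update from the example (formula_16) to synthetic data; the learning rate, number of passes and the data themselves are illustrative choices, not values from the article.

```python
import numpy as np

def sgd_line_fit(x, y, eta=0.01, n_epochs=50, seed=0):
    """Fit y_hat = w1 + w2 * x by stochastic gradient descent on squared error.

    Implements the per-sample update from the example above:
        w1 <- w1 - eta * 2 * (w1 + w2*x_i - y_i)
        w2 <- w2 - eta * 2 * x_i * (w1 + w2*x_i - y_i)
    """
    rng = np.random.default_rng(seed)
    w1, w2 = 0.0, 0.0
    n = len(x)
    for _ in range(n_epochs):
        for i in rng.permutation(n):          # shuffle each pass to prevent cycles
            residual = w1 + w2 * x[i] - y[i]
            w1 -= eta * 2.0 * residual
            w2 -= eta * 2.0 * x[i] * residual
    return w1, w2

# Synthetic data from the line y = 1 + 3x plus a little noise.
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=200)
y = 1.0 + 3.0 * x + 0.05 * rng.normal(size=200)
print(sgd_line_fit(x, y))   # expected to be close to (1.0, 3.0)
```

Shuffling the sample order on every pass, as noted above, avoids the cyclic behavior that a fixed visiting order can produce.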
In 2014, Adam (for "Adaptive Moment Estimation") was published, applying the adaptive approaches of RMSprop to momentum; many improvements and branches of Adam were then developed such as Adadelta, Adagrad, AdamW, and Adamax. Within machine learning, approaches to optimization in 2023 are dominated by Adam-derived optimizers. TensorFlow and PyTorch, by far the most popular machine learning libraries, as of 2023 largely only include Adam-derived optimizers, as well as predecessors to Adam such as RMSprop and classic SGD. PyTorch also partially supports Limited-memory BFGS, a line-search method, but only for single-device setups without parameter groups. Notable applications. Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the back propagation algorithm, it is the "de facto" standard algorithm for training artificial neural networks. Its use has been also reported in the Geophysics community, specifically to applications of Full Waveform Inversion (FWI). Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for training linear regression models, originally under the name ADALINE. Another stochastic gradient descent algorithm is the least mean squares (LMS) adaptive filter. Extensions and variants. Many improvements on the basic stochastic gradient descent algorithm have been proposed and used. In particular, in machine learning, the need to set a learning rate (step size) has been recognized as problematic. Setting this parameter too high can cause the algorithm to diverge; setting it too low makes it slow to converge. A conceptually simple extension of stochastic gradient descent makes the learning rate a decreasing function ηt of the iteration number t, giving a "learning rate schedule", so that the first iterations cause large changes in the parameters, while the later ones do only fine-tuning. Such schedules have been known since the work of MacQueen on k-means clustering. Practical guidance on choosing the step size in several variants of SGD is given by Spall. Implicit updates (ISGD). As mentioned earlier, classical stochastic gradient descent is generally sensitive to learning rate η. Fast convergence requires large learning rates but this may induce numerical instability. The problem can be largely solved by considering "implicit updates" whereby the stochastic gradient is evaluated at the next iterate rather than the current one: formula_18 This equation is implicit since formula_19 appears on both sides of the equation. It is a stochastic form of the proximal gradient method since the update can also be written as: formula_20 As an example, consider least squares with features formula_21 and observations formula_22. We wish to solve: formula_23 where formula_24 indicates the inner product. Note that formula_25 could have "1" as the first element to include an intercept. Classical stochastic gradient descent proceeds as follows: formula_26 where formula_4 is uniformly sampled between 1 and formula_27. Although theoretical convergence of this procedure happens under relatively mild assumptions, in practice the procedure can be quite unstable. 
In particular, when formula_7 is misspecified so that formula_28 has large absolute eigenvalues with high probability, the procedure may diverge numerically within a few iterations. In contrast, "implicit stochastic gradient descent" (shortened as ISGD) can be solved in closed-form as: formula_29 This procedure will remain numerically stable virtually for all formula_7 as the learning rate is now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and normalized least mean squares filter (NLMS). Even though a closed-form solution for ISGD is only possible in least squares, the procedure can be efficiently implemented in a wide range of models. Specifically, suppose that formula_5 depends on formula_1 only through a linear combination with features formula_17, so that we can write formula_30, where formula_31 may depend on formula_32 as well but not on formula_1 except through formula_33. Least squares obeys this rule, and so does logistic regression, and most generalized linear models. For instance, in least squares, formula_34, and in logistic regression formula_35, where formula_36 is the logistic function. In Poisson regression, formula_37, and so on. In such settings, ISGD is simply implemented as follows. Let formula_38, where formula_39 is scalar. Then, ISGD is equivalent to: formula_40 The scaling factor formula_41 can be found through the bisection method since in most regular models, such as the aforementioned generalized linear models, function formula_42 is decreasing, and thus the search bounds for formula_43 are formula_44. Momentum. Further proposals include the "momentum method" or the "heavy ball method", which in ML context appeared in Rumelhart, Hinton and Williams' paper on backpropagation learning and borrowed the idea from Soviet mathematician Boris Polyak's 1964 article on solving functional equations. Stochastic gradient descent with momentum remembers the update Δ"w" at each iteration, and determines the next update as a linear combination of the gradient and the previous update: formula_45 formula_46 that leads to: formula_47 where the parameter formula_1 which minimizes formula_2 is to be estimated, formula_7 is a step size (sometimes called the "learning rate" in machine learning) and formula_48 is an exponential decay factor between 0 and 1 that determines the relative contribution of the current gradient and earlier gradients to the weight change. The name momentum stems from an analogy to momentum in physics: the weight vector formula_1, thought of as a particle traveling through parameter space, incurs acceleration from the gradient of the loss ("force"). Unlike in classical stochastic gradient descent, it tends to keep traveling in the same direction, preventing oscillations. Momentum has been used successfully by computer scientists in the training of artificial neural networks for several decades. The "momentum method" is closely related to underdamped Langevin dynamics, and may be combined with simulated annealing. In mid-1980s the method was modified by Yurii Nesterov to use the gradient predicted at the next point, and the resulting so-called "Nesterov Accelerated Gradient" was sometimes used in ML in the 2010s. Averaging. "Averaged stochastic gradient descent", invented independently by Ruppert and Polyak in the late 1980s, is ordinary stochastic gradient descent that records an average of its parameter vector over time. 
That is, the update is the same as for ordinary stochastic gradient descent, but the algorithm also keeps track of formula_49 When optimization is done, this averaged parameter vector takes the place of w. AdaGrad. "AdaGrad" (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative. Examples of such applications include natural language processing and image recognition. It still has a base learning rate η, but this is multiplied with the elements of a vector {"G""j","j"} which is the diagonal of the outer product matrix formula_50 where formula_51 is the gradient at iteration τ. The diagonal is given by formula_52 This vector essentially stores a historical sum of gradient squares by dimension and is updated after every iteration. The formula for an update is now formula_53 or, written as per-parameter updates, formula_54 Each {"G"("i","i")} gives rise to a scaling factor for the learning rate that applies to a single parameter "w""i". Since the denominator in this factor, formula_55 is the "ℓ"2 norm of previous derivatives, extreme parameter updates get dampened, while parameters that get few or small updates receive higher learning rates. While designed for convex problems, AdaGrad has been successfully applied to non-convex optimization. RMSProp. "RMSProp" (for Root Mean Square Propagation) is a method invented in 2012 by James Martens and Ilya Sutskever, at the time both PhD students in Geoffrey Hinton's group, in which the learning rate is, as in Adagrad, adapted for each of the parameters. The idea is to divide the learning rate for a weight by a running average of the magnitudes of recent gradients for that weight. Unusually, it was not published in an article but merely described in a Coursera lecture. So, first the running average is calculated in terms of the mean square, formula_56 where formula_57 is the forgetting factor. The concept of storing the historical gradient as a sum of squares is borrowed from Adagrad, but "forgetting" is introduced to solve Adagrad's diminishing learning rates in non-convex problems by gradually decreasing the influence of old data. The parameters are then updated as formula_58 RMSProp has shown good adaptation of learning rate in different applications. RMSProp can be seen as a generalization of Rprop and is capable of working with mini-batches as well, as opposed to only full batches. Adam. "Adam" (short for Adaptive Moment Estimation) is a 2014 update to the "RMSProp" optimizer combining it with the main feature of the "Momentum method". In this optimization algorithm, running averages with exponential forgetting of both the gradients and the second moments of the gradients are used. Given parameters formula_59 and a loss function formula_60, where formula_61 indexes the current training iteration (indexed at formula_62), Adam's parameter update is given by: formula_63 formula_64 formula_65 formula_66 formula_67 where formula_68 is a small scalar (e.g. formula_69) used to prevent division by 0, and formula_70 (e.g. 0.9) and formula_71 (e.g. 0.999) are the forgetting factors for gradients and second moments of gradients, respectively. Squaring and square-rooting is done element-wise.
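The Adam update given above can be implemented directly. The sketch below follows the five update equations as written (moment accumulation, bias correction, scaled step); the quadratic test problem and the gradient noise level are illustrative assumptions, and while β1, β2 and ε use the commonly quoted default values, a larger step size is chosen so the toy example converges quickly.

```python
import numpy as np

def adam(grad, w0, eta=0.01, beta1=0.9, beta2=0.999, eps=1e-8, n_steps=2000):
    """Adam update as given above: exponentially forgotten first and second
    moments of the gradient, bias-corrected, then a scaled parameter step."""
    w = np.asarray(w0, dtype=float).copy()
    m = np.zeros_like(w)                      # first moment estimate
    v = np.zeros_like(w)                      # second moment estimate
    for t in range(1, n_steps + 1):
        g = grad(w, t)
        m = beta1 * m + (1.0 - beta1) * g
        v = beta2 * v + (1.0 - beta2) * g**2
        m_hat = m / (1.0 - beta1**t)          # bias correction
        v_hat = v / (1.0 - beta2**t)
        w -= eta * m_hat / (np.sqrt(v_hat) + eps)
    return w

# Illustrative test: minimize Q(w) = 0.5 * ||w - target||^2 from noisy gradients.
rng = np.random.default_rng(0)
target = np.array([3.0, -2.0])

def noisy_grad(w, t):
    return (w - target) + 0.1 * rng.normal(size=w.shape)

print(adam(noisy_grad, w0=[0.0, 0.0]))   # expected to be close to [3, -2]
```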
The initial proof establishing the convergence of Adam was incomplete, and subsequent analysis has revealed that Adam does not converge for all convex objectives. Despite this, "Adam" continues to be used in practice due to its strong performance in practice. Variants. The popularity of "Adam" inspired many variants and enhancements. Some examples include: Sign-based stochastic gradient descent. Even though sign-based optimization goes back to the aforementioned "Rprop", in 2018 researchers tried to simplify Adam by removing the magnitude of the stochastic gradient from being taken into account and only considering its sign. Backtracking line search. Backtracking line search is another variant of gradient descent. All of the below are sourced from the mentioned link. It is based on a condition known as the Armijo–Goldstein condition. Both methods allow learning rates to change at each iteration; however, the manner of the change is different. Backtracking line search uses function evaluations to check Armijo's condition, and in principle the loop in the algorithm for determining the learning rates can be long and unknown in advance. Adaptive SGD does not need a loop in determining learning rates. On the other hand, adaptive SGD does not guarantee the "descent property" – which Backtracking line search enjoys – which is that formula_72 for all n. If the gradient of the cost function is globally Lipschitz continuous, with Lipschitz constant L, and learning rate is chosen of the order 1/L, then the standard version of SGD is a special case of backtracking line search. Second-order methods. A stochastic analogue of the standard (deterministic) Newton–Raphson algorithm (a "second-order" method) provides an asymptotically optimal or near-optimal form of iterative optimization in the setting of stochastic approximation. A method that uses direct measurements of the Hessian matrices of the summands in the empirical risk function was developed by Byrd, Hansen, Nocedal, and Singer. However, directly determining the required Hessian matrices for optimization may not be possible in practice. Practical and theoretically sound methods for second-order versions of SGD that do not require direct Hessian information are given by Spall and others. (A less efficient method based on finite differences, instead of simultaneous perturbations, is given by Ruppert.) Another approach to the approximation Hessian matrix is replacing it with the Fisher information matrix, which transforms usual gradient to natural. These methods not requiring direct Hessian information are based on either values of the summands in the above empirical risk function or values of the gradients of the summands (i.e., the SGD inputs). In particular, second-order optimality is asymptotically achievable without direct calculation of the Hessian matrices of the summands in the empirical risk function. When the objective is a nonlinear least-squres loss formula_73 where formula_74 is the predictive model (e.g., a deep neural network) the objective's structure can be exploited to estimate 2nd order information using gradients only. The resulting methods are simple and often effective Approximations in continuous time. For small learning rate formula_75 stochastic gradient descent formula_76 can be viewed as a discretization of the gradient flow ODE formula_77 subject to additional stochastic noise. 
This approximation is only valid on a finite time-horizon in the following sense: assume that all the coefficients formula_78 are sufficiently smooth. Let formula_79 and formula_80 be a sufficiently smooth test function. Then, there exists a constant formula_81 such that for all formula_82 formula_83 where formula_84 denotes taking the expectation with respect to the random choice of indices in the stochastic gradient descent scheme. Since this approximation does not capture the random fluctuations around the mean behavior of stochastic gradient descent solutions to stochastic differential equations (SDEs) have been proposed as limiting objects. More precisely, the solution to the SDE formula_85 for formula_86 where formula_87 denotes the Ito-integral with respect to a Brownian motion is a more precise approximation in the sense that there exists a constant formula_81 such that formula_88 However this SDE only approximates the one-point motion of stochastic gradient descent. For an approximation of the stochastic flow one has to consider SDEs with infinite-dimensional noise. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
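The finite-horizon approximation above can be checked on a toy objective. In the sketch below, Q(w) is an average of quadratics Q_i(w) = (w − a_i)²/2, for which the gradient-flow ODE has the exact solution W_t = ā + (w_0 − ā)e^(−t), with ā the mean of the a_i; comparing the SGD iterate at step k with W_{kη} shows the two stay close over the horizon, up to the stochastic fluctuations that the SDE description is meant to capture. The data, learning rate and horizon are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(loc=2.0, scale=1.0, size=100)   # data defining Q_i(w) = (w - a_i)^2 / 2
a_bar = a.mean()

eta = 0.01          # learning rate (step size of the discretization)
n_steps = 400       # time horizon T = n_steps * eta = 4
w = -1.0            # initial parameter
sgd_path = [w]
for _ in range(n_steps):
    i = rng.integers(len(a))        # pick a random summand Q_i
    w -= eta * (w - a[i])           # SGD step with its gradient
    sgd_path.append(w)

t = eta * np.arange(n_steps + 1)
flow_path = a_bar + (sgd_path[0] - a_bar) * np.exp(-t)   # exact gradient-flow solution

gap = np.max(np.abs(np.array(sgd_path) - flow_path))
print(f"final SGD iterate {sgd_path[-1]:.3f}, gradient flow {flow_path[-1]:.3f}, max gap {gap:.3f}")
```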
[ { "math_id": 0, "text": "Q(w) = \\frac{1}{n}\\sum_{i=1}^n Q_i(w)," }, { "math_id": 1, "text": "w" }, { "math_id": 2, "text": "Q(w)" }, { "math_id": 3, "text": "Q_i" }, { "math_id": 4, "text": "i" }, { "math_id": 5, "text": "Q_i(w)" }, { "math_id": 6, "text": "w := w - \\eta\\,\\nabla Q(w) = w - \\frac{\\eta}{n} \\sum_{i=1}^n \\nabla Q_i(w)." }, { "math_id": 7, "text": "\\eta" }, { "math_id": 8, "text": ":=" }, { "math_id": 9, "text": "w := w - \\eta\\, \\nabla Q_i(w)." }, { "math_id": 10, "text": " i=1, 2, ..., n" }, { "math_id": 11, "text": " w := w - \\eta\\, \\nabla Q_i(w)." }, { "math_id": 12, "text": "\\hat y = w_1 + w_2 x" }, { "math_id": 13, "text": "((x_1, y_1), (x_2, y_2) \\ldots, (x_n, y_n))" }, { "math_id": 14, "text": "(\\hat y_1, \\hat y_2, \\ldots, \\hat y_n)" }, { "math_id": 15, "text": "Q(w) = \\sum_{i=1}^n Q_i(w) = \\sum_{i=1}^n \\left(\\hat y_i - y_i\\right)^2 = \\sum_{i=1}^n \\left(w_1 + w_2 x_i - y_i\\right)^2." }, { "math_id": 16, "text": "\\begin{bmatrix} w_1 \\\\ w_2 \\end{bmatrix} :=\n \\begin{bmatrix} w_1 \\\\ w_2 \\end{bmatrix}\n - \\eta \\begin{bmatrix} \\frac{\\partial}{\\partial w_1} (w_1 + w_2 x_i - y_i)^2 \\\\\n \\frac{\\partial}{\\partial w_2} (w_1 + w_2 x_i - y_i)^2 \\end{bmatrix} =\n \\begin{bmatrix} w_1 \\\\ w_2 \\end{bmatrix}\n - \\eta \\begin{bmatrix} 2 (w_1 + w_2 x_i - y_i) \\\\ 2 x_i(w_1 + w_2 x_i - y_i) \\end{bmatrix}." }, { "math_id": 17, "text": "x_i" }, { "math_id": 18, "text": "w^\\text{new} := w^\\text{old} - \\eta\\, \\nabla Q_i(w^{\\rm new})." }, { "math_id": 19, "text": "w^{\\rm new}" }, { "math_id": 20, "text": "w^\\text{new} := \\arg\\min_w \\left\\{ Q_i(w) + \\frac{1}{2\\eta} \\left\\|w - w^\\text{old}\\right\\|^2 \\right\\}." }, { "math_id": 21, "text": "x_1, \\ldots, x_n \\in\\mathbb{R}^p" }, { "math_id": 22, "text": "y_1, \\ldots, y_n\\in\\mathbb{R}" }, { "math_id": 23, "text": "\\min_w \\sum_{j=1}^n \\left(y_j - x_j'w\\right)^2," }, { "math_id": 24, "text": "x_j' w = x_{j1} w_1 + x_{j, 2} w_2 + ... + x_{j,p} w_p" }, { "math_id": 25, "text": "x" }, { "math_id": 26, "text": "w^\\text{new} = w^\\text{old} + \\eta \\left(y_i - x_i'w^\\text{old}\\right) x_i" }, { "math_id": 27, "text": "n" }, { "math_id": 28, "text": "I - \\eta x_i x_i'" }, { "math_id": 29, "text": "w^\\text{new} = w^\\text{old} + \\frac{\\eta}{1 + \\eta \\left\\|x_i\\right\\|^2} \\left(y_i - x_i'w^\\text{old}\\right) x_i." }, { "math_id": 30, "text": "\\nabla_w Q_i(w) = -q(x_i'w) x_i" }, { "math_id": 31, "text": "q() \\in\\mathbb{R}" }, { "math_id": 32, "text": "x_i, y_i" }, { "math_id": 33, "text": "x_i'w" }, { "math_id": 34, "text": "q(x_i'w) = y_i - x_i'w" }, { "math_id": 35, "text": "q(x_i'w) = y_i - S(x_i'w)" }, { "math_id": 36, "text": "S(u) = e^u/(1+e^u)" }, { "math_id": 37, "text": "q(x_i'w) = y_i - e^{x_i'w}" }, { "math_id": 38, "text": "f(\\xi) = \\eta q(x_i'w^{old} + \\xi \\|x_i\\|^2)" }, { "math_id": 39, "text": "\\xi" }, { "math_id": 40, "text": "w^\\text{new} = w^\\text{old} + \\xi^\\ast x_i,~\\text{where}~\\xi^\\ast = f(\\xi^\\ast)." }, { "math_id": 41, "text": "\\xi^\\ast\\in\\mathbb{R}" }, { "math_id": 42, "text": "q()" }, { "math_id": 43, "text": "\\xi^\\ast" }, { "math_id": 44, "text": "[\\min(0, f(0)), \\max(0, f(0))]" }, { "math_id": 45, "text": "\\Delta w := \\alpha \\Delta w - \\eta\\, \\nabla Q_i(w)" }, { "math_id": 46, "text": "w := w + \\Delta w " }, { "math_id": 47, "text": "w := w - \\eta\\, \\nabla Q_i(w) + \\alpha \\Delta w " }, { "math_id": 48, "text": "\\alpha" }, { "math_id": 49, "text": "\\bar{w} = \\frac{1}{t} \\sum_{i=0}^{t-1} w_i." 
}, { "math_id": 50, "text": "G = \\sum_{\\tau=1}^t g_\\tau g_\\tau^\\mathsf{T}" }, { "math_id": 51, "text": "g_\\tau = \\nabla Q_i(w)" }, { "math_id": 52, "text": "G_{j,j} = \\sum_{\\tau=1}^t g_{\\tau,j}^2." }, { "math_id": 53, "text": "w := w - \\eta\\, \\mathrm{diag}(G)^{-\\frac{1}{2}} \\odot g" }, { "math_id": 54, "text": "w_j := w_j - \\frac{\\eta}{\\sqrt{G_{j,j}}} g_j." }, { "math_id": 55, "text": "\\sqrt{G_i} = \\sqrt{\\sum_{\\tau=1}^t g_\\tau^2}" }, { "math_id": 56, "text": "v(w,t):=\\gamma v(w,t-1) + \\left(1-\\gamma\\right) \\left(\\nabla Q_i(w)\\right)^2" }, { "math_id": 57, "text": "\\gamma" }, { "math_id": 58, "text": "w:=w-\\frac{\\eta}{\\sqrt{v(w,t)}}\\nabla Q_i(w)" }, { "math_id": 59, "text": " w^ {(t)} " }, { "math_id": 60, "text": " L ^ {(t)} " }, { "math_id": 61, "text": " t " }, { "math_id": 62, "text": " 0 " }, { "math_id": 63, "text": "m_w ^ {(t+1)} \\leftarrow \\beta_1 m_w ^ {(t)} + \\left(1 - \\beta_1\\right) \\nabla _w L ^ {(t)} " }, { "math_id": 64, "text": "v_w ^ {(t+1)} \\leftarrow \\beta_2 v_w ^ {(t)} + \\left(1 - \\beta_2\\right) \\left(\\nabla _w L ^ {(t)} \\right)^2 " }, { "math_id": 65, "text": "\\hat{m}_w = \\frac{m_w ^ {(t+1)}}{1 - \\beta_1^t} " }, { "math_id": 66, "text": "\\hat{v}_w = \\frac{ v_w ^ {(t+1)}}{1 - \\beta_2^t} " }, { "math_id": 67, "text": "w ^ {(t+1)} \\leftarrow w ^ {(t)} - \\eta \\frac{\\hat{m}_w}{\\sqrt{\\hat{v}_w} + \\epsilon} " }, { "math_id": 68, "text": "\\epsilon" }, { "math_id": 69, "text": "10^{-8}" }, { "math_id": 70, "text": "\\beta_1" }, { "math_id": 71, "text": "\\beta_2" }, { "math_id": 72, "text": "f(x_{n+1})\\leq f(x_n)" }, { "math_id": 73, "text": " Q(w) = \\frac{1}{n} \\sum_{i=1}^n Q_i(w) = \\frac{1}{n} \\sum_{i=1}^n (m(w;x_i)-y_i)^2, " }, { "math_id": 74, "text": "m(w;x_i)" }, { "math_id": 75, "text": "\\eta" }, { "math_id": 76, "text": "(w_n)_{n \\in \\N_0}" }, { "math_id": 77, "text": "\\frac{d}{dt} W_t = -\\nabla Q(W_t)" }, { "math_id": 78, "text": "Q_i " }, { "math_id": 79, "text": "T >0 " }, { "math_id": 80, "text": "g: \\R^d \\to \\R " }, { "math_id": 81, "text": "C>0 " }, { "math_id": 82, "text": "\\eta >0 " }, { "math_id": 83, "text": "\\max_{k=0, \\dots, \\lfloor T/\\eta \\rfloor } \\left|\\mathbb E[g(w_k)]-g(W_{k \\eta})\\right| \\le C \\eta," }, { "math_id": 84, "text": "\\mathbb E " }, { "math_id": 85, "text": "d W_t = - \\nabla \\left(Q(W_t)+\\tfrac 1 4 \\eta |\\nabla Q(W_t)|^2\\right)dt + \\sqrt \\eta \\Sigma (W_t)^{1/2} dB_t," }, { "math_id": 86, "text": "\\Sigma(w) = \\frac{1}{n^2} \\left(\\sum_{i=1}^n Q_i(w)-Q(w)\\right)\\left(\\sum_{i=1}^n Q_i(w)-Q(w)\\right)^T " }, { "math_id": 87, "text": "dB_t " }, { "math_id": 88, "text": "\\max_{k=0, \\dots, \\lfloor T/\\eta \\rfloor } \\left|\\mathbb E[g(w_k)]-\\mathbb E [g(W_{k \\eta})]\\right| \\le C \\eta^2." } ]
https://en.wikipedia.org/wiki?curid=1180641
11807
Ferromagnetism
Mechanism by which materials form into and are attracted to magnets Ferromagnetism is a property of certain materials (such as iron) that results in a significant, observable magnetic permeability, and in many cases, a significant magnetic coercivity, allowing the material to form a permanent magnet. Ferromagnetic materials are noticeably attracted to a magnet, which is a consequence of their substantial magnetic permeability. Magnetic permeability describes the induced magnetization of a material due to the presence of an external magnetic field. For example, this temporary magnetization inside a steel plate accounts for the plate's attraction to a magnet. Whether or not that steel plate then acquires permanent magnetization depends on both the strength of the applied field and on the coercivity of that particular piece of steel (which varies with the steel's chemical composition and any heat treatment it may have undergone). In physics, multiple types of material magnetism have been distinguished. Ferromagnetism (along with the similar effect ferrimagnetism) is the strongest type and is responsible for the common phenomenon of everyday magnetism. An example of a permanent magnet formed from a ferromagnetic material is a refrigerator magnet. Substances respond weakly to three other types of magnetism—paramagnetism, diamagnetism, and antiferromagnetism—but the forces are usually so weak that they can be detected only by lab instruments. Permanent magnets (materials that can be magnetized by an external magnetic field and remain magnetized after the external field is removed) are either ferromagnetic or ferrimagnetic, as are the materials that are attracted to them. Relatively few materials are ferromagnetic. They are typically pure forms, alloys, or compounds of iron, cobalt, nickel, and certain rare-earth metals. Ferromagnetism is vital in industrial applications and modern technologies, forming the basis for electrical and electromechanical devices such as electromagnets, electric motors, generators, transformers, magnetic storage (including tape recorders and hard disks), and nondestructive testing of ferrous materials. Ferromagnetic materials can be divided into magnetically soft materials (like annealed iron), which do not tend to stay magnetized, and magnetically hard materials, which do. Permanent magnets are made from hard ferromagnetic materials (such as alnico) and ferrimagnetic materials (such as ferrite) that are subjected to special processing in a strong magnetic field during manufacturing to align their internal microcrystalline structure, making them difficult to demagnetize. To demagnetize a saturated magnet, a magnetic field must be applied. The threshold at which demagnetization occurs depends on the coercivity of the material. Magnetically hard materials have high coercivity, whereas magnetically soft materials have low coercivity. The overall strength of a magnet is measured by its magnetic moment or, alternatively, its total magnetic flux. The local strength of magnetism in a material is measured by its magnetization. Terms. Historically, the term "ferromagnetism" was used for any material that could exhibit spontaneous magnetization: a net magnetic moment in the absence of an external magnetic field; that is, any material that could become a magnet. This definition is still in common use. In a landmark paper in 1948, Louis Néel showed that two levels of magnetic alignment result in this behavior. 
One is ferromagnetism in the strict sense, where all the magnetic moments are aligned. The other is "ferrimagnetism", where some magnetic moments point in the opposite direction but have a smaller contribution, so spontaneous magnetization is present. In the special case where the opposing moments balance completely, the alignment is known as "antiferromagnetism"; antiferromagnets do not have a spontaneous magnetization. Materials. Ferromagnetism is an unusual property that occurs in only a few substances. The common ones are the transition metals iron, nickel, and cobalt, as well as their alloys and alloys of rare-earth metals. It is a property not just of the chemical make-up of a material, but of its crystalline structure and microstructure. Ferromagnetism results from these materials having many unpaired electrons in their d-block (in the case of iron and its relatives) or f-block (in the case of the rare-earth metals), a result of Hund's rule of maximum multiplicity. There are ferromagnetic metal alloys whose constituents are not themselves ferromagnetic, called Heusler alloys, named after Fritz Heusler. Conversely, there are non-magnetic alloys, such as types of stainless steel, composed almost exclusively of ferromagnetic metals. Amorphous (non-crystalline) ferromagnetic metallic alloys can be made by very rapid quenching (cooling) of an alloy. These have the advantage that their properties are nearly isotropic (not aligned along a crystal axis); this results in low coercivity, low hysteresis loss, high permeability, and high electrical resistivity. One such typical material is a transition metal-metalloid alloy, made from about 80% transition metal (usually Fe, Co, or Ni) and a metalloid component (B, C, Si, P, or Al) that lowers the melting point. A relatively new class of exceptionally strong ferromagnetic materials is the rare-earth magnets. They contain lanthanide elements that are known for their ability to carry large magnetic moments in well-localized f-orbitals. The table lists a selection of ferromagnetic and ferrimagnetic compounds, along with their Curie temperature ("T"C), above which they cease to exhibit spontaneous magnetization. Unusual materials. Most ferromagnetic materials are metals, since the conducting electrons are often responsible for mediating the ferromagnetic interactions. It is therefore a challenge to develop ferromagnetic insulators, especially multiferroic materials, which are both ferromagnetic and ferroelectric. A number of actinide compounds are ferromagnets at room temperature or exhibit ferromagnetism upon cooling. PuP is a paramagnet with cubic symmetry at room temperature that undergoes a structural transition into a tetragonal state with ferromagnetic order when cooled below its "T"C = 125 K. In its ferromagnetic state, PuP's easy axis is in the ⟨100⟩ direction. In NpFe2 the easy axis is ⟨111⟩. Above "T"C ≈ 500 K, NpFe2 is also paramagnetic and cubic. Cooling below the Curie temperature produces a rhombohedral distortion wherein the rhombohedral angle changes from 60° (cubic phase) to 60.53°. An alternate description of this distortion is to consider the length "c" along the unique trigonal axis (after the distortion has begun) and "a" as the distance in the plane perpendicular to "c". In the cubic phase this reduces to "c"/"a" = 1.00. Below the Curie temperature formula_0 which is the largest strain in any actinide compound. NpNi2 undergoes a similar lattice distortion below "T"C = 32 K, with a strain of (43 ± 5) × 10−4.
NpCo2 is a ferrimagnet below 15 K. In 2009, a team of MIT physicists demonstrated that a lithium gas cooled to less than one kelvin can exhibit ferromagnetism. The team cooled fermionic lithium-6 to less than 150 nK (150 billionths of one kelvin) using infrared laser cooling. This demonstration is the first time that ferromagnetism has been demonstrated in a gas. In rare circumstances, ferromagnetism can be observed in compounds consisting of only s-block and p-block elements, such as rubidium sesquioxide. In 2018, a team of University of Minnesota physicists demonstrated that body-centered tetragonal ruthenium exhibits ferromagnetism at room temperature. Electrically induced ferromagnetism. Recent research has shown evidence that ferromagnetism can be induced in some materials by an electric current or voltage. Antiferromagnetic LaMnO3 and SrCoO have been switched to be ferromagnetic by a current. In July 2020, scientists reported inducing ferromagnetism in the abundant diamagnetic material iron pyrite ("fool's gold") by an applied voltage. In these experiments, the ferromagnetism was limited to a thin surface layer. Explanation. The Bohr–Van Leeuwen theorem, discovered in the 1910s, showed that classical physics theories are unable to account for any form of material magnetism, including ferromagnetism; the explanation rather depends on the quantum mechanical description of atoms. Each of an atom's electrons has a magnetic moment according to its spin state, as described by quantum mechanics. The Pauli exclusion principle, also a consequence of quantum mechanics, restricts the occupancy of electrons' spin states in atomic orbitals, generally causing the magnetic moments from an atom's electrons to largely or completely cancel. An atom will have a "net" magnetic moment when that cancellation is incomplete. Origin of atomic magnetism. One of the fundamental properties of an electron (besides that it carries charge) is that it has a magnetic dipole moment, i.e., it behaves like a tiny magnet, producing a magnetic field. This dipole moment comes from a more fundamental property of the electron: its quantum mechanical spin. Due to its quantum nature, the spin of the electron can be in one of only two states, with the magnetic field either pointing "up" or "down" (for any choice of up and down). Electron spin in atoms is the main source of ferromagnetism, although there is also a contribution from the orbital angular momentum of the electron about the nucleus. When these magnetic dipoles in a piece of matter are aligned (point in the same direction), their individually tiny magnetic fields add together to create a much larger macroscopic field. However, materials made of atoms with filled electron shells have a total dipole moment of zero: because the electrons all exist in pairs with opposite spin, every electron's magnetic moment is cancelled by the opposite moment of the second electron in the pair. Only atoms with partially filled shells (i.e., unpaired spins) can have a net magnetic moment, so ferromagnetism occurs only in materials with partially filled shells. Because of Hund's rules, the first few electrons in an otherwise unoccupied shell tend to have the same spin, thereby increasing the total dipole moment. These unpaired dipoles (often called simply "spins", even though they also generally include orbital angular momentum) tend to align in parallel to an external magnetic field – leading to a macroscopic effect called paramagnetism. 
In ferromagnetism, however, the magnetic interaction between neighboring atoms' magnetic dipoles is strong enough that they align with "each other" regardless of any applied field, resulting in the spontaneous magnetization of so-called domains. This results in the large observed magnetic permeability of ferromagnetics, and the ability of magnetically hard materials to form permanent magnets. Exchange interaction. When two nearby atoms have unpaired electrons, whether the electron spins are parallel or antiparallel affects whether the electrons can share the same orbit as a result of the quantum mechanical effect called the exchange interaction. This in turn affects the electron location and the Coulomb (electrostatic) interaction and thus the energy difference between these states. The exchange interaction is related to the Pauli exclusion principle, which says that two electrons with the same spin cannot also be in the same spatial state (orbital). This is a consequence of the spin–statistics theorem and that electrons are fermions. Therefore, under certain conditions, when the orbitals of the unpaired outer valence electrons from adjacent atoms overlap, the distributions of their electric charge in space are farther apart when the electrons have parallel spins than when they have opposite spins. This reduces the electrostatic energy of the electrons when their spins are parallel compared to their energy when the spins are antiparallel, so the parallel-spin state is more stable. This difference in energy is called the exchange energy. In simple terms, the outer electrons of adjacent atoms, which repel each other, can move further apart by aligning their spins in parallel, so the spins of these electrons tend to line up. This energy difference can be orders of magnitude larger than the energy differences associated with the magnetic dipole–dipole interaction due to dipole orientation, which tends to align the dipoles antiparallel. In certain doped semiconductor oxides, RKKY interactions have been shown to bring about periodic longer-range magnetic interactions, a phenomenon of significance in the study of spintronic materials. The materials in which the exchange interaction is much stronger than the competing dipole–dipole interaction are frequently called "magnetic materials". For instance, in iron (Fe) the exchange force is about 1,000 times stronger than the dipole interaction. Therefore, below the Curie temperature, virtually all of the dipoles in a ferromagnetic material will be aligned. In addition to ferromagnetism, the exchange interaction is also responsible for the other types of spontaneous ordering of atomic magnetic moments occurring in magnetic solids: antiferromagnetism and ferrimagnetism. There are different exchange interaction mechanisms which create the magnetism in different ferromagnetic, ferrimagnetic, and antiferromagnetic substances—these mechanisms include direct exchange, RKKY exchange, double exchange, and superexchange. Magnetic anisotropy. Although the exchange interaction keeps spins aligned, it does not align them in a particular direction. Without magnetic anisotropy, the spins in a magnet randomly change direction in response to thermal fluctuations, and the magnet is superparamagnetic. There are several kinds of magnetic anisotropy, the most common of which is magnetocrystalline anisotropy. This is a dependence of the energy on the direction of magnetization relative to the crystallographic lattice. 
Another common source of anisotropy, inverse magnetostriction, is induced by internal strains. Single-domain magnets also can have a "shape anisotropy" due to the magnetostatic effects of the particle shape. As the temperature of a magnet increases, the anisotropy tends to decrease, and there is often a blocking temperature at which a transition to superparamagnetism occurs. Magnetic domains. The spontaneous alignment of magnetic dipoles in ferromagnetic materials would seem to suggest that every piece of ferromagnetic material should have a strong magnetic field, since all the spins are aligned; yet iron and other ferromagnets are often found in an "unmagnetized" state. This is because a bulk piece of ferromagnetic material is divided into tiny regions called "magnetic domains" (also known as "Weiss domains"). Within each domain, the spins are aligned, but if the bulk material is in its lowest energy configuration (i.e. "unmagnetized"), the spins of separate domains point in different directions and their magnetic fields cancel out, so the bulk material has no net large-scale magnetic field. Ferromagnetic materials spontaneously divide into magnetic domains because the exchange interaction is a short-range force, so over long distances of many atoms, the tendency of the magnetic dipoles to reduce their energy by orienting in opposite directions wins out. If all the dipoles in a piece of ferromagnetic material are aligned parallel, it creates a large magnetic field extending into the space around it. This contains a lot of magnetostatic energy. The material can reduce this energy by splitting into many domains pointing in different directions, so the magnetic field is confined to small local fields in the material, reducing the volume of the field. The domains are separated by thin domain walls a number of molecules thick, in which the direction of magnetization of the dipoles rotates smoothly from one domain's direction to the other. Magnetized materials. Thus, a piece of iron in its lowest energy state ("unmagnetized") generally has little or no net magnetic field. However, the magnetic domains in a material are not fixed in place; they are simply regions where the spins of the electrons have aligned spontaneously due to their magnetic fields, and thus can be altered by an external magnetic field. If a strong-enough external magnetic field is applied to the material, the domain walls will move via a process in which the spins of the electrons in atoms near the wall in one domain turn under the influence of the external field to face in the same direction as the electrons in the other domain, thus reorienting the domains so more of the dipoles are aligned with the external field. The domains will remain aligned when the external field is removed, and sum to create a magnetic field of their own extending into the space around the material, thus creating a "permanent" magnet. The domains do not go back to their original minimum energy configuration when the field is removed because the domain walls tend to become 'pinned' or 'snagged' on defects in the crystal lattice, preserving their parallel orientation. This is shown by the Barkhausen effect: as the magnetizing field is changed, the material's magnetization changes in thousands of tiny discontinuous jumps as domain walls suddenly "snap" past defects. This magnetization as a function of an external field is described by a hysteresis curve. 
Although this state of aligned domains found in a piece of magnetized ferromagnetic material is not a minimal-energy configuration, it is metastable, and can persist for long periods, as shown by samples of magnetite from the sea floor which have maintained their magnetization for millions of years. Heating and then cooling (annealing) a magnetized material, subjecting it to vibration by hammering it, or applying a rapidly oscillating magnetic field from a degaussing coil tends to release the domain walls from their pinned state, and the domain boundaries tend to move back to a lower energy configuration with less external magnetic field, thus demagnetizing the material. Commercial magnets are made of "hard" ferromagnetic or ferrimagnetic materials with very large magnetic anisotropy such as alnico and ferrites, which have a very strong tendency for the magnetization to be pointed along one axis of the crystal, the "easy axis". During manufacture the materials are subjected to various metallurgical processes in a powerful magnetic field, which aligns the crystal grains so their "easy" axes of magnetization all point in the same direction. Thus, the magnetization, and the resulting magnetic field, is "built in" to the crystal structure of the material, making it very difficult to demagnetize. Curie temperature. As the temperature of a material increases, thermal motion, or entropy, competes with the ferromagnetic tendency for dipoles to align. When the temperature rises beyond a certain point, called the Curie temperature, there is a second-order phase transition and the system can no longer maintain a spontaneous magnetization, so its ability to be magnetized or attracted to a magnet disappears, although it still responds paramagnetically to an external field. Below that temperature, there is a spontaneous symmetry breaking and magnetic moments become aligned with their neighbors. The Curie temperature itself is a critical point, where the magnetic susceptibility is theoretically infinite and, although there is no net magnetization, domain-like spin correlations fluctuate at all length scales. The study of ferromagnetic phase transitions, especially via the simplified Ising spin model, had an important impact on the development of statistical physics. There, it was first clearly shown that mean field theory approaches failed to predict the correct behavior at the critical point (which was found to fall under a "universality class" that includes many other systems, such as liquid-gas transitions), and had to be replaced by renormalization group theory. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
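The Ising model mentioned above in connection with the ferromagnetic phase transition can be made concrete with a small simulation. The following is an illustrative sketch rather than anything taken from the article: a two-dimensional Ising lattice updated with the Metropolis algorithm, whose average magnetization is large at low temperature and collapses above the exactly known critical temperature of about 2.269 (in units of J/kB). The lattice size, sweep counts, and temperature grid are arbitrary choices.

```python
# Illustrative sketch (not from the article): a small 2D Ising model with
# Metropolis updates, showing spontaneous magnetization below the critical
# temperature and its loss above it.  Lattice size, sweep counts, and the
# temperature grid are arbitrary choices for a quick demonstration.
import numpy as np

rng = np.random.default_rng(0)

def sweep(spins, beta):
    """One Metropolis sweep over an L x L lattice with periodic boundaries."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn          # energy change if this spin is flipped (J = 1)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

def magnetization(T, L=16, equil=400, samples=400):
    spins = np.ones((L, L), dtype=int)
    beta = 1.0 / T
    for _ in range(equil):
        sweep(spins, beta)
    m = 0.0
    for _ in range(samples):
        sweep(spins, beta)
        m += abs(spins.mean())
    return m / samples

for T in (1.5, 2.0, 2.27, 3.0):              # exact critical point is ~2.269 (J/k_B)
    print(f"T = {T:4.2f}  <|m|> ~ {magnetization(T):.3f}")
```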
[ { "math_id": 0, "text": "\\frac{c}{a} - 1 = -(120 \\pm 5) \\times 10^{-4}," } ]
https://en.wikipedia.org/wiki?curid=11807
1181004
Knight shift
The Knight shift is a shift in the nuclear magnetic resonance (NMR) frequency of a paramagnetic substance first published in 1949 by the UC Berkeley physicist Walter D. Knight. For an ensemble of "N" spins in a magnetic induction field formula_0, the nuclear Hamiltonian for the Knight shift is expressed in Cartesian form by: formula_1, where for the "i"th spin formula_2 is the gyromagnetic ratio, formula_3 is a vector of the Cartesian nuclear angular momentum operators, the formula_4 matrix is a second-rank tensor similar to the chemical shift shielding tensor. The Knight shift refers to the relative shift "K" in NMR frequency for atoms in a metal (e.g. sodium) compared with the same atoms in a nonmetallic environment (e.g. sodium chloride). The observed shift reflects the local magnetic field produced at the sodium nucleus by the magnetization of the conduction electrons. The average local field in sodium augments the applied resonance field by approximately one part per 1000. In nonmetallic sodium chloride the local field is negligible in comparison. The Knight shift is due to the conduction electrons in metals. They introduce an "extra" effective field at the nuclear site, due to the spin orientations of the conduction electrons in the presence of an external field. This is responsible for the shift observed in the nuclear magnetic resonance. The shift comes from two sources, one is the Pauli paramagnetic spin susceptibility, the other is the s-component wavefunctions at the nucleus. Depending on the electronic structure, the Knight shift may be temperature dependent. However, in metals which normally have a broad featureless electronic density of states, Knight shifts are temperature independent. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
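As a rough numerical illustration of the size of the effect described above, the sketch below shifts a bare Larmor frequency by an isotropic Knight shift of one part per 1000, mirroring the figure quoted for metallic sodium. The 23Na gyromagnetic ratio, the applied field, and the reduction of the shift tensor to a single scalar K are assumptions made for this example, not values given in the article.

```python
# Illustrative sketch (assumptions flagged): the observed NMR frequency with an
# isotropic Knight shift K is taken as nu = nu_0 * (1 + K), where nu_0 is the
# bare Larmor frequency gamma/(2*pi) * B.  The 23Na gyromagnetic ratio below is
# an approximate literature value, and K = 0.1% mirrors the "one part per 1000"
# figure quoted above for metallic sodium.
GAMMA_OVER_2PI_NA23 = 11.262e6   # Hz per tesla, approximate value for 23Na
B = 9.4                          # applied field in tesla (arbitrary choice)
K = 1.0e-3                       # isotropic Knight shift, about 0.1 %

nu0 = GAMMA_OVER_2PI_NA23 * B    # resonance in a nonmetallic reference (e.g. NaCl)
nu = nu0 * (1.0 + K)             # shifted resonance in the metal
print(f"reference frequency : {nu0 / 1e6:10.4f} MHz")
print(f"metal frequency     : {nu / 1e6:10.4f} MHz")
print(f"Knight shift        : {(nu - nu0) / nu0 * 1e6:8.1f} ppm")
```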
[ { "math_id": 0, "text": "\\vec{B}" }, { "math_id": 1, "text": "{{\\hat{\\mathcal{H}}}_{\\text{KS}}}=-\\sum\\limits_{\\mathit{i}}^{{N}}{{{\\gamma }_{\\mathit{i}}}\\cdot {{{\\hat{\\vec{I}}}}_{\\mathit{i}}}\\cdot {{{\\hat{\\mathbf{K}}}}_{\\mathit{i}}}\\cdot \\vec{B}}" }, { "math_id": 2, "text": "{\\gamma }_{\\mathit{i}}" }, { "math_id": 3, "text": "{{{\\hat{\\vec{I}}}}_{\\mathit{i}}}" }, { "math_id": 4, "text": "{{{\\hat{\\mathbf{K}}}}_{i}}=\\left( \\begin{matrix}\n {{K}_{xx}} & {{K}_{xy}} & {{K}_{xz}} \\\\\n {{K}_{yx}} & {{K}_{yy}} & {{K}_{yz}} \\\\\n {{K}_{zx}} & {{K}_{zy}} & {{K}_{zz}} \\\\\n\\end{matrix} \\right)" } ]
https://en.wikipedia.org/wiki?curid=1181004
11813890
Supersymmetry algebras in 1 + 1 dimensions
A two-dimensional Minkowski space, i.e. a flat space with one time and one spatial dimension, has a two-dimensional Poincaré group IO(1,1) as its symmetry group. The respective Lie algebra is called the Poincaré algebra. It is possible to extend this algebra to a supersymmetry algebra, which is a formula_0-graded Lie superalgebra. The most common ways to do this are discussed below. == "N" (2,2) algebra == Let the Lie algebra of IO(1,1) be generated by the following generators: For the commutators between these generators, see Poincaré algebra. The formula_4 supersymmetry algebra over this space is a supersymmetric extension of this Lie algebra with the four additional generators (supercharges) formula_5, which are odd elements of the Lie superalgebra. Under Lorentz transformations the generators formula_6 and formula_7 transform as left-handed Weyl spinors, while formula_8 and formula_9 transform as right-handed Weyl spinors. The algebra is given by the Poincaré algebra plus formula_10 where all remaining commutators vanish, and formula_11 and formula_12 are complex central charges. The supercharges are related via formula_13. formula_14, formula_15, and formula_16 are Hermitian. == Subalgebras of the "N" (2,2) algebra == === The "N" (0,2) and "N" (2,0) subalgebras === The formula_17 subalgebra is obtained from the formula_18 algebra by removing the generators formula_8 and formula_9. Thus its anti-commutation relations are given by formula_19 plus the commutation relations above that do not involve formula_8 or formula_9. Both generators are left-handed Weyl spinors. Similarly, the formula_20 subalgebra is obtained by removing formula_6 and formula_7 and fulfills formula_21 Both supercharge generators are right-handed. === The "N" (1,1) subalgebra === The formula_22 subalgebra is generated by two generators formula_23 and formula_24 given by formula_25 for two real numbers formula_26 and formula_27. By definition, both supercharges are real, i.e. formula_28. They transform as Majorana-Weyl spinors under Lorentz transformations. Their anti-commutation relations are given by formula_29 where formula_30 is a real central charge. === The "N" (0,1) and "N" (1,0) subalgebras === These algebras can be obtained from the formula_22 subalgebra by removing formula_24 or formula_23, respectively, from the generators.
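As a consistency check, the quoted anti-commutator of an formula_22 supercharge with itself follows directly from the formula_18 relations listed above. A short derivation, using only those relations, is sketched below.

```latex
% Sketch: deriving \{Q^1_\pm, Q^1_\pm\} = 2(H \pm P) from the N=(2,2) relations above.
\begin{align}
\{Q^1_{\pm}, Q^1_{\pm}\}
  &= e^{2i\nu_{\pm}}\{Q_{\pm},Q_{\pm}\}
   + 2\,\{Q_{\pm},\overline{Q}_{\pm}\}
   + e^{-2i\nu_{\pm}}\{\overline{Q}_{\pm},\overline{Q}_{\pm}\} \\
  &= 0 + 2\,(H \pm P) + 0 ,
\end{align}
% since Q_{\pm}^2 = \overline{Q}_{\pm}^2 = 0 and \{Q_{\pm},\overline{Q}_{\pm}\} = H \pm P.
```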
[ { "math_id": 0, "text": "\\mathbb{Z}_2" }, { "math_id": 1, "text": "H = P_0\n" }, { "math_id": 2, "text": "P = P_1\n" }, { "math_id": 3, "text": "M = M_{01}" }, { "math_id": 4, "text": "\\mathcal{N}=(2,2)" }, { "math_id": 5, "text": "Q_+, \\, Q_-, \\, \\overline{Q}_+, \\, \\overline{Q}_-" }, { "math_id": 6, "text": "Q_+" }, { "math_id": 7, "text": "\\overline{Q}_+" }, { "math_id": 8, "text": "Q_-" }, { "math_id": 9, "text": "\\overline{Q}_-" }, { "math_id": 10, "text": "\\begin{align}\n&\\begin{align}\n&Q_+^2 = Q_{-}^2 = \\overline{Q}_+^2 = \\overline{Q}_-^2 =0, \\\\\n&\\{ Q_{\\pm}, \\overline{Q}_{\\pm} \\} = H \\pm P, \\\\\n\\end{align} \\\\\n&\\begin{align}\n&\\{\\overline{Q}_+, \\overline{Q}_- \\} = Z, && \\{Q_+, Q_- \\} = Z^*, \\\\\n&\\{Q_-, \\overline{Q}_+ \\} =\\tilde{Z}, && \\{Q_+, \\overline{Q}_-\\} = \\tilde{Z}^*,\\\\\n&{[iM, Q_{\\pm}]} = \\mp Q_{\\pm}, && {[iM, \\overline{Q}_{\\pm}]} = \\mp \\overline{Q}_{\\pm},\n\\end{align}\n\\end{align}\n" }, { "math_id": 11, "text": "Z\n" }, { "math_id": 12, "text": "\\tilde{Z}\n" }, { "math_id": 13, "text": "Q_{\\pm}^\\dagger = \\overline{Q}_\\pm\n" }, { "math_id": 14, "text": "H\n" }, { "math_id": 15, "text": "P\n" }, { "math_id": 16, "text": "M" }, { "math_id": 17, "text": "\\mathcal{N} = (0,2)" }, { "math_id": 18, "text": "\\mathcal{N} = (2,2)\n" }, { "math_id": 19, "text": "\n\\begin{align}\n&Q_+^2 = \\overline{Q}_+^2 = 0, \\\\\n&\\{ Q_{+}, \\overline{Q}_{+} \\} = H + P \\\\\n\\end{align}\n" }, { "math_id": 20, "text": "\\mathcal{N} = (2,0)" }, { "math_id": 21, "text": "\n\\begin{align}\n&Q_-^2 = \\overline{Q}_-^2 = 0, \\\\\n&\\{ Q_{-}, \\overline{Q}_{-} \\} = H - P. \\\\\n\\end{align}\n" }, { "math_id": 22, "text": "\\mathcal{N} = (1,1)" }, { "math_id": 23, "text": "Q_+^1" }, { "math_id": 24, "text": "Q_-^1" }, { "math_id": 25, "text": "\n\\begin{align}\nQ^1_{\\pm} = e^{i \\nu_{\\pm}} Q_{\\pm} + e^{-i \\nu_{\\pm}} \\overline{Q}_{\\pm}\n\\end{align}\n" }, { "math_id": 26, "text": "\\nu_+" }, { "math_id": 27, "text": "\\nu_-" }, { "math_id": 28, "text": "(Q_{\\pm}^1)^\\dagger = Q^1_\\pm\n" }, { "math_id": 29, "text": "\n\\begin{align}\n&\\{ Q^1_{\\pm}, Q^1_{\\pm} \\} = 2 (H \\pm P), \\\\\n&\\{ Q^1_{+}, Q^1_{-} \\} = Z^1,\n\\end{align}\n" }, { "math_id": 30, "text": "Z^1" } ]
https://en.wikipedia.org/wiki?curid=11813890
11814285
Stefan's formula
In thermodynamics, Stefan's formula says that the specific surface energy at a given interface is determined by the respective enthalpy difference formula_0. formula_1 where "σ" is the specific surface energy, "N"A is the Avogadro constant, formula_2 is a steric dimensionless coefficient, and "V"m is the molar volume. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
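A minimal numerical sketch of the formula follows; the enthalpy difference, molar volume, and steric coefficient used are hypothetical placeholder values chosen only to illustrate the unit bookkeeping, not data for any particular interface.

```python
# Illustrative sketch with assumed numbers: evaluating Stefan's formula
# sigma = gamma_0 * Delta_H / (N_A^(1/3) * V_m^(2/3)).  The enthalpy difference,
# molar volume, and steric coefficient below are placeholder values chosen only
# to show the unit bookkeeping, not data for any particular interface.
N_A = 6.022_140_76e23        # Avogadro constant, 1/mol

delta_H = 6.0e3              # J/mol   (hypothetical enthalpy difference)
V_m = 7.1e-6                 # m^3/mol (hypothetical molar volume)
gamma_0 = 0.5                # dimensionless steric coefficient (hypothetical)

sigma = gamma_0 * delta_H / (N_A ** (1 / 3) * V_m ** (2 / 3))
print(f"specific surface energy ~ {sigma:.3f} J/m^2")
```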
[ { "math_id": 0, "text": " \\Delta H^*" }, { "math_id": 1, "text": "\\sigma = \\gamma_0 \\left( \\frac{\\Delta H^*}{N_\\text{A}^{1/3}V_\\text{m}^{2/3}}\\right)," }, { "math_id": 2, "text": "\\gamma_0" } ]
https://en.wikipedia.org/wiki?curid=11814285
11814370
Center (category theory)
Variant of the notion of the center of a monoid, group, or ring to a category In category theory, a branch of mathematics, the center (or Drinfeld center, after Soviet-American mathematician Vladimir Drinfeld) is a variant of the notion of the center of a monoid, group, or ring to a category. Definition. The center of a monoidal category formula_0, denoted formula_1, is the category whose objects are pairs "(A,u)" consisting of an object "A" of formula_2 and an isomorphism formula_3 which is natural in formula_4 satisfying formula_5 and formula_6 (this is actually a consequence of the first axiom). An arrow from "(A,u)" to "(B,v)" in formula_1 consists of an arrow formula_7 in formula_2 such that formula_8. This definition of the center appears in . Equivalently, the center may be defined as formula_9 i.e., the endofunctors of "C" which are compatible with the left and right action of "C" on itself given by the tensor product. Braiding. The category formula_1 becomes a braided monoidal category with the tensor product on objects defined as formula_10 where formula_11, and the obvious braiding. Higher categorical version. The categorical center is particularly useful in the context of higher categories. This is illustrated by the following example: the center of the (abelian) category formula_12 of "R"-modules, for a commutative ring "R", is formula_12 again. The center of a monoidal ∞-category "C" can be defined, analogously to the above, as formula_13. Now, in contrast to the above, the center of the derived category of "R"-modules (regarded as an ∞-category) is given by the derived category of modules over the cochain complex encoding the Hochschild cohomology, a complex whose degree 0 term is "R" (as in the abelian situation above), but includes higher terms such as formula_14 (derived Hom). The notion of a center in this generality is developed by . Extending the above-mentioned braiding on the center of an ordinary monoidal category, the center of a monoidal ∞-category becomes an formula_15-monoidal category. More generally, the center of a formula_16-monoidal category is an algebra object in formula_16-monoidal categories and therefore, by Dunn additivity, an formula_17-monoidal category. Examples. has shown that the Drinfeld center of the category of sheaves on an orbifold "X" is the category of sheaves on the inertia orbifold of "X". For "X" being the classifying space of a finite group "G", the inertia orbifold is the stack quotient "G"/"G", where "G" acts on itself by conjugation. For this special case, Hinich's result specializes to the assertion that the center of the category of "G"-representations (with respect to some ground field "k") is equivalent to the category consisting of "G"-graded "k"-vector spaces, i.e., objects of the form formula_18 for some "k"-vector spaces, together with "G"-equivariant morphisms, where "G" acts on itself by conjugation. In the same vein, have shown that Drinfeld center of the derived category of quasi-coherent sheaves on a perfect stack "X" is the derived category of sheaves on the loop stack of "X". Related notions. Centers of monoid objects. The center of a monoid and the Drinfeld center of a monoidal category are both instances of the following more general concept. Given a monoidal category "C" and a monoid object "A" in "C", the center of "A" is defined as formula_19 For "C" being the category of sets (with the usual cartesian product), a monoid object is simply a monoid, and "Z"("A") is the center of the monoid. 
Similarly, if "C" is the category of abelian groups, monoid objects are rings, and the above recovers the center of a ring. Finally, if "C" is the category of categories, with the product as the monoidal operation, monoid objects in "C" are monoidal categories, and the above recovers the Drinfeld center. Categorical trace. The categorical trace of a monoidal category (or monoidal ∞-category) is defined as formula_20 The concept is being widely applied, for example in . References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
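The Set-valued special case mentioned above, where the center of a monoid object reduces to the ordinary center of a monoid, can be computed directly. The sketch below does this for a finite monoid presented by its multiplication rule, using the symmetric group S3 as an arbitrary example; its center consists of the identity alone.

```python
# Minimal sketch of the Set-valued special case mentioned above: for a finite
# monoid given by its multiplication rule, the center Z(A) is just the set of
# elements commuting with everything.  The symmetric group S3 below is an
# arbitrary example; its center contains only the identity.
from itertools import permutations

def compose(p, q):
    """Composition of permutations written as tuples: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

def center(elements, mul):
    return [a for a in elements if all(mul(a, b) == mul(b, a) for b in elements)]

S3 = list(permutations(range(3)))
print(center(S3, compose))   # -> [(0, 1, 2)], i.e. only the identity
```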
[ { "math_id": 0, "text": "\\mathcal{C} = (\\mathcal{C},\\otimes,I)" }, { "math_id": 1, "text": "\\mathcal{Z(C)}" }, { "math_id": 2, "text": "\\mathcal{C}" }, { "math_id": 3, "text": "u_X:A \\otimes X \\rightarrow X \\otimes A" }, { "math_id": 4, "text": "X" }, { "math_id": 5, "text": "u_{X \\otimes Y} = (1 \\otimes u_Y)(u_X \\otimes 1)" }, { "math_id": 6, "text": "u_I = 1_A" }, { "math_id": 7, "text": "f:A \\rightarrow B" }, { "math_id": 8, "text": "v_X (f \\otimes 1_X) = (1_X \\otimes f) u_X" }, { "math_id": 9, "text": "\\mathcal Z(\\mathcal C) = \\mathrm{End}_{\\mathcal C \\otimes \\mathcal C^{op}}(\\mathcal C)," }, { "math_id": 10, "text": "(A,u) \\otimes (B,v) = (A \\otimes B,w)" }, { "math_id": 11, "text": "w_X = (u_X \\otimes 1)(1 \\otimes v_X)" }, { "math_id": 12, "text": "\\mathrm{Mod}_R" }, { "math_id": 13, "text": "Z(\\mathcal C) := \\mathrm{End}_{\\mathcal C \\otimes \\mathcal C^{op}}(\\mathcal C)" }, { "math_id": 14, "text": "Hom(R, R)" }, { "math_id": 15, "text": "E_2" }, { "math_id": 16, "text": "E_k" }, { "math_id": 17, "text": "E_{k+1}" }, { "math_id": 18, "text": "\\bigoplus_{g \\in G} V_g" }, { "math_id": 19, "text": "Z(A) = End_{A \\otimes A^{op}}(A)." }, { "math_id": 20, "text": "Tr(C) := C \\otimes_{C \\otimes C^{op}} C." } ]
https://en.wikipedia.org/wiki?curid=11814370
11815074
Noncentral hypergeometric distributions
Hypergeometric distribution In statistics, the hypergeometric distribution is the discrete probability distribution generated by picking colored balls at random from an urn without replacement. Various generalizations to this distribution exist for cases where the picking of colored balls is biased so that balls of one color are more likely to be picked than balls of another color. This can be illustrated by the following example. Assume that an opinion poll is conducted by calling random telephone numbers. Unemployed people are more likely to be home and answer the phone than employed people are. Therefore, unemployed respondents are likely to be over-represented in the sample. The probability distribution of employed versus unemployed respondents in a sample of "n" respondents can be described as a noncentral hypergeometric distribution. The description of biased urn models is complicated by the fact that there is more than one noncentral hypergeometric distribution. Which distribution one gets depends on whether items (e.g., colored balls) are sampled one by one in a manner in which there is competition between the items or they are sampled independently of one another. The name "noncentral hypergeometric distribution" has been used for both of these cases. The use of the same name for two different distributions came about because they were studied by two different groups of scientists with hardly any contact with each other. Agner Fog (2007, 2008) suggested that the best way to avoid confusion is to use the name Wallenius' noncentral hypergeometric distribution for the distribution of a biased urn model in which a predetermined number of items are drawn one by one in a competitive manner and to use the name Fisher's noncentral hypergeometric distribution for one in which items are drawn independently of each other, so that the total number of items drawn is known only after the experiment. The names refer to Kenneth Ted Wallenius and R. A. Fisher, who were the first to describe the respective distributions. Fisher's noncentral hypergeometric distribution had previously been given the name "extended hypergeometric distribution", but this name is rarely used in the scientific literature, except in handbooks that need to distinguish between the two distributions. Wallenius' noncentral hypergeometric distribution. Wallenius' distribution can be explained as follows. Assume that an urn contains formula_0 red balls and formula_1 white balls, totalling formula_2 balls. formula_3 balls are drawn at random from the urn one by one without replacement. Each red ball has the weight formula_4, and each white ball has the weight formula_5. We assume that the probability of taking a particular ball is proportional to its weight. The physical property that determines the odds may be something else than weight, such as size or slipperiness or some other factor, but it is convenient to use the word "weight" for the odds parameter. The probability that the first ball picked is red is equal to the weight fraction of red balls: formula_6 The probability that the second ball picked is red depends on whether the first ball was red or white. If the first ball was red then the above formula is used with formula_0 reduced by one. If the first ball was white then the above formula is used with formula_1 reduced by one. The important fact that distinguishes Wallenius' distribution is that there is competition between the balls. 
The probability that a particular ball is taken in a particular draw depends not only on its own weight, but also on the total weight of the competing balls that remain in the urn at that moment. And the weight of the competing balls depends on the outcomes of all preceding draws. A multivariate version of Wallenius' distribution is used if there are more than two different colors. The distribution of the balls that are not drawn is a complementary Wallenius' noncentral hypergeometric distribution. Fisher's noncentral hypergeometric distribution. In the Fisher model, the fates of the balls are independent and there is no dependence between draws. One may as well take all "n" balls at the same time. Each ball has no "knowledge" of what happens to the other balls. For the same reason, it is impossible to know the value of "n" before the experiment. If we tried to fix the value of "n" then we would have no way of preventing ball number "n" + 1 from being taken without violating the principle of independence between balls. "n" is therefore a random variable, and the Fisher distribution is a conditional distribution which can only be determined after the experiment when "n" is observed. The unconditional distribution is two independent binomials, one for each color. Fisher's distribution can simply be defined as the conditional distribution of two or more independent binomial variates dependent upon their sum. A multivariate version of the Fisher's distribution is used if there are more than two colors of balls. The difference between the two noncentral hypergeometric distributions. Wallenius' and Fisher's distributions are approximately equal when the odds ratio formula_7 is near 1, and "n" is low compared to the total number of balls, "N". The difference between the two distributions becomes higher when the odds ratio is far from one and "n" is near "N". The two distributions approximate each other better when they have the same mean than when they have the same odds (ω = 1) (see figures above). Both distributions degenerate into the hypergeometric distribution when the odds ratio is 1, or to the binomial distribution when "n" = 1. To understand why the two distributions are different, we may consider the following extreme example: An urn contains one red ball with the weight 1000, and a thousand white balls each with the weight 1. We want to calculate the probability that the red ball is "not" taken. First we consider the Wallenius model. The probability that the red ball is not taken in the first draw is 1000/2000 = &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2. The probability that the red ball is not taken in the second draw, under the condition that it was not taken in the first draw, is 999/1999 ≈ &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2. The probability that the red ball is not taken in the third draw, under the condition that it was not taken in the first two draws, is 998/1998 ≈ &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2. Continuing in this way, we can calculate that the probability of not taking the red ball in "n" draws is approximately 2−"n" as long as "n" is small compared to "N". In other words, the probability of not taking a very heavy ball in "n" draws falls almost exponentially with "n" in Wallenius' model. The exponential function arises because the probabilities for each draw are all multiplied together. This is not the case in Fisher's model, where balls are taken independently, and possibly simultaneously. 
Here the draws are independent and the probabilities are therefore not multiplied together. The probability of not taking the heavy red ball in Fisher's model is approximately 1/("n" + 1). The two distributions are therefore very different in this extreme case, even though they are quite similar in less extreme cases. The following conditions must be fulfilled for Wallenius' distribution to be applicable: The following conditions must be fulfilled for Fisher's distribution to be applicable: Examples. The following examples illustrate which distribution applies in different situations. Example 1. You are catching fish in a small lake that contains a limited number of fish. There are different kinds of fish with different weights. The probability of catching a particular fish at a particular moment is proportional to its weight. You are catching the fish one by one with a fishing rod. You have decided to catch "n" fish. You are determined to catch exactly "n" fish regardless of how long it may take. You will stop after you have caught "n" fish even if you can see more fish that are tempting. This scenario will give a distribution of the types of fish caught that is equal to Wallenius' noncentral hypergeometric distribution. Example 2. You are catching fish as in example 1, but using a big net. You set up the net one day and come back the next day to remove the net. You count how many fish you have caught and then you go home regardless of how many fish you have caught. Each fish has a probability of being ensnared that is proportional to its weight but independent of what happens to the other fish. The total number of fish that will be caught in this scenario is not known in advance. The expected number of fish caught is therefore described by multiple binomial distributions, one for each kind of fish. After the fish have been counted, the total number "n" of fish is known. The probability distribution when "n" is known (but the number of each type is not known yet) is Fisher's noncentral hypergeometric distribution. Example 3. You are catching fish with a small net. It is possible that more than one fish can be caught in the net at the same time. You will use the net repeatedly until you have got at least "n" fish. This scenario gives a distribution that lies between Wallenius' and Fisher's distributions. The total number of fish caught can vary if you are getting too many fish in the last catch. You may put the excess fish back into the lake, but this still does not give Wallenius' distribution. This is because you are catching multiple fish at the same time. The condition that each catch depends on all previous catches does not hold for fish that are caught simultaneously or in the same operation. The resulting distribution will be close to Wallenius' distribution if there are few fish in the net in each catch and many casts of the net. The resulting distribution will be close to Fisher's distribution if there are many fish in the net in each catch and few casts. Example 4. You are catching fish with a big net. Fish swim into the net randomly in a situation that resembles a Poisson process. You watch the net and take it up as soon as you have caught exactly "n" fish. The resulting distribution will be close to Fisher's distribution because the fish arrive in the net independently of each other. But the fates of the fish are not completely independent because a particular fish can be saved from being caught if "n" other fish happen to arrive in the net before this particular fish. 
This is more likely to happen if the other fish are heavy than if they are light. Example 5. You are catching fish one by one with a fishing rod as in example 1. You need a particular amount of fish in order to feed your family. You will stop when the total weight of the fish caught reaches this predetermined limit. The resulting distribution will be close to Wallenius' distribution, but not exactly equal to it because the decision to stop depends on the weight of the fish caught so far. "n" is therefore not known before the fishing trip. Conclusion to the examples. These examples show that the distribution of the types of fish caught depends on the way they are caught. Many situations will give a distribution that lies somewhere between Wallenius' and Fisher's noncentral hypergeometric distributions. A consequence of the difference between these two distributions is that one will catch more of the heavy fish, on average, by catching "n" fish one by one than by catching all "n" at the same time. In general, we can say that, in biased sampling, the odds parameter has a stronger effect in Wallenius' distribution than in Fisher's distribution, especially when "n"/"N" is high.
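The contrast between the two models can be checked numerically for the extreme example discussed above (one red ball of weight 1000 among a thousand white balls of weight 1). The sketch below simulates Wallenius' sequential weighted draws by Monte Carlo and evaluates Fisher's probability exactly from its conditional form (probabilities proportional to the product of binomial coefficients weighted by powers of the odds ratio, which follows from the two-independent-binomials description given earlier). The number of draws and the number of trials are arbitrary choices.

```python
# Monte Carlo sketch of the extreme example above: one red ball of weight 1000
# among 1000 white balls of weight 1.  The sequential draws follow Wallenius'
# model; the Fisher probability is computed from its conditional
# (weighted binomial-coefficient) form.  n and the number of trials are
# arbitrary choices for the illustration.
import random
from math import comb

M_WHITE, W_RED, W_WHITE, N_DRAWS, TRIALS = 1000, 1000.0, 1.0, 10, 200_000

def wallenius_red_missed():
    """One Wallenius experiment: True if the red ball is never drawn."""
    red_in, white_in = True, M_WHITE
    for _ in range(N_DRAWS):
        total = (W_RED if red_in else 0.0) + W_WHITE * white_in
        if red_in and random.random() < W_RED / total:
            red_in = False          # the red ball is drawn
        else:
            white_in -= 1           # a white ball is drawn
    return red_in

p_wallenius = sum(wallenius_red_missed() for _ in range(TRIALS)) / TRIALS

# Fisher's noncentral hypergeometric: P(red not taken) with odds ratio w,
# i.e. C(1000, n) / (C(1000, n) + w * C(1000, n-1)).
w = W_RED / W_WHITE
p_fisher = comb(M_WHITE, N_DRAWS) / (comb(M_WHITE, N_DRAWS) + w * comb(M_WHITE, N_DRAWS - 1))

print(f"Wallenius (simulated): {p_wallenius:.4f}   ~ 2^-n    = {2.0 ** -N_DRAWS:.4f}")
print(f"Fisher    (exact)    : {p_fisher:.4f}   ~ 1/(n+1) = {1 / (N_DRAWS + 1):.4f}")
```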
[ { "math_id": 0, "text": "m_1" }, { "math_id": 1, "text": "m_2" }, { "math_id": 2, "text": "N = m_1 + m_2" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "\\omega_1" }, { "math_id": 5, "text": "\\omega_2" }, { "math_id": 6, "text": "p_1 = \\frac{m_1 \\omega_1}{m_1 \\omega_1 + m_2 \\omega_2}." }, { "math_id": 7, "text": "\\omega = \\omega_1/\\omega_2" } ]
https://en.wikipedia.org/wiki?curid=11815074
11815157
Particle-size distribution
Function representing relative sizes of particles in a system In granulometry, the particle-size distribution (PSD) of a powder, or granular material, or particles dispersed in fluid, is a list of values or a mathematical function that defines the relative amount, typically by mass, of particles present according to size. Significant energy is usually required to disintegrate soil, etc. particles into the PSD that is then called a grain size distribution. Significance. The PSD of a material can be important in understanding its physical and chemical properties. It affects the strength and load-bearing properties of rocks and soils. It affects the reactivity of solids participating in chemical reactions, and needs to be tightly controlled in many industrial products such as the manufacture of printer toner, cosmetics, and pharmaceutical products. Significance in the collection of particulate matter. Particle size distribution can greatly affect the efficiency of any collection device. Settling chambers will normally only collect very large particles, those that can be separated using sieve trays. Centrifugal collectors will normally collect particles down to about 20 μm. Higher efficiency models can collect particles down to 10 μm. Fabric filters are one of the most efficient and cost effective types of dust collectors available and can achieve a collection efficiency of more than 99% for very fine particles. Wet scrubbers that use liquid are commonly known as wet scrubbers. In these systems, the scrubbing liquid (usually water) comes into contact with a gas stream containing dust particles. The greater the contact of the gas and liquid streams, the higher the dust removal efficiency. Electrostatic precipitators use electrostatic forces to separate dust particles from exhaust gases. They can be very efficient at the collection of very fine particles. Filter Press used for filtering liquids by cake filtration mechanism. The PSD plays an important part in the cake formation, cake resistance, and cake characteristics. The filterability of the liquid is determined largely by the size of the particles. Nomenclature. ρp: Actual particle density (g/cm3) ρg: Gas or sample matrix density (g/cm3) r2: Least-squares coefficient of determination. The closer this value is to 1.0, the better the data fit to a hyperplane representing the relationship between the response variable and a set of covariate variables. A value equal to 1.0 indicates all data fit perfectly within the hyperplane. λ: Gas mean free path (cm) D50: Mass-median-diameter (MMD). The log-normal distribution mass median diameter. The MMD is considered to be the average particle diameter by mass. σg: Geometric standard deviation. This value is determined mathematically by the equation: σg = D84.13/D50 = D50/D15.87 The value of σg determines the slope of the least-squares regression curve. α: Relative standard deviation or degree of polydispersity. This value is also determined mathematically. For values less than 0.1, the particulate sample can be considered to be monodisperse. α = σg/D50 Re(P) : Particle Reynolds Number. In contrast to the large numerical values noted for flow Reynolds number, particle Reynolds number for fine particles in gaseous mediums is typically less than 0.1. Ref : Flow Reynolds number. Kn: Particle Knudsen number. Types. PSD is usually defined by the method by which it is determined. The most easily understood method of determination is sieve analysis, where powder is separated on sieves of different sizes. 
Thus, the PSD is defined in terms of discrete size ranges: e.g. "% of sample between 45 μm and 53 μm", when sieves of these sizes are used. The PSD is usually determined over a list of size ranges that covers nearly all the sizes present in the sample. Some methods of determination allow much narrower size ranges to be defined than can be obtained by use of sieves, and are applicable to particle sizes outside the range available in sieves. However, the idea of the notional "sieve", that "retains" particles above a certain size, and "passes" particles below that size, is universally used in presenting PSD data of all kinds. The PSD may be expressed as a "range" analysis, in which the amount in each size range is listed in order. It may also be presented in "cumulative" form, in which the total of all sizes "retained" or "passed" by a single notional "sieve" is given for a range of sizes. Range analysis is suitable when a particular ideal mid-range particle size is being sought, while cumulative analysis is used where the amount of "under-size" or "over-size" must be controlled. The way in which "size" is expressed is open to a wide range of interpretations. A simple treatment assumes the particles are spheres that will just pass through a square hole in a "sieve". In practice, particles are irregular – often extremely so, for example in the case of fibrous materials – and the way in which such particles are characterized during analysis is very dependent on the method of measurement used. Sampling. Before a PSD can be determined, it is vital that a representative sample is obtained. In the case where the material to be analysed is flowing, the sample must be withdrawn from the stream in such a way that the sample has the same proportions of particle sizes as the stream. The best way to do this is to take many samples of the whole stream over a period, instead of taking a portion of the stream for the whole time.p. 6 In the case where the material is in a heap, scoop or thief sampling needs to be done, which is inaccurate: the sample should ideally have been taken while the powder was flowing towards the heap.p. 10 After sampling, the sample volume typically needs to be reduced. The material to be analysed must be carefully blended, and the sample withdrawn using techniques that avoid size segregation, for example using a rotary dividerp. 5. Particular attention must be paid to avoidance of loss of fines during manipulation of the sample. Measurement techniques. Sieve analysis. Sieve analysis is often used because of its simplicity, cheapness, and ease of interpretation. Methods may be simple shaking of the sample in sieves until the amount retained becomes more or less constant. Alternatively, the sample may be washed through with a non-reacting liquid (usually water) or blown through with an air current. "Advantages": this technique is well-adapted for bulk materials. A large amount of materials can be readily loaded into sieve trays. Two common uses in the powder industry are wet-sieving of milled limestone and dry-sieving of milled coal. "Disadvantages": many PSDs are concerned with particles too small for separation by sieving to be practical. A very fine sieve, such as 37 μm sieve, is exceedingly fragile, and it is very difficult to get material to pass through it. Another disadvantage is that the amount of energy used to sieve the sample is arbitrarily determined. 
Over-energetic sieving causes attrition of the particles and thus changes the PSD, while insufficient energy fails to break down loose agglomerates. Although manual sieving procedures can be ineffective, automated sieving technologies using image fragmentation analysis software are available. These technologies can sieve material by capturing and analyzing a photo of material. Air elutriation analysis. Material may be separated by means of air elutriation, which employs an apparatus with a vertical tube through which fluid is passed at a controlled velocity. When the particles are introduced, often through a side tube, the smaller particles are carried over in the fluid stream while the large particles settle against the upward current. If we start with low flow rates small less dense particle attain terminal velocities, and flow with the stream, the particle from the stream is collected in overflow and hence will be separated from the feed. Flow rates can be increased to separate higher size ranges. Further size fractions may be collected if the overflow from the first tube is passed vertically upwards through a second tube of greater cross-section, and any number of such tubes can be arranged in series. "Advantages": a bulk sample is analyzed using centrifugal classification and the technique is non-destructive. Each cut-point can be recovered for future size-respective chemical analyses. This technique has been used for decades in the air pollution control industry (data used for design of control devices). This technique determines particle size as a function of settling velocity in an air stream (as opposed to water, or some other liquid). "Disadvantages": a bulk sample (about ten grams) must be obtained. It is a fairly time-consuming analytical technique. The actual test method has been withdrawn by ASME due to obsolescence. Instrument calibration materials are therefore no longer available. Photoanalysis. Materials can now be analysed through photoanalysis procedures. Unlike sieve analyses which can be time-consuming and inaccurate, taking a photo of a sample of the materials to be measured and using software to analyze the photo can result in rapid, accurate measurements. Another advantage is that the material can be analyzed without being handled. This is beneficial in the agricultural industry, as handling of food products can lead to contamination. Photoanalysis equipment and software is currently being used in mining, forestry and agricultural industries worldwide. Optical counting methods. PSDs can be measured microscopically by sizing against a graticule and counting, but for a statistically valid analysis, millions of particles must be measured. This is impossibly arduous when done manually, but automated analysis of electron micrographs is now commercially available. It is used to determine the particle size within the range of 0.2 to 100 micrometers. Electroresistance counting methods. An example of this is the Coulter counter, which measures the momentary changes in the conductivity of a liquid passing through an orifice that take place when individual non-conducting particles pass through. The particle count is obtained by counting pulses. This pulse is proportional to the volume of the sensed particle. "Advantages": very small sample aliquots can be examined. "Disadvantages": sample must be dispersed in a liquid medium... some particles may (partially or fully) dissolve in the medium altering the size distribution. 
The results are only related to the projected cross-sectional area that a particle displaces as it passes through an orifice. This is a physical diameter, not really related to mathematical descriptions of particles (e.g. terminal settling velocity). Sedimentation techniques. These are based upon study of the terminal velocity acquired by particles suspended in a viscous liquid. Sedimentation time is longest for the finest particles, so this technique is useful for sizes below 10 μm, but sub-micrometer particles cannot be reliably measured due to the effects of Brownian motion. Typical apparatus disperses the sample in liquid, then measures the density of the column at timed intervals. Other techniques determine the optical density of successive layers using visible light or x-rays. "Advantages": this technique determines particle size as a function of settling velocity. "Disadvantages": Sample must be dispersed in a liquid medium... some particles may (partially or fully) dissolve in the medium altering the size distribution, requiring careful selection of the dispersion media. Density is highly dependent upon fluid temperature remaining constant. X-Rays will not count carbon (organic) particles. Many of these instruments can require a bulk sample (e.g. two to five grams). Laser diffraction methods. These depend upon analysis of the "halo" of diffracted light produced when a laser beam passes through a dispersion of particles in air or in a liquid. The angle of diffraction increases as particle size decreases, so that this method is particularly good for measuring sizes between 0.1 and 3,000 μm. Advances in sophisticated data processing and automation have allowed this to become the dominant method used in industrial PSD determination. This technique is relatively fast and can be performed on very small samples. A particular advantage is that the technique can generate a continuous measurement for analyzing process streams. Laser diffraction measures particle size distributions by measuring the angular variation in intensity of light scattered as a laser beam passes through a dispersed particulate sample. Large particles scatter light at small angles relative to the laser beam and small particles scatter light at large angles. The angular scattering intensity data is then analyzed to calculate the size of the particles responsible for creating the scattering pattern, using the Mie theory or Fraunhofer approximation of light scattering. The particle size is reported as a volume equivalent sphere diameter. Laser Obscuration Time" (LOT) or "Time Of Transition" (TOT). A focused laser beam rotates in a constant frequency and interacts with particles within the sample medium. Each randomly scanned particle obscures the laser beam to its dedicated photo diode, which measures the time of obscuration. The time of obscuration directly relates to the particle's Diameter, by a simple calculation principle of multiplying the known beam rotation Velocity in the directly measured Time of obscuration, (D=V*t). Acoustic spectroscopy or ultrasound attenuation spectroscopy. Instead of light, this method employs ultrasound for collecting information on the particles that are dispersed in fluid. Dispersed particles absorb and scatter ultrasound similarly to light. This has been known since Lord Rayleigh developed the first theory of "ultrasound scattering" and published a book "The Theory of Sound" in 1878. 
There have been hundreds of papers studying ultrasound propagation through fluid particulates in the 20th century. It turns out that instead of measuring "scattered energy versus angle", as with light, in the case of ultrasound, measuring the "transmitted energy versus frequency" is a better choice. The resulting ultrasound attenuation frequency spectra are the raw data for calculating particle size distribution. It can be measured for any fluid system with no dilution or other sample preparation. This is a big advantage of this method. Calculation of particle size distribution is based on theoretical models that are well verified for up to 50% by volume of dispersed particles on micron and nanometer scales. However, as concentration increases and the particle sizes approach the nanoscale, conventional modelling gives way to the necessity to include shear-wave re-conversion effects in order for the models to accurately reflect the real attenuation spectra. Air pollution emissions measurements. Cascade impactors – particulate matter is withdrawn isokinetically from a source and segregated by size in a cascade impactor at the sampling point exhaust conditions of temperature, pressure, etc. Cascade impactors use the principle of inertial separation to size segregate particle samples from a particle laden gas stream. The mass of each size fraction is determined gravimetrically. The California Air Resources Board Method 501 is currently the most widely accepted test method for particle size distribution emissions measurements. Mathematical models. Probability distributions. Rosin–Rammler distribution. The Weibull distribution, now named for Waloddi Weibull was first identified by and first applied by to describe particle size distributions. It is still widely used in mineral processing to describe particle size distributions in comminution processes. formula_0 where formula_1: Particle size formula_2: 80th percentile of the particle size distribution formula_3: Parameter describing the spread of the distribution The inverse distribution is given by: formula_4 where formula_5: Mass fraction Parameter estimation. The parameters of the Rosin–Rammler distribution can be determined by refactoring the distribution function to the form formula_6 Hence the slope of the line in a plot of formula_7 versus formula_8 yields the parameter formula_3 and formula_2 is determined by substitution into formula_9 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
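The parameter-estimation recipe just described amounts to an ordinary least-squares fit on the linearized form. The sketch below carries it out on made-up (size, cumulative fraction passing) pairs that serve purely as an illustration.

```python
# Sketch of the Rosin-Rammler fitting recipe described above: plot
# ln(-ln(1-F)) against ln(x), take the least-squares slope as m, then recover
# P80 from the intercept.  The (size, cumulative-passing) pairs below are
# made-up illustrative data, not measurements.
import numpy as np

size_um = np.array([45.0, 90.0, 180.0, 355.0, 710.0])    # particle size x
F = np.array([0.12, 0.27, 0.50, 0.77, 0.95])              # cumulative fraction passing

X = np.log(size_um)
Y = np.log(-np.log(1.0 - F))

m, intercept = np.polyfit(X, Y, 1)                        # slope = m
P80 = (-np.log(0.2) / np.exp(intercept)) ** (1.0 / m)     # back out P80 from the intercept

print(f"spread parameter m  ~ {m:.3f}")
print(f"80th percentile P80 ~ {P80:.1f} um")
```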
[ { "math_id": 0, "text": "F(x;P_{\\rm{80}},m) = \\begin{cases}\n1-e^{\\ln\\left(0.2\\right)\\left(\\frac{x}{P_{\\rm{80}}}\\right)^m} & x\\geq0 ,\\\\\n0 & x<0 ,\\end{cases}" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "P_{\\rm{80}}" }, { "math_id": 3, "text": "m" }, { "math_id": 4, "text": "f(F;P_{\\rm{80}},m) = \\begin{cases}\nP_{\\rm{80}} \\sqrt[m] {\\frac{\\ln(1-F)}{\\ln(0.2)}} & F>0 ,\\\\\n0 & F\\leq0 ,\\end{cases}" }, { "math_id": 5, "text": "F" }, { "math_id": 6, "text": "\\ln\\left(-\\ln\\left(1-F\\right)\\right) = \nm\\ln(x)+ \\ln\\left(\\frac{-\\ln(0.2)}{(P_{\\rm{80}})^m}\\right)" }, { "math_id": 7, "text": "\\ln\\left(-\\ln\\left(1-F\\right)\\right)" }, { "math_id": 8, "text": "\\ln(x)" }, { "math_id": 9, "text": "P_{\\rm{80}} = \\left(\\frac{-\\ln(0.2)}{e^{intercept}}\\right)^\\frac{1}{m}" } ]
https://en.wikipedia.org/wiki?curid=11815157
11816319
Siegel's lemma
In mathematics, specifically in transcendental number theory and Diophantine approximation, Siegel's lemma refers to bounds on the solutions of systems of linear equations obtained by the construction of auxiliary functions. The existence of such auxiliary polynomials was proven by Axel Thue; Thue's proof used Dirichlet's box principle. Carl Ludwig Siegel published his lemma in 1929. It is a pure existence theorem for a system of linear equations. Siegel's lemma has been refined in recent years to produce sharper bounds on the estimates given by the lemma. Statement. Suppose we are given a system of "M" linear equations in "N" unknowns such that "N" &gt; "M", say formula_0 formula_1 formula_2 where the coefficients are rational integers, not all 0, and bounded by "B". The system then has a solution formula_3 with the "X"s all rational integers, not all 0, and bounded by formula_4 Bombieri and Vaaler (1983) gave the following sharper bound for the "X"s: formula_5 where "D" is the greatest common divisor of the "M" × "M" minors of the matrix "A", and "A""T" is its transpose. Their proof involved replacing the pigeonhole principle by techniques from the geometry of numbers.
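The bound formula_4 can be checked directly on toy systems. The sketch below brute-forces a small hypothetical system (not taken from the literature) and confirms that a non-trivial integer solution exists within Siegel's bound.

```python
# Brute-force check of Siegel's bound (N*B)^(M/(N-M)) on a small, hand-picked system.
from itertools import product
import math

A = [[1, 1, -1],          # M = 2 equations, N = 3 unknowns
     [1, -1, 0]]          # integer coefficients, bounded by B = 1
M, N = len(A), len(A[0])
B = max(abs(a) for row in A for a in row)
bound = math.floor((N * B) ** (M / (N - M)))     # here (3*1)^2 = 9

solutions = [x for x in product(range(-bound, bound + 1), repeat=N)
             if any(x) and all(sum(c * xi for c, xi in zip(row, x)) == 0 for row in A)]
print(bound, solutions[0])    # prints 9 and (-4, -4, -8): a nonzero integer solution within the bound
```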
[ { "math_id": 0, "text": "a_{11} X_1 + \\cdots+ a_{1N} X_N = 0" }, { "math_id": 1, "text": "\\cdots" }, { "math_id": 2, "text": "a_{M1} X_1 +\\cdots+ a_{MN} X_N = 0" }, { "math_id": 3, "text": "(X_1, X_2, \\dots, X_N)" }, { "math_id": 4, "text": "(NB)^{M/(N-M)}." }, { "math_id": 5, "text": "\\max|X_j| \\,\\le \\left(D^{-1}\\sqrt{\\det(AA^T)}\\right)^{\\!1/(N-M)}" } ]
https://en.wikipedia.org/wiki?curid=11816319
1181756
Carmichael function
Function in mathematical number theory In number theory, a branch of mathematics, the Carmichael function "λ"("n") of a positive integer n is the smallest member of the set of positive integers m having the property that formula_0 holds for every integer a coprime to n. In algebraic terms, "λ"("n") is the exponent of the multiplicative group of integers modulo n. As this is a finite abelian group, there must exist an element whose order equals the exponent, "λ"("n"). Such an element is called a primitive "λ"-root modulo n. The Carmichael function is named after the American mathematician Robert Carmichael who defined it in 1910. It is also known as Carmichael's λ function, the reduced totient function, and the least universal exponent function. The order of the multiplicative group of integers modulo n is "φ"("n"), where φ is Euler's totient function. Since the order of an element of a finite group divides the order of the group, "λ"("n") divides "φ"("n"). The following table compares the first 36 values of "λ"("n") (sequence in the OEIS) and "φ"("n") (in bold if they are different; the ns such that they are different are listed in OEIS: ). Numerical examples. 5. The set of numbers less than and coprime to 5 is {1,2,3,4}. Hence Euler's totient function has value "φ"(5) 4 and the value of Carmichael's function, "λ"(5), must be a divisor of 4. The divisor 1 does not satisfy the definition of Carmichael's function since formula_1 except for formula_2. Neither does 2 since formula_3. Hence "λ"(5) 4. Indeed, formula_4. Both 2 and 3 are primitive λ-roots modulo 5 and also primitive roots modulo 5. 8. The set of numbers less than and coprime to 8 is {1,3,5,7}. Hence "φ"(8) 4 and "λ"(8) must be a divisor of 4. In fact "λ"(8) 2 since formula_5. The primitive λ-roots modulo 8 are 3, 5, and 7. There are no primitive roots modulo 8. Recurrence for "λ"("n"). The Carmichael lambda function of a prime power can be expressed in terms of the Euler totient. Any number that is not 1 or a prime power can be written uniquely as the product of distinct prime powers, in which case λ of the product is the least common multiple of the λ of the prime power factors. Specifically, "λ"("n") is given by the recurrence formula_6 Euler's totient for a prime power, that is, a number "p""r" with "p" prime and "r" ≥ 1, is given by formula_7 Carmichael's theorems. Carmichael proved two theorems that, together, establish that if "λ"("n") is considered as defined by the recurrence of the previous section, then it satisfies the property stated in the introduction, namely that it is the smallest positive integer m such that formula_8 for all a relatively prime to n. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem 1 — If a is relatively prime to n then formula_9. This implies that the order of every element of the multiplicative group of integers modulo n divides "λ"("n"). Carmichael calls an element a for which formula_10 is the least power of a congruent to 1 (mod n) a "primitive λ-root modulo n". (This is not to be confused with a primitive root modulo n, which Carmichael sometimes refers to as a primitive formula_11-root modulo n.) &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem 2 — For every positive integer n there exists a primitive λ-root modulo n. Moreover, if g is such a root, then there are formula_12 primitive λ-roots that are congruent to powers of g. 
If g is one of the primitive λ-roots guaranteed by the theorem, then formula_13 has no positive integer solutions m less than "λ"("n"), showing that there is no positive "m" &lt; "λ"("n") such that formula_8 for all a relatively prime to n. The second statement of Theorem 2 does not imply that all primitive λ-roots modulo n are congruent to powers of a single root g. For example, if "n" 15, then "λ"("n") 4 while formula_14 and formula_15. There are four primitive λ-roots modulo 15, namely 2, 7, 8, and 13 as formula_16. The roots 2 and 8 are congruent to powers of each other and the roots 7 and 13 are congruent to powers of each other, but neither 7 nor 13 is congruent to a power of 2 or 8 and vice versa. The other four elements of the multiplicative group modulo 15, namely 1, 4 (which satisfies formula_17), 11, and 14, are not primitive λ-roots modulo 15. For a contrasting example, if "n" 9, then formula_18 and formula_15. There are two primitive λ-roots modulo 9, namely 2 and 5, each of which is congruent to the fifth power of the other. They are also both primitive formula_11-roots modulo 9. Properties of the Carmichael function. In this section, an integer formula_19 is divisible by a nonzero integer formula_20 if there exists an integer formula_21 such that formula_22. This is written as formula_23 A consequence of minimality of "λ"("n"). Suppose "am" ≡ 1 (mod "n") for all numbers a coprime with n. Then "λ"("n") | "m". Proof: If "m" "kλ"("n") + "r" with 0 ≤ "r" &lt; "λ"("n"), then formula_24 for all numbers a coprime with n. It follows that "r" = 0 since "r" &lt; "λ"("n") and "λ"("n") is the minimal positive exponent for which the congruence holds for all a coprime with n. "λ"("n") divides "φ"("n"). This follows from elementary group theory, because the exponent of any finite group must divide the order of the group. "λ"("n") is the exponent of the multiplicative group of integers modulo n while "φ"("n") is the order of that group. In particular, the two must be equal in the cases where the multiplicative group is cyclic due to the existence of a primitive root, which is the case for odd prime powers. We can thus view Carmichael's theorem as a sharpening of Euler's theorem. formula_25 Divisibility. Proof. By definition, for any integer formula_21 with formula_26 (and thus also formula_27), we have that formula_28 , and therefore formula_29. This establishes that formula_30 for all k relatively prime to a. By the consequence of minimality proved above, we have formula_31. Composition. For all positive integers a and b it holds that formula_32. This is an immediate consequence of the recurrence for the Carmichael function. Exponential cycle length. If formula_33 is the biggest exponent in the prime factorization formula_34 of n, then for all a (including those not coprime to n) and all "r" ≥ "r"max, formula_35 In particular, for square-free n ( "r"max 1), for all a we have formula_36 Average value. For any "n" ≥ 16: formula_37 (called Erdős approximation in the following) with the constant formula_38 and "γ" ≈ 0.57721, the Euler–Mascheroni constant. The following table gives some overview over the first 226 – 1 = values of the λ function, for both, the exact average and its Erdős-approximation. 
Additionally given is some overview over the more easily accessible “logarithm over logarithm” values LoL("n") := with There, the table entry in row number 26 at column indicates that 60.49% (≈ ) of the integers 1 ≤ "n" ≤ have "λ"("n") &gt; "n" meaning that the majority of the λ values is exponential in the length "l" : log2("n") of the input n, namely formula_39 Prevailing interval. For all numbers N and all but "o"("N") positive integers "n" ≤ "N" (a "prevailing" majority): formula_40 with the constant formula_41 Lower bounds. For any sufficiently large number N and for any Δ ≥ (ln ln "N")3, there are at most formula_42 positive integers "n" ≤ N such that "λ"("n") ≤ "ne"−Δ. Minimal order. For any sequence "n"1 &lt; "n"2 &lt; "n"3 &lt; ⋯ of positive integers, any constant 0 &lt; "c" &lt;, and any sufficiently large i: formula_43 Small values. For a constant c and any sufficiently large positive A, there exists an integer "n" &gt; "A" such that formula_44 Moreover, n is of the form formula_45 for some square-free integer "m" &lt; (ln "A")"c" ln ln ln "A". Image of the function. The set of values of the Carmichael function has counting function formula_46 where formula_47 Use in cryptography. The Carmichael function is important in cryptography due to its use in the RSA encryption algorithm. Proof of Theorem 1. For "n" "p", a prime, Theorem 1 is equivalent to Fermat's little theorem: formula_48 For prime powers "p""r", "r" &gt; 1, if formula_49 holds for some integer h, then raising both sides to the power p gives formula_50 for some other integer formula_51. By induction it follows that formula_52 for all a relatively prime to p and hence to "p""r". This establishes the theorem for "n" 4 or any odd prime power. Sharpening the result for higher powers of two. For a coprime to (powers of) 2 we have "a" 1 + 2"h"2 for some integer "h"2. Then, formula_53, where formula_54 is an integer. With "r" = 3, this is written formula_55 Squaring both sides gives formula_56 where formula_57 is an integer. It follows by induction that formula_58 for all formula_59 and all a coprime to formula_60. Integers with multiple prime factors. By the unique factorization theorem, any "n" &gt; 1 can be written in a unique way as formula_34 where "p"1 &lt; "p"2 &lt; ... &lt; "pk" are primes and "r"1, "r"2, ..., "rk" are positive integers. The results for prime powers establish that, for formula_61, formula_62 From this it follows that formula_63 where, as given by the recurrence, formula_64 From the Chinese remainder theorem one concludes that formula_65
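As a small computational companion to the recurrence given earlier, here is a naive sketch of "λ"("n") together with a check of the defining property and of its minimality. The factorisation is by trial division, so this is only meant for small "n", not for cryptographic-sized inputs; for "n" = 15 it reproduces the value "λ"(15) = 4 used in the examples above.

```python
# Direct implementation of the recurrence for lambda(n), plus a verification
# that lambda(n) is the smallest exponent m with a**m == 1 (mod n) for all units a.
from math import gcd, lcm

def factorize(n):
    """Return {prime: exponent} by trial division (naive, small n only)."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def carmichael_lambda(n):
    def lam_prime_power(p, r):
        phi = p ** (r - 1) * (p - 1)          # Euler totient of p**r
        return phi // 2 if p == 2 and r >= 3 else phi
    return lcm(*(lam_prime_power(p, r) for p, r in factorize(n).items())) if n > 1 else 1

n = 15
lam = carmichael_lambda(n)                     # lcm(lambda(3), lambda(5)) = lcm(2, 4) = 4
units = [a for a in range(1, n) if gcd(a, n) == 1]
assert all(pow(a, lam, n) == 1 for a in units)                                   # defining property
assert all(any(pow(a, m, n) != 1 for a in units) for m in range(1, lam))         # minimality
print(n, lam)
```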
[ { "math_id": 0, "text": "a^m \\equiv 1 \\pmod{n}" }, { "math_id": 1, "text": "a^1 \\not\\equiv 1\\pmod{5}" }, { "math_id": 2, "text": "a\\equiv1\\pmod{5}" }, { "math_id": 3, "text": "2^2 \\equiv 3^2 \\equiv 4 \\not\\equiv 1\\pmod{5}" }, { "math_id": 4, "text": "1^4\\equiv 2^4\\equiv 3^4\\equiv 4^4\\equiv1\\pmod{5}" }, { "math_id": 5, "text": "1^2\\equiv 3^2\\equiv 5^2\\equiv 7^2\\equiv1\\pmod{8}" }, { "math_id": 6, "text": "\\lambda(n) = \\begin{cases}\n\\varphi(n) & \\text{if }n\\text{ is 1, 2, 4, or an odd prime power,}\\\\\n\\tfrac12\\varphi(n) & \\text{if }n=2^r,\\ r\\ge3,\\\\\n\\operatorname{lcm}\\Bigl(\\lambda(n_1),\\lambda(n_2),\\ldots,\\lambda(n_k)\\Bigr) & \\text{if }n=n_1n_2\\ldots n_k\\text{ where }n_1,n_2,\\ldots,n_k\\text{ are powers of distinct primes.}\n\\end{cases}" }, { "math_id": 7, "text": "\\varphi(p^r) {{=}} p^{r-1}(p-1)." }, { "math_id": 8, "text": "a^m\\equiv 1\\pmod{n}" }, { "math_id": 9, "text": "a^{\\lambda(n)}\\equiv 1\\pmod{n}" }, { "math_id": 10, "text": "a^{\\lambda(n)}" }, { "math_id": 11, "text": "\\varphi" }, { "math_id": 12, "text": "\\varphi(\\lambda(n))" }, { "math_id": 13, "text": "g^m\\equiv1\\pmod{n}" }, { "math_id": 14, "text": "\\varphi(n)=8" }, { "math_id": 15, "text": "\\varphi(\\lambda(n))=2" }, { "math_id": 16, "text": "1\\equiv2^4\\equiv8^4\\equiv7^4\\equiv13^4" }, { "math_id": 17, "text": "4\\equiv2^2\\equiv8^2\\equiv7^2\\equiv13^2" }, { "math_id": 18, "text": "\\lambda(n)=\\varphi(n)=6" }, { "math_id": 19, "text": "n" }, { "math_id": 20, "text": "m" }, { "math_id": 21, "text": "k" }, { "math_id": 22, "text": "n = km" }, { "math_id": 23, "text": "m \\mid n." }, { "math_id": 24, "text": "a^r = 1^k \\cdot a^r \\equiv \\left(a^{\\lambda(n)}\\right)^k\\cdot a^r = a^{k\\lambda(n)+r} = a^m \\equiv 1\\pmod{n}" }, { "math_id": 25, "text": " a\\,|\\,b \\Rightarrow \\lambda(a)\\,|\\,\\lambda(b) " }, { "math_id": 26, "text": "\\gcd(k,b) = 1" }, { "math_id": 27, "text": "\\gcd(k,a) = 1" }, { "math_id": 28, "text": " b \\,|\\, (k^{\\lambda(b)} - 1)" }, { "math_id": 29, "text": " a \\,|\\, (k^{\\lambda(b)} - 1)" }, { "math_id": 30, "text": "k^{\\lambda(b)}\\equiv1\\pmod{a}" }, { "math_id": 31, "text": " \\lambda(a)\\,|\\,\\lambda(b) " }, { "math_id": 32, "text": "\\lambda(\\mathrm{lcm}(a,b)) = \\mathrm{lcm}(\\lambda(a), \\lambda(b))" }, { "math_id": 33, "text": "r_{\\mathrm{max}}=\\max_i\\{r_i\\}" }, { "math_id": 34, "text": " n= p_1^{r_1}p_2^{r_2} \\cdots p_{k}^{r_k} " }, { "math_id": 35, "text": "a^r \\equiv a^{\\lambda(n)+r} \\pmod n." }, { "math_id": 36, "text": "a \\equiv a^{\\lambda(n)+1} \\pmod n." }, { "math_id": 37, "text": "\\frac{1}{n} \\sum_{i \\leq n} \\lambda (i) = \\frac{n}{\\ln n} e^{B (1+o(1)) \\ln\\ln n / (\\ln\\ln\\ln n) }" }, { "math_id": 38, "text": "B := e^{-\\gamma} \\prod_{p\\in\\mathbb P} \\left({1 - \\frac{1}{(p-1)^2(p+1)}}\\right) \\approx 0.34537 " }, { "math_id": 39, "text": "\\left(2^\\frac45\\right)^l = 2^\\frac{4l}{5} = \\left(2^l\\right)^\\frac45 = n^\\frac45." }, { "math_id": 40, "text": "\\lambda(n) = \\frac{n} {(\\ln n)^{\\ln\\ln\\ln n + A + o(1)}}" }, { "math_id": 41, "text": "A := -1 + \\sum_{p\\in\\mathbb P} \\frac{\\ln p}{(p-1)^2} \\approx 0.2269688 " }, { "math_id": 42, "text": "N\\exp\\left(-0.69(\\Delta\\ln\\Delta)^\\frac13\\right)" }, { "math_id": 43, "text": "\\lambda(n_i) > \\left(\\ln n_i\\right)^{c\\ln\\ln\\ln n_i}." }, { "math_id": 44, "text": "\\lambda(n)<\\left(\\ln A\\right)^{c\\ln\\ln\\ln A}." 
}, { "math_id": 45, "text": "n=\\mathop{\\prod_{q \\in \\mathbb P}}_{(q-1)|m}q" }, { "math_id": 46, "text": "\\frac{x}{(\\ln x)^{\\eta+o(1)}} ," }, { "math_id": 47, "text": "\\eta=1-\\frac{1+\\ln\\ln2}{\\ln2} \\approx 0.08607" }, { "math_id": 48, "text": "a^{p-1}\\equiv1\\pmod{p}\\qquad\\text{for all }a\\text{ coprime to }p." }, { "math_id": 49, "text": "a^{p^{r-1}(p-1)}=1+hp^r" }, { "math_id": 50, "text": "a^{p^r(p-1)}=1+h'p^{r+1}" }, { "math_id": 51, "text": "h'" }, { "math_id": 52, "text": "a^{\\varphi(p^r)}\\equiv1\\pmod{p^r}" }, { "math_id": 53, "text": "a^2 = 1+4h_2(h_2+1) = 1+8\\binom{h_2+1}{2}=:1+8h_3" }, { "math_id": 54, "text": "h_3" }, { "math_id": 55, "text": "a^{2^{r-2}} = 1+2^r h_r." }, { "math_id": 56, "text": "a^{2^{r-1}}=\\left(1+2^r h_r\\right)^2=1+2^{r+1}\\left(h_r+2^{r-1}h_r^2\\right)=:1+2^{r+1}h_{r+1}," }, { "math_id": 57, "text": "h_{r+1}" }, { "math_id": 58, "text": "a^{2^{r-2}}=a^{\\frac{1}{2}\\varphi(2^r)}\\equiv 1\\pmod{2^r}" }, { "math_id": 59, "text": "r\\ge3" }, { "math_id": 60, "text": "2^r" }, { "math_id": 61, "text": "1\\le j\\le k" }, { "math_id": 62, "text": "a^{\\lambda\\left(p_j^{r_j}\\right)}\\equiv1 \\pmod{p_j^{r_j}}\\qquad\\text{for all }a\\text{ coprime to }n\\text{ and hence to }p_i^{r_i}." }, { "math_id": 63, "text": "a^{\\lambda(n)}\\equiv1 \\pmod{p_j^{r_j}}\\qquad\\text{for all }a\\text{ coprime to }n," }, { "math_id": 64, "text": "\\lambda(n) = \\operatorname{lcm}\\Bigl(\\lambda\\left(p_1^{r_1}\\right),\\lambda\\left(p_2^{r_2}\\right),\\ldots,\\lambda\\left(p_k^{r_k}\\right)\\Bigr)." }, { "math_id": 65, "text": "a^{\\lambda(n)}\\equiv1 \\pmod{n}\\qquad\\text{for all }a\\text{ coprime to }n." } ]
https://en.wikipedia.org/wiki?curid=1181756
11817965
Air permeability specific surface
Powder fineness indicator The air permeability specific surface of a powder material is a single-parameter measurement of the fineness of the powder. The specific surface is derived from the resistance to flow of air (or some other gas) through a porous bed of the powder. The SI units are m2·kg−1 ("mass specific surface") or m2·m−3 ("volume specific surface"). Significance. The particle size, or fineness, of powder materials is very often critical to their performance. Measurement of air permeability can be performed very rapidly, and does not require the powder to be exposed to vacuum or to gases or vapours, as is necessary for the BET method for determination of specific surface area. This makes it both very cost-effective, and also allows it to be used for materials which may be unstable under vacuum. When a powder reacts chemically with a liquid or gas at the surface of its particles, the specific surface is directly related to its rate of reaction. The measurement is therefore important in the manufacture of many processed materials. In particular, air permeability is almost universally used in the cement industry as a gauge of product fineness which is directly related to such properties as speed of setting and rate of strength development. Other fields where air permeability has been used to determine specific surface area include: In some fields, particularly powder metallurgy, the related Fisher number is the parameter of interest. This is the equivalent average particle diameter, assuming that the particles are spherical and have uniform size. Historically, the Fisher number was obtained by measurement using the "Fisher Sub-sieve Sizer", a commercial instrument containing an air pump and pressure regulator to establish a constant air flow, which is measured using a flowmeter. A number of manufacturers make equivalent instruments, and the Fisher number can be calculated from air permeability specific surface area values. Methods. Measurement consists of packing the powder into a cylindrical "bed" having a known porosity (i.e. volume of air-space between particles divided by total bed volume). A pressure drop is set up along the length of the bed cylinder. The resulting flow-rate of air through the bed yields the specific surface by the Kozeny–Carman equation: formula_0 where: S is specific surface, m2·kg−1 d is the cylinder diameter, m ρ is the sample particle density, kg·m−3 ε is the volume porosity of the bed (dimensionless) δP is the pressure drop across the bed, Pa l is the cylinder length, m η is the air dynamic viscosity, Pa·s Q is the flowrate, m3·s−1 It can be seen that the specific surface is proportional to the square root of the ratio of pressure to flow. Various standard methods have been proposed: Lea and Nurse method. The second of these was developed by Lea and Nurse. The bed is 25 mm in diameter and 10 mm thick. The desired porosity (which may vary in the range 0.4 to 0.6) is obtained by using a calculated weight of sample, pressed to precisely these dimensions. The required weight is given by: formula_1 A flowmeter consisting of a long capillary is connected in series with the powder bed. The pressure drop across the flowmeter (measured by a manometer) is proportional to the flowrate, and the proportionality constant can be measured by direct calibration. The pressure drop across the bed is measured by a similar manometer. 
Thus the required pressure/flow ratio can be obtained from the ratio of the two manometer readings, and when fed into the Carman equation, yields an "absolute" value of the air permeability surface area. The apparatus is maintained at a constant temperature, and dry air is used so that the air viscosity can be obtained from tables. Rigden method. This was developed in the desire for a simpler method. The bed is connected to a wide-diameter u-tube containing a liquid such as kerosene. On pressurizing the space between the u-tube and the bed, the liquid is forced down. The level of liquid then acts as a measure of both pressure and volume flow. The liquid level rises as air leaks out through the bed. The time taken for the liquid level to pass between two pre-set marks on the tube is measured by stop-watch. The mean pressure and mean flowrate can be derived from the dimensions of the tube and the density of the liquid. A later development used mercury in the u-tube: because of mercury's greater density, the apparatus could be more compact, and electrical contacts in the tube touching the conductive mercury could automatically start and stop a timer. Blaine method. This was developed independently by R L Blaine of the American National Bureau of Standards, and uses a small glass kerosene manometer to apply suction to the powder bed. It differs from the other methods in that, because of uncertainty of the dimensions of the manometer tube, absolute results can't be calculated from the Carman equation. Instead, the apparatus must be calibrated using a known standard material. The original standards, supplied by NBS, were certified using the Lea and Nurse method. Despite this shortcoming, the Blaine method has become by far the most commonly used for cement materials, mainly because of the ease of maintenance of the apparatus and simplicity of the procedure.
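To make the Carman equation concrete, the following sketch evaluates the bed mass and the specific surface for one set of assumed inputs. The bed dimensions match the Lea and Nurse cell described above, while the density, porosity, pressure drop and flow rate are illustrative values only, not standard test figures.

```python
# Numerical sketch of the Carman equation and the Lea-and-Nurse bed weight.
import math

d   = 0.025        # bed (cylinder) diameter, m  (25 mm, as in the Lea and Nurse cell)
l   = 0.010        # bed length, m               (10 mm)
rho = 3150.0       # particle density, kg/m^3    (roughly cement-like, assumed)
eps = 0.5          # bed porosity (dimensionless, within the 0.4-0.6 range)
dP  = 100.0        # pressure drop across the bed, Pa (assumed)
eta = 1.8e-5       # air dynamic viscosity, Pa*s
Q   = 2.2e-5       # volumetric air flow rate, m^3/s (assumed)

# Mass of powder needed to pack the bed at the chosen porosity: M = (pi/4) d^2 l rho (1 - eps)
M = math.pi / 4 * d**2 * l * rho * (1 - eps)

# Specific surface from the Kozeny-Carman form quoted above:
# S = 7 d / (rho (1 - eps)) * sqrt(eps^3 * pi * dP / (l * eta * Q))
S = 7 * d / (rho * (1 - eps)) * math.sqrt(eps**3 * math.pi * dP / (l * eta * Q))

print(f"sample mass ~ {M*1000:.2f} g, specific surface ~ {S:.0f} m^2/kg")
# For these assumed inputs the result is about 350 m^2/kg, comparable to a typical cement fineness.
```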
[ { "math_id": 0, "text": "S=\\cfrac{7d}{\\rho\\,(1-\\epsilon\\,)}\\sqrt{\\dfrac{\\epsilon\\,^3\\pi\\,\\delta\\,P}{l\\eta\\,Q}}" }, { "math_id": 1, "text": "M=\\tfrac{\\pi}{4}\\,d^2l\\rho\\,(1-\\epsilon\\,)" } ]
https://en.wikipedia.org/wiki?curid=11817965
1181818
Signed-digit representation
Positional system with signed digits; the representation may not be unique In mathematical notation for numbers, a signed-digit representation is a positional numeral system with a set of signed digits used to encode the integers. Signed-digit representation can be used to accomplish fast addition of integers because it can eliminate chains of dependent carries. In the binary numeral system, a special case signed-digit representation is the "non-adjacent form", which can offer speed benefits with minimal space overhead. History. Challenges in calculation stimulated early authors Colson (1726) and Cauchy (1840) to use signed-digit representation. The further step of replacing negated digits with new ones was suggested by Selling (1887) and Cajori (1928). In 1928, Florian Cajori noted the recurring theme of signed digits, starting with Colson (1726) and Cauchy (1840). In his book "History of Mathematical Notations", Cajori titled the section "Negative numerals". For completeness, Colson uses examples and describes addition (pp. 163–4), multiplication (pp. 165–6) and division (pp. 170–1) using a table of multiples of the divisor. He explains the convenience of approximation by truncation in multiplication. Colson also devised an instrument (Counting Table) that calculated using signed digits. Eduard Selling advocated inverting the digits 1, 2, 3, 4, and 5 to indicate the negative sign. He also suggested "snie", "jes", "jerd", "reff", and "niff" as names to use vocally. Most of the other early sources used a bar over a digit to indicate a negative sign for it. Another German usage of signed-digits was described in 1902 in Klein's encyclopedia. Definition and properties. Digit set. Let formula_0 be a finite set of numerical digits with cardinality formula_1 (If formula_2, then the positional number system is trivial and only represents the trivial ring), with each digit denoted as formula_3 for formula_4 formula_5 is known as the radix or number base. formula_0 can be used for a signed-digit representation if it's associated with a unique function formula_6 such that formula_7 for all formula_4 This function, formula_8 is what rigorously and formally establishes how integer values are assigned to the symbols/glyphs in formula_9 One benefit of this formalism is that the definition of "the integers" (however they may be defined) is not conflated with any particular system for writing/representing them; in this way, these two distinct (albeit closely related) concepts are kept separate. formula_0 can be partitioned into three distinct sets formula_10, formula_11, and formula_12, representing the positive, zero, and negative digits respectively, such that all digits formula_13 satisfy formula_14, all digits formula_15 satisfy formula_16 and all digits formula_17 satisfy formula_18. The cardinality of formula_10 is formula_19, the cardinality of formula_11 is formula_20, and the cardinality of formula_12 is formula_21, giving the number of positive and negative digits respectively, such that formula_22. Balanced form representations. Balanced form representations are representations where for every positive digit formula_23, there exist a corresponding negative digit formula_24 such that formula_25. It follows that formula_26. Only odd bases can have balanced form representations, as otherwise formula_27 has to be the opposite of itself and hence 0, but formula_28. 
In balanced form, the negative digits formula_17 are usually denoted as positive digits with a bar over the digit, as formula_29 for formula_13. For example, the digit set of balanced ternary would be formula_30 with formula_31, formula_32, and formula_33. This convention is adopted in finite fields of odd prime order formula_34: formula_35 Dual signed-digit representation. Every digit set formula_0 has a dual digit set formula_36 given by the inverse order of the digits with an isomorphism formula_37 defined by formula_38. As a result, for any signed-digit representations formula_39 of a number system ring formula_40 constructed from formula_0 with valuation formula_41, there exists a dual signed-digit representations of formula_40, formula_42, constructed from formula_36 with valuation formula_43, and an isomorphism formula_44 defined by formula_45, where formula_46 is the additive inverse operator of formula_40. The digit set for balanced form representations is self-dual. For integers. Given the digit set formula_0 and function formula_47 as defined above, let us define an integer endofunction formula_48 as the following: formula_49 If the only periodic point of formula_50 is the fixed point formula_51, then the set of all signed-digit representations of the integers formula_52 using formula_0 is given by the Kleene plus formula_53, the set of all finite concatenated strings of digits formula_54 with at least one digit, with formula_55. Each signed-digit representation formula_56 has a valuation formula_57 formula_58. Examples include balanced ternary with digits formula_59. Otherwise, if there exist a non-zero periodic point of formula_50, then there exist integers that are represented by an infinite number of non-zero digits in formula_0. Examples include the standard decimal numeral system with the digit set formula_60, which requires an infinite number of the digit formula_61 to represent the additive inverse formula_62, as formula_63, and the positional numeral system with the digit set formula_64 with formula_65, which requires an infinite number of the digit formula_66 to represent the number formula_67, as formula_68. For decimal fractions. If the integers can be represented by the Kleene plus formula_53, then the set of all signed-digit representations of the decimal fractions, or formula_5-adic rationals formula_69, is given by formula_70, the Cartesian product of the Kleene plus formula_53, the set of all finite concatenated strings of digits formula_54 with at least one digit, the singleton formula_71 consisting of the radix point (formula_72 or formula_73), and the Kleene star formula_74, the set of all finite concatenated strings of digits formula_75, with formula_76. Each signed-digit representation formula_77 has a valuation formula_78 formula_79 For real numbers. If the integers can be represented by the Kleene plus formula_53, then the set of all signed-digit representations of the real numbers formula_80 is given by formula_81, the Cartesian product of the Kleene plus formula_53, the set of all finite concatenated strings of digits formula_54 with at least one digit, the singleton formula_71 consisting of the radix point (formula_72 or formula_73), and the Cantor space formula_82, the set of all infinite concatenated strings of digits formula_83, with formula_55. Each signed-digit representation formula_84 has a valuation formula_85 formula_86. The infinite series always converges to a finite real number. For other number systems. 
All base-formula_5 numerals can be represented as a subset of formula_87, the set of all doubly infinite sequences of digits in formula_0, where formula_52 is the set of integers, and the ring of base-formula_5 numerals is represented by the formal power series ring formula_88, the doubly infinite series formula_89 where formula_90 for formula_91. Integers modulo powers of "b". The set of all signed-digit representations of the integers modulo formula_92, formula_93 is given by the set formula_94, the set of all finite concatenated strings of digits formula_95 of length formula_96, with formula_55. Each signed-digit representation formula_97 has a valuation formula_98 formula_99 Prüfer groups. A Prüfer group is the quotient group formula_100 of the integers and the formula_5-adic rationals. The set of all signed-digit representations of the Prüfer group is given by the Kleene star formula_74, the set of all finite concatenated strings of digits formula_101, with formula_55. Each signed-digit representation formula_102 has a valuation formula_103 formula_104 Circle group. The circle group is the quotient group formula_105 of the integers and the real numbers. The set of all signed-digit representations of the circle group is given by the Cantor space formula_82, the set of all right-infinite concatenated strings of digits formula_106. Each signed-digit representation formula_97 has a valuation formula_107 formula_108 The infinite series always converges. "b"-adic integers. The set of all signed-digit representations of the formula_5-adic integers, formula_109 is given by the Cantor space formula_82, the set of all left-infinite concatenated strings of digits formula_110. Each signed-digit representation formula_97 has a valuation formula_111 formula_112 "b"-adic solenoids. The set of all signed-digit representations of the formula_5-adic solenoids, formula_113 is given by the Cantor space formula_87, the set of all doubly infinite concatenated strings of digits formula_114. Each signed-digit representation formula_97 has a valuation formula_115 formula_116 In written and spoken language. Indo-Aryan languages. The oral and written forms of numbers in the Indo-Aryan languages use a negative numeral (e.g., "un" in Hindi and Bengali, "un" or "unna" in Punjabi, "ekon" in Marathi) for the numbers between 11 and 90 that end with a nine. The numbers followed by their names are shown for Punjabi below (the prefix "ik" means "one"): Similarly, the Sesotho language utilizes negative numerals to form 8's and 9's. Classical Latin. In Classical Latin, integers 18 and 19 did not even have a spoken, nor written form including corresponding parts for "eight" or "nine" in practice - despite them being in existence. Instead, in Classic Latin, For upcoming integer numerals [28, 29, 38, 39, ..., 88, 89] the additive form in the language had been much more common, however, for the listed numbers, the above form was still preferred. Hence, approaching thirty, numerals were expressed as: This is one of the main foundations of contemporary historians' reasoning, explaining why the subtractive I- and II- was so common in this range of cardinals compared to other ranges. Numerals 98 and 99 could also be expressed in both forms, yet "two to hundred" might have sounded a bit odd - clear evidence is the scarce occurrence of these numbers written down in a subtractive fashion in authentic sources. Finnish Language. 
Another language with this feature (by now, only in traces) that is still in active use today is Finnish, where the (spelled out) numerals are used this way should a digit of 8 or 9 occur. The scheme is like this: The above list is no special case; the pattern appears in larger cardinals as well, e.g.: These features remain present even in the shortest colloquial forms of the numerals: However, this phenomenon has no influence on written numerals: Finnish uses the standard Western Arabic decimal notation. Time keeping. In English it is common to refer to times as, for example, 'seven to three', with 'to' performing the negation. Other systems. There exist other signed-digit systems in which the base satisfies formula_117. A notable example of this is Booth encoding, which has a digit set formula_118 with formula_119 and formula_120, but which uses a base formula_121. The standard binary numeral system would only use digits of value formula_122. Note that non-standard signed-digit representations are not unique. For instance: formula_123 formula_124 formula_125 formula_126 The non-adjacent form (NAF) of Booth encoding does guarantee a unique representation for every integer value. However, this only applies to integer values. For example, consider the following repeating binary representations in NAF, formula_127 Notes and references. &lt;templatestyles src="Reflist/styles.css" /&gt;
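A minimal sketch of the NAF conversion discussed above, with digits listed least-significant first and -1 standing for the barred digit; it reproduces the unique representation formula_126 of the integer 7.

```python
# Non-adjacent form (NAF) of a non-negative integer, digits least significant first.
def naf(n):
    digits = []
    while n > 0:
        if n % 2 == 1:
            d = 2 - (n % 4)      # choose +1 or -1 so that the next bit becomes 0
        else:
            d = 0
        digits.append(d)
        n = (n - d) // 2
    return digits

def value(digits, base=2):
    return sum(d * base**i for i, d in enumerate(digits))

d7 = naf(7)
print(d7)                        # [-1, 0, 0, 1], i.e. 8 - 1 = 7, written 100-1 most significant first
assert value(d7) == 7
assert all(d7[i] == 0 or d7[i + 1] == 0 for i in range(len(d7) - 1))   # no two adjacent non-zero digits
```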
[ { "math_id": 0, "text": "\\mathcal{D}" }, { "math_id": 1, "text": "b > 1" }, { "math_id": 2, "text": "b \\leq 1" }, { "math_id": 3, "text": "d_i" }, { "math_id": 4, "text": "0 \\leq i < b." }, { "math_id": 5, "text": "b" }, { "math_id": 6, "text": "f_\\mathcal{D}:\\mathcal{D}\\rightarrow\\mathbb{Z}" }, { "math_id": 7, "text": "f_\\mathcal{D}(d_i) \\equiv i \\bmod b" }, { "math_id": 8, "text": "f_{\\mathcal{D}}," }, { "math_id": 9, "text": "\\mathcal{D}." }, { "math_id": 10, "text": "\\mathcal{D}_{+}" }, { "math_id": 11, "text": "\\mathcal{D}_{0}" }, { "math_id": 12, "text": "\\mathcal{D}_{-}" }, { "math_id": 13, "text": "d_{+}\\in\\mathcal{D}_{+}" }, { "math_id": 14, "text": "f_\\mathcal{D}(d_{+}) > 0" }, { "math_id": 15, "text": "d_{0}\\in\\mathcal{D}_{0}" }, { "math_id": 16, "text": "f_\\mathcal{D}(d_{0}) = 0" }, { "math_id": 17, "text": "d_{-}\\in\\mathcal{D}_{-}" }, { "math_id": 18, "text": "f_\\mathcal{D}(d_{-}) < 0" }, { "math_id": 19, "text": "b_{+}" }, { "math_id": 20, "text": "b_{0}" }, { "math_id": 21, "text": "b_{-}" }, { "math_id": 22, "text": "b = b_{+} + b_{0} + b_{-}" }, { "math_id": 23, "text": "d_{+}" }, { "math_id": 24, "text": "d_{-}" }, { "math_id": 25, "text": "f_\\mathcal{D}(d_{+}) = -f_\\mathcal{D}(d_{-})" }, { "math_id": 26, "text": "b_{+} = b_{-}" }, { "math_id": 27, "text": "d_{b/2}" }, { "math_id": 28, "text": "0\\ne \\frac b2" }, { "math_id": 29, "text": "d_{-} = \\bar{d}_{+}" }, { "math_id": 30, "text": "\\mathcal{D}_{3} = \\lbrace\\bar{1},0,1\\rbrace" }, { "math_id": 31, "text": "f_{\\mathcal{D}_{3}}(\\bar{1}) = -1" }, { "math_id": 32, "text": "f_{\\mathcal{D}_{3}}(0) = 0" }, { "math_id": 33, "text": "f_{\\mathcal{D}_{3}}(1) = 1" }, { "math_id": 34, "text": "q" }, { "math_id": 35, "text": "\\mathbb{F}_{q} = \\lbrace0, 1, \\bar{1} = -1,... d = \\frac{q - 1}{2},\\ \\bar{d} = \\frac{1-q}{2}\\ |\\ q = 0\\rbrace." 
}, { "math_id": 36, "text": "\\mathcal{D}^\\operatorname{op}" }, { "math_id": 37, "text": "g:\\mathcal{D}\\rightarrow\\mathcal{D}^\\operatorname{op}" }, { "math_id": 38, "text": "-f_\\mathcal{D} = g\\circ f_{\\mathcal{D}^\\operatorname{op}}" }, { "math_id": 39, "text": "\\mathcal{N}" }, { "math_id": 40, "text": "N" }, { "math_id": 41, "text": "v_\\mathcal{D}:\\mathcal{N}\\rightarrow N" }, { "math_id": 42, "text": "\\mathcal{N}^\\operatorname{op}" }, { "math_id": 43, "text": "v_{\\mathcal{D}^\\operatorname{op}}:\\mathcal{N}^\\operatorname{op}\\rightarrow N" }, { "math_id": 44, "text": "h:\\mathcal{N}\\rightarrow\\mathcal{N}^\\operatorname{op}" }, { "math_id": 45, "text": "-v_\\mathcal{D} = h\\circ v_{\\mathcal{D}^\\operatorname{op}}" }, { "math_id": 46, "text": "-" }, { "math_id": 47, "text": "f:\\mathcal{D}\\rightarrow\\mathbb{Z}" }, { "math_id": 48, "text": "T:\\mathbb{Z}\\rightarrow\\mathbb{Z}" }, { "math_id": 49, "text": "T(n) = \n\\begin{cases}\n\\frac{n - f(d_i)}{b} &\\text{if } n \\equiv i \\bmod b, 0 \\leq i < b\n\\end{cases}" }, { "math_id": 50, "text": "T" }, { "math_id": 51, "text": "0" }, { "math_id": 52, "text": "\\mathbb{Z}" }, { "math_id": 53, "text": "\\mathcal{D}^+" }, { "math_id": 54, "text": "d_n \\ldots d_0" }, { "math_id": 55, "text": "n\\in\\mathbb{N}" }, { "math_id": 56, "text": "m \\in \\mathcal{D}^+" }, { "math_id": 57, "text": "v_\\mathcal{D}:\\mathcal{D}^+\\rightarrow\\mathbb{Z}" }, { "math_id": 58, "text": "v_\\mathcal{D}(m) = \\sum_{i=0}^{n}f_\\mathcal{D}(d_{i})b^{i}" }, { "math_id": 59, "text": "\\mathcal{D} = \\lbrace \\bar{1}, 0, 1\\rbrace" }, { "math_id": 60, "text": "\\operatorname{dec} = \\lbrace 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 \\rbrace" }, { "math_id": 61, "text": "9" }, { "math_id": 62, "text": "-1" }, { "math_id": 63, "text": "T_\\operatorname{dec}(-1) = \\frac{-1 - 9}{10} = -1" }, { "math_id": 64, "text": "\\mathcal{D} = \\lbrace \\text{A}, 0, 1\\rbrace" }, { "math_id": 65, "text": "f(\\text{A}) = -4" }, { "math_id": 66, "text": "\\text{A}" }, { "math_id": 67, "text": "2" }, { "math_id": 68, "text": "T_\\mathcal{D}(2) = \\frac{2 - (-4)}{3} = 2" }, { "math_id": 69, "text": "\\mathbb{Z}[1\\backslash b]" }, { "math_id": 70, "text": "\\mathcal{Q} = \\mathcal{D}^+\\times\\mathcal{P}\\times\\mathcal{D}^*" }, { "math_id": 71, "text": "\\mathcal{P}" }, { "math_id": 72, "text": "." 
}, { "math_id": 73, "text": "," }, { "math_id": 74, "text": "\\mathcal{D}^*" }, { "math_id": 75, "text": "d_{-1} \\ldots d_{-m}" }, { "math_id": 76, "text": "m,n\\in\\mathbb{N}" }, { "math_id": 77, "text": "q \\in \\mathcal{Q}" }, { "math_id": 78, "text": "v_\\mathcal{D}:\\mathcal{Q}\\rightarrow\\mathbb{Z}[1\\backslash b]" }, { "math_id": 79, "text": "v_\\mathcal{D}(q) = \\sum_{i=-m}^{n}f_\\mathcal{D}(d_{i})b^{i}" }, { "math_id": 80, "text": "\\mathbb{R}" }, { "math_id": 81, "text": "\\mathcal{R} = \\mathcal{D}^+ \\times \\mathcal{P} \\times \\mathcal{D}^\\mathbb{N}" }, { "math_id": 82, "text": "\\mathcal{D}^\\mathbb{N}" }, { "math_id": 83, "text": "d_{-1} d_{-2} \\ldots" }, { "math_id": 84, "text": "r \\in \\mathcal{R}" }, { "math_id": 85, "text": "v_\\mathcal{D}:\\mathcal{R}\\rightarrow\\mathbb{R}" }, { "math_id": 86, "text": "v_\\mathcal{D}(r) = \\sum_{i=-\\infty}^{n}f_\\mathcal{D}(d_{i})b^{i}" }, { "math_id": 87, "text": "\\mathcal{D}^\\mathbb{Z}" }, { "math_id": 88, "text": "\\mathbb{Z}[[b,b^{-1}]]" }, { "math_id": 89, "text": "\\sum_{i = -\\infty}^{\\infty}a_i b^i" }, { "math_id": 90, "text": "a_i\\in\\mathbb{Z}" }, { "math_id": 91, "text": "i\\in\\mathbb{Z}" }, { "math_id": 92, "text": "b^n" }, { "math_id": 93, "text": "\\mathbb{Z}\\backslash b^n\\mathbb{Z}" }, { "math_id": 94, "text": "\\mathcal{D}^n" }, { "math_id": 95, "text": "d_{n - 1} \\ldots d_0" }, { "math_id": 96, "text": "n" }, { "math_id": 97, "text": "m \\in \\mathcal{D}^n" }, { "math_id": 98, "text": "v_\\mathcal{D}:\\mathcal{D}^n\\rightarrow\\mathbb{Z}/b^n\\mathbb{Z}" }, { "math_id": 99, "text": "v_\\mathcal{D}(m) \\equiv \\sum_{i=0}^{n - 1}f_\\mathcal{D}(d_{i})b^{i} \\bmod b^n" }, { "math_id": 100, "text": "\\mathbb{Z}(b^\\infty) = \\mathbb{Z}[1\\backslash b]/\\mathbb{Z}" }, { "math_id": 101, "text": "d_{1} \\ldots d_{n}" }, { "math_id": 102, "text": "p \\in \\mathcal{D}^*" }, { "math_id": 103, "text": "v_\\mathcal{D}:\\mathcal{D}^*\\rightarrow\\mathbb{Z}(b^\\infty)" }, { "math_id": 104, "text": "v_\\mathcal{D}(m) \\equiv \\sum_{i=1}^{n}f_\\mathcal{D}(d_{i})b^{-i} \\bmod 1" }, { "math_id": 105, "text": "\\mathbb{T} = \\mathbb{R}/\\mathbb{Z}" }, { "math_id": 106, "text": "d_{1} d_{2} \\ldots" }, { "math_id": 107, "text": "v_\\mathcal{D}:\\mathcal{D}^\\mathbb{N}\\rightarrow\\mathbb{T}" }, { "math_id": 108, "text": "v_\\mathcal{D}(m) \\equiv \\sum_{i=1}^{\\infty}f_\\mathcal{D}(d_{i})b^{-i} \\bmod 1" }, { "math_id": 109, "text": "\\mathbb{Z}_b" }, { "math_id": 110, "text": "\\ldots d_{1} d_{0}" }, { "math_id": 111, "text": "v_\\mathcal{D}:\\mathcal{D}^\\mathbb{N}\\rightarrow\\mathbb{Z}_{b}" }, { "math_id": 112, "text": "v_\\mathcal{D}(m) = \\sum_{i=0}^{\\infty}f_\\mathcal{D}(d_{i})b^{i}" }, { "math_id": 113, "text": "\\mathbb{T}_b" }, { "math_id": 114, "text": "\\ldots d_{1} d_{0} d_{-1} \\ldots" }, { "math_id": 115, "text": "v_\\mathcal{D}:\\mathcal{D}^\\mathbb{Z}\\rightarrow\\mathbb{T}_{b}" }, { "math_id": 116, "text": "v_\\mathcal{D}(m) = \\sum_{i=-\\infty}^{\\infty}f_\\mathcal{D}(d_{i})b^{i}" }, { "math_id": 117, "text": "b \\neq b_{+} + b_{-} + 1" }, { "math_id": 118, "text": "\\mathcal{D} = \\lbrace\\bar{1},0,1\\rbrace" }, { "math_id": 119, "text": "b_{+} = 1" }, { "math_id": 120, "text": "b_{-} = 1" }, { "math_id": 121, "text": "b = 2 < 3 = b_{+} + b_{-} + 1" }, { "math_id": 122, "text": "\\lbrace0,1\\rbrace" }, { "math_id": 123, "text": "0111_{\\mathcal{D}} = 4 + 2 + 1 = 7" }, { "math_id": 124, "text": "10\\bar{1}1_{\\mathcal{D}} = 8 - 2 + 1 = 7" }, { "math_id": 125, "text": "1\\bar{1}11_{\\mathcal{D}} = 8 - 4 + 2 
+ 1 = 7" }, { "math_id": 126, "text": "100\\bar{1}_{\\mathcal{D}} = 8 - 1 = 7" }, { "math_id": 127, "text": "\\frac{2}{3} = 0.\\overline{10}_{\\mathcal{D}} = 1.\\overline{0\\bar{1}}_{\\mathcal{D}}" } ]
https://en.wikipedia.org/wiki?curid=1181818
11826062
Auxiliary function
Construction in transcendental number theory In mathematics, auxiliary functions are an important construction in transcendental number theory. They are functions that appear in most proofs in this area of mathematics and that have specific, desirable properties, such as taking the value zero for many arguments, or having a zero of high order at some point. Definition. Auxiliary functions are not a rigorously defined kind of function, rather they are functions which are either explicitly constructed or at least shown to exist and which provide a contradiction to some assumed hypothesis, or otherwise prove the result in question. Creating a function during the course of a proof in order to prove the result is not a technique exclusive to transcendence theory, but the term "auxiliary function" usually refers to the functions created in this area. Explicit functions. Liouville's transcendence criterion. Because of the naming convention mentioned above, auxiliary functions can be dated back to their source simply by looking at the earliest results in transcendence theory. One of these first results was Liouville's proof that transcendental numbers exist when he showed that the so called Liouville numbers were transcendental. He did this by discovering a transcendence criterion which these numbers satisfied. To derive this criterion he started with a general algebraic number α and found some property that this number would necessarily satisfy. The auxiliary function he used in the course of proving this criterion was simply the minimal polynomial of α, which is the irreducible polynomial "f" with integer coefficients such that "f"(α) = 0. This function can be used to estimate how well the algebraic number α can be estimated by rational numbers "p"/"q". Specifically if α has degree "d" at least two then he showed that formula_0 and also, using the mean value theorem, that there is some constant depending on α, say "c"(α), such that formula_1 Combining these results gives a property that the algebraic number must satisfy; therefore any number not satisfying this criterion must be transcendental. The auxiliary function in Liouville's work is very simple, merely a polynomial that vanishes at a given algebraic number. This kind of property is usually the one that auxiliary functions satisfy. They either vanish or become very small at particular points, which is usually combined with the assumption that they do not vanish or can't be too small to derive a result. Fourier's proof of the irrationality of "e". Another simple, early occurrence is in Fourier's proof of the irrationality of "e", though the notation used usually disguises this fact. Fourier's proof used the power series of the exponential function: formula_2 By truncating this power series after, say, "N" + 1 terms we get a polynomial with rational coefficients of degree "N" which is in some sense "close" to the function "e""x". Specifically if we look at the auxiliary function defined by the remainder: formula_3 then this function—an exponential polynomial—should take small values for "x" close to zero. If "e" is a rational number then by letting "x" = 1 in the above formula we see that "R"(1) is also a rational number. However, Fourier proved that "R"(1) could not be rational by eliminating every possible denominator. Thus "e" cannot be rational. Hermite's proof of the irrationality of "e""r". 
Hermite extended the work of Fourier by approximating the function "e""x" not with a polynomial but with a rational function, that is a quotient of two polynomials. In particular he chose polynomials "A"("x") and "B"("x") such that the auxiliary function "R" defined by formula_4 could be made as small as he wanted around "x" = 0. But if "e""r" were rational then "R"("r") would have to be rational with a particular denominator, yet Hermite could make "R"("r") too small to have such a denominator, hence a contradiction. Hermite's proof of the transcendence of "e". To prove that "e" was in fact transcendental, Hermite took his work one step further by approximating not just the function "e""x", but also the functions "e""kx" for integers "k" = 1...,"m", where he assumed "e" was algebraic with degree "m". By approximating "e""kx" by rational functions with integer coefficients and with the same denominator, say "A""k"("x") / "B"("x"), he could define auxiliary functions "R""k"("x") by formula_5 For his contradiction Hermite supposed that "e" satisfied the polynomial equation with integer coefficients "a"0 + "a"1"e" + ... + "a""m""e""m" = 0. Multiplying this expression through by "B"(1) he noticed that it implied formula_6 The right hand side is an integer and so, by estimating the auxiliary functions and proving that 0 &lt; |"R"| &lt; 1 he derived the necessary contradiction. Auxiliary functions from the pigeonhole principle. The auxiliary functions sketched above can all be explicitly calculated and worked with. A breakthrough by Axel Thue and Carl Ludwig Siegel in the twentieth century was the realisation that these functions don't necessarily need to be explicitly known – it can be enough to know they exist and have certain properties. Using the Pigeonhole Principle Thue, and later Siegel, managed to prove the existence of auxiliary functions which, for example, took the value zero at many different points, or took high order zeros at a smaller collection of points. Moreover they proved it was possible to construct such functions without making the functions too large. Their auxiliary functions were not explicit functions, then, but by knowing that a certain function with certain properties existed, they used its properties to simplify the transcendence proofs of the nineteenth century and give several new results. This method was picked up on and used by several other mathematicians, including Alexander Gelfond and Theodor Schneider who used it independently to prove the Gelfond–Schneider theorem. Alan Baker also used the method in the 1960s for his work on linear forms in logarithms and ultimately Baker's theorem. Another example of the use of this method from the 1960s is outlined below. Auxiliary polynomial theorem. Let β equal the cube root of "b/a" in the equation "ax"3 + "bx"3 = "c" and assume "m" is an integer that satisfies "m" + 1 &gt; 2"n"/3 ≥ "m" ≥ 3 where "n" is a positive integer. Then there exists formula_7 such that formula_8 formula_9 The auxiliary polynomial theorem states formula_10 A theorem of Lang. In the 1960s Serge Lang proved a result using this non-explicit form of auxiliary functions. The theorem implies both the Hermite–Lindemann and Gelfond–Schneider theorems. The theorem deals with a number field "K" and meromorphic functions "f"1...,"f""N" of order at most "ρ", at least two of which are algebraically independent, and such that if we differentiate any of these functions then the result is a polynomial in all of the functions. 
Under these hypotheses the theorem states that if there are "m" distinct complex numbers ω1...,ω"m" such that "f""i" (ω"j" ) is in "K" for all combinations of "i" and "j", then "m" is bounded by formula_11 To prove the result Lang took two algebraically independent functions from "f"1...,"f""N", say "f" and "g", and then created an auxiliary function which was simply a polynomial "F" in "f" and "g". This auxiliary function could not be explicitly stated since "f" and "g" are not explicitly known. But using Siegel's lemma Lang showed how to make "F" in such a way that it vanished to a high order at the "m" complex numbers ω1...,ω"m". Because of this high order vanishing it can be shown that a high-order derivative of "F" takes a value of small size one of the ω"i"s, "size" here referring to an algebraic property of a number. Using the maximum modulus principle Lang also found a separate way to estimate the absolute values of derivatives of "F", and using standard results comparing the size of a number and its absolute value he showed that these estimates were contradicted unless the claimed bound on "m" holds. Interpolation determinants. After the myriad of successes gleaned from using existent but not explicit auxiliary functions, in the 1990s Michel Laurent introduced the idea of interpolation determinants. These are alternants – determinants of matrices of the form formula_12 where φ"i" are a set of functions interpolated at a set of points ζ"j". Since a determinant is just a polynomial in the entries of a matrix, these auxiliary functions succumb to study by analytic means. A problem with the method was the need to choose a basis before the matrix could be worked with. A development by Jean-Benoît Bost removed this problem with the use of Arakelov theory, and research in this area is ongoing. The example below gives an idea of the flavour of this approach. A proof of the Hermite–Lindemann theorem. One of the simpler applications of this method is a proof of the real version of the Hermite–Lindemann theorem. That is, if α is a non-zero, real algebraic number, then "e"α is transcendental. First we let "k" be some natural number and "n" be a large multiple of "k". The interpolation determinant considered is the determinant Δ of the "n"4×"n"4 matrix formula_13 The rows of this matrix are indexed by 1 ≤ "i"1 ≤ "n"4/"k" and 1 ≤ "i"2 ≤ "k", while the columns are indexed by 1 ≤ "j"1 ≤ "n"3 and 1 ≤ "j"2 ≤ "n". So the functions in our matrix are monomials in "x" and "e""x" and their derivatives, and we are interpolating at the "k" points 0,α,2α...,("k" − 1)α. Assuming that "e"α is algebraic we can form the number field Q(α,"e"α) of degree "m" over Q, and then multiply Δ by a suitable denominator as well as all its images under the embeddings of the field Q(α,"e"α) into C. For algebraic reasons this product is necessarily an integer, and using arguments relating to Wronskians it can be shown that it is non-zero, so its absolute value is an integer Ω ≥ 1. Using a version of the mean value theorem for matrices it is possible to get an analytic bound on Ω as well, and in fact using big-O notation we have formula_14 The number "m" is fixed by the degree of the field Q(α,"e"α), but "k" is the number of points we are interpolating at, and so we can increase it at will. And once "k" &gt; 2("m" + 1)/3 we will have Ω → 0, eventually contradicting the established condition Ω ≥ 1. Thus "e"α cannot be algebraic after all. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\left|f\\left(\\frac{p}{q}\\right)\\right|\\geq\\frac{1}{q^d}," }, { "math_id": 1, "text": "\\left|f\\left(\\frac{p}{q}\\right)\\right| \\leq c(\\alpha)\\left|\\alpha-\\frac{p}{q}\\right|." }, { "math_id": 2, "text": "e^x=\\sum_{n=0}^{\\infty} \\frac{x^n}{n!}." }, { "math_id": 3, "text": "R(x)=e^x-\\sum_{n=0}^{N} \\frac{x^n}{n!}" }, { "math_id": 4, "text": "R(x)=B(x)e^x-A(x)" }, { "math_id": 5, "text": "R_k(x)=B(x)e^{kx}-A_k(x)." }, { "math_id": 6, "text": "R=a_0+a_1 R_1(1) + \\cdots +a_m R_m(1)=a_1 A_1(1)+ \\cdots +a_m A_m(1)." }, { "math_id": 7, "text": "F(X,Y) = P(X) + Y*Q(X)" }, { "math_id": 8, "text": "\\sum_{i=0}^{m+n} u_i X^i = P(X)," }, { "math_id": 9, "text": "\\sum_{i=0}^{m+n} v_i X^i = Q(X)." }, { "math_id": 10, "text": "\\max_{0 \\le i \\le m+n} {(|u_i|,|v_i|)}\\le 2b^{9(m+n)}." }, { "math_id": 11, "text": "m\\leq 20\\rho [K:\\mathbb{Q}]." }, { "math_id": 12, "text": "\\mathcal{M}=\\left(\\varphi_i(\\zeta_j)\\right)_{1\\leq i,j\\leq N}" }, { "math_id": 13, "text": "\\left(\\{\\exp(j_2x)x^{j_1-1}\\}^{(i_1-1)}\\Big|_{x=(i_2-1)\\alpha}\\right)." }, { "math_id": 14, "text": "\\Omega=O\\left(\\exp\\left(\\left(\\frac{m+1}{k}-\\frac{3}{2}\\right)n^8\\log n\\right)\\right)." } ]
https://en.wikipedia.org/wiki?curid=11826062
11827553
Reciprocal Fibonacci constant
Mathematical constant The reciprocal Fibonacci constant ψ is the sum of the reciprocals of the Fibonacci numbers: formula_0 Because the ratio of successive terms tends to the reciprocal of the golden ratio, which is less than 1, the ratio test shows that the sum converges. The value of ψ is approximately formula_1 (sequence in the OEIS). With "k" terms, the series gives O("k") digits of accuracy. Bill Gosper derived an accelerated series which provides O("k"2) digits. ψ is irrational, as was conjectured by Paul Erdős, Ronald Graham, and Leonard Carlitz, and proved in 1989 by Richard André-Jeannin. Its continued fraction representation is: formula_2 (sequence in the OEIS).
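The decimal value quoted above is easy to confirm by direct summation; the sketch below uses exact rational arithmetic for the partial sums and only converts to floating point at the end.

```python
# Partial sums of the reciprocal Fibonacci series.
from fractions import Fraction

def psi_partial(k):
    a, b = 1, 1                      # F_1, F_2
    total = Fraction(0)
    for _ in range(k):
        total += Fraction(1, a)
        a, b = b, a + b
    return total

print(float(psi_partial(200)))       # 3.3598856662431775..., matching the digits quoted above
```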
[ { "math_id": 0, "text": "\\psi = \\sum_{k=1}^{\\infty} \\frac{1}{F_k} = \\frac{1}{1} + \\frac{1}{1} + \\frac{1}{2} + \\frac{1}{3} + \\frac{1}{5} + \\frac{1}{8} + \\frac{1}{13} + \\frac{1}{21} + \\cdots." }, { "math_id": 1, "text": "\\psi = 3.359885666243177553172011302918927179688905133732\\dots" }, { "math_id": 2, "text": "\\psi = [3;2,1,3,1,1,13,2,3,3,2,1,1,6,3,2,4,362,2,4,8,6,30,50,1,6,3,3,2,7,2,3,1,3,2, \\dots] \\!\\," } ]
https://en.wikipedia.org/wiki?curid=11827553
1182871
Hard-core predicate
In cryptography, a hard-core predicate of a one-way function "f" is a predicate "b" (i.e., a function whose output is a single bit) which is easy to compute (as a function of "x") but is hard to compute given "f(x)". In formal terms, there is no probabilistic polynomial-time (PPT) algorithm that computes "b(x)" from "f(x)" with probability significantly greater than one half over random choice of "x". In other words, if "x" is drawn uniformly at random, then given "f(x)", any PPT adversary can only distinguish the hard-core bit "b(x)" and a uniformly random bit with negligible advantage over the length of "x". A hard-core function can be defined similarly. That is, if "x" is chosen uniformly at random, then given "f(x)", any PPT algorithm can only distinguish the hard-core function value "h(x)" and uniformly random bits of length "|h(x)|" with negligible advantage over the length of "x". A hard-core predicate captures "in a concentrated sense" the hardness of inverting "f". While a one-way function is hard to invert, there are no guarantees about the feasibility of computing partial information about the preimage "c" from the image "f(x)". For instance, while RSA is conjectured to be a one-way function, the Jacobi symbol of the preimage can be easily computed from that of the image. It is clear that if a one-to-one function has a hard-core predicate, then it must be one way. Oded Goldreich and Leonid Levin (1989) showed how every one-way function can be trivially modified to obtain a one-way function that has a specific hard-core predicate. Let "f" be a one-way function. Define "g(x,r) = (f(x), r)" where the length of "r" is the same as that of "x". Let "xj" denote the "j"th bit of "x" and "rj" the "j"th bit of "r". Then formula_0 is a hard core predicate of "g". Note that "b(x, r)" = &lt;"x, r"&gt; where &lt;·, ·&gt; denotes the standard inner product on the vector space (Z2)"n". This predicate is hard-core due to computational issues; that is, it is not hard to compute because "g(x, r)" is information theoretically lossy. Rather, if there exists an algorithm that computes this predicate efficiently, then there is another algorithm that can invert "f" efficiently. A similar construction yields a hard-core function with "O(log |x|)" output bits. Suppose "f" is a strong one-way function. Define "g(x, r)" = "(f(x), r)" where |"r"| = 2|"x"|. Choose a length function "l(n)" = "O(log n)" s.t. "l(n)" ≤ "n". Let formula_1 Then "h(x, r)" := "b1(x, r) b2(x, r) ... bl(|x|)(x, r)" is a hard-core function with output length "l(|x|)". It is sometimes the case that an actual bit of the input "x" is hard-core. For example, every single bit of inputs to the RSA function is a hard-core predicate of RSA and blocks of "O(log |x|)" bits of "x" are indistinguishable from random bit strings in polynomial time (under the assumption that the RSA function is hard to invert). Hard-core predicates give a way to construct a pseudorandom generator from any one-way permutation. If "b" is a hard-core predicate of a one-way permutation "f", and "s" is a random seed, then formula_2 is a pseudorandom bit sequence, where "fn" means the n-th iteration of applying "f" on "s", and "b" is the generated hard-core bit by each round "n". Hard-core predicates of trapdoor one-way permutations (known as trapdoor predicates) can be used to construct semantically secure public-key encryption schemes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
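As an illustration of the Goldreich–Levin predicate and of how a hard-core bit feeds the generator construction above, here is a toy sketch. The "one-way" map used is just modular exponentiation with arbitrary demo parameters; it is an assumption made for illustration and is not claimed to be a secure, or even bijective, choice.

```python
# Toy sketch: Goldreich-Levin predicate b(x, r) = <x, r> mod 2, iterated over a stand-in map f.
import secrets

def inner_product_bit(x: int, r: int) -> int:
    """<x, r> over GF(2): parity of the bitwise AND of the two bit strings."""
    return bin(x & r).count("1") % 2

# Stand-in for a one-way permutation (discrete-exponentiation style); illustrative parameters only.
p, g = 2**127 - 1, 3

def f(x: int) -> int:
    return pow(g, x, p)

def hardcore_bits(seed: int, r: int, n_bits: int) -> list:
    """Emit b(f^i(seed), r) for i = 0, 1, ..., mirroring the generator construction above."""
    bits, x = [], seed
    for _ in range(n_bits):
        bits.append(inner_product_bit(x, r))
        x = f(x)
    return bits

seed, r = secrets.randbelow(p) or 1, secrets.randbelow(p) or 1
print(hardcore_bits(seed, r, 16))
```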
[ { "math_id": 0, "text": " b(x,r) := \\langle x, r\\rangle = \\bigoplus_j x_j r_j " }, { "math_id": 1, "text": " b_i(x, r) = \\bigoplus_j x_j r_{i+j}. " }, { "math_id": 2, "text": " \\{ b(f^n(s))\\}_n" } ]
https://en.wikipedia.org/wiki?curid=1182871
1182975
Dual basis in a field extension
In mathematics, the linear algebra concept of dual basis can be applied in the context of a finite extension "L"/"K", by using the field trace. This requires the property that the field trace "Tr""L"/"K" provides a non-degenerate quadratic form over "K". This can be guaranteed if the extension is separable; it is automatically true if "K" is a perfect field, and hence in the cases where "K" is finite, or of characteristic zero. A dual basis is not a concrete basis like the polynomial basis or the normal basis; rather it provides a way of using a second basis for computations. Consider two bases for elements in a finite field, GF("p""m"): formula_0 and formula_1 then "B"2 can be considered a dual basis of "B"1 provided formula_2 Here the trace of a value in GF("p""m") can be calculated as follows: formula_3 Using a dual basis can provide a way to easily communicate between devices that use different bases, rather than having to explicitly convert between bases using the change of basis formula. Furthermore, if a dual basis is implemented then conversion from an element in the original basis to the dual basis can be accomplished with multiplication by the multiplicative identity (usually 1). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
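As a concrete illustration of the definitions above, the following sketch works in GF(2^3) built as GF(2)[x]/(x^3 + x + 1); the basis B1 = {1, α, α²} and the choice of irreducible polynomial are assumptions made for the example. It computes the trace as in formula_3 and finds the dual basis by brute-force search over the eight field elements.

```python
# Field elements are 3-bit integers b2 b1 b0 representing b2*x^2 + b1*x + b0.
IRRED = 0b1011   # x^3 + x + 1 (assumed irreducible modulus for this example)
M = 3            # extension degree; the field has 2^3 = 8 elements

def gf_mul(u: int, v: int) -> int:
    """Multiply two GF(2^3) elements: carry-less product, then reduce."""
    prod = 0
    for i in range(M):
        if (v >> i) & 1:
            prod ^= u << i
    for i in range(2 * M - 2, M - 1, -1):      # reduce terms of degree >= 3
        if (prod >> i) & 1:
            prod ^= IRRED << (i - M)
    return prod

def trace(beta: int) -> int:
    """Tr(beta) = beta + beta^2 + beta^4, with field addition = XOR (0 or 1)."""
    t, power = 0, beta
    for _ in range(M):
        t ^= power
        power = gf_mul(power, power)
    return t

B1 = [0b001, 0b010, 0b100]                     # {1, a, a^2}

# Find the dual basis B2 by brute force: Tr(alpha_i * gamma_j) = delta_ij.
B2 = []
for j in range(M):
    for cand in range(1, 8):
        if all(trace(gf_mul(B1[i], cand)) == (1 if i == j else 0) for i in range(M)):
            B2.append(cand)
            break

print("dual basis:", [bin(g) for g in B2])     # expected: ['0b1', '0b100', '0b10']
```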
[ { "math_id": 0, "text": "B_1 = {\\alpha_0, \\alpha_1, \\ldots, \\alpha_{m-1}}" }, { "math_id": 1, "text": "B_2 = {\\gamma_0, \\gamma_1, \\ldots, \\gamma_{m-1}}" }, { "math_id": 2, "text": "\\operatorname{Tr}(\\alpha_i\\cdot \\gamma_j) = \\left\\{\\begin{matrix} 0, & \\operatorname{if}\\ i \\neq j\\\\ 1, & \\operatorname{otherwise} \\end{matrix}\\right. " }, { "math_id": 3, "text": "\\operatorname{Tr}(\\beta ) = \\sum_{i=0}^{m-1} \\beta^{p^i}" } ]
https://en.wikipedia.org/wiki?curid=1182975
1182982
Dual basis
Linear algebra concept In linear algebra, given a vector space formula_0 with a basis formula_1 of vectors indexed by an index set formula_2 (the cardinality of formula_2 is the dimension of formula_0), the dual set of formula_1 is a set formula_3 of vectors in the dual space formula_4 with the same index set formula_2 such that formula_1 and formula_3 form a biorthogonal system. The dual set is always linearly independent but does not necessarily span formula_4. If it does span formula_4, then formula_3 is called the dual basis or reciprocal basis for the basis formula_1. Denoting the indexed vector sets as formula_5 and formula_6, being biorthogonal means that the elements pair to have an inner product equal to 1 if the indexes are equal, and equal to 0 otherwise. Symbolically, evaluating a dual vector in formula_4 on a vector in the original space formula_0: formula_7 where formula_8 is the Kronecker delta symbol. Introduction. To perform operations with a vector, we must have a straightforward method of calculating its components. In a Cartesian frame the necessary operation is the dot product of the vector and the base vector. For example, formula_9 where formula_10 is the basis in a Cartesian frame. The components of formula_11 can be found by formula_12 However, in a non-Cartesian frame, we do not necessarily have formula_13 for all formula_14. However, it is always possible to find vectors formula_15 in the dual space such that formula_16 The equality holds when the formula_15s are the dual basis of formula_17s. Notice the difference in position of the index formula_18. Existence and uniqueness. The dual set always exists and gives an injection from "V" into "V"∗, namely the mapping that sends "vi" to "vi". This says, in particular, that the dual space has dimension greater or equal to that of "V". However, the dual set of an infinite-dimensional "V" does not span its dual space "V"∗. For example, consider the map "w" in "V"∗ from "V" into the underlying scalars "F" given by "w"("vi") = 1 for all "i". This map is clearly nonzero on all "vi". If "w" were a finite linear combination of the dual basis vectors "vi", say formula_19 for a finite subset "K" of "I", then for any "j" not in "K", formula_20, contradicting the definition of "w". So, this "w" does not lie in the span of the dual set. The dual of an infinite-dimensional space has greater dimension (this being a greater infinite cardinality) than the original space has, and thus these cannot have a basis with the same indexing set. However, a dual set of vectors exists, which defines a subspace of the dual isomorphic to the original space. Further, for topological vector spaces, a continuous dual space can be defined, in which case a dual basis may exist. Finite-dimensional vector spaces. In the case of finite-dimensional vector spaces, the dual set is always a dual basis and it is unique. These bases are denoted by formula_21 and formula_22. If one denotes the evaluation of a covector on a vector as a pairing, the biorthogonality condition becomes: formula_23 The association of a dual basis with a basis gives a map from the space of bases of "V" to the space of bases of "V"∗, and this is also an isomorphism. For topological fields such as the real numbers, the space of duals is a topological space, and this gives a homeomorphism between the Stiefel manifolds of bases of these spaces. A categorical and algebraic construction of the dual space. 
Another way to introduce the dual space of a vector space (module) is by introducing it in a categorical sense. To do this, let formula_24 be a module defined over the ring formula_25 (that is, formula_24 is an object in the category formula_26). Then we define the dual space of formula_24, denoted formula_27, to be formula_28, the module formed of all formula_25-linear module homomorphisms from formula_24 into formula_25. Note then that we may define a dual to the dual, referred to as the double dual of formula_24, written as formula_29, and defined as formula_30. To formally construct a basis for the dual space, we shall now restrict our view to the case where formula_31 is a finite-dimensional free (left) formula_25-module, where formula_25 is a ring with unity. Then, we assume that the set formula_32 is a basis for formula_31. From here, we define the Kronecker Delta function formula_33 over the basis formula_32 by formula_34 if formula_35 and formula_36 if formula_37. Then the set formula_38 describes a linearly independent set with each formula_39. Since formula_31 is finite-dimensional, the basis formula_32 is of finite cardinality. Then, the set formula_40 is a basis to formula_41 and formula_41 is a free (right) formula_25-module. Examples. For example, the standard basis vectors of formula_42 (the Cartesian plane) are formula_43 and the standard basis vectors of its dual space formula_44 are formula_45 In 3-dimensional Euclidean space, for a given basis formula_46, the biorthogonal (dual) basis formula_47 can be found by formulas below: formula_48 where T denotes the transpose and formula_49 is the volume of the parallelepiped formed by the basis vectors formula_50 and formula_51 In general the dual basis of a basis in a finite-dimensional vector space can be readily computed as follows: given the basis formula_52 and corresponding dual basis formula_53 we can build matrices formula_54 Then the defining property of the dual basis states that formula_55 Hence the matrix for the dual basis formula_56 can be computed as formula_57 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
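The matrix recipe above translates directly into code. The following NumPy sketch uses an arbitrarily chosen basis of R³ (the numerical values are example data only), computes the dual basis as G = (F⁻¹)ᵀ, and cross-checks the first dual vector against the three-dimensional cross-product formula.

```python
import numpy as np

# Columns of F are the basis vectors f_1, f_2, f_3 (example values).
F = np.array([[1.0, 0.0, 1.0],     # f_1
              [1.0, 1.0, 0.0],     # f_2
              [0.0, 1.0, 1.0]]).T  # transpose so vectors become columns

G = np.linalg.inv(F).T                       # columns of G are the dual basis
print(np.allclose(G.T @ F, np.eye(3)))       # defining property G^T F = I

# Cross-check against e^1 = (e_2 x e_3) / V with V the scalar triple product.
V = np.dot(F[:, 0], np.cross(F[:, 1], F[:, 2]))
e1_dual = np.cross(F[:, 1], F[:, 2]) / V
print(np.allclose(e1_dual, G[:, 0]))         # True
```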
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": "I" }, { "math_id": 3, "text": "B^*" }, { "math_id": 4, "text": "V^*" }, { "math_id": 5, "text": "B = \\{v_i\\}_{i\\in I}" }, { "math_id": 6, "text": "B^{*} = \\{v^i\\}_{i \\in I}" }, { "math_id": 7, "text": "\nv^i\\cdot v_j = \\delta^i_j =\n\\begin{cases}\n 1 & \\text{if } i = j\\\\\n 0 & \\text{if } i \\ne j\\text{,}\n\\end{cases}\n" }, { "math_id": 8, "text": "\\delta^i_j" }, { "math_id": 9, "text": "\\mathbf{x} = x^1 \\mathbf{i}_1 + x^2 \\mathbf{i}_2 + x^3 \\mathbf{i}_3" }, { "math_id": 10, "text": "\\{\\mathbf{i}_1, \\mathbf{i}_2, \\mathbf{i}_3\\}" }, { "math_id": 11, "text": "\\mathbf{x}" }, { "math_id": 12, "text": "x^k = \\mathbf{x} \\cdot \\mathbf{i}_k." }, { "math_id": 13, "text": "\\mathbf{e}_i\\cdot\\mathbf{e}_j=0" }, { "math_id": 14, "text": "i\\neq j" }, { "math_id": 15, "text": "\\mathbf{e}^i" }, { "math_id": 16, "text": "x^i = \\mathbf{e}^i(\\mathbf{x}) \\qquad (i = 1, 2, 3)." }, { "math_id": 17, "text": "\\mathbf{e}_i" }, { "math_id": 18, "text": "i" }, { "math_id": 19, "text": "w=\\sum_{i\\in K}\\alpha_iv^i" }, { "math_id": 20, "text": "w(v_j)=\\left(\\sum_{i\\in K}\\alpha_iv^i\\right)\\left(v_j\\right)=0" }, { "math_id": 21, "text": "B=\\{e_1,\\dots,e_n\\}" }, { "math_id": 22, "text": "B^*=\\{e^1,\\dots,e^n\\}" }, { "math_id": 23, "text": "\\left\\langle e^i, e_j \\right\\rangle = \\delta^i_j." }, { "math_id": 24, "text": "A" }, { "math_id": 25, "text": "R" }, { "math_id": 26, "text": "R\\text{-}\\mathbf{Mod}" }, { "math_id": 27, "text": "A^{\\ast}" }, { "math_id": 28, "text": "\\text{Hom}_R(A,R)" }, { "math_id": 29, "text": "A^{\\ast\\ast}" }, { "math_id": 30, "text": "\\text{Hom}_R(A^{\\ast},R)" }, { "math_id": 31, "text": "F" }, { "math_id": 32, "text": "X" }, { "math_id": 33, "text": "\\delta_{xy}" }, { "math_id": 34, "text": "\\delta_{xy}=1" }, { "math_id": 35, "text": "x=y" }, { "math_id": 36, "text": "\\delta_{xy}=0" }, { "math_id": 37, "text": "x\\ne y" }, { "math_id": 38, "text": " S = \\lbrace f_x:F \\to R \\; | \\; f_x(y)=\\delta_{xy} \\rbrace " }, { "math_id": 39, "text": "f_x \\in \\text{Hom}_R(F,R)" }, { "math_id": 40, "text": " S " }, { "math_id": 41, "text": "F^\\ast" }, { "math_id": 42, "text": "\\R^2" }, { "math_id": 43, "text": "\n \\left\\{\\mathbf{e}_1, \\mathbf{e}_2\\right\\} = \\left\\{\n \\begin{pmatrix}\n 1 \\\\\n 0 \n \\end{pmatrix},\n \\begin{pmatrix}\n 0 \\\\\n 1 \n \\end{pmatrix}\n \\right\\}\n" }, { "math_id": 44, "text": "(\\R^2)^*" }, { "math_id": 45, "text": "\n \\left\\{\\mathbf{e}^1, \\mathbf{e}^2\\right \\} = \\left\\{\n \\begin{pmatrix}\n 1 & 0 \n \\end{pmatrix},\n \\begin{pmatrix}\n 0 & 1 \n \\end{pmatrix}\n \\right\\}\\text{.}\n" }, { "math_id": 46, "text": "\\{\\mathbf{e}_1, \\mathbf{e}_2, \\mathbf{e}_3\\}" }, { "math_id": 47, "text": "\\{\\mathbf{e}^1, \\mathbf{e}^2, \\mathbf{e}^3\\}" }, { "math_id": 48, "text": "\n \\mathbf{e}^1 = \\left(\\frac{\\mathbf{e}_2 \\times \\mathbf{e}_3}{V}\\right)^\\mathsf{T},\\ \n \\mathbf{e}^2 = \\left(\\frac{\\mathbf{e}_3 \\times \\mathbf{e}_1}{V}\\right)^\\mathsf{T},\\ \n \\mathbf{e}^3 = \\left(\\frac{\\mathbf{e}_1 \\times \\mathbf{e}_2}{V}\\right)^\\mathsf{T}.\n" }, { "math_id": 49, "text": "\n V \\,=\\,\n \\left(\\mathbf{e}_1;\\mathbf{e}_2;\\mathbf{e}_3\\right) \\,=\\,\n \\mathbf{e}_1\\cdot(\\mathbf{e}_2\\times\\mathbf{e}_3) \\,=\\,\n \\mathbf{e}_2\\cdot(\\mathbf{e}_3\\times\\mathbf{e}_1) \\,=\\,\n \\mathbf{e}_3\\cdot(\\mathbf{e}_1\\times\\mathbf{e}_2)\n" }, { "math_id": 50, "text": 
"\\mathbf{e}_1,\\,\\mathbf{e}_2" }, { "math_id": 51, "text": "\\mathbf{e}_3." }, { "math_id": 52, "text": "f_1,\\ldots,f_n" }, { "math_id": 53, "text": "f^1,\\ldots,f^n" }, { "math_id": 54, "text": "\n\\begin{align}\nF &= \\begin{bmatrix}f_1 & \\cdots & f_n \\end{bmatrix} \\\\\nG &= \\begin{bmatrix}f^1 & \\cdots & f^n \\end{bmatrix}\n\\end{align}\n" }, { "math_id": 55, "text": "G^\\mathsf{T}F = I" }, { "math_id": 56, "text": "G" }, { "math_id": 57, "text": "G = \\left(F^{-1}\\right)^\\mathsf{T}" } ]
https://en.wikipedia.org/wiki?curid=1182982
1183025
Zeta potential
Electrokinetic potential in colloidal dispersions Zeta potential is the electrical potential at the slipping plane. This plane is the interface which separates mobile fluid from fluid that remains attached to the surface. Zeta potential is a scientific term for electrokinetic potential in colloidal dispersions. In the colloidal chemistry literature, it is usually denoted using the Greek letter zeta (ζ), hence ζ-potential. The usual units are volts (V) or, more commonly, millivolts (mV). From a theoretical viewpoint, the zeta potential is the electric potential in the interfacial double layer (DL) at the location of the slipping plane relative to a point in the bulk fluid away from the interface. In other words, zeta potential is the potential difference between the dispersion medium and the stationary layer of fluid attached to the dispersed particle. The zeta potential is caused by the net electrical charge contained within the region bounded by the slipping plane, and also depends on the location of that plane. Thus, it is widely used for quantification of the magnitude of the charge. However, zeta potential is not equal to the Stern potential or electric surface potential in the double layer, because these are defined at different locations. Such assumptions of equality should be applied with caution. Nevertheless, zeta potential is often the only available path for characterization of double-layer properties. The zeta potential is an important and readily measurable indicator of the stability of colloidal dispersions. The magnitude of the zeta potential indicates the degree of electrostatic repulsion between adjacent, similarly charged particles in a dispersion. For molecules and particles that are small enough, a high zeta potential will confer stability, i.e., the solution or dispersion will resist aggregation. When the potential is small, attractive forces may exceed this repulsion and the dispersion may break and flocculate. So, colloids with high zeta potential (negative or positive) are electrically stabilized while colloids with low zeta potentials tend to coagulate or flocculate as outlined in the table. Zeta potential can also be used for the pKa estimation of complex polymers that is otherwise difficult to measure accurately using conventional methods. This can help studying the ionisation behaviour of various synthetic and natural polymers under various conditions and can help in establishing standardised dissolution-pH thresholds for pH responsive polymers. Measurement. Some new instrumentations techniques exist that allow zeta potential to be measured. The Zeta Potential Analyzer can measure solid, fibers, or powdered material. The motor found in the instrument creates an oscillating flow of electrolyte solution through the sample. Several sensors in the instrument monitor other factors, so the software attached is able to do calculations to find the zeta potential. Temperature, pH, conductivity, pressure, and streaming potential are all measured in the instrument for this reason. Zeta potential can also be calculated using theoretical models, and an experimentally-determined electrophoretic mobility or dynamic electrophoretic mobility. Electrokinetic phenomena and electroacoustic phenomena are the usual sources of data for calculation of zeta potential. (See Zeta potential titration.) Electrokinetic phenomena. Electrophoresis is used for estimating zeta potential of particulates, whereas streaming potential/current is used for porous bodies and flat surfaces. 
In practice, the zeta potential of dispersion is measured by applying an electric field across the dispersion. Particles within the dispersion with a zeta potential will migrate toward the electrode of opposite charge with a velocity proportional to the magnitude of the zeta potential. This velocity is measured using the technique of the laser Doppler anemometer. The frequency shift or phase shift of an incident laser beam caused by these moving particles is measured as the particle mobility, and this mobility is converted to the zeta potential by inputting the dispersant viscosity and dielectric permittivity, and the application of the Smoluchowski theories. Electrophoresis. Electrophoretic mobility is proportional to electrophoretic velocity, which is the measurable parameter. There are several theories that link electrophoretic mobility with zeta potential. They are briefly described in the article on electrophoresis and in details in many books on colloid and interface science. There is an IUPAC Technical Report prepared by a group of world experts on the electrokinetic phenomena. From the instrumental viewpoint, there are three different experimental techniques: microelectrophoresis, electrophoretic light scattering, and tunable resistive pulse sensing. Microelectrophoresis has the advantage of yielding an image of the moving particles. On the other hand, it is complicated by electro-osmosis at the walls of the sample cell. Electrophoretic light scattering is based on dynamic light scattering. It allows measurement in an open cell which eliminates the problem of electro-osmotic flow except for the case of a capillary cell. And, it can be used to characterize very small particles, but at the price of the lost ability to display images of moving particles. Tunable resistive pulse sensing (TRPS) is an impedance-based measurement technique that measures the zeta potential of individual particles based on the duration of the resistive pulse signal. The translocation duration of nanoparticles is measured as a function of voltage and applied pressure. From the inverse translocation time versus voltage-dependent electrophoretic mobility, and thus zeta potentials are calculated. The main advantage of the TRPS method is that it allows for simultaneous size and surface charge measurements on a particle-by-particle basis, enabling the analysis of a wide spectrum of synthetic and biological nano/microparticles and their mixtures. All these measuring techniques may require dilution of the sample. Sometimes this dilution might affect properties of the sample and change zeta potential. There is only one justified way to perform this dilution – by using equilibrium supernatant. In this case, the interfacial equilibrium between the surface and the bulk liquid would be maintained and zeta potential would be the same for all volume fractions of particles in the suspension. When the diluent is known (as is the case for a chemical formulation), additional diluent can be prepared. If the diluent is unknown, equilibrium supernatant is readily obtained by centrifugation. Streaming potential, streaming current. The streaming potential is an electric potential that develops during the flow of liquid through a capillary. In nature, a streaming potential may occur at a significant magnitude in areas with volcanic activities. The streaming potential is also the primary electrokinetic phenomenon for the assessment of the zeta potential at the solid material-water interface. 
A corresponding solid sample is arranged in such a way as to form a capillary flow channel. Materials with a flat surface are mounted as duplicate samples that are aligned as parallel plates. The sample surfaces are separated by a small distance to form a capillary flow channel. Materials with an irregular shape, such as fibers or granular media, are mounted as a porous plug to provide a pore network, which serves as capillaries for the streaming potential measurement. Upon the application of pressure on a test solution, liquid starts to flow and to generate an electric potential. This streaming potential is related to the pressure gradient between the ends of either a single flow channel (for samples with a flat surface) or the porous plug (for fibers and granular media) to calculate the surface zeta potential. Alternatively to the streaming potential, the measurement of streaming current offers another approach to the surface zeta potential. Most commonly, the classical equations derived by Marian Smoluchowski are used to convert streaming potential or streaming current results into the surface zeta potential. Applications of the streaming potential and streaming current method for the surface zeta potential determination consist of the characterization of surface charge of polymer membranes, biomaterials and medical devices, and minerals. Electroacoustic phenomena. There are two electroacoustic effects that are widely used for characterizing zeta potential: colloid vibration current and electric sonic amplitude. There are commercially available instruments that exploit these effects for measuring dynamic electrophoretic mobility, which depends on zeta potential. Electroacoustic techniques have the advantage of being able to perform measurements in intact samples, without dilution. Published and well-verified theories allow such measurements at volume fractions up to 50%. Calculation of zeta potential from the dynamic electrophoretic mobility requires information on the densities of the particles and the liquid. In addition, for larger particles exceeding roughly 300 nm in size, information on the particle size is required as well. Calculation. The best-known and most widely used theory for calculating zeta potential from experimental data is that developed by Marian Smoluchowski in 1903. This theory was originally developed for electrophoresis; however, an extension to electroacoustics is now also available. Smoluchowski's theory is powerful because it is valid for dispersed particles of any shape and any concentration. However, it has its limitations: formula_2 The model of the "thin double layer" offers tremendous simplifications not only for electrophoresis theory but for many other electrokinetic and electroacoustic theories. This model is valid for most aqueous systems because the Debye length is typically only a few nanometers in water. The model breaks down only for nano-colloids in a solution with ionic strength approaching that of pure water. formula_3 The development of electrophoretic and electroacoustic theories with a wider range of validity was the purpose of many studies during the 20th century. There are several analytical theories that incorporate surface conductivity and eliminate the restriction of the small Dukhin number for both the electrokinetic and electroacoustic applications. Early pioneering work in that direction dates back to Overbeek and Booth.
Modern, rigorous electrokinetic theories that are valid for any zeta potential, and often any formula_4, stem mostly from Soviet Ukrainian (Dukhin, Shilov, and others) and Australian (O'Brien, White, Hunter, and others) schools. Historically, the first one was Dukhin–Semenikhin theory. A similar theory was created ten years later by O'Brien and Hunter. Assuming a thin double layer, these theories would yield results that are very close to the numerical solution provided by O'Brien and White. There are also general electroacoustic theories that are valid for any values of Debye length and Dukhin number. Henry's equation. When κa is between large values where simple analytical models are available, and low values where numerical calculations are valid, Henry's equation can be used when the zeta potential is low. For a nonconducting sphere, Henry's equation is formula_5, where "f"1 is the Henry function, one of a collection of functions which vary smoothly from 1.0 to 1.5 as κa approaches infinity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
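As a rough numerical illustration, Henry's equation can be inverted for the zeta potential once an electrophoretic mobility has been measured. In the sketch below, f1 = 1.5 corresponds to the Smoluchowski (thin-double-layer) limit and f1 = 1.0 to the Hückel limit; the mobility, permittivity and viscosity figures are typical water-at-25 °C values chosen for illustration, not measured data.

```python
# Rearranging u_e = 2*eps_rs*eps_0*zeta*f1(kappa*a) / (3*eta) for zeta.
EPS0 = 8.854e-12        # vacuum permittivity, F/m

def zeta_from_mobility(u_e, eps_rs=78.5, eta=0.89e-3, f1=1.5):
    """Zeta potential (V) from electrophoretic mobility u_e (m^2 V^-1 s^-1)."""
    return 3.0 * eta * u_e / (2.0 * eps_rs * EPS0 * f1)

# Example: mobility of 3e-8 m^2/(V s) in water, thin double layer (f1 = 1.5).
u_e = 3.0e-8
print(f"zeta = {1e3 * zeta_from_mobility(u_e):.1f} mV")   # roughly 38 mV
```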
[ { "math_id": 0, "text": "1/\\kappa" }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "{\\kappa} \\cdot a \\gg 1" }, { "math_id": 3, "text": "Du \\ll 1" }, { "math_id": 4, "text": "\\kappa a" }, { "math_id": 5, "text": "u_e= \\frac{2\\varepsilon_{rs} \\varepsilon_0}{3\\eta} \\zeta f_1(\\kappa a)" } ]
https://en.wikipedia.org/wiki?curid=1183025
11830303
Dissipative soliton
Dissipative solitons (DSs) are stable solitary localized structures that arise in nonlinear spatially extended dissipative systems due to mechanisms of self-organization. They can be considered as an extension of the classical soliton concept in conservative systems. An alternative terminology includes autosolitons, spots and pulses. Apart from aspects similar to the behavior of classical particles like the formation of bound states, DSs exhibit interesting behavior – e.g. scattering, creation and annihilation – all without the constraints of energy or momentum conservation. The excitation of internal degrees of freedom may result in a dynamically stabilized intrinsic speed, or periodic oscillations of the shape. Historical development. Origin of the soliton concept. DSs have been experimentally observed for a long time. Helmholtz measured the propagation velocity of nerve pulses in 1850. In 1902, Lehmann found the formation of localized anode spots in long gas-discharge tubes. Nevertheless, the term "soliton" was originally developed in a different context. The starting point was the experimental detection of "solitary water waves" by Russell in 1834. These observations initiated the theoretical work of Rayleigh and Boussinesq around 1870, which finally led to the approximate description of such waves by Korteweg and de Vries in 1895; that description is known today as the (conservative) KdV equation. On this background the term "soliton" was coined by Zabusky and Kruskal in 1965. These authors investigated certain well localised solitary solutions of the KdV equation and named these objects solitons. Among other things they demonstrated that in 1-dimensional space solitons exist, e.g. in the form of two unidirectionally propagating pulses with different size and speed and exhibiting the remarkable property that number, shape and size are the same before and after collision. Gardner et al. introduced the inverse scattering technique for solving the KdV equation and proved that this equation is completely integrable. In 1972 Zakharov and Shabat found another integrable equation and finally it turned out that the inverse scattering technique can be applied successfully to a whole class of equations (e.g. the nonlinear Schrödinger and sine-Gordon equations). From 1965 up to about 1975, a common agreement was reached: to reserve the term "soliton" to pulse-like solitary solutions of conservative nonlinear partial differential equations that can be solved by using the inverse scattering technique. Weakly and strongly dissipative systems. With increasing knowledge of classical solitons, possible technical applicability came into perspective, with the most promising one at present being the transmission of optical solitons via glass fibers for the purpose of data transmission. In contrast to conservative systems, solitons in fibers dissipate energy and this cannot be neglected on an intermediate and long time scale. Nevertheless, the concept of a classical soliton can still be used in the sense that on a short time scale dissipation of energy can be neglected. On an intermediate time scale one has to take small energy losses into account as a perturbation, and on a long scale the amplitude of the soliton will decay and finally vanish. There are however various types of systems which are capable of producing solitary structures and in which dissipation plays an essential role for their formation and stabilization. 
Although research on certain types of these DSs has been carried out for a long time (for example, see the research on nerve pulses culminating in the work of Hodgkin and Huxley in 1952), since 1990 the amount of research has significantly increased (see e.g.) Possible reasons are improved experimental devices and analytical techniques, as well as the availability of more powerful computers for numerical computations. Nowadays, it is common to use the term "dissipative solitons" for solitary structures in strongly dissipative systems. Experimental observations. Today, DSs can be found in many different experimental set-ups. Examples include Remarkably enough, phenomenologically the dynamics of the DSs in many of the above systems are similar in spite of the microscopic differences. Typical observations are (intrinsic) propagation, scattering, formation of bound states and clusters, drift in gradients, interpenetration, generation, and annihilation, as well as higher instabilities. Theoretical description. Most systems showing DSs are described by nonlinear partial differential equations. Discrete difference equations and cellular automata are also used. Up to now, modeling from first principles followed by a quantitative comparison of experiment and theory has been performed only rarely and sometimes also poses severe problems because of large discrepancies between microscopic and macroscopic time and space scales. Often simplified prototype models are investigated which reflect the essential physical processes in a larger class of experimental systems. Among these are formula_0 A frequently encountered example is the two-component Fitzhugh–Nagumo-type activator–inhibitor system formula_1 Stationary DSs are generated by production of material in the center of the DSs, diffusive transport into the tails and depletion of material in the tails. A propagating pulse arises from production in the leading and depletion in the trailing end. Among other effects, one finds periodic oscillations of DSs ("breathing"), bound states, and collisions, merging, generation and annihilation. formula_2 To understand the mechanisms leading to the formation of DSs, one may consider the energy "ρ" = |"q"|2 for which one may derive the continuity equation formula_3 One can thereby show that energy is generally produced in the flanks of the DSs and transported to the center and potentially to the tails where it is depleted. Dynamical phenomena include propagating DSs in 1d, propagating clusters in 2d, bound states and vortex solitons, as well as "exploding DSs". formula_4 For "dr" &gt; 0 one essentially has the same mechanisms as in the Ginzburg–Landau equation. For "dr" &lt; 0, in the real Swift–Hohenberg equation one finds bistability between homogeneous states and Turing patterns. DSs are stationary localized Turing domains on the homogeneous background. This also holds for the complex Swift–Hohenberg equations; however, propagating DSs as well as interaction phenomena are also possible, and observations include merging and interpenetration. Particle properties and universality. DSs in many different systems show universal particle-like properties. To understand and describe the latter, one may try to derive "particle equations" for slowly varying order parameters like position, velocity or amplitude of the DSs by adiabatically eliminating all fast variables in the field description. 
This technique is known from linear systems, however mathematical problems arise from the nonlinear models due to a coupling of fast and slow modes. Similar to low-dimensional dynamic systems, for supercritical bifurcations of stationary DSs one finds characteristic normal forms essentially depending on the symmetries of the system. E.g., for a transition from a symmetric stationary to an intrinsically propagating DS one finds the Pitchfork normal form formula_5 for the velocity "v" of the DS, here σ represents the bifurcation parameter and σ0 the bifurcation point. For a bifurcation to a "breathing" DS, one finds the Hopf normal form formula_6 for the amplitude "A" of the oscillation. It is also possible to treat "weak interaction" as long as the overlap of the DSs is not too large. In this way, a comparison between experiment and theory is facilitated. Note that the above problems do not arise for classical solitons as inverse scattering theory yields complete analytical solutions. References. Inline. &lt;templatestyles src="Reflist/styles.css" /&gt;
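As an illustration of how prototype models of this kind are treated numerically, the following sketch integrates the two-component FitzHugh–Nagumo-type activator–inhibitor system quoted in the theoretical description, in one space dimension with periodic boundaries and an explicit Euler scheme. The parameter values and the localized initial bump are arbitrary illustrative choices; they show the mechanics of such a simulation rather than reproducing any particular dissipative-soliton regime.

```python
import numpy as np

# Illustrative parameters for the activator-inhibitor model (assumed values).
tau_u, tau_v = 1.0, 10.0
d_u, d_v = 1.0, 2.0
lam, kappa1, kappa3 = 2.0, -0.1, 1.0

L, n, dt, steps = 100.0, 256, 0.01, 20000
dx = L / n
x = np.linspace(0.0, L, n, endpoint=False)

u = -0.8 + 1.8 * np.exp(-((x - L / 2) ** 2) / 10.0)   # localized activator bump
v = np.full(n, -0.8)                                   # inhibitor background

def lap(f):
    """Second spatial derivative with periodic boundary conditions."""
    return (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx**2

for _ in range(steps):
    du = (d_u**2 * lap(u) + lam * u - u**3 - kappa3 * v + kappa1) / tau_u
    dv = (d_v**2 * lap(v) + u - v) / tau_v
    u += dt * du
    v += dt * dv

print("activator max/min after integration:", u.max(), u.min())
```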
[ { "math_id": 0, "text": "\\partial_t \\boldsymbol{q} = \\underline{\\boldsymbol{D}} \\, \\Delta \\boldsymbol{q} + \\boldsymbol{R}(\\boldsymbol{q})." }, { "math_id": 1, "text": " \\left( \\begin{array}{c} \\tau_u \\, \\partial_t u\\\\\n\\tau_v \\, \\partial_t v\n\\end{array} \\right) =\n\\left(\\begin{array}{cc} d_u^2 &0\\\\ 0 & d_v^2\n\\end{array}\\right)\n\\left( \\begin{array}{c} \\Delta u \\\\\n\\Delta v \\end{array} \\right) + \\left(\\begin{array}{c} \\lambda u -u^3 - \\kappa_3 v +\\kappa_1\\\\u-v\n\\end{array}\\right).\n" }, { "math_id": 2, "text": " \\partial_t q = (d_r+ i d_i) \\, \\Delta q + \\ell_r q + (c_r + i c_i) |q|^2 q + (q_r + i q_i) |q|^4 q." }, { "math_id": 3, "text": "\n\\begin{align}\n& \\partial_t \\rho + \\nabla \\cdot \\boldsymbol{m} = S = d_r(q \\, \\Delta q^\\ast + q^\\ast \\, \\Delta q) + 2 \\ell_r \\rho + 2 c_r \\rho^2 + 2 q_r \\rho^3 \\\\\n& \\text{with } \\boldsymbol{m} = 2 d_i \\operatorname{Im}(q^\\ast \\nabla q).\n\\end{align}\n" }, { "math_id": 4, "text": "\\partial_t q = (s_r+ i s_i) \\,\\Delta^2 q + (d_r+ i d_i) \\,\\Delta q + \\ell_r q + (c_r + i c_i)|q|^2 q + (q_r + i q_i) |q|^4 q." }, { "math_id": 5, "text": " \\dot{\\boldsymbol{v}} = (\\sigma - \\sigma_0) \\boldsymbol{v} - |\\boldsymbol{v}|^2 \\boldsymbol{v}" }, { "math_id": 6, "text": " \\dot{A} = (\\sigma - \\sigma_0) A - |A|^2 A" } ]
https://en.wikipedia.org/wiki?curid=11830303
11830372
Menger curvature
In mathematics, the Menger curvature of a triple of points in "n"-dimensional Euclidean space R"n" is the reciprocal of the radius of the circle that passes through the three points. It is named after the Austrian-American mathematician Karl Menger. Definition. Let "x", "y" and "z" be three points in R"n"; for simplicity, assume for the moment that all three points are distinct and do not lie on a single straight line. Let Π ⊆ R"n" be the Euclidean plane spanned by "x", "y" and "z" and let "C" ⊆ Π be the unique Euclidean circle in Π that passes through "x", "y" and "z" (the circumcircle of "x", "y" and "z"). Let "R" be the radius of "C". Then the Menger curvature "c"("x", "y", "z") of "x", "y" and "z" is defined by formula_0 If the three points are collinear, "R" can be informally considered to be +∞, and it makes rigorous sense to define "c"("x", "y", "z") = 0. If any of the points "x", "y" and "z" are coincident, again define "c"("x", "y", "z") = 0. Using the well-known formula relating the side lengths of a triangle to its area, it follows that formula_1 where "A" denotes the area of the triangle spanned by "x", "y" and "z". Another way of computing Menger curvature is the identity formula_2 where formula_3 is the angle made at the "y"-corner of the triangle spanned by "x","y","z". Menger curvature may also be defined on a general metric space. If "X" is a metric space and "x","y", and "z" are distinct points, let "f" be an isometry from formula_4 into formula_5. Define the Menger curvature of these points to be formula_6 Note that "f" need not be defined on all of "X", just on "{x,y,z}", and the value "c""X" "(x,y,z)" is independent of the choice of "f". Integral Curvature Rectifiability. Menger curvature can be used to give quantitative conditions for when sets in formula_7 may be rectifiable. For a Borel measure formula_8 on a Euclidean space formula_9 define formula_10 The basic intuition behind the result is that Menger curvature measures how straight a given triple of points are (the smaller formula_15 is, the closer x,y, and z are to being collinear), and this integral quantity being finite is saying that the set E is flat on most small scales. In particular, if the power in the integral is larger, our set is smoother than just being rectifiable In the opposite direction, there is a result of Peter Jones: Analogous results hold in general metric spaces:
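Returning to the basic definition, the Menger curvature of three points in R^n can be computed directly from the side lengths and the triangle area. The sketch below uses Heron's formula and returns 0 for collinear or coincident points, in line with the convention above; the sample points on the unit circle are chosen so that the expected curvature is 1.

```python
import numpy as np

def menger_curvature(x, y, z) -> float:
    """c(x, y, z) = 4*A / (|x-y| |y-z| |z-x|) for points in R^n."""
    x, y, z = (np.asarray(p, dtype=float) for p in (x, y, z))
    a = np.linalg.norm(x - y)
    b = np.linalg.norm(y - z)
    c = np.linalg.norm(z - x)
    if a == 0.0 or b == 0.0 or c == 0.0:
        return 0.0                                   # coincident points
    s = 0.5 * (a + b + c)                            # semi-perimeter
    area_sq = max(s * (s - a) * (s - b) * (s - c), 0.0)  # clip rounding noise
    return 4.0 * np.sqrt(area_sq) / (a * b * c)      # 0 if collinear

# Three points on the unit circle should give curvature 1 (circumradius 1).
pts = [(np.cos(t), np.sin(t)) for t in (0.0, 1.0, 2.5)]
print(menger_curvature(*pts))                        # ~1.0
```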
[ { "math_id": 0, "text": "c (x, y, z) = \\frac1{R}." }, { "math_id": 1, "text": "c (x, y, z) = \\frac1{R} = \\frac{4 A}{|x - y ||y - z ||z - x |}," }, { "math_id": 2, "text": " c(x,y,z)=\\frac{2\\sin \\angle xyz}{|x-z|}" }, { "math_id": 3, "text": "\\angle xyz" }, { "math_id": 4, "text": "\\{x,y,z\\}" }, { "math_id": 5, "text": "\\mathbb{R}^{2}" }, { "math_id": 6, "text": " c_{X} (x,y,z)=c(f(x),f(y),f(z))." }, { "math_id": 7, "text": " \\mathbb{R}^{n} " }, { "math_id": 8, "text": "\\mu" }, { "math_id": 9, "text": " \\mathbb{R}^{n}" }, { "math_id": 10, "text": " c^{p}(\\mu)=\\int\\int\\int c(x,y,z)^{p}d\\mu(x)d\\mu(y)d\\mu(z)." }, { "math_id": 11, "text": " E\\subseteq \\mathbb{R}^{n} " }, { "math_id": 12, "text": " c^{2}(H^{1}|_{E})<\\infty" }, { "math_id": 13, "text": " H^{1}|_{E} " }, { "math_id": 14, "text": " E" }, { "math_id": 15, "text": " c(x,y,z)\\max\\{|x-y|,|y-z|,|z-y|\\}" }, { "math_id": 16, "text": " p>3" }, { "math_id": 17, "text": " f:S^{1}\\rightarrow \\mathbb{R}^{n}" }, { "math_id": 18, "text": "\\Gamma=f(S^{1})" }, { "math_id": 19, "text": " f\\in C^{1,1-\\frac{3}{p}}(S^{1})" }, { "math_id": 20, "text": " c^{p}(H^{1}|_{\\Gamma})<\\infty" }, { "math_id": 21, "text": " 0<H^{s}(E)<\\infty" }, { "math_id": 22, "text": " 0<s\\leq\\frac{1}{2}" }, { "math_id": 23, "text": " c^{2s}(H^{s}|_{E})<\\infty" }, { "math_id": 24, "text": "C^{1}" }, { "math_id": 25, "text": "\\Gamma_{i}" }, { "math_id": 26, "text": " H^{s}(E\\backslash \\bigcup\\Gamma_{i})=0" }, { "math_id": 27, "text": " \\frac{1}{2}<s<1" }, { "math_id": 28, "text": " c^{2s}(H^{s}|_{E})=\\infty" }, { "math_id": 29, "text": " 1<s\\leq n" }, { "math_id": 30, "text": "E\\subseteq\\Gamma\\subseteq\\mathbb{R}^{2}" }, { "math_id": 31, "text": " H^{1}(E)>0" }, { "math_id": 32, "text": "\\Gamma" }, { "math_id": 33, "text": "E" }, { "math_id": 34, "text": " \\mu B(x,r)\\leq r" }, { "math_id": 35, "text": "x\\in E" }, { "math_id": 36, "text": "r>0" }, { "math_id": 37, "text": "c^{2}(\\mu)<\\infty" }, { "math_id": 38, "text": "H^{1}(B(x,r)\\cap\\Gamma)\\leq Cr" }, { "math_id": 39, "text": " x\\in \\Gamma" } ]
https://en.wikipedia.org/wiki?curid=11830372
1183041
Quadratic residuosity problem
Problem in computational number theory The quadratic residuosity problem (QRP) in computational number theory is to decide, given integers formula_0 and formula_1, whether formula_0 is a quadratic residue modulo formula_1 or not. Here formula_2 for two unknown primes formula_3 and formula_4, and formula_0 is among the numbers which are not obviously quadratic non-residues (see below). The problem was first described by Gauss in his "Disquisitiones Arithmeticae" in 1801. This problem is believed to be computationally difficult. Several cryptographic methods rely on its hardness, see . An efficient algorithm for the quadratic residuosity problem immediately implies efficient algorithms for other number theoretic problems, such as deciding whether a composite formula_1 of unknown factorization is the product of 2 or 3 primes. Precise formulation. Given integers formula_0 and formula_5, formula_0 is said to be a "quadratic residue modulo formula_5" if there exists an integer formula_6 such that formula_7. Otherwise we say it is a quadratic non-residue. When formula_8 is a prime, it is customary to use the Legendre symbol: formula_9 This is a multiplicative character which means formula_10 for exactly formula_11 of the values formula_12, and it is formula_13 for the remaining. It is easy to compute using the law of quadratic reciprocity in a manner akin to the Euclidean algorithm; see Legendre symbol. Consider now some given formula_2 where formula_3 and formula_4 are two different unknown primes. A given formula_0 is a quadratic residue modulo formula_1 if and only if formula_0 is a quadratic residue modulo both formula_3 and formula_4 and formula_14. Since we don't know formula_3 or formula_4, we cannot compute formula_15 and formula_16. However, it is easy to compute their product. This is known as the Jacobi symbol: formula_17 This also can be efficiently computed using the law of quadratic reciprocity for Jacobi symbols. However, formula_18 cannot in all cases tell us whether formula_0 is a quadratic residue modulo formula_1 or not! More precisely, if formula_19 then formula_0 is necessarily a quadratic non-residue modulo either formula_3 or formula_4, in which case we are done. But if formula_20 then it is either the case that formula_0 is a quadratic residue modulo both formula_3 and formula_4, or a quadratic non-residue modulo both formula_3 and formula_4. We cannot distinguish these cases from knowing just that formula_20. This leads to the precise formulation of the quadratic residue problem: Problem: Given integers formula_0 and formula_2, where formula_3 and formula_4 are distinct unknown primes, and where formula_20, determine whether formula_0 is a quadratic residue modulo formula_1 or not. Distribution of residues. If formula_0 is drawn uniformly at random from integers formula_21 such that formula_20, is formula_0 more often a quadratic residue or a quadratic non-residue modulo formula_1? As mentioned earlier, for exactly half of the choices of formula_22, then formula_23, and for the rest we have formula_24. By extension, this also holds for half the choices of formula_25. Similarly for formula_4. From basic algebra, it follows that this partitions formula_26 into 4 parts of equal size, depending on the sign of formula_15 and formula_16. The allowed formula_0 in the quadratic residue problem given as above constitute exactly those two parts corresponding to the cases formula_27 and formula_28. 
Consequently, exactly half of the possible formula_0 are quadratic residues and the remaining are not. Applications. The intractability of the quadratic residuosity problem is the basis for the security of the Blum Blum Shub pseudorandom number generator. It also yields the public key Goldwasser–Micali cryptosystem, as well as the identity based Cocks scheme. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
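The objects in this article can be made concrete with a short sketch: a textbook Jacobi-symbol routine based on quadratic reciprocity, together with a decision procedure that is only available when the factorization of N is known, via Euler's criterion modulo each prime. The primes below are small example values, far too small to be of cryptographic interest.

```python
def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):          # (2/n) = -1 when n = 3, 5 (mod 8)
                result = -result
        a, n = n, a                      # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def is_qr_with_factors(a: int, p1: int, p2: int) -> bool:
    """Decide residuosity mod N = p1*p2 given the factors (Euler's criterion)."""
    return pow(a, (p1 - 1) // 2, p1) == 1 and pow(a, (p2 - 1) // 2, p2) == 1

p1, p2 = 10007, 10009                    # small example primes (assumed)
N = p1 * p2
a = 123456789 % N
print("Jacobi symbol:", jacobi(a, N))
if jacobi(a, N) == 1:
    # Without p1, p2 this case cannot be resolved efficiently (the QRP).
    print("quadratic residue mod N?", is_qr_with_factors(a, p1, p2))
```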
[ { "math_id": 0, "text": "a" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "N = p_1 p_2" }, { "math_id": 3, "text": "p_1" }, { "math_id": 4, "text": "p_2" }, { "math_id": 5, "text": "T" }, { "math_id": 6, "text": "b" }, { "math_id": 7, "text": "a \\equiv b^2 \\pmod T" }, { "math_id": 8, "text": "T = p" }, { "math_id": 9, "text": "\\left( \\frac{a}{p} \\right) = \\begin{cases}\n 1 & \\text{ if } a \\text{ is a quadratic residue modulo } p \\text{ and } a \\not\\equiv 0\\pmod{p}, \\\\\n-1 & \\text{ if } a \\text{ is a quadratic non-residue modulo } p, \\\\\n 0 & \\text{ if } a \\equiv 0 \\pmod{p}. \n\\end{cases}" }, { "math_id": 10, "text": "\\big(\\tfrac{a}{p}\\big) = 1" }, { "math_id": 11, "text": "(p-1)/2" }, { "math_id": 12, "text": "1,\\ldots,p-1" }, { "math_id": 13, "text": "-1" }, { "math_id": 14, "text": "\\gcd(a, N) = 1" }, { "math_id": 15, "text": "\\big(\\tfrac{a}{p_1}\\big)" }, { "math_id": 16, "text": "\\big(\\tfrac{a}{p_2}\\big)" }, { "math_id": 17, "text": "\\left(\\frac{a}{N}\\right) = \\left(\\frac{a}{p_1}\\right)\\left(\\frac{a}{p_2}\\right)" }, { "math_id": 18, "text": "\\big(\\tfrac{a}{N}\\big)" }, { "math_id": 19, "text": "\\big(\\tfrac{a}{N}\\big) = -1" }, { "math_id": 20, "text": "\\big(\\tfrac{a}{N}\\big) = 1" }, { "math_id": 21, "text": "0,\\ldots,N-1" }, { "math_id": 22, "text": "a \\in \\{1,\\ldots,p_1-1\\}" }, { "math_id": 23, "text": "\\big(\\tfrac{a}{p_1}\\big) = 1" }, { "math_id": 24, "text": "\\big(\\tfrac{a}{p_1}\\big) = -1" }, { "math_id": 25, "text": "a \\in \\{1,\\ldots,N-1\\} \\setminus p_1\\mathbb{Z}" }, { "math_id": 26, "text": "(\\mathbb{Z}/N\\mathbb{Z})^{\\times}" }, { "math_id": 27, "text": "\\big(\\tfrac{a}{p_1}\\big) = \\big(\\tfrac{a}{p_2}\\big) = 1" }, { "math_id": 28, "text": "\\big(\\tfrac{a}{p_1}\\big) = \\big(\\tfrac{a}{p_2}\\big) = -1" } ]
https://en.wikipedia.org/wiki?curid=1183041
11830506
Load regulation
Load regulation is the capability to maintain a constant voltage (or current) level on the output channel of a power supply despite changes in the supply's load (such as a change in resistance value connected across the supply output). Definitions. Load regulation of a constant-voltage source is defined by the equation: formula_0 Where: For a constant-current supply, the above equation uses currents instead of voltages, and the maximum and minimum load values are when the largest and smallest specified voltage across the load are produced. For switching power supplies, the primary source of regulation error is switching ripple, rather than control loop precision. In such cases, load regulation is defined without normalizing to voltage at nominal load and has the unit of volts, not a percentage. formula_4 Measurement. A simple way to manually measure load regulation is to connect three parallel load resistors to the power supply where two of the resistors, R2 and R3, are connected through switches while the other resistor, R1 is connected directly. The values of the resistors are selected such that R1 gives the highest load resistance, R1||R2 gives the nominal load resistance and either R1||R2||R3 or R2||R3 gives the lowest load resistance. A voltmeter is then connected in parallel to the resistors and the measured values of voltage for each load state can be used to calculate the load regulation as given in the equation above. Programmable loads are typically used to automate the measurement of load regulation. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
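The definition translates into a one-line computation; the three voltages in the example below are made-up readings for a nominal 12 V supply.

```python
def load_regulation_percent(v_min_load: float, v_max_load: float, v_nom_load: float) -> float:
    """Percent load regulation of a constant-voltage supply."""
    return 100.0 * (v_min_load - v_max_load) / v_nom_load

# Output voltage at minimum load, at maximum load, and at nominal load (example data).
print(load_regulation_percent(12.10, 11.95, 12.00))   # -> 1.25 (%)
```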
[ { "math_id": 0, "text": "\\%\\text{Load Regulation} = 100\\% \\, \\frac{V_{min-load} - V_{max-load}}{V_{nom-load}}" }, { "math_id": 1, "text": "V_{max-load}" }, { "math_id": 2, "text": "V_{min-load}" }, { "math_id": 3, "text": "V_{nom-load}" }, { "math_id": 4, "text": "\\text{Load Regulation}(V) = V_{min-load} - V_{max-load}" } ]
https://en.wikipedia.org/wiki?curid=11830506
11830536
Line regulation
Line regulation is the ability of a power supply to maintain a constant output voltage despite changes to the input voltage, with the output current drawn from the power supply remaining constant. formula_0 where ΔVi is the change in input voltage and ΔVo is the corresponding change in output voltage. It is desirable for a power supply to maintain a stable output regardless of changes in the input voltage. Line regulation is especially important when the input voltage source is unstable or unregulated, since variations in the input would otherwise appear as significant variations in the output voltage. The line regulation figure of an unregulated power supply is usually high (that is, the regulation is poor), but it can be improved by adding a voltage regulator. A low line regulation figure is always preferred. In practice, a well-regulated power supply should have a line regulation of at most 0.1%. In regulator device datasheets, line regulation is expressed as the percentage change in the output voltage, relative to the nominal output voltage, per volt of change in the input voltage. Mathematically it is expressed as: formula_1 The unit here is %/V. For example, in the ABLIC Inc. S1206-series regulator device the typical line regulation is specified as 0.05%/V, meaning that the output changes by 0.05% of its nominal value for every 1 V change in the input. Moreover, the line regulation quoted in a datasheet is temperature dependent; datasheets usually specify it at 25 °C. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
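Both conventions above amount to simple ratios; the figures in the sketch below (a 0.05 V output shift for a 10 V input swing on a 5 V output) are made-up example numbers.

```python
def line_regulation_percent(dv_out: float, dv_in: float) -> float:
    """Line regulation in %, i.e. 100 * dVo / dVi."""
    return 100.0 * dv_out / dv_in

def line_regulation_percent_per_volt(dv_out: float, dv_in: float, v_out: float) -> float:
    """Datasheet-style line regulation in %/V, i.e. 100 * dVo / (dVi * Vo)."""
    return 100.0 * dv_out / (dv_in * v_out)

print(line_regulation_percent(0.05, 10.0))                 # 0.5 (%)
print(line_regulation_percent_per_volt(0.05, 10.0, 5.0))   # 0.1 (%/V)
```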
[ { "math_id": 0, "text": "\\text{Line Regulation} = \\frac{\\Delta V_\\text{o}}{\\Delta V_\\text{i }} \\cdot 100\\%" }, { "math_id": 1, "text": "\\text{Line Regulation} = \\frac{\\Delta V_\\text{o}}{\\Delta V_\\text{i } \\cdot V_\\text{o}} \\cdot 100\\%/V" } ]
https://en.wikipedia.org/wiki?curid=11830536
1183118
Total internal reflection fluorescence microscope
A total internal reflection fluorescence microscope (TIRFM) is a type of microscope with which a thin region of a specimen, usually less than 200 nanometers can be observed. TIRFM is an imaging modality which uses the excitation of fluorescent cells in a thin optical specimen section that is supported on a glass slide. The technique is based on the principle that when excitation light is totally internally reflected in a transparent solid coverglass at its interface with a liquid medium, an electromagnetic field, also known as an evanescent wave, is generated at the solid-liquid interface with the same frequency as the excitation light. The intensity of the evanescent wave exponentially decays with distance from the surface of the solid so that only fluorescent molecules within a few hundred nanometers of the solid are efficiently excited. Two-dimensional images of the fluorescence can then be obtained, although there are also mechanisms in which three-dimensional information on the location of vesicles or structures in cells can be obtained. History. Widefield fluorescence was introduced in 1910 which was an optical technique that illuminates the entire sample. Confocal microscopy was then introduced in 1960 which decreased the background and exposure time of the sample by directing light to a pinpoint and illuminating cones of light into the sample. In the 1980s, the introduction of TIRFM further decreased background and exposure time by only illuminating the thin section of the sample being examined. Background. There are two common methods for producing the evanescent wave for TIRFM. The first is the prism method which uses a prism to direct the laser toward the interface between the coverglass and the media/cells at an incident angle sufficient to cause total internal reflection. This configuration has been applied to cellular microscopy for over 30 years but has never become a mainstream tool due to several limitations. Although there are many variations of the prism configuration, most restrict access to the specimen which makes it difficult to perform manipulations, inject media into the specimen space, or carry out physiological measurements. Another disadvantage is that in most configurations based on the inverted microscope designs, the illumination is introduced on the specimen side opposite of the objective optics which requires imaging of the evanescent field region through the bulk of the specimen. There is great complexity and precision required in imaging this system which meant that the prism method was not used by many biologists but rather limited to use by physicists. The other method is known as the objective lens method which has increased the use of TIRFM in cellular microscopy and increased furthermore since a commercial solution became available. In this mechanism, one can easily switch between standard widefield fluorescence and TIRF by changing the off-axis position of the beam focus at the objective's back focal plane. There are several developed ways to change the positions of the beam such as using an actuator that can change the position in relation to the fluorescence illuminator that is attached to the microscope. Application. In cell and molecular biology, a large number of molecular events in cellular surfaces such as cell adhesion, binding of cells by hormones, secretion of neurotransmitters, and membrane dynamics have been studied with conventional fluorescence microscopes. 
However, fluorophores that are bound to the specimen surface and those in the surrounding medium exist in an equilibrium state. When these molecules are excited and detected with a conventional fluorescence microscope, the resulting fluorescence from those fluorophores bound to the surface is often overwhelmed by the background fluorescence due to the much larger population of non-bound molecules. TIRFM allows for selective excitation of the surface-bound fluorophores, while non-bound molecules are not excited and do not fluoresce. Due to the fact of sub-micron surface selectivity, TIRFM has become a method of choice for single molecule detection. There are many applications of TIRFM in cellular microscopy. Some of these applications include: With the ability to resolve individual vesicles optically and follow the dynamics of their interactions directly, TIRFM provides the capability to study the vast number of proteins involved in neurobiological processes in a manner that was not possible before. Benefits. TIRFM provides several benefits over standard widefield and confocal fluorescence microscopy such as: Overview. The idea of using total internal reflection to illuminate cells contacting the surface of glass was first described by E.J. Ambrose in 1956. This idea was then extended by Daniel Axelrod at the University of Michigan, Ann Arbor in the early 1980s as TIRFM. A TIRFM uses an evanescent wave to selectively illuminate and excite fluorophores in a restricted region of the specimen immediately adjacent to the glass-water interface. The evanescent electromagnetic field decays exponentially from the interface, and thus penetrates to a depth of only approximately 100 nm into the sample medium. Thus the TIRFM enables a selective visualization of surface regions such as the basal plasma membrane (which are about 7.5 nm thick) of cells. Note, however, that the region visualized is at least a few hundred nanometers wide, so the cytoplasmic zone immediately beneath the plasma membrane is necessarily visualized in addition to the plasma membrane during TIRF microscopy. The selective visualization of the plasma membrane renders the features and events on the plasma membrane in living cells with high axial resolution. TIRF can also be used to observe the fluorescence of a single molecule, making it an important tool of biophysics and quantitative biology. TIRF microscopy has also been applied in the single molecule detection of DNA biomarkers and SNP discrimination. Cis-geometry (through-objective TIRFM) and trans-geometry (prism- and lightguide based TIRFM) have been shown to provide different quality of the effect of total internal reflection. In the case of trans-geometry, the excitation lightpath and the emission channel are separated, while in the case of objective-type TIRFM they share the objective and other optical elements of the microscope. Prism-based geometry was shown to generate clean evanescent wave, which exponential decay is close to theoretically predicted function. In the case of objective-based TIRFM, however, the evanescent wave is contaminated with intense stray light. The intensity of stray light was shown to amount 10–15% of the evanescent wave, which makes it difficult to interpret data obtained by objective-type TIRFM Mechanism. The basic components of the TIRFM device include: Objective-based vs prism-based. 
Key differences between objective-based (cis) and prism-based (trans) TIRFM are that prism based TIRFM requires usage of a prism/solution interface to generate the evanescent field, while objective-based TIRFM does not require a prism and utilizes a cover slip/solution interface to generate the evanescent field. Typically objective-based TIRFM are more popularly used, however have lowered imaging quality due to stray light noise within the evanescent wave. Methodology. Fundamental physics. TIRFM is predicated on the optical phenomena of total internal reflection, in which waves arriving at a medium interface do not transmit into medium 2 but are completely reflected back into medium 1. Total internal reflection requires medium 2 to have a lower refractive index than medium 1, and for the waves must be incident at sufficiently oblique angles on the interface. An observed phenomena accompanying total internal reflection is the evanescent wave, which spatially extends away perpendicularly from the interface into medium 2, and decays exponentially, as a factor of wavelength, refractive index, and incident angle. It is the evanescent wave which is used to achieve increased excitation of the fluorophores close to the surface of the sample, and diminished excitation of superfluous fluorophores within solution. For practical purposes, in objective based TIRF, medium 1 is typically a high refractive index glass coverslip, and medium 2 is the sample in solution with a lower refractive index. There may be immersion oil between the lens and the glass coverslip to prevent significant refraction through air. Evanescent wave. The critical angle for excitatory light incidence can be derived from Snell's law: formula_0 For formula_1 the refractive index of sample, formula_2 the refractive index of the cover slip. Thus, as the angle of incidence reaches formula_3, we begin observing effects of total internal reflection and evanescent wave, and as it surpasses formula_3 these effects are more prevalent. The intensity of the evanescent wave is given by: formula_4 With penetration depth formula_5 given by: formula_6 Typically, formula_5 ≤~100 nanometers, which is typically much smaller than the wavelength of light, and much thinner than a slice from confocal microscopes. For TIRFM imaging the wavelength of the excitation beam formula_7 within the sample can be selected for by filtering. Additionally, the range of incident angles formula_8 is determined by the numerical aperture (NA) of the objective, and requires that NA &gt; formula_9. This parameter can be adjusted by changing the angle the excitation beam enters the objective lens. Finally, the reflective indices (formula_9) of the solution and cover slip can be experimentally found or reported by manufacturers. Excitation beam. For complex fluoroscope microscopy techniques, lasers are the preferred light source as they are highly uniform, intense, and near-monochromatic. However, it is noted that ARC LAMP light sources and other types of sources may also work. Typically the wavelength of excitation beam is designated by the requirements of the fluorophores within the sample, with most common excitation wavelengths being in the 400–700 nm range for biological samples. In practice, a lightbox will generate a high intensity multichromatic laser, which will then be filtered to allow the desired wavelengths through to excite the sample. For objective-based TIRFM, the excitation beam and fluoresced emission beam will be captured via the same objective lens. 
Thus, to split the beams, a dichromatic mirror is used to reflect the incoming excitation beam towards the objective lens and to allow the emission beam to pass through into the detector. Additional filtering may be required to further separate emission and excitation wavelengths. Emission beam. When excited with specific wavelengths of light, fluorophore dyes will reemit light at longer wavelengths (which contain less energy). In the context of TIRFM, only fluorophores close to the interface will be readily excited by the evanescent field, while those past ~100 nm will be highly attenuated. Light emitted by the fluorophores is undirected, and thus will pass through the objective lens at varying locations with varying intensities. This signal will then pass through the dichromatic mirror and onward to the detector. Cover slip and immersion oil. Glass cover slips typically have a refractive index around formula_10, while the immersion oil refractive index is a comparable formula_11. The medium of air, which has a refractive index of formula_12, would cause refraction of the excitation beam between the objective and the coverslip; thus the oil is used to fill the region and prevent superfluous interface interactions before the beam reaches the interface between the coverslip and the sample. Objective lens. The objective lens numerical aperture (NA) specifies the range of angles over which the system can accept or emit light. To achieve the greatest incident angles, it is desirable to pass light at an off-axis angle through the peripheries of the lens. Back focal plane (BFP). The back focal plane (also called "aperture plane") is the plane through which the excitatory beam is focused before passing through the objective. Adjusting the distance between the objective and the BFP can yield different imaging magnifications, as the incident angle becomes less or more steep. The beam must be passed through the BFP off-axis in order to pass through the objective at its periphery, allowing the angle to be sufficiently greater than the critical angle. The beam must also be focused at the BFP because this ensures that the light passing through the objective is collimated, interacting with the cover slip at a single angle and thus all totally internally reflecting. Sample. The sample should be adsorbed to the surface of the glass cover slip and stained with appropriate fluorophores to resolve the desired features within the sample. This follows the same protocol as any other fluorescence microscopy technique. Dichroic (dichromatic) filter. The dichroic filter is an edge filter used at an oblique angle of incidence (typically 45°) to efficiently reflect light in the excitation band and to transmit light in the emission band. The 45° angle of the filter separates the paths of the excitation and emission beams. The filter is composed of a complex system of multiple layers of metals, metal salts and dielectrics which have been vacuum-deposited onto thin glass. This coating is designed to have high reflectivity for shorter wavelengths and high transmission for longer wavelengths. While the filter directs the selected excitation light (shorter wavelength) through the objective and onto the plane of the specimen, it also passes the emitted fluorescence light (longer wavelength) to the barrier filter and reflects any scattered excitation light back in the direction of the laser source. This maximizes both the amount of exciting radiation delivered to the specimen and the amount of emitted fluorescence that reaches the detector. 
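As a rough numerical illustration of the evanescent-wave relations given in the section above (critical angle and penetration depth), the following sketch evaluates them for typical values; the refractive indices, wavelength, and incident angle used here are illustrative assumptions, not values taken from this article.

```python
import math

# Illustrative values (assumed, not from the article):
n_sample = 1.38             # refractive index of the aqueous sample (medium 2)
n_coverslip = 1.52          # refractive index of the glass cover slip (medium 1)
wavelength = 488e-9         # excitation wavelength in vacuum, m
theta = math.radians(70.0)  # incident angle, chosen above the critical angle

# Critical angle: theta_c = arcsin(n_sample / n_coverslip)
theta_c = math.asin(n_sample / n_coverslip)

# Penetration depth: d = lambda0/(4*pi) * (n_coverslip^2 sin^2(theta) - n_sample^2)^(-1/2)
d = wavelength / (4.0 * math.pi) / math.sqrt(
    n_coverslip**2 * math.sin(theta)**2 - n_sample**2)

print(f"critical angle   : {math.degrees(theta_c):.1f} degrees")  # ~65 degrees
print(f"penetration depth: {d * 1e9:.0f} nm")                     # on the order of 100 nm
```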
Barrier filter. The barrier filter mainly blocks undesired wavelengths, especially the shorter excitation wavelengths. It is typically a bandpass filter that passes only the wavelengths emitted by the fluorophore and blocks all undesired light outside this band. More modern microscopes enable the barrier filter to be changed according to the wavelength of the fluorophore's specific emission. Image detection and resolution. The image is detected by a charge-coupled device (CCD) digital camera. CCD cameras have photon detectors, which are thin silicon wafers, assembled into 2D arrays of light-sensitive regions. The detector arrays capture and store image information in the form of localized electrical charge that varies with incident light intensity. The photons are transformed into electrons by the detectors, and the electrons are converted to a readable electrical signal by the circuit board. The recorded image corresponds to the original signal convolved with the point spread function (PSF) of the system. As such, image resolution is highly dependent on both the number of detectors and the point spread function. Image artifact and noise. Most fluorescence imaging techniques exhibit background noise due to illuminating and reconstructing large slices (in the z-direction) of the samples. Since TIRFM uses an evanescent wave to fluoresce a thin slice of the sample, there is inherently less background noise and fewer artifacts. However, there are still other sources of noise and artifacts, such as Poisson noise, optical aberrations, photobleaching, and other fluorescent molecules. Poissonian noise reflects the fundamental uncertainty in the measurement of light. This causes uncertainty in the detection of fluorescence photons. If N photons are measured in a particular measurement, there is a 63% probability that the true average value is in the range between N +√N and N −√N. This noise may cause misrepresentation of the object at incorrect pixel locations. Optical aberrations can arise from diffraction of the fluorescence light or from misalignment of the microscope and objective. Diffraction of light on the sample slide can spread the fluorescence signal and result in blurring of the recorded images. Similarly, if there is a misalignment between the objective lens, filter, and detector, the excitation or emission beam may not be in focus and can cause blurring in the images. Photobleaching can occur when the covalent or noncovalent bonds in the fluorophores are destroyed by the excitation light, so that the fluorophores can no longer fluoresce. The fluorescing substances will always degrade to some extent under the energy of the exciting radiation, which causes the fluorescence to fade and results in a dark, blurry image. Photobleaching is inevitable but can be minimized by avoiding unwanted light exposure and using immersion oils to minimize light scattering. Autofluorescence can occur in certain cell structures where a natural compound in the structure fluoresces after being excited at relatively short wavelengths (similar to the excitation wavelength). Induced fluorescence can also occur when certain non-autofluorescent compounds become fluorescent after binding to certain chemicals (such as formaldehyde). These fluorescence signals can result in artifacts or background noise in the image. 
Noise from other fluorescent compounds can be effectively eliminated by using filters to capture only the desired fluorescence wavelength, or by making sure that autofluorescent compounds are not present in the sample. Current and future work. Modern fluorescence techniques attempt to incorporate methods to eliminate some of the blurring and noise. Optical aberrations are generally deterministic (they are constant throughout the imaging process and across different samples). Deterministic blurring can be eliminated by deconvolving the signal and subtracting the known artifact. The deconvolution technique simply uses an inverse Fourier transform to obtain the original fluorescence signal and remove the artifact. Nevertheless, deconvolution has only been shown to work if there is a strong fluorescence signal or when the noise is clearly identified. In addition, deconvolution performs poorly because it does not include statistical information and cannot reduce non-deterministic noise such as Poissonian noise. To obtain better image resolution and quality, researchers have used statistical techniques to model the probability distribution of photons on the detector. This technique, called the maximum likelihood method, is being further improved by algorithms that increase its speed.
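The Fourier-domain deconvolution idea described above can be sketched in a few lines of Python. This is a generic illustration with a made-up one-dimensional signal and an assumed Gaussian point spread function; a small regularization constant is included because a naive inverse filter diverges wherever the PSF spectrum approaches zero.

```python
import numpy as np

# Made-up 1-D "fluorescence" signal: two point-like emitters.
signal = np.zeros(256)
signal[100], signal[160] = 1.0, 0.6

# Assumed Gaussian point spread function (PSF), normalised to unit sum.
grid = np.arange(256) - 128
psf = np.exp(-grid**2 / (2.0 * 4.0**2))
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))        # PSF spectrum, centred at index 0

# Blurred, noisy measurement: convolution with the PSF plus detector noise.
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))
rng = np.random.default_rng(0)
measured = blurred + rng.normal(scale=0.01, size=blurred.size)

# Regularised (Wiener-style) inverse filter in the Fourier domain.
eps = 1e-3                                   # assumed regularisation constant
restored = np.real(np.fft.ifft(np.fft.fft(measured) * np.conj(H) / (np.abs(H)**2 + eps)))

# The brighter emitter (originally at index 100) should be recovered near its true position.
print("location of strongest restored peak:", int(np.argmax(restored)))
```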
[ { "math_id": 0, "text": "\\theta_c = \\sin^{-1} \\left( \\frac{n_1}{n_2} \\right)" }, { "math_id": 1, "text": "n_1" }, { "math_id": 2, "text": "n_2" }, { "math_id": 3, "text": "\\theta_c" }, { "math_id": 4, "text": "I(Z) = I_0 e^{-z/d}" }, { "math_id": 5, "text": "d" }, { "math_id": 6, "text": "d = \\frac{\\lambda_0}{4\\pi}\\left(n_2^2 \\sin^2\\theta - n_1^2\\right)^{-1/2}" }, { "math_id": 7, "text": "\\lambda_0" }, { "math_id": 8, "text": "\\theta" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "n=1.52" }, { "math_id": 11, "text": "n=1.51" }, { "math_id": 12, "text": "n=1.00" } ]
https://en.wikipedia.org/wiki?curid=1183118
1183122
Water potential
Potential energy of water per unit volume relative to water in known conditions Water potential is the potential energy of water per unit volume relative to pure water in reference conditions. Water potential quantifies the tendency of water to move from one area to another due to osmosis, gravity, mechanical pressure and matrix effects such as capillary action (which is caused by surface tension). The concept of water potential has proved useful in understanding and computing water movement within plants, animals, and soil. Water potential is typically expressed in potential energy per unit volume and very often is represented by the Greek letter ψ. Water potential integrates a variety of different potential drivers of water movement, which may operate in the same or different directions. Within complex biological systems, many potential factors may be operating simultaneously. For example, the addition of solutes lowers the potential (negative vector), while an increase in pressure increases the potential (positive vector). If the flow is not restricted, water will move from an area of higher water potential to an area that is lower potential. A common example is water with dissolved salts, such as seawater or the fluid in a living cell. These solutions have negative water potential, relative to the pure water reference. With no restriction on flow, water will move from the locus of greater potential (pure water) to the locus of lesser (the solution); flow proceeds until the difference in potential is equalized or balanced by another water potential factor, such as pressure or elevation. Components of water potential. Many different factors may affect the total water potential, and the sum of these potentials determines the overall water potential and the direction of water flow: formula_0 where: All of these factors are quantified as potential energies per unit volume, and different subsets of these terms may be used for particular applications (e.g., plants or soils). Different conditions are also defined as reference depending on the application: for example, in soils, the reference condition is typically defined as pure water at the soil surface. Pressure potential. Pressure potential is based on mechanical pressure and is an important component of the total water potential within plant cells. Pressure potential increases as water enters a cell. As water passes through the cell wall and cell membrane, it increases the total amount of water present inside the cell, which exerts an outward pressure that is opposed by the structural rigidity of the cell wall. By creating this pressure, the plant can maintain turgor, which allows the plant to keep its rigidity. Without turgor, plants will lose structure and wilt. The pressure potential in a plant cell is usually positive. In plasmolysed cells, pressure potential is almost zero. Negative pressure potentials occur when water is pulled through an open system such as a plant xylem vessel. Withstanding negative pressure potentials (frequently called "tension") is an important adaptation of the xylem. This tension can be measured empirically using the Pressure bomb. Osmotic potential (solute potential). Pure water is usually defined as having an osmotic potential (formula_2) of zero, and in this case, solute potential can never be positive. 
The relationship of solute concentration (in molarity) to solute potential is given by the van 't Hoff equation: formula_7 where formula_8 is the concentration in molarity of the solute, formula_9 is the van 't Hoff factor, the ratio of amount of particles in solution to amount of formula units dissolved, formula_10 is the ideal gas constant, and formula_11 is the absolute temperature. For example, when a solute is dissolved in water, water molecules are less likely to diffuse away via osmosis than when there is no solute. A solution will have a lower and hence more negative water potential than that of pure water. Furthermore, the more solute molecules present, the more negative the solute potential is. Osmotic potential has important implications for many living organisms. If a living cell is surrounded by a more concentrated solution, the cell will tend to lose water to the more negative water potential (formula_12) of the surrounding environment. This can be the case for marine organisms living in sea water and halophytic plants growing in saline environments. In the case of a plant cell, the flow of water out of the cell may eventually cause the plasma membrane to pull away from the cell wall, leading to plasmolysis. Most plants, however, have the ability to increase solute inside the cell to drive the flow of water into the cell and maintain turgor. This effect can be used to power an osmotic power plant. A soil solution also experiences osmotic potential. The osmotic potential is made possible due to the presence of both inorganic and organic solutes in the soil solution. As water molecules increasingly clump around solute ions or molecules, the freedom of movement, and thus the potential energy, of the water is lowered. As the concentration of solutes is increased, the osmotic potential of the soil solution is reduced. Since water has a tendency to move toward lower energy levels, water will want to travel toward the zone of higher solute concentrations. Although, liquid water will only move in response to such differences in osmotic potential if a semipermeable membrane exists between the zones of high and low osmotic potential. A semipermeable membrane is necessary because it allows water through its membrane while preventing solutes from moving through its membrane. If no membrane is present, movement of the solute, rather than of the water, largely equalizes concentrations. Since regions of soil are usually not divided by a semipermeable membrane, the osmotic potential typically has a negligible influence on the mass movement of water in soils. On the other hand, osmotic potential has an extreme influence on the rate of water uptake by plants. If soils are high in soluble salts, the osmotic potential is likely to be lower in the soil solution than in the plant root cells. In such cases, the soil solution would severely restrict the rate of water uptake by plants. In salty soils, the osmotic potential of soil water may be so low that the cells in young seedlings start to collapse (plasmolyze). Matrix potential (Matric potential). When water is in contact with solid particles (e.g., clay or sand particles within soil), adhesive intermolecular forces between the water and the solid can be large and important. The forces between the water molecules and the solid particles in combination with attraction among water molecules promote surface tension and the formation of menisci within the solid matrix. Force is then required to break these menisci. 
The magnitude of the matrix potential depends on the distances between solid particles, that is, on the width of the menisci (which also governs capillary action and the pressure difference between the ends of the capillary), and on the chemical composition of the solid matrix (which influences the menisci and macroscopic motion due to ionic attraction). In many cases, the absolute value of matrix potential can be relatively large in comparison to the other components of water potential discussed above. Matrix potential markedly reduces the energy state of water near particle surfaces. Although water movement due to matrix potential may be slow, it is still extremely important in supplying water to plant roots and in engineering applications. The matrix potential is always negative because the water attracted by the soil matrix has an energy state lower than that of pure water. Matrix potential only occurs in unsaturated soil above the water table. If the matrix potential approaches a value of zero, nearly all soil pores are completely filled with water, i.e. fully saturated and at maximum retentive capacity. The matrix potential can vary considerably among soils. In the case that water drains into less-moist soil zones of similar porosity, the matrix potential is generally in the range of −10 to −30 kPa. Empirical examples. Soil-plant-air continuum. At a potential of 0 kPa, soil is in a state of saturation. At saturation, all soil pores are filled with water, and water typically drains from large pores by gravity. At a potential of −33 kPa, or −1/3 bar (−10 kPa for sand), soil is at field capacity. Typically, at field capacity, air is in the macropores, and water in the micropores. Field capacity is viewed as the optimal condition for plant growth and microbial activity. At a potential of −1500 kPa, the soil is at its permanent wilting point, at which plant roots cannot extract the water through osmotic diffusion. Soil water still evaporates at more negative potentials, down to a hygroscopic level, at which soil water is held by solid particles in a thin film by molecular adhesion forces. In contrast, atmospheric water potentials are much more negative; a typical value for dry air is −100 MPa, though this value depends on the temperature and the humidity. Root water potential must be more negative than that of the soil, and the stem water potential must be intermediate, lower than the root water potential but higher than the leaf water potential, to create a passive flow of water from the soil to the roots, up the stem, to the leaves and then into the atmosphere. Measurement techniques. A tensiometer, electrical resistance gypsum block, neutron probes, or time-domain reflectometry (TDR) can be used to determine soil water potential energy. Tensiometers are limited to 0 to −85 kPa, electrical resistance blocks are limited to −90 to −1500 kPa, neutron probes are limited to 0 to −1500 kPa, and a TDR is limited to 0 to −10,000 kPa. A scale can be used to estimate water weight (percentage composition) if special equipment is not on hand.
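As a worked example of the van 't Hoff relation given in the osmotic-potential section above, the snippet below estimates the osmotic potential of a dilute salt solution; the concentration and the assumption of complete dissociation (i = 2 for NaCl) are illustrative choices, not values from this article.

```python
# Osmotic (solute) potential from the van 't Hoff equation: psi_pi = -M * i * R * T.
# Expressing the concentration in mol per cubic metre makes the result come out in pascals.
M = 100.0      # solute concentration, mol/m^3 (0.1 mol/L; an assumed example value)
i = 2          # van 't Hoff factor for NaCl, assuming complete dissociation
R = 8.314      # ideal gas constant, J/(mol K)
T = 298.15     # absolute temperature, K (25 degrees Celsius)

psi_pi = -M * i * R * T
print(f"osmotic potential ~ {psi_pi / 1000:.0f} kPa")   # about -496 kPa
```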
[ { "math_id": 0, "text": "\\Psi = \\Psi_0 + \\Psi_\\pi + \\Psi_p + \\Psi_s + \\Psi_v + \\Psi_m " }, { "math_id": 1, "text": "\\Psi_0" }, { "math_id": 2, "text": "\\Psi_\\pi" }, { "math_id": 3, "text": "\\Psi_p" }, { "math_id": 4, "text": "\\Psi_s" }, { "math_id": 5, "text": "\\Psi_v" }, { "math_id": 6, "text": "\\Psi_m" }, { "math_id": 7, "text": "\\Psi_\\pi = - MiRT" }, { "math_id": 8, "text": "M" }, { "math_id": 9, "text": "i" }, { "math_id": 10, "text": "R" }, { "math_id": 11, "text": "T" }, { "math_id": 12, "text": "\\Psi_w" } ]
https://en.wikipedia.org/wiki?curid=1183122
11831990
Bloch's theorem (complex variables)
Mathematical theorem In complex analysis, a branch of mathematics, Bloch's theorem describes the behaviour of holomorphic functions defined on the unit disk. It gives a lower bound on the size of a disk in which an inverse to a holomorphic function exists. It is named after André Bloch. Statement. Let "f" be a holomorphic function in the unit disk |"z"| ≤ 1 for which formula_0 Bloch's Theorem states that there is a disk S ⊂ D on which f is biholomorphic and f(S) contains a disk with radius 1/72. Landau's theorem. If "f" is a holomorphic function in the unit disk with the property |"f′"(0)| = 1, then let "Lf" be the radius of the largest disk contained in the image of "f". Landau's theorem states that there is a constant "L" defined as the infimum of "Lf" over all such functions "f", and that "L" is greater than Bloch's constant "L" ≥ "B". This theorem is named after Edmund Landau. Valiron's theorem. Bloch's theorem was inspired by the following theorem of Georges Valiron: Theorem. If "f" is a non-constant entire function then there exist disks "D" of arbitrarily large radius and analytic functions φ in "D" such that "f"(φ("z")) = "z" for "z" in "D". Bloch's theorem corresponds to Valiron's theorem via the so-called Bloch's Principle. Proof. Landau's theorem. We first prove the case when "f"(0) = 0, "f′"(0) = 1, and |"f′"("z")| ≤ 2 in the unit disk. By Cauchy's integral formula, we have a bound formula_1 where γ is the counterclockwise circle of radius "r" around "z", and 0 &lt; "r" &lt; 1 − |"z"|. By Taylor's theorem, for each "z" in the unit disk, there exists 0 ≤ "t" ≤ 1 such that "f"("z") = "z" + "z"2"f″"("tz") / 2. Thus, if |"z"| = 1/3 and |"w"| &lt; 1/6, we have formula_2 By Rouché's theorem, the range of "f" contains the disk of radius 1/6 around 0. Let "D"("z"0, "r") denote the open disk of radius "r" around "z"0. For an analytic function "g" : "D"("z"0, "r") → C such that "g"("z"0) ≠ 0, the case above applied to ("g"("z"0 + "rz") − "g"("z"0)) / ("rg′"(0)) implies that the range of "g" contains "D"("g"("z"0), |"g′"(0)|"r" / 6). For the general case, let "f" be an analytic function in the unit disk such that |"f′"(0)| = 1, and "z"0 = 0. Repeating this argument, we either find a disk of radius at least 1/24 in the range of "f", proving the theorem, or find an infinite sequence ("zn") such that |"zn" − "z""n"−1| &lt; 1/2"n"+1 and |"f′"("zn")| &gt; 2|"f′"("z""n"−1)|. In the latter case the sequence is in "D"(0, 1/2), so "f′" is unbounded in "D"(0, 1/2), a contradiction. Bloch's Theorem. In the proof of Landau's Theorem above, Rouché's theorem implies that not only can we find a disk "D" of radius at least 1/24 in the range of "f", but there is also a small disk "D"0 inside the unit disk such that for every "w" ∈ "D" there is a unique "z" ∈ "D"0 with "f"("z") = "w". Thus, "f" is a bijective analytic function from "D"0 ∩ "f"−1("D") to "D", so its inverse φ is also analytic by the inverse function theorem. Bloch's and Landau's constants. The number "B" is called the Bloch's constant. The lower bound 1/72 in Bloch's theorem is not the best possible. Bloch's theorem tells us "B" ≥ 1/72, but the exact value of "B" is still unknown. The best known bounds for "B" at present are formula_3 where Γ is the Gamma function. The lower bound was proved by Chen and Gauthier, and the upper bound dates back to Ahlfors and Grunsky. The similarly defined optimal constant "L" in Landau's theorem is called the Landau's constant. 
Its exact value is also unknown, but it is known that formula_4 In their paper, Ahlfors and Grunsky conjectured that their upper bounds are actually the true values of "B" and "L". For injective holomorphic functions on the unit disk, a constant "A" can similarly be defined. It is known that formula_5
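The closed-form bounds quoted above can be checked numerically; the short sketch below simply evaluates them with Python's gamma function.

```python
from math import gamma, sqrt

# Ahlfors-Grunsky upper bound for Bloch's constant B (the right-hand side above).
B_upper = sqrt((sqrt(3) - 1) / 2) * gamma(1 / 3) * gamma(11 / 12) / gamma(1 / 4)

# Lower bound for B due to Chen and Gauthier.
B_lower = sqrt(3) / 4 + 2e-14

# Upper bound for Landau's constant L.
L_upper = gamma(1 / 3) * gamma(5 / 6) / gamma(1 / 6)

print(f"{B_lower:.5f} <= B <= {B_upper:.5f}")   # approximately 0.43301 <= B <= 0.47186
print(f"0.5 < L <= {L_upper:.12f}")             # approximately 0.543258965342
```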
[ { "math_id": 0, "text": "|f'(0)|=1" }, { "math_id": 1, "text": "|f''(z)|=\\left|\\frac{1}{2\\pi i}\\oint_\\gamma\\frac{f'(w)}{(w-z)^2}\\,\\mathrm{d}w\\right|\\le\\frac{1}{2\\pi}\\cdot2\\pi r\\sup_{w=\\gamma(t)}\\frac{|f'(w)|}{|w-z|^2}\\le\\frac{2}{r}," }, { "math_id": 2, "text": "|(f(z)-w)-(z-w)|=\\frac12|z|^2|f''(tz)|\\le\\frac{|z|^2}{1-t|z|}\\le\\frac{|z|^2}{1-|z|}=\\frac16<|z|-|w|\\le|z-w|." }, { "math_id": 3, "text": "0.4332\\approx\\frac{\\sqrt{3}}{4}+2\\times10^{-14}\\leq B\\leq \\sqrt{\\frac{\\sqrt{3}-1}{2}} \\cdot \\frac{\\Gamma(\\frac{1}{3})\\Gamma(\\frac{11}{12})}{\\Gamma(\\frac{1}{4})}\\approx 0.47186," }, { "math_id": 4, "text": "0.5 < L \\le \\frac{\\Gamma(\\frac{1}{3})\\Gamma(\\frac{5}{6})}{\\Gamma(\\frac{1}{6})} = 0.543258965342... \\,\\!" }, { "math_id": 5, "text": "0.5 < A \\le 0.7853" } ]
https://en.wikipedia.org/wiki?curid=11831990
11832736
Ryszard Engelking
Polish mathematician (1935–2023) Ryszard Engelking (16 November 1935 – 16 November 2023) was a Polish mathematician. He worked mainly on general topology and dimension theory. He is the author of several influential monographs in this field. The 1989 edition of his "General Topology" is nowadays a standard reference for topology. Engelking died on 16 November 2023, his 88th birthday. Scientific work. Apart from his books, Ryszard Engelking is known, among other things, for a generalization to an arbitrary topological space of the "Alexandroff double circle", and for work on completely metrizable spaces, suborderable spaces and generalized ordered spaces. The "Engelking–Karlowicz theorem", proved together with Monica Karlowicz, is a statement about the existence of a family of functions from formula_0 to formula_1 with topological and set-theoretical applications. Books. Engelking's books include: Notes.
[ { "math_id": 0, "text": "2^ \\mu" }, { "math_id": 1, "text": "\\mu" } ]
https://en.wikipedia.org/wiki?curid=11832736
11834957
Picosecond ultrasonics
Non-destructive type of ultrasonics Picosecond ultrasonics is a type of ultrasonics that uses ultra-high frequency ultrasound generated by ultrashort light pulses. It is a non-destructive technique in which picosecond acoustic pulses penetrate into thin films or nanostructures to reveal internal features such as film thickness as well as cracks, delaminations and voids. It can also be used to probe liquids. The technique is also referred to as picosecond laser ultrasonics or laser picosecond acoustics. Introduction. When an ultrashort light pulse, known as the pump pulse, is focused onto a thin opaque film on a substrate, the optical absorption results in a thermal expansion that launches an elastic strain pulse. This strain pulse mainly consists of longitudinal acoustic phonons that propagate directly into the film as a coherent pulse. After acoustic reflection from the film-substrate interface, the strain pulse returns to the film surface, where it can be detected by a delayed optical probe pulse through optical reflectance or (for films that are thin enough) transmittance changes. This time-resolved method for generation and photoelastic detection of coherent picosecond acoustic phonon pulses was proposed by Christian Thomsen and coworkers in a collaboration between Brown University and Bell Laboratories in 1984. Initial development took place in Humphrey Maris’s group at Brown University and elsewhere in the late 1980s. In the early 1990s the method was extended in scope at Nippon Steel Corp. by direct sensing of the picosecond surface vibrations of the film caused by the returning strain pulses, resulting in improved detection sensitivity in many cases. Advances after the year 2000 include the generation of picosecond acoustic solitons by the use of millimeter propagation distances and the generation of picosecond shear waves by the use of anisotropic materials or small (~1 μm) optical spot sizes. Acoustic frequencies up to the terahertz range in solids and up to ~ 10 GHz in liquids have been reported. Apart from thermal expansion, generation through the deformation potential or through piezoelectricity is possible. Picosecond ultrasonics is currently used as a thin film metrology technique for probing films of sub-micrometer thicknesses with nanometer resolution in-depth, that sees widespread use in the semiconductor processing industry. The picosecond ultrasonics has also been applied to measure the acoustic velocity inside nanomaterials or to study phonon physics. Generation and detection. Generation. The absorption of an incident optical pump pulse sets up a local thermal stress near the surface of the sample. This stress launches an elastic strain pulse that propagates into the sample. The exact depth for the stress generation depends, in particular, on the material involved and the optical pump wavelength. In metals and semiconductors, for example, ultrashort-timescale thermal and carrier diffusion tends to increase the depth that is initially heated within the first ~1 ps. Acoustic pulses are generated with a temporal duration approximately equal to the acoustic transit time across this initially heated depth, in general greater than the optical absorption depth. For example, the optical absorption depths in Al and GaAs are ~10 nm for blue light, but the electron diffusion depths are ~50 and 100 nm, respectively. The diffusion depth determines the spatial extent of the strain pulse in the through-thickness direction. 
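A quick back-of-the-envelope check of the time scales described above: the strain-pulse duration is roughly the acoustic transit time across the initially heated depth. The depth and sound velocity below are illustrative, assumed values rather than data from this article.

```python
# Order-of-magnitude estimate of the strain-pulse duration: the acoustic transit
# time across the initially heated depth (values are assumed, for illustration only).
heated_depth = 50e-9       # m, e.g. an electron-diffusion depth in a metal
sound_velocity = 6.4e3     # m/s, roughly the longitudinal sound velocity in aluminium

pulse_duration = heated_depth / sound_velocity
print(f"strain pulse duration ~ {pulse_duration * 1e12:.1f} ps")   # a few picoseconds
```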
The main generation mechanism for metals is thermal expansion, whereas for semiconductors it is often the deformation potential mechanism. In piezoelectric materials the inverse piezoelectric effect, arising from the production of internal electric fields induced by charge separation, may dominate. When the optical spot diameter "D", for example "D"~10 μm, at the surface of an elastically isotropic and flat sample is much greater than the initially heated depth, one can approximate the acoustic field propagating into the solid by a one-dimensional problem, provided that one does not work with strain propagation depths that are too large (~"D"²/Λ=Rayleigh length, where Λ is the acoustic wavelength). In this configuration, the one originally proposed for picosecond ultrasonics, only longitudinal acoustic strain pulses need to be considered. The strain pulse forms a pancake-like region of longitudinal strain that propagates directly into the solid away from the surface. For small spot sizes approaching the optical diffraction limit, for example "D"~1 μm, it may be necessary to consider the three-dimensional nature of the problem. In this case acoustic mode-conversion at surfaces and interfaces and acoustic diffraction play an important role, resulting in the involvement of both shear and longitudinal polarizations. The strain pulse separates into different polarization components and spreads out laterally (for distances >"D"²/Λ) as it propagates down into the sample, resulting in a more complicated, three-dimensional strain distribution. The use of both shear and longitudinal pulses is advantageous for measuring elastic constants or sound velocities. Shear waves may also be generated by the use of elastically anisotropic solids cut at oblique angles to the crystal axes. This allows shear or quasi-shear waves to be generated with a large amplitude in the through-thickness direction. It is also possible to generate strain pulses whose shape does not vary on propagation. These so-called acoustic solitons have been demonstrated at low temperatures over propagation distances of a few millimeters. They result from a delicate balance between acoustic dispersion and nonlinear effects. Detection. Strain pulses returning to the surface from buried interfaces or other sub-surface acoustically inhomogeneous regions are detected as a series of echoes. For example, strain pulses propagating back and forth through a thin film produce a decaying series of echoes, from which one may derive, in particular, the film thickness, the ultrasonic attenuation or the ultrasonic dispersion. The original detection mechanism used in picosecond ultrasonics is based on the photoelastic effect. The refractive index and extinction coefficient near the surface of the solid are perturbed by the returning strain pulses (within the optical absorption depth of the probe light), resulting in changes in the optical reflectance or transmission. The measured temporal echo shape results from a spatial integral involving both the probe light optical absorption profile and the strain pulse spatial profile (see below). Detection involving the surface displacement is also possible if the optical phase variation is recorded. In this case the echo shape when measured through the optical phase variation is proportional to a spatial integral of the strain distribution (see below). Surface displacement detection has been demonstrated with ultrafast optical beam deflection and with interferometry. 
For a homogeneous isotropic sample in vacuum with normal optical incidence, the optical amplitude reflectance ("r") modulation can be expressed as formula_0 where formula_1 ("n" the refractive index and "κ" the extinction coefficient) is the complex refractive index for the probe light in the sample, "k" is the wave number of the probe light in vacuum, "η"("z", "t") is the spatiotemporal longitudinal strain variation, formula_2 is the photoelastic constant, "z" is the depth in the sample, "t" is the time and "u" is the surface displacement of the sample (in the +"z" direction): formula_3 To obtain the variation in optical reflectivity for intensity "R" one uses formula_4, whereas to obtain the variation in optical phase one uses formula_5. The theory of optical detection in multilayer samples, including both interface motion and the photoelastic effect, is now well-developed. The control of the polarization state and angle of incidence of the probe light has been shown to be useful for detecting shear acoustic waves. Applications and future challenges. Picosecond ultrasonics has been applied successfully to analyze a variety of materials, both solid and liquid. It is increasingly being applied to nanostructures, including sub-micrometre films, multilayers, quantum wells, semiconductor heterostructures and nano-cavities. It is also applied to probe the mechanical properties of a single biological cell. References.
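The reflectance-modulation formula above lends itself to direct numerical evaluation. The sketch below discretizes the depth integral for an assumed Gaussian strain pulse; the optical constants and the photoelastic coefficient are arbitrary illustrative numbers, not material data from this article.

```python
import numpy as np

# Illustrative (assumed) parameters; they are not material data from the article.
lam = 800e-9                       # probe wavelength in vacuum, m
k = 2.0 * np.pi / lam              # vacuum wave number of the probe light
n_tilde = 2.8 + 1.5j               # complex refractive index n + i*kappa (arbitrary)
dn_deta = 10.0 + 5.0j              # photoelastic constant d(n_tilde)/d(eta) (arbitrary)

# Assumed Gaussian strain pulse centred 20 nm below the surface.
z = np.linspace(0.0, 200e-9, 2001)                 # depth grid, m
dz = z[1] - z[0]
eta = 1e-4 * np.exp(-((z - 20e-9) / 10e-9) ** 2)

# delta r / r = [4 i k n/(1 - n^2)] (dn/d eta) * Int eta(z) e^{2 i n k z} dz + 2 i k u
integral = np.sum(eta * np.exp(2j * n_tilde * k * z)) * dz
u = -np.sum(eta) * dz                              # surface displacement
dr_over_r = 4j * k * n_tilde / (1.0 - n_tilde**2) * dn_deta * integral + 2j * k * u

dR_over_R = 2.0 * dr_over_r.real   # relative change in intensity reflectivity
dphi = dr_over_r.imag              # change in optical phase
print(f"dR/R ~ {dR_over_R:.3e}, dphi ~ {dphi:.3e} rad")
```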
[ { "math_id": 0, "text": "\\frac {\\delta r}{r} = \\frac{4ik\\tilde n}{1-{\\tilde n}^2}\\frac{d\\tilde n}{d\\eta}\\int_{0}^{\\infty} \\eta(z,t)e^{2i\\tilde nkz}dz+2iku(t)" }, { "math_id": 1, "text": "\\tilde n =n+i\\kappa" }, { "math_id": 2, "text": "d\\tilde n/d\\eta" }, { "math_id": 3, "text": "u(t)= -\\int_{0}^{\\infty} \\eta(z,t)dz" }, { "math_id": 4, "text": "\\delta R/R=2\\rm{Re}(\\it{\\delta r/r})" }, { "math_id": 5, "text": "\\delta \\it{\\phi}=\\rm{Im}(\\it{\\delta r/r})" } ]
https://en.wikipedia.org/wiki?curid=11834957
1183512
Automatic stabilizer
In macroeconomics, automatic stabilizers are features of the structure of modern government budgets, particularly income taxes and welfare spending, that act to damp out fluctuations in real GDP. The size of the government budget deficit tends to increase when a country enters a recession, which tends to keep national income higher by maintaining aggregate demand. There may also be a multiplier effect. This effect happens automatically depending on GDP and household income, without any explicit policy action by the government, and acts to reduce the severity of recessions. Similarly, the budget deficit tends to decrease during booms, which pulls back on aggregate demand. Therefore, automatic stabilizers tend to reduce the size of the fluctuations in a country's GDP. Induced taxes. Tax revenues generally depend on household income and the pace of economic activity. Household incomes fall and the economy slows down during a recession, and government tax revenues fall as well. This change in tax revenue occurs because of the way modern tax systems are generally constructed. If national income rises, by contrast, then tax revenues will rise. During an economic boom, tax revenue is higher and in a recession tax revenue is lower, not only in absolute terms but as a proportion of national income. Some other forms of taxation do not exhibit these effects, if they bear no relation to income (e.g. poll taxes, export tariffs or property taxes). Transfer payments. Most governments also pay unemployment and welfare benefits. Generally speaking, the number of unemployed people and those on low incomes who are entitled to other benefits increases in a recession and decreases in a boom. As a result, government expenditure increases automatically in recessions and decreases automatically in booms in absolute terms. Since output increases in booms and decreases in recessions, expenditure is expected to increase as a share of income in recessions and decrease as a share of income in booms. Incorporated into the expenditure multiplier. This section incorporates automatic stabilization into a broadly Keynesian multiplier model. formula_0 Holding all other things constant (ceteris paribus), the greater the level of taxes or the greater the MPI, the lower the value of this multiplier. For example, let us assume that: → "MPC" = 0.8 → "T" = 0 → "MPI" = 0.2 Here we have an economy with zero marginal taxes and zero transfer payments. If these figures were substituted into the multiplier formula, the resulting figure would be 2.5. In this case, a $1 billion change in expenditure would lead to a $2.5 billion change in equilibrium real GDP. Let us now take an economy where there are positive taxes (an increase from 0 to 0.2), while the MPC and MPI remain the same: → "MPC" = 0.8 → "T" = 0.2 → "MPI" = 0.2 If these figures were now substituted into the multiplier formula, the resulting figure would be 1.79. In this case, the same $1 billion change in expenditure would now lead to only a $1.79 billion change in equilibrium real GDP. This example shows how the multiplier is lessened by the existence of an automatic stabilizer, which thus helps to lessen the fluctuations in real GDP that result from changes in expenditure. Not only does this example work with changes in T; it would also work by changing the MPI while holding MPC and T constant. 
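The two worked cases above can be reproduced with a few lines of code; this is simply the arithmetic of the multiplier formula, using the same assumed values of MPC, T and MPI.

```python
def expenditure_multiplier(mpc: float, t: float, mpi: float) -> float:
    """Keynesian expenditure multiplier: 1 / (1 - [MPC*(1 - T) - MPI])."""
    return 1.0 / (1.0 - (mpc * (1.0 - t) - mpi))

# No induced taxes: the multiplier is 2.5.
print(expenditure_multiplier(mpc=0.8, t=0.0, mpi=0.2))   # 2.5

# With a 20% marginal tax rate the multiplier falls to about 1.79.
print(expenditure_multiplier(mpc=0.8, t=0.2, mpi=0.2))   # ~1.7857

# A $1 billion change in expenditure therefore changes equilibrium real GDP by
# $2.5 billion in the first case but only about $1.79 billion in the second.
```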
There is broad consensus among economists that automatic stabilizers often exist and function in the short term. Additionally, imports often tend to decrease in a recession, meaning more of the national income is spent at home rather than abroad. This also helps stabilize the economy. Estimated effects. Analysis conducted by the Congressional Budget Office in 2013 estimated the effects of automatic stabilizers on budget deficits and surpluses in each fiscal year since 1960. The analysis found, for example, that stabilizers increased the deficit by 32.9% in fiscal 2009, as the deficit soared to $1.4 trillion as a result of the Great Recession, and by 47.6% in fiscal 2010. Stabilizers increased deficits in 30 of the 52 years from 1960 through 2012. In each of the five surplus years during the period, stabilizers contributed to the surplus; the $3 billion surplus in 1969 would have been a $13 billion deficit if not for stabilizers, and 60% of the 1999 $126 billion surplus was attributed to stabilizers. References.
[ { "math_id": 0, "text": "Multiplier=\\frac{1}{1-[MPC(1-T)-MPI]}" } ]
https://en.wikipedia.org/wiki?curid=1183512
118396
Band gap
Energy range in a solid where no electron states exist In solid-state physics and solid-state chemistry, a band gap, also called a bandgap or energy gap, is an energy range in a solid where no electronic states exist. In graphs of the electronic band structure of solids, the band gap refers to the energy difference (often expressed in electronvolts) between the top of the valence band and the bottom of the conduction band in insulators and semiconductors. It is the energy required to promote an electron from the valence band to the conduction band. The resulting conduction-band electron (and the electron hole in the valence band) are free to move within the crystal lattice and serve as charge carriers to conduct electric current. It is closely related to the HOMO/LUMO gap in chemistry. If the valence band is completely full and the conduction band is completely empty, then electrons cannot move within the solid because there are no available states. If the electrons are not free to move within the crystal lattice, then no current is generated, because there is no net charge carrier mobility. However, if some electrons transfer from the valence band (mostly full) to the conduction band (mostly empty), then current "can" flow (see carrier generation and recombination). Therefore, the band gap is a major factor determining the electrical conductivity of a solid. Substances having large band gaps (also called "wide" band gaps) are generally insulators, those with small band gaps (also called "narrow" band gaps) are semiconductors, and conductors either have very small band gaps or none, because the valence and conduction bands overlap to form a continuous band. In semiconductor physics. Every solid has its own characteristic energy-band structure. This variation in band structure is responsible for the wide range of electrical characteristics observed in various materials. Depending on the dimension, the band structure and spectroscopy can vary. The different types of dimensions are as listed: one dimension, two dimensions, and three dimensions. In semiconductors and insulators, electrons are confined to a number of bands of energy, and forbidden from other regions because there are no allowable electronic states for them to occupy. The term "band gap" refers to the energy difference between the top of the valence band and the bottom of the conduction band. Electrons are able to jump from one band to another. However, in order for a valence band electron to be promoted to the conduction band, it requires a specific minimum amount of energy for the transition. This required energy is an intrinsic characteristic of the solid material. Electrons can gain enough energy to jump to the conduction band by absorbing either a phonon (heat) or a photon (light). A semiconductor is a material with an intermediate-sized, non-zero band gap that behaves as an insulator at T=0K, but allows thermal excitation of electrons into its conduction band at temperatures that are below its melting point. In contrast, a material with a large band gap is an insulator. In conductors, the valence and conduction bands may overlap, so there is no longer a bandgap with forbidden regions of electronic states. The conductivity of intrinsic semiconductors is strongly dependent on the band gap. The only available charge carriers for conduction are the electrons that have enough thermal energy to be excited across the band gap and the electron holes that are left behind when such an excitation occurs. 
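To make the dependence of conductivity on the gap concrete, a standard textbook estimate (not taken from this article) is that the density of thermally excited carriers in an intrinsic semiconductor scales roughly as exp(−Eg/2kT), so the gap enters exponentially. The sketch below compares this factor for an assumed narrow-gap semiconductor and an assumed wide-gap insulator at room temperature.

```python
import math

k_B = 8.617e-5   # Boltzmann constant, eV/K
T = 300.0        # room temperature, K

def thermal_carrier_factor(e_gap_ev: float) -> float:
    """Rough exp(-Eg / 2kT) scaling of the intrinsic carrier density (textbook estimate)."""
    return math.exp(-e_gap_ev / (2.0 * k_B * T))

# Illustrative gaps: ~1.1 eV (silicon-like semiconductor) vs ~5.5 eV (diamond-like insulator).
for gap in (1.1, 5.5):
    print(f"Eg = {gap:.1f} eV -> exp(-Eg/2kT) ~ {thermal_carrier_factor(gap):.1e}")

# The wide-gap factor is tens of orders of magnitude smaller, which is why
# large-band-gap solids behave as insulators at room temperature.
```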
Band-gap engineering is the process of controlling or altering the band gap of a material by controlling the composition of certain semiconductor alloys, such as GaAlAs, InGaAs, and InAlAs. It is also possible to construct layered materials with alternating compositions by techniques like molecular-beam epitaxy. These methods are exploited in the design of heterojunction bipolar transistors (HBTs), laser diodes and solar cells. The distinction between semiconductors and insulators is a matter of convention. One approach is to think of semiconductors as a type of insulator with a narrow band gap. Insulators with a larger band gap, usually greater than 4 eV, are not considered semiconductors and generally do not exhibit semiconductive behaviour under practical conditions. Electron mobility also plays a role in determining a material's informal classification. The band-gap energy of semiconductors tends to decrease with increasing temperature. When temperature increases, the amplitude of atomic vibrations increases, leading to larger interatomic spacing. The interaction between the lattice phonons and the free electrons and holes will also affect the band gap to a smaller extent. The relationship between band gap energy and temperature can be described by Varshni's empirical expression (named after Y. P. Varshni), formula_0, where "Eg"(0), α and β are material constants. Furthermore, lattice vibrations increase with temperature, which increases the effect of electron scattering. Additionally, the number of charge carriers within a semiconductor will increase, as more carriers have the energy required to cross the band-gap threshold, and so the conductivity of semiconductors also increases with increasing temperature. External pressure also influences the electronic structure of semiconductors and, therefore, their optical band gaps. In a regular semiconductor crystal, the band gap is fixed owing to continuous energy states. In a quantum dot crystal, the band gap is size dependent and can be altered to produce a range of energies between the valence band and conduction band. This is known as the quantum confinement effect. Band gaps can be either direct or indirect, depending on the electronic band structure of the material. As mentioned earlier, the band structure and spectroscopy depend on the dimensionality of the solid. Non-metallic solids that are one-dimensional have optical properties that depend on the electronic transitions between the valence and conduction bands. In addition, the spectroscopic transition probability between the initial orbital φi and the final orbital φf depends on the integral ʃ φf*ûεφi, where ε is the electric vector and u is the dipole moment. The band structure of two-dimensional solids arises from the overlap of atomic orbitals. The simplest two-dimensional crystal contains identical atoms arranged on a square lattice. Energy splitting occurs at the Brillouin zone edge in one-dimensional situations because of a weak periodic potential, which produces a gap between bands. This behavior of the one-dimensional situations does not occur in two-dimensional cases because there are extra degrees of freedom of motion. Furthermore, a band gap can be produced by a strong periodic potential in two-dimensional and three-dimensional cases. Direct and indirect band gap. Based on their band structure, materials are characterised as having either a direct band gap or an indirect band gap. 
In the free-electron model, k is the momentum of a free electron and assumes unique values within the Brillouin zone that outlines the periodicity of the crystal lattice. If the momentum of the lowest energy state in the conduction band and the highest energy state of the valence band of a material have the same value, then the material has a direct bandgap. If they are not the same, then the material has an indirect band gap and the electronic transition must undergo momentum transfer to satisfy conservation. Such indirect "forbidden" transitions still occur, however at very low probabilities and weaker energy. For materials with a direct band gap, valence electrons can be directly excited into the conduction band by a photon whose energy is larger than the bandgap. In contrast, for materials with an indirect band gap, a photon and phonon must both be involved in a transition from the valence band top to the conduction band bottom, involving a momentum change. Therefore, direct bandgap materials tend to have stronger light emission and absorption properties and tend to be better suited for photovoltaics (PVs), light-emitting diodes (LEDs), and laser diodes; however, indirect bandgap materials are frequently used in PVs and LEDs when the materials have other favorable properties. Light-emitting diodes and laser diodes. LEDs and laser diodes usually emit photons with energy close to and slightly larger than the band gap of the semiconductor material from which they are made. Therefore, as the band gap energy increases, the LED or laser color changes from infrared to red, through the rainbow to violet, then to UV. Photovoltaic cells. The optical band gap (see below) determines what portion of the solar spectrum a photovoltaic cell absorbs. Strictly, a semiconductor will not absorb photons of energy less than the band gap; whereas most of the photons with energies exceeding the band gap will generate heat. Neither of them contribute to the efficiency of a solar cell. One way to circumvent this problem is based on the so-called photon management concept, in which case the solar spectrum is modified to match the absorption profile of the solar cell. List of band gaps. Below are band gap values for some selected materials. For a comprehensive list of band gaps in semiconductors, see List of semiconductor materials. Optical versus electronic bandgap. In materials with a large exciton binding energy, it is possible for a photon to have just barely enough energy to create an exciton (bound electron–hole pair), but not enough energy to separate the electron and hole (which are electrically attracted to each other). In this situation, there is a distinction between "optical band gap" and "electronic band gap" (or "transport gap"). The optical bandgap is the threshold for photons to be absorbed, while the transport gap is the threshold for creating an electron–hole pair that is "not" bound together. The optical bandgap is at lower energy than the transport gap. In almost all inorganic semiconductors, such as silicon, gallium arsenide, etc., there is very little interaction between electrons and holes (very small exciton binding energy), and therefore the optical and electronic bandgap are essentially identical, and the distinction between them is ignored. However, in some systems, including organic semiconductors and single-walled carbon nanotubes, the distinction may be significant. Band gaps for other quasi-particles. 
In photonics, band gaps or stop bands are ranges of photon frequencies where, if tunneling effects are neglected, no photons can be transmitted through a material. A material exhibiting this behaviour is known as a photonic crystal. The concept of hyperuniformity has broadened the range of photonic band gap materials, beyond photonic crystals. By applying techniques from supersymmetric quantum mechanics, a new class of optical disordered materials has been suggested, which support band gaps perfectly equivalent to those of crystals or quasicrystals. Similar physics applies to phonons in a phononic crystal. Materials. List of electronics topics. References.
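The Varshni expression quoted in the temperature-dependence discussion above can be evaluated directly; the material constants used below are illustrative values of roughly the right order of magnitude for a common semiconductor, not data from this article.

```python
def varshni_gap(t_kelvin: float, e0: float, alpha: float, beta: float) -> float:
    """Varshni empirical expression: Eg(T) = Eg(0) - alpha*T^2 / (T + beta)."""
    return e0 - alpha * t_kelvin**2 / (t_kelvin + beta)

# Assumed, order-of-magnitude constants (eV, eV/K, K); real values are material specific.
E0, ALPHA, BETA = 1.17, 4.7e-4, 636.0

for T in (0.0, 100.0, 200.0, 300.0, 400.0):
    print(f"T = {T:5.0f} K -> Eg ~ {varshni_gap(T, E0, ALPHA, BETA):.3f} eV")

# The gap narrows slowly at low temperature and roughly linearly at high temperature.
```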
[ { "math_id": 0, "text": "E_g(T)=E_g(0)-\\frac{\\alpha T^2}{T+\\beta}" } ]
https://en.wikipedia.org/wiki?curid=118396
118404
Parse tree
Tree in formal language theory A parse tree or parsing tree (also known as a derivation tree or concrete syntax tree) is an ordered, rooted tree that represents the syntactic structure of a string according to some context-free grammar. The term "parse tree" itself is used primarily in computational linguistics; in theoretical syntax, the term "syntax tree" is more common. Concrete syntax trees reflect the syntax of the input language, making them distinct from the abstract syntax trees used in computer programming. Unlike Reed-Kellogg sentence diagrams used for teaching grammar, parse trees do not use distinct symbol shapes for different types of constituents. Parse trees are usually constructed based on either the constituency relation of constituency grammars (phrase structure grammars) or the dependency relation of dependency grammars. Parse trees may be generated for sentences in natural languages (see natural language processing), as well as during processing of computer languages, such as programming languages. A related concept is that of phrase marker or P-marker, as used in transformational generative grammar. A phrase marker is a linguistic expression marked as to its phrase structure. This may be presented in the form of a tree, or as a bracketed expression. Phrase markers are generated by applying phrase structure rules, and themselves are subject to further transformational rules. A set of possible parse trees for a syntactically ambiguous sentence is called a "parse forest." Nomenclature. A parse tree is made up of nodes and branches. In the picture the parse tree is the entire structure, starting from S and ending in each of the leaf nodes (John, ball, the, hit). In a parse tree, each node is either a "root" node, a "branch" node, or a "leaf" node. In the above example, S is a root node, NP and VP are branch nodes, while John, ball, the, and hit are all leaf nodes. Nodes can also be referred to as parent nodes and child nodes. A "parent" node is one which has at least one other node linked by a branch under it. In the example, S is a parent of both NP and VP. A "child" node is one which has at least one node directly above it to which it is linked by a branch of the tree. Again from our example, hit is a child node of V. A nonterminal function is a function (node) which is either a root or a branch in that tree whereas a terminal function is a function (node) in a parse tree which is a leaf. For binary trees (where each parent node has two immediate child nodes), the number of possible parse trees for a sentence with "n" words is given by the Catalan number formula_0. Constituency-based parse trees. The constituency-based parse trees of constituency grammars (phrase structure grammars) distinguish between terminal and non-terminal nodes. The interior nodes are labeled by non-terminal categories of the grammar, while the leaf nodes are labeled by terminal categories. The image below represents a constituency-based parse tree; it shows the syntactic structure of the English sentence "John hit the ball": The parse tree is the entire structure, starting from S and ending in each of the leaf nodes ("John", "hit", "the", "ball"). The following abbreviations are used in the tree: * S for sentence, the top-level structure in this example * NP for noun phrase. The first (leftmost) NP, a single noun "John", serves as the subject of the sentence. The second one is the object of the sentence. * VP for verb phrase, which serves as the predicate * V for verb. 
In this case, it's a transitive verb "hit". * D for determiner, in this instance the definite article "the" * N for noun. Each node in the tree is either a "root" node, a "branch" node, or a "leaf" node. A root node is a node that does not have any branches on top of it. Within a sentence, there is only ever one root node. A branch node is a parent node that connects to two or more child nodes. A leaf node, however, is a terminal node that does not dominate other nodes in the tree. S is the root node, NP and VP are branch nodes, and "John" (N), "hit" (V), "the" (D), and "ball" (N) are all leaf nodes. The leaves are the lexical tokens of the sentence. A parent node is one that has at least one other node linked by a branch under it. In the example, S is a parent of both N and VP. A child node is one that has at least one node directly above it to which it is linked by a branch of the tree. From the example, "hit" is a child node of V. The terms "mother" and "daughter" are also sometimes used for this relationship. Dependency-based parse trees. The dependency-based parse trees of dependency grammars see all nodes as terminal, which means they do not acknowledge the distinction between terminal and non-terminal categories. They are simpler on average than constituency-based parse trees because they contain fewer nodes. The dependency-based parse tree for the example sentence above is as follows: This parse tree lacks the phrasal categories (S, VP, and NP) seen in the constituency-based counterpart above. As in the constituency-based tree, constituent structure is acknowledged. Any complete sub-tree of the tree is a constituent. Thus this dependency-based parse tree acknowledges the subject noun "John" and the object noun phrase "the ball" as constituents, just like the constituency-based parse tree does. The constituency vs. dependency distinction is far-reaching. Whether the additional syntactic structure associated with constituency-based parse trees is necessary or beneficial is a matter of debate. Phrase markers. Phrase markers, or P-markers, were introduced in early transformational generative grammar, as developed by Noam Chomsky and others. A phrase marker representing the deep structure of a sentence is generated by applying phrase structure rules. Then, this application may undergo further transformations. Phrase markers may be presented in the form of trees (as in the above section on constituency-based parse trees), but are often given instead in the form of "bracketed expressions", which occupy less space in memory. For example, a bracketed expression corresponding to the constituency-based tree given above may be something like: formula_1 As with trees, the precise construction of such expressions and the amount of detail shown can depend on the theory being applied and on the points that the author wishes to illustrate. See also. Notes. References.
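One minimal way to make the bracketed-expression idea concrete in code is to store a tree as nested tuples that mirror the bracketed phrase marker above; the helper names below are ad hoc, and the last lines simply evaluate the Catalan-number count mentioned in the Nomenclature section.

```python
from math import comb

# Constituency parse of "John hit the ball" as nested tuples, mirroring the
# bracketed expression [S [N John] [VP [V hit] [NP [D the] [N ball]]]].
tree = ("S",
        ("N", "John"),
        ("VP",
         ("V", "hit"),
         ("NP", ("D", "the"), ("N", "ball"))))

def leaves(node):
    """Return the terminal words (leaf nodes) of a tree in left-to-right order."""
    label, *children = node
    if len(children) == 1 and isinstance(children[0], str):
        return [children[0]]                      # pre-terminal node: one word below it
    return [word for child in children for word in leaves(child)]

print(leaves(tree))                               # ['John', 'hit', 'the', 'ball']

def catalan(n: int) -> int:
    """n-th Catalan number, C_n = C(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

print([catalan(n) for n in range(1, 7)])          # [1, 2, 5, 14, 42, 132]
```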
[ { "math_id": 0, "text": "C_n" }, { "math_id": 1, "text": "[_S\\ [_\\mathit{N}\\ \\text{John}]\\ [_\\mathit{VP}\\ [_V\\ \\text{hit}]\\ [_\\mathit{NP}\\ [_\\mathit{D}\\ \\text{the}]\\ [_N\\ \\text{ball}]]]]" } ]
https://en.wikipedia.org/wiki?curid=118404
11840458
John ellipsoid
In mathematics, the John ellipsoid or Löwner–John ellipsoid "E"("K") associated to a convex body "K" in "n"-dimensional Euclidean space R"n" can refer to the "n"-dimensional ellipsoid of maximal volume contained within "K" or the ellipsoid of minimal volume that contains "K". Often, the minimal volume ellipsoid is called the Löwner ellipsoid, and the maximal volume ellipsoid is called the John ellipsoid (although John worked with the minimal volume ellipsoid in his original paper). One can also refer to the minimal volume circumscribed ellipsoid as the outer Löwner–John ellipsoid, and the maximal volume inscribed ellipsoid as the inner Löwner–John ellipsoid. The German-American mathematician Fritz John proved in 1948 that each convex body in R"n" is circumscribed by a unique ellipsoid of minimal volume, and that the dilation of this ellipsoid by factor 1/"n" is contained inside the convex body. That is, the outer Löwner–John ellipsoid is larger than the inner one by a factor of at most "n". For a balanced body, this factor can be reduced to formula_0. Properties. The inner Löwner–John ellipsoid "E"("K") of a convex body "K" ⊂ R"n" is a closed unit ball "B" in R"n" if and only if "B" ⊆ "K" and there exists an integer "m" ≥ "n" and, for "i" = 1, ..., "m", real numbers "c""i" &gt; 0 and unit vectors "u""i" ∈ S"n"−1 ∩ ∂"K" such that formula_1 and, for all "x" ∈ R"n" formula_2 Computation. In general, computing the John ellipsoid of a given convex body is a hard problem. However, for some specific cases, explicit formulas are known. Some cases are particularly important for the ellipsoid method. Let E(A,a) be an ellipsoid in R"n", defined by a positive definite matrix A and center a. Let c be a nonzero vector in R"n". Let E'(A,a,c) be the half-ellipsoid derived by cutting E(A,a) at its center using the hyperplane defined by c. Then the outer Löwner–John ellipsoid of E'(A,a,c), that is, the minimal volume ellipsoid containing the half-ellipsoid, is the ellipsoid E(A',a') defined by: formula_3 formula_4 where b is the vector defined by: formula_5 Similarly, there are formulas for other sections of ellipsoids, not necessarily through the center. Applications. The computation of Löwner–John ellipsoids (and, more generally, the computation of minimal-volume polynomial level sets enclosing a set) has found many applications in control and robotics. In particular, computing Löwner–John ellipsoids has applications in obstacle collision detection for robotic systems, where the distance between a robot and its surrounding environment is estimated using a best ellipsoid fit. Löwner–John ellipsoids have also been used to approximate the optimal policy in portfolio optimization problems with transaction costs. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
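The half-ellipsoid formulas above translate directly into code. The following Python sketch is illustrative only: the function name is made up, NumPy is assumed, and the convention that the kept half is the one on the side where c^T x <= c^T a is an assumption of the example rather than something fixed by the text.

```python
import numpy as np

def lowner_john_of_half_ellipsoid(A, a, c):
    """Ellipsoid E(A', a') of minimal volume containing the half-ellipsoid
    obtained by cutting E(A, a) = {x : (x - a)^T A^{-1} (x - a) <= 1}
    through its center with the hyperplane defined by c, using the
    formulas quoted in the text."""
    n = len(a)
    b = A @ c / np.sqrt(c @ A @ c)                     # b = A c / sqrt(c^T A c)
    a_new = a - b / (n + 1)                            # a' = a - b / (n + 1)
    A_new = (n**2 / (n**2 - 1.0)) * (A - (2.0 / (n + 1)) * np.outer(b, b))
    return A_new, a_new

# Example: cut the unit ball in R^3 through the origin with the hyperplane x_1 = 0.
A = np.eye(3)
a = np.zeros(3)
c = np.array([1.0, 0.0, 0.0])
A_new, a_new = lowner_john_of_half_ellipsoid(A, a, c)
print(a_new)                      # [-0.25  0.    0.  ]  (the center moves into the kept half)
print(np.sqrt(np.diag(A_new)))    # semi-axes ~[0.75, 1.06, 1.06]: shorter along c, slightly longer orthogonally
```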
[ { "math_id": 0, "text": "\\sqrt{n}" }, { "math_id": 1, "text": "\\sum_{i = 1}^{m} c_{i} u_{i} = 0" }, { "math_id": 2, "text": "x = \\sum_{i = 1}^{m} c_{i} (x \\cdot u_{i}) u_{i}." }, { "math_id": 3, "text": "a' = a-\\frac{1}{n+1} b" }, { "math_id": 4, "text": "A' = \\frac{n^2}{n^2-1}\\left(A - \\frac{2}{n+1} b b^T \\right)" }, { "math_id": 5, "text": "b = \\frac{1}{\\sqrt{c^T A c}} A c" } ]
https://en.wikipedia.org/wiki?curid=11840458
11840868
Entropy power inequality
In information theory, the entropy power inequality (EPI) is a result that relates to the so-called "entropy power" of random variables. It shows that the entropy power of suitably well-behaved random variables is superadditive: the entropy power of a sum of independent random variables is at least the sum of their individual entropy powers. The entropy power inequality was proved in 1948 by Claude Shannon in his seminal paper "A Mathematical Theory of Communication". Shannon also provided a sufficient condition for equality to hold; Stam (1959) showed that the condition is in fact necessary. Statement of the inequality. For a random vector "X" : Ω → R"n" with probability density function "f" : R"n" → R, the differential entropy of "X", denoted "h"("X"), is defined to be formula_0 and the entropy power of "X", denoted "N"("X"), is defined to be formula_1 In particular, "N"("X") = |"K"|1/"n", the "n"th root of the determinant of the covariance matrix, when "X" is normally distributed with covariance matrix "K". Let "X" and "Y" be independent random variables with probability density functions in the "L""p" space "L""p"(R"n") for some "p" &gt; 1. Then formula_2 Moreover, equality holds if and only if "X" and "Y" are multivariate normal random variables with proportional covariance matrices. Alternative form of the inequality. The entropy power inequality can be rewritten in an equivalent form that does not explicitly depend on the definition of entropy power (see the Costa and Cover reference below). Let "X" and "Y" be independent random variables, as above. Then, let "X"' and "Y"' be independent Gaussian random variables such that formula_3 and formula_4 Then, formula_5
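For Gaussian vectors the quantities above can be checked numerically. The following Python sketch is illustrative only (the function names are made up and NumPy is assumed): it computes the differential entropy and entropy power of multivariate normal distributions and confirms that the inequality holds with equality exactly when the covariance matrices are proportional.

```python
import numpy as np

def gaussian_entropy(K):
    """Differential entropy (in nats) of a normal vector with covariance K:
    h = (1/2) log((2*pi*e)^n det(K))."""
    n = K.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(K))

def entropy_power(h, n):
    """Entropy power N = exp(2h/n) / (2*pi*e); for a Gaussian this equals det(K)^(1/n)."""
    return np.exp(2 * h / n) / (2 * np.pi * np.e)

n = 2
KX = np.array([[2.0, 0.5], [0.5, 1.0]])

# Proportional covariances: the EPI holds with equality.
KY = 3.0 * KX
NX = entropy_power(gaussian_entropy(KX), n)
NY = entropy_power(gaussian_entropy(KY), n)
NXY = entropy_power(gaussian_entropy(KX + KY), n)   # X + Y is Gaussian with covariance KX + KY
print(np.isclose(NXY, NX + NY))                     # True

# Non-proportional covariances: the inequality is strict.
KY2 = np.array([[1.0, -0.4], [-0.4, 2.0]])
NY2 = entropy_power(gaussian_entropy(KY2), n)
NXY2 = entropy_power(gaussian_entropy(KX + KY2), n)
print(NXY2 > NX + NY2)                              # True
```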
[ { "math_id": 0, "text": "h(X) = - \\int_{\\mathbb{R}^{n}} f(x) \\log f(x) \\, d x" }, { "math_id": 1, "text": " N(X) = \\frac{1}{2\\pi e} e^{ \\frac{2}{n} h(X) }." }, { "math_id": 2, "text": "N(X + Y) \\geq N(X) + N(Y). \\," }, { "math_id": 3, "text": "h(X') = h(X)" }, { "math_id": 4, "text": "h(Y') = h(Y)" }, { "math_id": 5, "text": "h(X + Y) \\geq h(X' + Y')" } ]
https://en.wikipedia.org/wiki?curid=11840868
1184256
Patience sorting
Sorting algorithm In computer science, patience sorting is a sorting algorithm inspired by, and named after, the card game patience. A variant of the algorithm efficiently computes the length of a longest increasing subsequence in a given array. Overview. The algorithm's name derives from a simplified variant of the patience card game. The game begins with a shuffled deck of cards. The cards are dealt one by one into a sequence of piles on the table, according to the following rules. * Initially, there are no piles. The first card dealt forms a new pile consisting of the single card. * Each subsequent card is placed on the leftmost existing pile whose top card has a value greater than or equal to the new card's value, or to the right of all of the existing piles, thus forming a new pile. * When there are no more cards remaining to deal, the game ends. This card game is turned into a two-phase sorting algorithm, as follows. Given an array of n elements from some totally ordered domain, consider this array as a collection of cards and simulate the patience sorting game. When the game is over, recover the sorted sequence by repeatedly picking off the minimum visible card; in other words, perform a k-way merge of the p piles, each of which is internally sorted. Analysis. The first phase of patience sort, the card game simulation, can be implemented to take "O"("n" log "n") comparisons in the worst case for an n-element input array: there will be at most n piles, and by construction, the top cards of the piles form an increasing sequence from left to right, so the desired pile can be found by binary search. The second phase, the merging of piles, can be done in formula_0 time as well using a priority queue. When the input data contain natural "runs", i.e., non-decreasing subarrays, performance can be strictly better. In fact, when the input array is already sorted, all values form a single pile and both phases run in "O"("n") time. The average-case complexity is still "O"("n" log "n"): any uniformly random sequence of values will produce an expected number of formula_1 piles, which take formula_2 time to produce and merge. An evaluation of the practical performance of patience sort is given by Chandramouli and Goldstein, who show that a naive version is about ten to twenty times slower than a state-of-the-art quicksort on their benchmark problem. They attribute this to the relatively small amount of research put into patience sort, and develop several optimizations that bring its performance to within a factor of two of that of quicksort. If the values of the cards are in the range 1, ..., "n", there is an efficient implementation with formula_0 worst-case running time for putting the cards into piles, relying on a Van Emde Boas tree. Relations to other problems. Patience sorting is closely related to a card game called Floyd's game. This game is very similar to the game sketched earlier: the object of the game is to finish with as few piles as possible. The difference with the patience sorting algorithm is that there is no requirement to place a new card on the "leftmost" pile where it is allowed. Patience sorting constitutes a greedy strategy for playing this game. Aldous and Diaconis suggest defining 9 or fewer piles as a winning outcome for "n" = 52, which happens with approximately 5% probability. Algorithm for finding a longest increasing subsequence. First, execute the sorting algorithm as described above. The number of piles is the length of a longest increasing subsequence. Whenever a card is placed on top of a pile, put a back-pointer to the top card in the previous pile (which, by construction, has a lower value than the new card). In the end, follow the back-pointers from the top card in the last pile to recover a decreasing subsequence of the longest length; its reverse is an answer to the longest increasing subsequence problem. S. Bespamyatnikh and M.
Segal give a description of an efficient implementation of the algorithm, incurring no additional asymptotic cost over the sorting one (as the back-pointers storage, creation and traversal require linear time and space). They further show how to report "all" the longest increasing subsequences from the same resulting data structures. History. Patience sorting was named by C. L. Mallows, who attributed its invention to A.S.C. Ross in the early 1960s. According to Aldous and Diaconis, patience sorting was first recognized as an algorithm to compute the longest increasing subsequence length by Hammersley. A.S.C. Ross and independently Robert W. Floyd recognized it as a sorting algorithm. Initial analysis was done by Mallows. Floyd's game was developed by Floyd in correspondence with Donald Knuth. Use. The patience sorting algorithm can be applied to process control. Within a series of measurements, the existence of a long increasing subsequence can be used as a trend marker. A 2002 article in SQL Server magazine includes a SQL implementation, in this context, of the patience sorting algorithm for the length of the longest increasing subsequence. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
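The two phases and the longest-increasing-subsequence variant are short to implement. The following Python sketch is illustrative and not taken from the cited works; the function names are made up. It places each value on the leftmost admissible pile via binary search over the pile tops and performs the final k-way merge with a heap.

```python
import bisect
import heapq

def patience_sort(values):
    """Patience sort: deal the values into piles, then k-way merge the piles."""
    piles = []   # each pile is a list; its last element is the visible top card
    tops = []    # tops[i] == piles[i][-1]; by construction this list stays sorted
    for v in values:
        # Leftmost pile whose top card is >= v, found by binary search over the tops.
        i = bisect.bisect_left(tops, v)
        if i == len(piles):
            piles.append([v])
            tops.append(v)
        else:
            piles[i].append(v)
            tops[i] = v
    # Second phase: repeatedly pick off the minimum visible card (k-way merge via a heap).
    heap = [(pile[-1], idx) for idx, pile in enumerate(piles)]
    heapq.heapify(heap)
    out = []
    while heap:
        _, idx = heapq.heappop(heap)
        out.append(piles[idx].pop())
        if piles[idx]:
            heapq.heappush(heap, (piles[idx][-1], idx))
    return out

def lis_length(values):
    """Length of a longest (strictly) increasing subsequence = number of piles."""
    tops = []
    for v in values:
        i = bisect.bisect_left(tops, v)
        if i == len(tops):
            tops.append(v)
        else:
            tops[i] = v
    return len(tops)

data = [3, 10, 2, 1, 20, 5, 6]
print(patience_sort(data))   # [1, 2, 3, 5, 6, 10, 20]
print(lis_length(data))      # 3  (for example 2, 5, 6)
```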
[ { "math_id": 0, "text": "O(n\\log n)" }, { "math_id": 1, "text": "O(\\sqrt{n})" }, { "math_id": 2, "text": "O(n\\log\\sqrt{n}) = O(n\\log n)" } ]
https://en.wikipedia.org/wiki?curid=1184256
11843393
Clock angle problem
Clock angle problems are a type of mathematical problem which involves finding the angle between the hands of an analog clock. Math problem. Clock angle problems relate two different measurements: angles and time. The angle is typically measured in degrees clockwise from the 12 o'clock mark. The time is usually based on a 12-hour clock. A method to solve such problems is to consider the rate of change of the angle in degrees per minute. The hour hand of a normal 12-hour analogue clock turns 360° in 12 hours (720 minutes) or 0.5° per minute. The minute hand rotates through 360° in 60 minutes or 6° per minute. Equation for the angle of the hour hand. formula_0 where: formula_1 is the total number of minutes since 12 o'clock, "H" is the hour and "M" is the number of minutes past the hour. Equation for the angle of the minute hand. formula_2 where "M" is the number of minutes past the hour. Example. The time is 5:24. The angle in degrees of the hour hand is: formula_3 The angle in degrees of the minute hand is: formula_4 Equation for the angle between the hands. The angle between the hands can be found using the following formula: formula_5 where "H" is the hour and "M" is the number of minutes past the hour. If the angle is greater than 180 degrees then subtract it from 360 degrees. Example 1. The time is 2:20. formula_6 Example 2. The time is 10:16. formula_7 When are the hour and minute hands of a clock superimposed? The hour and minute hands are superimposed only when their angles are equal. formula_8 "H" is an integer in the range 0–11. This gives times of: 0:00, 1:05.45, 2:10.90, 3:16.36, 4:21.81, 5:27.27, 6:32.72, 7:38.18, 8:43.63, 9:49.09, 10:54.54, and 12:00. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
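The formulas above translate directly into a few lines of code. The following Python sketch is illustrative only (the function name is made up); it computes the angle between the hands, reproduces the examples, and lists the times at which the hands coincide.

```python
def clock_angle(hour, minute):
    """Angle in degrees (at most 180) between the hands at hour:minute on a 12-hour clock.
    The hour hand moves 0.5 degrees per minute, the minute hand 6 degrees per minute."""
    hour_angle = 0.5 * (60 * (hour % 12) + minute)
    minute_angle = 6 * minute
    diff = abs(hour_angle - minute_angle)
    return min(diff, 360 - diff)   # if the raw difference exceeds 180, take 360 minus it

print(clock_angle(5, 24))    # 18.0  (hour hand at 162 degrees, minute hand at 144 degrees)
print(clock_angle(2, 20))    # 50.0  (Example 1)
print(clock_angle(10, 16))   # 148.0 (Example 2)

# Times at which the hands are superimposed: M = (60/11) * H for H = 0, ..., 11.
for h in range(12):
    print(f"{h}:{60 * h / 11:05.2f}")   # 0:00.00, 1:05.45, ..., 11:60.00 (i.e. 12:00)
```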
[ { "math_id": 0, "text": "\\theta_{\\text{hr}} = 0.5^{\\circ} \\times M_{\\Sigma} = 0.5^{\\circ} \\times (60 \\times H + M)" }, { "math_id": 1, "text": " M_{\\Sigma} = (60 \\times H + M)" }, { "math_id": 2, "text": "\\theta_{\\text{min.}} = 6^{\\circ} \\times M" }, { "math_id": 3, "text": "\\theta_{\\text{hr}} = 0.5^{\\circ} \\times (60 \\times 5 + 24) = 162^{\\circ}" }, { "math_id": 4, "text": "\\theta_{\\text{min.}} = 6^{\\circ} \\times 24 = 144^{\\circ}" }, { "math_id": 5, "text": "\\begin{align}\n\\Delta\\theta\n &= \\vert \\theta_{\\text{hr}} - \\theta_{\\text{min.}} \\vert \\\\\n &= \\vert 0.5^{\\circ}\\times(60\\times H+M) -6^{\\circ}\\times M \\vert \\\\\n &= \\vert 0.5^{\\circ}\\times(60\\times H+M) -0.5^{\\circ}\\times 12 \\times M \\vert \\\\\n &= \\vert 0.5^{\\circ}\\times(60\\times H -11 \\times M) \\vert \\\\\n\\end{align}" }, { "math_id": 6, "text": "\\begin{align}\n\\Delta\\theta \n &= \\vert 0.5^{\\circ} \\times (60 \\times 2 - 11 \\times 20) \\vert \\\\\n &= \\vert 0.5^{\\circ} \\times (120 - 220) \\vert \\\\\n &= 50^{\\circ}\n\\end{align}" }, { "math_id": 7, "text": "\\begin{align}\n\\Delta\\theta \n &= \\vert 0.5^{\\circ} \\times (60 \\times 10 - 11 \\times 16) \\vert \\\\\n &= \\vert 0.5^{\\circ} \\times (600 - 176) \\vert \\\\\n &= 212^{\\circ} \\ \\ ( > 180^{\\circ})\\\\\n &= 360^{\\circ} - 212^{\\circ} \\\\\n &= 148^{\\circ}\n\\end{align}" }, { "math_id": 8, "text": "\\begin{align}\n\\theta_{\\text{min}} &= \\theta_{\\text{hr}}\\\\\n\\Rightarrow 6^{\\circ} \\times M &= 0.5^{\\circ} \\times (60 \\times H + M) \\\\\n\\Rightarrow 12 \\times M &= 60 \\times H + M \\\\\n\\Rightarrow 11 \\times M &= 60 \\times H\\\\\n\\Rightarrow M &= \\frac{60}{11} \\times H\\\\\n\\Rightarrow M &= 5.\\overline{45} \\times H\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=11843393