id stringlengths 2–8 | title stringlengths 1–130 | text stringlengths 0–252k | formulas listlengths 1–823 | url stringlengths 38–44
---|---|---|---|---|
14603715
|
Darboux's formula
|
Summation formula
In mathematical analysis, Darboux's formula is a formula introduced by Gaston Darboux (1876) for summing infinite series by using integrals or evaluating integrals using infinite series. It is a generalization to the complex plane of the Euler–Maclaurin summation formula, which is used for similar purposes and derived in a similar manner (by repeated integration by parts of a particular choice of integrand). Darboux's formula can also be used to derive the Taylor series from calculus.
Statement.
If "φ"("t") is a polynomial of degree "n" and "f" an analytic function then
formula_0
The formula can be proved by repeated integration by parts.
Special cases.
Taking "φ" to be a Bernoulli polynomial in Darboux's formula gives the Euler–Maclaurin summation formula. Taking "φ" to be ("t" − 1)"n" gives the formula for a Taylor series.
|
[
{
"math_id": 0,
"text": " \n\\begin{align}\n & \\sum_{m=0}^n (-1)^m (z - a)^m \\left[\\varphi^{(n - m)}(1)f^{(m)}(z) - \\varphi^{(n - m)}(0)f^{(m)}(a)\\right] \\\\\n = {} & (-1)^n(z - a)^{n + 1}\\int_0^1\\varphi(t)f^{(n+1)}\\left[a + t(z - a)\\right]\\, dt.\n\\end{align}\n"
}
] |
https://en.wikipedia.org/wiki?curid=14603715
|
1460629
|
Effective temperature
|
Temperature of a black body that would emit the same total amount of electromagnetic radiation
The effective temperature of a body such as a star or planet is the temperature of a black body that would emit the same total amount of electromagnetic radiation. Effective temperature is often used as an estimate of a body's surface temperature when the body's emissivity curve (as a function of wavelength) is not known.
When the star's or planet's net emissivity in the relevant wavelength band is less than unity (less than that of a black body), the actual temperature of the body will be higher than the effective temperature. The net emissivity may be low due to surface or atmospheric properties, such as the greenhouse effect.
Star.
The effective temperature of a star is the temperature of a black body with the same luminosity per "surface area" as the star and is defined according to the Stefan–Boltzmann law. Notice that the total (bolometric) luminosity of a star is then "L" = 4π"R"2"σT"eff4, where "R" is the stellar radius. The definition of the stellar radius is obviously not straightforward. More rigorously the effective temperature corresponds to the temperature at the radius that is defined by a certain value of the Rosseland optical depth (usually 1) within the stellar atmosphere. The effective temperature and the bolometric luminosity are the two fundamental physical parameters needed to place a star on the Hertzsprung–Russell diagram. Both effective temperature and bolometric luminosity depend on the chemical composition of a star.
The effective temperature of the Sun is around .
The nominal value defined by the International Astronomical Union for use as a unit of measure of temperature is .
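As a quick numerical illustration (my own, not from the article; the solar luminosity and radius below are standard reference values that I am supplying), inverting the Stefan–Boltzmann relation above for the Sun gives a value close to the commonly quoted solar effective temperature:

```python
# Minimal sketch: solve L = 4*pi*R^2 * sigma * T_eff^4 for T_eff with reference solar values.
import math

SIGMA = 5.670374419e-8   # Stefan–Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # nominal solar luminosity, W
R_SUN = 6.957e8          # nominal solar radius, m

t_eff = (L_SUN / (4.0 * math.pi * R_SUN**2 * SIGMA)) ** 0.25
print(round(t_eff), "K")   # roughly 5770 K
```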
Stars have a decreasing temperature gradient, going from their central core up to the atmosphere. The "core temperature" of the Sun—the temperature at the centre of the Sun where nuclear reactions take place—is estimated to be 15,000,000 K.
The color index of a star indicates its temperature from the very cool (by stellar standards) red M stars that radiate heavily in the infrared to the very hot blue O stars that radiate largely in the ultraviolet. Various color-effective temperature relations exist in the literature. These relations also have smaller dependencies on other stellar parameters, such as the stellar metallicity and surface gravity. The effective temperature of a star indicates the amount of heat that the star radiates per unit of surface area. From the hottest surfaces to the coolest is the sequence of stellar classifications known as O, B, A, F, G, K, M.
A red star could be a tiny red dwarf, a star of feeble energy production and a small surface, or a bloated giant or even supergiant star such as Antares or Betelgeuse, either of which generates far greater energy but passes it through a surface so large that the star radiates little per unit of surface area. A star near the middle of the spectrum, such as the modest Sun or the giant Capella, radiates more energy per unit of surface area than the feeble red dwarf stars or the bloated supergiants, but much less than a white or blue star such as Vega or Rigel.
Planet.
Blackbody temperature.
The effective (blackbody) temperature of a planet can be calculated by equating the power received by the planet to the power emitted by a blackbody of temperature T.
Take the case of a planet at a distance D from the star, of luminosity L.
Assuming the star radiates isotropically and that the planet is a long way from the star, the power absorbed by the planet is given by treating the planet as a disc of radius r, which intercepts some of the power which is spread over the surface of a sphere of radius D (the distance of the planet from the star). The calculation assumes the planet reflects some of the incoming radiation by incorporating a parameter called the albedo (a). An albedo of 1 means that all the radiation is reflected, an albedo of 0 means all of it is absorbed. The expression for absorbed power is then:
formula_0
The next assumption we can make is that the entire planet is at the same temperature T, and that the planet radiates as a blackbody. The Stefan–Boltzmann law gives an expression for the power radiated by the planet:
formula_1
Equating these two expressions and rearranging gives an expression for the effective temperature:
formula_2
where formula_3 is the Stefan–Boltzmann constant. Note that the planet's radius has cancelled out of the final expression.
The effective temperature for Jupiter from this calculation is 88 K, and that for 51 Pegasi b (Bellerophon) is 1,258 K. A better estimate of effective temperature for some planets, such as Jupiter, would need to include the internal heating as a power input. The actual temperature depends on albedo and atmosphere effects. The actual temperature from spectroscopic analysis for HD 209458 b (Osiris) is 1,130 K, but the effective temperature is 1,359 K. The internal heating within Jupiter raises the effective temperature to about 152 K.
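For concreteness, the expression formula_2 can be evaluated directly. The short script below is my own sketch; the luminosity, distance and albedo numbers are illustrative Earth-like values rather than figures taken from the article.

```python
# Effective (blackbody) temperature: T = (L*(1-a) / (16*pi*sigma*D^2))^(1/4).
import math

SIGMA = 5.670374419e-8   # Stefan–Boltzmann constant, W m^-2 K^-4

def effective_temperature(L, D, albedo):
    """Blackbody effective temperature of a planet at distance D from a star of luminosity L."""
    return (L * (1.0 - albedo) / (16.0 * math.pi * SIGMA * D**2)) ** 0.25

# Illustrative Earth-like inputs: solar luminosity, 1 au, Bond albedo 0.306
print(round(effective_temperature(L=3.828e26, D=1.496e11, albedo=0.306)), "K")   # ~255 K
```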
Surface temperature of a planet.
The surface temperature of a planet can be estimated by modifying the effective-temperature calculation to account for emissivity and temperature variation.
The area of the planet that absorbs the power from the star is "A"abs, which is some fraction of the total surface area "A"total = 4π"r"2, where r is the radius of the planet. This area intercepts some of the power which is spread over the surface of a sphere of radius D. We also allow the planet to reflect some of the incoming radiation by incorporating a parameter a called the albedo. An albedo of 1 means that all the radiation is reflected, an albedo of 0 means all of it is absorbed. The expression for absorbed power is then:
formula_4
The next assumption we can make is that although the entire planet is not at the same temperature, it will radiate as if it had a temperature T over an area "A"rad which is again some fraction of the total area of the planet. There is also a factor ε, which is the emissivity and represents atmospheric effects. ε ranges from 1 to 0 with 1 meaning the planet is a perfect blackbody and emits all the incident power. The Stefan–Boltzmann law gives an expression for the power radiated by the planet:
formula_5
Equating these two expressions and rearranging gives an expression for the surface temperature:
formula_6
Note the ratio of the two areas. Common assumptions for this ratio are 1/4 for a rapidly rotating body and 1/2 for a slowly rotating body, or a tidally locked body on the sunlit side. This ratio would be 1 for the subsolar point, the point on the planet directly below the sun, and gives the maximum temperature of the planet, a factor of √2 (1.414) greater than the effective temperature of a rapidly rotating planet.
Also note that this equation does not take into account any effects from internal heating of the planet, which can arise directly from sources such as radioactive decay and can also be produced by friction resulting from tidal forces.
Earth effective temperature.
Earth has an albedo of about 0.306 and a solar irradiance (L / 4πD2) of 1361 W m−2 at its mean orbital radius of 1.5×10⁸ km. The calculation with ε=1 and the remaining physical constants then gives an Earth effective temperature of about 255 K.
The actual temperature of Earth's surface is an average of about 288 K as of 2020. The difference between the two values is called the "greenhouse effect". The greenhouse effect results from materials in the atmosphere (greenhouse gases and clouds) absorbing thermal radiation and reducing emissions to space, i.e., reducing the planet's emissivity of thermal radiation from its surface into space. Substituting the surface temperature into the equation and solving for ε gives an effective emissivity of about 0.61 for a 288 K Earth. Furthermore, these values calculate an outgoing thermal radiation flux of 238 W m−2 (with ε=0.61 as viewed from space) versus a surface thermal radiation flux of 390 W m−2 (with ε≈1 at the surface). Both fluxes are near the confidence ranges reported by the IPCC.
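The effective emissivity quoted above can be recovered by rearranging the surface-temperature expression. The sketch below is my own; it assumes the rapidly rotating case "A"abs/"A"rad = 1/4, under which the surface-temperature relation reduces to ε = (Teff/Tsurf)⁴, and uses the rounded figures from this section.

```python
# Back out the effective emissivity: with A_abs/A_rad = 1/4 the surface-temperature
# relation gives T_surf^4 * eps = T_eff^4, hence eps = (T_eff / T_surf)^4.
t_eff = 255.0    # effective temperature with eps = 1, K
t_surf = 288.0   # mean surface temperature, K

eps = (t_eff / t_surf) ** 4
print(round(eps, 2))   # ~0.61, matching the value quoted above
```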
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P_{\\rm abs} = \\frac {L r^2 (1-a)}{4 D^2}"
},
{
"math_id": 1,
"text": "P_{\\rm rad} = 4 \\pi r^2 \\sigma T^4"
},
{
"math_id": 2,
"text": "T = \\sqrt[4]{\\frac{L (1-a)}{16 \\pi \\sigma D^2}}"
},
{
"math_id": 3,
"text": "\\sigma"
},
{
"math_id": 4,
"text": "P_{\\rm abs} = \\frac {L A_{\\rm abs} (1-a)}{4 \\pi D^2}"
},
{
"math_id": 5,
"text": "P_{\\rm rad} = A_{\\rm rad} \\varepsilon \\sigma T^4"
},
{
"math_id": 6,
"text": "T = \\sqrt[4]{\\frac{A_{\\rm abs}}{A_{\\rm rad}} \\frac{L (1-a)}{4 \\pi \\sigma \\varepsilon D^2} }"
}
] |
https://en.wikipedia.org/wiki?curid=1460629
|
14606730
|
Cass criterion
|
The Cass criterion, also known as the Malinvaud–Cass criterion, is a central result in the theory of overlapping generations models in economics. It is named after David Cass.
A major feature which sets overlapping generations models in economics apart from the standard model with a finite number of infinitely lived individuals is that the First Welfare Theorem might not hold; that is, competitive equilibria may not be Pareto optimal.
If formula_0 represents the vector of Arrow–Debreu commodity prices prevailing in period formula_1 and if
formula_2
then a competitive equilibrium allocation is inefficient.
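As a toy numerical illustration (entirely my own; the price path is hypothetical), one can watch the partial sums of formula_2 settle for a geometrically growing sequence of price norms, the kind of path the criterion flags as inefficient:

```python
# Hypothetical price norms ||p_t|| = 1.05^t: the sum of 1/||p_t|| is a convergent
# geometric series, so the Cass criterion would flag the allocation as inefficient.
import numpy as np

T = 1000
p_norm = 1.05 ** np.arange(T)
partial_sums = np.cumsum(1.0 / p_norm)
print(partial_sums[-1], partial_sums[-1] - partial_sums[T // 2])  # ~21, and the tail is negligible
```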
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "p_t"
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": "\\sum_{t=0}^{\\infty} \\frac{1}{\\| p_t \\| } < \\infty ,"
}
] |
https://en.wikipedia.org/wiki?curid=14606730
|
146089
|
Danica McKellar
|
American actress, mathematics writer, and education advocate (born 1975)
Danica Mae McKellar (born January 3, 1975) is an American actress, mathematics writer, and education advocate. She is best known for playing Winnie Cooper in the television series "The Wonder Years."
McKellar has appeared in various television films for the Hallmark Channel. She has also done voice acting, including Frieda Goren in "Static Shock," Miss Martian in "Young Justice," and Killer Frost in "DC Super Hero Girls." In 2015, McKellar joined the main cast of the Netflix original series "Project Mc2".
In addition to her acting work, McKellar has written several non-fiction books dealing with mathematics, among them "Math Doesn't Suck", "Kiss My Math", "Hot X: Algebra Exposed", and "Girls Get Curves: Geometry Takes Shape", which encourage middle-school and high-school girls to have confidence and succeed in mathematics, as well as "Goodnight, Numbers" and "Do Not Open This Math Book".
Early life and education.
McKellar was born in La Jolla, California. She moved with her family to Los Angeles when she was eight. Her mother Mahaila McKellar (née Tello) was a homemaker; her father Christopher McKellar is a real estate developer; her younger sister Crystal (b. 1976) is a lawyer. She is of paternal Scottish, French, German, Spanish, and Dutch descent and her mother is of Portuguese origin via the Azores and Madeira islands.
McKellar studied at the University of California, Los Angeles, where she was a member of the Alpha Delta Pi sorority and earned a Bachelor of Science degree "summa cum laude" in Mathematics in 1998. As an undergraduate, she coauthored a scientific paper with Professor Lincoln Chayes and fellow student Brandy Winn titled "Percolation and Gibbs states multiplicity for ferromagnetic Ashkin–Teller models on formula_0." Their results are termed the "Chayes–McKellar–Winn theorem". Later, when Chayes was asked to comment about the mathematical abilities of his student coauthors, he was quoted in "The New York Times", "I thought that the two were really, really first-rate." For her collaborative work on research papers, McKellar has an Erdős number of four, and her Erdős–Bacon number is six.
Acting career.
"The Wonder Years" and early acting career.
At age seven, McKellar enrolled in weekend acting classes for children at the Lee Strasberg Institute in Los Angeles. In her teens, she landed a prominent role in "The Wonder Years", an American television comedy-drama that ran for six seasons on ABC, from 1988 to 1993. She played Gwendolyn "Winnie" Cooper, the main love interest of Kevin Arnold (played by Fred Savage) on the show. Her first kiss was with Fred Savage in an episode of "The Wonder Years". She later said, "My first kiss was a pretty nerve-wracking experience! But we never kissed off screen, and pretty quickly our feelings turned into brother/sister, and stayed that way."
Later acting career.
McKellar has said that she found it "difficult" to move from being a child actress to an adult actress. Since leaving "The Wonder Years", McKellar has had several guest roles in television series (including one with former co-star Fred Savage on "Working"), and has written and directed two short films. She appeared in two Lifetime films in the "Moment of Truth" series, playing Kristin Guthrie in 1994's "" and Annie Mills Carman in 1996's "." She briefly returned to regular television with a recurring role in the 2002–03 season of "The West Wing," portraying Elsie Snuffin, the half-sister and assistant of Deputy White House Communications Director Will Bailey.
McKellar was featured in the video for Debbie Gibson's eighth single from the "Electric Youth" album, "No More Rhyme", which was released in 1989. She plays the cello in the beginning of the video.
McKellar appeared in lingerie in the July 2005 edition of "Stuff" magazine after readers voted her the 1990s star they would most like to see in lingerie. McKellar explained that she agreed to the shoot in part to obtain "grittier roles".
In 2006, McKellar starred in a Lifetime film and web-based series titled "Inspector Mom" about a mother who solves mysteries.
On the August 1, 2007, edition of the "Don and Mike Show", a WJFK-FM radio program out of Washington, D.C., McKellar announced that the producers of "How I Met Your Mother" were planning to bring her back for a recurring role (she guest-starred on the show in late 2005 in "The Pineapple Incident" and again in early 2007 in "Third Wheel"). She also made an appearance on the show "The Big Bang Theory", in the episode "The Psychic Vortex".
In 2008, she starred in "Heatstroke", a Sci-Fi Channel film about searching for alien life on Earth and in 2009 she was one of the stars commenting on the occurrences of the new millennium in VH1's "I Love the New Millennium" and was the math correspondent for "Brink", a program by the Science Channel about technology. In 2013, she played Ellen Plainview in Lifetime's reimagining of the 1956 Alfred Hitchcock film "The Wrong Man."
McKellar has also worked as a voice actress, having provided the voice of Jubilee in the video game "X-Men Legends" (2004), and Invisible Woman in ' (2006) and ' (2009). She provided the voice of Miss Martian in the TV series "Young Justice".
In 2012, she starred in the Lifetime film "Love at the Christmas Table" with Dustin Milligan.
In January 2013, she starred in the Syfy film "Tasmanian Devils" with Apolo Ohno.
On August 20, 2013, Canadian singer Avril Lavigne released the music video for her single "Rock N Roll" from her self-titled fifth album, which features McKellar as "Winnie Cooper".
On March 4, 2014, she was announced to be competing on season 18 of "Dancing with the Stars". She paired with Valentin Chmerkovskiy. McKellar and Chmerkovskiy were eliminated on Week 8, finishing in 6th place.
She had a guest appearance in the "Impractical Jokers" season four episode six titled "The Blunder Years". She made another guest appearance in the season seven episode ten titled "Speech Impediment".
In 2015, she starred in the Netflix original series "Project Mc2" as The Quail.
She has starred in several Hallmark Channel films, including "Crown for Christmas", "My Christmas Dream", "Campfire Kiss", "Love and Sunshine", "Christmas at Dollywood", and "You, Me & the Christmas Trees" as well as the Hallmark Movies & Mysteries series "The Matchmaker Mysteries".
McKellar is a judge on Fox’s "Domino Masters" which premiered on March 9, 2022.
Books.
McKellar has authored several mathematics-related books primarily targeting adolescent readers interested in succeeding at the study of mathematics:
Her first book, "Math Doesn't Suck: How to Survive Middle School Math without Losing Your Mind or Breaking a Nail", was a "New York Times" bestseller, and was favorably reviewed by Tara C. Smith, the founder of Iowa Citizens for Science and a professor of epidemiology at the University of Iowa. The book also received a review from Anthony Jones, writing for the "School Librarian" journal, who described the book as "a trouble-shooting guide to help girls overcome their biggest maths challenges," noting what he described as "real-world examples of great mathematics in action." In an interview with Smith, McKellar said that she wrote the book "to show girls that math is accessible and relevant, and even a little glamorous" and to counteract "damaging social messages telling young girls that math and science aren't for them".
McKellar's second book, "Kiss My Math: Showing Pre-Algebra Who's Boss", was released on August 5, 2008. The book's target audience is girls in the 7th through 9th grades. Her third book, "Hot X: Algebra Exposed!" covers algebra topics, while the previous two titles were intended as "algebra-readiness books." "Hot X" was published on August 3, 2010. Her fourth book, "Girls Get Curves – Geometry Takes Shape", focuses on the subject of geometry, and attempts to make the subject more accessible.
Three of McKellar's books were listed in "The New York Times" children's bestseller list. She received Mathical Honors for "Goodnight, Numbers".
Awards and honors.
McKellar was named "Person of the Week" on "World News with Charles Gibson" for the week ending August 10, 2007. The news segment highlighted her book "Math Doesn't Suck" and her efforts to help girls develop an interest in mathematics, especially during the middle school years. In January 2014, she received the Joint Policy Board for Mathematics (JPBM) Communications Award. The citation credited her books, blog, and public appearances for encouraging "countless middle and high school students, especially girls, to be more interested in mathematics."
Personal life.
McKellar married composer Mike Verta on March 22, 2009, in La Jolla, California; the couple had dated since 2001. They had their first child, a son, in 2010. McKellar filed for divorce from Verta in June 2012.
On July 16, 2014, she became engaged to her boyfriend Scott Sveslosky, a partner in the Los Angeles legal firm Sheppard, Mullin, Richter & Hampton. On November 15, 2014, they married in Kauai, Hawaii.
McKellar is a Christian and regularly attends church services. She cites Candace Cameron Bure as being a major influence in her life after Bure gave her a copy of the Bible.
Cultural references.
McKellar's reputation for Hallmark mystery films was spoofed in the 2019 film "Knives Out", complete with the parody title "Deadly By Surprise".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb{Z}^2"
}
] |
https://en.wikipedia.org/wiki?curid=146089
|
14609233
|
Holographic algorithm
|
Algorithm using holographic reduction
In computer science, a holographic algorithm is an algorithm that uses a holographic reduction. A holographic reduction is a constant-time reduction that maps solution fragments many-to-many such that the sum of the solution fragments remains unchanged. These concepts were introduced by Leslie Valiant, who called them "holographic" because "their effect can be viewed as that of producing interference patterns among the solution fragments". The algorithms are unrelated to laser holography, except metaphorically. Their power comes from the mutual cancellation of many contributions to a sum, analogous to the interference patterns in a hologram.
Holographic algorithms have been used to find polynomial-time solutions to problems without such previously known solutions for special cases of satisfiability, vertex cover, and other graph problems. They have received notable coverage due to speculation that they are relevant to the P versus NP problem and their impact on computational complexity theory. Although some of the general problems are #P-hard problems, the special cases solved are not themselves #P-hard, and thus do not prove FP = #P.
Holographic algorithms have some similarities with quantum computation, but are completely classical.
Holant problems.
Holographic algorithms exist in the context of Holant problems, which generalize counting constraint satisfaction problems (#CSP). A #CSP instance is a hypergraph "G"=("V","E") called the constraint graph. Each hyperedge represents a variable and each vertex formula_0 is assigned a constraint formula_1 A vertex is connected to a hyperedge if the constraint on the vertex involves the variable on the hyperedge. The counting problem is to compute
formula_2
which is a sum, over all variable assignments, of the product of every constraint, where the inputs to the constraint formula_3 are the variables on the incident hyperedges of formula_0.
A Holant problem is like a #CSP except the input must be a graph, not a hypergraph. Restricting the class of input graphs in this way is indeed a generalization. Given a #CSP instance, replace each hyperedge "e" of size "s" with a vertex "v" of degree "s" with edges incident to the vertices contained in "e". The constraint on "v" is the equality function of arity "s". This identifies all of the variables on the edges incident to "v", which is the same effect as the single variable on the hyperedge "e".
In the context of Holant problems, the expression in (1) is called the Holant after a related exponential sum introduced by Valiant.
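As an illustration of the definition (my own sketch, not from the article), the Holant of a small concrete instance can be evaluated by brute force. The script below builds the 2-stretch of the 3-regular graph K4, places OR2 constraints on the vertices coming from edges and EQUAL3 constraints on the original vertices, and checks that the resulting Holant equals the number of vertex covers of K4, anticipating the vertex-cover example worked out later in this entry.

```python
# Brute-force Holant evaluation on the 2-stretch of K4 (illustrative sketch).
from itertools import product

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]   # K4 is 3-regular

def holant_or2_equal3():
    # One Boolean variable per (edge, endpoint) incidence of K4, i.e. per edge of the 2-stretch.
    total = 0
    for x in product((0, 1), repeat=2 * len(edges)):
        term = 1
        # OR2 at each edge-vertex: at least one of its two incident variables is 1
        for i in range(len(edges)):
            term *= 1 if (x[2 * i] or x[2 * i + 1]) else 0
        # EQUAL3 at each original vertex: all three incident variables agree
        for v in range(4):
            vals = [x[2 * i + (0 if e[0] == v else 1)]
                    for i, e in enumerate(edges) if v in e]
            term *= 1 if len(set(vals)) == 1 else 0
        total += term
    return total

def vertex_cover_count():
    return sum(all(s[u] or s[v] for u, v in edges) for s in product((0, 1), repeat=4))

print(holant_or2_equal3(), vertex_cover_count())   # both print 5
```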
Holographic reduction.
A standard technique in complexity theory is a many-one reduction, where an instance of one problem is reduced to an instance of another (hopefully simpler) problem.
However, holographic reductions between two computational problems preserve the sum of solutions without necessarily preserving correspondences between solutions. For instance, the total number of solutions in both sets can be preserved, even though individual problems do not have matching solutions. The sum can also be weighted, rather than simply counting the number of solutions, using linear basis vectors.
General example.
It is convenient to consider holographic reductions on bipartite graphs. A general graph can always be transformed into a bipartite graph while preserving the Holant value. This is done by replacing each edge in the graph by a path of length 2, which is also known as the 2-stretch of the graph. To keep the same Holant value, each new vertex is assigned the binary equality constraint.
Consider a bipartite graph "G"=("U","V","E") where the constraint assigned to every vertex formula_4 is formula_5 and the constraint assigned to every vertex formula_6 is formula_3. Denote this counting problem by formula_7 If the vertices in "U" are viewed as one large vertex of degree |"E"|, then the constraint of this vertex is the tensor product of formula_5 with itself |"U"| times, which is denoted by formula_8 Likewise, if the vertices in "V" are viewed as one large vertex of degree |"E"|, then the constraint of this vertex is formula_9 Let the constraint formula_5 be represented by its weighted truth table as a row vector and the constraint formula_3 be represented by its weighted truth table as a column vector. Then the Holant of this constraint graph is simply formula_10
Now for any complex 2-by-2 invertible matrix "T" (the columns of which are the linear basis vectors mentioned above), there is a holographic reduction between formula_11 and formula_12 To see this, insert the identity matrix formula_13 in between formula_14 to get
formula_14
formula_15
formula_16
Thus, formula_11 and formula_17 have exactly the same Holant value for every constraint graph. They essentially define the same counting problem.
Specific examples.
Vertex covers and independent sets.
Let "G" be a graph. There is a 1-to-1 correspondence between the vertex covers of "G" and the independent sets of "G". For any set "S" of vertices of "G", "S" is a vertex cover in "G" if and only if the complement of "S" is an independent set in "G". Thus, the number of vertex covers in "G" is exactly the same as the number of independent sets in "G".
The equivalence of these two counting problems can also be proved using a holographic reduction. For simplicity, let "G" be a 3-regular graph. The 2-stretch of "G" gives a bipartite graph "H"=("U","V","E"), where "U" corresponds to the edges in "G" and "V" corresponds to the vertices in "G". The Holant problem that naturally corresponds to counting the number of vertex covers in "G" is formula_18 The truth table of OR2 as a row vector is (0,1,1,1). The truth table of EQUAL3 as a column vector is formula_19. Then under a holographic transformation by formula_20
formula_21
formula_22
formula_23
formula_24
formula_25
formula_26
which is formula_27 the Holant problem that naturally corresponds to counting the number of independent sets in "G".
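The matrix manipulation above can be checked directly with a few lines of linear algebra. The snippet below is my own verification (not part of the article): it confirms that composing OR2 with the swap matrix on both of its incident edges yields the truth table of NAND2, while EQUAL3 is left unchanged.

```python
# Check that the holographic transformation by the swap matrix maps OR2 to NAND2
# and fixes EQUAL3.
import numpy as np

X = np.array([[0, 1], [1, 0]])      # the basis change; note X is its own inverse
OR2 = np.array([0, 1, 1, 1])        # row vector, inputs ordered 00, 01, 10, 11
e0 = np.array([1, 0])
e1 = np.array([0, 1])

# OR2 composed with X tensor X on its two incident edges
print(OR2 @ np.kron(X, X))          # [1 1 1 0], the truth table of NAND2

# EQUAL3 = e0^{x3} + e1^{x3}; applying X to each tensor factor just swaps e0 and e1,
# so the sum is unchanged.
eq3 = np.kron(np.kron(e0, e0), e0) + np.kron(np.kron(e1, e1), e1)
eq3_t = np.kron(np.kron(X @ e0, X @ e0), X @ e0) + np.kron(np.kron(X @ e1, X @ e1), X @ e1)
print(np.array_equal(eq3, eq3_t))   # True
```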
History.
As with any type of reduction, a holographic reduction does not, by itself, yield a polynomial time algorithm. In order to get a polynomial time algorithm, the problem being reduced to must also have a polynomial time algorithm. Valiant's original application of holographic algorithms used a holographic reduction to a problem where every constraint is realizable by matchgates, which he had just proved is tractable by a further reduction to counting the number of perfect matchings in a planar graph. The latter problem is tractable by the FKT algorithm, which dates to the 1960s.
Soon after, Valiant found holographic algorithms with reductions to matchgates for #7Pl-Rtw-Mon-3CNF and #7Pl-3/2Bip-VC. These problems may appear somewhat contrived, especially with respect to the modulus. Both problems were already known to be #P-hard when ignoring the modulus and Valiant supplied proofs of #P-hardness modulo 2, which also used holographic reductions. Valiant found these two problems by a computer search that looked for problems with holographic reductions to matchgates. He called their algorithms "accidental algorithms", saying "when applying the term accidental to an algorithm we intend to point out that the algorithm arises from satisfying an apparently onerous set of constraints." The "onerous" set of constraints in question are polynomial equations that, if satisfied, imply the existence of a holographic reduction to matchgate realizable constraints.
After several years of developing (what is known as) matchgate signature theory, Jin-Yi Cai and Pinyan Lu were able to explain the existence of Valiant's two accidental algorithms. These two problems are just special cases of two much larger families of problems: #2k-1Pl-Rtw-Mon-kCNF and #2k-1Pl-k/2Bip-VC for any positive integer "k". The modulus 7 is just the third Mersenne number and Cai and Lu showed that these types of problems with parameter "k" can be solved in polynomial time exactly when the modulus is the "k"th Mersenne number by using holographic reductions to matchgates and the Chinese remainder theorem.
Around the same time, Jin-Yi Cai, Pinyan Lu and Mingji Xia gave the first holographic algorithm that did not reduce to a problem that is tractable by matchgates. Instead, they reduced to a problem that is tractable by Fibonacci gates, which are symmetric constraints whose truth tables satisfy a recurrence relation similar to one that defines the Fibonacci numbers. They also used holographic reductions to prove that certain counting problems are #P-hard. Since then, holographic reductions have been used extensively as ingredients in both polynomial time algorithms and proofs of #P-hardness.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "v"
},
{
"math_id": 1,
"text": "f_v."
},
{
"math_id": 2,
"text": "\\sum_{\\sigma : E \\to \\{0,1\\}} \\prod_{v \\in V} f_v(\\sigma|_{E(v)}),~~~~~~~~~~(1)"
},
{
"math_id": 3,
"text": "f_v"
},
{
"math_id": 4,
"text": "u \\in U"
},
{
"math_id": 5,
"text": "f_u"
},
{
"math_id": 6,
"text": "v \\in V"
},
{
"math_id": 7,
"text": "\\text{Holant}(G, f_u, f_v)."
},
{
"math_id": 8,
"text": "f_u^{\\otimes |U|}."
},
{
"math_id": 9,
"text": "f_v^{\\otimes |V|}."
},
{
"math_id": 10,
"text": "f_u^{\\otimes |U|} f_v^{\\otimes |V|}."
},
{
"math_id": 11,
"text": "\\text{Holant}(G, f_u, f_v)"
},
{
"math_id": 12,
"text": "\\text{Holant}(G, f_u T^{\\otimes (\\deg u)}, (T^{-1})^{\\otimes (\\deg v)} f_v)."
},
{
"math_id": 13,
"text": "T^{\\otimes |E|} (T^{-1})^{\\otimes |E|}"
},
{
"math_id": 14,
"text": "f_u^{\\otimes |U|} f_v^{\\otimes |V|}"
},
{
"math_id": 15,
"text": "= f_u^{\\otimes |U|} T^{\\otimes |E|} (T^{-1})^{\\otimes |E|} f_v^{\\otimes |V|}"
},
{
"math_id": 16,
"text": "= \\left(f_u T^{\\otimes (\\deg u)}\\right)^{\\otimes |U|} \\left(f_v (T^{-1})^{\\otimes (\\deg v)}\\right)^{\\otimes |V|}."
},
{
"math_id": 17,
"text": "\\text{Holant}(G, f_u T^{\\otimes (\\deg u)}, (T^{-1})^{\\otimes (\\deg v)} f_v)"
},
{
"math_id": 18,
"text": "\\text{Holant}(H, \\text{OR}_2, \\text{EQUAL}_3)."
},
{
"math_id": 19,
"text": "(1,0,0,0,0,0,0,1)^T = \\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix}^{\\otimes 3} + \\begin{bmatrix} 0 \\\\ 1 \\end{bmatrix}^{\\otimes 3}"
},
{
"math_id": 20,
"text": "\\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix},"
},
{
"math_id": 21,
"text": "\\text{OR}_2^{\\otimes |U|} \\text{EQUAL}_3^{\\otimes |V|}"
},
{
"math_id": 22,
"text": "= (0,1,1,1)^{\\otimes |U|} \\left(\\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix}^{\\otimes 3} + \\begin{bmatrix} 0 \\\\ 1 \\end{bmatrix}^{\\otimes 3}\\right)^{\\otimes |V|}"
},
{
"math_id": 23,
"text": "= (0,1,1,1)^{\\otimes |U|} \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix}^{\\otimes |E|} \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix}^{\\otimes |E|} \\left(\\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix}^{\\otimes 3} + \\begin{bmatrix} 0 \\\\ 1 \\end{bmatrix}^{\\otimes 3}\\right)^{\\otimes |V|}"
},
{
"math_id": 24,
"text": "= \\left((0,1,1,1) \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix}^{\\otimes 2}\\right)^{\\otimes |U|} \\left(\\left(\\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix} \\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix}\\right)^{\\otimes 3} + \\left(\\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix} \\begin{bmatrix} 0 \\\\ 1 \\end{bmatrix}\\right)^{\\otimes 3}\\right)^{\\otimes |V|}"
},
{
"math_id": 25,
"text": "= (1,1,1,0)^{\\otimes |U|} \\left(\\begin{bmatrix} 0 \\\\ 1 \\end{bmatrix}^{\\otimes 3} + \\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix}^{\\otimes 3}\\right)^{\\otimes |V|}"
},
{
"math_id": 26,
"text": "= \\text{NAND}_2^{\\otimes |U|} \\text{EQUAL}_3^{\\otimes |V|},"
},
{
"math_id": 27,
"text": "\\text{Holant}(H, \\text{NAND}_2, \\text{EQUAL}_3),"
}
] |
https://en.wikipedia.org/wiki?curid=14609233
|
14609763
|
Normal shock tables
|
Calculations in aerodynamics
In aerodynamics, the normal shock tables are a series of tabulated data listing the various properties before and after the occurrence of a normal shock wave. With a given upstream Mach number, the post-shock Mach number can be calculated along with the pressure, density, temperature, and stagnation pressure ratios. Such tables are useful since the equations used to calculate the properties after a normal shock are cumbersome.
The tables below have been calculated using a heat capacity ratio, formula_0, equal to 1.4. The upstream Mach number, formula_1, begins at 1 and ends at 5. Although the tables could be extended over any range of Mach numbers, stopping at Mach 5 is typical since assuming formula_0 to be 1.4 over the entire Mach number range leads to errors over 10% beyond Mach 5.
Normal shock table equations.
Given an upstream Mach number, formula_1, and the ratio of specific heats, formula_0, the post normal shock Mach number, formula_2, can be calculated using the equation below.
formula_3
The next equation shows the relationship between the post normal shock pressure, formula_4, and the upstream ambient pressure, formula_5.
formula_6
The relationship between the post normal shock density, formula_7, and the upstream ambient density, formula_8 is shown next in the tables.
formula_9
Next, the equation below shows the relationship between the post normal shock temperature, formula_10, and the upstream ambient temperature, formula_11.
formula_12
Finally, the ratio of stagnation pressures is shown below where formula_13 is the upstream stagnation pressure and formula_14 occurs after the normal shock. The ratio of stagnation temperatures remains constant across a normal shock since the process is adiabatic.
formula_15
Note that the isentropic relations remain valid before and after the shock and connect the static and total quantities on each side. This means that formula_16, since that relation follows from Bernoulli's equation and assumes incompressible flow, whereas flow at Mach numbers greater than unity is always compressible.
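The relations above are straightforward to evaluate programmatically, which is essentially how such tables are generated. The sketch below is my own code, implementing the equations in this section with γ = 1.4, and it reproduces the familiar tabulated values for an upstream Mach number of 2.

```python
# Normal-shock property ratios for a calorically perfect gas, from the equations above.
import math

def normal_shock(M1, gamma=1.4):
    """Return (M2, p2/p1, rho2/rho1, T2/T1, p02/p01) across a normal shock."""
    g = gamma
    M2 = math.sqrt((M1**2 * (g - 1) + 2) / (2 * g * M1**2 - (g - 1)))
    p_ratio = 2 * g * M1**2 / (g + 1) - (g - 1) / (g + 1)
    rho_ratio = (g + 1) * M1**2 / ((g - 1) * M1**2 + 2)
    T_ratio = ((1 + (g - 1) / 2 * M1**2) * (2 * g / (g - 1) * M1**2 - 1)) / (
        M1**2 * (2 * g / (g - 1) + (g - 1) / 2))
    p0_ratio = ((g + 1) / 2 * M1**2 / (1 + (g - 1) / 2 * M1**2)) ** (g / (g - 1)) * (
        1 / (2 * g / (g + 1) * M1**2 - (g - 1) / (g + 1))) ** (1 / (g - 1))
    return M2, p_ratio, rho_ratio, T_ratio, p0_ratio

# Typical tabulated row: M1 = 2 gives roughly (0.577, 4.500, 2.667, 1.688, 0.721)
print([round(x, 3) for x in normal_shock(2.0)])
```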
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\gamma"
},
{
"math_id": 1,
"text": "M_1"
},
{
"math_id": 2,
"text": "M_2"
},
{
"math_id": 3,
"text": " M_2 = \\sqrt{\\frac{M_1^2\\left(\\gamma - 1\\right)+2}{2\\gamma M_1^2 - \\left(\\gamma - 1\\right)}}"
},
{
"math_id": 4,
"text": "p_2"
},
{
"math_id": 5,
"text": "p_1"
},
{
"math_id": 6,
"text": " \\frac{p_2}{p_1} = \\frac{2\\gamma M_1^2}{\\gamma + 1} - \\frac{\\gamma - 1}{\\gamma + 1}"
},
{
"math_id": 7,
"text": "\\rho_2"
},
{
"math_id": 8,
"text": "\\rho_1"
},
{
"math_id": 9,
"text": " \\frac{\\rho_2}{\\rho_1} = \\frac{\\left(\\gamma + 1\\right)M_1^2}{\\left(\\gamma - 1\\right)M_1^2 + 2}"
},
{
"math_id": 10,
"text": "T_2"
},
{
"math_id": 11,
"text": "T_1"
},
{
"math_id": 12,
"text": " \\frac{T_2}{T_1} = \\frac{\\left(1 + \\frac{\\gamma - 1}{2}M_1^2\\right)\\left(\\frac{2\\gamma}{\\gamma - 1}M_1^2 - 1\\right)}{M_1^2\\left(\\frac{2\\gamma}{\\gamma - 1} + \\frac{\\gamma - 1}{2}\\right)}"
},
{
"math_id": 13,
"text": "p_{01}"
},
{
"math_id": 14,
"text": "p_{02}"
},
{
"math_id": 15,
"text": " \\frac{p_{02}}{p_{01}} = \\left(\\frac{\\frac{\\gamma + 1}{2}M_1^2}{1 + \\frac{\\gamma - 1}{2}M_1^2}\\right)^\\frac{\\gamma}{\\gamma - 1}\\left(\\frac{1}{\\frac{2\\gamma}{\\gamma + 1}M_1^2 - \\frac{\\gamma - 1}{\\gamma + 1}}\\right)^\\frac{1}{\\gamma - 1}"
},
{
"math_id": 16,
"text": "p_{total}\\neq p_{static} + p_{dynamic}"
}
] |
https://en.wikipedia.org/wiki?curid=14609763
|
146103
|
Nonlinear system
|
System where changes of output are not proportional to changes of input
In mathematics and science, a nonlinear system (or a non-linear system) is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists since most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems.
Typically, the behavior of a nonlinear system is described in mathematics by a nonlinear system of equations, which is a set of simultaneous equations in which the unknowns (or the unknown functions in the case of differential equations) appear as variables of a polynomial of degree higher than one or in the argument of a function which is not a polynomial of degree one.
In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear, regardless of whether known linear functions appear in the equations. In particular, a differential equation is "linear" if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it.
As nonlinear dynamical equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). This works well up to some accuracy and some range for the input values, but some interesting phenomena such as solitons, chaos, and singularities are hidden by linearization. It follows that some aspects of the dynamic behavior of a nonlinear system can appear to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemble random behavior, it is in fact not random. For example, some aspects of the weather are seen to be chaotic, where simple changes in one part of the system produce complex effects throughout. This nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology.
Some authors use the term nonlinear science for the study of nonlinear systems. This term is disputed by others:
<templatestyles src="Template:Blockquote/styles.css" />Using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals.
Definition.
In mathematics, a linear map (or "linear function") formula_0 is one which satisfies both of the following properties:
Additivity or superposition principle: formula_1
Homogeneity: formula_2
Additivity implies homogeneity for any rational "α", and, for continuous functions, for any real "α". For a complex "α", homogeneity does not follow from additivity. For example, an antilinear map is additive but not homogeneous. The conditions of additivity and homogeneity are often combined in the superposition principle
formula_3
An equation written as
formula_4
is called linear if formula_0 is a linear map (as defined above) and nonlinear otherwise. The equation is called "homogeneous" if formula_5 and formula_0 is a homogeneous function.
The definition formula_4 is very general in that formula_6 can be any sensible mathematical object (number, vector, function, etc.), and the function formula_0 can literally be any mapping, including integration or differentiation with associated constraints (such as boundary values). If formula_0 contains differentiation with respect to formula_6, the result will be a differential equation.
Nonlinear systems equations.
A nonlinear system of equations consists of a set of equations in several variables such that at least one of them is not a linear equation.
For a single equation of the form formula_7 many methods have been designed; see Root-finding algorithm. In the case where f is a polynomial, one has a "polynomial equation" such as
formula_8 The general root-finding algorithms apply to polynomial roots, but they generally do not find all of the roots; when they fail to find a root, this does not imply that no root exists. Specific methods for polynomials allow finding all roots or the real roots; see real-root isolation.
Solving systems of polynomial equations, that is, finding the common zeros of a set of several polynomials in several variables, is a difficult problem for which elaborate algorithms have been designed, such as Gröbner basis algorithms.
For the general case of a system of equations formed by equating to zero several differentiable functions, the main method is Newton's method and its variants. Generally they may provide a solution, but do not provide any information on the number of solutions.
Nonlinear recurrence relations.
A nonlinear recurrence relation defines successive terms of a sequence as a nonlinear function of preceding terms. Examples of nonlinear recurrence relations are the logistic map and the relations that define the various Hofstadter sequences. Nonlinear discrete models that represent a wide class of nonlinear recurrence relationships include the NARMAX (Nonlinear Autoregressive Moving Average with eXogenous inputs) model and the related nonlinear system identification and analysis procedures. These approaches can be used to study a wide class of complex nonlinear behaviors in the time, frequency, and spatio-temporal domains.
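For instance, the logistic map mentioned above already shows sensitive dependence on initial conditions in its chaotic regime. The short script below is an illustration of my own (with arbitrary parameter values): it iterates two nearby starting points and prints how far apart they end up after a handful of steps.

```python
# The logistic map x_{n+1} = r * x_n * (1 - x_n) in its chaotic regime (r = 4).
def logistic_orbit(r, x0, n):
    """Return the first n iterates of the logistic map starting from x0."""
    orbit = [x0]
    for _ in range(n - 1):
        orbit.append(r * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

# Two nearby initial conditions diverge quickly: sensitive dependence on initial conditions.
a = logistic_orbit(4.0, 0.200000, 20)
b = logistic_orbit(4.0, 0.200001, 20)
print(abs(a[-1] - b[-1]))   # order-one separation despite a 1e-6 initial difference
```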
Nonlinear differential equations.
A system of differential equations is said to be nonlinear if it is not a system of linear equations. Problems involving nonlinear differential equations are extremely diverse, and methods of solution or analysis are problem dependent. Examples of nonlinear differential equations are the Navier–Stokes equations in fluid dynamics and the Lotka–Volterra equations in biology.
One of the greatest difficulties of nonlinear problems is that it is not generally possible to combine known solutions into new solutions. In linear problems, for example, a family of linearly independent solutions can be used to construct general solutions through the superposition principle. A good example of this is one-dimensional heat transport with Dirichlet boundary conditions, the solution of which can be written as a time-dependent linear combination of sinusoids of differing frequencies; this makes solutions very flexible. It is often possible to find several very specific solutions to nonlinear equations; however, the lack of a superposition principle prevents the construction of new solutions.
Ordinary differential equations.
First order ordinary differential equations are often exactly solvable by separation of variables, especially for autonomous equations. For example, the nonlinear equation
formula_9
has formula_10 as a general solution (and also the special solution formula_11 corresponding to the limit of the general solution when "C" tends to infinity). The equation is nonlinear because it may be written as
formula_12
and the left-hand side of the equation is not a linear function of formula_13 and its derivatives. Note that if the formula_14 term were replaced with formula_13, the problem would be linear (the exponential decay problem).
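The claimed general solution can be verified symbolically; the snippet below (my own check) substitutes formula_10 back into the equation with SymPy.

```python
# Verify that u(x) = 1/(x + C) satisfies du/dx = -u^2, i.e. du/dx + u^2 = 0.
import sympy as sp

x, C = sp.symbols('x C')
u = 1 / (x + C)
print(sp.simplify(sp.diff(u, x) + u**2))   # prints 0
```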
Second and higher order ordinary differential equations (more generally, systems of nonlinear equations) rarely yield closed-form solutions, though implicit solutions and solutions involving nonelementary integrals are encountered.
Common methods for the qualitative analysis of nonlinear ordinary differential equations include:
Partial differential equations.
The most common basic approach to studying nonlinear partial differential equations is to change the variables (or otherwise transform the problem) so that the resulting problem is simpler (possibly linear). Sometimes, the equation may be transformed into one or more ordinary differential equations, as seen in separation of variables, which is always useful whether or not the resulting ordinary differential equation(s) is solvable.
Another common (though less mathematical) tactic, often exploited in fluid and heat mechanics, is to use scale analysis to simplify a general, natural equation in a certain specific boundary value problem. For example, the (very) nonlinear Navier-Stokes equations can be simplified into one linear partial differential equation in the case of transient, laminar, one dimensional flow in a circular pipe; the scale analysis provides conditions under which the flow is laminar and one dimensional and also yields the simplified equation.
Other methods include examining the characteristics and using the methods outlined above for ordinary differential equations.
Pendula.
A classic, extensively studied nonlinear problem is the dynamics of a frictionless pendulum under the influence of gravity. Using Lagrangian mechanics, it may be shown that the motion of a pendulum can be described by the dimensionless nonlinear equation
formula_15
where gravity points "downwards" and formula_16 is the angle the pendulum forms with its rest position. One approach to "solving" this equation is to use formula_17 as an integrating factor, which would eventually yield
formula_18
which is an implicit solution involving an elliptic integral. This "solution" generally does not have many uses because most of the nature of the solution is hidden in the nonelementary integral (nonelementary unless formula_19).
Another way to approach the problem is to linearize any nonlinearity (the sine function term in this case) at the various points of interest through Taylor expansions. For example, the linearization at formula_20, called the small angle approximation, is
formula_21
since formula_22 for formula_23. This is a simple harmonic oscillator corresponding to oscillations of the pendulum near the bottom of its path. Another linearization would be at formula_24, corresponding to the pendulum being straight up:
formula_25
since formula_26 for formula_27. The solution to this problem involves hyperbolic sinusoids, and note that unlike the small angle approximation, this approximation is unstable, meaning that formula_28 will usually grow without limit, though bounded solutions are possible. This corresponds to the difficulty of balancing a pendulum upright; it is literally an unstable state.
One more interesting linearization is possible around formula_29, around which formula_30:
formula_31
This corresponds to a free fall problem. A very useful qualitative picture of the pendulum's dynamics may be obtained by piecing together such linearizations. Other techniques may be used to find (exact) phase portraits and approximate periods.
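To see how well the small angle approximation tracks the full equation, both can be integrated numerically. The sketch below is my own example using SciPy, with an arbitrary small initial angle; for larger amplitudes the two solutions drift apart much faster.

```python
# Compare the full pendulum equation with its small-angle linearization.
import numpy as np
from scipy.integrate import solve_ivp

def full_pendulum(t, y):
    theta, omega = y
    return [omega, -np.sin(theta)]

def linearized(t, y):
    theta, omega = y
    return [omega, -theta]

y0 = [0.2, 0.0]                        # small initial angle, released from rest
t_eval = np.linspace(0.0, 20.0, 200)
sol_full = solve_ivp(full_pendulum, (0.0, 20.0), y0, t_eval=t_eval)
sol_lin = solve_ivp(linearized, (0.0, 20.0), y0, t_eval=t_eval)

print(np.max(np.abs(sol_full.y[0] - sol_lin.y[0])))   # small for small angles
```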
Examples of nonlinear equations.
<templatestyles src="Div col/styles.css"/>
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "f(x)"
},
{
"math_id": 1,
"text": "\\textstyle f(x + y) = f(x) + f(y);"
},
{
"math_id": 2,
"text": "\\textstyle f(\\alpha x) = \\alpha f(x)."
},
{
"math_id": 3,
"text": "f(\\alpha x + \\beta y) = \\alpha f(x) + \\beta f(y)"
},
{
"math_id": 4,
"text": "f(x) = C"
},
{
"math_id": 5,
"text": "C = 0"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "f(x)=0,"
},
{
"math_id": 8,
"text": "x^2 + x - 1 = 0."
},
{
"math_id": 9,
"text": "\\frac{d u}{d x} = -u^2"
},
{
"math_id": 10,
"text": "u=\\frac{1}{x+C}"
},
{
"math_id": 11,
"text": "u = 0,"
},
{
"math_id": 12,
"text": "\\frac{du}{d x} + u^2=0"
},
{
"math_id": 13,
"text": "u"
},
{
"math_id": 14,
"text": "u^2"
},
{
"math_id": 15,
"text": "\\frac{d^2 \\theta}{d t^2} + \\sin(\\theta) = 0"
},
{
"math_id": 16,
"text": "\\theta"
},
{
"math_id": 17,
"text": "d\\theta/dt"
},
{
"math_id": 18,
"text": "\\int{\\frac{d \\theta}{\\sqrt{C_0 + 2 \\cos(\\theta)}}} = t + C_1"
},
{
"math_id": 19,
"text": "C_0 = 2"
},
{
"math_id": 20,
"text": "\\theta = 0"
},
{
"math_id": 21,
"text": "\\frac{d^2 \\theta}{d t^2} + \\theta = 0"
},
{
"math_id": 22,
"text": "\\sin(\\theta) \\approx \\theta"
},
{
"math_id": 23,
"text": "\\theta \\approx 0"
},
{
"math_id": 24,
"text": "\\theta = \\pi"
},
{
"math_id": 25,
"text": "\\frac{d^2 \\theta}{d t^2} + \\pi - \\theta = 0"
},
{
"math_id": 26,
"text": "\\sin(\\theta) \\approx \\pi - \\theta"
},
{
"math_id": 27,
"text": "\\theta \\approx \\pi"
},
{
"math_id": 28,
"text": "|\\theta|"
},
{
"math_id": 29,
"text": "\\theta = \\pi/2"
},
{
"math_id": 30,
"text": "\\sin(\\theta) \\approx 1"
},
{
"math_id": 31,
"text": "\\frac{d^2 \\theta}{d t^2} + 1 = 0."
}
] |
https://en.wikipedia.org/wiki?curid=146103
|
1461077
|
Sensor fusion
|
Combining of sensor data from disparate sources
Sensor fusion is the process of combining sensor data or data derived from disparate sources so that the resulting information has less uncertainty than would be possible if these sources were used individually. For instance, one could potentially obtain a more accurate location estimate of an indoor object by combining multiple data sources such as video cameras and WiFi localization signals. The term "uncertainty reduction" in this case can mean more accurate, more complete, or more dependable, or refer to the result of an emerging view, such as stereoscopic vision (calculation of depth information by combining two-dimensional images from two cameras at slightly different viewpoints).
The data sources for a fusion process are not specified to originate from identical sensors. One can distinguish "direct fusion", "indirect fusion" and fusion of the outputs of the former two. Direct fusion is the fusion of sensor data from a set of heterogeneous or homogeneous sensors, soft sensors, and history values of sensor data, while indirect fusion uses information sources like "a priori" knowledge about the environment and human input.
Sensor fusion is also known as "(multi-sensor) data fusion" and is a subset of "information fusion".
Algorithms.
Sensor fusion is a term that covers a number of methods and algorithms, including:
Example calculations.
Two example sensor fusion calculations are illustrated below.
Let formula_0 and formula_1 denote two sensor measurements with noise variances formula_2 and formula_3, respectively. One way of obtaining a combined measurement formula_4 is to apply inverse-variance weighting, which is also employed within the Fraser-Potter fixed-interval smoother, namely
formula_5 ,
where formula_6 is the variance of the combined estimate. It can be seen that the fused result is simply a linear combination of the two measurements weighted by their respective noise variances.
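A few lines of code make the first calculation concrete; the measurement values and variances below are arbitrary illustrative numbers of my own.

```python
# Inverse-variance weighting of two scalar measurements, as in the example above.
x1, var1 = 10.2, 0.5**2   # measurement 1 and its noise variance
x2, var2 = 9.8, 1.0**2    # measurement 2 and its noise variance

var3 = 1.0 / (1.0 / var1 + 1.0 / var2)   # variance of the fused estimate
x3 = var3 * (x1 / var1 + x2 / var2)      # fused measurement

print(x3, var3)   # the fused value lies closer to the lower-variance measurement
```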
Another (equivalent) method to fuse two measurements is to use the optimal Kalman filter. Suppose that the data is generated by a first-order system and let formula_7 denote the solution of the filter's Riccati equation. By applying Cramer's rule within the gain calculation it can be found that the filter gain is given by:
formula_8
By inspection, when the first measurement is noise free, the filter ignores the second measurement and vice versa. That is, the combined estimate is weighted by the quality of the measurements.
Centralized versus decentralized.
In sensor fusion, centralized versus decentralized refers to where the fusion of the data occurs. In centralized fusion, the clients simply forward all of the data to a central location, and some entity at the central location is responsible for correlating and fusing the data. In decentralized, the clients take full responsibility for fusing the data. "In this case, every sensor or platform can be viewed as an intelligent asset having some degree of autonomy in decision-making."
Multiple combinations of centralized and decentralized systems exist.
Another classification of sensor configuration refers to the coordination of information flow between sensors. These mechanisms provide a way to resolve conflicts or disagreements and to allow the development of dynamic sensing strategies.
Sensors are in redundant (or competitive) configuration if each node delivers independent measures of the same properties. This configuration can be used in error correction when comparing information from multiple nodes. Redundant strategies are often used with high level fusions in voting procedures.
Complementary configuration occurs when multiple information sources supply different information about the same features. This strategy is used for fusing information at the raw data level within decision-making algorithms. Complementary features are typically applied in motion recognition tasks with neural networks, hidden Markov models, support-vector machines, clustering methods and other techniques.
Cooperative sensor fusion uses the information extracted by multiple independent sensors to provide information that would not be available from single sensors. For example, sensors connected to body segments are used for the detection of the angle between them. Cooperative sensor strategy gives information impossible to obtain from single nodes. Cooperative information fusion can be used in motion recognition, gait analysis, motion analysis.
Levels.
There are several categories or levels of sensor fusion that are commonly used.
The sensor fusion level can also be defined based on the kind of information used to feed the fusion algorithm. More precisely, sensor fusion can be performed by fusing raw data coming from different sources, extrapolated features, or even decisions made by single nodes.
Applications.
One application of sensor fusion is GPS/INS, where Global Positioning System and inertial navigation system data is fused using various different methods, e.g. the extended Kalman filter. This is useful, for example, in determining the attitude of an aircraft using low-cost sensors. Another example is using the data fusion approach to determine the traffic state (low traffic, traffic jam, medium flow) using road side collected acoustic, image and sensor data. In the field of autonomous driving, sensor fusion is used to combine the redundant information from complementary sensors in order to obtain a more accurate and reliable representation of the environment.
Although technically not a dedicated sensor fusion method, modern convolutional neural network based methods can simultaneously process many channels of sensor data (such as hyperspectral imaging with hundreds of bands) and fuse relevant information to produce classification results.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "{\\textbf{x}}_1"
},
{
"math_id": 1,
"text": "{\\textbf{x}}_2"
},
{
"math_id": 2,
"text": "\\scriptstyle\\sigma_1^2"
},
{
"math_id": 3,
"text": "\\scriptstyle\\sigma_2^2"
},
{
"math_id": 4,
"text": "{\\textbf{x}}_3"
},
{
"math_id": 5,
"text": "{\\textbf{x}}_3 = \\sigma_3^{2} (\\sigma_1^{-2}{\\textbf{x}}_1 + \\sigma_2^{-2}{\\textbf{x}}_2)"
},
{
"math_id": 6,
"text": " \\scriptstyle\\sigma_3^{2} = (\\scriptstyle\\sigma_1^{-2} + \\scriptstyle\\sigma_2^{-2})^{-1}"
},
{
"math_id": 7,
"text": "{\\textbf{P}}_k"
},
{
"math_id": 8,
"text": " {\\textbf{L}}_k =\n\n\\begin{bmatrix}\n\\tfrac{\\scriptstyle\\sigma_2^{2}{\\textbf{P}}_k}{\\scriptstyle\\sigma_2^{2}{\\textbf{P}}_k + \\scriptstyle\\sigma_1^{2}{\\textbf{P}}_k + \\scriptstyle\\sigma_1^{2} \\scriptstyle\\sigma_2^{2}} & \\tfrac{\\scriptstyle\\sigma_1^{2}{\\textbf{P}}_k}{\\scriptstyle\\sigma_2^{2}{\\textbf{P}}_k + \\scriptstyle\\sigma_1^{2}{\\textbf{P}}_k + \\scriptstyle\\sigma_1^{2} \\scriptstyle\\sigma_2^{2}} \\end{bmatrix}."
}
] |
https://en.wikipedia.org/wiki?curid=1461077
|
1461105
|
Borel's lemma
|
Result used in the theory of asymptotic expansions and partial differential equations
In mathematics, Borel's lemma, named after Émile Borel, is an important result used in the theory of asymptotic expansions and partial differential equations.
Statement.
Suppose "U" is an open set in the Euclidean space R"n", and suppose that "f"0, "f"1, ... is a sequence of smooth functions on "U".
If "I" is any open interval in R containing 0 (possibly "I" = R), then there exists a smooth function "F"("t", "x") defined on "I"×"U", such that
formula_0
for "k" ≥ 0 and "x" in "U".
Proof.
Proofs of Borel's lemma can be found in many textbooks on analysis, from one of which the proof below is taken.
Note that it suffices to prove the result for a small interval "I" = (−"ε","ε"), since if "ψ"("t") is a smooth bump function with compact support in (−"ε","ε") equal identically to 1 near 0, then "ψ"("t") ⋅ "F"("t", "x") gives a solution on R × "U". Similarly using a smooth partition of unity on R"n" subordinate to a covering by open balls with centres at "δ"⋅Z"n", it can be assumed that all the "f""m" have compact support in some fixed closed ball "C". For each "m", let
formula_1
where "εm" is chosen sufficiently small that
formula_2
for |"α"| < "m". These estimates imply that each sum
formula_3
is uniformly convergent and hence that
formula_4
is a smooth function with
formula_5
By construction
formula_6
Note: Exactly the same construction can be applied, without the auxiliary space "U", to produce a smooth function on the interval "I" for which the derivatives at 0 form an arbitrary sequence.
References.
"This article incorporates material from Borel lemma on PlanetMath, which is licensed under the ."
|
[
{
"math_id": 0,
"text": "\\left.\\frac{\\partial^k F}{\\partial t^k}\\right|_{(0,x)} = f_k(x),"
},
{
"math_id": 1,
"text": "F_m(t,x)={t^m\\over m!} \\cdot \\psi\\left({t\\over \\varepsilon_m}\\right)\\cdot f_m(x),"
},
{
"math_id": 2,
"text": "\\|\\partial^\\alpha F_m \\|_\\infty \\le 2^{-m}"
},
{
"math_id": 3,
"text": "\\sum_{m\\ge 0} \\partial^\\alpha F_m"
},
{
"math_id": 4,
"text": "F=\\sum_{m\\ge 0} F_m"
},
{
"math_id": 5,
"text": "\\partial^\\alpha F=\\sum_{m\\ge 0} \\partial^\\alpha F_m."
},
{
"math_id": 6,
"text": "\\partial_t^m F(t,x)|_{t=0}=f_m(x)."
}
] |
https://en.wikipedia.org/wiki?curid=1461105
|
1461205
|
Argon–argon dating
|
Radiometric dating method
Argon–argon (or 40Ar/39Ar) dating is a radiometric dating method invented to supersede potassium–argon (K/Ar) dating in accuracy. The older method required splitting samples into two for separate potassium and argon measurements, while the newer method requires only one rock fragment or mineral grain and uses a single measurement of argon isotopes. 40Ar/39Ar dating relies on neutron irradiation from a nuclear reactor to convert a stable form of potassium (39K) into the radioactive 39Ar. As long as a standard of known age is co-irradiated with unknown samples, it is possible to use a single measurement of argon isotopes to calculate the 40K/40Ar* ratio, and thus to calculate the age of the unknown sample. 40Ar* refers to the radiogenic 40Ar, i.e. the 40Ar produced from radioactive decay of 40K. 40Ar* does not include atmospheric argon adsorbed to the surface or inherited through diffusion and its calculated value is derived from measuring the 36Ar (which is assumed to be of atmospheric origin) and assuming that 40Ar is found in a constant ratio to 36Ar in atmospheric gases.
Method.
The sample is generally crushed and single crystals of a mineral or fragments of rock are hand-selected for analysis. These are then irradiated to produce 39Ar from 39K via the (n-p) reaction 39K(n,p)39Ar. The sample is then degassed in a high-vacuum mass spectrometer via a laser or resistance furnace. Heating causes the crystal structure of the mineral (or minerals) to degrade, and, as the sample melts, trapped gases are released. The gas may include atmospheric gases, such as carbon dioxide, water, nitrogen, and radiogenic gases like argon and helium, generated from regular radioactive decay over geologic time. The abundance of 40Ar* increases with the age of the sample, though the rate of increase decays exponentially with the half-life of 40K, which is 1.248 billion years.
Age equation.
The age of a sample is given by the age equation:
formula_0
where λ is the radioactive decay constant of 40K (approximately 5.5 x 10−10 year−1, corresponding to a half-life of approximately 1.25 billion years), J is the J-factor (parameter associated with the irradiation process), and R is the 40Ar*/39Ar ratio. The J factor relates to the fluence of the neutron bombardment during the irradiation process; a denser flow of neutron particles will convert more atoms of 39K to 39Ar than a less dense one.
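As a numerical illustration, the following Python sketch evaluates the age equation for hypothetical values of J and R chosen only for demonstration (the decay constant matches the approximate value quoted above):

import math

DECAY_CONSTANT_K40 = 5.5e-10   # per year, approximate value quoted above

def ar_ar_age(J, R, lam=DECAY_CONSTANT_K40):
    # t = (1/lambda) * ln(J*R + 1)
    return math.log(J * R + 1.0) / lam

J = 0.01   # hypothetical irradiation parameter
R = 3.7    # hypothetical measured 40Ar*/39Ar ratio
print(ar_ar_age(J, R) / 1e6, "Ma")   # roughly 66 Ma for these illustrative values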
Relative dating only.
The 40Ar/39Ar method only measures relative dates. In order for an age to be calculated by the 40Ar/39Ar technique, the J parameter must be determined by irradiating the unknown sample along with a sample of known age as a standard. Because this (primary) standard ultimately cannot be determined by 40Ar/39Ar, it must first be determined by another dating method. The method most commonly used to date the primary standard is the conventional K/Ar technique. An alternative method of calibrating the standard used is astronomical tuning (also known as orbital tuning), which arrives at a slightly different age.
Applications.
The primary use for 40Ar/39Ar geochronology is dating metamorphic and igneous minerals. 40Ar/39Ar is unlikely to provide the age of intrusions of granite as the age typically reflects the time when a mineral cooled through its closure temperature. However, in a metamorphic rock that has not exceeded its closure temperature the age likely dates the crystallization of the mineral. Dating of movement on fault systems is also possible with the 40Ar/39Ar method. Different minerals have different closure temperatures; biotite is ~300°C, muscovite is about 400°C and hornblende has a closure temperature of ~550°C. A granite containing all three minerals will therefore record three different "ages" of emplacement as it cools down through these closure temperatures. Although a crystallization age is not recorded, the information is still useful in constructing the thermal history of the rock.
Dating minerals "may" provide age information on a rock, but assumptions must be made. Minerals usually only record the "last time" they cooled down below the closure temperature, and this may not represent all of the events which the rock has undergone, and may not match the age of intrusion. Thus, discretion and interpretation of age dating is essential. 40Ar/39Ar geochronology assumes that a rock retains all of its 40Ar after cooling past the "closing temperature" and that this was properly sampled during analysis.
This technique allows the errors involved in K-Ar dating to be checked. Argon–argon dating has the advantage of not requiring determinations of potassium. Modern methods of analysis allow individual regions of crystals to be investigated. This method is important as it allows crystals forming and cooling during different events to be identified.
Recalibration.
One problem with argon-argon dating has been a slight discrepancy with other methods of dating. Work by Kuiper et al. reports that a correction of 0.65% is needed. Thus the Cretaceous–Paleogene extinction (when the dinosaurs died out)—previously dated at 65.0 or 65.5 million years ago—is more accurately dated to 66.0-66.1 Ma.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "t=\\frac{1}{\\lambda} \\ln (J \\times R+1)"
}
] |
https://en.wikipedia.org/wiki?curid=1461205
|
1461209
|
Riemann–Lebesgue lemma
|
Theorem in harmonic analysis
In mathematics, the Riemann–Lebesgue lemma, named after Bernhard Riemann and Henri Lebesgue, states that the Fourier transform or Laplace transform of an "L"1 function vanishes at infinity. It is of importance in harmonic analysis and asymptotic analysis.
Statement.
Let formula_0 be an integrable function, i.e. formula_1 is a measurable function such that
formula_2
and let formula_3 be the Fourier transform of formula_4, i.e.
formula_5
Then formula_3 vanishes at infinity: formula_6 as formula_7.
Because the Fourier transform of an integrable function is continuous, the Fourier transform formula_3 is a continuous function vanishing at infinity. If formula_8 denotes the vector space of continuous functions vanishing at infinity, the Riemann–Lebesgue lemma may be formulated as follows: The Fourier transformation maps formula_9 to formula_8.
Proof.
We will focus on the one-dimensional case formula_10; the proof in higher dimensions is similar. First, suppose that formula_4 is continuous and compactly supported. For formula_11, the substitution formula_12 leads to
formula_13.
This gives a second formula for formula_14. Taking the mean of both formulas, we arrive at the following estimate:
formula_15.
Because formula_4 is continuous, formula_16 converges to formula_17 as formula_18 for all formula_19. Thus, formula_20 converges to 0 as formula_18 due to the dominated convergence theorem.
If formula_4 is an arbitrary integrable function, it may be approximated in the formula_21 norm by a compactly supported continuous function. For formula_22, pick a compactly supported continuous function formula_23 such that formula_24. Then
formula_25
Because this holds for any formula_22, it follows that formula_6 as formula_7.
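As a concrete illustration, the Fourier transform of the indicator function of [−1, 1] is 2 sin(ξ)/ξ, which tends to zero as |ξ| → ∞ even though it is not integrable. The following Python sketch (a crude Riemann-sum check, for illustration only) compares a numerical evaluation of the transform with this closed form at a few frequencies:

import math

def ft_indicator(xi, n=20000):
    # Riemann-sum approximation of the Fourier transform of the indicator of [-1, 1].
    a, b = -1.0, 1.0
    dx = (b - a) / n
    total = 0.0 + 0.0j
    for k in range(n):
        x = a + (k + 0.5) * dx
        total += complex(math.cos(x * xi), -math.sin(x * xi)) * dx   # e^{-i x xi}
    return total

for xi in (1.0, 10.0, 100.0, 1000.0):
    print(xi, abs(ft_indicator(xi)), abs(2.0 * math.sin(xi) / xi))   # both decay like 1/xi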
Other versions.
The Riemann–Lebesgue lemma holds in a variety of other situations.
If formula_26, then the Riemann–Lebesgue lemma also holds for the Laplace transform of formula_4, that is,
formula_27
as formula_28 within the half-plane formula_29.
A version holds for Fourier series as well: if formula_4 is an integrable function on a bounded interval, then the Fourier coefficients formula_30 of formula_4 tend to 0 as formula_31.
Applications.
The Riemann–Lebesgue lemma can be used to prove the validity of asymptotic approximations for integrals. Rigorous treatments of the method of steepest descent and the method of stationary phase, amongst others, are based on the Riemann–Lebesgue lemma.
|
[
{
"math_id": 0,
"text": "f\\in L^1(\\R^n)"
},
{
"math_id": 1,
"text": "f\\colon\\R^n \\rightarrow \\C"
},
{
"math_id": 2,
"text": "\\|f\\|_{L^1} = \\int_{\\R^n} |f(x)| \\mathrm{d}x < \\infty, "
},
{
"math_id": 3,
"text": "\\hat{f}"
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": "\\hat{f}\\colon\\R^n \\rightarrow \\C, \\ \\xi\\mapsto \\int_{\\R^n} f(x) \\mathrm{e}^{-\\mathrm{i}x\\cdot\\xi}\\mathrm{d}x."
},
{
"math_id": 6,
"text": "|\\hat{f}(\\xi)| \\to 0"
},
{
"math_id": 7,
"text": " |\\xi| \\to\\infty "
},
{
"math_id": 8,
"text": "C_0(\\R^n)"
},
{
"math_id": 9,
"text": "L^1(\\R^n)"
},
{
"math_id": 10,
"text": "n=1"
},
{
"math_id": 11,
"text": "\\xi \\neq 0"
},
{
"math_id": 12,
"text": "\\textstyle x\\to x+\\frac{\\pi}{\\xi}"
},
{
"math_id": 13,
"text": "\\hat{f}(\\xi) = \\int_{\\R} f(x) \\mathrm{e}^{-\\mathrm{i}x\\xi}\\mathrm{d}x = \\int_{\\R} f\\left(x+\\frac{\\pi}{\\xi}\\right) \\mathrm{e}^{-\\mathrm{i}x\\xi} \\mathrm{e}^{-\\mathrm{i}\\pi} \\mathrm{d}x = -\\int_{\\R} f\\left(x+\\frac{\\pi}{\\xi}\\right) \\mathrm{e}^{-\\mathrm{i}x\\xi} \\mathrm{d}x "
},
{
"math_id": 14,
"text": "\\hat{f}(\\xi)"
},
{
"math_id": 15,
"text": "|\\hat{f}(\\xi)|\\le \\frac{1}{2}\\int_{\\R} \\left|f(x)-f\\left(x+\\frac{\\pi}{\\xi}\\right)\\right|\\mathrm{d}x"
},
{
"math_id": 16,
"text": "\\left|f(x)-f\\left(x+\\tfrac{\\pi}{\\xi}\\right)\\right|"
},
{
"math_id": 17,
"text": "0"
},
{
"math_id": 18,
"text": "|\\xi| \\to \\infty"
},
{
"math_id": 19,
"text": "x \\in \\R"
},
{
"math_id": 20,
"text": "|\\hat{f}(\\xi)|"
},
{
"math_id": 21,
"text": "L^1"
},
{
"math_id": 22,
"text": "\\varepsilon > 0"
},
{
"math_id": 23,
"text": "g"
},
{
"math_id": 24,
"text": "\\|f-g\\|_{L^1} \\leq \\varepsilon"
},
{
"math_id": 25,
"text": " \\limsup_{\\xi\\rightarrow\\pm\\infty} |\\hat{f}(\\xi)| \\leq \\limsup_{\\xi\\to\\pm\\infty} \\left|\\int (f(x)-g(x))\\mathrm{e}^{-\\mathrm{i}x\\xi} \\, \\mathrm{d}x\\right| + \\limsup_{\\xi\\rightarrow\\pm\\infty} \\left|\\int g(x)\\mathrm{e}^{-\\mathrm{i}x\\xi} \\, \\mathrm{d}x\\right| \\leq \\varepsilon + 0 = \\varepsilon."
},
{
"math_id": 26,
"text": "f \\in L^1[0,\\infty)"
},
{
"math_id": 27,
"text": "\\int_0^\\infty f(t) \\mathrm{e}^{-tz} \\mathrm{d}t \\to 0"
},
{
"math_id": 28,
"text": "|z| \\to \\infty"
},
{
"math_id": 29,
"text": "\\mathrm{Re}(z) \\geq 0"
},
{
"math_id": 30,
"text": "\\hat{f}_k "
},
{
"math_id": 31,
"text": "k \\to \\pm \\infty "
}
] |
https://en.wikipedia.org/wiki?curid=1461209
|
1461257
|
Common gate
|
Electronic amplifier circuit type
In electronics, a common-gate amplifier is one of three basic single-stage field-effect transistor (FET) amplifier topologies, typically used as a current buffer or voltage amplifier. In this circuit, the source terminal of the transistor serves as the input, the drain is the output, and the gate is connected to some DC biasing voltage (i.e. an AC ground), or "common," hence its name.
The analogous bipolar junction transistor circuit is the common-base amplifier.
Applications.
This configuration is used less often than the common source or source follower. However, it can be combined with common source amplifiers to create cascode configurations. It is useful in, for example, CMOS RF receivers, especially when operating near the frequency limitations of the FETs; it is desirable because of the ease of impedance matching and its potentially lower noise. Gray and Meyer provide a general reference for this circuit.
Low-frequency characteristics.
At low frequencies and under small-signal conditions, the circuit in Figure 1 can be represented by that in Figure 2, where the hybrid-pi model for the MOSFET has been employed.
The amplifier characteristics are summarized below in Table 1. The approximate expressions use the assumptions (usually accurate) "rO" » "RL" and "gmrO" » 1.
In general, the overall voltage/current gain may be substantially less than the open/short circuit gains listed above (depending on the source and load resistances) due to the loading effect.
Closed circuit voltage gain.
Taking input and output loading into consideration, the closed circuit voltage gain (that is, the gain with load "RL" and source with resistance "RS" both attached) of the common gate can be written as:
formula_0 ,
which has the simple limiting forms
formula_1,
depending upon whether "gmRS" is much larger or much smaller than one.
In the first case the circuit acts as a current follower, which can be understood as follows: for "RS" » 1/"gm" the voltage source can be replaced by its Norton equivalent with Norton current "vThév / RS" and parallel Norton resistance "RS". Because the amplifier input resistance is small, the driver delivers by current division a current "vThév / RS" to the amplifier. The current gain is unity, so the same current is delivered to the output load "RL", producing by Ohm's law an output voltage "vout = vThévRL / RS", that is, the first form of the voltage gain above.
In the second case "RS" « 1/"gm" and the Thévenin representation of the source is useful, producing the second form for the gain, typical of voltage amplifiers.
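The following Python sketch (with arbitrary, purely illustrative component values) evaluates the closed-circuit gain and compares it with the two limiting forms:

def common_gate_gain(gm, RL, RS):
    # Approximate closed-circuit voltage gain: gm*RL / (1 + gm*RS)
    return gm * RL / (1.0 + gm * RS)

gm = 5e-3     # transconductance in siemens (illustrative)
RL = 10e3     # load resistance in ohms (illustrative)
for RS in (50.0, 100e3):               # small and large source resistance
    exact = common_gate_gain(gm, RL, RS)
    current_follower_limit = RL / RS   # valid when gm*RS >> 1
    voltage_amplifier_limit = gm * RL  # valid when gm*RS << 1
    print(RS, exact, current_follower_limit, voltage_amplifier_limit)

For the small source resistance the exact gain approaches gm·RL, while for the large source resistance it approaches RL/RS, matching the two cases discussed above.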
Because the input impedance of the common-gate amplifier is very low, the cascode amplifier often is used instead. The cascode places a common-source amplifier between the voltage driver and the common-gate circuit to permit voltage amplification using a driver with "RS » 1/gm".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n{A_\\mathrm{v}} \\approx \\begin{matrix} \\frac {g_m R_\\mathrm{L}} {1+g_mR_S} \\end{matrix}"
},
{
"math_id": 1,
"text": "A_\\mathrm{v} = \\begin{matrix} \\frac {R_L}{R_S}\\end{matrix} \\ \\ \\mathrm{ or } \\ \\ A_\\mathrm{v} = g_m R_L "
}
] |
https://en.wikipedia.org/wiki?curid=1461257
|
1461259
|
Common source
|
Electronic amplifier circuit type
In electronics, a common-source amplifier is one of three basic single-stage field-effect transistor (FET) amplifier topologies, typically used as a voltage or transconductance amplifier. The easiest way to tell if a FET is common source, common drain, or common gate is to examine where the signal enters and leaves. The remaining terminal is what is known as "common". In this example, the signal enters the gate, and exits the drain. The only terminal remaining is the source. This is a common-source FET circuit. The analogous bipolar junction transistor circuit may be viewed as a transconductance amplifier or as a voltage amplifier. (See classification of amplifiers). As a transconductance amplifier, the input voltage is seen as modulating the current going to the load. As a voltage amplifier, input voltage modulates the current flowing through the FET, changing the voltage across the output resistance according to Ohm's law. However, the FET device's output resistance typically is not high enough for a reasonable transconductance amplifier (ideally infinite), nor low enough for a decent voltage amplifier (ideally zero). As seen below in the formula, the voltage gain depends on the load resistance, so it cannot be applied to drive low-resistance devices, such as a speaker (having a resistance of 8 ohms). Another major drawback is the amplifier's limited high-frequency response. Therefore, in practice the output often is routed through either a voltage follower (common-drain or CD stage), or a current follower (common-gate or CG stage), to obtain more favorable output and frequency characteristics. The CS–CG combination is called a cascode amplifier.
Characteristics.
At low frequencies and using a simplified hybrid-pi model (where the output resistance due to channel length modulation is not considered), the following closed-loop small-signal characteristics can be derived.
Bandwidth.
The bandwidth of the common-source amplifier tends to be low, due to the high capacitance resulting from the Miller effect. The gate-drain capacitance is effectively multiplied by the factor formula_0, thus increasing the total input capacitance and lowering the overall bandwidth.
Figure 3 shows a MOSFET common-source amplifier with an active load. Figure 4 shows the corresponding small-signal circuit when a load resistor "R"L is added at the output node and a Thévenin driver of applied voltage "V"A and series resistance "R"A is added at the input node. The limitation on bandwidth in this circuit stems from the coupling of parasitic transistor capacitance "C"gd between gate and drain and the series resistance of the source "R"A. (There are other parasitic capacitances, but they are neglected here as they have only a secondary effect on bandwidth.)
Using Miller's theorem, the circuit of Figure 4 is transformed to that of Figure 5, which shows the "Miller capacitance" "C"M on the input side of the circuit. The size of "C"M is decided by equating the current in the input circuit of Figure 5 through the Miller capacitance, say "i"M, which is:
formula_1 ,
to the current drawn from the input by capacitor "C"gd in Figure 4, namely "jωC"gd "v"GD. These two currents are the same, making the two circuits have the same input behavior, provided the Miller capacitance is given by:
formula_2 .
Usually the frequency dependence of the gain "v"D / "v"G is unimportant for frequencies even somewhat above the corner frequency of the amplifier, which means a low-frequency hybrid-pi model is accurate for determining "v"D / "v"G. This evaluation is "Miller's approximation" and provides the estimate (just set the capacitances to zero in Figure 5):
formula_3 ,
so the Miller capacitance is
formula_4 .
The gain "g"m ("r"O || "R"L) is large for large "R"L, so even a small parasitic capacitance "C"gd can become a large influence in the frequency response of the amplifier, and many circuit tricks are used to counteract this effect. One trick is to add a common-gate (current-follower) stage to make a cascode circuit. The current-follower stage presents a load to the common-source stage that is very small, namely the input resistance of the current follower ("R"L ≈ 1 / "g"m ≈ "V"ov / (2"I"D) ; see common gate). Small "R"L reduces "C"M. The article on the common-emitter amplifier discusses other solutions to this problem.
Returning to Figure 5, the gate voltage is related to the input signal by voltage division as:
formula_5 .
The bandwidth (also called the 3 dB frequency) is the frequency where the signal drops to 1/ √2 of its low-frequency value. (In decibels, dB(√2) = 3.01 dB). A reduction to 1/ √2 occurs when "ωC"M "R"A = 1, making the input signal at this value of "ω" (call this value "ω"3 dB, say) "v"G = "V"A / (1+j). The magnitude of (1+j) = √2. As a result, the 3 dB frequency "f"3 dB = "ω"3 dB / (2π) is:
formula_6 .
If the parasitic gate-to-source capacitance "C"gs is included in the analysis, it simply is parallel with "C"M, so
formula_7 .
Notice that "f"3 dB becomes large if the source resistance "R"A is small, so the Miller amplification of the capacitance has little effect upon the bandwidth for small "R"A. This observation suggests another circuit trick to increase bandwidth: add a common-drain (voltage-follower) stage between the driver and the common-source stage so the Thévenin resistance of the combined driver plus voltage follower is less than the "R"A of the original driver.
Examination of the output side of the circuit in Figure 2 enables the frequency dependence of the gain "v"D / "v"G to be found, providing a check that the low-frequency evaluation of the Miller capacitance is adequate for frequencies "f" even larger than "f"3 dB. (See article on pole splitting to see how the output side of the circuit is handled.)
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "1+|A_\\text{v}|\\,"
},
{
"math_id": 1,
"text": "\\ i_\\mathrm{M} = j \\omega C_\\mathrm{M} v_\\mathrm{GS} = j \\omega C_\\mathrm{M} v_\\mathrm{G}"
},
{
"math_id": 2,
"text": " C_\\mathrm{M} = C_\\mathrm{gd} \\frac {v_\\mathrm{GD}} {v_\\mathrm{GS}} = C_\\mathrm{gd} \\left( 1 - \\frac {v_\\mathrm{D}} {v_\\mathrm{G}} \\right)"
},
{
"math_id": 3,
"text": " \\frac {v_\\mathrm{D}} {v_\\mathrm{G}} \\approx -g_\\mathrm{m} (r_\\mathrm{O} \\parallel R_\\mathrm{L})"
},
{
"math_id": 4,
"text": " C_\\mathrm{M} = C_\\mathrm{gd} \\left( 1+g_\\mathrm{m} (r_\\mathrm{O} \\parallel R_\\mathrm{L})\\right) "
},
{
"math_id": 5,
"text": " v_\\mathrm{G} = V_\\mathrm{A}\\frac {1/(j \\omega C_\\mathrm{M}) } {1/(j \\omega C_\\mathrm{M}) +R_\\mathrm{A}} = V_\\mathrm{A}\\frac {1} {1+j \\omega C_\\mathrm{M} R_\\mathrm{A}} "
},
{
"math_id": 6,
"text": " f_\\mathrm{3dB}=\\frac {1}{2\\pi R_\\mathrm{A} C_\\mathrm{M}}= \\frac {1}{2\\pi R_\\mathrm{A} [ C_\\mathrm{gd}(1+g_\\mathrm{m} (r_\\mathrm{O} \\parallel R_\\mathrm{L})]}"
},
{
"math_id": 7,
"text": " f_\\mathrm{3dB}=\\frac {1}{2\\pi R_\\mathrm{A} (C_\\mathrm{M}+C_\\mathrm{gs})} =\\frac {1}{2\\pi R_\\mathrm{A} [C_\\mathrm{gs} + C_\\mathrm{gd}(1+g_\\mathrm{m} (r_\\mathrm{O} \\parallel R_\\mathrm{L}))]}"
}
] |
https://en.wikipedia.org/wiki?curid=1461259
|
1461265
|
Split-quaternion
|
Four-dimensional associative algebra over the reals
In abstract algebra, the split-quaternions or coquaternions form an algebraic structure introduced by James Cockle in 1849 under the latter name. They form an associative algebra of dimension four over the real numbers.
After introduction in the 20th century of coordinate-free definitions of rings and algebras, it was proved that the algebra of split-quaternions is isomorphic to the ring of the 2×2 real matrices. So the study of split-quaternions can be reduced to the study of real matrices, and this may explain why there are few mentions of split-quaternions in the mathematical literature of the 20th and 21st centuries.
Definition.
The "split-quaternions" are the linear combinations (with real coefficients) of four basis elements 1, i, j, k that satisfy the following product rules:
i2 = −1,
j2 = 1,
k2 = 1,
ij = k = −ji.
By associativity, these relations imply
jk = −i = −kj,
ki = j = −ik,
and also ijk = 1.
So, the split-quaternions form a real vector space of dimension four with {1, i, j, k} as a basis. They also form a noncommutative ring, obtained by extending the above product rules by distributivity to all split-quaternions.
Consider the square matrices
formula_0
They satisfy the same multiplication table as the corresponding split-quaternions. As these matrices form a basis of the two-by-two matrices, the unique linear function that maps 1, i, j, k to formula_1 (respectively) induces an algebra isomorphism from the split-quaternions to the two-by-two real matrices.
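The identification can be checked directly; the short Python sketch below (using plain nested lists rather than any matrix library) verifies the defining relations for the four matrices above:

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def neg(M):
    return [[-x for x in row] for row in M]

ONE = [[1, 0], [0, 1]]
I   = [[0, 1], [-1, 0]]
J   = [[0, 1], [1, 0]]
K   = [[1, 0], [0, -1]]

assert matmul(I, I) == neg(ONE)                        # i^2 = -1
assert matmul(J, J) == ONE                             # j^2 = 1
assert matmul(K, K) == ONE                             # k^2 = 1
assert matmul(I, J) == K and matmul(J, I) == neg(K)    # ij = k = -ji
assert matmul(matmul(I, J), K) == ONE                  # ijk = 1
print("all defining relations hold for the matrix representation")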
The above multiplication rules imply that the eight elements 1, i, j, k, −1, −i, −j, −k form a group under this multiplication, which is isomorphic to the dihedral group D4, the symmetry group of a square. In fact, if one considers a square whose vertices are the points whose coordinates are 0 or 1, the matrix formula_2 is the clockwise rotation by a quarter turn, formula_3 is the symmetry around the first diagonal, and formula_4 is the symmetry around the x axis.
Properties.
Like the quaternions introduced by Hamilton in 1843, they form a four dimensional real associative algebra. But like the real algebra of 2×2 matrices – and unlike the real algebra of quaternions – the split-quaternions contain nontrivial zero divisors, nilpotent elements, and idempotents. (For example, (1 + j) is an idempotent zero-divisor, and i − j is nilpotent.) As an algebra over the real numbers, the algebra of split-quaternions is isomorphic to the algebra of 2×2 real matrices by the above defined isomorphism.
This isomorphism allows identifying each split-quaternion with a 2×2 matrix. So every property of split-quaternions corresponds to a similar property of matrices, which is often named differently.
The "conjugate" of a split-quaternion
"q" = "w" + "x"i + "y"j + "z"k, is "q"∗ = "w" − "x"i − "y"j − "z"k. In term of matrices, the conjugate is the cofactor matrix obtained by exchanging the diagonal entries and changing the sign of the other two entries.
The product of a split-quaternion with its conjugate is the isotropic quadratic form:
formula_5
which is called the "norm" of the split-quaternion or the determinant of the associated matrix.
The real part of a split-quaternion "q" = "w" + "x"i + "y"j + "z"k is "w" = ("q"∗ + "q")/2. It equals half the trace of the associated matrix.
The norm of a product of two split-quaternions is the product of their norms. Equivalently, the determinant of a product of matrices is the product of their determinants. This property means that split-quaternions form a composition algebra. As there are nonzero split-quaternions having a zero norm, split-quaternions form a "split composition algebra" – hence their name.
A split-quaternion with a nonzero norm has a multiplicative inverse, namely "q"∗/"N"("q"). In terms of matrices, this is equivalent to Cramer's rule, which asserts that a matrix is invertible if and only if its determinant is nonzero, and, in this case, the inverse of the matrix is the quotient of the cofactor matrix by the determinant.
The isomorphism between split-quaternions and 2×2 real matrices shows that the multiplicative group of split-quaternions with a nonzero norm is isomorphic with formula_6 and the group of split quaternions of norm 1 is isomorphic with formula_7
Geometrically, the split-quaternions can be compared to Hamilton's quaternions as pencils of planes. In both cases the real numbers form the axis of a pencil. In Hamilton quaternions there is a sphere of imaginary units, and any pair of antipodal imaginary units generates a complex plane with the real line. For split-quaternions there are hyperboloids of hyperbolic and imaginary units that generate split-complex or ordinary complex planes, as described below in § Stratification.
Representation as complex matrices.
There is a representation of the split-quaternions as a unital associative subalgebra of the 2×2 matrices with complex entries. This representation can be defined by the algebra homomorphism that maps a split-quaternion "w" + "x"i + "y"j + "z"k to the matrix
formula_8
Here, i (italic) is the imaginary unit, not to be confused with the split quaternion basis element i (upright roman).
The image of this homomorphism is the matrix ring formed by the matrices of the form
formula_9
where the superscript formula_10 denotes a complex conjugate.
This homomorphism maps respectively the split-quaternions i, j, k on the matrices
formula_11
The proof that this representation is an algebra homomorphism is straightforward but requires some tedious computations, which can be avoided by starting from the expression of split-quaternions as 2×2 real matrices, and using matrix similarity. Let S be the matrix
formula_12
Then, applied to the representation of split-quaternions as 2×2 real matrices, the above algebra homomorphism is the matrix similarity.
formula_13
It follows almost immediately that for a split quaternion represented as a complex matrix, the conjugate is the matrix of the cofactors, and the norm is the determinant.
With the representation of split-quaternions as complex matrices, the matrices of split-quaternions of norm 1 are exactly the elements of the special unitary group SU(1,1). This is used in hyperbolic geometry for describing hyperbolic motions of the Poincaré disk model.
Generation from split-complex numbers.
Split-quaternions may be generated by a modified Cayley–Dickson construction similar to the method of L. E. Dickson and Adrian Albert for the division algebras C, H, and O. The multiplication rule
formula_14
is used when producing the doubled product in the real-split cases. The doubled conjugate formula_15 so that
formula_16
If "a" and "b" are split-complex numbers and split-quaternion formula_17
then formula_18
Stratification.
In this section, the real subalgebras generated by a single split-quaternion are studied and classified.
Let "p"
"w" + "x"i + "y"j + "z"k be a split-quaternion. Its "real part" is "w" = ("p" + "p"*). Let "q" = "p" – "w" = ("p" – "p"*) be its "nonreal part". One has "q"* = –"q", and therefore formula_19 It follows that "p"2 is a real number if and only "p" is either a real number ("q" = 0 and "p" = "w") or a "purely nonreal split quaternion" ("w" = 0 and "p" = "q").
The structure of the subalgebra formula_20 generated by p follows straightforwardly. One has
formula_21
and this is a commutative algebra. Its dimension is two except if p is real (in this case, the subalgebra is simply formula_22).
The nonreal elements of formula_20 whose square is real have the form "aq" with formula_23
Three cases have to be considered, which are detailed in the next subsections.
Nilpotent case.
With the above notation, if formula_24 (that is, if "q" is nilpotent), then "N"("q") = 0, that is, formula_25 This implies that there exist "w", "a" and "t" in formula_22 such that 0 ≤ "t" < 2π and
formula_26
This is a parametrization of all split-quaternions whose nonreal part is nilpotent.
This is also a parameterization of these subalgebras by the points of a circle: the split-quaternions of the form formula_27 form a circle; a subalgebra generated by a nilpotent element contains exactly one point of the circle; and the circle does not contain any other point.
The algebra generated by a nilpotent element is isomorphic to formula_28 and to the plane of dual numbers.
Imaginary units.
This is the case where "N"("q") > 0. Letting formula_29 one has
formula_30
It follows that "q"/"n" belongs to the hyperboloid of two sheets of equation formula_31 Therefore, there are real numbers "n", "t", "u" such that 0 ≤ "t" < 2π and
formula_32
This is a parametrization of all split-quaternions whose nonreal part has a positive norm.
This is also a parameterization of the corresponding subalgebras by the pairs of opposite points of a hyperboloid of two sheets: the split-quaternions of the form formula_33 form a hyperboloid of two sheets; a subalgebra generated by a split-quaternion with a nonreal part of positive norm contains exactly two opposite points on this hyperboloid, one on each sheet; and the hyperboloid does not contain any other point.
The algebra generated by a split-quaternion with a nonreal part of positive norm is isomorphic to formula_34 and to the field formula_35 of complex numbers.
Hyperbolic units.
This is the case where "N"("q") < 0. Letting formula_36 one has
formula_37
It follows that "q"/"n" belongs to the hyperboloid of one sheet of equation "y"2 + "z"2 − "x"2 = 1. Therefore, there are real numbers "n", "t", "u" such that 0 ≤ "t" < 2π and
formula_38
This is a parametrization of all split-quaternions whose nonreal part has a negative norm.
This is also a parameterization of the corresponding subalgebras by the pairs of opposite points of a hyperboloid of one sheet: the split-quaternions of the form formula_39 form a hyperboloid of one sheet; a subalgebra generated by a split-quaternion with a nonreal part of negative norm contains exactly two opposite points on this hyperboloid; and the hyperboloid does not contain any other point.
The algebra generated by a split-quaternion with a nonreal part of negative norm is isomorphic to formula_40 and to the ring of split-complex numbers. It is also isomorphic (as an algebra) to formula_41 by the mapping defined by
formula_42
Stratification by the norm.
As seen above, the purely nonreal split-quaternions of norm –1, 1 and 0 form respectively a hyperboloid of one sheet, a hyperboloid of two sheets and a circular cone in the space of nonreal split-quaternions.
These surfaces are pairwise asymptotic and do not intersect. Their complement consists of six connected regions:
the two regions located on the concave side of the hyperboloid of two sheets, where formula_43;
the two regions between the hyperboloid of two sheets and the cone, where formula_44;
the region between the cone and the hyperboloid of one sheet, where formula_45;
the region outside the hyperboloid of one sheet, where formula_46.
This stratification can be refined by considering split-quaternions of a fixed norm: for every real number "n" ≠ 0 the purely nonreal split-quaternions of norm "n" form an hyperboloid. All these hyperboloids are asymptote to the above cone, and none of these surfaces intersect any other. As the set of the purely nonreal split-quaternions is the disjoint union of these surfaces, this provides the desired stratification.
Colour space.
Split quaternions have been applied to colour balance. The model refers to the Jordan algebra of symmetric matrices representing the algebra. The model reconciles trichromacy with Hering's opponency and uses the Cayley–Klein model of hyperbolic geometry for chromatic distances.
Historical notes.
The coquaternions were initially introduced (under that name) in 1849 by James Cockle in the London–Edinburgh–Dublin Philosophical Magazine. The introductory papers by Cockle were recalled in the 1904 "Bibliography" of the Quaternion Society.
Alexander Macfarlane called the structure of split-quaternion vectors an "exspherical system" when he was speaking at the International Congress of Mathematicians in Paris in 1900. Macfarlane considered the "hyperboloidal counterpart to spherical analysis" in a 1910 article "Unification and Development of the Principles of the Algebra of Space" in the "Bulletin" of the Quaternion Society.
The unit sphere was considered in 1910 by Hans Beck. For example, the dihedral group appears on page 419. The split-quaternion structure has also been mentioned briefly in the "Annals of Mathematics".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\begin{align}\n\\boldsymbol{1} =\\begin{pmatrix}1&0\\\\0&1\\end{pmatrix},\\qquad&\\boldsymbol{i} =\\begin{pmatrix}0&1\\\\-1&0\\end{pmatrix},\\\\\n\\boldsymbol{j} =\\begin{pmatrix}0&1\\\\1&0\\end{pmatrix},\\qquad&\\boldsymbol{k} =\\begin{pmatrix}1&0\\\\0&-1\\end{pmatrix}.\n\\end{align}"
},
{
"math_id": 1,
"text": "\\boldsymbol{1}, \\boldsymbol{i}, \\boldsymbol{j}, \\boldsymbol{k}"
},
{
"math_id": 2,
"text": "\\boldsymbol{i}"
},
{
"math_id": 3,
"text": "\\boldsymbol{j}"
},
{
"math_id": 4,
"text": "\\boldsymbol{k}"
},
{
"math_id": 5,
"text": "N(q) = q q^* = w^2 + x^2 - y^2 - z^2,"
},
{
"math_id": 6,
"text": "\\operatorname{GL}(2, \\mathbb R),"
},
{
"math_id": 7,
"text": "\\operatorname{SL}(2, \\mathbb R)."
},
{
"math_id": 8,
"text": "\\begin{pmatrix}w+xi& y+zi\\\\y-zi&w-xi\\end{pmatrix}."
},
{
"math_id": 9,
"text": "\\begin{pmatrix}u & v \\\\ v^* & u^* \\end{pmatrix},"
},
{
"math_id": 10,
"text": "^*"
},
{
"math_id": 11,
"text": "\\begin{pmatrix}i & 0 \\\\0 &-i \\end{pmatrix}, \\quad\\begin{pmatrix}0 & 1 \\\\1 &0 \\end{pmatrix},\\quad \\begin{pmatrix}0 & i \\\\-i &0 \\end{pmatrix}."
},
{
"math_id": 12,
"text": "S=\\begin{pmatrix}1 & i \\\\i &1 \\end{pmatrix}."
},
{
"math_id": 13,
"text": "M\\mapsto S^{-1}MS."
},
{
"math_id": 14,
"text": "(a,b)(c,d)\\ = \\ (ac + d^* b, \\ da + bc^* )"
},
{
"math_id": 15,
"text": "(a,b)^* = (a^*, - b), "
},
{
"math_id": 16,
"text": "N(a,b) \\ = \\ (a,b)(a,b)^* \\ = \\ (a a^* - b b^* , 0)."
},
{
"math_id": 17,
"text": "q = (a,b) = ((w + z j), (y + xj)), "
},
{
"math_id": 18,
"text": "N(q) = a a^* - b b^* = w^2 - z^2 - (y^2 - x^2) = w^2 + x^2 - y^2 - z^2 ."
},
{
"math_id": 19,
"text": "p^2=w^2+2wq-N(q)."
},
{
"math_id": 20,
"text": "\\mathbb R[p]"
},
{
"math_id": 21,
"text": "\\mathbb R[p]=\\mathbb R[q]=\\{a+bq\\mid a,b\\in\\mathbb R\\},"
},
{
"math_id": 22,
"text": "\\mathbb R"
},
{
"math_id": 23,
"text": "a\\in \\mathbb R."
},
{
"math_id": 24,
"text": "q^2=0,"
},
{
"math_id": 25,
"text": "x^2-y^2-z^2=0."
},
{
"math_id": 26,
"text": "p=w+a\\mathrm i + a\\cos(t)\\mathrm j + a\\sin(t)\\mathrm k."
},
{
"math_id": 27,
"text": "\\mathrm i + \\cos(t)\\mathrm j + \\sin(t)\\mathrm k"
},
{
"math_id": 28,
"text": "\\mathbb R[X]/\\langle X^2\\rangle"
},
{
"math_id": 29,
"text": "n=\\sqrt{N(q)},"
},
{
"math_id": 30,
"text": "q^2 =-q^*q=N(q)=n^2=x^2-y^2-z^2."
},
{
"math_id": 31,
"text": "x^2-y^2-z^2=1."
},
{
"math_id": 32,
"text": "p=w+n\\cosh(u)\\mathrm i + n\\sinh(u)\\cos(t)\\mathrm j + n\\sinh(u)\\sin(t)\\mathrm k."
},
{
"math_id": 33,
"text": "\\cosh(u)\\mathrm i + \\sinh(u)\\cos(t)\\mathrm j + \\sinh(u)\\sin(t)\\mathrm k"
},
{
"math_id": 34,
"text": "\\mathbb R[X]/\\langle X^2+1\\rangle"
},
{
"math_id": 35,
"text": "\\Complex"
},
{
"math_id": 36,
"text": "n=\\sqrt{-N(q)},"
},
{
"math_id": 37,
"text": "q^2 = -q^*q=N(q)=-n^2=x^2-y^2-z^2."
},
{
"math_id": 38,
"text": "p=w+n\\sinh(u)\\mathrm i + n\\cosh(u)\\cos(t)\\mathrm j + n\\cosh(u)\\sin(t)\\mathrm k."
},
{
"math_id": 39,
"text": "\\sinh(u)\\mathrm i + \\cosh(u)\\cos(t)\\mathrm j + \\cosh(u)\\sin(t)\\mathrm k"
},
{
"math_id": 40,
"text": "\\mathbb R[X]/\\langle X^2-1\\rangle"
},
{
"math_id": 41,
"text": "\\mathbb R^2"
},
{
"math_id": 42,
"text": "(1,0)\\mapsto \\frac{1+X}2, \\quad\n(0,1)\\mapsto \\frac{1-X}2.\n"
},
{
"math_id": 43,
"text": "N(q)>1"
},
{
"math_id": 44,
"text": "0<N(q)<1"
},
{
"math_id": 45,
"text": "-1<N(q)<0"
},
{
"math_id": 46,
"text": "N(q)<-1"
}
] |
https://en.wikipedia.org/wiki?curid=1461265
|
1461290
|
Dirichlet–Jordan test
|
In mathematics, the Dirichlet–Jordan test gives sufficient conditions for a real-valued, periodic function "f" to be equal to the sum of its Fourier series at a point of continuity. Moreover, the behavior of the Fourier series at points of discontinuity is determined as well (it is the midpoint of the values of the discontinuity). It is one of many conditions for the convergence of Fourier series.
The original test was established by Peter Gustav Lejeune Dirichlet in 1829, for piecewise monotone functions (functions with a finite number of sections per period, each of which is monotonic). It was extended in the late 19th century by Camille Jordan to functions of bounded variation in each period (any function of bounded variation is the difference of two monotonically increasing functions).
Dirichlet–Jordan test for Fourier series.
The Dirichlet–Jordan test states that if a periodic function formula_0 is of bounded variation on a period, then the Fourier series formula_1 converges, as formula_2, at each point of the domain to
formula_3
In particular, if formula_4 is continuous at formula_5, then the Fourier series converges to formula_0. Moreover, if formula_4 is continuous everywhere, then the convergence is uniform.
Stated in terms of a periodic function of period 2π, the Fourier series coefficients are defined as
formula_6
and the partial sums of the Fourier series are
formula_7
The analogous statement holds irrespective of what the period of "f" is, or which version of the Fourier series is chosen.
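As a numerical illustration (a sketch only, using the real sine-series form of the Fourier series for an odd square wave), the partial sums converge to the function value at a point of continuity and to the midpoint of the jump at a discontinuity:

import math

def square_wave(x):
    # 2*pi-periodic square wave: +1 on (0, pi), -1 on (pi, 2*pi)
    x = math.fmod(x, 2.0 * math.pi)
    if x < 0.0:
        x += 2.0 * math.pi
    return 1.0 if 0.0 < x < math.pi else -1.0

def partial_sum(x, n):
    # Partial Fourier sum of the square wave: (4/pi) * sum over odd k <= n of sin(k*x)/k
    return sum(4.0 / (math.pi * k) * math.sin(k * x) for k in range(1, n + 1, 2))

for n in (5, 50, 500):
    # x = 1.0 is a point of continuity; x = 0.0 is a jump whose one-sided limits are -1 and +1.
    print(n, partial_sum(1.0, n), square_wave(1.0), partial_sum(0.0, n))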
There is also a pointwise version of the test: if formula_4 is a periodic function in formula_8, and is of bounded variation in a neighborhood of formula_5, then the Fourier series at formula_5 converges to the limit as above
formula_3
Jordan test for Fourier integrals.
For the Fourier transform on the real line, there is a version of the test as well. Suppose that formula_0 is in formula_9 and of bounded variation in a neighborhood of the point formula_5. Then
formula_10
If formula_4 is continuous in an open interval, then the integral on the left-hand side converges uniformly in the interval, and the limit on the right-hand side is formula_0.
This version of the test (although not satisfying modern demands for rigor) is historically prior to Dirichlet, being due to Joseph Fourier.
Dirichlet conditions in signal processing.
In signal processing, the test is often retained in the original form due to Dirichlet: a piecewise monotone bounded periodic function formula_4 (having a finite number of monotonic intervals per period) has a convergent Fourier series whose value at each point is the arithmetic mean of the left and right limits of the function. The condition of piecewise monotonicity stipulates having only finitely many local extrema per period, i.e., that the function changes its direction of variation only finitely many times. This may be called a function of "finite variation", as opposed to bounded variation. Finite variation implies bounded variation, but the reverse is not true. (Dirichlet required in addition that the function have only finitely many discontinuities, but this constraint is unnecessarily stringent.) Any signal that can be physically produced in a laboratory satisfies these conditions.
As in the pointwise case of the Jordan test, the condition of boundedness can be relaxed if the function is assumed to be absolutely integrable (i.e., formula_8) over a period, provided it satisfies the other conditions of the test in a neighborhood of the point formula_5 where the limit is taken.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f(x)"
},
{
"math_id": 1,
"text": "S_nf(x)"
},
{
"math_id": 2,
"text": "n\\to\\infty"
},
{
"math_id": 3,
"text": "\\lim_{\\varepsilon\\to 0}\\frac{f(x+\\varepsilon)+f(x-\\varepsilon)}{2}."
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": " a_k = \\frac{1}{2\\pi} \\int_{-\\pi}^\\pi f(x) e^{-ikx}\\, dx,"
},
{
"math_id": 7,
"text": "S_nf(x) = \\sum_{k=-n}^na_k e^{ikx}"
},
{
"math_id": 8,
"text": "L^1"
},
{
"math_id": 9,
"text": "L^1(-\\infty,\\infty)"
},
{
"math_id": 10,
"text": "\\frac1\\pi\\lim_{M\\to\\infty}\\int_0^{M}du\\int_{-\\infty}^\\infty f(t)\\cos u(x-t)\\,dt = \\lim_{\\varepsilon\\to 0}\\frac{f(x+\\varepsilon)+f(x-\\varepsilon)}{2}."
}
] |
https://en.wikipedia.org/wiki?curid=1461290
|
1461372
|
Laser rangefinder
|
Range finding device that uses a laser beam to determine the distance to an object
A laser rangefinder, also known as a laser telemeter, is a rangefinder that uses a laser beam to determine the distance to an object. The most common form of laser rangefinder operates on the time of flight principle by sending a laser pulse in a narrow beam towards the object and measuring the time taken by the pulse to be reflected off the target and returned to the sender. Due to the high speed of light, this technique is not appropriate for high precision sub-millimeter measurements, where triangulation and other techniques are often used instead. Laser rangefinders are sometimes classified as a type of handheld scannerless lidar.
Pulse.
The pulse may be coded to reduce the chance that the rangefinder can be jammed. It is possible to use Doppler effect techniques to judge whether the object is moving towards or away from the rangefinder, and if so, how fast.
Precision.
The precision of an instrument is correlated with the rise time, divergence, and power of its laser pulse, as well as the quality of its optics and onboard digital signal processing. Environmental factors can significantly reduce range and accuracy:
In good conditions, skilled operators using precision laser rangefinders can range a target to within a meter at distances on the order of three kilometers.
Range and range error.
Despite the beam being narrow, it will eventually spread over long distances due to the divergence of the laser beam, as well as due to scintillation and beam wander effects, caused by the presence of water droplets in the air acting as lenses ranging in size from microscopic to roughly half the height of the laser beam's path above the earth.
These atmospheric distortions coupled with the divergence of the laser itself and with transverse winds that serve to push the atmospheric heat bubbles laterally may combine to make it difficult to get an accurate reading of the distance of an object, say, beneath some trees or behind bushes, or even over long distances of more than 1 km in open and unobscured desert terrain.
Some of the laser light might reflect off leaves or branches which are closer than the object, giving an early return and a reading which is too low. Alternatively, over distances longer than 360 m, if the target is in proximity to the earth, it may simply vanish into a mirage, caused by temperature gradients in the air in proximity to the heated surface bending the laser light. All these effects must be considered.
Calculation.
The distance between point A and B is given by
formula_0
where c is the speed of light and t is the amount of time for the round-trip between A and B.
formula_1
where φ is the phase delay made by the light traveling and ω is the angular frequency of optical wave.
Then substituting the values in the equation,
formula_2
In this equation, λ is the wavelength; Δφ is the part of the phase delay that does not fulfill π (that is, φ modulo π); N is the integer number of wave half-cycles of the round trip and ΔN the remaining fractional part.
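The time-of-flight relation is easy to evaluate; the Python sketch below (with illustrative numbers only) shows the distance corresponding to a given round-trip time and the distance error implied by a timing error:

C = 299_792_458.0   # speed of light in vacuum, m/s

def distance_from_round_trip(t):
    # D = c*t/2 for a round-trip time t in seconds
    return C * t / 2.0

print(distance_from_round_trip(10e-6))   # a 10 microsecond round trip: about 1.5 km
print(distance_from_round_trip(1e-9))    # a 1 nanosecond timing error: about 15 cm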
Technologies.
Time of flight - this measures the time taken for a light pulse to travel to the target and back. With the speed of light known, and an accurate measurement of the time taken, the distance can be calculated. Many pulses are fired sequentially and the average response is most commonly used. This technique requires very accurate sub-nanosecond timing circuitry.
Multiple frequency phase-shift - this measures the phase shift of multiple frequencies on reflection then solves some simultaneous equations to give a final measure.
Interferometry - the most accurate and most useful technique for measuring changes in distance rather than absolute distances.
Light attenuation by atmospheric absorption - The method measures the attenuation of a laser beam caused by the absorption from an atmospheric compound (H2O, CO2, CH4, O2 etc.) to calculate the distance to an object. The light atmospheric absorption attenuation method requires unmodulated incoherent light sources and low-frequency electronics that reduce the complexity of the devices. Due to this, low-cost light sources can be used for range-finding. However, the application of the method is limited to atmospheric measurements or planetary exploration.
Applications.
Military.
Rangefinders provide snipers and artillery with an exact distance to targets located beyond the distance of point-blank shooting. They can also be used for military reconnaissance and engineering. Tanks usually use a laser rangefinder to correct the firing solution for direct shots.
Handheld military rangefinders operate at ranges of 2 km up to 25 km and are combined with binoculars or monoculars. When the rangefinder is equipped with a digital magnetic compass (DMC) and inclinometer it is capable of providing magnetic azimuth, inclination, and height (length) of targets. Some rangefinders can also measure a target's speed in relation to the observer. Some rangefinders have cable or wireless interfaces to enable them to transfer their measurement(s) data to other equipment like fire control computers. Some models also offer the possibility to use add-on night vision modules. Most handheld rangefinders use standard or rechargeable batteries.
The more powerful models of rangefinders measure distance up to 40 km and are normally installed either on a tripod or directly on a vehicle, ship, jet, helicopter or gun platform. In the latter case the rangefinder module is integrated with on-board thermal, night vision and daytime observation equipment. The most advanced military rangefinders can be integrated with computers.
To make laser rangefinders and laser-guided weapons less useful against military targets, various military arms may have developed laser-absorbing paint for their vehicles. Regardless, some objects don't reflect laser light very well and using a laser rangefinder on them is difficult.
The first commercial laser rangefinder was the Barr & Stroud LF1, developed in association with Hughes Aircraft, which became available in 1965. This was then followed by the Barr & Stroud LF2, which integrated the rangefinder into a tank sight, and this was used on the Chieftain tank in 1969, the first vehicle so-equipped with such a system. Both systems used ruby lasers.
3D modelling.
Laser rangefinders are used extensively in 3D object recognition, 3D object modelling, and a wide variety of computer vision-related fields. This technology constitutes the heart of the so-called "time-of-flight" 3D scanners. In contrast to the military instruments, laser rangefinders offer high-precision scanning abilities, with either single-face or 360-degree scanning modes.
A number of algorithms have been developed to merge the range data retrieved from multiple angles of a single object to produce complete 3D models with as little error as possible. One of the advantages offered by laser rangefinders over other methods of computer vision is in not needing to correlate features from two images in order to determine depth-information like stereoscopic methods do.
Laser rangefinders used in computer vision applications often have depth resolutions of 0.1 mm or less. This can be achieved by using triangulation or refraction measurement techniques, unlike the time of flight techniques used in LIDAR.
Forestry.
Special laser rangefinders are used in forestry. These devices have anti-leaf filters and work with reflectors. The laser beam reflects only off the reflector, so an exact distance measurement is ensured. Laser rangefinders with anti-leaf filters are used, for example, for forest inventories.
Sports.
Laser rangefinders may be effectively used in various sports that require precision distance measurement, such as golf, hunting, and archery. Some of the more popular manufacturers are Caddytalk, Opti-logic Corporation, Bushnell, Leupold, LaserTechnology, Trimble, Leica, Newcon Optik, Op. Electronics, Nikon, Swarovski Optik and Zeiss.
Many rangefinders from Bushnell come with advanced features, such as ARC (angle range compensation), multi-distance ability, slope, JOLT (vibration when the target is locked), and Pin-Seeking. ARC can be calculated by hand using the rifleman's rule, but it is usually much easier to let the rangefinder do it in the field. In golf, where pace of play is important, a laser rangefinder is useful for determining the distance to the flag; however, not all features are legal for golf tournament play. Many hunters in the eastern U.S. do not need a rangefinder, although many western hunters do, due to longer shooting distances and more open spaces.
Industrial production processes.
An important application is the use of laser rangefinder technology during the automation of stock management systems and production processes in the steel industry.
Laser measuring tools.
Laser rangefinders are also used in several industries, such as construction, renovation and real estate, as alternatives to tape measures; they were first introduced by Leica Geosystems in France in 1993. To measure a large object like a room with a tape measure, one would need another person to hold the tape at the far wall and a clear line straight across the room to stretch the tape. With a laser measuring tool, the job can be completed by one operator with just a line of sight. Laser measuring tools are also generally more precise than tape measures. Laser measuring tools typically include the ability to perform some simple calculations, such as the area or volume of a room. These devices can be found in hardware stores and online marketplaces.
Price.
Laser rangefinders can vary in price, depending on the quality and application of the product. Military grade rangefinders need to be as accurate as possible and must also reach great distances. These devices can cost hundreds of thousands of dollars. For civilian applications, such as hunting or golf, devices are more affordable and much more readily accessible.
Safety.
Laser rangefinders are divided into four classes and several subclasses. Laser rangefinders available to consumers are usually laser class 1 or class 2 devices and are considered relatively eye-safe. Regardless of the safety rating, direct eye contact should always be avoided. Most laser rangefinders for military use exceed the laser class 2 energy levels.
References.
<templatestyles src="Reflist/styles.css" />
External links.
Media related to at Wikimedia Commons
|
[
{
"math_id": 0,
"text": "D=\\frac{ct}{2}"
},
{
"math_id": 1,
"text": "t=\\frac{\\phi}{\\omega}"
},
{
"math_id": 2,
"text": "D=\\frac{1}{2} ct = \\frac{1}{2} \\frac{c \\phi}{\\omega} = \\frac{c}{4 \\pi f} (N \\pi + \\Delta \\phi) = \\frac{\\lambda}{4}(N+ \\Delta N)"
}
] |
https://en.wikipedia.org/wiki?curid=1461372
|
1461442
|
Progressive function
|
In mathematics, a progressive function "ƒ" ∈ "L"2(R) is a function whose Fourier transform is supported by positive frequencies only:
formula_0
It is called regressive if and only if the time reversed function "f"(−"t") is progressive, or equivalently, if
formula_1
The complex conjugate of a progressive function is regressive, and vice versa.
The space of progressive functions is sometimes denoted formula_2, which is known as the Hardy space of the upper half-plane. This is because a progressive function has the Fourier inversion formula
formula_3
and hence extends to a holomorphic function on the upper half-plane formula_4
by the formula
formula_5
Conversely, every holomorphic function on the upper half-plane which is uniformly square-integrable on every horizontal line
will arise in this manner.
Regressive functions are similarly associated with the Hardy space on the lower half-plane formula_6.
References.
<templatestyles src="Reflist/styles.css" />
"This article incorporates material from progressive function on PlanetMath, which is licensed under the ."
|
[
{
"math_id": 0,
"text": "\\mathop{\\rm supp}\\hat{f} \\subseteq \\mathbb{R}_+."
},
{
"math_id": 1,
"text": "\\mathop{\\rm supp}\\hat{f} \\subseteq \\mathbb{R}_-."
},
{
"math_id": 2,
"text": "H^2_+(R)"
},
{
"math_id": 3,
"text": "f(t) = \\int_0^\\infty e^{2\\pi i st} \\hat f(s)\\, ds"
},
{
"math_id": 4,
"text": "\\{ t + iu: t, u \\in R, u \\geq 0 \\}"
},
{
"math_id": 5,
"text": "f(t+iu) = \\int_0^\\infty e^{2\\pi i s(t+iu)} \\hat f(s)\\, ds\n= \\int_0^\\infty e^{2\\pi i st} e^{-2\\pi su} \\hat f(s)\\, ds."
},
{
"math_id": 6,
"text": "\\{ t + iu: t, u \\in R, u \\leq 0 \\}"
}
] |
https://en.wikipedia.org/wiki?curid=1461442
|
1461517
|
Fixed-point space
|
Space where all functions have fixed points
In mathematics, a Hausdorff space "X" is called a fixed-point space if it obeys a fixed-point theorem, according to which every continuous function formula_0 has a fixed point, a point formula_1 for which formula_2.
For example, the closed unit interval is a fixed point space, as can be proved from the intermediate value theorem. The real line is not a fixed-point space, because the continuous function that adds one to its argument does not have a fixed point. Generalizing the unit interval, by the Brouwer fixed-point theorem, every compact bounded convex set in a Euclidean space is a fixed-point space.
The definition of a fixed-point space can also be extended from continuous functions of topological spaces to other classes of maps on other types of space.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f:X\\rightarrow X"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "f(x)=x"
}
] |
https://en.wikipedia.org/wiki?curid=1461517
|
146153
|
Overburden pressure
|
Stress imposed on soil or rock by overlying material
Pressure is the magnitude of force applied per unit area. Overburden pressure is a geology term that denotes the pressure caused by the weight of the overlying layers of material at a specific depth beneath the earth's surface. Overburden pressure is also called lithostatic pressure, or vertical stress.
In a stratigraphic layer that is in hydrostatic equilibrium, the overburden pressure at a depth z, assuming the magnitude of the gravitational acceleration is approximately constant, is given by:
formula_0
where:
formula_1 is the depth below the surface,
formula_2 is the overburden pressure at depth formula_1,
formula_3 is the pressure at the surface,
formula_4 is the density of the overlying material at depth formula_1,
formula_5 is the acceleration due to gravity, in formula_6.
In deep-earth geophysics and geodynamics, the gravitational acceleration varies significantly with depth, so formula_5 cannot be assumed to be constant and must be kept inside the integral.
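A minimal Python sketch of the integral, using an assumed and purely illustrative density profile and a constant near-surface value of formula_5:

G = 9.81   # m/s^2, assumed constant near the surface

def density(z):
    # Illustrative density profile in kg/m^3 (increasing slightly with depth)
    return 2300.0 + 0.05 * z

def overburden_pressure(depth, p0=101325.0, steps=10000):
    # P(z) = P0 + g * integral of rho(z) dz, evaluated with the midpoint rule
    dz = depth / steps
    return p0 + G * sum(density((k + 0.5) * dz) * dz for k in range(steps))

print(overburden_pressure(1000.0) / 1e6, "MPa")   # roughly 23 MPa at 1 km for this profile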
Some sections of stratigraphic layers can be sealed or isolated, creating areas that are not in hydrostatic equilibrium. A location in such a layer is said to be in underpressure when the local pressure is less than the hydrostatic pressure, and in overpressure when the local pressure is greater than the hydrostatic pressure.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P(z) = P_0 + g \\int_{0}^{z} \\rho(z) \\, dz"
},
{
"math_id": 1,
"text": "z"
},
{
"math_id": 2,
"text": "P(z)"
},
{
"math_id": 3,
"text": "P_0"
},
{
"math_id": 4,
"text": "\\rho(z)"
},
{
"math_id": 5,
"text": "g"
},
{
"math_id": 6,
"text": "m/s^2 "
}
] |
https://en.wikipedia.org/wiki?curid=146153
|
1461534
|
Wirtinger's inequality for functions
|
Theorem in analysis
"For other inequalities named after Wirtinger, see Wirtinger's inequality."
In the mathematical field of analysis, the Wirtinger inequality is an important inequality for functions of a single variable, named after Wilhelm Wirtinger. It was used by Adolf Hurwitz in 1901 to give a new proof of the isoperimetric inequality for curves in the plane. A variety of closely related results are today known as Wirtinger's inequality, all of which can be viewed as certain forms of the Poincaré inequality.
Theorem.
There are several inequivalent versions of the Wirtinger inequality:
"y"("L"). Then
formula_0
and equality holds if and only if "y"("x")
"c" sin for some numbers c and α.
"y"("L")
0. Then
formula_1
and equality holds if and only if "y"("x")
"c" sin for some number c.
formula_2
and equality holds if and only if "y"("x")
"c" cos for some number c.
Despite their differences, these are closely related to one another, as can be seen from the account given below in terms of spectral geometry. They can also all be regarded as special cases of various forms of the Poincaré inequality, with the optimal "Poincaré constant" identified explicitly. The middle version is also a special case of the Friedrichs inequality, again with the optimal constant identified.
Proofs.
The three versions of the Wirtinger inequality can all be proved by various means. This is illustrated in the following by a different kind of proof for each of the three Wirtinger inequalities given above. In each case, by a linear change of variables in the integrals involved, there is no loss of generality in only proving the theorem for one particular choice of L.
Fourier series.
Consider the first Wirtinger inequality given above. Take L to be 2π. Since Dirichlet's conditions are met, we can write
formula_3
and the fact that the average value of y is zero means that "a"0 = 0. By Parseval's identity,
formula_4
and
formula_5
and since the summands are all nonnegative, the Wirtinger inequality is proved. Furthermore it is seen that equality holds if and only if "a""n" = "b""n" = 0 for all "n" ≥ 2, which is to say that "y"("x") = "a"1 sin "x" + "b"1 cos "x". This is equivalent to the stated condition by use of the trigonometric addition formulas.
Integration by parts.
Consider the second Wirtinger inequality given above. Take L to be π. Any differentiable function "y"("x") satisfies the identity
formula_6
Integration using the fundamental theorem of calculus and the boundary conditions "y"(0) = "y"(π) = 0 then shows
formula_7
This proves the Wirtinger inequality, since the second integral is clearly nonnegative. Furthermore, equality in the Wirtinger inequality is seen to be equivalent to "y"′("x") = "y"("x") cot "x", the general solution of which (as computed by separation of variables) is "y"("x") = "c" sin "x" for an arbitrary number c.
There is a subtlety in the above application of the fundamental theorem of calculus, since it is not the case that "y"("x")2 cot "x" extends continuously to "x" = 0 and "x" = π for every function "y"("x"). This is resolved as follows. It follows from the Hölder inequality and "y"(0) = 0 that
formula_8
which shows that as long as
formula_9
is finite, the limit of "y"("x")2 as x converges to zero is zero. Since cot "x" < 1/"x" for small positive values of x, it follows from the squeeze theorem that "y"("x")2 cot "x" converges to zero as x converges to zero. In exactly the same way, it can be proved that "y"("x")2 cot "x" converges to zero as x converges to π.
Functional analysis.
Consider the third Wirtinger inequality given above. Take L to be 1. Given a continuous function f on [0, 1] of average value zero, let "Tf" denote the function u on [0, 1] which is of average value zero, and with "u"′′ + "f" = 0 and "u"′(0) = "u"′(1) = 0. From basic analysis of ordinary differential equations with constant coefficients, the eigenvalues of T are ("k"π)−2 for nonzero integers k, the largest of which is then π−2. Because T is a bounded and self-adjoint operator, it follows that
formula_10
for all f of average value zero, where the equality is due to integration by parts. Finally, for any continuously differentiable function y on [0, 1] of average value zero, let "g""n" be a sequence of compactly supported continuously differentiable functions on (0, 1) which converge in L2 to "y"′. Then define
formula_11
Then each "y""n" has average value zero with "y""n"′(0)
"y""n"′(1)
0, which in turn implies that −"y""n"′′ has average value zero. So application of the above inequality to "f"
−"y""n"′′ is legitimate and shows that
formula_12
It is possible to replace "y""n" by y, and thereby prove the Wirtinger inequality, as soon as it is verified that "y""n" converges in L2 to "y". This is verified in a standard way, by writing
formula_13
and applying the Hölder or Jensen inequalities.
This proves the Wirtinger inequality. In the case that "y"("x") is a function for which equality in the Wirtinger inequality holds, then a standard argument in the calculus of variations says that y must be a weak solution of the Euler–Lagrange equation "y"′′("x") + π2"y"("x") = 0 with "y"′(0) = "y"′(1) = 0, and the regularity theory of such equations, followed by the usual analysis of ordinary differential equations with constant coefficients, shows that "y"("x") = "c" cos π"x" for some number c.
To make this argument fully formal and precise, it is necessary to be more careful about the function spaces in question.
Spectral geometry.
In the language of spectral geometry, the three versions of the Wirtinger inequality above can be rephrased as theorems about the first eigenvalue and corresponding eigenfunctions of the Laplace–Beltrami operator on various one-dimensional Riemannian manifolds:
These can also be extended to statements about higher-dimensional spaces. For example, the Riemannian circle may be viewed as the one-dimensional version of either a sphere, real projective space, or torus (of arbitrary dimension). The Wirtinger inequality, in the first version given here, can then be seen as the "n" = 1 case of any of the following:
The second and third versions of the Wirtinger inequality can be extended to statements about first Dirichlet and Neumann eigenvalues of the Laplace−Beltrami operator on metric balls in Euclidean space:
Application to the isoperimetric inequality.
In the first form given above, the Wirtinger inequality can be used to prove the isoperimetric inequality for curves in the plane, as found by Adolf Hurwitz in 1901. Let ("x", "y") be a differentiable embedding of the circle in the plane. Parametrizing the circle by [0, 2π] so that ("x", "y") has constant speed, the length "L" of the curve is given by
formula_14
and the area A enclosed by the curve is given (due to Stokes theorem) by
formula_15
Since the curve is parametrized with constant speed, the integrand of the integral defining L is constant and equal to "L"/2π, and so
formula_16
which can be rewritten as
formula_17
The first integral is clearly nonnegative. Without changing the area or length of the curve, ("x", "y") can be replaced by ("x", "y" + "z") for some number z, so as to make y have average value zero. Then the Wirtinger inequality can be applied to see that the second integral is also nonnegative, and therefore
formula_18
which is the isoperimetric inequality. Furthermore, equality in the isoperimetric inequality implies both equality in the Wirtinger inequality and also the equality "x"′("t") + "y"("t") = 0, which amounts to "y"("t") = "c"1 sin("t" – α) and then "x"("t") = "c"1 cos("t" – α) + "c"2 for arbitrary numbers "c"1 and "c"2. These equations mean that the image of ("x", "y") is a round circle in the plane.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\int_0^L y(x)^2\\,\\mathrm{d}x\\leq\\frac{L^2}{4\\pi^2}\\int_0^L y'(x)^2\\,\\mathrm{d}x,"
},
{
"math_id": 1,
"text": "\\int_0^L y(x)^2\\,\\mathrm{d}x\\leq\\frac{L^2}{\\pi^2}\\int_0^L y'(x)^2\\,\\mathrm{d}x,"
},
{
"math_id": 2,
"text": "\\int_0^L y(x)^2\\,\\mathrm{d}x\\leq \\frac{L^2}{\\pi^2}\\int_0^L y'(x)^2\\,\\mathrm{d}x."
},
{
"math_id": 3,
"text": "y(x)=\\frac{1}{2}a_0+\\sum_{n\\ge 1}\\left(a_n\\frac{\\sin nx}{\\sqrt{\\pi}}+b_n\\frac{\\cos nx}{\\sqrt{\\pi}}\\right),"
},
{
"math_id": 4,
"text": "\\int_0^{2\\pi}y(x)^2\\,\\mathrm{d}x=\\sum_{n=1}^\\infty(a_n^2+b_n^2)"
},
{
"math_id": 5,
"text": "\\int_0^{2\\pi}y'(x)^2 \\,\\mathrm{d}x = \\sum_{n=1}^\\infty n^2(a_n^2+b_n^2)"
},
{
"math_id": 6,
"text": "y(x)^2+\\big(y'(x)-y(x)\\cot x\\big)^2=y'(x)^2-\\frac{d}{dx}\\big(y(x)^2\\cot x\\big)."
},
{
"math_id": 7,
"text": "\\int_0^\\pi y(x)^2\\,\\mathrm{d}x+\\int_0^\\pi\\big(y'(x)-y(x)\\cot x\\big)^2\\,\\mathrm{d}x=\\int_0^\\pi y'(x)^2\\,\\mathrm{d}x."
},
{
"math_id": 8,
"text": "|y(x)|=\\left|\\int_0^x y'(x)\\,\\mathrm{d}x\\right|\\leq\\int_0^x |y'(x)|\\,\\mathrm{d}x\\leq\\sqrt{x}\\left(\\int_0^x y'(x)^2\\,\\mathrm{d}x\\right)^{1/2},"
},
{
"math_id": 9,
"text": "\\int_0^\\pi y'(x)^2\\,\\mathrm{d}x"
},
{
"math_id": 10,
"text": "\\int_0^1 Tf(x)^2\\,\\mathrm{d}x\\leq\\pi^{-2}\\int_0^1 f(x)Tf(x)\\,\\mathrm{d}x=\\frac{1}{\\pi^2}\\int_0^1 (Tf)'(x)^2\\,\\mathrm{d}x"
},
{
"math_id": 11,
"text": "y_n(x)=\\int_0^x g_n(z)\\,\\mathrm{d}z-\\int_0^1 \\int_0^w g_n(z)\\,\\mathrm{d}z\\,\\mathrm{d}w."
},
{
"math_id": 12,
"text": "\\int_0^1 y_n(x)^2\\,\\mathrm{d}x\\leq\\frac{1}{\\pi^2}\\int_0^1 y_n'(x)^2\\,\\mathrm{d}x."
},
{
"math_id": 13,
"text": "y(x)-y_n(x)=\\int_0^x \\big(y_n'(z)-g_n(z)\\big)\\,\\mathrm{d}z-\\int_0^1\\int_0^w (y_n'(z)-g_n(z)\\big)\\,\\mathrm{d}z\\,\\mathrm{d}w"
},
{
"math_id": 14,
"text": "\\int_0^{2\\pi}\\sqrt{x'(t)^2+y'(t)^2}\\,\\mathrm{d}t"
},
{
"math_id": 15,
"text": "-\\int_0^{2\\pi}y(t)x'(t)\\,\\mathrm{d}t."
},
{
"math_id": 16,
"text": "\\frac{L^2}{2\\pi}-2A=\\int_0^{2\\pi}\\big(x'(t)^2+y'(t)^2+2y(t)x'(t)\\big)\\,\\mathrm{d}t"
},
{
"math_id": 17,
"text": "\\int_0^{2\\pi}\\big(x'(t)+y(t)\\big)^2\\,\\mathrm{d}t+\\int_0^{2\\pi}\\big(y'(t)^2-y(t)^2\\big)\\,\\mathrm{d}t."
},
{
"math_id": 18,
"text": "\\frac{L^2}{4\\pi}\\geq A,"
}
] |
https://en.wikipedia.org/wiki?curid=1461534
|
14615532
|
Spurious-free dynamic range
|
Spurious-free dynamic range (SFDR) is the strength ratio of the fundamental signal to the strongest spurious signal in the output. It is commonly used as a figure of merit for analog-to-digital and digital-to-analog converters (ADCs and DACs, respectively) and for radio receivers.
SFDR is defined as the ratio of the RMS value of the carrier wave (maximum signal component) at the input of the ADC or output of DAC to the RMS value of the next largest noise or harmonic distortion component (which is referred to as “spurious” or a “spur”) at its output. SFDR is usually measured in dBc (i.e. with respect to the carrier signal amplitude) or in dBFS (i.e. with respect to the ADC's full-scale range). Depending on the test condition, SFDR is observed within a pre-defined frequency window or from DC up to Nyquist frequency of the converter (ADC or DAC).
For a radio receiver, the definition is slightly different. The reference is the minimum detectable signal level at the input of the receiver, which can be calculated from the noise figure and the input signal bandwidth of the receiver or system. The difference between this value and the input level that produces distortion products equal to the minimum detectable signal (referred to the input of the system) is the SFDR of the system. However, this procedure is mainly reliable for ADCs. In RF systems, where the output spurious signals are a nonlinear function of the input power, more precise measurements are required to take this nonlinearity into account.
formula_0
where formula_1 is the third-order intercept point and formula_2 is the noise floor of the component, both expressed in dB or dBm.
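As a numerical illustration of this relation (the intercept point and noise floor below are hypothetical values, not taken from any particular device):
```python
def sfdr_db(ip3_dbm, noise_floor_dbm):
    """Spurious-free dynamic range DR_f = (2/3) * (P3 - N0), in dB."""
    return (2.0 / 3.0) * (ip3_dbm - noise_floor_dbm)

# Example: a receiver with an input IP3 of +10 dBm and an integrated
# noise floor of -110 dBm has an SFDR of 80 dB.
print(sfdr_db(10.0, -110.0))
```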
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "DR_f(dB)=\\tfrac{2}{3}(P_3-N_0)"
},
{
"math_id": 1,
"text": "P_3"
},
{
"math_id": 2,
"text": "N_0"
}
] |
https://en.wikipedia.org/wiki?curid=14615532
|
14617391
|
CUBIC TCP
|
TCP congestion avoidance algorithm
CUBIC is a network congestion avoidance algorithm for TCP which can achieve high bandwidth connections over networks more quickly and reliably in the face of high latency than earlier algorithms. It helps optimize long fat networks.
In 2006, the first CUBIC implementation was released in Linux kernel 2.6.13. Since kernel version 2.6.19, CUBIC replaces BIC-TCP as the default TCP congestion control algorithm in the Linux kernel.
MacOS adopted TCP CUBIC with the OS X Yosemite release in 2014, while the previous release OS X Mavericks still used TCP New Reno.
Microsoft adopted it by default in the Windows 10 Fall Creators Update (version 1709, 2017) and in the Windows Server 2016 1709 update.
Characteristics.
CUBIC is a less aggressive and more systematic derivative of BIC TCP, in which the window size is a cubic function of time since the last congestion event, with the inflection point set to the window size prior to the event. Because it is a cubic function, there are two components to window growth. The first is a concave portion where the window size quickly ramps up to the size before the last congestion event. Next is the convex growth where CUBIC probes for more bandwidth, slowly at first then very rapidly. CUBIC spends a lot of time at a plateau between the concave and convex growth region which allows the network to stabilize before CUBIC begins looking for more bandwidth.
Another major difference between CUBIC and many earlier TCP algorithms is that it does not rely on the cadence of RTTs to increase the window size. CUBIC's window size is dependent only on the last congestion event. With earlier algorithms like TCP New Reno, flows with very short round-trip delay times (RTTs) will receive ACKs faster and therefore have their congestion windows grow faster than other flows with longer RTTs. CUBIC allows for more fairness between flows since the window growth is independent of RTT.
Algorithm.
CUBIC increases its window as a function of the real time elapsed since the last congestion event, rather than as a function of RTT as BIC does. The calculation of cwnd (the congestion window) is also simpler than BIC's.
Define the following variables: "w"max is the window size just before the last reduction, T is the time elapsed since the last window reduction, C is a constant that scales the aggressiveness of window growth, β is the multiplicative decrease factor applied at a congestion event, and K is the time the function takes to bring the window back up to "w"max when there are no further congestion events.
RFC 8312 indicates that C should be 0.4 and β should be 0.7.
Then cwnd can be modeled by:
formula_0
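The growth described by this formula can be sketched in a few lines of Python; the constants C = 0.4 and β = 0.7 follow RFC 8312, while the example value of "w"max and the sampling times are hypothetical.
```python
C = 0.4      # window-growth scaling constant (RFC 8312)
BETA = 0.7   # multiplicative decrease factor (RFC 8312)

def cubic_cwnd(t, w_max):
    """Congestion window t seconds after the last congestion event."""
    k = ((w_max * (1.0 - BETA)) / C) ** (1.0 / 3.0)
    return C * (t - k) ** 3 + w_max

w_max = 100.0                       # window size before the last reduction (segments)
for t in (0.0, 1.0, 2.0, 4.0, 6.0):
    print(t, round(cubic_cwnd(t, w_max), 1))
# At t = 0 the window equals BETA * w_max; it plateaus near w_max around t = K
# (about 4.2 s here) and then probes for more bandwidth in the convex region.
```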
See also.
Apart from window-based algorithms like CUBIC, there are rate-based algorithms (including BBR from Google) that work differently, using a sending rate instead of a congestion window.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\begin{array}{lcr}\ncwnd \\ = \\ C(T-K)^3 + w_{{max}} \\\\\n\\textrm{where} \\ K = \\sqrt[3]{\\frac{w_{{max}}(1-\\beta)}{C}}\n\\end{array}"
}
] |
https://en.wikipedia.org/wiki?curid=14617391
|
14617395
|
Beurling algebra
|
In mathematics, the term Beurling algebra is used for different algebras introduced by Arne Beurling (1949); usually it is an algebra of periodic functions with Fourier series
formula_0
Example
We may consider the algebra of those functions "f" where the majorants
formula_1
of the Fourier coefficients "a""n" are summable. In other words
formula_2
Example
We may consider a weight function "w" on formula_3 such that
formula_4
in which case formula_5
is a unitary commutative Banach algebra.
These algebras are closely related to the Wiener algebra.
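For instance, the polynomial weight w(n) = (1 + |n|)^s satisfies the submultiplicativity condition above; the short Python sketch below (with an arbitrary exponent and arbitrary coefficients, chosen only for illustration) checks the condition on a range of integers and evaluates the corresponding weighted norm of a trigonometric polynomial.
```python
# Hypothetical polynomial weight w(n) = (1 + |n|)**s; it is submultiplicative because
# 1 + |m + n| <= (1 + |m|) * (1 + |n|), and it satisfies w(0) = 1.
s = 1.5
w = lambda n: (1.0 + abs(n)) ** s

assert all(w(m + n) <= w(m) * w(n) + 1e-9
           for m in range(-50, 51) for n in range(-50, 51))

# Weighted Beurling norm of a trigonometric polynomial with coefficients a_n:
a = {-2: 0.5, 0: 1.0, 1: -0.25, 3: 0.1}
norm = sum(abs(c) * w(n) for n, c in a.items())
print(norm)
```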
|
[
{
"math_id": 0,
"text": "f(x)=\\sum a_ne^{inx}"
},
{
"math_id": 1,
"text": "c_k=\\sup_{|n|\\ge k} |a_n|"
},
{
"math_id": 2,
"text": "\\sum_{k\\ge 0} c_k<\\infty."
},
{
"math_id": 3,
"text": "\\mathbb{Z}"
},
{
"math_id": 4,
"text": "w(m+n)\\leq w(m)w(n),\\quad w(0)=1"
},
{
"math_id": 5,
"text": "A_w(\\mathbb{T}) =\\{f:f(t)=\\sum_na_ne^{int},\\,\\|f\\|_w=\\sum_n|a_n|w(n)<\\infty\\} \\,(\\sim\\ell^1_w(\\mathbb{Z}))"
}
] |
https://en.wikipedia.org/wiki?curid=14617395
|
14617515
|
Geometric flow
|
In the mathematical field of differential geometry, a geometric flow, also called a geometric evolution equation, is a type of partial differential equation for a geometric object such as a Riemannian metric or an embedding. It is not a term with a formal meaning, but is typically understood to refer to parabolic partial differential equations.
Certain geometric flows arise as the gradient flow associated with a functional on a manifold which has a geometric interpretation, usually associated with some extrinsic or intrinsic curvature. Such flows are fundamentally related to the calculus of variations, and include mean curvature flow and Yamabe flow.
Examples.
Extrinsic.
Extrinsic geometric flows are flows on embedded submanifolds, or more generally
immersed submanifolds. In general they change both the Riemannian metric and the immersion.
Intrinsic.
Intrinsic geometric flows are flows on the Riemannian metric, independent of any embedding or immersion.
Classes of flows.
Important classes of flows are curvature flows, variational flows (which extremize some functional), and flows arising as solutions to parabolic partial differential equations. A given flow frequently admits all of these interpretations, as follows.
Given an elliptic operator formula_0 the parabolic PDE formula_1 yields a flow, and stationary states for the flow are solutions to the elliptic partial differential equation formula_2
If the equation formula_3 is the Euler–Lagrange equation for some functional formula_4 then the flow has a variational interpretation as the gradient flow of formula_4 and stationary states of the flow correspond to critical points of the functional.
In the context of geometric flows, the functional is often the formula_5 norm of some curvature.
Thus, given a curvature formula_6 one can define the functional formula_7 which has Euler–Lagrange equation formula_8 for some elliptic operator formula_0 and associated parabolic PDE formula_9
The Ricci flow, Calabi flow, and Yamabe flow arise in this way (in some cases with normalizations).
Curvature flows may or may not "preserve volume" (the Calabi flow does, while the Ricci flow does not), and if not, the flow may simply shrink or grow the manifold, rather than regularizing the metric. Thus one often normalizes the flow, for instance, by fixing the volume.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "L,"
},
{
"math_id": 1,
"text": "u_t = Lu"
},
{
"math_id": 2,
"text": "Lu = 0."
},
{
"math_id": 3,
"text": "Lu = 0"
},
{
"math_id": 4,
"text": "F,"
},
{
"math_id": 5,
"text": "L^2"
},
{
"math_id": 6,
"text": "K,"
},
{
"math_id": 7,
"text": "F(K) = \\|K\\|_2 := \\left(\\int_M K^2\\right)^{1/2}."
},
{
"math_id": 8,
"text": "Lu=0"
},
{
"math_id": 9,
"text": "u_t = Lu."
}
] |
https://en.wikipedia.org/wiki?curid=14617515
|
14617622
|
Pugh's closing lemma
|
Mathematical result
In mathematics, Pugh's closing lemma is a result that links periodic orbit solutions of differential equations to chaotic behaviour. It can be formally stated as follows:
Let formula_0 be a formula_1 diffeomorphism of a compact smooth manifold formula_2. Given a nonwandering point formula_3 of formula_4, there exists a diffeomorphism formula_5 arbitrarily close to formula_4 in the formula_1 topology of formula_6 such that formula_3 is a periodic point of formula_5.
Interpretation.
Pugh's closing lemma means, for example, that any chaotic set in a bounded continuous dynamical system corresponds to a periodic orbit in a different but closely related dynamical system. As such, an open set of conditions on a bounded continuous dynamical system that rules out periodic behaviour also implies that the system cannot behave chaotically; this is the basis of some autonomous convergence theorems.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
"This article incorporates material from Pugh's closing lemma on PlanetMath, which is licensed under the ."
|
[
{
"math_id": 0,
"text": " f:M \\to M "
},
{
"math_id": 1,
"text": " C^1 "
},
{
"math_id": 2,
"text": " M "
},
{
"math_id": 3,
"text": " x "
},
{
"math_id": 4,
"text": " f "
},
{
"math_id": 5,
"text": " g "
},
{
"math_id": 6,
"text": " \\operatorname{Diff}^1(M) "
}
] |
https://en.wikipedia.org/wiki?curid=14617622
|
14619729
|
Hasse–Davenport relation
|
Two identities for Gauss sums
The Hasse–Davenport relations, introduced by Davenport and Hasse (1935), are two related identities for Gauss sums, one called the Hasse–Davenport lifting relation, and the other called the Hasse–Davenport product relation. The Hasse–Davenport lifting relation is an equality in number theory relating Gauss sums over different fields. Weil (1949) used it to calculate the zeta function of a Fermat hypersurface over a finite field, which motivated the Weil conjectures.
Gauss sums are analogues of the gamma function over finite fields, and the Hasse–Davenport product relation is the analogue of Gauss's multiplication formula
formula_0
In fact the Hasse–Davenport product relation follows from the analogous multiplication formula for "p"-adic gamma functions together with the Gross–Koblitz formula of .
Hasse–Davenport lifting relation.
Let "F" be a finite field with "q" elements, and "F"s be the field such that ["F"s:"F"] = "s", that is, "s" is the dimension of the vector space "F"s over "F".
Let formula_1 be an element of formula_2.
Let formula_3 be a multiplicative character from "F" to the complex numbers.
Let formula_4 be the norm from formula_2 to formula_5 defined by
formula_6
Let
formula_7 be the multiplicative character on formula_2 which is the composition of formula_3 with the norm from "F"s to "F", that is
formula_8
Let ψ be some nontrivial additive character of "F", and let
formula_9 be the additive character on formula_2 which is the composition of formula_10 with the trace from "F"s to "F", that is
formula_11
Let
formula_12
be the Gauss sum over "F", and let
formula_13 be the Gauss sum over formula_2.
Then the Hasse–Davenport lifting relation states that
formula_14
Hasse–Davenport product relation.
The Hasse–Davenport product relation states that
formula_15
where ρ is a multiplicative character of exact order "m" dividing "q" − 1, χ is any multiplicative character, and ψ is a non-trivial additive character.
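Both relations are straightforward to check numerically for small fields. The following Python sketch verifies the lifting relation for the (hypothetical) choice q = 7, s = 2, with a multiplicative character of order 3; the quadratic extension is realized as F49 = F7(√3), for which the norm and trace take the explicit forms a² − 3b² and 2a.
```python
import cmath

p, s = 7, 2                        # lift from F_7 to F_49
g, n = 3, 3                        # g: a primitive root mod 7; n: a quadratic non-residue mod 7
dlog = {pow(g, k, p): k for k in range(p - 1)}

def chi(x, j=2):                   # multiplicative character of order 3 on F_7, with chi(0) = 0
    return 0.0 if x % p == 0 else cmath.exp(2j * cmath.pi * j * dlog[x % p] / (p - 1))

def psi(x):                        # non-trivial additive character on F_7
    return cmath.exp(2j * cmath.pi * (x % p) / p)

tau = sum(chi(x) * psi(x) for x in range(p))

# Elements of F_49 are a + b*sqrt(n); the norm is a^2 - n*b^2 and the trace is 2a (mod p),
# so chi' = chi o N and psi' = psi o Tr exactly as in the definitions above.
tau_lift = sum(chi(a * a - n * b * b) * psi(2 * a)
               for a in range(p) for b in range(p) if (a, b) != (0, 0))

print(abs((-1) ** s * tau ** s + tau_lift) < 1e-9)   # lifting relation: (-1)^s tau^s = -tau'
```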
|
[
{
"math_id": 0,
"text": "\n\\Gamma(z) \\; \\Gamma\\left(z + \\frac{1}{k}\\right) \\; \\Gamma\\left(z + \\frac{2}{k}\\right) \\cdots\n\\Gamma\\left(z + \\frac{k-1}{k}\\right) =\n(2 \\pi)^{ \\frac{k-1}{2}} \\; k^{1/2 - kz} \\; \\Gamma(kz). \\,\\!\n"
},
{
"math_id": 1,
"text": "\\alpha"
},
{
"math_id": 2,
"text": "F_s"
},
{
"math_id": 3,
"text": "\\chi"
},
{
"math_id": 4,
"text": "N_{F_s/F}(\\alpha)"
},
{
"math_id": 5,
"text": "F"
},
{
"math_id": 6,
"text": "N_{F_s/F}(\\alpha):=\\alpha\\cdot\\alpha^q\\cdots\\alpha^{q^{s-1}}.\\,"
},
{
"math_id": 7,
"text": "\\chi'"
},
{
"math_id": 8,
"text": "\\chi'(\\alpha):=\\chi(N_{F_s/F}(\\alpha))"
},
{
"math_id": 9,
"text": "\\psi'"
},
{
"math_id": 10,
"text": "\\psi"
},
{
"math_id": 11,
"text": "\\psi'(\\alpha):=\\psi(Tr_{F_s/F}(\\alpha))"
},
{
"math_id": 12,
"text": "\\tau(\\chi,\\psi)=\\sum_{x\\in F}\\chi(x)\\psi(x)"
},
{
"math_id": 13,
"text": "\\tau(\\chi',\\psi')"
},
{
"math_id": 14,
"text": "(-1)^s\\cdot \\tau(\\chi,\\psi)^s=-\\tau(\\chi',\\psi')."
},
{
"math_id": 15,
"text": "\\prod_{a\\bmod m} \\tau(\\chi\\rho^a,\\psi) = -\\chi^{-m}(m)\\tau(\\chi^m,\\psi)\\prod_{a\\bmod m} \\tau(\\rho^a,\\psi)"
}
] |
https://en.wikipedia.org/wiki?curid=14619729
|
14621035
|
Similarities between Wiener and LMS
|
The Least mean squares filter solution converges to the Wiener filter solution, assuming that the unknown system is LTI and the noise is stationary. Both filters can be used to identify the impulse response of an unknown system, knowing only the original input signal and the output of the unknown system. By relaxing the error criterion to reduce current sample error instead of minimizing the total error over all of n, the LMS algorithm can be derived from the Wiener filter.
Derivation of the Wiener filter for system identification.
Given a known input signal formula_0, the output of an unknown LTI system formula_1 can be expressed as:
formula_2
where formula_3 are the unknown filter tap coefficients and formula_4 is noise.
The model system formula_5, using a Wiener filter solution with an order N, can be expressed as:
formula_6
where formula_7 are the filter tap coefficients to be determined.
The error between the model and the unknown system can be expressed as:
formula_8
The total squared error formula_9 can be expressed as:
formula_10
formula_11
formula_12
Use the Minimum mean-square error criterion over all of formula_13 by setting its gradient to zero:
formula_14
which is
formula_15
for all formula_16
formula_17
Substitute the definition of formula_5:
formula_18
Distribute the partial derivative:
formula_19
Using the definition of discrete cross-correlation:
formula_20
formula_21
Rearrange the terms:
formula_22
for all formula_16
This system of N equations in the N unknowns formula_7 can now be solved.
The resulting coefficients of the Wiener filter can be determined by: formula_23, where formula_24 is the cross-correlation vector between formula_25 and formula_26.
Derivation of the LMS algorithm.
By relaxing the infinite sum of the Wiener filter to just the error at time formula_13, the LMS algorithm can be derived.
The squared error can be expressed as:
formula_27
Using the Minimum mean-square error criterion, take the gradient:
formula_28
Apply the chain rule and substitute the definition of y[n]:
formula_29
formula_30
Using gradient descent and a step size formula_31:
formula_32
which becomes, for i = 0, 1, ..., N-1,
formula_33
This is the LMS update equation.
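The two derivations can be compared directly in code. The following NumPy sketch identifies a (hypothetical) unknown FIR system from a white input signal, once with the sample-by-sample LMS update above and once by solving the Wiener normal equations built from sample correlations; the system taps, signal length, noise level, and step size are arbitrary choices made for illustration.
```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                                      # filter order
h_true = np.array([0.5, -0.3, 0.2, 0.1])   # unknown system to identify
s = rng.standard_normal(20_000)            # known input signal
x = np.convolve(s, h_true)[: len(s)] + 0.01 * rng.standard_normal(len(s))

# LMS: correct the taps with each new sample error
w_lms, mu = np.zeros(N), 0.005
for t in range(N, len(s)):
    window = s[t - N + 1 : t + 1][::-1]    # s[t], s[t-1], ..., s[t-N+1]
    e = x[t] - w_lms @ window              # e[n] = x[n] - y[n]
    w_lms += 2 * mu * e * window           # w_i[n+1] = w_i[n] + 2*mu*e[n]*s[n-i]

# Wiener: solve R_ss w = R_xs using sample correlations
def xcorr(a, b, lag):                      # sum over n of a[n] * b[n - lag]
    return np.dot(a[lag:], b[: len(b) - lag]) if lag >= 0 else xcorr(b, a, -lag)

R_ss = np.array([[xcorr(s, s, i - k) for k in range(N)] for i in range(N)])
R_xs = np.array([xcorr(x, s, i) for i in range(N)])
w_wiener = np.linalg.solve(R_ss, R_xs)

print(np.round(w_lms, 3), np.round(w_wiener, 3))   # both approximate h_true
```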
|
[
{
"math_id": 0,
"text": "s[n]"
},
{
"math_id": 1,
"text": "x[n]"
},
{
"math_id": 2,
"text": "x[n] = \\sum_{k=0}^{N-1} h_ks[n-k] + w[n]"
},
{
"math_id": 3,
"text": "h_k"
},
{
"math_id": 4,
"text": "w[n]"
},
{
"math_id": 5,
"text": "\\hat{x}[n]"
},
{
"math_id": 6,
"text": "\\hat{x}[n] = \\sum_{k=0}^{N-1}\\hat{h}_ks[n-k]"
},
{
"math_id": 7,
"text": "\\hat{h}_k"
},
{
"math_id": 8,
"text": "e[n] = x[n] - \\hat{x}[n]"
},
{
"math_id": 9,
"text": "E"
},
{
"math_id": 10,
"text": "E = \\sum_{n=-\\infty}^{\\infty}e[n]^2"
},
{
"math_id": 11,
"text": "E = \\sum_{n=-\\infty}^{\\infty}(x[n] - \\hat{x}[n])^2"
},
{
"math_id": 12,
"text": "E = \\sum_{n=-\\infty}^{\\infty}(x[n]^2 - 2x[n]\\hat{x}[n] + \\hat{x}[n]^2)"
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": "\\nabla E = 0"
},
{
"math_id": 15,
"text": "\\frac{\\partial E}{\\partial \\hat{h}_i} = 0"
},
{
"math_id": 16,
"text": "i = 0, 1, 2, ..., N-1"
},
{
"math_id": 17,
"text": "\\frac{\\partial E}{\\partial \\hat{h}_i} = \\frac{\\partial}{\\partial \\hat{h}_i} \\sum_{n=-\\infty}^{\\infty}[x[n]^2 - 2x[n]\\hat{x}[n] + \\hat{x}[n]^2 ]"
},
{
"math_id": 18,
"text": "\\frac{\\partial E}{\\partial \\hat{h}_i} = \\frac{\\partial}{\\partial \\hat{h}_i} \\sum_{n=-\\infty}^{\\infty}[x[n]^2 - 2x[n]\\sum_{k=0}^{N-1}\\hat{h}_ks[n-k] + (\\sum_{k=0}^{N-1}\\hat{h}_ks[n-k])^2 ]"
},
{
"math_id": 19,
"text": "\\frac{\\partial E}{\\partial \\hat{h}_i} = \\sum_{n=-\\infty}^{\\infty}[-2x[n]s[n-i] + 2(\\sum_{k=0}^{N-1}\\hat{h}_ks[n-k])s[n-i] ]"
},
{
"math_id": 20,
"text": "R_{xy}(i) = \\sum_{n=-\\infty}^{\\infty} x[n]y[n-i]"
},
{
"math_id": 21,
"text": "\\frac{\\partial E}{\\partial \\hat{h}_i} = -2R_{xs}[i] + 2\\sum_{k=0}^{N-1}\\hat{h}_kR_{ss}[i - k] = 0 "
},
{
"math_id": 22,
"text": "R_{xs}[i] = \\sum_{k=0}^{N-1}\\hat{h}_kR_{ss}[i - k] "
},
{
"math_id": 23,
"text": "W = R_{xx}^{-1} P_{xs}"
},
{
"math_id": 24,
"text": " P_{xs} "
},
{
"math_id": 25,
"text": "x"
},
{
"math_id": 26,
"text": "s"
},
{
"math_id": 27,
"text": " E = (d[n] - y[n])^2 "
},
{
"math_id": 28,
"text": "\\frac{\\partial E}{\\partial w} = \\frac{\\partial}{\\partial w}(d[n] - y[n])^2"
},
{
"math_id": 29,
"text": "\\frac{\\partial E}{\\partial w} = 2(d[n] - y[n]) \\frac{\\partial}{\\partial w}(d[n] - \\sum_{k=0}^{N-1}\\hat{w}_kx[n-k])"
},
{
"math_id": 30,
"text": "\\frac{\\partial E}{\\partial w_i} = -2(e[n])(x[n-i])"
},
{
"math_id": 31,
"text": "\\mu"
},
{
"math_id": 32,
"text": "w[n+1] = w[n] - \\mu\\frac{\\partial E}{\\partial w}"
},
{
"math_id": 33,
"text": "w_i[n+1] = w_i[n] + 2\\mu(e[n])(x[n-i])"
}
] |
https://en.wikipedia.org/wiki?curid=14621035
|
14621793
|
Lindemann mechanism
|
Mechanism for unimolecular reactions
In chemical kinetics, the Lindemann mechanism (also called the Lindemann–Christiansen mechanism or the Lindemann–Hinshelwood mechanism) is a schematic reaction mechanism for unimolecular reactions. Frederick Lindemann and J.A. Christiansen proposed the concept almost simultaneously in 1921, and Cyril Hinshelwood developed it to take into account the energy distributed among vibrational degrees of freedom for some reaction steps.
It breaks down an apparently unimolecular reaction into two elementary steps, with a rate constant for each elementary step. The rate law and rate equation for the entire reaction can be derived from the rate equations and rate constants for the two steps.
The Lindemann mechanism is used to model gas phase decomposition or isomerization reactions. Although the net formula for decomposition or isomerization appears to be unimolecular and suggests first-order kinetics in the reactant, the Lindemann mechanism shows that the unimolecular reaction step is preceded by a bimolecular activation step so that the kinetics may actually be second-order in certain cases.
Activated reaction intermediates.
The overall equation for a unimolecular reaction may be written A → P, where A is the initial reactant molecule and P is one or more products (one for isomerization, more for decomposition).
A Lindemann mechanism typically includes an activated reaction intermediate, labeled A*. The activated intermediate is produced from the reactant only after a sufficient activation energy is acquired by collision with a second molecule M, which may or may not be similar to A. It then either deactivates from A* back to A by another collision, or reacts in a unimolecular step to produce the product(s) P.
The two-step mechanism is then
formula_0
Rate equation in steady-state approximation.
The rate equation for the rate of formation of product P may be obtained by using the steady-state approximation, in which the concentration of intermediate A* is assumed constant because its rates of production and consumption are (almost) equal. This assumption simplifies the calculation of the rate equation.
For the schematic mechanism of two elementary steps above, rate constants are defined as formula_1 for the forward reaction rate of the first step, formula_2 for the reverse reaction rate of the first step, and formula_3 for the forward reaction rate of the second step. For each elementary step, the order of reaction is equal to the molecularity.
The rate of production of the intermediate A* in the first elementary step is simply:
formula_4 (forward first step)
A* is consumed both in the reverse first step and in the forward second step. The respective rates of consumption of A* are:
formula_5 (reverse first step)
formula_6 (forward second step)
According to the steady-state approximation, the rate of production of A* equals the rate of consumption. Therefore:
formula_7
Solving for formula_8, it is found that
formula_9
The overall reaction rate is
formula_10
Now, by substituting the calculated value for [A*], the overall reaction rate can be expressed in terms of the original reactants A and M:
formula_11
Reaction order and rate-determining step.
The steady-state rate equation is of mixed order and predicts that a unimolecular reaction can be of either first or second order, depending on which of the two terms in the denominator is larger. At sufficiently low pressures, formula_12 so that
formula_13, which is second order. That is, the rate-determining step is the first, bimolecular activation step.
At higher pressures, however, formula_14 so that formula_15 which is first order, and the rate-determining step is the second step, i.e. the unimolecular reaction of the activated molecule.
The theory can be tested by defining an effective rate constant (or coefficient) formula_16, which would be constant if the reaction were first order at all pressures: formula_17. The Lindemann mechanism predicts that k falls off as the pressure is lowered, and that its reciprocal formula_18 is a linear function of formula_19 or, equivalently, of formula_20. Experimentally, formula_21 does decrease at low pressure for many reactions, but the graph of formula_22 as a function of formula_23 is quite curved. To account accurately for the pressure dependence of rate constants for unimolecular reactions, more elaborate theories such as the RRKM theory are required.
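The predicted fall-off is easy to visualize with a short Python sketch; the rate constants below are arbitrary illustrative values, not data for any real reaction.
```python
k1, k_r1, k2 = 1.0e-3, 1.0e-2, 5.0   # hypothetical rate constants (arbitrary units)

def k_uni(M):
    """Effective first-order coefficient k_uni = k1*k2*[M] / (k_-1*[M] + k2)."""
    return k1 * k2 * M / (k_r1 * M + k2)

for M in (1e1, 1e3, 1e5, 1e7):
    print(f"[M] = {M:8.1e}   k_uni = {k_uni(M):.3e}")
# k_uni is proportional to [M] at low pressure (second-order kinetics overall)
# and levels off at k1*k2/k_-1 = 0.5 at high pressure (first-order kinetics).
```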
Decomposition of dinitrogen pentoxide.
In the Lindemann mechanism for a true unimolecular reaction, the activation step is followed by a single step corresponding to the formation of products. Whether this is actually true for any given reaction must be established from the evidence.
Much early experimental investigation of the Lindemann mechanism involved study of the gas-phase decomposition of dinitrogen pentoxide 2 N2O5 → 2 N2O4 + O2. This reaction was studied by Farrington Daniels and coworkers, and initially assumed to be a true unimolecular reaction. However it is now known to be a multistep reaction whose mechanism was established by Ogg as:
N2O5 ⇌ NO2 + NO3
NO2 + NO3 → NO2 + O2 + NO
NO + N2O5 → 3 NO2
An analysis using the steady-state approximation shows that this mechanism can also explain the observed first-order kinetics and the fall-off of the rate constant at very low pressures.
Mechanism of the isomerization of cyclopropane.
The Lindemann-Hinshelwood mechanism explains unimolecular reactions that take place in the gas phase. Usually, this mechanism is used in gas phase decomposition and also in isomerization reactions. An example of isomerization by a Lindemann mechanism is the isomerization of cyclopropane.
cyclo−C3H6 → CH3−CH=CH2
Although it seems like a simple reaction, it is actually a multistep reaction:
cyclo−C3H6 → CH2−CH2−CH2 (k1)
CH2−CH2−CH2 → cyclo−C3H6 (k−1)
CH2−CH2−CH2 → CH3−CH=CH2 (k2)
This isomerization can be explained by the Lindemann mechanism: once cyclopropane, the reactant, is excited by collision, it becomes an energized cyclopropane molecule, which can then either be deactivated back to the reactant or go on to form propene, the product.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\begin{align}\n\\ce{{A} + M}\\ &\\ce{<=> {A^\\ast} + M} \\\\\n\\ce{A^\\ast}\\ &\\ce{-> P}\n\\end{align}"
},
{
"math_id": 1,
"text": "k_1"
},
{
"math_id": 2,
"text": "k_{-1}"
},
{
"math_id": 3,
"text": "k_2"
},
{
"math_id": 4,
"text": "\\frac{\\mathrm{d}[\\ce A^*]}{\\mathrm{d}t} = k_1 [\\ce A] [\\ce M]"
},
{
"math_id": 5,
"text": " -\\frac{\\mathrm{d}[\\ce A^*]}{\\mathrm{d}t} = k_{-1} [\\ce A^*] [\\ce M]"
},
{
"math_id": 6,
"text": "- \\frac{\\mathrm{d}[\\ce A^*]}{\\mathrm{d}t} = k_2 [\\ce A^*]"
},
{
"math_id": 7,
"text": "k_1 [\\ce A] [\\ce M] = k_{-1} [\\ce A^*] [\\ce M] + k_2 [\\ce A^*]"
},
{
"math_id": 8,
"text": "[\\ce A^*]"
},
{
"math_id": 9,
"text": "[\\ce A^*] = \\frac{k_1 [\\ce A] [\\ce M]}{k_{-1} [\\ce M] + k_2}"
},
{
"math_id": 10,
"text": "\\frac{\\mathrm{d}[\\ce P]}{\\mathrm{d}t} = k_2 [\\ce A^*]"
},
{
"math_id": 11,
"text": "\\frac{\\mathrm{d}[\\ce P]}{\\mathrm{d}t} = \\frac{k_1 k_2 [\\ce A] [\\ce M]}{k_{-1} [\\ce M] + k_2}"
},
{
"math_id": 12,
"text": "k_{-1}[\\ce M] \\ll k_2"
},
{
"math_id": 13,
"text": "\\mathrm{d}[\\ce P]/\\mathrm{d}t = k_1 [\\ce A][\\ce M]"
},
{
"math_id": 14,
"text": "k_{-1}[\\ce M] \\gg k_2"
},
{
"math_id": 15,
"text": "\\frac{\\mathrm{d}[\\ce P]}{\\mathrm{d}t} = \\frac{k_1k_2}{k_{-1}}[\\ce A]"
},
{
"math_id": 16,
"text": "k_{\\rm uni}"
},
{
"math_id": 17,
"text": "\\frac{\\mathrm{d}[\\ce P]}{\\mathrm{d}t} = k_{\\rm uni}[\\ce A], \\quad k_{\\rm uni} = \\frac{1}{[A]} \\frac{\\mathrm{d[P]}}{\\mathrm{d}t}"
},
{
"math_id": 18,
"text": "\\frac{1}{k}=\\frac{k_{-1}}{k_1k_2}+\\frac{1}{k_1[\\ce M]}"
},
{
"math_id": 19,
"text": "\\frac{1}{[\\ce M]}"
},
{
"math_id": 20,
"text": "\\frac{1}{p}"
},
{
"math_id": 21,
"text": "k"
},
{
"math_id": 22,
"text": "1/k"
},
{
"math_id": 23,
"text": "1/p"
}
] |
https://en.wikipedia.org/wiki?curid=14621793
|
14622989
|
LabelMe
|
Image dataset
LabelMe is a project created by the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) that provides a dataset of digital images with annotations. The dataset is dynamic, free to use, and open to public contribution. The most applicable use of LabelMe is in computer vision research. As of October 31, 2010, LabelMe has 187,240 images, 62,197 annotated images, and 658,992 labeled objects.
Motivation.
The motivation behind creating LabelMe comes from the history of publicly available data for computer vision researchers. Most available data was tailored to a specific research group's problems and caused new researchers to have to collect additional data to solve their own problems. LabelMe was created to solve several common shortcomings of available data. The following is a list of qualities that distinguish LabelMe from previous work.
Annotation Tool.
The LabelMe annotation tool provides a means for users to contribute to the project. The tool can be accessed anonymously or by logging into a free account. To access the tool, users must have a compatible web browser with JavaScript support. When the tool is loaded, it chooses a random image from the LabelMe dataset and displays it on the screen. If the image already has object labels associated with it, they will be overlaid on top of the image in polygon format. Each distinct object label is displayed in a different color.
If the image is not completely labeled, the user can use the mouse to draw a polygon containing an object in the image. For example, in the adjacent image, if a person was standing in front of the building, the user could click on a point on the border of the person, and continue clicking along the outside edge until returning to the starting point. Once the polygon is closed, a bubble pops up on the screen which allows the user to enter a label for the object. The user can choose whatever label the user thinks best describes the object. If the user disagrees with the previous labeling of the image, the user can click on the outline polygon of an object and either delete the polygon completely or edit the text label to give it a new name.
As soon as changes are made to the image by the user, they are saved and openly available for anyone to download from the LabelMe dataset. In this way, the data is always changing due to contributions by the community of users who use the tool. Once the user is finished with an image, the "Show me another image" link can be clicked and another random image will be selected to display to the user.
Problems with the data.
The LabelMe dataset has some problems. Some are inherent in the data, such as the objects in the images not being uniformly distributed with respect to size and image location. This is due to the images being primarily taken by humans who tend to focus the camera on interesting objects in a scene. However, cropping and rescaling the images randomly can simulate a uniform distribution. Other problems are caused by the amount of freedom given to the users of the annotation tool. Some problems that arise are:
The creators of LabelMe decided to leave these decisions up to the annotator. The reason for this is that they believe people will tend to annotate the images according to what they think is the natural labeling of the images. This also provides some variability in the data, which can help researchers tune their algorithms to account for this variability.
Extending the data.
Using WordNet.
Since the text labels for objects provided in LabelMe come from user input, there is a lot of variation in the labels used (as described above). Because of this, analysis of objects can be difficult. For example, a picture of a dog might be labeled as "dog", "canine", "hound", "pooch", or "animal". Ideally, when using the data, the object class "dog" at the abstract level should incorporate all of these text labels.
WordNet is a database of words organized in a structured way. It allows assigning a word to a category, or in WordNet language: a sense. Sense assignment is not easy to do automatically. When the authors of LabelMe tried automatic sense assignment, they found that it was prone to a high rate of error, so instead they assigned words to senses manually. At first, this may seem like a daunting task, since new labels are added to the LabelMe project continuously. To the right is a graph comparing the growth of polygons to the growth of words (descriptions). As the graph shows, the growth of words is small compared with the continuous growth of polygons, and is therefore easy enough to keep up to date manually by the LabelMe team.
Once WordNet assignment is done, searches in the LabelMe database are much more effective. For example, a search for "animal" might bring up pictures of "dogs", "cats" and "snakes". However, since the assignment was done manually, a picture of a computer mouse labeled as "mouse" would not show up in a search for "animals". Also, if objects are labeled with more complex terms like "dog walking", WordNet still allows the search of "dog" to return these objects as results. WordNet makes the LabelMe database much more useful.
Object-part hierarchy.
Having a large dataset of objects where overlap is allowed provides enough data to try to categorize objects as being a part of another object. For example, most of the labels assigned "wheel" are probably part of objects assigned to other labels like "car" or "bicycle". These are called part labels. To determine if label P is a part label for label O:
Let formula_0 denote the set of images containing an object labeled O, and formula_1 the set of images containing an object labeled P.
For the images containing both labels, compute the overlap score formula_2 as formula_3, the fraction of the area of the P polygon that lies inside the O polygon.
Let formula_11 be the set of images in which formula_5, where formula_6 is a threshold (here formula_7), so that formula_4.
Declare P a part label for O when the ratio formula_8 is sufficiently large, where formula_9 and formula_10 are the numbers of images in formula_11 and formula_1 respectively, and formula_12 is a small constant (here formula_13) that penalizes labels with few examples.
This algorithm allows the automatic classification of parts of an object when the part objects are frequently contained within the outer object.
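A minimal Python sketch of this heuristic is shown below; it assumes the third-party shapely library for polygon intersection, and the polygon coordinates and helper names are made up for illustration rather than taken from the LabelMe toolbox.
```python
from shapely.geometry import Polygon

BETA, ALPHA = 0.5, 5                 # threshold and smoothing constant from the heuristic

def overlap_score(object_poly, part_poly):
    """A(O ∩ P) / A(P): the fraction of the candidate part's area inside the object."""
    O, P = Polygon(object_poly), Polygon(part_poly)
    return O.intersection(P).area / P.area

def part_label_ratio(scores, n_images_with_part):
    """N_{O,P} / (N_P + alpha), where scores come from images containing both labels."""
    n_op = sum(score > BETA for score in scores)
    return n_op / (n_images_with_part + ALPHA)

# A wheel polygon sitting mostly inside a car polygon gives a high overlap score:
car = [(0, 0), (10, 0), (10, 4), (0, 4)]
wheel = [(1, -0.5), (3, -0.5), (3, 1.5), (1, 1.5)]
print(overlap_score(car, wheel))     # 0.75 of the wheel lies inside the car outline
```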
Object depth ordering.
Another instance of object overlap is when one object is actually on top of the other. For example, an image might contain a person standing in front of a building. The person is not a part label as above since the person is not part of the building. Instead, they are two separate objects that happen to overlap. To automatically determine which object is the foreground and which is the background, the authors of LabelMe propose several options:
Matlab Toolbox.
The LabelMe project provides a set of tools for using the LabelMe dataset from Matlab. Since research is often done in Matlab, this allows the integration of the dataset with existing tools in computer vision. The entire dataset can be downloaded and used offline, or the toolbox allows dynamic downloading of content on demand.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{I}_\\mathrm{O}\\,"
},
{
"math_id": 1,
"text": "\\mathrm{I}_\\mathrm{P}\\,"
},
{
"math_id": 2,
"text": "\\mathrm{S}_{\\mathrm{O},\\mathrm{P}}\\,"
},
{
"math_id": 3,
"text": "\\frac{\\mathrm{A}(\\mathrm{O}\\cap\\mathrm{P})}{\\mathrm{A}(\\mathrm{P})}\\,"
},
{
"math_id": 4,
"text": "\\mathrm{I}_{\\mathrm{O},\\mathrm{P}} \\subseteq \\mathrm{I}_\\mathrm{P}\\,"
},
{
"math_id": 5,
"text": "\\mathrm{S}_{\\mathrm{O},\\mathrm{P}} > \\beta\\,"
},
{
"math_id": 6,
"text": "\\beta\\,"
},
{
"math_id": 7,
"text": "\\beta=0.5\\,"
},
{
"math_id": 8,
"text": "\\frac{\\mathrm{N}_{\\mathrm{O},\\mathrm{P}}}{\\mathrm{N}_\\mathrm{P}+\\alpha}\\,"
},
{
"math_id": 9,
"text": "\\mathrm{N}_{\\mathrm{O},\\mathrm{P}}\\,"
},
{
"math_id": 10,
"text": "\\mathrm{N}_\\mathrm{P}\\,"
},
{
"math_id": 11,
"text": "\\mathrm{I}_{\\mathrm{O},\\mathrm{P}}\\,"
},
{
"math_id": 12,
"text": "\\alpha\\,"
},
{
"math_id": 13,
"text": "\\alpha=5\\,"
}
] |
https://en.wikipedia.org/wiki?curid=14622989
|
146230
|
Iceberg
|
Large piece of freshwater ice broken off a glacier or ice shelf and floating in open water
An iceberg is a piece of freshwater ice more than long that has broken off a glacier or an ice shelf and is floating freely in open water. Smaller chunks of floating glacially derived ice are called "growlers" or "bergy bits". Much of an iceberg is below the water's surface, which led to the expression "tip of the iceberg" to illustrate a small part of a larger unseen issue. Icebergs are considered a serious maritime hazard.
Icebergs vary considerably in size and shape. Icebergs that calve from glaciers in Greenland are often irregularly shaped while Antarctic ice shelves often produce large tabular (table top) icebergs. The largest iceberg in recent history, named B-15, was measured at nearly in 2000. The largest iceberg on record was an Antarctic tabular iceberg measuring sighted west of Scott Island, in the South Pacific Ocean, by the USS "Glacier" on November 12, 1956. This iceberg was larger than Belgium.
Etymology.
The word "iceberg" is a partial loan translation from the Dutch word "ijsberg", literally meaning "ice mountain", cognate to Danish "isbjerg", German "Eisberg", Low Saxon "Iesbarg" and Swedish "isberg".
Overview.
Typically about one-tenth of the volume of an iceberg is above water, which follows from Archimedes's Principle of buoyancy; the density of pure ice is about 920 kg/m3 (57 lb/cu ft), and that of seawater about . The contour of the underwater portion can be difficult to judge by looking at the portion above the surface.
The largest icebergs recorded have been calved, or broken off, from the Ross Ice Shelf of Antarctica. Icebergs may reach a height of more than above the sea surface and have mass ranging from about 100,000 tonnes up to more than 10 million tonnes. Icebergs or pieces of floating ice smaller than 5 meters above the sea surface are classified as "bergy bits"; smaller than 1 meter—"growlers". The largest known iceberg in the North Atlantic was above sea level, reported by the USCG icebreaker "Eastwind" in 1958, making it the height of a 55-story building. These icebergs originate from the glaciers of western Greenland and may have interior temperatures of .
Drift.
A given iceberg's trajectory through the ocean can be modelled by integrating the equation
formula_0
where "m" is the iceberg mass, "v" the drift velocity, and the variables "f", "k", and "F" correspond to the Coriolis force, the vertical unit vector, and a given force. The subscripts a, w, r, s, and p correspond to the air drag, water drag, wave radiation force, sea ice drag, and the horizontal pressure gradient force.
Icebergs deteriorate through melting and fracturing, which changes the mass "m", as well as the surface area, volume, and stability of the iceberg. Iceberg deterioration and drift, therefore, are interconnected ie. iceberg thermodynamics, and fracturing must be considered when modelling iceberg drift.
Winds and currents may move icebergs close to coastlines, where they can become frozen into pack ice (one form of sea ice), or drift into shallow waters, where they can come into contact with the seabed, a phenomenon called seabed gouging.
Mass loss.
Icebergs lose mass due to melting and calving. Melting can be due to solar radiation or to heat and salt transport from the ocean. Iceberg calving is generally enhanced by waves impacting the iceberg.
Melting tends to be driven by the ocean rather than by solar radiation. Ocean-driven melting is often modelled as
formula_1
where formula_2 is the melt rate in m/day, formula_3 is the relative velocity between the iceberg and the ocean, formula_4 is the temperature difference between the ocean and the iceberg, and formula_5 is the length of the iceberg. formula_6 is a constant based on properties of the iceberg and the ocean and is approximately formula_7 in the polar ocean.
The influence of the shape of an iceberg and of the Coriolis force on iceberg melting rates has been demonstrated in laboratory experiments.
Wave erosion is more poorly constrained but can be estimated by
formula_8
where formula_9 is the wave erosion rate in m/day, formula_10 is an empirical constant, formula_11 describes the sea state, formula_12 is the sea surface temperature, and formula_13 is the sea ice concentration.
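A minimal Python sketch of these two deterioration terms is given below; the input values (relative speed, temperature difference, iceberg length, sea state, sea surface temperature, and ice concentration) are hypothetical.
```python
import math

def basal_melt_rate(delta_u, delta_T, length, K=0.75):
    """Ocean-driven melt M_b = K * du^0.8 * (T0 - T) / L^0.2, in m/day."""
    return K * delta_u ** 0.8 * delta_T / length ** 0.2

def wave_erosion_rate(sea_state, sst, ice_conc, c=1.0 / 12.0):
    """Wave erosion M_e = c * Ss * (Ts + 2) * (1 + cos(Ic^3 * pi)), in m/day."""
    return c * sea_state * (sst + 2.0) * (1.0 + math.cos(ice_conc ** 3 * math.pi))

print(basal_melt_rate(delta_u=0.2, delta_T=2.0, length=500.0))   # about 0.12 m/day
print(wave_erosion_rate(sea_state=3.0, sst=1.0, ice_conc=0.2))   # about 1.5 m/day
```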
Bubbles.
Air trapped in snow forms bubbles as the snow is compressed to form firn and then glacial ice. Icebergs can contain up to 10% air bubbles by volume. These bubbles are released during melting, producing a fizzing sound that some may call "Bergie Seltzer". This sound results when the water-ice interface reaches compressed air bubbles trapped in the ice. As each bubble bursts it makes a "popping" sound and the acoustic properties of these bubbles can be used to study iceberg melt.
Stability.
An iceberg may flip, or capsize, as it melts and breaks apart, changing the center of gravity. Capsizing can occur shortly after calving when the iceberg is young and establishing balance. Icebergs are unpredictable and can capsize anytime and without warning. Large icebergs that break off from a glacier front and flip onto the glacier face can push the entire glacier backwards momentarily, producing 'glacial earthquakes' that generate as much energy as an atomic bomb.
Color.
Icebergs are generally white because they are covered in snow, but can be green, blue, yellow, black, striped, or even rainbow-colored. Seawater, algae and lack of air bubbles in the ice can create diverse colors. Sediment can create the dirty black coloration present in some icebergs.
Shape.
In addition to size classification (Table 1), icebergs can be classified on the basis of their shapes. The two basic types of iceberg forms are "tabular" and "non-tabular". Tabular icebergs have steep sides and a flat top, much like a plateau, with a length-to-height ratio of more than 5:1.
This type of iceberg, also known as an "ice island", can be quite large, as in the case of Pobeda Ice Island. Antarctic icebergs formed by breaking off from an ice shelf, such as the Ross Ice Shelf or Filchner–Ronne Ice Shelf, are typically tabular. The largest icebergs in the world are formed this way.
Non-tabular icebergs have different shapes and include:
Monitoring and control.
History.
Prior to 1914 there was no system in place to track icebergs to guard ships against collisions despite fatal sinkings of ships by icebergs. In 1907, "SS Kronprinz Wilhelm", a German liner, rammed an iceberg and suffered a crushed bow, but she was still able to complete her voyage. The advent of watertight compartmentalization in ship construction led designers to declare their ships "unsinkable".
In 1912 the "Titanic" struck an iceberg and sank, killing more than 1,500 of its estimated 2,224 passengers and crew and seriously damaging the 'unsinkable' claim. For the remainder of the ice season of that year, the United States Navy patrolled the waters and monitored ice movements. In November 1913, the International Conference on the Safety of Life at Sea met in London to devise a more permanent system of observing icebergs. Within three months the participating maritime nations had formed the International Ice Patrol (IIP). The goal of the IIP was to collect data on meteorology and oceanography to measure currents, ice-flow, ocean temperature, and salinity levels. They monitored iceberg dangers near the Grand Banks of Newfoundland and provided the "limits of all known ice" in that vicinity to the maritime community. The IIP published their first records in 1921, which allowed for a year-by-year comparison of iceberg movement.
Technological development.
Aerial surveillance of the seas in the early 1930s allowed for the development of charter systems that could accurately detail the ocean currents and iceberg locations. In 1945, experiments tested the effectiveness of radar in detecting icebergs. A decade later, oceanographic monitoring outposts were established for the purpose of collecting data; these outposts continue to serve in environmental study. A computer was first installed on a ship for the purpose of oceanographic monitoring in 1964, which allowed for a faster evaluation of data. By the 1970s, ice-breaking ships were equipped with automatic transmissions of satellite photographs of ice in Antarctica. Systems for optical satellites had been developed but were still limited by weather conditions. In the 1980s, drifting buoys were used in Antarctic waters for oceanographic and climate research. They are equipped with sensors that measure ocean temperature and currents.
Side looking airborne radar (SLAR) made it possible to acquire images regardless of weather conditions. On November 4, 1995, Canada launched RADARSAT-1. Developed by the Canadian Space Agency, it provides images of Earth for scientific and commercial purposes. This system was the first to use synthetic aperture radar (SAR), which sends microwave energy to the ocean surface and records the reflections to track icebergs. The European Space Agency launched ENVISAT (an observation satellite that orbits the Earth's poles) on March 1, 2002. ENVISAT employs advanced synthetic aperture radar (ASAR) technology, which can detect changes in surface height accurately. The Canadian Space Agency launched RADARSAT-2 in December 2007, which uses SAR and multi-polarization modes and follows the same orbit path as RADARSAT-1.
Modern monitoring.
Iceberg concentrations and size distributions are monitored worldwide by the U.S. National Ice Center (NIC), established in 1995, which produces analyses and forecasts of Arctic, Antarctic, Great Lakes and Chesapeake Bay ice conditions. More than 95% of the data used in its sea ice analyses are derived from the remote sensors on polar-orbiting satellites that survey these remote regions of the Earth.
The NIC is the only organization that names and tracks all Antarctic Icebergs. It assigns each iceberg larger than along at least one axis a name composed of a letter indicating its point of origin and a running number. The letters used are as follows:
A – longitude 0° to 90° W (Bellingshausen Sea, Weddell Sea)
B – longitude 90° W to 180° (Amundsen Sea, Eastern Ross Sea)
C – longitude 90° E to 180° (Western Ross Sea, Wilkes Land)
D – longitude 0° to 90° E (Amery Ice Shelf, Eastern Weddell Sea)
The Danish Meteorological Institute monitors iceberg populations around Greenland using data collected by the synthetic aperture radar (SAR) on the Sentinel-1 satellites.
Iceberg management.
In Labrador and Newfoundland, iceberg management plans have been developed to protect offshore installations from impacts with icebergs.
Commercial use.
The idea of towing large icebergs to other regions as a source of water has been raised since at least the 1950s, without having been put into practice. In 2017, a business from the UAE announced plans to tow an iceberg from Antarctica to the Middle East; in 2019 salvage engineer Nick Sloane announced a plan to move one to South Africa at an estimated cost of $200 million. In 2019, a German company, Polewater, announced plans to tow Antarctic icebergs to places like South Africa.
Companies have used iceberg water in products such as bottled water, fizzy ice cubes and alcoholic drinks. For example, Iceberg Beer by Quidi Vidi Brewing Company is made from icebergs found around St. John's, Newfoundland. Although annual iceberg supply in Newfoundland and Labrador exceeds the total freshwater consumption of the United States, in 2016 the province introduced a tax on iceberg harvesting and imposed a limit on how much fresh water can be exported yearly.
Oceanography and ecology.
The freshwater injected into the ocean by melting icebergs can change the density of the seawater in the vicinity of the iceberg. Fresh melt water released at depth is lighter, and therefore more buoyant, than the surrounding seawater causing it to rise towards the surface. Icebergs can also act as floating breakwaters, impacting ocean waves.
Icebergs contain variable concentrations of nutrients and minerals that are released into the ocean during melting. Iceberg-derived nutrients, particularly the iron contained in sediments, can fuel blooms of phytoplankton. Samples collected from icebergs in Antarctica, Patagonia, Greenland, Svalbard, and Iceland, however, show that iron concentrations vary significantly, complicating efforts to generalize the impacts of icebergs on marine ecosystems.
Recent large icebergs.
Iceberg B15 calved from the Ross Ice Shelf in 2000 and initially had an area of . It broke apart in November 2002. The largest remaining piece of it, Iceberg B-15A, with an area of , was still the largest iceberg on Earth until it ran aground and split into several pieces October 27, 2005, an event that was observed by seismographs both on the iceberg and across Antarctica. It has been hypothesized that this breakup may also have been abetted by ocean swell generated by an Alaskan storm 6 days earlier and away.
In culture.
One of the most infamous icebergs in history is the iceberg that sank the "Titanic". The catastrophe led to the establishment of the International Ice Patrol shortly after. Icebergs in both the northern and southern hemispheres have often been compared in size to multiples of the area of Manhattan Island.
Artists have used icebergs as the subject matter for their paintings. Frederic Edwin Church's "The Icebergs" (1861) was painted from sketches Church completed on a boat trip off Newfoundland and Labrador. Caspar David Friedrich's "The Sea of Ice" (1823–1824) is a polar landscape with an iceberg and shipwreck, depicting the dangers of such conditions. William Bradford created detailed paintings of sailing ships set in arctic coasts and was fascinated by icebergs. Albert Bierstadt made studies on arctic trips aboard steamships in 1883 and 1884 that were the basis of his paintings of arctic scenes with colossal icebergs made in the studio.
American poet, Lydia Sigourney, wrote the poem . While on a return journey from Europe in 1841, her steamship encountered a field of icebergs overnight, during an Aurora Borealis. The ship made it through unscathed to the next morning, when the sun rose and "touched the crowns, Of all those arctic kings."
References.
|
[
{
"math_id": 0,
"text": "m \\frac{d\\vec{v}}{dt} = -mf\\vec{k} \\times \\vec{v} + \\vec{F}_\\text{a} + \\vec{F}_\\text{w} + \\vec{F}_\\text{r} + \\vec{F}_\\text{s} + \\vec{F}_\\text{p},"
},
{
"math_id": 1,
"text": "M_{b} = K \\Delta u^{0.8} \\frac{T_0-T}{L^{0.2}},"
},
{
"math_id": 2,
"text": "M_\\text{b}"
},
{
"math_id": 3,
"text": "\\Delta u"
},
{
"math_id": 4,
"text": "T_0-T"
},
{
"math_id": 5,
"text": "L"
},
{
"math_id": 6,
"text": "K"
},
{
"math_id": 7,
"text": "0.75^\\circ \\text{C}^{-1} \\text{m}^{0.4} \\text{day}^{-1} \\text{s}^{0.8}"
},
{
"math_id": 8,
"text": " M_\\text{e} = cS_s(T_\\text{s}+2)[1+\\text{cos}(I_\\text{c}^3\\pi)],"
},
{
"math_id": 9,
"text": "M_\\text{e}"
},
{
"math_id": 10,
"text": "c = \\frac{1}{12} \\text{m day}^{-1}"
},
{
"math_id": 11,
"text": "S_\\text{S}"
},
{
"math_id": 12,
"text": "T_\\text{S}"
},
{
"math_id": 13,
"text": "I_\\text{c}"
}
] |
https://en.wikipedia.org/wiki?curid=146230
|
14623383
|
Henselian ring
|
In mathematics, a Henselian ring (or Hensel ring) is a local ring in which Hensel's lemma holds. They were introduced by Azumaya, who named them after Kurt Hensel. Azumaya originally allowed Henselian rings to be non-commutative, but most authors now restrict them to be commutative.
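Complete local rings such as the ring of "p"-adic integers are prototypical Henselian rings, and there Hensel's lemma can be made completely explicit: a simple root of a polynomial modulo "p" lifts, by Newton iteration, to a root modulo every power of "p". The following Python sketch is an illustration added here; the polynomial x^2 - 2 and the prime 7 are arbitrary choices, not taken from the article.

```python
# Minimal sketch (added illustration): Hensel lifting in the p-adic integers,
# the prototypical Henselian ring.  The polynomial x**2 - 2 and the prime 7
# are arbitrary choices; any simple root modulo p would do.

def hensel_lift(f, df, root_mod_p, p, k):
    """Lift a simple root of f modulo p to a root modulo p**k.

    Hypotheses of Hensel's lemma: f(root_mod_p) % p == 0 and
    df(root_mod_p) % p != 0 (the derivative is a unit mod p).
    """
    x, modulus = root_mod_p, p
    for _ in range(k - 1):
        modulus *= p
        inv = pow(df(x) % modulus, -1, modulus)  # df(x) is invertible mod p**m
        x = (x - f(x) * inv) % modulus           # one Newton step mod p**m
    return x

f = lambda x: x * x - 2
df = lambda x: 2 * x

r = hensel_lift(f, df, root_mod_p=3, p=7, k=6)   # 3**2 = 9 == 2 (mod 7)
print(r, f(r) % 7**6)                            # second value is 0
```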
Some standard references for Hensel rings are , , and .
Definitions.
In this article rings will be assumed to be commutative, though there is also a theory of non-commutative Henselian rings.
Henselian rings in algebraic geometry.
Henselian rings are the local rings with respect to the Nisnevich topology in the sense that if formula_13 is a Henselian local ring, and formula_14 is a Nisnevich covering of formula_15, then one of the formula_16 is an isomorphism. This should be compared to the fact that for any Zariski open covering formula_14 of the spectrum formula_15 of a local ring formula_13, one of the formula_17 is an isomorphism. In fact, this property characterises Henselian rings, resp. local rings.
Likewise strict Henselian rings are the local rings of geometric points in the étale topology.
Henselization.
For any local ring "A" there is a universal Henselian ring "B" generated by "A", called the Henselization of "A", introduced by , such that any local homomorphism from "A" to a Henselian ring can be extended uniquely to "B". The Henselization of "A" is unique up to unique isomorphism. The Henselization of "A" is an algebraic substitute for the completion of "A". The Henselization of "A" has the same completion and residue field as "A" and is a flat module over "A". If "A" is Noetherian, reduced, normal, regular, or excellent then so is its Henselization. For example, the Henselization of the ring of polynomials "k"["x","y"...] localized at the point (0,0...) is the ring of algebraic formal power series (the formal power series satisfying an algebraic equation). This can be thought of as the "algebraic" part of the completion.
Similarly there is a strictly Henselian ring generated by "A", called the strict Henselization of "A". The strict Henselization is not quite universal: it is unique, but only up to "non-unique" isomorphism. More precisely it depends on the choice of a separable algebraic closure of the residue field of "A", and automorphisms of this separable algebraic closure correspond to automorphisms of the corresponding strict Henselization. For example, a strict Henselization of the field of "p"-adic numbers is given by the maximal unramified extension, generated by all roots of unity of order prime to "p". It is not "universal" as it has non-trivial automorphisms.
References.
|
[
{
"math_id": 0,
"text": "K"
},
{
"math_id": 1,
"text": "v"
},
{
"math_id": 2,
"text": "K^{alg}"
},
{
"math_id": 3,
"text": "K^{sep}"
},
{
"math_id": 4,
"text": "(K, v)"
},
{
"math_id": 5,
"text": "\\alpha"
},
{
"math_id": 6,
"text": "\\alpha'"
},
{
"math_id": 7,
"text": "v(\\alpha') = v(\\alpha)"
},
{
"math_id": 8,
"text": "\\sigma"
},
{
"math_id": 9,
"text": "v\\circ \\sigma"
},
{
"math_id": 10,
"text": "v|_K"
},
{
"math_id": 11,
"text": "L/K"
},
{
"math_id": 12,
"text": "L"
},
{
"math_id": 13,
"text": "R"
},
{
"math_id": 14,
"text": "\\{U_i \\to X\\}"
},
{
"math_id": 15,
"text": "X = Spec(R)"
},
{
"math_id": 16,
"text": "U_i \\to X"
},
{
"math_id": 17,
"text": " U_i \\to X "
}
] |
https://en.wikipedia.org/wiki?curid=14623383
|
14623985
|
Kinetic chain length
|
Average number of monomers added during chain-growth polymerization
In polymer chemistry, the kinetic chain length (ν) of a polymer is the average number of units called monomers added to a growing chain during chain-growth polymerization. During this process, a polymer chain is formed when monomers are bonded together to form long chains known as polymers. Kinetic chain length is defined as the average number of monomers that react with an active center such as a radical from initiation to termination.
This definition is a special case of the concept of "chain length" in chemical kinetics. For any chemical chain reaction, the chain length is defined as the average number of times that the closed cycle of chain propagation steps is repeated. It is equal to the rate of the overall reaction divided by the rate of the initiation step in which the chain carriers are formed. For example, the decomposition of ozone in water is a chain reaction which has been described in terms of its chain length.
In chain-growth polymerization the propagation step is the addition of a monomer to the growing chain. The word "kinetic" is added to "chain length" in order to distinguish the number of reaction steps in the kinetic chain from the number of monomers in the final macromolecule, a quantity named the degree of polymerization. In fact the kinetic chain length is one factor which influences the average degree of polymerization, but there are other factors as described below. The kinetic chain length and therefore the degree of polymerization can influence certain physical properties of the polymer, including chain mobility, glass-transition temperature, and modulus of elasticity.
Calculating chain length.
For most chain-growth polymerizations, the propagation steps are much faster than the initiation steps, so that each growing chain is formed in a short time compared to the overall polymerization reaction. During the formation of a single chain, the reactant concentrations and therefore the propagation rate remain effectively constant. Under these conditions, the ratio of the number of propagation steps to the number of initiation steps is just the ratio of reaction rates:
formula_0
where Rp is the rate of propagation, Ri is the rate of initiation of polymerization, and Rt is the rate of termination of the polymer chain. The second form of the equation is valid at steady-state polymerization, as the chains are being initiated at the same rate they are being terminated ("Ri" = "Rt").
An exception is the class of living polymerizations, in which propagation is much "slower" than initiation, and chain termination does not occur until a quenching agent is added. In such reactions the monomer is consumed slowly, so the propagation rate varies over time and cannot be used to obtain the kinetic chain length. Instead, the kinetic chain length at a given time is usually written as:
formula_1
where [M]0 – [M] represents the number of monomer units consumed, and [I]0 the number of radicals that initiate polymerization. When the reaction goes to completion, [M] = 0, and then the kinetic chain length is equal to the number average degree of polymerization of the polymer.
In both cases kinetic chain length is an average quantity, as not all polymer chains in a given reaction are identical in length. The value of ν depends on the nature and concentration of both the monomer and initiator involved.
Kinetic chain length and degree of polymerization.
In chain-growth polymerization, the degree of polymerization depends not only on the kinetic chain length but also on the type of termination step and the possibility of chain transfer.
Termination by disproportionation.
Termination by disproportionation occurs when an atom is transferred from one polymer free radical to another. The atom is usually hydrogen, and this results in two polymer chains.
With this type of termination and no chain transfer, the number average degree of polymerization (DPn) is then equal to the average kinetic chain length:
formula_2
Termination by combination.
Combination simply means that two radicals are joined together, destroying the radical character of each and forming one polymeric chain. With no chain transfer, the average degree of polymerization is then twice the average kinetic chain length
formula_3
Chain transfer.
Some chain-growth polymerizations include chain transfer steps, in which another atom (often hydrogen) is transferred from a molecule in the system to the polymer radical. The original polymer chain is terminated and a new one is initiated. The kinetic chain is not terminated if the new radical can add monomer. However the degree of polymerization is reduced without affecting the rate of polymerization (which depends on kinetic chain length), since two (or more) macromolecules are formed instead of one. For the case of termination by disproportionation, the degree of polymerization becomes:
formula_4
where Rtr is the rate of transfer. The greater Rtr is, the shorter the final macromolecule.
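To make these relationships concrete, the short sketch below evaluates the kinetic chain length and the resulting number-average degree of polymerization for the three cases just described. The rate values are purely hypothetical numbers chosen for illustration; they are not taken from the text.

```python
# Toy illustration with purely hypothetical rate values (not from the text):
# how the kinetic chain length and DP_n depend on termination mode and on
# chain transfer.

R_p  = 2.0e-4   # propagation rate (assumed value)
R_t  = 1.0e-7   # termination rate (assumed value)
R_tr = 5.0e-8   # chain-transfer rate (assumed value)

nu = R_p / R_t                              # kinetic chain length

dp_disproportionation = nu                  # DP_n = nu
dp_combination        = 2 * nu              # DP_n = 2*nu
dp_with_transfer      = R_p / (R_t + R_tr)  # DP_n < nu when R_tr > 0

print("kinetic chain length nu:", nu)
print("DP_n (disproportionation):", dp_disproportionation)
print("DP_n (combination):", dp_combination)
print("DP_n (disproportionation with chain transfer):", dp_with_transfer)
```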
Significance.
The kinetic chain length is important in determining the degree of polymerization, which in turn influences many physical properties of the polymer.
References.
|
[
{
"math_id": 0,
"text": " \\nu = \\frac{R_p}{R_i} = \\frac{R_p}{R_t}"
},
{
"math_id": 1,
"text": "\\nu = \\frac{[\\ce{M}]_0-[\\ce{M}]}{[\\ce{I}]_0}"
},
{
"math_id": 2,
"text": "DP_n = \\nu"
},
{
"math_id": 3,
"text": "DP_n = 2 \\nu"
},
{
"math_id": 4,
"text": "DP_n = \\frac{R_p}{R_t + R_{tr}} < \\nu = \\frac{R_p}{R_t}"
}
] |
https://en.wikipedia.org/wiki?curid=14623985
|
146263
|
Monotone convergence theorem
|
Theorems on the convergence of bounded monotonic sequences
In the mathematical field of real analysis, the monotone convergence theorem is any of a number of related theorems proving the good convergence behaviour of monotonic sequences, i.e. sequences that are non-increasing, or non-decreasing. In its simplest form, it says that a non-decreasing bounded-above sequence of real numbers formula_0 converges to its smallest upper bound, its supremum. Likewise, a non-increasing bounded-below sequence converges to its largest lower bound, its infimum. In particular, infinite sums of non-negative numbers converge to the supremum of the partial sums if and only if the partial sums are bounded.
For sums of non-negative increasing sequences formula_1, it says that taking the sum and the supremum can be interchanged.
In more advanced mathematics the monotone convergence theorem usually refers to a fundamental result in measure theory due to Lebesgue and Beppo Levi that says that for sequences of non-negative pointwise-increasing measurable functions formula_2, taking the integral and the supremum can be interchanged with the result being finite if either one is finite.
Convergence of a monotone sequence of real numbers.
Every bounded-above monotonically nondecreasing sequence of real numbers is convergent in the real numbers because the supremum exists and is a real number. The proposition does not apply to rational numbers because the supremum of a sequence of rational numbers may be irrational.
Proposition.
(A) For a non-decreasing and bounded-above sequence of real numbers
formula_3
the limit formula_4 exists and equals its supremum:
formula_5
(B) For a non-increasing and bounded-below sequence of real numbers
formula_6
the limit formula_7 exists and equals its infimum:
formula_8.
Proof.
Let formula_9 be the set of values of formula_10. By assumption, formula_11 is non-empty and bounded above by formula_12. By the least-upper-bound property of real numbers, formula_13 exists and formula_14. Now, for every formula_15, there exists formula_16 such that formula_17, since otherwise formula_18 is a strictly smaller upper bound of formula_11, contradicting the definition of the supremum formula_19. Then since formula_20 is non decreasing, and formula_19 is an upper bound, for every formula_21, we have
formula_22
Hence, by definition formula_23.
The proof of the (B) part is analogous or follows from (A) by considering formula_24.
Theorem.
If formula_20 is a monotone sequence of real numbers, i.e., if formula_25 for every formula_26 or formula_27 for every formula_26, then this sequence has a finite limit if and only if the sequence is bounded.
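As a concrete numerical illustration (the particular sequences below are chosen for this sketch and are not part of the article), the following Python snippet tabulates a bounded non-decreasing sequence and the bounded partial sums of a series of non-negative terms; each increases towards its supremum, which by the results above is its limit.

```python
# Added numerical illustration (sequences chosen here, not from the article):
# a bounded non-decreasing sequence and bounded partial sums approach their
# suprema, which by the results above are their limits.
import math

a = lambda n: 1.0 - 1.0 / n                                   # bounded above by 1
partial = lambda n: sum(1.0 / k**2 for k in range(1, n + 1))  # bounded above by 2

for n in (10, 100, 1000, 10000):
    print(n, a(n), partial(n))

print("supremum (= limit) of a_n:", 1.0)
print("supremum (= limit) of the partial sums:", math.pi**2 / 6)
```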
Convergence of a monotone series.
There is a variant of the proposition above where we allow unbounded sequences in the extended real numbers, the real numbers with formula_29 and formula_30 added.
formula_31
In the extended real numbers every set has a supremum (resp. infimum) which of course may be formula_29 (resp. formula_32) if the set is unbounded. An important use of the extended reals is that any set of non-negative numbers formula_33 has a well-defined, summation-order-independent sum
formula_34
where formula_35 denotes the upper extended non-negative real numbers. For a series of non-negative numbers
formula_36
so this sum coincides with the sum of a series if both are defined. In particular the sum of a series of non negative numbers does not depend on the order of summation.
Theorem (monotone convergence of non-negative sums).
Let formula_37 be a sequence of non-negative real numbers indexed by natural numbers formula_38 and formula_39. Suppose that formula_40 for all formula_41. Then
formula_42
Remark
The suprema and the sums may be finite or infinite but the left hand side is finite if and only if the right hand side is.
Proof: Since formula_43, we have formula_44, so formula_45. Conversely, we can interchange sup and sum for finite sums, so
formula_46 hence formula_47.
The theorem states that if you have an infinite matrix of non-negative real numbers formula_48 whose entries are non-decreasing in formula_39 and bounded above, formula_49, with summable bounds, formula_50, then each sum over formula_38 is bounded, formula_51, each supremum formula_52 is finite, and the supremum over formula_39 of the sums over formula_38 equals the (finite) sum over formula_38 of the suprema over formula_39.
As an example, consider the expansion
formula_53
Now set
formula_54
for formula_55 and formula_56 for formula_57, then formula_58 with formula_59 and
formula_60.
The right-hand side is a non-decreasing sequence in formula_39, therefore
formula_61.
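The calculation can be checked numerically. The following sketch is an added illustration: it computes the sums (1 + 1/k)^k for increasing k together with the sum of the suprema 1/i!, both of which approach e.

```python
# Numerical check of the example (added illustration): the sums (1 + 1/k)**k
# increase with k, and both they and the sum of the suprema 1/i! approach e.
import math

def a(i, k):
    # a_{i,k} = C(k, i) / k**i for i <= k and 0 otherwise
    return math.comb(k, i) / k**i if i <= k else 0.0

for k in (1, 10, 100, 1000):
    row_sum = sum(a(i, k) for i in range(k + 1))   # equals (1 + 1/k)**k
    print(k, row_sum, (1 + 1 / k) ** k)

print(sum(1 / math.factorial(i) for i in range(60)), math.e)
```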
Beppo Levi's lemma.
The following result is a generalisation of the monotone convergence of non-negative sums theorem above to the measure-theoretic setting. It is a cornerstone of measure and integration theory, with many applications, and has Fatou's lemma and the dominated convergence theorem as direct consequences. It is due to Beppo Levi, who proved a slight generalization in 1906 of an earlier result by Henri Lebesgue.
Let formula_62 denote the formula_63-algebra of Borel sets on the upper extended non-negative real numbers formula_64. By definition, formula_62 contains the set formula_65 and all Borel subsets of formula_66
Theorem (monotone convergence theorem for non-negative measurable functions).
Let formula_67 be a measure space, and formula_68 a measurable set. Let formula_69 be a pointwise non-decreasing sequence of formula_70-measurable non-negative functions, i.e. each function formula_71 is formula_70-measurable and for every formula_72 and every formula_73,
formula_74
Then the pointwise supremum
formula_75
is a formula_70-measurable function and
formula_76
Remark 1. The integrals and the suprema may be finite or infinite, but the left-hand side is finite if and only if the right-hand side is.
Remark 2. Under the assumptions of the theorem,
Note that the second chain of equalities follows from monotonicity of the integral (lemma 2 below). Thus we can also write the conclusion of the theorem as
formula_77
with the tacit understanding that the limits are allowed to be infinite.
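For intuition, the following numerical sketch is an added illustration (the function and its truncations are chosen here and do not appear in the article): the truncations f_k(x) = min(x^(-1/2), k) on (0, 1] are non-decreasing in k and increase pointwise to x^(-1/2), and their integrals, approximated below by midpoint Riemann sums, increase to the integral of the limit, which equals 2 (the exact truncated values are 2 - 1/k).

```python
# Added numerical illustration of the theorem: the truncations
# f_k(x) = min(x**-0.5, k) increase pointwise to f(x) = x**-0.5 on (0, 1],
# and their integrals (midpoint Riemann sums below) increase to 2, the
# integral of f.  The exact truncated integrals are 2 - 1/k.

def integral_of_truncation(k, n=200_000):
    h = 1.0 / n
    return h * sum(min(((j + 0.5) * h) ** -0.5, k) for j in range(n))

for k in (1, 2, 5, 10, 100):
    print(k, integral_of_truncation(k))
```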
Remark 3. The theorem remains true if its assumptions hold formula_78-almost everywhere. In other words, it is enough that there is a null set formula_16 such that the sequence formula_79 non-decreases for every formula_80 To see why this is true, we start with an observation that allowing the sequence formula_81 to pointwise non-decrease almost everywhere causes its pointwise limit formula_82 to be undefined on some null set formula_16. On that null set, formula_82 may then be defined arbitrarily, e.g. as zero, or in any other way that preserves measurability. To see why this will not affect the outcome of the theorem, note that since formula_83 we have, for every formula_84
formula_85 and formula_86
provided that formula_82 is formula_87-measurable. (These equalities follow directly from the definition of the Lebesgue integral for a non-negative function).
Remark 4. The proof below does not use any properties of the Lebesgue integral except those established here. The theorem, thus, can be used to prove other basic properties, such as linearity, pertaining to Lebesgue integration.
Proof.
This proof does "not" rely on Fatou's lemma; however, we do explain how that lemma might be used. Those not interested in this independency of the proof may skip the intermediate results below.
Intermediate results.
We need three basic lemmas. In the proof below, we apply the monotonic property of the Lebesgue integral to non-negative functions only. Specifically (see Remark 4),
Monotonicity of the Lebesgue integral.
Lemma 1. Let the functions formula_88 be formula_70-measurable.
1. If formula_89 everywhere on formula_90 then
formula_91
2. If formula_92 and formula_93 then
formula_94
Proof. Denote by formula_95 the set of simple formula_96-measurable functions formula_97 such that
formula_98 everywhere on formula_99
1. Since formula_100 we have
formula_101 hence
formula_102
2. The functions formula_103 where formula_104 is the indicator function of formula_105, are easily seen to be measurable and formula_106. Now apply 1.
Lebesgue integral as measure.
Lemma 2. Let formula_67 be a measure space. Consider a simple formula_87-measurable non-negative function formula_107. For a measurable subset formula_108, define
formula_109
Then formula_110 is a measure on formula_111.
proof (lemma 2).
Write
formula_112
with formula_113 and measurable sets formula_114. Then
formula_115
Since finite positive linear combinations of countably additive set functions are countably additive, to prove countable additivity of formula_110 it suffices to prove that the set function defined by formula_116 is countably additive for all formula_117. But this follows directly from the countable additivity of formula_78.
Continuity from below.
Lemma 3. Let formula_78 be a measure, and formula_118, where
formula_119
is a non-decreasing chain with all its sets formula_78-measurable. Then
formula_120
proof (lemma 3).
Set formula_121, then
we decompose formula_122 as a countable disjoint union of measurable sets and likewise formula_123 as a finite disjoint union. Therefore
formula_124, and formula_125 so formula_126.
Proof of theorem.
Set formula_127.
Denote by formula_128 the set of simple formula_87-measurable functions formula_97 (formula_29 not included!) such that
formula_129 on formula_130.
Step 1. The function formula_82 is formula_131-measurable, and the integral formula_132 is well-defined (albeit possibly infinite).
From formula_133 we get formula_134. Hence we have to show that formula_82 is formula_70-measurable. To see this, it suffices to prove that formula_135 is formula_136-measurable for all formula_137, because the intervals formula_138 generate the Borel sigma algebra on the extended non-negative reals formula_139 under taking complements, countable intersections and countable unions.
Now since formula_140 is a non-decreasing sequence,
formula_141 if and only if formula_142 for all formula_39. Since we already know that formula_143 and formula_144 we conclude that
formula_145
Hence formula_146 is a measurable set,
being the countable intersection of the measurable sets formula_147.
Since formula_148 the integral is well defined (but possibly infinite) as
formula_149.
Step 2. We have the inequality
formula_150
This is equivalent to formula_151 for all formula_39 which follows directly from formula_152 and "monotonicity of the integral" (lemma 1).
Step 3. We have the reverse inequality
formula_153.
By the definition of integral as a supremum step 3 is equivalent to
formula_154
for every formula_155.
It is tempting to prove formula_156 for formula_157 sufficiently large, but this does not work, e.g. if formula_158 is itself simple and formula_159. However, we can give ourselves an "epsilon of room" to manoeuvre and avoid this problem.
Step 3 is also equivalent to
formula_160
for every simple function formula_155 and every formula_161
where for the equality we used that the left hand side of the inequality is a finite sum. This we will prove.
Given formula_155 and formula_161, define
formula_162
We claim the sets formula_163 have the following properties:
1. formula_163 is formula_164-measurable.
2. The sets are nested: formula_165
3. They exhaust formula_130: formula_166
Assuming the claim, by the definition of formula_163 and "monotonicity of the Lebesgue integral" (lemma 1) we have
formula_167
Hence by "Lebesgue integral of a simple function as measure" (lemma 2), and "continuity from below" (lemma 3) we get:
formula_168
which we set out to prove. Thus it remains to prove the claim.
Ad 1: Write formula_169, for non-negative constants formula_170, and measurable sets formula_171, which we may assume are pairwise disjoint and with union formula_172. Then for formula_173 we have formula_174 if and only if formula_175 so
formula_176
which is measurable since the formula_177 are measurable.
Ad 2: For formula_178 we have formula_179 so formula_180
Ad 3: Fix formula_181. If formula_182 then formula_183, hence formula_184. Otherwise, formula_185 and formula_186 so formula_187 for formula_188
sufficiently large, hence formula_189.
The proof of the monotone convergence theorem is complete.
Relaxing the monotonicity assumption.
Under similar hypotheses to Beppo Levi's theorem, it is possible to relax the hypothesis of monotonicity. As before, let formula_190 be a measure space and formula_191. Again, formula_192 will be a sequence of formula_193-measurable non-negative functions formula_71. However, we do not assume they are pointwise non-decreasing. Instead, we assume that formula_194 converges for almost every formula_195, we define formula_82 to be the pointwise limit of formula_192, and we assume additionally that formula_196 pointwise almost everywhere for all formula_39. Then formula_82 is formula_193-measurable, and formula_197 exists, and formula_198
Proof based on Fatou's lemma.
The proof can also be based on Fatou's lemma instead of a direct proof as above, because Fatou's lemma can be proved independent of the monotone convergence theorem.
However, the monotone convergence theorem is in some ways more primitive than Fatou's lemma: Fatou's lemma follows easily from it, and the proof via Fatou's lemma is similar to, and arguably slightly less natural than, the proof above.
As before, measurability follows from the fact that formula_199 almost everywhere. The interchange of limits and integrals is then an easy consequence of Fatou's lemma. One has formula_200 by Fatou's lemma, and then, since formula_201 (monotonicity),
formula_202 Therefore
formula_203
Notes.
|
[
{
"math_id": 0,
"text": "a_1 \\le a_2 \\le a_3 \\le ...\\le K"
},
{
"math_id": 1,
"text": "0 \\le a_{i,1} \\le a_{i,2} \\le \\cdots "
},
{
"math_id": 2,
"text": "0 \\le f_1(x) \\le f_2(x) \\le \\cdots"
},
{
"math_id": 3,
"text": "a_1 \\le a_2 \\le a_3 \\le...\\le K < \\infty,"
},
{
"math_id": 4,
"text": "\\lim_{n \\to \\infty} a_n"
},
{
"math_id": 5,
"text": "\\lim_{n \\to \\infty} a_n = \\sup_n a_n \\le K."
},
{
"math_id": 6,
"text": "a_1 \\ge a_2 \\ge a_3 \\ge \\cdots \\ge L > -\\infty,"
},
{
"math_id": 7,
"text": " \\lim_{n \\to \\infty} a_n"
},
{
"math_id": 8,
"text": "\\lim_{n \\to \\infty} a_n = \\inf_n a_n \\ge L"
},
{
"math_id": 9,
"text": "\\{ a_n \\}_{n\\in\\mathbb{N}}"
},
{
"math_id": 10,
"text": " (a_n)_{n\\in\\mathbb{N}} "
},
{
"math_id": 11,
"text": "\\{ a_n \\}"
},
{
"math_id": 12,
"text": "K"
},
{
"math_id": 13,
"text": "c = \\sup_n \\{a_n\\}"
},
{
"math_id": 14,
"text": " c \\le K"
},
{
"math_id": 15,
"text": "\\varepsilon > 0"
},
{
"math_id": 16,
"text": "N"
},
{
"math_id": 17,
"text": "c\\ge a_N > c - \\varepsilon "
},
{
"math_id": 18,
"text": "c - \\varepsilon "
},
{
"math_id": 19,
"text": "c"
},
{
"math_id": 20,
"text": "(a_n)_{n\\in\\mathbb{N}}"
},
{
"math_id": 21,
"text": "n > N"
},
{
"math_id": 22,
"text": "|c - a_n| = c -a_n \\leq c - a_N = |c -a_N|< \\varepsilon. "
},
{
"math_id": 23,
"text": " \\lim_{n \\to \\infty} a_n = c =\\sup_n a_n"
},
{
"math_id": 24,
"text": "\\{-a_n\\}_{n \\in \\N}"
},
{
"math_id": 25,
"text": "a_n \\le a_{n+1}"
},
{
"math_id": 26,
"text": "n \\ge 1"
},
{
"math_id": 27,
"text": "a_n \\ge a_{n+1}"
},
{
"math_id": 28,
"text": "L"
},
{
"math_id": 29,
"text": "\\infty"
},
{
"math_id": 30,
"text": " -\\infty"
},
{
"math_id": 31,
"text": " \\bar\\R = \\R \\cup \\{-\\infty, \\infty\\}"
},
{
"math_id": 32,
"text": "-\\infty"
},
{
"math_id": 33,
"text": " a_i \\ge 0, i \\in I "
},
{
"math_id": 34,
"text": " \\sum_{i \\in I} a_i = \\sup_{J \\subset I,\\ |J|< \\infty} \\sum_{j \\in J} a_j \\in \\bar \\R_{\\ge 0}"
},
{
"math_id": 35,
"text": "\\bar\\R_{\\ge 0} = [0, \\infty] \\subset \\bar \\R"
},
{
"math_id": 36,
"text": "\\sum_{i = 1}^\\infty a_i = \\lim_{k \\to \\infty} \\sum_{i = 1}^k a_i = \\sup_k \\sum_{i =1}^k a_i = \\sup_{J \\subset \\N, |J| < \\infty} \\sum_{j \\in J} a_j = \\sum_{i \\in \\N} a_i,"
},
{
"math_id": 37,
"text": "a_{i,k} \\ge 0 "
},
{
"math_id": 38,
"text": "i"
},
{
"math_id": 39,
"text": "k"
},
{
"math_id": 40,
"text": "a_{i,k} \\le a_{i,k+1}"
},
{
"math_id": 41,
"text": "i, k"
},
{
"math_id": 42,
"text": "\\sup_k \\sum_i a_{i,k} = \\sum_i \\sup_k a_{i,k} \\in \\bar\\R_{\\ge 0}."
},
{
"math_id": 43,
"text": "a_{i,k} \\le \\sup_k a_{i,k}"
},
{
"math_id": 44,
"text": "\\sum_i a_{i,k} \\le \\sum_i \\sup_k a_{i,k}"
},
{
"math_id": 45,
"text": "\\sup_k \\sum_i a_{i,k} \\le \\sum_i \\sup_k a_{i,k} "
},
{
"math_id": 46,
"text": "\\sum_{i = 1}^N \\sup_k a_{i,k} = \\sup_k \\sum_{i =1}^N a_{i,k} \\le \\sup_k \\sum_{i =1}^\\infty a_{i,k}"
},
{
"math_id": 47,
"text": "\\sum_{i = 1}^\\infty \\sup_k a_{i,k} \\le \\sup_k \\sum_{i =1}^\\infty a_{i,k}"
},
{
"math_id": 48,
"text": " a_{i,k} \\ge 0"
},
{
"math_id": 49,
"text": "a_{i,k} \\le K_i"
},
{
"math_id": 50,
"text": "\\sum_i K_i <\\infty"
},
{
"math_id": 51,
"text": "\\sum_i a_{i,k} \\le \\sum K_i "
},
{
"math_id": 52,
"text": " \\sup_k a_{i,k}"
},
{
"math_id": 53,
"text": " \\left( 1+ \\frac1k\\right)^k = \\sum_{i=0}^k \\binom ki \\frac1{k^i} \n"
},
{
"math_id": 54,
"text": " a_{i,k} = \\binom ki \\frac1{k^i} = \\frac1{i!} \\cdot \\frac kk \\cdot \\frac{k-1}k\\cdot \\cdots \\frac{k-i+1}k.\n"
},
{
"math_id": 55,
"text": " i \\le k "
},
{
"math_id": 56,
"text": " a_{i,k} = 0"
},
{
"math_id": 57,
"text": " i > k "
},
{
"math_id": 58,
"text": "0\\le a_{i,k} \\le a_{i,k+1}"
},
{
"math_id": 59,
"text": "\\sup_k a_{i,k} = \\frac 1{i!}<\\infty "
},
{
"math_id": 60,
"text": "\\left( 1+ \\frac1k\\right)^k = \\sum_{i =0}^\\infty a_{i,k}"
},
{
"math_id": 61,
"text": " \\lim_{k \\to \\infty}\n\\left( 1+ \\frac1k\\right)^k = \\sup_k \\sum_{i=0}^\\infty a_{i,k} = \\sum_{i = 0}^\\infty \\sup_k a_{i,k} = \\sum_{i = 0}^\\infty \\frac1{i!} = e"
},
{
"math_id": 62,
"text": "\\operatorname{\\mathcal B}_{\\bar\\R_{\\geq 0}}"
},
{
"math_id": 63,
"text": "\\sigma"
},
{
"math_id": 64,
"text": "[0,+\\infty]"
},
{
"math_id": 65,
"text": "\\{+\\infty\\}"
},
{
"math_id": 66,
"text": "\\R_{\\geq 0}."
},
{
"math_id": 67,
"text": "(\\Omega,\\Sigma,\\mu)"
},
{
"math_id": 68,
"text": "X\\in\\Sigma"
},
{
"math_id": 69,
"text": "\\{f_k\\}^\\infty_{k=1}"
},
{
"math_id": 70,
"text": "(\\Sigma,\\operatorname{\\mathcal B}_{\\bar\\R_{\\geq 0}})"
},
{
"math_id": 71,
"text": "f_k:X\\to [0,+\\infty]"
},
{
"math_id": 72,
"text": "{k\\geq 1}"
},
{
"math_id": 73,
"text": "{x\\in X}"
},
{
"math_id": 74,
"text": " 0 \\leq \\ldots\\le f_k(x) \\leq f_{k+1}(x)\\leq\\ldots\\leq \\infty. "
},
{
"math_id": 75,
"text": " \\sup_k f_k : x \\mapsto \\sup_k f_k(x) "
},
{
"math_id": 76,
"text": "\\sup_k \\int_X f_k \\,d\\mu = \\int_X \\sup_k f_k \\,d\\mu."
},
{
"math_id": 77,
"text": " \\lim_{k \\to \\infty} \\int_X f_k(x) \\, d\\mu(x) = \\int_X \\lim_{k\\to \\infty} f_k(x) \\, d\\mu(x)\n"
},
{
"math_id": 78,
"text": "\\mu"
},
{
"math_id": 79,
"text": "\\{f_n(x)\\}"
},
{
"math_id": 80,
"text": "{x\\in X\\setminus N}."
},
{
"math_id": 81,
"text": "\\{ f_n \\}"
},
{
"math_id": 82,
"text": "f"
},
{
"math_id": 83,
"text": "{\\mu(N)=0},"
},
{
"math_id": 84,
"text": "k,"
},
{
"math_id": 85,
"text": " \\int_X f_k \\,d\\mu = \\int_{X \\setminus N} f_k \\,d\\mu"
},
{
"math_id": 86,
"text": "\\int_X f \\,d\\mu = \\int_{X \\setminus N} f \\,d\\mu, "
},
{
"math_id": 87,
"text": "(\\Sigma,\\operatorname{\\mathcal B}_{\\R_{\\geq 0}})"
},
{
"math_id": 88,
"text": "f,g : X \\to [0,+\\infty]"
},
{
"math_id": 89,
"text": "f \\leq g"
},
{
"math_id": 90,
"text": "X,"
},
{
"math_id": 91,
"text": "\\int_X f\\,d\\mu \\leq \\int_X g\\,d\\mu."
},
{
"math_id": 92,
"text": " X_1,X_2 \\in \\Sigma "
},
{
"math_id": 93,
"text": "X_1 \\subseteq X_2, "
},
{
"math_id": 94,
"text": "\\int_{X_1} f\\,d\\mu \\leq \\int_{X_2} f\\,d\\mu."
},
{
"math_id": 95,
"text": "\\operatorname{SF}(h)"
},
{
"math_id": 96,
"text": "(\\Sigma, \\operatorname{\\mathcal B}_{\\R_{\\geq 0}})"
},
{
"math_id": 97,
"text": "s:X\\to [0,\\infty)"
},
{
"math_id": 98,
"text": "0\\leq s\\leq h"
},
{
"math_id": 99,
"text": "X."
},
{
"math_id": 100,
"text": "f \\leq g,"
},
{
"math_id": 101,
"text": " \\operatorname{SF}(f) \\subseteq \\operatorname{SF}(g), "
},
{
"math_id": 102,
"text": "\\int_X f\\,d\\mu = \\sup_{s\\in {\\rm SF}(f)}\\int_X s\\,d\\mu \\leq \\sup_{s\\in {\\rm SF}(g)}\\int_X s\\,d\\mu = \\int_X g\\,d\\mu."
},
{
"math_id": 103,
"text": "f\\cdot {\\mathbf 1}_{X_1}, f\\cdot {\\mathbf 1}_{X_2},"
},
{
"math_id": 104,
"text": "{\\mathbf 1}_{X_i}"
},
{
"math_id": 105,
"text": "X_i"
},
{
"math_id": 106,
"text": "f\\cdot{\\mathbf 1}_{X_1}\\le f\\cdot{\\mathbf 1}_{X_2}"
},
{
"math_id": 107,
"text": "s:\\Omega\\to{\\mathbb R_{\\geq 0}}"
},
{
"math_id": 108,
"text": "S \\in \\Sigma"
},
{
"math_id": 109,
"text": "\\nu_s(S)=\\int_Ss\\,d\\mu."
},
{
"math_id": 110,
"text": "\\nu_s"
},
{
"math_id": 111,
"text": "(\\Omega, \\Sigma)"
},
{
"math_id": 112,
"text": "s=\\sum^n_{k=1}c_k\\cdot {\\mathbf 1}_{A_k},"
},
{
"math_id": 113,
"text": "c_k\\in{\\mathbb R}_{\\geq 0}"
},
{
"math_id": 114,
"text": "A_k\\in\\Sigma"
},
{
"math_id": 115,
"text": "\\nu_s(S)=\\sum_{k =1}^n c_k \\mu(S\\cap A_k)."
},
{
"math_id": 116,
"text": "\\nu_A(S) = \\mu(A \\cap S)"
},
{
"math_id": 117,
"text": "A \\in \\Sigma"
},
{
"math_id": 118,
"text": "S = \\bigcup^\\infty_{i=1}S_i"
},
{
"math_id": 119,
"text": "\nS_1\\subseteq\\cdots\\subseteq S_i\\subseteq S_{i+1}\\subseteq\\cdots\\subseteq S\n"
},
{
"math_id": 120,
"text": "\\mu(S)=\\sup_i\\mu(S_i)."
},
{
"math_id": 121,
"text": "S_0 = \\emptyset"
},
{
"math_id": 122,
"text": " S = \\coprod_{1 \\le i } S_i \\setminus S_{i -1} "
},
{
"math_id": 123,
"text": "S_k = \\coprod_{1\\le i \\le k } S_i \\setminus S_{i -1} "
},
{
"math_id": 124,
"text": "\\mu(S_k) = \\sum_{i=1}^k \\mu (S_i \\setminus S_{i -1})"
},
{
"math_id": 125,
"text": "\\mu(S) = \\sum_{i = 1}^\\infty \\mu(S_i \\setminus S_{i-1})"
},
{
"math_id": 126,
"text": "\\mu(S) = \\sup_k \\mu(S_k)"
},
{
"math_id": 127,
"text": " f = \\sup_k f_k"
},
{
"math_id": 128,
"text": "\\operatorname{SF}(f)"
},
{
"math_id": 129,
"text": "0\\leq s\\leq f"
},
{
"math_id": 130,
"text": "X"
},
{
"math_id": 131,
"text": " (\\Sigma, \\operatorname{\\mathcal B}_{\\bar\\R_{\\geq 0}}) "
},
{
"math_id": 132,
"text": "\\textstyle \\int_X f \\,d\\mu "
},
{
"math_id": 133,
"text": "0 \\le f_k(x) \\le \\infty"
},
{
"math_id": 134,
"text": "0 \\le f(x) \\le \\infty"
},
{
"math_id": 135,
"text": "f^{-1}([0,t])"
},
{
"math_id": 136,
"text": "\\Sigma "
},
{
"math_id": 137,
"text": "0 \\le t \\le \\infty"
},
{
"math_id": 138,
"text": "[0,t]"
},
{
"math_id": 139,
"text": "[0,\\infty]"
},
{
"math_id": 140,
"text": "f_k(x)"
},
{
"math_id": 141,
"text": " f(x) = \\sup_k f_k(x) \\leq t"
},
{
"math_id": 142,
"text": "f_k(x)\\leq t"
},
{
"math_id": 143,
"text": "f\\ge 0"
},
{
"math_id": 144,
"text": "f_k\\ge 0"
},
{
"math_id": 145,
"text": "f^{-1}([0, t]) = \\bigcap_k f_k^{-1}([0,t])."
},
{
"math_id": 146,
"text": "f^{-1}([0, t])"
},
{
"math_id": 147,
"text": "f_k^{-1}([0,t])"
},
{
"math_id": 148,
"text": "f \\ge 0"
},
{
"math_id": 149,
"text": " \\int_X f \\,d\\mu = \\sup_{s \\in SF(f)}\\int_X s \\, d\\mu"
},
{
"math_id": 150,
"text": "\\sup_k \\int_X f_k \\,d\\mu \\le \\int_X f \\,d\\mu "
},
{
"math_id": 151,
"text": "\\int_X f_k(x) \\, d\\mu \\le \\int_X f(x)\\, d\\mu"
},
{
"math_id": 152,
"text": "f_k(x) \\le f(x)"
},
{
"math_id": 153,
"text": " \\int_X f \\,d\\mu \\le \\sup_k \\int_X f_k \\,d\\mu "
},
{
"math_id": 154,
"text": " \\int_X s\\,d\\mu\\leq\\sup_k\\int_X f_k\\,d\\mu"
},
{
"math_id": 155,
"text": "s\\in\\operatorname{SF}(f)"
},
{
"math_id": 156,
"text": " \\int_X s\\,d\\mu\\leq \\int_X f_k \\,d\\mu"
},
{
"math_id": 157,
"text": "k >K_s "
},
{
"math_id": 158,
"text": " f "
},
{
"math_id": 159,
"text": "f_k < f"
},
{
"math_id": 160,
"text": " \n(1-\\varepsilon) \\int_X s \\, d\\mu = \\int_X (1 - \\varepsilon) s \\, d\\mu \\le \\sup_k \\int_X f_k \\, d\\mu "
},
{
"math_id": 161,
"text": "0 <\\varepsilon \\ll 1"
},
{
"math_id": 162,
"text": "B^{s,\\varepsilon}_k=\\{x\\in X\\mid (1 - \\varepsilon) s(x)\\leq f_k(x)\\}\\subseteq X."
},
{
"math_id": 163,
"text": "B^{s,\\varepsilon}_k"
},
{
"math_id": 164,
"text": "\\Sigma"
},
{
"math_id": 165,
"text": "B^{s,\\varepsilon}_k\\subseteq B^{s,\\varepsilon}_{k+1}"
},
{
"math_id": 166,
"text": " X=\\bigcup_k B^{s,\\varepsilon}_k"
},
{
"math_id": 167,
"text": "\\int_{B^{s,\\varepsilon}_k}(1-\\varepsilon) s\\,d\\mu\\leq\\int_{B^{s,\\varepsilon}_k} f_k\\,d\\mu \\leq\\int_X f_k\\,d\\mu.\n"
},
{
"math_id": 168,
"text": " \\sup_k \\int_{B^{s,\\varepsilon}_k} (1-\\varepsilon)s\\,d\\mu = \\int_X (1- \\varepsilon)s \\, d\\mu \n\\le \\sup_k \\int_X f_k \\, d\\mu. "
},
{
"math_id": 169,
"text": "s=\\sum_{1 \\le i \\le m}c_i\\cdot{\\mathbf 1}_{A_i}"
},
{
"math_id": 170,
"text": "c_i \\in \\R_{\\geq 0}"
},
{
"math_id": 171,
"text": "A_i\\in\\Sigma"
},
{
"math_id": 172,
"text": "\\textstyle X=\\coprod^m_{i=1}A_i"
},
{
"math_id": 173,
"text": " x\\in A_i"
},
{
"math_id": 174,
"text": "(1-\\varepsilon)s(x)\\leq f_k(x)"
},
{
"math_id": 175,
"text": " f_k(x) \\in [( 1- \\varepsilon)c_i, \\,\\infty],"
},
{
"math_id": 176,
"text": "B^{s,\\varepsilon}_k=\\coprod^m_{i=1}\\Bigl(f^{-1}_k\\Bigl([(1-\\varepsilon)c_i,\\infty]\\Bigr)\\cap A_i\\Bigr)"
},
{
"math_id": 177,
"text": "f_k"
},
{
"math_id": 178,
"text": " x \\in B^{s,\\varepsilon}_k"
},
{
"math_id": 179,
"text": "(1 - \\varepsilon)s(x) \\le f_k(x)\\le f_{k+1}(x)"
},
{
"math_id": 180,
"text": "x \\in B^{s,\\varepsilon}_{k + 1}."
},
{
"math_id": 181,
"text": "x \\in X"
},
{
"math_id": 182,
"text": "s(x) = 0"
},
{
"math_id": 183,
"text": "(1 - \\varepsilon)s(x) = 0 \\le f_1(x)"
},
{
"math_id": 184,
"text": "x \\in B^{s,\\varepsilon}_1"
},
{
"math_id": 185,
"text": "s(x) > 0"
},
{
"math_id": 186,
"text": "(1-\\varepsilon)s(x) < s(x) \\le f(x) = \\sup_k f(x)"
},
{
"math_id": 187,
"text": "(1- \\varepsilon)s(x) < f_{N_x}(x)"
},
{
"math_id": 188,
"text": "N_x"
},
{
"math_id": 189,
"text": "x \\in B^{s,\\varepsilon}_{N_x}"
},
{
"math_id": 190,
"text": "(\\Omega, \\Sigma, \\mu)"
},
{
"math_id": 191,
"text": "X \\in \\Sigma"
},
{
"math_id": 192,
"text": "\\{f_k\\}_{k=1}^\\infty"
},
{
"math_id": 193,
"text": "(\\Sigma, \\mathcal{B}_{\\R_{\\geq 0}})"
},
{
"math_id": 194,
"text": "\\{f_k(x)\\}_{k=1}^\\infty"
},
{
"math_id": 195,
"text": "x"
},
{
"math_id": 196,
"text": "f_k \\le f"
},
{
"math_id": 197,
"text": "\\lim_{k\\to\\infty} \\int_X f_k \\,d\\mu"
},
{
"math_id": 198,
"text": "\\lim_{k\\to\\infty} \\int_X f_k \\,d\\mu = \\int_X f \\,d\\mu."
},
{
"math_id": 199,
"text": "f = \\sup_k f_k = \\lim_{k \\to \\infty} f_k = \\liminf_{k \\to \\infty}f_k"
},
{
"math_id": 200,
"text": "\\int_X f\\,d\\mu = \\int_X \\liminf_k f_k\\,d\\mu \\le \\liminf \\int_X f_k\\,d\\mu"
},
{
"math_id": 201,
"text": "\\int f_k \\,d\\mu \\le \\int f_{k + 1} \\,d\\mu \\le \\int f d\\mu"
},
{
"math_id": 202,
"text": "\\liminf \\int_X f_k\\,d\\mu \\le \\limsup_k \\int_X f_k\\,d\\mu = \\sup_k \\int_X f_k\\,d\\mu \\le \\int_X f\\,d\\mu."
},
{
"math_id": 203,
"text": "\\int_X f \\, d\\mu = \\liminf_{k \\to\\infty} \\int_X f_k\\,d\\mu = \\limsup_{k \\to\\infty} \\int_X f_k\\,d\\mu = \\lim_{k \\to\\infty} \\int_X f_k \\, d\\mu = \\sup_k \\int_X f_k\\,d\\mu."
}
] |
https://en.wikipedia.org/wiki?curid=146263
|
14626628
|
Polynomial Diophantine equation
|
In mathematics, a polynomial Diophantine equation is an indeterminate polynomial equation for which one seeks solutions restricted to be polynomials in the indeterminate. A Diophantine equation, in general, is one where the solutions are restricted to some algebraic system, typically integers. (In another usage) "Diophantine" refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made initial studies of integer Diophantine equations.
An important type of polynomial Diophantine equations takes the form:
formula_0
where "a", "b", and "c" are known polynomials, and we wish to solve for "s" and "t".
A simple example (and a solution) is:
formula_1
formula_2
formula_3
A necessary and sufficient condition for a polynomial Diophantine equation to have a solution is for "c" to be a multiple of the GCD of "a" and "b". In the example above, the GCD of "a" and "b" was 1, so solutions would exist for any value of c.
Solutions to polynomial Diophantine equations are not unique. Any multiple of formula_4 (say formula_5) can be used to transform formula_6 and formula_7 into another solution formula_8, formula_9:
formula_10
Some polynomial Diophantine equations can be solved using the extended Euclidean algorithm, which works as well with polynomials as it does with integers.
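Because the extended Euclidean algorithm for polynomials is implemented in computer algebra systems, the example above can be solved mechanically. The following Python sketch uses SymPy's gcdex routine; it is an added illustration that produces a valid pair s, t (though not necessarily the same one quoted above) and also checks the non-uniqueness of solutions.

```python
# Sketch using SymPy's extended Euclidean algorithm for polynomials (gcdex);
# it produces a valid solution of the example above, though not necessarily
# the particular s and t quoted in the text.
from sympy import symbols, gcdex, div, expand, simplify

x = symbols('x')
a, b, c = x**2 + 1, x**3 + 1, 2*x

s0, t0, g = gcdex(a, b, x)     # s0*a + t0*b = g = gcd(a, b)
q, r = div(c, g, x)            # a solution exists iff g divides c
assert r == 0

s, t = expand(q * s0), expand(q * t0)
assert simplify(s * a + t * b - c) == 0
print("s =", s, "   t =", t)

# Non-uniqueness: (s + r*b, t - r*a) is also a solution for any polynomial r.
r_poly = x
assert simplify((s + r_poly * b) * a + (t - r_poly * a) * b - c) == 0
```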
|
[
{
"math_id": 0,
"text": "sa+tb=c"
},
{
"math_id": 1,
"text": "s(x^2+1)+t(x^3+1)=2x"
},
{
"math_id": 2,
"text": "s=-x^3-x^2+x"
},
{
"math_id": 3,
"text": "t=x^2+x."
},
{
"math_id": 4,
"text": "ab"
},
{
"math_id": 5,
"text": "rab"
},
{
"math_id": 6,
"text": "s"
},
{
"math_id": 7,
"text": "t"
},
{
"math_id": 8,
"text": "s'=s+rb"
},
{
"math_id": 9,
"text": "t'=t-ra"
},
{
"math_id": 10,
"text": "(s+rb)a+(t-ra)b=c."
}
] |
https://en.wikipedia.org/wiki?curid=14626628
|
1462677
|
Bertrand competition
|
Economic model of competition
Bertrand competition is a model of competition used in economics, named after Joseph Louis François Bertrand (1822–1900). It describes interactions among firms (sellers) that set prices and their customers (buyers) that choose quantities at the prices set. The model was formulated in 1883 by Bertrand in a review of Antoine Augustin Cournot's book "Recherches sur les Principes Mathématiques de la Théorie des Richesses" (1838) in which Cournot had put forward the Cournot model. Cournot's model argued that each firm should maximise its profit by selecting a quantity level and then adjusting price level to sell that quantity. The outcome of the model equilibrium involved firms pricing above marginal cost and hence above the competitive price. In his review, Bertrand argued that each firm should instead maximise its profits by selecting a price level that undercuts its competitors' prices when their prices exceed marginal cost. The model was not formalized by Bertrand; however, the idea was developed into a mathematical model by Francis Ysidro Edgeworth in 1889.
Underlying assumptions of Bertrand competition.
Considering the simple framework, the underlying assumptions that the Bertrand model makes are as follows:
Furthermore, it is intuitively deducible, when considering the law of demand of firms' competition in the market:
The Bertrand duopoly equilibrium.
In the Bertrand model, the competitive price serves as a Nash equilibrium for strategic pricing decisions. If both firms establish a competitive price at the marginal cost (unit cost), neither firm obtains profits. If one firm aligns its price with the marginal cost while the other raises its price above the unit cost, the latter earns nothing, as consumers opt for the competitively priced option. No other pricing scenario reaches equilibrium. Setting identical prices above unit cost leads to a destabilizing incentive for each firm to undercut the other, aiming to capture the entire market and significantly boost profits. This lack of equilibrium arises from the firms competing in a market with substitute goods, where consumers favor the cheaper product due to identical preferences. Additionally, equilibrium is not achieved when firms set different prices; the higher-priced firm earns nothing, prompting it to lower prices to undercut the competitor. Therefore, the sole equilibrium in the Bertrand model emerges when both firms establish a price equal to unit cost, known as the competitive price.
It is worth highlighting that the Bertrand equilibrium is a "weak" Nash equilibrium. The firms lose nothing by deviating from the competitive price: it is an equilibrium simply because each firm can earn no more than zero profits given that the other firm sets the competitive price and is willing to meet all demand at that price.
Classic modelling of the Bertrand competition.
The Bertrand model of price competition in a duopoly market producing homogeneous goods has the following characteristics:
Firm formula_12’s individual demand function is downward sloping and a function of the price set by each firm:
formula_13
It is important to note that in this case the market demand is continuous, whereas the firm's demand is discontinuous, as seen in the function above. This means the firm's profit function is also discontinuous. Therefore, firm formula_12 aims to maximise its profit, as stated below, taking formula_14 as given:
formula_15
In order to derive the best response for firm formula_12, let formula_16 be the monopoly price that maximises total industry profit, where formula_17. This highlights the incentive for firms to 'undercut' rival firms: if the rival sets its price at formula_16, firm formula_12 can reduce its price by the smallest currency unit, formula_18, and capture the entire market demand, formula_19. Therefore, firm formula_12's best response is: formula_20
Diagram 1 illustrates firm 1's best response function, formula_21, given the price set by firm 2. Note that formula_22 in the diagram stands for marginal cost, formula_23. The Nash equilibrium (formula_24) in the Bertrand model is the mutual best response; an equilibrium where neither firm has an incentive to deviate from it. As illustrated in Diagram 2, the Bertrand-Nash equilibrium occurs where the best response functions of the two firms intersect, at the point where formula_25. This means both firms make zero economic profits.
Therefore, if the rival prices below marginal cost, the firm would make losses by attracting the extra demand, and it is better off setting its price equal to marginal cost. It is important to note that Bertrand's model of price competition leads to a perfectly competitive outcome. This is known as the Bertrand paradox: two competitors in a market are sufficient to generate competitive pricing, yet this result is not observed in many real-world industries.
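The undercutting argument can be illustrated numerically. In the sketch below, the linear demand D(p) = 1 - p, the marginal cost c = 0.2 and the tiny price cut are assumptions introduced purely for this illustration; the point is that matching any common price above marginal cost is dominated by a slight undercut, while at p = c no deviation is profitable.

```python
# Added numerical sketch of the undercutting argument.  The demand curve
# D(p) = 1 - p, the marginal cost c = 0.2 and the tiny price cut eps are
# assumptions made only for this illustration.

c, eps = 0.2, 1e-6
D = lambda p: max(1.0 - p, 0.0)

def profit(p_own, p_rival):
    if p_own < p_rival:                        # cheapest firm serves everyone
        return (p_own - c) * D(p_own)
    if p_own == p_rival:                       # equal prices: split the market
        return (p_own - c) * D(p_own) / 2
    return 0.0                                 # the dearer firm sells nothing

for p in (0.6, 0.4, 0.3, 0.25, 0.2):
    match_profit    = profit(p, p)             # match the rival at p
    undercut_profit = profit(p - eps, p)       # undercut the rival slightly
    print(f"p = {p:4.2f}:  match {match_profit:9.6f}   undercut {undercut_profit:9.6f}")

# At every common price above marginal cost the slight undercut roughly
# doubles profit; at p = c undercutting loses money and pricing higher sells
# nothing, so p1 = p2 = c is the Bertrand-Nash equilibrium.
```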
If one firm has lower average cost (a superior production technology), it will charge the highest price that is lower than the average cost of the other one (i.e. a price "just" below the lowest price the other firm can manage) and take all the business. This is known as "limit pricing".
Critical analysis of the Bertrand model.
The Bertrand model rests on some very extreme assumptions. For example, it assumes that consumers want to buy from the lowest priced firm. There are various reasons why this may not hold in many markets: non-price competition and product differentiation, transport and search costs. For example, would someone travel twice as far to save 1% on the price of their vegetables? The Bertrand model can be extended to include product or location differentiation but then the main result – that price is driven down to marginal cost – no longer holds. With search costs, there may be other equilibria apart from the competitive price – the monopoly price or even price dispersion may be equilibria as in the classic "Bargains and Rip-offs" model.
The model also ignores capacity constraints. If a single firm does not have the capacity to supply the whole market then the "price equals marginal cost" result may not hold. The analysis of this case was started by Francis Ysidro Edgeworth and has become known as the Bertrand–Edgeworth model. With capacity constraints, there may not exist any pure strategy Nash equilibrium, the so-called Edgeworth paradox. However, in general there will exist a mixed-strategy Nash equilibrium as shown by Huw Dixon.
Moreover, some economists have criticized the model as leading to impractical outcomes in situations where firms have a fixed cost formula_26 and, as mentioned previously, a constant marginal cost formula_23. Hence, the total cost formula_27 of producing formula_28 units is formula_29. As described in the classic model, prices are eventually driven down to marginal cost, where firms make zero economic profit and earn no margins on inframarginal units. Thus, firms are not able to recoup any fixed costs. However, if firms have an upward-sloping marginal cost curve, they can earn margins on inframarginal sales, which contribute to recouping fixed costs.
There is a big incentive to cooperate in the Bertrand model: colluding to charge the monopoly price, formula_16, and sharing the market equally, formula_30, where formula_0 is the number of firms in the market. However, not colluding and charging marginal cost is the non-cooperative outcome and the only Nash equilibrium of this model. If the simultaneous-move game is turned into a repeated game with an infinite horizon, however, collusion becomes sustainable because of the Folk Theorem.
Bertrand competition versus Cournot competition.
The Bertrand and Cournot models focus on different aspects of the competitive process, and as a result they identify different mechanisms relating firms' decisions to the market demand they face. The Cournot model assumes that the market allocates to each firm sales equal to whatever quantity it produces, at the price level determined by the market, whereas the Bertrand model assumes that the firm with the lowest price acquires all the sales in the market.
When comparing the models, oligopoly theory suggests that Bertrand industries are more competitive than Cournot industries. This is because quantities in the Cournot model are strategic substitutes: an increase in the quantity produced by one firm is accommodated by the rival, which produces less. Prices in the Bertrand model, by contrast, are strategic complements: a firm responds aggressively to its rival's price by setting its own price just below it.
Moreover, both models are criticised for the assumptions they make in comparison with real-world scenarios. However, the results of the classic models can be reconciled by considering where each model is most appropriately applied:
Neither model is necessarily "better" than the other. The accuracy of the predictions of each model will vary from industry to industry, depending on the closeness of each model to the industry situation. If capacity and output can be easily changed, Bertrand is generally a better model of duopoly competition. If output and capacity are difficult to adjust, then Cournot is generally a better model.
Under some conditions the Cournot model can be recast as a two-stage model, wherein the first stage firms choose capacities, and in the second they compete in Bertrand fashion.
Bertrand Competition in Real Life.
Bertrand Competition with Asymmetric Marginal Costs.
In the Bertrand competition model, several assumptions are made; for instance, each firm produces identical goods at identical cost. This is not the case in the real world, where many factors cause firms' costs to differ slightly, such as the cost of renting premises or the ability of larger firms to enjoy economies of scale. Researchers have therefore investigated the outcome of Bertrand competition with asymmetric marginal costs. In the experiment reported in “Bertrand competition with asymmetric costs: Experimental evidence”, the authors found a negative relationship between the level of cost asymmetry and the prices set by the firms (Boone et al., 2012). This means that firms with different costs have different incentives when setting their prices.
Thomas Demuynck et al. (2019) conducted research to find pure-strategy solutions of Bertrand competition with asymmetric costs, using the Myopic Stable Set (MSS) defined for normal-form games. Suppose there are two firms, and let C denote marginal cost, with C1 the marginal cost of firm 1 and C2 the marginal cost of firm 2. Their results distinguish two cases:
The first case, in which both firms have the same marginal cost (C1 = C2), is basic Bertrand competition. Here the MSS consists of a single point at which both firms set their price: the pure-strategy Nash equilibrium.
In the second case, the marginal cost of firm 2 is higher than that of firm 1. Here firm 2 can only set its price equal to its own marginal cost, while firm 1 can choose any price between its own marginal cost and firm 2's marginal cost, so there are many prices firm 1 might set.
Under asymmetric costs, therefore, firms may not set their prices equal to marginal cost, unlike in the standard Bertrand model. The firm with the lower marginal cost can choose any price in the range between its own marginal cost and the other firm's marginal cost. There is no single answer as to which price it should set; the choice depends on other factors, such as the current market situation.
At the same time, Subhasish Dugar et al. (2009) conducted research on the relationship between the size of the cost asymmetry and Bertrand competition. They found that when the cost asymmetry is small there is little difference in outcomes, as it has relatively little impact on competition; however, when the cost asymmetry is large, the lower-cost firm undercuts its rival's price and captures a large market share.
Bertrand Competition with Network effects.
The standard Bertrand model also assumes that all consumers buy from the firm with the lower price and that firms set their prices based only on their marginal costs. This is not entirely accurate, because the theory does not account for network effects: consumers buy a product partly on the basis of how many other consumers are using it. This is quite rational; when purchasing sports shoes, for example, most consumers prefer Nike or Adidas because they are large brands with strong customer networks, which gives buyers confidence that many other people use their products.
However, Christian and Irina (2008) found a different result when the market has network effects: firms prefer to set their prices aggressively in order to attract more customers and enlarge their networks. Masaki (2018) also noted that firms can gain a larger customer base by setting their prices aggressively, and that they then attract more and more customers through network effects, creating a positive feedback loop. Firms are therefore not setting prices blindly; they are also willing to price low in order to gain a larger customer network.
References.
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "i = 1, 2, ..."
},
{
"math_id": 2,
"text": "Q=D(p)"
},
{
"math_id": 3,
"text": "Q=\\sum_{i=1}^n"
},
{
"math_id": 4,
"text": "Q_i"
},
{
"math_id": 5,
"text": "D'(p)<0"
},
{
"math_id": 6,
"text": "c_1=c_2=...=c"
},
{
"math_id": 7,
"text": "p_1=p_2=...=p"
},
{
"math_id": 8,
"text": "\\frac{p}{n}"
},
{
"math_id": 9,
"text": "i=1, 2"
},
{
"math_id": 10,
"text": "c,\\ (c_1=c_2)"
},
{
"math_id": 11,
"text": "p_i=p_1"
},
{
"math_id": 12,
"text": "i"
},
{
"math_id": 13,
"text": "D(p_i,p_j) = \\begin{cases} D(p_i), & \\text{if }p_i<p_j \\\\ \\frac{D(p_i)}{2}, & \\text{if }p_i=p_j \\\\ 0, & \\text{otherwise}\\end{cases}"
},
{
"math_id": 14,
"text": "p_j"
},
{
"math_id": 15,
"text": "\\pi_i=(p_i-c)D(p_i)"
},
{
"math_id": 16,
"text": "p_m"
},
{
"math_id": 17,
"text": "p_m=argmax_p(p-c)D(p)"
},
{
"math_id": 18,
"text": "\\epsilon"
},
{
"math_id": 19,
"text": "D(p)"
},
{
"math_id": 20,
"text": "R_i(p_j) = \\begin{cases} p_m, & \\text{if }p_j \\geq p_m \\\\ p_j-\\epsilon, & \\text{if }c<p_j<p_m \\\\ c, & \\text{if } p_j\\leq c\\end{cases}"
},
{
"math_id": 21,
"text": "P_1''(P_2)"
},
{
"math_id": 22,
"text": "MC"
},
{
"math_id": 23,
"text": "c"
},
{
"math_id": 24,
"text": "N"
},
{
"math_id": 25,
"text": "P_1^N= P_2^N =MC"
},
{
"math_id": 26,
"text": "F"
},
{
"math_id": 27,
"text": "TC"
},
{
"math_id": 28,
"text": "Q"
},
{
"math_id": 29,
"text": "TC = F + cQ"
},
{
"math_id": 30,
"text": "\\frac{p_m}{n}"
}
] |
https://en.wikipedia.org/wiki?curid=1462677
|
14626877
|
Paracrystallinity
|
Materials without long-range ordering of their crystal lattices
In materials science, paracrystalline materials are defined as having short- and medium-range ordering in their lattice (similar to the liquid crystal phases) but lacking crystal-like long-range ordering at least in one direction.
Origin and definition.
The words "paracrystallinity" and "paracrystal" were coined by the late Friedrich Rinne in the year 1933. Their German equivalents, e.g. "Parakristall", appeared in print one year earlier.
A general theory of paracrystals has been formulated in a basic textbook, and then further developed/refined by various authors.
Rolf Hosemann's definition of an ideal paracrystal is: "The electron density distribution of any material is equivalent to that of a paracrystal when there is for every building block one ideal point so that the distance statistics to other ideal points are identical for all of these points. The electron configuration of each building block around its ideal point is statistically independent of its counterpart in neighboring building blocks. A building block corresponds then to the material content of a cell of this "blurred" space lattice, which is to be considered a paracrystal."
Theory.
"Ordering" is the regularity in which atoms appear in a predictable lattice, as measured from one point. In a highly ordered, perfectly crystalline material, or single crystal, the location of every atom in the structure can be described exactly measuring out from a single origin. Conversely, in a disordered structure such as a liquid or amorphous solid, the location of the nearest and, perhaps, second-nearest neighbors can be described from an origin (with some degree of uncertainty) and the ability to predict locations decreases rapidly from there out. The distance at which atom locations can be predicted is referred to as the correlation length formula_0. A paracrystalline material exhibits a correlation somewhere between the fully amorphous and fully crystalline.
The primary, most accessible sources of crystallinity information are X-ray diffraction and cryo-electron microscopy, although other techniques may be needed to observe the complex structure of paracrystalline materials, such as fluctuation electron microscopy in combination with density of states modeling of electronic and vibrational states. Scanning transmission electron microscopy can provide real-space and reciprocal-space characterization of paracrystallinity in nanoscale materials, such as quantum dot solids.
The scattering of X-rays, neutrons and electrons on paracrystals is quantitatively described by the theories of the ideal and real paracrystal.
Numerical differences in analyses of diffraction experiments on the basis of either of these two theories of paracrystallinity can often be neglected.
Just like ideal crystals, ideal paracrystals extend theoretically to infinity. Real paracrystals, on the other hand, follow the empirical α*-law, which restricts their size. That size is also inversely proportional to the components of the tensor of the paracrystalline distortion. Larger solid state aggregates are then composed of micro-paracrystals.
Applications.
The paracrystal model has been useful, for example, in describing the state of partially amorphous semiconductor materials after deposition. It has also been successfully applied to synthetic polymers, liquid crystals, biopolymers, quantum dot solids, and biomembranes.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\xi"
}
] |
https://en.wikipedia.org/wiki?curid=14626877
|
14627005
|
Nisnevich topology
|
Structure in algebraic geometry
In algebraic geometry, the Nisnevich topology, sometimes called the completely decomposed topology, is a Grothendieck topology on the category of schemes which has been used in algebraic K-theory, A¹ homotopy theory, and the theory of motives. It was originally introduced by Yevsey Nisnevich, who was motivated by the theory of adeles.
Definition.
A morphism of schemes formula_0 is called a Nisnevich morphism if it is an étale morphism such that for every (possibly non-closed) point "x" ∈ "X", there exists a point "y" ∈ "Y" in the fiber "f"−1("x") such that the induced map of residue fields "k"("x") → "k"("y") is an isomorphism. Equivalently, "f" must be flat, unramified, locally of finite presentation, and for every point "x" ∈ "X", there must exist a point "y" in the fiber "f"−1("x") such that "k"("x") → "k"("y") is an isomorphism.
A family of morphisms {"u"α : "X"α → "X"} is a Nisnevich cover if each morphism in the family is étale and for every (possibly non-closed) point "x" ∈ "X", there exists "α" and a point "y" ∈ "X"α s.t. "u"α("y") = "x" and the induced map of residue fields "k"("x") → "k"("y") is an isomorphism. If the family is finite, this is equivalent to the morphism formula_1 from formula_2 to "X" being a Nisnevich morphism. The Nisnevich covers are the covering families of a pretopology on the category of schemes and morphisms of schemes. This generates a topology called the Nisnevich topology. The category of schemes with the Nisnevich topology is notated "Nis".
The small Nisnevich site of X has as underlying category the same as the small étale site, that is to say, objects are schemes "U" with a fixed étale morphism "U" → "X" and the morphisms are morphisms of schemes compatible with the fixed maps to "X". Admissible coverings are Nisnevich morphisms.
The big Nisnevich site of X has as underlying category schemes with a fixed map to "X" and morphisms the morphisms of "X"-schemes. The topology is the one given by Nisnevich morphisms.
The Nisnevich topology has several variants which are adapted to studying singular varieties. Covers in these topologies include resolutions of singularities or weaker forms of resolution.
The cdh and l′ topologies are incomparable with the étale topology, and the h topology is finer than the étale topology.
Equivalent conditions for a Nisnevich cover.
Assume the category consists of smooth schemes over a qcqs (quasi-compact and quasi-separated) scheme. Then the original definition due to Nisnevich (Remark 3.39), which is equivalent to the definition above, states that a family of morphisms formula_3 of schemes is a Nisnevich covering if each formula_4 is an étale morphism and, for every field formula_5, the induced map on formula_5-points, formula_6, is surjective.
Yet another equivalent condition for Nisnevich covers is due to Lurie: the Nisnevich topology is generated by all finite families of étale morphisms formula_3 such that there is a finite sequence of finitely presented closed subschemes formula_7 such that for formula_8, the map formula_9 admits a section.
Notice that when evaluating these morphisms on formula_10-points, this implies the map is a surjection. Conversely, taking the trivial sequence formula_11 gives the result in the opposite direction.
Motivation.
One of the key motivations for introducing the Nisnevich topology in motivic cohomology is the fact that a Zariski open cover formula_12 does not yield a resolution of Zariski sheaves formula_13 where formula_14 is the representable functor over the category of presheaves with transfers. For the Nisnevich topology, the local rings are Henselian, and a finite cover of a Henselian ring is given by a product of Henselian rings, showing exactness.
Local rings in the Nisnevich topology.
If "x" is a point of a scheme "X", then the local ring of "x" in the Nisnevich topology is the Henselization of the local ring of "x" in the Zariski topology. This differs from the Etale topology where the local rings are "strict" henselizations. One of the important points between the two cases can be seen when looking at a local ring formula_15 with residue field formula_16. In this case, the residue fields of the Henselization and strict Henselization differformula_17so the residue field of the strict Henselization gives the separable closure of the original residue field formula_16.
Examples of Nisnevich Covering.
Consider the étale cover given by
formula_18
If we look at the associated morphism of residue fields for the generic point of the base, we see that this is a degree 2 extension
formula_19
This implies that this étale cover is not Nisnevich. We can add the étale morphism formula_20 to get a Nisnevich cover since there is an isomorphism of points for the generic point of formula_21.
Conditional covering.
If we take formula_22 as a scheme over a field formula_5, then a covering (pg. 21) given by formula_23 where formula_24 is the inclusion and formula_25, is Nisnevich if and only if formula_26 has a solution over formula_5. Otherwise, the covering cannot be a surjection on formula_5-points, and in this case the covering is only an étale covering.
Zariski coverings.
Every Zariski covering (pg. 21) is Nisnevich, but the converse does not hold in general. This can be proven easily using any of the definitions, since the induced maps of residue fields are always isomorphisms for a Zariski cover, and by definition a Zariski cover gives a surjection on points. In addition, Zariski inclusions are always étale morphisms.
Applications.
Nisnevich introduced his topology to provide a cohomological interpretation of the class set of an affine group scheme, which was originally defined in adelic terms. He used it to partially prove a conjecture of Alexander Grothendieck and Jean-Pierre Serre which states that a rationally trivial torsor under a reductive group scheme over an integral regular Noetherian base scheme is locally trivial in the Zariski topology. One of the key properties of the Nisnevich topology is the existence of a descent spectral sequence. Let "X" be a Noetherian scheme of finite Krull dimension, and let "G""n"("X") be the Quillen K-groups of the category of coherent sheaves on "X". If formula_27 is the sheafification of these groups with respect to the Nisnevich topology, there is a convergent spectral sequence
formula_28
for p ≥ 0, q ≥ 0, and p - q ≥ 0. If formula_29 is a prime number not equal to the characteristic of "X", then there is an analogous convergent spectral sequence for K-groups with coefficients in formula_30.
The Nisnevich topology has also found important applications in algebraic K-theory, A¹ homotopy theory and the theory of motives.
|
[
{
"math_id": 0,
"text": "f:Y \\to X"
},
{
"math_id": 1,
"text": "\\coprod u_\\alpha"
},
{
"math_id": 2,
"text": "\\coprod X_\\alpha"
},
{
"math_id": 3,
"text": "\\{p_\\alpha: U_\\alpha \\to X\\}_{\\alpha \\in A}"
},
{
"math_id": 4,
"text": "p_\\alpha"
},
{
"math_id": 5,
"text": "k"
},
{
"math_id": 6,
"text": "p_k: \\coprod_{\\alpha}U_\\alpha(k) \\to X(k)"
},
{
"math_id": 7,
"text": "\\varnothing = Z_{n+1} \\subseteq Z_n \\subseteq \\cdots \\subseteq Z_1 \\subseteq Z_0 = X"
},
{
"math_id": 8,
"text": "0\\leq m\\leq n"
},
{
"math_id": 9,
"text": "\\coprod_{\\alpha \\in A} p_\\alpha^{-1}(Z_m - Z_{m+1}) \\to Z_m - Z_{m+1}"
},
{
"math_id": 10,
"text": "S"
},
{
"math_id": 11,
"text": "Z_0 = X"
},
{
"math_id": 12,
"text": "\\pi: U \\to X"
},
{
"math_id": 13,
"text": "\\cdots \\to \\mathbf{Z}_{tr}(U\\times_XU) \\to \\mathbf{Z}_{tr}(U) \\to \\mathbf{Z}_{tr}(X) \\to 0"
},
{
"math_id": 14,
"text": "\\mathbf{Z}_{tr}(Y)(Z) := \\text{Hom}_{cor}(Z,Y)"
},
{
"math_id": 15,
"text": "(R,\\mathfrak{p})"
},
{
"math_id": 16,
"text": "\\kappa"
},
{
"math_id": 17,
"text": "\\begin{align}\n(R,\\mathfrak{p})^h &\\rightsquigarrow \\kappa \\\\\n(R,\\mathfrak{p})^{sh} &\\rightsquigarrow \\kappa^{sep}\n\\end{align}"
},
{
"math_id": 18,
"text": "\n\\text{Spec}(\\mathbb{C}[x,t,t^{-1}]/(x^2 - t)) \\to \\text{Spec}(\\mathbb{C}[t,t^{-1}])\n"
},
{
"math_id": 19,
"text": "\n\\mathbb{C}(t) \\to \\frac{\\mathbb{C}(t)[x]}{(x^2 - t)}\n"
},
{
"math_id": 20,
"text": "\\mathbb{A}^1 - \\{0,1\\} \\to \\mathbb{A}^1 - \\{0\\}"
},
{
"math_id": 21,
"text": "\\mathbb{A}^1-\\{0\\}"
},
{
"math_id": 22,
"text": "\\mathbb{A}^1"
},
{
"math_id": 23,
"text": "\\begin{align}\ni: \\mathbb{A}^1 - \\{a \\} \\hookrightarrow \\mathbb{A}^1 \\\\\nf: \\mathbb{A}^1 - \\{0 \\} \\to \\mathbb{A}^1\n\\end{align}"
},
{
"math_id": 24,
"text": "i"
},
{
"math_id": 25,
"text": "f(x) = x^k"
},
{
"math_id": 26,
"text": "x^k = a"
},
{
"math_id": 27,
"text": "\\tilde G_n^{\\,\\text{cd}}(X)"
},
{
"math_id": 28,
"text": "E^{p,q}_2 = H^p(X_\\text{cd}, \\tilde G_q^{\\,\\text{cd}}) \\Rightarrow G_{q-p}(X)"
},
{
"math_id": 29,
"text": "\\ell"
},
{
"math_id": 30,
"text": "\\mathbf{Z}/\\ell\\mathbf{Z}"
}
] |
https://en.wikipedia.org/wiki?curid=14627005
|
1462712
|
Centrality
|
Degree of connectedness within a graph
In graph theory and network analysis, indicators of centrality assign numbers or rankings to nodes within a graph corresponding to their network position. Applications include identifying the most influential person(s) in a social network, key infrastructure nodes in the Internet or urban networks, super-spreaders of disease, and brain networks.<ref name="10.1016/j.tics.2013.09.012"></ref><ref name="10.1038/s41598-021-81767-7"></ref> Centrality concepts were first developed in social network analysis, and many of the terms used to measure centrality reflect their sociological origin.
Definition and characterization of centrality indices.
Centrality indices are answers to the question "What characterizes an important vertex?" The answer is given in terms of a real-valued function on the vertices of a graph, where the values produced are expected to provide a ranking which identifies the most important nodes.
The word "importance" has a wide number of meanings, leading to many different definitions of centrality. Two categorization schemes have been proposed. "Importance" can be conceived in relation to a type of flow or transfer across the network. This allows centralities to be classified by the type of flow they consider important. "Importance" can alternatively be conceived as involvement in the cohesiveness of the network. This allows centralities to be classified based on how they measure cohesiveness. Both of these approaches divide centralities in distinct categories. A further conclusion is that a centrality which is appropriate for one category will often "get it wrong" when applied to a different category.
Many, though not all, centrality measures effectively count the number of paths (also called walks) of some type going through a given vertex; the measures differ in how the relevant walks are defined and counted. Restricting consideration to this group allows for a taxonomy which places many centralities on a spectrum from those concerned with walks of length one (degree centrality) to infinite walks (eigenvector centrality). Other centrality measures, such as betweenness centrality, focus not just on overall connectedness but on occupying positions that are pivotal to the network's connectivity.
Characterization by network flows.
A network can be considered a description of the paths along which something flows. This allows a characterization based on the type of flow and the type of path encoded by the centrality. A flow can be based on transfers, where each indivisible item goes from one node to another, like a package delivery going from the delivery site to the client's house. A second case is serial duplication, in which an item is replicated so that both the source and the target have it. An example is the propagation of information through gossip, with the information being propagated in a private way and with both the source and the target nodes being informed at the end of the process. The last case is parallel duplication, with the item being duplicated to several links at the same time, like a radio broadcast which provides the same information to many listeners at once.
Likewise, the type of path can be constrained to geodesics (shortest paths), paths (no vertex is visited more than once), trails (vertices can be visited multiple times, no edge is traversed more than once), or walks (vertices and edges can be visited/traversed multiple times).
Characterization by walk structure.
An alternative classification can be derived from how the centrality is constructed. This again splits into two classes. Centralities are either "radial" or "medial." Radial centralities count walks which start/end from the given vertex. The degree and eigenvalue centralities are examples of radial centralities, counting the number of walks of length one or length infinity. Medial centralities count walks which pass through the given vertex. The canonical example is Freeman's betweenness centrality, the number of shortest paths which pass through the given vertex.
Likewise, the counting can capture either the "volume" or the "length" of walks. Volume is the total number of walks of the given type. The three examples from the previous paragraph fall into this category. Length captures the distance from the given vertex to the remaining vertices in the graph. Closeness centrality, the total geodesic distance from a given vertex to all other vertices, is the best known example. Note that this classification is independent of the type of walk counted (i.e. walk, trail, path, geodesic).
Borgatti and Everett propose that this typology provides insight into how best to compare centrality measures. Centralities placed in the same box in this 2×2 classification are similar enough to make plausible alternatives; one can reasonably compare which is better for a given application. Measures from different boxes, however, are categorically distinct. Any evaluation of relative fitness can only occur within the context of predetermining which category is more applicable, rendering the comparison moot.
Radial-volume centralities exist on a spectrum.
The characterization by walk structure shows that almost all centralities in wide use are radial-volume measures. These encode the belief that a vertex's centrality is a function of the centrality of the vertices it is associated with. Centralities distinguish themselves on how association is defined.
Bonacich showed that if association is defined in terms of walks, then a family of centralities can be defined based on the length of walk considered. Degree centrality counts walks of length one, while eigenvalue centrality counts walks of length infinity. Alternative definitions of association are also reasonable. Alpha centrality allows vertices to have an external source of influence. Estrada's subgraph centrality proposes only counting closed paths (triangles, squares, etc.).
The heart of such measures is the observation that powers of the graph's adjacency matrix gives the number of walks of length given by that power. Similarly, the matrix exponential is also closely related to the number of walks of a given length. An initial transformation of the adjacency matrix allows a different definition of the type of walk counted. Under either approach, the centrality of a vertex can be expressed as an infinite sum, either
formula_0
for matrix powers or
formula_1
for matrix exponentials, where formula_2 is the length of a walk, formula_3 is the (possibly transformed) adjacency matrix, and formula_4 is a parameter governing how strongly longer walks are discounted.
Bonacich's family of measures does not transform the adjacency matrix. Alpha centrality replaces the adjacency matrix with its resolvent. Subgraph centrality replaces the adjacency matrix with its trace. A startling conclusion is that regardless of the initial transformation of the adjacency matrix, all such approaches have common limiting behavior. As formula_4 approaches zero, the indices converge to degree centrality. As formula_4 approaches its maximal value, the indices converge to eigenvalue centrality.
Game-theoretic centrality.
The common feature of most of the aforementioned standard measures is that they assess the
importance of a node by focusing only on the role that a node plays by itself. However,
in many applications such an approach is inadequate because of synergies that may occur
if the functioning of nodes is considered in groups.
For example, consider the problem of stopping an epidemic. Looking at the image of the network above, which nodes should we vaccinate? Based on the previously described measures, we want to recognize the nodes that are most important in disease spreading. Approaches based only on centralities, which focus on individual features of nodes, may not be a good idea. The nodes in the red square individually cannot stop disease spreading, but considering them as a group, we clearly see that they can stop the disease if it has started in nodes formula_5, formula_6, and formula_7. Game-theoretic centralities try to address these problems and opportunities using tools from game theory. The approach proposed in uses the Shapley value. Because of the computational hardness of calculating the Shapley value, most efforts in this domain are directed at implementing new algorithms and methods which rely on a peculiar topology of the network or a special character of the problem. Such an approach may reduce the time complexity from exponential to polynomial.
Similarly, the solution concept authority distribution () applies the Shapley-Shubik power index, rather than the Shapley value, to measure the bilateral direct influence between the players. The distribution is indeed a type of eigenvector centrality. It is used to sort big data objects in Hu (2020), such as ranking U.S. colleges.
Important limitations.
Centrality indices have two important limitations, one obvious and the other subtle. The obvious limitation is that a centrality which is optimal for one application is often sub-optimal for a different application. Indeed, if this were not so, we would not need so many different centralities. An illustration of this phenomenon is provided by the Krackhardt kite graph, for which three different notions of centrality give three different choices of the most central vertex.
The more subtle limitation is the commonly held fallacy that vertex centrality indicates the relative importance of vertices. Centrality indices are explicitly designed to produce a ranking which allows indication of the most important vertices. This they do well, under the limitation just noted. They are not designed to measure the influence of nodes in general. Recently, network physicists have begun developing node influence metrics to address this problem.
The error is two-fold. Firstly, a ranking only orders vertices by importance; it does not quantify the difference in importance between different levels of the ranking. This may be mitigated by applying Freeman centralization to the centrality measure in question, which provides some insight into the importance of nodes depending on the differences of their centralization scores. Furthermore, Freeman centralization enables one to compare several networks by comparing their highest centralization scores.
Secondly, the features which (correctly) identify the most important vertices in a given network/application do not necessarily generalize to the remaining vertices.
For the majority of other network nodes, the rankings may be meaningless. This explains why, for example, only the first few results of a Google image search appear in a reasonable order. PageRank is a highly unstable measure, showing frequent rank reversals after small adjustments of the jump parameter.
While the failure of centrality indices to generalize to the rest of the network may at first seem counter-intuitive, it follows directly from the above definitions.
Complex networks have heterogeneous topology. To the extent that the optimal measure depends on the network structure of the most important vertices, a measure which is optimal for such vertices is sub-optimal for the remainder of the network.
Degree centrality.
Historically first and conceptually simplest is degree centrality, which is defined as the number of links incident upon a node (i.e., the number of ties that a node has). The degree can be interpreted in terms of the immediate risk of a node for catching whatever is flowing through the network (such as a virus, or some information). In the case of a directed network (where ties have direction), we usually define two separate measures of degree centrality, namely indegree and outdegree. Accordingly, indegree is a count of the number of ties directed to the node and outdegree is the number of ties that the node directs to others. When ties are associated to some positive aspects such as friendship or collaboration, indegree is often interpreted as a form of popularity, and outdegree as gregariousness.
The degree centrality of a vertex formula_8, for a given graph formula_9 with formula_10 vertices and formula_11 edges, is defined as
formula_12
Calculating degree centrality for all the nodes in a graph takes formula_13 in a dense adjacency matrix representation of the graph, and for edges takes formula_14 in a sparse matrix representation.
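As a concrete illustration of the definition above, the following minimal Python sketch (an illustrative assumption of this edit, not drawn from any particular library) computes the degree of every vertex of a small hypothetical undirected graph stored as a dictionary of neighbour sets.
<syntaxhighlight lang="python">
# Minimal sketch: degree centrality of an undirected graph.
# The graph is a hypothetical example; vertices map to sets of neighbours.
def degree_centrality(adj):
    """Return a dict mapping each vertex to its degree."""
    return {v: len(neighbours) for v, neighbours in adj.items()}

graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b"},
}
print(degree_centrality(graph))  # {'a': 2, 'b': 3, 'c': 2, 'd': 1}
</syntaxhighlight>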
The definition of centrality on the node level can be extended to the whole graph, in which case we are speaking of "graph centralization". Let formula_15 be the node with highest degree centrality in formula_16. Let formula_17 be the formula_18-node connected graph that maximizes the following quantity (with formula_19 being the node with highest degree centrality in formula_20):
formula_21
Correspondingly, the degree centralization of the graph formula_16 is as follows:
formula_22
The value of formula_23 is maximized when the graph formula_20 contains one central node to which all other nodes are connected (a star graph), and in this case
formula_24
So, for any graph formula_25
formula_26
Also, a new extensive global measure for degree centrality named Tendency to Make Hub (TMH) is defined as follows:
formula_27
where TMH increases by appearance of degree centrality in the network.
Closeness centrality.
In a connected graph, the normalized closeness centrality (or closeness) of a node is the average length of the shortest path between the node and all other nodes in the graph. Thus the more central a node is, the closer it is to all other nodes.
Closeness was defined by Alex Bavelas (1950) as the reciprocal of the farness, that is formula_28 where formula_29 is the distance between vertices "u" and "v". However, when speaking of closeness centrality, people usually refer to its normalized form, given by the previous formula multiplied by formula_30, where formula_31 is the number of nodes in the graph
formula_32
This normalisation allows comparisons between nodes of graphs of different sizes. For many graphs, there is a strong correlation between the inverse of closeness and the logarithm of degree, formula_33 where formula_34 is the degree of vertex "v" while α and β are constants for each network.
Taking distances "from" or "to" all other nodes is irrelevant in undirected graphs, whereas it can produce totally different results in directed graphs (e.g. a website can have a high closeness centrality from outgoing link, but low closeness centrality from incoming links).
Harmonic centrality.
In a (not necessarily connected) graph, the harmonic centrality reverses the sum and reciprocal operations in the definition of closeness centrality:
formula_35
where formula_36 if there is no path from "u" to "v". Harmonic centrality can be normalized by dividing by formula_30, where formula_31 is the number of nodes in the graph.
Harmonic centrality was proposed by Marchiori and Latora (2000) and then independently by Dekker (2005), using the name "valued centrality," and by Rochat (2009).
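A corresponding minimal sketch for harmonic centrality is given below; unreachable vertices contribute nothing to the sum, matching the convention that 1/d is zero when there is no path. The disconnected example graph is hypothetical.
<syntaxhighlight lang="python">
# Minimal sketch: harmonic centrality, H(v) = sum over u != v of 1/d(u,v),
# with unreachable vertices contributing 0 (they are never visited by the BFS).
from collections import deque

def harmonic_centrality(adj):
    scores = {}
    for v in adj:
        dist = {v: 0}
        queue = deque([v])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        scores[v] = sum(1.0 / d for u, d in dist.items() if u != v)
    return scores

# Two disconnected edges: each vertex "sees" only its single neighbour.
print(harmonic_centrality({"a": {"b"}, "b": {"a"}, "c": {"d"}, "d": {"c"}}))
</syntaxhighlight>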
Betweenness centrality.
Betweenness is a centrality measure of a vertex within a graph (there is also edge betweenness, which is not discussed here). Betweenness centrality quantifies the number of times a node acts as a bridge along the shortest path between two other nodes. It was introduced as a measure for quantifying the control of a human on the communication between other humans in a social network by Linton Freeman. In his conception, vertices that have a high probability to occur on a randomly chosen shortest path between two randomly chosen vertices have a high betweenness.
The betweenness of a vertex formula_8 in a graph formula_9 with formula_37 vertices is computed as follows: for each pair of vertices ("s", "t"), compute the shortest paths between them; then, for each pair, determine the fraction of those shortest paths that pass through the vertex in question (here, vertex formula_8); finally, sum this fraction over all pairs of vertices ("s", "t").
More compactly the betweenness can be represented as:
formula_38
where formula_39 is total number of shortest paths from node formula_40 to node formula_41 and formula_42 is the number of those paths that pass through formula_8. The betweenness may be normalised by dividing through the number of pairs of vertices not including "v", which for directed graphs is formula_43 and for undirected graphs is formula_44. For example, in an undirected star graph, the center vertex (which is contained in every possible shortest path) would have a betweenness of formula_44 (1, if normalised) while the leaves (which are contained in no shortest paths) would have a betweenness of 0.
From a calculation aspect, both betweenness and closeness centralities of all vertices in a graph involve calculating the shortest paths between all pairs of vertices on a graph, which requires formula_45 time with the Floyd–Warshall algorithm. However, on sparse graphs, Johnson's algorithm may be more efficient, taking formula_46 time. In the case of unweighted graphs the calculations can be done with Brandes' algorithm which takes formula_47 time. Normally, these algorithms assume that graphs are undirected and connected with the allowance of loops and multiple edges. When specifically dealing with network graphs, often graphs are without loops or multiple edges to maintain simple relationships (where edges represent connections between two people or vertices). In this case, using Brandes' algorithm will divide final centrality scores by 2 to account for each shortest path being counted twice.
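As a hedged illustration, the star-graph example from the text can be checked with the third-party NetworkX library (assuming it is installed), which implements Brandes' algorithm internally.
<syntaxhighlight lang="python">
# Betweenness of a star graph: the centre lies on every shortest path, the leaves on none.
import networkx as nx

G = nx.star_graph(3)  # centre 0 connected to leaves 1, 2, 3
print(nx.betweenness_centrality(G))                    # normalised: centre 1.0, leaves 0.0
print(nx.betweenness_centrality(G, normalized=False))  # centre (n-1)(n-2)/2 = 3.0
</syntaxhighlight>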
Eigenvector centrality.
Eigenvector centrality (also called eigencentrality) is a measure of the influence of a node in a network. It assigns relative scores to all nodes in the network based on the concept that connections to high-scoring nodes contribute more to the score of the node in question than equal connections to low-scoring nodes. Google's PageRank and the Katz centrality are variants of the eigenvector centrality.
Using the adjacency matrix to find eigenvector centrality.
For a given graph formula_9 with formula_10 number of vertices let formula_48 be the adjacency matrix, i.e. formula_49 if vertex formula_8 is linked to vertex formula_41, and formula_50 otherwise. The relative centrality score of vertex formula_8 can be defined as:
formula_51
where formula_52 is a set of the neighbors of formula_8 and formula_53 is a constant. With a small rearrangement this can be rewritten in vector notation as the eigenvector equation
formula_54
In general, there will be many different eigenvalues formula_53 for which a non-zero eigenvector solution exists. Since the entries in the adjacency matrix are non-negative, there is a unique largest eigenvalue, which is real and positive, by the Perron–Frobenius theorem. This greatest eigenvalue results in the desired centrality measure. The formula_55 component of the related eigenvector then gives the relative centrality score of the vertex formula_8 in the network. The eigenvector is only defined up to a common factor, so only the ratios of the centralities of the vertices are well defined. To define an absolute score one must normalise the eigenvector, e.g., such that the sum over all vertices is 1 or the total number of vertices "n". Power iteration is one of many eigenvalue algorithms that may be used to find this dominant eigenvector. Furthermore, this can be generalized so that the entries in "A" can be real numbers representing connection strengths, as in a stochastic matrix.
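The following is a minimal power-iteration sketch of the eigenvector equation above, written in plain Python for a small adjacency matrix; the tolerance, iteration cap, and example matrix are arbitrary illustrative choices.
<syntaxhighlight lang="python">
# Minimal sketch: eigenvector centrality by power iteration on an adjacency matrix.
def eigenvector_centrality(A, iterations=100, tol=1e-9):
    n = len(A)
    x = [1.0] * n
    for _ in range(iterations):
        x_new = [sum(A[v][t] * x[t] for t in range(n)) for v in range(n)]  # x_new = A x
        norm = sum(value * value for value in x_new) ** 0.5
        x_new = [value / norm for value in x_new]
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

A = [[0, 1, 1, 0],
     [1, 0, 1, 1],
     [1, 1, 0, 0],
     [0, 1, 0, 0]]
print(eigenvector_centrality(A))  # the second vertex (highest degree) scores highest
</syntaxhighlight>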
Katz centrality.
Katz centrality is a generalization of degree centrality. Degree centrality measures the number of direct neighbors, and Katz centrality measures the number of all nodes that can be connected through a path, while the contributions of distant nodes are penalized. Mathematically, it is defined as
formula_56
where formula_57 is an attenuation factor in formula_58.
Katz centrality can be viewed as a variant of eigenvector centrality. Another form of Katz centrality is
formula_59
Compared to the expression of eigenvector centrality, formula_60 is replaced by formula_61
It is shown that the principal eigenvector (associated with the largest eigenvalue of formula_62, the adjacency matrix) is the limit of Katz centrality as formula_57 approaches formula_63 from below.
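A minimal fixed-point sketch of the second form given above follows; the attenuation factor must stay below the reciprocal of the largest eigenvalue of the adjacency matrix for the iteration to converge, and the example values are illustrative assumptions.
<syntaxhighlight lang="python">
# Minimal sketch: Katz centrality via the fixed point of x_i = alpha * sum_j a_ij * (x_j + 1).
def katz_centrality(A, alpha=0.1, iterations=1000, tol=1e-9):
    n = len(A)
    x = [0.0] * n
    for _ in range(iterations):
        x_new = [alpha * sum(A[i][j] * (x[j] + 1) for j in range(n)) for i in range(n)]
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]  # path graph; alpha = 0.1 is below 1/sqrt(2), so the iteration converges
print(katz_centrality(A))
</syntaxhighlight>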
PageRank centrality.
PageRank satisfies the following equation
formula_64
where
formula_65
is the number of neighbors of node formula_66 (or number of outbound links in a directed graph). Compared to eigenvector centrality and Katz centrality, one major difference is the scaling factor formula_67. Another difference between PageRank and eigenvector centrality is that the PageRank vector is a left hand eigenvector (note the factor formula_68 has indices reversed).
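A minimal sketch of the PageRank equation above is shown for a small directed graph stored as an adjacency matrix with a[j][i] = 1 for an edge from j to i; the damping value and the guard against zero out-degree are illustrative simplifications.
<syntaxhighlight lang="python">
# Minimal sketch: PageRank iteration x_i = alpha * sum_j a_ji * x_j / L(j) + (1 - alpha) / N.
def pagerank(a, alpha=0.85, iterations=100):
    n = len(a)
    out_degree = [max(sum(row), 1) for row in a]  # L(j); crude guard for dangling nodes
    x = [1.0 / n] * n
    for _ in range(iterations):
        x = [alpha * sum(a[j][i] * x[j] / out_degree[j] for j in range(n)) + (1 - alpha) / n
             for i in range(n)]
    return x

a = [[0, 1, 1],   # node 0 links to nodes 1 and 2
     [0, 0, 1],   # node 1 links to node 2
     [1, 0, 0]]   # node 2 links to node 0
print(pagerank(a))
</syntaxhighlight>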
Percolation centrality.
A slew of centrality measures exist to determine the ‘importance’ of a single node in a complex network. However, these measures quantify the importance of a node in purely topological terms, and the value of the node does not depend on the ‘state’ of the node in any way. It remains constant regardless of network dynamics. This is true even for the weighted betweenness measures. However, a node may very well be centrally located in terms of betweenness centrality or another centrality measure, but may not be ‘centrally’ located in the context of a network in which there is percolation. Percolation of a ‘contagion’ occurs in complex networks in a number of scenarios. For example, viral or bacterial infection can spread over social networks of people, known as contact networks. The spread of disease can also be considered at a higher level of abstraction, by contemplating a network of towns or population centres, connected by road, rail or air links. Computer viruses can spread over computer networks. Rumours or news about business offers and deals can also spread via social networks of people. In all of these scenarios, a ‘contagion’ spreads over the links of a complex network, altering the ‘states’ of the nodes as it spreads, either recoverable or otherwise. For example, in an epidemiological scenario, individuals go from ‘susceptible’ to ‘infected’ state as the infection spreads. The states the individual nodes can take in the above examples could be binary (such as received/not received a piece of news), discrete (susceptible/infected/recovered), or even continuous (such as the proportion of infected people in a town), as the contagion spreads. The common feature in all these scenarios is that the spread of contagion results in the change of node states in networks. Percolation centrality (PC) was proposed with this in mind, which specifically measures the importance of nodes in terms of aiding the percolation through the network. This measure was proposed by Piraveenan et al.
Percolation centrality is defined for a given node, at a given time, as the proportion of ‘percolated paths’ that go through that node. A ‘percolated path’ is a shortest path between a pair of nodes, where the source node is percolated (e.g., infected). The target node can be percolated or non-percolated, or in a partially percolated state.
formula_69
where formula_70 is the total number of shortest paths from node formula_40 to node formula_71 and formula_72 is the number of those paths that pass through formula_8. The percolation state of the node formula_73 at time formula_41 is denoted by formula_74; two special cases are formula_75, which indicates a non-percolated state at time formula_41, and formula_76, which indicates a fully percolated state at time formula_41. The values in between indicate partially percolated states (e.g., in a network of townships, this would be the percentage of people infected in that town).
The attached weights to the percolation paths depend on the percolation levels assigned to the source nodes, based on the premise that the higher the percolation level of a source node is, the more important are the paths that originate from that node. Nodes which lie on shortest paths originating from highly percolated nodes are therefore potentially more important to the percolation. The definition of PC may also be extended to include target node weights as well. Percolation centrality calculations run in formula_77 time with an efficient implementation adopted from Brandes' fast algorithm and if the calculation needs to consider target nodes weights, the worst case time is formula_78.
Cross-clique centrality.
Cross-clique centrality of a single node in a complex graph determines the connectivity of a node to different cliques. A node with high cross-clique connectivity facilitates the propagation of information or disease in a graph. Cliques are subgraphs in which every node is connected to every other node in the clique. The cross-clique connectivity of a node formula_8 for a given graph formula_9 with formula_10 vertices and formula_11 edges, is defined as formula_79 where formula_79 is the number of cliques to which vertex formula_8 belongs. This measure was used by Faghani in 2013 but was first proposed by Everett and Borgatti in 1998 where they called it clique-overlap centrality.
Freeman centralization.
The centralization of any network is a measure of how central its most central node is in relation to how central all the other nodes are. Centralization measures then (a) calculate the sum of differences in centrality between the most central node in a network and all other nodes; and (b) divide this quantity by the theoretically largest such sum of differences in any network of the same size. Thus, every centrality measure can have its own centralization measure. Defined formally, if formula_80 is any centrality measure of point formula_73, if formula_81 is the largest such measure in the network, and if:
formula_82
is the largest sum of differences in point centrality formula_83 for any graph with the same number of nodes, then the centralization of the network is:
formula_84
The concept is due to Linton Freeman.
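As a concrete sketch, the definition can be instantiated with degree centrality, using the star-graph maximum (|V| − 1)(|V| − 2) derived earlier in the article as the denominator; the two example graphs are hypothetical.
<syntaxhighlight lang="python">
# Minimal sketch: Freeman centralization instantiated with degree centrality.
def degree_centralization(adj):
    n = len(adj)
    degrees = {v: len(neighbours) for v, neighbours in adj.items()}
    c_max = max(degrees.values())
    return sum(c_max - c for c in degrees.values()) / ((n - 1) * (n - 2))

star = {"hub": {"a", "b", "c"}, "a": {"hub"}, "b": {"hub"}, "c": {"hub"}}
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(degree_centralization(star))  # 1.0: the most centralized graph of this size
print(degree_centralization(path))  # about 0.33: far less centralized
</syntaxhighlight>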
Dissimilarity-based centrality measures.
In order to obtain better results in the ranking of the nodes of a given network, dissimilarity measures (specific to the theory of classification and data mining) have been used to enrich the centrality measures in complex networks. This is illustrated with eigenvector centrality, calculating the centrality of each node through the solution of the eigenvalue problem
formula_85
where formula_86 (coordinate-to-coordinate product) and formula_87 is an arbitrary dissimilarity matrix, defined through a dissimilarity measure, e.g., Jaccard dissimilarity given by
formula_88
This measure permits us to quantify the topological contribution (which is why it is called contribution centrality) of each node to the centrality of a given node, giving more weight/relevance to those nodes with greater dissimilarity, since these allow the given node to access nodes that it cannot access directly.
It is noteworthy that formula_89 is non-negative because formula_62 and formula_90 are non-negative matrices, so we can use the Perron–Frobenius theorem to ensure that the above problem has a unique solution for "λ" = "λmax" with c non-negative, allowing us to infer the centrality of each node in the network. Therefore, the centrality of the i-th node is
formula_91
where formula_92 is the number of nodes in the network. Several dissimilarity measures and networks were tested, obtaining improved results in the studied cases.
Centrality measures used in transportation networks.
Transportation networks such as road networks and railway networks are studied extensively in transportation science and urban planning. A number of recent studies have focused on using centrality measures to analyze transportation networks. While many of these studies simply use generic centrality measures such as Betweenness Centrality, custom centrality measures have also been defined specifically for transportation network analysis. Prominent among them is Transportation Centrality.
Transportation centrality measures the summation of the proportions of paths from pairs of nodes in a network which go through the node under consideration. In this respect it is similar to Betweenness Centrality. However, unlike Betweenness Centrality which considers only shortest paths, Transportation Centrality considers all possible paths between a pair of nodes. Therefore, Transportation Centrality is a generic version of Betweenness Centrality, and under certain conditions, it indeed reduces to Betweenness Centrality.
Transportation Centrality of a given node "v" is defined as:
formula_93
Notes and references.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sum_{k=0}^\\infty A_{R}^{k} \\beta^k "
},
{
"math_id": 1,
"text": "\\sum_{k=0}^\\infty \\frac{(A_R \\beta)^k}{k!}"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "A_R"
},
{
"math_id": 4,
"text": "\\beta"
},
{
"math_id": 5,
"text": "v_1"
},
{
"math_id": 6,
"text": "v_4"
},
{
"math_id": 7,
"text": "v_5"
},
{
"math_id": 8,
"text": "v"
},
{
"math_id": 9,
"text": "G:=(V,E)"
},
{
"math_id": 10,
"text": "|V|"
},
{
"math_id": 11,
"text": "|E|"
},
{
"math_id": 12,
"text": "C_D(v)= \\deg(v)"
},
{
"math_id": 13,
"text": "\\Theta(V^2)"
},
{
"math_id": 14,
"text": "\\Theta(E)"
},
{
"math_id": 15,
"text": "v*"
},
{
"math_id": 16,
"text": "G"
},
{
"math_id": 17,
"text": "X:=(Y,Z)"
},
{
"math_id": 18,
"text": "|Y|"
},
{
"math_id": 19,
"text": "y*"
},
{
"math_id": 20,
"text": "X"
},
{
"math_id": 21,
"text": "H= \\sum^{|Y|}_{j=1} [C_D(y*)-C_D(y_j)]"
},
{
"math_id": 22,
"text": "C_D(G)= \\frac{\\sum^{|V|}_{i=1} [C_D(v*)-C_D(v_i)]}{H}"
},
{
"math_id": 23,
"text": "H"
},
{
"math_id": 24,
"text": "H=(n-1)\\cdot((n-1)-1)=n^2-3n+2."
},
{
"math_id": 25,
"text": "G:=(V,E),"
},
{
"math_id": 26,
"text": "C_D(G)= \\frac{\\sum^{|V|}_{i=1} [C_D(v*)-C_D(v_i)] }{|V|^2-3|V|+2}"
},
{
"math_id": 27,
"text": "\\text{TMH} = \\frac{\\sum^{|V|}_{i=1} \\deg(v)^2 }{\\sum^{|V|}_{i=1} \\deg(v) }"
},
{
"math_id": 28,
"text": "C_B(v)= (\\sum_u d(u,v))^{-1}"
},
{
"math_id": 29,
"text": "d(u,v)"
},
{
"math_id": 30,
"text": "N-1"
},
{
"math_id": 31,
"text": "N"
},
{
"math_id": 32,
"text": "C(v)= \\frac{N-1}{\\sum_u d(u,v)} ."
},
{
"math_id": 33,
"text": "(C(v))^{-1} \\approx -\\alpha \\ln(k_v) + \\beta"
},
{
"math_id": 34,
"text": "k_v"
},
{
"math_id": 35,
"text": "H(v)= \\sum_{u | u \\neq v} \\frac{1}{d(u,v)}"
},
{
"math_id": 36,
"text": "1 / d(u,v) = 0"
},
{
"math_id": 37,
"text": "V"
},
{
"math_id": 38,
"text": "C_B(v)= \\sum_{s \\neq v \\neq t \\in V}\\frac{\\sigma_{st}(v)}{\\sigma_{st}}"
},
{
"math_id": 39,
"text": "\\sigma_{st}"
},
{
"math_id": 40,
"text": "s"
},
{
"math_id": 41,
"text": "t"
},
{
"math_id": 42,
"text": "\\sigma_{st}(v)"
},
{
"math_id": 43,
"text": "(n-1)(n-2)"
},
{
"math_id": 44,
"text": "(n-1)(n-2)/2"
},
{
"math_id": 45,
"text": "O(V^3)"
},
{
"math_id": 46,
"text": "O(|V||E|+|V|^2 \\log|V|)"
},
{
"math_id": 47,
"text": "O(|V||E|)"
},
{
"math_id": 48,
"text": "A = (a_{v,t})"
},
{
"math_id": 49,
"text": "a_{v,t} = 1"
},
{
"math_id": 50,
"text": "a_{v,t} = 0"
},
{
"math_id": 51,
"text": "x_v = \\frac{1}{\\lambda} \\sum_{t \\in M(v)}x_t = \\frac{1}{\\lambda} \\sum_{t \\in G} a_{v,t}x_t"
},
{
"math_id": 52,
"text": "M(v)"
},
{
"math_id": 53,
"text": "\\lambda"
},
{
"math_id": 54,
"text": "\\mathbf{Ax} = {\\lambda}\\mathbf{x}"
},
{
"math_id": 55,
"text": "v^{th}"
},
{
"math_id": 56,
"text": "x_i = \\sum_{k=1}^{\\infin}\\sum_{j=1}^N \\alpha^k (A^k)_{ji}"
},
{
"math_id": 57,
"text": "\\alpha"
},
{
"math_id": 58,
"text": "(0,1)"
},
{
"math_id": 59,
"text": "x_i = \\alpha \\sum_{j =1}^N a_{ij}(x_j+1)."
},
{
"math_id": 60,
"text": "x_j"
},
{
"math_id": 61,
"text": "x_j+1."
},
{
"math_id": 62,
"text": "A"
},
{
"math_id": 63,
"text": "\\tfrac{1}{\\lambda}"
},
{
"math_id": 64,
"text": "x_i = \\alpha \\sum_{j } a_{ji}\\frac{x_j}{L(j)} + \\frac{1-\\alpha}{N},"
},
{
"math_id": 65,
"text": "L(j) = \\sum_{i} a_{ji}"
},
{
"math_id": 66,
"text": "j"
},
{
"math_id": 67,
"text": "L(j)"
},
{
"math_id": 68,
"text": "a_{ji}"
},
{
"math_id": 69,
"text": "PC^t(v)= \\frac{1}{N-2}\\sum_{s \\neq v \\neq r}\\frac{\\sigma_{sr}(v)}{\\sigma_{sr}}\\frac{{x^t}_s}{{\\sum {[{x^t}_i}]}-{x^t}_v}"
},
{
"math_id": 70,
"text": "\\sigma_{sr}"
},
{
"math_id": 71,
"text": "r"
},
{
"math_id": 72,
"text": "\\sigma_{sr}(v)"
},
{
"math_id": 73,
"text": "i"
},
{
"math_id": 74,
"text": "{x^t}_i"
},
{
"math_id": 75,
"text": "{x^t}_i=0"
},
{
"math_id": 76,
"text": "{x^t}_i=1"
},
{
"math_id": 77,
"text": "O(NM)"
},
{
"math_id": 78,
"text": "O(N^3)"
},
{
"math_id": 79,
"text": "X(v)"
},
{
"math_id": 80,
"text": "C_x(p_i)"
},
{
"math_id": 81,
"text": "C_x(p_*)"
},
{
"math_id": 82,
"text": "\\max \\sum_{i=1}^{N} (C_x(p_*)-C_x(p_i))"
},
{
"math_id": 83,
"text": "C_x"
},
{
"math_id": 84,
"text": "C_x=\\frac{\\sum_{i=1}^{N} (C_x(p_*)-C_x(p_i))}{\\max \\sum_{i=1}^{N} (C_x(p_*)-C_x(p_i))}."
},
{
"math_id": 85,
"text": "W\\mathbf{c}=\\lambda \\mathbf{c}"
},
{
"math_id": 86,
"text": "W_{ij}=A_{ij}D_{ij}"
},
{
"math_id": 87,
"text": "D_{ij}"
},
{
"math_id": 88,
"text": "D_{ij}=1-\\dfrac{|V^{+}(i)\\cap V^{+}(j)|}{|V^{+}(i)\\cup V^{+}(j)|}"
},
{
"math_id": 89,
"text": "W"
},
{
"math_id": 90,
"text": "D"
},
{
"math_id": 91,
"text": "c_i=\\dfrac{1}{n}\\sum_{j=1}^{n}W_{ij}c_{j}, \\,\\,\\,\\,\\,\\, i=1,\\cdots,n"
},
{
"math_id": 92,
"text": "n"
},
{
"math_id": 93,
"text": "TC(v)=1/((N-1)(N-2))\\Sigma_{s\\neq v \\neq t} \\frac{\\Sigma_{i \\in P^v_{s,t}} e^{-\\beta C^i_{s,t}}}{\\Sigma_{j \\in P^v_{s,t}} e^{-\\beta C^j_{s,t}}} "
}
] |
https://en.wikipedia.org/wiki?curid=1462712
|
146285
|
Gauss–Markov process
|
Gauss–Markov stochastic processes (named after Carl Friedrich Gauss and Andrey Markov) are stochastic processes that satisfy the requirements for both Gaussian processes and Markov processes. A stationary Gauss–Markov process is unique up to rescaling; such a process is also known as an Ornstein–Uhlenbeck process.
Gauss–Markov processes obey Langevin equations.
Basic properties.
Every Gauss–Markov process "X"("t") possesses the three following properties:
1. If "h"("t") is a non-zero scalar function of "t", then "Z"("t") = "h"("t")"X"("t") is also a Gauss–Markov process.
2. If "f"("t") is a non-decreasing scalar function of "t", then "Z"("t") = "X"("f"("t")) is also a Gauss–Markov process.
3. If the process is non-degenerate and mean-square continuous, then there exists a non-zero scalar function "h"("t") and a strictly increasing scalar function "f"("t") such that "X"("t") = "h"("t")"W"("f"("t")), where "W"("t") is the standard Wiener process.
Property (3) means that every non-degenerate mean-square continuous Gauss–Markov process can be synthesized from the standard Wiener process (SWP).
Other properties.
A stationary Gauss–Markov process with variance formula_0 and time constant formula_1 has the following properties.
Exponential autocorrelation: formula_2
A power spectral density (PSD) function that has the same shape as the Cauchy distribution: formula_3
The above yields the following spectral factorization: formula_4
There are also some trivial exceptions to all of the above.
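As an illustrative sketch (an assumption of this edit, not part of the original article), a stationary Gauss–Markov process with variance sigma^2 and time constant 1/beta can be simulated with the exact one-step update of the Ornstein–Uhlenbeck process; the empirical autocorrelation of the resulting path should approximate sigma^2 * exp(-beta*|tau|).
<syntaxhighlight lang="python">
# Minimal sketch: simulate a stationary Gauss-Markov (Ornstein-Uhlenbeck) process.
# Exact discretization: X(t+dt) = exp(-beta*dt) * X(t) + Gaussian noise whose
# variance is chosen so that the stationary variance remains sigma^2.
import math
import random

def simulate_ou(sigma=1.0, beta=0.5, dt=0.01, steps=10_000, seed=0):
    rng = random.Random(seed)
    phi = math.exp(-beta * dt)
    noise_std = sigma * math.sqrt(1.0 - phi * phi)
    x = rng.gauss(0.0, sigma)  # start in the stationary distribution
    path = [x]
    for _ in range(steps):
        x = phi * x + rng.gauss(0.0, noise_std)
        path.append(x)
    return path

path = simulate_ou()
# The empirical autocorrelation at lag tau should approximate sigma^2 * exp(-beta * tau).
</syntaxhighlight>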
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\textbf{E}(X^{2}(t)) = \\sigma^{2}"
},
{
"math_id": 1,
"text": "\\beta^{-1}"
},
{
"math_id": 2,
"text": "\\textbf{R}_{x}(\\tau) = \\sigma^{2}e^{-\\beta |\\tau|}."
},
{
"math_id": 3,
"text": " \\textbf{S}_{x}(j\\omega) = \\frac{2\\sigma^{2}\\beta}{\\omega^{2} + \\beta^{2}}."
},
{
"math_id": 4,
"text": "\\textbf{S}_{x}(s) = \\frac{2\\sigma^{2}\\beta}{-s^{2} + \\beta^{2}} \n = \\frac{\\sqrt{2\\beta}\\,\\sigma}{(s + \\beta)} \n \\cdot\\frac{\\sqrt{2\\beta}\\,\\sigma}{(-s + \\beta)}. "
}
] |
https://en.wikipedia.org/wiki?curid=146285
|
14628623
|
Isomap
|
Isomap is a nonlinear dimensionality reduction method. It is one of several widely used low-dimensional embedding methods. Isomap is used for computing a quasi-isometric, low-dimensional embedding of a set of high-dimensional data points. The algorithm provides a simple method for estimating the intrinsic geometry of a data manifold based on a rough estimate of each data point’s neighbors on the manifold. Isomap is highly efficient and generally applicable to a broad range of data sources and dimensionalities.
Introduction.
Isomap is one representative of isometric mapping methods, and extends metric multidimensional scaling (MDS) by incorporating the geodesic distances imposed by a weighted graph. To be specific, the classical scaling of metric MDS performs low-dimensional embedding based on the pairwise distance between data points, which is generally measured using straight-line Euclidean distance. Isomap is distinguished by its use of the geodesic distance induced by a neighborhood graph embedded in the classical scaling. This is done to incorporate manifold structure in the resulting embedding. Isomap defines the geodesic distance to be the sum of edge weights along the shortest path between two nodes (computed using Dijkstra's algorithm, for example). The top "n" eigenvectors of the geodesic distance matrix represent the coordinates in the new "n"-dimensional Euclidean space.
Algorithm.
A very high-level description of the Isomap algorithm is given below.
1. Determine the neighbors of each point, for example all points within some fixed radius, or the "k" nearest neighbors.
2. Construct a neighborhood graph: each point is connected to its neighbors, and the edge weights are the Euclidean distances between them.
3. Compute the shortest path between every pair of nodes in the graph (e.g., using Dijkstra's algorithm or the Floyd–Warshall algorithm) as an estimate of the geodesic distances.
4. Compute the lower-dimensional embedding by applying classical multidimensional scaling to the matrix of graph distances.
Possible issues.
The connectivity of each data point in the neighborhood graph is defined as its nearest "k" Euclidean neighbors in the high-dimensional space. This step is vulnerable to "short-circuit errors" if "k" is too large with respect to the manifold structure or if noise in the data moves the points slightly off the manifold. Even a single short-circuit error can alter many entries in the geodesic distance matrix, which in turn can lead to a drastically different (and incorrect) low-dimensional embedding. Conversely, if "k" is too small, the neighborhood graph may become too sparse to approximate geodesic paths accurately. But improvements have been made to this algorithm to make it work better for sparse and noisy data sets.
Relationship with other methods.
Following the connection between the classical scaling and PCA, metric MDS can be interpreted as kernel PCA. In a similar manner, the geodesic distance matrix in Isomap can be viewed as a kernel matrix. The doubly centered geodesic distance matrix "K" in Isomap is of the form
formula_0
where formula_1 is the elementwise square of the geodesic distance matrix "D" = ["D""ij"], "H" is the centering matrix, given by
formula_2
However, the kernel matrix "K" is not always positive semidefinite. The main idea for kernel Isomap is to make this "K" a Mercer kernel matrix (that is, positive semidefinite) using a constant-shifting method, in order to relate it to kernel PCA such that the generalization property naturally emerges.
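As a hedged illustration (assuming NumPy is available and using a tiny hand-written geodesic distance matrix instead of one estimated from data), the doubly centred kernel above and the subsequent classical-scaling step can be written as follows.
<syntaxhighlight lang="python">
# Minimal sketch: the Isomap kernel K = -1/2 * H * D^2 * H and the classical MDS step.
import numpy as np

def isomap_kernel(D):
    n = D.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix H
    return -0.5 * H @ (D ** 2) @ H        # doubly centred squared geodesic distances

def classical_mds(K, n_components=2):
    eigvals, eigvecs = np.linalg.eigh(K)              # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]  # keep the largest ones
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))

# Hypothetical geodesic distances of three points along a one-dimensional chain.
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
print(classical_mds(isomap_kernel(D), n_components=1))  # roughly [-1, 0, 1] up to sign
</syntaxhighlight>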
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " K = -\\frac{1}{2} HD^2 H\\, "
},
{
"math_id": 1,
"text": "D^2 = D^2_{ij}:=(D_{ij})^2"
},
{
"math_id": 2,
"text": " H = I_n-\\frac{1}{N} e_N e^T_N, \\quad\\text{where }e_N= [1\\ \\dots\\ 1]^T \\in \\mathbb{R}^N. "
}
] |
https://en.wikipedia.org/wiki?curid=14628623
|
1463006
|
Examples of vector spaces
|
This page lists some examples of vector spaces. See vector space for the definitions of terms used on this page. See also: dimension, basis.
"Notation". Let "F" denote an arbitrary field such as the real numbers R or the complex numbers C.
Trivial or zero vector space.
The simplest example of a vector space is the trivial one: {0}, which contains only the zero vector (see the third axiom in the Vector space article). Both vector addition and scalar multiplication are trivial. A basis for this vector space is the empty set, so that {0} is the 0-dimensional vector space over "F". Every vector space over "F" contains a subspace isomorphic to this one.
The zero vector space is conceptually different from the null space of a linear operator "L", which is the kernel of "L". (Incidentally, the null space of "L" is a zero space if and only if "L" is injective.)
Field.
The next simplest example is the field "F" itself. Vector addition is just field addition, and scalar multiplication is just field multiplication. This property can be used to prove that a field is a vector space. Any non-zero element of "F" serves as a basis so "F" is a 1-dimensional vector space over itself.
The field is a rather special vector space; in fact it is the simplest example of a commutative algebra over "F". Also, "F" has just two subspaces: {0} and "F" itself.
Coordinate space.
A basic example of a vector space is the following. For any positive integer "n", the set of all "n"-tuples of elements of "F" forms an "n"-dimensional vector space over "F" sometimes called "coordinate space" and denoted "F""n". An element of "F""n" is written
formula_0
where each "x""i" is an element of "F". The operations on "F""n" are defined by
formula_1
formula_2
formula_3
formula_4
Commonly, "F" is the field of real numbers, in which case we obtain real coordinate space R"n". The field of complex numbers gives complex coordinate space C"n". The "a + bi" form of a complex number shows that C itself is a two-dimensional real vector space with coordinates ("a","b"). Similarly, the quaternions and the octonions are respectively four- and eight-dimensional real vector spaces, and C"n" is a "2n"-dimensional real vector space.
The vector space "F""n" has a standard basis:
formula_5
formula_6
formula_7
formula_8
where 1 denotes the multiplicative identity in "F".
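The componentwise operations above can be made concrete with a small sketch (an illustration only), here with "F" taken to be the real numbers and "n" = 3.
<syntaxhighlight lang="python">
# Minimal sketch: the componentwise operations that make F^n a vector space (F = R, n = 3).
def vec_add(x, y):
    return tuple(a + b for a, b in zip(x, y))

def scalar_mul(alpha, x):
    return tuple(alpha * a for a in x)

x, y = (1.0, 2.0, 3.0), (0.5, -1.0, 4.0)
print(vec_add(x, y))        # (1.5, 1.0, 7.0)
print(scalar_mul(2.0, x))   # (2.0, 4.0, 6.0)
</syntaxhighlight>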
Infinite coordinate space.
Let "F"∞ denote the space of infinite sequences of elements from "F" such that only "finitely" many elements are nonzero. That is, if we write an element of "F"∞ as
formula_9
then only a finite number of the "x""i" are nonzero (i.e., the coordinates become all zero after a certain point). Addition and scalar multiplication are given as in finite coordinate space. The dimensionality of "F"∞ is countably infinite. A standard basis consists of the vectors "e""i" which contain a 1 in the "i"-th slot and zeros elsewhere. This vector space is the coproduct (or direct sum) of countably many copies of the vector space "F".
Note the role of the finiteness condition here. One could consider arbitrary sequences of elements in "F", which also constitute a vector space with the same operations, often denoted by "F"N - see below. "F"N is the "product" of countably many copies of "F".
By Zorn's lemma, "F"N has a basis (there is no obvious basis). There are uncountably infinite elements in the basis. Since the dimensions are different, "F"N is "not" isomorphic to "F"∞. It is worth noting that "F"N is (isomorphic to) the dual space of "F"∞, because a linear map "T" from "F"∞ to "F" is determined uniquely by its values "T"("ei") on the basis elements of "F"∞, and these values can be arbitrary. Thus one sees that a vector space need not be isomorphic to its double dual if it is infinite dimensional, in contrast to the finite dimensional case.
Product of vector spaces.
Starting from "n" vector spaces, or a countably infinite collection of them, each with the same field, we can define the product space like above.
Matrices.
Let "F""m"×"n" denote the set of "m"×"n" matrices with entries in "F". Then "F""m"×"n" is a vector space over "F". Vector addition is just matrix addition and scalar multiplication is defined in the obvious way (by multiplying each entry by the same scalar). The zero vector is just the zero matrix. The dimension of "F""m"×"n" is "mn". One possible choice of basis is the matrices with a single entry equal to 1 and all other entries 0.
When "m" = "n" the matrix is square and matrix multiplication of two such matrices produces a third. This vector space of dimension "n"2 forms an algebra over a field.
Polynomial vector spaces.
One variable.
The set of polynomials with coefficients in "F" is a vector space over "F", denoted "F"["x"]. Vector addition and scalar multiplication are defined in the obvious manner. If the degree of the polynomials is unrestricted then the dimension of "F"["x"] is countably infinite. If instead one restricts to polynomials with degree less than or equal to "n", then we have a vector space with dimension "n" + 1.
One possible basis for "F"["x"] is a monomial basis: the coordinates of a polynomial with respect to this basis are its coefficients, and the map sending a polynomial to the sequence of its coefficients is a linear isomorphism from "F"["x"] to the infinite coordinate space "F"∞.
The vector space of polynomials with real coefficients and degree less than or equal to "n" is often denoted by "P""n".
Several variables.
The set of polynomials in several variables with coefficients in "F" is a vector space over "F", denoted "F"["x"1, "x"2, ..., "x""r"]. Here "r" is the number of variables.
"See main article at Function space, especially the functional analysis section."
Function spaces.
Let "X" be a non-empty arbitrary set and "V" an arbitrary vector space over "F". The space of all functions from "X" to "V" is a vector space over "F" under pointwise addition and multiplication. That is, let "f" : "X" → "V" and "g" : "X" → "V" denote two functions, and let "α" in "F". We define
formula_10
formula_11
where the operations on the right hand side are those in "V". The zero vector is given by the constant function sending everything to the zero vector in "V". The space of all functions from "X" to "V" is commonly denoted "V""X".
If "X" is finite and "V" is finite-dimensional then "V""X" has dimension |"X"|(dim "V"), otherwise the space is infinite-dimensional (uncountably so if "X" is infinite).
Many of the vector spaces that arise in mathematics are subspaces of some function space. We give some further examples.
Generalized coordinate space.
Let "X" be an arbitrary set. Consider the space of all functions from "X" to "F" which vanish on all but a finite number of points in "X". This space is a vector subspace of "F""X", the space of all possible functions from "X" to "F". To see this, note that the union of two finite sets is finite, so that the sum of two functions in this space will still vanish outside a finite set.
The space described above is commonly denoted ("F""X")0 and is called "generalized coordinate space" for the following reason. If "X" is the set of numbers between 1 and "n" then this space is easily seen to be equivalent to the coordinate space "F""n". Likewise, if "X" is the set of natural numbers, N, then this space is just "F"∞.
A canonical basis for ("F""X")0 is the set of functions {δ"x" | "x" ∈ "X"} defined by
formula_12
The dimension of ("F""X")0 is therefore equal to the cardinality of "X". In this manner we can construct a vector space of any dimension over any field. Furthermore, "every vector space is isomorphic to one of this form". Any choice of basis determines an isomorphism by sending the basis onto the canonical one for ("F""X")0.
Generalized coordinate space may also be understood as the direct sum of |"X"| copies of "F" (i.e. one for each point in "X"):
formula_13
The finiteness condition is built into the definition of the direct sum. Contrast this with the direct product of |"X"| copies of "F" which would give the full function space "F""X".
Linear maps.
An important example arising in the context of linear algebra itself is the vector space of linear maps. Let "L"("V","W") denote the set of all linear maps from "V" to "W" (both of which are vector spaces over "F"). Then "L"("V","W") is a subspace of "W""V" since it is closed under addition and scalar multiplication.
Note that L("F""n","F""m") can be identified with the space of matrices "F""m"×"n" in a natural way. In fact, by choosing appropriate bases for finite-dimensional spaces V and W, L(V,W) can also be identified with "F""m"×"n". This identification normally depends on the choice of basis.
Continuous functions.
If "X" is some topological space, such as the unit interval [0,1], we can consider the space of all continuous functions from "X" to R. This is a vector subspace of R"X" since the sum of any two continuous functions is continuous and scalar multiplication is continuous.
Differential equations.
The subset of the space of all functions from R to R consisting of (sufficiently differentiable) functions that satisfy a certain differential equation is a subspace of RR if the equation is linear. This is because differentiation is a linear operation, i.e., ("a" "f" + "b" "g")′ = "a" "f"′ + "b" "g"′, where ′ is the differentiation operator.
Field extensions.
Suppose "K" is a subfield of "F" (cf. field extension). Then "F" can be regarded as a vector space over "K" by restricting scalar multiplication to elements in "K" (vector addition is defined as normal). The dimension of this vector space, if it exists, is called the "degree" of the extension. For example, the complex numbers C form a two-dimensional vector space over the real numbers R. Likewise, the real numbers R form a vector space over the rational numbers Q which has (uncountably) infinite dimension, if a Hamel basis exists.
If "V" is a vector space over "F" it may also be regarded as vector space over "K". The dimensions are related by the formula
dim"K""V" = (dim"F""V")(dim"K""F")
For example, C"n", regarded as a vector space over the reals, has dimension 2"n".
Finite vector spaces.
Apart from the trivial case of a zero-dimensional space over any field, a vector space over a field "F" has a finite number of elements if and only if "F" is a finite field and the vector space has a finite dimension. Thus we have "F""q", the unique finite field (up to isomorphism) with "q" elements. Here "q" must be a power of a prime ("q" = "p""m" with "p" prime). Then any "n"-dimensional vector space "V" over "F""q" will have "q""n" elements. Note that the number of elements in "V" is also a power of a prime (because a power of a prime power is again a prime power). The primary example of such a space is the coordinate space ("F""q")"n".
These vector spaces are of critical importance in the representation theory of finite groups, number theory, and cryptography.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x = (x_1, x_2, \\ldots, x_n) "
},
{
"math_id": 1,
"text": "x + y = (x_1 + y_1, x_2 + y_2, \\ldots, x_n + y_n) "
},
{
"math_id": 2,
"text": "\\alpha x = (\\alpha x_1, \\alpha x_2, \\ldots, \\alpha x_n) "
},
{
"math_id": 3,
"text": "0 = (0, 0, \\ldots, 0) "
},
{
"math_id": 4,
"text": "-x = (-x_1, -x_2, \\ldots, -x_n) "
},
{
"math_id": 5,
"text": "e_1 = (1, 0, \\ldots, 0) "
},
{
"math_id": 6,
"text": "e_2 = (0, 1, \\ldots, 0) "
},
{
"math_id": 7,
"text": "\\vdots "
},
{
"math_id": 8,
"text": "e_n = (0, 0, \\ldots, 1) "
},
{
"math_id": 9,
"text": "x = (x_1, x_2, x_3, \\ldots) "
},
{
"math_id": 10,
"text": "(f + g)(x) = f(x) + g(x) "
},
{
"math_id": 11,
"text": "(\\alpha f)(x) = \\alpha f(x) "
},
{
"math_id": 12,
"text": "\\delta_x(y) = \\begin{cases}1 \\quad x = y \\\\ 0 \\quad x \\neq y\\end{cases}"
},
{
"math_id": 13,
"text": "(\\mathbf F^X)_0 = \\bigoplus_{x\\in X}\\mathbf F."
}
] |
https://en.wikipedia.org/wiki?curid=1463006
|
1463025
|
Single displacement reaction
|
Type of chemical reaction
A single-displacement reaction, also known as single replacement reaction or exchange reaction, is an archaic concept in chemistry. It describes the stoichiometry of some chemical reactions in which one element or ligand is replaced by another atom or group.
It can be represented generically as:
<chem>A + BC -> AC + B</chem>
where either <chem>A</chem> and <chem>B</chem> are metals and <chem>C</chem> is an anion, or <chem>A</chem> and <chem>B</chem> are halogens and <chem>C</chem> is a cation.
This will most often occur if <chem>A</chem> is more reactive than <chem>B</chem>, thus giving a more stable product. The reaction in that case is exergonic and spontaneous.
In the first case, when <chem>A</chem> and <chem>B</chem> are metals, <chem>BC</chem> and <chem>AC</chem> are usually aqueous compounds (or very rarely in a molten state) and <chem>C</chem> is a spectator ion (i.e. remains unchanged).
<chem> A(s) + \underbrace{B+(aq) + C^{-}(aq)}_{BC(aq)} -> \underbrace{A+(aq) + C^{-}(aq)}_{AC(aq)} + B(s)
</chem>
In the reactivity series, the metals with the highest propensity to donate their electrons to react are listed first, followed by less reactive ones. Therefore, a metal higher on the list can displace anything below it. Here is a condensed version of the same:
formula_0
(Hydrogen, carbon and ammonium — labeled in gray — are not metals.)
Similarly, the halogens with the highest propensity to acquire electrons are the most reactive. The activity series for halogens is:
<chem> F2>Cl2>Br2>I2 </chem>
Due to the free state nature of <chem>A</chem> and <chem>B</chem>, single displacement reactions are also redox reactions, involving the transfer of electrons from one reactant to another. When <chem>A</chem> and <chem>B</chem> are metals, <chem>A</chem> is always oxidized and <chem>B</chem> is always reduced. When they are halogens, since halogens prefer to gain electrons, <chem>A</chem> is reduced (from <chem>0</chem> to <chem>-1</chem>) and <chem>B</chem> is oxidized (from <chem>-1</chem> to <chem>0</chem>).
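As a rough illustration of how the activity series is used to predict whether a displacement occurs, the following sketch simply compares positions in the condensed series quoted above; the list contents mirror that series and the function name is invented for the example.
METAL_ACTIVITY = ["K", "Ca", "Na", "Mg", "Al", "C", "Zn", "Fe", "NH4+", "H+", "Cu", "Ag", "Au"]
HALOGEN_ACTIVITY = ["F2", "Cl2", "Br2", "I2"]

def displaces(free_element, bound_element, series):
    """True if the free element is more reactive (earlier in the series) than the bound one."""
    return series.index(free_element) < series.index(bound_element)

print(displaces("Zn", "Cu", METAL_ACTIVITY))      # True:  Zn + CuSO4 reacts
print(displaces("Fe", "Zn", METAL_ACTIVITY))      # False: Fe + ZnSO4 gives no reaction
print(displaces("Cl2", "Br2", HALOGEN_ACTIVITY))  # True:  Cl2 displaces bromide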
Cation replacement.
Here one cation replaces another:
<chem> A + BC -> AC + B </chem>
Some examples are:
<chem>Fe + CuSO4 -> FeSO4 + Cu</chem>
(CuSO4 is blue vitriol; FeSO4 is green vitriol)
<chem>Zn + CuSO4 -> ZnSO4 + Cu</chem>
(CuSO4 is blue vitriol; ZnSO4 is white vitriol)
<chem>Zn + FeSO4 -> ZnSO4 + Fe</chem>
(FeSO4 is green vitriol; ZnSO4 is white vitriol)
These reactions are exothermic and the rise in temperature is usually in the order of the reactivity of the different metals.
If the reactant in elemental form is not the more reactive metal, then no reaction will occur; for example, the reverse of the reactions above does not take place:
<chem>Fe + ZnSO4 -> </chem> No Reaction
Anion replacement.
Here one anion replaces another:
<chem> A + CB -> CA + B </chem>
Some examples are:
<chem> Cl2 + 2NaBr -> 2NaCl + Br2 </chem>
<chem> Br2 + 2KI -> 2KBr + I2(v) </chem>
<chem> Cl2 + H2S -> 2HCl + S(v) </chem>
Again, the less reactive halogen cannot replace the more reactive halogen:
<chem>I2 + 2KBr -> </chem> no reaction
Common reactions.
Metal-acid reaction.
Metals react with acids to form salts and hydrogen gas.
<chem>Zn(s) + 2HCl(aq) -> ZnCl2(aq) + H2 ^</chem>
However, less reactive metals cannot displace the hydrogen from acids. (They may react with oxidizing acids though.)
<chem>Cu + HCl -> </chem> No reaction
Reaction between metal and water.
Metals react with water to form metal oxides and hydrogen gas. The metal oxides further dissolve in water to form alkalies.
<chem>Fe(s) + H2O (g) -> FeO(s) + H2 ^</chem>
<chem>Ca(s) + 2H2O (l) -> Ca(OH)2(aq) + H2 ^</chem>
The reaction can be extremely violent with alkali metals as the hydrogen gas catches fire.
Metals like gold and silver, which are below hydrogen in the reactivity series, do not react with water.
Metal extraction.
Coke or more reactive metals are used to reduce metal oxides to the metal, such as in the carbothermic reaction of zinc oxide (zincite) with carbon to produce zinc metal:
<chem>ZnO + C -> Zn + CO</chem>
and the use of aluminium to produce manganese from manganese dioxide:
<chem>3MnO2 + 4Al -> 3Mn + 2Al2O3</chem>
Such reactions are also used in extraction of boron, silicon, titanium and tungsten.
<chem>3SiO2 + 4Al -> 3Si + 2Al2O3</chem>
<chem>B2O3 + 3Mg -> 2B + 3MgO</chem>
<chem>TiCl4 + 2Mg -> Ti + 2MgCl2</chem>
<chem>WF6 + 3 H2 -> W + 6 HF</chem>
Thermite reaction.
Using highly reactive metals as reducing agents leads to exothermic reactions that melt the metal produced. This is used for welding railway tracks.
<chem>Fe2O3(s) + 2 Al(s) -> 2 Fe(l) + Al2O3(s)</chem>
(Haematite)
<chem>3CuO + 2Al -> 3Cu + Al2O3</chem>
Silver tarnish.
Silver tarnishes due to the presence of hydrogen sulfide, leading to formation of silver sulfide.
<chem>4Ag + 2H2S + O2 -> 2Ag2S + 2H2O</chem>
<chem>3Ag2S + 2Al -> 6Ag + Al2S3</chem>
Extraction of halogens.
Chlorine is manufactured industrially by the Deacon process. The reaction takes place at about 400 to 450 °C in the presence of a variety of catalysts such as <chem>CuCl2</chem>.
<chem>4HCl + O2 -> 2 Cl2 + 2H2O </chem>
Bromine and iodine are extracted from brine by displacement with chlorine.
<chem>2HBr + Cl2 -> 2HCl + Br2 ^ </chem>
<chem>2HI + Cl2 -> 2HCl + I2 ^ </chem>
References.
<templatestyles src="Reflist/styles.css" />
External links.
Reactivity series by RSC
Halogen displacement reaction by RSC
Chlorine water reacting with Iodide and Bromide, YouTube
|
[
{
"math_id": 0,
"text": " \\ce{K} > \\ce{Ca} > \\ce{Na\n} > \\ce{Mg} > \\ce{Al} > {\\color{gray}\\ce{C}} > \\ce{Zn} > \\ce{Fe} > {\\color{gray}\\ce{NH4^+}} > {\\color{gray}\\ce{H+}} > \\ce{Cu} > \\ce{Ag} > \\ce{Au} "
}
] |
https://en.wikipedia.org/wiki?curid=1463025
|
1463083
|
Zipping (computer science)
|
Function which maps a tuple of sequences into a sequence of tuples
In computer science, zipping is a function which maps a tuple of sequences into a sequence of tuples. This name zip derives from the action of a zipper in that it interleaves two formerly disjoint sequences. The inverse function is "unzip".
Example.
Given the three words "cat", "fish" and "be", where |"cat"| is 3, |"fish"| is 4 and |"be"| is 2, let formula_0 denote the length of the longest word, which is "fish"; formula_1. The zip of "cat", "fish", "be" is then a sequence of 4 tuples of elements:
formula_2
where "#" is a symbol not in the original alphabet. In Haskell this truncates to the shortest sequence formula_3, where formula_4:
zip3 "cat" "fish" "be"
-- [('c','f','b'),('a','i','e')]
Definition.
Let Σ be an alphabet, # a symbol not in Σ.
Let "x"1"x"2... "x"|"x"|, "y"1"y"2... "y"|"y"|, "z"1"z"2... "z"|"z"|, ... be "n" words (i.e. finite sequences) of elements of Σ. Let formula_0 denote the length of the longest word, i.e. the maximum of |"x"|, |"y"|, |"z"|, ... .
The zip of these words is a finite sequence of "n"-tuples of elements of (Σ ∪ {#}), i.e. an element of formula_5:
formula_6,
where, for any of the words "w" and any index "i" > |"w"|, the element "w""i" is taken to be #.
The zip of "x, y, z, ..." is denoted zip("x, y, z, ...") or "x" ⋆ "y" ⋆ "z" ⋆ ...
The inverse to zip is sometimes denoted unzip.
A variation of the zip operation is defined by:
formula_7
where formula_3 is the "minimum" length of the input words. It avoids the use of an adjoined element formula_8, but destroys information about elements of the input sequences beyond formula_3.
In programming languages.
Zip functions are often available in programming languages, often referred to as zip. In Lisp-dialects one can simply map the desired function over the desired lists; map is variadic in Lisp so it can take an arbitrary number of lists as argument. An example from Clojure:
(map vector '(1 2 3) '(4 5 6))
;; ⇒ ([1 4] [2 5] [3 6])
In Common Lisp:
(mapcar #'list '(1 2 3) '(4 5 6))
;; ⇒ ((1 4) (2 5) (3 6))
Languages such as Python provide a zip() function. zip() in conjunction with the * operator unzips a list:
>>> nums = [1, 2, 3]
>>> tens = [10, 20, 30]
>>> firstname = 'Alice'
>>> zipped = list(zip(nums, tens))
>>> zipped
[(1, 10), (2, 20), (3, 30)]
>>> list(zip(*zipped)) # unzip
[(1, 2, 3), (10, 20, 30)]
>>> zipped2 = list(zip(nums, tens, list(firstname)))
>>> zipped2 # zip, truncates on shortest
[(1, 10, 'A'), (2, 20, 'l'), (3, 30, 'i')]
>>> list(zip(*zipped2)) # unzip
[(1, 2, 3), (10, 20, 30), ('A', 'l', 'i')]
Haskell has a method of zipping sequences but requires a specific function for each arity (zip for two sequences, zip3 for three etc.), similarly the functions unzip and unzip3 are available for unzipping:
-- nums contains an infinite list of numbers [1, 2, 3, ...]
nums = [1..]
tens = [10, 20, 30]
firstname = "Alice"
zip nums tens
-- ⇒ [(1,10), (2,20), (3,30)] — zip, truncates infinite list
unzip $ zip nums tens
-- ⇒ ([1,2,3], [10,20,30]) — unzip
zip3 nums tens firstname
-- ⇒ [(1,10,'A'), (2,20,'l'), (3,30,'i')] — zip, truncates
unzip3 $ zip3 nums tens firstname
-- ⇒ ([1,2,3], [10,20,30], "Ali") — unzip
Language comparison.
List of languages by support of zip:
|
[
{
"math_id": 0,
"text": "\\ell"
},
{
"math_id": 1,
"text": "\\ell = 4"
},
{
"math_id": 2,
"text": " (c,f,b)(a,i,e)(t,s,\\#)(\\#,h,\\#)"
},
{
"math_id": 3,
"text": "\\underline{\\ell}"
},
{
"math_id": 4,
"text": "\\underline{\\ell} = 2"
},
{
"math_id": 5,
"text": "((\\Sigma\\cup\\{\\# \\})^n)^*"
},
{
"math_id": 6,
"text": " (x_1,y_1,\\ldots)(x_2,y_2,\\ldots)\\ldots(x_\\ell,y_\\ell,\\ldots)"
},
{
"math_id": 7,
"text": " (x_1,y_1,\\ldots)(x_2,y_2,\\ldots)\\ldots(x_{\\underline{\\ell}},y_{\\underline{\\ell}},\\ldots)"
},
{
"math_id": 8,
"text": "\\#"
}
] |
https://en.wikipedia.org/wiki?curid=1463083
|
1463286
|
Enantioselective synthesis
|
Chemical reaction(s) which favor one chiral isomer over another
Enantioselective synthesis, also called asymmetric synthesis, is a form of chemical synthesis. It is defined by IUPAC as "a chemical reaction (or reaction sequence) in which one or more new elements of chirality are formed in a substrate molecule and which produces the stereoisomeric (enantiomeric or diastereomeric) products in unequal amounts."
Put more simply: it is the synthesis of a compound by a method that favors the formation of a specific enantiomer or diastereomer. Enantiomers are stereoisomers that have opposite configurations at every chiral center. Diastereomers are stereoisomers that differ at one or more chiral centers.
Enantioselective synthesis is a key process in modern chemistry and is particularly important in the field of pharmaceuticals, as the different enantiomers or diastereomers of a molecule often have different biological activity.
Overview.
Many of the building blocks of biological systems such as sugars and amino acids are produced exclusively as one enantiomer. As a result, living systems possess a high degree of chemical chirality and will often react differently with the various enantiomers of a given compound. Examples of this selectivity include:
As such enantioselective synthesis is of great importance but it can also be difficult to achieve. Enantiomers possess identical enthalpies and entropies and hence should be produced in equal amounts by an undirected process – leading to a racemic mixture. Enantioselective synthesis can be achieved by using a chiral feature that favors the formation of one enantiomer over another through interactions at the transition state. This biasing is known as asymmetric induction and can involve chiral features in the substrate, reagent, catalyst, or environment and works by making the activation energy required to form one enantiomer lower than that of the opposing enantiomer.
Enantioselectivity is usually determined by the relative rates of an enantiodifferentiating step—the point at which one reactant can become either of two enantiomeric products. The rate constant, "k", for a reaction is a function of the activation energy of the reaction, sometimes called the "energy barrier", and is temperature-dependent. Using the Gibbs free energy of the energy barrier, Δ"G"*, the ratio of the rates for the two opposing stereochemical outcomes at a given temperature, "T", is:
formula_0
This temperature dependence means the rate difference, and therefore the enantioselectivity, is greater at lower temperatures. As a result, even small energy-barrier differences can lead to a noticeable effect.
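As a rough numerical illustration of this point (not from the original text), the expression above can be evaluated directly; the sketch below assumes ΔΔ"G"* is given in cal/mol, consistent with the constants 1.98 (≈ "R" in cal mol−1 K−1) and 2.3 (≈ ln 10) appearing in the formula.
def rate_ratio(ddG_cal_per_mol, temperature_K):
    """k1/k2 from the expression above (Delta-Delta-G* assumed to be in cal/mol)."""
    return 10 ** (ddG_cal_per_mol / (temperature_K * 1.98 * 2.3))

def enantiomeric_excess(ratio):
    """ee implied by a ratio of rates k1/k2 toward the two enantiomers."""
    return (ratio - 1) / (ratio + 1)

for T in (195.0, 298.0):             # roughly -78 degC versus room temperature
    r = rate_ratio(2000.0, T)        # a 2 kcal/mol barrier difference
    print(T, round(r, 1), f"{enantiomeric_excess(r):.1%}")
# The same barrier difference gives a larger rate ratio, and hence a higher ee, at 195 K.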
Approaches.
Enantioselective catalysis.
Enantioselective catalysis (known traditionally as "asymmetric catalysis") is performed using chiral catalysts, which are typically chiral coordination complexes. Catalysis is effective for a broader range of transformations than any other method of enantioselective synthesis. The chiral metal catalysts are almost invariably rendered chiral by using chiral ligands, but it is possible to generate chiral-at-metal complexes composed entirely of achiral ligands.
Most enantioselective catalysts are effective at low substrate/catalyst ratios. Given their high efficiencies, they are often suitable for industrial scale synthesis, even with expensive catalysts. A versatile example of enantioselective synthesis is asymmetric hydrogenation, which is used to reduce a wide variety of functional groups.
The design of new catalysts is dominated by the development of new classes of ligands. Certain ligands, often referred to as "privileged ligands", are effective in a wide range of reactions; examples include BINOL, Salen, and BOX. Most catalysts are effective for only one type of asymmetric reaction. For example, Noyori asymmetric hydrogenation with BINAP/Ru requires a β-ketone, although another catalyst, BINAP/diamine-Ru, widens the scope to α,β-alkenes and aromatic chemicals.
Chiral auxiliaries.
A chiral auxiliary is an organic compound which couples to the starting material to form a new compound which can then undergo diastereoselective reactions via intramolecular asymmetric induction. At the end of the reaction the auxiliary is removed, under conditions that will not cause racemization of the product. It is typically then recovered for future use.
Chiral auxiliaries must be used in stoichiometric amounts to be effective and require additional synthetic steps to append and remove the auxiliary. However, in some cases the only available stereoselective methodology relies on chiral auxiliaries and these reactions tend to be versatile and very well-studied, allowing the most time-efficient access to enantiomerically pure products. Additionally, the products of auxiliary-directed reactions are diastereomers, which enables their facile separation by methods such as column chromatography or crystallization.
Biocatalysis.
Biocatalysis makes use of biological compounds, ranging from isolated enzymes to living cells, to perform chemical transformations.
The advantages of these reagents include very high e.e.s and reagent specificity, as well as mild operating conditions and low environmental impact. Biocatalysts are more commonly used in industry than in academic research; for example in the production of statins.
The high reagent specificity can be a problem, however, as it often requires that a wide range of biocatalysts be screened before an effective reagent is found.
Enantioselective organocatalysis.
Organocatalysis refers to a form of catalysis, where the rate of a chemical reaction is increased by an organic compound consisting of carbon, hydrogen, sulfur and other non-metal elements.
When the organocatalyst is chiral, then enantioselective synthesis can be achieved;
for example a number of carbon–carbon bond forming reactions become enantioselective in the presence of proline with the aldol reaction being a prime example.
Organocatalysis often employs natural compounds and secondary amines as chiral catalysts; these are inexpensive and environmentally friendly, as no metals are involved.
Chiral pool synthesis.
Chiral pool synthesis is one of the simplest and oldest approaches for enantioselective synthesis. A readily available chiral starting material is manipulated through successive reactions, often using achiral reagents, to obtain the desired target molecule. This can meet the criteria for enantioselective synthesis when a new chiral species is created, such as in an SN2 reaction.
Chiral pool synthesis is especially attractive for target molecules having similar chirality to a relatively inexpensive naturally occurring building-block such as a sugar or amino acid. However, the number of possible reactions the molecule can undergo is restricted and tortuous synthetic routes may be required (e.g. Oseltamivir total synthesis). This approach also requires a stoichiometric amount of the enantiopure starting material, which can be expensive if it is not naturally occurring.
Separation and analysis of enantiomers.
The two enantiomers of a molecule possess many of the same physical properties (e.g. melting point, boiling point, polarity etc.) and so behave identically to each other. As a result, they will migrate with an identical Rf in thin layer chromatography and have identical retention times in HPLC and GC. Their NMR and IR spectra are identical.
This can make it very difficult to determine whether a process has produced a single enantiomer (and crucially which enantiomer it is) as well as making it hard to separate enantiomers from a reaction which has not been 100% enantioselective. Fortunately, enantiomers behave differently in the presence of other chiral materials and this can be exploited to allow their separation and analysis.
Enantiomers do not migrate identically on chiral chromatographic media, such as quartz or standard media that has been chirally modified. This forms the basis of chiral column chromatography, which can be used on a small scale to allow analysis via GC and HPLC, or on a large scale to separate chirally impure materials. However this process can require a large amount of chiral packing material which can be expensive. A common alternative is to use a chiral derivatizing agent to convert the enantiomers into diastereomers, in much the same way as chiral auxiliaries. These have different physical properties and hence can be separated and analysed using conventional methods. Special chiral derivatizing agents known as 'chiral resolution agents' are used in the NMR spectroscopy of stereoisomers; these typically involve coordination to chiral europium complexes such as Eu(fod)3 and Eu(hfc)3.
The separation and analysis of the component enantiomers of racemic drugs or pharmaceutical substances are referred to as chiral analysis or enantioselective analysis. The most frequently employed technique to carry out chiral analysis involves separation science procedures, specifically chiral chromatographic methods.
The enantiomeric excess of a substance can also be determined using certain optical methods. The oldest method for doing this is to use a polarimeter to compare the level of optical rotation in the product against a 'standard' of known composition. It is also possible to perform ultraviolet-visible spectroscopy of stereoisomers by exploiting the Cotton effect.
One of the most accurate ways of determining the chirality of compound is to determine its absolute configuration by X-ray crystallography. However this is a labour-intensive process which requires that a suitable single crystal be grown.
History.
Inception (1815–1905).
In 1815 the French physicist Jean-Baptiste Biot showed that certain chemicals could rotate the plane of a beam of polarised light, a property called optical activity.
The nature of this property remained a mystery until 1848, when Louis Pasteur proposed that it had a molecular basis originating from some form of "dissymmetry",
with the term "chirality" being coined by Lord Kelvin a year later.
The origin of chirality itself was finally described in 1874, when Jacobus Henricus van 't Hoff and Joseph Le Bel independently proposed the tetrahedral geometry of carbon. Structural models prior to this work had been two-dimensional, and van 't Hoff and Le Bel theorized that the arrangement of groups around this tetrahedron could dictate the optical activity of the resulting compound through what became known as the Le Bel–van 't Hoff rule.
In 1894 Hermann Emil Fischer outlined the concept of asymmetric induction, in which he correctly ascribed the selective formation of D-glucose by plants to the influence of optically active substances within chlorophyll. Fischer also successfully performed what would now be regarded as the first example of enantioselective synthesis, by enantioselectively elongating sugars via a process which would eventually become the Kiliani–Fischer synthesis.
The first enantioselective chemical synthesis is most often attributed to Willy Marckwald, Universität zu Berlin, for a brucine-catalyzed enantioselective decarboxylation of 2-ethyl-2-methylmalonic acid reported in 1904. A slight excess of the levorotatory form of the product of the reaction, 2-methylbutyric acid, was produced; as this product is also a natural product—e.g., as a side chain of lovastatin formed by its diketide synthase (LovF) during its biosynthesis—this result constitutes the first recorded total synthesis with enantioselectivity, as well as other firsts (as Koskinen notes, first "example of asymmetric catalysis, enantiotopic selection, and organocatalysis"). This observation is also of historical significance, as at the time enantioselective synthesis could only be understood in terms of vitalism. At the time many prominent chemists such as Jöns Jacob Berzelius argued that natural and artificial compounds were fundamentally different and that chirality was simply a manifestation of the 'vital force' which could only exist in natural compounds. Unlike Fischer, Marckwald had performed an enantioselective reaction upon an achiral, "un-natural" starting material, albeit with a chiral organocatalyst (as we now understand this chemistry).
Early work (1905–1965).
The development of enantioselective synthesis was initially slow, largely due to the limited range of techniques available for their separation and analysis.
Diastereomers possess different physical properties, allowing separation by conventional means, however at the time enantiomers could only be separated by spontaneous resolution (where enantiomers separate upon crystallisation) or kinetic resolution (where one enantiomer is selectively destroyed). The only tool for analysing enantiomers was optical activity using a polarimeter, a method which provides no structural data.
It was not until the 1950s that major progress really began, driven in part by chemists such as R. B. Woodward and Vladimir Prelog, but also by the development of new techniques.
The first of these was X-ray crystallography, which was used to determine the absolute configuration of an organic compound by Johannes Bijvoet in 1951.
Chiral chromatography was introduced a year later by Dalgliesh, who used paper chromatography to separate chiral amino acids.
Although Dalgliesh was not the first to observe such separations, he correctly attributed the separation of enantiomers to differential retention by the chiral cellulose. This was expanded upon in 1960, when Klem and Reed first reported the use of chirally-modified silica gel for chiral HPLC separation.
Thalidomide.
While it was known that the different enantiomers of a drug could have different activities, with significant early work being done by Arthur Robertson Cushny, this was not accounted for in early drug design and testing. However, following the thalidomide disaster the development and licensing of drugs changed dramatically.
First synthesized in 1953, thalidomide was widely prescribed for morning sickness from 1957 to 1962, but was soon found to be seriously teratogenic, eventually causing birth defects in more than 10,000 babies. The disaster prompted many countries to introduce tougher rules for the testing and licensing of drugs, such as the Kefauver-Harris Amendment (US) and Directive 65/65/EEC1 (EU).
Early research into the teratogenic mechanism, using mice, suggested that one enantiomer of thalidomide was teratogenic while the other possessed all the therapeutic activity. This theory was later shown to be incorrect and has now been superseded by a body of research. However it raised the importance of chirality in drug design, leading to increased research into enantioselective synthesis.
Modern age (since 1965).
The Cahn–Ingold–Prelog priority rules (often abbreviated as the CIP system) were first published in 1966; allowing enantiomers to be more easily and accurately described.
The same year saw first successful enantiomeric separation by gas chromatography an important development as the technology was in common use at the time.
Metal-catalysed enantioselective synthesis was pioneered by William S. Knowles, Ryōji Noyori and K. Barry Sharpless; for which they would receive the 2001 Nobel Prize in Chemistry. Knowles and Noyori began with the development of asymmetric hydrogenation, which they developed independently in 1968. Knowles replaced the achiral triphenylphosphine ligands in Wilkinson's catalyst with chiral phosphine ligands. This experimental catalyst was employed in an asymmetric hydrogenation with a modest 15% enantiomeric excess. Knowles was also the first to apply enantioselective metal catalysis to industrial-scale synthesis; while working for the Monsanto Company he developed an enantioselective hydrogenation step for the production of L-DOPA, utilising the DIPAMP ligand.
Noyori devised a copper complex using a chiral Schiff base ligand, which he used for the metal–carbenoid cyclopropanation of styrene.
In common with Knowles' findings, Noyori's results for the enantiomeric excess for this first-generation ligand were disappointingly low: 6%. However continued research eventually led to the development of the Noyori asymmetric hydrogenation reaction.
Sharpless complemented these reduction reactions by developing a range of asymmetric oxidations (Sharpless epoxidation, Sharpless asymmetric dihydroxylation, Sharpless oxyamination) during the 1970s and 1980s, with the asymmetric oxyamination reaction, using osmium tetroxide, being the earliest.
During the same period, methods were developed to allow the analysis of chiral compounds by NMR; either using chiral derivatizing agents, such as Mosher's acid,
or europium based shift reagents, of which Eu(DPM)3 was the earliest.
Chiral auxiliaries were introduced by E.J. Corey in 1978 and featured prominently in the work of Dieter Enders. Around the same time enantioselective organocatalysis was developed, with pioneering work including the Hajos–Parrish–Eder–Sauer–Wiechert reaction.
Enzyme-catalyzed enantioselective reactions became more and more common during the 1980s, particularly in industry, with their applications including asymmetric ester hydrolysis with pig-liver esterase. The emerging technology of genetic engineering has allowed the tailoring of enzymes to specific processes, permitting an increased range of selective transformations. For example, in the asymmetric hydrogenation of statin precursors.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{k_1}{k_2} = 10^\\frac{\\Delta \\Delta G^*}{T \\times 1.98 \\times 2.3}"
}
] |
https://en.wikipedia.org/wiki?curid=1463286
|
1463363
|
FIVB Senior World Rankings
|
Ranking system for men's and women's national teams in volleyball
The FIVB Senior World Rankings is a ranking system for men's and women's national teams in volleyball. The teams of the member nations of Fédération Internationale de Volleyball (FIVB), volleyball's world governing body, are ranked based on their game results with the most successful teams being ranked highest. A points system is used, with points being awarded based on the results of all FIVB-recognised full international matches. The rankings are used in international competitions to define the seeded teams and arrange them in pools. Specific procedures for seeding and pooling are established by the FIVB in each competition's formula, but the method usually employed is the serpentine system.
The ranking system was revamped in 2020, responding to criticism that the preceding calculation method did not effectively reflect the relative strengths of the national teams. The old version of the ranking system was last used on 31 January 2020.
As of 23 July 2023, the highest ranked team in the men's category is Poland, while the highest ranked team in the women's category is Turkey.
Previous calculation method.
The system of point attribution for the selected FIVB World and Official Competitions below is as follows:
Current calculation method.
In 2019, FIVB collaborated with Hypercube Business Innovation of the Netherlands to design a new world ranking platform. The previous calculation method had a problem of circularity in the international volleyball calendar: only countries who participate in the major volleyball events can earn ranking points, whilst the number of ranking points of countries also determines seeding and access of teams for major events. This unfair principle does not contribute to the sporting and commercial quality of volleyball.
On 1 February 2020, the new ranking system was implemented, taking into account all results from 1 January 2019 onwards. The system is consistently updated to reflect the latest results and performances. The new World Ranking considers the match results from all official competitions:
The rankings outcome of each match depends on two main factors: the importance of the competition in which the match is played, and the difference between the actual match result and the result expected from the two teams' relative strengths.
Ranking Procedure.
It is based on the zero-sum system, like CONCACAF Ranking Index or FIFA World ranking, where, after each game, points will be added to or subtracted from a team's rating according to the formula:
formula_0
where:
formula_1 is the team's World Ranking score after the match,
formula_2 is the team's World Ranking score before the match,
formula_3 is the weight of the match (depending on the competition),
formula_4 is the actual match result, and
formula_5 is the expected match result.
Match result.
<templatestyles src="Col-begin/styles.css"/>
Expected match result.
The expected result is then calculated as
formula_7
where formula_8 is the probability of the outcome with match result value formula_9, obtained using the following model (known as ordered probit):
formula_10
formula_11
formula_12
formula_13
formula_14
formula_15
where formula_16 is the Cumulative distribution function of the Normal distribution, and
formula_17 are the cut-points
set so that formula_8 is the probability of the outcome formula_6 between two equal strength opponents (that is when formula_23), which is derived from the actual match results of the past decade.
<templatestyles src="Col-begin/styles.css"/>
The parameter formula_24 represents the scaled difference of the two teams' ranking scores
formula_25
where:
formula_26 is the World Ranking score of team A, and
formula_27 is the World Ranking score of team B.
Examples.
Before the match at the "FIVB Volleyball World Championship (K = 45)", Brazil "(Team A)" is ranked number 1 with a "415 WR score" and Japan "(Team B)" is ranked number 11 with a "192 WR score".
formula_28
Expected match result for Brazil:
formula_35
Expected match result for Japan:
formula_36
<templatestyles src="Col-begin/styles.css"/>
World and Continental Rankings.
The five Continental Rankings filter the World Ranking points won and lost in matches played between teams from the same Continental Confederation.
Japan (Asian Volleyball Confederation) vs Italy (Confédération Européenne de Volleyball)<Br>
"The points calculated in FIVB World Rankings."
Japan (Asian Volleyball Confederation) vs South Korea (Asian Volleyball Confederation)<Br>
"The points calculated in FIVB World Rankings, and AVC Continental Rankings."
FIVB World Rankings.
<templatestyles src="Col-float/styles.css" />
Current men's top teams.
<templatestyles src="Col-float/styles.css" />
Current women's top teams.
<templatestyles src="Col-float/styles.css" />
Historic men's leaders.
For historical men's FIVB rankings from October 2005 to present.
<templatestyles src="Col-float/styles.css" />
Historic women's leaders.
For historical women's FIVB rankings from September 2005 to present.
Notes and references.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "S_\\text{after} = S_\\text{before} + {K(R-E) \\over 8} "
},
{
"math_id": 1,
"text": "S_\\text{after}"
},
{
"math_id": 2,
"text": "S_\\text{before}"
},
{
"math_id": 3,
"text": "K"
},
{
"math_id": 4,
"text": "R"
},
{
"math_id": 5,
"text": "E"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": " E = R_1 P_1 + R_2 P_2 + R_3 P_3 + R_4 P_4 + R_5 P_5 + R_6 P_6"
},
{
"math_id": 8,
"text": "P_n"
},
{
"math_id": 9,
"text": "R_n"
},
{
"math_id": 10,
"text": " P_\\text{1} = \\Phi(C_\\text{1}+\\Delta) "
},
{
"math_id": 11,
"text": " P_\\text{2} = \\Phi(C_\\text{2}+\\Delta) - \\Phi(C_\\text{1}+\\Delta) "
},
{
"math_id": 12,
"text": " P_\\text{3} = \\Phi(C_\\text{3}+\\Delta) - \\Phi(C_\\text{2}+\\Delta) "
},
{
"math_id": 13,
"text": " P_\\text{4} = \\Phi(C_\\text{4}+\\Delta) - \\Phi(C_\\text{3}+\\Delta)"
},
{
"math_id": 14,
"text": " P_\\text{5} = \\Phi(C_\\text{5}+\\Delta) - \\Phi(C_\\text{4}+\\Delta) "
},
{
"math_id": 15,
"text": " P_\\text{6} = 1- \\Phi(C_\\text{5}+\\Delta) "
},
{
"math_id": 16,
"text": "\\Phi(z)"
},
{
"math_id": 17,
"text": " C_1,\\ldots, C_5 "
},
{
"math_id": 18,
"text": "C_1=-1.06"
},
{
"math_id": 19,
"text": "C_2=-0.394"
},
{
"math_id": 20,
"text": "C_3=0"
},
{
"math_id": 21,
"text": "C_4=0.394"
},
{
"math_id": 22,
"text": "C_5=1.06"
},
{
"math_id": 23,
"text": "\\Delta=0"
},
{
"math_id": 24,
"text": "\\Delta"
},
{
"math_id": 25,
"text": " \\Delta = {8(S_\\text{teamA}-S_\\text{teamB}) \\over 1000} "
},
{
"math_id": 26,
"text": "S_\\text{teamA}"
},
{
"math_id": 27,
"text": "S_\\text{teamB}"
},
{
"math_id": 28,
"text": " \\Delta = {8(415-192) \\over 1000} = 1.784 "
},
{
"math_id": 29,
"text": "P_1 = \\Phi(-1.060+1.784)"
},
{
"math_id": 30,
"text": "P_2 = \\Phi(-0.364+1.784) - \\Phi(-1.060+1.784)"
},
{
"math_id": 31,
"text": "P_3 = \\Phi(0.000+1.784) - \\Phi(-0.364+1.784)"
},
{
"math_id": 32,
"text": "P_4 = \\Phi(0.364+1.784) - \\Phi(0.000+1.784)"
},
{
"math_id": 33,
"text": "P_5 = \\Phi(1.060+1.784) - \\Phi(0.364+1.784)"
},
{
"math_id": 34,
"text": "P_6 = 1 - \\Phi(1.060+1.784)"
},
{
"math_id": 35,
"text": " E = 76.5%(+2) + 15.2%(+1.5) + 4.5%(+1) + 2.2%(-1) + 1.2%(-1.5) + 0.2%(-2) = +1.76"
},
{
"math_id": 36,
"text": " E = 0.2%(+2) + 1.2%(+1.5) + 2.2%(+1) + 4.5%(-1) + 15.2%(-1.5) + 76.5%(-2) = -1.76"
}
] |
https://en.wikipedia.org/wiki?curid=1463363
|
1463652
|
Birkhoff interpolation
|
In mathematics, Birkhoff interpolation is an extension of polynomial interpolation. It refers to the problem of finding a polynomial formula_0 of degree formula_1 such that "only certain" derivatives have specified values at specified points:
formula_2
where the data points formula_3 and the nonnegative integers formula_4 are given. It differs from Hermite interpolation in that it is possible to specify derivatives of formula_0 at some points without specifying the lower derivatives or the polynomial itself. The name refers to George David Birkhoff, who first studied the problem in 1906.
Existence and uniqueness of solutions.
In contrast to Lagrange interpolation and Hermite interpolation, a Birkhoff interpolation problem does not always have a unique solution. For instance, there is no quadratic polynomial formula_0 such that formula_5 and formula_6. On the other hand, the Birkhoff interpolation problem where the values of formula_7 and formula_8 are given always has a unique solution.
An important problem in the theory of Birkhoff interpolation is to classify those problems that have a unique solution. Schoenberg formulates the problem as follows. Let formula_1 denote the number of conditions (as above) and let formula_9 be the number of interpolation points. Given a formula_10 matrix formula_11, all of whose entries are either formula_12 or formula_13, such that exactly formula_1 entries are formula_13, then the corresponding problem is to determine formula_0 such that
formula_14
The matrix formula_11 is called the incidence matrix. For example, the incidence matrices for the interpolation problems mentioned in the previous paragraph are:
formula_15
Now the question is: Does a Birkhoff interpolation problem with a given incidence matrix formula_11 have a unique solution for any choice of the interpolation points?
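One way to make the question concrete (a sketch, not part of Schoenberg's formulation) is to write P with unknown coefficients, so that each prescribed derivative value becomes one linear equation; the problem then has a unique solution for the chosen nodes exactly when the resulting square matrix is nonsingular. The helper below assumes NumPy and takes the conditions as (node, derivative order) pairs; here "d" counts coefficients, i.e. the degree is at most "d" − 1.
import numpy as np
from math import factorial

def birkhoff_matrix(conditions, d):
    """Rows are prescribed conditions (x_i, j) meaning P^(j)(x_i) is given;
    columns correspond to the monomial coefficients a_0, ..., a_{d-1}."""
    A = np.zeros((len(conditions), d))
    for row, (x, j) in enumerate(conditions):
        for m in range(j, d):
            # j-th derivative of x^m, evaluated at the node x
            A[row, m] = factorial(m) // factorial(m - j) * x ** (m - j)
    return A

# The problem quoted above with no solution: P(-1) = P(1) = 0, P'(0) = 1 (quadratic).
A = birkhoff_matrix([(-1, 0), (1, 0), (0, 1)], d=3)
print(np.linalg.matrix_rank(A))   # 2: the system is singular, so no unique quadratic

# The problem quoted above with a unique solution: P'(-1), P(0), P'(1) prescribed.
B = birkhoff_matrix([(-1, 1), (0, 0), (1, 1)], d=3)
print(np.linalg.matrix_rank(B))   # 3: a unique quadratic exists for any data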
The case with formula_16 interpolation points was tackled by George Pólya in 1931. Let formula_17 denote the sum of the entries in the first formula_18 columns of the incidence matrix:
formula_19
Then the Birkhoff interpolation problem with formula_16 has a unique solution if and only if formula_20. Schoenberg showed that this is a necessary condition for all values of formula_9.
Some examples.
Consider a differentiable function formula_21 on formula_22, such that formula_23. Let us see that there is no Birkhoff interpolating quadratic polynomial that matches formula_21 at the two endpoints and satisfies formula_24, where formula_25: since the polynomial must take equal values at the endpoints (because formula_23), one may write it as formula_26 (by completing the square), where formula_27 are the interpolation coefficients. The derivative of the interpolation polynomial is given by formula_28. This implies formula_29; however, this is absurd, since formula_30 is not necessarily formula_12. The incidence matrix is given by:
formula_31
Consider a differentiable function formula_21 on formula_22, and denote formula_32 with formula_33. Let us see that there is indeed a Birkhoff interpolating quadratic polynomial such that formula_34 and formula_35. Construct the interpolating polynomial of formula_36 at the nodes formula_37, such that formula_38. Then the polynomial formula_39 is the Birkhoff interpolating polynomial. The incidence matrix is given by:
formula_40
Given a natural number formula_41, and a differentiable function formula_21 on formula_22, is there a polynomial such that: formula_42 and formula_43 for formula_44 with formula_45? Construct the Lagrange/Newton polynomial (same interpolating polynomial, different form to calculate and express them) formula_46 that satisfies formula_47 for formula_44, then the polynomial formula_48 is the Birkhoff interpolating polynomial satisfying the above conditions. The incidence matrix is given by:
formula_49
Given a natural number formula_41, and a formula_50-times differentiable function formula_21 on formula_22, is there a polynomial such that: formula_51 and formula_52 for formula_53? Construct formula_54 as the linear polynomial interpolating the formula_50-th derivative of formula_21 at formula_55 and formula_56, such that formula_57. Define then the iterates formula_58 . Then formula_59 is the Birkhoff interpolating polynomial. The incidence matrix is given by:
formula_60
|
[
{
"math_id": 0,
"text": "P(x)"
},
{
"math_id": 1,
"text": "d"
},
{
"math_id": 2,
"text": " P^{(n_i)}(x_i) = y_i \\qquad\\mbox{for } i=1,\\ldots,d, "
},
{
"math_id": 3,
"text": "(x_i,y_i)"
},
{
"math_id": 4,
"text": "n_i"
},
{
"math_id": 5,
"text": "P(-1)=P(1)=0"
},
{
"math_id": 6,
"text": "P^{(1)}(0)=1"
},
{
"math_id": 7,
"text": "P^{(1)}(-1), P(0)"
},
{
"math_id": 8,
"text": "P^{(1)}(1)"
},
{
"math_id": 9,
"text": "k"
},
{
"math_id": 10,
"text": "d\\times k"
},
{
"math_id": 11,
"text": "E"
},
{
"math_id": 12,
"text": "0"
},
{
"math_id": 13,
"text": "1"
},
{
"math_id": 14,
"text": " P^{(j)}(x_i) = y_{i,j} \\qquad\\forall (i,j) / e_{ij} = 1 "
},
{
"math_id": 15,
"text": " \\begin{pmatrix} 1 & 0 & 0 \\\\ 0 & 1 & 0 \\\\ 1 & 0 & 0 \\end{pmatrix} \\qquad\\mathrm{and}\\qquad \\begin{pmatrix} 0 & 1 & 0 \\\\ 1 & 0 & 0 \\\\ 0 & 1 & 0 \\end{pmatrix}. "
},
{
"math_id": 16,
"text": "k=2"
},
{
"math_id": 17,
"text": "S_m"
},
{
"math_id": 18,
"text": "m"
},
{
"math_id": 19,
"text": " S_m = \\sum_{i=1}^k \\sum_{j=1}^m e_{ij}. "
},
{
"math_id": 20,
"text": "S_m\\geqslant m \\quad\\forall m"
},
{
"math_id": 21,
"text": "f(x)"
},
{
"math_id": 22,
"text": "[a,b]"
},
{
"math_id": 23,
"text": "f(a)=f(b)"
},
{
"math_id": 24,
"text": "P^{(1)}(c)=f^{(1)}(c)"
},
{
"math_id": 25,
"text": "c=\\frac{a+b}{2}"
},
{
"math_id": 26,
"text": "P(x)=A(x-c)^2+B"
},
{
"math_id": 27,
"text": "A,B"
},
{
"math_id": 28,
"text": "P^{(1)}(x)=2A(x-c)^2"
},
{
"math_id": 29,
"text": "P^{(1)}(c)=0"
},
{
"math_id": 30,
"text": "f^{(1)}(c)"
},
{
"math_id": 31,
"text": " \\begin{pmatrix} 1 & 0 & 0 \\\\ 0 & 1 & 0 \\\\ 1 & 0 & 0 \\end{pmatrix}_{3\\times3} "
},
{
"math_id": 32,
"text": "x_0=a,x_2=b"
},
{
"math_id": 33,
"text": "x_1\\in[a,b]"
},
{
"math_id": 34,
"text": "P(x_1)=f(x_1)"
},
{
"math_id": 35,
"text": "P^{(1)}(x_0)=f^{(1)}(x_0),P^{(1)}(x_2)=f^{(1)}(x_2)"
},
{
"math_id": 36,
"text": "f^{(1)}(x)"
},
{
"math_id": 37,
"text": "x_0,x_2"
},
{
"math_id": 38,
"text": "\\displaystyle P_1(x) = \\frac{f^{(1)}(x_2)-f^{(1)}(x_0)}{x_2-x_0}(x-x_0)+f^{(1)}(x_0)"
},
{
"math_id": 39,
"text": "\\displaystyle P_2(x) = f(x_1) + \\int_{x_1}^x\\!P_1(t)\\;\\mathrm{d}t"
},
{
"math_id": 40,
"text": " \\begin{pmatrix} 0 & 1 & 0 \\\\ 1 & 0 & 0 \\\\ 0 & 1 & 0 \\end{pmatrix}_{3\\times3} "
},
{
"math_id": 41,
"text": "N"
},
{
"math_id": 42,
"text": "P(x_0)=f(x_0)"
},
{
"math_id": 43,
"text": "P^{(1)}(x_i)=f^{(1)}(x_i)"
},
{
"math_id": 44,
"text": "i=1,\\cdots,N"
},
{
"math_id": 45,
"text": "x_0,x_1,\\cdots,x_N\\in[a,b]"
},
{
"math_id": 46,
"text": "P_{N-1}(x)"
},
{
"math_id": 47,
"text": "P_{N-1}(x_i)=f^{(1)}(x_i)"
},
{
"math_id": 48,
"text": "\\displaystyle P_N(x) = f(x_0) + \\int_{x_0}^x\\! P_{N-1}(t)\\;\\mathrm{d}t"
},
{
"math_id": 49,
"text": " \\begin{pmatrix} 1 & 0 & 0 & \\cdots & 0 \\\\ 0 & 1 & 0 & \\cdots & 0 \\\\ \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\ 0 & 1 & 0 & \\cdots & 0 \\\\ \\end{pmatrix}_{N\\times N} "
},
{
"math_id": 50,
"text": "2N"
},
{
"math_id": 51,
"text": "P^{(k)}(a)=f^{(k)}(a)"
},
{
"math_id": 52,
"text": "P^{(k)}(b)=f^{(k)}(b)"
},
{
"math_id": 53,
"text": "k=0,2,\\cdots,2N"
},
{
"math_id": 54,
"text": "P_1(x)"
},
{
"math_id": 55,
"text": "x=a"
},
{
"math_id": 56,
"text": "x=b"
},
{
"math_id": 57,
"text": "P_1(x)=\\frac{f^{(2N)}(b)-f^{(2N)}(a)}{b-a}(x-a)\n+f^{(2N)}(a)"
},
{
"math_id": 58,
"text": "\\displaystyle P_{k+2}(x)=\\frac{f^{(2N-2k)}(b)-f^{(2N-2k)}(a)}{b-a}(x-a)\n+f^{(2N-2k)}(a) + \\int_a^x\\!\\int_a^t\\! P_k(s)\\;\\mathrm{d}s\\;\\mathrm{d}t "
},
{
"math_id": 59,
"text": "P_{2N+1}(x)"
},
{
"math_id": 60,
"text": " \\begin{pmatrix} 1 & 0 & 1 & 0 \\cdots \\\\ 1 & 0 & 1 & 0 \\cdots \\\\ \\end{pmatrix}_{2\\times N} "
}
] |
https://en.wikipedia.org/wiki?curid=1463652
|
14641222
|
Longitudinal stability
|
Stability of an aircraft in the pitching plane
In flight dynamics, longitudinal stability is the stability of an aircraft in the longitudinal, or pitching, plane. This characteristic is important in determining whether an aircraft pilot will be able to control the aircraft in the pitching plane without requiring excessive attention or excessive strength.
The longitudinal stability of an aircraft, also called pitch stability, refers to the aircraft's stability in its plane of symmetry about the lateral axis (the axis along the wingspan). It is an important aspect of the handling qualities of the aircraft, and one of the main factors determining the ease with which the pilot is able to maintain level flight.
Longitudinal static stability refers to the aircraft's initial tendency on pitching. Dynamic stability refers to whether oscillations tend to increase, decrease or stay constant.
Static stability.
If an aircraft is longitudinally statically stable, a small increase in angle of attack will create a nose-down pitching moment on the aircraft, so that the angle of attack decreases. Similarly, a small decrease in angle of attack will create a nose-up pitching moment so that the angle of attack increases. This means the aircraft will self-correct longitudinal (pitch) disturbances without pilot input.
If an aircraft is longitudinally statically unstable, a small increase in angle of attack will create a nose-up pitching moment on the aircraft, promoting a further increase in the angle of attack.
If the aircraft has zero longitudinal static stability it is said to be statically neutral, and the position of its center of gravity is called the "neutral point".
The longitudinal static stability of an aircraft depends on the location of its center of gravity relative to the neutral point. As the center of gravity moves increasingly forward, the pitching moment arm is increased, increasing stability. The distance between the center of gravity and the neutral point is defined as "static margin". It is usually given as a percentage of the mean aerodynamic chord. If the center of gravity is forward of the neutral point, the static margin is positive. If the center of gravity is aft of the neutral point, the static margin is negative. The greater the static margin, the more stable the aircraft will be.
Most conventional aircraft have positive longitudinal stability, providing the aircraft's center of gravity lies within the approved range. The operating handbook for every airplane specifies a range over which the center of gravity is permitted to move. If the center of gravity is too far aft, the aircraft will be unstable. If it is too far forward, the aircraft will be excessively stable, which makes the aircraft "stiff" in pitch and hard for the pilot to bring the nose up for landing. Required control forces will be greater.
Some aircraft have low stability to reduce trim drag. This has the benefit of reducing fuel consumption. Some aerobatic and fighter aircraft may have low or even negative stability to provide high manoeuvrability. Low or negative stability is called relaxed stability. An aircraft with low or negative static stability will typically have fly-by-wire controls with computer augmentation to assist the pilot. Otherwise, an aircraft with negative longitudinal stability will be more difficult to fly. It will be necessary for the pilot to devote more effort, make more frequent inputs to the elevator control, and make larger inputs, in an attempt to maintain the desired pitch attitude.
For an aircraft to possess positive static stability, it is not necessary for its level to return to exactly what it was before the upset. It is sufficient that the speed and orientation do not continue to diverge but undergo at least a small change back towards the original speed and orientation.
The deployment of flaps will increase longitudinal stability.
Unlike motion about the other two axes, and in the other degrees of freedom of the aircraft (sideslip translation, rotation in roll, rotation in yaw), which are usually heavily coupled, motion in the longitudinal plane does not typically cause a roll or yaw.
A larger horizontal stabilizer, and a greater moment arm of the horizontal stabilizer about the neutral point, will increase longitudinal stability.
Tailless aircraft.
For a tailless aircraft, the neutral point coincides with the aerodynamic center, and so for such aircraft to have longitudinal static stability, the center of gravity must lie ahead of the aerodynamic center.
For missiles with symmetric airfoils, the neutral point and the center of pressure are coincident and the term "neutral point" is not used.
An unguided rocket must have a large positive static margin so the rocket shows minimum tendency to diverge from the direction of flight given to it at launch. In contrast, guided missiles usually have a negative static margin for increased maneuverability.
Dynamic stability.
Longitudinal dynamic stability of a statically stable aircraft refers to whether the aircraft will continue to oscillate after a disturbance, or whether the oscillations are damped. A dynamically stable aircraft will experience oscillations reducing to nil. A dynamically neutral aircraft will continue to oscillate around its original level, and dynamically unstable aircraft will experience increasing oscillations and displacement from its original level.
Dynamic stability is caused by damping. If damping is too great, the aircraft will be less responsive and less manoeuvrable.
Decreasing phugoid (long-period) oscillations can be achieved by building a smaller stabilizer on a longer tail, and by shifting the center of gravity to the rear.
An aircraft that is not statically stable cannot be dynamically stable.
Analysis.
Near the cruise condition most of the lift force is generated by the wings, with ideally only a small amount generated by the fuselage and tail. We may analyse the longitudinal static stability by considering the aircraft in equilibrium under wing lift, tail force, and weight. The moment equilibrium condition is called trim, and we are generally interested in the longitudinal stability of the aircraft about this trim condition.
Equating forces in the vertical direction:
formula_0
where W is the weight, formula_1 is the wing lift and formula_2 is the tail force.
For a thin airfoil at low angle of attack, the wing lift is proportional to the angle of attack:
formula_3
where formula_4 is the wing area, formula_5 is the (wing) lift coefficient, and formula_6 is the angle of attack. The term formula_7 is included to account for camber, which results in lift at zero angle of attack. Finally formula_8 is the dynamic pressure:
formula_9
where formula_10 is the air density and formula_11 is the speed.
Trim.
The force from the tail-plane is proportional to its angle of attack, including the effects of any elevator deflection and any adjustment the pilot has made to trim-out any stick force. In addition, the tail is located in the flow field of the main wing, and consequently experiences downwash, reducing its angle of attack.
In a statically stable aircraft of conventional (tail in rear) configuration, the tail-plane force may act upward or downward depending on the design and the flight conditions. In a typical canard aircraft both fore and aft planes are lifting surfaces. The fundamental requirement for static stability is that the aft surface must have greater authority (leverage) in restoring a disturbance than the forward surface has in exacerbating it. This leverage is a product of moment arm from the center of gravity and surface area. Correctly balanced in this way, the partial derivative of pitching moment with respect to changes in angle of attack will be negative: a momentary pitch up to a larger angle of attack makes the resultant pitching moment tend to pitch the aircraft back down. (Here, pitch is used casually for the angle between the nose and the direction of the airflow; angle of attack.) This is the "stability derivative" d(M)/d(alpha), described below.
The tail force is, therefore:
formula_12
where formula_13 is the tail area, formula_14 is the tail force coefficient, formula_15 is the elevator deflection, and formula_16 is the downwash angle.
A canard aircraft may have its foreplane rigged at a high angle of incidence, which can be seen in a canard catapult glider from a toy store; the design puts the c.g. well forward, requiring nose-up lift.
Violations of the basic principle are exploited in some high performance "relaxed static stability" combat aircraft to enhance agility; artificial stability is supplied by active electronic means.
There are a few classical cases where this favorable response was not achieved, notably in T-tail configurations. A T-tail airplane has a higher horizontal tail that passes through the wake of the wing later (at a higher angle of attack) than a lower tail would, and at this point the wing has already stalled and has a much larger separated wake. Inside the separated wake, the tail sees little to no freestream and loses effectiveness. Elevator control power is also heavily reduced or even lost, and the pilot is unable to easily escape the stall. This phenomenon is known as 'deep stall'.
Taking moments about the center of gravity, the net nose-up moment is:
formula_17
where formula_18 is the location of the center of gravity behind the aerodynamic center of the main wing, formula_19 is the tail moment arm.
For trim, this moment must be zero. For a given maximum elevator deflection, there is a corresponding limit on center of gravity position at which the aircraft can be kept in equilibrium. When limited by control deflection this is known as a 'trim limit'. In principle trim limits could determine the permissible forwards and rearwards shift of the center of gravity, but usually it is only the forward cg limit which is determined by the available control, the aft limit is usually dictated by stability.
In a missile context 'trim limit' more usually refers to the maximum angle of attack, and hence lateral acceleration which can be generated.
Static stability.
The nature of stability may be examined by considering the increment in pitching moment with change in angle of attack at the trim condition. If this is nose up, the aircraft is longitudinally unstable; if nose down it is stable. Differentiating the moment equation with respect to formula_6:
formula_20
Note: formula_21 is a stability derivative.
It is convenient to treat total lift as acting at a distance h ahead of the centre of gravity, so that the moment equation may be written:
formula_22
Applying the increment in angle of attack:
formula_23
Equating the two expressions for moment increment:
formula_24
The total lift formula_25 is the sum of formula_1 and formula_2 so the sum in the denominator can be simplified and written as the derivative of the total lift due to angle of attack, yielding:
formula_26
Where c is the mean aerodynamic chord of the main wing. The term:
formula_27
is known as the tail volume ratio. Its coefficient, the ratio of the two lift derivatives, has values in the range of 0.50 to 0.65 for typical configurations. Hence the expression for h may be written more compactly, though somewhat approximately, as:
formula_28
formula_29 is known as the static margin. For stability it must be negative. (However, for consistency of language, the static margin is sometimes taken as formula_30, so that positive stability is associated with positive static margin.)
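As a quick numerical illustration of the approximate expression above (all input values below are invented, representative numbers rather than data from any source):
def tail_volume_ratio(l_t, S_t, c, S_w):
    return (l_t * S_t) / (c * S_w)

def static_margin_h(x_g, c, V_t, lift_slope_ratio=0.5):
    # h = x_g - (lift-slope ratio) * c * V_t ; with this sign convention, negative h is stable
    return x_g - lift_slope_ratio * c * V_t

c, S_w = 1.5, 16.0        # mean aerodynamic chord (m), wing area (m^2)
l_t, S_t = 4.5, 2.4       # tail moment arm (m), tail area (m^2)
V_t = tail_volume_ratio(l_t, S_t, c, S_w)
for x_g in (0.15, 0.45):  # CG positions aft of the wing aerodynamic centre (m)
    h = static_margin_h(x_g, c, V_t)
    print(f"V_t = {V_t:.2f}, x_g = {x_g} m, h = {h:+.2f} m ({'stable' if h < 0 else 'unstable'})")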
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "W=L_w+L_t"
},
{
"math_id": 1,
"text": "L_w"
},
{
"math_id": 2,
"text": "L_t"
},
{
"math_id": 3,
"text": "L_w=qS_w\\frac{\\partial C_L}{\\partial \\alpha} (\\alpha+\\alpha_0)"
},
{
"math_id": 4,
"text": "S_w"
},
{
"math_id": 5,
"text": "C_L"
},
{
"math_id": 6,
"text": "\\alpha"
},
{
"math_id": 7,
"text": "\\alpha_0"
},
{
"math_id": 8,
"text": "q"
},
{
"math_id": 9,
"text": "q=\\frac{1}{2}\\rho v^2"
},
{
"math_id": 10,
"text": "\\rho"
},
{
"math_id": 11,
"text": "v"
},
{
"math_id": 12,
"text": " L_t=q S_t\\left(\\frac{\\partial C_l}{\\partial \\alpha}\\left(\\alpha-\\frac{\\partial \\epsilon}{\\partial \\alpha}\\alpha\\right)+\\frac{\\partial C_l}{\\partial \\eta}\\eta\\right)"
},
{
"math_id": 13,
"text": "S_t\\!"
},
{
"math_id": 14,
"text": "C_l\\!"
},
{
"math_id": 15,
"text": "\\eta\\!"
},
{
"math_id": 16,
"text": "\\epsilon\\!"
},
{
"math_id": 17,
"text": "M=L_w x_g-(l_t-x_g)L_t\\!"
},
{
"math_id": 18,
"text": "x_g\\!"
},
{
"math_id": 19,
"text": "l_t\\!"
},
{
"math_id": 20,
"text": "\\frac{\\partial M}{\\partial \\alpha}=x_g\\frac{\\partial L_w}{\\partial \\alpha}-(l_t-x_g)\\frac{\\partial L_t}{\\partial \\alpha} "
},
{
"math_id": 21,
"text": "\\frac{\\partial M}{\\partial \\alpha} "
},
{
"math_id": 22,
"text": "M=h(L_w+L_t)\\!"
},
{
"math_id": 23,
"text": "\\frac{\\partial M}{\\partial \\alpha}=h\\left(\\frac{\\partial L_w}{\\partial \\alpha}+\\frac{\\partial L_t}{\\partial \\alpha}\\right)"
},
{
"math_id": 24,
"text": "h=x_g-l_t\\frac {\\frac {\\partial L_t}{\\partial \\alpha}}{\\frac{\\partial L_w}{\\partial \\alpha}+\\frac{\\partial L_t}{\\partial \\alpha}}"
},
{
"math_id": 25,
"text": "L"
},
{
"math_id": 26,
"text": "\\frac{h}{c}=\\frac{x_g}{c}-\\left(1-\\frac{\\partial \\epsilon}{\\partial \\alpha}\\right)\\frac{\\frac{\\partial C_l}{\\partial \\alpha}}{\\frac{\\partial C_L}{\\partial \\alpha}}\\frac{l_t S_t}{c S_w}"
},
{
"math_id": 27,
"text": "V_t=\\frac{l_t S_t}{c S_w}"
},
{
"math_id": 28,
"text": "h=x_g-0.5 cV_t\\!"
},
{
"math_id": 29,
"text": "h"
},
{
"math_id": 30,
"text": "-h"
}
] |
https://en.wikipedia.org/wiki?curid=14641222
|
14642741
|
Digital image correlation and tracking
|
Digital image correlation and tracking is an optical method that employs tracking and image registration techniques for accurate 2D and 3D measurements of changes in images. This method is often used to measure full-field displacement and strains, and it is widely applied in many areas of science and engineering. Compared to strain gauges and extensometers, digital image correlation methods provide finer details about deformation, due to the ability to provide both local and average data.
Overview.
Digital image correlation (DIC) techniques have been increasing in popularity, especially in micro- and nano-scale mechanical testing applications due to their relative ease of implementation and use. Advances in computer technology and digital cameras have been the enabling technologies for this method and while white-light optics has been the predominant approach, DIC can be and has been extended to almost any imaging technology.
The concept of using cross-correlation to measure shifts in datasets has been known for a long time, and it has been applied to digital images since at least the early 1970s. The present-day applications are almost innumerable, including image analysis, image compression, velocimetry, and strain estimation. Much early work in DIC in the field of mechanics was led by researchers at the University of South Carolina in the early 1980s and has been optimized and improved in recent years. Commonly, DIC relies on finding the maximum of the correlation array between pixel intensity array subsets on two or more corresponding images, which gives the integer translational shift between them. It is also possible to estimate shifts to a finer resolution than the resolution of the original images, which is often called "sub-pixel" registration because the measured shift is smaller than an integer pixel unit. Sub-pixel interpolation of the shift can be done in several ways that go beyond simply maximizing the correlation coefficient; for example, an iterative approach can maximize the interpolated correlation coefficient using non-linear optimization techniques. The non-linear optimization approach tends to be conceptually simpler and can handle large deformations more accurately, but as with most non-linear optimization techniques, it is slower.
The two-dimensional discrete cross correlation formula_0 can be defined in several ways, one possibility being:
formula_1
Here "f"("m", "n") is the pixel intensity or the gray-scale value at a point ("m", "n") in the original image, "g"("m", "n") is the gray-scale value at a point ("m", "n") in the translated image, formula_2 and formula_3 are mean values of the intensity matrices "f" and "g" respectively.
However, in practical applications, the correlation array is usually computed using Fourier-transform methods, since the fast Fourier transform is much faster than computing the correlation directly. The first step is to take the Fourier transform of each image:
formula_4
Then taking the complex conjugate of the second result and multiplying the Fourier transforms together elementwise, we obtain the Fourier transform of the correlogram, formula_5:
formula_6
where formula_7 is the Hadamard product (entry-wise product). It is also fairly common to normalize the magnitudes to unity at this point, which results in a variation called "phase correlation".
Then the cross-correlation is obtained by applying the inverse Fourier transform:
formula_8
At this point, the coordinates of the maximum of formula_0 give the integer shift:
formula_9
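The FFT-based procedure above is only a few lines of array code. The sketch below is an illustrative NumPy implementation (NumPy being an assumed tool, not something prescribed by the text) that recovers the integer shift between an image and a periodically shifted copy; the commented-out normalization line gives the phase-correlation variant mentioned above.
<syntaxhighlight lang="python">
import numpy as np

def integer_shift(f, g):
    """Integer translation between images f and g via FFT-based cross-correlation."""
    F = np.fft.fft2(f - f.mean())
    G = np.fft.fft2(g - g.mean())
    R = F * np.conj(G)                     # Fourier transform of the correlogram
    # R /= np.abs(R) + 1e-12               # uncomment for the phase-correlation variant
    r = np.fft.ifft2(R).real               # cross-correlation array
    peak = np.unravel_index(np.argmax(r), r.shape)
    dy, dx = int(peak[0]), int(peak[1])
    # Indices beyond half the image size correspond to negative shifts (periodicity).
    if dy > f.shape[0] // 2: dy -= f.shape[0]
    if dx > f.shape[1] // 2: dx -= f.shape[1]
    return dy, dx

# Synthetic test: g is f with its pixels rolled (periodically shifted) by (-3, +5).
rng = np.random.default_rng(0)
f = rng.random((64, 64))
g = np.roll(f, shift=(-3, 5), axis=(0, 1))
print(integer_shift(f, g))                 # (3, -5): rolling g by this amount recovers f
</syntaxhighlight>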
Deformation mapping.
For deformation mapping, the mapping function that relates the images can be derived from comparing a set of subwindow pairs over the whole images (Figure 1). The coordinates or grid points ("xi", "yj") and ("xi"*, "yj"*) are related by the translations that occur between the two images. If the deformation is small and perpendicular to the optical axis of the camera, then the relation between ("xi", "yj") and ("xi"*, "yj"*) can be approximated by a 2D affine transformation such as:
formula_10
formula_11
Here "u" and "v" are translations of the center of the sub-image in the "X" and "Y" directions respectively. The distances from the center of the sub-image to the point ("x", "y") are denoted by formula_12 and formula_13. Thus, the correlation coefficient "rij" is a function of displacement components ("u", "v") and displacement gradients
formula_14
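As a small concrete rendering of the affine subset mapping above, the helper below warps one point of a subset given the centre displacements and the four displacement gradients; the numerical values in the example are assumptions chosen only to exercise the formula.
<syntaxhighlight lang="python">
# First-order (affine) subset mapping used above; values in the example are assumed.

def warp_point(x, y, dx, dy, u, v, du_dx, du_dy, dv_dx, dv_dy):
    """Map a point (x, y) of a subset, offset (dx, dy) from the subset centre,
    to its position in the deformed image."""
    x_star = x + u + du_dx * dx + du_dy * dy
    y_star = y + v + dv_dx * dx + dv_dy * dy
    return x_star, y_star

# Example: rigid translation (u, v) = (2.0, -1.0) plus a 1% stretch in x (du/dx = 0.01)
# for a point 10 pixels to the right of the subset centre.
print(warp_point(110.0, 100.0, 10.0, 0.0, 2.0, -1.0, 0.01, 0.0, 0.0, 0.0))  # (112.1, 99.0)
</syntaxhighlight>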
DIC has proven to be very effective at mapping deformation in macroscopic mechanical testing, where the application of speckle markers (e.g. paint, toner powder) or surface finishes from machining and polishing provide the needed contrast to correlate images well. However, these methods for applying surface contrast do not extend to the application of free-standing thin films for several reasons. First, vapor deposition at normal temperatures on semiconductor-grade substrates results in mirror-finish quality films with RMS roughnesses that are typically on the order of several nanometers. No subsequent polishing or finishing steps are required, and unless electron imaging techniques are employed that can resolve microstructural features, the films do not possess enough useful surface contrast to adequately correlate images. Typically this challenge can be circumvented by applying paint that results in a random speckle pattern on the surface, although the large and turbulent forces resulting from either spraying or applying paint to the surface of a free-standing thin film are too high and would break the specimens. In addition, the sizes of individual paint particles are on the order of micrometres, while the film thickness is only several hundred nanometers, which would be analogous to supporting a large boulder on a thin sheet of paper.
μDIC.
Advances in pattern application and deposition at reduced length scales have exploited small-scale synthesis methods, including nano-scale chemical surface restructuring and photolithography of computer-generated random speckle patterns, to produce suitable surface contrast for DIC. The application of very fine powder particles that electrostatically adhere to the surface of the specimen and can be digitally tracked is one approach. For Al thin films, fine alumina abrasive polishing powder was initially used since the particle sizes are relatively well controlled, although the adhesion to Al films was not very good and the particles tended to agglomerate excessively. The candidate that worked most effectively was a silica powder designed for a high-temperature adhesive compound (Aremco, Inc.), which was applied through a plastic syringe.
A light blanket of powder would coat the gage section of the tensile sample and the larger particles could be blown away gently. The remaining particles would be those with the best adhesion to the surface. While the resulting surface contrast is not ideal for DIC, the high intensity ratio between the particles and the background provide a unique opportunity to track the particles between consecutive digital images taken during deformation. This can be achieved quite straightforwardly using digital image processing techniques. Sub-pixel tracking can be achieved by a number of correlation techniques, or by fitting to the known intensity profiles of particles.
Photolithography and Electron Beam Lithography can be used to create micro tooling for micro speckle stamps, and the stamps can print speckle patterns onto the surface of the specimen. Stamp inks can be chosen which are appropriate for optical DIC, SEM-DIC, and simultaneous SEM-DIC/EBSD studies (the ink can be transparent to EBSD).
Digital volume correlation.
Digital Volume Correlation (DVC, and sometimes called Volumetric-DIC) extends the 2D-DIC algorithms into three dimensions to calculate the full-field 3D deformation from a pair of 3D images. This technique is distinct from 3D-DIC, which only calculates the 3D deformation of an "exterior surface" using conventional optical images. The DVC algorithm is able to track full-field displacement information in the form of voxels instead of pixels. The theory is similar to above except that another dimension is added: the z-dimension. The displacement is calculated from the correlation of 3D subsets of the reference and deformed volumetric images, which is analogous to the correlation of 2D subsets described above.
DVC can be performed using volumetric image datasets. These images can be obtained using confocal microscopy, X-ray computed tomography, Magnetic Resonance Imaging or other techniques. Similar to the other DIC techniques, the images must exhibit a distinct, high-contrast 3D "speckle pattern" to ensure accurate displacement measurement.
DVC was first developed in 1999 to study the deformation of trabecular bone using X-ray computed tomography images. Since then, applications of DVC have grown to include granular materials, metals, foams, composites and biological materials. To date it has been used with images acquired by MRI imaging, Computer Tomography (CT), micro-CT, confocal microscopy, and lightsheet microscopy. DVC is currently considered to be ideal in the research world for 3D quantification of local displacements, strains, and stress in biological specimens. It is preferred because of the non-invasiveness of the method over traditional experimental methods.
Two of the key challenges are improving the speed and reliability of the DVC measurement. The 3D imaging techniques produce noisier images than conventional 2D optical images, which reduces the quality of the displacement measurement. Computational speed is restricted by the file sizes of 3D images, which are significantly larger than 2D images. For example, an 8-bit (1024x1024) pixel 2D image has a file size of 1 MB, while an 8-bit (1024x1024x1024) voxel 3D image has a file size of 1 GB. This can be partially offset using parallel computing.
Applications.
Digital image correlation has demonstrated uses in a wide range of industries. It has also been used for mapping earthquake deformation.
DIC Standardization.
The International Digital Image Correlation Society (iDICs) is a body composed of members from academia, government, and industry, and is involved in training and educating end-users about DIC systems and the standardization of DIC practice for general applications. Created in 2015, iDICs has focused on creating standards for DIC users.
|
[
{
"math_id": 0,
"text": "r_{ij}"
},
{
"math_id": 1,
"text": "\nr_{ij} = \\frac{\\sum_m \\sum_n [f(m+i,n+j)-\\bar{f}][g(m,n)-\\bar{g}]}{\\sqrt{\\sum_m \\sum_n {[f(m,n)-\\bar{f}]^2} \\sum_m \\sum_n {[g(m, n)-\\bar{g}]^2}}}.\n"
},
{
"math_id": 2,
"text": "\\bar{f}"
},
{
"math_id": 3,
"text": "\\bar{g}"
},
{
"math_id": 4,
"text": " \\mathbf{F} = \\mathcal{F}\\{f\\}, \\quad \\mathbf{G} = \\mathcal{F}\\{g\\}."
},
{
"math_id": 5,
"text": "\\ R"
},
{
"math_id": 6,
"text": " R = \\mathbf{F} \\circ \\mathbf{G}^*,"
},
{
"math_id": 7,
"text": "\\circ"
},
{
"math_id": 8,
"text": "\\ r = \\mathcal{F}^{-1}\\{R\\}."
},
{
"math_id": 9,
"text": "(\\Delta x, \\Delta y) = \\arg\\max_{(i, j)}\\{r\\}."
},
{
"math_id": 10,
"text": "x^* = x + u + \\frac{\\partial u}{\\partial x}\\Delta x + \\frac{\\partial u}{\\partial y}\\Delta y,"
},
{
"math_id": 11,
"text": "y^* = y + v + \\frac{\\partial v}{\\partial x}\\Delta x + \\frac{\\partial v}{\\partial y}\\Delta y."
},
{
"math_id": 12,
"text": "\\Delta x"
},
{
"math_id": 13,
"text": "\\Delta y"
},
{
"math_id": 14,
"text": "\\frac{\\partial u}{\\partial x},\\frac{\\partial u}{\\partial y},\\frac{\\partial v}{\\partial x},\\frac{\\partial v}{\\partial y}."
}
] |
https://en.wikipedia.org/wiki?curid=14642741
|
14643334
|
Monad transformer
|
In functional programming, a monad transformer is a type constructor which takes a monad as an argument and returns a monad as a result.
Monad transformers can be used to compose features encapsulated by monads – such as state, exception handling, and I/O – in a modular way. Typically, a monad transformer is created by generalising an existing monad; applying the resulting monad transformer to the identity monad yields a monad which is equivalent to the original monad (ignoring any necessary boxing and unboxing).
Definition.
A monad transformer consists of three parts: a type constructor that maps any given monad to a new monad; "return" and "bind" operations for the new monad, satisfying the monad laws; and an additional operation, "lift", which embeds a computation in the underlying monad into the transformed monad. Each of the examples below defines these three operations.
Examples.
The option monad transformer.
Given any monad formula_0, the option monad transformer formula_1 (where formula_2 denotes the option type) is defined by:
formula_3
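Because the definition here is given only as equations, the following Python sketch models the option monad transformer concretely, with option values represented as tagged tuples and the underlying monad supplied by its own return and bind functions; the representation and the names used are illustrative choices, not a standard library API.
<syntaxhighlight lang="python">
# Minimal sketch of the option monad transformer. Option values are modelled as
# tagged tuples ("Just", a) / ("Nothing", None); the base monad M is supplied by
# its unit (return) and bind. Names are illustrative, not from a library.

def option_t(m_return, m_bind):
    def unit(a):                                   # return : A -> M(A?)
        return m_return(("Just", a))

    def bind(m, f):                                # bind : M(A?) -> (A -> M(B?)) -> M(B?)
        def step(opt):
            tag, value = opt
            return m_return(("Nothing", None)) if tag == "Nothing" else f(value)
        return m_bind(m, step)

    def lift(m):                                   # lift : M(A) -> M(A?)
        return m_bind(m, lambda a: m_return(("Just", a)))

    return unit, bind, lift

# Applying the transformer to the identity monad recovers the ordinary option monad.
id_return = lambda a: a
id_bind = lambda m, f: f(m)
unit, bind, lift = option_t(id_return, id_bind)

safe_recip = lambda x: unit(1.0 / x) if x != 0 else ("Nothing", None)
print(bind(unit(4.0), safe_recip))    # ('Just', 0.25)
print(bind(unit(0.0), safe_recip))    # ('Nothing', None)
</syntaxhighlight>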
The exception monad transformer.
Given any monad formula_0, the exception monad transformer formula_4 (where E is the type of exceptions) is defined by:
formula_5
The reader monad transformer.
Given any monad formula_0, the reader monad transformer formula_6 (where E is the environment type) is defined by:
formula_7
The state monad transformer.
Given any monad formula_0, the state monad transformer formula_8 (where S is the state type) is defined by:
formula_9
The writer monad transformer.
Given any monad formula_0, the writer monad transformer formula_10 (where W is endowed with a monoid operation ∗ with identity element formula_11) is defined by:
formula_12
The continuation monad transformer.
Given any monad formula_0, the continuation monad transformer maps an arbitrary type R into functions of type formula_13, where R is the result type of the continuation. It is defined by:
formula_14
Note that monad transformations are usually not commutative: for instance, applying the state transformer to the option monad yields a type formula_15 (a computation which may fail and yield no final state), whereas the converse transformation has type formula_16 (a computation which yields a final state and an optional return value).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{M} \\, A"
},
{
"math_id": 1,
"text": "\\mathrm{M} \\left( A^{?} \\right)"
},
{
"math_id": 2,
"text": "A^{?}"
},
{
"math_id": 3,
"text": "\\begin{array}{ll}\n\\mathrm{return}: & A \\rarr \\mathrm{M} \\left( A^{?} \\right) = a \\mapsto \\mathrm{return} (\\mathrm{Just}\\,a) \\\\\n\\mathrm{bind}: & \\mathrm{M} \\left( A^{?} \\right) \\rarr \\left( A \\rarr \\mathrm{M} \\left( B^{?} \\right) \\right) \\rarr \\mathrm{M} \\left( B^{?} \\right) = m \\mapsto f \\mapsto \\mathrm{bind} \\, m \\, \\left(a \\mapsto \\begin{cases} \\mbox{return Nothing} & \\mbox{if } a = \\mathrm{Nothing}\\\\ f \\, a' & \\mbox{if } a = \\mathrm{Just} \\, a' \\end{cases} \\right) \\\\\n\\mathrm{lift}: & \\mathrm{M} (A) \\rarr \\mathrm{M} \\left( A^{?} \\right) = m \\mapsto \\mathrm{bind} \\, m \\, (a \\mapsto \\mathrm{return} (\\mathrm{Just} \\, a)) \\end{array}"
},
{
"math_id": 4,
"text": "\\mathrm{M} (A + E)"
},
{
"math_id": 5,
"text": "\\begin{array}{ll}\n\\mathrm{return}: & A \\rarr \\mathrm{M} (A + E) = a \\mapsto \\mathrm{return} (\\mathrm{value}\\,a) \\\\\n\\mathrm{bind}: & \\mathrm{M} (A + E) \\rarr (A \\rarr \\mathrm{M} (B + E)) \\rarr \\mathrm{M} (B + E) = m \\mapsto f \\mapsto \\mathrm{bind} \\, m \\,\\left( a \\mapsto \\begin{cases} \\mbox{return err } e & \\mbox{if } a = \\mathrm{err} \\, e\\\\ f \\, a' & \\mbox{if } a = \\mathrm{value} \\, a' \\end{cases} \\right) \\\\\n\\mathrm{lift}: & \\mathrm{M} \\, A \\rarr \\mathrm{M} (A + E) = m \\mapsto \\mathrm{bind} \\, m \\, (a \\mapsto \\mathrm{return} (\\mathrm{value} \\, a)) \\\\\n\\end{array}"
},
{
"math_id": 6,
"text": "E \\rarr \\mathrm{M}\\,A"
},
{
"math_id": 7,
"text": "\\begin{array}{ll}\n\\mathrm{return}: & A \\rarr E \\rarr \\mathrm{M} \\, A = a \\mapsto e \\mapsto \\mathrm{return} \\, a \\\\\n\\mathrm{bind}: & (E \\rarr \\mathrm{M} \\, A) \\rarr (A \\rarr E \\rarr \\mathrm{M}\\,B) \\rarr E \\rarr \\mathrm{M}\\,B = m \\mapsto k \\mapsto e \\mapsto \\mathrm{bind} \\, (m \\, e) \\,( a \\mapsto k \\, a \\, e) \\\\\n\\mathrm{lift}: & \\mathrm{M} \\, A \\rarr E \\rarr \\mathrm{M} \\, A = a \\mapsto e \\mapsto a \\\\\n\\end{array}"
},
{
"math_id": 8,
"text": "S \\rarr \\mathrm{M}(A \\times S)"
},
{
"math_id": 9,
"text": "\\begin{array}{ll}\n\\mathrm{return}: & A \\rarr S \\rarr \\mathrm{M} (A \\times S) = a \\mapsto s \\mapsto \\mathrm{return} \\, (a, s) \\\\\n\\mathrm{bind}: & (S \\rarr \\mathrm{M}(A \\times S)) \\rarr (A \\rarr S \\rarr \\mathrm{M}(B \\times S)) \\rarr S \\rarr \\mathrm{M}(B \\times S) = m \\mapsto k \\mapsto s \\mapsto \\mathrm{bind} \\, (m \\, s) \\,((a, s') \\mapsto k \\, a \\, s') \\\\\n\\mathrm{lift}: & \\mathrm{M} \\, A \\rarr S \\rarr \\mathrm{M}(A \\times S) = m \\mapsto s \\mapsto \\mathrm{bind} \\, m \\, (a \\mapsto \\mathrm{return} \\, (a, s)) \\end{array}"
},
{
"math_id": 10,
"text": "\\mathrm{M}(W \\times A)"
},
{
"math_id": 11,
"text": "\\varepsilon"
},
{
"math_id": 12,
"text": "\\begin{array}{ll}\n\\mathrm{return}: & A \\rarr \\mathrm{M} (W \\times A) = a \\mapsto \\mathrm{return} \\, (\\varepsilon, a) \\\\\n\\mathrm{bind}: & \\mathrm{M}(W \\times A) \\rarr (A \\rarr \\mathrm{M}(W \\times B)) \\rarr \\mathrm{M}(W \\times B) = m \\mapsto f \\mapsto \\mathrm{bind} \\, m \\,((w, a) \\mapsto \\mathrm{bind} \\, (f \\, a) \\, ((w', b) \\mapsto \\mathrm{return} \\, (w * w', b))) \\\\\n\\mathrm{lift}: & \\mathrm{M} \\, A \\rarr \\mathrm{M}(W \\times A) = m \\mapsto \\mathrm{bind} \\, m \\, (a \\mapsto \\mathrm{return} \\, (\\varepsilon, a)) \\\\\n\\end{array}"
},
{
"math_id": 13,
"text": "(A \\rarr \\mathrm{M} \\, R) \\rarr \\mathrm{M} \\, R"
},
{
"math_id": 14,
"text": "\\begin{array}{ll}\n\\mathrm{return} \\colon & A \\rarr \\left( A \\rarr \\mathrm{M} \\, R \\right) \\rarr \\mathrm{M} \\, R = a \\mapsto k \\mapsto k \\, a \\\\\n\\mathrm{bind} \\colon & \\left( \\left( A \\rarr \\mathrm{M} \\, R \\right) \\rarr \\mathrm{M} \\, R \\right) \\rarr \\left( A \\rarr \\left( B \\rarr \\mathrm{M} \\, R \\right) \\rarr \\mathrm{M} \\, R \\right) \\rarr \\left( B \\rarr \\mathrm{M} \\, R \\right) \\rarr \\mathrm{M} \\, R = c \\mapsto f \\mapsto k \\mapsto c \\, \\left( a \\mapsto f \\, a \\, k \\right) \\\\\n\\mathrm{lift} \\colon & \\mathrm{M} \\, A \\rarr (A \\rarr \\mathrm{M} \\, R) \\rarr \\mathrm{M} \\, R = \\mathrm{bind} \n\\end{array}"
},
{
"math_id": 15,
"text": "S \\rarr \\left(A \\times S \\right)^{?}"
},
{
"math_id": 16,
"text": "S \\rarr \\left(A^{?} \\times S \\right)"
}
] |
https://en.wikipedia.org/wiki?curid=14643334
|
14643464
|
K-approximation of k-hitting set
|
In computer science, k-approximation of k-hitting set is an approximation algorithm for weighted hitting set. The input is a collection "S" of subsets of some universe "T" and a mapping "W" from "T" to non-negative numbers called the weights of the elements of "T". In k-hitting set the size of the sets in "S" cannot be larger than "k". That is, formula_0. The problem is now to pick some subset "T"' of "T" such that every set in "S" contains some element of "T"', and such that the total weight of all elements in "T"' is as small as possible.
The algorithm.
For each set formula_1 in "S", a "price" formula_2 is maintained, which is initially 0. For an element "a" in "T", let "S"("a") be the collection of sets from "S" containing "a". During the algorithm the following invariant is kept:
formula_3
We say that an element, "a", from "T" is "tight" if formula_4. The main part of the algorithm consists of a loop: As long as there is a set in "S" that contains no element from "T" which is tight, the price of this set is increased as much as possible without violating the invariant above. When this loop exits, all sets contain some tight element. Pick all the tight elements to be the hitting set.
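The loop just described can be written out directly. The Python sketch below is an illustrative rendering of the pricing algorithm; the variable names and the small example instance are assumptions made for demonstration, not taken from the source.
<syntaxhighlight lang="python">
# Sketch of the pricing (primal-dual) k-hitting-set approximation described above.
# S is a list of sets over the universe T, and W maps each element to its weight.

def k_hitting_set(S, W):
    prices = [0.0] * len(S)                    # one price per set, initially 0

    def slack(a):
        # How much the prices of sets containing `a` may still grow before the
        # invariant sum_{j in S(a)} p_j <= W(a) becomes tight.
        return W[a] - sum(prices[j] for j, s in enumerate(S) if a in s)

    tight = set()                              # elements that have become tight
    while True:
        # Find a set that contains no tight element yet.
        untouched = next((j for j, s in enumerate(S) if not (s & tight)), None)
        if untouched is None:
            break
        # Raise its price as much as the invariant allows; the element whose
        # slack runs out first becomes tight.
        a_min = min(S[untouched], key=slack)
        prices[untouched] += slack(a_min)
        tight.add(a_min)
    return tight

# Tiny example: universe {1, 2, 3}, sets of size <= 2 (so k = 2), unit weights.
S = [{1, 2}, {2, 3}, {1, 3}]
W = {1: 1.0, 2: 1.0, 3: 1.0}
print(k_hitting_set(S, W))                     # a hitting set of weight at most 2x optimal
</syntaxhighlight>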
Correctness.
The algorithm always terminates because in each iteration of the loop the price of some set in "S" is increased enough to make one more element from "T" tight. If it cannot increase any price, it exits. It runs in polynomial time because the loop will not make more iterations than the number of elements in the union of all the sets of "S". It returns a hitting set, because when the loop exits, all sets in "S" contain a tight element from "T", and the set of these tight elements are returned.
Note that for any hitting set "T*" and any prices formula_5 for which the invariant from the algorithm holds, the total weight of the hitting set is an upper bound on the sum over all members of "T*" of the sum of the prices of sets containing this element, that is: formula_6. This follows from the invariant property. Further, since the price of every set must occur at least once on the left-hand side, we get formula_7. In particular, this property holds for the optimal hitting set.
Further, for the hitting set "H" returned from the algorithm above, we have formula_8. Since any price formula_2 can appear at most "k" times in the left-hand side (since subsets of "S" can contain no more than "k" element from "T") we get formula_9 Combined with the previous paragraph we get formula_10, where "T*" is the optimal hitting set. This is exactly the guarantee that it is a k-approximation algorithm.
Relation to linear programming.
This algorithm is an instance of the primal-dual method, also called the pricing method. The intuition is that it is dual to a linear programming algorithm. For an explanation see http://algo.inria.fr/seminars/sem00-01/vazirani.html.
|
[
{
"math_id": 0,
"text": "\\forall i \\in S: |i| \\leq k"
},
{
"math_id": 1,
"text": "j"
},
{
"math_id": 2,
"text": "p_j"
},
{
"math_id": 3,
"text": "\\forall a \\in T: \\sum_{j \\in S(a)} p_j \\leq W(a).\\,"
},
{
"math_id": 4,
"text": "\\Sigma_{j \\in S(a)} p_j = W(a)"
},
{
"math_id": 5,
"text": "p_1, \\ldots, p_{|S|}"
},
{
"math_id": 6,
"text": "\\Sigma_{a \\in T^*} \\Sigma_{j \\in S(a)} p_j \\leq \\Sigma_{a \\in T^*} W(a)"
},
{
"math_id": 7,
"text": "\\Sigma_{j \\in S} p_j \\leq \\Sigma_{a \\in T^*} W(a)"
},
{
"math_id": 8,
"text": "\\Sigma_{a \\in H} \\Sigma_{j \\in S(a)} p_j = \\Sigma_{a \\in H} W(a)"
},
{
"math_id": 9,
"text": "\\Sigma_{a \\in H} W(a) \\leq k \\cdot \\Sigma_{j \\in S} p_j"
},
{
"math_id": 10,
"text": "\\Sigma_{a \\in H} W(a) \\leq k \\cdot \\Sigma_{a \\in T^*} W(a)"
}
] |
https://en.wikipedia.org/wiki?curid=14643464
|
1464363
|
Heat transfer coefficient
|
Quantity relating heat flux and temperature difference
In thermodynamics, the heat transfer coefficient or film coefficient, or film effectiveness, is the proportionality constant between the heat flux and the thermodynamic driving force for the flow of heat (i.e., the temperature difference, Δ"T"). It is used in calculating heat transfer, typically by convection or phase transition, between a fluid and a solid. The heat transfer coefficient has SI units of watts per square meter per kelvin (W/(m2·K)).
The overall heat transfer rate for combined modes is usually expressed in terms of an overall conductance or heat transfer coefficient, U. In that case, the heat transfer rate is:
formula_0
where (in SI units):
formula_1: Heat transfer rate (W)
formula_2: Heat transfer coefficient (W/m²K)
formula_3: surface area where the heat transfer takes place (m²)
formula_4: temperature of the surrounding fluid (K)
formula_5: temperature of the solid surface (K)
The general definition of the heat transfer coefficient is:
formula_6
where:
formula_7: heat flux (W/m²); i.e., thermal power per unit area, formula_8
formula_9: difference in temperature between the solid surface and surrounding fluid area (K)
The heat transfer coefficient is the reciprocal of thermal insulance. This is used for building materials (R-value) and for clothing insulation.
There are numerous methods for calculating the heat transfer coefficient in different heat transfer modes, different fluids, flow regimes, and under different thermohydraulic conditions. Often it can be estimated by dividing the thermal conductivity of the convection fluid by a length scale. The heat transfer coefficient is often calculated from the Nusselt number (a dimensionless number). There are also online calculators available specifically for Heat-transfer fluid applications. Experimental assessment of the heat transfer coefficient poses some challenges especially when small fluxes are to be measured (e.g. < 0.2 W/cm2).
Composition.
A simple method for determining an overall heat transfer coefficient that is useful for finding the heat transfer between simple elements, such as walls in buildings or across heat exchangers, is shown below. This method accounts only for conduction within the materials; it does not take into account heat transfer by other mechanisms such as radiation. The method is as follows:
formula_10
Where:
formula_11 = the overall heat transfer coefficient (W/(m2·K))
formula_12 = the contact area for each fluid side (m2) (with formula_13 and formula_14 expressing either surface)
formula_15 = the thermal conductivity of the material (W/(m·K))
formula_16 = the individual convection heat transfer coefficient for each fluid (W/(m2·K))
formula_17 = the wall thickness (m).
As the areas for each surface approach being equal the equation can be written as the transfer coefficient per unit area as shown below:
formula_18
or
formula_19
Often the value for formula_20 is referred to as the difference of two radii, where the inner and outer radii are used to define the thickness of a pipe carrying a fluid. However, this figure may also be considered as a wall thickness in a flat-plate transfer mechanism, or for other common flat surfaces such as a wall in a building, when the area difference between each edge of the transmission surface approaches zero.
In the walls of buildings the above formula can be used to derive the formula commonly used to calculate the heat transferred through building components. Architects and engineers call the resulting values either the U-value or the R-value of a construction assembly such as a wall. The two values are related as inverses of each other, such that R-value = 1/U-value, and both are more fully understood through the concept of an overall heat transfer coefficient described in a later section of this document.
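As a numerical illustration of the per-unit-area formula above, the sketch below combines two assumed film coefficients with conduction through a wall and reports the resulting U-value and R-value; every input is an invented example value.
<syntaxhighlight lang="python">
# Illustrative calculation of the overall heat transfer coefficient per unit area:
# 1/U = 1/h1 + dx_w/k + 1/h2. All input values are assumed.

h1 = 10.0      # convection coefficient on side 1, W/(m^2*K)
h2 = 25.0      # convection coefficient on side 2, W/(m^2*K)
k = 0.5        # wall thermal conductivity, W/(m*K)
dx_w = 0.10    # wall thickness, m

U = 1.0 / (1.0 / h1 + dx_w / k + 1.0 / h2)
R = 1.0 / U                      # overall thermal insulance (R-value), m^2*K/W

print(f"U = {U:.2f} W/(m^2*K), R = {R:.2f} m^2*K/W")   # U is about 2.9 for these inputs
</syntaxhighlight>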
Convective heat transfer correlations.
Although convective heat transfer can be analyzed through dimensional analysis, exact analysis of the boundary layer, approximate integral analysis of the boundary layer, and analogies between energy and momentum transfer, these analytic approaches may not offer practical solutions for every problem, since a tractable mathematical model is not always available. Therefore, many correlations were developed by various authors to estimate the convective heat transfer coefficient in various cases, including natural convection, forced convection for internal flow, and forced convection for external flow. These empirical correlations are presented for their particular geometry and flow conditions. As the fluid properties are temperature dependent, they are evaluated at the film temperature formula_21, which is the average of the surface temperature formula_22 and the surrounding bulk temperature formula_23.
formula_24
External flow, vertical plane.
Recommendations by Churchill and Chu provide the following correlation for natural convection adjacent to a vertical plane, both for laminar and turbulent flow. "k" is the thermal conductivity of the fluid, "L" is the characteristic length with respect to the direction of gravity, Ra"L" is the Rayleigh number with respect to this length and Pr is the Prandtl number (the Rayleigh number can be written as the product of the Grashof number and the Prandtl number).
formula_25
For laminar flows, the following correlation is slightly more accurate. It is observed that a transition from a laminar to a turbulent boundary occurs when Ra"L" exceeds around 109.
formula_26
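For reference, the first Churchill and Chu correlation above can be evaluated directly; the sketch below does so with roughly assumed air properties, so the output is only indicative.
<syntaxhighlight lang="python">
# Illustrative evaluation of the Churchill-Chu correlation for a vertical plate:
# h = (k/L) * (0.825 + 0.387 Ra^(1/6) / (1 + (0.492/Pr)^(9/16))^(8/27))^2.
# Property values below are rough assumptions for air near room temperature.

def churchill_chu_vertical(Ra, Pr, k, L):
    nu = (0.825 + 0.387 * Ra**(1/6) / (1 + (0.492 / Pr)**(9/16))**(8/27))**2
    return nu * k / L                              # heat transfer coefficient, W/(m^2*K)

k_air, Pr_air = 0.026, 0.71                        # W/(m*K), dimensionless (assumed)
print(churchill_chu_vertical(Ra=1e9, Pr=Pr_air, k=k_air, L=0.5))   # about 6.4 W/(m^2*K)
</syntaxhighlight>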
External flow, vertical cylinders.
For cylinders with their axes vertical, the expressions for plane surfaces can be used provided the curvature effect is not too significant. This represents the limit where boundary layer thickness is small relative to cylinder diameter formula_27. For fluids with Pr ≤ 0.72, the correlations for vertical plane walls can be used when
formula_28
where formula_29 is the Grashof number.
And in fluids of Pr ≤ 6 when
formula_30
Under these circumstances, the error is limited to up to 5.5%.
External flow, horizontal plates.
W. H. McAdams suggested the following correlations for horizontal plates. The induced buoyancy will be different depending upon whether the hot surface is facing up or down.
For a hot surface facing up, or a cold surface facing down, for laminar flow:
formula_31
and for turbulent flow:
formula_32
For a hot surface facing down, or a cold surface facing up, for laminar flow:
formula_33
The characteristic length is the ratio of the plate surface area to perimeter. If the surface is inclined at an angle "θ" with the vertical, then the equations for a vertical plate by Churchill and Chu may be used for "θ" up to 60°; if the boundary layer flow is laminar, the gravitational acceleration "g" is replaced with "g" cos "θ" when calculating the Ra term.
External flow, horizontal cylinder.
For cylinders of sufficient length and negligible end effects, Churchill and Chu give the following correlation for formula_34.
formula_35
External flow, spheres.
For spheres, T. Yuge has the following correlation for Pr≃1 and formula_36.
formula_37
Vertical rectangular enclosure.
For heat flow between two opposing vertical plates of rectangular enclosures, Catton recommends the following two correlations for smaller aspect ratios. The correlations are valid for any value of Prandtl number.
For formula_38 :
formula_39
where "H" is the internal height of the enclosure and "L" is the horizontal distance between the two sides of different temperatures.
For formula_40 :
formula_41
For vertical enclosures with larger aspect ratios, the following two correlations can be used. For 10 < "H"/"L" < 40:
formula_42
For formula_43 :
formula_44
For all four correlations, fluid properties are evaluated at the average temperature—as opposed to film temperature—formula_45, where formula_5 and formula_4 are the temperatures of the vertical surfaces and formula_46.
Forced convection.
See main article Nusselt number and Churchill–Bernstein equation for forced convection over a horizontal cylinder.
Internal flow, laminar flow.
Sieder and Tate give the following correlation to account for entrance effects in laminar flow in tubes, where formula_27 is the internal diameter, formula_47 is the fluid viscosity at the bulk mean temperature, and formula_48 is the viscosity at the tube wall surface temperature.
formula_49
For fully developed laminar flow, the Nusselt number is constant and equal to 3.66. Mills combines the entrance effects and fully developed flow into one equation
formula_50
Internal flow, turbulent flow.
The Dittus–Boelter correlation (1930) is a common and particularly simple correlation useful for many applications. This correlation is applicable when forced convection is the only mode of heat transfer; i.e., there is no boiling, condensation, significant radiation, etc. The accuracy of this correlation is anticipated to be ±15%.
For a fluid flowing in a straight circular pipe with a Reynolds number between 10,000 and 120,000 (in the turbulent pipe flow range), when the fluid's Prandtl number is between 0.7 and 120, for a location far from the pipe entrance (more than 10 pipe diameters; more than 50 diameters according to many authors) or other flow disturbances, and when the pipe surface is hydraulically smooth, the heat transfer coefficient between the bulk of the fluid and the pipe surface can be expressed explicitly as:
formula_51
where:
formula_52 is the hydraulic diameter
formula_53 is the thermal conductivity of the bulk fluid
formula_54 is the fluid viscosity
formula_55 is the mass flux
formula_56 is the isobaric heat capacity of the fluid
formula_57 is 0.4 for heating (wall hotter than the bulk fluid) and 0.33 for cooling (wall cooler than the bulk fluid).
The fluid properties necessary for the application of this equation are evaluated at the bulk temperature thus avoiding iteration.
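The explicit form of the Dittus–Boelter correlation translates directly into code. The sketch below uses roughly assumed water-like properties; it is an illustration of the formula, not a validated design calculation.
<syntaxhighlight lang="python">
# Illustrative Dittus-Boelter evaluation: Nu = 0.023 Re^0.8 Pr^n, h = Nu * k / d.
# Property values are rough assumptions for water near room temperature.

def dittus_boelter(d, mass_flux, mu, cp, k, heating=True):
    Re = mass_flux * d / mu               # Reynolds number from the mass flux j
    Pr = mu * cp / k                      # Prandtl number
    n = 0.4 if heating else 0.33
    Nu = 0.023 * Re**0.8 * Pr**n
    return Nu * k / d                     # W/(m^2*K)

h = dittus_boelter(d=0.025, mass_flux=1000.0, mu=8.9e-4, cp=4180.0, k=0.6, heating=True)
print(round(h))                           # on the order of a few thousand W/(m^2*K)
</syntaxhighlight>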
Forced convection, external flow.
In analyzing the heat transfer associated with the flow past the exterior surface of a solid, the situation is complicated by phenomena such as boundary layer separation. Various authors have correlated charts and graphs for different geometries and flow conditions.
For flow parallel to a plane surface, where formula_58 is the distance from the edge and formula_59 is the height of the boundary layer, a mean Nusselt number can be calculated using the Colburn analogy.
Thom correlation.
There exist simple fluid-specific correlations for heat transfer coefficient in boiling. The Thom correlation is for the flow of boiling water (subcooled or saturated at pressures up to about 20 MPa) under conditions where the nucleate boiling contribution predominates over forced convection. This correlation is useful for rough estimation of expected temperature difference given the heat flux:
formula_60
where:
formula_61 is the wall temperature elevation above the saturation temperature, K
"q" is the heat flux, MW/m2
"P" is the pressure of water, MPa
This empirical correlation is specific to the units given.
Heat transfer coefficient of pipe wall.
The resistance to the flow of heat through the material of the pipe wall can be expressed as a "heat transfer coefficient of the pipe wall". However, one must first decide whether the heat flux is based on the pipe inner or the outer diameter.
Selecting to base the heat flux on the pipe inner diameter, and assuming that the pipe wall thickness is small in comparison with the pipe inner diameter, then the heat transfer coefficient for the pipe wall can be calculated as if the wall were not curved:
formula_62
where
formula_53 is the effective thermal conductivity of the wall material
formula_58 is the difference between the outer and inner diameter.
If the above assumption does not hold, then the wall heat transfer coefficient can be calculated using the following expression:
formula_63
where
formula_64 = inner diameter of the pipe [m]
formula_65 = outer diameter of the pipe [m]
The thermal conductivity of the tube material usually depends on temperature; the mean thermal conductivity is often used.
Combining convective heat transfer coefficients.
For two or more heat transfer processes acting in parallel, convective heat transfer coefficients simply add:
formula_66
For two or more heat transfer processes connected in series, convective heat transfer coefficients add inversely:
formula_67
For example, consider a pipe with a fluid flowing inside. The approximate rate of heat transfer between the bulk of the fluid inside the pipe and the pipe external surface is:
formula_68
where
formula_7 = heat transfer rate (W)
formula_2 = convective heat transfer coefficient (W/(m²·K))
formula_69 = wall thickness (m)
formula_53 = wall thermal conductivity (W/m·K)
formula_3 = area (m²)
formula_9 = difference in temperature (K)
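As a worked instance of the series combination just given for a thin-walled pipe, the sketch below evaluates the heat transfer rate; all inputs are assumed example values.
<syntaxhighlight lang="python">
# Illustrative series combination for a thin-walled pipe:
# q = A * dT / (1/h + t/k). All inputs are assumed values.

h = 500.0        # internal convective coefficient, W/(m^2*K)
t = 0.002        # wall thickness, m
k = 16.0         # wall conductivity (e.g. a stainless steel), W/(m*K)
A = 0.8          # heat transfer area, m^2
dT = 40.0        # bulk fluid minus outer-surface temperature, K

q = A * dT / (1.0 / h + t / k)
print(f"q = {q:.0f} W")          # roughly 1.5e4 W for these assumed values
</syntaxhighlight>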
Overall heat transfer coefficient.
The overall heat transfer coefficient formula_70 is a measure of the overall ability of a series of conductive and convective barriers to transfer heat. It is commonly applied to the calculation of heat transfer in heat exchangers, but can be applied equally well to other problems.
For the case of a heat exchanger, formula_70 can be used to determine the total heat transfer between the two streams in the heat exchanger by the following relationship:
formula_71
where:
formula_7 = heat transfer rate (W)
formula_70 = overall heat transfer coefficient (W/(m2·K))
formula_3 = heat transfer surface area (m2)
formula_72 = logarithmic mean temperature difference (K).
The overall heat transfer coefficient takes into account the individual heat transfer coefficients of each stream and the resistance of the pipe material. It can be calculated as the reciprocal of the sum of a series of thermal resistances (but more complex relationships exist, for example when heat transfer takes place by different routes in parallel):
formula_73
where:
"R" = Resistance(s) to heat flow in pipe wall (K/W)
Other parameters are as above.
The heat transfer coefficient is the heat transferred per unit area per kelvin. Thus "area" is included in the equation as it represents the area over which the transfer of heat takes place. The areas for each flow will be different as they represent the contact area for each fluid side.
The "thermal resistance" due to the pipe wall (for thin walls) is calculated by the following relationship:
formula_74
where
formula_58 = the wall thickness (m)
formula_53 = the thermal conductivity of the material (W/(m·K))
This represents the heat transfer by conduction in the pipe.
The "thermal conductivity" is a characteristic of the particular material. Values of thermal conductivities for various materials are listed in the list of thermal conductivities.
As mentioned earlier in the article the "convection heat transfer coefficient" for each stream depends on the type of fluid, flow properties and temperature properties.
Some typical heat transfer coefficients include:
Thermal resistance due to fouling deposits.
Often during their use, heat exchangers collect a layer of fouling on the surface which, in addition to potentially contaminating a stream, reduces the effectiveness of heat exchangers. In a fouled heat exchanger the buildup on the walls creates an additional layer of materials that heat must flow through. Due to this new layer, there is additional resistance within the heat exchanger and thus the overall heat transfer coefficient of the exchanger is reduced. The following relationship is used to solve for the heat transfer resistance with the additional fouling resistance:
formula_75 = formula_76
where
formula_77 = overall heat transfer coefficient for a fouled heat exchanger, formula_78
formula_79= perimeter of the heat exchanger, may be either the hot or cold side perimeter however, it must be the same perimeter on both sides of the equation, formula_80
formula_70 = overall heat transfer coefficient for an unfouled heat exchanger, formula_78
formula_81 = fouling resistance on the cold side of the heat exchanger, formula_82
formula_83 = fouling resistance on the hot side of the heat exchanger, formula_82
formula_84 = perimeter of the cold side of the heat exchanger, formula_80
formula_85 = perimeter of the hot side of the heat exchanger, formula_80
This equation uses the overall heat transfer coefficient of an unfouled heat exchanger and the fouling resistance to calculate the overall heat transfer coefficient of a fouled heat exchanger. The equation takes into account that the perimeter of the heat exchanger is different on the hot and cold sides. The perimeter used for the formula_79 does not matter as long as it is the same. The overall heat transfer coefficients will adjust to take into account that a different perimeter was used as the product formula_86 will remain the same.
The fouling resistances can be calculated for a specific heat exchanger if the average thickness and thermal conductivity of the fouling are known. The product of the average thickness and thermal conductivity will result in the fouling resistance on a specific side of the heat exchanger.
formula_87 = formula_88
where:
formula_89 = average thickness of the fouling in a heat exchanger, formula_80
formula_90 = thermal conductivity of the fouling, formula_91.
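To make the fouling relationships concrete, the sketch below computes the fouled overall coefficient from an assumed clean coefficient and assumed fouling-layer thicknesses and conductivities; none of the numbers come from the text.
<syntaxhighlight lang="python">
# Illustrative fouling calculation:
#   R_f = d_f / k_f                                  (per side)
#   1/(U_f*P) = 1/(U*P) + R_fH/P_H + R_fC/P_C
# All inputs are assumed values.

U_clean = 800.0            # unfouled overall coefficient, W/(m^2*K)
P = P_hot = P_cold = 0.3   # reference, hot-side and cold-side perimeters, m (equal here)

R_f_hot = 0.0005 / 1.0     # 0.5 mm fouling layer with k_f = 1.0 W/(m*K) -> m^2*K/W
R_f_cold = 0.0002 / 0.5    # 0.2 mm fouling layer with k_f = 0.5 W/(m*K) -> m^2*K/W

inv_UfP = 1.0 / (U_clean * P) + R_f_hot / P_hot + R_f_cold / P_cold
U_fouled = 1.0 / (inv_UfP * P)

print(f"U_fouled = {U_fouled:.0f} W/(m^2*K)")   # noticeably lower than the clean 800
</syntaxhighlight>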
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\dot{Q}=hA(T_2-T_1)"
},
{
"math_id": 1,
"text": "\\dot{Q}"
},
{
"math_id": 2,
"text": "h"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "T_2"
},
{
"math_id": 5,
"text": "T_1"
},
{
"math_id": 6,
"text": "h = \\frac{q}{\\Delta T}"
},
{
"math_id": 7,
"text": "q"
},
{
"math_id": 8,
"text": "q = d\\dot{Q}/dA"
},
{
"math_id": 9,
"text": "\\Delta T"
},
{
"math_id": 10,
"text": " \\frac{1}{U \\cdot A} = \\frac{1}{h_1 \\cdot A_1} + \\frac{dx_w}{k \\cdot A} + \\frac{1}{h_2 \\cdot A_2} "
},
{
"math_id": 11,
"text": " U "
},
{
"math_id": 12,
"text": " A "
},
{
"math_id": 13,
"text": " A_{1} "
},
{
"math_id": 14,
"text": " A_{2} "
},
{
"math_id": 15,
"text": " k "
},
{
"math_id": 16,
"text": " h "
},
{
"math_id": 17,
"text": " dx_w "
},
{
"math_id": 18,
"text": " \\frac{1}{U} = \\frac{1}{h_1} + \\frac{dx_w}{k} + \\frac{1}{h_2} "
},
{
"math_id": 19,
"text": " U = \\frac{1}{\\frac{1}{h_1} + \\frac{dx_w}{k} + \\frac{1}{h_2}} "
},
{
"math_id": 20,
"text": "dx_w"
},
{
"math_id": 21,
"text": "T_f"
},
{
"math_id": 22,
"text": "T_s"
},
{
"math_id": 23,
"text": "{{T}_{\\infty }}"
},
{
"math_id": 24,
"text": "{{T}_{f}}=\\frac{{{T}_{s}}+{{T}_{\\infty }}}{2}"
},
{
"math_id": 25,
"text": "h \\ = \\frac{k}{L}\\left({0.825 + \\frac{0.387 \\mathrm{Ra}_L^{1/6}}{\\left(1 + (0.492/\\mathrm{Pr})^{9/16} \\right)^{8/27} }}\\right)^2 \\, \\quad \\mathrm{Ra}_L < 10^{12}"
},
{
"math_id": 26,
"text": "h \\ = \\frac{k}{L} \\left(0.68 + \\frac{0.67 \\mathrm{Ra}_L^{1/4}}{\\left(1 + (0.492/\\mathrm{Pr})^{9/16}\\right)^{4/9}}\\right) \\, \\quad \\mathrm10^{-1} < \\mathrm{Ra}_L < 10^9 "
},
{
"math_id": 27,
"text": "D"
},
{
"math_id": 28,
"text": "\\frac{D}{L}\\ge \\frac{35}{\\mathrm{Gr}_{L}^{\\frac{1}{4}}}"
},
{
"math_id": 29,
"text": "\\mathrm{Gr}_L"
},
{
"math_id": 30,
"text": "\\frac{D}{L}\\ge \\frac{25.1}{\\mathrm{Gr}_{L}^{\\frac{1}{4}}}"
},
{
"math_id": 31,
"text": "h \\ = \\frac{k 0.54 \\mathrm{Ra}_L^{1/4}} {L} \\, \\quad 10^5 < \\mathrm{Ra}_L < 2\\times 10^7"
},
{
"math_id": 32,
"text": "h \\ = \\frac{k 0.14 \\mathrm{Ra}_L^{1/3}} {L} \\, \\quad 2\\times 10^7 < \\mathrm{Ra}_L < 3\\times 10^{10} ."
},
{
"math_id": 33,
"text": "h \\ = \\frac{k 0.27 \\mathrm{Ra}_L^{1/4}} {L} \\, \\quad 3\\times 10^5 < \\mathrm{Ra}_L < 3\\times 10^{10}."
},
{
"math_id": 34,
"text": "10^{-5}<\\mathrm{Ra}_D<10^{12}"
},
{
"math_id": 35,
"text": "h \\ = \\frac{k} {D}\\left({0.6 + \\frac{0.387 \\mathrm{Ra}_D^{1/6}}{\\left(1 + (0.559/\\mathrm{Pr})^{9/16} \\, \\right)^{8/27} \\,}}\\right)^2"
},
{
"math_id": 36,
"text": "1 \\le \\mathrm{Ra}_D \\le 10^5"
},
{
"math_id": 37,
"text": "{\\mathrm{Nu}}_D \\ = 2 + 0.43 \\mathrm{Ra}_D^{1/4}"
},
{
"math_id": 38,
"text": " 1 <\\frac{H}{L} < 2 "
},
{
"math_id": 39,
"text": "h \\ = \\frac{k}{L}0.18 \\left(\\frac{\\mathrm{Pr}}{0.2 + \\mathrm{Pr}} \\mathrm{Ra}_L \\right)^{0.29} \\, \\quad \\mathrm{Ra}_L \\mathrm{Pr}/(0.2 + \\mathrm{Pr}) > 10^3"
},
{
"math_id": 40,
"text": " 2 < \\frac{H}{L} < 10 "
},
{
"math_id": 41,
"text": "h \\ = \\frac{k}{L}0.22 \\left(\\frac{\\mathrm{Pr}}{0.2 + \\mathrm{Pr}} \\mathrm{Ra}_L \\right)^{0.28} \\left(\\frac{H}{L} \\right)^{-1/4} \\, \\quad \\mathrm{Ra}_L < 10^{10}."
},
{
"math_id": 42,
"text": "h \\ = \\frac{k}{L}0.42 \\mathrm{Ra}_L^{1/4} \\mathrm{Pr}^{0.012} \\left(\\frac{H}{L} \\right)^{-0.3} \\, \\quad 1 < \\mathrm{Pr} < 2\\times10^4, \\, \\quad 10^4 < \\mathrm{Ra}_L < 10^7."
},
{
"math_id": 43,
"text": " 1 < \\frac{H}{L} < 40"
},
{
"math_id": 44,
"text": "h \\ = \\frac{k}{L}0.46 \\mathrm{Ra}_L^{1/3} \\, \\quad 1 < \\mathrm{Pr} < 20, \\, \\quad 10^6 < \\mathrm{Ra}_L < 10^9."
},
{
"math_id": 45,
"text": "(T_1+T_2)/2"
},
{
"math_id": 46,
"text": "T_1 > T_2"
},
{
"math_id": 47,
"text": "{\\mu }_{b}"
},
{
"math_id": 48,
"text": "{\\mu }_{w}"
},
{
"math_id": 49,
"text": "\\mathrm{Nu}_{D}={1.86}\\cdot{{{\\left( \\mathrm{Re}\\cdot\\mathrm{Pr} \\right)}^{{}^{1}\\!\\!\\diagup\\!\\!{}_{3}\\;}}}{{\\left( \\frac{D}{L} \\right)}^{{}^{1}\\!\\!\\diagup\\!\\!{}_{3}\\;}}{{\\left( \\frac{{{\\mu }_{b}}}{{{\\mu }_{w}}} \\right)}^{0.14}}"
},
{
"math_id": 50,
"text": "\\mathrm{Nu}_{D}=3.66+\\frac{0.065\\cdot\\mathrm{Re}\\cdot\\mathrm{Pr}\\cdot\\frac{D}{L}}{1+0.04\\cdot\\left( \\mathrm{Re}\\cdot\\mathrm{Pr}\\cdot\\frac{D}{L}\\right)^{2/3}}"
},
{
"math_id": 51,
"text": "{h d \\over k}= {0.023} \\, \\left({j d \\over \\mu}\\right)^{0.8} \\, \\left({\\mu c_p \\over k}\\right)^n"
},
{
"math_id": 52,
"text": "d"
},
{
"math_id": 53,
"text": "k"
},
{
"math_id": 54,
"text": "\\mu"
},
{
"math_id": 55,
"text": "j"
},
{
"math_id": 56,
"text": "c_p"
},
{
"math_id": 57,
"text": "n"
},
{
"math_id": 58,
"text": "x"
},
{
"math_id": 59,
"text": "L"
},
{
"math_id": 60,
"text": "\\Delta T_{\\rm sat} = 22.5 \\cdot {q}^{0.5} \\exp (-P/8.7)"
},
{
"math_id": 61,
"text": "\\Delta T_{\\rm sat}"
},
{
"math_id": 62,
"text": "h_{\\rm wall} = {2 k \\over x}"
},
{
"math_id": 63,
"text": "h_{\\rm wall} = {2k \\over {d_{\\rm i}\\ln(d_{\\rm o}/d_{\\rm i})}}"
},
{
"math_id": 64,
"text": "d_i"
},
{
"math_id": 65,
"text": "d_o"
},
{
"math_id": 66,
"text": "h = h_1 + h_2 + \\cdots"
},
{
"math_id": 67,
"text": "{1\\over h} = {1\\over h_1} + {1\\over h_2} + \\dots"
},
{
"math_id": 68,
"text": "q=\\left( {1\\over{{1 \\over h}+{t \\over k}}} \\right) \\cdot A \\cdot \\Delta T"
},
{
"math_id": 69,
"text": "t"
},
{
"math_id": 70,
"text": "U"
},
{
"math_id": 71,
"text": "q = UA \\Delta T_{LM}"
},
{
"math_id": 72,
"text": "\\Delta T_{LM}"
},
{
"math_id": 73,
"text": "\\frac {1} {UA} = \\sum \\frac{1} {hA} + \\sum R "
},
{
"math_id": 74,
"text": "R = \\frac{x}{kA}"
},
{
"math_id": 75,
"text": "\\frac{1}{U_{f}P}"
},
{
"math_id": 76,
"text": "\\frac{1}{UP}+\\frac{R_{fH}}{P_{H}}+\\frac{R_{fC}}{P_{C}}"
},
{
"math_id": 77,
"text": "U_{f}"
},
{
"math_id": 78,
"text": "\\textstyle \\rm \\frac{W}{m^2K}"
},
{
"math_id": 79,
"text": "P"
},
{
"math_id": 80,
"text": "\\rm m"
},
{
"math_id": 81,
"text": "R_{fC}"
},
{
"math_id": 82,
"text": "\\textstyle \\rm \\frac{m^2K}{W}"
},
{
"math_id": 83,
"text": "R_{fH}"
},
{
"math_id": 84,
"text": "P_C"
},
{
"math_id": 85,
"text": "P_H"
},
{
"math_id": 86,
"text": "UP"
},
{
"math_id": 87,
"text": "R_f"
},
{
"math_id": 88,
"text": "\\frac{d_f}{k_f}"
},
{
"math_id": 89,
"text": "d_f"
},
{
"math_id": 90,
"text": "k_f"
},
{
"math_id": 91,
"text": "\\textstyle \\rm \\frac{W}{mK}"
}
] |
https://en.wikipedia.org/wiki?curid=1464363
|
14643727
|
Pirani gauge
|
The Pirani gauge is a robust thermal conductivity gauge used for the measurement of the pressures in vacuum systems. It was invented in 1906 by Marcello Pirani.
Marcello Stefano Pirani was a German physicist working for Siemens & Halske which was involved in the vacuum lamp industry. In 1905 their product was tantalum lamps which required a high vacuum environment for the filaments. The gauges that Pirani was using in the production environment were some fifty McLeod gauges, each filled with 2 kg of mercury in glass tubes.
Pirani was aware of the gas thermal conductivity investigations of Kundt and Warburg (1875) published thirty years earlier and the work of Marian Smoluchowski (1898). In 1906 he described his "directly indicating vacuum gauge" that used a heated wire to measure vacuum by monitoring the heat transfer from the wire by the vacuum environment.
Structure.
The Pirani gauge consists of a metal sensor wire (usually gold plated tungsten or platinum) suspended in a tube which is connected to the system whose vacuum is to be measured. The wire is usually coiled to make the gauge more compact. The connection is usually made either by a ground glass joint or a flanged metal connector, sealed with an o-ring. The sensor wire is connected to an electrical circuit from which, after calibration, a pressure reading may be taken.
Mode of operation.
In order to understand the technology, consider that in a gas filled system there are four ways that a heated wire transfers heat to its surroundings.
A heated metal wire (sensor wire, or simply sensor) suspended in a gas will lose heat to the gas as its molecules collide with the wire and remove heat. If the gas pressure is reduced, the number of molecules present will fall proportionately and the wire will lose heat more slowly. Measuring the heat loss is an indirect indication of pressure.
There are three possible measurement schemes.
Note that keeping the temperature constant implies that the end losses (4.) and the thermal radiation losses (3.) are constant.
The electrical resistance of a wire varies with its temperature, so the resistance indicates the temperature of wire. In many systems, the wire is maintained at a constant resistance "R" by controlling the current "I" through the wire. The resistance can be set using a bridge circuit. The current required to achieve this balance is therefore a measure of the vacuum.
The gauge may be used for pressures between 0.5 Torr and 1×10−4 Torr. Below 5×10−4 Torr, a Pirani gauge has only one significant digit of resolution. The thermal conductivity and heat capacity of the gas affect the readout from the meter, and therefore the apparatus may need calibrating before accurate readings are obtainable. For lower pressure measurement, the thermal conductivity of the gas becomes increasingly smaller and more difficult to measure accurately, and other instruments such as a Penning gauge or Bayard–Alpert gauge are used instead.
Pulsed Pirani gauge.
A special form of the Pirani gauge is the pulsed Pirani vacuum gauge where the sensor wire is not operated at a constant temperature, but is cyclically heated up to a certain temperature threshold by an increasing voltage ramp. When the threshold is reached, the heating voltage is switched off and the sensor cools down again. The required heat-up time is used as a measure of pressure.
For adequately low pressure, the following first-order dynamic thermal response model relating supplied heating power formula_3 and sensor temperature "T"("t") applies:
formula_4
where formula_5 and formula_6 are specific heat and emissivity of the sensor wire (material properties), formula_7 and formula_8 are surface area and mass of the sensor wire, and formula_9 and formula_10 are constants determined for each sensor in calibration.
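A crude way to see how the heat-up time encodes pressure is to integrate the first-order model numerically. The sketch below does this with an explicit Euler step and entirely assumed constants, so it only illustrates the trend that a higher gas thermal conductivity (i.e. higher pressure) lengthens the heat-up time.
<syntaxhighlight lang="python">
# Illustrative Euler integration of the pulsed-Pirani model
#   P_el = C1*lam_gas*(T-Ta) + C2*lam_fil*(T-Ta) + A_fil*eps*sigma*(T^4-Ta^4) + c_fil*m_fil*dT/dt
# to find the time needed to reach a temperature threshold. All constants are assumed.

SIGMA = 5.670e-8                       # Stefan-Boltzmann constant, W/(m^2*K^4)

def heatup_time(lam_gas, P_el=5e-3, Ta=300.0, T_thresh=400.0,
                C1=1e-3, C2=1.5e-7, lam_fil=70.0, A_fil=1e-6, eps=0.1,
                c_fil=130.0, m_fil=1e-8, dt=1e-5, t_max=1.0):
    """Time for the sensor to heat from Ta to T_thresh under constant P_el (SI units assumed)."""
    T, t = Ta, 0.0
    while T < T_thresh and t < t_max:
        losses = (C1 * lam_gas * (T - Ta) + C2 * lam_fil * (T - Ta)
                  + A_fil * eps * SIGMA * (T**4 - Ta**4))
        T += dt * (P_el - losses) / (c_fil * m_fil)   # explicit Euler step of the model above
        t += dt
    return t

# Higher gas thermal conductivity (i.e. higher pressure) -> longer heat-up time.
for lam_gas in (0.0, 0.01, 0.02):      # assumed effective gas conductivities, W/(m*K)
    print(lam_gas, round(heatup_time(lam_gas), 4), "s")
</syntaxhighlight>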
Alternative.
An alternative to the Pirani gauge is the thermocouple gauge, which works on the same principle of detecting thermal conductivity of the gas by a change in temperature. In the thermocouple gauge, the temperature is sensed by a thermocouple rather than by the change in resistance of the heated wire.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "E\\propto dT/dr"
},
{
"math_id": 1,
"text": "E \\propto P(T_1-T_0)/\\surd T_0"
},
{
"math_id": 2,
"text": "E \\propto (T_1^4 - T_0^4)"
},
{
"math_id": 3,
"text": "P_{\\text{el}}"
},
{
"math_id": 4,
"text": "P_{\\text{el}} = C_1 \\lambda_{\\text{gas}}(T(t) - T_a) + C_2\\lambda_{\\text{fil}}(T(t) - T_a) + A_{\\text{fil}} \\epsilon \\sigma(T(t)^4 - T^4_a) + c_{\\text{fil}}m_{\\text{fil}} \\frac{\\mathrm{d}T}{\\mathrm{d}t} ,"
},
{
"math_id": 5,
"text": "c_{\\text{fil}}"
},
{
"math_id": 6,
"text": "\\epsilon"
},
{
"math_id": 7,
"text": "A_{\\text{fil}}"
},
{
"math_id": 8,
"text": "m_{\\text{fil}}"
},
{
"math_id": 9,
"text": "C_1"
},
{
"math_id": 10,
"text": "C_2"
}
] |
https://en.wikipedia.org/wiki?curid=14643727
|
1464384
|
Superadditivity
|
Property of a function
In mathematics, a function formula_0 is superadditive if
formula_1
for all formula_2 and formula_3 in the domain of formula_4
Similarly, a sequence formula_5 is called superadditive if it satisfies the inequality
formula_6
for all formula_7 and formula_8
The term "superadditive" is also applied to functions from a boolean algebra to the real numbers where formula_9 such as lower probabilities.
Properties.
If formula_0 is a superadditive function whose domain contains formula_22 then formula_23 To see this, take the inequality at the top: formula_24 Hence formula_25
The negative of a superadditive function is subadditive.
Fekete's lemma.
The major reason for the use of superadditive sequences is the following lemma due to Michael Fekete.
Lemma: (Fekete) For every superadditive sequence formula_26 the limit formula_27 is equal to the supremum formula_28 (The limit may be positive infinity, as is the case with the sequence formula_29 for example.)
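Fekete's lemma is easy to observe numerically. The sketch below checks superadditivity for the sequence formula_29 mentioned above and for an additional assumed example, 2"n" − √"n", whose ratio increases towards its supremum 2.
<syntaxhighlight lang="python">
import math

# Numerical check of Fekete's lemma for two superadditive sequences:
#   a_n = log(n!)        (the limit of a_n/n is +infinity, as noted above)
#   b_n = 2n - sqrt(n)   (an assumed example; b_n/n increases to its supremum 2)

def is_superadditive(a, N):
    return all(a(n + m) >= a(n) + a(m) for n in range(1, N) for m in range(1, N))

a = lambda n: math.lgamma(n + 1)          # log(n!)
b = lambda n: 2 * n - math.sqrt(n)

print(is_superadditive(a, 60), is_superadditive(b, 60))   # True True
for n in (10, 100, 1000, 10000):
    print(n, round(a(n) / n, 3), round(b(n) / n, 4))       # a_n/n keeps growing; b_n/n -> 2
</syntaxhighlight>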
The analogue of Fekete's lemma holds for subadditive functions as well.
There are extensions of Fekete's lemma that do not require the definition of superadditivity above to hold for all formula_7 and formula_8
There are also results that allow one to deduce the rate of convergence to the limit whose existence is stated in Fekete's lemma if some kind of both superadditivity and subadditivity is present. A good exposition of this topic may be found in Steele (1997).
References.
<templatestyles src="Reflist/styles.css" />
Notes
"This article incorporates material from Superadditivity on PlanetMath, which is licensed under the ."
|
[
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "f(x+y) \\geq f(x) + f(y)"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "y"
},
{
"math_id": 4,
"text": "f."
},
{
"math_id": 5,
"text": "a_1, a_2, \\ldots"
},
{
"math_id": 6,
"text": "a_{n+m} \\geq a_n + a_m"
},
{
"math_id": 7,
"text": "m"
},
{
"math_id": 8,
"text": "n."
},
{
"math_id": 9,
"text": "P(X \\lor Y) \\geq P(X) + P(Y),"
},
{
"math_id": 10,
"text": "f(x) = x^2"
},
{
"math_id": 11,
"text": "x+y"
},
{
"math_id": 12,
"text": "y,"
},
{
"math_id": 13,
"text": "f(x + y) = (x + y)^2 = x^2 + y^2 + 2 x y = f(x) + f(y) + 2 x y."
},
{
"math_id": 14,
"text": "A, B \\in \\text{Mat}_n(\\Complex)"
},
{
"math_id": 15,
"text": "\\det(A + B) \\geq \\det(A) + \\det(B)."
},
{
"math_id": 16,
"text": "\\det(\\cdot)^{1/n}"
},
{
"math_id": 17,
"text": "n"
},
{
"math_id": 18,
"text": "\\det(A + B)^{1/n} \\geq \\det(A)^{1/n} + \\det(B)^{1/n}."
},
{
"math_id": 19,
"text": "H(x)"
},
{
"math_id": 20,
"text": "x, y"
},
{
"math_id": 21,
"text": "x, y \\geq 1.5031."
},
{
"math_id": 22,
"text": "0,"
},
{
"math_id": 23,
"text": "f(0) \\leq 0."
},
{
"math_id": 24,
"text": "f(x) \\leq f(x + y) - f(y)."
},
{
"math_id": 25,
"text": "f(0) \\leq f(0 + y) - f(y) = 0."
},
{
"math_id": 26,
"text": "a_1, a_2, \\ldots,"
},
{
"math_id": 27,
"text": "\\lim a_n/n"
},
{
"math_id": 28,
"text": "\\sup a_n/n."
},
{
"math_id": 29,
"text": "a_n = \\log n!"
}
] |
https://en.wikipedia.org/wiki?curid=1464384
|
1464422
|
Hanbury Brown and Twiss effect
|
Quantum correlations related to wave-particle duality
In physics, the Hanbury Brown and Twiss (HBT) effect is any of a variety of correlation and anti-correlation effects in the intensities received by two detectors from a beam of particles. HBT effects can generally be attributed to the wave–particle duality of the beam, and the results of a given experiment depend on whether the beam is composed of fermions or bosons. Devices which use the effect are commonly called intensity interferometers and were originally used in astronomy, although they are also heavily used in the field of quantum optics.
History.
In 1954, Robert Hanbury Brown and Richard Q. Twiss introduced the intensity interferometer concept to radio astronomy for measuring the tiny angular size of stars, suggesting that it might work with visible light as well. Soon after they successfully tested that suggestion: in 1956 they published an in-lab experimental mockup using blue light from a mercury-vapor lamp, and later in the same year, they applied this technique to measuring the size of Sirius. In the latter experiment, two photomultiplier tubes, separated by a few meters, were aimed at the star using crude telescopes, and a correlation was observed between the two fluctuating intensities. Just as in the radio studies, the correlation dropped away as they increased the separation (though over meters, instead of kilometers), and they used this information to determine the apparent angular size of Sirius.
This result was met with much skepticism in the physics community. The radio astronomy result was justified by Maxwell's equations, but there were concerns that the effect should break down at optical wavelengths, since the light would be quantised into a relatively small number of photons that induce discrete photoelectrons in the detectors. Many physicists worried that the correlation was inconsistent with the laws of thermodynamics. Some even claimed that the effect violated the uncertainty principle. Hanbury Brown and Twiss resolved the dispute in a neat series of articles (see References below) that demonstrated, first, that wave transmission in quantum optics had exactly the same mathematical form as Maxwell's equations, albeit with an additional noise term due to quantisation at the detector, and second, that according to Maxwell's equations, intensity interferometry should work. Others, such as Edward Mills Purcell, immediately supported the technique, pointing out that the clumping of bosons was simply a manifestation of an effect already known in statistical mechanics. After a number of experiments, the whole physics community agreed that the observed effect was real.
The original experiment used the fact that two bosons tend to arrive at two separate detectors at the same time. Morgan and Mandel used a thermal photon source to create a dim beam of photons and observed the tendency of the photons to arrive at the same time on a single detector. Both of these effects used the wave nature of light to create a correlation in arrival time – if a single photon beam is split into two beams, then the particle nature of light requires that each photon is only observed at a single detector, and so an anti-correlation was observed in 1977 by H. Jeff Kimble. Finally, bosons have a tendency to clump together, giving rise to Bose–Einstein correlations, while fermions, due to the Pauli exclusion principle, tend to spread apart, leading to Fermi–Dirac (anti)correlations. Bose–Einstein correlations have been observed between pions, kaons and photons, and Fermi–Dirac (anti)correlations between protons, neutrons and electrons. For a general introduction to this field, see the textbook on Bose–Einstein correlations by Richard M. Weiner. Differences in the repulsion of Bose–Einstein condensates in the "trap-and-free fall" analogy of the HBT effect affect such comparisons.
Also, in the field of particle physics, Gerson Goldhaber et al. performed an experiment in 1959 in Berkeley and found an unexpected angular correlation among identical pions, discovering the ρ0 resonance, by means of formula_0 decay. From then on, the HBT technique started to be used by the heavy-ion community to determine the space–time dimensions of the particle emission source for heavy-ion collisions. For developments in this field up to 2005, see for example this review article.
Wave mechanics.
The HBT effect can, in fact, be predicted solely by treating the incident electromagnetic radiation as a classical wave. Suppose we have a monochromatic wave with frequency formula_1 on two detectors, with an amplitude formula_2 that varies on timescales slower than the wave period formula_3. (Such a wave might be produced from a very distant point source with a fluctuating intensity.)
Since the detectors are separated, say the second detector gets the signal delayed by a time formula_4, or equivalently, a phase formula_5; that is,
formula_6
formula_7
The intensity recorded by each detector is the square of the wave amplitude, averaged over a timescale that is long compared to the wave period formula_3 but short compared to the fluctuations in formula_2:
formula_8
where the overline indicates this time averaging. For wave frequencies above a few terahertz (wave periods less than a picosecond), such a time averaging is unavoidable, since detectors such as photodiodes and photomultiplier tubes cannot produce photocurrents that vary on such short timescales.
The correlation function formula_9 of these time-averaged intensities can then be computed:
formula_10
Most modern schemes actually measure the correlation in intensity fluctuations at the two detectors, but it is not too difficult to see that if the intensities are correlated, then the fluctuations formula_11, where formula_12 is the average intensity, ought to be correlated, since
formula_13
In the particular case that formula_2 consists mainly of a steady field formula_14 with a small sinusoidally varying component formula_15, the time-averaged intensities are
formula_16
with formula_17, and formula_18 indicates terms proportional to formula_19, which are small and may be ignored.
The correlation function of these two intensities is then
formula_20
showing a sinusoidal dependence on the delay formula_4 between the two detectors.
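This classical result is straightforward to verify numerically. The following sketch (with arbitrary illustrative parameters, not tied to any experiment) time-averages the intensities of a field with a small sinusoidal modulation and compares the correlation of the intensity fluctuations with the cosine formula above.
import numpy as np
E0, dE, Omega = 1.0, 0.05, 2 * np.pi * 3.0    # steady field, small modulation, modulation frequency
t = np.linspace(0.0, 200.0, 2_000_000)
def intensity(tau):
    # time-averaged intensity (over the fast carrier) at a detector whose signal is delayed by tau
    E = E0 + dE * np.sin(Omega * (t - tau))
    return 0.5 * E**2
i1 = intensity(0.0)
for tau in (0.0, 1.0 / 12.0, 1.0 / 6.0):      # zero, quarter and half of the modulation period
    i2 = intensity(tau)
    numeric = np.mean((i1 - i1.mean()) * (i2 - i2.mean()))
    formula = 0.5 * (E0 * dE)**2 * np.cos(Omega * tau)
    print(tau, numeric, formula)              # the two agree up to terms of order dE**4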
Quantum interpretation.
The above discussion makes it clear that the Hanbury Brown and Twiss (or photon bunching) effect can be entirely described by classical optics. The quantum description of the effect is less intuitive: if one supposes that a thermal or chaotic light source such as a star randomly emits photons, then it is not obvious how the photons "know" that they should arrive at a detector in a correlated (bunched) way. A simple argument suggested by Ugo Fano in 1961 captures the essence of the quantum explanation. Consider two points formula_21 and formula_22 in a source that emit photons detected by two detectors formula_23 and formula_24 as in the diagram. A joint detection takes place when the photon emitted by formula_21 is detected by formula_23 and the photon emitted by formula_22 is detected by formula_24 (red arrows) "or" when formula_21's photon is detected by formula_24 and formula_22's by formula_23 (green arrows). The quantum mechanical probability amplitudes for these two possibilities are denoted by
formula_25 and
formula_26 respectively. If the photons are indistinguishable, the two amplitudes interfere constructively to give a joint detection probability greater than that for two independent events. The sum over all possible pairs formula_27 in the source washes out the interference unless the distance formula_28 is sufficiently small.
Fano's explanation nicely illustrates the necessity of considering two-particle amplitudes, which are not as intuitive as the more familiar single-particle amplitudes used to interpret most interference effects. This may help to explain why some physicists in the 1950s had difficulty accepting the Hanbury Brown and Twiss result. But the quantum approach is more than just a fancy way to reproduce the classical result: if the photons are replaced by identical fermions such as electrons, the antisymmetry of wave functions under exchange of particles renders the interference destructive, leading to zero joint detection probability for small detector separations. This effect is referred to as antibunching of fermions. The above treatment also explains photon antibunching: if the source consists of a single atom, which can only emit one photon at a time, simultaneous detection in two closely spaced detectors is clearly impossible. Antibunching, whether of bosons or of fermions, has no classical wave analog.
From the point of view of the field of quantum optics, the HBT effect was important in leading physicists (among them Roy J. Glauber and Leonard Mandel) to apply quantum electrodynamics to new situations, many of which had never been experimentally studied, and in which classical and quantum predictions differ.
Footnotes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rho^0 \\to \\pi^-\\pi^+"
},
{
"math_id": 1,
"text": "\\omega"
},
{
"math_id": 2,
"text": "E(t)"
},
{
"math_id": 3,
"text": "2\\pi/\\omega"
},
{
"math_id": 4,
"text": "\\tau"
},
{
"math_id": 5,
"text": "\\phi = \\omega\\tau"
},
{
"math_id": 6,
"text": " E_1(t) = E(t) \\sin(\\omega t),"
},
{
"math_id": 7,
"text": " E_2(t) = E(t - \\tau) \\sin(\\omega t - \\phi)."
},
{
"math_id": 8,
"text": "\n \\begin{align}\n i_1(t) &= \\overline{E_1(t)^2} = \\overline{E(t)^2 \\sin^2(\\omega t)} = \\tfrac{1}{2} E(t)^2, \\\\\n i_2(t) &= \\overline{E_2(t)^2} = \\overline{E(t - \\tau)^2 \\sin^2(\\omega t - \\phi)} = \\tfrac{1}{2} E(t - \\tau)^2,\n \\end{align}\n"
},
{
"math_id": 9,
"text": "\\langle i_1 i_2 \\rangle(\\tau)"
},
{
"math_id": 10,
"text": "\n \\begin{align}\n \\langle i_1 i_2 \\rangle(\\tau) &= \\lim_{T \\to \\infty} \\frac{1}{T} \\int\\limits_0^T i_1(t) i_2(t)\\, \\mathrm{d}t \\\\\n &= \\lim_{T \\to \\infty} \\frac{1}{T} \\int\\limits_0^T \\tfrac{1}{4} E(t)^2 E(t-\\tau)^2 \\, \\mathrm{d}t.\n \\end{align}\n"
},
{
"math_id": 11,
"text": "\\Delta i = i - \\langle i\\rangle"
},
{
"math_id": 12,
"text": "\\langle i\\rangle"
},
{
"math_id": 13,
"text": "\\begin{align}\n \\langle\\Delta i_1\\Delta i_2\\rangle &= \\big\\langle(i_1 - \\langle i_1\\rangle)(i_2 - \\langle i_2\\rangle)\\big\\rangle = \\langle i_1 i_2\\rangle - \\big\\langle i_1\\langle i_2\\rangle\\big\\rangle - \\big\\langle i_2\\langle i_1\\rangle\\big\\rangle + \\langle i_1\\rangle \\langle i_2\\rangle \\\\\n &=\\langle i_1 i_2\\rangle -\\langle i_1\\rangle \\langle i_2\\rangle.\n\\end{align}"
},
{
"math_id": 14,
"text": "E_0"
},
{
"math_id": 15,
"text": "\\delta E \\sin(\\Omega t)"
},
{
"math_id": 16,
"text": "\n \\begin{align}\n i_1(t) &= \\tfrac{1}{2} E_0^2 + E_0\\,\\delta E \\sin(\\Omega t) + \\mathcal{O}(\\delta E^2), \\\\\n i_2(t) &= \\tfrac{1}{2} E_0^2 + E_0\\,\\delta E \\sin(\\Omega t-\\Phi) + \\mathcal{O}(\\delta E^2),\n \\end{align}\n"
},
{
"math_id": 17,
"text": "\\Phi = \\Omega \\tau"
},
{
"math_id": 18,
"text": "\\mathcal{O}(\\delta E^2)"
},
{
"math_id": 19,
"text": "(\\delta E)^2"
},
{
"math_id": 20,
"text": "\n \\begin{align}\n \\langle \\Delta i_1 \\Delta i_2 \\rangle(\\tau) &= \\lim_{T \\to \\infty} \\frac{(E_0\\delta E)^2}{T} \\int\\limits_0^T \\sin(\\Omega t) \\sin(\\Omega t - \\Phi) \\, \\mathrm{d}t \\\\\n &= \\tfrac{1}{2} (E_0 \\delta E)^2 \\cos(\\Omega\\tau),\n \\end{align}\n"
},
{
"math_id": 21,
"text": "a"
},
{
"math_id": 22,
"text": "b"
},
{
"math_id": 23,
"text": "A"
},
{
"math_id": 24,
"text": "B"
},
{
"math_id": 25,
"text": "\\langle A|a \\rangle \\langle B|b \\rangle"
},
{
"math_id": 26,
"text": "\\langle B|a \\rangle \\langle A|b \\rangle"
},
{
"math_id": 27,
"text": "a, b"
},
{
"math_id": 28,
"text": "AB"
}
] |
https://en.wikipedia.org/wiki?curid=1464422
|
14644226
|
Histogram of oriented gradients
|
Feature descriptor used in computer vision
The histogram of oriented gradients (HOG) is a feature descriptor used in computer vision and image processing for the purpose of object detection. The technique counts occurrences of gradient orientation in localized portions of an image. This method is similar to that of edge orientation histograms, scale-invariant feature transform descriptors, and shape contexts, but differs in that it is computed on a dense grid of uniformly spaced cells and uses overlapping local contrast normalization for improved accuracy.
Robert K. McConnell of Wayland Research Inc. first described the concepts behind HOG without using the term HOG in a patent application in 1986. In 1994 the concepts were used by Mitsubishi Electric Research Laboratories. However, usage only became widespread in 2005 when Navneet Dalal and Bill Triggs, researchers for the French National Institute for Research in Computer Science and Automation (INRIA), presented their supplementary work on HOG descriptors at the Conference on Computer Vision and Pattern Recognition (CVPR). In this work they focused on pedestrian detection in static images, although since then they expanded their tests to include human detection in videos, as well as to a variety of common animals and vehicles in static imagery.
Theory.
The essential thought behind the histogram of oriented gradients descriptor is that local object appearance and shape within an image can be described by the distribution of intensity gradients or edge directions. The image is divided into small connected regions called cells, and for the pixels within each cell, a histogram of gradient directions is compiled. The descriptor is the concatenation of these histograms. For improved accuracy, the local histograms can be contrast-normalized by calculating a measure of the intensity across a larger region of the image, called a block, and then using this value to normalize all cells within the block. This normalization results in better invariance to changes in illumination and shadowing.
The HOG descriptor has a few key advantages over other descriptors. Since it operates on local cells, it is invariant to geometric and photometric transformations, except for object orientation. Such changes would only appear in larger spatial regions. Moreover, as Dalal and Triggs discovered, coarse spatial sampling, fine orientation sampling, and strong local photometric normalization permits the individual body movement of pedestrians to be ignored so long as they maintain a roughly upright position. The HOG descriptor is thus particularly suited for human detection in images.
Algorithm implementation.
Gradient computation.
In many feature detectors, the first step of calculation is image pre-processing to ensure normalized color and gamma values. As Dalal and Triggs point out, however, this step can be omitted in HOG descriptor computation, as the ensuing descriptor normalization essentially achieves the same result. Image pre-processing thus has little impact on performance. Instead, the first step of calculation is the computation of the gradient values. The most common method is to apply the 1-D centered, point discrete derivative mask in one or both of the horizontal and vertical directions. Specifically, this method requires filtering the color or intensity data of the image with the following filter kernels:
formula_0
Dalal and Triggs tested other, more complex masks, such as the 3x3 Sobel mask or diagonal masks, but these masks generally performed more poorly in detecting humans in images. They also experimented with Gaussian smoothing before applying the derivative mask, but similarly found that omission of any smoothing performed better in practice.
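As an illustration, a minimal NumPy sketch of this gradient step might look as follows (a grayscale input image is assumed; the function and variable names are illustrative, not from the original papers):
import numpy as np
def gradients(image):
    img = image.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]     # horizontal filtering with [-1, 0, 1]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]     # vertical filtering with [-1, 0, 1] transposed
    magnitude = np.hypot(gx, gy)
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # "unsigned" orientation in [0, 180)
    return magnitude, orientation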
Orientation binning.
The second step of calculation is creating the cell histograms. Each pixel within the cell casts a weighted vote for an orientation-based histogram bin based on the values found in the gradient computation. The cells themselves can either be rectangular or radial in shape, and the histogram channels are evenly spread over 0 to 180 degrees or 0 to 360 degrees, depending on whether the gradient is “unsigned” or “signed”. Dalal and Triggs found that unsigned gradients used in conjunction with 9 histogram channels performed best in their human detection experiments, while noting that signed gradients lead to significant improvements in the recognition of some other object classes, like cars or motorbikes. As for the vote weight, pixel contribution can either be the gradient magnitude itself, or some function of the magnitude. In tests, the gradient magnitude itself generally produces the best results. Other options for the vote weight could include the square root or square of the gradient magnitude, or some clipped version of the magnitude.
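A simplified sketch of the voting step for a single cell is shown below. It uses hard assignment of each pixel to the nearest of 9 unsigned bins, whereas Dalal and Triggs interpolate votes between neighbouring bins, so this is only an approximation of their procedure.
import numpy as np
def cell_histogram(magnitude, orientation, n_bins=9):
    # magnitude and orientation are the per-pixel arrays of one cell (e.g. 8x8 pixels)
    bin_width = 180.0 / n_bins
    bins = (orientation // bin_width).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), magnitude.ravel())       # each pixel votes with its gradient magnitude
    return hist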
Descriptor blocks.
To account for changes in illumination and contrast, the gradient strengths must be locally normalized, which requires grouping the cells together into larger, spatially connected blocks. The HOG descriptor is then the concatenated vector of the components of the normalized cell histograms from all of the block regions. These blocks typically overlap, meaning that each cell contributes more than once to the final descriptor. Two main block geometries exist: rectangular R-HOG blocks and circular C-HOG blocks. R-HOG blocks are generally square grids, represented by three parameters: the number of cells per block, the number of pixels per cell, and the number of channels per cell histogram. In the Dalal and Triggs human detection experiment, the optimal parameters were found to be four 8x8 pixels cells per block (16x16 pixels per block) with 9 histogram channels. Moreover, they found that some minor improvement in performance could be gained by applying a Gaussian spatial window within each block before tabulating histogram votes in order to weight pixels around the edge of the blocks less. The R-HOG blocks appear quite similar to the scale-invariant feature transform (SIFT) descriptors; however, despite their similar formation, R-HOG blocks are computed in dense grids at some single scale without orientation alignment, whereas SIFT descriptors are usually computed at sparse, scale-invariant key image points and are rotated to align orientation. In addition, the R-HOG blocks are used in conjunction to encode spatial form information, while SIFT descriptors are used singly.
Circular HOG blocks (C-HOG) can be found in two variants: those with a single, central cell and those with an angularly divided central cell. In addition, these C-HOG blocks can be described with four parameters: the number of angular and radial bins, the radius of the center bin, and the expansion factor for the radius of additional radial bins. Dalal and Triggs found that the two main variants provided equal performance, and that two radial bins with four angular bins, a center radius of 4 pixels, and an expansion factor of 2 provided the best performance in their experimentation. Also, Gaussian weighting provided no benefit when used in conjunction with the C-HOG blocks. C-HOG blocks appear similar to shape context descriptors, but differ strongly in that C-HOG blocks contain cells with several orientation channels, while shape contexts only make use of a single edge presence count in their formulation.
Block normalization.
Dalal and Triggs explored four different methods for block normalization. Let formula_1 be the non-normalized vector containing all histograms in a given block, formula_2 be its "k"-norm for formula_3 and formula_4 be some small constant (the exact value, hopefully, is unimportant). Then the normalization factor can be one of the following:
L2-norm: formula_5
L2-hys: L2-norm followed by clipping (limiting the maximum values of v to 0.2) and renormalizing.
L1-norm: formula_6
L1-sqrt: formula_7
In their experiments, Dalal and Triggs found the L2-hys, L2-norm, and L1-sqrt schemes provide similar performance, while the L1-norm provides slightly less reliable performance; however, all four methods showed very significant improvement over the non-normalized data.
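The four schemes can be written compactly as in the sketch below, where v is the concatenated histogram vector of one block and e is the small constant; the value of e shown is an arbitrary placeholder.
import numpy as np
def l2_norm(v, e=1e-3):
    return v / np.sqrt(np.sum(v**2) + e**2)
def l2_hys(v, e=1e-3, clip=0.2):
    f = np.minimum(l2_norm(v, e), clip)    # clip large values ...
    return l2_norm(f, e)                   # ... and renormalize
def l1_norm(v, e=1e-3):
    return v / (np.sum(np.abs(v)) + e)
def l1_sqrt(v, e=1e-3):
    return np.sqrt(l1_norm(v, e))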
Object recognition.
HOG descriptors may be used for object recognition by providing them as features to a machine learning algorithm. Dalal and Triggs used HOG descriptors as features in a support vector machine (SVM); however, HOG descriptors are not tied to a specific machine learning algorithm.
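A hedged end-to-end sketch using off-the-shelf libraries (scikit-image's hog function and scikit-learn's linear SVM) is given below; the arrays train_images, train_labels, and test_images are placeholders rather than a real dataset, and the parameter values merely follow the block layout discussed above.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC
def hog_features(images):
    # images: iterable of equally sized grayscale arrays
    return np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2), block_norm="L2-Hys")
                     for img in images])
# clf = LinearSVC().fit(hog_features(train_images), train_labels)   # train_labels: 1 = person, 0 = background
# predictions = clf.predict(hog_features(test_images))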
Performance.
In their original human detection experiment, Dalal and Triggs compared their R-HOG and C-HOG descriptor blocks against generalized Haar wavelets, PCA-SIFT descriptors, and shape context descriptors. Generalized Haar wavelets are oriented Haar wavelets, and were used in 2001 by Mohan, Papageorgiou, and Poggio in their own object detection experiments. PCA-SIFT descriptors are similar to SIFT descriptors, but differ in that principal component analysis is applied to the normalized gradient patches. PCA-SIFT descriptors were first used in 2004 by Ke and Sukthankar and were claimed to outperform regular SIFT descriptors. Finally, shape contexts use circular bins, similar to those used in C-HOG blocks, but only tabulate votes on the basis of edge presence, making no distinction with regards to orientation. Shape contexts were originally used in 2001 by Belongie, Malik, and Puzicha.
The testing commenced on two different data sets. The Massachusetts Institute of Technology (MIT) pedestrian database contains 509 training images and 200 test images of pedestrians on city streets. The set only contains images featuring the front or back of human figures and contains little variety in human pose. The set is well-known and has been used in a variety of human detection experiments, such as those conducted by Papageorgiou and Poggio in 2000. The MIT database is currently available for research at https://web.archive.org/web/20041118152354/http://cbcl.mit.edu/cbcl/software-datasets/PedestrianData.html. The second set was developed by Dalal and Triggs exclusively for their human detection experiment due to the fact that the HOG descriptors performed near-perfectly on the MIT set. Their set, known as INRIA, contains 1805 images of humans taken from personal photographs. The set contains images of humans in a wide variety of poses and includes difficult backgrounds, such as crowd scenes, thus rendering it more complex than the MIT set. The INRIA database is currently available for research at http://lear.inrialpes.fr/data.
The above site has an image showing examples from the INRIA human detection database.
As for the results, the C-HOG and R-HOG block descriptors perform comparably, with the C-HOG descriptors maintaining a slight advantage in the detection miss rate at fixed false positive rates across both data sets. On the MIT set, the C-HOG and R-HOG descriptors produced a detection miss rate of essentially zero at a 10−4 false positive rate. On the INRIA set, the C-HOG and R-HOG descriptors produced a detection miss rate of roughly 0.1 at a 10−4 false positive rate. The generalized Haar wavelets represent the next highest performing approach: they produced roughly a 0.01 miss rate at a 10−4 false positive rate on the MIT set, and roughly a 0.3 miss rate on the INRIA set. The PCA-SIFT descriptors and shape context descriptors both performed fairly poorly on both data sets. Both methods produced a miss rate of 0.1 at a 10−4 false positive rate on the MIT set and nearly a miss rate of 0.5 at a 10−4 false positive rate on the INRIA set.
Further development.
As part of the Pascal Visual Object Classes 2006 Workshop, Dalal and Triggs presented results on applying histogram of oriented gradients descriptors to image objects other than humans, such as cars, buses, and bicycles, as well as common animals such as dogs, cats, and cows. They included with their results the optimal parameters for block formulation and normalization in each case. The image in the below reference shows some of their detection examples for motorbikes.
As part of the 2006 European Conference on Computer Vision (ECCV), Dalal and Triggs teamed up with Cordelia Schmid to apply HOG detectors to the problem of human detection in films and videos. They combined HOG descriptors on individual video frames with their newly introduced internal motion histograms (IMH) on pairs of subsequent video frames. These internal motion histograms use the gradient magnitudes from optical flow fields obtained from two consecutive frames. These gradient magnitudes are then used in the same manner as those produced from static image data within the HOG descriptor approach. When testing on two large datasets taken from several movies, the combined HOG-IMH method yielded a miss rate of approximately 0.1 at a formula_8 false positive rate.
At the Intelligent Vehicles Symposium in 2006, F. Suard, A. Rakotomamonjy, and A. Bensrhair introduced a complete system for pedestrian detection based on HOG descriptors. Their system operates using two infrared cameras. Since human beings appear brighter than their surroundings on infrared images, the system first locates positions of interest within the larger view field where humans could possibly be located. Then support vector machine classifiers operate on the HOG descriptors taken from these smaller positions of interest to formulate a decision regarding the presence of a pedestrian. Once pedestrians are located within the view field, the actual position of the pedestrian is estimated using stereo vision.
At the IEEE Conference on Computer Vision and Pattern Recognition in 2006, Qiang Zhu, Shai Avidan, Mei-Chen Yeh, and Kwang-Ting Cheng presented an algorithm to significantly speed up human detection using HOG descriptor methods. Their method uses HOG descriptors in combination with the cascading classifiers algorithm normally applied with great success to face detection. Also, rather than relying on blocks of uniform size, they introduce blocks that vary in size, location, and aspect ratio. In order to isolate the blocks best suited for human detection, they applied the AdaBoost algorithm to select those blocks to be included in the cascade. In their experimentation, their algorithm achieved comparable performance to the original Dalal and Triggs algorithm, but operated at speeds up to 70 times faster. In 2006, the Mitsubishi Electric Research Laboratories applied for the U.S. Patent of this algorithm under application number 20070237387.
At the IEEE International Conference on Image Processing in 2010, Rui Hu, Mark Banard, and John Collomosse extended the HOG descriptor for use in sketch based image retrieval (SBIR). A dense orientation field was extrapolated from dominant responses in the Canny edge detector under a Laplacian smoothness constraint, and HOG computed over this field. The resulting gradient field HOG (GF-HOG) descriptor captured local spatial structure in sketches or image edge maps. This enabled the descriptor to be used within a content-based image retrieval system searchable by free-hand sketched shapes. The GF-HOG adaptation was shown to outperform existing gradient histogram descriptors such as SIFT, SURF, and HOG by around 15 percent at the task of SBIR.
In 2010, Martin Krückhans introduced an enhancement of the HOG descriptor for 3D pointclouds. Instead of image gradients he used distances between points (pixels) and planes, so called residuals, to characterize a local region in a pointcloud. His histogram of oriented residuals descriptor (HOR) was successfully used in object detection tasks of 3d pointclouds.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "[-1, 0, 1]\\text{ and }[-1, 0, 1]^\\top.\\,"
},
{
"math_id": 1,
"text": "v"
},
{
"math_id": 2,
"text": "\\|v\\|_k"
},
{
"math_id": 3,
"text": "k={1,2}"
},
{
"math_id": 4,
"text": "e"
},
{
"math_id": 5,
"text": "f = {v \\over \\sqrt{\\|v\\|^2_2+e^2}}"
},
{
"math_id": 6,
"text": "f = {v \\over (\\|v\\|_1+e)}"
},
{
"math_id": 7,
"text": "f = \\sqrt{v \\over (\\|v\\|_1+e)}"
},
{
"math_id": 8,
"text": "10^{-4}"
}
] |
https://en.wikipedia.org/wiki?curid=14644226
|
14644287
|
Pfaffian function
|
In mathematics, Pfaffian functions are a certain class of functions whose derivative can be written in terms of the original function. They were originally introduced by Askold Khovanskii in the 1970s, but are named after German mathematician Johann Pfaff.
Basic definition.
Some functions, when differentiated, give a result which can be written in terms of the original function. Perhaps the simplest example is the exponential function, "f"("x") = "e""x". If we differentiate this function we get "ex" again, that is
formula_0
Another example of a function like this is the reciprocal function, "g"("x") = 1/"x". If we differentiate this function we will see that
formula_1
Other functions may not have the above property, but their derivative may be written in terms of functions like those above. For example, if we take the function "h"("x") = "e""x" log "x" then we see
formula_2
Functions like these form the links in a so-called Pfaffian chain. Such a chain is a sequence of functions, say "f"1, "f"2, "f"3, etc., with the property that if we differentiate any of the functions in this chain then the result can be written in terms of the function itself and all the functions preceding it in the chain (specifically as a polynomial in those functions and the variables involved). So with the functions above we have that "f", "g", "h" is a Pfaffian chain.
A Pfaffian function is then just a polynomial in the functions appearing in a Pfaffian chain and the function argument. So with the Pfaffian chain just mentioned, functions such as "F"("x") = "x"3"f"("x")2 − 2"g"("x")"h"("x") are Pfaffian.
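These derivative identities are easy to verify symbolically; the following sketch (using the SymPy library) checks the chain "f", "g", "h" described above.
import sympy as sp
x = sp.symbols("x", positive=True)
f = sp.exp(x)
g = 1 / x
h = sp.exp(x) * sp.log(x)
print(sp.simplify(sp.diff(f, x) - f))             # 0, since f' = f
print(sp.simplify(sp.diff(g, x) + g**2))          # 0, since g' = -g^2
print(sp.simplify(sp.diff(h, x) - (h + f * g)))   # 0, since h' = h + f*g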
Rigorous definition.
Let "U" be an open domain in R"n". A Pfaffian chain of order "r" ≥ 0 and degree "α" ≥ 1 in "U" is a sequence of real analytic functions "f"1..., "f""r" in "U" satisfying differential equations
formula_3
for "i" = 1, ..., "r" where "P""i", "j" ∈ R["x"1, ..., "x""n", "y"1, ..., "y""i"] are polynomials of degree ≤ "α". A function "f" on "U" is called a Pfaffian function of order "r" and degree ("α", "β") if
formula_4
where "P" ∈ R["x"1, ..., "x""n", "y"1, ..., "y""r"] is a polynomial of degree at most "β" ≥ 1. The numbers "r", "α", and "β" are collectively known as the format of the Pfaffian function, and give a useful measure of its complexity.
In model theory.
Consider the structure R = (R, +, −, ·, <, 0, 1), the ordered field of real numbers. In the 1960s Andrei Gabrielov proved that the structure obtained by starting with R and adding a function symbol for every analytic function restricted to the unit box [0, 1]"m" is model complete. That is, any set definable in this structure Ran was just the projection of some higher-dimensional set defined by identities and inequalities involving these restricted analytic functions.
In the 1990s, Alex Wilkie showed that one has the same result if instead of adding every restricted analytic function, one just adds the "unrestricted" exponential function to R to get the ordered real field with exponentiation, Rexp, a result known as Wilkie's theorem. Wilkie also tackled the question of which finite sets of analytic functions could be added to R to get a model-completeness result. It turned out that adding any Pfaffian chain restricted to the box [0, 1]"m" would give the same result. In particular one may add "all" Pfaffian functions to R to get the structure RPfaff as a variant of Gabrielov's result. The result on exponentiation is not a special case of this result (even though exp is a Pfaffian chain by itself), as it applies to the unrestricted exponential function.
This result of Wilkie's proved that the structure RPfaff is an o-minimal structure.
Noetherian functions.
The equations above that define a Pfaffian chain are said to satisfy a triangular condition, since the derivative of each successive function in the chain is a polynomial in one extra variable. Thus if they are written out in turn a triangular shape appears:
formula_5
and so on. If this triangularity condition is relaxed so that the derivative of each function in the chain is a polynomial in all the other functions in the chain, then the chain of functions is known as a Noetherian chain, and a function constructed as a polynomial in this chain is called a Noetherian function. So, for example, a Noetherian chain of order three is composed of three functions "f"1, "f"2, "f"3, satisfying the equations
formula_6
The name stems from the fact that the ring generated by the functions in such a chain is Noetherian.
Any Pfaffian chain is also a Noetherian chain (the extra variables in each polynomial are simply redundant in this case), but not every Noetherian chain is Pfaffian; for example, if we take "f"1("x") = sin "x" and "f"2("x") = cos "x" then we have the equations
formula_7
and these hold for all real numbers "x", so "f"1, "f"2 is a Noetherian chain on all of R. But there is no polynomial "P"("x", "y") such that the derivative of sin "x" can be written as "P"("x", sin "x"), and so this chain is not Pfaffian.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f^\\prime(x) = f(x)."
},
{
"math_id": 1,
"text": "g^\\prime(x) = -g(x)^2."
},
{
"math_id": 2,
"text": "h^\\prime(x) = e^x\\log x+x^{-1}e^x = h(x)+f(x)g(x)."
},
{
"math_id": 3,
"text": "\\frac{\\partial f_{i}}{\\partial x_j}=P_{i,j}(\\boldsymbol{x},f_{1}(\\boldsymbol{x}),\\ldots,f_{i}(\\boldsymbol{x}))"
},
{
"math_id": 4,
"text": "f(\\boldsymbol{x})=P(\\boldsymbol{x},f_{1}(\\boldsymbol{x}),\\ldots,f_{r}(\\boldsymbol{x})),\\,"
},
{
"math_id": 5,
"text": "\\begin{align}f_1^\\prime &= P_1(x,f_1)\\\\\nf_2^\\prime &= P_2(x,f_1,f_2)\\\\\nf_3^\\prime &= P_3(x,f_1,f_2,f_3),\\end{align}"
},
{
"math_id": 6,
"text": "\\begin{align}f_1^\\prime &= P_1(x,f_1,f_2,f_3)\\\\\nf_2^\\prime &= P_2(x,f_1,f_2,f_3)\\\\\nf_3^\\prime &= P_3(x,f_1,f_2,f_3).\\end{align}"
},
{
"math_id": 7,
"text": "\\begin{align}f_1^\\prime(x) &= f_2(x)\\\\\nf_2^\\prime(x) &= -f_1(x),\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=14644287
|
14645368
|
Mixed logit
|
Statistical model
Mixed logit is a fully general statistical model for examining discrete choices. It overcomes three important limitations of the standard logit model by allowing for random taste variation across choosers, unrestricted substitution patterns across choices, and correlation in unobserved factors over time. Mixed logit can choose any distribution formula_0 for the random coefficients, unlike probit which is limited to the normal distribution. It is called "mixed logit" because the choice probability is a mixture of logits, with formula_0 as the mixing distribution. It has been shown that a mixed logit model can approximate to any degree of accuracy any true random utility model of discrete choice, given appropriate specification of variables and the coefficient distribution.
Random taste variation.
The standard logit model's "taste" coefficients, or formula_1's, are fixed, which means the formula_1's are the same for everyone. Mixed logit has different formula_1's for each person (i.e., each decision maker).
In the standard logit model, the utility of person formula_2 for alternative formula_3 is:
formula_4
with
formula_5 ~ iid extreme value
For the mixed logit model, this specification is generalized by allowing formula_6 to be random. The utility of person formula_7 for alternative formula_8 in the mixed logit model is:
formula_9
with
formula_5 ~ iid extreme value
formula_10
where "θ" are the parameters of the distribution of formula_6's over the population, such as the mean and variance of formula_6.
Conditional on formula_11, the probability that person formula_2 chooses alternative formula_8 is the standard logit formula:
formula_12
However, since formula_11 is random and not known, the (unconditional) choice probability is the integral of this logit formula over the density of formula_11.
formula_13
This model is also called the random coefficient logit model since formula_14 is a random variable. It allows the slopes of utility (i.e., the marginal utility) to be random, which is an extension of the random effects model where only the intercept was stochastic.
Any probability density function can be specified for the distribution of the coefficients in the population, i.e., for formula_15. The most widely used distribution is normal, mainly for its simplicity. For coefficients that take the same sign for all people, such as a price coefficient that is necessarily negative or the coefficient of a desirable attribute, distributions with support on only one side of zero, like the lognormal, are used. When coefficients cannot logically be unboundedly large or small, then bounded distributions are often used, such as the formula_16 or triangular distributions.
Unrestricted substitution patterns.
The mixed logit model can represent general substitution patterns because it does not exhibit logit's restrictive independence of irrelevant alternatives (IIA) property. The percentage change in person formula_2's unconditional probability of choosing alternative formula_8 given a percentage change in the "m"th attribute of alternative formula_17 (the elasticity of formula_18 with respect to formula_19) is
formula_20
where formula_21 is the "m"th element of formula_1. It can be seen from this formula that a ten-percent reduction for formula_18 need not imply (as with logit) a ten-percent reduction in each other alternative formula_22. The reason is that the relative percentages depend on the correlation between the conditional likelihood that person formula_2 will choose alternative formula_23 and the conditional likelihood that person formula_2 will choose alternative formula_24 over various draws of formula_25.
Correlation in unobserved factors over time.
Standard logit does not take into account any unobserved factors that persist over time for a given decision maker. This can be a problem if you are using panel data, which represent repeated choices over time. By applying a standard logit model to panel data you are making the assumption that the unobserved factors that affect a person's choice are new every time the person makes the choice. That is a very unlikely assumption. To take into account both random taste variation and correlation in unobserved factors over time, the utility for respondent n for alternative i at time t is specified as follows:
formula_26
where the subscript t is the time dimension. We still make the logit assumption which is that formula_27 is i.i.d extreme value. That means that formula_27 is independent over time, people, and alternatives. formula_27 is essentially just white noise. However, correlation over time and over alternatives arises from the common effect of the formula_1's, which enter utility in each time period and each alternative.
To examine the correlation explicitly, assume that the "β"'s are normally distributed with mean formula_28 and variance formula_29. Then the utility equation becomes:
formula_30
and "η" is a draw from the standard normal density. Rearranging, the equation becomes:
formula_31
formula_32
where the unobserved factors are collected in formula_33. Of the unobserved factors, formula_34 is independent over time, and formula_35 is not independent over time or alternatives.
Then the covariance between alternatives formula_36 and formula_37 is,
formula_38
and the covariance between time formula_39 and formula_40 is
formula_41
By specifying the X's appropriately, one can obtain any pattern of covariance over time and alternatives.
Conditional on formula_6, the probability of the sequence of choices by a person is simply the product of the logit probability of each individual choice by that person:
formula_42
since formula_43 is independent over time. Then the (unconditional) probability of the sequence of choices is simply the integral of this product of logits over the density of formula_1.
formula_44
Simulation.
Unfortunately, there is no closed form for the integral that enters the choice probability, and so the researcher must simulate Pn. Fortunately for the researcher, simulating Pn can be very simple. There are four basic steps to follow:
1. Take a draw from the probability density function that you specified for the 'taste' coefficients. That is, take a draw from formula_45 and label the draw formula_46, for formula_47 representing the first draw.
2. Calculate formula_48. (The conditional probability.)
3. Repeat many times, for formula_49.
4. Average the results
The formula for the simulation then looks like the following:
formula_50
where R is the total number of draws taken from the distribution, and r is one draw.
Once this is done you will have a value for the probability of each alternative i for each respondent n.
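A minimal sketch of this simulation for one respondent, assuming independent normally distributed coefficients, is given below; the attribute matrix and the parameters of the mixing distribution are purely illustrative and are not estimates from any dataset.
import numpy as np
rng = np.random.default_rng(0)
def simulate_probabilities(X_n, b_mean, b_sd, R=1000):
    # X_n: (alternatives x attributes) matrix of attributes faced by respondent n
    probs = np.zeros(X_n.shape[0])
    for _ in range(R):
        beta_r = rng.normal(b_mean, b_sd)       # step 1: draw from f(beta | theta)
        u = X_n @ beta_r
        expu = np.exp(u - u.max())
        probs += expu / expu.sum()              # step 2: conditional logit probability
    return probs / R                            # steps 3 and 4: repeat and average
X_n = np.array([[1.0, 0.5], [0.2, 1.0], [0.0, 0.0]])   # three alternatives, two attributes
print(simulate_probabilities(X_n, b_mean=[0.8, -0.4], b_sd=[0.5, 0.3]))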
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": " \\beta "
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": " i"
},
{
"math_id": 4,
"text": " U_{ni} = \\beta x_{ni} + \\varepsilon_{ni} "
},
{
"math_id": 5,
"text": " \\varepsilon_{ni} "
},
{
"math_id": 6,
"text": " \\beta_n "
},
{
"math_id": 7,
"text": " n "
},
{
"math_id": 8,
"text": "i"
},
{
"math_id": 9,
"text": " U_{ni} = \\beta_n x_{ni} + \\varepsilon_{ni} "
},
{
"math_id": 10,
"text": " \\quad \\beta_n \\sim f(\\beta | \\theta) "
},
{
"math_id": 11,
"text": "\\beta_n"
},
{
"math_id": 12,
"text": " L_{ni} (\\beta_{n}) = \\frac{e^{\\beta_{n}X_{ni}}} {\\sum_{j} e^{\\beta_{n}X_{nj}}} "
},
{
"math_id": 13,
"text": " P_{ni} = \\int L_{ni} (\\beta) f(\\beta | \\theta) d\\beta "
},
{
"math_id": 14,
"text": " \\beta_n"
},
{
"math_id": 15,
"text": "f(\\beta | \\theta) "
},
{
"math_id": 16,
"text": " S_b "
},
{
"math_id": 17,
"text": "j "
},
{
"math_id": 18,
"text": "P_{ni}"
},
{
"math_id": 19,
"text": "x_{nj}^m"
},
{
"math_id": 20,
"text": " \\text{Elasticity}_{P_{ni},x_{nj}^m} = -\\frac{x_{nj}^m} {P_{ni}} \\int \\beta^m L_{ni}(\\beta) L_{nj}(\\beta) f(\\beta) d \\beta = - x_{nj}^m \\int \\beta^m L_{nj} (\\beta) \\frac{L_{ni} (\\beta)} {P_{ni}} f(\\beta) d \\beta "
},
{
"math_id": 21,
"text": "\\beta^m"
},
{
"math_id": 22,
"text": "P_{nj}"
},
{
"math_id": 23,
"text": "i, L_{ni},"
},
{
"math_id": 24,
"text": "j, L_{nj},"
},
{
"math_id": 25,
"text": "\\beta"
},
{
"math_id": 26,
"text": " U_{nit} = \\beta_{n} X_{nit} + \\varepsilon_{nit} "
},
{
"math_id": 27,
"text": "\\varepsilon"
},
{
"math_id": 28,
"text": "\\bar{\\beta}"
},
{
"math_id": 29,
"text": " \\sigma^2 "
},
{
"math_id": 30,
"text": " U_{nit} = (\\bar{\\beta} + \\sigma \\eta_{n}) X_{nit} + \\varepsilon_{nit} "
},
{
"math_id": 31,
"text": " U_{nit} = \\bar{\\beta} X_{nit} + (\\sigma \\eta_{n} X_{nit} + \\varepsilon_{nit}) "
},
{
"math_id": 32,
"text": " U_{nit} = \\bar{\\beta} X_{nit} + e_{nit} "
},
{
"math_id": 33,
"text": " e_{nit} = \\sigma \\eta_{n} X_{nit} + \\varepsilon_{nit} "
},
{
"math_id": 34,
"text": "\\varepsilon_{nit}"
},
{
"math_id": 35,
"text": " \\sigma \\eta_{n} X_{nit} "
},
{
"math_id": 36,
"text": " i "
},
{
"math_id": 37,
"text": " j "
},
{
"math_id": 38,
"text": " \\text{Cov}(e_{nit}, e_{njt}) = \\sigma^2 (X_{nit} X_{njt}) "
},
{
"math_id": 39,
"text": " t "
},
{
"math_id": 40,
"text": " q "
},
{
"math_id": 41,
"text": " \\text{Cov}(e_{nit}, e_{niq}) = \\sigma^2 (X_{nit} X_{niq}) "
},
{
"math_id": 42,
"text": " L_{n} (\\beta_{n}) = \\prod_{t} \\frac{e^{\\beta_{n}X_{nit}}} {\\sum_{j} e^{\\beta_{n}X_{njt}}} "
},
{
"math_id": 43,
"text": " \\varepsilon_{nit} "
},
{
"math_id": 44,
"text": " P_{ni} = \\int L_{n} (\\beta) f(\\beta | \\theta) d\\beta "
},
{
"math_id": 45,
"text": " f(\\beta | \\theta) "
},
{
"math_id": 46,
"text": "\\beta^r"
},
{
"math_id": 47,
"text": "r=1"
},
{
"math_id": 48,
"text": "L_n(\\beta^r)"
},
{
"math_id": 49,
"text": "r=2,...,R"
},
{
"math_id": 50,
"text": " \\tilde{P}_{ni} = \\frac {\\sum_{r} L_{ni}(\\beta^r)} {R} "
}
] |
https://en.wikipedia.org/wiki?curid=14645368
|
1464555
|
Air shower (physics)
|
Cascade of atmospheric subatomic particles
Air showers are extensive cascades of subatomic particles and ionized nuclei, produced when a "primary" cosmic ray enters the atmosphere. When a particle of the cosmic radiation, which could be a proton, a nucleus, an electron, a photon, or (rarely) a positron, interacts with the nucleus of a molecule in the atmosphere, it produces a vast number of secondary particles, which make up the shower. In the first interactions of the cascade, mainly hadrons (mostly light mesons such as pions and kaons) are produced; these decay rapidly in the air, producing further particles and electromagnetic radiation, which are part of the shower components. Depending on the energy of the cosmic ray, the detectable size of the shower can reach several kilometers in diameter.
The absorbed ionizing radiation from cosmic radiation is largely from muons, neutrons, and electrons, with a dose rate that varies in different parts of the world and is based largely on the geomagnetic field, altitude, and solar cycle. Airline crews are exposed to more radiation from cosmic rays if they routinely work flight routes that take them close to the North or South pole at high altitudes, where the shielding by the geomagnetic field is minimal.
The air shower phenomenon was unknowingly discovered by Bruno Rossi in 1933 in a laboratory experiment. In 1937 Pierre Auger, unaware of Rossi's earlier report, detected the same phenomenon and investigated it in some detail. He concluded that cosmic-ray particles are of extremely high energies and interact with nuclei high up in the atmosphere, initiating a cascade of secondary interactions that produce extensive showers of subatomic particles.
The most important experiments detecting extensive air showers today are the Telescope Array Project and the Pierre Auger Observatory. The latter is the largest observatory for cosmic rays ever built, operating with 4 fluorescence detector buildings and 1600 surface detector stations spanning an area of 3,000 km2 in the Argentinean desert.
History.
In 1933, shortly after the discovery of cosmic radiation by Victor Hess, Bruno Rossi conducted an experiment in the Institute of Physics in Florence, using shielded Geiger counters to confirm the penetrating character of the cosmic radiation. He used different arrangements of Geiger counters, including a setup of three counters, where two were placed next to each other and a third was centered underneath with additional shielding. From the detection of air-shower particles passing through the Geiger counters in coincidence, he assumed that secondary particles were being produced by cosmic rays in the first shielding layer as well as in the rooftop of the laboratory, not knowing that the particles he measured were muons, which are produced in air showers and which would only be discovered three years later. He also noted that the coincidence rate drops significantly for cosmic rays that are detected at a zenith angle below formula_0.
A similar experiment was conducted in 1936 by Hilgert and Bothe in Heidelberg.
In a publication in 1939, Pierre Auger, together with three colleagues, suggested that secondary particles are created by cosmic rays in the atmosphere, and conducted experiments using shielded scintillators and Wilson chambers on the Jungfraujoch at an altitude of formula_1 above sea level, on Pic du Midi at an altitude of formula_2 above sea level, and at sea level. They found that the rate of coincidences decreases with increasing distance between the detectors, but does not vanish even at high altitudes, thus confirming that cosmic rays produce air showers of secondary particles in the atmosphere.
They estimated that the primary particles of this phenomenon must have energies of up to formula_3.
Based on the ideas of quantum theory, theoretical work on air showers was carried out between 1935 and 1940 by many well-known physicists of the time (including Bhabha, Oppenheimer, Landau, Rossi and others), assuming that in the vicinity of nuclear fields high-energy gamma rays undergo pair production of electrons and positrons, and that electrons and positrons produce gamma rays by radiation.
Work on extensive air showers continued mainly after the war, as many key figures were involved in the Manhattan project. In the 1950s, the lateral and angular structure of electromagnetic particles in air showers were calculated by Japanese scientists Koichi Kamata and Jun Nishimura.
In 1955, the first surface detector array to detect air showers with sufficient precision to detect the arrival direction of the primary cosmic rays was built at the Agassiz station at MIT.
The Agassiz array consisted of 16 plastic scintillators arranged in a circular array formula_4 in diameter. The results of the experiment on the arrival directions of cosmic rays, however, were inconclusive.
The Volcano Ranch experiment, which was built in 1959 and operated by John Linsley, was the first surface detector array of sufficient size to detect ultrahigh-energy cosmic rays.
In 1962, the first cosmic ray with an energy of formula_5 was reported. With a footprint of several kilometers, the shower size at the ground was twice as large as any event recorded before, producing approximately formula_6 particles in the shower. Furthermore, it was confirmed that the lateral distribution of the particles detected at the ground matched Kenneth Greisen's approximation of the structure functions derived by Kamata and Nishimura.
A novel detection technique for extensive air showers was proposed by Greisen in 1965. He suggested to directly observe Cherenkov radiation of the shower particles, and fluorescence light produced by excited nitrogen molecules in the atmosphere. In this way, one would be able to measure the longitudinal development of a shower in the atmosphere. This method was first applied successfully and reported in 1977 at Volcano Ranch, using 67 optical modules.
Volcano Ranch finished its operation shortly after due to lack of funding.
Many air-shower experiments followed in the decades after, including KASCADE, AGASA, and HIRES. In 1995, the latter reported the detection of an ultrahigh-energy cosmic ray with an energy beyond the theoretically expected spectral cutoff.
The air shower of the cosmic ray was detected by the Fly's Eye fluorescence detector system and was estimated to contain approximately 240 billion particles at its maximum. This corresponds to a primary energy for the cosmic ray of about formula_7. To this day, no single particle with a larger energy has been recorded. It is therefore popularly referred to as the Oh-My-God particle.
Air shower formation.
The air shower is formed by interaction of the primary cosmic ray with the atmosphere, and then by subsequent interaction of the secondary particles, and so on. Depending on the type of the primary particle, the shower particles will be created mostly by hadronic or electromagnetic interactions.
Simplified shower model.
Shortly after entering the atmosphere, the primary cosmic ray (which is assumed to be a proton or nucleus in the following) is scattered by a nucleus in the atmosphere and creates a shower core - a region of high-energy hadrons that develops along the extended trajectory of the primary cosmic ray, until it is fully absorbed by either the atmosphere or the ground. The interaction and decay of particles in the shower core feeds the main particle components of the shower, which are hadrons, muons, and purely electromagnetic particles. The hadronic part of the shower consists mostly of pions, and some heavier mesons, such as kaons and formula_8 mesons.
Neutral pions, formula_9, decay via the electromagnetic interaction into pairs of oppositely spinning photons, which fuel the electromagnetic component of the shower. Charged pions, formula_10, preferentially decay into muons and (anti)neutrinos via the weak interaction. The same holds true for charged and neutral kaons. In addition, kaons also produce pions. Neutrinos from pion and kaon decay are usually not accounted for as parts of the shower because of their very low cross-section, and are referred to as part of the "invisible energy" of the shower.
Qualitatively, the particle content of a shower can be described by a simplified model, in which all particles partaking in any interaction of the shower will equally share the available energy. One can assume that in each hadronic interaction, formula_11 charged pions and formula_12 neutral pions are produced. The neutral pions will decay into photons, which fuel the electromagnetic part of the shower. The charged pions will then continue to interact hadronically. After formula_13 interactions, the share of the primary energy formula_14 deposited in the hadronic component is given by
formula_15,
and the electromagnetic part thus approximately carries
formula_16.
A pion in the formula_13th generation thus carries an energy of formula_17. The reaction continues, until the pions reach a critical energy formula_18, at which they decay into muons. Thus, a total of
formula_19
interactions are expected and a total of formula_20 muons are produced, with formula_21. The electromagnetic part of the cascade develops in parallel by bremsstrahlung and pair production. For the sake of simplicity, photons, electrons, and positrons are often treated as equivalent particles in the shower. The electromagnetic cascade continues, until the particles reach a critical energy of formula_22, from which on they start losing most of their energy due to scattering with molecules in the atmosphere. Because formula_23, the electromagnetic particles dominate the number of particles in the shower by far. A good approximation for the number of (electromagnetic) particles produced in a shower is formula_24. Assuming each electromagnetic interaction occurs after the average radiation length formula_25, the shower will reach its maximum at a depth of approximately
formula_26,
where formula_27 is assumed to be the depth of the first interaction of the cosmic ray in the atmosphere. This approximation is, however, not accurate for all types of primary particles. In particular, showers from heavy nuclei reach their maximum much earlier.
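To give a feeling for the orders of magnitude, the sketch below evaluates these expressions for an assumed primary energy and an assumed charged-pion multiplicity; the multiplicity and the first-interaction depth are illustrative choices, and the resulting numbers are rough estimates only.
import math
E0 = 1e17            # assumed primary energy in eV
N_ch = 10            # assumed charged-pion multiplicity per interaction
eps_pi = 20e9        # pion critical energy (20 GeV) in eV
X0, X1 = 37.0, 40.0  # radiation length and assumed first-interaction depth in g/cm^2
n_c = math.ceil(math.log(E0 / eps_pi) / math.log(1.5 * N_ch))
beta = math.log(N_ch) / math.log(1.5 * N_ch)     # value depends on the assumed multiplicity
n_mu = (E0 / eps_pi) ** beta                     # number of muons
n_em = E0 / 1e9                                  # rough number of electromagnetic particles
x_max = X1 + X0 * math.log(E0 / 1e9)             # approximate depth of the shower maximum in g/cm^2
print(n_c, round(beta, 2), f"{n_mu:.1e}", f"{n_em:.1e}", round(x_max))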
Longitudinal profile.
The number of particles present in an air shower is approximately proportional to the calorimetric energy deposit of the shower. The energy deposit as a function of the traversed atmospheric matter, as it can be observed for example by fluorescence detector telescopes, is known as the longitudinal profile of the shower. For the longitudinal profile of the shower, only the electromagnetic particles (electrons, positrons, and photons) are relevant, as they dominate the particle content and the contribution to the calorimetric energy deposit.
The shower profile is characterized by a fast rise in the number of particles until the average particle energy falls below formula_28 around the shower maximum, followed by a slow decay. Mathematically, the profile can be well described by a slanted Gaussian, the Gaisser-Hillas function, or the generalized Greisen function,
formula_29
Here formula_30 and formula_31, with the electromagnetic radiation length in air, formula_32. formula_33 marks the point of the first interaction, and formula_34 is a dimensionless constant.
The shower age parameter formula_35 is introduced to compare showers with different starting depths and different primary energies and to highlight their universal features; for example, the shower maximum always occurs at formula_36. For a shower with a first interaction at formula_37, the shower age formula_35 is usually defined as
formula_38.
The image shows the ideal longitudinal profile of showers for different primary energies, as a function of the traversed atmospheric depth formula_39 or, equivalently, the number of radiation lengths formula_40.
The longitudinal profiles of showers are particularly interesting in the context of measuring the total calorimetric energy deposit and the depth of the shower maximum, formula_41, since the latter is an observable that is sensitive to the type of the primary particle.
The shower appears brightest in a fluorescence telescope at its maximum.
Lateral profile.
For idealized electromagnetic showers, the angular and lateral distribution functions for electromagnetic particles have been derived by Japanese physicists Nishimura and Kamata.
For a shower of age formula_35, the density of electromagnetic particles as a function of the distance formula_42 to the shower axis can be approximated by the NKG function
formula_43
using the number of particles formula_44, the Molière radius formula_45, and the Gamma function.
formula_44 can be given, for example, by the longitudinal profile function.
The lateral distribution of hadronic showers (i.e. initiated by a primary hadron, such as a proton), which contain a significantly increased amount of muons, can be well approximated by a superposition of NKG-like functions, in which different particle components are described using effective values for formula_35 and formula_45.
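As an illustration, the NKG function above can be evaluated directly; the shower size, age, and Molière radius in this sketch are arbitrary illustrative values.
import numpy as np
from scipy.special import gamma
def nkg_density(r, N, s, r_M):
    # particle density at core distance r for shower size N, age s and Moliere radius r_M
    norm = N / (2 * np.pi * r_M**2) * gamma(4.5) / (gamma(s) * gamma(4.5 - 2 * s))
    x = r / r_M
    return norm * x**(s - 2) * (1 + x)**(s - 4.5)
r = np.array([10.0, 100.0, 1000.0])              # core distances in metres
print(nkg_density(r, N=1e9, s=1.0, r_M=80.0))    # particles per square metre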
Detection.
The original particle arrives with high energy and hence a velocity near the speed of light, so the products of the collisions tend also to move generally in the same direction as the primary, while to some extent spreading sideways. In addition, the secondary particles produce a widespread flash of light in the forward direction due to the Cherenkov effect, as well as fluorescence light that is emitted isotropically from the excitation of nitrogen molecules. The particle cascade and the light produced in the atmosphere can be detected with surface detector arrays and optical telescopes. Surface detectors typically use Cherenkov detectors or scintillation counters to detect the charged secondary particles at ground level. The telescopes used to measure the fluorescence and Cherenkov light use large mirrors to focus the light on PMT clusters. Finally, air showers emit radio waves due to the deflection of electrons and positrons by the geomagnetic field. As an advantage over the optical techniques, radio detection is possible around the clock and not only during dark and clear nights. Thus, several modern experiments, e.g., TAIGA, LOFAR, or the Pierre Auger Observatory, use radio antennas in addition to particle detectors and optical techniques.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "60^\\circ"
},
{
"math_id": 1,
"text": "3500\\,\\text{m}"
},
{
"math_id": 2,
"text": "2900\\,\\text{m}"
},
{
"math_id": 3,
"text": "10^{15}\\,\\text{eV} = 1\\,\\text{PeV}"
},
{
"math_id": 4,
"text": "460\\,\\text{m}"
},
{
"math_id": 5,
"text": "10^{20}\\,\\text{eV}"
},
{
"math_id": 6,
"text": "5\\times10^{10}"
},
{
"math_id": 7,
"text": "3.2\\times 10^{20}\\text{eV}"
},
{
"math_id": 8,
"text": "\\varrho"
},
{
"math_id": 9,
"text": "\\pi^0"
},
{
"math_id": 10,
"text": "\\pi^\\pm"
},
{
"math_id": 11,
"text": " 2 N_\\text{ch} "
},
{
"math_id": 12,
"text": " N_\\text{ch}"
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": "E_0"
},
{
"math_id": 15,
"text": " E_{\\pi} = \\left(\\frac{2}{3}\\right)^n E_0 "
},
{
"math_id": 16,
"text": " E_{\\gamma} = \\left(1-\\left(\\frac{2}{3}\\right)^n\\right) E_0"
},
{
"math_id": 17,
"text": "E_0/(3 N_\\text{ch}/2)^n"
},
{
"math_id": 18,
"text": "\\epsilon_\\text{c}^\\pi \\simeq 20\\,\\text{GeV}"
},
{
"math_id": 19,
"text": " n_\\text{c} = \\left\\lceil \\frac{\\ln\\left( E_0 / \\epsilon^{\\pi}_\\text{c} \\right)}{\\ln\\left(\\tfrac{3}{2}\\,N_\\text{ch}\\right)} \\right\\rceil "
},
{
"math_id": 20,
"text": "(N_\\text{ch})^{n_\\text{c}} = (E_0/\\epsilon_\\text{c}^\\pi)^\\beta"
},
{
"math_id": 21,
"text": "\\beta = \\ln N_\\text{ch} / \\ln(3 N_\\text{ch}/2) \\simeq 0.95"
},
{
"math_id": 22,
"text": "\\epsilon_\\text{c}^{\\gamma} \\simeq 87\\,\\text{MeV}"
},
{
"math_id": 23,
"text": "\\epsilon_\\text{c}^{\\gamma} \\ll \\epsilon_\\text{c}^\\pi "
},
{
"math_id": 24,
"text": "N \\simeq E_0/\\text{GeV}"
},
{
"math_id": 25,
"text": "X_0\\simeq37\\,\\text{g}/\\text{cm}^2"
},
{
"math_id": 26,
"text": " X_\\text{max} \\simeq X_1 + X_0 \\ln\\left(\\frac{E_0}{\\text{GeV}}\\right) "
},
{
"math_id": 27,
"text": " X_1 "
},
{
"math_id": 28,
"text": "\\epsilon^\\gamma_\\text{c}"
},
{
"math_id": 29,
"text": "\nN(t) = \\frac{\\epsilon}{\\sqrt{\\beta}}\\,\\text{e}^{\\left((t-t_1) - \\tfrac{3}{2}\\ln s \\right)}.\n"
},
{
"math_id": 30,
"text": "\\beta = \\ln(E_0 / \\epsilon^\\gamma_\\text{c}) "
},
{
"math_id": 31,
"text": " t = X / X_0 "
},
{
"math_id": 32,
"text": "X_0 = 37\\,\\text{g}/\\text{cm}^{-2}"
},
{
"math_id": 33,
"text": "t_1"
},
{
"math_id": 34,
"text": "\\epsilon \\approx 0.31"
},
{
"math_id": 35,
"text": "s"
},
{
"math_id": 36,
"text": "s=1"
},
{
"math_id": 37,
"text": "t_0=0"
},
{
"math_id": 38,
"text": " s = \\frac{3t}{t + 2\\beta} "
},
{
"math_id": 39,
"text": "X"
},
{
"math_id": 40,
"text": "t"
},
{
"math_id": 41,
"text": "X_\\text{max}"
},
{
"math_id": 42,
"text": "r"
},
{
"math_id": 43,
"text": "\n\\varrho(r) = \\frac{N}{2\\pi r_\\text{M}^2} \\frac{\\Gamma(\\tfrac{9}{2})}{\\Gamma(s)\\Gamma(\\frac{9}{2}-2s)}\n\\left(\\frac{r}{r_\\text{M}}\\right)^{s-2} \\, \\left(1+\\frac{r}{r_\\text{M}}\\right)^{s-9/2},\n"
},
{
"math_id": 44,
"text": "N"
},
{
"math_id": 45,
"text": "r_\\text{M}"
}
] |
https://en.wikipedia.org/wiki?curid=1464555
|
14645977
|
Upwind scheme
|
Discretization method for differential equations
In computational physics, the term advection scheme refers to a class of numerical discretization methods for solving hyperbolic partial differential equations. In the so-called upwind schemes, the upstream variables are typically used to calculate the derivatives in a flow field. That is, derivatives are estimated using a set of data points biased to be more "upwind" of the query point, with respect to the direction of the flow. Historically, the origin of upwind methods can be traced back to the work of Courant, Isaacson, and Rees who proposed the CIR method.
Model equation.
To illustrate the method, consider the following one-dimensional linear advection equation
formula_0
which describes a wave propagating along the formula_1-axis with a velocity formula_2. This equation is also a mathematical model for one-dimensional linear advection. Consider a typical grid point formula_3 in the
domain. In a one-dimensional domain, there are only two directions associated with point formula_3 – left (towards negative infinity) and
right (towards positive infinity). If formula_2 is positive, the traveling wave solution of the equation above propagates towards the right; the left side is then called the "upwind" side and the right side the "downwind" side. Similarly, if formula_2 is negative, the traveling wave solution propagates towards the left; the left side is then the "downwind" side and the right side the "upwind" side. If the finite difference scheme for the spatial derivative formula_4 contains more points on the upwind side, the scheme is called an upwind-biased or simply an upwind scheme.
First-order upwind scheme.
The simplest upwind scheme possible is the first-order upwind scheme. For formula_2 > 0 it uses the backward difference in space,
u_i^{n+1} = u_i^n - a(Δt/Δx)(u_i^n - u_{i-1}^n),     (1)
while for formula_2 < 0 it uses the forward difference,
u_i^{n+1} = u_i^n - a(Δt/Δx)(u_{i+1}^n - u_i^n),     (2)
where formula_5 refers to the formula_6 dimension and formula_3 refers to the formula_1 dimension. (By comparison, a central difference scheme in this scenario would look like
formula_7
regardless of the sign of formula_2.)
Compact form.
Defining
formula_8
and
formula_9
the two conditional equations (1) and (2) can be combined and written in the compact form
u_i^{n+1} = u_i^n - Δt(a^+ u_x^- + a^- u_x^+).     (3)
Equation (3) is a general way of writing any upwind-type scheme.
Stability.
The upwind scheme is stable if the following Courant–Friedrichs–Lewy condition (CFL) is satisfied.
formula_10 and formula_11.
A Taylor series analysis of the upwind scheme discussed above shows that it is first-order accurate in space and time. Modified wavenumber analysis shows that the first-order upwind scheme introduces severe numerical diffusion/dissipation in the solution in regions where large gradients exist, because representing sharp gradients requires high wavenumbers.
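As an illustration, the following minimal Python sketch advances the compact form (3) with the first-order stencils at a CFL number below one; the periodic boundary conditions, grid size and square-pulse initial condition are illustrative choices rather than part of the article.
<syntaxhighlight lang="python">
import numpy as np

def upwind_step(u, a, dt, dx):
    """One time step of the first-order upwind scheme in the compact form (3):
    u_i^{n+1} = u_i^n - dt * (a_plus * u_x_minus + a_minus * u_x_plus),
    here with periodic boundary conditions (an illustrative choice)."""
    a_plus, a_minus = max(a, 0.0), min(a, 0.0)
    ux_minus = (u - np.roll(u, 1)) / dx      # backward difference (upwind for a > 0)
    ux_plus = (np.roll(u, -1) - u) / dx      # forward difference  (upwind for a < 0)
    return u - dt * (a_plus * ux_minus + a_minus * ux_plus)

# Advect a square pulse to the right with a = 1 at CFL number 0.8
nx, a = 200, 1.0
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.8 * dx / abs(a)                       # c = |a| dt / dx = 0.8 <= 1
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)

for _ in range(int(round(0.4 / dt))):        # advance to t ~ 0.4
    u = upwind_step(u, a, dt, dx)
# The pulse has been transported by ~0.4, with its edges smeared by numerical diffusion.
</syntaxhighlight>
Running it shows the pulse being transported at speed formula_2 while its edges are progressively smoothed, which is the numerical diffusion discussed above.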
Second-order upwind scheme.
The spatial accuracy of the first-order upwind scheme can be improved by including 3 data points instead of just 2, which offers a more accurate finite difference stencil for the approximation of spatial derivative. For the second-order upwind scheme, formula_12 becomes the 3-point backward difference in equation (3) and is defined as
formula_13
and formula_14 is the 3-point forward difference, defined as
formula_15
This scheme is less diffusive than the first-order accurate scheme and is called the linear upwind differencing (LUD) scheme.
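Under the same illustrative assumptions as the sketch above (periodic boundaries, uniform grid), the second-order stencils plug into the compact form (3) as follows; this is a sketch, not a definitive implementation.
<syntaxhighlight lang="python">
import numpy as np

def upwind2_step(u, a, dt, dx):
    """One step using the second-order (LUD) stencils in the compact form (3),
    with periodic boundary conditions (an illustrative choice)."""
    a_plus, a_minus = max(a, 0.0), min(a, 0.0)
    ux_minus = (3.0 * u - 4.0 * np.roll(u, 1) + np.roll(u, 2)) / (2.0 * dx)
    ux_plus = (-np.roll(u, -2) + 4.0 * np.roll(u, -1) - 3.0 * u) / (2.0 * dx)
    return u - dt * (a_plus * ux_minus + a_minus * ux_plus)
</syntaxhighlight>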
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\frac{\\partial u}{\\partial t} + a \\frac{\\partial u}{\\partial x} = 0\n"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "\\partial u / \\partial x"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "t"
},
{
"math_id": 7,
"text": " \\frac{u_i^{n+1} - u_i^{n}}{\\Delta t} + a \\frac{u_{i+1}^n - u_{i-1}^n}{2\\Delta x} = 0, "
},
{
"math_id": 8,
"text": "\n a^+ = \\text{max}(a,0)\\,, \\qquad a^- = \\text{min}(a,0)\n"
},
{
"math_id": 9,
"text": "\n u_x^- = \\frac{u_i^{n} - u_{i-1}^{n}}{\\Delta x}\\,, \\qquad u_x^+ = \\frac{u_{i+1}^{n} - u_{i}^{n}}{\\Delta x}\n"
},
{
"math_id": 10,
"text": " c = \\left| \\frac{a\\Delta t}{\\Delta x} \\right| \\le 1 "
},
{
"math_id": 11,
"text": " 0 \\le a "
},
{
"math_id": 12,
"text": "u_x^-"
},
{
"math_id": 13,
"text": " u_x^- = \\frac{3u_i^n - 4u_{i-1}^n + u_{i-2}^n}{2\\Delta x} "
},
{
"math_id": 14,
"text": "u_x^+"
},
{
"math_id": 15,
"text": "\n u_x^+ = \\frac{-u_{i+2}^n + 4u_{i+1}^n - 3u_i^n}{2\\Delta x}\n"
}
] |
https://en.wikipedia.org/wiki?curid=14645977
|
14646706
|
Charge ordering
|
Charge ordering (CO) is a (first- or second-order) phase transition occurring mostly in strongly correlated materials such as transition metal oxides or organic conductors. Due to the strong interaction between electrons, charges are localized on different sites, leading to a disproportionation and an ordered superlattice. It appears in different patterns, ranging from vertical and horizontal stripes to a checkerboard-like pattern, and it is not limited to the two-dimensional case. The charge order transition is accompanied by symmetry breaking and may lead to ferroelectricity. It is often found in close proximity to superconductivity and colossal magnetoresistance.
This long-range order phenomenon was first discovered in magnetite (Fe3O4) by Verwey in 1939.
He observed an increase of the electrical resistivity by two orders of magnitude at TCO=120K, suggesting a phase transition which is now well known as the Verwey transition. He was the first to propose the idea of an ordering process in this context. The charge ordered structure of magnetite was solved in 2011 by a group led by Paul Attfield with the results published in "Nature". Periodic lattice distortions associated with charge order were later mapped in the manganite lattice to reveal striped domains containing topological disorder.
Theoretical description.
The extended one-dimensional Hubbard model delivers a good description of the charge order transition with the on-site and nearest-neighbor Coulomb repulsions U and V. It emerged that V is a crucial parameter for the development of the charge-ordered state. Further model calculations try to take the temperature and an interchain interaction into account.
The extended Hubbard model for a single chain including inter-site and on-site interaction V and U as well as the parameter formula_0 for a small dimerization which can be typically found in the (TMTTF)2X compounds is presented as follows:
formula_1
where t describes the transfer integral or the kinetic energy of the electron, and formula_2 and formula_3 are the creation and annihilation operators, respectively, for an electron with spin formula_4 at the formula_5th or formula_6th site. formula_7 denotes the density operator. For non-dimerized systems, formula_0 can be set to zero. Normally, the on-site Coulomb repulsion U stays unchanged; only t and V vary with pressure.
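To illustrate numerically why the nearest-neighbour repulsion V favours charge order, the following sketch exactly diagonalizes a strongly simplified, spinless variant of the Hamiltonian above (spin and the on-site term U dropped, formula_0 = 0, open boundaries); the chain length, the filling and all names are illustrative assumptions. The staggered density-density correlation of the ground state grows with V/t, signalling the tendency towards an alternating, charge-ordered pattern.
<syntaxhighlight lang="python">
import itertools
import numpy as np

# Exact diagonalization of a strongly simplified, spinless toy version of the
# extended Hubbard chain above (spin and U dropped, no dimerization, open
# boundaries).  Chain length and filling are illustrative choices.

L_SITES, N_PART = 8, 4
states = list(itertools.combinations(range(L_SITES), N_PART))
index = {s: k for k, s in enumerate(states)}

def hamiltonian(t, V):
    H = np.zeros((len(states), len(states)))
    for k, occ in enumerate(states):
        occ_set = set(occ)
        # nearest-neighbour repulsion V * n_i * n_{i+1}
        H[k, k] = V * sum(1.0 for i in range(L_SITES - 1)
                          if i in occ_set and i + 1 in occ_set)
        # hopping -t (c^+_i c_{i+1} + h.c.); between adjacent sites of an open
        # chain no occupied site is passed, so no fermionic sign appears
        for i in range(L_SITES - 1):
            if i in occ_set and i + 1 not in occ_set:
                new = tuple(sorted((occ_set - {i}) | {i + 1}))
                H[index[new], k] -= t
            if i + 1 in occ_set and i not in occ_set:
                new = tuple(sorted((occ_set - {i + 1}) | {i}))
                H[index[new], k] -= t
    return H

def staggered_correlation(t, V):
    """Ground-state staggered density-density correlation, a measure of charge order."""
    _, vecs = np.linalg.eigh(hamiltonian(t, V))
    gs = vecs[:, 0]
    corr = 0.0
    for k, occ in enumerate(states):
        n = np.zeros(L_SITES)
        n[list(occ)] = 1.0
        dev = n - N_PART / L_SITES
        corr += gs[k] ** 2 * sum((-1) ** (i + j) * dev[i] * dev[j]
                                 for i in range(L_SITES) for j in range(L_SITES))
    return corr / L_SITES

for V in (0.0, 1.0, 2.0, 4.0):
    print(f"V/t = {V:.1f}  ->  staggered correlation {staggered_correlation(1.0, V):.3f}")
</syntaxhighlight>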
Examples.
Organic conductors.
Organic conductors consist of donor and acceptor molecules that build separate planar sheets or columns. The difference between the ionization energy of the donor and the electron affinity of the acceptor leads to a charge transfer and consequently to free carriers, whose number is normally fixed. The carriers are delocalized throughout the crystal due to the overlap of the molecular orbitals, which is also responsible for the highly anisotropic conductivity. That is why one distinguishes between organic conductors of different dimensionality. They possess a huge variety of ground states, for instance charge ordering, spin-Peierls, spin-density wave, antiferromagnetic state, superconductivity, and charge-density wave, to name only some of them.
Quasi-one-dimensional organic conductors.
The model system of one-dimensional conductors is the Bechgaard-Fabre salts family, (TMTTF)2X and (TMTSF)2X, where in the latter sulfur is substituted by selenium, leading to a more metallic behavior over a wide temperature range and to the absence of charge order. The TMTTF compounds, in contrast, show semiconducting behavior at room temperature, depending on the counterions X, and are expected to be more one-dimensional than (TMTSF)2X.
The transition temperature TCO for the TMTTF subfamily was registered over two orders of magnitude for the centrosymmetric anions X = Br, PF6, AsF6, SbF6 and the non-centrosymmetric anions X = BF4 and ReO4.
In the mid-eighties, a new "structureless transition" was discovered by Coulon et al. through transport and thermopower measurements. They observed a sudden rise of the resistivity and the thermopower at TCO, while x-ray measurements showed no evidence of a change in the crystal symmetry or of the formation of a superstructure. The transition was later confirmed by 13C-NMR and dielectric measurements.
Various measurements under pressure reveal a decrease of the transition temperature TCO with increasing pressure. According to the phase diagram of that family, increasing pressure applied to the TMTTF compounds can be understood as a shift from the semiconducting state (at room temperature) to a higher-dimensional and metallic state, as found for the TMTSF compounds, which show no charge-order state.
Quasi-two-dimensional organic conductors.
A dimensional crossover can be induced not only by applying pressure, but also by substituting the donor molecules with other ones. From a historical point of view, the main aim was to synthesize an organic superconductor with a high TC. The key to reaching that aim was to increase the orbital overlap in two dimensions. With BEDT-TTF and its large π-electron system, a new family of quasi-two-dimensional organic conductors was created, exhibiting a great variety of phase diagrams and crystal structure arrangements.
At the turn of the millennium, first NMR measurements on the θ-(BEDT-TTF)2RbZn(SCN)4 compound revealed the known metal-to-insulator transition at TCO = 195 K to be a charge order transition.
Transition metal oxides.
The most prominent transition metal oxide exhibiting a CO transition is magnetite, Fe3O4, a mixed-valence oxide in which the iron atoms have a statistical distribution of Fe3+ and Fe2+ above the transition temperature. Below 122 K, the 2+ and 3+ species arrange themselves in a regular pattern, whereas above that transition temperature (also referred to as the Verwey temperature in this case) the thermal energy is large enough to destroy the order.
Alkali metal oxides.
The alkali metal oxides rubidium sesquioxide (Rb4O6) and caesium sesquioxide (Cs4O6) display charge ordering.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\delta_d "
},
{
"math_id": 1,
"text": "H = -t \\sum_{i} \\sum_{\\sigma} \\left ( \\left [ 1+ \\left(-1 \\right)^i \\delta_d \\right ]c^{\\dagger}_{i,\\sigma}c_{i+1,\\sigma}+ h.c \\right)+ U \\sum_i n_{i,\\uparrow}n_{i,\\downarrow} + V \\sum_{i}n_i, n_{i+1} "
},
{
"math_id": 2,
"text": " c^{\\dagger}_{i,\\sigma}"
},
{
"math_id": 3,
"text": " c_{i+1,\\sigma} "
},
{
"math_id": 4,
"text": " \\sigma = \\uparrow , \\downarrow "
},
{
"math_id": 5,
"text": " i "
},
{
"math_id": 6,
"text": "i+1"
},
{
"math_id": 7,
"text": " n_{i,\\downarrow, \\uparrow} "
}
] |
https://en.wikipedia.org/wiki?curid=14646706
|
14647485
|
Bag-of-words model in computer vision
|
Image classification model
In computer vision, the bag-of-words model (BoW model) sometimes called bag-of-visual-words model can be applied to image classification or retrieval, by treating image features as words. In document classification, a bag of words is a sparse vector of occurrence counts of words; that is, a sparse histogram over the vocabulary. In computer vision, a "bag of visual words" is a vector of occurrence counts of a vocabulary of local image features.
Image representation based on the BoW model.
To represent an image using the BoW model, an image can be treated as a document. Similarly, "words" in images need to be defined too. To achieve this, three steps are usually involved: feature detection, feature description, and codebook generation.
A definition of the BoW model can be the "histogram representation based on independent features". Content based image indexing and retrieval (CBIR) appears to be the early adopter of this image representation technique.
Feature representation.
After feature detection, each image is abstracted by several local patches. Feature representation methods deal with how to represent the patches as numerical vectors. These vectors are called feature descriptors. A good descriptor should have the ability to handle intensity, rotation, scale and affine variations to some extent. One of the most famous descriptors is the scale-invariant feature transform (SIFT). SIFT converts each patch to 128-dimensional vector. After this step, each image is a collection of vectors of the same dimension (128 for SIFT), where the order of different vectors is of no importance.
Codebook generation.
The final step for the BoW model is to convert vector-represented patches to "codewords" (analogous to words in text documents), which also produces a "codebook" (analogy to a word dictionary). A codeword can be considered as a representative of several similar patches. One simple method is performing k-means clustering over all the vectors. Codewords are then defined as the centers of the learned clusters. The number of the clusters is the codebook size (analogous to the size of the word dictionary).
Thus, each patch in an image is mapped to a certain codeword through the clustering process and the image can be represented by the histogram of the codewords.
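A minimal sketch of codebook generation and histogram encoding, using k-means from scikit-learn on random stand-ins for SIFT descriptors (the array shapes, the codebook size and all names are illustrative assumptions):
<syntaxhighlight lang="python">
import numpy as np
from sklearn.cluster import KMeans

# Toy codebook generation: cluster local descriptors (e.g. 128-d SIFT vectors)
# with k-means and represent each image as a histogram of codeword counts.
# The descriptor arrays are random stand-ins for real SIFT output.

rng = np.random.default_rng(0)
descriptors_per_image = [rng.normal(size=(rng.integers(50, 200), 128))
                         for _ in range(10)]                  # 10 "images"

codebook_size = 64
kmeans = KMeans(n_clusters=codebook_size, n_init=10, random_state=0)
kmeans.fit(np.vstack(descriptors_per_image))                  # learn the codebook

def bow_histogram(descriptors):
    """Map each patch descriptor to its nearest codeword and count occurrences."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=codebook_size).astype(float)
    return hist / hist.sum()                                   # normalised histogram

histograms = np.array([bow_histogram(d) for d in descriptors_per_image])
print(histograms.shape)                                        # (10, 64)
</syntaxhighlight>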
Learning and recognition based on the BoW model.
Computer vision researchers have developed several learning methods to leverage the BoW model for image-related tasks, such as object categorization. These methods can roughly be divided into two categories, unsupervised and supervised models. For the multiple-label categorization problem, the confusion matrix can be used as an evaluation metric.
Unsupervised models.
Here are some notations for this section. Suppose the size of codebook is formula_0.
Since the BoW model is an analogy to the BoW model in NLP, generative models developed in text domains can also be adapted in computer vision. Simple Naive Bayes model and hierarchical Bayesian models are discussed.
Naive Bayes.
The simplest one is Naive Bayes classifier. Using the language of graphical models, the Naive Bayes classifier is described by the equation below. The basic idea (or assumption) of this model is that each category has its own distribution over the codebooks, and that the distributions of each category are observably different. Take a face category and a car category for an example. The face category may emphasize the codewords which represent "nose", "eye" and "mouth", while the car category may emphasize the codewords which represent "wheel" and "window". Given a collection of training examples, the classifier learns different distributions for different categories. The categorization decision is made by
formula_13
Since the Naive Bayes classifier is simple yet effective, it is usually used as a baseline method for comparison.
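A minimal sketch of the decision rule formula_13 on codeword count vectors; the Laplace smoothing and the toy data are added assumptions, not part of the original description:
<syntaxhighlight lang="python">
import numpy as np

# Sketch of the Naive Bayes decision rule above on BoW count vectors:
# c* = argmax_c p(c) * prod_n p(w_n | c).  Laplace smoothing is an added
# assumption to avoid zero probabilities; the data below are stand-ins.

def train(counts, labels, num_classes, alpha=1.0):
    """counts: (num_images, codebook_size) codeword count matrix."""
    priors = np.array([(labels == c).mean() for c in range(num_classes)])
    cond = np.array([counts[labels == c].sum(axis=0) + alpha
                     for c in range(num_classes)])
    cond /= cond.sum(axis=1, keepdims=True)     # p(codeword | class)
    return np.log(priors), np.log(cond)

def classify(count_vec, log_priors, log_cond):
    """Return argmax_c [log p(c) + sum_v count_v * log p(v | c)]."""
    return int(np.argmax(log_priors + log_cond @ count_vec))

# Tiny example with a 4-word codebook and two classes
counts = np.array([[8, 1, 0, 1], [7, 2, 1, 0], [0, 1, 9, 2], [1, 0, 8, 3]])
labels = np.array([0, 0, 1, 1])
log_priors, log_cond = train(counts, labels, num_classes=2)
print(classify(np.array([6, 1, 1, 0]), log_priors, log_cond))   # -> 0
</syntaxhighlight>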
Hierarchical Bayesian models.
The basic assumption of Naive Bayes model does not hold sometimes. For example, a natural scene image may contain several different themes.
Probabilistic latent semantic analysis (pLSA) and latent Dirichlet allocation (LDA) are two popular topic models from text domains to tackle the similar multiple "theme" problem. Take LDA for an example. To model natural scene images using LDA, an analogy is made with document analysis: an image corresponds to a document, a codeword to a word, and the latent topics to the "themes" of the scene.
This method shows very promising results in natural scene categorization on 13 Natural Scene Categories.
Supervised models.
Since images are represented based on the BoW model, any discriminative model suitable for text document categorization can be tried, such as the support vector machine (SVM) and AdaBoost. The kernel trick is also applicable when a kernel-based classifier is used, such as the SVM. The pyramid match kernel is a newly developed kernel based on the BoW model. The local feature approach of using the BoW model representation learnt by machine learning classifiers with different kernels (e.g., the EMD kernel and the formula_14 kernel) has been extensively tested in the area of texture and object recognition. Very promising results on a number of datasets have been reported.
This approach has achieved very impressive results in the PASCAL Visual Object Classes Challenge.
Pyramid match kernel.
The pyramid match kernel is a fast kernel function (satisfying Mercer's condition) with linear rather than the classical quadratic complexity, which maps the BoW features, or sets of features in high dimension, to multi-dimensional multi-resolution histograms. An advantage of these multi-resolution histograms is their ability to capture co-occurring features. The pyramid match kernel builds multi-resolution histograms by binning data points into discrete regions of increasing size. Thus, points that do not match at high resolutions have the chance to match at low resolutions. The pyramid match kernel performs an approximate similarity match, without explicit search or computation of distance. Instead, it intersects the histograms to approximate the optimal match. Accordingly, the computation time is only linear in the number of features. Compared with other kernel approaches, the pyramid match kernel is much faster, yet provides equivalent accuracy. The pyramid match kernel was applied to the ETH-80 database and the Caltech 101 database with promising results.
Limitations and recent developments.
One of the notorious disadvantages of BoW is that it ignores the spatial relationships among the patches, which are very important in image representation. Researchers have proposed several methods to incorporate the spatial information. For feature-level improvements, correlogram features can capture spatial co-occurrences of features. For generative models, relative positions of codewords are also taken into account. The hierarchical shape and appearance model for human action introduces a new part layer (Constellation model) between the mixture proportion and the BoW features, which captures the spatial relationships among parts in the layer. For discriminative models, spatial pyramid match performs pyramid matching by partitioning the image into increasingly fine sub-regions and computing histograms of local features inside each sub-region. Recently, an augmentation of local image descriptors (e.g. SIFT) by their spatial coordinates normalised by the image width and height has proved to be a robust and simple Spatial Coordinate Coding approach which introduces spatial information to the BoW model.
The BoW model has not yet been extensively tested for viewpoint invariance and scale invariance, and the performance is unclear. Also, the BoW model for object segmentation and localization is not well understood.
A systematic comparison of classification pipelines found that the encoding of first and second order statistics (Vector of Locally Aggregated Descriptors (VLAD) and Fisher Vector (FV)) considerably increased classification accuracy compared to BoW, while also decreasing the codebook size, thus lowering the computational effort for codebook generation. Moreover, a recent detailed comparison of coding and pooling methods for BoW has shown that second order statistics combined with Sparse Coding and an appropriate pooling such as Power Normalisation can further outperform Fisher Vectors and even approach the results of simple Convolutional Neural Network models on some object recognition datasets such as the Oxford Flower Dataset 102.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "w"
},
{
"math_id": 2,
"text": "v"
},
{
"math_id": 3,
"text": "w^v=1"
},
{
"math_id": 4,
"text": "w^u = 0"
},
{
"math_id": 5,
"text": "u\\neq v"
},
{
"math_id": 6,
"text": "\\mathbf{w}"
},
{
"math_id": 7,
"text": "\\mathbf{w}=[w_1, w_2, \\cdots, w_N]"
},
{
"math_id": 8,
"text": "d_j"
},
{
"math_id": 9,
"text": "j"
},
{
"math_id": 10,
"text": "c"
},
{
"math_id": 11,
"text": "z"
},
{
"math_id": 12,
"text": "\\pi"
},
{
"math_id": 13,
"text": "c^*=\\arg \\max_c p(c|\\mathbf{w}) = \\arg \\max_c p(c)p(\\mathbf{w}|c)=\\arg \\max_c p(c)\\prod_{n=1}^Np(w_n|c)"
},
{
"math_id": 14,
"text": "X^2"
}
] |
https://en.wikipedia.org/wiki?curid=14647485
|
14648105
|
NAD(P)(+)—protein-arginine ADP-ribosyltransferase
|
Class of enzymes
In enzymology, a NAD(P)+-protein-arginine ADP-ribosyltransferase (EC 2.4.2.31) is an enzyme that catalyzes the chemical reaction using nicotinamide adenine dinucleotide
NAD+ + protein L-arginine formula_0 nicotinamide + Nomega-(ADP-D-ribosyl)-protein-L-arginine
as well as the corresponding reaction using nicotinamide adenine dinucleotide phosphate
NADP+ + protein L-arginine formula_0 nicotinamide + Nomega-[(2'-phospho-ADP)-D-ribosyl]-protein-L-arginine
Thus, the two substrates of this enzyme are NAD+ (or NADP+) and protein L-arginine, whereas its two products are nicotinamide and Nomega-(ADP-D-ribosyl)-protein-L-arginine (or Nomega-[(2'-phospho-ADP)-D-ribosyl]-protein-L-arginine, respectively).
This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is NAD(P)+:protein-L-arginine ADP-D-ribosyltransferase. Other names in common use include ADP-ribosyltransferase, mono(ADP-ribosyl)transferase, NAD+:L-arginine ADP-D-ribosyltransferase, NAD(P)+-arginine ADP-ribosyltransferase, and NAD(P)+:L-arginine ADP-D-ribosyltransferase.
At least five forms of the enzyme have been characterised to date, some of which are attached to the membrane via glycosylphosphatidylinositol (GPI) anchors, while others appear to be secreted. The enzymes contain ~250-300 residues, which encode putative signal sequences and carbohydrate attachment sites. In addition, the N- and C-termini are predominantly hydrophobic, a characteristic of GPI-anchored proteins.
Structural studies.
As of late 2007, 6 structures have been solved for this class of enzymes, with PDB accession codes 1GXY, 1GXZ, 1GY0, 1OG1, 1OG3, and 1OG4.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14648105
|
14650100
|
1,2-diacylglycerol 3-glucosyltransferase
|
Class of enzymes
In enzymology, a 1,2-diacylglycerol 3-glucosyltransferase (EC 2.4.1.157) is an enzyme that catalyzes the chemical reaction
UDP-glucose + 1,2-diacylglycerol formula_0 UDP + 3-D-glucosyl-1,2-diacylglycerol
Thus, the two substrates of this enzyme are UDP-glucose and 1,2-diacylglycerol, whereas its two products are UDP and 3-D-glucosyl-1,2-diacylglycerol.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:1,2-diacylglycerol 3-D-glucosyltransferase. Other names in common use include UDP-glucose:diacylglycerol glucosyltransferase, UDP-glucose:1,2-diacylglycerol glucosyltransferase, uridine diphosphoglucose-diacylglycerol glucosyltransferase, and UDP-glucose-diacylglycerol glucosyltransferase. This enzyme participates in glycerolipid metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650100
|
14650121
|
1,3-beta-D-glucan phosphorylase
|
Class of enzymes
In enzymology, a 1,3-beta-D-glucan phosphorylase (EC 2.4.1.97) is an enzyme that catalyzes the chemical reaction
(1,3-beta-D-glucosyl)n + phosphate formula_0 (1,3-beta-D-glucosyl)n-1 + alpha-D-glucose 1-phosphate
Thus, the two substrates of this enzyme are (1,3-beta-D-glucosyl)n and phosphate, whereas its two products are (1,3-beta-D-glucosyl)n-1 and alpha-D-glucose 1-phosphate.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is 1,3-beta-D-glucan:phosphate alpha-D-glucosyltransferase. Other names in common use include laminarin phosphoryltransferase and 1,3-beta-D-glucan:orthophosphate glucosyltransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650121
|
14650144
|
1,3-beta-galactosyl-N-acetylhexosamine phosphorylase
|
Class of enzymes
In enzymology, a 1,3-beta-galactosyl-N-acetylhexosamine phosphorylase (EC 2.4.1.211) is an enzyme that catalyzes the chemical reaction
beta-D-galactopyranosyl-(1->3)-N-acetyl-D-glucosamine + phosphate formula_0 alpha-D-galactopyranose 1-phosphate + N-acetyl-D-glucosamine
Thus, the two substrates of this enzyme are beta-D-galactopyranosyl-(1->3)-N-acetyl-D-glucosamine and phosphate, whereas its two products are alpha-D-galactopyranose 1-phosphate and N-acetyl-D-glucosamine.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is beta-D-galactopyranosyl-(1->3)-N-acetyl-D-hexosamine:phosphate galactosyltransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650144
|
14650163
|
1,3-beta-oligoglucan phosphorylase
|
Class of enzymes
In enzymology, a 1,3-beta-oligoglucan phosphorylase (EC 2.4.1.30) is an enzyme that catalyzes the chemical reaction
(1,3-beta-D-glucosyl)n + phosphate formula_0 (1,3-beta-D-glucosyl)n-1 + alpha-D-glucose 1-phosphate
Thus, the two substrates of this enzyme are (1,3-beta-D-glucosyl)n and phosphate, whereas its two products are (1,3-beta-D-glucosyl)n-1 and alpha-D-glucose 1-phosphate.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is 1,3-beta-D-oligoglucan:phosphate alpha-D-glucosyltransferase. Other names in common use include beta-1,3-oligoglucan:orthophosphate glucosyltransferase II, and beta-1,3-oligoglucan phosphorylase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650163
|
14650181
|
13-hydroxydocosanoate 13-beta-glucosyltransferase
|
Class of enzymes
In enzymology, a 13-hydroxydocosanoate 13-beta-glucosyltransferase (EC 2.4.1.158) is an enzyme that catalyzes the chemical reaction. This reaction is part of sophorosyloxydocosanoate biosynthesis. Extracts for research are frequently obtained from Candida yeasts.
UDP-glucose + 13-hydroxydocosanoate formula_0 UDP + 13-beta-D-glucosyloxydocosanoate
Thus, the two substrates of this enzyme are UDP-glucose and 13-hydroxydocosanoate, whereas its two products are UDP and 13-beta-D-glucosyloxydocosanoate.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:13-hydroxydocosanoate 13-beta-D-glucosyltransferase. Other names in common use include 13-glucosyloxydocosanoate 2'-beta-glucosyltransferase, UDP-glucose:13-hydroxydocosanoic acid glucosyltransferase, uridine diphosphoglucose-hydroxydocosanoate glucosyltransferase, and UDP-glucose-13-hydroxydocosanoate glucosyltransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650181
|
14650200
|
1,4-beta-D-xylan synthase
|
Class of enzymes
In enzymology, a 1,4-beta-D-xylan synthase (EC 2.4.2.24) is an enzyme that catalyzes the chemical reaction
UDP-D-xylose + (1,4-beta-D-xylan)n formula_0 UDP + (1,4-beta-D-xylan)n+1
Thus, the two substrates of this enzyme are UDP-D-xylose and (1,4-beta-D-xylan)n, whereas its two products are UDP and (1,4-beta-D-xylan)n+1.
This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is UDP-D-xylose:1,4-beta-D-xylan 4-beta-D-xylosyltransferase. Other names in common use include uridine diphosphoxylose-1,4-beta-xylan xylosyltransferase, 1,4-beta-xylan synthase, xylan synthase, and xylan synthetase. This enzyme participates in starch and sucrose metabolism and nucleotide sugars metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650200
|
14650237
|
2-coumarate O-beta-glucosyltransferase
|
Class of enzymes
In enzymology, a 2-coumarate O-beta-glucosyltransferase (EC 2.4.1.114) is an enzyme that catalyzes the chemical reaction
UDP-glucose + trans-2-hydroxycinnamate formula_0 UDP + trans-beta-D-glucosyl-2-hydroxycinnamate
Thus, the two substrates of this enzyme are UDP-glucose and trans-2-hydroxycinnamate, whereas its two products are UDP and trans-beta-D-glucosyl-2-hydroxycinnamate.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:trans-2-hydroxycinnamate O-beta-D-glucosyltransferase. Other names in common use include uridine diphosphoglucose-o-coumarate glucosyltransferase, and UDPG:o-coumaric acid O-glucosyltransferase. This enzyme participates in phenylpropanoid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650237
|
14650252
|
2-Hydroxyacylsphingosine 1-beta-galactosyltransferase
|
Class of enzymes
In enzymology, a 2-hydroxyacylsphingosine 1-beta-galactosyltransferase (EC 2.4.1.45) is an enzyme that catalyzes the chemical reaction
UDP-galactose + 2-(2-hydroxyacyl)sphingosine formula_0 UDP + 1-(beta-D-galactosyl)-2-(2-hydroxyacyl)sphingosine
Thus, the two substrates of this enzyme are UDP-galactose and 2-(2-hydroxyacyl)sphingosine, whereas its two products are UDP and 1-(beta-D-galactosyl)-2-(2-hydroxyacyl)sphingosine.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-galactose:2-(2-hydroxyacyl)sphingosine 1-beta-D-galactosyl-transferase. Other names in common use include galactoceramide synthase, uridine diphosphogalactose-2-hydroxyacylsphingosine galactosyltransferase, UDPgalactose-2-hydroxyacylsphingosine galactosyltransferase, UDP-galactose:ceramide galactosyltransferase, and UDP-galactose:2-2-hydroxyacylsphingosine galactosyltransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650252
|
14650280
|
3-Galactosyl-N-acetylglucosaminide 4-alpha-L-fucosyltransferase
|
Class of enzymes
In enzymology, a 3-galactosyl-N-acetylglucosaminide 4-alpha-L-fucosyltransferase (EC 2.4.1.65) is an enzyme that catalyzes the chemical reaction
GDP-beta-L-fucose + beta-D-galactosyl-(1->3)-N-acetyl-D-glucosaminyl-R formula_0 GDP + beta-D-galactosyl-(1->3)-[alpha-L-fucosyl-(1->4)]-N-acetyl-beta-D-glucosaminyl-R
Thus, the two substrates of this enzyme are GDP-beta-L-fucose and beta-D-galactosyl-(1->3)-N-acetyl-D-glucosaminyl-R, whereas its two products are GDP and beta-D-galactosyl-(1->3)-[alpha-L-fucosyl-(1->4)]-N-acetyl-beta-D-glucosaminyl-R.
This enzyme participates in 3 metabolic pathways: glycosphingolipid biosynthesis - lactoseries, glycosphingolipid biosynthesis - neo-lactoseries, and glycan structures - biosynthesis 2.
Nomenclature.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is GDP-L-fucose:3-beta-D-galactosyl-N-acetyl-D-glucosaminyl-R 4I-alpha-L-fucosyltransferase. Other names in common use include:
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650280
|
14650303
|
4-galactosyl-N-acetylglucosaminide 3-alpha-L-fucosyltransferase
|
Class of enzymes
In enzymology, a 4-galactosyl-N-acetylglucosaminide 3-alpha-L-fucosyltransferase (EC 2.4.1.152) is an enzyme that catalyzes the chemical reaction
GDP-beta-L-fucose + 1,4-beta-D-galactosyl-N-acetyl-D-glucosaminyl-R formula_0 GDP + 1,4-beta-D-galactosyl-(alpha-1,3-L-fucosyl)-N-acetyl-D-glucosaminyl-R
Thus, the two substrates of this enzyme are GDP-beta-L-fucose and 1,4-beta-D-galactosyl-N-acetyl-D-glucosaminyl-R. Its two products are GDP and 1,4-beta-D-galactosyl-(alpha-1,3-L-fucosyl)-N-acetyl-D-glucosaminyl-R.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is GDP-beta-L-fucose:1,4-beta-D-galactosyl-N-acetyl-D-glucosaminyl-R 3-alpha-L-fucosyltransferase. Other names in common use include:
This enzyme participates in 3 metabolic pathways: glycosphingolipid biosynthesis - neo-lactoseries, glycosphingolipid biosynthesis - globoseries, and glycan structures - biosynthesis 2.
Structural studies.
As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 2NZW, 2NZX, and 2NZY.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650303
|
14650320
|
4-hydroxybenzoate 4-O-beta-D-glucosyltransferase
|
Class of enzymes
In enzymology, a 4-hydroxybenzoate 4-O-beta-D-glucosyltransferase (EC 2.4.1.194) is an enzyme that catalyzes the chemical reaction
UDP-glucose + 4-hydroxybenzoate formula_0 UDP + 4-(beta-D-glucosyloxy)benzoate
Thus, the two substrates of this enzyme are UDP-glucose and 4-hydroxybenzoate, whereas its two products are UDP and 4-(beta-D-glucosyloxy)benzoate.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:4-hydroxybenzoate 4-O-beta-D-glucosyltransferase. Other names in common use include uridine diphosphoglucose-4-hydroxybenzoate glucosyltransferase, UDP-glucose:4-(beta-D-glucopyranosyloxy)benzoic acid glucosyltransferase, HBA glucosyltransferase, p-hydroxybenzoate glucosyltransferase, PHB glucosyltransferase, and PHB-O-glucosyltransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650320
|
14650340
|
6G-fructosyltransferase
|
Class of enzymes
In enzymology, a 6G-fructosyltransferase (EC 2.4.1.243) is an enzyme that catalyzes the chemical reaction
[1-beta-D-fructofuranosyl-(2->1)-]m+1 alpha-D-glucopyranoside + [1-beta-D-fructofuranosyl-(2->1)-]n+1 alpha-D-glucopyranoside formula_0 [1-beta-D-fructofuranosyl-(2->1)-]m alpha-D-glucopyranoside + [1-beta-D-fructofuranosyl-(2->1)-]n+1 beta-D-fructofuranosyl-(2->6)-alpha-D-glucopyranoside (m > 0; n >= 0)
Thus, the two substrates of this enzyme are [1-beta-D-fructofuranosyl-(2->1)-]m+1 alpha-D-glucopyranoside and [1-beta-D-fructofuranosyl-(2->1)-]n+1 alpha-D-glucopyranoside, whereas its two products are [1-beta-D-fructofuranosyl-(2->1)-]m alpha-D-glucopyranoside and [1-beta-D-fructofuranosyl-(2->1)-]n+1 beta-D-fructofuranosyl-(2->6)-alpha-D-glucopyranoside.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is 1F-oligo[beta-D-fructofuranosyl-(2->1)-]sucrose 6G-beta-D-fructotransferase. Other names in common use include fructan:fructan 6G-fructosyltransferase, 1F(1-beta-D-fructofuranosyl)m sucrose:1F(1-beta-D-fructofuranosyl)n sucrose 6G-fructosyltransferase, 6G-FFT, 6G-FT, and 6G-fructotransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650340
|
14650360
|
Abequosyltransferase
|
Class of enzymes
In enzymology, an abequosyltransferase (EC 2.4.1.60) is an enzyme that catalyzes the chemical reaction
CDP-abequose + D-mannosyl-L-rhamnosyl-D-galactose-1-diphospholipid formula_0 CDP + D-abequosyl-D-mannosyl-rhamnosyl-D-galactose-1-diphospholipid
Thus, the two substrates of this enzyme are CDP-abequose and D-mannosyl-L-rhamnosyl-D-galactose-1-diphospholipid, whereas its two products are CDP and D-abequosyl-D-mannosyl-rhamnosyl-D-galactose-1-diphospholipid.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is CDP-abequose:D-mannosyl-L-rhamnosyl-D-galactose-1-diphospholipid D-abequosyltransferase. This enzyme is also called trihexose diphospholipid abequosyltransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650360
|
14650375
|
Aldose beta-D-fructosyltransferase
|
Class of enzymes
In enzymology, an aldose beta-D-fructosyltransferase (EC 2.4.1.162) is an enzyme that catalyzes the chemical reaction
alpha-D-aldosyl1 beta-D-fructoside + D-aldose2 formula_0 D-aldose1 + alpha-D-aldosyl2 beta-D-fructoside
Thus, the two substrates of this enzyme are alpha-D-aldosyl1 beta-D-fructoside and D-aldose2, whereas its two products are D-aldose1 and alpha-D-aldosyl2 beta-D-fructoside.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is alpha-D-aldosyl-beta-D-fructoside:aldose 1-beta-D-fructosyltransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650375
|
14650395
|
Monadic second-order logic
|
In mathematical logic, monadic second-order logic (MSO) is the fragment of second-order logic where the second-order quantification is limited to quantification over sets. It is particularly important in the logic of graphs, because of Courcelle's theorem, which provides algorithms for evaluating monadic second-order formulas over graphs of bounded treewidth. It is also of fundamental importance in automata theory, where the Büchi–Elgot–Trakhtenbrot theorem gives a logical characterization of the regular languages.
Second-order logic allows quantification over predicates. However, MSO is the fragment in which second-order quantification is limited to monadic predicates (predicates having a single argument). This is often described as quantification over "sets" because monadic predicates are equivalent in expressive power to sets (the set of elements for which the predicate is true).
Variants.
Monadic second-order logic comes in two variants. In the variant considered over structures such as graphs and in Courcelle's theorem, the formula may involve non-monadic predicates (in this case the binary edge predicate formula_0), but quantification is restricted to be over monadic predicates only. In the variant considered in automata theory and the Büchi–Elgot–Trakhtenbrot theorem, all predicates, including those in the formula itself, must be monadic, with the exceptions of equality (formula_1) and ordering (formula_2) relations.
Computational complexity of evaluation.
Existential monadic second-order logic (EMSO) is the fragment of MSO in which all quantifiers over sets must be existential quantifiers, outside of any other part of the formula. The first-order quantifiers are not restricted. By analogy to Fagin's theorem, according to which existential (non-monadic) second-order logic captures precisely the descriptive complexity of the complexity class NP, the class of problems that may be expressed in existential monadic second-order logic has been called monadic NP. The restriction to monadic logic makes it possible to prove separations in this logic that remain unproven for non-monadic second-order logic. For instance, in the logic of graphs, testing whether a graph is disconnected belongs to monadic NP, as the test can be represented by a formula that describes the existence of a proper subset of vertices with no edges connecting them to the rest of the graph; however, the complementary problem, testing whether a graph is connected, does not belong to monadic NP. The existence of an analogous pair of complementary problems, only one of which has an existential second-order formula (without the restriction to monadic formulas) is equivalent to the inequality of NP and coNP, an open question in computational complexity.
By contrast, when we wish to check whether a Boolean MSO formula is satisfied by an input finite tree, this problem can be solved in linear time in the tree, by translating the Boolean MSO formula to a tree automaton and evaluating the automaton on the tree. In terms of the query, however, the complexity of this process is generally nonelementary. Thanks to Courcelle's theorem, we can also evaluate a Boolean MSO formula in linear time on an input graph if the treewidth of the graph is bounded by a constant.
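As a toy illustration of this automaton-based evaluation, the sketch below runs a two-state deterministic bottom-up tree automaton (checking that the number of a-labelled nodes is even, an MSO-definable property) over a binary tree in time linear in the size of the tree; the property, the tree representation and all names are illustrative, and a real MSO-to-automaton translation would produce such an automaton mechanically, generally with nonelementary blow-up in the size of the formula.
<syntaxhighlight lang="python">
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    label: str
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def run(node):
    """Return the automaton state (parity of 'a' labels) for the subtree;
    one bottom-up pass, i.e. linear time in the number of nodes."""
    if node is None:
        return 0                               # state assigned to the empty tree
    here = 1 if node.label == "a" else 0
    return (here + run(node.left) + run(node.right)) % 2

tree = Node("a", Node("b", Node("a")), Node("a"))
print("accepted" if run(tree) == 0 else "rejected")   # three 'a' nodes -> rejected
</syntaxhighlight>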
For MSO formulas that have free variables, when the input data is a tree or has bounded treewidth, there are efficient enumeration algorithms to produce the set of all solutions, ensuring that the input data is preprocessed in linear time and that each solution is then produced in a delay linear in the size of each solution, i.e., constant-delay in the common case where all free variables of the query are first-order variables (i.e., they do not represent sets). There are also efficient algorithms for counting the number of solutions of the MSO formula in that case.
Decidability and complexity of satisfiability.
The satisfiability problem for monadic second-order logic is undecidable in general because this logic subsumes first-order logic.
The monadic second-order theory of the infinite complete binary tree, called S2S, is decidable. As a consequence of this result, the following theories are decidable as well: the monadic second-order theory of the natural numbers formula_3 with their ordering, called S1S, and the weak monadic second-order theories WS2S and WS1S, in which second-order quantification is restricted to finite sets.
For each of these theories (S2S, S1S, WS2S, WS1S), the complexity of the decision problem is nonelementary.
Use of satisfiability of MSO on trees in verification.
Monadic second-order logic of trees has applications in formal verification. Decision procedures for MSO satisfiability have been used to prove properties of programs manipulating linked data structures, as a form of shape analysis, and for symbolic reasoning in hardware verification.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "E(x, y)"
},
{
"math_id": 1,
"text": "="
},
{
"math_id": 2,
"text": "<"
},
{
"math_id": 3,
"text": "\\mathbb{N}"
}
] |
https://en.wikipedia.org/wiki?curid=14650395
|
14650406
|
Alginate synthase
|
Class of enzymes
In enzymology, an alginate synthase (EC 2.4.1.33) is an enzyme that catalyzes the chemical reaction
GDP-D-mannuronate + (alginate)n formula_0 GDP + (alginate)n+1
Thus, the two substrates of this enzyme are GDP-D-mannuronate and (alginate)n, whereas its two products are GDP and (alginate)n+1.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is GDP-D-mannuronate:alginate D-mannuronyltransferase. This enzyme is also called mannuronosyl transferase. This enzyme participates in fructose and mannose metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650406
|
14650425
|
Alizarin 2-beta-glucosyltransferase
|
Class of enzymes
In enzymology, an alizarin 2-beta-glucosyltransferase (EC 2.4.1.103) is an enzyme that catalyzes the chemical reaction
UDP-glucose + alizarin formula_0 UDP + 1-hydroxy-2-(beta-D-glucosyloxy)-9,10-anthraquinone
Thus, the two substrates of this enzyme are UDP-glucose and alizarin, whereas its two products are UDP and 1-hydroxy-2-(beta-D-glucosyloxy)-9,10-anthraquinone.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:1,2-dihydroxy-9,10-anthraquinone 2-O-beta-D-glucosyltransferase. This enzyme is also called uridine diphosphoglucose-alizarin glucosyltransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650425
|
14650442
|
Alpha-1,3-glucan synthase
|
Class of enzymes
In enzymology, an alpha-1,3-glucan synthase (EC 2.4.1.183) is an enzyme that catalyzes the chemical reaction
UDP-glucose + [alpha-D-glucosyl-(1-3)]n formula_0 UDP + [alpha-D-glucosyl-(1-3)]n+1
Thus, the two substrates of this enzyme are UDP-glucose and [alpha-D-glucosyl-(1-3)]n, whereas its two products are UDP and [alpha-D-glucosyl-(1-3)]n+1.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:alpha-D-(1-3)-glucan 3-alpha-D-glucosyltransferase. Other names in common use include uridine diphosphoglucose-1,3-alpha-glucan glucosyltransferase, and 1,3-alpha-D-glucan synthase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650442
|
14650463
|
Alpha-1,4-glucan-protein synthase (ADP-forming)
|
Class of enzymes
In enzymology, an alpha-1,4-glucan-protein synthase (ADP-forming) (EC 2.4.1.113) is an enzyme that catalyzes the chemical reaction
ADP-glucose + protein formula_0 ADP + alpha-D-glucosyl-protein
Thus, the two substrates of this enzyme are ADP-glucose and protein, whereas its two products are ADP and alpha-D-glucosyl-protein.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is ADP-glucose:protein 4-alpha-D-glucosyltransferase. Other names in common use include ADP-glucose:protein glucosyltransferase, and adenosine diphosphoglucose-protein glucosyltransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650463
|
14650484
|
Alpha,alpha-trehalose-phosphate synthase (GDP-forming)
|
Class of enzymes
In enzymology, an alpha,alpha-trehalose-phosphate synthase (GDP-forming) (EC 2.4.1.36) is an enzyme that catalyzes the chemical reaction
GDP-glucose + glucose 6-phosphate formula_0 GDP + alpha,alpha-trehalose 6-phosphate
Thus, the two substrates of this enzyme are GDP-glucose and glucose 6-phosphate, whereas its two products are GDP and alpha,alpha'-trehalose 6-phosphate.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is GDP-glucose:D-glucose-6-phosphate 1-alpha-D-glucosyltransferase. Other names in common use include GDP-glucose-glucose-phosphate glucosyltransferase, guanosine diphosphoglucose-glucose phosphate glucosyltransferase, and trehalose phosphate synthase (GDP-forming).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650484
|
14650508
|
Alpha,alpha-trehalose-phosphate synthase (UDP-forming)
|
Class of enzymes
In enzymology, an alpha,alpha-trehalose-phosphate synthase (UDP-forming) (EC 2.4.1.15) is an enzyme that catalyzes the chemical reaction
UDP-glucose + D-glucose 6-phosphate formula_0 UDP + alpha,alpha-trehalose 6-phosphate
Thus, the two substrates of this enzyme are UDP-glucose and D-glucose 6-phosphate, whereas its two products are UDP and alpha,alpha'-trehalose 6-phosphate.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is "UDP-glucose:D-glucose-6-phosphate 1-alpha-D-glucosyltransferase". Other names in common use include "UDP-glucose-glucose-phosphate glucosyltransferase", "trehalosephosphate-UDP glucosyltransferase", "UDP-glucose-glucose-phosphate glucosyltransferase", "alpha,alpha-trehalose phosphate synthase (UDP-forming)", "phosphotrehalose-uridine diphosphate transglucosylase", "trehalose 6-phosphate synthase", "trehalose 6-phosphate synthetase", "trehalose phosphate synthase", "trehalose phosphate synthetase", "trehalose phosphate-uridine diphosphate glucosyltransferase", "trehalose-P synthetase", "transglucosylase", and "uridine diphosphoglucose phosphate glucosyltransferase". This enzyme participates in starch and sucrose metabolism.
Structural studies.
As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 1GZ5, 1UQT, and 1UQU.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650508
|
14650522
|
Alpha,alpha-trehalose phosphorylase
|
Class of enzymes
In enzymology, an alpha,alpha-trehalose phosphorylase (EC 2.4.1.64) is an enzyme that catalyzes the chemical reaction
alpha,alpha-trehalose + phosphate formula_0 D-glucose + beta-D-glucose 1-phosphate
Thus, the two substrates of this enzyme are trehalose and phosphate, whereas its two products are D-glucose and beta-D-glucose 1-phosphate.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is alpha,alpha-trehalose:phosphate beta-D-glucosyltransferase. This enzyme is also called trehalose phosphorylase. This enzyme participates in starch and sucrose metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650522
|
14650536
|
Alpha,alpha-trehalose phosphorylase (configuration-retaining)
|
Enzyme family
In enzymology, an alpha,alpha-trehalose phosphorylase (configuration-retaining) (EC 2.4.1.231) is an enzyme that catalyzes the chemical reaction
alpha,alpha-trehalose + phosphate formula_0 alpha-D-glucose + alpha-D-glucose 1-phosphate
Thus, the two substrates of this enzyme are alpha,alpha-trehalose and phosphate, whereas its two products are alpha-D-glucose and alpha-D-glucose 1-phosphate.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is alpha,alpha-trehalose:phosphate alpha-D-glucosyltransferase. This enzyme is also called trehalose phosphorylase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650536
|
14650549
|
Alpha-N-acetylgalactosaminide alpha-2,6-sialyltransferase
|
Class of enzymes
In enzymology, an alpha-N-acetylgalactosaminide alpha-2,6-sialyltransferase (EC 2.4.99.3) is an enzyme that catalyzes the chemical reaction
CMP-N-acetylneuraminate + glycano-1,3-(N-acetyl-alpha-D-galactosaminyl)-glycoprotein formula_0 CMP + glycano-(2,6-alpha-N-acetylneuraminyl)-(N-acetyl-D-galactosaminyl)-glycoprotein
Thus, the two substrates of this enzyme are CMP-N-acetylneuraminate and glycano-1,3-(N-acetyl-alpha-D-galactosaminyl)-glycoprotein, whereas its two products are CMP and glycano-(2,6-alpha-N-acetylneuraminyl)-(N-acetyl-D-galactosaminyl)-glycoprotein.
This enzyme belongs to the family of transferases, specifically those glycosyltransferases that do not transfer hexosyl or pentosyl groups. The systematic name of this enzyme class is CMP-N-acetylneuraminate:glycano-1,3-(N-acetyl-alpha-D-galactosaminyl)-glycoprotein alpha-2,6-N-acetylneuraminyltransferase. This enzyme participates in o-glycan biosynthesis and glycan structures - biosynthesis 1.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650549
|
14650570
|
Alpha-N-acetylneuraminate alpha-2,8-sialyltransferase
|
Class of enzymes
In enzymology, an alpha-N-acetylneuraminate alpha-2,8-sialyltransferase (EC 2.4.99.8) is an enzyme that catalyzes the chemical reaction
CMP-N-acetylneuraminate + alpha-N-acetylneuraminyl-2,3-beta-D-galactosyl-R formula_0 CMP + alpha-N-acetylneuraminyl-2,8-alpha-N-acetylneuraminyl-2,3-beta-D-galactosyl-R
Thus, the two substrates of this enzyme are CMP-N-acetylneuraminate and alpha-N-acetylneuraminyl-2,3-beta-D-galactosyl-R, whereas its two products are CMP and alpha-N-acetylneuraminyl-2,8-alpha-N-acetylneuraminyl-2,3-beta-D-galactosyl-R. This enzyme participates in 4 metabolic pathways: glycosphingolipid biosynthesis - neo-lactoseries, glycosphingolipid biosynthesis - globoseries, glycosphingolipid biosynthesis - ganglioseries, and glycan structures - biosynthesis 2.
This enzyme belongs to the family of transferases, specifically those glycosyltransferases that do not transfer hexosyl or pentosyl groups. The systematic name of this enzyme class is CMP-N-acetylneuraminate:alpha-N-acetylneuraminyl-2,3-beta-D-galactoside alpha-2,8-N-acetylneuraminyltransferase. Other names in common use include cytidine monophosphoacetylneuraminate-ganglioside GM3 alpha-2,8-sialyltransferase, ganglioside GD3 synthase, ganglioside GD3 synthetase sialyltransferase, CMP-NeuAc:LM1(alpha2-8) sialyltransferase, GD3 synthase, and SAT-2.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650570
|
14650578
|
Amylosucrase
|
Class of enzymes
In enzymology, an amylosucrase (EC 2.4.1.4) is an enzyme that catalyzes the chemical reaction
sucrose + (1,4-alpha-D-glucosyl)n formula_0 D-fructose + (1,4-alpha-D-glucosyl)n+1
Thus, the two substrates of this enzyme are sucrose and (1,4-alpha-D-glucosyl)n, whereas its two products are D-fructose and (1,4-alpha-D-glucosyl)n+1.
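For readers who want the reaction above as a rendered equation, the following is an illustrative LaTeX snippet (not part of the original entry); the reversible arrow corresponds to the document's formula_0 (\rightleftharpoons), and the subscript n denotes the number of glucose units in the 1,4-alpha-D-glucan chain.
\documentclass{article}
\usepackage{amsmath} % provides \text
\begin{document}
% Amylosucrase (EC 2.4.1.4) reaction; the arrow matches the document's formula_0.
\[
  \text{sucrose} + (\text{1,4-}\alpha\text{-D-glucosyl})_{n}
  \;\rightleftharpoons\;
  \text{D-fructose} + (\text{1,4-}\alpha\text{-D-glucosyl})_{n+1}
\]
\end{document}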
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is sucrose:1,4-alpha-D-glucan 4-alpha-D-glucosyltransferase. Other names in common use include sucrose-glucan glucosyltransferase, and sucrose-1,4-alpha-glucan glucosyltransferase. This enzyme participates in starch and sucrose metabolism.
Structural studies.
As of late 2007, 10 structures have been solved for this class of enzymes, with PDB accession codes 1G5A, 1JG9, 1JGI, 1MVY, 1MW0, 1MW1, 1MW2, 1MW3, 1S46, and 1ZS2.
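As a sketch of how one might retrieve these coordinate files programmatically, the following Python script downloads each listed entry from the RCSB file server; the download URL pattern is an assumption about current RCSB conventions and should be checked against their documentation.
# Sketch: fetch the amylosucrase structures listed above from the RCSB PDB.
# Assumes coordinate files are served at https://files.rcsb.org/download/<ID>.pdb
# (verify against current RCSB documentation before relying on this).
import urllib.request

PDB_IDS = ["1G5A", "1JG9", "1JGI", "1MVY", "1MW0",
           "1MW1", "1MW2", "1MW3", "1S46", "1ZS2"]

def download_pdb(pdb_id: str, directory: str = ".") -> str:
    """Download one PDB coordinate file and return the local file path."""
    url = f"https://files.rcsb.org/download/{pdb_id}.pdb"
    path = f"{directory}/{pdb_id}.pdb"
    with urllib.request.urlopen(url) as response, open(path, "wb") as out:
        out.write(response.read())
    return path

if __name__ == "__main__":
    for pdb_id in PDB_IDS:
        print("saved", download_pdb(pdb_id))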
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650578
|
14650586
|
Anthocyanidin 3-O-glucosyltransferase
|
Class of enzymes
In enzymology, an anthocyanidin 3-O-glucosyltransferase (EC 2.4.1.115) is an enzyme that catalyzes the chemical reaction
UDP-D-glucose + an anthocyanidin formula_0 UDP + an anthocyanidin-3-O-beta-D-glucoside
Thus, the two substrates of this enzyme are UDP-D-glucose and anthocyanidin, whereas its two products are UDP and anthocyanidin-3-O-beta-D-glucoside.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-D-glucose:anthocyanidin 3-O-beta-D-glucosyltransferase. Other names in common use include uridine diphosphoglucose-anthocyanidin 3-O-glucosyltransferase, UDP-glucose:anthocyanidin/flavonol 3-O-glucosyltransferase, UDP-glucose:cyanidin-3-O-glucosyltransferase, UDP-glucose:anthocyanidin 3-O-D-glucosyltransferase, and 3-GT. This enzyme participates in flavonoid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650586
|
14650609
|
Anthocyanin 3'-O-beta-glucosyltransferase
|
Class of enzymes
In enzymology, an anthocyanin 3'-O-beta-glucosyltransferase (EC 2.4.1.238) is an enzyme that catalyzes the chemical reaction
UDP-glucose + an anthocyanin formula_0 UDP + an anthocyanin 3'-O-beta-D-glucoside
Thus, the two substrates of this enzyme are UDP-glucose and anthocyanin, whereas its two products are UDP and anthocyanin 3'-O-beta-D-glucoside.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:anthocyanin 3'-O-beta-D-glucosyltransferase. Other names in common use include UDP-glucose:anthocyanin 3'-O-glucosyltransferase, and 3'GT.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650609
|
14650624
|
Anthranilate phosphoribosyltransferase
|
InterPro Family
In enzymology, an anthranilate phosphoribosyltransferase (EC 2.4.2.18) is an enzyme that catalyzes the chemical reaction
anthranilate + phosphoribosyl pyrophosphate formula_0 N-(5-phosphoribosyl)-anthranilate + diphosphate
The two substrates of this enzyme are anthranilate and phosphoribosyl pyrophosphate. Its two products are N-(5-phosphoribosyl)-anthranilate and diphosphate.
This enzyme participates in aromatic amino acid biosynthesis and two-component system (general).
Nomenclature.
This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is N-(5-phospho-D-ribosyl)-anthranilate:diphosphate phospho-alpha-D-ribosyltransferase.
Other names in common use are:
Function.
Anthranilate phosphoribosyltransferase (AnPRT) is a transferase enzyme which catalyses one of the most fundamental biochemical reactions: the transfer of a ribose group between an aromatic base and phosphate groups. More specifically, AnPRT facilitates the formation of a carbon-nitrogen bond between 5-phospho-alpha-D-ribose 1-diphosphate (PRPP) and anthranilate.
Reaction.
In the aromatic amino acid biosynthesis pathway, specifically the tryptophan-synthesis branch, AnPRT draws anthranilate and 5-phospho-alpha-D-ribose 1-diphosphate into the active site of the protein. Through an SN1-type mechanism, AnPRT transfers the 5-phospho-alpha-D-ribose group from the diphosphate leaving group to the anthranilate.
Structure.
As of late 2007, 12 structures have been solved for this class of enzymes, with PDB accession codes 1GXB, 1KGZ, 1KHD, 1O17, 1V8G, 1VQU, 1ZVW, 1ZXY, 1ZYK, 2BPQ, 2ELC, and 2GVQ.
AnPRT has four domains, and its quaternary structure consists of two identical subunits. Each domain of AnPRT contains a magnesium ion and a pyrophosphate molecule at its active site. The secondary structure of AnPRT consists mainly of alpha helices, with a beta sheet within each domain.
Homologues.
There are homologues of AnPRT within "Saccharomyces cerevisiae", "Kluyveromyces lactis", "Schizosaccharomyces pombe", "Magnaporthe grisea", "Neurospora crassa", "Arabidopsis thaliana", and "Oryza sativa". All of these organisms are alike in that they can make all of the amino acids needed for protein formation (they are prototrophic for these amino acids).
AnPRT is vital in these organisms because it catalyses an essential step in the pathway that synthesizes tryptophan, an amino acid that is essential in humans, who must obtain it from dietary sources such as plants or fungi.
Mutations in AnPRT.
One study examined mutations in the gene encoding AnPRT in "Arabidopsis", focusing on the fluorescence observed in plants carrying these mutations. Nine mutant alleles of the gene were identified, with varying auxotrophic and prototrophic capabilities.
Mutant "Arabidopsis" cells were found to contain increased levels of anthranilate, which was concluded to be linked to the fluorescence of the plants.
The study's relevance lies in its applications: the auxotrophic mutant could be used as a selectable marker in plant transformation, offering a better route to engineering plants and developing their systems for human purposes.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650624
|
14650643
|
Arylamine glucosyltransferase
|
Class of enzymes
In enzymology, an arylamine glucosyltransferase (EC 2.4.1.71) is an enzyme that catalyzes the chemical reaction
UDP-glucose + an arylamine formula_0 UDP + an N-D-glucosylarylamine
Thus, the two substrates of this enzyme are UDP-glucose and arylamine, whereas its two products are UDP and N-D-glucosylarylamine.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:arylamine N-D-glucosyltransferase. Other names in common use include UDP glucose-arylamine glucosyltransferase, and uridine diphosphoglucose-arylamine glucosyltransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650643
|
14650659
|
ATP phosphoribosyltransferase
|
Class of enzymes
In enzymology, an ATP phosphoribosyltransferase (EC 2.4.2.17) is an enzyme that catalyzes the chemical reaction
1-(5-phospho-D-ribosyl)-ATP + diphosphate formula_0 ATP + 5-phospho-alpha-D-ribose 1-diphosphate
Thus, the two substrates of this enzyme are 1-(5-phospho-D-ribosyl)-ATP and diphosphate, whereas its two products are ATP and 5-phospho-alpha-D-ribose 1-diphosphate.
This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is 1-(5-phospho-D-ribosyl)-ATP:diphosphate phospho-alpha-D-ribosyl-transferase. Other names in common use include phosphoribosyl-ATP pyrophosphorylase, adenosine triphosphate phosphoribosyltransferase, phosphoribosyladenosine triphosphate:pyrophosphate phosphoribosyltransferase, phosphoribosyl ATP synthetase, phosphoribosyl ATP:pyrophosphate phosphoribosyltransferase, phosphoribosyl-ATP:pyrophosphate-phosphoribosyl phosphotransferase, phosphoribosyladenosine triphosphate pyrophosphorylase, and phosphoribosyladenosine triphosphate synthetase.
This enzyme catalyses the first step in the biosynthesis of histidine in bacteria, fungi and plants. It is a member of the larger phosphoribosyltransferase superfamily of enzymes which catalyse the condensation of 5-phospho-alpha-D-ribose 1-diphosphate with nitrogenous bases in the presence of divalent metal ions.
Histidine biosynthesis is an energetically expensive process and ATP phosphoribosyltransferase activity is subject to control at several levels. Transcriptional regulation is based primarily on nutrient conditions and determines the amount of enzyme present in the cell, while feedback inhibition rapidly modulates activity in response to cellular conditions. The enzyme has been shown to be inhibited by 1-(5-phospho-D-ribosyl)-ATP, histidine, ppGpp (a signal associated with adverse environmental conditions) and ADP and AMP (which reflect the overall energy status of the cell). As this pathway of histidine biosynthesis is present only in prokaryotes, plants and fungi, this enzyme is a promising target for the development of novel antimicrobial compounds and herbicides.
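As a purely illustrative aid (not kinetics measured for this enzyme), the generic non-competitive inhibition rate law below, written in LaTeX, shows qualitatively how a rising inhibitor concentration [I] relative to its inhibition constant K_i depresses the reaction rate, which is the behaviour that feedback inhibitors such as histidine exploit.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Generic textbook rate law for non-competitive inhibition (illustrative only;
% the real regulation of ATP phosphoribosyltransferase is more complex).
\[
  v = \frac{V_{\max}\,[S]}{\left(K_m + [S]\right)\left(1 + \frac{[I]}{K_i}\right)}
\]
\end{document}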
ATP phosphoribosyltransferase is found in two distinct forms: a long form containing two catalytic domains and a C-terminal regulatory domain, and a short form in which the regulatory domain is missing. The long form is catalytically competent, but in organisms with the short form, a histidyl-tRNA synthetase paralogue, HisZ, is required for enzyme activity.
The structures of the long form enzymes from "Escherichia coli" and "Mycobacterium tuberculosis" have been determined. Interconversion between the various forms is largely reversible and is influenced by the binding of the natural substrates and inhibitors of the enzyme. The two catalytic domains are linked by a two-stranded beta-sheet and together form a "periplasmic binding protein fold". A crevice between these domains contains the active site. The C-terminal domain is not directly involved in catalysis but appears to be involved in the formation of hexamers, induced by the binding of inhibitors such as histidine to the enzyme, thus regulating activity.
Structural studies.
As of late 2007, 10 structures have been solved for this class of enzymes, with PDB accession codes 1H3D, 1NH7, 1NH8, 1O63, 1O64, 1Q1K, 1USY, 1VE4, 1Z7M, and 1Z7N.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650659
|
14650671
|
Beta-galactoside alpha-2,3-sialyltransferase
|
Enzyme
In enzymology, a beta-galactoside alpha-2,3-sialyltransferase (EC 2.4.99.4) is an enzyme that catalyzes the chemical reaction
CMP-N-acetylneuraminate + beta-D-galactosyl-1,3-N-acetyl-alpha-D-galactosaminyl-R formula_0 CMP + alpha-N-acetylneuraminyl-2,3-beta-D-galactosyl-1,3-N-acetyl-alpha-D-galactosaminyl-R
Thus, the two substrates of this enzyme are CMP-N-acetylneuraminate and beta-D-galactosyl-1,3-N-acetyl-alpha-D-galactosaminyl-R, whereas its two products are CMP and alpha-N-acetylneuraminyl-2,3-beta-D-galactosyl-1,3-N-acetyl-alpha-D-galactosaminyl-R.
This enzyme belongs to the family of transferases, specifically those glycosyltransferases that do not transfer hexosyl or pentosyl groups. The systematic name of this enzyme class is CMP-N-acetylneuraminate:beta-D-galactoside alpha-2,3-N-acetylneuraminyltransferase. This enzyme participates in 7 metabolic pathways: o-glycan biosynthesis, keratan sulfate biosynthesis, glycosphingolipid biosynthesis - lactoseries, glycosphingolipid biosynthesis - globoseries, glycosphingolipid biosynthesis - ganglioseries, glycan structures - biosynthesis 1, and glycan structures - biosynthesis 2.
Structural studies.
As of late 2007, 9 structures have been solved for this class of enzymes, with PDB accession codes 2EX0, 2EX1, 2IHJ, 2IHK, 2IHZ, 2II6, 2IIB, 2IIQ, and 2ILV.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650671
|
14650680
|
Beta-galactoside alpha-2,6-sialyltransferase
|
Enzyme
In enzymology, a beta-galactoside alpha-2,6-sialyltransferase (EC 2.4.99.1) is an enzyme that catalyzes the chemical reaction
CMP-N-acetylneuraminate + beta-D-galactosyl-1,4-N-acetyl-beta-D-glucosamine formula_0 CMP + alpha-N-acetylneuraminyl-2,6-beta-D-galactosyl-1,4-N-acetyl-beta-D-glucosamine
Thus, the two substrates of this enzyme are CMP-N-acetylneuraminate and beta-D-galactosyl-1,4-N-acetyl-beta-D-glucosamine, whereas its two products are CMP and alpha-N-acetylneuraminyl-2,6-beta-D-galactosyl-1,4-N-acetyl-beta-D-glucosamine.
This enzyme belongs to the family of transferases, specifically those glycosyltransferases that do not transfer hexosyl or pentosyl groups. The systematic name of this enzyme class is CMP-N-acetylneuraminate:beta-D-galactosyl-1,4-N-acetyl-beta-D-glucosamine alpha-2,6-N-acetylneuraminyltransferase. This enzyme participates in n-glycan biosynthesis and glycan structures - biosynthesis 1.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650680
|
14650698
|
Bilirubin-glucuronoside glucuronosyltransferase
|
Class of enzymes
In enzymology, a bilirubin-glucuronoside glucuronosyltransferase (EC 2.4.1.95) is an enzyme that catalyzes the chemical reaction
2 bilirubin-glucuronoside formula_0 bilirubin + bilirubin-bisglucuronoside
Hence, this enzyme has one substrate, bilirubin-glucuronoside, and two products, bilirubin and bilirubin-bisglucuronoside.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is bilirubin-glucuronoside:bilirubin-glucuronoside D-glucuronosyltransferase. Other names in common use include bilirubin monoglucuronide transglucuronidase, and bilirubin glucuronoside glucuronosyltransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650698
|
14650719
|
Cellobiose phosphorylase
|
Class of enzymes
In enzymology, a cellobiose phosphorylase (EC 2.4.1.20) is an enzyme that catalyzes the chemical reaction
cellobiose + phosphate formula_0 alpha-D-glucose 1-phosphate + D-glucose
Thus, the two substrates of this enzyme are cellobiose and phosphate, whereas its two products are alpha-D-glucose 1-phosphate and D-glucose.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is cellobiose:phosphate alpha-D-glucosyltransferase. This enzyme participates in starch and sucrose metabolism.
Structural studies.
As of late 2006, two structures have been solved for this class of enzymes, with PDB accession codes 2CQS and 2CQT.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650719
|
14650732
|
Cellodextrin phosphorylase
|
Class of enzymes
In enzymology, a cellodextrin phosphorylase (EC 2.4.1.49) is an enzyme that catalyzes the chemical reaction
(1,4-beta-D-glucosyl)n + phosphate formula_0 (1,4-beta-D-glucosyl)n-1 + alpha-D-glucose 1-phosphate
Thus, the two substrates of this enzyme are (1,4-beta-D-glucosyl)n and phosphate, whereas its two products are (1,4-beta-D-glucosyl)n-1 and alpha-D-glucose 1-phosphate.
This enzyme belongs to GH (glycoside hydrolases) family 94. The systematic name of this enzyme class is 1,4-beta-D-oligo-D-glucan:phosphate alpha-D-glucosyltransferase. This enzyme is also called beta-1,4-oligoglucan:orthophosphate glucosyltransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650732
|
14650748
|
Cellulose synthase (GDP-forming)
|
Class of enzymes
In enzymology, a cellulose synthase (GDP-forming) (EC 2.4.1.29) is an enzyme that catalyzes the chemical reaction
GDP-glucose + (1,4-beta-D-glucosyl)n formula_0 GDP + (1,4-beta-D-glucosyl)n+1
Thus, the two substrates of this enzyme are GDP-glucose and (1,4-beta-D-glucosyl)n, whereas its two products are GDP and (1,4-beta-D-glucosyl)n+1.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is GDP-glucose:1,4-beta-D-glucan 4-beta-D-glucosyltransferase. Other names in common use include cellulose synthase (guanosine diphosphate-forming), cellulose synthetase, guanosine diphosphoglucose-1,4-beta-glucan glucosyltransferase, and guanosine diphosphoglucose-cellulose glucosyltransferase. This enzyme participates in starch and sucrose metabolism.
As of 2019, no proteins with this activity are known in the UniProt/NiceZyme or Gene Ontology databases.
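A minimal Python sketch for checking this kind of claim against UniProt is shown below; the REST endpoint, query syntax, and response header used here are assumptions based on UniProt's public API and may need adjusting to its current documentation.
# Sketch: count UniProtKB entries annotated with EC 2.4.1.29
# (cellulose synthase, GDP-forming). The endpoint, query syntax and the
# X-Total-Results header are assumptions about UniProt's REST API.
import urllib.parse
import urllib.request

def count_uniprot_entries(ec_number: str) -> int:
    """Return the number of UniProtKB entries annotated with the given EC number."""
    params = urllib.parse.urlencode({
        "query": f"ec:{ec_number}",
        "format": "json",
        "size": "1",  # one entry is enough; the total comes from a response header
    })
    url = f"https://rest.uniprot.org/uniprotkb/search?{params}"
    with urllib.request.urlopen(url) as response:
        return int(response.headers.get("X-Total-Results", "0"))

if __name__ == "__main__":
    print("UniProtKB entries for EC 2.4.1.29:", count_uniprot_entries("2.4.1.29"))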
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650748
|
14650773
|
Chitin synthase
|
Class of enzymes
In enzymology, a chitin synthase (EC 2.4.1.16) is an enzyme that catalyzes the chemical reaction
UDP-N-acetyl-D-glucosamine + [1,4-(N-acetyl-beta-D-glucosaminyl)]n formula_0 UDP + [1,4-(N-acetyl-beta-D-glucosaminyl)]n+1
Thus, the two substrates of this enzyme are UDP-N-acetyl-D-glucosamine and [1,4-(N-acetyl-beta-D-glucosaminyl)]n, whereas its two products are UDP and [1,4-(N-acetyl-beta-D-glucosaminyl)]n+1.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-N-acetyl-D-glucosamine:chitin 4-beta-N-acetylglucosaminyl-transferase. Other names in common use include chitin-UDP N-acetylglucosaminyltransferase, chitin-uridine diphosphate acetylglucosaminyltransferase, chitin synthetase, and trans-N-acetylglucosaminosylase. This enzyme participates in aminosugars metabolism.
Production.
Chitin synthase is produced in the rough endoplasmic reticulum of fungi as an inactive zymogen. The zymogen is then packaged into chitosomes in the Golgi apparatus, which carry it to the hyphal tip of a mold or to the yeast cell membrane. There, chitin synthase is inserted into the inner face of the cell membrane and activated.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14650773
|