id (stringlengths 2–8) | title (stringlengths 1–130) | text (stringlengths 0–252k) | formulas (listlengths 1–823) | url (stringlengths 38–44) |
---|---|---|---|---|
6780860 | Kepler–Bouwkamp constant | In plane geometry, the Kepler–Bouwkamp constant (or polygon inscribing constant) is obtained as a limit of the following sequence. Take a circle of radius 1. Inscribe a regular triangle in this circle. Inscribe a circle in this triangle. Inscribe a square in it. Inscribe a circle, regular pentagon, circle, regular hexagon and so forth.
The radius of the limiting circle is called the Kepler–Bouwkamp constant. It is named after Johannes Kepler and Christoffel Bouwkamp, and is the inverse of the polygon circumscribing constant.
Numerical value.
The decimal expansion of the Kepler–Bouwkamp constant is (sequence in the OEIS)
formula_0
The natural logarithm of the Kepler–Bouwkamp constant is given by
formula_1
where formula_2 is the Riemann zeta function.
If the product is taken over the odd primes, the constant
formula_3
is obtained (sequence in the OEIS).
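The defining product converges slowly but is straightforward to evaluate numerically. A minimal sketch in Python (the function name and the number of terms are illustrative choices, not from the article):

```python
import math

def kepler_bouwkamp(n_terms: int = 100_000) -> float:
    """Partial product of cos(pi/k) for k = 3 .. n_terms + 2."""
    product = 1.0
    for k in range(3, n_terms + 3):
        product *= math.cos(math.pi / k)
    return product

# Since cos(pi/k) ~ 1 - pi^2 / (2 k^2), the truncation error shrinks roughly
# like 1/n_terms, so 100,000 terms already reproduce the leading digits
# 0.1149... quoted above.
print(kepler_bouwkamp())
```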
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\prod_{k=3}^\\infty \\cos\\left(\\frac\\pi k\\right) = 0.1149420448\\dots. "
},
{
"math_id": 1,
"text": "-2\\sum_{k=1}^\\infty\\frac{2^{2k}-1}{2k}\\zeta(2k)\\left(\\zeta(2k)-1-\\frac{1}{2^{2k}}\\right)"
},
{
"math_id": 2,
"text": "\\zeta(s) = \\sum_{n=1}^{\\infty} \\frac{1}{n^s}"
},
{
"math_id": 3,
"text": "\\prod_{k=3,5,7,11,13,17,\\ldots} \\cos\\left(\\frac\\pi k\\right) = \n0.312832\\ldots"
}
]
| https://en.wikipedia.org/wiki?curid=6780860 |
678138 | Cabtaxi number | Smallest positive integer written as the sum of two integer cubes in n ways
In number theory, the n-th cabtaxi number, typically denoted Cabtaxi("n"), is defined as the smallest positive integer that can be written as the sum of two "positive or negative or 0" cubes in n ways. Such numbers exist for all n, which follows from the analogous result for taxicab numbers.
Known cabtaxi numbers.
Only 10 cabtaxi numbers are known (sequence in the OEIS):
formula_0
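The first few values listed above can be recovered by a brute-force search. A sketch in Python (the function name and the search bound are illustrative; the bound must be large enough to capture every representation of the answer, so this is for demonstration rather than discovery):

```python
from collections import defaultdict

def cabtaxi(n_ways, limit=20):
    """Smallest positive integer with at least n_ways representations as
    a^3 + b^3, where a and b may be negative or zero and |a|, |b| <= limit."""
    reps = defaultdict(set)
    for a in range(-limit, limit + 1):
        for b in range(a, limit + 1):   # a <= b avoids counting pairs twice
            s = a ** 3 + b ** 3
            if s > 0:
                reps[s].add((a, b))
    hits = [s for s, r in reps.items() if len(r) >= n_ways]
    return min(hits) if hits else None

print(cabtaxi(2))  # 91  = 3^3 + 4^3 = 6^3 - 5^3
print(cabtaxi(3))  # 728 = 6^3 + 8^3 = 9^3 - 1^3 = 12^3 - 10^3
```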
History.
Cabtaxi(2) was known to François Viète and Pietro Bongo in the late 16th century in the equivalent form formula_1. The existence of Cabtaxi(3) was known to Leonhard Euler, but its actual solution was not found until later, by Edward B. Escott in 1902.
Cabtaxi(4) through Cabtaxi(7) were found by Randall L. Rathbun in 1992; Cabtaxi(8) was found by Daniel J. Bernstein in 1998. Cabtaxi(9) was found by Duncan Moore in 2005, using Bernstein's method. Cabtaxi(10) was first reported as an upper bound by Christian Boyer in 2006 and verified as Cabtaxi(10) by Uwe Hollerbach and reported on the NMBRTHRY mailing list on May 16, 2008.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n \\mathrm{Cabtaxi}(1) =& \\ 1 \\\\\n &= 1^3 + 0^3 \\\\[6pt]\n \n \\mathrm{Cabtaxi}(2) =& \\ 91 \\\\\n &= 3^3 + 4^3 \\\\\n &= 6^3 - 5^3 \\\\[6pt]\n \n \\mathrm{Cabtaxi}(3) =& \\ 728 \\\\\n &= 6^3 + 8^3 \\\\\n &= 9^3 - 1^3 \\\\\n &= 12^3 - 10^3 \\\\[6pt]\n \n \\mathrm{Cabtaxi}(4) =& \\ 2741256 \\\\\n &= 108^3 + 114^3 \\\\\n &= 140^3 - 14^3 \\\\\n &= 168^3 - 126^3 \\\\\n &= 207^3 - 183^3 \\\\[6pt]\n \n \\mathrm{Cabtaxi}(5) =& \\ 6017193 \\\\\n &= 166^3 + 113^3 \\\\\n &= 180^3 + 57^3 \\\\\n &= 185^3 - 68^3 \\\\\n &= 209^3 - 146^3 \\\\\n &= 246^3 - 207^3 \\\\[6pt]\n \n \\mathrm{Cabtaxi}(6) =& \\ 1412774811 \\\\\n &= 963^3 + 804^3 \\\\\n &= 1134^3 - 357^3 \\\\\n &= 1155^3 - 504^3 \\\\\n &= 1246^3 - 805^3 \\\\\n &= 2115^3 - 2004^3 \\\\\n &= 4746^3 - 4725^3 \\\\[6pt]\n \n \\mathrm{Cabtaxi}(7) =& \\ 11302198488 \\\\\n &= 1926^3 + 1608^3 \\\\\n &= 1939^3 + 1589^3 \\\\\n &= 2268^3 - 714^3 \\\\\n &= 2310^3 - 1008^3 \\\\\n &= 2492^3 - 1610^3 \\\\\n &= 4230^3 - 4008^3 \\\\\n &= 9492^3 - 9450^3 \\\\[6pt]\n \n \\mathrm{Cabtaxi}(8) =& \\ 137513849003496 \\\\\n &= 22944^3 + 50058^3 \\\\\n &= 36547^3 + 44597^3 \\\\\n &= 36984^3 + 44298^3 \\\\\n &= 52164^3 - 16422^3 \\\\\n &= 53130^3 - 23184^3 \\\\\n &= 57316^3 - 37030^3 \\\\\n &= 97290^3 - 92184^3 \\\\ \n &= 218316^3 - 217350^3 \\\\[6pt]\n \n \\mathrm{Cabtaxi}(9) =& \\ 424910390480793000 \\\\\n &= 645210^3 + 538680^3 \\\\\n &= 649565^3 + 532315^3 \\\\\n &= 752409^3 - 101409^3 \\\\\n &= 759780^3 - 239190^3 \\\\\n &= 773850^3 - 337680^3 \\\\\n &= 834820^3 - 539350^3 \\\\\n &= 1417050^3 - 1342680^3 \\\\\n &= 3179820^3 - 3165750^3 \\\\\n &= 5960010^3 - 5956020^3 \\\\[6pt]\n \n \\mathrm{Cabtaxi}(10) =& \\ 933528127886302221000 \\\\\n &= 8387730^3 + 7002840^3 \\\\\n &= 8444345^3 + 6920095^3 \\\\\n &= 9773330^3 - 84560^3 \\\\\n &= 9781317^3 - 1318317^3 \\\\\n &= 9877140^3 - 3109470^3 \\\\\n &= 10060050^3 - 4389840^3 \\\\\n &= 10852660^3 - 7011550^3 \\\\\n &= 18421650^3 - 17454840^3 \\\\\n &= 41337660^3 - 41154750^3 \\\\\n &= 77480130^3 - 77428260^3\n\\end{align}"
},
{
"math_id": 1,
"text": "3^3+4^3+5^3=6^3"
}
]
| https://en.wikipedia.org/wiki?curid=678138 |
67816593 | Yamaha OPL | Sound chip series by Yamaha
The OPL (FM Operator Type-L) series are a family of sound chips developed by Yamaha. The OPL series are low-cost sound chips providing FM synthesis for use in computing, music and video game applications.
The OPL series of chips enabled the creation of affordable sound cards in IBM PC compatibles like the AdLib and Sound Blaster, becoming a de-facto standard until they were supplanted by "wavetable synthesis" cards in the early-to-mid 1990s.
Internal operation.
The internal operation of the chips is completely digital. Each FM-tone is generated by a digital oscillator using a form of direct digital synthesis. A low-frequency oscillator and an envelope generator drive an FM operator to produce floating-point output for the DAC. Decapsulation of the chips shows two look-up tables, one for calculating exponents and one for log-sine. This allows the FM operator to calculate its output without any multipliers, using the formula formula_0 and two 256-entry look-up tables. Both tables are stored as pairs of values rounded to the nearest whole number, with the second value represented as the difference between it and the first value.
A quarter of the log-transformed sine waveform is stored as a sampled approximation in a 256-word read-only memory (ROM) table, computed by formula_1 for values of 0 to 255. The rest of the sine-waveform is extrapolated via its property of symmetry. Scaling the output of an oscillator to a wanted volume would normally be done by multiplication, but the YM3526 avoids multiplications by operating on log-transformed signals, which reduces multiplications into computationally cheaper additions.
Another 256-word ROM stores the exponential function as a lookup table, used to convert the logarithmic scale signal back to linear scale when required, as the final stage where the oscillator-outputs are summed together (just prior to the DAC-output bus), with the modulator waveform always delayed by one sample before the carrier waveform. This table is computed by formula_2 for values of 0 to 255. To compute the exponent, 1024 is added to the value at the index given by the least significant byte of input; this becomes the significand and the remaining bits of input become the exponent of the floating point output.
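Both ROM tables can be reproduced directly from the formulas above. The sketch below (Python, not taken from Yamaha documentation; it assumes the conventional reading of the exponent formula, with x/256 in the exponent) generates them:

```python
import math

# Quarter log-sine table: 256 entries, x = 0 .. 255
logsin_rom = [round(256 * -math.log2(math.sin((x + 0.5) * math.pi / 512)))
              for x in range(256)]

# Exponential table: 256 entries, used to undo the log transform
exp_rom = [round((2 ** (x / 256) - 1) * 1024) for x in range(256)]

# As described above, converting a log-scale sample back to linear adds 1024
# (the implicit leading bit) to the table entry indexed by the low byte of the
# input; the remaining input bits act as the floating-point exponent.
print(logsin_rom[0], logsin_rom[255])  # largest near x = 0, 0 near x = 255
print(exp_rom[0], exp_rom[255])        # table values range from 0 to 1018
```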
Chips in the series.
OPL.
The YM3526, introduced in 1984, was the first in the OPL family, providing a nine channel, two operator synthesizer. A very closely related chip is the Y8950, or "MSX-AUDIO", which was used as an MSX expansion. It is essentially a YM3526 with ADPCM recording and playback capability.
The circuit has 244 different write-only registers. It can produce 9 channels of sound, each made of two oscillators, or 6 channels with 5 percussion instruments available. Each oscillator can produce sine waves and has its own ADSR envelope generator. Its main method of synthesis is frequency modulation synthesis, accomplished by modulating the phase of one channel's oscillators with the output of another.
The YM3526's output, a sequence of floating point numbers clocked at a sampling frequency of approximately 49716 Hz, is sent to a separate digital-to-analog converter (DAC) chip, the YM3014B.
Overview of a channel's registers:
For the whole channel:
For each one of the two oscillators:
There are also a few parameters that can be set for the whole chip:
OPL2.
In 1985, Yamaha created the YM3812, also known as the OPL2. It is backwards compatible with the YM3526. Another related chip is the YM2413 (OPLL), which is a cut down version.
Among its newly added features is the ability to pick between four waveforms for each individual oscillator by setting a register. In addition to the original sine wave, three modified waveforms can be produced: half-sine waves (where the negative part of the sine is muted), absolute-sine waves (where the negative part is inverted), and pseudo-sawtooth waves (quarter sine waves upward only with silent sections in between). This odd way of producing waveforms gives the YM3812 a characteristic sound.
Limited to two-operator FM synthesis, the chip is unable to accurately reproduce timbres of real instruments and percussive sounds. Melody polyphony is limited to nine voices in melodic mode and six voices in percussive mode.
Having little competition on the market at the time of the introduction of the AdLib and Sound Blaster, the chip became the de-facto standard for "Sound Blaster compatible" sound cards.
The YM3812 is used with the YM3014B external DAC chip to output its audio in analog form, like with the YM3526.
OPL3.
An upgraded version of the OPL2, the YMF262 (a.k.a. OPL3), was released in 1990. It improved upon the feature-set of the YM3812, using four-operator FM synthesis, which produces harmonically richer sound similar to contemporary consumer synthesizer keyboards such as Yamaha DX100.
The following features were added:
The YMF262 also removed support for the little-used CSM (Composite sine mode) mode, featured on the YM3812 and YM3526.
The YMF262's FM synthesis mode can be configured in different ways:
Like its predecessors, the OPL3 outputs audio in digital-I/O form, requiring an external DAC chip such as the YAC512.
The YMF262 was used in the revised Sound Blaster Pro, Sound Blaster 16, AdLib Gold, Media Vision’s Pro AudioSpectrum cards, and Microsoft’s Windows Sound System cards. Competing sound chip vendors (such as ESS, OPTi, Crystal and others) designed their own OPL3-compatible audio chips, with varying degrees of faithfulness to the original OPL3.
Yamaha YMF289.
Yamaha also produced a fully compatible, low-power variant of the YMF262 in 1995 called the YMF289 (OPL3-L), which targeted PCMCIA sound cards and laptop computers. It was used in some Sound Blaster 16 sound cards made by Creative Technology. The YMF289B is paired with a YAC513 or YAC516 companion floating-point DAC chip.
The YMF289 is fully register-compatible with and retains the feature-set of the YMF262, with a number of differences:
ESS ESFM.
ESS Technology's in-house developed derivative, termed "ESFM", is an enhanced 72-operator OPL3-compatible clone incorporating two operating modes, a Native mode and a Legacy mode, which controls its feature-set and behavior. In Native mode, ESFM allows 18 4-operator FM voices to be mapped, each with per-operator frequency control and LFO depth, potentially allowing for a significant increase in the complexity of tones generated. The drivers for Windows 9x incorporate their own custom instrument patches which make use of this extended mode. Conversely, Legacy mode provides full backward-compatibility with Yamaha's YMF262. ESFM's output in this mode is moderately faithful to the YMF262 overall, but some tones are rendered quite differently, resulting in unique distortions in the sound and music of some games.
ESFM is available in ESS sound chips starting with the ISA-based ES1688 AudioDrive, up to the PCI-based ES1946 Solo-1E, whereas earlier chips required an external FM synthesizer chip (typically a Yamaha YMF262). ESS's Maestro series of PCI-based sound chips relies on a software implementation of FM synthesis that lacks ESFM's special features.
OPL3-SA, DS-XG, OPL4.
Yamaha's later PC audio controllers, including the YMF278 (OPL4), the single-chip Yamaha YMF718/719S, and the PCI YMF724/74x family, included the YMF262's FM synthesis block for backward compatibility with legacy software. See YMF7xx for more information.
Products using the OPL series.
The YM3526 was notably used in a Commodore 64 expansion, the "Sound Expander", as well as several arcade games, such as "Terra Cresta" and "Bubble Bobble". A modified version of the YM3526 with ADPCM audio known as the Y8950 (MSX-AUDIO) was used in the MSX computer as an optional expansion.
The YM3812 saw wide use in IBM PC-based sound cards such as the AdLib, Sound Blaster and Pro AudioSpectrum (8bit), as well as several arcade games by Nichibutsu, Toaplan and others.
The YM2413 was used in the FM Sound Unit expansion for the Sega Mark III and the Japanese model Sega Master System, as well as the MSX-MUSIC standard, which was released both as separate enhancement cards (such as the Panasonic FM-PAC) and built-in into several MSX2+ and the MSX TurboR computers.
The YMF262 was used in many IBM PC-based sound cards, firstly with the popular Sound Blaster Pro 2 in 1991 and then later with the Sound Blaster 16 ASP in 1992, as well as the Pro AudioSpectrum (16-bit). Later models of the Sound Blaster 16 and Sound Blaster AWE series integrated the OPL3 with other chips, with Creative Labs using its own OPL3 clone, the CQM, integrated into other chips from late 1995. It is also used in several arcade games by Tecmo and others.
The YMF278 was used in the Moonsound card for the MSX, as well as the SoundEdge card by Yamaha for IBM PC compatibles.
Synthesizers.
Synthesizers that use the YM3812:
Synthesizers that use the YM2413 (cost reduced YM3812):
Variants and derivatives.
An open-source RTL implementation of the OPL3 was written in SystemVerilog and adapted to an FPGA in 2015.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\exp [\\log \\sin[\\varphi_2 + \\exp [\\log \\sin [\\varphi_1] + A_1]] + A_2]"
},
{
"math_id": 1,
"text": "256\\times -\\log_2 \\left(\\sin\\left(\\frac{(x+0.5)\\times\\pi}{512}\\right)\\right)"
},
{
"math_id": 2,
"text": "\\left(\\frac{2^x}{256}-1\\right)\\times 1024"
}
]
| https://en.wikipedia.org/wiki?curid=67816593 |
678194 | Generalized taxicab number | Smallest number expressible as the sum of j numbers to the kth power in n ways
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
Does there exist any number that can be expressed as a sum of two positive fifth powers in at least two different ways, i.e., formula_0?
In number theory, the generalized taxicab number Taxicab("k", "j", "n") is the smallest number — if it exists — that can be expressed as the sum of j numbers to the kth positive power in n different ways. For "k" = 3 and "j" = 2, they coincide with taxicab numbers.
formula_1
The latter example is 1729, as first noted by Ramanujan.
Euler showed that
formula_2
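The small cases above can be checked with a brute-force search. A sketch in Python (the function name and search bound are illustrative; the bound must be large enough to cover every representation of the true minimum):

```python
from collections import defaultdict
from itertools import combinations_with_replacement

def generalized_taxicab(k, j, n, limit=15):
    """Smallest sum of j positive k-th powers (bases 1..limit) that occurs
    in at least n different ways."""
    reps = defaultdict(set)
    for bases in combinations_with_replacement(range(1, limit + 1), j):
        reps[sum(b ** k for b in bases)].add(bases)
    hits = [s for s, r in reps.items() if len(r) >= n]
    return min(hits) if hits else None

print(generalized_taxicab(1, 2, 2))  # 4    = 1 + 3      = 2 + 2
print(generalized_taxicab(2, 2, 2))  # 50   = 1^2 + 7^2  = 5^2 + 5^2
print(generalized_taxicab(3, 2, 2))  # 1729 = 1^3 + 12^3 = 9^3 + 10^3
```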
However, Taxicab(5, 2, "n") is not known for any "n" ≥ 2:<br>No positive integer is known that can be written as the sum of two 5th powers in more than one way, and it is not known whether such a number exists. | [
{
"math_id": 0,
"text": "a^5+b^5=c^5+d^5"
},
{
"math_id": 1,
"text": "\\begin{align}\n\\mathrm{Taxicab}(1, 2, 2) &= 4 = 1 + 3 = 2 + 2 \\\\\n\\mathrm{Taxicab}(2, 2, 2) &= 50 = 1^2 + 7^2 = 5^2 + 5^2 \\\\\n\\mathrm{Taxicab}(3, 2, 2) &= 1729 = 1^3 + 12^3 = 9^3 + 10^3\n\\end{align}"
},
{
"math_id": 2,
"text": "\\mathrm{Taxicab}(4, 2, 2) = 635318657 = 59^4 + 158^4 = 133^4 + 134^4."
}
]
| https://en.wikipedia.org/wiki?curid=678194 |
6782658 | Termination analysis | In computer science, termination analysis is program analysis which attempts to determine whether the evaluation of a given program halts for "each" input. This means to determine whether the input program computes a "total" function.
It is closely related to the halting problem, which is to determine whether a given program halts for a "given" input and which is undecidable. The termination analysis is even more difficult than the Halting problem: the termination analysis in the model of Turing machines as the model of programs implementing computable functions would have the goal of deciding whether a given Turing machine is a total Turing machine, and this problem is at level formula_0 of the arithmetical hierarchy and thus is strictly more difficult than the Halting problem.
Now as the question whether a computable function is total is not semi-decidable, each "sound" termination analyzer (i.e. an affirmative answer is never given for a non-terminating program) is "incomplete", i.e. must fail in determining termination for infinitely many terminating programs, either by running forever or halting with an indefinite answer.
Termination proof.
A "termination proof" is a type of mathematical proof that plays a critical role in formal verification because total correctness of an algorithm depends on termination.
A simple, general method for constructing termination proofs involves associating a measure with each step of an algorithm. The measure is taken from the domain of a well-founded relation, such as from the ordinal numbers. If the measure "decreases" according to the relation along every possible step of the algorithm, it must terminate, because there are no infinite descending chains with respect to a well-founded relation.
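For illustration (a sketch, not part of the original article), Euclid's algorithm admits such a measure: the second argument is a natural number that strictly decreases on every recursive call, and the natural numbers are well-founded, so the algorithm terminates.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm for non-negative integers.
    Termination measure: the natural number b. Each recursive call passes
    (b, a % b), and 0 <= a % b < b, so the measure strictly decreases;
    since there is no infinite strictly decreasing chain of natural numbers,
    the recursion terminates."""
    if b == 0:
        return a
    return gcd(b, a % b)

print(gcd(1071, 462))  # 21
```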
Some types of termination analysis can automatically generate or imply the existence of a termination proof.
Example.
An example of a programming language construct that may or may not terminate is a loop, since loops can be run repeatedly. Loops implemented using a counter variable, as typically found in data processing algorithms, will usually terminate, as demonstrated by the pseudocode example below:
i := 0
loop until i = SIZE_OF_DATA
process_data(data[i]) // process the data chunk at position i
i := i + 1 // move to the next chunk of data to be processed
If the value of "SIZE_OF_DATA" is non-negative, fixed and finite, the loop will eventually terminate, assuming "process_data" terminates too.
Some loops can be shown to always terminate or never terminate through human inspection. For example, the following loop will, in theory, never stop. However, it may halt when executed on a physical machine due to arithmetic overflow: either leading to an exception or causing the counter to wrap to a negative value and enabling the loop condition to be fulfilled.
i := 1
loop until i = 0
i := i + 1
In termination analysis one may also try to determine the termination behaviour of some program depending on some unknown input. The following example illustrates this problem.
i := 1
loop until i = UNKNOWN
i := i + 1
Here the loop condition is defined using some value UNKNOWN, where the value of UNKNOWN is not known (e.g. defined by the user's input when the program is executed). Here the termination analysis must take into account all possible values of UNKNOWN and find out that in the possible case of UNKNOWN = 0 (as in the original example) the termination cannot be shown.
There is, however, no general procedure for determining whether an expression involving looping instructions will halt, even when humans are tasked with the inspection. The theoretical reason for this is the undecidability of the Halting Problem: there cannot exist some algorithm which determines whether any given program stops after finitely many computation steps.
In practice one fails to show termination (or non-termination) because every algorithm works with a finite set of methods being able to extract relevant information out of a given program. A method might look at how variables change with respect to some loop condition (possibly showing termination for that loop), other methods might try to transform the program's calculation to some mathematical construct and work on that, possibly getting information about the termination behaviour out of some properties of this mathematical model. But because each method is only able to "see" some specific reasons for (non)termination, even through combination of such methods one cannot cover all possible reasons for (non)termination.
Recursive functions and loops are equivalent in expression; any expression involving loops can be written using recursion, and vice versa. Thus the termination of recursive expressions is also undecidable in general. Most recursive expressions found in common usage (i.e. not pathological) can be shown to terminate through various means, usually depending on the definition of the expression itself. As an example, the function argument in the recursive expression for the factorial function below will always decrease by 1; by the well-ordering property of natural numbers, the argument will eventually reach 1 and the recursion will terminate.
function factorial (argument as natural number)
if argument = 0 or argument = 1
return 1
otherwise
return argument * factorial(argument - 1)
Dependent types.
Termination checking is very important in dependently typed programming languages and theorem proving systems like Coq and Agda. These systems use the Curry–Howard isomorphism between programs and proofs. Proofs over inductively defined data types were traditionally described using induction principles. However, it was found later that describing a program via a recursively defined function with pattern matching is a more natural way of proving than using induction principles directly. Unfortunately, allowing non-terminating definitions leads to logical inconsistency in type theories, which is why Agda and Coq have termination checkers built in.
Sized types.
One of the approaches to termination checking in dependently typed programming languages are sized types. The main idea is to annotate the types over which we can recurse with size annotations and allow recursive calls only on smaller arguments. Sized types are implemented in Agda as a syntactic extension.
Current research.
There are several research teams that work on new methods that can show (non)termination. Many researchers include these methods into programs that try to analyze the termination behavior automatically (so without human interaction). An ongoing aspect of research is to allow the existing methods to be used to analyze termination behavior of programs written in "real world" programming languages. For declarative languages like Haskell, Mercury and Prolog, many results exist (mainly because of the strong mathematical background of these languages). The research community also works on new methods to analyze termination behavior of programs written in imperative languages like C and Java.
References.
<templatestyles src="Reflist/styles.css" />
Research papers on automated program termination analysis include:
System descriptions of automated termination analysis tools include: | [
{
"math_id": 0,
"text": "\\Pi^0_2"
}
]
| https://en.wikipedia.org/wiki?curid=6782658 |
67826857 | 1 Kings 10 | 1 Kings, chapter 10
1 Kings 10 is the tenth chapter of the Books of Kings in the Hebrew Bible or the First Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. This chapter belongs to the section focusing on the reign of Solomon over the unified kingdom of Judah and Israel (1 Kings 1 to 11). The focus of this chapter is Solomon's achievements.
Text.
This chapter was originally written in the Hebrew language and since the 16th century is divided into 29 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
The visit of the Queen of Sheba (10:1–13).
This story essentially displays Solomon's wisdom by showing a noble and wise ruler deeply impressed by him ('there was no more spirit in her' or "breathless", verse 5), with 'great spiritual and even political after-effects all the way to Ethiopia'. The keyword of this passage is "hear", used twice in verse 1 (literally, "...the queen of Sheba heard the hearing of Solomon...") and later (verses 6, 7, 8, 24) of how the world "hears" of Solomon, a king with a "hearing heart". The beautiful order of Solomon's table is described in a chiastic structure, framed by "houses" of Solomon and YHWH (verses 4–5; cf. 1 Kings 6–7):
A the house that he built (that is, Solomon's palace)
B food of his table
C seating of his servants
C' standing of his attendants and attire
B' cupbearers
A' ascent to the house of Yahweh
"And she gave the king an hundred and twenty talents of gold, and of spices very great store, and precious stones: there came no more such abundance of spices as these which the queen of Sheba gave to king Solomon."
Solomon's wealth (10:14–29).
The description of Solomon's wisdom and wealth in this passage centers on the glory of his throne (verse 18), greater than any of the Gentiles (verse 20), sitting on the seventh level above six steps (verse 19), and thus depicting Solomon seated in a 'sabbatical' position. The structure of these verses is:
A a great throne made of ivory ("tooth"), overlaid with pure gold ("refined")
B six ascending steps to the throne
C the top was round at the back
D armrests on either side of
E the place of "resting" ("shebeth", meaning "seat", "dwelling", "place")
D' a pair of lions, each on the side of the armrests
C' twelve lions standing.
B' on the six steps, one at either end of each step
A' nothing like that had ever been made for any kingdom.
Everything around Solomon was literally layered in gold, such that silver 'was not considered as anything in the days of Solomon' (verse 21), against the warning in Deuteronomy 17:17 about not hoarding too much silver and gold. Solomon also profited from being an 'agent for the export of arms from Egypt to Syria and Asia Minor' (cf. Deuteronomy 17:16).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67826857 |
6782835 | Jaro–Winkler distance | String distance measure
In computer science and statistics, the Jaro–Winkler similarity is a string metric measuring an edit distance between two sequences. It is a variant of the Jaro distance metric (1989, Matthew A. Jaro) proposed in 1990 by William E. Winkler.
The Jaro–Winkler distance uses a prefix scale formula_0 which gives more favourable ratings to strings that match from the beginning for a set prefix length formula_1.
The higher the Jaro–Winkler distance for two strings is, the less similar the strings are. The score is normalized such that 0 means an exact match and 1 means there is no similarity. The original paper actually defined the metric in terms of similarity, so the distance is defined as the inversion of that value (distance = 1 − similarity).
Although often referred to as a "distance metric", the Jaro–Winkler distance is not a metric in the mathematical sense of that term because it does not obey the triangle inequality.
Definition.
Jaro similarity.
The Jaro similarity formula_2 of two given strings formula_3 and formula_4 is
formula_5
Where:
The Jaro similarity score is 0 if the strings do not match at all, and 1 if they are an exact match. In the first step, each character of formula_3 is compared with all its matching characters in formula_4. Two characters from formula_3 and formula_4, respectively, are considered "matching" only if they are the same and not farther than formula_10 characters apart. For example, the following two nine-character-long strings, FAREMVIEL and FARMVILLE, have 8 matching characters. 'F', 'A' and 'R' are in the same position in both strings. Also, 'M', 'V', 'I', 'E' and 'L' are no more than three (the result of formula_11) characters apart. If no matching characters are found then the strings are not similar and the algorithm terminates by returning a Jaro similarity score of 0.
If a non-zero number of matching characters is found, the next step is to find the number of transpositions. The number of transpositions is the number of matching characters that are not in the right order, divided by two. In the above example between FAREMVIEL and FARMVILLE, 'E' and 'L' are the matching characters that are not in the right order. So the number of transpositions is one.
Finally, plugging in the number of matching characters formula_8 and the number of transpositions formula_9, the Jaro similarity of FAREMVIEL and FARMVILLE can be calculated:
formula_12
Jaro–Winkler similarity.
Jaro–Winkler similarity uses a prefix scale formula_0 which gives more favorable ratings to strings that match from the beginning for a set prefix length formula_1. Given two strings formula_3 and formula_4, their Jaro–Winkler similarity formula_13 is:
formula_14
where:
The Jaro–Winkler distance formula_16 is defined as formula_17.
Although often referred to as a "distance metric", the Jaro–Winkler distance is not a metric in the mathematical sense of that term because it does not obey the triangle inequality. The Jaro–Winkler distance also does not satisfy the identity axiom formula_18.
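A straightforward implementation of the definitions above (a sketch in Python; the prefix-length cap of 4 and p = 0.1 follow the common convention and are assumptions here rather than values taken from this excerpt):

```python
def jaro(s1: str, s2: str) -> float:
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    window = max(len1, len2) // 2 - 1          # matching window
    m1, m2 = [False] * len1, [False] * len2
    m = 0                                      # number of matching characters
    for i, ch in enumerate(s1):
        lo, hi = max(0, i - window), min(i + window + 1, len2)
        for j in range(lo, hi):
            if not m2[j] and s2[j] == ch:
                m1[i] = m2[j] = True
                m += 1
                break
    if m == 0:
        return 0.0
    t, k = 0, 0                                # transpositions (counted, then halved)
    for i in range(len1):
        if m1[i]:
            while not m2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (m / len1 + m / len2 + (m - t) / m) / 3

def jaro_winkler(s1: str, s2: str, p: float = 0.1, max_prefix: int = 4) -> float:
    sim_j = jaro(s1, s2)
    ell = 0                                    # common prefix length, capped
    for a, b in zip(s1, s2):
        if a != b or ell == max_prefix:
            break
        ell += 1
    return sim_j + ell * p * (1 - sim_j)

print(round(jaro("FAREMVIEL", "FARMVILLE"), 2))          # 0.88, as in the example above
print(round(jaro_winkler("FAREMVIEL", "FARMVILLE"), 2))  # about 0.92 with the prefix boost
```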
Relationship with other edit distance metrics.
There are other popular measures of edit distance, which are calculated using a different set of allowable edit operations. For instance,
Edit distance is usually defined as a parameterizable metric calculated with a specific set of allowed edit operations, and each operation is assigned a cost (possibly infinite). This is further generalized by DNA sequence alignment algorithms such as the Smith–Waterman algorithm, which make an operation's cost depend on where it is applied.
Footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "\\ell"
},
{
"math_id": 2,
"text": "sim_j"
},
{
"math_id": 3,
"text": "s_1"
},
{
"math_id": 4,
"text": "s_2"
},
{
"math_id": 5,
"text": "sim_j = \\left\\{\n\\begin{array}{l l}\n 0 & \\text{if }m = 0\\\\\n \\frac{1}{3}\\left(\\frac{m}{|s_1|} + \\frac{m}{|s_2|} + \\frac{m-t}{m}\\right) & \\text{otherwise} \\end{array} \\right."
},
{
"math_id": 6,
"text": "|s_i|"
},
{
"math_id": 7,
"text": "s_i"
},
{
"math_id": 8,
"text": "m"
},
{
"math_id": 9,
"text": "t"
},
{
"math_id": 10,
"text": "\\left\\lfloor\\frac{\\max(|s_1|,|s_2|)}{2}\\right\\rfloor-1"
},
{
"math_id": 11,
"text": "\\lfloor\\tfrac{\\max(9, 9)}{2}\\rfloor - 1"
},
{
"math_id": 12,
"text": "\\frac{1}{3}\\left(\\frac{8}{9} + \\frac{8}{9} + \\frac{8-1}{8} \\right) = 0.88"
},
{
"math_id": 13,
"text": "sim_w"
},
{
"math_id": 14,
"text": "sim_w = sim_j + \\ell p (1 - sim_j),"
},
{
"math_id": 15,
"text": "p = 0.1"
},
{
"math_id": 16,
"text": "d_w"
},
{
"math_id": 17,
"text": "d_w = 1 - sim_w"
},
{
"math_id": 18,
"text": " d(x,y)=0 \\leftrightarrow x = y"
}
]
| https://en.wikipedia.org/wiki?curid=6782835 |
67832663 | Satish B. Rao | American computer scientist and educator
Satish B. Rao is an American computer scientist who is a professor of computer science at the University of California, Berkeley.
Biography.
Satish Rao received his PhD from the Massachusetts Institute of Technology in 1989 and joined the faculty at the University of California, Berkeley in 1999.
Research and awards.
Rao's research focuses on computational biology, graph partitioning, and single- and multi-commodity flows (maximum flow problem).
Rao is an ACM Fellow (2013) and won the Fulkerson Prize with Sanjeev Arora and Umesh Vazirani in 2012 for their work on improving the approximation ratio for graph separators and related problems from formula_0 to formula_1. Rao teaches discrete mathematics and probability theory at the University of California, Berkeley.
Publications.
Satish Rao has more than 100 publications and is cited frequently.
{
"math_id": 0,
"text": "O(\\log n)"
},
{
"math_id": 1,
"text": "O(\\sqrt{\\log n})"
}
]
| https://en.wikipedia.org/wiki?curid=67832663 |
67834517 | 1 Kings 11 | 1 Kings, chapter 11
1 Kings 11 is the eleventh chapter of the Books of Kings in the Hebrew Bible or the First Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. This chapter belongs to the section focusing on the reign of Solomon over the unified kingdom of Judah and Israel (1 Kings 1 to 11). The focus of this chapter is Solomon's decline and death.
Text.
This chapter was originally written in the Hebrew language and since the 16th century is divided into 43 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Solomon's wives and their Idolatry (11:1–8).
Solomon marrying many wives might not be considered unethical at that time, especially for diplomatic reasons, but it should be intolerable in light of the Torah (cf. Deuteronomy 17:17). The passage focuses on religious rather than moral arguments concerning the foreign wives, in a tone similar to post-exilic texts (Ezra 10; Nehemiah 10), viewing them as a temptation threatening loyalty to the God of Israel. Solomon gave his wives something similar to minority rights and religious freedom in modern terms, but he went so far that he committed a grave sin against Yahweh, leading to dire consequences.
A Divine Manifestation (11:9–13).
Because Solomon had "turned away from the Lord", thereby breaking the first commandment, he faced the consequence of losing power, but in recognition of David's merits the punishment was delayed and his successor would be left with a smaller kingdom.
"However, I will not tear away all the kingdom, but I will give one tribe to your son, for the sake of David my servant and for the sake of Jerusalem that I have chosen."
The adversaries of Solomon (11:14–40).
Solomon's disloyalty to God resulted in the emergence of an 'adversary' (Hebrew: "satan") to his reign, in the form of three different persons: Hadad, an Edomite prince (verses 14–22), Rezon the son of Eliada of Damascus (verses 23–25), and Jeroboam ben Nebat (verses 26–40). The passage clearly states that God was the initiator of these adversaries (verses 14, 23, also 29–33). The brief biography of each adversary presented in the passage has similarities with the earlier history of Israel.
The life of Hadad, the Edomite prince, echoes the history of the migration of Jacob's family to Egypt and the Exodus:
Hadad stated his desire to return to Edom using 'exodus language': "send me out" (based on the same Hebrew verb: "shalakh").
The biography of Rezon the son of Eliada of Damascus (11:23–25) also has a parallel with the history of David, the king of Israel.
Jeroboam ben Nebat, Solomon's third adversary, arose from within northern Israel, tellingly from among the forced laborers in Ephraim. The parallels of his biography with the life of David are as follows:
Ahijah of Shiloh is shown as Jeroboam's supporter in this passage, but he will be Jeroboam's enemy in 1 Kings 14:1-18.
Death of Solomon (11:41–43).
This is the first use of the regular concluding formula in the books of Kings. The Chronicler mentioned 'the Book of the Acts of Solomon' as a source of information, presumably in the form of royal annals.
"And the time that Solomon reigned in Jerusalem over all Israel was forty years."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67834517 |
678365 | Enthalpy change of solution | Change in enthalpy from dissolving a substance
In thermochemistry, the enthalpy of solution (heat of solution or enthalpy of solvation) is the enthalpy change associated with the dissolution of a substance in a solvent at constant pressure resulting in infinite dilution.
The enthalpy of solution is most often expressed in kJ/mol at constant temperature. The energy change can be regarded as being made up of three parts: the endothermic breaking of bonds within the solute and within the solvent, and the formation of attractions between the solute and the solvent. An ideal solution has a null enthalpy of mixing. For a non-ideal solution, it is an excess molar quantity.
Energetics.
Dissolution of most gases is exothermic. That is, when a gas dissolves in a liquid solvent, energy is released as heat, warming both the system (i.e. the solution) and the surroundings.
The temperature of the solution eventually decreases to match that of the surroundings. The equilibrium, between the gas as a separate phase and the gas in solution, will by Le Châtelier's principle shift to favour the gas going into solution as the temperature is decreased (decreasing the temperature increases the solubility of a gas).
When a saturated solution of a gas is heated, gas comes out of the solution.
Steps in dissolution.
Dissolution can be viewed as occurring in three steps:
The value of the enthalpy of solvation is the sum of these individual steps.
formula_0
Dissolving ammonium nitrate in water is endothermic. The energy released by the solvation of the ammonium ions and nitrate ions is less than the energy absorbed in breaking up the ammonium nitrate ionic lattice and the attractions between water molecules. Dissolving potassium hydroxide is exothermic, as more energy is released during solvation than is used in breaking up the solute and solvent.
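As a rough worked example (using commonly quoted, approximate textbook values): for sodium chloride, breaking up the ionic lattice requires roughly +790 kJ/mol, while hydration of the separated Na+ and Cl− ions releases roughly −785 kJ/mol, so the enthalpy of solution is only a few kJ/mol and slightly positive (endothermic).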
Expressions in differential or integral form.
The expressions of the enthalpy change of dissolution can be differential or integral, as a function of the ratio of amounts of solute-solvent.
The molar differential enthalpy change of dissolution is:
formula_1
where Δn_i is the infinitesimal variation or differential of the mole number of the solute during dissolution.
The integral heat of dissolution is defined as a process of obtaining a certain amount of solution with a final concentration. The enthalpy change in this process, normalized by the mole number of solute, is evaluated as the molar integral heat of dissolution. Mathematically, the molar integral heat of dissolution is denoted as:
formula_2
The prime heat of dissolution is the differential heat of dissolution for obtaining an infinitely diluted solution.
Dependence on the nature of the solution.
The enthalpy of mixing of an ideal solution is zero by definition but the enthalpy of dissolution of nonelectrolytes has the value of the enthalpy of fusion or vaporisation. For non-ideal solutions of electrolytes it is connected to the activity coefficient of the solute(s) and the temperature derivative of the relative permittivity through the following formula:
formula_3
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta H_\\text{solv} = \\Delta H_\\text{diss} + U_\\text{latt}"
},
{
"math_id": 1,
"text": "\\Delta_\\text{diss}^{d} H= \\left(\\frac{\\partial \\Delta_\\text{diss} H}{\\partial \\Delta n_i}\\right)_{T,p,n_B}"
},
{
"math_id": 2,
"text": " \\Delta_\\text{diss}^{i} H = \\frac{\\Delta_\\text{diss} H}{n_B}"
},
{
"math_id": 3,
"text": " H_{dil} = \\sum_i \\nu_i RT \\ln \\gamma_i \\left( 1 + \\frac{T}{\\epsilon}\\frac{\\partial \\epsilon}{\\partial T} \\right)"
}
]
| https://en.wikipedia.org/wiki?curid=678365 |
67841457 | Multiple subset sum | The multiple subset sum problem is an optimization problem in computer science and operations research. It is a generalization of the subset sum problem. The input to the problem is a multiset formula_0 of "n" integers and a positive integer "m" representing the number of subsets. The goal is to construct, from the input integers, some "m" subsets. The problem has several variants:
Max-sum and max-min MSSP.
When "m" is variable (a part of the input), both problems are strongly NP-hard, by reduction from 3-partition. This means that they have no fully polynomial-time approximation scheme (FPTAS) unless P=NP.
Even when "m"=2, the problems do not have an FPTAS unless P=NP. This can be shown by a reduction from the "equal-cardinality partition problem" (EPART):
The following approximation algorithms are known:
Fair subset sum problem.
The "fair subset sum problem" ("FSSP") is a generalization of SSP in which, after the subset is selected, its items are allocated among two or more agents. The utility of each agent equals the sum of weights of the items allocated to him/her. The goal is that the utility profile satisfies some criterion of fairness, such as the egalitarian rule or the proportional-fair rule. Two variants of the problem are:
Both variants are NP-hard. However, there are pseudopolynomial time algorithms for enumerating all Pareto-optimal solutions when there are two agents:
Nicosia, Pacifici and Pferschy study the price of fairness, that is, the ratio between the maximum sum of utilities, and the maximum sum of utilities in a fair solution:
In both cases, if the item value is bounded by some constant "a", then the POF is bounded by a function of "a".
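A sketch of the kind of pseudopolynomial dynamic program mentioned above for two agents (Python; the exact formulation of the relation formula_8 used in the literature is not reproduced in this excerpt, so the version below is an assumed interpretation): a table records which utility pairs are attainable, and the Pareto-optimal pairs can then be read off it.

```python
def attainable_pairs(weights, c):
    """Q[w1][w2] is True if some of the items can be chosen and split between
    two agents so that agent 1 receives total utility w1 and agent 2 receives
    total utility w2, with both totals capped at c.  Runs in O(n * c^2) time."""
    Q = [[False] * (c + 1) for _ in range(c + 1)]
    Q[0][0] = True
    for x in weights:
        nxt = [row[:] for row in Q]            # option: leave the item unassigned
        for w1 in range(c + 1):
            for w2 in range(c + 1):
                if Q[w1][w2]:
                    if w1 + x <= c:
                        nxt[w1 + x][w2] = True   # item goes to agent 1
                    if w2 + x <= c:
                        nxt[w1][w2 + x] = True   # item goes to agent 2
        Q = nxt
    return Q

# Pareto-optimal utility profiles are the attainable pairs (w1, w2) that are
# not dominated by another attainable pair in both coordinates.
```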
Multiple knapsack problem.
The multiple knapsack problem (MKP) is a generalization of both the max-sum MSSP and the knapsack problem. In this problem, there are "m" knapsacks and "n" items, where each item has both a value and a weight. The goal is to pack as much value as possible into the "m" bins, such that the total weight in each bin is at most its capacity.
The MKP has a Polynomial-time approximation scheme.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "1- 1/(n+1)"
},
{
"math_id": 2,
"text": "\\epsilon < 1/(n+1)"
},
{
"math_id": 3,
"text": "(1-\\epsilon)"
},
{
"math_id": 4,
"text": "\\epsilon = 1/(n+2)"
},
{
"math_id": 5,
"text": "\\epsilon"
},
{
"math_id": 6,
"text": "1/\\epsilon^2"
},
{
"math_id": 7,
"text": "O(n^{2m/\\epsilon})"
},
{
"math_id": 8,
"text": "Q"
},
{
"math_id": 9,
"text": "Q(w_1,w_2)=\\text{true}"
},
{
"math_id": 10,
"text": "O(n\\cdot c^2)"
},
{
"math_id": 11,
"text": "Q_j"
},
{
"math_id": 12,
"text": "Q_j(w)=\\text{true}"
},
{
"math_id": 13,
"text": "O(n\\cdot c)"
}
]
| https://en.wikipedia.org/wiki?curid=67841457 |
67848380 | 1 Kings 12 | 1 Kings, chapter 12
1 Kings 12 is the twelfth chapter of the Books of Kings in the Hebrew Bible or the First Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. 1 Kings 12:1 to 16:14 documents the consolidation of the kingdoms of northern Israel and Judah: this chapter focusses on the reigns of Rehoboam and Jeroboam.
Text.
This chapter was originally written in the Hebrew language and since the 16th century is divided into 33 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls, that is, 6Q4 (6QpapKgs; 150–75 BCE) with extant verses 28–31.
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Negotiations in Shechem (12:1–20).
Rehoboam took the throne in Judah without opposition, but he required confirmation from the northern kingdom (cf ; ; 19:10-11,42-4). After Solomon's death, the northern tribes of Israel requested negotiations with the new king in Shechem (today: Nablus, in the central mountain country of Ephraim), and when the early negotiation failed, Jeroboam was called upon to lead the petition to reduce the financial burdens imposed by Solomon (verse 20). Rehoboam sought advice from 'the older men who had attended his father Solomon' and from 'the young men who had grown up with him and now attended him' (verses 6, 8), representing a political conflict between two generations. The 'undiplomatic arrogance' of Rehoboam's reply, based on the advice of the younger advisors (and using vulgarity), confirmed what the northern tribes had already perceived: that Solomon and his family intended to squeeze northern Israel hard in comparison with the tribe of Judah, so the northern tribes decided to separate (verse 16 uses a language of separation almost identical to 2 Samuel 20:1, when the northern tribes had privately distanced themselves from Davidic rule during Absalom's failed revolt). Despite the acknowledgment that things happened exactly as the prophet Ahijah of Shiloh had forecast (verse 15, cf. 1 Kings 11:29–32), the author of this passage still regards the separation as a 'perverse rebellion against the legitimate reign of the descendants of David' (verse 19).
" Then king Rehoboam sent Adoram, who was over the tribute; and all Israel stoned him with stones, that he died. Therefore king Rehoboam made speed to get him up to his chariot, to flee to Jerusalem."
Civil war averted (12:21–24).
The separation of northern tribes happened as prophesied by the prophet Ahijah of Shiloh as a (limited) divine judgement upon the ruling house of Jerusalem (1 Kings 11:29–39), and confirmed by the prophet Shemaiah in this passage that Rehoboam and the Judeans should not go against God's irreversible decision, especially when it means fighting against their 'kindred'.
State worship in Bethel and Dan (12:25–33).
The record of Jeroboam I of Israel spans from 1 Kings 12:25 to 14:24, but in the Septuagint version of Codex Vaticanus there is an addition before verse 25, numbered as 24a to 24z, which is not present in the Hebrew Bible, but this Greek text often concurs literally with the Hebrew text in 1 Kings 11–14 although containing some significant differences, such as:
Also, in the Greek text, Rehoboam was made king at 16 years of age (Hebrew text: 40 years old), and reigned 12 years (Hebrew text: 17 years); his mother was Naanan (Hebrew text: Naamah), the daughter of Ana, son of Nahash, king of Ammon.
Jeroboam became the founder and quasi-democratically legitimized ruler of northern Israel (1 Kings 12:20), but he was always afraid of being dethroned by the same constituents while they still remembered the Davidic rule (verses 26–27), so he initiated a number of building projects (imitating Solomon), such as castles in cis-Jordanian Shechem and in trans-Jordanian Penuel, the central city of the original Israelite region of Gilead (verse 25; cf. 1 Samuel 11), and state holy sites in Dan (far north) and Bethel (deep in the south of his kingdom), at sites of long-existing worship places (cf. Judges 17–18; Genesis 28; 35). Jeroboam's statues of 'calves' more closely resembled those of (young) bulls, the animal symbolizing Canaan's main gods El and Baal, but he claimed to worship the Israelite YHWH 'who brought you up out of the land of Egypt' (verse 28), as also suggested by the archaeological excavations in Tel Dan, the site of the ancient city of Dan, which yielded seal impressions with Yahwistic names, the architecture of the high place, artifacts, and animal bones from sacrifices to YHWH. Nonetheless, this was not in accordance with the main belief that God's temple resides in Zion (Jerusalem), so Jeroboam's policy was severely criticized and interpreted as 'the seed of the fall of his dynasty and also the kingdom he founded' (cf. 12:29; 13:33–34; as well as the links of all northern kings' wickedness to 'the sins of Jeroboam'), in particular, the establishment of holy high places (cf. verse 31 with Leviticus 26:30; Deuteronomy 12; 2 Kings 17:9–10), the appointment of non-Levite priests (cf. verse 31 with Deuteronomy 18:1-8), and the unauthorized introduction of a religious feast (cf. verse 32 with Leviticus 23:34).
"28So the king took counsel and made two calves of gold. And he said to the people, "You have gone up to Jerusalem long enough. Behold your gods, O Israel, who brought you up out of the land of Egypt.""
"29 And he set one in Bethel, and the other he put in Dan."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67848380 |
6785051 | History of trigonometry |
Early study of triangles can be traced to the 2nd millennium BC, in Egyptian mathematics (Rhind Mathematical Papyrus) and Babylonian mathematics. Trigonometry was also prevalent in Kushite mathematics.
Systematic study of trigonometric functions began in Hellenistic mathematics, reaching India as part of Hellenistic astronomy. In Indian astronomy, the study of trigonometric functions flourished in the Gupta period, especially due to Aryabhata (sixth century AD), who discovered the sine function, cosine function, and versine function.
During the Middle Ages, the study of trigonometry continued in Islamic mathematics, by mathematicians such as Al-Khwarizmi and Abu al-Wafa. It became an independent discipline in the Islamic world, where all six trigonometric functions were known. Translations of Arabic and Greek texts led to trigonometry being adopted as a subject in the Latin West beginning in the Renaissance with Regiomontanus.
The development of modern trigonometry shifted during the western Age of Enlightenment, beginning with 17th-century mathematics (Isaac Newton and James Stirling) and reaching its modern form with Leonhard Euler (1748).
Etymology.
The term "trigonometry" was derived from Greek τρίγωνον "trigōnon", "triangle" and μέτρον "metron", "measure".
The modern words "sine" and "cosine" are derived from the Latin word "sinus" via mistranslation from Arabic (see Sine and cosine#Etymology). Particularly Fibonacci's "sinus rectus arcus" proved influential in establishing the term.
The word "tangent" comes from Latin meaning "touching", since the line "touches" the circle of unit radius, whereas "secant" stems from Latin "cutting" since the line "cuts" the circle.
The prefix "co-" (in "cosine", "cotangent", "cosecant") is found in Edmund Gunter's "Canon triangulorum" (1620), which defines the "cosinus" as an abbreviation for the "sinus complementi" (sine of the complementary angle) and proceeds to define the "cotangens" similarly.
The words "minute" and "second" are derived from the Latin phrases "partes minutae primae" and "partes minutae secundae". These roughly translate to "first small parts" and "second small parts".
Ancient.
Ancient Near East.
The ancient Egyptians and Babylonians had known of theorems on the ratios of the sides of similar triangles for many centuries. However, as pre-Hellenic societies lacked the concept of an angle measure, they were limited to studying the sides of triangles instead.
The Babylonian astronomers kept detailed records on the rising and setting of stars, the motion of the planets, and the solar and lunar eclipses, all of which required familiarity with angular distances measured on the celestial sphere. Based on one interpretation of the Plimpton 322 cuneiform tablet (c. 1900 BC), some have even asserted that the ancient Babylonians had a table of secants, but this interpretation does not work in this context, since without the use of circles and angle measure modern trigonometric notions do not directly apply. There is, however, much debate as to whether it is a table of Pythagorean triples, a solution of quadratic equations, or a trigonometric table.
The Egyptians, on the other hand, used a primitive form of trigonometry for building pyramids in the 2nd millennium BC. The Rhind Mathematical Papyrus, written by the Egyptian scribe Ahmes (c. 1680–1620 BC), contains the following problem related to trigonometry:
<templatestyles src="Template:Blockquote/styles.css" />"If a pyramid is 250 cubits high and the side of its base 360 cubits long, what is its "seked"?"
Ahmes' solution to the problem is the ratio of half the side of the base of the pyramid to its height, or the run-to-rise ratio of its face. In other words, the quantity he found for the "seked" is the cotangent of the angle to the base of the pyramid and its face.
Classical antiquity.
Ancient Greek and Hellenistic mathematicians made use of the chord. Given a circle and an arc on the circle, the chord is the line that subtends the arc. A chord's perpendicular bisector passes through the center of the circle and bisects the angle. One half of the bisected chord is the sine of one half the bisected angle, that is,
formula_0
and consequently the sine function is also known as the "half-chord". Due to this relationship, a number of trigonometric identities and theorems that are known today were also known to Hellenistic mathematicians, but in their equivalent chord form.
Although there is no trigonometry in the works of Euclid and Archimedes, in the strict sense of the word, there are theorems presented in a geometric way (rather than a trigonometric way) that are equivalent to specific trigonometric laws or formulas. For instance, propositions twelve and thirteen of book two of the "Elements" are the laws of cosines for obtuse and acute angles, respectively. Theorems on the lengths of chords are applications of the law of sines. And Archimedes' theorem on broken chords is equivalent to formulas for sines of sums and differences of angles. To compensate for the lack of a table of chords, mathematicians of Aristarchus' time would sometimes use the statement that, in modern notation, sin "α"/sin "β" < "α"/"β" < tan "α"/tan "β" whenever 0° < β < α < 90°, now known as Aristarchus's inequality.
The first trigonometric table was apparently compiled by Hipparchus of Nicaea (180 – 125 BC), who is now consequently known as "the father of trigonometry." Hipparchus was the first to tabulate the corresponding values of arc and chord for a series of angles.
Although it is not known when the systematic use of the 360° circle came into mathematics, it is known that the systematic introduction of the 360° circle came a little after Aristarchus of Samos composed "On the Sizes and Distances of the Sun and Moon" (c. 260 BC), since he measured an angle in terms of a fraction of a quadrant. It seems that the systematic use of the 360° circle is largely due to Hipparchus and his table of chords. Hipparchus may have taken the idea of this division from Hypsicles who had earlier divided the day into 360 parts, a division of the day that may have been suggested by Babylonian astronomy. In ancient astronomy, the zodiac had been divided into twelve "signs" or thirty-six "decans". A seasonal cycle of roughly 360 days could have corresponded to the signs and decans of the zodiac by dividing each sign into thirty parts and each decan into ten parts. It is due to the Babylonian sexagesimal numeral system that each degree is divided into sixty minutes and each minute is divided into sixty seconds.
Menelaus of Alexandria (c. 100 AD) wrote in three books his "Sphaerica". In Book I, he established a basis for spherical triangles analogous to the Euclidean basis for plane triangles. He established a theorem that is without Euclidean analogue, that two spherical triangles are congruent if corresponding angles are equal, but he did not distinguish between congruent and symmetric spherical triangles. Another theorem that he establishes is that the sum of the angles of a spherical triangle is greater than 180°. Book II of "Sphaerica" applies spherical geometry to astronomy. And Book III contains the "theorem of Menelaus". He further gave his famous "rule of six quantities".
Later, Claudius Ptolemy (c. 90 – c. 168 AD) expanded upon Hipparchus' "Chords in a Circle" in his "Almagest", or the "Mathematical Syntaxis". The Almagest is primarily a work on astronomy, and astronomy relies on trigonometry. Ptolemy's table of chords gives the lengths of chords of a circle of diameter 120 as a function of the number of degrees "n" in the corresponding arc of the circle, for "n" ranging from 1/2 to 180 by increments of 1/2. The thirteen books of the "Almagest" are the most influential and significant trigonometric work of all antiquity. A theorem that was central to Ptolemy's calculation of chords was what is still known today as Ptolemy's theorem, that the sum of the products of the opposite sides of a cyclic quadrilateral is equal to the product of the diagonals. A special case of Ptolemy's theorem appeared as proposition 93 in Euclid's "Data". Ptolemy's theorem leads to the equivalent of the four sum-and-difference formulas for sine and cosine that are today known as Ptolemy's formulas, although Ptolemy himself used chords instead of sine and cosine. Ptolemy further derived the equivalent of the half-angle formula
formula_1
Ptolemy used these results to create his trigonometric tables, but whether these tables were derived from Hipparchus' work cannot be determined.
Neither the tables of Hipparchus nor those of Ptolemy have survived to the present day, although descriptions by other ancient authors leave little doubt that they once existed.
Indian mathematics.
Some of the early and very significant developments of trigonometry were in India. Influential works from the 4th–5th century AD, known as the Siddhantas (of which there were five, the most important of which is the Surya Siddhanta) first defined the sine as the modern relationship between half an angle and half a chord, while also defining the cosine, versine, and inverse sine. Soon afterwards, another Indian mathematician and astronomer, Aryabhata (476–550 AD), collected and expanded upon the developments of the Siddhantas in an important work called the "Aryabhatiya". The "Siddhantas" and the "Aryabhatiya" contain the earliest surviving tables of sine values and versine (1 − cosine) values, in 3.75° intervals from 0° to 90°, to an accuracy of 4 decimal places. They used the words "jya" for sine, "kojya" for cosine, "utkrama-jya" for versine, and "otkram jya" for inverse sine. The words "jya" and "kojya" eventually became "sine" and "cosine" respectively after a mistranslation described above.
In the 7th century, Bhaskara I produced a formula for calculating the sine of an acute angle without the use of a table. He also gave the following approximation formula for sin("x"), which had a relative error of less than 1.9%:
formula_2
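The quoted error bound is easy to check numerically. The following short Python sketch (an illustration added here, not part of any historical source) compares Bhaskara I's approximation with the true sine over the interval (0, π); the largest relative error occurs near the endpoints and stays just under 1.9%.
import math
def bhaskara_sin(x):
    # Bhaskara I's 7th-century rational approximation, valid for 0 <= x <= pi.
    return 16 * x * (math.pi - x) / (5 * math.pi ** 2 - 4 * x * (math.pi - x))
# Sample the open interval (0, pi) and record the worst relative error.
worst = max(abs(bhaskara_sin(x) - math.sin(x)) / math.sin(x)
            for x in (i * math.pi / 1000 for i in range(1, 1000)))
print(round(worst, 4))   # about 0.0186, i.e. a relative error under 1.9%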
Later in the 7th century, Brahmagupta redeveloped the formula
formula_3
(also derived earlier, as mentioned above) and the Brahmagupta interpolation formula for computing sine values.
Another later Indian author on trigonometry was Bhaskara II in the 12th century. Bhaskara II developed spherical trigonometry, and discovered many trigonometric results.
Bhaskara II was one of the first to discover formula_4 and formula_5 trigonometric results like:
formula_6
Madhava (c. 1400) made early strides in the analysis of trigonometric functions and their infinite series expansions. He developed the concepts of the power series and Taylor series, and produced the power series expansions of sine, cosine, tangent, and arctangent. Using the Taylor series approximations of sine and cosine, he produced a sine table to 12 decimal places of accuracy and a cosine table to 9 decimal places of accuracy. He also gave the power series of π and the angle, radius, diameter, and circumference of a circle in terms of trigonometric functions. His works were expanded by his followers at the Kerala School up to the 16th century.
The Indian text the Yuktibhāṣā contains proof for the expansion of the sine and cosine functions and the derivation and proof of the power series for inverse tangent, discovered by Madhava. The Yuktibhāṣā also contains rules for finding the sines and the cosines of the sum and difference of two angles.
Chinese mathematics.
In China, Aryabhata's table of sines was translated into the Chinese mathematical book of the "Kaiyuan Zhanjing", compiled in 718 AD during the Tang dynasty. Although the Chinese excelled in other fields of mathematics such as solid geometry, binomial theorem, and complex algebraic formulas, early forms of trigonometry were not as widely appreciated as in the earlier Greek, Hellenistic, Indian and Islamic worlds. Instead, the early Chinese used an empirical substitute known as "chong cha", while practical use of plane trigonometry employing the sine, the tangent, and the secant was known. However, this embryonic state of trigonometry in China slowly began to change and advance during the Song dynasty (960–1279), when Chinese mathematicians began to place greater emphasis on the need for spherical trigonometry in calendrical science and astronomical calculations. The polymath Chinese scientist, mathematician and official Shen Kuo (1031–1095) used trigonometric functions to solve mathematical problems of chords and arcs. Victor J. Katz writes that in Shen's formula "technique of intersecting circles", he created an approximation of the arc "s" of a circle given the diameter "d", sagitta "v", and length "c" of the chord subtending the arc, the length of which he approximated as
formula_7
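As a rough numerical illustration (the circle and arc chosen here are hypothetical, not taken from Shen's text), the sketch below compares this approximation with the exact arc length for a 60° arc of a unit-diameter circle.
import math
def shen_arc(d, v, c):
    # Shen Kuo's approximation of arc length from diameter d, sagitta v and chord c.
    return c + 2 * v ** 2 / d
d = 1.0
theta = math.radians(60)                 # a 60-degree arc
r = d / 2
c = 2 * r * math.sin(theta / 2)          # chord length
v = r * (1 - math.cos(theta / 2))        # sagitta
print(round(shen_arc(d, v, c), 4), round(r * theta, 4))   # 0.509 versus the exact 0.5236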
Sal Restivo writes that Shen's work in the lengths of arcs of circles provided the basis for spherical trigonometry developed in the 13th century by the mathematician and astronomer Guo Shoujing (1231–1316). As the historians L. Gauchet and Joseph Needham state, Guo Shoujing used spherical trigonometry in his calculations to improve the calendar system and Chinese astronomy. Along with a later 17th-century Chinese illustration of Guo's mathematical proofs, Needham states that:
Guo used a quadrangular spherical pyramid, the basal quadrilateral of which consisted of one equatorial and one ecliptic arc, together with two meridian arcs, one of which passed through the summer solstice point...By such methods he was able to obtain the du lü (degrees of equator corresponding to degrees of ecliptic), the ji cha (values of chords for given ecliptic arcs), and the cha lü (difference between chords of arcs differing by 1 degree).
Despite the achievements of Shen and Guo's work in trigonometry, another substantial work in Chinese trigonometry would not be published again until 1607, with the dual publication of "Euclid's Elements" by Chinese official and astronomer Xu Guangqi (1562–1633) and the Italian Jesuit Matteo Ricci (1552–1610).
Medieval Islamic world.
Previous works were later translated and expanded in the medieval Islamic world by Muslim mathematicians of mostly Persian and Arab descent, who enunciated a large number of theorems which freed the subject of trigonometry from dependence upon the complete quadrilateral, as was the case in Hellenistic mathematics due to the application of Menelaus' theorem. According to E. S. Kennedy, it was after this development in Islamic mathematics that "the first real trigonometry emerged, in the sense that only then did the object of study become the spherical or plane triangle, its sides and angles."
Methods dealing with spherical triangles were also known, particularly the method of Menelaus of Alexandria, who developed "Menelaus' theorem" to deal with spherical problems. However, E. S. Kennedy points out that while it was possible in pre-Islamic mathematics to compute the magnitudes of a spherical figure, in principle, by use of the table of chords and Menelaus' theorem, the application of the theorem to spherical problems was very difficult in practice. In order to observe holy days on the Islamic calendar in which timings were determined by phases of the moon, astronomers initially used Menelaus' method to calculate the place of the moon and stars, though this method proved to be clumsy and difficult. It involved setting up two intersecting right triangles; by applying Menelaus' theorem it was possible to solve one of the six sides, but only if the other five sides were known. To tell the time from the sun's altitude, for instance, repeated applications of Menelaus' theorem were required. For medieval Islamic astronomers, there was an obvious challenge to find a simpler trigonometric method.
In the early 9th century AD, Muhammad ibn Mūsā al-Khwārizmī produced accurate sine and cosine tables, and the first table of tangents. He was also a pioneer in spherical trigonometry. In 830 AD, Habash al-Hasib al-Marwazi produced the first table of cotangents. Muhammad ibn Jābir al-Harrānī al-Battānī (Albatenius) (853–929 AD) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°.
By the 10th century AD, in the work of Abū al-Wafā' al-Būzjānī, all six trigonometric functions were used. Abu al-Wafa had sine tables in 0.25° increments, to 8 decimal places of accuracy, and accurate tables of tangent values. He also developed the following trigonometric formula:
formula_8 (a special case of Ptolemy's angle-addition formula; see above)
In his original text, Abū al-Wafā' states: "If we want that, we multiply the given sine by the cosine minutes, and the result is half the sine of the double". Abū al-Wafā also established the angle addition and difference identities presented with complete proofs:
formula_9
formula_10
For the second one, the text states: "We multiply the sine of each of the two arcs by the cosine of the other "minutes". If we want the sine of the sum, we add the products, if we want the sine of the difference, we take their difference".
He also discovered the law of sines for spherical trigonometry:
formula_11
Also in the late 10th and early 11th centuries AD, the Egyptian astronomer Ibn Yunus performed many careful trigonometric calculations and demonstrated the following trigonometric identity:
formula_12
Al-Jayyani (989–1079) of al-Andalus wrote "The book of unknown arcs of a sphere", which is considered "the first treatise on spherical trigonometry". It "contains formulae for right-handed triangles, the general law of sines, and the solution of a spherical triangle by means of the polar triangle." This treatise later had a "strong influence on European mathematics", and his "definition of ratios as numbers" and "method of solving a spherical triangle when all sides are unknown" are likely to have influenced Regiomontanus.
The method of triangulation was first developed by Muslim mathematicians, who applied it to practical uses such as surveying and Islamic geography, as described by Abu Rayhan Biruni in the early 11th century. Biruni himself introduced triangulation techniques to measure the size of the Earth and the distances between various places. In the late 11th century, Omar Khayyám (1048–1131) solved cubic equations using approximate numerical solutions found by interpolation in trigonometric tables. In the 13th century, Nasīr al-Dīn al-Tūsī was the first to treat trigonometry as a mathematical discipline independent from astronomy, and he developed spherical trigonometry into its present form. He listed the six distinct cases of a right-angled triangle in spherical trigonometry, and in his "On the Sector Figure", he stated the law of sines for plane and spherical triangles, discovered the law of tangents for spherical triangles, and provided proofs for both these laws. Nasir al-Din al-Tusi has been described as the creator of trigonometry as a mathematical discipline in its own right.
In the 15th century, Jamshīd al-Kāshī provided the first explicit statement of the law of cosines in a form suitable for triangulation. In France, the law of cosines is still referred to as the "théorème d'Al-Kashi". He also gave trigonometric tables of values of the sine function to four sexagesimal digits (equivalent to 8 decimal places) for each 1° of argument with differences to be added for each 1/60 of 1°. Ulugh Beg also gives accurate tables of sines and tangents correct to 8 decimal places around the same time.
Modern.
European renaissance and afterwards.
In 1342, Levi ben Gershon, known as Gersonides, wrote "On Sines, Chords and Arcs", in particular proving the sine law for plane triangles and giving five-figure sine tables.
A simplified trigonometric table, the "toleta de marteloio", was used by sailors in the Mediterranean Sea during the 14th–15th centuries to calculate navigation courses. It is described by Ramon Llull of Majorca in 1295, and laid out in the 1436 atlas of Venetian captain Andrea Bianco.
Regiomontanus was perhaps the first mathematician in Europe to treat trigonometry as a distinct mathematical discipline, in his "De triangulis omnimodis" written in 1464, as well as his later "Tabulae directionum" which included the tangent function, unnamed.
The "Opus palatinum de triangulis" of Georg Joachim Rheticus, a student of Copernicus, was probably the first in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596.
In the 17th century, Isaac Newton and James Stirling developed the general Newton–Stirling interpolation formula for trigonometric functions.
In the 18th century, Leonhard Euler's "Introductio in analysin infinitorum" (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, deriving their infinite series and presenting "Euler's formula" "e""ix" = cos "x" + "i" sin "x". Euler used the near-modern abbreviations "sin.", "cos.", "tang.", "cot.", "sec.", and "cosec." Prior to this, Roger Cotes had computed the derivative of sine in his "Harmonia Mensurarum" (1722).
Also in the 18th century, Brook Taylor defined the general Taylor series and gave the series expansions and approximations for all six trigonometric functions. The works of James Gregory in the 17th century and Colin Maclaurin in the 18th century were also very influential in the development of trigonometric series.
Citations and footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{chord}\\ \\theta = 2 r\\sin \\frac{\\theta}{2}, "
},
{
"math_id": 1,
"text": "\\sin^2\\left(\\frac{x}{2}\\right) = \\frac{1 - \\cos(x)}{2}."
},
{
"math_id": 2,
"text": "\\sin x \\approx \\frac{16x (\\pi - x)}{5 \\pi^2 - 4x (\\pi - x)}, \\qquad \\left(0\\leq x\\leq\\pi\\right)."
},
{
"math_id": 3,
"text": "\\ 1 - \\sin^2(x) = \\cos^2(x) = \\sin^2\\left (\\frac{\\pi}{2} - x\\right )"
},
{
"math_id": 4,
"text": " \\sin\\left(a + b\\right)"
},
{
"math_id": 5,
"text": "\\sin\\left(a - b\\right)"
},
{
"math_id": 6,
"text": "\\sin\\left(a + b\\right) = \\sin a\\cos b + \\cos a\\sin b"
},
{
"math_id": 7,
"text": "s = c + \\frac{2v^2}{d}."
},
{
"math_id": 8,
"text": "\\ \\sin(2x) = 2 \\sin(x) \\cos(x) "
},
{
"math_id": 9,
"text": "\\sin(\\alpha \\pm \\beta) = \\sqrt{\\sin^2 \\alpha - (\\sin \\alpha \\sin \\beta)^2} \\pm \\sqrt{\\sin^2 \\beta- (\\sin \\alpha\\sin \\beta)^2}"
},
{
"math_id": 10,
"text": "\\sin(\\alpha \\pm \\beta) = \\sin \\alpha \\cos \\beta \\pm \\cos \\alpha \\sin \\beta "
},
{
"math_id": 11,
"text": "\\frac{\\sin A}{\\sin a} = \\frac{\\sin B}{\\sin b} = \\frac{\\sin C}{\\sin c}."
},
{
"math_id": 12,
"text": "\\cos a \\cos b = \\frac{\\cos(a+b) + \\cos(a-b)}{2}"
}
]
| https://en.wikipedia.org/wiki?curid=6785051 |
67856304 | Egalitarian rule | Rawlsian decision rule for social choice
In social choice and operations research, the egalitarian rule (also called the max-min rule or the Rawlsian rule) is a rule saying that, among all possible alternatives, society should pick the alternative which maximizes the "minimum utility" of all individuals in society. It is a formal mathematical representation of the egalitarian philosophy. It also corresponds to John Rawls' principle of maximizing the welfare of the worst-off individual.
Definition.
Let formula_0 be a set of possible `states of the world' or `alternatives'. Society wishes to choose a single state from formula_0. For example, in a single-winner election, formula_0 may represent the set of candidates; in a resource allocation setting, formula_0 may represent all possible allocations.
Let formula_1 be a finite set, representing a collection of individuals. For each formula_2, let formula_3 be a "utility function", describing the amount of happiness an individual "i" derives from each possible state.
A "social choice rule" is a mechanism which uses the data formula_4 to select some element(s) from formula_0 which are `best' for society. The question of what 'best' means is the basic question of social choice theory. The egalitarian rule selects an element formula_5 which maximizes the "minimum utility", that is, it solves the following optimization problem:
formula_6
Leximin rule.
Often, there are many different states with the same minimum utility. For example, a state with utility profile (0,100,100) has the same minimum value as a state with utility profile (0,0,0). In this case, the egalitarian rule often uses the leximin order, that is: subject to maximizing the smallest utility, it aims to maximize the next-smallest utility; subject to that, maximize the next-smallest utility, and so on.
For example, suppose there are two individuals - Alice and George, and three possible states: state x gives a utility of 2 to Alice and 4 to George; state y gives a utility of 9 to Alice and 1 to George; and state z gives a utility of 1 to Alice and 8 to George. Then state x is leximin-optimal, since its utility profile is (2,4) which is leximin-larger than that of y (9,1) and z (1,8).
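This comparison can be reproduced mechanically: sort each utility profile in increasing order and compare the sorted profiles lexicographically. The Python sketch below is purely illustrative; the data are the Alice/George profiles from the example above.
def leximin_optimal(utilities):
    # utilities maps each state to its tuple of individual utilities.
    # The leximin-optimal state has the lexicographically largest sorted profile.
    return max(utilities, key=lambda s: tuple(sorted(utilities[s])))
profiles = {"x": (2, 4), "y": (9, 1), "z": (1, 8)}
print(leximin_optimal(profiles))   # prints "x", whose sorted profile (2, 4) beats (1, 9) and (1, 8)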
The egalitarian rule strengthened with the leximin order is often called the leximin rule, to distinguish it from the simpler max-min rule.
The leximin rule for social choice was introduced by Amartya Sen in 1970, and discussed in depth in many later books.
Properties.
Pareto inefficiency.
The leximin rule is Pareto-efficient if the outcomes of every decision are known with perfect certainty. However, by Harsanyi's utilitarian theorem, any leximin function is Pareto-inefficient for a society that must make tradeoffs under uncertainty: There exist situations in which every person in a society would be better-off (ex ante) if they were to take a particular bet, but the leximin rule will reject it (because some person might be made worse off ex post).
Pigou-Dalton property.
The leximin rule satisfies the Pigou–Dalton principle, that is: if utility is "moved" from an agent with more utility to an agent with less utility, and as a result, the utility-difference between them becomes smaller, then resulting alternative is preferred.
Moreover, the leximin rule is the only social-welfare ordering rule which simultaneously satisfies the following three properties:
Egalitarian resource allocation.
The egalitarian rule is particularly useful as a rule for fair division. In this setting, the set formula_0 represents all possible allocations, and the goal is to find an allocation which maximizes the minimum utility, or the leximin vector. This rule has been studied in several contexts:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "I"
},
{
"math_id": 2,
"text": "i \\in I"
},
{
"math_id": 3,
"text": "u_i:X\\longrightarrow\\mathbb{R}"
},
{
"math_id": 4,
"text": "(u_i)_{i \\in I}"
},
{
"math_id": 5,
"text": "x \\in X"
},
{
"math_id": 6,
"text": " \\max_{x\\in X} \\min_{i\\in I} u_i(x)."
}
]
| https://en.wikipedia.org/wiki?curid=67856304 |
6785716 | Bearing capacity | Capacity of soil to support loads
In geotechnical engineering, bearing capacity is the capacity of soil to support the loads applied to the ground. The bearing capacity of soil is the maximum average contact pressure between the foundation and the soil which should not produce shear failure in the soil. "Ultimate bearing capacity" is the theoretical maximum pressure which can be supported without failure; "allowable bearing capacity" is the ultimate bearing capacity divided by a factor of safety. Sometimes, on soft soil sites, large settlements may occur under loaded foundations without actual shear failure occurring; in such cases, the allowable bearing capacity is based on the maximum allowable settlement. The allowable bearing pressure is thus the maximum pressure that may safely be applied to the soil in design, while the ultimate bearing capacity is the maximum pressure the soil can support before it fails.
There are three modes of failure that limit bearing capacity: general shear failure, local shear failure, and punching shear failure.
It depends upon the shear strength of soil as well as shape, size, depth and type of foundation.
Introduction.
A foundation is the part of a structure which transmits the weight of the structure to the ground. All structures constructed on land are supported on foundations. A foundation is a connecting link between the structure proper and the ground which supports it.
The bearing strength characteristics of foundation soil are a major design criterion for civil engineering structures.
General bearing failure.
A general bearing failure occurs when the load on the footing causes large movement of the soil on a shear failure surface which extends away from the footing and up to the soil surface. Calculation of the capacity of the footing in general bearing is based on the size of the footing and the soil properties. The basic method was developed by Terzaghi, with modifications and additional factors by Meyerhof and Vesić.
The general shear failure case is the one normally analyzed. Prevention against other failure modes is accounted for implicitly in settlement calculations. Stress distribution in elastic soils under foundations was found in a closed form by Ludwig Föppl (1941) and Gerhard Schubert (1942). There are many different methods for computing when this failure will occur.
Terzaghi's Bearing Capacity Theory.
Karl von Terzaghi was the first to present a comprehensive theory for the evaluation of the ultimate bearing capacity of rough shallow foundations. This theory states that a foundation is shallow if its depth is less than or equal to its width. Later investigations, however, have suggested that foundations with a depth, measured from the ground surface, equal to 3 to 4 times their width may be defined as shallow foundations.
Terzaghi developed a method for determining bearing capacity for the general shear failure case in 1943. The equations, which take into account soil cohesion, soil friction, embedment, surcharge, and self-weight, are given below.
For square foundations:
formula_0
For continuous foundations:
formula_1
For circular foundations:
formula_2
where
formula_3
formula_4 for φ' = 0 [Note: 5.14 is Meyerhof's value -- see below. Terzaghi's value is 5.7.]
formula_5 for φ' > 0 [Note: As phi' goes to zero, N_c goes to 5.71...]
formula_6
"c"′ is the effective cohesion.
"σzD"′ is the vertical effective stress at the depth the foundation is laid.
"γ"′ is the effective unit weight when saturated or the total unit weight when not fully saturated.
"B" is the width or the diameter of the foundation.
"φ"′ is the effective internal angle of friction.
"Kpγ" is obtained graphically. Simplifications have been made to eliminate the need for "Kpγ". One such was done by Coduto, given below, and it is accurate to within 10%.
formula_7
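For a concrete feel of these formulas, the sketch below evaluates the factors and the square-footing equation in Python. It is illustrative only: it uses Coduto's simplified expression for "Nγ" instead of the graphical "Kpγ", takes Terzaghi's limiting value 5.7 for "Nc" at φ′ = 0, and the input values are hypothetical.
import math
def terzaghi_factors(phi_deg):
    # Bearing capacity factors; N_gamma uses Coduto's simplification (accurate to within ~10%).
    phi = math.radians(phi_deg)
    n_q = (math.exp(2 * math.pi * (0.75 - phi_deg / 360.0) * math.tan(phi))
           / (2 * math.cos(math.radians(45) + phi / 2) ** 2))
    n_c = 5.7 if phi_deg == 0 else (n_q - 1) / math.tan(phi)   # 5.7 is Terzaghi's limiting value
    n_gamma = 2 * (n_q + 1) * math.tan(phi) / (1 + 0.4 * math.sin(4 * phi))
    return n_c, n_q, n_gamma
def q_ult_square(c, sigma_zd, gamma, b, phi_deg):
    # Ultimate bearing capacity of a square footing, general shear failure.
    n_c, n_q, n_gamma = terzaghi_factors(phi_deg)
    return 1.3 * c * n_c + sigma_zd * n_q + 0.4 * gamma * b * n_gamma
# Hypothetical inputs: c' = 10 kPa, overburden 18 kPa, gamma' = 18 kN/m3, B = 2 m, phi' = 30 deg.
print(round(q_ult_square(10, 18, 18, 2.0, 30)), "kPa")   # roughly 1177 kPa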
For foundations that exhibit the local shear failure mode in soils, Terzaghi suggested the following modifications to the previous equations. The equations are given below.
For square foundations:
formula_8
For continuous foundations:
formula_9
For circular foundations:
formula_10
formula_11, the modified bearing capacity factors, can be calculated from the bearing capacity factor equations (for formula_12, respectively) by replacing the effective internal angle of friction formula_13 with a value equal to formula_14
Meyerhof's Bearing Capacity theory.
In 1951, Meyerhof published a bearing capacity theory which could be applied to rough shallow and deep foundations. Meyerhof (1951, 1963) proposed a bearing-capacity equation similar to that of Terzaghi's but included a shape factor s-q with the depth term Nq. He also included depth factors and inclination factors. [Note: Meyerhof re-evaluated N_q based on a different assumption from Terzaghi and found N_q = ( 1 + sin phi) exp (pi tan phi ) / (1 - sin phi). Then N_c is the same equation as Terzaghi: N_c = (N_q - 1) / tan phi. For phi = 0, Meyerhof's N_c converges to 2 + pi = 5.14... Meyerhof also re-evaluated N_gamma and obtained N_gamma = (N_q - 1) tan(1.4 phi).]
Factor of safety.
Calculating the gross allowable-load bearing capacity of shallow foundations requires the application of a factor of safety (FS) to the gross ultimate bearing capacity, or;
formula_15
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " q_{ult} = 1.3 c' N_c + \\sigma '_{zD} N_q + 0.4 \\gamma ' B N_\\gamma \\ "
},
{
"math_id": 1,
"text": " q_{ult} = c' N_c + \\sigma '_{zD} N_q + 0.5 \\gamma ' B N_\\gamma \\ "
},
{
"math_id": 2,
"text": " q_{ult} = 1.3 c' N_c + \\sigma '_{zD} N_q + 0.3 \\gamma ' B N_\\gamma \\ "
},
{
"math_id": 3,
"text": " N_q = \\frac{ e ^{ 2 \\pi \\left( 0.75 - \\phi '/360 \\right) \\tan \\phi ' } }{2 \\cos ^2 \\left( 45 + \\phi '/2 \\right) } "
},
{
"math_id": 4,
"text": " N_c = 5.14 \\ "
},
{
"math_id": 5,
"text": " N_c = \\frac{ N_q - 1 }{ \\tan \\phi '} "
},
{
"math_id": 6,
"text": " N_\\gamma = \\frac{ \\tan \\phi ' }{2} \\left( \\frac{ K_{p \\gamma} }{ \\cos ^2 \\phi ' } - 1 \\right) "
},
{
"math_id": 7,
"text": " N_\\gamma = \\frac{ 2 \\left( N_q + 1 \\right) \\tan \\phi ' }{1 + 0.4 \\sin 4 \\phi ' }"
},
{
"math_id": 8,
"text": " q_{ult} = 0.867 c' N '_c + \\sigma '_{zD} N '_q + 0.4 \\gamma ' B N '_\\gamma \\ "
},
{
"math_id": 9,
"text": " q_{ult} = \\frac{2}{3} c' N '_c + \\sigma '_{zD} N '_q + 0.5 \\gamma ' B N '_\\gamma \\ "
},
{
"math_id": 10,
"text": " q_{ult} = 0.867 c' N '_c + \\sigma '_{zD} N '_q + 0.3 \\gamma ' B N '_\\gamma \\ "
},
{
"math_id": 11,
"text": " N '_c, N '_q and N '_y "
},
{
"math_id": 12,
"text": " N_c, N_q, and N_y"
},
{
"math_id": 13,
"text": "(\\phi ')"
},
{
"math_id": 14,
"text": " : tan^{-1}\\, (\\frac{2}{3} tan \\phi ') "
},
{
"math_id": 15,
"text": " q_{all} = \\frac{q_{ult}}{FS} "
}
]
| https://en.wikipedia.org/wiki?curid=6785716 |
67858994 | Iterative rational Krylov algorithm | The iterative rational Krylov algorithm (IRKA), is an iterative algorithm, useful for model order reduction (MOR) of single-input single-output (SISO) linear time-invariant dynamical systems. At each iteration, IRKA does an Hermite type interpolation of the original system transfer function. Each interpolation requires solving formula_0 shifted pairs of linear systems, each of size formula_1; where formula_2 is the original system order, and formula_0 is the desired reduced model order (usually formula_3).
The algorithm was first introduced by Gugercin, Antoulas and Beattie in 2008. It is based on a first order necessary optimality condition, initially investigated by Meier and Luenberger in 1967. The first convergence proof of IRKA was given by Flagg, Beattie and Gugercin in 2012, for a particular class of systems.
MOR as an optimization problem.
Consider a SISO linear time-invariant dynamical system, with input formula_5, and output formula_6:
formula_7
Applying the Laplace transform, with zero initial conditions, we obtain the transfer function formula_8, which is a fraction of polynomials:
formula_9
Assume formula_8 is stable. Given formula_10, MOR tries to approximate the transfer function formula_8, by a stable rational transfer function formula_11, of order formula_0:
formula_12
A possible approximation criterion is to minimize the absolute error in formula_4 norm:
formula_13
This is known as the formula_4 optimization problem. This problem has been studied extensively, and it is known to be non-convex; which implies that usually it will be difficult to find a global minimizer.
Meier–Luenberger conditions.
The following first order necessary optimality condition for the formula_4 problem, is of great importance for the IRKA algorithm.
<templatestyles src="Math_theorem/styles.css" />
Theorem ([Theorem 3.4] [Theorem 1.2]) — Assume that the formula_4 optimization problem admits a solution formula_14 with simple poles. Denote these poles by: formula_15. Then, formula_14 must be an Hermite interpolator of formula_8, through the reflected poles of formula_14:
formula_16
Note that the poles formula_17 are the eigenvalues of the reduced formula_18 matrix formula_19.
Hermite interpolation.
An Hermite interpolant formula_11 of the rational function formula_8, through formula_0 distinct points formula_20, has components:
formula_21
where the matrices formula_22 and formula_23 may be found by solving formula_0 dual pairs of linear systems, one for each shift [Theorem 1.1]:
formula_24
IRKA algorithm.
As can be seen from the previous section, finding an Hermite interpolator formula_11 of formula_8, through formula_0 given points, is relatively easy. The difficult part is to find the correct interpolation points. IRKA tries to iteratively approximate these "optimal" interpolation points.
For this, it starts with formula_0 arbitrary interpolation points (closed under conjugation), and then, at each iteration formula_25, it imposes the first order necessary optimality condition of the formula_26 problem:
1. find the Hermite interpolant formula_11 of formula_8, through the actual formula_0 shift points: formula_27.
2. update the shifts by using the poles of the new formula_11: formula_28
The iteration is stopped when the relative change in the set of shifts of two successive iterations is less than a given tolerance. This condition may be stated as:
formula_29
As already mentioned, each Hermite interpolation requires solving formula_0 shifted pairs of linear systems, each of size formula_1:
formula_30
Also, updating the shifts requires finding the formula_0 poles of the new interpolant formula_11. That is, finding the formula_0 eigenvalues of the reduced formula_18 matrix formula_19.
Pseudocode.
The following is a pseudocode for the IRKA algorithm [Algorithm 4.1].
algorithm IRKA
input: formula_31, formula_32, formula_33 closed under conjugation
formula_34 % Solve primal systems
formula_35 % Solve dual systems
while relative change in {formula_36} > tol
formula_37 % Reduced order matrix
formula_38 % Update shifts, using poles of formula_14
formula_34 % Solve primal systems
formula_39 % Solve dual systems
end while
return formula_40 % Reduced order model
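A minimal dense-matrix version of this loop can be written in a few lines of Python/NumPy. The sketch below is illustrative only: it keeps complex arithmetic throughout, pairs old and new shifts naively by sorting, and applies the usual Petrov–Galerkin normalization with the extra factor (W_r^* V_r)^{-1}, which the pseudocode above absorbs into the construction of W_r.
import numpy as np
def irka(A, b, c, shifts, tol=1e-6, max_iter=100):
    # Illustrative IRKA sketch for dense SISO systems; shifts must be closed under conjugation.
    n = A.shape[0]
    eye = np.eye(n)
    sigma = np.array(shifts, dtype=complex)
    for _ in range(max_iter):
        V = np.column_stack([np.linalg.solve(s * eye - A, b) for s in sigma])             # primal solves
        W = np.column_stack([np.linalg.solve((s * eye - A).conj().T, c) for s in sigma])  # dual solves
        M = W.conj().T @ V                                    # Petrov-Galerkin normalization
        Ar = np.linalg.solve(M, W.conj().T @ A @ V)           # reduced order matrix
        new_sigma = np.sort_complex(-np.linalg.eigvals(Ar))   # mirror the poles of the reduced model
        change = np.max(np.abs(new_sigma - np.sort_complex(sigma)) / np.abs(np.sort_complex(sigma)))
        sigma = new_sigma
        if change < tol:
            break
    br = np.linalg.solve(M, W.conj().T @ b)
    cr = V.conj().T @ c
    return Ar, br, cr    # reduced model G_r(s) = cr^T (s I_r - Ar)^{-1} br
With exact biorthogonalization, so that W_r^* V_r is the identity, the normalization factor drops out and the sketch coincides with the pseudocode above.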
Convergence.
A SISO linear system is said to have symmetric state space (SSS), whenever:
formula_41
Systems of this type appear in many important applications, such as in the analysis of RC circuits and in inverse problems involving 3D Maxwell's equations. For SSS systems with distinct poles, the following convergence result has been proven: "IRKA is a locally convergent fixed point iteration to a local minimizer of the formula_4 optimization problem."
Although there is no convergence proof for the general case, numerous experiments have shown that IRKA often converges rapidly for different kind of linear dynamical systems.
Extensions.
The IRKA algorithm has been extended by the original authors to multiple-input multiple-output (MIMO) systems, and also to discrete time and differential algebraic systems [Remark 4.1].
See also.
Model order reduction
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r"
},
{
"math_id": 1,
"text": "n \\times n"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "r \\ll n"
},
{
"math_id": 4,
"text": "H_{2}"
},
{
"math_id": 5,
"text": "v(t)"
},
{
"math_id": 6,
"text": "y(t)"
},
{
"math_id": 7,
"text": " \\begin{cases}\n\\dot{x}(t) = A x(t) + b v(t)\\\\\ny(t) = c^T x(t)\n\\end{cases} \\qquad A \\in \\mathbb{R}^{n \\times n}, \\, b,c \\in \\mathbb{R}^n, \\, v(t),y(t) \\in \\mathbb{R}, \\, x(t) \\in \\mathbb{R}^n."
},
{
"math_id": 8,
"text": "G"
},
{
"math_id": 9,
"text": "G(s)=c^T (sI-A)^{-1} b, \\quad A \\in \\mathbb{R}^{n \\times n}, \\, b,c \\in \\mathbb{R}^n."
},
{
"math_id": 10,
"text": "r < n"
},
{
"math_id": 11,
"text": "G_r"
},
{
"math_id": 12,
"text": " G_r(s) = c_r^T (sI_r-A_r)^{-1} b_r, \\quad A_r \\in \\mathbb{R}^{r \\times r}, \\, b_r, c_r \\in \\mathbb{R}^r."
},
{
"math_id": 13,
"text": "G_{r} \\in \\underset{ \\dim(\\hat{G})=r, \\, \\hat{G} \\text{ stable}} {\\operatorname{arg \\min}} \\|G-\\hat{G}\\|_{H_2}, \\quad \\|G\\|_{H_2}^2:= \\frac{1}{2 \\pi} \\int \\limits_{-\\infty}^\\infty |G(ja)|^2 \\, da ."
},
{
"math_id": 14,
"text": "G_{r}"
},
{
"math_id": 15,
"text": "\\lambda_{1}(A_{r}), \\ldots, \\lambda_{r}(A_{r})"
},
{
"math_id": 16,
"text": "G_{r}(\\sigma_{i}) = G(\\sigma_{i}), \\quad G_{r}^{\\prime}(\\sigma_{i}) = G^{\\prime}(\\sigma_{i}), \\quad \\sigma_{i} = - \\lambda_{i}(A_{r}), \\quad \\forall \\, i=1,\\ldots,r ."
},
{
"math_id": 17,
"text": "\\lambda_i(A_r)"
},
{
"math_id": 18,
"text": "r \\times r"
},
{
"math_id": 19,
"text": "A_r"
},
{
"math_id": 20,
"text": "\\sigma_1, \\ldots, \\sigma_r \\in \\mathbb{C}"
},
{
"math_id": 21,
"text": " A_r = W_r^* A V_r, \\quad b_r = W_r^* b, \\quad c_{r}=V_r^* c, \\quad A_r \\in \\mathbb{R}^{r \\times r}, \\, b_r \\in \\mathbb{R}^r, \\, c_r \\in \\mathbb{R}^r;"
},
{
"math_id": 22,
"text": "V_r = ( v_1 \\mid \\ldots \\mid v_r ) \\in \\mathbb{C}^{n \\times r}"
},
{
"math_id": 23,
"text": "W_r = ( w_1 \\mid \\ldots \\mid w_r ) \\in \\mathbb{C}^{n \\times r}"
},
{
"math_id": 24,
"text": "(\\sigma_i I-A) v_i=b, \\quad (\\sigma_i I-A)^* w_i=c, \\quad \\forall \\, i=1,\\ldots,r ."
},
{
"math_id": 25,
"text": "m"
},
{
"math_id": 26,
"text": "H_2"
},
{
"math_id": 27,
"text": "\\sigma_1^m,\\ldots,\\sigma_r^m"
},
{
"math_id": 28,
"text": " \\sigma_i^{m+1} = -\\lambda_i(A_r), \\, \\forall \\, i=1,\\ldots,r ."
},
{
"math_id": 29,
"text": "\\frac{ |\\sigma_i^{m+1}-\\sigma_i^m| }{|\\sigma_i^m|} < \\text{tol}, \\, \\forall \\, i=1,\\ldots,r ."
},
{
"math_id": 30,
"text": " (\\sigma_i^m I-A) v_{i} = b, \\quad (\\sigma_i^m I-A)^* w_i = c, \\quad \\forall \\, i=1,\\ldots,r ."
},
{
"math_id": 31,
"text": "A,b,c"
},
{
"math_id": 32,
"text": "\\text{tol}>0"
},
{
"math_id": 33,
"text": "\\sigma_1,\\ldots,\\sigma_r"
},
{
"math_id": 34,
"text": "(\\sigma_i I-A)v_i=b, \\, \\forall \\, i=1,\\ldots,r"
},
{
"math_id": 35,
"text": "(\\sigma_i I-A)^* w_i=c, \\, \\forall \\, i=1,\\ldots,r"
},
{
"math_id": 36,
"text": "\\sigma_{i}"
},
{
"math_id": 37,
"text": "A_{r} = W_r^* AV_r"
},
{
"math_id": 38,
"text": "\\sigma_i = -\\lambda_i(A_r), \\, \\forall \\, i=1,\\ldots,r"
},
{
"math_id": 39,
"text": "(\\sigma_i I-A)^{*}w_{i}=c, \\, \\forall \\, i=1,\\ldots,r"
},
{
"math_id": 40,
"text": "A_r=W_r^* AV_r, \\, b_r=W_r^{*}b, \\, c_r^T=c^T V_r"
},
{
"math_id": 41,
"text": "A=A^{T}, \\, b=c ."
}
]
| https://en.wikipedia.org/wiki?curid=67858994 |
6786225 | Distance decay | Sociological effect
Distance decay is a geographical term which describes the effect of distance on cultural or spatial interactions. The distance decay effect states that the interaction between two locales declines as the distance between them increases. Once the distance is outside of the two locales' activity space, their interactions begin to decrease. It is thus an assertion that the mathematics of the inverse square law in physics can be applied to many geographic phenomena, and is one of the ways in which physics principles such as gravity are often applied metaphorically to geographic situations.
Mathematical models.
Distance decay is graphically represented by a curving line that swoops concavely downward as distance along the x-axis increases. Distance decay can be mathematically represented as an inverse-square law by the expression
formula_0
or
formula_1
where I is interaction and d is distance. In practice, it is often parameterized to fit a specific situation, such as
formula_2
in which the constant A is a vertical stretching factor, B is a horizontal shift (so that the curve has a y-axis intercept at a finite value), and k is the decay power.
It can take other forms such as negative exponential, i.e.
formula_3
In addition to fitting the parameters, a cutoff value can be added to a distance decay function to specify a distance beyond which spatial interaction drops to zero, or to delineate a "zone of indifference" in which all interactions have the same strength.
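As a small illustration (all parameter values here are hypothetical), the parameterized form with an optional cutoff can be written as a short function:
def interaction(d, A=100.0, B=1.0, k=2.0, cutoff=None):
    # Distance-decay curve I = A / (d + B)**k; interaction drops to zero beyond the cutoff.
    if cutoff is not None and d > cutoff:
        return 0.0
    return A / (d + B) ** k
for d in (0, 1, 5, 10, 50):
    print(d, round(interaction(d, cutoff=40.0), 3))   # 100.0, 25.0, 2.778, 0.826, 0.0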
Applications.
Distance decay is evident in town/city centres. It can refer to various things which decline with greater distance from the center of the central business district (CBD):
Distance decay weighs into the decision to migrate, leading many migrants to move less far.
With the advent of faster travel and communications technology, such as telegraphs, telephones, broadcasting, and internet, the effects of distance have been reduced, a trend known as time-space convergence. Exceptions include places previously connected by now-abandoned railways, for example, which have since fallen off the beaten path.
Related concepts.
Related terms include "friction of distance", which describes the forces that create the distance decay effect. Waldo R. Tobler's "First law of geography", an informal statement that "All things are related, but near things are more related than far things," and the mathematical principle spatial autocorrelation are similar expressions of distance decay effects.
"Loss of Strength Gradient" holds that the amount of a nation's military power that could be brought to bear in any part of the world depends on geographic distance.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "I = \\text{constant} \\times d^{-2}"
},
{
"math_id": 1,
"text": "I \\propto 1/d^2,"
},
{
"math_id": 2,
"text": "I = \\frac{A}{(d + B)^k},"
},
{
"math_id": 3,
"text": "I \\propto e^{-d}. "
}
]
| https://en.wikipedia.org/wiki?curid=6786225 |
67866203 | Fuller calculator | An advanced type of slide rule
The Fuller calculator, sometimes called Fuller's cylindrical slide rule, is a cylindrical slide rule with a helical main scale taking 50 turns around the cylinder. This creates an instrument of considerable precision – it is equivalent to a traditional slide rule long. It was invented in 1878 by George Fuller, professor of engineering at Queen's University Belfast, and despite its size and price it remained on the market for nearly a century because it outperformed nearly all other slide rules.
As with other slide rules, the Fuller is limited to calculations based on multiplication and division with additional scales allowing for trigonometrical and exponential functions. The mechanical calculators produced in the same era were generally restricted to addition and subtraction with only advanced versions, like the Arithmometer, able to multiply and divide. Even these advanced machines could not perform trigonometry or exponentiation and they were bigger, heavier and much more expensive than the Fuller. In the mid-twentieth century the handheld Curta mechanical calculator became available which also competed in convenience and price. However, for scientific calculations the Fuller remained viable until 1973 when it was made obsolete by the HP-35 handheld scientific electronic calculator.
Design.
Model 1, the standard model.
In essence, the calculator consists of three separate hollow cylindrical parts that can twist and slide over each other about a common axis without any tendency to slip. The following details describe the version made between 1921 and 1935. There is a papier-mâché cylinder "(marked D in the annotated photograph)" some long and in diameter fastened to a mahogany handle. A second papier-mâché cylinder "(marked C") – long and diameter – is a slide fit over the first. Both cylinders are covered in paper varnished with shellac. The second, outer, cylinder is printed with the slide rule's primary logarithmic scale in the form of a 50-turn helix long with annotations on the scale going from 100 to 1000. A brass tube with a mahogany cap at the top is a slide fit into the first cylinder.
A brass pointer with an engraved index marker at its tip "(marked A)" is attached to the handle so that it points to a place on the primary logarithmic scale, depending on the position to which the scale on cylinder C has been adjusted. A second brass pointer "(marked B)" is attached to the top cap pointing down over the logarithmic scale and it is positioned by rotating and sliding the cap at the top. This pointer has four index marks "(marked B1, B2, B3, B4)" such that whichever one is convenient may be used. Printed on the inner cylinder D are simply tables of data for reference purposes.
The calculator was sold in a hinged mahogany case which, if required, holds the instrument when in use by means of a brass support that can be latched to the outer end of the case. Out of its case the calculator weighs about . For all except the earliest instruments the last two digits of the date and a serial number, believed to be consecutively allocated, are stamped at the top of pointer B.
Other Fuller models.
The calculator described above was called "Model No. 1" . Model 2 had scales on the inner cylinder for calculating logs and sines. The "Fuller-Bakewell" model 3 had two scales of angles printed on the inner cylinder to calculate cosine² and sine⋅cosine for use by engineers and surveyors for tacheometry calculations. A smaller model with a scale was available for a short time but very few survive. In about 1935 the brass tube was replaced by one of phenolic resin and in about 1945 the mahogany was replaced by Bakelite.
Included in Stanley's 1912 catalogue and continuing there until 1958 was Barnard's Coordinate calculator. It is very similar in construction to the Fuller instruments but its pointers have multiple indices so additional trigonometrical functions can be used. It cost slightly less than the Fuller-Bakewell and a 1919 example is held by the Science Museum, London. In 1962 the Whythe-Fuller complex number calculator was introduced. As well as being able to multiply and divide complex numbers it can convert between Cartesian and polar coordinates.
Comparison with other slide rules and contemporaneous calculators.
The calculator's unusual single-scale design makes its helical spiral equivalent to a scale twice this length on a traditional slide rule – long. The scale can always be read to four significant figures and often to five. In 1900 William Stanley, whose firm manufactured and sold scientific instruments including the Fuller calculator, described the slide rule as "possibly the highest refinement in this class of rules".
When it was introduced the Fuller calculator had a much greater precision than other slide rules although the Thacher instrument became available a couple of years later. This was made in the United States and was comparable in size and precision but radically different in design. However, both of these types of slide rule required some skill to operate accurately compared with mechanical calculators which manipulated exact numerical digits rather than using positioning and reading from a graduated scale. Mechanical calculators could only add and subtract (which the Fuller did not do at all) although models such as the Arithmometer could perform all four functions of elementary arithmetic. No mechanical calculators could calculate transcendental functions, which slide rules could be designed to do, and they were bigger, heavier and much more expensive than any slide rule, including the Fuller.
However, a revolutionary miniature mechanical calculator went on sale in the mid-twentieth century – while Curt Herzstark had been imprisoned in a Nazi concentration camp in World War II he had developed the design of the handheld Curta mechanical calculator. It was simple to use and, being digital, was completely accurate. Because of these advantages and despite its somewhat higher price its total sales were 150,000 – over ten times more than the Fuller. Its range of mathematical calculations was seen as being adequate. However, for scientific calculations the Fuller remained viable until 1973 when, along with the Curta, it was made obsolete by the Hewlett-Packard HP-35 handheld scientific electronic calculator.
Invention, sales and demise.
The calculator was invented by George Fuller (1829–1907), professor of engineering at Queen's University Belfast (Queen's College at that time). He patented it in Britain in 1878, described it in "Nature" in 1879 and in that year he also patented it in the United States, depositing a patent model.
Fuller's calculators were manufactured by the scientific instrument maker W.F. Stanley & Co. of London who made nearly 14,000 between 1878 and 1973.
In Britain the prices charged by W.F. Stanley in 1900 were for model 1 £3 and for model 3 £4 10s. The Whythe-Fuller model was advertised in a 1962 W.F. Stanley catalogue at £21. The calculator was still listed in Stanley's catalogue in 1976 when model 1 cost £60 and model 2 was £61.25.
In the United States the instrument was marketed by Keuffel and Esser who only supplied model 1. They described it as "Fuller's Spiral Slide Rule" and, over the period it was sold between 1895 and 1927, it rose in price from $28 to $42.
From the time when serial numbers were first stamped (about 1900) to when production ceased in 1973 around 14,000 instruments were made. Production was about 180 per year overall but it declined after about 1955. In 1949 "Encyclopædia Britannica", noting that the Fuller had been designed in 1878, reported that it "has been in considerable use up to the present time".
In 1958 the mathematician and physicist Douglas Hartree wrote that the Fuller "... is cheap compared with a desk machine and may be found very useful in work for which its accuracy is adequate and in circumstances in which the cost of a desk machine is prohibitive. [...] With one of these slide-rules and an adding machine much useful numerical work can be done ...". In 1968 the standard Fuller cost about $50 at a time when an electronic Hewlett-Packard HP 9100A desktop calculator cost just under $5000.
But in 1972 Hewlett-Packard introduced the HP-35, the first handheld calculator with scientific functions, at $395 – the Fuller went out of production the next year.
Operation.
Multiplication and division.
The instrument operates on the principle that two pointers are set at an appropriate separation on the helical scale of the calculator. The relevant numbers are indexed by adjusting separately both the movable cylinder and the movable pointer. Since the scale is logarithmic the separation represents the ratio of the numbers. If the cylinder is then moved without altering the positions of the pointers, this same ratio applies between any other pair of numbers addressed. In other words, it is a logarithmic Gunter's scale wound into a helix with Gunter's compass points being provided by pointers A and B.
To multiply two numbers, "p" and "q", cylinder C is rotated and shifted until pointer A points to "p" and pointer B is then moved so B1 points to 100. Next, cylinder C is moved so B1 points to "q". The product is then read from the pointer A. The decimal point is determined as with an ordinary slide rule. At the end of a calculation the slide rule is already positioned to continue with further multiplications ("p" x "q" x "r" ...).
To divide "p" by "q", cylinder C is rotated and shifted until pointer A points to "p", B1 is brought to "q", cylinder C is moved to bring 100 to B1 and the quotient is read from pointer A. It turns out to be particularly efficient to alternate multiplication with division.
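The pointer procedure can be mimicked numerically: a value's place on the single helical scale is just the fractional part of its base-10 logarithm, so setting pointers and sliding the cylinder amounts to adding scale positions. The Python sketch below is a hypothetical model of the instrument, not a description of its markings.
import math
def position(x):
    # Fractional position along the one-decade scale (0 at 100, 1 at 1000).
    return math.log10(x) % 1.0
def read(pos):
    # Value printed on the scale at a given fractional position, wrapping past 1000.
    return 100 * 10 ** (pos % 1.0)
p, q = 273, 314
print(round(read(position(p) + position(q)), 1))   # 857.2 -- and indeed 273 x 314 = 85,722
print(round(50 * position(p), 1))                  # pointer A sits about 21.8 turns up the 50-turn helix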
Determining logarithms.
There are two other scales inscribed on the calculator which allow logarithms to be calculated, enabling such evaluations as "p""q" and formula_0. The scales are linear and one is engraved along the length of pointer B and the other printed around the circumference of the top of cylinder C. Index B1 is set to the relevant value on cylinder C and then two readings are taken. The first reading is from the scale on pointer B where it crosses the topmost spiral of the helical scale on the cylinder. The second reading is from the scale at the top circumference of cylinder C where it crosses the left edge of pointer B. The sum of the readings provides the mantissa of the log of the value.
Trigonometry and log functions.
For model 2 instruments with scales on the inner cylinder D, there is an index mark inscribed on both the top and bottom edges of cylinder C. As an example of use, when the lower index mark is set to an angle printed on the lower scale on cylinder D, pointer A points to the corresponding value of sine on cylinder C. The same approach applies for the log scale on the upper part of cylinder D. The model 3 Fuller–Bakewell is used in the same way but its scales on cylinder D are for cosine² and sine⋅cosine "(see photograph)".
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt[q]{p}"
}
]
| https://en.wikipedia.org/wiki?curid=67866203 |
67869112 | Proportional-fair rule | Decision rule for social choice
In operations research and social choice, the proportional-fair (PF) rule is a rule saying that, among all possible alternatives, one should pick an alternative that cannot be improved, where "improvement" is measured by the sum of relative improvements possible for each individual agent. It aims to provide a compromise between the utilitarian rule - which emphasizes overall system efficiency, and the egalitarian rule - which emphasizes individual fairness.
The rule was first presented in the context of rate control in communication networks. However, it is a general social choice rule and can also be used, for example, in resource allocation.
Definition.
Let formula_0 be a set of possible `states of the world' or `alternatives'. Society wishes to choose a single state from formula_0. For example, in a single-winner election, formula_0 may represent the set of candidates; in a resource allocation setting, formula_0 may represent all possible allocations of the resource.
Let formula_1 be a finite set, representing a collection of individuals. For each formula_2, let formula_3 be a "utility function", describing the amount of happiness an individual "i" derives from each possible state.
A "social choice rule" is a mechanism which uses the data formula_4 to select some element(s) from formula_0 which are `best' for society. The question of what 'best' means is the basic question of social choice theory. The proportional-fair rule selects an element formula_5 such that, for every other state formula_6:
formula_7
Note that the term inside the sum, formula_8, represents the relative gain of agent "i" when switching from "x" to "y". The PF rule prefers a state "x" over a state "y" if and only if the sum of relative gains when switching from "x" to "y" is not positive.
Comparison to other rules.
The utilitarian rule selects an element formula_5 that maximizes the "sum" of individual utilities, that is, for every other state formula_6:
formula_9
That rule ignores the current utility of the individuals. In particular, it might select a state in which the utilities of some individuals are zero, if the utilities of some other individuals are sufficiently large.
The egalitarian rule selects an element formula_5 that maximizes the "smallest" individual utility, that is, for every other state formula_6:
formula_10
This rule ignores the total efficiency of the system. In particular, it might select a state in which the utilities of most individuals are very low, just to make the smallest utility slightly larger.
The proportional-fair rule aims to balance between these two extremes. On one hand, it considers a sum of utilities rather than just the smaller utility; on the other hand, inside the sum, it gives more weight to agents whose current utility is smaller. In particular, if the utility of some individual in "x" is 0, and there is another state "y" in which their utility is larger than 0, then the PF rule would prefer state y, as the relative improvement of that individual is infinite (it is divided by 0).
Properties.
When the utility sets are convex, a proportional-fair solution always exists. Moreover, it maximizes the "product" of utilities (also known as the "Nash welfare").
When the utility sets are not convex, a proportional-fair solution is not guaranteed to exist. However, when it exists, it still maximizes the product of utilities.
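For a finite set of alternatives with strictly positive utilities, this characterization suggests a simple computation: pick the alternative with the largest product of utilities (equivalently, the largest sum of logarithms) and verify the defining inequality against every other alternative. The sketch below is illustrative only and uses hypothetical utility profiles; when no proportional-fair alternative exists (possible in the non-convex case), the check fails for every state.
import math
def pf_choice(utilities):
    # utilities maps each state to its tuple of individual utilities (all positive).
    return max(utilities, key=lambda x: sum(math.log(v) for v in utilities[x]))
def is_pf(x, utilities):
    # x is proportionally fair if no state y yields a positive sum of relative gains over x.
    return all(sum((utilities[y][i] - utilities[x][i]) / utilities[x][i]
                   for i in range(len(utilities[x]))) <= 1e-12
               for y in utilities)
profiles = {"a": (5, 5), "b": (6, 2), "c": (2, 6)}
best = pf_choice(profiles)
print(best, is_pf(best, profiles))   # "a True": (5, 5) maximizes the product and satisfies the condition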
The PF rule in specific settings.
Proportional fairness has been studied in various settings.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "I"
},
{
"math_id": 2,
"text": "i \\in I"
},
{
"math_id": 3,
"text": "u_i:X\\longrightarrow\\mathbb{R}"
},
{
"math_id": 4,
"text": "(u_i)_{i \\in I}"
},
{
"math_id": 5,
"text": "x \\in X"
},
{
"math_id": 6,
"text": "y \\in X"
},
{
"math_id": 7,
"text": "\n0 \\geq \\sum_{i\\in I} \\frac{u_i(y) - u_i(x)}{ u_i(x)}.\n"
},
{
"math_id": 8,
"text": " \\frac{u_i(y) - u_i(x)}{ u_i(x)}"
},
{
"math_id": 9,
"text": "\n0 \\geq \\sum_{i\\in I} \\left(u_i(y) - u_i(x)\\right).\n"
},
{
"math_id": 10,
"text": "\n0 \\geq \\min_{i\\in I} u_i(y) - \\min_{i\\in I} u_i(x).\n"
}
]
| https://en.wikipedia.org/wiki?curid=67869112 |
6787723 | Fluxional molecule | Molecules whose atoms interchange between symmetric positions
In chemistry and molecular physics, fluxional (or non-rigid) molecules are molecules that undergo dynamics such that some or all of their atoms interchange between symmetry-equivalent positions. Because virtually all molecules are fluxional in some respects, e.g. bond rotations in most organic compounds, the term fluxional depends on the context and the method used to assess the dynamics. Often, a molecule is considered fluxional if its spectroscopic signature exhibits line-broadening (beyond that dictated by the Heisenberg uncertainty principle) due to chemical exchange. In some cases, where the rates are slow, fluxionality is not detected spectroscopically, but by isotopic labeling and other methods.
Spectroscopic studies.
Many organometallic compounds exhibit fluxionality. Fluxionality is, however, pervasive.
NMR spectroscopy.
Temperature-dependent changes in the NMR spectra result from dynamics associated with the fluxional molecules when those dynamics proceed at rates comparable to the frequency differences observed by NMR. The experiment is called DNMR and typically involves recording spectra at various temperatures. In the ideal case, low temperature spectra can be assigned to the "slow exchange limit", whereas spectra recorded at higher temperatures correspond to molecules at the "fast exchange limit". Typically, high temperature spectra are simpler than those recorded at low temperatures, since at high temperatures, equivalent sites are averaged out. Prior to the advent of DNMR, kinetics of reactions were measured on non-equilibrium mixtures, monitoring the approach to equilibrium.
Many molecular processes exhibit fluxionality that can be probed on the NMR time scale. Beyond the examples highlighted below, other classic examples include the Cope rearrangement in bullvalene and the chair inversion in cyclohexane.
For processes that are too slow for traditional DNMR analysis, the technique spin saturation transfer (SST, also called EXSY for exchange spectroscopy) is applicable. This magnetization transfer technique gives rate information, provided that the rates exceed 1/"T"1.
IR spectroscopy.
Although less common, some dynamics are also observable on the time-scale of IR spectroscopy. One example is electron transfer in a mixed-valence dimer of metal clusters. Application of the equation for coalescence of two signals separated by 10 cm−1 gives the following result:
formula_0
Clearly, processes that induce line-broadening on the IR time-scale must be much more rapid than the cases that exchange on the NMR time scale.
Examples.
Cyclohexane and related rings.
The interconversion of equivalent chair conformers of cyclohexane (and many other cyclic compounds) is called ring flipping. Carbon–hydrogen bonds that are axial in one configuration become equatorial in the other, and vice versa. At room temperature the two chair conformations rapidly equilibrate. The proton- and carbon-13 NMR spectra of cyclohexane each show only a singlet near room temperature. At low temperatures, the singlet in the 1H NMR spectrum decoalesces but the 13C NMR spectrum remains unchanged.
Berry pseudorotation of pentacoordinate compounds.
A prototypical fluxional molecule is phosphorus pentafluoride. Its 19F NMR spectrum consists of a 31P-coupled doublet, indicating that the equatorial and axial fluorine centers interchange rapidly on the NMR timescale. Fluorine-19 NMR spectroscopy, even at temperatures as low as −100 °C, fails to distinguish the axial from the equatorial fluorine environments. The apparent equivalency arises from the low barrier for pseudorotation via the Berry mechanism, by which the axial and equatorial fluorine atoms rapidly exchange positions. Iron pentacarbonyl (Fe(CO)5) follows the pattern set for PF5: only one signal is observed in the 13C NMR spectrum near room temperature, whereas at low temperatures two signals in a 2:3 ratio can be resolved. In sulfur tetrafluoride (SF4), a similar pattern is observed even though this compound has only four ligands.
A well-studied fluxional ion is the methanium ion, CH5+. Even at absolute zero there is no rigid molecular structure; the H atoms are always in motion. More precisely, the spatial distribution of protons in CH5+ is many times broader than that of its parent molecule, methane (CH4).
Six-coordinate species.
While nonrigidity is common for pentacoordinate species, six-coordinate species typically adopt a more rigid octahedral molecular geometry, featuring a close-packed array of six ligating atoms surrounding a central atom. Such compounds do rearrange intramolecularly via the Ray-Dutt twist and the Bailar twist, but the barriers for these processes are typically high enough that they do not lead to line broadening. For some compounds, dynamics occur via dissociation of a ligand, giving a pentacoordinate intermediate, which is subject to the mechanisms discussed above. Yet another mechanism, exhibited by Fe(CO)4(SiMe3)2 and related hydride complexes, is intramolecular scrambling of ligands over the faces of the tetrahedron defined by the four CO ligands.
Dimethylformamide.
A classic example of a fluxional molecule is dimethylformamide (DMF).
At temperatures near 100 °C, the 500 MHz 1H NMR spectrum of DMF shows only one signal for the methyl groups. Near room temperature, however, separate signals are seen for the non-equivalent methyl groups. The rate of exchange can be calculated at the temperature where the two signals are just merged. This "coalescence temperature" depends on the measuring field. The relevant equation is:
formula_1
where Δνo is the difference in Hz between the frequencies of the exchanging sites. These frequencies are obtained from the limiting low-temperature NMR spectrum. At these lower temperatures, the dynamics continue, of course, but the contribution of the dynamics to line broadening is negligible.
For example, if Δνo = 1 ppm at 500 MHz (i.e. 500 Hz)
formula_2 (ca. 0.5 millisecond half-life)
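A minimal numeric check of this estimate (illustrative only) compares the exact coalescence prefactor π/√2 with the rounded factor of 2 used above:

```python
import math

delta_nu = 1e-6 * 500e6                       # 1 ppm at 500 MHz = 500 Hz
k_exact = math.pi * delta_nu / math.sqrt(2)   # coalescence expression above
k_rough = 2 * delta_nu                        # the rounded estimate in the text
print(k_exact, k_rough)                       # ~1110.7 s^-1 and 1000 s^-1
```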
"Ring whizzing".
The compound Fe(η5-C5H5)(η1-C5H5)(CO)2 exhibits the phenomenon of "ring whizzing".
At 30 °C, the 1H NMR spectrum shows only two peaks, one typical (δ 5.6) of the η5-C5H5 ligand and the other assigned to the η1-C5H5 ligand. The singlet assigned to the η1-C5H5 ligand splits at low temperatures owing to the slow hopping of the Fe center from carbon to carbon in the η1-C5H5 ligand. Two mechanisms have been proposed, with the consensus favoring the 1,2-shift pathway.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k \\sim \\Delta \\nu_\\circ \\sim 2(10 \\mathrm{cm}^{-1}) (300 \\cdot 10^8 \\mathrm{cm/s}) \\sim 6 \\times 10^{11} \\mathrm{s}^{-1} \\cdot"
},
{
"math_id": 1,
"text": "k = \\frac{\\pi \\Delta \\nu_\\circ}{2^{1/2}} \\sim 2 \\Delta \\nu_\\circ"
},
{
"math_id": 2,
"text": "k \\sim 2(500) = 1000 \\mathrm{s}^{-1}"
}
]
| https://en.wikipedia.org/wiki?curid=6787723 |
67893191 | Bergman's diamond lemma | In mathematics, specifically the field of abstract algebra, Bergman's Diamond Lemma (after George Bergman) is a method for confirming whether a given set of monomials of an algebra forms a formula_0-basis. It is an extension of Gröbner bases to non-commutative rings. The proof of the lemma gives rise to an algorithm for obtaining a non-commutative Gröbner basis of the algebra from its defining relations. However, in contrast to Buchberger's algorithm, in the non-commutative case, this algorithm may not terminate.
Preliminaries.
Let formula_0 be a commutative associative ring with identity element 1, usually a field. Take an arbitrary set formula_1 of variables. In the finite case one usually has formula_2. Then formula_3 is the free semigroup with identity 1 on formula_1. Finally, formula_4 is the free associative formula_0-algebra over formula_1. Elements of formula_3 will be called words, since elements of formula_1 can be seen as letters.
Monomial Ordering.
The reductions below require a choice of ordering formula_5 on the words, i.e. monomials, of formula_3. This has to be a total order and satisfy the following: for all words formula_6 and formula_7, if formula_8 then formula_9 (compatibility with multiplication on both sides); and for every word formula_7, the set formula_10 is finite.
We call such an order admissible. An important example is the degree lexicographic order, where formula_8 if formula_7 has smaller degree than formula_11; or in the case where they have the same degree, we say formula_8 if formula_7 comes earlier in the lexicographic order than formula_11. For example the degree lexicographic order on monomials of formula_12 is given by first assuming formula_13. Then the above rule implies that the monomials are ordered in the following way:
formula_14
Every element formula_15 has a leading word which is the largest word under the ordering formula_5 which appears in formula_16 with non-zero coefficient. In formula_12 if formula_17, then the leading word of formula_16 under degree lexicographic order is formula_18.
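As an illustration (a sketch only; the string encoding of words and the function names are not part of Bergman's treatment), the degree lexicographic order and the extraction of a leading word can be written in a few lines of Python:

```python
# Degree lexicographic order on words over the alphabet x < y, encoded as strings.
ALPHABET = "xy"

def deglex_key(word):
    """Sort key: compare first by length (degree), then letter by letter."""
    return (len(word), [ALPHABET.index(c) for c in word])

def leading_word(poly):
    """poly maps words (strings) to nonzero coefficients; return the deglex-largest word."""
    return max(poly, key=deglex_key)

# h = x^2 + 2 x^2 y - y^2 x, with the empty string playing the role of 1:
h = {"xx": 1, "xxy": 2, "yyx": -1}
print(leading_word(h))   # 'yyx', i.e. y^2 x, as stated above
print(sorted(["", "x", "y", "xx", "xy", "yx", "yy"], key=deglex_key))
```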
Reduction.
Assume we have a set formula_19 which generates a 2-sided ideal formula_20 of formula_4. Then we may scale each formula_21 such that its leading word formula_22 has coefficient 1. Thus we can write formula_23, where formula_24 is a linear combination of words formula_11 such that formula_25. A word formula_7 is called reduced with respect to the relations formula_26 if it does not contain any of the leading words formula_27. Otherwise, formula_28 for some formula_29 and some formula_30. Then there is a reduction formula_31, which is an endomorphism of formula_4 that fixes all elements of formula_3 apart from formula_28 and sends this to formula_32. By the choice of ordering there are only finitely many words less than any given word, hence a finite composition of reductions will send any formula_15 to a linear combination of reduced words.
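The reduction process can be sketched concretely as follows (illustrative code only; each relation is stored as a pair (leading word, lower terms) with the leading coefficient already scaled to 1, so that formula_23 lets the leading word be replaced by formula_24):

```python
def add_term(poly, word, coeff):
    """Add coeff * word to a polynomial stored as a dict word -> coefficient."""
    poly[word] = poly.get(word, 0) + coeff
    if poly[word] == 0:
        del poly[word]

def reduce_once(poly, relations):
    """Replace one occurrence u*w_sigma*v by u*f_sigma*v; return None if poly is reduced."""
    for word, coeff in poly.items():
        for lead, lower in relations:
            pos = word.find(lead)
            if pos == -1:
                continue
            u, v = word[:pos], word[pos + len(lead):]
            new_poly = dict(poly)
            add_term(new_poly, word, -coeff)        # remove u * w_sigma * v
            for w2, c2 in lower.items():            # add coeff * u * f_sigma * v
                add_term(new_poly, u + w2 + v, coeff * c2)
            return new_poly
    return None

def normal_form(poly, relations):
    """Reduce until no leading word occurs; this terminates because the admissible
    order admits only finitely many words below any given word."""
    while True:
        reduced = reduce_once(poly, relations)
        if reduced is None:
            return poly
        poly = reduced

# Relations yx = p xy, zx = q xz, zy = r yz with, for concreteness, p = q = r = 2:
p = q = r = 2
rels = [("yx", {"xy": p}), ("zx", {"xz": q}), ("zy", {"yz": r})]
print(normal_form({"zyx": 1}, rels))   # {'xyz': 8}, i.e. p*q*r*xyz
```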
Any element shares an equivalence class modulo formula_20 with its reduced form. Thus the canonical images of the reduced words in formula_33 form a formula_0-spanning set. The idea of non-commutative Gröbner bases is to find a set of generators formula_34 of the ideal formula_20 such that the images of the corresponding reduced words in formula_33 are a formula_0-basis. Bergman's Diamond Lemma lets us verify if a set of generators formula_34 has this property. Moreover, in the case where it does not have this property, the proof of Bergman's Diamond Lemma leads to an algorithm for extending the set of generators to one that does.
An element formula_35 is called reduction-unique if given two finite compositions of reductions formula_36 and formula_37 such that the images formula_38 and formula_39 are linear combinations of reduced words, then formula_40. In other words, if we apply reductions to transform an element into a linear combination of reduced words in two different ways, we obtain the same result.
Ambiguities.
When performing reductions there might not always be an obvious choice for which reduction to do. This is called an ambiguity and there are two types which may arise. Firstly, suppose we have a word formula_41 for some non-empty words formula_42 and assume that formula_43 and formula_44 are leading words for some formula_45. This is called an overlap ambiguity, because there are two possible reductions, namely formula_46 and formula_47. This ambiguity is resolvable if formula_48 and formula_49 can be reduced to a common expression using compositions of reductions.
Secondly, one leading word may be contained in another i.e. formula_50 for some words formula_51 and some indices formula_45. Then we have an inclusion ambiguity. Again, this ambiguity is resolvable if formula_52, for some compositions of reductions formula_36 and formula_37.
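Continuing the same string encoding, the two kinds of ambiguity can be listed mechanically (a sketch only; the leading words used in the demonstration are those of the quantum polynomial ring example treated below):

```python
def overlap_ambiguities(leads):
    """Words t v u with w_sigma = t v and w_tau = v u for a nonempty proper overlap v."""
    found = []
    for w1 in leads:
        for w2 in leads:
            for k in range(1, min(len(w1), len(w2))):
                if w1[-k:] == w2[:k]:          # overlap v of length k
                    found.append(w1 + w2[k:])
    return found

def inclusion_ambiguities(leads):
    """Pairs in which one leading word properly contains another."""
    return [(w1, w2) for w1 in leads for w2 in leads if w1 != w2 and w2 in w1]

leads = ["yx", "zx", "zy"]            # leading words of the example treated below
print(overlap_ambiguities(leads))     # ['zyx'] -- exactly one overlap ambiguity
print(inclusion_ambiguities(leads))   # []     -- no inclusion ambiguities
```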
Statement of the Lemma.
The statement of the lemma is simple but involves the terminology defined above. This lemma is applicable as long as the underlying ring is associative.
Let formula_19 generate an ideal formula_20 of formula_4, where formula_23 with formula_27 the leading words under some fixed admissible ordering of formula_3. Then the following are equivalent:
(1) all overlap and inclusion ambiguities of formula_26 are resolvable;
(2) every element of formula_4 is reduction-unique;
(3) the images of the reduced words in formula_53 form a formula_0-basis of formula_53.
Here the reductions are done with respect to the fixed set of generators formula_26 of formula_20. When any of the above hold we say that formula_26 is a Gröbner basis for formula_20. Given a set of generators, one usually checks the first or second condition to confirm that the images of the reduced words form a formula_0-basis.
Examples.
Resolving ambiguities.
Take formula_54, which is the quantum polynomial ring in 3 variables, and assume formula_55. Take formula_5 to be degree lexicographic order, then the leading words of the defining relations are formula_56, formula_57 and formula_58. There is exactly one overlap ambiguity which is formula_59 and no inclusion ambiguities. One may resolve via formula_60 or via formula_61 first. The first option gives us the following chain of reductions,
formula_62
whereas the second possibility gives,
formula_63
Since formula_64 are commutative the above are equal. Thus the ambiguity resolves and the Lemma implies that formula_65 is a Gröbner basis of formula_20.
Non-resolving ambiguities.
Let formula_66. Under the same ordering as in the previous example, the leading words of the generators of the ideal are formula_67, formula_57 and formula_58. There are two overlap ambiguities, namely formula_68 and formula_69. Let us consider formula_68. If we resolve formula_67 first we get,
formula_70
which contains no leading words and is therefore reduced. Resolving formula_57 first we obtain,
formula_71
Since both of the above are reduced but not equal we see that the ambiguity does not resolve. Hence formula_72 is not a Gröbner basis for the ideal it generates.
Algorithm.
The following short algorithm follows from the proof of Bergman's Diamond Lemma. It is based on adding new relations which resolve previously unresolvable ambiguities. Suppose that formula_73 is an overlap ambiguity which does not resolve. Then, for some compositions of reductions formula_36 and formula_37, we have that formula_74 and formula_75 are distinct linear combinations of reduced words. Therefore, we obtain a new non-zero relation formula_76. The leading word of this relation is necessarily different from the leading words of existing relations. Now scale this relation by a non-zero constant such that its leading word has coefficient 1 and add it to the generating set of formula_20. The process is analogous for inclusion ambiguities.
Now, the previously unresolvable overlap ambiguity resolves by construction of the new relation. However, new ambiguities may arise. This process may terminate after a finite number of iterations producing a Gröbner basis for the ideal or never terminate. The infinite set of relations produced in the case where the algorithm never terminates is still a Gröbner basis, but it may not be useful unless a pattern in the new relations can be found.
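The key step of this procedure can be sketched as follows (illustrative code, repeating the helpers from the earlier sketches so that it runs on its own; it reduces an ambiguity in the two possible ways and returns the difference of the results as a candidate new relation):

```python
def add_term(poly, word, coeff):
    poly[word] = poly.get(word, 0) + coeff
    if poly[word] == 0:
        del poly[word]

def normal_form(poly, relations):
    """Rewrite occurrences of leading words until none remains."""
    poly = dict(poly)
    changed = True
    while changed:
        changed = False
        for word, coeff in list(poly.items()):
            for lead, lower in relations:
                pos = word.find(lead)
                if pos == -1:
                    continue
                u, v = word[:pos], word[pos + len(lead):]
                add_term(poly, word, -coeff)
                for w2, c2 in lower.items():
                    add_term(poly, u + w2 + v, coeff * c2)
                changed = True
                break
            if changed:
                break
    return poly

def reduce_at(word, pos, lead, lower):
    """Apply one reduction u*lead*v -> u*f*v at the given position of the word."""
    u, v = word[:pos], word[pos + len(lead):]
    out = {}
    for w2, c2 in lower.items():
        add_term(out, u + w2 + v, c2)
    return out

def new_relation_from_overlap(t, v, u, rels):
    """For the ambiguity w = t v u, reduce w starting with the relation for t v and
    starting with the relation for v u; the difference of the two normal forms is
    zero exactly when the ambiguity resolves, and otherwise gives a new relation."""
    lower = dict(rels)
    w = t + v + u
    h1 = normal_form(reduce_at(w, 0, t + v, lower[t + v]), rels)
    h2 = normal_form(reduce_at(w, len(t), v + u, lower[v + u]), rels)
    diff = dict(h1)
    for word, c in h2.items():
        add_term(diff, word, -c)
    return diff

# Relations z^2 = xy + yx, zx = xz, zy = yz from the non-resolving example above:
rels = [("zz", {"xy": 1, "yx": 1}), ("zx", {"xz": 1}), ("zy", {"yz": 1})]
print(new_relation_from_overlap("z", "z", "x", rels))
# {'yxx': 1, 'xxy': -1} (up to dict ordering), i.e. y x^2 - x^2 y, as derived below
```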
Example.
Let us continue with the example from above where formula_66. We found that the overlap ambiguity formula_68 does not resolve. This gives us formula_77 and formula_78. The new relation is therefore formula_79 whose leading word is formula_80 with coefficient 1. Hence we do not need to scale it and can add it to our set of relations which is now formula_81. The previous ambiguity now resolves to either formula_82 or formula_83. Adding the new relation did not add any ambiguities so we are left with the overlap ambiguity formula_69 we identified above. Let us try and resolve it with the relations we currently have. Again, resolving formula_67 first we obtain,
formula_84
On the other hand resolving formula_58 twice first and then formula_67 we find,
formula_85
Thus we have formula_86 and formula_87 and the new relation is formula_88 with leading word formula_18. Since the coefficient of the leading word is -1 we scale the relation and then add formula_89 to the set of defining relations. Now all ambiguities resolve and Bergman's Diamond Lemma implies that
formula_90 is a Gröbner basis for the ideal it defines.
Further generalisations.
The importance of the diamond lemma can be seen from the number of other mathematical structures for which analogous versions have been developed.
The lemma has been used to prove the Poincaré–Birkhoff–Witt theorem. | [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "X = \\{ x_1, x_2, x_3, \\dots, x_n \\}"
},
{
"math_id": 3,
"text": "\\langle X \\rangle"
},
{
"math_id": 4,
"text": "k \\langle X \\rangle"
},
{
"math_id": 5,
"text": "<"
},
{
"math_id": 6,
"text": "u,u', v "
},
{
"math_id": 7,
"text": "w"
},
{
"math_id": 8,
"text": "w < v"
},
{
"math_id": 9,
"text": "uwu' < u v u'"
},
{
"math_id": 10,
"text": "\\{v \\in \\langle X \\rangle : v < w \\}"
},
{
"math_id": 11,
"text": "v"
},
{
"math_id": 12,
"text": "k \\langle x, y \\rangle"
},
{
"math_id": 13,
"text": "x < y"
},
{
"math_id": 14,
"text": "1 < x < y < x^2 < xy < yx < y^2 < x^3 < x^2 y < \\dots"
},
{
"math_id": 15,
"text": "h \\in k \\langle X \\rangle"
},
{
"math_id": 16,
"text": "h"
},
{
"math_id": 17,
"text": "h = x^2 + 2 x^2 y - y^2 x"
},
{
"math_id": 18,
"text": "y^2 x"
},
{
"math_id": 19,
"text": "\\{ g_\\sigma \\}_{\\sigma \\in S} \\subseteq k \\langle X \\rangle"
},
{
"math_id": 20,
"text": "I"
},
{
"math_id": 21,
"text": "g_{\\sigma }"
},
{
"math_id": 22,
"text": "w_\\sigma"
},
{
"math_id": 23,
"text": "g_{\\sigma } = w_{\\sigma} - f_{\\sigma}"
},
{
"math_id": 24,
"text": "f_{\\sigma}"
},
{
"math_id": 25,
"text": "v< w_{\\sigma}"
},
{
"math_id": 26,
"text": "\\{ g_{\\sigma}\\}_{\\sigma \\in S}"
},
{
"math_id": 27,
"text": "w_{\\sigma}"
},
{
"math_id": 28,
"text": "w = u w_\\sigma v"
},
{
"math_id": 29,
"text": "u,v \\in \\langle X \\rangle"
},
{
"math_id": 30,
"text": "\\sigma \\in S"
},
{
"math_id": 31,
"text": "r_{u \\sigma v}: k \\langle X \\rangle \\to k \\langle X \\rangle"
},
{
"math_id": 32,
"text": "u f_\\sigma v"
},
{
"math_id": 33,
"text": "k \\langle X \\rangle / I"
},
{
"math_id": 34,
"text": "g_{\\sigma}"
},
{
"math_id": 35,
"text": "h \\in k \\langle X \\rangle "
},
{
"math_id": 36,
"text": "s_1"
},
{
"math_id": 37,
"text": "s_2"
},
{
"math_id": 38,
"text": "s_1(h)"
},
{
"math_id": 39,
"text": "s_2(h)"
},
{
"math_id": 40,
"text": "s_1(h) = s_2(h)"
},
{
"math_id": 41,
"text": "w = tvu"
},
{
"math_id": 42,
"text": "t,v,u"
},
{
"math_id": 43,
"text": "w_{\\sigma } = tv"
},
{
"math_id": 44,
"text": "w_{\\tau }= vu"
},
{
"math_id": 45,
"text": "\\sigma, \\tau \\in S"
},
{
"math_id": 46,
"text": "r_{1 \\sigma u}"
},
{
"math_id": 47,
"text": "r_{t \\tau 1}"
},
{
"math_id": 48,
"text": "t r_{1 \\sigma u}"
},
{
"math_id": 49,
"text": "r_{t \\tau 1} u"
},
{
"math_id": 50,
"text": "w_{\\sigma } = t \\omega_{\\tau} u"
},
{
"math_id": 51,
"text": "t,u"
},
{
"math_id": 52,
"text": "s_1 \\circ r_{1 \\sigma 1} (w) = s_2 \\circ r_{t \\tau u} (w)"
},
{
"math_id": 53,
"text": "k \\langle X \\rangle /I"
},
{
"math_id": 54,
"text": "A = k \\langle x,y,z \\rangle / (yx-pxy, zx-qxz, zy - ryz)"
},
{
"math_id": 55,
"text": "x < y <z"
},
{
"math_id": 56,
"text": "yx"
},
{
"math_id": 57,
"text": "zx"
},
{
"math_id": 58,
"text": "zy"
},
{
"math_id": 59,
"text": "zyx"
},
{
"math_id": 60,
"text": "yx = pxy"
},
{
"math_id": 61,
"text": "zy = ryz"
},
{
"math_id": 62,
"text": "zyx = pzxy = pqxzy = pqr xyz,"
},
{
"math_id": 63,
"text": "zyx = ryzx = rq yxz = rqp xyz."
},
{
"math_id": 64,
"text": "p,q,r"
},
{
"math_id": 65,
"text": "\\{yx-pxy, zx-qxz, zy - ryz\\}"
},
{
"math_id": 66,
"text": "A = k \\langle x,y,z \\rangle /(z^2-xy-yx, zx-xz, zy-yz)"
},
{
"math_id": 67,
"text": "z^2"
},
{
"math_id": 68,
"text": "z^2 x"
},
{
"math_id": 69,
"text": "z^2 y"
},
{
"math_id": 70,
"text": "z^2 x = (xy+yx) x = xyx + yx^2, "
},
{
"math_id": 71,
"text": "z^2 x = zxz = xz^2 = x( xy +yx) = x^2 y + xyx. "
},
{
"math_id": 72,
"text": "\\{ z^2-xy-yx, zx-xz, zy-yz \\}"
},
{
"math_id": 73,
"text": "w = w_{\\sigma } u = t w_{\\tau}"
},
{
"math_id": 74,
"text": "h_{1} = s_1 \\circ r_{1 \\sigma u} (w)"
},
{
"math_id": 75,
"text": "h_2 = s_2 \\circ r_{t \\tau 1} (w)"
},
{
"math_id": 76,
"text": "h_1 - h_2 \\in I"
},
{
"math_id": 77,
"text": "h_1 = xyx + yx^2 "
},
{
"math_id": 78,
"text": "h_2 = x^2 y + xyx"
},
{
"math_id": 79,
"text": "h_1 - h_2 = yx^2 - x^2 y \\in I"
},
{
"math_id": 80,
"text": "y x^2"
},
{
"math_id": 81,
"text": "\\{ z^2-xy-yx, zx-xz, zy-yz, y x^2- x^2 y\\}"
},
{
"math_id": 82,
"text": "h_1"
},
{
"math_id": 83,
"text": "h_2"
},
{
"math_id": 84,
"text": "z^2 y = (xy+yx) y = xy^2 + yxy."
},
{
"math_id": 85,
"text": "z^2 y = zyz = y z^2 = y(xy+yx) = yxy + y^2 x."
},
{
"math_id": 86,
"text": "h_3 = x y^2 + yxy"
},
{
"math_id": 87,
"text": "h_{4} = yxy + y^2 x"
},
{
"math_id": 88,
"text": "h_3 - h_4 = x y^2 - y^2 x "
},
{
"math_id": 89,
"text": "y^2 x - x y^2 "
},
{
"math_id": 90,
"text": "\\{ z^2-xy-yx, zx-xz, zy-yz, y x^2 - x^2 y,y^2 x - x y^2 \\}"
}
]
| https://en.wikipedia.org/wiki?curid=67893191 |
6789891 | Lévy–Prokhorov metric | In mathematics, the Lévy–Prokhorov metric (sometimes known just as the Prokhorov metric) is a metric (i.e., a definition of distance) on the collection of probability measures on a given metric space. It is named after the French mathematician Paul Lévy and the Soviet mathematician Yuri Vasilyevich Prokhorov; Prokhorov introduced it in 1956 as a generalization of the earlier Lévy metric.
Definition.
Let formula_0 be a metric space with its Borel sigma algebra formula_1. Let formula_2 denote the collection of all probability measures on the measurable space formula_3.
For a subset formula_4, define the ε-neighborhood of formula_5 by
formula_6
where formula_7 is the open ball of radius formula_8 centered at formula_9.
The Lévy–Prokhorov metric formula_10 is defined by setting the distance between two probability measures formula_11 and formula_12 to be
formula_13
For probability measures clearly formula_14.
Some authors omit one of the two inequalities or choose only open or closed formula_5; either inequality implies the other, and formula_15, but restricting to open sets may change the metric so defined (if formula_16 is not Polish).
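For finitely supported measures the distance can be approximated by brute force, since it suffices to test the defining inequalities on subsets of the combined support. The following sketch (illustrative only, for measures on the real line with the metric |x − y|) does this with a binary search over ε:

```python
from itertools import chain, combinations

def prokhorov_distance(mu, nu, tol=1e-9):
    """Brute-force approximation of the Levy-Prokhorov distance between two
    finitely supported probability measures on the real line.
    mu, nu: dicts mapping support points to probabilities summing to 1."""
    points = sorted(set(mu) | set(nu))

    def mass(measure, subset):
        return sum(measure.get(x, 0.0) for x in subset)

    def neighborhood(subset, eps):
        # open eps-neighborhood of the subset, intersected with the support
        return [y for y in points if any(abs(y - x) < eps for x in subset)]

    def subsets():
        return chain.from_iterable(combinations(points, r) for r in range(len(points) + 1))

    def ok(eps):
        # both inequalities of the definition, checked for every subset A of the support
        for A in subsets():
            A_eps = neighborhood(A, eps)
            if mass(mu, A) > mass(nu, A_eps) + eps + 1e-12:
                return False
            if mass(nu, A) > mass(mu, A_eps) + eps + 1e-12:
                return False
        return True

    lo, hi = 0.0, 1.0          # for probability measures the distance is at most 1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if ok(mid) else (mid, hi)
    return hi

print(prokhorov_distance({0.0: 1.0}, {0.3: 1.0}))            # ~0.3
print(prokhorov_distance({0.0: 0.5, 1.0: 0.5}, {0.0: 1.0}))  # ~0.5
```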
Relation to other distances.
Let formula_20 be separable. Then
formula_21, where formula_22 is the Ky Fan distance between random variables "X" and "Y"; in other words, the Lévy–Prokhorov distance is the smallest Ky Fan distance achievable by a coupling of the two measures;
formula_23, where formula_24 denotes the total variation distance of probability measures; and
formula_25, where formula_26 is the Wasserstein metric with formula_27, provided formula_28 have finite moments of order "p".
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(M, d)"
},
{
"math_id": 1,
"text": "\\mathcal{B} (M)"
},
{
"math_id": 2,
"text": "\\mathcal{P} (M)"
},
{
"math_id": 3,
"text": "(M, \\mathcal{B} (M))"
},
{
"math_id": 4,
"text": "A \\subseteq M"
},
{
"math_id": 5,
"text": "A"
},
{
"math_id": 6,
"text": "A^{\\varepsilon} := \\{ p \\in M ~|~ \\exists q \\in A, \\ d(p, q) < \\varepsilon \\} = \\bigcup_{p \\in A} B_{\\varepsilon} (p)."
},
{
"math_id": 7,
"text": "B_{\\varepsilon} (p)"
},
{
"math_id": 8,
"text": "\\varepsilon"
},
{
"math_id": 9,
"text": "p"
},
{
"math_id": 10,
"text": "\\pi : \\mathcal{P} (M)^{2} \\to [0, + \\infty)"
},
{
"math_id": 11,
"text": "\\mu"
},
{
"math_id": 12,
"text": "\\nu"
},
{
"math_id": 13,
"text": "\\pi (\\mu, \\nu) := \\inf \\left\\{ \\varepsilon > 0 ~|~ \\mu(A) \\leq \\nu (A^{\\varepsilon}) + \\varepsilon \\ \\text{and} \\ \\nu (A) \\leq \\mu (A^{\\varepsilon}) + \\varepsilon \\ \\text{for all} \\ A \\in \\mathcal{B}(M) \\right\\}."
},
{
"math_id": 14,
"text": "\\pi (\\mu, \\nu) \\le 1"
},
{
"math_id": 15,
"text": "(\\bar{A})^\\varepsilon = A^\\varepsilon"
},
{
"math_id": 16,
"text": "M"
},
{
"math_id": 17,
"text": "\\pi"
},
{
"math_id": 18,
"text": "\\left( \\mathcal{P} (M), \\pi \\right)"
},
{
"math_id": 19,
"text": "\\mathcal{K} \\subseteq \\mathcal{P} (M)"
},
{
"math_id": 20,
"text": "(M,d)"
},
{
"math_id": 21,
"text": "\\pi (\\mu , \\nu ) = \\inf \\{ \\alpha (X,Y) : \\text{Law}(X) = \\mu , \\text{Law}(Y) = \\nu \\} "
},
{
"math_id": 22,
"text": "\\alpha (X,Y) = \\inf\\{ \\varepsilon > 0 : \\mathbb{P} ( d( X ,Y ) > \\varepsilon ) \\leq \\varepsilon \\}"
},
{
"math_id": 23,
"text": " \\pi (\\mu , \\nu ) \\leq \\delta (\\mu , \\nu) "
},
{
"math_id": 24,
"text": "\\delta (\\mu,\\nu)"
},
{
"math_id": 25,
"text": " \\pi (\\mu , \\nu)^2 \\leq W_p (\\mu, \\nu)^p"
},
{
"math_id": 26,
"text": "W_p"
},
{
"math_id": 27,
"text": "p\\geq 1"
},
{
"math_id": 28,
"text": "\\mu, \\nu"
}
]
| https://en.wikipedia.org/wiki?curid=6789891 |
67899164 | Valery Vasilevich Kozlov | Russian mathematician and mathematical physicist
Valery Vasilevich Kozlov (Валерий Васильевич Козлов, born 1 January 1950 in Ryazan Oblast) is a Russian mathematician and mathematical physicist.
Education and career.
Kozlov studied from 1967 at the Moscow State University with his undergraduate degree in 1972 and his Candidate of Sciences degree in 1974 under Andrei Kolmogorov with thesis Качественное исследование движения тяжёлого твёрдого тела в интегрируемых случаях (Qualitative study of the motion of a heavy rigid body in integrable cases). At Moscow State University he was a lecturer and assistant and completed his Russian Doctor of Sciences degree (habilitation) in 1978 with thesis Вопросы качественного анализа в динамике твёрдого тела (Questions of qualitative analysis in the dynamics of a rigid body). At Moscow State University he became in 1983 a professor of theoretical mechanics and in 2002 the head of the Department of Mathematical Statistics and Random Processes. At the Steklov Institute of Mathematics he became in 2003 head of the mechanics department and in 2004 the Institute's deputy director.
From 1980 to 1987 he was Moscow State University's deputy dean for science and research of the Faculty of Mathematics and Mechanics. From 1997 to 2001 he was Deputy Minister of Education of the Russian Federation.
Kozlov's research deals with theoretical and statistical mechanics and related mathematical areas such as the qualitative theory of differential equations and topological considerations in the integrability of dynamic systems. In 1979 he proved that for 2-dimensional manifolds formula_0 with genus greater than 1 (i.e. excluding topological spheres and tori) the geodetic flow (and general solutions of Hamilton's equations on the associated tangential bundle of formula_0) has no real-analytic first integrals other than energy.
He was elected in 1997 a corresponding member and in 2000 a full member of the Russian Academy of Sciences. He became the Academy's vice-president in 2001 and was its acting president for a few months in 2017 (upon the resignation of Vladimir Fortov). Kozlov is also a member of the Serbian Academy of Sciences and Arts and the European Academy of Arts and Sciences.
Kozlov was the founder and editor-in-chief of the journal "Regular and Chaotic Dynamics".
In 2007 he received the Leonhard Euler Gold Medal of the Russian Academy of Sciences and in 2018 its Demidov Prize, in 2009 the Gili Agostinelli Prize of the Turin Academy of Sciences, in 2000 the Kovalevskaya Prize of the Russian Academy of Sciences and in 1988 its Chaplygin Prize, and in 1994 the State Prize of the Russian Federation.
{
"math_id": 0,
"text": "M"
}
]
| https://en.wikipedia.org/wiki?curid=67899164 |
6790044 | Lévy metric | Metric used in mathematics
In mathematics, the Lévy metric is a metric on the space of cumulative distribution functions of one-dimensional random variables. It is a special case of the Lévy–Prokhorov metric, and is named after the French mathematician Paul Lévy.
Definition.
Let formula_0 be two cumulative distribution functions. Define the Lévy distance between them to be
formula_1
Intuitively, if between the graphs of "F" and "G" one inscribes squares with sides parallel to the coordinate axes (at points of discontinuity of a graph vertical segments are added), then the side-length of the largest such square is equal to "L"("F", "G").
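A crude numerical estimate of the Lévy distance between two concrete distribution functions can be obtained directly from the definition by scanning candidate values of ε on a grid (a sketch only; the grids must be chosen fine and wide enough for the distributions at hand):

```python
import math

def levy_distance(F, G, xs, eps_grid):
    """Smallest candidate eps for which F(x - eps) - eps <= G(x) <= F(x + eps) + eps
    holds at every grid point x; an estimate of L(F, G)."""
    for eps in eps_grid:
        if all(F(x - eps) - eps <= G(x) <= F(x + eps) + eps for x in xs):
            return eps
    return float("nan")   # no candidate worked; enlarge eps_grid

# Example: standard normal CDF versus the same CDF shifted by 0.2.
Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
G = lambda x: Phi(x - 0.2)
xs = [i / 100.0 for i in range(-600, 601)]
eps_grid = [i / 1000.0 for i in range(0, 1001)]
print(levy_distance(Phi, G, xs, eps_grid))   # roughly 0.06 for this shift
```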
A sequence of cumulative distribution functions formula_2 weakly converges to another cumulative distribution function formula_3 if and only if formula_4. | [
{
"math_id": 0,
"text": "F, G : \\mathbb{R} \\to [0, 1]"
},
{
"math_id": 1,
"text": "L(F, G) := \\inf \\{ \\varepsilon > 0 | F(x - \\varepsilon) - \\varepsilon \\leq G(x) \\leq F(x + \\varepsilon) + \\varepsilon,\\; \\forall x \\in \\mathbb{R} \\}."
},
{
"math_id": 2,
"text": "\\{F_n \\}_{n=1}^\\infty"
},
{
"math_id": 3,
"text": "F"
},
{
"math_id": 4,
"text": "L(F_n,F) \\to 0"
}
]
| https://en.wikipedia.org/wiki?curid=6790044 |
67900481 | Angular correlation function | Measure of the projected clustering of galaxies
The angular correlation function is a function which measures the projected clustering of galaxies, due to discrepancies between their actual and expected distributions. The function may be computed as follows: formula_0, where formula_1 represents the conditional probability of finding a galaxy, formula_2 denotes the solid angle, and formula_3 is the mean number density. In a homogeneous universe, the angular correlation function scales with a characteristic depth.
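In practice the function is usually estimated from pair counts, comparing the observed catalogue with a random (unclustered) one. The following sketch uses the simple "natural" estimator DD/RR − 1 on a toy flat-sky catalogue (an illustration only; it ignores survey geometry and edge corrections, and the catalogue itself is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_separation_counts(points, bins):
    """Histogram of pairwise separations (flat-sky, small-angle approximation)."""
    d = points[:, None, :] - points[None, :, :]
    sep = np.sqrt((d ** 2).sum(axis=-1))
    iu = np.triu_indices(len(points), k=1)
    return np.histogram(sep[iu], bins=bins)[0].astype(float)

def w_theta_natural(data, randoms, bins):
    """Natural estimator w(theta) ~ DD/RR - 1, with pair counts normalised by the
    number of pairs in each catalogue; randoms must be numerous enough that every
    bin receives pairs."""
    dd = pair_separation_counts(data, bins) / (len(data) * (len(data) - 1) / 2)
    rr = pair_separation_counts(randoms, bins) / (len(randoms) * (len(randoms) - 1) / 2)
    return dd / rr - 1.0

# Toy "galaxies": clumps around random centres, versus a uniform random catalogue.
centres = rng.uniform(0, 10, size=(30, 2))
data = np.vstack([c + 0.1 * rng.standard_normal((20, 2)) for c in centres])
randoms = rng.uniform(0, 10, size=(1000, 2))
bins = np.linspace(0.05, 2.0, 12)
print(w_theta_natural(data, randoms, bins))   # strongly positive in the smallest bins
```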
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "w(\\theta)=\\frac{1}{N}\\frac{dP}{d\\Omega}-1"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "\\Omega"
},
{
"math_id": 3,
"text": "N"
}
]
| https://en.wikipedia.org/wiki?curid=67900481 |
67903 | Squaring the square | Mathematical problem
Squaring the square is the problem of tiling an integral square using only other integral squares. (An integral square is a square whose sides have integer length.) The name was coined in a humorous analogy with squaring the circle. Squaring the square is an easy task unless additional conditions are set. The most studied restriction is that the squaring be perfect, meaning the sizes of the smaller squares are all different. A related problem is squaring the plane, which can be done even with the restriction that each natural number occurs exactly once as a size of a square in the tiling. The order of a squared square is its number of constituent squares.
Perfect squared squares.
A "perfect" squared square is a square such that each of the smaller squares has a different size.
It is first recorded as being studied by R. L. Brooks, C. A. B. Smith, A. H. Stone and W. T. Tutte (writing under the collective pseudonym "Blanche Descartes") at Cambridge University between 1936 and 1938. They transformed the square tiling into an equivalent electrical circuit – they called it a "Smith diagram" – by considering the squares as resistors that connected to their neighbors at their top and bottom edges, and then applied Kirchhoff's circuit laws and circuit decomposition techniques to that circuit. The first perfect squared squares they found were of order 69.
The first perfect squared square to be published, a compound one of side 4205 and order 55, was found by Roland Sprague in 1939.
Martin Gardner published an extensive article written by W. T. Tutte about the early history of squaring the square in his "Mathematical Games" column of November 1958.
Simple squared squares.
A "simple" squared square is one where no subset of more than one of the squares forms a rectangle or square. When a squared square has a square or rectangular subset, it is "compound".
In 1978, A. J. W. Duijvestijn discovered a simple perfect squared square of side 112 with the smallest number of squares using a computer search. His tiling uses 21 squares, and has been proved to be minimal. This squared square forms the logo of the Trinity Mathematical Society. It also appears on the cover of the Journal of Combinatorial Theory.
Duijvestijn also found two simple perfect squared squares of sides 110 but each comprising 22 squares. Theophilus Harding Willcocks, an amateur mathematician and fairy chess composer, found another. In 1999, I. Gambini proved that these three are the smallest perfect squared squares in terms of side length.
The perfect compound squared square with the fewest squares was discovered by T.H. Willcocks in 1946 and has 24 squares; however, it was not until 1982 that Duijvestijn, Pasquale Joseph Federico and P. Leeuw mathematically proved it to be the lowest-order example.
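Whether a proposed dissection really is a (perfect) squared square is easy to check mechanically. The following sketch (illustrative only; the toy input is deliberately a small non-perfect example, not one of the tilings discussed above) marks unit cells to test coverage, overlap, and distinctness of the sides:

```python
def check_squared_square(side, placements):
    """placements: list of (x, y, s) triples giving the lower-left corner and side s
    of each small square inside a side x side square (integer data)."""
    grid = [[False] * side for _ in range(side)]
    for x, y, s in placements:
        if x < 0 or y < 0 or x + s > side or y + s > side:
            return "a square sticks out of the big square"
        for i in range(x, x + s):
            for j in range(y, y + s):
                if grid[i][j]:
                    return "two squares overlap"
                grid[i][j] = True
    if not all(all(row) for row in grid):
        return "the big square is not fully covered"
    sides = [s for _, _, s in placements]
    return "perfect" if len(set(sides)) == len(sides) else "valid but not perfect (repeated sizes)"

# Toy example: a 3x3 square cut into one 2x2 square and five unit squares.
toy = [(0, 0, 2), (2, 0, 1), (2, 1, 1), (0, 2, 1), (1, 2, 1), (2, 2, 1)]
print(check_squared_square(3, toy))   # 'valid but not perfect (repeated sizes)'
```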
Mrs. Perkins's quilt.
When the constraint of all the squares being different sizes is relaxed, a squared square such that the side lengths of the smaller squares do not have a common divisor larger than 1 is called a "Mrs. Perkins's quilt". In other words, the greatest common divisor of all the smaller side lengths should be 1. The Mrs. Perkins's quilt problem asks for a Mrs. Perkins's quilt with the fewest pieces for a given formula_0 square. The number of pieces required is at least formula_1, and at most formula_2. Computer searches have found exact solutions for small values of formula_3 (small enough to need up to 18 pieces). For formula_4 the number of pieces required is:
<templatestyles src="Block indent/styles.css"/>
No more than two different sizes.
For any integer formula_3 other than 2, 3, and 5, it is possible to dissect a square into formula_3 squares of one or two different sizes.
Squaring the plane.
In 1975, Solomon Golomb raised the question whether the whole plane can be tiled by squares, one of each integer edge-length, which he called the heterogeneous tiling conjecture. This problem was later publicized by Martin Gardner in his Scientific American column and appeared in several books, but it defied solution for over 30 years.
In "Tilings and patterns", published in 1987, Branko Grünbaum and G. C. Shephard stated that in all perfect integral tilings of the plane known at that time, the sizes of the squares grew exponentially. For example, the plane can be tiled with different integral squares, but not for every integer, by recursively taking any perfect squared square and enlarging it so that the formerly smallest tile now has the size of the original squared square, then replacing this tile with a copy of the original squared square.
In 2008 James Henle and Frederick Henle proved that this, in fact, can be done. Their proof is constructive and proceeds by "puffing up" an L-shaped region formed by two side-by-side and horizontally flush squares of different sizes to a perfect tiling of a larger rectangular region, then adjoining the square of the smallest size not yet used to get another, larger L-shaped region. The squares added during the puffing up procedure have sizes that have not yet appeared in the construction and the procedure is set up so that the resulting rectangular regions are expanding in all four directions, which leads to a tiling of the whole plane.
Cubing the cube.
Cubing the cube is the analogue in three dimensions of squaring the square: that is, given a cube "C", the problem of dividing it into finitely many smaller cubes, no two congruent.
Unlike the case of squaring the square, a hard yet solvable problem, there is no perfect cubed cube and, more generally, no dissection of a rectangular cuboid "C" into a finite number of unequal cubes.
To prove this, we start with the following claim: for any perfect dissection of a "rectangle" in squares, the smallest square in this dissection does not lie on an edge of the rectangle. Indeed, each corner square has a smaller adjacent edge square, and the smallest edge square is adjacent to smaller squares not on the edge.
Now suppose that there is a perfect dissection of a rectangular cuboid in cubes. Make a face of "C" its horizontal base. The base is divided into a perfect squared rectangle "R" by the cubes which rest on it. The smallest square "s"1 in "R" is surrounded by "larger", and therefore "higher", cubes. Hence the upper face of the cube on "s"1 is divided into a perfect squared square by the cubes which rest on it. Let "s"2 be the smallest square in this dissection. By the claim above, this is surrounded on all 4 sides by squares which are larger than "s"2 and therefore higher.
The sequence of squares "s"1, "s"2, ... is infinite and the corresponding cubes are infinite in number. This contradicts our original supposition.
If a 4-dimensional hypercube could be perfectly hypercubed then its 'faces' would be perfect cubed cubes; this is impossible. Similarly, there is no solution for all cubes of higher dimensions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n\\times n"
},
{
"math_id": 1,
"text": "\\log_2 n"
},
{
"math_id": 2,
"text": "6\\log_2 n"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "n=1,2,3,\\dots"
}
]
| https://en.wikipedia.org/wiki?curid=67903 |
6790888 | Karplus equation | The Karplus equation, named after Martin Karplus, describes the correlation between 3J-coupling constants and dihedral torsion angles in nuclear magnetic resonance spectroscopy:
formula_0
where "J" is the 3"J" coupling constant, formula_1 is the dihedral angle, and "A", "B", and "C" are empirically derived parameters whose values depend on the atoms and substituents involved. The relationship may be expressed in a variety of equivalent ways e.g. involving cos 2φ rather than cos2 φ —these lead to different numerical values of "A", "B", and "C" but do not change the nature of the relationship.
The relationship is used for 3"J"H,H coupling constants. The superscript "3" indicates that a 1H atom is coupled to another 1H atom three bonds away, via H-C-C-H bonds. (Such hydrogens bonded to neighbouring carbon atoms are termed vicinal). The magnitude of these couplings is generally smallest when the torsion angle is close to 90° and largest at angles of 0° and 180°.
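As a small illustration (the coefficients below are round placeholder values chosen only to reproduce the qualitative trend, not parameters recommended for any particular fragment), the equation can be evaluated over a range of dihedral angles:

```python
import math

A, B, C = 8.0, -1.0, 1.0   # hypothetical coefficients in Hz, for illustration only

def karplus_coupling(phi_degrees, a=A, b=B, c=C):
    """3J coupling constant (Hz) from J(phi) = A cos^2(phi) + B cos(phi) + C."""
    phi = math.radians(phi_degrees)
    return a * math.cos(phi) ** 2 + b * math.cos(phi) + c

for angle in (0, 60, 90, 120, 180):
    print(f"phi = {angle:3d} deg  ->  3J = {karplus_coupling(angle):5.2f} Hz")
# Smallest near 90 degrees, largest at 0 and 180 degrees, as described above.
```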
This relationship between local geometry and coupling constant is of great value throughout nuclear magnetic resonance spectroscopy and is particularly valuable for determining backbone torsion angles in protein NMR studies.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "J(\\phi) = A \\cos^2 \\phi + B \\cos\\,\\phi + C"
},
{
"math_id": 1,
"text": " \\phi "
}
]
| https://en.wikipedia.org/wiki?curid=6790888 |
67911 | Busy beaver | Longest-running Turing machine of a given size
In theoretical computer science, the busy beaver game aims at finding a terminating program of a given size that (depending on definition) either produces the most output possible, or runs for the longest number of steps. Since an endlessly looping program producing infinite output or running for infinite time is easily conceived, such programs are excluded from the game. Rather than traditional programming languages, the programs used in the game are n-state Turing machines, one of the first mathematical models of computation.
Turing machines consist of an infinite tape, and a finite set of states which serve as the program's "source code". Producing the most output is defined as writing the largest number of 1s on the tape, also referred to as achieving the highest score, and running for the longest time is defined as taking the longest number of steps to halt. The "n-"state busy beaver game consists of finding the longest-running or highest-scoring Turing machine which has "n" states and eventually halts. Such machines are assumed to start on a blank tape, and the tape is assumed to contain only zeros and ones (a binary Turing machine). A player should conceive of a set of transitions between states aiming for the highest score or longest running time while making sure the machine will halt eventually.
An n"th busy beaver, BB-"n or simply "busy beaver" is a Turing machine that wins the "n"-state busy beaver game. Depending on definition, it either attains the highest score, or runs for the longest time, among all other possible "n"-state competing Turing machines. The functions determining the highest score or longest running time of the "n"-state busy beavers by each definition are Σ(n) and S(n) respectively.
Deciding the running time or score of the "n"th Busy Beaver is incomputable. In fact, both the functions Σ(n) and S(n) eventually become larger than any computable function. This has implications in computability theory, the halting problem, and complexity theory. The concept was first introduced by Tibor Radó in his 1962 paper, "On Non-Computable Functions". One of the most interesting aspects of the busy beaver game is that, if it were possible to compute the functions Σ(n) and S(n) for all "n", then this would resolve all mathematical conjectures which can be encoded as "does this Turing machine halt or not". For example, a 27-state Turing machine could check Goldbach's conjecture for each number and halt on a counterexample: if this machine had not halted after running for S(27) steps, then it must run forever, resolving the conjecture. Many other problems, including the Riemann hypothesis (744 states) and the consistency of ZF set theory (748 states), can be expressed in a similar form, where at most a countably infinite number of cases need to be checked.
Technical definition.
The "n"-state busy beaver game (or BB-"n" game), introduced in Tibor Radó's 1962 paper, involves a class of Turing machines, each member of which is required to meet the following design specifications:
#the current non-Halt state,
#the symbol in the current tape cell,
and produces three outputs:
#a symbol to write over the symbol in the current tape cell (it may be the same symbol as the symbol overwritten),
#a direction to move (left or right; that is, shift to the tape cell one place to the left or right of the current cell), and
#a state to transition into (which may be the Halt state).
"Running" the machine consists of starting in the starting state, with the current tape cell being any cell of a blank (all-0) tape, and then iterating the transition function until the Halt state is entered (if ever). If, and only if, the machine eventually halts, then the number of 1s finally remaining on the tape is called the machine's "score". The "n"-state busy beaver (BB-"n") game is therefore a contest, depending on definition to find such an "n"-state Turing machine having the largest possible score or running time.
Related functions.
Score function Σ.
The score function quantifies the maximum score attainable by a busy beaver on a given measure. This is a noncomputable function, because it grows asymptotically faster than any computable function.
The score function, formula_0, is defined so that Σ("n") is the maximum attainable score (the maximum number of 1s finally on the tape) among all halting 2-symbol "n"-state Turing machines of the above-described type, when started on a blank tape.
It is clear that Σ is a well-defined function: for every "n", there are at most finitely many "n"-state Turing machines as above, up to isomorphism, hence at most finitely many possible running times.
According to the score-based definition, any "n"-state 2-symbol Turing machine "M" for which "σ"("M") = Σ("n") (i.e., which attains the maximum score) is called a busy beaver. For each "n", there exist at least 4("n" − 1)! "n"-state busy beavers. (Given any "n"-state busy beaver, another is obtained by merely changing the shift direction in a halting transition, a third by reversing "all" shift directions uniformly, and a fourth by reversing the halt direction of the all-swapped busy beaver. Furthermore, a permutation of all states except Start and Halt produces a machine that attains the same score. Theoretically, there could be more than one kind of transition leading to the halting state, but in practice it would be wasteful, because there is only one sequence of state transitions producing the sought-after result.)
Non-computability.
Radó's 1962 paper proved that if formula_1 is any computable function, then Σ("n") > "f"("n") for all sufficiently large "n", and hence that Σ is not a computable function.
Moreover, this implies that it is undecidable by a general algorithm whether an arbitrary Turing machine is a busy beaver. (Such an algorithm cannot exist, because its existence would allow Σ to be computed, which is a proven impossibility. In particular, such an algorithm could be used to construct another algorithm that would compute Σ as follows: for any given "n", each of the finitely many "n"-state 2-symbol Turing machines would be tested until an "n"-state busy beaver is found; this busy beaver machine would then be simulated to determine its score, which is by definition Σ("n").)
Even though Σ("n") is an uncomputable function, there are some small "n" for which it is possible to obtain its values and prove that they are correct. It is not hard to show that Σ(0) = 0, Σ(1) = 1, Σ(2) = 4, and with progressively more difficulty it can be shown that Σ(3) = 6, Σ(4) = 13 and Σ(5) = 4098 (sequence in the OEIS). Σ("n") has not yet been determined for any instance of "n" > 5, although lower bounds have been established (see the Known values section below).
Complexity and unprovability of Σ.
A variant of Kolmogorov complexity is defined as follows: The "complexity" of a number "n" is the smallest number of states needed for a BB-class Turing machine that halts with a single block of "n" consecutive 1s on an initially blank tape. The corresponding variant of Chaitin's incompleteness theorem states that, in the context of a given axiomatic system for the natural numbers, there exists a number "k" such that no specific number can be proven to have complexity greater than "k", and hence that no specific upper bound can be proven for Σ("k") (the latter is because "the complexity of "n" is greater than "k"" would be proven if "n" > Σ("k") were proven). As mentioned in the cited reference, for any axiomatic system of "ordinary mathematics" the least value "k" for which this is true is far less than 10⇈10; consequently, in the context of ordinary mathematics, neither the value nor any upper-bound of Σ(10⇈10) can be proven. (Gödel's first incompleteness theorem is illustrated by this result: in an axiomatic system of ordinary mathematics, there is a true-but-unprovable sentence of the form Σ(10⇈10) = "n", and there are infinitely many true-but-unprovable sentences of the form Σ(10⇈10) < "n".)
Maximum shifts function "S".
In addition to the function Σ, Radó [1962] introduced another extreme function for Turing machines, the maximum shifts function, "S", defined as follows: "s"("M") is the number of shifts a machine "M" makes before halting when started on a blank tape, and "S"("n") is the largest value of "s"("M") over all halting "n"-state 2-symbol Turing machines.
Because normal Turing machines are required to have a shift in each and every transition or "step" (including any transition to a Halt state), the max-shifts function is at the same time a max-steps function.
Radó showed that "S" is noncomputable for the same reason that Σ is noncomputable — it grows faster than any computable function. He proved this simply by noting that for each "n", "S"("n") ≥ Σ("n"). Each shift may write a 0 or a 1 on the tape, while Σ counts a subset of the shifts that wrote a 1, namely the ones that hadn't been overwritten by the time the Turing machine halted; consequently, "S" grows at least as fast as Σ, which had already been proved to grow faster than any computable function.
The following connection between Σ and "S" was used by Lin & Radó ["Computer Studies of Turing Machine Problems", 1965] to prove that Σ(3) = 6: For a given "n", if "S"("n") is known then all "n"-state Turing machines can (in principle) be run for up to "S"("n") steps, at which point any machine that hasn't yet halted will never halt. At that point, by observing which machines have halted with the most 1s on the tape (i.e., the busy beavers), one obtains from their tapes the value of Σ("n"). The approach used by Lin & Radó for the case of "n" = 3 was to conjecture that "S"(3) = 21, then to simulate all the essentially different 3-state machines for up to 21 steps. By analyzing the behavior of the machines that had not halted within 21 steps, they succeeded in showing that none of those machines would ever halt, thus proving the conjecture that "S"(3) = 21, and determining that Σ(3) = 6 by the procedure just described.
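The same brute-force idea is easy to reproduce for very small "n". The sketch below (illustrative only) enumerates every 2-state, 2-symbol machine under the conventions above, runs each for a step limit far above "S"(2), and reports the best score and running time among the machines that halt:

```python
from itertools import product

N_STATES, STEP_LIMIT = 2, 100
states = list(range(N_STATES))
HALT = N_STATES                          # an extra index standing for the Halt state
choices = [(sym, move, nxt) for sym in (0, 1) for move in (-1, 1)
           for nxt in states + [HALT]]
keys = [(s, sym) for s in states for sym in (0, 1)]

def run(table, limit):
    """Return (steps, ones) if the machine halts within the limit, else None."""
    tape, pos, state = {}, 0, 0
    for step in range(1, limit + 1):
        write, move, nxt = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == HALT:
            return step, sum(tape.values())
        state = nxt
    return None

best_ones = best_steps = 0
for rules in product(choices, repeat=len(keys)):
    result = run(dict(zip(keys, rules)), STEP_LIMIT)
    if result:
        best_steps = max(best_steps, result[0])
        best_ones = max(best_ones, result[1])

print(best_ones, best_steps)   # expected: 4 6, i.e. Sigma(2) = 4 and S(2) = 6
```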
In 2016, Adam Yedidia and Scott Aaronson obtained the first (explicit) upper bound on the minimum "n" for which S("n") is unprovable in ZFC. To do so they constructed a 7910-state Turing machine whose behavior cannot be proven based on the usual axioms of set theory (Zermelo–Fraenkel set theory with the axiom of choice), under reasonable consistency hypotheses (stationary Ramsey property). Stefan O'Rear then reduced it to 1919 states, with the dependency on the stationary Ramsey property eliminated, and later to 748 states. In July 2023, Riebel reduced it to 745 states.
Proof for uncomputability of "S"("n") and Σ("n").
Suppose that "S"("n") is a computable function and let "EvalS" denote a TM, evaluating "S"("n"). Given a tape with "n" 1s it will produce "S"("n") 1s on the tape and then halt. Let "Clean" denote a Turing machine cleaning the sequence of 1s initially written on the tape. Let "Double" denote a Turing machine evaluating function "n" + "n". Given a tape with "n" 1s it will produce 2"n" 1s on the tape and then halt.
Let us create the composition "Double" | "EvalS" | "Clean" and let "n"0 be the number of states of this machine. Let "Create_n0" denote a Turing machine creating "n"0 1s on an initially blank tape. This machine may be constructed in a trivial manner to have "n"0 states (the state "i" writes 1, moves the head right and switches to state "i" + 1, except the state "n"0, which halts). Let "N" denote the sum "n"0 + "n"0.
Let "BadS" denote the composition "Create_n0" | "Double" | "EvalS" | "Clean". Notice that this machine has "N" states. Starting with an initially blank tape it first creates a sequence of "n"0 1s and then doubles it, producing a sequence of "N" 1s. Then "BadS" will produce "S"("N") 1s on tape, and at last it will clear all 1s and then halt. But the phase of cleaning will continue at least "S"("N") steps, so the time of working of "BadS" is strictly greater than "S"("N"), which contradicts to the definition of the function "S"("n").
The uncomputability of Σ("n") may be proved in a similar way. In the above proof, one must exchange the machine "EvalS" with "EvalΣ" and "Clean" with "Increment" — a simple TM, searching for a first 0 on the tape and replacing it with 1.
The uncomputability of "S"("n") can also be established by reference to the blank tape halting problem. The blank tape halting problem is the problem of deciding for any Turing machine whether or not it will halt when started on an empty tape. The blank tape halting problem is equivalent to the standard halting problem and so it is also uncomputable. If "S"("n") was computable, then we could solve the blank tape halting problem simply by running any given Turing machine with "n" states for "S"("n") steps; if it has still not halted, it never will. So, since the blank tape halting problem is not computable, it follows that "S"("n") must likewise be uncomputable.
Other functions.
A number of other uncomputable functions can be defined based on measuring the performance of Turing machines in other ways than time or maximal number of ones. For example, the function formula_3 can be defined as the maximum number of tape cells a halting "n"-state machine visits before halting, and formula_2 as the maximum number of contiguous 1s such a machine can leave on an initially blank tape.
Both of these functions are also uncomputable. This can be shown for formula_3 by noting that every tape square a Turing machine writes a one to, it must also visit: in other words, formula_4. The formula_2 function can be shown to be incomputable by proving, for example, that formula_5: this can be done by designing a "(3n+3)"-state Turing machine which simulates the "n"-state space champion, and then uses it to write at least formula_3 contiguous ones to the tape. These functions stand in the relation:
formula_6
Generalizations.
Analogs of the shift function can be simply defined in any programming language, given that the programs can be described by bit-strings, and a program's number of steps can be counted. For example, the busy beaver game can also be generalized to two dimensions using Turing machines on two-dimensional tapes, or to Turing machines that are allowed to stay in the same place as well as move to the left and right. Alternatively, a "busy beaver function" for diverse models of computation can be defined with Kolmogorov complexity. This is done by taking formula_7 to be the largest integer formula_8 such that formula_9, where formula_10 is the length of the shortest program in formula_11 that outputs formula_8: formula_7 is thereby the largest integer a program with length formula_12 or less can output in formula_11.
The longest running 6-state, 2-symbol machine which has the additional property of reversing the tape value at each step produces 1s after steps. So for the Reversal Turing Machine (RTM) class, "S"RTM(6) ≥ and ΣRTM(6) ≥ . Likewise we could define an analog to the Σ function for register machines as the largest number which can be present in any register on halting, for a given number of instructions.
Different numbers of symbols.
A simple generalization is the extension to Turing machines with "m" symbols instead of just 2 (0 and 1). For example a trinary Turing machine with "m" = 3 symbols would have the symbols 0, 1, and 2. The generalization to Turing machines with "n" states and "m" symbols defines the following generalized busy beaver functions:
For example, the longest-running 3-state 3-symbol machine found so far runs steps before halting.
Nondeterministic Turing machines.
The problem can be extended to nondeterministic Turing machines by looking for the system with the most states across all branches or the branch with the longest number of steps. The question of whether a given NDTM will halt is still computationally irreducible, and the computation required to find an NDTM busy beaver is significantly greater than the deterministic case, since there are multiple branches that need to be considered. For a 2-state, 2-color system with "p" cases or rules, the table to the right gives the maximum number of steps before halting and maximum number of unique states created by the NDTM.
Applications.
Open mathematics problems.
In addition to posing a rather challenging mathematical game, the busy beaver functions Σ(n) and "S"("n") offer an entirely new approach to solving pure mathematics problems. Many open problems in mathematics could in theory, but not in practice, be solved in a systematic way given the value of "S"("n") for a sufficiently large "n". Theoretically speaking, the value of S(n) encodes the answer to all mathematical conjectures that can be checked in infinite time by a Turing machine with less than or equal to "n" states.
Consider any formula_13 conjecture: any conjecture that could be disproven via a counterexample among a countable number of cases (e.g. Goldbach's conjecture). Write a computer program that sequentially tests this conjecture for increasing values. In the case of Goldbach's conjecture, we would consider every even number ≥ 4 sequentially and test whether or not it is the sum of two prime numbers. Suppose this program is simulated on an "n"-state Turing machine. If it finds a counterexample (an even number ≥ 4 that is not the sum of two primes in our example), it halts and indicates that. However, if the conjecture is true, then our program will never halt. (This program halts "only" if it finds a counterexample.)
Now, this program is simulated by an "n"-state Turing machine, so if we know "S"("n") we can decide (in a finite amount of time) whether or not it will ever halt by simply running the machine that many steps. And if, after "S"("n") steps, the machine does not halt, we know that it never will and thus that there are no counterexamples to the given conjecture (i.e., no even numbers that are not the sum of two primes). This would prove the conjecture to be true. Thus specific values (or upper bounds) for "S"("n") could be, in theory, used to systematically solve many open problems in mathematics.
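The kind of counterexample search described here is easy to write down explicitly (a sketch only; it is of course expected to run forever, and it is not the 27-state machine referred to above):

```python
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

n = 4
# This loop halts only if it finds an even number >= 4 that is not a sum of two
# primes; knowing S(n) for a machine implementing it would settle the conjecture.
while any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
    n += 2
print("counterexample to Goldbach's conjecture:", n)
```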
However, current results on the busy beaver problem suggest that this will not be practical, for two reasons: it is extremely hard to prove values of the busy beaver and maximum shift functions even for very small "n", and the known values of "S"("n") are already so enormous that simulating a machine for "S"("n") steps is far beyond any feasible computation.
Checking consistency of theories.
Another property of S(n) is that no arithmetically sound, computably axiomatized theory can prove all of the function's values. Specifically, given a computable and arithmetically sound theory formula_15, there is a number formula_16 such that for all formula_17, no statement of the form formula_18 can be proved in formula_15. This implies that for each theory there is a specific largest value of S(n) that it can prove. This is true because for every such formula_15, a Turing machine with formula_16 states can be designed to enumerate every possible proof in formula_15. If the theory is inconsistent, then all false statements are provable, and the Turing machine can be given the condition to halt if and only if it finds a proof of, for example, formula_19. Any theory that proves the value of formula_20 proves its own consistency, violating Gödel's second incompleteness theorem. This can be used to place various theories on a scale, for example the various large cardinal axioms in ZFC: if each theory formula_15 is assigned as its number formula_16, theories with larger values of formula_16 prove the consistency of those below them, placing all such theories on a countably infinite scale.
Universal Turing machines.
Exploring the relationship between computational universality and the dynamic behavior of Busy Beaver Turing machines, a conjecture was proposed in 2012 suggesting that Busy Beaver machines were natural candidates for Turing universality as they display complex characteristics, known for (1) their maximal computational complexity within size constraints, (2) their ability to perform non-trivial calculations before halting, and (3) the difficulty in finding and proving these machines; these features suggest that Busy Beaver machines possess the necessary complexity for universality.
Known results.
Lower bounds.
Green machines.
In 1964 Milton Green developed a lower bound for the 1s-counting variant of the Busy Beaver function that was published in the proceedings of the 1964 IEEE symposium on switching circuit theory and logical design. Heiner Marxen and Jürgen Buntrock described it as "a non-trivial (not primitive recursive) lower bound". This lower bound can be calculated but is too complex to state as a single expression in terms of "n". This was done with a set of Turing machines, each of which demonstrated the lower bound for a certain "n". When "n"=8 the method gives
formula_21.
In contrast, the best current (as of 2024) lower bound on formula_22 is formula_23, where each formula_24 is Knuth's up-arrow notation. This represents formula_14, an exponentiated chain of 15 tens equal to formula_25. The value of formula_26 is probably much larger still than that.
Specifically, the lower bound was shown with a series of recursive Turing machines, each of which was made of a smaller one with two additional states that repeatedly applied the smaller machine to the input tape. Defining the value of the N-state busy-beaver competitor on a tape containing formula_8 ones to be formula_27 (the ultimate output of each machine being its value on formula_28, because a blank tape has 0 ones), the recursion relations are as follows:
formula_29
formula_30
formula_31
This leads to two formulas, for odd and even numbers, for calculating the lower bound given by the Nth machine, formula_32:
formula_33 for odd N
formula_34 for even N
The lower bound BB(N) can also be related to the Ackermann function. It can be shown that:
formula_35
Relationships between Busy beaver functions.
Trivially, S(n) ≥ Σ(n) because a machine that writes Σ(n) ones must take at least Σ(n) steps to do so. It is possible to give a number of upper bounds on the time S(n) with the number of ones Σ(n):
formula_36
formula_37
formula_38
By defining num(n) to be the maximum number of ones an "n"-state Turing machine is allowed to output contiguously, rather than in any position (the largest unary number it can output), it is possible to show:
formula_39
formula_40
formula_41 (Ben-Amram, et al., 1996)
Ben-Amram and Petersen, 2002, also give an asymptotically improved bound on S(n). There exists a constant "c", such that for all "n" ≥ 2:
formula_42
For the known values, "S"("n") tends to be close to the square of Σ("n").
Exact values and lower bounds.
The following table lists the exact values and some known lower bounds for "S"("n"), Σ("n"), and several other busy beaver functions. In this table, 2-symbol Turing machines are used. Entries listed as "?" are at least as large as other entries to the left (because all n-state machines are also (n+1) state machines), and no larger than entries above them (because S(n) ≥ space(n) ≥ Σ(n) ≥ num(n)).
The 5-state busy beaver was discovered by Heiner Marxen and Jürgen Buntrock in 1989, but only proved to be the winning fifth busy beaver — stylized as BB(5) — in 2024 using a proof in Coq.
List of busy beavers.
These are tables of rules for the Turing machines that generate Σ(1) and "S"(1), Σ(2) and "S"(2), Σ(3) (but not "S"(3)), Σ(4) and "S"(4), Σ(5) and "S"(5), and the best known lower bound for Σ(6) and "S"(6).
In the tables, columns represent the current state and rows represent the current symbol read from the tape. Each table entry is a string of three characters, indicating the symbol to write onto the tape, the direction to move, and the new state (in that order). The halt state is shown as H.
Each machine begins in state A with an infinite tape that contains all 0s. Thus, the initial symbol read from the tape is a 0.
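As an illustration of this format, the following short C sketch hard-codes the well-known 2-state champion rules (A: 0→1RB, 1→1LB; B: 0→1LA, 1→1RH) and simulates the machine, reproducing the 2-state result quoted below (6 steps, four "1"s):
#include <stdio.h>
#include <string.h>

#define TAPE 1024                          /* plenty of cells for this machine */

int main(void) {
    /* Transition tables indexed by [state][symbol]; states 0 = A, 1 = B, 2 = H. */
    int wr[2][2] = { {1, 1}, {1, 1} };     /* symbol to write                   */
    int mv[2][2] = { {+1, -1}, {-1, +1} }; /* +1 = move right, -1 = move left   */
    int nx[2][2] = { {1, 1}, {0, 2} };     /* next state                        */

    int tape[TAPE];
    memset(tape, 0, sizeof tape);          /* tape initially all 0s             */
    int head = TAPE / 2, state = 0, steps = 0;

    while (state != 2) {                   /* run until the halt state H        */
        int sym = tape[head];
        tape[head] = wr[state][sym];
        head += mv[state][sym];
        state = nx[state][sym];
        steps++;
    }

    int ones = 0;
    for (int i = 0; i < TAPE; i++) ones += tape[i];
    printf("steps = %d, ones = %d\n", steps, ones);   /* prints 6 and 4 */
    return 0;
}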
Result key: (starts at the position overlined, halts at the position underlined)
Result: 0 0 1 0 0 (1 step, one "1" total)
Result: 0 0 1 1 1 0 0 (6 steps, four "1"s total)
Result: 0 0 1 1 1 1 1 1 0 0 (14 steps, six "1"s total).
Unlike the previous machines, this one is a busy beaver only for Σ, but not for "S". ("S"(3) = 21.)
Result: 0 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 (107 steps, thirteen "1"s total)
Result: 4098 "1"s with 8191 "0"s interspersed in 47,176,870 steps.
Note in the image to the right how this solution is similar qualitatively to the evolution of some cellular automata.
Result: 1 0 1 1 1 ... 1 1 1 ("10" followed by more than 10↑↑15 contiguous "1"s in more than 10↑↑15 steps, where 10↑↑15=1010..10, an exponential tower of 15 tens).
Visualizations.
In the following table, the rules for each busy beaver (maximizing Σ) are represented visually, with orange squares corresponding to a "1" on the tape, and white corresponding to "0". The position of the head is indicated by the black ovoid, with the orientation of the head representing the state. Individual tapes are laid out horizontally, with time progressing from top to bottom. The halt state is represented by a rule which maps one state to itself (head doesn't move).
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\Sigma: \\mathbb{N} \\to \\mathbb{N}"
},
{
"math_id": 1,
"text": "f: \\mathbb{N} \\to \\mathbb{N}"
},
{
"math_id": 2,
"text": "\\text{num}(n)"
},
{
"math_id": 3,
"text": "\\text{space}(n)"
},
{
"math_id": 4,
"text": "\\Sigma(n) \\leq \\text{space}(n)"
},
{
"math_id": 5,
"text": "\\text{space}(n) < \\text{num}(3n + 3)"
},
{
"math_id": 6,
"text": "\\text{num}(n) \\leq \\Sigma(n) \\leq \\text{space}(n) \\leq S(n)"
},
{
"math_id": 7,
"text": "{BB}(n)"
},
{
"math_id": 8,
"text": "m"
},
{
"math_id": 9,
"text": "K_L(m) \\le n"
},
{
"math_id": 10,
"text": "K_L(m)"
},
{
"math_id": 11,
"text": "L"
},
{
"math_id": 12,
"text": "n"
},
{
"math_id": 13,
"text": "\\Pi_1^0"
},
{
"math_id": 14,
"text": "10^{(10^{(10^{(10^{(\\ldots)})})})}"
},
{
"math_id": 15,
"text": "T"
},
{
"math_id": 16,
"text": "n_T"
},
{
"math_id": 17,
"text": "n \\geq n_T"
},
{
"math_id": 18,
"text": "S(n) = k"
},
{
"math_id": 19,
"text": "0 = 1"
},
{
"math_id": 20,
"text": "S(n_T)"
},
{
"math_id": 21,
"text": "\\Sigma(8) \\geq 3 \\times (7 \\times 3^{92} - 1) / 2 \\approx 8.248 \\times 10^{44}"
},
{
"math_id": 22,
"text": "\\Sigma(6)"
},
{
"math_id": 23,
"text": "10 \\uparrow\\uparrow 15"
},
{
"math_id": 24,
"text": "\\uparrow"
},
{
"math_id": 25,
"text": "10^{10 \\uparrow\\uparrow 14}"
},
{
"math_id": 26,
"text": "\\Sigma(8)"
},
{
"math_id": 27,
"text": "B_N(m)"
},
{
"math_id": 28,
"text": "m = 0"
},
{
"math_id": 29,
"text": "B_N(0) = 1"
},
{
"math_id": 30,
"text": "B_1(m) = m + 1"
},
{
"math_id": 31,
"text": "B_N(m) = 1 + B_{N-2}(1 + B_N(m - 1))"
},
{
"math_id": 32,
"text": "G(N)"
},
{
"math_id": 33,
"text": "G(N) = B_{N-2}(B_{N-2}(1))"
},
{
"math_id": 34,
"text": "G(N) = 1 + B_{N-3}(1 + B_{N-3}(1))"
},
{
"math_id": 35,
"text": "A(n,n) > G(4N+3) > A(4, 2N+1)"
},
{
"math_id": 36,
"text": "S(n) \\leq (n + 1) \\times \\Sigma(5n) \\times 2^{\\Sigma(5n)}"
},
{
"math_id": 37,
"text": "S(n) \\leq \\Sigma(9n)"
},
{
"math_id": 38,
"text": "S(n) \\leq (2n - 1) \\times \\Sigma(3n + 3)"
},
{
"math_id": 39,
"text": "\\text{num}(n) < \\Sigma(n)"
},
{
"math_id": 40,
"text": "S(n) < \\text{num}(n + o(n))"
},
{
"math_id": 41,
"text": "S(n) < \\text{num}(3n+6)"
},
{
"math_id": 42,
"text": "S(n) \\le \\Sigma\\left(n + \\left\\lceil \\frac{8n}{\\log_2 n} \\right\\rceil + c\\right). "
}
]
| https://en.wikipedia.org/wiki?curid=67911 |
6791579 | Guard digit | In numerical analysis, one or more guard digits can be used to reduce the amount of roundoff error.
Example.
Suppose that the final result of a long, multi-step calculation can be safely rounded off to "N" decimal places. That is to say, the roundoff error introduced by this final roundoff makes a negligible contribution to the overall uncertainty.
However, it is quite likely that it is "not" safe to round off the intermediate steps in the calculation to the same number of digits. Be aware that roundoff errors can accumulate. If "M" decimal places are used in the intermediate calculation, we say there are "M−N" guard digits.
In computing.
Guard digits are also used in floating point operations in most computer systems.
As an example, consider the subtraction formula_0. Here, the product notation indicates a binary floating point representation with the exponent of the representation given as a power of two and with the significand given with three bits after the binary point. To compute the subtraction it is necessary to change the forms of these numbers so that they have the same exponent, and so that when the product notation is expanded the resulting numbers have their binary points lined up with each other. Shifting the second operand into position, as formula_1, gives it a fourth digit after the binary point. This creates the need to add an extra digit to the first operand—a guard digit—putting the subtraction into the form formula_2.
Performing this operation gives as the result formula_3 or formula_4.
Without using a guard digit the subtraction would be performed only to three bits of precision, as formula_5, yielding formula_6 or formula_7, twice as large as the correct result. Thus, in this example, the use of a guard digit led to a more accurate result.
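The same arithmetic can be checked with a small C sketch that carries out both versions of the subtraction on exactly representable binary values:
#include <stdio.h>

int main(void) {
    /* Operands from the example above: 2^1 * 0.100_2 = 1.0 and 2^0 * 0.111_2 = 0.875. */
    double exact = 2.0 * 0.5 - 1.0 * 0.875;                 /* 0.125 */

    /* With a guard digit the shifted operand 2^1 * 0.0111_2 keeps its low bit,
       so the subtraction 0.1000_2 - 0.0111_2 is carried out exactly. */
    double with_guard = 2.0 * (8.0 / 16.0 - 7.0 / 16.0);    /* 0.125 */

    /* Without a guard digit the shifted operand is truncated to 0.011_2. */
    double without_guard = 2.0 * (4.0 / 8.0 - 3.0 / 8.0);   /* 0.250 */

    printf("exact result:        %.3f\n", exact);
    printf("with guard digit:    %.3f\n", with_guard);
    printf("without guard digit: %.3f\n", without_guard);
    return 0;
}
All values here are powers of two (or sums of them), so the doubles represent them exactly and the printed results match the analysis above.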
An example of the error caused by floating point roundoff is illustrated in the following C code.
#include <stdio.h>

int main(){
    double a;
    int i;
    /* Mathematically a = 0.2 + 0.1 - 0.3 is exactly zero, so the loop below
       would never terminate. */
    a = 0.2;
    a += 0.1;
    a -= 0.3;
    /* In double precision a is instead a tiny positive number, so repeated
       doubling eventually reaches 1.0 and the loop stops. */
    for (i = 0; a < 1.0; i++)
        a += a;
    printf("i=%d, a=%f\n", i, a);
    return 0;
}
It appears that the program should not terminate, since mathematically "a" is zero after the first three assignments. Yet the output is:
i=54, a=1.000000
because after the three floating point operations "a" holds a very small positive value rather than zero, and 54 doublings bring it up to exactly 1.0.
Another example is:
Take two numbers:<br>
formula_8 and formula_9<br>
We bring the first number to the same power of formula_10 as the second one: <br>
formula_11
The addition of the two numbers, keeping all digits, is
formula_11 + 2.3400 × 10² = 2.3656 × 10².
After padding the second number (i.e., formula_12) with two formula_13s, the digit after the formula_14 is the guard digit, and the digit after that is the round digit. The result after rounding is formula_15, as opposed to formula_16 obtained without the extra digits (guard and round digits), i.e., by considering only formula_17. The error therefore is formula_18. | [
{
"math_id": 0,
"text": "2^1 \\times 0.100_2 - 2^0 \\times 0.111_2"
},
{
"math_id": 1,
"text": "2^1 \\times 0.0111_2"
},
{
"math_id": 2,
"text": "2^1 \\times 0.1000_2 - 2^1 \\times 0.0111_2"
},
{
"math_id": 3,
"text": "2^1 \\times 0.0001_2"
},
{
"math_id": 4,
"text": "2^{-2} \\times 0.100_2"
},
{
"math_id": 5,
"text": "2^1 \\times 0.100_2 - 2^1 \\times 0.011_2"
},
{
"math_id": 6,
"text": "2^1 \\times 0.001_2="
},
{
"math_id": 7,
"text": "2^{-1} \\times 0.100_2"
},
{
"math_id": 8,
"text": "2.56\\times 10^0"
},
{
"math_id": 9,
"text": "2.34\\times 10^2"
},
{
"math_id": 10,
"text": "10"
},
{
"math_id": 11,
"text": "0.0256\\times 10^2"
},
{
"math_id": 12,
"text": "2.34\\times 10^2 "
},
{
"math_id": 13,
"text": "0"
},
{
"math_id": 14,
"text": "4"
},
{
"math_id": 15,
"text": "2.37"
},
{
"math_id": 16,
"text": "2.36"
},
{
"math_id": 17,
"text": "0.02+2.34 = 2.36"
},
{
"math_id": 18,
"text": "0.01"
}
]
| https://en.wikipedia.org/wiki?curid=6791579 |
679186 | Stefan problem | In mathematics and its applications, particularly to phase transitions in matter, a Stefan problem is a particular kind of boundary value problem for a system of partial differential equations (PDE), in which the boundary between the phases can move with time. The classical Stefan problem aims to describe the evolution of the boundary between two phases of a material undergoing a phase change, for example the melting of a solid, such as ice to water. This is accomplished by solving heat equations in both regions, subject to given boundary and initial conditions. At the interface between the phases (in the classical problem) the temperature is set to the phase change temperature. To close the mathematical system a further equation, the Stefan condition, is required. This is an energy balance which defines the position of the moving interface. Note that this evolving boundary is an unknown (hyper-)surface; hence, Stefan problems are examples of free boundary problems.
Analogous problems occur, for example, in the study of porous media flow, mathematical finance and crystal growth from monomer solutions.
Historical note.
The problem is named after Josef Stefan (Jožef Stefan), the Slovenian physicist who introduced the general class of such problems around 1890 in a series of four papers concerning the freezing of the ground and the formation of sea ice. However, some 60 years earlier, in 1831, an equivalent problem, concerning the formation of the Earth's crust, had been studied by Lamé and Clapeyron. Stefan's problem admits a similarity solution, often termed the Neumann solution, which was allegedly presented in a series of lectures in the early 1860s.
A comprehensive description of the history of Stefan problems may be found in Rubinstein.
Premises to the mathematical description.
From a mathematical point of view, the phases are merely regions in which the solutions of the underlying PDE are continuous and differentiable up to the order of the PDE. In physical problems such solutions represent properties of the medium for each phase. The moving boundaries (or interfaces) are infinitesimally thin surfaces that separate adjacent phases; therefore, the solutions of the underlying PDE and its derivatives may suffer discontinuities across interfaces.
The underlying PDEs are not valid at the phase change interfaces; therefore, an additional condition—the Stefan condition—is needed to obtain closure. The Stefan condition expresses the local velocity of a moving boundary, as a function of quantities evaluated at either side of the phase boundary, and is usually derived from a physical constraint. In problems of heat transfer with phase change, for instance, conservation of energy dictates that the discontinuity of heat flux at the boundary must be accounted for by the rate of latent heat release (which is proportional to the local velocity of the interface).
The regularity of the equation has been studied mainly by Luis Caffarelli and further refined by the work of Alessio Figalli, Xavier Ros-Oton and Joaquim Serra.
Mathematical formulation.
The one-dimensional one-phase Stefan problem.
The one-phase Stefan problem is based on an assumption that one of the material phases may be neglected. Typically this is achieved by assuming that a phase is at the phase change temperature and hence any variation from this leads to a change of phase. This is a mathematically convenient approximation, which simplifies analysis whilst still demonstrating the essential ideas behind the process. A further standard simplification is to work in non-dimensional format, such that the temperature at the interface may be set to zero and far-field values to formula_0 or formula_1.
Consider a semi-infinite one-dimensional block of ice initially at melting temperature formula_2 for formula_3. The most well-known form of the Stefan problem involves melting via an imposed constant temperature at the left-hand boundary, leaving a region formula_4 occupied by water. The melted depth, denoted by formula_5, is an unknown function of time. The Stefan problem is defined by
* The heat equation: formula_6
* A fixed temperature, above the melt temperature, on the left boundary: formula_7
* The interface at the melting temperature is set to formula_8
* The Stefan condition: formula_9 where formula_10 is the Stefan number, the ratio of latent to "specific" sensible heat (where specific indicates it is divided by the mass). Note this definition follows naturally from the nondimensionalisation and is used in many texts however it may also be defined as the inverse of this.
* The initial temperature distribution: formula_11
* The initial depth of the melted ice block: formula_12
The Neumann solution, obtained by using self-similar variables, indicates that the position of the boundary is given by formula_13 where formula_14 satisfies the transcendental equation formula_15 The temperature in the liquid is then given by formula_16
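The transcendental equation has no closed-form solution, but formula_14 is easily computed numerically; a minimal C sketch using bisection and the standard-library error function, with an illustrative value formula_10 = 1 (an assumption of the sketch, not part of the problem statement):
#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979323846

/* f(lambda) = beta*lambda - exp(-lambda^2) / (sqrt(pi) * erf(lambda));
   its positive root gives the melt front s(t) = 2 * lambda * sqrt(t). */
static double f(double lambda, double beta) {
    return beta * lambda - exp(-lambda * lambda) / (sqrt(PI) * erf(lambda));
}

int main(void) {
    double beta = 1.0;                 /* illustrative Stefan number          */
    double lo = 1e-9, hi = 10.0;       /* f is negative at lo, positive at hi */

    for (int i = 0; i < 100; i++) {    /* bisection */
        double mid = 0.5 * (lo + hi);
        if (f(mid, beta) < 0.0) lo = mid; else hi = mid;
    }

    double lambda = 0.5 * (lo + hi);
    printf("lambda = %.6f\n", lambda);
    printf("melted depth at t = 1: s(1) = %.6f\n", 2.0 * lambda);
    return 0;
}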
Applications.
Apart from modelling melting of solids, the Stefan problem is also used as a model for the asymptotic behaviour (in time) of more complex problems. For example, Pego uses matched asymptotic expansions to prove that Cahn-Hilliard solutions for phase separation problems behave as solutions to a non-linear Stefan problem at an intermediate time scale. Additionally, the solution of the Cahn–Hilliard equation for a binary mixture is reasonably comparable with the solution of a Stefan problem. In this comparison, the Stefan problem was solved using a front-tracking, moving-mesh method with homogeneous Neumann boundary conditions at the outer boundary. Also, Stefan problems can be applied to describe phase transformations other than solid-fluid or fluid-fluid.
The application of the Stefan problem to metal crystallization in the electrochemical deposition of metal powders was envisaged by Călușaru.
The Stefan problem also has a rich inverse theory; in such problems, the melting depth (or curve or hyper-surface) "s" is the known datum and the problem is to find "u" or "f".
Advanced forms of Stefan problem.
The classical Stefan problem deals with stationary materials with constant thermophysical properties (usually irrespective of phase), a constant phase change temperature and, in the example above, an instantaneous switch from the initial temperature to a distinct value at the boundary. In practice thermal properties may vary and specifically always do when the phase changes. The jump in density at phase change induces a fluid motion: the resultant kinetic energy does not figure in the standard energy balance. With an instantaneous temperature switch the initial fluid velocity is infinite, resulting in an initial infinite kinetic energy. In fact the liquid layer is often in motion, thus requiring advection or convection terms in the heat equation. The melt temperature may vary with size, curvature or speed of the interface. It is impossible to instantaneously switch temperatures and then difficult to maintain an exact fixed boundary temperature. Further, at the nanoscale the temperature may not even follow Fourier's law.
A number of these issues have been tackled in recent years for a variety of physical applications. In the solidification of supercooled melts an analysis where the phase change temperature depends on the interface velocity may be found in Font "et al". Nanoscale solidification, with variable phase change temperature and energy/density effects are modelled in. Solidification with flow in a channel has been studied, in the context of lava and microchannels, or with a free surface in the context of water freezing over an ice layer. A general model including different properties in each phase, variable phase change temperature and heat equations based on either Fourier's law or the Guyer-Krumhansl equation is analysed in.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "+1"
},
{
"math_id": 1,
"text": "-1"
},
{
"math_id": 2,
"text": "u=0"
},
{
"math_id": 3,
"text": "x \\in [0;+\\infty)"
},
{
"math_id": 4,
"text": "[0;s(t)]"
},
{
"math_id": 5,
"text": "s(t)"
},
{
"math_id": 6,
"text": "\\frac{\\partial u}{\\partial t} = \\frac{\\partial^2 u}{\\partial x^2}, \\quad \\forall (x,t) \\in [0;s(t)] \\times [0;+\\infty]\n"
},
{
"math_id": 7,
"text": "u(0,t) = 1, \\quad \\forall t > 0\n"
},
{
"math_id": 8,
"text": "u \\left(s(t),t \\right) = 0\n"
},
{
"math_id": 9,
"text": "\\beta \\frac{\\mathrm{d}}{\\mathrm{d}t} s(t) = -\\frac{\\partial}{\\partial x} u \\left(s(t), t \\right)\n"
},
{
"math_id": 10,
"text": "\\beta\n"
},
{
"math_id": 11,
"text": "u(x,0) = 0, \\; \\forall x \\geq 0\n"
},
{
"math_id": 12,
"text": "s(0) = 0\n"
},
{
"math_id": 13,
"text": "s(t) = 2 \\lambda \\sqrt{t}\n"
},
{
"math_id": 14,
"text": "\\lambda\n"
},
{
"math_id": 15,
"text": " \\beta \\lambda = \\frac{1}{\\sqrt{\\pi}}\\frac{\\mathrm{e}^{-\\lambda^2}}{\\text{erf}(\\lambda)}.\n"
},
{
"math_id": 16,
"text": "T=1-\\frac{\\text{erf}\\left(\\frac{x}{2\\sqrt{t}}\\right)}{\\text{erf}(\\lambda)}.\n"
}
]
| https://en.wikipedia.org/wiki?curid=679186 |
67922 | Ketchup | Sauce used as a condiment
Ketchup or catsup is a table condiment with a sweet and sour flavor. The unmodified term ("ketchup") now typically refers to tomato ketchup, although early recipes for various varieties of ketchup contained mushrooms, oysters, mussels, egg whites, grapes or walnuts, among other ingredients.
Tomato ketchup is made from tomatoes, sugar, and vinegar, with seasonings and spices. The spices and flavors vary, but commonly include onions, allspice, coriander, cloves, cumin, garlic, and mustard, and sometimes include celery, cinnamon, or ginger. The market leader in the United States (60% market share) and the United Kingdom (82%) is Heinz Tomato Ketchup. Tomato ketchup is often used as a condiment to dishes that are usually served hot and are fried or greasy: french fries and other potato dishes, hamburgers, hot dogs, chicken tenders, hot sandwiches, meat pies, cooked eggs, and grilled or fried meat. Ketchup is sometimes used as the basis for, or as one ingredient in, other sauces and dressings, and the flavor may be replicated as an additive flavoring for snacks, such as potato chips.
History.
Mushroom ketchup.
In the United Kingdom, ketchup was historically prepared with mushrooms as a primary ingredient, rather than tomatoes. In the United States, mushroom ketchup dates back to at least 1770, and was prepared by British colonists in the Thirteen Colonies.
Tomato ketchup.
Many variations of ketchup were created, but the tomato-based version did not appear until around a century after other types. An early recipe for "Tomato Catsup" from 1817 includes anchovies:
By the mid-1850s, the anchovies had been dropped.
The term ketchup first appeared in 1682. Ketchup recipes began to appear in British and then American cookbooks in the 18th century. James Mease published the first known tomato ketchup recipe in 1812. In 1824, a ketchup recipe using tomatoes appeared in "The Virginia Housewife" (an influential 19th-century cookbook written by Mary Randolph, Thomas Jefferson's cousin). Tomato ketchup was sold locally by farmers. Jonas Yerkes is credited as the first American to sell it in a bottle. By 1837, he had produced and distributed the condiment nationally. Shortly thereafter, other companies followed suit. F. & J. Heinz launched their tomato ketchup in 1876. American cooks also began to sweeten ketchup in the 19th century. The "Webster's Dictionary" of 1913 defined "catsup" as: "table sauce made from mushrooms, tomatoes, walnuts, etc. [Also written as ketchup]." As the century progressed, tomato ketchup began its ascent in popularity in the United States. Tomato ketchup was popular long before fresh tomatoes were. People were less hesitant to eat tomatoes as part of a highly processed product that had been cooked and infused with vinegar and spices.
Heinz Tomato Ketchup was advertised: "Blessed relief for Mother and the other women in the household!", a slogan which alluded to the lengthy process required to produce tomato ketchup in the home. With industrial ketchup production and a need for better preservation there was a great increase of sugar in ketchup, leading to the typically sweet and sour formula of today. In Australia, it was not until the late 19th century that sugar was added to "tomato sauce", initially in small quantities, but today it contains just as much as American ketchup and only differed in the proportions of tomatoes, salt and vinegar in early recipes.
Modern ketchup emerged in the early years of the 20th century, out of a debate over the use of sodium benzoate as a preservative in condiments. Harvey W. Wiley, the "father" of the US Food and Drug Administration, challenged the safety of benzoate which was banned in the 1906 Pure Food and Drug Act.
In response, entrepreneurs including Henry J. Heinz, pursued an alternative recipe that eliminated the need for that preservative. Katherine Bitting, a bacteriologist working for the U.S. Department of Agriculture, carried out research in 1909 that proved increasing the sugar and vinegar content of the product would prevent spoilage without use of artificial preservatives. She was assisted by her husband, Arvil Bitting, an official at that agency.
Prior to Heinz (and his fellow innovators), commercial tomato ketchups of that time were watery and thin, in part because they used unripe tomatoes, which were low in pectin. They had less vinegar than modern ketchups; by pickling ripe tomatoes, the need for benzoate was eliminated without spoilage or degradation in flavor. But the changes driven by the desire to eliminate benzoate also produced changes that some experts (such as Andrew F. Smith) believe were key to the establishment of tomato ketchup as the dominant American condiment.
Later innovations.
In fast-food outlets, ketchup is often dispensed in small sachets or tubs. Diners tear the side or top and squeeze the ketchup out of the ketchup packets, or peel the foil lid off the tub for dipping. In 2011, Heinz began offering a new measured-portion package, called the "Dip and Squeeze" packet, which can be opened in either way, giving both options.
Some fast food outlets previously dispensed ketchup from hand-operated pumps into paper cups. This method has made a comeback in the first decades of the 21st century, as cost and environmental concerns over the increasing use of individual plastic ketchup tubs were taken into account.
In October 2000, Heinz introduced colored ketchup products called EZ Squirt, which eventually included green (2000), purple (2001), mystery (pink, orange, or teal, 2002), and blue (2003). These products were made by adding food coloring to the traditional ketchup. By January 2006, these products were discontinued.
Terminology.
The term used for the sauce varies. "Ketchup" is the dominant term in American English and Canadian English, although "catsup" is commonly used in some southern US states and Mexico.
In Canada and the US, "tomato sauce" is not a synonym for ketchup but is a sauce made from tomatoes and commonly used in making sauce for pasta.
Etymology.
The etymology of the word "ketchup" is unclear and has multiple competing theories:
Amoy theory.
A popular folk etymology is that the word came to English from the Cantonese 茄汁 (literally 'tomato sauce'). The character 茄 means 'eggplant'; "tomato" in Cantonese is 番茄, which literally translates to 'foreign eggplant'.
Another theory among academics is that the word derives from one of two words from Hokkien of the Fujian region of coastal southern China: "kôe-chiap" (in the Amoy/Xiamen and Quanzhou dialects) or "kê-chiap" (in the Zhangzhou dialect). Both are pronunciations of the same word, which meant the brine of pickled fish or shellfish ('pickled food', usually seafood, plus 'juice'). There are citations of "kôe-chiap" in the "Chinese-English Dictionary of the Vernacular or Spoken Language of Amoy" (1873) by Carstairs Douglas, defined as "brine of pickled fish or shell-fish".
Malay theory.
Ketchup may have entered the English language from the Malay word "kicap" (sometimes spelled "kecap" or "ketjap"). Originally meaning 'soy sauce', the word itself derives from Chinese.
In Indonesian cuisine, which is similar to Malay, the term "kecap" refers to fermented savory sauces. Two main types are well known in their cuisine: "kecap asin", which translates to 'salty kecap' in Indonesian (a salty soy sauce), and "kecap manis", or 'sweet kecap' in Indonesian. "Kecap manis" is a sweet soy sauce that is a mixture of soy sauce with brown sugar, molasses, garlic, ginger, anise, coriander and a bay leaf reduced over medium heat until rather syrupy. A third type, "kecap ikan", meaning 'fish kecap', is a fish sauce similar to the Thai "nam pla" or the Philippine "patis". It is not, however, soy-based.
European-Arabic theory.
American anthropologist E. N. Anderson relies on Elizabeth David to claim that "ketchup" is a cognate of the French "escaveche", meaning 'food in sauce'. The word also exists in Spanish and Portuguese forms as "escabeche", 'a sauce for pickling', which culinary historian Karen Hess traced back to an Arabic term for pickling with vinegar. The term was anglicized to "caveach", a word first attested in the late 17th century, at the same time as "ketchup".
Early uses in English.
The word entered the English language in Britain during the late 17th century, appearing in print as "catchup" (1690) and later as "ketchup" (1711). The following is a list of early quotations collected by the "Oxford English Dictionary".
Composition.
U.S. Heinz tomato ketchup's ingredients (listed from highest to lowest percentage weight) are: tomato concentrate from red ripe tomatoes, distilled vinegar, high-fructose corn syrup, corn syrup, salt, spice, onion powder, and natural flavoring.
"Fancy" ketchup.
Some ketchup in the U.S. is labeled "Fancy", a USDA grade related to specific gravity. Fancy ketchup has a higher tomato solid concentration than other USDA grades.
Nutrition.
The following table compares the nutritional value of ketchup with raw ripe tomatoes and salsa, based on information from the USDA Food Nutrient Database.
Viscosity.
Commercial tomato ketchup has an additive, usually xanthan gum, which gives the condiment a non-Newtonian, pseudoplastic or "shear thinning" property – more commonly known as thixotropic. This increases the viscosity of the ketchup considerably with a relatively small amount added—usually 0.5%—which can make it difficult to pour from a container. However, the shear thinning property of the gum ensures that when a force is applied to the ketchup, the viscosity is lowered, enabling the sauce to flow. A common method of getting ketchup out of the bottle involves inverting the bottle and shaking it or hitting the bottom with the heel of the hand, which causes the ketchup to flow rapidly. Ketchup in plastic bottles can be additionally manipulated by squeezing the bottle, which also decreases the viscosity of the ketchup inside. Another technique involves inverting the bottle and forcefully tapping its upper neck with two fingers (index and middle finger together). Specifically, with a Heinz ketchup glass bottle, one taps the 57 circle on the neck. This helps the ketchup flow by applying the correct shearing force. These techniques work because of how pseudoplastic fluids behave: their viscosity (resistance to flow) decreases with increasing shear rate. The faster the ketchup is sheared (by shaking or tapping the bottle), the more fluid it becomes. After the shear is removed the ketchup thickens to its original viscosity.
Ketchup is a non-Newtonian fluid, meaning that its viscosity changes under stress and is not constant. It is a shear-thinning fluid, which means its viscosity decreases with increased shear stress. The governing relation is formula_0, which defines the apparent viscosity as the shear stress divided by the shear rate. The viscosity is therefore dependent on stress: this is apparent when a bottle of tomato sauce/ketchup is shaken so that it becomes liquid enough to squirt out, its viscosity decreasing with the applied stress.
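Shear thinning of this kind is often modelled with a power-law (Ostwald–de Waele) fluid, whose apparent viscosity falls as the shear rate rises. The following C sketch uses purely illustrative parameters, not measured values for ketchup:
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Power-law model: apparent viscosity eta = K * pow(shear_rate, n - 1).
       K (consistency index) and n (flow-behaviour index) are assumed values;
       n < 1 corresponds to shear thinning. */
    double K = 20.0;
    double n = 0.3;

    double rates[] = { 0.1, 1.0, 10.0, 100.0 };   /* shear rates in 1/s */
    for (int i = 0; i < 4; i++) {
        double eta = K * pow(rates[i], n - 1.0);
        printf("shear rate %7.1f 1/s -> apparent viscosity %8.2f Pa.s\n",
               rates[i], eta);
    }
    return 0;
}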
The molecular composition of ketchup is what creates its pseudoplastic characteristics. Small polysaccharides, sugars, acids, and water make up the majority of the metastable ketchup product, and these small structures are able to move more easily throughout a matrix because of their low mass. While exposed to shear stress, the molecules within the suspension are able to respond quickly and create an alignment within the product. The bonds between the molecules are mostly hydrogen bonds, ionic interactions, and electrostatic interactions, all of which can be broken when subject to stress. Hydrogen bonds are constantly rearranging within a product due to their need to be in the lowest energy state, which further confirms that the bonds between the molecules will be easily disrupted. This alignment only lasts for as long as shear stress is applied. The molecules return to their original disorganized state once the shear stress dissipates.
In 2017, researchers at the Massachusetts Institute of Technology reported the development of a bottle coating that allowed all the product to slip out without leaving a residue.
In 2022, researchers at the University of Oxford found that splatter from a near-empty bottle can be prevented by squeezing more slowly and doubling the diameter of the nozzle.
Separation.
Ketchup is one of the many products that are leachable, meaning that the water within the product migrates together as the larger molecules within the product sediment, ultimately causing water to separate out. This forms a layer of water on top of the ketchup due to the molecular instability within the product. This instability is caused by interactions between hydrophobic molecules and charged molecules within the ketchup suspension.
Pectin is a polysaccharide within tomatoes that has the ability to bind to itself and to other molecules, especially water, around it. This enables it to create a gel-like matrix, dependent on the amount within the solution. Water is a large part of ketchup, due to it being 80% of the composition of distilled vinegar. In order for the water within the ketchup to be at the lowest possible energy state, all of the hydrogen bonds that are able to be made within the matrix must be made. The water bound to the polysaccharide moves more slowly within the matrix, which is unfavorable with respect to entropy. The increased order within the polysaccharide-water complex gives rise to a high-energy state, in which the water will want to be relieved. This concept implies that water will more favorably bind with itself because of the increased disorder between water molecules. This is partially the cause for water leaching out of solution when left undisturbed for a short period of time.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\eta=\\tau/\\dot{y}"
}
]
| https://en.wikipedia.org/wiki?curid=67922 |
67929526 | E-graph | Graph data structure
In computer science, an e-graph is a data structure that stores an equivalence relation over terms of some language.
Definition and operations.
Let formula_0 be a set of uninterpreted functions, where formula_1 is the subset of formula_0 consisting of functions of arity formula_2. Let formula_3 be a countable set of opaque identifiers that may be compared for equality, called e-class IDs. The application of formula_4 to e-class IDs formula_5 is denoted formula_6 and called an e-node.
The e-graph then represents equivalence classes of e-nodes, using the following data structures:
* A union-find structure formula_7 over e-class IDs, with operations formula_8, formula_9 and formula_10 for canonicalizing, creating and unifying e-class IDs. An e-class ID formula_11 is canonical if formula_12; an e-node formula_13 is canonical if each child formula_14 is canonical (formula_15 in formula_16).
* A hashcons formula_17, a map from canonical e-nodes to e-class IDs.
* An e-class map formula_18 from e-class IDs to e-classes (sets of e-nodes), which maps equivalent e-class IDs to the same e-class: formula_19.
Invariants.
In addition to the above structure, a valid e-graph conforms to several data structure invariants. Two e-nodes are equivalent if they are in the same e-class. The congruence invariant states that an e-graph must ensure that equivalence is closed under congruence, where two e-nodes formula_20 are congruent when formula_21. The hashcons invariant states that the hashcons maps canonical e-nodes to their e-class ID.
Operations.
E-graphs expose wrappers around the formula_9, formula_8, and formula_10 operations from the union-find that preserve the e-graph invariants. The last operation, e-matching, is described below.
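A minimal C sketch of these structures, with fixed-size arrays, a linear scan standing in for the hashcons, and without the rebuilding step that a complete implementation performs after formula_10, illustrates how canonicalizing children makes congruent e-nodes share an e-class:
#include <stdio.h>

#define MAXN 256

/* An e-node: an operator applied to at most two child e-class IDs (-1 = unused). */
typedef struct { char op; int a, b; } ENode;

static int   parent[MAXN];     /* union-find over e-class IDs                */
static ENode nodes[MAXN];      /* stored canonical e-nodes ...               */
static int   node_class[MAXN]; /* ... and the e-class ID of each (hashcons)  */
static int   n_nodes = 0, n_classes = 0;

static int find(int i) {                   /* canonical e-class ID */
    while (parent[i] != i) i = parent[i] = parent[parent[i]];
    return i;
}

/* add: canonicalize the children, reuse the e-class of an existing identical
   e-node if there is one, otherwise create a fresh singleton e-class. */
static int add(char op, int a, int b) {
    if (a >= 0) a = find(a);
    if (b >= 0) b = find(b);
    for (int i = 0; i < n_nodes; i++)
        if (nodes[i].op == op && nodes[i].a == a && nodes[i].b == b)
            return find(node_class[i]);
    nodes[n_nodes] = (ENode){ op, a, b };
    parent[n_classes] = n_classes;
    node_class[n_nodes++] = n_classes;
    return n_classes++;
}

/* merge: union two e-classes (a full e-graph would then rebuild, i.e.
   re-canonicalize stored e-nodes to restore the congruence invariant). */
static void merge(int i, int j) { parent[find(i)] = find(j); }

int main(void) {
    int a  = add('a', -1, -1);         /* constants a and b */
    int b  = add('b', -1, -1);
    int fa = add('f', a, -1);          /* f(a)              */
    int fb = add('f', b, -1);          /* f(b)              */

    merge(a, b);                       /* assert a = b      */

    /* Because children are canonicalized, looking up f(a) and f(b) again now
       yields the same e-class (rebuilding would also unify the old classes). */
    printf("f(a) = f(b) after merging a and b: %s\n",
           find(add('f', a, -1)) == find(add('f', b, -1)) ? "yes" : "no");
    (void)fa; (void)fb;
    return 0;
}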
E-matching.
Let formula_22 be a set of variables and let formula_23 be the smallest set that includes the 0-arity function symbols (also called constants), includes the variables, and is closed under application of the function symbols. In other words, formula_23 is the smallest set such that formula_24, formula_25, and when formula_26 and formula_4, then formula_27. A term containing variables is called a pattern, a term without variables is called ground.
An e-graph formula_28 represents a ground term formula_29 if one of its e-classes represents formula_30. An e-class formula_31 represents formula_30 if some e-node formula_32 does. An e-node formula_32 represents a term formula_33 if formula_34 and each e-class formula_35 represents the term formula_36 (formula_37 in formula_16).
e-matching is an operation that takes a pattern formula_38 and an e-graph formula_28, and yields all pairs formula_39 where formula_40 is a substitution mapping the variables in formula_41 to e-class IDs and formula_42 is an e-class ID such that each term formula_43 is represented by formula_31. There are several known algorithms for e-matching, the "relational e-matching" algorithm is based on worst-case optimal joins and is worst-case optimal.
Equality saturation.
Equality saturation is a technique for building optimizing compilers using e-graphs. It operates by applying a set of rewrites using e-matching until the e-graph is saturated, a timeout is reached, an e-graph size limit is reached, a fixed number of iterations is exceeded, or some other halting condition is reached. After rewriting, an optimal term is extracted from the e-graph according to some cost function, usually related to AST size or performance considerations.
Applications.
E-graphs are used in automated theorem proving. They are a crucial part of modern SMT solvers such as Z3 and CVC4, where they are used to decide the empty theory by computing the congruence closure of a set of equalities, and e-matching is used to instantiate quantifiers. In DPLL(T)-based solvers that use conflict-driven clause learning (also known as non-chronological backtracking), e-graphs are extended to produce proof certificates. E-graphs are also used in the Simplify theorem prover of ESC/Java.
Equality saturation is used in specialized optimizing compilers, e.g. for deep learning and linear algebra. Equality saturation has also been used for translation validation applied to the LLVM toolchain.
E-graphs have been applied to several problems in program analysis, including fuzzing, abstract interpretation, and library learning.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Sigma"
},
{
"math_id": 1,
"text": "\\Sigma_n"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "\\mathbb{id}"
},
{
"math_id": 4,
"text": "f\\in\\Sigma_n"
},
{
"math_id": 5,
"text": "i_1, i_2, \\ldots, i_n\\in\\mathbb{id}"
},
{
"math_id": 6,
"text": "f(i_1, i_2, \\ldots, i_n)"
},
{
"math_id": 7,
"text": "U"
},
{
"math_id": 8,
"text": "\\mathrm{find}"
},
{
"math_id": 9,
"text": "\\mathrm{add}"
},
{
"math_id": 10,
"text": "\\mathrm{merge}"
},
{
"math_id": 11,
"text": "e"
},
{
"math_id": 12,
"text": "\\mathrm{find}(U, e) = e"
},
{
"math_id": 13,
"text": "f(i_1,\\ldots,i_n)"
},
{
"math_id": 14,
"text": "i_j"
},
{
"math_id": 15,
"text": "j"
},
{
"math_id": 16,
"text": "1,\\ldots,n"
},
{
"math_id": 17,
"text": "H"
},
{
"math_id": 18,
"text": "M"
},
{
"math_id": 19,
"text": "\\forall i, j\\in\\mathbb{id},M[i]=M[j]\\Leftrightarrow \\mathrm{find}(U,i)=\\mathrm{find}(U,j)"
},
{
"math_id": 20,
"text": "f(i_1,\\ldots,i_n),f(j_1,\\ldots,j_n)"
},
{
"math_id": 21,
"text": "\\mathrm{find}(U, i_k)=\\mathrm{find}(U, j_k),k\\in \\{1,\\ldots,n\\}"
},
{
"math_id": 22,
"text": "V"
},
{
"math_id": 23,
"text": "\\mathrm{Term}(\\Sigma, V)"
},
{
"math_id": 24,
"text": "V\\subset\\mathrm{Term}( \\Sigma, V)"
},
{
"math_id": 25,
"text": "\\Sigma_0\\subset\\mathrm{Term}(\\Sigma, V)"
},
{
"math_id": 26,
"text": "x_1, \\ldots, x_n\\in \\mathrm{Term}(\\Sigma, V)"
},
{
"math_id": 27,
"text": "f(x_1,\\ldots,x_n)\\in\\mathrm{Term}(\\Sigma, V)"
},
{
"math_id": 28,
"text": "E"
},
{
"math_id": 29,
"text": "t\\in\\mathrm{Term}(\\Sigma, \\emptyset)"
},
{
"math_id": 30,
"text": "t"
},
{
"math_id": 31,
"text": "C"
},
{
"math_id": 32,
"text": "f(i_1,\\ldots,i_n)\\in C"
},
{
"math_id": 33,
"text": "g(j_1,\\ldots,j_n)"
},
{
"math_id": 34,
"text": "f=g"
},
{
"math_id": 35,
"text": "M[i_k]"
},
{
"math_id": 36,
"text": "j_k"
},
{
"math_id": 37,
"text": "k"
},
{
"math_id": 38,
"text": "p\\in\\mathrm{Term}(\\Sigma, V)"
},
{
"math_id": 39,
"text": "(\\sigma, C)"
},
{
"math_id": 40,
"text": "\\sigma\\subset V\\times\\mathbb{id}"
},
{
"math_id": 41,
"text": "p"
},
{
"math_id": 42,
"text": "C\\in\\mathbb{id}"
},
{
"math_id": 43,
"text": "\\sigma(p)"
}
]
| https://en.wikipedia.org/wiki?curid=67929526 |
679297 | Specular reflection | Mirror-like wave reflection
Specular reflection, or regular reflection, is the mirror-like reflection of waves, such as light, from a surface.
The law of reflection states that a reflected ray of light emerges from the reflecting surface at the same angle to the surface normal as the incident ray, but on the opposing side of the surface normal in the plane formed by the incident and reflected rays. This behavior was first described by Hero of Alexandria (AD c. 10–70). Later, Alhazen gave a complete statement of the law of reflection. He was the first to state that the incident ray, the reflected ray, and the normal to the surface all lie in the same plane, perpendicular to the reflecting plane.
Specular reflection may be contrasted with diffuse reflection, in which light is scattered away from the surface in a range of directions.
Law of reflection.
When light encounters a boundary of a material, it is affected by the optical and electronic response functions of the material to electromagnetic waves. Optical processes, which comprise reflection and refraction, are expressed by the difference of the refractive index on both sides of the boundary, whereas reflectance and absorption are the real and imaginary parts of the response due to the electronic structure of the material.
The degree of participation of each of these processes in the transmission is a function of the frequency, or wavelength, of the light, its polarization, and its angle of incidence. In general, reflection increases with increasing angle of incidence, and with increasing absorptivity at the boundary. The Fresnel equations describe the physics at the optical boundary.
Reflection may occur as specular, or mirror-like, reflection and diffuse reflection. Specular reflection reflects all light which arrives from a given direction at the same angle, whereas diffuse reflection reflects light in a broad range of directions. The distinction may be illustrated with surfaces coated with glossy paint and matte paint. Matte paints exhibit essentially complete diffuse reflection, while glossy paints show a larger component of specular behavior. A surface built from a non-absorbing powder, such as plaster, can be a nearly perfect diffuser, whereas polished metallic objects can specularly reflect light very efficiently. The reflecting material of mirrors is usually aluminum or silver.
Light propagates in space as a wave front of electromagnetic fields. A ray of light is characterized by the direction normal to the wave front ("wave normal"). When a ray encounters a surface, the angle that the wave normal makes with respect to the surface normal is called the angle of incidence and the plane defined by both directions is the plane of incidence. Reflection of the incident ray also occurs in the plane of incidence.
The law of reflection states that the angle of reflection of a ray equals the angle of incidence, and that the incident direction, the surface normal, and the reflected direction are coplanar.
When the light impinges perpendicularly to the surface, it is reflected straight back in the source direction.
The phenomenon of reflection arises from the diffraction of a plane wave on a flat boundary. When the boundary size is much larger than the wavelength, then the electromagnetic fields at the boundary are oscillating exactly in phase only for the specular direction.
Vector formulation.
The law of reflection can also be equivalently expressed using linear algebra. The direction of a reflected ray is determined by the vector of incidence and the surface normal vector. Given an incident direction formula_0 from the light source to the surface and the surface normal direction formula_1 the specularly reflected direction formula_2 (all unit vectors) is:
formula_3
where formula_4 is a scalar obtained with the dot product. Different authors may define the incident and reflection directions with different signs.
Assuming these Euclidean vectors are represented in column form, the equation can be equivalently expressed as a matrix-vector multiplication:
formula_5
where formula_6 is the so-called Householder transformation matrix, defined as:
formula_7
in terms of the identity matrix formula_8 and twice the outer product of formula_9.
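A short C sketch applies the vector formula to an arbitrarily chosen incident direction and unit normal, and confirms that the reflected ray makes the same angle with the normal as the incident ray:
#include <stdio.h>
#include <math.h>

typedef struct { double x, y, z; } Vec;

static double dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec normalize(Vec v) {
    double len = sqrt(dot(v, v));
    Vec r = { v.x / len, v.y / len, v.z / len };
    return r;
}

int main(void) {
    Vec n  = { 0.0, 0.0, 1.0 };                    /* unit surface normal         */
    Vec di = normalize((Vec){ 1.0, 2.0, -3.0 });   /* incident direction (chosen) */

    /* d_s = d_i - 2 n (n . d_i) */
    double d = dot(n, di);
    Vec ds = { di.x - 2.0*d*n.x, di.y - 2.0*d*n.y, di.z - 2.0*d*n.z };

    /* The cosines have equal magnitude: equal angles on opposite sides of the normal. */
    printf("cos(angle of incidence)  = %+.6f\n", dot(di, n));
    printf("cos(angle of reflection) = %+.6f\n", dot(ds, n));
    return 0;
}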
Reflectivity.
"Reflectivity" is the ratio of the power of the reflected wave to that of the incident wave. It is a function of the wavelength of radiation, and is related to the refractive index of the material as expressed by Fresnel's equations. In regions of the electromagnetic spectrum in which absorption by the material is significant, it is related to the electronic absorption spectrum through the imaginary component of the complex refractive index. The electronic absorption spectrum of an opaque material, which is difficult or impossible to measure directly, may therefore be indirectly determined from the reflection spectrum by a Kramers-Kronig transform. The polarization of the reflected light depends on the symmetry of the arrangement of the incident probing light with respect to the absorbing transitions dipole moments in the material.
Measurement of specular reflection is performed with normal or varying incidence reflection spectrophotometers ("reflectometer") using a scanning variable-wavelength light source. Lower quality measurements using a glossmeter quantify the glossy appearance of a surface in gloss units.
Consequences.
Internal reflection.
When light is propagating in a material and strikes an interface with a material of lower index of refraction, some of the light is reflected. If the angle of incidence is greater than the critical angle, total internal reflection occurs: all of the light is reflected. The critical angle can be shown to be given by
formula_10
Polarization.
When light strikes an interface between two materials, the reflected light is generally partially polarized. However, if the light strikes the interface at Brewster's angle, the reflected light is "completely" linearly polarized parallel to the interface. Brewster's angle is given by
formula_11
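Both characteristic angles follow directly from the two refractive indices; a short C sketch with illustrative values (glass, 1.5, and air, 1.0):
#include <stdio.h>
#include <math.h>

int main(void) {
    double n_glass = 1.5, n_air = 1.0;      /* illustrative refractive indices */
    double deg = 180.0 / acos(-1.0);        /* radians to degrees              */

    /* Total internal reflection: light inside the glass (n1) hitting air (n2). */
    double theta_crit = asin(n_air / n_glass) * deg;

    /* Brewster's angle: light in air (n1) reflecting off the glass (n2). */
    double theta_brewster = atan(n_glass / n_air) * deg;

    printf("critical angle = %.1f degrees\n", theta_crit);       /* about 41.8 */
    printf("Brewster angle = %.1f degrees\n", theta_brewster);   /* about 56.3 */
    return 0;
}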
Reflected images.
The image in a flat mirror has these features:
The reversal of images by a plane mirror is perceived differently depending on the circumstances. In many cases, the image in a mirror appears to be reversed from left to right. If a flat mirror is mounted on the ceiling it can appear to reverse "up" and "down" if a person stands under it and looks up at it. Similarly a car turning "left" will still appear to be turning "left" in the rear view mirror for the driver of a car in front of it. The reversal of directions, or lack thereof, depends on how the directions are defined. More specifically a mirror changes the handedness of the coordinate system, one axis of the coordinate system appears to be reversed, and the chirality of the image may change. For example, the image of a right shoe will look like a left shoe.
Examples.
A classic example of specular reflection is a mirror, which is specifically designed for specular reflection.
In addition to visible light, specular reflection can be observed in the ionospheric reflection of radiowaves and the reflection of radio- or microwave radar signals by flying objects. The measurement technique of x-ray reflectivity exploits specular reflectivity to study thin films and interfaces with sub-nanometer resolution, using either modern laboratory sources or synchrotron x-rays.
Non-electromagnetic waves can also exhibit specular reflection, as in acoustic mirrors which reflect sound, and atomic mirrors, which reflect neutral atoms. For the efficient reflection of atoms from a solid-state mirror, very cold atoms and/or grazing incidence are used in order to provide significant quantum reflection; ridged mirrors are used to enhance the specular reflection of atoms. Neutron reflectometry uses specular reflection to study material surfaces and thin film interfaces in an analogous fashion to x-ray reflectivity.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{\\hat{d}}_\\mathrm{i}"
},
{
"math_id": 1,
"text": "\\mathbf{\\hat{d}}_\\mathrm{n},"
},
{
"math_id": 2,
"text": "\\mathbf{\\hat{d}}_\\mathrm{s}"
},
{
"math_id": 3,
"text": "\\mathbf{\\hat{d}}_\\mathrm{s} = \\mathbf{\\hat{d}}_\\mathrm{i} - 2 \\mathbf{\\hat{d}}_\\mathrm{n} \\left(\\mathbf{\\hat{d}}_\\mathrm{n} \\cdot \\mathbf{\\hat{d}}_\\mathrm{i}\\right),\n"
},
{
"math_id": 4,
"text": "\\mathbf{\\hat{d}}_\\mathrm{n} \\cdot \\mathbf{\\hat{d}}_\\mathrm{i}"
},
{
"math_id": 5,
"text": "\\mathbf{\\hat{d}}_\\mathrm{s} = \\mathbf{R} \\; \\mathbf{\\hat{d}}_\\mathrm{i},"
},
{
"math_id": 6,
"text": "\\mathbf{R}"
},
{
"math_id": 7,
"text": "\\mathbf{R} = \\mathbf{I} - 2 \\mathbf{\\hat{d}}_\\mathrm{n} \\mathbf{\\hat{d}}_\\mathrm{n}^\\mathrm{T};"
},
{
"math_id": 8,
"text": "\\mathbf{I}"
},
{
"math_id": 9,
"text": "\\mathbf{\\hat{d}}_\\mathrm{n}"
},
{
"math_id": 10,
"text": "\\theta_\\text{crit} = \\arcsin\\!\\left(\\frac{n_2}{n_1}\\right)\\!."
},
{
"math_id": 11,
"text": "\\theta_\\mathrm{B} = \\arctan\\!\\left(\\frac{n_2}{n_1}\\right)\\!."
}
]
| https://en.wikipedia.org/wiki?curid=679297 |
6793014 | Rotation of axes in two dimensions | Transformation of coordinates through an angle
In mathematics, a rotation of axes in two dimensions is a mapping from an "xy"-Cartesian coordinate system to an "x′y′"-Cartesian coordinate system in which the origin is kept fixed and the "x′" and "y′" axes are obtained by rotating the "x" and "y" axes counterclockwise through an angle formula_0. A point "P" has coordinates ("x", "y") with respect to the original system and coordinates ("x′", "y′") with respect to the new system. In the new coordinate system, the point "P" will appear to have been rotated in the opposite direction, that is, clockwise through the angle formula_0. A rotation of axes in more than two dimensions is defined similarly. A rotation of axes is a linear map and a rigid transformation.
Motivation.
Coordinate systems are essential for studying the equations of curves using the methods of analytic geometry. To use the method of coordinate geometry, the axes are placed at a convenient position with respect to the curve under consideration. For example, to study the equations of ellipses and hyperbolas, the foci are usually located on one of the axes and are situated symmetrically with respect to the origin. If the curve (hyperbola, parabola, ellipse, etc.) is "not" situated conveniently with respect to the axes, the coordinate system should be changed to place the curve at a convenient and familiar location and orientation. The process of making this change is called a transformation of coordinates.
The solutions to many problems can be simplified by rotating the coordinate axes to obtain new axes through the same origin.
Derivation.
The equations defining the transformation in two dimensions, which rotates the "xy" axes counterclockwise through an angle formula_0 into the "x′y′" axes, are derived as follows.
In the "xy" system, let the point "P" have polar coordinates formula_1. Then, in the "x′y′" system, "P" will have polar coordinates formula_2.
Using trigonometric functions, we have
and using the standard trigonometric formulae for differences, we have
Substituting equations (1) and (2) into equations (3) and (4), we obtain
Equations (5) and (6) can be represented in matrix form as
formula_3
which is the standard matrix equation of a rotation of axes in two dimensions.
The inverse transformation is
or
formula_4
Examples in two dimensions.
Example 1.
Find the coordinates of the point formula_5 after the axes have been rotated through the angle formula_6, or 30°.
Solution:
formula_7
formula_8
The axes have been rotated counterclockwise through an angle of formula_6 and the new coordinates are formula_9. Note that the point appears to have been rotated clockwise through formula_10 with respect to fixed axes so it now coincides with the (new) "x′" axis.
Example 2.
Find the coordinates of the point formula_11 after the axes have been rotated clockwise 90°, that is, through the angle formula_12, or −90°.
Solution:
formula_13
The axes have been rotated through an angle of formula_12, which is in the clockwise direction and the new coordinates are formula_14. Again, note that the point appears to have been rotated counterclockwise through formula_15 with respect to fixed axes.
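Both examples can be checked mechanically with a short C sketch that applies the rotation of axes:
#include <stdio.h>
#include <math.h>

/* x' = x cos(theta) + y sin(theta),  y' = -x sin(theta) + y cos(theta) */
static void rotate_axes(double theta, double x, double y, double *xp, double *yp) {
    *xp =  x * cos(theta) + y * sin(theta);
    *yp = -x * sin(theta) + y * cos(theta);
}

int main(void) {
    double pi = acos(-1.0), xp, yp;

    rotate_axes(pi / 6.0, sqrt(3.0), 1.0, &xp, &yp);          /* Example 1 */
    printf("Example 1: (x', y') = (%.4f, %.4f)\n", xp, yp);   /* (2, 0)    */

    rotate_axes(-pi / 2.0, 7.0, 7.0, &xp, &yp);               /* Example 2 */
    printf("Example 2: (x', y') = (%.4f, %.4f)\n", xp, yp);   /* (-7, 7)   */
    return 0;
}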
Rotation of conic sections.
The most general equation of the second degree has the form
Through a change of coordinates (a rotation of axes and a translation of axes), equation (9) can be put into a standard form, which is usually easier to work with. It is always possible to rotate the coordinates at a specific angle so as to eliminate the "x′y′" term. Substituting equations (7) and (8) into equation (9), we obtain
where
If formula_0 is selected so that formula_16 we will have formula_17 and the "x′y′" term in equation (10) will vanish.
When a problem arises with "B", "D" and "E" all different from zero, they can be eliminated by performing in succession a rotation (eliminating "B") and a translation (eliminating the "D" and "E" terms).
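As an illustration, the following C sketch chooses formula_0 in this way for an assumed quadratic part with "A" = 1, "B" = 4, "C" = 1 (linear terms omitted), and confirms that the rotated cross term vanishes while the quantity formula_18 used below is unchanged:
#include <stdio.h>
#include <math.h>

int main(void) {
    double A = 1.0, B = 4.0, C = 1.0;   /* assumed coefficients of A x^2 + B xy + C y^2 */

    /* cot(2 theta) = (A - C)/B  is equivalent to  2 theta = atan2(B, A - C). */
    double theta = 0.5 * atan2(B, A - C);

    double c = cos(theta), s = sin(theta);
    double A2 = A*c*c + B*s*c + C*s*s;              /* new x'^2 coefficient */
    double B2 = B*(c*c - s*s) + 2.0*(C - A)*s*c;    /* new x'y' coefficient */
    double C2 = A*s*s - B*s*c + C*c*c;              /* new y'^2 coefficient */

    printf("theta = %.6f rad\n", theta);
    printf("A' = %.6f, B' = %.6f, C' = %.6f\n", A2, B2, C2);   /* B' = 0 */
    printf("B^2 - 4AC: %.6f -> %.6f\n", B*B - 4.0*A*C, B2*B2 - 4.0*A2*C2);
    return 0;
}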
Identifying rotated conic sections.
A non-degenerate conic section given by equation (9) can be identified by evaluating formula_18. The conic section is:
* an ellipse or a circle, if formula_19;
* a parabola, if formula_20;
* a hyperbola, if formula_21.
Generalization to several dimensions.
Suppose a rectangular "xyz"-coordinate system is rotated around its "z" axis counterclockwise (looking down the positive "z" axis) through an angle formula_0, that is, the positive "x" axis is rotated immediately into the positive "y" axis. The "z" coordinate of each point is unchanged and the "x" and "y" coordinates transform as above. The old coordinates ("x", "y", "z") of a point "Q" are related to its new coordinates ("x′", "y′", "z′") by
formula_22
Generalizing to any finite number of dimensions, a rotation matrix formula_23 is an orthogonal matrix that differs from the identity matrix in at most four elements. These four elements are of the form
formula_24 and formula_25
for some formula_0 and some "i" ≠ "j".
Example in several dimensions.
Example 3.
Find the coordinates of the point formula_26 after the positive "w" axis has been rotated through the angle formula_27, or 15°, into the positive "z" axis.
Solution:
formula_28 | [
{
"math_id": 0,
"text": " \\theta "
},
{
"math_id": 1,
"text": " (r, \\alpha) "
},
{
"math_id": 2,
"text": " (r, \\alpha - \\theta) "
},
{
"math_id": 3,
"text": "\n\\begin{bmatrix} x' \\\\ y' \\end{bmatrix} =\n\\begin{bmatrix}\n \\cos \\theta & \\sin \\theta \\\\\n- \\sin \\theta & \\cos \\theta\n\\end{bmatrix}\n\\begin{bmatrix} x \\\\ y \\end{bmatrix},\n"
},
{
"math_id": 4,
"text": "\n\\begin{bmatrix} x \\\\ y \\end{bmatrix} =\n\\begin{bmatrix}\n\\cos \\theta & - \\sin \\theta \\\\\n\\sin \\theta & \\cos \\theta\n\\end{bmatrix}\n\\begin{bmatrix} x' \\\\ y' \\end{bmatrix}.\n"
},
{
"math_id": 5,
"text": " P_1 = (x, y) = (\\sqrt 3, 1) "
},
{
"math_id": 6,
"text": " \\theta_1 = \\pi / 6 "
},
{
"math_id": 7,
"text": " x' = \\sqrt 3 \\cos ( \\pi / 6 ) + 1 \\sin ( \\pi / 6 ) = (\\sqrt 3)({\\sqrt 3}/2) + (1)(1/2) = 2 "
},
{
"math_id": 8,
"text": " y' = 1 \\cos ( \\pi / 6 ) - \\sqrt 3 \\sin ( \\pi / 6 ) = (1)({\\sqrt 3}/2) - (\\sqrt 3)(1/2) = 0 ."
},
{
"math_id": 9,
"text": " P_1 = (x', y') = (2, 0) "
},
{
"math_id": 10,
"text": " \\pi / 6 "
},
{
"math_id": 11,
"text": " P_2 = (x, y) = (7, 7) "
},
{
"math_id": 12,
"text": " \\theta_2 = - \\pi / 2 "
},
{
"math_id": 13,
"text": "\n\\begin{bmatrix} x' \\\\ y' \\end{bmatrix} =\n\\begin{bmatrix}\n \\cos ( - \\pi / 2 ) & \\sin( - \\pi / 2 ) \\\\\n- \\sin( - \\pi / 2 ) & \\cos( - \\pi / 2 )\n\\end{bmatrix}\n\\begin{bmatrix} 7 \\\\ 7 \\end{bmatrix} =\n\\begin{bmatrix}\n 0 & -1 \\\\\n1 & 0\n\\end{bmatrix}\n\\begin{bmatrix} 7 \\\\ 7 \\end{bmatrix} =\n\\begin{bmatrix} -7 \\\\ 7 \\end{bmatrix}.\n"
},
{
"math_id": 14,
"text": " P_2 = (x', y') = (-7, 7) "
},
{
"math_id": 15,
"text": " \\pi / 2 "
},
{
"math_id": 16,
"text": " \\cot 2 \\theta = (A - C)/B "
},
{
"math_id": 17,
"text": " B' = 0 "
},
{
"math_id": 18,
"text": "B^2-4AC"
},
{
"math_id": 19,
"text": " B^2-4AC<0"
},
{
"math_id": 20,
"text": " B^2-4AC=0"
},
{
"math_id": 21,
"text": " B^2-4AC>0"
},
{
"math_id": 22,
"text": "\\begin{bmatrix} x' \\\\ y' \\\\ z' \\end{bmatrix} =\n\\begin{bmatrix}\n \\cos \\theta & \\sin \\theta & 0 \\\\\n- \\sin \\theta & \\cos \\theta & 0 \\\\\n 0 & 0 & 1\n\\end{bmatrix}\n\\begin{bmatrix} x \\\\ y \\\\ z \\end{bmatrix}.\n"
},
{
"math_id": 23,
"text": " A "
},
{
"math_id": 24,
"text": " a_{ii} = a_{jj} = \\cos \\theta "
},
{
"math_id": 25,
"text": " a_{ij} = - a_{ji} = \\sin \\theta ,"
},
{
"math_id": 26,
"text": " P_3 = (w, x, y, z) = (1, 1, 1, 1) "
},
{
"math_id": 27,
"text": " \\theta_3 = \\pi / 12 "
},
{
"math_id": 28,
"text": "\\begin{align}\n\\begin{bmatrix} w' \\\\ x' \\\\ y' \\\\ z' \\end{bmatrix}\n&=\n\\begin{bmatrix}\n \\cos( \\pi / 12 ) & 0 & 0 & \\sin( \\pi / 12 ) \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 1 & 0 \\\\\n- \\sin( \\pi / 12 ) & 0 & 0 & \\cos( \\pi / 12 ) \n\\end{bmatrix}\n\\begin{bmatrix} w \\\\ x \\\\ y \\\\ z \\end{bmatrix}\n\\\\[4pt]\n&\\approx\n\\begin{bmatrix}\n 0.96593 & 0.0 & 0.0 & 0.25882 \\\\\n 0.0 & 1.0 & 0.0 & 0.0 \\\\\n 0.0 & 0.0 & 1.0 & 0.0 \\\\\n- 0.25882 & 0.0 & 0.0 & 0.96593\n\\end{bmatrix}\n\\begin{bmatrix} 1.0 \\\\ 1.0 \\\\ 1.0 \\\\ 1.0 \\end{bmatrix} =\n\\begin{bmatrix} 1.22475 \\\\ 1.00000 \\\\ 1.00000 \\\\ 0.70711 \\end{bmatrix}.\n\\end{align}\n"
}
]
| https://en.wikipedia.org/wiki?curid=6793014 |
6793305 | ProbCons | ProbCons is an open source probabilistic consistency-based multiple alignment of amino acid sequences. It is one of the most efficient protein multiple sequence alignment programs, since it has repeatedly demonstrated a statistically significant advantage in accuracy over similar tools, including Clustal and MAFFT.
Algorithm.
The following describes the basic outline of the ProbCons algorithm.
Step 1: Reliability of an alignment edge.
For every pair of sequences, compute the probability that letters formula_0 and formula_1 are paired in an alignment formula_2 that is generated by the model.
formula_3
where formula_4 is the indicator function, equal to 1 if formula_0 and formula_1 are aligned in a and 0 otherwise.
Step 2: Maximum expected accuracy.
The accuracy of an alignment formula_2 with respect to another alignment formula_5 is defined as the number of common aligned pairs divided by the length of the shorter sequence.
Calculate expected accuracy of each sequence:
formula_6
This yields a maximum expected accuracy (MEA) alignment:
formula_7
Step 3: Probabilistic Consistency Transformation.
The match probabilities for all pairs of sequences x,y from the set of all sequences formula_8 are now re-estimated using all intermediate sequences z:
formula_9
This step can be iterated.
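For toy sequences of equal length, the transformation is simply an average of matrix products. The following C sketch uses made-up posterior matrices rather than ones produced by an actual pair-HMM, and takes each self-posterior P(x,x) to be the identity matrix (an assumption of this sketch):
#include <stdio.h>

#define NSEQ 3   /* toy set of sequences, all of length L */
#define L    3

/* P[a][b][i][j]: posterior probability that position i of sequence a is aligned
   to position j of sequence b (toy values, chosen only to have the right shape). */
static double P[NSEQ][NSEQ][L][L];
static double Pnew[NSEQ][NSEQ][L][L];

int main(void) {
    for (int a = 0; a < NSEQ; a++)
        for (int b = 0; b < NSEQ; b++)
            for (int i = 0; i < L; i++)
                for (int j = 0; j < L; j++)
                    P[a][b][i][j] = (a == b) ? (i == j ? 1.0 : 0.0)
                                             : (i == j ? 0.8 : 0.1);

    /* Probabilistic consistency transformation:
       P'(x,y) = (1/|S|) * sum over z of P(x,z) * P(z,y)  (a matrix product). */
    for (int x = 0; x < NSEQ; x++)
        for (int y = 0; y < NSEQ; y++)
            for (int i = 0; i < L; i++)
                for (int j = 0; j < L; j++) {
                    double s = 0.0;
                    for (int z = 0; z < NSEQ; z++)
                        for (int k = 0; k < L; k++)
                            s += P[x][z][i][k] * P[z][y][k][j];
                    Pnew[x][y][i][j] = s / NSEQ;
                }

    printf("re-estimated P'(seq 0, seq 1):\n");
    for (int i = 0; i < L; i++) {
        for (int j = 0; j < L; j++) printf(" %.3f", Pnew[0][1][i][j]);
        printf("\n");
    }
    return 0;
}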
Step 4: Computation of guide tree.
Construct a guide tree by hierarchical clustering, using the MEA score as the sequence similarity score. Cluster similarity is defined as a weighted average of the pairwise sequence similarities.
Step 5: Compute MSA.
Finally compute the MSA using progressive alignment or iterative alignment.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x_i"
},
{
"math_id": 1,
"text": "y_i"
},
{
"math_id": 2,
"text": "a^*"
},
{
"math_id": 3,
"text": "\\begin{align}\nP(x_i \\sim y_i|x,y) & \\stackrel{def}{=} Pr[x_i \\sim y_i \\text{ in some a }|x,y] \\\\\n& = \\sum_{\\text{alignment a with }x_i - y_i} Pr[a|x,y]\\\\\n& = \\sum_{\\text{alignment a}} \\mathbf{1}\\{x_i - y_i \\in a\\} Pr[a|x,y]\n\\end{align}"
},
{
"math_id": 4,
"text": "\\mathbf{1}\\{x_i \\sim y_i \\in a\\}"
},
{
"math_id": 5,
"text": "a"
},
{
"math_id": 6,
"text": "\\begin{align}\nE_{Pr[a|x,y]}(acc(a^*,a)) & = \\sum_{a}Pr[a|x,y]acc(a^*,a) \\\\\n& = \\frac{1}{min(|x|,|y|)} \\cdot \\sum_{a}\\mathbf{1}\\{x_i \\sim y_i \\in a\\} Pr[a|x,y]\\\\\n& = \\frac{1}{min(|x|,|y|)} \\cdot \\sum_{x_i - y_i} P(x_i \\sim y_j|x,y)\n\\end{align}"
},
{
"math_id": 7,
"text": "\nE(x,y) = \\arg\\max_{a^*} \\; E_{Pr[a|x,y]}(acc(a^*,a))\n"
},
{
"math_id": 8,
"text": "\\mathcal{S}"
},
{
"math_id": 9,
"text": "\nP'(x_i - y_i|x,y) = \\frac{1}{|\\mathcal{S}|} \\sum_{z} \\sum_{1 \\leq k \\leq |z|} P(x_i \\sim z_i|x,z) \\cdot P(z_i \\sim y_i|z,y)\n"
}
]
| https://en.wikipedia.org/wiki?curid=6793305 |
679351 | Deformation (mathematics) | Branch of mathematics
In mathematics, deformation theory is the study of infinitesimal conditions associated with varying a solution "P" of a problem to slightly different solutions "P"ε, where ε is a small number, or a vector of small quantities. The infinitesimal conditions are the result of applying the approach of differential calculus to solving a problem with constraints. The name is an analogy to non-rigid structures that deform slightly to accommodate external forces.
Some characteristic phenomena are: the derivation of first-order equations by treating the ε quantities as having negligible squares; the possibility of "isolated solutions", in that varying a solution may not be possible, "or" does not bring anything new; and the question of whether the infinitesimal constraints actually 'integrate', so that their solution does provide small variations. In some form these considerations have a history of centuries in mathematics, but also in physics and engineering. For example, in the geometry of numbers a class of results called "isolation theorems" was recognised, with the topological interpretation of an "open orbit" (of a group action) around a given solution. Perturbation theory also looks at deformations, in general of operators.
Deformations of complex manifolds.
The most salient deformation theory in mathematics has been that of complex manifolds and algebraic varieties. This was put on a firm basis by foundational work of Kunihiko Kodaira and Donald C. Spencer, after deformation techniques had received a great deal of more tentative application in the Italian school of algebraic geometry. One expects, intuitively, that deformation theory of the first order should equate the Zariski tangent space with a moduli space. The phenomena turn out to be rather subtle, though, in the general case.
In the case of Riemann surfaces, one can explain that the complex structure on the Riemann sphere is isolated (no moduli). For genus 1, an elliptic curve has a one-parameter family of complex structures, as shown in elliptic function theory. The general Kodaira–Spencer theory identifies as the key to the deformation theory the sheaf cohomology group
formula_0
where Θ is (the sheaf of germs of sections of) the holomorphic tangent bundle. There is an obstruction in the "H"2 of the same sheaf, which is always zero in the case of a curve, for general reasons of dimension. In the case of genus 0 the "H"1 also vanishes. For genus 1 the dimension is the Hodge number "h"1,0, which is therefore 1. It is known that all curves of genus one have equations of the form "y"2 = "x"3 + "ax" + "b". These obviously depend on two parameters, a and b, whereas the isomorphism classes of such curves have only one parameter. Hence there must be an equation relating those a and b which describe isomorphic elliptic curves. It turns out that curves for which "b"2"a"−3 has the same value describe isomorphic curves. That is, varying a and b is one way to deform the structure of the curve "y"2 = "x"3 + "ax" + "b", but not all variations of "a,b" actually change the isomorphism class of the curve.
One can go further with the case of genus "g" > 1, using Serre duality to relate the "H"1 to
formula_1
where Ω is the holomorphic cotangent bundle and the notation Ω[2] means the "tensor square" ("not" the second exterior power). In other words, deformations are regulated by holomorphic quadratic differentials on a Riemann surface, again something known classically. The dimension of the moduli space, called Teichmüller space in this case, is computed as 3"g" − 3, by the Riemann–Roch theorem.
These examples are the beginning of a theory applying to holomorphic families of complex manifolds, of any dimension. Further developments included: the extension by Spencer of the techniques to other structures of differential geometry; the assimilation of the Kodaira–Spencer theory into the abstract algebraic geometry of Grothendieck, with a consequent substantive clarification of earlier work; and deformation theory of other structures, such as algebras.
Deformations and flat maps.
The most general form of a deformation is a flat map formula_2 of complex-analytic spaces, schemes, or germs of functions on a space. Grothendieck was the first to find this far-reaching generalization for deformations and developed the theory in that context. The general idea is there should exist a universal family formula_3 such that any deformation can be found as a "unique" pullback squareformula_4In many cases, this universal family is either a Hilbert scheme or Quot scheme, or a quotient of one of them. For example, in the construction of the moduli of curves, it is constructed as a quotient of the smooth curves in the Hilbert scheme. If the pullback square is not unique, then the family is only versal.
Deformations of germs of analytic algebras.
One of the useful and readily computable areas of deformation theory comes from the deformation theory of germs of complex spaces, such as Stein manifolds, complex manifolds, or complex analytic varieties. Note that this theory can be globalized to complex manifolds and complex analytic spaces by considering the sheaves of germs of holomorphic functions, tangent spaces, etc. Such algebras are of the formformula_5 where formula_6 is the ring of convergent power-series and formula_7 is an ideal. For example, many authors study the germs of functions of a singularity, such as the algebraformula_8representing a plane-curve singularity. A germ of analytic algebras is then an object in the opposite category of such algebras. Then, a deformation of a germ of analytic algebras formula_9 is given by a flat map of germs of analytic algebras formula_2 where formula_10 has a distinguished point formula_11 such that the formula_9 fits into the pullback squareformula_12These deformations have an equivalence relation given by commutative squaresformula_13where the horizontal arrows are isomorphisms. For example, there is a deformation of the plane curve singularity given by the opposite diagram of the commutative diagram of analytic algebrasformula_14In fact, Milnor studied such deformations, where a singularity is deformed by a constant, hence the fiber over a non-zero formula_15 is called the Milnor fiber.
Cohomological Interpretation of deformations.
It should be clear that there could be many deformations of a single germ of analytic functions. Because of this, some book-keeping devices are required to organize all of this information. These organizational devices are constructed using tangent cohomology. This is formed by using the Koszul–Tate resolution, and potentially modifying it by adding additional generators for non-regular algebras formula_16. In the case of analytic algebras these resolutions are called the Tjurina resolution, after the mathematician who first studied such objects, Galina Tyurina. This is a graded-commutative differential graded algebra formula_17 such that formula_18 is a surjective map of analytic algebras, and this map fits into an exact sequenceformula_19Then, by taking the differential graded module of derivations formula_20, its cohomology forms the tangent cohomology of the germ of analytic algebras formula_16. These cohomology groups are denoted formula_21. The group formula_22 contains information about all of the deformations of formula_16 and can be readily computed using the exact sequenceformula_23If formula_16 is isomorphic to the algebraformula_24then its deformations are equal toformula_25where formula_26 is the Jacobian matrix of formula_27. For example, a hypersurface given by formula_28 has the deformationsformula_29For the singularity formula_30 this is the moduleformula_31hence the only deformations are given by adding constant or linear terms, so a general deformation of formula_32 is formula_33 where the formula_34 are deformation parameters.
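For the cusp singularity discussed above, the basis of the deformation space can be verified with computer algebra. The sketch below uses SymPy and works in the polynomial ring as a stand-in for the convergent power-series ring, which suffices for this quasi-homogeneous example; the variable names are illustrative.

from sympy import symbols, diff, groebner, reduced

x, y = symbols('x y')
f = y**2 - x**3

# ideal generated by f and its partial derivatives; the quotient by it is finite dimensional
gens = [f, diff(f, x), diff(f, y)]
G = list(groebner(gens, x, y))

basis = []
for a in range(4):
    for b in range(4):
        m = x**a * y**b
        _, r = reduced(m, G, x, y)  # normal form of the monomial modulo the ideal
        if r != 0:
            basis.append(m)

print(basis)  # [1, x]: the deformation parameters multiply 1 and x, as stated above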
Functorial description.
Another method for formalizing deformation theory is using functors on the category formula_35 of local Artin algebras over a field. A pre-deformation functor is defined as a functor
formula_36
such that formula_37 is a point. The idea is that we want to study the infinitesimal structure of some moduli space around a point where lying above that point is the space of interest. It is typically the case that it is easier to describe the functor for a moduli problem instead of finding an actual space. For example, if we want to consider the moduli-space of hypersurfaces of degree formula_38 in formula_39, then we could consider the functor
formula_40
where
formula_41
In general, however, it is more convenient (or even necessary) to work with functors of groupoids instead of sets. This is true for moduli of curves.
Technical remarks about infinitesimals.
Infinitesimals have long been in use by mathematicians for non-rigorous arguments in calculus. The idea is that if we consider polynomials formula_42 with an infinitesimal formula_43, then only the first order terms really matter; that is, we can consider
formula_44
A simple application of this is that we can find the derivatives of monomials using infinitesimals:
formula_45
the formula_43 term contains the derivative of the monomial, demonstrating its use in calculus. We could also interpret this equation as the first two terms of the Taylor expansion of the monomial. Infinitesimals can be made rigorous using nilpotent elements in local artin algebras. In the ring formula_46 we see that arguments with infinitesimals can work. This motivates the notation formula_47, which is called the ring of dual numbers.
Moreover, if we want to consider higher-order terms of a Taylor approximation, then we could consider the Artin algebras formula_48. For our monomial, suppose we want to write out the second-order expansion; then
formula_49
Recall that a Taylor expansion (at zero) can be written out as
formula_50
hence the previous two equations show that the second derivative of formula_51 is formula_52.
In general, since we want to consider Taylor expansions of arbitrary order in any number of variables, we will consider the category of all local Artin algebras over a field.
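The ring of dual numbers can be implemented directly, which makes the bookkeeping of first-order terms mechanical. The following Python sketch defines a minimal dual-number type (the class name Dual and the two-field layout are illustrative choices) and recovers the derivative of the monomial above:

class Dual:
    # a + b*eps with eps**2 == 0; a is the value, b the infinitesimal coefficient
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + a2*b1)*eps, since eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

x = Dual(2.0, 1.0)   # the point x = 2 together with an infinitesimal displacement
y = x * x * x        # evaluate the monomial x**3
print(y.a, y.b)      # 8.0 12.0 -- the eps-coefficient equals the derivative 3*x**2 at x = 2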
Motivation.
To motivate the definition of a pre-deformation functor, consider the projective hypersurface over a field
formula_53
If we want to consider an infinitesimal deformation of this space, then we could write down a Cartesian square
formula_54
where formula_55. Then, the space on the right hand corner is one example of an infinitesimal deformation: the extra scheme theoretic structure of the nilpotent elements in formula_56 (which is topologically a point) allows us to organize this infinitesimal data. Since we want to consider all possible expansions, we will let our predeformation functor be defined on objects as
formula_57
where formula_16 is a local Artin formula_58-algebra.
Smooth pre-deformation functors.
A pre-deformation functor is called smooth if for any surjection formula_59 such that the square of any element in the kernel is zero, there is a surjection
formula_60
This is motivated by the following question: given a deformation
formula_61
does there exist an extension of this cartesian diagram to the cartesian diagrams
formula_62
The name "smooth" comes from the lifting criterion of a smooth morphism of schemes.
Tangent space.
Recall that the tangent space of a scheme formula_63 can be described as the formula_64-set
formula_65
where the source is the ring of dual numbers. Since we are considering the tangent space of a point of some moduli space, we can define the tangent space of our (pre-)deformation functor as
formula_66
Applications of deformation theory.
Dimension of moduli of curves.
One of the first properties of the moduli of algebraic curves formula_67 can be deduced using elementary deformation theory. Its dimension can be computed asformula_68for an arbitrary smooth curve of genus formula_69, because the deformation space is the tangent space of the moduli space. Using Serre duality the tangent space is isomorphic toformula_70Hence the Riemann–Roch theorem givesformula_71For curves of genus formula_72 we have formula_73 becauseformula_74and the degree of this line bundle isformula_75and formula_76 for line bundles of negative degree. Therefore the dimension of the moduli space is formula_77.
Bend-and-break.
Deformation theory was famously applied in birational geometry by Shigefumi Mori to study the existence of rational curves on varieties. For a Fano variety of positive dimension Mori showed that there is a rational curve passing through every point. The method of the proof later became known as Mori's bend-and-break. The rough idea is to start with some curve "C" through a chosen point and keep deforming it until it breaks into several components. Replacing "C" by one of the components has the effect of decreasing either the genus or the degree of "C". So after several repetitions of the procedure, eventually we'll obtain a curve of genus 0, i.e. a rational curve. The existence and the properties of deformations of "C" require arguments from deformation theory and a reduction to positive characteristic.
Arithmetic deformations.
One of the major applications of deformation theory is in arithmetic. It can be used to answer the following question: if we have a variety formula_78, what are the possible extensions formula_79? If our variety is a curve, then the vanishing of formula_80 implies that every deformation induces a variety over formula_81; that is, if we have a smooth curve
formula_82
and a deformation
formula_83
then we can always extend it to a diagram of the form
formula_84
This implies that we can construct a formal scheme formula_85 giving a curve over formula_81.
Deformations of abelian schemes.
The Serre–Tate theorem asserts, roughly speaking, that the deformations of an abelian scheme "A" are controlled by the deformations of the "p"-divisible group formula_86 consisting of its "p"-power torsion points.
Galois deformations.
Another application of deformation theory is with Galois deformations. It allows us to answer the question: If we have a Galois representation
formula_87
how can we extend it to a representation
formula_88
Relationship to string theory.
The so-called Deligne conjecture arising in the context of algebras (and Hochschild cohomology) stimulated much interest in deformation theory in relation to string theory (roughly speaking, to formalise the idea that a string theory can be regarded as a deformation of a point-particle theory). This is now accepted as proved, after some hitches with early announcements. Maxim Kontsevich is among those who have offered a generally accepted proof of this.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " H^1(\\Theta) \\, "
},
{
"math_id": 1,
"text": " H^0(\\Omega^{[2]}) "
},
{
"math_id": 2,
"text": "f:X \\to S"
},
{
"math_id": 3,
"text": "\\mathfrak{X} \\to B"
},
{
"math_id": 4,
"text": "\\begin{matrix}\nX & \\to & \\mathfrak{X} \\\\\n\\downarrow & & \\downarrow \\\\\nS & \\to & B\n\\end{matrix}"
},
{
"math_id": 5,
"text": "A \\cong \\frac{\\mathbb{C}\\{z_1,\\ldots, z_n\\}}{I}"
},
{
"math_id": 6,
"text": "\\mathbb{C}\\{z_1,\\ldots,z_n \\}"
},
{
"math_id": 7,
"text": "I"
},
{
"math_id": 8,
"text": "A \\cong \\frac{\\mathbb{C}\\{z_1,\\ldots,z_n\\}}{(y^2 - x^n)}"
},
{
"math_id": 9,
"text": "X_0"
},
{
"math_id": 10,
"text": "S"
},
{
"math_id": 11,
"text": "0"
},
{
"math_id": 12,
"text": "\\begin{matrix}\nX_0 & \\to & X \\\\\n\\downarrow & & \\downarrow \\\\\n* & \\xrightarrow[0]{} & S\n\\end{matrix}"
},
{
"math_id": 13,
"text": "\\begin{matrix}\nX'& \\to & X \\\\\n\\downarrow & & \\downarrow \\\\\nS' & \\to & S\n\\end{matrix}"
},
{
"math_id": 14,
"text": "\\begin{matrix}\n\\frac{\\mathbb {C} \\{x,y\\}}{(y^{2}-x^{n})} & \\leftarrow & \\frac{\\mathbb {C} \\{x,y, s\\}}{(y^{2}-x^{n} + s)} \\\\\n\\uparrow & & \\uparrow \\\\\n\\mathbb{C} & \\leftarrow & \\mathbb{C}\\{s\\}\n\\end{matrix}"
},
{
"math_id": 15,
"text": "s"
},
{
"math_id": 16,
"text": "A"
},
{
"math_id": 17,
"text": "(R_\\bullet, s)"
},
{
"math_id": 18,
"text": "R_0 \\to A"
},
{
"math_id": 19,
"text": "\\cdots \\xrightarrow{s} R_{-2} \\xrightarrow{s} R_{-1} \\xrightarrow{s} R_0 \\xrightarrow{p} A \\to 0"
},
{
"math_id": 20,
"text": "(\\text{Der}(R_\\bullet), d)"
},
{
"math_id": 21,
"text": "T^k(A)"
},
{
"math_id": 22,
"text": "T^1(A)"
},
{
"math_id": 23,
"text": "0 \\to T^0(A) \\to \\text{Der}(R_0) \\xrightarrow{d} \\text{Hom}_{R_0}(I,A) \\to T^1(A) \\to 0"
},
{
"math_id": 24,
"text": "\\frac{\\mathbb{C}\\{z_1,\\ldots,z_n\\}}{(f_1,\\ldots, f_m)}"
},
{
"math_id": 25,
"text": "T^1(A) \\cong \\frac{A^m}{df \\cdot A^n}"
},
{
"math_id": 26,
"text": "df"
},
{
"math_id": 27,
"text": "f = (f_1,\\ldots, f_m): \\mathbb{C}^n \\to \\mathbb{C}^m"
},
{
"math_id": 28,
"text": "f"
},
{
"math_id": 29,
"text": "T^1(A) \\cong \\frac{A^n}{\\left( \\frac{\\partial f}{\\partial z_1}, \\ldots, \\frac{\\partial f}{\\partial z_n} \\right)}"
},
{
"math_id": 30,
"text": "y^2 - x^3"
},
{
"math_id": 31,
"text": "\\frac{A^2}{(y, x^2)}"
},
{
"math_id": 32,
"text": "f(x,y) = y^2 - x^3"
},
{
"math_id": 33,
"text": "F(x,y,a_1,a_2) = y^2 - x^3 + a_1 + a_2x "
},
{
"math_id": 34,
"text": "a_i"
},
{
"math_id": 35,
"text": "\\text{Art}_k"
},
{
"math_id": 36,
"text": "F: \\text{Art}_k \\to \\text{Sets}"
},
{
"math_id": 37,
"text": "F(k)"
},
{
"math_id": 38,
"text": "d"
},
{
"math_id": 39,
"text": "\\mathbb{P}^n"
},
{
"math_id": 40,
"text": "F: \\text{Sch} \\to \\text{Sets}"
},
{
"math_id": 41,
"text": "\nF(S) = \\left\\{\n\\begin{matrix}\nX \\\\\n\\downarrow \\\\\nS\n\\end{matrix}\n: \\text{ each fiber is a degree } d \\text{ hypersurface in }\\mathbb{P}^n\\right\\}\n"
},
{
"math_id": 42,
"text": "F(x,\\varepsilon)"
},
{
"math_id": 43,
"text": "\\varepsilon"
},
{
"math_id": 44,
"text": " F(x,\\varepsilon) \\equiv f(x) + \\varepsilon g(x) + O(\\varepsilon^2)"
},
{
"math_id": 45,
"text": " (x+\\varepsilon)^3 = x^3 + 3x^2\\varepsilon + O(\\varepsilon^2)"
},
{
"math_id": 46,
"text": "k[y]/(y^2)"
},
{
"math_id": 47,
"text": "k[\\varepsilon] = k[y]/(y^2)"
},
{
"math_id": 48,
"text": "k[y]/(y^k)"
},
{
"math_id": 49,
"text": "(x+\\varepsilon)^3 = x^3 + 3x^2\\varepsilon + 3x\\varepsilon^2 + \\varepsilon^3"
},
{
"math_id": 50,
"text": "f(x) = f(0) + \\frac{f^{(1)}(0)}{1!}x + \\frac{f^{(2)}(0)}{2!}x^2 + \\frac{f^{(3)}(0)}{3!}x^3 + \\cdots "
},
{
"math_id": 51,
"text": "x^3"
},
{
"math_id": 52,
"text": "6x"
},
{
"math_id": 53,
"text": "\n\\begin{matrix}\n\\operatorname{Proj}\\left( \\dfrac{\\mathbb{C}[x_0,x_1,x_2,x_3]}{(x_0^4 + x_1^4 + x_2^4 + x_3^4)} \\right) \\\\\n\\downarrow \\\\\n\\operatorname{Spec}(k)\n\\end{matrix}\n"
},
{
"math_id": 54,
"text": "\n\\begin{matrix}\n\\operatorname{Proj}\\left( \\dfrac{\\mathbb{C}[x_0,x_1,x_2,x_3]}{(x_0^4 + x_1^4 + x_2^4 + x_3^4)} \\right) & \\to & \\operatorname{Proj}\\left( \\dfrac{ \\mathbb{C}[x_0,x_1,x_2,x_3][\\varepsilon]}{(x_0^4 + x_1^4 + x_2^4 + x_3^4 + \\varepsilon x_0^{a_0} x_1^{a_1} x_2^{a_2} x_3^{a_3}) } \\right) \\\\\n\\downarrow & & \\downarrow\\\\\n\\operatorname{Spec}(k) & \\to & \\operatorname{Spec}(k[\\varepsilon])\n\\end{matrix}\n"
},
{
"math_id": 55,
"text": "a_0 + a_1 + a_2 + a_3 = 4"
},
{
"math_id": 56,
"text": "\\operatorname{Spec}(k[\\varepsilon])"
},
{
"math_id": 57,
"text": "\nF(A) = \\left\\{\n\\begin{matrix}\n\\operatorname{Proj}\\left( \\dfrac{\\mathbb{C}[x_0,x_1,x_2,x_3]}{(x_0^4 + x_1^4 + x_2^4 + x_3^4)} \\right) & \\to & \\mathfrak{X} \\\\\n\\downarrow & & \\downarrow \\\\\n\\operatorname{Spec}(k) & \\to & \\operatorname{Spec}(A)\n\\end{matrix}\n\\right\\}\n"
},
{
"math_id": 58,
"text": "k"
},
{
"math_id": 59,
"text": "A' \\to A"
},
{
"math_id": 60,
"text": "F(A') \\to F(A)"
},
{
"math_id": 61,
"text": "\n\\begin{matrix}\nX & \\to & \\mathfrak{X} \\\\\n\\downarrow & & \\downarrow \\\\\n\\operatorname{Spec}(k) & \\to & \\operatorname{Spec}(A)\n\\end{matrix}\n"
},
{
"math_id": 62,
"text": "\n\\begin{matrix}\nX & \\to & \\mathfrak{X} & \\to & \\mathfrak{X}' \\\\\n\\downarrow & & \\downarrow & & \\downarrow \\\\\n\\operatorname{Spec}(k) & \\to & \\operatorname{Spec}(A) & \\to & \\operatorname{Spec}(A')\n\\end{matrix}\n"
},
{
"math_id": 63,
"text": "X"
},
{
"math_id": 64,
"text": "\\operatorname{Hom}"
},
{
"math_id": 65,
"text": "TX := \\operatorname{Hom}_{\\text{Sch}/k}(\\operatorname{Spec}(k[\\varepsilon]),X)"
},
{
"math_id": 66,
"text": "T_F := F(k[\\varepsilon])."
},
{
"math_id": 67,
"text": "\\mathcal{M}_g"
},
{
"math_id": 68,
"text": "\\dim(\\mathcal{M}_g) = \\dim H^1(C,T_C)"
},
{
"math_id": 69,
"text": "g"
},
{
"math_id": 70,
"text": "\\begin{align}\nH^1(C,T_C) &\\cong H^0(C,T_C^* \\otimes \\omega_C)^\\vee \\\\\n&\\cong H^0(C,\\omega_C^{\\otimes 2})^\\vee\n\\end{align}"
},
{
"math_id": 71,
"text": "\\begin{align}\nh^0(C,\\omega_C^{\\otimes 2}) - h^1(C,\\omega_C^{\\otimes 2}) &= 2(2g - 2) - g + 1 \\\\\n &= 3g - 3\n\\end{align}"
},
{
"math_id": 72,
"text": "g \\geq 2"
},
{
"math_id": 73,
"text": "h^1(C,\\omega_C^{\\otimes 2}) = 0"
},
{
"math_id": 74,
"text": "h^1(C,\\omega_C^{\\otimes 2}) = h^0(C, (\\omega_C^{\\otimes 2})^{\\vee}\\otimes \\omega_C)\n"
},
{
"math_id": 75,
"text": "\\begin{align}\n\\text{deg}((\\omega_C^{\\otimes 2})^\\vee \\otimes \\omega_C) &= 4 - 4g + 2g - 2 \\\\\n&= 2 - 2g\n\\end{align}"
},
{
"math_id": 76,
"text": "h^0(L) = 0"
},
{
"math_id": 77,
"text": "3g - 3"
},
{
"math_id": 78,
"text": "X/\\mathbb{F}_p"
},
{
"math_id": 79,
"text": "\\mathfrak{X}/\\mathbb{Z}_p"
},
{
"math_id": 80,
"text": "H^2"
},
{
"math_id": 81,
"text": "\\mathbb{Z}_p"
},
{
"math_id": 82,
"text": "\n\\begin{matrix}\nX \\\\\n\\downarrow \\\\\n\\operatorname{Spec}(\\mathbb{F}_p)\n\\end{matrix}\n"
},
{
"math_id": 83,
"text": "\n\\begin{matrix}\nX & \\to & \\mathfrak{X}_2 \\\\\n\\downarrow & & \\downarrow \\\\\n\\operatorname{Spec}(\\mathbb{F}_p) & \\to & \\operatorname{Spec}(\\mathbb{Z}/(p^2))\n\\end{matrix}\n"
},
{
"math_id": 84,
"text": "\n\\begin{matrix}\nX & \\to & \\mathfrak{X}_2 & \\to & \\mathfrak{X}_3 & \\to \\cdots \\\\\n\\downarrow & & \\downarrow & & \\downarrow & \\\\\n\\operatorname{Spec}(\\mathbb{F}_p) & \\to & \\operatorname{Spec}(\\mathbb{Z}/(p^2)) & \\to & \\operatorname{Spec}(\\mathbb{Z}/(p^3)) & \\to \\cdots\n\\end{matrix}\n"
},
{
"math_id": 85,
"text": "\\mathfrak{X} = \\operatorname{Spet}(\\mathfrak{X}_\\bullet)"
},
{
"math_id": 86,
"text": "A[p^\\infty]"
},
{
"math_id": 87,
"text": "G \\to \\operatorname{GL}_n(\\mathbb{F}_p)"
},
{
"math_id": 88,
"text": "G \\to \\operatorname{GL}_n(\\mathbb{Z}_p) \\text{?}"
}
]
| https://en.wikipedia.org/wiki?curid=679351 |
6793679 | Pointwise mutual information | Information Theory
In statistics, probability theory and information theory, pointwise mutual information (PMI), or point mutual information, is a measure of association. It compares the probability of two events occurring together to what this probability would be if the events were independent.
PMI (especially in its positive pointwise mutual information variant) has been described as "one of the most important concepts in NLP", where it "draws on the intuition that the best way to weigh the association between two words is to ask how much more the two words co-occur in [a] corpus than we would have expected them to appear by chance."
The concept was introduced in 1961 by Robert Fano under the name of "mutual information", but today that term is instead used for a related measure of dependence between random variables: The mutual information (MI) of two discrete random variables refers to the average PMI of all possible events.
Definition.
The PMI of a pair of outcomes "x" and "y" belonging to discrete random variables "X" and "Y" quantifies the discrepancy between the probability of their coincidence given their joint distribution and their individual distributions, assuming independence. Mathematically:
formula_0
(with the latter two expressions being equal to the first by Bayes' theorem). The mutual information (MI) of the random variables "X" and "Y" is the expected value of the PMI (over all possible outcomes).
The measure is symmetric (formula_1). It can take positive or negative values, but is zero if "X" and "Y" are independent. Note that even though PMI may be negative or positive, its expected value over all joint events (MI) is non-negative. PMI is maximized when "X" and "Y" are perfectly associated (i.e. formula_2 or formula_3), yielding the following bounds:
formula_4
Finally, formula_5 will increase if formula_2 is fixed but formula_6 decreases.
Here is an example to illustrate:
Using this table we can marginalize to get the following additional table for the individual distributions:
With this example, we can compute four values for formula_5. Using base-2 logarithms:
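The computation only needs the joint distribution and its marginals. The following Python sketch carries it out for an assumed joint distribution over two binary variables (the concrete numbers are chosen purely for illustration):

from math import log2

p_xy = {(0, 0): 0.1, (0, 1): 0.7, (1, 0): 0.15, (1, 1): 0.05}  # assumed joint distribution
p_x = {x: sum(v for (a, _), v in p_xy.items() if a == x) for x in (0, 1)}  # marginal of X
p_y = {y: sum(v for (_, b), v in p_xy.items() if b == y) for y in (0, 1)}  # marginal of Y

for (x, y), pxy in p_xy.items():
    pmi = log2(pxy / (p_x[x] * p_y[y]))
    print(f"pmi(x={x}; y={y}) = {pmi:+.3f}")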
Similarities to mutual information.
Pointwise Mutual Information has many of the same relationships as the mutual information. In particular,
formula_8
where formula_9 is the self-information, or formula_10.
Variants.
Several variations of PMI have been proposed, in particular to address what has been described as its "two main limitations": its values are unbounded, which makes them hard to compare across different pairs, and it is biased towards low-frequency (rare) events.
Positive PMI.
The positive pointwise mutual information (PPMI) measure is defined by setting negative values of PMI to zero:
formula_11
This definition is motivated by the observation that "negative PMI values (which imply things are co-occurring less often than we would expect by chance) tend to be unreliable unless our corpora are enormous" and also by a concern that "it's not clear whether it's even possible to evaluate such scores of 'unrelatedness' with human judgment". It also avoids having to deal with formula_12 values for events that never occur together (formula_13), by setting PPMI for these to 0.
Normalized pointwise mutual information (npmi).
Pointwise mutual information can be normalized between [-1,+1] resulting in -1 (in the limit) for never occurring together, 0 for independence, and +1 for complete co-occurrence.
formula_14
where formula_15 is the joint self-information formula_16.
PMIk family.
The PMIk measure (for k=2, 3 etc.), which was introduced by Béatrice Daille around 1994, and as of 2011 was described as being "among the most widely used variants", is defined as
formula_17
In particular, formula_18. The additional factors of formula_19 inside the logarithm are intended to correct the bias of PMI towards low-frequency events, by boosting the scores of frequent pairs. A 2011 case study demonstrated the success of PMI3 in correcting this bias on a corpus drawn from English Wikipedia. Taking x to be the word "football", its most strongly associated words y according to the PMI measure (i.e. those maximizing formula_20) were domain-specific ("midfielder", "cornerbacks", "goalkeepers") whereas the terms ranked most highly by PMI3 were much more general ("league", "clubs", "england").
Chain-rule.
Like mutual information, point mutual information follows the chain rule, that is,
formula_21
This is proven through application of Bayes' theorem:
formula_22
Applications.
PMI can be used in various disciplines, e.g. in information theory, linguistics, or chemistry (in profiling and analysis of chemical compounds). In computational linguistics, PMI has been used for finding collocations and associations between words. For instance, counts of occurrences and co-occurrences of words in a text corpus can be used to approximate the probabilities formula_6 and formula_19 respectively. The following table shows counts of pairs of words getting the most and the least PMI scores in the first 50 millions of words in Wikipedia (dump of October 2015), filtering by 1,000 or more co-occurrences. The frequency of each count can be obtained by dividing its value by 50,000,952. (Note: natural log is used to calculate the PMI values in this example, instead of log base 2)
Good collocation pairs have high PMI because the probability of co-occurrence is only slightly lower than the probabilities of occurrence of each word. Conversely, a pair of words whose probabilities of occurrence are considerably higher than their probability of co-occurrence gets a small PMI score.
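A corpus-based collocation score of this kind can be computed directly from unigram and bigram counts. The sketch below uses the natural logarithm, as in the example above; the word counts shown are made up for illustration, and only the corpus size of 50,000,952 words is taken from that example.

from math import log
from collections import Counter

def pmi_scores(bigram_counts, unigram_counts, total):
    # bigram_counts and unigram_counts are assumed to be Counter-like maps built from a corpus
    scores = {}
    for (w1, w2), c12 in bigram_counts.items():
        p12 = c12 / total
        p1 = unigram_counts[w1] / total
        p2 = unigram_counts[w2] / total
        scores[(w1, w2)] = log(p12 / (p1 * p2))
    return scores

uni = Counter({"puerto": 1200, "rico": 1000, "of": 500000, "the": 900000})  # made-up counts
bi = Counter({("puerto", "rico"): 950, ("of", "the"): 40000})
print(sorted(pmi_scores(bi, uni, 50_000_952).items(), key=lambda kv: -kv[1]))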
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\operatorname{pmi}(x;y) \\equiv \\log_2\\frac{p(x,y)}{p(x)p(y)} = \\log_2\\frac{p(x|y)}{p(x)} = \\log_2\\frac{p(y|x)}{p(y)}\n"
},
{
"math_id": 1,
"text": "\\operatorname{pmi}(x;y)=\\operatorname{pmi}(y;x)"
},
{
"math_id": 2,
"text": "p(x|y)"
},
{
"math_id": 3,
"text": "p(y|x)=1"
},
{
"math_id": 4,
"text": "\n-\\infty \\leq \\operatorname{pmi}(x;y) \\leq \\min\\left[ -\\log p(x), -\\log p(y) \\right] .\n"
},
{
"math_id": 5,
"text": "\\operatorname{pmi}(x;y)"
},
{
"math_id": 6,
"text": "p(x)"
},
{
"math_id": 7,
"text": "\\operatorname{I}(X;Y)"
},
{
"math_id": 8,
"text": "\n\\begin{align}\n\\operatorname{pmi}(x;y) &=& h(x) + h(y) - h(x,y) \\\\ \n &=& h(x) - h(x \\mid y) \\\\ \n &=& h(y) - h(y \\mid x)\n\\end{align}\n"
},
{
"math_id": 9,
"text": "h(x)"
},
{
"math_id": 10,
"text": "-\\log_2 p(x)"
},
{
"math_id": 11,
"text": "\n\\operatorname{ppmi}(x;y) \\equiv \\max\\left(\\log_2\\frac{p(x,y)}{p(x)p(y)},0\\right)\n"
},
{
"math_id": 12,
"text": "\n-\\infty\n"
},
{
"math_id": 13,
"text": "\np(x,y)=0\n"
},
{
"math_id": 14,
"text": "\n\n\\operatorname{npmi}(x;y) = \\frac{\\operatorname{pmi}(x;y)}{h(x, y) }\n\n"
},
{
"math_id": 15,
"text": "h(x,y)"
},
{
"math_id": 16,
"text": "-\\log_2 p(x,y)"
},
{
"math_id": 17,
"text": "\n\\operatorname{pmi}^k(x;y) \\equiv \\log_2\\frac{p(x,y)^k}{p(x)p(y)} = \\operatorname{pmi}(x;y)-(-(k-1))\\log_2 p(x,y))\n"
},
{
"math_id": 18,
"text": "pmi^1(x;y) = pmi(x;y)"
},
{
"math_id": 19,
"text": "p(x,y)"
},
{
"math_id": 20,
"text": "pmi(x;y)"
},
{
"math_id": 21,
"text": "\\operatorname{pmi}(x;yz) = \\operatorname{pmi}(x;y) + \\operatorname{pmi}(x;z|y)"
},
{
"math_id": 22,
"text": "\n\\begin{align}\n\\operatorname{pmi}(x;y) + \\operatorname{pmi}(x;z|y) & {} = \\log\\frac{p(x,y)}{p(x)p(y)} + \\log\\frac{p(x,z|y)}{p(x|y)p(z|y)} \\\\ \n& {} = \\log \\left[ \\frac{p(x,y)}{p(x)p(y)} \\frac{p(x,z|y)}{p(x|y)p(z|y)} \\right] \\\\ \n& {} = \\log \\frac{p(x|y)p(y)p(x,z|y)}{p(x)p(y)p(x|y)p(z|y)} \\\\\n& {} = \\log \\frac{p(x,yz)}{p(x)p(yz)} \\\\\n& {} = \\operatorname{pmi}(x;yz)\n\\end{align}\n"
}
]
| https://en.wikipedia.org/wiki?curid=6793679 |
679384 | Polygon triangulation | Partition of a simple polygon into triangles
In computational geometry, polygon triangulation is the partition of a polygonal area (simple polygon) P into a set of triangles, i.e., finding a set of triangles with pairwise non-intersecting interiors whose union is P.
Triangulations may be viewed as special cases of planar straight-line graphs. When there are no holes or added points, triangulations form maximal outerplanar graphs.
Polygon triangulation without extra vertices.
Over time, a number of algorithms have been proposed to triangulate a polygon.
Special cases.
It is trivial to triangulate any convex polygon in linear time into a fan triangulation, by adding diagonals from one vertex to all of the other non-adjacent vertices.
The total number of ways to triangulate a convex "n"-gon by non-intersecting diagonals is the ("n"−2)nd Catalan number, which equals
formula_0,
a formula found by Leonhard Euler.
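The count can be checked against the recurrence obtained by fixing one polygon edge and summing over the possible apexes of the triangle that contains it. A short Python verification (the function names are illustrative):

from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def triangulations(n):
    # number of triangulations of a convex n-gon, with a single edge (n = 2) counted as 1
    if n == 2:
        return 1
    # the triangle on edge (v_1, v_n) has its apex at some v_k with 1 < k < n,
    # splitting the polygon into a k-gon and an (n - k + 1)-gon
    return sum(triangulations(k) * triangulations(n - k + 1) for k in range(2, n))

for n in range(3, 10):
    catalan = comb(2 * n - 4, n - 2) // (n - 1)  # the (n-2)nd Catalan number
    assert triangulations(n) == catalan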
A monotone polygon can be triangulated in linear time with either the algorithm of A. Fournier and D.Y. Montuno, or the algorithm of Godfried Toussaint.
Ear clipping method.
One way to triangulate a simple polygon is based on the two ears theorem, that is, the fact that any simple polygon with at least 4 vertices and without holes has at least two "ears", which are triangles with two sides being the edges of the polygon and the third one completely inside it. The algorithm then consists of finding such an ear, removing it from the polygon (which results in a new polygon that still meets the conditions) and repeating until there is only one triangle left.
This algorithm is easy to implement, but slower than some other algorithms, and it only works on polygons without holes. An implementation that keeps separate lists of convex and concave vertices will run in O(n2) time. This method is known as "ear clipping" and sometimes "ear trimming". An efficient algorithm for cutting off ears was discovered by Hossam ElGindy, Hazel Everett, and Godfried Toussaint.
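A compact, unoptimized sketch of ear clipping in Python is shown below; it assumes a simple polygon without holes, given as counter-clockwise (x, y) vertices, and rejects candidate ears whose closed triangle contains any other remaining vertex. The function and variable names are illustrative, not taken from any particular library.

def triangulate_ear_clipping(poly):
    # poly: list of (x, y) vertices in counter-clockwise order; returns index triples
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def inside(p, a, b, c):
        # p inside or on the boundary of the counter-clockwise triangle abc
        return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

    indices = list(range(len(poly)))
    triangles = []
    while len(indices) > 3:
        for i in range(len(indices)):
            prev, curr, nxt = (indices[i - 1], indices[i], indices[(i + 1) % len(indices)])
            a, b, c = poly[prev], poly[curr], poly[nxt]
            if cross(a, b, c) <= 0:          # reflex or degenerate vertex: not an ear
                continue
            if any(inside(poly[j], a, b, c)
                   for j in indices if j not in (prev, curr, nxt)):
                continue                     # another vertex lies inside the candidate ear
            triangles.append((prev, curr, nxt))
            indices.pop(i)                   # clip the ear and restart the scan
            break
    triangles.append(tuple(indices))
    return triangles

# example: an L-shaped (non-convex) hexagon split into four triangles
print(triangulate_ear_clipping([(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]))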
Monotone polygon triangulation.
A simple polygon is monotone with respect to a line L, if any line orthogonal to L intersects the polygon at most twice. A monotone polygon can be split into two monotone "chains". A polygon that is monotone with respect to the y-axis is called "y-monotone". A monotone polygon with n vertices can be triangulated in O(n) time. Assuming a given polygon is y-monotone, the greedy algorithm begins by walking on one chain of the polygon from top to bottom while adding diagonals whenever it is possible. It is easy to see that the algorithm can be applied to any monotone polygon.
Triangulating a non-monotone polygon.
If a polygon is not monotone, it can be partitioned into monotone subpolygons in O(n log n) time using a sweep-line approach. The algorithm does not require the polygon to be simple, thus it can be applied to polygons with holes.
Generally, this algorithm can triangulate a planar subdivision with n vertices in O(n log n) time using O(n) space.
Dual graph of a triangulation.
A useful graph that is often associated with a triangulation of a polygon P is the dual graph. Given a triangulation TP of P, one defines the graph G(TP) as the graph whose vertex set are the triangles of TP, two vertices (triangles) being adjacent if and only if they share a diagonal. It is easy to observe that G(TP) is a tree with maximum degree 3.
Computational complexity.
Until 1988, whether a simple polygon can be triangulated faster than O(n log n) time was an open problem in computational geometry. Then, Tarjan and Van Wyk discovered an O(n log log n)-time algorithm for triangulation, later simplified by Kirkpatrick, Klawe, and Tarjan. Several improved methods with complexity O(n log* n) (in practice, indistinguishable from linear time) followed.
Bernard Chazelle showed in 1991 that any simple polygon can be triangulated in linear time, though the proposed algorithm is very complex. A simpler randomized algorithm with linear expected time is also known.
Seidel's decomposition algorithm and Chazelle's triangulation method are discussed in detail in .
The time complexity of triangulation of an n-vertex polygon "with" holes has an Ω(n log n) lower bound, in algebraic computation tree models of computation. It is possible to compute the number of distinct triangulations of a simple polygon in polynomial time using dynamic programming, and (based on this counting algorithm) to generate uniformly random triangulations in polynomial time. However, counting the triangulations of a polygon with holes is #P-complete, making it unlikely that it can be done in polynomial time.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{n(n+1)...(2n-4)}{(n-2)!}"
}
]
| https://en.wikipedia.org/wiki?curid=679384 |
67938919 | Bimaximal mixing | Bimaximal mixing refers to a proposed form of the lepton mixing matrix. It is characterized by the formula_0 neutrino being a "bimaximal mixture" of formula_1 and formula_2 while being completely decoupled from formula_3, i.e. it is a uniform mixture of formula_1 and formula_2. The formula_3 is consequently a uniform mixture of formula_4 and formula_5. Other notable properties are the symmetry between the formula_1 and formula_2 flavours and between the formula_4 and formula_5 mass eigenstates, and an absence of CP violation. The moduli squared of the matrix elements have to be:
formula_6.
According to the PDG convention, bimaximal mixing corresponds to formula_7 and formula_8, which produces the following matrix:
formula_9.
Alternatively, formula_10 and formula_8 can be used, which corresponds to:
formula_11.
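The pattern of the moduli squared can be reproduced numerically from the angles, assuming a standard PDG-like parametrization of the mixing matrix; the NumPy sketch below (function name and code layout are illustrative) prints the matrix of squared moduli quoted above.

import numpy as np

def pmns(theta12, theta23, theta13, delta=0.0):
    # standard PDG-like parametrization built from the three mixing angles and one phase
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    e = np.exp(-1j * delta)
    return np.array([
        [c12 * c13,                        s12 * c13,                        s13 * e],
        [-s12 * c23 - c12 * s23 * s13 / e, c12 * c23 - s12 * s23 * s13 / e,  s23 * c13],
        [s12 * s23 - c12 * c23 * s13 / e,  -c12 * s23 - s12 * c23 * s13 / e, c23 * c13],
    ])

U = pmns(np.pi / 4, np.pi / 4, 0.0)
print(np.round(np.abs(U) ** 2, 3))  # [[0.5 0.5 0], [0.25 0.25 0.5], [0.25 0.25 0.5]]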
Phenomenology.
The L/E flatness of the electron-like event ratio at Super-Kamiokande severely restricts the CP-conserving neutrino mixing matrices
to the form:
formula_12
Bimaximal mixing corresponds to formula_13. Tribimaximal mixing and golden-ratio mixing also correspond to particular values of the angle in the above parametrization. Bimaximal mixing, along with these other mixing schemes, has been falsified by a non-zero formula_14.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\nu_3"
},
{
"math_id": 1,
"text": "\\nu_\\mu"
},
{
"math_id": 2,
"text": "\\nu_\\tau"
},
{
"math_id": 3,
"text": "\\nu_e"
},
{
"math_id": 4,
"text": "\\nu_1"
},
{
"math_id": 5,
"text": "\\nu_2"
},
{
"math_id": 6,
"text": "\\begin{bmatrix}\n|U_{e 1}|^2 & |U_{e 2}|^2 & |U_{e 3}|^2 \\\\\n|U_{\\mu 1}|^2 & |U_{\\mu 2}|^2 & |U_{\\mu 3}|^2 \\\\ \n|U_{\\tau 1}|^2 & |U_{\\tau 2}|^2 & |U_{\\tau 3}|^2 \n\\end{bmatrix} =\n\\begin{bmatrix} \\frac{1}{2} & \\frac{1}{2} & 0 \\\\\n\\frac{1}{4} & \\frac{1}{4} & \\frac{1}{2} \\\\\n\\frac{1}{4} & \\frac{1}{4} & \\frac{1}{2}\n\\end{bmatrix}"
},
{
"math_id": 7,
"text": "\\theta_{12}=\\theta_{23}=45^\\circ"
},
{
"math_id": 8,
"text": "\\theta_{13}=\\delta_{13}=0"
},
{
"math_id": 9,
"text": "\\begin{bmatrix}\nU_{e 1} & U_{e 2} & U_{e 3} \\\\\nU_{\\mu 1} & U_{\\mu 2} & U_{\\mu 3} \\\\ \nU_{\\tau 1} & U_{\\tau 2} & U_{\\tau 3}\n\\end{bmatrix} =\n\\begin{bmatrix} \\frac{1}{\\sqrt{2}} & \\frac{1}{\\sqrt{2}} & 0 \\\\\n-\\frac{1}{2} & \\frac{1}{2} & \\frac{1}{\\sqrt{2}} \\\\\n\\frac{1}{2} & -\\frac{1}{2} & \\frac{1}{\\sqrt{2}}\n\\end{bmatrix}"
},
{
"math_id": 10,
"text": "\\theta_{12}=\\theta_{23}=-45^\\circ"
},
{
"math_id": 11,
"text": "\\begin{bmatrix}\nU_{e 1} & U_{e 2} & U_{e 3} \\\\\nU_{\\mu 1} & U_{\\mu 2} & U_{\\mu 3} \\\\ \nU_{\\tau 1} & U_{\\tau 2} & U_{\\tau 3}\n\\end{bmatrix} =\n\\begin{bmatrix} \\frac{1}{\\sqrt{2}} & -\\frac{1}{\\sqrt{2}} & 0 \\\\\n\\frac{1}{2} & \\frac{1}{2} & -\\frac{1}{\\sqrt{2}} \\\\\n\\frac{1}{2} & \\frac{1}{2} & \\frac{1}{\\sqrt{2}}\n\\end{bmatrix}"
},
{
"math_id": 12,
"text": "\nU=\n\\begin{bmatrix}\n\\cos\\theta & \\sin\\theta & 0 \\\\\n-\\sin\\theta/\\sqrt{2} & \\cos\\theta/\\sqrt{2} & \\frac{1}{\\sqrt{2}} \\\\ \n\\sin\\theta/\\sqrt{2} & -\\cos\\theta/\\sqrt{2} & \\frac{1}{\\sqrt{2}} \n\\end{bmatrix}.\n"
},
{
"math_id": 13,
"text": "\\theta=45^\\circ"
},
{
"math_id": 14,
"text": "\\theta_{13}"
}
]
| https://en.wikipedia.org/wiki?curid=67938919 |
67944487 | Knowledge graph embedding | Dimensionality reduction of graph-based semantic data objects [machine learning task]
In representation learning, knowledge graph embedding (KGE), also referred to as knowledge representation learning (KRL), or multi-relation learning, is a machine learning task of learning a low-dimensional representation of a knowledge graph's entities and relations while preserving their semantic meaning. Leveraging their embedded representation, knowledge graphs (KGs) can be used for various applications such as link prediction, triple classification, entity recognition, clustering, and relation extraction.
Definition.
A knowledge graph formula_0 is a collection of entities formula_1, relations formula_2, and facts formula_3. A "fact" is a triple formula_4 that denotes a link formula_5 between the head formula_6 and the tail formula_7 of the triple. Another notation that is often used in the literature to represent a triple (or fact) is formula_8. This notation is called resource description framework (RDF). A knowledge graph represents the knowledge related to a specific domain; leveraging this structured representation, it is possible to infer new knowledge from it after some refinement steps. However, using knowledge graphs in real-world applications requires dealing with the sparsity of the data and with computational inefficiency.
The embedding of a knowledge graph translates each entity and relation of a knowledge graph, formula_9 into a vector of a given dimension formula_10, called embedding dimension. In the general case, we can have different embedding dimensions for the entities formula_10 and the relations formula_11. The collection of embedding vectors for all the entities and relations in the knowledge graph can then be used for downstream tasks.
A knowledge graph embedding is characterized by four different aspects: the representation space, the scoring function, the encoding model, and the use of additional information.
Embedding procedure.
All the different knowledge graph embedding models follow roughly the same procedure to learn the semantic meaning of the facts. First of all, to learn an embedded representation of a knowledge graph, the embedding vectors of the entities and relations are initialized to random values. Then, starting from a training set, the algorithm continuously optimizes the embeddings until a stop condition is reached. Usually, the stop condition is determined by overfitting on the training set. For each iteration, a batch of size formula_12 is sampled from the training set, and for each triple of the batch a random corrupted fact is sampled, i.e., a triple that does not represent a true fact in the knowledge graph. The corruption of a triple involves substituting the head or the tail (or both) of the triple with another entity that makes the fact false. The original triple and the corrupted triple are added to the training batch, and then the embeddings are updated by optimizing a scoring function. At the end of the algorithm, the learned embeddings should have extracted the semantic meaning from the triples and should correctly predict unseen true facts in the knowledge graph.
Pseudocode.
The following is the pseudocode for the general embedding procedure.
algorithm Compute entity and relation embeddings is
input: The training set formula_13,
entity set formula_14,
relation set formula_2,
embedding dimension formula_11
output: Entity and relation embeddings
initialization: "the entities" formula_15 "and relations" formula_16 "embeddings (vectors) are randomly initialized"
while stop condition do
formula_17 // From the training set randomly sample a batch of size b
for each formula_18 in formula_19 do
formula_20 // sample a corrupted fact of triple
formula_21
end for
Update embeddings by minimizing the loss function
end while
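A concrete instance of this procedure is the translational TransE model described later. The following Python sketch (using NumPy) fills in the sampling, corruption, and update steps with a margin-based ranking loss; the hyper-parameter names and default values are illustrative, not taken from any specific paper.

import numpy as np

def train_transe(triples, n_entities, n_relations, k=50, margin=1.0,
                 lr=0.01, batch_size=128, epochs=100, seed=0):
    # triples is a list of (h, r, t) integer index triples
    rng = np.random.default_rng(seed)
    E = rng.normal(scale=1.0 / np.sqrt(k), size=(n_entities, k))   # entity embeddings
    R = rng.normal(scale=1.0 / np.sqrt(k), size=(n_relations, k))  # relation embeddings
    triples = np.asarray(triples)
    for _ in range(epochs):
        batch = triples[rng.integers(len(triples), size=batch_size)]
        for h, r, t in batch:
            # corrupt the triple by replacing its head or tail with a random entity
            h2, t2 = h, t
            if rng.random() < 0.5:
                h2 = rng.integers(n_entities)
            else:
                t2 = rng.integers(n_entities)
            pos = E[h] + R[r] - E[t]
            neg = E[h2] + R[r] - E[t2]
            if margin + np.linalg.norm(pos) - np.linalg.norm(neg) > 0:
                # the ranking constraint is violated: take a gradient step
                gpos = pos / (np.linalg.norm(pos) + 1e-9)
                gneg = neg / (np.linalg.norm(neg) + 1e-9)
                E[h] -= lr * gpos
                E[t] += lr * gpos
                R[r] -= lr * (gpos - gneg)
                E[h2] += lr * gneg
                E[t2] -= lr * gneg
        # keep entity embeddings inside the unit ball, a common TransE constraint
        E /= np.maximum(np.linalg.norm(E, axis=1, keepdims=True), 1.0)
    return E, R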
Performance indicators.
These indexes are often used to measure the embedding quality of a model. The simplicity of the indexes makes them very suitable for evaluating the performance of an embedding algorithm even on a large scale. Given Q as the set of all ranked predictions of a model, it is possible to define three different performance indexes: Hits@K, MR, and MRR.
Hits@K.
Hits@K, or in short H@K, is a performance index that measures the probability of finding a correct prediction among the top K model predictions. Usually, formula_22 is used. Hits@K reflects the accuracy of an embedding model in correctly predicting the missing element of a given triple.
Hits@Kformula_23
Larger values mean better predictive performances.
Mean rank (MR).
Mean rank is the average ranking position of the items predicted by the model among all the possible items.
formula_24
The smaller the value, the better the model.
Mean reciprocal rank (MRR).
Mean reciprocal rank measures how highly the correct triples are ranked. If the first predicted triple is correct, then 1 is added; if the second is correct, formula_25 is added; and so on.
Mean reciprocal rank is generally used to quantify the effect of search algorithms.
formula_26
The larger the index, the better the model.
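All three indexes can be computed from the rank of the correct answer in each prediction list. A small Python sketch (assuming 1-based ranks and the common convention of counting ranks less than or equal to K for Hits@K):

def ranking_metrics(ranks, k=10):
    # ranks[i] is the 1-based position of the correct entity in the i-th ranked list
    n = len(ranks)
    hits_at_k = sum(1 for q in ranks if q <= k) / n
    mr = sum(ranks) / n
    mrr = sum(1.0 / q for q in ranks) / n
    return hits_at_k, mr, mrr

print(ranking_metrics([1, 3, 2, 15, 1], k=10))  # (0.8, 4.4, 0.58)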
Applications.
Machine learning tasks.
Knowledge graph completion (KGC) is a collection of techniques to infer knowledge from an embedded knowledge graph representation. In particular, this technique completes a triple by inferring the missing entity or relation. The corresponding sub-tasks are named link or entity prediction (i.e., guessing an entity from the embedding given the other entity of the triple and the relation), and relation prediction (i.e., forecasting the most plausible relation that connects two entities).
Triple classification is a binary classification problem. Given a triple, the trained model evaluates the plausibility of the triple using the embedding to determine if it is true or false. The decision is made with the model score function and a given threshold. Clustering is another application that leverages the embedded representation of a sparse knowledge graph to place semantically similar entities close together in a 2D space.
Real world applications.
The use of knowledge graph embedding is increasingly pervasive in many applications. In the case of recommender systems, the use of knowledge graph embedding can overcome the limitations of the usual reinforcement learning. Training this kind of recommender system requires a huge amount of information from the users; however, knowledge graph techniques can address this issue by using a graph already constructed from prior knowledge of the item correlations and using the embedding to infer the recommendations from it.
Drug repurposing is the use of an already approved drug, but for a therapeutic purpose different from the one for which it was initially designed. It is possible to use the task of link prediction to infer a new connection between an already existing drug and a disease by using a biomedical knowledge graph built by leveraging the availability of massive literature and biomedical databases.
Knowledge graph embedding can also be used in the domain of social politics.
Models.
Given a collection of triples (or facts) formula_27, the knowledge graph embedding model produces, for each entity and relation present in the knowledge graph, a continuous vector representation. formula_18 is the corresponding embedding of a triple with formula_28 and formula_29, where formula_10 is the embedding dimension for the entities, and formula_11 for the relations. The score function of a given model is denoted by formula_30 and measures the distance of the embedding of the head from the embedding of the tail given the embedding of the relation, or in other words, it quantifies the plausibility of the embedded representation of a given fact.
Rossi et al. propose a taxonomy of the embedding models and identify three main families of models: tensor decomposition models, geometric models, and deep learning models.
Tensor decomposition model.
Tensor decomposition is a family of knowledge graph embedding models that use a multi-dimensional matrix to represent the knowledge graph, which is only partially knowable because the knowledge graph does not describe its domain exhaustively. In particular, these models use a three-way (3D) tensor, which is then factorized into low-dimensional vectors that are the entity and relation embeddings. The third-order tensor is a suitable methodology to represent a knowledge graph because it records only the existence or the absence of a relation between entities, and for this reason it is simple, and there is no need to know a priori the network structure, making this class of embedding models light and easy to train, even if they suffer from the high dimensionality and sparsity of the data.
Bilinear models.
This family of models uses a linear equation to embed the connection between the entities through a relation. In particular, the embedded representation of the relations is a bidimensional matrix. These models, during the embedding procedure, only use the single facts to compute the embedded representation and ignore the other associations to the same entity or relation.
Geometric models.
The geometric space defined by this family of models encodes the relation as a geometric transformation between the head and tail of a fact. For this reason, to compute the embedding of the tail, it is necessary to apply a transformation formula_32 to the head embedding, and a distance function formula_33 is used to measure the goodness of the embedding or to score the reliability of a fact.
formula_34
Geometric models are similar to the tensor decomposition model, but the main difference between the two is that they have to preserve the applicability of the transformation formula_32 in the geometric space in which it is defined.
Pure translational models.
This class of models is inspired by the idea of translation invariance introduced in word2vec. A pure translational model relies on the fact that the embedding vector of the entities are close to each other after applying a proper relational translation in the geometric space in which they are defined. In other words, given a fact, when the embedding of head is added to the embedding of relation, the expected result should be the embedding of the tail. The closeness of the entities embedding is given by some distance measure and quantifies the reliability of a fact.
Translational models with additional embeddings.
It is possible to associate additional information to each element in the knowledge graph and their common representation facts. Each entity and relation can be enriched with text descriptions, weights, constraints, and others in order to improve the overall description of the domain with a knowledge graph. During the embedding of the knowledge graph, this information can be used to learn specialized embeddings for these characteristics together with the usual embedded representation of entities and relations, with the cost of learning a more significant number of vectors.
Roto-translational models.
This family of models employs a rotation-like transformation, either in addition to or in substitution of a translation.
Deep learning models.
This group of embedding models uses deep neural networks to learn patterns from the knowledge graph, which is the input data. These models have the generality to distinguish the type of entity and relation, temporal information, path information, and underlying structural information, and to resolve the limitations of distance-based and semantic-matching-based models in representing all the features of a knowledge graph. The use of deep learning for knowledge graph embedding has shown good predictive performance even if these models are more expensive in the training phase, data-hungry, and often require a pre-trained embedding representation of the knowledge graph coming from a different embedding model.
Convolutional neural networks.
This family of models, instead of using fully connected layers, employs one or more convolutional layers that convolve the input data applying a low-dimensional filter capable of embedding complex structures with few parameters by learning nonlinear features.
Capsule neural networks.
This family of models uses capsule neural networks to create a more stable representation that is able to recognize a feature in the input without losing spatial information. The network is composed of convolutional layers, but they are organized in capsules, and the overall result of a capsule is sent to a higher-level capsule selected by a dynamic routing process.
Recurrent neural networks.
This class of models leverages the use of recurrent neural networks. The advantage of this architecture is that it memorizes a sequence of facts, rather than just elaborating single events.
Model performance.
The machine learning task for knowledge graph embedding that is most often used to evaluate the embedding accuracy of the models is link prediction. Rossi et al. produced an extensive benchmark of the models, and other surveys produce similar results. The benchmark involves five datasets: FB15k, WN18, FB15k-237, WN18RR, and YAGO3-10. More recently, it has been discussed that these datasets are far from real-world applications, and that other datasets should be integrated as a standard benchmark.
{
"math_id": 0,
"text": "\\mathcal{G} = \\{E, R, F\\}"
},
{
"math_id": 1,
"text": "E\n"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "F"
},
{
"math_id": 4,
"text": "(h, r, t) \\in F"
},
{
"math_id": 5,
"text": "r \\in R"
},
{
"math_id": 6,
"text": "h \\in E"
},
{
"math_id": 7,
"text": "t \\in E"
},
{
"math_id": 8,
"text": "<head, relation, tail>"
},
{
"math_id": 9,
"text": "\\mathcal{G}"
},
{
"math_id": 10,
"text": "d"
},
{
"math_id": 11,
"text": "k"
},
{
"math_id": 12,
"text": "b"
},
{
"math_id": 13,
"text": "S = \\{(h, r, t)\\}"
},
{
"math_id": 14,
"text": "E\n "
},
{
"math_id": 15,
"text": "e"
},
{
"math_id": 16,
"text": "r"
},
{
"math_id": 17,
"text": "S_{batch} \\leftarrow sample(S, b)"
},
{
"math_id": 18,
"text": "(h, r, t)"
},
{
"math_id": 19,
"text": "S_{batch}"
},
{
"math_id": 20,
"text": "(h', r, t') \\leftarrow sample(S')"
},
{
"math_id": 21,
"text": "T_{batch} \\leftarrow T_{batch} \\cup \\{((h,r, t), (h', r, t')) \\}"
},
{
"math_id": 22,
"text": "k=10"
},
{
"math_id": 23,
"text": "= \\frac{|\\{q \\in Q : q < k \\}|}{|Q|} \\in [0, 1]"
},
{
"math_id": 24,
"text": "MR = \\frac{1}{|Q|}\\sum_{q \\in Q}{q}"
},
{
"math_id": 25,
"text": "\\frac{1}{2}"
},
{
"math_id": 26,
"text": "MRR = \\frac{1}{|Q|}\\sum_{q \\in Q}{\\frac{1}{q}} \\in [0, 1]"
},
{
"math_id": 27,
"text": "\\mathcal{F} = \\{<head, relation, tail>\\}"
},
{
"math_id": 28,
"text": "h,t \\in {\\rm I\\!R}^{d}"
},
{
"math_id": 29,
"text": "r \\in {\\rm I\\!R}^{k}"
},
{
"math_id": 30,
"text": "\\mathcal{f}_{r}(h, t)\n"
},
{
"math_id": 31,
"text": "(t, r^{-1}, h)"
},
{
"math_id": 32,
"text": "\\tau"
},
{
"math_id": 33,
"text": "\\delta"
},
{
"math_id": 34,
"text": "\\mathcal{f}_{r}(h, t) = \\delta(\\tau(h, r), t)\n"
},
{
"math_id": 35,
"text": "h + r = t"
},
{
"math_id": 36,
"text": "(Obama, president\\_of, USA)"
},
{
"math_id": 37,
"text": "h + r"
},
{
"math_id": 38,
"text": "t"
},
{
"math_id": 39,
"text": "W_{r}^{h}"
},
{
"math_id": 40,
"text": "W_{r}^{t}"
},
{
"math_id": 41,
"text": "h"
},
{
"math_id": 42,
"text": "C"
},
{
"math_id": 43,
"text": "t\n"
},
{
"math_id": 44,
"text": "\\mathcal{W}"
},
{
"math_id": 45,
"text": "d \\times 3"
},
{
"math_id": 46,
"text": "1 \\times 3"
}
]
| https://en.wikipedia.org/wiki?curid=67944487 |
67944516 | Physics-informed neural networks | Technique to solve partial differential equations
Physics-informed neural networks (PINNs), also referred to as Theory-Trained Neural Networks (TTNs), are a type of universal function approximators that can embed into the learning process the knowledge of any physical laws that govern a given data set, laws which can be described by partial differential equations (PDEs). They overcome the low data availability of some biological and engineering systems that makes most state-of-the-art machine learning techniques lack robustness, rendering them ineffective in these scenarios. The prior knowledge of general physical laws acts in the training of neural networks (NNs) as a regularization agent that limits the space of admissible solutions, increasing the correctness of the function approximation. This way, embedding this prior information into a neural network results in enhancing the information content of the available data, facilitating the learning algorithm's ability to capture the right solution and to generalize well even with a small number of training examples.
Function approximation.
Most of the physical laws that govern the dynamics of a system can be described by partial differential equations. For example, the Navier–Stokes equations are a set of partial differential equations derived from the conservation laws (i.e., conservation of mass, momentum, and energy) that govern fluid mechanics. The solution of the Navier–Stokes equations with appropriate initial and boundary conditions allows the quantification of flow dynamics in a precisely defined geometry. However, these equations cannot be solved exactly and therefore numerical methods must be used (such as finite differences, finite elements and finite volumes). In this setting, these governing equations must be solved while accounting for prior assumptions, linearization, and adequate time and space discretization.
Recently, solving the governing partial differential equations of physical phenomena using deep learning has emerged as a new field of scientific machine learning (SciML), leveraging the universal approximation theorem and high expressivity of neural networks. In general, deep neural networks could approximate any high-dimensional function given that sufficient training data are supplied. However, such networks do not consider the physical characteristics underlying the problem, and the level of approximation accuracy provided by them is still heavily dependent on careful specifications of the problem geometry as well as the initial and boundary conditions. Without this preliminary information, the solution is not unique and may lose physical correctness. On the other hand, physics-informed neural networks (PINNs) leverage governing physical equations in neural network training. Namely, PINNs are designed to be trained to satisfy the given training data as well as the imposed governing equations. In this fashion, a neural network can be guided with training data that do not necessarily need to be large and complete. Potentially, an accurate solution of partial differential equations can be found without knowing the boundary conditions. Therefore, with some knowledge about the physical characteristics of the problem and some form of training data (even sparse and incomplete), PINN may be used for finding an optimal solution with high fidelity.
PINNs allow for addressing a wide range of problems in computational science and represent a pioneering technology leading to the development of new classes of numerical solvers for PDEs. PINNs can be thought of as a meshfree alternative to traditional approaches (e.g., CFD for fluid dynamics), and new data-driven approaches for model inversion and system identification. Notably, the trained PINN network can be used for predicting the values on simulation grids of different resolutions without the need to be retrained. In addition, they allow for exploiting automatic differentiation (AD) to compute the required derivatives in the partial differential equations; AD is a class of differentiation techniques, widely used to train neural networks, that is assessed to be superior to numerical or symbolic differentiation for this purpose.
Modeling and computation.
A general nonlinear partial differential equation can be:
formula_0
where formula_1 denotes the solution, formula_2 is a nonlinear operator parameterized by formula_3, and formula_4 is a subset of formula_5. This general form of governing equations summarizes a wide range of problems in mathematical physics, such as conservation laws, diffusion processes, advection-diffusion systems, and kinetic equations. Given noisy measurements of a generic dynamic system described by the equation above, PINNs can be designed to solve two classes of problems:
Data-driven solution of partial differential equations.
The "data-driven solution of PDE" computes the hidden state formula_1 of the system given boundary data and/or measurements formula_6, and fixed model parameters formula_3. We solve:
formula_7.
By defining the residual formula_8 as
formula_9,
and approximating formula_1 by a deep neural network, formula_8 results in a PINN. This network can be differentiated using automatic differentiation. The parameters of formula_1 and formula_8 can then be learned by minimizing the following loss function formula_10:
formula_11.
Where formula_12 is the error between the PINN formula_13 and the set of boundary conditions and measured data on the set of points formula_14 where the boundary conditions and data are defined, and formula_15 is the mean-squared error of the residual function. This second term encourages the PINN to learn the structural information expressed by the partial differential equation during the training process.
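A minimal, illustrative sketch of this training scheme is given below. It is not taken from the source: PyTorch, the viscous Burgers equation as the governing PDE, randomly placed placeholder data, and all names (PINN, pde_residual) are assumptions made only for the example.

```python
import torch
import torch.nn as nn

# Fully connected network approximating u(t, x)
class PINN(nn.Module):
    def __init__(self, hidden=20, layers=4):
        super().__init__()
        blocks, width = [], 2          # two inputs: (t, x)
        for _ in range(layers):
            blocks += [nn.Linear(width, hidden), nn.Tanh()]
            width = hidden
        blocks += [nn.Linear(width, 1)]
        self.net = nn.Sequential(*blocks)

    def forward(self, t, x):
        return self.net(torch.cat([t, x], dim=1))

def pde_residual(model, t, x, nu=0.01):
    """Residual f = u_t + u*u_x - nu*u_xx of the viscous Burgers equation."""
    t.requires_grad_(True); x.requires_grad_(True)
    u = model(t, x)
    grad = lambda y, z: torch.autograd.grad(y, z, torch.ones_like(y), create_graph=True)[0]
    u_t, u_x = grad(u, t), grad(u, x)
    u_xx = grad(u_x, x)
    return u_t + u * u_x - nu * u_xx

model = PINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder boundary/measurement points (t_d, x_d, u_d) and collocation points (t_c, x_c)
t_d, x_d = torch.rand(100, 1), 2 * torch.rand(100, 1) - 1
u_d = -torch.sin(torch.pi * x_d)
t_c, x_c = torch.rand(1000, 1), 2 * torch.rand(1000, 1) - 1

for step in range(5000):
    opt.zero_grad()
    loss_u = torch.mean((model(t_d, x_d) - u_d) ** 2)        # data/boundary misfit L_u
    loss_f = torch.mean(pde_residual(model, t_c, x_c) ** 2)  # PDE residual term L_f
    loss = loss_u + loss_f                                   # L_tot = L_u + L_f
    loss.backward()
    opt.step()
```

The second loss term is what distinguishes a PINN from a plain regression network: even where no measurements are available, the collocation points force the network output to satisfy the governing equation.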
This approach has been used to yield computationally efficient physics-informed surrogate models with applications in the forecasting of physical processes, model predictive control, multi-physics and multi-scale modeling, and simulation. It has been shown to converge to the solution of the PDE.
Data-driven discovery of partial differential equations.
Given noisy and incomplete measurements formula_6 of the state of the system, the "data-driven discovery of PDE" results in computing the unknown state formula_1 and learning model parameters formula_3 that best describe the observed data and it reads as follows:
formula_0.
By defining formula_8 as
formula_16,
and approximating formula_1 by a deep neural network, formula_8 results in a PINN. This network can be differentiated using automatic differentiation. The parameters of formula_1 and formula_8, together with the parameter formula_17 of the differential operator, can then be learned by minimizing the following loss function formula_10:
formula_11.
Here formula_12, with formula_18 and formula_6 denoting the state solution and the measurements at the sparse locations formula_14, respectively, and formula_15 is the residual function. This second term requires the structured information represented by the partial differential equations to be satisfied in the training process.
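As a sketch only (again assuming a Burgers-type operator and PyTorch; the variable names are hypothetical), the unknown coefficient formula_17 can be exposed as an additional trainable parameter and optimized jointly with the network weights:

```python
import torch
import torch.nn as nn

# Hypothetical inverse problem: the coefficient "lam" in u_t + u*u_x - lam*u_xx = 0
# is unknown and is learned together with the network parameters.
net = nn.Sequential(nn.Linear(2, 20), nn.Tanh(),
                    nn.Linear(20, 20), nn.Tanh(),
                    nn.Linear(20, 1))
lam = nn.Parameter(torch.tensor(0.5))        # unknown PDE parameter

def residual(t, x):
    t.requires_grad_(True); x.requires_grad_(True)
    u = net(torch.cat([t, x], dim=1))
    g = lambda y, z: torch.autograd.grad(y, z, torch.ones_like(y), create_graph=True)[0]
    u_t, u_x = g(u, t), g(u, x)
    u_xx = g(u_x, x)
    return u_t + u * u_x - lam * u_xx

opt = torch.optim.Adam(list(net.parameters()) + [lam], lr=1e-3)
# Minimizing L_tot = L_u + L_f on the measurements z drives lam toward the value
# that best explains the observed data.
```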
This strategy allows for discovering dynamic models described by nonlinear PDEs assembling computationally efficient and fully differentiable surrogate models that may find application in predictive forecasting, control, and data assimilation.
Physics-informed neural networks for piece-wise function approximation.
PINN is unable to approximate PDEs that have strong non-linearity or sharp gradients, which commonly occur in practical fluid flow problems. Piece-wise approximation is an old practice in the field of numerical approximation. Because piece-wise approximation can capture strong non-linearity, extremely lightweight PINNs can be used to solve PDEs in discrete subdomains of a much larger domain, which increases accuracy substantially and decreases the computational load as well. DPINN (Distributed physics-informed neural networks) and DPIELM (Distributed physics-informed extreme learning machines) are generalizable space-time domain discretizations for better approximation. DPIELM is an extremely fast and lightweight approximator with competitive accuracy. Domain scaling, on top of this, has a special effect. Another school of thought is discretization for parallel computation, to leverage the available computational resources.
XPINN is a generalized space-time domain decomposition approach for physics-informed neural networks (PINNs) to solve nonlinear partial differential equations on arbitrary complex-geometry domains. XPINN further pushes the boundaries of both PINNs and Conservative PINNs (cPINNs), the latter being a spatial domain decomposition approach in the PINN framework tailored to conservation laws. Compared to PINN, the XPINN method has large representation and parallelization capacity due to the inherent deployment of multiple neural networks in the smaller subdomains. Unlike cPINN, XPINN can be extended to any type of PDE. Moreover, the domain can be decomposed in any arbitrary way (in space and time), which is not possible in cPINN. Thus, XPINN offers both space and time parallelization, thereby reducing the training cost more effectively. XPINN is particularly effective for large-scale problems (involving large data sets) as well as for high-dimensional problems where a single-network PINN is not adequate. Rigorous bounds on the errors resulting from the approximation of nonlinear PDEs (incompressible Navier–Stokes equations) with PINNs and XPINNs have been proved. However, work on DPINN questions the use of residual (flux) matching at the domain interfaces, as it hardly seems to improve the optimization.
Physics-informed neural networks and functional interpolation.
In the PINN framework, initial and boundary conditions are not analytically satisfied, thus they need to be included in the loss function of the network to be simultaneously learned with the differential equation (DE) unknown functions. Having competing objectives during the network's training can lead to unbalanced gradients while using gradient-based techniques, which causes PINNs to often struggle to accurately learn the underlying DE solution. This drawback is overcome by using functional interpolation techniques such as the Theory of Functional Connections (TFC)'s constrained expression, in the Deep-TFC framework, which reduces the solution search space of constrained problems to the subspace of neural networks that analytically satisfy the constraints. A further improvement of the PINN and functional interpolation approach is given by the Extreme Theory of Functional Connections (X-TFC) framework, where a single-layer neural network and the extreme learning machine training algorithm are employed. X-TFC improves the accuracy and performance of regular PINNs, and its robustness and reliability have been proved for stiff problems, optimal control, aerospace, and rarefied gas dynamics applications.
Physics-informed PointNet (PIPN) for multiple sets of irregular geometries.
Regular PINNs are only able to obtain the solution of a forward or inverse problem on a single geometry. It means that for any new geometry (computational domain), one must retrain a PINN. This limitation of regular PINNs imposes high computational costs, specifically for a comprehensive investigation of geometric parameters in industrial designs. Physics-informed PointNet (PIPN) is fundamentally the result of a combination of PINN's loss function with PointNet. In fact, instead of using a simple fully connected neural network, PIPN uses PointNet as the core of its neural network. PointNet has been primarily designed for deep learning of 3D object classification and segmentation by the research group of Leonidas J. Guibas. PointNet extracts geometric features of input computational domains in PIPN. Thus, PIPN is able to solve governing equations on multiple computational domains (rather than only a single domain) with irregular geometries, simultaneously. The effectiveness of PIPN has been shown for incompressible flow, heat transfer and linear elasticity.
Physics-informed neural networks (PINNs) for inverse computations.
Physics-informed neural networks (PINNs) have proven particularly effective in solving inverse problems within differential equations, demonstrating their applicability across science, engineering, and economics. They have shown useful for solving inverse problems in a variety of fields, including nano-optics, topology optimization/characterization, multiphase flow in porous media, and high-speed fluid flow. PINNs have demonstrated flexibility when dealing with noisy and uncertain observation datasets. They also demonstrated clear advantages in the inverse calculation of parameters for multi-fidelity datasets, meaning datasets with different quality, quantity, and types of observations. Uncertainties in calculations can be evaluated using ensemble-based or Bayesian-based calculations.
Physics-informed neural networks (PINNs) with backward stochastic differential equation.
Deep backward stochastic differential equation method is a numerical method that combines deep learning with Backward stochastic differential equation (BSDE) to solve high-dimensional problems in financial mathematics. By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods like finite difference methods or Monte Carlo simulations, which struggle with the curse of dimensionality. Deep BSDE methods use neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden. Additionally, integrating Physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws into the neural network architecture, ensuring solutions adhere to governing stochastic differential equations, resulting in more accurate and reliable solutions.
Limitations.
Translation and discontinuous behavior are hard to approximate using PINNs; this observation marks one of the earliest and most elaborate analyses of the limitations of PINNs as a method for solving PDEs. They fail when solving differential equations with even slight advective dominance, and hence the asymptotic behaviour of such PDEs is what defeats this method. Such PDEs could be solved by scaling variables.
This difficulty in training PINNs on advection-dominated PDEs can be explained by the Kolmogorov n-width of the solution.
They also fail to solve systems of dynamical equations and hence have not been successful in solving chaotic equations. One of the reasons behind the failure of regular PINNs is the soft-constraining of Dirichlet and Neumann boundary conditions, which poses a multi-objective optimization problem that requires manually weighting the loss terms to be able to optimize.
Another reason is the optimization itself. Posing PDE solving as an optimization problem brings in all the difficulties faced in the field of optimization, the major one being frequent convergence to a local optimum.
{
"math_id": 0,
"text": "u_t + N[u; \\lambda]=0, \\quad x \\in \\Omega, \\quad t \\in[0, T]"
},
{
"math_id": 1,
"text": "u(t,x)"
},
{
"math_id": 2,
"text": "N[\\cdot; \\lambda]"
},
{
"math_id": 3,
"text": "\\lambda"
},
{
"math_id": 4,
"text": "\\Omega"
},
{
"math_id": 5,
"text": "\\mathbb{R}^{D}"
},
{
"math_id": 6,
"text": "z"
},
{
"math_id": 7,
"text": "u_t + N[u]=0, \\quad x \\in \\Omega, \\quad t \\in[0, T]"
},
{
"math_id": 8,
"text": "f(t,x)"
},
{
"math_id": 9,
"text": " f := u_t + N[u]=0"
},
{
"math_id": 10,
"text": "L_{tot}"
},
{
"math_id": 11,
"text": "L_{tot} = L_{u} + L_{f}"
},
{
"math_id": 12,
"text": "L_{u} = \\Vert u-z\\Vert_{\\Gamma} "
},
{
"math_id": 13,
"text": "u(t, x)"
},
{
"math_id": 14,
"text": "\\Gamma"
},
{
"math_id": 15,
"text": "L_{f} = \\Vert f\\Vert_{\\Gamma}"
},
{
"math_id": 16,
"text": " f := u_t + N[u; \\lambda]=0"
},
{
"math_id": 17,
"text": " \\lambda "
},
{
"math_id": 18,
"text": "u"
}
]
| https://en.wikipedia.org/wiki?curid=67944516 |
67944523 | Quantum Markov semigroup | A kind of mathematical structure which describes the dynamics in a Markovian open quantum system.
In quantum mechanics, a quantum Markov semigroup describes the dynamics in a Markovian open quantum system. The axiomatic definition of the prototype of quantum Markov semigroups was first introduced by A. M. Kossakowski in 1972, and then developed by V. Gorini, A. M. Kossakowski, E. C. G. Sudarshan and Göran Lindblad in 1976.
Motivation.
An ideal quantum system is not realistic because it should be completely isolated while, in practice, it is influenced by the coupling to an environment, which typically has a large number of degrees of freedom (for example an atom interacting with the surrounding radiation field). A complete microscopic description of the degrees of freedom of the environment is typically too complicated. Hence, one looks for simpler descriptions of the dynamics of the open system. In principle, one should investigate the unitary dynamics of the total system, i.e. the system and the environment, to obtain information about the reduced system of interest by averaging the appropriate observables over the degrees of freedom of the environment. To model the dissipative effects due to the interaction with the environment, the Schrödinger equation is replaced by a suitable master equation, such as a Lindblad equation or a stochastic Schrödinger equation in which the infinite degrees of freedom of the environment are "synthesized" as a few quantum noises. Mathematically, time evolution in a Markovian open quantum system is no longer described by means of one-parameter groups of unitary maps, but one needs to introduce quantum Markov semigroups.
Definitions.
Quantum dynamical semigroup (QDS).
In general, quantum dynamical semigroups can be defined on von Neumann algebras, so the dimensionality of the system could be infinite. Let formula_0 be a von Neumann algebra acting on Hilbert space formula_1; a quantum dynamical semigroup on formula_0 is a collection of bounded operators on formula_0, denoted by formula_2, with the following properties: formula_3, formula_4; formula_5, formula_6; formula_7 is completely positive for every formula_8; and, for each formula_10, the map formula_11 is continuous with respect to the formula_9-weak topology on formula_0.
Under the condition of complete positivity, the operators formula_7 are formula_9-weakly continuous if and only if formula_7 are normal. Recall that, letting formula_12 denote the convex cone of positive elements in formula_0, a positive operator formula_13 is said to be normal if for every increasing net formula_14 in formula_12 with least upper bound formula_15 in formula_12 one has
formula_16
for each formula_17 in a norm-dense linear sub-manifold of formula_1.
Quantum Markov semigroup (QMS).
A quantum dynamical semigroup formula_18 is said to be identity-preserving (or conservative, or Markovian) if formula_7 maps the identity element to itself for all formula_8 (Condition 1), where formula_19 is the identity element. For simplicity, formula_18 is called a quantum Markov semigroup. Notice that the identity-preserving property and positivity of formula_7 imply formula_20 for all formula_8 and then formula_18 is a contraction semigroup.
The Condition (1) plays an important role not only in the proof of uniqueness and unitarity of solution of a Hudson–Parthasarathy quantum stochastic differential equation, but also in deducing regularity conditions for paths of classical Markov processes in view of operator theory.
Infinitesimal generator of QDS.
The infinitesimal generator of a quantum dynamical semigroup formula_18 is the operator formula_21 with domain formula_22, where
formula_23
and formula_24.
Characterization of generators of uniformly continuous QMSs.
If the quantum Markov semigroup formula_18 is in addition uniformly continuous, which means formula_25, then the infinitesimal generator formula_21 will be a bounded operator on the von Neumann algebra formula_0 with formula_26, and for every formula_10 the map formula_27 will be continuous in the operator norm topology.
Under such assumption, the infinitesimal generator formula_21 has the characterization
formula_28
where formula_10, formula_29, formula_30, and formula_31 is self-adjoint. Moreover, above formula_32 denotes the commutator, and formula_33 the anti-commutator.
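A small numerical illustration of this characterization is sketched below; it is not part of the source, and NumPy, the single-qubit dimension, the Hamiltonian and the single jump operator are placeholder choices. Note that the generator maps the identity to zero, consistent with the identity-preserving (Markov) property.

```python
import numpy as np

def lindblad_generator(a, H, Vs):
    """Apply L(a) = i[H, a] + sum_j (Vj^dag a Vj - 1/2 {Vj^dag Vj, a})."""
    comm = 1j * (H @ a - a @ H)
    diss = sum(V.conj().T @ a @ V - 0.5 * (V.conj().T @ V @ a + a @ V.conj().T @ V)
               for V in Vs)
    return comm + diss

sz = np.array([[1, 0], [0, -1]], dtype=complex)   # placeholder Hamiltonian H
sm = np.array([[0, 0], [1, 0]], dtype=complex)    # placeholder jump operator V_1
sx = np.array([[0, 1], [1, 0]], dtype=complex)    # observable a to which L is applied

print(lindblad_generator(sx, H=sz, Vs=[sm]))
print(np.allclose(lindblad_generator(np.eye(2, dtype=complex), sz, [sm]), 0))  # True: L(1) = 0
```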
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathcal{A} "
},
{
"math_id": 1,
"text": " \\mathcal{H} "
},
{
"math_id": 2,
"text": " \\mathcal{T} := \\left( \\mathcal{T}_t \\right)_{t \\ge 0} "
},
{
"math_id": 3,
"text": " \\mathcal{T}_0 \\left( a \\right) = a "
},
{
"math_id": 4,
"text": " \\forall a \\in \\mathcal{A} "
},
{
"math_id": 5,
"text": " \\mathcal{T}_{t + s} \\left( a \\right) = \\mathcal{T}_t \\left( \\mathcal{T}_s \\left( a \\right) \\right) "
},
{
"math_id": 6,
"text": " \\forall s, t \\ge 0 "
},
{
"math_id": 7,
"text": " \\mathcal{T}_t "
},
{
"math_id": 8,
"text": " t \\ge 0 "
},
{
"math_id": 9,
"text": " \\sigma "
},
{
"math_id": 10,
"text": " a \\in \\mathcal{A} "
},
{
"math_id": 11,
"text": " t \\mapsto \\mathcal{T}_t \\left( a \\right) "
},
{
"math_id": 12,
"text": " \\mathcal{A}_+ "
},
{
"math_id": 13,
"text": " T : \\mathcal{A} \\rightarrow \\mathcal{A} "
},
{
"math_id": 14,
"text": " \\left( x_\\alpha \\right)_\\alpha "
},
{
"math_id": 15,
"text": " x "
},
{
"math_id": 16,
"text": " \\lim_{\\alpha} \\langle u, (T x_\\alpha) u \\rangle = \\sup_{\\alpha} \\langle u, (T x_\\alpha) u \\rangle = \\langle u, (T x) u \\rangle "
},
{
"math_id": 17,
"text": " u "
},
{
"math_id": 18,
"text": " \\mathcal{T} "
},
{
"math_id": 19,
"text": " \\boldsymbol{1} \\in \\mathcal{A} "
},
{
"math_id": 20,
"text": " \\left\\| \\mathcal{T}_t \\right\\| = 1 "
},
{
"math_id": 21,
"text": " \\mathcal{L} "
},
{
"math_id": 22,
"text": " \\operatorname{Dom} (\\mathcal{L}) "
},
{
"math_id": 23,
"text": " \\operatorname{Dom} \\left( \\mathcal{L} \\right) := \\left\\{ a \\in \\mathcal{A} ~\\left\\vert~ \\lim_{t \\rightarrow 0} \\frac{\\mathcal{T}_t(a) - a}{t} = b \\text{ in } \\sigma\\text{-weak topology} \\right. \\right\\} "
},
{
"math_id": 24,
"text": " \\mathcal{L}(a) := b "
},
{
"math_id": 25,
"text": " \\lim_{t \\rightarrow 0^+} \\left\\| \\mathcal{T}_t - \\mathcal{T}_0 \\right\\| = 0 "
},
{
"math_id": 26,
"text": " \\mathrm{Dom} (\\mathcal{L}) = \\mathcal{A} "
},
{
"math_id": 27,
"text": " t \\mapsto \\mathcal{T}_t a "
},
{
"math_id": 28,
"text": " \\mathcal{L} \\left( a \\right) = i \\left[ H, a \\right] + \\sum_{j} \\left( V_j^\\dagger a V_j - \\frac{1}{2} \\left\\{ V_j^\\dagger V_j, a \\right\\} \\right) "
},
{
"math_id": 29,
"text": " V_j \\in \\mathcal{B} (\\mathcal{H}) "
},
{
"math_id": 30,
"text": " \\sum_{j} V_j^\\dagger V_j \\in \\mathcal{B} (\\mathcal{H}) "
},
{
"math_id": 31,
"text": " H \\in \\mathcal{B} (\\mathcal{H}) "
},
{
"math_id": 32,
"text": " \\left[ \\cdot, \\cdot \\right] "
},
{
"math_id": 33,
"text": " \\left\\{ \\cdot, \\cdot \\right\\} "
}
]
| https://en.wikipedia.org/wiki?curid=67944523 |
67944539 | Groundwater contamination by pharmaceuticals | Aquifer contamination by medical drugs
Groundwater contamination by pharmaceuticals, which belong to the category of contaminants of emerging concern (CEC) or emerging organic pollutants (EOP), has been receiving increasing attention in the fields of environmental engineering, hydrology and hydrogeochemistry since the last decades of the twentieth century.
Pharmaceuticals are suspected to provoke long-term effects in aquatic ecosystems even at low concentration ranges (trace concentrations) because of their bioactive and chemically stable nature, which leads to recalcitrant behaviours in the aqueous compartments, a feature that is typically associated with the difficulty in degrading these compounds to innocuous molecules, similarly to the behaviour exhibited by persistent organic pollutants. Furthermore, the continuous release of medical products in the water cycle poses concerns about bioaccumulation and biomagnification phenomena. As the vulnerability of groundwater systems is increasingly recognized even by the regulating authority (the European Medicines Agency, EMA), environmental risk assessment (ERA) procedures are required for the marketing authorization application of pharmaceuticals, and preventive actions are urged to preserve these environments.
In the last decades of the twentieth century, scientific research efforts have been fostered towards deeper understanding of the interactions of groundwater transport and attenuation mechanisms with the chemical nature of polluting agents. Amongst the multiple mechanisms governing solutes mobility in groundwater, biotransformation and biodegradation play a crucial role in determining the evolution of the system (as identified by developing concentration fields) in the presence of organic compounds, such as pharmaceuticals. Other processes that might impact on pharmaceuticals fate in groundwater include classical advective-dispersive mass transfer, as well as geochemical reactions, such as adsorption onto soils and dissolution / precipitation.
One major goal in the field of environmental protection and risk mitigation is the development of mathematical formulations yielding reliable predictions of the fate of pharmaceuticals in aquifer systems, eventually followed by an appropriate quantification of predictive uncertainty and estimation of the risks associated with this kind of contamination.
General problem.
Pharmaceuticals represent a serious threat to aquifer systems because of their bioactive nature, which makes them capable of interacting directly with the living microorganisms residing therein and yielding bioaccumulation and biomagnification phenomena. The occurrence of xenobiotics in groundwater has been proven to harm the delicate equilibria of aquatic ecosystems in several ways, such as promoting the growth of antibiotic-resistant bacteria or causing hormone-related sexual disruption in living organisms in surface waters. Considering then the role of groundwater systems as major worldwide drinking water resources, the capability of pharmaceuticals to interact with human tissues poses serious concerns also in terms of human health. Indeed, the majority of pharmaceuticals do not degrade in groundwater, where they accumulate due to their continuous release into the environment. These compounds reach subsurface systems through different sources, such as hospital effluents, wastewaters and landfill leachates, which clearly risks contaminating drinking water.
Most detected pharmaceutical classes.
The main pharmaceutical classes detected in worldwide groundwater systems are listed below. The following categorisation is based on a medical perspective and it is often referred to as therapeutic classification.
Chemical aspects relevant to aquifer systems dynamics.
The chemical structure of pharmaceuticals affects the type of hydro-geochemical processes that mainly impacts on their fate in groundwater and it is strictly associated with their chemical properties. Therefore, a classification of pharmaceuticals based on chemical classes is a valid alternative to the purpose of understanding the role of molecular structures in determining the kind of physical and geochemical processes affecting their mobility in porous media.
With regard to the occurrence of medical drugs in subsurface aquatic systems, the following chemical properties are of major interest:
Pharmaceuticals solubility in water affects the mobility of these compounds within aquifers. This feature depends on pharmaceuticals polarity, as polar substances are typically hydrophilic, thereby showing marked tendency to dissolve in the aqueous phase, where they become solutes. This aspect impacts on dissolution / precipitation equilibrium, a phenomenon that is mathematically described in terms of the substance solubility product (addressed in many books with the notation formula_0).
Large formula_1 values outline the non-polar character of the chemical species, which instead shows particular affinity to dissolve into organic solvents. Therefore, lipophilic pharmaceuticals are markedly subjected to the risk of bioaccumulating and biomagnifying in the environment, consistent with their preferential partitioning into the organic tissues of living organisms. Pharmaceuticals with sufficiently large formula_1 are in fact subjected to specific tiers in the environmental risk assessment (ERA) procedure (to be supplied for the marketing authorisation application) and are highlighted as potential sources of bioaccumulation and biomagnification according to the EMA guidelines. Lipophilic compounds are then insoluble in water, where they persist as a phase separated from the aqueous one. This renders their mobility in groundwater basically decoupled from dissolution / precipitation mechanisms and attributed to the mean flow transport (advection and dispersion) and soil-mediated mechanisms of reaction (adsorption).
This feature is expressed in terms of the so-called organic carbon-water partition coefficient, that is usually referred to as formula_2 and is an intrinsic property of the molecule.
The behaviour of molecules in relation to aqueous dissociation reactions is typically related to their acid dissociation constants, which are typically outlined in terms of their formula_3 coefficients.
The molecular structure of xenobiotics typically outlines the existence of several possible reaction pathways, which are embedded in complex reaction networks and are typically referred to as transformation processes. With reference to organic compounds, such as pharmaceuticals, innumerable kinds of chemical reactions exist, most of them involving common chemical mechanisms, such as functional groups elimination, addition and substitution. These processes often involve further redox reactions accomplished on the substrates, which are here represented by pharmaceutical solutes and, eventually, their transformation products and metabolites. These processes can be then classified as either biotic or abiotic, depending on the presence or absence of bacterial communities acting as reaction mediators. In the former case, these transformation pathways are typically addressed as biodegradation or biotransformation in the hydrogeochemical literature, depending on the extent of cleavage of the parent molecule into highly oxidized, innocuous species.
Transport and attenuation processes.
The fate of pharmaceuticals in groundwater is governed by different processes. The reference theoretical framework is that of reactive solute transport in porous media at the continuum scale, that is typically interpreted through the advective-dispersive-reactive equation (ADRE). With reference to the saturated region of the aquifer, the ADRE is written as:
formula_4
Here formula_5 represents the effective porosity of the medium, while formula_6 and formula_7 represent - respectively - the spatial coordinates vector and the time coordinate. formula_8 represents the divergence operator, except when it is applied directly to formula_9, in which case the nabla symbol stands for the gradient of formula_9. The term formula_10 then denotes the pharmaceutical solute concentration field in the water phase (for unsaturated regions of the aquifer, the ADRE has a similar shape, but it includes additional terms accounting for volumetric contents and contaminant concentrations in phases other than water), while formula_11 represents the velocity field. formula_12 is the hydrodynamic dispersion tensor and is typically a function of the sole variable formula_13. Lastly, the storage term formula_14 includes the accumulation or removal contribution due to all possible reactive processes in the system, i.e., adsorption, dissolution / precipitation, acid dissociation and other transformation reactions, such as biodegradation.
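A minimal numerical sketch of the ADRE is shown below. It is not from the article: it assumes a one-dimensional saturated domain with constant coefficients, an explicit upwind finite-difference scheme, and a single first-order decay reaction standing in for the storage term; all parameter values are placeholders.

```python
import numpy as np

# Explicit 1D step for dC/dt = -v dC/dx + D d2C/dx2 - k C
def adre_step(C, v, D, k, dx, dt):
    adv = -v * (C[1:-1] - C[:-2]) / dx                  # upwind advection
    disp = D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2   # hydrodynamic dispersion
    react = -k * C[1:-1]                                # first-order decay (reactive term)
    Cn = C.copy()
    Cn[1:-1] += dt * (adv + disp + react)
    return Cn

nx, dx, dt = 200, 0.5, 0.05       # grid size, spacing [m], time step [d]
v, D, k = 1.0, 0.1, 0.01          # seepage velocity, dispersion, decay rate (placeholders)
C = np.zeros(nx)
for _ in range(2000):
    C[0] = 1.0                    # fixed-concentration source at the inlet
    C = adre_step(C, v, D, k, dx, dt)
# C approximates the dissolved concentration profile C_W(x, t) after 100 days.
```

The explicit scheme is only stable for sufficiently small time steps (here the Courant number v*dt/dx = 0.1), which is why practical codes usually rely on more robust discretizations.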
The main hydrological transport processes driving pharmaceuticals and organic contaminants migration in aquifer systems are:
The most influential geochemical processes, also referred to as reactive processes and whose effect is embedded in the term formula_14 of the ADRE, include:
Advection.
Advective transport accounts for the contribution of solute mass transfer across the system that originates from bulk flow motion. At the continuum scale of analysis, the system is interpreted as a continuous medium rather than a collection of solid particles (grains) and empty spaces (pores) through which the fluid can flow. In this context, an average flow velocity can typically be estimated, which arises from upscaling the pore-scale velocities. Here, the fluid flow conditions ensure the validity of Darcy's law, which governs the system evolution in terms of average fluid velocity, typically referred to as seepage or advective velocity. Dissolved pharmaceuticals in groundwater are transferred within the domain along with the mean fluid flow and in agreement with the physical principles governing any other solute migration across the system.
Hydrodynamic dispersion.
Hydrodynamic dispersion identifies a process that arises as the summation of two separate effects. First, it is associated with molecular diffusion, a phenomenon that is appreciated at the macroscale as a consequence of microscale Brownian motions. Secondly, it includes a contribution (called mechanical dispersion) arising as an effect of upscaling the fluid-dynamic transport problem from the pore to the continuum scale of investigation, due to the upscaling of local inhomogeneous velocities. The latter contribution is therefore not related to the occurrence of any physical process at the pore scale, but it is only a fictitious consequence of the modelling scale choice. Hydrodynamic dispersion is then embedded in the advective-dispersive-reactive equation (ADRE) assuming a Fickian closure model. Dispersion is felt at the macroscale as being responsible for a spreading effect of the contaminant plume around its center of mass.
Adsorption onto soil.
Sorption identifies a heterogeneous reaction that is often driven by instantaneous thermochemical equilibrium. It describes the process by which a certain mass of solute dissolved in the aqueous phase adheres to a solid phase (such as the organic fraction of soil in the case of organic compounds), being therefore removed from the liquid phase. In hydrogeochemistry, this phenomenon has been proved to cause a delayed effect in solute mobility with respect to the case in which solely advection and dispersion occur in the aquifer. For pharmaceuticals, it can typically be interpreted using a linear adsorption model at equilibrium, which is fully applicable at low concentration ranges. The latter model relies upon the assessment of a linear partition coefficient, usually denoted as formula_15, that depends - for organic compounds - on both the organic carbon-water partition coefficient formula_2 and the organic carbon fraction formula_16 of the soil. While the former term is an intrinsic chemical property of the molecule, the latter one instead depends on the soil moisture of the analyzed aquifer.
Sorption of trace elements like pharmaceuticals in groundwater is interpreted through the following linear isotherm model:
formula_17
Where formula_18 identifies the adsorbed concentration on the solid phase and formula_19.
The neutral form of the organic molecules dissolved in water is typically solely responsible for sorptive mechanisms, which become more important the richer the soils are in organic carbon. Anionic forms are instead insensitive to sorptive mechanisms, while cations can undergo adsorption only under very particular conditions.
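As a small illustration (the relation k_d = K_OC * f_OC and the retardation factor R = 1 + rho_b*k_d/phi are standard textbook expressions rather than formulas given in this article, and all numerical values are placeholders):

```python
# Linear sorption: estimate k_d and the retardation factor for a hypothetical compound.
K_oc = 245.0      # organic carbon-water partition coefficient [L/kg] (placeholder)
f_oc = 0.002      # organic carbon fraction of the aquifer solids [-] (placeholder)
rho_b = 1.6       # bulk density of the porous medium [kg/L] (placeholder)
phi = 0.3         # effective porosity [-] (placeholder)

k_d = K_oc * f_oc                # linear partition coefficient [L/kg]
R = 1.0 + rho_b * k_d / phi      # retardation factor: the plume travels at v / R

C_w = 1.0                        # dissolved concentration [ug/L]
C_s = k_d * C_w                  # adsorbed concentration [ug/kg], i.e. C_S = k_d * C_W
print(k_d, R, C_s)
```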
Dissolution and precipitation.
Dissolution represents the heterogeneous reaction during which a solid compound, such as an organic salt in the case of pharmaceuticals, gets dissolved into the aqueous phase. Here, the original salt appears in the form of both aqueous cations and anions, depending on the stoichiometry of the dissolution reaction. Precipitation represents the reverse reaction. This process is typically accomplished at thermochemical equilibrium, but in some applications of hydrogeochemical modelling it might be required to consider its kinetics. As an example for the case of pharmaceuticals, the non-steroidal anti-inflammatory drug diclofenac, which is commercialised as sodium diclofenac, undergoes this process in groundwater environments.
Acid dissociation and aqueous complexation.
Acid dissociation is a homogeneous reaction that yields dissociation of a dissolved acid (in the water phase) into cationic and anionic forms, while aqueous complexation denotes its reverse process. The aqueous speciation of a solution is determined on the basis of the formula_3 coefficient, which typically ranges between 3 and 50 (approximately) for organic compounds, such as pharmaceuticals. Since the latter are weak acids and this process is always accomplished upon instantaneous achievement of thermochemical equilibrium conditions, it is reasonable to assume that the undissociated form of the original contaminant is predominant in the water speciation for most practical cases in the field of hydrogeochemistry.
Biodegradation, biotransformation and other transformation pathways.
Pharmaceuticals can undergo biotransformation or transformation processes in groundwater systems.
Aquifers are indeed rich reserves in terms of minerals and other dissolved chemical species, such as organic matter, dissolved oxygen, nitrates, ferrous and manganese compounds, sulfates, etc., as well as dissolved cations, such as calcium, magnesium and sodium ones. All of these compounds interact through complex reaction networks embedding reactive processes of different nature, such as carbonates precipitation / dissolution, acid–base reactions, sorption and redox reactions. With reference to the latter kind of processes, several pathways are typically possible in aquifers because the environment is often rich in both reducing (like organic matter) and oxidizing agents (like dissolved oxygen, nitrates, ferric and manganese oxides, sulfates, etc.). Pharmaceuticals can act as substrates as well in this scenario, i.e., they can represent either the reducing or the oxidizing agent in the context of redox processes. In fact, most chemical reactions involving organic molecules are typically accomplished upon gain or loss of electrons, so that the oxidation state of the molecule changes along the reactive pathway. In this context, the aquifer acts as a "chemical reactor".
There are innumerable kinds of chemical reactions that pharmaceuticals can undergo in this environment, which depend on the availability of other reactants, pH and other environmental conditions, but all of these processes typically share common mechanisms. The main ones involve addition, elimination or substitution of functional groups. The mechanism of reaction is important in the field of hydrogeochemical modeling of aquifer systems because all of these reactions are typically governed by kinetic laws. Therefore, recognizing the correct molecular mechanisms through which a chemical reaction progresses is fundamental to the purpose of modelling the reaction rates correctly (for example, it is often possible to identify a rate limiting step within multistep reactions and relate the rate of reaction progress to that particular step). Modelling these reactions typically follows the classic kinetic laws, except for the case in which reactions involving the contaminant are accomplished in the context of bacterial metabolism. While in the former case the ensemble of reactions is addressed as transformation pathway, in the latter one the terms biodegradation or biotransformation are used, depending on the extent to which the chemical reactions effectively degrade the original organic molecule to innocuous compounds in their maximum oxidation state (i.e., carbon dioxide, methane and water). In case of biologically mediated pathways of reaction, which are relevant in the study of groundwater contamination by pharmaceuticals, there are appropriate kinetic laws that can be employed to model these processes in hydrogeochemical contexts. For example, the Monod and Michaelis-Menten equations are suitable options in case of biotic transformation processes involving organic compounds (such as pharmaceuticals) as substrates.
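A brief sketch of Monod-type kinetics in a simple batch setting is given below; it is only an illustration, with placeholder parameter values and a setting (well-mixed batch, single substrate) that is not taken from the article.

```python
# Batch Monod-type biodegradation: the pharmaceutical substrate S is consumed by a
# microbial population X according to dS/dt = -(1/Y) * mu_max * X * S / (K_s + S).
mu_max, K_s, Y = 0.8, 2.0, 0.4    # 1/d, mg/L, biomass yield (placeholder values)
S, X, dt = 10.0, 0.1, 0.01        # substrate [mg/L], biomass [mg/L], time step [d]

for _ in range(int(30 / dt)):                # simulate 30 days with explicit steps
    growth = mu_max * X * S / (K_s + S)      # Monod growth rate [mg/(L*d)]
    S = max(S - dt * growth / Y, 0.0)        # substrate (pharmaceutical) consumption
    X += dt * growth                         # biomass growth
print(S, X)                                  # residual substrate and final biomass
```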
Although most of the hydrogeochemical literature addresses these processes through linear biodegradation models, several studies have been carried out since the second decade of the twenty-first century, as linear models are typically too simplified to ensure reliable predictions of the fate of pharmaceuticals in groundwater and might bias risk estimates in the context of risk mitigation applications for the environment.
Hydrologic and geochemical modelling approaches.
Groundwater contamination by pharmaceuticals is a topic of great interest in the field of environmental and hydraulic engineering, where most research efforts have been fostered towards studies on this kind of contaminant since the beginning of the twenty-first century. The general goal of those disciplines is that of developing interpretive models capable of predicting the behaviour of aquifer systems in relation to the occurrence of various types of contaminants, among which medical drugs are also included. Such a goal is motivated by the necessity to provide mathematical tools to predict, for example, how contaminant concentration fields develop across the aquifer over time. This may provide useful information to support decision-making processes in the context of environmental risk assessment procedures. To this purpose, several interdisciplinary strategies and tools are typically employed, the most fundamental ones being listed below:
All of these interdisciplinary tools and strategies are contemporarily employed to analyse the fate of pharmaceuticals in groundwater. | [
{
"math_id": 0,
"text": "K_S"
},
{
"math_id": 1,
"text": "K_{OW}"
},
{
"math_id": 2,
"text": "K_{OC}"
},
{
"math_id": 3,
"text": "Pk_a"
},
{
"math_id": 4,
"text": "\\frac{\\partial}{\\partial t}(\\phi C_W(\\boldsymbol {x},t))=\n- \\underbrace {\\nabla (\\phi C_W(\\boldsymbol {x},t)\\boldsymbol v (\\boldsymbol{x},t))}_{\\text{advection}}+\n\\underbrace{\\nabla (\\phi \\boldsymbol D(\\boldsymbol {x},t) \\nabla C_W(\\boldsymbol {x},t))}_{\\text{hydrodynamic dispersion}}\n+\\underbrace{R}_{\\text{accumulation}}"
},
{
"math_id": 5,
"text": "\\phi"
},
{
"math_id": 6,
"text": "\\boldsymbol {x}"
},
{
"math_id": 7,
"text": "t"
},
{
"math_id": 8,
"text": "\\nabla "
},
{
"math_id": 9,
"text": "C_W"
},
{
"math_id": 10,
"text": "C_W(\\boldsymbol{x},t)"
},
{
"math_id": 11,
"text": "\\boldsymbol{v}(\\boldsymbol{x},t)"
},
{
"math_id": 12,
"text": "D(\\boldsymbol {x},t)"
},
{
"math_id": 13,
"text": "\\boldsymbol x"
},
{
"math_id": 14,
"text": "R"
},
{
"math_id": 15,
"text": " k_d "
},
{
"math_id": 16,
"text": "f_{OC}"
},
{
"math_id": 17,
"text": "C_S=k_d C_W"
},
{
"math_id": 18,
"text": "C_S"
},
{
"math_id": 19,
"text": " k_d \\propto K_{OC},f_{OC} "
}
]
| https://en.wikipedia.org/wiki?curid=67944539 |
67944609 | Spectral interferometry | Spectral interferometry (SI) or frequency-domain interferometry is a linear technique used to measure optical pulses, with the condition that a reference pulse that was previously characterized is available. This technique provides information about the intensity and phase of the pulses. SI was first proposed by Claude Froehly and coworkers in the 1970s.
A known (acting as the reference) and an unknown pulse arrive at a spectrometer, with a time delay formula_0 between them, in order to create spectral fringes. A spectrum is produced by the sum of these two pulses and, by measuring said fringes, one can retrieve the unknown pulse. If formula_1 and formula_2 are the electric fields of the unknown and reference pulse respectively, the time delay can be expressed as a phase factor formula_3 for the unknown pulse. Then, the combined field is:
formula_4
The average spacing between the fringes is inversely proportional to the time delay formula_0, and the SI signal is given by:
formula_5
where formula_6 is the oscillation phase.
Furthermore, the width of the spectral fringes can provide information on the spectral phase difference between the two pulses, formula_7; narrowly spaced fringes indicate rapid phase changes with frequency.
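A short numerical illustration of these relations is sketched below (NumPy; the Gaussian spectra, the quadratic unknown phase and the delay are arbitrary placeholders). It shows that the average fringe period in angular frequency equals 2*pi/tau.

```python
import numpy as np

w = np.linspace(2.2, 2.5, 4000) * 1e15        # angular frequency grid [rad/s]
w0, dw = 2.35e15, 0.03e15                     # center frequency and bandwidth
tau = 500e-15                                 # delay between the two pulses [s]

S_ref = np.exp(-((w - w0) / dw) ** 2)         # reference spectrum (flat phase)
S_un = np.exp(-((w - w0) / dw) ** 2)          # unknown-pulse spectrum
phi_un = 1e-28 * (w - w0) ** 2                # unknown spectral phase (chirp)

E_ref = np.sqrt(S_ref)
E_un = np.sqrt(S_un) * np.exp(1j * phi_un)
S_SI = np.abs(E_ref + E_un * np.exp(-1j * w * tau)) ** 2   # measured SI spectrum

print(2 * np.pi / tau)   # ~1.26e13 rad/s: expected average fringe spacing
```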
Comparison with the Time Domain.
Compared to time-domain interferometry, SI presents some interesting advantages. Firstly, by using a CCD detector or a simple camera, the whole interferogram can be recorded simultaneously. Furthermore, the interferogram is not nullified by small fluctuations of the optical path, but a reduction in the fringe contrast should be expected in cases where the exposure time is longer than the fluctuation time scale. However, SI produces phase measurements through its cosine only, meaning that the phase difference is determined only up to multiples of formula_8, which can lead to ambiguous solutions that degrade the signal-to-noise ratio.
There have been efforts to measure pulse intensity and phase in both the time and the frequency domain by combining the autocorrelation and the spectrum. This technique is called Temporal Information Via Intensity (TIVI) and it involves an iterative algorithm to find an intensity consistent with the autocorrelation, followed by another iterative algorithm to find the temporal and spectral phases consistent with the intensity and spectrum, but the results are inconclusive.
Applications.
Spectral Interferometry has gained momentum in recent years. It is frequently used for measuring the linear response of materials, such as the thickness and refractive index of normal dispersive materials, the amplitude and phase of the electric field in semiconductor nanostructures and the group delay on laser mirrors.
In the realm of femtosecond spectroscopy, SI is the technique on which SPIDER is based, thus it is used for four-wave mixing experiments and various phase-resolved pump-probe experiments.
Experimental Difficulties.
This technique is not commonly used since it relies on a number of factors in order to obtain strong fringes during experimental processes. Some of them include:
Spectral Shearing Interferometry.
In cases of relatively long pulses, one can opt for Spectral Shearing Interferometry. For this method, the reference pulse is obtained by sending its mirror image through a sinusoidal phase modulation. Hence, a spectral shift of magnitude formula_9 can be correlated to the produced linear temporal phase modulation and the spectrum of the combined pulses then has a modulation phase of:
formula_10
where the approximate relation is appropriate for small enough formula_9. Thus, the spectral derivative of the phase of the signal pulse, which corresponds to the frequency-dependent group delay, can be obtained.
Spectral Phase Interferometry for Direct Electric-field Reconstruction.
Spectral Phase Interferometry for Direct Electric-field Reconstruction (SPIDER) is a nonlinear self-referencing technique based on spectral shearing interferometry. For this method, the reference pulse should produce a mirror image of itself with a spectral shift, in order to provide the spectral intensity and phase of the probe pulse via a direct Fast Fourier Transform (FFT) filtering routine. However, unlike SI, in order to produce the probe pulse phase, it requires integration of the phase extracted from the interferogram.
Self-Referenced Spectral Interferometry.
Self-Referenced Spectral Interferometry (SRSI) is a technique where the reference pulse is created from the unknown pulse being measured. The self-referencing is possible due to pulse shaping optimization and non-linear temporal filtering. It provides all the benefits associated with SI (high sensitivity, precision and resolution, and a large dynamic and temporal range) but, unlike the SPIDER technique, neither shear nor harmonic generation is necessary for its implementation.
For SRSI, the generation of a weak mirror image of the unknown pulse is required. That image is perpendicularly polarized and delayed with respect to the input pulse. Then, in order to filter the reference pulse in the time domain, the main portion of the pulse is used for cross-polarized wave generation (XPW) in a nonlinear crystal. The interference between the reference pulse and the mirror image is recorded and analyzed via Fourier transform spectral interferometry (FTSI). Known applications of the SRSI technique include the characterization of pulses below 15 fs.
Frequency-Resolved Optical Gating.
Frequency Resolved Optical Gating (FROG) is a technique that determines the intensity and phase of a pulse by measuring the spectrum of a particular temporal component of said pulse. This results in an intensity trace, related to the spectrogram of the pulse formula_11, versus frequency and delay:
formula_12
where formula_13 is a variable-delay gate pulse. FROG is commonly combined with the second-harmonic generation (SHG) process (SHG-FROG).
The same principle can be applied exploiting different physical processes, as in polarization-gated FROG (PG-FROG) or transient-grating FROG (TG-FROG).
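A compact numerical sketch of an SHG-FROG trace is shown below (an illustration only: NumPy, a chirped Gaussian test pulse, and the common SHG choice g(t - tau) = E(t - tau) are assumptions, not prescriptions of the article).

```python
import numpy as np

n, dt = 256, 2e-15                                    # grid points, time step [s]
t = (np.arange(n) - n // 2) * dt
E = np.exp(-(t / 30e-15) ** 2 + 1j * 3e26 * t ** 2)   # chirped Gaussian test pulse

trace = np.zeros((n, n))
for k, tau in enumerate(t):                           # one column per delay value
    gate = np.roll(E, int(round(tau / dt)))           # g(t - tau) = E(t - tau) for SHG
    signal = E * gate
    trace[:, k] = np.abs(np.fft.fftshift(np.fft.fft(signal))) ** 2
# trace[omega, tau] approximates the FROG spectrogram S_E(omega, tau).
```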
Other Linear Techniques.
There is a variety of linear techniques that are based on the main principles of spectral interferometry. Some of them are listed below.
The acquisition of the two quadratures of the interference signal resolves the issue generated by the phase differences being expressed in multiples of formula_8. The acquisition should happen simultaneously via polarization multiplexing, with the reference beam under circular polarization.
It is a technique created for direct determination of formula_14, mainly used for femtosecond pump-probe experiments in materials with long dephasing times. It is based on the inverse Fourier transform of the signal: formula_15
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\tau "
},
{
"math_id": 1,
"text": " E_{un}(\\omega)"
},
{
"math_id": 2,
"text": " E_{ref}(\\omega)"
},
{
"math_id": 3,
"text": " e^{-i\\omega \\tau} "
},
{
"math_id": 4,
"text": "\nE_{SI}=E_{ref}(\\omega)+E_{un}(\\omega) e^{-i\\omega \\tau} \n"
},
{
"math_id": 5,
"text": "\nS_{SI}=S_{ref}(\\omega)+S_{un}(\\omega)+2\\sqrt{S_{ref}(\\omega)}\\sqrt{S_{un}(\\omega)}cos[\\phi_{SI}]\n"
},
{
"math_id": 6,
"text": " \\phi_{SI}=\\phi_{un}(\\omega)-\\phi_{ref}(\\omega)+\\omega \\tau "
},
{
"math_id": 7,
"text": " \\Delta \\phi = \\phi_{un}(\\omega) -\\phi_{ref}(\\omega) "
},
{
"math_id": 8,
"text": " 2\\pi "
},
{
"math_id": 9,
"text": " \\delta \\omega "
},
{
"math_id": 10,
"text": "\n\\phi(\\omega)= \\phi_{ref}(\\omega +\\delta \\omega)+ \\omega \\tau = \\frac{\\partial \\phi_{ref}}{\\partial \\omega}\\delta \\omega +\\omega \\tau \n"
},
{
"math_id": 11,
"text": " S_E (\\omega, \\tau) "
},
{
"math_id": 12,
"text": "\nS_E (\\omega , \\tau) = \\left\\vert \\int \\limits_{-\\infty }^{\\infty} E(t)g(t- \\tau )e^{-i\\omega }dt \\right\\vert^2\n"
},
{
"math_id": 13,
"text": " g(t - \\tau ) "
},
{
"math_id": 14,
"text": " \\Delta \\phi "
},
{
"math_id": 15,
"text": " F.T.^{-1}_{SI}(t)=E^{\\ast}_{ref}(-t) \\otimes E_{ref}(t) + E^{\\ast}_{un}(-t) \\otimes E_{un}(t) + f(t- \\tau ) + f(-t- \\tau )^{\\ast} "
}
]
| https://en.wikipedia.org/wiki?curid=67944609 |
67944684 | Lithium aluminium germanium phosphate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Lithium aluminium germanium phosphate, typically known with the acronyms LAGP or LAGPO, is an inorganic ceramic solid material whose general formula is Li1+xAlxGe2-x(PO4)3. LAGP belongs to the NASICON (Sodium Super Ionic Conductors) family of solid conductors and has been applied as a solid electrolyte in all-solid-state lithium-ion batteries. Typical values of ionic conductivity in LAGP at room temperature are in the range of 10^-5 - 10^-4 S/cm, even if the actual value of conductivity is strongly affected by stoichiometry, microstructure, and synthesis conditions. Compared to lithium aluminium titanium phosphate (LATP), which is another phosphate-based lithium solid conductor, the absence of titanium in LAGP improves its stability towards lithium metal. In addition, phosphate-based solid electrolytes have superior stability against moisture and oxygen compared to sulfide-based electrolytes like Li10GeP2S12 (LGPS) and can be handled safely in air, thus simplifying the manufacture process.
Since the best performances are encountered when the stoichiometric value of "x" is 0.5, the acronym LAGP usually indicates the particular composition of Li1.5Al0.5Ge1.5(PO4)3, which is also the typically used material in battery applications.
Properties.
Crystal structure.
Lithium-containing NASICON-type crystals are described by the general formula LiM2(PO4)3, in which M stands for a metal or a metalloid (Ti, Zr, Hf, Sn, Ge), and display a complex three-dimensional network of corner-sharing MO6 octahedra and phosphate tetrahedra. Lithium ions are hosted in voids in between, which can be subdivided into three kinds of sites:
In order to promote lithium conductivity at sufficiently high rates, Li(1) sites should be fully occupied and Li(2) sites should be fully empty. Li(3) sites are located between Li(1) and Li(2) sites and are occupied only when large tetravalent cations are present in the structure, such as Zr, Hf, and Sn. If some Ge4+ cations in the LiGe2(PO4)3 (LGP) structure are partially replaced by Al3+ cations, the LAGP material is obtained with the general formula Li1+xAlxGe2-x(PO4)3. The single-phase NASICON structure is stable with "x" between 0.1 and 0.6; when this limit is exceeded, a solid solution is no more possible and secondary phases tend to be formed. Although Ge4+ and Al3+ cations have very similar ionic radii (0.53 Å for Ge4+ vs. 0.535 Å for Al3+), cationic substitution leads to compositional disorder and promotes the incorporation of a larger amount of lithium ions to achieve electrical neutrality. Additional lithium ions can be incorporated in either Li(2) or Li(3) empty sites.
In the available scientific literature, there is not a unique description of the sites available for lithium ions and of their atomic coordination, as well as of the sites directly involved during the conduction mechanism. For example, only two available sites, namely Li(1) and Li(2), are mentioned in some cases, while the Li(3) site is neither occupied nor involved in the conduction process. This results in the lack of unambiguous description of LAGP local crystal structure, especially concerning the arrangement of lithium ions and site occupancy when germanium is partially replaced by aluminium.
LAGP displays a rhombohedral unit cell with a space group R3c.
Vibrational properties.
Factor group analysis.
LAGP crystals belong to the space group D3d^6 (R3c). The factor group analysis of NASICON-type materials with general formula MIMIV2(PO4)3 (where MI stands for a monovalent metal ion like Na+, Li+ or K+, and MIV represents a tetravalent cation such as Ti4+, Ge4+, Sn4+, Zr4+ or Hf4+) is usually performed assuming the separation between internal vibrational modes (i.e. modes originating in PO4 units) and external modes (i.e. modes arising from the translations of the MI and MIV cations, from PO4 translations, and from PO4 librations).
Focusing on internal modes only, the factor group analysis for R3c space group identifies 14 Raman-active modes for the PO4 units: 6 of these modes correspond to stretching vibrations and 8 to bending vibrations.
On the contrary, the analysis of external modes leads to many available vibrations: since the number of irreducible representations within the rhombohedral R3c space group is restricted, interactions among different modes could be expected and a clear assignment or discrimination becomes unfeasible.
Raman spectra.
The vibrational properties of LAGP could be directly probed using Raman spectroscopy. LAGP shows the Raman features characteristic of all the NASICON-type materials, most of which caused by the vibrational motions of PO4 units. The main spectral regions in a Raman spectrum of NASICON-type materials are summarized in the following table.
The Raman spectra of LAGP are usually characterized by broad peaks, even when the material is in its crystalline form. Indeed, both the presence of aluminium ions in place of germanium ions and the extra lithium ions introduce structural and compositional disorder in the sublattice, resulting in peak broadening.
Transport properties.
LAGP is a solid ionic conductor and features the two fundamental properties to be used as a solid-state electrolyte in lithium-ion batteries, namely a sufficiently high ionic conductivity and a negligible electronic conductivity. Indeed, during battery operations, LAGP should guarantee the easy and fast motion of lithium ions between cathode and anode, while preventing the transfer of electrons.
As stated in the description of the crystal structure, three kinds of sites are available for hosting lithium ions in the LAGP NASICON structure, i.e. the Li(1) sites, the Li(2) sites and the Li(3) sites. Ionic conduction occurs because of hopping of lithium ions from Li(1) to Li(2) sites or across two Li(3) sites. The bottleneck to ionic motion is represented by a triangular window delimited by three oxygen atoms between Li(1) and Li(2) sites.
The ionic conductivity formula_0 in LAGP follows the usual dependency on temperature expressed by an Arrhenius-type equation, which is typical of most of solid-state ionic conductors:
formula_1
where
"σ"0 is the pre-exponential factor,
T is the absolute temperature,
"E"a is the activation energy for ionic transport,
kB is the Boltzmann constant.
Typical values for the activation energies of bulk LAGP materials are in the range of 0.35 - 0.41 eV. Similarly, the room-temperature ionic conductivity is closely related to the synthesis conditions and to the actual material microstructure, therefore the conductivity values reported in scientific literature span from 10^-5 S/cm up to 10 mS/cm, the highest value close to room temperature reported up to now. Compared to LGP, the room-temperature ionic conductivity of LAGP is increased by 3-4 orders of magnitude upon partial substitution of Ge4+ by Al3+. Aluminium ions have a lower charge compared to Ge4+ ions and additional lithium is incorporated in the NASICON structure to maintain charge balance, resulting in an enlarged number of charge carriers. The beneficial effect of aluminium is maximized for "x" around 0.4 - 0.5; for larger Al content, the single-phase NASICON structure is not stable and secondary phases appear, mainly AlPO4, Li4P2O7, and GeO2. Secondary phases are typically nonconductive; however, small and controlled amounts of AlPO4 exert a densification effect which positively affects the overall ionic conductivity of the material.
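A short sketch of how the Arrhenius relation converts an activation energy into a conductivity-temperature curve is given below; the simple form sigma = sigma_0 * exp(-E_a/(k_B*T)) and the numerical values of E_a and sigma_0 are placeholder assumptions, chosen only to land in the ranges quoted above.

```python
import numpy as np

k_B = 8.617e-5          # Boltzmann constant [eV/K]
E_a = 0.38              # activation energy [eV], within the typical 0.35 - 0.41 eV range
sigma_0 = 6.0e2         # pre-exponential factor [S/cm], placeholder value

def sigma(T):
    """Arrhenius-type ionic conductivity sigma = sigma_0 * exp(-E_a / (k_B * T))."""
    return sigma_0 * np.exp(-E_a / (k_B * T))

for T in (298.0, 323.0, 373.0):     # 25, 50 and 100 degrees Celsius
    print(T, sigma(T))              # the conductivity rises steeply with temperature
```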
The prefactor "σ"0 in the Arrhenius equation can in turn be written as a function of fundamental constants and conduction parameters:
formula_2
where
"Z" is ion valence,
e is the elementary charge,
T is the absolute temperature,
kB is the Boltzmann constant,
n is the concentration of charge carriers,
v0 is the average velocity of the ions,
l0 is the mean free path.
The prefactor is directly proportional to the concentration of mobile lithium-ion carriers, which increases with the aluminium content in the material. As a result, since the dependency of the activation energy on aluminium content is negligible, the ionic conductivity is expected to increase with increasing Ge4+ substitution by Al3+, until secondary phases are formed. The introduction of aluminium also reduces the grain boundary resistivity of the material, positively impacting on the total (bulk crystal + grain boundary) ionic conductivity of the LAGP material.
As expected for solid ionic conductors, the ionic conductivity of LAGP increases with increasing temperature.
Regarding the electronic conductivity of LAGP, it should be as low as possible to prevent an electrical short circuit between anode and cathode. As for the ionic conductivity, the exact stoichiometry and microstructure, strongly connected to the synthesis method, have an influence on the electronic conductivity, even if the reported values are very low and close to (or lower than) 10^-9 S/cm.
Thermal properties.
The specific heat capacity of LAGP materials with general formula Li1+xAlxGe2-x(PO4)3 fits into the Maier-Kelley polynomial law in the temperature range from room temperature to 700 °C:
formula_3
where
T is the absolute temperature,
"A, B, C" are fitting constants.
Typical values are in the range of 0.75 - 1.5 J⋅g−1⋅K−1 in the temperature interval 25 - 100 °C. The "A", "B" constants increase with the "x" value, i.e. with both the aluminium and the lithium content, while the constant "C" does not follow a precise trend. As a result, the specific heat capacity of LAGP is expected to increase as the Al content grows and the Ge content decreases, which is consistent with data about the relative specific heats of aluminium and germanium compounds.
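A minimal sketch of the Maier-Kelley law is shown below; the fitting constants are hypothetical values chosen only so that the computed specific heat stays within the 0.75 - 1.5 J⋅g−1⋅K−1 interval quoted above, and are not literature data.
```python
def maier_kelley_cp(T, A, B, C):
    """Maier-Kelley polynomial: Cp(T) = A + B*T + C*T**-2, with T in K."""
    return A + B * T + C / T**2

# Hypothetical fitting constants (not literature values):
# A in J g^-1 K^-1, B in J g^-1 K^-2, C in J K g^-1.
A, B, C = 0.60, 1.2e-3, -1.0e4

for T_celsius in (25.0, 60.0, 100.0):
    T = T_celsius + 273.15
    print(f"{T_celsius:5.1f} degC -> Cp = {maier_kelley_cp(T, A, B, C):.3f} J/(g K)")
```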
In addition, the thermal diffusivity formula_4 of LAGP follows a decreasing trend with increasing temperature, irrespective of the aluminium content:
formula_5
The aluminium level affects the exponent formula_6, which varies from 0.08 (high Al content) to 0.11 (low Al content). Such small values suggest the presence of a large number of point defects in the material, which is highly beneficial for solid ionic conductors. Finally, the expression for the thermal conductivity can be written:
formula_7
where
Cv is the heat capacity per unit volume,
vph is the average phonon group velocity,
lph is the phonon mean free path,
formula_8 is the density of the material.
Taking everything into account, as the aluminium content in LAGP increases, the ionic conductivity increases as well, while the thermal conductivity decreases, since a larger number of lithium ions enhances the phonon scattering, thus reducing the phonon mean free path and the thermal transport in the material. Therefore, thermal and ionic transports in LAGP ceramics are not correlated: the corresponding conductivities follow opposite trends as a function of the aluminium content and are affected in a different way by temperature variations (e.g., the ionic conductivity increases by one order of magnitude upon an increase from room temperature to 100 °C, while the thermal conductivity increases by only 6%).
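The thermal-conductivity expression above can be evaluated directly once the three factors are known; the values below are order-of-magnitude placeholders for a phosphate ceramic rather than measured LAGP data.
```python
def thermal_conductivity(alpha, cp, rho):
    """kappa = alpha * Cp * rho, the expression given above.

    alpha : thermal diffusivity, m^2/s
    cp    : specific heat capacity, J/(kg K)
    rho   : density, kg/m^3
    """
    return alpha * cp * rho

# Order-of-magnitude placeholders (not measured LAGP values).
alpha, cp, rho = 5.0e-7, 1.0e3, 3.4e3
print(f"kappa ~ {thermal_conductivity(alpha, cp, rho):.2f} W/(m K)")
```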
Thermal stability.
Detrimental secondary phases can also form because of thermal treatments or during the material production. Excessively high sintering/annealing temperatures or long dwelling times will result in the loss of volatile species (especially Li2O) and in the decomposition of LAGP main phase into AlPO4 and GeO2. LAGP bulk samples and thin films are typically stable up to 700-750 °C; if this temperature is exceeded, volatile lithium is lost and the impurity phase GeO2 forms. If the temperature is further increased beyond 950 °C, also AlPO4 appears.
Raman spectroscopy and "in situ" X-ray diffraction (XRD) are useful techniques that can be employed to recognise the phase purity of LAGP samples during and after the heat treatments.
Chemical and electrochemical stability.
LAGP belongs to phosphate-based solid electrolytes and, in spite of showing a moderate ionic conductivity compared to other families of solid ionic conductors, it possesses some intrinsic advantages with respect to sulfides and oxides:
One of the main advantages of LAGP is its chemical stability in the presence of oxygen, water vapour, and carbon dioxide, which simplifies the manufacturing process by avoiding the need for a glovebox or protected environments. Unlike sulfide-based solid electrolytes, which react with water releasing poisonous gaseous hydrogen sulfide, and garnet-type lithium lanthanum zirconium oxide (LLZO), which reacts with water and CO2 to form passivating layers of LiOH and Li2CO3, LAGP is practically inert in humid air.
Another important advantage of LAGP is its wide electrochemical stability window, up to 6 V, which allows the use of such electrolyte in contact with high-voltage cathodes, thus enabling high energy densities. However, the stability at very low voltages and against lithium metal is controversial: even if LAGP is more stable than LATP because of the absence of titanium, some literature works report on the reduction of Ge4+ by lithium as well, with formation of Ge2+ and metallic germanium at the electrode-electrolyte interface and dramatic increase of interfacial resistance.
The possible decomposition mechanism of LAGP in contact with metallic lithium is reported in the equation below:
<chem>2LiGe2(PO4)3 + 4Li -> 3GeO2 + 6LiPO3 + Ge</chem>
Synthesis.
Several synthesis methods exist to produce LAGP in the form of bulk pellets or thin films, depending on the required performances and final applications. The synthesis path significantly affects the microstructure of the LAGP material, which plays a key role in determining its overall conductive properties. Indeed, a compact layer of crystalline LAGP with large and connected grains, and minimal amount of secondary, non-conductive phases ensures the highest conductivity values. On the contrary, an amorphous structure or the presence of small grains and pores tend to hinder the motion of lithium ions, with values of ionic conductivity in the range of 10–8 - 10–6 S/cm for glassy LAGP.
In most cases, a post-process thermal treatment is performed to achieve the desired degree of crystallinity.
Bulk pellets.
Solid-state sintering.
Solid-state sintering is the most used synthesis process to produce solid-state electrolytes. Powders of LAGP precursors, including oxides like GeO2 and Al2O3, are mixed, calcined and densified at high temperature (700 - 1200 °C) and for long times (12 hours). Sintered LAGP is characterized by high crystalline quality, large grains, a compact microstructure, and high density, even if negative side effects such as loss of volatile lithium compounds and formation of secondary phases should be avoided while the material is kept at high temperature.
The sintering parameters affect the LAGP microstructure and purity and, ultimately, its ionic conductivity and conduction performances.
Glass crystallization.
LAGP glass-ceramics can be obtained starting from an amorphous glass with nominal composition of Li1.5Al0.5Ge1.5(PO4)3, which is subsequently annealed to promote crystallization. Compared to solid-state sintering, ceramic melt-quenching followed by crystallization is a simpler and more flexible process which leads to a denser and more homogeneous microstructure.
The starting point for glass crystallization is the synthesis of the glass through a melt-quenching process of precursors in suitable amount to achieve the desired stoichiometry. Different precursors can be used, especially to provide phosphorus to the material. One possible route is the following:
The main steps are summarized in the following equation:
<chem>3/4Li2CO3 + 3/2GeO2 + 3NH4H2PO4 + 1/4Al2O3 -> Li_{1.5}Al_{0.5}Ge_{1.5}(PO4)3 + 3NH3 + 3/4CO2 + 9/2H2O</chem>
The annealing temperature is selected to promote full crystallization while avoiding the formation of detrimental secondary phases, pores, and cracks. Various temperatures are reported in different literature sources; however, crystallization does not usually start below 550-600 °C, while temperatures higher than 850 °C cause the extensive formation of impurity phases.
Sol-gel techniques.
The sol-gel technique enables the production of LAGP particles at lower processing temperatures compared to sintering or glass crystallization. The typical precursor is a germanium organic compound, like germanium ethoxide Ge(OC2H5)4, which is dissolved in an aqueous solution with stoichiometric amounts of the sources of lithium, phosphorus, and aluminium. The mixture is then heated and stirred. The sol-gel process starts after the addition of a gelation agent, and the final material is obtained after subsequent heating steps aimed at eliminating water and promoting the pyrolysis reaction, followed by calcination.
The sol-gel process requires the use of germanium organic precursors, which are more expensive compared to GeO2.
Thin films.
Sputtering.
Sputtering (in particular radio-frequency magnetron sputtering) has been applied to the fabrication of LAGP thin-films starting from a LAGP target. Depending on the temperature of the substrate during the deposition, LAGP can be deposited in the cold sputtering or hot sputtering configuration.
The film stoichiometry and microstructure can be tuned by controlling the deposition parameters, especially the power density, the chamber pressure, and the substrate temperature. Both amorphous and crystalline films are obtained, with a typical thickness around 1 μm. The room-temperature ionic conductivity and the activation energy of sputtered and annealed LAGP films are comparable with those of bulk pellets, i.e. 10–4 S/cm and 0.31 eV.
Aerosol deposition.
Pre-synthesized LAGP powders can be sprayed on a substrate to form a LAGP film by means of aerosol deposition. The powders are loaded into the aerosol deposition chamber and purified air is used as the carrier gas to drive the particles towards the substrate, where they impinge and coalesce to generate the film. Since the as-produced film is amorphous, an annealing treatment is usually performed to improve the film crystallinity and its conduction properties.
Other techniques.
Some other methods to produce LAGP materials have been reported in literature works, including liquid-based techniques, spark plasma sintering, and co-precipitation.
In the following table, some ionic conductivity values are reported for LAGP materials produced with different synthesis routes, in the case of optimized production and annealing conditions.
Applications.
LAGP is one of the most studied solid-state electrolytes for lithium-ion batteries. The use of a solid-state electrolyte improves battery safety by eliminating liquid-based electrolytes, which are flammable and usually unstable above 4.3 V. In addition, it physically separates the anode from the cathode, reducing the risk of short-circuit, and strongly inhibits lithium dendrite growth. Finally, solid-state electrolytes can operate in a wide range of temperatures, with minimum conductivity loss and decomposition issues. Nevertheless, the ionic conductivity of solid-state electrolytes is some orders of magnitude lower than that of conventional liquid-based electrolytes, therefore a thin electrolyte layer is preferred to reduce the overall internal impedance and to achieve a shorter diffusion path and larger energy densities. For this reason, LAGP is a suitable candidate for all-solid-state thin-film lithium-ion batteries, in which the electrolyte thickness ranges from 1 to some hundreds of micrometres. The good mechanical strength of LAGP effectively suppresses lithium dendrites during lithium stripping and plating, reducing the risk of internal short-circuit and battery failure.
LAGP is applied as a solid-state electrolyte both as a pure material and as a component in organic-inorganic composite electrolytes. For example, LAGP can be composited with polymeric materials, like polypropylene (PP) or polyethylene oxide (PEO), to improve the ionic conductivity and to tune the electrochemical stability. Moreover, since LAGP is not fully stable against metallic lithium because of the electrochemical reactivity of Ge4+ cations, additional interlayers can be introduced between the lithium anode and the solid electrolyte to improve the interfacial stability. The addition of a thin layer of metallic germanium inhibits the electrochemical reduction by lithium metal at very negative potentials and promotes the interfacial contact between the anode and the electrolyte, resulting in improved cycling performance and battery stability. The use of polymer-ceramic composite interlayers or the excess of Li2O are alternative strategies to improve the electrochemical stability of LAGP at negative potentials.
Besides its use as a solid electrolyte, LAGP has also been tested as an anode material in lithium-ion batteries, showing high electrochemical stability and good cycling performance.
Lithium-sulfur batteries.
LAGP-based membranes have been applied as separators in lithium-sulfur batteries. LAGP allows the transfer of lithium ions from anode to cathode but, at the same time, prevents the diffusion of polysulfides from the cathode, suppressing the polysulfide shuttle effect and enhancing the overall performance of the battery. Typically, all-solid-state lithium-sulfur batteries are not fabricated because of high interfacial resistance; therefore, hybrid electrolytes are usually realized, in which LAGP acts as a barrier against polysulfide diffusion but it is combined with liquid or polymer electrolytes to promote fast lithium diffusion and to improve the interfacial contact with electrodes. | [
{
"math_id": 0,
"text": "\\sigma "
},
{
"math_id": 1,
"text": "\\sigma = \\frac{\\sigma_0}{T}e^{-\\frac{E_{a}}{k_{B}T}}"
},
{
"math_id": 2,
"text": "\\sigma_0=\\frac{1}{3}\\frac{(Ze)^2}{k_{B}T}nv_0l_0"
},
{
"math_id": 3,
"text": "C_p(T)=A + B\\cdot T + C\\cdot T^{-2}"
},
{
"math_id": 4,
"text": "\\alpha"
},
{
"math_id": 5,
"text": "\\alpha\\propto T^{-p}"
},
{
"math_id": 6,
"text": "p"
},
{
"math_id": 7,
"text": "\\kappa = \\frac{1}{3} \\cdot C_v \\cdot v_{ph} \\cdot l_{ph} = \\alpha \\cdot C_p \\cdot \\rho "
},
{
"math_id": 8,
"text": "\\rho "
}
]
| https://en.wikipedia.org/wiki?curid=67944684 |
67944695 | Biaxial tensile testing | Testing a material's tensile strength along two perpendicular axes
In materials science and solid mechanics, biaxial tensile testing is a versatile technique to address the mechanical characterization of planar materials. It is a generalized form of tensile testing in which the material sample is simultaneously stressed along two perpendicular axes. Typical materials tested in biaxial configuration include
metal sheets,
silicone elastomers,
composites,
thin films,
textiles
and biological soft tissues.
Purposes of biaxial tensile testing.
A biaxial tensile test generally allows the assessment of the mechanical properties
and a complete characterization of incompressible isotropic materials, which can be obtained with fewer specimens than uniaxial tensile tests require.
Biaxial tensile testing is particularly suitable for understanding the mechanical properties of biomaterials, due to their directionally oriented microstructures.
If the testing aims at characterizing the post-elastic behaviour of the material, uniaxial results become inadequate, and a biaxial test is required in order to examine the plastic behaviour.
In addition to this, using uniaxial test results to predict rupture under biaxial stress states seems to be inadequate.
Even if a biaxial tensile test is performed in a planar configuration, it may be equivalent to the stress state applied on three-dimensional geometries, such as cylinders with an inner pressure and an axial stretching.
The relationship between the inner pressure and the circumferential stress is given by the Mariotte formula:
formula_0
where formula_1 is the circumferential stress, "P" the inner pressure, "D" the inner diameter and "t" the wall thickness of the tube.
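As a quick numerical check of the Mariotte formula, the sketch below uses hypothetical tube dimensions and pressure in consistent SI units.
```python
def circumferential_stress(P, D, t):
    """Mariotte (thin-walled tube) formula: sigma_c = P * D / (2 * t)."""
    return P * D / (2.0 * t)

# Hypothetical example: 0.2 MPa inner pressure, 20 mm inner diameter,
# 1 mm wall thickness.
P, D, t = 0.2e6, 20e-3, 1e-3      # Pa, m, m
print(f"sigma_c = {circumferential_stress(P, D, t) / 1e6:.1f} MPa")  # 2.0 MPa
```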
Equipment.
Typically, a biaxial tensile machine is equipped with motor stages, two load cells and a gripping system.
Motor stages.
Through the movement of the motor stages a certain displacement is applied on the material sample. If the motor stage is one, the displacement is the same in the two direction and only the equi-biaxial state is allowed. On the other hand, by using four independent motor stages, any load condition is allowed; this feature makes the biaxial tensile test superior to other tests that may apply a biaxial tensile state, such as the hydraulic bulge, semispherical bulge, stack compression or flat punch.
Using four independent motor stages allows the sample to be kept centred for the whole duration of the test; this feature is particularly useful when image analysis is coupled with the mechanical test. The most common way to obtain the displacement and strain fields is Digital Image Correlation (DIC), a contactless technique that does not affect the mechanical results.
Load cells.
Two load cells are placed along the two orthogonal load directions to measure the normal reaction forces explicated by the specimen. The dimensions of the sample have to be in accordance with the resolution and the full scale of the load cells.
A biaxial tensile test can be performed either in a load-controlled condition, or a displacement-controlled condition, in accordance with the settings of the biaxial tensile machine. In the former configuration a constant loading rate is applied and the displacements are measured, whereas in the latter configuration a constant displacement rate is applied and the forces are measured.
For elastic materials the load history is not relevant, whereas for viscoelastic materials it is not negligible. Furthermore, for this class of materials the loading rate also plays a role.
Gripping system.
The gripping system transfers the load from the motor stages to the specimen. Although the use of biaxial tensile testing is growing more and more, there is still a lack of robust standardized protocols concerning the gripping system. Since it plays a fundamental role in the application and distribution of the load, the gripping system has to be carefully designed in order to satisfy the Saint-Venant principle. Some different gripping systems are reported below.
Clamps.
Clamps are the most commonly used gripping system for biaxial tensile tests since they allow a quite uniformly distributed load at the junction with the sample. To increase the uniformity of stress in the region of the sample close to the clamps, some notches with circular tips are cut into the arms of the sample. The main problem related to clamps is the low friction at the interface with the sample; indeed, if the friction between the inner surface of the clamps and the sample is too low, relative motion between the two can occur, altering the results of the test.
Sutures.
Small holes are made in the surface of the sample to connect it to the motor stages through wires with a stiffness much higher than that of the sample. Typically, sutures are used with square samples. In contrast to clamps, sutures allow the rotation of the sample around the axis perpendicular to the plane; in this way they do not transmit shear stresses to the sample.
The load transmission is very local, thereby the load distribution is not uniform. A template is needed to apply the sutures in the same position in different samples, to have repeatability among different tests.
Rakes.
This system is similar to the suture gripping system, but stiffer. The rakes transfer a limited quantity of shear stress, so they are less useful than sutures if used in presence of large shear strains. Although the load is transmitted in a discontinuous way, the load distribution is more uniform if compared to the sutures.
Specimen shape.
The success of a biaxial tensile test is strictly related to the shape of the specimen.
The two most used geometries are the square and cruciform shapes. Dealing with fibrous materials or fibres reinforced composites, the fibres should be aligned to the load directions for both classes of specimens, in order to minimize the shear stresses and to avoid the sample rotation.
Square samples.
Square or, more generally, rectangular specimens are easy to obtain, and their dimensions and aspect ratio depend on material availability. Large specimens are needed to make the effects of the gripping system negligible in the core of the sample. However, this solution consumes a lot of material, so small specimens are often required; since the gripping system is then very close to the core of the specimen, the strain distribution is not homogeneous.
Cruciform samples.
A proper cruciform sample should fulfil the following requirements:
It is important to note that in this kind of sample the stretch is larger in the outer region than in the centre, where the strain is uniform.
Method.
Uniaxial stress tests are typically used to measure the mechanical properties of materials, but many materials behave differently under different loading stress states. Biaxial tensile tests are therefore a promising alternative. The Small Punch Test (SPT) and bulge testing are two methods that apply a biaxial tensile state.
Small Punch Test (SPT).
The Small Punch Test (SPT) was first developed in the 1980s as a minimally invasive in-situ technique to investigate the local degradation and embrittlement of nuclear materials. The SPT is a miniaturized test method requiring only a small-volume specimen. Since using small volumes does not severely affect or damage an in-service component, SPT is a good method to determine the mechanical properties of unirradiated and irradiated materials or to analyze small regions of structural components.
During the test, the disc-shaped specimen is clamped between two dies and the punch is pushed through the specimen at a constant displacement rate. A flat punch, or a concave tip pushing a ball, is typically used. Characteristic quantities such as the force-displacement curve are then used to estimate the yield strength and the ultimate tensile strength. From SPT tensile/fracture data collected at various temperatures, the ductile-to-brittle transition temperature (DBTT) can also be determined. Note that the SPT specimen should be very flat to reduce the stress error caused by an ill-defined contact condition.
Hydraulic Bulge Test (HBT).
The Hydraulic Bulge Test (HBT) is a method of biaxial tensile testing. It is used to determine mechanical properties such as Young's modulus, yield strength, ultimate tensile strength, and the strain-hardening behaviour of sheet materials like thin films. HBT can better describe the plastic properties of a sheet at large strains, since the strains in press forming are normally larger than the uniform strain. However, the geometries of formed parts are not symmetric; therefore, the true stress and strain measured by HBT will be higher than those measured by a tensile test.
In HBT, rupture discs and high-pressure hydraulic oil are used to deform the specimen, which also avoids influencing factors such as the friction present in the small punch test. The test conditions are constrained, however: the usable temperature range is limited by solidification and vaporization of the hydraulic oil. High temperature leads to loading failure, while low temperature results in failure of the seals, and the leaking vapor can be dangerous.
In HBT, a circular sample is normally stripped from the substrate on which it has been prepared and clamped around its periphery over a hole at the end of a cylinder. It experiences pressure from one side, applied by the hydraulic oil, and bulges into the cavity as the pressure increases. The flow stress is calculated from the dome height of the bulging blank, and the pressure and dome height can be recorded during the test. The strain is measured by Digital Image Correlation (DIC). Taking into account the specimen thickness and the clamp size, the true stress and strain can then be calculated.
Other liquids may also be used as the hydraulic fluid in HBT. Xiang et al. (2005) developed a HBT for sub-micron thin films by using standard photolithographic microfabrication techniques etch away a small channel behind the film of interest, then pressurized the channel with water to bulge thin films. Validity of this method was confirmed using finite element analysis (FEA).
Gas Bulge Test (GBT).
Gas bulge tests (GBT) operate similarly to HBT. Instead of a hydraulic oil, high-pressure gas is used to back-pressure a thin plate specimen. Since gas has a much lower density than liquid, the maximum safe pressure output from GBT is considerably lower than hydraulic systems. Therefore, elevated temperature GBT is often used to increase ductility of the specimen, enabling plastic deformation at lower pressures.
Unlike HBT, elevated temperatures are possible for GBT. Operating temperatures of biaxial bulge testing are limited by phase transitions of the pressurized fluid, so gases offer an extremely wide range of operating temperatures. GBT is suitable for studying fatigue, low- and high-temperature mechanical properties (given sufficient ductility at low temperatures), and thermal cycling. Additionally, holding pressure at a high temperature allows for testing time-dependent mechanical properties such as creep.
High-temperature DIC may be used to measure biaxial stress and strain during GBT. Alternatively, a laser interferometer may be used to find the displacement near the apex of the dome, and many models are available for calculating both the radius of curvature and the radial strain of bulged specimens. True stress is best approximated by the Young-Laplace equation. Results are comparable to the biaxial testing standard ISO 16808. Clamping of elevated-temperature gas bulge specimens requires clamping materials with an operating temperature in excess of the test temperature. This is possible using high-temperature mechanical fasteners, or by directly bonding materials via traditional welding, friction stir welding (FSW), or diffusion bonding.
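For a thin specimen bulged into a roughly spherical cap, the Young-Laplace approximation mentioned above gives an equi-biaxial membrane stress at the apex of sigma = p·R/(2t). A minimal sketch with hypothetical numbers:
```python
def bulge_biaxial_stress(p, R, t):
    """Young-Laplace estimate of the equi-biaxial membrane stress at the
    dome apex: sigma = p * R / (2 * t), with R the radius of curvature
    and t the current thickness at the apex."""
    return p * R / (2.0 * t)

# Hypothetical test point: 5 MPa gas pressure, 30 mm radius of curvature,
# 0.5 mm apex thickness.
print(f"sigma ~ {bulge_biaxial_stress(5e6, 30e-3, 0.5e-3) / 1e6:.0f} MPa")  # 150 MPa
```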
"GBT example studies".
Frary et al. (2002) use GBT to demonstrate superplastic deformation of commercially pure (CP) titanium and Ti64 by thermally cycling through the material’s α/β transformation temperature.
Huang et al. (2019) measure coefficients of thermal expansion through GBT, and thermally cycle NiTi shape memory alloys to measure stress evolution.
The ability to perform GBT in parallel for an array of specimens enables high-throughput screening of mechanical properties and facilitates rapid materials design. Ding et al. (2014) conducted parallel measurements of viscosity across a huge composition-space of bulk metallic glass. Instead of using a direct pressure hookup, tungstic acid was placed into the cavities behind the specimen plate and decomposed to produce gas upon heating to ~100 °C.
Analytical solution.
A biaxial tensile state can be derived starting from the most general constitutive law for isotropic materials in the large-strain regime:
formula_2
where S is the second Piola-Kirchhoff stress tensor, I the identity matrix, C the right Cauchy-Green tensor, and formula_3, formula_4 and formula_5 the derivatives of the strain energy function per unit of volume in the undeformed configuration formula_6 with respect to the three invariants of C.
For an incompressible material, the previous equation becomes:
formula_7
where "p" is of hydrostatic nature and plays the role of a Lagrange multiplier. It is worth nothing that "p" is not the hydrostatic pressure and must be determined independently of constitutive model of the material.
A well-posed problem requires specifying formula_8; for the biaxial state of a membrane, formula_9, so the "p" term can be obtained:
formula_10
where formula_11 is the third component of the diagonal of C.
According to the definition, the three non zero components of the deformation gradient tensor F are formula_12, formula_13 and formula_14.
Consequently, the components of C can be calculated with the formula formula_15, and they are formula_16, formula_17 and formula_18.
According with this stress state, the two non zero components of the second Piola-Kirchhoff stress tensor are:
formula_19
formula_20
By using the relationship between the second Piola-Kirchhoff and the Cauchy stress tensor, formula_21 and formula_22 can be calculated:
formula_23
formula_24
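The two Cauchy-stress expressions above can be evaluated numerically once the strain-energy derivatives are known. The sketch below assumes a two-parameter Mooney-Rivlin material, so that formula_3 and formula_4 are constants; the material constants and stretches are illustrative placeholders.
```python
def biaxial_cauchy_stress(lam1, lam2, W1, W2):
    """Cauchy stresses for an incompressible membrane under biaxial stretch,
    from the expressions above. W1 and W2 are the strain-energy derivatives,
    taken here as constants (two-parameter Mooney-Rivlin assumption).
    Returns (sigma11, sigma22)."""
    s11 = 2*W1*(lam1**2 - 1.0/(lam1**2 * lam2**2)) + 2*W2*(lam1**2 * lam2**2 - 1.0/lam1**2)
    s22 = 2*W1*(lam2**2 - 1.0/(lam1**2 * lam2**2)) + 2*W2*(lam1**2 * lam2**2 - 1.0/lam2**2)
    return s11, s22

# Hypothetical material constants (MPa) and a non-equal biaxial stretch state.
W1, W2 = 0.3, 0.05
print(biaxial_cauchy_stress(1.5, 1.2, W1, W2))
```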
Equi-biaxial configuration.
The simplest biaxial configuration is the equi-biaxial configuration, where each of the two loading directions is subjected to the same stretch at the same rate. In an incompressible isotropic material under a biaxial stress state, the non zero components of the deformation gradient tensor F are formula_25 and formula_26.
According to the definition of C, its non zero components are formula_27 and formula_28.
formula_29
The Cauchy stress in the two directions is:
formula_30
Strip biaxial configuration.
A strip biaxial test is a test configuration in which the stretch in one direction is constrained, namely a zero displacement is applied in that direction. The components of the C tensor become formula_31, formula_32 and formula_33. It is worth noting that even if there is no displacement along direction 2, the stress is different from zero and depends on the stretch applied in the orthogonal direction, as stated in the following equations:
formula_34
formula_35
The Cauchy stress in the two directions is:
formula_36
formula_37
The strip biaxial test has been used in different applications, such as the prediction of the behaviour of orthotropic materials under a uniaxial tensile stress, delamination problems, and failure analysis.
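The equi-biaxial and strip-biaxial expressions follow from the general biaxial formulas by setting the second stretch equal to the first or to one; this can be checked symbolically, for example with the following short script (the symbol names are arbitrary):
```python
import sympy as sp

lam, W1, W2 = sp.symbols('lambda W1 W2', positive=True)
l1, l2 = sp.symbols('lambda11 lambda22', positive=True)

# General biaxial Cauchy stresses, as written above.
s11 = 2*W1*(l1**2 - 1/(l1**2*l2**2)) + 2*W2*(l1**2*l2**2 - 1/l1**2)
s22 = 2*W1*(l2**2 - 1/(l1**2*l2**2)) + 2*W2*(l1**2*l2**2 - 1/l2**2)

# Equi-biaxial case (lambda11 = lambda22 = lambda): algebraically equal to
# 2*W1*(lambda**2 - lambda**-4) + 2*W2*(lambda**4 - lambda**-2).
print(sp.simplify(s11.subs({l1: lam, l2: lam})))

# Strip biaxial case (lambda22 = 1): sigma11 reduces to
# (2*W1 + 2*W2)*(lambda**2 - lambda**-2), while sigma22 coincides with S22
# since F22 = 1 (no stretch in the constrained direction).
print(sp.simplify(s11.subs({l1: lam, l2: 1})), sp.simplify(s22.subs({l1: lam, l2: 1})))
```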
FEM analysis.
Finite Element Methods (FEM) are sometimes used to obtain the material parameters.
The procedure consists of reproducing the experimental test and obtaining the same stress-stretch behaviour; to do so, an iterative procedure is needed to calibrate the constitutive parameters. In addition, the cracking behaviour of a cruciform specimen under mixed-mode loading can be determined using FEA; the Franc2d program has been used to calculate the stress intensity factor (SIF) for such specimens within the linear elastic fracture mechanics approach. This kind of approach has been demonstrated to be effective to obtain the stress-stretch relationship for a wide class of hyperelastic material models (Ogden, Neo-Hooke, Yeoh, and Mooney-Rivlin).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma_c = \\frac{PD}{2t}"
},
{
"math_id": 1,
"text": "\\sigma_c"
},
{
"math_id": 2,
"text": "\\mathbf{S} = 2 \\left(W_1\\mathbf{I} + W_2 \\left(I_1^C\\mathbf{I}-\\mathbf{C}\\right) + W_3I_3^C \\mathbf{C}^{-1}\\right)"
},
{
"math_id": 3,
"text": "W_1 = \\frac{\\partial W_0}{\\partial I_1^C}"
},
{
"math_id": 4,
"text": "W_2 = \\frac{\\partial W_0}{\\partial I_2^C}"
},
{
"math_id": 5,
"text": "W_3 = \\frac{\\partial W_0}{\\partial I_3^C}"
},
{
"math_id": 6,
"text": "W_0"
},
{
"math_id": 7,
"text": "\\mathbf{S} = 2 \\left(W_1\\mathbf{I} + W_2 \\left(I_1^C\\mathbf{I}-\\mathbf{C}\\right)\\right) - p\\mathbf{C}^{-1}"
},
{
"math_id": 8,
"text": "S_{33}"
},
{
"math_id": 9,
"text": "S_{33}=0"
},
{
"math_id": 10,
"text": "p = 2 C_{33} \\left(W_1 + W_2 \\left(I_1^C-C_{33}\\right)\\right)"
},
{
"math_id": 11,
"text": "C_{33}"
},
{
"math_id": 12,
"text": "F_{11} = \\lambda_{11}"
},
{
"math_id": 13,
"text": "F_{22}=\\lambda_{22}"
},
{
"math_id": 14,
"text": "F_{33} = \\frac{1}{\\lambda_{11}\\lambda_{22}}"
},
{
"math_id": 15,
"text": "\\mathbf{C}=\\mathbf{F}^T\\mathbf{F}"
},
{
"math_id": 16,
"text": "C_{11} = \\lambda_{11}^2"
},
{
"math_id": 17,
"text": "C_{22}=\\lambda_{22}^2"
},
{
"math_id": 18,
"text": "C_{33}=\\frac{1}{\\lambda_{11}^2\\lambda_{22}^2}"
},
{
"math_id": 19,
"text": "S_{11} = 2 W_1\\left(1-\\frac{1}{\\lambda_{11}^4 \\lambda_{22}^2}\\right)+2 W_2\\left(\\lambda_{22}^2-\\frac{1}{\\lambda_{11}^4}\\right)"
},
{
"math_id": 20,
"text": "S_{22} = 2 W_1 \\left(1-\\frac{1}{\\lambda_{11}^2\\lambda_{22}^4}\\right) + 2 W_2 \\left(\\lambda_{11}^2-\\frac{1}{\\lambda_{22}^4}\\right)"
},
{
"math_id": 21,
"text": "\\sigma_{11}"
},
{
"math_id": 22,
"text": "\\sigma_{22}"
},
{
"math_id": 23,
"text": "\\sigma_{11} = 2 W_1 \\left(\\lambda_{11}^2-\\frac{1}{\\lambda_{11}^2\\lambda_{22}^2}\\right) + 2 W_2 \\left(\\lambda_{11}^2\\lambda_{22}^2-\\frac{1}{\\lambda_{11}^2}\\right)"
},
{
"math_id": 24,
"text": "\\sigma_{22} = 2 W_1 \\left(\\lambda_{22}^2-\\frac{1}{\\lambda_{11}^2\\lambda_{22}^2}\\right) + 2 W_2 \\left(\\lambda_{11}^2\\lambda_{22}^2-\\frac{1}{\\lambda_{22}^2}\\right)"
},
{
"math_id": 25,
"text": "F_{11}=F_{22}=\\lambda"
},
{
"math_id": 26,
"text": "F_{33}=\\frac{1}{\\lambda^2}"
},
{
"math_id": 27,
"text": "C_{11}=C_{22}=\\lambda^2"
},
{
"math_id": 28,
"text": "C_{33}=\\frac{1}{\\lambda^4}"
},
{
"math_id": 29,
"text": "S_{11} = S_{22} = 2 W_1 \\left(1-\\frac{1}{\\lambda^6}\\right) + 2 W_2 \\left(\\lambda^2 - \\frac{1}{\\lambda^4}\\right)"
},
{
"math_id": 30,
"text": "\\sigma_{11} = \\sigma_{22} = 2 W_1 \\left(\\lambda^2-\\frac{1}{\\lambda^4}\\right) + 2 W_2 \\left(\\lambda^4-\\frac{1}{\\lambda^2}\\right)"
},
{
"math_id": 31,
"text": "C_{11}=\\lambda^2"
},
{
"math_id": 32,
"text": "C_{22}=1"
},
{
"math_id": 33,
"text": "C_{33}=\\frac{1}{\\lambda^2}"
},
{
"math_id": 34,
"text": "S_{11} = 2 W_1 \\left(1-\\frac{1}{\\lambda^4}\\right) + 2 W_2\\left(1-\\frac{1}{\\lambda^4}\\right)"
},
{
"math_id": 35,
"text": "S_{22} = 2 W_1 \\left(1-\\frac{1}{\\lambda^2}\\right) + 2 W_2 \\left(\\lambda^2-1\\right)"
},
{
"math_id": 36,
"text": "\\sigma_{11} = 2 W_1 \\left(\\lambda^2-\\frac{1}{\\lambda^2}\\right)+2W_2\\left(\\lambda^2-\\frac{1}{\\lambda^2}\\right)"
},
{
"math_id": 37,
"text": "\\sigma_{22} = 2 W_1 \\left(\\lambda^2-1\\right) + 2 W_2 \\left(\\lambda^4-\\lambda^2\\right)"
}
]
| https://en.wikipedia.org/wiki?curid=67944695 |
67944697 | Non-linear inverse Compton scattering | Electron-many photon scattering
Non-linear inverse Compton scattering (NICS), also known as non-linear Compton scattering and multiphoton Compton scattering, is the scattering of multiple low-energy photons, given by an intense electromagnetic field, in a high-energy photon (X-ray or gamma ray) during the interaction with a charged particle, in many cases an electron. This process is an inverted variant of Compton scattering since, contrary to it, the charged particle transfers its energy to the outgoing high-energy photon instead of receiving energy from an incoming high-energy photon. Furthermore, differently from Compton scattering, this process is explicitly non-linear because the conditions for multiphoton absorption by the charged particle are reached in the presence of a very intense electromagnetic field, for example, the one produced by high-intensity lasers.
Non-linear inverse Compton scattering is a scattering process belonging to the category of light-matter interaction phenomena. The absorption of multiple photons of the electromagnetic field by the charged particle causes the consequent emission of an X-ray or a gamma ray with energy comparable or higher with respect to the charged particle rest energy.
The normalized vector potential formula_0 helps to isolate the regime in which non-linear inverse Compton scattering occurs (formula_1 is the electron charge, formula_2 is the electron mass, formula_3 the speed of light and formula_4 the vector potential). If formula_5, the emission phenomenon can be reduced to the scattering of a single photon by an electron, which is the case of inverse Compton scattering. If instead formula_6, NICS occurs and the probability amplitudes of emission have non-linear dependencies on the field. For this reason, in the description of non-linear inverse Compton scattering, formula_7 is called the classical non-linearity parameter.
History.
The physical process of non-linear inverse Compton scattering was first introduced theoretically in different scientific articles starting from 1964. Before this date, some seminal works had emerged dealing with the description of the classical limit of NICS, called non-linear Thomson scattering or multiphoton Thomson scattering. In 1964, different papers were published on the topic of electron scattering in intense electromagnetic fields by L. S. Brown and T. W. B. Kibble, and by A. I. Nikishov and V. I. Ritus, among others. The development of the high-intensity laser systems required to study the phenomenon has motivated the continuous advancements in the theoretical and experimental studies of NICS. At the time of the first theoretical studies, the terms non-linear (inverse) Compton scattering and multiphoton Compton scattering were not yet in use, and they progressively emerged in later works. The case of an electron scattering off high-energy photons in the field of a monochromatic background plane wave with either circular or linear polarization was one of the most studied topics at the beginning. Then, some groups have studied more complicated non-linear inverse Compton scattering scenarios, considering complex electromagnetic fields of finite spatial and temporal extension, typical of laser pulses.
The advent of laser amplification techniques and in particular of chirped pulse amplification (CPA) has made it possible to reach sufficiently high laser intensities to study new regimes of light-matter interaction and to clearly observe non-linear inverse Compton scattering and its peculiar effects. Non-linear Thomson scattering was first observed in 1983 with a formula_8 keV electron beam colliding with a Q-switched laser delivering an intensity of formula_9 W/cm2 (formula_10); photons at twice the laser frequency were produced. It was then observed in 1995 with a CPA laser of peak intensity around formula_11 W/cm2 interacting with neon gas, and in 1998 in the interaction of a mode-locked Nd:YAG laser (formula_12 W/cm2, formula_13) with plasma electrons from a helium gas jet, producing multiple harmonics of the laser frequency. NICS was detected for the first time in a pioneering experiment at the SLAC National Accelerator Laboratory at Stanford University, USA. In this experiment, the collision of an ultra-relativistic electron beam, with an energy of about formula_14 GeV, with a terawatt laser pulse of intensity formula_11 W/cm2 (formula_15, formula_16) produced NICS photons, which were observed indirectly via a nonlinear energy shift in the spectrum of the outgoing electrons; consequent positron generation was also observed in this experiment.
Multiple experiments have since been performed by crossing a high-energy laser pulse with a relativistic electron beam from a conventional linear electron accelerator, but a further achievement in the study of non-linear inverse Compton scattering has come with the realization of all-optical setups. In these cases, a laser pulse is responsible both for the electron acceleration, through the mechanisms of plasma acceleration, and for the non-linear inverse Compton scattering occurring in the interaction of the accelerated electrons with a laser pulse (possibly counter-propagating with respect to the electrons). One of the first experiments of this type was performed in 2006, producing photons with energies from formula_17 to formula_18 keV with a Ti:Sa laser beam (formula_19 W/cm2). Research in this field is still ongoing and active, as attested by the numerous theoretical and experimental publications.
Classical limit.
The classical limit of non-linear inverse Compton scattering, also called non-linear Thomson scattering and multiphoton Thomson scattering, is a special case of classical synchrotron emission driven by the force exerted on a charged particle by intense electric and magnetic fields. Practically, a moving charge emits electromagnetic radiation while experiencing the Lorentz force induced by the presence of these electromagnetic fields. The calculation of the emitted spectrum in this classical case is based on the solution of the Lorentz equation for the particle and the substitution of the corresponding particle trajectory in the Liénard-Wiechert fields. In the following, the considered charged particles will be electrons, and gaussian units will be used.
The component of the Lorentz force perpendicular to the particle velocity is the component responsible for the local radial acceleration and thus for the relevant part of the radiation emission by a relativistic electron of charge formula_1, mass formula_2 and velocity formula_20. In a simplified picture, one can suppose a local circular trajectory for a relativistic particle and assume a relativistic centripetal force equal to the magnitude of the perpendicular Lorentz force acting on the particle: formula_21where formula_22 and formula_23 are the electric and magnetic fields respectively, formula_24 is the magnitude of the electron velocity and formula_25 is the Lorentz factor formula_26. This equation defines a simple dependence of the local radius of curvature on the particle velocity and on the electromagnetic fields felt by the particle. Since the motion of the particle is relativistic, the magnitude formula_24 can be replaced with the speed of light to simplify the expression for formula_27. Given an expression for formula_27, the standard description of synchrotron emission can be used to approximately describe the classical limit of non-linear inverse Compton scattering. Thus, the power distribution in frequency of non-linear Thomson scattering by a relativistic charged particle can be seen as equivalent to the general case of synchrotron emission, with the main parameters made explicitly dependent on the particle velocity and on the electromagnetic fields.
Electron quantum parameter.
Increasing the intensity of the electromagnetic field and the particle velocity, the emission of photons with energy comparable to the electron one becomes more probable and non-linear inverse Compton scattering starts to progressively differ from the classical limit because of quantum effects such as photon recoil. A dimensionless parameter, called the electron quantum parameter, can be introduced to describe how far the physical conditions are from the classical limit and how much non-linear and quantum effects matter. This parameter is given by the following expression:where formula_28 V/m is the Schwinger field. In the scientific literature, formula_29 is also called formula_30. The Schwinger field formula_31, appearing in this definition, is a critical field capable of performing on electrons a work of formula_32 over a reduced Compton length formula_33, where formula_34 is the reduced Planck constant. The presence of such a strong field implies the instability of the vacuum, and fields of this order are necessary to explore non-linear QED effects, such as the production of pairs from vacuum. The Schwinger field corresponds to an intensity of nearly formula_35 W/cm2. Consequently, formula_36 represents the work, in units of formula_32, performed by the field over the Compton length formula_33, and in this way it also measures the importance of quantum non-linear effects, since it compares the field strength in the rest frame of the electron with that of the critical field. Non-linear quantum effects, like the production of an electron-positron pair in vacuum, occur above the critical field formula_31; however, they can also be observed well below this limit, since ultra-relativistic particles with Lorentz factor equal to formula_37 see fields of the order of formula_31 in their rest frame. formula_36 is also called the non-linear quantum parameter, since it is a measure of the magnitude of non-linear quantum effects. The electron quantum parameter is linked to the magnitude of the Lorentz four-force acting on the particle due to the electromagnetic field and it is a Lorentz invariant:formula_38The four-force acting on the particle is equal to the derivative of the four-momentum with respect to proper time. Using this fact in the classical limit, the radiated power according to the relativistic generalization of the Larmor formula becomes:formula_39As a result, emission is enhanced by higher values of formula_36 and, therefore, some considerations can be made on the conditions for prolific emission, further evaluating the definition (1). The electron quantum parameter increases with the energy of the electron (direct proportionality to formula_25) and it is larger when the force exerted by the field perpendicular to the particle velocity increases.
Plane wave case.
Considering a plane wave the electron quantum parameter can be rewritten using this relation between electric and magnetic fields:formula_40where formula_41 is the wavevector of the plane wave and formula_42 the wavevector magnitude. Inserting this expression in the formula of formula_36:formula_43where the vectorial identity formula_44 was used. Elaborating the expression:formula_45Since formula_46 for a plane wave and the last two terms under the square root compensate each other, formula_36 reduces to:
formula_47
In the simplified configuration of a plane wave impinging on the electron, higher values of the electron quantum parameter are obtained when the plane wave is counter-propagating with respect to the electron velocity.
Quantum effects.
A full description of non-linear inverse Compton scattering must include some effects related to the quantization of light and matter. The principal ones are listed below.
where formula_53 stands for the McDonald functions. The mean energy of the emitted photon is given by formula_54. Consequently, a large Lorentz factor and intense fields increase the chance of producing high-energy photons. formula_55 goes as formula_36 because of this formula.
Emission description when formula_6 and formula_59.
When the incoming field is very intense formula_6, the interaction of the electron with the electromagnetic field is completely equivalent to the interaction of the electron with multiple photons, with no need to explicitly quantize the electromagnetic field of the incoming low-energy radiation. The interaction with the radiation field, i.e. the emitted photon, is instead treated with perturbation theory: the probability of photon emission is evaluated by considering the transition between the states of the electron in the presence of the electromagnetic field. This problem has been solved primarily in the case in which electric and magnetic fields are orthogonal and equal in magnitude (crossed field); in particular, the case of a plane electromagnetic wave has been considered. Crossed fields approximate many existing fields well, so the solution found in this way can be considered quite general. The spectrum of non-linear inverse Compton scattering, obtained with this approach and valid for formula_6 and formula_59, is:
where the parameter formula_60 is now defined as:formula_61The result is similar to the classical one except for the different expression of formula_62. For formula_63 it reduces to the classical spectrum (2). Note that if formula_64 (formula_65 or formula_66) the spectrum must be zero, because the energy of the emitted photon cannot be higher than the electron energy; in particular, it cannot be higher than the electron kinetic energy formula_67.
The total power emitted in radiation is given by the integration in formula_51 of the spectrum (3):formula_68where the result of the integration of formula_69 is contained in the last term:
formula_71This expression is equal to the classical one if formula_70 is equal to one, and it can be expanded in two limiting cases, near the classical limit and when quantum effects are of major importance:formula_72A related quantity is the rate of photon emission:formula_73where it is made explicit that the integration is limited by the condition that no photons can be produced if formula_65. This rate of photon emission depends explicitly on the electron quantum parameter and on the Lorentz factor of the electron.
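As a rough numerical illustration of the total emitted power, the sketch below evaluates the classical prefactor in SI units (replacing the Gaussian-units e2 with e2/(4πε0)) and applies the small-formula_36 expansion of formula_70 quoted above; the expansion is only indicative and loses accuracy as formula_36 approaches unity.
```python
import numpy as np
from scipy import constants as k

# Classical prefactor (2/3) e^2 m^2 c^3 / hbar^2, with the Gaussian e^2
# converted to SI as e^2 / (4 pi epsilon_0).  Result is in watts.
prefactor = (2.0 / 3.0) * (k.e**2 / (4 * np.pi * k.epsilon_0)) * k.m_e**2 * k.c**3 / k.hbar**2

def g_small_chi(chi):
    """Leading terms of g(chi) for chi << 1, as quoted above."""
    return 1.0 - (55.0 * np.sqrt(3.0) / 16.0) * chi + 48.0 * chi**2

for chi in (0.01, 0.05, 0.1):
    P = prefactor * chi**2 * g_small_chi(chi)
    print(f"chi = {chi}: P ~ {P:.2e} W per electron "
          f"(quantum correction g ~ {g_small_chi(chi):.2f})")
```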
Applications.
Non-linear inverse Compton scattering is an interesting phenomenon for all applications requiring high-energy photons since NICS is capable of producing photons with energy comparable to formula_32 and higher. In the case of electrons, this means that it is possible to produce photons with MeV energy that can consequently trigger other phenomena such as pair production, Breit–Wheeler pair production, Compton scattering, nuclear reactions.
In the context of laser-plasma acceleration, both relativistic electrons and laser pulses of ultra-high intensity can be present, setting favourable conditions for the observation and the exploitation of non-linear inverse Compton scattering for high-energy photon production, for diagnostic of electron motion, and for probing non-linear quantum effects and non-linear QED. Because of this reason, several numerical tools have been introduced to study non-linear inverse Compton scattering. For example, particle-in-cell codes for the study of laser-plasma acceleration have been developed with the capabilities of simulating non-linear inverse Compton scattering with Monte Carlo methods. These tools are used to explore the different regimes of NICS in the context of laser-plasma interaction. | [
{
"math_id": 0,
"text": "{a_0=eA/(m c^2)}"
},
{
"math_id": 1,
"text": "e"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "c\n"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "a_0\\ll1"
},
{
"math_id": 6,
"text": "a_0\\gg1"
},
{
"math_id": 7,
"text": "a_0"
},
{
"math_id": 8,
"text": "1"
},
{
"math_id": 9,
"text": "1.7\\cdot 10^{14}"
},
{
"math_id": 10,
"text": "a_0=0.01"
},
{
"math_id": 11,
"text": "10^{18}"
},
{
"math_id": 12,
"text": "4.4\\cdot 10^{18}"
},
{
"math_id": 13,
"text": "a_0=1.88"
},
{
"math_id": 14,
"text": "46.6"
},
{
"math_id": 15,
"text": "a_0=0.8"
},
{
"math_id": 16,
"text": "\\chi=0.3"
},
{
"math_id": 17,
"text": "0.4"
},
{
"math_id": 18,
"text": "2"
},
{
"math_id": 19,
"text": "2\\cdot 10^{19}"
},
{
"math_id": 20,
"text": "\\mathbf{v}\n"
},
{
"math_id": 21,
"text": "\n\\gamma \\dfrac{m v^2}{\\rho}=e\\sqrt{\\left(\\mathbf{E}+\\dfrac{\\mathbf{v}}{c}\\times\\mathbf{B}\\right)^2-\\left(\\dfrac{\\mathbf{E}\\cdot\\mathbf{v}}{v}\\right)^2}\n"
},
{
"math_id": 22,
"text": "\\mathbf{E}"
},
{
"math_id": 23,
"text": "\\mathbf{B}"
},
{
"math_id": 24,
"text": "v"
},
{
"math_id": 25,
"text": "\\gamma"
},
{
"math_id": 26,
"text": " \\left(1 - v^2/c^2\\right)^{-1/2} "
},
{
"math_id": 27,
"text": "\\rho"
},
{
"math_id": 28,
"text": "E_s=m^2c^3/(\\hbar e)\\simeq 1.3 \\cdot 10^{18}"
},
{
"math_id": 29,
"text": "\\chi "
},
{
"math_id": 30,
"text": "\\eta "
},
{
"math_id": 31,
"text": "E_s"
},
{
"math_id": 32,
"text": "mc^2"
},
{
"math_id": 33,
"text": "\\hbar/(m c)"
},
{
"math_id": 34,
"text": "\\hbar"
},
{
"math_id": 35,
"text": "10^{29}"
},
{
"math_id": 36,
"text": "\\chi"
},
{
"math_id": 37,
"text": "E_s/|\\mathbf{E}|"
},
{
"math_id": 38,
"text": "\n\\chi=\\dfrac{e \\hbar}{m^3 c^4}|F_{\\alpha\\beta}p^\\alpha|\n"
},
{
"math_id": 39,
"text": "\nP=\\dfrac{2}{3}\\dfrac{e^2m^2c^3}{\\hbar^2}\\chi^2\n"
},
{
"math_id": 40,
"text": "\\mathbf{B}=\\dfrac{\\mathbf{k}\\times\\mathbf{E}}{k}"
},
{
"math_id": 41,
"text": "\\mathbf{k}"
},
{
"math_id": 42,
"text": "k"
},
{
"math_id": 43,
"text": "\\chi=\\dfrac{\\gamma}{E_s}\\sqrt{\\left(\\mathbf{E}+\\dfrac{(\\mathbf{E}\\cdot\\mathbf{v})}{c} \\dfrac{\\mathbf{k}}{k}-\\dfrac{(\\mathbf{v}\\cdot \\mathbf{k})}{k c}\\mathbf{E}\\right)^2-\\left(\\dfrac{\\mathbf{E}\\cdot\\mathbf{v}}{c}\\right)^2}"
},
{
"math_id": 44,
"text": "\\mathbf{A}\\times(\\mathbf{B}\\times\\mathbf{C})=(\\mathbf{A}\\cdot\\mathbf{C})\\mathbf{B}-(\\mathbf{A}\\cdot\\mathbf{B})\\mathbf{C}"
},
{
"math_id": 45,
"text": "\\chi=\\dfrac{\\gamma}{E_s}\\sqrt{\\left[\\mathbf{E}\\left(1-\\dfrac{\\mathbf{v}\\cdot \\mathbf{k}}{k c}\\right)\\right]^2-2\\left(1-\\dfrac{\\mathbf{v}\\cdot \\mathbf{k}}{k c}\\right)\\left(\\dfrac{\\mathbf{E}\\cdot\\mathbf{v}}{k c}\\right)\\mathbf{k}\\cdot\\mathbf{E}+\\left(\\dfrac{(\\mathbf{E}\\cdot\\mathbf{v})}{c} \\dfrac{\\mathbf{k}}{k}\\right)^2-\\left(\\dfrac{\\mathbf{E}\\cdot\\mathbf{v}}{c}\\right)^2}"
},
{
"math_id": 46,
"text": "\\mathbf{k}\\cdot\\mathbf{E}=0"
},
{
"math_id": 47,
"text": "\\chi=\\dfrac{\\gamma |\\mathbf{E}|}{E_s}\\sqrt{\\left(1-\\dfrac{\\mathbf{v}\\cdot \\mathbf{k}}{k c}\\right)^2}"
},
{
"math_id": 48,
"text": "\\omega"
},
{
"math_id": 49,
"text": "\\eta=\\dfrac{e \\hbar^2}{m^3 c^4}|F_{\\alpha\\beta}k^\\alpha|"
},
{
"math_id": 50,
"text": "k^\\alpha=(\\omega/c,\\mathbf{k})"
},
{
"math_id": 51,
"text": "\\eta"
},
{
"math_id": 52,
"text": "\\zeta=\\dfrac{\\eta}{\\chi}\\simeq\\dfrac{\\hbar\\omega}{\\gamma m c^2}"
},
{
"math_id": 53,
"text": "K_\\alpha"
},
{
"math_id": 54,
"text": "\\langle\\hbar\\omega\\rangle=4\\chi \\gamma m c^2/(5\\sqrt{3})"
},
{
"math_id": 55,
"text": "\\zeta"
},
{
"math_id": 56,
"text": "\\chi\\sim\\zeta\\sim\\hbar \\omega/(\\gamma m c^2)"
},
{
"math_id": 57,
"text": "\\chi,\\zeta\\ll1"
},
{
"math_id": 58,
"text": "\\chi,\\zeta\\sim1"
},
{
"math_id": 59,
"text": "\\gamma\\gg 1"
},
{
"math_id": 60,
"text": "y"
},
{
"math_id": 61,
"text": "y=\\dfrac{2\\eta}{3\\chi(\\chi-\\eta)}=\\dfrac{2\\zeta}{3\\chi(1-\\zeta)}\n"
},
{
"math_id": 62,
"text": "F"
},
{
"math_id": 63,
"text": "\\chi,\\zeta\\to0"
},
{
"math_id": 64,
"text": "\\zeta\\geq1"
},
{
"math_id": 65,
"text": "\\eta \\geq \\chi"
},
{
"math_id": 66,
"text": "y<0"
},
{
"math_id": 67,
"text": "(\\gamma-1)mc^2"
},
{
"math_id": 68,
"text": "\nP=\\dfrac{2}{3}\\dfrac{e^2m^2c^3}{\\hbar^2}\\chi^2 g(\\chi) \n"
},
{
"math_id": 69,
"text": "F(\\chi,\\eta)"
},
{
"math_id": 70,
"text": "g(\\chi)"
},
{
"math_id": 71,
"text": " \ng(\\chi)=\\dfrac{3\\sqrt{3}}{2\\pi \\chi^2}\\int_0^{+\\infty}F(\\chi,\\eta)d\\eta=\\dfrac{9\\sqrt{3}}{8\\pi}\\int_0^{+\\infty}\\left[\\dfrac{2y^2K_{\\frac{5}{3}}(y)}{(2+3\\chi y)^2}+\\dfrac{36\\chi^2 y^3 K_{\\frac{2}{3}}(y)}{2+3\\chi y)^4}\\right]dy\n"
},
{
"math_id": 72,
"text": " \n\\begin{cases} P\\approx \\dfrac{2}{3}\\dfrac{e^2 m^2 c^3}{\\hbar^2}\\left(1-\\dfrac{55\\sqrt{3}}{16}\\chi+48\\chi^2\\right), & \\text{for }\\chi\\ll1 \\\\ P\\approx0.37\\dfrac{e^2 m^2 c^3}{\\hbar^2}(3\\chi)^{\\frac{2}{3}}, & \\text{for }\\chi\\gg1 \\end{cases} \n"
},
{
"math_id": 73,
"text": "\n\\dfrac{dN}{dt}=\\dfrac{\\sqrt{3}}{2\\pi}\\dfrac{q^2 m c}{\\hbar^2 }\\dfrac{\\chi}{\\gamma} \\int_0^{\\chi}\\dfrac{F(\\chi,\\eta)}{\\eta}d\\eta \n"
}
]
| https://en.wikipedia.org/wiki?curid=67944697 |
67944776 | Dynamic stall on helicopter rotors | Dynamic stall on helicopter rotors
Dynamic stall is one of the hazardous phenomena on helicopter rotors, which can cause the onset of large torsional airloads and vibrations on the rotor blades. Unlike fixed-wing aircraft, for which stall occurs at relatively low flight speed, dynamic stall on a helicopter rotor emerges at high airspeeds and/or during manoeuvres with high load factors, when the angle of attack (AoA) of the blade elements varies rapidly due to time-dependent blade flapping, cyclic pitch and wake inflow. For example, during forward flight at a velocity close to VNE ("velocity, never exceed"), the advancing and retreating blades almost reach their operational limits while the flow is still attached to the blade surfaces. That is, the advancing blades operate at high Mach numbers, so low values of AoA are needed but shock-induced flow separation may happen, while the retreating blade operates at much lower Mach numbers but the high values of AoA result in stall (see also advancing blade compressibility and retreating blade stall).
Performance limits.
The effect of dynamic stall limits the helicopter performance in several ways such as:
Flow topology.
Flow visualization is considered a vivid method to better understand the aerodynamics of dynamic stall on a helicopter rotor, and the investigation generally starts from the analysis of the unsteady motion of a 2D airfoil (see Blade element theory).
Dynamic stall for 2D airfoils.
Wind tunnel experiments have shown that the behaviour of an airfoil under unsteady motion is quite different from that under quasi-steady motion. Flow separation on the upper airfoil surface is less likely to occur, and takes place at a larger value of AoA than in the quasi-steady case, which can increase the maximum lift coefficient to a certain extent. Three primary unsteady phenomena have been identified as contributing to the delay in the onset of flow separation under unsteady conditions:
The development process of dynamic stall on a 2D airfoil can be summarized in several stages:
Dynamic stall in the rotor environment.
Although the unsteady mechanism has already been studied comprehensively in idealized 2D experiments, dynamic stall on a rotor presents a strongly three-dimensional character. According to in-flight data collected by Bousman, the generation locations of the DSV are "tightly grouped", featuring lift overshoots and large nose-down pitching moments, and can be classified into three groups.
Factors.
Mean AoA.
Increasing the mean value of the AoA leads to more evident flow separation, higher overshoots of lift and pitching moment, and larger airload hysteresis, which may ultimately result in deep dynamic stall.
Oscillating angle.
The amplitude of oscillation is also an important parameter for the stall behaviour of an airfoil. With a larger oscillating angle, deep dynamic stall tends to occur.
Reduced frequency.
A higher value of the reduced frequency formula_0 delays the onset of flow separation to a higher AoA, and a reduction of airload overshoots and hysteresis is obtained because of the increased kinematic induced-camber effect. However, when the reduced frequency is rather low, i.e. formula_1, the vortex-shedding phenomenon is not likely to happen, and neither is deep dynamic stall.
Airfoil geometry.
The effect of airfoil geometry on dynamic stall is quite intricate. As is shown in the figure, for a cambered airfoil, the lift stall is delayed and the maximum nose-down pitch moment is significantly reduced. On the other hand, the inception of stall is more abrupt for a sharp leading-edge airfoil. More information is available here.
Sweep angle.
The sweep angle of the flow to a blade element for a helicopter in forward flight can be significant. It is defined as the radial component of the velocity relative to the leading edge of the blade:
formula_2
Based on experimental data, a sweep angle of 30° is able to delay the onset of stall to a higher AoA, owing to the slower convection of the leading-edge vortex, and to reduce the rate of change of lift and pitching moment as well as the extent of the hysteresis loops.
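For illustration, the sweep angle can be evaluated directly from the advance ratio, the blade azimuth and the non-dimensional radial station. The following minimal Python sketch implements the definition above; the numerical inputs are hypothetical.

```python
import numpy as np

def sweep_angle(mu, psi, r):
    """Local sweep angle Lambda = arctan(U_R / U_T), with the non-dimensional
    blade-element velocities U_R = mu*cos(psi) and U_T = r + mu*sin(psi)."""
    return np.arctan((mu * np.cos(psi)) / (r + mu * np.sin(psi)))

# Hypothetical example: advance ratio 0.3, azimuth 30 degrees, radial station r/R = 0.75.
Lambda_deg = np.degrees(sweep_angle(0.3, np.radians(30.0), 0.75))  # roughly 16 degrees
```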
Reynolds number.
The effect of Reynolds number appears to be minor: at a low value of the reduced frequency (k = 0.004), the stall overshoot is minimal and most of the hysteresis loop is attributable to a delay in reattachment rather than to vortex shedding.
Three-dimensional effects.
Lorber et al. found that at the outermost wing station, the presence of the tip vortex gives both the steady and unsteady lift and pitching-moment hysteresis loops a more nonlinear quasi-steady behaviour, owing to an element of steady vortex-induced lift, while for the remaining wing stations, where the oscillations stay below stall, there is no particular difference from the 2D case.
Time-varying velocity.
During forward flight, the blade element of a rotor encounters a time-varying incident velocity, leading to additional unsteady aerodynamic characteristics. Several features have been identified experimentally; for example, depending on the phasing of the velocity variations with respect to the AoA, the initiation of LEV shedding and the chordwise convection of the LEV appear to be different. However, more work adopting mathematical models is needed to better understand this problem.
Modelling.
There are mainly two types of mathematical model for predicting dynamic stall behaviour: semi-empirical models and computational fluid dynamics (CFD) methods. With regard to the latter, because of the complexity of the flow field during dynamic stall, the full Navier-Stokes equations and appropriate closure models are required, and some promising results have been presented in the literature. However, to use this method accurately, turbulence and transition models must be selected carefully, and the method is often too computationally costly both for research purposes and for the pre-design of a helicopter rotor. On the other hand, some semi-empirical models have shown their capability of providing adequate precision; they consist of sets of linear and nonlinear equations, based on classical unsteady thin-airfoil theory and parameterized by empirical coefficients. Consequently, a large number of experimental results are required to calibrate the empirical coefficients, and it is foreseeable that such models cannot be applied generally to a wide range of conditions, such as different airfoils, Mach numbers, and so on.
Here, two typical semi-empirical methods are presented to give insights into the modelling of dynamic stall.
Boeing-Vertol Gamma Function Method.
The model was initially developed by Gross & Harris and Gormont; the basic idea is as follows:
The onset of dynamic stall is assumed to occur at
formula_3,
where formula_4 is the critical AoA of dynamic stall, formula_5 is static stall AoA and formula_6 is given by
formula_7,
where formula_8 is the time derivative of AoA, formula_9 is the blade chord, and formula_10 is the free-stream velocity. The formula_11 function is empirical, depends on geometry and Mach number and is different for lift and pitching moment.
The airloads coefficients are constructed from static data using an equivalent angle of attack formula_12 derived from Theodorsen's theory at the appropriate reduced frequency of the forcing and a reference angle formula_13 as follows:
formula_14, formula_15, formula_16, where formula_17 is the chordwise position of the centre of pressure.
A comprehensive analysis of a helicopter rotor using this model is presented in the reference.
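A minimal numerical sketch of this construction is given below in Python. The static-coefficient callables are assumed to be supplied (e.g. interpolants of measured static polars), the equivalent angle of attack is taken as an input, and tying the sign of the delay term to the pitch-rate direction is an assumption about the '±' in the reference-angle definition; the empirical formula_11 values must come from test data.

```python
import numpy as np

def dynamic_delay(alpha_dot, c, V_inf, gamma):
    """Delay of stall onset: Delta_alpha_D = gamma * sqrt(|alpha_dot| * c / V_inf)."""
    return gamma * np.sqrt(np.abs(alpha_dot) * c / V_inf)

def stall_onset_angle(alpha_ss, alpha_dot, c, V_inf, gamma):
    """Critical AoA for dynamic stall onset: alpha_DS = alpha_SS + Delta_alpha_D."""
    return alpha_ss + dynamic_delay(alpha_dot, c, V_inf, gamma)

def airload_coefficients(alpha_eq, alpha, alpha_dot, c, V_inf, gamma,
                         cl_static, cd_static, x_cp):
    """Construct unsteady airloads from static data evaluated at the reference angle.

    cl_static and cd_static are callables returning static coefficients at a
    given AoA; alpha_eq is the equivalent AoA from Theodorsen's theory,
    supplied by the caller.
    """
    # Reference angle: instantaneous AoA shifted by the dynamic delay,
    # with the sign tied here to the pitch-rate direction (an assumption).
    alpha_r = alpha - np.sign(alpha_dot) * dynamic_delay(alpha_dot, c, V_inf, gamma)
    cl = (alpha_eq / alpha_r) * cl_static(alpha_r)   # lift rescaled by the equivalent AoA
    cd = cd_static(alpha_r)                          # drag read directly at alpha_r
    cm = (0.25 - x_cp) * cl_static(alpha_r)          # moment from the chordwise offset
    return cl, cd, cm
```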
Leishman-Beddoes Method.
The model was initially developed by Beddoes and Leishman & Beddoes and refined by Leishman and Tyler & Leishman.
The model consists of three distinct sub-systems for describing the dynamic stall physics: an attached-flow module for the unsteady (linear) airloads, a separated-flow module for the nonlinear airloads, and a dynamic-stall module for the airloads induced by the leading-edge vortex.
One significant advantage of the model is that it uses relatively few empirical coefficients, with all but four at each Mach number being derived from static airfoil data.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " k "
},
{
"math_id": 1,
"text": " k < 0.05 "
},
{
"math_id": 2,
"text": "\\Lambda=\\arctan{\\frac{U_R}{U_T}}=\\arctan{\\frac{\\mu \\cos{\\psi}}{r+\\sin{\\psi}}}"
},
{
"math_id": 3,
"text": "\\alpha_{DS}=\\alpha_{SS}+\\Delta\\alpha_D"
},
{
"math_id": 4,
"text": "\\alpha_{DS}"
},
{
"math_id": 5,
"text": "\\alpha_{SS}"
},
{
"math_id": 6,
"text": "\\Delta\\alpha_D"
},
{
"math_id": 7,
"text": "\\Delta\\alpha_D=\\gamma \\sqrt{\\dot{\\alpha} c/V_\\infty}"
},
{
"math_id": 8,
"text": "\\dot{\\alpha}"
},
{
"math_id": 9,
"text": "c"
},
{
"math_id": 10,
"text": "V_\\infty"
},
{
"math_id": 11,
"text": "\\gamma"
},
{
"math_id": 12,
"text": "\\alpha_{eq}"
},
{
"math_id": 13,
"text": "\\alpha_r=\\alpha \\pm \\gamma \\sqrt{\\dot{\\alpha} c/V_\\infty}"
},
{
"math_id": 14,
"text": "C_L=\\frac{\\alpha_{eq}}{\\alpha_r}C_L(\\alpha_r)"
},
{
"math_id": 15,
"text": "C_D=C_D(\\alpha_r)"
},
{
"math_id": 16,
"text": "C_M=(0.25-x_{CP})C_L(\\alpha_r)"
},
{
"math_id": 17,
"text": "x_{CP}"
}
]
| https://en.wikipedia.org/wiki?curid=67944776 |
67952883 | History of nuclear fusion | The history of nuclear fusion began early in the 20th century as an inquiry into how stars powered themselves and expanded to incorporate a broad inquiry into the nature of matter and energy, as potential applications expanded to include warfare, energy production and rocket propulsion.
<templatestyles src="Template:TOC limit/styles.css" />
Early research.
In 1920, the British physicist, Francis William Aston, discovered that the mass of four hydrogen atoms is greater than the mass of one helium atom (He-4), which implied that energy can be released by combining hydrogen atoms to form helium. This provided the first hints of a mechanism by which stars could produce energy. Throughout the 1920s, Arthur Stanley Eddington became a major proponent of the proton–proton chain reaction (PP reaction) as the primary system running the Sun. Quantum tunneling was discovered by Friedrich Hund in 1929, and shortly afterwards Robert Atkinson and Fritz Houtermans used the measured masses of light elements to show that large amounts of energy could be released by fusing small nuclei.
Henry Norris Russell observed that the relationship in the Hertzsprung–Russell diagram suggested that a star's heat came from a hot core rather than from the entire star. Eddington used this to calculate that the temperature of the core would have to be about 40 million K. This became a matter of debate, because the value is much higher than that suggested by astronomical observations, which indicated about one-third to one-half of it. George Gamow introduced the mathematical basis for quantum tunnelling in 1928. In 1929 Atkinson and Houtermans provided the first estimates of the stellar fusion rate. They showed that fusion can occur at lower energies than previously believed, backing Eddington's calculations.
Nuclear experiments began using a particle accelerator built by John Cockcroft and Ernest Walton at Ernest Rutherford's Cavendish Laboratory at the University of Cambridge. In 1932, Walton produced the first man-made fission by using protons from the accelerator to split lithium into alpha particles. The accelerator was then used to fire "deuterons" at various targets. Working with Rutherford and others, Mark Oliphant discovered the nuclei of helium-3 ("helions") and tritium ("tritons"), the first case of human-caused fusion.
Neutrons from fusion were first detected in 1933. The experiment involved the acceleration of protons towards a target at energies of up to 600,000 electron volts.
A theory verified by Hans Bethe in 1939 showed that beta decay and quantum tunneling in the Sun's core might convert one of the protons into a neutron and thereby produce deuterium rather than a diproton. The deuterium would then fuse through other reactions to further increase the energy output. For this work, Bethe won the 1967 Nobel Prize in Physics.
In 1938, Peter Thonemann developed a detailed plan for a pinch device, but was told to do other work for his thesis.
The first patent related to a fusion reactor was registered in 1946 by the United Kingdom Atomic Energy Authority. The inventors were Sir George Paget Thomson and Moses Blackman. This was the first detailed examination of the Z-pinch concept. Starting in 1947, two UK teams carried out experiments based on this concept.
1950s.
The first successful man-made fusion device was the boosted fission weapon tested in 1951 in the Greenhouse Item test. The first true fusion weapon was 1952's Ivy Mike, and the first practical example was 1954's Castle Bravo. In these devices, the energy released by a fission explosion compresses and heats the fuel, starting a fusion reaction. Fusion releases neutrons. These neutrons hit the surrounding fission fuel, causing the atoms to split apart much faster than normal fission processes. This increased the effectiveness of bombs: normal fission weapons blow themselves apart before all their fuel is used; fusion/fission weapons do not waste their fuel.
Stellarator.
In 1949 expatriate German Ronald Richter proposed the Huemul Project in Argentina, announcing positive results in 1951. These turned out to be fake, but prompted others' interest. Lyman Spitzer began considering ways to solve problems involved in confining a hot plasma, and, unaware of the Z-pinch efforts, he created the stellarator. Spitzer applied to the US Atomic Energy Commission for funding to build a test device.
During this period, James L. Tuck, who had worked with the UK teams on Z-pinch, had been introducing the stellarator concept to his coworkers at LANL. When he heard of Spitzer's pitch, he applied to build a pinch machine of his own, the Perhapsatron.
Spitzer's idea won funding and he began work under Project Matterhorn. His work led to the creation of Princeton Plasma Physics Laboratory (PPPL). Tuck returned to LANL and arranged local funding to build his machine. By this time it was clear that the pinch machines were afflicted by instability, stalling progress. In 1953, Tuck and others suggested solutions that led to a second series of pinch machines, such as the ZETA and Sceptre devices.
Spitzer's first machine, 'A' worked, but his next one, 'B', suffered from instabilities and plasma leakage.
In 1954 AEC chair Lewis Strauss foresaw electricity as "too cheap to meter". Strauss was likely referring to fusion power, part of the secret Project Sherwood—but his statement was interpreted as referring to fission. The AEC had issued more realistic testimony regarding fission to Congress months before, projecting that "costs can be brought down... [to]... about the same as the cost of electricity from conventional sources..."
Edward Teller.
In 1951 Edward Teller and Stanislaw Ulam at Los Alamos National Laboratory (LANL) developed the Teller-Ulam design for a thermonuclear weapon, allowing for the development of multi-megaton yield fusion bombs. Fusion work in the UK was classified after the Klaus Fuchs affair.
In the mid-1950s the theoretical tools used to calculate the performance of fusion machines were not predicting their actual behavior. Machines invariably leaked plasma at rates far higher than predicted. In 1954, Edward Teller gathered fusion researchers at the Princeton Gun Club. He pointed out the problems and suggested that any system that confined plasma within concave fields was doomed due to what became known as interchange instability. Attendees remember him saying in effect that the fields were like rubber bands, and they would attempt to snap back to a straight configuration whenever the power was increased, ejecting the plasma. He suggested that the only way to predictably confine plasma would be to use convex fields: a "cusp" configuration.:118
When the meeting concluded, most researchers turned out papers explaining why Teller's concerns did not apply to their devices. Pinch machines did not use magnetic fields in this way, while the mirror and stellarator camps proposed various solutions. This was soon followed, however, by a paper by Martin David Kruskal and Martin Schwarzschild discussing pinch machines, which demonstrated that those devices' instabilities were inherent.:118
ZETA.
The largest "classic" pinch device was the ZETA, which started operation in the UK in 1957. Its name is a take-off on small experimental fission reactors that often had "zero energy" in their name, such as ZEEP.
In early 1958, John Cockcroft announced that fusion had been achieved in the ZETA, an announcement that made headlines around the world. He dismissed US physicists' concerns. US experiments soon produced similar neutrons, although temperature measurements suggested these could not be from fusion. The ZETA neutrons were later demonstrated to be from different versions of the instability processes that had plagued earlier machines. Cockcroft was forced to retract his fusion claims, tainting the entire field for years. ZETA ended in 1968.
Scylla.
The first experiment to achieve controlled thermonuclear fusion was accomplished using Scylla I at LANL in 1958. Scylla I was a θ-pinch machine, with a cylinder full of deuterium. Electric current shot down the sides of the cylinder. The current made magnetic fields that pinched the plasma, raising temperatures to 15 million degrees Celsius, for long enough that atoms fused and produced neutrons. The Sherwood program sponsored a series of Scylla machines at Los Alamos. The program began with 5 researchers and $100,000 in US funding in January 1952. By 1965, a total of $21 million had been spent. The θ-pinch approach was abandoned after calculations showed it could not scale up to produce a reactor.
Tokamak.
In 1950–1951 in the Soviet Union, Igor Tamm and Andrei Sakharov first discussed a tokamak-like approach. Experimental research on those designs began in 1956 at the Moscow Kurchatov Institute by a group of Soviet scientists led by Lev Artsimovich. The tokamak essentially combined a low-power pinch device with a low-power stellarator. The notion was to combine the fields in such a way that the particles orbited within the reactor a particular number of times, today known as the "safety factor". The combination of these fields dramatically improved confinement times and densities, resulting in huge improvements over existing devices.
Other.
In 1952 Ivy Mike, part of Operation Ivy, became the first detonation of a thermonuclear weapon, yielding 10.4 megatons of TNT using liquid deuterium. Cousins and Ware built a toroidal pinch device in England and demonstrated that the plasma in pinch devices is inherently unstable. In 1953 the Soviet Union tested its RDS-6S device (codenamed "Joe 4" in the US), which demonstrated a fission/fusion/fission ("Layercake") design that yielded 600 kilotons. Igor Kurchatov spoke at Harwell on pinch devices, revealing that the USSR was working on fusion.
Seeking to generate electricity, Japan, France and Sweden all started fusion research programs.
In 1955, John D. Lawson formulated what is now known as the Lawson criterion, a condition for a fusion reactor to produce more energy than is lost to the environment through processes such as Bremsstrahlung radiation.
In 1956 the Soviet Union began publishing articles on plasma physics, leading the US and UK to follow over the next several years.
The Sceptre III z-pinch plasma column remained stable for 300 to 400 microseconds, a dramatic improvement on previous efforts. The team calculated that the plasma had an electrical resistivity around 100 times that of copper, and was able to carry 200 kA of current for 500 microseconds.
1960s.
In 1960 John Nuckolls published the concept of inertial confinement fusion (ICF). The laser, introduced the same year, turned out to be a suitable "driver".
In 1961 the Soviet Union tested its 50 megaton Tsar Bomba, the most powerful thermonuclear weapon ever.
Spitzer published a key plasma physics text at Princeton in 1963. He took the ideal gas laws and adapted them to an ionized plasma, developing many of the fundamental equations used to model a plasma.
Laser fusion was suggested in 1962 by scientists at LLNL. Initially, lasers had little power. Laser fusion (inertial confinement fusion) research began as early as 1965.
At the 1964 World's Fair, the public was given its first fusion demonstration. The device was a Theta-pinch from General Electric. This was similar to the Scylla machine developed earlier at Los Alamos.
By the mid-1960s progress had stalled across the world. All of the major designs were losing plasma at unsustainable rates. The 12-beam "4 pi laser" attempt at inertial confinement fusion developed at LLNL targeted a gas-filled target chamber of about 20 centimeters in diameter.
The magnetic mirror was first published in 1967 by Richard F. Post and many others at LLNL. The mirror consisted of two large magnets arranged so they had strong fields within them, and a weaker, but connected, field between them. Plasma introduced in the area between the two magnets would "bounce back" from the stronger fields in the middle.
A.D. Sakharov's group constructed the first tokamaks. The most successful were the T-3 and its larger version T-4. T-4 was tested in 1968 in Novosibirsk, producing the first quasistationary fusion reaction.:90 When this was announced, the international community was skeptical. A British team was invited to see T-3, and confirmed the Soviet claims. A burst of activity followed as many planned devices were abandoned and tokamaks were introduced in their place—the C model stellarator, then under construction after many redesigns, was quickly converted to the Symmetrical Tokamak.
In his work with vacuum tubes, Philo Farnsworth observed that electric charge accumulated in the tube; this became known as the multipactor effect. In 1962, Farnsworth patented a design using a positive inner cage to concentrate plasma and fuse protons. During this time, Robert L. Hirsch joined Farnsworth Television labs and began work on what became the Farnsworth–Hirsch fusor. Hirsch patented the design in 1966 and published it in 1967.
Plasma temperatures of approximately 40 million degrees Celsius and 109 deuteron-deuteron fusion reactions per discharge were achieved at LANL with Scylla IV.
In 1968 the Soviets announced results from the T-3 tokamak, claiming temperatures an order of magnitude higher than any other device. A UK team, nicknamed "The Culham Five", confirmed the results. The results led many other teams to adopt the tokamak, including the Princeton group, which converted its stellarator.
1970s.
Princeton's conversion of the Model C stellarator to a tokamak produced results matching the Soviets. With an apparent solution to the magnetic bottle problem in-hand, plans begin for a larger machine to test scaling and methods to heat the plasma.
In 1972, John Nuckolls outlined the idea of fusion ignition, a fusion chain reaction. Hot helium made during fusion reheats the fuel and starts more reactions. Nuckolls's paper started a major development effort. LLNL built laser systems including Argus, Cyclops, Janus, the neodymium-doped glass (Nd:glass) laser Long Path, Shiva laser, and the 10 beam Nova in 1984. Nova would ultimately produce 120 kilojoules of infrared light during a nanosecond pulse.
The UK built the Central Laser Facility in 1976.
The "advanced tokamak" concept emerged, which included non-circular plasma, internal diverters and limiters, superconducting magnets, and operation in the so-called "H-mode" island of increased stability. Two other designs became prominent; the compact tokamak sited the magnets on the inside of the vacuum chamber, and the spherical tokamak with as small a cross section as possible.
In 1974 J.B. Taylor re-visited ZETA and noticed that after an experimental run ended, the plasma entered a short period of stability. This led to the reversed field pinch concept. On May 1, 1974, the KMS fusion company (founded by Kip Siegel) achieved the world's first laser induced fusion in a deuterium-tritium pellet.
The Princeton Large Torus (PLT), the follow-on to the Symmetrical Tokamak, surpassed the best Soviet machines and set temperature records that were above what was needed for a commercial reactor. Soon after, it received funding with the target of breakeven.
In the mid-1970s, Project PACER, carried out at LANL explored the possibility of exploding small hydrogen bombs (fusion bombs) inside an underground cavity.:25 As an energy source, the system was the only system that could work using the technology of the time. It required a large, continuous supply of nuclear bombs, however, with questionable economics.
In 1976, the two beam Argus laser became operational at LLNL. In 1977, the 20 beam Shiva laser there was completed, capable of delivering 10.2 kilojoules of infrared energy on target. At a price of $25 million and a size approaching that of a football field, Shiva was the first megalaser.
At a 1977 workshop at the Claremont Hotel in Berkeley, Dr. C. Martin Stickley, then Director of the Energy Research and Development Administration's Office of Inertial Fusion, claimed that "no showstoppers" lay on the road to fusion energy.
The DOE selected a Princeton design, the Tokamak Fusion Test Reactor (TFTR), with the challenge of running on deuterium-tritium fuel.
1980s.
In the German/US HIBALL study, Garching used the high repetition rate of the RF driver to serve four reactor chambers using liquid lithium inside the chamber cavity. In 1982 high-confinement mode (H-mode) was discovered in tokamaks.
Magnetic mirror.
The US funded a magnetic mirror program in the late 1970s and early 1980s. This program resulted in a series of magnetic mirror devices including: 2X,:273 Baseball I, Baseball II, the Tandem Mirror Experiment and upgrade, the Mirror Fusion Test Facility, and MFTF-B. These machines were built and tested at LLNL from the late 1960s to the mid-1980s. The final machine, MFTF, cost $372 million and was, at that time, the most expensive project in LLNL history. It opened on February 21, 1986, and immediately closed, allegedly to balance the federal budget.
Laser.
Laser fusion progress: in 1983, the NOVETTE laser was completed. The following December, the ten-beam NOVA laser was finished. Five years later, NOVA produced 120 kilojoules of infrared light during a nanosecond pulse.
Research focused on either fast delivery or beam smoothness. Both focused on increasing energy uniformity. One early problem was that the light in the infrared wavelength lost energy before hitting the fuel. Breakthroughs were made at LLE at University of Rochester. Rochester scientists used frequency-tripling crystals to transform infrared laser beams into ultraviolet beams.
Chirping.
In 1985, Donna Strickland and Gérard Mourou invented a method to amplify laser pulses by "chirping". This changed a single wavelength into a full spectrum. The system amplified the beam at each wavelength and then reversed the beam into one color. Chirp pulsed amplification became instrumental for NIF and the Omega EP system.
LANL constructed a series of laser facilities. They included Gemini (a two beam system), Helios (eight beams), Antares (24 beams) and Aurora (96 beams). The program ended in the early nineties with a cost on the order of one billion dollars.
In 1987, Akira Hasegawa noticed that in a dipolar magnetic field, fluctuations tended to compress the plasma without energy loss. This effect was noticed in data taken by Voyager 2, when it encountered Uranus. This observation became the basis for a fusion approach known as the levitated dipole.
In tokamaks, the Tore Supra was under construction from 1983 to 1988 in Cadarache, France. Its superconducting magnets permitted it to generate a strong permanent toroidal magnetic field. First plasma came in 1988.
In 1983, JET achieved first plasma. In 1985, the Japanese tokamak JT-60 produced its first plasmas. In 1988, the T-15, a Soviet tokamak, was completed, the first to use (helium-cooled) superconducting magnets.
Spherical tokamak.
In 1984, Martin Peng proposed an alternate arrangement of magnet coils that would greatly reduce the aspect ratio while avoiding the erosion issues of the compact tokamak: a spherical tokamak. Instead of wiring each magnet coil separately, he proposed using a single large conductor in the center, and wiring the magnets as half-rings off of this conductor. What was once a series of individual rings passing through the hole in the center of the reactor was reduced to a single post, allowing for aspect ratios as low as 1.2.:B247:225 The ST concept appeared to represent an enormous advance in tokamak design. The proposal came during a period when US fusion research budgets were dramatically smaller. ORNL was provided with funds to develop a suitable central column built out of a high-strength copper alloy called "Glidcop". However, they were unable to secure funding to build a demonstration machine.
Failing at ORNL, Peng began a worldwide effort to interest other teams in the concept and get a test machine built. One approach would be to convert a spheromak.:225 Peng's advocacy caught the interest of Derek Robinson, of the United Kingdom Atomic Energy Authority. Robinson gathered a team and secured on the order of 100,000 pounds to build an experimental machine, the Small Tight Aspect Ratio Tokamak, or START. Parts of the machine were recycled from earlier projects, while others were loaned from other labs, including a 40 keV neutral beam injector from ORNL. Construction began in 1990 and operation started in January 1991.:11 It achieved a record beta (plasma pressure compared to magnetic field pressure) of 40% using a neutral beam injector.
ITER.
The International Thermonuclear Experimental Reactor (ITER) coalition formed, involving EURATOM, Japan, the Soviet Union and the United States, and kicked off the conceptual design process.
1990s.
In 1991 JET's Preliminary Tritium Experiment achieved the world's first controlled release of fusion power.
In 1992, "Physics Today" published Robert McCory's outline of the current state of ICF, advocating for a national ignition facility. This was followed by a review article from John Lindl in 1995, making the same point. During this time various ICF subsystems were developed, including target manufacturing, cryogenic handling systems, new laser designs (notably the NIKE laser at NRL) and improved diagnostics including time of flight analyzers and Thomson scattering. This work was done at the NOVA laser system, General Atomics, Laser Mégajoule and the GEKKO XII system in Japan. Through this work and lobbying by groups like the fusion power associates and John Sethian at NRL, Congress authorized funding for the NIF project in the late nineties.
In 1992 the United States and the former republics of the Soviet Union stopped testing nuclear weapons.
In 1993 TFTR at PPPL experimented with 50% deuterium, 50% tritium, eventually reaching 10 megawatts.
In the early nineties, theory and experimental work regarding fusors and polywells was published. In response, Todd Rider at MIT developed general models of these devices, arguing that all plasma systems at thermodynamic equilibrium were fundamentally limited. In 1995, William Nevins published a criticism arguing that the particles inside fusors and polywells would acquire angular momentum, causing the dense core to degrade.
In 1995, the University of Wisconsin–Madison built a large fusor, known as HOMER. Dr George H. Miley at Illinois built a small fusor that produced neutrons using deuterium and discovered the "star mode" of fusor operation. At this time in Europe, an IEC device was developed as a commercial neutron source by Daimler-Chrysler and NSD Fusion.
The next year, Tore Supra reached a record plasma duration of two minutes with a current of almost 1 million amperes driven non-inductively by 2.3 MW of lower hybrid frequency waves (i.e. 280 MJ of injected and extracted energy), enabled by actively cooled plasma-facing components.
The upgraded Z-machine opened to the public in August 1998. Its key attributes were an 18 million ampere current and a discharge time of less than 100 nanoseconds. This generated a magnetic pulse inside a large oil tank, which struck a liner (an array of tungsten wires). Firing the Z-machine became a way to test high-energy, high-temperature (2 billion degrees) conditions.
In 1997, JET reached 16.1 MW (65% of heat to plasma), sustaining over 10 MW for over 0.5 sec. As of 2020 this remained the record output level. Four megawatts of alpha particle self-heating was achieved.
ITER was officially announced as part of a seven-party consortium (six countries and the EU). ITER was designed to produce ten times more fusion power than the input power. ITER was sited in Cadarache. The US withdrew from the project in 1999.
JT-60 produced a reversed shear plasma with the equivalent fusion amplification factor formula_0 of 1.25 - as of 2021 this remained the world record.
In the late nineties, a team at Columbia University and MIT developed the levitated dipole, a fusion device that consisted of a superconducting electromagnet, floating in a saucer shaped vacuum chamber. Plasma swirled around this donut and fused along the center axis.
In 1999 MAST replaced START.
2000s.
"Fast ignition" appeared in the late nineties, as part of a push by LLE to build the Omega EP system, which finished in 2008. Fast ignition showed dramatic power savings and moved ICF into the race for energy production. The HiPER experimental facility became dedicated to fast ignition.
In 2001 the United States, China and Republic of Korea joined ITER while Canada withdrew.
In April 2005, a UCLA team announced a way of producing fusion using a machine that "fits on a lab bench", using lithium tantalate to generate enough voltage to fuse deuterium. The process did not generate net power.
The next year, China's EAST test reactor was completed. This was the first tokamak to use superconducting magnets to generate both toroidal and poloidal fields.
In the early 2000s, LANL researchers claimed that an oscillating plasma could reach local thermodynamic equilibrium. This prompted the POPS and Penning trap designs.
In 2005 NIF fired its first bundle of eight beams, achieving the most powerful laser pulse to date - 152.8 kJ (infrared).
MIT researchers became interested in fusors for space propulsion, using fusors with multiple inner cages. Greg Piefer founded Phoenix Nuclear Labs and developed the fusor into a neutron source for medical isotope production. Robert Bussard began speaking openly about the polywell in 2006.
In March 2009, NIF became operational.
In the early 2000s privately backed fusion companies launched to develop commercial fusion power. Tri Alpha Energy, founded in 1998, began by exploring a field-reversed configuration approach. In 2002, Canadian company General Fusion began proof-of-concept experiments based on a hybrid magneto-inertial approach called Magnetized Target Fusion. Investors included Jeff Bezos (General Fusion) and Paul Allen (Tri Alpha Energy). Toward the end of the decade, Tokamak Energy started exploring spherical tokamak devices using reconnection.
2010s.
Private and public research accelerated in the 2010s.
Private projects.
In 2017, General Fusion developed its plasma injector technology and Tri Alpha Energy constructed and operated its C-2U device. In August 2014, Phoenix Nuclear Labs announced the sale of a high-yield neutron generator that could sustain 5×1011 deuterium fusion reactions per second over a 24-hour period.
In October 2014, Lockheed Martin's Skunk Works announced the development of a high beta fusion reactor, the Compact Fusion Reactor. Although the original concept was to build a 20-ton, container-sized unit, the team conceded in 2018 that the minimum scale would be 2,000 tons.
In January 2015, the polywell was presented at Microsoft Research. TAE Technologies announced that its Norman reactor had achieved plasma.
In 2017, Helion Energy's fifth-generation plasma machine went into operation, seeking to achieve a 20 T magnetic field and fusion temperatures. Tokamak Energy's ST40 generated "first plasma".
In 2018, Eni announced a $50 million investment in Commonwealth Fusion Systems, to attempt to commercialize ARC technology using a test reactor (SPARC) in collaboration with MIT. The reactor planned to employ yttrium barium copper oxide (YBCO) high-temperature superconducting magnet technology. Commonwealth Fusion Systems in 2021 tested successfully a 20 T magnet making it the strongest high-temperature superconducting magnet in the world. Following the 20 T magnet CFS raised $1.8 billion from private investors.
General Fusion began developing a 70% scale demo system. In 2018, TAE Technologies' reactor reached nearly 20 M°C.
Government and academic projects.
In 2010, NIF researchers conducted a series of "tuning" shots to determine the optimal target design and laser parameters for high-energy ignition experiments with fusion fuel. Net fuel energy gain was achieved in September 2013.
In April 2014, LLNL ended the Laser Inertial Fusion Energy (LIFE) program and directed their efforts towards NIF.
A 2012 paper demonstrated that a dense plasma focus had achieved temperatures of 1.8 billion degrees Celsius, sufficient for boron fusion, and that fusion reactions were occurring primarily within the contained plasmoid, necessary for net power.
In August 2014, MIT announced a tokamak it named the ARC fusion reactor, using rare-earth barium-copper oxide (REBCO) superconducting tapes to construct high-magnetic field coils that it claimed produced comparable magnetic field strength in a smaller configuration than other designs.
In October 2015, researchers at the Max Planck Institute of Plasma Physics completed building the largest stellarator to date, the Wendelstein 7-X. In December they produced the first helium plasma, and in February 2016 produced hydrogen plasma. In 2015, with plasma discharges lasting up to 30 minutes, Wendelstein 7-X attempted to demonstrate the essential stellarator attribute: continuous operation of a high-temperature plasma.
In 2014 EAST achieved a record confinement time of 30 seconds for plasma in the high-confinement mode (H-mode), thanks to improved heat dispersal. This was an order of magnitude improvement vs other reactors. In 2017 the reactor achieved a stable 101.2-second steady-state high confinement plasma, setting a world record in long-pulse H-mode operation.
In 2018 MIT scientists formulated a theoretical means to remove the excess heat from compact nuclear fusion reactors via larger and longer divertors.
In 2019 the United Kingdom announced a planned £200-million (US$248-million) investment to produce a design for a fusion facility named the Spherical Tokamak for Energy Production (STEP), by the early 2040s.
2020s.
In December 2020, the Chinese experimental nuclear fusion reactor HL-2M achieved its first plasma discharge. In May 2021, Experimental Advanced Superconducting Tokamak (EAST) announced a new world record for superheated plasma, sustaining a temperature of 120 M°C for 101 seconds and a peak of 160 M°C for 20 seconds. In December 2021 EAST set a new world record for high temperature (70 M°C) plasma of 1,056 seconds.
In 2020, Chevron Corporation announced an investment in start-up Zap Energy, co-founded by British entrepreneur and investor, Benj Conway, together with physicists Brian Nelson and Uri Shumlak from University of Washington. In 2021 the company raised $27.5 million in Series B funding led by Addition.
In 2021, the US DOE launched the INFUSE program, a public-private knowledge sharing initiative involving a PPPL, MIT Plasma Science and Fusion Center and Commonwealth Fusion Systems partnership, together with partnerships with TAE Technologies, Princeton Fusion Systems, and Tokamak Energy. In 2021, DOE's Fusion Energy Sciences Advisory Committee approved a strategic plan to guide fusion energy and plasma physics research that included a working power plant by 2040, similar to Canadian, Chinese, and U.K. efforts.
In January 2021, SuperOx announced the commercialization of a new superconducting wire, with more than 700 A/mm2 current capability.
TAE Technologies announced that its Norman device had sustained a temperature of about 60 million degrees C for 30 milliseconds, 8 and 10 times higher, respectively, than the company's previous devices. The duration was claimed to be limited by the power supply rather than the device.
In August 2021, the National Ignition Facility recorded a record-breaking 1.3 megajoules of energy released from fusion, the first example of the Lawson criterion being surpassed in a laboratory.
In February 2022, JET sustained 11 MW and a Q value of 0.33 for over 5 seconds, outputting 59.7 megajoules, using a mix of deuterium and tritium for fuel. In March 2022 it was announced that Tokamak Energy achieved a record plasma temperature of 100 million kelvins, inside a commercial compact tokamak.
In October 2022, the Korea Superconducting Tokamak Advanced Research (KSTAR) device reached a record plasma duration of 45 seconds, sustaining a high-temperature fusion plasma above 100 million degrees Celsius based on integrated real-time RMP control for ELM-less H-mode, i.e. the fast ion regulated enhancement (FIRE) mode, a machine learning algorithm, and 3D field optimization via an edge-localized RMP.
In December 2022, the NIF achieved the first scientific breakeven controlled fusion experiment, with an energy gain of 1.5.
In February 2024, the KSTAR tokamak set a new record (shot #34705) for the longest duration (102 seconds) of a magnetically confined plasma. The plasma was operated in the H-mode, with much better control of the error field than was possible previously. KSTAR also set a record (shot #34445) for the longest steady-state duration at a temperature of 100 million degrees Celsius (48 seconds, ELM-LESS FIRE mode).
References.
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Q_{eq}"
}
]
| https://en.wikipedia.org/wiki?curid=67952883 |
6795342 | Atiyah–Hirzebruch spectral sequence | In mathematics, the Atiyah–Hirzebruch spectral sequence is a spectral sequence for calculating generalized cohomology, introduced by Michael Atiyah and Friedrich Hirzebruch (1961) in the special case of topological K-theory. For a CW complex formula_0 and a generalized cohomology theory formula_1, it relates the generalized cohomology groups
formula_2
with 'ordinary' cohomology groups formula_3 with coefficients in the generalized cohomology of a point. More precisely, the formula_4 term of the spectral sequence is formula_5, and the spectral sequence converges conditionally to formula_6.
Atiyah and Hirzebruch pointed out a generalization of their spectral sequence that also generalizes the Serre spectral sequence, and reduces to it in the case where formula_7. It can be derived from an exact couple that gives the formula_8 page of the Serre spectral sequence, except with the ordinary cohomology groups replaced with formula_9.
In detail, assume formula_0 to be the total space of a Serre fibration with fibre formula_10 and base space formula_11. The filtration of formula_11 by its formula_12-skeletons formula_13 gives rise to a filtration of formula_0. There is a corresponding spectral sequence with formula_4 term
formula_14
and converging to the associated graded ring of the filtered ring
formula_15.
This is the Atiyah–Hirzebruch spectral sequence in the case where the fibre formula_10 is a point.
Examples.
Topological K-theory.
For example, the complex topological formula_16-theory of a point is
formula_17 where formula_18 is in degree formula_19
By definition, the terms on the formula_4-page of a finite CW-complex formula_0 look like
formula_20
Since the formula_16-theory of a point is
formula_21
we can always guarantee that
formula_22
This implies that the spectral sequence collapses at formula_4 for many spaces. This can be checked for formula_23, for algebraic curves, and for spaces whose non-zero cohomology is concentrated in even degrees. Therefore, it collapses for all (complex) even-dimensional smooth complete intersections in formula_23.
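A brief way to see this is the standard parity argument (sketched here): the differential on the r-th page has bidegree (r, 1−r), so for even r it maps an even row to an odd row, which vanishes; for odd r it preserves the parity of q but shifts p by the odd number r, so it also vanishes whenever the integral cohomology is concentrated in even degrees.

```latex
% Bidegree of the differentials in the cohomological Atiyah--Hirzebruch
% spectral sequence; with KU^{q}(pt)=0 for odd q, the parity bookkeeping
% above forces every d_r to vanish when H^{odd}(X;\mathbb{Z})=0.
\[
  d_r \colon E_r^{p,q} \longrightarrow E_r^{p+r,\,q-r+1},
  \qquad
  d_r = 0 \ \text{for all } r \ge 2
  \ \Longrightarrow\ E_2^{p,q} \cong E_\infty^{p,q}.
\]
```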
Cotangent bundle on a circle.
For example, consider the cotangent bundle of formula_24. This is a fiber bundle with fiber formula_25 so the formula_4-page reads as
formula_26
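One way to conclude the example, stated with the rational coefficients displayed in the table: every differential raises p by at least 2, while the cohomology of formula_24 vanishes above degree 1, so the sequence degenerates and the page shown is already the final one.

```latex
% Degeneration at E_2 for the circle; recall that the cotangent bundle
% deformation retracts onto S^1, consistent with the groups read off below.
\[
  KU^{0}(T^{*}S^{1})\otimes\mathbb{Q}\;\cong\;H^{0}(S^{1};\mathbb{Q})\;\cong\;\mathbb{Q},
  \qquad
  KU^{1}(T^{*}S^{1})\otimes\mathbb{Q}\;\cong\;H^{1}(S^{1};\mathbb{Q})\;\cong\;\mathbb{Q}.
\]
```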
Differentials.
The odd-dimensional differentials of the AHSS for complex topological K-theory can be readily computed. For formula_27 it is the Steenrod square formula_28 where we take it as the composition
formula_29
where formula_30 is reduction mod formula_19 and formula_31 is the Bockstein homomorphism (connecting morphism) from the short exact sequence
formula_32
Complete intersection 3-fold.
Consider a smooth complete intersection 3-fold formula_0 (such as a complete intersection Calabi-Yau 3-fold). If we look at the formula_4-page of the spectral sequence
formula_33
we can see immediately that the only potentially non-trivial differentials are
formula_34
It turns out that these differentials vanish in both cases, hence formula_35. In the first case, since formula_36 is trivial for formula_37, the first set of differentials is zero. The second set is trivial because formula_38 sends formula_39; the identification formula_40 then shows the differential is trivial.
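Reading off the degenerate page (an added summary; possible extension problems in the filtration are not addressed here):

```latex
% With E_2 = E_infinity, the filtration on K^*(X) has the E_infinity-terms
% as its graded pieces, sorted by the parity of the total degree.
\[
  \operatorname{gr}K^{0}(X)\;\cong\;H^{0}(X;\mathbb{Z})\oplus H^{2}(X;\mathbb{Z})\oplus H^{4}(X;\mathbb{Z})\oplus H^{6}(X;\mathbb{Z}),
\]
\[
  \operatorname{gr}K^{1}(X)\;\cong\;H^{1}(X;\mathbb{Z})\oplus H^{3}(X;\mathbb{Z})\oplus H^{5}(X;\mathbb{Z}).
\]
```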
Twisted K-theory.
The Atiyah–Hirzebruch spectral sequence can be used to compute twisted K-theory groups as well. In short, twisted K-theory is the group completion of the isomorphism classes of vector bundles defined by gluing data formula_41 where
formula_42
for some cohomology class formula_43. Then, the spectral sequence reads as
formula_44
but with different differentials. For example,
formula_45
On the formula_46-page the differential is
formula_47
Higher odd-dimensional differentials formula_48 are given by Massey products for twisted K-theory tensored by formula_25. So
formula_49
Note that if the underlying space is formal, meaning its rational homotopy type is determined by its rational cohomology, hence has vanishing Massey products, then the odd-dimensional differentials are zero. Pierre Deligne, Phillip Griffiths, John Morgan, and Dennis Sullivan proved this for all compact Kähler manifolds, hence formula_50 in this case. In particular, this includes all smooth projective varieties.
Twisted K-theory of 3-sphere.
The twisted K-theory for formula_51 can be readily computed. First of all, since formula_40 and formula_52, the differential on the formula_46-page is just the cup product with the class formula_53. For a non-trivial twist this map is injective on the zeroth cohomology, so only torsion in odd degrees survives, which gives the computation
formula_54
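Spelling this out: if the twist formula_53 equals n times a generator of the third cohomology, the differential acts on each non-zero entry as multiplication by n, since the Steenrod operation vanishes on degree-zero classes.

```latex
% The only differential in the twisted AHSS of the 3-sphere:
\[
  d_3 = Sq^{3}+\lambda\smile(-)\colon\;
  E_3^{0,2k}\cong H^{0}(S^{3};\mathbb{Z})\cong\mathbb{Z}
  \;\xrightarrow{\;\cdot\, n\;}\;
  E_3^{3,2k-2}\cong H^{3}(S^{3};\mathbb{Z})\cong\mathbb{Z},
\]
% so, for n nonzero, the kernel is 0 and the cokernel is Z/n, which
% survive to give the twisted K-groups recorded above.
```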
Rational bordism.
Recall that the rational bordism group formula_55 is isomorphic to the ring
formula_56
generated by the bordism classes of the (complex) even dimensional projective spaces formula_57 in degree formula_58. This gives a computationally tractable spectral sequence for computing the rational bordism groups.
Complex cobordism.
Recall that formula_59 where formula_60. Then, we can use this to compute the complex cobordism of a space formula_0 via the spectral sequence. We have the formula_4-page given by
formula_61 | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "E^\\bullet"
},
{
"math_id": 2,
"text": "E^i(X)"
},
{
"math_id": 3,
"text": "H^j"
},
{
"math_id": 4,
"text": "E_2"
},
{
"math_id": 5,
"text": "H^p(X;E^q(pt))"
},
{
"math_id": 6,
"text": "E^{p+q}(X)"
},
{
"math_id": 7,
"text": "E=H_{\\text{Sing}}"
},
{
"math_id": 8,
"text": "E_1"
},
{
"math_id": 9,
"text": "E"
},
{
"math_id": 10,
"text": "F"
},
{
"math_id": 11,
"text": "B"
},
{
"math_id": 12,
"text": "n"
},
{
"math_id": 13,
"text": "B_n"
},
{
"math_id": 14,
"text": "H^p(B; E^q(F))"
},
{
"math_id": 15,
"text": "E_\\infty^{p,q} \\Rightarrow E^{p+q}(X)"
},
{
"math_id": 16,
"text": "K"
},
{
"math_id": 17,
"text": "KU(*) = \\mathbb{Z}[x,x^{-1}]"
},
{
"math_id": 18,
"text": "x"
},
{
"math_id": 19,
"text": "2"
},
{
"math_id": 20,
"text": "E_2^{p,q}(X) = H^p(X;KU^q(pt))"
},
{
"math_id": 21,
"text": "\nK^q(pt) = \\begin{cases}\n\\mathbb{Z} & \\text{if q is even} \\\\\n0 & \\text{otherwise} \n\\end{cases}\n"
},
{
"math_id": 22,
"text": "E_2^{p,2k+1}(X) = 0"
},
{
"math_id": 23,
"text": "\\mathbb{CP}^n"
},
{
"math_id": 24,
"text": "S^1"
},
{
"math_id": 25,
"text": "\\mathbb{R}"
},
{
"math_id": 26,
"text": "\n\\begin{array}{c|cc}\n\\vdots &\\vdots & \\vdots \\\\\n2 & H^0(S^1;\\mathbb{Q}) & H^1(S^1;\\mathbb{Q}) \\\\\n1 & 0 & 0 \\\\\n0 & H^0(S^1;\\mathbb{Q}) & H^1(S^1;\\mathbb{Q}) \\\\\n-1 & 0 & 0 \\\\\n-2 & H^0(S^1;\\mathbb{Q}) & H^1(S^1;\\mathbb{Q}) \\\\\n\\vdots &\\vdots & \\vdots \\\\\n\\hline & 0 & 1\n\\end{array}\n"
},
{
"math_id": 27,
"text": "d_3"
},
{
"math_id": 28,
"text": "Sq^3"
},
{
"math_id": 29,
"text": " \\beta \\circ Sq^2 \\circ r"
},
{
"math_id": 30,
"text": "r"
},
{
"math_id": 31,
"text": "\\beta"
},
{
"math_id": 32,
"text": "0 \\to \\mathbb{Z} \\to \\mathbb{Z} \\to \\mathbb{Z}/2 \\to 0"
},
{
"math_id": 33,
"text": "\n\\begin{array}{c|ccccc}\n\\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\\n2 & H^0(X; \\mathbb{Z}) & 0 & H^2(X;\\mathbb{Z}) & H^3(X;\\mathbb{Z}) & H^4(X;\\mathbb{Z}) & 0 & H^6(X;\\mathbb{Z}) \\\\\n1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & H^0(X; \\mathbb{Z}) & 0 & H^2(X;\\mathbb{Z}) & H^3(X;\\mathbb{Z}) & H^4(X;\\mathbb{Z}) & 0 & H^6(X;\\mathbb{Z})\\\\\n-1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n-2 & H^0(X; \\mathbb{Z}) & 0 & H^2(X;\\mathbb{Z}) & H^3(X;\\mathbb{Z}) & H^4(X;\\mathbb{Z}) & 0 & H^6(X;\\mathbb{Z})\\\\\n\\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\\n\\hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 \n\\end{array}\n"
},
{
"math_id": 34,
"text": "\n\\begin{align}\nd_3:E_3^{0,2k} \\to E_3^{3,2k-2} \\\\\nd_3:E_3^{3,2k} \\to E_3^{6,2k-2}\n\\end{align}\n"
},
{
"math_id": 35,
"text": "E_2 = E_\\infty"
},
{
"math_id": 36,
"text": "Sq^k:H^i(X;\\mathbb{Z}/2) \\to H^{k+i}(X;\\mathbb{Z}/2)"
},
{
"math_id": 37,
"text": "k > i"
},
{
"math_id": 38,
"text": "Sq^2"
},
{
"math_id": 39,
"text": "H^3(X;\\mathbb{Z}/2) \\to H^5(X) = 0"
},
{
"math_id": 40,
"text": "Sq^3 = \\beta \\circ Sq^2 \\circ r"
},
{
"math_id": 41,
"text": "(U_{ij},g_{ij})"
},
{
"math_id": 42,
"text": " g_{ij}g_{jk}g_{ki} = \\lambda_{ijk} "
},
{
"math_id": 43,
"text": "\\lambda \\in H^3(X,\\mathbb{Z})"
},
{
"math_id": 44,
"text": " E_2^{p,q} = H^p(X;KU^q(*)) \\Rightarrow KU^{p+q}_\\lambda(X)"
},
{
"math_id": 45,
"text": "\nE_3^{p,q} = E_2^{p,q} = \\begin{array}{c|cccc}\n\\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\\n2 & H^0(S^3;\\mathbb{Z}) & 0 & 0 & H^3(S^3;\\mathbb{Z}) \\\\\n1 & 0 & 0 & 0 & 0 \\\\\n0 & H^0(S^3;\\mathbb{Z}) & 0 & 0 & H^3(S^3;\\mathbb{Z}) \\\\\n-1 & 0 & 0 & 0 & 0 \\\\\n-2 & H^0(S^3;\\mathbb{Z}) & 0 & 0 & H^3(S^3;\\mathbb{Z}) \\\\\n\\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\\n\\hline& 0 & 1 & 2 & 3\n\\end{array}\n"
},
{
"math_id": 46,
"text": "E_3"
},
{
"math_id": 47,
"text": " d_3 = Sq^3 + \\lambda "
},
{
"math_id": 48,
"text": "d_{2k+1}"
},
{
"math_id": 49,
"text": "\n\\begin{align}\nd_5 &= \\{ \\lambda, \\lambda, - \\} \\\\\nd_7 &= \\{ \\lambda, \\lambda, \\lambda, - \\}\n\\end{align}\n"
},
{
"math_id": 50,
"text": "E_\\infty = E_4"
},
{
"math_id": 51,
"text": "S^3"
},
{
"math_id": 52,
"text": "H^2(S^3) = 0"
},
{
"math_id": 53,
"text": "\\lambda"
},
{
"math_id": 54,
"text": " KU_\\lambda^k = \\begin{cases}\n\\mathbb{Z} & k \\text{ is even} \\\\\n\\mathbb{Z}/\\lambda & k \\text{ is odd}\n\\end{cases}\n"
},
{
"math_id": 55,
"text": "\\Omega_*^{\\text{SO}}\\otimes \\mathbb{Q}"
},
{
"math_id": 56,
"text": " \\mathbb{Q}[[\\mathbb{CP}^0], [\\mathbb{CP}^2], [\\mathbb{CP}^4],[\\mathbb{CP}^6],\\ldots]"
},
{
"math_id": 57,
"text": "[\\mathbb{CP}^{2k}]"
},
{
"math_id": 58,
"text": "4k"
},
{
"math_id": 59,
"text": "MU^*(pt) = \\mathbb{Z}[x_1,x_2,\\ldots]"
},
{
"math_id": 60,
"text": "x_i \\in \\pi_{2i}(MU)"
},
{
"math_id": 61,
"text": "E_2^{p,q} = H^p(X;MU^q(pt))"
}
]
| https://en.wikipedia.org/wiki?curid=6795342 |
67954458 | Beryllium oxalate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Beryllium oxalate is an inorganic compound, a salt of beryllium metal and oxalic acid with the chemical formula C2BeO4. It forms colorless crystals, dissolves in water, and also forms crystalline hydrates. The compound is used to prepare ultra-pure beryllium oxide by thermal decomposition.
Synthesis.
The action of oxalic acid on beryllium hydroxide:
formula_0
Chemical properties.
Crystalline hydrates lose water when heated:
formula_1
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{Be(OH)_2 + H_2C_2O_4 \\ \\xrightarrow{}\\ BeC_2O_4 + 2H_2O }"
},
{
"math_id": 1,
"text": "\\mathsf{BeC_2O_4\\cdot 3H_2O \\ \\xrightarrow[-2H_2O]{100^oC}\\ BeC_2O_4\\cdot H_2O \\ \\xrightarrow[-H_2O]{220^oC}\\ BeC_2O_4}"
}
]
| https://en.wikipedia.org/wiki?curid=67954458 |
679596 | Continuous wavelet transform | Integral transform
In mathematics, the continuous wavelet transform (CWT) is a formal (i.e., non-numerical) tool that provides an overcomplete representation of a signal by letting the translation and scale parameters of the wavelets vary continuously.
Definition.
The continuous wavelet transform of a function formula_0 at a scale formula_1 and translational value formula_2 is expressed by the following integral
formula_3
where formula_4 is a continuous function in both the time domain and the frequency domain called the mother wavelet and the overline represents operation of complex conjugate. The main purpose of the mother wavelet is to provide a source function to generate the daughter wavelets which are simply the translated and scaled versions of the mother wavelet. To recover the original signal formula_0, the first inverse continuous wavelet transform can be exploited.
formula_5
formula_6 is the dual function of formula_4 and
formula_7
is admissible constant, where hat means Fourier transform operator. Sometimes, formula_8, then the admissible constant becomes
formula_9
Traditionally, this constant is called the wavelet admissible constant. A wavelet whose admissible constant satisfies
formula_10
is called an admissible wavelet. To recover the original signal formula_0, the second inverse continuous wavelet transform can be exploited.
formula_11
This inverse transform suggests that a wavelet should be defined as
formula_12
where formula_13 is a window. A wavelet defined in this way can be called an analyzing wavelet, because it admits time-frequency analysis. An analyzing wavelet does not need to be admissible.
Scale factor.
The scale factor formula_14 either dilates or compresses a signal. When the scale factor is relatively low, the signal is more contracted, which in turn results in a more detailed resulting graph. The drawback is that a low scale factor does not cover the entire duration of the signal. On the other hand, when the scale factor is high, the signal is stretched out, which means that the resulting graph will be presented in less detail; nevertheless, it usually covers the entire duration of the signal.
Continuous wavelet transform properties.
By definition, the continuous wavelet transform is a convolution of the input data sequence with a set of functions generated by the mother wavelet. The convolution can be computed by using a fast Fourier transform (FFT) algorithm. Normally, the output formula_15 is a real-valued function except when the mother wavelet is complex. A complex mother wavelet will convert the continuous wavelet transform to a complex-valued function. The power spectrum of the continuous wavelet transform can be represented by formula_16.
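As a rough illustration of the FFT-based evaluation mentioned above, the following Python sketch computes the transform with a complex Morlet mother wavelet and the corresponding power spectrum (scalogram). The choice of wavelet and its centre frequency, the sampling grid, and the circular (wrap-around) handling of the convolution are assumptions made for the example, not part of the definition.

```python
import numpy as np

def morlet(t, omega0=6.0):
    """Complex Morlet mother wavelet (approximately admissible for omega0 around 6)."""
    return np.pi ** -0.25 * np.exp(1j * omega0 * t) * np.exp(-t ** 2 / 2)

def cwt(x, dt, scales, omega0=6.0):
    """Continuous wavelet transform X_w(a, b) on a grid of scales.

    For each scale a, the transform is the correlation of x with the scaled,
    conjugated wavelet, evaluated here in the Fourier domain with the FFT
    (circular boundary effects are ignored in this sketch).
    """
    n = len(x)
    t = (np.arange(n) - n // 2) * dt              # wavelet time axis, centred at zero
    X = np.fft.fft(x)
    out = np.empty((len(scales), n), dtype=complex)
    for i, a in enumerate(scales):
        psi = morlet(t / a, omega0) / np.sqrt(abs(a))          # daughter wavelet
        corr = np.fft.ifft(X * np.conj(np.fft.fft(psi)))       # correlation theorem
        out[i] = np.fft.fftshift(corr) * dt                    # re-centre and scale by dt
    return out

# Example: scalogram |X_w(a, b)|^2 / a of a chirp-like test signal.
dt = 1e-3
time = np.arange(0, 1, dt)
signal = np.sin(2 * np.pi * (10 + 40 * time) * time)
scales = np.geomspace(0.002, 0.1, 64)
W = cwt(signal, dt, scales)
power = np.abs(W) ** 2 / scales[:, None]
```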
Applications of the wavelet transform.
One of the most popular applications of wavelet transform is image compression. The advantage of using wavelet-based coding in image compression is that it provides significant improvements in picture quality at higher compression ratios over conventional techniques. Since wavelet transform has the ability to decompose complex information and patterns into elementary forms, it is commonly used in acoustics processing and pattern recognition, but it has been also proposed as an instantaneous frequency estimator. Moreover, wavelet transforms can be applied to the following scientific research areas: edge and corner detection, partial differential equation solving, transient detection, filter design, electrocardiogram (ECG) analysis, texture analysis, business information analysis and gait analysis. Wavelet transforms can also be used in Electroencephalography (EEG) data analysis to identify epileptic spikes resulting from epilepsy. Wavelet transform has been also successfully used for the interpretation of time series of landslides and land subsidence, and for calculating the changing periodicities of epidemics.
The continuous wavelet transform (CWT) is very efficient in determining the damping ratio of oscillating signals (e.g. identification of damping in dynamic systems). The CWT is also very resistant to noise in the signal.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x(t)"
},
{
"math_id": 1,
"text": "a\\in\\mathbb{R^{+*}}"
},
{
"math_id": 2,
"text": "b\\in\\mathbb{R}"
},
{
"math_id": 3,
"text": "X_w(a,b)=\\frac{1}{|a|^{1/2}} \\int_{-\\infty}^\\infty x(t)\\overline\\psi\\left(\\frac{t-b}{a}\\right)\\,\\mathrm{d}t"
},
{
"math_id": 4,
"text": "\\psi(t)"
},
{
"math_id": 5,
"text": "x(t)=C_\\psi^{-1}\\int_{0}^{\\infty}\\int_{-\\infty}^{\\infty} X_w(a,b)\\frac{1}{|a|^{1/2}}\\tilde\\psi\\left(\\frac{t-b}{a}\\right)\\, \\mathrm{d}b\\ \\frac{\\mathrm{d}a}{a^2}"
},
{
"math_id": 6,
"text": "\\tilde\\psi(t)"
},
{
"math_id": 7,
"text": "C_\\psi=\\int_{-\\infty}^{\\infty}\\frac{\\overline\\hat{\\psi}(\\omega)\\hat{\\tilde\\psi}(\\omega)}{|\\omega|}\\, \\mathrm{d}\\omega"
},
{
"math_id": 8,
"text": "\\tilde\\psi(t)=\\psi(t)"
},
{
"math_id": 9,
"text": "C_\\psi = \\int_{-\\infty}^{+\\infty}\n \\frac{\\left| \\hat{\\psi}(\\omega) \\right|^2}{\\left| \\omega \\right|} \\, \\mathrm{d}\\omega\n"
},
{
"math_id": 10,
"text": "0<C_\\psi <\\infty"
},
{
"math_id": 11,
"text": "x(t)=\\frac{1}{2\\pi\\overline\\hat{\\psi}(1)}\\int_{0}^{\\infty}\\int_{-\\infty}^{\\infty} \\frac{1}{a^2}X_w(a,b)\\exp\\left(i\\frac{t-b}{a}\\right)\\,\\mathrm{d}b\\ \\mathrm{d}a"
},
{
"math_id": 12,
"text": "\\psi(t)=w(t)\\exp(it) "
},
{
"math_id": 13,
"text": "w(t)"
},
{
"math_id": 14,
"text": "a"
},
{
"math_id": 15,
"text": "X_w(a,b)"
},
{
"math_id": 16,
"text": "\\frac{1}{a}\\cdot|X_w(a,b)|^2"
}
]
| https://en.wikipedia.org/wiki?curid=679596 |
6796246 | Sulfotransferase | Class of enzymes which transfer a sulfo group (–SO3) between molecules
In biochemistry, sulfotransferases (SULTs) are transferase enzymes that catalyze the transfer of a sulfo group () from a donor molecule to an acceptor alcohol () or amine (). The most common sulfo group donor is 3'-phosphoadenosine-5'-phosphosulfate (PAPS). In the case of alcohol as acceptor, the product is a sulfate ():
formula_0
whereas an amine leads to a sulfamate ():
formula_1
Both reactive groups for a sulfonation via sulfotransferases may be part of a protein, lipid, carbohydrate or steroid.
Examples.
The following are examples of sulfotransferases:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ce{R-SO3-} \\ + \\ \\ce{R'-OH} \\quad \\xrightarrow[\\text{ SULT }]{} \\quad \\ce{R-H} \\ + \\ \\ce{R'-OSO3-}"
},
{
"math_id": 1,
"text": "\\ce{R-SO3-} \\ + \\ \\ce{R'-NH2} \\quad \\xrightarrow[\\text{ SULT }]{} \\quad \\ce{R-H} \\ + \\ \\ce{R'-NHSO3-}"
}
]
| https://en.wikipedia.org/wiki?curid=6796246 |
67962485 | 1 Kings 13 | 1 Kings, chapter 13
1 Kings 13 is the thirteenth chapter of the Books of Kings in the Hebrew Bible or the First Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. 1 Kings 12:1 to 16:14 documents the consolidation of the kingdoms of northern Israel and Judah: this chapter focusses on the reign of Jeroboam in the northern kingdom.
Text.
This chapter was originally written in the Hebrew language and since the 16th century is divided into 34 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Jeroboam’s hand withers (13:1–10).
Jeroboam's 'illegitimate cult activities' at the 'illegitimate holy site' of Bethel were exposed by a prophet from Judah who was loyal to YHWH and demonstrated that the miraculous power of God was superior to a king. The conflict in Bethel may anticipate the story of the prophet Amos' appearance in Bethel (cf. Amos 7:10–17). The anonymous prophet foretold the end of Jeroboam's dynasty and the northern kingdom, and that only the 'house of David' would remain to take action against the high places in Bethel (cf. Josiah's actions in 2 Kings 22:1–23:10). The broken altar provided a sign that the prophecy was true, whereas Jeroboam's withered hand showed the impotence of the king against the prophetic word.
"And the man cried against the altar by the word of the Lord and said, "O altar, altar, thus says the Lord: 'Behold, a son shall be born to the house of David, Josiah by name, and he shall sacrifice on you the priests of the high places who make offerings on you, and human bones shall be burned on you.'""
The old prophet and the man of God (13:11–34).
The second narrative of the chapter deals with the meeting between two prophets to address the question "who can decide who is right when two prophets speak, claiming God's authority, yet contradict each other?" (cf. 1 Kings 22 and Jeremiah 27–28). In the story, the 'true' prophet allowed himself to be deceived by the 'false' prophet and paid for it with his life, so that his death convinced skeptics of the 'true' prophet's relationship to God. As in the previous passage, the focus of the story was 'the holy site in Bethel and its altar', both of which were contaminated by 'Jeroboam's sin': the prophet's word immediately destroyed the altar (verses 3, 5) and the holy site would be abolished 300 years later by King Josiah (2 Kings 23:15–18), while the common grave of both prophets was preserved. Another point of the story is that God requires 'complete and radical obedience' to what he has commanded, not to be swayed by another's claim that God had spoken through the other person.
"31 And after he had buried him, he said to his sons, "When I am dead, bury me in the grave in which the man of God is buried; lay my bones beside his bones."
"32 For the saying that he cried out by the word of the Lord against the altar in Bethel and against all the houses of the high places that are in the cities of Samaria shall surely come to pass.""
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67962485 |
67964130 | Polarization (Lie algebra) | In representation theory, polarization is the maximal totally isotropic subspace of a certain skew-symmetric bilinear form on a Lie algebra. The notion of polarization plays an important role in construction of irreducible unitary representations of some classes of Lie groups by means of the orbit method as well as in harmonic analysis on Lie groups and mathematical physics.
Definition.
Let formula_0 be a Lie group, formula_1 the corresponding Lie algebra and formula_2 its dual. Let formula_3 denote the value of the linear form (covector) formula_4 on a vector formula_5. The subalgebra formula_6 of the algebra formula_7 is called subordinate to formula_4 if the condition
formula_8,
or, alternatively,
formula_9
is satisfied. Further, let the group formula_0 act on the space formula_2 via the coadjoint representation formula_10. Let formula_11 be the orbit of this action passing through the point formula_12 and let formula_13 be the Lie algebra of the stabilizer formula_14 of the point formula_12. A subalgebra formula_15 subordinate to formula_12 is called a polarization of the algebra formula_1 with respect to formula_12, or, more concisely, a polarization of the covector formula_12, if it has the maximal possible dimension, namely
formula_16.
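A standard minimal example: let $\mathfrak{g}$ be the three-dimensional Heisenberg algebra with basis $X$, $Y$, $Z$ and the single non-vanishing bracket

$[X, Y] = Z,$

and let $f$ be the covector with $\langle f, Z\rangle = 1$ and $\langle f, X\rangle = \langle f, Y\rangle = 0$. The stabilizer of $f$ under the coadjoint action is the centre $\mathbb{R}Z$, so a polarization must have dimension $\tfrac{1}{2}(3+1)=2$. The abelian subalgebra $\mathfrak{h} = \operatorname{span}\{X, Z\}$ satisfies $\langle f,[\mathfrak{h},\mathfrak{h}]\rangle = 0$, hence is subordinate to $f$, and it has the required dimension, so it is a polarization of $f$ (as is $\operatorname{span}\{Y, Z\}$).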
Pukanszky condition.
The following condition was obtained by L. Pukanszky:
Let formula_6 be the polarization of algebra formula_1 with respect to covector formula_12 and formula_17 be its annihilator: formula_18. The polarization formula_6 is said to satisfy the Pukanszky condition if
formula_19
L. Pukanszky has shown that this condition guarantees the applicability of Kirillov's orbit method, initially constructed for nilpotent groups, to the more general case of solvable groups as well.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "\\mathfrak{g}"
},
{
"math_id": 2,
"text": "\\mathfrak{g}^*"
},
{
"math_id": 3,
"text": "\\langle f,\\,X\\rangle"
},
{
"math_id": 4,
"text": "f\\in\\mathfrak{g}^*"
},
{
"math_id": 5,
"text": "X\\in\\mathfrak{g}"
},
{
"math_id": 6,
"text": "\\mathfrak{h}"
},
{
"math_id": 7,
"text": "\\mathfrak g"
},
{
"math_id": 8,
"text": "\\forall X, Y\\in\\mathfrak{h}\\ \\langle f,\\,[X,\\,Y]\\rangle = 0"
},
{
"math_id": 9,
"text": "\\langle f,\\,[\\mathfrak{h},\\,\\mathfrak{h}]\\rangle = 0"
},
{
"math_id": 10,
"text": "\\mathrm{Ad}^*"
},
{
"math_id": 11,
"text": "\\mathcal{O}_f"
},
{
"math_id": 12,
"text": "f"
},
{
"math_id": 13,
"text": "\\mathfrak{g}^f"
},
{
"math_id": 14,
"text": "\\mathrm{Stab}(f)"
},
{
"math_id": 15,
"text": "\\mathfrak{h}\\subset\\mathfrak{g}"
},
{
"math_id": 16,
"text": "\\dim\\mathfrak{h} = \\frac{1}{2}\\left(\\dim\\,\\mathfrak{g} + \\dim\\,\\mathfrak{g}^f\\right) = \\dim\\,\\mathfrak{g} - \\frac{1}{2}\\dim\\,\\mathcal{O}_f"
},
{
"math_id": 17,
"text": "\\mathfrak{h}^\\perp"
},
{
"math_id": 18,
"text": "\\mathfrak{h}^\\perp := \\{\\lambda\\in\\mathfrak{g}^*|\\langle\\lambda,\\,\\mathfrak{h}\\rangle = 0\\}"
},
{
"math_id": 19,
"text": "f + \\mathfrak{h}^\\perp\\in\\mathcal{O}_f."
},
{
"math_id": 20,
"text": "\\langle f,\\,[\\cdot,\\,\\cdot]\\rangle"
},
{
"math_id": 21,
"text": "(\\mathfrak{g},\\,f)"
},
{
"math_id": 22,
"text": "\\mathrm{Ad}_g\\mathfrak{h}"
},
{
"math_id": 23,
"text": "\\mathrm{Ad}^*_g f"
},
{
"math_id": 24,
"text": "\\mathcal{O}"
},
{
"math_id": 25,
"text": "f\\in\\mathcal{O}"
}
]
| https://en.wikipedia.org/wiki?curid=67964130 |
67966219 | Tungsten(III) iodide | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Tungsten(III) iodide or tungsten triiodide is a chemical compound of tungsten and iodine with the formula WI3.
Preparation.
Tungsten(III) iodide can be prepared by reducing tungsten hexacarbonyl with iodine.
formula_0
Properties.
Tungsten(III) iodide is a black solid that releases iodine at room temperature, and is less stable than molybdenum(III) iodide. It is soluble in acetone and nitrobenzene, and slightly soluble in chloroform.
It decomposes to form tungsten(II) iodide:
formula_1
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{2 \\ W(CO)_6 + 3 \\ I_2 \\longrightarrow 2 \\ WI_3 + 12 \\ CO}"
},
{
"math_id": 1,
"text": "\\mathrm{ 6 WI_3 \\longrightarrow [W_6I_8]I_4+3 I_2}"
}
]
| https://en.wikipedia.org/wiki?curid=67966219 |
67967371 | Kovacs effect | In statistical mechanics and condensed matter physics, the Kovacs effect is a kind of memory effect in glassy systems below the glass-transition temperature. A.J. Kovacs observed that a system’s state out of equilibrium is defined not only by its macro thermodynamical variables, but also by the inner parameters of the system. In the original effect, in response to a temperature change, under constant pressure, the isobaric volume and free energy of the system experienced a recovery characterized by non-monotonic departure from equilibrium, whereas all other thermodynamical variables were in their equilibrium values. It is considered a memory effect since the relaxation dynamics of the system depend on its thermal and mechanical history.
The effect was discovered by Kovacs in the 1960s in polyvinyl acetate. Since then, the Kovacs effect has been established as a very general phenomenon that comes about in a large variety of systems: model glasses, tapped dense granular matter, spin-glasses, molecular liquids, granular gases, active matter, disordered mechanical systems, protein molecules, and more.
The effect in Kovacs’ experiments.
Kovacs’ experimental procedure on polyvinyl acetate consisted of two main stages. In the first step, the sample is instantaneously quenched from a high initial temperature formula_0 to a low reference temperature formula_1, under constant pressure. The time-dependent volume of the system at formula_1, formula_2, is recorded until the time formula_3 when the system is considered to be at equilibrium. The volume at formula_3 is defined as the equilibrium volume of the system at temperature formula_1:
formula_4
In the second step, the sample is quenched again from formula_0 to a temperature formula_5 that is lower than formula_1, so that formula_6. But now, the system is held at temperature formula_5 only until the time formula_7 when its volume reaches the equilibrium value of formula_1, meaning formula_8.
Then, the temperature is raised instantaneously to formula_1, so both the temperature and the volume agree with the same equilibrium state. Naively, one expects that nothing should happen when the system is at formula_9 and formula_10. But instead, the volume of the system first increases and then relaxes back to formula_11, while the temperature is held constant at formula_1. This non-monotonic behavior in time of the volume formula_12 after the jump in the temperature can be simply captured by:
formula_13
where formula_14 and formula_15. formula_16 is also referred to as the “Kovacs hump”. Kovacs also found that the hump displayed some general features: formula_14, with only one maximum of height formula_17 at a certain time formula_18; as the temperature formula_5 is lowered, the hump becomes larger (formula_17 increases) and moves to shorter times (formula_19 decreases).
In the subsequent studies of the Kovacs hump in different systems, a similar protocol with two jumps in the temperature has been employed. The associated time evolution of a relevant physical quantity formula_20, often the energy, is monitored and displays the Kovacs hump. The physical relevance of this behavior is the same as in the Kovacs experiment: it shows that formula_20 does not completely characterize the dynamical state of the system, and the necessity of incorporating additional variables to have the whole picture.
The Kovacs hump described above has been rationalized by employing linear response theory for molecular systems, in which the initial and final states are equilibrium ones. Therein, the "direct" relaxation function (with only one temperature jump, instead of two) is a superposition of positive exponentially decaying modes, as a consequence of the fluctuation-dissipation theorem. Linear response makes it possible to write the Kovacs hump in terms of the direct relaxation function. Specifically, the positivity of the all the modes in the direct relaxation function ensures the "normal" character of the hump, i.e. the fact that formula_21.
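The superposition-of-modes picture can be illustrated with a deliberately simple toy model, which is not taken from the experiments or references discussed here: an observable that is the sum of two exponentially relaxing modes with temperature-dependent equilibrium values. Running the two-jump protocol on it produces a hump even though the observable starts the second stage exactly at its equilibrium value; all parameter values below are illustrative assumptions.

```python
import numpy as np

# Toy linear two-mode model of the Kovacs protocol (illustrative parameters).
# The observable V = x1 + x2 relaxes as a sum of exponential modes whose
# equilibrium values at temperature T are a_i * T.

tau = np.array([1.0, 10.0])       # relaxation times of the two modes
a = np.array([0.5, 0.5])          # equilibrium value of mode i at temperature T is a[i] * T
T0, Tr, T1 = 3.0, 1.0, 0.2        # initial, reference and lower quench temperatures
dt = 0.01

def step(x, T):
    """One Euler step of dx_i/dt = -(x_i - a_i * T) / tau_i."""
    return x + dt * (-(x - a * T) / tau)

# Stage 1: equilibrate at T0, quench to T1 and wait until the observable
# V = sum(x) first reaches its equilibrium value at the reference temperature.
x = a * T0
V_eq_r = float(np.sum(a * Tr))
while np.sum(x) > V_eq_r:
    x = step(x, T1)

# Stage 2: jump the temperature to Tr. Although V already equals V_eq(Tr),
# the individual modes do not, so V(t) overshoots (the hump) and relaxes back.
V = []
for _ in range(int(60.0 / dt)):
    x = step(x, Tr)
    V.append(float(np.sum(x)))
V = np.array(V)
print(f"maximum of the hump: {V.max() - V_eq_r:.4f} at t = {(V.argmax() + 1) * dt:.2f}")
```

In stage two the fast mode is below and the slow mode is above their reference-temperature equilibrium values, so their sum first overshoots and then relaxes back, reproducing a single maximum followed by a return to the equilibrium value.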
Recently, analogous experiments have been proposed for "athermal" systems, like granular systems or active matter, with the proper reinterpretation of the variables. For instance, in granular gases the relevant physical property formula_20 is still the energy—although one usually employs the terminology "granular temperature" for the kinetic energy in this context—but it is the intensity of the external driving formula_22 that plays the role of the temperature. The emergence of Kovacs-like humps highlights the relevance of non-Gaussianities to describe the physical state of granular gases.
"Anomalous" Kovacs humps have been reported in athermal systems, i.e. formula_23, i.e. a minimum is observed instead of a maximum. Although the linear response connection between the Kovacs hump and the direct relaxation function can be extended to athermal systems, not all the modes are positive definite—the standard version of the fluctuation-dissipation theorem does not apply. This is the key that facilitates the emergence of anomalous behavior.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T_0"
},
{
"math_id": 1,
"text": "T_r"
},
{
"math_id": 2,
"text": "V(t)|_{T_r} "
},
{
"math_id": 3,
"text": "t_{eq}"
},
{
"math_id": 4,
"text": "V(t_{eq})|_{T_r} \\equiv V_{eq}(T_r) "
},
{
"math_id": 5,
"text": "T_1"
},
{
"math_id": 6,
"text": "T_0>T_r>T_1"
},
{
"math_id": 7,
"text": "t_1"
},
{
"math_id": 8,
"text": "V(t_1)|_{T_1}=V_{eq}(T_r) "
},
{
"math_id": 9,
"text": "V=V_{eq}(T_r) "
},
{
"math_id": 10,
"text": "T=T_r "
},
{
"math_id": 11,
"text": "V_{eq}(T_r) "
},
{
"math_id": 12,
"text": "V(t) "
},
{
"math_id": 13,
"text": "V(t)=V_{eq}(T_r)+\\Delta V "
},
{
"math_id": 14,
"text": "\\Delta V \\geq 0"
},
{
"math_id": 15,
"text": "\\Delta V(t=t_1)=0, \\Delta V(t\\rightarrow\\infty)=0"
},
{
"math_id": 16,
"text": "\\Delta V "
},
{
"math_id": 17,
"text": "\\Delta V_M "
},
{
"math_id": 18,
"text": "t_M "
},
{
"math_id": 19,
"text": "t_M"
},
{
"math_id": 20,
"text": " P "
},
{
"math_id": 21,
"text": " \\Delta P \\geq 0 "
},
{
"math_id": 22,
"text": "\\xi"
},
{
"math_id": 23,
"text": "\\Delta P\\leq 0"
}
]
| https://en.wikipedia.org/wiki?curid=67967371 |
6796750 | Werckmeister temperament | Tuning system described by Andreas Werckmeister
Werckmeister temperaments are the tuning systems described by Andreas Werckmeister in his writings. The tuning systems are numbered in two different ways: the first refers to the order in which they were presented as "good temperaments" in Werckmeister's 1691 treatise, the second to their labelling on his monochord. The monochord labels start from III since just intonation is labelled I and quarter-comma meantone is labelled II. The temperament commonly known as "Werckmeister III" is referred to in this article as "Werckmeister I (III)".
The tunings I (III), II (IV) and III (V) were presented graphically by a cycle of fifths and a list of major thirds, giving the temperament of each in fractions of a comma. Werckmeister used the organbuilder's notation of ^ for a downwards tempered or narrowed interval and v for an upward tempered or widened one. (This appears counterintuitive - it is based on the use of a conical tuning tool which would reshape the ends of the pipes.) A pure fifth is simply a dash. Werckmeister was not explicit about whether the syntonic comma or Pythagorean comma was meant: the difference between them, the so-called schisma, is almost inaudible and he stated that it could be divided up among the fifths.
The last "Septenarius" tuning was not conceived in terms of fractions of a comma, despite some modern authors' attempts to approximate it by some such method. Instead, Werckmeister gave the string lengths on the monochord directly, and from that calculated how each fifth ought to be tempered.
Werckmeister I (III): "correct temperament" based on 1/4 comma divisions.
This tuning uses mostly pure (perfect) fifths, as in Pythagorean tuning, but each of the fifths C–G, G–D, D–A and B–F♯ is made smaller, i.e. tempered by 1/4 of the comma. No matter if the Pythagorean comma or the syntonic comma is used, the resulting tempered fifths are for all practical purposes the same as meantone temperament fifths. All major thirds are reasonably close to 400 cents and, because not all fifths are tempered, there is no wolf fifth and all 12 notes can be used as the tonic.
Werckmeister designated this tuning as particularly suited for playing chromatic music ("ficte"), which may have led to its popularity as a tuning for J. S. Bach's music in recent years.
Because a quarter of the Pythagorean comma is formula_0, or formula_1, it is possible to calculate exact mathematical values for the frequency relationships and intervals:
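A minimal computational sketch of that calculation (derived only from the verbal description above, not from Werckmeister's own monochord tables) walks the circle of fifths, narrowing the four fifths named above by a quarter of the Pythagorean comma and keeping the others pure:

```python
import math

# Sketch: derive Werckmeister I (III) pitch ratios from the description above.
# Fifths C-G, G-D, D-A and B-F# are narrowed by 1/4 of the Pythagorean comma;
# all other fifths are pure. Ratios are folded back into a single octave from C.

PYTHAGOREAN_COMMA = 531441 / 524288              # 3**12 / 2**19
PURE_FIFTH = 3 / 2
TEMPERED_FIFTH = PURE_FIFTH / PYTHAGOREAN_COMMA ** 0.25

circle = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"]
narrowed = {("C", "G"), ("G", "D"), ("D", "A"), ("B", "F#")}

ratio = {"C": 1.0}
for lower, upper in zip(circle, circle[1:]):
    step = TEMPERED_FIFTH if (lower, upper) in narrowed else PURE_FIFTH
    r = ratio[lower] * step
    while r >= 2.0:                              # reduce into the octave above C
        r /= 2.0
    ratio[upper] = r

for note in sorted(ratio, key=ratio.get):
    print(f"{note:>2}: ratio {ratio[note]:.6f} = {1200 * math.log2(ratio[note]):7.2f} cents")
```

The same construction yields the other circulating temperaments described below simply by changing which fifths are tempered and by what fraction of the comma.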
Werckmeister II (IV): another temperament included in the Orgelprobe, divided up through 1/3 comma.
In Werckmeister II the fifths C–G, D–A, E–B, F♯–C♯, and B♭–F are tempered narrow by 1/3 comma, and the fifths G♯–D♯ and E♭–B♭ are widened by 1/3 comma. The other fifths are pure. Werckmeister designed this tuning for playing mainly diatonic music (i.e. rarely using the "black notes"). Most of its intervals are close to sixth-comma meantone. Werckmeister also gave a table of monochord lengths for this tuning, setting C=120 units, a practical approximation to the exact theoretical values. Following the monochord numbers the G and D are somewhat lower than their theoretical values but other notes are somewhat higher.
Werckmeister III (V): an additional temperament divided up through 1/4 comma.
In Werckmeister III the fifths D–A, A–E, F♯–C♯, C♯–G♯, and F–C are narrowed by 1/4, and the fifth G♯–D♯ is widened by 1/4 comma. The other fifths are pure. This temperament is closer to equal temperament than the previous two.
Werckmeister IV (VI): the Septenarius tunings.
This tuning is based on a division of the monochord length into formula_2 parts. The various notes are then defined by which 196-division one should place the bridge on in order to produce their pitches. The resulting scale has rational frequency relationships, so it is mathematically distinct from the irrational tempered values above; however in practice, both involve pure and impure sounding fifths. Werckmeister also gave a version where the total length is divided into 147 parts, which is simply a transposition of the intervals of the 196-tuning. He described the Septenarius as "an additional temperament which has nothing at all to do with the divisions of the comma, nevertheless in practice so correct that one can be really satisfied with it".
One apparent problem with these tunings is the value given to D (or A in the transposed version): Werckmeister writes it as 176. However this produces a musically bad effect because the fifth G–D would then be very flat (more than half a comma); the third B♭–D would be pure, but D–F♯ would be more than a comma too sharp – all of which contradict the rest of Werckmeister's writings on temperament. In the illustration of the monochord division, the number "176" is written one place too far to the right, where 175 should be. Therefore it is conceivable that the number 176 is a mistake for 175, which gives a musically much more consistent result. Both values are given in the table below.
In the tuning with D=175, the fifths C–G, G–D, D–A, B–F♯, F♯–C♯, and B♭–F are tempered narrow, while the fifth G♯–D♯ is tempered wider than pure; the other fifths are pure. | [
{
"math_id": 0,
"text": "\\sqrt[4]{\\frac{531441}{524288}}"
},
{
"math_id": 1,
"text": "\\frac{27}{32}\\sqrt[4]{2}"
},
{
"math_id": 2,
"text": "196 = 7\\times 7\\times 4"
}
]
| https://en.wikipedia.org/wiki?curid=6796750 |
67969171 | Option on realized variance | In finance, an option on realized variance (or variance option) is a type of variance derivatives which is the derivative securities on which the payoff depends on the annualized realized variance of the return of a specified underlying asset, such as stock index, bond, exchange rate, etc. Another liquidated security of the same type is variance swap, which is, in other words, the futures contract on realized variance.
Similarly to vanilla options, variance options give the owner the right, but not the obligation, to buy or sell the realized variance in exchange for an agreed price (the variance strike) at some time in the future (the expiry date), except that the risk exposure is solely to the variance of the price itself. This property attracts traders since they can use it as an instrument to speculate on the future movement of the asset's volatility, for example to delta-hedge a portfolio, without taking on the directional risk of holding the underlying asset.
Definitions.
In practice, the annualized realized variance is defined by the sum of the squared discretely sampled log-returns of the specified underlying asset. In other words, if there are formula_0 sampling points of the underlying prices, say formula_1 observed at times formula_2 where formula_3 for all formula_4, then the realized variance, denoted by formula_5, takes the form
formula_6
where formula_7 is an annualization factor, normally chosen to match the sampling frequency of the underlying prices (e.g. formula_8 if the sampling is daily, formula_9 if weekly, or formula_10 if monthly), and formula_11 is the expiry date, which equals formula_12
If one further puts formula_13 as the variance strike and formula_14 as the notional amount of the contract per unit of annualized variance,
then the payoffs at expiry for the call and put options written on formula_5 (or just the variance call and put) are
formula_15
and
formula_16
respectively.
Note that the annualized realized variance can also be defined through continuous sampling, which results in the quadratic variation of the underlying price. That is, if we suppose that formula_17 determines the instantaneous volatility of the price process, then
formula_18
defines the continuous-sampling annualized realized variance which is also proved to be the limit in the probability of the discrete form i.e.
formula_19.
However, this approach is adopted only to approximate the discrete one, since contracts involving realized variance are in practice quoted in terms of discrete sampling.
Pricing and valuation.
Suppose that under a risk-neutral measure formula_20 the underlying asset price formula_21 solves the time-varying Black–Scholes model as follows:
formula_22
where formula_23 is the risk-free interest rate, formula_24 is the instantaneous volatility of the asset price, and formula_25 is a standard Brownian motion on the filtered probability space formula_26, where formula_27 is the natural filtration of formula_28.
Under this setting, in the case of the variance call, its fair price at time formula_29, denoted by formula_30, can be obtained as the expected present value of its payoff function, i.e.
formula_31
where formula_32 for discrete sampling while formula_33 for continuous sampling. By put-call parity we also get the put value once formula_30 is known. The solution can be approached analytically, with a methodology similar to that of the Black–Scholes derivation, once the probability density function of formula_34 is known, or by means of approximation schemes such as the Monte Carlo method.
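As a concrete illustration of the Monte Carlo approach, the following sketch prices a call on discretely sampled realized variance under the model above, specialised to constant interest rate and volatility; the spot dynamics, strike, notional and all other numerical values are illustrative assumptions rather than market data.

```python
import numpy as np

# Sketch: Monte Carlo valuation of a variance call under the model above,
# specialised to constant interest rate and volatility. All numerical values
# (strike, notional, sampling scheme) are illustrative assumptions.

r, sigma = 0.02, 0.20            # constant risk-free rate and volatility
T, n, A = 1.0, 252, 252          # maturity, number of returns, annualization factor
K_var, L = 0.04, 10_000          # variance strike (= 20%^2) and notional per variance point
n_paths = 20_000
rng = np.random.default_rng(0)

dt = T / n
# Risk-neutral log-returns of geometric Brownian motion over each interval.
z = rng.standard_normal((n_paths, n))
log_returns = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z

# Discretely sampled annualized realized variance: (A/n) * sum of squared log-returns.
rv = (A / n) * np.sum(log_returns**2, axis=1)

payoff = np.maximum(rv - K_var, 0.0) * L
price = np.exp(-r * T) * payoff.mean()
std_err = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"variance call value: {price:.2f} +/- {std_err:.2f}")
```

With constant volatility the sampled realized variance concentrates around 0.04, so the chosen strike makes the option roughly at-the-money; the reported standard error indicates the Monte Carlo sampling uncertainty.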
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n+1"
},
{
"math_id": 1,
"text": "S_{t_0},S_{t_2},\\dots,S_{t_{n}}"
},
{
"math_id": 2,
"text": "t_i "
},
{
"math_id": 3,
"text": "0\\leq t_{i-1}<t_{i}\\leq T"
},
{
"math_id": 4,
"text": "i\\in \\{1,\\dots,n\\}"
},
{
"math_id": 5,
"text": "RV_d"
},
{
"math_id": 6,
"text": "RV_d:=\\frac{A}{n}\\sum_{i=1}^{n}\\ln^2\\Big(\\frac{S_{t_i}}{S_{t_{i-1}}}\\Big)"
},
{
"math_id": 7,
"text": "A"
},
{
"math_id": 8,
"text": "A=252"
},
{
"math_id": 9,
"text": "A=52"
},
{
"math_id": 10,
"text": "A=12"
},
{
"math_id": 11,
"text": "T"
},
{
"math_id": 12,
"text": "n/{A}."
},
{
"math_id": 13,
"text": "K^C_{\\text{var}}"
},
{
"math_id": 14,
"text": "L"
},
{
"math_id": 15,
"text": "(RV_d-K^C_{\\text{var}})^+\\times L"
},
{
"math_id": 16,
"text": "(K^C_{\\text{var}}-RV_d)^+\\times L"
},
{
"math_id": 17,
"text": "\\sigma(t)"
},
{
"math_id": 18,
"text": "RV_{c}:= \\frac{1}{T}\\int_{0}^{T}\\sigma^2(s)ds"
},
{
"math_id": 19,
"text": "\\lim_{n\\to\\infty}RV_d=\\lim_{n\\to\\infty}\\frac{A}{n}\\sum_{i=1}^{n}\\ln^2\\Big(\\frac{S_{t_i}}{S_{t_{i-1}}}\\Big)=\\frac{1}{T}\\int_{0}^{T}\\sigma^2(s)ds=RV_{c}"
},
{
"math_id": 20,
"text": "\\mathbb{Q}"
},
{
"math_id": 21,
"text": "S=(S_t)_{0\\leq t \\leq T}"
},
{
"math_id": 22,
"text": "\\frac{dS_t}{S_t}=r(t) \\, dt+\\sigma(t) \\, dW_t, \\;\\; S_0>0"
},
{
"math_id": 23,
"text": "r(t)\\in\\mathbb{R}"
},
{
"math_id": 24,
"text": "\\sigma(t)>0"
},
{
"math_id": 25,
"text": "W=(W_t)_{0\\leq t \\leq T}"
},
{
"math_id": 26,
"text": "(\\Omega,\\mathcal{F},\\mathbb{F},\\mathbb{Q})"
},
{
"math_id": 27,
"text": "\\mathbb{F}=(\\mathcal{F}_t)_{0\\leq t \\leq T}"
},
{
"math_id": 28,
"text": "W"
},
{
"math_id": 29,
"text": "t_0"
},
{
"math_id": 30,
"text": "C_{t_0}^\\text{var}"
},
{
"math_id": 31,
"text": "C_{t_0}^\\operatorname{var}:=e^{-\\int^T_{t_0} r(s) \\, ds}\\operatorname{E}^{\\mathbb{Q}}[(RV_{(\\cdot)}-K^C_{\\operatorname{var}})^+\\mid\\mathcal{F}_{t_0}],"
},
{
"math_id": 32,
"text": "RV_{(\\cdot)} = RV_d"
},
{
"math_id": 33,
"text": "RV_{(\\cdot)} = RV_c"
},
{
"math_id": 34,
"text": "RV_{(\\cdot)}"
}
]
| https://en.wikipedia.org/wiki?curid=67969171 |
6796998 | Gleason's theorem | Theorem in quantum mechanics
In mathematical physics, Gleason's theorem shows that the rule one uses to calculate probabilities in quantum physics, the Born rule, can be derived from the usual mathematical representation of measurements in quantum physics together with the assumption of non-contextuality. Andrew M. Gleason first proved the theorem in 1957, answering a question posed by George W. Mackey, an accomplishment that was historically significant for the role it played in showing that wide classes of hidden-variable theories are inconsistent with quantum physics. Multiple variations have been proven in the years since. Gleason's theorem is of particular importance for the field of quantum logic and its attempt to find a minimal set of mathematical axioms for quantum theory.
Statement of the theorem.
Conceptual background.
In quantum mechanics, each physical system is associated with a Hilbert space. For the purposes of this overview, the Hilbert space is assumed to be finite-dimensional. In the approach codified by John von Neumann, a measurement upon a physical system is represented by a self-adjoint operator on that Hilbert space sometimes termed an "observable". The eigenvectors of such an operator form an orthonormal basis for the Hilbert space, and each possible outcome of that measurement corresponds to one of the vectors comprising the basis. A density operator is a positive-semidefinite operator on the Hilbert space whose trace is equal to 1. In the language of von Weizsäcker, a density operator is a "catalogue of probabilities": for each measurement that can be defined, the probability distribution over the outcomes of that measurement can be computed from the density operator. The procedure for doing so is the Born rule, which states that
formula_0
where formula_1 is the density operator, and formula_2 is the projection operator onto the basis vector corresponding to the measurement outcome formula_3.
The Born rule associates a probability with each unit vector in the Hilbert space, in such a way that these probabilities sum to 1 for any set of unit vectors comprising an orthonormal basis. Moreover, the probability associated with a unit vector is a function of the density operator and the unit vector, and not of additional information like a choice of basis for that vector to be embedded in. Gleason's theorem establishes the converse: all assignments of probabilities to unit vectors (or, equivalently, to the operators that project onto them) that satisfy these conditions take the form of applying the Born rule to some density operator. Gleason's theorem holds if the dimension of the Hilbert space is 3 or greater; counterexamples exist for dimension 2.
Deriving the state space and the Born rule.
The probability of any outcome of a measurement upon a quantum system must be a real number between 0 and 1 inclusive, and in order to be consistent, for any individual measurement the probabilities of the different possible outcomes must add up to 1. Gleason's theorem shows that any function that assigns probabilities to measurement outcomes, as identified by projection operators, must be expressible in terms of a density operator and the Born rule. This gives not only the rule for calculating probabilities, but also determines the set of possible quantum states.
Let formula_4 be a function from projection operators to the unit interval with the property that, if a set formula_5 of projection operators sum to the identity matrix (that is, if they correspond to an orthonormal basis), then
formula_6
Such a function expresses an assignment of probability values to the outcomes of measurements, an assignment that is "noncontextual" in the sense that the probability for an outcome does not depend upon which measurement that outcome is embedded within, but only upon the mathematical representation of that specific outcome, i.e., its projection operator. Gleason's theorem states that for any such function formula_4, there exists a positive-semidefinite operator formula_1 with unit trace such that
formula_7
Both the Born rule and the fact that "catalogues of probability" are positive-semidefinite operators of unit trace follow from the assumptions that measurements are represented by orthonormal bases, and that probability assignments are "noncontextual". In order for Gleason's theorem to be applicable, the space on which measurements are defined must be a real or complex Hilbert space, or a quaternionic module. (Gleason's argument is inapplicable if, for example, one tries to construct an analogue of quantum mechanics using "p"-adic numbers.)
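A small numerical illustration of these statements (a sketch with a randomly generated density operator and basis, not part of any proof) checks that the Born-rule assignment yields non-negative numbers summing to 1 over an orthonormal basis in dimension 3:

```python
import numpy as np

# Sketch: in dimension 3, the Born rule f(P) = Tr(P rho) assigns non-negative
# numbers to the projectors of any orthonormal basis, and they sum to 1.
# The density operator and the basis below are randomly generated.

rng = np.random.default_rng(7)
d = 3

a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = a @ a.conj().T                      # positive semidefinite
rho /= np.trace(rho).real                 # normalise to unit trace

# Random orthonormal basis from a QR decomposition of a random complex matrix.
q, _ = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))

probs = []
for i in range(d):
    v = q[:, i]
    projector = np.outer(v, v.conj())     # rank-one projector onto the basis vector
    probs.append(float(np.trace(projector @ rho).real))

print("probabilities:", [round(p, 6) for p in probs])
print("sum:", round(sum(probs), 12))      # 1.0, for any choice of orthonormal basis
```

Changing the basis changes the individual probabilities but never their sum, which is exactly the noncontextual normalisation property that Gleason's theorem takes as its starting point.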
History and outline of Gleason's proof.
In 1932, John von Neumann also managed to derive the Born rule in his textbook "Mathematische Grundlagen der Quantenmechanik" ["Mathematical Foundations of Quantum Mechanics"]. However, the assumptions on which von Neumann built his proof were rather strong and were eventually regarded as not well motivated. Specifically, von Neumann assumed that the probability function must be linear on all observables, commuting or non-commuting. His proof was derided by John Bell as "not merely false but foolish!". Gleason, on the other hand, did not assume linearity, but merely additivity for commuting projectors together with noncontextuality, assumptions seen as better motivated and more physically meaningful.
By the late 1940s, George Mackey had grown interested in the mathematical foundations of quantum physics, wondering in particular whether the Born rule was the only possible rule for calculating probabilities in a theory that represented measurements as orthonormal bases on a Hilbert space. Mackey discussed this problem with Irving Segal at the University of Chicago, who in turn raised it with Richard Kadison, then a graduate student. Kadison showed that for 2-dimensional Hilbert spaces there exists a probability measure that does not correspond to quantum states and the Born rule. Gleason's result implies that this only happens in dimension 2.
Gleason's original proof proceeds in three stages. In Gleason's terminology, a "frame function" is a real-valued function formula_4 on the unit sphere of a Hilbert space such that
formula_8
whenever the vectors formula_3 comprise an orthonormal basis. A noncontextual probability assignment as defined in the previous section is equivalent to a frame function. Any such measure that can be written in the standard way, that is, by applying the Born rule to a quantum state, is termed a "regular" frame function. Gleason derives a sequence of lemmas concerning when a frame function is necessarily regular, culminating in the final theorem. First, he establishes that every continuous frame function on the Hilbert space formula_9 is regular. This step makes use of the theory of spherical harmonics. Then, he proves that frame functions on formula_9 have to be continuous, which establishes the theorem for the special case of formula_9. This step is regarded as the most difficult of the proof. Finally, he shows that the general problem can be reduced to this special case. Gleason credits one lemma used in this last stage of the proof to his doctoral student Richard Palais.
Robin Lyth Hudson described Gleason's theorem as "celebrated and notoriously difficult". Cooke, Keane and Moran later produced a proof that is longer than Gleason's but requires fewer prerequisites.
Implications.
Gleason's theorem highlights a number of fundamental issues in quantum measurement theory. As Fuchs argues, the theorem "is an extremely powerful result", because "it indicates the extent to which the Born probability rule and even the state-space structure of density operators are "dependent" upon the theory's other postulates". In consequence, quantum theory is "a tighter package than one might have first thought". Various approaches to rederiving the quantum formalism from alternative axioms have, accordingly, employed Gleason's theorem as a key step, bridging the gap between the structure of Hilbert space and the Born rule.
Hidden variables.
Moreover, the theorem is historically significant for the role it played in ruling out the possibility of certain classes of hidden variables in quantum mechanics. A hidden-variable theory that is deterministic implies that the probability of a given outcome is "always" either 0 or 1. For example, a Stern–Gerlach measurement on a spin-1 atom will report that the atom's angular momentum along the chosen axis is one of three possible values, which can be designated formula_10, formula_11 and formula_12. In a deterministic hidden-variable theory, there exists an underlying physical property that fixes the result found in the measurement. Conditional on the value of the underlying physical property, any given outcome (for example, a result of formula_12) must be either impossible or guaranteed. But Gleason's theorem implies that there can be no such deterministic probability measure. The mapping formula_13 is continuous on the unit sphere of the Hilbert space for any density operator formula_1. Since this unit sphere is connected, no continuous probability measure on it can be deterministic. Gleason's theorem therefore suggests that quantum theory represents a deep and fundamental departure from the classical intuition that uncertainty is due to ignorance about hidden degrees of freedom. More specifically, Gleason's theorem rules out hidden-variable models that are "noncontextual". Any hidden-variable model for quantum mechanics must, in order to avoid the implications of Gleason's theorem, involve hidden variables that are not properties belonging to the measured system alone but also dependent upon the external context in which the measurement is made. This type of dependence is often seen as contrived or undesirable; in some settings, it is inconsistent with special relativity.
To construct a counterexample for 2-dimensional Hilbert space, known as a qubit, let the hidden variable be a unit vector formula_14 in 3-dimensional Euclidean space. Using the Bloch sphere, each possible measurement on a qubit can be represented as a pair of antipodal points on the unit sphere. Defining the probability of a measurement outcome to be 1 if the point representing that outcome lies in the same hemisphere as formula_14 and 0 otherwise yields an assignment of probabilities to measurement outcomes that obeys Gleason's assumptions. However, this probability assignment does not correspond to any valid density operator. By introducing a probability distribution over the possible values of formula_14, a hidden-variable model for a qubit that reproduces the predictions of quantum theory can be constructed.
Gleason's theorem motivated later work by John Bell, Ernst Specker and Simon Kochen that led to the result often called the Kochen–Specker theorem, which likewise shows that noncontextual hidden-variable models are incompatible with quantum mechanics. As noted above, Gleason's theorem shows that there is no probability measure over the rays of a Hilbert space that only takes the values 0 and 1 (as long as the dimension of that space exceeds 2). The Kochen–Specker theorem refines this statement by constructing a specific finite subset of rays on which no such probability measure can be defined. The fact that such a finite subset of rays must exist follows from Gleason's theorem by way of a logical compactness argument, but this method does not construct the desired set explicitly. In the related no-hidden-variables result known as Bell's theorem, the assumption that the hidden-variable theory is noncontextual instead is replaced by the assumption that it is local. The same sets of rays used in Kochen–Specker constructions can also be employed to derive Bell-type proofs.
Pitowsky uses Gleason's theorem to argue that quantum mechanics represents a new theory of probability, one in which the structure of the space of possible events is modified from the classical, Boolean algebra thereof. He regards this as analogous to the way that special relativity modifies the kinematics of Newtonian mechanics.
The Gleason and Kochen–Specker theorems have been cited in support of various philosophies, including perspectivism, constructive empiricism and agential realism.
Quantum logic.
Gleason's theorem finds application in quantum logic, which makes heavy use of lattice theory. Quantum logic treats the outcome of a quantum measurement as a logical proposition and studies the relationships and structures formed by these logical propositions. They are organized into a lattice, in which the distributive law, valid in classical logic, is weakened, to reflect the fact that in quantum physics, not all pairs of quantities can be measured simultaneously. The "representation theorem" in quantum logic shows that such a lattice is isomorphic to the lattice of subspaces of a vector space with a scalar product. Using Solèr's theorem, the (skew) field "K" over which the vector space is defined can be proven, with additional hypotheses, to be either the real numbers, complex numbers, or the quaternions, as is needed for Gleason's theorem to hold.
By invoking Gleason's theorem, the form of a probability function on lattice elements can be restricted. Assuming that the mapping from lattice elements to probabilities is noncontextual, Gleason's theorem establishes that it must be expressible with the Born rule.
Generalizations.
Gleason originally proved the theorem assuming that the measurements applied to the system are of the von Neumann type, i.e., that each possible measurement corresponds to an orthonormal basis of the Hilbert space. Later, Busch and independently Caves "et al." proved an analogous result for a more general class of measurements, known as positive-operator-valued measures (POVMs). The set of all POVMs includes the set of von Neumann measurements, and so the assumptions of this theorem are significantly stronger than Gleason's. This made the proof of this result simpler than Gleason's, and the conclusions stronger. Unlike the original theorem of Gleason, the generalized version using POVMs also applies to the case of a single qubit. Assuming noncontextuality for POVMs is, however, controversial, as POVMs are not fundamental, and some authors defend that noncontextuality should be assumed only for the underlying von Neumann measurements. Gleason's theorem, in its original version, does not hold if the Hilbert space is defined over the rational numbers, i.e., if the components of vectors in the Hilbert space are restricted to be rational numbers, or complex numbers with rational parts. However, when the set of allowed measurements is the set of all POVMs, the theorem holds.
The original proof by Gleason was not constructive: one of the ideas on which it depends is the fact that every continuous function defined on a compact space attains its minimum. Because one cannot in all cases explicitly show where the minimum occurs, a proof that relies upon this principle will not be a constructive proof. However, the theorem can be reformulated in such a way that a constructive proof can be found.
Gleason's theorem can be extended to some cases where the observables of the theory form a von Neumann algebra. Specifically, an analogue of Gleason's result can be shown to hold if the algebra of observables has no direct summand that is representable as the algebra of 2×2 matrices over a commutative von Neumann algebra (i.e., no direct summand of type "I"2). In essence, the only barrier to proving the theorem is the fact that Gleason's original result does not hold when the Hilbert space is that of a qubit.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P(x_i) = \\operatorname{Tr}(\\Pi_i \\rho),"
},
{
"math_id": 1,
"text": "\\rho"
},
{
"math_id": 2,
"text": "\\Pi_i"
},
{
"math_id": 3,
"text": "x_i"
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": "\\{ \\Pi_i\\}"
},
{
"math_id": 6,
"text": "\\sum_i f(\\Pi_i) = 1."
},
{
"math_id": 7,
"text": "f(\\Pi_i) = \\operatorname{Tr}(\\Pi_i \\rho)."
},
{
"math_id": 8,
"text": "\\sum_i f(x_i) = 1"
},
{
"math_id": 9,
"text": "\\mathbb{R}^3"
},
{
"math_id": 10,
"text": "-"
},
{
"math_id": 11,
"text": "0"
},
{
"math_id": 12,
"text": "+"
},
{
"math_id": 13,
"text": "u \\to \\langle \\rho u, u \\rangle"
},
{
"math_id": 14,
"text": "\\vec{\\lambda}"
}
]
| https://en.wikipedia.org/wiki?curid=6796998 |
679709 | Nonlinear programming | Solution process for some optimization problems
In mathematics, nonlinear programming (NLP) is the process of solving an optimization problem where some of the constraints are not linear equalities or the objective function is not a linear function. An optimization problem is one of calculation of the extrema (maxima, minima or stationary points) of an objective function over a set of unknown real variables and conditional to the satisfaction of a system of equalities and inequalities, collectively termed constraints. It is the sub-field of mathematical optimization that deals with problems that are not linear.
Definition and discussion.
Let "n", "m", and "p" be positive integers. Let "X" be a subset of "Rn" (usually a box-constrained one), let "f", "gi", and "hj" be real-valued functions on "X" for each "i" in {"1", ..., "m"} and each "j" in {"1", ..., "p"}, with at least one of "f", "gi", and "hj" being nonlinear.
A nonlinear programming problem is an optimization problem of the form
formula_0
Depending on the constraint set, there are several possibilities: the problem can be feasible, when at least one choice of the variables satisfies all the constraints; infeasible, when no choice of the variables satisfies all the constraints; or unbounded, when the problem is feasible but the objective can be improved without limit over the feasible set.
Most realistic applications feature feasible problems, with infeasible or unbounded problems seen as a failure of an underlying model. In some cases, infeasible problems are handled by minimizing a sum of feasibility violations.
Some special cases of nonlinear programming have specialized solution methods:
Applicability.
A typical non-convex problem is that of optimizing transportation costs by selection from a set of transportation methods, one or more of which exhibit economies of scale, with various connectivities and capacity constraints. An example would be petroleum product transport given a selection or combination of pipeline, rail tanker, road tanker, river barge, or coastal tankship. Owing to economic batch size the cost functions may have discontinuities in addition to smooth changes.
In experimental science, some simple data analysis (such as fitting a spectrum with a sum of peaks of known location and shape but unknown magnitude) can be done with linear methods, but in general these problems are also nonlinear. Typically, one has a theoretical model of the system under study with variable parameters in it and a model the experiment or experiments, which may also have unknown parameters. One tries to find a best fit numerically. In this case one often wants a measure of the precision of the result, as well as the best fit itself.
Methods for solving a general nonlinear program.
Analytic methods.
Under differentiability and constraint qualifications, the Karush–Kuhn–Tucker (KKT) conditions provide necessary conditions for a solution to be optimal. If some of the functions are non-differentiable, subdifferential versions of Karush–Kuhn–Tucker (KKT) conditions are available.
Under convexity, the KKT conditions are sufficient for a global optimum. Without convexity, these conditions are sufficient only for a local optimum. In some cases, the number of local optima is small, and one can find all of them analytically and find the one for which the objective value is smallest.
Numeric methods.
In most realistic cases, it is very hard to solve the KKT conditions analytically, and so the problems are solved using numerical methods. These methods are iterative: they start with an initial point, and then proceed to points that are supposed to be closer to the optimal point, using some update rule. There are three kinds of update rules: zero-order routines, which use only the values of the objective function; first-order routines, which also use the gradients of the objective and constraint functions; and second-order routines, which additionally use their second derivatives (Hessians).
Third-order routines (and higher) are theoretically possible, but not used in practice, due to the higher computational load and little theoretical benefit.
Branch and bound.
Another method involves the use of branch and bound techniques, where the program is divided into subclasses to be solved with convex (minimization problem) or linear approximations that form a lower bound on the overall cost within the subdivision. With subsequent divisions, at some point an actual solution will be obtained whose cost is equal to the best lower bound obtained for any of the approximate solutions. This solution is optimal, although possibly not unique. The algorithm may also be stopped early, with the assurance that the best possible solution is within a tolerance from the best point found; such points are called ε-optimal. Terminating to ε-optimal points is typically necessary to ensure finite termination. This is especially useful for large, difficult problems and problems with uncertain costs or values where the uncertainty can be estimated with an appropriate reliability estimation.
Implementations.
There exist numerous nonlinear programming solvers, including open source:
Numerical Examples.
2-dimensional example.
A simple problem (shown in the diagram) can be defined by the constraints
formula_1
with an objective function to be maximized
formula_2
where x = ("x"1, "x"2).
3-dimensional example.
Another simple problem (see diagram) can be defined by the constraints
formula_3
with an objective function to be maximized
formula_4
where x = ("x"1, "x"2, "x"3).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{align}\n \\text{minimize } & f(x) \\\\\n \\text{subject to } & g_i(x) \\leq 0 \\text{ for each } i \\in \\{1, \\dotsc, m\\} \\\\\n & h_j(x) = 0 \\text{ for each } j \\in \\{1, \\dotsc, p\\} \\\\\n & x \\in X.\n\\end{align}\n"
},
{
"math_id": 1,
"text": "\\begin{align}\nx_1 &\\geq 0 \\\\\nx_2 &\\geq 0 \\\\\nx_1^2 + x_2^2 &\\geq 1 \\\\\nx_1^2 + x_2^2 &\\leq 2\n\\end{align}"
},
{
"math_id": 2,
"text": "f(\\mathbf x) = x_1 + x_2"
},
{
"math_id": 3,
"text": "\\begin{align}\nx_1^2 - x_2^2 + x_3^2 &\\leq 2 \\\\\nx_1^2 + x_2^2 + x_3^2 &\\leq 10\n\\end{align}"
},
{
"math_id": 4,
"text": "f(\\mathbf x) = x_1 x_2 + x_2 x_3"
}
]
| https://en.wikipedia.org/wiki?curid=679709 |
67973010 | Praseodymium(III) oxalate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Praseodymium(III) oxalate is an inorganic compound, a salt of praseodymium metal and oxalic acid, with the chemical formula C6O12Pr2. The compound forms light green crystals that are insoluble in water. It also forms crystalline hydrates.
Preparation.
Praseodymium(III) oxalate can be prepared from the reaction of soluble praseodymium salts with oxalic acid:
formula_0
Properties.
Praseodymium(III) oxalate forms crystalline hydrates (light green crystals): Pr2(C2O4)3•10H2O. The crystalline hydrate decomposes stepwise when heated:
formula_1
Uses.
Praseodymium(III) oxalate is used as an intermediate product in the synthesis of praseodymium. It is also applied to colour some glasses and enamels. If mixed with certain other materials, the compound colours glass an intense yellow.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{2Pr(NO_3)_3 + 3(COOH)_2 \\ \\xrightarrow{}\\ Pr_2(C_2O_4)_3\\downarrow + 6HNO_3 }"
},
{
"math_id": 1,
"text": "\\mathsf{Pr_2(C_2O_4)_3\\cdot 10H_2O \\ \\xrightarrow[-H_2O]{T}\\ Pr_2(C_2O_4)_3 \\ \\xrightarrow{T}\\ Pr_2O(CO_3)_2 \\ \\xrightarrow{T}\\ Pr_2O_2CO_3 \\ \\xrightarrow{800^oC}\\ Pr_6O_{11} }"
}
]
| https://en.wikipedia.org/wiki?curid=67973010 |
6797677 | Regular semigroup | In mathematics, a regular semigroup is a semigroup "S" in which every element is regular, i.e., for each element "a" in "S" there exists an element "x" in "S" such that "axa" = "a". Regular semigroups are one of the most-studied classes of semigroups, and their structure is particularly amenable to study via Green's relations.
History.
Regular semigroups were introduced by J. A. Green in his influential 1951 paper "On the structure of semigroups"; this was also the paper in which Green's relations were introduced. The concept of "regularity" in a semigroup was adapted from an analogous condition for rings, already considered by John von Neumann. It was Green's study of regular semigroups which led him to define his celebrated relations. According to a footnote in Green 1951, the suggestion that the notion of regularity be applied to semigroups was first made by David Rees.
The term inversive semigroup (French: demi-groupe inversif) was historically used as synonym in the papers of Gabriel Thierrin (a student of Paul Dubreil) in the 1950s, and it is still used occasionally.
The basics.
There are two equivalent ways in which to define a regular semigroup "S":
(1) for each "a" in "S", there is an "x" in "S", which is called a pseudoinverse, with "axa" = "a";
(2) every element "a" has at least one inverse "b", in the sense that "aba" = "a" and "bab" = "b".
To see the equivalence of these definitions, first suppose that "S" is defined by (2). Then "b" serves as the required "x" in (1). Conversely, if "S" is defined by (1), then "xax" is an inverse for "a", since "a"("xax")"a" = "axa"("xa") = "axa" = "a" and ("xax")"a"("xax") = "x"("axa")("xax") = "xa"("xax") = "x"("axa")"x" = "xax".
The set of inverses (in the above sense) of an element "a" in an arbitrary semigroup "S" is denoted by "V"("a"). Thus, another way of expressing definition (2) above is to say that in a regular semigroup, "V"("a") is nonempty, for every "a" in "S". The product of any element "a" with any "b" in "V"("a") is always idempotent: "abab" = "ab", since "aba" = "a".
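As a concrete illustration of definition (1), the following brute-force sketch verifies that the full transformation monoid on a three-element set is a regular semigroup by exhibiting a pseudoinverse for every element; the choice of T_3 and the composition convention are made here only for illustration.

```python
from itertools import product

# Sketch: check by exhaustive search that the full transformation monoid T_3
# (all maps {0,1,2} -> {0,1,2} under composition) is a regular semigroup,
# i.e. for every a there is some x with a*x*a = a.

elements = list(product(range(3), repeat=3))     # a map f is stored as (f(0), f(1), f(2))

def compose(f, g):
    """(f * g)(i) = f(g(i)) -- composition as the semigroup operation."""
    return tuple(f[g[i]] for i in range(3))

for a in elements:
    assert any(compose(compose(a, x), a) == a for x in elements), a

print(f"all {len(elements)} elements of T_3 are regular")
```

Replacing T_3 by the multiplication table of an arbitrary finite semigroup turns the same loop into a direct test of regularity.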
Unique inverses and unique pseudoinverses.
A regular semigroup in which idempotents commute (with idempotents) is an inverse semigroup, or equivalently, every element has a "unique" inverse. To see this, let "S" be a regular semigroup in which idempotents commute. Then every element of "S" has at least one inverse. Suppose that "a" in "S" has two inverses "b" and "c", i.e.,
"aba" = "a", "bab" = "b", "aca" = "a" and "cac" = "c". Also "ab", "ba", "ac" and "ca" are idempotents as above.
Then
"b" = "bab" = "b"("aca")"b" = "bac"("a")"b" = "bac"("aca")"b" = "bac"("ac")("ab") = "bac"("ab")("ac") = "ba"("ca")"bac" = "ca"("ba")"bac" = "c"("aba")"bac" = "cabac" = "cac" = "c".
So, by commuting the pairs of idempotents "ab" & "ac" and "ba" & "ca", the inverse of "a" is shown to be unique. Conversely, it can be shown that any inverse semigroup is a regular semigroup in which idempotents commute.
The existence of a unique pseudoinverse implies the existence of a unique inverse, but the opposite is not true. For example, in the symmetric inverse semigroup, the empty transformation Ø does not have a unique pseudoinverse, because Ø = Ø"f"Ø for any transformation "f". The inverse of Ø is unique however, because only one "f" satisfies the additional constraint that "f" = "f"Ø"f", namely "f" = Ø. This remark holds more generally in any semigroup with zero. Furthermore, if every element has a unique pseudoinverse, then the semigroup is a group, and the unique pseudoinverse of an element coincides with the group inverse.
Green's relations.
Recall that the principal ideals of a semigroup "S" are defined in terms of "S"1, the "semigroup with identity adjoined"; this is to ensure that an element "a" belongs to the principal right, left and two-sided ideals which it generates. In a regular semigroup "S", however, an element "a" = "axa" automatically belongs to these ideals, without recourse to adjoining an identity. Green's relations can therefore be redefined for regular semigroups as follows:
formula_0 if, and only if, "Sa" = "Sb";
formula_1 if, and only if, "aS" = "bS";
formula_2 if, and only if, "SaS" = "SbS".
In a regular semigroup "S", every formula_3- and formula_4-class contains at least one idempotent. If "a" is any element of "S" and "a′" is any inverse for "a", then "a" is formula_3-related to "a′a" and formula_4-related to "aa′".
Theorem. Let "S" be a regular semigroup; let "a" and "b" be elements of "S", and let "V(x)" denote the set of inverses of "x" in "S". Then
If "S" is an inverse semigroup, then the idempotent in each formula_3- and formula_4-class is unique.
Special classes of regular semigroups.
Some special classes of regular semigroups are:
The class of generalised inverse semigroups is the intersection of the class of locally inverse semigroups and the class of orthodox semigroups.
All inverse semigroups are orthodox and locally inverse. The converse statements do not hold.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a\\,\\mathcal{L}\\,b"
},
{
"math_id": 1,
"text": "a\\,\\mathcal{R}\\,b"
},
{
"math_id": 2,
"text": "a\\,\\mathcal{J}\\,b"
},
{
"math_id": 3,
"text": "\\mathcal{L}"
},
{
"math_id": 4,
"text": "\\mathcal{R}"
},
{
"math_id": 5,
"text": "a\\,\\mathcal{H}\\,b"
}
]
| https://en.wikipedia.org/wiki?curid=6797677 |
67979295 | 1 Kings 14 | 1 Kings, chapter 14
1 Kings 14 is the fourteenth chapter of the Books of Kings in the Hebrew Bible or the First Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. 1 Kings 12:1 to 16:14 documents the consolidation of the kingdoms of northern Israel and Judah: this chapter focusses on the reigns of Jeroboam and Nadab in the northern kingdom and Rehoboam in the southern kingdom.
Text.
This chapter was originally written in the Hebrew language and since the 16th century is divided into 31 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
A breach between Ahijah of Shiloh and Jeroboam (14:1–20).
After the event in previous chapter Jeroboam received a further rebuke from Ahijah of Shiloh, when he attempted to cheat the prophet who was already old and blind, to get a word about his sick child. Although Jeroboam's wife was well disguised, the prophet immediately recognized her (in contrast to Genesis 27) and mercilessly revealed that her child (also Jeroboam's) would die (thematically similar to 1 Samuel 9:1–10:16 and 2 Kings 1). The same prophet who prophesied Jeroboam's rise to power (1 Kings 11:29–39) now forecasts the fall of Jeroboam's dynasty, because Jeroboam failed to behave like David. The end of Jeroboam's family would be dishonorable as the bodies of his family members would not be properly buried but would be eaten by 'dogs and birds' (verse 11, cf. 1 Samuel 31:8–13 for the significance of proper burial), and the fulfillment happened quickly in the second year of the reign of Jeroboam's son, Nadab (1 Kings 15:29–30). The pattern of prophecy and fulfilment are common in the books of Kings (cf. then 12:15; 16:1–4 then 16:11–12; 21:21–23 then 22:38 + 2 Kings 9:36–37; 2 Kings 9:7–10 then 10:17; 21:10–15 then 24:2; 22:16–17 then 25:1–7), emphasizing that the history of Israel is dictated by its relationship to God.
"For the Lord will smite Israel, as a reed is shaken in the water, and He will uproot Israel from this good land, which he gave to their fathers, and will scatter them beyond the river, because they have made their Asherah poles, provoking the Lord to anger."
Verse 15.
Without a strong, continuous dynasty in the northern kingdom of Israel, the land would know only the instability of 'a reed shaken (blown by the wind) in the water', and finally be exiled to places beyond "the River" (that is, "Euphrates") in Assyria.
"And the time that Jeroboam reigned was twenty-two years. And he slept with his fathers, and Nadab his son reigned in his place."
Rehoboam's reign in Judah and the attack of Shishak (14:21–31).
The proper introductory formula, an editorial principle in Kings, is only now inserted for Rehoboam, although his reign was mentioned in the story of the kingdom's division. It was mentioned twice (verses 21, 31) that Rehoboam's mother was an Ammonite, recalling Solomon's foreign wives and their idol-worship (1 Kings 11:1–8) that caused widespread idolatry in Judah (not confined to Jerusalem, as with Solomon). Standard sentences (verses 22–24) were used repeatedly later in the books of Kings to build the case 'how breaches of the first commandment formed the underlying evil' which led to the downfall (and implicitly, exile) of the kingdom of Judah (and even earlier, the kingdom of [northern] Israel). Just five years after the death of Solomon, Pharaoh Shishak plundered the wealth that Solomon had accumulated as a high price of freedom for Jerusalem, a first sign of warning for 'the city that the LORD had chosen out of all the tribes of Israel, to put his name there' (verse 21). The invasion of Shishak is documented in Egyptian sources and the archaeological record, the first event in the Bible to have support from independent witnesses.
"And Rehoboam the son of Solomon reigned in Judah. Rehoboam was forty and one years old when he began to reign, and he reigned seventeen years in Jerusalem, the city which the Lord did choose out of all the tribes of Israel, to put his name there. And his mother's name was Naamah an Ammonitess."
"It happened in the fifth year of King Rehoboam that Shishak king of Egypt came up against Jerusalem."
"And Sousakim gave to Jeroboam Ano the eldest sister of Thekemina his wife, to him as wife; she was great among the king's daughters".
Verse 25.
Most scholars support Champollion's identification of Shishak with Shoshenq I of the 22nd dynasty (ruled Egypt 945–924 BCE), who left behind "explicit records of a campaign into Canaan (scenes; a long list of Canaanite place-names from the Negev to Galilee; stelae), including a stela [found] at Megiddo", and the Bubastite Portal at Karnak, although Jerusalem was not mentioned in any of these campaign records. A common variant of Shoshenq's name omits its 'n' glyphs, resulting in a pronunciation like "Shoshek".
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67979295 |
67980941 | Coordination sequence | In crystallography and the theory of infinite vertex-transitive graphs, the coordination sequence of a vertex formula_0 is an integer sequence that counts how many vertices are at each possible distance from formula_0. That is, it is a sequence
formula_1
where each formula_2 is the number of vertices that are formula_3 steps away from formula_0. If the graph is vertex-transitive, then the sequence is an invariant of the graph that does not depend on the specific choice of formula_0. Coordination sequences can also be defined for sphere packings, by using either the contact graph of the spheres or the Delaunay triangulation of their centers, but these two choices may give rise to different sequences.
As an example, in a square grid, for each positive integer formula_3, there are formula_4 grid points that are formula_3 steps away from the origin. Therefore, the coordination sequence of the square grid is the sequence
formula_5
in which, except for the initial value of one, each number is a multiple of four.
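This count is easy to reproduce computationally. The following Python sketch (the function name is ours) performs a breadth-first search outward from the origin of the square grid and tallies how many lattice points first appear at each graph distance; because the graph is vertex-transitive, the choice of start vertex does not matter.
from collections import deque
def square_grid_coordination_sequence(k_max):
    # Breadth-first search from the origin; counts[d] is the number of
    # lattice points whose graph distance from the origin is exactly d.
    counts = [0] * (k_max + 1)
    seen = {(0, 0)}
    queue = deque([((0, 0), 0)])
    while queue:
        (x, y), d = queue.popleft()
        counts[d] += 1
        if d == k_max:
            continue
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (x + dx, y + dy) not in seen:
                seen.add((x + dx, y + dy))
                queue.append(((x + dx, y + dy), d + 1))
    return counts
print(square_grid_coordination_sequence(5))  # [1, 4, 8, 12, 16, 20]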
The concept was proposed by Georg O. Brunner and Fritz Laves and later developed by Michael O'Keeffe. The coordination sequences of many low-dimensional lattices and uniform tilings are known.
The coordination sequences of periodic structures are known to be quasi-polynomial.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "v"
},
{
"math_id": 1,
"text": "n_0, n_1, n_2,\\dots"
},
{
"math_id": 2,
"text": "n_i"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "4i"
},
{
"math_id": 5,
"text": "1,4,8,12,16,20,\\dots\\ ."
}
]
| https://en.wikipedia.org/wiki?curid=67980941 |
67981364 | Redheffer star product | Binary operation
In mathematics, the Redheffer star product is a binary operation on linear operators that arises in connection to solving coupled systems of linear equations. It was introduced by Raymond Redheffer in 1959, and has subsequently been widely adopted in computational methods for scattering matrices. Given two scattering matrices from different linear scatterers, the Redheffer star product yields the combined scattering matrix produced when some or all of the output channels of one scatterer are connected to inputs of another scatterer.
Definition.
Suppose formula_0 are the block matrices
formula_1
and
formula_2,
whose blocks formula_3 have the same shape when
formula_4.
The Redheffer star product is then defined by:
formula_5
assuming that formula_6 are invertible,
where formula_7 is an identity matrix conformable
to formula_8 or formula_9, respectively.
This can be rewritten several ways making use of the so-called
push-through identity
formula_10.
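A minimal numerical sketch of this block formula, written in Python with NumPy (the function name and the restriction to square blocks of equal size are ours; the two inverses are assumed to exist):
import numpy as np
def redheffer_star(A, B, n):
    # Redheffer star product of two 2n-by-2n matrices viewed as 2x2 block
    # matrices with n-by-n blocks, following the definition above.
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    I = np.eye(n)
    X = np.linalg.inv(I - A12 @ B21)
    Y = np.linalg.inv(I - B21 @ A12)
    return np.block([
        [B11 @ X @ A11,             B12 + B11 @ X @ A12 @ B22],
        [A21 + A22 @ Y @ B21 @ A11, A22 @ Y @ B22],
    ])
As a quick sanity check, redheffer_star(np.eye(2 * n), B, n) returns B, matching the identity property listed below.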
Redheffer's definition extends beyond matrices to
linear operators on a Hilbert space formula_11.
By definition, formula_3 are linear endomorphisms of formula_11,
making formula_0 linear endomorphisms of formula_12,
where formula_13 is the direct sum.
However, the star product still makes sense as long as the transformations are compatible,
which is possible when formula_14
and formula_15
so that formula_16.
Properties.
Existence.
formula_17 exists if and only if
formula_18 exists.
Thus when either exists, so does the Redheffer star product.
Identity.
The star identity is the identity on formula_12,
or formula_19.
Associativity.
The star product is associative, provided all of the relevant matrices are defined.
Thus formula_20.
Adjoint.
Provided either side exists, the adjoint of a Redheffer
star product is formula_21.
Inverse.
If formula_22 is the left matrix inverse of formula_23 such that
formula_24, formula_25 has a right inverse, and
formula_26 exists, then formula_27.
Similarly, if formula_22 is the left matrix inverse of formula_23 such
that formula_24, formula_28 has a right inverse, and
formula_29 exists, then formula_30.
Also, if formula_27 and formula_25 has a left inverse
then formula_24.
The star inverse equals the matrix inverse and both can be computed with
block inversion as
formula_31.
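These properties can be checked numerically with the redheffer_star sketch from the definition section (assumed to be in scope); for example, the following lines confirm that the ordinary inverse acts as a star inverse, using random matrices, which generically satisfy the side conditions.
import numpy as np
rng = np.random.default_rng(2)
n = 3
A = rng.normal(size=(2 * n, 2 * n))   # generically invertible, as is its lower-right block
B = np.linalg.inv(A)                  # ordinary matrix inverse
# A star B should be the 2n-by-2n identity, i.e. the star identity
assert np.allclose(redheffer_star(A, B, n), np.eye(2 * n))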
Derivation from a linear system.
The star product arises from solving multiple linear systems of equations that share
variables in common.
Often, each linear system models the behavior of one subsystem in a physical process
and by connecting the multiple subsystems into a whole, one can eliminate variables
shared across subsystems in order to obtain the overall linear system.
For instance, let formula_32 be elements of a Hilbert space
formula_11 such that
formula_33
and
formula_34
giving the following formula_35 equations in formula_36 variables:
formula_37.
By substituting the first equation into the last we find:
formula_38.
By substituting the last equation into the first we find:
formula_39.
Eliminating formula_40 by substituting the two preceding equations
into those for formula_41 results in the Redheffer star product
being the matrix such that:
formula_42.
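This elimination can be checked numerically. The sketch below reuses the redheffer_star helper from the definition section (assumed to be in scope) and the variable names of the system above; random blocks generically satisfy the invertibility assumptions.
import numpy as np
rng = np.random.default_rng(0)
n = 3
A, B = rng.normal(size=(2 * n, 2 * n)), rng.normal(size=(2 * n, 2 * n))
A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
I = np.eye(n)
x5, x2 = rng.normal(size=n), rng.normal(size=n)                  # free inputs
x4 = np.linalg.solve(I - B21 @ A12, B21 @ A11 @ x5 + B22 @ x2)   # eliminated variable
x3 = A11 @ x5 + A12 @ x4                                         # eliminated variable
x1, x6 = B11 @ x3 + B12 @ x2, A21 @ x5 + A22 @ x4                # outputs
# the star product maps (x5, x2) directly to (x1, x6)
assert np.allclose(redheffer_star(A, B, n) @ np.r_[x5, x2], np.r_[x1, x6])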
Connection to scattering matrices.
Many scattering processes take on a form that motivates a different
convention for the block structure of the linear system of a scattering matrix.
Typically a physical device that performs a linear transformation on inputs, such as
linear dielectric media on electromagnetic waves or in quantum mechanical scattering,
can be encapsulated as a system which interacts with the environment through various
ports, each of which accepts inputs and returns outputs. It is conventional to use a different notation for the Hilbert space, formula_43, whose subscript
labels a port on the device.
Additionally, any element, formula_44, has an additional superscript labeling the direction of travel (where + indicates moving from port i to i+1 and - indicates the reverse).
The equivalent notation for a Redheffer transformation,
formula_45,
used in the previous section is
formula_46
The action of the S-matrix,
formula_47,
is defined with an additional flip compared to Redheffer's definition:
formula_48
so
formula_49
Note that in order for the off-diagonal identity matrices to be defined,
we require formula_50 to be the same underlying Hilbert space.
The star product, formula_51,
for two S-matrices, formula_0, is given by
formula_52
where formula_53
and formula_54,
so formula_55.
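A numerical sketch of this block formula, in the same spirit as the one given for the original star product (the function name is ours; the relevant inverses are assumed to exist):
import numpy as np
def s_star(A, B, n):
    # S-matrix star product (star_S) of two 2n-by-2n scattering matrices.
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    I = np.eye(n)
    X = np.linalg.inv(I - B11 @ A22)
    Y = np.linalg.inv(I - A22 @ B11)
    return np.block([
        [A11 + A12 @ X @ B11 @ A21, A12 @ X @ B12],
        [B21 @ Y @ A21,             B22 + B21 @ Y @ A22 @ B12],
    ])
With J = np.block([[np.zeros((n, n)), np.eye(n)], [np.eye(n), np.zeros((n, n))]]), both s_star(J, S, n) and s_star(S, J, n) return S, which is the star identity property stated below.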
Properties.
These are analogues of the properties of formula_56 for formula_51.
Most of them follow from the correspondence
formula_57.
formula_58, the exchange operator, is also the S-matrix star identity defined below.
For the rest of this section, formula_59 are S-matrices.
Existence.
formula_60 exists when either
formula_61
or
formula_62
exists.
Identity.
The S-matrix star identity, formula_58, is
formula_63.
This means formula_64.
Associativity.
Associativity of formula_51 follows from associativity of formula_56 and of matrix multiplication.
Adjoint.
From the correspondence between formula_56 and formula_51,
and the adjoint of formula_56, we have that
formula_65
Inverse.
The matrix formula_66 that is the S-matrix star product inverse of
formula_67 in the sense that formula_68
is formula_69 where formula_70 is the ordinary matrix inverse
and formula_58 is as defined above.
Connection to transfer matrices.
Observe that a scattering matrix can be rewritten as a
transfer matrix, formula_71, with action
formula_72,
where
formula_73
Here the subscripts relate the different directions of propagation at each port.
As a result, the star product of scattering matrices
formula_74
is analogous to the following matrix multiplication of transfer matrices
formula_75
where formula_76
and formula_77,
so formula_78.
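This correspondence can be checked numerically with the sketch below, which reuses the s_star helper above (random scattering matrices generically have the required invertible blocks). Acting on column vectors, the scatterer encountered first is applied first, so the combined transfer matrix is the product of the second transfer matrix with the first.
import numpy as np
def to_transfer(S, n):
    # Transfer matrix of a 2n-by-2n scattering matrix, per the formula above;
    # assumes the upper-right block S12 is invertible.
    S11, S12, S21, S22 = S[:n, :n], S[:n, n:], S[n:, :n], S[n:, n:]
    S12inv = np.linalg.inv(S12)
    return np.block([
        [S21 - S22 @ S12inv @ S11, S22 @ S12inv],
        [-S12inv @ S11,            S12inv],
    ])
rng = np.random.default_rng(1)
n = 2
SA, SB = rng.normal(size=(2 * n, 2 * n)), rng.normal(size=(2 * n, 2 * n))
combined = to_transfer(s_star(SA, SB, n), n)
cascaded = to_transfer(SB, n) @ to_transfer(SA, n)
print(np.max(np.abs(combined - cascaded)))  # agrees up to rounding error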
Generalizations.
Redheffer generalized the star product in several ways:
Arbitrary bijections.
If there is a bijection formula_79 given by
formula_80 then an associative star product can be defined by:
formula_81.
The particular star product defined by Redheffer above is obtained from:
formula_82
where formula_83.
3x3 star product.
A star product can also be defined for 3x3 matrices.
Applications to scattering matrices.
In physics, the Redheffer star product appears when constructing a total
scattering matrix from two or more subsystems.
If system formula_23 has a scattering matrix formula_84 and system
formula_22 has scattering matrix formula_85, then the combined system
formula_86 has scattering matrix formula_87.
Transmission line theory.
Many physical processes, including radiative transfer, neutron diffusion, circuit theory, and others, are described by scattering processes whose formulation depends on the dimension of the process and the representation of the operators. For probabilistic problems, the scattering equation may appear in a Kolmogorov-type equation.
Electromagnetism.
The Redheffer star product can be used to solve for the propagation of electromagnetic fields in stratified, multilayered media. Each layer in the structure has its own scattering matrix and the total structure's scattering matrix can be described as the star product between all of the layers. A free software program that simulates electromagnetism in layered media is the
Stanford Stratified Structure Solver.
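As a toy illustration (our own example, not taken from the solvers mentioned above), consider two partially reflecting interfaces described by scalar reflection and transmission coefficients, each with scattering matrix [[r, t], [t, r]]. Combining them with the star product, using the s_star helper sketched earlier, reproduces the familiar multiple-reflection (Airy-type) sums for the composite transmission and reflection.
import numpy as np
def interface(r, t):
    # Toy 2x2 scattering matrix of a single symmetric interface.
    return np.array([[r, t], [t, r]])
rA, tA = 0.3, 0.9
rB, tB = 0.5, 0.8
S_AB = s_star(interface(rA, tA), interface(rB, tB), 1)
t_expected = tA * tB / (1 - rA * rB)             # geometric series of internal bounces
r_expected = rA + tA * rB * tA / (1 - rA * rB)
print(np.isclose(S_AB[1, 0], t_expected), np.isclose(S_AB[0, 0], r_expected))  # True True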
Semiconductor interfaces.
Kinetic models of consecutive semiconductor interfaces can use a scattering matrix formulation to model the motion of electrons between the semiconductors.
Factorization on graphs.
In the analysis of Schrödinger operators on graphs, the scattering matrix of a graph can be obtained as a generalized star product of the scattering matrices corresponding to its subgraphs.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A, B"
},
{
"math_id": 1,
"text": "A = \n\\begin{pmatrix}\n A_{11} & A_{12}\n \\\\\n A_{21} & A_{22}\n\\end{pmatrix}"
},
{
"math_id": 2,
"text": "B =\n\\begin{pmatrix}\n B_{11} & B_{12}\n \\\\\n B_{21} & B_{22}\n\\end{pmatrix} "
},
{
"math_id": 3,
"text": "A_{ij}, B_{kl}"
},
{
"math_id": 4,
"text": "ij = kl"
},
{
"math_id": 5,
"text": "A \\star B =\n\n\\begin{pmatrix}\n B_{11} (I - A_{12} B_{21})^{-1} A_{11} & B_{12} + B_{11} (I - A_{12} B_{21})^{-1} A_{12} B_{22}\n \\\\\n A_{21} + A_{22} (I - B_{21} A_{12})^{-1} B_{21} A_{11} & A_{22} (I - B_{21} A_{12})^{-1} B_{22}\n\\end{pmatrix}\n"
},
{
"math_id": 6,
"text": "(I - A_{12} B_{21}), (I - B_{21} A_{12})"
},
{
"math_id": 7,
"text": "I"
},
{
"math_id": 8,
"text": "A_{12} B_{21}"
},
{
"math_id": 9,
"text": "B_{21} A_{12}"
},
{
"math_id": 10,
"text": "(I - A B) A = A (I - B A) \\iff A (I - B A)^{-1} = (I - A B)^{-1} A"
},
{
"math_id": 11,
"text": "\\mathcal H"
},
{
"math_id": 12,
"text": "\\mathcal H \\oplus \\mathcal H"
},
{
"math_id": 13,
"text": "\\oplus"
},
{
"math_id": 14,
"text": "A \\in \\mathcal{L (H_\\gamma \\oplus H_\\alpha, H_\\alpha \\oplus H_\\gamma)}"
},
{
"math_id": 15,
"text": "B \\in \\mathcal{L (H_\\alpha \\oplus H_\\beta, H_\\beta \\oplus H_\\alpha)}"
},
{
"math_id": 16,
"text": "A \\star B \\in \\mathcal{L (H_\\gamma \\oplus H_\\beta, H_\\beta \\oplus H_\\gamma)}"
},
{
"math_id": 17,
"text": "(I - A_{12} B_{21})^{-1}"
},
{
"math_id": 18,
"text": "(I - B_{21} A_{12})^{-1}"
},
{
"math_id": 19,
"text": "\\begin{pmatrix} I & 0 \\\\ 0 & I \\end{pmatrix}"
},
{
"math_id": 20,
"text": "A \\star B \\star C = (A \\star B) \\star C = A \\star (B \\star C)"
},
{
"math_id": 21,
"text": "(A \\star B)^* = B^* \\star A^*"
},
{
"math_id": 22,
"text": "B"
},
{
"math_id": 23,
"text": "A"
},
{
"math_id": 24,
"text": "BA = I"
},
{
"math_id": 25,
"text": "A_{22}"
},
{
"math_id": 26,
"text": "A \\star B"
},
{
"math_id": 27,
"text": "A \\star B = I"
},
{
"math_id": 28,
"text": "A_{11}"
},
{
"math_id": 29,
"text": "B \\star A"
},
{
"math_id": 30,
"text": "B \\star A = I"
},
{
"math_id": 31,
"text": "\\begin{pmatrix}\nA_{11} & A_{12}\n\\\\\nA_{21} & A_{22}\n\\end{pmatrix}^{-1}\n=\n\\begin{pmatrix}\n(A_{11} - A_{12} A_{22}^{-1} A_{21})^{-1} & (A_{21} - A_{22} A_{12}^{-1} A_{11})^{-1}\n\\\\\n(A_{12} - A_{11} A_{21}^{-1} A_{22})^{-1} & (A_{22} - A_{21} A_{11}^{-1} A_{12})^{-1}\n\\end{pmatrix}"
},
{
"math_id": 32,
"text": "\\{ x_i \\}_{i=1}^6"
},
{
"math_id": 33,
"text": "\\begin{pmatrix}\n x_3\n \\\\\n x_6\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n A_{11} & A_{12}\n \\\\\n A_{21} & A_{22}\n\\end{pmatrix}\n\\begin{pmatrix}\n x_5\n \\\\\n x_4\n\\end{pmatrix}"
},
{
"math_id": 34,
"text": "\\begin{pmatrix}\n x_1\n \\\\\n x_4\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n B_{11} & B_{12}\n \\\\\n B_{21} & B_{22}\n\\end{pmatrix}\n\\begin{pmatrix}\n x_3\n \\\\\n x_2\n\\end{pmatrix}"
},
{
"math_id": 35,
"text": "4"
},
{
"math_id": 36,
"text": "6"
},
{
"math_id": 37,
"text": "\\begin{align}\nx_3 &= A_{11} x_5 + A_{12} x_4\n\\\\\nx_6 &= A_{21} x_5 + A_{22} x_4\n\\\\\nx_1 &= B_{11} x_3 + B_{12} x_2\n\\\\\nx_4 &= B_{21} x_3 + B_{22} x_2\n\\end{align}"
},
{
"math_id": 38,
"text": "x_4 = (I - B_{21}A_{12})^{-1} (B_{21}A_{11} x_5 + B_{22} x_2)"
},
{
"math_id": 39,
"text": "x_3 = (I - A_{12}B_{21})^{-1} (A_{11} x_5 + A_{12}B_{22} x_2)"
},
{
"math_id": 40,
"text": "x_3, x_4"
},
{
"math_id": 41,
"text": "x_1, x_6"
},
{
"math_id": 42,
"text": "\\begin{pmatrix}\n x_1\n \\\\\n x_6\n\\end{pmatrix}\n= (A \\star B)\n\\begin{pmatrix}\n x_5\n \\\\\n x_2\n\\end{pmatrix}"
},
{
"math_id": 43,
"text": "\\mathcal H_i"
},
{
"math_id": 44,
"text": "c_i^\\pm \\in \\mathcal H_i"
},
{
"math_id": 45,
"text": "R \\in \\mathcal{L (H_1 \\oplus H_2, H_2 \\oplus H_1)}"
},
{
"math_id": 46,
"text": "\n\\begin{pmatrix}\n c_2^+\n \\\\\n c_1^-\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n R_{11} & R_{12}\n \\\\\n R_{21} & R_{22}\n\\end{pmatrix}\n\\begin{pmatrix}\n c_1^+\n \\\\\n c_2^-\n\\end{pmatrix}\n"
},
{
"math_id": 47,
"text": "S \\in \\mathcal{L (H_1 \\oplus H_2, H_1 \\oplus H_2)}"
},
{
"math_id": 48,
"text": "\n\\begin{pmatrix}\n c_1^-\n \\\\\n c_2^+\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n S_{11} & S_{12}\n \\\\\n S_{21} & S_{22}\n\\end{pmatrix}\n\\begin{pmatrix}\n c_1^+\n \\\\\n c_2^-\n\\end{pmatrix}\n"
},
{
"math_id": 49,
"text": "\nS =\n\\begin{pmatrix}\n 0 & I\n \\\\\n I & 0\n\\end{pmatrix}\nR\n"
},
{
"math_id": 50,
"text": "\\mathcal{H_1, H_2}"
},
{
"math_id": 51,
"text": "\\star_S"
},
{
"math_id": 52,
"text": "\nA \\star_S B\n=\n\\begin{pmatrix}\n A_{11} + A_{12} (I - B_{11} A_{22})^{-1} B_{11} A_{21} &\n A_{12} (I - B_{11} A_{22})^{-1} B_{12}\n \\\\\n B_{21} (I - A_{22} B_{11})^{-1} A_{21} &\n B_{22} + B_{21} (I - A_{22} B_{11})^{-1} A_{22} B_{12}\n\\end{pmatrix}\n"
},
{
"math_id": 53,
"text": "A \\in \\mathcal{L (H_1 \\oplus H_2, H_1 \\oplus H_2)}"
},
{
"math_id": 54,
"text": "B \\in \\mathcal{L (H_2 \\oplus H_3, H_2 \\oplus H_3)}"
},
{
"math_id": 55,
"text": "A \\star_S B \\in \\mathcal{L (H_1 \\oplus H_3, H_1 \\oplus H_3)}"
},
{
"math_id": 56,
"text": "\\star"
},
{
"math_id": 57,
"text": "J(A \\star B) = (JA) \\star_S (JB)"
},
{
"math_id": 58,
"text": "J"
},
{
"math_id": 59,
"text": "A,B,C"
},
{
"math_id": 60,
"text": "A \\star_S B"
},
{
"math_id": 61,
"text": "(I - A_{22} B_{11})^{-1}"
},
{
"math_id": 62,
"text": "(I - B_{11} A_{22})^{-1}"
},
{
"math_id": 63,
"text": "\nJ =\n\\begin{pmatrix}\n 0 & I\n \\\\\n I & 0\n\\end{pmatrix}\n"
},
{
"math_id": 64,
"text": "J \\star_S S = S \\star_S J = S"
},
{
"math_id": 65,
"text": "(A \\star_S B)^* = J (B^* \\star_S A^*) J"
},
{
"math_id": 66,
"text": "\\Sigma"
},
{
"math_id": 67,
"text": "S"
},
{
"math_id": 68,
"text": "\\Sigma \\star_S S = S \\star_S \\Sigma = J"
},
{
"math_id": 69,
"text": "JS^{-1}J"
},
{
"math_id": 70,
"text": "S^{-1}"
},
{
"math_id": 71,
"text": "T"
},
{
"math_id": 72,
"text": "\\begin{pmatrix}\n c_2^+\n \\\\\n c_2^-\n\\end{pmatrix}\n= T\n\\begin{pmatrix}\n c_1^+\n \\\\\n c_1^-\n\\end{pmatrix}"
},
{
"math_id": 73,
"text": "\nT =\n\\begin{pmatrix}\n T_{\\scriptscriptstyle ++} & T_{\\scriptscriptstyle +-}\n \\\\\n T_{\\scriptscriptstyle -+} & T_{\\scriptscriptstyle --}\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n S_{21} - S_{22} S_{12}^{-1} S_{11} & S_{22} S_{12}^{-1}\n \\\\\n - S_{12}^{-1} S_{11} & S_{12}^{-1}\n\\end{pmatrix}\n"
},
{
"math_id": 74,
"text": "\n\\begin{pmatrix}\n c_3^+\n \\\\\n c_1^-\n\\end{pmatrix}\n= (S^A \\star S^B)\n\\begin{pmatrix}\n c_1^+\n \\\\\n c_3^-\n\\end{pmatrix}\n"
},
{
"math_id": 75,
"text": "\n\\begin{pmatrix}\n c_3^+\n \\\\\n c_3^-\n\\end{pmatrix}\n= (T^A T^B)\n\\begin{pmatrix}\n c_1^+\n \\\\\n c_1^-\n\\end{pmatrix}\n"
},
{
"math_id": 76,
"text": "T^A \\in \\mathcal{L (H_1 \\oplus H_1, H_2 \\oplus H_2)}"
},
{
"math_id": 77,
"text": "T^B \\in \\mathcal{L (H_2 \\oplus H_2, H_3 \\oplus H_3)}"
},
{
"math_id": 78,
"text": "T^A T^B \\in \\mathcal{L (H_1 \\oplus H_1, H_3 \\oplus H_3)}"
},
{
"math_id": 79,
"text": "M \\leftrightarrow L"
},
{
"math_id": 80,
"text": "L = f(M)"
},
{
"math_id": 81,
"text": "A \\star B = f^{-1} (f(A) f(B))"
},
{
"math_id": 82,
"text": "f(A) = ((I - A) + (I + A) J)^{-1} ((A - I) + (A + I) J)"
},
{
"math_id": 83,
"text": "J(x, y) = (-x, y)"
},
{
"math_id": 84,
"text": "S^A"
},
{
"math_id": 85,
"text": "S^B"
},
{
"math_id": 86,
"text": "AB"
},
{
"math_id": 87,
"text": "S^{AB} = S^A \\star S^B"
}
]
| https://en.wikipedia.org/wiki?curid=67981364 |
67984129 | Praseodymium(IV) fluoride | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Praseodymium(IV) fluoride (also praseodymium tetrafluoride) is a binary inorganic compound, a highly oxidised metal salt of praseodymium and fluoride with the chemical formula PrF4.
Synthesis.
Praseodymium(IV) fluoride can be prepared by the effect of krypton difluoride on praseodymium(IV) oxide:
formula_0
Praseodymium(IV) fluoride can also be made by the dissolution of sodium hexafluoropraseodymate(IV) in liquid hydrogen fluoride:
formula_1
Properties.
Praseodymium(IV) fluoride forms light yellow crystals. The crystal structure is anticubic and isomorphic to that of uranium tetrafluoride UF4. It decomposes when heated:
formula_2
Due to the high standard electrode potential of the tetravalent praseodymium cation (Pr3+ / Pr4+: +3.2 V), praseodymium(IV) fluoride decomposes in water, releasing oxygen, O2.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ PrO_2 + 2 KrF_2 \\ \\xrightarrow{}\\ PrF_4 + O_2 + 2 Kr }"
},
{
"math_id": 1,
"text": "\\mathsf{ Na_2[PrF_6] + 2 HF \\ \\xrightarrow{}\\ PrF_4\\downarrow + 2 NaHF_2 }"
},
{
"math_id": 2,
"text": "\\mathsf{ 2 PrF_4 \\ \\xrightarrow{90^oC}\\ 2 PrF_3 + F_2 }"
}
]
| https://en.wikipedia.org/wiki?curid=67984129 |
67988243 | 1 Kings 15 | 1 Kings, chapter 15
1 Kings 15 is the fifteenth chapter of the Books of Kings in the Hebrew Bible or the First Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. 1 Kings 12:1-16:14 documents the consolidation of the kingdoms of northern Israel and Judah. This chapter focusses on the reigns of Abijam (or Abijah) and Asa in the southern kingdom, as well as Nadab and Baasha in the northern kingdom.
Text.
This chapter was originally written in the Hebrew language and since the 16th century is divided into 34 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Abijam, the king of Judah (15:1–8).
Abijam is the first king who is given synchronized dating, that is, a correlation to the line of kings in the northern kingdom, a reminder of the common heritage, despite their separate development, as the people of YHWH. The names of the Judean queen mothers are always noted for specific political reasons: the queen mother could be an overriding factor in deciding who took up the reins of government among rival parties and interest-groups (cf. 1 Kings 1), and she held a specific rank of 'mistress' (synonymous with the Hebrew word for 'queen mother') giving her power especially in the case of her son's death, similar to other cultures of the ancient Near East, such as amongst the Hittites. Abijam did not rule for long (about two full years, cf. verse 1 with 15:9; the number 'three' in 15:2 can be explained since the years of accession and death were not complete calendar years). Abijam was given a poor rating as a king because he did not reverse the (alleged) atrocities introduced by Rehoboam and failed to be "like David"; but for David's sake, God still gave "a lamp in Jerusalem" (verse 4; cf. 1 Kings 11:36) even when there were conflicts with the northern state at this time (v. 7b, probably a note from the annals of the Judean kings).
"1 Now in the eighteenth year of king Jeroboam the son of Nebat reigned Abijam over Judah."
"2 Three years reigned he in Jerusalem. and his mother's name was Maachah, the daughter of Abishalom."
Asa, the king of Judah (15:9–24).
Asa reigned for an unusually long time in Jerusalem, seeing five Israelite kings rise and fall before Ahab began to reign; in his old age he was 'diseased in his feet', which indicates that his son Jehoshaphat acted as regent during Asa's lifetime. Asa was given a good assessment in comparison with David: though he did not abolish the high places outside Jerusalem (a task left to Josiah, 2 Kings 23:8), he was otherwise regarded as exemplary, as he 'made pious donations' to the temple, 'chased the cult-prostitutes out' of the land (cf. 1 Kings 14:24), and dismissed the queen mother (his grandmother) 'because she had made an abominable image for Asherah'. The queen mother, Maacah, was the mother of Abijam, not Asa, but kept her position as queen mother following Abijam's early death until Asa relieved her of the post. Asa's strategy to fend off northern Israel's provocative expansion of the Benjaminite town of Ramah into a border fortress (cf. Joshua 18:25) was questionable, because he incited the Aramean king in Damascus to carry out a military attack on northern Israel, devastating Galilee; while the Israelite king turned his back on the south to concentrate on the enemy in the north, Asa took the chance to build his own border fortress in Ramah, using the available materials from the northern kingdom.
"9And in the twentieth year of Jeroboam king of Israel reigned Asa over Judah."
"10 And forty and one years reigned he in Jerusalem. And his mother's name was Maachah, the daughter of Abishalom."
Nadab, the king of Israel (15:25–32).
The narrative turns to the kingdom of northern Israel, where Nadab, son of Jeroboam I, inherited a dynasty which only lasted a short time, although he managed to wage war against the Philistines in Philistine territory (apparently resuming the war which Saul had begun; cf. 1 Samuel 13–14; 31). Baasha's motives for overthrowing the king and liquidating the entire royal family are not made clear, other than that everything came to pass as prophesied by the prophet Ahijah: because of Jeroboam's sin his 'house' had to be eliminated, and Baasha carried this out. However, this is not a licence for political murder, for in 1 Kings 16:7 Baasha and his son would pay the price for the bloodbath he brought upon the house of Jeroboam (God may use humans as instruments of his judgement, but he does not condone their crimes).
"And Nadab the son of Jeroboam began to reign over Israel in the second year of Asa king of Judah, and reigned over Israel two years."
Baasha, the king of Israel (15:33–34).
It is already recorded in previous passages how Baasha became the second founder of a dynasty in the northern kingdom of Israel (after killing the heir of the previous dynasty, 15:27–28), and was involved in a war on two fronts against Judah and Syria (15:17–22). Now it is noted that he reigned for twenty-four years in Tirzah, a city in the territory of Manasseh (generally identified as "el-Far'ah", about 10 km north of Nablus) which Jeroboam had already used as a residence (1 Kings 14:17). Baasha was given a poor rating as king because he walked 'in the way of Jeroboam', a religious (not political) criterion, as he left the bull cult of Bethel (and Dan) untouched.
"In the third year of Asa king of Judah began Baasha the son of Ahijah to reign over all Israel in Tirzah, twenty and four years."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67988243 |
6799 | COBOL | Programming language with English-like syntax
COBOL (an acronym for "common business-oriented language") is a compiled English-like computer programming language designed for business use. It is an imperative, procedural, and, since 2002, object-oriented language. COBOL is primarily used in business, finance, and administrative systems for companies and governments. COBOL is still widely used in applications deployed on mainframe computers, such as large-scale batch and transaction processing jobs. Many large financial institutions were developing new systems in the language as late as 2006, but most programming in COBOL today is purely to maintain existing applications. Programs are being moved to new platforms, rewritten in modern languages, or replaced with other software.
COBOL was designed in 1959 by CODASYL and was partly based on the programming language FLOW-MATIC, designed by Grace Hopper. It was created as part of a U.S. Department of Defense effort to create a portable programming language for data processing. It was originally seen as a stopgap, but the Defense Department promptly pressured computer manufacturers to provide it, resulting in its widespread adoption. It was standardized in 1968 and has been revised five times. Expansions include support for structured and object-oriented programming. The current standard is ISO/IEC 1989:2023.
COBOL statements have prose syntax such as MOVE x TO y, which was designed to be self-documenting and highly readable. However, it is verbose and uses over 300 reserved words. This contrasts with the succinct and mathematically inspired syntax of other languages (in this case, y = x).
The COBOL code is split into four "divisions" (identification, environment, data, and procedure), containing a rigid hierarchy of sections, paragraphs, and sentences. Lacking a large standard library, the standard specifies 43 statements, 87 functions, and just one class.
Academic computer scientists were generally uninterested in business applications when COBOL was created and were not involved in its design; it was (effectively) designed from the ground up as a computer language for business, with an emphasis on inputs and outputs, whose only data types were numbers and strings of text.
COBOL has been criticized for its verbosity, design process, and poor support for structured programming. These weaknesses result in monolithic programs that are hard to comprehend as a whole, despite their local readability.
For years, COBOL has been the assumed programming language for business operations on mainframes, although in recent years many COBOL operations have been moved to cloud computing.
History and specification.
Background.
In the late 1950s, computer users and manufacturers were becoming concerned about the rising cost of programming. A 1959 survey had found that in any data processing installation, the programming cost US$800,000 on average and that translating programs to run on new hardware would cost US$600,000. At a time when new programming languages were proliferating, the same survey suggested that if a common business-oriented language were used, conversion would be far cheaper and faster.
On April 8, 1959, Mary K. Hawes, a computer scientist at Burroughs Corporation, called a meeting of representatives from academia, computer users, and manufacturers at the University of Pennsylvania to organize a formal meeting on common business languages. Representatives included Grace Hopper (inventor of the English-like data processing language FLOW-MATIC), Jean Sammet, and Saul Gorn.
At the April meeting, the group asked the Department of Defense (DoD) to sponsor an effort to create a common business language. The delegation impressed Charles A. Phillips, director of the Data System Research Staff at the DoD, who thought that they "thoroughly understood" the DoD's problems. The DoD operated 225 computers, had 175 more on order and had spent over $200 million on implementing programs to run on them. Portable programs would save time, reduce costs and ease modernization.
Charles Phillips agreed to sponsor the meeting and tasked the delegation with drafting the agenda.
COBOL 60.
On May 28 and 29, 1959 (exactly one year after the Zürich ALGOL 58 meeting), a meeting was held at the Pentagon to discuss the creation of a common programming language for business. It was attended by 41 people and was chaired by Phillips. The Department of Defense was concerned about whether it could run the same data processing programs on different computers. FORTRAN, the only mainstream language at the time, lacked the features needed to write such programs.
Representatives enthusiastically described a language that could work in a wide variety of environments, from banking and insurance to utilities and inventory control. They agreed unanimously that more people should be able to program and that the new language should not be restricted by the limitations of contemporary technology. A majority agreed that the language should make maximal use of English, be capable of change, be machine-independent, and be easy to use, even at the expense of power.
The meeting resulted in the creation of a steering committee and short, intermediate, and long-range committees. The short-range committee was given until September (three months) to produce specifications for an interim language, which would then be improved upon by the other committees. Their official mission, however, was to identify the strengths and weaknesses of existing programming languages; it did not explicitly direct them to create a new language.
The deadline was met with disbelief by the short-range committee. One member, Betty Holberton, described the three-month deadline as "gross optimism" and doubted that the language really would be a stopgap.
The steering committee met on June 4 and agreed to name the entire activity the "Committee on Data Systems Languages", or CODASYL, and to form an executive committee.
The short-range committee members represented six computer manufacturers and three government agencies. The computer manufacturers were Burroughs Corporation, IBM, Minneapolis-Honeywell (Honeywell Labs), RCA, Sperry Rand, and Sylvania Electric Products. The government agencies were the U.S. Air Force, the Navy's David Taylor Model Basin, and the National Bureau of Standards (now the National Institute of Standards and Technology). The committee was chaired by Joseph Wegstein of the U.S. National Bureau of Standards. Work began by investigating data descriptions, statements, existing applications, and user experiences.
The committee mainly examined the FLOW-MATIC, AIMACO, and COMTRAN programming languages. The FLOW-MATIC language was particularly influential because it had been implemented and because AIMACO was a derivative of it with only minor changes. FLOW-MATIC's inventor, Grace Hopper, also served as a technical adviser to the committee. FLOW-MATIC's major contributions to COBOL were long variable names, English words for commands, and the separation of data descriptions and instructions.
Hopper is sometimes called "the mother of COBOL" or "the grandmother of COBOL", although Jean Sammet, a lead designer of COBOL, said Hopper "was not the mother, creator or developer of Cobol."
IBM's COMTRAN language, invented by Bob Bemer, was regarded as a competitor to FLOW-MATIC by a short-range committee made up of colleagues of Grace Hopper. Some of its features were not incorporated into COBOL so that it would not look like IBM had dominated the design process, and Jean Sammet said in 1981 that there had been a "strong anti-IBM bias" from some committee members (herself included). In one case, after Roy Goldfinger, author of the COMTRAN manual and intermediate-range committee member, attended a subcommittee meeting to support his language and encourage the use of algebraic expressions, Grace Hopper sent a memo to the short-range committee reiterating Sperry Rand's efforts to create a language based on English.
In 1980, Grace Hopper commented that "COBOL 60 is 95% FLOW-MATIC" and that COMTRAN had had an "extremely small" influence. Furthermore, she said that she would claim that work was influenced by both FLOW-MATIC and COMTRAN only to "keep other people happy [so they] wouldn't try to knock us out".
Features from COMTRAN incorporated into COBOL included formulas, the PICTURE clause, an improved codice_0 statement, which obviated the need for GO TOs, and a more robust file management system.
The usefulness of the committee's work was subject of great debate. While some members thought the language had too many compromises and was the result of design by committee, others felt it was better than the three languages examined. Some felt the language was too complex; others, too simple.
Controversial features included those some considered useless or too advanced for data processing users. Such features included Boolean expressions, formulas and table "<dfn >subscripts</dfn>" (indices). Another point of controversy was whether to make keywords context-sensitive and the effect that would have on readability. Although context-sensitive keywords were rejected, the approach was later used in PL/I and partially in COBOL from 2002. Little consideration was given to interactivity, interaction with operating systems (few existed at that time) and functions (thought of as purely mathematical and of no use in data processing).
The specifications were presented to the executive committee on 4 September. They fell short of expectations: Joseph Wegstein noted that "it contains rough spots and requires some additions", and Bob Bemer later described them as a "hodgepodge". The subcommittee was given until December to improve it.
At a mid-September meeting, the committee discussed the new language's name. Suggestions included "BUSY" (Business System), "INFOSYL" (Information System Language) and "COCOSYL" (Common Computer Systems Language). It is unclear who coined the name "COBOL", although Bob Bemer later claimed it had been his suggestion.
In October, the intermediate-range committee received copies of the FACT language specification created by Roy Nutt. Its features impressed the committee so much that they passed a resolution to base COBOL on it.
This was a blow to the short-range committee, who had made good progress on the specification. Despite being technically superior, FACT had not been created with portability in mind or through manufacturer and user consensus. It also lacked a demonstrable implementation, allowing supporters of a FLOW-MATIC-based COBOL to overturn the resolution. RCA representative Howard Bromberg also blocked FACT, so that RCA's work on a COBOL implementation would not go to waste.
It soon became apparent that the committee was too large for any further progress to be made quickly. A frustrated Howard Bromberg bought a $15 tombstone with "COBOL" engraved on it and sent it to Charles Phillips to demonstrate his displeasure.
A sub-committee was formed to analyze existing languages; it was made up of six individuals: William Selden and Gertrude Tierney of IBM, Howard Bromberg and Howard Discount of RCA, and Vernon Reeves and Jean E. Sammet of Sylvania Electric Products.
The sub-committee did most of the work creating the specification, leaving the short-range committee to review and modify their work before producing the finished specification.
The specifications were approved by the executive committee on 8 January 1960, and sent to the government printing office, which printed them as "COBOL 60". The language's stated objectives were to allow efficient, portable programs to be easily written, to allow users to move to new systems with minimal effort and cost, and to be suitable for inexperienced programmers.
The CODASYL Executive Committee later created the COBOL Maintenance Committee to answer questions from users and vendors and to improve and expand the specifications.
During 1960, the list of manufacturers planning to build COBOL compilers grew. By September, five more manufacturers had joined CODASYL (Bendix, Control Data Corporation, General Electric (GE), National Cash Register and Philco), and all represented manufacturers had announced COBOL compilers. GE and IBM planned to integrate COBOL into their own languages, GECOM and COMTRAN, respectively. In contrast, International Computers and Tabulators planned to replace their language, CODEL, with COBOL.
Meanwhile, RCA and Sperry Rand worked on creating COBOL compilers. The first COBOL program ran on 17 August on an RCA 501. On 6 and 7 December, the same COBOL program (albeit with minor changes) ran on an RCA computer and a Remington-Rand Univac computer, demonstrating that compatibility could be achieved.
The relative influences of which languages were used continues to this day in the recommended advisory printed in all COBOL reference manuals:
<templatestyles src="Template:Blockquote/styles.css" />
COBOL-61 to COBOL-65.
<templatestyles src="Template:Quote_box/styles.css" />
It is rather unlikely that Cobol will be around by the end of the decade.
Anonymous, June 1960
Many logical flaws were found in "COBOL 60", leading General Electric's Charles Katz to warn that it could not be interpreted unambiguously. A reluctant short-term committee performed a total cleanup and, by March 1963, it was reported that COBOL's syntax was as definable as ALGOL's, although semantic ambiguities remained.
COBOL is a difficult language to write a compiler for, due to the large syntax and many optional elements within syntactic constructs as well as to the need to generate efficient code for a language with many possible data representations, implicit type conversions, and necessary set-ups for I/O operations. Early COBOL compilers were primitive and slow. A 1962 US Navy evaluation found compilation speeds of 3–11 statements per minute. By mid-1964, they had increased to 11–1000 statements per minute. It was observed that increasing memory would drastically increase speed and that compilation costs varied wildly: costs per statement were between $0.23 and $18.91.
In late 1962, IBM announced that COBOL would be their primary development language and that development of COMTRAN would cease.
The COBOL specification was revised three times in the five years after its publication. COBOL-60 was replaced in 1961 by COBOL-61. This was then replaced by the COBOL-61 Extended specifications in 1963, which introduced the sort and report writer facilities. The added facilities corrected flaws identified by Honeywell in late 1959 in a letter to the short-range committee. COBOL Edition 1965 brought further clarifications to the specifications and introduced facilities for handling mass storage files and tables.
COBOL-68.
Efforts began to standardize COBOL to overcome incompatibilities between versions. In late 1962, both ISO and the United States of America Standards Institute (now ANSI) formed groups to create standards. ANSI produced "USA Standard COBOL X3.23" in August 1968, which became the cornerstone for later versions. This version was known as American National Standard (ANS) COBOL and was adopted by ISO in 1972.
COBOL-74.
By 1970, COBOL had become the most widely used programming language in the world.
Independently of the ANSI committee, the CODASYL Programming Language Committee was working on improving the language. They described new versions in 1968, 1969, 1970 and 1973, including changes such as new inter-program communication, debugging and file merging facilities as well as improved string-handling and library inclusion features.
Although CODASYL was independent of the ANSI committee, the "CODASYL Journal of Development" was used by ANSI to identify features that were popular enough to warrant implementing. The Programming Language Committee also liaised with ECMA and the Japanese COBOL Standard committee.
The Programming Language Committee was not well-known, however. The vice-president, William Rinehuls, complained that two-thirds of the COBOL community did not know of the committee's existence. It also lacked the funds to make public documents, such as minutes of meetings and change proposals, freely available.
In 1974, ANSI published a revised version of (ANS) COBOL, containing new features such as file organizations, new statements and the segmentation module. Deleted features included the NOTE statement, the EXAMINE statement (which was replaced by INSPECT) and the implementer-defined random access module (which was superseded by the new sequential and relative I/O modules). These made up 44 changes, which rendered existing statements incompatible with the new standard. The report writer was slated to be removed from COBOL, but was reinstated before the standard was published. ISO later adopted the updated standard in 1978.
COBOL-85.
In June 1978, work began on revising COBOL-74. The proposed standard (commonly called COBOL-80) differed significantly from the previous one, causing concerns about incompatibility and conversion costs. In January 1981, Joseph T. Brophy, Senior Vice-president of Travelers Insurance, threatened to sue the standard committee because it was not upwards compatible with COBOL-74. Mr. Brophy described previous conversions of their 40-million-line code base as "non-productive" and a "complete waste of our programmer resources". Later that year, the Data Processing Management Association (DPMA) said it was "strongly opposed" to the new standard, citing "prohibitive" conversion costs and enhancements that were "forced on the user".
During the first public review period, the committee received 2,200 responses, of which 1,700 were negative form letters. Other responses were detailed analyses of the effect COBOL-80 would have on their systems; conversion costs were predicted to be at least 50 cents per line of code. Fewer than a dozen of the responses were in favor of the proposed standard.
ISO TC97-SC5 installed in 1979 the international COBOL Experts Group, on initiative of Wim Ebbinkhuijsen. The group consisted of COBOL experts from many countries, including the United States. Its goal was to achieve mutual understanding and respect between ANSI and the rest of the world with regard to the need of new COBOL features. After three years, ISO changed the status of the group to a formal Working Group: WG 4 COBOL. The group took primary ownership and development of the COBOL standard, where ANSI made most of the proposals.
In 1983, the DPMA withdrew its opposition to the standard, citing the responsiveness of the committee to public concerns. In the same year, a National Bureau of Standards study concluded that the proposed standard would present few problems. A year later, DEC released a VAX/VMS COBOL-80, and noted that conversion of COBOL-74 programs posed few problems. The new codice_1 statement and inline codice_2 were particularly well received and improved productivity, thanks to simplified control flow and debugging.
The second public review drew another 1,000 (mainly negative) responses, while the last drew just 25, by which time many concerns had been addressed.
In 1985, the ISO Working Group 4 accepted the then-version of the ANSI proposed standard, made several changes and set it as the new ISO standard COBOL 85. It was published in late 1985.
Sixty features were changed or deprecated and 115 were added, such as scope terminators (END-IF, END-PERFORM, END-READ and so on), nested subprograms, the CONTINUE, EVALUATE and INITIALIZE statements, inline PERFORM loop bodies, and reference modification, which allows access to substrings.
The new standard was adopted by all national standard bodies, including ANSI.
Two amendments followed in 1989 and 1993. The first amendment introduced intrinsic functions and the other provided corrections.
COBOL 2002 and object-oriented COBOL.
In 1997, Gartner Group estimated that there were a total of 200 billion lines of COBOL in existence, which ran 80% of all business programs.
In the early 1990s, work began on adding object-orientation in the next full revision of COBOL. Object-oriented features were taken from C++ and Smalltalk.
The initial estimate was to have this revision completed by 1997, and an ISO Committee Draft (CD) was available by 1997. Some vendors (including Micro Focus, Fujitsu, and IBM) introduced object-oriented syntax based on drafts of the full revision. The final approved ISO standard was approved and published in late 2002.
Fujitsu/GTSoftware and Micro Focus introduced object-oriented COBOL compilers targeting the .NET Framework.
There were many other new features, many of which had been in the "CODASYL COBOL Journal of Development" since 1978 and had missed the opportunity to be included in COBOL-85. These other features included free-form code, user-defined functions, recursion, locale-based processing, support for extended character sets such as Unicode, bit and Boolean data types, floating-point and binary data types, pointers, portable arithmetic results, the VALIDATE data-validation facility, and a standardized screen section for text-based user interfaces.
Three corrigenda were published for the standard: two in 2006 and one in 2009.
COBOL 2014.
Between 2003 and 2009, three technical reports were produced describing object finalization, XML processing and collection classes for COBOL.
COBOL 2002 suffered from poor support: no compilers completely supported the standard. Micro Focus found that it was due to a lack of user demand for the new features and due to the abolition of the NIST test suite, which had been used to test compiler conformance. The standardization process was also found to be slow and under-resourced.
COBOL 2014 includes the following changes: portable arithmetic results have been replaced by IEEE 754 data types; major features such as the VALIDATE facility, the report writer and the screen-handling facility have been made optional; method overloading has been added; and dynamic-capacity tables (a feature dropped from the draft of COBOL 2002) have been added.
COBOL 2023.
The COBOL 2023 standard added a few new features:
There is as yet no known complete implementation of this standard.
Legacy.
COBOL programs are used globally in governments and businesses and are running on diverse operating systems such as z/OS, z/VSE, VME, Unix, NonStop OS, OpenVMS and Windows. In 1997, the Gartner Group reported that 80% of the world's business ran on COBOL with over 200 billion lines of code and 5 billion lines more being written annually.
Near the end of the 20th century, the year 2000 problem (Y2K) was the focus of significant COBOL programming effort, sometimes by the same programmers who had designed the systems decades before. The particular level of effort required to correct COBOL code has been attributed to the large amount of business-oriented COBOL, as business applications use dates heavily, and to fixed-length data fields. Some studies attribute as much as "24% of Y2K software repair costs to Cobol". After the clean-up effort put into these programs for Y2K, a 2003 survey found that many remained in use. The authors said that the survey data suggest "a gradual decline in the importance of COBOL in application development over the [following] 10 years unless ... integration with other languages and technologies can be adopted".
In 2006 and 2012, "Computerworld" surveys (of 352 readers) found that over 60% of organizations used COBOL (more than C++ and Visual Basic .NET) and that for half of those, COBOL was used for the majority of their internal software. 36% of managers said they planned to migrate from COBOL, and 25% said that they would do so if not for the expense of rewriting legacy code. Alternatively, some businesses have migrated their COBOL programs from mainframes to cheaper, faster hardware.
Testimony before the House of Representatives in 2016 indicated that COBOL is still in use by many federal agencies. Reuters reported in 2017 that 43% of banking systems still used COBOL with over 220 billion lines of COBOL code in use.
By 2019, the number of COBOL programmers was shrinking fast due to retirements, leading to an impending skills gap in business and government organizations which still use mainframe systems for high-volume transaction processing. Efforts to rewrite systems in newer languages have proven expensive and problematic, as has the outsourcing of code maintenance, thus proposals to train more people in COBOL are advocated.
During the COVID-19 pandemic and the ensuing surge of unemployment, several US states reported a shortage of skilled COBOL programmers to support the legacy systems used for unemployment benefit management. Many of these systems had been in the process of conversion to more modern programming languages prior to the pandemic, but the process was put on hold. Similarly, the US Internal Revenue Service rushed to patch its COBOL-based Individual Master File in order to disburse the tens of millions of payments mandated by the Coronavirus Aid, Relief, and Economic Security Act.
Features.
Syntax.
COBOL has an English-like syntax, which is used to describe nearly everything in a program. For example, a condition can be expressed as x IS GREATER THAN y or more concisely as x GREATER y or x > y. More complex conditions can be abbreviated by removing repeated conditions and variables. For example, a > b AND a > c OR a = d can be shortened to a > b AND c OR = d. To support this syntax, COBOL has over 300 keywords. Some of the keywords are simple alternative or pluralized spellings of the same word, which provides for more grammatically appropriate statements and clauses; e.g., the VALUE and VALUES keywords can be used interchangeably, as can THRU and THROUGH, and ZERO, ZEROS and ZEROES.
Each COBOL program is made up of four basic lexical items: words, literals, picture character-strings (see below) and separators. Words include reserved words and user-defined identifiers. They are up to 31 characters long and may include letters, digits, hyphens and underscores. Literals include numerals (e.g. 12) and strings (e.g. "Hello!"). Separators include the space character and commas and semi-colons followed by a space.
A COBOL program is split into four divisions: the identification division, the environment division, the data division and the procedure division. The identification division specifies the name and type of the source element and is where classes and interfaces are specified. The environment division specifies any program features that depend on the system running it, such as files and character sets. The data division is used to declare variables and parameters. The procedure division contains the program's statements. Each division is sub-divided into sections, which are made up of paragraphs.
Metalanguage.
COBOL's syntax is usually described with a unique metalanguage using braces, brackets, bars and underlining. The metalanguage was developed for the original COBOL specifications. Although Backus–Naur form did exist at the time, the committee had not heard of it.
As an example, consider the following description of an codice_22 statement:
formula_0
This description permits the following variants:
ADD 1 TO x
ADD 1, a, b TO x ROUNDED, y, z ROUNDED
ADD a, b TO c
    ON SIZE ERROR
        DISPLAY "Error"
END-ADD
ADD a TO b
    NOT SIZE ERROR
        DISPLAY "No error"
    ON SIZE ERROR
        DISPLAY "Error"
END-ADD
Code format.
The height of COBOL's popularity coincided with the era of keypunch machines and punched cards. The program itself was written onto punched cards, then read in and compiled, and the data fed into the program was sometimes on cards as well.
COBOL can be written in two formats: fixed (the default) or free. In fixed-format, code must be aligned to fit in certain areas (a hold-over from using punched cards). Until COBOL 2002, these were the sequence number area (columns 1–6, originally used for card sequence numbers), the indicator area (column 7, for comment, continuation and debugging indicators), Area A (columns 8–11, for division, section and paragraph headers and other top-level entries), Area B (columns 12–72, for all other code) and the program name area (columns 73–80, which the compiler ignores).
In COBOL 2002, Areas A and B were merged to form the program-text area, which now ends at an implementor-defined column.
COBOL 2002 also introduced free-format code. Free-format code can be placed in any column of the file, as in newer programming languages. Comments are specified using codice_23, which can be placed anywhere and can also be used in fixed-format source code. Continuation lines are not present, and the codice_24 directive replaces the codice_25 indicator.
Identification division.
The identification division identifies the following code entity and contains the definition of a class or interface.
Object-oriented programming.
Classes and interfaces have been in COBOL since 2002. Classes have factory objects, containing class methods and variables, and instance objects, containing instance methods and variables. Inheritance and interfaces provide polymorphism. Support for generic programming is provided through parameterized classes, which can be instantiated to use any class or interface. Objects are stored as references which may be restricted to a certain type. There are two ways of calling a method: the INVOKE statement, which acts similarly to CALL, or through inline method invocation, which is analogous to using functions.
INVOKE my-class "foo" RETURNING var
MOVE my-class::"foo" TO var *> Inline method invocation
COBOL does not provide a way to hide methods. Class data can be hidden, however, by declaring it without a PROPERTY clause, which leaves external code with no way to access it. Method overloading was added in COBOL 2014.
Environment division.
The environment division contains the configuration section and the input-output section. The configuration section is used to specify variable features such as currency signs, locales and character sets. The input-output section contains file-related information.
Files.
COBOL supports three file formats, or "<dfn >organizations</dfn>": sequential, indexed and relative. In sequential files, records are contiguous and must be traversed sequentially, similarly to a linked list. Indexed files have one or more indexes which allow records to be randomly accessed and which can be sorted on them. Each record must have a unique key, but other, "<dfn >alternate</dfn>", record keys need not be unique. Implementations of indexed files vary between vendors, although common implementations, such as C-ISAM and VSAM, are based on IBM's ISAM. Other implementations are Record Management Services on OpenVMS and Enscribe on HPE NonStop (Tandem). Relative files, like indexed files, have a unique record key, but they do not have alternate keys. A relative record's key is its ordinal position; for example, the 10th record has a key of 10. This means that creating a record with a key of 5 may require the creation of (empty) preceding records. Relative files also allow for both sequential and random access.
A common non-standard extension is the "<dfn >line sequential</dfn>" organization, used to process text files. Records in a file are terminated by a newline and may be of varying length.
Data division.
The data division is split into six sections which declare different items: the file section, for file records; the working-storage section, for static variables; the local-storage section, for automatic variables; the linkage section, for parameters and the return value; the report section and the screen section, for text-based user interfaces.
Aggregated data.
Data items in COBOL are declared hierarchically through the use of level-numbers which indicate if a data item is part of another. An item with a higher level-number is subordinate to an item with a lower one. Top-level data items, with a level-number of 1, are called "<dfn >records</dfn>". Items that have subordinate aggregate data are called "<dfn >group items</dfn>"; those that do not are called "<dfn >elementary items</dfn>". Level-numbers used to describe standard data items are between 1 and 49.
01 some-record. *> Aggregate group record item
05 num PIC 9(10). *> Elementary item
05 the-date. *> Aggregate (sub)group record item
10 the-year PIC 9(4). *> Elementary item
10 the-month PIC 99. *> Elementary item
10 the-day PIC 99. *> Elementary item
In the above example, elementary item and group item are subordinate to the record , while elementary items , , and are part of the group item .
Subordinate items can be disambiguated with the (or ) keyword. For example, consider the example code above along with the following example:
01 sale-date.
05 the-year PIC 9(4).
05 the-month PIC 99.
05 the-day PIC 99.
The names , , and are ambiguous by themselves, since more than one data item is defined with those names. To specify a particular data item, for instance one of the items contained within the group, the programmer would use (or the equivalent ). This syntax is similar to the "dot notation" supported by most contemporary languages.
Other data levels.
A level-number of 66 is used to declare a re-grouping of previously defined items, irrespective of how those items are structured. This data level, also referred to by the associated <dfn >clause</dfn>, is rarely used and, circa 1988, was usually found in old programs. Its ability to ignore the hierarchical and logical structure of data meant its use was not recommended and many installations forbade its use.
01 customer-record.
05 cust-key PIC X(10).
05 cust-name.
10 cust-first-name PIC X(30).
10 cust-last-name PIC X(30).
05 cust-dob PIC 9(8).
05 cust-balance PIC 9(7)V99.
66 cust-personal-details RENAMES cust-name THRU cust-dob.
66 cust-all-details RENAMES cust-name THRU cust-balance.
A 77 level-number indicates the item is stand-alone, and in such situations is equivalent to the level-number 01. For example, the following code declares two 77-level data items, and , which are non-group data items that are independent of (not subordinate to) any other data items:
77 property-name PIC X(80).
77 sales-region PIC 9(5).
An 88 level-number declares a "<dfn >condition name</dfn>" (a so-called 88-level) which is true when its parent data item contains one of the values specified in its clause. For example, the following code defines two 88-level condition-name items that are true or false depending on the current character data value of the data item. When the data item contains a value of , the condition-name is true, whereas when it contains a value of or , the condition-name is true. If the data item contains some other value, both of the condition-names are false.
01 wage-type PIC X.
88 wage-is-hourly VALUE "H".
88 wage-is-yearly VALUE "S", "Y".
Data types.
Standard COBOL provides the following data types:
Type safety is variable in COBOL. Numeric data is converted between different representations and sizes silently and alphanumeric data can be placed in any data item that can be stored as a string, including numeric and group data. In contrast, object references and pointers may only be assigned from items of the same type and their values may be restricted to a certain type.
PICTURE clause.
A (or ) clause is a string of characters, each of which represents a portion of the data item and what it may contain. Some picture characters specify the type of the item and how many characters or digits it occupies in memory. For example, a indicates a decimal digit, and an indicates that the item is signed. Other picture characters (called "<dfn >insertion</dfn>" and "<dfn >editing</dfn>" characters) specify how an item should be formatted. For example, a series of characters define character positions as well as how a leading sign character is to be positioned within the final character data; the rightmost non-numeric character will contain the item's sign, while other character positions corresponding to a to the left of this position will contain a space. Repeated characters can be specified more concisely by specifying a number in parentheses after a picture character; for example, is equivalent to . Picture specifications containing only digit () and sign () characters define purely "<dfn >numeric</dfn>" data items, while picture specifications containing alphabetic () or alphanumeric () characters define "<dfn >alphanumeric</dfn>" data items. The presence of other formatting characters define "<dfn >edited numeric</dfn>" or "<dfn >edited alphanumeric</dfn>" data items.
USAGE clause.
The clause declares the format in which data is stored. Depending on the data type, it can either complement or be used instead of a clause. While it can be used to declare pointers and object references, it is mostly geared towards specifying numeric types. These numeric formats are:
Report writer.
The report writer is a declarative facility for creating reports. The programmer need only specify the report layout and the data required to produce it, freeing them from having to write code to handle things like page breaks, data formatting, and headings and footings.
Reports are associated with report files, which are files which may only be written to through report writer statements.
FD report-out REPORT sales-report.
Each report is defined in the report section of the data division. A report is split into report groups which define the report's headings, footings and details. Reports work around hierarchical "<dfn >control breaks</dfn>". Control breaks occur when a key variable changes its value; for example, when creating a report detailing customers' orders, a control break could occur when the program reaches a different customer's orders. Here is an example report description for a report which gives a salesperson's sales and which warns of any invalid records:
RD sales-report
PAGE LIMITS 60 LINES
FIRST DETAIL 3
CONTROLS seller-name.
01 TYPE PAGE HEADING.
03 COL 1 VALUE "Sales Report".
03 COL 74 VALUE "Page".
03 COL 79 PIC Z9 SOURCE PAGE-COUNTER.
01 sales-on-day TYPE DETAIL, LINE + 1.
03 COL 3 VALUE "Sales on".
03 COL 12 PIC 99/99/9999 SOURCE sales-date.
03 COL 21 VALUE "were".
03 COL 26 PIC $$$$9.99 SOURCE sales-amount.
01 invalid-sales TYPE DETAIL, LINE + 1.
03 COL 3 VALUE "INVALID RECORD:".
03 COL 19 PIC X(34) SOURCE sales-record.
01 TYPE CONTROL HEADING seller-name, LINE + 2.
03 COL 1 VALUE "Seller:".
03 COL 9 PIC X(30) SOURCE seller-name.
The above report description describes the following layout:
Four statements control the report writer: , which prepares the report writer for printing; , which prints a report group; , which suppresses the printing of a report group; and , which terminates report processing. For the above sales report example, the procedure division might look like this:
OPEN INPUT sales, OUTPUT report-out
INITIATE sales-report
PERFORM UNTIL 1 <> 1
READ sales
AT END
EXIT PERFORM
END-READ
VALIDATE sales-record
IF valid-record
GENERATE sales-on-day
ELSE
GENERATE invalid-sales
END-IF
END-PERFORM
TERMINATE sales-report
CLOSE sales, report-out
Use of the Report Writer facility tends to vary considerably; some organizations use it extensively and some not at all. In addition, implementations of Report Writer ranged in quality, with those at the lower end sometimes using excessive amounts of memory at runtime.
Procedure division.
Procedures.
The sections and paragraphs in the procedure division (collectively called procedures) can be used as labels and as simple subroutines. Unlike in other divisions, paragraphs do not need to be in sections.
Execution goes down through the procedures of a program until it is terminated.
To use procedures as subroutines, the verb is used.
A statement somewhat resembles a procedure call in newer languages in the sense that execution returns to the code following the statement at the end of the called code; however, it does not provide a mechanism for parameter passing or for returning a result value. If a subroutine is invoked using a simple statement like , then control returns at the end of the called procedure. However, is unusual in that it may be used to call a range spanning a sequence of several adjacent procedures. This is done with the construct:
PROCEDURE so-and-so.
PERFORM ALPHA
PERFORM ALPHA THRU GAMMA
STOP RUN.
ALPHA.
DISPLAY 'A'.
BETA.
DISPLAY 'B'.
GAMMA.
DISPLAY 'C'.
The output of this program will be: "A A B C".
also differs from conventional procedure calls in that there is, at least traditionally, no notion of a call stack. As a consequence, nested invocations are possible (a sequence of code being 'ed may execute a statement itself), but require extra care if parts of the same code are executed by both invocations. The problem arises when the code in the inner invocation reaches the exit point of the outer invocation. More formally, if control passes through the exit point of a invocation that was called earlier but has not yet completed, the COBOL 2002 standard stipulates that the behavior is undefined.
The reason is that COBOL, rather than a "return address", operates with what may be called a continuation address. When control flow reaches the end of any procedure, the continuation address is looked up and control is transferred to that address. Before the program runs, the continuation address for every procedure is initialized to the start address of the procedure that comes next in the program text so that, if no statements happen, control flows from top to bottom through the program. But when a statement executes, it modifies the continuation address of the called procedure (or the last procedure of the called range, if was used), so that control will return to the call site at the end. The original value is saved and is restored afterwards, but there is only one storage position. If two nested invocations operate on overlapping code, they may interfere with each other's management of the continuation address in several ways.
The following example (taken from ) illustrates the problem:
LABEL1.
DISPLAY '1'
PERFORM LABEL2 THRU LABEL3
STOP RUN.
LABEL2.
DISPLAY '2'
PERFORM LABEL3 THRU LABEL4.
LABEL3.
DISPLAY '3'.
LABEL4.
DISPLAY '4'.
One might expect that the output of this program would be "1 2 3 4 3": After displaying "2", the second causes "3" and "4" to be displayed, and then the first invocation continues on with "3". In traditional COBOL implementations, this is not the case. Rather, the first statement sets the continuation address at the end of so that it will jump back to the call site inside . The second statement sets the return at the end of but does not modify the continuation address of , expecting it to be the default continuation. Thus, when the inner invocation arrives at the end of , it jumps back to the outer statement, and the program stops having printed just "1 2 3". On the other hand, in some COBOL implementations like the open-source TinyCOBOL compiler, the two statements do not interfere with each other and the output is indeed "1 2 3 4 3". Therefore, the behavior in such cases is not only (perhaps) surprising, it is also not portable.
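The interference can be illustrated with a small executable model outside COBOL. The following Python sketch is a toy interpreter, not real COBOL semantics: the paragraph table and action tuples are invented for illustration, and the save/restore of the continuation address is simplified (the saved value is kept inside the planted marker rather than in a single storage position). Each paragraph has one continuation slot defaulting to the next paragraph in source order, and PERFORM overwrites the slot of the last paragraph in its range; running it on the example above prints ['1', '2', '3'], matching the traditional behavior described.
def run(paragraphs, order):
    # Continuation address of each paragraph: by default, the next paragraph
    # in source order (None after the last one).
    nxt = {p: (order[i + 1] if i + 1 < len(order) else None)
           for i, p in enumerate(order)}
    cont = dict(nxt)
    out = []
    pc = (order[0], 0)                      # (paragraph, statement index)
    while pc is not None:
        para, i = pc
        body = paragraphs[para]
        if i == len(body):                  # fell off the end of a paragraph
            target = cont[para]
            if isinstance(target, tuple):   # a return planted by PERFORM
                resume, saved = target
                cont[para] = saved          # restore the default continuation
                pc = resume
            else:
                pc = (target, 0) if target is not None else None
            continue
        op = body[i]
        if op[0] == "display":
            out.append(op[1])
            pc = (para, i + 1)
        elif op[0] == "stop":
            pc = None
        elif op[0] == "perform":            # PERFORM start THRU end
            _, start, end = op
            # Overwrite the continuation of the last paragraph in the range.
            cont[end] = ((para, i + 1), cont[end])
            pc = (start, 0)
    return out

paragraphs = {
    "LABEL1": [("display", "1"), ("perform", "LABEL2", "LABEL3"), ("stop",)],
    "LABEL2": [("display", "2"), ("perform", "LABEL3", "LABEL4")],
    "LABEL3": [("display", "3")],
    "LABEL4": [("display", "4")],
}
print(run(paragraphs, ["LABEL1", "LABEL2", "LABEL3", "LABEL4"]))   # ['1', '2', '3']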
A special consequence of this limitation is that cannot be used to write recursive code. Another simple example to illustrate this (slightly simplified from ):
MOVE 1 TO A
PERFORM LABEL
STOP RUN.
LABEL.
DISPLAY A
IF A < 3
ADD 1 TO A
PERFORM LABEL
END-IF
DISPLAY 'END'.
One might expect that the output is "1 2 3 END END END", and in fact that is what some COBOL compilers will produce. But other compilers, like IBM COBOL, will produce code that prints "1 2 3 END END END END ..." and so on, printing "END" over and over in an endless loop. Since there is limited space to store backup continuation addresses, the backups get overwritten in the course of recursive invocations, and all that can be restored is the jump back to .
Statements.
COBOL 2014 has 47 statements (also called "<dfn >verbs</dfn>"), which can be grouped into the following broad categories: control flow, I/O, data manipulation and the report writer. The report writer statements are covered in the report writer section.
Control flow.
COBOL's conditional statements are and . is a switch-like statement with the added capability of evaluating multiple values and conditions. This can be used to implement decision tables. For example, the following might be used to control a CNC lathe:
EVALUATE TRUE ALSO desired-speed ALSO current-speed
WHEN lid-closed ALSO min-speed THRU max-speed ALSO LESS THAN desired-speed
PERFORM speed-up-machine
WHEN lid-closed ALSO min-speed THRU max-speed ALSO GREATER THAN desired-speed
PERFORM slow-down-machine
WHEN lid-open ALSO ANY ALSO NOT ZERO
PERFORM emergency-stop
WHEN OTHER
CONTINUE
END-EVALUATE
The statement is used to define loops which are executed until a condition is true (not while true, which is more common in other languages). It is also used to call procedures or ranges of procedures (see the procedures section for more details). and call subprograms and methods, respectively. The name of the subprogram/method is contained in a string which may be a literal or a data item. Parameters can be passed by reference, by content (where a copy is passed by reference) or by value (but only if a prototype is available).
unloads subprograms from memory. causes the program to jump to a specified procedure.
The statement is a return statement and the statement stops the program. The statement has six different formats: it can be used as a return statement, a break statement, a continue statement, an end marker or to leave a procedure.
Exceptions are raised by a statement and caught with a handler, or "<dfn >declarative</dfn>", defined in the portion of the procedure division. Declaratives are sections beginning with a statement which specify the errors to handle. Exceptions can be names or objects. is used in a declarative to jump to the statement after the one that raised the exception or to a procedure outside the . Unlike other languages, uncaught exceptions may not terminate the program and the program can proceed unaffected.
I/O.
File I/O is handled by the self-describing , , , and statements along with a further three: , which updates a record; , which selects subsequent records to access by finding a record with a certain key; and , which releases a lock on the last record accessed.
User interaction is done using and .
Data manipulation.
The following verbs manipulate data:
Files and tables are sorted using and the verb merges and sorts files. The verb provides records to sort and retrieves sorted records in order.
Scope termination.
Some statements, such as and , may themselves contain statements. Such statements may be terminated in two ways: by a period ("<dfn >implicit termination</dfn>"), which terminates "all" unterminated statements contained, or by a scope terminator, which terminates the nearest matching open statement.
IF invalid-record
IF no-more-records
NEXT SENTENCE
ELSE
READ record-file
AT END SET no-more-records TO TRUE.
IF invalid-record
IF no-more-records
CONTINUE
ELSE
READ record-file
AT END SET no-more-records TO TRUE
END-READ
END-IF
END-IF
Nested statements terminated with a period are a common source of bugs. For example, examine the following code:
IF x
DISPLAY y.
DISPLAY z.
Here, the intent is to display codice_29 and codice_30 if condition codice_31 is true. However, codice_30 will be displayed whatever the value of codice_31 because the codice_0 statement is terminated by the erroneous period after the first DISPLAY statement.
Another bug is a result of the dangling else problem, when two codice_0 statements can associate with an codice_36.
IF x
IF y
DISPLAY a
ELSE
DISPLAY b.
In the above fragment, the codice_36 associates with the statement instead of the statement, causing a bug. Prior to the introduction of explicit scope terminators, preventing it would require to be placed after the inner codice_0.
Self-modifying code.
The original (1959) COBOL specification supported the infamous statement, for which many compilers generated self-modifying code. codice_39 and codice_40 are procedure labels, and the single statement in procedure codice_39 executed after such a statement means instead. Many compilers still support it, but it was deemed obsolete in the COBOL 1985 standard and deleted in 2002.
The statement was poorly regarded because it undermined "locality of context" and made a program's overall logic difficult to comprehend. As textbook author Daniel D. McCracken wrote in 1976, when "someone who has never seen the program before must become familiar with it as quickly as possible, sometimes under critical time pressure because the program has failed ... the sight of a GO TO statement in a paragraph by itself, signaling as it does the existence of an unknown number of ALTER statements at unknown locations throughout the program, strikes fear in the heart of the bravest programmer."
Hello, world.
A "Hello, World!" program in COBOL:
IDENTIFICATION DIVISION.
PROGRAM-ID. hello-world.
PROCEDURE DIVISION.
DISPLAY "Hello, world!"
When the now famous "Hello, World!" program example in "The C Programming Language" was first published in 1978, a similar mainframe COBOL program sample would have been submitted through JCL, very likely using a punch card reader and 80-column punch cards. The listing below, "with an empty DATA DIVISION", was tested using Linux and the System/370 Hercules emulator running MVS 3.8J. The JCL, written in July 2015, is derived from the Hercules tutorials and samples hosted by Jay Moseley. In keeping with COBOL programming of that era, HELLO, WORLD is displayed in all capital letters.
//COBUCLG JOB (001),'COBOL BASE TEST', 00010000
// CLASS=A,MSGCLASS=A,MSGLEVEL=(1,1) 00020000
//BASETEST EXEC COBUCLG 00030000
//COB.SYSIN DD * 00040000
00000* VALIDATION OF BASE COBOL INSTALL 00050000
01000 IDENTIFICATION DIVISION. 00060000
01100 PROGRAM-ID. 'HELLO'. 00070000
02000 ENVIRONMENT DIVISION. 00080000
02100 CONFIGURATION SECTION. 00090000
02110 SOURCE-COMPUTER. GNULINUX. 00100000
02120 OBJECT-COMPUTER. HERCULES. 00110000
02200 SPECIAL-NAMES. 00120000
02210 CONSOLE IS CONSL. 00130000
03000 DATA DIVISION. 00140000
04000 PROCEDURE DIVISION. 00150000
04100 00-MAIN. 00160000
04110 DISPLAY 'HELLO, WORLD' UPON CONSL. 00170000
04900 STOP RUN. 00180000
//LKED.SYSLIB DD DSNAME=SYS1.COBLIB,DISP=SHR 00190000
// DD DSNAME=SYS1.LINKLIB,DISP=SHR 00200000
//GO.SYSPRINT DD SYSOUT=A 00210000
// 00220000
After submitting the JCL, the MVS console displayed:
19.52.48 JOB 3 $HASP100 COBUCLG ON READER1 COBOL BASE TEST
19.52.48 JOB 3 IEF677I WARNING MESSAGE(S) FOR JOB COBUCLG ISSUED
19.52.48 JOB 3 $HASP373 COBUCLG STARTED - INIT 1 - CLASS A - SYS BSP1
19.52.48 JOB 3 IEC130I SYSPUNCH DD STATEMENT MISSING
19.52.48 JOB 3 IEC130I SYSLIB DD STATEMENT MISSING
19.52.48 JOB 3 IEC130I SYSPUNCH DD STATEMENT MISSING
19.52.48 JOB 3 IEFACTRT - Stepname Procstep Program Retcode
19.52.48 JOB 3 COBUCLG BASETEST COB IKFCBL00 RC= 0000
19.52.48 JOB 3 COBUCLG BASETEST LKED IEWL RC= 0000
19.52.48 JOB 3 +HELLO, WORLD
19.52.48 JOB 3 COBUCLG BASETEST GO PGM=*.DD RC= 0000
19.52.48 JOB 3 $HASP395 COBUCLG ENDED
"Line 10 of the console listing above is highlighted for effect, the highlighting is not part of the actual console output".
The associated compiler listing generated over four pages of technical detail and job run information, for the single line of output from the 14 lines of COBOL.
Reception.
Lack of structure.
In the 1970s, adoption of the structured programming paradigm was becoming increasingly widespread. Edsger Dijkstra, a preeminent computer scientist, wrote a letter to the editor of Communications of the ACM, published in 1975 entitled "How do we tell truths that might hurt?", in which he was critical of COBOL and several other contemporary languages; remarking that "the use of COBOL cripples the mind".
In a published dissent to Dijkstra's remarks, the computer scientist Howard E. Tompkins claimed that unstructured COBOL tended to be "written by programmers that have never had the benefit of structured COBOL taught well", arguing that the issue was primarily one of training.
One cause of spaghetti code was the statement. Attempts to remove s from COBOL code, however, resulted in convoluted programs and reduced code quality. s were largely replaced by the statement and procedures, which promoted modular programming and gave easy access to powerful looping facilities. However, could be used only with procedures so loop bodies were not located where they were used, making programs harder to understand.
COBOL programs were infamous for being monolithic and lacking modularization.
COBOL code could be modularized only through procedures, which were found to be inadequate for large systems. It was impossible to restrict access to data, meaning a procedure could access and modify any data item. Furthermore, there was no way to pass parameters to a procedure, an omission Jean Sammet regarded as the committee's biggest mistake.
Another complication stemmed from the ability to a specified sequence of procedures. This meant that control could jump to and return from any procedure, creating convoluted control flow and permitting a programmer to break the single-entry single-exit rule.
This situation improved as COBOL adopted more features. COBOL-74 added subprograms, giving programmers the ability to control the data each part of the program could access. COBOL-85 then added nested subprograms, allowing programmers to hide subprograms. Further control over data and code came in 2002 when object-oriented programming, user-defined functions and user-defined data types were included.
Nevertheless, much important legacy COBOL software uses unstructured code, which has become practically unmaintainable. It can be too risky and costly to modify even a simple section of code, since it may be used from unknown places in unknown ways.
Compatibility issues.
COBOL was intended to be a highly portable, "common" language. However, by 2001, around 300 dialects had been created. One source of dialects was the standard itself: the 1974 standard was composed of one mandatory nucleus and eleven functional modules, each containing two or three levels of support. This permitted 104,976 possible variants.
COBOL-85 was not fully compatible with earlier versions, and its development was controversial. Joseph T. Brophy, the CIO of Travelers Insurance, spearheaded an effort to inform COBOL users of the heavy reprogramming costs of implementing the new standard. As a result, the ANSI COBOL Committee received more than 2,200 letters from the public, mostly negative, requiring the committee to make changes. On the other hand, conversion to COBOL-85 was thought to increase productivity in future years, thus justifying the conversion costs.
Verbose syntax.
<templatestyles src="Template:Quote_box/styles.css" />
COBOL: /koh′bol/, n.
A weak, verbose, and flabby language used by code grinders to do boring mindless things on dinosaur mainframes. [...] Its very name is seldom uttered without ritual expressions of disgust or horror.
The Jargon File 4.4.8.
COBOL syntax has often been criticized for its verbosity. Proponents say that this was intended to make the code self-documenting, easing program maintenance. COBOL was also intended to be easy for programmers to learn and use, while still being readable to non-technical staff such as managers.
The desire for readability led to the use of English-like syntax and structural elements, such as nouns, verbs, clauses, sentences, sections, and divisions. Yet by 1984, maintainers of COBOL programs were struggling to deal with "incomprehensible" code and the main changes in COBOL-85 were there to help ease maintenance.
Jean Sammet, a short-range committee member, noted that "little attempt was made to cater to the professional programmer, in fact people whose main interest is programming tend to be very unhappy with COBOL" which she attributed to COBOL's verbose syntax.
Isolation from the computer science community.
The COBOL community has always been isolated from the computer science community. No academic computer scientists participated in the design of COBOL: all of those on the committee came from commerce or government. Computer scientists at the time were more interested in fields like numerical analysis, physics and system programming than the commercial file-processing problems which COBOL development tackled. Jean Sammet attributed COBOL's unpopularity to an initial "snob reaction" due to its inelegance, the lack of influential computer scientists participating in the design process and a disdain for business data processing. The COBOL specification used a unique "notation", or metalanguage, to define its syntax rather than the new Backus–Naur form which the committee did not know of. This resulted in "severe" criticism.
<templatestyles src="Template:Quote_box/styles.css" />
The academic world tends to regard COBOL as verbose, clumsy and inelegant, and tries to ignore it, although there are probably more COBOL programs and programmers in the world than there are for FORTRAN, ALGOL and PL/I combined. For the most part, only schools with an immediate vocational objective provide instruction in COBOL.
Richard Conway and David Gries, 1973
Later, COBOL suffered from a shortage of material covering it; it took until 1963 for introductory books to appear (with Richard D. Irwin publishing a college textbook on COBOL in 1966). By 1985, there were twice as many books on FORTRAN and four times as many on BASIC as on COBOL in the Library of Congress. University professors taught more modern, state-of-the-art languages and techniques instead of COBOL which was said to have a "trade school" nature. Donald Nelson, chair of the CODASYL COBOL committee, said in 1984 that "academics ... hate COBOL" and that computer science graduates "had 'hate COBOL' drilled into them".
By the mid-1980s, there was also significant condescension towards COBOL in the business community from users of other languages, for example FORTRAN or assembler, implying that COBOL could be used only for non-challenging problems.
In 2003, COBOL featured in 80% of information systems curricula in the United States, the same proportion as C++ and Java. Ten years later, a poll by Micro Focus found that 20% of university academics thought COBOL was outdated or dead and that 55% believed their students thought COBOL was outdated or dead. The same poll also found that only 25% of academics had COBOL programming on their curriculum even though 60% thought they should teach it.
Concerns about the design process.
Doubts have been raised about the competence of the standards committee. Short-term committee member Howard Bromberg said that there was "little control" over the development process and that it was "plagued by discontinuity of personnel and ... a lack of talent." Jean Sammet and Jerome Garfunkel also noted that changes introduced in one revision of the standard would be reverted in the next, due as much to changes in who was in the standard committee as to objective evidence.
COBOL standards have repeatedly suffered from delays: COBOL-85 arrived five years later than hoped, COBOL 2002 was five years late, and COBOL 2014 was six years late. To combat delays, the standard committee allowed the creation of optional addenda which would add features more quickly than by waiting for the next standard revision. However, some committee members raised concerns about incompatibilities between implementations and frequent modifications of the standard.
Influences on other languages.
COBOL's data structures influenced subsequent programming languages. Its record and file structure influenced PL/I and Pascal, and the codice_42 clause was a predecessor to Pascal's variant records. Explicit file structure definitions preceded the development of database management systems and aggregated data was a significant advance over Fortran's arrays.
codice_26 data declarations were incorporated into PL/I, with minor changes.
COBOL's facility, although considered "primitive", influenced the development of include directives.
The focus on portability and standardization meant programs written in COBOL could be portable and facilitated the spread of the language to a wide variety of hardware platforms and operating systems. Additionally, the well-defined division structure restricts the definition of external references to the Environment Division, which simplifies platform changes in particular.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\n \\begin{array}{l}\n \\underline{\\text{ADD}}\\,\n \\begin{Bmatrix}\n \\text{identifier-1} \\\\\n \\text{literal-1}\n \\end{Bmatrix}\\dots\n \\;\\underline{\\text{TO}}\\,\\left\\{\\text{identifier-2}\\,\\left[\\,\\underline{\\text{ROUNDED}}\\,\\right]\\right\\}\\dots \\\\[1em]\n \\quad\n \\left[\\left|\\begin{array}{l}\n \\text{ON}\\,\\underline{\\text{SIZE}}\\,\\underline{\\text{ERROR}}\\,\\text{imperative-statement-1} \\\\\n \\underline{\\text{NOT}}\\,\\text{ON}\\,\\underline{\\text{SIZE}}\\,\\underline{\\text{ERROR}}\\,\\text{imperative-statement-2}\n \\end{array}\\right|\\right] \\\\[1em]\n \\quad\n \\left[\\,\\underline{\\text{END-ADD}}\\,\\right]\n \\end{array}\n "
}
]
| https://en.wikipedia.org/wiki?curid=6799 |
6799095 | Flicker noise | Type of electronic noise
Flicker noise is a type of electronic noise with a 1/"f" power spectral density. It is therefore often referred to as 1/"f" noise or pink noise, though these terms have wider definitions. It occurs in almost all electronic devices and can show up with a variety of other effects, such as impurities in a conductive channel, generation and recombination noise in a transistor due to base current, and so on.
Properties.
1/"f" noise in current or voltage is usually related to a direct current, as resistance fluctuations are transformed to voltage or current fluctuations by Ohm's law. There is also a 1/"f" component in resistors with no direct current through them, likely due to temperature fluctuations modulating the resistance. This effect is not present in manganin, as it has negligible temperature coefficient of resistance.
In electronic devices, it shows up as a low-frequency phenomenon, as the higher frequencies are overshadowed by white noise from other sources. In oscillators, however, the low-frequency noise can be mixed up to frequencies close to the carrier, which results in oscillator phase noise.
Its contribution to total noise is characterized by the corner frequency "f"c between the low-frequency region dominated by flicker noise and the higher-frequency region dominated by the flat spectrum of white noise. MOSFETs have a high "f"c (can be in the GHz range). JFETs and BJTs have a lower "f"c around 1 kHz, but JFETs usually exhibit more flicker noise at low frequencies than BJTs, and can have "f"c as high as several kHz in JFETs not selected for flicker noise.
It typically has a Gaussian distribution and is time-reversible. It is generated by a linear mechanism in resistors and FETs, but by a non-linear mechanism in BJTs and diodes.
The spectral density of flicker-noise voltage in MOSFETs as a function of frequency "f" is often modeled as formula_0, where "K" is the process-dependent constant, formula_1 is the oxide capacitance, "W" and "L" are channel width and length respectively. This is an empirical model and generally thought to be an oversimplification.
Flicker noise is found in carbon-composition resistors and in thick-film resistors, where it is referred to as "excess noise", since it increases the overall noise level above the thermal noise level, which is present in all resistors. In contrast, wire-wound resistors have the least amount of flicker noise. Since flicker noise is related to the level of DC, if the current is kept low, thermal noise will be the predominant effect in the resistor, and the type of resistor used may not affect noise levels, depending on the frequency window.
Measurement.
The measurement of 1/"f" noise spectrum in voltage or current is done in the same way as the measurement of other types of noise. Sampling spectrum analyzers take a finite-time sample from the noise and calculate the Fourier transform by the FFT algorithm. Then, after calculating the squared absolute value of the Fourier spectrum, they calculate its average value by repeating this sampling process a sufficiently large number of times. The resulting pattern is proportional to the power-density spectrum of the measured noise. It is then normalized by the duration of the finite-time sample and also by a numerical constant on the order of 1 to get its exact value. This procedure gives correct spectral data only deeply within the frequency window determined by the reciprocal of the duration of the finite-time sample (low-frequency end) and the digital sampling rate of the noise (high-frequency end). Thus the upper and the lower half decades of the obtained power density spectrum are usually discarded from the spectrum. Conventional spectrum analyzers that sweep a narrow filtered band over the signal have good signal-to-noise ratio (SNR), since they are narrow-band instruments. These instruments do not operate at frequencies low enough to fully measure flicker noise. Sampling instruments are broadband, and hence noisier. They reduce the noise by taking multiple sample traces and averaging them. Conventional spectrum analyzers still have better SNR due to their narrow-band acquisition.
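The procedure can be sketched in Python with NumPy and SciPy; the synthesized test signal and all parameter values below are arbitrary illustrative choices, not part of any standard:
import numpy as np
from scipy.signal import welch

fs = 1000.0                      # sampling rate in Hz (arbitrary)
n = 2 ** 18
rng = np.random.default_rng(0)

# Synthesize approximate 1/f noise in the frequency domain: shape a white
# Gaussian spectrum with a 1/sqrt(f) amplitude envelope, then inverse-FFT.
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
spec = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
spec[1:] /= np.sqrt(freqs[1:])   # power density ~ 1/f
spec[0] = 0.0                    # drop the DC component
x = np.fft.irfft(spec, n)

# Averaged periodogram: finite-time segments, FFT, squared magnitude,
# averaged over many segments, i.e. the measurement procedure described above.
f, Pxx = welch(x, fs=fs, nperseg=4096)
# Away from the lowest and highest bins of the frequency window,
# Pxx is approximately proportional to 1/f.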
Removal in instrumentation and measurements.
For DC measurements 1/"f" noise can be particularly troublesome, as it is very significant at low frequencies, tending to infinity with integration/averaging at DC. At very low frequencies, the noise can be thought of as becoming drift, although the mechanisms causing drift are usually distinct from flicker noise.
One powerful technique involves moving the signal of interest to a higher frequency and using a phase-sensitive detector to measure it. For example, the signal of interest can be chopped at a chosen frequency, so that the signal chain carries an AC, not DC, signal. AC-coupled stages filter out the DC component, which also attenuates the flicker noise. A synchronous detector then samples the peaks of the AC signal, which are equivalent to the original DC value. In other words, the low-frequency signal is first shifted to a high frequency by multiplying it with a high-frequency carrier, and the result is fed to the device affected by the flicker noise. The output of the device is again multiplied with the same carrier, so the original information signal returns to baseband, while the flicker noise is shifted to a higher frequency, where it can easily be filtered out.
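A schematic NumPy sketch of this chopping and synchronous detection (the chopping frequency, drift and noise amplitudes are arbitrary illustrative values):
import numpy as np

fs = 10_000.0                                   # sampling rate, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)

signal = 0.5e-3                                 # slow/DC quantity to measure
f_chop = 1_000.0                                # chopping (carrier) frequency
carrier = np.sign(np.sin(2 * np.pi * f_chop * t))

# Low-frequency drift and white noise added *after* the signal is chopped,
# standing in for flicker noise and other disturbances in the signal chain.
drift = 2e-3 * np.sin(2 * np.pi * 0.3 * t)
measured = signal * carrier + drift + 1e-4 * rng.standard_normal(t.size)

# Synchronous (phase-sensitive) detection: multiply by the same carrier,
# then low-pass filter (here, simply average) to recover the baseband value.
estimate = (measured * carrier).mean()          # close to 0.5e-3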
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\tfrac{K}{C_\\text{ox}\\cdot W L f}"
},
{
"math_id": 1,
"text": "C_\\text{ox}"
}
]
| https://en.wikipedia.org/wiki?curid=6799095 |
6799273 | Grassmann's laws (color science) | Perception of color mixtures
Grassmann's laws describe empirical results about how the perception of mixtures of colored lights (i.e., lights that co-stimulate the same area on the retina) composed of different spectral power distributions can be algebraically related to one another in a color matching context. Discovered by Hermann Grassmann, these "laws" are actually principles used to predict color match responses to a good approximation under photopic and mesopic vision. A number of studies have examined how and why they provide poor predictions under specific conditions.
Modern interpretation.
The four laws are described in modern texts with varying degrees of algebraic notation and are summarized as follows (the precise numbering and corollary definitions can vary across sources):
These laws entail an algebraic representation of colored light. Assuming beam 1 and 2 each have a color, and the observer chooses formula_0 as the strengths of the primaries that match beam 1 and formula_1 as the strengths of the primaries that match beam 2, then if the two beams were combined, the matching values will be the sums of the components. Precisely, they will be formula_2, where:
formula_3
formula_4
formula_5
Grassmann's laws can be expressed in general form by stating that for a given color with a spectral power distribution formula_6 the RGB coordinates are given by:
formula_7
formula_8
formula_9
Observe that these are linear in formula_10; the functions formula_11 are the color matching functions with respect to the chosen primaries.
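The linearity can be checked numerically. In the following Python sketch, the matching-function curves and test spectra are made-up Gaussian placeholders rather than real colorimetric data; the point is only that the coordinates computed by the integrals above are additive:
import numpy as np

lam = np.arange(380.0, 781.0, 5.0)                    # wavelength grid in nm
dlam = lam[1] - lam[0]
rbar = np.exp(-0.5 * ((lam - 600.0) / 40.0) ** 2)     # placeholder matching
gbar = np.exp(-0.5 * ((lam - 550.0) / 40.0) ** 2)     # functions, not real
bbar = np.exp(-0.5 * ((lam - 450.0) / 40.0) ** 2)     # colorimetric data

def rgb(I):
    """R, G, B coordinates of a spectral power distribution I(lambda)."""
    return np.array([np.sum(I * rbar) * dlam,
                     np.sum(I * gbar) * dlam,
                     np.sum(I * bbar) * dlam])

I1 = np.exp(-0.5 * ((lam - 500.0) / 30.0) ** 2)       # two arbitrary beams
I2 = 0.7 * np.exp(-0.5 * ((lam - 620.0) / 25.0) ** 2)

# Additivity: the match for the combined beam is the sum of the matches.
assert np.allclose(rgb(I1 + I2), rgb(I1) + rgb(I2))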
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(R_1,G_1,B_1)"
},
{
"math_id": 1,
"text": "(R_2,G_2,B_2)"
},
{
"math_id": 2,
"text": "(R,G,B)"
},
{
"math_id": 3,
"text": "R= R_1+R_2\\,"
},
{
"math_id": 4,
"text": "G= G_1+G_2\\,"
},
{
"math_id": 5,
"text": "B= B_1+B_2\\,"
},
{
"math_id": 6,
"text": "I(\\lambda)"
},
{
"math_id": 7,
"text": "R= \\int_0^\\infty I(\\lambda)\\,\\bar r(\\lambda)\\,d\\lambda"
},
{
"math_id": 8,
"text": "G= \\int_0^\\infty I(\\lambda)\\,\\bar g(\\lambda)\\,d\\lambda"
},
{
"math_id": 9,
"text": "B= \\int_0^\\infty I(\\lambda)\\,\\bar b(\\lambda)\\,d\\lambda"
},
{
"math_id": 10,
"text": "I"
},
{
"math_id": 11,
"text": "\\bar r(\\lambda), \\bar g(\\lambda), \\bar b(\\lambda)"
}
]
| https://en.wikipedia.org/wiki?curid=6799273 |
67995256 | Identical-machines scheduling | Identical-machines scheduling is an optimization problem in computer science and operations research. We are given "n" jobs "J"1, "J"2, ..., "Jn" of varying processing times, which need to be scheduled on "m" identical machines, such that a certain objective function is optimized, for example, the makespan is minimized.
Identical machine scheduling is a special case of uniform machine scheduling, which is itself a special case of optimal job scheduling. In the general case, the processing time of each job may be different on different machines; in the case of identical machine scheduling, the processing time of each job is the same on each machine. Therefore, identical machine scheduling is equivalent to multiway number partitioning. A special case of identical machine scheduling is single-machine scheduling.
In the standard three-field notation for optimal job scheduling problems, the identical-machines variant is denoted by P in the first field. For example, "P||formula_0" is an identical machine scheduling problem with no constraints, where the goal is to minimize the maximum completion time.
In some variants of the problem, instead of minimizing the "maximum" completion time, it is desired to minimize the "average" completion time (averaged over all "n" jobs); it is denoted by P||formula_1. More generally, when some jobs are more important than others, it may be desired to minimize a "weighted average" of the completion time, where each job has a different weight. This is denoted by P||formula_2.
Algorithms.
Minimizing average and weighted-average completion time.
Minimizing the "average" completion time (P||formula_1) can be done in polynomial time. The SPT algorithm (Shortest Processing Time First), sorts the jobs by their length, shortest first, and then assigns them to the processor with the earliest end time so far. It runs in time O("n" log "n"), and minimizes the average completion time on identical machines, P||formula_1.
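A minimal Python sketch of SPT as just described (the function name is illustrative):
import heapq

def spt_total_completion_time(jobs, m):
    """Sort jobs shortest-first and assign each to the machine that will
    finish it earliest; returns the sum of completion times, which SPT
    minimizes on identical machines."""
    machines = [0.0] * m              # current finish time of each machine
    heapq.heapify(machines)
    total = 0.0
    for p in sorted(jobs):            # shortest processing time first
        finish = heapq.heappop(machines) + p
        total += finish
        heapq.heappush(machines, finish)
    return total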
Minimizing the "weighted average" completion time is NP-hard even on identical machines, by reduction from the knapsack problem. It is NP-hard even if the number of machines is fixed and at least 2, by reduction from the partition problem.
Sahni presents an exponential-time algorithm and a polynomial-time approximation scheme for solving both these NP-hard problems on identical machines:
Minimizing the maximum completion time (makespan).
Minimizing the "maximum" completion time (P||formula_0) is NP-hard even for "identical" machines, by reduction from the partition problem. Many exact and approximation algorithms are known.
Graham proved that any list scheduling algorithm, which processes the jobs in an arbitrary fixed order and assigns each job to the machine with the smallest current load, attains a makespan of at most formula_3 times the optimum. He also proved that the LPT algorithm (Longest Processing Time first), which applies the same rule after sorting the jobs by decreasing processing time, attains a makespan of at most formula_4 times the optimum.
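A minimal Python sketch of the greedy rule in its LPT form (the function name is illustrative; it returns the makespan of the schedule it builds):
import heapq

def lpt_makespan(jobs, m):
    """Assign jobs in decreasing order of length, always to the machine
    with the smallest current load (LPT list scheduling)."""
    loads = [0.0] * m
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return max(loads)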
Coffman, Garey and Johnson presented a different algorithm called multifit algorithm, using techniques from bin packing, which has an approximation factor of 13/11≈1.182.
Huang and Lu presented a simple polynomial-time algorithm that attains an 11/9≈1.222 approximation in time O("m" log "m" + "n"), through the more general problem of "maximin-share allocation of chores".
Sahni presented a PTAS that attains (1+ε)OPT in time formula_5. It is an FPTAS if "m" is fixed. For m=2, the run-time improves to formula_6. The algorithm uses a technique called "interval partitioning".
Hochbaum and Shmoys presented several approximation algorithms for any number of identical machines (even when the number of machines is not fixed):
Leung improved the run-time of this algorithm to formula_10.
Maximizing the minimum completion time.
Maximizing the minimum completion time (P||formula_11) is applicable when the "jobs" are actually spare parts that are required to keep the machines running, and they have different life-times. The goal is to keep machines running for as long as possible. The LPT algorithm attains at least formula_12 of the optimum.
Woeginger presented a PTAS that attains an approximation factor of formula_13 in time formula_14, where formula_15 is a huge constant that is exponential in the required approximation factor ε. The algorithm uses Lenstra's algorithm for integer linear programming.
General objective functions.
Alon, Azar, Woeginger and Yadid consider a more general objective function. Given a positive real function "f", which depends only on the completion times "Ci", they consider the objectives of minimizing formula_16, minimizing formula_17, maximizing formula_16, and maximizing formula_18. They prove that, if "f" is non-negative, convex, and satisfies a strong continuity assumption that they call "F*", then both minimization problems have a PTAS. Similarly, if "f" is non-negative, concave, and satisfies F*, then both maximization problems have a PTAS. In both cases, the run-time of the PTAS is O("n"), but with constants that are exponential in 1/"ε." | [
{
"math_id": 0,
"text": "C_\\max"
},
{
"math_id": 1,
"text": "\\sum C_i"
},
{
"math_id": 2,
"text": "\\sum w_i C_i"
},
{
"math_id": 3,
"text": "2-1/m"
},
{
"math_id": 4,
"text": "4/3-1/3m"
},
{
"math_id": 5,
"text": "O(n\\cdot (n^2 / \\epsilon)^{m-1})"
},
{
"math_id": 6,
"text": "O(n^2 / \\epsilon)"
},
{
"math_id": 7,
"text": "O(n(r+\\log{n}))"
},
{
"math_id": 8,
"text": "O(n(r m^4+\\log{n}))"
},
{
"math_id": 9,
"text": "O((n/\\varepsilon)^{(1/\\varepsilon^2)})"
},
{
"math_id": 10,
"text": "O\\left((n/\\varepsilon)^{(1/\\varepsilon)\\log{(1/\\varepsilon)}}\\right)"
},
{
"math_id": 11,
"text": "C_\\min"
},
{
"math_id": 12,
"text": "\\frac{3m-1}{4m-2}"
},
{
"math_id": 13,
"text": "1-{\\varepsilon}"
},
{
"math_id": 14,
"text": "O(c_{\\varepsilon}n\\log{k})"
},
{
"math_id": 15,
"text": "c_{\\varepsilon}"
},
{
"math_id": 16,
"text": "\\sum_{i=1}^m f(C_i)"
},
{
"math_id": 17,
"text": "\\max_{i=1}^m f(C_i)"
},
{
"math_id": 18,
"text": "\\min_{i=1}^m f(C_i)"
}
]
| https://en.wikipedia.org/wiki?curid=67995256 |
679987 | Euclidean division | Division with remainder of integers
In arithmetic, Euclidean division – or division with remainder – is the process of dividing one integer (the dividend) by another (the divisor), in a way that produces an integer quotient and a natural number remainder strictly smaller than the absolute value of the divisor. A fundamental property is that the quotient and the remainder exist and are unique, under some conditions. Because of this uniqueness, "Euclidean division" is often considered without referring to any method of computation, and without explicitly computing the quotient and the remainder. The methods of computation are called integer division algorithms, the best known of which being long division.
Euclidean division, and algorithms to compute it, are fundamental for many questions concerning integers, such as the Euclidean algorithm for finding the greatest common divisor of two integers, and modular arithmetic, for which only remainders are considered. The operation consisting of computing only the remainder is called the "modulo operation", and is used often in both mathematics and computer science.
Division theorem.
Euclidean division is based on the following result, which is sometimes called Euclid's division lemma.
Given two integers "a" and "b", with "b" ≠ 0, there exist unique integers "q" and "r" such that
"a" = "bq" + "r"
and
0 ≤ "r" < |"b"|,
where |"b"| denotes the absolute value of "b".
In the above theorem, each of the four integers has a name of its own: "a" is called the dividend, "b" is called the divisor, "q" is called the quotient and "r" is called the remainder.
The computation of the quotient and the remainder from the dividend and the divisor is called division, or in case of ambiguity, Euclidean division. The theorem is frequently referred to as the division algorithm (although it is a theorem and not an algorithm), because its proof as given below lends itself to a simple division algorithm for computing "q" and "r" (see the section Proof for more).
Division is not defined in the case where "b" = 0; see division by zero.
For the remainder and the modulo operation, there are conventions other than 0 ≤ "r" < |"b"|, see .
Generalization.
Although originally restricted to integers, Euclidean division and the division theorem can be generalized to univariate polynomials over a field and to Euclidean domains.
In the case of univariate polynomials, the main difference is that the inequalities formula_0 are replaced with
formula_1 or formula_2
where formula_3 denotes the polynomial degree.
In the generalization to Euclidean domains, the inequality becomes
formula_1 or formula_4
where formula_5 denote a specific function from the domain to the natural numbers called a "Euclidean function".
The uniqueness of the quotient and the remainder remains true for polynomials, but it is false in general.
History.
Although "Euclidean division" is named after Euclid, it seems that he did not know the existence and uniqueness theorem, and that the only computation method that he knew was the division by repeated subtraction.
Before the discovery of Hindu–Arabic numeral system, which was introduced in Europe during the 13th century by Fibonacci, division was extremely difficult, and only the best mathematicians were able to do it. Presently, most division algorithms, including long division, are based on this notation or its variants, such as binary numerals. A notable exception is Newton–Raphson division, which is independent from any numeral system.
The term "Euclidean division" was introduced during the 20th century as a shorthand for "division of Euclidean rings". It has been rapidly adopted by mathematicians for distinguishing this division from the other kinds of division of numbers.
Intuitive example.
Suppose that a pie has 9 slices and they are to be divided evenly among 4 people. Using Euclidean division, 9 divided by 4 is 2 with remainder 1. In other words, each person receives 2 slices of pie, and there is 1 slice left over.
This can be confirmed using multiplication, the inverse of division: if each of the 4 people received 2 slices, then 4 × 2 = 8 slices were given out in total. Adding the 1 slice remaining, the result is 9 slices. In summary: 9 = 4 × 2 + 1.
In general, if the number of slices is denoted formula_6 and the number of people is denoted "formula_7", then one can divide the pie evenly among the people such that each person receives "formula_8" slices (the quotient), with some number of slices formula_9 being the leftover (the remainder). In this case, the equation formula_10 holds.
If 9 slices were divided among 3 people instead of 4, then each would receive 3 and no slice would be left over, which means that the remainder would be zero, leading to the conclusion that 3 "evenly divides" 9, or that 3 "divides" 9.
Euclidean division can also be extended to negative dividend (or negative divisor) using the same formula; for example −9 = 4 × (−3) + 3, which means that −9 divided by 4 is −3 with remainder 3.
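For comparison, Python's built-in floor division yields exactly this quotient and remainder whenever the divisor is positive; for a negative divisor its convention differs from the Euclidean one, because Python's remainder takes the sign of the divisor:
>>> divmod(9, 4)
(2, 1)
>>> divmod(-9, 4)        # matches -9 = 4 * (-3) + 3
(-3, 3)
>>> divmod(9, -4)        # Python gives 9 = (-4) * (-3) + (-3),
(-3, -3)                 # whereas Euclidean division gives q = -2, r = 1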
Proof.
The following proof of the division theorem relies on the fact that a decreasing sequence of non-negative integers stops eventually. It is separated into two parts: one for existence and another for uniqueness of "formula_8" and "formula_11". Other proofs use the well-ordering principle (i.e., the assertion that every non-empty set of non-negative integers has a smallest element) to make the reasoning simpler, but have the disadvantage of not providing directly an algorithm for solving the division (see for more).
Existence.
For proving the existence of Euclidean division, one can suppose formula_12 since, if formula_13 the equality formula_14 can be rewritten formula_15 So, if the latter equality is a Euclidean division with formula_16 the former is also a Euclidean division.
Given formula_17 and formula_18 there are integers formula_19 and formula_20 such that formula_21 for example, formula_22 and formula_23 if formula_24 and otherwise formula_25 and formula_26
Let formula_8 and formula_11 be such a pair of numbers for which formula_11 is nonnegative and minimal. If formula_27 we have Euclidean division. Thus, we have to prove that, if formula_28 then formula_11 is not minimal. Indeed, if formula_28 one has formula_29 with formula_30 so formula_11 is not minimal.
This proves the existence in all cases. This provides also an algorithm for computing the quotient and the remainder, by starting from formula_31 (if formula_32) and adding formula_33 to it until formula_34 However, this algorithm is not efficient, since its number of steps is of the order of formula_35
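The proof translates directly into the following Python routine (an illustrative sketch of the inefficient repeated-subtraction algorithm, not a practical division method):
def euclidean_division(a, b):
    """Return (q, r) with a == b*q + r and 0 <= r < abs(b),
    by repeated subtraction; takes on the order of |a/b| steps."""
    if b == 0:
        raise ZeroDivisionError("division by zero is undefined")
    if b < 0:                    # a = b*q + r is the same as a = (-b)*(-q) + r
        q, r = euclidean_division(a, -b)
        return -q, r
    q, r = 0, a
    while r < 0:                 # first reach a pair with nonnegative remainder
        q -= 1
        r += b
    while r >= b:                # then shrink the remainder until it is minimal
        q += 1
        r -= b
    return q, r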
Uniqueness.
The pair of integers "r" and "q" such that "a" = "bq" + "r" is unique, in the sense that there can be no other pair of integers that satisfy the same condition in the Euclidean division theorem. In other words, if we have another division of "a" by "b", say "a" = "bq' " + "r"' with 0 ≤ "r' " < |"b"|, then we must have that
"q' " = "q" and "r' " = "r".
To prove this statement, we first start with the assumptions that
0 ≤ "r " < |"b"|
0 ≤ "r' " < |"b"|
"a" = "bq" + "r"
"a" = "bq' " + "r' "
Subtracting the two equations yields
"b"("q" – "q′") = "r′" – "r".
So "b" is a divisor of "r′" – "r". As |"r′" – "r"| < |"b"| by the above inequalities, one gets
"r′" – "r" = 0,
and
"b"("q" – "q′") = 0.
Since "b" ≠ 0, we get that "r" = "r′" and "q" = "q′", which proves the uniqueness part of the Euclidean division theorem.
Effectiveness.
In general, an existence proof does not provide an algorithm for computing the existing quotient and remainder, but the above proof does immediately provide an algorithm (see Division algorithm#Division by repeated subtraction), even though it is not a very efficient one as it requires as many steps as the size of the quotient. This is related to the fact that it uses only additions, subtractions and comparisons of integers, without involving multiplication, nor any particular representation of the integers such as decimal notation.
In terms of decimal notation, long division provides a much more efficient algorithm for solving Euclidean divisions. Its generalization to binary and hexadecimal notation provides further flexibility and possibility for computer implementation. However, for large inputs, algorithms that reduce division to multiplication, such as Newton–Raphson, are usually preferred, because they only need a time which is proportional to the time of the multiplication needed to verify the result—independently of the multiplication algorithm which is used (for more, see Division algorithm#Fast division methods).
Variants.
The Euclidean division admits a number of variants, some of which are listed below.
Other intervals for the remainder.
In Euclidean division with d as divisor, the remainder is supposed to belong to the interval [0, "d") of length |"d"|. Any other interval of the same length may be used. More precisely, given integers formula_36, formula_6, formula_37 with formula_38, there exist unique integers formula_8 and formula_11 with formula_39 such that formula_40.
In particular, if formula_41 then formula_42. This division is called the "centered division", and its remainder formula_11 is called the "centered remainder" or the "least absolute remainder".
This is used for approximating real numbers: Euclidean division defines truncation, and centered division defines rounding.
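Both the ordinary and the centered variants reduce to one shifted use of ordinary division, as in the following Python sketch (the helper name is illustrative):
def division_with_offset(a, m, d):
    """Unique (q, r) with a == m*q + r and d <= r < m + d, for m > 0.
    d = 0 gives ordinary Euclidean division; d = -(m // 2) gives the
    centered division described above (least absolute remainder)."""
    q, r = divmod(a - d, m)      # 0 <= r < m, hence d <= r + d < m + d
    return q, r + d

assert division_with_offset(7, 4, 0) == (1, 3)            # 7 = 4*1 + 3
assert division_with_offset(7, 4, -(4 // 2)) == (2, -1)   # 7 = 4*2 - 1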
Montgomery division.
Given integers formula_6, formula_36 and formula_43 with formula_44 and formula_45 let formula_46 be the modular multiplicative inverse of formula_47 (i.e., formula_48 with formula_49 being a multiple of formula_36), then there exist unique integers formula_8 and formula_11 with formula_50 such that formula_51.
This result generalizes Hensel's odd division (1900).
The value formula_11 is the "N"-residue defined in Montgomery reduction.
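A short Python sketch of this computation (illustrative; it relies on the three-argument pow available since Python 3.8 for the modular inverse):
def montgomery_division(a, m, R):
    """Unique (q, r) with 0 <= r < m and a == m*q + R_inv*r,
    where R_inv is the inverse of R modulo m (m > 0, gcd(R, m) = 1)."""
    R_inv = pow(R, -1, m)        # modular inverse of R, reduced modulo m
    r = (a * R) % m              # R_inv*r == a (mod m) forces r == a*R (mod m)
    q = (a - R_inv * r) // m     # exact: a - R_inv*r is a multiple of m
    assert a == m * q + R_inv * r
    return q, r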
In Euclidean domains.
Euclidean domains (also known as Euclidean rings) are defined as integral domains which support the following generalization of Euclidean division:
Given an element "a" and a non-zero element "b" in a Euclidean domain "R" equipped with a Euclidean function "d" (also known as a Euclidean valuation or degree function), there exist "q" and "r" in "R" such that "a" = "bq" + "r" and either "r" = 0 or "d"("r") < "d"("b").
Uniqueness of "q" and "r" is not required. It occurs only in exceptional cases, typically for univariate polynomials, and for integers, if the further condition "r" ≥ 0 is added.
Examples of Euclidean domains include fields, polynomial rings in one variable over a field, and the Gaussian integers. The Euclidean division of polynomials has been the object of specific developments.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "0\\le r<|b|"
},
{
"math_id": 1,
"text": "r = 0"
},
{
"math_id": 2,
"text": "\\deg r < \\deg b,"
},
{
"math_id": 3,
"text": "\\deg"
},
{
"math_id": 4,
"text": "f(r) < f(b),"
},
{
"math_id": 5,
"text": "f"
},
{
"math_id": 6,
"text": "a"
},
{
"math_id": 7,
"text": "b"
},
{
"math_id": 8,
"text": "q"
},
{
"math_id": 9,
"text": "r < b"
},
{
"math_id": 10,
"text": "a = bq + r"
},
{
"math_id": 11,
"text": "r"
},
{
"math_id": 12,
"text": "b > 0,"
},
{
"math_id": 13,
"text": "b < 0,"
},
{
"math_id": 14,
"text": "a=bq+r"
},
{
"math_id": 15,
"text": "a = (-b)(-q) + r."
},
{
"math_id": 16,
"text": "-b > 0,"
},
{
"math_id": 17,
"text": "b > 0"
},
{
"math_id": 18,
"text": "a,"
},
{
"math_id": 19,
"text": "q_1"
},
{
"math_id": 20,
"text": "r_1 \\ge 0"
},
{
"math_id": 21,
"text": "a = bq_1 + r_1;"
},
{
"math_id": 22,
"text": "q_1 = 0"
},
{
"math_id": 23,
"text": "r_1 = a"
},
{
"math_id": 24,
"text": "a \\ge 0,"
},
{
"math_id": 25,
"text": "q_1 = a"
},
{
"math_id": 26,
"text": "r_1 = a - ab."
},
{
"math_id": 27,
"text": "r < b."
},
{
"math_id": 28,
"text": "r \\ge b,"
},
{
"math_id": 29,
"text": "a = b(q+1) + (r-b),"
},
{
"math_id": 30,
"text": "0 \\le r-b < r,"
},
{
"math_id": 31,
"text": "q = 0"
},
{
"math_id": 32,
"text": "a \\ge 0"
},
{
"math_id": 33,
"text": "1"
},
{
"math_id": 34,
"text": "a-bq < b."
},
{
"math_id": 35,
"text": "a/b"
},
{
"math_id": 36,
"text": "m"
},
{
"math_id": 37,
"text": "d"
},
{
"math_id": 38,
"text": "m>0"
},
{
"math_id": 39,
"text": "d \\le r < m+d "
},
{
"math_id": 40,
"text": "a = mq+r"
},
{
"math_id": 41,
"text": " d=- \\left\\lfloor \\frac{m}{2} \\right\\rfloor "
},
{
"math_id": 42,
"text": " - \\left\\lfloor \\frac{m}{2} \\right\\rfloor \\le r < m-\\left\\lfloor \\frac{m}{2} \\right\\rfloor "
},
{
"math_id": 43,
"text": "R,"
},
{
"math_id": 44,
"text": "m >0"
},
{
"math_id": 45,
"text": "\\gcd(R,m) =1,"
},
{
"math_id": 46,
"text": "R^{-1}"
},
{
"math_id": 47,
"text": "R"
},
{
"math_id": 48,
"text": " 0<R^{-1}<m"
},
{
"math_id": 49,
"text": "R^{-1}R-1"
},
{
"math_id": 50,
"text": "0 \\le r < m "
},
{
"math_id": 51,
"text": " a = mq+R^{-1} \\cdot r "
}
]
| https://en.wikipedia.org/wiki?curid=679987 |
68011960 | Condensed mathematics | Area of mathematics using condensed sets
Condensed mathematics is a theory developed by Dustin Clausen and Peter Scholze which, according to some, aims to unify various mathematical subfields, including topology, complex geometry, and algebraic geometry.
Idea.
The fundamental idea in the development of the theory is given by replacing topological spaces by "condensed sets", defined below. The category of condensed sets, as well as related categories such as that of condensed abelian groups, are much better behaved than the category of topological spaces. In particular, unlike the category of topological abelian groups, the category of condensed abelian groups is an abelian category, which allows for the use of tools from homological algebra in the study of those structures.
The framework of condensed mathematics turns out to be general enough that, by considering various "spaces" with sheaves valued in condensed algebras, one is able to incorporate algebraic geometry, p-adic analytic geometry and complex analytic geometry.
Definition.
A "condensed set" is a sheaf of sets on the site of profinite sets, with the Grothendieck topology given by finite, jointly surjective collections of maps. Similarly, a "condensed group", "condensed ring", etc. is defined as a sheaf of groups, rings etc. on this site.
To any topological space formula_0 one can associate a condensed set, customarily denoted formula_1, which to any profinite set formula_2 associates the set of continuous maps formula_3. If formula_0 is a topological group or ring, then formula_1 is a condensed group or ring.
History.
In 2013, Bhargav Bhatt and Peter Scholze introduced a general notion of "pro-étale site" associated to an arbitrary scheme. In 2018, together with Dustin Clausen, they arrived at the conclusion that already the pro-étale site of a single point, which is isomorphic to the site of profinite sets introduced above, has rich enough structure to realize large classes of topological spaces as sheaves on it. Further developments have led to a theory of condensed sets and "solid abelian groups", through which one is able to incorporate non-Archimedean geometry into the theory.
In 2020 Scholze completed a proof of a result which would enable the incorporation of functional analysis as well as complex geometry into the condensed mathematics framework, using the notion of "liquid vector spaces". The argument has turned out to be quite subtle, and to get rid of any doubts about the validity of the result, he asked other mathematicians to provide a formalized and verified proof. Over a 6-month period, a group led by Johan Commelin verified the central part of the proof using the proof assistant Lean. As of 14 July 2022, the proof has been completed.
Coincidentally, in 2019 Barwick and Haine introduced a very similar theory of "pyknotic objects". This theory is very closely related to that of condensed sets, with the main differences being set-theoretic in nature: pyknotic theory depends on a choice of Grothendieck universes, whereas condensed mathematics can be developed strictly within ZFC. | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "\\underline X"
},
{
"math_id": 2,
"text": "S"
},
{
"math_id": 3,
"text": "S\\to X"
}
]
| https://en.wikipedia.org/wiki?curid=68011960 |
68013738 | Source unfolding | In computational geometry, the source unfolding of a convex polyhedron is a net obtained by cutting the polyhedron along the cut locus of a point on the surface of the polyhedron. The cut locus of a point formula_0 consists of all points on the surface that have two or more shortest geodesics to formula_0. For every convex polyhedron, and every choice of the point formula_0 on its surface, cutting the polyhedron on the cut locus will produce a result that can be unfolded into a flat plane, producing the source unfolding. The resulting net may, however, cut across some of the faces of the polyhedron rather than only cutting along its edges.
The source unfolding can also be continuously transformed from the polyhedron to its flat net, keeping flat the parts of the net that do not lie along edges of the polyhedron, as a blooming of the polyhedron. The unfolded shape of the source unfolding is always a star-shaped polygon, with all of its points visible by straight line segments from the image of formula_0; this is in contrast to the star unfolding, a different method for producing nets that does not always produce star-shaped polygons.
An analogous unfolding method can be applied to any higher-dimensional convex polytope, cutting the surface of the polytope into a net that can be unfolded into a flat hyperplane.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p"
}
]
| https://en.wikipedia.org/wiki?curid=68013738 |
6801871 | Ribbon Hopf algebra | Algebraic structure
A ribbon Hopf algebra formula_0 is a quasitriangular Hopf algebra which possesses an invertible central element formula_1, more commonly known as the ribbon element, such that the following conditions hold:
formula_2
formula_3
where formula_4. Note that the element "u" exists for any quasitriangular Hopf algebra, and
formula_5 must always be central and satisfies formula_6, so that all that is required is that it have a central square root with the above properties.
Here
formula_7 is a vector space
formula_8 is the multiplication map formula_9
formula_10 is the co-product map formula_11
formula_12 is the unit operator formula_13
formula_14 is the co-unit operator formula_15
formula_16 is the antipode formula_17
formula_18 is a universal R matrix
We assume that the underlying field formula_19 is formula_20
If formula_7 is finite-dimensional, one could equivalently call it "ribbon Hopf" if and only if its category of (say, left) modules is ribbon; if formula_7 is finite-dimensional and quasi-triangular, then it is ribbon if and only if its category of (say, left) modules is pivotal. | [
{
"math_id": 0,
"text": "(A,\\nabla, \\eta,\\Delta,\\varepsilon,S,\\mathcal{R},\\nu)"
},
{
"math_id": 1,
"text": "\\nu"
},
{
"math_id": 2,
"text": "\\nu^{2}=uS(u), \\; S(\\nu)=\\nu, \\; \\varepsilon (\\nu)=1"
},
{
"math_id": 3,
"text": "\\Delta (\\nu)=(\\mathcal{R}_{21}\\mathcal{R}_{12})^{-1}(\\nu \\otimes \\nu )"
},
{
"math_id": 4,
"text": "u=\\nabla(S\\otimes \\text{id})(\\mathcal{R}_{21})"
},
{
"math_id": 5,
"text": "uS(u)"
},
{
"math_id": 6,
"text": "S(uS(u))=uS(u), \\varepsilon(uS(u))=1, \\Delta(uS(u)) = \n(\\mathcal{R}_{21}\\mathcal{R}_{12})^{-2}(uS(u) \\otimes uS(u))"
},
{
"math_id": 7,
"text": " A "
},
{
"math_id": 8,
"text": " \\nabla "
},
{
"math_id": 9,
"text": "\\nabla:A \\otimes A \\rightarrow A"
},
{
"math_id": 10,
"text": " \\Delta "
},
{
"math_id": 11,
"text": "\\Delta: A \\rightarrow A \\otimes A"
},
{
"math_id": 12,
"text": " \\eta "
},
{
"math_id": 13,
"text": "\\eta:\\mathbb{C} \\rightarrow A"
},
{
"math_id": 14,
"text": " \\varepsilon "
},
{
"math_id": 15,
"text": "\\varepsilon: A \\rightarrow \\mathbb{C}"
},
{
"math_id": 16,
"text": " S "
},
{
"math_id": 17,
"text": "S: A\\rightarrow A"
},
{
"math_id": 18,
"text": "\\mathcal{R}"
},
{
"math_id": 19,
"text": "K"
},
{
"math_id": 20,
"text": "\\mathbb{C}"
}
]
| https://en.wikipedia.org/wiki?curid=6801871 |
68021560 | Aleksei Chernavskii | Russian mathematician (1938–2023)
Aleksei Viktorovich Chernavskii (or Chernavsky or Černavskii) (; 17 January 1938 – 22 December 2023) was a Russian mathematician, specializing in differential geometry and topology.
Biography.
Chernavskii was born in Moscow and completed undergraduate study at the Faculty of Mechanics and Mathematics of Moscow State University in 1959. He enrolled in graduate school at the Steklov Institute of Mathematics. In 1964 he defended his Candidate of Sciences (PhD) thesis, written under the guidance of Lyudmila Keldysh, on the topic Конечнократные отображения многообразий (Finite-fold mappings of manifolds). In 1970 he defended his Russian Doctor of Sciences (habilitation) thesis Гомеоморфизмы и топологические вложения многообразий (Homeomorphisms and topological embeddings of manifolds). In 1970 he was an Invited Speaker at the International Congress of Mathematicians in Nice.
Chernavskii worked as a senior researcher at the Steklov Institute until 1973 and from 1973 to 1980 at Yaroslavl State University. From 1980 to 1985 he was a senior researcher at the Moscow Institute of Physics and Technology.
From 1985 he was employed at the Kharkevich Institute for Information Transmission Problems of the Russian Academy of Sciences. From 1993 he was working part-time as a professor at the Department of Higher Geometry and Topology, Faculty of Mechanics and Mathematics, Moscow State University. He wrote a textbook on differential geometry for advanced students.
Chernavskii died on 22 December 2023, at the age of 85.
Chernavskii's theorem.
Chernavskii's theorem (1964): If formula_0 and formula_1 are "n"-manifolds and formula_2 is a discrete, open, continuous mapping of formula_0 into formula_1, then the branch set formula_3formula_2 = { x: x is an element of formula_0 and formula_2 fails to be a local homeomorphism at x} satisfies dimension (formula_3formula_2) ≤ "n" – 2.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "B"
}
]
| https://en.wikipedia.org/wiki?curid=68021560 |
680237 | Givens rotation | Concept in numerical linear algebra
In numerical linear algebra, a Givens rotation is a rotation in the plane spanned by two coordinate axes. Givens rotations are named after Wallace Givens, who introduced them to numerical analysts in the 1950s while he was working at Argonne National Laboratory.
Matrix representation.
A Givens rotation is represented by a matrix of the form
formula_0
where "c" = cos "θ" and "s" = sin "θ" appear at the intersections ith and jth rows and columns. That is, for fixed i > j, the non-zero elements of Givens matrix are given by:
formula_1
The product "G"("i", "j", "θ")x represents a counterclockwise rotation of the vector x in the ("i", "j") plane of θ radians, hence the name Givens rotation.
The main use of Givens rotations in numerical linear algebra is to transform vectors or matrices into a special form with zeros in certain coefficients. This effect can, for example, be employed for computing the QR decomposition of a matrix. One advantage over Householder transformations is that they can easily be parallelised, and another is that often for very sparse matrices they have a lower operation count.
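For concreteness, a short Python/NumPy sketch of this construction is given below. It uses 0-based indices and the sign pattern shown above; the function name and the example values are chosen here purely for illustration.
import numpy as np

def givens_matrix(n, i, j, theta):
    # n x n Givens rotation G(i, j, theta) with i > j (0-based indices).
    # Entries follow the pattern above: c at positions (i, i) and (j, j),
    # -s at position (j, i) and s at position (i, j).
    c, s = np.cos(theta), np.sin(theta)
    G = np.eye(n)
    G[i, i] = c
    G[j, j] = c
    G[j, i] = -s
    G[i, j] = s
    return G

# Applying G(2, 0, pi/4) to a vector in R^4 changes only coordinates 0 and 2.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = givens_matrix(4, 2, 0, np.pi / 4) @ x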
Stable calculation.
When a Givens rotation matrix, "G"("i", "j", "θ"), multiplies another matrix, A, from the left, "G A", only rows i and j of A are affected. Thus we restrict attention to the following counterclockwise problem. Given a and b, find "c" = cos "θ" and "s" = sin "θ" such that
formula_2
where formula_3 is the length of the vector formula_4.
Explicit calculation of θ is rarely necessary or desirable. Instead we directly seek c and s. An obvious solution would be
formula_5
However, the computation for r may overflow or underflow. An alternative formulation avoiding this problem is implemented as the hypot function in many programming languages.
The following Fortran code is a minimalistic implementation of Givens rotation for real numbers. If the input values 'a' or 'b' are frequently zero, the code may be optimized to handle these cases as presented here.
subroutine givens_rotation(a, b, c, s, r)
    real a, b, c, s, r
    real h, d
    if (b.ne.0.0) then
        h = hypot(a, b)
        d = 1.0 / h
        c = abs(a) * d
        s = sign(d, a) * b
        r = sign(1.0, a) * h
    else
        c = 1.0
        s = 0.0
        r = a
    end if
    return
end
Furthermore, as Edward Anderson discovered in improving LAPACK, a previously overlooked numerical consideration is continuity. To achieve this, we require r to be positive. The following MATLAB/GNU Octave code illustrates the algorithm.
function [c, s, r] = givens_rotation(a, b)
    if b == 0;
        c = sign(a);
        if (c == 0);
            c = 1.0; % Unlike other languages, MatLab's sign function returns 0 on input 0.
        end;
        s = 0;
        r = abs(a);
    elseif a == 0;
        c = 0;
        s = -sign(b);
        r = abs(b);
    elseif abs(a) > abs(b);
        t = b / a;
        u = sign(a) * sqrt(1 + t * t);
        c = 1 / u;
        s = -c * t;
        r = a * u;
    else
        t = a / b;
        u = sign(b) * sqrt(1 + t * t);
        s = -1 / u;
        c = t / u;
        r = b * u;
    end
end
The IEEE 754 codice_0 function provides a safe and cheap way to copy the sign of codice_1 to codice_2. If that is not available, the product of the absolute value of one operand and the sign of the other, computed using the abs and sgn functions, is an alternative, as done above.
Triangularization.
Given the following Matrix:
formula_6
two iterations of the Givens rotation (note that the Givens rotation algorithm used here differs slightly from above) yield an upper triangular matrix in order to compute the QR decomposition.
In order to form the desired matrix, zeroing elements (2, 1) and (3, 2) is required; element (2, 1) is zeroed first, using a rotation matrix of:
formula_7
The following matrix multiplication results:
formula_8
where
formula_9
Using these values for c and s and performing the matrix multiplication above yields A2:
formula_10
Zeroing element (3, 2) finishes off the process. Using the same idea as before, the rotation matrix is:
formula_11
Afterwards, the following matrix multiplication is:
formula_12
where
formula_13
Using these values for c and s and performing the multiplications results in A3:
formula_14
This new matrix A3 is the upper triangular matrix needed to perform an iteration of the QR decomposition. Q is now formed using the transpose of the rotation matrices in the following manner:
formula_15
Performing this matrix multiplication yields:
formula_16
This completes two iterations of the Givens Rotation and calculating the QR decomposition can now be done.
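The procedure above can be expressed compactly in code. The following Python/NumPy sketch (the function name and 0-based indexing are choices made here for illustration) applies the same rotations as the worked example: on A1 it returns R equal to A3 and Q equal to the matrix above, up to rounding.
import numpy as np

def givens_qr(A):
    # QR decomposition by zeroing subdiagonal entries column by column,
    # using plane rotations on pairs of rows, as in the worked example above.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    Q, R = np.eye(n), A.copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            a, b = R[j, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.eye(n)
            G[j, j], G[j, i], G[i, j], G[i, i] = c, s, -s, c
            R = G @ R                     # zeroes R[i, j]
            Q = Q @ G.T                   # accumulates Q = G1^T G2^T ...
    return Q, R

A1 = np.array([[6.0, 5.0, 0.0], [5.0, 1.0, 4.0], [0.0, 4.0, 3.0]])
Q, R = givens_qr(A1)                      # R matches A3 and Q the matrix above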
Complex matrices.
Another method can extend Givens rotations to complex matrices. A diagonal matrix whose diagonal elements have unit magnitudes but arbitrary phases is unitary. Let A be a matrix for which it is desired to make the ji element be zero using the rows and columns i and j>i. Let D be a diagonal matrix whose diagonal elements are one except the ii and jj diagonal elements which also have unit magnitude but have phases which are to be determined. The phases of the ii and jj elements of D can be chosen so as to make the ii and ji elements of the product matrix D A be real. Then a Givens rotation G can be chosen using the i and j>i rows and columns so as to make the ji element of the product matrix G D A be zero. Since a product of unitary matrices is unitary, the product matrix G D is unitary and so is any product of such matrix pair products.
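A possible realization of this procedure is sketched below in Python/NumPy; the helper name, the 0-based indices, and the choice of making the two affected entries real and non-negative are assumptions made here for illustration.
import numpy as np

def zero_complex_entry(A, i, j):
    # Returns (G, D) such that G @ D is unitary and (G @ D @ A)[j, i] == 0
    # (up to rounding), for a complex square matrix A and indices j > i.
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    D = np.eye(n, dtype=complex)
    # Choose the phases of D so that the (i, i) and (j, i) entries of D @ A are real.
    if A[i, i] != 0:
        D[i, i] = np.conj(A[i, i]) / abs(A[i, i])
    if A[j, i] != 0:
        D[j, j] = np.conj(A[j, i]) / abs(A[j, i])
    a, b = (D @ A)[i, i].real, (D @ A)[j, i].real
    r = np.hypot(a, b)
    c, s = (1.0, 0.0) if r == 0 else (a / r, b / r)
    # A real Givens rotation acting on rows i and j then zeroes the (j, i) entry.
    G = np.eye(n, dtype=complex)
    G[i, i], G[i, j], G[j, i], G[j, j] = c, s, -s, c
    return G, D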
In Clifford algebra.
In Clifford algebra and its child structures such as geometric algebra, rotations are represented by bivectors. Givens rotations are represented by the exterior product of the basis vectors. Given any pair of basis vectors formula_17 Givens rotations bivectors are:
formula_18
Their action on any vector is written:
formula_19
where
formula_20
Dimension 3.
There are three Givens rotations in dimension 3:
formula_21
formula_22
formula_23
Given that they are endomorphisms they can be composed with each other as many times as desired, keeping in mind that "g" ∘ "f" ≠ "f" ∘ "g".
These three Givens rotations composed can generate any rotation matrix according to Davenport's chained rotation theorem. This means that they can transform the standard basis of the space to any other frame in the space.
When rotations are performed in the right order, the values of the rotation angles of the final frame will be equal to the Euler angles of the final frame in the corresponding convention. For example, an operator formula_24 transforms the basis of the space into a frame with angles roll, pitch and yaw formula_25 in the Tait–Bryan convention "z"-"x"-"y" (convention in which the line of nodes is perpendicular to "z" and "Y" axes, also named "Y"-"X′"-"Z″").
For the same reason, any rotation matrix in 3D can be decomposed in a product of three of these rotation operators.
The meaning of the composition of two Givens rotations "g" ∘ "f" is an operator that transforms vectors first by f and then by g, being f and g rotations about one axis of basis of the space. This is similar to the extrinsic rotation equivalence for Euler angles.
Table of composed rotations.
The following table shows the three Givens rotations equivalent to the different Euler angles conventions using extrinsic composition (composition of rotations about the basis axes) of active rotations and the right-handed rule for the positive sign of the angles.
The notation has been simplified in such a way that "c"1 means cos "θ"1 and "s"2 means sin "θ"2. The subindexes of the angles give the order in which they are applied using "extrinsic" composition (1 for intrinsic rotation, 2 for nutation, 3 for precession).
As rotations are applied just in the opposite order of the Euler angles table of rotations, this table is the same but swapping indexes 1 and 3 in the angles associated with the corresponding entry. An entry like "zxy" means to apply first the "y" rotation, then "x", and finally "z", in the basis axes.
All the compositions assume the right hand convention for the matrices that are multiplied, yielding the following results.
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G(i, j, \\theta) = \n \\begin{bmatrix} 1 & \\cdots & 0 & \\cdots & 0 & \\cdots & 0 \\\\\n \\vdots & \\ddots & \\vdots & & \\vdots & & \\vdots \\\\\n 0 & \\cdots & c & \\cdots & -s & \\cdots & 0 \\\\\n \\vdots & & \\vdots & \\ddots & \\vdots & & \\vdots \\\\\n 0 & \\cdots & s & \\cdots & c & \\cdots & 0 \\\\\n \\vdots & & \\vdots & & \\vdots & \\ddots & \\vdots \\\\\n 0 & \\cdots & 0 & \\cdots & 0 & \\cdots & 1\n \\end{bmatrix},"
},
{
"math_id": 1,
"text": "\\begin{align}\n g_{kk} &{}= 1 \\qquad \\text{for} \\ k \\ne i,\\,j\\\\\n g_{kk} &{}= c \\qquad \\text{for} \\ k = i,\\,j\\\\\n g_{ji} &{} = -g_{ij}= -s\\\\\n\\end{align}"
},
{
"math_id": 2,
"text": " \\begin{bmatrix} c & -s \\\\ s & c \\end{bmatrix} \\begin{bmatrix} a \\\\ b \\end{bmatrix} = \\begin{bmatrix} r \\\\ 0 \\end{bmatrix} , "
},
{
"math_id": 3,
"text": " r = \\sqrt{a^2 + b^2} "
},
{
"math_id": 4,
"text": "(a,b)"
},
{
"math_id": 5,
"text": "\\begin{align}\n c &{}\\larr a / r \\\\\n s &{}\\larr -b / r.\n\\end{align}"
},
{
"math_id": 6,
"text": " A_1 =\n \\begin{bmatrix} 6 & 5 & 0 \\\\\n 5 & 1 & 4 \\\\\n 0 & 4 & 3 \\\\\n \\end{bmatrix},"
},
{
"math_id": 7,
"text": "G_{1} =\n \\begin{bmatrix} c & -s & 0 \\\\\n s & c & 0 \\\\\n 0 & 0 & 1 \\\\\n \\end{bmatrix}."
},
{
"math_id": 8,
"text": "\n\\begin{align}\nG_1 A_1 &{}= A_2 \\\\\n&{} = \\begin{bmatrix} c & -s & 0 \\\\\n s & c & 0 \\\\\n 0 & 0 & 1 \\\\\n \\end{bmatrix}\n \\begin{bmatrix} 6 & 5 & 0 \\\\\n 5 & 1 & 4 \\\\\n 0 & 4 & 3 \\\\\n \\end{bmatrix},\n\\end{align}\n"
},
{
"math_id": 9,
"text": "\\begin{align}\n r &{}= \\sqrt{6^2 + 5^2} \\approx 7.8102 \\\\\n c &{}= 6 / r \\approx 0.7682 \\\\\n s &{}= -5 / r \\approx -0.6402.\n\\end{align}\n"
},
{
"math_id": 10,
"text": "A_2 \\approx \\begin{bmatrix} 7.8102 & 4.4813 & 2.5607 \\\\\n 0 & -2.4327 & 3.0729 \\\\\n 0 & 4 & 3 \\\\\n \\end{bmatrix}"
},
{
"math_id": 11,
"text": "G_{2} =\n \\begin{bmatrix} 1 & 0 & 0 \\\\\n 0 & c & -s \\\\\n 0 & s & c \\\\\n \\end{bmatrix}"
},
{
"math_id": 12,
"text": "\n\\begin{align}\nG_2 A_2 &{}= A_3 \\\\\n&{}\\approx \\begin{bmatrix} 1 & 0 & 0 \\\\\n 0 & c & -s \\\\\n 0 & s & c \\\\\n \\end{bmatrix}\n \\begin{bmatrix} 7.8102 & 4.4813 & 2.5607 \\\\\n 0 & -2.4327 & 3.0729 \\\\\n 0 & 4 & 3 \\\\\n \\end{bmatrix},\n\\end{align}\n"
},
{
"math_id": 13,
"text": "\\begin{align}\n r &{}\\approx \\sqrt{(-2.4327)^2 + 4^2} \\approx 4.6817 \\\\\n c &{}\\approx -2.4327 / r \\approx -0.5196 \\\\\n s &{}\\approx -4 / r \\approx -0.8544.\n\\end{align}\n"
},
{
"math_id": 14,
"text": "A_3 \\approx\n \\begin{bmatrix} 7.8102 & 4.4813 & 2.5607 \\\\\n 0 & 4.6817 & 0.9665 \\\\\n 0 & 0 & -4.1843 \\\\\n \\end{bmatrix}.\n"
},
{
"math_id": 15,
"text": "Q = G_{1}^T\\, G_{2}^T.\n"
},
{
"math_id": 16,
"text": "Q \\approx\n \\begin{bmatrix} 0.7682 & 0.3327 & 0.5470 \\\\\n 0.6402 & -0.3992 & -0.6564 \\\\\n 0 & 0.8544 & -0.5196 \\\\\n \\end{bmatrix}."
},
{
"math_id": 17,
"text": "\\mathbf e_i, \\mathbf e_j"
},
{
"math_id": 18,
"text": "B_{ij} = \\mathbf e_i \\wedge \\mathbf e_j."
},
{
"math_id": 19,
"text": "v=e^{-(\\theta/2)(\\mathbf e_i \\wedge \\mathbf e_j)}u e^{(\\theta/2)(\\mathbf e_i \\wedge \\mathbf e_j)},"
},
{
"math_id": 20,
"text": "e^{(\\theta/2)(\\mathbf e_i \\wedge \\mathbf e_j)}= \\cos(\\theta/2)+ \\sin(\\theta/2) \\mathbf e_i \\wedge \\mathbf e_j."
},
{
"math_id": 21,
"text": "\nR_X(\\theta) =\n\\begin{bmatrix}\n1 & 0 & 0 \\\\\n0 & \\cos \\theta & -\\sin \\theta \\\\\n0 & \\sin \\theta & \\cos \\theta\n\\end{bmatrix}. \n"
},
{
"math_id": 22,
"text": "\\begin{align} \\\\\nR_Y(\\theta) =\n\\begin{bmatrix}\n\\cos \\theta & 0 & -\\sin \\theta \\\\\n0 & 1 & 0 \\\\\n\\sin \\theta & 0 & \\cos \\theta\n\\end{bmatrix}\n\\end{align}\n"
},
{
"math_id": 23,
"text": "\\begin{align} \\\\\nR_Z(\\theta) =\n\\begin{bmatrix}\n\\cos \\theta & -\\sin \\theta & 0 \\\\\n\\sin \\theta & \\cos \\theta & 0 \\\\\n0 & 0 & 1 \n\\end{bmatrix}\n\\end{align}\n"
},
{
"math_id": 24,
"text": "R = R_Y(\\theta_3)\\cdot R_X(\\theta_2)\\cdot R_Z(\\theta_1)"
},
{
"math_id": 25,
"text": "YPR = (\\theta_3,\\theta_2,\\theta_1)"
}
]
| https://en.wikipedia.org/wiki?curid=680237 |
68032323 | Unrelated-machines scheduling | Optimization problem in computer science and operations research
Unrelated-machines scheduling is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. We need to schedule "n" jobs "J"1, "J"2, ..., "Jn" on "m" different machines, such that a certain objective function is optimized (usually, the makespan should be minimized). The time that machine "i" needs in order to process job j is denoted by "pi,j". The term "unrelated" emphasizes that there is no relation between values of "pi,j" for different "i" and "j". This is in contrast to two special cases of this problem: uniform-machines scheduling - in which "pi,j" = "pi" / "sj" (where "sj" is the speed of machine "j"), and identical-machines scheduling - in which "pi,j" = "pi" (the same run-time on all machines).
In the standard three-field notation for optimal job scheduling problems, the unrelated-machines variant is denoted by R in the first field. For example, the problem denoted by " R||formula_0" is an unrelated-machines scheduling problem with no constraints, where the goal is to minimize the maximum completion time.
In some variants of the problem, instead of minimizing the "maximum" completion time, it is desired to minimize the "average" completion time (averaged over all "n" jobs); it is denoted by R||formula_1. More generally, when some jobs are more important than others, it may be desired to minimize a "weighted average" of the completion time, where each job has a different weight. This is denoted by R||formula_2.
In a third variant, the goal is to "maximize" the "minimum" completion time, " R||formula_3" . This variant corresponds to the problem of Egalitarian item allocation.
Algorithms.
Minimizing the maximum completion time (makespan).
Minimizing the "maximum" completion time is NP-hard even for "identical" machines, by reduction from the partition problem.
Horowitz and Sahni presented:
Lenstra, Shmoys and Tardos presented a polytime 2-factor approximation algorithm, and proved that no polytime algorithm with approximation factor smaller than 3/2 is possible unless P=NP. Closing the gap between the 2 and the 3/2 is a long-standing open problem.
Verschae and Wiese presented a different 2-factor approximation algorithm.
Glass, Potts and Shade compare various local search techniques for minimizing the makespan on unrelated machines. Using computerized simulations, they find that tabu search and simulated annealing perform much better than genetic algorithms.
Minimizing the average completion time.
Bruno, Coffman and Sethi present an algorithm, running in time formula_10, for minimizing the average job completion time on "unrelated" machines, R||formula_11 (the average over all "jobs", of the time it takes to complete the jobs).
Minimizing the "weighted average" completion time, R||formula_12 (where "wj" is the weight of job "j"), is NP-hard even on "identical" machines, by reduction from the knapsack problem. It is NP-hard even if the number of machines is fixed and at least 2, by reduction from the partition problem.
Schulz and Skutella present a (3/2+ε)-approximation algorithm using randomized rounding. Their algorithm is a (2+ε)-approximation for the problem with job release times, R|formula_13|formula_12.
Maximizing the profit.
Bar-Noy, Bar-Yehuda, Freund, Naor and Schieber consider a setting in which, for each job and machine, there is a "profit" for running this job on that machine. They present a 1/2 approximation for discrete input and (1-"ε")/2 approximation for continuous input.
Maximizing the minimum completion time.
Suppose that, instead of "jobs" we have valuable items, and instead of "machines" we have people. Person "i" values item j at "pi,j". We would like to allocate the items to the people, such that the least-happy person is as happy as possible. This problem is equivalent to unrelated-machines scheduling in which the goal is to maximize the minimum completion time. It is better known by the name "egalitarian" or "max-min item allocation".
Linear programming formulation.
A natural way to formulate the problem as a linear program is called the "Lenstra–Shmoys–Tardos linear program (LST LP)". Fix a threshold "T" for the makespan. For each machine "i" and job "j," define a variable formula_14, which equals 1 if machine "i" processes job "j", and 0 otherwise. Then, the LP constraints are formula_15 for every job "j"; formula_16 for every machine "i"; and formula_17 for every machine "i" and job "j".
Relaxing the integer constraints gives a linear program with size polynomial in the input. The solution of the relaxed problem can then be rounded to obtain a 2-approximation to the problem.
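As an illustration, the feasibility version of this relaxation for a guessed makespan bound "T" can be written down directly with an off-the-shelf LP solver. The Python sketch below uses SciPy's linprog; the function name, the flattening of the variables, and the handling of pairs with "pi,j" > "T" by fixing their variables to zero are choices made here, and the rounding step of the algorithm is not shown.
import numpy as np
from scipy.optimize import linprog

def lst_lp_feasible(p, T):
    # p[i][j] is the processing time of job j on machine i; the variables z[i][j]
    # are flattened as index i*n + j. Returns True if the LP relaxation admits a
    # fractional assignment with load at most T on every machine.
    p = np.asarray(p, dtype=float)
    m, n = p.shape
    A_eq = np.zeros((n, m * n))           # each job fully assigned: sum_i z[i][j] = 1
    for j in range(n):
        A_eq[j, j::n] = 1.0
    b_eq = np.ones(n)
    A_ub = np.zeros((m, m * n))           # machine loads: sum_j p[i][j] * z[i][j] <= T
    for i in range(m):
        A_ub[i, i * n:(i + 1) * n] = p[i]
    b_ub = np.full(m, T)
    bounds = [(0.0, 0.0) if p[i, j] > T else (0.0, 1.0)
              for i in range(m) for j in range(n)]
    res = linprog(c=np.zeros(m * n), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.success

A binary search over "T", combined with such feasibility checks, locates the smallest bound for which the relaxation is feasible.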
Another LP formulation is the configuration linear program. For each machine "i", there are finitely many subsets of jobs that can be processed by machine "i" in time at most "T". Each such subset is called a "configuration" for machine "i". Denote by "Ci"("T") the set of all configurations for machine "i". For each machine "i" and configuration "c" in "Ci"("T"), define a variable formula_18 which equals 1 if the actual configuration used in machine "i" is "c", and 0 otherwise. Then, the LP constraints are formula_19 for every machine "i"; formula_20 for every job "j"; and formula_21 for every machine "i" and configuration "c".
Note that the number of configurations is usually exponential in the size of the problem, so the size of the configuration LP is exponential. However, in some cases it is possible to bound the number of possible configurations, and therefore find an approximate solution in polynomial time.
Special cases.
There is a special case in which "pi,j" is either 1 or infinity. In other words, each job can be processed on a subset of "allowed machines", and its run-time in each of these machines is 1. This variant is sometimes denoted by " P|pj=1,Mj|formula_0". It can be solved in polynomial time.
Extensions.
Kim, Kim, Jang and Chen extend the problem by allowing each job to have a setup time, which depends on the job but not on the machine. They present a solution using simulated annealing. Vallada and Ruiz present a solution using a genetic algorithm.
Nisan and Ronen in their 1999 paper on algorithmic mechanism design. extend the problem in a different way, by assuming that the jobs are owned by selfish agents (see Truthful job scheduling). | [
{
"math_id": 0,
"text": "C_\\max"
},
{
"math_id": 1,
"text": "\\sum C_i"
},
{
"math_id": 2,
"text": "\\sum w_i C_i"
},
{
"math_id": 3,
"text": "C_\\min"
},
{
"math_id": 4,
"text": "O(10^{2l} n)"
},
{
"math_id": 5,
"text": "l"
},
{
"math_id": 6,
"text": "\\epsilon \\geq 2\\cdot 10^{-l}"
},
{
"math_id": 7,
"text": "O( n / \\epsilon^2)"
},
{
"math_id": 8,
"text": "O(10^{l} n^2)"
},
{
"math_id": 9,
"text": "O( n^2 / \\epsilon)"
},
{
"math_id": 10,
"text": "O(\\max(m n^2,n^3))"
},
{
"math_id": 11,
"text": "\\sum C_j"
},
{
"math_id": 12,
"text": "\\sum w_j C_j"
},
{
"math_id": 13,
"text": "r_j"
},
{
"math_id": 14,
"text": "z_{i,j}"
},
{
"math_id": 15,
"text": "\\sum_{i=1}^m z_{i,j} = 1"
},
{
"math_id": 16,
"text": "\\sum_{j=1}^n z_{i,j}\\cdot p_{i,j} \\leq T"
},
{
"math_id": 17,
"text": "z_{i,j} \\in \\{0,1\\}"
},
{
"math_id": 18,
"text": "x_{i,c}"
},
{
"math_id": 19,
"text": "\\sum_{c\\in C_i(T)}x_{i,c} = 1"
},
{
"math_id": 20,
"text": "\\sum_{i=1}^m \\sum_{c\\ni j, c\\in C_i(T)}x_{i,c} = 1"
},
{
"math_id": 21,
"text": "x_{i,j} \\in \\{0,1\\}"
}
]
| https://en.wikipedia.org/wiki?curid=68032323 |
6804 | Charge-coupled device | Device for the movement of electrical charge
A charge-coupled device (CCD) is an integrated circuit containing an array of linked, or coupled, capacitors. Under the control of an external circuit, each capacitor can transfer its electric charge to a neighboring capacitor. CCD sensors are a major technology used in digital imaging.
Overview.
In a CCD image sensor, pixels are represented by p-doped metal–oxide–semiconductor (MOS) capacitors. These MOS capacitors, the basic building blocks of a CCD, are biased above the threshold for inversion when image acquisition begins, allowing the conversion of incoming photons into electron charges at the semiconductor-oxide interface; the CCD is then used to read out these charges.
Although CCDs are not the only technology to allow for light detection, CCD image sensors are widely used in professional, medical, and scientific applications where high-quality image data are required.
In applications with less exacting quality demands, such as consumer and professional digital cameras, active pixel sensors, also known as CMOS sensors (complementary MOS sensors), are generally used.
However, the large quality advantage CCDs enjoyed early on has narrowed over time and since the late 2010s CMOS sensors are the dominant technology, having largely if not completely replaced CCD image sensors.
History.
The basis for the CCD is the metal–oxide–semiconductor (MOS) structure, with MOS capacitors being the basic building blocks of a CCD, and a depleted MOS structure used as the photodetector in early CCD devices.
In the late 1960s, Willard Boyle and George E. Smith at Bell Labs were researching MOS technology while working on semiconductor bubble memory. They realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. This led to the invention of the charge-coupled device by Boyle and Smith in 1969. They conceived of the design of what they termed, in their notebook, "Charge 'Bubble' Devices".
The initial paper describing the concept in April 1970 listed possible uses as memory, a delay line, and an imaging device. The device could also be used as a shift register. The essence of the design was the ability to transfer charge along the surface of a semiconductor from one storage capacitor to the next. The concept was similar in principle to the bucket-brigade device (BBD), which was developed at Philips Research Labs during the late 1960s.
The first experimental device demonstrating the principle was a row of closely spaced metal squares on an oxidized silicon surface electrically accessed by wire bonds. It was demonstrated by Gil Amelio, Michael Francis Tompsett and George Smith in April 1970. This was the first experimental application of the CCD in image sensor technology, and used a depleted MOS structure as the photodetector. The first patent (U.S. patent 4,085,456) on the application of CCDs to imaging was assigned to Tompsett, who filed the application in 1971.
The first working CCD made with integrated circuit technology was a simple 8-bit shift register, reported by Tompsett, Amelio and Smith in August 1970. This device had input and output circuits and was used to demonstrate its use as a shift register and as a crude eight pixel linear imaging device. Development of the device progressed at a rapid rate. By 1971, Bell researchers led by Michael Tompsett were able to capture images with simple linear devices.
Several companies, including Fairchild Semiconductor, RCA and Texas Instruments, picked up on the invention and began development programs. Fairchild's effort, led by ex-Bell researcher Gil Amelio, was the first with commercial devices, and by 1974 had a linear 500-element device and a 2D 100 × 100 pixel device. Peter Dillon, a scientist at Kodak Research Labs, invented the first color CCD image sensor by overlaying a color filter array on this Fairchild 100 x 100 pixel Interline CCD starting in 1974. Steven Sasson, an electrical engineer working for the Kodak Apparatus Division, invented a digital still camera using this same Fairchild 100 × 100 CCD in 1975.
The interline transfer (ILT) CCD device was proposed by L. Walsh and R. Dyck at Fairchild in 1973 to reduce smear and eliminate a mechanical shutter. To further reduce smear from bright light sources, the frame-interline-transfer (FIT) CCD architecture was developed by K. Horii, T. Kuroda and T. Kunii at Matsushita (now Panasonic) in 1981.
The first KH-11 KENNEN reconnaissance satellite equipped with charge-coupled device array (800 × 800 pixels) technology for imaging was launched in December 1976. Under the leadership of Kazuo Iwama, Sony started a large development effort on CCDs involving a significant investment. Eventually, Sony managed to mass-produce CCDs for their camcorders. Before this happened, Iwama died in August 1982. Subsequently a CCD chip was placed on his tombstone to acknowledge his contribution. The first mass-produced consumer CCD video camera, the CCD-G5, was released by Sony in 1983, based on a prototype developed by Yoshiaki Hagiwara in 1981.
Early CCD sensors suffered from shutter lag. This was largely resolved with the invention of the pinned photodiode (PPD). It was invented by Nobukazu Teranishi, Hiromitsu Shiraki and Yasuo Ishihara at NEC in 1980. They recognized that lag can be eliminated if the signal carriers could be transferred from the photodiode to the CCD. This led to their invention of the pinned photodiode, a photodetector structure with low lag, low noise, high quantum efficiency and low dark current. It was first publicly reported by Teranishi and Ishihara with A. Kohono, E. Oda and K. Arai in 1982, with the addition of an anti-blooming structure. The new photodetector structure invented at NEC was given the name "pinned photodiode" (PPD) by B.C. Burkey at Kodak in 1984. In 1987, the PPD began to be incorporated into most CCD devices, becoming a fixture in consumer electronic video cameras and then digital still cameras. Since then, the PPD has been used in nearly all CCD sensors and then CMOS sensors.
In January 2006, Boyle and Smith were awarded the National Academy of Engineering Charles Stark Draper Prize, and in 2009 they were awarded the Nobel Prize for Physics for their invention of the CCD concept. Michael Tompsett was awarded the 2010 National Medal of Technology and Innovation, for pioneering work and electronic technologies including the design and development of the first CCD imagers. He was also awarded the 2012 IEEE Edison Medal for "pioneering contributions to imaging devices including CCD Imagers, cameras and thermal imagers".
Basics of operation.
In a CCD for capturing images, there is a photoactive region (an epitaxial layer of silicon), and a transmission region made out of a shift register (the CCD, properly speaking).
An image is projected through a lens onto the capacitor array (the photoactive region), causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. A one-dimensional array, used in line-scan cameras, captures a single slice of the image, whereas a two-dimensional array, used in video and still cameras, captures a two-dimensional picture corresponding to the scene projected onto the focal plane of the sensor. Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbor (operating as a shift register). The last capacitor in the array dumps its charge into a charge amplifier, which converts the charge into a voltage. By repeating this process, the controlling circuit converts the entire contents of the array in the semiconductor to a sequence of voltages. In a digital device, these voltages are then sampled, digitized, and usually stored in memory; in an analog device (such as an analog video camera), they are processed into a continuous analog signal (e.g. by feeding the output of the charge amplifier into a low-pass filter), which is then processed and fed out to other circuits for transmission, recording, or other processing.
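A highly simplified model of this readout sequence is sketched below in Python; it assumes an idealized full-frame sensor with lossless charge transfer, ignores any light collected during readout, and the function name is chosen here for illustration only.
import numpy as np

def read_out(pixel_charges):
    # pixel_charges: 2-D array of accumulated charge, one value per pixel.
    # Returns the charge packets in the order they reach the output amplifier.
    frame = np.array(pixel_charges, dtype=float)
    sequence = []
    while frame.size:
        serial_register = frame[-1]         # the bottom row is shifted into the serial register
        frame = frame[:-1]                  # every remaining row moves down one position
        for packet in serial_register:      # packets are clocked out one at a time
            sequence.append(float(packet))  # each packet would be converted to a voltage here
    return sequence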
Detailed physics of operation.
Charge generation.
Before the MOS capacitors are exposed to light, they are biased into the depletion region; in n-channel CCDs, the silicon under the bias gate is slightly "p"-doped or intrinsic. The gate is then biased at a positive potential, above the threshold for strong inversion, which will eventually result in the creation of an "n" channel below the gate as in a MOSFET. However, it takes time to reach this thermal equilibrium: up to hours in high-end scientific cameras cooled at low temperature. Initially after biasing, the holes are pushed far into the substrate, and no mobile electrons are at or near the surface; the CCD thus operates in a non-equilibrium state called deep depletion.
Then, when electron–hole pairs are generated in the depletion region, they are separated by the electric field, the electrons move toward the surface, and the holes move toward the substrate. Four pair-generation processes can be identified: photo-generation (the desired signal), generation in the depletion region, generation at the surface, and generation in the neutral bulk.
The last three processes are known as dark-current generation, and add noise to the image; they can limit the total usable integration time. The accumulation of electrons at or near the surface can proceed either until image integration is over and charge begins to be transferred, or thermal equilibrium is reached. In this case, the well is said to be full. The maximum capacity of each well is known as the well depth, typically about 105 electrons per pixel. CCDs are normally susceptible to ionizing radiation and energetic particles which causes noise in the output of the CCD, and this must be taken into consideration in satellites using CCDs.
Design and manufacturing.
The photoactive region of a CCD is, generally, an epitaxial layer of silicon. It is lightly "p" doped (usually with boron) and is grown upon a substrate material, often p++. In buried-channel devices, the type of design utilized in most modern CCDs, certain areas of the surface of the silicon are ion implanted with phosphorus, giving them an n-doped designation. This region defines the channel in which the photogenerated charge packets will travel. Simon Sze details the advantages of a buried-channel device:
This thin layer (= 0.2–0.3 micron) is fully depleted and the accumulated photogenerated charge is kept away from the surface. This structure has the advantages of higher transfer efficiency and lower dark current, from reduced surface recombination. The penalty is smaller charge capacity, by a factor of 2–3 compared to the surface-channel CCD. The gate oxide, i.e. the capacitor dielectric, is grown on top of the epitaxial layer and substrate.
Later in the process, polysilicon gates are deposited by chemical vapor deposition, patterned with photolithography, and etched in such a way that the separately phased gates lie perpendicular to the channels. The channels are further defined by utilization of the LOCOS process to produce the channel stop region.
Channel stops are thermally grown oxides that serve to isolate the charge packets in one column from those in another. These channel stops are produced before the polysilicon gates are, as the LOCOS process utilizes a high-temperature step that would destroy the gate material. The channel stops are parallel to, and exclusive of, the channel, or "charge carrying", regions.
Channel stops often have a p+ doped region underlying them, providing a further barrier to the electrons in the charge packets (this discussion of the physics of CCD devices assumes an electron transfer device, though hole transfer is possible).
The clocking of the gates, alternately high and low, will forward and reverse bias the diode that is provided by the buried channel (n-doped) and the epitaxial layer (p-doped). This will cause the CCD to deplete, near the p–n junction and will collect and move the charge packets beneath the gates—and within the channels—of the device.
CCD manufacturing and operation can be optimized for different uses. The above process describes a frame transfer CCD. While CCDs may be manufactured on a heavily doped p++ wafer it is also possible to manufacture a device inside p-wells that have been placed on an n-wafer. This second method, reportedly, reduces smear, dark current, and infrared and red response. This method of manufacture is used in the construction of interline-transfer devices.
Another version of CCD is called a peristaltic CCD. In a peristaltic charge-coupled device, the charge-packet transfer operation is analogous to the peristaltic contraction and dilation of the digestive system. The peristaltic CCD has an additional implant that keeps the charge away from the silicon/silicon dioxide interface and generates a large lateral electric field from one gate to the next. This provides an additional driving force to aid in transfer of the charge packets.
Architecture.
The CCD image sensors can be implemented in several different architectures. The most common are full-frame, frame-transfer, and interline. The distinguishing characteristic of each of these architectures is their approach to the problem of shuttering.
In a full-frame device, all of the image area is active, and there is no electronic shutter. A mechanical shutter must be added to this type of sensor or the image smears as the device is clocked or read out.
With a frame-transfer CCD, half of the silicon area is covered by an opaque mask (typically aluminum). The image can be quickly transferred from the image area to the opaque area or storage region with acceptable smear of a few percent. That image can then be read out slowly from the storage region while a new image is integrating or exposing in the active area. Frame-transfer devices typically do not require a mechanical shutter and were a common architecture for early solid-state broadcast cameras. The downside to the frame-transfer architecture is that it requires twice the silicon real estate of an equivalent full-frame device; hence, it costs roughly twice as much.
The interline architecture extends this concept one step further and masks every other column of the image sensor for storage. In this device, only one pixel shift has to occur to transfer from image area to storage area; thus, shutter times can be less than a microsecond and smear is essentially eliminated. The advantage is not free, however, as the imaging area is now covered by opaque strips dropping the fill factor to approximately 50 percent and the effective quantum efficiency by an equivalent amount. Modern designs have addressed this deleterious characteristic by adding microlenses on the surface of the device to direct light away from the opaque regions and on the active area. Microlenses can bring the fill factor back up to 90 percent or more depending on pixel size and the overall system's optical design.
The choice of architecture comes down to one of utility. If the application cannot tolerate an expensive, failure-prone, power-intensive mechanical shutter, an interline device is the right choice. Consumer snap-shot cameras have used interline devices. On the other hand, for those applications that require the best possible light collection and issues of money, power and time are less important, the full-frame device is the right choice. Astronomers tend to prefer full-frame devices. The frame-transfer falls in between and was a common choice before the fill-factor issue of interline devices was addressed. Today, frame-transfer is usually chosen when an interline architecture is not available, such as in a back-illuminated device.
CCDs containing grids of pixels are used in digital cameras, optical scanners, and video cameras as light-sensing devices. They commonly respond to 70 percent of the incident light (meaning a quantum efficiency of about 70 percent) making them far more efficient than photographic film, which captures only about 2 percent of the incident light.
Most common types of CCDs are sensitive to near-infrared light, which allows infrared photography, night-vision devices, and zero lux (or near zero lux) video-recording/photography. For normal silicon-based detectors, the sensitivity is limited to 1.1 μm. One other consequence of their sensitivity to infrared is that infrared from remote controls often appears on CCD-based digital cameras or camcorders if they do not have infrared blockers.
Cooling reduces the array's dark current, improving the sensitivity of the CCD to low light intensities, even for ultraviolet and visible wavelengths. Professional observatories often cool their detectors with liquid nitrogen to reduce the dark current, and therefore the thermal noise, to negligible levels.
Frame transfer CCD.
The frame transfer CCD imager was the first imaging structure proposed for CCD Imaging by Michael Tompsett at Bell Laboratories. A frame transfer CCD is a specialized CCD, often used in astronomy and some professional video cameras, designed for high exposure efficiency and correctness.
The normal functioning of a CCD, astronomical or otherwise, can be divided into two phases: exposure and readout. During the first phase, the CCD passively collects incoming photons, storing electrons in its cells. After the exposure time is passed, the cells are read out one line at a time. During the readout phase, cells are shifted down the entire area of the CCD. While they are shifted, they continue to collect light. Thus, if the shifting is not fast enough, errors can result from light that falls on a cell holding charge during the transfer. These errors are referred to as "vertical smear" and cause a strong light source to create a vertical line above and below its exact location. In addition, the CCD cannot be used to collect light while it is being read out. A faster shifting requires a faster readout, and a faster readout can introduce errors in the cell charge measurement, leading to a higher noise level.
A frame transfer CCD solves both problems: it has a shielded, not light sensitive, area containing as many cells as the area exposed to light. Typically, this area is covered by a reflective material such as aluminium. When the exposure time is up, the cells are transferred very rapidly to the hidden area. Here, safe from any incoming light, cells can be read out at any speed one deems necessary to correctly measure the cells' charge. At the same time, the exposed part of the CCD is collecting light again, so no delay occurs between successive exposures.
The disadvantage of such a CCD is the higher cost: the cell area is basically doubled, and more complex control electronics are needed.
Intensified charge-coupled device.
An intensified charge-coupled device (ICCD) is a CCD that is optically connected to an image intensifier that is mounted in front of the CCD.
An image intensifier includes three functional elements: a photocathode, a micro-channel plate (MCP) and a phosphor screen. These three elements are mounted one close behind the other in the mentioned sequence. The photons which are coming from the light source fall onto the photocathode, thereby generating photoelectrons. The photoelectrons are accelerated towards the MCP by an electrical control voltage, applied between photocathode and MCP. The electrons are multiplied inside of the MCP and thereafter accelerated towards the phosphor screen. The phosphor screen finally converts the multiplied electrons back to photons which are guided to the CCD by a fiber optic or a lens.
An image intensifier inherently includes a shutter functionality: If the control voltage between the photocathode and the MCP is reversed, the emitted photoelectrons are not accelerated towards the MCP but return to the photocathode. Thus, no electrons are multiplied and emitted by the MCP, no electrons are going to the phosphor screen and no light is emitted from the image intensifier. In this case no light falls onto the CCD, which means that the shutter is closed. The process of reversing the control voltage at the photocathode is called "gating" and therefore ICCDs are also called gateable CCD cameras.
Besides the extremely high sensitivity of ICCD cameras, which enable single photon detection, the gateability is one of the major advantages of the ICCD over the EMCCD cameras. The highest performing ICCD cameras enable shutter times as short as 200 picoseconds.
ICCD cameras are in general somewhat higher in price than EMCCD cameras because they need the expensive image intensifier. On the other hand, EMCCD cameras need a cooling system to cool the EMCCD chip down to temperatures around . This cooling system adds additional costs to the EMCCD camera and often yields heavy condensation problems in the application.
ICCDs are used in night vision devices and in various scientific applications.
Electron-multiplying CCD.
An electron-multiplying CCD (EMCCD, also known as an L3Vision CCD, a product commercialized by e2v Ltd., GB, L3CCD or Impactron CCD, a now-discontinued product offered in the past by Texas Instruments) is a charge-coupled device in which a gain register is placed between the shift register and the output amplifier. The gain register is split up into a large number of stages. In each stage, the electrons are multiplied by impact ionization in a similar way to an avalanche diode. The gain probability at every stage of the register is small ("P" < 2%), but as the number of elements is large (N > 500), the overall gain can be very high (formula_0), with single input electrons giving many thousands of output electrons. Reading a signal from a CCD gives a noise background, typically a few electrons. In an EMCCD, this noise is superimposed on many thousands of electrons rather than a single electron; the devices' primary advantage is thus their negligible readout noise. The use of avalanche breakdown for amplification of photo charges had already been described in the U.S. patent 3761744 in 1973 by George E. Smith/Bell Telephone Laboratories.
EMCCDs show a similar sensitivity to intensified CCDs (ICCDs). However, as with ICCDs, the gain that is applied in the gain register is stochastic and the "exact" gain that has been applied to a pixel's charge is impossible to know. At high gains (> 30), this uncertainty has the same effect on the signal-to-noise ratio (SNR) as halving the quantum efficiency (QE) with respect to operation with a gain of unity. This effect is referred to as the Excess Noise Factor (ENF). However, at very low light levels (where the quantum efficiency is most important), it can be assumed that a pixel either contains an electron—or not. This removes the noise associated with the stochastic multiplication at the risk of counting multiple electrons in the same pixel as a single electron. To avoid multiple counts in one pixel due to coincident photons in this mode of operation, high frame rates are essential. The dispersion in the gain is shown in the graph on the right. For multiplication registers with many elements and large gains it is well modelled by the equation:
formula_1
where "P" is the probability of getting "n" output electrons given "m" input electrons and a total mean multiplication register gain of "g". For very large numbers of input electrons, this complex distribution function converges towards a Gaussian.
Because of the lower costs and better resolution, EMCCDs are capable of replacing ICCDs in many applications. ICCDs still have the advantage that they can be gated very fast and thus are useful in applications like range-gated imaging. EMCCD cameras indispensably need a cooling system—using either thermoelectric cooling or liquid nitrogen—to cool the chip down to temperatures in the range of . This cooling system adds additional costs to the EMCCD imaging system and may yield condensation problems in the application. However, high-end EMCCD cameras are equipped with a permanent hermetic vacuum system confining the chip to avoid condensation issues.
The low-light capabilities of EMCCDs find use in astronomy and biomedical research, among other fields. In particular, their low noise at high readout speeds makes them very useful for a variety of astronomical applications involving low light sources and transient events such as lucky imaging of faint stars, high speed photon counting photometry, Fabry-Pérot spectroscopy and high-resolution spectroscopy. More recently, these types of CCDs have broken into the field of biomedical research in low-light applications including small animal imaging, single-molecule imaging, Raman spectroscopy, super resolution microscopy as well as a wide variety of modern fluorescence microscopy techniques thanks to greater SNR in low-light conditions in comparison with traditional CCDs and ICCDs.
In terms of noise, commercial EMCCD cameras typically have clock-induced charge (CIC) and dark current (dependent on the extent of cooling) that together lead to an effective readout noise ranging from 0.01 to 1 electrons per pixel read. However, recent improvements in EMCCD technology have led to a new generation of cameras capable of producing significantly less CIC, higher charge transfer efficiency and an EM gain 5 times higher than what was previously available. These advances in low-light detection lead to an effective total background noise of 0.001 electrons per pixel read, a noise floor unmatched by any other low-light imaging device.
Use in astronomy.
Due to the high quantum efficiencies of charge-coupled device (CCD) (the ideal quantum efficiency is 100%, one generated electron per incident photon), linearity of their outputs, ease of use compared to photographic plates, and a variety of other reasons, CCDs were very rapidly adopted by astronomers for nearly all UV-to-infrared applications.
Thermal noise and cosmic rays may alter the pixels in the CCD array. To counter such effects, astronomers take several exposures with the CCD shutter both closed and open. The average of images taken with the shutter closed is necessary to lower the random noise. Once developed, the dark frame average image is then subtracted from the open-shutter image to remove the dark current and other systematic defects (dead pixels, hot pixels, etc.) in the CCD. Newer Skipper CCDs counter noise by reading out the same collected charge multiple times, and have applications in precision searches for light dark matter and in neutrino measurements.
The Hubble Space Telescope, in particular, has a highly developed series of steps ("data reduction pipeline") to convert the raw CCD data to useful images.
CCD cameras used in astrophotography often require sturdy mounts to cope with vibrations from wind and other sources, along with the tremendous weight of most imaging platforms. To take long exposures of galaxies and nebulae, many astronomers use a technique known as auto-guiding. Most autoguiders use a second CCD chip to monitor deviations during imaging. This chip can rapidly detect errors in tracking and command the mount motors to correct for them.
An unusual astronomical application of CCDs, called drift-scanning, uses a CCD to make a fixed telescope behave like a tracking telescope and follow the motion of the sky. The charges in the CCD are transferred and read in a direction parallel to the motion of the sky, and at the same speed. In this way, the telescope can image a larger region of the sky than its normal field of view. The Sloan Digital Sky Survey is the most famous example of this, using the technique to produce a survey of over a quarter of the sky. The Gaia space telescope is another instrument operating in this mode, rotating about its axis at a constant rate of 1 revolution in 6 hours and scanning a 360° by 0.5° strip on the sky during this time; a star traverses the entire focal plane in about 40 seconds (effective exposure time).
In addition to imagers, CCDs are also used in an array of analytical instrumentation including spectrometers and interferometers.
Color cameras.
Digital color cameras, including the digital color cameras in smartphones, generally use an integral color image sensor, which has a color filter array (CFA) fabricated on top of the monochrome pixels of the CCD. The most popular CFA pattern is known as the Bayer filter, which is named for its inventor, Kodak scientist Bryce Bayer. In the Bayer pattern, each square of four pixels has one filtered red, one blue, and two green pixels (the human eye has greater acuity for luminance, which is more heavily weighted in green than in either red or blue). As a result, the luminance information is collected in each row and column using a checkerboard pattern, and the color resolution is lower than the luminance resolution.
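A minimal sketch of such a mosaic in Python/NumPy is given below; the RGGB arrangement and the helper name are assumptions made here for illustration (real sensors use one of several equivalent arrangements of the same pattern).
import numpy as np

def bayer_mask(height, width):
    # Labels each pixel of an RGGB Bayer mosaic: two greens, one red and one blue
    # in every 2 x 2 block, so green is sampled twice as densely as red or blue.
    mask = np.empty((height, width), dtype='<U1')
    mask[0::2, 0::2] = 'R'
    mask[0::2, 1::2] = 'G'
    mask[1::2, 0::2] = 'G'
    mask[1::2, 1::2] = 'B'
    return mask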
Better color separation can be reached by three-CCD devices (3CCD) and a dichroic beam splitter prism, that splits the image into red, green and blue components. Each of the three CCDs is arranged to respond to a particular color. Many professional video camcorders, and some semi-professional camcorders, use this technique, although developments in competing CMOS technology have made CMOS sensors, both with beam-splitters and Bayer filters, increasingly popular in high-end video and digital cinema cameras. Another advantage of 3CCD over a Bayer mask device is higher quantum efficiency (higher light sensitivity), because most of the light from the lens enters one of the silicon sensors, while a Bayer mask absorbs a high proportion (more than 2/3) of the light falling on each pixel location.
For still scenes, for instance in microscopy, the resolution of a Bayer mask device can be enhanced by microscanning technology. During the process of color co-site sampling, several frames of the scene are produced. Between acquisitions, the sensor is moved in pixel dimensions, so that each point in the visual field is acquired consecutively by elements of the mask that are sensitive to the red, green, and blue components of its color. Eventually every pixel in the image has been scanned at least once in each color, and the resolution of the three channels becomes equivalent (the resolutions of the red and blue channels are quadrupled while the green channel is doubled).
Sensor sizes.
Sensors (CCD / CMOS) come in various sizes, or image sensor formats. These sizes are often referred to with an inch fraction designation such as 1/1.8″ or 2/3″, called the optical format. This designation dates back to the 1950s and the time of Vidicon tubes.
Blooming.
When a CCD exposure is long enough, eventually the electrons that collect in the "bins" in the brightest part of the image will overflow the bin, resulting in blooming. The structure of the CCD allows the electrons to flow more easily in one direction than another, resulting in vertical streaking.
Some anti-blooming features that can be built into a CCD reduce its sensitivity to light by using some of the pixel area for a drain structure.
James M. Early developed a vertical anti-blooming drain that would not detract from the light collection area, and so did not reduce light sensitivity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "g = (1 + P)^N"
},
{
"math_id": 1,
"text": "P\\left (n \\right ) = \\frac{\\left\n (n-m+1\\right )^{m-1}}{\\left (m-1 \\right )!\\left\n (g-1+\\frac{1}{m}\\right )^{m}}\\exp \\left ( -\n \\frac{n-m+1}{g-1+\\frac{1}{m}}\\right ) \\quad \\text{ if } n \\ge m "
}
]
| https://en.wikipedia.org/wiki?curid=6804 |
68042075 | Yttrium oxalate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Yttrium oxalate is an inorganic compound, a salt of yttrium and oxalic acid with the chemical formula Y2(C2O4)3. The compound does not dissolve in water and forms crystalline hydrates—colorless crystals.
Synthesis.
Precipitation of soluble yttrium salts with oxalic acid:
formula_0
Properties.
Yttrium oxalate is highly insoluble in water and converts to the oxide when heated. Yttrium oxalate forms crystalline hydrates (colorless crystals) with the formula Y2(C2O4)3•"n" H2O, where n = 4, 9, and 10.
Decomposes when heated:
formula_1
The solubility product of yttrium oxalate at 25 °C is 5.1 × 10−30.
The trihydrate Y2(C2O4)3•3H2O is formed by heating more hydrated varieties at 110 °C.
Y2(C2O4)3•2H2O (formed by heating the decahydrate at 210 °C) forms monoclinic crystals with unit cell dimensions a = 9.3811 Å, b = 11.638 Å, c = 5.9726 Å, β = 96.079°.
Related.
Several yttrium oxalate double salts containing additional cations are known, as is a mixed-anion compound with carbonate.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ 2YCl_3 + 3H_2C_2O_4 \\ \\xrightarrow{}\\ Y_2(C_2O_4)_3\\downarrow + 6HCl }"
},
{
"math_id": 1,
"text": "\\mathsf{ Y_2(C_2O_4)_3 \\ \\xrightarrow{700^oC}\\ Y_2O_3 + 3CO_2 + 3CO }"
}
]
| https://en.wikipedia.org/wiki?curid=68042075 |
68044793 | Cobham's theorem | Cobham's theorem is a theorem in combinatorics on words that has important connections with number theory, notably transcendental numbers, and automata theory. Informally, the theorem gives the condition for a set "S" of natural numbers, written in base "b1" and in base "b2", to be recognised by finite automata. Specifically, consider bases "b1" and "b2" such that they are not powers of the same integer. Cobham's theorem states that "S" written in bases "b1" and "b2" is recognised by finite automata if and only if "S" differs by a finite set from a finite union of arithmetic progressions. The theorem was proved by Alan Cobham in 1969 and has since given rise to many extensions and generalisations.
Definitions.
Let formula_0 be an integer. The representation of a natural number formula_1 in base formula_2 is the sequence of digits formula_3 such that
formula_4
where formula_5 and formula_6. The word formula_3 is often denoted formula_7, or more simply, formula_8.
A set of natural numbers "S" is "recognisable in base" formula_2 or more simply "formula_2-recognisable" or "formula_2-automatic" if the set formula_9 of the representations of its elements in base formula_10 is a language recognisable by a finite automaton on the alphabet formula_11.
Two positive integers formula_12 and formula_13 are multiplicatively independent if there are no non-negative integers formula_14 and formula_15 such that formula_16. For example, 2 and 3 are multiplicatively independent, but 8 and 16 are not since formula_17. Two integers are multiplicatively dependent if and only if they are powers of a same third integer.
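Multiplicative dependence is straightforward to test: two integers greater than 1 are multiplicatively dependent exactly when their prime factorizations involve the same primes with proportional exponents. A minimal sketch (plain trial-division factorization, adequate only for small inputs):
def factor_exponents(n):
    """Prime factorization of n > 1 as a dict {prime: exponent}, by trial division."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors
def multiplicatively_dependent(k, l):
    """True if k and l (both > 1) are powers of a common integer."""
    fk, fl = factor_exponents(k), factor_exponents(l)
    if set(fk) != set(fl):
        return False
    p0 = min(fk)
    a0, b0 = fk[p0], fl[p0]
    # the two exponent vectors must be proportional
    return all(fk[p] * b0 == fl[p] * a0 for p in fk)
print(multiplicatively_dependent(8, 16))   # True: 8^4 = 16^3
print(multiplicatively_dependent(2, 3))    # False: multiplicatively independent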
Problem statements.
Original problem statement.
More equivalent statements of the theorem have been given. The original version by Cobham is the following: <templatestyles src="Math_theorem/styles.css" />
Theorem (Cobham 1969) — Let formula_18 be a set of non-negative integers and let formula_19 and formula_20 be multiplicatively independent positive integers. Then formula_18 is recognizable by finite automata in both formula_19-ary and formula_20-ary notation if and only if it is ultimately periodic.
Another way to state the theorem is by using automatic sequences. Cobham himself calls them "uniform tag sequences". The following form is found in Allouche and Shallit's book:
Theorem — Let formula_12 and formula_13 be two multiplicatively independent integers. A sequence is both formula_12-automatic and formula_13-automatic only if it is formula_21-automatic
We can show that the characteristic sequence of a set of natural numbers "S" recognisable by finite automata in base "k" is a "k"-automatic sequence and that conversely, for all "k"-automatic sequences formula_22 and all integers formula_23, the set formula_24 of natural numbers formula_25 such that formula_26 is recognisable in base formula_12.
Formulation in logic.
Cobham's theorem can be formulated in first-order logic using a theorem proven by Büchi in 1960. This formulation in logic allows for extensions and generalisations. The logical expression uses the theory
formula_27
of natural integers equipped with addition and the function formula_28 defined by formula_29 and for any positive integer formula_1, formula_30 if formula_31 is the largest power of formula_32 that divides formula_1. For example, formula_33, and formula_34.
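A direct computation of formula_28 for small arguments can be sketched as follows (the convention formula_29 is handled explicitly):
def V(r, n):
    """Largest power of r dividing n, with V(r, 0) = 1 by convention."""
    if n == 0:
        return 1
    power = 1
    while n % r == 0:
        power *= r
        n //= r
    return power
print(V(2, 20))   # 4
print(V(3, 20))   # 1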
A set of integers formula_18 is "definable in first-order logic in" formula_27 if it can be described by a first-order formula with equality, addition, and formula_28.
Examples: the set of odd numbers is definable (without using formula_28) by the formula formula_35; the set formula_36 of powers of 2 is definable by the very simple formula formula_37.
<templatestyles src="Math_theorem/styles.css" />
Cobham's theorem reformulated — Let "S" be a set of natural numbers, and let formula_12 and formula_13 be two multiplicatively independent positive integers. Then "S" is first-order definable in
formula_38 and in formula_39 if and only if "S" is ultimately periodic.
We can push the analogy with logic further by noting that "S" is first-order definable in Presburger arithmetic if and only if it is ultimately periodic. So, a set "S" is definable in the logics formula_38 and formula_39 if and only if it is definable in Presburger arithmetic.
Generalisations.
Approach by morphisms.
An automatic sequence is a particular morphic word, whose morphism is uniform, meaning that the length of the images generated by the morphism for each letter of its input alphabet is the same. A set of integers is hence "k"-recognisable if and only if its characteristic sequence is generated by a uniform morphism followed by a coding, where a coding is a morphism that maps each letter of the input alphabet to a letter of the output alphabet. For example, the characteristic sequence of the powers of 2 is produced by the 2-uniform morphism (meaning each letter is mapped to a word of length 2) over the alphabet formula_40 defined by
formula_41
which generates the infinite word
formula_42,
followed by the coding (that is, letter to letter) that maps formula_43 to formula_44 and leaves formula_44 and formula_21 unchanged, giving
formula_45.
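The sketch below iterates this 2-uniform morphism a few times and applies the coding, reproducing a prefix of the characteristic sequence of the powers of 2:
morphism = {"a": "a1", "1": "10", "0": "00"}
coding = {"a": "0", "1": "1", "0": "0"}
word = "a"
for _ in range(5):                          # five iterations of the morphism
    word = "".join(morphism[c] for c in word)
coded = "".join(coding[c] for c in word)
print(coded)                                # 1s appear exactly at positions 1, 2, 4, 8, 16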
The notion has been extended as follows: a morphic word formula_25 is formula_46-"substitutive" for a certain number formula_46 if when written in the form
formula_47
where the morphism formula_48, prolongable in formula_2, has the following properties: all letters of formula_49 occur in formula_50, and the number formula_51 is the dominant eigenvalue of the matrix of the morphism formula_52, namely the matrix formula_53, where formula_54 is the number of occurrences of the letter formula_55 in the word formula_56.
A set "S" of natural numbers is formula_46-"recognisable" if its characteristic sequence formula_25 is formula_46-substitutive.
A last definition: a "Perron number" is an algebraic number formula_57 such that all its conjugates belong to the disc formula_58. These are exactly the dominant eigenvalues of the primitive matrices of positive integers.
We then have the following statement:<templatestyles src="Math_theorem/styles.css" />
Cobham's theorem for substitutions — Let "α" et "β" be two multiplicatively independent Perron numbers. Then a sequence "x" with elements belonging to a finite set is both "α"-substitutive and "β"-substitutive if and only if "x" is ultimately periodic.
Logic approach.
The logic equivalent permits to consider more general situations: the automatic sequences over the natural numbers formula_59 or recognisable sets have been extended to the integers formula_60, to the Cartesian products formula_61, to the real numbers formula_62 and to the Cartesian products formula_63.
We code the base formula_12 integers by prepending to the representation of a positive integer the digit formula_44, and by representing negative integers by formula_64 followed by the number's formula_12-complement. For example, in base 2, the integer formula_65 is represented as formula_66. The powers of 2 are written as formula_67, and their negatives formula_68 (since formula_69 is the representation of formula_70).
A subset formula_71 of formula_72 is recognisable in base formula_12 if the elements of formula_71, written as vectors with formula_19 components, are recognisable over the resulting alphabet.
For example, in base 2, we have formula_73 and formula_74; the vector formula_75 is written as formula_76.<templatestyles src="Math_theorem/styles.css" />
Semenov's theorem (1977) — Let formula_32 and formula_25 be two multiplicatively independent positive integers. A subset formula_18 of formula_72 is formula_32-recognisable and formula_25-recognisable if and only if formula_18 is describable in Presburger arithmetic.
An elegant proof of this theorem is given by Muchnik in 1991 by induction on formula_19.
Other extensions have been given to the real numbers and vectors of real numbers.
Proofs.
Samuel Eilenberg announced the theorem without proof in his book; he says "The proof is correct, long, and hard. It is a challenge to find a more reasonable proof of this fine theorem." Georges Hansel proposed a simpler proof, published in the not easily accessible proceedings of a conference. The proof of Dominique Perrin and that of Allouche and Shallit's book contain the same error in one of the lemmas, mentioned in the list of errata of the book. This error was uncovered in a note by Tomi Kärki and corrected by Michel Rigo and Laurent Waxweiler. This part of the proof has recently been rewritten.
In January 2018, Thijmen J. P. Krebs announced, on arXiv, a simplified proof of the original theorem, based on Dirichlet's approximation criterion instead of Kronecker's; the article appeared in 2021. The employed method has been refined and used by Mol, Rampersad, Shallit and Stipulanti.
Notes and references.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n>0"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "b"
},
{
"math_id": 3,
"text": "n_0n_1\\cdots n_h"
},
{
"math_id": 4,
"text": "n=n_0+n_1b+\\cdots+n_hb^h"
},
{
"math_id": 5,
"text": "0\\le n_0,n_1,\\ldots,n_h < b"
},
{
"math_id": 6,
"text": "n_h>0"
},
{
"math_id": 7,
"text": "\\langle n\\rangle_b"
},
{
"math_id": 8,
"text": "n_b"
},
{
"math_id": 9,
"text": "\\{n_b\\mid n\\in S\\}"
},
{
"math_id": 10,
"text": "b"
},
{
"math_id": 11,
"text": "\\{0,1,\\ldots,b-1\\}"
},
{
"math_id": 12,
"text": "k"
},
{
"math_id": 13,
"text": "\\ell"
},
{
"math_id": 14,
"text": "p"
},
{
"math_id": 15,
"text": "q"
},
{
"math_id": 16,
"text": "k^p=\\ell^q"
},
{
"math_id": 17,
"text": "8^4=16^3"
},
{
"math_id": 18,
"text": "S"
},
{
"math_id": 19,
"text": "m"
},
{
"math_id": 20,
"text": "n"
},
{
"math_id": 21,
"text": "1"
},
{
"math_id": 22,
"text": "u"
},
{
"math_id": 23,
"text": "0\\le i <k"
},
{
"math_id": 24,
"text": "S_i"
},
{
"math_id": 25,
"text": "s"
},
{
"math_id": 26,
"text": "u_s=i"
},
{
"math_id": 27,
"text": "\\langle N, +, V_r\\rangle"
},
{
"math_id": 28,
"text": "V_r"
},
{
"math_id": 29,
"text": "V_r(0)=1"
},
{
"math_id": 30,
"text": "V_r(n)=r^m"
},
{
"math_id": 31,
"text": "r^m"
},
{
"math_id": 32,
"text": "r"
},
{
"math_id": 33,
"text": "V_2(20)=4"
},
{
"math_id": 34,
"text": "V_3(20)=1"
},
{
"math_id": 35,
"text": "(\\exists y)(x=y+y+1)"
},
{
"math_id": 36,
"text": "\\{2^n\\mid n\\ge0\\}"
},
{
"math_id": 37,
"text": "V_2(x)=x"
},
{
"math_id": 38,
"text": "\\langle N, +, V_k\\rangle"
},
{
"math_id": 39,
"text": "\\langle N, +, V_\\ell\\rangle"
},
{
"math_id": 40,
"text": "B=\\{a,0,1\\}"
},
{
"math_id": 41,
"text": "a \\mapsto a1\\ ,\\quad 1\\mapsto 10\\ ,\\quad 0\\mapsto 00"
},
{
"math_id": 42,
"text": "a11010001\\cdots"
},
{
"math_id": 43,
"text": "a"
},
{
"math_id": 44,
"text": "0"
},
{
"math_id": 45,
"text": "011010001\\cdots"
},
{
"math_id": 46,
"text": "\\alpha"
},
{
"math_id": 47,
"text": "s=\\pi(f^\\omega(b))"
},
{
"math_id": 48,
"text": "f:B^*\\to B^*"
},
{
"math_id": 49,
"text": "B"
},
{
"math_id": 50,
"text": "f^\\omega(b)"
},
{
"math_id": 51,
"text": "\\alpha>1"
},
{
"math_id": 52,
"text": "f"
},
{
"math_id": 53,
"text": "M(f)=(m_{x,y})_{x\\in B,y\\in A}"
},
{
"math_id": 54,
"text": "m_{x,y}"
},
{
"math_id": 55,
"text": "x"
},
{
"math_id": 56,
"text": "f(y)"
},
{
"math_id": 57,
"text": "z > 1"
},
{
"math_id": 58,
"text": "\\{z' \\in \\Complex,|z'|< z\\}"
},
{
"math_id": 59,
"text": "\\N"
},
{
"math_id": 60,
"text": "\\Z"
},
{
"math_id": 61,
"text": "\\N^m"
},
{
"math_id": 62,
"text": "\\R"
},
{
"math_id": 63,
"text": "\\R^m"
},
{
"math_id": 64,
"text": "k-1"
},
{
"math_id": 65,
"text": "-6=-8+2"
},
{
"math_id": 66,
"text": "1010"
},
{
"math_id": 67,
"text": "010^*"
},
{
"math_id": 68,
"text": "110^*"
},
{
"math_id": 69,
"text": "11000"
},
{
"math_id": 70,
"text": "-16+8=-8"
},
{
"math_id": 71,
"text": "X"
},
{
"math_id": 72,
"text": "N^m"
},
{
"math_id": 73,
"text": "3=11_2"
},
{
"math_id": 74,
"text": "9=1001_2"
},
{
"math_id": 75,
"text": "\\begin{pmatrix}3\\\\9\\end{pmatrix}"
},
{
"math_id": 76,
"text": "\\begin{pmatrix}0011\\\\1001\\end{pmatrix}=\\begin{pmatrix}0\\\\1\\end{pmatrix}\\begin{pmatrix}0\\\\0\\end{pmatrix}\\begin{pmatrix}1\\\\0\\end{pmatrix}\\begin{pmatrix}1\\\\1\\end{pmatrix}"
}
]
| https://en.wikipedia.org/wiki?curid=68044793 |
68045267 | Option on realized volatility | In finance, an option on realized volatility (or volatility option) is a subclass of derivative securities whose payoff depends on the annualized realized volatility of a specified underlying asset, which could be a stock index, a bond, a foreign exchange rate, etc. Another widely traded volatility derivative is the volatility swap, which is, in other words, a forward contract on future realized volatility.
The long position of the volatility option, like that of a vanilla option, has the right but not the obligation to exchange the annualized realized volatility with the short position at some agreed price (the volatility strike) at some predetermined point in the future (the expiry date). The payoff is commonly settled in cash, scaled by some notional amount. What distinguishes this financial contract from ordinary options is that the underlying risk measure is independent of the direction of the asset's returns and depends purely on the price volatility. As a result, traders can use it as a tool to speculate on price volatility movements or to hedge their portfolio positions without taking a directional risk by holding the underlying asset.
Definitions.
Realized volatility.
In practice, the annualized realized volatility under discrete sampling is interpreted as the square root of the annualized realized variance. Namely, if there are formula_0 sampling points of the underlying prices, say formula_1 observed at times formula_2 where formula_3 for all formula_4, then the annualized realized variance is given by
formula_5
where formula_6 is an annualization factor, normally chosen as formula_7 if daily prices are observed, formula_8 for weekly sampling, or formula_9 for monthly sampling, and formula_10 is the time horizon of the contract, which equals formula_11
With this setting, formula_12 is the annualized realized volatility.
In addition, as the number of observations formula_13 increases to infinity, the discretely defined realized volatility converges in probability to the square root of the underlying asset's quadratic variation, i.e.
formula_14
which eventually defines the continuous sampling version of the realized volatility. One might find, to some extent, it is more convenient to use this notation to price volatility derivatives. However, the solution is only the approximation form of the discrete one since the contract is normally quoted in discrete sampling.
Volatility option payoffs.
If we set formula_15 to be the volatility strike and formula_16 the notional amount of the contract (in currency per unit of annualized volatility),
then payoffs at expiry for the call and put options written on formula_17 (or just volatility call and put) are
formula_18
and
formula_19
respectively, where formula_20 if the realized volatility is discretely sampled and formula_21 if it is continuously sampled. To obtain their present values, it suffices to compute only one of them, since the other follows immediately from put-call parity.
Pricing and valuation.
Under the no-arbitrage argument, suppose that the underlying asset price formula_22 is modelled under a risk-neutral probability measure formula_23 and solves the following time-varying Black-Scholes equation:
formula_24
where formula_25 is the risk-free interest rate, formula_26 is the (time-dependent) price volatility, and formula_27 is a standard Brownian motion defined on the filtered probability space formula_28, in which formula_29 is the natural filtration of formula_30.
Then the fair price of variance call at time formula_31 denoted by formula_32 can be obtained by
formula_33
where formula_34 represents the conditional expectation of a random variable formula_35 with respect to formula_36 under the risk-neutral probability formula_23. The solution for formula_37 can be derived analytically if the probability density function of formula_17 is known, or otherwise by approximation approaches such as Monte Carlo methods.
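As an illustration, the conditional expectation above can be approximated by Monte Carlo simulation. The sketch below assumes constant interest rate and volatility for simplicity and prices a volatility call with discrete daily sampling; all parameter values are hypothetical.
import numpy as np
def volatility_call_mc(r, sigma, T, n_obs, K_vol, L, n_paths=100_000, seed=0):
    """Monte Carlo price of a call on discretely sampled realized volatility,
    assuming constant-parameter Black-Scholes dynamics for the underlying."""
    rng = np.random.default_rng(seed)
    dt = T / n_obs
    A = n_obs / T                                  # annualization factor
    # simulate the log-returns of the underlying over each sampling interval
    z = rng.standard_normal((n_paths, n_obs))
    log_returns = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    realized_vol = np.sqrt(A * np.mean(log_returns**2, axis=1))
    payoff = L * np.maximum(realized_vol - K_vol, 0.0)
    return np.exp(-r * T) * payoff.mean()
print(volatility_call_mc(r=0.02, sigma=0.20, T=1.0, n_obs=252, K_vol=0.20, L=10_000))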
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n+1"
},
{
"math_id": 1,
"text": "S_{t_0},S_{t_2},\\dots,S_{t_{n}}"
},
{
"math_id": 2,
"text": "t_i "
},
{
"math_id": 3,
"text": "0\\leq t_{i-1}<t_{i}\\leq T"
},
{
"math_id": 4,
"text": "i=1,2,\\ldots,n"
},
{
"math_id": 5,
"text": "RV_d:=\\frac{A}{n} \\sum_{i=1}^n \\ln^2\\Big(\\frac{S_{t_i}}{S_{t_{i-1}}}\\Big)"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "A=252"
},
{
"math_id": 8,
"text": "A=52"
},
{
"math_id": 9,
"text": "A=12"
},
{
"math_id": 10,
"text": "T"
},
{
"math_id": 11,
"text": "n/{A}."
},
{
"math_id": 12,
"text": "\\sqrt{RV_d}"
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": "\\lim_{n\\to\\infty}\\sqrt{RV_d}=\\sqrt{\\frac{1}{T} \\int_0^T \\sigma(s) \\, ds}=:\\sqrt{RV_c}"
},
{
"math_id": 15,
"text": "K^C_{\\text{vol}}"
},
{
"math_id": 16,
"text": "L"
},
{
"math_id": 17,
"text": "\\sqrt{RV_{(\\cdot)}}"
},
{
"math_id": 18,
"text": " \\left( \\sqrt{RV_{(\\cdot)}}-K^C_{\\text{vol}} \\right)^+\\times L"
},
{
"math_id": 19,
"text": " \\left(K^C_{\\text{vol}}-\\sqrt{RV_{(\\cdot)}}\\right)^+\\times L"
},
{
"math_id": 20,
"text": "\\sqrt{RV_{(\\cdot)}}=\\sqrt{RV_d}"
},
{
"math_id": 21,
"text": "\\sqrt{RV_{(\\cdot)}}=\\sqrt{RV_c}"
},
{
"math_id": 22,
"text": "S=(S_t)_{0\\leq t \\leq T}"
},
{
"math_id": 23,
"text": "\\mathbb{Q}"
},
{
"math_id": 24,
"text": "\\frac{dS_t}{S_t}=r(t) \\, dt+\\sigma(t) \\, dW_t, \\;\\; S_0>0"
},
{
"math_id": 25,
"text": "r(t)\\in\\mathbb{R}"
},
{
"math_id": 26,
"text": "\\sigma(t)>0"
},
{
"math_id": 27,
"text": "W=(W_t)_{0\\leq t \\leq T}"
},
{
"math_id": 28,
"text": "(\\Omega,\\mathcal{F},\\mathbb{F},\\mathbb{Q})"
},
{
"math_id": 29,
"text": "\\mathbb{F}=(\\mathcal{F}_t)_{0\\leq t \\leq T}"
},
{
"math_id": 30,
"text": "W"
},
{
"math_id": 31,
"text": "t_0"
},
{
"math_id": 32,
"text": "C_{t_0}^\\text{vol}"
},
{
"math_id": 33,
"text": "C_{t_0}^\\operatorname{vol}:=e^{-\\int^T_{t_0} r(s) \\, ds}\\operatorname{E}^{\\mathbb{Q}} \\left[ \\left(\\sqrt{RV_{(\\cdot)}}-K^C_{\\operatorname{vol}}\\right)^+\\mid\\mathcal{F}_{t_0}\\right],"
},
{
"math_id": 34,
"text": "\\operatorname{E}^{\\mathbb{Q}}[X\\mid\\mathcal{F}_{t_0}]"
},
{
"math_id": 35,
"text": "X"
},
{
"math_id": 36,
"text": "\\mathcal{F}_{t_0}"
},
{
"math_id": 37,
"text": "C_{t_0}^\\operatorname{vol}"
}
]
| https://en.wikipedia.org/wiki?curid=68045267 |
6804782 | Preferential concentration | Preferential concentration is the tendency of dense particles in a turbulent fluid to cluster in regions of high strain (low vorticity) due to their inertia. The extent by which particles cluster is determined by the Stokes number, defined as formula_0, where formula_1 and formula_2 are the timescales for the particle and fluid respectively; note that formula_3 and formula_4 are the mass densities of the fluid and the particle, respectively, formula_5 is the kinematic viscosity of the fluid, and formula_6 is the kinetic energy dissipation rate of the turbulence. Maximum preferential concentration occurs at formula_7. Particles with formula_8 follow fluid streamlines and particles with formula_9 do not respond significantly to the fluid within the times the fluid motions are coherent.
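A direct evaluation of the Stokes number from the definition above can be sketched as follows; the parameter values are hypothetical (roughly a small water droplet in turbulent air).
def stokes_number(rho_p, rho_f, d, nu, epsilon):
    """Stokes number: rho_p * d^2 * epsilon^(1/2) / (18 * rho_f * nu^(3/2))."""
    return rho_p * d**2 * epsilon**0.5 / (18.0 * rho_f * nu**1.5)
# hypothetical values: a 20-micron water droplet in moderately turbulent air
print(stokes_number(rho_p=1000.0,   # particle density, kg/m^3
                    rho_f=1.2,      # fluid density, kg/m^3
                    d=20e-6,        # particle diameter, m
                    nu=1.5e-5,      # kinematic viscosity, m^2/s
                    epsilon=1.0))   # dissipation rate, m^2/s^3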
Systems that can be strongly influenced by the dynamics of preferential concentration are aerosol production of fine powders, spray, emulsifier, and crystallization reactors, pneumatic devices, cloud droplet formation, aerosol transport in the upper atmosphere, and even planet formation from protoplanetary nebula. | [
{
"math_id": 0,
"text": "Stk \\equiv \\frac{ \\tau_p}{ \\tau_f} = \\frac{\\rho_p d^2 \\epsilon^{1/2}} { 18 \\rho_f \\nu^{3/2}}"
},
{
"math_id": 1,
"text": "\\tau_p"
},
{
"math_id": 2,
"text": "\\tau_f"
},
{
"math_id": 3,
"text": "\\rho_p"
},
{
"math_id": 4,
"text": "\\rho_f"
},
{
"math_id": 5,
"text": "\\nu"
},
{
"math_id": 6,
"text": "\\epsilon"
},
{
"math_id": 7,
"text": " Stk \\sim 1 "
},
{
"math_id": 8,
"text": "Stk \\ll 1"
},
{
"math_id": 9,
"text": "Stk \\gg 1"
}
]
| https://en.wikipedia.org/wiki?curid=6804782 |
68049188 | 1 Kings 16 | 1 Kings, chapter 16
1 Kings 16 is the sixteenth chapter of the Books of Kings in the Hebrew Bible or the First Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. 1 Kings 12:1-16:14 documents the consolidation of the kingdoms of northern Israel and Judah. This chapter focusses on the reigns of Baasha, Elah, Zimri, Omri and Ahab in the northern kingdom during the reign of Asa in the southern kingdom.
Text.
This chapter was originally written in the Hebrew language and since the 16th century is divided into 34 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). A long addition is found in the Septuagint of Codex Vaticanus following 1 Kings 16:28 (numbered as verses 28a–28h).
End of reign of Baasha, the king of Israel (16:1–7).
Baasha was 'walking in the way of Jeroboam' and left the bull cult of Bethel (and Dan) intact, although he had eliminated the Jeroboam dynasty, so a prophet, Jehu ben Hanani, confronted him with a warning and a scolding (verses 2–4) very similar to that of Ahijah of Shiloh (1 Kings 14:7–11), resulting in parallel fates for Baasha's and Jeroboam's dynasties.
Elah, the king of Israel (16:8–14).
As with Jeroboam, the end of the dynasty came not during the reign of its founder but during that of his son, very soon after his accession. Baasha's dynasty was eliminated in the second year of Elah, the son of Baasha, lasting no longer than Nadab, the son of Jeroboam. The assassin was Zimri, a high-ranking officer, "commander of half the chariot troop" (a military form used in Israel since the time of Solomon, cf. 1 Kings 5:6; 10:26; another officer of a chariot troop, Jehu, later also led a coup, as recorded in 2 Kings 9). Zimri's butchery included not only Baasha's family but also family friends (verse 11).
"In the twenty and sixth year of Asa king of Judah began Elah the son of Baasha to reign over Israel in Tirzah, two years."
"Now the rest of the acts of Elah, and all that he did, are they not written in the book of the chronicles of the kings of Israel?"
Zimri, the king of Israel (16:15–20).
Zimri was 'the most spectacularly unsuccessful king of all' the rulers of Israel and Judah, as his suicide ended his seven-day reign. While still at war with the Philistines, the Israelite army resented the coup in its capital, and as a chariot officer, Zimri likely 'represented the urban, Canaanite elements of the state too strongly for the army to tolerate', because the army was dominated by more Israelite, tribal forces.
"In the twenty-seventh year of Asa king of Judah, Zimri reigned seven days in Tirzah. Now the troops were encamped against Gibbethon, which belonged to the Philistines."
Omri, the king of Israel (16:21–28).
The displeased army did not recognize Zimri as king but instead spontaneously hailed the army chief Omri as their leader, who immediately marched on and quickly seized the royal residence in Tirzah. Zimri set the citadel alight himself and died in the fire. Omri did not automatically become the sole ruler of Israel, because a certain Tibni was chosen as king by half of the people until his death four years later (cf. the dates in 1 Kings 16:15 and 16:23). Omri's name was not Israelite, but might be of Arabian origin; perhaps he worked his way up to army general and then head of state because of his 'unusually charismatic personality'. He founded a dynasty in northern Israel of great significance to the political development of the country, which possibly became the only true state in the region at that time. Archaeological studies have discovered a great amount of building from the period of this dynasty (the ninth century BCE) across the entire land: city walls and fortifications, administration centres, etc., whereas non-biblical sources from Assyria, Aram, and Moab indicate 'reluctant respect' for the power and influence of Israel at the time of Omri's dynasty (Assyrian records refer to Israel as "the land of the house of Omri"). By establishing a new capital city belonging to the crown, as David had done before him (cf. 2 Samuel 5), Omri's kingdom achieved stability. Samaria (later Sebaste) was geopolitically and strategically well situated and could be built without taking larger, existing structures into account. It was equipped with a generous acropolis (about 180 x 90 meters in Omri's time, about 200 x 100 meters in Ahab's time) and became an opulent city in all respects, serving as the royal residence of the Israelite kings until the destruction of the state. However, the kingdom moved further away from Yahweh, so the prophets were increasingly brought to the foreground, especially Elijah and Elisha, who, while always loyal to Yahweh, became 'necessary counterparts' to and sometimes advisors of the Israelite kings, setting the standards of what is important and right in Israel.
"In the thirty and first year of Asa king of Judah began Omri to reign over Israel, twelve years: six years reigned he in Tirzah."
"And he bought the hill Samaria of Shemer for two talents of silver, and built on the hill, and called the name of the city which he built, after the name of Shemer, owner of the hill, Samaria."
Ahab, the king of Israel (16:29–34).
Ahab was considered 'evil in the sight of the Lord more than all who were before him', especially as he married the Phoenician princess Jezebel, built a temple for Baal (the classic Canaanite fertility god, responsible for nature's rebirth) in Samaria, and erected a cult symbol for the goddess Asherah (the mother goddess of the Canaanite pantheon who stands at El's, Baal's, or even Yahweh's side, presumably symbolized by some wooden object such as a stylized tree). These could be signs of Phoenician influence (cf. Jezebel's father's name, Ethbaal), although Ahab's action 'must have been driven by the need to appease the religious influence of Israel's urban Canaanite population', because Bethel and Dan were mainly Israelite Yahweh-worshipping sites (cf. 1 Kings 12:25–30). Archaeological studies have discovered a 9th-century establishment at Jericho. Two sons of Hiel, who was responsible for the construction of Jericho, died during its building (they were not ritually killed), and this event was interpreted as an example of God's unambiguous word in the form of Joshua's (prophetic) curse upon Jericho.
"In the thirty-eighth year of Asa king of Judah, Ahab the son of Omri began to reign over Israel, and Ahab the son of Omri reigned over Israel in Samaria twenty-two years."
"In his days Hiel of Bethel built Jericho. He laid its foundation at the cost of Abiram his firstborn, and set up its gates at the cost of his youngest son Segub, according to the word of the Lord, which he spoke by Joshua the son of Nun."
Verse 34.
It is written in the Book of Joshua that Joshua pronounces a curse on anyone who dares to rebuild Jericho, which is grimly fulfilled in this verse, so the curse is viewed as a prophecy spoken by Yahweh through Joshua.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=68049188 |
6805386 | Fáry's theorem | Planar graphs have straight drawings
In the mathematical field of graph theory, Fáry's theorem states that any simple, planar graph can be drawn without crossings so that its edges are straight line segments. That is, the ability to draw graph edges as curves instead of as straight line segments does not allow a larger class of graphs to be drawn. The theorem is named after István Fáry, although it was proved independently by Klaus Wagner (1936), Fáry (1948), and Sherman K. Stein (1951).
Proof.
One way of proving Fáry's theorem is to use mathematical induction. Let G be a simple plane graph with n vertices; we may add edges if necessary so that G is a maximally plane graph. If "n" < 3, the result is trivial. If "n" ≥ 3, then all faces of G must be triangles, as we could add an edge into any face with more sides while preserving planarity, contradicting the assumption of maximal planarity. Choose some three vertices "a", "b", "c" forming a triangular face of G. We prove by induction on n that there exists a straight-line combinatorially isomorphic re-embedding of G in which triangle "abc" is the outer face of the embedding. ("Combinatorially isomorphic" means that the vertices, edges, and faces in the new drawing can be made to correspond to those in the old drawing, such that all incidences between edges, vertices, and faces—not just between vertices and edges—are preserved.) As a base case, the result is trivial when "n" = 3 and a, b and c are the only vertices in G. Thus, we may assume that "n" ≥ 4.
By Euler's formula for planar graphs, G has 3"n" − 6 edges; equivalently, if one defines the "deficiency" of a vertex v in G to be 6 − deg("v"), the sum of the deficiencies is 12. Since G has at least four vertices and all faces of G are triangles, it follows that every vertex in G has degree at least three. Therefore each vertex in G has deficiency at most three, so there are at least four vertices with positive deficiency. In particular we can choose a vertex v with at most five neighbors that is different from a, b and c. Let "G"' be formed by removing v from G and retriangulating the face f formed by removing v. By induction, G' has a combinatorially isomorphic straight line re-embedding in which abc is the outer face. Because the re-embedding of G' was combinatorially isomorphic to G', removing from it the edges which were added to create G' leaves the face f, which is now a polygon P with at most five sides. To complete the drawing to a straight-line combinatorially isomorphic re-embedding of G, v should be placed in the polygon and joined by straight lines to the vertices of the polygon. By the art gallery theorem, there exists a point interior to P at which v can be placed so that the edges from v to the vertices of P do not cross any other edges, completing the proof.
The induction step of this proof is illustrated at right.
Related results.
De Fraysseix, Pach and Pollack showed how to find in linear time a straight-line drawing in a grid with dimensions linear in the size of the graph, giving a universal point set with quadratic size. A similar method has been followed by Schnyder to prove enhanced bounds and a characterization of planarity based on the incidence partial order. His work stressed the existence of a particular partition of the edges of a maximal planar graph into three trees known as a Schnyder wood.
Tutte's spring theorem states that every 3-connected planar graph can be drawn on a plane without crossings so that its edges are straight line segments and an outside face is a convex polygon (Tutte 1963). It is so called because such an embedding can be found as the equilibrium position for a system of springs representing the edges of the graph.
Steinitz's theorem states that every 3-connected planar graph can be represented as the edges of a convex polyhedron in three-dimensional space. A straight-line embedding of formula_0 of the type described by Tutte's theorem, may be formed by projecting such a polyhedral representation onto the plane.
The Circle packing theorem states that every planar graph may be represented as the intersection graph of a collection of non-crossing circles in the plane. Placing each vertex of the graph at the center of the corresponding circle leads to a straight line representation.
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
Does every planar graph have a straight line representation in which all edge lengths are integers?
Heiko Harborth raised the question of whether every planar graph has a straight line representation in which all edge lengths are integers. The truth of Harborth's conjecture remains unknown. Integer-distance straight line embeddings are known to exist for cubic graphs.
raised the question of whether every graph with a linkless embedding in three-dimensional Euclidean space has a linkless embedding in which all edges are represented by straight line segments, analogously to Fáry's theorem for two-dimensional embeddings.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G,"
}
]
| https://en.wikipedia.org/wiki?curid=6805386 |
68062718 | Acyl cyanide | Chemical group (–C(O)C≡N)
In organic chemistry, an acyl cyanide is a functional group with the formula and structure R−C(O)−C≡N. It consists of an acyl group (R−C(O)−) attached to cyanide (−C≡N). Examples include acetyl cyanide, formyl cyanide, and oxalyl dicyanide. Acyl cyanides are reagents in organic synthesis.
Synthesis.
Classically acyl cyanides are produced by the salt metathesis reaction of acyl chlorides with sodium cyanide:
formula_0
Alternatively, they can be produced by dehydration of acyl aldoximes:
formula_1
Acetyl cyanide is also prepared by hydrocyanation of ketene:
formula_2
Reactions.
They are mild acylating agents. With aqueous base, acyl cyanides break down to cyanide and the carboxylate:
formula_3
With azides, acyl cyanides undergo the click reaction to give acyl tetrazoles. | [
{
"math_id": 0,
"text": "{\\color{red}\\ce{R-C(O)}}\\ce{Cl} + \\ce{Na}{\\color{red}\\ce{CN}} \\longrightarrow {\\color{red}\\ce{R-C(O)CN}} + \\ce{NaCl}"
},
{
"math_id": 1,
"text": "{\\color{red}\\ce{R-C(O)C}}\\ce{H=}{\\color{red}\\ce{N}}\\ce{OH} \\longrightarrow {\\color{red}\\ce{R-C(O)CN}} + \\ce{H2O}"
},
{
"math_id": 2,
"text": "\\ce{CH2=}{\\color{red}\\ce{C=O}} + \\ce{H}{\\color{red}\\ce{CN}} \\longrightarrow \\ce{H3C -}{\\color{red}\\ce{C(O)CN}}"
},
{
"math_id": 3,
"text": "{\\color{red}\\ce{R-C(O)CN}} + \\ce{2 NaOH} \\longrightarrow {\\color{red}\\ce{R-CO}}\\ce{_2Na} + \\ce{Na}{\\color{red}\\ce{CN}} + \\ce{H2O}"
}
]
| https://en.wikipedia.org/wiki?curid=68062718 |
6806500 | Chitrabhanu (mathematician) | 16th century Indian mathematician
Chitrabhanu (IAST: Citrabhānu; fl. 16th century) was a mathematician of the Kerala school and a student of Nilakantha Somayaji. He was a Nambudiri brahmin from the town of Covvaram near present day Trissur. He is noted for a concise astronomical manual dated to 1530, an algebraic treatise, and a commentary on a poetic text. Nilakantha and he were both teachers of Shankara Variyar.
Contributions.
He gave integer solutions to 21 types of systems of two simultaneous Diophantine equations in two unknowns. These types are all the possible pairs of equations of the following seven forms:
formula_0
For each case, Chitrabhanu gave an explanation and justification of his rule as well as an example. Some of his explanations are algebraic, while others are geometric.
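For small parameter values, integer solutions of such a pair can be found by brute-force search. The sketch below treats one of the 21 types, the pair x + y = a together with x³ − y³ = g, with hypothetical values of a and g.
def solve_sum_and_cube_difference(a, g):
    """Positive integer solutions (x, y) of x + y = a and x^3 - y^3 = g."""
    solutions = []
    for x in range(1, a):
        y = a - x
        if x**3 - y**3 == g:
            solutions.append((x, y))
    return solutions
print(solve_sum_and_cube_difference(5, 63))   # [(4, 1)], since 4 + 1 = 5 and 64 - 1 = 63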
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ x + y = a, x - y = b, xy = c, x^2 + y^2 = d, x^2 - y^2 = e, x^3 + y^3 = f, x^3 - y^3 = g"
}
]
| https://en.wikipedia.org/wiki?curid=6806500 |
68065196 | Percolation surface critical behavior | Percolation surface critical behavior concerns the influence of surfaces on the critical behavior of percolation.
Background.
Percolation is the study of connectivity in random systems, such as electrical conductivity in random conductor/insulator systems, fluid flow in porous media, gelation in polymer systems, etc. At a critical fraction of connectivity or porosity, long-range connectivity can take place, leading to long-range flow. The point where that connectivity takes place is called the percolation threshold, and considerable amount of work has been undertaken in finding those critical values for systems of various geometries, and the mathematical behavior of observables near that point. This leads to the study of critical behavior and the percolation critical exponents. These exponents allow one to describe the behavior as the threshold is approached.
The behavior of the percolating network near a surface differs from that in the main part of the system, called the "bulk." For example, exactly at the percolation threshold, the percolating network in the system is a fractal with large voids and a ramified structure. The surface interrupts this structure, so the percolating cluster is less likely to come into contact with the surface. As an example, consider a lattice system of bond percolation (percolation along the bonds or edges of the lattice). If the lattice is cubic and formula_0 is the probability that a bond is occupied (conducting), then the percolation threshold is known to be formula_1. At the surface, the lattice becomes a simple square lattice, where the bond threshold formula_2 is simply 1/2. Therefore, when the bulk of the system is at its threshold, the surface is well below its threshold, and the only way to have long-range connections along the surface is to have a path that goes from the surface into the bulk, conduction through the fractal percolation network, and then a path back to the surface again. This occurs with a critical behavior different from that in the bulk, and different from the critical behavior of a two-dimensional surface at its own threshold.
In the most common model for surface critical behavior in percolation, all bonds are assigned the same probability formula_0, and the behavior is studied at the bulk formula_2, with a value of 0.248812 in this case. In another model for surface behavior, the surface bonds are occupied with a different probability formula_3, while the bulk is kept at the normal bulk value. When formula_4 is increased to a higher value, a new "special" critical point formula_5 is reached, which has a different set of critical exponents.
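A minimal numerical sketch of the second model is given below. For simplicity it uses site (rather than bond) percolation on a cubic lattice, whose bulk site threshold is approximately 0.3116; sites on one face are occupied with probability p_s and bulk sites with probability p, and the code measures the fraction of that face belonging to a cluster spanning the sample. All sizes and probabilities are illustrative only.
import numpy as np
from scipy.ndimage import label
def surface_spanning_fraction(L=32, p=0.3116, p_s=0.5, seed=0):
    """Fraction of z=0 surface sites lying in a cluster that spans the z direction."""
    rng = np.random.default_rng(seed)
    thresholds = np.full((L, L, L), p)
    thresholds[:, :, 0] = p_s            # one face gets the surface probability
    occupied = rng.random((L, L, L)) < thresholds
    labels, _ = label(occupied)          # nearest-neighbour (face) connectivity
    spanning = np.intersect1d(labels[:, :, 0], labels[:, :, -1])
    spanning = spanning[spanning > 0]    # label 0 is the unoccupied background
    on_surface = np.isin(labels[:, :, 0], spanning) & occupied[:, :, 0]
    return on_surface.mean()
print(surface_spanning_fraction())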
Surface phase transitions.
In percolation, we can choose to occupy the sites or bonds at the surface with a different probability formula_3 to the bulk probability formula_0. Different surface phase transitions can then occur depending on the values of the bulk occupation probability formula_0 and the surface occupation probability formula_3. The simplest case is the ordinary transition, which occurs when formula_0 is at the critical probability for the bulk phase transition. Here both the bulk and the surface start percolating, regardless of the value of formula_3, since there will typically be a path connecting the surface boundaries through the percolating bulk. Then there is the surface transition, where the bulk probability is below the bulk threshold, but the surface probability is at the percolation threshold for percolation in one lower dimension (i.e. the dimension of the surface). Here the surface undergoes a percolation transition while the bulk remains disconnected. If we enter this region of the phase diagram where the surface is ordered while the bulk is disordered, and then increase the bulk probability, we eventually encounter the extraordinary transition, where the bulk undergoes a percolation transition with the surface already percolating. Finally, there is the special phase transition, which is an isolated point where the phase boundaries for the ordinary, special, and extraordinary transitions meet.
In general the different surface transitions will be in distinct surface universality classes, with different critical exponents. Given an exponent, say formula_6, we label the relevant exponent at the ordinary, surface, extraordinary, and special transitions by formula_7, formula_8, formula_9, and formula_10 respectively.
Surface critical exponents.
The probability that a surface site is connected to the infinite (percolating) cluster, for an infinite system and formula_11, is given by
formula_12
with formula_13 where formula_14 is the bulk exponent for the order parameter.
As a function of the time formula_15 in an epidemic process (or the chemical distance), we have at formula_16
formula_17
with formula_18, where formula_19 is the bulk dynamical exponent.
The correlation length exponents parallel and perpendicular to the surface are denoted formula_20 and formula_21, respectively.
Scaling relations.
The critical exponents satisfy the following scaling relations:
formula_22 (Deng and Blöte)
formula_23
formula_24
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "p_c = 0.311608..."
},
{
"math_id": 2,
"text": "p_c"
},
{
"math_id": 3,
"text": "p_s"
},
{
"math_id": 4,
"text": "p^{(s)}"
},
{
"math_id": 5,
"text": "p^{(s)}_c"
},
{
"math_id": 6,
"text": "\\gamma"
},
{
"math_id": 7,
"text": "\\gamma^{(o)}"
},
{
"math_id": 8,
"text": "\\gamma^{(s)}"
},
{
"math_id": 9,
"text": "\\gamma^{(e)}"
},
{
"math_id": 10,
"text": "\\gamma^{(sp)}"
},
{
"math_id": 11,
"text": "p > p_c"
},
{
"math_id": 12,
"text": " P(p) \\sim (p_c - p)^{\\beta_s} "
},
{
"math_id": 13,
"text": "\\beta_s > \\beta"
},
{
"math_id": 14,
"text": "\\beta"
},
{
"math_id": 15,
"text": "t"
},
{
"math_id": 16,
"text": " p=p_c "
},
{
"math_id": 17,
"text": " P(t,p_c) \\sim t^{\\delta_s} "
},
{
"math_id": 18,
"text": " \\delta_s = \\beta_s / \\nu_t "
},
{
"math_id": 19,
"text": " \\nu_t "
},
{
"math_id": 20,
"text": "\\nu_\\parallel"
},
{
"math_id": 21,
"text": "\\nu_\\perp"
},
{
"math_id": 22,
"text": "2 X_{h1} = d - 2 + \\eta_\\parallel"
},
{
"math_id": 23,
"text": "\\gamma_{1,1} = \\nu ( 1 - \\eta_\\parallel)"
},
{
"math_id": 24,
"text": "\\gamma_{1,1} = 2 \\gamma_{1} - \\gamma - \\nu"
}
]
| https://en.wikipedia.org/wiki?curid=68065196 |
680672 | Critical graph | Undirected graph
In graph theory, a critical graph is an undirected graph all of whose proper subgraphs have smaller chromatic number. In such a graph, every vertex or edge is a critical element, in the sense that its deletion would decrease the number of colors needed in a graph coloring of the given graph. The decrease in the number of colors cannot be by more than one.
Variations.
A "formula_0-critical graph" is a critical graph with chromatic number formula_0. A graph formula_1 with chromatic number formula_0 is "formula_0-vertex-critical" if each of its vertices is a critical element. Critical graphs are the "minimal" members in terms of chromatic number, which is a very important measure in graph theory.
Some properties of a formula_0-critical graph formula_1 with formula_2 vertices and formula_3 edges: formula_1 has only one component and is finite; the minimum degree formula_4 obeys the inequality formula_5, that is, every vertex is adjacent to at least formula_6 others, and more strongly formula_1 is formula_7-edge-connected; if formula_1 is a formula_7-regular graph, then it is either the complete graph formula_8 (so that formula_9) or an odd cycle; formula_10; formula_11; and either formula_1 may be split into two smaller critical graphs, with an edge between every pair of vertices consisting of one vertex from each part, or formula_1 has at least formula_12 vertices.
Graph formula_1 is vertex-critical if and only if for every vertex formula_13, there is an optimal proper coloring in which formula_13 is a singleton color class.
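For small graphs, criticality can be checked directly from the definition by brute force, as in the sketch below (exponential time, so only suitable for a handful of vertices).
from itertools import product
def chromatic_number(vertices, edges):
    """Smallest k admitting a proper k-coloring, found by exhaustive search."""
    for k in range(1, len(vertices) + 1):
        for coloring in product(range(k), repeat=len(vertices)):
            colors = dict(zip(vertices, coloring))
            if all(colors[u] != colors[v] for u, v in edges):
                return k
    return 0
def is_vertex_critical(vertices, edges):
    """True if deleting any single vertex lowers the chromatic number."""
    chi = chromatic_number(vertices, edges)
    for v in vertices:
        rest = [u for u in vertices if u != v]
        rest_edges = [(a, b) for (a, b) in edges if v not in (a, b)]
        if chromatic_number(rest, rest_edges) >= chi:
            return False
    return True
# the 5-cycle has chromatic number 3 and is vertex-critical (in fact 3-critical)
c5_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(is_vertex_critical(list(range(5)), c5_edges))   # True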
As Hajós showed, every formula_0-critical graph may be formed from a complete graph formula_8 by combining the Hajós construction with an operation that identifies two non-adjacent vertices. The graphs formed in this way always require formula_0 colors in any proper coloring.
A double-critical graph is a connected graph in which the deletion of any pair of adjacent vertices decreases the chromatic number by two. It is an open problem to determine whether formula_8 is the only double-critical formula_0-chromatic graph.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "m"
},
{
"math_id": 4,
"text": "\\delta(G)"
},
{
"math_id": 5,
"text": "\\delta(G)\\ge k-1"
},
{
"math_id": 6,
"text": "k-1"
},
{
"math_id": 7,
"text": "(k-1)"
},
{
"math_id": 8,
"text": "K_k"
},
{
"math_id": 9,
"text": "n=k"
},
{
"math_id": 10,
"text": "2m\\ge(k-1)n+k-3"
},
{
"math_id": 11,
"text": "2m\\ge (k-1)n+\\lfloor(k-3)/(k^2-3)\\rfloor n"
},
{
"math_id": 12,
"text": "2k-1"
},
{
"math_id": 13,
"text": "v"
}
]
| https://en.wikipedia.org/wiki?curid=680672 |
680729 | Schulze method | Single-winner electoral system
The Schulze method is a single-winner ranked-choice voting rule developed by Markus Schulze. It is also known as the beatpath method. The Schulze method is a Condorcet method, which means it will elect a majority-choice candidate if one exists; in other words, if most people rank A above B, A will defeat B (whenever this is possible).
Schulze's method is based on the idea of breaking cyclic ties by using indirect victories. The idea is that if Alice beats Bob, and Bob beats Charlie, then Alice (indirectly) beats Charlie; this kind of indirect win is called a "beatpath".
For proportional representation, a single transferable vote (STV) variant known as Schulze STV also exists. The Schulze method is used by several organizations including Debian, Ubuntu, Gentoo, Pirate Party political parties and many others. It was also used by Wikimedia prior to their adoption of score voting.
Description of the method.
Schulze's method uses ranked ballots with equal ratings allowed. There are two common (equivalent) descriptions of Schulze's method.
Beatpath explanation.
The idea behind Schulze's method is that if Alice defeats Bob, and Bob beats Charlie, then Alice "indirectly" defeats Charlie; this kind of indirect win is called a 'beatpath'.
Every beatpath is assigned a particular "strength". The strength of a single-step beatpath from Alice to Bob is just the number of voters who rank Alice over Bob. For a longer beatpath, consisting of multiple "beats", the strength of a beatpath is as strong as its weakest link (i.e. the beat with the smallest number of winning votes).
We say Alice has a "beatpath-win" over Bob if her strongest beatpath to Bob is stronger than all of Bob's beatpaths to Alice. The winner is the candidate who has a beatpath-win over every other candidate.
Markus Schulze proved that this definition of a beatpath-win is transitive; in other words, if Alice has a beatpath-win over Bob, and Bob has a beatpath-win over Charlie, Alice has a beatpath-win over Charlie. As a result, the Schulze method is a Condorcet method, providing a full extension of the majority rule to any set of ballots.
Iterative description.
The Schulze winner can also be constructed iteratively, using a defeat-dropping method: repeatedly restrict attention to the Schwartz set of the remaining candidates and, while more than one candidate remains, drop the weakest pairwise defeat among them and recompute.
The winner is the only candidate left at the end of the procedure.
Example.
In the following example 45 voters rank 5 candidates.
The pairwise preferences have to be computed first. For example, when comparing A and B pairwise, there are 5+5+3+7=20 voters who prefer A to B, and 8+2+7+8=25 voters who prefer B to A. So formula_0 and formula_1. The full set of pairwise preferences is:
The cells for d[X, Y] have a light green background if d[X, Y] > d[Y, X], otherwise the background is light red. There is no undisputed winner by only looking at the pairwise differences here.
Now the strongest paths have to be identified. To help visualize the strongest paths, the set of pairwise preferences is depicted in the diagram on the right in the form of a directed graph. An arrow from the node representing a candidate X to the one representing a candidate Y is labelled with d[X, Y]. To avoid cluttering the diagram, an arrow has only been drawn from X to Y when d[X, Y] > d[Y, X] (i.e. the table cells with light green background), omitting the one in the opposite direction (the table cells with light red background).
One example of computing the strongest path strength is p[B, D] = 33: the strongest path from B to D is the direct path (B, D) which has strength 33. But when computing p[A, C], the strongest path from A to C is not the direct path (A, C) of strength 26, rather the strongest path is the indirect path (A, D, C) which has strength min(30, 28) = 28. The "strength" of a path is the strength of its weakest link.
For each pair of candidates X and Y, the following table shows the strongest path from candidate X to candidate Y in red, with the weakest link underlined.
Now the output of the Schulze method can be determined. For example, when comparing A and B,
since formula_2, for the Schulze method candidate A is "better" than candidate B. Another example is that formula_3, so candidate E is "better" than candidate D. Continuing in this way, the result is that the Schulze ranking is formula_4, and E wins. In other words, E wins since formula_5 for every other candidate X.
Implementation.
The only difficult step in implementing the Schulze method is computing the strongest path strengths. However, this is a well-known problem in graph theory sometimes called the widest path problem. One simple way to compute the strengths, therefore, is a variant of the Floyd–Warshall algorithm. The following pseudocode illustrates the algorithm.
for i from 1 to C
for j from 1 to C
if i ≠ j then
if d[i,j] > d[j,i] then
p[i,j] := d[i,j]
else
p[i,j] := 0
for i from 1 to C
for j from 1 to C
if i ≠ j then
for k from 1 to C
if i ≠ k and j ≠ k then
p[j,k] := max (p[j,k], min (p[j,i], p[i,k]))
This algorithm is efficient and has running time O("C"3) where "C" is the number of candidates.
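The same computation can be carried out in an ordinary programming language. The sketch below (Python) uses a ballot profile reconstructed from the tallies quoted in the example above, computes the pairwise preferences d, the strongest path strengths p via the widest-path recurrence, and prints the potential winners.
# Ballot profile reconstructed from the tallies quoted in the example
# (number of voters, ranking from most to least preferred).
ballots = [(5, "ACBED"), (5, "ADECB"), (8, "BEDAC"), (3, "CABED"),
           (7, "CAEBD"), (2, "CBADE"), (7, "DCEBA"), (8, "EBADC")]
candidates = "ABCDE"
# pairwise preferences: d[x][y] = number of voters ranking x above y
d = {x: {y: 0 for y in candidates} for x in candidates}
for count, ranking in ballots:
    for i, x in enumerate(ranking):
        for y in ranking[i + 1:]:
            d[x][y] += count
# strongest path strengths p[x][y] (widest-path variant of Floyd-Warshall)
p = {x: {y: d[x][y] if d[x][y] > d[y][x] else 0 for y in candidates} for x in candidates}
for i in candidates:
    for j in candidates:
        if i != j:
            for k in candidates:
                if k != i and k != j:
                    p[j][k] = max(p[j][k], min(p[j][i], p[i][k]))
winners = [x for x in candidates if all(p[x][y] >= p[y][x] for y in candidates if y != x)]
print(winners)   # ['E']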
Ties and alternative implementations.
When allowing users to have ties in their preferences, the outcome of the Schulze method naturally depends on how these ties are interpreted in defining d[*,*]. Two natural choices are that d[A, B] represents either the number of voters who strictly prefer A to B (A>B), or the "margin" of (voters with A>B) minus (voters with B>A). But no matter how the "d"s are defined, the Schulze ranking has no cycles, and assuming the "d"s are unique it has no ties.
Although ties in the Schulze ranking are unlikely, they are possible. Schulze's original paper recommended breaking ties by random ballot.
There is another alternative way to "demonstrate" the winner of the Schulze method. This method is equivalent to the others described here, but the presentation is optimized for the significance of steps being "visually apparent" as a human goes through it, not for computation.
Here is a margins table made from the above example. Note the change of order used for demonstration purposes.
The first drop (A's loss to E by 1 vote) does not help shrink the Schwartz set.
So we get straight to the second drop (E's loss to C by 3 votes), and that shows us the winner, E, with its clear row.
This method can also be used to calculate a result, if the table is remade in such a way that one can conveniently and reliably rearrange the order of the candidates on both the row and the column, with the same order used on both at all times.
Satisfied and failed criteria.
Satisfied criteria.
The Schulze method satisfies the following criteria:
<templatestyles src="Div col/styles.css"/>
Failed criteria.
Since the Schulze method satisfies the Condorcet criterion, it automatically fails the following criteria:
Likewise, since the Schulze method is not a dictatorship and is a ranked voting system (not rated), Arrow's Theorem implies it fails:
The Schulze method also fails
Comparison table.
The following table compares the Schulze method with other single-winner election methods:
<templatestyles src="Template:Sort under/styles.css" />
<templatestyles src="Template:Sticky table start/styles.css" />
The main difference between the Schulze method and the ranked pairs method can be seen in this example:
Suppose the MinMax score of a set X of candidates is the strength of the strongest pairwise win of a candidate A ∉ X against a candidate B ∈ X. Then the Schulze method, but not Ranked Pairs, guarantees that the winner is always a candidate of the set with minimum MinMax score. So, in some sense, the Schulze method minimizes the largest majority that has to be reversed when determining the winner.
On the other hand, Ranked Pairs minimizes the largest majority that has to be reversed to determine the order of finish, in the MinLexMax sense.
In other words, when Ranked Pairs and the Schulze method produce different orders of finish, for the majorities on which the two orders of finish disagree, the Schulze order reverses a larger majority than the Ranked Pairs order.
History.
The Schulze method was developed by Markus Schulze in 1997. It was first discussed in public mailing lists in 1997–1998 and in 2000.
In 2011, Schulze published the method in the academic journal "Social Choice and Welfare".
Usage.
Government.
The Schulze method is used by the city of Silla for all referendums. It is also used by the cities of Turin and San Donà di Piave and by the London Borough of Southwark through their use of the WeGovNow platform, which in turn uses the LiquidFeedback decision tool.
Political parties.
Schulze was adopted by the Pirate Party of Sweden (2009), and the Pirate Party of Germany (2010). The Boise, Idaho chapter of the Democratic Socialists of America in February chose this method for their first special election held in March 2018.
Organizations.
It is used by the Institute of Electrical and Electronics Engineers, by the Association for Computing Machinery, and by USENIX through their use of the HotCRP decision tool.
Organizations which currently use the Schulze method include:
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d[A, B] = 20"
},
{
"math_id": 1,
"text": "d[B, A] = 25"
},
{
"math_id": 2,
"text": "(28 =) p[A,B] > p[B,A] (= 25)"
},
{
"math_id": 3,
"text": "(31 =) p[E,D] > p[D,E] (= 24)"
},
{
"math_id": 4,
"text": "E > A > C > B > D"
},
{
"math_id": 5,
"text": "p[E,X] \\ge p[X,E]"
}
]
| https://en.wikipedia.org/wiki?curid=680729 |
680735 | Torus bundle | A torus bundle, in the sub-field of geometric topology in mathematics, is a kind of surface bundle over the circle, which in turn is a class of three-manifolds.
Construction.
To obtain a torus bundle: let formula_0 be an orientation-preserving homeomorphism of the two-dimensional torus formula_1 to itself. Then the three-manifold formula_2 is obtained by taking the Cartesian product of formula_1 and the unit interval and gluing one boundary component of the resulting manifold to the other boundary component by the map formula_0.
Then formula_2 is the torus bundle with monodromy formula_0.
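Written out explicitly, this is the mapping-torus construction; one standard convention for the identification is the following (a sketch, not a quotation from the sources):

```latex
M(f) \;=\; \bigl(T \times [0,1]\bigr) \,\big/\, \bigl((x,1)\sim(f(x),0)\bigr),
\qquad
M(f) \;\longrightarrow\; S^1 = [0,1]/(0\sim 1).
```

The second map, induced by projection onto the interval coordinate, exhibits formula_2 as a fibre bundle over the circle with fibre formula_1.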
Examples.
For example, if formula_0 is the identity map (i.e., the map which fixes every point of the torus) then the resulting torus bundle formula_2 is the three-torus: the Cartesian product of three circles.
Seeing the possible kinds of torus bundles in more detail requires an understanding of William Thurston's geometrization program. Briefly, if formula_0 is of finite order, then the manifold formula_2 has Euclidean geometry. If formula_0 is a power of a Dehn twist, then formula_2 has Nil geometry. Finally, if formula_0 is an Anosov map, then the resulting three-manifold has Sol geometry.
These three cases exactly correspond to the three possibilities for the absolute value of the trace of the action of formula_0 on the homology of the torus: either less than two, equal to two, or greater than two. | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "T"
},
{
"math_id": 2,
"text": "M(f)"
}
]
| https://en.wikipedia.org/wiki?curid=680735 |
68074189 | 1 Kings 17 | 1 Kings, chapter 17
1 Kings 17 is the seventeenth chapter of the Books of Kings in the Hebrew Bible or the First Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. This chapter belongs to the section comprising 1 Kings 16:15 to 2 Kings 8:29 which documents the period of Omri's dynasty. The focus of this chapter is the activity of prophet Elijah during the reign of king Ahab in the northern kingdom.
Text.
This chapter was originally written in the Hebrew language and since the 16th century is divided into 24 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Elijah's conflict with Ahab and his flight (17:1–6).
Following the list of Ahab's mistake in the previous chapter, prophet Elijah suddenly appeared to confront the king with Yahweh's word against Ahab's policy of syncretizing the worship of Yahweh and Baal, and declaring the war against Baal (as the god of fertility and rain) that the land would suffer drought and hunger (only Yahweh can control rain). This set up a tense conflict between the worship of the two deities which would be resolved in 1 Kings 18:41-5. As soon as he finished with his message, Elijah withdrew to a small east Jordanian river valley, being fed by the usually greedy (ravenous) ravens.
"And Elijah the Tishbite, of the inhabitants of Gilead, said to Ahab, "As the Lord God of Israel lives, before whom I stand, there shall not be dew nor rain these years, except at my word.""
Elijah and the widow in Zarephath (17:7–16).
After a period of time, Elijah experienced the same drought as the people of Israel, with the brook near where he lived, the wadi Cherith (see verse 3), running dry, so God sent him to the Sidon region, on the coast of Phoenicia (modern Lebanon), home of Queen Jezebel, and the heartland of Baal worship (cf. 1 Kings 16:31). Elijah was to find a widow to feed him there, which he did by asking a seemingly random woman at the gates of Zarephath for water and then for bread. When she protested, 'as Yahweh your God lives', that she and her son were themselves close to starving, Elijah repeated his request, adding the soothing words 'Do not be afraid' and a prophecy of an endless supply of food, which came about just as Elijah had said.
Elijah awakens the dead (17:17–24).
This story, like the previous one, involves the same three people and deals with the same question of whether it is worthwhile to support the men of God, whose presence might bring not only death (by revealing sins and bringing punishment, verse 18) but also life. The narrative is closely related to that in 2 Kings 4:18-37, showing that while a prophet 'plays the role of a magician reviving a dead soul by a ritual action', only God has authority over life and death (the prophet had to plead with God twice).
There are notable parallels of this narrative with the raising of the son of the widow of Nain in Luke 7, especially some verbal parallels. The raising of the son of the woman of Shunem (2 Kings 4) by Elisha is also similar, giving an example of a repeated pattern in the history of redemption.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=68074189 |
68078439 | House allocation problem | In economics and computer science, the house allocation problem is the problem of assigning objects to people with different preferences, such that each person receives exactly one object. The name "house allocation" comes from the main motivating application, which is assigning dormitory houses to students. Other commonly used terms are assignment problem and one-sided matching. When agents already own houses (and may trade them with other agents), the problem is often called a housing market. In house allocation problems, it is assumed that monetary transfers are not allowed; the variant in which monetary transfers are allowed is known as rental harmony.
Definitions.
There are "n" people (also called: "agents"), and m objects (also called: "houses"). The agents may have different preferences over the houses. They may express their preferences in various ways:
Several considerations may be important in designing algorithms for house allocation.
Efficient allocations.
"In economics", the primary efficiency requirement in house allocation is PE. There are various algorithms attaining a PE allocation in various settings.
Probably the simplest algorithm for house allocation is serial dictatorship: the agents are ordered in some arbitrary order (e.g. by seniority), and each agent in turn picks the best remaining house by his/her preferences. This algorithm is obviously SP. If the agents' preferences are strict, then it finds a PE allocation. However, it may be very unfair towards the agents who pick last. It can be made fairer (in expectation) by choosing the order uniformly at random; this leads to the mechanism called random serial dictatorship. The mechanism is PE ex-post, but it is not PE ex-ante; see fair random assignment for other randomized mechanisms which are ex-ante PE.
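As an illustration, here is a minimal Python sketch of random serial dictatorship; the agent and house names, and the assumption that each agent reports a complete strict preference list ordered best-first, are made up for the example.

```python
import random

def random_serial_dictatorship(preferences, seed=None):
    """Assign houses by random serial dictatorship.

    preferences maps each agent to a list of houses, best first
    (complete strict preferences are assumed).
    """
    rng = random.Random(seed)
    order = list(preferences)          # the agents
    rng.shuffle(order)                 # uniformly random picking order
    taken, assignment = set(), {}
    for agent in order:
        # the agent picks the best house that is still available
        for house in preferences[agent]:
            if house not in taken:
                assignment[agent] = house
                taken.add(house)
                break
    return assignment

prefs = {"Alice": ["h1", "h2", "h3"],
         "Bob":   ["h1", "h3", "h2"],
         "Carol": ["h2", "h1", "h3"]}
print(random_serial_dictatorship(prefs, seed=0))
```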
When each agent already owns a house, fairness considerations are less important, it is more important to guarantee to agents that they will not lose from participating (IR). The top trading cycle algorithm is the unique algorithm which guarantees IR, PE and SP. With strict preferences, TTC finds the unique core-stable allocation.
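A compact sketch of the top trading cycle procedure is given below, again under the assumption of complete strict preferences; `owns` records each agent's initial endowment, and all names are hypothetical.

```python
def top_trading_cycles(preferences, owns):
    """Top trading cycles with strict, complete preference lists.

    owns[agent] is the house the agent starts with;
    preferences[agent] lists all houses, best first.
    """
    assignment = {}
    remaining = set(owns)
    owner_of = {house: agent for agent, house in owns.items()}
    while remaining:
        available = {owns[agent] for agent in remaining}
        # each remaining agent points at the owner of its favourite available house
        favourite = {a: next(h for h in preferences[a] if h in available)
                     for a in remaining}
        points_to = {a: owner_of[favourite[a]] for a in remaining}
        # walk the pointer graph from any agent; it must run into a cycle
        a, seen = next(iter(remaining)), []
        while a not in seen:
            seen.append(a)
            a = points_to[a]
        cycle = seen[seen.index(a):]
        # agents on the cycle trade: each gets its favourite house and leaves
        for a in cycle:
            assignment[a] = favourite[a]
        remaining -= set(cycle)
    return assignment

prefs = {"a1": ["h2", "h1", "h3"], "a2": ["h1", "h2", "h3"], "a3": ["h1", "h3", "h2"]}
owns = {"a1": "h1", "a2": "h2", "a3": "h3"}
print(top_trading_cycles(prefs, owns))   # a1 and a2 swap; a3 keeps h3
```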
Abdulkadiroglu and Sönmez consider an extended setting in which some agents already own a house while some others are house-less. Their mechanism is IR, PE and SP. They present two algorithms that implement this mechanism.
Ergin considers rules that are also "consistent", that is, their predictions do not depend on the order in which the assignments are realized.
"In computer science and operations research", the primary efficiency requirement is maximizing the sum of utilities. Finding a house allocation maximizing the sum of utilities is equivalent to finding a maximum-weight matching in a weighted bipartite graph; it is also called the assignment problem.
Fair allocations.
Algorithmic problems related to fairness of the matching have been studied in several contexts.
When agents have "binary valuations," their "like" relations define a bipartite graph on the sets of agents and houses. An envy-free house allocation corresponds to an "envy-free matching" in this graph. The following algorithmic problems have been studied.
When agents have "cardinal valuations", the graph of agents and houses becomes a weighted bipartite graph. The following algorithmic problems have been studied.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n^{\\gamma}"
},
{
"math_id": 1,
"text": "\\gamma > 0"
},
{
"math_id": 2,
"text": "\\gamma <1"
},
{
"math_id": 3,
"text": "\\gamma =1"
}
]
| https://en.wikipedia.org/wiki?curid=68078439 |
6807932 | Minimum-cost flow problem | Mathematical optimization problem
The minimum-cost flow problem (MCFP) is an optimization and decision problem to find the cheapest possible way of sending a certain amount of flow through a flow network. A typical application of this problem involves finding the best delivery route from a factory to a warehouse where the road network has some capacity and cost associated. The minimum cost flow problem is one of the most fundamental among all flow and circulation problems because most other such problems can be cast as a minimum cost flow problem, and because it can be solved efficiently using the network simplex algorithm.
Definition.
A flow network is a directed graph formula_0 with a source vertex formula_1 and a sink vertex formula_2, where each edge formula_3 has capacity formula_4, flow formula_5 and cost formula_6, with most minimum-cost flow algorithms supporting edges with negative costs. The cost of sending this flow along an edge formula_7 is formula_8. The problem requires an amount of flow formula_9 to be sent from source formula_10 to sink formula_11.
The definition of the problem is to minimize the total cost of the flow over all edges:
formula_12
with the constraints that the flow on every edge is nonnegative and does not exceed the edge's capacity, that flow is conserved at every node other than formula_10 and formula_11, and that the net flow out of the source formula_10 (equivalently, the net flow into the sink formula_11) equals the required amount formula_9.
Relation to other problems.
A variation of this problem is to find a flow which is maximum, but has the lowest cost among the maximum flow solutions. This could be called a minimum-cost maximum-flow problem and is useful for finding minimum cost maximum matchings.
With some solutions, finding the minimum cost maximum flow instead is straightforward. If not, one can find the maximum flow by performing a binary search on formula_9.
A related problem is the minimum cost circulation problem, which can be used for solving minimum cost flow. The minimum cost circulation problem has no source and sink; instead it has costs and lower and upper bounds on each edge, and seeks flow amounts within the given bounds that balance the flow at each vertex and minimize the sum over edges of cost times flow. Any minimum-cost flow instance can be converted into a minimum cost circulation instance by setting the lower bound on all edges to zero, and then making an extra edge from the sink formula_11 to the source formula_10, with capacity formula_13 and lower bound formula_14, forcing the total flow from formula_10 to formula_11 to also be formula_9.
The following problems are special cases of the minimum cost flow problem (we provide brief sketches of each applicable reduction, in turn):
Solutions.
The minimum cost flow problem can be solved by linear programming, since we optimize a linear function, and all constraints are linear.
Apart from that, many combinatorial algorithms exist. Some of them are generalizations of maximum flow algorithms, others use entirely different approaches.
Well-known fundamental algorithms (each with many variations) include cycle canceling, minimum mean cycle canceling, successive shortest paths, capacity scaling, the out-of-kilter algorithm, and the network simplex algorithm.
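For small instances, an off-the-shelf library is usually enough. The sketch below assumes the NetworkX library and its min_cost_flow helper; the graph, capacities, costs, and the required amount d = 4 (expressed through node demands) are all made up for illustration.

```python
import networkx as nx

G = nx.DiGraph()
# Send d = 4 units from s to t: a negative demand marks the source,
# a positive demand marks the sink.
G.add_node("s", demand=-4)
G.add_node("t", demand=4)

# add_edge(u, v, capacity=c(u, v), weight=a(u, v))  -- made-up numbers.
G.add_edge("s", "a", capacity=3, weight=1)
G.add_edge("s", "b", capacity=2, weight=4)
G.add_edge("a", "t", capacity=2, weight=2)
G.add_edge("a", "b", capacity=2, weight=1)
G.add_edge("b", "t", capacity=3, weight=1)

flow = nx.min_cost_flow(G)                 # dict of dicts: flow[u][v]
print(flow)
print("total cost:", nx.cost_of_flow(G, flow))
```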
Application.
Minimum weight bipartite matching.
Given a bipartite graph "G" = ("A" ∪ "B", "E"), the goal is to find the maximum cardinality matching in "G" that has minimum cost. Let "w": "E" → "R" be a weight function on the edges of "E". The minimum weight bipartite matching problem, or assignment problem, is to find a perfect matching "M" ⊆ "E" whose total weight is minimized. The idea is to reduce this problem to a network flow problem.
Let "G′" = ("V′" = "A" ∪ "B", "E′" = "E"). Assign capacity 1 to all the edges in "E′". Add a source vertex "s" and connect it to all the vertices in "A", and add a sink vertex "t" and connect all the vertices in "B" to it. The capacity of all the new edges is 1 and their cost is 0. It can then be shown that a minimum weight perfect matching in "G" corresponds to a minimum cost flow of value |"A"| in "G′".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G=(V,E)"
},
{
"math_id": 1,
"text": "s \\in V"
},
{
"math_id": 2,
"text": "t \\in V"
},
{
"math_id": 3,
"text": "(u,v) \\in E"
},
{
"math_id": 4,
"text": "c(u,v) > 0"
},
{
"math_id": 5,
"text": "f(u,v)"
},
{
"math_id": 6,
"text": "a(u,v)"
},
{
"math_id": 7,
"text": "(u,v)"
},
{
"math_id": 8,
"text": "f(u,v)\\cdot a(u,v)"
},
{
"math_id": 9,
"text": "d"
},
{
"math_id": 10,
"text": "s"
},
{
"math_id": 11,
"text": "t"
},
{
"math_id": 12,
"text": "\\sum_{(u,v) \\in E} a(u,v) \\cdot f(u,v)"
},
{
"math_id": 13,
"text": "c(t,s)=d"
},
{
"math_id": 14,
"text": "l(t,s)=d"
},
{
"math_id": 15,
"text": "n"
},
{
"math_id": 16,
"text": "(X,Y)"
},
{
"math_id": 17,
"text": "x \\in X"
},
{
"math_id": 18,
"text": "1/n"
},
{
"math_id": 19,
"text": "y \\in Y"
}
]
| https://en.wikipedia.org/wiki?curid=6807932 |
68082008 | 1 Kings 18 | 1 Kings, chapter 18
1 Kings 18 is the eighteenth chapter of the Books of Kings in the Hebrew Bible or the First Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. This chapter belongs to the section comprising 1 Kings 16:15 to 2 Kings 8:29 which documents the period of Omri's dynasty. The focus of this chapter is the activity of prophet Elijah during the reign of king Ahab in the northern kingdom.
Text.
This chapter was originally written in the Hebrew language and since the 16th century is divided into 46 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Elijah and Obadiah (18:1–16).
The main theme of the narrative is drought and rain. As the land of Israel, including the king, suffered under the drought, YHWH sent Elijah to bring about first the crisis and then its solution in the conflict between the worship of the two deities. Before Elijah faced Ahab, a God-fearing minister named Obadiah (meaning: 'servant of Yahweh') acted as an intermediary. Obadiah was also the one who helped hide Yahweh's servants during queen Jezebel's purge of prophets (apparently the reason for Elijah's journey to the river Kerith and then into the foreign territory of Phoenicia, to Zarephath), so when Elijah unexpectedly stood before him, Obadiah fell to the ground in fear and respect. Similar miraculous transport of God's prophets is noted in Ezekiel 3:14 and elsewhere.
"After many days the word of the Lord came to Elijah, in the third year, saying, “Go, show yourself to Ahab, and I will send rain upon the earth.”"
Elijah and the competition between the deities on Mount Carmel (18:17–40).
As soon as Ahab met Elijah, he tried to hold the prophet responsible for the calamity that had befallen Israel, calling Elijah 'the troubler of Israel' (verse 17; cf. Joshua 6:18; 7:25 concerning Achan, whose sin brought God's judgment on Israel). Elijah immediately threw the accusation back at Ahab, charging him and his father's house with the sin of apostasy in forsaking Yahweh and following the Baals. In Joshua 7 the identity of the true 'troubler of Israel' was revealed in public before "all Israel", so in this case Elijah wanted "all Israel" to gather on Mount Carmel, a place near the Phoenician border, to resolve the matter.
The people of Israel at this point seemed no longer to hold to YHWH-monotheism, as they did not react at all to the choice Elijah offered them, 'YHWH or Baal' alone, but they agreed to witness the competition (while the prophets of Baal did not reply to the challenge). A miracle must bring truth to light, and it was quickly revealed that the Baals are incapable of doing this, even after their priests performed the whole range of cultic and ritual activities of Baalistic religion (as reliably reported in this narrative: the 'prayer, rhythmic movements, and self-mortification building up to ecstasy', verses 26–29). This violent cultic frenzy of Baalistic activities with 'swords and lances' (= spears) was attested by an Egyptian traveller, "Wen-Amon" or "Wenamun", who around 1100 BCE witnessed it in Byblos, a Phoenician coastal city north of Jezebel's hometown of Sidon. By contrast, YHWH-religion only requires the spoken word (prayer) to immediately produce miracles. The people who saw the demonstration of divine power quickly turned to YHWH's side with a call of faith, 'The LORD indeed is God', which unmistakably recalls Elijah's name ('my God is YHWH'); the personal conviction of Elijah thus became that of the people of Israel.
[Elijah answered] "Now therefore send and gather all Israel to me at Mount Carmel, and the 450 prophets of Baal and the 400 prophets of Asherah, who eat at Jezebel's table."
"And Elijah took twelve stones, according to the number of the tribes of the sons of Jacob, to whom the word of the Lord had come, saying, "Israel shall be your name.""
Elijah brings rain (18:41–46).
The triumph of Elijah on Mount Carmel seems even to have made king Ahab listen to Elijah's word that the king should eat and drink while expecting the rain to come soon. The return of the rains is another triumph for Elijah, who called for rain seven times (verses 42–44), and as the rain started to pour, Elijah had the 'hand of the LORD' grasping him so that he could run ahead of the royal chariots over the whole distance from Carmel to Jezreel. Thus the opening conflict of 16:32–33 is resolved by proving YHWH to be the only effective God.
"And the hand of the Lord was on Elijah; and he girded up his loins, and ran before Ahab to the entrance of Jezreel."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=68082008 |
6808344 | Circulation problem | Generalization of network flow problems
The circulation problem and its variants are a generalisation of network flow problems, with the added constraint of a lower bound on edge flows, and with flow conservation also being required for the source and sink (i.e. there are no special nodes). In variants of the problem, there are multiple commodities flowing through the network, and a cost on the flow.
Definition.
Given flow network formula_0 with:
formula_1, lower bound on flow from node formula_2 to node formula_3,
formula_4, upper bound on flow from node formula_2 to node formula_3,
formula_5, cost of a unit of flow on formula_6
and the constraints:
formula_7,
formula_8 (flow cannot appear or disappear in nodes).
Finding a flow assignment satisfying the constraints gives a solution to the given circulation problem.
In the minimum cost variant of the problem, minimize
formula_9
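Because the objective and all constraints are linear, a small minimum-cost circulation can be handed directly to a generic LP solver. The following sketch assumes SciPy's linprog; the three-node instance is made up, conservation is written as "flow out minus flow in equals zero" at every node (equivalent to the convention above), and the lower and upper bounds become simple variable bounds.

```python
import numpy as np
from scipy.optimize import linprog

# Edges of a small made-up instance: (tail, head, lower, upper, cost).
edges = [(0, 1, 1, 4, 2),
         (1, 2, 0, 3, 1),
         (2, 0, 1, 4, 3),
         (1, 0, 0, 2, 1)]
n_nodes = 3

cost   = [c for (_, _, _, _, c) in edges]
bounds = [(l, u) for (_, _, l, u, _) in edges]

# Flow conservation: for every node, flow out minus flow in equals zero.
A_eq = np.zeros((n_nodes, len(edges)))
for j, (v, w, _, _, _) in enumerate(edges):
    A_eq[v, j] += 1   # edge leaves v
    A_eq[w, j] -= 1   # edge enters w
b_eq = np.zeros(n_nodes)

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x, res.fun)   # optimal circulation and its cost
```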
Multi-commodity circulation.
In a multi-commodity circulation problem, you also need to keep track of the flow of the individual commodities:
There is also a lower bound on the flow of each commodity.
The conservation constraint must be upheld individually for the commodities:
formula_11
Solution.
For the circulation problem, many polynomial algorithms have been developed (e.g., Edmonds–Karp algorithm, 1972; Tarjan 1987-1988). Tardos found the first strongly polynomial algorithm.
For the case of multiple commodities, the problem is NP-complete for integer flows. For fractional flows, it is solvable in polynomial time, as one can formulate the problem as a linear program.
Related problems.
Below are given some problems, and how to solve them with the general circulation setup given above. | [
{
"math_id": 0,
"text": "G(V,E)"
},
{
"math_id": 1,
"text": "l(v,w)"
},
{
"math_id": 2,
"text": "v"
},
{
"math_id": 3,
"text": "w"
},
{
"math_id": 4,
"text": "u(v,w)"
},
{
"math_id": 5,
"text": "c(v,w)"
},
{
"math_id": 6,
"text": "(v,w)"
},
{
"math_id": 7,
"text": "l(v,w) \\leq f(v,w) \\leq u(v,w)"
},
{
"math_id": 8,
"text": "\\sum_{w \\in V} f(u,w) = 0"
},
{
"math_id": 9,
"text": "\\sum_{(v,w) \\in E} c(v,w) \\cdot f(v,w)."
},
{
"math_id": 10,
"text": "i"
},
{
"math_id": 11,
"text": "\\ \\sum_{w \\in V} f_i(u,w) = 0."
},
{
"math_id": 12,
"text": "K_i(s_i,t_i,d_i)"
},
{
"math_id": 13,
"text": "d_i"
},
{
"math_id": 14,
"text": "s_i"
},
{
"math_id": 15,
"text": "t_i"
},
{
"math_id": 16,
"text": "(t_i,s_i)"
},
{
"math_id": 17,
"text": "l_i(t_i,s_i) = u(t_i,s_i) = d_i"
},
{
"math_id": 18,
"text": "l_i(u,v)=0"
},
{
"math_id": 19,
"text": "t"
},
{
"math_id": 20,
"text": "s"
},
{
"math_id": 21,
"text": "l(t,s)=0"
},
{
"math_id": 22,
"text": "u(t,s)="
},
{
"math_id": 23,
"text": "c(t,s)=-1"
},
{
"math_id": 24,
"text": "m"
},
{
"math_id": 25,
"text": "l(t,s)=u(t,s)=m"
},
{
"math_id": 26,
"text": "c(t,s)=0"
},
{
"math_id": 27,
"text": "l(u,v)=0"
},
{
"math_id": 28,
"text": "c(u,v)=1"
},
{
"math_id": 29,
"text": "(t,s)"
},
{
"math_id": 30,
"text": "l(t,s)=u(t,s)=1"
},
{
"math_id": 31,
"text": "v(v-1)/2"
}
]
| https://en.wikipedia.org/wiki?curid=6808344 |
68087802 | Probability-proportional-to-size sampling | In survey methodology, probability-proportional-to-size (pps) sampling is a sampling process where each element of the population (of size "N") has some (independent) chance formula_0 to be selected to the sample when performing one draw. This formula_0 is proportional to some known quantity formula_1 so that formula_2.
One of the cases in which this occurs, as developed by Hansen and Hurwitz in 1943, is when we have several clusters of units, each with a different (known in advance) number of units; then each cluster can be selected with a probability that is proportional to the number of units inside it. So, for example, if we have 3 clusters with 10, 20 and 30 units each, then the chance of selecting the first cluster will be 1/6, the second would be 1/3, and the third cluster will be 1/2.
The pps sampling results in a fixed sample size "n" (as opposed to Poisson sampling, which is similar but results in a random sample size with expectation "n"). When selecting items with replacement, the selection procedure is simply to draw one item at a time (like getting "n" draws from a multinomial distribution with "N" elements, each with its own selection probability formula_0). When sampling without replacement, the scheme can become more complex.
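For the with-replacement case, each draw simply samples an index with the stated probabilities; a minimal NumPy sketch for the three-cluster example above (the sample size n = 5 is arbitrary):

```python
import numpy as np

sizes = np.array([10, 20, 30])           # units per cluster
p = sizes / sizes.sum()                  # selection probabilities 1/6, 1/3, 1/2

rng = np.random.default_rng(0)
sample = rng.choice(len(sizes), size=5, replace=True, p=p)  # n = 5 draws
print(sample)                            # indices of the selected clusters
```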
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p_i"
},
{
"math_id": 1,
"text": "x_i"
},
{
"math_id": 2,
"text": "p_i = \\frac{x_i}{\\sum_{i=1}^N x_i}"
}
]
| https://en.wikipedia.org/wiki?curid=68087802 |
68090028 | Bochner's tube theorem | Theorem about holomorphic functions of several complex variables
In mathematics, Bochner's tube theorem (named for Salomon Bochner) shows that every function holomorphic on a tube domain in formula_0 can be extended to the convex hull of this domain.
Theorem Let formula_1 be a connected open set. Then every function formula_2 holomorphic on the tube domain formula_3 can be extended to a function holomorphic on the convex hull formula_4.
A classic reference is (Theorem 9). See also for other proofs.
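As a simple illustration of the strength of the statement (this example is not taken from the cited references): for n ≥ 2 one may take

```latex
\omega = \mathbb{R}^n \setminus \{0\},\qquad
\operatorname{ch}(\omega) = \mathbb{R}^n,\qquad
\Omega = \omega + i\,\mathbb{R}^n = \mathbb{C}^n \setminus i\,\mathbb{R}^n,
```

so every function holomorphic on the complement of the purely imaginary subspace extends to an entire function on formula_0 — a Hartogs-type extension phenomenon with no analogue in one variable (for n = 1 the set ω is not connected).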
Generalizations.
The generalized version of this theorem was first proved by Kazlow (1979); it was also proved by Boivin and Dwilewicz (1998) under less complicated hypotheses.
Theorem Let formula_5 be a connected submanifold of formula_6 of class-formula_7. Then every continuous CR function on the tube domain formula_8 can be continuously extended to a CR function on formula_9. By "Int ch(S)" we will mean the interior taken in the smallest dimensional space which contains "ch(S)".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{C}^n"
},
{
"math_id": 1,
"text": "\\omega \\subset \\mathbb{R}^n"
},
{
"math_id": 2,
"text": "f(z)"
},
{
"math_id": 3,
"text": " \\Omega = \\omega+i \\mathbb{R}^n"
},
{
"math_id": 4,
"text": "\\operatorname{ch}(\\Omega)"
},
{
"math_id": 5,
"text": "\\omega"
},
{
"math_id": 6,
"text": "\\mathbb{R}^n"
},
{
"math_id": 7,
"text": "C^2"
},
{
"math_id": 8,
"text": "\\Omega(\\omega)"
},
{
"math_id": 9,
"text": "\\Omega(\\text{ach}(\\omega)).\\ \\left(\\Omega(\\omega) = \\omega+i \\mathbb{R}^n\\subset\\mathbb{C}^n\\ \\left(n\\geq 2\\right), \\text{ach}(\\omega):=\\omega\\cup \\text{Int}\\ \\text{ch}(\\omega)\\right)"
}
]
| https://en.wikipedia.org/wiki?curid=68090028 |
6809151 | Multi-commodity flow problem | Network flow problem (mathematics)
The multi-commodity flow problem is a network flow problem with multiple commodities (flow demands) between different source and sink nodes.
Definition.
Given a flow network formula_0, where edge formula_1 has capacity formula_2. There are formula_3 commodities formula_4, defined by formula_5, where formula_6 and formula_7 are the source and sink of commodity formula_8, and formula_9 is its demand. The variable formula_10 defines the fraction of flow formula_8 along edge formula_11, where formula_12 in case the flow can be split among multiple paths, and formula_13 otherwise (i.e. "single path routing"). Find an assignment of all flow variables which satisfies the following four constraints:
(1) Link capacity: The sum of all flows routed over a link does not exceed its capacity.
formula_14
(2) Flow conservation on transit nodes: The amount of a flow entering an intermediate node formula_15 is the same that exits the node.
formula_16
(3) Flow conservation at the source: A flow must exit its source node completely.
formula_17
(4) Flow conservation at the destination: A flow must enter its sink node completely.
formula_18
Corresponding optimization problems.
Load balancing is the attempt to route flows such that the utilization formula_19 of all links formula_20 is even, where
formula_21
The problem can be solved e.g. by minimizing formula_22. A common linearization of this problem is the minimization of the maximum utilization formula_23, where
formula_24
In the minimum cost multi-commodity flow problem, there is a cost formula_25 for sending a flow on formula_11. You then need to minimize
formula_26
In the maximum multi-commodity flow problem, the demand of each commodity is not fixed, and the total throughput is maximized by maximizing the sum of all demands formula_27
Relation to other problems.
The minimum cost variant of the multi-commodity flow problem is a generalization of the minimum cost flow problem (in which there is merely one source formula_28 and one sink formula_29). Variants of the circulation problem are generalizations of all flow problems. That is, any flow problem can be viewed as a particular circulation problem.
Usage.
Routing and wavelength assignment (RWA) in optical burst switching of optical networks can be approached via multi-commodity flow formulations.
Register allocation can be modeled as an integer minimum cost multi-commodity flow problem: Values produced by instructions are source nodes, values consumed by instructions are sink nodes and registers as well as stack slots are edges.
Solutions.
In the decision version of problems, the problem of producing an integer flow satisfying all demands is NP-complete, even for only two commodities and unit capacities (making the problem strongly NP-complete in this case).
If fractional flows are allowed, the problem can be solved in polynomial time through linear programming, or through (typically much faster) fully polynomial time approximation schemes.
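As an illustration of the linear-programming route, the sketch below models a tiny maximum multi-commodity flow instance with the PuLP modelling library (assumed to be installed, with its default CBC solver). For simplicity it uses absolute per-commodity flow variables rather than the fractional variables of the definition above, and the two-commodity network is made up.

```python
import pulp

# Made-up instance: directed edges with capacities, two commodities.
capacity = {("s1", "a"): 4, ("s2", "a"): 3, ("a", "t1"): 3,
            ("a", "t2"): 2, ("s1", "t2"): 1}
commodities = {1: ("s1", "t1"), 2: ("s2", "t2")}
nodes = {u for e in capacity for u in e}

prob = pulp.LpProblem("max_multicommodity_flow", pulp.LpMaximize)
x = {(i, u, v): pulp.LpVariable(f"x_{i}_{u}_{v}", lowBound=0)
     for i in commodities for (u, v) in capacity}

# Objective: total flow delivered to the sinks over all commodities.
prob += pulp.lpSum(x[i, u, v] for i, (s, t) in commodities.items()
                   for (u, v) in capacity if v == t)

# Shared link capacities.
for (u, v), c in capacity.items():
    prob += pulp.lpSum(x[i, u, v] for i in commodities) <= c

# Per-commodity flow conservation at intermediate nodes.
for i, (s, t) in commodities.items():
    for n in nodes - {s, t}:
        prob += (pulp.lpSum(x[i, u, v] for (u, v) in capacity if v == n)
                 == pulp.lpSum(x[i, u, v] for (u, v) in capacity if u == n))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```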
Applications.
Multicommodity flow is applied to overlay routing in content delivery.
References.
Jean-Patrice Netter, "Flow Augmenting Meshings: a primal type of approach to the maximum integer flow in a multi-commodity network", Ph.D. dissertation, Johns Hopkins University, 1971.
{
"math_id": 0,
"text": "\\,G(V,E)"
},
{
"math_id": 1,
"text": "(u,v) \\in E"
},
{
"math_id": 2,
"text": "\\,c(u,v)"
},
{
"math_id": 3,
"text": "\\,k"
},
{
"math_id": 4,
"text": "K_1,K_2,\\dots,K_k"
},
{
"math_id": 5,
"text": "\\,K_i=(s_i,t_i,d_i)"
},
{
"math_id": 6,
"text": "\\,s_i"
},
{
"math_id": 7,
"text": "\\,t_i"
},
{
"math_id": 8,
"text": "\\,i"
},
{
"math_id": 9,
"text": "\\,d_i"
},
{
"math_id": 10,
"text": "\\,f_i(u,v)"
},
{
"math_id": 11,
"text": "\\,(u,v)"
},
{
"math_id": 12,
"text": "\\,f_i(u,v) \\in [0,1]"
},
{
"math_id": 13,
"text": "\\,f_i(u,v) \\in \\{0,1\\}"
},
{
"math_id": 14,
"text": "\\forall (u,v)\\in E:\\,\\sum_{i=1}^{k} f_i(u,v)\\cdot d_i \\leq c(u,v)"
},
{
"math_id": 15,
"text": "u"
},
{
"math_id": 16,
"text": "\\forall i\\in\\{1,\\ldots,k\\}:\\,\\sum_{w \\in V} f_i(u,w) - \\sum_{w \\in V} f_i(w,u) = 0 \\quad \\mathrm{when} \\quad u \\neq s_i, t_i "
},
{
"math_id": 17,
"text": "\\forall i\\in\\{1,\\ldots,k\\}:\\,\\sum_{w \\in V} f_i(s_i,w) - \\sum_{w \\in V} f_i(w,s_i) = 1"
},
{
"math_id": 18,
"text": "\\forall i\\in\\{1,\\ldots,k\\}: \\,\\sum_{w \\in V} f_i(w,t_i) - \\sum_{w \\in V} f_i(t_i,w) = 1"
},
{
"math_id": 19,
"text": "U(u,v)"
},
{
"math_id": 20,
"text": "(u,v)\\in E"
},
{
"math_id": 21,
"text": "U(u,v)=\\frac{\\sum_{i=1}^{k} f_i(u,v)\\cdot d_i}{c(u,v)}"
},
{
"math_id": 22,
"text": "\\sum_{u,v\\in V} (U(u,v))^2"
},
{
"math_id": 23,
"text": "U_{max}"
},
{
"math_id": 24,
"text": "\\forall (u,v)\\in E:\\, U_{max} \\geq U(u,v)"
},
{
"math_id": 25,
"text": "a(u,v) \\cdot f(u,v)"
},
{
"math_id": 26,
"text": "\\sum_{(u,v) \\in E} \\left( a(u,v) \\sum_{i=1}^{k} f_i(u,v)\\cdot d_i \\right)"
},
{
"math_id": 27,
"text": "\\sum_{i=1}^{k} d_i"
},
{
"math_id": 28,
"text": "s"
},
{
"math_id": 29,
"text": "t"
}
]
| https://en.wikipedia.org/wiki?curid=6809151 |