70629729 | Joshua 15 | Book of Joshua, chapter 15
Joshua 15 is the fifteenth chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the allotment of land for the tribe of Judah, a part of a section comprising Joshua 13:1–21:45 about the Israelites allotting the land of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 63 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The narrative of the Israelites allotting the land of Canaan comprises verses 13:1 to 21:45 of the Book of Joshua and has the following outline:
A. Preparations for Distributing the Land (13:1–14:15)
B. The Allotment for Judah (15:1–63)
1. Judah's Boundaries (Joshua 15:1–12)
2. Achsah's Blessing (Joshua 15:13–19)
3. The Cities of Judah's Inheritance (15:20–63)
C. The Allotment for Joseph (16:1–17:18)
D. Land Distribution at Shiloh (18:1–19:51)
E. Levitical Distribution and Conclusion (20:1–21:45)
There are three key elements in the report of the allotments for the nine and a half tribes in the land of Canaan as follows:
Judah's boundaries (15:1–12).
The allotment for the tribe of Judah is recorded first and is the longest among those of the Cisjordan tribes, consisting of a definition of its boundaries (verses 1–12) and a list of its cities (verses 20–63), with an additional insertion about the inheritance of Caleb (verses 13–19). The boundary description (15:1–12) proceeds in the order south, east, north, west. The southern boundary runs from the southern tip of the Dead Sea to the Mediterranean, including Kadesh-barnea and extending to the 'Wadi [or brook] of Egypt' (now "Wadi el-Arish"). The eastern boundary is the Dead Sea. The northern boundary is drawn carefully round the southern extremities of the city of Jerusalem (verse 8), which is still in the possession of the Jebusites. The western boundary is the Mediterranean Sea.
"Then the boundary goes up by the Valley of the Son of Hinnom at the southern shoulder of the Jebusite (that is, Jerusalem). And the boundary goes up to the top of the mountain that lies over against the Valley of Hinnom, on the west, at the northern end of the Valley of Rephaim."
Achsah's blessing (15:13–19).
Having been granted the city of Hebron by Joshua, Caleb has to fight to conquer it along with surrounding areas (possibly as part of Joshua's conquest in Joshua 10:36–37). In turn, Caleb becomes a 'distributor', granting land to his son-in-law Othniel because of his role in the conquest (Othniel later becomes the first 'Judge' of Israel; Judges 3:8–11), and to his daughter Achsah, Othniel's wife, whose request for water reflects the conditions in the drier areas of the Negeb, Judah's southern desert.
Cities of Judah (15:20–63).
The long list of cities shows the extensive land of Judah, incorporating both the rich plain and the dry wilderness, especially the viticulture on the terraced slopes of the hill country and lowlands according to the blessing of Jacob to Judah (Genesis 49:11–12). There are four distinct geographical areas of the land: the Negeb in the south, the lowland (Shephelah), the hill country, and the wilderness toward the Dead Sea.
Lands close to the drier area, such as Carmel and Maon (verse 55), were more suitable for sheep-rearing than agriculture; both places are mentioned in the story of Nabal, a sheep-farmer who insulted David (1 Samuel 25:2).
Prominent cities mentioned in the list include such places as Adullam, Socoh, Jarmuth, Zanoah and Zorah. Keilah, Maresha, Maon, Halhul, and Timnah are also named there. The list of cities can be divided into twelve groups or districts (marked by the repeated phrase 'with their villages'), a division which was apparently still in use for administering tax collection during the reign of King Manasseh, based on archaeological discoveries of the city names in the "fiscal bullae" of that period.
The final verse (verse 63), along with other similar ones, notes Israel's partial failure to take the land, despite the initial sweeping victory in Joshua 1–12, especially chapters 11–12.
| [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70629729 |
70629731 | Joshua 16 | Book of Joshua, chapter 16
Joshua 16 is the sixteenth chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the allotment of land for the tribe of Joseph, especially the tribe of Ephraim, a part of a section comprising Joshua 13:1–21:45 about the Israelites allotting the land of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 10 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The narrative of the Israelites allotting the land of Canaan comprises verses 13:1 to 21:45 of the Book of Joshua and has the following outline:
A. Preparations for Distributing the Land (13:1–14:15)
B. The Allotment for Judah (15:1–63)
C. The Allotment for Joseph (16:1–17:18)
1. Joseph's Allotment (16:1–4)
2. Ephraim's Inheritance (16:5–10)
3. Manasseh's Inheritance (17:1–13)
4. Additional Land for Joseph (17:14–18)
D. Land Distribution at Shiloh (18:1–19:51)
E. Levitical Distribution and Conclusion (20:1–21:45)
There are three key elements in the report of the allotments for the nine and a half tribes in the land of Canaan as follows:
Joseph's Allotment (16:1–4).
The tribe of Joseph is allotted next after Judah (cf. the space devoted to each tribe in Jacob's blessing, Genesis 49:8–12, 22–26), with subdivision into Ephraim and Manasseh (Joshua 14:4). Overall it covers a huge area of land in Canaan between the Jordan River and the Mediterranean Sea, from just north of the Dead Sea to Mount Carmel in the north-west, in addition to the Transjordan lands allotted to the other half of Manasseh.
The southern boundary (verses 1–3) borders Benjamin to the south (16:2–3 parallel 18:12–13), running from Jericho (converging with both Judah and Benjamin there) up towards Bethel, along the route from Jericho to Ai, going past the important military outpost of Gezer, with a view of the entry to the hill country from the plain.
"Then going from Bethel to Luz, it passes along to Ataroth, the territory of the Archites."
Allotment for Ephraim (16:5–10).
The boundary of Ephraim is defined in detail on its northern and eastern borders with Manasseh (6b–7), and verse 9 seems to indicate a complex definition of the borders between them.
"However, they did not drive out the Canaanites who lived in Gezer, so the Canaanites have lived in the midst of Ephraim to this day but have been made to do forced labor."
| [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70629731 |
70629735 | Joshua 17 | Book of Joshua, chapter 17
Joshua 17 is the seventeenth chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the allotment of land for the tribe of Joseph, especially the tribe of Manasseh, a part of a section comprising Joshua 13:1–21:45 about the Israelites allotting the land of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 18 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q48 (4QJoshb; 100–50 BCE) with extant verses 1–5, 11–15.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The narrative of the Israelites allotting the land of Canaan comprises verses 13:1 to 21:45 of the Book of Joshua and has the following outline:
A. Preparations for Distributing the Land (13:1–14:15)
B. The Allotment for Judah (15:1–63)
C. The Allotment for Joseph (16:1–17:18)
1. Joseph's Allotment (16:1–4)
2. Ephraim's Inheritance (16:5–10)
3. Manasseh's Inheritance (17:1–13)
4. Additional Land for Joseph (17:14–18)
D. Land Distribution at Shiloh (18:1–19:51)
E. Levitical Distribution and Conclusion (20:1–21:45)
There are three key elements in the report of the allotments for the nine and a half tribes in the land of Canaan as follows:
Allotment for Manasseh (17:1–13).
The tribe of Joseph is allotted with subdivision into Ephraim and Manasseh (Joshua 14:4), overall covering a huge area of land in Canaan between the Jordan River and the Mediterranean Sea, from just north of the Dead Sea to Mount Carmel in the north-west, in addition to the Transjordan lands allotted to the other half of Manasseh. The allotment for the tribe of Manasseh as a whole includes the Transjordan territory (17:1–6), containing genealogical information closely related to Numbers 26:29–34. Machir and Gilead appear in Judges 5 (verses 14, 17), where Machir appears to occupy lands west of the Jordan, while Gilead has the eastern side of the Jordan, with six clans named in the Book of Numbers. The story of Zelophehad's daughters concludes a narrative from Numbers 27 and 36: the right of inheritance for female descendants, to protect family property in the absence of male heirs, was established by Moses, with the requirement that the daughters should marry within the tribe (Numbers 36). Now the provisions were respected, and the five daughters of Zelophehad, son of Hepher, along with the five Gileadite clans (in place of Hepher), made up the 'ten portions' (verse 5) within the territory of Manasseh west of the Jordan (the other sons of Gilead had already received lands east of the Jordan).
Western Manasseh's allotment stretches from the north, bordering the land of Asher, to Michmethath, on the border with the land of Ephraim to the south (verse 7, cf. 16:6). There were still enclaves of the Canaanites (verses 11–12, cf. Judges 1:27–28) whom the people of Manasseh failed to expel, although they subjected them to forced labor.
"2 And allotments were made to the rest of the people of Manasseh by their clans, Abiezer, Helek, Asriel, Shechem, Hepher, and Shemida. These were the male descendants of Manasseh the son of Joseph, by their clans."
"3 Now Zelophehad the son of Hepher, son of Gilead, son of Machir, son of Manasseh, had no sons, but only daughters, and these are the names of his daughters: Mahlah, Noah, Hoglah, Milcah, and Tirzah."
Verses 2–3.
Of the eleven names (six sons of Gilead and five daughters of Zelophehad), six appear as place-names on ostraca (potsherds) found at Samaria. These "Samaria Ostraca" were found at the site of King Ahab's palace and contain inscriptions written in the paleo-Hebrew alphabet, which is very similar to the Siloam Inscription.
Additional land for Joseph (17:14–18).
The request from the tribe of Joseph (that is, the tribes of Manasseh and Ephraim) for more land is accepted by Joshua on the basis of the tribe's large population, on the condition that 'they should clear the hill country of trees and make it habitable'. This is evidenced in the history of agricultural deforestation in the hill country. Actually, the sense of constriction in the tribe of Joseph is related to their inability to expel the Canaanites of the plain, who have iron chariots. Thus, Joshua challenged the tribe of Joseph, with their great numbers, to drive the Canaanites out in spite of their chariots.
| [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70629735 |
706301 | Kronecker symbol | In number theory, the Kronecker symbol, written as formula_0 or formula_1, is a generalization of the Jacobi symbol to all integers formula_2. It was introduced by Leopold Kronecker (1885, page 770).
Definition.
Let formula_2 be a non-zero integer, with prime factorization
formula_3
where formula_4 is a unit (i.e., formula_5), and the formula_6 are primes. Let formula_7 be an integer. The Kronecker symbol formula_8 is defined by
formula_9
For odd formula_6, the number formula_10 is simply the usual Legendre symbol. This leaves the case when formula_11. We define formula_12 by
formula_13
Since it extends the Jacobi symbol, the quantity formula_14 is simply formula_15 when formula_16. When formula_17, we define it by
formula_18
Finally, we put
formula_19
These extensions suffice to define the Kronecker symbol for all integer values formula_20.
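The definition above is directly computable: strip off the unit and the factors of 2, then evaluate the remaining odd part with the standard Jacobi-symbol algorithm based on quadratic reciprocity. Below is a minimal Python sketch of this; the function name and structure are ours, not taken from any particular library.

```python
def kronecker(a: int, n: int) -> int:
    """Kronecker symbol (a|n), defined for all integers n as above."""
    if n == 0:
        return 1 if a in (1, -1) else 0    # (a|0)
    result = 1
    if n < 0:                              # unit u = -1: (a|-1) = -1 iff a < 0
        n = -n
        if a < 0:
            result = -result
    while n % 2 == 0:                      # factors of 2: use (a|2) as defined
        n //= 2
        if a % 2 == 0:
            return 0
        if a % 8 in (3, 5):                # a = +-3 (mod 8)
            result = -result
    # n is now odd and positive: standard Jacobi symbol via reciprocity.
    a %= n
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):            # (2|n) = -1 iff n = +-3 (mod 8)
                result = -result
        a, n = n, a                        # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0         # shared odd factor gives 0
```

For instance, kronecker(3, 8) returns −1, matching (3|2)^3, and kronecker(0, 0) returns 0.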
Some authors only define the Kronecker symbol for more restricted values; for example, formula_7 congruent to formula_21 and formula_22.
Table of values.
The following is a table of values of Kronecker symbol formula_23 with 1 ≤ "n", "k" ≤ 30.
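The table can be generated directly from the sketch above:

```python
# Rows n = 1..30, columns k = 1..30, using kronecker() from the sketch above.
for n in range(1, 31):
    print(' '.join(f'{kronecker(k, n):2d}' for k in range(1, 31)))
```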
Properties.
The Kronecker symbol shares many basic properties of the Jacobi symbol, under certain restrictions:
formula_24 if formula_25, otherwise formula_26.
formula_27 unless formula_28, one of formula_29 is zero and the other one is negative.
formula_30 unless formula_31, one of formula_32 is zero and the other one has odd part congruent to formula_33.
For formula_22, we have formula_34 whenever formula_35. If additionally formula_29 have the same sign, the same also holds for formula_36.
For formula_37, formula_38, we have formula_39 whenever formula_40.
On the other hand, the Kronecker symbol does not have the same connection to quadratic residues as the Jacobi symbol. In particular, the Kronecker symbol formula_41 for formula_42 can take values independently of whether formula_7 is a quadratic residue or nonresidue modulo formula_2.
Quadratic reciprocity.
The Kronecker symbol also satisfies the following versions of the quadratic reciprocity law.
For any nonzero integer formula_2, let formula_43 denote its "odd part": formula_44 where formula_43 is odd (for formula_45, we put formula_46). Then the following "symmetric version" of quadratic reciprocity holds for every pair of integers formula_32 such that formula_47:
formula_48
where the formula_49 sign is equal to formula_50 if formula_51 or formula_52 and is equal to formula_53 if formula_54 and formula_36.
There is also an equivalent "non-symmetric version" of quadratic reciprocity that holds for every pair of relatively prime integers formula_32:
formula_55
For any integer formula_2 let formula_56. Then we have another equivalent non-symmetric version that states
formula_57
for every pair of integers formula_32 (not necessarily relatively prime).
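The versions can be checked against one another by brute force over a small range. The sketch below tests the symmetric version, reusing kronecker() from the sketch above; the helper odd_part() implements formula_43 with the convention formula_46.

```python
from math import gcd

def odd_part(n: int) -> int:
    """The odd part n' of n, with the convention 0' = 1."""
    if n == 0:
        return 1
    while n % 2 == 0:
        n //= 2
    return n

# Symmetric reciprocity: (m|n)(n|m) = sign * (-1)^((m'-1)/2 * (n'-1)/2),
# with sign = -1 exactly when both m < 0 and n < 0.
for m in range(-20, 21):
    for n in range(-20, 21):
        if gcd(m, n) != 1:
            continue
        sign = -1 if (m < 0 and n < 0) else 1
        e = ((odd_part(m) - 1) // 2) * ((odd_part(n) - 1) // 2)
        rhs = sign * (1 if e % 2 == 0 else -1)
        assert kronecker(m, n) * kronecker(n, m) == rhs
```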
The "supplementary laws" generalize to the Kronecker symbol as well. These laws follow easily from each version of quadratic reciprocity law stated above (unlike with Legendre and Jacobi symbol where both the main law and the supplementary laws are needed to fully describe the quadratic reciprocity).
For any integer formula_2 we have
formula_58
and for any odd integer formula_2 we have
formula_59
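Both laws are again easy to verify numerically, reusing kronecker() and odd_part() from the sketches above.

```python
# Supplementary laws: (-1|n) = (-1)^((n'-1)/2) for all n,
# and (2|n) = (-1)^((n^2-1)/8) for odd n.
for n in range(-100, 101):
    e = (odd_part(n) - 1) // 2
    assert kronecker(-1, n) == (1 if e % 2 == 0 else -1)
    if n % 2 != 0:
        assert kronecker(2, n) == (1 if ((n * n - 1) // 8) % 2 == 0 else -1)
```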
Connection to Dirichlet characters.
If formula_60 and formula_38, the map formula_61 is a real Dirichlet character of modulus formula_62 Conversely, every real Dirichlet character can be written in this form with formula_63 (for formula_64 we have formula_65).
In particular, "primitive" real Dirichlet characters formula_66 are in a 1–1 correspondence with quadratic fields formula_67, where formula_68 is a nonzero square-free integer (we can include the case formula_69 to represent the principal character, even though it is not a quadratic field). The character formula_66 can be recovered from the field as the Artin symbol formula_70: that is, for a positive prime formula_71, the value of formula_72 depends on the behaviour of the ideal formula_73 in the ring of integers formula_74:
formula_75
Then formula_76 equals the Kronecker symbol formula_77, where
formula_78
is the discriminant of formula_79. The conductor of formula_66 is formula_80.
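As a small illustration of this correspondence, the sketch below (reusing kronecker() from above) checks that the map formula_76 = formula_77 is totally multiplicative and periodic with period formula_80 for a few fundamental discriminants; the particular discriminants chosen here are just examples.

```python
# D = -4, -3, 5, 8, -8 are the discriminants of Q(i), Q(sqrt(-3)), Q(sqrt(5)),
# Q(sqrt(2)), Q(sqrt(-2)); D = 12 is the discriminant of Q(sqrt(3)).
for D in (-4, -3, 5, 8, -8, 12):
    chi = lambda k, D=D: kronecker(D, k)
    for n in range(1, 60):
        assert chi(n + abs(D)) == chi(n)          # periodicity mod |D|
        for m in range(1, 20):
            assert chi(m * n) == chi(m) * chi(n)  # complete multiplicativity
```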
Similarly, if formula_22, the map formula_81 is a real Dirichlet character of modulus formula_82 However, not all real characters can be represented in this way; for example, the character formula_83 cannot be written as formula_84 for any formula_2. By the law of quadratic reciprocity, we have formula_85. A character formula_86 can be represented as formula_84 if and only if its odd part formula_87, in which case we can take formula_88.
References.
"This article incorporates material from Kronecker symbol on PlanetMath, which is licensed under the ." | [
{
"math_id": 0,
"text": "\\left(\\frac an\\right)"
},
{
"math_id": 1,
"text": "(a|n)"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "n=u \\cdot p_1^{e_1} \\cdots p_k^{e_k},"
},
{
"math_id": 4,
"text": "u"
},
{
"math_id": 5,
"text": "u=\\pm1"
},
{
"math_id": 6,
"text": "p_i"
},
{
"math_id": 7,
"text": "a"
},
{
"math_id": 8,
"text": "\\left(\\frac{a}{n}\\right)"
},
{
"math_id": 9,
"text": " \\left(\\frac{a}{n}\\right) := \\left(\\frac{a}{u}\\right) \\prod_{i=1}^k \\left(\\frac{a}{p_i}\\right)^{e_i}. "
},
{
"math_id": 10,
"text": "\\left(\\frac{a}{p_i}\\right)"
},
{
"math_id": 11,
"text": "p_i=2"
},
{
"math_id": 12,
"text": "\\left(\\frac{a}{2}\\right)"
},
{
"math_id": 13,
"text": " \\left(\\frac{a}{2}\\right) := \n\\begin{cases}\n 0 & \\mbox{if }a\\mbox{ is even,} \\\\\n 1 & \\mbox{if } a \\equiv \\pm1 \\pmod{8}, \\\\\n-1 & \\mbox{if } a \\equiv \\pm3 \\pmod{8}.\n\\end{cases}"
},
{
"math_id": 14,
"text": "\\left(\\frac{a}{u}\\right)"
},
{
"math_id": 15,
"text": "1"
},
{
"math_id": 16,
"text": "u=1"
},
{
"math_id": 17,
"text": "u=-1"
},
{
"math_id": 18,
"text": " \\left(\\frac{a}{-1}\\right) := \\begin{cases} -1 & \\mbox{if }a < 0, \\\\ 1 & \\mbox{if } a \\ge 0. \\end{cases} "
},
{
"math_id": 19,
"text": "\\left(\\frac a0\\right) := \\begin{cases}1&\\text{if }a=\\pm1,\\\\0&\\text{otherwise.}\\end{cases}"
},
{
"math_id": 20,
"text": "a,n"
},
{
"math_id": 21,
"text": "0,1\\bmod4"
},
{
"math_id": 22,
"text": "n>0"
},
{
"math_id": 23,
"text": "\\left(\\frac{k}{n}\\right)"
},
{
"math_id": 24,
"text": "\\left(\\tfrac an\\right)=\\pm1"
},
{
"math_id": 25,
"text": "\\gcd(a,n)=1"
},
{
"math_id": 26,
"text": "\\left(\\tfrac an\\right)=0"
},
{
"math_id": 27,
"text": "\\left(\\tfrac{ab}n\\right)=\\left(\\tfrac an\\right)\\left(\\tfrac bn\\right)"
},
{
"math_id": 28,
"text": "n=-1"
},
{
"math_id": 29,
"text": "a,b"
},
{
"math_id": 30,
"text": "\\left(\\tfrac a{mn}\\right)=\\left(\\tfrac am\\right)\\left(\\tfrac an\\right)"
},
{
"math_id": 31,
"text": "a=-1"
},
{
"math_id": 32,
"text": "m,n"
},
{
"math_id": 33,
"text": "3\\bmod4"
},
{
"math_id": 34,
"text": "\\left(\\tfrac an\\right)=\\left(\\tfrac bn\\right)"
},
{
"math_id": 35,
"text": "a\\equiv b\\bmod\\begin{cases}4n,&n\\equiv2\\pmod 4,\\\\n&\\text{otherwise.}\\end{cases}"
},
{
"math_id": 36,
"text": "n<0"
},
{
"math_id": 37,
"text": "a\\not\\equiv3\\pmod4"
},
{
"math_id": 38,
"text": "a\\ne0"
},
{
"math_id": 39,
"text": "\\left(\\tfrac am\\right)=\\left(\\tfrac an\\right)"
},
{
"math_id": 40,
"text": "m\\equiv n\\bmod\\begin{cases}4|a|,&a\\equiv2\\pmod 4,\\\\|a|&\\text{otherwise.}\\end{cases}"
},
{
"math_id": 41,
"text": "\\left(\\tfrac an\\right)"
},
{
"math_id": 42,
"text": "n\\equiv2\\pmod 4"
},
{
"math_id": 43,
"text": "n'"
},
{
"math_id": 44,
"text": "n=2^en'"
},
{
"math_id": 45,
"text": "n=0"
},
{
"math_id": 46,
"text": "0'=1"
},
{
"math_id": 47,
"text": "\\gcd(m,n)=1"
},
{
"math_id": 48,
"text": "\\left(\\frac mn\\right)\\left(\\frac nm\\right)=\\pm(-1)^{\\frac{m'-1}2\\frac{n'-1}2},"
},
{
"math_id": 49,
"text": "\\pm"
},
{
"math_id": 50,
"text": "+"
},
{
"math_id": 51,
"text": "m\\ge0"
},
{
"math_id": 52,
"text": "n\\ge0"
},
{
"math_id": 53,
"text": "-"
},
{
"math_id": 54,
"text": "m<0"
},
{
"math_id": 55,
"text": "\\left(\\frac mn\\right)\\left(\\frac{n}{|m|}\\right)=(-1)^{\\frac{m'-1}2\\frac{n'-1}2}."
},
{
"math_id": 56,
"text": "n^*=(-1)^{(n'-1)/2}n"
},
{
"math_id": 57,
"text": "\\left(\\frac{m^*}{n}\\right)=\\left(\\frac{n}{|m|}\\right)"
},
{
"math_id": 58,
"text": "\\left(\\frac{-1}{n}\\right)=(-1)^{\\frac{n'-1}{2}}"
},
{
"math_id": 59,
"text": "\\left(\\frac{2}{n}\\right)=(-1)^{\\frac{n^2-1}{8}}."
},
{
"math_id": 60,
"text": "a\\not\\equiv3\\pmod 4"
},
{
"math_id": 61,
"text": "\\chi(n)=\\left(\\tfrac an\\right)"
},
{
"math_id": 62,
"text": "\\begin{cases}4|a|,&a\\equiv2\\pmod 4,\\\\|a|,&\\text{otherwise.}\\end{cases}"
},
{
"math_id": 63,
"text": "a\\equiv0,1\\pmod 4"
},
{
"math_id": 64,
"text": "a\\equiv2\\pmod 4"
},
{
"math_id": 65,
"text": "\\left(\\tfrac{a}{n}\\right)=\\left(\\tfrac{4a}{n}\\right)"
},
{
"math_id": 66,
"text": "\\chi"
},
{
"math_id": 67,
"text": "F=\\mathbb Q(\\sqrt m)"
},
{
"math_id": 68,
"text": "m"
},
{
"math_id": 69,
"text": "\\mathbb Q(\\sqrt1)=\\mathbb Q"
},
{
"math_id": 70,
"text": "\\left(\\tfrac{F/\\mathbb Q}\\cdot\\right)"
},
{
"math_id": 71,
"text": "p"
},
{
"math_id": 72,
"text": "\\chi(p)"
},
{
"math_id": 73,
"text": "(p)"
},
{
"math_id": 74,
"text": "O_F"
},
{
"math_id": 75,
"text": "\\chi(p)=\\begin{cases}0,&(p)\\text{ is ramified,}\\\\1,&(p)\\text{ splits,}\\\\-1,&(p)\\text{ is inert.}\\end{cases}"
},
{
"math_id": 76,
"text": "\\chi(n)"
},
{
"math_id": 77,
"text": "\\left(\\tfrac Dn\\right)"
},
{
"math_id": 78,
"text": "D=\\begin{cases}m,&m\\equiv1\\pmod 4,\\\\4m,&m\\equiv2,3\\pmod 4\\end{cases}"
},
{
"math_id": 79,
"text": "F"
},
{
"math_id": 80,
"text": "|D|"
},
{
"math_id": 81,
"text": "\\chi(a)=\\left(\\tfrac an\\right)"
},
{
"math_id": 82,
"text": "\\begin{cases}4n,&n\\equiv2\\pmod 4,\\\\n,&\\text{otherwise.}\\end{cases}"
},
{
"math_id": 83,
"text": "\\left(\\tfrac{-4}\\cdot\\right)"
},
{
"math_id": 84,
"text": "\\left(\\tfrac\\cdot n\\right)"
},
{
"math_id": 85,
"text": "\\left(\\tfrac\\cdot n\\right)=\\left(\\tfrac{n^*}\\cdot\\right)"
},
{
"math_id": 86,
"text": "\\left(\\tfrac a\\cdot\\right)"
},
{
"math_id": 87,
"text": "a'\\equiv1\\pmod4"
},
{
"math_id": 88,
"text": "n=|a|"
}
]
| https://en.wikipedia.org/wiki?curid=706301 |
706311 | Canonical coordinates | Sets of coordinates on phase space which can be used to describe a physical system
<templatestyles src="Hlist/styles.css"/>
In mathematics and classical mechanics, canonical coordinates are sets of coordinates on phase space which can be used to describe a physical system at any given point in time. Canonical coordinates are used in the Hamiltonian formulation of classical mechanics. A closely related concept also appears in quantum mechanics; see the Stone–von Neumann theorem and canonical commutation relations for details.
As Hamiltonian mechanics are generalized by symplectic geometry and canonical transformations are generalized by contact transformations, so the 19th century definition of canonical coordinates in classical mechanics may be generalized to a more abstract 20th century definition of coordinates on the cotangent bundle of a manifold (the mathematical notion of phase space).
Definition in classical mechanics.
In classical mechanics, canonical coordinates are coordinates formula_0 and formula_1 in phase space that are used in the Hamiltonian formalism. The canonical coordinates satisfy the fundamental Poisson bracket relations:
formula_2
A typical example of canonical coordinates is for formula_0 to be the usual Cartesian coordinates, and formula_1 to be the components of momentum. Hence in general, the formula_1 coordinates are referred to as "conjugate momenta".
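The fundamental relations can be verified symbolically for the canonical bracket in a (q, p) chart. The following is a minimal SymPy sketch for two degrees of freedom; the bracket definition is the standard coordinate formula.

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
coords, momenta = [q1, q2], [p1, p2]

def poisson_bracket(f, g):
    """Canonical bracket {f, g} = sum_i (df/dq_i dg/dp_i - df/dp_i dg/dq_i)."""
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in zip(coords, momenta))

# The fundamental relations: {q^i, q^j} = {p_i, p_j} = 0, {q^i, p_j} = delta_ij.
assert poisson_bracket(q1, q2) == 0
assert poisson_bracket(p1, p2) == 0
assert poisson_bracket(q1, p1) == 1 and poisson_bracket(q2, p2) == 1
assert poisson_bracket(q1, p2) == 0
```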
Canonical coordinates can be obtained from the generalized coordinates of the Lagrangian formalism by a Legendre transformation, or from another set of canonical coordinates by a canonical transformation.
Definition on cotangent bundles.
Canonical coordinates are defined as a special set of coordinates on the cotangent bundle of a manifold. They are usually written as a set of formula_3 or formula_4 with the "x"'s or "q"'s denoting the coordinates on the underlying manifold and the "p"'s denoting the conjugate momentum, which are 1-forms in the cotangent bundle at point "q" in the manifold.
A common definition of canonical coordinates is any set of coordinates on the cotangent bundle that allow the canonical one-form to be written in the form
formula_5
up to a total differential. A change of coordinates that preserves this form is a canonical transformation; these are a special case of a symplectomorphism, which are essentially a change of coordinates on a symplectic manifold.
In the following exposition, we assume that the manifolds are real manifolds, so that cotangent vectors acting on tangent vectors produce real numbers.
Formal development.
Given a manifold Q, a vector field X on Q (a section of the tangent bundle "TQ") can be thought of as a function acting on the cotangent bundle, by the duality between the tangent and cotangent spaces. That is, define a function
formula_6
such that
formula_7
holds for all cotangent vectors p in formula_8. Here, formula_9 is a vector in formula_10, the tangent space to the manifold Q at point q. The function formula_11 is called the "momentum function" corresponding to X.
In local coordinates, the vector field X at point q may be written as
formula_12
where the formula_13 are the coordinate frame on TQ. The conjugate momentum then has the expression
formula_14
where the formula_1 are defined as the momentum functions corresponding to the vectors formula_13:
formula_15
The formula_0 together with the formula_16 form a coordinate system on the cotangent bundle formula_17; these coordinates are called the "canonical coordinates".
Generalized coordinates.
In Lagrangian mechanics, a different set of coordinates is used, called the generalized coordinates. These are commonly denoted as formula_18 with formula_0 called the generalized position and formula_19 the generalized velocity. When a Hamiltonian is defined on the cotangent bundle, then the generalized coordinates are related to the canonical coordinates by means of the Hamilton–Jacobi equations.
| [
{
"math_id": 0,
"text": "q^i"
},
{
"math_id": 1,
"text": "p_i"
},
{
"math_id": 2,
"text": "\\left\\{q^i, q^j\\right\\} = 0 \\qquad \\left\\{p_i, p_j\\right\\} = 0 \\qquad \\left\\{q^i, p_j\\right\\} = \\delta_{ij}"
},
{
"math_id": 3,
"text": "\\left(q^i, p_j\\right)"
},
{
"math_id": 4,
"text": "\\left(x^i, p_j\\right)"
},
{
"math_id": 5,
"text": "\\sum_i p_i\\,\\mathrm{d}q^i"
},
{
"math_id": 6,
"text": "P_X: T^*Q \\to \\mathbb{R}"
},
{
"math_id": 7,
"text": "P_X(q, p) = p(X_q)"
},
{
"math_id": 8,
"text": "T_q^*Q"
},
{
"math_id": 9,
"text": "X_q"
},
{
"math_id": 10,
"text": "T_qQ"
},
{
"math_id": 11,
"text": "P_X"
},
{
"math_id": 12,
"text": "X_q = \\sum_i X^i(q) \\frac{\\partial}{\\partial q^i}"
},
{
"math_id": 13,
"text": "\\partial /\\partial q^i"
},
{
"math_id": 14,
"text": "P_X(q, p) = \\sum_i X^i(q)\\; p_i"
},
{
"math_id": 15,
"text": "p_i = P_{\\partial /\\partial q^i}"
},
{
"math_id": 16,
"text": "p_j"
},
{
"math_id": 17,
"text": "T^*Q"
},
{
"math_id": 18,
"text": "\\left(q^i, \\dot{q}^i\\right)"
},
{
"math_id": 19,
"text": "\\dot{q}^i"
}
]
| https://en.wikipedia.org/wiki?curid=706311 |
70636825 | Thomas–Yau conjecture | Conjecture in symplectic geometry
In mathematics, and especially symplectic geometry, the Thomas–Yau conjecture asks for the existence of a stability condition, similar to those which appear in algebraic geometry, which guarantees the existence of a solution to the special Lagrangian equation inside a Hamiltonian isotopy class of Lagrangian submanifolds. In particular the conjecture contains two difficulties: first, what a suitable stability condition might be, and second, whether one can prove that an isotopy class is stable if and only if it contains a special Lagrangian representative.
The Thomas–Yau conjecture was proposed by Richard Thomas and Shing-Tung Yau in 2001, and was motivated by similar theorems in algebraic geometry relating existence of solutions to geometric partial differential equations and stability conditions, especially the Kobayashi–Hitchin correspondence relating slope stable vector bundles to Hermitian Yang–Mills metrics.
The conjecture is intimately related to mirror symmetry, a conjecture in string theory and mathematical physics which predicts that mirror to a symplectic manifold (which is a Calabi–Yau manifold) there should be another Calabi–Yau manifold for which the symplectic structure is interchanged with the complex structure. In particular mirror symmetry predicts that special Lagrangians, which are the Type IIA string theory model of BPS D-branes, should be interchanged with the same structures in the Type IIB model, which are given either by stable vector bundles or vector bundles admitting Hermitian Yang–Mills or possibly deformed Hermitian Yang–Mills metrics. Motivated by this, Dominic Joyce rephrased the Thomas–Yau conjecture in 2014, predicting that the stability condition may be understood using the theory of Bridgeland stability conditions defined on the Fukaya category of the Calabi–Yau manifold, which is a triangulated category appearing in Kontsevich's homological mirror symmetry conjecture.
Statement.
The statement of the Thomas–Yau conjecture is not completely precise, as the particular stability condition is not yet known. In the work of Thomas and Thomas–Yau, the stability condition was given in terms of the Lagrangian mean curvature flow inside the Hamiltonian isotopy class of the Lagrangian, but Joyce's reinterpretation of the conjecture predicts that this stability condition can be given a categorical or algebraic form in terms of Bridgeland stability conditions.
Special Lagrangian submanifolds.
Consider a Calabi–Yau manifold formula_0 of complex dimension formula_1, which is in particular a real symplectic manifold of dimension formula_2. Then a Lagrangian submanifold is a real formula_1-dimensional submanifold formula_3 such that the symplectic form is identically zero when restricted to formula_4, that is formula_5. The holomorphic volume form formula_6, when restricted to a Lagrangian submanifold, becomes a top degree differential form. If the Lagrangian is oriented, then there exists a volume form formula_7 on formula_4 and one may compare this volume form to the restriction of the holomorphic volume form: formula_8 for some complex-valued function formula_9. The condition that formula_10 is a Calabi–Yau manifold implies that the function formula_11 has norm 1, so we have formula_12 where formula_13 is the phase angle of the function formula_11. In principle this phase function is only locally continuous, and its value may jump. A graded Lagrangian is a Lagrangian together with a lifting formula_14 of the phase angle to formula_15, which satisfies formula_16 everywhere on formula_4.
An oriented, graded Lagrangian formula_4 is said to be a special Lagrangian submanifold if the phase angle function formula_17 is constant on formula_4. The average value of this function, denoted formula_18, may be computed using the volume form as
formula_19
and only depends on the Hamiltonian isotopy class of formula_4. Using this average value, the condition that formula_20 is constant may be written in the following form, which commonly occurs in the literature. This is the definition of a special Lagrangian submanifold:
formula_21
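To make the phase condition concrete, consider the simplest local model: a Lagrangian plane in C^n with the standard holomorphic volume form Omega = dz_1 ^ ... ^ dz_n. The following Python sketch (a hypothetical illustration, not part of the conjecture itself) computes the phase of such a plane as the argument of a complex determinant; a plane with a diagonal unitary basis has constant phase equal to the sum of its angles, so it is special Lagrangian of that phase.

```python
import numpy as np

# For a Lagrangian n-plane spanned (over R) by the columns of a complex
# n x n matrix V, the restriction of Omega is det(V) times the induced
# volume form, so the phase angle of the plane is arg det(V).
def phase(V: np.ndarray) -> float:
    return float(np.angle(np.linalg.det(V)))

angles = np.array([0.3, 0.5, -0.2])
V = np.diag(np.exp(1j * angles))   # plane {(e^{i a_1} x_1, e^{i a_2} x_2, e^{i a_3} x_3)}
print(phase(V), angles.sum())      # both 0.6: constant phase, so this plane
                                   # is special Lagrangian with theta = 0.6
```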
Hamiltonian isotopy classes.
The condition of being a special Lagrangian is not satisfied for all Lagrangians, but the geometric and especially physical properties of Lagrangian submanifolds in string theory are predicted to only depend on the Hamiltonian isotopy class of the Lagrangian submanifold. An isotopy is a transformation of a submanifold inside an ambient manifold which is a homotopy by embeddings. On a symplectic manifold, a symplectic isotopy requires that these embeddings are by symplectomorphisms, and a Hamiltonian isotopy is a symplectic isotopy for which the symplectomorphisms are generated by Hamiltonian functions. Given a Lagrangian submanifold formula_4, the condition of being a Lagrangian is preserved under Hamiltonian (in fact symplectic) isotopies, and the collection of all Lagrangian submanifolds which are Hamiltonian isotopic to formula_4 is denoted formula_22, called the Hamiltonian isotopy class of formula_4.
Lagrangian mean curvature flow and stability condition.
Given a Riemannian manifold formula_23 and a submanifold formula_24, the mean curvature flow is a differential equation satisfied by a one-parameter family formula_25 of embeddings, defined for formula_26 in some interval formula_27, with images denoted formula_28, where formula_29. Namely, the family satisfies the mean curvature flow if
formula_30
where formula_31 is the mean curvature of the submanifold formula_32. This flow is the gradient flow of the volume functional on submanifolds of the Riemannian manifold formula_23, and there always exists short time existence of solutions starting from a given submanifold formula_33.
On a Calabi–Yau manifold, if formula_4 is a Lagrangian, the condition of being a Lagrangian is preserved when studying the mean curvature flow of formula_4 with respect to the Calabi–Yau metric. This is therefore called the Lagrangian mean curvature flow (Lmcf). Furthermore, for a graded Lagrangian formula_34, Lmcf preserves Hamiltonian isotopy class, so formula_35 for all formula_36 where the Lmcf is defined.
Thomas introduced a conjectural stability condition defined in terms of gradings when splitting into Lagrangian connected sums. Namely, a graded Lagrangian formula_34 is called stable if, whenever it may be written as a graded Lagrangian connected sum
formula_37
the average phases satisfy the inequality
formula_38
In the later language of Joyce using the notion of a Bridgeland stability condition, this was further explained as follows. An almost-calibrated Lagrangian (which means the lifted phase is taken to lie in the interval formula_39 or some integer shift of this interval) which splits as a graded connected sum of almost-calibrated Lagrangians corresponds to a distinguished triangle
formula_40
in the Fukaya category. The Lagrangian formula_34 is stable if, for any such distinguished triangle, the above angle inequality holds.
Statement of the conjecture.
The conjecture as originally proposed by Thomas is as follows:
Conjecture: An oriented, graded, almost-calibrated Lagrangian formula_4 admits a special Lagrangian representative in its Hamiltonian isotopy class formula_22 if and only if it is stable in the above sense.
Following this, in the work of Thomas–Yau, the behaviour of the Lagrangian mean curvature flow was also predicted.
Conjecture (Thomas–Yau): If an oriented, graded, almost-calibrated Lagrangian formula_4 is stable, then the Lagrangian mean curvature flow exists for all time and converges to a special Lagrangian representative in the Hamiltonian isotopy class formula_22.
This conjecture was enhanced by Joyce, who provided a more subtle analysis of what behaviour is expected of the Lagrangian mean curvature flow. In particular Joyce described the types of finite-time singularity formation which are expected to occur in the Lagrangian mean curvature flow, and proposed expanding the class of Lagrangians studied to include singular or immersed Lagrangian submanifolds, which should appear in the full Fukaya category of the Calabi–Yau.
Conjecture (Thomas–Yau–Joyce): An oriented, graded, almost-calibrated Lagrangian formula_4 splits as a graded Lagrangian connected sum formula_41 of special Lagrangian submanifolds formula_42 with phase angles formula_43, given by the convergence of the Lagrangian mean curvature flow with surgeries to remove singularities at a sequence of finite times formula_44. At these surgery points, the Lagrangian may change its Hamiltonian isotopy class but preserves its class in the Fukaya category.
In the language of Joyce's formulation of the conjecture, the decomposition formula_45 is a symplectic analogue of the Harder–Narasimhan filtration of a vector bundle, and using Joyce's interpretation of the conjecture in the Fukaya category with respect to a Bridgeland stability condition, the central charge is given by
formula_46,
the heart formula_47 of the t-structure defining the stability condition is conjectured to be given by those Lagrangians in the Fukaya category with phase formula_48, and the Thomas–Yau–Joyce conjecture predicts that the Lagrangian mean curvature flow produces the Harder–Narasimhan filtration condition which is required to prove that the data formula_49 defines a genuine Bridgeland stability condition on the Fukaya category.
| [
{
"math_id": 0,
"text": "(X,\\omega,\\Omega)"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "2n"
},
{
"math_id": 3,
"text": "L\\subset X"
},
{
"math_id": 4,
"text": "L"
},
{
"math_id": 5,
"text": "\\left.\\omega\\right|_L = 0"
},
{
"math_id": 6,
"text": "\\Omega\\in \\Omega^{n,0}(X) "
},
{
"math_id": 7,
"text": "dV_L"
},
{
"math_id": 8,
"text": "\\left.\\Omega\\right|_L = f dV_L"
},
{
"math_id": 9,
"text": "f:L\\to \\mathbb{C}"
},
{
"math_id": 10,
"text": "X"
},
{
"math_id": 11,
"text": "f"
},
{
"math_id": 12,
"text": "f=e^{i\\Theta}"
},
{
"math_id": 13,
"text": "\\Theta:L \\to [0,2\\pi)"
},
{
"math_id": 14,
"text": "\\vartheta: L \\to \\mathbb{R}"
},
{
"math_id": 15,
"text": "\\mathbb{R}"
},
{
"math_id": 16,
"text": "\\Theta = \\vartheta \\mod 2\\pi"
},
{
"math_id": 17,
"text": "\\vartheta"
},
{
"math_id": 18,
"text": "\\theta"
},
{
"math_id": 19,
"text": "\\theta = \\arg \\int_L \\Omega,"
},
{
"math_id": 20,
"text": "\\Theta"
},
{
"math_id": 21,
"text": "\\mathrm{Im}(e^{-i\\theta} \\left.\\Omega\\right|_L) = 0."
},
{
"math_id": 22,
"text": "[L]"
},
{
"math_id": 23,
"text": "M"
},
{
"math_id": 24,
"text": "\\iota: N \\hookrightarrow M"
},
{
"math_id": 25,
"text": "\\iota_t"
},
{
"math_id": 26,
"text": "t"
},
{
"math_id": 27,
"text": "[0,T)"
},
{
"math_id": 28,
"text": "N^t"
},
{
"math_id": 29,
"text": "N^0 = N"
},
{
"math_id": 30,
"text": "\\frac{d\\iota_t}{dt} = H_{\\iota_t}"
},
{
"math_id": 31,
"text": "H_{\\iota_t}"
},
{
"math_id": 32,
"text": "N^t\\subset M"
},
{
"math_id": 33,
"text": "N"
},
{
"math_id": 34,
"text": "(L,\\vartheta)"
},
{
"math_id": 35,
"text": "L^t \\in [L]"
},
{
"math_id": 36,
"text": "t\\in [0,T)"
},
{
"math_id": 37,
"text": "(L,\\vartheta) = (L_1,\\vartheta_1)\\#(L_2,\\vartheta_2)"
},
{
"math_id": 38,
"text": "\\theta_1 < \\theta_2."
},
{
"math_id": 39,
"text": "(-\\pi/2, \\pi/2)"
},
{
"math_id": 40,
"text": "L_1 \\to L_1 \\# L_2 \\to L_2 \\to L_1[1]"
},
{
"math_id": 41,
"text": "L=L_1 \\# \\cdots \\# L_k"
},
{
"math_id": 42,
"text": "L_i"
},
{
"math_id": 43,
"text": "\\theta_1 > \\cdots > \\theta_k"
},
{
"math_id": 44,
"text": "0<T_1<\\cdots < T_k"
},
{
"math_id": 45,
"text": "L=L_1\\#\\cdots \\#L_k"
},
{
"math_id": 46,
"text": "Z(L) = \\int_L \\Omega"
},
{
"math_id": 47,
"text": "\\mathcal{A}"
},
{
"math_id": 48,
"text": "\\theta \\in (-\\pi/2, \\pi/2)"
},
{
"math_id": 49,
"text": "(Z,\\mathcal{A})"
}
]
| https://en.wikipedia.org/wiki?curid=70636825 |
706374 | Antihomomorphism | Homomorphism reversing the order of something
In mathematics, an antihomomorphism is a type of function defined on sets with multiplication that reverses the order of multiplication. An antiautomorphism is an invertible antihomomorphism, i.e. an antiisomorphism, from a set to itself. From bijectivity it follows that antiautomorphisms have inverses, and that the inverse of an antiautomorphism is also an antiautomorphism.
Definition.
Informally, an antihomomorphism is a map that switches the order of multiplication. Formally, an antihomomorphism between structures formula_0 and formula_1 is a homomorphism formula_2, where formula_3 equals formula_1 as a set, but has its multiplication reversed to that defined on formula_1. Denoting the (generally non-commutative) multiplication on formula_1 by formula_4, the multiplication on formula_3, denoted by formula_5, is defined by formula_6. The object formula_3 is called the opposite object to formula_1 (respectively, opposite group, opposite algebra, opposite category etc.).
This definition is equivalent to that of a homomorphism formula_7 (reversing the operation before or after applying the map is equivalent). Formally, sending formula_0 to formula_8 and acting as the identity on maps is a functor (indeed, an involution).
Examples.
In group theory, an antihomomorphism is a map between two groups that reverses the order of multiplication. So if "φ" : "X" → "Y" is a group antihomomorphism,
"φ"("xy") = "φ"("y")"φ"("x")
for all "x", "y" in "X".
The map that sends "x" to "x"−1 is an example of a group antiautomorphism. Another important example is the transpose operation in linear algebra, which takes row vectors to column vectors. Any vector-matrix equation may be transposed to an equivalent equation where the order of the factors is reversed.
With matrices, an example of an antiautomorphism is given by the transpose map. Since inversion and transposing both give antiautomorphisms, their composition is an automorphism. This involution is often called the contragredient map, and it provides an example of an outer automorphism of the general linear group GL("n", "F"), where "F" is a field, except when |"F"| = 2 and "n" = 1 or 2, or |"F"| = 3 and "n" = 1 (i.e., for the groups GL(1, 2), GL(2, 2), and GL(1, 3)).
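All three maps are easy to check numerically. The following NumPy sketch verifies the antiautomorphism property for the transpose and the inverse, and that their composition, the contragredient map, is an automorphism.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))    # generic random matrices are invertible
B = rng.standard_normal((3, 3))

# Transpose is an antiautomorphism of the matrix ring: (AB)^T = B^T A^T.
assert np.allclose((A @ B).T, B.T @ A.T)

# Inversion is an antiautomorphism of the group of invertible matrices:
# (AB)^{-1} = B^{-1} A^{-1}.
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))

# Their composition A -> (A^{-1})^T reverses order twice, so it is an automorphism.
contragredient = lambda M: np.linalg.inv(M).T
assert np.allclose(contragredient(A @ B), contragredient(A) @ contragredient(B))
```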
In ring theory, an antihomomorphism is a map between two rings that preserves addition, but reverses the order of multiplication. So "φ" : "X" → "Y" is a ring antihomomorphism if and only if:
"φ"(1) = 1
"φ"("x" + "y") = "φ"("x") + "φ"("y")
"φ"("xy") = "φ"("y")"φ"("x")
for all "x", "y" in "X".
For algebras over a field "K", "φ" must be a "K"-linear map of the underlying vector space. If the underlying field has an involution, one can instead ask "φ" to be conjugate-linear, as in conjugate transpose, below.
Involutions.
It is frequently the case that antiautomorphisms are involutions, i.e. the square of the antiautomorphism is the identity map; these are also called involutive antiautomorphisms. For example, in any group the map that sends "x" to its inverse "x"−1 is an involutive antiautomorphism.
A ring with an involutive antiautomorphism is called a *-ring, and these form an important class of examples.
Properties.
If the source "X" or the target "Y" is commutative, then an antihomomorphism is the same thing as a homomorphism.
The composition of two antihomomorphisms is always a homomorphism, since reversing the order twice preserves order. The composition of an antihomomorphism with a homomorphism gives another antihomomorphism.
| [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "Y"
},
{
"math_id": 2,
"text": "\\phi\\colon X \\to Y^{\\text{op}}"
},
{
"math_id": 3,
"text": "Y^{\\text{op}}"
},
{
"math_id": 4,
"text": "\\cdot"
},
{
"math_id": 5,
"text": "*"
},
{
"math_id": 6,
"text": "x*y := y \\cdot x"
},
{
"math_id": 7,
"text": "\\phi\\colon X^{\\text{op}} \\to Y"
},
{
"math_id": 8,
"text": "X^{\\text{op}}"
}
]
| https://en.wikipedia.org/wiki?curid=706374 |
706399 | Path-ordering | Procedure of ordering a product operators
In theoretical physics, path-ordering is the procedure (or a meta-operator formula_0) that orders a product of operators according to the value of a chosen parameter:
formula_1
Here "p" is a permutation that orders the parameters by value:
formula_2
formula_3
For example:
formula_4
Examples.
If an operator is not simply expressed as a product, but as a function of another operator, we must first perform a Taylor expansion of this function. This is the case of the Wilson loop, which is defined as a path-ordered exponential to guarantee that the Wilson loop encodes the holonomy of the gauge connection. The parameter "σ" that determines the ordering is a parameter describing the contour, and because the contour is closed, the Wilson loop must be defined as a trace in order to be gauge-invariant.
Time ordering.
In quantum field theory it is useful to take the time-ordered product of operators. This operation is denoted by formula_5. (Although formula_5 is often called the "time-ordering operator", strictly speaking it is neither an operator on states nor a superoperator on operators.)
For two operators "A"("x") and "B"("y") that depend on spacetime locations x and y we define:
formula_6
Here formula_7 and formula_8 denote the "invariant" scalar time-coordinates of the points x and y.
Explicitly we have
formula_9
where formula_10 denotes the Heaviside step function and the sign formula_11 depends on whether the operators are bosonic or fermionic in nature. If bosonic, then the + sign is always chosen; if fermionic, then the sign will depend on the number of operator interchanges necessary to achieve the proper time ordering. Note that the statistical factors do not enter here.
Since the operators depend on their location in spacetime (i.e. not just time) this time-ordering operation is only coordinate independent if operators at spacelike separated points commute. This is why it is necessary to use formula_12 rather than formula_13, since formula_13 usually indicates the coordinate dependent time-like index of the spacetime point. Note that the time-ordering is usually written with the time argument increasing from right to left.
In general, for the product of "n" field operators "A"1("t"1), …, "A""n"("t""n") the time-ordered product of operators are defined as follows:
formula_14
where the sum runs over all permutations "p" in the symmetric group of degree "n", and
formula_15
The S-matrix in quantum field theory is an example of a time-ordered product. The S-matrix, transforming the state at "t" = −∞ to a state at "t" = +∞, can also be thought of as a kind of "holonomy", analogous to the Wilson loop. We obtain a time-ordered expression because of the following reason:
We start with this simple formula for the exponential
formula_16
Now consider the discretized evolution operator
formula_17
where formula_18 is the evolution operator over an infinitesimal time interval formula_19. The higher order terms can be neglected in the limit formula_20. The operator formula_21 is defined by
formula_22
Note that the evolution operators over the "past" time intervals appear on the right side of the product. We see that the formula is analogous to the identity above satisfied by the exponential, and we may write
formula_23
The only subtlety we had to include was the time-ordering operator formula_5 because the factors in the product defining "S" above were time-ordered, too (and operators do not commute in general) and the operator formula_5 ensures that this ordering will be preserved.
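The discretized product above can be implemented directly for matrix-valued generators. The following Python sketch (with an arbitrarily chosen, non-commuting generator H(t)) approximates the path-ordered exponential by multiplying short-time propagators with later times on the left, and shows that it differs from the naive exponential of the integral.

```python
import numpy as np
from scipy.linalg import expm

# A simple time-dependent generator H(t) = A + t B, chosen for illustration;
# [A, B] != 0, so the ordering of the factors matters.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

def H(t):
    return A + t * B

def time_ordered_exp(H, t0, t1, steps=10_000):
    """Approximate T exp(int_{t0}^{t1} H(t) dt) by a product of
    short-time propagators, with later times multiplied on the left."""
    dt = (t1 - t0) / steps
    U = np.eye(2)
    for j in range(steps):
        t = t0 + (j + 0.5) * dt
        U = expm(dt * H(t)) @ U        # "past" factors stay on the right
    return U

U = time_ordered_exp(H, 0.0, 1.0)
naive = expm(A + 0.5 * B)              # exp of the integral, ignoring ordering
print(np.round(U, 4))
print(np.round(naive, 4))              # differs from U since A and B do not commute
```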
| [
{
"math_id": 0,
"text": "\\mathcal P"
},
{
"math_id": 1,
"text": "\\mathcal P \\left\\{O_1(\\sigma_1) O_2(\\sigma_2) \\cdots O_N(\\sigma_N)\\right\\}\n \\equiv O_{p_1}(\\sigma_{p_1}) O_{p_2}(\\sigma_{p_2}) \\cdots O_{p_N}(\\sigma_{p_N})."
},
{
"math_id": 2,
"text": "p : \\{1, 2, \\dots, N\\} \\to \\{1, 2, \\dots, N\\}"
},
{
"math_id": 3,
"text": "\\sigma_{p_1} \\leq \\sigma_{p_2} \\leq \\cdots \\leq \\sigma_{p_N}. "
},
{
"math_id": 4,
"text": "\\mathcal P \\left\\{ O_1(4) O_2(2) O_3(3) O_4(1) \\right\\} = O_4(1) O_2(2) O_3(3) O_1(4) ."
},
{
"math_id": 5,
"text": "\\mathcal T"
},
{
"math_id": 6,
"text": "\\mathcal T \\left\\{A(x) B(y)\\right\\} := \\begin{cases} A(x) B(y) & \\text{if } \\tau_x > \\tau_y, \\\\ \\pm B(y)A(x) & \\text{if } \\tau_x < \\tau_y. \\end{cases} "
},
{
"math_id": 7,
"text": "\\tau_x"
},
{
"math_id": 8,
"text": "\\tau_y"
},
{
"math_id": 9,
"text": "\\mathcal T \\left\\{A(x) B(y)\\right\\} := \\theta (\\tau_x - \\tau_y) A(x) B(y) \\pm \\theta (\\tau_y - \\tau_x) B(y) A(x), "
},
{
"math_id": 10,
"text": "\\theta"
},
{
"math_id": 11,
"text": "\\pm"
},
{
"math_id": 12,
"text": "\\tau"
},
{
"math_id": 13,
"text": "t_0"
},
{
"math_id": 14,
"text": "\n\\begin{align}\n\\mathcal T \\{ A_1(t_1) A_2(t_2) \\cdots A_n(t_n) \\} &= \\sum_p \\theta(t_{p_1} > t_{p_2} > \\cdots > t_{p_n}) \\varepsilon(p)\n A_{p_1}(t_{p_1}) A_{p_2}(t_{p_2}) \\cdots A_{p_n}(t_{p_n}) \\\\\n&= \\sum_p \\left( \\prod_{j=1}^{n-1} \\theta(t_{p_j} - t_{p_{j+1}}) \\right) \\varepsilon(p) A_{p_1}(t_{p_1}) A_{p_2}(t_{p_2}) \\cdots A_{p_n}(t_{p_n})\n\\end{align}\n"
},
{
"math_id": 15,
"text": "\n \\varepsilon(p) \\equiv \\begin{cases}\n 1 & \\text{for bosonic operators,} \\\\\n \\text{sign of the permutation} & \\text{for fermionic operators.}\n \\end{cases}\n "
},
{
"math_id": 16,
"text": "\\exp h = \\lim_{N\\to\\infty} \\left(1 + \\frac{h}{N}\\right)^N. "
},
{
"math_id": 17,
"text": "S = \\cdots (1+h_{+3})(1+h_{+2})(1+h_{+1})(1+h_0)(1+h_{-1})(1+h_{-2})\\cdots"
},
{
"math_id": 18,
"text": "1+h_{j}"
},
{
"math_id": 19,
"text": "[j\\varepsilon,(j+1)\\varepsilon]"
},
{
"math_id": 20,
"text": "\\varepsilon\\to 0"
},
{
"math_id": 21,
"text": "h_j"
},
{
"math_id": 22,
"text": "h_j =\\frac{1}{i\\hbar} \\int_{j\\varepsilon}^{(j+1)\\varepsilon} \\, dt \\int d^3 x \\, H(\\vec x,t). "
},
{
"math_id": 23,
"text": " S = {\\mathcal T} \\exp \\left(\\sum_{j=-\\infty}^\\infty h_j\\right) = \\mathcal T \\exp \\left(\\int dt\\, d^3 x \\, \\frac{H(\\vec x,t)}{i\\hbar}\\right)."
}
]
| https://en.wikipedia.org/wiki?curid=706399 |
706412 | Connection form | Math/physics concept
In mathematics, and specifically differential geometry, a connection form is a manner of organizing the data of a connection using the language of moving frames and differential forms.
Historically, connection forms were introduced by Élie Cartan in the first half of the 20th century as part of, and one of the principal motivations for, his method of moving frames. The connection form generally depends on a choice of a coordinate frame, and so is not a tensorial object. Various generalizations and reinterpretations of the connection form were formulated subsequent to Cartan's initial work. In particular, on a principal bundle, a principal connection is a natural reinterpretation of the connection form as a tensorial object. On the other hand, the connection form has the advantage that it is a differential form defined on the differentiable manifold, rather than on an abstract principal bundle over it. Hence, despite their lack of tensoriality, connection forms continue to be used because of the relative ease of performing calculations with them. In physics, connection forms are also used broadly in the context of gauge theory, through the gauge covariant derivative.
A connection form associates to each basis of a vector bundle a matrix of differential forms. The connection form is not tensorial because under a change of basis, the connection form transforms in a manner that involves the exterior derivative of the transition functions, in much the same way as the Christoffel symbols for the Levi-Civita connection. The main "tensorial" invariant of a connection form is its curvature form. In the presence of a solder form identifying the vector bundle with the tangent bundle, there is an additional invariant: the torsion form. In many cases, connection forms are considered on vector bundles with additional structure: that of a fiber bundle with a structure group.
Vector bundles.
Frames on a vector bundle.
Let formula_0 be a vector bundle of fibre dimension formula_1 over a differentiable manifold formula_2. A local frame for formula_0 is an ordered basis of local sections of formula_0. It is always possible to construct a local frame, as vector bundles are always defined in terms of local trivializations, in analogy to the atlas of a manifold. That is, given any point formula_3 on the base manifold formula_2, there exists an open neighborhood formula_4 of formula_3 for which the vector bundle over formula_5 is locally trivial, that is, isomorphic to formula_6 projecting to formula_5. The vector space structure on formula_7 can thereby be extended to the entire local trivialization, and a basis on formula_7 can be extended as well; this defines the local frame. (Here the real numbers are used, although much of the development can be extended to modules over rings in general, and to vector spaces over complex numbers formula_8 in particular.)
Let formula_9 be a local frame on formula_0. This frame can be used to express locally any section of formula_0. For example, suppose that formula_10 is a local section, defined over the same open set as the frame formula_11. Then
formula_12
where formula_13 denotes the "components" of formula_10 in the frame formula_14. As a matrix equation, this reads
formula_15
In general relativity, such frame fields are referred to as tetrads. The tetrad specifically relates the local frame to an explicit coordinate system on the base manifold formula_2 (the coordinate system on formula_2 being established by the atlas).
Exterior connections.
A connection in "E" is a type of differential operator
formula_16
where Γ denotes the sheaf of local sections of a vector bundle, and Ω1"M" is the bundle of differential 1-forms on "M". For "D" to be a connection, it must be correctly coupled to the exterior derivative. Specifically, if "v" is a local section of "E", and "f" is a smooth function, then
formula_17
where "df" is the exterior derivative of "f".
Sometimes it is convenient to extend the definition of "D" to arbitrary "E"-valued forms, thus regarding it as a differential operator on the tensor product of "E" with the full exterior algebra of differential forms. Given an exterior connection "D" satisfying this compatibility property, there exists a unique extension of "D":
formula_18
such that
formula_19
where "v" is homogeneous of degree deg "v". In other words, "D" is a derivation on the sheaf of graded modules Γ("E" ⊗ Ω*"M").
Connection forms.
The connection form arises when applying the exterior connection to a particular frame e. Upon applying the exterior connection to the "e""α", it is the unique "k" × "k" matrix ("ω""α""β") of one-forms on "M" such that
formula_20
In terms of the connection form, the exterior connection of any section of "E" can now be expressed. For example, suppose that "ξ" = Σ"α" "e""α""ξ""α". Then
formula_21
Taking components on both sides,
formula_22
where it is understood that "d" and ω refer to the component-wise derivative with respect to the frame e, and a matrix of 1-forms, respectively, acting on the components of "ξ". Conversely, a matrix of 1-forms "ω" is "a priori" sufficient to completely determine the connection locally on the open set over which the basis of sections e is defined.
Change of frame.
In order to extend "ω" to a suitable global object, it is necessary to examine how it behaves when a different choice of basic sections of "E" is chosen. Write "ω""α""β" = "ω""α""β"(e) to indicate the dependence on the choice of e.
Suppose that e′ is a different choice of local basis. Then there is an invertible "k" × "k" matrix of functions "g" such that
formula_23
Applying the exterior connection to both sides gives the transformation law for "ω":
formula_24
Note in particular that "ω" fails to transform in a tensorial manner, since the rule for passing from one frame to another involves the derivatives of the transition matrix "g".
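A simple special case makes the non-tensorial term concrete. For "k" = 1 (a line bundle), "g" is a single nonvanishing function, all matrices commute, and the transformation law reduces to

$$\omega(\mathbf{e}\, g) = \omega(\mathbf{e}) + g^{-1}\, dg,$$

which equals $\omega(\mathbf e) + d\log g$ when "g" > 0. The connection form thus changes only by an exact term, so its exterior derivative is already frame-independent; this mirrors the behaviour of the gauge potential in electromagnetism.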
Global connection forms.
If {"U""p"} is an open covering of "M", and each "U""p" is equipped with a trivialization e"p" of "E", then it is possible to define a global connection form in terms of the patching data between the local connection forms on the overlap regions. In detail, a connection form on "M" is a system of matrices "ω"(e"p") of 1-forms defined on each "U""p" that satisfy the following compatibility condition
formula_25
This "compatibility condition" ensures in particular that the exterior connection of a section of "E", when regarded abstractly as a section of "E" ⊗ Ω1"M", does not depend on the choice of basis section used to define the connection.
Curvature.
The curvature two-form of a connection form in "E" is defined by
formula_26
Unlike the connection form, the curvature behaves tensorially under a change of frame, which can be checked directly by using the Poincaré lemma. Specifically, if e → e "g" is a change of frame, then the curvature two-form transforms by
formula_27
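This can be checked by a short direct computation. Writing ω = ω(e) and ω′ = ω(e "g"), and using d("g"−1) = −"g"−1("dg")"g"−1, the cross terms cancel in pairs (a sketch of the standard calculation):

$$\begin{aligned}
d\omega' &= -g^{-1}dg\, g^{-1}\wedge dg - g^{-1}dg\, g^{-1}\wedge \omega\, g + g^{-1}(d\omega)\, g - g^{-1}\omega\wedge dg,\\
\omega'\wedge\omega' &= g^{-1}dg\, g^{-1}\wedge dg + g^{-1}dg\, g^{-1}\wedge \omega\, g + g^{-1}\omega\wedge dg + g^{-1}(\omega\wedge\omega)\, g,\\
d\omega' + \omega'\wedge\omega' &= g^{-1}\left(d\omega + \omega\wedge\omega\right) g = g^{-1}\,\Omega(\mathbf e)\, g.
\end{aligned}$$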
One interpretation of this transformation law is as follows. Let e* be the dual basis corresponding to the frame "e". Then the 2-form
formula_28
is independent of the choice of frame. In particular, Ω is a vector-valued two-form on "M" with values in the endomorphism ring Hom("E","E"). Symbolically,
formula_29
In terms of the exterior connection "D", the curvature endomorphism is given by
formula_30
for "v" ∈ "E". Thus the curvature measures the failure of the sequence
formula_31
to be a chain complex (in the sense of de Rham cohomology).
Soldering and torsion.
Suppose that the fibre dimension "k" of "E" is equal to the dimension of the manifold "M". In this case, the vector bundle "E" is sometimes equipped with an additional piece of data besides its connection: a solder form. A solder form is a globally defined vector-valued one-form θ ∈ Ω1("M","E") such that the mapping
formula_32
is a linear isomorphism for all "x" ∈ "M". If a solder form is given, then it is possible to define the torsion of the connection (in terms of the exterior connection) as
formula_33
The torsion Θ is an "E"-valued 2-form on "M".
A solder form and the associated torsion may both be described in terms of a local frame e of "E". If θ is a solder form, then it decomposes into the frame components
formula_34
The components of the torsion are then
formula_35
Much like the curvature, it can be shown that Θ behaves as a contravariant tensor under a change in frame:
formula_36
The frame-independent torsion may also be recovered from the frame components:
formula_37
Bianchi identities.
The Bianchi identities relate the torsion to the curvature. The first Bianchi identity states that
formula_38
while the second Bianchi identity states that
formula_39
Example: the Levi-Civita connection.
As an example, suppose that "M" carries a Riemannian metric. If one has a vector bundle "E" over "M", then the metric can be extended to the entire vector bundle, as the bundle metric. One may then define a connection that is compatible with this bundle metric; this is the metric connection. For the special case of "E" being the tangent bundle "TM", the metric connection is called the Riemannian connection. Given a Riemannian connection, one can always find a unique, equivalent connection that is torsion-free. This is the Levi-Civita connection on the tangent bundle "TM" of "M".
A local frame on the tangent bundle is an ordered list of vector fields e = ("e""i" | "i" = 1, 2, ..., "n"), where "n" = dim "M", defined on an open subset of "M" that are linearly independent at every point of their domain. The Christoffel symbols define the Levi-Civita connection by
formula_40
If "θ" = {"θ""i" | "i" = 1, 2, ..., "n"}, denotes the dual basis of the cotangent bundle, such that "θ""i"("e""j") = "δ""i""j" (the Kronecker delta), then the connection form is
formula_41
In terms of the connection form, the exterior connection on a vector field "v" = Σ"i""e""i""v""i" is given by
formula_42
One can recover the Levi-Civita connection, in the usual sense, from this by contracting with "e"i:
formula_43
Curvature.
The curvature 2-form of the Levi-Civita connection is the matrix (Ω"i""j") given by
formula_44
For simplicity, suppose that the frame e is holonomic, so that "dθ""i" = 0. Then, employing now the summation convention on repeated indices,
formula_45
where "R" is the Riemann curvature tensor.
Torsion.
The Levi-Civita connection is characterized as the unique metric connection in the tangent bundle with zero torsion. To describe the torsion, note that the vector bundle "E" is the tangent bundle. This carries a canonical solder form (sometimes called the canonical one-form, especially in the context of classical mechanics) that is the section "θ" of Hom(T"M", T"M") = T∗"M" ⊗ T"M" corresponding to the identity endomorphism of the tangent spaces. In the frame e, the solder form is "θ" = Σ"i" "e""i" ⊗ "θ""i", where again "θ""i" is the dual basis.
The torsion of the connection is given by Θ = "Dθ", or in terms of the frame components of the solder form by
formula_46
Assuming again for simplicity that e is holonomic, this expression reduces to
formula_47,
which vanishes if and only if Γ"i""kj" is symmetric on its lower indices.
Given a metric connection with torsion, one can always find a single, unique connection that is torsion-free; this is the Levi-Civita connection. The difference between a Riemannian connection and its associated Levi-Civita connection is the contorsion tensor.
Structure groups.
A more specific type of connection form can be constructed when the vector bundle "E" carries a structure group. This amounts to a preferred class of frames e on "E", which are related by a Lie group "G". For example, in the presence of a metric in "E", one works with frames that form an orthonormal basis at each point. The structure group is then the orthogonal group, since this group preserves the orthonormality of frames. Other examples arise in the same way; the general construction is as follows.
In general, let "E" be a given vector bundle of fibre dimension "k" and "G" ⊂ GL("k") a given Lie subgroup of the general linear group of Rk. If ("e"α) is a local frame of "E", then a matrix-valued function ("g"ij): "M" → "G" may act on the "e"α to produce a new frame
formula_48
Two such frames are "G"-related. Informally, the vector bundle "E" has the structure of a "G"-bundle if a preferred class of frames is specified, all of which are locally "G"-related to each other. In formal terms, "E" is a fibre bundle with structure group "G" whose typical fibre is Rk with the natural action of "G" as a subgroup of GL("k").
Compatible connections.
A connection is compatible with the structure of a "G"-bundle on "E" provided that the associated parallel transport maps always send one "G"-frame to another. Formally, along a curve γ, the following must hold locally (that is, for sufficiently small values of "t"):
formula_49
for some matrix "g"αβ (which may also depend on "t"). Differentiation at "t"=0 gives
formula_50
where the coefficients ωαβ are in the Lie algebra g of the Lie group "G".
With this observation, the connection form ωαβ defined by
formula_51
is compatible with the structure if the matrix of one-forms ωαβ(e) takes its values in g.
The curvature form of a compatible connection is, moreover, a g-valued two-form.
Change of frame.
Under a change of frame
formula_52
where "g" is a "G"-valued function defined on an open subset of "M", the connection form transforms via
formula_53
Or, using matrix products:
formula_54
To interpret each of these terms, recall that "g" : "M" → "G" is a "G"-valued (locally defined) function. With this in mind,
formula_55
where ωg is the Maurer-Cartan form for the group "G", here pulled back to "M" along the function "g", and Ad is the adjoint representation of "G" on its Lie algebra.
Principal bundles.
The connection form, as introduced thus far, depends on a particular choice of frame. In the first definition, the frame is just a local basis of sections. To each frame, a connection form is given with a transformation law for passing from one frame to another. In the second definition, the frames themselves carry some additional structure provided by a Lie group, and changes of frame are constrained to those that take their values in it. The language of principal bundles, pioneered by Charles Ehresmann in the 1940s, provides a manner of organizing these many connection forms and the transformation laws connecting them into a single intrinsic form with a single rule for transformation. The disadvantage to this approach is that the forms are no longer defined on the manifold itself, but rather on a larger principal bundle.
The principal connection for a connection form.
Suppose that "E" → "M" is a vector bundle with structure group "G". Let {"U"} be an open cover of "M", along with "G"-frames on each "U", denoted by eU. These are related on the intersections of overlapping open sets by
formula_56
for some "G"-valued function "h"UV defined on "U" ∩ "V".
Let FG"E" be the set of all "G"-frames taken over each point of "M". This is a principal "G"-bundle over "M". In detail, using the fact that the "G"-frames are all "G"-related, FG"E" can be realized in terms of gluing data among the sets of the open cover:
formula_57
where the equivalence relation formula_58 is defined by
formula_59
On FG"E", define a principal "G"-connection as follows, by specifying a g-valued one-form on each product "U" × "G", which respects the equivalence relation on the overlap regions. First let
formula_60
be the projection maps. Now, for a point ("x","g") ∈ "U" × "G", set
formula_61
The 1-form ω constructed in this way respects the transitions between overlapping sets, and therefore descends to give a globally defined 1-form on the principal bundle FG"E". It can be shown that ω is a principal connection in the sense that it reproduces the generators of the right "G" action on FG"E", and equivariantly intertwines the right action on T(FG"E") with the adjoint representation of "G".
Connection forms associated to a principal connection.
Conversely, a principal "G"-connection ω in a principal "G"-bundle "P"→"M" gives rise to a collection of connection forms on "M". Suppose that e : "M" → "P" is a local section of "P". Then the pullback of ω along e defines a g-valued one-form on "M":
formula_62
Changing frames by a "G"-valued function "g", one sees that ω(e) transforms in the required manner by using the Leibniz rule, and the adjunction:
formula_63
where "X" is a vector on "M", and "d" denotes the pushforward. | [
{
"math_id": 0,
"text": "E"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "U \\subseteq M"
},
{
"math_id": 5,
"text": "U"
},
{
"math_id": 6,
"text": "U \\times \\mathbb R^k"
},
{
"math_id": 7,
"text": "\\mathbb R^k"
},
{
"math_id": 8,
"text": "\\mathbb C"
},
{
"math_id": 9,
"text": "\\mathbf e = (e_\\alpha)_{\\alpha = 1, 2, \\dots, k}"
},
{
"math_id": 10,
"text": "\\xi"
},
{
"math_id": 11,
"text": "\\mathbb e"
},
{
"math_id": 12,
"text": "\\xi = \\sum_{\\alpha=1}^k e_\\alpha \\xi^\\alpha(\\mathbf e)"
},
{
"math_id": 13,
"text": "\\xi^\\alpha(\\mathbf e)"
},
{
"math_id": 14,
"text": "\\mathbf e"
},
{
"math_id": 15,
"text": "\\xi = {\\mathbf e}\n\\begin{bmatrix}\n\\xi^1(\\mathbf e)\\\\\n\\xi^2(\\mathbf e)\\\\\n\\vdots\\\\\n\\xi^k(\\mathbf e)\n\\end{bmatrix}=\n{\\mathbf e}\\, \\xi(\\mathbf e)\n"
},
{
"math_id": 16,
"text": "D : \\Gamma(E) \\rightarrow \\Gamma(E\\otimes\\Omega^1M)"
},
{
"math_id": 17,
"text": "D(fv) = v\\otimes (df) + fDv"
},
{
"math_id": 18,
"text": "D : \\Gamma(E\\otimes\\Omega^*M) \\rightarrow \\Gamma(E\\otimes\\Omega^*M)"
},
{
"math_id": 19,
"text": " D(v\\wedge\\alpha) = (Dv)\\wedge\\alpha + (-1)^{\\text{deg}\\, v}v\\wedge d\\alpha"
},
{
"math_id": 20,
"text": "D e_\\alpha = \\sum_{\\beta=1}^k e_\\beta\\otimes\\omega^\\beta_\\alpha."
},
{
"math_id": 21,
"text": "D\\xi = \\sum_{\\alpha=1}^k D(e_\\alpha\\xi^\\alpha(\\mathbf e)) = \\sum_{\\alpha=1}^k e_\\alpha\\otimes d\\xi^\\alpha(\\mathbf e) + \\sum_{\\alpha=1}^k\\sum_{\\beta=1}^k e_\\beta\\otimes\\omega^\\beta_\\alpha \\xi^\\alpha(\\mathbf e)."
},
{
"math_id": 22,
"text": "D\\xi(\\mathbf e) = d\\xi(\\mathbf e)+\\omega \\xi(\\mathbf e) = (d+\\omega)\\xi(\\mathbf e)"
},
{
"math_id": 23,
"text": "{\\mathbf e}' = {\\mathbf e}\\, g,\\quad \\text{i.e., }\\,e'_\\alpha = \\sum_\\beta e_\\beta g^\\beta_\\alpha."
},
{
"math_id": 24,
"text": "\\omega(\\mathbf e\\, g) = g^{-1}dg+g^{-1}\\omega(\\mathbf e)g."
},
{
"math_id": 25,
"text": "\\omega(\\mathbf e_q) = (\\mathbf e_p^{-1}\\mathbf e_q)^{-1}d(\\mathbf e_p^{-1}\\mathbf e_q)+(\\mathbf e_p^{-1}\\mathbf e_q)^{-1}\\omega(\\mathbf e_p)(\\mathbf e_p^{-1}\\mathbf e_q)."
},
{
"math_id": 26,
"text": "\\Omega(\\mathbf e) = d\\omega(\\mathbf e) + \\omega(\\mathbf e)\\wedge\\omega(\\mathbf e)."
},
{
"math_id": 27,
"text": "\\Omega(\\mathbf e\\, g) = g^{-1}\\Omega(\\mathbf e)g."
},
{
"math_id": 28,
"text": "\\Omega={\\mathbf e}\\Omega(\\mathbf e){\\mathbf e}^*"
},
{
"math_id": 29,
"text": "\\Omega\\in \\Gamma(\\Omega^2M\\otimes \\text{Hom}(E,E))."
},
{
"math_id": 30,
"text": "\\Omega(v) = D(D v) = D^2v\\, "
},
{
"math_id": 31,
"text": "\\Gamma(E)\\ \\stackrel{D}{\\to}\\ \\Gamma(E\\otimes\\Omega^1M)\\ \\stackrel{D}{\\to}\\ \\Gamma(E\\otimes\\Omega^2M)\\ \\stackrel{D}{\\to}\\ \\dots\\ \\stackrel{D}{\\to}\\ \\Gamma(E\\otimes\\Omega^n(M))"
},
{
"math_id": 32,
"text": "\\theta_x : T_xM \\rightarrow E_x"
},
{
"math_id": 33,
"text": "\\Theta = D\\theta.\\, "
},
{
"math_id": 34,
"text": "\\theta = \\sum_i \\theta^i(\\mathbf e) e_i."
},
{
"math_id": 35,
"text": "\\Theta^i(\\mathbf e) = d\\theta^i(\\mathbf e) + \\sum_j \\omega_j^i(\\mathbf e)\\wedge \\theta^j(\\mathbf e)."
},
{
"math_id": 36,
"text": "\\Theta^i(\\mathbf e\\, g)=\\sum_j g_j^i \\Theta^j(\\mathbf e)."
},
{
"math_id": 37,
"text": "\\Theta = \\sum_i e_i \\Theta^i(\\mathbf e)."
},
{
"math_id": 38,
"text": "D\\Theta=\\Omega\\wedge\\theta"
},
{
"math_id": 39,
"text": "\\, D \\Omega = 0."
},
{
"math_id": 40,
"text": "\\nabla_{e_i}e_j = \\sum_{k=1}^n\\Gamma_{ij}^k(\\mathbf e)e_k."
},
{
"math_id": 41,
"text": "\\omega_i^j(\\mathbf e) = \\sum_k \\Gamma^j{}_{ki}(\\mathbf e)\\theta^k."
},
{
"math_id": 42,
"text": " Dv=\\sum_k e_k\\otimes(dv^k) + \\sum_{j,k}e_k\\otimes\\omega^k_j(\\mathbf e)v^j."
},
{
"math_id": 43,
"text": " \\nabla_{e_i} v = \\langle Dv, e_i\\rangle = \\sum_k e_k \\left(\\nabla_{e_i} v^k + \\sum_j\\Gamma^k_{ij}(\\mathbf e)v^j\\right)"
},
{
"math_id": 44,
"text": "\n\\Omega_i{}^j(\\mathbf e) = d\\omega_i{}^j(\\mathbf e)+\\sum_k\\omega_k{}^j(\\mathbf e)\\wedge\\omega_i{}^k(\\mathbf e).\n"
},
{
"math_id": 45,
"text": "\\begin{array}{ll}\n\\Omega_i{}^j &= d(\\Gamma^j{}_{qi}\\theta^q) + (\\Gamma^j{}_{pk}\\theta^p)\\wedge(\\Gamma^k{}_{qi}\\theta^q)\\\\\n&\\\\\n&=\\theta^p\\wedge\\theta^q\\left(\\partial_p\\Gamma^j{}_{qi}+\\Gamma^j{}_{pk}\\Gamma^k{}_{qi})\\right)\\\\\n&\\\\\n&=\\tfrac12\\theta^p\\wedge\\theta^q R_{pqi}{}^j\n\\end{array}\n"
},
{
"math_id": 46,
"text": "\\Theta^i(\\mathbf e) = d\\theta^i+\\sum_j\\omega^i_j(\\mathbf e)\\wedge\\theta^j."
},
{
"math_id": 47,
"text": "\\Theta^i = \\Gamma^i{}_{kj} \\theta^k\\wedge\\theta^j"
},
{
"math_id": 48,
"text": "e_\\alpha' = \\sum_\\beta e_\\beta g_\\alpha^\\beta."
},
{
"math_id": 49,
"text": "\\Gamma(\\gamma)_0^t e_\\alpha(\\gamma(0)) = \\sum_\\beta e_\\beta(\\gamma(t))g_\\alpha^\\beta(t) "
},
{
"math_id": 50,
"text": "\\nabla_{\\dot{\\gamma}(0)} e_\\alpha = \\sum_\\beta e_\\beta \\omega_\\alpha^\\beta(\\dot{\\gamma}(0))"
},
{
"math_id": 51,
"text": "D e_\\alpha = \\sum_\\beta e_\\beta\\otimes \\omega_\\alpha^\\beta(\\mathbf e)"
},
{
"math_id": 52,
"text": "e_\\alpha' = \\sum_\\beta e_\\beta g_\\alpha^\\beta"
},
{
"math_id": 53,
"text": "\\omega_\\alpha^\\beta(\\mathbf e\\cdot g) = (g^{-1})_\\gamma^\\beta dg_\\alpha^\\gamma + (g^{-1})_\\gamma^\\beta \\omega_\\delta^\\gamma(\\mathbf e)g_\\alpha^\\delta."
},
{
"math_id": 54,
"text": "\\omega({\\mathbf e}\\cdot g) = g^{-1}dg + g^{-1}\\omega g."
},
{
"math_id": 55,
"text": "\\omega({\\mathbf e}\\cdot g) = g^*\\omega_{\\mathfrak g} + \\text{Ad}_{g^{-1}}\\omega(\\mathbf e)"
},
{
"math_id": 56,
"text": "{\\mathbf e}_V={\\mathbf e}_U\\cdot h_{UV}"
},
{
"math_id": 57,
"text": "F_GE = \\left.\\coprod_U U\\times G\\right/\\sim"
},
{
"math_id": 58,
"text": "\\sim"
},
{
"math_id": 59,
"text": "((x,g_U)\\in U\\times G) \\sim ((x,g_V) \\in V\\times G) \\iff {\\mathbf e}_V={\\mathbf e}_U\\cdot h_{UV} \\text{ and } g_U = h_{UV}^{-1}(x) g_V. "
},
{
"math_id": 60,
"text": "\\pi_1:U\\times G \\to U,\\quad \\pi_2 : U\\times G \\to G"
},
{
"math_id": 61,
"text": "\\omega_{(x,g)} = Ad_{g^{-1}}\\pi_1^*\\omega(\\mathbf e_U)+\\pi_2^*\\omega_{\\mathbf g}."
},
{
"math_id": 62,
"text": "\\omega({\\mathbf e}) = {\\mathbf e}^*\\omega."
},
{
"math_id": 63,
"text": "\\langle X, ({\\mathbf e}\\cdot g)^*\\omega\\rangle = \\langle [d(\\mathbf e\\cdot g)](X), \\omega\\rangle"
}
]
| https://en.wikipedia.org/wiki?curid=706412 |
706435 | Parseval's theorem | Theorem in mathematics
In mathematics, Parseval's theorem usually refers to the result that the Fourier transform is unitary; loosely, that the sum (or integral) of the square of a function is equal to the sum (or integral) of the square of its transform. It originates from a 1799 theorem about series by Marc-Antoine Parseval, which was later applied to the Fourier series. It is also known as Rayleigh's energy theorem, or Rayleigh's identity, after John William Strutt, Lord Rayleigh.
Although the term "Parseval's theorem" is often used to describe the unitarity of "any" Fourier transform, especially in physics, the most general form of this property is more properly called the Plancherel theorem.
Statement of Parseval's theorem.
Suppose that formula_0 and formula_1 are two complex-valued functions on formula_2 of period formula_3 that are square integrable (with respect to the Lebesgue measure) over intervals of period length, with Fourier series
formula_4
and
formula_5
respectively. Then

$$\sum_{n=-\infty}^\infty a_n\overline{b_n} = \frac{1}{2\pi} \int_{-\pi}^{\pi} A(x)\overline{B(x)} \, \mathrm{d}x,$$

where formula_6 is the imaginary unit and horizontal bars indicate complex conjugation. Substituting formula_0 and formula_7:
formula_8
As is the case with the middle terms in this example, many terms will integrate to formula_9 over a full period of length formula_10 (see harmonics):
formula_11
More generally, suppose that formula_0 and formula_1 are instead two complex-valued functions on formula_2 of period formula_12 that are square integrable (with respect to the Lebesgue measure) over intervals of period length, with Fourier series
formula_13
and
formula_14
respectively. Then

$$\sum_{n=-\infty}^\infty a_n\overline{b_n} = \frac{1}{P} \int_{P} A(x)\overline{B(x)} \, \mathrm{d}x,$$

where the integral is taken over any interval of length formula_12.
Even more generally, given an abelian locally compact group "G" with Pontryagin dual "G^", Parseval's theorem says the Pontryagin–Fourier transform is a unitary operator between Hilbert spaces "L"2("G") and "L"2("G^") (with integration being against the appropriately scaled Haar measures on the two groups.) When "G" is the unit circle T, "G^" is the integers and this is the case discussed above. When "G" is the real line formula_2, "G^" is also formula_2 and the unitary transform is the Fourier transform on the real line. When "G" is the cyclic group Zn, again it is self-dual and the Pontryagin–Fourier transform is what is called discrete Fourier transform in applied contexts.
Parseval's theorem can also be expressed as follows:
Suppose formula_15 is a square-integrable function over formula_16 (i.e., formula_15 and formula_17 are integrable on that interval), with the Fourier series
formula_18
Then
formula_19
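A classical worked example is formula_15 = "x" on formula_16, whose Fourier coefficients are "a""n" = 0 and "b""n" = 2(−1)"n"+1/"n". The theorem then gives

$$\frac{1}{\pi}\int_{-\pi}^{\pi} x^2\, \mathrm{d}x = \frac{2\pi^2}{3} = \sum_{n=1}^{\infty} \frac{4}{n^2},$$

so that $\sum_{n \ge 1} 1/n^2 = \pi^2/6$, the solution of the Basel problem.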
Notation used in engineering.
In electrical engineering, Parseval's theorem is often written as:
formula_20
where formula_21 represents the continuous Fourier transform (in non-unitary form) of formula_22, and formula_23 is frequency in radians per second.
The interpretation of this form of the theorem is that the total energy of a signal can be calculated by summing power-per-sample across time or spectral power across frequency.
For discrete time signals, the theorem becomes:
formula_24
where formula_25 is the discrete-time Fourier transform (DTFT) of formula_26 and formula_27 represents the angular frequency (in radians per sample) of formula_26.
Alternatively, for the discrete Fourier transform (DFT), the relation becomes:
formula_28
where formula_29 is the DFT of formula_30, both of length formula_31.
We show the DFT case below. For the other cases, the proof is similar. By using the definition of inverse DFT of formula_29, we can derive
formula_32
where formula_33 represents complex conjugate.
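The DFT form of the identity is straightforward to verify numerically. The following is a minimal sketch using NumPy, whose numpy.fft.fft routine follows the same sign and normalization conventions as the formulas above:

```python
import numpy as np

# Minimal numerical check of the DFT form of Parseval's theorem.
# numpy.fft.fft computes X[k] = sum_n x[n] * exp(-2j*pi*k*n/N), as above.
rng = np.random.default_rng(0)
N = 1024
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # arbitrary signal

X = np.fft.fft(x)

energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(X) ** 2) / N  # note the 1/N factor

assert np.isclose(energy_time, energy_freq)
print(energy_time, energy_freq)  # agree to floating-point precision
```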
See also.
Parseval's theorem is closely related to other mathematical results involving unitary transformations:
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A(x)"
},
{
"math_id": 1,
"text": "B(x)"
},
{
"math_id": 2,
"text": "\\mathbb{R}"
},
{
"math_id": 3,
"text": "2 \\pi"
},
{
"math_id": 4,
"text": "A(x)=\\sum_{n=-\\infty}^\\infty a_ne^{inx}"
},
{
"math_id": 5,
"text": "B(x)=\\sum_{n=-\\infty}^\\infty b_ne^{inx}"
},
{
"math_id": 6,
"text": "i"
},
{
"math_id": 7,
"text": "\\overline{B(x)}"
},
{
"math_id": 8,
"text": "\n\\begin{align}\n\\sum_{n=-\\infty}^\\infty a_n\\overline{b_n}\n&= \\frac{1}{2\\pi} \\int_{-\\pi}^\\pi \\left( \\sum_{n=-\\infty}^\\infty a_ne^{inx} \\right) \\left( \\sum_{n=-\\infty}^\\infty \\overline{b_n}e^{-inx} \\right) \\, \\mathrm{d}x \\\\[6pt]\n&= \\frac{1}{2\\pi} \\int_{-\\pi}^\\pi \\left(a_1e^{i1x} + a_2e^{i2x} + \\cdots\\right) \\left(\\overline{b_1}e^{-i1x} + \\overline{b_2}e^{-i2x} + \\cdots\\right) \\mathrm{d}x \\\\[6pt]\n&= \\frac{1}{2\\pi} \\int_{-\\pi}^\\pi \\left(a_1e^{i1x} \\overline{b_1}e^{-i1x} + a_1e^{i1x} \\overline{b_2}e^{-i2x} + a_2e^{i2x} \\overline{b_1}e^{-i1x} + a_2e^{i2x} \\overline{b_2}e^{-i2x} + \\cdots \\right) \\mathrm{d}x \\\\[6pt]\n&= \\frac{1}{2\\pi} \\int_{-\\pi}^\\pi \\left(a_1 \\overline{b_1} + a_1 \\overline{b_2}e^{-ix} + a_2 \\overline{b_1}e^{ix} + a_2 \\overline{b_2} + \\cdots\\right) \\mathrm{d}x\n\\end{align}\n"
},
{
"math_id": 9,
"text": "0"
},
{
"math_id": 10,
"text": "2\\pi"
},
{
"math_id": 11,
"text": "\n\\begin{align}\n\\sum_{n=-\\infty}^\\infty a_n\\overline{b_n} &= \\frac{1}{2\\pi} \\left[a_1 \\overline{b_1} x + i a_1 \\overline{b_2}e^{-ix} - i a_2 \\overline{b_1}e^{ix} + a_2 \\overline{b_2} x + \\cdots\\right] _{-\\pi} ^{+\\pi} \\\\[6pt]\n&= \\frac{1}{2\\pi} \\left(2\\pi a_1 \\overline{b_1} + 0 + 0 + 2\\pi a_2 \\overline{b_2} + \\cdots\\right) \\\\[6pt]\n&= a_1 \\overline{b_1} + a_2 \\overline{b_2} + \\cdots \\\\[6pt]\n\\end{align}"
},
{
"math_id": 12,
"text": "P"
},
{
"math_id": 13,
"text": "A(x)=\\sum_{n=-\\infty}^\\infty a_ne^{2\\pi ni\\left(\\frac{x}{P}\\right)}"
},
{
"math_id": 14,
"text": "B(x)=\\sum_{n=-\\infty}^\\infty b_ne^{2\\pi ni\\left(\\frac{x}{P}\\right)}"
},
{
"math_id": 15,
"text": "f(x)"
},
{
"math_id": 16,
"text": "[-\\pi, \\pi]"
},
{
"math_id": 17,
"text": "f^2(x)"
},
{
"math_id": 18,
"text": "f(x) \\simeq \\frac{a_0}{2} + \\sum_{n=1}^{\\infty} (a_n \\cos(nx) + b_n \\sin(nx))."
},
{
"math_id": 19,
"text": "\\frac{1}{\\pi} \\int_{-\\pi}^{\\pi} f^2(x) \\,\\mathrm{d}x = \\frac{a_0^2}{2} + \\sum_{n=1}^{\\infty} \\left(a_n^2 + b_n^2 \\right)."
},
{
"math_id": 20,
"text": "\\int_{-\\infty}^\\infty | x(t) |^2 \\, \\mathrm{d}t = \\frac{1}{2\\pi} \\int_{-\\infty}^\\infty | X(\\omega) |^2 \\, \\mathrm{d}\\omega = \\int_{-\\infty}^\\infty | X(2\\pi f) |^2 \\, \\mathrm{d}f"
},
{
"math_id": 21,
"text": "X(\\omega) = \\mathcal{F}_\\omega\\{ x(t) \\}"
},
{
"math_id": 22,
"text": "x(t)"
},
{
"math_id": 23,
"text": "\\omega = 2\\pi f"
},
{
"math_id": 24,
"text": "\\sum_{n=-\\infty}^\\infty | x[n] |^2 = \\frac{1}{2\\pi} \\int_{-\\pi}^\\pi | X_{2\\pi}({\\phi}) |^2 \\mathrm{d}\\phi"
},
{
"math_id": 25,
"text": "X_{2\\pi}"
},
{
"math_id": 26,
"text": "x"
},
{
"math_id": 27,
"text": "\\phi"
},
{
"math_id": 28,
"text": " \\sum_{n=0}^{N-1} | x[n] |^2 = \\frac{1}{N} \\sum_{k=0}^{N-1} | X[k] |^2"
},
{
"math_id": 29,
"text": "X[k]"
},
{
"math_id": 30,
"text": "x[n]"
},
{
"math_id": 31,
"text": "N"
},
{
"math_id": 32,
"text": "\\begin{align}\n\\frac{1}{N} \\sum_{k=0}^{N-1} | X[k] |^2\n&= \\frac{1}{N} \\sum_{k=0}^{N-1} X[k]\\cdot X^*[k]\n = \\frac{1}{N} \\sum_{k=0}^{N-1} \\left[\\sum_{n=0}^{N-1} x[n]\\,\\exp\\left(-j\\frac{2\\pi}{N}k\\,n\\right)\\right] \\, X^*[k]\n\\\\[5mu]\n&= \\frac{1}{N} \\sum_{n=0}^{N-1} x[n] \\left[\\sum_{k=0}^{N-1} X^*[k]\\,\\exp\\left(-j\\frac{2\\pi}{N}k\\,n\\right)\\right] \n = \\frac{1}{N} \\sum_{n=0}^{N-1} x[n] (N \\cdot x^*[n])\n\\\\[5mu]\n&= \\sum_{n=0}^{N-1} | x[n] |^2,\n\\end{align}"
},
{
"math_id": 33,
"text": "*"
}
]
| https://en.wikipedia.org/wiki?curid=706435 |
70649628 | Modal collapse | Concept in modal logic
In modal logic, modal collapse is the condition in which every true statement is necessarily true, and vice versa; that is to say, there are no contingent truths, or to put it another way, that "everything exists necessarily" (and likewise if something does not exist, it cannot exist). In the notation of modal logic, this can be written as formula_0.
In the context of philosophy, the term is commonly used in critiques of ontological arguments for the existence of God and the principle of divine simplicity. For example, Gödel's ontological proof contains formula_1 as a theorem, which combined with the axioms of system S5 leads to modal collapse. Since some regard divine freedom as essential to the nature of God, and modal collapse as negating the concept of free will, this then leads to the breakdown of Gödel's argument.
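The role of S5 here can be made explicit (a sketch using only the reflexivity axiom T, $\Box \phi \rightarrow \phi$, which S5 contains; the details of Gödel's axiomatization are not needed for this step): if a system proves the schema formula_1 and also proves T, then for every formula $\phi$

$$\vdash \phi \rightarrow \Box \phi \quad\text{and}\quad \vdash \Box \phi \rightarrow \phi, \qquad\text{so}\qquad \vdash \phi \leftrightarrow \Box \phi,$$

which is precisely the modal collapse formula_0.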
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi \\leftrightarrow \\Box \\phi"
},
{
"math_id": 1,
"text": "\\phi \\rightarrow \\Box \\phi"
}
]
| https://en.wikipedia.org/wiki?curid=70649628 |
70651 | Van der Waals radius | Size of an atom's imaginary sphere representing how close other atoms can get
The van der Waals radius, "r"w, of an atom is the radius of an imaginary hard sphere representing the distance of closest approach for another atom.
It is named after Johannes Diderik van der Waals, winner of the 1910 Nobel Prize in Physics, as he was the first to recognise that atoms were not simply points and to demonstrate the physical consequences of their size through the van der Waals equation of state.
van der Waals volume.
The van der Waals volume, "Vw", also called the atomic volume or molecular volume, is the atomic property most directly related to the van der Waals radius. It is the volume "occupied" by an individual atom (or molecule).
The van der Waals volume may be calculated if the van der Waals radii (and, for molecules, the inter-atomic distances and angles) are known. For a single atom, it is the volume of a sphere whose radius is the van der Waals radius of the atom:
formula_0
For a molecule, it is the volume enclosed by the van der Waals surface.
The van der Waals volume of a molecule is always smaller than the sum of the van der Waals volumes of the constituent atoms: the atoms can be said to "overlap" when they form chemical bonds.
The van der Waals volume of an atom or molecule may also be determined by experimental measurements on gases, notably from the van der Waals constant "b", the polarizability "α", or the molar refractivity "A".
In all three cases, measurements are made on macroscopic samples and it is normal to express the results as molar quantities.
To find the van der Waals volume of a single atom or molecule, it is necessary to divide by the Avogadro constant "N"A.
The molar van der Waals volume should not be confused with the molar volume of the substance.
In general, at normal laboratory temperatures and pressures, the atoms or molecules of a gas only occupy about 1/1000 of the volume of the gas; the rest is empty space.
Hence the molar van der Waals volume, which only counts the volume occupied by the atoms or molecules, is usually about 1000 times smaller than the molar volume for a gas at standard temperature and pressure.
Methods of determination.
Van der Waals radii may be determined from the mechanical properties of gases (the original method), from the critical point, from measurements of atomic spacing between pairs of unbonded atoms in crystals or from measurements of electrical or optical properties (the polarizability and the molar refractivity).
These various methods give values for the van der Waals radius which are similar (1–2 Å, 100–200 pm) but not identical.
Tabulated values of van der Waals radii are obtained by taking a weighted mean of a number of different experimental values, and, for this reason, different tables will often have different values for the van der Waals radius of the same atom.
Indeed, there is no reason to assume that the van der Waals radius is a fixed property of the atom in all circumstances: rather, it tends to vary with the particular chemical environment of the atom in any given case.
Van der Waals equation of state.
The van der Waals equation of state is the simplest and best-known modification of the ideal gas law to account for the behaviour of real gases:
formula_1
where "p" is the pressure, "n" is the number of moles of the gas in question, formula_2 is the volume, "R" is the molar gas constant and "T" the absolute temperature; "a" is a correction for intermolecular forces and "b" corrects for finite atomic or molecular sizes, with the value of "b" equal to the van der Waals volume per mole of the gas.
The values of "a" and "b" vary from gas to gas.
The van der Waals equation also has a microscopic interpretation: molecules interact with one another.
The interaction is strongly repulsive at a very short distance, becomes mildly attractive at the intermediate range, and vanishes at a long distance.
The ideal gas law must be corrected when attractive and repulsive forces are considered.
For example, the mutual repulsion between molecules has the effect of excluding neighbors from a certain amount of space around each molecule.
Thus, a fraction of the total space becomes unavailable to each molecule as it executes random motion.
In the equation of state, this volume of exclusion ("nb") should be subtracted from the volume of the container (V), thus: ("V" - "nb").
The other term that is introduced in the van der Waals equation, formula_3, describes a weak attractive force among molecules (known as the van der Waals force), which increases when n increases or V decreases and molecules become more crowded together.
The van der Waals constant "b" volume can be used to calculate the van der Waals volume of an atom or molecule with experimental data derived from measurements on gases.
For helium, "b" = 23.7 cm3/mol. Helium is a monatomic gas, and each mole of helium contains 6.022×1023 atoms (the Avogadro constant, "N"A):
formula_4
Therefore, the van der Waals volume of a single atom "V"w = 39.36 Å3, which corresponds to "r"w = 2.11 Å (≈ 200 picometers).
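Since the calculation involves only a few constants, it is easily scripted. A minimal Python sketch using the values quoted above:

```python
import math

# Van der Waals radius of helium from the gas constant b (values as above).
N_A = 6.022e23           # Avogadro constant, mol^-1
b = 23.7                 # van der Waals constant of helium, cm^3/mol

V_w = b / N_A * 1e24     # volume per atom; 1 cm^3 = 1e24 cubic angstroms

# Invert V = (4/3) * pi * r^3 for the radius.
r_w = (3.0 * V_w / (4.0 * math.pi)) ** (1.0 / 3.0)

print(f"V_w = {V_w:.2f} A^3, r_w = {r_w:.2f} A")   # ~39.36 A^3, ~2.11 A
```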
This method may be extended to diatomic gases by approximating the molecule as a rod with rounded ends where the diameter is 2"r"w and the internuclear distance is d.
The algebra is more complicated, but the relation
formula_5
can be solved by the normal methods for cubic functions.
Crystallographic measurements.
The molecules in a molecular crystal are held together by van der Waals forces rather than chemical bonds.
In principle, the closest that two atoms belonging to "different" molecules can approach one another is given by the sum of their van der Waals radii.
By examining a large number of structures of molecular crystals, it is possible to find a minimum radius for each type of atom such that other non-bonded atoms do not encroach any closer.
This approach was first used by Linus Pauling in his seminal work "The Nature of the Chemical Bond".
Arnold Bondi also conducted a study of this type, published in 1964, although he also considered other methods of determining the van der Waals radius in coming to his final estimates.
Some of Bondi's figures are given in the table at the top of this article, and they remain the most widely used "consensus" values for the van der Waals radii of the elements.
Scott Rowland and Robin Taylor re-examined these 1964 figures in the light of more recent crystallographic data: on the whole, the agreement was very good, although they recommended a value of 1.09 Å for the van der Waals radius of hydrogen as opposed to Bondi's 1.20 Å. A more recent analysis of the Cambridge Structural Database, carried out by Santiago Alvarez, provided a new set of values for 93 naturally occurring elements.
A simple example of the use of crystallographic data (here neutron diffraction) is to consider the case of solid helium, where the atoms are held together only by van der Waals forces (rather than by covalent or metallic bonds) and so the distance between the nuclei can be considered to be equal to twice the van der Waals radius.
The density of solid helium at 1.1 K and 66 atm is 0.214 g/cm3, corresponding to a molar volume "V"m = 18.7 cm3/mol.
The van der Waals volume is given by
formula_6
where the factor of π/√18 arises from the packing of spheres: "V"w = 2.30×10−23 cm3 = 23.0 Å3, corresponding to a van der Waals radius "r"w = 1.76 Å.
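The same arithmetic can be checked in a short Python sketch (values as quoted above):

```python
import math

# Van der Waals radius of helium from crystallographic data (values as above).
N_A = 6.022e23        # Avogadro constant, mol^-1
M = 4.0026            # molar mass of helium, g/mol
rho = 0.214           # density of solid helium at 1.1 K and 66 atm, g/cm^3

V_m = M / rho                                 # molar volume, cm^3/mol
V_w = math.pi * V_m / (N_A * math.sqrt(18))   # packing factor pi/sqrt(18)
V_w_A3 = V_w * 1e24                           # convert cm^3 to A^3

r_w = (3.0 * V_w_A3 / (4.0 * math.pi)) ** (1.0 / 3.0)
print(f"V_w = {V_w_A3:.1f} A^3, r_w = {r_w:.2f} A")  # ~23.0 A^3, ~1.76 A
```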
Molar refractivity.
The molar refractivity A of a gas is related to its refractive index n by the Lorentz–Lorenz equation:
formula_7
The refractive index of helium "n" = 1.0000350 at 0 °C and 101.325 kPa, which corresponds to a molar refractivity "A" = 5.23×10−7 m3/mol.
Dividing by the Avogadro constant gives "V"w = 8.685×10−31 m3 = 0.8685 Å3, corresponding to "r"w = 0.59 Å.
Polarizability.
The polarizability "α" of a gas is related to its electric susceptibility "χ"e by the relation
formula_8
and the electric susceptibility may be calculated from tabulated values of the relative permittivity "ε"r using the relation "χ"e = "ε"r − 1.
The electric susceptibility of helium "χ"e = 7×10−5 at 0 °C and 101.325 kPa, which corresponds to a polarizability "α" = 2.307×10−41 C m2 V−1.
The polarizability is related the van der Waals volume by the relation
formula_9
so the van der Waals volume of helium "V"w = 2.073×10−31 m3 = 0.2073 Å3 by this method, corresponding to "r"w = 0.37 Å.
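Both electromagnetic estimates reduce to a few lines of arithmetic. A minimal Python sketch using the constants quoted above:

```python
import math

# Helium van der Waals radius from molar refractivity and polarizability
# (values as quoted above; SI units throughout).
N_A = 6.022e23      # Avogadro constant, mol^-1
eps0 = 8.854e-12    # vacuum permittivity, F/m

A = 5.23e-7         # molar refractivity, m^3/mol
alpha = 2.307e-41   # polarizability, C m^2 V^-1

V_refr = A / N_A                        # m^3 per atom
V_pol = alpha / (4 * math.pi * eps0)    # m^3 per atom

for name, V in (("refractivity", V_refr), ("polarizability", V_pol)):
    r_w = (3.0 * V / (4.0 * math.pi)) ** (1.0 / 3.0) * 1e10  # metres -> A
    print(f"{name}: V_w = {V * 1e30:.4f} A^3, r_w = {r_w:.2f} A")
# refractivity:   V_w = 0.8685 A^3, r_w = 0.59 A
# polarizability: V_w = 0.2073 A^3, r_w = 0.37 A
```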
When the atomic polarizability is quoted in units of volume such as Å3, as is often the case, it is equal to the van der Waals volume.
However, the term "atomic polarizability" is preferred as polarizability is a precisely defined (and measurable) physical quantity, whereas "van der Waals volume" can have any number of definitions depending on the method of measurement.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V_{\\rm w} = {4\\over 3}\\pi r_{\\rm w}^3."
},
{
"math_id": 1,
"text": "\\left (p + a\\left (\\frac{n}{\\tilde{V}}\\right )^2\\right ) (\\tilde{V} - nb) = nRT,"
},
{
"math_id": 2,
"text": "\\tilde{V}"
},
{
"math_id": 3,
"text": "a\\left (\\frac{n}{\\tilde{V}}\\right )^2"
},
{
"math_id": 4,
"text": "V_{\\rm w} = {b\\over{N_{\\rm A}}}"
},
{
"math_id": 5,
"text": "V_{\\rm w} = {4\\over 3}\\pi r_{\\rm w}^3 + \\pi r_{\\rm w}^2d"
},
{
"math_id": 6,
"text": "V_{\\rm w} = \\frac{\\pi V_{\\rm m}}{N_{\\rm A}\\sqrt{18}}"
},
{
"math_id": 7,
"text": "A = \\frac{R T (n^2 - 1)}{3p}"
},
{
"math_id": 8,
"text": "\\alpha = {\\varepsilon_0 k_{\\rm B}T\\over p}\\chi_{\\rm e}"
},
{
"math_id": 9,
"text": "V_{\\rm w} = {1\\over{4\\pi\\varepsilon_0}}\\alpha ,"
}
]
| https://en.wikipedia.org/wiki?curid=70651 |
70657 | Van der Waals force | Interactions between groups of atoms that do not arise from chemical bonds
In molecular physics and chemistry, the van der Waals force (sometimes van de Waals' force) is a distance-dependent interaction between atoms or molecules. Unlike ionic or covalent bonds, these attractions do not result from a chemical electronic bond; they are comparatively weak and therefore more susceptible to disturbance. The van der Waals force quickly vanishes at longer distances between interacting molecules.
Named after Dutch physicist Johannes Diderik van der Waals, the van der Waals force plays a fundamental role in fields as diverse as supramolecular chemistry, structural biology, polymer science, nanotechnology, surface science, and condensed matter physics. It also underlies many properties of organic compounds and molecular solids, including their solubility in polar and non-polar media.
If no other force is present, the distance between atoms at which the force becomes repulsive rather than attractive as the atoms approach one another is called the van der Waals contact distance; this phenomenon results from the mutual repulsion between the atoms' electron clouds.
The van der Waals forces are usually described as a combination of the London dispersion forces between "instantaneously induced dipoles", Debye forces between permanent dipoles and induced dipoles, and the Keesom force between permanent molecular dipoles whose rotational orientations are dynamically averaged over time.
Definition.
Van der Waals forces include attraction and repulsions between atoms, molecules, as well as other intermolecular forces. They differ from covalent and ionic bonding in that they are caused by correlations in the fluctuating polarizations of nearby particles (a consequence of quantum dynamics).
The force results from a transient shift in electron density. Specifically, the electron density may temporarily shift to be greater on one side of the nucleus. This shift generates a transient charge which a nearby atom can be attracted to or repelled by. The force is repulsive at very short distances, reaches zero at an equilibrium distance characteristic for each atom, or molecule, and becomes attractive for distances larger than the equilibrium distance. For individual atoms, the equilibrium distance is between 0.3 nm and 0.5 nm, depending on the atomic-specific diameter. When the interatomic distance is greater than 1.0 nm the force is not strong enough to be easily observed as it decreases as a function of distance "r" approximately with the 7th power (~"r"−7).
Van der Waals forces are often among the weakest chemical forces. For example, the pairwise attractive van der Waals interaction energy between H (hydrogen) atoms in different H2 molecules equals 0.06 kJ/mol (0.6 meV) and the pairwise attractive interaction energy between O (oxygen) atoms in different O2 molecules equals 0.44 kJ/mol (4.6 meV). The corresponding vaporization energies of H2 and O2 molecular liquids, which result as a sum of all van der Waals interactions per molecule in the molecular liquids, amount to 0.90 kJ/mol (9.3 meV) and 6.82 kJ/mol (70.7 meV), respectively, and thus approximately 15 times the value of the individual pairwise interatomic interactions (excluding covalent bonds).
The strength of van der Waals bonds increases with higher polarizability of the participating atoms. For example, the pairwise van der Waals interaction energy for more polarizable atoms such as S (sulfur) atoms in H2S and sulfides exceeds 1 kJ/mol (10 meV), and the pairwise interaction energy between even larger, more polarizable Xe (xenon) atoms is 2.35 kJ/mol (24.3 meV). These van der Waals interactions are up to 40 times stronger than in H2, which has only one valence electron, and they are still not strong enough to achieve an aggregate state other than gas for Xe under standard conditions. The interactions between atoms in metals can also be effectively described as van der Waals interactions and account for the observed solid aggregate state with bonding strengths comparable to covalent and ionic interactions. The strength of pairwise van der Waals type interactions is on the order of 12 kJ/mol (120 meV) for low-melting Pb (lead) and on the order of 32 kJ/mol (330 meV) for high-melting Pt (platinum), which is about one order of magnitude stronger than in Xe due to the presence of a highly polarizable free electron gas. Accordingly, van der Waals forces can range from weak to strong interactions, and support integral structural loads when multitudes of such interactions are present.
More broadly, intermolecular forces have several possible contributions: (1) a repulsive component resulting from the Pauli exclusion principle, which prevents the collapse of molecules at short range; (2) attractive or repulsive electrostatic interactions between permanent charges, dipoles, quadrupoles, and multipoles in general, the orientation-averaged dipole–dipole part being the Keesom force; (3) induction (also known as polarization), the attractive interaction between a permanent multipole on one molecule and an induced multipole on another, known as the Debye force; and (4) dispersion, the attractive interaction between any pair of molecules, including non-polar atoms, arising from instantaneous multipoles, known as the London force.
When to apply the term "van der Waals" force depends on the text. The broadest definitions include all intermolecular forces which are electrostatic in origin, namely (2), (3) and (4). Some authors, whether or not they consider other forces to be of van der Waals type, focus on (3) and (4) as these are the components which act over the longest range.
All intermolecular/van der Waals forces are anisotropic (except those between two noble gas atoms), which means that they depend on the relative orientation of the molecules. The induction and dispersion interactions are always attractive, irrespective of orientation, but the electrostatic interaction changes sign upon rotation of the molecules. That is, the electrostatic force can be attractive or repulsive, depending on the mutual orientation of the molecules. When molecules are in thermal motion, as they are in the gas and liquid phase, the electrostatic force is averaged out to a large extent because the molecules thermally rotate and thus probe both repulsive and attractive parts of the electrostatic force. Random thermal motion can disrupt or overcome the electrostatic component of the van der Waals force but the averaging effect is much less pronounced for the attractive induction and dispersion forces.
The Lennard-Jones potential is often used as an approximate model for the isotropic part of a total (repulsion plus attraction) van der Waals force as a function of distance.
Van der Waals forces are responsible for certain cases of pressure broadening (van der Waals broadening) of spectral lines and the formation of van der Waals molecules. The London–van der Waals forces are related to the Casimir effect for dielectric media, the former being the microscopic description of the latter bulk property. The first detailed calculations of this were done in 1955 by E. M. Lifshitz. A more general theory of van der Waals forces has also been developed.
The main characteristics of van der Waals forces are that they are weaker than normal covalent and ionic bonds; they are additive and cannot be saturated; they have no directional character; they are short-range forces, significant only between the closest particles; and, except for the dipole–dipole (Keesom) contribution, they are largely independent of temperature.
In low molecular weight alcohols, the hydrogen-bonding properties of their polar hydroxyl group dominate other weaker van der Waals interactions. In higher molecular weight alcohols, the properties of the nonpolar hydrocarbon chain(s) dominate and determine their solubility.
Van der Waals forces are also responsible for the weak hydrogen bond interactions between unpolarized dipoles, particularly in acid–base aqueous solution and between biological molecules.
London dispersion force.
London dispersion forces, named after the German-American physicist Fritz London, are weak intermolecular forces that arise from the interactive forces between instantaneous multipoles in molecules without permanent multipole moments. In and between organic molecules the multitude of contacts can lead to larger contribution of dispersive attraction, particularly in the presence of heteroatoms. London dispersion forces are also known as 'dispersion forces', 'London forces', or 'instantaneous dipole–induced dipole forces'. The strength of London dispersion forces is proportional to the polarizability of the molecule, which in turn depends on the total number of electrons and the area over which they are spread. Hydrocarbons display small dispersive contributions, the presence of heteroatoms lead to increased LD forces as function of their polarizability, e.g. in the sequence RI>RBr>RCl>RF. In absence of solvents weakly polarizable hydrocarbons form crystals due to dispersive forces; their sublimation heat is a measure of the dispersive interaction.
Van der Waals forces between macroscopic objects.
For macroscopic bodies with known volumes and numbers of atoms or molecules per unit volume, the total van der Waals force is often computed based on the "microscopic theory" as the sum over all interacting pairs. It is necessary to integrate over the total volume of the object, which makes the calculation dependent on the objects' shapes. For example, the van der Waals interaction energy between spherical bodies of radii R1 and R2 and with smooth surfaces was approximated in 1937 by Hamaker (using London's famous 1937 equation for the dispersion interaction energy between atoms/molecules as the starting point) by:

$$U(z; R_1, R_2) = -\frac{A}{6}\left(\frac{2R_1R_2}{z^2 - (R_1+R_2)^2} + \frac{2R_1R_2}{z^2 - (R_1-R_2)^2} + \ln\left[\frac{z^2-(R_1+R_2)^2}{z^2-(R_1-R_2)^2}\right]\right) \qquad (1)$$
where A is the Hamaker coefficient, which is a constant (~10−19 − 10−20 J) that depends on the material properties (it can be positive or negative in sign depending on the intervening medium), and "z" is the center-to-center distance; i.e., the sum of "R"1, "R"2, and "r" (the distance between the surfaces): formula_0.
The van der Waals "force" between two spheres of constant radii ("R"1 and "R"2 are treated as parameters) is then a function of separation since the force on an object is the negative of the derivative of the potential energy function,formula_1. This yields:
In the limit of close approach, the spheres are sufficiently large compared to the distance between them; i.e., formula_2 or formula_3, so that equation (1) for the potential energy function simplifies to:

$$U(r; R_1, R_2) = -\frac{A\, R_1 R_2}{(R_1+R_2)\, 6r} \qquad (3)$$
with the force:

$$F_{\rm VdW}(r) = -\frac{A\, R_1 R_2}{(R_1+R_2)\, 6r^2} \qquad (4)$$
The van der Waals forces between objects with other geometries using the Hamaker model have been published in the literature.
From the expression above, it is seen that the van der Waals force decreases with decreasing size of bodies (R). Nevertheless, the strength of inertial forces, such as gravity and drag/lift, decreases to a greater extent. Consequently, the van der Waals forces become dominant for collections of very small particles such as very fine-grained dry powders (where there are no capillary forces present) even though the force of attraction is smaller in magnitude than it is for larger particles of the same substance. Such powders are said to be cohesive, meaning they are not as easily fluidized or pneumatically conveyed as their more coarse-grained counterparts. Generally, free-flow occurs with particles greater than about 250 μm.
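The scaling argument can be made concrete with equation (4). The following Python sketch compares the adhesion force with the weight of a single grain; the Hamaker coefficient, contact separation, particle size and density are illustrative assumed values, not measurements:

```python
import math

# Order-of-magnitude comparison of van der Waals adhesion (eq. 4) with
# gravity for a fine powder grain. All numbers are illustrative assumptions.
A = 1e-19      # assumed Hamaker coefficient, J
R = 5e-6       # particle radius, m (a 10 micrometre grain)
r = 4e-10      # assumed surface separation at contact, m
rho = 2000.0   # assumed particle density, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2

# Magnitude of eq. (4) with R1 = R2 = R.
F_vdw = A * R * R / (6.0 * (R + R) * r ** 2)

# Weight of a single spherical particle.
F_grav = rho * (4.0 / 3.0) * math.pi * R ** 3 * g

print(f"F_vdw  = {F_vdw:.2e} N")         # ~2.6e-07 N
print(f"F_grav = {F_grav:.2e} N")        # ~1.0e-11 N
print(f"ratio  = {F_vdw / F_grav:.0f}")  # adhesion dominates by ~10^4
```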
The van der Waals force of adhesion is also dependent on the surface topography. If there are surface asperities, or protuberances, that result in a greater total area of contact between two particles or between a particle and a wall, this increases the van der Waals force of attraction as well as the tendency for mechanical interlocking.
The microscopic theory assumes pairwise additivity. It neglects many-body interactions and retardation. A more rigorous approach accounting for these effects, called the "macroscopic theory", was developed by Lifshitz in 1956. Langbein derived a much more cumbersome "exact" expression in 1970 for spherical bodies within the framework of the Lifshitz theory while a simpler macroscopic model approximation had been made by Derjaguin as early as 1934. Expressions for the van der Waals forces for many different geometries using the Lifshitz theory have likewise been published.
Use by geckos and arthropods.
The ability of geckos – which can hang on a glass surface using only one toe – to climb on sheer surfaces has been for many years mainly attributed to the van der Waals forces between these surfaces and the spatulae, or microscopic projections, which cover the hair-like setae found on their footpads.
There were efforts in 2008 to create a dry glue that exploits the effect, and success was achieved in 2011 to create an adhesive tape on similar grounds (i.e. based on van der Waals forces). In 2011, a paper was published relating the effect to both velcro-like hairs and the presence of lipids in gecko footprints.
A later study suggested that capillary adhesion might play a role, but that hypothesis has been rejected by more recent studies.
A 2014 study has shown that gecko adhesion to smooth Teflon and polydimethylsiloxane surfaces is mainly determined by electrostatic interaction (caused by contact electrification), not van der Waals or capillary forces.
Among the arthropods, some spiders have similar setae on their scopulae or scopula pads, enabling them to climb or hang upside-down from extremely smooth surfaces such as glass or porcelain.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ z = R_{1} + R_{2} + r"
},
{
"math_id": 1,
"text": "\\ F_{\\rm VdW}(z) = -\\frac{d}{dz}U(z)"
},
{
"math_id": 2,
"text": "\\ r \\ll R_{1}"
},
{
"math_id": 3,
"text": "R_{2}"
}
]
| https://en.wikipedia.org/wiki?curid=70657 |
70659220 | Stable isotope composition of amino acids | The stable isotope composition of amino acids refers to the abundance of heavy and light non-radioactive isotopes of carbon (13C and 12C), nitrogen (15N and 14N), and other elements within these molecules. Amino acids are the building blocks of proteins. They are synthesized from alpha-keto acid precursors that are in turn intermediates of several different pathways in central metabolism. Carbon skeletons from these diverse sources are further modified before transamination, the addition of an amino group that completes amino acid biosynthesis. Bonds to heavy isotopes are stronger than bonds to light isotopes, making reactions involving heavier isotopes proceed slightly slower in most cases. This phenomenon, known as a kinetic isotope effect, gives rise to isotopic differences between reactants and products that can be detected using isotope ratio mass spectrometry. Amino acids are synthesized via a variety of pathways with reactions containing different, unknown isotope effects. Because of this, the 13C content of amino acid carbon skeletons varies considerably between the amino acids. There is also an isotope effect associated with transamination, which is apparent from the abundance of 15N in some amino acids.
Because of these properties, amino acid isotopes record useful information about the organisms that produce them. Variations in metabolism between different taxonomical groups give rise to characteristic patterns of 13C enrichment in their amino acids. This allows the sources of carbon in food webs to be identified. The isotope effect associated with transamination also makes amino acid nitrogen isotopes a useful tool to study the structure of food webs. Repeated transamination by consumers results in a predictable increase in the abundance of 15N as amino acids are transferred up food chains. Together, these application, among others in ecology, demonstrate the utility of stable isotopes as tracers of environmental processes that are difficult to measure directly.
Isotopic fractionation in reaction networks.
To explain the wide range of isotopic compositions observed among the amino acids, it is necessary to consider how isotopes are sorted between starting materials, intermediates, and products in reaction networks. Amino acid biosynthesis pathways contain both reversible and irreversible reactions, as well as branch points where one intermediate can react to form two different products. The following examples adapted from Hayes (2001) illustrate the isotopic consequences of these network structures.
Linear irreversible network.
In the following reaction network, A is irreversibly converted to an intermediate B, which irreversibly reacts to form C.
<chem>A ->[{\phi_{ab}}][{\delta_b, \alpha_{b/A}}] B->[{\phi_{bc}}][{\delta_c, \alpha_{c/B}}] C</chem>
The pools of A, B, and C have delta values defined as δA, δB, and δC respectively. These values are related to the ratio of heavy to light isotopes in each pool, and are the conventional means by which scientists express the isotopic composition of materials. Importantly, δB is "distinct" from δb listed on the diagram, as δb is the isotopic composition of B produced from A before it mixes with the pool of B. The isotopic compositions of the pools and products are related through fractionation factors that reflect the kinetic isotope effects (KIEs) associated with each reaction. For A → B,
formula_0
Rearranging for δb gives formula_1 in which formula_2. In many cases, formula_3 and formula_4. This is consistent with a normal kinetic isotope effect in which the product is slightly depleted in a heavy isotope relative to the reactant. If the isotope effect is small, as is typical for C and N, formula_5 and formula_6. From this, we can see that the product produced from A will be depleted by roughly formula_7‰ relative to the starting material.
At steady state, the mass flux formula_8 of material entering pool B must equal the flux formula_9 leaving pool B. In other words, the amounts of heavy and light isotopes entering and exiting the pool must be identical, so formula_10. Since there is no flux of material out of pool C, its delta value is also equal to formula_11. This analysis shows that the end product of a linear, irreversible reaction network has an isotopic composition determined solely by the composition of the starting material and the KIE of the first reaction in the network.
Network with branch points.
At branch points, two or more separate reactions compete for the same reactant. This affects the isotopic composition of all products downstream of the branch point. To illustrate this, consider the network below:
Here, the flux of material into pool B (φAB) is balanced by two fluxes, one into pool C and the other into pool D (φBC and φBD respectively). The mass balance for the heavier isotope in this system is represented by
formula_12
Define fC = φBC / (φBC + φBD) = φBC/φAB as the fractional yield of C. Dividing through by φAB gives
formula_13
Applying the approximation introduced in the previous section, δb ≈ δA + εb/A. Further, δc ≈ δB + εc/B and δd ≈ δB + εd/B. Substituting these relations into the mass balance and solving for δB gives
formula_14
The isotopic composition of pool B is clearly dependent on the fractional yield of C. Since there are no fluxes out of pools C or D, δC = δc and δD = δd. Thus, the isotopic compositions of these pools are offset from δB by εc/B and εd/B respectively.
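These steady-state relations are simple enough to evaluate directly. The following Python sketch implements the linearized expressions above; the ε values and fractional yield are illustrative assumptions:

```python
# Steady-state delta values for the branched network A -> B -> {C, D},
# using the linearized approximations derived above. The numerical eps
# values and fractional yield below are illustrative assumptions.
def branch_point_deltas(delta_A, eps_bA, eps_cB, eps_dB, f_C):
    delta_B = delta_A + eps_bA - f_C * eps_cB - (1.0 - f_C) * eps_dB
    delta_C = delta_B + eps_cB
    delta_D = delta_B + eps_dB
    return delta_B, delta_C, delta_D

dB, dC, dD = branch_point_deltas(
    delta_A=-20.0, eps_bA=-2.0, eps_cB=-10.0, eps_dB=-1.0, f_C=0.3
)
print(dB, dC, dD)  # per mil values of pools B, C, D

# Sanity check: with f_C = 1 the network is effectively linear, and
# delta_C reduces to delta_A + eps_bA, as derived in the previous section.
_, dC_linear, _ = branch_point_deltas(-20.0, -2.0, -10.0, -1.0, f_C=1.0)
assert abs(dC_linear - (-22.0)) < 1e-9
```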
Example.
There is great variation in the carbon isotope composition of amino acids within a single organism. In cyanobacteria, Macko et al. observed a ~30‰ range in δ13C values amongst the amino acids. Amino acids produced from the same precursors also had widely varying compositions. It is difficult to explain these trends because of limited data on the kinetic isotope effects associated with reactions that synthesize amino acid carbon skeletons. Nevertheless, some insights can be gained by applying the logic above to the reaction networks responsible for amino acid biosynthesis.
Consider the amino acids synthesized from pyruvate. Pyruvate is produced during glycolysis and can be decarboxylated by pyruvate dehydrogenase to generate acetyl groups. These acetyl groups enter the citric acid cycle as acetyl-CoA or can be used to synthesize lipids. There is a large kinetic isotope effect associated with this reaction, so the remaining pyruvate pool becomes enriched in 13C relative to the acetyl groups. This enriched pyruvate can be transaminated to produce alanine. In the experiments by Macko et al., alanine indeed had a δ13C value slightly higher than that of cyanobacterial photosynthate.
Valine is synthesized by the addition of a 13C depleted acetyl group to pyruvate. Consistent with this mechanism, Takano et al. found valine to be depleted in 13C relative to alanine in anaerobic methanotrophic archaea. However, in cyanobacteria, Macko et al. observed a higher δ13C value for valine than alanine. This could be due to the branch point at the intermediate α-ketoisovalerate, which can be transaminated to produce valine or further acetylated to generate leucine. There may be different isotope effects associated with the addition of an amino or acetyl group at position C-2 in α-ketoisovalerate. As discussed above, the isotopic consequences of this branch point would depend on the relative rates of leucine vs valine production.
One would also expect relative depletion of 13C in leucine because its synthesis requires the addition of another isotopically light acetyl group. In "Escherichia coli", the carboxyl carbon in leucine (derived from acetyl-CoA) has a δ13C value roughly 13‰ lower than that of the entire molecule. Curiously, the same depletion is not observed in photoautotrophs. Further, there is little consistency in the δ13C of most amino acids between cyanobacteria and eukaryotic photoautotrophs. These discrepancies demonstrate the limits of our understanding of the mechanisms that set amino acid isotopic compositions. Regardless, isotopic variations between different taxa have been used to great effect in ecology.
Applications.
Tracing nutrient sources in food webs.
Amino acids are a key nutrient in ecosystems. Some are essential to animals, meaning that these organisms cannot synthesize them "de novo". Instead, animals rely on their diet to acquire these molecules, creating strong interdependencies between animals and organisms with complete amino acid synthesis capabilities. In a study of bacteria and archaea at Antarctica's McMurdo Dry Valleys, the distribution of 13C between their amino acids reflected the biosynthetic pathways employed by these organisms. Autotrophs and heterotrophs had distinct isotopic fingerprints, as did organisms that employed alternatives to the citric acid cycle to ferment or produce acetate. Plants, fungi, and bacteria are also distinguishable by their amino acid carbon isotopes. The compositions of the essential amino acids, which have more complex biosynthetic pathways, are particularly informative. Lysine, isoleucine, leucine, threonine, and valine all had significantly different δ13C values between at least two of these groups. It is important to note that the fungi and bacteria in this study were grown on amino acid-free media to ensure that all the amino acids were synthesized by the organisms of interest. Bacteria and fungi can also scavenge amino acids from the environment, complicating the interpretation of data from field samples. Nevertheless, researchers have successfully used these differences to identify the sources of amino acids in food webs. Terrestrial and marine producers in a mangrove forest had different patterns of 13C enrichment in their amino acids. Fishes from a coral reef with diets containing different carbon sources also had variable amino acid δ13C values. Furthermore, one study observed distinct amino acid isotopic compositions for desert C3, C4, and CAM plants. These applications in diverse ecosystems highlight the versatility of compound-specific amino acid isotope analysis.
Placing organisms in food webs.
Human domination of the biosphere has threatened global biodiversity, with uncertain consequences for ecosystems that provide food, clean air and water, and other valuable ecosystem services. Understanding the impacts of biodiversity loss on ecosystem function requires knowledge of the interactions between organisms within both the same and different positions in a food web (i.e. trophic levels). Food webs can have very complex structures. In many ecosystems, organisms at trophic levels higher than herbivores consume a variable combination of prey and producers, exhibiting different forms of omnivory. The loss of predator species can have a cascading effect on all organisms at lower trophic levels. Networks with more omnivores that consume species at multiple trophic levels may be more resilient to these top-down effects. Together, these factors demonstrate that a food web's structure affects its sensitivity to reductions in biodiversity, highlighting the importance of food web studies. Amino acid isotopes are an important tool used in this field.
The abundance of 15N in some amino acids reflects an organism's position in a food web. This is due to the ways organisms metabolize different amino acids when they are consumed. Trophic amino acids (TrAAs) are first deaminated, meaning that the amino group is removed to produce an alpha-keto acid carbon skeleton. This reaction breaks a C-N bond, causing the amino acid to become more enriched in 15N due to a kinetic isotope effect. For instance, glutamate, a representative TrAA, has a δ15N value that increases by 8‰ with each trophic level. In contrast, the first reaction in the metabolism of source amino acids (SrcAAs) is not deamination. An example is phenylalanine, which is first converted to tyrosine in a reaction that breaks no C-N bonds. Thus, there is little variation in the δ15N values of SrcAAs between trophic levels. Their isotopic composition instead resembles that of the species at the base of the food web. Though these trends are confounded by some environmental effects, they have been used to infer an organism's trophic position.
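As a rough illustration of how these trends are used, the sketch below converts glutamate and phenylalanine δ15N values into a trophic-position estimate. The 8‰ per-level enrichment follows the text above, while the producer offset beta and the input values are assumed placeholders (published calibrations put the offset near 3–4‰ for aquatic systems), so this is a hedged sketch rather than a definitive calculation.

```python
def trophic_position(d15n_glu, d15n_phe, enrichment=8.0, beta=3.4):
    """Estimate trophic position from glutamate (trophic) and
    phenylalanine (source) delta-15N values, in permil.
    enrichment follows the ~8 permil per level stated above;
    beta is an assumed producer-offset calibration constant."""
    return 1.0 + (d15n_glu - d15n_phe - beta) / enrichment

# A consumer with glutamate at +22 permil and phenylalanine at +2 permil:
print(trophic_position(22.0, 2.0))   # ~3.1, roughly a secondary consumer
```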
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha_{b/A} \\equiv \\frac{\\delta_b+1}{\\delta_A+1}"
},
{
"math_id": 1,
"text": "\\delta_b = \\alpha_{b/A}\\delta_A + \\epsilon_{b/A}"
},
{
"math_id": 2,
"text": "\\epsilon_{b/A}\\equiv\\alpha_{b/A}-1"
},
{
"math_id": 3,
"text": "\\alpha_{b/A}<1"
},
{
"math_id": 4,
"text": "\\epsilon_{b/A}<0"
},
{
"math_id": 5,
"text": "\\alpha_{b/A}\\approx1"
},
{
"math_id": 6,
"text": "\\delta_b \\approx \\delta_A + \\epsilon_{b/A}"
},
{
"math_id": 7,
"text": "\\epsilon_{b/A}"
},
{
"math_id": 8,
"text": "\\phi_{ab}"
},
{
"math_id": 9,
"text": "\\phi_{bc}"
},
{
"math_id": 10,
"text": "\\delta_b = \\delta_c"
},
{
"math_id": 11,
"text": "\\delta_b"
},
{
"math_id": 12,
"text": "\\delta_b\\varphi_{AB}=\\delta_c\\varphi_{BC}+\\delta_d\\varphi_{BD}"
},
{
"math_id": 13,
"text": "\\delta_b = \n\\delta_cf_C + \n\\delta_d(1-f_C)"
},
{
"math_id": 14,
"text": "\\delta_B = \\delta_A + \\varepsilon_{b/A} - \\varepsilon_{d/B} + f_C\\left(\\varepsilon_{d/B}-\\varepsilon_{c/B}\\right)"
}
]
| https://en.wikipedia.org/wiki?curid=70659220 |
7066452 | Casson invariant | In 3-dimensional topology, a part of the mathematical field of geometric topology, the Casson invariant is an integer-valued invariant of oriented integral homology 3-spheres, introduced by Andrew Casson.
Kevin Walker (1992) found an extension to rational homology 3-spheres, called the Casson–Walker invariant, and Christine Lescop (1995) extended the invariant to all closed oriented 3-manifolds.
Definition.
A Casson invariant is a surjective map λ from oriented integral homology 3-spheres to Z satisfying the following properties:
formula_0
is independent of "n". Here formula_1 denotes formula_2 Dehn surgery on Σ by "K".
formula_3
The Casson invariant is unique (with respect to the above properties) up to an overall multiplicative constant.
formula_4.
formula_5
where formula_6 is the coefficient of formula_7 in the Alexander–Conway polynomial formula_8, and is congruent (mod 2) to the Arf invariant of "K".
For the Brieskorn homology sphere formula_9, the Casson invariant is given by
formula_10
where
formula_11
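Since this expression is fully explicit, it can be evaluated numerically. The Python sketch below implements d(a, b) and the closed formula directly from the two equations above; it assumes p, q, r are pairwise coprime, so the cotangent terms never hit a pole.

```python
from math import pi, tan

def d(a, b):
    """The cotangent sum d(a, b) from the formula above (gcd(a, b) = 1)."""
    return -sum(1.0 / (tan(pi * k / a) * tan(pi * b * k / a))
                for k in range(1, a)) / a

def casson_seifert(p, q, r):
    """Casson invariant of Sigma(p, q, r) for pairwise coprime p, q, r,
    evaluated from the closed formula above; the result is an integer
    up to floating-point error."""
    bracket = (1
               - (1 - (p*q*r)**2 + (p*q)**2 + (q*r)**2 + (p*r)**2)
               / (3 * p * q * r)
               - d(p, q * r) - d(q, p * r) - d(r, p * q))
    return -bracket / 8

# The Poincare homology sphere Sigma(2, 3, 5):
print(round(casson_seifert(2, 3, 5)))   # -1
```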
The Casson invariant as a count of representations.
Informally speaking, the Casson invariant counts half the number of conjugacy classes of representations of the fundamental group of a homology 3-sphere "M" into the group SU(2). This can be made precise as follows.
The representation space of a compact oriented 3-manifold "M" is defined as formula_12 where formula_13 denotes the space of irreducible SU(2) representations of formula_14. For a Heegaard splitting formula_15 of formula_16, the Casson invariant equals formula_17 times the algebraic intersection of formula_18 with formula_19.
Generalizations.
Rational homology 3-spheres.
Kevin Walker found an extension of the Casson invariant to rational homology 3-spheres. A Casson-Walker invariant is a surjective map λ"CW" from oriented rational homology 3-spheres to Q satisfying the following properties:
1. λ(S3) = 0.
2. For every 1-component Dehn surgery presentation ("K", μ) of an oriented rational homology sphere "M"′ in an oriented rational homology sphere "M":
formula_20
where:
where "x", "y" are generators of "H"1(∂"N"("K"), Z) such that formula_24, "v" = δ"y" for an integer δ and "s"("p", "q") is the Dedekind sum.
Note that for integer homology spheres, Walker's normalization is twice that of Casson's: formula_25.
Compact oriented 3-manifolds.
Christine Lescop defined an extension λ"CWL" of the Casson-Walker invariant to oriented compact 3-manifolds. It is uniquely characterized by the following properties:
formula_26.
formula_27
where Δ is the Alexander polynomial normalized to be symmetric and take a positive value at 1.
formula_28
where γ is the oriented curve given by the intersection of two generators formula_29 of formula_30 and formula_31 is the parallel curve to γ induced by the trivialization of the tubular neighbourhood of γ determined by formula_32.
formula_34.
The Casson–Walker–Lescop invariant has the following properties:
Under a reversal of orientation,
formula_39
That is, if the first Betti number of "M" is odd the Casson–Walker–Lescop invariant is unchanged, while if it is even it changes sign.
For a connected sum,
formula_40
SU(N).
In 1990, C. Taubes showed that the SU(2) Casson invariant of a homology 3-sphere "M" has a gauge theoretic interpretation as the Euler characteristic of formula_41, where formula_42 is the space of SU(2) connections on "M" and formula_43 is the group of gauge transformations. He regarded the Chern–Simons invariant as a formula_44-valued Morse function on formula_41 and used invariance under perturbations to define an invariant which he equated with the SU(2) Casson invariant.
H. Boden and C. Herald (1998) used a similar approach to define an SU(3) Casson invariant for integral homology 3-spheres. | [
{
"math_id": 0,
"text": "\\lambda\\left(\\Sigma+\\frac{1}{n+1}\\cdot K\\right)-\\lambda\\left(\\Sigma+\\frac{1}{n}\\cdot K\\right)"
},
{
"math_id": 1,
"text": "\\Sigma+\\frac{1}{m}\\cdot K"
},
{
"math_id": 2,
"text": "\\frac{1}{m}"
},
{
"math_id": 3,
"text": "\\lambda\\left(\\Sigma+\\frac{1}{m+1}\\cdot K+\\frac{1}{n+1}\\cdot L\\right) -\\lambda\\left(\\Sigma+\\frac{1}{m}\\cdot K+\\frac{1}{n+1}\\cdot L\\right)-\\lambda\\left(\\Sigma+\\frac{1}{m+1}\\cdot K+\\frac{1}{n}\\cdot L\\right) +\\lambda\\left(\\Sigma+\\frac{1}{m}\\cdot K+\\frac{1}{n}\\cdot L\\right)"
},
{
"math_id": 4,
"text": "\\lambda\\left(\\Sigma+\\frac{1}{n+1}\\cdot K\\right)-\\lambda\\left(\\Sigma+\\frac{1}{n}\\cdot K\\right)=\\pm 1"
},
{
"math_id": 5,
"text": "\\lambda \\left ( M + \\frac{1}{n+1}\\cdot K\\right ) - \\lambda \\left ( M + \\frac{1}{n}\\cdot K\\right ) = \\phi_1 (K), "
},
{
"math_id": 6,
"text": "\\phi_1 (K)"
},
{
"math_id": 7,
"text": "z^2"
},
{
"math_id": 8,
"text": "\\nabla_K(z)"
},
{
"math_id": 9,
"text": "\\Sigma(p,q,r)"
},
{
"math_id": 10,
"text": " \\lambda(\\Sigma(p,q,r))=-\\frac{1}{8}\\left[1-\\frac{1}{3pqr}\\left(1-p^2q^2r^2+p^2q^2+q^2r^2+p^2r^2\\right)\n-d(p,qr)-d(q,pr)-d(r,pq)\\right]"
},
{
"math_id": 11,
"text": "d(a,b)=-\\frac{1}{a}\\sum_{k=1}^{a-1}\\cot\\left(\\frac{\\pi k}{a}\\right)\\cot\\left(\\frac{\\pi bk}{a}\\right)"
},
{
"math_id": 12,
"text": "\\mathcal{R}(M)=R^{\\mathrm{irr}}(M)/SU(2)"
},
{
"math_id": 13,
"text": "R^{\\mathrm{irr}}(M)"
},
{
"math_id": 14,
"text": "\\pi_1 (M)"
},
{
"math_id": 15,
"text": "\\Sigma=M_1 \\cup_F M_2"
},
{
"math_id": 16,
"text": "M"
},
{
"math_id": 17,
"text": "\\frac{(-1)^g}{2}"
},
{
"math_id": 18,
"text": "\\mathcal{R}(M_1)"
},
{
"math_id": 19,
"text": "\\mathcal{R}(M_2)"
},
{
"math_id": 20,
"text": "\\lambda_{CW}(M^\\prime)=\\lambda_{CW}(M)+\\frac{\\langle m,\\mu\\rangle}{\\langle m,\\nu\\rangle\\langle \\mu,\\nu\\rangle}\\Delta_{W}^{\\prime\\prime}(M-K)(1)+\\tau_{W}(m,\\mu;\\nu)"
},
{
"math_id": 21,
"text": "\\langle\\cdot,\\cdot\\rangle"
},
{
"math_id": 22,
"text": "H_1(M-K)/\\text{Torsion}"
},
{
"math_id": 23,
"text": "\\tau_{W}(m,\\mu;\\nu)= -\\mathrm{sgn}\\langle y,m\\rangle s(\\langle x,m\\rangle,\\langle y,m\\rangle)+\\mathrm{sgn}\\langle y,\\mu\\rangle s(\\langle x,\\mu\\rangle,\\langle y,\\mu\\rangle)+\\frac{(\\delta^2-1)\\langle m,\\mu\\rangle}{12\\langle m,\\nu\\rangle\\langle \\mu,\\nu\\rangle}"
},
{
"math_id": 24,
"text": "\\langle x,y\\rangle=1"
},
{
"math_id": 25,
"text": " \\lambda_{CW}(M) = 2 \\lambda(M) "
},
{
"math_id": 26,
"text": "\\lambda_{CWL}(M)=\\tfrac{1}{2}\\left\\vert H_1(M)\\right\\vert\\lambda_{CW}(M)"
},
{
"math_id": 27,
"text": "\\lambda_{CWL}(M)=\\frac{\\Delta^{\\prime\\prime}_M(1)}{2}-\\frac{\\mathrm{torsion}(H_1(M,\\mathbb{Z}))}{12}"
},
{
"math_id": 28,
"text": "\\lambda_{CWL}(M)=\\left\\vert\\mathrm{torsion}(H_1(M))\\right\\vert\\mathrm{Link}_M (\\gamma,\\gamma^\\prime)"
},
{
"math_id": 29,
"text": "S_1,S_2"
},
{
"math_id": 30,
"text": "H_2(M;\\mathbb{Z})"
},
{
"math_id": 31,
"text": "\\gamma^\\prime"
},
{
"math_id": 32,
"text": "S_1, S_2"
},
{
"math_id": 33,
"text": "H_1(M;\\mathbb{Z})"
},
{
"math_id": 34,
"text": "\\lambda_{CWL}(M)=\\left\\vert\\mathrm{torsion}(H_1(M;\\mathbb{Z}))\\right\\vert\\left((a\\cup b\\cup c)([M])\\right)^2"
},
{
"math_id": 35,
"text": "\\lambda_{CWL}(M)=0"
},
{
"math_id": 36,
"text": "\\lambda_{CWL}(M)"
},
{
"math_id": 37,
"text": "b_1(M) = \\operatorname{rank} H_1(M;\\mathbb{Z})"
},
{
"math_id": 38,
"text": "\\overline{M}"
},
{
"math_id": 39,
"text": "\\lambda_{CWL}(\\overline{M}) = (-1)^{b_1(M)+1}\\lambda_{CWL}(M)."
},
{
"math_id": 40,
"text": "\\lambda_{CWL}(M_1\\#M_2)=\\left\\vert H_1(M_2)\\right\\vert\\lambda_{CWL}(M_1)+\\left\\vert H_1(M_1)\\right\\vert\\lambda_{CWL}(M_2)"
},
{
"math_id": 41,
"text": "\\mathcal{A}/\\mathcal{G}"
},
{
"math_id": 42,
"text": "\\mathcal{A}"
},
{
"math_id": 43,
"text": "\\mathcal{G}"
},
{
"math_id": 44,
"text": "S^1"
}
]
| https://en.wikipedia.org/wiki?curid=7066452 |
70671 | Stress–energy tensor | Tensor describing energy momentum density in spacetime
The stress–energy tensor, sometimes called the stress–energy–momentum tensor or the energy–momentum tensor, is a tensor physical quantity that describes the density and flux of energy and momentum in spacetime, generalizing the stress tensor of Newtonian physics. It is an attribute of matter, radiation, and non-gravitational force fields. This density and flux of energy and momentum are the sources of the gravitational field in the Einstein field equations of general relativity, just as mass density is the source of such a field in Newtonian gravity.
Definition.
The stress–energy tensor involves the use of superscripted variables (not exponents; see tensor index notation and Einstein summation notation). If Cartesian coordinates in SI units are used, then the components of the position four-vector x are given by: ("x"0, "x"1, "x"2, "x"3) = ("t", "x", "y", "z"), where t is time in seconds, and x, y, and z are distances in meters.
The stress–energy tensor is defined as the tensor "T""αβ" of order two that gives the flux of the α-th component of the momentum vector across a surface with constant "x""β" coordinate. In the theory of relativity, this momentum vector is taken as the four-momentum. In general relativity, the stress–energy tensor is symmetric,
formula_0
In some alternative theories like Einstein–Cartan theory, the stress–energy tensor may not be perfectly symmetric because of a nonzero spin tensor, which geometrically corresponds to a nonzero torsion tensor.
Components.
Because the stress–energy tensor is of order 2, its components can be displayed in 4 × 4 matrix form:
formula_1
where the indices μ and ν take on the values 0, 1, 2, 3.
In the following, k and ℓ range from 1 through 3: the time–time component "T"00 is the density of relativistic mass–energy; the components "T"0"k" give the flux of energy across a surface of constant "x""k"; the components "T""k"0 give the density of the "k"-th component of linear momentum; and the spatial components "T""k"ℓ represent the flux of the "k"-th component of momentum across a surface of constant "x"ℓ, i.e. mechanical stress.
In solid state physics and fluid mechanics, the stress tensor is defined to be the spatial components of the stress–energy tensor in the proper frame of reference. In other words, the stress–energy tensor in engineering "differs" from the relativistic stress–energy tensor by a momentum-convective term.
Covariant and mixed forms.
Most of this article works with the contravariant form, Tμν of the stress–energy tensor. However, it is often necessary to work with the covariant form,
formula_3
or the mixed form,
formula_4
or as a mixed tensor density
formula_5
This article uses the spacelike sign convention (−+++) for the metric signature.
Conservation law.
In special relativity.
The stress–energy tensor is the conserved Noether current associated with spacetime translations.
The divergence of the non-gravitational stress–energy is zero. In other words, non-gravitational energy and momentum are conserved,
formula_6
When gravity is negligible and using a Cartesian coordinate system for spacetime, this may be expressed in terms of partial derivatives as
formula_7
The integral form of the non-covariant formulation is
formula_8
where N is any compact four-dimensional region of spacetime; formula_9 is its boundary, a three-dimensional hypersurface; and formula_10 is an element of the boundary regarded as the outward pointing normal.
In flat spacetime and using Cartesian coordinates, if one combines this with the symmetry of the stress–energy tensor, one can show that angular momentum is also conserved:
formula_11
In general relativity.
When gravity is non-negligible or when using arbitrary coordinate systems, the divergence of the stress–energy still vanishes. But in this case, a coordinate-free definition of the divergence is used which incorporates the covariant derivative
formula_12
where formula_13 is the Christoffel symbol, which plays the role of the gravitational force field.
Consequently, if formula_14 is any Killing vector field, then the conservation law associated with the symmetry generated by the Killing vector field may be expressed as
formula_15
The integral form of this is
formula_16
In special relativity.
In special relativity, the stress–energy tensor contains information about the energy and momentum densities of a given system, in addition to the momentum and energy flux densities.
Given a Lagrangian density formula_17 that is a function of a set of fields formula_18 and their derivatives, but explicitly not of any of the spacetime coordinates, we can construct the canonical stress–energy tensor by looking at the total derivative with respect to one of the generalized coordinates of the system. So, with our condition
formula_19
By using the chain rule, we then have
formula_20
Written in useful shorthand,
formula_21
Then, we can use the Euler–Lagrange equation:
formula_22
And then use the fact that partial derivatives commute so that we now have
formula_23
We can recognize the right hand side as a product rule. Writing it as the derivative of a product of functions tells us that
formula_24
Now, in flat space, one can write formula_25. Doing this and moving it to the other side of the equation tells us that
formula_26
And upon regrouping terms,
formula_27
This is to say that the divergence of the tensor in the brackets is 0. Indeed, with this, we define the stress–energy tensor:
formula_28
By construction it has the property that
formula_29
Note that this divergenceless property of this tensor is equivalent to four continuity equations. That is, fields have at least four sets of quantities that obey the continuity equation. As an example, it can be seen that formula_30 is the energy density of the system and that it is thus possible to obtain the Hamiltonian density from the stress–energy tensor.
Indeed, since this is the case, observing that formula_31, we then have
formula_32
We can then conclude that the terms of formula_33 represent the energy flux density of the system.
Trace.
Note that the trace of the stress–energy tensor is defined to be formula_34, so
formula_35
Since formula_36,
formula_37
In general relativity.
In general relativity, the symmetric stress–energy tensor acts as the source of spacetime curvature, and is the current density associated with gauge transformations of gravity which are general curvilinear coordinate transformations. (If there is torsion, then the tensor is no longer symmetric. This corresponds to the case with a nonzero spin tensor in Einstein–Cartan gravity theory.)
In general relativity, the partial derivatives used in special relativity are replaced by covariant derivatives. What this means is that the continuity equation no longer implies that the non-gravitational energy and momentum expressed by the tensor are absolutely conserved, i.e. the gravitational field can do work on matter and vice versa. In the classical limit of Newtonian gravity, this has a simple interpretation: kinetic energy is being exchanged with gravitational potential energy, which is not included in the tensor, and momentum is being transferred through the field to other bodies. In general relativity the Landau–Lifshitz pseudotensor is a unique way to define the "gravitational" field energy and momentum densities. Any such stress–energy pseudotensor can be made to vanish locally by a coordinate transformation.
In curved spacetime, the spacelike integral now depends on the spacelike slice, in general. There is in fact no way to define a global energy–momentum vector in a general curved spacetime.
Einstein field equations.
In general relativity, the stress–energy tensor is studied in the context of the Einstein field equations which are often written as
formula_38
where formula_39 is the Ricci tensor, formula_40 is the Ricci scalar (the tensor contraction of the Ricci tensor), formula_41 is the metric tensor, Λ is the cosmological constant (negligible at the scale of a galaxy or smaller), and formula_42 is the Einstein gravitational constant.
Stress–energy in special situations.
Isolated particle.
In special relativity, the stress–energy of a non-interacting particle with rest mass m and trajectory formula_43 is:
formula_44
where formula_45 is the velocity vector (which should not be confused with four-velocity, since it is missing a formula_46)
formula_47
formula_48 is the Dirac delta function and formula_49 is the energy of the particle.
Written in the language of classical physics, the stress–energy tensor would be (relativistic mass, momentum, the dyadic product of momentum and velocity)
formula_50.
Stress–energy of a fluid in equilibrium.
For a perfect fluid in thermodynamic equilibrium, the stress–energy tensor takes on a particularly simple form
formula_51
where formula_2 is the mass–energy density (kilograms per cubic meter), formula_52 is the hydrostatic pressure (pascals), formula_53 is the fluid's four-velocity, and formula_54 is the matrix inverse of the metric tensor. Therefore, the trace is given by
formula_55
The four-velocity satisfies
formula_56
In an inertial frame of reference comoving with the fluid, better known as the fluid's proper frame of reference, the four-velocity is
formula_57
the matrix inverse of the metric tensor is simply
formula_58
and the stress–energy tensor is a diagonal matrix
formula_59
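As a quick numerical check of the comoving-frame result, the following NumPy sketch builds the perfect-fluid tensor from the expression above and recovers the diagonal form; the values of ρ, "p", and "c" are arbitrary placeholders.

```python
import numpy as np

def perfect_fluid_T(rho, p, c=1.0):
    """T^{ab} = (rho + p/c^2) u^a u^b + p g^{ab} in the comoving frame,
    with u = (1, 0, 0, 0) and inverse metric diag(-1/c^2, 1, 1, 1)."""
    u = np.array([1.0, 0.0, 0.0, 0.0])
    g_inv = np.diag([-1.0 / c**2, 1.0, 1.0, 1.0])
    return (rho + p / c**2) * np.outer(u, u) + p * g_inv

# Placeholder density and pressure values:
print(perfect_fluid_T(rho=2.0, p=0.5))   # diag(2.0, 0.5, 0.5, 0.5)
```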
Electromagnetic stress–energy tensor.
The Hilbert stress–energy tensor of a source-free electromagnetic field is
formula_60
where formula_61 is the electromagnetic field tensor.
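The expression above can be evaluated mechanically for any antisymmetric field tensor. The NumPy sketch below does so with einsum, in units where "c" = 1 and with the (−+++) Minkowski metric used in this article; the random test tensor and the value of μ0 are placeholders, and the printed trace illustrates the fact that the electromagnetic stress–energy tensor is traceless.

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-+++)
g_inv = g                            # equal to its own inverse here
mu0 = 1.0                            # placeholder value, units with c = 1

def em_stress_energy(F_upper):
    """T^{mn} = (1/mu0) (F^{ma} g_{ab} F^{nb} - (1/4) g^{mn} F_{cd} F^{cd})."""
    F_lower = g @ F_upper @ g                         # lower both indices
    invariant = np.einsum('ab,ab->', F_lower, F_upper)
    return (np.einsum('ma,ab,nb->mn', F_upper, g, F_upper)
            - 0.25 * g_inv * invariant) / mu0

rng = np.random.default_rng(0)
A = rng.random((4, 4))
F = A - A.T                    # any antisymmetric matrix is a valid test
T = em_stress_energy(F)
print(np.trace(g @ T))         # ~0: the trace T^m_m vanishes
```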
Scalar field.
The stress–energy tensor for a complex scalar field formula_62 that satisfies the Klein–Gordon equation is
formula_63
and when the metric is flat (Minkowski in Cartesian coordinates) its components work out to be:
formula_64
Variant definitions of stress–energy.
There are a number of inequivalent definitions of non-gravitational stress–energy:
Hilbert stress–energy tensor.
The Hilbert stress–energy tensor is defined as the functional derivative
formula_65
where formula_66 is the nongravitational part of the action, formula_67 is the nongravitational part of the Lagrangian density, and the Euler–Lagrange equation has been used. This is symmetric and gauge-invariant. See Einstein–Hilbert action for more information.
Canonical stress–energy tensor.
Noether's theorem implies that there is a conserved current associated with translations through space and time; for details see the section above on the stress–energy tensor in special relativity. This is called the canonical stress–energy tensor. Generally, this is not symmetric and if we have some gauge theory, it may not be gauge invariant because space-dependent gauge transformations do not commute with spatial translations.
In general relativity, the translations are with respect to the coordinate system and as such, do not transform covariantly. See the section below on the gravitational stress–energy pseudotensor.
Belinfante–Rosenfeld stress–energy tensor.
In the presence of spin or other intrinsic angular momentum, the canonical Noether stress–energy tensor fails to be symmetric. The Belinfante–Rosenfeld stress–energy tensor is constructed from the canonical stress–energy tensor and the spin current in such a way as to be symmetric and still conserved. In general relativity, this modified tensor agrees with the Hilbert stress–energy tensor.
Gravitational stress–energy.
By the equivalence principle gravitational stress–energy will always vanish locally at any chosen point in some chosen frame, therefore gravitational stress–energy cannot be expressed as a non-zero tensor; instead we have to use a pseudotensor.
In general relativity, there are many possible distinct definitions of the gravitational stress–energy–momentum pseudotensor. These include the Einstein pseudotensor and the Landau–Lifshitz pseudotensor. The Landau–Lifshitz pseudotensor can be reduced to zero at any event in spacetime by choosing an appropriate coordinate system.
See also.
<templatestyles src="Div col/styles.css"/> | [
{
"math_id": 0,
"text": "T^{\\alpha \\beta} = T^{\\beta \\alpha}."
},
{
"math_id": 1,
"text": "\nT^{\\mu\\nu} = \\begin{pmatrix} T^{00} & T^{01} & T^{02} & T^{03} \\\\ T^{10} & T^{11} & T^{12} & T^{13} \\\\ T^{20} & T^{21} & T^{22} & T^{23} \\\\ T^{30} & T^{31} & T^{32} & T^{33} \\end{pmatrix}\\,,"
},
{
"math_id": 2,
"text": "\\rho"
},
{
"math_id": 3,
"text": "T_{\\mu \\nu} = T^{\\alpha \\beta} g_{\\alpha \\mu} g_{\\beta \\nu},"
},
{
"math_id": 4,
"text": "T^\\mu{}_\\nu = T^{\\mu \\alpha} g_{\\alpha \\nu},"
},
{
"math_id": 5,
"text": "\\mathfrak{T}^\\mu{}_\\nu = T^\\mu{}_\\nu \\sqrt{-g} \\,."
},
{
"math_id": 6,
"text": "0 = T^{\\mu \\nu}{}_{;\\nu} = \\nabla_\\nu T^{\\mu \\nu}{}. \\!"
},
{
"math_id": 7,
"text": "0 = T^{\\mu \\nu}{}_{,\\nu} = \\partial_{\\nu} T^{\\mu \\nu}. \\!"
},
{
"math_id": 8,
"text": "0 = \\int_{\\partial N} T^{\\mu \\nu} \\mathrm{d}^3 s_{\\nu} \\!"
},
{
"math_id": 9,
"text": "\\partial N"
},
{
"math_id": 10,
"text": "\\mathrm{d}^3 s_{\\nu}"
},
{
"math_id": 11,
"text": "0 = (x^{\\alpha} T^{\\mu \\nu} - x^{\\mu} T^{\\alpha \\nu})_{,\\nu} . \\!"
},
{
"math_id": 12,
"text": "0 = \\operatorname{div} T = T^{\\mu \\nu}{}_{;\\nu} = \\nabla_{\\nu} T^{\\mu \\nu} = T^{\\mu \\nu}{}_{,\\nu} + \\Gamma^{\\mu}{}_{\\sigma \\nu}T^{\\sigma \\nu} + \\Gamma^{\\nu}{}_{\\sigma \\nu} T^{\\mu \\sigma}"
},
{
"math_id": 13,
"text": "\\Gamma^{\\mu}{}_{\\sigma \\nu} "
},
{
"math_id": 14,
"text": "\\xi^{\\mu}"
},
{
"math_id": 15,
"text": "0 = \\nabla_\\nu \\left(\\xi^{\\mu} T_{\\mu}^{\\nu}\\right) = \\frac{1}{\\sqrt{-g}} \\partial_\\nu \\left(\\sqrt{-g}\\ \\xi^{\\mu} T_{\\mu}^{\\nu}\\right) "
},
{
"math_id": 16,
"text": "0 = \\int_{\\partial N} \\sqrt{-g} \\ \\xi^{\\mu} T_{\\mu}^{\\nu} \\ \\mathrm{d}^3 s_{\\nu} = \\int_{\\partial N} \\xi^{\\mu} \\mathfrak{T}_{\\mu}^{\\nu} \\ \\mathrm{d}^3 s_{\\nu}"
},
{
"math_id": 17,
"text": "\\mathcal{L}"
},
{
"math_id": 18,
"text": "\\phi_{\\alpha}"
},
{
"math_id": 19,
"text": "\\frac{\\partial \\mathcal{L}}{\\partial x^{\\nu}} = 0"
},
{
"math_id": 20,
"text": "\\frac{d \\mathcal{L}}{dx^{\\nu}} = d_{\\nu}\\mathcal{L} = \\frac{\\partial \\mathcal{L}}{\\partial(\\partial_{\\mu}\\phi_{\\alpha})}\\frac{\\partial(\\partial_{\\mu}\\phi_{\\alpha})}{\\partial x^{\\nu}} + \\frac{\\partial \\mathcal{L}}{\\partial \\phi_{\\alpha}}\\frac{\\partial \\phi_{\\alpha}}{\\partial x^{\\nu}}"
},
{
"math_id": 21,
"text": "d_{\\nu}\\mathcal{L} = \\frac{\\partial \\mathcal{L}}{\\partial(\\partial_{\\mu}\\phi_{\\alpha})}\\partial_{\\nu}\\partial_{\\mu}\\phi_{\\alpha} + \\frac{\\partial \\mathcal{L}}{\\partial \\phi_{\\alpha}}\\partial_{\\nu}\\phi_{\\alpha}"
},
{
"math_id": 22,
"text": "\\partial_{\\mu}\\left(\\frac{\\partial \\mathcal{L}}{\\partial(\\partial_{\\mu}\\phi_{\\alpha})}\\right) = \\frac{\\partial\\mathcal{L}}{\\partial \\phi_{\\alpha}}"
},
{
"math_id": 23,
"text": "d_{\\nu}\\mathcal{L} = \\frac{\\partial \\mathcal{L}}{\\partial(\\partial_{\\mu}\\phi_{\\alpha})}\\partial_{\\mu}\\partial_{\\nu}\\phi_{\\alpha} + \\partial_{\\mu}\\left(\\frac{\\partial \\mathcal{L}}{\\partial(\\partial_{\\mu}\\phi_{\\alpha})}\\right)\\partial_{\\nu}\\phi_{\\alpha}"
},
{
"math_id": 24,
"text": "d_{\\nu}\\mathcal{L} = \\partial_{\\mu}\\left[\\frac{\\partial \\mathcal{L}}{\\partial(\\partial_{\\mu}\\phi_{\\alpha})}\\partial_{\\nu}\\phi_{\\alpha}\\right]"
},
{
"math_id": 25,
"text": "d_{\\nu}\\mathcal{L} = \\partial_{\\mu}[\\delta^{\\mu}_{\\nu}\\mathcal{L}]"
},
{
"math_id": 26,
"text": "\\partial_{\\mu}\\left[\\frac{\\partial \\mathcal{L}}{\\partial(\\partial_{\\mu}\\phi_{\\alpha})}\\partial_{\\nu}\\phi_{\\alpha}\\right] - \\partial_{\\mu}\\left(\\delta^{\\mu}_{\\nu}\\mathcal{L}\\right) = 0"
},
{
"math_id": 27,
"text": "\\partial_{\\mu}\\left[\\frac{\\partial \\mathcal{L}}{\\partial(\\partial_{\\mu}\\phi_{\\alpha})}\\partial_{\\nu}\\phi_{\\alpha} - \\delta^{\\mu}_{\\nu}\\mathcal{L}\\right] = 0"
},
{
"math_id": 28,
"text": "T^{\\mu}_{\\nu} \\equiv \\frac{\\partial \\mathcal{L}}{\\partial(\\partial_{\\mu}\\phi_{\\alpha})}\\partial_{\\nu}\\phi_{\\alpha} - \\delta^{\\mu}_{\\nu}\\mathcal{L}"
},
{
"math_id": 29,
"text": "\\partial_{\\mu}T^{\\mu}_{\\nu} = 0"
},
{
"math_id": 30,
"text": "T^{0}_0"
},
{
"math_id": 31,
"text": "\\partial_{\\mu}T^{\\mu}_{0} = 0"
},
{
"math_id": 32,
"text": " \\frac{\\partial \\mathcal{H}}{\\partial t} + \\nabla\\cdot\\left(\\frac{\\partial \\mathcal{L}}{\\partial\\nabla \\phi_{\\alpha}}\\dot{\\phi}_{\\alpha}\\right) = 0"
},
{
"math_id": 33,
"text": "\\frac{\\partial \\mathcal{L}}{\\partial\\nabla \\phi_{\\alpha}}\\dot{\\phi}_{\\alpha}"
},
{
"math_id": 34,
"text": "T^{\\mu}_{\\mu}"
},
{
"math_id": 35,
"text": "T^{\\mu}_{\\mu} = \\frac{\\partial \\mathcal{L}}{\\partial(\\partial_{\\mu}\\phi_{\\alpha})}\\partial_{\\mu}\\phi_{\\alpha}-\\delta^{\\mu}_{\\mu}\\mathcal{L} ."
},
{
"math_id": 36,
"text": "\\delta^{\\mu}_{\\mu} = 4"
},
{
"math_id": 37,
"text": "T^{\\mu}_{\\mu} = \\frac{\\partial \\mathcal{L}}{\\partial(\\partial_{\\mu}\\phi_{\\alpha})}\\partial_{\\mu}\\phi_{\\alpha}-4\\mathcal{L} ."
},
{
"math_id": 38,
"text": "R_{\\mu \\nu} - \\tfrac{1}{2} R\\,g_{\\mu \\nu} + \\Lambda g_{\\mu \\nu} = \\kappa T_{\\mu \\nu},"
},
{
"math_id": 39,
"text": "R_{\\mu \\nu}"
},
{
"math_id": 40,
"text": "R"
},
{
"math_id": 41,
"text": "g_{\\mu \\nu}\\,"
},
{
"math_id": 42,
"text": "\\kappa = 8\\pi G/c^4"
},
{
"math_id": 43,
"text": " \\mathbf{x}_\\text{p}(t)"
},
{
"math_id": 44,
"text": "T^{\\alpha \\beta}(\\mathbf{x}, t) = \\frac{m \\, v^{\\alpha}(t) v^{\\beta}(t)}{\\sqrt{1 - (v/c)^2}}\\;\\, \\delta\\left(\\mathbf{x} - \\mathbf{x}_\\text{p}(t)\\right) = \\frac{E}{c^2}\\; v^{\\alpha}(t) v^{\\beta}(t)\\;\\, \\delta(\\mathbf{x} - \\mathbf{x}_\\text{p}(t)) "
},
{
"math_id": 45,
"text": "v^{\\alpha}"
},
{
"math_id": 46,
"text": "\\gamma"
},
{
"math_id": 47,
"text": "v^{\\alpha} = \\left(1, \\frac{d \\mathbf{x}_\\text{p}}{dt}(t) \\right) \\,,"
},
{
"math_id": 48,
"text": "\\delta"
},
{
"math_id": 49,
"text": " E = \\sqrt{p^2 c^2 + m^2 c^4} "
},
{
"math_id": 50,
"text": "\\left( \\frac{E}{c^2} , \\, \\mathbf{p} , \\, \\mathbf{p} \\, \\mathbf{v} \\right)"
},
{
"math_id": 51,
"text": "T^{\\alpha \\beta} \\, = \\left(\\rho + {p \\over c^2}\\right)u^{\\alpha}u^{\\beta} + p g^{\\alpha \\beta}"
},
{
"math_id": 52,
"text": "p"
},
{
"math_id": 53,
"text": "u^{\\alpha}"
},
{
"math_id": 54,
"text": "g^{\\alpha \\beta}"
},
{
"math_id": 55,
"text": "T^{\\alpha}_{\\,\\alpha} = g_{\\alpha\\beta} T^{\\beta \\alpha} = 3p - \\rho c^2 \\,."
},
{
"math_id": 56,
"text": "u^{\\alpha} u^{\\beta} g_{\\alpha \\beta} = - c^2 \\,."
},
{
"math_id": 57,
"text": "u^{\\alpha} = (1, 0, 0, 0) \\,,"
},
{
"math_id": 58,
"text": "\ng^{\\alpha \\beta} \\, = \\left( \\begin{matrix}\n - \\frac{1}{c^2} & 0 & 0 & 0 \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 1 & 0 \\\\\n 0 & 0 & 0 & 1 \n \\end{matrix} \\right)\n\\, "
},
{
"math_id": 59,
"text": "\nT^{\\alpha \\beta} = \\left( \\begin{matrix}\n \\rho & 0 & 0 & 0 \\\\\n 0 & p & 0 & 0 \\\\\n 0 & 0 & p & 0 \\\\\n 0 & 0 & 0 & p \n \\end{matrix} \\right).\n"
},
{
"math_id": 60,
"text": " T^{\\mu \\nu} = \\frac{1}{\\mu_0} \\left( F^{\\mu \\alpha} g_{\\alpha \\beta} F^{\\nu \\beta} - \\frac{1}{4} g^{\\mu \\nu} F_{\\delta \\gamma} F^{\\delta \\gamma} \\right) "
},
{
"math_id": 61,
"text": " F_{\\mu \\nu} "
},
{
"math_id": 62,
"text": "\\phi "
},
{
"math_id": 63,
"text": "T^{\\mu\\nu} = \\frac{\\hbar^2}{m} \\left(g^{\\mu \\alpha} g^{\\nu \\beta} + g^{\\mu \\beta} g^{\\nu \\alpha} - g^{\\mu\\nu} g^{\\alpha \\beta}\\right) \\partial_{\\alpha}\\bar\\phi \\partial_{\\beta}\\phi - g^{\\mu\\nu} m c^2 \\bar\\phi \\phi ,"
},
{
"math_id": 64,
"text": "\\begin{align}\n T^{00} & = \\frac{\\hbar^2}{m c^4} \\left(\\partial_0 \\bar{\\phi} \\partial_0 \\phi + c^2 \\partial_k \\bar{\\phi} \\partial_k \\phi \\right) + m \\bar{\\phi} \\phi, \\\\\n T^{0i} = T^{i0} & = - \\frac{\\hbar^2}{m c^2} \\left(\\partial_0 \\bar{\\phi} \\partial_i \\phi + \\partial_i \\bar{\\phi} \\partial_0 \\phi \\right),\\ \\mathrm{and} \\\\\n T^{ij} & = \\frac{\\hbar^2}{m} \\left(\\partial_i \\bar{\\phi} \\partial_j \\phi + \\partial_j \\bar{\\phi} \\partial_i \\phi \\right) - \\delta_{ij} \\left(\\frac{\\hbar^2}{m} \\eta^{\\alpha\\beta} \\partial_\\alpha \\bar{\\phi} \\partial_\\beta \\phi + m c^2 \\bar{\\phi} \\phi\\right). \n\\end{align}"
},
{
"math_id": 65,
"text": "T_{\\mu\\nu} =\n \\frac{-2}{\\sqrt{-g}}\\frac{\\delta S_{\\mathrm{matter}}}{\\delta g^{\\mu\\nu}} =\n \\frac{-2}{\\sqrt{-g}}\\frac{\\partial\\left(\\sqrt{-g}\\mathcal{L}_{\\mathrm{matter}}\\right)}{\\partial g^{\\mu\\nu}} =\n -2 \\frac{\\partial \\mathcal{L}_\\mathrm{matter}}{\\partial g^{\\mu\\nu}} + g_{\\mu\\nu} \\mathcal{L}_\\mathrm{matter},\n"
},
{
"math_id": 66,
"text": "S_{\\mathrm{matter}}"
},
{
"math_id": 67,
"text": "\\mathcal{L}_{\\mathrm{matter}}"
}
]
| https://en.wikipedia.org/wiki?curid=70671 |
70672352 | Titanotaria | Genus of fossil mammals
<templatestyles src="Template:Taxobox/core/styles.css" />
Titanotaria is a genus of late, basal walrus from the Miocene of Orange County, California. Unlike much later odobenids, it lacked tusks. "Titanotaria" is known from an almost complete specimen which serves as the holotype for the only recognized species, Titanotaria orangensis; it is the best-preserved fossil walrus currently known.
History and naming.
Although the holotype specimen (OCPC 11141) of "Titanotaria" had been discovered in 1993 and represents one of the most complete fossil walruses known, little attention was given to the material for over 20 years. The first mention of the fossils in peer-reviewed literature came in 2017 with Barboza and colleagues publishing a faunal list of the Oso Member of the Capistrano Formation, where "Titanotaria" had been found. Specifically, the fossilized bones were collected from the town of Lake Forest, Orange County, California, during the construction of the Saddleback Church. A full description followed a year after its mention by Barboza and was led by Isaac Magallanes, who published a detailed examination of the fossils alongside a phylogenetic analysis. According to paleontologist Robert Boessenecker, the remains were unofficially known by the name "Waldo".
The name "Titanotaria" honors the California State University, Fullerton, widely known as the Titans. This was meant to recognize the collaboration between the university and Orange County, which lead to the creation of the John D. Cooper Archaeological and Paleontological Center. The second part of the genus name, otaria, is a reference to the genus "Otaria" and a commonly used suffix for fossil pinnipeds. The species name means "coming from Orange County".
Description.
The holotype skull of "Titanotaria" belongs to a male individual with an asymmetric skull, likely caused by a healed pathology. The rostrum of "Titanotaria" is elongated and widens at around the root of the first canine tooth. The premaxillae are triangular in outline and elevated slightly above the tooth row. The front-most tip of the premaxilla is marked by a knob-shaped prenarial process, which is immediately followed by a depression located above the incisors and canines that likely serves as an origin for the lateral nasalis muscle. The nasal bones are long (60% of the rostrum length) with parallel edges and a broad, V-shaped suture with the frontal bone. The zygomatic arch is broad and possesses an oval prominence on its ventral surface. The point of articulation between the jugal and the maxilla is largely fused and a small, triangular postorbital process is present on the jugal element of the zygomatic arch. The frontal bone is widest towards the front of the skull and bears two temporal crests, which fuse to form the sagittal crest. The crest is prominent and long, with a sinuous profile. This differs from the more sloping sagittal crests of other odobenids like "Imagotaria" and "Neotherium". Towards the back of the skull the sagittal crest meets the nuchal crest, which is wide and obscures the occipital region in top view.
The tooth formula of "Titanotaria" is formula_0. In the upper jaw the incisors are long and slender with an oval cross-section and a single root. The canines are robust, conical and larger than the incisors. While the first premolar likely only possesses a single root based on the morphology of the alveolus, the second is bi-lobed with a bulbous tooth crown. The following teeth also show two tooth roots and there is a decrease in size between the two molars. No incisors are preserved in the lower jaw and their alveoli are obscured by sediment. The mandibular tooth row is very short, only taking up 40% of the mandible. The lower canines are almost as large as their upper counterparts and, like them, are robust and conical with a slight curve. As in the upper jaw, the teeth starting with the second premolar of the mandible are double rooted with bulbous crowns. The last lower molar, however, appears to have been single rooted based on the anatomy of its tooth socket.
"Titanotaria" preserves most of its postcranial material; however, only elements relevant to phylogenetic analysis were described. The holotype is only missing few ribs, parts of the right forelimb, most of the pelvis and some of the distal limb elements. It reached a length of and weighed around .
Phylogeny.
Phylogenetic analysis found that "Titanotaria" was a basal odobenid, nesting outside of the clade Neodobenia (named within the same publication as the genus). The same placement was later recovered by Biewer and colleagues when they described "Osodobenus".
Paleobiology.
"Titanotaria" is known from the Oso Member of the Capistrano Formation, which preserves a rich assemblage of fossil walrus species such as "Gomphotaria pugnax", "Pontolis magnus", "Pontolis kohnoi" and "Osodobenus eodon". The eared seal "Thalassoleon" was also found in this formation, alongside giant sea cows, cetotheriid whales, the bizarre "Desmostylus", various sharks and the remains of indetermined crocodiles.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{3.1.4.2.}{?.1.4.2.}"
}
]
| https://en.wikipedia.org/wiki?curid=70672352 |
70683630 | Convection enhanced delivery | Drug delivery technique
Convection-enhanced delivery (CED) is a method of drug delivery in which the drug is delivered into the brain using bulk flow rather than conventional diffusion. This is done by inserting catheters into the target region of the brain and applying pressure to drive the therapeutic into the surrounding tissue. CED has been used to deliver drugs to the central nervous system (CNS) for diseases such as cancer, epilepsy, and Parkinson's disease, because it can bypass the blood–brain barrier (BBB) and target specific regions for treatment, but current techniques using CED have failed to progress past clinical trials due to a variety of physical limitations associated with CED itself.
Background.
The blood–brain barrier (BBB) has historically proved to be a very difficult obstacle to overcome when delivering a drug to the brain. To reach therapeutic levels past the BBB, drugs had to either be lipophilic molecules with a molecular weight below 600 Da or be transported across the BBB by some sort of cellular transport system. In the 1990s, a research group led by Edward Oldfield at the National Institutes of Health proposed utilizing CED to deliver drugs and molecules too large to cross the BBB. CED is also useful for delivering drugs that have poor diffusive properties, and it allows for targeted placement of the catheter used to deliver the drugs. The vast majority of current clinical studies using CED treat brain tumors that are inoperable or have shown little response to conventional therapies.
Mechanism of action.
CED is a method of drug delivery in which a pressure gradient is created at the tip of a catheter to use bulk flow rather than diffusion to deliver drugs into the brain. Diffusion is limited by the diffusivity of the tissue and can be expressed using Fick's law, formula_0, where J is the diffusive flux, D is the diffusivity of the targeted tissue, and formula_1 is the concentration gradient of the drug. Diffusion can only be modified through the concentration gradient of a drug, meaning that in order to deliver drug to large parts of a tissue, high concentrations are needed to promote diffusion, which can result in toxicity. In comparison, bulk flow is governed by Darcy's law, defined as formula_2, where v is velocity, K is the hydraulic conductivity of the tissue, and formula_3 is the pressure gradient. Using bulk flow to deliver a drug means the drug can be carried further into a target tissue with higher pressure, resulting in lower concentrations and less risk of drug toxicity.
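To make the contrast between the two transport laws concrete, the one-dimensional sketch below implements both as written above; the numerical values of D, K, and the gradients are illustrative placeholders rather than tissue measurements.

```python
def fick_flux(D, dC_dx):
    """Diffusive flux J = -D * dC/dx (Fick's law)."""
    return -D * dC_dx

def darcy_velocity(K, dp_dx):
    """Bulk-flow velocity v = -K * dp/dx (Darcy's law)."""
    return -K * dp_dx

# Diffusive delivery can only be pushed deeper by steepening the
# concentration gradient, whereas convective delivery is pushed deeper
# by raising the pressure gradient at the catheter tip:
print(fick_flux(D=1e-10, dC_dx=-1e3))       # flux driven by concentration
print(darcy_velocity(K=1e-9, dp_dx=-1e5))   # velocity driven by pressure
print(darcy_velocity(K=1e-9, dp_dx=-2e5))   # doubling pressure doubles it
```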
To perform a CED treatment, catheters are inserted through burr holes drilled into the skull. Treatments can use multiple catheters for a single delivery if that is required. The catheters are inserted into the interstitial space of the brain using image guidance. Once the catheters are placed at the desired site, they are connected to an infusion pump, which is used to create the pressure gradient for bulk flow. Infusion rates are typically set to 0.1–10 μL/min and the drug is delivered into the interstitial space, displacing any extracellular fluid. CED can result in delivery of drug centimeters deep into the tissue from the delivery site, rather than the millimeters deep that would result from delivery of drugs via diffusion.
Clinical trials evaluating CED.
Clinical trials exploring the use of CED have to date not resulted in any FDA-approved treatments. These trials have mostly focused on using CED to treat glioblastoma, and only two studies have progressed to phase 3. The first study began in 2004 and compared the efficacy of cintredekin besudotox delivered using CED with Gliadel for the treatment of glioblastoma multiforme. Results from this study showed similar survival rates between the two groups, but patients who were given CED treatment had higher rates of pulmonary emboli. The second phase 3 clinical trial began in 2008 and delivered trabedersen via CED to treat anaplastic astrocytoma and glioblastoma. This trial was terminated early due to the inability to recruit enough participants, and the efficacy of CED in this treatment was not established. These two studies have been the only major clinical trials comparing the efficacy of CED treatment to current clinical standards of care.
While CED clinical trials have primarily explored treating brain tumors, other conditions involving the brain have also been investigated. To date there have been two registered clinical trials, both in phase 1, which aim to use CED to treat Parkinson's disease. The first trial, which was registered in 2009, was withdrawn in 2017 for unknown reasons. The other clinical trial, which reached completion in 2022, delivered an adeno-associated virus (AAV2) encoding glial cell line-derived neurotrophic factor (GDNF) directly into the brain using CED. GDNF is known to protect neurons which produce dopamine. Parkinson's disease has been shown to decrease the amount of dopamine which can be produced in the brain, so researchers hope to be able to decrease the side effects of Parkinson's disease by protecting dopamine-producing neurons. While results from this study have not been published as of April 2022, the pre-clinical research done in a Parkinson's disease model rhesus monkey showed that CED treatment with AAV2-GDNF resulted in neurological improvement without significant side effects.
Non-clinical research.
Even though current clinical trials have not yet resulted in an FDA-approved treatment, there is still plenty of research being done on delivering different types of therapeutics and treating different diseases. One of these areas of research is the visualization of the region of treatment. One research group was able to visualize the regions of the brain that received drug from bulk flow by mixing the desired drug with Gd-DTPA, a common MRI contrast agent. This allowed researchers to immediately take an MRI post-treatment to assess whether the drug was reaching the targeted area. Research has also tagged nanocarriers of a therapeutic with the MRI contrast agent gadoteridol for real-time treatment imaging. Other than MRI contrast agents, it has been shown to be possible to tag a therapeutic microcarrier with a radiolabeled or fluorescent molecule that can then be excited during imaging. The biggest limitation of this form of drug distribution visualization is that it only works "ex vivo." One research group was able to optimize their liposomal design using this approach, demonstrating its usefulness.
While a common use of CED is to directly deliver drugs to the brain, it is also possible to deliver non-chemical therapeutics, such as proteins or growth factors, using CED. There are several types of microcarriers which have been used for CED, including nanospheres, nanoparticles, liposomes, micelles, and dendrimers. Nanocarriers have several unique benefits for delivering therapeutics compared to conventional drug solutions. Firstly, nanocarriers can be modified to create an optimal carrier for the system that is being developed. These modifications can include tagging them for imaging and changes in size, charge, osmolarity, viscosity, and surface coating.
The other large area of current research on CED is the translation of CED from brain tumors to other brain diseases. The primary conditions being researched for non-cancerous treatments are Parkinson's disease and epilepsy. Animal model research using CED to deliver therapeutics to the brain to treat Parkinson's disease has identified three promising therapeutics. Researchers in these studies have typically used adeno-associated virus carriers, since many of the therapeutics currently investigated for Parkinson's disease are not small molecules but rather gene therapy or protein based. Current research focuses on GDNF, a growth factor which protects dopamine-producing brain cells; glutamic acid decarboxylase (GAD), another therapeutic that helps to protect dopamine-producing brain cells; and neurturin, a GDNF homolog. Another reported use of CED is in the treatment of epilepsy. Current epilepsy treatments are too large to pass through the BBB, so utilizing CED to deliver these drugs is currently one of the only ways to target the brain. The two primary antiepileptic drugs (AEDs) being delivered using CED in research are conotoxin N-type calcium channel antagonists and botulinum neurotoxins. Results from these studies showed promise in reducing the risk of seizures for up to 5 days when treated with calcium channel antagonists and up to 50 days when using botulinum neurotoxins.
Limitations and future directions.
While there has been promise in the use of CED to deliver drugs directly into the brain, the method has some drawbacks. A vast majority of studies to date have failed to achieve consistent delivery from patient to patient for technical reasons surrounding the use of CED. Incorrect placement of catheters can result in a less effective treatment with increased risk of leaks from the brain into other parts of the central nervous system (CNS). Another, more common problem is reflux of the drug back along the catheter. Reflux can cause leakage into unintended areas as well as decrease the true volume of drug delivered. CED catheter improvements are currently being researched, with some research groups modifying the tips of the catheters to prevent reflux. The design of a balloon-tipped catheter for use in CED has been proposed, and results showed that drug was successfully delivered into the brain using the balloon-tipped catheter without any complication. Other proposed designs include catheters with multiple exit sites, catheters with porous tips, and catheters with tips that are smaller than the rest of the catheter. New catheter designs also aim to allow for a greater flow rate while still minimizing the risk of reflux. These improvements to the technical limitations of CED aim to help researchers determine the efficacy of a treatment without failed treatments due to limitations in the equipment. Along these lines, CraniUS LLC, a company based in Baltimore, Maryland, is developing a fully implantable, MRI-compatible craniofacial implant device intended to provide neurosurgical patients chronic drug delivery to the brain via convection-enhanced delivery, using an embedded microfluidic-pump system and a port for repeated transcutaneous refilling.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "J = -D\\nabla C"
},
{
"math_id": 1,
"text": "\\nabla C"
},
{
"math_id": 2,
"text": "v = -K\\nabla p"
},
{
"math_id": 3,
"text": "\\nabla p"
}
]
| https://en.wikipedia.org/wiki?curid=70683630 |
70685311 | Mean line segment length | In geometry, the mean line segment length is the average length of a line segment connecting two points chosen uniformly at random in a given shape. In other words, it is the expected Euclidean distance between two random points, where each point in the shape is equally likely to be chosen.
Even for simple shapes such as a square or a triangle, solving for the exact value of their mean line segment lengths can be difficult because their closed-form expressions can get quite complicated. As an example, consider the following question:
"What is the average distance between two randomly chosen points inside a square with side length 1?"
While the question may seem simple, it has a fairly complicated answer; the exact value for this is formula_0.
Formal definition.
The mean line segment length for an "n"-dimensional shape "S" may formally be defined as the expected Euclidean distance ||⋅|| between two random points "x" and "y",
formula_1
where "λ" is the "n"-dimensional Lebesgue measure.
For the two-dimensional case, this is defined using the distance formula for two points ("x"1, "y"1) and ("x"2, "y"2)
formula_2
Approximation methods.
Since computing the mean line segment length involves calculating multidimensional integrals, various methods for numerical integration can be used to approximate this value for any shape.
One such method is the Monte Carlo method. To approximate the mean line segment length of a given shape, two points are randomly chosen in its interior and the distance is measured. After several repetitions of these steps, the average of these distances will eventually converge to the true value.
These methods can only give an approximation; they cannot be used to determine its exact value.
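For instance, a minimal Monte Carlo sketch for the unit square looks like the following; with enough samples the estimate approaches the exact value quoted in the introduction.

```python
import math
import random

def mean_segment_unit_square(samples=1_000_000):
    """Monte Carlo estimate of the mean distance between two uniform
    random points in the unit square."""
    total = 0.0
    for _ in range(samples):
        x1, y1, x2, y2 = (random.random() for _ in range(4))
        total += math.hypot(x1 - x2, y1 - y2)
    return total / samples

# Exact value for comparison; note that ln(1 + sqrt(2)) = asinh(1).
exact = (2 + math.sqrt(2) + 5 * math.asinh(1)) / 15
print(mean_segment_unit_square(), exact)   # both ~0.52140
```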
Formulas.
Line segment.
For a line segment of length "d", the average distance between two points is "d"/3.
Triangle.
For a triangle with side lengths "a", "b", and "c", the average distance between two points in its interior is given by the formula
formula_3
where formula_4 is the semiperimeter, and formula_5 denotes formula_6.
For an equilateral triangle with side length "a", this is equal to
formula_7
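The formula is straightforward to evaluate directly, and the equilateral case provides a convenient check:

```python
import math

def mean_segment_triangle(a, b, c):
    """Mean distance between two uniform random points inside a triangle
    with side lengths a, b, c, from the closed formula above."""
    s = (a + b + c) / 2
    sa, sb, sc = s - a, s - b, s - c
    log_term = (math.log(s / sa) / a**3
                + math.log(s / sb) / b**3
                + math.log(s / sc) / c**3)
    return (4 * s * sa * sb * sc / 15 * log_term
            + (a + b + c) / 15
            + (b + c) * (b - c)**2 / (30 * a**2)
            + (a + c) * (a - c)**2 / (30 * b**2)
            + (a + b) * (a - b)**2 / (30 * c**2))

print(mean_segment_triangle(1, 1, 1))   # 0.3647918...
print((4 + 3 * math.log(3)) / 20)       # the same value, from above
```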
Square and rectangles.
The average distance between two points inside a square with side length "s" is
formula_8
More generally, the mean line segment length of a rectangle with side lengths "l" and "w" is
formula_9
where formula_10 is the length of the rectangle's diagonal.
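A direct implementation of the rectangle formula is sketched below; setting l = w recovers the square value above, which serves as a sanity check.

```python
import math

def mean_segment_rectangle(l, w):
    """Mean distance between two uniform random points in an l-by-w
    rectangle, from the closed formula above."""
    d = math.hypot(l, w)   # length of the diagonal
    return (l**3 / w**2 + w**3 / l**2
            + d * (3 - l**2 / w**2 - w**2 / l**2)
            + 2.5 * (w**2 / l * math.log((l + d) / w)
                     + l**2 / w * math.log((w + d) / l))) / 15

print(mean_segment_rectangle(1, 1))   # 0.52140..., the square value
print(mean_segment_rectangle(2, 1))   # a 2-by-1 rectangle
```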
If the two points are instead chosen to be on different sides of the square, the average distance is given by
formula_11
Cube and hypercubes.
The average distance between points inside an "n"-dimensional unit hypercube is denoted as Δ("n"), and is given as
formula_12
The first two values, Δ(1) and Δ(2), refer to the unit line segment and unit square respectively.
For the three-dimensional case, the mean line segment length of a unit cube is also known as Robbins constant, named after David P. Robbins. This constant has a closed form,
formula_13
Its numerical value is approximately 0.661707182...
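The closed form is easy to evaluate and to compare against a quick Monte Carlo estimate for the unit cube:

```python
import math
import random

# Robbins constant from the closed form above.
robbins = ((4 + 17 * math.sqrt(2) - 6 * math.sqrt(3) - 7 * math.pi) / 105
           + math.log(1 + math.sqrt(2)) / 5
           + 2 * math.log(2 + math.sqrt(3)) / 5)

def mc_unit_cube(samples=500_000):
    """Monte Carlo estimate of Delta(3)."""
    return sum(math.dist([random.random() for _ in range(3)],
                         [random.random() for _ in range(3)])
               for _ in range(samples)) / samples

print(robbins, mc_unit_cube())   # both ~0.66171
```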
Andersson et al. (1976) showed that Δ("n") satisfies the bounds
formula_14
Choosing points from two different faces of the unit cube also gives a result with a closed form, given by,
formula_15
Circle and sphere.
The average chord length between points on the circumference of a circle of radius "r" is
formula_16
and for points picked on the surface of a sphere with radius "r", the average distance is
formula_17
Disks.
The average distance between points inside a disk of radius "r" is
formula_18
The values for a half disk and quarter disk are also known.
For a half disk of radius 1:
formula_19
For a quarter disk of radius 1:
formula_20
Balls.
For a three-dimensional ball, this is
formula_21
More generally, the mean line segment length of an "n"-ball is
formula_22
where "βn" depends on the parity of "n",
formula_23
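Because the coefficient depends on parity, a direct implementation may be easier to read than the formula itself; the sketch below reproduces the disk and 3-ball values given above.

```python
import math

def mean_segment_ball(n, r=1.0):
    """Mean distance between two uniform random points in an n-ball of
    radius r, using the parity-dependent beta_n above."""
    if n % 2 == 0:
        beta = (2**(3 * n + 1) * math.factorial(n // 2)**2
                * math.factorial(n)
                / ((n + 1) * math.factorial(2 * n) * math.pi))
    else:
        beta = (2**(n + 1) * math.factorial(n)**3
                / ((n + 1) * math.factorial((n - 1) // 2)**2
                   * math.factorial(2 * n)))
    return 2 * n / (2 * n + 1) * beta * r

print(mean_segment_ball(2), 128 / (45 * math.pi))   # disk: ~0.90541
print(mean_segment_ball(3), 36 / 35)                # 3-ball: ~1.02857
```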
General bounds.
Burgstaller and Pillichshammer (2008) showed that for a compact subset of the "n"-dimensional Euclidean space with diameter 1, its mean line segment length "L" satisfies
formula_24
where Γ denotes the gamma function. For "n" = 2, a stronger bound exists.
formula_25 | [
{
"math_id": 0,
"text": "\\frac{2 + \\sqrt{2} + 5 \\ln (1 + \\sqrt{2})}{15}"
},
{
"math_id": 1,
"text": "\\mathbb E[\\|x-y\\|]=\\frac1{\\lambda(S)^2}\\int_S \\int_S \\|x-y\\| \\,d\\lambda(x) \\,d\\lambda(y)"
},
{
"math_id": 2,
"text": "\\frac1{\\lambda(S)^2}\\iint_S \\iint_S \\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2} \\,dx_1 \\,dy_1 \\,dx_2 \\,dy_2."
},
{
"math_id": 3,
"text": "\\frac{4 s s_a s_b s_c}{15} \\left[ \\frac{1}{a^3} \\ln\\left( \\frac{s}{s_a} \\right) + \\frac{1}{b^3} \\ln\\left( \\frac{s}{s_b} \\right) + \\frac{1}{c^3} \\ln\\left( \\frac{s}{s_c} \\right) \\right] + \\frac{a+b+c}{15} + \\frac{(b+c)(b-c)^2}{30a^2} + \\frac{(a+c)(a-c)^2}{30b^2} + \\frac{(a+b)(a-b)^2}{30c^2},"
},
{
"math_id": 4,
"text": "s = (a+b+c)/2"
},
{
"math_id": 5,
"text": "s_i"
},
{
"math_id": 6,
"text": "s-i"
},
{
"math_id": 7,
"text": "\\left(\\frac{4 + 3 \\ln 3}{20}\\right)a \\approx 0.364791843\\ldots a."
},
{
"math_id": 8,
"text": "\\left(\\frac{2 + \\sqrt{2} + 5 \\ln (1 + \\sqrt{2})}{15}\\right) s \\approx 0.521405433\\ldots s."
},
{
"math_id": 9,
"text": "\\frac{1}{15}\\left[ \\frac{l^3}{w^2} + \\frac{w^3}{l^2} + d\\left(3 - \\frac{l^2}{w^2} - \\frac{w^2}{l^2}\\right) + \\frac{5}{2}\\left(\\frac{w^2}{l} \\ln \\left(\\frac{l+d}{w}\\right) + \\frac{l^2}{w} \\ln \\left(\\frac{w+d}{l}\\right) \\right) \\right]"
},
{
"math_id": 10,
"text": "d = \\sqrt{l^2 + w^2}"
},
{
"math_id": 11,
"text": "\\left(\\frac{2 + \\sqrt{2} + 5 \\ln (1 + \\sqrt{2})}{9}\\right) s \\approx 0.869009\\ldots s."
},
{
"math_id": 12,
"text": "\\Delta(n) = \\underbrace{\\int_0^1 \\cdots \\int_0^1}_{2n} \\sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2 + \\cdots + (x_n - y_n)^2} \\,dx_1 \\cdots \\,dx_n \\,dy_1 \\cdots \\,dy_n"
},
{
"math_id": 13,
"text": "\\Delta(3) = \\frac{4+17\\sqrt2-6\\sqrt3-7\\pi}{105} + \\frac{\\ln(1+\\sqrt2)}{5} + \\frac{2\\ln(2+\\sqrt3)}{5}."
},
{
"math_id": 14,
"text": "\\tfrac{1}{3} n^{1/2} \\le \\Delta(n) \\le (\\tfrac{1}{6} n)^{1/2} \\sqrt{\\frac{1}{3}\\left[1 + 2\\left(1 - \\frac{3}{5n}\\right)^{1/2}\\right]}."
},
{
"math_id": 15,
"text": "\\frac{4 + 17\\sqrt{2} - 6\\sqrt{3} - 7\\pi}{75} + \\frac{7\\ln{(1+\\sqrt{2})}}{25} + \\frac{14\\ln{(2+\\sqrt{3})}}{25}."
},
{
"math_id": 16,
"text": "\\frac{4}{\\pi} r \\approx 1.273239544\\ldots r"
},
{
"math_id": 17,
"text": "\\frac{4}{3} r"
},
{
"math_id": 18,
"text": "\\frac{128}{45\\pi}r \\approx 0.905414787\\ldots r."
},
{
"math_id": 19,
"text": "\\frac{64}{135}\\frac{12\\pi-23}{\\pi^2} \\approx 0.706053409\\ldots"
},
{
"math_id": 20,
"text": "\\frac{32}{135\\pi^2}(6\\ln{(2\\sqrt{2}-2)}-94\\sqrt{2}+48\\pi+3) \\approx 0.473877262\\ldots"
},
{
"math_id": 21,
"text": "\\frac{36}{35}r \\approx 1.028571428\\ldots r."
},
{
"math_id": 22,
"text": "\\frac{2n}{2n+1}\\beta_n r"
},
{
"math_id": 23,
"text": "\\beta_n = \\begin{cases}\\dfrac{2^{3n+1} \\, (n/2)!^2 \\, n!}{(n+1)\\, (2n)! \\, \\pi} & (\\text{for even } n)\\\\ \\dfrac{2^{n+1}\\, n!^3}{(n+1)\\, ((n-1)/2)!^2 \\, (2n)!} & (\\text{for odd } n)\\end{cases}"
},
{
"math_id": 24,
"text": "L \\le \\sqrt{\\frac{2n}{n+1}} \\frac{2^{n-2} \\Gamma(n/2)^2}{\\Gamma(n - 1/2) \\sqrt{\\pi}}"
},
{
"math_id": 25,
"text": "L \\le \\frac{229}{800} + \\frac{44}{75}\\sqrt{2 - \\sqrt{3}} + \\frac{19}{480}\\sqrt{5} = 0.678442\\ldots"
}
]
| https://en.wikipedia.org/wiki?curid=70685311 |
70691232 | Two-dimensional space | Mathematical space with two coordinates
A two-dimensional space is a mathematical space with two dimensions, meaning points have two degrees of freedom: their locations can be locally described with two coordinates or they can move in two independent directions. Common two-dimensional spaces are often called "planes", or, more generally, "surfaces". These include analogs to physical spaces, like flat planes, and curved surfaces like spheres, cylinders, and cones, which can be infinite or finite. Some two-dimensional mathematical spaces are not used to represent physical positions, like an affine plane or complex plane.
Flat.
The most basic example is the flat Euclidean plane, an idealization of a flat surface in physical space such as a sheet of paper or a chalkboard. On the Euclidean plane, any two points can be joined by a unique straight line along which the distance can be measured. The space is flat because any two lines crossed by a third line perpendicular to both of them are parallel, meaning they never intersect and stay at a uniform distance from each other.
Curved.
Two-dimensional spaces can also be curved, for example the sphere and hyperbolic plane, sufficiently small portions of which appear like the flat plane, but on which straight lines which are locally parallel do not stay equidistant from each other but eventually converge or diverge, respectively. Two-dimensional spaces with a locally Euclidean concept of distance but which can have non-uniform curvature are called Riemannian surfaces. (Not to be confused with Riemann surfaces.) Some surfaces are embedded in three-dimensional Euclidean space or some other ambient space, and inherit their structure from it; for example, ruled surfaces such as the cylinder and cone contain a straight line through each point, and minimal surfaces locally minimize their area, as is done physically by soap films.
Relativistic.
Lorentzian surfaces look locally like a two-dimensional slice of relativistic spacetime with one spatial and one time dimension; constant-curvature examples are the flat Lorentzian plane (a two-dimensional subspace of Minkowski space) and the curved de Sitter and anti-de Sitter planes.
Non-Euclidean.
Other types of mathematical planes and surfaces modify or do away with the structures defining the Euclidean plane. For example, the affine plane has a notion of parallel lines but no notion of distance; however, signed areas can be meaningfully compared, as they can in a more general symplectic surface. The projective plane does away with both distance and parallelism. A two-dimensional metric space has some concept of distance but it need not match the Euclidean version. A topological surface can be stretched, twisted, or bent without changing its essential properties. An algebraic surface is a two-dimensional set of solutions of a system of polynomial equations.
Information-holding.
Some mathematical spaces have additional arithmetical structure associated with their points. A vector plane is an affine plane whose points, called "vectors", include a special designated origin or zero vector. Vectors can be added together or scaled by a number, and optionally have a Euclidean, Lorentzian, or Galilean concept of distance. The complex plane, hyperbolic number plane, and dual number plane each have points which are considered numbers themselves, and can be added and multiplied. A Riemann surface or a Lorentz surface appears locally like the complex plane or the hyperbolic number plane, respectively.
Definition and meaning.
Mathematical spaces are often defined or represented using numbers rather than geometric axioms. One of the most fundamental two-dimensional spaces is the real coordinate space, denoted formula_0 consisting of pairs of real-number coordinates. Sometimes the space represents arbitrary quantities rather than geometric positions, as in the parameter space of a mathematical model or the configuration space of a physical system.
Non-real numbers.
More generally, other types of numbers can be used as coordinates. The complex plane is two-dimensional when considered to be formed from real-number coordinates, but one-dimensional in terms of complex-number coordinates. A two-dimensional complex space – such as the two-dimensional complex coordinate space, the complex projective plane, or a complex surface – has two complex dimensions, which can alternately be represented using four real dimensions. A lattice is an infinite grid of points which can be represented using integer coordinates. Some two-dimensional spaces, such as finite planes, have only a finite set of elements.
{
"math_id": 0,
"text": "\\R^2,"
}
]
| https://en.wikipedia.org/wiki?curid=70691232 |
70710939 | Joshua 18 | Book of Joshua, chapter 18
Joshua 18 is the eighteenth chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the further allotment of land for the tribes of Israel, especially the tribe of Benjamin, a part of a section comprising Joshua 13:1–21:45 about the Israelites allotting the land of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 28 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The narrative of the Israelites allotting the land of Canaan, comprising Joshua 13:1–21:45, has the following outline:
A. Preparations for Distributing the Land (13:1–14:15)
B. The Allotment for Judah (15:1–63)
C. The Allotment for Joseph (16:1–17:18)
D. Land Distribution at Shiloh (18:1–19:51)
1. Directions for the Remaining Allotment (18:1–10)
2. Tribal Inheritances (18:11–19:48)
a. Benjamin (18:11–28)
b. Simeon (19:1–9)
c. Zebulun (19:10–16)
d. Issachar (19:17–23)
e. Asher (19:24–31)
f. Naphtali (19:32–39)
g. Dan (19:40–48)
3. Joshua's Inheritance (19:49–50)
4. Summary Statement (19:51)
E. Levitical Distribution and Conclusion (20:1–21:45)
The pattern of the narrative places the distribution to Judah and Joseph preceded by the grant of land to Caleb (14:6–15), while the remaining distribution is followed by an account of an inheritance for Joshua (19:49–50), so the accounts of rewards for the two faithful spies are carefully woven into the story of the land allotments.
There are three key elements in the report of the allotments for the nine and a half tribes in the land of Canaan as follows:
Directions for the remaining allotment (18:1–10).
In the beginning of this chapter it is reported that the Tabernacle or "tent of meeting" was set up in Shiloh (18:1), replacing Gilgal and Shechem, which had been the gathering centers for Israel until this time. The introduction of Shiloh at this point is not incidental, as its centrality is indicated in an artistic way by placing the text between the allotments of land to Judah and Joseph on the one side, and the remaining tribes on the other. Shiloh also lies within the territory of the Joseph tribes, whose allotments are recorded in the previous chapters. The central worship place has not been mentioned much until now (only a reference to the 'altar of the LORD' and 'the place that he would choose' in Joshua 9:27), so the setting up of the Tabernacle in Shiloh becomes an important concept of the narrative as the fulfilment of the promise-command that God would be among Israel in the land he was giving them (Leviticus 26:11–12: 'place my dwelling [tent, tabernacle] in your midst'; Deuteronomy 12:5). Shiloh starts to play an important role in the distribution of the remaining land (verses 2–9; it reappears in 19:51), binding the distribution up with Israel's religious life. After the completion of allotments for the tribes of Reuben, Gad, Judah, Ephraim, and Manasseh (the division of Joseph into Ephraim and Manasseh compensates for Levi, which has no territorial inheritance; Joshua 18:7), seven tribes were still to receive their land (verse 2). This stage of the allocation is preceded by a survey (literally, "writing"; verse 4); then, in Shiloh, Joshua presided over the allocation by means of the sacred lot, 'before the LORD our God' (verse 6, cf. verses 8, 10, Joshua 14:1).
"And the whole congregation of the children of Israel assembled together at Shiloh, and set up the tabernacle of the congregation there."
" And the land was subdued before them."
Verse 1.
Passages throughout the Hebrew Bible confirm that Shiloh was once an important sanctuary for Israel before the temple was built in Jerusalem, such as 1 Samuel 1–2 (the 'house of the LORD', 1 Samuel 1:24, and the 'tent of meeting' ('tabernacle of the congregation'), 1 Samuel 2:22, as here); it is also named as 'the place of God's choice' in Jeremiah 7:12, following Deuteronomy 12 (cf. Joshua 22).
Allotment for Benjamin (18:11–28).
The territory of Benjamin was allotted between those of Ephraim (in its north; 18:12–14; cf. 16:1–3) and Judah (in its south; 18:15–19; cf. 15:8–11). The list of towns in the allotment (verses 21–28) includes Jebus (Jerusalem), although it was clearly stated that the city did not fall to Joshua (Joshua 15:63). It also includes Gibeon and its satellites (cf. 9:17), without mentioning their special status (Joshua 9) or Israel's battle to defend it (Joshua 10).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70710939 |
70711048 | Joshua 19 | Book of Joshua, chapter 19
Joshua 19 is the nineteenth chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the further allotment of land for the tribes of Israel, especially the tribes of Simeon, Zebulun, Issachar, Asher, Naphtali and Dan, as well as Joshua's Inheritance, a part of a section comprising Joshua 13:1–21:45 about the Israelites allotting the land of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 51 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The narrative of the Israelites allotting the land of Canaan, comprising Joshua 13:1–21:45, has the following outline:
A. Preparations for Distributing the Land (13:1–14:15)
B. The Allotment for Judah (15:1–63)
C. The Allotment for Joseph (16:1–17:18)
D. Land Distribution at Shiloh (18:1–19:51)
1. Directions for the Remaining Allotment (18:1–10)
2. Tribal Inheritances (18:11–19:48)
a. Benjamin (18:11–28)
b. Simeon (19:1–9)
c. Zebulun (19:10–16)
d. Issachar (19:17–23)
e. Asher (19:24–31)
f. Naphtali (19:32–39)
g. Dan (19:40–48)
3. Joshua's Inheritance (19:49–50)
4. Summary Statement (19:51)
E. Levitical Distribution and Conclusion (20:1–21:45)
The pattern of the narrative places the distribution to Judah and Joseph preceded by the grant of land to Caleb (14:6–15), while the remaining distribution is followed by an account of an inheritance for Joshua (19:49–50), so the accounts of rewards for the two faithful spies are carefully woven into the story of the land allotments.
There are three key elements in the report of the allotments for the nine and a half tribes in the land of Canaan as follows:
Allotment for Simeon (19:1–9).
The territory of Simeon lay in the semi-arid Negeb, in the south, without a boundary description, as it lay within the territory of Judah. Even some of its towns also appear in Judah's list (15:21–32). Apparently, the tribal identity of Simeon was lost early in Israel's life, consistent with the condemnation in Jacob's blessing (Genesis 49:7), which pairs Simeon with Levi to be scattered within Israel. The tribe is not mentioned in the Blessing of Moses (Deuteronomy 33) nor in the Song of Deborah (Judges 5), perhaps because of its early failure to settle.
Allotment for Zebulun, Issachar, Asher, Naphtali (19:10–39).
The next allotments are for the tribes of Zebulun, Issachar, Asher, and Naphtali, which form a cluster between the Sea of Galilee and the Mediterranean Sea. From east to west, Issachar, Zebulun, and Asher have southern borders with Manasseh along the line of the Carmel range and the plain of Esdraelon, whereas Naphtali is to the north of Issachar and Zebulun. Among the place names, Mount Tabor appears as a reference point for three of the tribes (verses 12, 22, 34).
"And Kattath, and Nahallal, and Shimron, and Idalah, and Bethlehem: twelve cities with their villages."
Allotment for Dan (19:40–48).
The allotment for the tribe of Dan stands apart from those of the preceding tribes, because the Danites were originally allotted land in the south, to the west of Judah, running down to the Mediterranean Sea at Joppa (Tel-Aviv) and including certain Philistine territory (Ekron); some of the place names here are also mentioned in the stories of Samson, a Danite judge, who clashed with the Philistines on the edges of the Shephelah (low hills) and their coastal areas (cf. Judges 13:2, 25; 14:1). The Danites could never have had a strong foothold in this debatable region between the Philistines and Judah, so they finally settled in the extreme north — perhaps the reason for their inclusion here with the Galilean tribes. The 'conquest' of Leshem by this tribe is not grouped as part of Joshua's conquest, and is described more fully in Judges 18, where the slaughter of the inhabitants of Leshem (Laish) is implicitly criticized (Judges 18:27). The summary in 19:48 apparently refers to the places enumerated in the original territory (nothing in verse 47 would correspond to 'these towns with their villages'), so Dan's 'inheritance' was not actually 'inherited'.
Joshua's inheritance and summary of allotments (19:49–51).
Joshua's personal inheritance (19:49–50) at the end of the land distribution corresponds to that of Caleb, the other courageous spy, at the start of the distribution (Joshua 14:6–15). There is also a balance: Caleb inherits in (southern) Judah, while Joshua inherits in (northern) Ephraim. The conclusion (19:51) returns to Shiloh and the tent of meeting, again emphasizing the place as the spiritual center of the land, representing God's hand in the distribution. Once more Joshua and Eleazar are named as jointly responsible for the execution (cf. 14:1; cf. Numbers 26:1–4; 52–56).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70711048 |
70716862 | Joshua 20 | Book of Joshua, chapter 20
Joshua 20 is the twentieth chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the designation of the cities of refuge, a part of a section comprising Joshua 13:1–21:45 about the Israelites allotting the land of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 9 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The narrative of the Israelites allotting the land of Canaan, comprising Joshua 13:1–21:45, has the following outline:
A. Preparations for Distributing the Land (13:1–14:15)
B. The Allotment for Judah (15:1–63)
C. The Allotment for Joseph (16:1–17:18)
D. Land Distribution at Shiloh (18:1–19:51)
E. Levitical Distribution and Conclusion (20:1–21:45)
1. Cities of Refuge (20:1–9)
a. Regulations for Cities of Refuge (20:1–6)
b. Designation of Cities of Refuge (20:7–9)
2. Levitical Cities (21:1–42)
a. Approach to Joshua and Eleazar (21:1–3)
b. Initial Summary (21:4–8)
c. Priestly Kohathite Allotment (21:9–19)
d. Non-Priestly Kohathite Allotment (21:20–26)
e. Gershonite Allotment (21:27–33)
f. Merarite Allotment (21:34–40)
g. Levitical Summary (21:41–42)
3. Summary of Divine Faithfulness (21:43–45)
Regulations for Cities of Refuge (20:1–6).
The instructions regarding cities of refuge are given in Numbers 35:9–28 and Deuteronomy 4:41–43; 19:1–10, and are now implemented in practice. The main topic is the relation of 'accidental homicide' to a form of justice deriving from familial relations in a tribal context. An 'avenger of blood' was appointed by the familial group to exact 'blood for blood' in cases of homicide for the protection of the family group. The Hebrew word for 'avenger' can also be translated as 'redeemer' (Ruth 2:20). The perpetrator of an accidental homicide, as exemplified in Numbers 35:22–23 and Deuteronomy 19:5, is permitted to escape to designated cities for asylum until the person's guilt or innocence is determined: first, by the elders at the gates of the city, which may simply be a formal request for sanctuary (verse 4), then followed by a trial before the "‘ê-ḏāh", or 'congregation', that is, the whole people constituted as a religious assembly (verse 6; cf. Numbers 35:12).
One criterion for deciding intentionality is whether 'there had been previous enmity between the parties' (verse 5b, cf. Deuteronomy 19:4b, Num 35:23b).
The provision that the refugee must remain in that city of refuge until the death of the high priest of that period (verse 6b) may be intended to set a time-limit on the stalemate produced by a verdict of innocence, which nevertheless cannot revoke the principal right of blood vengeance (Numbers 35:27c).
Designation of Cities of Refuge (20:7–9).
[Map: Cities of refuge at the time of Joshua]
The designated cities of refuge, from north to south, relative to the Jordan River were:
All the cities of refuge are also Levitical cities (cf. Joshua 21), possibly for several reasons:
The cities were all upon mountains, so that they might be seen from afar by those who fled there, and they were situated at a convenient distance from one another, for the benefit of the several tribes, within approximately half a day's reach from any part of most of the country.
"These were the cities appointed for all the children of Israel and for the stranger who dwelt among them, that whoever killed a person accidentally might flee there, and not die by the hand of the avenger of blood until he stood before the congregation."
Verse 9.
The census of these strangers in Solomon's time gave a return of 153,600 males (2 Chronicles 2:17), about a tenth of the whole population.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70716862 |
70721875 | Self-play | Reinforcement learning technique
<templatestyles src="Machine learning/styles.css"/>
Self-play is a technique for improving the performance of reinforcement learning agents. Intuitively, agents learn to improve their performance by playing "against themselves".
Definition and motivation.
In multi-agent reinforcement learning experiments, researchers try to optimize the performance of a learning agent on a given task, in cooperation or competition with one or more agents. These agents learn by trial-and-error, and researchers may choose to have the learning algorithm play the role of two or more of the different agents. When successfully executed, this technique has a double advantage:
Czarnecki et al. argue that most of the games that people play for fun are "Games of Skill", meaning games whose space of all possible strategies looks like a spinning top. In more detail, we can partition the space of strategies into sets formula_0 such that for any formula_1, the strategy formula_2 beats the strategy formula_3. Then, in population-based self-play, if the population is larger than formula_4, the algorithm would converge to the best possible strategy.
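As a purely illustrative sketch (none of this is drawn from the systems cited below), the skeleton of self-play is a loop in which one policy supplies the moves for every player and is updated from the outcome. Here a toy rock–paper–scissors agent reinforces whichever of its own actions won; the game, policy representation, and update rule are all hypothetical choices:

```python
import random

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def play(policy):
    # One self-play episode: the same policy supplies moves for both sides.
    a1, a2 = random.choices(ACTIONS, weights=policy, k=2)
    if a1 == a2:
        return None                       # draw: no learning signal
    return a1 if BEATS[a1] == a2 else a2  # the winning action

policy = [1.0, 1.0, 1.0]                  # unnormalized weights over ACTIONS
for _ in range(100_000):
    winner = play(policy)
    if winner is not None:
        policy[ACTIONS.index(winner)] += 0.01  # reinforce the winner
    total = sum(policy)
    policy = [3 * w / total for w in policy]   # renormalize

print([round(w / 3, 3) for w in policy])  # roughly [0.33, 0.33, 0.33]
```

Because both sides draw from the same distribution, the learning signal comes entirely from the agent's own play; in this cyclic game the weights tend to circle the uniform mixed strategy rather than settle on a pure action.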
Usage.
Self-play is used by the AlphaZero program to improve its performance in the games of chess, shogi and go.
Self-play is also used to train the Cicero AI system to outperform humans at the game of Diplomacy. The technique is also used in training the DeepNash system to play the game Stratego.
Connections to other disciplines.
Self-play has been compared to the epistemological concept of tabula rasa that describes the way that humans acquire knowledge from a "blank slate".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L_1, L_2, ..., L_n"
},
{
"math_id": 1,
"text": "i < j, \\pi_i\\in L_i, \\pi_j \\in L_j"
},
{
"math_id": 2,
"text": "\\pi_j"
},
{
"math_id": 3,
"text": "\\pi_i"
},
{
"math_id": 4,
"text": "\\max_i |L_i|"
}
]
| https://en.wikipedia.org/wiki?curid=70721875 |
70722900 | Neodymium(III) vanadate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Neodymium(III) vanadate is an inorganic compound, a salt of neodymium and vanadic acid with the chemical formula of NdVO4. It forms pale-blue, hydrated crystals.
Preparation.
Neodymium(III) vanadate is produced by the reaction of hot acidic neodymium(III) chloride and sodium vanadate:
formula_0
Physical properties.
Neodymium(III) vanadate forms crystals of the tetragonal crystal system, space group I 41/amd, lattice constants a = 0.736 nm, b = 0.736 nm, c = 0.6471 nm, α = 90°, β = 90°, γ = 90°, Z = 4.
It is insoluble in water.
It can form hydrates.
Applications.
Neodymium(III) vanadate can be used for:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ NdCl_3 + Na_3VO_4 \\ \\xrightarrow{}\\ NdVO_4\\downarrow + 3NaCl}"
}
]
| https://en.wikipedia.org/wiki?curid=70722900 |
7072682 | Mediation (statistics) | Statistical model
In statistics, a mediation model seeks to identify and explain the mechanism or process that underlies an observed relationship between an independent variable and a dependent variable via the inclusion of a third hypothetical variable, known as a mediator variable (also a mediating variable, intermediary variable, or intervening variable). Rather than a direct causal relationship between the independent variable and the dependent variable, which is often false, a mediation model proposes that the independent variable influences the mediator variable, which in turn influences the dependent variable. Thus, the mediator variable serves to clarify the nature of the relationship between the independent and dependent variables.
Mediation analyses are employed to understand a known relationship by exploring the underlying mechanism or process by which one variable influences another variable through a mediator variable. In particular, mediation analysis can contribute to better understanding the relationship between an independent variable and a dependent variable when these variables do not have an obvious direct connection.
Baron and Kenny's (1986) steps for mediation analysis.
Baron and Kenny (1986) laid out several requirements that must be met to form a true mediation relationship. They are outlined below using a real-world example. See the diagram above for a visual representation of the overall mediating relationship to be explained. The original steps are as follows.
Step 1.
Regress the dependent variable on the independent variable to confirm that the independent variable is a significant predictor of the dependent variable.
Independent variable formula_0 dependent variable
formula_1
Step 2.
Regress the mediator on the independent variable to confirm that the independent variable is a significant predictor of the mediator. If the mediator is not associated with the independent variable, then it couldn’t possibly mediate anything.
Independent variable formula_0 mediator
formula_2
Step 3.
Regress the dependent variable on both the mediator and independent variable to confirm that a) the mediator is a significant predictor of the dependent variable, and b) the strength of the coefficient of the previously significant independent variable in Step #1 is now greatly reduced, if not rendered nonsignificant.
formula_3
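The three steps map directly onto three ordinary least squares fits. The sketch below simulates data with a built-in mediation structure (all variable names and effect sizes are invented for illustration) and recovers the coefficients from the equations above:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                        # independent variable
me = 0.6 * x + rng.normal(size=n)             # mediator
y = 0.3 * x + 0.5 * me + rng.normal(size=n)   # dependent variable

def ols(columns, target):
    # Least-squares coefficients for target ~ intercept + columns.
    X = np.column_stack([np.ones(len(target))] + columns)
    return np.linalg.lstsq(X, target, rcond=None)[0]

beta_11 = ols([x], y)[1]                # Step 1: total effect of X on Y
beta_21 = ols([x], me)[1]               # Step 2: effect of X on the mediator
beta_31, beta_32 = ols([x, me], y)[1:]  # Step 3: X and Me jointly predict Y

print(f"total {beta_11:.2f}, direct {beta_31:.2f}, "
      f"indirect {beta_21 * beta_32:.2f}")   # total ~ direct + indirect
```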
Example.
The following example, drawn from Howell (2009), explains each step of Baron and Kenny's requirements to understand further how a mediation effect is characterized. Step 1 and step 2 use simple regression analysis, whereas step 3 uses multiple regression analysis.
Such findings would lead to the conclusion that your feelings of competence and self-esteem mediate the relationship between how you were parented and how confident you feel about parenting your own children.
If step 1 does not yield a significant result, one may still have grounds to move to step 2. Sometimes there is actually a significant relationship between the independent and dependent variables, but because of small sample sizes or other extraneous factors, there may not be enough power to detect the effect that actually exists.
Direct versus indirect effects.
In the diagram shown above, the indirect effect is the product of path coefficients "A" and "B". The direct effect is the coefficient "C'".
The direct effect measures the extent to which the dependent variable changes when the independent variable increases by one unit and the mediator variable remains unaltered. In contrast, the indirect effect measures the extent to which the dependent variable changes when the independent variable is held constant and the mediator variable changes by the amount it would have changed had the independent variable increased by one unit.
In linear systems, the total effect is equal to the sum of the direct and indirect effects ("C' + AB" in the model above). In nonlinear models, the total effect is not generally equal to the sum of the direct and indirect effects, but to a modified combination of the two.
Full mediation versus partial mediation.
A mediator variable can either account for all or some of the observed relationship between two variables.
Full mediation.
Maximum evidence for mediation, also called full mediation, would occur if inclusion of the mediation variable drops the relationship between the independent variable and dependent variable (see pathway "c′" in the diagram above) to zero.
Partial mediation.
Partial mediation maintains that the mediating variable accounts for some, but not all, of the relationship between the independent variable and dependent variable. Partial mediation implies that there is not only a significant relationship between the mediator and the dependent variable, but also some direct relationship between the independent and dependent variable.
In order for either full or partial mediation to be established, the reduction in variance explained by the independent variable must be significant as determined by one of several tests, such as the Sobel test. The effect of an independent variable on the dependent variable can become nonsignificant when the mediator is introduced simply because a trivial amount of variance is explained (i.e., not true mediation). Thus, it is imperative to show a significant reduction in variance explained by the independent variable before asserting either full or partial mediation. It is possible to have statistically significant indirect effects in the absence of a total effect. This can be explained by the presence of several mediating paths that cancel each other out, and become noticeable when one of the cancelling mediators is controlled for. This implies that the terms 'partial' and 'full' mediation should always be interpreted relative to the set of variables that are present in the model. In all cases, the operation of "fixing a variable" must be distinguished from that of "controlling for a variable," which has been inappropriately used in the literature. The former stands for physically fixing, while the latter stands for conditioning on, adjusting for, or adding to the regression model. The two notions coincide only when all error terms (not shown in the diagram) are statistically uncorrelated. When errors are correlated, adjustments must be made to neutralize those correlations before embarking on mediation analysis (see Bayesian network).
Sobel's test.
Sobel's test is performed to determine if the relationship between the independent variable and dependent variable has been significantly reduced after inclusion of the mediator variable. In other words, this test assesses whether a mediation effect is significant. It compares the relationship between the independent variable and the dependent variable with the corresponding relationship once the mediation factor is included.
The Sobel test is more accurate than the Baron and Kenny steps explained above; however, it does have low statistical power. As such, large sample sizes are required in order to have sufficient power to detect significant effects. This is because the key assumption of Sobel's test is the assumption of normality. Because Sobel's test evaluates a given sample on the normal distribution, small sample sizes and skewness of the sampling distribution can be problematic (see Normal distribution for more details). Thus, the rule of thumb as suggested by MacKinnon et al. (2002) is that a sample size of 1000 is required to detect a small effect, a sample size of 100 is sufficient in detecting a medium effect, and a sample size of 50 is required to detect a large effect.
The equation for the Sobel test statistic is:
formula_4
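Given estimates of the two path coefficients and their standard errors, the statistic is a one-line computation; the numbers below are hypothetical:

```python
from math import sqrt, erf

def sobel_z(a, se_a, b, se_b):
    # Sobel statistic for the indirect effect a*b.
    return (a * b) / sqrt(b**2 * se_a**2 + a**2 * se_b**2)

def two_sided_p(z):
    # p-value against the standard normal reference distribution.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

z = sobel_z(a=0.6, se_a=0.05, b=0.5, se_b=0.06)  # hypothetical estimates
print(z, two_sided_p(z))
```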
Preacher–Hayes bootstrap method.
The bootstrapping method provides some advantages to the Sobel's test, primarily an increase in power. The Preacher and Hayes bootstrapping method is a non-parametric test and does not impose the assumption of normality. Therefore, if the raw data is available, the bootstrap method is recommended. Bootstrapping involves repeatedly randomly sampling observations with replacement from the data set to compute the desired statistic in each resample. Computing over hundreds, or thousands, of bootstrap resamples provide an approximation of the sampling distribution of the statistic of interest. The Preacher–Hayes method provides point estimates and confidence intervals by which one can assess the significance or nonsignificance of a mediation effect. Point estimates reveal the mean over the number of bootstrapped samples and if zero does not fall between the resulting confidence intervals of the bootstrapping method, one can confidently conclude that there is a significant mediation effect to report.
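A bare-bones percentile bootstrap of the indirect effect can be written as follows; treat this as a generic sketch rather than the canonical Preacher–Hayes implementation (their macros include options, such as bias correction, that are omitted here):

```python
import numpy as np

def indirect_effect(x, me, y):
    # a*b from the two fits Me ~ X and Y ~ X + Me.
    a = np.polyfit(x, me, 1)[0]
    X = np.column_stack([np.ones_like(x), x, me])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

def bootstrap_ci(x, me, y, reps=5000, level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    n, stats = len(x), np.empty(reps)
    for i in range(reps):
        idx = rng.integers(0, n, n)   # resample cases with replacement
        stats[i] = indirect_effect(x[idx], me[idx], y[idx])
    tail = 100 * (1 - level) / 2
    return np.percentile(stats, [tail, 100 - tail])
```

Mediation would be reported as significant when the returned interval excludes zero.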
Significance of mediation.
As outlined above, there are a few different options one can choose from to evaluate a mediation model.
Bootstrapping is becoming the most popular method of testing mediation because it does not require the normality assumption to be met, and because it can be effectively utilized with smaller sample sizes ("N" < 25). However, mediation continues to be most frequently determined using the logic of Baron and Kenny or the Sobel test. It is becoming increasingly more difficult to publish tests of mediation based purely on the Baron and Kenny method or tests that make distributional assumptions such as the Sobel test. Thus, it is important to consider your options when choosing which test to conduct.
Approaches to mediation.
While the concept of mediation as defined within psychology is theoretically appealing, the methods used to study mediation empirically have been challenged by statisticians and epidemiologists and interpreted formally.
Criticisms of mediation measurement.
Potentially unnecessary step.
Hayes (2009) critiqued Baron and Kenny's mediation steps approach, and as of 2019, David A. Kenny on his website stated that mediation can exist in the absence of a 'significant' total effect (sometimes referred to as "inconsistent mediation"), and therefore step 1 of the original 1986 approach may not be needed. Later publications by Hayes questioned the concepts of full mediation and partial mediation, and advocated for the abandonment of these terms and of the steps in classical (1986) mediation.
Importance of caution.
Experimental approaches to mediation must be carried out with caution. First, it is important to have strong theoretical support for the exploratory investigation of a potential mediating variable.
A criticism of a mediation approach rests on the ability to manipulate and measure a mediating variable. Thus, one must be able to manipulate the proposed mediator in an acceptable and ethical fashion. As such, one must be able to measure the intervening process without interfering with the outcome. The mediator must also be able to establish construct validity of manipulation.
One of the most common criticisms of the measurement-of-mediation approach is that it is ultimately a correlational design. Consequently, it is possible that some other third variable, independent from the proposed mediator, could be responsible for the proposed effect. However, researchers have worked hard to provide counter-evidence to this disparagement. Specifically, the following counter-arguments have been put forward:
Mediation can be an extremely useful and powerful statistical test; however, it must be used properly. It is important that the measures used to assess the mediator and the dependent variable are theoretically distinct and that the independent variable and mediator cannot interact. Should there be an interaction between the independent variable and the mediator one would have grounds to investigate moderation.
Other third variables.
Confounding.
Another model that is often tested is one in which competing variables in the model are alternative potential mediators or an unmeasured cause of the dependent variable. An additional variable in a causal model may obscure or confound the relationship between the independent and dependent variables. Potential confounders are variables that may have a causal impact on both the independent variable and dependent variable. They include common sources of measurement error (as discussed above) as well as other influences shared by both the independent and dependent variables.
In experimental studies, there is a special concern about aspects of the experimental manipulation or setting that may account for study effects, rather than the motivating theoretical factor. Any of these problems may produce spurious relationships between the independent and dependent variables as measured. Ignoring a confounding variable may bias empirical estimates of the causal effect of the independent variable.
Suppression.
A suppressor variable increases the predictive validity of another variable when included in a regression equation. Suppression can occur when a single causal variable is related to an outcome variable through two separate mediator variables, and when one of those mediated effects is positive and one is negative. In such a case, each mediator variable suppresses or conceals the effect that is carried through the other mediator variable. For example, higher intelligence scores (a causal variable, "A") may cause an increase in error detection (a mediator variable, "B") which in turn may cause a decrease in errors made at work on an assembly line (an outcome variable, "X"); at the same time, intelligence could also cause an increase in boredom ("C"), which in turn may cause an "increase" in errors ("X"). Thus, in one causal path intelligence decreases errors, and in the other it increases them. When neither mediator is included in the analysis, intelligence appears to have no effect or a weak effect on errors. However, when boredom is controlled intelligence will appear to decrease errors, and when error detection is controlled intelligence will appear to increase errors. If intelligence could be increased while only boredom was held constant, errors would decrease; if intelligence could be increased while holding only error detection constant, errors would increase.
In general, the omission of suppressors or confounders will lead to either an underestimation or an overestimation of the effect of "A" on "X", thereby either reducing or artificially inflating the magnitude of a relationship between two variables.
Moderators.
Other important third variables are moderators. Moderators are variables that can make the relationship between two variables either stronger or weaker. Such variables further characterize interactions in regression by affecting the direction and/or strength of the relationship between "X" and "Y". A moderating relationship can be thought of as an interaction. It occurs when the relationship between variables A and B depends on the level of C. See moderation for further discussion.
Moderated mediation.
Mediation and moderation can co-occur in statistical models. It is possible to mediate moderation and moderate mediation.
Moderated mediation is when the effect of the treatment "A" on the mediator and/or the partial effect "B" on the dependent variable depend in turn on levels of another variable (moderator). Essentially, in moderated mediation, mediation is first established, and then one investigates if the mediation effect that describes the relationship between the independent variable and dependent variable is moderated by different levels of another variable (i.e., a moderator). This definition has been outlined by Muller, Judd, and Yzerbyt (2005) and Preacher, Rucker, and Hayes (2007).
Models of moderated mediation.
There are five possible models of moderated mediation, as illustrated in the diagrams below.
In addition to the models mentioned above, a new variable can also exist which moderates the relationship between the independent variable and the mediator (the A path) while at the same time moderating the relationship between the independent variable and the dependent variable (the C path).
Mediated moderation.
Mediated moderation is a variant of both moderation and mediation. This is where there is initially overall moderation and the direct effect of the moderator variable on the outcome is mediated. The main difference between mediated moderation and moderated mediation is that for the former there is initial (overall) moderation and this effect is mediated and for the latter there is no moderation but the effect of either the treatment on the mediator (path "A") is moderated or the effect of the mediator on the outcome (path "B") is moderated.
In order to establish mediated moderation, one must first establish moderation, meaning that the direction and/or the strength of the relationship between the independent and dependent variables (path "C") differs depending on the level of a third variable (the moderator variable). Researchers next look for the presence of mediated moderation when they have a theoretical reason to believe that there is a fourth variable that acts as the mechanism or process that causes the relationship between the independent variable and the moderator (path "A") or between the moderator and the dependent variable (path "C").
Example.
The following is a published example of mediated moderation in psychological research.
Participants were presented with an initial stimulus (a prime) that made them think of morality or made them think of might. They then participated in the Prisoner's Dilemma Game (PDG), in which participants pretend that they and their partner in crime have been arrested, and they must decide whether to remain loyal to their partner or to compete with their partner and cooperate with the authorities. The researchers found that prosocial individuals were affected by the morality and might primes, whereas proself individuals were not. Thus, social value orientation (proself vs. prosocial) moderated the relationship between the prime (independent variable: morality vs. might) and the behaviour chosen in the PDG (dependent variable: competitive vs. cooperative).
The researchers next looked for the presence of a mediated moderation effect. Regression analyses revealed that the type of prime (morality vs. might) mediated the moderating relationship of participants’ social value orientation on PDG behaviour. Prosocial participants who experienced the morality prime expected their partner to cooperate with them, so they chose to cooperate themselves. Prosocial participants who experienced the might prime expected their partner to compete with them, which made them more likely to compete with their partner and cooperate with the authorities. In contrast, participants with a pro-self social value orientation always acted competitively.
Regression equations for moderated mediation and mediated moderation.
Muller, Judd, and Yzerbyt (2005) outline three fundamental models that underlie both moderated mediation and mediated moderation. "Mo" represents the moderator variable(s), "Me" represents the mediator variable(s), and "εi" represents the measurement error of each regression equation.
Step 1.
Moderation of the relationship between the independent variable (X) and the dependent variable (Y), also called the overall treatment effect (path "C" in the diagram).
formula_5
Step 2.
Moderation of the relationship between the independent variable and the mediator (path "A").
formula_6
Step 3.
Moderation of both the relationship between the independent and dependent variables (path "C") and the relationship between the mediator and the dependent variable (path "B").
formula_7
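Each of these equations is an ordinary regression with product terms added for the interactions. A minimal sketch of Steps 2 and 3 (simulated data; names and coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
x, mo = rng.normal(size=n), rng.normal(size=n)
me = 0.5 * x + 0.3 * x * mo + rng.normal(size=n)             # path A moderated
y = 0.2 * x + 0.4 * me + 0.2 * me * mo + rng.normal(size=n)  # path B moderated

def fit(target, *cols):
    # OLS with an intercept and the given columns, in order.
    X = np.column_stack((np.ones(n),) + cols)
    return np.linalg.lstsq(X, target, rcond=None)[0]

b5 = fit(me, x, mo, x * mo)              # Step 2 equation
b6 = fit(y, x, mo, x * mo, me, me * mo)  # Step 3 equation
print("beta_53 (X*Mo -> Me):", round(b5[3], 2))  # moderation of path A
print("beta_65 (Me*Mo -> Y):", round(b6[5], 2))  # moderation of path B
```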
Causal mediation analysis.
Fixing versus conditioning.
Mediation analysis quantifies the extent to which a variable participates in the transmittance of change from a cause to its effect. It is inherently a causal notion, hence it cannot be defined in statistical terms. Traditionally, however, the bulk of mediation analysis has been conducted within the confines of linear regression, with statistical terminology masking the causal character of the relationships involved. This led to difficulties, biases, and limitations that have been alleviated by modern methods of causal analysis, based on causal diagrams and counterfactual logic.
The source of these difficulties lies in defining mediation in terms of changes induced by adding a third variable into a regression equation. Such statistical changes are epiphenomena which sometimes accompany mediation but, in general, fail to capture the causal relationships that mediation analysis aims to quantify.
The basic premise of the causal approach is that it is not always appropriate to "control" for the mediator "M" when we seek to estimate the direct effect of "X" on "Y" (see the Figure above). The classical rationale for "controlling" for "M" is that, if we succeed in preventing "M" from changing, then whatever changes we measure in "Y" are attributable solely to variations in "X", and we are justified then in proclaiming the effect observed as the "direct effect of "X" on "Y"". Unfortunately, "controlling for "M"" does not physically prevent "M" from changing; it merely narrows the analyst's attention to cases of equal "M" values. Moreover, the language of probability theory does not possess the notation to express the idea of "preventing "M" from changing" or "physically holding "M" constant". The only operator probability provides is "conditioning", which is what we do when we "control" for "M", or add "M" as a regressor in the equation for "Y". The result is that, instead of physically holding "M" constant (say at "M" = "m") and comparing "Y" for units under "X" = 1 to those under "X" = 0, we allow "M" to vary but ignore all units except those in which "M" achieves the value "M" = "m". These two operations are fundamentally different, and yield different results, except in the case of no omitted variables. Improperly conditioning on a mediator can be a type of bad control.
To illustrate, assume that the error terms of "M" and "Y" are correlated. Under such conditions, the structural coefficients "B" and "A" (between "M" and "Y" and between "Y" and "X") can no longer be estimated by regressing "Y" on "X" and "M". In fact, the regression slopes may both be nonzero even when "C" is zero. This has two consequences. First, new strategies must be devised for estimating the structural coefficients "A", "B" and "C". Second, the basic definitions of direct and indirect effects must go beyond regression analysis, and should invoke an operation that mimics "fixing "M"", rather than "conditioning on "M"".
Definitions.
Such an operator, denoted do("M" = "m"), was defined in Pearl (1994) and it operates by removing the equation of "M" and replacing it by a constant "m". For example, if the basic mediation model consists of the equations:
formula_8
then after applying the operator do("M" = "m") the model becomes:
formula_9
and after applying the operator do("X" = "x") the model becomes:
formula_10
where the functions "f" and "g", as well as the distributions of the error terms ε1 and ε3, remain unaltered. If we further rename the variables "M" and "Y" resulting from do("X" = "x") as "M"("x") and "Y"("x"), respectively, we obtain what came to be known as "potential outcomes" or "structural counterfactuals". These new variables provide convenient notation for defining direct and indirect effects. In particular, four types of effects have been defined for the transition from "X" = 0 to "X" = 1:
(a) Total effect –
formula_11
(b) Controlled direct effect –
formula_12
(c) Natural direct effect –
formula_13
(d) Natural indirect effect –
formula_14
Where "E"[ ] stands for expectation taken over the error terms.
These effects have the following interpretations:
A controlled version of the indirect effect does not exist because there is no way of disabling the direct effect by fixing a variable to a constant.
According to these definitions the total effect can be decomposed as a sum
formula_15
where "NIEr" stands for the reverse transition, from
"X" = 1 to "X" = 0; it becomes additive in linear systems,
where reversal of transitions entails sign reversal.
The power of these definitions lies in their generality; they are applicable to models with arbitrary nonlinear interactions, arbitrary dependencies among the disturbances, and both continuous and categorical variables.
The mediation formula.
In linear analysis, all effects are determined by sums of products of structural coefficients, giving
formula_16
Therefore, all effects are estimable whenever the model is identified. In non-linear systems, more stringent conditions are needed for estimating the direct and indirect effects.
For example, if no confounding exists (i.e., ε1, ε2, and ε3 are mutually independent), the following formulas can be derived:
formula_17
The last two equations are called "Mediation Formulas" and have become the target of estimation in many studies of mediation. They give distribution-free expressions for direct and indirect effects and demonstrate that, despite the arbitrary nature of the error distributions and the functions "f", "g", and "h", mediated effects can nevertheless be estimated from data using regression. The analyses of "moderated mediation" and "mediating moderators" fall as special cases of the causal mediation analysis, and the mediation formulas identify how various interactions coefficients contribute to the necessary and sufficient components of mediation.
Example.
Assume the model takes the form
formula_18
where the parameter formula_19 quantifies the degree to which "M" modifies the effect of "X" on "Y". Even when all parameters are estimated from data, it is still not obvious what combinations of parameters measure the direct and indirect effect of "X" on "Y", or, more practically, how to assess the fraction of the total effect formula_20 that is "explained" by mediation and the fraction of formula_20 that is "owed" to mediation. In linear analysis, the former fraction is captured by the product formula_21, the latter by the difference formula_22, and the two quantities coincide. In the presence of interaction, however, each fraction demands a separate analysis, as dictated by the Mediation Formula, which yields:
formula_23
Thus, the fraction of output response for which mediation would be "sufficient" is
formula_24
while the fraction for which mediation would be "necessary" is
formula_25
These fractions involve non-obvious combinations of the model's parameters, and can be constructed mechanically with the help of the Mediation Formula. Significantly, due to interaction, a direct effect can be sustained even when the parameter formula_26 vanishes and, moreover, a total effect can be sustained even when both the direct and indirect effects vanish. This illustrates that estimating parameters in isolation tells us little about the effect of mediation and, more generally, mediation and moderation are intertwined and cannot be assessed separately.
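These decompositions can be checked numerically. The sketch below simulates the interaction model with arbitrarily chosen parameter values and compares Monte Carlo estimates of the natural direct and indirect effects against the closed forms derived above:

```python
import numpy as np

b0, b1 = 0.5, 0.8                     # parameters of the M equation
c0, c1, c2, c3 = 0.1, 0.4, 0.6, 0.3   # parameters of the Y equation
rng = np.random.default_rng(3)
n = 10**6
e2, e3 = rng.normal(size=n), rng.normal(size=n)

def M(x):
    return b0 + b1 * x + e2                       # M = b0 + b1*X + eps2

def Y(x, m):
    return c0 + c1 * x + c2 * m + c3 * x * m + e3  # Y with X*M interaction

nde = (Y(1, M(0)) - Y(0, M(0))).mean()  # natural direct effect
nie = (Y(0, M(1)) - Y(0, M(0))).mean()  # natural indirect effect
print(nde, c1 + b0 * c3)                # both ~ 0.55
print(nie, b1 * c2)                     # both ~ 0.48
```

Note that the same error draws are reused across counterfactual calls, mirroring the definition of potential outcomes on the same units.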
References.
"As of 19 June 2014, this article is derived in whole or in part from "Causal Analysis in Theory and Practice". The copyright holder has licensed the content in a manner that permits reuse under and . All relevant terms must be followed."
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\to "
},
{
"math_id": 1,
"text": "Y=\\beta_{10} +\\beta_{11}X + \\varepsilon_1"
},
{
"math_id": 2,
"text": "Me=\\beta_{20} +\\beta_{21}X + \\varepsilon_2"
},
{
"math_id": 3,
"text": "Y=\\beta_{30} +\\beta_{31}X +\\beta_{32}Me + \\varepsilon_3"
},
{
"math_id": 4,
"text": "z= \\frac{ab}{\\sqrt{b^2s^2_a + a^2s^2_b}}"
},
{
"math_id": 5,
"text": "Y=\\beta_{40} +\\beta_{41}X +\\beta_{42}Mo +\\beta_{43}XMo + \\varepsilon_4"
},
{
"math_id": 6,
"text": "Me=\\beta_{50} +\\beta_{51}X +\\beta_{52}Mo +\\beta_{53}XMo + \\varepsilon_5"
},
{
"math_id": 7,
"text": "Y=\\beta_{60} +\\beta_{61}X +\\beta_{62}Mo +\\beta_{63}XMo +\\beta_{64}Me +\\beta_{65}MeMo + \\varepsilon_6"
},
{
"math_id": 8,
"text": " X=f(\\varepsilon_1),~~M=g(X,\\varepsilon_2),~~Y=h(X,M,\\varepsilon_3) , "
},
{
"math_id": 9,
"text": " X=f(\\varepsilon_1),~~M=m,~~Y=h(X,m,\\varepsilon_3) "
},
{
"math_id": 10,
"text": "X=x, M=g(x, \\varepsilon_2), Y=h(x,M,\\varepsilon_3) "
},
{
"math_id": 11,
"text": "TE = E [Y(1) - Y(0)] "
},
{
"math_id": 12,
"text": " CDE(m) = E [Y(1,m) - Y(0,m) ] "
},
{
"math_id": 13,
"text": "NDE = E [Y(1,M(0)) - Y(0,M(0))] "
},
{
"math_id": 14,
"text": " NIE = E [Y(0,M(1)) - Y(0,M(0))] "
},
{
"math_id": 15,
"text": "TE = NDE - NIE_r "
},
{
"math_id": 16,
"text": " \n\\begin{align}\nTE & = C + AB \\\\\nCDE(m) & = NDE = C, \\text{ independent of } m\\\\\nNIE & = AB.\n\\end{align}\n"
},
{
"math_id": 17,
"text": " \n\\begin{align}\nTE & = E(Y\\mid X=1)- E(Y\\mid X=0)\\\\\nCDE(m) & = E(Y\\mid X=1, M=m) - E(Y\\mid X=0, M=m) \\\\\nNDE & = \\sum_m [E(Y|X=1, M=m) - E(Y\\mid X=0, M=m) ] P(M=m\\mid X=0) \\\\\nNIE & = \\sum_m [P(M=m\\mid X=1) - P(M=m\\mid X=0)] E(Y\\mid X=0, M=m).\n\\end{align}\n"
},
{
"math_id": 18,
"text": " \n\\begin{align}\nX & = \\varepsilon_1 \\\\\nM & = b_0 + b_1X + \\varepsilon_2 \\\\\nY & = c_0 + c_1X + c_2M + c_3XM + \\varepsilon_3 \n\\end{align}\n"
},
{
"math_id": 19,
"text": "c_3"
},
{
"math_id": 20,
"text": "TE"
},
{
"math_id": 21,
"text": "b_1 c_2 / TE"
},
{
"math_id": 22,
"text": "(TE - c_1)/TE"
},
{
"math_id": 23,
"text": "\n\\begin{align}\nNDE & = c_1 + b_0 c_3 \\\\\nNIE & = b_1 c_2 \\\\\nTE & = c_1 + b_0 c_3 + b_1(c_2 + c_3) \\\\\n & = NDE + NIE + b_1 c_3.\n\\end{align}\n"
},
{
"math_id": 24,
"text": " \\frac{NIE}{TE} = \\frac{b_1 c_2}{c_1 + b_0 c_3 + b_1 (c_2 + c_3)}, "
},
{
"math_id": 25,
"text": " 1- \\frac{NDE}{TE} = \\frac{b_1 (c_2 +c_3)}{c_1 + b_0c_3 + b_1 (c_2 + c_3)}. "
},
{
"math_id": 26,
"text": "c_1"
}
]
| https://en.wikipedia.org/wiki?curid=7072682 |
70728265 | Joshua 21 | Book of Joshua, chapter 21
Joshua 21 is the twenty-first chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the designation of "Levitical cities", a part of a section comprising Joshua 13:1–21:45 about the Israelites allotting the land of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 45 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The narrative of the Israelites allotting the land of Canaan, comprising Joshua 13:1–21:45, has the following outline:
A. Preparations for Distributing the Land (13:1–14:15)
B. The Allotment for Judah (15:1–63)
C. The Allotment for Joseph (16:1–17:18)
D. Land Distribution at Shiloh (18:1–19:51)
E. Levitical Distribution and Conclusion (20:1–21:45)
1. Cities of Refuge (20:1–9)
a. Regulations for Cities of Refuge (20:1–6)
b. Designation of Cities of Refuge (20:7–9)
2. Levitical Cities (21:1–42)
a. Approach to Joshua and Eleazar (21:1–3)
b. Initial Summary (21:4–8)
c. Priestly Kohathite Allotment (21:9–19)
d. Non-Priestly Kohathite Allotment (21:20–26)
e. Gershonite Allotment (21:27–33)
f. Merarite Allotment (21:34–40)
g. Levitical Summary (21:41–42)
3. Summary of Divine Faithfulness (21:43–45)
Levitical cities (21:1–42).
It is now the turn of the Levites to be granted their part of the land by Joshua and Eleazar at Shiloh (verses 1–2). The Levites' 'inheritance' is YHWH himself (Numbers 18:20; Deuteronomy 18:1–2, cf. Deuteronomy 10:9; in practice, they would receive shares of the Israelites' sacrifices and offerings; Numbers 18:9–24), so they would not receive tribal territory (13:14; 14:3–4) but only towns and their pasturelands throughout Israel (verses 1–3), a total of forty-eight Levitical cities (Numbers 35), including the six cities of refuge (Numbers 35:6–7, all noted in Joshua 21; verses 11, 21, 27, 32, 36, 38). The cities may have mainly 'served as residences and places where Levites could enjoy some personal wealth and status, while performing their priestly duties elsewhere' (Deuteronomy 18:6–8; Judges 18:3–6).
Cities were given out of the other tribes by lot to the Levites, according to their division:
Summary of Divine Faithfulness (21:43–45).
The summarizing conclusion notes the fulfilment of the promise and rest from enemies (cf. Joshua 11:23). These verses close the record of the division of the land and tie the two halves of the book together (chapters 1–12 and chapters 13–21). Their declarations are consistent with the fact that the Israelites had not yet possessed all the cities allotted to the various tribes (Judges 1:21–36), nor at any time subdued the whole country promised to them (Numbers 34:1–12), because God intended that the native population should not be annihilated suddenly (Deuteronomy 7:22). By this time, however, the Canaanites were broken in strength, holding only isolated spots in the very midst of the tribes of God's people, so that overall the conquest of Canaan was 'already "ex parte Dei" a perfect work'.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70728265 |
70728366 | Morphism of finite type | In commutative algebra, given a homomorphism "A" → "B" of commutative rings, "B" is called an "A"-algebra of finite type if "B" is finitely generated as an "A"-algebra. It is much stronger for "B" to be a finite "A"-algebra, which means that "B" is finitely generated as an "A"-module. For example, for any commutative ring "A" and natural number "n", the polynomial ring "A"["x"1, ..., "xn"] is an "A"-algebra of finite type, but it is not a finite "A"-module unless "A" = 0 or "n" = 0. Another example of a finite-type homomorphism that is not finite is formula_0.
The analogous notion in terms of schemes is: a morphism "f": "X" → "Y" of schemes is of finite type if "Y" has a covering by affine open subschemes "Vi" = Spec "Ai" such that "f"−1("Vi") has a finite covering by affine open subschemes "Uij" = Spec "Bij" with "Bij" an "Ai"-algebra of finite type. One also says that "X" is of finite type over "Y".
For example, for any natural number "n" and field "k", affine "n"-space and projective "n"-space over "k" are of finite type over "k" (that is, over Spec "k"), while they are not finite over "k" unless "n" = 0. More generally, any quasi-projective scheme over "k" is of finite type over "k".
The Noether normalization lemma says, in geometric terms, that every affine scheme "X" of finite type over a field "k" has a finite surjective morphism to affine space A"n" over "k", where "n" is the dimension of "X". Likewise, every projective scheme "X" over a field has a finite surjective morphism to projective space P"n", where "n" is the dimension of "X". | [
{
"math_id": 0,
"text": "\\mathbb{C}[t] \\to \\mathbb{C}[t][x,y]/(y^2 - x^3 - t)"
}
]
| https://en.wikipedia.org/wiki?curid=70728366 |
70731078 | Joshua 23 | Book of Joshua, chapter 23
Joshua 23 is the twenty-third chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in 7th century BCE. This chapter records Joshua's farewell address to the tribes of Israel, a part of a section comprising Joshua 22:1–24:33 about the Israelites preparing for life in the land of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 16 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The narrative of the Israelites preparing for life in the land, comprising verses 22:1 to 24:33 of the Book of Joshua, has the following outline:
A. The Jordan Altar (22:1–34)
B. Joshua's Farewell (23:1–16)
1. The Setting (23:1–2a)
2. The Assurance of the Allotment (23:2b–5)
3. Encouragement to Enduring Faithfulness (23:6–13)
4. The Certain Fulfillment of God's Word (23:14–16)
C. Covenant and Conclusion (24:1–33)
The book of Joshua is concluded with two distinct ceremonies, each seeming in itself to be a finale:
Joshua's Farewell (23:1–16).
Joshua's farewell address to the gathered Israel tribes in this chapter is linked to the narrative of conquest, connecting with the resumptive statements in Joshua 11:23 and 21:43–45 of the fulfilment of promise, complete conquest, and rest from war. The opening verse (1b) repeats word for word a phrase from Joshua 13:1 about Joshua's advanced age. The address warns the people to hold fast to the law of Moses (verse 6; cf. Joshua 1:7), and to 'love' YHWH himself (verse 11, cf. Deuteronomy 6:5—the term 'love' denotes 'covenant loyalty'). They must not copy the worship practices of the native peoples that still lived among them (verses 7, 16), nor intermarry with them (verse 12; cf. Deuteronomy 7:1–5). If they do, YHWH will cease to drive out the nations, and Israel people themselves will be driven off their acquired land (verses 15, 16; cf. Deuteronomy 30:17–18). Here Joshua states the two possibilities of the covenant: "faithfulness and possession", or "unfaithfulness and loss", as a choice with its consequences (cf. Deuteronomy 28). Furthermore, Joshua warns that the 'curses' of the covenant will certainly come (verse 15b; cf. Deuteronomy 4:25–31; 30:1–5).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70731078 |
7073138 | Fin (extended surface) | In the study of heat transfer, fins are surfaces that extend from an object to increase the rate of heat transfer to or from the environment by increasing convection. The amount of conduction, convection, or radiation of an object determines the amount of heat it transfers. Increasing the temperature gradient between the object and the environment, increasing the convection heat transfer coefficient, or increasing the surface area of the object increases the heat transfer. Sometimes it is not feasible or economical to change the first two options. Thus, adding a fin to an object increases the surface area and can sometimes be an economical solution to heat transfer problems.
One-piece finned heat sinks are produced by extrusion, casting, skiving, or milling.
General case.
To create a tractable equation for the heat transfer of a fin, many assumptions need to be made: steady-state operation, constant material properties (such as the thermal conductivity), no internal heat generation, one-dimensional conduction along the fin, a uniform convection coefficient over the surface, and negligible radiation.
With these assumptions, conservation of energy can be used to create an energy balance for a differential cross section of the fin:
formula_0
Fourier’s law states that
formula_1
where formula_2 is the cross-sectional area of the differential element. Furthermore, the convective heat flux can be determined via the definition of the heat transfer coefficient h,
formula_3
where formula_4 is the temperature of the surroundings. The differential convective heat flux can then be determined from the perimeter of the fin cross-section P,
formula_5
The equation of energy conservation can now be expressed in terms of temperature,
formula_6
Rearranging this equation and using the definition of the derivative yields the following differential equation for temperature,
formula_7;
the derivative on the left can be expanded to the most general form of the fin equation,
formula_8
The cross-sectional area, perimeter, and temperature can all be functions of x.
Uniform cross-sectional area.
If the fin has a constant cross-section along its length, the area and perimeter are constant and the differential equation for temperature is greatly simplified to
formula_9
where formula_10 and formula_11. The constants formula_12 and formula_13 can now be found by applying the proper boundary conditions.
Solutions.
The base of the fin is typically set to a constant reference temperature, formula_14. There are, however, four common fin tip (formula_15) conditions: the tip can be exposed to convective heat transfer, insulated, held at a constant temperature, or so far away from the base as to reach the ambient temperature.
For the first case, the second boundary condition is that there is free convection at the tip. Therefore,
formula_16
which simplifies to
formula_17
The two boundary conditions can now be combined to produce
formula_18
This equation can be solved for the constants formula_12 and formula_13 to find the temperature distribution, which is in the table below.
A similar approach can be used to find the constants of integration for the remaining cases. For the second case, the tip is assumed to be insulated, or in other words to have a heat flux of zero. Therefore,
formula_19
For the third case, the temperature at the tip is held constant. Therefore, the boundary condition is:
formula_20
For the fourth and final case, the fin is assumed to be infinitely long. Therefore, the boundary condition is:
formula_21
Finally, we can use the temperature distribution and Fourier's law at the base of the fin to determine the overall rate of heat transfer,
formula_22
The results of the solution process are summarized in the table below.
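As a numerical sketch, the adiabatic-tip (case 2) solution can be evaluated directly: for that case the closed-form results are θ(x) = θ_b cosh(m(L−x))/cosh(mL) and a total heat rate of √(hPkA_c)·θ_b·tanh(mL). The material properties and dimensions below are illustrative assumptions, not values taken from the text.

```python
import numpy as np

# Illustrative sketch: straight rectangular fin with an insulated (adiabatic) tip.
# Closed-form results for this case:
#   theta(x) = theta_b * cosh(m (L - x)) / cosh(m L)
#   Q        = sqrt(h P k A_c) * theta_b * tanh(m L)
# All numbers below are assumed example values.

k = 200.0                      # thermal conductivity (aluminium), W/(m K)
h = 25.0                       # convection coefficient, W/(m^2 K)
t, w, L = 2e-3, 50e-3, 40e-3   # fin thickness, width, length, m
T_b, T_inf = 100.0, 25.0       # base and ambient temperatures, deg C

A_c = t * w                    # cross-sectional area, m^2
P = 2 * (t + w)                # perimeter, m
m = np.sqrt(h * P / (k * A_c)) # fin parameter, 1/m
theta_b = T_b - T_inf          # base temperature excess

x = np.linspace(0.0, L, 5)
theta = theta_b * np.cosh(m * (L - x)) / np.cosh(m * L)  # temperature excess along fin
Q = np.sqrt(h * P * k * A_c) * theta_b * np.tanh(m * L)  # total fin heat rate, W

print("m =", round(m, 2), "1/m")
print("tip temperature =", round(theta[-1] + T_inf, 1), "deg C")
print("fin heat rate =", round(Q, 2), "W")
```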
Performance.
Fin performance can be described in three different ways. The first is fin effectiveness. It is the ratio of the fin heat transfer rate (formula_23) to the heat transfer rate of the object if it had no fin. The formula for this is:
formula_24
where formula_25 is the fin cross-sectional area at the base. Fin performance can also be characterized by fin efficiency. This is the ratio of the fin heat transfer rate to the heat transfer rate of the fin if the entire fin were at the base temperature,
formula_26
formula_27 in this equation is equal to the surface area of the fin. The fin efficiency will always be less than one, as assuming the temperature throughout the fin is at the base temperature would increase the heat transfer rate.
The third way fin performance can be described is with overall surface efficiency,
formula_28
where formula_29 is the total area and formula_30 is the sum of the heat transfer from the unfinned base area and all of the fins. This is the efficiency for an array of fins.
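A short numerical sketch of these three measures follows, using the same assumed fin as above. For an adiabatic-tip straight fin the efficiency reduces to tanh(mL)/(mL), and the overall surface efficiency of an array follows from the standard identity η_o = 1 − (N·A_f/A_t)(1 − η_f); all values are illustrative assumptions.

```python
import numpy as np

# Sketch of the three fin performance measures for the adiabatic-tip fin above.
# For that case Q_f = sqrt(h P k A_c) * theta_b * tanh(mL), so:
#   effectiveness eps_f = Q_f / (h * A_c * theta_b) = sqrt(k P / (h A_c)) * tanh(mL)
#   efficiency    eta_f = Q_f / (h * A_f * theta_b) = tanh(mL) / (mL), with A_f = P L.
k, h = 200.0, 25.0
t, w, L = 2e-3, 50e-3, 40e-3
A_c, P = t * w, 2 * (t + w)
m = np.sqrt(h * P / (k * A_c))

eta_f = np.tanh(m * L) / (m * L)                     # fin efficiency, always < 1
eps_f = np.sqrt(k * P / (h * A_c)) * np.tanh(m * L)  # fin effectiveness

# Overall surface efficiency for an array of N fins on a base plate of area A_b:
N, A_b = 10, 0.01
A_f = P * L                         # convective area of one fin
A_t = N * A_f + (A_b - N * A_c)     # total area: fins plus exposed (unfinned) base
eta_o = 1 - N * A_f / A_t * (1 - eta_f)

print(f"eta_f = {eta_f:.3f}, eps_f = {eps_f:.1f}, eta_o = {eta_o:.3f}")
```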
Inverted fins (cavities).
Open cavities are defined as the regions formed between adjacent fins and are essential promoters of nucleate boiling or condensation. These cavities are usually utilized to extract heat from a variety of heat-generating bodies. Since 2004, many researchers have searched for the optimal design of such cavities.
Uses.
Fins are most commonly used in heat exchanging devices such as radiators in cars, computer CPU heatsinks, and heat exchangers in power plants. They are also used in newer technology such as hydrogen fuel cells. Nature has also taken advantage of the phenomenon of fins; the ears of jackrabbits and fennec foxes act as fins to release heat from the blood that flows through them.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\dot{Q}(x+dx)=\\dot{Q}(x)+d\\dot{Q}_{conv}."
},
{
"math_id": 1,
"text": "\\dot{Q}(x)=-kA_c \\left ( \\frac{dT}{dx} \\right ),"
},
{
"math_id": 2,
"text": "A_c"
},
{
"math_id": 3,
"text": "q''=h\\left (T-T_\\infty\\right ),"
},
{
"math_id": 4,
"text": "T_\\infty"
},
{
"math_id": 5,
"text": "d\\dot{Q}_{conv}=Ph\\left (T-T_\\infty\\right )dx."
},
{
"math_id": 6,
"text": "-kA_c \\left.\\left ( \\frac{dT}{dx} \\right )\\right\\vert_{x+dx} = -kA_c \\left.\\left ( \\frac{dT}{dx} \\right )\\right\\vert_{x} + Ph\\left (T-T_\\infty\\right )dx."
},
{
"math_id": 7,
"text": "k\\frac{d}{dx}\\left(A_c\\frac{dT}{dx}\\right) - Ph\\left (T-T_\\infty\\right) = 0"
},
{
"math_id": 8,
"text": "kA_c\\frac{d^2T}{dx^2} + k\\frac{dA_c}{dx}\\frac{dT}{dx} - Ph\\left (T-T_\\infty\\right) = 0."
},
{
"math_id": 9,
"text": "\\frac{d^2T}{dx^2}=\\frac{hP}{kA_c}\\left(T-T_\\infty\\right)."
},
{
"math_id": 10,
"text": "m^2=\\frac{hP}{kA_c}"
},
{
"math_id": 11,
"text": "\\theta(x)=T(x)-T_\\infty"
},
{
"math_id": 12,
"text": "C_1"
},
{
"math_id": 13,
"text": "C_2"
},
{
"math_id": 14,
"text": "\\theta_b(x=0)=T_b-T_\\infty"
},
{
"math_id": 15,
"text": "x=L"
},
{
"math_id": 16,
"text": "hA_c\\left(T(L)-T_\\infty\\right)=-kA_c\\left.\\left(\\frac{dT}{dx}\\right)\\right\\vert_{x=L},"
},
{
"math_id": 17,
"text": "h\\theta(L)=-k\\left.\\frac{d\\theta}{dx}\\right\\vert_{x=L}."
},
{
"math_id": 18,
"text": "h\\left(C_1e^{mL}+C_2e^{-mL}\\right)=km\\left(C_2e^{-mL}-C_1e^{mL}\\right)."
},
{
"math_id": 19,
"text": "\\left.\\frac{d\\theta}{dx}\\right\\vert_{x=L}=0."
},
{
"math_id": 20,
"text": "\\theta(L)=\\theta_L"
},
{
"math_id": 21,
"text": "\\lim_{L\\rightarrow \\infty} \\theta_L=0\\,"
},
{
"math_id": 22,
"text": "\\dot Q_\\text{total} = \\sqrt{hPkA_c}(C_2-C_1)."
},
{
"math_id": 23,
"text": "\\dot{Q}_f"
},
{
"math_id": 24,
"text": "\\epsilon_f=\\frac{\\dot{Q}_f}{hA_{c,b}\\theta_b},"
},
{
"math_id": 25,
"text": "A_{c,b}"
},
{
"math_id": 26,
"text": "\\eta_f=\\frac{\\dot{Q}_f} {h A_f \\theta_b}."
},
{
"math_id": 27,
"text": "A_f"
},
{
"math_id": 28,
"text": "\\eta_o=\\frac{\\dot{Q}_t}{hA_t\\theta_b},"
},
{
"math_id": 29,
"text": "A_t"
},
{
"math_id": 30,
"text": "\\dot{Q}_t"
}
]
| https://en.wikipedia.org/wiki?curid=7073138 |
70739650 | Space dust measurement | Space dust measurements
Space dust measurement refers to the study of small particles of extraterrestrial material, known as micrometeoroids or interplanetary dust particles (IDPs), that are present in the Solar System. These particles are typically of micrometer to sub-millimeter size and are composed of a variety of materials including silicates, metals, and carbon compounds. The study of space dust is important as it provides insight into the composition and evolution of the Solar System, as well as the potential hazards posed by these particles to spacecraft and other space-borne assets. The measurement of space dust requires the use of advanced scientific techniques such as secondary ion mass spectrometry (SIMS), optical and atomic force microscopy (AFM), and laser-induced breakdown spectroscopy (LIBS) to accurately characterize the physical and chemical properties of these particles.
Overview.
From the ground, space dust is observed as scattered sunlight from myriads of interplanetary dust particles and as meteoroids entering the atmosphere. By observing a meteor from several positions on the ground, the trajectory and the entry speed can be determined by triangulation. Atmospheric entry speeds of up to 72,000 m/s have been observed for Leonid meteors.
Even sub-millimeter sized meteoroids hitting spacecraft at speeds of several km/s (much faster than bullets) can cause significant damage. Therefore, the early US "Explorer 1", "Vanguard 1", and the Soviet "Sputnik 3" satellites carried simple 0.001 m2 sized microphone dust detectors in order to detect impacts of micron sized meteoroids. The obtained fluxes were orders of magnitude higher than those estimated from zodiacal light measurements. However, the latter determination had big uncertainties in the assumed size and heliocentric radial dust density distributions. Thermal studies in the lab with microphone detectors suggested that the high count-rates recorded were due to noise generated by temperature variations in Earth orbit.
An excellent review of the early days of space dust research was given by Fechtig, H., Leinert, Ch., and Berg, O. in the book "Interplanetary Dust".
Dust accelerators.
A dust accelerator is a critical facility to develop, test, and calibrate space dust instruments. Classic guns have muzzle velocities between a few hundred m/s and 1 km/s, whereas meteoroid speeds range from a few km/s up to several hundred km/s for nanometer sized dust particles. Only experimental light-gas guns (e.g. at NASA's Johnson Space Center, JSC) reach projectile speeds of several km/s up to 10 km/s in the laboratory. By exchanging the projectile with a sabot containing dust particles, high speed dust projectiles can be used for impact cratering and dust sensor calibration experiments.
The workhorse for hypervelocity dust impact experiments is the electrostatic dust accelerator.
Nanometer to micrometer sized conducting dust particles are electrically charged and accelerated by an electrostatic particle accelerator to speeds up to 100 km/s. Currently, operational dust accelerators exist at IRS in Stuttgart, Germany (formerly at the Max Planck Institute for Nuclear Physics in Heidelberg), and at the Laboratory for Atmospheric and Space Physics (LASP) in Boulder, Colorado. The LASP dust accelerator facility has been operational since 2011, and has been used for basic impact studies, as well as for the development of dust instruments. The facility is available for the planetary and space science communities.
Dust accelerators are used for impact cratering studies, calibration of impact ionization dust detectors, and meteor studies. Only electrically conducting particles can be used in an electrostatic dust accelerator because the dust source is located in the high-voltage terminal. James F. Vedder, at Ames Research Center, ARC, used a linear particle accelerator by charging dust particles by an ion beam in a quadrupole ion trap under visual control. This way, a wide range of dust materials could be accelerated to high speeds.
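A back-of-envelope sketch of the electrostatic acceleration: a grain carrying charge q that falls through an accelerator potential U reaches v = sqrt(2qU/m). The grain sizes, surface potential, and accelerator voltage below are assumed, illustrative values rather than the specifications of any particular facility; the trend that smaller grains reach higher speeds matches the speeds quoted above.

```python
import numpy as np

# Back-of-envelope sketch of electrostatic dust acceleration: a grain of charge q
# falling through accelerator potential U reaches v = sqrt(2 q U / m).
# Grain radius, surface potential, and accelerator voltage are assumed
# illustrative values, not specifications of any particular facility.

eps0 = 8.854e-12          # vacuum permittivity, F/m
rho = 7800.0              # density of an iron grain, kg/m^3
U = 2.0e6                 # accelerator potential, V

for r in (5e-8, 1e-7, 1e-6):          # grain radius, m
    q = 4 * np.pi * eps0 * r * 1e3    # charge of a sphere at ~1 kV surface potential
    m = (4 / 3) * np.pi * r**3 * rho  # grain mass, kg
    v = np.sqrt(2 * q * U / m)        # final speed, m/s
    print(f"r = {r:.0e} m  ->  v = {v / 1e3:.1f} km/s")
```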
Reliable dust detections.
Tennis court sized (200 m2) penetration detectors on the "Pegasus" satellites determined a much lower flux of 100 micron sized particles, one that would not pose a significant hazard to the crewed Apollo missions. The first reliable dust detections of micron sized meteoroids were obtained by the dust detectors on board the "Pioneer 8" and "9" and "HEOS 2" spacecraft. Both instruments were impact ionization detectors using coincident signals from ions and electrons released upon impact. The detectors had sensitive areas of approximately 0.01 m2 and, outside the Earth's magnetosphere, detected on average one impact per ten days.
Microcrater analyses.
Microcraters on lunar samples provide an extensive record of impacts onto the lunar surface. Uneroded glass splashes from big impacts covering crystalline lunar rocks preserve microcraters well.
The number of microcraters was measured on a single rock sample using microscopic and scanning electron microscopic analyses. The craters ranged in size from 10−8 to 10−3 m, and were correlated to the mass of meteoroids based on impact simulations. The impact speed onto the lunar surface was assumed to be 20 km/s. The age of the rocks on the surface could not be determined through traditional methods (counting the solar flare track densities), so spacecraft measurements by the Pegasus satellites were used to determine the interplanetary dust flux, specifically the crater production flux at 100 μm size. The flux of smaller meteoroids was found to be smaller than the observed cratering flux on the lunar surface due to fast ejecta from impacts of bigger meteoroids. The flux was adjusted using data from the HEOS-2 and Pioneer 8/9 space probes.
From April 1984 to January 1990, NASA's Long Duration Exposure Facility exposed several passive impact collectors (each a few square meters in area) to the space dust environment in low Earth orbit. After recovery of LDEF by the Space Shuttle "Columbia", the instrument trays were analyzed. The results generally confirmed the earlier analysis of lunar microcraters.
Optical and infrared zodiacal dust observations.
Zodiacal light observations at different heliocentric distances were performed by the Zodiacal light photometer instruments on "Helios 1" and "2" and the "Pioneer 10" and "Pioneer 11" space probes, ranging between 0.3 AU and 3.3 AU from the sun. This way, the heliocentric radial profile was determined, and shown to vary by a factor of about 100 over that distance. The Asteroid Meteoroid Detector (AMD) on Pioneer 10 and Pioneer 11 used the optical detection and triangulation of individual meteoroids to get information on their sizes and trajectories. Unfortunately, the trigger threshold was set too low, and noise corrupted the data. Zodiacal light observations at visible light wavelengths use the light scattered by interplanetary dust particles, which constitute only a few percent of the incoming light. The remainder (over 90%) is absorbed and reradiated at infrared wavelengths.
The zodiacal dust cloud is much brighter at infrared wavelengths than visible wavelengths. However, on the ground, most of these infrared wavelengths are blocked by atmospheric absorption bands. Therefore, most infrared astronomy observations are done from space observatory satellites. The Infrared Astronomical Satellite (IRAS) mapped the sky at wavelengths of 12, 25, 60, and 100 micrometers. Between wavelengths of 12 and 60 microns, zodiacal dust was a prominent feature. Later, the Diffuse Infrared Background Experiment (DIRBE) on NASA's COBE mission provided a complete high-precision survey of the zodiacal dust cloud at the same wavelengths.
IRAS sky maps showed structure in the sky brightness at infrared wavelengths. In addition to the wide, general zodiacal cloud and a broad, central asteroidal band, there were several narrow cometary trails. Follow-up observations using the Spitzer Space Telescope showed that at least 80% of all Jupiter family comets had trails. When the Earth passes through a comet trail, a meteor shower is observed from the ground. Due to the enhanced risk to spacecraft in such meteoroid streams, the European Space Agency developed the IMEX model, which follows the evolution of cometary particles and hence allows us to determine the risk of collision at specific positions and times in the inner Solar System.
Penetration detectors.
In the early 1960s, pressurized cell micrometeorite detectors were flown on the "Explorer 16" and "Explorer 23" satellites. Each satellite carried more than 200 individual gas-filled pressurized cells with metal walls of 25 and 50 microns thick. A puncture of a cell by a meteoroid impact could be detected by a pressure sensor. These instruments provided important measurements of the near-Earth meteoroid flux. In 1972 and 1973, the "Pioneer 10" and "Pioneer 11" interplanetary spacecraft carried 234 pressurized cell detectors each, mounted on the back of the main dish antenna. The stainless-steel wall thickness was 25 microns on Pioneer 10, and 50 microns on Pioneer 11. The two instruments characterized the meteoroid environment in the outer Solar System as well as near Jupiter and near Saturn.
In preparation for the Apollo Missions to the moon, three Pegasus satellites were launched by the Saturn 1 rocket into near-Earth orbit. Each satellite carried 416 individual meteoroid detectors with a total detection surface of about 200 m2. The detectors consisted of aluminum penetration sheets of various thicknesses: 171 m2 of 400 micron-thick, 16 m2 of 200 micron-thick, and 7.5 m2 of 40 micron-thick. Placed behind these penetration sheets were 12 micron-thick mylar capacitor detectors that recorded penetrations of the overlying sheet. The results showed that the meteoroid hazard is significant and meteoroid protection methods must be implemented for large space vehicles.
In 1986, the "Vega 1" and "Vega 2" missions were equipped with a new dust detector, developed by John Simpson, which used polyvinylidene difluoride (PVDF) films. This material responds to dust impacts by generating electrical charge due to impact cratering or penetration. Although PVDF detectors are also sensitive to mechanical vibrations and energetic particles, they work acceptably well as high-rate dust detectors in very dusty environments, like cometary comae or planetary rings (as was the case for the "Cassini–Huygens" Cosmic Dust Analyzer). For example, on the "Stardust" mission, the Dust Flux Monitor Instrument (DFMI) used PVDF detectors to study dust in the coma of Comet Wild 2. However, in low-dust environments such as interplanetary space, this sensitivity makes the detectors susceptible to noise. Because of this, the PVDF detectors on the Venetia Burney Student Dust Counter also needed shielded reference detectors in order to determine the background noise rate.
Modern microphone detectors.
During its flyby of Halley's Comet at a distance of 600 km, the "Giotto" spacecraft was protected from space dust by a 1 mm-thick front Whipple shield (1.85 m diameter) and a 12 mm-thick rear Kevlar shield. Mounted on the front dust shield were three piezoelectric momentum sensors of the Dust Impact Detection System (DIDSY). A fourth momentum sensor was mounted on the rear shield. These microphone detectors, together with other detectors, measured the dust distribution within the inner coma of the comet. These instruments also measured dust during "Giotto"'s encounter with the comet 26P/Grigg–Skjellerup.
On the Mercury Magnetospheric Orbiter of the "BepiColombo" mission, the Mercury Dust Monitor (MDM) will measure the dust environments of interplanetary space and Mercury. MDM is composed of four piezoelectric ceramic sensors made of lead zirconate titanate, from which impact signals will be recorded and analyzed.
Chance dust detectors.
Most instruments on a spacecraft flying through a dense dust environment will experience effects of dust impacts. A prominent example of such an instrument was the Plasma Wave Subsystem (PWS) on the "Voyager 1" and "Voyager 2" spacecraft. PWS provided useful information on the local dust environment. The Asteroid Meteoroid Detector (AMD) previously flown on Pioneer 10 and 11 was initially selected for the Voyager payload. However, because there were doubts about its performance, the instrument was deselected and, hence, no dedicated dust instrument was carried by either Voyager 1 or 2.
During the "Voyager 2" flythrough of the Saturn system, PWS detected intense impulse noise centered on the ring plane at 2.88 Saturn radii distance, slightly outside of the G ring. This noise was attributed to micron sized particles hitting the spacecraft. In-situ dust detections by the "Cassini" Cosmic Dust Analyzer and camera observations of the outer rings confirmed the existence of an extended G ring. Also during "Voyager"'s flybys of Uranus and Neptune, dust concentrations in the equatorial planes were observed.
During the flyby of comet 21P/Giacobini–Zinner by the International Cometary Explorer, dust impacts were observed by the plasma wave instrument.
Though dust detections were claimed for plasma wave instruments on various spacecraft, it was only in 2021 that a model for the generation of signals on plasma wave antennas by dust impacts was presented, based on dust accelerator tests.
Impact ionization detectors.
Impact ionization detectors are the most successful dust detectors in space. With these detectors, the interplanetary dust environment between Venus and Jupiter has been explored.
Impact ionization detectors use the simultaneous detection of positive ions and electrons upon dust impact on a solid target. This coincidence provides a means to discriminate genuine impacts from noise on a single channel. The first successful dust detector in interplanetary space at about 1 AU was flown on the "Pioneer 8" and "Pioneer 9" space probes. The "Pioneer 8" and "9" detectors had sensitive target areas of 0.01 m2. Besides interplanetary dust on eccentric orbits, they detected dust on hyperbolic orbits, that is, dust leaving the Solar System. The "HEOS 2" dust detector was the first detector that employed a hemispherical geometry, like all the subsequent detectors of the "Galileo" and "Ulysses" spacecraft, and the LDEX detectors on the LADEE mission. The hemispherical target of 0.01 m2 area collected electrons from the impact and the ions were collected by the central ion collector. These signals served to determine the mass and speed of the impacting meteoroid. The HEOS 2 dust detector explored the Earth dust environment within 10 Earth radii.
The twin "Galileo" and "Ulysses" dust detectors were optimized for interplanetary dust measurements in the outer Solar System. The sensitive target areas were increased ten-fold to 0.1 m2 in order to cope with the expected low dust fluxes. In order to provide reliable dust impact data even within the harsh Jovian environment, an electron channeltron was added in the center of the ion grid collector. This way, an impact was detected by triple coincidence of three charge signals. The 2.5-ton "Galileo" spacecraft was launched in 1989 and cruised for 6 years in interplanetary space between Venus’ and Jupiter's orbit and measured interplanetary dust. The 370 kg "Ulysses" spacecraft was launched a year later and went on a direct trajectory to Jupiter, which it reached in 1992 for a swing-by maneuver that put the spacecraft on a heliocentric orbit of 80 degrees inclination. In 1995, "Galileo" started its 7-year path through the Jovian system with several flybys of all the Galilean moons. After its Jupiter flyby, "Ulysses" identified a flow of interstellar dust sweeping through the Solar System and hyper-velocity streams of nano-dust which are emitted from Jupiter and then couple to the solar magnetic field. In addition, the "Galileo" instrument detected ejecta clouds around the Galilean moons.
The Lunar Dust Experiment (LDEX) on board the Lunar Atmosphere and Dust Environment Explorer (LADEE) mission is a smaller version of the "Galileo" and "Ulysses" dust detectors. The most sensitive impact charge detector is a microchannel plate (MCP) behind the central focusing grid. LDEX has a sensitive area of 0.012 m2. The objective of the instrument was the detection and analysis of the lunar dust environment. From 16 October 2013 to 18 April 2014, LDEX detected about 140,000 dust hits at an altitude of 20–100 km above the lunar surface. It found a tenuous and permanent, asymmetric ejecta cloud around the Moon that is caused by meteoroid impacts onto the lunar surface. From this data it was found that approximately 40 μm/Myr of lunar regolith is redistributed due to meteoritic bombardment. Besides a continuous meteoroid bombardment, meteoroid streams cause temporary enhancements of the ejecta cloud.
Dust composition analyzers.
The Helios Micrometeoroid Analyzer was the first in-situ instrument to analyze the composition of cosmic dust. In 1974, the instrument was carried by the "Helios" spacecraft from the Earth's orbit down to 0.3 AU from the Sun. The goal of the Micrometeoroid Analyzer was to determine the spatial distribution of the dust in the inner planetary system, and to search for variations in the compositional and physical properties of micrometeoroids. The instrument consisted of two impact ionization time-of-flight mass spectrometers (Ecliptic and South sensor) with a total target area of about 0.01 m2. One sensor was shielded by the spacecraft rim from direct sunlight, whereas the other sensor was protected by a thin aluminized parylene film from intense solar radiation. These Micrometeoroid Analyzers were calibrated with a wide range of materials at the dust accelerators of the Max Planck Institute for Nuclear Physics in Heidelberg and the Ames Research Center in Moffett Field. The mass resolution of the mass spectra of the Helios sensors was low: formula_0. There was an excess of impacts recorded by the South sensor compared to the Ecliptic sensor. On the basis of the penetration studies with the "Helios" film, this excess was interpreted to be due to low density (formula_1 < 1000 kg/m3) meteoroids that were shielded from entering the Ecliptic sensor. The mass spectra range from those with dominant low masses (up to 30 mu), compatible with silicates, to those with dominant high masses (between 50 and 60 mu), compatible with iron and molecular ions. Meteoroid streams and even interstellar dust particles were identified in the data.
Twin dust mass analyzers were flown on the 1986 Halley's Comet missions "Vega 1", "Vega 2", and "Giotto". These spacecraft flew by the comet at a distance of 600–1,000 km with a speed of 70–80 km/s. The PUMA ("Vega") and PIA ("Giotto") instruments were developed by Jochen Kissel of the Max Planck Institute for Nuclear Physics in Heidelberg. Dust particles hitting the small (approximately 5 cm2) impact target generated ions by impact ionization. The instruments were high mass resolution ("R" ≈ 100) reflectron type time-of-flight mass spectrometers. The instruments could record up to 500 impacts per second. During comet flybys, the instruments recorded an abundance of small particles of mass less than 10−14 grams. Besides unequilibrated silicates, many of the particles were rich in light elements such as hydrogen, carbon, nitrogen, and oxygen. This suggests that most particles consisted of a predominantly chondritic core with a refractory organic mantle.
The Cometary and Interstellar Dust Analyzer (CIDA) was flown on the "Stardust" mission. In January 2004, "Stardust" flew by Comet Wild 2 at a distance of 240 km with a relative speed of 6.1 km/s. In February 2011, "Stardust" flew by comet Tempel 1 at a distance of 181 km with a speed of 10.9 km/s. During the interplanetary cruise between the comet encounters, there were favorable opportunities to analyze the interstellar dust stream discovered earlier by "Ulysses". CIDA is a derivative of the impact ionization mass spectrometers flown on the "Giotto", "Vega 1", and "Vega 2" missions. The impact target peeks out to the side of the spacecraft while the main part of the instrument is protected from the high-speed dust. It has a sensitive area of approximately 100 cm2 and a mass resolution "R" ≈ 250. Besides the positive ion mode, CIDA also has a negative ion mode for better sensitivity to organic molecules. The 75 spectra obtained during the comet flybys indicate a dominance of organic matter; sulfur ions were also detected in one spectrum. In the 45 spectra obtained during the cruise phase favorable for the detection of interstellar particles, derivatives of quinone were suggested as constituents of the organic component.
The Cosmic Dust Analyzer (CDA) was flown on the "Cassini" mission to Saturn. CDA is a large-area (0.1 m2 total sensitive area) multi-sensor dust instrument that includes a 0.01 m2 medium resolution ("R" ≈ 20–50) chemical dust analyzer, a 0.09 m2 highly-reliable impact ionization detector, and two high-rate polarized polyvinylidene fluoride (PVDF) detectors with sensitive areas of 0.005 m2 and 0.001 m2, respectively. During its 6-year cruise to Saturn, CDA analyzed interplanetary dust, the stream of interstellar dust, and Jupiter dust streams. A highlight was the detection of electrical dust charges in interplanetary space and in Saturn's magnetosphere. During the following 13 years, "Cassini" completed 292 orbits around Saturn (2004–2017) and measured several million dust impacts which characterize dust primarily in Saturn's E ring. In 2005, during "Cassini"'s close flyby of Enceladus within 175 km from the surface, CDA discovered active ice geysers. Detailed compositional analyses found salt-rich water ice grains close to Enceladus, which led to the discovery of large reservoirs of liquid water oceans below the icy crust of the moon. Analyses of interstellar grains at Saturn's distance suggest magnesium-rich grains of silicate and oxide composition, some with iron inclusions.
Dust Telescopes.
A Dust Telescope is an instrument to perform dust astronomy. It not only analyses the signals and ions that are generated by a dust impact on the sensitive target, but also determines the dust trajectory prior to the impact. The latter is based on the successful measurement of the dust electric charge by "Cassini"'s Cosmic Dust Analyzer (CDA). A Dust Trajectory Sensor consists of four planes of parallel position sensing wire electrodes. Dust accelerator tests show that dust trajectories can be determined to an accuracy of 1% in velocity and 1° in direction. The second element of a Dust Telescope is a Large-area Mass Analyzer: a reflectron type time-of-flight mass analyzer with a sensitive area of up to 0.2 m2 and a mass resolution "R" > 150. It consists of a circular plate target with the ion detector behind the center hole. In front of the target is an acceleration grid. Ions generated by an impact are reflected by a paraboloid shaped grid onto the center ion detector. Prototypes of dust telescopes have been built at the Laboratory for Atmospheric and Space Physics (LASP) of the University of Colorado, Boulder, USA and at the Institute of Space Systems of the University of Stuttgart, Germany, and tested at their respective dust accelerators.
The Surface Dust Analyser (SUDA) on board the "Europa Clipper" mission is being developed by Sacha Kempf and colleagues at LASP. SUDA will collect spatially resolved compositional maps of Jupiter's moon Europa along the ground tracks of the Europa orbiter, and search for plumes. The instrument is capable of identifying traces of organic and inorganic compounds in the ice ejecta. The launch of the "Europa Clipper" mission is planned for 2024.
The DESTINY+ Dust Analyzer (DDA) will fly on the Japanese–German space mission DESTINY+ to asteroid 3200 Phaethon. Phaethon is believed to be the origin of the Geminids meteor stream that can be observed from the ground every December. DDA development is led by Ralf Srama and colleagues from the Institute of Space Systems (IRS) at the University of Stuttgart in cooperation with von Hoerner & Sulger GmbH (vH&S) company. DDA will analyze interstellar and interplanetary dust on cruise to Phaethon and will study its dust environment during the encounter; of particular interest is the proportion of organic matter. Its launch is planned for 2024.
The Interstellar Dust Experiment (IDEX), developed by Mihaly Horanyi and colleagues at LASP, will fly on the Interstellar Mapping and Acceleration Probe (IMAP) in orbit about the Sun–Earth L1 Lagrange point. IDEX is a large-area (0.07 m2) dust analyzer that provides the mass distribution and elemental composition of interstellar and interplanetary dust particles. A laboratory version of the IDEX instrument was used at the dust accelerator facility operated at University of Colorado to collect impact ionization mass spectra for a range of dust samples of known composition. Its launch is planned for 2025.
Collected dust analyses.
The importance of lunar samples and lunar soil for dust science was that they provided a meteoroid impact cratering record. Even more important are the cosmochemical aspects—from their isotopic, elemental, molecular, and mineralogical compositions, important conclusions can be drawn, such as concerning the giant-impact hypothesis of the Moon's formation. From 1969 to 1972, six Apollo missions collected 382 kilograms of lunar rocks and soil. These samples are available for research and teaching projects. From 1970 to 1976, three Luna spacecraft returned 301 grams of lunar material. In 2020, Chang'e 5 collected 1.7 kg of lunar material.
In 1950, Fred Whipple showed that micrometeoroids smaller than a critical size (~100 micrometers) are decelerated at altitudes above 100 km slowly enough to radiate their frictional energy away without melting. Such micrometeorites sediment through the atmosphere and ultimately deposit on the ground. The most efficient method to collect micrometeorites is by high (~20 km) flying aircraft with special silicon oil covered collectors that capture this dust. At lower altitudes, these micrometeorites become mixed with Earth dust. Don Brownlee first reliably identified the extraterrestrial nature of collected dust particles by their chondritic composition. These stratospheric dust samples are available for further research.
"Stardust" was the first mission to return samples from a comet and from interstellar space. In January 2004, "Stardust" flew by Comet Wild 2 at a distance of 237 km with a relative velocity of 6.1 km/s. Its dust collector consisted of 0.104 m2 aerogel and 0.015 m2 aluminium foil; one side of the detector was exposed to the flow of cometary dust. The "Stardust" cometary samples were a mix of different components, including presolar grains like 13C-rich silicon carbide grains, a wide range of chondrule-like fragments, and high-temperature condensates like calcium-aluminum inclusions found in primitive meteorites that were transported to cold nebular regions.
During March–May 2000 and July–December 2002, the spacecraft was in a favorable position to collect interstellar dust on the back side of the sample collector. Once the sample capsule was returned in January 2006, the collector trays were inspected and thousands of grains from Comet Wild 2 and seven probable interstellar grains were identified. These grains are available for teaching and research from the NASA Astromaterials Curation Office.
The first asteroid samples were returned by the JAXA "Hayabusa" missions. "Hayabusa" encountered asteroid 25143 Itokawa in November 2005, picked up surface samples, and returned to Earth in June 2010. Despite some problems during sample collection, thousands of 10–100 micron sized particles were collected and are available for research in the laboratories. The second "Hayabusa2" mission rendezvoused with asteroid 162173 Ryugu in June 2018. About 5 g of surface and sub-surface material from this primitive C-type asteroid were returned. JAXA shares about 10% of the collected samples with NASA sample curation.
The "Rosetta" space probe orbited comet 67P/Churyumov–Gerasimenko from August 2014 to September 2016. During this time, Rosetta's instruments analyzed the nucleus, dust, gas, and plasma environments. "Rosetta" carried a suite of miniaturized sophisticated lab instruments to study collected cometary dust particles. Among them was the high-resolution secondary ion mass spectrometer COSIMA (Cometary Secondary Ion Mass Analyzer) that analyzed the rocky and organic composition of collected dust particles, an atomic force microscope MIDAS (Micro-Imaging Dust Analysis System) that investigated morphology and physical properties of micrometer-sized dust particles that were deposited on a collector plate, and the double-focus magnetic mass spectrometer (DFMS) and the reflectron type time of flight mass spectrometer (RTOF) of ROSINA (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis) to analyze cometary gas and the volatile components of cometary particulates. "Rosetta"'s Philae lander carried the gas chromatography–mass spectrometry COSAC experiment to analyze organic molecules in the comet's atmosphere and on its surface.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " R=\\cfrac{M}{\\Delta M} \\approx 10"
},
{
"math_id": 1,
"text": " \\rho"
}
]
| https://en.wikipedia.org/wiki?curid=70739650 |
70740396 | Praseodymium(III) iodide | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Praseodymium(III) iodide is an inorganic salt, consisting of the rare-earth metal praseodymium and iodine, with the chemical formula PrI3. It forms green crystals. It is soluble in water. It can be prepared by direct reaction of the elements, or by the reaction of praseodymium metal with mercury(II) iodide:
formula_0
formula_1
Properties.
Praseodymium(III) iodide forms green crystals, which are soluble in water. It forms orthorhombic crystals which are hygroscopic. It crystallizes in the PuBr3 type with space group "Cmcm" (No. 63) with "a" = 4.3281(6) Å, "b" = 14.003(6) Å and "c" = 9.988(3) Å. It decomposes through an intermediate phase 2 PrI3·PrOI to a mixture of praseodymium oxyiodide and praseodymium oxide (5 PrOI·Pr2O3).
The nonahydrate can be obtained by dissolving praseodymium(III) oxide in hydroiodic acid:
Pr2O3 + 6 HI + 15 H2O → 2 PrI3·9H2O
Reduction with praseodymium metal yields praseodymium(II) iodide:
2 PrI3 + Pr → 3 PrI2
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{2Pr + 3I_2 \\ \\xrightarrow{T}\\ 2PrI_3}"
},
{
"math_id": 1,
"text": "\\mathsf{2Pr + 3HgI_2 \\ \\xrightarrow{T}\\ 2PrI_3 + 3Hg}"
}
]
| https://en.wikipedia.org/wiki?curid=70740396 |
707404 | Quantum circuit | Model of quantum computing
In quantum information theory, a quantum circuit is a model for quantum computation, similar to classical circuits, in which a computation is a sequence of quantum gates, measurements, initializations of qubits to known values, and possibly other actions. The minimum set of actions that a circuit needs to be able to perform on the qubits to enable quantum computation is known as DiVincenzo's criteria.
Circuits are written such that the horizontal axis is time, starting at the left hand side and ending at the right. Horizontal lines are qubits, doubled lines represent classical bits. The items that are connected by these lines are operations performed on the qubits, such as measurements or gates. These lines define the sequence of events, and are usually not physical cables.
The graphical depiction of quantum circuit elements is described using a variant of the Penrose graphical notation. Richard Feynman used an early version of the quantum circuit notation in 1986.
Reversible classical logic gates.
Most elementary logic gates of a classical computer are not reversible. Thus, for instance, for an AND gate one cannot always recover the two input bits from the output bit; for example, if the output bit is 0, we cannot tell from this whether the input bits are 01 or 10 or 00.
However, reversible gates in classical computers are easily constructed for bit strings of any length; moreover, these are actually of practical interest, since irreversible gates must always increase physical entropy. A reversible gate is a reversible function on "n"-bit data that returns "n"-bit data, where an "n"-bit data is a string of bits "x"1,"x"2, ...,"x""n" of length "n". The set of "n"-bit data is the space {0,1}"n", which consists of 2"n" strings of 0's and 1's.
More precisely: an "n"-bit reversible gate is a bijective mapping "f" from the set {0,1}"n" of "n"-bit data onto itself.
An example of such a reversible gate "f" is a mapping that applies a fixed permutation to its inputs.
For reasons of practical engineering, one typically studies gates only for small values of "n", e.g. "n"=1, "n"=2 or "n"=3. These gates can be easily described by tables.
Quantum logic gates.
The quantum logic gates are reversible unitary transformations on at least one qubit. Multiple qubits taken together are referred to as quantum registers. To define quantum gates, we first need to specify the quantum replacement of an "n"-bit datum. The "quantized version" of classical "n"-bit space {0,1}"n" is the Hilbert space
formula_0
This is by definition the space of complex-valued functions on {0,1}"n" and is naturally an inner product space. formula_1 means the function is a square-integrable function. This space can also be regarded as consisting of linear combinations, or superpositions, of classical bit strings. Note that "H"QB("n") is a vector space over the complex numbers of dimension 2"n". The elements of this vector space are the possible state-vectors of "n"-qubit quantum registers.
Using Dirac ket notation, if "x"1,"x"2, ...,"x""n" is a classical bit string, then
formula_2
is a special "n"-qubit register corresponding to the function which maps this classical bit string to 1 and maps all other bit strings to 0; these 2"n" special "n"-qubit registers are called "computational basis states". All "n"-qubit registers are complex linear combinations of these computational basis states.
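As a concrete sketch (illustrative, using Python with NumPy), an "n"-qubit register can be stored as a vector of 2^n complex amplitudes, and a computational basis state is a one-hot vector whose single 1 sits at the index obtained by reading the bit string as a binary integer; the helper name basis_state below is ours.

```python
import numpy as np

# Sketch: an n-qubit register is a vector in C^(2^n); the computational basis
# state |x1 x2 ... xn> is the one-hot vector indexed by the bitstring read as
# a binary integer.

def basis_state(bits: str) -> np.ndarray:
    n = len(bits)
    psi = np.zeros(2**n, dtype=complex)
    psi[int(bits, 2)] = 1.0
    return psi

# |01> and an equal superposition (|00> + |11>)/sqrt(2):
print(basis_state("01"))
bell = (basis_state("00") + basis_state("11")) / np.sqrt(2)
print(bell)
```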
Quantum logic gates, in contrast to classical logic gates, are always reversible. One requires a special kind of reversible function, namely a unitary mapping, that is, a linear transformation of a complex inner product space that preserves the Hermitian inner product. An "n"-qubit (reversible) quantum gate is a unitary mapping "U" from the space "H"QB("n") of "n"-qubit registers onto itself.
Typically, we are only interested in gates for small values of "n".
A reversible "n"-bit classical logic gate gives rise to a reversible "n"-bit quantum gate as follows: to each reversible "n"-bit logic gate "f" corresponds a quantum gate "W""f" defined as follows:
formula_3
Note that "W""f" permutes the computational basis states.
Of particular importance is the controlled NOT gate (also called CNOT gate) "W"CNOT defined on a quantized 2 qubit. Other examples of quantum logic gates derived from classical ones are the Toffoli gate and the Fredkin gate.
However, the Hilbert-space structure of the qubits permits many quantum gates that are not induced by classical ones. For example, a relative phase shift is a 1 qubit gate given by multiplication by the phase shift operator:
formula_4
so
formula_5
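The following sketch (illustrative, using NumPy; the helper name lift is ours) builds "W""f" as a permutation matrix from a reversible classical gate "f", here the CNOT gate viewed as the lift of "f"("a", "b") = ("a", "a" XOR "b"), and contrasts it with the phase gate "P"(φ), which has no classical counterpart:

```python
import numpy as np

# Sketch: a reversible classical n-bit gate f lifts to the quantum gate W_f
# that permutes the computational basis states, W_f|x> = |f(x)>.

def lift(f, n: int) -> np.ndarray:
    """Permutation matrix of a reversible map f on n-bit strings."""
    W = np.zeros((2**n, 2**n))
    for x in range(2**n):
        bits = [(x >> (n - 1 - i)) & 1 for i in range(n)]  # x as a bit list
        y = int("".join(map(str, f(*bits))), 2)            # image bitstring as index
        W[y, x] = 1.0
    return W

cnot = lift(lambda a, b: (a, a ^ b), 2)   # CNOT = lift of (a, b) -> (a, a XOR b)
ket10 = np.zeros(4)
ket10[2] = 1.0                            # |10>
print(cnot @ ket10)                       # -> |11>, i.e. index 3

phi = np.pi / 4
P = np.array([[1, 0], [0, np.exp(1j * phi)]])  # relative phase shift on one qubit
print(P @ np.array([0, 1]))                    # |1> picks up the phase e^{i phi}
```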
Reversible logic circuits.
Again, we consider first "reversible" classical computation. Conceptually, there is no difference between a reversible "n"-bit circuit and a reversible "n"-bit logic gate: either one is just an invertible function on the space of "n" bit data. However, as mentioned in the previous section, for engineering reasons we would like to have a small number of simple reversible gates, that can be put together to assemble any reversible circuit.
To explain this assembly process, suppose we have a reversible "n"-bit gate "f" and a reversible "m"-bit gate "g". Putting them together means producing a new circuit by connecting some set of "k" outputs of "f" to some set of "k" inputs of "g" as in the figure below. In that figure, "n"=5, "k"=3 and "m"=7. The resulting circuit is also reversible and operates on "n"+"m"−"k" bits.
We will refer to this scheme as a "classical assemblage" (This concept corresponds to a technical definition in Kitaev's pioneering paper cited below). In composing these reversible machines, it is important to ensure that the intermediate machines are also reversible. This condition assures that "intermediate" "garbage" is not created (the net physical effect would be to increase entropy, which is one of the motivations for going through this exercise).
Note that each horizontal line on the above picture represents either 0 or 1, not their probabilities. Since quantum computations are reversible, at each 'step' the number of lines must be the same as the number of input lines. Also, each input combination must be mapped to a single combination at each 'step'. This means that each intermediate combination in a quantum circuit is a bijective function of the input.
Now it is possible to show that the Toffoli gate is a universal gate. This means that given any reversible classical "n"-bit circuit "h", we can construct a classical assemblage of Toffoli gates in the above manner to produce an ("n"+"m")-bit circuit "f" such that
formula_6
where there are "m" underbraced zeroed inputs and
formula_7.
Notice that the result always has a string of "m" zeros as the ancilla bits. No "rubbish" is ever produced, and so this computation is indeed one that, in a physical sense, generates no entropy. This issue is carefully discussed in Kitaev's article.
More generally, any function "f" (bijective or not) can be simulated by a circuit of Toffoli gates. Obviously, if the mapping fails to be injective, at some point in the simulation (for example as the last step) some "garbage" has to be produced.
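A tiny sketch of why the Toffoli gate suffices: with its third input zeroed it computes AND, and with its first two inputs held at 1 it computes NOT, producing ancilla or "garbage" bits along the way, as noted above.

```python
# Sketch of Toffoli universality: TOF(a, b, c) = (a, b, c XOR (a AND b)).
# With c = 0 the third output is AND(a, b); with a = b = 1 it is NOT(c).

def toffoli(a: int, b: int, c: int):
    return a, b, c ^ (a & b)

for a in (0, 1):
    for b in (0, 1):
        print(f"AND({a},{b}) = {toffoli(a, b, 0)[2]}")   # ancilla c = 0
for c in (0, 1):
    print(f"NOT({c}) = {toffoli(1, 1, c)[2]}")           # a = b = 1
```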
For quantum circuits a similar composition of qubit gates can be defined. That is, associated to any "classical assemblage" as above, we can produce a reversible quantum circuit when in place of "f" we have an "n"-qubit gate "U" and in place of "g" we have an "m"-qubit gate "W". See illustration below:
The fact that connecting gates this way gives rise to a unitary mapping on "n"+"m"−"k" qubit space is easy to check. In a real quantum computer the physical connection between the gates is a major engineering challenge, since it is one of the places where decoherence may occur.
There are also universality theorems for certain sets of well-known gates; such a universality theorem exists, for instance, for the pair consisting of the single qubit phase gate "U"θ mentioned above (for a suitable value of θ), together with the 2-qubit CNOT gate "W"CNOT. However, the universality theorem for the quantum case is somewhat weaker than the one for the classical case; it asserts only that any reversible "n"-qubit circuit can be "approximated" arbitrarily well by circuits assembled from these two elementary gates. Note that there are uncountably many possible single qubit phase gates, one for every possible angle θ, so they cannot all be represented by a finite circuit constructed from {"U"θ, "W"CNOT}.
Quantum computations.
So far we have not shown how quantum circuits are used to perform computations. Since many important numerical problems reduce to computing a unitary transformation "U" on a finite-dimensional space (the celebrated discrete Fourier transform
being a prime example), one might expect that some quantum circuit could be designed to carry out the transformation "U". In principle, one needs only to prepare an "n" qubit state ψ as an appropriate superposition of computational basis states for the input and measure the output "U"ψ. Unfortunately, there are two problems with this:
This does not prevent quantum circuits for the discrete Fourier transform from being used as intermediate steps in other quantum circuits, but the use is more subtle. In fact quantum computations are "probabilistic".
We now provide a mathematical model for how quantum circuits can simulate
"probabilistic" but classical computations. Consider an "r"-qubit circuit "U" with
register space "H"QB("r"). "U" is thus a unitary map
formula_8
In order to associate this circuit to a classical mapping on bitstrings, we specify
The contents "x" = "x"1, ..., "x""m" of
the classical input register are used to initialize the qubit
register in some way. Ideally, this would be done with the computational basis
state
formula_9
where there are "r"-"m" underbraced zeroed inputs. Nevertheless,
this perfect initialization is completely unrealistic. Let us assume
therefore that the initialization is a mixed state given by some density operator "S" which is near the idealized input in some appropriate metric, e.g.
formula_10
Similarly, the output register space is related to the qubit register, by a "Y"
valued observable "A". Note that observables in quantum mechanics are usually defined in
terms of "projection valued measures" on R; if the variable
happens to be discrete, the projection valued measure reduces to a
family {Eλ} indexed on some parameter λ
ranging over a countable set. Similarly, a "Y" valued observable,
can be associated with a family of pairwise orthogonal projections
formula_11
Given a mixed state "S", there corresponds a probability measure on "Y"
given by
formula_12
The function "F":"X" → "Y" is computed by a circuit
"U":"H"QB("r") → "H"QB("r") to within ε if and only if
for all bitstrings "x" of length "m"
formula_13
Now
formula_14
so that
formula_15
Theorem. If ε + δ < 1/2, then the probability distribution
formula_16
on "Y" can be used to determine "F"("x") with an arbitrarily small probability of error by majority sampling, for a sufficiently large sample size. Specifically, take "k" independent samples from the probability distribution Pr on "Y" and choose a value on which more than half of the samples agree. The probability that the value "F"("x") is sampled more than "k"/2 times is at least
formula_17
where γ = 1/2 - ε - δ.
This follows by applying the Chernoff bound.
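A Monte Carlo sketch of the theorem (with illustrative values): if each run returns "F"("x") with probability "p" = 1 − ε − δ > 1/2, the majority over "k" independent runs is correct with probability at least 1 − e^(−2γ²"k"), γ = "p" − 1/2.

```python
import numpy as np

# Monte Carlo sketch of majority sampling: each run of the circuit returns the
# correct value F(x) with probability p > 1/2; the majority of k independent
# runs is correct with probability at least 1 - exp(-2 * gamma^2 * k),
# gamma = p - 1/2.  Values below are illustrative; odd k avoids ties.

rng = np.random.default_rng(0)
p = 0.6                      # single-shot success probability (> 1/2)
gamma = p - 0.5

for k in (1, 11, 51, 101):
    trials = rng.random((100_000, k)) < p          # True where a sample is correct
    majority_ok = trials.sum(axis=1) > k / 2       # strict majority correct
    bound = 1 - np.exp(-2 * gamma**2 * k)          # Chernoff lower bound
    print(f"k={k:4d}: empirical {majority_ok.mean():.4f}, bound {bound:.4f}")
```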
Accelerating Quantum Computing Simulations with FPGAs.
With the advent of quantum computing, there has been a significant surge in both the number of developers and the tools available to them. However, the slow pace of technological advancement and the high maintenance costs associated with quantum computers have limited broader participation in this field. In response, developers have turned to simulators, such as IBM's Qiskit, to model quantum behavior without relying solely on real quantum hardware. Nevertheless, simulators run on classical computers and are therefore constrained by computation speed. The fundamental advantage of quantum computers lies in their ability to process qubits, leveraging properties like entanglement and superposition simultaneously; running quantum simulations on classical computers forfeits this inherent parallelism. Moreover, the simulation slows down rapidly as the number of simulated qubits increases, since the size of the state vector grows exponentially.
In a quantum circuit, vectors are used to represent the state of the qubits and matrices are used to represent the gates applied to them. Since linear algebra is a major component of quantum simulation, Field Programmable Gate Arrays (FPGAs) could be used to accelerate the simulation of quantum computing. An FPGA is a kind of hardware that excels at executing operations in parallel, supports pipelining, has on-chip memory resources with low access latency, and offers the flexibility to reconfigure the hardware architecture on the fly, all of which makes it a well-suited tool for matrix multiplication.
The main idea of accelerating quantum computing simulations is to offload some of the heavy computation to special hardware like an FPGA in order to speed up the whole simulation process; the bigger the simulated quantum circuit (more qubits and more gates), the greater the speedup gained from offloading to the FPGA compared with software simulation on a CPU. The data flow of the simulation is as follows. First, the user inputs all the information about the quantum circuit, including the initial state and the various gates, through the user interface. This information is compressed and sent to the FPGA through a hardware communication protocol such as AXI, and is stored in the FPGA's on-chip memory. The simulation starts when the data is read from the memory and sent to the matrix multiplication module. After the calculation is done, the result is sent back to the memory and on to the CPU.
Suppose we are simulating a 5-qubit circuit. We then need to store a vector of 32 (2⁵) 16-bit values, each representing an amplitude (the square root of the probability) of a possible basis state, together with the 32×32 matrix that represents the gate. In order to parallelize this computation, we can store the 32 rows of the matrix separately and replicate the row_vec_mult hardware 32 times, so that each row's product can be calculated in parallel. This dramatically speeds up the simulation at the price of more hardware and memory usage in the FPGA.
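The following sketch mirrors that scheme in plain NumPy (software only; the gate choice and function name are illustrative, not from the source): each of the 32 output amplitudes is an independent row-vector product, which is what the replicated row_vec_mult units would compute concurrently in hardware. A real design would also use 16-bit fixed-point values rather than floats.

```python
import numpy as np

n_qubits = 5
dim = 2 ** n_qubits                     # 32 amplitudes for a 5-qubit register

# Initial state |00000>: amplitude 1 on the first basis state.
state = np.zeros(dim)
state[0] = 1.0

# Illustrative gate: a Hadamard on the first qubit, identity on the rest,
# expanded to a full 32x32 matrix as in the scheme described above.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
gate = H
for _ in range(n_qubits - 1):
    gate = np.kron(gate, np.eye(2))

def row_vec_mult(row, vec):
    """One 'row_vec_mult' unit: the dot product of one matrix row with the state."""
    return np.dot(row, vec)

# Software stand-in for the 32 replicated hardware units: each output
# amplitude is an independent row-vector product, so on an FPGA all 32
# rows could be evaluated concurrently.
new_state = np.array([row_vec_mult(gate[i], state) for i in range(dim)])

assert np.allclose(new_state, gate @ state)
print(new_state[:4])  # amplitudes of |00000>, |00001>, |00010>, |00011>
```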
It has been discovered that with careful hardware design, it is possible to achieve a hardware architecture with O(n) time complexity, where 'n' denotes the number of qubits. In contrast, the runtime of a NumPy implementation grows as O(2^(2n)), the cost of multiplying a 2^n × 2^n matrix by a 2^n-element vector. This finding underscores the feasibility of leveraging FPGAs to accelerate quantum computing simulations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H_{\\operatorname{QB}(n)}= \\ell^2(\\{0,1\\}^n)."
},
{
"math_id": 1,
"text": "\\ell^2"
},
{
"math_id": 2,
"text": " | x_1, x_2, \\cdots,x_n \\rangle \\quad "
},
{
"math_id": 3,
"text": " W_f( | x_1, x_2, \\cdots,x_n \\rangle) = |f(x_1, x_2, \\cdots, x_n) \\rangle. "
},
{
"math_id": 4,
"text": " P(\\varphi) =\\begin{bmatrix} 1 & 0 \\\\ 0 & e^{i\\varphi} \\end{bmatrix}, "
},
{
"math_id": 5,
"text": " P(\\varphi)| 0 \\rangle = | 0 \\rangle \\quad P(\\varphi)| 1 \\rangle = e^{i\\varphi}| 1 \\rangle. "
},
{
"math_id": 6,
"text": " f(x_1, \\ldots, x_n, \\underbrace{0, \\dots, 0}) = (y_1, \\ldots, y_n, \\underbrace{0, \\ldots , 0})"
},
{
"math_id": 7,
"text": "(y_1, \\ldots, y_n) = h(x_1, \\ldots, x_n)"
},
{
"math_id": 8,
"text": "H_{\\operatorname{QB}(r)} \\rightarrow\nH_{\\operatorname{QB}(r)}."
},
{
"math_id": 9,
"text": " |\\vec{x},0\\rangle= | x_1, x_2, \\cdots, x_{m}, \\underbrace{0, \\dots, 0} \\rangle, "
},
{
"math_id": 10,
"text": " \\operatorname{Tr}\\left(\\big||\\vec{x},0\\rangle \\langle \\vec{x},0 | - S\\big|\\right) \\leq \\delta. "
},
{
"math_id": 11,
"text": " I = \\sum_{y \\in Y} \\operatorname{E}_y. "
},
{
"math_id": 12,
"text": " \\operatorname{Pr}\\{y\\} = \\operatorname{Tr}(S \\operatorname{E}_y ). "
},
{
"math_id": 13,
"text": "\\left\\langle \\vec{x},0 \\big| U^* \\operatorname{E}_{F(x)} U\n\\big|\\vec{x},0 \\right\\rangle = \\left\\langle \\operatorname{E}_{F(x)} U( |\\vec{x},0\\rangle) \\big| U( |\\vec{x},0\\rangle) \\right\\rangle \\geq 1 - \\epsilon."
},
{
"math_id": 14,
"text": " \\left| \\operatorname{Tr} (S U^* \\operatorname{E}_{F(x)} U) - \\left\\langle \\vec{x},0 \\big| U^* \\operatorname{E}_{F(x)} U\n\\big|\\vec{x},0 \\right\\rangle\\right|\\leq \\operatorname{Tr} (\\big||\\vec{x},0\\rangle \\langle \\vec{x},0 | - S\\big|) \\| U^* \\operatorname{E}_{F(x)} U \\| \\leq \\delta "
},
{
"math_id": 15,
"text": "\\operatorname{Tr} (S U^* \\operatorname{E}_{F(x)} U) \\geq 1 - \\epsilon - \\delta."
},
{
"math_id": 16,
"text": " \\operatorname{Pr}\\{y\\} = \\operatorname{Tr} (S U^* \\operatorname{E}_{y} U)"
},
{
"math_id": 17,
"text": " 1 - e^{- 2 \\gamma^2 k}, "
}
]
| https://en.wikipedia.org/wiki?curid=707404 |
70742210 | Mixed Chinese postman problem | Problem in mathematics
The mixed Chinese postman problem (MCPP or MCP) is the search for the shortest traversal of a graph with a set of vertices V, a set of undirected edges E with positive rational weights, and a set of directed arcs A with positive rational weights that covers each edge or arc at least once at minimal cost. The problem has been proven to be NP-complete by Papadimitriou. The mixed Chinese postman problem often arises in arc routing problems such as snow ploughing, where some streets are too narrow to traverse in both directions while other streets are bidirectional and can be plowed in both directions. It is easy to check whether a mixed graph has a postman tour of any size by verifying whether the graph is strongly connected. The problem is NP-hard if we restrict the postman tour to traverse each arc exactly once or if we restrict it to traverse each edge exactly once, as proved by Zaragoza Martinez.
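The strong-connectivity check just mentioned can be sketched in a few lines (assuming the networkx library; the example graph is made up): treat each undirected edge as a pair of opposite arcs and test strong connectivity of the resulting digraph.

```python
import networkx as nx

def has_postman_tour(vertices, edges, arcs):
    """A mixed graph admits a postman tour of some size iff it is
    strongly connected when each undirected edge is replaced by
    two arcs of opposite orientation."""
    g = nx.DiGraph()
    g.add_nodes_from(vertices)
    g.add_edges_from(arcs)             # directed arcs as given
    for u, v in edges:                 # undirected edges usable both ways
        g.add_edge(u, v)
        g.add_edge(v, u)
    return nx.is_strongly_connected(g)

# Illustrative mixed graph: edge {1, 2}, arcs (2, 3) and (3, 1).
print(has_postman_tour([1, 2, 3], [(1, 2)], [(2, 3), (3, 1)]))  # True
```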
Mathematical Definition.
The mathematical definition is:
Input: A strongly connected, mixed graph formula_0 with cost formula_1 for every edge formula_2 and a maximum cost formula_3.
Question: Is there a (directed) tour that traverses every edge in formula_4 and every arc in formula_5 at least once and has cost at most formula_3?
Computational complexity.
The main difficulty in solving the Mixed Chinese Postman problem lies in choosing orientations for the (undirected) edges when we are given a tight budget for our tour and can only afford to traverse each edge once. We then have to orient the edges and add some further arcs in order to obtain a directed Eulerian graph, that is, to make every vertex balanced. If there are multiple edges incident to one vertex, it is not an easy task to determine the correct orientation of each edge. The mathematician Papadimitriou analyzed this problem with more restrictions; "MIXED CHINESE POSTMAN is NP-complete, even if the input graph is planar, each vertex has degree at most three, and each edge and arc has cost one."
Eulerian graph.
The process of checking whether a mixed graph is Eulerian is important for creating an algorithm to solve the mixed Chinese postman problem. Every vertex of a mixed graph G must have even degree for the graph to have an Eulerian cycle, but this condition is not sufficient.
Approximation.
The fact that the mixed Chinese postman problem is NP-hard has led to the search for polynomial-time algorithms that approach the optimum solution to within a reasonable factor. Frederickson developed a method with a factor of 3/2 that could be applied to planar graphs, and Raghavachari and Veerasamy found a method that does not require the graph to be planar. However, these polynomial-time methods cannot determine the cost of deadheading, the time it takes a snow plough to reach the streets it will plow or a street sweeper to reach the streets it will sweep.
Formal definition.
Given a strongly connected mixed graph formula_0 with a vertex set formula_6, an edge set formula_4, an arc set formula_5 and a nonnegative cost formula_7 for each formula_8, the MCPP consists of finding a minimum-cost tour passing through each link formula_9 at least once.
Given formula_10, formula_11, formula_12, formula_13 denotes the set of edges with exactly one endpoint in formula_14, and formula_15. Given a vertex formula_16, formula_17 (indegree) denotes the number of arcs entering formula_18, formula_19 (outdegree) denotes the number of arcs leaving formula_16, and formula_20 (degree) denotes the number of links incident with formula_18. Note that formula_21. A mixed graph formula_0 is called "even" if all of its vertices have even degree, it is called "symmetric" if formula_22 for each vertex formula_16, and it is said to be "balanced" if, given any subset formula_14 of vertices, the difference between the number of arcs directed from formula_14 to formula_23, formula_24, and the number of arcs directed from formula_23 to formula_14, formula_25, is no greater than the number of undirected edges joining formula_14 and formula_26, formula_27.
It is a well-known fact that a mixed graph formula_28 is Eulerian if and only if formula_28 is even and balanced. Notice that if formula_28 is even and symmetric, then formula_28 is also balanced (and Eulerian). Moreover, if formula_28 is even, then the formula_29 can be solved exactly in polynomial time.
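The "even" and "symmetric" conditions can be checked directly from the definitions above. The following plain-Python sketch uses illustrative data structures (edge and arc lists are assumed inputs, not a format from the source):

```python
from collections import Counter

def is_even_and_symmetric(vertices, edges, arcs):
    """Check the 'even' and 'symmetric' conditions for a mixed graph
    G = (V, E, A); together they imply G is balanced and Eulerian."""
    indeg, outdeg, edeg = Counter(), Counter(), Counter()
    for u, v in arcs:          # directed arcs contribute to in/out degrees
        outdeg[u] += 1
        indeg[v] += 1
    for u, v in edges:         # undirected edges contribute to both endpoints
        edeg[u] += 1
        edeg[v] += 1
    # even: d_i (all incident links) is even at every vertex
    even = all((indeg[i] + outdeg[i] + edeg[i]) % 2 == 0 for i in vertices)
    # symmetric: d_i^- == d_i^+ at every vertex
    symmetric = all(indeg[i] == outdeg[i] for i in vertices)
    return even, symmetric

# A directed 3-cycle: every vertex has indegree = outdegree = 1, degree 2.
print(is_even_and_symmetric([1, 2, 3], [], [(1, 2), (2, 3), (3, 1)]))  # (True, True)
```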
Heuristic algorithms.
When the mixed graph is not even, that is, when not all of its vertices have even degree, the graph can be transformed into an even graph.
Genetic algorithm.
A paper published by Hua Jiang et al. laid out a genetic algorithm to solve the mixed Chinese postman problem by operating on a population of candidate solutions. The algorithm performed well compared to other approximation algorithms for the MCPP.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G=(V,E,A)"
},
{
"math_id": 1,
"text": "c(e)\\geq0"
},
{
"math_id": 2,
"text": "e \\subset E \\cup A"
},
{
"math_id": 3,
"text": "c_{max}"
},
{
"math_id": 4,
"text": "E"
},
{
"math_id": 5,
"text": "A"
},
{
"math_id": 6,
"text": "V"
},
{
"math_id": 7,
"text": "c_e"
},
{
"math_id": 8,
"text": "e \\in E \\cup A"
},
{
"math_id": 9,
"text": "e\\in E \\cup A"
},
{
"math_id": 10,
"text": "S\\subset V"
},
{
"math_id": 11,
"text": "\\delta^+(S)=\\{(i,j)\\in A:i\\in S, j \\in V \\backslash S \\}"
},
{
"math_id": 12,
"text": "\\delta^-(S)=\\{ (i,j)\\in A:i\\in V\\backslash S, j \\in S \\}"
},
{
"math_id": 13,
"text": "\\delta(S)"
},
{
"math_id": 14,
"text": "S"
},
{
"math_id": 15,
"text": "\\delta^\\star=\\delta(S)\\cup \\delta^+(S) \\cup \\delta^-"
},
{
"math_id": 16,
"text": "i"
},
{
"math_id": 17,
"text": "d_i^-"
},
{
"math_id": 18,
"text": "i"
},
{
"math_id": 19,
"text": "d_i^+"
},
{
"math_id": 20,
"text": "d_i"
},
{
"math_id": 21,
"text": "d_i=|\\delta^\\star(\\{{i}\\})|"
},
{
"math_id": 22,
"text": "d_i^-=d_i^+"
},
{
"math_id": 23,
"text": "V\\backslash S"
},
{
"math_id": 24,
"text": "|\\delta^+(S)|"
},
{
"math_id": 25,
"text": "|\\delta^-(S)|"
},
{
"math_id": 26,
"text": "V \\backslash S"
},
{
"math_id": 27,
"text": "|\\delta (S)|"
},
{
"math_id": 28,
"text": "G"
},
{
"math_id": 29,
"text": "MCPP"
},
{
"math_id": 30,
"text": "A_1"
},
{
"math_id": 31,
"text": "s_i=d_i^--d_i^+"
},
{
"math_id": 32,
"text": "(V, A\\cup A_1)"
},
{
"math_id": 33,
"text": " i"
},
{
"math_id": 34,
"text": "s_i>0(s_i<0)"
},
{
"math_id": 35,
"text": "s_i(-s_i)"
},
{
"math_id": 36,
"text": "A_2"
},
{
"math_id": 37,
"text": "A_3"
},
{
"math_id": 38,
"text": "s_i"
},
{
"math_id": 39,
"text": "(V, A\\cup A_1\\cup A_2\\cup A_3)"
},
{
"math_id": 40,
"text": "A\\cup A_1\\cup A_2"
},
{
"math_id": 41,
"text": "x_{ij}"
},
{
"math_id": 42,
"text": "(i,j)"
},
{
"math_id": 43,
"text": "x_{ij}=2"
},
{
"math_id": 44,
"text": "j"
},
{
"math_id": 45,
"text": "x_{ij}=0"
},
{
"math_id": 46,
"text": "x_{ij}=1"
},
{
"math_id": 47,
"text": "A \\cup A_1 \\cup A_2"
},
{
"math_id": 48,
"text": "\\mathrm{G=\\{V,E,A\\}}"
},
{
"math_id": 49,
"text": "\\mathrm{G'=\\{V',E',A'\\}}"
},
{
"math_id": 50,
"text": "G'"
},
{
"math_id": 51,
"text": "G''"
},
{
"math_id": 52,
"text": "V_O"
},
{
"math_id": 53,
"text": "A'' \\backslash A"
},
{
"math_id": 54,
"text": "E''"
},
{
"math_id": 55,
"text": "A''\\backslash A"
}
]
| https://en.wikipedia.org/wiki?curid=70742210 |
70749706 | Alevtina Shmeleva | Russian nuclear physicist (1928–2022)
Alevtina Pavlovna Shmeleva (; 11 June 1928 – 25 April 2022) was a Russian nuclear physicist.
She studied at the Moscow Institute of Foreign Languages for two years before she found her dedication to physics and particle detectors at the Moscow Engineering and Physics University, where she graduated in 1954. Shmeleva joined the Elementary Particles Laboratory in the P. N. Lebedev Physical Institute (LPI) under the guidance of academician Artyom Alikhanian, who was also the scientific adviser of her PhD thesis.
Her first job after graduating from the institute in 1954 was participation in expeditions to the Mount Aragats cosmic ray research station and later to the Nor Amberd station in Armenia. There, studies of cosmic rays were carried out at mountain altitudes with the help of magnetic spectrometers. Shmeleva led the work on the creation of a spark calorimeter for these experiments. These studies measured the intensity and composition of the nuclear component of cosmic rays at an altitude of 2000 meters above sea level in the energy range 100–300 GeV. Later she started to work with Transition Radiation Detector prototypes, which were rather new at this time and which required pioneering skills.
Between 1976 and 1988, Shmeleva participated in the development of a full absorption spectrometer on liquid xenon with a volume of 40 liters. The spectrometer had the best energy resolution for that time, 3.5% formula_0, and a coordinate resolution of 5.6 mm formula_0.
From 1980 to 1988 she participated in the R808 experiment to study the production of prompt photons at the world's first proton collider ISR at CERN. For this experiment, employees of the Lebedev Physical Institute, INR, MEPhI and INP SB RAS developed hodoscope panels for shower counters based on scintillator NaI(Tl) crystals with a total weight of about one ton.
In 1977 Shmeleva and her husband Boris Dolgoshein from MEPhI participated in the International Symposium on Transition Radiation in Erevan, Armenia, where they met William J. Willis, whom they convinced of the concept of cluster-counting Transition Radiation Detectors (TRDs); he invited the LPI-MEPhI group to CERN to realise the idea. Since then, Shmeleva collaborated with the particle physics community at CERN. From 1978 to 1988, she coordinated the work of the Lebedev group, building prototype TRDs, testing them at the SPS and delivering the TRD of the HELIOS experiment at the SPS (NA34/1 and NA34/2).
As an expert in transition radiation detectors, Shmeleva joined the challenging ATLAS TRT project at the Large Hadron Collider, where she and the Lebedev group started in the very early days of RD6, the preparations for the ATLAS TRT. From then on, Shmeleva and her team were pillars of the TRT collaboration.
Shmeleva was attentive to the medical applications of fundamental scientific research; in particular, together with Vadim Kantzerov (MEPhI), she developed a medical gamma locator.
Shmeleva had a strong linguistic background and a deep knowledge of physics. She communicated well with colleagues, brought people together, and ensured that problems were solved.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "/ \\sqrt{E(GeV)} "
}
]
| https://en.wikipedia.org/wiki?curid=70749706 |
7075678 | Difference density map | In X-ray crystallography, a difference density map or Fo–Fc map shows the spatial distribution of the difference between the measured electron density of the crystal and the electron density explained by the current model.
A way to compute this map has also been formulated for cryo-EM.
Display.
Conventionally, they are displayed as isosurfaces: positive density—electron density where there is nothing in the model, usually corresponding to some constituent of the crystal that has not been modelled, for example a ligand or a crystallisation adjuvant—is shown in green, and negative density—parts of the model not backed up by electron density, indicating either that an atom has been disordered by radiation damage or that it is modelled in the wrong place—is shown in red. The typical contouring (display threshold) is set at 3σ.
Calculation.
Difference density maps are usually calculated using Fourier coefficients which are the differences between the observed structure factor amplitudes from the X-ray diffraction experiment and the calculated structure factor amplitudes from the current model, using the phase from the model for both terms (since no phases are available for the observed data). The two sets of structure factors must be on the same scale.
formula_0
It is now normal to also include maximum-likelihood weighting terms which take into account the estimated errors in the current model:
formula_1
where "m" is a figure of merit which is an estimate of the cosine of the error in the phase, and "D" is a "σA" scale factor. These coefficients are derived from the gradient of the likelihood function of the observed structure factors on the basis of the current model. A difference map built with "m" and "D" is known as a mFo – DFc map.
The use of ML weighting reduces model bias (due to using the model's phase) in the 2 Fo–Fc map, which is the main estimate of the true density. However, it does not fully eliminate such bias.
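A minimal NumPy sketch of the two coefficient formulas above (the numbers are made up; in practice the weights m and D come from the refinement program, and phases are expressed here as fractions of a cycle to match the exp(2πiφ) convention):

```python
import numpy as np

# Toy data for three reflections (illustrative values only).
F_obs = np.array([120.0, 85.0, 60.5])    # observed amplitudes |Fobs|
F_calc = np.array([110.0, 90.0, 55.0])   # model amplitudes |Fcalc|
phi_calc = np.array([0.05, 0.27, 0.46])  # model phases, in cycles
m = np.array([0.9, 0.8, 0.7])            # figure of merit per reflection
D = 0.95                                 # sigma-A scale factor

phase = np.exp(2j * np.pi * phi_calc)

# Plain Fo - Fc coefficients.
coeff = (F_obs - F_calc) * phase

# Maximum-likelihood weighted mFo - DFc coefficients.
coeff_ml = (m * F_obs - D * F_calc) * phase

# A Fourier synthesis over all reflections with these coefficients
# yields the difference density map.
print(coeff)
print(coeff_ml)
```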
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C_{diffmap} = (|F_{obs}| - |F_{calc}| ) exp( 2\\pi i \\phi_{calc} ) "
},
{
"math_id": 1,
"text": "C_{diffmap} = ( m |F_{obs}| - D |F_{calc}| ) exp( 2\\pi i \\phi_{calc} ) "
}
]
| https://en.wikipedia.org/wiki?curid=7075678 |
7076593 | Burst mode (computing) | Burst mode is a generic electronics term referring to any situation in which a device is transmitting data repeatedly without going through all the steps required to transmit each piece of data in a separate transaction.
Advantages.
The main advantage of burst mode over single mode is that burst mode typically increases the throughput of data transfer.
Any bus transaction is typically handled by an arbiter, which decides when it should change the granted master and slaves. In burst mode, it is usually more efficient to let a master complete a known-length transfer sequence before the grant is changed.
The total delay of a data transaction can typically be written as the sum of the initial access latency and the sequential access latency.
formula_0
Here the sequential latency is the same in both single mode and burst mode, but the total initial latency is reduced in burst mode, since the initial delay (which usually depends on the finite-state machine of the protocol) is incurred only once per burst. Hence the total latency of the burst transfer is reduced, and the data transfer throughput is increased.
It can also be used by slaves that can optimise their responses if they know in advance how many data transfers there will be. The typical example here is a DRAM which has a high initial access latency, but sequential accesses after that can be performed with fewer wait states.
Beats in burst transfer.
A beat in a burst transfer is the number of write (or read) transfers from master to slave that take place continuously in a transaction. In a burst transfer, the address for each write or read transfer is simply an increment of the previous address. Hence in a 4-beat incremental burst transfer (write or read) with starting address 'A' and increment 'n', the addresses will be 'A', 'A+n', 'A+2*n', 'A+3*n'. Similarly, in an 8-beat incremental burst transfer, the addresses will be 'A', 'A+n', 'A+2*n', 'A+3*n', 'A+4*n', 'A+5*n', 'A+6*n', 'A+7*n'.
Example.
Q:- A certain SoC master uses a burst mode to communicate (write or read) with its peripheral slave. The transaction contains 32 write transfers. The initial latency for a write transfer is 8 ns and the burst sequential latency is 0.5 ns. Calculate the total latency for single mode (no-burst mode), 4-beat burst mode, 8-beat burst mode and 16-beat burst mode. Calculate the throughput increase factor for each burst mode.
Sol:-
Total latency of single mode = num_transfers × (t_initial + t_sequential) = 32 × (8 + 1×0.5) = 32 × 8.5 = 272 ns
Total latency of one 4-beat burst = t_initial + 4 × t_sequential = 8 + 4×0.5 = 10 ns
For 32 write transfers, required 4-beat bursts = 32/4 = 8
Hence, total latency of the 32 write transfers = 10 × 8 = 80 ns
Throughput increase factor using 4-beat burst mode = single-mode latency / total burst-mode latency = 272/80 = 3.4
Total latency of one 8-beat burst = t_initial + 8 × t_sequential = 8 + 8×0.5 = 12 ns
For 32 write transfers, required 8-beat bursts = 32/8 = 4
Hence, total latency of the 32 write transfers = 12 × 4 = 48 ns
Throughput increase factor using 8-beat burst mode = single-mode latency / total burst-mode latency = 272/48 = 5.7
Total latency of one 16-beat burst = t_initial + 16 × t_sequential = 8 + 16×0.5 = 16 ns
For 32 write transfers, required 16-beat bursts = 32/16 = 2
Hence, total latency of the 32 write transfers = 16 × 2 = 32 ns
Throughput increase factor using 16-beat burst mode = single-mode latency / total burst-mode latency = 272/32 = 8.5
From the above calculations, we can conclude that the throughput increases with the number of beats.
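The arithmetic above generalizes to any beat length; the following short Python sketch (function name and parameter defaults are illustrative, taken from the example's numbers) reproduces these results:

```python
def burst_latency(num_transfers, beats, t_initial=8.0, t_sequential=0.5):
    """Total latency (ns) for num_transfers transfers issued as bursts of
    the given beat length: each burst pays t_initial once, then
    t_sequential per beat."""
    bursts = num_transfers // beats
    return bursts * (t_initial + beats * t_sequential)

single = burst_latency(32, 1)            # 272.0 ns, the single-mode baseline
for beats in (4, 8, 16):
    total = burst_latency(32, beats)
    print(f"{beats:2d}-beat: {total:5.1f} ns, speedup x{single / total:.1f}")
# 4-beat:  80.0 ns, speedup x3.4
# 8-beat:  48.0 ns, speedup x5.7
# 16-beat: 32.0 ns, speedup x8.5
```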
Details.
The usual reason for having a burst mode capability, or using burst mode, is to increase data throughput. The steps left out while performing a burst mode transaction typically include supplying the address of each individual piece of data and re-arbitrating for the bus between transfers.
In the case of DMA, the DMA controller and the device are given exclusive access to the bus without interruption; the CPU is also freed from handling device interrupts.
The actual manner in which burst modes work varies from one type of device to another; however, devices that have some sort of standard burst mode include memory devices such as DRAM as well as system buses such as PCI.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ t_{total} = t_{initial} + t_{sequential}"
}
]
| https://en.wikipedia.org/wiki?curid=7076593 |
70769088 | Proverbs 9 | Proverbs 9 is the ninth chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections, with the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the first collection of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 9 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q103a (4QProvc; 30 BCE – 30 CE) with extant verses 16–17.
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a section regarded as the first collection in the book of Proverbs (comprising Proverbs 1–9), known as "Didactic discourses". The Jerusalem Bible describes chapters 1–9 as a prologue of the chapters 10–22:16, the so-called "[actual] proverbs of Solomon", as "the body of the book".
The chapter concludes the first collection or introduction of the book by presenting the final appeals of both wisdom and folly to the 'simpletons' or naive people in the contrasting style of rival hostesses inviting people to dine in their respective houses, where 'wisdom offers life with no mention of pleasure', whereas 'folly offers pleasure with no mention of death', with the following structure:
Appeal to accept Wisdom (9:1–12).
The invitation of Wisdom (verses 3–4) echoes the earlier appeals (cf. Proverbs 1:20–21; 8:1–5). It is addressed to the 'simple' or 'simpletons', that is, the people who need the most to dine with Wisdom but who can be most easily enticed to dine with Folly (cf. Proverbs 1:4). Food and drink (verse 5) figuratively describe Wisdom's instruction (cf. Isaiah 55:1–3; Sirach 15:3; 24:19–21).
"Wisdom has built her house,"
"she has hewn out her seven pillars;"
"She has sent out her maidens,"
"She cries out from the highest places of the city."
Verse 3.
Benson says personified Wisdom may be compared to "a great princess": therefore "it was fit she should be attended on by maidens".
Appeal to accept Folly (9:13–18).
Folly is portrayed in terms of the 'seductress', described as the 'woman of foolishness' (verse 13). The brash manner in which Folly invites the simple to her house (verses 13–16) recalls the solicitations of the seductress (Proverbs 7:11–12) and contrasts with the formality and decorum of Wisdom's invitation. Whereas the banquet of Wisdom promotes and celebrates life (verse 6), to dine with Folly is to banquet with the 'dead' in Sheol (cf. Proverbs 2:18–19; 5:5–6; 7:27).
"A foolish woman is clamorous;"
"she is simple, and knows nothing."
Verse 13.
Like Wisdom in the previous chapter, Folly is also personified as a character, called "Dame Folly" in the Jerusalem Bible, "the woman called Folly" in the New English Translation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70769088 |
70769095 | Proverbs 2 | Proverbs 2 is the second chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections, with the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the first collection of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 2 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q102 (4QProva; 30 BCE – 30 CE) with extant verse 1.
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a section regarded as the first collection in the book of Proverbs (comprising Proverbs 1–9), known as "Didactic discourses". The Jerusalem Bible describes chapters 1–9 as a prologue of the chapters 10–22:16, the so-called "[actual] proverbs of Solomon", as "the body of the book". The chapter starts with an admonition to receive wisdom (verses 1–4), followed by its benefits, such as the knowledge of God and his protection (verses 5–8).
The instruction in this chapter presents "wisdom" as a human quest (verses 1–5) and a divine gift (verses 6–8), which guards its recipients from the way of evil men and loose women (verses 9–19), and guides them in the way of good men (verses 20–22).
Value of Wisdom (2:1–8).
Wisdom is to be pursued with the attentiveness to the father's words and the inclination of the heart (or 'mind') as well as the fervent desire and perseverance (verses 1–4). The prize for getting the wisdom is worth the toil (verse 5) given by God himself (verse 6), effectively maintaining God's moral order ('paths of justice') by 'shielding' that person from the pitfalls and snares of evil (verses 7–8).
"My son, if you receive my words"
"and treasure up my commandments with you,"
Verse 1.
This verse opens one long conditional sentence comprising:
The verb "treasure" qualifies the term “receive” (, , "laqakh", in the first clause, just as “commandments” intensifies “words”. The pattern of 'intensification through parallelism' is found in verses 1 to 4.
Benefits of Wisdom (2:9–22).
The description of Wisdom as a guide and a guard (verses 9–11) echoes the introduction in Proverbs 1:2–7 and is applied in the following verses, in particular against 'evil men' (verses 12–15) and 'loose women' (or 'sexual impurities'; verses 16–19), so that it leads to the way of good persons (verses 20–22).
The theme of the 'loose woman' (verses 16–19) is developed in more detail in Proverbs 5:1–14, 6:20–35, and 7:1–27.
"to deliver you from the adulterous woman,"
"from the loose woman who has flattered you with her words"
Verse 16.
The seductive speech is compared to "olive oil" (Proverbs 5:3) and is recounted (Proverbs 7:14-20).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70769095 |
70769097 | Proverbs 4 | Proverbs 4 is the fourth chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections, with the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the first collection of the book. The Jerusalem Bible entitles this chapter, "On choosing wisdom".
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 4 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a section regarded as the first collection in the book of Proverbs (comprising Proverbs 1–9), known as "Didactic discourses". The Jerusalem Bible describes chapters 1–9 as a prologue of the chapters 10–22:16, the so-called "[actual] proverbs of Solomon", as "the body of the book".
This chapter has the following structure:
Get Wisdom! (4:1–9).
This passage focuses on the value of Wisdom, so it needs to be acquired at all costs (verse 7). The father's appeal (verses 1–2) is reinforced by recounting his own experience when he was taught the lesson by his own parents (verses 3–4), demonstrating the importance of a "home" as the place for an educational discipline to get Wisdom (cf. Exodus 12:26–27; Deuteronomy 6:6–7, 20–25), and the transmission from one generation to the next. In verses 6–9 Wisdom is personified as 'a bride to be wooed', and who, in return, will 'love and honor those who embrace her', in contrast to the spurious love and deadly embrace of the seductress.
"Hear, O children, the instruction of a father,"
"and attend to know understanding."
The right way and the wrong way (4:10–27).
The metaphor of a road with two ways in one's life is important in the teaching of Proverbs, where it occurs many times (cf. Proverbs 1:15, 19; 2:8–22; 3:17, 23, etc.), counseling young people to avoid the path of the wicked and to stay on the way of wisdom ("paths of uprightness" that is "straight and level"; cf. Proverbs 3:6), which is the good path (cf. Proverbs 2:9) and also the secure path (cf. Proverbs 3:23) without fear of stumbling (verse 12; cf. Psalm 18:36), brightly illuminated (verse 18; steadily increasing in brightness from the first flickers of dawn to the full splendor of the noonday sun). On the other hand, the way of the wicked, with its evil activities (Proverbs 1:18–19) and twisted paths (Proverbs 2:12–15), is shrouded in 'deep darkness' (verse 19; the same term is used for the plague of darkness in Egypt in Exodus 10:22, and for the consequences of the day of the Lord in Joel 2:2; Amos 5:20, etc.), which hinders those who walk on it from even seeing what their feet strike on the final, fatal step (cf. Job 18:7–12; Jeremiah 13:16; 23:12). The appeal to accept the father's words (verse 10) resumes in the final paragraph (verses 20–27) because they are 'life' and 'healing' (verse 22; cf. Proverbs 3:8).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70769097 |
70769099 | Proverbs 5 | Proverbs 5 is the fifth chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections, with the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the first collection of the book, focusing on "the dangers of the strange woman".
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 5 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a section regarded as the first collection in the book of Proverbs (comprising Proverbs 1–9), known as "Didactic discourses". The Jerusalem Bible describes chapters 1–9 as a prologue of the chapters 10–22:16, the so-called "[actual] proverbs of Solomon", as "the body of the book".
This chapter has the following structure:
Sub-titled "The Peril of Adultery" in the New King James Version, this chapter contains the first of three poems on the "forbidden woman", the “stranger” outside the social boundaries of Israel; the other two are Proverbs 6:20–35 and Proverbs 7. Verse 5 suggests that the woman is "as bitter as wormwood", a comparison used several times in the Hebrew Bible, by the prophets Jeremiah and Amos, also in Deuteronomy.
Avoid the seductress (5:1–14).
The passage continues the instruction against the "loose woman" (or "seductress") introduced in Proverbs 2:16–19 (cf. Proverbs 6:20–35; 7:1–27), starting with a typical appeal to the child to listen carefully to receive the necessary knowledge for avoiding entanglement with her (verses 1–2). The seductress makes use of her natural sex appeal (cf. Proverbs 6:25), but mainly relies on her seductive speech (cf. Proverbs 7:14–20), which is compared with honey for sweetness (cf. Proverbs 16:24; Judges 14:8, 14; the bride's kisses in Song 4:11) and oil for smoothness (verse 8; flattery in Proverbs 29:5; hypocrisy in Psalm 5:9). A contrast is given in verses 3–4 between honey (sweet) and wormwood (bitter; Jeremiah 9:15; Amos 5:7) and between oil (smooth) and a double-edged sword (sharp; Psalm 55:21). However, any promise of pleasure and enjoyment would bring a different reality 'in the end' (verse 4), as the seductress travels the path to Sheol (verse 5; cf. 2:18–19; 7:27) with 'the unsteady steps of a drunkard' ('wander'; cf. Isaiah 28:7), staggering from one lover to another, not knowing that she brings harm to herself or to her victims (cf. Proverbs 7:21–27; 30:20).
A second appeal for attentiveness (verse 7) is followed by succinct advice (cf. Proverbs 1:15; 4:15) and expositions of the consequences of liaison with her (verses 9–14): the loss of dignity and honor (verse 9), of hard-earned wealth (verse 10), and of vigor and health (verse 11); all of which is the antithesis of Wisdom's benediction (Proverbs 3:13-18). Rejecting wise counsel or learning the lesson too late would produce a lamentation in verses 12–14 (cf. Proverbs 1:24–28).
"My son, attend to my wisdom,"
"and bow your ear to my understanding,"
"that you may keep discretion,"
"and your lips may guard knowledge."
"Wisdom is the principal thing;"
"Therefore get wisdom."
"And in all your getting, get understanding."}}
Verse 7.
Aitken stresses the need to acquire wisdom "at all costs", and the Jerusalem Bible advises that "one must first realise that it is essential to have it and that it demands self-sacrifice". Similarly the modern World English Bible's translation advises, "Yes, though it costs all your possessions, get understanding".
Be faithful to your wife (5:15–23).
This passage more specifically addresses married men, mainly advising that the best way to avoid the temptation of the seductress is to remain faithful to one's wife and derive sexual satisfaction from her, using the imagery of water, fountain, springs and streams to be enjoyed and not wasted (cf. Song 4:12, 15). A husband should always hold an image of his wife as a 'graceful doe', a symbol of her beauty (verse 18; cf. Song 2:7). Verse 21 reminds the husband of the 'scrutinizing eyes of the Lord' (cf. Proverbs 15:3; Job 31:4; 34:21) and his guardianship of the moral order, and that the consequence of indiscipline and folly would be 'reaping what has been sown' (cf. Proverbs 1:19; 2:20–22), like a man threading a noose around his own neck or a senseless bird ensnared in the net (cf. Proverbs 1:17–19).
"Let your fountain be blessed,"
"and rejoice with the wife of your youth."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70769099 |
70769101 | Proverbs 6 | Proverbs 6 is the sixth chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections, with the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the first collection of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 6 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a section regarded as the first collection in the book of Proverbs (comprising Proverbs 1–9), known as "Didactic discourses". The Jerusalem Bible describes chapters 1–9 as a prologue of the chapters 10–22:16, the so-called "[actual] proverbs of Solomon", as "the body of the book".
The structure of the chapter involves several pieces of advice:
The New King James Version entitles the chapters and sections as follows:
Four warnings (6:1–19).
This section contains four miscellaneous sayings which are more reminiscent of the proverbial sayings in chapters 10–31 than the instructions in chapters 1–9:
Verses 16–19 contain a graded numerical saying (cf. Proverbs 30:15–31; Job 5:19; Amos 1:3–2:8) that is particularly useful both as a means of classification and as an aid to memorization. The saying lists 'different kinds of malicious and disruptive activity through a review of the unhealthy body': 'eyes… tongue… hands… heart… feet' (cf. Proverbs 4:23–27), with the addition of 'false witness' and 'one who stirs up strife' to make up the seven vices.
"My son, if you become surety for your friend,"
"If you have shaken hands in pledge for a stranger,"
The price of adultery (6:20–35).
This passage focuses on the instruction to protect against the enticements of the seductress, in particular here of "a married woman". An affair with the adulteress would exact a heavy price, 'a man's very life', as a jealous and enraged husband would seek revenge and demand a higher price than money (verses 34–35).
"Bind them continually upon your heart,"
"and tie them around your neck."
"When you walk, their counsel will lead you."
"When you sleep, they will protect you."
"When you wake up, they will advise you."
"For like a lamp is a commandment, and instruction is light,"
"and the way of life[a] is the reproof of discipline,"
"They will protect you"
"from the flattering words"
"of someone else's wife."
"Don’t hunger in your heart after her beauty."
"Don’t let her eyes capture you."
"For the price of a woman, a prostitute,[a] is the price of a loaf of bread,"
"but the woman belonging to a man[b] hunts precious life."
"Can a man carry fire in his lap"
"without burning his clothes?"
"Or can one walk on hot coals"
"and his feet not be scorched?"
" It is just as dangerous to sleep with another man's wife. Whoever does it will suffer."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70769101 |
70769103 | Proverbs 7 | Book of Proverbs, chapter 7
Proverbs 7 is the seventh chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the first collection of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 7 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q103 (4QProvb; 30 BCE – 30 CE) with extant verses 9, 11.
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a section regarded as the first collection in the book of Proverbs (comprising Proverbs 1–9), known as "Didactic discourses". The Jerusalem Bible describes chapters 1–9 as a prologue of the chapters 10–22:16, the so-called "[actual] proverbs of Solomon", as "the body of the book".
The chapter has the following structure:
The wiles of a harlot (7:1–5).
The appeal for the son to accept the instruction in this section closely echoes 6:20–24. Wisdom is to be treated as a 'sister' (verse 4; cf. a 'bride' in Song 4:9–10), to counter the attraction to the adulteress (cf. Proverbs 4:6–9). It is followed by a story presented in the form of the personal reminiscence of the narrator.
"Keep my commandments and live,"
"and my teaching as the apple of your eye."
The crafty harlot (7:6–27).
This section records "an example story on the wiles of the adulteress ... cast in the form of [a] personal reminiscence". The narrator observes a wayward youth through the lattice of his window (in the Septuagint, it is the woman who looks out of the window seeking her prey). The young man goes through darkening streets towards the house of the adulteress (verses 6–9), where he is accosted by the woman, who is dressed like a prostitute (verses 10–13) and speaks with 'smoothness' (verses 14–20; cf. verse 5)—the harlot's chief weapon (cf. Proverbs 2:16; 5:3; 6:24). Unable to resist her advances and oblivious to the real cost to be paid, the young man follows the harlot like a beast to the slaughter, or a bird caught in her snare (verses 21–23).
The final paragraph (verses 24–27) reinforces the instruction to avoid the deadly paths of the adulteress or harlot, because her house is "the vestibule to Sheol and leads down to death" (cf. Proverbs 2:18–19; 5:8).
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70769103 |
70769104 | Proverbs 8 | Book of Proverbs, chapter 8
Proverbs 8 is the eighth chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the first collection of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 8 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a section regarded as the first collection in the book of Proverbs (comprising Proverbs 1–9), known as "Didactic discourses". The Jerusalem Bible describes chapters 1–9 as a prologue of the chapters 10–22:16, the so-called "[actual] proverbs of Solomon", as "the body of the book". Anglican commentator T. T. Perowne, in the Cambridge Bible for Schools and Colleges, calls the section comprising chapters 1 to 9 "The Appeal of Wisdom", a title also reserved in particular for Proverbs 8.
The chapter contains the so-called "Wisdom's Second Speech" (the "First Speech" is in Proverbs 1:20–33), but whereas in Proverbs 1 Wisdom proclaims her value, and in Proverbs 3:19–26 Wisdom is the agent of creation, here Wisdom is personified, not as a deity like Egypt's Ma'at or the Assyrian-Babylonian Ishtar, but simply presented as a 'self-conscious divine being distinct but subordinate to God', which in reality is the personification of the attribute of wisdom displayed by God. A connection between Wisdom and Jesus Christ lies only in that both reveal the nature of God, but Proverbs 8 presents wisdom as a creation of God, while Jesus' claims to be one with God include wisdom (Matthew 12:42; wisdom is even personified in a way similar to Proverbs in Matthew 11:19) and a unique knowledge of God (Matthew 11:25–27). Paul the Apostle sees the fulfillment of wisdom in Christ (Colossians 1:15–20; 2:3) and affirms that Christ became believers' wisdom in the crucifixion (1 Corinthians 1:24, 30).
The chapter is very significant in Gnosticism, as they take “wisdom” to be referring to Sophia, the divine feminine incarnation of wisdom and truth.
The structure of the chapter involves three cycles of Wisdom's invitation:
Aitken divides this chapter into the following sections:
Wisdom's first invitation (8:1–9).
The introduction (verses 1–3) presents Wisdom as a teacher, without the note of reproach and threat in her first speech (Proverbs 1:20–33). After giving the first invitation (verses 4–5), the emphasis is given on the character of Wisdom's words (verses 6–9) that, in contrast to the duplicitous and fraudulent words of the seductress, the words of Wisdom are in plain language, yet with integrity, which is intelligible to all who find her (verse 9).
"Does not wisdom cry out,"
"and understanding lift up her voice?"
Verse 1.
Wisdom speaks openly and publicly, not in secret or stealthily like the evil seductress, just as Jesus Christ said that he had spoken openly to the world and said nothing in secret (John 18:20).
Some translations and paraphrases personify "Wisdom" and "Understanding" as characters speaking out, for example in the New American Bible, Revised Edition:
<templatestyles src="Template:Blockquote/styles.css" />
and in "The Voice" translation:
<templatestyles src="Template:Blockquote/styles.css" />
"On the heights, beside the way, at the crossroads she takes her stand".}}
Verse 2.
American theologian Albert Barnes notes the contrast between Wisdom's openness and transparency, and the "stealth and secrecy and darkness" which had shrouded the harlot's enticements in chapter 7.
"They are all plain to him who understands,"
"and right to those who find knowledge."
Wisdom's second invitation (8:10–21).
The second invitation in verses 10–11 is very similar to the appeal in Proverbs:14–15, whereas verses 12–14 recall the words of the prologue of the book (Proverbs 1:2–7). In the explanation following the invitation, Wisdom describes her 'providential role in the good and orderly government of the world' (verses 12–16) and 'as the giver of wealth' (verses 17–21).
Wisdom's hymn (8:22–31).
The third invitation is preceded by a hymn of self-praise in two parts by Wisdom (verses 22–31):
Wisdom describes herself as:
"rejoicing in the habitable part of His earth,"
"and my delights were with the sons of men."
Wisdom's third invitation (8:32–36).
Verses 32–36 form a conclusion in connection to the appeal back in verses 3–4.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70769104 |
70769106 | Proverbs 25 | Proverbs 25 is the 25th chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections, with the heading in may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is the last part of the fifth collection of the book, so-called "the Second Solomonic Collection."
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 25 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a further collection of Solomonic proverbs, transmitted and
edited by royal scribes during the reign of Hezekiah, comprising Proverbs 25–29. This collection is introduced within the text as "[the] proverbs of Solomon which the men of Hezekiah king of Judah copied". Hezekiah was the 13th king of Judah, reigning from 726 BCE to 697 BCE, and he is spoken of favorably in the biblical record.
Based on differences in style and subject-matter there could be two originally separate collections:
Aberdeen theologian Kenneth Aitken argues that chapters 25–27 and 28–29 were originally separate collections, while Methodist minister Arno Gaebelein argues that chapters 27–29 as a unit constitute "instructions given to Solomon".
Verses 2 to 7 consist of a series of sayings regarding the king, with verses 6 and 7 containing advice directed to royal officials.
"These are also proverbs of Solomon,"
"which the men of Hezekiah king of Judah copied."
Verse 1.
The proverbs in this collection differ from the earlier ones in that these are 'multiple line sayings using more similes'.
"6Do not exalt yourself in the presence of the king,"
"and do not stand in the place of great men;"
"7for it is better that it be said to you, “Come up here,""
"than that you should be put lower in the presence of the prince,"
"whom your eyes have seen."
Verses 6–7.
David Brown notes that one of Jesus' parables includes "a reproduction" of verses 6 and 7.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70769106 |
70769109 | Proverbs 26 | Proverbs 26 is the 26th chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections, with the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is the last part of the fifth collection of the book, so-called "the Second Solomonic Collection."
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 26 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a further collection of Solomonic proverbs, transmitted and
edited by royal scribes during the reign of Hezekiah, comprising Proverbs 25–29. Based on differences in style and subject-matter there could be two originally separate collections:
The first twelve verses of this chapter, except verse 2 ("Like a flitting sparrow, like a flying swallow, so a curse without cause shall not alight"), form a series of sayings on the 'fool', and so are sometimes called “the Book of Fools”.
"Like snow in summer or rain in harvest,"
"so honor is not fitting for a fool."
"He who passes by and meddles with strife not belonging to him"
"is like one who takes a dog by the ears."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70769109 |
70769112 | Proverbs 27 | Proverbs 27 is the 27th chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections, with the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is the last part of the fifth collection of the book, so-called "the Second Solomonic Collection."
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 27 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a further collection of Solomonic proverbs, transmitted and
edited by royal scribes during the reign of Hezekiah, comprising Proverbs 25–29. Based on differences in style and subject-matter there could be two originally separate collections:
The New King James Version adopts verse 7 as a sub-heading for this chapter, reflecting the argument from Methodist minister Arno Gaebelein that this section represents "instructions given to Solomon". Verses 23 to 27 are distinct and commend the life of a shepherd "as providing the best and most enduring kind of wealth".
"Do not boast about tomorrow,"
"for you do not know what a day may bring forth."
"Sheol and Abaddon are never satisfied,"
"and never satisfied are the eyes of man."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70769112 |
70769114 | Proverbs 28 | Proverbs 28 is the 28th chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is part of the fifth collection of the book, the so-called "Second Solomonic Collection."
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 28 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a further collection of Solomonic proverbs, transmitted and
edited by royal scribes during the reign of Hezekiah, comprising Proverbs 25–29. Based on differences in style and subject-matter there could be two originally separate collections:
"The wicked flee when no one pursues,"
"but the righteous are bold as a lion."
Verse 2.
<templatestyles src="Template:Blockquote/styles.css" />
The New Revised Standard Version attempts to clarify the verse with a more intelligible reading:
<templatestyles src="Template:Blockquote/styles.css" />
The reign of Hezekiah is associated with attempts to restore the union of Judah and Israel by political and religious means, which both proved unsuccessful.
In the Septuagint, this verse is presented as a saying about quarrelling:
<templatestyles src="Template:Blockquote/styles.css" />
Verse 8.
<templatestyles src="Template:Blockquote/styles.css" />
Verse 9.
<templatestyles src="Template:Blockquote/styles.css" />
Verse 10.
<templatestyles src="Template:Blockquote/styles.css" />
Verse 11.
<templatestyles src="Template:Blockquote/styles.css" />
Verse 12.
<templatestyles src="Template:Blockquote/styles.css" />
Verse 13.
<templatestyles src="Template:Blockquote/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70769114 |
70769116 | Proverbs 29 | Proverbs 29 is the 29th chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is the last part of the fifth collection of the book, the so-called "Second Solomonic Collection."
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 29 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a further collection of Solomonic proverbs, transmitted and
edited by royal scribes during the reign of Hezekiah, comprising Proverbs 25–29. Based on differences in style and subject-matter there could be two originally separate collections:
"He who is often reproved, yet hardens his neck,"
"will suddenly be destroyed, and that without remedy."
"When the righteous increase, the people rejoice,"
"but when the wicked rule, the people groan."
Verse 14.
<templatestyles src="Template:Blockquote/styles.css" />
Methodist commentator Joseph Benson makes the point that a king who judges the poor "faithfully" (the word used in the King James Version) also judges the rich "faithfully", but he argues that the proverb "names the poor, because these are much oppressed and injured by others, and least regarded by princes, and yet committed to their more especial care".
Verse 27.
<templatestyles src="Template:Blockquote/styles.css" />
This final verse of chapter 29 has additional words in the Latin Vulgate, "Verbum custodiens filius extra perditionem erit", which appear in some versions of the Septuagint after Proverbs 24:22, and are translated in the Douay-Rheims 1899 American Edition as "The son that keepeth the word, shall be free from destruction".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70769116 |
70769181 | Proverbs 23 | Proverbs 23 is the 23rd chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter specifically records "the sayings of the wise".
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 23 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter is a part of the third collection in the book of Proverbs (comprising Proverbs 22:17–24:22), which consists of seven instructions of various lengths:
The sayings are predominantly in the form of synonymous parallelism, preceded by a general superscription of the entire collection in 22:17a: "The words of the wise" (or "Sayings of the Wise"). The collection opens with an introduction urging that youths be instructed and exhorted to listen to and obey their "teachers" (parents), followed by a series of admonitions and prohibitions coupled with a variety of clauses, primarily presented as short parental instructions (cf. 23:15, 22; 24:13, 21).
The 'thirty sayings' (Proverbs 22:20) in this collection are thought to be modelled on the 'thirty chapters' of the Egyptian Instruction of "Amen-em-ope", son of Kanakht (most likely from the Ramesside Period, ca. 1300–1075 BCE), although the parallels extend only through Proverbs 22:17–23:11 and the extent of the dependence is debatable.
True riches (23:1–21).
This section forms the body of a collection titled "Sayings of the Wise" (22:17), containing five of the seven sets of instruction.
Verses 1–3 give some further advice about table manners during a royal feast: to 'put a knife to your throat' (a forceful expression for 'curb your appetite') in front of 'deceptive food' (literally, "bread of lies"), because there could be an ulterior motive behind the abundant hospitality that can cause one's undoing. Verses 4–5 warn against accruing wealth as the main goal in life, because riches are like a mirage: no sooner here than gone. Verses 10–11 warn against appropriating the land of defenseless people by removing boundary stones (cf. 15:25; 22:28), because although there is no human 'kinsman' to defend their rights (cf. Leviticus 25:25; Ruth 4), God himself will become their redeemer (cf. 22:23). Verses 13–14 affirm the value of disciplining children (cf. 13:24; 20:30; 22:15), as this will save them from following the paths leading to death and direct them along the path of life (cf. 13:14; 15:24). Verses 19–21 advise avoiding the company of drunkards and gluttons, as excessive eating and drinking lead to indiscipline, inertia and ultimately to poverty.
"Do not eat the bread of a man who is stingy; do not desire his delicacies,"
"Selfish people are always worrying"
"about how much the food costs."
Verse 7.
They tell you, “Eat and drink,”
"but they don’t really mean it."
Listen to your father and mother (23:22–35).
A reminder to take heed of the advice from one's father and mother precedes the warning against the seductress (verses 26–28), who is likened to a deep and narrow 'pit' (cf. Jeremiah 38:6–13; probably representing the gateway to Sheol, cf. 2:18–19; 5:5, 27; 22:14), or to a huntress who traps (cf. 7:22–23) and to a robber who lies in wait for her victims (cf. 7:12). Verses 29–35 describe the seduction of a drunkard by the power of wine, whose 'eye' ('sparkles' in verse 31 is literally 'gives its eye') and 'smoothness' (cf. Song of Songs 7:9) are comparable to the words of the seductress in chapters 1–9 (cf. 6:24–25). In both cases the promise of pleasure and enjoyment ('at the last', verse 32; 'in the end', 5:4) leads to degenerative effects, both physical and mental, on its victims (verses 29, 33–35).
"Do not look on the wine when it is red,"
"When it sparkles in the cup,"
"When it swirls around smoothly;"
"For in the end it bites like a poisonous snake;"
"it stings like a viper."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70769181 |
70769182 | Proverbs 22 | Proverbs 22 is the 22nd chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter records parts of the second and third collections of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 22 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
Verses 1–16 belong to a section regarded as the second collection in the book of Proverbs (comprising Proverbs 10:1–22:16), also called "The First 'Solomonic' Collection" (the second one being Proverbs 25:1–29:27). The collection contains 375 sayings, each of which consists of two parallel phrases, except for Proverbs 19:7, which consists of three parts.
Verses 17–29 are grouped to the third collection in the book (comprising Proverbs 22:17–24:22), which consists of seven instructions of various lengths:
The sayings are predominantly in the form of synonymous parallelism, preceded by a general superscription of the entire collection in 22:17a: "The words of the wise" (or "Sayings of the Wise"). The collection opens with an introduction urging that youths be instructed and exhorted to listen to and obey their "teachers" (parents), followed by a series of admonitions and prohibitions coupled with a variety of clauses, primarily presented as short parental instructions (cf. 23:15, 22; 24:13, 21).
The 'thirty sayings' (Proverbs 22:20) in this collection are thought to be modelled on the 'thirty chapters' of the Egyptian Instruction of "Amen-em-ope", son of Kanakht (most likely from the Ramesside Period, ca. 1300–1075 BCE), although the parallels extend only through Proverbs 22:17–23:11 and the extent of the dependence is debatable.
Good name (22:1–16).
Verse 1 teaches that a name is 'an expression of the inner character and worth of its bearer' (cf. Genesis 32:28) and that it survives one's death (cf. Proverbs 10:7).
Verse 2 observes that 'rich and poor' are to be found side by side and are equally the creatures of God (cf. Proverbs 29:13), and verse 9 urges generosity toward the poor (cf. 14:31). Verse 6 emphasizes the importance of parental instruction in the home (cf. 19:18), with verse 15 reinforcing the value of the rod in educating children (cf. Proverbs 3:24). Verse 13 displays the inventiveness of a lazy person in making excuses for doing nothing (cf. 26:13). Verse 14 resumes the theme of 'the seductress' from the first section of the book, recalling the seductive speech of the loose woman (cf. 5:3), which, in conjunction with 'pit', may imply the entrance to the underworld (cf. Proverbs 1:12; 2:18–19; 5:5, 27).
A good name is rather to be chosen than great riches,
and loving favor rather than silver and gold.
Sayings of the wise (22:17–29).
This section contains the first of seven sets of instruction in a collection titled "Sayings of the Wise" (22:17), with verses 17–21 as an introduction. Verses 22–23 warn against the oppression of the poor using the legal system ('at the gate') as an instrument of exploitation (cf. Isaiah 10:1–2; Amos 5:12), because God, as the protector of the poor, will take up their cause (cf. Exodus 22:22–24). Verses 24–25 warn that the 'ways' of hotheads are ultimately the way of death.
"Do not rob the poor because he is poor,"
"neither oppress the afflicted in the gate;"
"for the Lord will plead their cause,"
"and spoil the soul of those who spoiled them."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70769182 |
70769187 | Proverbs 18 | Proverbs 18 is the eighteenth chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the second collection of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 18 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a section regarded as the second collection in the book of Proverbs (comprising Proverbs 10:1–22:16), also called "The First 'Solomonic' Collection" (the second one being Proverbs 25:1–29:27). The collection contains 375 sayings, each of which consists of two parallel phrases, except for Proverbs 19:7, which consists of three parts.
"A man who isolates himself seeks his own desire;"
"He rages against all wise judgment."
Verse 1.
The person of misanthropic isolation described here is not merely antisocial, but becomes a problem for society, since he will defy sound judgment.
"It is not good to favor the wicked,"
"or to turn aside the righteous in judgment."
Verse 5.
While partiality in judgement is condemned in verse 5, verse 17 cautions against reaching a premature verdict before a case has been carefully cross-examined; if legal processes cannot resolve the case, it is to be submitted to divine arbitration (verse 18; cf. Proverbs 16:33).
"The words of a fool start fights;"
"do him a favor and gag him."
"Death and life are in the power of the tongue:"
"and they that love it shall eat the fruit thereof."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70769187 |
70769189 | Proverbs 24 | Proverbs 24 is the 24th chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. This chapter specifically records "the sayings of the wise".
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 24 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
Verses 1–22 are part of the third collection in the book of Proverbs (comprising Proverbs 22:17–24:22), which consists of seven instructions of various lengths:
The sayings are predominantly in the form of synonymous parallelism, preceded by a general superscription of the entire collection in 22:17a: "The words of the wise" (or "Sayings of the Wise"). The collection opens with an introduction urging that youths be instructed and exhorted to listen to and obey their "teachers" (parents), followed by a series of admonitions and prohibitions coupled with a variety of clauses, primarily presented as short parental instructions (cf. 23:15, 22; 24:13, 21).
The remaining verses of this chapter (24:23–34) form the fourth collection in the book, introduced by a superscription "These also are sayings of the wise" (24:23a).
Sayings of the Wise (24:1–22).
This section concludes a collection titled "Sayings of the Wise" (22:17), with three sets of instruction: one continuing from Proverbs 23:16 until 24:12, followed by 24:13–20 and 24:21–22. The instructions were likely given by a teacher in the context of a royal school during the monarchical period. The Greek Septuagint version contains five additional verses after verse 22, mainly on 'the wrath of the king'.
"Through wisdom is a house built"
"and by understanding it is established;"
Verse 3.
The 'building of the house' in verses 3–4 parallels the building of a house by Woman Wisdom in Proverbs 9:1, here stating that wisdom is 'the key to the prosperity of the family', as well as 'the key to healthy and harmonious family relationships'.
"For a righteous man may fall seven times"
"And rise again,"
"But the wicked shall fall by calamity."
Further sayings of the Wise (24:23–34).
The whole section is the fourth collection in the book of Proverbs, consisting of:
The first part of the collection (verses 23–29) contains warnings against partiality in judging (verses 23–25) or false testimony when acting as a witness (verse 28; cf. 18:5; 28:21). The second part (verses 30–34) provides an example story of laziness and its consequences (cf. 7:6–23), reinforcing the lesson of the diligent ant in 6:10–11. The instruction is presented so that it can be perceived 'through the eye as well as the ear' ('saw... considered... received instruction', verse 32).
"Be not a witness against your neighbor without cause, and do not deceive with your lips."
Uses.
In-N-Out Burger prints the text "PROVERBS 24:16" on the bottom of its fry boats, referring to the 16th verse of this chapter.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70769189 |
70769231 | Proverbs 10 | Proverbs 10 is the tenth chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the second collection of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 10 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q103 (4QProvb; 30 BCE – 30 CE) with extant verses 30–32.
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a section regarded as the second collection in the book of Proverbs (comprising Proverbs 10:1–22:16), also called "The First 'Solomonic' Collection" (the second one being Proverbs 25:1–29:27). The collection contains 375 sayings (375 is the numerical value of the Hebrew name "Solomon"), each of which consists of two parallel phrases, except for Proverbs 19:7, which consists of three parts.
"The proverbs of Solomon."
"A wise son makes a glad father,"
"but a foolish son is the grief of his mother."
Verse 1.
This verse opens a new, different section, following the parental appeals in chapters 1–9, with a proverb observing the effect on parents of the wisdom or folly of their child (cf. Proverbs 15:20; 17:21, 25), which determines not only the parents' joy or sorrow, but also the family's reputation (cf. Proverbs 28:7) and prosperity (cf. Proverbs 29:3).
"The rich man's wealth is his strong city:"
"the destruction of the poor is their poverty."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70769231 |
7077416 | Avidity | In biochemistry, avidity refers to the accumulated strength of "multiple" affinities of individual non-covalent binding interactions, such as between a protein receptor and its ligand, and is commonly referred to as functional affinity. Avidity differs from affinity, which describes the strength of a "single" interaction. However, because individual binding events increase the likelihood of occurrence of other interactions (i.e., increase the local concentration of each binding partner in proximity to the binding site), avidity should not be thought of as the mere sum of its constituent affinities but as the combined effect of all affinities participating in the biomolecular interaction. A particularly important aspect relates to the phenomenon of 'avidity entropy'. Biomolecules often form heterogeneous complexes or homogeneous oligomers, multimers or polymers. If clustered proteins form an organized matrix, such as the clathrin coat, the interaction is described as matricity.
Antibody-antigen interaction.
Avidity is commonly applied to antibody interactions in which multiple antigen-binding sites simultaneously interact with the target antigenic epitopes, often in multimerized structures. Individually, each binding interaction may be readily broken; however, when many binding interactions are present at the same time, transient unbinding of a single site does not allow the molecule to diffuse away, and binding of that weak interaction is likely to be restored.
Each antibody has at least two antigen-binding sites, therefore antibodies are bivalent to multivalent. Avidity (functional affinity) is the accumulated strength of multiple affinities. For example, IgM is said to have low affinity but high avidity because it has 10 weak binding sites for antigen as opposed to the 2 stronger binding sites of IgG, IgE and IgD with higher single binding affinities.
Affinity.
Binding affinity is a measure of dynamic equilibrium, given by the ratio of the on-rate (kon) and off-rate (koff) under specific concentrations of reactants. The affinity constant, Ka, is the inverse of the dissociation constant, Kd. The strength of complex formation in solution is related to the stability constants of the complexes; however, in the case of large biomolecules, such as receptor-ligand pairs, the interaction also depends on other structural and thermodynamic properties of the reactants, as well as their orientation and immobilization.
Several methods exist to investigate protein–protein interactions, differing in the immobilization of each reactant in a 2D or 3D orientation. The measured affinities are stored in public databases, such as the Ki Database and BindingDB. As an example, affinity is the binding strength between the complex structures of the epitope of an antigenic determinant and the paratope of an antibody's antigen-binding site. Participating non-covalent interactions may include hydrogen bonds, electrostatic bonds, van der Waals forces and hydrophobic effects.
Calculation of binding affinity for a bimolecular reaction (one antibody binding site per antigen):
<chem>[Ab] + [Ag] <=> [AbAg]</chem>
where [Ab] is the antibody concentration and [Ag] is the antigen concentration, either in free ([Ab],[Ag]) or bound ([AbAg]) state.
Calculation of the association constant (or equilibrium constant):
formula_0
Calculation of the dissociation constant:
formula_1
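As an illustration of these relations, the following Python sketch computes Ka and Kd from equilibrium concentrations; the numerical values are invented for the example and do not describe any measured system.

```python
# Illustrative sketch (not from the source): association constant K_a and
# dissociation constant K_d for a simple bimolecular antibody-antigen
# equilibrium, using hypothetical equilibrium concentrations in mol/L.

free_ab = 2.0e-9       # free antibody concentration [Ab], M (assumed value)
free_ag = 5.0e-9       # free antigen concentration [Ag], M (assumed value)
complex_abag = 4.0e-8  # bound complex concentration [AbAg], M (assumed value)

# K_a = [AbAg] / ([Ab][Ag]); a larger K_a means tighter binding.
k_a = complex_abag / (free_ab * free_ag)

# K_d is the inverse of K_a; a smaller K_d means tighter binding.
k_d = 1.0 / k_a

print(f"K_a = {k_a:.3e} 1/M")
print(f"K_d = {k_d:.3e} M")
```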
Application.
Avidity tests for rubella virus, "Toxoplasma gondii", cytomegalovirus (CMV), varicella zoster virus, human immunodeficiency virus (HIV), hepatitis viruses, Epstein–Barr virus, and others have been developed. These tests help to distinguish acute, recurrent or past infection by the avidity of marker-specific IgG. Two avidity assays are currently in use: the well-known chaotropic (conventional) assay and the more recently developed AVIcomp (avidity competition) assay.
See also.
A number of technologies exist to characterise the avidity of molecular interactions including switchSENSE and surface plasmon resonance.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "K_a = \\frac{k_\\ce{on}}{k_\\ce{off}} = \\frac\\ce{[AbAg]}\\ce{[Ab][Ag]}"
},
{
"math_id": 1,
"text": "K_d = \\frac{k_\\ce{off}}{k_\\ce{on}} = \\frac\\ce{[Ab][Ag]}\\ce{[AbAg]}"
}
]
| https://en.wikipedia.org/wiki?curid=7077416 |
70774614 | Proximal policy optimization | Model-free reinforcement learning algorithm
<templatestyles src="Machine learning/styles.css"/>
Proximal policy optimization (PPO) is an algorithm in the field of reinforcement learning that trains a computer agent's decision function to accomplish difficult tasks. PPO was developed by John Schulman in 2017 and became the default reinforcement learning algorithm at the American artificial intelligence company OpenAI. By 2018 PPO had achieved a wide variety of successes, such as controlling a robotic arm, beating professional players at Dota 2, and excelling in Atari games. Many experts called PPO the state of the art because it seems to strike a balance between performance and comprehension. Compared with other algorithms, the three main advantages of PPO are simplicity, stability, and sample efficiency.
PPO is classified as a policy gradient method for training an agent's policy network, the function that the agent uses to make decisions. Essentially, to train the right policy network, PPO takes a small policy update (step size), so the agent can reliably reach the optimal solution. A step that is too big may move the policy in the wrong direction, with little possibility of recovery; a step that is too small lowers overall efficiency. Consequently, PPO implements a clip function that constrains the policy update of an agent from being too large or too small.
Development.
Reinforcement learning (RL), to which PPO belongs, has roots in psychology and neuroscience. Compared with other fields of machine learning, reinforcement learning closely mimics the kind of learning that humans and other animals do. Many of the core algorithms, including PPO, were originally inspired by biological learning systems, such as psychologist Edward Thorndike's learning by trial and error (1913).
In 2015, John Schulman introduced Trust Region Policy Optimization (TRPO) as a predecessor of PPO. TRPO addressed the instability issue found in the earlier deep Q-network (DQN) algorithm by using a trust region constraint to regulate the KL divergence between the old and new policies. However, TRPO is computationally complicated and inefficient due to its second-order optimization, making implementation expensive and difficult for large-scale problems.
In 2017, John Schulman solved the complexity issue of TRPO by adopting first-order optimization in PPO. Schulman and his team designed a clipping mechanism that forbids the new policy from deviating significantly from the old one when the likelihood ratio between them is outside the clipping range. In other words, PPO modifies TRPO's objective function with a penalty for too-large policy updates. PPO also drops the complicated trust region constraints, using the clipping function instead. As a result, PPO improves on TRPO in both performance and ease of implementation.
Theory.
This section first explores the key components of PPO's core algorithm, and then examines the main objective function of PPO.
Basic Concepts.
To begin PPO's training process, the agent is placed in an environment to perform actions based on its current input. In the early phase of training, the agent can freely explore solutions and keep track of the results. Later, with a certain amount of data and policy updates, the agent selects an action to take by randomly sampling from the probability distribution formula_0 generated by the policy network. The actions that are most likely to be beneficial have the highest probability of being selected from the random sample. After an agent arrives at a different scenario (known as a state) by acting, it is rewarded with a positive or negative reward. The objective of an agent is to maximize its total reward across a series of states, a sequence referred to as an episode. The agent learns from experience to perform the best actions, and this decision function is called a policy.
Policy Gradient Laws: Advantage Function A.
As an essential part of PPO, the advantage function tries to answer the question of whether a specific action of the agent is better or worse than the other possible actions in a given state. By definition, the advantage function is an estimate of the relative value of a selected action. A positive output of the advantage function means that the chosen action is better than the average return, so the probability of that specific action will increase, and vice versa.
Advantage function calculation: A = discounted sum (Q) - baseline estimate (V). The first part, the discounted sum, is the total weighted reward for the completion of a current episode. More weight will be given to a specific action that brings easy and quick rewards. On the other hand, less weight will be credited to actions that need significant effort but offer disproportionate rewards. Since the advantage function is calculated after the completion of an episode, the program records the outcome of the episode. Therefore, calculating advantage is essentially an unsupervised learning problem. The second part, the baseline estimate, is the value function that outputs the expected discounted sum of an episode starting from the current state. In the PPO algorithm, the baseline estimate will be noisy (with some variances) because it utilizes a neural network. With the two parts computed, the advantage function is calculated by subtracting the baseline estimate from the actual return (discounted sum). A > 0 signifies how much better the actual return of the action is based on the expected return from experience; A < 0 implies how bad the actual return is based on the expected return.
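The following Python sketch (not from the source) illustrates the calculation described above: the discounted reward-to-go is computed for one finished episode, and the baseline estimate is subtracted from it. The reward sequence, discount factor, and value estimates are all invented for illustration.

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Total discounted reward-to-go G_t for each step of one episode."""
    returns = np.zeros_like(rewards, dtype=float)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Hypothetical episode: per-step rewards and the value network's
# (noisy) baseline estimates V(s_t) for the same states.
rewards = np.array([0.0, 0.0, 1.0, 0.0, 5.0])
values = np.array([1.2, 1.5, 1.1, 2.8, 3.0])

# Advantage A_t = G_t - V(s_t): positive means the action did better
# than the baseline expected, negative means worse.
advantages = discounted_returns(rewards) - values
print(advantages)
```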
Ratio Function.
In PPO, the ratio function calculates the probability of taking action "a" at state "s" under the current policy network, divided by the probability under the previous (old) version of the policy.
In this function, "rt"("θ") denotes the probability ratio between the current and old policies: "rt"("θ") = "πθ"("at" | "st") / "πθold"("at" | "st").
This ratio function can easily estimate the divergence between old and current policies.
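In implementations, the ratio is typically computed from log-probabilities for numerical stability. A minimal sketch, with placeholder values standing in for the outputs of the two policy networks:

```python
import numpy as np

# Log-probabilities of the sampled actions under the current policy
# network and under the frozen pre-update ("old") policy. These arrays
# are placeholders for the outputs of the two networks.
logp_new = np.array([-1.10, -0.30, -2.00])
logp_old = np.array([-1.00, -0.50, -1.90])

# r_t(theta) = pi_new(a_t|s_t) / pi_old(a_t|s_t), computed as
# exp(log pi_new - log pi_old) to avoid numerical underflow.
ratio = np.exp(logp_new - logp_old)
print(ratio)  # values near 1 indicate little divergence from the old policy
```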
PPO's Objective Function.
The central objective function of PPO takes the expectation operator (denoted as E), which means that the function is computed as an average over batches of trajectories. The expectation operator takes the minimum of two terms:
1. R-theta * Advantage Function: the product of the ratio function and the advantage function introduced in TRPO, also known as the normal policy gradient objective.
2. Clipped (R-theta) * Advantage Function: the policy ratio is first clipped between 1 − epsilon and 1 + epsilon (generally, epsilon is defined to be 0.20); the clipped ratio is then multiplied by the advantage.
The fundamental intuition behind PPO is the same as that of TRPO: conservatism. Clipping makes the "advantage estimate" of the new policy conservative. The reasoning behind conservatism is that if agents make significant changes based on high advantage estimates, the policy update will be large and unstable, and may "fall off the cliff" (with little possibility of recovery). There are two common applications of the clipping function. When an action under the new policy happens to be a very good action according to the advantage function, the clipping function limits how much credit can be given to the new policy for up-weighted good actions. Conversely, when an action under the old policy is judged to be bad, the clipping function constrains how much slack the agent can cut the new policy for down-weighted bad actions. Consequently, the clipping mechanism discourages any incentive to move beyond the defined range by clipping in both directions. The advantage of this method is that it can be optimized directly with gradient descent, as opposed to TRPO's strict KL divergence constraint, which makes the implementation faster and cleaner.
After computing the clipped surrogate objective function, the program has two probability ratios: one non-clipped and one clipped; then, by taking the minimum of the two objectives, the final objective becomes a lower bound (pessimistic bound) of what an agent knows is possible. In other words, the minimum method makes sure that the agent is doing the safest possible update.
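A minimal sketch of the clipped surrogate objective described above; the ratios, advantages, and epsilon value are assumed for illustration:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, epsilon=0.2):
    """Per-sample clipped surrogate objective:
    min(r * A, clip(r, 1 - eps, 1 + eps) * A).
    Taking the minimum makes the objective a pessimistic lower bound."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantage
    return np.minimum(unclipped, clipped)

# Hypothetical probability ratios and advantage estimates for a batch.
ratio = np.array([0.7, 1.0, 1.5])
advantage = np.array([1.0, -2.0, 3.0])

# In training one would *maximize* the mean of this objective
# (equivalently, minimize its negative with gradient descent).
objective = ppo_clip_objective(ratio, advantage)
print(objective.mean())
```

For the sample with ratio 1.5 and advantage 3, the clipped term (1.2 * 3 = 3.6) is smaller than the unclipped term (4.5), so the clip caps how much credit the up-weighted good action receives, exactly as described above.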
Advantages.
Simplicity.
PPO approximates what TRPO does without doing too much computation. It uses first-order optimization (the clip function) to constrain the policy update, while TRPO uses KL divergence constraints outside the objective function (second-order optimization). Compared with TRPO, the PPO method is relatively easy to implement and takes less computation time. Therefore, it is cheaper and more efficient to use PPO in large-scale problems.
Stability.
While other reinforcement learning algorithms require extensive hyperparameter tuning, PPO does not necessarily need it (epsilon = 0.2 can be used in most cases). Also, PPO does not require sophisticated optimization techniques; it can be easily practiced with standard deep learning frameworks and generalized to a broad range of tasks.
Sample efficiency.
Sample efficiency indicates whether an algorithm needs more or less data to train a good policy. On-policy algorithms, including PPO and TRPO, generally have a low level of sample efficiency. However, PPO achieves reasonable sample efficiency because of its use of surrogate objectives: the surrogate objective keeps the new policy from moving too far from the old policy, and the clip function regularizes the policy update and allows training data to be reused. Sample efficiency is especially useful for complicated and high-dimensional tasks, where data collection and computation can be costly.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P(A|S)"
}
]
| https://en.wikipedia.org/wiki?curid=70774614 |
707787 | Brushless DC electric motor | Synchronous electric motor powered by an electronic controller
A brushless DC electric motor (BLDC), also known as an electronically commutated motor, is a synchronous motor using a direct current (DC) electric power supply. It uses an electronic controller to switch DC currents to the motor windings producing magnetic fields that effectively rotate in space and which the permanent magnet rotor follows. The controller adjusts the phase and amplitude of the current pulses that control the speed and torque of the motor. It is an improvement on the mechanical commutator (brushes) used in many conventional electric motors.
The construction of a brushless motor system is typically similar to a permanent magnet synchronous motor (PMSM), but can also be a switched reluctance motor, or an induction (asynchronous) motor. They may also use neodymium magnets and be outrunners (the stator is surrounded by the rotor), inrunners (the rotor is surrounded by the stator), or axial (the rotor and stator are flat and parallel).
The advantages of a brushless motor over brushed motors are high power-to-weight ratio, high speed, nearly instantaneous control of speed (rpm) and torque, high efficiency, and low maintenance. Brushless motors find applications in such places as computer peripherals (disk drives, printers), hand-held power tools, and vehicles ranging from model aircraft to automobiles. In modern washing machines, brushless DC motors have allowed replacement of rubber belts and gearboxes by a direct-drive design.
Background.
Brushed DC motors were invented in the 19th century and are still common. Brushless DC motors were made possible by the development of solid state electronics in the 1960s.
An electric motor develops torque by keeping the magnetic fields of the rotor (the rotating part of the machine) and the stator (the fixed part of the machine) misaligned. One or both sets of magnets are electromagnets, made of a coil of wire wound around an iron core. DC running through the wire winding creates the magnetic field, providing the power that runs the motor. The misalignment generates a torque that tries to realign the fields. As the rotor moves, and the fields come into alignment, it is necessary to move either the rotor's or stator's field to maintain the misalignment and continue to generate torque and movement. The device that moves the fields based on the position of the rotor is called a commutator.
Brush commutator.
In brushed motors this is done with a rotary switch on the motor's shaft called a commutator. It consists of a rotating cylinder or disc divided into multiple metal contact segments on the rotor. The segments are connected to conductor windings on the rotor. Two or more stationary contacts called "brushes", made of a soft conductor such as graphite, press against the commutator, making sliding electrical contact with successive segments as the rotor turns. The brushes selectively provide electric current to the windings. As the rotor rotates, the commutator selects different windings and the directional current is applied to a given winding such that the rotor's magnetic field remains misaligned with the stator and creates a torque in one direction.
The brush commutator has disadvantages that have led to a decline in the use of brushed motors, including friction and brush wear, sparking at the commutator, and the resulting electrical noise.
During the last hundred years, high-power DC brushed motors, once the mainstay of industry, were replaced by alternating current (AC) synchronous motors. Today, brushed motors are used only in low-power applications or where only DC is available, but the above drawbacks limit their use even in these applications.
Brushless solution.
In brushless DC motors, an electronic controller replaces the brush commutator contacts. An electronic sensor detects the angle of the rotor and controls semiconductor switches, such as transistors, that switch current through the windings, either reversing the direction of the current or, in some motors, turning it off, at the correct angle so the electromagnets create torque in one direction. The elimination of the sliding contact allows brushless motors to have less friction and longer life; their working life is limited only by the lifetime of their bearings.
Brushed DC motors develop a maximum torque when stationary, linearly decreasing as velocity increases. Some limitations of brushed motors can be overcome by brushless motors; they include higher efficiency and lower susceptibility to mechanical wear. These benefits come at the cost of potentially less rugged, more complex, and more expensive control electronics.
A typical brushless motor has permanent magnets that rotate around a fixed armature, eliminating problems associated with connecting current to the moving armature. An electronic controller replaces the commutator assembly of the brushed DC motor, which continually switches the phase to the windings to keep the motor turning. The controller performs similar timed power distribution by using a solid-state circuit rather than the commutator system.
Brushless motors offer several advantages over brushed DC motors, including high torque to weight ratio, increased efficiency producing more torque per watt, increased reliability, reduced noise, longer lifetime by eliminating brush and commutator erosion, elimination of ionizing sparks from the commutator, and an overall reduction of electromagnetic interference (EMI). With no windings on the rotor, they are not subjected to centrifugal forces, and because the windings are supported by the housing, they can be cooled by conduction, requiring no airflow inside the motor for cooling. This in turn means that the motor's internals can be entirely enclosed and protected from dirt or other foreign matter.
Brushless motor commutation can be implemented in software using a microcontroller, or may alternatively be implemented using analog or digital circuits. Commutation with electronics instead of brushes allows for greater flexibility and capabilities not available with brushed DC motors, including speed limiting, microstepping operation for slow and fine motion control, and a holding torque when stationary. Controller software can be customized to the specific motor being used in the application, resulting in greater commutation efficiency.
The maximum power that can be applied to a brushless motor is limited almost exclusively by heat; too much heat weakens the magnets and damages the windings' insulation.
When converting electricity into mechanical power, brushless motors are more efficient than brushed motors primarily due to the absence of brushes, which reduces mechanical energy loss due to friction. The enhanced efficiency is greatest in the no-load and low-load regions of the motor's performance curve.
Environments and requirements in which manufacturers use brushless-type DC motors include maintenance-free operation, high speeds, and operation where sparking is hazardous (i.e. explosive environments) or could affect electronically sensitive equipment.
The construction of a brushless motor resembles a stepper motor, but the motors have important differences due to differences in implementation and operation. While stepper motors are frequently stopped with the rotor in a defined angular position, a brushless motor is usually intended to produce continuous rotation. Both motor types may have a rotor position sensor for internal feedback. Both a stepper motor and a well-designed brushless motor can hold finite torque at zero RPM.
Controller implementations.
Because the controller implements the traditional brushes' functionality, it needs to know the rotor's orientation relative to the stator coils. This is automatic in a brushed motor due to the fixed geometry of the rotor shaft and brushes. Some designs use Hall effect sensors or a rotary encoder to directly measure the rotor's position. Others measure the back-EMF in the undriven coils to infer the rotor position, eliminating the need for separate Hall effect sensors. These are therefore often called "sensorless" controllers.
Controllers that sense rotor position based on back-EMF have extra challenges in initiating motion because no back-EMF is produced when the rotor is stationary. This is usually accomplished by beginning rotation from an arbitrary phase, and then skipping to the correct phase if it is found to be wrong. This can cause the motor to run backwards briefly, adding even more complexity to the startup sequence. Other sensorless controllers are capable of measuring winding saturation caused by the position of the magnets to infer the rotor position.
A typical controller contains three polarity-reversible outputs controlled by a logic circuit. Simple controllers employ comparators working from the orientation sensors to determine when the output phase should be advanced. More advanced controllers employ a microcontroller to manage acceleration, control motor speed and fine-tune efficiency.
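As a sketch of the sensored approach described above, the following Python snippet implements a six-step commutation lookup table keyed by the three Hall sensor readings. The particular Hall codes and phase assignments are an assumed example; real motors differ in sensor placement and phase ordering, so this is not any specific controller's firmware.

```python
# Illustrative sketch: six-step commutation of a three-phase BLDC motor
# driven from three Hall-effect sensors. The Hall state encodes the
# rotor's 60-degree sector; the table selects which phase is driven
# high, which is driven low, and which is left floating.

# hall state (A, B, C) -> (high phase, low phase, floating phase)
# NOTE: this mapping is an assumed example, not a universal standard.
COMMUTATION_TABLE = {
    (1, 0, 1): ("A", "B", "C"),
    (1, 0, 0): ("A", "C", "B"),
    (1, 1, 0): ("B", "C", "A"),
    (0, 1, 0): ("B", "A", "C"),
    (0, 1, 1): ("C", "A", "B"),
    (0, 0, 1): ("C", "B", "A"),
}

def commutate(hall_state):
    """Return the phase drive pattern for the current rotor sector."""
    try:
        return COMMUTATION_TABLE[hall_state]
    except KeyError:
        # (0, 0, 0) and (1, 1, 1) are invalid: usually a wiring fault.
        raise ValueError(f"invalid Hall sensor state: {hall_state}")

print(commutate((1, 0, 0)))  # -> ('A', 'C', 'B')
```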
Two key performance parameters of brushless DC motors are the motor constants formula_0 (torque constant) and formula_1 (back-EMF constant, also known as speed constant formula_2).
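A small worked example (with assumed values) of how these constants relate: for an ideal motor in SI units the torque constant and back-EMF constant are numerically equal, and formula_2 follows as the reciprocal of formula_1.

```python
import math

# All values below are assumed for illustration, not from a datasheet.
k_e = 0.02             # back-EMF constant, V*s/rad (assumed)
k_t = k_e              # torque constant, N*m/A (ideal-motor assumption)
k_v = 1.0 / k_e        # speed constant, rad/s per volt

supply_voltage = 12.0  # V
current = 5.0          # A

torque = k_t * current                  # torque produced at 5 A, in N*m
no_load_speed = k_v * supply_voltage    # rad/s, ignoring all losses
rpm = no_load_speed * 60 / (2 * math.pi)

print(f"torque = {torque:.2f} N*m, no-load speed = {rpm:.0f} rpm")
```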
Variations in construction.
Brushless motors can be constructed in several different physical configurations. In the conventional inrunner configuration, the permanent magnets are part of the rotor. Three stator windings surround the rotor. In the external-rotor outrunner configuration, the radial relationship between the coils and magnets is reversed; the stator coils form the center (core) of the motor, while the permanent magnets spin within an overhanging rotor that surrounds the core. Outrunners typically have more poles, set up in triplets to maintain the three groups of windings, and have a higher torque at low RPMs. In the flat axial flux type, used where there are space or shape constraints, stator and rotor plates are mounted face to face. In all brushless motors, the coils are stationary.
There are two common electrical winding configurations; the delta configuration connects three windings to each other in a triangle-like circuit, and power is applied at each of the connections. The wye ("Y"-shaped) configuration, sometimes called a star winding, connects all of the windings to a central point, and power is applied to the remaining end of each winding. A motor with windings in delta configuration gives low torque at low speed but can give higher top speed. Wye configuration gives high torque at low speed, but not as high top speed. The wye winding is normally more efficient. Delta-connected windings can allow high-frequency parasitic electrical currents to circulate entirely within the motor. A Wye-connected winding does not contain a closed loop in which parasitic currents can flow, preventing such losses. Aside from the higher impedance of the wye configuration, from a controller standpoint, the two winding configurations can be treated exactly the same.
Applications.
Brushless motors fulfill many functions originally performed by brushed DC motors, but cost and control complexity prevents brushless motors from replacing brushed motors completely in the lowest-cost areas. Nevertheless, brushless motors have come to dominate many applications, particularly devices such as computer hard drives and CD/DVD players. Small cooling fans in electronic equipment are powered exclusively by brushless motors. They can be found in cordless power tools where the increased efficiency of the motor leads to longer periods of use before the battery needs to be charged. Low speed, low power brushless motors are used in direct-drive turntables for gramophone records. Brushless motors can also be found in marine applications, such as underwater thrusters. Drones also utilize brushless motors to elevate their performance.
Transport.
Brushless motors are found in electric vehicles, hybrid vehicles, personal transporters, and electric aircraft. Most electric bicycles use brushless motors that are sometimes built into the wheel hub itself, with the stator fixed solidly to the axle and the magnets attached to and rotating with the wheel. The same principle is applied in self-balancing scooter wheels. Most electrically powered radio-controlled models use brushless motors because of their high efficiency.
Cordless tools.
Brushless motors are found in many modern cordless tools, including some string trimmers, leaf blowers, saws (circular and reciprocating), and drills/drivers. The weight and efficiency advantages of brushless over brushed motors are more important to handheld, battery-powered tools than to large, stationary tools plugged into an AC outlet.
Heating and ventilation.
There is a trend in the heating, ventilation, and air conditioning (HVAC) and refrigeration industries to use brushless motors instead of various types of AC motors. The most significant reason to switch to a brushless motor is a reduction in power required to operate them versus a typical AC motor. In addition to the brushless motor's higher efficiency, HVAC systems, especially those featuring variable-speed or load modulation, use brushless motors to give the built-in microprocessor continuous control over cooling and airflow.
Industrial engineering.
The application of brushless DC motors within industrial engineering primarily focuses on manufacturing engineering or industrial automation design. Brushless motors are ideally suited for manufacturing applications because of their high power density, good speed-torque characteristics, high efficiency, wide speed ranges and low maintenance. The most common uses of brushless DC motors in industrial engineering are motion control, linear actuators, servomotors, actuators for industrial robots, extruder drive motors and feed drives for CNC machine tools.
Brushless motors are commonly used as pump, fan and spindle drives in adjustable or variable speed applications as they are capable of developing high torque with good speed response. In addition, they can be easily automated for remote control. Due to their construction, they have good thermal characteristics and high energy efficiency. To obtain a variable speed response, brushless motors operate in an electromechanical system that includes an electronic motor controller and a rotor position feedback sensor. Brushless DC motors are widely used as servomotors for machine tool servo drives. Servomotors are used for mechanical displacement, positioning or precision motion control. DC stepper motors can also be used as servomotors; however, since they are operated with open loop control, they typically exhibit torque pulsations.
Brushless motors are used in industrial positioning and actuation applications. For assembly robots, brushless technology may be used to build linear motors. The advantage of linear motors is that they can produce linear motion without a transmission system, such as ballscrews, leadscrews, rack-and-pinion, cams, gears or belts, that would be necessary for rotary motors. Transmission systems are known to reduce responsiveness and accuracy. Direct-drive, brushless DC linear motors consist of a slotted stator with magnetic teeth and a moving actuator, which has permanent magnets and coil windings. To obtain linear motion, a motor controller excites the coil windings in the actuator, causing an interaction of the magnetic fields that results in linear motion. Tubular linear motors are another form of linear motor design operated in a similar way.
Aeromodelling.
Brushless motors have become a popular motor choice for model aircraft, including helicopters and drones. Their favorable power-to-weight ratios and wide range of available sizes have revolutionized the market for electric-powered model flight, displacing virtually all brushed electric motors except in low-powered, inexpensive, often toy-grade aircraft. They have also encouraged the growth of simple, lightweight electric model aircraft, rather than the larger and heavier models powered by internal combustion engines. The increased power-to-weight ratio of modern batteries and brushless motors allows models to ascend vertically, rather than climb gradually. The low noise and lack of mass compared to small glow-fuel internal combustion engines is another reason for their popularity.
Legal restrictions for the use of combustion engine driven model aircraft in some countries, most often due to potential for noise pollution—even with purpose-designed mufflers for almost all model engines being available over the most recent decades—have also supported the shift to high-power electric systems.
Radio-controlled cars.
Their popularity has also risen in the radio-controlled (RC) car area. Brushless motors have been legal in North American RC car racing in accordance with Radio Operated Auto Racing (ROAR) since 2006. These motors provide a great amount of power to RC racers and, if paired with appropriate gearing and high-discharge lithium polymer (Li-Po) or lithium iron phosphate (LiFePO4) batteries, allow these cars to achieve very high speeds.
Brushless motors are capable of producing more torque and have a faster peak rotational speed compared to nitro- or gasoline-powered engines. Nitro engines peak at around 46,800 r/min, while a smaller brushless motor can reach 50,000 r/min. Larger brushless RC motors can reach upwards of 28,000 r/min to power one-fifth-scale models.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K_T"
},
{
"math_id": 1,
"text": "K_e"
},
{
"math_id": 2,
"text": "K_V = {1 \\over K_e}"
}
]
| https://en.wikipedia.org/wiki?curid=707787 |
70780067 | Homoscedasticity and heteroscedasticity | Statistical property
In statistics, a sequence of random variables is homoscedastic if all its random variables have the same finite variance; this is also known as homogeneity of variance. The complementary notion is called heteroscedasticity, also known as heterogeneity of variance. The spellings "homoskedasticity" and "heteroskedasticity" are also frequently used. "Skedasticity" comes from the Ancient Greek word "skedánnymi", meaning "to scatter".
Assuming a variable is homoscedastic when in reality it is heteroscedastic results in unbiased but inefficient point estimates and in biased estimates of standard errors, and may result in overestimating the goodness of fit as measured by the Pearson coefficient.
The existence of heteroscedasticity is a major concern in regression analysis and the analysis of variance, as it invalidates statistical tests of significance that assume that the modelling errors all have the same variance. While the ordinary least squares estimator is still unbiased in the presence of heteroscedasticity, it is inefficient, and inference based on the assumption of homoskedasticity is misleading. In that case, generalized least squares (GLS) was frequently used in the past. Nowadays, standard practice in econometrics is to include heteroskedasticity-consistent standard errors instead of using GLS, as GLS can exhibit strong bias in small samples if the actual skedastic function is unknown.
Because heteroscedasticity concerns expectations of the second moment of the errors, its presence is referred to as misspecification of the second order.
The econometrician Robert Engle was awarded the 2003 Nobel Memorial Prize for Economics for his studies on regression analysis in the presence of heteroscedasticity, which led to his formulation of the autoregressive conditional heteroscedasticity (ARCH) modeling technique.
Definition.
Consider the linear regression equation formula_0 where the dependent random variable formula_1 equals the deterministic variable formula_2 times coefficient formula_3 plus a random disturbance term formula_4 that has mean zero. The disturbances are homoscedastic if the variance of formula_4 is a constant formula_5; otherwise, they are heteroscedastic. In particular, the disturbances are heteroscedastic if the variance of formula_4 depends on formula_6 or on the value of formula_2. One way they might be heteroscedastic is if formula_7 (an example of a scedastic function), so the variance is proportional to the value of formula_8.
More generally, if the variance-covariance matrix of disturbance formula_4 across formula_6 has a nonconstant diagonal, the disturbance is heteroscedastic. The matrices below are covariances when there are just three observations across time. The disturbance in matrix A is homoscedastic; this is the simple case where OLS is the best linear unbiased estimator. The disturbances in matrices B and C are heteroscedastic. In matrix B, the variance is time-varying, increasing steadily across time; in matrix C, the variance depends on the value of formula_8. The disturbance in matrix D is homoscedastic because the diagonal variances are constant, even though the off-diagonal covariances are non-zero and ordinary least squares is inefficient for a different reason: serial correlation.
formula_9
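As an illustration of the scedastic-function example above, the following is a minimal Python sketch (all numbers and names are illustrative, not from any source) simulating disturbances with the constant variance of matrix A and with the variance proportional to x of matrix C:

import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.uniform(1.0, 10.0, n)
beta, sigma2 = 2.0, 0.5

# Matrix A case: constant variance sigma^2 (homoscedastic).
eps_homo = rng.normal(0.0, np.sqrt(sigma2), n)

# Matrix C case: variance proportional to x (heteroscedastic),
# i.e. the scedastic function Var(eps_i) = sigma^2 * x_i.
eps_hetero = rng.normal(0.0, np.sqrt(sigma2 * x))

# The residual spread grows with x only in the heteroscedastic case.
for label, eps in (("homoscedastic", eps_homo), ("heteroscedastic", eps_hetero)):
    print(label, eps[x < 3.0].var().round(2), eps[x > 7.0].var().round(2))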
Examples.
Heteroscedasticity often occurs when there is a large difference among the sizes of the observations.
A classic example of heteroscedasticity is that of income versus expenditure on meals. A wealthy person may eat inexpensive food sometimes and expensive food at other times. A poor person will almost always eat inexpensive food. Therefore, people with higher incomes exhibit greater variability in expenditures on food.
At a rocket launch, an observer measures the distance traveled by the rocket once per second. In the first couple of seconds, the measurements may be accurate to the nearest centimeter. After five minutes, the accuracy of the measurements may be good only to 100 m, because of the increased distance, atmospheric distortion, and a variety of other factors. So the measurements of distance may exhibit heteroscedasticity.
Consequences.
One of the assumptions of the classical linear regression model is that there is no heteroscedasticity. Breaking this assumption means that the Gauss–Markov theorem does not apply, meaning that OLS estimators are not the Best Linear Unbiased Estimators (BLUE) and their variance is not the lowest among all other linear unbiased estimators.
Heteroscedasticity does "not" cause ordinary least squares coefficient estimates to be biased, although it can cause ordinary least squares estimates of the variance (and, thus, standard errors) of the coefficients to be biased, possibly above or below the true population variance. Thus, regression analysis using heteroscedastic data will still provide an unbiased estimate for the relationship between the predictor variable and the outcome, but standard errors and therefore inferences obtained from data analysis are suspect. Biased standard errors lead to biased inference, so results of hypothesis tests are possibly wrong. For example, if OLS is performed on a heteroscedastic data set, yielding biased standard error estimation, a researcher might fail to reject a null hypothesis at a given significance level, when that null hypothesis was actually uncharacteristic of the actual population (making a type II error).
Under certain assumptions, the OLS estimator has a normal asymptotic distribution when properly normalized and centered (even when the data does not come from a normal distribution). This result is used to justify using a normal distribution, or a chi square distribution (depending on how the test statistic is calculated), when conducting a hypothesis test. This holds even under heteroscedasticity. More precisely, the OLS estimator in the presence of heteroscedasticity is asymptotically normal, when properly normalized and centered, with a variance-covariance matrix that differs from the case of homoscedasticity. In 1980, White proposed a consistent estimator for the variance-covariance matrix of the asymptotic distribution of the OLS estimator. This validates the use of hypothesis testing using OLS estimators and White's variance-covariance estimator under heteroscedasticity.
Heteroscedasticity is also a major practical issue encountered in ANOVA problems.
The F test can still be used in some circumstances.
However, it has been said that students in econometrics should not overreact to heteroscedasticity. One author wrote, "unequal error variance is worth correcting only when the problem is severe." In addition, another word of caution came in the form: "heteroscedasticity has never been a reason to throw out an otherwise good model." With the advent of heteroscedasticity-consistent standard errors allowing for inference without specifying the conditional second moment of the error term, testing conditional homoscedasticity is not as important as in the past.
For any non-linear model (for instance Logit and Probit models), however, heteroscedasticity has more severe consequences: the maximum likelihood estimates (MLE) of the parameters will usually be biased, as well as inconsistent (unless the likelihood function is modified to correctly take into account the precise form of heteroscedasticity or the distribution is a member of the linear exponential family and the conditional expectation function is correctly specified). Yet, in the context of binary choice models (Logit or Probit), heteroscedasticity will only result in a positive scaling effect on the asymptotic mean of the misspecified MLE (i.e. the model that ignores heteroscedasticity). As a result, the predictions which are based on the misspecified MLE will remain correct. In addition, the misspecified Probit and Logit MLE will be asymptotically normally distributed which allows performing the usual significance tests (with the appropriate variance-covariance matrix). However, regarding the general hypothesis testing, as pointed out by Greene, "simply computing a robust covariance matrix for an otherwise inconsistent estimator does not give it redemption. Consequently, the virtue of a robust covariance matrix in this setting is unclear."
Correction.
There are several common corrections for heteroscedasticity. They are:
Testing.
Residuals can be tested for homoscedasticity using the Breusch–Pagan test, which performs an auxiliary regression of the squared residuals on the independent variables. From this auxiliary regression, the explained sum of squares is retained, divided by two, and then becomes the test statistic for a chi-squared distribution with the degrees of freedom equal to the number of independent variables. The null hypothesis of this chi-squared test is homoscedasticity, and the alternative hypothesis would indicate heteroscedasticity. Since the Breusch–Pagan test is sensitive to departures from normality or small sample sizes, the Koenker–Bassett or 'generalized Breusch–Pagan' test is commonly used instead. From the auxiliary regression, it retains the R-squared value which is then multiplied by the sample size, and then becomes the test statistic for a chi-squared distribution (and uses the same degrees of freedom). Although it is not necessary for the Koenker–Bassett test, the Breusch–Pagan test requires that the squared residuals also be divided by the residual sum of squares divided by the sample size. Testing for groupwise heteroscedasticity can be done with the Goldfeld–Quandt test.
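The following Python sketch implements the Koenker ("generalized Breusch–Pagan") variant described above using only numpy and scipy; the simulated data and the function name are illustrative assumptions. (statsmodels also provides a ready-made implementation, het_breuschpagan, in statsmodels.stats.diagnostic.)

import numpy as np
from scipy import stats

def koenker_bp_test(y, X):
    """Koenker's 'generalized Breusch-Pagan' test: regress the squared
    OLS residuals on the regressors; n * R^2 of that auxiliary
    regression is asymptotically chi-squared with k degrees of freedom."""
    n = len(y)
    Xc = np.column_stack([np.ones(n), X])           # add an intercept
    beta = np.linalg.lstsq(Xc, y, rcond=None)[0]    # OLS fit
    u2 = (y - Xc @ beta) ** 2                       # squared residuals
    gamma = np.linalg.lstsq(Xc, u2, rcond=None)[0]  # auxiliary regression
    r2 = 1.0 - ((u2 - Xc @ gamma) ** 2).sum() / ((u2 - u2.mean()) ** 2).sum()
    dof = X.shape[1] if np.ndim(X) > 1 else 1       # number of regressors
    return n * r2, stats.chi2.sf(n * r2, dof)       # statistic, p-value

# Illustration: disturbances whose variance grows with x.
rng = np.random.default_rng(1)
x = rng.uniform(1.0, 10.0, 500)
y = 2.0 * x + rng.normal(0.0, np.sqrt(0.5 * x))
print(koenker_bp_test(y, x))   # a small p-value flags heteroscedasticity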
Due to the standard use of heteroskedasticity-consistent standard errors and the problem of pre-testing, econometricians nowadays rarely use tests for conditional heteroskedasticity.
List of tests.
Although tests for heteroscedasticity between groups can formally be considered as a special case of testing within regression models, some tests have structures specific to this case.
<templatestyles src="Column/styles.css"/>
Generalisations.
Homoscedastic distributions.
Two or more normal distributions, formula_14 are both homoscedastic and lack serial correlation if they share the same diagonals in their covariance matrix, formula_15 and their non-diagonal entries are zero. Homoscedastic distributions are especially useful to derive statistical pattern recognition and machine learning algorithms. One popular example of an algorithm that assumes homoscedasticity is Fisher's linear discriminant analysis.
The concept of homoscedasticity can be applied to distributions on spheres.
Multivariate data.
The study of homoscedasticity and heteroscedasticity has been generalized to the multivariate case, which deals with the covariances of vector observations instead of the variance of scalar observations. One version of this is to use covariance matrices as the multivariate measure of dispersion. Several authors have considered tests in this context, for both regression and grouped-data situations. Bartlett's test for heteroscedasticity between grouped data, used most commonly in the univariate case, has also been extended for the multivariate case, but a tractable solution only exists for 2 groups. Approximations exist for more than two groups, and they are both called Box's M test.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
Most statistics textbooks will include at least some material on homoscedasticity and heteroscedasticity. Some examples are: | [
{
"math_id": 0,
"text": "y_i= x_i \\beta_i + \\varepsilon_i,\\ i = 1,\\ldots, N,"
},
{
"math_id": 1,
"text": "y_i"
},
{
"math_id": 2,
"text": "x_i"
},
{
"math_id": 3,
"text": "\\beta_i"
},
{
"math_id": 4,
"text": "\\varepsilon_i"
},
{
"math_id": 5,
"text": "\\sigma^2"
},
{
"math_id": 6,
"text": "i"
},
{
"math_id": 7,
"text": "\\sigma_i^2= x_i \\sigma^2"
},
{
"math_id": 8,
"text": "x"
},
{
"math_id": 9,
"text": "\\begin{align}\nA &= \\sigma^2\\begin{bmatrix}\n 1 & 0 & 0 \\\\ \n 0 & 1 & 0 \\\\ \n 0 & 0 & 1 \\\\ \n\\end{bmatrix} &\nB &= \\sigma^2\\begin{bmatrix}\n 1 & 0 & 0 \\\\ \n 0 & 2 & 0 \\\\ \n 0 & 0 & 3 \\\\ \n\\end{bmatrix} &\nC &= \\sigma^2\\begin{bmatrix}\n x_1 & 0 & 0 \\\\\n 0 & x_2 & 0 \\\\ \n 0 & 0 & x_3 \\\\ \n\\end{bmatrix} &\nD &= \\sigma^2\\begin{bmatrix}\n 1 & \\rho & \\rho^2 \\\\ \n \\rho & 1 & \\rho \\\\ \n \\rho^2 & \\rho & 1 \\\\\n\\end{bmatrix}\n\\end{align}"
},
{
"math_id": 10,
"text": "s_i^2 = (n_i - 1)^{-1} \\sum_j \\left(y_{ij} - \\bar{y}_i\\right)^2"
},
{
"math_id": 11,
"text": "i=1,2,...,k"
},
{
"math_id": 12,
"text": "j=1, 2, ..., n_i"
},
{
"math_id": 13,
"text": "n_i > 5"
},
{
"math_id": 14,
"text": "N(\\mu_1,\\Sigma_1), N(\\mu_2,\\Sigma_2), "
},
{
"math_id": 15,
"text": "\\Sigma_1{ii} = \\Sigma_2{jj},\\ \\forall i=j."
}
]
| https://en.wikipedia.org/wiki?curid=70780067 |
70780754 | Diffusive–thermal instability | Intrinsic flame instability
Diffusive–thermal instability or thermo–diffusive instability is an intrinsic flame instability that occurs both in premixed flames and in diffusion flames and arises because of the difference in the diffusion coefficient values for the fuel and heat transport, characterized by non-unity values of Lewis numbers. The instability mechanism that arises here is the same as in Turing instability explaining chemical morphogenesis, although the mechanism was first discovered in the context of combustion by Yakov Zeldovich in 1944 to explain the cellular structures appearing in lean hydrogen flames. Quantitative stability theories were developed for premixed flames by Gregory Sivashinsky (1977) and by Guy Joulin and Paul Clavin (1979), and for diffusion flames by Jong S. Kim and Forman A. Williams (1996, 1997).
Dispersion relation for premixed flames.
To neglect the influences by hydrodynamic instabilities such as Darrieus–Landau instability, Rayleigh–Taylor instability etc., the analysis usually neglects effects due to the thermal expansion of the gas mixture by assuming a constant density model. Such an approximation is referred to as diffusive-thermal approximation or thermo-diffusive approximation which was first introduced by Grigory Barenblatt, Yakov Zeldovich and A. G. Istratov in 1962. With a one-step chemistry model and assuming the perturbations to a steady planar flame in the form formula_0, where formula_1 is the transverse coordinate system perpendicular to flame, formula_2 is the time, formula_3 is the perturbation wavevector and formula_4 is the temporal growth rate of the disturbance, the dispersion relation formula_5 for one-reactant flames is given implicitly by
formula_6
where formula_7, formula_8, formula_9 is the Lewis number of the fuel and formula_10 is the Zeldovich number. This relation in general provides three roots for formula_4, of which the one with the maximum formula_11 determines the stability character. The stability margins are given by the following equations
formula_12
describing two curves in the formula_13 vs. formula_14 plane. The first curve is associated with the condition formula_15, whereas on the second curve formula_16 The first curve separates the region of stable mode from the region corresponding to cellular instability, whereas the second condition indicates the presence of traveling and/or pulsating instability.
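As a numerical illustration (not part of the cited analyses), the implicit dispersion relation can be reduced to a cubic in formula_7, whose roots give the growth rates; the substitution and the branch selection used in the following Python sketch are assumptions consistent with the definitions above:

import numpy as np

def growth_rates(k, l):
    """Roots omega of 2*G^2*(G-1) + l*(G-1-2*omega) = 0 with
    G = sqrt(1 + 4*omega + 4*k^2).  Substituting
    omega = (G^2 - 1 - 4*k^2)/4 turns the relation into a cubic in G:
        4*G^3 - (4 + l)*G^2 + 2*l*G + l*(4*k^2 - 1) = 0."""
    G = np.roots([4.0, -(4.0 + l), 2.0 * l, l * (4.0 * k ** 2 - 1.0)])
    G = G[G.real >= 0.0]          # keep the principal branch of the square root
    return (G ** 2 - 1.0 - 4.0 * k ** 2) / 4.0

# Below the first stability margin (8k^2 + l + 2 < 0) a growing mode appears.
for l in (-3.0, 0.0, 3.0):
    omega = growth_rates(0.2, l)
    print(l, omega[np.argmax(omega.real)])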
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "e^{i\\mathbf{k}\\cdot\\mathbf{x}_\\bot+\\omega t}"
},
{
"math_id": 1,
"text": "\\mathbf{x}_\\bot"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "\\mathbf{k}"
},
{
"math_id": 4,
"text": "\\omega"
},
{
"math_id": 5,
"text": "\\omega(k)"
},
{
"math_id": 6,
"text": "2\\Gamma^2(\\Gamma-1) + l (\\Gamma-1 - 2 \\omega) = 0 "
},
{
"math_id": 7,
"text": "\\Gamma=\\sqrt{1+4\\omega+4k^2}"
},
{
"math_id": 8,
"text": "l\\equiv (Le-1)/\\beta"
},
{
"math_id": 9,
"text": "Le"
},
{
"math_id": 10,
"text": "\\beta"
},
{
"math_id": 11,
"text": "\\Re\\{\\omega\\}"
},
{
"math_id": 12,
"text": "8k^2 + l + 2 =0, \\quad 256 k^4 +(-6l^2+32l+256)k^2 -l^2+8l + 32=0"
},
{
"math_id": 13,
"text": "l"
},
{
"math_id": 14,
"text": "k"
},
{
"math_id": 15,
"text": "\\Im\\{\\omega\\}=0"
},
{
"math_id": 16,
"text": "\\Im\\{\\omega\\}\\neq 0."
}
]
| https://en.wikipedia.org/wiki?curid=70780754 |
7078310 | Angle-resolved photoemission spectroscopy | Experimental technique to determine the distribution of electrons in solids
Angle-resolved photoemission spectroscopy (ARPES) is an experimental technique used in condensed matter physics to probe the allowed energies and momenta of the electrons in a material, usually a crystalline solid. It is based on the photoelectric effect, in which an incoming photon of sufficient energy ejects an electron from the surface of a material. By directly measuring the kinetic energy and emission angle distributions of the emitted photoelectrons, the technique can map the electronic band structure and Fermi surfaces. ARPES is best suited for the study of one- or two-dimensional materials. It has been used by physicists to investigate high-temperature superconductors, graphene, topological materials, quantum well states, and materials exhibiting charge density waves.
ARPES systems consist of a monochromatic light source to deliver a narrow beam of photons, a sample holder connected to a manipulator used to position the sample of a material, and an electron spectrometer. The equipment is contained within an ultra-high vacuum (UHV) environment, which protects the sample and prevents scattering of the emitted electrons. After being dispersed along two perpendicular directions with respect to kinetic energy and emission angle, the electrons are directed to a detector and counted to provide ARPES spectra—slices of the band structure along one momentum direction. Some ARPES instruments can extract a portion of the electrons alongside the detector to measure the polarization of their spin.
Principle.
Electrons in crystalline solids can only populate states of certain energies and momenta, others being forbidden by quantum mechanics. They form a continuum of states known as the band structure of the solid. The band structure determines if a material is an insulator, a semiconductor, or a metal, how it conducts electricity and in which directions it conducts best, or how it behaves in a magnetic field.
Angle-resolved photoemission spectroscopy determines the band structure and helps understand the scattering processes and interactions of electrons with other constituents of a material. It does so by observing the electrons ejected by photons from their initial energy and momentum state into a state whose energy is higher than the initial energy by the energy of the photon, and higher than the binding energy of the electron in the solid. In the process, the electron's momentum remains virtually intact, except for its component perpendicular to the material's surface. The band structure is thus translated from energies at which the electrons are bound within the material, to energies that free them from the crystal binding and enable their detection outside of the material.
By measuring the freed electron's kinetic energy, its velocity and absolute momentum can be calculated. By measuring the emission angle with respect to the surface normal, ARPES can also determine the two in-plane components of momentum that are preserved in the photoemission process. In many cases, if needed, the third component can be reconstructed as well.
Instrumentation.
A typical instrument for angle-resolved photoemission consists of a light source, a sample holder attached to a manipulator, and an electron spectrometer. These are all part of an ultra-high vacuum system that provides the necessary protection from adsorbates for the sample surface and eliminates scattering of the electrons on their way to the analyzer.
The light source delivers to the sample a monochromatic, usually polarized, focused, high-intensity beam of ~10¹² photons/s with a few meV energy spread. Light sources range from compact noble-gas discharge UV lamps and radio-frequency plasma sources (10–40 eV), ultraviolet lasers (5–11 eV) to synchrotron insertion devices that are optimized for different parts of the electromagnetic spectrum (from 10 eV in the ultraviolet to 1000 eV X-rays).
The sample holder accommodates samples of crystalline materials, the electronic properties of which are to be investigated. It facilitates their insertion into the vacuum, cleavage to expose clean surfaces, and precise positioning. The holder works as the extension of a manipulator that makes translations along three axes, and rotations to adjust the sample's polar, azimuth and tilt angles possible. The holder has sensors or thermocouples for precise temperature measurement and control. Cooling to temperatures as low as 1 kelvin is provided by cryogenic liquefied gases, cryocoolers, and dilution refrigerators. Resistive heaters attached to the holder provide heating up to a few hundred °C, whereas miniature backside electron-beam bombardment devices can yield sample temperatures as high as 2000 °C. Some holders can also have attachments for light beam focusing and calibration.
The electron spectrometer disperses the electrons along two spatial directions in accordance with their kinetic energy and their emission angle when exiting the sample; in other words, it provides mapping of different energies and emission angles to different positions on the detector. In the type most commonly used, the hemispherical electron energy analyzer, the electrons first pass through an electrostatic lens. The lens has a narrow focal spot that is located some 40 mm from the entrance to the lens. It further enhances the angular spread of the electron plume, and delivers it, with adjusted energy, to the narrow entrance slit of the energy-dispersing part.
The energy dispersion is carried out for a narrow range of energies around the so-called pass energy in the direction perpendicular to the direction of angular dispersion, that is perpendicular to the cut of a ~25 mm long and ⪆0.1 mm wide slit. The angular dispersion previously achieved around the axis of the cylindrical lens is only preserved along the slit, and depending on the "lens mode" and the desired angular resolution is usually set to amount to ±3°, ±7° or ±15°. The hemispheres of the energy analyzer are kept at constant voltages so that the central trajectory is followed by electrons that have the kinetic energy equal to the set pass energy; those with higher or lower energies end up closer to the outer or the inner hemisphere at the other end of the analyzer. This is where an electron detector is mounted, usually in the form of a 40 mm microchannel plate paired with a fluorescent screen. Electron detection events are recorded using an outside camera and are counted in hundreds of thousands of separate angle vs. kinetic energy channels. Some instruments are additionally equipped with an electron extraction tube at one side of the detector to enable the measurement of the electrons' spin polarization.
Modern analyzers are capable of resolving the electron emission angles as low as 0.1°. Energy resolution is pass-energy and slit-width dependent so the operator chooses between measurements with ultrahigh resolution and low intensity (< 1 meV at 1 eV pass energy) or poorer energy resolutions of 10 meV or more at higher pass energies and with wider slits resulting in higher signal intensity. The instrument's resolution shows up as artificial broadening of the spectral features: a Fermi energy cutoff wider than expected from the sample's temperature alone, and the theoretical electron's spectral function convolved with the instrument's resolution function in both energy and momentum/angle.
Sometimes, instead of hemispherical analyzers, time-of-flight analyzers are used. These, however, require pulsed photon sources and are most common in laser-based ARPES labs.
Basic relations.
Angle-resolved photoemission spectroscopy is a potent refinement of ordinary photoemission spectroscopy. Light of frequency formula_0 made up of photons of energy formula_1, where formula_2 is the Planck constant, is used to stimulate the transitions of electrons from occupied to unoccupied electronic states of the solid. If a photon's energy is greater than the binding energy of an electron formula_3, the electron will eventually leave the solid without being scattered, and be observed with kinetic energy
formula_4
at angle formula_5 relative to the surface normal, both characteristic of the studied material.
Electron emission intensity maps measured by ARPES as a function of formula_6 and formula_7 are representative of the intrinsic distribution of electrons in the solid expressed in terms of their binding energy formula_3 and the Bloch wave vector formula_8, which is related to the electrons' crystal momentum and group velocity. In the photoemission process, the Bloch wave vector is linked to the measured electron's momentum formula_9, where the magnitude of the momentum formula_10 is given by the equation
formula_11.
As the electron crosses the surface barrier, losing part of its energy due to the surface work function, only the component of formula_9 that is parallel to the surface, formula_12, is preserved. From ARPES, therefore, only formula_13 is known for certain and its magnitude is given by
formula_14.
Here, formula_15 is the reduced Planck constant.
Because of incomplete determination of the three-dimensional wave vector, and the pronounced surface sensitivity of the elastic photoemission process, ARPES is best suited to the complete characterization of the band structure in ordered low-dimensional systems such as two-dimensional materials, ultrathin films, and nanowires. When it is used for three-dimensional materials, the perpendicular component of the wave vector formula_16 is usually approximated, with the assumption of a parabolic, free-electron-like final state with the bottom at energy formula_17. This gives:
formula_18.
The inner potential formula_19 is an unknown parameter a priori. For d-electron systems, experiments suggest that formula_19 ≈ 15 eV. In general, the inner potential is estimated through a series of photon-energy-dependent experiments, especially in photoemission band mapping experiments.
Fermi surface mapping.
Electron analyzers that use a slit to prevent the mixing of momentum and energy channels are only capable of taking angular maps along one direction. To take maps over energy and two-dimensional momentum space, either the sample is rotated in the proper direction so that the slit receives electrons from adjacent emission angles, or the electron plume is steered inside the electrostatic lens with the sample fixed. The slit width will determine the step size of the angular scans. For example, when a ±15° plume dispersed around the axis of the lens is served to a 30 mm long and 1 mm wide slit, each millimeter of the slit receives a 1° portion—in both directions; but at the detector the other direction is interpreted as the electron's kinetic energy and the emission angle information is lost. This averaging determines the maximal angular resolution of the scan in the direction perpendicular to the slit: with a 1 mm slit, steps coarser than 1° lead to missing data, and finer steps to overlaps. Modern analyzers have slits as narrow as 0.05 mm. The energy–angle–angle maps are usually further processed to give "energy"–"k"x–"k"y maps, and sliced in such a way to display constant energy surfaces in the band structure and, most importantly, the Fermi surface map when they are cut near the Fermi level.
Emission angle to momentum conversion.
ARPES spectrometer measures angular dispersion in a slice α along its slit. Modern analyzers record these angles simultaneously, in their reference frame, typically in the range of ±15°. To map the band structure over a two-dimensional momentum space, the sample is rotated while keeping the light spot on the surface fixed. The most common choice is to change the polar angle θ around the axis that is parallel to the slit and adjust the tilt "τ" or azimuth "φ" so emission from a particular region of the Brillouin zone can be reached.
The momentum components of the electrons can be expressed in terms of the quantities measured in the reference frame of the analyzer as
formula_20, where formula_21.
These components can be transformed into the appropriate components of momentum in the reference frame of the sample, formula_9, by using rotation matrices formula_22. When the sample is rotated around the y-axis by "θ", formula_23 there has components formula_24. If the sample is also tilted around "x" by "τ", this results in formula_25, and the components of the electron's crystal momentum determined by ARPES in this mapping geometry are
formula_26
formula_27
If high symmetry axes of the sample are known and need to be aligned, a correction by azimuth "φ" can be applied by rotating around z, when formula_28 or by rotating the transformed map "I"("E", "k"x, "k"y) around origin in two-dimensional momentum planes.
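A minimal Python sketch of this conversion follows; the function name, the choice of the "+" sign in the ± term of the second equation, and the example values are illustrative assumptions:

import numpy as np

HBAR = 1.054571817e-34     # J*s
M_E  = 9.1093837015e-31    # kg
EV   = 1.602176634e-19     # J per eV

def k_parallel(Ek_eV, alpha_deg, theta_deg, tau_deg=0.0):
    """In-plane momentum components (kx, ky) in 1/Angstrom from the
    kinetic energy and the analyzer (alpha) and manipulator (theta, tau)
    angles, following p = R_x(tau) R_y(theta) [0, P sin a, P cos a]."""
    a, th, ta = np.radians([alpha_deg, theta_deg, tau_deg])
    P = np.sqrt(2.0 * M_E * Ek_eV * EV)
    kx = P * np.cos(a) * np.sin(th) / HBAR
    ky = P * (np.sin(a) * np.cos(ta) + np.cos(a) * np.sin(ta) * np.cos(th)) / HBAR
    return kx * 1e-10, ky * 1e-10   # 1/m -> 1/Angstrom

# A 20 eV electron emitted 10 degrees off the surface normal:
print(k_parallel(20.0, alpha_deg=0.0, theta_deg=10.0))   # (~0.40, 0.0)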
Theory of photoemission intensity relations.
The theory of photoemission is that of direct optical transitions between the states formula_29 and formula_30 of an "N"-electron system. Light excitation is introduced as the magnetic vector potential formula_31 through the minimal substitution formula_32 in the kinetic part of the quantum-mechanical Hamiltonian for the electrons in the crystal. The perturbation part of the Hamiltonian comes out to be:
formula_33.
In this treatment, the electron's spin coupling to the electromagnetic field is neglected. The scalar potential formula_34 is set to zero either by imposing the Weyl gauge formula_35 or by working in the Coulomb gauge formula_36 in which formula_34 becomes negligibly small far from the sources. Either way, the commutator formula_37 is taken to be zero. Specifically, in Weyl gauge formula_38 because the period of formula_31 for ultraviolet light is about two orders of magnitude larger than the period of the electron's wave function. In both gauges it is assumed that the electrons at the surface had little time to respond to the incoming perturbation and add nothing to either of the two potentials. For most practical uses it is safe to neglect the quadratic formula_39 term. Hence,
formula_40.
The transition probability is calculated in time-dependent perturbation theory and is given by Fermi's golden rule:
formula_41.
The delta distribution above is a way of saying that energy is conserved when a photon of energy formula_1 is absorbed: formula_42.
If the electric field of an electromagnetic wave is written as formula_43, where formula_44, the vector potential inherits its polarization and equals to formula_45. The transition probability is then given in terms of the electric field as
formula_46.
In the sudden approximation, which assumes an electron is instantaneously removed from the system of "N" electrons, the final and initial states of the system are taken as properly antisymmetrized products of the single particle states of the photoelectron formula_47, formula_48 and the states representing the remaining ("N" − 1)-electron systems.
The photoemission current of electrons of energy formula_49 and momentum formula_50 is then expressed as the product of the one-electron matrix element formula_51 and the one-electron removal spectral function formula_52,
summed over all allowed initial and final states leading to the energy and momentum being observed. Here, "E" is measured with respect to the Fermi level "E"F, and "E"k with respect to vacuum so formula_53 where formula_54, the work function, is the energy difference between the two referent levels. The work function is material, surface orientation, and surface condition dependent. Because the allowed initial states are only those that are occupied, the photoemission signal will reflect the Fermi-Dirac distribution function formula_55 in the form of a temperature-dependent sigmoid-shaped drop of intensity in the vicinity of "E"F. In the case of a two-dimensional, one-band electronic system the intensity relation further reduces to
formula_56.
Selection rules.
The electronic states in crystals are organized in energy bands, which have associated energy-band dispersions formula_57 that are energy eigenvalues for delocalized electrons according to Bloch's theorem. From the plane-wave factor formula_58 in Bloch's decomposition of the wave functions, it follows the only allowed transitions when no other particles are involved are between the states whose crystal momenta differ by the reciprocal lattice vectors formula_59, i.e. those states that are in the reduced zone scheme one above another (thus the name "direct optical transitions").
Another set of selection rules comes from formula_60 (or formula_61) when the photon polarization contained in formula_31 (or formula_62) and symmetries of the initial and final one-electron Bloch states formula_47 and formula_48 are taken into account. Those can lead to the suppression of the photoemission signal in certain parts of the reciprocal space or can tell about the specific atomic-orbital origin of the initial and final states.
Many-body effects.
The one-electron spectral function that is directly measured in ARPES maps the probability that the state of the system of "N" electrons from which one electron has been instantly removed is any of the ground states of the ("N" − 1)-particle system:
formula_63.
If the electrons were independent of one another, the "N"-electron state with the state formula_47 removed would be exactly an eigenstate of the "N" − 1 particle system and the spectral function would become an infinitely sharp delta function at the energy and momentum of the removed particle; it would trace the formula_64 dispersion of the independent particles in energy-momentum space. In the case of increased electron correlations, the spectral function broadens and starts developing richer features that reflect the interactions in the underlying many-body system. These are customarily described by the complex correction to the single particle energy dispersion that is called the quasiparticle self-energy,
formula_65.
This function contains the full information about the renormalization of the electronic dispersion due to interactions and the lifetime of the hole created by the excitation. Both can be determined experimentally from the analysis of high-resolution ARPES spectra under a few reasonable assumptions. Namely, one can assume that the formula_60 part of the spectrum is nearly constant along high-symmetry directions in momentum space and that the only variable part comes from the spectral function, which, in terms of the self-energy formula_66, whose two components are usually taken to depend only on formula_67, reads
formula_68
This function is known from ARPES as a scan along a chosen direction in momentum space and is a two-dimensional map of the form formula_69. When cut at a constant energy formula_70, a Lorentzian-like curve in formula_71 is obtained whose renormalized peak position formula_72 is given by formula_73 and whose width at half maximum formula_74 is determined by formula_75, as follows:
formula_76 (1)
formula_77 (2)
The only remaining unknown in the analysis is the bare band formula_78. The bare band can be found in a self-consistent way by enforcing the Kramers–Kronig relation between the two components of the complex function formula_79 that is obtained from the previous two equations. The algorithm is as follows: start with an ansatz bare band, calculate formula_80 by eq. (2), transform it into formula_81 using the Kramers–Kronig relation, use this function to calculate the bare band dispersion on a discrete set of points formula_72 by eq. (1), and fit a suitable curve to the result to serve as the new ansatz bare band; convergence is usually achieved in a few quick iterations.
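A schematic Python sketch of this iteration follows; the uniform energy grid, the polynomial ansatz, the crude principal-value quadrature, and the sign conventions are simplifying assumptions, not the procedure of any particular published analysis:

import numpy as np

def kk_real_part(E, im_sigma):
    """Kramers-Kronig transform on a uniform grid: real part of the
    self-energy from its imaginary part via a principal-value sum
    that simply skips the singular point (crude but illustrative)."""
    dE = E[1] - E[0]
    re = np.empty_like(im_sigma)
    for i in range(len(E)):
        mask = np.arange(len(E)) != i
        re[i] = (im_sigma[mask] / (E[mask] - E[i])).sum() * dE / np.pi
    return re

def bare_band_iteration(E_m, k_m, w, deg=2, n_iter=10):
    """Self-consistent bare band from MDC energies E_m, peak
    positions k_m, and full widths w, iterating eqs. (1)-(2)."""
    coef = np.polyfit(k_m, E_m, deg)        # ansatz: the measured dispersion
    for _ in range(n_iter):
        E0 = np.poly1d(coef)
        im = 0.5 * (E0(k_m + 0.5 * w) - E0(k_m - 0.5 * w))   # eq. (2)
        re = kk_real_part(E_m, im)          # Kramers-Kronig transform
        coef = np.polyfit(k_m, E_m - re, deg)                # eq. (1)
    return np.poly1d(coef)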
From the self-energy obtained in this way one can judge the strength and shape of electron-electron correlations, the electron-phonon (more generally, electron-boson) interaction, active phonon energies, and quasiparticle lifetimes.
In simple cases of band flattening near the Fermi level because of the interaction with Debye phonons, the band mass is enhanced by (1 + "λ") and the electron-phonon coupling factor "λ" can be determined from the linear dependence of the peak widths on temperature.
For strongly correlated systems like cuprate superconductors, self-energy knowledge is unfortunately insufficient for a comprehensive understanding of the physical processes that lead to certain features in the spectrum. In fact, in the case of cuprate superconductors different theoretical treatments often lead to very different explanations of the origin of specific features in the spectrum. A typical example is the pseudogap in the cuprates, i.e., the momentum-selective suppression of spectral weight at the Fermi level, which has been related to spin, charge or (d-wave) pairing fluctuations by different authors. This ambiguity about the underlying physical mechanism at work can be overcome by considering two-particle correlation functions (such as Auger electron spectroscopy and appearance-potential spectroscopy), as they are able to describe the collective mode of the system and can also be related to certain ground-state properties.
Uses.
ARPES has been used to map the occupied band structure of many metals and semiconductors, states appearing in the projected band gaps at their surfaces, quantum well states that arise in systems with reduced dimensionality, one-atom-thick materials like graphene, transition metal dichalcogenides, and many flavors of topological materials. It has also been used to map the underlying band structure, gaps, and quasiparticle dynamics in highly correlated materials like high-temperature superconductors and materials exhibiting charge density waves.
When the electron dynamics in the bound states just above the Fermi level need to be studied, two-photon excitation in pump-probe setups (2PPE) is used. There, the first photon of low-enough energy is used to excite electrons into unoccupied bands that are still below the energy necessary for photoemission (i.e. between the Fermi and vacuum levels). The second photon is used to kick these electrons out of the solid so they can be measured with ARPES. By precisely timing the second photon, usually by using frequency multiplication of the low-energy pulsed laser and delay between the pulses by changing their optical paths, the electron lifetime can be determined on the scale below picoseconds.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\nu"
},
{
"math_id": 1,
"text": "h\\nu"
},
{
"math_id": 2,
"text": "h"
},
{
"math_id": 3,
"text": "E_\\text{B}"
},
{
"math_id": 4,
"text": "E_\\text{k}=h\\nu-E_\\text{B}"
},
{
"math_id": 5,
"text": "\\vartheta"
},
{
"math_id": 6,
"text": "E_\\text{k}"
},
{
"math_id": 7,
"text": "\n\\vartheta"
},
{
"math_id": 8,
"text": "\\mathbf{k}"
},
{
"math_id": 9,
"text": "\\mathbf{p}"
},
{
"math_id": 10,
"text": "|\\mathbf{p}| "
},
{
"math_id": 11,
"text": "|\\mathbf{p}|=\\sqrt{2 m_\\text{e} E_\\text{k}}"
},
{
"math_id": 12,
"text": "\\mathbf{p}_{\\Vert}"
},
{
"math_id": 13,
"text": "\\mathbf{k}_{\\Vert} = \\tfrac{1}{\\hbar}\\mathbf{p}_{\\Vert} "
},
{
"math_id": 14,
"text": "|\\mathbf{\nk}_{\\Vert}| = \\tfrac{1}{\\hbar}|\\mathbf{p_{\\Vert}}|=\\tfrac{1}{\\hbar}\\sqrt{2 m_\\text{e} E_\\text{k}} \\sin\\vartheta"
},
{
"math_id": 15,
"text": "\\hbar"
},
{
"math_id": 16,
"text": "k_{\\perp}"
},
{
"math_id": 17,
"text": "-V_0"
},
{
"math_id": 18,
"text": "k_{\\perp}=\\tfrac{1}{\\hbar}\\sqrt{2m_\\text{e}(E_\\text{k} \\cos^2\\!\\vartheta+V_0)}"
},
{
"math_id": 19,
"text": "V_0"
},
{
"math_id": 20,
"text": "\\mathbf{P}=[0,P\\sin\\alpha,P\\cos\\alpha]"
},
{
"math_id": 21,
"text": "P=\\sqrt{2 m_\\text{e} E_\\text{k}}"
},
{
"math_id": 22,
"text": "R_\\textrm{axis}(\\textrm{angle})"
},
{
"math_id": 23,
"text": "\\mathbf{P}"
},
{
"math_id": 24,
"text": "R_y(\\vartheta)\\,\\mathbf{P}"
},
{
"math_id": 25,
"text": "\\mathbf{p}=R_x(\\tau)R_y(\\vartheta)\\,\\mathbf{P}"
},
{
"math_id": 26,
"text": "k_x = \\tfrac{1}{\\hbar}p_x=\\tfrac{1}{\\hbar}\\sqrt{2 m_\\text{e} E_\\text{k}}\\,\\cos\\alpha\\sin\\vartheta "
},
{
"math_id": 27,
"text": "k_y = \\tfrac{1}{\\hbar}p_y = \\tfrac{1}{\\hbar}\\sqrt{2 m_\\text{e} E_\\text{k}}\\,\n(\\pm\\sin\\alpha\\cos\\tau+\\cos\\alpha\\sin\\tau\\cos\\vartheta) "
},
{
"math_id": 28,
"text": "\\mathbf{p}=R_z(\\varphi)R_x(\\tau)R_y(\\vartheta)\\,\\mathbf{P}"
},
{
"math_id": 29,
"text": "|i\\rangle"
},
{
"math_id": 30,
"text": "|f\\rangle"
},
{
"math_id": 31,
"text": "\\mathbf{A}"
},
{
"math_id": 32,
"text": "\\mathbf{p} \\mapsto \\mathbf{p}+e\\mathbf{A}"
},
{
"math_id": 33,
"text": "H' = \\frac{e}{2m} (\\mathbf{A}\\cdot\\mathbf{p} + \\mathbf{p}\\cdot\\mathbf{A}) + \\frac{e^2}{2m} |\\mathbf{A}|^2"
},
{
"math_id": 34,
"text": "\\phi"
},
{
"math_id": 35,
"text": "\\phi=0"
},
{
"math_id": 36,
"text": "\\nabla\\cdot\\mathbf{A}=0"
},
{
"math_id": 37,
"text": "\\left[\\mathbf{A},\\mathbf{p}\\right]=i\\hbar\\,\\nabla\\cdot\\mathbf{A}"
},
{
"math_id": 38,
"text": "\\nabla\\cdot\\mathbf{A}\\approx0"
},
{
"math_id": 39,
"text": "|A|^2"
},
{
"math_id": 40,
"text": "H' = \\frac{e}{m} \\mathbf{A}\\cdot\\mathbf{p}"
},
{
"math_id": 41,
"text": "\\Gamma_{i \\to f} = \\frac{2\\pi}{\\hbar} |\\langle f|H'|i \\rangle|^2 \\delta(E_f-E_i-h\\nu)\\propto |\\langle f|\\mathbf{A} \\cdot \\mathbf{p}|i\\rangle|^2 \\, \\delta(E_f-E_i-h\\nu)"
},
{
"math_id": 42,
"text": "E_f=E_i+h\\nu"
},
{
"math_id": 43,
"text": "\\mathbf{E}(\\mathbf{r},t)=\\mathbf{E_0}\\sin(\\mathbf{k}\\cdot\\mathbf{r}-\\omega t)"
},
{
"math_id": 44,
"text": "\\omega=2\\pi\\nu"
},
{
"math_id": 45,
"text": "\\mathbf{A}(\\mathbf{r},t)=\\tfrac{1}{\\omega}\\mathbf{E_0}\\cos(\\mathbf{k}\\cdot\\mathbf{r}-\\omega t)"
},
{
"math_id": 46,
"text": "\\Gamma_{i \\to f} \\propto |\\langle f|\\tfrac{1}{\\nu}\\mathbf{E_0} \\cdot \\mathbf{p}|i\\rangle|^2 \\, \\delta(E_f-E_i-h\\nu)"
},
{
"math_id": 47,
"text": "|k_i\\rangle"
},
{
"math_id": 48,
"text": "|k_f\\rangle"
},
{
"math_id": 49,
"text": "E_f=E_{k}"
},
{
"math_id": 50,
"text": "\\mathbf{p}=\\hbar \\mathbf{k}"
},
{
"math_id": 51,
"text": "|\\langle k_f|\\mathbf{E_0} \\cdot \\mathbf{p}|k_i\\rangle|^2 = M_{fi}"
},
{
"math_id": 52,
"text": "A(\\mathbf{k},E)"
},
{
"math_id": 53,
"text": "E_\\text{k} = E+h\\nu-W "
},
{
"math_id": 54,
"text": "W "
},
{
"math_id": 55,
"text": "f(E)=\\frac{1}{1+e^{(E-E_\\text{F})/k_\\text{B}T}}"
},
{
"math_id": 56,
"text": "I(E_\\text{k},\\mathbf{k_{\\Vert}})=I_M(\\mathbf{k_{\\Vert}},\\mathbf{E_0},\\nu)\\, f(E)\\, A(\\mathbf{k_{\\Vert}},E) "
},
{
"math_id": 57,
"text": "E(k)"
},
{
"math_id": 58,
"text": "\\exp(i\\mathbf{k}\\cdot\\mathbf{r})"
},
{
"math_id": 59,
"text": "\\mathbf{G}"
},
{
"math_id": 60,
"text": "M_{fi}"
},
{
"math_id": 61,
"text": "I_M"
},
{
"math_id": 62,
"text": "\\mathbf{E_0}"
},
{
"math_id": 63,
"text": "A(\\mathbf{k},E) = \\sum_{m} \\left |\\, \\left \\langle \\begin{matrix} {\\scriptstyle(N-1)\\,\\mathrm{eigenstate}} \\\\ {\\scriptstyle m} \\end{matrix} \\,\\,|\\,\\, \\begin{matrix} {\\scriptstyle(N)\\,\\mathrm{eigenstate}} \\\\ {\\scriptstyle\\mathrm{with\\,} \\mathbf{k} \\mathrm{\\,removed}} \\end{matrix} \\right\\rangle \\, \\right |^2 \\, \\delta(E-E^{N-1}_m+E^{N})\n"
},
{
"math_id": 64,
"text": "E_o(\\mathbf{k})"
},
{
"math_id": 65,
"text": "\\Sigma(\\mathbf{k}, E) = \\Sigma'(\\mathbf{k}, E) + i \\Sigma''(\\mathbf{k}, E)"
},
{
"math_id": 66,
"text": "\\Sigma"
},
{
"math_id": 67,
"text": "E"
},
{
"math_id": 68,
"text": "\nA(\\mathbf{k}, E) = -\\frac{1}{\\pi} \\frac{\\Sigma''(E)}{\\left[E-E_{o}(\\mathbf{k})-\\Sigma'(E)\\right]^2+\\left[\\Sigma''(E)\\right]^2}\n"
},
{
"math_id": 69,
"text": "A(k,E)"
},
{
"math_id": 70,
"text": "E_m"
},
{
"math_id": 71,
"text": "k"
},
{
"math_id": 72,
"text": "k_m"
},
{
"math_id": 73,
"text": "\\Sigma'(E_m)"
},
{
"math_id": 74,
"text": "w"
},
{
"math_id": 75,
"text": "\\Sigma''(E_m)"
},
{
"math_id": 76,
"text": "\\Sigma'(E_m) = E_m-E_{o}(k_m)\n"
},
{
"math_id": 77,
"text": "\\Sigma''(E_m) = \\frac{1}{2} \\left[E_{o}(k_m+{\\textstyle \\frac{1}{2}}w) - E_{o}(k_m-{\\textstyle \\frac{1}{2}}w) \\right]"
},
{
"math_id": 78,
"text": "E_{o}(k)"
},
{
"math_id": 79,
"text": "\\Sigma(E)"
},
{
"math_id": 80,
"text": "\\Sigma''(E)"
},
{
"math_id": 81,
"text": "\\Sigma'(E)"
}
]
| https://en.wikipedia.org/wiki?curid=7078310 |
70784214 | Loss of load | Term for when the available generation capacity in an electrical grid is less than the system load
Loss of load in an electrical grid is a term used to describe the situation when the available generation capacity is less than the system load. Multiple probabilistic reliability indices for generation systems use loss of load in their definitions, with the most popular being loss of load probability (LOLP), which characterizes the probability of a loss of load occurring within a year. Loss of load events are calculated before the mitigating actions (purchasing electricity from other systems, load shedding) are taken, so a loss of load does not necessarily cause a blackout.
Loss-of-load-based reliability indices.
Multiple reliability indices for the electrical generation are based on the loss of load being observed/calculated over a long interval (one or multiple years) in relatively small increments (an hour or a day). The total number of increments inside the long interval is designated as formula_0 (e.g., for a yearlong interval formula_1 if the increment is a day, formula_2 if the increment is an hour):
Loss of load probability (LOLP) is the probability of the load exceeding the available generation in a single increment;
Loss of load expectation (LOLE) is the expected total duration of the loss of load events within the interval, formula_3;
Loss of load frequency (LOLF) is the expected number of loss of load events within the interval;
Loss of load duration (LOLD) characterizes the average duration of a loss of load event, formula_4.
One-day-in-ten-years criterion.
A typically accepted design goal for formula_5 is 0.1 day per year ("one-day-in-ten-years criterion" a.k.a. "1 in 10"), corresponding to formula_6. In the US, the threshold is set by the regional entities, like Northeast Power Coordinating Council: <templatestyles src="Template:Blockquote/styles.css" />resources will be planned in such a manner that ... the probability of disconnecting non-interruptible customers will be no more than once in ten years
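A minimal Python sketch of how LOLP and LOLE would be computed from an hourly record of system load against available capacity; all numbers are synthetic placeholders:

import numpy as np

rng = np.random.default_rng(42)
N = 8760                                  # hourly increments in one year
load = rng.normal(800.0, 120.0, N)        # synthetic hourly load, MW
capacity = 1150.0                         # available generation, MW

loss_of_load = load > capacity            # increments where load exceeds capacity
LOLP = loss_of_load.mean()                # probability per increment
LOLE = LOLP * N                           # expected hours of loss of load per year

print(f"LOLP = {LOLP:.5f}, LOLE = {LOLE:.1f} h/yr")
# The '1 in 10' criterion corresponds to a LOLE of 0.1 day per year,
# i.e. a daily-increment LOLP of 1/(10*365) ~ 0.000274.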
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "N=365"
},
{
"math_id": 2,
"text": "N=8760"
},
{
"math_id": 3,
"text": "{LOLE} = {LOLP} \\cdot N"
},
{
"math_id": 4,
"text": "{LOLD} = \\frac {LOLE} {LOLF}"
},
{
"math_id": 5,
"text": "LOLE"
},
{
"math_id": 6,
"text": "{LOLP} = \\frac {1} {10 \\cdot 365} \\approx 0.000274"
}
]
| https://en.wikipedia.org/wiki?curid=70784214 |
70784332 | Triangle of partition numbers | In the number theory of integer partitions, the numbers formula_0 denote both the number of partitions of formula_1 into exactly formula_2 parts (that is, sums of formula_2 positive integers that add to formula_1), and the number of partitions of formula_1 into parts of maximum size exactly formula_2. These two types of partition are in bijection with each other, by a diagonal reflection of their Young diagrams. Their numbers can be arranged into a triangle, the triangle of partition numbers, in which the formula_1th row gives the partition numbers formula_3:
1
1 1
1 1 1
1 2 1 1
1 2 2 1 1
1 3 3 2 1 1
1 3 4 3 2 1 1
1 4 5 5 3 2 1 1
Recurrence relation.
Analogously to Pascal's triangle, these numbers may be calculated using the recurrence relation
formula_4
As base cases, formula_5, and any value on the right hand side of the recurrence that would be outside the triangle can be taken as zero. This equation can be explained by noting that each partition of formula_1 into formula_2 pieces, counted by formula_0, can be formed either by adding a piece of size one to a partition of formula_6 into formula_7 pieces, counted by formula_8, or by increasing by one each piece in a partition of formula_9 into formula_2 pieces, counted by formula_10.
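The recurrence translates directly into code, as in the following Python sketch (memoized for efficiency), which reproduces the first rows of the triangle:

from functools import lru_cache

@lru_cache(maxsize=None)
def p(k, n):
    """Number of partitions of n into exactly k parts, from the
    recurrence p_k(n) = p_{k-1}(n-1) + p_k(n-k)."""
    if k <= 0 or n < k:
        return 0                 # outside the triangle
    if k == 1 or k == n:
        return 1                 # one single part, or n parts equal to 1
    return p(k - 1, n - 1) + p(k, n - k)

for n in range(1, 9):
    row = [p(k, n) for k in range(1, n + 1)]
    print(n, row, "row sum:", sum(row))   # each row sums to p(n)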
Row sums and diagonals.
In the triangle of partition numbers, the sum of the numbers in the formula_1th row is the partition number formula_11. These numbers form the sequence
<templatestyles src="Block indent/styles.css"/>1, 2, 3, 5, 7, 11, 15, 22, ...,
omitting the initial value formula_12 of the partition numbers.
Each diagonal from upper left to lower right is eventually constant, with the constant parts of these diagonals extending approximately from halfway across each row to its end. The values of these constants are the partition numbers 1, 1, 2, 3, 5, 7, ... again.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p_k(n)"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "p_1(n), p_2(n), \\dots, p_n(n)"
},
{
"math_id": 4,
"text": "p_k(n)=p_{k-1}(n-1)+p_k(n-k)."
},
{
"math_id": 5,
"text": "p_1(1)=1"
},
{
"math_id": 6,
"text": "n-1"
},
{
"math_id": 7,
"text": "k-1"
},
{
"math_id": 8,
"text": "p_{k-1}(n-1)"
},
{
"math_id": 9,
"text": "n-k"
},
{
"math_id": 10,
"text": "p_k(n-k)"
},
{
"math_id": 11,
"text": "p(n)"
},
{
"math_id": 12,
"text": "p(0)=1"
}
]
| https://en.wikipedia.org/wiki?curid=70784332 |
7079248 | Centrosymmetric matrix | Matrix symmetric about its center
In mathematics, especially in linear algebra and matrix theory, a centrosymmetric matrix is a matrix which is symmetric about its center.
Formal definition.
An "n" × "n" matrix "A" = ["A""i", "j"] is centrosymmetric when its entries satisfy
formula_0
Alternatively, if J denotes the "n" × "n" exchange matrix with 1 on the antidiagonal and 0 elsewhere:
formula_1
then a matrix A is centrosymmetric if and only if "AJ" = "JA".
Examples.
All 2 × 2 centrosymmetric matrices have the form
formula_2
All 3 × 3 centrosymmetric matrices have the form
formula_3
The maximum number of unique elements in an "m" × "m" centrosymmetric matrix is
formula_4
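In code, the defining relation "AJ" = "JA" (equivalently "A" = "JAJ", a 180-degree rotation of "A", since "J"2 = "I") can be checked directly; the following Python sketch uses numpy, and the example matrix is illustrative:

import numpy as np

def is_centrosymmetric(A):
    """A J == J A for the exchange matrix J, i.e. A equals its
    rotation by 180 degrees (J A J reverses both rows and columns)."""
    J = np.fliplr(np.eye(A.shape[0]))    # exchange matrix
    return np.allclose(A @ J, J @ A)

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 4.0],
              [3.0, 2.0, 1.0]])          # the 3 x 3 pattern above
print(is_centrosymmetric(A))             # True
print(np.allclose(A, np.rot90(A, 2)))    # equivalent 180-degree-rotation check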
Related structures.
An "n" × "n" matrix A is said to be "skew-centrosymmetric" if its entries satisfy
formula_5
Equivalently, A is skew-centrosymmetric if "AJ" = −"JA", where J is the exchange matrix defined previously.
The centrosymmetric relation "AJ" = "JA" lends itself to a natural generalization, where J is replaced with an involutory matrix K (i.e., "K"2 = "I") or, more generally, a matrix K satisfying "Km" = "I" for an integer "m" > 1. The inverse problem for the commutation relation "AK" = "KA" of identifying all involutory K that commute with a fixed matrix A has also been studied.
Symmetric centrosymmetric matrices are sometimes called bisymmetric matrices. When the ground field is the real numbers, it has been shown that bisymmetric matrices are precisely those symmetric matrices whose eigenvalues remain the same aside from possible sign changes following pre- or post-multiplication by the exchange matrix. A similar result holds for Hermitian centrosymmetric and skew-centrosymmetric matrices.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A_{i,\\,j} = A_{n-i+1,\\,n-j+1} \\quad \\text{for all }i,j \\in \\{1,\\, \\ldots,\\, n\\}."
},
{
"math_id": 1,
"text": "J_{i,\\,j} = \\begin{cases} \n1, & i + j = n + 1 \\\\\n0, & i + j \\ne n + 1\\\\\n\\end{cases}"
},
{
"math_id": 2,
"text": "\n\\begin{bmatrix} a & b \\\\ b & a \\end{bmatrix}."
},
{
"math_id": 3,
"text": "\n\\begin{bmatrix} a & b & c \\\\ d & e & d \\\\ c & b & a \\end{bmatrix}."
},
{
"math_id": 4,
"text": "\\frac{m^2 + m \\bmod 2}{2}."
},
{
"math_id": 5,
"text": "A_{i,\\,j} = -A_{n-i+1,\\,n-j+1} \\quad \\text{for all }i,j \\in \\{1,\\, \\ldots,\\, n\\}."
}
]
| https://en.wikipedia.org/wiki?curid=7079248 |
7079444 | Contributions of Leonhard Euler to mathematics | The 18th-century Swiss mathematician Leonhard Euler (1707–1783) is among the most prolific and successful mathematicians in the history of the field. His seminal work had a profound impact in numerous areas of mathematics and he is widely credited for introducing and popularizing modern notation and terminology.
Mathematical notation.
Euler introduced much of the mathematical notation in use today, such as the notation "f"("x") to describe a function and the modern notation for the trigonometric functions. He was the first to use the letter "e" for the base of the natural logarithm, now also known as Euler's number. The use of the Greek letter formula_0 to denote the ratio of a circle's circumference to its diameter was also popularized by Euler (although it did not originate with him). He is also credited with introducing the notation "i" to denote formula_1.
Complex analysis.
Euler made important contributions to complex analysis. He introduced scientific notation. He discovered what is now known as Euler's formula, that for any real number formula_2, the complex exponential function satisfies
formula_3
This has been called "the most remarkable formula in mathematics" by Richard Feynman. Euler's identity is a special case of this:
formula_4
This identity is particularly remarkable as it involves "e", formula_0, "i", 1, and 0, arguably the five most important constants in mathematics, as well as the four fundamental arithmetic operators: addition, multiplication, exponentiation, and equality.
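Both relations can be verified numerically to machine precision with Python's complex arithmetic (the tolerance below is an arbitrary choice):

import cmath, math

# Euler's formula: exp(i*phi) = cos(phi) + i*sin(phi) for real phi
phi = 0.7
lhs = cmath.exp(1j * phi)
rhs = complex(math.cos(phi), math.sin(phi))
print(abs(lhs - rhs) < 1e-15)        # True, up to rounding

# Euler's identity: exp(i*pi) + 1 = 0
print(cmath.exp(1j * math.pi) + 1)   # ~1.2e-16j, zero up to floating-point error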
Analysis.
The development of calculus was at the forefront of 18th-century mathematical research, and the Bernoullis—family friends of Euler—were responsible for much of the early progress in the field. Understanding the infinite was the major focus of Euler's research. While some of Euler's proofs may not have been acceptable under modern standards of rigor, his ideas were responsible for many great advances. First of all, Euler introduced the concept of a function, and introduced the use of the exponential function and logarithms in analytic proofs.
Euler frequently used the logarithmic functions as a tool in analysis problems, and discovered new ways by which they could be used. He discovered ways to express various logarithmic functions in terms of power series, and successfully defined logarithms for complex and negative numbers, thus greatly expanding the scope where logarithms could be applied in mathematics. Most researchers in the field long held the view that formula_5 for any positive real formula_6, since by the additivity property of logarithms formula_7. In a 1747 letter to Jean Le Rond d'Alembert, Euler defined the natural logarithm of −1 as formula_8, a pure imaginary number.
Euler is well known in analysis for his frequent use and development of power series: that is, the expression of functions as sums of infinitely many terms, such as
formula_9
Notably, Euler discovered the power series expansions for "e" and the inverse tangent function
formula_10
His use of power series enabled him to solve the famous Basel problem in 1735:
formula_11
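The convergence can be checked numerically; in the following Python snippet the truncation point is arbitrary:

import math

partial = sum(1.0 / n ** 2 for n in range(1, 100001))
print(partial)           # 1.6449240668... (truncation error ~ 1/N)
print(math.pi ** 2 / 6)  # 1.6449340668...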
In addition, Euler elaborated the theory of higher transcendental functions by introducing the gamma function and introduced a new method for solving quartic equations. He also found a way to calculate integrals with complex limits, foreshadowing the development of complex analysis. Euler invented the calculus of variations, including its best-known result, the Euler–Lagrange equation.
Euler also pioneered the use of analytic methods to solve number theory problems. In doing so, he united two disparate branches of mathematics and introduced a new field of study, analytic number theory. In breaking ground for this new field, Euler created the theory of hypergeometric series, q-series, hyperbolic trigonometric functions and the analytic theory of continued fractions. For example, he proved the infinitude of primes using the divergence of the harmonic series, and used analytic methods to gain some understanding of the way prime numbers are distributed. Euler's work in this area led to the development of the prime number theorem.
Number theory.
Euler's great interest in number theory can be traced to the influence of his friend in the St. Petersburg Academy, Christian Goldbach. Much of his early work on number theory was based on the works of Pierre de Fermat and developed some of Fermat's ideas.
One focus of Euler's work was to link the nature of prime distribution with ideas in analysis. He proved that the sum of the reciprocals of the primes diverges. In doing so, he discovered a connection between Riemann zeta function and prime numbers, known as the Euler product formula for the Riemann zeta function.
Euler proved Newton's identities, Fermat's little theorem, Fermat's theorem on sums of two squares, and made distinct contributions to Lagrange's four-square theorem. He also invented the totient function φ(n), which assigns to a positive integer n the number of positive integers less than n and coprime to n. Using properties of this function he was able to generalize Fermat's little theorem to what would become known as Euler's theorem. He further contributed significantly to the understanding of perfect numbers, which had fascinated mathematicians since Euclid. Euler made progress toward the prime number theorem and conjectured the law of quadratic reciprocity. The two concepts are regarded as the fundamental theorems of number theory, and his ideas paved the way for Carl Friedrich Gauss.
Graph theory and topology.
In 1736 Euler solved, or rather proved unsolvable, a problem known as the seven bridges of Königsberg. The city of Königsberg, Kingdom of Prussia (now Kaliningrad, Russia) is set on the Pregel River, and included two large islands which were connected to each other and the mainland by seven bridges. The question was whether it is possible to follow a route that crosses each bridge exactly once and returns to the starting point.
Euler's solution of the Königsberg bridge problem is considered to be the first theorem of graph theory. In addition, his recognition that the key information was the number of bridges and the list of their endpoints (rather than their exact positions) presaged the development of topology.
Euler also made contributions to the understanding of planar graphs. He introduced a formula governing the relationship between the number of edges, vertices, and faces of a convex polyhedron. Given such a polyhedron, the alternating sum of vertices, edges and faces equals a constant: "V" − "E" + "F" = 2. This constant, χ, is the Euler characteristic of the polyhedron. The study and generalization of this equation, especially by Cauchy and L'Huilier, is at the origin of topology. The Euler characteristic, which may be generalized to any topological space as the alternating sum of the Betti numbers, naturally arises from homology. In particular, it is equal to 2 − 2"g" for a closed oriented surface with genus "g" and to 2 − "k" for a non-orientable surface with "k" crosscaps. This property led to the definition of rotation systems in topological graph theory.
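The formula is easy to verify for the Platonic solids; a minimal sketch using their standard vertex, edge, and face counts:

```python
# Verify Euler's polyhedron formula V - E + F = 2 for the five Platonic solids.
solids = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}
for name, (V, E, F) in solids.items():
    print(name, V - E + F)  # prints 2 in every case
```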
Applied mathematics.
Most of Euler's greatest successes were in applying analytic methods to real world problems, describing numerous applications of Bernoulli's numbers, Fourier series, Euler diagrams, Euler numbers, the constants e and π, continued fractions and integrals. He integrated Leibniz's differential calculus with Newton's Method of Fluxions, and developed tools that made it easier to apply calculus to physical problems. In particular, he made great strides in improving numerical approximation of integrals, inventing what are now known as the "Euler approximations". The most notable of these approximations are Euler's method and the Euler–Maclaurin formula. He also facilitated the use of differential equations, in particular introducing the Euler–Mascheroni constant:
formula_12
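The defining limit converges slowly (the error is roughly 1/(2"n")), but a direct partial sum already gives several correct digits; a minimal sketch:

```python
import math

# Approximate the Euler-Mascheroni constant by its defining limit.
n = 10**6
gamma_approx = sum(1.0 / k for k in range(1, n + 1)) - math.log(n)
print(gamma_approx)  # ~0.57721616..., versus gamma = 0.57721566...
```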
One of Euler's more unusual interests was the application of mathematical ideas in music. In 1739 he wrote the "Tentamen novae theoriae musicae," hoping to eventually integrate music theory as part of mathematics. This part of his work, however, did not receive wide attention and was once described as too mathematical for musicians and too musical for mathematicians.
Works.
The works which Euler published separately are: | [
{
"math_id": 0,
"text": "\\pi"
},
{
"math_id": 1,
"text": "\\sqrt{-1}"
},
{
"math_id": 2,
"text": "\\varphi"
},
{
"math_id": 3,
"text": "e^{i\\varphi} = \\cos \\varphi + i\\sin \\varphi. "
},
{
"math_id": 4,
"text": "e^{i \\pi} + 1 = 0 \\,."
},
{
"math_id": 5,
"text": "\\log (x) = \\log (-x)"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": " 2 \\log (-x) = \\log ((-x)^2) = \\log (x^2) = 2 \\log (x) "
},
{
"math_id": 8,
"text": "i\\pi"
},
{
"math_id": 9,
"text": "e = \\sum_{n=0}^\\infty {1 \\over n!} = \\lim_{n \\to \\infty}\\left(\\frac{1}{0!} + \\frac{1}{1!} + \\frac{1}{2!} + \\cdots + \\frac{1}{n!}\\right)."
},
{
"math_id": 10,
"text": "\\arctan z = \\sum_{n=0}^\\infty \\frac {(-1)^n z^{2n+1}} {2n+1}."
},
{
"math_id": 11,
"text": "\\lim_{n \\to \\infty}\\left(\\frac{1}{1^2} + \\frac{1}{2^2} + \\frac{1}{3^2} + \\cdots + \\frac{1}{n^2}\\right) = \\frac{\\pi ^2}{6}."
},
{
"math_id": 12,
"text": "\\gamma = \\lim_{n \\rightarrow \\infty } \\left( 1+ \\frac{1}{2} + \\frac{1}{3} + \\frac{1}{4} + \\cdots + \\frac{1}{n} - \\ln(n) \\right)."
}
]
| https://en.wikipedia.org/wiki?curid=7079444 |
70795636 | Proverbs 21 | Proverbs 21 is the 21st chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter records a part of the second collection of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 21 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Parashot.
The "parashah" sections listed here are based on the Aleppo Codex. {P}: open "parashah".
{P} 19:10–29; 20:1–30; 21:1–30 {P} 21:31; 22:1–29 {P}
Analysis.
This chapter belongs to a section regarded as the second collection in the book of Proverbs (comprising Proverbs 10:1–22:16), also called "The First 'Solomonic' Collection" (the second one in Proverbs 25:1–29:27). The collection contains 375 sayings, each of which consists of two parallel phrases, except for Proverbs 19:7 which consists of three parts.
"The king's heart is in the hand of the Lord,"
"as the rivers of water; He turns it to any place He will."
Verse 1.
God has sovereign control of human affairs (cf. verses 30–31), including the actions and decisions of a king—whether willingly (Psalm 78:70) or unwittingly (cf. Jeremiah 25:9)—to achieve divine purposes (cf. 16:1, 9).
"To do justice and judgment is more acceptable to the Lord than sacrifice."
Verse 3.
God's priority of righteousness and justice over religious worship rituals or 'sacrifices' is a common prophetic theme (cf. Proverbs 15:8; 21:29; 1 Samuel 15:22; Psalm 40:6–8; Isaiah 1:11–17; Jeremiah 7:21–26; Hosea 6:6; Amos 5:21–27; Micah 6:6–8), and is illustrated by Saul's action (1 Samuel 15).
Worse than this is the 'evil intent' that accompanies the offensive sacrifices of the wicked (verse 27).
"The horse is prepared against the day of battle: but safety is of the Lord."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70795636 |
70802355 | Spectral dimension | Type of geometric quantity
The spectral dimension is a real-valued quantity that characterizes a spacetime geometry and topology. It characterizes how a phenomenon spreads through space over time, e.g. an ink drop diffusing in a water glass or the evolution of a pandemic in a population. Its definition is as follows: if a phenomenon spreads as formula_0, with formula_1 the time, then the spectral dimension is formula_2. The spectral dimension depends on the topology of the space, e.g., the distribution of neighbors in a population, and the diffusion rate.
In physics, the concept of spectral dimension is used, among other things, in
quantum gravity,
percolation theory,
superstring theory, or
quantum field theory.
Examples.
The diffusion of ink in an isotropic homogeneous medium like still water evolves as formula_3, giving a spectral dimension of 3.
Ink in a 2D Sierpiński triangle diffuses following a more complicated path and thus more slowly, as formula_4, giving a spectral dimension of 1.3652.
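In practice the spectral dimension can be estimated by fitting the exponent of the spreading law on a log-log scale and doubling it. The sketch below does this for synthetic data generated from the ordinary-diffusion law formula_3; the sample times are arbitrary illustrative values.

```python
import math

# Spectral dimension from a spreading law ~ t**n: d_s = 2 * n,
# with n recovered as the least-squares slope of log(spread) vs log(time).
def spectral_dimension(times, spreads):
    xs = [math.log(t) for t in times]
    ys = [math.log(s) for s in spreads]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return 2 * slope

times = [1, 2, 4, 8, 16]
spreads = [t ** 1.5 for t in times]        # ordinary diffusion: t^(3/2)
print(spectral_dimension(times, spreads))  # 3.0
```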
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "t^n"
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": "2n"
},
{
"math_id": 3,
"text": "t^{3/2}"
},
{
"math_id": 4,
"text": "t^{0.6826}"
}
]
| https://en.wikipedia.org/wiki?curid=70802355 |
7080378 | Kolmogorov continuity theorem | In mathematics, the Kolmogorov continuity theorem is a theorem that guarantees that a stochastic process that satisfies certain constraints on the moments of its increments will be continuous (or, more precisely, have a "continuous version"). It is credited to the Soviet mathematician Andrey Nikolaevich Kolmogorov.
Statement.
Let formula_0 be some complete metric space, and let formula_1 be a stochastic process. Suppose that for all times formula_2, there exist positive constants formula_3 such that
formula_4
for all formula_5. Then there exists a modification formula_6 of formula_7 that is a continuous process, i.e. a process formula_8 such that
*formula_6 is sample-continuous;
*for every formula_9, formula_10
Furthermore, the paths of formula_6 are locally formula_11-Hölder-continuous for every formula_12.
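For a concrete instance of the moment hypothesis, the sketch below estimates the fourth moment of a one-dimensional Brownian increment by Monte Carlo; the exact value 3|t − s|² matches the constants α = 4, β = 1, K = n(n + 2) quoted in the example below (the sample size and time points are arbitrary choices).

```python
import random

# Monte Carlo check for 1-D Brownian motion: B_t - B_s ~ N(0, t - s),
# so E[|B_t - B_s|^4] = 3 (t - s)^2, i.e. alpha = 4, beta = 1, K = 3.
t, s = 1.0, 0.25
dt = t - s
samples = 200_000
est = sum(random.gauss(0.0, dt ** 0.5) ** 4 for _ in range(samples)) / samples
print(est)           # close to the exact value below
print(3 * dt ** 2)   # 1.6875
```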
Example.
In the case of Brownian motion on formula_13, the choice of constants formula_14, formula_15, formula_16 will work in the Kolmogorov continuity theorem. Moreover, for any positive integer formula_17, the constants formula_18, formula_19 will work, for some positive value of formula_20 that depends on formula_21 and formula_17. | [
{
"math_id": 0,
"text": "(S,d)"
},
{
"math_id": 1,
"text": "X\\colon [0, + \\infty) \\times \\Omega \\to S"
},
{
"math_id": 2,
"text": "T > 0"
},
{
"math_id": 3,
"text": "\\alpha, \\beta, K"
},
{
"math_id": 4,
"text": "\\mathbb{E} [d(X_t, X_s)^\\alpha] \\leq K | t - s |^{1 + \\beta}"
},
{
"math_id": 5,
"text": "0 \\leq s, t \\leq T"
},
{
"math_id": 6,
"text": "\\tilde{X}"
},
{
"math_id": 7,
"text": "X"
},
{
"math_id": 8,
"text": "\\tilde{X}\\colon [0, + \\infty) \\times \\Omega \\to S"
},
{
"math_id": 9,
"text": "t \\geq 0"
},
{
"math_id": 10,
"text": "\\mathbb{P} (X_t = \\tilde{X}_t) = 1."
},
{
"math_id": 11,
"text": "\\gamma"
},
{
"math_id": 12,
"text": "0<\\gamma<\\tfrac\\beta\\alpha"
},
{
"math_id": 13,
"text": "\\mathbb{R}^n"
},
{
"math_id": 14,
"text": "\\alpha = 4"
},
{
"math_id": 15,
"text": "\\beta = 1"
},
{
"math_id": 16,
"text": "K = n (n + 2)"
},
{
"math_id": 17,
"text": "m"
},
{
"math_id": 18,
"text": "\\alpha = 2m"
},
{
"math_id": 19,
"text": "\\beta = m-1"
},
{
"math_id": 20,
"text": "K"
},
{
"math_id": 21,
"text": "n"
}
]
| https://en.wikipedia.org/wiki?curid=7080378 |
708242 | Mixture distribution | Probability distribution
In probability and statistics, a mixture distribution is the probability distribution of a random variable that is derived from a collection of other random variables as follows: first, a random variable is selected by chance from the collection according to given probabilities of selection, and then the value of the selected random variable is realized. The underlying random variables may be random real numbers, or they may be random vectors (each having the same dimension), in which case the mixture distribution is a multivariate distribution.
In cases where each of the underlying random variables is continuous, the outcome variable will also be continuous and its probability density function is sometimes referred to as a mixture density. The cumulative distribution function (and the probability density function if it exists) can be expressed as a convex combination (i.e. a weighted sum, with non-negative weights that sum to 1) of other distribution functions and density functions. The individual distributions that are combined to form the mixture distribution are called the mixture components, and the probabilities (or weights) associated with each component are called the mixture weights. The number of components in a mixture distribution is often restricted to being finite, although in some cases the components may be countably infinite in number. More general cases (i.e. an uncountable set of component distributions), as well as the countable case, are treated under the title of compound distributions.
A distinction needs to be made between a random variable whose distribution function or density is the sum of a set of components (i.e. a mixture distribution) and a random variable whose value is the sum of the values of two or more underlying random variables, in which case the distribution is given by the convolution operator. As an example, the sum of two jointly normally distributed random variables, each with different means, will still have a normal distribution. On the other hand, a mixture density created as a mixture of two normal distributions with different means will have two peaks provided that the two means are far enough apart, showing that this distribution is radically different from a normal distribution.
Mixture distributions arise in many contexts in the literature and arise naturally where a statistical population contains two or more subpopulations. They are also sometimes used as a means of representing non-normal distributions. Data analysis concerning statistical models involving mixture distributions is discussed under the title of mixture models, while the present article concentrates on simple probabilistic and statistical properties of mixture distributions and how these relate to properties of the underlying distributions.
Finite and countable mixtures.
Given a finite set of probability density functions "p"1("x"), ..., "pn"("x"), or corresponding cumulative distribution functions "P"1("x"), ..., "Pn"("x") and weights "w"1, ..., "wn" such that "wi" ≥ 0 and Σ"wi" = 1, the mixture distribution can be represented by writing either the density, "f", or the distribution function, "F", as a sum (which in both cases is a convex combination):
formula_0
formula_1
This type of mixture, being a finite sum, is called a finite mixture, and in applications, an unqualified reference to a "mixture density" usually means a finite mixture. The case of a countably infinite set of components is covered formally by allowing formula_2.
Uncountable mixtures.
Where the set of component distributions is uncountable, the result is often called a compound probability distribution. The construction of such distributions has a formal similarity to that of mixture distributions, with either infinite summations or integrals replacing the finite summations used for finite mixtures.
Consider a probability density function "p"("x";"a") for a variable "x", parameterized by "a". That is, for each value of "a" in some set "A", "p"("x";"a") is a probability density function with respect to "x". Given a probability density function "w" (meaning that "w" is nonnegative and integrates to 1), the function
formula_3
is again a probability density function for "x". A similar integral can be written for the cumulative distribution function. Note that the formulae here reduce to the case of a finite or infinite mixture if the density "w" is allowed to be a generalized function representing the "derivative" of the cumulative distribution function of a discrete distribution.
Mixtures within a parametric family.
The mixture components are often not arbitrary probability distributions, but instead are members of a parametric family (such as normal distributions), with different values for a parameter or parameters. In such cases, assuming that it exists, the density can be written in the form of a sum as:
formula_4
for one parameter, or
formula_5
for two parameters, and so forth.
Properties.
Convexity.
A general linear combination of probability density functions is not necessarily a probability density, since it may be negative or it may integrate to something other than 1. However, a convex combination of probability density functions preserves both of these properties (non-negativity and integrating to 1), and thus mixture densities are themselves probability density functions.
Moments.
Let "X"1, ..., "X""n" denote random variables from the "n" component distributions, and let "X" denote a random variable from the mixture distribution. Then, for any function "H"(·) for which formula_6 exists, and assuming that the component densities "pi"("x") exist,
formula_7
The "j"th moment about zero (i.e. choosing "H"("x")
"xj") is simply a weighted average of the "j"th moments of the components. Moments about the mean "H"("x")
("x − μ")"j" involve a binomial expansion:
formula_8
where "μi" denotes the mean of the "i"th component.
In the case of a mixture of one-dimensional distributions with weights "wi", means "μi" and variances "σi"2, the total mean and variance will be:
formula_9
formula_10
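These two formulas translate directly into code; the following minimal sketch uses illustrative component parameters:

```python
# Mean and variance of a finite mixture from weights, means, and variances.
def mixture_moments(w, mu, var):
    mean = sum(wi * mi for wi, mi in zip(w, mu))
    second = sum(wi * (vi + mi**2) for wi, mi, vi in zip(w, mu, var))
    return mean, second - mean**2

# Equal mixture of N(0, 1) and N(3, 4):
print(mixture_moments([0.5, 0.5], [0.0, 3.0], [1.0, 4.0]))
# -> (1.5, 4.75): the mixture variance exceeds both component variances
```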
These relations highlight the potential of mixture distributions to display non-trivial higher-order moments such as skewness and kurtosis (fat tails) and multi-modality, even in the absence of such features within the components themselves. Marron and Wand (1992) give an illustrative account of the flexibility of this framework.
Modes.
The question of multimodality is simple for some cases, such as mixtures of exponential distributions: all such mixtures are unimodal. However, for the case of mixtures of normal distributions, it is a complex one. Conditions for the number of modes in a multivariate normal mixture are explored by Ray & Lindsay extending earlier work on univariate and multivariate distributions.
Here the problem of evaluation of the modes of an "n" component mixture in a "D" dimensional space is reduced to identification of critical points (local minima, maxima and saddle points) on a manifold referred to as the ridgeline surface, which is the image of the ridgeline function
formula_11
where formula_12 belongs to the formula_13-dimensional standard simplex:
formula_14
and formula_15 correspond to the covariance and mean of the "i"th component. Ray & Lindsay consider the case in which formula_16, showing a one-to-one correspondence between modes of the mixture and those on the ridge elevation function formula_17; thus one may identify the modes by solving formula_18 with respect to formula_12 and determining the value formula_19.
Using graphical tools, the potential multi-modality of mixtures with number of components formula_20 is demonstrated; in particular it is shown that the number of modes may exceed formula_21 and that the modes may not be coincident with the component means. For two components they develop a graphical tool for analysis by instead solving the aforementioned differential with respect to the first mixing weight formula_22 (which also determines the second mixing weight through formula_23) and expressing the solutions as a function formula_24 so that the number and location of modes for a given value of formula_22 corresponds to the number of intersections of the graph on the line formula_25. This in turn can be related to the number of oscillations of the graph and therefore to solutions of formula_26 leading to an explicit solution for the case of a two component mixture with formula_27 (sometimes called a homoscedastic mixture) given by
formula_28
where formula_29
is the Mahalanobis distance between formula_30 and formula_31.
Since the above is quadratic it follows that in this instance there are at most two modes irrespective of the dimension or the weights.
For normal mixtures with general formula_32 and formula_33, a lower bound for the maximum number of possible modes, and – conditionally on the assumption that the maximum number is finite – an upper bound are known. For those combinations of formula_21 and formula_34 for which the maximum number is known, it matches the lower bound.
Examples.
Two normal distributions.
Simple examples can be given by a mixture of two normal distributions. (See Multimodal distribution#Mixture of two normal distributions for more details.)
Given an equal (50/50) mixture of two normal distributions with the same standard deviation and different means (homoscedastic), the overall distribution will exhibit low kurtosis relative to a single normal distribution – the means of the subpopulations fall on the shoulders of the overall distribution. If sufficiently separated, namely by twice the (common) standard deviation, so that formula_35, these form a bimodal distribution; otherwise it simply has a wide peak. The variation of the overall population will also be greater than the variation of the two subpopulations (due to spread from different means), and thus exhibits overdispersion relative to a normal distribution with fixed variation formula_36 though it will not be overdispersed relative to a normal distribution with variation equal to the variation of the overall population.
Alternatively, given two subpopulations with the same mean and different standard deviations, the overall population will exhibit high kurtosis, with a sharper peak and heavier tails (and correspondingly shallower shoulders) than a single distribution.
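The separation condition can be illustrated numerically by counting the local maxima of an equal-weight mixture density on a grid; in this sketch the grid bounds, step count, and parameter values are arbitrary illustrative choices.

```python
import math

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def count_modes(mu1, mu2, sigma, lo=-10.0, hi=10.0, steps=4001):
    xs = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    ys = [0.5 * norm_pdf(x, mu1, sigma) + 0.5 * norm_pdf(x, mu2, sigma) for x in xs]
    # a mode is a grid point strictly higher than both of its neighbours
    return sum(1 for i in range(1, steps - 1) if ys[i - 1] < ys[i] > ys[i + 1])

print(count_modes(0.0, 3.0, 1.0))  # separation > 2*sigma -> 2 modes
print(count_modes(0.0, 1.5, 1.0))  # separation < 2*sigma -> 1 mode
```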
A normal and a Cauchy distribution.
The following example is adapted from Hampel, who credits John Tukey.
Consider the mixture distribution defined by
"F"("x")
(1 − 10−10) (standard normal) + 10−10 (standard Cauchy).
The mean of i.i.d. observations from "F"("x") behaves "normally" except for exorbitantly large samples, although the mean of "F"("x") does not even exist.
Applications.
Mixture densities are complicated densities expressible in terms of simpler densities (the mixture components), and are used both because they provide a good model for certain data sets (where different subsets of the data exhibit different characteristics and can best be modeled separately), and because they can be more mathematically tractable, because the individual mixture components can be more easily studied than the overall mixture density.
Mixture densities can be used to model a statistical population with subpopulations, where the mixture components are the densities on the subpopulations, and the weights are the proportions of each subpopulation in the overall population.
Mixture densities can also be used to model experimental error or contamination – one assumes that most of the samples measure the desired phenomenon, with some samples from a different, erroneous distribution.
Parametric statistics that assume no error often fail on such mixture densities – for example, statistics that assume normality often fail disastrously in the presence of even a few outliers – and instead one uses robust statistics.
In meta-analysis of separate studies, study heterogeneity causes distribution of results to be a mixture distribution, and leads to overdispersion of results relative to predicted error. For example, in a statistical survey, the margin of error (determined by sample size) predicts the sampling error and hence dispersion of results on repeated surveys. The presence of study heterogeneity (studies have different sampling bias) increases the dispersion relative to the margin of error.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " F(x) = \\sum_{i=1}^n \\, w_i \\, P_i(x), "
},
{
"math_id": 1,
"text": " f(x) = \\sum_{i=1}^n \\, w_i \\, p_i(x) ."
},
{
"math_id": 2,
"text": " n = \\infty\\!"
},
{
"math_id": 3,
"text": " f(x) = \\int_A \\, w(a) \\, p(x;a) \\, da "
},
{
"math_id": 4,
"text": " f(x; a_1, \\ldots , a_n) = \\sum_{i=1}^n \\, w_i \\, p(x;a_i) "
},
{
"math_id": 5,
"text": " f(x; a_1, \\ldots , a_n, b_1, \\ldots , b_n) = \\sum_{i=1}^n \\, w_i \\, p(x;a_i,b_i) "
},
{
"math_id": 6,
"text": "\\operatorname{E}[H(X_i)]"
},
{
"math_id": 7,
"text": "\n\\begin{align}\n\\operatorname{E}[H(X)] & = \\int_{-\\infty}^\\infty H(x) \\sum_{i = 1}^n w_i p_i(x) \\, dx \\\\\n& = \\sum_{i = 1}^n w_i \\int_{-\\infty}^\\infty p_i(x) H(x) \\, dx = \\sum_{i = 1}^n w_i \\operatorname{E}[H(X_i)].\n\\end{align}\n"
},
{
"math_id": 8,
"text": "\n\\begin{align}\n\\operatorname{E}[(X - \\mu)^j] & = \\sum_{i = 1}^n w_i \\operatorname{E}[(X_i - \\mu_i + \\mu_i - \\mu)^j] \\\\\n& = \\sum_{i=1}^n w_i \\sum_{k=0}^j \\left( \\begin{array}{c} j \\\\ k \\end{array} \\right) (\\mu_i - \\mu)^{j-k} \\operatorname{E}[(X_i - \\mu_i)^k],\n\\end{align}\n"
},
{
"math_id": 9,
"text": " \\operatorname{E}[X] = \\mu = \\sum_{i = 1}^n w_i \\mu_i ,"
},
{
"math_id": 10,
"text": " \n\\begin{align}\n\\operatorname{E}[(X - \\mu)^2] & = \\sigma^2 \\\\\n& = \\operatorname{E}[X^2] - \\mu^{2} & (\\mathrm{standard}\\ \\mathrm{variance}\\ \\mathrm{reformulation})\\\\\n& = \\left(\\sum_{i=1}^n w_i(\\operatorname{E}[X_i^2])\\right) - \\mu^{2} \\\\\n& = \\sum_{i=1}^n w_i(\\sigma_i^2 + \\mu_i^{2} )- \\mu^{2} & (\\mathrm{from}\\ \\sigma_i^2 = \\operatorname{E}[X_i^2] - \\mu_i^{2}, \\mathrm{therefore}\\, \\operatorname{E}[X_i^2] = \\sigma_i^2 + \\mu_i^{2}.) \n\\end{align}\n"
},
{
"math_id": 11,
"text": " x^{*}(\\alpha) = \\left[ \\sum_{i=1}^{n} \\alpha_i \\Sigma_i^{-1} \\right]^{-1} \\times \\left[ \\sum_{i=1}^{n} \\alpha_i \\Sigma_i^{-1} \\mu_i \\right],\n"
},
{
"math_id": 12,
"text": "\\alpha"
},
{
"math_id": 13,
"text": "(n-1)"
},
{
"math_id": 14,
"text": " \\mathcal{S}_n = \n \\{ \\alpha \\in \\mathbb{R}^n: \\alpha_i \\in [0,1], \\sum_{i=1}^n \\alpha_i = 1 \\}\n"
},
{
"math_id": 15,
"text": "\\Sigma_i \\in R^{D\\times D},\\, \\mu_i \\in R^D"
},
{
"math_id": 16,
"text": "n-1 < D"
},
{
"math_id": 17,
"text": "h(\\alpha)=q(x^*(\\alpha))"
},
{
"math_id": 18,
"text": " \\frac{d h(\\alpha)}{d \\alpha} = 0 "
},
{
"math_id": 19,
"text": "x^*(\\alpha)"
},
{
"math_id": 20,
"text": "n \\in \\{2,3\\}"
},
{
"math_id": 21,
"text": "n"
},
{
"math_id": 22,
"text": "w_1"
},
{
"math_id": 23,
"text": "w_2 = 1-w_1"
},
{
"math_id": 24,
"text": "\\Pi(\\alpha), \\,\\alpha \\in [0,1]"
},
{
"math_id": 25,
"text": "\\Pi(\\alpha)=w_1"
},
{
"math_id": 26,
"text": " \\frac{d \\Pi(\\alpha)}{d \\alpha} = 0 "
},
{
"math_id": 27,
"text": "\\Sigma_1 = \\Sigma_2 = \\Sigma "
},
{
"math_id": 28,
"text": " 1 - \\alpha(1-\\alpha) d_M(\\mu_1, \\mu_2, \\Sigma)^2 "
},
{
"math_id": 29,
"text": " d_M(\\mu_1,\\mu_2,\\Sigma) = \\sqrt{(\\mu_2-\\mu_1)^T\\Sigma^{-1}(\\mu_2-\\mu_1)} "
},
{
"math_id": 30,
"text": "\\mu_1"
},
{
"math_id": 31,
"text": "\\mu_2"
},
{
"math_id": 32,
"text": "n>2"
},
{
"math_id": 33,
"text": "D>1"
},
{
"math_id": 34,
"text": "D"
},
{
"math_id": 35,
"text": "\\left|\\mu_1 - \\mu_2\\right| > 2\\sigma,"
},
{
"math_id": 36,
"text": "\\sigma,"
}
]
| https://en.wikipedia.org/wiki?curid=708242 |
70829360 | Proverbs 20 | Proverbs 20 is the twentieth chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the second collection of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 20 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Parashot.
The "parashah" sections listed here are based on the Aleppo Codex. {P}: open "parashah".
{P} 19:10–29; 20:1–30; 21:1–30 {P} 21:31; 22:1–29 {P}
Analysis.
This chapter belongs to a section regarded as the second collection in the book of Proverbs (comprising Proverbs 10:1–22:16), also called "The First 'Solomonic' Collection" (the second one in Proverbs 25:1–29:27). The collection contains 375 sayings, each of which consists of two parallel phrases, except for Proverbs 19:7, which consists of three parts.
"Wine is a mocker, strong drink is raging:"
"and whosoever is deceived thereby is not wise."
Verse 1.
The last phrase may mean that "drinking to excess is not wise" or that "drinking to excess makes a person act unwisely", so the proverb does not prohibit the use of wine or beer, as strong drink was typically used at festivals and celebrations, but in the covenant community intoxication was considered out of bounds (cf. Proverbs 23:20–21, 29–35; 31:4–7).
""It is bad, it is bad," says the buyer;"
"but when he has gone his way, then he boasts."
Verse 14.
This verse provides a picture of a negotiation procedure in the business world. When bargaining, a buyer would complain that he is being offered 'inferior goods' so he can get a reduction in the price, and thereafter he brags about what a good deal he got.
"It is a snare to the man who dedicates rashly that which is holy,"
"and after the vows to make inquiry."
Verse 25.
This verse is about the folly of rash speaking (cf. ) especially in relation to a vow, because failure to fulfil a vow was a serious matter (cf. ; ), whereas fulfilling a rash vow could be costly (cf. Jephthah and his daughter in Judges 11:29–40).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70829360 |
70831861 | Proverbs 19 | Proverbs 19 is the nineteenth chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the second collection of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 19 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Parashot.
The "parashah" sections listed here are based on the Aleppo Codex. {P}: open "parashah".
{P} 19:10–29; 20:1–30; 21:1–30 {P} 21:31; 22:1–29 {P}
Analysis.
This chapter belongs to a section regarded as the second collection in the book of Proverbs (comprising Proverbs 10:1–22:16), also called "The First 'Solomonic' Collection" (the second one in Proverbs 25:1–29:27). The collection contains 375 sayings, each of which consists of two parallel phrases, except for Proverbs 19:7 which consists of three parts.
"Better is the poor who walks in his integrity"
"than he who is perverse in his lips and is a fool."
"All the brothers of the poor hate him;"
"how much more do his friends go far from him!"
"He pursues them with words, yet they abandon him."
Verse 7.
Among 375 "proverbs of Solomon" in Proverbs 10:1–22:16, only this one has three lines instead of two lines.
"There are many plans in a man’s heart,"
"Nevertheless the Lord’s counsel—that will stand."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70831861 |
70833996 | Proverbs 17 | Proverbs 17 is the seventeenth chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the second collection of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 17 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a section regarded as the second collection in the book of Proverbs (comprising Proverbs 10:1–22:16), also called "The First 'Solomonic' Collection" (the second one in Proverbs 25:1–29:27). The collection contains 375 sayings, each of which consists of two parallel phrases, except for Proverbs 19:7, which consists of three parts.
"Better is a dry morsel with quietness"
"than a house full of sacrifices with strife."
Verse 1.
The general idea is that a modest meal with peace and harmony ('quietness') round the table is better than a festive meal filled with resentments and rivalries or even open quarrels (cf. Proverbs 15:17).
"Whoever mocks the poor reproaches his Maker,"
"and he who is glad at calamities will not be unpunished."
"Even a fool, when he holds his peace, is counted wise;"
"and he who shuts his lips is esteemed a man of understanding."
Verse 28.
As 'silence is a mark of wisdom', a fool who observes 'restraint in speech' and keeps a 'cool spirit' (verse 27), instead of being 'hot-tempered' (cf. Proverbs 15:18), can conceal his or her folly and even be regarded as a wise person.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70833996 |
708349 | Pascal's pyramid | In mathematics, Pascal's pyramid is a three-dimensional arrangement of the trinomial numbers, which are the coefficients of the trinomial expansion and the trinomial distribution. Pascal's pyramid is the three-dimensional analog of the two-dimensional Pascal's triangle, which contains the binomial numbers and relates to the binomial expansion and the binomial distribution. The binomial and trinomial numbers, coefficients, expansions, and distributions are subsets of the multinomial constructs with the same names.
Structure of the tetrahedron.
Because the tetrahedron is a three-dimensional object, displaying it on a piece of paper, a computer screen, or other two-dimensional medium is difficult. Assume the tetrahedron is divided into a number of levels, floors, slices, or layers. The top layer (the apex) is labeled "Layer 0". Other layers can be thought of as overhead views of the tetrahedron with the previous layers removed. The first six layers are as follows:
The layers of the tetrahedron have been deliberately displayed with the point down so that they are not individually confused with Pascal's triangle.
Trinomial expansion connection.
The numbers of the tetrahedron are derived from the trinomial expansion. The "n"th layer is the detached coefficient matrix (no variables or exponents) of a trinomial expression (e. g.: "A + B + C") raised to the "n"th power. The "n"th power of the trinomial is expanded by repeatedly multiplying the trinomial by itself:
formula_0
Each term in the first expression is multiplied by each term in the second expression; and then the coefficients of like terms (same variables and exponents) are added together. Here is the expansion of ("A + B + C")4:
1"A"4"B"0"C"0 + 4"A"3"B"0"C"1 + 6"A"2"B"0"C"2 + 4"A"1"B"0"C"3 + 1"A"0"B"0"C"4 +<br>
4"A"3"B"1"C"0 + 12"A"2"B"1"C"1 + 12"A"1"B"1"C"2 + 4"A"0"B"1"C"3 +<br>
6"A"2"B"2"C"0 + 12"A"1"B"2"C"1 + 6"A"0"B"2"C"2 +<br>
4"A"1"B"3"C"0 + 4"A"0"B"3"C"1 +<br>
1"A"0"B"4"C"0
Writing the expansion in this non-linear way shows the expansion in a more understandable way. It also makes the connection with the tetrahedron obvious: the coefficients here match those of layer 4. All the implicit coefficients, variables, and exponents, which are normally not written, are also shown to illustrate another relationship with the tetrahedron. (Usually, 1"A" is written "A"; "B"1 is "B"; and "C"0 is 1; etc.) The exponents of each term sum to the layer number ("n"), or 4 in this case. More significantly, the value of the coefficient of each term can be computed directly from the exponents. The formula is "n"!/("x"! "y"! "z"!), where "x, y, z" are the exponents of "A, B, C," respectively, and "!" is the factorial, i. e.: formula_1. The exponent formulas for the 4th layer are:
The exponents of each expansion term can be clearly seen and these formulae simplify to the expansion coefficients and the tetrahedron coefficients of layer 4.
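Because each coefficient is just "n"!/("x"! "y"! "z"!), a whole layer can be generated directly; a minimal Python sketch (the row-by-row layout mirrors the displays used in this article):

```python
from math import factorial

# Layer n of Pascal's pyramid: the coefficient of A^x B^y C^z is n!/(x! y! z!).
def layer(n):
    rows = []
    for y in range(n + 1):  # each row fixes the exponent of B
        rows.append([factorial(n) // (factorial(x) * factorial(y) * factorial(n - x - y))
                     for x in range(n - y, -1, -1)])
    return rows

for row in layer(4):
    print(row)
# [1, 4, 6, 4, 1]
# [4, 12, 12, 4]
# [6, 12, 6]
# [4, 4]
# [1]
```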
Trinomial distribution connection.
The numbers of the tetrahedron can also be found in the trinomial distribution. This is a discrete probability distribution used to determine the chance some combination of events occurs given three possible outcomes−the number of ways the events could occur is multiplied by the probabilities that they would occur. The formula for the trinomial distribution is:
formula_2
where "x, y, z" are the number of times each of the three outcomes does occur; "n" is the number of trials and equals the sum of "x+y+z"; and "P"A, "P"B, "P"C are the probabilities that each of the three events could occur.
For example, in a three-way election, the candidates got these votes: A, 16 %; B, 30 %; C, 54 %. What is the chance that a randomly selected four-person focus group would contain the following voters: 1 for A, 1 for B, 2 for C? The answer is:
formula_3
The number 12 is the coefficient of this probability and it is the number of combinations that can fill this "112" focus group. There are 15 different arrangements of four-person focus groups that can be selected. Expressions for all 15 of these coefficients are:
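The focus-group calculation can be reproduced directly from the distribution formula; a minimal sketch:

```python
from math import factorial

# Trinomial probability: n!/(x! y! z!) * pA**x * pB**y * pC**z.
def trinomial_pmf(x, y, z, pA, pB, pC):
    n = x + y + z
    coeff = factorial(n) // (factorial(x) * factorial(y) * factorial(z))
    return coeff * pA**x * pB**y * pC**z

# The election example: 1 voter for A (16%), 1 for B (30%), 2 for C (54%).
print(trinomial_pmf(1, 1, 2, 0.16, 0.30, 0.54))  # ~0.168, i.e. about 17%
```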
formula_4
formula_5
formula_6
formula_7
formula_8
The numerator of these fractions (above the line) is the same for all expressions. It is the sample size−a four-person group−and indicates that the coefficients of these arrangements can be found on layer 4 of the tetrahedron. The three numbers of the denominator (below the line) are the number of the focus group members that voted for A, B, C, respectively.
Shorthand is normally used to express combinatorial functions in the following "choose" format (which is read as "4 choose 4, 0, 0", etc.).
formula_9
formula_10
formula_11
formula_12
formula_13
But the values of these expressions are still equal to the coefficients of the 4th layer of the tetrahedron. And they can be generalized to any layer by changing the sample size ("n").
This notation makes an easy way to express the sum of all the coefficients of layer "n":
formula_14.
Addition of coefficients between layers.
The numbers on every layer ("n") of the tetrahedron are the sum of the three adjacent numbers in the layer ("n"−1) "above" it. This relationship is rather difficult to see without intermingling the layers. Below are "italic" layer 3 numbers interleaved among bold layer 4 numbers:
The relationship is illustrated by the lower, central number 12 of the 4th layer. It is "surrounded" by three numbers of the 3rd layer: 6 to the "north", 3 to the "southwest", 3 to the "southeast". (The numbers along the edge have only two adjacent numbers in the layer "above" and the three corner numbers have only one adjacent number in the layer above, which is why they are always "1". The missing numbers can be assumed as "0", so there is no loss of generality.) This relationship between adjacent layers comes about through the two-step trinomial expansion process.
Continuing with this example, in Step 1, each term of ("A" + "B" + "C")3 is multiplied by each term of ("A" + "B" + "C")1. Only three of these multiplications are of interest in this example:
Then in Step 2, the summation of like terms (same variables and exponents) results in: 12"A"1"B"2"C"1, which is the term of ("A" + "B" + "C")4; while 12 is the coefficient of the 4th layer of the tetrahedron.
Symbolically, the additive relation can be expressed as:
formula_15
where C("x,y,z") is the coefficient of the term with exponents "x, y, z" and "n" = "x" + "y" + "z" is the layer of the tetrahedron.
This relationship will work only if the trinomial expansion is laid out in the non-linear fashion as it is portrayed in the section on the "trinomial expansion connection".
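Stated in exponent coordinates, the additive relation is easy to apply mechanically; the sketch below builds successive layers from the apex, storing each layer as a dictionary keyed by the exponent triple (missing neighbors count as 0, matching the convention above):

```python
# Build layer n from layer n-1 via C(x,y,z) = C(x-1,y,z) + C(x,y-1,z) + C(x,y,z-1).
def next_layer(prev, n):
    new = {}
    for x in range(n + 1):
        for y in range(n + 1 - x):
            z = n - x - y
            new[(x, y, z)] = (prev.get((x - 1, y, z), 0)
                              + prev.get((x, y - 1, z), 0)
                              + prev.get((x, y, z - 1), 0))
    return new

layer = {(0, 0, 0): 1}  # layer 0 is the apex
for n in range(1, 5):
    layer = next_layer(layer, n)
print(layer[(1, 2, 1)])  # 12, the lower central coefficient of layer 4
```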
Ratio between coefficients of same layer.
On each layer of the tetrahedron, the numbers are simple whole number ratios of the adjacent numbers. This relationship is illustrated for horizontally adjacent pairs on the 4th layer by the following:
1 ⟨1:4⟩ 4 ⟨2:3⟩ 6 ⟨3:2⟩ 4 ⟨4:1⟩ 1<br>
4 ⟨1:3⟩ 12 ⟨2:2⟩ 12 ⟨3:1⟩ 4<br>
6 ⟨1:2⟩ 12 ⟨2:1⟩ 6<br>
4 ⟨1:1⟩ 4<br>
1
Because the tetrahedron has three-way symmetry, the ratio relation also holds for diagonal pairs in both directions, as well as for the horizontal pairs shown.
The ratios are controlled by the exponents of the corresponding adjacent terms of the trinomial expansion. For example, one ratio in the illustration above is:
4 ⟨1:3⟩ 12
The corresponding terms of the trinomial expansion are:
formula_16 and formula_17
The following rules apply to the coefficients of all adjacent pairs of terms of the trinomial expansion:
The rules are the same for all horizontal and diagonal pairs. The variables "A, B, C" will change.
This ratio relationship provides another (somewhat cumbersome) way to calculate tetrahedron coefficients:
The coefficient of the adjacent term equals the coefficient of the current term multiplied by the current-term exponent of the decreasing variable divided by the adjacent-term exponent of the increasing variable.
The ratio of the adjacent coefficients may be a little clearer when expressed symbolically. Each term can have up to six adjacent terms:
For "x" = 0: formula_18
For "y" = 0: "formula_19"
For "z" = 0: "formula_20"
where C("x,y,z") is the coefficient and "x, y, z" are the exponents. In the days before pocket calculators and personal computers, this approach was used as a school-boy short-cut to write out binomial expansions without the tedious algebraic expansions or clumsy factorial computations.
This relationship will work only if the trinomial expansion is laid out in the non-linear fashion as it is portrayed in the section on the "trinomial expansion connection".
Relationship with Pascal's triangle.
It is well known that the numbers along the three outside edges of the "n"th layer of the tetrahedron are the same numbers as the "n"th line of Pascal's triangle. However, the connection is actually much more extensive than just one row of numbers. This relationship is best illustrated by comparing Pascal's triangle down to line 4 with layer 4 of the tetrahedron.
Pascal's triangle<br>
1<br>
1 1<br>
1 2 1<br>
1 3 3 1<br>
1 4 6 4 1<br>
<br>
Tetrahedron Layer 4<br>
1 4 6 4 1<br>
4 12 12 4<br>
6 12 6<br>
4 4<br>
1
Multiplying the numbers of each line of Pascal's triangle down to the "n"th line by the numbers of the "n"th line generates the "n"th layer of the tetrahedron. In the following example, the lines of Pascal's triangle are in "italic" font and the rows of the tetrahedron are in bold font.
"1"<br>
× 1 =<br>
1
"1 1"<br>
× 4 = <br>
4 4
"1 2 1"<br>
× 6 = <br>
6 12 6
"1 3 3 1"<br>
× 4 = <br>
4 12 12 4
"1 4 6 4 1"<br>
× 1 = <br>
1 4 6 4 1
The multipliers (1 4 6 4 1) compose line 4 of Pascal's triangle.
This relationship demonstrates the fastest and easiest way to compute the numbers for any layer of the tetrahedron without computing factorials, which quickly become huge numbers. (Extended precision calculators become very slow beyond tetrahedron layer 200.)
If the coefficients of Pascal's triangle are labeled C("i,j") and the coefficients of the tetrahedron are labeled C("n,i,j"), where "n" is the layer of the tetrahedron, "i" is the row, and "j" is the column, then the relation can be expressed symbolically as:
formula_21
["i, j, n" are not exponents here, just sequential labeling indexes.]
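The relation can be verified exhaustively for a small layer; a minimal sketch using Python's built-in binomial function:

```python
from math import comb, factorial

# Check C(i, j) * C(n, i) == n!/(j! (i - j)! (n - i)!) for every entry of layer n.
n = 4
for i in range(n + 1):
    for j in range(i + 1):
        multinomial = factorial(n) // (factorial(j) * factorial(i - j) * factorial(n - i))
        assert comb(i, j) * comb(n, i) == multinomial
print("layer", n, "agrees with the Pascal's-triangle construction")
```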
Parallels to Pascal's triangle and multinomial coefficients.
This table summarizes the properties of the trinomial expansion and the trinomial distribution. It compares them to the binomial and multinomial expansions and distributions:
Other properties.
Exponential construction.
Arbitrary layer "n" can be obtained in a single step using the following formula:
formula_22
where "b" is the radix and "d" is the number of digits of any of the central multinomial coefficients, that is
formula_23
then wrapping the digits of its result by "d"("n"+1), spacing by "d" and removing leading zeros.
This method generalised to arbitrary dimension can be used to obtain slices of any Pascal's simplex.
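A minimal sketch of the single-step construction for radix "b" = 10; the layer and digit width used here are illustrative, and leading zeros are kept as zero entries rather than stripped:

```python
# Layer n in one step: expand (b**(d*(n+1)) + b**d + 1)**n and read off
# groups of d digits, wrapping every d*(n+1) digits into one row.
b, n, d = 10, 4, 2
N = (b ** (d * (n + 1)) + b ** d + 1) ** n

width = d * (n + 1)
digits = str(N).zfill(width * (n + 1))  # pad so the rows split evenly
for i in range(0, len(digits), width):
    row = digits[i:i + width]
    print([int(row[j:j + d]) for j in range(0, width, d)])
# [0, 0, 0, 0, 1]
# [0, 0, 0, 4, 4]
# [0, 0, 6, 12, 6]
# [0, 4, 12, 12, 4]
# [1, 4, 6, 4, 1]
```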
Examples.
For radix "b" = 10, "n" = 5, "d" = 2:
formula_24
= 1000000000101^5
= 1000000000505000000102010000010303010000520302005010510100501
1 1 1
000000000505 00 00 00 00 05 05 .. .. .. .. .5 .5
000000102010 00 00 00 10 20 10 .. .. .. 10 20 10
~ 000010303010 ~ 00 00 10 30 30 10 ~ .. .. 10 30 30 10
000520302005 00 05 20 30 20 05 .. .5 20 30 20 .5
010510100501 01 05 10 10 05 01 .1 .5 10 10 .5 .1
wrapped by d(n+1) spaced by d leading zeros removed
For radix "b" = 10, "n" = 20, "d" = 9:
formula_25
Sum of coefficients of a layer by rows.
Summing the numbers in each row of a layer "n" of Pascal's pyramid gives
formula_26
where "b" is the radix and "d" is the number of digits of the sum of the 'central' row (the one with the greatest sum).
For radix "b" = 10:
1 ~ 1 \ 1 ~ 1 \ 1 ~ 1 \ 1 ~ 1 \ 1 ~ 1
--- 1 \ 1 ~ 02 \ 2 \ 2 ~ 04 \ 3 \ 3 ~ 06 \ 4 \ 4 ~ 08
1 ----- 1 \ 2 \ 1 ~ 04 \ 3 \ 6 \ 3 ~ 12 \ 6 \12 \ 6 ~ 24
1 02 --------- 1 \ 3 \ 3 \ 1 ~ 08 \ 4 \12 \12 \ 4 ~ 32
1 04 04 ------------- 1 \ 4 \ 6 \ 4 \ 1 ~ 16
1 06 12 08 ------------------
1 08 24 32 16
102^0 102^1 102^2 102^3 102^4
Sum of coefficients of a layer by columns.
Summing the numbers in each column of a layer "n" of Pascal's pyramid gives
formula_27
where "b" is the radix and "d" is the number of digits of the sum of the 'central' column (the one with the greatest sum).
For radix "b" = 10:
1 |1| |1| |1| | 1| | 1|
--- 1| |1 |2| |2| |3| |3| | 4| | 4| | 5| | 5|
1 ----- 1| |2| |1 |3| |6| |3| | 6| |12| | 6| |10| |20| |10|
1 1 1 --------- 1| |3| |3| |1 | 4| |12| |12| | 4| |10| |30| |30| |10|
1 2 3 2 1 ------------- 1| | 4| | 6| | 4| | 1 | 5| |20| |30| |20| | 5|
1 3 6 7 6 3 1 -------------------------- 1| | 5| |10| |10| | 5| | 1
1 04 10 16 19 16 10 04 01 --------------------------------
1 05 15 30 45 51 45 30 15 05 01
111^0 111^1 111^2 111^3 10101^4 10101^5
Usage.
In genetics, it is common to use Pascal's pyramid to find out the proportions among the different genotypes from the same crossing. This is done by checking the line that is equivalent to the number of phenotypes (genotypes + 1). That line will be the proportion.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(A + B + C)(A + B + C)^n = (A + B + C)^{n+1}"
},
{
"math_id": 1,
"text": "n! = 1 \\cdot 2 \\cdot 3 \\cdots n"
},
{
"math_id": 2,
"text": "\\frac{n!}{x! y! z!} (P_A)^x(P_B)^y(P_C)^z"
},
{
"math_id": 3,
"text": "\\frac{4!}{1! 1! 2!} (16\\,\\%)^1(30\\,\\%)^1(54\\,\\%)^2 = 12 \\cdot 0.0140 = 17\\,\\%"
},
{
"math_id": 4,
"text": "\\textstyle {4!\\over 4!\\cdot 0!\\cdot 0!} \\ {4!\\over 3!\\cdot 0!\\cdot 1!} \\ {4!\\over 2!\\cdot 0!\\cdot 2!} \\ {4!\\over 1!\\cdot 0!\\cdot 3!} \\ {4!\\over 0!\\cdot 0!\\cdot 4!}"
},
{
"math_id": 5,
"text": "\\textstyle {4!\\over 3!\\cdot 1!\\cdot 0!} \\ {4!\\over 2!\\cdot 1!\\cdot 1!} \\ {4!\\over 1!\\cdot 1!\\cdot 2!} \\ {4!\\over 0!\\cdot 1!\\cdot 3!}"
},
{
"math_id": 6,
"text": "\\textstyle {4!\\over 2!\\cdot 2!\\cdot 0!} \\ {4!\\over 1!\\cdot 2!\\cdot 1!} \\ {4!\\over 0!\\cdot 2!\\cdot 2!}"
},
{
"math_id": 7,
"text": "\\textstyle {4!\\over 1!\\cdot 3!\\cdot 0!} \\ {4!\\over 0!\\cdot 3!\\cdot 1!}"
},
{
"math_id": 8,
"text": "\\textstyle {4!\\over 0!\\cdot 4!\\cdot 0!}"
},
{
"math_id": 9,
"text": "\\textstyle {4\\choose 4,0,0} \\ {4\\choose 3,0,1} \\ {4\\choose 2,0,2} \\ {4\\choose 1,0,3} \\ {4\\choose 0,0,4}"
},
{
"math_id": 10,
"text": "\\textstyle {4\\choose 3,1,0} \\ {4\\choose 2,1,1} \\ {4\\choose 1,1,2} \\ {4\\choose 0,1,3}"
},
{
"math_id": 11,
"text": "\\textstyle {4\\choose 2,2,0} \\ {4\\choose 1,2,1} \\ {4\\choose 0,2,2}"
},
{
"math_id": 12,
"text": "\\textstyle {4\\choose 1,3,0} \\ {4\\choose 0,3,1}"
},
{
"math_id": 13,
"text": "\\textstyle {4\\choose 0,4,0}"
},
{
"math_id": 14,
"text": "\\textstyle \\sum_{x,y,z} {n \\choose x,y,z} = 3^n"
},
{
"math_id": 15,
"text": "C(x,y,z) = C(x-1,y,z) + C(x,y-1,z) + C(x,y,z-1)"
},
{
"math_id": 16,
"text": "4A^3B^1C^0"
},
{
"math_id": 17,
"text": "12A^2B^1C^1"
},
{
"math_id": 18,
"text": "C(x,y,z-1) = C(x,y-1,z) \\cdot \\frac{z}{y}, \\quad C(x,y-1,z) = C(x,y,z-1) \\cdot \\frac{y}{z} "
},
{
"math_id": 19,
"text": "C(x-1,y,z) = C(x,y,z-1) \\cdot \\frac{x}{z}, \\quad C(x,y,z-1) = C(x-1,y,z) \\cdot \\frac{z}{x} "
},
{
"math_id": 20,
"text": "C(x,y-1,z) = C(x-1,y,z) \\cdot \\frac{y}{x}, \\quad C(x-1,y,z) = C(x,y-1,z) \\cdot \\frac{x}{y} "
},
{
"math_id": 21,
"text": "C(i,j) \\times C(n,i) = C(n,i,j),\\quad 0 \\leq i \\leq n,\\ 0 \\leq j \\leq i"
},
{
"math_id": 22,
"text": "\n\\left(b^{d\\left(n+1\\right)}+b^d+1\\right)^n,\n"
},
{
"math_id": 23,
"text": "\n\\textstyle d=1+\\left\\lfloor\\log_b{n\\choose k_1,k_2,k_3}\\right\\rfloor,\\ \\sum_{i=1}^3{k_i} = n,\\ \\left\\lfloor\\frac{n}{3}\\right\\rfloor \\le k_i \\le \\left\\lceil\\frac{n}{3}\\right\\rceil,\n"
},
{
"math_id": 24,
"text": "\n\\textstyle\\left(10^{12} + 10^2 + 1\\right)^5\n"
},
{
"math_id": 25,
"text": "\n\\textstyle\\left(10^{189} + 10^9 + 1\\right)^{20}\n"
},
{
"math_id": 26,
"text": "\n\\left(b^d + 2\\right)^n,\n"
},
{
"math_id": 27,
"text": "\n\\left(b^{2d} + b^d + 1\\right)^n,\n"
}
]
| https://en.wikipedia.org/wiki?curid=708349 |
70835957 | J62 | J62 may refer to:
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title J62.
{
"math_id": 0,
"text": "J_{62}"
}
]
| https://en.wikipedia.org/wiki?curid=70835957 |
70836097 | Proverbs 16 | Proverbs 16 is the sixteenth chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the second collection of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 16 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a section regarded as the second collection in the book of Proverbs (comprising Proverbs 10:1–22:16), also called "The First 'Solomonic' Collection" (the second one in Proverbs 25:1–29:27). The collection contains 375 sayings, each of which consists of two parallel phrases, except for Proverbs 19:7, which consists of three parts.
"The plans of the heart belong to man,"
"but the answer of the tongue is from the Lord."
Verse 1.
The saying in this verse states that a person may set things in order and plan out what one is going to say, but God can sovereignly enable one to put one's thoughts into words. Together with verses 2–7 and 9, it forms a small cluster of sayings dealing with divine providence over human affairs, contrasting sayings which commend 'careful planning as the key to successful undertakings' (cf. Proverbs 15:22; 20:18; 21:5) with the limitation that 'only plans coinciding with God's purposes will succeed' (verse 3; cf. Proverbs 19:21); thus 'man proposes, but God disposes' (verses 1, 9; cf. verse 33).
"A man’s heart plans his way,"
"But the Lord directs his steps."
Verse 9.
This saying emphasizes the theme of 'man proposes, but God disposes' along with verses 1 and 33.
"The lot is cast into the lap,"
"but the whole outcome is of the Lord."
Verse 33.
This saying concerns the practice of seeking divine leading through casting lots (cf. 1 Samuel 10), for example, in the settlement of legal disputes (cf. 18:18), that 'however much a matter of chance the procedure may appear', God is 'the one who makes the decision' (literally, "judgement"), following the theme of 'man proposes, but God disposes' in verses 1 and 9.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70836097 |
70836101 | Proverbs 15 | Proverbs 15 is the fifteenth chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the second collection of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 15 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q103 (4QProvb; 30 BCE – 30 CE) with extant verses 1–8, 19–31.
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a section regarded as the second collection in the book of Proverbs (comprising Proverbs 10:1–22:16), also called "The First 'Solomonic' Collection" (the second one in Proverbs 25:1–29:27). The collection contains 375 sayings, each of which consists of two parallel phrases, except for Proverbs 19:7, which consists of three parts.
"A soft answer turns away wrath,"
"but a harsh word stirs up anger."
Verse 1.
This verse contrasts a conciliatory reply that soothes a situation leading to reasoned discussion and the acrimonious reply that inflames a situation and makes intelligent discussion impossible.
"A gentle tongue is a tree of life,"
"but perverseness in it breaks the spirit."
Verse 4.
This saying points out that conciliatory or healing speech promotes life, in contrast with twisted or perverse speech, which may cause injury and bring death (cf. Proverbs 18:21).
"A man has joy by the answer of his mouth,"
"and a word spoken in due season, how good it is!"
Verse 23.
This saying praises how a timely word brings satisfaction to both the speaker and the hearer(s), because words spoken out of 'due season' would be ineffective and counter-productive.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70836101 |
7083690 | Omnitruncation | Geometric operation
In geometry, an omnitruncation of a convex polytope is a simple polytope of the same dimension, having a vertex for each flag of the original polytope and a facet for each face of any dimension of the original polytope. Omnitruncation is the dual operation to barycentric subdivision. Because the barycentric subdivision of any polytope can be realized as another polytope, the same is true for the omnitruncation of any polytope.
When omnitruncation is applied to a regular polytope (or honeycomb) it can be described geometrically as a Wythoff construction that creates a maximum number of facets. It is represented in a Coxeter–Dynkin diagram with all nodes ringed.
It is a "shortcut" term which has a different meaning in progressively-higher-dimensional polytopes:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "t_{0,1}\\{ p \\} = t\\{ p\\} = \\{ 2p\\}"
},
{
"math_id": 1,
"text": "t_{0,1,2}\\{ p,q \\} = tr\\{ p,q\\}"
},
{
"math_id": 2,
"text": "t_{0,1,2,3}\\{ p,q,r \\}"
},
{
"math_id": 3,
"text": "t_{0,1,2,3,4}\\{ p,q,r,s \\}"
},
{
"math_id": 4,
"text": "t_{0,1,...,n-1}\\{ p_1, p_2,...,p_n \\}"
}
]
| https://en.wikipedia.org/wiki?curid=7083690 |
70837836 | Proverbs 14 | Chapter of the bible
Proverbs 14 is the fourteenth chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the second collection of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 14 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century) and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls, including 4Q103 (4QProvb; 30 BCE – 30 CE) with extant verses 5–10, 12–13, 31–35.
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a section regarded as the second collection in the book of Proverbs (comprising Proverbs 10:1–22:16), also called "The First 'Solomonic' Collection" (the second one in Proverbs 25:1–29:27). The collection contains 375 sayings, each of which consists of two parallel phrases, except for Proverbs 19:7, which consists of three parts.
"Every wise woman builds her house,"
"but the foolish pulls it down with her hands."
Verse 1.
This verse contrasts the wise and foolish women (cf. Proverbs 7:10–23; 31:10–31), but may also be making much the same point as the personified Wisdom building her house in Proverbs 9:1 as the antithesis of Folly and her house in 9:14. Alternative wording is found in the Good News Translation:
"Homes are made by the wisdom of women, but are destroyed by foolishness."
"A sound heart is the life of the flesh:"
"but envy the rottenness of the bones."
*1 "healing", from the root , "raphaʾ", "to heal";
*2 "calmness, gentleness”, from the root , "raphah", "to be slack, loose".
Verse 30.
This saying correlates one's state of mind with the health of one's whole body (cf. Proverbs 3:8).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70837836 |
70839658 | Proverbs 13 | Proverbs 13 is the thirteenth chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the second collection of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 13 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century) and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls, including 4Q103 (4QProvb; 30 BCE – 30 CE) with extant verses 6–9.
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a section regarded as the second collection in the book of Proverbs (comprising Proverbs 10:1–22:16), also called "The First 'Solomonic' Collection" (the second one in Proverbs 25:1–29:27). The collection contains 375 sayings, each of which consists of two parallel phrases, except for Proverbs 19:7, which consists of three parts.
"A wise son hears his father's instruction,"
"but a scoffer does not listen to rebuke."
Verse 1.
This saying reinforces the parental appeals of chapters 1–9, with a warning that a refusal to heed correction ("rebuke") would place 'wisdom beyond reach' of the 'scoffer' (cf. Proverbs 9:7–8; 14:6; 15:12). Verse 24 uses the word 'discipline' (Hebrew: "mū-sār") in relation to physical chastisement.
"He who spares his rod hates his son,"
"but he who loves him disciplines him early."
Verse 24.
The word 'discipline' here is used in relation to 'physical chastisement' (cf. "instruction" in verse 1), which is viewed as essential for the upbringing of a child. The contrast between 'hate' and 'love' points to the importance that wisdom attaches to discipline (cf. Proverbs 20:30; 23:13–14).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70839658 |
708399 | List of important publications in mathematics | This is a list of important publications in mathematics, organized by field.
Some reasons a particular publication might be regarded as important:
Among published compilations of important publications in mathematics are "Landmark writings in Western mathematics 1640–1940" by Ivor Grattan-Guinness and "A Source Book in Mathematics" by David Eugene Smith.
<templatestyles src="Template:TOC limit/styles.css" />
Algebra.
Theory of equations.
"Baudhayana Sulba Sutra".
Believed to have been written around the 8th century BCE, this is one of the oldest mathematical texts. It laid the foundations of Indian mathematics and was influential in South Asia. It was primarily a geometrical text but also contained some important developments, including a list of Pythagorean triples, geometric solutions of linear and quadratic equations, and an approximation of the square root of 2.
"The Nine Chapters on the Mathematical Art".
Contains the earliest description of Gaussian elimination for solving systems of linear equations; it also contains methods for finding square roots and cube roots.
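The procedure the "Nine Chapters" describes for simultaneous equations is, in modern terms, the row reduction shown in this minimal Python sketch (the function name and example system are illustrative, and pivoting is omitted for brevity):

```python
def solve_linear_system(a, b):
    """Solve a x = b by Gaussian elimination with back-substitution."""
    n = len(b)
    # Forward elimination: zero out the entries below each pivot.
    for col in range(n):
        pivot = a[col][col]  # assumes a nonzero pivot (no row swapping)
        for row in range(col + 1, n):
            factor = a[row][col] / pivot
            for k in range(col, n):
                a[row][k] -= factor * a[col][k]
            b[row] -= factor * b[col]
    # Back-substitution on the resulting triangular system.
    x = [0.0] * n
    for row in reversed(range(n)):
        s = sum(a[row][k] * x[k] for k in range(row + 1, n))
        x[row] = (b[row] - s) / a[row][row]
    return x

# Example: 3x + 2y = 12, x + 4y = 14  ->  x = 2, y = 3
print(solve_linear_system([[3.0, 2.0], [1.0, 4.0]], [12.0, 14.0]))
```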
"Arithmetica".
Contains a collection of 130 algebraic problems giving numerical solutions of determinate equations (those with a unique solution) and indeterminate equations.
"Haidao Suanjing".
Contains the application of right-angled triangles to surveying the depth or height of distant objects.
"Sunzi Suanjing".
Contains the earliest description of the Chinese remainder theorem.
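The text's famous remainder puzzle asks for a number leaving remainders 2, 3 and 2 modulo 3, 5 and 7; a minimal Python sketch of the constructive Chinese remainder theorem (the function name is illustrative) reproduces its answer of 23:

```python
from math import prod

def crt(remainders, moduli):
    """Chinese remainder theorem for pairwise-coprime moduli."""
    m = prod(moduli)
    x = 0
    for r_i, m_i in zip(remainders, moduli):
        n_i = m // m_i                      # product of the other moduli
        x += r_i * n_i * pow(n_i, -1, m_i)  # modular inverse (Python 3.8+)
    return x % m

# Sunzi's problem: x = 2 (mod 3), x = 3 (mod 5), x = 2 (mod 7)
print(crt([2, 3, 2], [3, 5, 7]))  # -> 23
```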
"Aryabhatiya".
The text contains 33 verses covering mensuration (kṣetra vyāvahāra), arithmetic and geometric progressions, gnomons/shadows (shanku-chhAyA), and simple, quadratic, simultaneous, and indeterminate equations. It also gave the modern standard algorithm for solving first-order Diophantine equations.
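Aryabhata's algorithm for first-order Diophantine equations (the "kuṭṭaka", or "pulverizer") amounts, in modern terms, to the extended Euclidean algorithm; a hedged Python sketch (the function names and example equation are illustrative):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_diophantine(a, b, c):
    """One integer solution of a*x + b*y = c, or None if insoluble."""
    g, x, y = extended_gcd(a, b)
    if c % g:
        return None  # solvable only when gcd(a, b) divides c
    return x * (c // g), y * (c // g)

print(solve_diophantine(137, 60, 10))  # one solution of 137x + 60y = 10
```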
"Jigu Suanjing".
Jigu Suanjing (626 CE)
This book by Tang dynasty mathematician Wang Xiaotong contains the world's earliest third-order (cubic) equations.
"Brāhmasphuṭasiddhānta".
Contained rules for manipulating both negative and positive numbers, rules for dealing with the number zero, a method for computing square roots, general methods of solving linear and some quadratic equations, and a solution to Pell's equation.
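In modern notation, Pell's equation asks for integers with x² − Ny² = 1. Brahmagupta's own composition method is far more efficient, but a naive Python search (names illustrative) shows the problem and finds the fundamental solution for his celebrated case N = 92:

```python
from math import isqrt

def pell_fundamental(n):
    """Smallest (x, y) with x*x - n*y*y == 1, found by naive search."""
    y = 1
    while True:
        x2 = n * y * y + 1
        x = isqrt(x2)
        if x * x == x2:      # x2 is a perfect square
            return x, y
        y += 1

print(pell_fundamental(92))  # Brahmagupta's example N = 92 -> (1151, 120)
```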
"Al-Kitāb al-mukhtaṣar fī hīsāb al-ğabr wa'l-muqābala".
The first book on the systematic algebraic solutions of linear and quadratic equations by the Persian scholar Muhammad ibn Mūsā al-Khwārizmī. The book is considered to be the foundation of modern algebra and Islamic mathematics. The word "algebra" itself is derived from the "al-Jabr" in the title of the book.
"Līlāvatī", "Siddhānta Shiromani" and "Bijaganita".
One of the major treatises on mathematics by Bhāskara II provides the solution for indeterminate equations of 1st and 2nd order.
"Yigu yanduan".
Contains the earliest known fourth-order polynomial equation.
"Mathematical Treatise in Nine Sections".
This 13th-century book contains the earliest complete solution of high-order polynomial equations (up to the 10th order) by the method rediscovered in the 19th century as Horner's method. It also contains a complete solution of the Chinese remainder theorem, which predates Euler and Gauss by several centuries.
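The evaluation step underlying this root-extraction method is what is now taught as Horner's rule; a minimal Python sketch (the names and example polynomial are illustrative):

```python
def horner(coeffs, x):
    """Evaluate a polynomial (coefficients high-to-low) at x.

    Uses n multiplications for degree n, versus roughly n^2
    for the naive power-by-power evaluation.
    """
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
print(horner([2, -6, 2, -1], 3))  # -> 5
```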
"Ceyuan haijing".
Contains the application of high-order polynomial equations to solving complex geometry problems.
"Jade Mirror of the Four Unknowns".
Contains a method for establishing systems of high-order polynomial equations in up to four unknowns.
"Ars Magna".
Otherwise known as "The Great Art", provided the first published methods for solving cubic and quartic equations (due to Scipione del Ferro, Niccolò Fontana Tartaglia, and Lodovico Ferrari), and exhibited the first published calculations involving non-real complex numbers.
"Vollständige Anleitung zur Algebra".
Also known as Elements of Algebra, Euler's textbook on elementary algebra is one of the first to set out algebra in the modern form we would recognize today. The first volume deals with determinate equations, while the second part deals with Diophantine equations. The last section contains a proof of Fermat's Last Theorem for the case "n" = 3, making some valid assumptions regarding formula_0 that Euler did not prove.
"Demonstratio nova theorematis omnem functionem algebraicam rationalem integram unius variabilis in factores reales primi vel secundi gradus resolvi posse".
Gauss's doctoral dissertation, which contained a widely accepted (at the time) but incomplete proof of the fundamental theorem of algebra.
Abstract algebra.
Group theory.
"Réflexions sur la résolution algébrique des équations".
The title means "Reflections on the algebraic solutions of equations". Made the prescient observation that the roots of the Lagrange resolvent of a polynomial equation are tied to permutations of the roots of the original equation, laying a more general foundation for what had previously been an ad hoc analysis and helping motivate the later development of the theory of permutation groups, group theory, and Galois theory. The Lagrange resolvent also introduced the discrete Fourier transform of order 3.
"Articles Publiés par Galois dans les Annales de Mathématiques".
Posthumous publication of the mathematical manuscripts of Évariste Galois by Joseph Liouville. Included are Galois' papers "Mémoire sur les conditions de résolubilité des équations par radicaux" and "Des équations primitives qui sont solubles par radicaux".
"Traité des substitutions et des équations algébriques".
Online version: Online version
Traité des substitutions et des équations algébriques (Treatise on Substitutions and Algebraic Equations). The first book on group theory, giving a then-comprehensive study of permutation groups and Galois theory. In this book, Jordan introduced the notion of a simple group and epimorphism (which he called "l'isomorphisme mériédrique"), proved part of the Jordan–Hölder theorem, and discussed matrix groups over finite fields as well as the Jordan normal form.
"Theorie der Transformationsgruppen".
Publication data: 3 volumes, B.G. Teubner, Verlagsgesellschaft, mbH, Leipzig, 1888–1893. Volume 1, Volume 2, Volume 3.
The first comprehensive work on transformation groups, serving as the foundation for the modern theory of Lie groups.
"Solvability of groups of odd order".
Description: Gave a complete proof of the solvability of finite groups of odd order, establishing the long-standing Burnside conjecture that all finite non-abelian simple groups are of even order. Many of the original techniques used in this paper were used in the eventual classification of finite simple groups.
"Homological Algebra".
Provided the first fully worked out treatment of abstract homological algebra, unifying previously disparate presentations of homology and cohomology for associative algebras, Lie algebras, and groups into a single theory.
"Sur Quelques Points d'Algèbre Homologique".
Often referred to as the "Tôhoku paper", it revolutionized homological algebra by introducing abelian categories and providing a general framework for Cartan and Eilenberg's notion of derived functors.
Algebraic geometry.
"Theorie der Abelschen Functionen".
Publication data: "Journal für die Reine und Angewandte Mathematik"
Developed the concept of Riemann surfaces and their topological properties beyond Riemann's 1851 thesis work, proved an index theorem for the genus (the original formulation of the Riemann–Hurwitz formula), proved the Riemann inequality for the dimension of the space of meromorphic functions with prescribed poles (the original formulation of the Riemann–Roch theorem), discussed birational transformations of a given curve and the dimension of the corresponding moduli space of inequivalent curves of a given genus, and solved more general inversion problems than those investigated by Abel and Jacobi. André Weil once wrote that this paper "is one of the greatest pieces of mathematics that has ever been written; there is not a single word in it that is not of consequence."
"Faisceaux Algébriques Cohérents".
Publication data: "Annals of Mathematics", 1955
"FAC", as it is usually called, was foundational for the use of sheaves in algebraic geometry, extending beyond the case of complex manifolds. Serre introduced Čech cohomology of sheaves in this paper, and, despite some technical deficiencies, revolutionized formulations of algebraic geometry. For example, the long exact sequence in sheaf cohomology allows one to show that some surjective maps of sheaves induce surjective maps on sections; specifically, these are the maps whose kernel (as a sheaf) has a vanishing first cohomology group. The dimension of a vector space of sections of a coherent sheaf is finite, in projective geometry, and such dimensions include many discrete invariants of varieties, for example Hodge numbers. While Grothendieck's derived functor cohomology has replaced Čech cohomology for technical reasons, actual calculations, such as of the cohomology of projective space, are usually carried out by Čech techniques, and for this reason Serre's paper remains important.
"Géométrie Algébrique et Géométrie Analytique".
In mathematics, algebraic geometry and analytic geometry are closely related subjects, where "analytic geometry" is the theory of complex manifolds and the more general analytic spaces defined locally by the vanishing of analytic functions of several complex variables. A (mathematical) theory of the relationship between the two was put in place during the early part of the 1950s, as part of the business of laying the foundations of algebraic geometry to include, for example, techniques from Hodge theory. ("NB" While analytic geometry as use of Cartesian coordinates is also in a sense included in the scope of algebraic geometry, that is not the topic being discussed in this article.) The major paper consolidating the theory was "Géometrie Algébrique et Géométrie Analytique" by Serre, now usually referred to as "GAGA". A "GAGA-style result" would now mean any theorem of comparison, allowing passage between a category of objects from algebraic geometry, and their morphisms, and a well-defined subcategory of analytic geometry objects and holomorphic mappings.
"Le théorème de Riemann–Roch, d'après A. Grothendieck".
Borel and Serre's exposition of Grothendieck's version of the Riemann–Roch theorem, published after Grothendieck made it clear that he was not interested in writing up his own result. Grothendieck reinterpreted both sides of the formula that Hirzebruch proved in 1953 in the framework of morphisms between varieties, resulting in a sweeping generalization. In his proof, Grothendieck broke new ground with his concept of Grothendieck groups, which led to the development of K-theory.
"Éléments de géométrie algébrique".
Written with the assistance of Jean Dieudonné, this is Grothendieck's exposition of his reworking of the foundations of algebraic geometry. It has become the most important foundational work in modern algebraic geometry. The approach expounded in EGA, as these books are known, transformed the field and led to monumental advances.
"Séminaire de géométrie algébrique".
These seminar notes on Grothendieck's reworking of the foundations of algebraic geometry report on work done at IHÉS starting in the 1960s. SGA 1 dates from the seminars of 1960–1961, and the last in the series, SGA 7, dates from 1967 to 1969. In contrast to EGA, which is intended to set foundations, SGA describes ongoing research as it unfolded in Grothendieck's seminar; as a result, it is quite difficult to read, since many of the more elementary and foundational results were relegated to EGA. One of the major results building on the results in SGA is Pierre Deligne's proof of the last of the open Weil conjectures in the early 1970s. Other authors who worked on one or several volumes of SGA include Michel Raynaud, Michael Artin, Jean-Pierre Serre, Jean-Louis Verdier, Pierre Deligne, and Nicholas Katz.
Number theory.
"Brāhmasphuṭasiddhānta".
Brahmagupta's Brāhmasphuṭasiddhānta is the first book that mentions zero as a number, hence Brahmagupta is considered the first to formulate the concept of zero. The current system of the four fundamental operations (addition, subtraction, multiplication and division) based on the Hindu-Arabic number system also first appeared in Brahmasphutasiddhanta. It was also one of the first texts to provide concrete ideas on positive and negative numbers.
"De fractionibus continuis dissertatio".
First presented in 1737, this paper provided the first comprehensive account of the properties of continued fractions. It also contains the first proof that the number e is irrational.
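The regular continued fraction of a number is obtained by repeatedly taking the integer part and inverting the remainder; this illustrative Python sketch (not Euler's own notation) recovers the start of the expansion of e, whose unbounded regular pattern underlies the irrationality proof:

```python
import math

def continued_fraction(x, terms):
    """First partial quotients of the regular continued fraction of x."""
    cf = []
    for _ in range(terms):
        a = math.floor(x)
        cf.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return cf

# Euler established the pattern e = [2; 1, 2, 1, 1, 4, 1, 1, 6, ...],
# which never terminates, hence e is irrational. Floating-point
# precision limits this naive computation to the first few terms.
print(continued_fraction(math.e, 8))  # -> [2, 1, 2, 1, 1, 4, 1, 1]
```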
"Recherches d'Arithmétique".
Developed a general theory of binary quadratic forms to handle the general problem of when an integer is representable by the form formula_1. This included a reduction theory for binary quadratic forms, where he proved that every form is equivalent to a certain canonically chosen reduced form.
"Disquisitiones Arithmeticae".
The "Disquisitiones Arithmeticae" is a profound and masterful book on number theory written by German mathematician Carl Friedrich Gauss and first published in 1801 when Gauss was 24. In this book Gauss brings together results in number theory obtained by mathematicians such as Fermat, Euler, Lagrange and Legendre and adds many important new results of his own. Among his contributions was the first complete proof known of the Fundamental theorem of arithmetic, the first two published proofs of the law of quadratic reciprocity, a deep investigation of binary quadratic forms going beyond Lagrange's work in "Recherches d'Arithmétique", a first appearance of Gauss sums, cyclotomy, and the theory of constructible polygons with a particular application to the constructibility of the regular 17-gon. Of note, in section V, article 303 of Disquisitiones, Gauss summarized his calculations of class numbers of imaginary quadratic number fields, and in fact found all imaginary quadratic number fields of class numbers 1, 2, and 3 (confirmed in 1986) as he had conjectured. In section VII, article 358, Gauss proved what can be interpreted as the first non-trivial case of the Riemann Hypothesis for curves over finite fields (the Hasse–Weil theorem).
"Beweis des Satzes, daß jede unbegrenzte arithmetische Progression, deren erstes Glied und Differenz ganze Zahlen ohne gemeinschaftlichen Factor sind, unendlich viele Primzahlen enthält".
Pioneering paper in analytic number theory, which introduced Dirichlet characters and their L-functions to establish Dirichlet's theorem on arithmetic progressions. In subsequent publications, Dirichlet used these tools to determine, among other things, the class number for quadratic forms.
"Über die Anzahl der Primzahlen unter einer gegebenen Grösse".
"Über die Anzahl der Primzahlen unter einer gegebenen Grösse" (or "On the Number of Primes Less Than a Given Magnitude") is a seminal 8-page paper by Bernhard Riemann published in the November 1859 edition of the "Monthly Reports of the Berlin Academy". Although it is the only paper he ever published on number theory, it contains ideas which influenced dozens of researchers during the late 19th century and up to the present day. The paper consists primarily of definitions, heuristic arguments, sketches of proofs, and the application of powerful analytic methods; all of these have become essential concepts and tools of modern analytic number theory. It also contains the famous Riemann Hypothesis, one of the most important open problems in mathematics.
"Vorlesungen über Zahlentheorie".
"Vorlesungen über Zahlentheorie" ("Lectures on Number Theory") is a textbook of number theory written by German mathematicians P. G. Lejeune Dirichlet and R. Dedekind, and published in 1863.
The "Vorlesungen" can be seen as a watershed between the classical number theory of Fermat, Jacobi and Gauss, and the modern number theory of Dedekind, Riemann and Hilbert. Dirichlet does not explicitly recognise the concept of the group that is central to modern algebra, but many of his proofs show an implicit understanding of group theory.
"Zahlbericht".
Unified and made accessible many of the developments in algebraic number theory made during the nineteenth century. Although criticized by André Weil (who stated "more than half of his famous Zahlbericht is little more than an account of Kummer's number-theoretical work, with inessential improvements") and Emmy Noether, it was highly influential for many years following its publication.
"Fourier Analysis in Number Fields and Hecke's Zeta-Functions".
Generally referred to simply as "Tate's Thesis", Tate's Princeton PhD thesis, under Emil Artin, is a reworking of Erich Hecke's theory of zeta- and "L"-functions in terms of Fourier analysis on the adeles. The introduction of these methods into number theory made it possible to formulate extensions of Hecke's results to more general "L"-functions such as those arising from automorphic forms.
"Automorphic Forms on GL(2)".
This publication offers evidence towards Langlands' conjectures by reworking and expanding the classical theory of modular forms and their "L"-functions through the introduction of representation theory.
"La conjecture de Weil. I.".
Proved the Riemann hypothesis for varieties over finite fields, settling the last of the open Weil conjectures.
"Endlichkeitssätze für abelsche Varietäten über Zahlkörpern".
Faltings proves a collection of important results in this paper, the most famous of which is the first proof of the Mordell conjecture (a conjecture dating back to 1922). Other theorems proved in this paper include an instance of the Tate conjecture (relating the homomorphisms between two abelian varieties over a number field to the homomorphisms between their Tate modules) and some finiteness results concerning abelian varieties over number fields with certain properties.
"Modular Elliptic Curves and Fermat's Last Theorem".
This article proceeds to prove a special case of the Shimura–Taniyama conjecture through the study of the deformation theory of Galois representations. This in turn implies the famed Fermat's Last Theorem. The proof's method of identification of a deformation ring with a Hecke algebra (now referred to as an "R=T" theorem) to prove modularity lifting theorems has been an influential development in algebraic number theory.
"The geometry and cohomology of some simple Shimura varieties".
Harris and Taylor provide the first proof of the local Langlands conjecture for GL("n"). As part of the proof, this monograph also makes an in-depth study of the geometry and cohomology of certain Shimura varieties at primes of bad reduction.
"Le lemme fondamental pour les algèbres de Lie".
Ngô Bảo Châu proved a long-standing unsolved problem in the classical Langlands program, using methods from the Geometric Langlands program.
"Perfectoid space".
Peter Scholze introduced perfectoid spaces.
Analysis.
"Introductio in analysin infinitorum".
The eminent historian of mathematics Carl Boyer once called Euler's "Introductio in analysin infinitorum" the greatest modern textbook in mathematics. Published in two volumes, this book more than any other work succeeded in establishing analysis as a major branch of mathematics, with a focus and approach distinct from that used in geometry and algebra. Notably, Euler identified functions rather than curves to be the central focus in his book. Logarithmic, exponential, trigonometric, and transcendental functions were covered, as were expansions into partial fractions, evaluations of ζ(2k) for k a positive integer between 1 and 13, infinite series and infinite product formulas, continued fractions, and partitions of integers. In this work, Euler proved that every rational number can be written as a finite continued fraction, that the continued fraction of an irrational number is infinite, and derived continued fraction expansions for e and formula_2. This work also contains a statement of Euler's formula and a statement of the pentagonal number theorem, which he had discovered earlier and would publish a proof for in 1751.
"Yuktibhāṣā".
Written in India in 1530, it served as a summary of the Kerala School's achievements in infinite series, trigonometry and mathematical analysis, most of which were discovered earlier by the 14th-century mathematician Madhava. Among its important developments in calculus are infinite series and Taylor-series expansions of some trigonometric functions.
Calculus.
"Nova methodus pro maximis et minimis, itemque tangentibus, quae nec fractas nec irrationales quantitates moratur, et singulare pro illi calculi genus".
Leibniz's first publication on differential calculus, containing the now familiar notation for differentials as well as rules for computing the derivatives of powers, products and quotients.
"Philosophiae Naturalis Principia Mathematica".
The Philosophiae Naturalis Principia Mathematica (Latin: "mathematical principles of natural philosophy", often "Principia" or "Principia Mathematica" for short) is a three-volume work by Isaac Newton published on 5 July 1687. Perhaps the most influential scientific book ever published, it contains the statement of Newton's laws of motion forming the foundation of classical mechanics as well as his law of universal gravitation, and derives Kepler's laws for the motion of the planets (which were first obtained empirically). Here was born the practice, now so standard we identify it with science, of explaining nature by postulating mathematical axioms and demonstrating that their conclusions are observable phenomena. In formulating his physical theories, Newton freely used his unpublished work on calculus. When he submitted Principia for publication, however, Newton chose to recast the majority of his proofs as geometric arguments.
"Institutiones calculi differentialis cum eius usu in analysi finitorum ac doctrina serierum".
Published in two books, Euler's textbook on differential calculus presented the subject in terms of the function concept, which he had introduced in his 1748 "Introductio in analysin infinitorum". This work opens with a study of the calculus of finite differences and makes a thorough investigation of how differentiation behaves under substitutions. Also included is a systematic study of Bernoulli polynomials and the Bernoulli numbers (naming them as such), a demonstration of how the Bernoulli numbers are related to the coefficients in the Euler–Maclaurin formula and the values of ζ(2n), a further study of Euler's constant (including its connection to the gamma function), and an application of partial fractions to differentiation.
"Über die Darstellbarkeit einer Function durch eine trigonometrische Reihe".
Written in 1853, Riemann's work on trigonometric series was published posthumously. In it, he extended Cauchy's definition of the integral to that of the Riemann integral, allowing some functions with dense subsets of discontinuities on an interval to be integrated (which he demonstrated by an example). He also stated the Riemann series theorem, proved the Riemann–Lebesgue lemma for the case of bounded Riemann integrable functions, and developed the Riemann localization principle.
"Intégrale, longueur, aire".
Lebesgue's doctoral dissertation, summarizing and extending his research to date regarding his development of measure theory and the Lebesgue integral.
Complex analysis.
"Grundlagen für eine allgemeine Theorie der Functionen einer veränderlichen complexen Grösse".
Riemann's doctoral dissertation introduced the notion of a Riemann surface, conformal mapping, simple connectivity, the Riemann sphere, the Laurent series expansion for functions having poles and branch points, and the Riemann mapping theorem.
Functional analysis.
"Théorie des opérations linéaires".
The first mathematical monograph on the subject of linear metric spaces, bringing the abstract study of functional analysis to the wider mathematical community. The book introduced the ideas of a normed space and the notion of a so-called "B"-space, a complete normed space. The "B"-spaces are now called Banach spaces and are one of the basic objects of study in all areas of modern mathematical analysis. Banach also gave proofs of versions of the open mapping theorem, closed graph theorem, and Hahn–Banach theorem.
"Produits Tensoriels Topologiques et Espaces Nucléaires".
Grothendieck's thesis introduced the notion of a nuclear space, tensor products of locally convex topological vector spaces, and the start of Grothendieck's work on tensor products of Banach spaces.
Alexander Grothendieck also wrote a textbook on topological vector spaces.
Fourier analysis.
"Mémoire sur la propagation de la chaleur dans les corps solides".
Introduced Fourier analysis, specifically Fourier series. The key contribution was not simply to use trigonometric series, but to model "all" functions by trigonometric series:
<templatestyles src="Template:Blockquote/styles.css" />formula_3
Multiplying both sides by formula_4, and then integrating from formula_5 to formula_6 yields:
formula_7
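The orthogonality argument can be checked numerically. This illustrative Python sketch integrates a square wave against sin(kx) on (−π, π) and recovers coefficients of 4/(πk) for odd k and 0 for even k, matching the values the coefficient formula gives for that example (the test function and sample count are assumptions for the demonstration):

```python
import math

def sine_coefficient(f, k, samples=100000):
    """b_k = (1/pi) * integral_{-pi}^{pi} f(x) sin(kx) dx, midpoint rule."""
    h = 2 * math.pi / samples
    total = 0.0
    for i in range(samples):
        x = -math.pi + (i + 0.5) * h
        total += f(x) * math.sin(k * x)
    return total * h / math.pi

square = lambda x: 1.0 if x > 0 else -1.0
for k in range(1, 6):
    exact = 4 / (math.pi * k) if k % 2 else 0.0
    print(k, round(sine_coefficient(square, k), 4), round(exact, 4))
```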
When Fourier submitted his paper in 1807, the committee (which included Lagrange, Laplace, Malus and Legendre, among others) concluded: "...the manner in which the author arrives at these equations is not exempt of difficulties and [...] his analysis to integrate them still leaves something to be desired on the score of generality and even rigour". Making Fourier series rigorous, which in detail took over a century, led directly to a number of developments in analysis, notably the rigorous statement of the integral via the Dirichlet integral and later the Lebesgue integral.
"Sur la convergence des séries trigonométriques qui servent à représenter une fonction arbitraire entre des limites données".
In his habilitation thesis on Fourier series, Riemann characterized this work of Dirichlet as "the first profound paper about the subject". This paper gave the first rigorous proof of the convergence of Fourier series under fairly general conditions (piecewise continuity and monotonicity) by considering partial sums, which Dirichlet transformed into a particular Dirichlet integral involving what is now called the Dirichlet kernel. This paper introduced the nowhere continuous Dirichlet function and an early version of the Riemann–Lebesgue lemma.
"On convergence and growth of partial sums of Fourier series".
Settled Lusin's conjecture that the Fourier expansion of any formula_8 function converges almost everywhere.
Geometry.
"Baudhayana Sulba Sutra".
Believed to have been written around the 8th century BCE, this is one of the oldest mathematical texts. It laid the foundations of Indian mathematics and was influential in South Asia. Though this was primarily a geometrical text, it also contained some important algebraic developments, including a list of Pythagorean triples discovered algebraically, geometric solutions of linear equations, the use of quadratic equations, and an approximation of the square root of 2.
"Euclid's" "Elements".
Publication data: c. 300 BC
Online version: Interactive Java version
This is often regarded as not only the most important work in geometry but one of the most important works in mathematics. It contains many important results in plane and solid geometry, algebra (books II and V), and number theory (books VII, VIII, and IX). More than any specific result in the publication, it seems that the major achievement of this publication is the promotion of an axiomatic approach as a means for proving results. Euclid's "Elements" has been referred to as the most successful and influential textbook ever written.
"The Nine Chapters on the Mathematical Art".
This was a Chinese mathematics book, mostly geometric, composed during the Han dynasty, perhaps as early as 200 BC. It remained the most important textbook in China and East Asia for over a thousand years, similar to the position of Euclid's "Elements" in Europe. Among its contents: linear problems solved using the principle known later in the West as the "rule of false position"; problems with several unknowns, solved by a principle similar to Gaussian elimination; problems involving the principle known in the West as the Pythagorean theorem; and the earliest known solution of a system of linear equations using a matrix method equivalent to the modern one.
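The "rule of false position" mentioned above solves a linear problem from two deliberately wrong guesses by interpolating their errors; a minimal Python sketch (the example equation is illustrative):

```python
def double_false_position(f, target, guess1, guess2):
    """Solve f(x) == target exactly when f is linear (affine)."""
    e1 = f(guess1) - target  # error of the first guess
    e2 = f(guess2) - target  # error of the second guess
    return (guess1 * e2 - guess2 * e1) / (e2 - e1)

# Example: find x with 3x + 7 = 22, starting from two wrong guesses.
print(double_false_position(lambda x: 3 * x + 7, 22, 1, 10))  # -> 5.0
```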
"The Conics".
The Conics was written by Apollonius of Perga, a Greek mathematician. His innovative methodology and terminology, especially in the field of conics, influenced many later scholars including Ptolemy, Francesco Maurolico, Isaac Newton, and René Descartes. It was Apollonius who gave the ellipse, the parabola, and the hyperbola the names by which we know them.
"Surya Siddhanta".
It describes the archaeo-astronomical theories, principles and methods of the ancient Hindus. This siddhanta is supposed to be the knowledge that the Sun god gave to an Asura called Maya. It uses sine (jya), cosine (kojya or "perpendicular sine") and inverse sine (otkram jya) for the first time. Later Indian mathematicians such as Aryabhata made references to this text, while later Arabic and Latin translations were very influential in Europe and the Middle East.
"Aryabhatiya".
This was a highly influential text during the Golden Age of mathematics in India. The text was highly concise and therefore elaborated upon in commentaries by later mathematicians. It made significant contributions to geometry and astronomy, including the introduction of sine and cosine, determination of the approximate value of pi, and an accurate calculation of the earth's circumference.
"La Géométrie".
La Géométrie was published in 1637 and written by René Descartes. The book was influential in developing the Cartesian coordinate system and specifically discussed the representation of points of a plane, via real numbers; and the representation of curves, via equations.
"Grundlagen der Geometrie".
Online version: English
Publication data:
Hilbert's axiomatization of geometry, whose primary influence was in its pioneering approach to metamathematical questions including the use of models to prove axiom independence and the importance of establishing the consistency and completeness of an axiomatic system.
"Regular Polytopes".
"Regular Polytopes" is a comprehensive survey of the geometry of regular polytopes, the generalisation of regular polygons and regular polyhedra to higher dimensions. Originating with an essay entitled "Dimensional Analogy" written in 1923, the first edition of the book took Coxeter 24 years to complete. Originally written in 1947, the book was updated and republished in 1963 and 1973.
Differential geometry.
"Recherches sur la courbure des surfaces".
Publication data: Mémoires de l'académie des sciences de Berlin 16 (1760) pp. 119–143; published 1767. (Full text and an English translation available from the Dartmouth Euler archive.)
Established the theory of surfaces, and introduced the idea of principal curvatures, laying the foundation for subsequent developments in the differential geometry of surfaces.
"Disquisitiones generales circa superficies curvas".
Publication data: "Disquisitiones generales circa superficies curvas", "Commentationes Societatis Regiae Scientiarum Gottingesis Recentiores" Vol. VI (1827), pp. 99–146; "General Investigations of Curved Surfaces" (published 1965) Raven Press, New York, translated by A.M.Hiltebeitel and J.C.Morehead.
Groundbreaking work in differential geometry, introducing the notion of Gaussian curvature and Gauss's celebrated Theorema Egregium.
"Über die Hypothesen, welche der Geometrie zu Grunde Liegen".
Publication data: "Über die Hypothesen, welche der Geometrie zu Grunde Liegen", "Abhandlungen der Königlichen Gesellschaft der Wissenschaften zu Göttingen", Vol. 13, 1867. English translation
Riemann's famous Habilitationsvortrag, in which he introduced the notions of a manifold, Riemannian metric, and curvature tensor. Richard Dedekind reported on the reaction of the then 77-year-old Gauss to Riemann's presentation, stating that it had "surpassed all his expectations" and that he spoke "with the greatest appreciation, and with an excitement rare for him, about the depth of the ideas presented by Riemann."
"Leçons sur la théorie génerale des surfaces et les applications géométriques du calcul infinitésimal".
Publication data: Volume I, Volume II, Volume III, Volume IV
Leçons sur la théorie générale des surfaces et les applications géométriques du calcul infinitésimal (on the General Theory of Surfaces and the Geometric Applications of Infinitesimal Calculus). A treatise covering virtually every aspect of the 19th century differential geometry of surfaces.
Topology.
"Analysis situs".
Description: Poincaré's Analysis Situs and his Compléments à l'Analysis Situs laid the general foundations for algebraic topology. In these papers, Poincaré introduced the notions of homology and the fundamental group, provided an early formulation of Poincaré duality, gave the Euler–Poincaré characteristic for chain complexes, and mentioned several important conjectures including the Poincaré conjecture, demonstrated by Grigori Perelman in 2003.
"L'anneau d'homologie d'une représentation", "Structure de l'anneau d'homologie d'une représentation".
These two Comptes Rendus notes of Leray from 1946 introduced the novel concepts of sheaves, sheaf cohomology, and spectral sequences, which he had developed during his years of captivity as a prisoner of war. Leray's announcements and applications (published in other Comptes Rendus notes from 1946) drew immediate attention from other mathematicians. Subsequent clarification, development, and generalization by Henri Cartan, Jean-Louis Koszul, Armand Borel, Jean-Pierre Serre, and Leray himself allowed these concepts to be understood and applied to many other areas of mathematics. Dieudonné would later write that these notions created by Leray "undoubtedly rank at the same level in the history of mathematics as the methods invented by Poincaré and Brouwer".
Quelques propriétés globales des variétés differentiables.
In this paper, Thom proved the Thom transversality theorem, introduced the notions of oriented and unoriented cobordism, and demonstrated that cobordism groups could be computed as the homotopy groups of certain Thom spaces. Thom completely characterized the unoriented cobordism ring and achieved strong results for several problems, including Steenrod's problem on the realization of cycles.
Category theory.
"General Theory of Natural Equivalences".
The first paper on category theory. Mac Lane later wrote in "Categories for the Working Mathematician" that he and Eilenberg introduced categories so that they could introduce functors, and they introduced functors so that they could introduce natural equivalences. Prior to this paper, "natural" was used in an informal and imprecise way to designate constructions that could be made without making any choices. Afterwards, "natural" had a precise meaning which occurred in a wide variety of contexts and had powerful and important consequences.
"Categories for the Working Mathematician".
Saunders Mac Lane, one of the founders of category theory, wrote this exposition to bring categories to the masses. Mac Lane brings to the fore the important concepts that make category theory useful, such as adjoint functors and universal properties.
"Higher Topos Theory".
"This purpose of this book is twofold: to provide a general introduction to higher category theory (using the formalism of "quasicategories" or "weak Kan complexes"), and to apply this theory to the study of higher versions of Grothendieck topoi. A few applications to classical topology are included." (see arXiv.)
Set theory.
"Über eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen".
Online version: Online version
Contains the first proof that the set of all real numbers is uncountable; also contains a proof that the set of algebraic numbers is countable. (See Georg Cantor's first set theory article.)
"Grundzüge der Mengenlehre".
First published in 1914, this was the first comprehensive introduction to set theory. Besides the systematic treatment of known results in set theory, the book also contains chapters on measure theory and topology, which were then still considered parts of set theory. Here Hausdorff presents and develops highly original material which was later to become the basis for those areas.
"The consistency of the axiom of choice and of the generalized continuum-hypothesis with the axioms of set theory".
Gödel proves the results of the title. Also, in the process, introduces the class L of constructible sets, a major influence in the development of axiomatic set theory.
"The Independence of the Continuum Hypothesis".
Cohen's breakthrough work proved the independence of the continuum hypothesis and axiom of choice with respect to Zermelo–Fraenkel set theory. In proving this Cohen introduced the concept of "forcing" which led to many other major results in axiomatic set theory.
Logic.
"The Laws of Thought".
Published in 1854, The Laws of Thought was the first book to provide a mathematical foundation for logic. Its aim was a complete re-expression and extension of Aristotle's logic in the language of mathematics. Boole's work founded the discipline of algebraic logic and would later be central for Claude Shannon in the development of digital logic.
"Begriffsschrift".
Published in 1879, the title Begriffsschrift is usually translated as "concept writing" or "concept notation"; the full title of the book identifies it as "a formula language, modelled on that of arithmetic, of pure thought". Frege's motivation for developing his formal logical system was similar to Leibniz's desire for a "calculus ratiocinator". Frege defines a logical calculus to support his research in the foundations of mathematics. Begriffsschrift is both the name of the book and the calculus defined therein. It was arguably the most significant publication in logic since Aristotle.
"Formulario mathematico".
First published in 1895, the Formulario mathematico was the first mathematical book written entirely in a formalized language. It contained a description of mathematical logic and many important theorems in other branches of mathematics. Many of the notations introduced in the book are now in common use.
"Principia Mathematica".
The Principia Mathematica is a three-volume work on the foundations of mathematics, written by Bertrand Russell and Alfred North Whitehead and published in 1910–1913. It is an attempt to derive all mathematical truths from a well-defined set of axioms and inference rules in symbolic logic. The questions remained whether a contradiction could be derived from the Principia's axioms, and whether there exists a mathematical statement which could neither be proven nor disproven in the system. These questions were settled, in a rather surprising way, by Gödel's incompleteness theorem in 1931.
"Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I".
Online version: Online version
In mathematical logic, Gödel's incompleteness theorems are two celebrated theorems proved by Kurt Gödel in 1931.
The first incompleteness theorem states:
For any formal system such that (1) it is formula_9-consistent (omega-consistent), (2) it has a recursively definable set of axioms and rules of derivation, and (3) every recursive relation of natural numbers is definable in it, there exists a formula of the system such that, according to the intended interpretation of the system, it expresses a truth about natural numbers and yet it is not a theorem of the system.
Combinatorics.
"On sets of integers containing no k elements in arithmetic progression".
Settled a conjecture of Paul Erdős and Pál Turán (now known as Szemerédi's theorem) that if a sequence of natural numbers has positive upper density then it contains arbitrarily long arithmetic progressions. Szemerédi's solution has been described as a "masterpiece of combinatorics" and it introduced new ideas and tools to the field including a weak form of the Szemerédi regularity lemma.
Graph theory.
"Solutio problematis ad geometriam situs pertinentis".
Euler's solution of the Königsberg bridge problem in "Solutio problematis ad geometriam situs pertinentis" ("The solution of a problem relating to the geometry of position") is considered to be the first theorem of graph theory.
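Euler's argument comes down to counting vertex degrees: a connected multigraph admits a walk traversing every edge exactly once only if at most two vertices have odd degree. This illustrative Python sketch (the vertex labels are assumptions) applies the test to the seven Königsberg bridges:

```python
from collections import Counter

def odd_degree_count(edges):
    """Number of odd-degree vertices; 0 allows an Euler circuit, 2 an Euler path."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(1 for d in deg.values() if d % 2)

# The seven bridges of Konigsberg between land masses A, B, C (island), D.
bridges = [("A", "C"), ("A", "C"), ("B", "C"), ("B", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]
print(odd_degree_count(bridges))  # -> 4 odd vertices: no such walk exists
```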
"On the evolution of random graphs".
Provides a detailed discussion of sparse random graphs, including distribution of components, occurrence of small subgraphs, and phase transitions.
"Network Flows and General Matchings".
Presents the Ford–Fulkerson algorithm for solving the maximum flow problem, along with many ideas on flow-based models.
Probability theory and statistics.
"See list of important publications in statistics."
Game theory.
"Zur Theorie der Gesellschaftsspiele".
Went well beyond Émile Borel's initial investigations into strategic two-person game theory by proving the minimax theorem for two-person, zero-sum games.
"Theory of Games and Economic Behavior".
This book led to the investigation of modern game theory as a prominent branch of mathematics. This work contained the method for finding optimal solutions for two-person zero-sum games.
"Equilibrium Points in N-person Games".
Introduced the solution concept now known as the Nash equilibrium and proved its existence for every finite n-person game.
"On Numbers and Games".
The book is in two, {0,1|}, parts. The zeroth part is about numbers, the first part about games – both the values of games and also some real games that can be played such as Nim, Hackenbush, Col and Snort amongst the many described.
"Winning Ways for your Mathematical Plays".
A compendium of information on mathematical games. It was first published in 1982 in two volumes, one focusing on Combinatorial game theory and surreal numbers, and the other concentrating on a number of specific games.
Information theory.
"A Mathematical Theory of Communication".
An article, later expanded into a book, which developed the concepts of information entropy and redundancy, and introduced the term bit (which Shannon credited to John Tukey) as a unit of information.
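Shannon's entropy, H = −Σ p·log₂ p, measures the average information of a source in bits; a minimal Python sketch (the function name is illustrative):

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2(p)) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy_bits([0.5, 0.5]))   # fair coin -> 1.0 bit
print(entropy_bits([0.9, 0.1]))   # biased coin -> ~0.469 bits
print(entropy_bits([0.25] * 4))   # fair 4-way choice -> 2.0 bits
```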
Fractals.
"How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension".
A discussion of self-similar curves that have fractional dimensions between 1 and 2. These curves are examples of fractals, although Mandelbrot does not use this term in the paper, as he did not coin it until 1975.
Shows Mandelbrot's early thinking on fractals, and is an example of the linking of mathematical objects with natural forms that was a theme of much of his later work.
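Richardson's empirical law, discussed in the paper, says that measured length scales as L(s) ∝ s^(1−D) for ruler size s and dimension D; this illustrative Python sketch checks the relation on the Koch curve, where D = log 4 / log 3 ≈ 1.26 lies strictly between 1 and 2:

```python
import math

# Koch curve: each refinement divides the ruler by 3 and multiplies
# the number of segments by 4, so measured length grows as (4/3)^n.
D = math.log(4) / math.log(3)   # ~1.2619, a fractional dimension
for n in range(6):
    ruler = 3.0 ** -n
    length = (4.0 / 3.0) ** n
    # Richardson's law L(s) ~ s^(1 - D) predicts the same values:
    print(n, round(length, 4), round(ruler ** (1 - D), 4))
```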
Numerical analysis.
Optimization.
"Method of Fluxions".
"Method of Fluxions" was a book written by Isaac Newton. The book was completed in 1671, and published in 1736. Within this book, Newton describes a method (the Newton–Raphson method) for finding the real zeroes of a function.
"Essai d'une nouvelle méthode pour déterminer les maxima et les minima des formules intégrales indéfinies".
Major early work on the calculus of variations, building upon some of Lagrange's prior investigations as well as those of Euler. Contains investigations of minimal surface determination as well as the initial appearance of Lagrange multipliers.
"Математические методы организации и планирования производства".
Kantorovich wrote the first paper on production planning, which used linear programs as the model. He received the Nobel Memorial Prize in Economic Sciences for this work in 1975.
"Decomposition Principle for Linear Programs".
Dantzig is considered the father of linear programming in the Western world. He independently invented the simplex algorithm. Dantzig and Wolfe worked on decomposition algorithms for large-scale linear programs in factory and production planning.
"How Good is the Simplex Algorithm?".
Klee and Minty gave an example showing that the simplex algorithm can take exponentially many steps to solve a linear program.
"Полиномиальный алгоритм в линейном программировании".
Khachiyan's work on the ellipsoid method. This was the first polynomial time algorithm for linear programming.
Early manuscripts.
These are publications that are not necessarily relevant to a mathematician nowadays, but are nonetheless important publications in the history of mathematics.
"Moscow Mathematical Papyrus".
This is one of the earliest mathematical treatises that still survives today. The papyrus contains 25 problems involving arithmetic, geometry, and algebra, each with a solution given. Written in Ancient Egypt in approximately 1850 BC.
"Rhind Mathematical Papyrus".
One of the oldest mathematical texts, dating to the Second Intermediate Period of ancient Egypt. It was copied by the scribe Ahmes (properly "Ahmose") from an older Middle Kingdom papyrus. It laid the foundations of Egyptian mathematics and, in turn, later influenced Greek and Hellenistic mathematics. Besides describing how to obtain an approximation of π which misses the mark by less than one per cent, it describes one of the earliest attempts at squaring the circle and in the process provides persuasive evidence against the theory that the Egyptians deliberately built their pyramids to enshrine the value of π in the proportions. Even though it would be a strong overstatement to suggest that the papyrus represents even rudimentary attempts at analytical geometry, Ahmes did make use of a kind of analogue of the cotangent.
"Archimedes Palimpsest".
Although the only mathematical tools at its author's disposal were what we might now consider secondary-school geometry, he used those methods with rare brilliance, explicitly using infinitesimals to solve problems that would now be treated by integral calculus. Among those problems were that of the center of gravity of a solid hemisphere, that of the center of gravity of a frustum of a circular paraboloid, and that of the area of a region bounded by a parabola and one of its secant lines. For explicit details of the method used, see Archimedes' use of infinitesimals.
"The Sand Reckoner".
Online version: Online version
The first known (European) system of number-naming that can be expanded beyond the needs of everyday life.
Textbooks.
"Abstract Algebra".
"Dummit and Foote" has become the modern dominant abstract algebra textbook following Jacobson's Basic Algebra.
"Arithmetika Horvatzka".
"Arithmetika Horvatzka" (1758) was the first Croatian language arithmetic textbook, written in the vernacular Kajkavian dialect of Croatian language. It established a complete system of arithmetic terminology in Croatian, and vividly used examples from everyday life in Croatia to present mathematical operations. Although it was clear that Šilobod had made use of words that were in dictionaries, this was clearly insufficient for his purposes; and he made up some names by adapting Latin terminology to Kaikavian use. Full text of "Arithmetika Horvatszka" is available via archive.org.
"Synopsis of Pure Mathematics".
Contains over 6000 theorems of mathematics, assembled by George Shoobridge Carr for the purpose of training his students for the Cambridge Mathematical Tripos exams. Studied extensively by Ramanujan.
"Éléments de mathématique".
One of the most influential books in French mathematical literature. It introduces some of the notations and definitions that are now usual (the symbol ∅ or the term bijective for example). Characterized by an extreme level of rigour, formalism and generality (up to the point of being highly criticized for that), its publication started in 1939 and is still unfinished today.
"The Ground of Arts".
Written in 1542, it was the first really popular arithmetic book written in the English language.
"Cocker's Arithmetick".
Textbook of arithmetic published in 1678 by John Hawkins, who claimed to have edited manuscripts left by Edward Cocker, who had died in 1676. This influential mathematics textbook was used to teach arithmetic in schools in the United Kingdom for over 150 years.
"The Schoolmaster's Assistant, Being a Compendium of Arithmetic both Practical and Theoretical".
An early and popular English arithmetic textbook published in America in the 18th century. The book reached from the introductory topics to the advanced in five sections.
"Geometry".
Publication data: 1892
The most widely used and influential textbook in Russian mathematics. (See Kiselyov page.)
"A Course of Pure Mathematics".
A classic textbook in introductory mathematical analysis, written by G. H. Hardy. It was first published in 1908, and went through many editions. It was intended to help reform mathematics teaching in the UK, and more specifically in the University of Cambridge, and in schools preparing pupils to study mathematics at Cambridge. As such, it was aimed directly at "scholarship level" students – the top 10% to 20% by ability. The book contains a large number of difficult problems. The content covers introductory calculus and the theory of infinite series.
"Moderne Algebra".
The first introductory textbook (graduate level) expounding the abstract approach to algebra developed by Emil Artin and Emmy Noether. First published in German in 1931 by Springer Verlag. A later English translation was published in 1949 by Frederick Ungar Publishing Company.
"Algebra".
A definitive introductory text for abstract algebra using a category theoretic approach. Both a rigorous introduction from first principles, and a reasonably comprehensive survey of the field.
"Algebraic Geometry".
The first comprehensive introductory (graduate level) text in algebraic geometry that used the language of schemes and cohomology. Published in 1977, it lacks aspects of the scheme language which are nowadays considered central, like the functor of points.
"Naive Set Theory".
An undergraduate introduction to not-very-naive set theory which has lasted for decades. It is still considered by many to be the best introduction to set theory for beginners. While the title states that it is naive, which is usually taken to mean without axioms, the book does introduce all the axioms of Zermelo–Fraenkel set theory and gives correct and rigorous definitions for basic objects. Where it differs from a "true" axiomatic set theory book is its character: There are no long-winded discussions of axiomatic minutiae, and there is next to nothing about topics like large cardinals. Instead it aims, and succeeds, in being intelligible to someone who has never thought about set theory before.
"Cardinal and Ordinal Numbers".
The "nec plus ultra" reference for basic facts about cardinal and ordinal numbers. If you have a question about the cardinality of sets occurring in everyday mathematics, the first place to look is this book, first published in the early 1950s but based on the author's lectures on the subject over the preceding 40 years.
"Set Theory: An Introduction to Independence Proofs".
This book is not really for beginners, but graduate students with some minimal experience in set theory and formal logic will find it a valuable self-teaching tool, particularly in regard to forcing. It is far easier to read than a true reference work such as Jech, "Set Theory". It may be the best textbook from which to learn forcing, though it has the disadvantage that the exposition of forcing relies somewhat on the earlier presentation of Martin's axiom.
"Topologie".
First published around 1935, this text was a pioneering "reference" textbook in topology, already incorporating many modern concepts from set-theoretic topology, homological algebra and homotopy theory.
"General Topology".
First published in 1955, for many years the only introductory graduate level textbook in the US, teaching the basics of point set, as opposed to algebraic, topology. Prior to this the material, essential for advanced study in many fields, was only available in bits and pieces from texts on other topics or journal articles.
"Topology from the Differentiable Viewpoint".
This short book introduces the main concepts of differential topology in Milnor's lucid and concise style. While the book does not cover very much, its topics are explained beautifully in a way that illuminates all their details.
"Number Theory, An approach through history from Hammurapi to Legendre".
An historical study of number theory, written by one of the 20th century's greatest researchers in the field. The book covers some thirty-six centuries of arithmetical work, but the bulk of it is devoted to a detailed study and exposition of the work of Fermat, Euler, Lagrange, and Legendre. The author wishes to take the reader into the workshop of his subjects to share their successes and failures. A rare opportunity to see the historical development of a subject through the mind of one of its greatest practitioners.
"An Introduction to the Theory of Numbers".
"An Introduction to the Theory of Numbers" was first published in 1938, and is still in print, with the latest edition being the 6th (2008). It is likely that almost every serious student and researcher into number theory has consulted this book, and probably has it on their bookshelf. It was not intended to be a textbook, and is rather an introduction to a wide range of differing areas of number theory which would now almost certainly be covered in separate volumes. The writing style has long been regarded as exemplary, and the approach gives insight into a variety of areas without requiring much more than a good grounding in algebra, calculus and complex numbers.
Handbooks.
"Bronshtein and Semendyayev".
"Bronshtein and Semendyayev" is the informal name of a comprehensive handbook of fundamental working knowledge of mathematics and table of formulas originally compiled by the Russian mathematician Ilya Nikolaevich Bronshtein and engineer Konstantin Adolfovic Semendyayev. The work was first published in 1945 in Russia and soon became a "standard" and frequently used guide for scientists, engineers, and technical university students. It has been translated into German, English, and many other languages. The latest edition was published in 2015 by Springer.
"CRC Standard Mathematical Tables".
"CRC Standard Mathematical Tables" is a comprehensive one-volume handbook of fundamental working knowledge of mathematics and table of formulas. The handbook was originally published in 1928. The latest edition was published in 2018 by CRC Press, with Daniel Zwillinger as the editor-in-chief.
Popular writings.
"Gödel, Escher, Bach".
"Gödel, Escher, Bach: an Eternal Golden Braid" is a Pulitzer Prize-winning book, first published in 1979 by Basic Books.
It is a book about how the creative achievements of logician Kurt Gödel, artist M. C. Escher and composer Johann Sebastian Bach interweave. As the author states: "I realized that to me, Gödel and Escher and Bach were only shadows cast in different directions by some central solid essence. I tried to reconstruct the central object, and came up with this book."
"The World of Mathematics".
"The World of Mathematics" was specially designed to make mathematics more accessible to the inexperienced. It comprises nontechnical essays on every aspect of the vast subject, including articles by and about scores of eminent mathematicians, as well as literary figures, economists, biologists, and many other eminent thinkers. Includes the work of Archimedes, Galileo, Descartes, Newton, Gregor Mendel, Edmund Halley, Jonathan Swift, John Maynard Keynes, Henri Poincaré, Lewis Carroll, George Boole, Bertrand Russell, Alfred North Whitehead, John von Neumann, and many others. In addition, an informative commentary by distinguished scholar James R. Newman precedes each essay or group of essays, explaining their relevance and context in the history and development of mathematics. Originally published in 1956, it does not include many of the exciting discoveries of the later years of the 20th century but it has no equal as a general historical survey of important topics and applications.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{Q}(\\sqrt{-3})"
},
{
"math_id": 1,
"text": "ax^2 + by^2 + cxy"
},
{
"math_id": 2,
"text": "\\textstyle\\sqrt{e}"
},
{
"math_id": 3,
"text": "\\varphi(y)=a\\cos\\frac{\\pi y}{2}+a'\\cos 3\\frac{\\pi y}{2}+a''\\cos5\\frac{\\pi y}{2}+\\cdots."
},
{
"math_id": 4,
"text": "\\cos(2i+1)\\frac{\\pi y}{2}"
},
{
"math_id": 5,
"text": "y=-1"
},
{
"math_id": 6,
"text": "y=+1"
},
{
"math_id": 7,
"text": "a_i=\\int_{-1}^1\\varphi(y)\\cos(2i+1)\\frac{\\pi y}{2}\\,dy."
},
{
"math_id": 8,
"text": "L^2"
},
{
"math_id": 9,
"text": "\\omega"
}
]
| https://en.wikipedia.org/wiki?curid=708399 |
70841759 | Proverbs 12 | Proverbs 12 is the twelfth chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the second collection of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 12 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a section regarded as the second collection in the book of Proverbs (comprising Proverbs 10:1–22:16), also called "The First 'Solomonic' Collection" (the second one in Proverbs 25:1–29:27). The collection contains 375 sayings, each of which consists of two parallel phrases, except for one saying, which consists of three parts.
"Whoever loves discipline loves knowledge,"
"but he who hates reproof is stupid"
Verse 1.
This saying, along with those in verses 15–16 and 23, describes central characteristics of a "fool" in the Book of Proverbs.
"In the path of righteousness is life,"
"and in its pathway there is no death."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70841759 |
70841764 | Proverbs 11 | Proverbs 11 is the eleventh chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections; the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the second collection of the book.
Text.
Hebrew.
The following table shows the Hebrew text of Proverbs 11 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain).
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter belongs to a section regarded as the second collection in the book of Proverbs (comprising Proverbs 10:1–22:16), also called "The First 'Solomonic' Collection" (the second one in Proverbs 25:1–29:27). The collection contains 375 sayings, each of which consists of two parallel phrases, except for one saying, which consists of three parts.
"A false balance is abomination to the Lord,"
"but a just weight is His delight."
Verse 1.
Stones were used as a standard for measuring amounts of commodities and precious metals (silver or gold) on the scales, so they were critical to the integrity of economic transactions, as some people might cheat by tampering with the scale or the stones. The use of false weights and measures in business practices (cf. Proverbs 16:11; 20:10, 23) is condemned in the Torah (Deuteronomy 25:13–16; Leviticus 19:35–36) and the books of the prophets (Amos 8:5; Micah 6:11) as well as in ancient Near-Eastern law codes ("ANET" 388, 423); the term 'abomination to the LORD' conveys the strongest possible condemnation (cf. Proverbs 6:16).
"A gracious woman gets honor,"
"and violent men get riches."
Verse 16.
The Greek Septuagint version has an addition between the first and second clause as follows:
"She who hates virtue makes a throne for dishonor;"
"the idle will be destitute of means"
This is followed by several English versions (e.g., NAB, NEB, NRSV, TEV).
The saying contrasts 'the honor that a woman obtains through her natural disposition' with 'the effort men must expend to acquire wealth', with an implication that 'the ruthless men will obtain wealth without honor'.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70841764 |
70847 | X-ray photoelectron spectroscopy | Spectroscopic technique
X-ray photoelectron spectroscopy (XPS) is a surface-sensitive quantitative spectroscopic technique that measures the very topmost layers of any surface (on the order of 200 atoms, about 10 nm or 0.01 μm deep). It belongs to the family of photoemission spectroscopies in which electron population spectra are obtained by irradiating a material with a beam of X-rays. XPS is based on the photoelectric effect and can identify the elements that exist within a material (elemental composition) or are covering its surface, as well as their chemical state and the overall electronic structure and density of the electronic states in the material. XPS is a powerful measurement technique because it not only shows what elements are present, but also what other elements they are bonded to. The technique can be used in line profiling of the elemental composition across the surface, or in depth profiling when paired with ion-beam etching. It is often applied to study chemical processes in materials in their as-received state or after cleavage, scraping, exposure to heat, reactive gases or solutions, ultraviolet light, or during ion implantation.
Chemical states are inferred from the measurement of the kinetic energy and the number of the ejected electrons. XPS requires high vacuum (residual gas pressure "p" ~ 10⁻⁶ Pa) or ultra-high vacuum ("p" < 10⁻⁷ Pa) conditions, although a current area of development is ambient-pressure XPS, in which samples are analyzed at pressures of a few tens of millibar.
When laboratory X-ray sources are used, XPS easily detects all elements except hydrogen and helium. The detection limit is in the parts per thousand range, but parts per million (ppm) are achievable with long collection times and concentration at top surface.
XPS is routinely used to analyze inorganic compounds, metal alloys, polymers, elements, catalysts, glasses, ceramics, paints, papers, inks, woods, plant parts, make-up, teeth, bones, medical implants, bio-materials, coatings, viscous oils, glues, ion-modified materials and many others. Somewhat less routinely, XPS is used to analyze the hydrated forms of materials such as hydrogels and biological samples by freezing them in their hydrated state in an ultrapure environment, and allowing multilayers of ice to sublime away prior to analysis.
Basic physics.
Because the energy of an X-ray with particular wavelength is known (for Al Kα X-rays, "E"photon = 1486.7 eV), and because the emitted electrons' kinetic energies are measured, the electron binding energy of each of the emitted electrons can be determined by using the photoelectric effect equation,
formula_0,
where "E"binding is the binding energy (BE) of the electron measured relative to the chemical potential, "E"photon is the energy of the X-ray photons being used, "E"kinetic is the kinetic energy of the electron as measured by the instrument and formula_1 is a work function-like term for the specific surface of the material, which in real measurements includes a small correction by the instrument's work function because of the contact potential. This equation is essentially a conservation of energy equation. The work function-like term formula_1 can be thought of as an adjustable instrumental correction factor that accounts for the few eV of kinetic energy given up by the photoelectron as it gets emitted from the bulk and absorbed by the detector. It is a constant that rarely needs to be adjusted in practice.
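As a minimal illustration of this bookkeeping (a sketch, not instrument software; the 4.5 eV work-function term is an assumed example value), the conversion from measured kinetic energy to binding energy for an Al Kα source can be written as:

```python
AL_KALPHA_EV = 1486.7  # Al K-alpha photon energy in eV

def binding_energy(e_kinetic_ev, phi_ev=4.5, e_photon_ev=AL_KALPHA_EV):
    """E_binding = E_photon - (E_kinetic + phi), all energies in eV."""
    return e_photon_ev - (e_kinetic_ev + phi_ev)

# A photoelectron detected at 1202.0 eV kinetic energy:
print(binding_energy(1202.0))  # 280.2 eV
```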
History.
In 1887, Heinrich Rudolf Hertz discovered but could not explain the photoelectric effect, which was later explained in 1905 by Albert Einstein (Nobel Prize in Physics 1921). Two years after Einstein's publication, in 1907, P.D. Innes experimented with a Röntgen tube, Helmholtz coils, a magnetic field hemisphere (an electron kinetic energy analyzer), and photographic plates, to record broad bands of emitted electrons as a function of velocity, in effect recording the first XPS spectrum. Other researchers, including Henry Moseley, Rawlinson and Robinson, independently performed various experiments to sort out the details in the broad bands. After WWII, Kai Siegbahn and his research group in Uppsala (Sweden) developed several significant improvements in the equipment, and in 1954 recorded the first high-energy-resolution XPS spectrum of cleaved sodium chloride (NaCl), revealing the potential of XPS. A few years later in 1967, Siegbahn published a comprehensive study of XPS, bringing instant recognition of the utility of XPS and also the first hard X-ray photoemission experiments, which he referred to as Electron Spectroscopy for Chemical Analysis (ESCA). In cooperation with Siegbahn, a small group of engineers (Mike Kelly, Charles Bryson, Lavier Faye, Robert Chaney) at Hewlett-Packard in the US, produced the first commercial monochromatic XPS instrument in 1969. Siegbahn received the Nobel Prize for Physics in 1981, to acknowledge his extensive efforts to develop XPS into a useful analytical tool. In parallel with Siegbahn's work, David Turner at Imperial College London (and later at Oxford University) developed ultraviolet photoelectron spectroscopy (UPS) for molecular species using helium lamps.
Measurement.
A typical XPS spectrum is a plot of the number of electrons detected at a specific binding energy. Each element produces a set of characteristic XPS peaks. These peaks correspond to the electron configuration of the electrons within the atoms, e.g., 1"s", 2"s", 2"p", 3"s", etc. The number of detected electrons in each peak is directly related to the amount of element within the XPS sampling volume. To generate atomic percentage values, each raw XPS signal is corrected by dividing the intensity by a "relative sensitivity factor" (RSF), and normalized over all of the elements detected. Since hydrogen is not detected, these atomic percentages exclude hydrogen.
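The normalization just described amounts to a few lines of code. In the following sketch the peak areas and relative sensitivity factors are illustrative numbers, not tabulated values:

```python
def atomic_percent(peak_areas, rsf):
    """Divide each raw peak area by its RSF, then normalize to 100%."""
    corrected = {el: area / rsf[el] for el, area in peak_areas.items()}
    total = sum(corrected.values())
    return {el: 100.0 * v / total for el, v in corrected.items()}

areas = {"C 1s": 12000.0, "O 1s": 30000.0, "Si 2p": 5000.0}  # raw intensities
rsfs = {"C 1s": 1.00, "O 1s": 2.93, "Si 2p": 0.82}           # assumed RSFs
print(atomic_percent(areas, rsfs))
# roughly 42% C, 36% O, 22% Si; hydrogen is excluded by construction
```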
Quantitative accuracy and precision.
XPS is widely used to generate an empirical formula because it readily yields excellent quantitative accuracy from homogeneous solid-state materials. Absolute quantification requires the use of certified (or independently verified) standard samples, and is generally more challenging and less common. Relative quantification involves comparisons between several samples in a set for which one or more analytes are varied while all other components (the sample matrix) are held constant. Quantitative accuracy depends on several parameters such as: signal-to-noise ratio, peak intensity, accuracy of relative sensitivity factors, correction for electron transmission function, surface volume homogeneity, correction for energy dependence of electron mean free path, and degree of sample degradation due to analysis. Under optimal conditions, the quantitative accuracy of the atomic percent (at%) values calculated from the major XPS peaks is 90-95% for each peak. The quantitative accuracy for the weaker XPS signals, which have peak intensities 10-20% of the strongest signal, is 60-80% of the true value, and depends upon the amount of effort used to improve the signal-to-noise ratio (for example by signal averaging). Quantitative precision (the ability to repeat a measurement and obtain the same result) is an essential consideration for proper reporting of quantitative results.
Detection limits.
Detection limits may vary greatly with the cross section of the core state of interest and the background signal level. In general, photoelectron cross sections increase with atomic number. The background increases with the atomic number of the matrix constituents as well as the binding energy, because of secondary emitted electrons. For example, in the case of gold on silicon, where the high cross section Au4f peak is at a higher kinetic energy than the major silicon peaks, it sits on a very low background and detection limits of 1 ppm or better may be achieved with reasonable acquisition times. Conversely, for silicon on gold, where the modest cross section Si2p line sits on the large background below the Au4f lines, detection limits would be much worse for the same acquisition time. Detection limits are often quoted as 0.1–1.0 atomic percent (0.1% = 1 part per thousand = 1000 ppm) for practical analyses, but lower limits may be achieved in many circumstances.
Degradation during analysis.
Degradation depends on the sensitivity of the material to the wavelength of X-rays used, the total dose of the X-rays, the temperature of the surface and the level of the vacuum. Metals, alloys, ceramics and most glasses are not measurably degraded by either non-monochromatic or monochromatic X-rays. Some, but not all, polymers, catalysts, certain highly oxygenated compounds, various inorganic compounds and fine organics are. Non-monochromatic X-ray sources produce a significant amount of high energy Bremsstrahlung X-rays (1–15 keV of energy) which directly degrade the surface chemistry of various materials. Non-monochromatic X-ray sources also produce a significant amount of heat (100 to 200 °C) on the surface of the sample because the anode that produces the X-rays is typically only a few centimetres away from the sample. This level of heat, when combined with the Bremsstrahlung X-rays, acts to increase the amount and rate of degradation for certain materials. Monochromatised X-ray sources, because they are farther away (50–100 cm) from the sample, do not produce noticeable heat effects. In those, a quartz monochromator system diffracts the Bremsstrahlung X-rays out of the X-ray beam, which means the sample is only exposed to one narrow band of X-ray energy. For example, if aluminum K-alpha X-rays are used, the intrinsic energy band has a FWHM of 0.43 eV, centered on 1,486.7 eV ("E"/Δ"E" = 3,457). If magnesium K-alpha X-rays are used, the intrinsic energy band has a FWHM of 0.36 eV, centered on 1,253.7 eV ("E"/Δ"E" = 3,483). These are the intrinsic X-ray line widths; the range of energies to which the sample is exposed depends on the quality and optimization of the X-ray monochromator. Because the vacuum removes various gases (e.g., O2, CO) and liquids (e.g., water, alcohol, solvents, etc.) that were initially trapped within or on the surface of the sample, the chemistry and morphology of the surface will continue to change until the surface achieves a steady state. This type of degradation is sometimes difficult to detect.
Measured area.
Measured area depends on instrument design. The minimum analysis area ranges from 10 to 200 micrometres. The largest size for a monochromatic beam of X-rays is 1–5 mm. Non-monochromatic beams are 10–50 mm in diameter. Spectroscopic image resolution levels of 200 nm or below have been achieved on the latest imaging XPS instruments using synchrotron radiation as the X-ray source.
Sample size limits.
Instruments accept small (mm range) and large samples (cm range), e.g. wafers. The limiting factor is the design of the sample holder, the sample transfer, and the size of the vacuum chamber. Large samples are laterally moved in x and y direction to analyze a larger area.
Analysis time.
Analysis times typically range from 1–20 minutes for a broad survey scan that measures the amount of all detectable elements, 1–15 minutes for a high resolution scan that reveals chemical state differences (a high signal-to-noise ratio for a count-area result often requires multiple sweeps of the region of interest), and 1–4 hours for a depth profile that measures 4–5 elements as a function of etched depth (this process time can vary the most, as many factors play a role). The time to complete a measurement is generally dependent on the brilliance of the X-ray source.
Surface sensitivity.
XPS detects only electrons that have actually escaped from the sample into the vacuum of the instrument. In order to escape from the sample, a photoelectron must travel through the sample. Photo-emitted electrons can undergo inelastic collisions, recombination, excitation of the sample, recapture or trapping in various excited states within the material, all of which can reduce the number of escaping photoelectrons. These effects appear as an exponential attenuation function as the depth increases, making the signals detected from analytes at the surface much stronger than the signals detected from analytes deeper below the sample surface. Thus, the signal measured by XPS is an exponentially surface-weighted signal, and this fact can be used to estimate analyte depths in layered materials.
Chemical states and chemical shift.
The ability to produce chemical state information, i.e. the local bonding environment of an atomic species in question from the topmost few nanometers of the sample makes XPS a unique and valuable tool for understanding the chemistry of the surface. The local bonding environment is affected by the formal oxidation state, the identity of its nearest-neighbor atoms, and its bonding hybridization to the nearest-neighbor or next-nearest-neighbor atoms. For example, while the nominal binding energy of the C1"s" electron is 284.6 eV, subtle but reproducible shifts in the actual binding energy, the so-called "chemical shift" (analogous to NMR spectroscopy), provide the chemical state information.
Chemical-state analysis is widely used for carbon. It reveals the presence or absence of the chemical states of carbon, in approximate order of increasing binding energy, as: carbide (-C2−), silane (-Si-CH3), methylene/methyl/hydrocarbon (-CH2-CH2-, CH3-CH2-, and -CH=CH-), amine (-CH2-NH2), alcohol (-C-OH), ketone (-C=O), organic ester (-COOR), carbonate (-CO32−), monofluoro-hydrocarbon (-CFH-CH2-), difluoro-hydrocarbon (-CF2-CH2-), and trifluorocarbon (-CH2-CF3), to name but a few.
Chemical state analysis of the surface of a silicon wafer reveals chemical shifts due to different formal oxidation states, such as: n-doped silicon and p-doped silicon (metallic silicon), silicon suboxide (Si2O), silicon monoxide (SiO), Si2O3, and silicon dioxide (SiO2). An example of this is seen in the figure "High-resolution spectrum of an oxidized silicon wafer in the energy range of the Si 2"p" signal".
Instrumentation.
The main components of an XPS system are the source of X-rays, an ultra-high vacuum (UHV) chamber with mu-metal magnetic shielding, an electron collection lens, an electron energy analyzer, an electron detector system, a sample introduction chamber, sample mounts, a sample stage with the ability to heat or cool the sample, and a set of stage manipulators.
The most prevalent electron spectrometer for XPS is the hemispherical electron analyzer. These analyzers have high energy resolution and spatial selection of the emitted electrons. Sometimes, however, much simpler electron energy filters, the cylindrical mirror analyzers, are used, most often for checking the elemental composition of the surface. They represent a trade-off between the need for high count rates and high angular/energy resolution. This type consists of two co-axial cylinders placed in front of the sample, the inner one being held at a positive potential, while the outer cylinder is held at a negative potential. Only the electrons with the right energy can pass through this setup and are detected at the end. The count rates are high but the resolution (both in energy and angle) is poor.
Electrons are detected using electron multipliers: a single channeltron for single energy detection, or arrays of channeltrons and microchannel plates for parallel acquisition. These devices consist of a glass channel with a resistive coating on the inside. A high voltage is applied between the front and the end. An incoming electron is accelerated to the wall, where it removes more electrons, in such a way that an electron avalanche is created, until a measurable current pulse is obtained.
Laboratory based XPS.
In laboratory systems, either 10–30 mm beam diameter non-monochromatic Al Kα or Mg Kα anode radiation is used, or a focused 20-500 micrometer diameter beam single wavelength Al Kα monochromatised radiation. Monochromatic Al Kα X-rays are normally produced by diffracting and focusing a beam of non-monochromatic X-rays off of a thin disc of natural, crystalline quartz with a <1010> orientation. The resulting wavelength is 8.3386 angstroms (0.83386 nm) corresponding to a 1486.7 eV photon energy. Aluminum Kα X-rays have an intrinsic full width at half maximum (FWHM) of 0.43 eV, centered at 1486.7 eV ("E"/Δ"E" = 3457). For a well–optimized monochromator, the energy width of the monochromated aluminum Kα X-rays is 0.16 eV, but energy broadening in common electron energy analyzers (spectrometers) produces an ultimate energy resolution on the order of FWHM=0.25 eV which is the ultimate energy resolution of most commercial systems. Under practical conditions, high energy-resolution settings produce peak widths (FWHM) between 0.4 and 0.6 eV for various elements and some compounds. For example, in a spectrum obtained for one minute at 20 eV pass energy using monochromated aluminum Kα X-rays, the Ag 3"d"5/2 peak for a clean silver film or foil will typically have a FWHM of 0.45 eV. Non-monochromatic magnesium X-rays have a wavelength of 9.89 angstroms (0.989 nm) which corresponds to a photon energy of 1253 eV. The energy width of the non-monochromated X-ray is roughly 0.70 eV, which is the ultimate energy resolution of a system using non-monochromatic X-rays. Non-monochromatic X-ray sources do not use any crystal to diffract the X-rays allowing all primary X-rays lines and the full range of high-energy Bremsstrahlung X-rays (1–12 keV) to reach the surface. The ultimate energy resolution (FWHM) when using a non-monochromatic Mg Kα source is 0.9–1.0 eV, which includes some contribution from spectrometer-induced broadening.
Synchrotron based XPS.
A breakthrough has been brought about in the last decades by the development of large scale synchrotron radiation facilities. Here, bunches of relativistic electrons kept in orbit inside a storage ring are accelerated through bending magnets or insertion devices like wigglers and undulators to produce a high brilliance and high flux photon beam. The beam is orders of magnitude more intense and better collimated than that typically produced by anode-based sources. Synchrotron radiation is also tunable over a wide wavelength range, and can be made polarized in several distinct ways. This way, the photon energy can be selected to yield optimum photoionization cross-sections for probing a particular core level. The high photon flux, in addition, makes it possible to perform XPS experiments also on low-density atomic species, such as molecular and atomic adsorbates.
One synchrotron facility that allows XPS measurements is the MAX IV synchrotron in Lund, Sweden. The HIPPIE beamline of this facility also makes it possible to perform in operando ambient-pressure X-ray photoelectron spectroscopy (AP-XPS). This latter technique allows samples to be measured under ambient conditions, rather than in vacuum.
Data processing.
Peak identification.
The number of peaks produced by a single element varies from 1 to more than 20. Tables of binding energies that identify the shell and spin-orbit of each peak produced by a given element are included with modern XPS instruments, and can be found in various handbooks and websites. Because these experimentally determined energies are characteristic of specific elements, they can be directly used to identify experimentally measured peaks of a material with unknown elemental composition.
Before beginning the process of peak identification, the analyst must determine if the binding energies of the unprocessed survey spectrum (0-1400 eV) have or have not been shifted due to a positive or negative surface charge. This is most often done by looking for two peaks that are due to the presence of carbon and oxygen.
Charge referencing insulators.
Charge referencing is needed when a sample suffers a charge induced shift of experimental binding energies to obtain meaningful binding energies from both wide-scan, high sensitivity (low energy resolution) survey spectra (0-1100 eV), and also narrow-scan, chemical state (high energy resolution) spectra. Charge induced shifting is normally due to a modest excess of low voltage (-1 to -20 eV) electrons attached to the surface, or a modest shortage of electrons (+1 to +15 eV) within the top 1-12 nm of the sample caused by the loss of photo-emitted electrons. If, by chance, the charging of the surface is excessively positive, then the spectrum might appear as a series of rolling hills, not sharp peaks as shown in the example spectrum.
Charge referencing is performed by adding a "Charge Correction Factor" to each of the experimentally measured peaks. Since various hydrocarbon species appear on all air-exposed surfaces, the binding energy of the hydrocarbon C (1s) XPS peak is used to charge correct all energies obtained from non-conductive samples or conductors that have been deliberately insulated from the sample mount. The peak is normally found between 284.5 eV and 285.5 eV. The 284.8 eV binding energy is routinely used as the reference binding energy for charge referencing insulators, so that the charge correction factor is the difference between 284.8 eV and the experimentally measured C (1s) peak position.
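The correction itself is a uniform shift of the energy axis, as the following sketch shows; the peak positions are hypothetical values for a positively charged insulating sample:

```python
import numpy as np

def charge_correct(peaks_ev, measured_c1s_ev, reference_c1s_ev=284.8):
    """Shift all peaks so the adventitious C (1s) peak lands at 284.8 eV."""
    shift = reference_c1s_ev - measured_c1s_ev  # the charge correction factor
    return np.asarray(peaks_ev) + shift

# Hypothetical survey peaks, with the C (1s) peak measured at 286.9 eV:
print(charge_correct([286.9, 534.2, 104.6], measured_c1s_ev=286.9))
# [284.8 532.1 102.5]: every peak shifted by the same -2.1 eV factor
```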
Conductive materials and most native oxides of conductors should never need charge referencing. Conductive materials should never be charge referenced unless the topmost layer of the sample has a thick non-conductive film. If needed, the charging effect can also be compensated by providing suitable low energy charges to the surface through the use of a low-voltage (1-20 eV) electron beam from an electron flood gun, UV lights, a low-voltage argon ion beam combined with a low-voltage electron beam (1-10 eV), aperture masks, a mesh screen with low-voltage electron beams, etc.
Peak-fitting.
The process of peak-fitting high energy resolution XPS spectra is a mixture of scientific knowledge and experience. The process is affected by instrument design, instrument components, experimental settings and sample variables. Before starting any peak-fit effort, the analyst performing the peak-fit needs to know if the topmost 15 nm of the sample is expected to be a homogeneous material or is expected to be a mixture of materials. If the top 15 nm is a homogeneous material with only very minor amounts of adventitious carbon and adsorbed gases, then the analyst can use theoretical peak area ratios to enhance the peak-fitting process. Peak fitting results are affected by overall peak widths (at half maximum, FWHM), possible chemical shifts, peak shapes, instrument design factors and experimental settings, as well as sample properties.
Theoretical aspects.
Quantum mechanical treatment.
When a photoemission event takes place, the following energy conservation rule holds:
formula_2
where formula_3 is the photon energy, formula_4 is the electron BE (binding energy with respect to the vacuum level) prior to ionization, and formula_5 is the kinetic energy of the photoelectron. If reference is taken with respect to the Fermi level (as it is typically done in photoelectron spectroscopy) formula_4 must be replaced by the sum of the binding energy (BE) relative to the Fermi level, formula_6, and the sample work function, formula_7 .
From the theoretical point of view, the photoemission process from a solid can be described with a semiclassical approach, where the electromagnetic field is still treated classically, while a quantum-mechanical description is used for matter.
The one—particle Hamiltonian for an electron subjected to an electromagnetic field is given by:
formula_8,
where formula_9 is the electron wave function, formula_10 is the vector potential of the electromagnetic field and formula_11 is the unperturbed potential of the solid.
In the Coulomb gauge (formula_12), the vector potential commutes with the momentum operator
(formula_13), so that the expression in brackets in the Hamiltonian simplifies to:
formula_14
Actually, by neglecting the formula_15 term in the Hamiltonian, we are disregarding possible photocurrent contributions. Such effects are generally negligible in the bulk, but may become important at the surface.
The quadratic term in formula_10 can instead be safely neglected, since its contribution in a typical photoemission experiment is about one order of magnitude smaller than that of the first term.
In first-order perturbation approach, the one-electron Hamiltonian can be split into two terms, an unperturbed Hamiltonian formula_16, plus an interaction Hamiltonian formula_17, which describes the effects of the electromagnetic field:
formula_18
In time-dependent perturbation theory, for a harmonic or constant perturbation, the transition rate between the initial state formula_19 and the final state formula_20 is expressed by Fermi's Golden Rule:
formula_21,
where formula_22 and formula_23 are the eigenvalues of the unperturbed Hamiltonian in the initial and final state, respectively, and formula_3 is the photon energy. Fermi's Golden Rule uses the approximation that the perturbation acts on the system for an infinite time. This approximation is valid when the time that the perturbation acts on the system is much larger than the time needed for the transition. It should be understood that this equation needs to be integrated with the density of states formula_24 which gives:
formula_25
In a real photoemission experiment the ground state core electron BE cannot be directly probed, because the measured BE incorporates both initial state and final state effects, and the spectral linewidth is broadened owing to the finite core-hole lifetime (formula_26).
Assuming an exponential decay probability for the core hole in the time domain (formula_27), the spectral function will have a Lorentzian shape, with a FWHM (Full Width at Half Maximum) formula_28 given by:
formula_29
From the theory of Fourier transforms, formula_28 and formula_26 are linked by the indeterminacy relation:
formula_30
The photoemission event leaves the atom in a highly excited core ionized state, from which it can decay radiatively (fluorescence) or non-radiatively (typically by "Auger" decay).
Besides Lorentzian broadening, photoemission spectra are also affected by a Gaussian broadening, whose contribution can be expressed by
formula_31
Three main factors enter the Gaussian broadening of the spectra: the experimental energy resolution, vibrational and inhomogeneous broadening.
The first effect is caused by the imperfect monochromaticity of the photon beam (which results in a finite bandwidth) and by the limited resolving power of the analyzer. The vibrational component is produced by the excitation of low energy vibrational modes both in the initial and in the final state. Finally, inhomogeneous broadening can originate from the presence of unresolved core level components in the spectrum.
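In practice the Lorentzian lifetime width and the combined Gaussian broadenings are often modeled together as a Voigt profile, the convolution of the two line shapes. A minimal sketch with illustrative (not tabulated) width parameters:

```python
import numpy as np
from scipy.special import voigt_profile  # Voigt = Gaussian convolved with Lorentzian

E = np.linspace(-20.0, 20.0, 4001)  # energy relative to E_b, in eV
Gamma = 0.30                        # Lorentzian FWHM from the core-hole lifetime
sigma = 0.20                        # Gaussian std. dev. (resolution, vibrations)

# voigt_profile expects the Gaussian sigma and the Lorentzian *half* width:
line = voigt_profile(E, sigma, Gamma / 2.0)
area = (line * (E[1] - E[0])).sum()
print(round(area, 2))               # ~1.0: the profile is area-normalized
```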
Theory of core level photoemission of electrons.
Inelastic mean free path.
In a solid, inelastic scattering events also contribute to the photoemission process, generating electron-hole pairs which show up as an inelastic tail on the high BE side of the main photoemission peak. In fact this allows the calculation of electron inelastic mean free path (IMFP). This can be modeled based on the Beer–Lambert law, which states
formula_32
where formula_33 is the IMFP and formula_34 is the axis perpendicular to the sample. In fact it is generally the case that the IMFP is only weakly material dependent, but rather strongly dependent on the photoelectron kinetic energy. Quantitatively we can relate formula_35 to IMFP by
formula_36
where formula_37 is the mean atomic diameter as calculated by the density so formula_38. The above formula was developed by Seah and Dench.
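The Seah and Dench expression is straightforward to evaluate; in the sketch below the mean atomic diameter and the kinetic energy are illustrative round numbers:

```python
def imfp_nm(e_kin_ev, a_nm):
    """Seah-Dench IMFP: lambda = 538*a/E^2 + 0.41*a^(3/2)*E^(1/2), in nm."""
    return 538.0 * a_nm / e_kin_ev**2 + 0.41 * a_nm**1.5 * e_kin_ev**0.5

# a ~ 0.25 nm and a ~1000 eV photoelectron (typical with an Al K-alpha source):
print(round(imfp_nm(1000.0, 0.25), 2))  # ~1.62 nm
```

Combined with the Beer–Lambert attenuation above, this is why roughly 95% of the detected signal originates within about three IMFPs (here about 5 nm) of the surface.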
Plasmonic effects.
In some cases, energy loss features due to plasmon excitations are also observed. This can either be a final state effect caused by core hole decay, which generates quantized electron wave excitations in the solid (intrinsic plasmons), or it can be due to excitations induced by photoelectrons travelling from the emitter to the surface (extrinsic plasmons).
Due to the reduced coordination number of first-layer atoms, the plasma frequencies of bulk and surface atoms are related by the following equation:
formula_39,
so that surface and bulk plasmons can be easily distinguished from each other.
Plasmon states in a solid are typically localized at the surface, and can strongly affect IMFP.
Vibrational effects.
Temperature-dependent atomic lattice vibrations, or phonons, can broaden the core level components and attenuate the interference patterns in an X-ray photoelectron diffraction (XPD) experiment. The simplest way to account for vibrational effects is by multiplying the scattered single-photoelectron wave function formula_40 by the Debye–Waller factor:
formula_41,
where formula_42 is the squared magnitude of the wave vector variation caused by scattering,
and formula_43 is the temperature-dependent one-dimensional vibrational mean squared displacement of the formula_44 emitter. In the Debye model, the mean squared displacement is calculated in terms of the Debye temperature, formula_45, as:
formula_46
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E_\\text{binding} = E_\\text{photon} - \\left(E_\\text{kinetic} + \\phi\\right)"
},
{
"math_id": 1,
"text": " \\phi "
},
{
"math_id": 2,
"text": " h\\nu =|E_{b}^{v}|+E_{kin} "
},
{
"math_id": 3,
"text": "h\\nu"
},
{
"math_id": 4,
"text": "|E_{b}^{v}|"
},
{
"math_id": 5,
"text": "E_{kin}"
},
{
"math_id": 6,
"text": "|E_{b}^{F}|"
},
{
"math_id": 7,
"text": "\\Phi_{0}"
},
{
"math_id": 8,
"text": " i\\hbar \\frac{\\partial \\psi}{\\partial t}=\\left[\\frac{1}{2m}\\left(\\mathbf{\\hat{p}}-\\frac{e}{c}\\mathbf{\\hat{A}}\\right)^2+ \\hat{V} \\right]\\psi=\\hat{H}\\psi "
},
{
"math_id": 9,
"text": "\\psi"
},
{
"math_id": 10,
"text": "\\mathbf{A}"
},
{
"math_id": 11,
"text": "V"
},
{
"math_id": 12,
"text": "\\nabla \\cdot \\mathbf{A}=0"
},
{
"math_id": 13,
"text": "[\\mathbf{\\hat{p}}, \\mathbf{\\hat{A}}]=0 "
},
{
"math_id": 14,
"text": " \\left(\\mathbf{\\hat{p}}-\\frac{e}{c}\\mathbf{\\hat{A}}\\right)^2=\\hat{p}^2 -2\\frac{e}{c}\\mathbf{\\hat{A}}\\cdot\\mathbf{\\hat{p}}+\\left(\\frac{e}{c}\\right)^2\\hat{A}^2 "
},
{
"math_id": 15,
"text": "\\nabla\\cdot\\mathbf{A}"
},
{
"math_id": 16,
"text": "\\hat{H}_{0}"
},
{
"math_id": 17,
"text": "\\hat{H}'"
},
{
"math_id": 18,
"text": " \\hat{H}'=-\\frac{e}{mc}\\mathbf{\\hat{A}}\\cdot \\mathbf{\\hat{p}} "
},
{
"math_id": 19,
"text": "\\psi_{i}"
},
{
"math_id": 20,
"text": "\\psi_{f}"
},
{
"math_id": 21,
"text": " \\frac{d\\omega}{dt}\\propto \\frac{2\\pi}{\\hbar}|\\langle \\psi_{f}|\\hat{H}'|\\psi_{i} \\rangle |^2 \\delta (E_{f}-E_{i}-h\\nu) "
},
{
"math_id": 22,
"text": "E_{i}"
},
{
"math_id": 23,
"text": "E_{f}"
},
{
"math_id": 24,
"text": "\\rho(E)"
},
{
"math_id": 25,
"text": " \\frac{d\\omega}{dt}\\propto \\frac{2\\pi}{\\hbar}|\\langle \\psi_{f}|\\hat{H}'|\\psi_{i} \\rangle |^2 \\rho(E_{f})=|M_{fi}|^2 \\rho(E_{f}) "
},
{
"math_id": 26,
"text": "\\tau"
},
{
"math_id": 27,
"text": " \\propto \\exp{-t/\\tau} "
},
{
"math_id": 28,
"text": "\\Gamma"
},
{
"math_id": 29,
"text": " I_{L}(E)=\\frac{I_{0}}{\\pi}\\frac{\\Gamma /2}{(E-E_{b})^2+(\\Gamma /2)^2} "
},
{
"math_id": 30,
"text": " \\Gamma \\tau \\geq \\hbar "
},
{
"math_id": 31,
"text": " I_{G}(E)=\\frac{I_{0}}{\\sigma \\sqrt{2}}\\exp{\\left( -\\frac{(E-E_{b})^2}{2\\sigma^2}\\right)}"
},
{
"math_id": 32,
"text": "I(z) = I_0e^{-z/\\lambda}"
},
{
"math_id": 33,
"text": "\\lambda"
},
{
"math_id": 34,
"text": "z"
},
{
"math_id": 35,
"text": "E_\\text{kin}"
},
{
"math_id": 36,
"text": "\n\\lambda(\\text{nm}) = [538a]\\left( E_\\text{kin}\\right)^{-2} + [0.41a^{3/2}]\\left(E_\\text{kin}\\right)^{1/2}\n"
},
{
"math_id": 37,
"text": "a"
},
{
"math_id": 38,
"text": "a=\\rho^{-1/3}"
},
{
"math_id": 39,
"text": " \\omega_\\text{surface} = \\frac{\\omega_\\text{bulk}}{\\sqrt{2}}"
},
{
"math_id": 40,
"text": "\\phi_{j}"
},
{
"math_id": 41,
"text": "W_{j}= \\exp{(-\\Delta k_{j}^2 \\bar{U_{j}^2})}"
},
{
"math_id": 42,
"text": "\\Delta k_{j}^2"
},
{
"math_id": 43,
"text": "\\bar{U_{j}^2}"
},
{
"math_id": 44,
"text": "j^{th}"
},
{
"math_id": 45,
"text": "\\Theta_{D}"
},
{
"math_id": 46,
"text": " \\bar{U_{j}^2}(T) = 9 \\hbar ^2 T^2 / m k_{B} \\Theta_{D} "
}
]
| https://en.wikipedia.org/wiki?curid=70847 |
708544 | Affirming a disjunct | Formal fallacy
The formal fallacy of affirming a disjunct, also known as the fallacy of the alternative disjunct or a false exclusionary disjunct, occurs when a deductive argument takes the following logical form:
A or B
A
Therefore, not B
Or in logical operators:
formula_0
formula_1
formula_2 ¬ formula_3
Where formula_2 denotes a logical assertion.
Explanation.
The fallacy lies in concluding that one disjunct must be false because the other disjunct is true; in fact they may both be true because "or" is defined inclusively rather than exclusively. It is a fallacy of equivocation between the operations OR and XOR.
Affirming the disjunct should not be confused with the valid argument known as the disjunctive syllogism.
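Because these argument forms are purely propositional, their validity can be checked mechanically by enumerating truth assignments. The short sketch below (Python) does exactly that, and also confirms the validity of the disjunctive syllogism:

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid iff no truth assignment makes every
    premise true while the conclusion is false."""
    return all(conclusion(a, b)
               for a, b in product([False, True], repeat=2)
               if all(p(a, b) for p in premises))

# Affirming a disjunct: A or B; A; therefore not B.
print(valid([lambda a, b: a or b, lambda a, b: a],
            lambda a, b: not b))   # False: A=True, B=True is a countermodel

# Disjunctive syllogism: A or B; not A; therefore B.
print(valid([lambda a, b: a or b, lambda a, b: not a],
            lambda a, b: b))       # True: this form is valid
```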
Examples.
The following argument indicates the unsoundness of affirming a disjunct:
Max is a mammal or Max is a cat.
Max is a mammal.
Therefore, Max is not a cat.
This inference is unsound because all cats, by definition, are mammals.
A second example provides a first proposition that appears realistic and shows how an obviously flawed conclusion still arises under this fallacy.
To be on the cover of Vogue Magazine, one must be a celebrity or very beautiful.
This month's cover was a celebrity.
Therefore, this celebrity is not very beautiful.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " p \\vee q"
},
{
"math_id": 1,
"text": " p "
},
{
"math_id": 2,
"text": "{} \\vdash {}"
},
{
"math_id": 3,
"text": "q"
}
]
| https://en.wikipedia.org/wiki?curid=708544 |
70856028 | Learning augmented algorithm | A learning augmented algorithm is an algorithm that can make use of a prediction to improve its performance.
Whereas regular algorithms take just the problem instance as input, learning augmented algorithms accept an extra parameter.
This extra parameter often is a prediction of some property of the solution.
This prediction is then used by the algorithm to improve its running time or the quality of its output.
Description.
A learning augmented algorithm typically takes an input formula_0. Here formula_1 is a problem instance and formula_2 is the advice: a prediction about a certain property of the optimal solution. The type of the problem instance and the prediction depend on the algorithm. Learning augmented algorithms usually satisfy the following two properties:
Consistency. A learning augmented algorithm is said to be consistent if its performance improves beyond the worst-case guarantee when the prediction is accurate.
Robustness. A learning augmented algorithm is said to be robust if its performance stays close to that of the best algorithm without predictions, even when the prediction is arbitrarily inaccurate.
Learning augmented algorithms generally do not prescribe how the prediction should be done. For this purpose machine learning can be used.
Examples.
Binary search.
The binary search algorithm is an algorithm for finding elements of a sorted list formula_3. It needs formula_4 steps to find an element with some known value formula_5 in a list of length formula_6.
With a prediction formula_7 for the position of formula_5, the following learning augmented algorithm can be used.
If formula_8, the element has been found.
If formula_9, probe the positions formula_10 until an index formula_11 with formula_12 is reached, and then perform a standard binary search on formula_13.
If formula_14, proceed symmetrically, probing the positions formula_15 instead.
The error is defined to be formula_16, where formula_17 is the real index of formula_18.
In the learning augmented algorithm, probing the positions formula_10 takes formula_19 steps.
Then a binary search is performed on a list of size at most formula_20, which takes formula_19 steps. This makes the total running time of the algorithm formula_21.
So, when the error is small, the algorithm is faster than a normal binary search. This shows that the algorithm is consistent.
Even in the worst case, the error will be at most formula_6. Then the algorithm takes at most formula_4 steps, so the algorithm is robust.
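A possible implementation of this scheme is sketched below (the function and variable names are illustrative choices, not from the literature). It probes exponentially outward from the predicted index and then runs an ordinary binary search on the bracketed range:

```python
import bisect

def augmented_search(xs, y, pred):
    """Find y in the sorted list xs starting from predicted index pred.
    Returns an index of y, or -1 if absent; runs in O(log error) time."""
    n = len(xs)
    i = max(0, min(pred, n - 1))        # clamp the prediction (robustness)
    if xs[i] == y:
        return i                        # perfect prediction: constant time
    if xs[i] < y:
        lo, off = i + 1, 1
        while i + off < n and xs[i + off] < y:    # probe i+1, i+2, i+4, ...
            lo, off = i + off + 1, 2 * off
        hi = min(i + off, n - 1)
    else:
        hi, off = i - 1, 1
        while i - off >= 0 and xs[i - off] > y:   # probe i-1, i-2, i-4, ...
            hi, off = i - off - 1, 2 * off
        lo = max(i - off, 0)
    j = bisect.bisect_left(xs, y, lo, hi + 1)     # binary search the bracket
    return j if j < n and xs[j] == y else -1

xs = list(range(0, 400, 2))                       # 0, 2, ..., 398
print(augmented_search(xs, 74, pred=30))          # 37, despite an error of 7
```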
More examples.
Learning augmented algorithms are known for many other problems as well, including online problems such as the ski rental problem and the caching (paging) problem.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(\\mathcal{I}, \\mathcal{A})"
},
{
"math_id": 1,
"text": "\\mathcal{I}"
},
{
"math_id": 2,
"text": "\\mathcal{A}"
},
{
"math_id": 3,
"text": "x_1,\\ldots,x_n"
},
{
"math_id": 4,
"text": "O(\\log(n))"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "i"
},
{
"math_id": 8,
"text": "x_i=y"
},
{
"math_id": 9,
"text": "x_i<y"
},
{
"math_id": 10,
"text": "i+1,i+2,i+4,\\ldots"
},
{
"math_id": 11,
"text": "j"
},
{
"math_id": 12,
"text": "x_j\\geq y"
},
{
"math_id": 13,
"text": "x_i,\\ldots, x_j"
},
{
"math_id": 14,
"text": "x_i>y"
},
{
"math_id": 15,
"text": "i-1,i-2,i-4,\\ldots"
},
{
"math_id": 16,
"text": "\\eta=|i-i^*|"
},
{
"math_id": 17,
"text": "i^*"
},
{
"math_id": 18,
"text": "y"
},
{
"math_id": 19,
"text": "\\log_2(\\eta)"
},
{
"math_id": 20,
"text": "2\\eta"
},
{
"math_id": 21,
"text": "2\\log_2(\\eta)"
}
]
| https://en.wikipedia.org/wiki?curid=70856028 |
70856850 | Aubry–André model | Toy model for electronic localization
The Aubry–André model is a toy model of a one-dimensional crystal with periodically varying onsite energies. The model is employed to study both quasicrystals and the Anderson localization metal-insulator transition in disordered systems. It was first developed by Serge Aubry and Gilles André in 1980.
Hamiltonian of the model.
The Aubry–André model describes a one-dimensional lattice with hopping between nearest-neighbor sites and periodically varying onsite energies. It is a tight-binding (single-band) model with no interactions. The full Hamiltonian can be written as
formula_0,
where the sum goes over all lattice sites formula_1, formula_2 is a Wannier state on site formula_1, formula_3 is the hopping energy, and the on-site energies formula_4 are given by
formula_5.
Here formula_6 is the amplitude of the variation of the onsite energies, formula_7 is a relative phase, and formula_8 is the spatial frequency of the onsite potential modulation, in units of the inverse lattice constant. This Hamiltonian is self-dual as it retains the same form after a Fourier transformation interchanging the roles of position and momentum.
Metal-insulator phase transition.
For irrational values of formula_8, corresponding to a modulation of the onsite energy incommensurate with the underlying lattice, the model exhibits a quantum phase transition between a metallic phase and an insulating phase as formula_6 is varied. For example, for formula_9 (the golden ratio) and almost any formula_7, if formula_10 the eigenmodes are exponentially localized, while if formula_11 the eigenmodes are extended plane waves. The Aubry-André metal-insulator transition happens at the critical value of formula_6 which separates these two behaviors, formula_12.
While this quantum phase transition between a metallic delocalized state and an insulating localized state resembles the disorder-driven Anderson localization transition, there are some key differences between the two phenomena. In particular the Aubry–André model has no actual disorder, only incommensurate modulation of onsite energies. This is why the Aubry-André transition happens at a finite value of the pseudo-disorder strength formula_6, whereas in one dimension the Anderson transition happens at zero disorder strength.
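The transition is easy to reproduce numerically. The sketch below (Python/NumPy) builds the Hamiltonian above for a finite chain at the golden ratio and uses the mean inverse participation ratio (IPR) of the eigenstates as a localization diagnostic; the system size, phase and units are illustrative choices:

```python
import numpy as np

def mean_ipr(n=610, lam=1.0, J=1.0, phi=0.0):
    """Diagonalize a finite Aubry-Andre chain and return the mean inverse
    participation ratio: ~1/n for extended states, order one for localized."""
    beta = (1 + np.sqrt(5)) / 2                       # golden ratio
    onsite = lam * np.cos(2 * np.pi * beta * np.arange(n) + phi)
    H = np.diag(onsite) - J * (np.eye(n, k=1) + np.eye(n, k=-1))
    _, vecs = np.linalg.eigh(H)                       # columns are eigenstates
    return float(np.mean(np.sum(vecs**4, axis=0)))

print(mean_ipr(lam=1.0))   # lam < 2J: small (~1/n), extended states
print(mean_ipr(lam=3.0))   # lam > 2J: order one, localized states
```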
Energy spectrum.
The energy spectrum formula_13 is a function of formula_8 and is given by the almost Mathieu equation
formula_14.
At formula_12 this is equivalent to the famous fractal energy spectrum known as the Hofstadter's butterfly, which describes the motion of an electron in a two-dimensional lattice under a magnetic field. In the Aubry–André model the magnetic field strength maps onto the parameter formula_8.
Realization.
In 2008, G. Roati et al. experimentally realized the Aubry–André localization phase transition using a gas of ultracold atoms in an incommensurate optical lattice.
In 2009, Y. Lahini et al. realized the Aubry–André model in photonic lattices. | [
{
"math_id": 0,
"text": "H=\\sum_{n}\\Bigl(-J |n\\rangle\\langle n+1| -J|n+1\\rangle\\langle n| + \\epsilon_n |n\\rangle\\langle n|\\Bigr)"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "|n\\rangle"
},
{
"math_id": 3,
"text": "J"
},
{
"math_id": 4,
"text": "\\epsilon_n"
},
{
"math_id": 5,
"text": "\\epsilon_n=\\lambda\\cos(2\\pi \\beta n +\\varphi)"
},
{
"math_id": 6,
"text": "\\lambda"
},
{
"math_id": 7,
"text": "\\varphi"
},
{
"math_id": 8,
"text": "\\beta"
},
{
"math_id": 9,
"text": "\\beta=(1+\\sqrt{5})/2"
},
{
"math_id": 10,
"text": "\\lambda>2J"
},
{
"math_id": 11,
"text": "\\lambda<2J"
},
{
"math_id": 12,
"text": "\\lambda=2J"
},
{
"math_id": 13,
"text": "E_n"
},
{
"math_id": 14,
"text": "E_n\\psi_n=-J(\\psi_{n+1}+\\psi_{n-1})+\\epsilon_n \\psi_n"
}
]
| https://en.wikipedia.org/wiki?curid=70856850 |
70859607 | Local invariant cycle theorem | Invariant cycle theorem
In mathematics, the local invariant cycle theorem was originally a conjecture of Griffiths which states that, given a surjective proper map formula_0 from a Kähler manifold formula_1 to the unit disk that has maximal rank everywhere except over 0, each cohomology class on formula_2 is the restriction of some cohomology class on the entire formula_1 if the cohomology class is invariant under a circle action (monodromy action); in short,
formula_3
is surjective. The conjecture was first proved by Clemens. The theorem is also a consequence of the BBD decomposition.
Deligne also proved the following. Given a proper morphism formula_4 over the spectrum formula_5 of the henselization of formula_6, formula_7 an algebraically closed field, if formula_1 is essentially smooth over formula_7 and formula_8 smooth over formula_9, then the homomorphism on formula_10-cohomology:
formula_11
is surjective, where formula_12 are the special and generic points and the homomorphism is the composition formula_13
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "p^{-1}(t), t \\ne 0"
},
{
"math_id": 3,
"text": "\\operatorname{H}^*(X) \\to \\operatorname{H}^*(p^{-1}(t))^{S^1}"
},
{
"math_id": 4,
"text": "X \\to S"
},
{
"math_id": 5,
"text": "S"
},
{
"math_id": 6,
"text": "k[T]"
},
{
"math_id": 7,
"text": "k"
},
{
"math_id": 8,
"text": "X_{\\overline{\\eta}}"
},
{
"math_id": 9,
"text": "\\overline{\\eta}"
},
{
"math_id": 10,
"text": "\\mathbb{Q}"
},
{
"math_id": 11,
"text": "\\operatorname{H}^*(X_s) \\to \\operatorname{H}^*(X_{\\overline{\\eta}})^{\\operatorname{Gal}(\\overline{\\eta}/\\eta)}"
},
{
"math_id": 12,
"text": "s, \\eta"
},
{
"math_id": 13,
"text": "\\operatorname{H}^*(X_s) \\simeq \\operatorname{H}^*(X) \\to \\operatorname{H}^*(X_{\\eta}) \\to \\operatorname{H}^*(X_{\\overline{\\eta}})."
}
]
| https://en.wikipedia.org/wiki?curid=70859607 |
70859961 | Semistable reduction theorem | Mathematical theory in the field of algebraic geometry
In algebraic geometry, semistable reduction theorems state that, given a proper flat morphism formula_0, there exists a morphism formula_1 (called base change) such that formula_2 is semistable (i.e., the singularities are mild in some sense). Precise formulations depend on the specific versions of the theorem.
For example, if formula_3 is the unit disk in formula_4, then "semistable" means that the special fiber is a divisor with normal crossings.
The fundamental semistable reduction theorem for Abelian varieties by Grothendieck shows that if formula_5 is an Abelian variety over the fraction field formula_6 of a discrete valuation ring formula_7, then there is a finite field extension formula_8 such that formula_9 has semistable reduction over the integral closure formula_10 of formula_7 in formula_11. Semistability here means more precisely that if formula_12 is the Néron model of formula_13 over formula_14 then the fibres formula_15 of formula_12 over the closed points formula_16 (which are always smooth algebraic groups) are extensions of Abelian varieties by tori.
Here formula_3 is the algebro-geometric analogue of a "small" disc around formula_17, and the condition of the theorem states essentially that formula_5 can be thought of as a smooth family of Abelian varieties away from formula_18; the conclusion then shows that after base change this "family" extends over formula_18, so that also the fibres over formula_18 are close to being Abelian varieties.
The important semistable reduction theorem for algebraic curves was first proved by Deligne and Mumford. The proof proceeds by showing that the curve has semistable reduction if and only if its Jacobian variety (which is an Abelian variety) has semistable reduction; one then applies the theorem for Abelian varieties above.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X \\to S"
},
{
"math_id": 1,
"text": "S' \\to S"
},
{
"math_id": 2,
"text": "X \\times_S S' \\to S'"
},
{
"math_id": 3,
"text": "S"
},
{
"math_id": 4,
"text": "\\mathbb{C}"
},
{
"math_id": 5,
"text": "A"
},
{
"math_id": 6,
"text": "K"
},
{
"math_id": 7,
"text": "\\mathcal{O}"
},
{
"math_id": 8,
"text": "L/K"
},
{
"math_id": 9,
"text": "A_{(L)} = A \\otimes_K L"
},
{
"math_id": 10,
"text": "\\mathcal{O}_L"
},
{
"math_id": 11,
"text": "L"
},
{
"math_id": 12,
"text": "\\mathcal{A}_L"
},
{
"math_id": 13,
"text": "A_{(L)}"
},
{
"math_id": 14,
"text": "\\mathcal{O}_L,"
},
{
"math_id": 15,
"text": "\\mathcal{A}_{L,s}"
},
{
"math_id": 16,
"text": "s\\in S=\\mathrm{Spec}(\\mathcal{O}_L)"
},
{
"math_id": 17,
"text": "s\\in S"
},
{
"math_id": 18,
"text": "s"
}
]
| https://en.wikipedia.org/wiki?curid=70859961 |
7086534 | Kelvin's circulation theorem | Theorem regarding circulation in a barotropic ideal fluid
In fluid mechanics, Kelvin's circulation theorem (named after William Thomson, 1st Baron Kelvin, who published it in 1869) states: In a barotropic, ideal fluid with conservative body forces, the circulation around a closed curve (which encloses the same fluid elements) moving with the fluid remains constant with time.
Stated mathematically:
formula_0
where formula_1 is the circulation around a material moving contour formula_2 as a function of time formula_3. The differential operator formula_4 is a substantial (material) derivative moving with the fluid particles. Stated more simply, this theorem says that if one observes a closed contour at one instant, and follows the contour over time (by following the motion of all of its fluid elements), the circulation over the two locations of this contour remains constant.
This theorem does not hold in cases with viscous stresses, nonconservative body forces (for example the Coriolis force) or non-barotropic pressure-density relations.
Mathematical proof.
The circulation formula_1 around a closed material contour formula_2 is defined by:
formula_5
where u is the velocity vector, and ds is an element along the closed contour.
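For a concrete check of this definition, the line integral can be evaluated numerically. The following Python sketch computes the circulation of the solid-body rotation field u = (-Ωy, Ωx) around a circle of radius R; this test field and all numerical values are illustrative assumptions, not taken from the sources above. The result should equal the vorticity 2Ω times the enclosed area, i.e. 2Ω π R^2.

import numpy as np

Omega, R, n = 2.0, 1.0, 10000                    # assumed test values
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
x, y = R * np.cos(theta), R * np.sin(theta)      # points on the contour C
u = np.stack([-Omega * y, Omega * x], axis=1)    # velocity sampled on C
tangent = np.stack([-np.sin(theta), np.cos(theta)], axis=1)
ds = tangent * R * (2.0 * np.pi / n)             # line element along C
gamma = np.einsum("ij,ij->", u, ds)              # circulation, sum of u . ds
print(gamma, 2.0 * Omega * np.pi * R**2)         # both are approximately 12.566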
The governing equation for an inviscid fluid with a conservative body force is
formula_6
where D/D"t" is the convective derivative, "ρ" is the fluid density, "p" is the pressure and "Φ" is the potential for the body force. These are the Euler equations with a body force.
The condition of barotropicity implies that the density is a function only of the pressure, i.e. formula_7.
Taking the convective derivative of circulation gives
formula_8
For the first term, we substitute from the governing equation, and then apply Stokes' theorem, thus:
formula_9
The final equality arises since formula_10 owing to barotropicity. We have also made use of the fact that the curl of any gradient is necessarily 0, or formula_11 for any function formula_12.
For the second term, we note that evolution of the material line element is given by
formula_13
Hence
formula_14
The last equality is obtained by applying gradient theorem.
Since both terms are zero, we obtain the result
formula_15
Poincaré–Bjerknes circulation theorem.
A similar principle, known as the Poincaré–Bjerknes circulation theorem, conserves a modified quantity in a rotating frame of reference. It is named after Henri Poincaré and Vilhelm Bjerknes, who derived the invariant in 1893 and 1898, respectively. The theorem can be applied to a frame which is rotating at a constant angular velocity given by the vector formula_16, for the modified circulation
formula_17
Here formula_18 is the position vector of a fluid element. From Stokes' theorem, this is:
formula_19
The vorticity of a velocity field in fluid dynamics is defined by:
formula_20
Then:
formula_21 | [
{
"math_id": 0,
"text": "\\frac{\\mathrm{D}\\Gamma}{\\mathrm{D}t} = 0"
},
{
"math_id": 1,
"text": "\\Gamma"
},
{
"math_id": 2,
"text": "C(t)"
},
{
"math_id": 3,
"text": "t"
},
{
"math_id": 4,
"text": "\\mathrm{D}"
},
{
"math_id": 5,
"text": "\\Gamma(t) = \\oint_C \\boldsymbol{u} \\cdot \\mathrm{d}\\boldsymbol{s}"
},
{
"math_id": 6,
"text": "\\frac{\\mathrm{D} \\boldsymbol{u}}{\\mathrm{D} t} = - \\frac{1}{\\rho}\\boldsymbol{\\nabla}p + \\boldsymbol{\\nabla} \\Phi"
},
{
"math_id": 7,
"text": "\\rho=\\rho(p)"
},
{
"math_id": 8,
"text": " \\frac{\\mathrm{D}\\Gamma}{\\mathrm{D} t} = \\oint_C \\frac{\\mathrm{D} \\boldsymbol{u}}{\\mathrm{D}t} \\cdot \\mathrm{d}\\boldsymbol{s} + \\oint_C \\boldsymbol{u} \\cdot \\frac{\\mathrm{D} \\mathrm{d}\\boldsymbol{s}}{\\mathrm{D}t}. "
},
{
"math_id": 9,
"text": " \\oint_C \\frac{\\mathrm{D} \\boldsymbol{u}}{\\mathrm{D}t} \\cdot \\mathrm{d}\\boldsymbol{s} = \\int_A \\boldsymbol{\\nabla} \\times \\left( -\\frac{1}{\\rho} \\boldsymbol{\\nabla} p + \\boldsymbol{\\nabla} \\Phi \\right) \\cdot \\boldsymbol{n} \\, \\mathrm{d}S = \\int_A \\frac{1}{\\rho^2} \\left( \\boldsymbol{\\nabla} \\rho \\times \\boldsymbol{\\nabla} p \\right) \\cdot \\boldsymbol{n} \\, \\mathrm{d}S = 0. "
},
{
"math_id": 10,
"text": "\\boldsymbol{\\nabla} \\rho \\times \\boldsymbol{\\nabla} p=0"
},
{
"math_id": 11,
"text": "\\boldsymbol{\\nabla} \\times \\boldsymbol{\\nabla} f=0"
},
{
"math_id": 12,
"text": "f"
},
{
"math_id": 13,
"text": "\\frac{\\mathrm{D} \\mathrm{d}\\boldsymbol{s}}{\\mathrm{D}t} = \\left( \\mathrm{d}\\boldsymbol{s} \\cdot \\boldsymbol{\\nabla} \\right) \\boldsymbol{u}."
},
{
"math_id": 14,
"text": "\\oint_C \\boldsymbol{u} \\cdot \\frac{\\mathrm{D} \\mathrm{d}\\boldsymbol{s}}{\\mathrm{D}t} = \\oint_C \\boldsymbol{u} \\cdot \\left( \\mathrm{d}\\boldsymbol{s} \\cdot \\boldsymbol{\\nabla} \\right) \\boldsymbol{u} = \\frac{1}{2} \\oint_C \\boldsymbol{\\nabla} \\left( |\\boldsymbol{u}|^2 \\right) \\cdot \\mathrm{d}\\boldsymbol{s} = 0."
},
{
"math_id": 15,
"text": "\\frac{\\mathrm{D}\\Gamma}{\\mathrm{D}t} = 0."
},
{
"math_id": 16,
"text": " \\boldsymbol{\\Omega} "
},
{
"math_id": 17,
"text": "\\Gamma(t) = \\oint_C (\\boldsymbol{u} + \\boldsymbol{\\Omega} \\times \\boldsymbol{r}) \\cdot \\mathrm{d}\\boldsymbol{s}"
},
{
"math_id": 18,
"text": " \\boldsymbol{r} "
},
{
"math_id": 19,
"text": "\\Gamma(t) = \\int_A \\boldsymbol{\\nabla} \\times (\\boldsymbol{u} + \\boldsymbol{\\Omega} \\times \\boldsymbol{r}) \\cdot \\boldsymbol{n} \\, \\mathrm{d}S = \\int_A (\\boldsymbol{\\nabla} \\times \\boldsymbol{u} + 2 \\boldsymbol{\\Omega}) \\cdot \\boldsymbol{n} \\, \\mathrm{d}S"
},
{
"math_id": 20,
"text": "\\boldsymbol{\\omega} = \\boldsymbol{\\nabla} \\times \\boldsymbol{u}"
},
{
"math_id": 21,
"text": "\\Gamma(t) = \\int_A (\\boldsymbol{\\omega} + 2 \\boldsymbol{\\Omega}) \\cdot \\boldsymbol{n} \\, \\mathrm{d}S"
}
]
| https://en.wikipedia.org/wiki?curid=7086534 |
7086661 | Inverse image functor | In mathematics, specifically in algebraic topology and algebraic geometry, an inverse image functor is a contravariant construction of sheaves; here "contravariant" is meant in the sense that, given a map formula_0, the inverse image functor is a functor from the category of sheaves on "Y" to the category of sheaves on "X". The direct image functor is the primary operation on sheaves, with the simplest definition. The inverse image exhibits some relatively subtle features.
Definition.
Suppose we are given a sheaf formula_1 on formula_2 and that we want to transport formula_1 to formula_3 using a continuous map formula_4.
We will call the result the "inverse image" or pullback sheaf formula_5. If we try to imitate the direct image by setting
formula_6
for each open set formula_7 of formula_3, we immediately run into a problem: formula_8 is not necessarily open. The best we could do is to approximate it by open sets, and even then we will get a presheaf and not a sheaf. Consequently, we define formula_5 to be the sheaf associated to the presheaf:
formula_9
(Here formula_10 ranges over the open subsets of formula_2 containing formula_8.)
For example, if formula_11 is just the inclusion of a point formula_12 of formula_2, then formula_13 is just the stalk of formula_14 at this point.
The restriction maps, as well as the functoriality of the inverse image follows from the universal property of direct limits.
When dealing with morphisms formula_4 of locally ringed spaces, for example schemes in algebraic geometry, one often works with sheaves of formula_15-modules, where formula_15 is the structure sheaf of formula_2. Then the functor formula_16 is inappropriate, because in general it does not even give sheaves of formula_17-modules. In order to remedy this, one defines in this situation for a sheaf of formula_18-modules formula_19 its inverse image by
formula_20.
The functor formula_16 is left adjoint to the direct image functor formula_21. The unit and counit of this adjunction are natural morphisms formula_25 and formula_26, and the adjunction is expressed by the natural bijection
formula_27.
Properties.
While formula_16 is more complicated to define than the direct image functor, the stalks are easier to compute: given a point formula_22, one has formula_23. However, the unit and counit morphisms formula_25 and formula_26 are "almost never" isomorphisms.
For example, if formula_28 denotes the inclusion of a closed subset, the stalk of formula_29 at a point formula_30 is canonically isomorphic to formula_31 if formula_12 is in formula_32 and formula_33 otherwise. A similar adjunction holds for the case of sheaves of modules, replacing formula_34 by formula_35. | [
{
"math_id": 0,
"text": "f : X \\to Y"
},
{
"math_id": 1,
"text": "\\mathcal{G}"
},
{
"math_id": 2,
"text": "Y"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "f\\colon X\\to Y"
},
{
"math_id": 5,
"text": "f^{-1}\\mathcal{G}"
},
{
"math_id": 6,
"text": "f^{-1}\\mathcal{G}(U) = \\mathcal{G}(f(U))"
},
{
"math_id": 7,
"text": "U"
},
{
"math_id": 8,
"text": "f(U)"
},
{
"math_id": 9,
"text": "U \\mapsto \\varinjlim_{V\\supseteq f(U)}\\mathcal{G}(V)."
},
{
"math_id": 10,
"text": "V"
},
{
"math_id": 11,
"text": "f"
},
{
"math_id": 12,
"text": "y"
},
{
"math_id": 13,
"text": "f^{-1}(\\mathcal{F})"
},
{
"math_id": 14,
"text": "\\mathcal{F}"
},
{
"math_id": 15,
"text": "\\mathcal{O}_Y"
},
{
"math_id": 16,
"text": "f^{-1}"
},
{
"math_id": 17,
"text": "\\mathcal{O}_X"
},
{
"math_id": 18,
"text": "\\mathcal O_Y"
},
{
"math_id": 19,
"text": "\\mathcal G"
},
{
"math_id": 20,
"text": "f^*\\mathcal G := f^{-1}\\mathcal{G} \\otimes_{f^{-1}\\mathcal{O}_Y} \\mathcal{O}_X"
},
{
"math_id": 21,
"text": "f_{\\ast}"
},
{
"math_id": 22,
"text": "x \\in X"
},
{
"math_id": 23,
"text": "(f^{-1}\\mathcal{G})_x \\cong \\mathcal{G}_{f(x)}"
},
{
"math_id": 24,
"text": "f^*"
},
{
"math_id": 25,
"text": "\\mathcal{G} \\rightarrow f_*f^{-1}\\mathcal{G}"
},
{
"math_id": 26,
"text": "f^{-1}f_*\\mathcal{F} \\rightarrow \\mathcal{F}"
},
{
"math_id": 27,
"text": "\\mathrm{Hom}_{\\mathbf {Sh}(X)}(f^{-1} \\mathcal G, \\mathcal F ) = \\mathrm{Hom}_{\\mathbf {Sh}(Y)}(\\mathcal G, f_*\\mathcal F)"
},
{
"math_id": 28,
"text": "i\\colon Z \\to Y"
},
{
"math_id": 29,
"text": "i_* i^{-1} \\mathcal G"
},
{
"math_id": 30,
"text": "y \\in Y"
},
{
"math_id": 31,
"text": "\\mathcal G_y"
},
{
"math_id": 32,
"text": "Z"
},
{
"math_id": 33,
"text": "0"
},
{
"math_id": 34,
"text": "i^{-1}"
},
{
"math_id": 35,
"text": "i^*"
}
]
| https://en.wikipedia.org/wiki?curid=7086661 |
70867509 | Bimodal atomic force microscopy | Bimodal Atomic Force Microscopy (bimodal AFM) is an advanced atomic force microscopy technique characterized by generating high-spatial resolution maps of material properties. Topography, deformation, elastic modulus, viscosity coefficient or magnetic field maps might be generated. Bimodal AFM is based on the simultaneous excitation and detection of two eigenmodes (resonances) of a force microscope microcantilever.
History.
Numerical and theoretical considerations prompted the development of bimodal AFM. The method was initially conceived to enhance topographic contrast in air environments. Three subsequent advances set the stage for further developments and applications: the capability to detect non-topographic properties such as electrostatic and magnetic interactions; imaging in liquid and ultra-high vacuum; and the genuinely quantitative character of the method.
Principles of Bimodal AFM.
The interaction of the tip with the sample modifies the amplitudes, phase shifts and resonant frequencies of the excited modes. Those changes are detected and processed by the feedback loops of the instrument. Several features make bimodal AFM a very powerful surface characterization method at the nanoscale. (i) Resolution. Atomic, molecular or nanoscale spatial resolution has been demonstrated. (ii) Simultaneity. Maps of different properties are generated at the same time. (iii) Efficiency. At most four data points per pixel are needed to generate material property maps. (iv) Speed. Analytical solutions link observables with material properties.
Configurations.
In AFM, feedback loops control the operation of the microscope by keeping a parameter of the tip's oscillation at a fixed value. If the main feedback loop operates with the amplitude, the AFM mode is called amplitude modulation (AM). If it operates with the frequency shift, the AFM mode is called frequency modulation (FM). Bimodal AFM might be operated with several feedback loops. This gives rise to a variety of bimodal configurations. The configurations are termed AM-open loop, AM-FM, FM-FM. For example, bimodal AM-FM means that the first mode is operated with an amplitude modulation loop while the 2nd mode is operated with a frequency modulation loop. The configurations might not be equivalent in terms of sensitivity, signal-to-noise ratio or complexity.
Consider the AM-FM configuration. The first mode is excited to a free amplitude (its amplitude in the absence of tip-sample interaction) and the changes of its amplitude and phase shift are tracked by a lock-in amplifier. The main feedback loop keeps the amplitude constant at a certain set-point formula_1 by modifying the vertical position of the tip (AM). In a nanomechanical mapping experiment, formula_3 must be kept below 90°, i.e., the AFM is operated in the repulsive regime. At the same time, an FM loop acts on the second eigenmode. A phase-locked loop regulates the excitation frequency formula_0 by keeping the phase shift of the second mode at 90°. An additional feedback loop might be used to keep the amplitude formula_2 constant.
Theory.
The theory of bimodal AFM operation encompasses several aspects. Among them, the approximations to express the Euler-Bernoulli equation of a continuous cantilever beam in terms of the equations of the excited modes, the type of interaction forces acting on the tip, the theory of demodulation methods or the introduction of finite-size effects.
In a nutshell, the tip displacement in AFM is approximated by a point-mass model,
formula_4
where formula_5, formula_6, formula_7, formula_8, formula_9, and formula_10 are, respectively, the driving frequency, the free resonant frequency, the quality factor, the stiffness, the driving force of the "i-th" mode, and the tip–sample interaction force. In bimodal AFM, the vertical motion of the tip (deflection) has two components, one for each mode,
formula_11
with formula_12, formula_13, formula_14, as the static, the first, and the second mode deflections; formula_15, formula_5 and formula_16 are, respectively, the amplitude, frequency and phase shift of mode "i".
The theory that transforms bimodal AFM observables into material properties is based on applying the virial formula_17 and energy dissipation formula_18 theorems to the equations of motion of the excited modes. The following equations were derived
formula_19
formula_20
formula_21
where formula_22 is a time over which the oscillations of both modes are periodic, and formula_7 is the quality factor of mode "i". Bimodal AFM operation might involve any pair of eigenmodes. However, experiments are commonly performed by exciting the first two eigenmodes.
The theory of bimodal AFM provides analytical expressions to link material properties with microscope observables. For example, for a paraboloid probe (radius formula_23) and a tip-sample force given by the linear viscoelastic Kelvin-Voigt model, the effective elastic modulus formula_24 of the sample, viscous coefficient of compressibility formula_25, loss tangent formula_26 or retardation time formula_27 are expressed by
formula_28
formula_29
formula_30
For an elastic material, the second term of the equation for formula_31 vanishes because formula_32, which gives formula_33. The elastic modulus is then obtained from the equation above. Other analytical expressions were proposed for the determination of the Hamaker constant and the magnetic parameters of a ferromagnetic sample.
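As an illustration of how these expressions are used, the following Python sketch evaluates the effective modulus, the viscous coefficient and the loss tangent from bimodal AM-FM observables. It assumes the angular frequency is ω1 = 2πf01, and every numerical value in the example call is an illustrative assumption, not measured data.

import numpy as np

def bimodal_viscoelastic(k1, k2, Q1, f01, f02, df2, A01, A1, phi1, R):
    """k1, k2: modal stiffnesses; Q1: first-mode quality factor;
    f01, f02: free resonant frequencies; df2: frequency shift of mode 2;
    A01, A1: free and set-point amplitudes of mode 1; phi1: first-mode
    phase shift in radians; R: tip radius."""
    # effective elastic modulus from the first expression above
    E_eff = (4.0 * np.sqrt(2.0) * Q1 / np.sqrt(R) * k2**2 / k1
             * (df2 / f02)**2 * A1**1.5 / (A01**2 - A1**2))
    omega1 = 2.0 * np.pi * f01          # assumed definition of omega_1
    # viscous coefficient and loss tangent from the remaining expressions
    eta_com = E_eff / omega1 * (A01 * np.sin(phi1) - A1) / (A01 * np.cos(phi1))
    tan_rho = 2.0 * np.pi * omega1 * eta_com / E_eff
    return E_eff, eta_com, tan_rho

# hypothetical observables: stiffnesses in N/m, frequencies in Hz,
# amplitudes and tip radius in m
print(bimodal_viscoelastic(k1=0.5, k2=20.0, Q1=150.0, f01=70e3, f02=440e3,
                           df2=300.0, A01=60e-9, A1=45e-9,
                           phi1=np.deg2rad(60.0), R=5e-9))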
Applications.
Bimodal AFM is applied to characterize a large variety of surfaces and interfaces. Some applications exploit the sensitivity of bimodal observables to enhance spatial resolution. However, the full capabilities of bimodal AFM are shown in the generation of quantitative maps of material properties. The section is divided in terms of the achieved spatial resolution, atomic-scale or nanoscale.
Atomic and molecular-scale resolution.
Atomic-scale imaging of graphene, semiconductor surfaces and adsorbed organic molecules was obtained in ultra-high vacuum. Angstrom-resolution images of hydration layers formed on proteins, and Young's modulus maps of a metal-organic framework, a purple membrane and a lipid bilayer, were reported in aqueous solutions.
Material property applications.
Bimodal AFM is widely used to provide high-spatial-resolution maps of material properties, in particular mechanical properties. Elastic and/or viscoelastic property maps of polymers, DNA, proteins, protein fibers, lipids or 2D materials were generated. Non-mechanical properties and interactions, including those of magnetic garnet crystals, electrostatic strain, superparamagnetic particles and high-density disks, were also mapped. Quantitative property mapping requires the calibration of the force constants of the excited modes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f_2"
},
{
"math_id": 1,
"text": "A_1"
},
{
"math_id": 2,
"text": "A_2"
},
{
"math_id": 3,
"text": "\\phi_1"
},
{
"math_id": 4,
"text": "\n\\frac{k_i}{4\\pi^2 f_i^2} \\ddot{z_i} + \\frac{k_i}{2\\pi f_{0 i} Q} \\dot{z_i} + k_i z_i = F_i \\cos(2 \\pi f_i t) + F_{ts} (t) \n\\,"
},
{
"math_id": 5,
"text": "f_i"
},
{
"math_id": 6,
"text": "f_0i"
},
{
"math_id": 7,
"text": "Q_i"
},
{
"math_id": 8,
"text": "k_i"
},
{
"math_id": 9,
"text": "F_i"
},
{
"math_id": 10,
"text": "F_{ts}"
},
{
"math_id": 11,
"text": "\nz(t) = z_0 +z_1(t)+z_2(t) \\approx A_1 cos \\left(2\\pi f_1 t - \\phi_1 \\right) + A_2 cos \\left(2\\pi f_2 t - \\frac{\\pi}{2} \\right)\n\\,"
},
{
"math_id": 12,
"text": "z_0"
},
{
"math_id": 13,
"text": "z_1"
},
{
"math_id": 14,
"text": "z_2"
},
{
"math_id": 15,
"text": "A_i"
},
{
"math_id": 16,
"text": "\\phi_i"
},
{
"math_id": 17,
"text": "V_i"
},
{
"math_id": 18,
"text": "E_{diss}"
},
{
"math_id": 19,
"text": "\nV_1= \\frac{1}{T}\\int_0^T F_{ts} (t) z_1(t) dt = - \\frac{k_1 A_1 A_{01}}{2 Q_1} \\cos{\\phi_1}\n\\,"
},
{
"math_id": 20,
"text": "\nV_2= \\frac{1}{T}\\int_0^T F_{ts} (t) z_2(t) dt \\approx - \\frac{k_2 A_2^2 \\Delta f_2}{f_{02}}\n\\,"
},
{
"math_id": 21,
"text": "\nE_{diss1}= \\int_0^T F_{ts} (t) \\dot{z}_1(t) dt = - \\frac{\\pi k_1 A_1 }{Q_1} (A_1 - A_{01}\\sin(\\phi_1))\n\\,"
},
{
"math_id": 22,
"text": "T=T_1 T_2"
},
{
"math_id": 23,
"text": "R"
},
{
"math_id": 24,
"text": "E_{eff} "
},
{
"math_id": 25,
"text": "\\eta_{com}"
},
{
"math_id": 26,
"text": "\\tan \\rho "
},
{
"math_id": 27,
"text": "\\tau"
},
{
"math_id": 28,
"text": "\nE_{eff} = 4 \\sqrt{2} \\frac{Q_1}{\\sqrt{R}} \\frac{k_2^2}{k_1} \\frac{\\Delta f_2^2}{f_{02}^2} \\frac{A_1^{3/2}}{A_{01}^2-A_1^2}\n\\,"
},
{
"math_id": 29,
"text": "\n\\eta_{com} = \\frac{E_{eff}}{\\omega_1} \\left[ \\frac{A_{01}\\sin{\\phi_1}-A_1}{A_{01}\\cos{\\phi_1}} \\right]\n\\,"
},
{
"math_id": 30,
"text": "\n\\tan \\rho = 2 \\pi \\omega_1 \\frac{\\eta_{com}}{E_{eff}} = 2 \\pi \\omega_1 \\tau\n\\,"
},
{
"math_id": 31,
"text": "\\eta"
},
{
"math_id": 32,
"text": "A_1=A_{01} \\sin{\\phi_1}"
},
{
"math_id": 33,
"text": "\\eta = 0"
}
]
| https://en.wikipedia.org/wiki?curid=70867509 |
70873538 | Sieve of Pritchard | An algorithm for generating prime numbers
In mathematics, the sieve of Pritchard is an algorithm for finding all prime numbers up to a specified bound.
Like the ancient sieve of Eratosthenes, it has a simple conceptual basis in number theory.
It is especially suited to quick hand computation for small bounds.
Whereas the sieve of Eratosthenes marks off each non-prime for each of its prime factors, the sieve of Pritchard avoids considering almost all non-prime numbers by building progressively larger wheels, which represent the pattern of numbers not divisible by any of the primes processed thus far.
It thereby achieves a better asymptotic complexity, and was the first sieve with a running time sublinear in the specified bound.
Its asymptotic running-time has not been improved on, and it deletes fewer composites than any other known sieve.
It was created in 1979 by Paul Pritchard.
Since Pritchard has created a number of other sieve algorithms for finding prime numbers, the sieve of Pritchard is sometimes singled out by being called "the wheel sieve" (by Pritchard himself) or "the dynamic wheel sieve".
Overview.
A prime number is a natural number that has no natural number divisors other than the number formula_0 and itself.
To find all the prime numbers less than or equal to a given integer formula_1, a sieve algorithm examines a set of candidates in the range formula_2,
and eliminates those that are not prime, leaving the primes at the end.
The sieve of Eratosthenes examines all of the range, first removing all multiples of the first prime formula_3, then of the next prime formula_4, and so on.
The sieve of Pritchard instead examines a subset of the range consisting of numbers that occur on successive wheels,
which represent the pattern of numbers left after each successive prime is processed by the sieve of Eratosthenes.
For formula_5 the formula_6'th wheel formula_7 represents this pattern.
It is the set of numbers between formula_0 and the product formula_8 of the first formula_6 prime numbers that are not divisible by any of these prime numbers (and is said to have an associated "length" formula_9).
This is because adding formula_9 to a number doesn't change whether or not it is divisible by one of the first formula_6 prime numbers,
since the remainder on division by any one of these primes is unchanged.
So formula_10 with length formula_11 represents the pattern of odd numbers;
formula_12 with length formula_13 represents the pattern of numbers not divisible by formula_3 or formula_4; etc.
Wheels are so-called because formula_7 can be usefully visualized as a circle of circumference formula_9 with its members marked at their corresponding distances from an origin.
Then rolling the wheel along the number line marks points corresponding to successive numbers not divisible by one of the first formula_6 prime numbers.
The animation shows formula_14 being rolled up to 30.
It's useful to define formula_15 for formula_16 to be the result of rolling formula_7 up to formula_17.
Then the animation generates formula_18.
Note that up to formula_19, this consists only of formula_0 and the primes between formula_20 and formula_21.
The sieve of Pritchard is derived from the observation that this holds generally:
for all formula_5, the values in formula_22 are formula_0 and the primes between formula_23 and formula_24.
It even holds for formula_25, where the wheel has length formula_0 and contains just formula_0 (representing all the natural numbers).
So the sieve of Pritchard starts with the trivial wheel formula_26 and builds successive wheels until the square of the wheel's first member after formula_0 exceeds formula_1.
Wheels grow very quickly, but only their values up to formula_1 are needed and generated.
It remains to find a method for generating the next wheel.
Note in the animation that formula_27 can be obtained by rolling formula_14 up to formula_28 and then removing formula_20 times each member of formula_14.
This also holds generally: for all formula_29, formula_30.
Rolling formula_7 past formula_9 just adds values to formula_7, so the current wheel is first extended by getting each successive member starting with formula_31, adding formula_9 to it, and inserting the result in the set.
Then the multiples of formula_23 are deleted.
Care must be taken to avoid a number being deleted that itself needs to be multiplied by formula_23.
The sieve of Pritchard as originally presented does so by first skipping past successive members until finding the maximum one needed, and then doing the deletions in reverse order by working back through the set.
This is the method used in the first animation above.
A simpler approach is just to gather the multiples of formula_23 in a list, and then delete them.
Another approach is given by Gries and Misra.
If the main loop terminates with a wheel whose length is less than formula_1, it is extended up to formula_1 to generate the remaining primes.
The algorithm, for finding all primes up to "N", therefore repeatedly forms the next wheel from the current one in this way; it is given in full in the Pseudocode section below.
Example.
To find all the prime numbers less than or equal to 150, proceed as follows.
Start with wheel 0 with length 1, representing all natural numbers 1, 2, 3...:
1
The first number after 1 for wheel 0 (when rolled) is 2; note it as a prime.
Now form wheel 1 with length 2x1=2 by first extending wheel 0 up to 2 and then deleting 2 times each number in wheel 0, to get:
1 2
The first number after 1 for wheel 1 (when rolled) is 3; note it as a prime.
Now form wheel 2 with length 3x2=6 by first extending wheel 1 up to 6 and then deleting 3 times each number in wheel 1, to get
1 2 3 5
The first number after 1 for wheel 2 is 5; note it as a prime.
Now form wheel 3 with length 5x6=30 by first extending wheel 2 up to 30 and then deleting 5 times each number in wheel 2 (in reverse order!), to get
1 2 3 5 7 11 13 17 19 23 29
The first number after 1 for wheel 3 is 7; note it as a prime.
Now wheel 4 has length 7x30=210, so we only extend wheel 3 up to our limit 150.
We then delete 7 times each number in wheel 3 until we exceed our limit 150, to get the elements in wheel 4 up to 150:
1 2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 107 109 113 121 127 131 137 139 143 149
The first number after 1 for this partial wheel 4 is 11; note it as a prime.
Since we've finished with rolling, we delete 11 times each number in the partial wheel 4 until we exceed our limit 150, to get the elements in wheel 5 up to 150:
1 2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 107 109 113 127 131 137 139 149
The first number after 1 for this partial wheel 5 is 13.
Since 13 squared exceeds our limit 150, we stop.
The remaining numbers (other than 1) are the rest of the primes up to our limit 150.
Just 8 composite numbers are removed, once each.
The rest of the numbers considered (other than 1) are prime.
In comparison, the natural version of Eratosthenes sieve (stopping at the same point) removes composite numbers 184 times.
Pseudocode.
The sieve of Pritchard can be expressed in pseudocode, as follows:
algorithm Sieve of Pritchard is
    input: an integer "N" >= 2.
    output: the set of prime numbers in {1,2,...,"N"}.
    let "W" and "Pr" be sets of integer values, and all other variables integer values.
    "k", "W", "length", "p", "Pr" := 1, {1}, 2, 3, {2};
    while "p"^2 <= "N" do
        if ("length" < "N") then
            Extend "W","length" to minimum of "p"*"length","N";
        Delete multiples of "p" from "W";
        Insert "p" into "Pr";
        "k", "p" := "k"+1, next("W", 1);
    if ("length" < "N") then
        Extend "W","length" to "N";
    return "Pr" formula_33 "W" - {1};
where next("W", w) is the next value in the ordered set "W" after "w".
procedure Extend "W","length" to "n" is
integer w, x;
"w", "x" := 1, "length"+1;
while "x" <= "n" do
Insert "x" into "W";
"w" := next("W","w");
"x" := "length" + "w";
"length" := "n";
procedure Delete multiples of "p" from "W","length" is
    integer w;
    "w" := "p";
    while "p"*"w" <= "length" do
        "w" := next("W","w");
    while "w" > 1 do
        "w" := prev("W","w");
        Remove "p"*"w" from "W";
where prev("W", w) is the previous value in the ordered set "W" before "w". The algorithm can be initialized with formula_26 instead of formula_35 at the minor complicaion of making next("W",1) a special case
when "k" = 0.
This abstract algorithm uses ordered sets supporting the operations of insertion of a value greater than the maximum, deletion of a member, getting the next value after a member, and getting the previous value before a member.
Using one of Mertens' theorems (the third), it can be shown to use formula_36 of these operations, additions, and multiplications.
Implementation.
An array-based doubly-linked list "s" can be used to implement the ordered set "W", with "s"["w"] storing next("W","w") and "s"["w"-1] storing prev("W","w").
This permits each abstract operation to be implemented in a small number of operations.
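The following Python transcription of the abstract algorithm follows this strategy. As a simplifying assumption it uses two separate arrays, nxt and prv, for the doubly-linked list rather than the single array "s" described above; it is a sketch for illustration, not an optimized implementation.

def sieve_of_pritchard(N):
    """Return the primes up to N (N >= 2), transcribing the abstract
    algorithm with an array-based doubly-linked list for W."""
    nxt = [0] * (N + 2)     # nxt[w]: next member of W after w (0 = none)
    prv = [0] * (N + 2)     # prv[w]: previous member of W before w
    length, p, primes = 2, 3, [2]   # W = {1}, i.e. wheel 1
    last = 1                        # largest member of W
    while p * p <= N:
        if length < N:              # Extend W,length to min(p*length, N)
            n = min(p * length, N)
            w, x = 1, length + 1
            while x <= n:
                nxt[last], prv[x], last = x, last, x
                w = nxt[w]
                x = length + w
            length = n
        w = p                       # Delete multiples of p from W:
        while p * w <= length:      # skip to the first w with p*w > length,
            w = nxt[w]
        while w > 1:                # then delete p*w in decreasing order
            w = prv[w]
            m = p * w
            nxt[prv[m]] = nxt[m]    # unlink m from the list
            if nxt[m]:
                prv[nxt[m]] = prv[m]
            else:
                last = prv[m]
        primes.append(p)
        p = nxt[1]
    if length < N:                  # final extension of W up to N
        w, x = 1, length + 1
        while x <= N:
            nxt[last], prv[x], last = x, last, x
            w = nxt[w]
            x = length + w
    w = nxt[1]                      # the remaining members of W other
    while w:                        # than 1 are the rest of the primes
        primes.append(w)
        w = nxt[w]
    return primes

print(sieve_of_pritchard(150))      # 35 primes, ending ..., 139, 149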
With this implementation, the time complexity of the sieve of Pritchard to calculate the primes up to formula_1 in the random access machine model is formula_36 operations on words of size formula_37.
Pritchard also shows how multiplications can be eliminated by using very small multiplication tables, so the bit complexity is formula_38 bit operations.
In the same model, the space complexity is formula_39 words, i.e., formula_40 bits.
The sieve of Eratosthenes requires only 1 bit for each candidate in the range 2 through formula_1, so its space complexity is lower at formula_39 bits.
Note that space needed for the primes is not counted, since they can be printed or written to external storage as they are found.
Pritchard presented a variant of his sieve that requires only formula_36 bits without compromising the sublinear time complexity,
making it asymptotically superior to the natural version of the sieve of Eratosthenes in both time and space.
However, the sieve of Eratosthenes can be optimized to require much less memory by operating on successive segments of the natural numbers. Its space complexity can be reduced to formula_41 bits without increasing its time complexity.
This means that in practice it can be used for much larger limits formula_1 than would otherwise fit in memory, and also take advantage of fast cache memory.
For maximum speed it is also optimized using a small wheel to avoid sieving with the first few primes (although this does not change its asymptotic time complexity).
Therefore the sieve of Pritchard is not competitive as a practical sieve over sufficiently large ranges.
Geometric model.
At the heart of the sieve of Pritchard is an algorithm for building successive wheels.
It has a simple geometric model as follows:
Note that for the first 2 iterations it is necessary to continue round the circle until 1 is reached again.
The first circle represents formula_43, and successive circles represent wheels formula_44.
The animation on the right shows this model in action up to formula_42.
It is apparent from the model that wheels are symmetric.
This is because formula_45 is not divisible by one of the first formula_46 primes if and only if formula_47 is not so divisible.
It is possible to exploit this to avoid processing some composites, but at the cost of a more complex algorithm.
Related sieves.
Once the wheel in the sieve of Pritchard reaches its maximum size, the remaining operations are equivalent to those performed by Euler's sieve.
The sieve of Pritchard is unique in conflating the set of prime candidates with a dynamic wheel used to speed up the sifting process.
But a separate static wheel (as frequently used to speed up the sieve of Eratosthenes) can give an formula_48 speedup to the latter, or to linear sieves, provided it is large enough (as a function of formula_1).
Examples are the use of the largest wheel of length not exceeding formula_49 to get a version of the sieve of Eratosthenes that takes formula_39 additions and requires only formula_50 bits, and the speedup of the naturally linear sieve of Atkin to get a sublinear optimized version.
Bengalloun found a linear "smoothly incremental" sieve, i.e., one that (in theory) can run indefinitely and takes a bounded number of operations to increment the current bound formula_1.
He also showed how to make it sublinear by adapting the sieve of Pritchard to incrementally build the next dynamic wheel while the current one is being used.
Pritchard showed how to avoid multiplications, thereby obtaining the same asymptotic bit-complexity as the sieve of Pritchard.
Runciman provides a functional algorithm inspired by the sieve of Pritchard.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "2,3,...,N"
},
{
"math_id": 3,
"text": "2"
},
{
"math_id": 4,
"text": "3"
},
{
"math_id": 5,
"text": "i>0"
},
{
"math_id": 6,
"text": "i"
},
{
"math_id": 7,
"text": "W_i"
},
{
"math_id": 8,
"text": "P_i=p_1*p_2*...*p_i"
},
{
"math_id": 9,
"text": "P_i"
},
{
"math_id": 10,
"text": "W_1=\\{1\\}"
},
{
"math_id": 11,
"text": "P_1=2"
},
{
"math_id": 12,
"text": "W_2=\\{1,5\\}"
},
{
"math_id": 13,
"text": "P_2=6"
},
{
"math_id": 14,
"text": "W_2"
},
{
"math_id": 15,
"text": "W_i\\rightarrow n"
},
{
"math_id": 16,
"text": "n>0"
},
{
"math_id": 17,
"text": "n"
},
{
"math_id": 18,
"text": "W_2\\rightarrow 30=\\{1,5,7,11,13,17,19,23,25,29\\}"
},
{
"math_id": 19,
"text": "5^2-1=24"
},
{
"math_id": 20,
"text": "5"
},
{
"math_id": 21,
"text": "25"
},
{
"math_id": 22,
"text": "W_i\\rightarrow {(p_{i+1}^2-1)}"
},
{
"math_id": 23,
"text": "p_{i+1}"
},
{
"math_id": 24,
"text": "p_{i+1}^2"
},
{
"math_id": 25,
"text": "i=0"
},
{
"math_id": 26,
"text": "W_0"
},
{
"math_id": 27,
"text": "W_3=\\{1,5,7,11,13,17,19,23,25,29\\}-\\{5*1,5*5\\}"
},
{
"math_id": 28,
"text": "30"
},
{
"math_id": 29,
"text": "i\\geq 0"
},
{
"math_id": 30,
"text": "W_{i+1} = (W_i\\rightarrow P_{i+1}) - \\{p_{i+1}*w|w\\in W_i\\}"
},
{
"math_id": 31,
"text": "w=1"
},
{
"math_id": 32,
"text": "\\cap"
},
{
"math_id": 33,
"text": "\\cup"
},
{
"math_id": 34,
"text": "\\rightarrow"
},
{
"math_id": 35,
"text": "W_1"
},
{
"math_id": 36,
"text": "O(N/\\log\\log N)"
},
{
"math_id": 37,
"text": "O(\\log N)"
},
{
"math_id": 38,
"text": "O(N\\log N/\\log\\log N)"
},
{
"math_id": 39,
"text": "O(N)"
},
{
"math_id": 40,
"text": "O(N\\log N)"
},
{
"math_id": 41,
"text": "O(\\sqrt N)"
},
{
"math_id": 42,
"text": "W_3"
},
{
"math_id": 43,
"text": "W_0=\\{1\\}"
},
{
"math_id": 44,
"text": "W_1, W_2,..."
},
{
"math_id": 45,
"text": "P_k-w"
},
{
"math_id": 46,
"text": "k"
},
{
"math_id": 47,
"text": "w"
},
{
"math_id": 48,
"text": "O(\\log\\log N)"
},
{
"math_id": 49,
"text": "\\sqrt{N}/log^{2}N"
},
{
"math_id": 50,
"text": "O(\\sqrt N/\\log\\log N)"
}
]
| https://en.wikipedia.org/wiki?curid=70873538 |
7087423 | Boolean circuit | Model of computation
In computational complexity theory and circuit complexity, a Boolean circuit is a mathematical model for combinational digital logic circuits. A formal language can be decided by a family of Boolean circuits, one circuit for each possible input length.
Boolean circuits are defined in terms of the logic gates they contain. For example, a circuit might contain binary AND and OR gates and unary NOT gates, or be entirely described by binary NAND gates. Each gate corresponds to some Boolean function that takes a fixed number of bits as input and outputs a single bit.
Boolean circuits provide a model for many digital components used in computer engineering, including multiplexers, adders, and arithmetic logic units, but they exclude sequential logic. They are an abstraction that omits many aspects relevant to designing real digital logic circuits, such as metastability, fanout, glitches, power consumption, and propagation delay variability.
Formal definition.
In giving a formal definition of Boolean circuits, Vollmer starts by defining a basis as a set "B" of Boolean functions, corresponding to the gates allowable in the circuit model. A Boolean circuit over a basis "B", with "n" inputs and "m" outputs, is then defined as a finite directed acyclic graph. Each vertex corresponds to either a basis function or one of the inputs, and there is a set of exactly "m" nodes which are labeled as the outputs. The edges must also have some ordering, to distinguish between different arguments to the same Boolean function.
As a special case, a propositional formula or Boolean expression is a Boolean circuit with a single output node in which every other node has fan-out of 1. Thus, a Boolean circuit can be regarded as a generalization that allows shared subformulas and multiple outputs.
A common basis for Boolean circuits is the set {AND, OR, NOT}, which is functionally complete, i. e. from which all other Boolean functions can be constructed.
Computational complexity.
Background.
A particular circuit acts only on inputs of fixed size. However, formal languages (the string-based representations of decision problems) contain strings of different lengths, so languages cannot be fully captured by a single circuit (in contrast to the Turing machine model, in which a language is fully described by a single Turing machine). A language is instead represented by a "circuit family". A circuit family is an infinite list of circuits formula_0, where formula_1 has formula_2 input variables. A circuit family is said to decide a language formula_3 if, for every string formula_4, formula_4 is in the language formula_3 if and only if formula_5, where formula_2 is the length of formula_4. In other words, a language is the set of strings which, when applied to the circuits corresponding to their lengths, evaluate to 1.
Complexity measures.
Several important complexity measures can be defined on Boolean circuits, including circuit depth, circuit size, and the number of alternations between AND gates and OR gates. For example, the size complexity of a Boolean circuit is the number of gates in the circuit.
There is a natural connection between circuit size complexity and time complexity. Intuitively, a language with small time complexity (that is, one that requires relatively few sequential operations on a Turing machine) also has small circuit complexity (that is, it requires relatively few Boolean operations). Formally, it can be shown that if a language is in formula_6, where formula_7 is a function formula_8, then it has circuit complexity formula_9.
Complexity classes.
Several important complexity classes are defined in terms of Boolean circuits. The most general of these is P/poly, the set of languages that are decidable by polynomial-size circuit families. It follows directly from the fact that languages in formula_6 have circuit complexity formula_9 that P formula_10 P/poly. In other words, any problem that can be computed in polynomial time by a deterministic Turing machine can also be computed by a polynomial-size circuit family. It is further the case that the inclusion is proper (i.e. P formula_11 P/poly) because there are undecidable problems that are in P/poly. P/poly turns out to have a number of properties that make it highly useful in the study of the relationships between complexity classes. In particular, it is helpful in investigating problems related to P versus NP. For example, if there is any language in NP that is not in P/poly then P formula_12 NP. P/poly also helps to investigate properties of the polynomial hierarchy. For example, if NP ⊆ P/poly, then PH collapses to formula_13. A full description of the relations between P/poly and other complexity classes is available at "Importance of P/poly". P/poly also has the interesting feature that it can be equivalently defined as the class of languages recognized by a polynomial-time Turing machine with a polynomial-bounded advice function.
Two subclasses of P/poly that have interesting properties in their own right are NC and AC. These classes are defined not only in terms of their circuit size but also in terms of their "depth". The depth of a circuit is the length of the longest directed path from an input node to the output node. The class NC is the set of languages that can be solved by circuit families that are restricted not only to having polynomial-size but also to having polylogarithmic depth. The class AC is defined similarly to NC, however gates are allowed to have unbounded fan-in (that is, the AND and OR gates can be applied to more than two bits). NC is an important class because it turns out that it represents the class of languages that have efficient parallel algorithms.
Circuit evaluation.
The Circuit Value Problem — the problem of computing the output of a given Boolean circuit on a given input string — is a P-complete decision problem. Therefore, this problem is considered to be "inherently sequential" in the sense that there is likely no efficient, highly parallel algorithm that solves the problem.
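To make the evaluation procedure concrete, the following Python sketch computes the output of a Boolean circuit over the basis {AND, OR, NOT}, given as a directed acyclic graph. The dictionary representation and the example circuit (an XOR built with shared input nodes) are illustrative assumptions, not a standard encoding.

def evaluate(circuit, inputs, output):
    """circuit: dict mapping a node id to ("INPUT", i) for the i-th input
    bit, or (op, [argument node ids]) with op in {"AND", "OR", "NOT"}."""
    ops = {"AND": lambda a, b: a & b,
           "OR": lambda a, b: a | b,
           "NOT": lambda a: 1 - a}
    values = {}                        # memoized node values; the DAG may
    def val(node):                     # share subcircuits between gates
        if node not in values:
            kind, args = circuit[node]
            if kind == "INPUT":
                values[node] = inputs[args]
            else:
                values[node] = ops[kind](*(val(a) for a in args))
        return values[node]
    return val(output)

# x0 XOR x1 expressed over the basis {AND, OR, NOT}
circuit = {
    "x0": ("INPUT", 0), "x1": ("INPUT", 1),
    "n0": ("NOT", ["x0"]), "n1": ("NOT", ["x1"]),
    "a0": ("AND", ["x0", "n1"]), "a1": ("AND", ["n0", "x1"]),
    "out": ("OR", ["a0", "a1"]),
}
print([evaluate(circuit, [a, b], "out") for a in (0, 1) for b in (0, 1)])
# prints [0, 1, 1, 0]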
Completeness.
Logic circuits are physical representations of simple logic operations, AND, OR and NOT (and their combinations, such as non-sequential flip-flops or circuit networks), that form a mathematical structure known as Boolean algebra. They are complete in the sense that they can perform any deterministic algorithm. However, in the physical world we also encounter randomness, notably in small systems governed by quantization effects, which are described by the theory of quantum mechanics. Logic circuits cannot produce any randomness, and in that sense they form an incomplete logic set. A remedy is to add an ad-hoc random bit generator to logic networks or computers, as in the probabilistic Turing machine. A recent work has introduced a theoretical concept of an inherently random logic circuit named "random flip-flop", which completes the set. It conveniently packs randomness and is inter-operable with deterministic Boolean logic circuits. However, an algebraic structure equivalent to Boolean algebra, and associated methods of circuit construction and reduction for the extended set, are yet unknown.
Footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(C_0,C_1,C_2,...)"
},
{
"math_id": 1,
"text": "C_n"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "L"
},
{
"math_id": 4,
"text": "w"
},
{
"math_id": 5,
"text": "C_n(w)=1"
},
{
"math_id": 6,
"text": "\\mathsf{TIME}(t(n))"
},
{
"math_id": 7,
"text": "t"
},
{
"math_id": 8,
"text": "t:\\mathbb{N} \\to \\mathbb{N}"
},
{
"math_id": 9,
"text": "O(t^2(n))"
},
{
"math_id": 10,
"text": "\\subseteq"
},
{
"math_id": 11,
"text": "\\subsetneq"
},
{
"math_id": 12,
"text": "\\neq"
},
{
"math_id": 13,
"text": "\\Sigma_2^{\\mathsf P}"
}
]
| https://en.wikipedia.org/wiki?curid=7087423 |
70877137 | Blayne Heckel | American experimental physicist
Blayne Ryan Heckel (born March 20, 1953) is an American experimental physicist whose research involved precision measurements in atomic physics and gravitational physics. He is a professor emeritus at the University of Washington in Seattle.
Education and career.
At Harvard University he graduated with an A.B. in 1975 and a Ph.D. in 1981. His doctoral dissertation was supervised by Norman Ramsey. Heckel became an assistant professor in 1983, an associate professor in 1987, and a full professor in 1991 at the University of Washington, where he was temporarily head of the physics department. In 1986 he founded there, with Eric Adelberger, a group for experimental gravitational physics (Eöt-Wash Group), which Jens H. Gundlach joined in 1990. They further developed torsion balances in the style of Eötvös and used them to study the possible deviation of the gravitational force from Newton's formula_0 law at small distances formula_1 (down to 50 micrometers). The Eöt-Wash group searched for possible new fundamental forces involving a fifth force, large extra dimensions, or the effects of dark matter and dark energy, as well as possible violations of the equivalence principle at small distances.
In the 1980s his research dealt with experimental atomic physics, including searches for violations of parity and violation of time reversal invariance, by means of measuring upper limits for the electric dipole moment of atoms such as 199Hg. He worked with his doctoral advisor Norman Ramsey, among others.
Heckel and his colleagues also measured the coupling constants of the weak interaction of neutrons with nucleons. His team used beams of cold polarized neutrons from the National Institute of Standards and Technology (NIST) reactor to bombard a liquid helium target, measuring the parity-violating spin rotation of the beam polarization.
In 2012 he was elected a member of the Washington State Academy of Sciences (WSAS). In 2021 he was awarded, jointly with Eric Adelberger and Jens H. Gundlach, the Breakthrough Prize in Fundamental Physics for "precision fundamental measurements that test our understanding of gravity, probe the nature of dark energy, and establish limits on couplings to dark matter."
His doctoral students include Christopher Stubbs. | [
{
"math_id": 0,
"text": "F = G \\frac{m_1 m_2}{r^2}"
},
{
"math_id": 1,
"text": "r"
}
]
| https://en.wikipedia.org/wiki?curid=70877137 |
708814 | Gordon Jenkins | American arranger, composer, and pianist
Gordon Hill Jenkins (May 12, 1910 – May 1, 1984) was an American arranger, composer, and pianist who was influential in popular music in the 1940s and 1950s. Jenkins worked with The Andrews Sisters, Johnny Cash, The Weavers, Frank Sinatra, Louis Armstrong, Judy Garland, Nat King Cole, Billie Holiday, Harry Nilsson, Peggy Lee and Ella Fitzgerald.
Biography.
Career.
Gordon Jenkins was born in Webster Groves, Missouri. He began his career writing arrangements for a radio Station in St. Louis. He was hired by Isham Jones, the director of a dance band known for its ensemble playing, which gave Jenkins the opportunity to develop his skills in melodic scoring. He also conducted "The Show Is On" on Broadway.
After the Jones band broke up in 1936, Jenkins worked as a freelance arranger and songwriter, contributing to sessions by Isham Jones, Paul Whiteman, Benny Goodman, Andre Kostelanetz, Lennie Hayton, and others. In 1938, Jenkins moved to Hollywood and worked for Paramount Pictures and NBC, and then became Dick Haymes' arranger for four years. In 1944, Jenkins had a hit song with "San Fernando Valley". In the 1940s, he was music director for the radio version of the program "Mayor of the Town", and his orchestra provided the music for Ransom Sherman's program on CBS.
In 1945, Jenkins joined Decca Records. In 1947, he had his first million-seller with "Maybe You'll Be There" featuring vocalist Charles LaVere and, in 1949, had a hit with Victor Young's film theme "My Foolish Heart", which was also a success for Billy Eckstine. At the same time, he regularly arranged for and conducted the orchestra for various Decca artists, including Dick Haymes ("Little White Lies", 1947), Ella Fitzgerald ("Happy Talk", 1949, "Black Coffee", 1949, "Baby", 1954), Billie Holiday ("Crazy He Calls Me", "You're My Thrill", "Please Tell Me Now", "Somebody's on My Mind", 1949, and conducted and produced her last Decca session with "God Bless the Child", "This Is Heaven to Me", 1950), Patty Andrews of the Andrews Sisters ("I Can Dream, Can't I", 1949) and Louis Armstrong ("Blueberry Hill", 1949 and "When It's Sleepy Time Down South", 1951).
Jenkins wrote the score for the Broadway revue "Along Fifth Avenue", starring Nancy Walker and Jackie Gleason, which ran for 180 performances in 1949.
The liner notes to Verve Records' 2001 reissue of one of Jenkins' albums with Armstrong, "Satchmo In Style", quote Decca's A&R Director Milt Gabler, saying that Jenkins "stood up on his little podium so that all the performers could see him conduct. But before he gave a downbeat, Gordon made a speech about how much he loved Louis and how this was the greatest moment in his life. And then he cried."
During this time, Jenkins also began recording and performing under his own name. One of his enduring works while at Decca was a pair of Broadway-style musical vignettes, "Manhattan Tower" and "California" which saw release several times (78s, 45s, and LP) in the 1940s and 1950s. The two were paired on a very early Decca LP in 1949, and Jenkins was given the Key to New York City by its mayor when Jenkins's orchestra performed the 16-minute suite on "The Ed Sullivan Show" in the early 1950s. "Manhattan Tower" was also a Patti Page LP album, issued by Mercury Records as catalog number MG-20226 in 1956. It is her version of Gordon Jenkins' popular 1948/1956 "Manhattan Tower" suite and the album charted at No. 18 on the Billboard charts. The album was reissued, combined with the 1956 Patti Page album "You Go to My Head", in compact disc format, by Sepia Records on September 4, 2007. Jenkins also made a rare excursion into film work in 1952 when he scored the action film "Bwana Devil", the first 3-D movie shot in color.
His "Seven Dreams" released in 1953 included "Crescent City Blues", which was the source for Johnny Cash's popular recording, "Folsom Prison Blues". In 1956, he expanded "Manhattan Tower" to almost three times its length, released it (this time on Capitol Records), and performed it on an hour-long television show. (Both versions of "Manhattan Tower" are currently available on CD.) His final long-form work was "The Future", which made up the entire third disk of Frank Sinatra's 1980 Grammy-nominated "" album. Although the piece was savaged by critics, Sinatra reportedly loved the semi-biographical work and felt that Jenkins was treated unfairly by the media.
Jenkins headlined New York's Capitol Theater between 1949 and 1951 and the Paramount Theater in 1952. He appeared in Las Vegas in 1953 and many times thereafter. He worked for NBC as a TV producer from 1955 to 1957, and performed at the Hollywood Bowl in 1964. By 1949, Jenkins was musical director at Decca, and he signed – despite resistance from Decca's management – the Weavers, a Greenwich Village folk ensemble that included Pete Seeger among its members. The combination of the Weavers' folk music with Jenkins' orchestral arrangements became popular. Their most notable collaboration was a version of Lead Belly's "Goodnight Irene" (1950) backed by Jenkins' adaptation of the Israeli folk song, "Tzena, Tzena, Tzena". Other notable songs they recorded together are "The Roving Kind", "On Top of Old Smoky" (1951), and "Wimoweh" (1952).
Also while at Decca Records Jenkins arranged and conducted several songs for Peggy Lee including her 1952 major hit recording of Rodgers and Hart's "Lover", which she also performed in the Warner Bros. remake of "The Jazz Singer" (1952 film). Lee also had chart successes with the Jenkins-arranged "Be Anything (But Be Mine)" and "Just One of Those Things".
After a brief stint with RCA's "X" Records which produced the album "Gordon Jenkins' Almanac" in 1956, Jenkins was hired by Capitol, where he worked with Frank Sinatra, notably on the albums "Where Are You?" (1957) and "No One Cares" (1959), and Nat King Cole, with whom he had his greatest successes; Jenkins was responsible for the lush arrangements on the 1957 album "Love Is the Thing" (Capitol's first stereo release, which included "When I Fall in Love", and "Star Dust" two of Cole's best-known recordings), as well as the albums "The Very Thought of You" (1958) and "Where Did Everyone Go?" (1963). Jenkins also wrote the music and lyrics for Judy Garland's 1959 album "The Letter" which also featured vocalist Charles LaVere, and conducted several of Garland's London concerts in the early 1960s.
Whilst most of Jenkins' arrangements at Capitol were in his distinctive string-laden style, he continued to demonstrate more versatility when required, particularly on albums such as "A Jolly Christmas From Frank Sinatra" (1957), which opens with a swinging version of "Jingle Bells", and Nat King Cole's album of spirituals, "Every Time I Feel The Spirit" (1960), which includes several tracks with a pronounced formula_0 beat that might almost be described as rock. He also produced a diverse set of charts for his critically acclaimed 1960 album "Gordon Jenkins Presents Marshal Royal", a jazz-pop crossover project with Count Basie's alto saxophonist which included both strings and a swinging rhythm section.
However, as rock and roll gained ascendancy in the 1960s, Jenkins' lush string arrangements fell out of favor and he worked only sporadically. However, Sinatra, who had left Capitol to start his own label, Reprise Records, continued to call upon the arranger's services at various intervals over the next two decades, on albums such as "All Alone" (1962), "September of My Years" (1965), for which Jenkins won a Grammy Award, "Ol' Blue Eyes Is Back" (1973), and "She Shot Me Down" (1981). Jenkins also worked with Harry Nilsson, arranging and conducting "A Little Touch of Schmilsson in the Night" (1973), an album of jazz standards. The Nilsson sessions, with Jenkins conducting, were recorded on video and later broadcast as a television special by the BBC.
Although best known as an arranger, Jenkins also wrote several well-known songs, including "P.S. I Love You", "Goodbye" (Benny Goodman's sign-off tune), "Blue Prelude" (with Joe Bishop), "This Is All I Ask", and "When a Woman Loves a Man". Jenkins also composed both the "Future" suite and the entire "Future" section of Sinatra's 1980 concept album "", and scored the music for the 1980 film "The First Deadly Sin", which starred Sinatra in his last major film role.
Personal life.
Jenkins married high school sweetheart Nancy Harkey in 1931 and had three children: Gordon Jr., Susan, and Page. In 1946, he divorced Harkey and married Beverly Mahr, one of the singers in his band. They had a son, Bruce. Jenkins also recorded an album with Beverly Jenkins for Impulse! in 1964, entitled "Gordon Jenkins Presents My Wife The Blues Singer".
Toward the end of his life, he was in a near-fatal automobile accident, which left him debilitated. Nonetheless, he conducted a full orchestra for a recording session in spite of his pain.
Jenkins died of Lou Gehrig's disease in Malibu, California, eleven days shy of his 74th birthday.
His son, sports writer Bruce Jenkins, wrote a biography of his late father in 2005, titled "Goodbye: In Search of Gordon Jenkins", which includes a rare interview with Frank Sinatra, among others, for insights into Jenkins' process.
Jenkins' granddaughter, singer/songwriter Ella Dawn Jenkins, is a career musician in San Francisco.
Awards.
In 1966, Jenkins received a Grammy Award for Best Instrumental Arrangement Accompanying Vocalist(s) for Frank Sinatra's rendition of the song "It Was a Very Good Year".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\textstyle\\frac{2}{4}"
}
]
| https://en.wikipedia.org/wiki?curid=708814 |
7088631 | Finite-difference frequency-domain method | Numerical solution method of computational electromagnetics
The finite-difference frequency-domain (FDFD) method is a numerical solution method for problems usually in electromagnetism and sometimes in acoustics, based on finite-difference approximations of the derivative operators in the differential equation being solved.
While "FDFD" is a generic term describing all frequency-domain finite-difference methods, the title seems to mostly describe the method as applied to scattering problems. The method shares many similarities to the finite-difference time-domain (FDTD) method, so much so that the literature on FDTD can be directly applied. The method works by transforming Maxwell's equations (or other partial differential equation) for sources and fields at a constant frequency into matrix form formula_0. The matrix "A" is derived from the wave equation operator, the column vector "x" contains the field components, and the column vector "b" describes the source. The method is capable of incorporating anisotropic materials, but off-diagonal components of the tensor require special treatment.
Strictly speaking, there are at least two categories of "frequency-domain" problems in electromagnetism. One is to find the response to a current density J with a constant frequency ω, i.e. of the form formula_1, or a similar time-harmonic source. This "frequency-domain response" problem leads to an formula_0 system of linear equations as described above. An early description of a frequency-domain response FDFD method to solve scattering problems was published by Christ and Hartnagel (1987). Another is to find the normal modes of a structure (e.g. a waveguide) in the absence of sources: in this case the frequency ω is itself a variable, and one obtains an eigenproblem formula_2 (usually, the eigenvalue λ is ω^2). An early description of an FDFD method to solve electromagnetic eigenproblems was published by Albani and Bernardi (1974).
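As a small illustration of the matrix formulation, the following Python sketch assembles and solves a one-dimensional frequency-domain response system. The whole setup is an assumed toy problem: a scalar Helmholtz equation d^2E/dx^2 + k0^2 eps(x) E = f with zero Dirichlet boundaries, no PML, and illustrative parameter values.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, dx = 200, 0.05                      # grid points and spacing (assumed)
k0 = 2.0 * np.pi / 1.0                 # free-space wavenumber, wavelength 1.0
eps = np.ones(n)                       # relative permittivity profile
eps[120:160] = 4.0                     # an assumed dielectric slab

diag = -2.0 / dx**2 + k0**2 * eps      # finite-difference wave operator A
off = np.ones(n - 1) / dx**2
A = sp.diags([off, diag, off], [-1, 0, 1], format="csc")

b = np.zeros(n)
b[20] = 1.0                            # point source: the column vector b
E = spla.spsolve(A, b)                 # field response x at this frequency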
Comparison with FDTD and FEM.
The FDFD method is very similar to the finite element method (FEM), though there are some major differences. Unlike the FDTD method, there are no time steps that must be computed sequentially, thus making FDFD easier to implement. This might also lead one to imagine that FDFD is less computationally expensive; however, this is not necessarily the case. The FDFD method requires solving a sparse linear system, which even for simple problems can be 20,000 by 20,000 elements or larger, with over a million unknowns. In this respect, the FDFD method is similar to the FEM, which also requires solving a sparse linear system and is usually implemented in the frequency domain. There are efficient numerical solvers available so that matrix inversion—an extremely computationally expensive process—can be avoided. Additionally, model order reduction techniques can be employed to reduce problem size.
FDFD, and FDTD for that matter, does not lend itself well to complex geometries or multiscale structures, as the Yee grid is restricted mostly to rectangular structures. This can be circumvented by either using a very fine grid mesh (which increases computational cost), or by approximating the effects with surface boundary conditions. Non-uniform gridding can lead to spurious charges at the interface boundary, as the zero divergence conditions are not maintained when the grid is not uniform along an interface boundary. E and H field continuity can be maintained to circumvent this problem by enforcing weak continuity across the interface using basis functions, as is done in FEM. Perfectly matched layer (PML) boundary conditions can also be used to truncate the grid, and avoid meshing empty space.
Susceptance element equivalent circuit.
The FDFD equations can be rearranged in such a way as to describe a second-order equivalent circuit, where nodal voltages represent the E-field components and branch currents represent the H-field components. This equivalent circuit representation can be extremely useful, as techniques from circuit theory can be used to analyze or simplify the problem, and the representation can serve as a SPICE-like tool for three-dimensional electromagnetic simulation. This susceptance element equivalent circuit (SEEC) model has the advantages of a reduced number of unknowns, since only the E-field components must be solved for, and of admitting second-order model order reduction techniques.
Applications.
The FDFD method has been used to provide full wave simulation for modeling interconnects for various applications in electronic packaging. FDFD has also been used for various scattering problems at optical frequencies.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Ax = b"
},
{
"math_id": 1,
"text": "\\mathbf{J}(\\mathbf{x}) e^{i\\omega t}"
},
{
"math_id": 2,
"text": "Ax = \\lambda x"
}
]
| https://en.wikipedia.org/wiki?curid=7088631 |
70887641 | Chan–Karolyi–Longstaff–Sanders process | In mathematics, the Chan–Karolyi–Longstaff–Sanders process (abbreviated as CKLS process) is a stochastic process with applications to finance. In particular it has been used to model the term structure of interest rates. The CKLS process can also be viewed as a generalization of the Ornstein–Uhlenbeck process. It is named after K. C. Chan, G. Andrew Karolyi, Francis A. Longstaff, and Anthony B. Sanders, with their paper published in 1992.
Definition.
The CKLS process formula_0 is defined by the following stochastic differential equation:
formula_1
where formula_2 denotes the Wiener process. The CKLS process has the following equivalent definition:
formula_3
where the two parameterizations are related by α = "ka" and β = −"k".
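As a concrete illustration, the process can be simulated with a simple Euler–Maruyama discretization of the defining SDE. The sketch below is not the estimation procedure of the original paper; the parameter values are purely hypothetical, and clipping the state at zero is a pragmatic device to keep the diffusion term real-valued.

```python
import numpy as np

def simulate_ckls(x0, alpha, beta, sigma, gamma, T=1.0, n_steps=1000, seed=0):
    """Euler–Maruyama scheme for dX = (alpha + beta*X) dt + sigma * X**gamma dW."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))              # Wiener increment
        drift = alpha + beta * x[i]
        diffusion = sigma * max(x[i], 0.0) ** gamma    # clip at 0 to keep X**gamma real
        x[i + 1] = x[i] + drift * dt + diffusion * dw
    return x

# Hypothetical CIR-like parameters (gamma = 1/2), chosen only for illustration.
path = simulate_ckls(x0=0.05, alpha=0.01, beta=-0.2, sigma=0.1, gamma=0.5)
print(f"terminal value: {path[-1]:.4f}")
```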
Special cases.
Many interest rate models and short-rate models are special cases of the CKLS process, which can be obtained by setting the CKLS model parameters to specific values. In all cases, formula_7 is assumed to be positive. For example, the Vasicek (Ornstein–Uhlenbeck) model corresponds to γ = 0, the square-root Cox–Ingersoll–Ross model to γ = 1/2, and the Brennan–Schwartz model to γ = 1.
Financial applications.
The CKLS process is often used to model interest rate dynamics and pricing of bonds, bond options, currency exchange rates, securities, and other options, derivatives, and contingent claims. It has also been used in the pricing of fixed income and credit risk and has been combined with other time series methods such as GARCH-class models.
One question studied in the literature is how to set the model parameters, in particular the elasticity parameter formula_5. Robust statistics and nonparametric estimation techniques have been used to measure CKLS model parameters.
In their original paper, CKLS argued that the elasticity of interest rate volatility is 1.5 based on historical data, a result that has been widely cited. Also, they showed that models with formula_8 can model short-term interest rates more accurately than models with formula_9.
Later empirical studies by Bliss and Smith have shown the reverse: sometimes lower formula_5 values (like 0.5) in the CKLS model capture volatility dependence more accurately than higher formula_5 values. Moreover, by redefining the regime period, Bliss and Smith found evidence for a regime shift in Federal Reserve policy between 1979 and 1982. They found evidence supporting the square-root Cox–Ingersoll–Ross model (CIR SR), a special case of the CKLS model with formula_10.
The period 1979–1982 marked a change in the monetary policy of the Federal Reserve, and this regime change has often been studied in the context of CKLS models.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X_t"
},
{
"math_id": 1,
"text": "dX_t = (\\alpha + \\beta X_t) dt + \\sigma X_t^{\\gamma}dW_t "
},
{
"math_id": 2,
"text": "W_t "
},
{
"math_id": 3,
"text": "dX_t = -k(X_t - a) dt + \\sigma X_t^{\\gamma}dW_t "
},
{
"math_id": 4,
"text": "X_t^{2(1-\\gamma)}"
},
{
"math_id": 5,
"text": "\\gamma"
},
{
"math_id": 6,
"text": "\\alpha, \\beta, \\sigma, \\gamma"
},
{
"math_id": 7,
"text": "\\sigma"
},
{
"math_id": 8,
"text": "\\gamma \\ge 1 "
},
{
"math_id": 9,
"text": "\\gamma < 1 "
},
{
"math_id": 10,
"text": "\\gamma = 1/2"
}
]
| https://en.wikipedia.org/wiki?curid=70887641 |
7088921 | Abel's test | Test for series convergence
In mathematics, Abel's test (also known as Abel's criterion) is a method of testing for the convergence of an infinite series. The test is named after mathematician Niels Henrik Abel, who proved it in 1826. There are two slightly different versions of Abel's test – one is used with series of real numbers, and the other is used with power series in complex analysis. Abel's uniform convergence test is a criterion for the uniform convergence of a series of functions dependent on parameters.
Abel's test in real analysis.
Suppose the following statements are true:
1. formula_0 is a convergent series,
2. formula_1 is a monotone and bounded sequence.
Then formula_2 is also convergent.
It is important to understand that this test is mainly pertinent and useful in the context of series formula_3 that are not absolutely convergent. For absolutely convergent series the theorem, albeit true, is almost self-evident.
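For example, take "a""n" = (−1)"n"/"n", so that formula_0 is the convergent alternating harmonic series, and "b""n" = "n"/("n" + 1), which is monotone increasing and bounded by 1. Abel's test then guarantees the convergence of Σ(−1)"n"/("n" + 1). (In this simple illustration the conclusion could also be reached with the Leibniz criterion, but Abel's test applies equally when formula_1 has no such closed form.)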
This theorem can be proved directly using summation by parts.
Abel's test in complex analysis.
A closely related convergence test, also known as Abel's test, can often be used to establish the convergence of a power series on the boundary of its circle of convergence. Specifically, Abel's test states that if a sequence of "positive real numbers" formula_4 is decreasing monotonically (or at least that for all "n" greater than some natural number "m", we have formula_5) with
formula_6
then the power series
formula_7
converges everywhere on the closed unit circle, except when "z" = 1. Abel's test cannot be applied when "z" = 1, so convergence at that single point must be investigated separately. Notice that Abel's test implies in particular that the radius of convergence is at least 1. It can also be applied to a power series with radius of convergence "R" ≠ 1 by a simple change of variables "ζ" = "z"/"R". Notice that Abel's test is a generalization of the Leibniz Criterion by taking "z" = −1.
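As a numerical illustration (not part of the classical sources), one can watch the partial sums settle for the hypothetical coefficient sequence "a""n" = 1/("n" + 1), which is positive and decreases to zero:

```python
import numpy as np

def partial_sum(z, n_terms):
    """Partial sum of sum_{n>=0} z**n / (n + 1) at a complex point z."""
    n = np.arange(n_terms)
    return np.sum(z**n / (n + 1))

z = np.exp(1j * np.pi / 3)   # a point on the unit circle with z != 1
for n_terms in (10_000, 20_000, 40_000):
    print(n_terms, partial_sum(z, n_terms))

# The values stabilize near -log(1 - z)/z, consistent with convergence,
# while at z = 1 the same series is the harmonic series and diverges.
print("limit:", -np.log(1 - z) / z)
```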
Proof of Abel's test: Suppose that "z" is a point on the unit circle, "z" ≠ 1. For each formula_8, we define
formula_9
By multiplying this function by (1 − "z"), we obtain
formula_10
The first summand is constant, the second converges uniformly to zero (since by assumption the sequence formula_4 converges to zero). It only remains to show that the series converges. We will show this by showing that it even converges absolutely:
formula_11
where the last sum is a converging telescoping sum. The absolute value signs could be dropped because the sequence formula_4 is decreasing by assumption.
Hence, the sequence formula_12 converges (even uniformly) on the closed unit disc. If formula_13, we may divide by (1 − "z") and obtain the result.
Another way to obtain the result is to apply Dirichlet's test. Indeed, formula_15 holds for formula_14, hence the assumptions of Dirichlet's test are fulfilled.
Abel's uniform convergence test.
Abel's uniform convergence test is a criterion for the uniform convergence of a series of functions or an improper integration of functions dependent on parameters. It is related to Abel's test for the convergence of an ordinary series of real numbers, and the proof relies on the same technique of summation by parts.
The test is as follows. Let {"g""n"} be a uniformly bounded sequence of real-valued continuous functions on a set "E" such that "g""n"+1("x") ≤ "g""n"("x") for all "x" ∈ "E" and positive integers "n", and let {"f""n"} be a sequence of real-valued functions such that the series Σ"f""n"("x") converges uniformly on "E". Then Σ"f""n"("x")"g""n"("x") converges uniformly on "E". For instance, on "E" = [0, 1] one may take "f""n"("x") = (−1)"n"/"n", whose series converges uniformly because it does not depend on "x", and "g""n"("x") = "x""n", which is bounded by 1 and decreasing in "n" for each fixed "x"; the test then yields the uniform convergence of Σ(−1)"n""x""n"/"n" on [0, 1]. | [
{
"math_id": 0,
"text": "\\sum a_n "
},
{
"math_id": 1,
"text": "b_n "
},
{
"math_id": 2,
"text": "\\sum a_nb_n "
},
{
"math_id": 3,
"text": "\\sum a_n"
},
{
"math_id": 4,
"text": "(a_n)"
},
{
"math_id": 5,
"text": "a_n \\geq a_{n+1}"
},
{
"math_id": 6,
"text": "\n\\lim_{n\\rightarrow\\infty} a_n = 0\n"
},
{
"math_id": 7,
"text": "\nf(z) = \\sum_{n=0}^\\infty a_nz^n\n"
},
{
"math_id": 8,
"text": "n\\geq1"
},
{
"math_id": 9,
"text": "\nf_n(z):=\\sum_{k=0}^n a_k z^k.\n"
},
{
"math_id": 10,
"text": "\n\\begin{align}\n(1-z)f_n(z) & = \\sum_{k=0}^n a_k (1-z)z^k \n = \\sum_{k=0}^n a_k z^k - \\sum_{k=0}^n a_k z^{k+1} \n = a_0 + \\sum_{k=1}^n a_k z^k - \\sum_{k=1}^{n+1} a_{k-1} z^k \\\\\n& = a_0 - a_n z^{n+1} + \\sum_{k=1}^n (a_k - a_{k-1}) z^k .\n\\end{align}\n"
},
{
"math_id": 11,
"text": "\n\\sum_{k=1}^\\infty \\left|(a_k - a_{k-1}) z^k \\right| = \\sum_{k=1}^\\infty |a_k-a_{k-1}|\\cdot |z|^k \\leq \\sum_{k=1}^\\infty (a_{k-1}-a_{k})\n"
},
{
"math_id": 12,
"text": "(1-z)f_n(z)"
},
{
"math_id": 13,
"text": "z\\not = 1"
},
{
"math_id": 14,
"text": "z\\ne 1,\\ |z|=1"
},
{
"math_id": 15,
"text": "\\left|\\sum_{k=0}^n z^k\\right|=\\left|\\frac{z^{n+1}-1}{z-1}\\right|\\le \\frac{2}{|z-1|}"
}
]
| https://en.wikipedia.org/wiki?curid=7088921 |
70891828 | Tammann and Hüttig temperatures | Chemical properties of materials
The Tammann temperature (also spelled Tamman temperature) and the Hüttig temperature of a given solid material are approximations to the absolute temperatures at which atoms in a bulk crystal lattice (Tammann) or on the surface (Hüttig) of the solid material become sufficiently mobile to diffuse readily, and are consequently more chemically reactive and susceptible to recrystallization, agglomeration, or sintering. These temperatures are equal to one-half (Tammann) or one-third (Hüttig) of the absolute temperature of the compound's melting point, with the absolute temperatures usually measured in kelvins.
Tammann and Hüttig temperatures are important considerations in catalytic activity, segregation, and sintering of solid materials. The Tammann temperature is important for reactive compounds like explosives and fuel oxidizers, such as potassium chlorate (KClO3, "T"Tammann = 42 °C), potassium nitrate (KNO3, "T"Tammann = 31 °C), and sodium nitrate (NaNO3, "T"Tammann = 17 °C), which may unexpectedly react at much lower temperatures than their melting or decomposition temperatures.
The bulk compounds should be contrasted with nanoparticles, which exhibit melting-point depression, meaning that they have significantly lower melting points than the bulk material and correspondingly lower Tammann and Hüttig temperatures. For instance, 2 nm gold nanoparticles melt at only about 327 °C, in contrast to 1065 °C for bulk gold.
History.
The Tammann temperature was pioneered by the German solid-state chemistry and physics professor Gustav Tammann in the first half of the 20th century. He considered lattice motion very important for the reactivity of matter and quantified his theory through the ratio of a material's absolute temperature to its absolute melting temperature: a fixed fraction of the melting point yields the Tammann temperature. The value is usually measured in kelvins (K):
formula_0
where formula_1 is a constant dimensionless number.
The threshold temperature for activation and diffusion of atoms at surfaces was studied by Gustav F. Hüttig, a physical chemist on the faculty of Graz University of Technology, who wrote in 1948 (translated from German):
<templatestyles src="Template:Blockquote/styles.css" />In the solid state the atoms oscillate about their position in the lattice. ... There are always some atoms which happen to be highly energized. Such an atom may become dislodged and switch places with another one (exchange reaction) or it may, for a time, travel about aimlessly. ... the number of diffusing atoms increases with rising temperature, first slowly, and in the higher temperature ranges more rapidly. For every metal there is a definite temperature at which the exchange process is suddenly accelerated. The relationship between this temperature and the melting point in degrees K is constant for all metals. ... On the basis of these elementary processes, sintering is analyzed in relation to the coefficient α which is the fraction of the melting point in degrees K ... When α is between 0.23 and 0.36, activation as a result of the surface diffusion takes place. Loosening or release of adsorbed gasses occurs simultaneously.
Description.
The Hüttig temperature for a given material is
formula_2
where formula_3 is the absolute temperature of the material's bulk melting point (usually specified in kelvins) and formula_4 is a unitless constant that is independent of the material, having the value formula_5 according to some sources, or formula_6 according to others. It is an approximation to the temperature necessary for metal or metal-oxide surfaces to show significant atomic diffusion along the surface, sintering, and surface recrystallization. Desorption of adsorbed gases and chemical reactivity of the surface often increase markedly as the temperature rises above the Hüttig temperature.
The Tammann temperature for a given material is
formula_7
where formula_8 is a unitless constant usually taken to be formula_9, regardless of the material. It is an approximation to the temperature necessary for mobility and diffusion of atoms, ions, and defects within a bulk crystal. Bulk chemical reactivity often increases markedly as the temperature is increased above the Tammann temperature.
Examples.
The following table gives example Tammann and Hüttig temperatures, calculated from each compound's melting point "T"mp according to:
"T"Tammann = 0.5 × "T"mp
"T"Hüttig = 0.3 × "T"mp
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T_{\\text{Tammann}} ={\\beta} {\\times} T_{\\text{melting point}} (\\text{in K})"
},
{
"math_id": 1,
"text": "{\\beta}"
},
{
"math_id": 2,
"text": "T_{\\mathrm{\\text{Hüttig}}} = \\alpha \\times T_{\\mathrm{mp}}"
},
{
"math_id": 3,
"text": "T_{\\text{mp}}"
},
{
"math_id": 4,
"text": "\\alpha"
},
{
"math_id": 5,
"text": "\\alpha=0.3"
},
{
"math_id": 6,
"text": "\\alpha=1/3"
},
{
"math_id": 7,
"text": "T_{\\mathrm{Tammann}} = \\beta \\times T_{\\mathrm{mp}}"
},
{
"math_id": 8,
"text": "\\beta"
},
{
"math_id": 9,
"text": "0.5"
}
]
| https://en.wikipedia.org/wiki?curid=70891828 |
70894761 | Ideal reduction | Concept in commutative algebra
Reduction theory goes back to the influential 1954 paper by Northcott and Rees that introduced the basic notions. In algebraic geometry, the theory is among the essential tools for extracting detailed information about the behavior of blow-ups.
Given ideals "J" ⊂ "I" in a ring "R", the ideal "J" is said to be a "reduction" of "I" if there is some integer "m" > 0 such that formula_0. For such ideals, immediately from the definition, the following hold:
If "R" is a Noetherian ring, then "J" is a reduction of "I" if and only if the Rees algebra "R"["It"] is finite over "R"["Jt"]. (This is the reason for the relation to a blow up.)
A closely related notion is that of analytic spread. By definition, the fiber cone ring of a Noetherian local ring ("R", formula_2) along an ideal "I" is
formula_3.
The Krull dimension of formula_4 is called the "analytic spread" of "I". Given a reduction formula_5, the minimum number of generators of "J" is at least the analytic spread of "I". Also, a partial converse holds for infinite fields: if formula_6 is infinite and if the integer formula_7 is the analytic spread of "I", then each reduction of "I" contains a reduction generated by formula_7 elements.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "JI^m = I^{m+1}"
},
{
"math_id": 1,
"text": "J^k I^m = J^{k-1}I^{m+1} = \\cdots = I^{m+k}"
},
{
"math_id": 2,
"text": "\\mathfrak{m}"
},
{
"math_id": 3,
"text": "\\mathcal{F}_I(R) = R[It] \\otimes_R \\kappa(\\mathfrak{m}) \\simeq \\bigoplus_{n=0}^{\\infty} I^n/\\mathfrak{m} I^n"
},
{
"math_id": 4,
"text": "\\mathcal{F}_I(R)"
},
{
"math_id": 5,
"text": "J \\subset I"
},
{
"math_id": 6,
"text": "R/\\mathfrak m"
},
{
"math_id": 7,
"text": "\\ell"
}
]
| https://en.wikipedia.org/wiki?curid=70894761 |
70898035 | Jens H. Gundlach | German physicist
Jens Horst Gundlach (born 1961 in Würzburg) is a German physicist.
Biography.
His father was Gerd Gundlach, a biochemistry professor in Gießen. Jens Gundlach studied physics at the University of Mainz with "Vordiplom" (intermediate "Diplom") in 1982 and "Diplom" in 1986. After the "Vordiplom" he studied for a year in Seattle at the University of Washington. He received his doctorate there in 1990 under the supervision of Kurt Snover (1943–2021) with a dissertation entitled "Shapes of excited rotating medium-mass nuclei determined from giant dipole resonance decays". As a postdoc he was a research associate under the supervision of Eric Adelberger and Blayne Heckel at the University of Washington from 1990 to 1993. As a member of the Eöt-Wash Group, named in honor of Loránd Eötvös, Gundlach did research in experimental gravitational physics. With his colleagues he searched for the confirmation or refutation of a hypothetical fifth force, which might cause deviations from Newtonian gravity, depend on material properties and violate the equivalence principle. At the University of Washington, Gundlach was from 1993 to 1998 an assistant professor and from 1998 to 2004 an associate professor with promotion to full professor in 2004. He has been a member of the University of Washington's Center for Experimental Nuclear Physics and Astrophysics since its founding in 2000.
In the Eöt-Wash Group, Gundlach built torsion balances to test the equivalence principle, test Newton's law of gravity at short distances, and measure the strength of gravity. The latter measurement consisted of an instrument that held a thin plate by a tungsten filament inside a high vacuum. This torsion balance was mounted on a turntable, inside an outer turntable with two spherical field masses facing each other. Compared to previous experiments with dumbbell-shaped pendulums on a torsion wire, Gundlach's measurement was not limited by how well the mass distribution of the pendulum was known. Additionally, the inner turntable was rotated in feedback so that the torsion thread was never twisted, despite the gravitational deceleration and acceleration by the masses on the outer turntable. The inner turntable's regulated rotation eliminated uncertainties from the inelasticity of the torsion thread, which had plagued previous measurements. In another experiment, Gundlach and his colleagues conducted the first test of the formula_0 form of the gravitational force down to values of formula_1 in the 50 micron range. In this range, according to various large-extra-dimension theories proposed by Nima Arkani-Hamed and other string theorists, deviations from four-dimensional Newtonian gravity (or signatures of higher-dimensional gravitational theories) could begin to appear, even without a cosmological constant.
In 2000 Gundlach succeeded in measuring the gravitational constant formula_2 with a new torsion balance technique that he had developed. Since 2006, the CODATA value for formula_2 has been based largely on this measurement.
He is a member of LIGO and LISA (Laser Interferometer Space Antenna), the planned satellite-based laser interferometric gravitational-wave detector, and has performed ultra-weak force measurements with the laser interferometers for gravitational-wave detection.
Since about 2002 Gundlach has done research in biophysics, pioneering nanopore sequencing technology. In 2008 his group demonstrated that a mutated version of the biological nanopore MspA could pass DNA and that it had the desired shape to identify individual nucleotides in single-stranded DNA. In 2012 the Gundlach group demonstrated functional nanopore sequencing using MspA and an enzyme to control the passage of the DNA through the pore. His group now uses this nanopore technology as a single-molecule tool to study how enzymes progress along DNA or RNA.
Gundlach received in 2001 the Francis M. Pipkin Award of the American Physical Society (APS). In 2009 he was elected a fellow of the APS in recognition of his "contributions to precision mechanical measurements and our quantitative understanding of the strength of gravity". In 2021 he received, jointly with Eric Adelberger and Blayne Heckel, the Breakthrough Prize in Fundamental Physics for "precision fundamental measurements that test our understanding of gravity, probe the nature of dark energy, and establish limits on couplings to dark matter".
He is married and has three children. | [
{
"math_id": 0,
"text": "\\frac {1}{r^2}"
},
{
"math_id": 1,
"text": "r"
},
{
"math_id": 2,
"text": "G"
}
]
| https://en.wikipedia.org/wiki?curid=70898035 |