Columns: id, title, text, formulas, url
69684849
1 Samuel 29
First Book of Samuel chapter. 1 Samuel 29 is the twenty-ninth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition, the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's escape from Saul's repeated attempts to kill him. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel. Text. This chapter was originally written in the Hebrew language. It is divided into 11 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls, including 4Q51 (4QSama; 100–50 BCE) with extant verse 1. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). The Philistines reject David (29:1–5). The Philistines mustered their forces at Aphek, ready to face Saul in the plain of Jezreel, when their commanders noticed the presence of 'Hebrews' in their ranks, easily distinguished by their clothing rather than by any racial characteristics. Probably remembering how the 'Hebrews' had defected at Michmash (1 Samuel 13–14), the Philistines were adamant not to allow David and his people to join their army, even more so as they still recalled the victory song which credited David with the death of "tens of thousands" of Philistines. "Now the Philistines gathered together all their armies to Aphek: and the Israelites pitched by a fountain which is in Jezreel." Achish sends David back to Ziklag (29:6–11). Pressured by other Philistine leaders, Achish was compelled to send David back to Ziklag, although he had never personally doubted David's loyalty and had even found David faultless, honest, and blameless 'as an angel of God' (verses 3, 6–7, 9–10). David declared his innocence to Achish and obeyed the command to return home, and was thereby spared from having to participate in the death of Saul and Jonathan. [Achish said to David] "Now therefore, rise early in the morning with your master's servants who have come with you. And as soon as you are up early in the morning and have light, depart." Verse 10. After "come with you", the Septuagint has "and go to the place which I have selected for you there; and set no bothersome word in your heart, for you are good before me. And rise on your way", which is not present in the Masoretic Text, Targum, or Latin Vulgate versions.
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=69684849
69684859
1 Samuel 30
First Book of Samuel chapter. 1 Samuel 30 is the thirtieth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition, the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's escape from Saul's repeated attempts to kill him. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel. Text. This chapter was originally written in the Hebrew language. It is divided into 31 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls, including 4Q51 (4QSama; 100–50 BCE) with extant verses 22–31. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). The Amalekites raid Ziklag (30:1–6). While Saul battled the Philistines, David returned to Ziklag only to find it burned by the Amalekites and its inhabitants carried away. The attack was probably in retaliation for David's raid on the Amalekites (1 Samuel 27:8, 10). David and his men lost their wives and families, causing great lamentation (verse 4) and even placing David in personal danger (verse 6). "Now when David and his men came to Ziklag on the third day, the Amalekites had made a raid against the Negeb and against Ziklag. They had overcome Ziklag and burned it with fire" "and had taken captive the women and those who were there, from small to great; they did not kill anyone, but carried them away and went their way." David destroys the Amalekites (30:7–20). One unique feature in the narrative is David's ability to consult YHWH, in contrast to Saul's illegal consultation at Endor. David 'strengthened himself in the LORD' (cf. 1 Samuel 23:16), contacted YHWH through Abiathar the priest and received a positive answer (verses 7–8), so he was encouraged to pursue the attackers. In a providential meeting, an exhausted Egyptian who had gone three days without food, after being revived with a fig-cake and raisins, provided David and his men with immediate information about the raiders of Ziklag, and even agreed to guide them down to the Amalekite camp. In another providential turn, David and his troops arrived just as the Amalekites were obliviously celebrating their victory with feasting, giving David a good opportunity for revenge; only 400 camel riders escaped. The captured families were recovered intact and their possessions were reclaimed, along with more booty. Moreover, David had avenged not only Ziklag but also other areas mentioned in verse 14: the Negeb of the Cherethites in the southern area controlled by the Philistines, the Negeb of Caleb, which was around Hebron, as well as Judean areas, forging a special bond with the people of those areas, as described later in 2 Samuel 2:1–4, when David becomes king of Judah.
"And nothing of theirs was lacking, either small or great, sons or daughters, spoil or anything which they had taken from them; David recovered all." Verse 19. Through this victory David rescued all that the Amalekites had taken, his two wives, his men's wives, and all the children great and small, as well as all stuffs that were taken from Ziklag, so that nothing was missing. Dividing the spoils (30:21–31). David's successful attack obtained so much booty that enabled him to hand over some as gifts to the people of Judah (verses 26–31). This act and his ruling on the suggestion made by 'worthless fellows' (verses 22–25) displayed David's readiness to assume the role of king. Thus, Saul's sparing the Amalekites led to his downfall, whereas David's successful attack led to his rise as a king who was obedient to God. "Now when David came to Ziklag, he sent some of the spoil to the elders of Judah, to his friends, saying, “Here is a present for you from the spoil of the enemies of the Lord”—" Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" /> Sources. Commentaries on Samuel. <templatestyles src="Refbegin/styles.css" /> General. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=69684859
69684867
1 Samuel 31
First Book of Samuel chapter. 1 Samuel 31 is the thirty-first (and the last) chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition, the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of the death of Saul and his sons in the battle against the Philistines on Mount Gilboa. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel. Text. This chapter was originally written in the Hebrew language. It is divided into 13 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls, including 4Q51 (4QSama; 100–50 BCE) with extant verses 1–4. Plate XIII of 4Q51 contains traces of verse 13, separated from 2 Samuel 1:1 by an open line. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). The death of Saul and his three sons (31:1–10). While David defeated the Amalekites, Saul and his army were defeated by the Philistines, contrasting David's success (with divine guidance and protection) in saving the lives of his own family and others with Saul's failure, which resulted in the death of his family, along with many others, in battle. Toward the end of the battle on Mount Gilboa, Saul's three sons, Jonathan, Abinadab, and Malchishua, were killed and Saul himself was wounded. He asked his trustworthy personal armor-bearer to kill him before the Philistines came, but due to his respect for Saul as YHWH's anointed, the armor-bearer refused, so Saul committed suicide. Saul's dishonorable end was followed by the total defeat of his troops, while other Israelites not in the battle (suggesting that Saul did not have all Israel behind him) fled from the neighboring areas, leaving their towns and villages for the Philistines to occupy, and by the disrespectful fate of his body: he was beheaded, his armor was taken into the temple of Astarte (the chief goddess of Beth-shan), and his body was then hung on the wall of Beth-shan for public display (1 Chronicles 10:10 states that his head was fastened to the temple of Dagon). "Then the Philistines followed hard after Saul and his sons. And the Philistines killed Jonathan, Abinadab, and Malchishua, Saul's sons." "Then Saul said to his armorbearer, "Draw your sword, and thrust me through with it, lest these uncircumcised men come and thrust me through and abuse me."" "But his armorbearer would not, for he was greatly afraid. Therefore Saul took a sword and fell on it." Jabesh-gilead's tribute to Saul (31:11–13). The men of Jabesh-gilead, remembering Saul's action on their behalf (1 Samuel 11:1–13), came to take the bodies of Saul and his sons for cremation and burial, a more honorable treatment than that given by the Philistines. "12 All the valiant men arose, and went all night, and took the body of Saul and the bodies of his sons from the wall of Bethshan, and came to Jabesh, and burnt them there."
"13 And they took their bones, and buried them under a tree at Jabesh, and fasted seven days." Verses 12–13. The brave action of the men, marching from Jabesh-Gilead to Beth-Shan and back (about one way), recalls the high point of Saul's leadership at the beginning of his reign when he saved the people of Jabesh-Gilead from foreign attacks (1 Samuel 11). Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" /> Sources. Commentaries on Samuel. <templatestyles src="Refbegin/styles.css" /> General. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=69684867
6968798
Headway
Distance between vehicles in a transit system measured in time or space. Headway is the distance or duration between vehicles in a transit system measured in space or time. The "minimum headway" is the shortest such distance or time achievable by a system without a reduction in the speed of vehicles. The precise definition varies depending on the application, but it is most commonly measured as the distance from the tip (front end) of one vehicle to the tip of the next one behind it. It can be expressed as the distance between vehicles, or as the time it will take for the trailing vehicle to cover that distance. A "shorter" headway signifies closer spacing between the vehicles. Airplanes operate with headways measured in hours or days, freight trains and commuter rail systems might have headways measured in parts of an hour, metro and light rail systems operate with headways on the order of 90 seconds to 20 minutes, and vehicles on a freeway can have as little as 2 seconds headway between them. Headway is a key input in calculating the overall route capacity of any transit system. A system that requires large headways has more empty space than passenger capacity, which lowers the total number of passengers or the quantity of cargo being transported for a given length of line (railroad or highway, for instance). In this case, the capacity has to be improved through the use of larger vehicles. At the other end of the scale, a system with short headways, like cars on a freeway, can offer relatively large capacities even though the vehicles carry few passengers. The term is most often applied to rail transport and bus transport, where low headways are often needed to move large numbers of people in mass transit railways and bus rapid transit systems. A lower headway requires more infrastructure, making lower headways expensive to achieve. Modern large cities require passenger rail systems with tremendous capacity, and low headways allow passenger demand to be met in all but the busiest cities. Newer signalling systems and moving block controls have significantly reduced headways in modern systems compared to the same lines only a few years ago. In principle, automated personal rapid transit systems and automobile platoons could reduce headways to as little as fractions of a second. Description. Different measures. There are a number of different ways to measure and express the same concept, the distance between vehicles. The differences are largely due to historical development in different countries or fields. The term developed from railway use, where the distance between the trains was very great compared to the length of the train itself. Measuring headway from the front of one train to the front of the next was simple and consistent with timetable scheduling of trains, but constraining tip-to-tip headway does not always ensure safety. In the case of a metro system, train lengths are uniformly short and the headway allowed for stopping is much longer, so tip-to-tip headway may be used with a minor safety factor. Where vehicle size varies and may be longer than the stopping distances or spacing, as with freight trains and highway applications, tip-to-tail measurements are more common. The units of measure also vary. The most common terminology is to use the time of passing from one vehicle to the next, which closely mirrors the way headways were measured in the past. A timer is started when one train passes a point, and then measures the time until the next one passes, giving the tip-to-tip time.
This same measure can also be expressed in terms of vehicles-per-hour, which is used on the Moscow Metro, for instance. Distance measurements are somewhat common in non-train applications, like vehicles on a road, but time measurements are common here as well. Railway examples. Train movements in most rail systems are tightly controlled by railway signalling systems. In many railways, drivers are given instructions on speeds and routes through the rail network. Trains can only accelerate and decelerate relatively slowly, so stopping from anything but low speeds requires several hundred metres or even more. The track distance required to stop is often much longer than the range of the driver's vision. If the track ahead is obstructed, for example if a train is stopped there, then the driver of the train behind will probably see it far too late to avoid a collision. Signalling systems serve to provide drivers with information on the state of the track ahead, so that a collision may be avoided. A side effect of this important safety function is that the headway of any rail system is effectively determined by the structure of the signalling system, and particularly the spacing between signals and the amount of information that can be provided in the signal. Rail system headways can be calculated from the signalling system. In practice there are a variety of different methods of keeping trains apart, some of which are manual, such as train order working or systems involving telegraphs, and others which rely entirely on signalling infrastructure to regulate train movements. Manual systems of working trains are common in areas with low numbers of train movements, and headways are more often discussed in the context of non-manual systems. For automatic block signalling (ABS), the headway is measured in minutes, and calculated from the time from the passage of a train to when the signalling system returns to full clear (proceed). It is not normally measured tip to tip. An ABS system divides the track into block sections, into which only one train can enter at a time. Commonly trains are kept two to three block sections apart, depending on how the signalling system is designed, and so the length of the block section will often determine the headway. Relying on visual contact as a method to avoid collision (such as during shunting) is done only at low speeds, like 40 km/h. A key safety factor of train operations is to space the trains out by at least their full stopping distance, the "brick-wall stop" criterion. In order to signal the trains in time to allow them to stop, the railways placed workmen on the lines who timed the passing of a train, and then signalled any following trains if a certain elapsed time had not passed. This is why train headways are normally measured as tip-to-tip times, because the clock was reset as the engine passed the workman. As remote signalling systems were invented, the workmen were replaced with signal towers at set locations along the track. This broke the track into a series of block sections between the towers. Trains were not allowed to enter a section until the signal said it was clear. This had the side-effect of limiting the maximum speed of the trains to the speed where they could stop in the distance of one block section. This was an important consideration for the Advanced Passenger Train in the United Kingdom, where the lengths of block sections limited speeds and demanded a new braking system be developed. There is no perfect block-section size for the block-control approach.
Longer sections, using as few signals as possible, are advantageous because signals are expensive and are points of failure, and they allow higher speeds because the trains have more room to stop. On the other hand, they also increase the headway, and thus reduce the overall capacity of the line. These needs have to be balanced on a case-by-case basis. Other examples. In the case of automobile traffic, the key consideration in braking performance is the driver's reaction time. Unlike the train case, the stopping distance is generally much shorter than the spotting distance. That means that the driver will be matching their speed to the vehicle in front before they reach it, eliminating the "brick-wall" effect. Widely used numbers are that a car traveling at 60 mph will require about 225 feet to stop, a distance it will cover in just under 6 seconds. Nevertheless, highway travel often occurs with considerable safety with tip-to-tail headways on the order of 2 seconds. That is because the driver's reaction time is about 1.5 seconds, so 2 seconds allows for a slight overlap that makes up for any difference in braking performance between the two cars. Various personal rapid transit systems in the 1970s considerably reduced the headways compared to earlier rail systems. Under computer control, reaction times can be reduced to fractions of a second. Whether traditional headway regulations should apply to PRT and car train technology is debatable. In the case of the Cabinentaxi system developed in Germany, headways were set to 1.9 seconds because the developers were forced to adhere to the brick-wall criterion. In experiments, they demonstrated headways on the order of half a second. In 2017, in the UK, 66% of cars and Light Commercial Vehicles, and 60% of motorcycles left the recommended two-second gap between themselves and other vehicles. Low-headway systems. Headway spacing is selected by various safety criteria, but the basic concept remains the same: leave enough time for the vehicle to safely stop behind the vehicle in front of it. The "safely stop" criterion has a non-obvious solution, however; if a vehicle follows immediately behind the one in front, the vehicle in front simply cannot stop quickly enough to damage the vehicle behind it. An example would be a conventional train, where the vehicles are held together and have only a few millimetres of "play" in the couplings. Even when the locomotive applies emergency braking, the cars following do not suffer any damage because they quickly close the gap in the couplings before the speed difference can build up. There have been many experiments with automated driving systems that follow this logic and greatly decrease headways to tenths or hundredths of a second in order to improve safety. Today, modern CBTC railway signalling systems are able to significantly reduce headway between trains in operation. Using automated "car follower" cruise control systems, vehicles can be formed into platoons (or flocks) that approximate the capacity of conventional trains. These systems were first employed as part of personal rapid transit research, but later using conventional cars with autopilot-like systems. Paris Métro Line 14 runs with headways as low as 85 seconds, while several lines of the Moscow Metro have peak hour headways of 90 seconds. Headway and route capacity.
Route capacity is defined by three figures: the number of passengers (or weight of cargo) per vehicle, the maximum safe speed of the vehicles, and the number of vehicles per unit time. Since the headway factors into two of the three inputs, it is a primary consideration in capacity calculations. The headway, in turn, is defined by the braking performance, or some external factor based on it, like block sizes. Following the methods in Anderson: Minimum safe headway. The minimum safe headway measured tip-to-tail is defined by the braking performance: formula_0 where formula_1 is the minimum tip-to-tail headway (in seconds), formula_2 is the vehicle speed, formula_3 is the reaction time, formula_4 is the braking deceleration of the following vehicle, formula_5 is the deceleration of the lead vehicle, and formula_6 is a safety factor. The tip-to-tip headway is simply the tip-to-tail headway plus the length of the vehicle, expressed in time: formula_7 where formula_8 is the tip-to-tip headway and formula_9 is the vehicle length. Capacity. The vehicular capacity of a single lane of vehicles is simply the inverse of the tip-to-tip headway. This is most often expressed in vehicles-per-hour: formula_10 where formula_11 is the number of vehicles per hour. The passenger capacity of the lane is simply the product of vehicle capacity and the passenger capacity of the vehicles: formula_12 where formula_13 is the number of passengers per hour and formula_14 is the number of passengers per vehicle. Examples. Consider these examples (a short Python sketch reproducing the first two appears at the end of this entry): 1) freeway traffic, per lane: 100 km/h (~28 m/s) speeds, 4 passengers per vehicle, 4 meter vehicle length, 2.5 m/s^2 braking (1/4 "g"), 2 second reaction time, brick-wall stop, formula_6 of 1.5; formula_15 formula_16 formula_8 = 10.5 seconds; formula_13 = 7,200 passengers per hour if 4 people per car and a 2 second headway are assumed, or 342 passengers per hour if 1 person per car and a 10.5 second headway are assumed. The headway used in reality is much less than 10.5 seconds, since the brick-wall principle is not used on freeways. In reality, 1.5 persons per car and a 2 second headway can be assumed, giving 1,800 cars or 2,700 passengers per lane per hour. For comparison, Marin County, California (near San Francisco) states that peak flow on the three-lane Highway 101 is about 7,200 "vehicles" per hour. This is about the same number of passengers per lane. Notwithstanding these formulas, it is widely known that reducing headway increases the risk of collision in standard private automobile settings, and following too closely is often referred to as tailgating. 2) metro system, per line: 40 km/h (~11 m/s) speeds, 1000 passengers, 100 meter vehicle length, 0.5 m/s^2 braking, 2 second reaction time, brick-wall stop, formula_6 of 1.5; formula_17 formula_18 formula_8 = 28 seconds; formula_13 = 130,000 passengers per hour. Note that most signalling systems used on metros place an artificial limit on headway that is not dependent on braking performance. Also the time needed for station stops limits the headway. Using a typical figure of 2 minutes (120 seconds): formula_19 formula_13 = 30,000 passengers per hour. Since the headway of a metro is constrained by signalling considerations, not vehicle performance, reductions in headway through improved signalling have a direct impact on passenger capacity. For this reason, the London Underground system has spent a considerable amount of money on upgrading the SSR Network, Jubilee and Central lines with new CBTC signalling to reduce the headway from about 3 minutes to 1, while preparing for the 2012 Olympics. 3) automated personal rapid transit system: 30 km/h (~8 m/s) speeds, 3 passengers, 3 meter vehicle length, 2.5 m/s^2 braking (1/4 "g"), 0.01 second reaction time, brake failure on the lead vehicle assumed to give 1 m/s^2 of slowing, but 2.5 m/s^2 if the lead vehicle brakes normally,
formula_6 of 1.1; formula_20 formula_21 formula_8 = 0.385 seconds; formula_13 = 28,000 passengers per hour. This number is similar to the ones proposed by the Cabinentaxi system, although they predicted that actual use would be much lower. Although PRT vehicles have fewer seats and lower speeds, their shorter headways dramatically improve passenger capacity. However, these systems are often constrained by brick-wall considerations for legal reasons, which limits their performance to a car-like 2 seconds. In this case: formula_22 formula_13 = 5,400 passengers per hour. Headways and ridership. Headways have an enormous impact on ridership levels above a certain critical waiting time. Following Boyle, changes in headway translate into changes in ridership by a simple conversion factor of 1.5. That is, if a headway is reduced from 12 to 10 minutes, the average rider wait time will decrease by 1 minute, the overall trip time by the same one minute, so the ridership increase will be on the order of 1 x 1.5 + 1, or about 2.5%. Also see Ceder for an extensive discussion.
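As a rough illustration (not part of the original article), the headway and capacity formulas above can be evaluated directly. The short Python sketch below reproduces the freeway and metro figures, with the brick-wall stop modelled by letting the lead-vehicle deceleration go to infinity; the function and variable names are ours, chosen for readability.

```python
def tip_to_tip_headway(v, length, t_r, a_f, a_l, k):
    """T_tot = L/V + t_r + (k*V/2) * (1/a_f - 1/a_l), in seconds (SI units)."""
    return length / v + t_r + (k * v / 2.0) * (1.0 / a_f - 1.0 / a_l)

def passengers_per_hour(p_per_vehicle, headway_s):
    """n_pas = P * 3600 / T_tot."""
    return p_per_vehicle * 3600.0 / headway_s

# Freeway example: 28 m/s, 4 m vehicles, 2 s reaction, 2.5 m/s^2 braking,
# brick-wall stop (lead-vehicle deceleration treated as infinite), k = 1.5.
t_car = tip_to_tip_headway(v=28, length=4, t_r=2, a_f=2.5, a_l=float("inf"), k=1.5)
print(round(t_car, 1))                        # ~10.5 s
print(round(passengers_per_hour(1, t_car)))   # ~340 passengers/hour with 1 person per car
print(round(passengers_per_hour(4, 2.0)))     # 7200 passengers/hour at a 2 s headway

# Metro example: 11 m/s, 100 m trains, 2 s reaction, 0.5 m/s^2 braking, k = 1.5.
t_metro = tip_to_tip_headway(v=11, length=100, t_r=2, a_f=0.5, a_l=float("inf"), k=1.5)
print(round(t_metro), round(passengers_per_hour(1000, t_metro)))  # ~28 s, ~130,000/hour
```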
[ { "math_id": 0, "text": "T_{min} = t_r + \\frac{kV}{2} \\left(\\frac{1}{a_f} - \\frac{1}{a_l} \\right)" }, { "math_id": 1, "text": "T_{min}" }, { "math_id": 2, "text": "V" }, { "math_id": 3, "text": "t_r" }, { "math_id": 4, "text": "a_f" }, { "math_id": 5, "text": "a_l" }, { "math_id": 6, "text": "k" }, { "math_id": 7, "text": "T_{tot} = \\frac{L}{V} + t_r + \\frac{kV}{2} \\left(\\frac{1}{a_f} - \\frac{1}{a_l} \\right)" }, { "math_id": 8, "text": "T_{tot}" }, { "math_id": 9, "text": "L" }, { "math_id": 10, "text": "n_{veh} = \\frac{3600}{T_{min}}" }, { "math_id": 11, "text": "n_{veh}" }, { "math_id": 12, "text": "n_{pas} = P \\frac{3600}{T_{min}}" }, { "math_id": 13, "text": "n_{pas}" }, { "math_id": 14, "text": "P" }, { "math_id": 15, "text": "T_{tot} = \\frac{4}{28} + 2 + \\frac{1.5 \\times 28}{2} \\left(\\frac{1}{2.5} \\right)" }, { "math_id": 16, "text": "n_{pas} = {P}\\times \\frac{3600}{T_{tot}}" }, { "math_id": 17, "text": "T_{tot} = \\frac{100}{11} + 2 + \\frac{1.5 \\times 11}{2} \\left(\\frac{1}{0.5} \\right)" }, { "math_id": 18, "text": "n_{pas} = {1000}\\times \\frac{3600}{T_{tot}}" }, { "math_id": 19, "text": "n_{pas} = {1000}\\times \\frac{3600}{120}" }, { "math_id": 20, "text": "T_{tot} = \\frac{3}{8} + 0.01 + \\frac{1.1 \\times 8}{2} \\left(\\frac{1}{2.5} - \\frac{1}{2.5} \\right)" }, { "math_id": 21, "text": "n_{pas} = {3}\\times \\frac{3600}{0.385}" }, { "math_id": 22, "text": "n_{pas} = {3}\\times \\frac{3600}{2}" } ]
https://en.wikipedia.org/wiki?curid=6968798
6968975
Kirkwood approximation
The Kirkwood superposition approximation was introduced in 1935 by John G. Kirkwood as a means of representing a discrete probability distribution. The Kirkwood approximation for a discrete probability density function formula_0 is given by formula_1 where formula_2 is the product of probabilities over all subsets of variables of size "i" in variable set formula_3. This kind of formula has been considered by Watanabe (1960) and, according to Watanabe, also by Robert Fano. For the three-variable case, it reduces to simply formula_4 The Kirkwood approximation does not generally produce a valid probability distribution (the normalization condition is violated). Watanabe claims that for this reason informational expressions of this type are not meaningful, and indeed there has been very little written about the properties of this measure. The Kirkwood approximation is the probabilistic counterpart of the interaction information. Judea Pearl (1988 §3.2.4) indicates that an expression of this type can be exact in the case of a "decomposable" model, that is, a probability distribution that admits a graph structure whose cliques form a tree. In such cases, the numerator contains the product of the intra-clique joint distributions and the denominator contains the product of the clique intersection distributions.
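As a small illustration (not from the article itself), the three-variable formula formula_4 can be evaluated directly from a joint distribution with NumPy; the sketch below uses a random 2×2×2 joint distribution and also shows that the result generally does not sum to one.

```python
import numpy as np

# Joint distribution P(x1, x2, x3) of three binary variables as a 2x2x2 array.
rng = np.random.default_rng(1)
P = rng.random((2, 2, 2))
P /= P.sum()

# Pairwise and single-variable marginals needed by the three-variable formula.
p12, p23, p13 = P.sum(axis=2), P.sum(axis=0), P.sum(axis=1)
p1, p2, p3 = P.sum(axis=(1, 2)), P.sum(axis=(0, 2)), P.sum(axis=(0, 1))

# P'(x1,x2,x3) = p(x1,x2) p(x2,x3) p(x1,x3) / (p(x1) p(x2) p(x3))
P_kirkwood = (p12[:, :, None] * p23[None, :, :] * p13[:, None, :]
              / (p1[:, None, None] * p2[None, :, None] * p3[None, None, :]))

print(P.sum(), P_kirkwood.sum())  # 1.0 vs. generally not 1: normalization fails
```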
[ { "math_id": 0, "text": "P(x_{1},x_{2},\\ldots ,x_{n})" }, { "math_id": 1, "text": "\nP^{\\prime }(x_1,x_2,\\ldots ,x_n)\n= \\prod_{i = 1}^{n -1}\\left[\\prod_{\\mathcal{T}_i\\subseteq \\mathcal{V}}p(\\mathcal{T}_i)\\right]^{(-1)^{n-1-i}}\n=\n\\frac{\\prod_{\\mathcal{T}\n_{n-1}\\subseteq \\mathcal{V}}p(\\mathcal{T}_{n-1})}{\\frac{\\prod_{\\mathcal{T}\n_{n-2}\\subseteq \\mathcal{V}}p(\\mathcal{T}_{n-2})}{\\frac{\\vdots }{\\prod_{\\mathcal{\nT}_1\\subseteq \\mathcal{V}}p(\\mathcal{T}_1)}}} \n" }, { "math_id": 2, "text": "\\prod_{\\mathcal{T}_i\\subseteq \\mathcal{V}}p(\\mathcal{T}_i)" }, { "math_id": 3, "text": "\\scriptstyle\\mathcal{V}" }, { "math_id": 4, "text": "\nP^\\prime(x_1,x_2,x_3)=\\frac{p(x_1,x_2)p(x_2,x_3)p(x_1,x_3)}{p(x_1)p(x_{2})p(x_3)}\n" } ]
https://en.wikipedia.org/wiki?curid=6968975
69689957
Partial Area Under the ROC Curve
The Partial Area Under the ROC Curve (pAUC) is a metric for the performance of a binary classifier. It is computed based on the receiver operating characteristic (ROC) curve that illustrates the diagnostic ability of a given binary classifier system as its discrimination threshold is varied. The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings. The area under the ROC curve (AUC) is often used to summarize in a single number the diagnostic ability of the classifier. The AUC is simply defined as the area of the ROC space that lies below the ROC curve. However, in the ROC space there are regions where the values of FPR or TPR are unacceptable or not viable in practice. For instance, the region where FPR is greater than 0.8 means that more than 80% of negative subjects are incorrectly classified as positive: this is unacceptable in many real cases. As a consequence, the AUC computed in the entire ROC space (i.e., with both FPR and TPR ranging from 0 to 1) can provide misleading indications. To overcome this limitation of AUC, it was proposed to compute the area under the ROC curve in the area of the ROC space that corresponds to interesting (i.e., practically viable or acceptable) values of FPR and TPR. Basic concept. In the ROC space, where x=FPR (false positive rate) and y=ROC(x)=TPR (true positive rate), the AUC is formula_0 The AUC is widely used, especially for comparing the performances of two (or more) binary classifiers: the classifier that achieves the highest AUC is deemed better. However, when comparing two classifiers formula_1 and formula_2, three situations are possible: 1) the ROC curve of formula_2 is above that of formula_1 over the whole ROC space; 2) the ROC curve of formula_1 is above that of formula_2 over the whole ROC space; 3) the two ROC curves cross each other. There is general consensus that in case 1) classifier formula_2 is preferable and in case 2) classifier formula_1 is preferable. Instead, in case 3) there are regions of the ROC space where formula_1 is preferable and other regions where formula_2 is preferable. This observation led to evaluating the accuracy of classifications by computing performance metrics that consider only a specific region of interest (RoI) in the ROC space, rather than the whole space. These performance metrics are commonly known as "partial AUC" (pAUC): the pAUC is the area of the selected region of the ROC space that lies under the ROC curve. Partial AUC obtained by constraining FPR. The idea of the partial AUC was originally proposed with the goal of restricting the evaluation of given ROC curves to the range of false positive rates that are considered interesting for diagnostic purposes. Thus, the partial AUC was computed as the area under the ROC curve in the vertical band of the ROC space where FPR is in the range [formula_3, formula_4]. The pAUC computed by constraining FPR helps compare two partial areas. Nonetheless, it has a few limitations: for instance, a relatively small change in the selected FPR range (e.g., from formula_5 to formula_6) may lead to different conclusions when comparing two classifiers. Partial AUC obtained by constraining TPR. Another type of partial AUC is obtained by constraining the true positive rate, rather than the false positive rate. That is, the partial AUC is the area under the ROC curve and above the horizontal line formula_7. In other words, the pAUC is computed in the portion of the ROC space where the true positive rate is greater than a given threshold formula_8 (no upper limit is used, since it would not make sense to limit the number of true positives). This proposal too has a few limitations. Partial AUC obtained by constraining both FPR and TPR. A "two-way" pAUC was defined by constraining both the true positive and false positive rates.
A minimum value formula_9 is specified for TPR and a maximum value formula_10 is set for FPR, thus the RoI is the upper-left rectangle with vertices at points (formula_10, formula_9), (formula_10, 1), (0, 1) and (0, formula_9). The two-way pAUC is the area under the ROC curve that belongs to such a rectangle. The two-way pAUC is clearly more flexible than the pAUC defined by constraining only FPR or TPR. Actually, the latter two types of pAUC can be seen as special cases of the two-way pAUC. As with the pAUC described above, when comparing two classifiers via the associated ROC curves, a relatively small change in selecting the RoI may lead to different conclusions. This is a particularly delicate issue, since no criteria are given for identifying the RoI (as with the other mentioned pAUC, it is expected that experts can identify formula_8 and formula_10). Partial AUC obtained by applying objective constraints to the region of interest. A few objective and sound criteria for defining the RoI have been proposed. Specifically, the computation of pAUC can be restricted to the region where the classification performs better than random classification, where a given performance metric exceeds a specified threshold, or where the cost of misclassifications is acceptable, as described below. Defining the RoI based on the performance of the random classification. A possible way of defining the region where pAUC is computed consists of excluding the regions of the ROC space that represent performances worse than the performance achieved by random classification. Random classification evaluates a given item as positive with probability formula_11 and as negative with probability (1-formula_11). In a dataset of n items, of which AP are actually positive, the best guess is obtained by setting formula_12 (formula_11 is also known as the "prevalence" of the positives in the dataset). It was shown that random classification with formula_12 achieves formula_13, formula_14, and formula_15, on average. Therefore, if the performance metrics of choice are TPR, FPR and precision, the RoI should be limited to the portion of the ROC space where formula_16, formula_17, and formula_18. It was shown that this region is the rectangle having vertices at (0,0), (0,1), (formula_11, 1), and (formula_11, formula_11). This technique solves the problems of constraining TPR and FPR when the two-way pAUC has to be computed: formula_19. The Ratio of Relevant Areas (RRA) indicator. Computing the pAUC requires that a RoI is first defined. For instance, when requiring better accuracy than mean random classification, the RoI is the rectangle having vertices at (0,0), (0,1), (formula_11, 1), and (formula_11, formula_11). This implies that the size of the RoI varies depending on formula_11. Besides, the perfect ROC, i.e., the one that goes through point (0,1), has pAUC=formula_11(1-formula_11). To get a pAUC-based indicator that accounts for formula_11 and ranges in [0,1], RRA was proposed: formula_20 RRA=1 indicates perfect accuracy, while RRA=0 indicates that the area under the ROC curve belonging to the RoI is null; thus accuracy is no better than random classification's. Defining the RoI based on some performance metric threshold. Several performance metrics are available for binary classifiers. One of the most popular is the Phi coefficient (also known as the Matthews Correlation Coefficient). Phi measures how much better (or worse) a classification is than random classification, which is characterized by Phi = 0. According to the reference values suggested by Cohen, one can take Phi = 0.35 as a minimum acceptable level of Phi for a classification.
In the ROC space, Phi equal to a non-null constant corresponds to the arc of an ellipse, while Phi = 0 corresponds to the diagonal, i.e., to the points where FPR=TPR. So, considering the portion of the ROC space where Phi>0.35 corresponds to defining the RoI as the portion of the ROC space above the ellipse. The pAUC is the area above the ellipse and under the ROC curve. Defining the RoI based on the cost of misclassifications. Most binary classifiers yield misclassifications, which result in some cost. The cost C of misclassifications is defined as formula_21, where formula_22 is the unitary cost of a false negative, formula_23 is the unitary cost of a false positive, and FN and FP are, respectively, the number of false negatives and false positives. The normalized cost NC is defined as formula_24. By setting formula_25, we get formula_26 The average NC obtained via random classification is formula_27 To evaluate a classifier excluding the performances whose cost is greater than formula_28, it is possible to define the RoI where the normalized cost is lower than formula_28: such a region is above the line formula_29 It is also possible to define the RoI where NC is less than a fraction formula_30 of formula_28. In such a case, the lower boundary of the RoI is the line formula_31 Different values of formula_32 (e.g., formula_33, formula_34, formula_35, or formula_36) define the RoI in the same way as some of the best known performance metrics. Therefore, choosing a performance metric equates to choosing a specific value of the relative cost of false positives with respect to false negatives. In the ROC space, the slope of the line that represents constant normalized cost (hence, constant total cost) depends on formula_32, or, equivalently, on the performance metrics being used. It is common practice to select as the best classification the point of the ROC curve with the highest value of Youden's "J" =TPR−FPR. When considering the cost associated with the misclassifications, this practice corresponds to making a hypothesis on the relative cost of false positives and false negatives, which is rarely correct. How to compute pAUC and RRA. Software libraries to compute pAUC and RRA are available for Python and R.
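As a minimal sketch (not one of the libraries referred to above), the FPR-constrained pAUC can be computed from an empirical ROC curve by trapezoidal integration restricted to the region of interest; the helper below assumes scikit-learn is available for the ROC points, and the function name is ours.

```python
import numpy as np
from sklearn.metrics import auc, roc_curve

def partial_auc_fpr(y_true, y_score, fpr_max=0.3):
    """Area under the empirical ROC curve restricted to 0 <= FPR <= fpr_max."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    keep = fpr <= fpr_max
    # Close the region exactly at fpr_max by interpolating the TPR there.
    fpr_roi = np.append(fpr[keep], fpr_max)
    tpr_roi = np.append(tpr[keep], np.interp(fpr_max, fpr, tpr))
    return auc(fpr_roi, tpr_roi)

# Toy check with uninformative scores: the ROC is close to the diagonal,
# so the pAUC should be near fpr_max**2 / 2 = 0.045.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=5000)
s = rng.random(size=5000)
print(partial_auc_fpr(y, s, fpr_max=0.3))
```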
[ { "math_id": 0, "text": "AUC=\\int_{x=0}^{1} ROC(x) \\ dx" }, { "math_id": 1, "text": "C_a" }, { "math_id": 2, "text": "C_b" }, { "math_id": 3, "text": "FPR_{low}" }, { "math_id": 4, "text": "FPR_{high}" }, { "math_id": 5, "text": "0.1 \\leq FPR \\leq 0.3" }, { "math_id": 6, "text": "0.2 \\leq FPR \\leq 0.4" }, { "math_id": 7, "text": "TPR=TPR_{0}" }, { "math_id": 8, "text": "TPR_{0}" }, { "math_id": 9, "text": "TPR_0" }, { "math_id": 10, "text": "FPR_0" }, { "math_id": 11, "text": "\\rho" }, { "math_id": 12, "text": "\\rho=\\frac{AP}{n}" }, { "math_id": 13, "text": "TPR=\\rho" }, { "math_id": 14, "text": "precision=\\rho" }, { "math_id": 15, "text": "FPR=\\rho" }, { "math_id": 16, "text": "TPR>\\rho" }, { "math_id": 17, "text": "FPR<\\rho" }, { "math_id": 18, "text": "precision>\\rho" }, { "math_id": 19, "text": "FPR_0=TPR_0=\\rho" }, { "math_id": 20, "text": "RRA={pAUC \\over area\\ of\\ the\\ RoI}" }, { "math_id": 21, "text": "C=c_{FN} FN + c_{FP} FP" }, { "math_id": 22, "text": "c_{FN}" }, { "math_id": 23, "text": "c_{FP}" }, { "math_id": 24, "text": "NC=\\frac{C}{n(c_{FN}+c_{FP})}" }, { "math_id": 25, "text": "\\lambda=\\frac{c_{FN}}{c_{FP}+c_{FN}}" }, { "math_id": 26, "text": "NC= \\lambda \\rho (1-TPR)+(1-\\lambda)(1-\\rho) FPR" }, { "math_id": 27, "text": "NC_{rnd}=\\frac{AP \\cdot AN}{n^2}" }, { "math_id": 28, "text": "NC_{rnd}" }, { "math_id": 29, "text": "\\frac{AP \\cdot AN}{n^2}=\\lambda \\rho(1-TPR)(1-\\lambda)(1-\\rho)FPR" }, { "math_id": 30, "text": "\\mu" }, { "math_id": 31, "text": "TPR=\\frac{1-\\lambda}{\\lambda}\\frac{1-\\rho}{\\rho}(FPR-\\mu \\rho)+1-\\mu(1-\\rho)" }, { "math_id": 32, "text": "\\lambda" }, { "math_id": 33, "text": "\\lambda=0" }, { "math_id": 34, "text": "\\lambda=1-\\rho" }, { "math_id": 35, "text": "\\lambda=1-\\frac{\\rho}{2}" }, { "math_id": 36, "text": "\\lambda=1" } ]
https://en.wikipedia.org/wiki?curid=69689957
696911
Interaction energy
Contribution to the total energy caused by interactions between entities present in the system. In physics, interaction energy is the contribution to the total energy that is caused by an interaction between the objects being considered. The interaction energy usually depends on the relative position of the objects. For example, formula_0 is the electrostatic interaction energy between two objects with charges formula_1, formula_2. Interaction energy. A straightforward approach for evaluating the interaction energy is to calculate the difference between the objects' combined energy and all of their isolated energies. In the case of two objects, "A" and "B", the interaction energy can be written as: formula_3 where formula_4 and formula_5 are the energies of the isolated objects (monomers), and formula_6 the energy of their interacting assembly (dimer). For a larger system, consisting of "N" objects, this procedure can be generalized to provide a total many-body interaction energy: formula_7 By calculating the energies for monomers, dimers, trimers, etc., in an N-object system, a complete set of two-, three-, and up to N-body interaction energies can be derived. This supermolecular approach has an important disadvantage in that the final interaction energy is usually much smaller than the total energies from which it is calculated, and therefore contains a much larger relative uncertainty. In the case where energies are derived from quantum chemical calculations using finite atom-centered basis functions, basis set superposition errors can also contribute some degree of artificial stabilization.
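As a small numerical illustration (not part of the article), the electrostatic pair term formula_0 and the supermolecular difference can be evaluated directly; the 1 nm separation and the monomer/dimer energies below are made-up example inputs.

```python
import math

EPSILON_0 = 8.8541878128e-12   # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19     # elementary charge, C

def coulomb_interaction_energy(q1, q2, r):
    """Electrostatic interaction energy Q1*Q2 / (4*pi*eps0*r), in joules."""
    return q1 * q2 / (4.0 * math.pi * EPSILON_0 * r)

def interaction_energy(e_assembly, e_isolated):
    """Supermolecular interaction energy: E(A,B,...) minus the isolated energies."""
    return e_assembly - sum(e_isolated)

# Two opposite elementary charges 1 nm apart: about -2.3e-19 J (attractive).
print(coulomb_interaction_energy(E_CHARGE, -E_CHARGE, 1e-9))
# Toy supermolecular difference with illustrative monomer/dimer energies (hartree).
print(interaction_energy(-150.002, [-100.000, -50.000]))   # -0.002
```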
[ { "math_id": 0, "text": "Q_1 Q_2 / (4 \\pi \\varepsilon_0 \\Delta r)" }, { "math_id": 1, "text": "Q_1" }, { "math_id": 2, "text": "Q_2" }, { "math_id": 3, "text": "\\Delta E_\\text{int} = E(A,B) - \\left( E(A) + E(B) \\right)," }, { "math_id": 4, "text": "E(A)" }, { "math_id": 5, "text": "E(B)" }, { "math_id": 6, "text": "E(A,B)" }, { "math_id": 7, "text": "\\Delta E_\\text{int} = E(A_{1}, A_{2}, \\dots, A_{N}) - \\sum_{i=1}^{N} E(A_{i})." } ]
https://en.wikipedia.org/wiki?curid=696911
696912
Bound state
Quantum physics terminology. A bound state is a composite of two or more fundamental building blocks, such as particles, atoms, or bodies, that behaves as a single object and in which energy is required to split them. In quantum physics, a bound state is a quantum state of a particle subject to a potential such that the particle has a tendency to remain localized in one or more regions of space. The potential may be external or it may be the result of the presence of another particle; in the latter case, one can equivalently define a bound state as a state representing two or more particles whose interaction energy exceeds the total energy of each separate particle. One consequence is that, given a potential vanishing at infinity, negative-energy states must be bound. The energy spectrum of the set of bound states is most commonly discrete, unlike scattering states of free particles, which have a continuous spectrum. Although not bound states in the strict sense, metastable states with a net positive interaction energy, but long decay time, are often considered unstable bound states as well and are called "quasi-bound states". Examples include radionuclides and Rydberg atoms. In relativistic quantum field theory, a stable bound state of n particles with masses formula_0 corresponds to a pole in the S-matrix with a center-of-mass energy less than formula_1. An unstable bound state shows up as a pole with a complex center-of-mass energy. Definition. Let the "σ"-finite measure space formula_3 be a probability space associated with a separable complex Hilbert space formula_4. Define a one-parameter group of unitary operators formula_5, a density operator formula_6 and an observable formula_7 on formula_4. Let formula_8 be the induced probability distribution of formula_7 with respect to formula_9. Then the evolution formula_10 is bound with respect to formula_7 if formula_11, where formula_12. A quantum particle is in a bound state if at no point in time it is found "too far away" from any finite region formula_13. Using a wave function representation, for example, this means formula_14 such that formula_15 In general, a quantum state is a bound state "if and only if" it is finitely normalizable for all times formula_16. Furthermore, a bound state lies within the pure point part of the spectrum of formula_7 "if and only if" it is an eigenstate of formula_7. More informally, "boundedness" results foremost from the choice of domain of definition and characteristics of the state rather than the observable. For a concrete example: let formula_17 and let formula_7 be the position operator. Given compactly supported formula_18 and formula_19. Properties. As finitely normalizable states must lie within the pure point part of the spectrum, bound states must lie within the pure point part. However, as von Neumann and Wigner pointed out, it is possible for the energy of a bound state to be located in the continuous part of the spectrum. This phenomenon is referred to as a bound state in the continuum. Position-bound states. Consider the one-particle Schrödinger equation. If a state has energy formula_23, then the wavefunction ψ satisfies, for some formula_24, formula_25 so that ψ is exponentially suppressed at large x. This behaviour is well-studied for smoothly varying potentials in the WKB approximation for the wavefunction, where an oscillatory behaviour is observed if the right hand side of the equation is negative and growing/decaying behaviour if it is positive.
Hence, negative-energy states are bound if V vanishes at infinity. Non-degeneracy in one-dimensional bound states. One-dimensional bound states can be shown to be non-degenerate in energy for well-behaved wavefunctions that decay to zero at infinities. This need not hold true for wavefunctions in higher dimensions. Due to the property of non-degenerate states, one-dimensional bound states can always be expressed as real wavefunctions. Node theorem. The node theorem states that the n-th bound wavefunction, ordered according to increasing energy, has exactly n−1 nodes, i.e., points formula_26 where formula_27. Due to the form of Schrödinger's time-independent equation, it is not possible for a physical wavefunction to have formula_28, since that corresponds to the formula_29 solution. Requirements. A boson with mass "mχ" mediating a weakly coupled interaction produces a Yukawa-like interaction potential, formula_30, where formula_31, "g" is the gauge coupling constant, and "ƛi" is the reduced Compton wavelength. A scalar boson produces a universally attractive potential, whereas a vector attracts particles to antiparticles but repels like pairs. For two particles of mass "m"1 and "m"2, the Bohr radius of the system becomes formula_32 and yields the dimensionless number formula_33. In order for the first bound state to exist at all, formula_34. Because the photon is massless, "D" is infinite for electromagnetism. For the weak interaction, the Z boson's mass is about 91.19 GeV/c2, which prevents the formation of bound states between most particles, as it is roughly 97 times the proton's mass and about 178,000 times the electron's mass. Note however that if the Higgs interaction didn't break electroweak symmetry at the electroweak scale, then the SU(2) weak interaction would become confining.
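As a rough numerical sketch (not from the article), the dimensionless parameter formula_33 can be evaluated to check the formula_34 condition; the weak coupling value of about 1/30 used below is an assumed, illustrative figure.

```python
def bound_state_parameter(alpha_chi, m1, m2, m_mediator):
    """D = alpha_chi * (m1 + m2) / m_mediator; a first bound state needs D of
    order 0.8 or more. Masses must share the same units; a massless mediator
    (such as the photon) gives an infinite D."""
    if m_mediator == 0:
        return float("inf")
    return alpha_chi * (m1 + m2) / m_mediator

# Electron + proton (MeV/c^2) with a Z-boson-mass mediator and an assumed
# alpha ~ 1/30: D comes out around 3e-4, far below ~0.8, so no bound state.
print(bound_state_parameter(1 / 30, 0.511, 938.3, 91187.6))
# The same pair with a massless mediator (electromagnetism): D is infinite.
print(bound_state_parameter(1 / 137, 0.511, 938.3, 0.0))
```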
[ { "math_id": 0, "text": "\\{m_k\\}_{k=1}^n" }, { "math_id": 1, "text": "\\textstyle\\sum_k m_k" }, { "math_id": 2, "text": "\\lim_{x\\to\\pm\\infty}{V_{\\text{QHO}}(x)} = \\infty" }, { "math_id": 3, "text": "(X, \\mathcal A, \\mu)" }, { "math_id": 4, "text": "H" }, { "math_id": 5, "text": " (U_t)_{t\\in \\mathbb{R}} " }, { "math_id": 6, "text": "\\rho = \\rho(t_0) " }, { "math_id": 7, "text": "T" }, { "math_id": 8, "text": "\\mu(T,\\rho)" }, { "math_id": 9, "text": "\\rho" }, { "math_id": 10, "text": "\\rho(t_0)\\mapsto [U_t(\\rho)](t_0) = \\rho(t_0 +t)" }, { "math_id": 11, "text": "\\lim_{R \\rightarrow \\infty}{\\sup_{t \\geq t_0}{\\mu(T,\\rho(t))(\\mathbb{R}_{> R})}} = 0 " }, { "math_id": 12, "text": "\\mathbb{R}_{>R} = \\lbrace x \\in \\mathbb{R} \\mid x > R \\rbrace " }, { "math_id": 13, "text": "R\\subset X" }, { "math_id": 14, "text": "\\begin{align}\n0 &= \\lim_{R\\to\\infty}{\\mathbb{P}(\\text{particle measured inside }X\\setminus R)} \\\\\n&= \\lim_{R\\to\\infty}{\\int_{X\\setminus R}|\\psi(x)|^2\\,d\\mu(x)},\n\\end{align}" }, { "math_id": 15, "text": "\\int_X{|\\psi(x)|^{2}\\,d\\mu(x)} < \\infty." }, { "math_id": 16, "text": "t\\in\\mathbb{R}" }, { "math_id": 17, "text": "H := L^2(\\mathbb{R}) " }, { "math_id": 18, "text": "\\rho = \\rho(0) \\in H" }, { "math_id": 19, "text": "[-1,1] \\subseteq \\mathrm{Supp}(\\rho)" }, { "math_id": 20, "text": "[t-1,t+1] \\in \\mathrm{Supp}(\\rho(t)) " }, { "math_id": 21, "text": "t \\geq 0" }, { "math_id": 22, "text": "\\rho(t) = \\rho" }, { "math_id": 23, "text": " E < \\max{\\left(\\lim_{x\\to\\infty}{V(x)}, \\lim_{x\\to-\\infty}{V(x)}\\right)}" }, { "math_id": 24, "text": "X > 0" }, { "math_id": 25, "text": "\\frac{\\psi^{\\prime\\prime}}{\\psi}=\\frac{2m}{\\hbar^2}(V(x)-E) > 0\\text{ for }x > X" }, { "math_id": 26, "text": "x=a" }, { "math_id": 27, "text": "\\psi(a)=0 \\neq \\psi'(a)" }, { "math_id": 28, "text": "\\psi(a) = 0 = \\psi'(a)" }, { "math_id": 29, "text": "\\psi(x)=0" }, { "math_id": 30, "text": "V(r) = \\pm\\frac{\\alpha_\\chi}{r} e^{- \\frac{r}{\\lambda\\!\\!\\!\\frac{}{\\ }_\\chi}}" }, { "math_id": 31, "text": "\\alpha_\\chi=g^2/4\\pi" }, { "math_id": 32, "text": "a_0=\\frac{{\\lambda\\!\\!\\!^{{}^\\underline{\\ \\ }}}_1 + {\\lambda\\!\\!\\!^{{}^\\underline{\\ \\ }}}_2}{\\alpha_\\chi}" }, { "math_id": 33, "text": "D=\\frac{{\\lambda\\!\\!\\!^{{}^\\underline{\\ \\ }}}_\\chi}{a_0} = \\alpha_\\chi\\frac{{\\lambda\\!\\!\\!^{{}^\\underline{\\ \\ }}}_\\chi}{{\\lambda\\!\\!\\!^{{}^\\underline{\\ \\ }}}_1 + {\\lambda\\!\\!\\!^{{}^\\underline{\\ \\ }}}_2} = \\alpha_\\chi\\frac{m_1+m_2}{m_\\chi}" }, { "math_id": 34, "text": "D\\gtrsim 0.8" } ]
https://en.wikipedia.org/wiki?curid=696912
696923
International Standard Atmosphere
Atmospheric model. The International Standard Atmosphere (ISA) is a static atmospheric model of how the pressure, temperature, density, and viscosity of the Earth's atmosphere change over a wide range of altitudes or elevations. It has been established to provide a common reference for temperature and pressure and consists of tables of values at various altitudes, plus some formulas by which those values were derived. The International Organization for Standardization (ISO) publishes the ISA as an international standard, ISO 2533:1975. Other standards organizations, such as the International Civil Aviation Organization (ICAO) and the United States Government, publish extensions or subsets of the same atmospheric model under their own standards-making authority. Description. The ISA mathematical model divides the atmosphere into layers with an assumed linear distribution of absolute temperature "T" against geopotential altitude "h". The other two values (pressure "P" and density "ρ") are computed by simultaneously solving the equations resulting from: formula_0, and formula_1 at each geopotential altitude, where "g" is the standard acceleration of gravity, and "R"specific is the specific gas constant for dry air (287.0528 J⋅kg−1⋅K−1). The solution is given by the barometric formula (a short numeric sketch is given at the end of this entry). Air density must be calculated in order to solve for the pressure, and is used in calculating dynamic pressure for moving vehicles. Dynamic viscosity is an empirical function of temperature, and kinematic viscosity is calculated by dividing dynamic viscosity by the density. Thus the standard consists of a tabulation of values at various altitudes, plus some formulas by which those values were derived. To accommodate the lowest points on Earth, the model starts at a base geopotential altitude of 610 metres (2,000 ft) below sea level, with standard temperature set at 19 °C. With a temperature lapse rate of −6.5 °C (−11.7 °F) per km (roughly −2 °C (−3.6 °F) per 1,000 ft), the table interpolates to the standard mean sea level values of 15 °C (59 °F) temperature, 101,325 Pa (1 atm) pressure, and a density of 1.225 kg/m3. The tropospheric tabulation continues to 11,000 m (36,089 ft), where the temperature has fallen to −56.5 °C (−69.7 °F), the pressure to 22,632 Pa, and the density to 0.3639 kg/m3. Between 11 km and 20 km, the temperature remains constant. (In the ISA table, a positive lapse rate, λ > 0, means that temperature increases with height.) "Geopotential altitude" is calculated from a mathematical model that adjusts the altitude to include the variation of gravity with height, while "geometric altitude" is the standard direct vertical distance above mean sea level (MSL); the two are related by formula_2, where "r0" is the Earth's radius. Note that the lapse rates cited in the ISA table are given as °C per kilometer of geopotential altitude, not geometric altitude. The ISA model is based on average conditions at mid latitudes, as determined by the ISO's TC 20/SC 6 technical committee. It has been revised from time to time since the middle of the 20th century. Use at non-standard day conditions. The ISA models a hypothetical standard day to allow a reproducible engineering reference for calculation and testing of engine and vehicle performance at various altitudes. It "does not" provide a rigorous meteorological model of actual atmospheric conditions (for example, changes in barometric pressure due to wind conditions). Neither does it account for humidity effects; air is assumed to be dry and clean and of constant composition.
Humidity effects are accounted for in vehicle or engine analysis by adding water vapor to the thermodynamic state of the air after obtaining the pressure and density from the standard atmosphere model. Non-standard (hot or cold) days are modeled by adding a specified temperature delta to the standard temperature at altitude, but pressure is taken as the standard day value. Density and viscosity are recalculated at the resultant temperature and pressure using the ideal gas equation of state. Hot day, Cold day, Tropical, and Polar temperature profiles with altitude have been defined for use as performance references, such as in United States Department of Defense MIL-STD-210C and its successor MIL-HDBK-310. ICAO Standard Atmosphere. The International Civil Aviation Organization (ICAO) published their "ICAO Standard Atmosphere" as Doc 7488-CD in 1993. It has the same model as the ISA, but extends the altitude coverage to 80 kilometers (262,500 feet). The ICAO Standard Atmosphere, like the ISA, does not contain water vapor. Some of the values defined by ICAO are a mean sea level pressure of 101.325 kPa, a mean sea level temperature of 15 °C, and a mean sea level density of 1.225 kg/m3. Aviation standards and flying rules are based on the International Standard Atmosphere. Airspeed indicators are calibrated on the assumption that they are operating at sea level in the International Standard Atmosphere, where the air density is 1.225 kg/m3. Physical properties of the ICAO Standard Atmosphere at various altitudes are tabulated in the standard. Other standard atmospheres. The U.S. Standard Atmosphere is a set of models that define values for atmospheric temperature, density, pressure and other properties over a wide range of altitudes. The first model, based on an existing international standard, was published in 1958 by the U.S. Committee on Extension to the Standard Atmosphere, and was updated in 1962, 1966, and 1976. The U.S. Standard Atmosphere, International Standard Atmosphere and WMO (World Meteorological Organization) standard atmospheres are the same as the ISO International Standard Atmosphere for altitudes up to 32 km. NRLMSISE-00 is a newer model of the Earth's atmosphere from ground to space, developed by the US Naval Research Laboratory taking actual satellite drag data into account. A primary use of this model is to aid predictions of satellite orbital decay due to atmospheric drag. The COSPAR International Reference Atmosphere (CIRA) 2012 and the ISO 14222 Earth Atmosphere Density standard both recommend NRLMSISE-00 for composition uses. JB2008 is a newer model of the Earth's atmosphere from 120 km to 2000 km, developed by the US Air Force Space Command and Space Environment Technologies, taking into account realistic solar irradiances and time evolution of geomagnetic storms. It is most useful for calculating satellite orbital decay due to atmospheric drag. Both CIRA 2012 and ISO 14222 recommend JB2008 for mass density in drag uses.
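As an illustration of the tropospheric part of the model (a sketch under the constants quoted above, not an official implementation), the barometric formula reproduces the tabulated 11 km values:

```python
# Sea-level constants from the ISA definition (troposphere, 0-11 km geopotential).
T0 = 288.15        # K (15 degC)
P0 = 101325.0      # Pa
L = 0.0065         # K/m temperature lapse rate
G0 = 9.80665       # m/s^2 standard gravity
R = 287.0528       # J/(kg*K) specific gas constant for dry air

def isa_troposphere(h):
    """Temperature (K), pressure (Pa) and density (kg/m^3) at geopotential
    altitude h (0 <= h <= 11,000 m), from the barometric formula."""
    if not 0.0 <= h <= 11000.0:
        raise ValueError("this sketch only covers the troposphere (0-11 km)")
    T = T0 - L * h
    P = P0 * (T / T0) ** (G0 / (R * L))
    rho = P / (R * T)
    return T, P, rho

print(isa_troposphere(11000.0))  # about (216.65 K, 22632 Pa, 0.364 kg/m^3)
```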
[ { "math_id": 0, "text": "\\frac{dP}{dh} = - \\rho g " }, { "math_id": 1, "text": "\\ P = \\rho R_{\\rm specific}T " }, { "math_id": 2, "text": "\\ z = \\frac{r_{0} h}{r_{0} - h} " } ]
https://en.wikipedia.org/wiki?curid=696923
696955
Antisymmetric tensor
Tensor equal to the negative of any of its transpositions In mathematics and theoretical physics, a tensor is antisymmetric on (or with respect to) an index subset if it alternates sign (+/−) when any two indices of the subset are interchanged. The index subset must generally either be all "covariant" or all "contravariant". For example, formula_0 holds when the tensor is antisymmetric with respect to its first three indices. If a tensor changes sign under exchange of "each" pair of its indices, then the tensor is completely (or totally) antisymmetric. A completely antisymmetric covariant tensor field of order formula_1 may be referred to as a differential formula_1-form, and a completely antisymmetric contravariant tensor field may be referred to as a formula_1-vector field. Antisymmetric and symmetric tensors. A tensor A that is antisymmetric on indices formula_2 and formula_3 has the property that the contraction with a tensor B that is symmetric on indices formula_2 and formula_3 is identically 0. For a general tensor U with components formula_4 and a pair of indices formula_2 and formula_5 U has symmetric and antisymmetric parts defined as: the symmetric part is half the sum of formula_4 and the component with the indices formula_2 and formula_3 interchanged, while the antisymmetric part is half their difference. Similar definitions can be given for other pairs of indices. As the term "part" suggests, a tensor is the sum of its symmetric part and antisymmetric part for a given pair of indices, as in formula_6 Notation. A shorthand notation for anti-symmetrization is denoted by a pair of square brackets. For example, in arbitrary dimensions, for an order 2 covariant tensor M, formula_7 and for an order 3 covariant tensor T, formula_8 In 2 and 3 dimensions, these can be written as formula_9 where formula_10 is the generalized Kronecker delta, and the Einstein summation convention is in use. More generally, irrespective of the number of dimensions, antisymmetrization over formula_11 indices may be expressed as formula_12 In general, every tensor of rank 2 can be decomposed into a symmetric and anti-symmetric pair as: formula_13 This decomposition is not in general true for tensors of rank 3 or more, which have more complex symmetries. Examples. Totally antisymmetric tensors include: the electromagnetic tensor formula_14 in electromagnetism, and the Levi-Civita symbol. Notes. <templatestyles src="Reflist/styles.css" /> <templatestyles src="Reflist/styles.css" />
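The bracket operations above are straightforward to check numerically. The following sketch (Python with NumPy, assumed for this illustration only) splits a rank-2 tensor into its symmetric and antisymmetric parts and antisymmetrizes a tensor of arbitrary rank by summing signed permutations, as in the general formula formula_12.

```python
# Illustrative sketch (assumes NumPy): symmetric/antisymmetric split of a
# rank-2 tensor, and total antisymmetrization of a tensor of any rank.
import itertools
import math
import numpy as np

def _parity(perm):
    """Sign (+1/-1) of a permutation, via its cycle decomposition."""
    perm, sign = list(perm), 1
    visited = [False] * len(perm)
    for i in range(len(perm)):
        if not visited[i]:
            j, cycle_len = i, 0
            while not visited[j]:
                visited[j] = True
                j = perm[j]
                cycle_len += 1
            if cycle_len % 2 == 0:   # even-length cycle flips the sign
                sign = -sign
    return sign

def antisymmetrize(T):
    """T_[a1...ak] = (1/k!) * sum over permutations of the indices, with sign."""
    k = T.ndim
    out = np.zeros_like(T, dtype=float)
    for perm in itertools.permutations(range(k)):
        out += _parity(perm) * np.transpose(T, perm)
    return out / math.factorial(k)

M = np.random.rand(3, 3)
M_sym  = 0.5 * (M + M.T)   # M_(ab)
M_anti = 0.5 * (M - M.T)   # M_[ab]
assert np.allclose(M, M_sym + M_anti)          # M is the sum of its two parts
assert np.allclose(antisymmetrize(M), M_anti)  # brackets agree with the rank-2 formula
```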
[ { "math_id": 0, "text": "T_{ijk\\dots} = -T_{jik\\dots} = T_{jki\\dots} = -T_{kji\\dots} = T_{kij\\dots} = -T_{ikj\\dots}" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "j" }, { "math_id": 4, "text": "U_{ijk\\dots}" }, { "math_id": 5, "text": "j," }, { "math_id": 6, "text": "U_{ijk\\dots} = U_{(ij)k\\dots} + U_{[ij]k\\dots}." }, { "math_id": 7, "text": "M_{[ab]} = \\frac{1}{2!}(M_{ab} - M_{ba})," }, { "math_id": 8, "text": "T_{[abc]} = \\frac{1}{3!}(T_{abc}-T_{acb}+T_{bca}-T_{bac}+T_{cab}-T_{cba})." }, { "math_id": 9, "text": "\\begin{align}\n M_{[ab]} &= \\frac{1}{2!} \\, \\delta_{ab}^{cd} M_{cd} , \\\\[2pt]\n T_{[abc]} &= \\frac{1}{3!} \\, \\delta_{abc}^{def} T_{def} .\n\\end{align}" }, { "math_id": 10, "text": "\\delta_{ab\\dots}^{cd\\dots}" }, { "math_id": 11, "text": "p" }, { "math_id": 12, "text": "T_{[a_1 \\dots a_p]} = \\frac{1}{p!} \\delta_{a_1 \\dots a_p}^{b_1 \\dots b_p} T_{b_1 \\dots b_p}." }, { "math_id": 13, "text": "T_{ij} = \\frac{1}{2}(T_{ij} + T_{ji}) + \\frac{1}{2}(T_{ij} - T_{ji})." }, { "math_id": 14, "text": "F_{\\mu\\nu}" } ]
https://en.wikipedia.org/wiki?curid=696955
69695925
Black hole greybody factors
Functions describing emission-spectrum of a black hole Black hole greybody factors are functions of frequency and angular momentum that characterize the deviation of the emission spectrum of a black hole from a pure black-body spectrum. As a result of quantum effects, an isolated black hole emits radiation that, at the black-hole horizon, matches the radiation from a perfect black body. However, this radiation is scattered by the geometry of the black hole itself. Stated more intuitively, the particles emitted by the black hole are subject to the gravitational attraction of the black hole and so some of them fall back into the black hole. As a result, the actual spectrum measured by an asymptotic observer deviates from a black-body spectrum. This deviation is captured by the greybody factors. The name "greybody" is simply meant to indicate the difference of the spectrum of a black hole from a pure black body. The greybody factors can be computed by a classical scattering computation of a wave-packet off the black hole. Mathematical definition. The rate at which a black hole emits particles with energy between formula_0 and formula_1 and with angular momentum quantum numbers formula_2 is given by formula_3 where k is the Boltzmann constant and T is the Hawking temperature of the black hole. The sign in the denominator is minus for bosons and plus for fermions. The factors formula_4 are called the greybody factors of the black hole. For a charged black hole, these factors may also depend on the charge of the emitted particles. References. <templatestyles src="Reflist/styles.css" />
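As an illustration of the definition above, the following sketch (Python) evaluates the emission-rate density for a single mode given a user-supplied greybody factor. Computing formula_4 itself requires a scattering calculation in the black-hole geometry, which is not attempted here; the constant greybody factor and the sample numbers used below are placeholders, not physical values.

```python
# Minimal sketch: the emission-rate integrand d(Gamma)/d(omega) for one
# (l, m) mode, sigma / (exp(hbar*omega / k T) -/+ 1), for a user-supplied
# greybody factor sigma.
import math

HBAR = 1.054571817e-34  # J*s
K_B  = 1.380649e-23     # J/K

def emission_rate_density(omega, temperature, greybody, boson=True):
    """d(Gamma)/d(omega): Planck-like factor times the greybody factor.

    Denominator is exp(x) - 1 for bosons and exp(x) + 1 for fermions."""
    x = HBAR * omega / (K_B * temperature)
    denom = math.expm1(x) if boson else math.exp(x) + 1.0
    return greybody / denom

# Illustrative placeholder values: frequency in rad/s, temperature in K,
# and a frequency-independent greybody factor of 0.5.
print(emission_rate_density(omega=1.0e3, temperature=6.0e-8, greybody=0.5))
```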
[ { "math_id": 0, "text": " \\hbar \\omega " }, { "math_id": 1, "text": " \\hbar (\\omega + d \\omega) " }, { "math_id": 2, "text": "\\ell, m" }, { "math_id": 3, "text": "\nd \\Gamma = {\\sigma_{\\omega, \\ell, m} \\over e^{\\hbar \\omega \\over k T} \\pm 1} d \\omega\n" }, { "math_id": 4, "text": "\\sigma_{\\omega, \\ell, m}" } ]
https://en.wikipedia.org/wiki?curid=69695925
6969727
High-dimensional model representation
High-dimensional model representation is an expansion for a given "multivariable" function. The expansion was first described by Ilya M. Sobol as formula_0 The method used to determine the right-hand-side functions is given in Sobol's paper. A review can be found here: High Dimensional Model Representation (HDMR): Concepts and Applications. The underlying logic behind the HDMR is to express all variable interactions in a system in a hierarchical order. For instance, formula_1 represents the mean response of the model formula_2. It can be considered as measuring what is left of the model after stripping down all variable effects. The univariate functions formula_3, however, represent the "individual" contributions of the variables. For instance, formula_4 is the portion of the model that can be controlled only by the variable formula_5. For this reason, there cannot be any constant in formula_4, because all constants are expressed in formula_1. Going further into higher interactions, the next level is the bivariate functions formula_6, which represent the cooperative effect of the variables formula_7 and formula_8 together. Similar logic applies here: the bivariate functions contain neither univariate functions nor constants, as that would violate the construction logic of HDMR. As we go to higher interactions, the number of terms increases, until at last we reach the residual term formula_9, which represents the contribution that arises only when all variables act together. HDMR as an Approximation. The hierarchical representation of HDMR brings an advantage if one needs to replace an existing model with a simpler one, usually containing only univariate or bivariate terms. If the target model does not contain higher levels of variable interaction, this approach can yield good approximations, with the additional advantage of providing a clearer view of the variable interactions.
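A simple way to see the first terms of the expansion is by averaging. The following Monte Carlo sketch (Python with NumPy, assumed here) estimates formula_1 as the overall mean of the model and the univariate terms formula_3 as conditional means with formula_1 subtracted. It illustrates the hierarchical construction described above rather than Sobol's full procedure, and the example model is invented purely for the illustration.

```python
# Monte Carlo sketch of the zeroth- and first-order HDMR terms for a function
# on the unit cube: f0 is the overall mean, and f_i(x_i) is the mean over all
# other variables minus f0 (so each f_i has zero mean and no constant part).
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # example model with univariate effects plus one pairwise interaction
    return 1.0 + 2.0 * x[..., 0] + np.sin(x[..., 1]) + x[..., 0] * x[..., 2]

def hdmr_first_order(f, dim, n_samples=200_000):
    x = rng.random((n_samples, dim))
    f0 = f(x).mean()                       # constant term
    def f_i(i, xi, n_inner=50_000):
        z = rng.random((n_inner, dim))
        z[:, i] = xi                       # freeze variable i, average over the rest
        return f(z).mean() - f0
    return f0, f_i

f0, f_i = hdmr_first_order(f, dim=3)
print(round(f0, 3))           # ~ 2.71 for this example model
print(round(f_i(0, 0.9), 3))  # ~ 1.0: individual contribution of x_0 at 0.9
```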
[ { "math_id": 0, "text": "f(\\mathbf{x}) = f_0+\n\\sum_{i=1}^nf_i(x_i)+\n\\sum_{i,j=1 \\atop i<j}^n\nf_{ij}(x_{i},x_{j})+ \\cdots +\nf_{12\\ldots n}(x_1,\\ldots,x_n)." }, { "math_id": 1, "text": "f_0" }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "f_i(x_i)" }, { "math_id": 4, "text": "f_1(x_1)" }, { "math_id": 5, "text": "x_1" }, { "math_id": 6, "text": "f_{ij}(x_i,x_j)" }, { "math_id": 7, "text": "x_i" }, { "math_id": 8, "text": "x_j" }, { "math_id": 9, "text": "f_{12n}(x_1,\\ldots,x_n)" } ]
https://en.wikipedia.org/wiki?curid=6969727
697009
Weighted round robin
Scheduling algorithm for tasks or data flows Weighted round robin (WRR) is a network scheduler for data flows, but also used to schedule processes. Weighted round robin is a generalisation of round-robin scheduling. It serves a set of queues or tasks. Whereas round-robin cycles over the queues or tasks and gives one service opportunity per cycle, weighted round robin offers to each a fixed number of opportunities, as specified by the configured weight, which serves to influence the portion of capacity received by each queue or task. In computer networks, a service opportunity is the emission of one packet, if the selected queue is non-empty. If all packets have the same size, WRR is the simplest approximation of generalized processor sharing (GPS). Several variations of WRR exist. The main ones are the "classical" WRR and the "interleaved" WRR. Algorithm. Principles. WRR is presented in the following as a network scheduler. It can also be used to schedule tasks in a similar way. A weighted round-robin network scheduler has formula_0 input queues, formula_1. To each queue formula_2 is associated formula_3, a positive integer, called the "weight". The WRR scheduler has a cyclic behavior. In each cycle, each queue formula_2 has formula_3 emission opportunities. The different WRR algorithms differ in the distribution of these opportunities within the cycle. Classical WRR. In classical WRR the scheduler cycles over the queues. When a queue formula_2 is selected, the scheduler sends packets, up to the emission of formula_3 packets or the end of the queue. Interleaved WRR. Let formula_4 be the maximum weight. In IWRR, each cycle is split into formula_5 rounds. A queue with weight formula_3 can emit one packet at round formula_6 only if formula_7. Example. Consider a system with three queues formula_8 and respective weights formula_9. Consider a situation where there are 7 packets in the first queue, A,B,C,D,E,F,G, 3 in the second queue, U,V,W, and 2 in the third queue, X,Y. Assume that there are no further packet arrivals. With classical WRR, in the first cycle, the scheduler first selects formula_10 and transmits the five packets at the head of the queue, A,B,C,D,E (since formula_11), then it selects the second queue, formula_12, and transmits the two packets at the head of the queue, U,V (since formula_13), and last it selects the third queue, which has a weight equal to 3 but only two packets, so it transmits X,Y. Immediately after the end of transmission of Y, the second cycle starts, and F,G from formula_10 are transmitted, followed by W from formula_12. With interleaved WRR, the first cycle is split into 5 rounds (since formula_14). In the first one (r=1), one packet from each queue is sent (A,U,X), in the second round (r=2), another packet from each queue is also sent (B,V,Y), in the third round (r=3), only queues formula_15 are allowed to send a packet (formula_16, formula_17 and formula_18), but since formula_19 is empty, only C from formula_10 is sent, and in the fourth and fifth rounds, only D,E from formula_10 are sent. Then the second cycle starts, where F,W,G are sent. Task scheduling. Task or process scheduling can be done in WRR in a way similar to packet scheduling: when considering a set of formula_0 active tasks, they are scheduled in a cyclic way, each task formula_20 getting formula_3 quanta or slices of processor time. Properties. Like round-robin, weighted round robin scheduling is simple, easy to implement, work conserving and starvation-free.
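The two service orders worked out in the example above can be reproduced with a short sketch (Python; the function and variable names are chosen only for this illustration).

```python
# Sketch of the classical and interleaved WRR service orders, for queues
# given as lists of packets and per-queue integer weights.
from collections import deque

def classical_wrr(queues, weights):
    """Classical WRR: in each cycle, queue i may send up to weights[i] packets."""
    queues = [deque(q) for q in queues]
    out = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    out.append(q.popleft())
    return out

def interleaved_wrr(queues, weights):
    """IWRR: each cycle has max(weights) rounds; queue i takes part in round r
    only if r <= weights[i]."""
    queues = [deque(q) for q in queues]
    out = []
    while any(queues):
        for r in range(1, max(weights) + 1):
            for q, w in zip(queues, weights):
                if r <= w and q:
                    out.append(q.popleft())
    return out

q1, q2, q3 = list("ABCDEFG"), list("UVW"), list("XY")
print(classical_wrr([q1, q2, q3], [5, 2, 3]))    # A B C D E U V X Y F G W
print(interleaved_wrr([q1, q2, q3], [5, 2, 3]))  # A U X B V Y C D E F W G
```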
When scheduling packets, if all packets have the same size, then WRR and IWRR are an approximation of Generalized processor sharing: a queue formula_2 will receive a long term part of the bandwidth equal to formula_21 (if all queues are active), while GPS serves infinitesimal amounts of data from each nonempty queue and offers this portion over any interval. If the queues have packets of variable length, the part of the bandwidth received by each queue depends not only on the weights but also on the packet sizes. If a mean packet size formula_22 is known for every queue formula_2, each queue will receive a long term part of the bandwidth equal to formula_23. If the objective is to give each queue formula_2 a portion formula_24 of the link capacity (with formula_25), one may set formula_26. Since IWRR has smaller per-class bursts than WRR, it has smaller worst-case delays. Limitations and improvements. WRR for network packet scheduling was first proposed by Katevenis, Sidiropoulos and Courcoubetis in 1991, specifically for scheduling in ATM networks using fixed-size packets (cells). The primary limitation of weighted round-robin queuing is that it provides the correct percentage of bandwidth to each service class only if all the packets in all the queues are the same size or when the mean packet size is known in advance. In the more general case of IP networks with variable-size packets, to approximate GPS the weight factors must be adjusted based on the packet size. That requires estimation of the average packet size, which makes a good GPS approximation hard to achieve in practice with WRR. Deficit round robin is a later variation of WRR that achieves a better GPS approximation without knowing the mean packet size of each connection in advance. More effective scheduling disciplines, which handle the limitations mentioned above, were also introduced (e.g. weighted fair queuing). References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "q_1,...,q_n" }, { "math_id": 2, "text": "q_i" }, { "math_id": 3, "text": "w_i" }, { "math_id": 4, "text": "w_{max}=\\max\\{ w_i \\}" }, { "math_id": 5, "text": "w_{max}" }, { "math_id": 6, "text": "r" }, { "math_id": 7, "text": "r \\leq w_i" }, { "math_id": 8, "text": "q_1,q_2,q_3" }, { "math_id": 9, "text": "w_1=5,w_2=2,w_3=3" }, { "math_id": 10, "text": "q_1" }, { "math_id": 11, "text": "w_1=5" }, { "math_id": 12, "text": "q_2" }, { "math_id": 13, "text": "w_2=2" }, { "math_id": 14, "text": "max(w_1,w_2,w_3)=5" }, { "math_id": 15, "text": "q_1,q_3" }, { "math_id": 16, "text": "w_1 >= r" }, { "math_id": 17, "text": "w_2 < r" }, { "math_id": 18, "text": "w_3 >= r" }, { "math_id": 19, "text": "q_3" }, { "math_id": 20, "text": "\\tau_i" }, { "math_id": 21, "text": "\\frac{w_i}{\\sum_{j=1}^n w_j}" }, { "math_id": 22, "text": "s_i" }, { "math_id": 23, "text": "\\frac{s_i \\times w_i}{\\sum_{j=1}^n s_j \\times w_j}" }, { "math_id": 24, "text": "\\rho_i" }, { "math_id": 25, "text": "\\sum_{i=1}^n \\rho_i=1" }, { "math_id": 26, "text": "w_i = \\frac{\\rho_i}{s_i}" } ]
https://en.wikipedia.org/wiki?curid=697009
69703634
Nader Masmoudi
Tunisian mathematician Nader Masmoudi (born 1974 in Sfax) is a Tunisian mathematician. Life. He studied in Tunis and then at the École normale supérieure in Paris, obtaining his diploma in 1996. In 1999 he received his doctorate from the University of Paris-Dauphine under Pierre-Louis Lions ("Problemes asymptotiques en mecanique des fluides"). He then went to the Courant Institute at New York University, where he became a professor in 2008. Masmoudi works in particular on nonlinear partial differential equations of hydrodynamics (the Euler equation, the Navier-Stokes equation, surface waves, gravity waves, capillary waves, acoustic waves, boundary layer equations and the qualitative behavior of boundary layers, Couette flow, non-Newtonian fluids, nonlinear Schrödinger equations for waves and other dispersive systems, etc.), the hydrodynamic limit of the Boltzmann equation, the incompressible limit, chemotaxis (the Keller-Segel equations), the Ginzburg-Landau equation, Landau damping, the behavior of mixtures, the general long-term behavior of semilinear systems of partial differential equations, and stability problems in hydrodynamics. In 2013, together with his postdoctoral student Jacob Bedrossian, he gave a rigorous proof of the stability of Couette shear flow for the two-dimensional Euler equations, that is, in the nonlinear case. Stability in the linear approximation had already been shown by Lord Kelvin in 1887 and, more precisely, by William McFadden Orr in 1907. He also obtained similar stability results in the viscous case, for the boundary layers introduced by Prandtl, in the two-dimensional Navier-Stokes equations (the linear theory here is named after Orr and Arnold Sommerfeld: the Orr-Sommerfeld equations). He built on work on a similar problem by Cédric Villani, who dealt with Landau damping, the damping of plasma waves, which in a rigorous mathematical treatment also results from non-viscous phenomena (strong smoothness properties and mixing, sometimes called "inviscid damping", technically involving Gevrey regularity). Possible instabilities also result from nonlinear resonances between different waves in the plasma (nonlinear build-up through the "echoes" of the waves) and must be controlled mathematically in order to prove stability. The behavior of plasmas and of non-viscous liquids described by the Euler equation is similar in this respect. Awards and honours. In 1992 he received a gold medal at the International Mathematical Olympiad (the first African or Arab participant to do so). He was awarded the Fermat Prize for 2017. In 2018 he was an invited speaker at the International Congress of Mathematicians in Rio de Janeiro. In 2021 Masmoudi was elected to the American Academy of Arts and Sciences. In 2022 he was awarded the International King Faisal Prize jointly with Martin Hairer. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathbb R^2" } ]
https://en.wikipedia.org/wiki?curid=69703634
69706395
Karmarkar–Karp bin packing algorithms
Set of related approximation algorithms for the bin packing problem The Karmarkar–Karp (KK) bin packing algorithms are several related approximation algorithms for the bin packing problem. The bin packing problem is a problem of packing items of different sizes into bins of identical capacity, such that the total number of bins is as small as possible. Finding the optimal solution is computationally hard. Karmarkar and Karp devised an algorithm that runs in polynomial time and finds a solution with at most formula_0 bins, where OPT is the number of bins in the optimal solution. They also devised several other algorithms with slightly different approximation guarantees and run-time bounds. The KK algorithms were considered a breakthrough in the study of bin packing: the previously-known algorithms found a multiplicative approximation, where the number of bins was at most formula_1 for some constants formula_2, or at most formula_3. The KK algorithms were the first ones to attain an additive approximation. Input. The input to a bin-packing problem is a set of items of different sizes, "a"1..."an". The following notation is used: "n" - the number of items; "m" - the number of different item sizes; "B" - the bin capacity. Given an instance "I", we denote: OPT("I") - the smallest number of bins needed to pack "I"; FOPT("I") - the total size of all items in "I" divided by "B" (a fractional lower bound on the number of bins). Obviously, FOPT("I") ≤ OPT(I). High-level scheme. The KK algorithms essentially solve the configuration linear program: formula_4. Here, A is a matrix with "m" rows. Each column of A represents a feasible "configuration" - a multiset of item-sizes, such that the sum of all these sizes is at most "B". The set of configurations is "C". x is a vector of size "C." Each element "xc" of x represents the number of times configuration "c" is used. There are two main difficulties in solving this problem. First, it is an integer linear program, which is computationally hard to solve. Second, the number of variables is "C" - the number of configurations, which may be enormous. The KK algorithms cope with these difficulties using several techniques, some of which were already introduced by de-la-Vega and Lueker. Here is a high-level description of the algorithm (where formula_5 is the original instance): remove the small items and add them back at the end; group the remaining items and un-group them at the end; construct the configuration LP of the grouped instance and round its solution; and solve the fractional configuration LP. Below, we describe each of these steps in turn. Step 1. Removing and adding small items. The motivation for removing small items is that, when all items are large, the number of items in each bin must be small, so the number of possible configurations is (relatively) small. We pick some constant "formula_8", and remove from the original instance formula_5 all items smaller than "formula_9". Let formula_6 be the resulting instance. Note that in formula_6, each bin can contain at most "formula_10" items. We pack formula_6 and get a packing with some formula_11 bins. Now, we add the small items into the existing bins in an arbitrary order, as long as there is room. When there is no more room in the existing bins, we open a new bin (as in next-fit bin packing). Let formula_12 be the number of bins in the final packing. Then: formula_13."Proof". If no new bins are opened, then the number of bins remains formula_11. If a new bin is opened, then all bins except maybe the last one contain a total size of at least formula_14, so the total instance size is at least formula_15. Therefore, formula_16, so the optimal solution needs at least formula_17 bins. So formula_18. In particular, by taking "g"=1/"n", we get: formula_19, since formula_20. Therefore, it is common to assume that all items are larger than 1/"n". Step 2. Grouping and un-grouping items.
The motivation for grouping items is to reduce the number of different item sizes, and thereby the number of constraints in the configuration LP. The general grouping process is: order the items by descending size, partition them into groups, and round the sizes in each group up to a common value, so that the grouped instance has fewer distinct sizes; after a packing of the grouped instance is found, each rounded item is replaced by an original item, which is no larger and therefore still fits. There are several different grouping methods. Linear grouping. Let "formula_21" be an integer parameter. Put the largest formula_22 items in group 1; the next-largest formula_22 items in group 2; and so on (the last group might have fewer than formula_22 items). Let formula_6 be the original instance. Let formula_23 be the first group (the group of the formula_22 largest items), and formula_7 the grouped instance "without" the first group. Then: formula_24, since there is at most one distinct size per group; formula_25, since each rounded item of formula_7 is no larger than a distinct item of formula_6 (an item of the preceding group); and formula_26, since formula_23 contains only formula_22 items. Therefore, formula_27. Indeed, given a solution to formula_7 with formula_28 bins, we can get a solution to formula_6 with at most formula_29 bins. Geometric grouping. Let "formula_21" be an integer parameter. Geometric grouping proceeds in two steps: first, partition the instance formula_6 into several instances formula_30, where instance formula_31 contains the items with sizes in the interval formula_32 (since all sizes are at least formula_9, at most formula_33 of these instances are non-empty); second, apply linear grouping with parameter formula_34 to each instance formula_31 separately, obtaining instances formula_35, and let formula_36 and formula_37. Then, the number of different sizes is bounded as follows: formula_38 and formula_39. Since every item in formula_31 has size at least formula_40, formula_41, so formula_42; summing over all "r" gives formula_43. The number of bins is bounded as follows: formula_44, since formula_45 contains formula_34 items, each smaller than formula_46, so 2"r" of them fit into each bin. Therefore formula_47, and formula_48. Alternative geometric grouping. Let "formula_21" be an integer parameter. Order the items by descending size. Partition them into groups such that the total "size" in each group is at least "formula_49". Since the size of each item is less than "B", the number of items in each group is at least "formula_50". The number of items in each group is weakly-increasing. If all items are larger than "formula_9", then the number of items in each group is at most "formula_51". In each group, only the larger items are rounded up. This can be done such that: formula_52 and formula_53. Step 3. Constructing the LP and rounding the solution. We consider the configuration linear program without the integrality constraints: formula_54. Here, we are allowed to use a fractional number of each configuration. Denote the optimal solution of the linear program by LOPT. The following relations are obvious: FOPT("I") ≤ LOPT("I") ≤ OPT("I"). A solution to the fractional LP can be rounded to an integral solution as follows. Suppose we have a basic feasible solution x to the fractional LP, with formula_55, so that the total fractional number of bins is formula_56; such a solution has at most "m" nonzero variables. We round x into a solution for the integral ILP as follows: the integral parts of the variables are kept, and the items corresponding to the fractional parts are re-packed, which requires at most "m"/2 additional bins, giving an integral solution with at most formula_57 bins. This also implies that formula_58. Step 4. Solving the fractional LP. The main challenge in solving the fractional LP is that it may have a huge number of variables - a variable for each possible configuration. The dual LP. The dual linear program of the fractional LP is: formula_59. It has "m" variables formula_60, and "C" constraints - a constraint for each configuration. It has the following economic interpretation. For each size "s", we should determine a nonnegative price "formula_61". Our profit is the total price of all items. We want to maximize the profit n·y subject to the constraints that the total price of items in each configuration is at most 1. This LP now has only "m" variables, but a huge number of constraints. Even listing all the constraints is infeasible. Fortunately, it is possible to solve the problem up to any given precision without listing all the constraints, by using a variant of the ellipsoid method. This variant gets as input a "separation oracle": a function that, given a vector y ≥ 0, returns one of the following two options: either an assertion that y is feasible, or a violated constraint, that is, a configuration (column of A) with formula_63. The ellipsoid method starts with a large ellipsoid that contains the entire feasible domain formula_62. At each step "t", it takes the center formula_64 of the current ellipsoid, and sends it to the separation oracle: if the oracle returns a violated constraint, a "feasibility cut" is made, removing the points that violate the same constraint; if the oracle asserts that formula_64 is feasible, an "optimality cut" is made, removing the points with formula_65, since they are worse than the feasible point already found. After making a cut, we construct a new, smaller ellipsoid. It can be shown that this process converges to an approximate solution, in time polynomial in the required accuracy. A separation oracle for the dual LP.
We are given some "m" non-negative numbers formula_60. We have to decide between the following two options: This problem can be solved by solving a "knapsack problem", where the item values are formula_60, the item weights are formula_66, and the weight capacity is "B" (the bin size). The knapsack problem can be solved by dynamic programming in pseudo-polynomial time: formula_67, where "m" is the number of inputs and "V" is the number of different possible values. To get a polynomial-time algorithm, we can solve the knapsack problem approximately, using input rounding. Suppose we want a solution with tolerance formula_68. We can round each of formula_60 down to the nearest multiple of "formula_68"/"n". Then, the number of possible values between 0 and 1 is "n"/"formula_68", and the run-time is formula_69. The solution is at least the optimal solution minus "formula_68"/"n". Ellipsoid method with an approximate separation oracle. The ellipsoid method should be adapted to use an "approximate" separation oracle. Given the current ellipsoid center formula_64: Using the approximate separation oracle gives a feasible solution y* to the dual LP, with formula_74, after at most "formula_75" iterations, where formula_76. The total run-time of the ellipsoid method with the approximate separation oracle is formula_77. Eliminating constraints. During the ellipsoid method, we use at most "Q" constraints of the form formula_78. All the other constraints can be eliminated, since they have no effect on the outcome y* of the ellipsoid method. We can eliminate even more constraints. It is known that, in any LP with "m" variables, there is a set of "m" constraints that is sufficient for determining the optimal solution (that is, the optimal value is the same even if only these "m" constraints are used). We can repeatedly run the ellipsoid method as above, each time trying to remove a specific set of constraints. If the resulting error is at most formula_68, then we remove these constraints permanently. It can be shown that we need at most formula_79 eliminations, so the accumulating error is at most formula_80. If we try sets of constraints deterministically, then in the worst case, one out of "m" trials succeeds, so we need to run the ellipsoid method at most formula_81 times. If we choose the constraints to remove at random, then the expected number of iterations is formula_82. Finally, we have a reduced dual LP, with only "m" variables and "m" constraints. The optimal value of the reduced LP is at least formula_83, where formula_84. Solving the primal LP. By the LP duality theorem, the minimum value of the primal LP equals the maximum value of the dual LP, which we denoted by LOPT. Once we have a reduced dual LP, we take its dual, and take a reduced primal LP. This LP has only "m" variables - corresponding to only "m" out of "C" configurations. The maximum value of the reduced dual LP is at least formula_83. It can be shown that the optimal solution of the reduced primal LP is at most formula_85. The solution gives a near-optimal bin packing, using at most "m" configurations. The total run-time of the deterministic algorithm, when all items are larger than "formula_9, is:" formula_86, The expected total run-time of the randomized algorithm is: formula_87. End-to-end algorithms. Karmarkar and Karp presented three algorithms, that use the above techniques with different parameters. 
The run-time of all these algorithms depends on a function formula_88, which is a polynomial function describing the time it takes to solve the fractional LP with tolerance "h"=1, which is, for the deterministic version,formula_89. Algorithm 1. Let formula_90 be a constant representing the desired approximation accuracy. All in all, the number of bins is in formula_100 and the run-time is in formula_101. By choosing formula_102 we get formula_103. Algorithm 2. Let formula_104 be a real parameter and formula_105 an integer parameter to be determined later. The run-time is in formula_114. Now, if we choose "k"=2 and "g"=1/FOPT(I), we get:formula_115, and hence:formula_116, so the total number of bins is in formula_117. The run-time is formula_118. The same algorithm can be used with different parameters to trade-off run-time with accuracy. For some parameter formula_119, choose formula_120 and formula_121. Then, the packing needs at most formula_122 bins, and the run-time is in formula_123. Algorithm 3. The third algorithm is useful when the number of sizes "m" is small (see also high-multiplicity bin packing). It uses at most formula_126 bins, and the run-time is in formula_127. Improvements. The KK techniques were improved later, to provide even better approximations. Rothvoss uses the same scheme as Algorithm 2, but with a different rounding procedure in Step 2. He introduced a "gluing" step, in which small items are glued together to yield a single larger item. This gluing can be used to increase the smallest item size to about formula_128. When all sizes are at least formula_128, we can substitute formula_129 in the guarantee of Algorithm 2, and get:formula_130, which yields a formula_131 bins. Hoberg and Rothvoss use a similar scheme in which the items are first packed into "containers", and then the containers are packed into bins. Their algorithm needs at most formula_132 bins. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
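The knapsack-based separation oracle described in Step 4 above can be sketched as follows (Python; integer item sizes are assumed, and the rounding of prices to multiples of formula_68/"n" that makes the oracle polynomial is omitted, so this is an illustration rather than the full Karmarkar–Karp procedure).

```python
# Sketch of the separation oracle for the dual LP: an unbounded-knapsack
# dynamic program over the bin capacity finds a most valuable configuration.
def separation_oracle(prices, sizes, capacity):
    """Find a most valuable configuration (multiset of sizes fitting in one bin).

    prices[i] is the dual price of size sizes[i]. Returns (value, counts), where
    counts[i] is the number of items of size sizes[i] used. If value > 1, the
    configuration is a violated dual constraint; otherwise the prices are feasible.
    """
    m = len(sizes)
    best = [0.0] * (capacity + 1)   # best[c]: maximum total price within capacity c
    take = [-1] * (capacity + 1)    # take[c]: a size index used by an optimum for c
    for c in range(1, capacity + 1):
        best[c], take[c] = best[c - 1], take[c - 1]
        for i in range(m):
            if sizes[i] <= c and best[c - sizes[i]] + prices[i] > best[c]:
                best[c], take[c] = best[c - sizes[i]] + prices[i], i
    counts, c = [0] * m, capacity
    while c > 0 and take[c] != -1:  # walk back through the DP to read off a configuration
        counts[take[c]] += 1
        c -= sizes[take[c]]
    return best[capacity], counts

# Two sizes, 60 and 35, in bins of capacity 100: prices (0.7, 0.4) are infeasible,
# because the configuration {60, 35} has total price 1.1 > 1.
print(separation_oracle([0.7, 0.4], [60, 35], 100))   # value ~1.1, counts [1, 1]
```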
[ { "math_id": 0, "text": "\\mathrm{OPT} + \\mathcal{O}(\\log^2(OPT))" }, { "math_id": 1, "text": "r\\cdot \\mathrm{OPT}+s" }, { "math_id": 2, "text": "r>1, s>0" }, { "math_id": 3, "text": "(1+\\varepsilon)\\mathrm{OPT} + 1" }, { "math_id": 4, "text": "\\text{minimize}~~\\mathbf{1}\\cdot \\mathbf{x}~~~\\text{s.t.}~~ A \\mathbf{x}\\geq \\mathbf{n}~~~\\text{and}~~ \\mathbf{x}\\geq 0~~~\\text{and}~~ \\mathbf{x}~\\text{is an integer}~" }, { "math_id": 5, "text": "I" }, { "math_id": 6, "text": "J" }, { "math_id": 7, "text": "K" }, { "math_id": 8, "text": "g\\in(0,1)" }, { "math_id": 9, "text": "g\\cdot B" }, { "math_id": 10, "text": "1/g" }, { "math_id": 11, "text": "b_J" }, { "math_id": 12, "text": "b_I" }, { "math_id": 13, "text": "b_I \\leq \\max(b_J, (1+2 g)\\cdot OPT(I) + 1)" }, { "math_id": 14, "text": "B - g\\cdot B" }, { "math_id": 15, "text": "(1-g)\\cdot B\\cdot (b_I-1)" }, { "math_id": 16, "text": "FOPT \\geq (1-g)\\cdot (b_I-1)" }, { "math_id": 17, "text": "(1-g)\\cdot (b_I-1)" }, { "math_id": 18, "text": "b_I \\leq OPT/(1-g) + 1 = (1+g+g^2+\\ldots) OPT+1 \\leq (1+2g) OPT+1" }, { "math_id": 19, "text": "b_I \\leq \\max(b_J, OPT +2 \\cdot OPT(I)/n + 1)\\leq \\max(b_J, OPT +3)" }, { "math_id": 20, "text": "OPT(I)\\leq n" }, { "math_id": 21, "text": "k>1" }, { "math_id": 22, "text": "k" }, { "math_id": 23, "text": "K'" }, { "math_id": 24, "text": "m(K) \\leq n/k + 1" }, { "math_id": 25, "text": "OPT(K) \\leq OPT(J)" }, { "math_id": 26, "text": "OPT(K') \\leq k" }, { "math_id": 27, "text": "OPT(J) \\leq OPT(K\\cup K') \\leq OPT(K)+OPT(K') \\leq OPT(K)+k" }, { "math_id": 28, "text": "b_K" }, { "math_id": 29, "text": "b_K+k" }, { "math_id": 30, "text": "J_0, J_1, \\ldots" }, { "math_id": 31, "text": "J_r" }, { "math_id": 32, "text": "[B/2^{r+1}, B/2^r)" }, { "math_id": 33, "text": "\\log_2(1/g)" }, { "math_id": 34, "text": "k\\cdot 2^r" }, { "math_id": 35, "text": "K_r, K'_r" }, { "math_id": 36, "text": "K := \\cup _r K_r" }, { "math_id": 37, "text": "K' := \\cup _r K'_r" }, { "math_id": 38, "text": "m(K'_r) = 1" }, { "math_id": 39, "text": "m(K_r) \\leq n(J_r)/(k\\cdot 2^r) + 1" }, { "math_id": 40, "text": "B/2^{r+1}" }, { "math_id": 41, "text": "n(J_r)\\leq 2^{r+1} \\cdot FOPT(J_r)" }, { "math_id": 42, "text": "m(K_r) \\leq 2 FOPT(J_r)/k + 1" }, { "math_id": 43, "text": "m(K) \\leq 2 FOPT(J)/k + \\log_2(1/g)" }, { "math_id": 44, "text": "OPT(K'_r) \\leq k" }, { "math_id": 45, "text": "K'_r" }, { "math_id": 46, "text": "B/2^r" }, { "math_id": 47, "text": "OPT(K') \\leq k\\cdot \\log_2(1/g)" }, { "math_id": 48, "text": "OPT(J) \\leq OPT(K)+OPT(K')\\leq OPT(K)+k\\cdot \\log_2(1/g)" }, { "math_id": 49, "text": "k\\cdot B" }, { "math_id": 50, "text": "k+1" }, { "math_id": 51, "text": "k/g" }, { "math_id": 52, "text": "m(K) \\leq FOPT(J)/k + \\ln(1/g)" }, { "math_id": 53, "text": "OPT(J) \\leq OPT(K)+2 k\\cdot (2 + \\ln(1/g))" }, { "math_id": 54, "text": "\\text{minimize}~~\\mathbf{1}\\cdot \\mathbf{x}~~~\\text{s.t.}~~ A \\mathbf{x}\\geq \\mathbf{n}~~~\\text{and}~~ \\mathbf{x}\\geq 0" }, { "math_id": 55, "text": "\\sum_{c\\in C} x_c = b_L" }, { "math_id": 56, "text": "b_L" }, { "math_id": 57, "text": "b_L+m/2" }, { "math_id": 58, "text": "OPT(I)\\leq LOPT(I)+m/2" }, { "math_id": 59, "text": "\\text{maximize}~~\\mathbf{n}\\cdot \\mathbf{y}~~~\\text{s.t.}~~ A^T \\mathbf{y} \\leq \\mathbf{1}~~~\\text{and}~~ \\mathbf{y}\\geq 0" }, { "math_id": 60, "text": "y_1,\\ldots,y_m" }, { "math_id": 61, "text": "y_i" }, { "math_id": 62, "text": "A^T \\mathbf{y} \\leq \\mathbf{1}" }, { "math_id": 63, "text": 
"\\mathbf{a}\\cdot \\mathbf{y} > 1" }, { "math_id": 64, "text": "\\mathbf{y}_t" }, { "math_id": 65, "text": "\\mathbf{n}\\cdot \\mathbf{y} < \\mathbf{n}\\cdot \\mathbf{y}_t" }, { "math_id": 66, "text": "s_1,\\ldots,s_m" }, { "math_id": 67, "text": "O(m\\cdot V)" }, { "math_id": 68, "text": "\\delta" }, { "math_id": 69, "text": "O(m n /\\delta)" }, { "math_id": 70, "text": "\\mathbf{z}_t" }, { "math_id": 71, "text": "\\mathbf{n}\\cdot \\mathbf{z}_t \\geq \\mathbf{n}\\cdot \\mathbf{y}_t - \\mathbf{n}\\cdot \\mathbf{1}\\cdot (\\delta/n) = \\mathbf{n}\\cdot \\mathbf{y}_t - \\delta" }, { "math_id": 72, "text": " \\mathbf{y}_t" }, { "math_id": 73, "text": "\\mathbf{n}\\cdot \\mathbf{y} < \\mathbf{n}\\cdot \\mathbf{z}_t+\\delta \\leq \\text{OPT}+\\delta" }, { "math_id": 74, "text": "\\mathbf{n}\\cdot \\mathbf{y^*} \\geq LOPT-\\delta" }, { "math_id": 75, "text": "Q" }, { "math_id": 76, "text": "Q = 4m^2 \\ln(mn/g\\delta)" }, { "math_id": 77, "text": "O(Q m n /\\delta)" }, { "math_id": 78, "text": "\\mathbf{a}\\cdot \\mathbf{y} \\leq 1" }, { "math_id": 79, "text": "\\approx (Q/m) + m\\ln(Q/m)" }, { "math_id": 80, "text": "\\approx \\delta\\cdot [(Q/m) + m\\ln(Q/m)]" }, { "math_id": 81, "text": "\\approx m\\cdot[(Q/m) + m\\ln(Q/m)]\n= Q+m^2\\ln(Q/m)\n" }, { "math_id": 82, "text": " O(m)\\cdot[1 + \\ln(Q/m)]" }, { "math_id": 83, "text": " LOPT-h" }, { "math_id": 84, "text": "h \\approx \\delta\\cdot [(Q/m) + m\\ln(Q/m)]" }, { "math_id": 85, "text": " LOPT+h" }, { "math_id": 86, "text": "O\\left(\\frac{Q m n}{\\delta} \n\\cdot( Q+m^2\\ln\\frac{Q}{m})\n\\right)\n=\nO\n\\left(\\frac{Q^2 m n + Q m^3 n\\ln\\frac{Q}{m}}{\\delta} \n\\right)\n\\approx\nO\\left(m^8 \\ln{m} \\ln^2(\\frac{m n}{g h}) + \\frac{m^4 n \\ln{m}}{h}\\ln(\\frac{m n}{g h}) \\right)" }, { "math_id": 87, "text": "O\\left(m^7 \\log{m} \\log^2(\\frac{m n}{g h}) + \\frac{m^4 n \\log{m}}{h}\\log(\\frac{m n}{g h}) \\right)" }, { "math_id": 88, "text": "T(\\cdot,\\cdot)" }, { "math_id": 89, "text": "T(m,n)\\in O(m^8\\log{m}\\log^2{n} + m^4 n \\log{m}\\log{n} )" }, { "math_id": 90, "text": "\\epsilon>0" }, { "math_id": 91, "text": "g = \\max(1/n, \\epsilon/2)" }, { "math_id": 92, "text": "k = n\\cdot \\epsilon^2" }, { "math_id": 93, "text": "m(K) \\leq n/k + 1 \\approx 1/\\epsilon^2" }, { "math_id": 94, "text": "b_L\\leq LOPT(K)+1" }, { "math_id": 95, "text": "T(m(K),n(K)) \\leq T(\\epsilon^{-2},n) " }, { "math_id": 96, "text": "m(K)/2" }, { "math_id": 97, "text": "b_K\\leq b_L + m(K)/2 \\leq LOPT(K)+1 + 1/2\\epsilon^2" }, { "math_id": 98, "text": "b_J\\leq b_K+k \\leq LOPT(K)+1 + 1/2\\epsilon^2 + n\\cdot \\epsilon^2" }, { "math_id": 99, "text": "b_I \\leq \\max(b_J, (1+2 g)\\cdot OPT(I) + 1)\\leq (1+\\epsilon)OPT(I) + 1/2\\epsilon^2 + 3" }, { "math_id": 100, "text": "(1+\\epsilon)OPT + O(\\epsilon^{-2})" }, { "math_id": 101, "text": "O(n\\log n + T(\\epsilon^{-2},n))" }, { "math_id": 102, "text": "\\epsilon = OPT^{-1/3}" }, { "math_id": 103, "text": "OPT + O(OPT^{2/3})" }, { "math_id": 104, "text": "g>0" }, { "math_id": 105, "text": "k>0" }, { "math_id": 106, "text": "FOPT(J) > 1+\\frac{k}{k-1}\\ln(1/g)" }, { "math_id": 107, "text": "T(m(K),n(K)) \\leq T(FOPT(J)/k + \\ln(1/g) ,n) " }, { "math_id": 108, "text": "2 k\\cdot (2 + \\ln(1/g))" }, { "math_id": 109, "text": "FOPT(J) \\leq 1+\\frac{k}{k-1}\\ln(1/g)" }, { "math_id": 110, "text": "2 FOPT(J) \\leq 2+\\frac{2 k}{k-1}\\ln(1/g)" }, { "math_id": 111, "text": "FOPT(J_{t+1}) \\leq m(K_{t}) \\leq FOPT(J_{t})/k + \\ln(1/g)" }, { "math_id": 112, "text": "\\frac{\\ln FOPT(I)}{\\ln k}+1" }, { 
"math_id": 113, "text": "b_J \\leq OPT(I) + \\left[1+\\frac{\\ln FOPT(I)}{\\ln k}\\right]\\left[1 + 4k + 2k\\ln(1/g)\\right]+ 2 + \\frac{2k}{k-1}\\ln(1/g)" }, { "math_id": 114, "text": "O(n\\log n + T(FOPT(J)/k + \\ln(1/g),n))" }, { "math_id": 115, "text": "b_J \\leq OPT + O(\\log^2(FOPT)) " }, { "math_id": 116, "text": "b_I \\leq \\max(b_J, OPT+2OPT /FOPT+1) \\leq \\max(b_J, OPT+5) \\in OPT+\\log^2(OPT)" }, { "math_id": 117, "text": "OPT + O(\\log^2(FOPT))" }, { "math_id": 118, "text": "O(n\\log n) + T(FOPT/2 + \\ln(FOPT) ,n)\\in O(n\\log{n} + T(FOPT, n)) " }, { "math_id": 119, "text": "\\alpha\\in(0,1)" }, { "math_id": 120, "text": "k=FOPT^{\\alpha}" }, { "math_id": 121, "text": "g=1/FOPT^{1-\\alpha}" }, { "math_id": 122, "text": "\\mathrm{OPT} + \\mathcal{O}(OPT^\\alpha)" }, { "math_id": 123, "text": "O(n\\log{n} + T(FOPT^{(1-\\alpha)},n))" }, { "math_id": 124, "text": "g = \\frac{\\log^2(m)}{FOPT(I)}" }, { "math_id": 125, "text": "m(K)\\leq FOPT(K)" }, { "math_id": 126, "text": "\\mathrm{OPT} + \\mathcal{O}(\\log^2 m)" }, { "math_id": 127, "text": "O(n\\log{n} + T(m,n))" }, { "math_id": 128, "text": "B/\\log^{12}(n)" }, { "math_id": 129, "text": "g = 1/\\log^{12}(n)" }, { "math_id": 130, "text": "b_J \\leq OPT(I) + O(\\log(FOPT)\\log(\\log(n)))" }, { "math_id": 131, "text": "\\mathrm{OPT} + O(\\log(\\mathrm{OPT})\\cdot \\log\\log(\\mathrm{OPT}))" }, { "math_id": 132, "text": "b_J \\leq OPT(I) + O(\\log(OPT))" } ]
https://en.wikipedia.org/wiki?curid=69706395
6970787
Autler–Townes effect
Dynamical Stark effect In spectroscopy, the Autler–Townes effect (also known as the AC Stark effect) is a dynamical Stark effect corresponding to the case when an oscillating electric field (e.g., that of a laser) is tuned in resonance (or close to it) with the transition frequency of a given spectral line, resulting in a change of the shape of the absorption/emission spectra of that spectral line. The AC Stark effect was discovered in 1955 by American physicists Stanley Autler and Charles Townes. It is the AC equivalent of the static Stark effect which splits the spectral lines of atoms and molecules in a constant electric field. Compared to its DC counterpart, the AC Stark effect is computationally more complex. While generally referring to atomic spectral shifts due to AC fields at any (single) frequency, the effect is more pronounced when the field frequency is close to that of a natural atomic or molecular dipole transition. In this case, the alternating field has the effect of splitting the two bare transition states into doublets or "dressed states" that are separated by the Rabi frequency. Alternatively, this can be described as a Rabi oscillation between the bare states which are no longer eigenstates of the atom–field Hamiltonian. The resulting fluorescence spectrum is known as a Mollow triplet. The AC Stark splitting is integral to several phenomena in quantum optics, such as electromagnetically induced transparency and Sisyphus cooling. Vacuum Rabi oscillations have also been described as a manifestation of the AC Stark effect from atomic coupling to the vacuum field. History. The AC Stark effect was discovered in 1955 by American physicists Stanley Autler and Charles Townes while at Columbia University and Lincoln Labs at the Massachusetts Institute of Technology. Before the availability of lasers, the AC Stark effect was observed with radio frequency sources. Autler and Townes' original observation of the effect used a radio frequency source tuned to 12.78 and 38.28 MHz, corresponding to the separation between two doublet microwave absorption lines of OCS. The notion of quasi-energy in treating the general AC Stark effect was later developed by Nikishov and Ritus in 1964 and onward. This more general method of approaching the problem developed into the "dressed atom" model describing the interaction between lasers and atoms. Prior to the 1970s there were various conflicting predictions concerning the fluorescence spectra of atoms due to the AC Stark effect at optical frequencies. In 1974 the observation of Mollow triplets verified the form of the AC Stark effect using visible light. General semiclassical approach. In a semiclassical model where the electromagnetic field is treated classically, a system of charges in a monochromatic electromagnetic field has a Hamiltonian that can be written as: formula_0 where formula_1, formula_2, formula_3 and formula_4 are respectively the position, momentum, mass, and charge of the formula_5-th particle, and formula_6 is the speed of light. The vector potential of the field, formula_7, satisfies formula_8. The Hamiltonian is thus also periodic: formula_9 Now, the Schrödinger equation under a periodic Hamiltonian is a linear homogeneous differential equation with periodic coefficients, formula_10 where formula_11 here represents all coordinates.
Floquet's theorem guarantees that the solutions to an equation of this form can be written as formula_12 Here, formula_13 is the "bare" energy for no coupling to the electromagnetic field, and formula_14 has the same time-periodicity as the Hamiltonian, formula_15 or formula_16 with formula_17 the angular frequency of the field. Because of its periodicity, it is often further useful to expand formula_18 in a Fourier series, obtaining formula_19 or formula_20 where formula_21 is the frequency of the laser field. The solution for the joint particle-field system is, therefore, a linear combination of stationary states of energy formula_22, which is known as a "quasi-energy" state and the new set of energies are called the "spectrum of quasi-harmonics". Unlike the DC Stark effect, where perturbation theory is useful in a general case of atoms with infinite bound states, obtaining even a limited spectrum of shifted energies for the AC Stark effect is difficult in all but simple models, although calculations for systems such as the hydrogen atom have been done. Examples. General expressions for AC Stark shifts must usually be calculated numerically and tend to provide little insight. However, there are important individual examples of the effect that are informative. Analytical solutions in these specific cases are usually obtained assuming the detuning formula_23 is small compared to a characteristic frequency of the radiating system. Two level atom dressing. An atom driven by an electric field with frequency formula_24 close to an atomic transition frequency formula_25 (that is, when formula_26) can be approximated as a two level quantum system since the off resonance states have low occupation probability. The Hamiltonian can be divided into the bare atom term plus a term for the interaction with the field as: formula_27 In an appropriate rotating frame, and making the rotating wave approximation, formula_28 reduces to formula_29 Where formula_30 is the Rabi frequency, and formula_31 are the strongly coupled bare atom states. The energy eigenvalues are formula_32, and for small detuning, formula_33 The eigenstates of the atom-field system or dressed states are dubbed formula_34 and formula_35. The result of the AC field on the atom is thus to shift the strongly coupled bare atom energy eigenstates into two states formula_34 and formula_35 which are now separated by formula_36. Evidence of this shift is apparent in the atom's absorption spectrum, which shows two peaks around the bare transition frequency, separated by formula_30 (Autler-Townes splitting). The modified absorption spectrum can be obtained by a pump-probe experiment, wherein a strong "pump" laser drives the bare transition while a weaker "probe" laser sweeps for a second transition between a third atomic state and the dressed states. Another consequence of the AC Stark splitting here is the appearance of Mollow triplets, a triple peaked fluorescence profile. Historically an important confirmation of Rabi flopping, they were first predicted by Mollow in 1969 and confirmed in the 1970s experimentally. Optical Dipole Trap (Far-Off-Resonance Trap). For ultracold atoms experiments utilizing the optical dipole force from AC Stark shift, the light is usually linearly polarized to avoid the splitting of different magnetic substates with different formula_37, and the light frequency is often far detuned from the atomic transition to avoid heating the atoms from the photon-atom scattering; in turn, the intensity of the light field (i.e. 
AC electric field) formula_38 is typically high to compensate for the large detuning. Typically, we have formula_39, where the atomic transition has a natural linewidth formula_40 and a saturation intensity: formula_41 Note that the above expression for the saturation intensity does not apply to all cases. For example, it applies to the D2 line transition of Li-6, but not to the D1 line, which obeys a different sum rule in calculating the oscillator strength. As a result, the D1 line has a saturation intensity 3 times larger than the D2 line. However, when the detuning from these two lines is much larger than the fine-structure splitting, the overall saturation intensity takes the value of the D2 line. In the case where the light's detuning is comparable to the fine-structure splitting but still much larger than the hyperfine splitting, the D2 line contributes twice as much dipole potential as the D1 line. The optical dipole potential is therefore: formula_42 Here, the Rabi frequency formula_30 is related to the (dimensionless) saturation parameter formula_43, and formula_44 is the real part of the complex polarizability of the atom, with its imaginary counterpart representing the dissipative optical scattering force. The factor of 1/2 takes into account that the dipole moment is an induced, not a permanent one. When formula_26, the rotating wave approximation applies, and the counter-rotating term proportional to formula_45 can be omitted; however, in some cases, the ODT light is so far detuned that the counter-rotating term must be included in calculations, as well as contributions from adjacent atomic transitions with appreciable linewidth formula_40. Note that the natural linewidth formula_40 here is in radians per second, and is the inverse of the lifetime formula_46. This is the principle of operation of the optical dipole trap (ODT, also known as a far-off-resonance trap, FORT), in which the light is red-detuned (formula_47). When blue-detuned, the light beam provides a potential bump/barrier instead. The optical dipole potential is often expressed in terms of the recoil energy, which is the kinetic energy imparted to an atom initially at rest by "recoil" during the spontaneous emission of a photon: formula_48 where formula_49 is the wavevector of the ODT light (formula_50 when detuned). The recoil energy and the related recoil frequency formula_51 are crucial parameters in understanding the dynamics of atoms in light fields, especially in the context of atom optics and momentum transfer. In applications that utilize the optical dipole force, it is common practice to use a far-off-resonance light frequency. This is because a smaller detuning would increase the photon-atom scattering rate much faster than it increases the dipole potential energy, leading to undesirable heating of the atoms. Quantitatively, the scattering rate is given by: formula_52 Adiabatic elimination. In a quantum system with three (or more) states, where a transition from one level formula_53 to another formula_54 can be driven by an AC field, but formula_54 decays only to states other than formula_53, the dissipative influence of the spontaneous decay can be eliminated. This is achieved by increasing the AC Stark shift on formula_53 through large detuning and by raising the intensity of the driving field. Adiabatic elimination has been used to create comparatively stable effective two-level systems in Rydberg atoms, which are of interest for qubit manipulations in quantum computing.
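The dressed-state energies quoted above can be checked by direct diagonalization. The following sketch (Python with NumPy, assumed here; units with ħ = 1) builds the rotating-frame two-level Hamiltonian formula_29 and compares its eigenvalues with the closed-form expression formula_32; the numerical values of the detuning and Rabi frequency are arbitrary illustrative choices.

```python
# Sketch: diagonalize the rotating-frame two-level Hamiltonian and compare
# with the dressed-state energies E± = -Delta/2 ± sqrt(Omega^2 + Delta^2)/2
# (hbar = 1). The splitting between the two dressed states is sqrt(Omega^2+Delta^2).
import numpy as np

def dressed_energies(delta, omega_rabi):
    # basis (|e>, |g>): the excited state carries the -Delta term,
    # and Omega/2 couples the two bare states
    H = np.array([[-delta,           omega_rabi / 2.0],
                  [omega_rabi / 2.0, 0.0]])
    return np.sort(np.linalg.eigvalsh(H))

delta, omega = 0.3, 2.0
numeric = dressed_energies(delta, omega)
analytic = np.sort([-delta / 2 + np.sqrt(omega**2 + delta**2) / 2,
                    -delta / 2 - np.sqrt(omega**2 + delta**2) / 2])
print(numeric, analytic)          # identical up to rounding
assert np.allclose(numeric, analytic)
```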
Electromagnetically induced transparency. Electromagnetically induced transparency (EIT), which gives some materials a small transparent area within an absorption line, can be thought of as a combination of Autler-Townes splitting and Fano interference, although the distinction may be difficult to determine experimentally. While both Autler-Townes splitting and EIT can produce a transparent window in an absorption band, EIT refers to a window that maintains transparency in a weak pump field, and thus requires Fano interference. Because Autler-Townes splitting will wash out Fano interference at stronger fields, a smooth transition between the two effects is evident in materials exhibiting EIT. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\nH=\\sum_i \\frac{1}{2m_i}\\left[\\mathbf{p}_i-\\frac{q_i}{c}\\mathbf{A}(\\mathbf{r}_i, t)\\right]^2\n+V(\\mathbf{r}_i),\n" }, { "math_id": 1, "text": "\\mathbf{r}_i \\," }, { "math_id": 2, "text": "\\mathbf{p}_i \\," }, { "math_id": 3, "text": "m_i \\," }, { "math_id": 4, "text": "q_i \\," }, { "math_id": 5, "text": "i\\," }, { "math_id": 6, "text": "c \\," }, { "math_id": 7, "text": "\\mathbf{A}" }, { "math_id": 8, "text": "\\mathbf{A}(t+\\tau)=\\mathbf{A}(t)" }, { "math_id": 9, "text": "\nH(t+\\tau) = H(t).\n" }, { "math_id": 10, "text": "\ni\\hbar \\frac{\\partial}{\\partial t} \\psi(\\mathbf{\\xi},t) = H(t)\\psi(\\mathbf{\\xi},t),\n" }, { "math_id": 11, "text": "\\xi" }, { "math_id": 12, "text": "\n\\psi(\\mathbf{\\xi},t) = \\exp[-iE_bt/\\hbar]\\phi(\\mathbf{\\xi},t).\n" }, { "math_id": 13, "text": "E_b" }, { "math_id": 14, "text": "\\phi\\," }, { "math_id": 15, "text": "\n\\phi(\\mathbf{\\xi},t+\\tau) = \\phi(\\mathbf{\\xi},t)\n" }, { "math_id": 16, "text": "\n\\phi(\\mathbf{\\xi},t+2\\pi/\\omega) = \\phi(\\mathbf{\\xi},t)\n" }, { "math_id": 17, "text": "\\omega=2\\pi/\\tau" }, { "math_id": 18, "text": "\\phi(\\mathbf{\\xi},t)" }, { "math_id": 19, "text": "\n\\psi(\\mathbf{\\xi},t) = \\exp[-iE_bt/\\hbar]\n\\sum_{k=-\\infty}^{\\infty}C_k(\\mathbf{\\xi})\\exp[-ik\\omega t]\n" }, { "math_id": 20, "text": "\n\\psi(\\mathbf{\\xi},t) = \\sum_{k=-\\infty}^{\\infty}C_k(\\mathbf{\\xi})\\exp[-i(E_b+k\\hbar\\omega)t/\\hbar]\n" }, { "math_id": 21, "text": "\\omega =2\\pi/T\\," }, { "math_id": 22, "text": "E_b+k\\hbar\\omega" }, { "math_id": 23, "text": "\\Delta\\equiv (\\omega-\\omega_0)" }, { "math_id": 24, "text": "\\omega" }, { "math_id": 25, "text": "\\omega_0" }, { "math_id": 26, "text": "|\\Delta| \\ll \\omega_0" }, { "math_id": 27, "text": "\n\\hat{H} = \\hat{H}_\\mathrm{A} + \\hat{H}_\\mathrm{int}.\n" }, { "math_id": 28, "text": "\\hat{H}" }, { "math_id": 29, "text": "\n\\hat{H}=-\\hbar\\Delta|e\\rangle\\langle e|+\\frac{\\hbar\\Omega}{2}(|e\\rangle\\langle g|+|g\\rangle\n\\langle e|).\n" }, { "math_id": 30, "text": "\\Omega" }, { "math_id": 31, "text": "|g\\rangle, |e\\rangle" }, { "math_id": 32, "text": "\nE_{\\pm}=\\frac{-\\hbar\\Delta}{2}\\pm\\frac{\\hbar\\sqrt{\\Omega^2+\\Delta^2}}{2}\n" }, { "math_id": 33, "text": "\nE_{\\pm}\\approx\\pm\\frac{\\hbar\\Omega}{2}.\n" }, { "math_id": 34, "text": "|+\\rangle" }, { "math_id": 35, "text": "|-\\rangle" }, { "math_id": 36, "text": "\\hbar\\Omega" }, { "math_id": 37, "text": "m_F" }, { "math_id": 38, "text": "I(\\mathbf{r})" }, { "math_id": 39, "text": "|\\Delta| /\\Gamma\\gg I/I_{sat} \\gg 1" }, { "math_id": 40, "text": "\\Gamma" }, { "math_id": 41, "text": "I_\\text{sat} = \\frac{\\pi}{3}{h c \\Gamma \\over \\lambda_0^3}=\\frac{\\pi}{3}{h c \\over {\\lambda_0^3 \\tau}}\\,." 
}, { "math_id": 42, "text": "\n\\begin{align}\nU_\\text{dipole}(\\mathbf{r})&=-\\frac{\\hbar \\Omega^2}{4}\\left(\\frac{1}{\\omega_0-\\omega}+\\frac{1}{\\omega_0+\\omega}\\right)\\\\\n&=-\\frac{\\hbar \\Gamma^2}{8I_{sat}}\\left(\\frac{1}{\\omega_0-\\omega}+\\frac{1}{\\omega_0+\\omega}\\right)I(\\mathbf{r})\\\\\n&=-\\frac{\\text{Re}[\\alpha(\\omega)]}{2\\varepsilon_0c}I(\\mathbf{r})=-\\frac{1}{2}\\langle \\mathbf{p\\cdot E}\\rangle.\n\\end{align}\n" }, { "math_id": 43, "text": "s\\equiv\\frac{I(\\mathbf{r})}{I_\\text{sat}}=\\frac{2\\Omega^2}{\\Gamma^2}" }, { "math_id": 44, "text": "\n\\text{Re}[\\alpha(\\omega)]\n" }, { "math_id": 45, "text": "1/(\\omega_0+\\omega)" }, { "math_id": 46, "text": "\\tau" }, { "math_id": 47, "text": "\\Delta <0" }, { "math_id": 48, "text": "\\varepsilon_{recoil}=\\frac{\\hbar^2k^2}{2m}," }, { "math_id": 49, "text": "k" }, { "math_id": 50, "text": "k\\neq k_0" }, { "math_id": 51, "text": "\\omega_{recoil}=\\varepsilon_{recoil}/\\hbar" }, { "math_id": 52, "text": "\nR_{sc}=\\frac{U(\\mathbf{r})}{\\hbar \\Delta/\\Gamma}\\,.\n" }, { "math_id": 53, "text": "|g\\rangle" }, { "math_id": 54, "text": "|e\\rangle" } ]
https://en.wikipedia.org/wiki?curid=6970787
697155
Hierarchy problem
Unsolved problem in physics In theoretical physics, the hierarchy problem is the problem concerning the large discrepancy between aspects of the weak force and gravity. There is no scientific consensus on why, for example, the weak force is 10²⁴ times stronger than gravity. Technical definition. A hierarchy problem occurs when the fundamental value of some physical parameter, such as a coupling constant or a mass, in some Lagrangian is vastly different from its effective value, which is the value that gets measured in an experiment. This happens because the effective value is related to the fundamental value by a prescription known as renormalization, which applies corrections to it. Typically the renormalized values of parameters are close to their fundamental values, but in some cases, it appears that there has been a delicate cancellation between the fundamental quantity and the quantum corrections. Hierarchy problems are related to fine-tuning problems and problems of naturalness. Over the past decade many scientists have argued that the hierarchy problem is a specific application of Bayesian statistics. Studying renormalization in hierarchy problems is difficult, because such quantum corrections are usually power-law divergent, which means that the shortest-distance physics is most important. Because we do not know the precise details of quantum gravity, we cannot even address how this delicate cancellation between two large terms occurs. Therefore, researchers are led to postulate new physical phenomena that resolve hierarchy problems without fine-tuning. Overview. Suppose a physics model requires four parameters to produce a very high-quality working model capable of generating predictions regarding some aspect of our physical universe. Suppose we find through experiments that the parameters have values: 1.2, 1.31, 0.9 and a value near . One might wonder how such figures arise. But in particular, one might be especially curious about a theory where three values are close to one while the fourth is so different; in other words, about the huge disproportion we seem to find between the first three parameters and the fourth. We might also wonder, if one force is so much weaker than the others that it needs a factor of to allow it to be related to them in terms of effects, how our universe came to be so exactly balanced when its forces emerged. In current particle physics, the differences between some parameters are much larger than this, so the question is even more noteworthy. One answer given by philosophers is the anthropic principle. If the universe came to exist by chance, and perhaps vast numbers of other universes exist or have existed, then life capable of physics experiments only arose in universes that, by chance, had very balanced forces. Universes where the forces were not balanced did not develop life capable of asking this question. So if lifeforms like human beings are aware and capable of asking such a question, humans must have arisen in a universe having balanced forces, however rare that might be. A second possible answer is that there is a deeper understanding of physics that we currently do not possess. There might be parameters that we can derive physical constants from that have less unbalanced values, or there might be a model with fewer parameters. Examples in particle physics. Higgs mass. In particle physics, the most important hierarchy problem is the question that asks why the weak force is 10²⁴ times as strong as gravity.
Both of these forces involve constants of nature, the Fermi constant for the weak force and the Newtonian constant of gravitation for gravity. Furthermore, if the Standard Model is used to calculate the quantum corrections to Fermi's constant, it appears that Fermi's constant is surprisingly large and is expected to be closer to Newton's constant unless there is a delicate cancellation between the bare value of Fermi's constant and the quantum corrections to it. More technically, the question is why the Higgs boson is so much lighter than the Planck mass (or the grand unification energy, or a heavy neutrino mass scale): one would expect that the large quantum contributions to the square of the Higgs boson mass would inevitably make the mass huge, comparable to the scale at which new physics appears, unless there is an incredible fine-tuning cancellation between the quadratic radiative corrections and the bare mass. The problem cannot even be formulated in the strict context of the Standard Model, for the Higgs mass cannot be calculated. In a sense, the problem amounts to the worry that a future theory of fundamental particles, in which the Higgs boson mass will be calculable, should not have excessive fine-tunings. Theoretical solutions. Many solutions have been proposed. Supersymmetry. Some physicists believe that one may solve the hierarchy problem via supersymmetry. Supersymmetry can explain how a tiny Higgs mass can be protected from quantum corrections. Supersymmetry removes the power-law divergences of the radiative corrections to the Higgs mass and solves the hierarchy problem as long as the supersymmetric particles are light enough to satisfy the Barbieri–Giudice criterion. This still leaves open the mu problem, however. The tenets of supersymmetry are being tested at the LHC, although no evidence has been found so far for supersymmetry. Each particle that couples to the Higgs field has an associated Yukawa coupling λf. The coupling with the Higgs field for fermions gives an interaction term formula_0, with formula_1 being the Dirac field and formula_2 the Higgs field. Also, the mass of a fermion is proportional to its Yukawa coupling, meaning that the Higgs boson will couple most to the most massive particle. This means that the most significant corrections to the Higgs mass will originate from the heaviest particles, most prominently the top quark. By applying the Feynman rules, one gets the quantum corrections to the Higgs mass squared from a fermion to be: formula_3 Here formula_4 is called the ultraviolet cutoff and is the scale up to which the Standard Model is valid. If we take this scale to be the Planck scale, then the quadratically divergent correction to the Higgs mass squared becomes enormous. However, suppose there existed two complex scalars (taken to be spin 0) such that: formula_5 (the couplings to the Higgs are exactly the same). Then by the Feynman rules, the correction (from both scalars) is: formula_6 This gives a total contribution to the Higgs mass of zero if we include both the fermionic and bosonic particles. Supersymmetry is an extension of this that creates 'superpartners' for all Standard Model particles. Conformal. Without supersymmetry, a solution to the hierarchy problem has been proposed using just the Standard Model. The idea can be traced back to the fact that the term in the Higgs field that produces the uncontrolled quadratic correction upon renormalization is the quadratic one. 
If the Higgs field had no mass term, then no hierarchy problem arises. But if the quadratic term in the Higgs field is absent, one must find a way to recover the breaking of electroweak symmetry through a non-null vacuum expectation value. This can be obtained using the Weinberg–Coleman mechanism with terms in the Higgs potential arising from quantum corrections. Mass obtained in this way is far too small with respect to what is seen in accelerator facilities, and so a conformal Standard Model needs more than one Higgs particle. This proposal was put forward in 2006 by Krzysztof Antoni Meissner and Hermann Nicolai and is currently under scrutiny. But if no further excitation is observed beyond the one seen so far at LHC, this model would have to be abandoned. Extra dimensions. No experimental or observational evidence of extra dimensions has been officially reported. Analyses of results from the Large Hadron Collider severely constrain theories with large extra dimensions. However, extra dimensions could explain why the gravitational force is so weak, and why the expansion of the universe is faster than expected. If we live in a 3+1 dimensional world, then we calculate the gravitational force via Gauss's law for gravity: formula_7 (1) which is simply Newton's law of gravitation. Note that Newton's constant "G" can be rewritten in terms of the Planck mass. formula_8 If we extend this idea to formula_9 extra dimensions, then we get: formula_10 (2) where formula_11 is the 3+1+formula_9 dimensional Planck mass. However, we are assuming that these extra dimensions are the same size as the normal 3+1 dimensions. Let us say instead that the extra dimensions are of size "n", much smaller than the normal dimensions. If we let "r" ≪ "n", then we get (2). If instead we let "r" ≫ "n", then we get our usual Newton's law: when "r" ≫ "n", the flux in the extra dimensions becomes a constant, because there is no extra room for gravitational flux to flow through. Thus the flux will be proportional to formula_12 because this is the flux in the extra dimensions. The formula is: formula_13 formula_14 which gives: formula_15 formula_16 Thus the fundamental Planck mass (the extra-dimensional one) could actually be small, meaning that gravity is actually strong, but this must be compensated by the number of the extra dimensions and their size. Physically, this means that gravity is weak because there is a loss of flux to the extra dimensions. This section is adapted from "Quantum Field Theory in a Nutshell" by A. Zee. Braneworld models. In 1998 Nima Arkani-Hamed, Savas Dimopoulos, and Gia Dvali proposed the ADD model, also known as the model with large extra dimensions, an alternative scenario to explain the weakness of gravity relative to the other forces. This theory requires that the fields of the Standard Model are confined to a four-dimensional membrane, while gravity propagates in several additional spatial dimensions that are large compared to the Planck scale. In 1998–99 Merab Gogberashvili published on arXiv (and subsequently in peer-reviewed journals) a number of articles where he showed that if the Universe is considered as a thin shell (a mathematical synonym for "brane") expanding in 5-dimensional space then it is possible to obtain one scale for particle theory corresponding to the 5-dimensional cosmological constant and Universe thickness, and thus to solve the hierarchy problem. 
It was also shown that the four-dimensionality of the Universe is the result of a stability requirement, since the extra component of the Einstein field equations giving the localized solution for matter fields coincides with one of the conditions of stability. Subsequently, the closely related Randall–Sundrum scenarios were proposed, which offered their own solution to the hierarchy problem. UV/IR mixing. In 2019, a pair of researchers proposed that IR/UV mixing resulting in the breakdown of the effective quantum field theory could resolve the hierarchy problem. In 2021, another group of researchers showed that UV/IR mixing could resolve the hierarchy problem in string theory. Cosmological constant. In physical cosmology, current observations in favor of an accelerating universe imply the existence of a tiny but nonzero cosmological constant. This problem, called the cosmological constant problem, is a hierarchy problem very similar to the Higgs boson mass problem, since the cosmological constant is also very sensitive to quantum corrections, but it is complicated by the necessary involvement of general relativity in the problem. Proposed solutions to the cosmological constant problem include modifying and/or extending gravity, adding matter with non-vanishing pressure, and UV/IR mixing in the Standard Model and gravity. Some physicists have resorted to anthropic reasoning to solve the cosmological constant problem, but it is disputed whether anthropic reasoning is scientific. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
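As a rough numerical illustration of the large-extra-dimensions relation formula_16 derived above, the Python sketch below solves for the common size "n" of formula_9 equal-sized extra dimensions required to reproduce the observed four-dimensional Planck mass when the fundamental higher-dimensional Planck mass is assumed to sit near the weak scale. The choice of 1 TeV for that fundamental scale, the use of the full (non-reduced) Planck mass, and the natural-unit conversion factor are illustrative assumptions, not values taken from the text above.

```python
# Illustrative sketch: size n of delta equal extra dimensions needed so that
#   M_Pl^2 = M_fund^(2+delta) * n^delta   (the relation derived above),
# i.e. n = (M_Pl^2 / M_fund^(2+delta))^(1/delta), evaluated in natural units.

HBAR_C_GEV_M = 1.973e-16     # hbar*c in GeV*m, converts 1/GeV to metres
M_PLANCK_GEV = 1.22e19       # observed 4D Planck mass in GeV
M_FUND_GEV = 1.0e3           # assumed fundamental (higher-dimensional) scale, ~1 TeV

def extra_dimension_size(delta, m_fund=M_FUND_GEV):
    """Required size of each of the delta extra dimensions, in metres."""
    n_in_inverse_gev = (M_PLANCK_GEV**2 / m_fund**(2 + delta)) ** (1.0 / delta)
    return n_in_inverse_gev * HBAR_C_GEV_M

for delta in (1, 2, 3, 6):
    print(f"delta = {delta}:  n ~ {extra_dimension_size(delta):.2e} m")
```

With these conventions, formula_9 = 1 would require an extra dimension of astronomical size (clearly excluded), while formula_9 = 2 gives a size of the order of a millimetre, which is why short-distance tests of Newton's law place strong constraints on such scenarios; larger formula_9 pushes the required size down rapidly.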
[ { "math_id": 0, "text": "\\mathcal{L}_{\\mathrm{Yukawa}}=-\\lambda_f\\bar{\\psi}H\\psi" }, { "math_id": 1, "text": "\\psi" }, { "math_id": 2, "text": "H" }, { "math_id": 3, "text": "\\Delta m_{\\rm H}^{2} = - \\frac{\\left|\\lambda_{f} \\right|^2}{8\\pi^2} [\\Lambda_{\\mathrm{UV}}^2+ ...]." }, { "math_id": 4, "text": "\\Lambda_{\\mathrm{UV}}" }, { "math_id": 5, "text": "\\lambda_S= \\left|\\lambda_f\\right|^2" }, { "math_id": 6, "text": "\\Delta m_{\\rm H}^{2} = 2 \\times \\frac{\\lambda_{S}}{16\\pi^2} [\\Lambda_{\\mathrm{UV}}^2+ ...]." }, { "math_id": 7, "text": "\\mathbf{g}(\\mathbf{r}) = -Gm\\frac{\\mathbf{e_r}}{r^2}" }, { "math_id": 8, "text": "G=\\frac{\\hbar c}{M_{\\mathrm{Pl}}^{2}}" }, { "math_id": 9, "text": "\\delta" }, { "math_id": 10, "text": "\\mathbf{g}(\\mathbf{r}) = -m\\frac{\\mathbf{e_r}}{M_{\\mathrm{Pl}_{3+1+\\delta}}^{2+\\delta}r^{2+\\delta}}" }, { "math_id": 11, "text": "M_{\\mathrm{Pl}_{3+1+\\delta}}" }, { "math_id": 12, "text": " n^{\\delta} " }, { "math_id": 13, "text": "\\mathbf{g}(\\mathbf{r}) = -m\\frac{\\mathbf{e_r}}{M_{\\mathrm{Pl}_{3+1+\\delta}}^{2+\\delta}r^2 n^{\\delta}}" }, { "math_id": 14, "text": "-m\\frac{\\mathbf{e_r}}{M_{\\mathrm{Pl}}^2 r^2} = -m\\frac{\\mathbf{e_r}}{M_{\\mathrm{Pl}_{3+1+\\delta}}^{2+\\delta}r^2 n^{\\delta}}" }, { "math_id": 15, "text": " \\frac{1}{M_{\\mathrm{Pl}}^2 r^2} = \\frac{1}{M_{\\mathrm{Pl}_{3+1+\\delta}}^{2+\\delta}r^2 n^{\\delta}} \\Rightarrow " }, { "math_id": 16, "text": " M_{\\mathrm{Pl}}^2 = M_{\\mathrm{Pl}_{3+1+\\delta}}^{2+\\delta} n^{\\delta}. " } ]
https://en.wikipedia.org/wiki?curid=697155
69715647
Ruler function
Two closely related series in number theory In number theory, the ruler function of an integer formula_0 can be either of two closely related functions. One of these functions counts the number of times formula_0 can be evenly divided by two, which for the numbers 1, 2, 3, ... is &lt;templatestyles src="Block indent/styles.css"/&gt;0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, ... (sequence A007814 in the OEIS). Alternatively, the ruler function can be defined as the same numbers plus one, which for the numbers 1, 2, 3, ... produces the sequence &lt;templatestyles src="Block indent/styles.css"/&gt;1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, 5, ... (sequence A001511 in the OEIS). As well as being related by adding one, these two sequences are related in a different way: the second one can be formed from the first one by removing all the zeros, and the first one can be formed from the second one by adding zeros at the start and between every pair of numbers. For either definition of the ruler function, the rising and falling patterns of the values of this function resemble the lengths of marks on rulers with traditional units such as inches. These functions should be distinguished from Thomae's function, a function on real numbers which behaves similarly to the ruler function when restricted to the dyadic rational numbers. In advanced mathematics, the 0-based ruler function is the 2-adic valuation of the number, and the lexicographically earliest infinite square-free word over the natural numbers. It also gives the position of the bit that changes at each step of the Gray code. In the Tower of Hanoi puzzle, with the disks of the puzzle numbered in order by their size, the 1-based ruler function gives the number of the disk to move at each step in an optimal solution to the puzzle. A simulation of the puzzle, in conjunction with other methods for generating its optimal sequence of moves, can be used in an algorithm for generating the sequence of values of the ruler function in constant time per value. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
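The definitions above translate directly into a few lines of code. The following Python sketch, which is only an illustration and not part of the article, computes the 0-based and 1-based ruler functions with a bit trick and checks two of the connections stated above: the 0-based value is the index of the bit that changes between consecutive Gray codes, and the 1-based value is the number of the disk moved at each step of an optimal Tower of Hanoi solution.

```python
def ruler0(n):
    """0-based ruler function: the 2-adic valuation of n (n >= 1)."""
    return (n & -n).bit_length() - 1

def ruler1(n):
    """1-based ruler function: ruler0(n) + 1."""
    return ruler0(n) + 1

print([ruler0(n) for n in range(1, 17)])   # 0 1 0 2 0 1 0 3 0 1 0 2 0 1 0 4
print([ruler1(n) for n in range(1, 17)])   # 1 2 1 3 1 2 1 4 1 2 1 3 1 2 1 5

# Gray code check: the bit that flips between gray(n-1) and gray(n) is bit ruler0(n).
gray = lambda n: n ^ (n >> 1)
assert all(gray(n - 1) ^ gray(n) == 1 << ruler0(n) for n in range(1, 1025))

# Tower of Hanoi check: the disk moved at step i of an optimal solution is ruler1(i),
# with disks numbered 1 (smallest) to k (largest).
def hanoi_disks(k):
    """Sequence of disk numbers moved in the optimal k-disk Tower of Hanoi solution."""
    if k == 0:
        return []
    return hanoi_disks(k - 1) + [k] + hanoi_disks(k - 1)

moves = hanoi_disks(5)
assert moves == [ruler1(i) for i in range(1, len(moves) + 1)]
```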
[ { "math_id": 0, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=69715647
69720399
Rig category
In category theory, a rig category (also known as bimonoidal category or 2-rig) is a category equipped with two monoidal structures, one distributing over the other. Definition. A rig category is given by a category formula_0 equipped with: a symmetric monoidal structure formula_1; a monoidal structure formula_2; distributing natural isomorphisms formula_3 and formula_4; and annihilating (or absorbing) natural isomorphisms formula_5 and formula_6. Those structures are required to satisfy a number of coherence conditions relating the two products formula_7 and formula_8. Strictification. Requiring all isomorphisms involved in the definition of a rig category to be strict does not give a useful definition, as it implies an equality formula_9 which signals a degenerate structure. However it is possible to turn most of the isomorphisms involved into equalities. A rig category is semi-strict if the two monoidal structures involved are strict, both of its annihilators are equalities and one of its distributors is an equality. Any rig category is equivalent to a semi-strict one. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
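A standard concrete example of a rig category is the category of finite sets, with disjoint union as the additive structure formula_7 and Cartesian product as the multiplicative structure formula_8. The Python sketch below is only an illustration of this example, not part of the definition: it exhibits the distributor formula_3 for finite sets as an explicit bijection between A × (B ⊎ C) and (A × B) ⊎ (A × C).

```python
from itertools import product

# Model a disjoint union A ⊎ B as tagged elements: (0, a) for a in A, (1, b) for b in B.
def disjoint_union(A, B):
    return [(0, a) for a in A] + [(1, b) for b in B]

# The distributor delta_{A,B,C}: A x (B ⊎ C) -> (A x B) ⊎ (A x C),
# sending (a, (0, b)) to (0, (a, b)) and (a, (1, c)) to (1, (a, c)).
def distributor(pair):
    a, (tag, x) = pair
    return (tag, (a, x))

def inverse_distributor(pair):
    tag, (a, x) = pair
    return (a, (tag, x))

A, B, C = ["a1", "a2"], ["b1"], ["c1", "c2", "c3"]

domain = list(product(A, disjoint_union(B, C)))                        # A x (B ⊎ C)
codomain = disjoint_union(list(product(A, B)), list(product(A, C)))    # (A x B) ⊎ (A x C)

image = [distributor(p) for p in domain]
assert sorted(image) == sorted(codomain)                               # bijective onto the codomain
assert all(inverse_distributor(distributor(p)) == p for p in domain)   # and invertible
```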
[ { "math_id": 0, "text": "\\mathbf C" }, { "math_id": 1, "text": "(\\mathbf C, \\oplus, O)" }, { "math_id": 2, "text": "(\\mathbf C, \\otimes, I)" }, { "math_id": 3, "text": "\\delta_{A,B,C} : A \\otimes (B \\oplus C) \\simeq (A \\otimes B) \\oplus (A \\otimes C)" }, { "math_id": 4, "text": "\\delta'_{A,B,C} : (A \\oplus B) \\otimes C \\simeq (A \\otimes C) \\oplus (B \\otimes C)" }, { "math_id": 5, "text": "a_A : O \\otimes A \\simeq O" }, { "math_id": 6, "text": "a'_A : A \\otimes O \\simeq O" }, { "math_id": 7, "text": "\\oplus" }, { "math_id": 8, "text": "\\otimes" }, { "math_id": 9, "text": "A \\oplus B = B \\oplus A" } ]
https://en.wikipedia.org/wiki?curid=69720399
6972600
Single-index model
Economic model The single-index model (SIM) is a simple asset pricing model to measure both the risk and the return of a stock. The model was developed by William Sharpe in 1963 and is commonly used in the finance industry. Mathematically the SIM is expressed as: formula_0 formula_1 where: "rit" is the return to stock "i" in period "t"; "rf" is the risk-free rate (i.e. the interest rate on treasury bills); "rmt" is the return to the market portfolio in period "t"; formula_2 is the stock's alpha, or abnormal return; formula_3 is the stock's beta, or responsiveness to the market return. Note that formula_4 is called the excess return on the stock and formula_5 the excess return on the market. The formula_6 are the residual (random) returns, which are assumed to be independent and normally distributed with mean zero and standard deviation formula_7. These equations show that the stock return is influenced by the market (beta), has a firm-specific expected value (alpha) and a firm-specific unexpected component (residual). Each stock's performance is measured in relation to the performance of a market index (such as the All Ordinaries). Security analysts often use the SIM for such functions as computing stock betas, evaluating stock selection skills, and conducting event studies. Assumptions of the single-index model. To simplify analysis, the single-index model assumes that there is only one macroeconomic factor that causes the systematic risk affecting all stock returns, and that this factor can be represented by the rate of return on a market index, such as the S&amp;P 500. According to this model, the return of any stock can be decomposed into the expected excess return of the individual stock due to firm-specific factors, commonly denoted by its alpha coefficient (α), the return due to macroeconomic events that affect the market, and the unexpected microeconomic events that affect only the firm. The term formula_8 represents the movement of the market modified by the stock's beta, while formula_9 represents the unsystematic risk of the security due to firm-specific factors. Macroeconomic events, such as changes in interest rates or the cost of labor, cause the systematic risk that affects the returns of all stocks, while the firm-specific events are the unexpected microeconomic events that affect the returns of specific firms, such as the death of key people or the lowering of the firm's credit rating, events that would affect the firm but would have a negligible effect on the economy. In a portfolio, the unsystematic risk due to firm-specific factors can be reduced to zero by diversification. The index model rests on the assumption that once the market return is subtracted out, the remaining returns are uncorrelated: formula_10 which gives formula_11 This is not really true, but it provides a simple model. A more detailed model would have multiple risk factors. This would require more computation, but still less than computing the covariance of each possible pair of securities in the portfolio. With this equation, only the betas of the individual securities and the market variance need to be estimated to calculate covariance. Hence, the index model greatly reduces the number of calculations that would otherwise have to be made to model a large portfolio of thousands of securities.
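In practice the stock's alpha formula_2 and beta formula_3 are usually estimated by an ordinary least squares regression of the stock's excess returns on the market's excess returns. The Python sketch below illustrates this on synthetic data; the return series, the risk-free rate and the "true" parameter values are made-up numbers chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly data (illustrative values, not real market data).
T = 120
r_f = 0.002                                   # risk-free rate per period
r_m = r_f + rng.normal(0.006, 0.04, T)        # market returns
true_alpha, true_beta = 0.001, 1.2
r_i = r_f + true_alpha + true_beta * (r_m - r_f) + rng.normal(0.0, 0.02, T)

# OLS regression of excess stock returns on excess market returns:
#   (r_i - r_f) = alpha + beta * (r_m - r_f) + eps
x = r_m - r_f
y = r_i - r_f
beta_hat = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
alpha_hat = y.mean() - beta_hat * x.mean()
resid = y - (alpha_hat + beta_hat * x)

print(f"estimated alpha = {alpha_hat:.4f}, beta = {beta_hat:.3f}")
print(f"residual std dev (firm-specific risk) = {resid.std(ddof=2):.4f}")

# Covariance between two stocks implied by the model: beta_i * beta_k * var(r_m).
beta_k = 0.8
print(f"implied covariance with a beta-0.8 stock = {beta_hat * beta_k * np.var(r_m, ddof=1):.6f}")
```

On historical data the same regression would be run on observed returns; the point here is only that the alpha, beta and residual standard deviation of the model can be read off directly from a least squares fit of excess returns.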
[ { "math_id": 0, "text": " r_{it} - r_f = \\alpha_i + \\beta_i(r_{mt} - r_f) + \\epsilon_{it} \\," }, { "math_id": 1, "text": "\\epsilon_{it} \\sim N(0,\\sigma_i^2) \\," }, { "math_id": 2, "text": "\\alpha_i" }, { "math_id": 3, "text": "\\beta_i" }, { "math_id": 4, "text": " r_{it} - r_f " }, { "math_id": 5, "text": " r_{mt} - r_f " }, { "math_id": 6, "text": "\\epsilon_{it}" }, { "math_id": 7, "text": "\\sigma_i" }, { "math_id": 8, "text": "\\beta_i(r_m-r_f)" }, { "math_id": 9, "text": "\\epsilon_{i} " }, { "math_id": 10, "text": "E((R_{i,t} - \\beta_i m_t) (R_{k,t} - \\beta_k m_t)) = 0," }, { "math_id": 11, "text": "Cov(R_i, R_k) = \\beta_i\\beta_k\\sigma^2." } ]
https://en.wikipedia.org/wiki?curid=6972600
6972757
Cascaded integrator–comb filter
Digital signal sample rate converter In digital signal processing, a cascaded integrator–comb (CIC) filter is a computationally efficient class of low-pass finite impulse response (FIR) filter that chains "N" integrator and comb filter pairs (where "N" is the filter's order) to form a decimator or interpolator. In a decimating CIC, the input signal is first fed through "N" integrator stages, followed by a down-sampler, and then "N" comb stages. An interpolating CIC has the reverse order of this architecture, but with the down-sampler replaced with a zero-stuffer (up-sampler). Operation. CIC filters were invented by Eugene B. Hogenauer in 1979 (published in 1981), and are a class of FIR filters used in multi-rate digital signal processing. Unlike most FIR filters, a CIC has a down-sampler or up-sampler in the middle of the structure, which converts between the high sampling rate of formula_2 used by the integrator stages and the low sampling rate of formula_1 used by the comb stages. Transfer function. At the high sampling rate of formula_2, a CIC's transfer function in the z-domain is: formula_3 where: formula_0 is the decimation or interpolation ratio, formula_4 is the number of samples per stage (usually 1 but sometimes 2), and formula_5 is the order: the number of comb–integrator pairs. * The numerator comes from multiplying formula_5 negative feedforward comb stages (each is simply multiplication by formula_6 in the z-domain). * The denominator comes from multiplying formula_5 integrator stages (each is simply multiplication by formula_7 in the z-domain). Integrator–comb is simple moving average. An integrator–comb filter is an efficient implementation of a simple 1st-order moving-average FIR filter, with division by formula_8 omitted. To see this, consider how a simple moving average filter can be implemented recursively by adding the newest sample formula_9 to the previous result formula_10 and subtracting the oldest sample formula_11: formula_12 The second equality corresponds to a comb filter (formula_13) that gets integrated (formula_14). Cascaded integrator–comb yields higher-order moving average. Higher-order CIC structures are obtained by cascading formula_5 identical simple moving average filters, then rearranging the sections to place all integrators first (decimator) or combs first (interpolator). Such rearrangement is possible because the combs, the integrators, and the entire structure are all linear time-invariant (LTI) systems. In the interpolating CIC, its upsampler (which normally precedes an interpolation filter) is passed through the comb sections using a Noble identity, reducing the number of delay elements needed by a factor of formula_0. Similarly, in the decimating CIC, its downsampler (which normally follows a decimation filter) is moved before the comb sections. Features. CIC filters have some appealing features: * They use only additions, subtractions and delays, so no multipliers are required. * All tap weights are unity, so no storage is needed for filter coefficients. * The same structure serves any rate-change factor formula_0; only the register widths depend on it, with a worst-case word growth of formula_15 bits beyond the input word length. Frequency response. In the z-domain, each integrator contributes one pole at DC (formula_16) and one zero at the origin (formula_17). Each comb contributes formula_8 poles at the origin and formula_8 zeroes that are equally spaced around the z-domain's unit circle, but its first zero at DC cancels out with each integrator's pole. Nth-order CIC filters have N times as many poles and zeros in the same locations as the 1st-order. Thus, the 1st-order CIC's frequency response is a crude low-pass filter. Typically the gain is normalized by dividing by formula_18 so that the peak (DC) gain is unity. 
The main lobe drops off as it reaches the next zero, and is followed by a series of successive lobes that have smaller and smaller peaks, separated by the subsequent zeros. At large formula_0 this approximates a sinc shape in frequency. An Nth-order CIC's shape corresponds to multiplying that sinc shape by itself N times, resulting in successively greater attenuation. Thus, Nth-order CIC filters are called sinc^N filters. The first sidelobe is attenuated by roughly 13N dB. The CIC filter's possible range of responses is limited by this shape. Larger amounts of stopband rejection can be achieved by increasing the order, but that increases attenuation in the passband and requires increased bit width for the integrator and comb sections. For this reason, many real-world filtering requirements cannot be met by a CIC filter alone. Shape compensation. A short to moderate length FIR or infinite impulse response (IIR) filter can compensate for the falling slope of a CIC filter's shape. Multiple interpolation and decimation rates can reuse the same set of compensation FIR coefficients, since the shape of the CIC's main lobe changes very little when the decimation ratio is changed. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
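A minimal software model makes the decimating structure described above concrete. The Python sketch below is an illustration rather than a reference implementation: it cascades formula_5 integrators at the high rate, downsamples by formula_0, applies formula_5 combs at the low rate with formula_4 = 1, and checks the result against the equivalent cascade of length-formula_0 moving sums computed at the high rate and then downsampled.

```python
import numpy as np

def cic_decimate(x, R, N):
    """Decimating CIC with N integrator/comb pairs and differential delay M = 1.

    x is an integer-valued 1-D array at the high rate; the output is the
    un-normalized filtered signal at the low rate fs/R.
    """
    v = np.asarray(x)
    for _ in range(N):                                  # N integrators at the high rate
        v = np.cumsum(v)
    w = v[::R]                                          # downsample by R
    for _ in range(N):                                  # N combs at the low rate
        w = np.concatenate(([w[0]], np.diff(w)))        # y[m] = w[m] - w[m-1], with w[-1] = 0
    return w

# Cross-check against the equivalent high-rate FIR (1 + z^-1 + ... + z^-(R-1))^N,
# i.e. N cascaded length-R moving sums, followed by keeping every R-th output sample.
R, N = 8, 3
rng = np.random.default_rng(1)
x = rng.integers(-100, 100, size=400)

h = np.array([1])
for _ in range(N):
    h = np.convolve(h, np.ones(R, dtype=int))
reference = np.convolve(h, x)[: len(x)][::R]

assert np.array_equal(cic_decimate(x, R, N), reference)
print("CIC output matches the moving-sum cascade; DC gain =", h.sum(), "= (RM)^N =", R**N)
```

The check relies on the Noble-identity equivalence described above and on exact integer arithmetic, which is why the assertion holds bit-for-bit.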
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "\\tfrac{f_s}{R}" }, { "math_id": 2, "text": "f_s" }, { "math_id": 3, "text": "\n\\begin{align}\n H(z) &=\\left [ \\sum_{k=0}^{RM-1}z^{-k} \\right ] ^N \\\\\n &= \\left ( \\frac{1-z^{-RM}}{1-z^{-1}} \\right ) ^N\n\\end{align}\n" }, { "math_id": 4, "text": "M" }, { "math_id": 5, "text": "N" }, { "math_id": 6, "text": "1-z^{-RM}" }, { "math_id": 7, "text": "\\tfrac{1}{1-z^\\text{-1}}" }, { "math_id": 8, "text": "RM" }, { "math_id": 9, "text": "x[n]" }, { "math_id": 10, "text": "y[n-1]" }, { "math_id": 11, "text": "x[n-RM]" }, { "math_id": 12, "text": "\n\\begin{align}\n y[n] &= \\sum_{k=0}^{RM-1} x[n-k] \\\\\n y[n] &= y[n-1] + \\underbrace{x[n] - x[n-RM]}_{\\text{comb filter }c[n]}.\n\\end{align}\n" }, { "math_id": 13, "text": "c[n] = x[n] - x[n-RM]" }, { "math_id": 14, "text": "y[n] = y[n-1] + c[n]" }, { "math_id": 15, "text": "N \\log_2(RM)" }, { "math_id": 16, "text": "z{=}1" }, { "math_id": 17, "text": "z{=}0" }, { "math_id": 18, "text": "(RM)^N" } ]
https://en.wikipedia.org/wiki?curid=6972757
69730965
Weyl's inequality (number theory)
In number theory, Weyl's inequality, named for Hermann Weyl, states that if "M", "N", "a" and "q" are integers, with "a" and "q" coprime, "q" &gt; 0, and "f" is a real polynomial of degree "k" whose leading coefficient "c" satisfies formula_0 for some "t" greater than or equal to 1, then for any positive real number formula_1 one has formula_2 This inequality will only be useful when formula_3 for otherwise estimating the modulus of the exponential sum by means of the triangle inequality as formula_4 provides a better bound.
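Because the statement is asymptotic, with an unspecified implied constant in the big-O term, it cannot be verified numerically in a strict sense; the quantities involved are nevertheless easy to compute. The Python sketch below is purely illustrative: it evaluates the modulus of the exponential sum for one sample polynomial whose leading coefficient satisfies the rational-approximation hypothesis, together with the bracketed quantity that shapes the bound, so that the roles of "M", "N", "a", "q", "t" and "k" can be seen concretely.

```python
from math import gcd, pi
import cmath

def exp_sum_modulus(coeffs, M, N):
    """|sum_{x=M}^{M+N} exp(2*pi*i*f(x))| for f given by coefficients, highest degree first."""
    def f(x):
        val = 0.0
        for c in coeffs:
            val = val * x + c
        return val
    return abs(sum(cmath.exp(2j * pi * f(x)) for x in range(M, M + N + 1)))

# Sample polynomial f(x) = c*x^3 + x/7, with leading coefficient c close to a/q = 1/33.
a, q, t = 1, 33, 1.0
assert gcd(a, q) == 1
c = a / q + 0.5 * t / q**2                  # satisfies |c - a/q| <= t/q^2
k, M, N = 3, 0, 2000                        # degree k, summation range x = M, ..., M+N

lhs = exp_sum_modulus([c, 0.0, 1.0 / 7.0, 0.0], M, N)
bracket = t / q + 1.0 / N + t / N**(k - 1) + q / N**k
shape = N * bracket ** (2 ** (1 - k))       # the bound without the N^eps and O-constant factors

print(f"|exponential sum| = {lhs:10.1f}")
print(f"trivial bound N   = {N:10d}")
print(f"Weyl bound shape  = {shape:10.1f}  (implied constant and N^eps omitted)")
```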
[ { "math_id": 0, "text": "|c-a/q|\\le tq^{-2}," }, { "math_id": 1, "text": "\\scriptstyle\\varepsilon" }, { "math_id": 2, "text": "\\sum_{x=M}^{M+N}\\exp(2\\pi if(x))=O\\left(N^{1+\\varepsilon}\\left({t\\over q}+{1\\over N}+{t\\over N^{k-1}}+{q\\over N^k}\\right)^{2^{1-k}}\\right)\\text{ as }N\\to\\infty." }, { "math_id": 3, "text": "q < N^k," }, { "math_id": 4, "text": "\\scriptstyle\\le\\, N" } ]
https://en.wikipedia.org/wiki?curid=69730965
69732690
Quasimorphism
Group homomorphism up to bounded error In group theory, given a group formula_0, a quasimorphism (or quasi-morphism) is a function formula_1 which is additive up to bounded error, i.e. there exists a constant formula_2 such that formula_3 for all formula_4. The least positive value of formula_5 for which this inequality is satisfied is called the defect of formula_6, written as formula_7. For a group formula_0, quasimorphisms form a subspace of the function space formula_8. Examples. * Group homomorphisms and bounded functions from formula_0 to formula_9 are quasimorphisms. * Let formula_10 be a free group over a set formula_11. For a reduced word formula_12 in formula_11, define the big counting function formula_13, which returns for formula_14 the number of copies of formula_12 in the reduced representative of formula_15, and the small counting function formula_16, which returns the number of non-overlapping copies; for example, formula_17 while formula_18. The big and small counting quasimorphisms are then the functions formula_19 and formula_20. * The rotation number formula_21 is a quasimorphism, where formula_22 denotes the group of orientation-preserving homeomorphisms of the circle. Homogeneous. A quasimorphism is homogeneous if formula_23 for all formula_24. It turns out that the study of quasimorphisms can be reduced to the study of homogeneous quasimorphisms, as every quasimorphism formula_1 is a bounded distance away from a unique homogeneous quasimorphism formula_25, given by: formula_26. A homogeneous quasimorphism formula_1 has the following properties: * It is invariant under conjugation, i.e. formula_27 for all formula_4. * Its restriction to any abelian subgroup of formula_0 is a group homomorphism. Integer-valued. One can also define quasimorphisms similarly in the case of a function formula_28. In this case, the above discussion about homogeneous quasimorphisms does not hold anymore, as the limit formula_29 does not exist in formula_30 in general. For example, for formula_31, the map formula_32 is a quasimorphism. There is a construction of the real numbers as a quotient of quasimorphisms formula_33 by an appropriate equivalence relation, see Construction of the real numbers from integers (Eudoxus reals). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
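The integer-valued example above is easy to experiment with. The Python sketch below, an illustration only, empirically measures the defect of the map formula_32 for a sample real number formula_31 and shows that its homogenization recovers the linear map with slope α, which is an honest homomorphism to the reals.

```python
import math

alpha = math.sqrt(2)                 # any real slope works; sqrt(2) is just an example
f = lambda n: math.floor(alpha * n)  # the quasimorphism n -> floor(alpha * n) on Z

# Empirical defect: sup |f(g + h) - f(g) - f(h)| over a finite window.
# For this map the defect is at most 1, since floor(x + y) - floor(x) - floor(y) is 0 or 1.
window = range(-300, 301)
defect = max(abs(f(g + h) - f(g) - f(h)) for g in window for h in window)
print("empirical defect:", defect)   # prints 1

# Homogenization: lim f(g^n)/n, where g^n means n*g in the additive group Z,
# recovers the linear map g -> alpha*g, a genuine homomorphism to R.
g = 5
for n in (10, 1000, 100000):
    print(n, f(n * g) / n, "->", alpha * g)
```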
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "f:G\\to\\mathbb{R}" }, { "math_id": 2, "text": "D\\geq 0" }, { "math_id": 3, "text": "|f(gh)-f(g)-f(h)|\\leq D" }, { "math_id": 4, "text": "g, h\\in G" }, { "math_id": 5, "text": "D" }, { "math_id": 6, "text": "f" }, { "math_id": 7, "text": "D(f)" }, { "math_id": 8, "text": "\\mathbb{R}^G" }, { "math_id": 9, "text": "\\mathbb{R}" }, { "math_id": 10, "text": "G=F_S" }, { "math_id": 11, "text": "S" }, { "math_id": 12, "text": "w" }, { "math_id": 13, "text": "C_w:F_S\\to \\mathbb{Z}_{\\geq 0}" }, { "math_id": 14, "text": "g\\in G" }, { "math_id": 15, "text": "g" }, { "math_id": 16, "text": "c_w:F_S\\to\\mathbb{Z}_{\\geq 0}" }, { "math_id": 17, "text": "C_{aa}(aaaa)=3" }, { "math_id": 18, "text": "c_{aa}(aaaa)=2" }, { "math_id": 19, "text": "H_w(g)=C_w(g)-C_{w^{-1}}(g)" }, { "math_id": 20, "text": "h_w(g)=c_w(g)-c_{w^{-1}}(g))" }, { "math_id": 21, "text": "\\text{rot}:\\text{Homeo}^+(S^1)\\to\\mathbb{R}" }, { "math_id": 22, "text": "\\text{Homeo}^+(S^1)" }, { "math_id": 23, "text": "f(g^n)=nf(g)" }, { "math_id": 24, "text": "g\\in G, n\\in \\mathbb{Z}" }, { "math_id": 25, "text": "\\overline{f}:G\\to\\mathbb{R}" }, { "math_id": 26, "text": "\\overline{f}(g)=\\lim_{n\\to\\infty}\\frac{f(g^n)}{n}" }, { "math_id": 27, "text": "f(g^{-1}hg)=f(h)" }, { "math_id": 28, "text": "f:G\\to\\mathbb{Z}" }, { "math_id": 29, "text": "\\lim_{n\\to\\infty}f(g^n)/n" }, { "math_id": 30, "text": "\\mathbb{Z}" }, { "math_id": 31, "text": "\\alpha\\in\\mathbb{R}" }, { "math_id": 32, "text": "\\mathbb{Z}\\to\\mathbb{Z}:n\\mapsto\\lfloor\\alpha n\\rfloor" }, { "math_id": 33, "text": "\\mathbb{Z}\\to\\mathbb{Z}" } ]
https://en.wikipedia.org/wiki?curid=69732690
69736905
Modern triangle geometry
In mathematics, modern triangle geometry, or new triangle geometry, is the body of knowledge relating to the properties of a triangle discovered and developed roughly since the beginning of the last quarter of the nineteenth century. Triangles and their properties were the subject of investigation since at least the time of Euclid. In fact, Euclid's "Elements" contains description of the four special points – centroid, incenter, circumcenter and orthocenter - associated with a triangle. Even though Pascal and Ceva in the seventeenth century, Euler in the eighteenth century and Feuerbach in the nineteenth century and many other mathematicians had made important discoveries regarding the properties of the triangle, it was the publication in 1873 of a paper by Emile Lemoine (1840–1912) with the title "On a remarkable point of the triangle" that was considered to have, according to Nathan Altschiller-Court, "laid the foundations...of the modern geometry of the triangle as a whole." The "American Mathematical Monthly", in which much of Lemoine's work is published, declared that "To none of these [geometers] more than Émile-Michel-Hyacinthe Lemoine is due the honor of starting this movement of modern triangle geometry". The publication of this paper caused a remarkable upsurge of interest in investigating the properties of the triangle during the last quarter of the nineteenth century and the early years of the twentieth century. A hundred-page article on triangle geometry in Klein's Encyclopedia of Mathematical Sciences published in 1914 bears witness to this upsurge of interest in triangle geometry. In the early days, the expression "new triangle geometry" referred to only the set of interesting objects associated with a triangle like the Lemoine point, Lemoine circle, Brocard circle and the Lemoine line. Later the theory of correspondences which was an offshoot of the theory of geometric transformations was developed to give coherence to the various isolated results. With its development, the expression "new triangle geometry" indicated not only the many remarkable objects associated with a triangle but also the methods used to study and classify these objects. Here is a definition of triangle geometry from 1887: "Being given a point M in the plane of the triangle, we can always find, in an infinity of manners, a second point M' that corresponds to the first one according to an imagined geometrical law; these two points have between them geometrical relations whose simplicity depends on the more or less the lucky choice of the law which unites them and each geometrical law gives rise to a method of transformation a mode of conjugation which it remains to study." (See the conference paper titled "Teaching new geometrical methods with an ancient figure in the nineteenth and twentieth centuries: the new triangle geometry in textbooks in Europe and USA (1888–1952)" by Pauline Romera-Lebret presented in 2009.) However, this escalation of interest soon collapsed and triangle geometry was completely neglected until the closing years of the twentieth century. In his "Development of Mathematics", Eric Temple Bell offers his judgement on the status of modern triangle geometry in 1940 thus: "The geometers of the 20th Century have long since piously removed all these treasures to the museum of geometry where the dust of history quickly dimmed their luster." (The Development of Mathematics, p. 323) Philip Davis has suggested several reasons for the decline of interest in triangle geometry. 
These include: A further revival of interest was witnessed with the advent of the modern electronic computer. Triangle geometry has again become an active area of research pursued by a group of dedicated geometers. As epitomizing this revival, one can point out the formulation of the concept of a "triangle center" and the compilation by Clark Kimberling of an encyclopedia of triangle centers containing a listing of nearly 50,000 triangle centers and their properties, and also the compilation of a catalogue of triangle cubics with detailed descriptions of several properties of more than 1200 triangle cubics. The open access journal "Forum Geometricorum" founded by Paul Yiu of Florida Atlantic University in 2001 also provided a tremendous impetus in furthering this new-found enthusiasm for triangle geometry. Unfortunately, since 2019, the journal has not been accepting submissions, although back issues are still available online. The Lemoine geometry. Lemoine point. For a given triangle ABC with centroid G, the symmedian through the vertex A is the reflection of the line AG in the bisector of the angle A. There are three symmedians for a triangle, one passing through each vertex. The three symmedians are concurrent and the point of concurrency, commonly denoted by K, is called the Lemoine point or the symmedian point or the Grebe point of triangle ABC. If the sidelengths of triangle ABC are "a", "b", "c", the barycentric coordinates of the Lemoine point are "a"^2 : "b"^2 : "c"^2. It has been described as "one of the crown jewels of modern geometry". There are several earlier references to this point in the mathematical literature, details of which are available in John Mackay's history of the symmedian point. In fact, the concurrency of the symmedians is a special case of a more general result: for any point P in the plane of triangle ABC, the isogonals of the lines AP, BP, CP are concurrent, the isogonal of AP (respectively BP, CP) being the reflection of the line AP in the bisector of the angle A (respectively B, C). The point of concurrency is called the isogonal conjugate of P. In this terminology, the Lemoine point is the isogonal conjugate of the centroid. Lemoine circles. The points of intersection of the lines through the Lemoine point of a triangle ABC parallel to the sides of the triangle lie on a circle called the first Lemoine circle of triangle ABC. The center of the first Lemoine circle lies midway between the circumcenter and the Lemoine point of the triangle. The points of intersection of the antiparallels to the sides of triangle ABC through the Lemoine point of triangle ABC lie on a circle called the second Lemoine circle or the cosine circle of triangle ABC. The name "cosine circle" is due to the property of the second Lemoine circle that the lengths of the segments intercepted by the circle on the sides of the triangle are proportional to the cosines of the angles opposite to the sides. The center of the second Lemoine circle is the Lemoine point. Lemoine axis. Any triangle ABC and its tangential triangle are in perspective, and the axis of perspectivity is called the Lemoine axis of triangle ABC. It is the trilinear polar of the symmedian point of triangle ABC and also the polar of K with regard to the circumcircle of triangle ABC. Early modern triangle geometry. A quick glance into the world of modern triangle geometry as it existed during the peak of interest in triangle geometry subsequent to the publication of Lemoine's paper is presented below. 
This presentation is largely based on the topics discussed in William Gallatly's book published in 1910 and Roger A. Johnson's book first published in 1929. Poristic triangles. Two triangles are said to be poristic triangles if they have the same incircle and circumcircle. Given a circle with center O and radius "R" and another circle with center I and radius "r", there are infinitely many triangles ABC with circle O("R") as circumcircle and I("r") as incircle if and only if OI^2 = "R"^2 − 2"Rr". These triangles form a poristic system of triangles. The loci of certain special points, such as the centroid, as the reference triangle runs over the triangles poristic with it, often turn out to be circles and points. The Simson line. For any point P on the circumcircle of triangle ABC, the feet of the perpendiculars from P to the sides of triangle ABC are collinear, and the line of collinearity is the well-known Simson line of P. Pedal and antipedal triangles. Given a point P, let the feet of the perpendiculars from P to the sides of the triangle ABC be D, E, F. The triangle DEF is called the pedal triangle of P. The antipedal triangle of P is the triangle formed by the lines through A, B, C perpendicular to PA, PB, PC respectively. Two points P and Q are called "counter points" if the pedal triangle of P is homothetic to the antipedal triangle of Q and the pedal triangle of Q is homothetic to the antipedal triangle of P. The orthopole. Given any line "l", let P, Q, R be the feet of the perpendiculars from the vertices A, B, C of triangle ABC to "l". The lines through P, Q, R perpendicular respectively to the sides BC, CA, AB are concurrent, and the point of concurrence is the orthopole of the line "l" with respect to the triangle ABC. In modern triangle geometry, there is a large body of literature dealing with properties of orthopoles. The Brocard points. Let two triads of circles be described on the sides BC, CA, AB of triangle ABC whose external segments contain the two triads of angles C, A, B and B, C, A respectively. Each triad of circles determined by a triad of angles intersects at a common point, thus yielding two such points. These points are called the Brocard points of triangle ABC and are usually denoted by formula_0. If P is the first Brocard point (which is the Brocard point determined by the first triad of circles), then the angles PBC, PCA and PAB are equal to each other, and the common angle is called the Brocard angle of triangle ABC and is commonly denoted by formula_1. The Brocard angle is given by formula_2 The Brocard points and the Brocard angles have several interesting properties. Contemporary modern triangle geometry. Triangle center. One of the most significant ideas that has emerged during the revival of interest in triangle geometry during the closing years of the twentieth century is the notion of a "triangle center". This concept, introduced by Clark Kimberling in 1994, unified in one notion the many special and remarkable points associated with a triangle. Since the introduction of this idea, hardly any discussion of a result associated with a triangle is complete without a discussion of how the result connects with the triangle centers. Definition of triangle center. A real-valued function "f" of three real variables "a", "b", "c" may have the following properties: * Homogeneity: "f"("ta", "tb", "tc") = "t"^"n" "f"("a", "b", "c") for some constant "n" and for all positive "t". * Bisymmetry in the second and third variables: "f"("a", "b", "c") = "f"("a", "c", "b"). If a non-zero "f" has both these properties it is called a triangle center function. 
If "f" is a triangle center function and "a", "b", "c" are the side-lengths of a reference triangle then the point whose trilinear coordinates are "f"("a","b","c") : "f"("b","c","a") : "f"("c","a","b") is called a triangle center. Clark Kimberling is maintaining a website devoted to a compendium of triangle centers. The website named "Encyclopedia of Triangle Centers" has definitions and descriptions of nearly 50,000 triangle centers. Central line. Another unifying notion of contemporary modern triangle geometry is that of a central line. This concept unifies the several special straight lines associated with a triangle. The notion of a central line is also related to the notion of a triangle center. Definition of central line. Let "ABC" be a plane triangle and let ( "x" : "y" : "z" ) be the trilinear coordinates of an arbitrary point in the plane of triangle "ABC". A straight line in the plane of triangle "ABC" whose equation in trilinear coordinates has the form "f" ( "a", "b", "c" ) "x" + "g" ( "a", "b", "c" ) "y" + "h" ( "a", "b", "c" ) "z" = 0 where the point with trilinear coordinates ( "f" ( "a", "b", "c" ) : "g" ( "a", "b", "c" ) : "h" ( "a", "b", "c" ) ) is a triangle center, is a central line in the plane of triangle "ABC" relative to the triangle "ABC". Geometrical construction of central line. Let "X" be any triangle center of the triangle "ABC". Triangle conics. A triangle conic is a conic in the plane of the reference triangle and associated with it in some way. For example, the circumcircle and the incircle of the reference triangle are triangle conics. Other examples are the Steiner ellipse which is an ellipse passing through the vertices and having its centre at the centroid of the reference triangle, the Kiepert hyperbola which is a conic passing through the vertices, the centroid and the orthocentre of the reference triangle and the Artzt parabolas which are parabolas touching two sidelines of the reference triangle at vertices of the triangle. Some recently studied triangle conics include Hofstadter ellipses and yff conics. However, there is no formal definition of the terminology of "triangle conic" in the literature; that is, the relations a conic should have with the reference triangle so as to qualify it to be called a triangle conic have not been precisely formulated. WolframMathWorld has a page titled "Triangle conics" which gives a list of 42 items (not all of them are conics) without giving a definition of triangle conic. Triangle cubics. Cubic curves arise naturally in the study of triangles. For example, the locus of a point P in the plane of the reference triangle ABC such that, if the reflections of P in the sidelines of triangle ABC are Pa, Pb, Pc, then the lines APa, BPb and CPc are concurrent is a cubic curve named Neuberg cubic. It is the first cubic listed in Bernard Gibert's Catalogue of Triangle Cubics. This Catalogue lists more than 1200 triangle cubics with information on each curve such as the barycentric equation of the curve, triangle centers which lie on the curve, locus properties of the curve and references to literature on the curve. Computers in triangle geometry. The entry of computers had a deciding influence on the course of development in the interest in triangle geometry witnessed during the closing years of the twentieth century and the early years of the current century. Some of the ways in which the computers had influenced this course have been delineated by Philip Davis. 
Computers have been used to generate new results in triangle geometry. A survey article published in 2015 gives an account of some of the important new results discovered by the computer programme "Discoverer". The following sample of theorems gives a flavor of the new results discovered by Discoverer. Sava Grozdev, Hiroshi Okumura, and Deko Dekov maintain a web portal dedicated to a computer-discovered encyclopedia of Euclidean geometry. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
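Many of the objects described above can be computed numerically from the side lengths of the reference triangle, which is essentially how computer exploration of triangle geometry proceeds. The Python sketch below is only an illustration using an arbitrary 13–14–15 triangle: it converts the barycentric coordinates "a"^2 : "b"^2 : "c"^2 of the Lemoine point into Cartesian coordinates and evaluates the Brocard angle from the relation formula_2, cross-checking it against the equivalent identity cot ω = ("a"^2 + "b"^2 + "c"^2)/(4Δ), where Δ is the area of the triangle.

```python
import math

# Example triangle with side lengths a = BC, b = CA, c = AB.
a, b, c = 13.0, 14.0, 15.0
A = (0.0, 0.0)
B = (c, 0.0)                                             # place AB on the x-axis
Cx = (b**2 + c**2 - a**2) / (2 * c)                      # from the law of cosines
C = (Cx, math.sqrt(b**2 - Cx**2))

def from_barycentric(u, v, w):
    """Cartesian point with barycentric coordinates u : v : w w.r.t. triangle ABC."""
    s = u + v + w
    return ((u * A[0] + v * B[0] + w * C[0]) / s,
            (u * A[1] + v * B[1] + w * C[1]) / s)

centroid = from_barycentric(1, 1, 1)
lemoine = from_barycentric(a**2, b**2, c**2)             # symmedian (Lemoine) point K
print("centroid:", centroid)
print("Lemoine point:", lemoine)

# Brocard angle: cot(omega) = cot A + cot B + cot C = (a^2 + b^2 + c^2) / (4 * area).
angle_A = math.acos((b**2 + c**2 - a**2) / (2 * b * c))
angle_B = math.acos((a**2 + c**2 - b**2) / (2 * a * c))
angle_C = math.pi - angle_A - angle_B
cot = lambda x: 1.0 / math.tan(x)
omega = math.atan(1.0 / (cot(angle_A) + cot(angle_B) + cot(angle_C)))

s = (a + b + c) / 2
area = math.sqrt(s * (s - a) * (s - b) * (s - c))        # Heron's formula
omega_check = math.atan(4 * area / (a**2 + b**2 + c**2))

print("Brocard angle (degrees):", math.degrees(omega), math.degrees(omega_check))
```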
[ { "math_id": 0, "text": "\\Omega, \\Omega^\\prime" }, { "math_id": 1, "text": "\\omega" }, { "math_id": 2, "text": "\\cot \\omega=\\cot A + \\cot B +\\cot C ." } ]
https://en.wikipedia.org/wiki?curid=69736905
69739403
Nanoparticle interfacial layer
Interfacial layer of nanoparticles A nanoparticle interfacial layer is a well-structured layer of typically organic molecules around a nanoparticle. These molecules are known as stabilizers, capping and surface ligands, or passivating agents. The interfacial layer has a significant effect on the properties of the nanoparticle and is therefore often considered as an integral part of a nanoparticle. The interfacial layer has a typical thickness between 0.1 and 4 nm, which depends on the type of molecules the layer is made of. The organic molecules that make up the interfacial layer are often amphiphilic molecules, meaning that they have a polar head group combined with a non-polar tail. Interactions. The effect of the interfacial layer is clearly seen in the interactions between nanoparticles. These interactions can be modelled using the DLVO theory. Classically this theory states that the potential of a particle is the sum of the electrostatic and van der Waals interactions. This theory has proven to be very accurate for almost all colloidal particles, but cannot describe all the interactions measured for nanoparticles. Therefore this theory has been extended with the so-called non-DLVO terms. In this extension the hydration force, hydrophobic force, steric force and bridging force are also considered, resulting in a total potential as follows: formula_0 These last terms are mostly determined by the interfacial layer, as this is the outermost part of the particle, thereby determining the surface interactions. For example, the bridging term only plays a role when the molecules in the interfacial layer tend to polymerize. In the case of nanoparticles made of a crystal, quantum mechanical interactions would be expected, but due to the interfacial layer the cores cannot get close enough to each other, and therefore these interactions are negligible. An illustrative limiting case is that of non-charged semiconducting quantum dots (QDs) in an ideal fluid. Due to the ideal fluid there is no difference between the QD–QD interaction and the QD–fluid interaction, and only the van der Waals interaction is of importance in the interaction between the interfacial layers, which in this limit are made up of the fluid itself, and other interfacial layers or the solvent. This means there is no attraction between the particles, so they can be accurately described using the hard sphere model. Optical properties. The organic ligands of the interfacial layer can influence the photoluminescence (PL) of a nanoparticle via various mechanisms, two of which are surface passivation and carrier trapping. Surface passivation: at the surface of an uncovered nanoparticle (without an interfacial layer) dangling bonds are found. These bonds form energy levels within the HOMO–LUMO gap, thereby leading to non-radiative relaxation. Due to the binding of ligand molecules with the dangling orbitals, the energy of these states is shifted away from the HOMO–LUMO gap. This prevents nonradiative relaxation, and thus results in more PL. The strength of this effect strongly depends on the type of ligands. In general, small, linear ligands do better than bulky ligands, because they lead to a higher surface coverage density, therefore allowing more dangling orbitals to be passivated. Another surface effect is carrier trapping. Here the ligands can scavenge the electrons (or holes) in the nanoparticle, thereby precluding radiative recombination and thus leading to a reduction in PL. A well-known example of such ligands is thiols. 
The light conversion efficiency can also be improved using an interfacial layer that consists of compounds that absorb in a wider energy range and emit at the absorption energy of the nanoparticle. According to C. S. Inagaki et al., the absorption band of a metallic nanoparticle was shown to drastically increase in width, caused by the overlap of transitions in the interfacial layer and the plasmon resonance band of the nanoparticle. This phenomenon can be used in practical applications like LEDs and solar cells. In these technologies either the efficiency of absorption or of emission is of critical importance, and nanoparticles with an interfacial layer could be used to improve this efficiency by either absorbing or emitting at a wider range of energies. Plasmon resonance. The plasmon resonance displayed by nanoparticles (gold particles are most often used as an example) can be altered using the interfacial layer. When either anionic or cationic ligands bound to a gold nanoparticle, for example, are increased in length, the wavelength of the plasmon resonance will shift to the red. Another effect, recently observed by Amendola et al. on small gold nanoparticles of 10 nm or less, is that dense monolayers consisting of certain specific short-chain ligands tend to dampen the surface plasmon resonance effects. Plasmon resonance can be used to analyze the surfactants of the nanoparticle. This principle is based on the so-called Fröhlich condition, which states that the refractive index of the surrounding medium of a nanoparticle can be used to tune or alter the frequency of the surface plasmon resonance. The equation that relates both properties is as follows: formula_1 Here formula_2 is the wavelength at which the plasmon resonance peaks and formula_3 is the refractive index of the environment, which relates to the dielectric constant of the medium formula_4 as follows: formula_5. Furthermore formula_6 is the frequency of the plasmon resonance and formula_7 is the speed of light in vacuum. The relation between the wavelength and the refractive index of the environment is not strictly linear, but for small values of formula_8 the theoretical predictions align with experimental results. This relation can thus be used to analyse the environment of the nanoparticle, i.e. the interfacial layer, by measuring the wavelength of the plasmon resonance. Thermal conductivity. The thermal conductivity is a measure of the capacity of a material to conduct heat. In a nanofluid this conductivity is influenced by the nanoparticles suspended in the solution. A simple model considers only the thermal conductivities of the liquid and the suspended solids. This is called the Maxwell–Garnett model (1891) and is defined as: formula_9 Here formula_10, formula_11, formula_12 are respectively the effective thermal conductivity, the thermal conductivity of the fluid and the thermal conductivity of the particle, and formula_13 is the packing fraction of the particles. This model is not very accurate for nanoparticles, because it does not take into account the interfacial layer formed by the fluid around a nanoparticle. In 2006 K. C. Leong et al. proposed a new model, one which took into account the existence of an interfacial layer. They did so by considering the surroundings of a nanoparticle to consist of three separate regions, each with a specific but different thermal conductivity. 
This resulted in the following model: formula_14 Here formula_10 is the effective thermal conductivity, and formula_12, formula_11 and formula_15 are the thermal conductivities of the particle, the fluid and the interfacial layer respectively. formula_16 is the packing fraction of the fluid, equal to formula_17, while formula_18 and formula_19 are respectively formula_20 and formula_21, with formula_22 the ratio of the thickness of the interfacial layer to the particle size. This model was shown to be in better agreement with experimental results, but its applicability is limited because there is not yet a theoretical way to establish the thermal conductivity of this layer, or its thickness. Solubility. Another property of nanoparticles that is heavily influenced by the surfactants is the solubility of the nanoparticle. One can imagine that a metallic nanoparticle would not dissolve well in organic solvents. By adding surfactants, the nanoparticles stay more evenly dispersed throughout the solvent, owing to the often amphiphilic nature of the surfactants. The interfacial layer can be used to essentially tune the solubility of nanoparticles in different media, which can range from extremely hydrophilic to hydrophobic. Stability. The stability of a nanoparticle is a term often used to describe the preservation of a specific, usually size-dependent, property of the particle. It can refer to, for example, its size, shape, composition, crystalline structure, surface properties or dispersion within a solution. The interfacial layer of a nanoparticle can aid these types of stability in different ways. The ligands can bind to the different facets of a nanoparticle, the size and type of which will determine the way the ligands will be ordered. The way the ligands are attached to the particle, ordered, disordered or somewhere in between, plays a crucial role in the way different particles will interact. This in turn affects the reactivity of the nanoparticle, which is another way to look at the stability of the particle. Analysis. A wide variety of techniques can be used to analyze the interfacial layer; SAXS, NMR, AFM and STM are used most often, but other methods, such as measuring the refractive index, can reveal information as well. Small-angle X-ray scattering provides data about the size and dispersion of the nanoparticles, and gives information about the density of the interfacial layer, because the amount of scattering is proportional to the density. On top of this, the thickness of the layer can be estimated. A disadvantage, however, is that SAXS is destructive. AFM and STM measurements can reveal information at atomic resolution about the structure and shape of the interfacial layer. This information is limited to the surface of the nanoparticle, as these techniques can only probe the surface. Another drawback of STM is that it is only applicable if the interfacial layer is conducting. (Solid-state) NMR can be used to study the composition, short-range ordering and dynamics in the interfacial layer. The dynamics can be studied over a wide range of timescales, which allows the intermolecular interactions, chemical reactions and transport phenomena to be analyzed. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
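Two of the quantitative relations above are simple enough to evaluate directly: the Fröhlich-condition expression formula_1 for the plasmon peak wavelength and the Maxwell–Garnett expression formula_9 for the effective thermal conductivity. The Python sketch below does exactly that; the plasma frequency, the conductivities and the packing fractions are arbitrary illustrative values rather than data from the cited studies.

```python
import math

# --- Plasmon peak wavelength: lambda_max = (2*pi*c / omega_p) * sqrt(2*n_m^2 + 1) ---
c = 2.998e8                      # speed of light in vacuum, m/s
omega_p = 1.37e16                # assumed plasma frequency in rad/s (illustrative value)

def lambda_max(n_m, omega_p=omega_p):
    return (2 * math.pi * c / omega_p) * math.sqrt(2 * n_m**2 + 1)

for n_m in (1.00, 1.33, 1.50, 1.70):          # vacuum, water-like, and two denser media
    print(f"n_m = {n_m:.2f}  ->  lambda_max = {lambda_max(n_m) * 1e9:.0f} nm")

# --- Maxwell-Garnett effective thermal conductivity of a dilute nanofluid ---
def k_eff_maxwell_garnett(k_f, k_p, phi):
    """Effective conductivity from the Maxwell-Garnett expression quoted above."""
    return k_f * (k_p + 2 * k_f - 2 * phi * (k_f - k_p)) / (k_p + 2 * k_f + phi * (k_f - k_p))

k_f, k_p = 0.6, 400.0            # illustrative: water-like fluid, copper-like particles (W/m/K)
for phi in (0.01, 0.02, 0.05):
    print(f"phi = {phi:.2f}  ->  k_eff = {k_eff_maxwell_garnett(k_f, k_p, phi):.3f} W/m/K")
```

The first loop shows the red-shift of the plasmon peak as the refractive index of the surroundings (for instance, of the interfacial layer) increases; the second shows the modest conductivity enhancement that the Maxwell–Garnett model predicts at low particle loading.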
[ { "math_id": 0, "text": " V(r) = V_\\text{vdw} + V_\\text{el} + V_\\text{HB} + V_\\text{ST} + V_\\text{B} " }, { "math_id": 1, "text": " \\lambda_{max} = \\frac{2 \\pi c}{\\omega_p} \\sqrt{2n_m^2 +1} " }, { "math_id": 2, "text": "\\lambda_{max}" }, { "math_id": 3, "text": "n_m" }, { "math_id": 4, "text": "\\epsilon_m" }, { "math_id": 5, "text": "n_m = \\sqrt{\\epsilon_m}" }, { "math_id": 6, "text": "\\omega_p" }, { "math_id": 7, "text": "c" }, { "math_id": 8, "text": "n" }, { "math_id": 9, "text": " \\frac{k_\\text{eff}}{k_\\text{f}} = \\frac{k_\\text{p} + 2k_\\text{f} - 2\\phi(k_\\text{f} - k_\\text{p})}{k_\\text{p} + 2k_\\text{f} + \\phi(k_\\text{f} - k_\\text{p})} " }, { "math_id": 10, "text": "k_\\text{eff}" }, { "math_id": 11, "text": "k_\\text{f}" }, { "math_id": 12, "text": "k_\\text{p}" }, { "math_id": 13, "text": "\\phi" }, { "math_id": 14, "text": "k_\\text{eff} = \\frac{(k_p - k_\\text{lr})\\phi_\\text{l}k_\\text{lr}[2\\beta_l^3 - \\beta^3 + 1] + (k_\\text{p} + 2k_\\text{lr}) \\beta_\\text{l}^3[\\phi_\\text{l}\\beta^3(k_\\text{lr}-k_\\text{f}) + k_\\text{f}]}{\\beta_\\text{l}^3(k_\\text{p} + 2k_\\text{lr} - (k_\\text{p} - k_\\text{lr})\\phi_\\text{l}[\\beta_\\text{l}^3 + \\beta^3 - 1])}" }, { "math_id": 15, "text": "k_\\text{lr}" }, { "math_id": 16, "text": "\\phi_\\text{l}" }, { "math_id": 17, "text": "1-\\phi_\\text{p}\\beta" }, { "math_id": 18, "text": "\\beta\n" }, { "math_id": 19, "text": "\\beta_\\text{l}" }, { "math_id": 20, "text": "1+\\gamma\n" }, { "math_id": 21, "text": "1+\\gamma/2\n" }, { "math_id": 22, "text": "\\gamma = h/a\n" } ]
https://en.wikipedia.org/wiki?curid=69739403
69741121
Rubidium peroxide
&lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound Rubidium peroxide is the peroxide of rubidium, with the chemical formula Rb2O2. Production. Rubidium peroxide can be produced by rapidly oxidizing rubidium in liquid ammonia at −50 °C. formula_0 It can also be produced by pyrolysis of rubidium superoxide in vacuum. formula_1 Properties. Rubidium peroxide is a colourless to light yellow solid with an orthorhombic crystal structure.
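As a small worked example of the stoichiometry of the pyrolysis reaction formula_1 above, the Python sketch below computes the molar masses involved and the mass of oxygen released per gram of rubidium superoxide; the atomic masses are standard values and the script itself is only an illustration.

```python
# Molar masses in g/mol (standard atomic weights).
M_Rb, M_O = 85.468, 15.999

M_RbO2 = M_Rb + 2 * M_O          # rubidium superoxide, RbO2
M_Rb2O2 = 2 * M_Rb + 2 * M_O     # rubidium peroxide, Rb2O2
M_O2 = 2 * M_O

# Pyrolysis: 2 RbO2 -> Rb2O2 + O2, so 2 mol of superoxide release 1 mol of O2.
o2_per_gram = M_O2 / (2 * M_RbO2)
print(f"M(RbO2)  = {M_RbO2:.2f} g/mol")
print(f"M(Rb2O2) = {M_Rb2O2:.2f} g/mol")
print(f"O2 released per gram of RbO2: {o2_per_gram * 1000:.1f} mg")
```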
[ { "math_id": 0, "text": "\\mathrm{2\\ Rb + O_2 \\ \\xrightarrow{-50\\;^\\circ\\text{C}}\\ Rb_2O_2 }" }, { "math_id": 1, "text": "\\mathrm{2\\ RbO_2 \\ \\xrightarrow{290\\;^\\circ\\text{C}}\\ Rb_2O_2 + O_2}" } ]
https://en.wikipedia.org/wiki?curid=69741121
69743250
Courant–Snyder parameters
Set of quantities in accelerator physics In accelerator physics, the Courant–Snyder parameters (frequently referred to as Twiss parameters or CS parameters) are a set of quantities used to describe the distribution of positions and velocities of the particles in a beam. When the positions along a single dimension and velocities (or momenta) along that dimension of every particle in a beam are plotted on a phase space diagram, an ellipse enclosing the particles can be given by the equation: formula_0 where formula_1 is the position axis and formula_2 is the velocity axis. In this formulation, formula_3, formula_4, and formula_5 are the Courant–Snyder parameters for the beam along the given axis, and formula_6 is the emittance. Three sets of parameters can be calculated for a beam, one for each orthogonal direction, x, y, and z. History. The use of these parameters to describe the phase space properties of particle beams was popularized in the accelerator physics community by Ernest Courant and Hartland Snyder in their 1953 paper, "Theory of the Alternating-Gradient Synchrotron". They are also widely referred to in accelerator physics literature as "Twiss parameters" after British astronomer Richard Q. Twiss, although it is unclear how his name became associated with the formulation. Phase space area description. When simulating the motion of particles through an accelerator or beam transport line, it is often desirable to describe the overall properties of an ensemble of particles, rather than track the motion of each particle individually. By Liouville's theorem it can be shown that the density occupied by the beam on a position and momentum phase space plot is constant when the beam is only affected by conservative forces. The area occupied by the beam on this plot is known as the beam emittance, although there are a number of competing conventions for the exact mathematical definition of this property. Coordinates. In accelerator physics, coordinate positions are usually defined with respect to an idealized "reference particle", which follows the ideal design trajectory for the accelerator. The direction aligned with this trajectory is designated "z" (sometimes "s"), and is also referred to as the "longitudinal coordinate". Two transverse coordinate axes, x and y, are defined perpendicular to the z axis and to each other. In addition to describing the positions of each particle relative to the reference particle along the x, y, and z axes, it is also necessary to consider the rate of change of each of these values. This is typically given as a rate of change with respect to the longitudinal coordinate (x' = dx/dz) rather than with respect to time. In most cases, x' and y' are both much less than 1, as particles will be moving along the beam path much faster than transverse to it. Given this assumption, it is possible to use the small angle approximation to express x' and y' as angles rather than simple ratios. As such, x' and y' are most commonly expressed in milliradians. Ellipse equation. When an ellipse is drawn around the particle distribution in phase space, the equation for the ellipse is given as: formula_7 "Area" here is an area in phase space, and has units of length × angle. Some sources define the area as the beam emittance formula_6, while others use formula_8. It is also possible to define the area as a specific fraction of the particles in a beam with a two-dimensional Gaussian distribution. The other three coefficients, formula_3, formula_4, and formula_5, are the CS parameters. 
As this ellipse is an instantaneous plot of the positions and velocities of the particles at one point in the accelerator, these values will vary with time. Since there are only two independent variables, x and x', and the emittance is constant, only two of the CS parameters are independent. The relationship between the three parameters is given by:160 formula_9 Derivation for periodic systems. In addition to treating the CS parameters as an empirical description of a collection of particles in phase space, it is possible to derive them based on the equations of motion of particles in electromagnetic fields. Equation of motion. In a strong focusing accelerator, transverse focusing is primarily provided by quadrupole magnets. The linear equation of motion for transverse motion parallel to an axis of the magnet is: formula_10 where formula_11 is the "focusing coefficient", which has units of length−2, and is only nonzero in a quadrupole field. (Note that x is used throughout this explanation, but y could be equivalently used with a change of sign for k. The longitudinal coordinate, z, requires a somewhat different derivation.) Assuming formula_11 is periodic, for example, as in a circular accelerator, this is a differential equation with the same form as the Hill differential equation. The solution to this equation is a pseudo harmonic oscillator: formula_12 where A(z) is the amplitude of oscillation, formula_13 is the "betatron phase" which is dependent on the value of formula_11, and formula_14 is the initial phase. The amplitude is decomposed into a position dependent part formula_4 and an initial value formula_15, such that: formula_16 formula_17 Particle distributions. Given these equations of motion, taking the average values for particles in a beam yields:163 formula_18 formula_19 formula_20 These can be simplified with the following definitions: formula_21 formula_22 formula_23 giving: formula_24 formula_25 formula_26 These are the CS parameters and emittance in another form. Combined with the relationship between the parameters, this also leads to a definition of emittance for an arbitrary (not necessarily Gaussian) particle distribution:163 formula_27 Properties. The advantage of describing a particle distribution parametrically using the CS parameters is that the evolution of the overall distribution can be calculated using matrix optics more easily than tracking each individual particle and then combining the locations at multiple points along the accelerator path. For example, if a particle distribution with parameters formula_3, formula_4, and formula_5 passes through an empty space of length L, the values formula_28, formula_29, and formula_30 at the end of that space are given by:160 formula_31 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
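As a worked example of the drift-space result above, here is a minimal Python sketch (with made-up initial values) that propagates a parameter set through a field-free region of length L and verifies that the relation between the three parameters is preserved.

def drift_cs(alpha, beta, gamma, L):
    # Apply the 3x3 drift matrix given above:
    # beta(L) = beta - 2*L*alpha + L^2*gamma, alpha(L) = alpha - L*gamma, gamma(L) = gamma
    return alpha - L * gamma, beta - 2.0 * L * alpha + L**2 * gamma, gamma

alpha0, beta0 = 0.5, 4.0                 # hypothetical starting values
gamma0 = (1.0 + alpha0**2) / beta0
a, b, g = drift_cs(alpha0, beta0, gamma0, L=2.0)
print(b * g - a**2)                      # remains 1 up to round-off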
[ { "math_id": 0, "text": "\\gamma x^2 + 2 \\alpha x x' + \\beta x'^2 = \\epsilon" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "x'" }, { "math_id": 3, "text": "\\alpha" }, { "math_id": 4, "text": "\\beta" }, { "math_id": 5, "text": "\\gamma" }, { "math_id": 6, "text": "\\epsilon" }, { "math_id": 7, "text": "\\gamma x^2 + 2 \\alpha x x' + \\beta x'^2 = area" }, { "math_id": 8, "text": "\\epsilon / \\pi" }, { "math_id": 9, "text": "\\beta \\gamma - \\alpha^2 = 1 " }, { "math_id": 10, "text": "\\frac{d^2 x}{dz^2} = -k(z) x" }, { "math_id": 11, "text": "k(z)" }, { "math_id": 12, "text": "x(z) = A(z) \\cos(\\phi(z) + \\phi_o)" }, { "math_id": 13, "text": "\\phi(z)" }, { "math_id": 14, "text": "\\phi_o" }, { "math_id": 15, "text": "A_o" }, { "math_id": 16, "text": "x(z) = A_o \\sqrt{\\beta} \\cos(\\phi(z) + \\phi_o)" }, { "math_id": 17, "text": "x'(z) = A_o \\frac{\\beta'}{2\\sqrt{\\beta}} \\cos(\\phi(z) + \\phi_o) - A_o \\frac{1}{\\sqrt{\\beta}} \\sin(\\phi(z) + \\phi_o)" }, { "math_id": 18, "text": "\\langle x^2\\rangle =A_o^2 \\beta \\langle \\cos^2(\\phi(z)+\\phi_o)\\rangle = \\frac{1}{2}A_o^2\\beta" }, { "math_id": 19, "text": "\\langle {x'}^2 \\rangle = A_o^2 \\frac{(-\\beta'/2)^2}{2\\beta} + A_o^2 \\frac{1}{2\\beta} = (1 + (-\\beta'/2)^2) \\frac{A_o^2}{2\\beta}" }, { "math_id": 20, "text": "\\langle x x' \\rangle = -\\frac{A_o^2}{2} (-\\beta'/2) " }, { "math_id": 21, "text": "\\epsilon = \\frac{1}{2}A_o^2" }, { "math_id": 22, "text": "\\alpha = -\\frac{\\beta'}{2}" }, { "math_id": 23, "text": "\\gamma = \\frac{1+\\alpha^2}{\\beta}" }, { "math_id": 24, "text": "\\langle x^2\\rangle = \\epsilon \\beta" }, { "math_id": 25, "text": "\\langle {x'}^2 \\rangle = \\epsilon \\gamma" }, { "math_id": 26, "text": "\\langle x x' \\rangle = -\\epsilon \\alpha " }, { "math_id": 27, "text": "\\epsilon^2 = \\langle x^2 \\rangle \\langle x'^2 \\rangle - \\langle x x' \\rangle^2" }, { "math_id": 28, "text": "\\alpha(L)" }, { "math_id": 29, "text": "\\beta(L)" }, { "math_id": 30, "text": "\\gamma(L)" }, { "math_id": 31, "text": "\\begin{pmatrix}\n\\beta(L) \\\\\n\\alpha(L) \\\\\n\\gamma(L)\n\\end{pmatrix} =\n\\begin{pmatrix}\n1 & -2L & L^2 \\\\\n0 & 1 & -L \\\\\n0 & 0 & 1\n\\end{pmatrix}\n\\begin{pmatrix}\n\\beta\\\\ \\alpha \\\\ \\gamma\n\\end{pmatrix}" } ]
https://en.wikipedia.org/wiki?curid=69743250
6974666
Redistribution (Australia)
Redistribution of electoral boundaries in Australia In Australia, a redistribution is the process of redrawing the boundaries of electoral divisions for the House of Representatives arising from changes in population and changes in the number of representatives. There is no redistribution for the Senate as each State constitutes a division, though with multiple members. The Australian Electoral Commission (AEC), an independent statutory authority, oversees the apportionment and redistribution process for federal divisions, taking into account a number of factors. Politicians, political parties and the public may make submissions to the AEC on proposed new boundaries, but any interference with their deliberations is considered a serious offence. Section 24 of the Constitution of Australia specifies that the number of members of the House of Representatives in each state is to be calculated from their population, although each state is entitled to a minimum of five members regardless of population. This minimum condition currently only applies to Tasmania, the smallest state. Representation of territories has been specified by subsequent laws. After the number of members for each state and territory is determined, in a process called apportionment or determination, the state and territory is divided into that number of electoral divisions. A redistribution (sometimes called redrawing or "revision") of the geographic boundaries of divisions in a state or territory takes place when an apportionment determination results in a change in the number of seats to which a state or territory is entitled, at least once every seven years, or sooner when the AEC determines that population shifts within a state or territory have caused some seats to have too many or too few voters. The "Commonwealth Electoral Act 1918" requires that all electoral divisions within a state or territory have approximately an equal numbers of enrolled voters. The "Commonwealth Electoral Act (No. 2) 1973" reduced the allowed variation of electors in each division to 10% of the state or territory's average, down from 20%. New boundaries apply only to general elections held after the redistribution process has been completed, and by-elections are held on the previous electoral boundaries. Each state and territory has its own commission which follows similar but not identical processes and principles for determining electoral boundaries and conducting elections within their jurisdiction, and those of local governments. Redistribution triggers. Under Section 59 of the "Commonwealth Electoral Act 1918", a redistribution of State divisions is required or triggered in three circumstances: Total number of members. Section 24 of the Constitution specifies that the total number of members of the House of Representatives shall be "as nearly as practicable" twice as many as the number of members of the Senate. There is presently a total of 76 senators: 12 senators from each of the six states and two from each of the two autonomous internal territories (the Australian Capital Territory and the Northern Territory). Since the 2019 federal election, there have been 151 members of the House of Representatives. The total number of members of the House of Representatives, and consequently electorates, has increased from time to time. Every time there is an increase in the number of members, a redistribution is required to be undertaken, except in Tasmania which has always had the constitutional minimum number of five members. 
At the first federal election there were 65 members of the House of Representatives. In 1949, the number was increased from 74 to 121 (excluding the Australian Capital Territory and the Northern Territory), and in 1984 it was increased from 125 to 148. Following the 2017 apportionment, the total number of members increased from 150 to 151. Following the 2023 apportionment, the total number of members will decrease from 151 back to 150. Entitlement of states and territories. The AEC determines the number of members to which each state and territory is entitled, which is based on the population of each state and territory. The Australian Bureau of Statistics officially provides to the AEC its estimate of the population for each state and territory for a year after the first sitting day for a new House of Representatives, termed "ascertainment". Based on these figures, the AEC makes its apportionment determination one year and one day after the first sitting day for a new House of Representatives. A redistribution is postponed if it would begin within one year of the expiration of the House of Representatives, to prevent a general election from occurring while a redistribution is in progress. Section 24 of the Constitution requires that electorates be apportioned among the states in proportion to their respective populations; provided that each Original State has at least 5 members. Section 29 of the Constitution forbids electorate boundaries from crossing state lines. The current apportionment method is now found in section 48 of the "Commonwealth Electoral Act 1918". Under the current method, the AEC firstly calculates a quota, as follows: formula_0 State entitlements. After the quota is calculated, the number of members to be chosen in each State is the number of people of the State divided by the quota, and if on such division there is a remainder greater than one-half of the quota, one more member shall be chosen in the State. formula_1 In simpler terms, the entitlement of each state is the quotient rounded to the nearest whole number. However, each Original State is entitled to a minimum of five members under the Australian Constitution, thus giving Tasmania two more seats than its population would normally justify. Territory entitlements. Until 2020, the quotient and entitlement of each territory was obtained using a similar rounding method to the one used for the states. In 2003, a statistical error margin — equal to twice the standard error of the population estimate, as provided by the Australian Bureau of Statistics — was added to the quotient for each territory. The quotient and the error margin were added and rounded to the nearest whole number to determine the entitlement for each territory. These provisions enabled the Northern Territory to retain its second seat at the 2004 federal election, and the Australian Capital Territory to gain a third seat at the 2019 federal election. The "Electoral Amendment (Territory Representation) Act 2020", passed on 9 December 2020, amended the "Commonwealth Electoral Act 1918" to additionally apply the harmonic mean method in calculating each territory's entitlement; the method is also known as Dean’s Method. The inclusion of the statistical error margin was also abolished. 
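A minimal Python sketch of the state entitlement arithmetic described above is given below. The population figures are placeholders rather than official Australian Bureau of Statistics data, and the rounding rule is the one stated above: a quotient is rounded up when the remainder exceeds half a quota, and every Original State keeps a floor of five members.

def state_entitlements(populations, senators_for_states=72):
    # quota = total population of the six states / (number of senators for the states x 2)
    total = sum(populations.values())
    quota = total / (senators_for_states * 2)
    seats = {}
    for state, pop in populations.items():
        whole, remainder = divmod(pop, quota)
        entitlement = int(whole) + (1 if remainder > quota / 2 else 0)
        seats[state] = max(entitlement, 5)   # constitutional minimum for Original States
    return quota, seats

# Placeholder populations, in rough proportion to the real ones.
example = {"NSW": 8_200_000, "Vic": 6_700_000, "Qld": 5_300_000,
           "WA": 2_800_000, "SA": 1_800_000, "Tas": 570_000}
quota, seats = state_entitlements(example)
print(round(quota), seats)

The territory calculation, described in the surrounding paragraphs, is handled separately and since December 2020 uses the harmonic mean comparison rather than this simple rounding.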
The calculation of the territory's quotient is first determined in the same way used for state entitlement: formula_2 In accordance with Section 48 of the "Commonwealth Electoral Act 1918", the calculation of the entitlement of each territory varies on the quotient calculated: formula_3 formula_4 Under these new rules adopted in December 2020, the Northern Territory, with a quota of 1.4332, will retain two seats at the 2022 Australian federal election. It would have lost one of these seats under the AEC determination made in July 2020. The entitlement calculation applies to every territory of Australia except Jervis Bay Territory, which is taken to be part of the Australian Capital Territory. If the territory of Norfolk Island is not entitled to a seat, its population is added to the Australian Capital Territory for an adjusted calculation of the latter's quotient. Similarly, if either or both the territories of Christmas Island and Coco (Keeling) Islands is not entitled to a seat, the island's (or both islands') population is added to the Northern Territory for an adjusted calculation of the latter's quotient. As of 2024[ [update]], Norfolk Island is part of the Australian Capital Territory's Division of Bean, while Christmas Island and Coco (Keeling) Islands are part of the Northern Territory's Division of Lingiari. Apportionments. 2023 apportionment. The latest apportionment determination was made in July 2023. Following its timeline, the AEC on 27 July 2023 announced an apportionment determination based on the population figures for December 2022. The determination resulted in a reduction of one seat in New South Wales to 46, a reduction of one seat in Victoria to 38 and an increase of one seat in Western Australia to 16. The total number of seats in the House of Representatives will decrease from 151 to 150 at the next federal election. In May 2024, the AEC proposed that a new electorate of Bullwinkel be created in Western Australia, and the electorate of Higgins in Victoria be abolished. In June 2024, the AEC also proposed that electorate of North Sydney in NSW be abolished. 2020 apportionment. On 18 June 2020, the Bureau of Statistics had provided the AEC with population figures for December 2019. In the 2020 apportionment, Western Australia lost a seat to 15 seats and Victoria gained a seat to 39. Under the determination, the Northern Territory would have lost one of its two seats. However, an amendment in December 2020 changed the method for determining the apportionment for the territories, which had the effect of reversing the loss of the seat for the Northern Territory. The number of seats by States in the House of Representatives arising from the 2020 determination, with the change in law relating to the territories, were as follows: The population quota is 173,647 (25,005,200 divided by 144). The resulting redistributions must take place by July 2021 for them to be in place in time for the 2022 federal election, due by May 2022. 2017 apportionment. The first sitting of the House of Representatives following the July 2016 election took place on 31 August 2016, and the three-year term was scheduled to expire on 29 August 2019. Following its timeline, the AEC on 31 August 2017 announced an apportionment determination following the completion of processing of the 2016 census. The determination resulted in a reduction of one seat in South Australia to 10, an increase of one seat in Victoria to 38 and an additional seat in the ACT to 3. 
The total number of seats in the House of Representatives increased from 150 to 151 at the 2019 federal election. The number of seats by States in the House of Representatives arising from the 2017 determination were as follows: A draft redistribution in Victoria was released on 6 April 2018, and the final distribution was released on 12 July. There were also three scheduled redistributions of electoral boundaries, as seven years had elapsed since the last time these boundaries were reviewed. 2014 apportionment. On 13 November 2014, the AEC made an apportionment determination that resulted in Western Australia's entitlement increasing from 15 to 16 seats, and New South Wales's decreasing from 48 to 47 seats. The number of seats by States in the House of Representatives arising from the 2014 determination were as follows: A redistribution of electoral boundaries in New South Wales and Western Australia was undertaken before the 2016 election. The redistribution in New South Wales was announced on 16 October 2015, with the Labor-held Division of Hunter proposed to be abolished. The Division of Charlton was renamed Hunter to preserve the Hunter name used since federation. This effectively meant that the Division of Charlton was abolished and the Division of Hunter was retained. A redistribution also occurred in the Australian Capital Territory, as seven years had elapsed since the last time the ACT's boundaries were reviewed. Historical entitlements. The historical apportionment entitlement of seats for the various states and territories is: The entitlements of the external territories are less than 0.05 and are not shown in the table. Therefore, these territories are not entitled to any seats. The apportionment entitlements shown for Australian Capital Territory and Northern Territory have accounted for the populations of Norfolk Island, Christmas Island and Coco (Keeling) Islands. Historical apportionments. The historical apportionment of seats for the various states and territories is: Recent redistributions. Australian Capital Territory. The most recent redistribution of federal electoral divisions in the Australian Capital Territory commenced on 4 September 2017, due to changes in the territory's representation entitlement. The AEC released a proposed redistribution on 6 April 2018, and the final determination on 13 July 2018. The redistribution resulted in the creation of a third ACT electoral division named Bean (notionally fairly safe Labor), after historian Charles Bean. The next scheduled redistribution will begin within 30 days from 13 July 2025, seven years after the previous redistribution. New South Wales. New South Wales last underwent a redistribution on 25 February 2016, before the May 2016 federal election. Due to another change in the state's representation entitlement, New South Wales will undergo a redistribution in 2024 which will abolish a seat and reducing the entitlement to 46. If there had been no change in entitlement, the state would still have undergone a scheduled redistribution anyway (seven years since the previous redistribution). In the draft redistribution announced on 14 June 2024, the seat of North Sydney is proposed to be abolished. Northern Territory. On 7 December 2016, the Electoral Commission for the Northern Territory announced the results of its deliberations into the boundaries of Lingiari and Solomon, the two federal electoral divisions in the Northern Territory. 
New boundaries gazetted from 7 February 2017 will see the remainder of the Litchfield Municipality and parts of Palmerston (the suburbs of Farrar, Johnston, Mitchell, Zuccoli and part of Yarrawonga) transferred from Solomon to Lingiari. The following scheduled redistribution began on 22 February 2024. The number of seats for the Northern Territory (two seats) will remain unchanged. Queensland. A scheduled redistribution began in Queensland on 6 January 2017, and was finalised on 27 March 2018. Changes were made to the boundaries of 18 of Queensland's 30 electoral divisions, and no division names were changed. The next scheduled redistribution will begin within 30 days from 27 March 2025, seven years after the previous redistribution. South Australia. The redistribution in South Australia commenced on 4 September 2017. The proposed redistribution report was released on 13 April 2018, and the final determination on 20 July 2018. The AEC abolished the division of Port Adelaide. The hybrid urban-rural seat of Wakefield became the entirely urban seat of Spence, after Catherine Helen Spence. The more rural portions of Wakefield transferred to Grey and Barker. Port Adelaide was abolished due to population changes since the state's last redistribution in 2011. Although South Australia's population was still increasing, faster increases in other states saw a reduction in South Australia's representation from 11 to 10 seats. This was the third time South Australia lost a seat since the 1984 enlargement of the parliament, with Hawker abolished in 1993 and Bonython in 2004. The next scheduled redistribution will begin within 30 days from 20 July 2025, seven years after the previous redistribution. Tasmania. A scheduled redistribution began in Tasmania on 1 September 2016. The determinations were announced on 27 September 2017, involved boundary changes, and the Division of Denison was renamed the Division of Clark. Final determination was on 14 November 2017. The next scheduled redistribution will begin within 30 days from 14 November 2024, seven years after the previous redistribution. With redistributions typically taking more than a year, the redistribution will not take effect before the federal election scheduled for 2024–2025. Western Australia. A redistribution of federal election divisions in Western Australia was undertaken in 2020, due to changes in the state's representation entitlement. The determinations were made on 2 August 2021, and abolished the Division of Stirling. Due to changes in the state's representation entitlement, Western Australia will regain a seat in the 2024 redistribution. In the draft redistribution announced on 31 May 2024, the new seat is proposed to be named Bullwinkel. Victoria. A redistribution of federal electoral divisions in Victoria commenced on 4 September 2017, due to changes in the state's representation entitlement. The determinations were made on 13 July 2018, and created a 38th electoral division named Fraser (notionally a safe Labor). Several divisions were also renamed: Batman to Cooper (after William Cooper), McMillan to Monash (after Sir John Monash), Melbourne Ports to Macnamara (after Dame Jean Macnamara) and Murray to Nicholls (after Sir Douglas and Lady Nicholls). A proposal to rename Corangamite to Cox (after swimming instructor May Cox) did not proceed. The Coalition notionally lost the seat of Dunkley and Corangamite to Labor in the redistribution. 
Another redistribution in Victoria was finalised and gazetted on 26 July 2021, creating a 39th electoral division named Division of Hawke (notionally a safe Labor). None of the existing 38 divisions were notionally lost in the redistribution. Due to another change in the state's representation entitlement, Victoria will undergo a third consecutive redistribution in 2024 which will abolish a seat and returning the entitlement to 38 seats. In the draft redistribution announced on 31 May 2024, the seat of Higgins is proposed to be abolished. Redistribution process. A redistribution is undertaken on a State-by-State basis. After the redistribution process commences, a Redistribution Committee — consisting of the Electoral Commissioner, the Australian Electoral Officer for the State concerned (in the ACT, the senior Divisional Returning officer), the State Surveyor General and the State Auditor General — is formed. The Electoral Commissioner invites public suggestions on the redistribution which must be lodged within 30 days. A further period of 14 days is allowed for comments on the suggestions lodged. The Redistribution Committee then divides the State or Territory into divisions and publishes its proposed redistribution. A period of 28 days is allowed after publication of the proposed redistribution for written objections. A further period of 14 days is provided for comments on the objections lodged. These objections are considered by an augmented Electoral Commission — consisting of the four members of the Redistribution Committee and the two part-time members of the Electoral Commission. At the time of the redistribution the number of electors in the divisions may vary up to 10% from the 'quota' or average divisional figure but at a point 3.5 years after the expected completion of the redistribution, the figures should not vary from the average projected quota by more or less than 3.5%. Thus the most rapidly growing divisions are generally started with enrolments below the quota while those that are losing population are started above the quota. Neither the Government nor the Parliament can reject or amend the final determination of the augmented Electoral Commission. Management. Boundaries for the Australian House of Representatives and for the six state and two territorial legislatures are drawn up by independent authorities, at the federal level by the Australian Electoral Commission (AEC) and in the states and territories by their equivalent bodies. Politicians have no influence over the process, although they, along with any other citizen or organisation, can make submissions to the independent authorities suggesting changes. There is significantly less political interference in the redistribution process than is common in the United States. In 1978, federal Cabinet minister Reg Withers was forced to resign for suggesting to another minister that the name of a federal electorate be changed to suit a political ally. There have been examples of malapportionment of federal and state electoral districts in the past, often resulting in rural constituencies containing far fewer voters than urban ones and maintaining in power those parties that have rural support despite polling fewer popular votes. Past malapportionments in Queensland, Western Australia and the 'Playmander' in South Australia were notorious examples of the differences between urban and rural constituency sizes than their population would merit. 
The Playmander distorted electoral boundaries and policies that kept the Liberal and Country League in power for 32 years from 1936 to 1968. In extreme cases, rural areas had four times the voting value of metropolitan areas. Supporters of such arrangements claimed Australia's urban population dominates the countryside and that these practices gave fair representation to country people. Naming of divisions. The redistribution, creation and abolition of divisions is the responsibility of the AEC. When new divisions are created, the AEC will select a name. Most divisions are named in honour of prominent historical people, such as former politicians (often Prime Ministers), explorers, artists and engineers, and rarely for geographic places. Some of the criteria the AEC uses when naming new divisions are: Divisions with Indigenous names and the remaining 34 of the 65 original divisions (i.e those that were contested at the 1901 federal election, note that 32 of the 65 have since been abolished) should not be renamed or abolished. Geographical locations. There are several divisions named after geographical locations, including bodies of water, islands, settlements. For example, the seat of Werriwa in Western Sydney is named after Lake George (which is not located in the seat of Werriwa), which is known as in the Ngunnawal language. Individuals. Several deceased individuals have been honoured with divisions named after them, including several Indigenous Australians. Every deceased former Prime Minister except Sir Joseph Cook has a division named after them. The reason for Sir Joseph Cook not having an electorate named for him is due to the fact that he shares the same surname as Captain James Cook, who is the namesake of the Division of Cook in Sydney. There are seven former Prime Ministers who are still alive (Paul Keating, John Howard, Kevin Rudd, Julia Gillard, Tony Abbott, Malcolm Turnbull and Scott Morrison), who therefore do not have divisions named after them as they are still alive. However, divisions named after Prime Ministers are not necessarily divisions that have been renamed, generally a new division is created instead, sometimes even in a different city or region (e.g Malcolm Fraser served as the member for Wannon in western Victoria, but the Division of Fraser in western Melbourne). While it is unlikely for any former Prime Minister to have their exact division they served whilst in office renamed in honour of them (as opposed to creating a new division), for some divisions it is impossible due to the practice of keeping the names of the original divisions and those of divisions with Indigenous names. For example, the Bennelong (the seat John Howard held as Prime Minister) and Warringah (the seat Tony Abbott held as Prime Minister) will not be renamed "Howard" and "Abbott", respectively due to these names being Indigenous, while Wentworth (the seat Malcolm Turnbull held as Prime Minister) will not be renamed "Turnbull" due to Wentworth being one of the 34 remaining original divisions created. State electoral districts. The AEC guidelines explicitly prohibit the use of names that are also used for state electoral districts. However, due to the fact that the names of the original divisions should remain the same, there are still some divisions that share the names of state electoral districts because they are among the original divisions, as well as some that were created after 1901 but before this guideline was implemented. 
Thus, the following divisions share the same name as a state electoral district (Tasmanian electorates share the same names on both levels of politics and are thus not included as pairs of examples): Notional seat status. After a redistribution is carried out in a state or territory, the AEC calculates "notional" margins for the redistributed divisions by modelling the outcome of the previous election as if the new boundaries had been in place. These notional margins are used as the baseline for the electoral swings calculated and published in the AEC's virtual tally room at the following election. In some cases, the change in electoral boundaries can see the party which notionally holds the seat differ from the party which won it at the election. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mbox{Quota} = \\frac{\\mbox{Total population of the six states}}{\\mbox{Number of senators for the states x 2} }" }, { "math_id": 1, "text": " \\mbox{Quotient of each state} = \\frac{\\mbox{Population of each state }}{\\mbox{Quota} }" }, { "math_id": 2, "text": " \\mbox{Quotient of each territory} = \\frac{\\mbox{Population of each territory }}{\\mbox{Quota} }" }, { "math_id": 3, "text": " \\mbox{Minimum number of members} = \\mbox{Quotient rounded down to the nearest whole number}" }, { "math_id": 4, "text": " \\mbox{Harmonic mean} = \\frac{\\mbox{2 x Minimum number of members x (Minimum number of members + 1)}}{\\mbox{Minimum number of members + (Minimum number of members + 1)} }" } ]
https://en.wikipedia.org/wiki?curid=6974666
697531
Geometric–harmonic mean
In mathematics, the geometric–harmonic mean M("x", "y") of two positive real numbers "x" and "y" is defined as follows: we form the geometric mean of "g"0 = "x" and "h"0 = "y" and call it "g"1, i.e. "g"1 is the square root of "xy". We also form the harmonic mean of "x" and "y" and call it "h"1, i.e. "h"1 is the reciprocal of the arithmetic mean of the reciprocals of "x" and "y". These may be done sequentially (in any order) or simultaneously. Now we can iterate this operation with "g"1 taking the place of "x" and "h"1 taking the place of "y". In this way, two interdependent sequences ("g""n") and ("h""n") are defined: formula_0 and formula_1 Both of these sequences converge to the same number, which we call the geometric–harmonic mean M("x", "y") of "x" and "y". The geometric–harmonic mean is also designated as the harmonic–geometric mean. (cf. Wolfram MathWorld below.) The existence of the limit can be proved by means of the Bolzano–Weierstrass theorem, in a manner almost identical to the proof of the existence of the arithmetic–geometric mean. Properties. M("x", "y") is a number between the geometric and harmonic mean of "x" and "y"; in particular it is between "x" and "y". M("x", "y") is also homogeneous, i.e. if "r" &gt; 0, then M("rx", "ry") = "r" M("x", "y"). If AG("x", "y") is the arithmetic–geometric mean, then we also have formula_2 Inequalities. We have the following inequality involving the Pythagorean means {"H", "G", "A"} and iterated Pythagorean means {"HG", "HA", "GA"}: formula_3 where the iterated Pythagorean means have been identified with their parts {"H", "G", "A"} in progressing order: "HG"("x", "y") is the harmonic–geometric mean described in this article, "HA"("x", "y") is the arithmetic–harmonic mean (which coincides with the geometric mean "G"("x", "y")), and "GA"("x", "y") is the arithmetic–geometric mean.
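The definition above translates directly into a short iteration; the following Python sketch (the tolerance and sample inputs are arbitrary choices) computes M("x", "y") by repeatedly replacing the pair with its geometric and harmonic means.

from math import sqrt

def geometric_harmonic_mean(x, y, tol=1e-15):
    # g_{n+1} = sqrt(g_n * h_n), h_{n+1} = 2 / (1/g_n + 1/h_n)
    g, h = x, y
    while abs(g - h) > tol * max(g, h):
        g, h = sqrt(g * h), 2.0 / (1.0 / g + 1.0 / h)
    return 0.5 * (g + h)

print(geometric_harmonic_mean(1.0, 2.0))   # lies between 4/3 (harmonic) and sqrt(2) (geometric)

The two sequences approach each other rapidly, much as in the arithmetic–geometric mean iteration, so only a handful of steps are needed at double precision.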
[ { "math_id": 0, "text": "g_{n+1} = \\sqrt{g_n h_n}" }, { "math_id": 1, "text": "h_{n+1} = \\frac{2}{\\frac{1}{g_n} + \\frac{1}{h_n}}" }, { "math_id": 2, "text": "M(x,y) = \\frac{1}{AG(\\frac{1}{x},\\frac{1}{y})}" }, { "math_id": 3, "text": "\\min(x,y) \\leq H(x,y) \\leq HG(x,y) \\leq G(x,y) \\leq GA(x,y) \\leq A(x,y) \\leq \\max(x,y)" } ]
https://en.wikipedia.org/wiki?curid=697531
6976534
LiveMath
Computer algebra system LiveMath is a computer algebra system available on a number of platforms including Mac OS, macOS (Carbon), Microsoft Windows, Linux (x86) and Solaris (SPARC). It is the latest release of a system that originally emerged as Theorist for the "classic" Mac in 1989, became MathView and MathPlus in 1997 after it was sold to Waterloo Maple, and finally LiveMath after it was purchased by members of its own userbase in 1999. The application is currently owned by MathMonkeys of Cambridge, Massachusetts. The overall LiveMath suite contains LiveMath Maker, the main application, as well as LiveMath Viewer for end-users, and LiveMath Plug-In, an ActiveX plugin for browsers, which was discontinued in 2014. Description. LiveMath uses a worksheet-based approach, similar to products like Mathematica or MathCAD. The user enters equations into the worksheet and then uses the built-in functions to help solve them, or to reduce them numerically. Workbooks typically contain a number of equations separated into sections, along with data tables, graphs, and similar outputs. Unlike most CAS applications, LiveMath uses a full GUI with high-quality graphical representations of the equations at every step, including input. LiveMath also allows the user to interact with the equation in the sheet; for instance, one can drag an instance of formula_0 to the left-hand side of the equation, at which point LiveMath will rearrange the equation to solve for formula_0. LiveMath's algebraic solving systems are relatively simple compared to those of better-known systems like Mathematica, and it does not offer the same sort of automated single-step solving that those packages provide. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x" } ]
https://en.wikipedia.org/wiki?curid=6976534
697666
Structural rule
Rule of mathematical logic In the logical discipline of proof theory, a structural rule is an inference rule of a sequent calculus that does not refer to any logical connective but instead operates on the sequents directly. Structural rules often mimic the intended meta-theoretic properties of the logic. Logics that deny one or more of the structural rules are classified as substructural logics. Common structural rules. Three common structural rules are: weakening, where the hypotheses or the conclusion of a sequent may be extended with additional members (formula_0 and formula_1); contraction, where two equal members on the same side of a sequent may be replaced by a single member (formula_2 and formula_3); and exchange, where two members on the same side of a sequent may be swapped (formula_4 and formula_5). A logic without any of the above structural rules would interpret the sides of a sequent as pure sequences; with exchange, they can be considered to be multisets; and with both contraction and exchange they can be considered to be sets. These are not the only possible structural rules. A famous structural rule is known as cut. Considerable effort is spent by proof theorists in showing that cut rules are superfluous in various logics. More precisely, what is shown is that cut is only (in a sense) a tool for abbreviating proofs, and does not add to the theorems that can be proved. The successful 'removal' of cut rules, known as "cut elimination", is directly related to the philosophy of "computation as normalization" (see Curry–Howard correspondence); it often gives a good indication of the complexity of deciding a given logic. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
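As a rough illustration of how these rules act purely on the structure of a sequent, and not on any connective, the Python sketch below models a sequent as a pair of lists of formula names; this encoding is an ad-hoc choice for the example, not a standard proof-assistant representation.

# A sequent Gamma |- Sigma is modelled as (antecedent, succedent),
# each an ordered list of formulas represented here by plain strings.

def weaken_left(sequent, a):
    gamma, sigma = sequent
    return gamma + [a], sigma            # Gamma, A |- Sigma

def contract_left(sequent, a):
    gamma, sigma = sequent
    gamma = list(gamma)
    gamma.remove(a)                      # collapse two equal copies of A into one
    return gamma, sigma

def exchange_left(sequent, i, j):
    gamma, sigma = sequent
    gamma = list(gamma)
    gamma[i], gamma[j] = gamma[j], gamma[i]
    return gamma, sigma

s = (["A", "B", "A"], ["C"])
print(contract_left(exchange_left(s, 0, 1), "A"))   # (['B', 'A'], ['C'])

Dropping exchange forces the antecedent to be read as a sequence, while keeping exchange and contraction makes order and multiplicity irrelevant, matching the multiset and set readings described above.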
[ { "math_id": 0, "text": "\\frac{\\Gamma \\vdash \\Sigma}{\\Gamma, A \\vdash \\Sigma}" }, { "math_id": 1, "text": "\\frac{\\Gamma \\vdash \\Sigma}{\\Gamma \\vdash \\Sigma, A}" }, { "math_id": 2, "text": "\\frac{\\Gamma, A, A \\vdash \\Sigma}{\\Gamma, A \\vdash \\Sigma}" }, { "math_id": 3, "text": "\\frac{\\Gamma \\vdash A, A, \\Sigma}{\\Gamma \\vdash A, \\Sigma}" }, { "math_id": 4, "text": "\\frac{\\Gamma_1, A, \\Gamma_2, B, \\Gamma_3 \\vdash \\Sigma}{\\Gamma_1, B, \\Gamma_2, A, \\Gamma_3 \\vdash \\Sigma}" }, { "math_id": 5, "text": "\\frac{\\Gamma \\vdash \\Sigma_1, A, \\Sigma_2, B, \\Sigma_3}{\\Gamma \\vdash \\Sigma_1, B, \\Sigma_2, A, \\Sigma_3}" } ]
https://en.wikipedia.org/wiki?curid=697666
6976689
Significant wave height
Mean wave height of the highest third of the waves In physical oceanography, the significant wave height (SWH, HTSGW or "H"s) is defined traditionally as the mean "wave height" (trough to crest) of the highest third of the waves ("H"1/3). It is usually defined as four times the standard deviation of the surface elevation – or equivalently as four times the square root of the zeroth-order moment (area) of the "wave spectrum". The symbol "H"m0 is usually used for that latter definition. The significant wave height (Hs) may thus refer to "H"m0 or "H"1/3; the difference in magnitude between the two definitions is only a few percent. SWH is used to characterize "sea state", including winds and swell. Origin and definition. The original definition resulted from work by the oceanographer Walter Munk during World War II. The significant wave height was intended to mathematically express the height estimated by a "trained observer". It is commonly used as a measure of the height of ocean waves. Time domain definition. Significant wave height "H"1/3, or "H""s" or "H""sig", as determined in the time domain, directly from the time series of the surface elevation, is defined as the average height of that one-third of the "N" measured waves having the greatest heights: formula_0 where "H"m represents the individual wave heights, sorted into descending order of height as "m" increases from 1 to "N". Only the highest one-third is used, since this corresponds best with visual observations of experienced mariners, whose vision apparently focuses on the higher waves. Frequency domain definition. Significant wave height "H"m0, defined in the frequency domain, is used both for measured and forecasted wave variance spectra. Most easily, it is defined in terms of the variance "m"0 or standard deviation "σ""η" of the surface elevation: formula_1 where "m"0, the zeroth-moment of the variance spectrum, is obtained by integration of the variance spectrum. In case of a measurement, the standard deviation "σ""η" is the easiest and most accurate statistic to be used. Statistical distribution of the heights of individual waves. Significant wave height, scientifically represented as "H"s or "H"sig, is an important parameter for the statistical distribution of ocean waves. The most common waves are lower in height than "H"s. This implies that encountering the significant wave is not too frequent. However, statistically, it is possible to encounter a wave that is much higher than the significant wave. Generally, the statistical distribution of the individual wave heights is well approximated by a Rayleigh distribution. For example, given that "H"s is , statistically: This implies that one might encounter a wave that is roughly double the significant wave height. However, in rapidly changing conditions, the disparity between the significant wave height and the largest individual waves might be even larger. Other statistics. Other statistical measures of the wave height are also widely used. The RMS wave height, which is defined as square root of the average of the squares of all wave heights, is approximately equal to "H"s divided by 1.4. For example, according to the Irish Marine Institute: "… at midnight on 9/12/2007 a record significant wave height was recorded of 17.2m at with ["sic"] a period of 14 seconds." Measurement. 
Although most measuring devices estimate the significant wave height from a wave spectrum, satellite radar altimeters are unique in measuring directly the significant wave height thanks to the different time of return from wave crests and troughs within the area illuminated by the radar. The maximum ever measured wave height from a satellite is during a North Atlantic storm in 2011. Weather forecasts. The World Meteorological Organization stipulates that certain countries are responsible for providing weather forecasts for the world's oceans. These respective countries' meteorological offices are called Regional Specialized Meteorological Centers, or RSMCs. In their weather products, they give ocean wave height forecasts in significant wave height. In the United States, NOAA's National Weather Service is the RSMC for a portion of the North Atlantic, and a portion of the North Pacific. The Ocean Prediction Center and the Tropical Prediction Center's Tropical Analysis and Forecast Branch (TAFB) issue these forecasts. RSMCs use wind-wave models as tools to help predict the sea conditions. In the U.S., NOAA's Wavewatch III model is used heavily. Generalization to wave systems. A "significant wave height" is also defined similarly, from the "wave spectrum", for the different systems that make up the sea. We then have a "significant wave height" for the wind-sea or for a particular swell. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
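The time-domain and Rayleigh-based statistics discussed above can be checked numerically; the Python sketch below uses a synthetic, Rayleigh-distributed series of individual wave heights (the scale is arbitrary), computes H1/3 as the mean of the highest third, and compares it with the RMS wave height.

import math, random

random.seed(0)
hrms = 2.0   # arbitrary RMS wave height for the synthetic record
heights = [hrms * math.sqrt(-math.log(1.0 - random.random())) for _ in range(10000)]

# H_1/3: average of the highest one-third of the individual waves.
highest_third = sorted(heights, reverse=True)[: len(heights) // 3]
h13 = sum(highest_third) / len(highest_third)

hrms_est = math.sqrt(sum(h * h for h in heights) / len(heights))
print(h13, h13 / hrms_est)   # the ratio comes out close to 1.4

For a frequency-domain estimate one would instead integrate the variance spectrum to obtain m0 and take four times its square root, as in the definition of Hm0 above.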
[ { "math_id": 0, "text": "H_{1/3} = \\frac{1}{\\frac13\\,N}\\, \\sum_{m=1}^{\\frac13\\,N}\\, H_m" }, { "math_id": 1, "text": "H_{m_0} = 4 \\sqrt{m_0} = 4 \\sigma_\\eta, " }, { "math_id": 2, "text": "H_\\text{rms} = \\sqrt{ \\frac{1}{N} \\sum_{m=1}^N H_m^2}, " } ]
https://en.wikipedia.org/wiki?curid=6976689
69779073
Convergent beam electron diffraction
Convergent beam electron diffraction technique Convergent beam electron diffraction (CBED) is an electron diffraction technique where a convergent or divergent beam (conical electron beam) of electrons is used to study materials. History. CBED was first introduced in 1939 by Kossel and Möllenstedt. The development of the Field Emission Gun (FEG), Scanning Transmission Electron Microscopy (STEM), energy-filtering devices and related instrumentation in the 1970s made smaller probe diameters and larger convergence angles possible, and all this made CBED more popular. In the seventies, CBED was used for the determination of point group and space group symmetries by Goodman and Lehmpfuhl, and by Buxton, and starting in 1985, CBED was used by Tanaka et al. for studying crystal structures. Applications. By using CBED, the following information can be obtained: Parameters. formula_0 where formula_1 is the distance between the crystallographic planes formula_2, formula_3 is the Bragg angle, formula_4 is an integer, and formula_5 is the wavelength of the probing electrons. formula_7 Advantages and disadvantages of CBED. Since the diameter of the probing convergent beam is smaller than in the case of a parallel beam, most of the information in the CBED pattern is obtained from very small regions, which other methods cannot reach. For example, in Selected Area Electron Diffraction (SAED), where parallel beam illumination is used, the smallest area that can be selected is 0.5 μm at 100 kV, whereas in CBED, it is possible to go to areas smaller than 100 nm. Also, the amount of information that is obtained from a CBED pattern is larger than that from a SAED pattern. Nonetheless, CBED also has its disadvantages. The focused probe may generate contamination, which can cause localized stresses. But this was more of a problem in the past, and now, with high-vacuum conditions, one should be able to probe a clean region of the specimen in minutes to hours. Another disadvantage is that the convergent beam may heat or damage the chosen region of the specimen. Since 1939, CBED has been mainly used to study thicker materials. CBED on 2D crystals. Recently, CBED has been applied to study graphene and other 2D monolayer crystals and van der Waals structures. For 2D crystals, the analysis of CBED patterns is simplified, because the intensity distribution in a CBED disk is directly related to the atomic arrangement in the crystal. Using CBED, deformations have been retrieved at nanometer resolution, the interlayer distance of a bilayer crystal has been reconstructed, and so on. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
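For orientation, the two relations in the Parameters section can be evaluated with a few lines of Python; the numbers below (a 200 kV electron wavelength of about 2.51 pm, a lattice spacing of 2.04 Å and a 10 mrad convergence semi-angle) are typical but arbitrary, and reading the second quantity as a measure of the CBED disk extent in reciprocal space is an interpretation rather than something spelled out above.

import math

def bragg_angle(d_hkl, wavelength, n=1):
    # 2 * d * sin(theta) = n * lambda  ->  theta = asin(n * lambda / (2 * d))
    return math.asin(n * wavelength / (2.0 * d_hkl))

def disk_extent(wavelength, alpha):
    # D = (4 * pi / lambda) * sin(alpha), alpha being the convergence semi-angle
    return 4.0 * math.pi / wavelength * math.sin(alpha)

wavelength = 2.51e-12    # metres, roughly the 200 kV electron wavelength
d_spacing = 2.04e-10     # metres, an example interplanar spacing
alpha = 10e-3            # radians (10 mrad)

print(math.degrees(bragg_angle(d_spacing, wavelength)))   # about 0.35 degrees
print(disk_extent(wavelength, alpha))                     # about 5e10 per metre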
[ { "math_id": 0, "text": " 2 d_{hkl} \\sin\\theta_{\\rm B} = n \\lambda," }, { "math_id": 1, "text": " d_{hkl} " }, { "math_id": 2, "text": " (h,k,l) " }, { "math_id": 3, "text": " \\theta_{\\rm B} " }, { "math_id": 4, "text": " n " }, { "math_id": 5, "text": " \\lambda " }, { "math_id": 6, "text": " \\alpha " }, { "math_id": 7, "text": " D = \\frac{4\\pi}{\\lambda} \\sin \\alpha." }, { "math_id": 8, "text": " \\Delta f " }, { "math_id": 9, "text": " z " }, { "math_id": 10, "text": " \\Delta f" } ]
https://en.wikipedia.org/wiki?curid=69779073
697793
Thrust-to-weight ratio
Dimensionless ratio of thrust to weight of a jet or propeller engine Thrust-to-weight ratio is a dimensionless ratio of thrust to weight of a rocket, jet engine, propeller engine, or a vehicle propelled by such an engine that is an indicator of the performance of the engine or vehicle. The instantaneous thrust-to-weight ratio of a vehicle varies continually during operation due to progressive consumption of fuel or propellant and in some cases a gravity gradient. The thrust-to-weight ratio based on initial thrust and weight is often published and used as a figure of merit for quantitative comparison of a vehicle's initial performance. Calculation. The thrust-to-weight ratio is calculated by dividing the thrust (in SI units – in newtons) by the weight (in newtons) of the engine or vehicle. The weight (N) is calculated by multiplying the mass in kilograms (kg) by the acceleration due to gravity (m/s2). The thrust can also be measured in pound-force (lbf), provided the weight is measured in pounds (lb). Division using these two values still gives the numerically correct (dimensionless) thrust-to-weight ratio. For valid comparison of the initial thrust-to-weight ratio of two or more engines or vehicles, thrust must be measured under controlled conditions. Because an aircraft's weight can vary considerably, depending on factors such as munition load, fuel load, cargo weight, or even the weight of the pilot, the thrust-to-weight ratio is also variable and even changes during flight operations. There are several standards for determining the weight of an aircraft used to calculate the thrust-to-weight ratio range. Aircraft. The thrust-to-weight ratio and lift-to-drag ratio are the two most important parameters in determining the performance of an aircraft. The thrust-to-weight ratio varies continually during a flight. Thrust varies with throttle setting, airspeed, altitude, air temperature, etc. Weight varies with fuel burn and payload changes. For aircraft, the quoted thrust-to-weight ratio is often the maximum static thrust at sea level divided by the maximum takeoff weight. Aircraft with thrust-to-weight ratio greater than 1:1 can pitch straight up and maintain airspeed until performance decreases at higher altitude. A plane can take off even if the thrust is less than its weight as, unlike a rocket, the lifting force is produced by lift from the wings, not directly by thrust from the engine. As long as the aircraft can produce enough thrust to travel at a horizontal speed above its stall speed, the wings will produce enough lift to counter the weight of the aircraft. formula_0 Propeller-driven aircraft. For propeller-driven aircraft, the thrust-to-weight ratio can be calculated as follows in imperial units: formula_1 where formula_2 is propulsive efficiency (typically 0.65 for wooden propellers, 0.75 metal fixed pitch and up to 0.85 for constant-speed propellers), hp is the engine's shaft horsepower, and formula_3is true airspeed in feet per second, weight is in lbs. The metric formula is: formula_4 Rockets. The thrust-to-weight ratio of a rocket, or rocket-propelled vehicle, is an indicator of its acceleration expressed in multiples of gravitational acceleration "g". Rockets and rocket-propelled vehicles operate in a wide range of gravitational environments, including the "weightless" environment. The thrust-to-weight ratio is usually calculated from initial gross weight at sea level on earth and is sometimes called "thrust-to-Earth-weight ratio". 
The thrust-to-Earth-weight ratio of a rocket or rocket-propelled vehicle is an indicator of its acceleration expressed in multiples of earth's gravitational acceleration, "g"0. The thrust-to-weight ratio of a rocket improves as the propellant is burned. With constant thrust, the maximum ratio (maximum acceleration of the vehicle) is achieved just before the propellant is fully consumed. Each rocket has a characteristic thrust-to-weight curve, or acceleration curve, not just a scalar quantity. The thrust-to-weight ratio of an engine is greater than that of the complete launch vehicle, but is nonetheless useful because it determines the maximum acceleration that "any" vehicle using that engine could theoretically achieve with minimum propellant and structure attached. For a takeoff from the surface of the earth using thrust and no aerodynamic lift, the thrust-to-weight ratio for the whole vehicle must be greater than "one". In general, the thrust-to-weight ratio is numerically equal to the "g-force" that the vehicle can generate. Take-off can occur when the vehicle's "g-force" exceeds local gravity (expressed as a multiple of "g"0). The thrust-to-weight ratio of rockets typically greatly exceeds that of airbreathing jet engines because the comparatively far greater density of rocket fuel eliminates the need for much engineering materials to pressurize it. Many factors affect thrust-to-weight ratio. The instantaneous value typically varies over the duration of flight with the variations in thrust due to speed and altitude, together with changes in weight due to the amount of remaining propellant, and payload mass. Factors with the greatest effect include freestream air temperature, pressure, density, and composition. Depending on the engine or vehicle under consideration, the actual performance will often be affected by buoyancy and local gravitational field strength. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
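As a quick numerical illustration of the propeller formula from the aircraft section above (imperial units), the following Python sketch uses made-up figures for a light aircraft; none of the numbers refer to a specific type.

def prop_thrust_to_weight(eta_p, horsepower, v_fps, weight_lb):
    # T/W = (550 * eta_p / V) * (hp / W), with V in ft/s and weight in lb
    return 550.0 * eta_p / v_fps * (horsepower / weight_lb)

# Hypothetical light aircraft: constant-speed propeller (eta_p ~ 0.85),
# 180 hp engine, cruising at about 203 ft/s (~120 knots), 2400 lb gross weight.
print(prop_thrust_to_weight(0.85, 180.0, 203.0, 2400.0))   # roughly 0.17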
[ { "math_id": 0, "text": "\\left(\\frac{T}{W}\\right)_\\text{cruise} = \\left(\\frac{D}{L}\\right)_\\text{cruise} = \\frac{1}{\\left(\\frac{L}{D}\\right)_\\text{cruise}}." }, { "math_id": 1, "text": "\\frac{T}{W} = \\frac{550\\eta_\\mathrm{p}}{V} \\frac{\\text{hp}}{W}," }, { "math_id": 2, "text": "\\eta_\\mathrm{p}\\;" }, { "math_id": 3, "text": "V\\;" }, { "math_id": 4, "text": "\\frac{T}{W}=\\left(\\frac{\\eta_\\mathrm{p}}{V}\\right)\\left(\\frac{P}{W}\\right)." } ]
https://en.wikipedia.org/wiki?curid=697793
6978815
Bayes estimator
Mathematical decision rule In estimation theory and decision theory, a Bayes estimator or a Bayes action is an estimator or decision rule that minimizes the posterior expected value of a loss function (i.e., the posterior expected loss). Equivalently, it maximizes the posterior expectation of a utility function. An alternative way of formulating an estimator within Bayesian statistics is maximum a posteriori estimation. Definition. Suppose an unknown parameter formula_0 is known to have a prior distribution formula_1. Let formula_2 be an estimator of formula_0 (based on some measurements "x"), and let formula_3 be a loss function, such as squared error. The Bayes risk of formula_4 is defined as formula_5, where the expectation is taken over the probability distribution of formula_0: this defines the risk function as a function of formula_4. An estimator formula_4 is said to be a "Bayes estimator" if it minimizes the Bayes risk among all estimators. Equivalently, the estimator which minimizes the posterior expected loss formula_6 "for each formula_7 " also minimizes the Bayes risk and therefore is a Bayes estimator. If the prior is improper then an estimator which minimizes the posterior expected loss "for each formula_7" is called a generalized Bayes estimator. Examples. Minimum mean square error estimation. The most common risk function used for Bayesian estimation is the mean square error (MSE), also called "squared error risk". The MSE is defined by formula_8 where the expectation is taken over the joint distribution of formula_0 and formula_7. Posterior mean. Using the MSE as risk, the Bayes estimate of the unknown parameter is simply the mean of the posterior distribution, formula_9 This is known as the "minimum mean square error" (MMSE) estimator. Bayes estimators for conjugate priors. If there is no inherent reason to prefer one prior probability distribution over another, a conjugate prior is sometimes chosen for simplicity. A conjugate prior is defined as a prior distribution belonging to some parametric family, for which the resulting posterior distribution also belongs to the same family. This is an important property, since the Bayes estimator, as well as its statistical properties (variance, confidence interval, etc.), can all be derived from the posterior distribution. Conjugate priors are especially useful for sequential estimation, where the posterior of the current measurement is used as the prior in the next measurement. In sequential estimation, unless a conjugate prior is used, the posterior distribution typically becomes more complex with each added measurement, and the Bayes estimator cannot usually be calculated without resorting to numerical methods. Following are some examples of conjugate priors. formula_13 formula_17 formula_20 Alternative risk functions. Risk functions are chosen depending on how one measures the distance between the estimate and the unknown parameter. The MSE is the most common risk function in use, primarily due to its simplicity. However, alternative risk functions are also occasionally used. The following are several examples of such alternatives. We denote the posterior generalized distribution function by formula_21. formula_23 formula_24 formula_26 formula_27 formula_30 Posterior mode. Other loss functions can be conceived, although the mean squared error is the most widely used and validated. Other loss functions are used in statistics, particularly in robust statistics. Generalized Bayes estimators. 
The prior distribution formula_31 has thus far been assumed to be a true probability distribution, in that formula_32 However, occasionally this can be a restrictive requirement. For example, there is no distribution (covering the set, R, of all real numbers) for which every real number is equally likely. Yet, in some sense, such a "distribution" seems like a natural choice for a non-informative prior, i.e., a prior distribution which does not imply a preference for any particular value of the unknown parameter. One can still define a function formula_33, but this would not be a proper probability distribution since it has infinite mass, formula_34 Such measures formula_35, which are not probability distributions, are referred to as improper priors. The use of an improper prior means that the Bayes risk is undefined (since the prior is not a probability distribution and we cannot take an expectation under it). As a consequence, it is no longer meaningful to speak of a Bayes estimator that minimizes the Bayes risk. Nevertheless, in many cases, one can define the posterior distribution formula_36 This is a definition, and not an application of Bayes' theorem, since Bayes' theorem can only be applied when all distributions are proper. However, it is not uncommon for the resulting "posterior" to be a valid probability distribution. In this case, the posterior expected loss formula_37 is typically well-defined and finite. Recall that, for a proper prior, the Bayes estimator minimizes the posterior expected loss. When the prior is improper, an estimator which minimizes the posterior expected loss is referred to as a generalized Bayes estimator. Example. A typical example is estimation of a location parameter with a loss function of the type formula_38. Here formula_0 is a location parameter, i.e., formula_39. It is common to use the improper prior formula_40 in this case, especially when no other more subjective information is available. This yields formula_41 so the posterior expected loss formula_42 The generalized Bayes estimator is the value formula_43 that minimizes this expression for a given formula_7. This is equivalent to minimizing formula_44 for a given formula_45        (1) In this case it can be shown that the generalized Bayes estimator has the form formula_46, for some constant formula_47. To see this, let formula_47 be the value minimizing (1) when formula_48. Then, given a different value formula_49, we must minimize formula_50        (2) This is identical to (1), except that formula_51 has been replaced by formula_52. Thus, the expression minimizing is given by formula_53, so that the optimal estimator has the form formula_54 Empirical Bayes estimators. A Bayes estimator derived through the empirical Bayes method is called an empirical Bayes estimator. Empirical Bayes methods enable the use of auxiliary empirical data, from observations of related parameters, in the development of a Bayes estimator. This is done under the assumption that the estimated parameters are obtained from a common prior. For example, if independent observations of different parameters are performed, then the estimation performance of a particular parameter can sometimes be improved by using data from other observations. There are both parametric and non-parametric approaches to empirical Bayes estimation. Example. The following is a simple example of parametric empirical Bayes estimation. 
Given past observations formula_55 having conditional distribution formula_56, one is interested in estimating formula_57 based on formula_58. Assume that the formula_59's have a common prior formula_1 which depends on unknown parameters. For example, suppose that formula_1 is normal with unknown mean formula_60 and variance formula_61 We can then use the past observations to determine the mean and variance of formula_1 in the following way. First, we estimate the mean formula_62 and variance formula_63 of the marginal distribution of formula_64 using the maximum likelihood approach: formula_65 formula_66 Next, we use the law of total expectation to compute formula_67 and the law of total variance to compute formula_68 such that formula_69 formula_70 where formula_71 and formula_72 are the moments of the conditional distribution formula_56, which are assumed to be known. In particular, suppose that formula_73 and that formula_74; we then have formula_75 formula_76 Finally, we obtain the estimated moments of the prior, formula_77 formula_78 For example, if formula_79, and if we assume a normal prior (which is a conjugate prior in this case), we conclude that formula_80, from which the Bayes estimator of formula_57 based on formula_58 can be calculated. Properties. Admissibility. Bayes rules having finite Bayes risk are typically admissible. The following are some specific examples of admissibility theorems. By contrast, generalized Bayes rules often have undefined Bayes risk in the case of improper priors. These rules are often inadmissible and the verification of their admissibility can be difficult. For example, the generalized Bayes estimator of a location parameter θ based on Gaussian samples (described in the "Generalized Bayes estimator" section above) is inadmissible for formula_81; this is known as Stein's phenomenon. Asymptotic efficiency. Let θ be an unknown random variable, and suppose that formula_82 are iid samples with density formula_83. Let formula_84 be a sequence of Bayes estimators of θ based on an increasing number of measurements. We are interested in analyzing the asymptotic performance of this sequence of estimators, i.e., the performance of formula_85 for large "n". To this end, it is customary to regard θ as a deterministic parameter whose true value is formula_86. Under specific conditions, for large samples (large values of "n"), the posterior density of θ is approximately normal. In other words, for large "n", the effect of the prior probability on the posterior is negligible. Moreover, if δ is the Bayes estimator under MSE risk, then it is asymptotically unbiased and it converges in distribution to the normal distribution: formula_87 where "I"(θ0) is the Fisher information of θ0. It follows that the Bayes estimator δ"n" under MSE is asymptotically efficient. Another estimator which is asymptotically normal and efficient is the maximum likelihood estimator (MLE). The relations between the maximum likelihood and Bayes estimators can be shown in the following simple example. Example: estimating "p" in a binomial distribution. Consider the estimator of θ based on binomial sample "x"~b(θ,"n") where θ denotes the probability for success. Assuming θ is distributed according to the conjugate prior, which in this case is the Beta distribution B("a","b"), the posterior distribution is known to be B(a+x,b+n-x). 
Thus, the Bayes estimator under MSE is formula_88 The MLE in this case is x/n and so we get, formula_89 The last equation implies that, for "n" → ∞, the Bayes estimator (in the described problem) is close to the MLE. On the other hand, when "n" is small, the prior information is still relevant to the decision problem and affects the estimate. To see the relative weight of the prior information, assume that "a"="b"; in this case each measurement brings in 1 new bit of information; the formula above shows that the prior information has the same weight as "a+b" bits of the new information. In applications, one often knows very little about fine details of the prior distribution; in particular, there is no reason to assume that it coincides with B("a","b") exactly. In such a case, one possible interpretation of this calculation is: "there is a non-pathological prior distribution with the mean value 0.5 and the standard deviation "d" which gives the weight of prior information equal to 1/(4"d"2)-1 bits of new information." Another example of the same phenomena is the case when the prior estimate and a measurement are normally distributed. If the prior is centered at "B" with deviation Σ, and the measurement is centered at "b" with deviation σ, then the posterior is centered at formula_90, with weights in this weighted average being α=σ², β=Σ². Moreover, the squared posterior deviation is Σ²+σ². In other words, the prior is combined with the measurement in "exactly" the same way as if it were an extra measurement to take into account. For example, if Σ=σ/2, then the deviation of 4 measurements combined matches the deviation of the prior (assuming that errors of measurements are independent). And the weights α,β in the formula for posterior match this: the weight of the prior is 4 times the weight of the measurement. Combining this prior with "n" measurements with average "v" results in the posterior centered at formula_91; in particular, the prior plays the same role as 4 measurements made in advance. In general, the prior has the weight of (σ/Σ)² measurements. Compare to the example of binomial distribution: there the prior has the weight of (σ/Σ)²−1 measurements. One can see that the exact weight does depend on the details of the distribution, but when σ≫Σ, the difference becomes small. Practical example of Bayes estimators. The Internet Movie Database uses a formula for calculating and comparing the ratings of films by its users, including their Top Rated 250 Titles which is claimed to give "a true Bayesian estimate". The following Bayesian formula was initially used to calculate a weighted average score for the Top 250, though the formula has since changed: formula_92 where: formula_93 = weighted rating formula_94 = average rating for the movie as a number from 1 to 10 (mean) = (Rating) formula_95 = number of votes/ratings for the movie = (votes) formula_96 = weight given to the prior estimate (in this case, the number of votes IMDB deemed necessary for average rating to approach statistical validity) formula_97 = the mean vote across the whole pool (currently 7.0) Note that "W" is just the weighted arithmetic mean of "R" and "C" with weight vector "(v, m)". As the number of ratings surpasses "m", the confidence of the average rating surpasses the confidence of the mean vote for all films (C), and the weighted bayesian rating (W) approaches a straight average (R). 
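The binomial calculation above, and the IMDb-style weighted rating, can be checked with a few lines of Python. This sketch is not from the article; the prior parameters, vote counts, and the values used for "m" and "C" are illustrative placeholders rather than IMDb's actual settings.

```python
from fractions import Fraction

# Beta(a, b) prior on theta, x successes in n Bernoulli trials.
def bayes_binomial(x, n, a, b):
    return Fraction(a + x, a + b + n)

def bayes_binomial_as_weighted_average(x, n, a, b):
    prior_mean = Fraction(a, a + b)
    mle = Fraction(x, n)
    w_prior = Fraction(a + b, a + b + n)
    return w_prior * prior_mean + (1 - w_prior) * mle

x, n, a, b = 7, 10, 2, 2
assert bayes_binomial(x, n, a, b) == bayes_binomial_as_weighted_average(x, n, a, b)
print(bayes_binomial(x, n, a, b))        # 9/14, between the prior mean 1/2 and the MLE 7/10

# IMDb-style weighted rating W = (R*v + C*m) / (v + m).
def weighted_rating(R, v, C=7.0, m=2500):   # C and m are illustrative placeholders
    return (R * v + C * m) / (v + m)

print(weighted_rating(R=9.8, v=40))        # few votes: close to C
print(weighted_rating(R=9.2, v=500_000))   # many votes: close to R
```

The assertion verifies the identity quoted above: the Bayes estimator is a weighted average of the prior mean and the MLE, with the prior carrying the weight of a+b observations.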
The closer "v" (the number of ratings for the film) is to zero, the closer "W" is to "C", where W is the weighted rating and C is the average rating of all films. So, in simpler terms, the fewer ratings/votes cast for a film, the more that film's Weighted Rating will skew towards the average across all films, while films with many ratings/votes will have a rating approaching its pure arithmetic average rating. IMDb's approach ensures that a film with only a few ratings, all at 10, would not rank above "the Godfather", for example, with a 9.2 average from over 500,000 ratings. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\theta" }, { "math_id": 1, "text": "\\pi" }, { "math_id": 2, "text": "\\widehat{\\theta} = \\widehat{\\theta}(x)" }, { "math_id": 3, "text": "L(\\theta,\\widehat{\\theta})" }, { "math_id": 4, "text": "\\widehat{\\theta}" }, { "math_id": 5, "text": "E_\\pi(L(\\theta, \\widehat{\\theta}))" }, { "math_id": 6, "text": "E(L(\\theta,\\widehat{\\theta}) | x)" }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "\\mathrm{MSE} = E\\left[ (\\widehat{\\theta}(x) - \\theta)^2 \\right]," }, { "math_id": 9, "text": "\\widehat{\\theta}(x) = E[\\theta |x]=\\int \\theta\\, p(\\theta |x)\\,d\\theta." }, { "math_id": 10, "text": "x|\\theta" }, { "math_id": 11, "text": "x|\\theta \\sim N(\\theta,\\sigma^2)" }, { "math_id": 12, "text": "\\theta \\sim N(\\mu,\\tau^2)" }, { "math_id": 13, "text": "\\widehat{\\theta}(x)=\\frac{\\sigma^{2}}{\\sigma^{2}+\\tau^{2}}\\mu+\\frac{\\tau^{2}}{\\sigma^{2}+\\tau^{2}}x." }, { "math_id": 14, "text": "x_1, ..., x_n" }, { "math_id": 15, "text": "x_i|\\theta \\sim P(\\theta)" }, { "math_id": 16, "text": "\\theta \\sim G(a,b)" }, { "math_id": 17, "text": "\\widehat{\\theta}(X)=\\frac{n\\overline{X}+a}{n+b}." }, { "math_id": 18, "text": "x_i|\\theta \\sim U(0,\\theta)" }, { "math_id": 19, "text": "\\theta \\sim Pa(\\theta_0,a)" }, { "math_id": 20, "text": "\\widehat{\\theta}(X)=\\frac{(a+n)\\max{(\\theta_0,x_1,...,x_n)}}{a+n-1}." }, { "math_id": 21, "text": "F" }, { "math_id": 22, "text": " a>0 " }, { "math_id": 23, "text": " L(\\theta,\\widehat{\\theta}) = a|\\theta-\\widehat{\\theta}| " }, { "math_id": 24, "text": " F(\\widehat{\\theta }(x)|X) = \\tfrac{1}{2}. " }, { "math_id": 25, "text": " a,b>0 " }, { "math_id": 26, "text": " L(\\theta,\\widehat{\\theta}) = \\begin{cases}\n a|\\theta-\\widehat{\\theta}|, & \\mbox{for }\\theta-\\widehat{\\theta} \\ge 0 \\\\\n b|\\theta-\\widehat{\\theta}|, & \\mbox{for }\\theta-\\widehat{\\theta} < 0\n \\end{cases}\n" }, { "math_id": 27, "text": " F(\\widehat{\\theta }(x)|X) = \\frac{a}{a+b}. " }, { "math_id": 28, "text": " K>0 " }, { "math_id": 29, "text": " L>0 " }, { "math_id": 30, "text": " L(\\theta,\\widehat{\\theta}) = \\begin{cases}\n 0, & \\mbox{for }|\\theta-\\widehat{\\theta}| < K \\\\\n L, & \\mbox{for }|\\theta-\\widehat{\\theta}| \\ge K.\n \\end{cases}\n" }, { "math_id": 31, "text": "p" }, { "math_id": 32, "text": "\\int p(\\theta) d\\theta = 1." }, { "math_id": 33, "text": "p(\\theta) = 1" }, { "math_id": 34, "text": "\\int{p(\\theta)d\\theta}=\\infty." }, { "math_id": 35, "text": "p(\\theta)" }, { "math_id": 36, "text": "p(\\theta|x) = \\frac{p(x|\\theta) p(\\theta)}{\\int p(x|\\theta) p(\\theta) d\\theta}." }, { "math_id": 37, "text": " \\int{L(\\theta,a)p(\\theta|x)d\\theta}" }, { "math_id": 38, "text": "L(a-\\theta)" }, { "math_id": 39, "text": "p(x|\\theta) = f(x-\\theta)" }, { "math_id": 40, "text": "p(\\theta)=1" }, { "math_id": 41, "text": "p(\\theta|x) = \\frac{p(x|\\theta) p(\\theta)}{p(x)} = \\frac{f(x-\\theta)}{p(x)}" }, { "math_id": 42, "text": "E[L(a-\\theta)|x] = \\int{L(a-\\theta) p(\\theta|x) d\\theta} = \\frac{1}{p(x)} \\int L(a-\\theta) f(x-\\theta) d\\theta." }, { "math_id": 43, "text": "a(x)" }, { "math_id": 44, "text": "\\int L(a-\\theta) f(x-\\theta) d\\theta" }, { "math_id": 45, "text": "x." }, { "math_id": 46, "text": "x+a_0" }, { "math_id": 47, "text": "a_0" }, { "math_id": 48, "text": "x=0" }, { "math_id": 49, "text": "x_1" }, { "math_id": 50, "text": "\\int L(a-\\theta) f(x_1-\\theta) d\\theta = \\int L(a-x_1-\\theta') f(-\\theta') d\\theta'." 
}, { "math_id": 51, "text": "a" }, { "math_id": 52, "text": "a-x_1" }, { "math_id": 53, "text": "a-x_1 = a_0" }, { "math_id": 54, "text": "a(x) = a_0 + x.\\,\\!" }, { "math_id": 55, "text": "x_1,\\ldots,x_n" }, { "math_id": 56, "text": "f(x_i|\\theta_i)" }, { "math_id": 57, "text": "\\theta_{n+1}" }, { "math_id": 58, "text": "x_{n+1}" }, { "math_id": 59, "text": "\\theta_i" }, { "math_id": 60, "text": "\\mu_\\pi\\,\\!" }, { "math_id": 61, "text": "\\sigma_\\pi\\,\\!." }, { "math_id": 62, "text": "\\mu_m\\,\\!" }, { "math_id": 63, "text": "\\sigma_m\\,\\!" }, { "math_id": 64, "text": "x_1, \\ldots, x_n" }, { "math_id": 65, "text": "\\widehat{\\mu}_m=\\frac{1}{n}\\sum{x_i}," }, { "math_id": 66, "text": "\\widehat{\\sigma}_m^{2}=\\frac{1}{n}\\sum{(x_i-\\widehat{\\mu}_m)^{2}}." }, { "math_id": 67, "text": "\\mu_m" }, { "math_id": 68, "text": "\\sigma_m^{2}" }, { "math_id": 69, "text": "\\mu_m=E_\\pi[\\mu_f(\\theta)] \\,\\!," }, { "math_id": 70, "text": "\\sigma_m^{2}=E_\\pi[\\sigma_f^{2}(\\theta)]+E_\\pi[(\\mu_f(\\theta)-\\mu_m)^{2}]," }, { "math_id": 71, "text": "\\mu_f(\\theta)" }, { "math_id": 72, "text": "\\sigma_f(\\theta)" }, { "math_id": 73, "text": "\\mu_f(\\theta) = \\theta" }, { "math_id": 74, "text": "\\sigma_f^{2}(\\theta) = K" }, { "math_id": 75, "text": "\\mu_\\pi=\\mu_m \\,\\!," }, { "math_id": 76, "text": "\\sigma_\\pi^{2}=\\sigma_m^{2}-\\sigma_f^{2}=\\sigma_m^{2}-K ." }, { "math_id": 77, "text": "\\widehat{\\mu}_\\pi=\\widehat{\\mu}_m, " }, { "math_id": 78, "text": "\\widehat{\\sigma}_\\pi^{2}=\\widehat{\\sigma}_m^{2}-K. " }, { "math_id": 79, "text": "x_i|\\theta_i \\sim N(\\theta_i,1)" }, { "math_id": 80, "text": " \\theta_{n+1}\\sim N(\\widehat{\\mu}_\\pi,\\widehat{\\sigma}_\\pi^{2}) " }, { "math_id": 81, "text": "p>2" }, { "math_id": 82, "text": "x_1,x_2,\\ldots" }, { "math_id": 83, "text": "f(x_i|\\theta)" }, { "math_id": 84, "text": "\\delta_n = \\delta_n(x_1,\\ldots,x_n)" }, { "math_id": 85, "text": "\\delta_n" }, { "math_id": 86, "text": "\\theta_0" }, { "math_id": 87, "text": " \\sqrt{n}(\\delta_n - \\theta_0) \\to N\\left(0 , \\frac{1}{I(\\theta_0)}\\right)," }, { "math_id": 88, "text": " \\delta_n(x)=E[\\theta|x]=\\frac{a+x}{a+b+n}." }, { "math_id": 89, "text": " \\delta_n(x)=\\frac{a+b}{a+b+n}E[\\theta]+\\frac{n}{a+b+n}\\delta_{MLE}." }, { "math_id": 90, "text": "\\frac{\\alpha}{\\alpha+\\beta}B+\\frac{\\beta}{\\alpha+\\beta}b" }, { "math_id": 91, "text": "\\frac{4}{4+n}V+\\frac{n}{4+n}v" }, { "math_id": 92, "text": "W = {Rv + Cm\\over v+m}\\ " }, { "math_id": 93, "text": "W\\ " }, { "math_id": 94, "text": "R\\ " }, { "math_id": 95, "text": "v\\ " }, { "math_id": 96, "text": "m\\ " }, { "math_id": 97, "text": "C\\ " } ]
https://en.wikipedia.org/wiki?curid=6978815
6979740
Conceptual clustering
Machine learning paradigm Conceptual clustering is a machine learning paradigm for unsupervised classification that has been defined by Ryszard S. Michalski in 1980 (Fisher 1987, Michalski 1980) and developed mainly during the 1980s. It is distinguished from ordinary data clustering by generating a concept description for each generated class. Most conceptual clustering methods are capable of generating hierarchical category structures; see Categorization for more information on hierarchy. Conceptual clustering is closely related to formal concept analysis, decision tree learning, and mixture model learning. Conceptual clustering vs. data clustering. Conceptual clustering is obviously closely related to data clustering; however, in conceptual clustering it is not only the inherent structure of the data that drives cluster formation, but also the Description language which is available to the learner. Thus, a statistically strong grouping in the data may fail to be extracted by the learner if the prevailing concept description language is incapable of describing that particular "regularity". In most implementations, the description language has been limited to feature conjunction, although in COBWEB (see "" below), the feature language is probabilistic. List of published algorithms. A fair number of algorithms have been proposed for conceptual clustering. Some examples are given below: More general discussions and reviews of conceptual clustering can be found in the following publications: Example: A basic conceptual clustering algorithm. This section discusses the rudiments of the conceptual clustering algorithm COBWEB. There are many other algorithms using different heuristics and "category goodness" or category evaluation criteria, but COBWEB is one of the best known. The reader is referred to the bibliography for other methods. Knowledge representation. The COBWEB data structure is a hierarchy (tree) wherein each node represents a given "concept". Each concept represents a set (actually, a multiset or bag) of objects, each object being represented as a binary-valued property list. The data associated with each tree node (i.e., concept) are the integer property counts for the objects in that concept. For example, (see figure), let a concept formula_0 contain the following four objects (repeated objects being permitted). The three properties might be, for example, codice_4. Then what is stored at this concept node is the property count codice_5, indicating that 1 of the objects in the concept is male, 3 of the objects have wings, and 3 of the objects are nocturnal. The concept "description" is the category-conditional probability (likelihood) of the properties at the node. Thus, given that an object is a member of category (concept) formula_0, the likelihood that it is male is formula_1. Likewise, the likelihood that the object has wings and likelihood that the object is nocturnal or both is formula_2. The concept description can therefore simply be given as codice_6, which corresponds to the formula_0-conditional feature likelihood, i.e., formula_3. The figure to the right shows a concept tree with five concepts. formula_4 is the root concept, which contains all ten objects in the data set. Concepts formula_0 and formula_5 are the children of formula_4, the former containing four objects, and the later containing six objects. Concept formula_5 is also the parent of concepts formula_6, formula_7, and formula_8, which contain three, two, and one object, respectively. 
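The probabilistic concept description just outlined is straightforward to mirror in code. The following Python sketch (not an implementation of COBWEB itself, and not taken from the cited papers) stores the integer property counts of a single concept node and derives the category-conditional likelihoods from them; the four objects used are one possible instantiation of the counts codice_5 from the example above, since the article's own object list is not reproduced here.

```python
class ConceptNode:
    """A COBWEB-style concept: integer attribute counts plus an object count."""

    def __init__(self, attribute_names):
        self.attribute_names = list(attribute_names)
        self.counts = {name: 0 for name in self.attribute_names}
        self.n_objects = 0
        self.children = []          # subordinate concepts, if any

    def add_object(self, obj):
        """obj is a dict of binary attribute values, e.g. {'male': 0, 'winged': 1, ...}."""
        self.n_objects += 1
        for name in self.attribute_names:
            self.counts[name] += int(obj[name])

    def description(self):
        """Category-conditional likelihoods p(attribute = 1 | concept)."""
        if self.n_objects == 0:
            return {name: 0.0 for name in self.attribute_names}
        return {name: self.counts[name] / self.n_objects for name in self.attribute_names}

# Four hypothetical objects consistent with the counts [1, 3, 3] of concept C1 above.
c1 = ConceptNode(["male", "winged", "nocturnal"])
for obj in [{"male": 1, "winged": 1, "nocturnal": 1},
            {"male": 0, "winged": 1, "nocturnal": 1},
            {"male": 0, "winged": 1, "nocturnal": 1},
            {"male": 0, "winged": 0, "nocturnal": 0}]:
    c1.add_object(obj)

print(c1.counts)         # {'male': 1, 'winged': 3, 'nocturnal': 3}
print(c1.description())  # {'male': 0.25, 'winged': 0.75, 'nocturnal': 0.75}
```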
Note that each parent node (relative superordinate concept) contains all the objects contained by its child nodes (relative subordinate concepts). In Fisher's (1987) description of COBWEB, he indicates that only the total attribute counts (not conditional probabilities, and not object lists) are stored at the nodes. Any probabilities are computed from the attribute counts as needed. The COBWEB language. The description language of COBWEB is a "language" only in a loose sense, because being fully probabilistic it is capable of describing any concept. However, if constraints are placed on the probability ranges which concepts may represent, then a stronger language is obtained. For example, we might permit only concepts wherein at least one probability differs from 0.5 by more than formula_9. Under this constraint, with formula_10, a concept such as codice_7 could not be constructed by the learner; however a concept such as codice_8 would be accessible because at least one probability differs from 0.5 by more than formula_9. Thus, under constraints such as these, we obtain something like a traditional concept language. In the limiting case where formula_11 for every feature, and thus every probability in a concept must be 0 or 1, the result is a feature language base on conjunction; that is, every concept that can be represented can then be described as a conjunction of features (and their negations), and concepts that cannot be described in this way cannot be represented. Evaluation criterion. In Fisher's (1987) description of COBWEB, the measure he uses to evaluate the quality of the hierarchy is Gluck and Corter's (1985) category utility (CU) measure, which he re-derives in his paper. The motivation for the measure is highly similar to the "information gain" measure introduced by Quinlan for decision tree learning. It has previously been shown that the CU for feature-based classification is the same as the mutual information between the feature variables and the class variable (Gluck &amp; Corter, 1985; Corter &amp; Gluck, 1992), and since this measure is much better known, we proceed here with mutual information as the measure of category "goodness". What we wish to evaluate is the overall utility of grouping the objects into a particular hierarchical categorization structure. Given a set of possible classification structures, we need to determine whether one is better than another. References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
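As an illustration of the evaluation criterion discussed above, the following Python sketch (not from the article) computes one standard form of Gluck and Corter's category utility for a partition of objects described by nominal features; the toy data are invented, and the exact normalization conventions vary between papers.

```python
from collections import Counter

def category_utility(partition, n_features):
    """Category utility of a partition (a list of clusters of feature tuples).

    CU = (1/K) * sum_k P(C_k) * [ sum_{i,v} P(A_i=v | C_k)^2 - sum_{i,v} P(A_i=v)^2 ]
    This is one common form of the Gluck-Corter measure used by COBWEB."""
    all_objects = [obj for cluster in partition for obj in cluster]
    n_total = len(all_objects)

    def squared_attribute_probs(objects):
        total = 0.0
        for i in range(n_features):
            counts = Counter(obj[i] for obj in objects)
            total += sum((c / len(objects)) ** 2 for c in counts.values())
        return total

    baseline = squared_attribute_probs(all_objects)
    cu = 0.0
    for cluster in partition:
        p_cluster = len(cluster) / n_total
        cu += p_cluster * (squared_attribute_probs(cluster) - baseline)
    return cu / len(partition)

# Toy data: objects described by three binary attributes, e.g. (male, winged, nocturnal).
birds_and_bats = [(1, 1, 1), (0, 1, 1), (0, 1, 1), (1, 1, 0)]
ground_animals = [(1, 0, 0), (0, 0, 0), (1, 0, 0)]

good_split = [birds_and_bats, ground_animals]
bad_split = [birds_and_bats[:2] + ground_animals[:1],
             birds_and_bats[2:] + ground_animals[1:]]
print(category_utility(good_split, 3) > category_utility(bad_split, 3))  # True
```

The split that keeps feature values predictable within clusters scores higher, which is exactly the behaviour the hierarchy-building heuristic rewards.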
[ { "math_id": 0, "text": "C_1" }, { "math_id": 1, "text": "1/4 = 0.25" }, { "math_id": 2, "text": " 3/4 = 0.75" }, { "math_id": 3, "text": "p(x|C_1) = (0.25, 0.75, 0.75)" }, { "math_id": 4, "text": "C_0" }, { "math_id": 5, "text": "C_2" }, { "math_id": 6, "text": "C_3" }, { "math_id": 7, "text": "C_4" }, { "math_id": 8, "text": "C_5" }, { "math_id": 9, "text": "\\alpha" }, { "math_id": 10, "text": "\\alpha=0.3" }, { "math_id": 11, "text": "\\alpha=0.5" } ]
https://en.wikipedia.org/wiki?curid=6979740
69798178
Cutwidth
Property in graph theory In graph theory, the cutwidth of an undirected graph is the smallest integer formula_0 with the following property: there is an ordering of the vertices of the graph, such that every cut obtained by partitioning the vertices into earlier and later subsets of the ordering is crossed by at most formula_0 edges. That is, if the vertices are numbered formula_1, then for every formula_2, the number of edges formula_3 with formula_4 and formula_5 is at most formula_0. The cutwidth of a graph has also been called its folding number. Both the vertex ordering that produces the cutwidth, and the problem of computing this ordering and the cutwidth, have been called minimum cut linear arrangement. Relation to other parameters. Cutwidth is related to several other width parameters of graphs. In particular, it is always at least as large as the treewidth or pathwidth of the same graph. However, it is at most the pathwidth multiplied by formula_6, or the treewidth multiplied by formula_7 where formula_8 is the maximum degree and formula_9 is the number of vertices. If a family of graphs has bounded maximum degree, and its graphs do not contain subdivisions of complete binary trees of unbounded size, then the graphs in the family have bounded cutwidth. In subcubic graphs (graphs of maximum degree three), the cutwidth equals the pathwidth plus one. The cutwidth is greater than or equal to the minimum bisection number of any graph. This is minimum possible number of edges from one side to another for a partition of the vertices into two subsets of equal size (or as near equal as possible). Any linear layout of a graph, achieving its optimal cutwidth, also provides a bisection with the same number of edges, obtained by partitioning the layout into its first and second halves. The cutwidth is less than or equal to the maximum degree multiplied by the graph bandwidth, the maximum number of steps separating the endpoints of any edge in a linear arrangement chosen to minimize this quantity. Unlike bandwidth, cutwidth is unchanged when edges are subdivided into paths of more than one edge. It is closely related to the "topological bandwidth", the minimum bandwidth that can be obtained by subdividing edges of a given graph. In particular, for any tree it is sandwiched between the topological bandwidth formula_10 and a slightly larger number, formula_11. Another parameter, defined similarly to cutwidth in terms of numbers of edges spanning cuts in a graph, is the carving width. However, instead of using a linear ordering of vertices and a linear sequence of cuts, as in cutwidth, carving width uses cuts derived from a hierarchical clustering of vertices, making it more closely related to treewidth or branchwidth and less similar to the other width parameters involving linear orderings such as pathwidth or bandwidth. Cutwidth can be used to provide a lower bound on another parameter, the crossing number, arising in the study of graph drawings. The crossing number of a graph is the minimum number of pairs of edges that intersect, in any drawing of the graph in the plane where each vertex touches only the edges for which it is an endpoint. In graphs of bounded degree, the crossing number is always at least proportional to the square of the cutwidth. 
A more precise bound, applying to graphs where the degrees are not bounded, is: formula_12 Here, the correction term, proportional to the sum of squared degrees, is necessary to account for the existence of planar graphs whose squared cutwidth is proportional to this quantity but whose crossing number is zero. In another style of graph drawing, book embedding, vertices are arranged on a line and edges are arranged without crossings into separate half-plane "pages" meeting at this line. The "page width" of a book embedding has been defined as the largest cutwidth of any of the pages, using the same vertex ordering. Computational complexity. The cutwidth can be found, and a linear layout of optimal width constructed, in time formula_13 for a tree of formula_9 vertices. For more general graphs, it is NP-hard. It remains NP-hard even for planar graphs of maximum degree three, and a weighted version of the problem (minimizing the weight of edges across any cut of a linear arrangement) is NP-hard even when the input is a tree. Cutwidth is one of many problems of optimal linear arrangement that can be solved exactly in time formula_14 by the Held-Karp algorithm, using dynamic programming. A faster quantum algorithm with time formula_15 is also known. Additionally, it is fixed-parameter tractable: for any fixed value of formula_16, it is possible to test whether a graph has cutwidth at most formula_16, and if so find an optimal vertex ordering for it, in linear time. More precisely, in terms of both formula_9 and formula_16, the running time of this algorithm is formula_17. An alternative parameterized algorithm, more suitable for graphs in which a small number of vertices have high degree (making the cutwidth large) instead solves the problem in time polynomial in formula_9 when the graph has a vertex cover of bounded size, by transforming it into an integer programming problem with few variables and polynomial bounds on the variable values. It remains open whether the problem can be solved efficiently for graphs of bounded treewidth, which would subsume both of the parameterizations by cutwidth and vertex cover number. Cutwidth has a polynomial-time approximation scheme for dense graphs, but for graphs that might not be dense the best approximation ratio known is formula_18. This comes from a method of Tom Leighton and Satish Rao for reducing approximate cutwidth to minimum bisection number, losing a factor of formula_19 in the approximation ratio, by using recursive bisection to order the vertices. Combining this recursive bisection method with another method of Sanjeev Arora, Rao, and Umesh Vazirani for approximating the minimum bisection number, gives the stated approximation ratio. Under the small set expansion hypothesis, it is not possible to achieve a constant approximation ratio. Applications. An early motivating application for cutwidth involved channel routing in VLSI design, in which components arranged along the top and bottom of a rectangular region of an integrated circuit should be connected by conductors that connect pairs pins attached to the components. If the components are free to be arranged into different left-to-right orders, but the pins of each component must remain contiguous, then this can be translated into a problem of choosing a linear arrangement of a graph with a vertex for each component and an edge for each pin-to-pin connection. The cutwidth of the graph controls the number of channels needed to route the circuit. 
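To make the definition and the exponential-time exact algorithm mentioned in the computational-complexity discussion above concrete, here is a small Python sketch (not taken from the cited papers): a brute-force search over all vertex orderings, and a subset dynamic programme in the Held-Karp style; the two agree on a small example. The DP shown is a plain O(2^n * m) variant rather than an optimized implementation.

```python
from itertools import permutations

def cutwidth_bruteforce(n, edges):
    """Try every ordering of vertices 0..n-1 and return the smallest possible
    value of the largest prefix cut (the definition of cutwidth)."""
    best = float("inf")
    for order in permutations(range(n)):
        pos = {v: i for i, v in enumerate(order)}
        worst = max(sum(1 for u, v in edges
                        if min(pos[u], pos[v]) < ell <= max(pos[u], pos[v]))
                    for ell in range(1, n))
        best = min(best, worst)
    return best

def cutwidth_subset_dp(n, edges):
    """Held-Karp-style dynamic programme over subsets, O(2^n * m).

    dp[S] = best achievable maximum cut over orderings whose first |S| vertices
    are exactly S; the cut after position |S| depends only on S, so
    dp[S] = max(cut(S), min over v in S of dp[S - {v}])."""
    def cut(mask):
        return sum(1 for u, v in edges if ((mask >> u) & 1) != ((mask >> v) & 1))

    full = (1 << n) - 1
    dp = [0] * (full + 1)
    for mask in range(1, full + 1):
        best_prev = min(dp[mask & ~(1 << v)] for v in range(n) if (mask >> v) & 1)
        dp[mask] = max(cut(mask), best_prev)
    return dp[full]

# A 4-cycle has cutwidth 2; both methods agree.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert cutwidth_bruteforce(4, cycle) == cutwidth_subset_dp(4, cycle) == 2
print(cutwidth_subset_dp(4, cycle))
```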
In protein engineering, an assumption that an associated graph has bounded cutwidth has been used to speed up the search for mRNA sequences that simultaneously code for a given protein sequence and fold into a given secondary structure. A weighted variant of the problem, applying to directed acyclic graphs and only allowing linear orderings consistent with the orientation of the graph edges, has been applied to schedule a sequence of computational tasks in a way that minimizes the maximum amount of memory required in the schedule, both for the tasks themselves and to maintain the data used for task-to-task communication. In database theory, the NP-hardness of the cutwidth problem has been used to show that it is also NP-hard to schedule the transfer of blocks of data between a disk and main memory when performing a join, in order to avoid repeated transfers of the same block while fitting the computation within a limited amount of main memory. In graph drawing, as well as being applied in the lower bound for the crossing number, cutwidth has been applied in the study of a specific type of three-dimensional graph drawing, in which the edges are represented as disjoint polygonal chains with at most one bend, the vertices are placed on a line, and all vertices and bend points must have integer coordinates. For drawings of this type, the minimum volume of a bounding box of the drawing must be at least proportional to the cutwidth multiplied by the number of vertices. There always exists a drawing with this volume, with the vertices placed on an axis-parallel line. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "v_1,v_2,\\dots v_n" }, { "math_id": 2, "text": "\\ell=1,2,\\dots n-1" }, { "math_id": 3, "text": "v_iv_j" }, { "math_id": 4, "text": "i\\le\\ell" }, { "math_id": 5, "text": "j>\\ell" }, { "math_id": 6, "text": "O(\\Delta)" }, { "math_id": 7, "text": "O(\\Delta\\log n)" }, { "math_id": 8, "text": "\\Delta" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "b^*" }, { "math_id": 11, "text": "b^*+\\log_2 b^*+2" }, { "math_id": 12, "text": "\\operatorname{crossings}(G)\\ge\\frac{1}{1176}\\operatorname{cutwidth}(G)^2-\\sum_{v\\in V(G)}\\left(\\frac{\\deg(v)}4\\right)^2." }, { "math_id": 13, "text": "O(n\\log n)" }, { "math_id": 14, "text": "O(n2^n)" }, { "math_id": 15, "text": "O(1.817^n)" }, { "math_id": 16, "text": "c" }, { "math_id": 17, "text": "2^{O(c^2)}n" }, { "math_id": 18, "text": "O\\bigl((\\log n)^{3/2}\\bigr)" }, { "math_id": 19, "text": "\\log_2 n" } ]
https://en.wikipedia.org/wiki?curid=69798178
698010
Smash product
In topology, a branch of mathematics, the smash product of two pointed spaces (i.e. topological spaces with distinguished basepoints) ("X," "x"0) and ("Y", "y"0) is the quotient of the product space "X" × "Y" under the identifications ("x", "y"0) ~ ("x"0, "y") for all x in X and y in Y. The smash product is itself a pointed space, with basepoint being the equivalence class of ("x"0, "y"0). The smash product is usually denoted "X" ∧ "Y" or "X" ⨳ "Y". The smash product depends on the choice of basepoints (unless both "X" and "Y" are homogeneous). One can think of X and Y as sitting inside "X" × "Y" as the subspaces "X" × {"y"0} and{"x"0} × "Y". These subspaces intersect at a single point: ("x"0, "y"0), the basepoint of "X" × "Y". So the union of these subspaces can be identified with the wedge sum formula_0. In particular, {"x"0} × "Y" in "X" × "Y" is identified with Y in formula_1, ditto for "X" × {"y"0} and X. In formula_1, subspaces X and Y intersect in the single point formula_2. The smash product is then the quotient formula_3 The smash product shows up in homotopy theory, a branch of algebraic topology. In homotopy theory, one often works with a different category of spaces than the category of all topological spaces. In some of these categories the definition of the smash product must be modified slightly. For example, the smash product of two CW complexes is a CW complex if one uses the product of CW complexes in the definition rather than the product topology. Similar modifications are necessary in other categories. As a symmetric monoidal product. For any pointed spaces "X", "Y", and "Z" in an appropriate "convenient" category (e.g., that of compactly generated spaces), there are natural (basepoint preserving) homeomorphisms formula_6 However, for the naive category of pointed spaces, this fails, as shown by the counterexample formula_7 and formula_8 found by Dieter Puppe. A proof due to Kathleen Lewis that Puppe's counterexample is indeed a counterexample can be found in the book of Johann Sigurdsson and J. Peter May. These isomorphisms make the appropriate category of pointed spaces into a symmetric monoidal category with the smash product as the monoidal product and the pointed 0-sphere (a two-point discrete space) as the unit object. One can therefore think of the smash product as a kind of tensor product in an appropriate category of pointed spaces. Adjoint relationship. Adjoint functors make the analogy between the tensor product and the smash product more precise. In the category of "R"-modules over a commutative ring "R", the tensor functor formula_9 is left adjoint to the internal Hom functor formula_10, so that formula_11 In the category of pointed spaces, the smash product plays the role of the tensor product in this formula: if formula_12 are compact Hausdorff then we have an adjunction formula_13 where formula_14 denotes continuous maps that send basepoint to basepoint, and formula_15 carries the compact-open topology. In particular, taking formula_16 to be the unit circle formula_17, we see that the reduced suspension functor formula_18 is left adjoint to the loop space functor formula_19: formula_20 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
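The quotient construction can be made concrete for finite pointed sets, regarded as discrete pointed spaces: form the product, collapse the wedge X × {y0} ∪ {x0} × Y to a single basepoint, and keep the remaining pairs. The Python sketch below (not from the article) is only an illustration of the identification being made, not of any topological subtleties; the element names are arbitrary.

```python
def smash_product(X, x0, Y, y0):
    """Smash product of two finite pointed sets (discrete pointed spaces).

    Returns (elements, basepoint): every pair lying in the wedge
    X x {y0} union {x0} x Y is identified to a single basepoint '*',
    and all other pairs survive unchanged."""
    assert x0 in X and y0 in Y
    basepoint = "*"
    elements = {basepoint}
    for x in X:
        for y in Y:
            if x == x0 or y == y0:       # the pair lies in the wedge...
                continue                  # ...so it is glued to the basepoint
            elements.add((x, y))
    return elements, basepoint

# Pointed two-element sets model the 0-sphere: S^0 smash S^0 = S^0.
S0 = ({"+", "-"}, "-")
elems, base = smash_product(*S0, *S0)
print(elems)            # {'*', ('+', '+')}  -- again a two-point pointed set

# In general |X smash Y| = (|X| - 1) * (|Y| - 1) + 1.
X = ({"a", "b", "c", "x0"}, "x0")
Y = ({"p", "q", "y0"}, "y0")
print(len(smash_product(*X, *Y)[0]))   # (4-1)*(3-1) + 1 = 7
```

The count (|X| − 1)(|Y| − 1) + 1 reflects that only pairs avoiding both basepoints survive the identification, which is also why the two-point pointed set (the 0-sphere) acts as the unit for the smash product.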
[ { "math_id": 0, "text": "X \\vee Y = (X \\amalg Y)\\;/{\\sim}" }, { "math_id": 1, "text": "X \\vee Y" }, { "math_id": 2, "text": "x_0 \\sim y_0" }, { "math_id": 3, "text": "X \\wedge Y = (X \\times Y) / (X \\vee Y)." }, { "math_id": 4, "text": " \\Sigma X \\cong X \\wedge S^1. " }, { "math_id": 5, "text": " \\Sigma^k X \\cong X \\wedge S^k. " }, { "math_id": 6, "text": "\\begin{align}\nX \\wedge Y &\\cong Y\\wedge X, \\\\\n(X\\wedge Y)\\wedge Z &\\cong X \\wedge (Y\\wedge Z).\n\\end{align}" }, { "math_id": 7, "text": "X=Y=\\mathbb{Q}" }, { "math_id": 8, "text": "Z=\\mathbb{N}" }, { "math_id": 9, "text": "(- \\otimes_R A)" }, { "math_id": 10, "text": "\\mathrm{Hom}(A,-)" }, { "math_id": 11, "text": "\\mathrm{Hom}(X\\otimes A,Y) \\cong \\mathrm{Hom}(X,\\mathrm{Hom}(A,Y))." }, { "math_id": 12, "text": "A, X" }, { "math_id": 13, "text": "\\mathrm{Maps_*}(X\\wedge A,Y) \\cong \\mathrm{Maps_*}(X,\\mathrm{Maps_*}(A,Y))" }, { "math_id": 14, "text": "\\operatorname{Maps_*}" }, { "math_id": 15, "text": "\\mathrm{Maps_*}(A,Y)" }, { "math_id": 16, "text": "A" }, { "math_id": 17, "text": "S^1" }, { "math_id": 18, "text": "\\Sigma" }, { "math_id": 19, "text": "\\Omega" }, { "math_id": 20, "text": "\\mathrm{Maps_*}(\\Sigma X,Y) \\cong \\mathrm{Maps_*}(X,\\Omega Y)." } ]
https://en.wikipedia.org/wiki?curid=698010
69801895
Weak component
Partition of vertices of a directed graph In graph theory, the weak components of a directed graph partition the vertices of the graph into subsets that are totally ordered by reachability. They form the finest partition of the set of vertices that is totally ordered in this way. Definition. The weak components were defined in a 1972 paper by Ronald Graham, Donald Knuth, and (posthumously) Theodore Motzkin, by analogy to the strongly connected components of a directed graph, which form the finest possible partition of the graph's vertices into subsets that are partially ordered by reachability. Instead, they defined the weak components to be the finest partition of the vertices into subsets that are totally ordered by reachability. In more detail, the weak components may be defined through a combination of four symmetric relations on the vertices of any directed graph, denoted here as formula_0, formula_1, formula_2, and formula_3. For any two vertices formula_4 and formula_5 of the graph, formula_6 holds whenever formula_4 and formula_5 can each reach the other by a directed path; formula_7 holds whenever neither of the two vertices can reach the other; formula_8 holds whenever formula_6 or formula_7 (or both) holds; and formula_9 holds whenever there is a chain formula_10 of vertices beginning at formula_4 and ending at formula_5 in which each consecutive pair is related by formula_2, so that formula_3 is the transitive closure of formula_2. Then formula_3 is an equivalence relation: every vertex is related to itself by formula_3 (because it can reach itself in both directions by paths of length zero), any two vertices that are related by formula_3 can be swapped for each other without changing this relation (because formula_3 is built out of the symmetric relations formula_0 and formula_1), and formula_3 is a transitive relation (because it is a transitive closure). As with any equivalence relation, it can be used to partition the vertices of the graph into equivalence classes, subsets of the vertices such that two vertices are related by formula_3 if and only if they belong to the same equivalence class. These equivalence classes are the weak components of the given graph. The original definition by Graham, Knuth, and Motzkin is equivalent but formulated somewhat differently. Given a directed graph formula_11, they first construct another graph formula_12 as the complement graph of the transitive closure of formula_11. The edges in formula_12 represent non-paths, pairs of vertices that are not connected by a path in formula_11. Then, two vertices belong to the same weak component when either they belong to the same strongly connected component of formula_11 or of formula_12. As Graham, Knuth, and Motzkin show, this condition defines an equivalence relation, the same one defined above as formula_3. Corresponding to these definitions, a directed graph is called weakly connected if it has exactly one weak component. This means that its vertices cannot be partitioned into two subsets, such that all of the vertices in the first subset can reach all of the vertices in the second subset, but none of the vertices in the second subset can reach any of the vertices in the first subset. This notion differs from other notions of weak connectivity in the literature, such as connectivity and components in the underlying undirected graph, for which Knuth suggests the alternative terminology undirected components. Properties. If formula_13 and formula_14 are two weak components of a directed graph, then either all vertices in formula_13 can reach all vertices in formula_14 by paths in the graph, or all vertices in formula_14 can reach all vertices in formula_13. However, there cannot exist reachability relations in both directions between these two components. Therefore, we can define an ordering on the weak components, according to which formula_15 when all vertices in formula_13 can reach all vertices in formula_14. By definition, formula_16.
This is an asymmetric relation (two elements can only be related in one direction, not the other) and it inherits the property of being a transitive relation from the transitivity of reachability. Therefore, it defines a total ordering on the weak components. It is the finest possible partition of the vertices into a totally ordered set of vertices consistent with reachability. This ordering on the weak components can alternatively be interpreted as a weak ordering on the vertices themselves, with the property that when formula_17 in the weak ordering, there necessarily exists a path from formula_4 to formula_5, but not from formula_5 to formula_4. However, this is not a complete characterization of this weak ordering, because two vertices formula_4 and formula_5 could have this same reachability ordering while belonging to the same weak component as each other. Every weak component is a union of strongly connected components. If the strongly connected components of any given graph are contracted to single vertices, producing a directed acyclic graph (the condensation of the given graph), and then this condensation is topologically sorted, then each weak component necessarily appears as a consecutive subsequence of the topological order of the strong components. Algorithms. An algorithm for computing the weak components of a given directed graph in linear time was described by , and subsequently simplified by and . As Tarjan observes, Tarjan's strongly connected components algorithm based on depth-first search will output the strongly connected components in (the reverse of) a topologically sorted order. The algorithm for weak components generates the strongly connected components in this order, and maintains a partition of the components that have been generated so far into the weak components of their induced subgraph. After all components are generated, this partition will describe the weak components of the whole graph. It is convenient to maintain the current partition into weak components in a stack, with each weak component maintaining additionally a list of its sources, strongly connected components that have no incoming edges from other strongly connected components in the same weak component, with the most recently generated source first. Each newly generated strongly connected component may form a new weak component on its own, or may end up merged with some of the previously constructed weak components near the top of the stack, the ones for which it cannot reach all sources. Thus, the algorithm performs the following steps: Each test for whether any edges from formula_18 hit a weak component can be performed in constant time once we find an edge from formula_18 to the most recently generated earlier strongly connected component, by comparing the target component of that edge to the first source of the second-to-top component on the stack. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
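A direct, quadratic-time way to compute the weak components from the definition (not the linear-time stack-based algorithm sketched above) is to compute reachability between all pairs of vertices and then take the transitive closure of the relation that holds between two vertices when they are either mutually reachable or mutually unreachable. The following Python sketch is an illustration of the definition only; it is not taken from the cited papers.

```python
from collections import defaultdict, deque

def weak_components(n, edges):
    """Weak components of a digraph on vertices 0..n-1 (quadratic time, from the definition)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)

    # reach[s] = set of vertices reachable from s (including s itself), via BFS.
    reach = []
    for s in range(n):
        seen = {s}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        reach.append(seen)

    # Union-find over the relation "mutually reachable or mutually unreachable";
    # its transitive closure is exactly the weak-component partition.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)

    for u in range(n):
        for v in range(u + 1, n):
            forward = v in reach[u]
            backward = u in reach[v]
            if forward == backward:      # both directions, or neither direction
                union(u, v)

    groups = defaultdict(list)
    for v in range(n):
        groups[find(v)].append(v)
    return list(groups.values())

# Example: 0 and 1 are strongly connected, 1 -> 2 and 2 -> 3.
# The weak components are {0, 1}, {2}, {3}, totally ordered by reachability.
print(weak_components(4, [(0, 1), (1, 0), (1, 2), (2, 3)]))
```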
[ { "math_id": 0, "text": "\\Leftrightarrow" }, { "math_id": 1, "text": "\\parallel" }, { "math_id": 2, "text": "\\approx" }, { "math_id": 3, "text": "\\asymp" }, { "math_id": 4, "text": "u" }, { "math_id": 5, "text": "v" }, { "math_id": 6, "text": "u\\Leftrightarrow v" }, { "math_id": 7, "text": "u\\parallel v" }, { "math_id": 8, "text": "u\\approx v" }, { "math_id": 9, "text": "u\\asymp v" }, { "math_id": 10, "text": "u\\approx\\cdots\\approx v" }, { "math_id": 11, "text": "G" }, { "math_id": 12, "text": "\\hat G" }, { "math_id": 13, "text": "X" }, { "math_id": 14, "text": "Y" }, { "math_id": 15, "text": "X<Y" }, { "math_id": 16, "text": "X\\nless X" }, { "math_id": 17, "text": "u<v" }, { "math_id": 18, "text": "S" }, { "math_id": 19, "text": "W" } ]
https://en.wikipedia.org/wiki?curid=69801895
6980269
Wave-making resistance
Energy of moving water away from a hull Wave-making resistance is a form of drag that affects surface watercraft, such as boats and ships, and reflects the energy required to push the water out of the way of the hull. This energy goes into creating the wave. Physics. For small displacement hulls, such as sailboats or rowboats, wave-making resistance is the major source of the marine vessel drag. A salient property of water waves is dispersiveness; i.e., the greater the wavelength, the faster it moves. Waves generated by a ship are affected by her geometry and speed, and most of the energy given by the ship for making waves is transferred to water through the bow and stern parts. Simply speaking, these two wave systems, i.e., bow and stern waves, interact with each other, and the resulting waves are responsible for the resistance. If the resulting wave is large, it carries much energy away from the ship, delivering it to the shore or wherever else the wave ends up or just dissipating it in the water, and that energy must be supplied by the ship's propulsion (or momentum), so that the ship experiences it as drag. Conversely, if the resulting wave is small, the drag experienced is small. The amount and direction (additive or subtractive) of the interference depends upon the phase difference between the bow and stern waves (which have the same wavelength and phase speed), and that is a function of the length of the ship at the waterline. For a given ship speed, the phase difference between the bow wave and stern wave is proportional to the length of the ship at the waterline. For example, if the ship takes three seconds to travel its own length, then at some point the ship passes, a stern wave is initiated three seconds after a bow wave, which implies a specific phase difference between those two waves. Thus, the waterline length of the ship directly affects the magnitude of the wave-making resistance. For a given waterline length, the phase difference depends upon the phase speed and wavelength of the waves, and those depend directly upon the speed of the ship. For a deepwater wave, the phase speed is the same as the propagation speed and is proportional to the square root of the wavelength. That wavelength is dependent upon the speed of the ship. Thus, the magnitude of the wave-making resistance is a function of the speed of the ship in relation to its length at the waterline. A simple way of considering wave-making resistance is to look at the hull in relation to bow and stern waves. If the length of a ship is half the length of the waves generated, the resulting wave will be very small due to cancellation, and if the length is the same as the wavelength, the wave will be large due to enhancement. The phase speed formula_0 of waves is given by the following formula: formula_1 where formula_2 is the length of the wave and formula_3 the gravitational acceleration. Substituting in the appropriate value for formula_3 yields the equation: formula_4 or, in metric units: formula_5 These values, 1.34, 2.5 and very easy 6, are often used in the hull speed rule of thumb used to compare potential speeds of displacement hulls, and this relationship is also fundamental to the Froude number, used in the comparison of different scales of watercraft. When the vessel exceeds a "speed–length ratio" (speed in knots divided by square root of length in feet) of 0.94, it starts to outrun most of its bow wave, the hull actually settles slightly in the water as it is now only supported by two wave peaks. 
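The unit conversions behind these rules of thumb are easy to check numerically. The short Python sketch below (not part of the original article) evaluates the deep-water phase-speed formula and reproduces the 1.34 (imperial) and roughly 2.43 to 2.5 (metric) coefficients quoted above; the example waterline length is arbitrary.

```python
import math

G = 9.80665               # gravitational acceleration, m/s^2
MS_TO_KNOTS = 3600 / 1852.0
FT_TO_M = 0.3048

def phase_speed(wavelength_m):
    """Deep-water phase speed c = sqrt(g * L / (2*pi)) in m/s."""
    return math.sqrt(G * wavelength_m / (2 * math.pi))

def hull_speed_knots(waterline_m):
    """'Hull speed': phase speed of a wave whose length equals the waterline length."""
    return phase_speed(waterline_m) * MS_TO_KNOTS

# Coefficients of the rules of thumb quoted in the text.
print(hull_speed_knots(1.0))       # ~2.43 knots per sqrt(metre)
print(hull_speed_knots(FT_TO_M))   # ~1.34 knots per sqrt(foot)

# Example: a boat with a 9 m (about 29.5 ft) waterline.
print(hull_speed_knots(9.0))       # ~7.3 knots
```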
As the vessel exceeds a speed-length ratio of 1.34, the wavelength is now longer than the hull, and the stern is no longer supported by the wake, causing the stern to squat, and the bow to rise. The hull is now starting to climb its own bow wave, and resistance begins to increase at a very high rate. While it is possible to drive a displacement hull faster than a speed-length ratio of 1.34, it is prohibitively expensive to do so. Most large vessels operate at speed-length ratios well below that level, at speed-length ratios of under 1.0. Ways of reducing wave-making resistance. Since wave-making resistance is based on the energy required to push the water out of the way of the hull, there are a number of ways that this can be minimized. Reduced displacement. Reducing the displacement of the craft, by eliminating excess weight, is the most straightforward way to reduce the wave making drag. Another way is to shape the hull so as to generate lift as it moves through the water. Semi-displacement hulls and planing hulls do this, and they are able to break through the "hull speed" barrier and transition into a realm where drag increases at a much lower rate. The disadvantage of this is that planing is only practical on smaller vessels, with high power-to-weight ratios, such as motorboats. It is not a practical solution for a large vessel such as a supertanker. Fine entry. A hull with a blunt bow has to push the water away very quickly to pass through, and this high acceleration requires large amounts of energy. By using a "fine" bow, with a sharper angle that pushes the water out of the way more gradually, the amount of energy required to displace the water will be less. A modern variation is the wave-piercing design. The total amount of water to be displaced by a moving hull, and thus causing wave making drag, is the cross sectional area of the hull times distance the hull travels, and will not remain the same when prismatic coefficient is increased for the same lwl and same displacement and same speed. Bulbous bow. A special type of bow, called a "bulbous bow", is often used on large power vessels to reduce wave-making drag. The bulb alters the waves generated by the hull, by changing the pressure distribution ahead of the bow. Because of the nature of its destructive interference with the bow wave, there is a limited range of vessel speeds over which it is effective. A bulbous bow must be properly designed to mitigate the wave-making resistance of a particular hull over a particular range of speeds. A bulb that works for one vessel's hull shape and one range of speeds could be detrimental to a different hull shape or a different speed range. Proper design and knowledge of a ship's intended operating speeds and conditions is therefore necessary when designing a bulbous bow. Hull form filtering. If the hull is designed to operate at speeds substantially lower than hull speed then it is possible to refine the hull shape along its length to reduce wave resistance at one speed. This is practical only where the block coefficient of the hull is not a significant issue. Semi-displacement and planing hulls. Since semi-displacement and planing hulls generate a significant amount of lift in operation, they are capable of breaking the barrier of the wave propagation speed and operating in realms of much lower drag, but to do this they must be capable of first pushing past that speed, which requires significant power. 
This stage is called the transition stage and at this stage the rate of wave-making resistance is the highest. Once the hull gets "over the hump" of the bow wave, the rate of increase of the wave drag will start to reduce significantly. The planing hull will rise up clearing its stern off the water and its trim will be high. Underwater part of the planing hull will be small during the planing regime. A qualitative interpretation of the wave resistance plot is that a displacement hull resonates with a wave that has a crest near its bow and a trough near its stern, because the water is pushed away at the bow and pulled back at the stern. A planing hull simply pushed down on the water under it, so it resonates with a wave that has a trough under it. If it has about twice the length it will therefore have only square root (2) or 1.4 times the speed. In practice most planing hulls usually move much faster than that. At four times hull speed the wavelength is already 16 times longer than the hull. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "c" }, { "math_id": 1, "text": "c = \\sqrt {\\frac {g}{2 \\pi} l }" }, { "math_id": 2, "text": "l" }, { "math_id": 3, "text": "g" }, { "math_id": 4, "text": " \\mbox{c in knots} \\approx 1.341 \\times \\sqrt{\\mbox{length in ft}} \\approx \\frac {4}{3} \\times \\sqrt{\\mbox{length in ft}}" }, { "math_id": 5, "text": " \\mbox{c in knots} \\approx 2.429 \\times \\sqrt{\\mbox{length in m}} \\approx \\sqrt{6 \\times \\mbox{length in m}} \\approx 2.5 \\times \\sqrt{\\mbox{length in m}}" } ]
https://en.wikipedia.org/wiki?curid=6980269
69803700
Phonon polariton
Quasiparticle formed from phonon and photon coupling In condensed matter physics, a phonon polariton is a type of quasiparticle that can form in a diatomic ionic crystal due to coupling of transverse optical phonons and photons. They are a particular type of polariton, and they behave like bosons. Phonon polaritons occur in the region where the wavelength and energy of phonons and photons are similar, in accordance with the avoided crossing principle. Phonon polariton spectra have traditionally been studied using Raman spectroscopy. Recent advances in (scattering-type) scanning near-field optical microscopy ((s-)SNOM) and atomic force microscopy (AFM) have made it possible to observe the polaritons in a more direct way. Theory. A phonon polariton is a type of quasiparticle that can form in some crystals due to the coupling of photons and lattice vibrations. They have properties of both light and sound waves, and can travel at very slow speeds in the material. They are useful for manipulating electromagnetic fields at the nanoscale and enhancing optical phenomena. Phonon polaritons result only from the coupling of transverse optical phonons; this is due to the particular form of the dispersion relations of the phonon and photon and their interaction. Photons consist of electromagnetic waves, which are always transverse. Therefore, they can only couple with transverse phonons in crystals. Near formula_0 the dispersion relation of an acoustic phonon can be approximated as being linear, with a particular gradient giving a dispersion relation of the form formula_1, with formula_2 the speed of the wave, formula_3 the angular frequency and "k" the absolute value of the wave vector formula_4. The dispersion relation of photons is also linear, of the form formula_5, with "c" being the speed of light in vacuum. The difference lies in the magnitudes of their speeds: the speed of photons is many times larger than the speed of the acoustic phonons. The dispersion relations will therefore never cross each other, resulting in a lack of coupling. The dispersion relations touch at formula_6, but since the waves have no energy there, no coupling will occur. Optical phonons, by contrast, have a non-zero angular frequency at formula_6 and have a negative slope, which is also much smaller in magnitude than that of photons. This results in the crossing of the optical phonon branch and the photon dispersion, leading to their coupling and the formation of a phonon polariton. Dispersion relation. The behavior of the phonon polaritons can be described by the dispersion relation. This dispersion relation is most easily derived for diatomic ionic crystals with optical isotropy, for example sodium chloride and zinc sulfide. Since the atoms in the crystal are charged, any lattice vibration which changes the relative distance between the two atoms in the unit cell will change the dielectric polarization of the material. To describe these vibrations, it is useful to introduce the parameter w, which is given by formula_7 where formula_8 denotes the relative displacement of the positive and negative ions in the unit cell, μ the reduced mass of the ion pair, and V the volume of the unit cell. Using this parameter, the behavior of the lattice vibrations for long waves can be described by the following equations: formula_9 formula_10 where formula_11 denotes the second derivative of formula_12 with respect to time, formula_13 the static dielectric constant, formula_14 the high-frequency dielectric constant, formula_15 the eigenfrequency of the transverse optical lattice vibration, formula_16 the electric field, and formula_17 the dielectric polarization. For the full coupling between the phonon and the photon, we need the four Maxwell's equations in matter. Since, macroscopically, the crystal is uncharged and there is no current, the equations can be simplified. A phonon polariton must satisfy all six of these equations.
To find solutions to this set of equations, we write the following trial plane wave solutions for formula_12, formula_16 and formula_17: formula_18 formula_19 formula_20 where formula_21 denotes the wave vector of the plane wave, formula_22 the position, "t" the time, and "ω" the angular frequency. Notice that the wave vector formula_23 should be perpendicular to the electric field and the magnetic field. Solving the resulting equations for ω and k, the magnitude of the wave vector, yields the following dispersion relation, and furthermore an expression for the frequency-dependent dielectric constant: formula_24 with formula_25 the frequency-dependent dielectric constant. The solution of this dispersion relation has two branches, an upper branch and a lower branch (see also the figure). If the slope of the curve is low, the particle is said to behave "phononlike", and if the slope is high the particle behaves "photonlike", owing these names to the slopes of the regular dispersion curves for phonons and photons. The phonon polariton behaves phononlike for low "k" in the upper branch, and for high "k" in the lower branch. Conversely, the polariton behaves photonlike for high "k" in the upper branch, and for low "k" in the lower branch. Limit behaviour of the dispersion relation. The dispersion relation describes the behaviour of the coupling. The coupling of the phonon and the photon is most prominent in the region where the original transverse dispersion relations would have crossed. In the limit of large "k", the solid lines of both branches approach the dotted lines, meaning that the coupling does not have a large impact on the behaviour of the vibrations. Towards the right of the crossing point, the upper branch behaves like a photon. The physical interpretation of this effect is that the frequency becomes too high for the ions to partake in the vibration, causing them to be essentially static. This results in a dispersion relation resembling that of a regular photon in a crystal. The lower branch in this region behaves, because of its low phase velocity compared to that of the photons, like regular transverse lattice vibrations. Lyddane–Sachs–Teller relation. The longitudinal optical phonon frequency formula_26 is defined by the zero of the equation for the dielectric constant. Writing the equation for the dielectric constant in a different way yields: formula_27 Solving the equation formula_28 yields: formula_29 This equation gives the ratio of the frequency of the longitudinal optical phonon (formula_26) to the frequency of the transverse optical phonon (formula_30) in diatomic cubic ionic crystals, and is known as the Lyddane–Sachs–Teller relation. The ratio formula_31 can be found using inelastic neutron scattering experiments. Surface phonon polariton. Surface phonon polaritons (SPhPs) are a specific kind of phonon polariton. They are formed by the coupling of surface optical phonons, rather than bulk phonons, with light, resulting in an electromagnetic surface wave. They are similar to surface plasmon polaritons, although studied to a far lesser extent. Their potential applications are far-ranging, from materials with a negative index of refraction to high-density IR data storage. Another application is the cooling of microelectronics. Phonons are the main source of heat conduction in materials, with optical phonons contributing far less than acoustic phonons. This is because of the relatively low group velocity of optical phonons.
When the thickness of the material decreases, the conductivity via acoustic phonons also decreases, since surface scattering increases. As microelectronics become smaller and smaller, this reduction becomes more and more problematic. Although optical phonons themselves do not have a high thermal conductivity, SPhPs do seem to have one, so they may be an alternative means of cooling these electronic devices. Experimental observation. Most observations of phonon polaritons are made of surface phonon polaritons, since these can be easily probed by Raman spectroscopy or AFM. Raman spectroscopy. As with any Raman experiment, a laser is pointed at the material being studied. If the correct wavelength is chosen, this laser can induce the formation of a polariton on the sample. Looking at the Stokes-shifted emitted radiation and using the conservation of energy and the known laser energy, one can calculate the polariton energy, with which one can construct the dispersion relation. SNOM and AFM. The induction of polaritons is very similar to that in Raman experiments, with a few differences. With the extremely high spatial resolution of SNOM, one can induce polaritons very locally in the sample. This can be done continuously, producing a continuous wave (CW) of polaritons, or with an ultrafast pulse, producing a polariton with a very short temporal footprint. In both cases the polaritons are detected by the tip of the AFM; this signal is then used to calculate the energy of the polariton. One can also perform these experiments near the edge of the sample. This will result in the polaritons being reflected. In the case of CW polaritons, standing waves will be created, which will again be detected by the AFM tip. In the case of the polaritons created by the ultrafast laser, no standing wave will be created. The wave can still interfere with itself the moment it is reflected off the edge. Whether one is observing on the bulk surface or close to an edge, the signal is in temporal form. One can Fourier transform this signal, converting it into the frequency domain, which can be used to obtain the dispersion relation. Polaritonics and real-space imaging. Phonon polaritons also find use in the field of polaritonics, a field between photonics and electronics. In this field phonon polaritons are used for high-speed signal processing and terahertz spectroscopy. The real-space imaging of phonon polaritons was made possible by projecting them onto a CCD camera. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbf{k}=0\n\n" }, { "math_id": 1, "text": "\\omega_{\\rm ac} = v_{\\rm ac}k" }, { "math_id": 2, "text": "v_{\\rm ac} " }, { "math_id": 3, "text": "\\omega_{\\rm ac}" }, { "math_id": 4, "text": "\\mathbf{k}" }, { "math_id": 5, "text": "\\omega_{\\rm p} = ck" }, { "math_id": 6, "text": "\\mathbf{k}=0" }, { "math_id": 7, "text": "\\mathbf{w} = \\mathbf{q} \\sqrt{\\frac{\\mu}{V}}" }, { "math_id": 8, "text": "\\mathbf{q}\n\n" }, { "math_id": 9, "text": "\\ddot{\\mathbf{w}} = -{\\omega_0}^2\\mathbf{w} +\n\n(\\frac{\\epsilon_0-\\epsilon_\\infty}{4\\pi})^{1/2}\\omega_0\\mathbf{E}\n\n" }, { "math_id": 10, "text": "\\mathbf{P} = (\\frac{\\epsilon_0-\\epsilon_\\infty}{4\\pi})^{1/2}\\omega_0\\mathbf{w} +\n\n(\\frac{\\epsilon_\\infty-1}{4\\pi})\\mathbf{E}\n\n" }, { "math_id": 11, "text": "\\ddot{\\mathbf{w}}\n\n" }, { "math_id": 12, "text": "\\mathbf{w}\n\n" }, { "math_id": 13, "text": "\\epsilon_0\n\n" }, { "math_id": 14, "text": "\\epsilon_\\infty\n\n" }, { "math_id": 15, "text": "\\omega_0\n\n" }, { "math_id": 16, "text": "\\mathbf{E}\n\n" }, { "math_id": 17, "text": "\\mathbf{P}\n\n" }, { "math_id": 18, "text": "\\mathbf{w}=\\mathbf{w_0}e^{i(\\mathbf{k}\\cdot \\mathbf{x} - \\omega t)} + \\text{c.c.}\n\n" }, { "math_id": 19, "text": "\\mathbf{P}=\\mathbf{P_0}e^{i(\\mathbf{k}\\cdot \\mathbf{x} - \\omega t)} + \\text{c.c.}\n\n" }, { "math_id": 20, "text": "\\mathbf{E}=\\mathbf{E_0}e^{i(\\mathbf{k}\\cdot \\mathbf{x} - \\omega t)} + \\text{c.c.}\n\n" }, { "math_id": 21, "text": "\\mathbf{k}\n\n" }, { "math_id": 22, "text": "\\mathbf{x}\n\n" }, { "math_id": 23, "text": "\\vec k\n\n" }, { "math_id": 24, "text": "\\frac{k^2c^2}{\\omega^2} = \\epsilon_{\\infty} +\n\n\\frac{\\epsilon_0-\\epsilon_{\\infty}}{{\\omega_0}^2-\\omega^2}{\\omega_0}^2\n\n= \\epsilon(\\omega)\n\n" }, { "math_id": 25, "text": "\\epsilon(\\omega)" }, { "math_id": 26, "text": "\\omega_L" }, { "math_id": 27, "text": "\\epsilon(\\omega)=\n\n\\frac{{\\omega_0}^2\\epsilon_0-\\omega^2\\epsilon_{\\infty}}{{\\omega_0}^2-\\omega^2}\n\n" }, { "math_id": 28, "text": "\\epsilon(\\omega_L)=0" }, { "math_id": 29, "text": "\\frac{{\\omega_L}^2}{{\\omega_0}^2}=\\frac{\\epsilon_0}{\\epsilon_{\\infty}}" }, { "math_id": 30, "text": "\\omega_0" }, { "math_id": 31, "text": "\\omega_L/\\omega_0" } ]
https://en.wikipedia.org/wiki?curid=69803700
69803965
Electroreflectance
Change of the reflectivity of a solid due to an electric field Electroreflectance (also: electromodulated reflectance) is the change of reflectivity of a solid due to the influence of an electric field close to or at the interface of the solid with a liquid. The change in reflectivity is most noticeable at very specific ranges of photon energy, corresponding to the band gaps at critical points of the Brillouin zone. The electroreflectance effect can be used to get a clearer picture of the band structure at critical points where there is a lot of near degeneracy. Normally, the band structure at critical points (points of special interest) has to be measured within a background of absorption from non-critical points at the Brillouin zone boundary. Using a strong electric field, the absorption spectrum can be changed to a spectrum that shows peaks at these critical points, essentially lifting the critical points from the background. The effect was first discovered and understood in semiconductor materials, but later research proved that metals also exhibit electroreflectance. An early observation of the changing optical reflectivity of gold due to a present electric field was attributed to a change in refractive index of the neighboring liquid. However, it was shown that this could not be the case. The new conclusion was that the effect had to come from a modulation of the near-surface layer of the gold. Theoretical description. Effect of the electric field on the electronic structure. When an electric field is applied to a metal or semiconductor, the electronic structure of the material changes. The electrons (and other charged particles) will react to the electric field by repositioning themselves within the material. Electrons in metals can move around relatively easily and are available in abundance. They will move in such a manner that they try to cancel the external electric field. Since no metal is a perfect conductor, no metal will perfectly cancel the external electric field within the material. In semiconductors the electrons that are available will not be able to move around as easily as electrons in metals. This leads to a weaker response and weaker cancellation of the electric field. This has the effect that the electric field can penetrate deeper into a semiconductor than into a metal. The optical reflectivity of a (semi-)conductor is based on the band structure of the material close to or at the surface of the material. For reflectivity to occur, a photon has to have enough energy to overcome the band gap of electrons at the Fermi surface. When the photon energy is smaller than the band gap, the solid will be unable to absorb the energy of the photon by excitation of an electron to a higher energy. This means that the photon will not be re-emitted by the solid and thus not reflected. If the photon energy is large enough to excite an electron from the Fermi surface, the solid will re-emit the photon when the electron decays back to its original energy. This is not exactly the same photon as the incident photon, as it has, for example, the opposite direction to the incident photon. By applying an electric field to the material, the band structure of the solid changes. This change in band structure leads to a different band gap, which in turn leads to a difference in optical reflectivity. The electric field, generally made by creating a potential difference, leads to an altered Hamiltonian.
Using available analytical methods, such as the tight-binding method, it can be calculated that this altered Hamiltonian leads to a different band structure. The combination of electron repositioning and the change in band structure due to an external electric field is called the field effect. Since the electric field has more influence on semiconductors than on metals, semiconductors are easier to use to observe the electroreflectance effect. Near the surface. The optical reflection in (semi-)conductors happens mostly in the surface region of the material. Therefore, the band structure of this region is extra important. Band structure usually covers bulk material. For deviations from this structure, it is conventional to use a band diagram. In a band diagram the x-axis is changed from wavevector k in band structure diagrams to position x in the preferred direction. Usually, this positional direction is normal to the surface plane. For semiconductors specifically, the band diagram near the surface of the material is important. When an electric field is present close to, or in, the material, this will lead to a potential difference within the semiconductor. Dependent on the electric field, the semiconductor will become n- or p-like in the surface region. From now on we will assume that the semiconductor has become n-like at the surface. The bands near the surface will bend under the electrostatic potential of the applied electric field. This bending can be interpreted in the same way as the bending of the valence and conduction bands in a p-n junction when equilibrium has been reached. The result of this bending is a conduction band that comes close to the Fermi level. Therefore, the conduction band will begin to fill with electrons. This change in band structure leads to a change in the optical reflection of the semiconductor. Brillouin zones and optical reflectivity. Optical reflectivity and the Brillouin zones are closely linked, since the band gap energy in the Brillouin zone determines if a photon is absorbed or reflected. If the band gap energy in the Brillouin zone is smaller than the photon energy, the photon will be absorbed, while the photon will be transmitted/reflected if the band gap energy is larger than the photon energy. For example, the photon energies of visible light lie in a range between 1.8 eV (red light) and 3.1 eV (violet light), so if the band gap energy is larger than 3.2 eV, photons of visible light will not be absorbed, but reflected/transmitted: the material appears transparent. This is the case for diamond, quartz etc. But if the band gap is roughly 2.6 eV (this is the case for cadmium sulfide) only blue and violet light is absorbed, while red and green light are transmitted, resulting in a reddish looking material. When an electric field is added to a (semi)conductor, the material will try to cancel this field by inducing an electric field at its surface. Because of this electric field, the optical properties of the surface layer will change, due to the change in size of critical band gaps and hence of their energy. Since the change in band gap only occurs on the surface of the (semi)conductor, optical properties will not change in the core of bulk materials, but for very thin films, where almost all particles can be found at the surface, the optical properties can change: absorption or transmittance of certain wavelengths depending on the strength of the electric field.
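How an altered Hamiltonian translates into a different band gap can be illustrated with a toy model. The sketch below is not the calculation used in real electroreflectance analysis; it is a minimal, assumed one-dimensional two-band tight-binding chain in which a hypothetical field-induced on-site asymmetry ("delta") directly sets the gap at the zone boundary:

```python
import numpy as np

def bands(delta, t=1.0, a=1.0, nk=201):
    """Two-band 1D tight-binding chain; +/- delta/2 are the sublattice on-site energies."""
    k = np.linspace(-np.pi / a, np.pi / a, nk)
    hop = t * np.abs(1.0 + np.exp(1j * k * a))       # magnitude of the off-diagonal element
    e = np.sqrt((delta / 2.0) ** 2 + hop ** 2)
    return k, e, -e                                   # conduction band, valence band

for delta in (0.5, 0.8):   # on-site asymmetry in units of t, e.g. modified by a surface field
    k, ec, ev = bands(delta)
    print(f"delta = {delta:.1f} t  ->  band gap = {np.min(ec - ev):.2f} t")
```

Widening or narrowing the gap in this way shifts the photon energies at which absorption sets in, which is the kind of surface-confined change that electroreflectance detects.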
Such field-induced changes in the surface optical response can result in more accurate measurements when there are multiple compounds in the semiconductor, practically canceling the background noise in the data. Commonly, the band gaps are smallest close to or at the Brillouin zone boundary. Adding an electric field will alter the whole band structure of the material where the electric field penetrates, but the effect will be especially noticeable at the Brillouin zone boundary. When the smallest band gap changes in size, this alters the optical reflectivity of the material more than a change in an already larger band gap. This can be explained by noticing that the smallest band gap determines a lot of the reflectivity, as lower-energy photons cannot be absorbed and re-emitted. Dielectric constant. The optical properties of semiconductors are directly related to the dielectric constant of the material. This dielectric constant gives the ratio between the electric permittivity of the material and the permittivity of vacuum. The complex refractive index of a material is given by the square root of the dielectric constant. The reflectance formula_0 of a material can be calculated using the Fresnel equations. An applied electric field alters the dielectric constant and therefore alters the optical properties of the material, like the reflectance. Interfaces with a liquid (electric double layer). A solid in contact with a liquid, in the presence of an electric field, forms an electric double layer. This layer is present at the interface of the solid and liquid and it shields the charged surface of the solid. This electric double layer has an effect on the optical reflectivity of the solid as it changes the elastic light scattering properties. The formation of the electric double layer involves different timescales, such as the relaxation time and the charging time. The relaxation time can be written as formula_1, with formula_2 being the diffusion constant and formula_3 the Debye length. The charging time can be expressed by formula_4, where formula_5 is the representative system size. The Debye length is often used as a measure of the electric double layer thickness. Measuring the electric double layer with electroreflectance is challenging due to separation caused by conduction electrons. History. The effect of electroreflectance was first reported in a 1965 letter by B. O. Seraphin and R. B. Hess from the Michelson Laboratory, China Lake, California, where they were studying the Franz-Keldysh effect above the fundamental edge in germanium. They found that it was not only possible for the material to absorb the incident light, but also to re-emit it. Following this discovery, Seraphin wrote numerous articles on the newfound phenomenon. Research techniques. Electroreflectance in surface physics. Using electroreflectance in surface physics studies gives some major advantages over techniques used before its discovery. Previously, the determination of the surface potential was hard to do, since it required electrical measurements at the surface and it was difficult to probe the surface region without involving the bulk underneath. Electroreflectance does not need electrical measurements on the surface, but only uses optical measurements. Furthermore, due to the direct functional relationship between surface potential and reflectivity, one can get rid of a lot of the assumptions about mobility, scattering, or trapping of added carriers that were needed in the older methods.
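The functional relationship between the dielectric constant and the measured reflectance described above can be made concrete with a small numerical sketch. It uses the normal-incidence Fresnel formula, and the values of the unperturbed dielectric constant and of its field-induced change are assumptions chosen purely for illustration:

```python
import numpy as np

def reflectance(eps):
    """Normal-incidence Fresnel reflectance of a semi-infinite medium in air (n_air = 1)."""
    n = np.sqrt(eps + 0j)                  # complex refractive index n = sqrt(eps)
    return abs((1 - n) / (1 + n)) ** 2

eps   = 12.0 + 0.5j       # assumed unperturbed dielectric constant
d_eps = 0.05 + 0.02j      # assumed small change induced by the surface electric field

R0 = reflectance(eps)
dR = reflectance(eps + d_eps) - R0
print(f"R = {R0:.4f},  dR/R = {dR / R0:.2e}")   # the relative change dR/R is the measured quantity
```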
The electric field of the surface is probed by the modulation of the beam reflected by the surface. The incoming beam does not penetrate deep into the material, so one only probes the surface without interacting with the bulk underneath. Aspnes's third-derivative. Third-order spectroscopy, sometimes referred to as Aspnes's third-derivative, is a technique used to enhance the resolution of a spectroscopy measurement. This technique was first used by D.E. Aspnes to study electroreflectance in 1971. Using third-order derivatives can sharpen the peak of a function (see figure). Especially in spectroscopy, where the wave is never measured at one specific wavelength but always over a band, it is useful to sharpen the peak and thus narrow the band. Another advantage of derivatives is that baseline shifts are eliminated, since derivatives get rid of such shifts. These shifts in spectra can, for example, be caused by sample handling or by lamp or detector instabilities. In this way some of the background noise of the measurements can be eliminated. Applications. Electroreflectance is often used to determine band gaps and electric properties of thin films of weaker semiconducting materials. Two different examples are listed below. Enhancing research of high band gap semiconductors at room temperature. Wide band gap semiconductors like tin oxide &lt;chem&gt;SnO2&lt;/chem&gt; generally possess a high chemical stability and mobility, are cheap to fabricate and have a suitable band alignment, which is why these semiconductors are often used in various electronic devices, such as thin film transistors, anodes in lithium ion batteries and electron transport layers in solar cells. The large band gap of &lt;chem&gt;SnO2&lt;/chem&gt; (formula_6) and its large binding energy (formula_7) make it useful in ultraviolet-based devices. A fundamental problem arises, however, with its dipole-forbidden band structure in bulk form: the transition from the valence to the conduction band is dipole forbidden since both types of states have even parity, with the effect that band edge emission of &lt;chem&gt;SnO2&lt;/chem&gt; is forbidden in nature. This can be offset by employing its reduced dimensional structure, partially destroying the crystal symmetry and turning the forbidden dipole transitions into allowed ones. Observing optical transitions in &lt;chem&gt;SnO2&lt;/chem&gt; at room temperature, however, is challenging because the light absorbing efficiency of the reduced &lt;chem&gt;SnO2&lt;/chem&gt; structures in the UV region is very weak and because of background scattering of electrons with lower energies. Using electroreflectance the optical transitions of thin films can be recovered: by placing a thin film in an electric field, the critical points of the optical transitions are enhanced while, due to a change in reflectivity, low energy background scattering is reduced. Electroreflectance in organic semiconductors. Organic compounds containing conjugated (i.e., alternating single-double) bonds can have semiconducting properties. The conductivity and mobility of those organic compounds, however, are very low compared to inorganic semiconductors. Assuming the molecules of the organic semiconductor are lattices, the same procedure of electroreflectance for inorganic semiconductors can be applied to the organic ones. It should be noted, though, that there is a certain dualism in these semiconductors: intra-molecular conduction (inside a molecule) and inter-molecular conduction (between molecules), which one should take into account when doing measurements.
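Returning to Aspnes's third-derivative technique described earlier, its effect can be seen in a small numerical sketch of a toy reflectance spectrum: a slowly varying baseline plus one broadened critical-point feature. The critical-point energy, broadening and amplitudes below are arbitrary assumptions, not values for a real material:

```python
import numpy as np

E = np.linspace(1.0, 3.0, 4001)                    # photon energy grid (eV)
E0, gamma = 2.0, 0.05                              # assumed critical-point energy and broadening (eV)
baseline = 0.3 + 0.1 * E                           # slowly varying background
peak = gamma**2 / ((E - E0)**2 + gamma**2)         # broadened critical-point feature
R = baseline + 0.02 * peak                         # toy spectrum

d3R = np.gradient(np.gradient(np.gradient(R, E), E), E)   # numerical third derivative

near = np.abs(E - E0) < 5 * gamma
print("max |d3R| near the critical point :", np.abs(d3R[near]).max())
print("max |d3R| away from it            :", np.abs(d3R[~near]).max())
# The linear baseline differentiates to zero, so only a sharp feature around E0 remains.
```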
Especially for thin films, the band gaps of organic semiconductors can be accurately determined using electroreflectance. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "\\tau_d = \\lambda_D^2/D" }, { "math_id": 2, "text": "D" }, { "math_id": 3, "text": "\\lambda_D" }, { "math_id": 4, "text": "\\tau_c = \\lambda_D L/D" }, { "math_id": 5, "text": "L" }, { "math_id": 6, "text": "E_{gap} \\sim 3.62 eV" }, { "math_id": 7, "text": "E_{bind} \\sim 130meV" } ]
https://en.wikipedia.org/wiki?curid=69803965
69806623
Band bending
Band bending in energy diagrams In solid-state physics, band bending refers to the process in which the electronic band structure in a material curves up or down near a junction or interface. It does not involve any physical (spatial) bending. When the electrochemical potential of the free charge carriers around an interface of a semiconductor is dissimilar, charge carriers are transferred between the two materials until an equilibrium state is reached whereby the potential difference vanishes. The band bending concept was first developed in 1938 when Mott, Davidov and Schottky all published theories of the rectifying effect of metal-semiconductor contacts. The use of semiconductor junctions sparked the computer revolution in the second half of the 20th century. Devices such as the diode, the transistor, the photocell and many more play crucial roles in technology. Qualitative description. Band bending can be induced by several types of contact. In this section metal-semiconductor contact, surface state, applied bias and adsorption induced band bending are discussed. Metal-semiconductor contact induced band bending. Figure 1 shows the ideal band diagram (i.e. the band diagram at zero temperature without any impurities, defects or contaminants) of a metal with an n-type semiconductor before (top) and after contact (bottom). The work function is defined as the energy difference between the Fermi level of the material and the vacuum level before contact and is denoted by formula_0. When the metal and semiconductor are brought in contact, charge carriers (i.e. free electrons and holes) will transfer between the two materials as a result of the work function difference formula_1. If the metal work function (formula_2) is larger than that of the semiconductor (formula_3), that is formula_4, the electrons will flow from the semiconductor to the metal, thereby lowering the semiconductor Fermi level and increasing that of the metal. Under equilibrium the work function difference vanishes and the Fermi levels align across the interface. A Helmholtz double layer will be formed near the junction, in which the metal is negatively charged and the semiconductor is positively charged due to this electrostatic induction. Consequently, a net electric field formula_5 is established from the semiconductor to the metal. Due to the low concentration of free charge carriers in the semiconductor, the electric field cannot be effectively screened (unlike in the metal where formula_6 in the bulk). This causes the formation of a depletion region near the semiconductor surface. In this region, the energy band edges in the semiconductor bend upwards as a result of the accumulated charge and the associated electric field between the semiconductor and the metal surface. In the case of formula_7, electrons are shared from the metal to the semiconductor, resulting in an electric field that points in the opposite direction. Hence, the band bending is downward as can be seen in the bottom right of Figure 1. One can envision the direction of bending by considering the electrostatic energy experienced by an electron as it moves across the interface. When formula_4, the metal develops a negative charge. An electron moving from the semiconductor to the metal therefore experiences a growing repulsion as it approaches the interface. It follows that its potential energy rises and hence the band bending is upwards. 
In the case of formula_7, the semiconductor carries a negative charge, forming a so-called accumulation layer and leaving a positive charge on the metal surface. An electric field develops from the metal to the semiconductor which drives the electrons towards the metal. By moving closer to the metal the electron could thus lower its potential energy. The result is that the semiconductor energy band bends downwards towards the metal surface. Surface state induced band bending. Despite being energetically unfavourable, surface states may exist on a clean semiconductor surface due to the termination of the materials lattice periodicity. Band bending can also be induced in the energy bands of such surface states. A schematic of an ideal band diagram near the surface of a clean semiconductor in and out of equilibrium with its surface states is shown in Figure 2 . The unpaired electrons in the dangling bonds of the surface atoms interact with each other to form an electronic state with a narrow energy band, located somewhere within the band gap of the bulk material. For simplicity, the surface state band is assumed to be half-filled with its Fermi level located at the mid-gap energy of the bulk. Furthermore, doping is taken to not be of influence to the surface states. This is a valid approximation since the dopant concentration is low. For intrinsic semiconductors (undoped), the valence band is fully filled with electrons, whilst the conduction band is completely empty. The Fermi level is thus located in the middle of the band gap, the same as that of the surface states, and hence there is no charge transfer between the bulk and the surface. As a result no band bending occurs. If the semiconductor is doped, the Fermi level of the bulk is shifted with respect to that of the undoped semiconductor by the introduction of dopant eigenstates within the band gap. It is shifted up for n-doped semiconductors (closer to the conduction band) and down in case of p-doping (nearing the valence band). In disequilibrium, the Fermi energy is thus lower or higher than that of the surface states for p- and n-doping, respectively. Due to the energy difference, electrons will flow from the bulk to the surface or vice versa until the Fermi levels become aligned at equilibrium. The result is that, for n-doping, the energy bands bend upward, whereas they bend downwards for p-doped semiconductors. Note that the density of surface states is large (formula_8) in comparison with the dopant concentration in the bulk (formula_9). Therefore, the Fermi energy of the semiconductor is almost independent of the bulk dopant concentration and is instead determined by the surface states. This is called Fermi level pinning. Adsorption induced band bending. Adsorption on a semiconductor surface can also induce band bending. Figure 3 illustrates the adsorption of an acceptor molecule (A) onto a semiconductor surface. As the molecule approaches the surface, an unfilled molecular orbital of the acceptor interacts with the semiconductor and shifts downwards in energy. Due to the adsorption of the acceptor molecule its movement is restricted. It follows from the general uncertainty principle that the molecular orbital broadens its energy as can be seen in the bottom of figure 3. The lowering of the acceptor molecular orbital leads to electron flow from the semiconductor to the molecule, thereby again forming a Helmholtz layer on the semiconductor surface. 
An electric field is set up and upward band bending near the semiconductor surface occurs. For a donor molecule, the electrons will transfer from the molecule to the semiconductor, resulting in downward band bending. Applied bias induced band bending. When a voltage is applied across two surfaces of metals or semiconductors, the associated electric field is able to penetrate the surface of the semiconductor. Because the semiconductor material contains few charge carriers, the electric field will cause an accumulation of charges on the semiconductor surface. When formula_10, a forward bias, the band bends downwards. A reverse bias (formula_11) would cause an accumulation of holes on the surface, which would bend the band upwards. This follows again from Poisson's equation. As an example, the band bending induced by the forming of a p-n junction or a metal-semiconductor junction can be modified by applying a bias voltage formula_12. This voltage adds to the built-in potential (formula_13) that exists in the depletion region (formula_14). Thus the potential difference between the bands is either increased or decreased, depending on the type of bias that is applied. The conventional depletion approximation assumes a uniform ion distribution in the depletion region. It also approximates a sudden drop in charge carrier concentration at the edges of the depletion region. Therefore the electric field changes linearly and the band bending is parabolic. The width of the depletion region thus changes with the bias voltage. The depletion region width is given by: formula_15 formula_16 and formula_17 are the boundaries of the depletion region. formula_18 is the dielectric constant of the semiconductor. formula_19 and formula_20 are the net acceptor and net donor dopant concentrations respectively, and formula_21 is the charge of the electron. The formula_22 term compensates for the existence of free charge carriers near the junction from the bulk region. Poisson's equation. The equation which governs the curvature obtained by the band edges in the space charge region, i.e. the band bending phenomenon, is Poisson's equation, formula_23 where formula_24 is the electric potential, formula_25 is the local charge density and formula_26 is the permittivity of the material. An example of its implementation can be found in the Wikipedia article on p-n junctions. Applications. Electronics. The p-n diode is a device that allows current to flow in only one direction as long as the applied voltage is below a certain threshold. When a forward bias is applied to the p-n junction of the diode, the potential barrier in the depletion region is lowered. The applied voltage introduces more charge carriers as well, which are able to diffuse across the depletion region. Under a reverse bias this is hardly possible, because the potential barrier is raised instead of lowered, and thus no current can flow. Therefore the depletion region is necessary to allow current in only one direction. The metal–oxide–semiconductor field-effect transistor (MOSFET) relies on band bending. When the transistor is in its so-called ‘off’ state, no voltage is applied on the gate and the first p-n junction is reverse biased. The potential barrier is too high for the electrons to pass, so no current flows. When a voltage is applied on the gate, the potential gap shrinks due to the applied-bias band bending that occurs. As a result, current will flow; in other words, the transistor is in its ‘on’ state. The MOSFET is not the only type of transistor available today.
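As a rough numerical illustration of the depletion-width expression above, the sketch below evaluates it for a symmetric junction under zero, forward and reverse bias. All parameter values (silicon-like permittivity, doping levels and built-in potential) are assumptions chosen for illustration only:

```python
import numpy as np

q     = 1.602e-19          # elementary charge (C)
kT_q  = 0.0259             # thermal voltage at 300 K (V)
eps_s = 11.7 * 8.854e-12   # assumed silicon-like permittivity (F/m)
N_A   = 1e22               # assumed net acceptor concentration (m^-3), i.e. 1e16 cm^-3
N_D   = 1e22               # assumed net donor concentration (m^-3)
V_BI  = 0.7                # assumed built-in potential (V)

def depletion_width(V_A):
    """Depletion width w = x_n + x_p under bias V_A, in the depletion approximation."""
    return np.sqrt(2 * eps_s * (N_A + N_D) * (V_BI - V_A - 2 * kT_q) / (q * N_A * N_D))

for V_A in (0.0, +0.5, -5.0):        # zero bias, forward bias, reverse bias
    print(f"V_A = {V_A:+.1f} V  ->  w = {depletion_width(V_A) * 1e6:.2f} um")
```

A forward bias shrinks the depletion region while a reverse bias widens it, consistent with the discussion above.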
Several more examples are the Metal-Semiconductor Field Effect Transistor (MESFET) and the Junction Field Effect Transistor (JFET), both of which rely on band bending as well. Photovoltaic cells (solar cells) are essentially just p-n diodes that can generate a current when they are exposed to sunlight. Solar energy can create an electron-hole pair in the depletion region. Normally they would recombine quite quickly before traveling very far. The electric field in the depletion region separates the electrons and holes generating a current when the two sides of the p-n diode are connected. Photovoltaic cells are an important supplier of renewable energy. They are a promising source of reliable clean energy. Spectroscopy. Different spectroscopy methods make use of or can measure band bending: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\phi" }, { "math_id": 1, "text": "V_{BB} = |\\phi_m - \\phi_s|" }, { "math_id": 2, "text": "\\phi_m" }, { "math_id": 3, "text": "\\phi_s" }, { "math_id": 4, "text": "\\phi_m > \\phi_s" }, { "math_id": 5, "text": "\\vec E" }, { "math_id": 6, "text": "E=0" }, { "math_id": 7, "text": "\\phi_m < \\phi_s" }, { "math_id": 8, "text": "\\sim10^5\\text{ cm}^{-2}" }, { "math_id": 9, "text": "\\sim10^8-10^{12}\\text{ cm}^{-2}" }, { "math_id": 10, "text": "V>0" }, { "math_id": 11, "text": "V<0" }, { "math_id": 12, "text": "V_A" }, { "math_id": 13, "text": "V_{BI}" }, { "math_id": 14, "text": " V_{BI} - V_A " }, { "math_id": 15, "text": " w = x_n+ x_p = \\sqrt{\\frac{2\\epsilon_s (N_A+N_D )(V_{BI} - V_A - 2kT/q)}{qN_A N_D }} " }, { "math_id": 16, "text": "x_n" }, { "math_id": 17, "text": "x_p" }, { "math_id": 18, "text": "\\epsilon_s" }, { "math_id": 19, "text": "N_A" }, { "math_id": 20, "text": "N_D" }, { "math_id": 21, "text": "q" }, { "math_id": 22, "text": "2kT/q" }, { "math_id": 23, "text": "\\triangledown^2 V = -\\rho/\\epsilon " }, { "math_id": 24, "text": "V" }, { "math_id": 25, "text": "\\rho" }, { "math_id": 26, "text": "\\epsilon" } ]
https://en.wikipedia.org/wiki?curid=69806623
6980742
Geophysical survey
Systematic collection of geophysical data for spatial studies Geophysical survey is the systematic collection of geophysical data for spatial studies. Detection and analysis of the geophysical signals forms the core of geophysical signal processing. The magnetic and gravitational fields emanating from the Earth's interior hold essential information concerning seismic activities and the internal structure. Hence, detection and analysis of the electric and magnetic fields is crucial. As the electromagnetic and gravitational waves are multi-dimensional signals, all the 1-D transformation techniques can be extended for the analysis of these signals as well. Hence this article also discusses multi-dimensional signal processing techniques. Geophysical surveys may use a great variety of sensing instruments, and data may be collected from above or below the Earth's surface or from aerial, orbital, or marine platforms. Geophysical surveys have many applications in geology, archaeology, mineral and energy exploration, oceanography, and engineering. Geophysical surveys are used in industry as well as for academic research. Sensing instruments such as gravimeters, gravitational wave sensors and magnetometers detect fluctuations in the gravitational and magnetic fields. The data collected from a geophysical survey is analysed to draw meaningful conclusions from it. Analysing the spectral density and the time-frequency localisation of any signal is important in applications such as oil exploration and seismography. Types of geophysical survey. There are many methods and types of instruments used in geophysical surveys. Technologies used for geophysical surveys include: Geophysical signal detection. This section deals with the principles behind measurement of geophysical waves. The magnetic and gravitational fields are important components of geophysical signals. The instrument used to measure the change in the gravitational field is the gravimeter. This meter measures the variation in gravity due to subsurface formations and deposits. To measure the changes in the magnetic field, the magnetometer is used. There are two types of magnetometers: one measures only the vertical component of the magnetic field, and the other measures the total magnetic field. With the help of these meters, either the gravity values at different locations are measured or the values of the Earth's magnetic field are measured. Then these measured values are corrected for various effects, and an anomaly map is prepared. By analyzing these anomaly maps, one can get an idea of the structure of rock formations in that area. For this purpose one needs to use various analog or digital filters. Measurement of Earth's magnetic fields. Magnetometers are used to measure the magnetic fields and magnetic anomalies in the earth. The sensitivity of magnetometers depends upon the requirement. For example, the variations in the geomagnetic fields can be of the order of several aT, where 1 aT = 10^-18 T. In such cases, specialized magnetometers such as the superconducting quantum interference device (SQUID) are used. Jim Zimmerman co-developed the rf superconducting quantum interference device (SQUID) during his tenure at the Ford research lab. However, the events leading to the invention of the SQUID were, in fact, serendipitous. John Lambe, during his experiments on nuclear magnetic resonance, noticed that the electrical properties of indium varied due to a change in the magnetic field of the order of a few nT.
However, Lambe was not able to fully recognize the utility of the SQUID. SQUIDs have the capability to detect magnetic fields of extremely low magnitude. This is due to the virtue of the Josephson junction. Jim Zimmerman pioneered the development of the SQUID by proposing a new approach to making the Josephson junctions. He made use of niobium wires and niobium ribbons to form two Josephson junctions connected in parallel. The ribbons act as the interruptions to the superconducting current flowing through the wires. The junctions are very sensitive to the magnetic fields and hence are very useful in measuring fields of the order of 10^-18 T. Seismic wave measurement using gravitational wave sensor. Gravitational wave sensors can detect even a minute change in the gravitational fields due to the influence of heavier bodies. Large seismic waves can interfere with the gravitational waves and may cause shifts in the atoms. Hence, the magnitude of seismic waves can be detected by a relative shift in the gravitational waves. Measurement of seismic waves using atom interferometer. The motion of any mass is affected by the gravitational field. The motion of planets is affected by the Sun's enormous gravitational field. Likewise, a heavier object will influence the motion of other objects of smaller mass in its vicinity. However, this change in the motion is very small compared to the motion of heavenly bodies. Hence, special instruments are required to measure such a minute change. Atom interferometers work on the principle of diffraction. The diffraction gratings are nanofabricated materials with a separation of a quarter wavelength of light. When a beam of atoms passes through a diffraction grating, the atoms, due to their inherent wave nature, split and form interference fringes on the screen. An atom interferometer is very sensitive to the changes in the positions of atoms. As heavier objects shift the position of nearby atoms, the displacement of the atoms can be measured by detecting a shift in the interference fringes. Existing approaches in geophysical signal recognition. This section addresses the methods and mathematical techniques behind signal recognition and signal analysis. It considers the time domain and frequency domain analysis of signals. This section also discusses various transforms and their usefulness in the analysis of multi-dimensional waves. 3D sampling. Sampling. The first step in any signal processing approach is analog to digital conversion. The geophysical signals in the analog domain have to be converted to the digital domain for further processing. Most of the filters are available in 1D as well as 2D. Analog to digital conversion. As the name suggests, the gravitational and electromagnetic waves in the analog domain are detected, sampled and stored for further analysis. The signals can be sampled in both time and frequency domains. The signal component is measured at intervals of both time and space. For example, time-domain sampling refers to measuring a signal component at several instances of time. Similarly, spatial sampling refers to measuring the signal at different locations in space. Traditional sampling of 1D time-varying signals is performed by measuring the amplitude of the signal under consideration in discrete intervals of time. Similarly, sampling of space-time signals (signals which are functions of 4 variables – 3D space and time) is performed by measuring the amplitude of the signals at different time instances and at different locations in space.
For example, the Earth's gravitational data is measured with the help of a gravitational wave sensor or gradiometer by placing it at different locations at different instances of time. Spectrum analysis. Multi-dimensional Fourier transform. The Fourier expansion of a time domain signal is the representation of the signal as a sum of its frequency components, specifically a sum of sines and cosines. Joseph Fourier came up with the Fourier representation to estimate the heat distribution of a body. The same approach can be followed to analyse multi-dimensional signals such as gravitational waves and electromagnetic waves. The 4D Fourier representation of such signals is given by formula_0 Wavelet transform. The motivation for the development of the wavelet transform was the short-time Fourier transform. The signal to be analysed, say "f"("t"), is multiplied with a window function "w"("t") at a particular time instant. Analysing the Fourier coefficients of this signal gives us information about the frequency components of the signal at a particular time instant. The STFT is mathematically written as: formula_1 The wavelet transform is defined as formula_2 A variety of window functions can be used for analysis. Wavelet functions are used for both time and frequency localisation. For example, one of the windows used in calculating the Fourier coefficients is the Gaussian window, which is optimally concentrated in time and frequency. This optimal nature can be explained by considering the time scaling and time shifting parameters "a" and "b" respectively. By choosing the appropriate values of "a" and "b", we can determine the frequencies and the time associated with that signal. By representing any signal as a linear combination of the wavelet functions, we can localize the signals in both the time and frequency domain. Hence wavelet transforms are important in geophysical applications where spatial and temporal frequency localisation is important. Time frequency localisation using wavelets. Geophysical signals are continuously varying functions of space and time. The wavelet transform techniques offer a way to decompose the signals as a linear combination of shifted and scaled versions of basis functions. The amount of "shift" and "scale" can be modified to localize the signal in time and frequency. Beamforming. Simply put, the space-time signal filtering problem can be thought of as localizing the speed and direction of a particular signal. The design of filters for space-time signals follows a similar approach to that for 1D signals. The filters for 1-D signals are designed in such a way that if the requirement of the filter is to extract frequency components in a particular non-zero range of frequencies, a bandpass filter with appropriate passband and stopband frequencies is determined. Similarly, in the case of multi-dimensional systems, the wavenumber-frequency response of the filters is designed in such a way that it is unity in the designed region of ("k", "ω"), a.k.a. wavenumber-frequency, and zero elsewhere. This approach is applied for filtering space-time signals. It is designed to isolate signals travelling in a particular direction. One of the simplest filters is the weighted delay-and-sum beamformer. The output is the average of a linear combination of delayed signals. In other words, the beamformer output is formed by averaging weighted and delayed versions of the receiver signals. The delay is chosen such that the passband of the beamformer is directed to a specific direction in space.
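A minimal delay-and-sum beamformer along these lines can be sketched as follows. The array geometry, wave speed, frequency and arrival direction are arbitrary assumptions, used only to show that the averaged output power is largest when the steering delays match the true direction of the incoming plane wave:

```python
import numpy as np

c, f = 1500.0, 50.0                  # assumed wave speed (m/s) and frequency (Hz)
d, M = 10.0, 8                       # assumed sensor spacing (m) and number of sensors
theta_src = np.deg2rad(30.0)         # assumed true arrival direction of the plane wave
t = np.arange(0.0, 1.0, 1e-3)        # time axis (s)

# Each sensor of the linear array sees a delayed copy of the same wavefront.
delays_src = np.arange(M) * d * np.sin(theta_src) / c
x = np.array([np.sin(2 * np.pi * f * (t - tau)) for tau in delays_src])

def delay_and_sum(x, theta):
    """Advance each channel by its steering delay for direction theta, then average."""
    taus = np.arange(M) * d * np.sin(theta) / c
    aligned = [np.interp(t, t - tau, ch) for tau, ch in zip(taus, x)]
    return np.mean(aligned, axis=0)

for deg in (0.0, 30.0, 60.0):
    y = delay_and_sum(x, np.deg2rad(deg))
    print(f"steering angle {deg:5.1f} deg -> mean output power {np.mean(y**2):.3f}")
```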
Classical estimation theory. This section deals with the estimation of the power spectral density of multi-dimensional signals. The spectral density function can be defined as a multidimensional Fourier transform of the autocorrelation function of the random signal. formula_3 formula_4 The spectral estimates can be obtained by finding the square of the magnitude of the Fourier transform, also called the periodogram. The spectral estimates obtained from the periodogram have a large variance in amplitude for consecutive periodogram samples or in wavenumber. This problem is resolved using techniques that constitute the classical estimation theory. They are as follows: 1. Bartlett suggested a method that averages the spectral estimates to calculate the power spectrum. Averaging the spectral estimates over a time interval gives a better estimate. formula_5 Bartlett's case 2. Welch's method suggested dividing the measurements using data window functions, calculating a periodogram for each segment, averaging the periodograms to get a spectral estimate and calculating the power spectrum using the Fast Fourier Transform. This increased the computational speed. formula_6 Welch's case 4. The periodogram under consideration can be modified by multiplying it with a window function. A smoothing window will help us smoothen the estimate. The wider the main lobe of the smoothing spectrum, the smoother the estimate becomes, at the cost of frequency resolution. formula_7 Modified periodogram For further details on spectral estimation, please refer to Spectral Analysis of Multi-dimensional signals. Applications. Estimating positions of underground objects. The method being discussed here assumes that the mass distribution of the underground objects of interest is already known and hence the problem of estimating their location boils down to parametric localisation. Say underground objects with centers of mass (CM1, CM2...CMn) are located under the surface at positions p1, p2...pn. The gravity gradient (components of the gravity field) is measured using a spinning wheel with accelerometers, also called the gravity gradiometer. The instrument is positioned in different orientations to measure the respective component of the gravitational field. The values of the gravitational gradient tensors are calculated and analyzed. The analysis includes observing the contribution of each object under consideration. A maximum likelihood procedure is followed and the Cramér–Rao bound (CRB) is computed to assess the quality of the location estimate. Array processing for seismographic applications. Various sensors located on the surface of the Earth, spaced equidistantly, receive the seismic waves. The seismic waves travel through the various layers of the Earth and undergo changes in their properties: amplitude change, time of arrival, phase shift. By analyzing these properties of the signals, we can model the activities inside the Earth. Visualization of 3D data. The method of volume rendering is an important tool to analyse scalar fields. Volume rendering simplifies the representation of 3D space. Every point in the 3D dataset is called a voxel. Data inside the 3D dataset is projected to the 2D space (display screen) using various techniques. Different data encoding schemes exist for various applications such as MRI and seismic applications. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "S (K, \\omega) = \\iint s(x,t) e^{-j (\\omega t - k' x)} \\, dx\\, dt" }, { "math_id": 1, "text": "\\{x(t)\\}(\\tau,\\omega) \\equiv X(\\tau, \\omega) = \\int_{-\\infty}^{\\infty} x(t) w(t-\\tau) e^{-j \\omega t} \\, dt " }, { "math_id": 2, "text": "X(a,b) = \\frac{1}{\\sqrt{a}} \\int\\limits_\\ \\Psi( \\frac{t-b}{a}) x(t) dt " }, { "math_id": 3, "text": "P\\left (K_x,w\\right)=\\int_{-\\infty}^\\infty \\int_{-\\infty}^\\infty \\varphi_{ss}\\left(x,t\\right)\\ e^{-j\\left(w t-k' x\\right)}\\,dx\\,dt" }, { "math_id": 4, "text": "\\varphi_{ss}\\left(x,t\\right)=s\\left[\\left(\\xi,\\tau\\right)s*\\left(\\xi-x,\\tau-t\\right)\\right]" }, { "math_id": 5, "text": "P_B\\left(w\\right) = \\frac{1}{\\mathrm{det}\\,N}\\sum_{l}|\\sum_{n}\\ x\\left(n+MI\\right)\\ e^{-j\\left(w' n\\right)}|^2" }, { "math_id": 6, "text": "P_W\\left(w\\right) = \\frac{1}{\\mathrm{det}\\,N}\\sum_{l}|\\sum_{n}\\ g\\left(n\\right)\\ x\\left(n+MI\\right)\\ e^{-j\\left(w' n\\right)}|^2" }, { "math_id": 7, "text": "P_M\\left(w\\right) = \\frac{1}{detN}|\\sum_{n}\\ g\\left(n\\right)\\ x\\left(n\\right)\\ e^{-j\\left(w' n\\right)}|^2" } ]
https://en.wikipedia.org/wiki?curid=6980742
69807546
SU(1,1) interferometry
SU(1,1) interferometry is a technique that uses parametric amplification for splitting and mixing of electromagnetic waves for precise estimation of phase change and achieves the Heisenberg limit of sensitivity with fewer optical elements than conventional interferometric techniques. Introduction. Interferometry is an important technique in the field of optics that has been utilised for fundamental proof-of-principle experiments and in the development of new technologies. This technique, primarily based on the interference of electromagnetic waves, has been widely explored in the field of quantum metrology and precision measurements for achieving sensitivity in measurements beyond what is possible with classical methods and resources. Interferometry is a desired platform for precise estimation of physical quantities because of its ability to sense small phase changes. One of the most prominent examples of the application of this property is the detection of gravitational waves (LIGO). Conventional interferometers are based on the wave nature of light and hence on the classical interference of electromagnetic waves. Although the design and layout of these types of interferometers can vary depending upon the type of application and the corresponding suitable scheme, they can all be mapped to an arrangement similar to that of a Mach-Zehnder interferometer. In this type of interferometry, the input field is split into two by a beam splitter; the two beams then propagate along different paths and acquire a relative phase difference (corresponding to a path length difference). Considering one of the beams, which undergoes a phase change, as the "probe" and the other beam as the "reference", the relative phase is estimated after the two beams interfere at another beam splitter. The estimation of the phase difference is done through the detection of the intensity change at the output after the interference at the second beam splitter. These standard interferometric techniques, based on beam splitters for the splitting of the beams and on linear optical transformations, can be classified as SU(2) interferometers, as they can be naturally characterized by the SU(2) (special unitary) group. Theoretically, the sensitivity of conventional SU(2) interferometric schemes is limited by the vacuum fluctuation noise, also called the shot-noise limit, which scales as formula_0, where formula_1 is the mean number of particles (photons for electromagnetic waves) entering the input port of the interferometer. The shot-noise limit can be overcome by using light that utilizes quantum properties such as quantum entanglement (e.g. squeezed states, NOON states) at the unused input port. In principle, this can achieve the Heisenberg limit of sensitivity, which scales as formula_2 with the mean number of photons entering the input port. SU(1,1) interferometers were first proposed by Yurke "et al."; in this scheme the beam splitters of conventional interferometers are replaced by optical parametric amplifiers. The advantage that comes with the parametric amplifiers is that the input fields can be coherently split and made to interfere in a way that is fundamentally quantum in nature. This is attributed to the nonlinear processes in parametric amplifiers, such as four-wave mixing. Theoretically, SU(1,1) interferometers can achieve the Heisenberg limit of sensitivity with fewer optical elements than conventional interferometers. Theory.
To briefly understand the benefit of using a parametric amplifier, a balanced SU(1,1) interferometer can be considered. Treating the input fields as quantum fields described by the operators formula_3, formula_4, the output quantum fields from a parametric amplifier can be written as: formula_5 formula_6 where formula_7 is the amplitude gain and formula_8. For a coherent state input formula_9 at the first parametric amplifier with initial input intensity formula_10, the output intensities will be: formula_11 (for formula_12) formula_13 (for formula_12) where formula_14 is the visibility. This shows two main features distinguishing an SU(1,1) interferometer from SU(2) interferometers: formula_15 The two output intensities of an SU(1,1) interferometer are in phase, whereas in a SU(2) interferometer the two outputs are out of phase. formula_16 The outputs are amplified as compared to those of a SU(2) interferometer when the gain formula_7 is large. The first feature indicates a high correlation of the output photon numbers (intensities), and the second feature shows that there is an enhancement of the signal strength for a small phase change as compared to SU(2) interferometers. From these properties, the signal to noise ratio formula_17 for an SU(1,1) interferometer compared to the signal to noise ratio formula_18 of a SU(2) interferometer is calculated to be (see Ref. for the detailed calculation): formula_19 (for formula_12) This means that the sensitivity in phase measurements is improved by a factor of formula_20 in an SU(1,1) interferometer, which can hence achieve a "sub-shot noise limit" of sensitivity in conditions where a SU(2) interferometer approaches the shot-noise limit. It is also shown in Ref. that with no coherent state injection, the SU(1,1) interferometer approaches the Heisenberg limit of sensitivity. Noise performance. The improvement of the phase measurement sensitivity in an SU(1,1) interferometer is not simply due to the formula_20 amplification at the second parametric amplifier. A similar amplification could also be implemented in a SU(2) interferometer, where both the signal and the noise (vacuum quantum fluctuations) get amplified. However, the difference comes in the noise performance of an SU(1,1) interferometer. A quantum amplifier (in this case a parametric amplifier) can utilize quantum entanglement for effective noise cancellation. For an input state with a strong correlation with the internal modes of an amplifier, the quantum noise can be cancelled at the output through destructive quantum interference. This is the principle behind noise reduction in an SU(1,1) interferometer. The first parametric amplifier produces two quantum entangled fields that are used for phase sensing and are the input to the second parametric amplifier, where the amplification takes place. In a scenario with no internal losses inside the interferometer, the output noise for destructive quantum interference is still similar to the case of a SU(2) interferometer. Overall, there is an amplification of the signal with no change in the noise (as compared to a SU(2) interferometer). This leads to the improvement in phase sensitivity over SU(2) interferometry. Effect of losses. The reduced sensitivity of an interferometer can be mainly due to two types of losses. One of the sources of losses is an inefficient detection protocol. Another type of loss affecting the sensitivity is the internal loss of the interferometer. Theoretical studies by Marino "et al."
inferred that an SU(1,1) interferometer is robust against losses due to inefficient detectors because of the disentanglement of states at the second parametric amplifier before the measurements. However, the internal losses in an SU(1,1) interferometer limit its sensitivity below the Heisenberg limit during its physical implementation. The original scheme for an SU(1,1) interferometer proposed by Yurke "et al." did not take into account the internal losses and for a moderate gain of the parametric amplifier produced a low number of photons which made it difficult for its experimental realization. Marino "et al." showed that in the presence of any internal losses, an SU(1,1) interferometer could not achieve the Heisenberg limit for configurations with no input fields at both the ports or a coherent state input in one of the ports (this configuration was considered for the Theory section above). For a case with coherent state input at both the input ports, it was shown that the interferometer is robust against internal losses and is one of the ideal schemes for achieving the Heisenberg limit. Experiments. The originally proposed configuration of an SU(1,1) interferometer by Yurke "et al." was challenging to realize experimentally due to very low photon numbers expected at the output (for ideal sensitivity) and also the theory did not take into account the internal losses that could affect the phase change sensitivity of the interferometer. Subsequently, modifications to the scheme were studied taking into account the losses and other experimental imperfections. Some of the initial experiments proving the predicted scaling of the SU(1,1) interferometer and other experiments modifying the scheme are discussed below. Coherent state boosted SU(1,1) interferometer. The boost in the photon numbers from a coherent state injection was proposed and studied by Plick "et al.". Such a scheme was experimentally implemented by Jing "et al." with Rb-85 vapor cells for parametric amplification. The experiment verified the increase in the fringe size due to the amplification of the signal. Later, experiments performed by Hudelist "et al." showed that there is an enhancement in the signal by a factor of formula_20 with SU(1,1) interferometry over the conventional SU(2) interferometry. Modified SU(1,1) interferometers. Modifications to SU(1,1) interferometers have been proposed and studied with the goal of finding an experimentally ideal scheme with the desired characteristics of SU(1,1) interferometry. Some of the schemes explored include: 1. SU(1,1) interferometry with one parametric amplifier and a beam splitter replacing the second amplifier: The signal to noise ratio improvement in this configuration was found to be essentially the same as that of the original SU(1,1) interferometry sensitivity improvement over conventional interferometers. This study showed that the improvement is mainly due to the entangled fields generated at the first parametric amplifier. 2. “Truncated” SU(1,1) interferometer with no second parametric amplifier and rather using a photocurrent mixer to realize the superposition of the fields. Such a configuration opens the possibility to implement SU(1,1) interferometry in experiments where fewer optical elements help minimize the error due to experimental imperfections. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "1/\\sqrt N" }, { "math_id": 1, "text": " N" }, { "math_id": 2, "text": "1/N" }, { "math_id": 3, "text": " a_1 ^{in}" }, { "math_id": 4, "text": " a_2 ^{in}" }, { "math_id": 5, "text": " a_1 ^{o} = G a_1 ^{in} + g a_2 ^{in}" }, { "math_id": 6, "text": " a_2 ^{o} = g a_1 ^{in} + G a_2 ^{in}" }, { "math_id": 7, "text": "G" }, { "math_id": 8, "text": "\\left\\vert G \\right\\vert^2 - \\left\\vert g \\right\\vert^2 = 1" }, { "math_id": 9, "text": "|\\alpha\\rangle" }, { "math_id": 10, "text": "I_0 = \\left\\vert \\alpha \\right\\vert^2" }, { "math_id": 11, "text": "I_1 ^o = 2G^2 g^2 I_0 (1+\\cos\\phi)" }, { "math_id": 12, "text": "\\left\\vert \\alpha \\right\\vert^2 >>1" }, { "math_id": 13, "text": "I_2 ^o = (2G^2 g^2 + 1)(\\left\\vert \\alpha \\right\\vert^2)(1+\\nu\\cos\\phi)" }, { "math_id": 14, "text": "\\nu = \\frac{2G^2 g^2}{2G^2 g^2 + 1}" }, { "math_id": 15, "text": "1." }, { "math_id": 16, "text": "2." }, { "math_id": 17, "text": "R_{SU(1,1)}" }, { "math_id": 18, "text": "R_{SU(2)}" }, { "math_id": 19, "text": "R_{SU(1,1)}/R_{SU(2)} \\approx 2G^2" }, { "math_id": 20, "text": "2G^2" } ]
https://en.wikipedia.org/wiki?curid=69807546
69821827
Shape control in nanocrystal growth
Influences on the shape of small crystals Shape control in nanocrystal growth is the control of the shape of nanocrystals (crystalline nanoparticles) formed in their synthesis by means of varying reaction conditions. This is a concept studied in nanosciences, which is a part of both chemistry and condensed matter physics. There are two processes involved in the growth of these nanocrystals. Firstly, volume Gibbs free energy of the system containing the nanocrystal in solution decreases as the nanocrystal size increases. Secondly, each crystal has a surface Gibbs free energy that can be minimized by adopting the shape that is energetically most favorable. Surface energies of crystal planes are related to their Miller indices, which is why these can help predict the equilibrium shape of a certain nanocrystal. Because of these two different processes, there are two competing regimes in which nanocrystal growth can take place: the kinetic regime, where the crystal growth is controlled by minimization of the volume free energy, and the thermodynamic regime, where growth is controlled by minimization of the surface free energy. High concentration, low temperatures and short aging times favor the kinetic regime, whereas low concentration, high temperatures and long aging times favor the thermodynamic regime. The different regimes lead to different shapes of the nanocrystals: the kinetic regime gives anisotropic shapes, whereas the thermodynamic regime gives equilibrium, isotropic shapes, which can be determined using the Wulff construction. The shape of the nanocrystal determines many properties of the nanocrystal, such as the band gap and polarization of emitted light. Miller indices and surface energy. The surface energy of a solid is the free energy per unit area of its surface. It equals half the energy per unit area needed for cutting a larger piece of solid in two parts along the surface under examination. This costs energy because chemical bonds are broken. Typically, materials are considered to have one specific surface energy. However, in the case of crystals, the surface energy depends on the orientation of the surface with respect to the unit cell. Different facets of a crystal thus often have different surface energies. This can be understood from the fact that in non-crystalline materials, the building blocks that make up the material (e.g., atoms or molecules) are spread in a homogeneous manner. On average, the same number of bonds needs to be broken, so the same energy per unit area is needed, to create any surface. In crystals, surfaces exhibit a periodic arrangement of particles which is dependent on their orientation. Different numbers of bonds with different bond strengths are broken in the process of creating surfaces along different planes of the material, which causes the surface energies to be different. The type of plane is most easily described using the orientation of the surface with respect to a given unit cell that is characteristic of the material. The orientation of a plane with respect to the unit cell is most conveniently expressed in terms of Miller indices. For example, the set of Miller indices (110) describes the set of parallel planes (family of lattice planes) parallel to the z-axis and cutting the x- and the y-axis once, such that every unit cell is bisected by precisely one of those planes in the x- and y-direction. Generally, a surface with high Miller indices has a high surface energy. 
Qualitatively, this follows from the fact that for higher Miller indices, on average more surface atoms are at positions at a corner instead of a terrace, as can be seen in the figure. After all, corner atoms have even fewer neighbours to interact with than terrace atoms. For example, in the case of a 2D square lattice, they have two instead of three neighbours. These additionally broken bonds all cost energy, which is why lower Miller indices planes generally have lower surface energies and are as a consequence more stable. However, the comparison is in fact somewhat more complex, as the surface energy as function of the Miller indices also depends on the structure of the crystal lattice (e.g., bcc or fcc) and bonds between non-next nearest neighbours play a role as well. Experimental research on noble metals (copper, gold and silver), shows that for these materials, the surface energy is well-approximated by taking only the nearest neighbours into account. The next-nearest neighbour interactions apparently do not play a major role in these metals. Also, breaking any of the nearest neighbour bonds turns out to cost the same amount of energy. Within this approximation, the surface energy of a certain Miller indices (hkl) surface is given by formula_0 with formula_1 the ratio of the number of bonds broken when making this (hkl) plane with respect to making a (111) plane, and formula_2 the surface energy of the (111) plane. For any surface of an fcc crystal, formula_3 is given by formula_4 assuming formula_5. In this model, the surface energy indeed increases with higher Miller indices. This is also visible in the following table, which lists computer simulated surface energies of some planes in copper (Cu), silver (Ag) and gold (Au). Again, formula_3 is the number of broken bonds between nearest neighbours created when making the surface, being 3 for the (111) plane. The surface energy indeed increases for a larger number of broken bonds and therefore larger Miller indices. It is also possible for surfaces with high Miller indices to have a low surface energy, mainly if the unit cell contains multiple atoms. After all, Miller indices are based on the unit cell, and it is the atoms, not the unit cell, that are physically present. The choice of unit cell is up to some level arbitrary as they are constructed by the interpreter. High Miller indices planes with low surface energy can be found by searching for planes with a high density of atoms. A large density of atoms in a plane after all implies a large number of in-plane bonds and thus a small number of out-of-plane bonds that would cause the surface energy to be large. If a crystal's unit cell contains only one atom, those planes naturally correspond to the planes with low Miller indices, which is why planes with low Miller indices are usually considered to have a low surface energy. The table below shows examples of computer simulated surface energies of (hk0) planes in a NiO crystal (with formula_6). In this case, the unit cell has a multi-atom basis, as there are two types of atoms that make up the crystal (nickel and oxygen). The data has been ordered by increasing surface energy. From this table, it is clearly visible that the trend between surface energy and Miller indices is not as straightforward in this case as for the noble metals discussed above. Surface energy and Equilibrium shape. 
Planes with low surface energies are relatively stable and thus tend to be predominantly present in the thermodynamic equilibrium shape of a crystal. After all, in equilibrium, the free energy is minimized. However, a crystal's thermodynamic equilibrium shape typically does not only consist of planes with the lowest possible surface energy. The reason for this is that involving planes with a slightly higher surface energy can decrease the total surface area, which lowers the total energy penalty for creating the material's surface. The optimum shape in terms of free energy can be determined by the Wulff construction. Thermodynamic versus kinetic control. The growth of crystals can be carried out under two different regimes: the thermodynamic and the kinetic regime. Research on this topic is mainly centered around nanocrystals, as their synthesis is not as straightforward as that of bulk materials and thus requires a deeper understanding of types of crystal growth. Due to the high surface-volume ratio and the resulting instability, nanocrystals most easily show the difference between the thermodynamic and kinetic regime. These concepts can however be generalized further to bulk material. A commonly used production method of nanocrystals is that of growth by monomer addition. A seed is formed or placed in a solution of monomers that are the building blocks of the crystal. The nanocrystal (seed) grows larger by consuming the monomers in solution. The addition of a monomer to the crystal happens at the highest energy facet of the crystal, since that is the most active site and the monomer deposition thus has the lowest activation energy there. Usually, this facet is situated at a corner of the nanoparticle. These facets however, as explained in the section above, are not the most energetically favorable position for the added monomer. Thus the monomer will, if it gets the chance to, diffuse along the crystal surface to a lower energy site. The regime in which the monomers have the time to relocate is called the thermodynamic regime, as the product is formed that is expected thermodynamically. In the kinetic regime, the addition of monomers happens so rapidly that the crystal continues growing at the corners. In this case, the formed product is not at a global minimum of the free energy, but is in a metastable anisotropic state. Thermodynamic regime. The thermodynamic regime is characterized by relatively low growth rates. Because of these, the amount the Gibbs free energy is lowered due to incorporating a new monomer is smaller than due to rearranging the surface. The former is associated with the minimization of volume Gibbs free energy, whereas the latter is associated with minimizing the surface free energy. Thus, the shape evolution is driven by minimization of surface Gibbs free energy, and therefore the equilibrium shape is the one with the lowest overall surface Gibbs free energy. This corresponds to the shape with a global minimum in Gibbs free energy, which can be obtained via the Wulff construction. From this Wulff construction, it also follows that the thermodynamic product is always symmetrical. The activation energy for the thermodynamic product is higher than the activation energy for the kinetic product. 
From the Arrhenius equation formula_7 with formula_8 the reaction rate, "formula_9" a constant, "formula_10" the activation energy, formula_11 the Boltzmann constant and formula_12 the temperature, follows that for overcoming a higher activation energy barrier, a higher temperature is needed. The thermodynamic regime is therefore associated with high temperature conditions. The thermodynamic regime can also be characterized by giving the system a sufficiently long time to rearrange its atoms such that the global minimum in Gibbs free energy of the entire system is reached. Raising the temperature has a similar effect because the extra thermal energy increases the mobility of the atoms on the surface, making rearrangements easier. Finally, the thermodynamic product can be obtained by having a low monomer concentration. This too ties into the longer time the system has at hand to rearrange before incorporating the next monomer at a lower monomer concentration, as the speed of diffusion of monomers through the solution to the crystal is strongly dependent on their concentration. Kinetic regime. The kinetic control regime is characterized by high growth rates. Due to these, the system is driven by lowering the volume Gibbs free energy, which decreases rapidly upon monomer consumption. Minimization of the surface Gibbs free energy is of less relevance to the system and the shape evolution is controlled by reaction rates instead. Thus the product obtained in this regime is a metastable state, with a local minimum in Gibbs free energy. Kinetic control is obtained when there is not enough time for atoms on the surface to diffuse to an energetically more favorable state. Conditions that favor kinetic control are low temperatures (to ensure thermal energy is smaller than activation energy of the thermodynamic reaction) and high monomer concentration (in order to obtain high growth rates). Because of the high concentrations needed for kinetic control, the diffusion spheres around the nanocrystal are small and have steep concentration gradients. The more extended parts of the crystal that reach out further through the diffusion sphere grow faster, because they reach parts of the solution where the concentration is higher. The extended facets thus grow even faster, which can result in an anisotropic product. Due to this effect, an important factor determining the final shape of the product is the shape of the initial seed. Consequences of shape and size. The band gap as well as the density of states of nanoparticles depend significantly on their shape and size. Generally, smaller nanoparticles have a larger band gap. Quantum confinement effects lie at the basis of this. Whereas the density of states is a smooth function for 3D crystals which are large in any direction, it becomes saw-tooth-shaped for 2D nanocrystals (e.g., disks), staircase-shaped for 1D nanocrystals (e.g., wires) and a delta function for 0D nanocrystals (balls, pyramids etc.). Also, the polarization of emitted light and its magnetic anisotropy are affected by the shape of the nanoparticle. Studying different shapes of nanoparticles can improve the understanding of quantum confinement effects. By elongating an axis in certain spherical nanoparticles (quantum dots), degeneracies in the energy levels can be resolved. Also, the energy difference between photon absorption and photon emission can be tuned using shape control. This could possibly be utilized in LED technology, as it helps to prevent re-adsorption. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
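The nearest-neighbour (broken-bond) approximation for fcc noble metals described in the section on Miller indices and surface energy can be evaluated with a few lines of code. The Python sketch below is only an illustration of that approximation: it computes the broken-bond count N_hkl and the resulting surface-energy ratio gamma_hkl / gamma_111 = N_hkl / 3, assuming the indices are ordered so that h >= k >= l as in the text; the chosen planes are arbitrary examples.

def n_broken_bonds_fcc(h: int, k: int, l: int) -> int:
    # Broken nearest-neighbour bonds for an fcc (hkl) surface,
    # with the indices sorted so that h >= k >= l.
    h, k, l = sorted((h, k, l), reverse=True)
    if h % 2 == 1 and k % 2 == 1 and l % 2 == 1:   # h, k, l all odd
        return 2 * h + k
    return 4 * h + 2 * k

for plane in [(1, 1, 1), (1, 0, 0), (1, 1, 0), (3, 1, 1), (2, 1, 0)]:
    n = n_broken_bonds_fcc(*plane)
    print(plane, "N_hkl =", n, " gamma_hkl/gamma_111 =", round(n / 3, 3))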
[ { "math_id": 0, "text": "\\gamma_{hkl} = \\frac{N_{hkl}}{3} \\gamma_{111}" }, { "math_id": 1, "text": "\\frac{N_{hkl}}{3}" }, { "math_id": 2, "text": "\\gamma_{111}" }, { "math_id": 3, "text": "N_{hkl}" }, { "math_id": 4, "text": "N_{hkl} = \\begin{cases} 2h+k & \\text{if }h,k,l\\text{ are all odd} \\\\ 4h+2k & \\text{otherwise} \\end{cases}" }, { "math_id": 5, "text": "h \\geq k \\geq l" }, { "math_id": 6, "text": "h \\leq k" }, { "math_id": 7, "text": "k = A e^{-E_{act}/k_BT}" }, { "math_id": 8, "text": "k" }, { "math_id": 9, "text": "A" }, { "math_id": 10, "text": "E_{act}" }, { "math_id": 11, "text": "k_B" }, { "math_id": 12, "text": "T" } ]
https://en.wikipedia.org/wiki?curid=69821827
698220
Scintillometer
A scintillometer is a scientific device used to measure turbulent fluctuations of the refractive index of air caused by variations in temperature, humidity, and pressure. It consists of an optical or radio wave transmitter and a receiver at opposite ends of an atmospheric propagation path. The receiver detects and evaluates the intensity fluctuations of the transmitted signal, called scintillation. The magnitude of the refractive index fluctuations is usually measured in terms of formula_0, the structure constant of refractive index fluctuations, which is the spectral amplitude of refractive index fluctuations in the inertial subrange of turbulence. Some types of scintillometers, such as displaced-beam scintillometers, can also measure the inner scale of refractive index fluctuations, which is the smallest size of eddies in the inertial subrange. Scintillometers also allow measurements of the transfer of heat between the Earth's surface and the air above, called the sensible heat flux. Inner-scale scintillometers can also measure the dissipation rate of turbulent kinetic energy and the momentum flux. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "C_n^2" } ]
https://en.wikipedia.org/wiki?curid=698220
69822011
Proverbs 3
Proverbs 3 is the third chapter of the Book of Proverbs in the Hebrew Bible or the Old Testament of the Christian Bible. The book is a compilation of several wisdom literature collections, with the heading in 1:1 may be intended to regard Solomon as the traditional author of the whole book, but the dates of the individual collections are difficult to determine, and the book probably obtained its final shape in the post-exilic period. This chapter is a part of the first collection of the book. Text. Hebrew. The following table shows the Hebrew text of Proverbs 3 with vowels alongside an English translation based upon the JPS 1917 translation (now in the public domain). Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. This chapter belongs to a section regarded as the first collection in the book of Proverbs (comprising Proverbs 1–9), known as "Didactic discourses". The Jerusalem Bible describes chapters 1–9 as a prologue of the chapters 10–22:16, the so-called "[actual] proverbs of Solomon", as "the body of the book". The chapter has the following structure: Trust in God (3:1–12). This passage stands out among the instructions in the first collection (chapters 1–9, because of its spiritual content that may be seen as a development to the motto of the whole book in , that 'Wisdom consists in complete trust in and submission to God'. It is related to 'loyalty and faithfulness', which can refer to (and may be intended about both) relationships between human and God (cf. Jeremiah 2:2; Hosea 6:4) or human to human (cf. Psalm 109:16; Hosea 4:1; Micah 6:8), and are to be 'worn as an adornment around the neck' (cf. Proverbs 1:9; Deuteronomy 6:8; 11:18) as well as 'written on the heart' (cf. Jeremiah 31:33). As the kernel of the instructions in this chapter, 'trust in God' is contrasted in verses 5 and 6 with self-reliance, that the best action is the complete commitment and submission to God ('all your ways'). The analogy of medicinal healing benefits of wisdom (verse 8) recurs in Proverbs 15:30; 16:24; 17:22), although sometimes tastes bitter (suffering adversity), it is a divine chastisement and a proof of God's fatherly love (cf. Job 5:17–18; 33:14–30; Hebrews 12:5–6). "My son, do not forget my teaching," "but let your heart keep my commandments;" " My teaching will give you a long and prosperous life." "5Trust in the Lord with all your heart," "and do not lean on your own understanding." "6 In all your ways acknowledge him," "and he will make straight your paths." Commendation of Wisdom (3:13–35). Verses 13–18 form a hymnic celebration of the 'happiness' in finding wisdom, as if possessing a vastly valuable asset, unfailingly pays a higher dividend than silver or gold (verse 14), and beyond comparison is a rare and priceless treasure (verse 15), providing a good quality of long life, riches, and honor (verse 16) and leading to pleasant and peaceful paths (verse 17), metaphorically like 'the tree of life' in the garden of Eden (; Genesis 2–3 as the vital source for nourishing growth and promoting fullness of life (cf. 
Proverbs 11:30; 13:12; 15:4). By Wisdom the world was created and is sustained, as Wisdom can 'fructify' life. Those who hold fast to Wisdom and trust in God (verses 21–26; cf. verses 5–8) will have secure and tranquil lives. The teaching is applied by inculcating kindness and neighbourliness, while avoiding malicious actions and unnecessary confrontations (verses 27–30). Verses 31–35 warn against envying evil men and imitating their ways, because God's judgement ('curse', cf. Deuteronomy 27:15–26) remains on their house, so that they are unable to enjoy the divine blessing as the upright do, and will be utterly disgraced. Uses. The bottoms of In-N-Out Burger's milkshake cups bear the text "PROVERBS 3:5", which refers to the fifth verse of this chapter. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=69822011
69832745
Zilber–Pink conjecture
Mathematical conjecture In mathematics, the Zilber–Pink conjecture is a far-reaching generalisation of many famous Diophantine conjectures and statements, such as André–Oort, Manin–Mumford, and Mordell–Lang. For algebraic tori and semiabelian varieties it was proposed by Boris Zilber and independently by Enrico Bombieri, David Masser, Umberto Zannier in the early 2000's. For semiabelian varieties the conjecture implies the Mordell–Lang and Manin–Mumford conjectures. Richard Pink proposed (again independently) a more general conjecture for Shimura varieties which also implies the André–Oort conjecture. In the case of algebraic tori, Zilber called it the Conjecture on Intersection with Tori (CIT). The general version is now known as the Zilber–Pink conjecture. It states roughly that atypical or unlikely intersections of an algebraic variety with certain special varieties are accounted for by finitely many special varieties. Statement. Atypical and unlikely intersections. The intersection of two algebraic varieties is called "atypical" if its dimension is larger than expected. More precisely, given three varieties formula_0, a component formula_1 of the intersection formula_2 is said to be atypical in formula_3 if formula_4. Since the expected dimension of formula_2 is formula_5, atypical intersections are "atypically large" and are not expected to occur. When formula_6, the varieties formula_7 and formula_8 are not expected to intersect at all, so when they do, the intersection is said to be "unlikely". For example, if in a 3-dimensional space two lines intersect, then it is an unlikely intersection, for two randomly chosen lines would almost never intersect. Special varieties. Special varieties of a Shimura variety are certain arithmetically defined subvarieties. They are higher dimensional versions of special points. For example, in semiabelian varieties special points are torsion points and special varieties are translates of irreducible algebraic subgroups by torsion points. In the modular setting special points are the singular moduli and special varieties are irreducible components of varieties defined by modular equations. Given a mixed Shimura variety formula_7 and a subvariety formula_9, an "atypical subvariety" of formula_10 is an atypical component of an intersection formula_11 where formula_12 is a special subvariety. The Zilber–Pink conjecture. Let formula_7 be a mixed Shimura variety or a semiabelian variety defined over formula_13, and let formula_14 be a subvariety. Then formula_10 contains only finitely many maximal atypical subvarieties. The abelian and modular versions of the Zilber–Pink conjecture are special cases of the conjecture for Shimura varieties, while in general the semiabelian case is not. However, special subvarieties of semiabelian and Shimura varieties share many formal properties which makes the same formulation valid in both settings. Partial results and special cases. While the Zilber–Pink conjecture is wide open, many special cases and weak versions have been proven. If a variety formula_14 contains a special variety formula_15 then by definition formula_15 is an atypical subvariety of formula_10. Hence, the Zilber–Pink conjecture implies that formula_10 contains only finitely many maximal special subvarieties. This is the Manin–Mumford conjecture in the semiabelian setting and the André–Oort conjecture in the Shimura setting. 
Both are now theorems; the former has been known for several decades, while the latter was proven in full generality only recently. Many partial results have been proven on the Zilber–Pink conjecture. An example in the modular setting is the result that any variety contains only finitely many maximal "strongly" atypical subvarieties, where a strongly atypical subvariety is an atypical subvariety with no constant coordinate. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
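The dimension count behind the example of two lines meeting in a 3-dimensional space can be written out explicitly. The short LaTeX display below is only a worked illustration of the definitions above, with U of dimension 3 and X, Y lines:

\[
  \dim X + \dim Y - \dim U = 1 + 1 - 3 = -1 < 0,
  \qquad\text{so any nonempty component } Z \subseteq X \cap Y
  \text{ has } \dim Z \ge 0 > \dim X + \dim Y - \dim U,
\]

making the intersection unlikely and hence atypical.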
[ { "math_id": 0, "text": " X, Y \\subseteq U" }, { "math_id": 1, "text": "Z" }, { "math_id": 2, "text": "X \\cap Y" }, { "math_id": 3, "text": "U" }, { "math_id": 4, "text": "\\dim Z > \\dim X + \\dim Y - \\dim U" }, { "math_id": 5, "text": "\\dim X + \\dim Y - \\dim U" }, { "math_id": 6, "text": "\\dim X + \\dim Y - \\dim U < 0" }, { "math_id": 7, "text": "X" }, { "math_id": 8, "text": "Y" }, { "math_id": 9, "text": "V \\subseteq X" }, { "math_id": 10, "text": "V" }, { "math_id": 11, "text": "V \\cap T" }, { "math_id": 12, "text": "T \\subseteq X" }, { "math_id": 13, "text": "\\mathbb{C}" }, { "math_id": 14, "text": "V\\subseteq X" }, { "math_id": 15, "text": "T" } ]
https://en.wikipedia.org/wiki?curid=69832745
69841081
2 Samuel 1
Second Book of Samuel chapter 2 Samuel 1 is the first chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David mourning the death of Saul and his sons, especially Jonathan. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel. Text. This chapter was originally written in the Hebrew language. It is divided into 27 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 4–5, 10–13. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Saul's death was reported to David (1:1–16). This chapter is a conclusion to the narrative about Saul and David. It opened with an Amalekite reporting to David about Saul's death which is an entirely different account to the one in 1 Samuel 31:3–5, because this messenger claimed he killed Saul on the dying king's request and as proof he presented the king's crown and armlet to David. The most likely explanation of the discrepancy is that the Amalekite was lying in order to gain favor with David. He came with 'clothes torn and dirt on his head' to show signs of grief, this could have been contrived to give authenticity to his account, as it is more probable that he stumbled on Saul's body when he was searching for plunders in Mount Gilboa, so he immediately stripped him of his crown and armlet, then saw in this an opportunity to get rewards from the next king. However, the messenger describes himself as 'a resident alien' ("gēr"), who was bound by the laws of his adopted community (Leviticus 24:22), so his disregard for the sanctity of 'the LORD'S anointed' should be punished by death. This narrative confirms once again David's respect for YHWH's anointed, and also exonerates David entirely of the events leading to his succession, that David came innocently to be in possession of Saul's crown and armlet. "1 Now it came to pass after the death of Saul, when David had returned from the slaughter of the Amalekites, and David had stayed two days in Ziklag, 2 on the third day, behold, it happened that a man came from Saul’s camp with his clothes torn and dust on his head. So it was, when he came to David, that he fell to the ground and prostrated himself." David mourned Saul and Jonathan (1:17–27). The lament in this section can be attributed to David himself with a very personal expression of grief over the loss of Jonathan. The poem was preserved in an anthology known as the Book of Jashar (cf. Joshua 10:12–13; 1 Kings 8:12–13). 
It has a kind of refrain 'how the mighty have fallen', occurring in three places (verses 19, 25, 27), dividing the poem into sections (19–24, 25–27) and forming an "inclusio" bracketing the beginning and the ending. After stating that Israel's 'glory' has fallen, the poet wishes that the news be kept from the cities of the Philistines to prevent their exultation over Judah (verse 20), followed by curses on Mount Gilboa (verse 21), the scene of defeat, condemning it to barrenness. Then, David extols Saul and Jonathan (verses 22–24) as heroes who persevered in battle (verse 22), were strong and swift (verse 23) and joined in death as father and son (verse 23). He called the women of Israel to mourn Saul, who had brought them prosperity and luxury (verse 24). David specially vents his personal grief for Jonathan (verses 25b–26), and the word 'love' echoes once again the covenant of friendship between the two, before the final refrain in verse 27. "I am distressed for you, my brother Jonathan;" "You have been very pleasant to me;" "Your love to me was wonderful," "Surpassing the love of women." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=69841081
69845004
Conical refraction
Optical phenomenon Conical refraction is an optical phenomenon in which a ray of light, passing through a biaxial crystal along certain directions, is transformed into a hollow cone of light. There are two possible conical refractions, one internal and one external. For internal refraction, there are 4 directions, and for external refraction, there are 4 other directions. For internal conical refraction, a planar wave of light enters, through an aperture, a slab of biaxial crystal whose face is parallel to the plane of the wave. Inside the slab, the light splits into a hollow cone of light rays. Upon exiting the slab, the hollow cone turns into a hollow cylinder. For external conical refraction, light is focused at a single point aperture on the slab of biaxial crystal, and exits the slab at the other side at an exit point aperture. Upon exiting, the light splits into a hollow cone. This effect was predicted in 1832 by William Rowan Hamilton and subsequently observed by Humphrey Lloyd in the next year. It was possibly the first example of a phenomenon predicted by mathematical reasoning and later confirmed by experiment. History. The phenomenon of double refraction was discovered in Iceland spar (calcite) by Erasmus Bartholin in 1669, and was initially explained by Christiaan Huygens using a wave theory of light. The explanation was a centerpiece of his "Treatise on Light" (1690). However, his theory was limited to uniaxial crystals, and could not account for the behavior of biaxial crystals. In 1813, David Brewster discovered that topaz has two axes of no double refraction, and subsequently others, such as aragonite, borax and mica, were identified as biaxial. Explaining this was beyond Huygens' theory. In the same period, Augustin-Jean Fresnel developed a more comprehensive theory that could describe double refraction in both uniaxial and biaxial crystals. Fresnel had already derived the equation for the wavevector surface in 1823, and André-Marie Ampère rederived it in 1828. Many others investigated the wavevector surface of the biaxial crystal, but they all missed its physical implications. In particular, Fresnel mistakenly thought the two sheets of the wavevector surface were "tangent" at the singular points (by a mistaken analogy with the case of uniaxial crystals), rather than conoidal. William Rowan Hamilton, in his work on Hamiltonian optics, discovered that the wavevector surface has four conoidal points and four tangent conics. These conoidal points and tangent conics imply that, under certain conditions, a ray of light could be refracted into a cone of light within the crystal. He termed this phenomenon "conical refraction" and predicted two distinct types: internal and external conical refraction, corresponding respectively to the conoidal points and tangent conics. Hamilton announced his discovery at the Royal Irish Academy on October 22, 1832. He then asked Humphrey Lloyd to prove this experimentally. Lloyd observed external conical refraction on 14 December with a specimen of aragonite from the Dollonds, which he published in February. He then observed internal conical refraction and published in March. Lloyd then combined both reports, with added details, into one paper. Lloyd discovered experimentally that the refracted rays are polarized, with a polarization angle half the turning angle (see below), and told Hamilton about it; Hamilton then explained this theoretically. At the same time, Hamilton also exchanged letters with George Biddell Airy. 
Airy had independently discovered that the two sheets touch at conoidal points (rather than being tangent), but he was skeptical that this would have experimental consequences. He was only convinced after Lloyd's report. This discovery was a significant victory for the wave theory of light and solidified Fresnel's theory of double refraction. Lloyd's experimental data are described on pages 350–355. The rays of the internal cone emerged, as they ought, in a cylinder from the second face of the crystal; and the size of this nearly circular cylinder, though small, was decidedly perceptible, so that with solar light it threw on silver paper a little luminous ring, which seemed to remain the same at different distances of the paper from the arragonite. In 1833, James MacCullagh claimed that it was a special case of a theorem he had published in 1830 but had not explicated, since it was not relevant to that particular paper. Cauchy discovered the same surface in the context of classical mechanics. Somebody having remarked, "I know of no person who has not seen conical refraction that really believed in it. I have myself converted a score of mathematicians by showing them the cone of light". Hamilton replied, "How different from me! If I had seen it only, I should not have believed it. My eyes have too often deceived me. I believe it, because I have proved it." Geometric theory. A note on terminology: The surface of wavevectors is also called the wave surface, the surface of normal slowness, the surface of wave slowness, etc. The index ellipsoid was called the surface of elasticity, as according to Fresnel, light waves are transverse waves in an elastic aether, in exact analogy with transverse elastic waves in a material. Surface of wavevectors. For notational cleanness, define formula_0. This surface is also known as the Fresnel wave surface. Consider a biaxial crystal with the three principal refractive indices formula_1. Each possible direction formula_2 of planar waves propagating in the crystal has a certain group velocity formula_3. The refractive index along that direction is defined as formula_4. Now define the surface of wavevectors as the following set of points: formula_5 In general, there are two group velocities along each wavevector direction. To find them, draw the plane perpendicular to formula_2. The two indices are the lengths of the major and minor semi-axes of the ellipse of intersection between this plane and the index ellipsoid. At precisely 4 directions, the intersection is a circle (those are the axes where double refraction disappears, as discovered by Brewster, thus earning them the name of "biaxial"), and the two sheets of the surface of wavevectors collide at a conoidal point. To be more precise, the surface of wavevectors satisfies the following degree-4 equation (, page 346): formula_6 or equivalently, formula_7 &lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;[Proof] The major and minor axes are the solutions to the constraint optimization problem: formula_8 where formula_9 is the matrix with diagonal entries formula_10. Since there are 3 variables and 2 constraints, we can use the Karush–Kuhn–Tucker conditions. That is, the three gradients formula_11 are linearly dependent. Let formula_12, then we have formula_13 Plugging formula_14 back into formula_15, we obtain formula_16 Let formula_17 be the vector with the direction of formula_18 and the length of formula_19. We thus find that the equation of formula_17 is formula_20 Multiplying out the denominators and then multiplying by formula_21, we obtain the result. 
In general, along a fixed direction formula_22, there are two possible wavevectors: the slow wave formula_23 and the fast wave formula_24, where formula_25 is the major semiaxis and formula_26 is the minor. Plugging formula_27 into the equation of formula_17, we obtain a quadratic equation in formula_28: formula_29 which has two solutions formula_30. At exactly four directions, the two wavevectors coincide, because the plane perpendicular to formula_22 intersects the index ellipsoid at a circle. These directions are formula_31 where formula_32, at which point formula_33. Expanding the equation of the surface in a neighborhood of formula_34, we obtain the local geometry of the surface, which is a cone subtended by a circle. Further, there exist 4 planes, each of which is tangent to the surface at an entire circle (a "trope" "conic", as defined later). These planes have equation (, pages 349–350) formula_35 or equivalently, formula_36, and the 4 circles are the intersection of those planes with the ellipsoid formula_37 All 4 circles have radius formula_38. &lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;[Proof]By differentiating its equation, we find that the points on the surface of wavevectors where the tangent plane is parallel to the formula_39 -axis satisfy formula_40 That is, the locus is the union of the formula_41 -plane and an ellipsoid. Thus, such points on the surface of wavevectors fall into two parts: every point with formula_42, and every point that lies on the auxiliary ellipsoid formula_43 Using the equation of the auxiliary ellipsoid to eliminate formula_44 from the equation of the wavevector surface, we obtain another degree-4 equation, which splits into the product of 4 planes: formula_45 Thus, we obtain 4 ellipses: the 4 planar intersections with the auxiliary ellipsoid. These ellipses all lie on the wavevector surface, and the wavevector surface has a tangent plane parallel to the formula_39 axis at those points. By direct computation, these ellipses are circles. It remains to verify that the tangent plane is also parallel to the plane of the circle. Let formula_46 be one of those 4 planes, and let formula_17 be one point on the circle in formula_46. If formula_47, then since the circle is on the surface, the tangent plane formula_48 to the surface at formula_17 must contain the tangent line formula_49 to the circle at formula_17. The plane formula_48 must also contain formula_50, the line passing through formula_17 that is parallel to the formula_39 -axis. Therefore, the plane formula_48 is spanned by formula_50 and formula_49, which is precisely the plane formula_46. This then extends by continuity to the case of formula_42. One can imagine the surface as a prune, with 4 little pits or dimples. Putting the prune on a flat desk, the prune would touch the desk at a circle that covers up a dimple. In summary, the surface of wavevectors has singular points at formula_31 where formula_51. The special tangent plane to the surface touches it at two points that make an angle of formula_52 and formula_53, respectively. The angle of the wave cone, that is, the angle of the cone of internal conical refraction, is formula_54. Note that the cone is an "oblique" cone: the perpendicular from its apex meets its base at a point "on" the circle (instead of at the center of the circle). Surface of ray vectors. The surface of ray vectors is the polar dual surface of the surface of wavevectors. 
Its equation is obtained by replacing formula_55 with formula_56 in the equation for the surface of wavevectors. That is,formula_57All the results above apply with the same modification. The two surfaces are related by their duality: Approximately circular. In typical crystals, the difference between formula_58 is small. In this case, the conoidal point is approximately at the center of the tangent circle surrounding it, and thus, the cone of light (in both the internal and the external refraction cases) is approximately a circular cone. Polarization. In the case of external conical refraction, we have one ray splitting into a cone of planar waves, each corresponding to a point on the tangent circle of the wavevector surface. There is one tangent circle for each of the four quadrants. Take the one with formula_60, then take a point on it. Let the point be formula_61. To find the polarization direction of the planar wave in direction formula_61, take the intersection of the index ellipsoid and the plane perpendicular to formula_61. The polarization direction is the direction of the major axis of the ellipse intersection between the plane perpendicular to formula_61 and the index ellipsoid. Thus, the formula_61 with the highest formula_62 corresponds to a light polarized parallel to the formula_63 direction, and the formula_61 with the lowest formula_62 corresponds to a light polarized in a direction perpendicular to it. In general, rotating along the circle of light by an angle of formula_59 would rotate the polarization direction by approximately formula_64. This means that turning around the cone an entire round would turn the polarization angle by only half a round. This is an early example of the geometric phase. This geometric phase of formula_65 is observable in the difference of the angular momentum of the beam, before and after conical refraction. Algebraic geometry. The surface of wavevectors is defined by a degree-4 algebraic equation, and thus was studied for its own sake in classical algebraic geometry. Arthur Cayley studied the surface in 1849. He described it as a degenerate case of "tetrahedroid quartic surfaces". These surfaces are defined as those that are intersected by four planes, forming a tetrahedron. Each plane intersects the surface at two conics. For the wavevector surface, the tetrahedron degenerates into a flat square. The three vertices of the tetrahedron are conjugate to the two conics within the face they define. The two conics intersect at 4 points, giving 16 singular points. In general, the surface of wavevectors is a Kummer surface, and all properties of it apply. For example: More properties of the surface of wavevectors are in Chapter 10 of the classical reference on Kummer surfaces. Every linear material has a quartic dispersion equation, so its wavevector surface is a Kummer surface, which can have at most 16 singular points. That such a material might exist was proposed in 1910, and in 2016, scientists made such a (meta)material, and confirmed it has 16 directions for conical refraction. Diffraction theory. The classical theory of conical refraction was essentially in the style of geometric optics, and ignores the wave nature of light. Wave theory is needed to explain certain observable phenomena, such as Poggendorff rings, secondary rings, the central spot and its associated rings. In this context, conical refraction is usually named "conical "diffraction"" to emphasize the wave nature of light. Observations. 
The angle of the cone depends on the properties of the crystal, specifically the differences between its principal refractive indices. The effect is typically small, requiring careful experimental setup to observe. Early experiments used sunlight and pinholes to create narrow beams of light, while modern experiments often employ lasers and high-resolution detectors. Poggendorff observed two rings separated by a thin dark band. This was explained by Voigt. See Born and Wolf, section 15.3, for a derivation. Potter observed in 1841 certain diffraction phenomena that were inexplicable with Hamilton's theory. Specifically, if we follow the two rings created by the internal conic refraction, then the inner ring would contract until it becomes a single point, while the outer ring expands indefinitely. A satisfactory explanation required later developments in diffraction theory. Modern developments. The study of conical refraction has continued since its discovery, with researchers exploring its various aspects and implications. Some recent work includes: Conical refraction was also observed in transverse sound waves in quartz. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
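The closed-form quantities from the Geometric theory section, namely the optic-axis angle satisfying sin^2(theta) = b/(a+b) and the internal cone angle arctan(n_y^2 sqrt(a b)), with a = n_x^-2 - n_y^-2 and b = n_y^-2 - n_z^-2, are easy to evaluate numerically. The Python sketch below is only an illustration; the principal refractive indices are rough, aragonite-like values chosen for the example and should not be read as reference data.

import math

# Illustrative principal refractive indices n_x < n_y < n_z (approximate values).
n_x, n_y, n_z = 1.530, 1.681, 1.686

a = n_x**-2 - n_y**-2      # both a and b are positive since n_x < n_y < n_z
b = n_y**-2 - n_z**-2

# Direction of the conoidal points (optic axes): sin^2(theta) = b / (a + b)
theta = math.asin(math.sqrt(b / (a + b)))

# Angle of the cone of internal conical refraction: arctan(n_y^2 * sqrt(a*b))
cone_internal = math.atan(n_y**2 * math.sqrt(a * b))

print("optic-axis angle (degrees):", round(math.degrees(theta), 2))
print("internal cone angle (degrees):", round(math.degrees(cone_internal), 2))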
[ { "math_id": 0, "text": "a = n_x^{-2} - n_y^{-2}, b = n_y^{-2} - n_z^{-2}" }, { "math_id": 1, "text": "n_x < n_y < n_z" }, { "math_id": 2, "text": "\\hat k" }, { "math_id": 3, "text": "v_g(\\hat k)" }, { "math_id": 4, "text": "n(\\hat k) = c / v_g(\\hat k)" }, { "math_id": 5, "text": "\\{n(\\hat k) \\hat k : \\hat k \\in \\text{sphere of radius 1}\\}" }, { "math_id": 6, "text": "(k_x^2 + k_y^2 + k_z^2) (n_x^2k_x^2 + n_y^2k_y^2 + n_z^2k_z^2)\n- (n_y^2 + n_z^2) n_x^2 k_x^2 - (n_z^2 + n_x^2) n_y^2 k_y^2- (n_x^2 + n_y^2) n_z^2 k_z^2\n+ (n_xn_yn_z)^2 = 0 \n" }, { "math_id": 7, "text": "\\sum_i \\frac{n_i^2 k_i^2}{\n\\|k\\|^2\n-n_i^2} = 0" }, { "math_id": 8, "text": "\n\\begin{cases}\nk^T r &= 0 \\quad & k\\perp r \\\\\nr^T M r &= 1 \\quad &r\\text{ is on the index ellipsoid} \\\\\n\\mathrm{exr}(r^T r) & \\quad &\\|r\\| \\text{is max/minimized}\n\\end{cases}\n" }, { "math_id": 9, "text": "M" }, { "math_id": 10, "text": "n_x^{-2}, n_y^{-2}, n_z^{-2}" }, { "math_id": 11, "text": "k, Mr, r" }, { "math_id": 12, "text": "Mr = \\alpha k + \\beta r" }, { "math_id": 13, "text": "\n0 = \\alpha k^T r = r^T M r - \\beta r^T r \\implies \\beta = (r^T r)^{-1}\n" }, { "math_id": 14, "text": "r_x = \\frac{\\alpha k_x}{n_x^{-2} - \\beta}, \\dots" }, { "math_id": 15, "text": "k^T r = 0" }, { "math_id": 16, "text": "\n\\sum_i \\frac{k_i^2}{n_i^{-2} - \\|r\\|^{-2}} = 0\n" }, { "math_id": 17, "text": "\\vec k" }, { "math_id": 18, "text": "(k_x, k_y, k_z)" }, { "math_id": 19, "text": "\\|r\\|" }, { "math_id": 20, "text": "\n\\sum_i \\frac{k_i^2}{n_i^{-2} - \\|\\vec k\\|^{-2}} = 0\n" }, { "math_id": 21, "text": "n_x^2n_y^2n_z^2" }, { "math_id": 22, "text": "\\hat k " }, { "math_id": 23, "text": "n_+ \\hat k" }, { "math_id": 24, "text": "n_- \\hat k" }, { "math_id": 25, "text": "n_+" }, { "math_id": 26, "text": "n_-" }, { "math_id": 27, "text": "\\vec k = n\\hat k" }, { "math_id": 28, "text": "n^2" }, { "math_id": 29, "text": "\n \\left(\\frac{k_x^2}{n_y^2n_z^2} + \\cdots \\right) n^4 - \\left(\\frac{k_x^2}{n_y^2} +\\frac{k_x^2}{n_z^2} + \\cdots \\right) n^2 + 1 =0\n " }, { "math_id": 30, "text": "n_-^2, n_+^2" }, { "math_id": 31, "text": "\\hat k = (\\pm \\cos \\theta, 0, \\pm \\sin \\theta)" }, { "math_id": 32, "text": "\\sin^2 \\theta = \\frac{b}{a+b} " }, { "math_id": 33, "text": "n_- = n_+ = n_y" }, { "math_id": 34, "text": "\\vec k = ( n_y \\cos \\theta, 0, n_y \\sin \\theta)" }, { "math_id": 35, "text": "k_x \\sqrt{n_y^2 - n_x^2} \\pm k_z \\sqrt{n_z^2 - n_y^2} = \\pm n_y \\sqrt{n_z^2 - n_x^2} \n" }, { "math_id": 36, "text": "n_x k_x \\sqrt{a} \\pm n_zk_z\\sqrt{b} = \\pm n_xn_z \\sqrt{a+b}\n" }, { "math_id": 37, "text": "(n_x^2 + n_y^2)k_x^2 + 2n_y^2k_y^2 + (n_z^2 + n_y^2)k_z^2 - (n_x^2 + n_z^2)n_y^2 = 0\n" }, { "math_id": 38, "text": "n_y^{-1}\\sqrt{(n_y^2 - n_x^2) (n_z^2 - n_y^2)} = n_xn_z \\sqrt{ab}" }, { "math_id": 39, "text": "k_y" }, { "math_id": 40, "text": "k_y ((n_x^2 + n_y^2)k_x^2 + 2n_y^2k_y^2 + (n_z^2 + n_y^2)k_z^2 - (n_x^2 + n_z^2)n_y^2) = 0" }, { "math_id": 41, "text": "k_xk_z" }, { "math_id": 42, "text": "k_y = 0" }, { "math_id": 43, "text": "\n (n_x^2 + n_y^2)k_x^2 + 2n_y^2k_y^2 + (n_z^2 + n_y^2)k_z^2 - (n_x^2 + n_z^2)n_y^2 = 0\n " }, { "math_id": 44, "text": "k_y^2" }, { "math_id": 45, "text": "\n k_z \\pm k_x \\sqrt{\\frac{n_y^2 -n_x^2}{n_z^2 - n_y^2}} \\pm n_y \\sqrt{\\frac{n_z^2 -n_x^2}{n_z^2 - n_y^2}}\n " }, { "math_id": 46, "text": "P_0" }, { "math_id": 47, "text": "k_y \\neq 0" }, { "math_id": 48, "text": "P" }, { "math_id": 49, "text": "l" }, { "math_id": 50, "text": "l_y" }, { 
"math_id": 51, "text": "\\theta = \\arctan \\sqrt{b/a} " }, { "math_id": 52, "text": "\\arctan \\frac{n_x}{n_z}\\sqrt{b/a}" }, { "math_id": 53, "text": "\\arctan \\frac{n_z}{n_x}\\sqrt{b/a}" }, { "math_id": 54, "text": "A_{internal} = \\arctan n_y^2 \\sqrt{ab}" }, { "math_id": 55, "text": "n_i" }, { "math_id": 56, "text": "n_i^{-1}" }, { "math_id": 57, "text": "(r_x^2 + r_y^2 + r_z^2) (n_x^{-2}r_x^2 + n_y^{-2}r_y^2 + n_z^{-2}r_z^2)\n- (n_y^{-2} + n_z^{-2}) n_x^{-2} r_x^2 - (n_z^{-2} + n_x^{-2}) n_y^{-2} r_y^2- (n_x^{-2} + n_y^{-2}) n_z^{-2} r_z^2\n+ (n_xn_yn_z)^2 = 0 \n" }, { "math_id": 58, "text": "n_x, n_y, n_z" }, { "math_id": 59, "text": "\\phi" }, { "math_id": 60, "text": "k_x, k_z > 0" }, { "math_id": 61, "text": "\\vec k" }, { "math_id": 62, "text": "k_z" }, { "math_id": 63, "text": "k_y" }, { "math_id": 64, "text": "\\phi/2" }, { "math_id": 65, "text": "\\pi" } ]
https://en.wikipedia.org/wiki?curid=69845004
69847870
Permutoassociahedron
Polytope In mathematics, the permutoassociahedron is an formula_6-dimensional polytope whose vertices correspond to the bracketings of the permutations of formula_7 terms and whose edges connect two bracketings that can be obtained from one another either by moving a pair of brackets using associativity or by transposing two consecutive terms that are not separated by a bracket. The permutoassociahedron was first defined as a CW complex by Mikhail Kapranov, who noted that this structure appears implicitly in Mac Lane's coherence theorem for symmetric and braided categories as well as in Vladimir Drinfeld's work on the Knizhnik–Zamolodchikov equations. It was constructed as a convex polytope by Victor Reiner and Günter M. Ziegler. Examples. When formula_8, the vertices of the permutoassociahedron can be represented by bracketing all the permutations of three terms formula_1, formula_2, and formula_3. There are six such permutations, formula_9, formula_10, formula_11, formula_12, formula_13, and formula_14, and each of them admits two bracketings (obtained from one another by associativity). For instance, formula_9 can be bracketed as formula_15 or as formula_16. Hence, the formula_0-dimensional permutoassociahedron is the dodecagon with vertices formula_16, formula_17, formula_18, formula_19, formula_20, formula_21, formula_22, formula_23, formula_24, formula_25, formula_26, and formula_15. When formula_27, the vertex formula_28 is adjacent to exactly three other vertices of the permutoassociahedron: formula_5, formula_29, and formula_30. The first two vertices are reached from formula_28 via associativity and the third via a transposition. The vertex formula_5 is adjacent to four vertices. Two of them, formula_28 and formula_31, are reached via associativity, and the other two, formula_32 and formula_33, via a transposition. This illustrates that, in dimension formula_4 and above, the permutoassociahedron is not a simple polytope. Properties. The formula_6-dimensional permutoassociahedron has formula_34 vertices. This is the product of the number of permutations of formula_7 terms and the number of possible bracketings of any such permutation. The former is the factorial formula_35 and the latter is the formula_6th Catalan number. By its description in terms of bracketed permutations, the 1-skeleton of the permutoassociahedron is a flip graph with two different kinds of flips (associativity and transpositions). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
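The vertex count stated in the Properties section, n! times the central binomial coefficient, or equivalently (n+1)! times the nth Catalan number, can be checked for small n with a short computation; the following Python lines are only an illustration of that formula.

from math import comb, factorial

def permutoassociahedron_vertices(n: int) -> int:
    # n! * C(2n, n) == (n+1)! * (nth Catalan number)
    return factorial(n) * comb(2 * n, n)

for n in range(1, 6):
    catalan = comb(2 * n, n) // (n + 1)
    assert permutoassociahedron_vertices(n) == factorial(n + 1) * catalan
    print(n, permutoassociahedron_vertices(n))

# n = 2 gives 12 vertices, matching the dodecagon described in the Examples section.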
[ { "math_id": 0, "text": "2" }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "b" }, { "math_id": 3, "text": "c" }, { "math_id": 4, "text": "3" }, { "math_id": 5, "text": "(ab)(cd)" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "n+1" }, { "math_id": 8, "text": "n = 2" }, { "math_id": 9, "text": "abc" }, { "math_id": 10, "text": "acb" }, { "math_id": 11, "text": "bac" }, { "math_id": 12, "text": "bca" }, { "math_id": 13, "text": "cab" }, { "math_id": 14, "text": "cba" }, { "math_id": 15, "text": "(ab)c" }, { "math_id": 16, "text": "a(bc)" }, { "math_id": 17, "text": "a(cb)" }, { "math_id": 18, "text": "(ac)b" }, { "math_id": 19, "text": "(ca)b" }, { "math_id": 20, "text": "c(ab)" }, { "math_id": 21, "text": "c(ba)" }, { "math_id": 22, "text": "(cb)a" }, { "math_id": 23, "text": "(bc)a" }, { "math_id": 24, "text": "b(ca)" }, { "math_id": 25, "text": "b(ac)" }, { "math_id": 26, "text": "(ba)c" }, { "math_id": 27, "text": "n = 3" }, { "math_id": 28, "text": "((ab)c)d" }, { "math_id": 29, "text": "(a(bc))d" }, { "math_id": 30, "text": "((ba)c)d" }, { "math_id": 31, "text": "a(b(cd))" }, { "math_id": 32, "text": "(ba)(cd)" }, { "math_id": 33, "text": "(ab)(dc)" }, { "math_id": 34, "text": "\nn!{2n\\choose n}\n" }, { "math_id": 35, "text": "(n+1)!" } ]
https://en.wikipedia.org/wiki?curid=69847870
6985
Chlorophyll
Green pigments found in plants, algae and bacteria Chlorophyll is any of several related green pigments found in cyanobacteria and in the chloroplasts of algae and plants. Its name is derived from the Greek words , ("pale green") and , ("leaf"). Chlorophyll allows plants to absorb energy from light. Chlorophylls absorb light most strongly in the blue portion of the electromagnetic spectrum as well as the red portion. Conversely, it is a poor absorber of green and near-green portions of the spectrum. Hence chlorophyll-containing tissues appear green because green light, diffusively reflected by structures like cell walls, is less absorbed. Two types of chlorophyll exist in the photosystems of green plants: chlorophyll "a" and "b". History. Chlorophyll was first isolated and named by Joseph Bienaimé Caventou and Pierre Joseph Pelletier in 1817. The presence of magnesium in chlorophyll was discovered in 1906, and was the first detection of that element in living tissue. After initial work done by German chemist Richard Willstätter spanning from 1905 to 1915, the general structure of chlorophyll "a" was elucidated by Hans Fischer in 1940. By 1960, when most of the stereochemistry of chlorophyll "a" was known, Robert Burns Woodward published a total synthesis of the molecule. In 1967, the last remaining stereochemical elucidation was completed by Ian Fleming, and in 1990 Woodward and co-authors published an updated synthesis. Chlorophyll "f" was announced to be present in cyanobacteria and other oxygenic microorganisms that form stromatolites in 2010; a molecular formula of C55H70O6N4Mg and a structure of (2-formyl)-chlorophyll "a" were deduced based on NMR, optical and mass spectra. Photosynthesis. Chlorophyll is vital for photosynthesis, which allows plants to absorb energy from light. Chlorophyll molecules are arranged in and around photosystems that are embedded in the thylakoid membranes of chloroplasts. In these complexes, chlorophyll serves three functions: The two currently accepted photosystem units are photosystem I and photosystem II, which have their own distinct reaction centres, named P700 and P680, respectively. These centres are named after the wavelength (in nanometers) of their red-peak absorption maximum. The identity, function and spectral properties of the types of chlorophyll in each photosystem are distinct and determined by each other and the protein structure surrounding them. The function of the reaction center of chlorophyll is to absorb light energy and transfer it to other parts of the photosystem. The absorbed energy of the photon is transferred to an electron in a process called charge separation. The removal of the electron from the chlorophyll is an oxidation reaction. The chlorophyll donates the high energy electron to a series of molecular intermediates called an electron transport chain. The charged reaction center of chlorophyll (P680+) is then reduced back to its ground state by accepting an electron stripped from water. The electron that reduces P680+ ultimately comes from the oxidation of water into O2 and H+ through several intermediates. This reaction is how photosynthetic organisms such as plants produce O2 gas, and is the source for practically all the O2 in Earth's atmosphere. Photosystem I typically works in series with Photosystem II; thus the P700+ of Photosystem I is usually reduced as it accepts the electron, via many intermediates in the thylakoid membrane, by electrons coming, ultimately, from Photosystem II. 
Electron transfer reactions in the thylakoid membranes are complex, however, and the source of electrons used to reduce P700+ can vary. The electron flow produced by the reaction center chlorophyll pigments is used to pump H+ ions across the thylakoid membrane, setting up a proton-motive force (a chemiosmotic potential) used mainly in the production of ATP (stored chemical energy) or to reduce NADP+ to NADPH. NADPH is a universal agent used to reduce CO2 into sugars as well as in other biosynthetic reactions. Reaction center chlorophyll–protein complexes are capable of directly absorbing light and performing charge separation events without the assistance of other chlorophyll pigments, but the probability of that happening under a given light intensity is small. Thus, the other chlorophylls in the photosystem and antenna pigment proteins all cooperatively absorb and funnel light energy to the reaction center. Besides chlorophyll "a", there are other pigments, called accessory pigments, which occur in these pigment–protein antenna complexes. Chemical structure. Several chlorophylls are known. All are defined as derivatives of the parent chlorin by the presence of a fifth, ketone-containing ring beyond the four pyrrole-like rings. Most chlorophylls are classified as chlorins, which are reduced relatives of porphyrins (found in hemoglobin). They share a common biosynthetic pathway with porphyrins, including the precursor uroporphyrinogen III. Unlike hemes, which contain iron bound to the N4 center, most chlorophylls bind magnesium. The axial ligands attached to the Mg2+ center are often omitted for clarity. Appended to the chlorin ring are various side chains, usually including a long phytyl chain. The most widely distributed form in terrestrial plants is chlorophyll "a". The only difference between chlorophyll "a" and chlorophyll "b" is that the former has a methyl group where the latter has a formyl group. This difference causes a considerable difference in the absorption spectrum, allowing plants to absorb a greater portion of visible light. The various chlorophylls differ in the substituents on the chlorin ring and in their side chains. Chlorophyll "e" is reserved for a pigment that was extracted from algae in 1966 but has not been chemically described. Besides the lettered chlorophylls, a wide variety of sidechain modifications to the chlorophyll structures are known in the wild. For example, "Prochlorococcus", a cyanobacterium, uses 8-vinyl Chl "a" and "b". Measurement of chlorophyll content. Chlorophylls can be extracted from the protein into organic solvents. In this way, the concentration of chlorophyll within a leaf can be estimated. Methods also exist to separate chlorophyll "a" and chlorophyll "b". In diethyl ether, chlorophyll "a" has approximate absorbance maxima of 430 nm and 662 nm, while chlorophyll "b" has approximate maxima of 453 nm and 642 nm. The absorption peaks of chlorophyll "a" are at 465 nm and 665 nm. Chlorophyll "a" fluoresces at 673 nm (maximum) and 726 nm. The peak molar absorption coefficient of chlorophyll "a" exceeds 10^5 M−1 cm−1, which is among the highest for small-molecule organic compounds. In 90% acetone-water, the peak absorption wavelengths of chlorophyll "a" are 430 nm and 664 nm; peaks for chlorophyll "b" are 460 nm and 647 nm; peaks for chlorophyll "c1" are 442 nm and 630 nm; peaks for chlorophyll "c2" are 444 nm and 630 nm; peaks for chlorophyll "d" are 401 nm, 455 nm and 696 nm. Ratio fluorescence emission can be used to measure chlorophyll content. 
By exciting chlorophyll "a" fluorescence at a lower wavelength, the ratio of chlorophyll fluorescence emission at two red/far-red wavelengths can provide a linear relationship with chlorophyll content when compared with chemical testing. The ratio "F"735/"F"700 provided a correlation value of "r"2 = 0.96 compared with chemical testing in the range from 41 mg m−2 up to 675 mg m−2. Gitelson also developed a formula for direct readout of chlorophyll content in mg m−2. The formula provided a reliable method of measuring chlorophyll content from 41 mg m−2 up to 675 mg m−2 with a correlation "r"2 value of 0.95. The Dualex is an optical sensor used in plant science and agriculture for the assessment of chlorophyll contents in leaves. This device allows researchers to perform real-time and non-destructive measurements. Biosynthesis. In some plants, chlorophyll is derived from glutamate and is synthesised along a branched biosynthetic pathway that is shared with heme and siroheme. Chlorophyll synthase is the enzyme that completes the biosynthesis of chlorophyll "a": chlorophyllide "a" + phytyl diphosphate formula_0 chlorophyll "a" + diphosphate This conversion forms an ester of the carboxylic acid group in chlorophyllide "a" with the 20-carbon diterpene alcohol phytol. Chlorophyll "b" is made by the same enzyme acting on chlorophyllide "b". The same is known for chlorophyll "d" and "f", both made from corresponding chlorophyllides ultimately made from chlorophyllide "a". In angiosperm plants, the later steps in the biosynthetic pathway are light-dependent. Such plants are pale (etiolated) if grown in darkness. Non-vascular plants and green algae have an additional light-independent enzyme and grow green even in darkness. Chlorophyll is bound to proteins. Protochlorophyllide, one of the biosynthetic intermediates, occurs mostly in the free form and, under light conditions, acts as a photosensitizer, forming free radicals, which can be toxic to the plant. Hence, plants regulate the amount of this chlorophyll precursor. In angiosperms, this regulation is achieved at the step of aminolevulinic acid (ALA), one of the intermediate compounds in the biosynthesis pathway. Plants that are fed ALA accumulate high and toxic levels of protochlorophyllide; so do mutants with a damaged regulatory system. Senescence and the chlorophyll cycle. The process of plant senescence involves the degradation of chlorophyll: for example the enzyme chlorophyllase (EC 3.1.1.14) hydrolyses the phytyl sidechain to reverse the reaction in which chlorophylls are biosynthesised from chlorophyllide "a" or "b". Since chlorophyllide "a" can be converted to chlorophyllide "b" and the latter can be re-esterified to chlorophyll "b", these processes allow cycling between chlorophylls "a" and "b". Moreover, chlorophyll "b" can be directly reduced (via 71-hydroxychlorophyll "a") back to chlorophyll "a", completing the cycle. In later stages of senescence, chlorophyllides are converted to a group of colourless tetrapyrroles known as nonfluorescent chlorophyll catabolites (NCCs). These compounds have also been identified in ripening fruits and they give characteristic autumn colours to deciduous plants. Distribution. The chlorophyll maps show milligrams of chlorophyll per cubic meter of seawater each month. Places where chlorophyll amounts were very low, indicating very low numbers of phytoplankton, are blue. Places where chlorophyll concentrations were high, meaning many phytoplankton were growing, are yellow. 
The observations come from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Aqua satellite. Land is dark gray, and places where MODIS could not collect data because of sea ice, polar darkness, or clouds are light gray. The highest chlorophyll concentrations, where tiny surface-dwelling ocean plants are thriving, are in cold polar waters or in places where ocean currents bring cold water to the surface, such as around the equator and along the shores of continents. It is not the cold water itself that stimulates the phytoplankton. Instead, the cool temperatures are often a sign that the water has welled up to the surface from deeper in the ocean, carrying nutrients that have built up over time. In polar waters, nutrients accumulate in surface waters during the dark winter months when plants cannot grow. When sunlight returns in the spring and summer, the plants flourish in high concentrations. Culinary use. Synthetic chlorophyll is registered as a food additive colorant, and its E number is E140. Chefs use chlorophyll to color a variety of foods and beverages green, such as pasta and spirits. Absinthe gains its green color naturally from the chlorophyll introduced through the large variety of herbs used in its production. Chlorophyll is not soluble in water, and it is first mixed with a small quantity of vegetable oil to obtain the desired solution. Biological use. A 2002 study found that "leaves exposed to strong light contained degraded major antenna proteins, unlike those kept in the dark, which is consistent with studies on the illumination of isolated proteins". This appeared to the authors as support for the hypothesis that "active oxygen species play a role in vivo" in the short-term behaviour of plants. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
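As a rough illustration of the extraction-based measurement described under "Measurement of chlorophyll content" above, the following Python sketch applies the Beer–Lambert law to an absorbance reading taken at the 662 nm red maximum quoted for chlorophyll "a" in diethyl ether. The molar absorption coefficient used here is only an assumed round value, consistent with the statement that the peak coefficient exceeds 10^5 M−1 cm−1; a real assay would use a calibrated, solvent-specific coefficient or an empirical multi-wavelength equation such as Gitelson's.

# Minimal sketch: estimate chlorophyll a concentration from a single
# absorbance reading using the Beer-Lambert law A = epsilon * c * l.
# EPSILON_CHL_A is an assumed round value (the text only states that the
# peak molar absorption coefficient exceeds 10^5 M^-1 cm^-1).

EPSILON_CHL_A = 1.0e5   # molar absorption coefficient, M^-1 cm^-1 (assumed)
PATH_LENGTH_CM = 1.0    # standard cuvette path length, cm

def chlorophyll_a_concentration(absorbance_662nm: float) -> float:
    """Return an approximate chlorophyll a concentration in micromolar (uM)."""
    molar = absorbance_662nm / (EPSILON_CHL_A * PATH_LENGTH_CM)  # mol/L
    return molar * 1e6  # convert to uM

if __name__ == "__main__":
    for a in (0.1, 0.5, 1.0):
        print(f"A(662 nm) = {a:.1f}  ->  ~{chlorophyll_a_concentration(a):.1f} uM")

Separating chlorophylls "a" and "b" quantitatively requires readings at two wavelengths and the corresponding pair of calibrated coefficients; this snippet only shows the shape of the single-pigment calculation.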
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=6985
6985160
Fracture (geology)
Geologic discontinuity feature, often a joint or fault A fracture is any separation in a geologic formation, such as a "joint" or a "fault" that divides the rock into two or more pieces. A fracture will sometimes form a deep fissure or crevice in the rock. Fractures are commonly caused by stress exceeding the rock strength, causing the rock to lose cohesion along its weakest plane. Fractures can provide permeability for fluid movement, such as water or hydrocarbons. Highly fractured rocks can make good aquifers or hydrocarbon reservoirs, since they may possess both significant permeability and fracture porosity. Brittle deformation. Fractures are forms of brittle deformation. There are two types of primary brittle deformation processes. Tensile fracturing results in "joints". "Shear fractures" are the first initial breaks resulting from shear forces exceeding the cohesive strength in that plane. After those two initial deformations, several other types of secondary brittle deformation can be observed, such as "frictional sliding" or "cataclastic flow" on reactivated joints or faults. Most often, fracture profiles will look like either a blade, ellipsoid, or circle. Causes. Fractures in rocks can be formed either due to compression or tension. Fractures due to compression include thrust faults. Fractures may also result from shear or tensile stress. Some of the primary mechanisms are discussed below. Modes. First, there are three modes of fractures that occur (regardless of mechanism): Mode I (opening or tensile mode), Mode II (in-plane shear or sliding mode), and Mode III (out-of-plane shear or tearing mode). For more information on this, see fracture mechanics. Tensile fractures. Rocks contain many pre-existing cracks where development of tensile fracture, or Mode I fracture, may be examined. The first form is in "axial stretching." In this case a remote tensile stress, σn, is applied, allowing microcracks to open slightly throughout the tensile region. As these cracks open up, the stresses at the crack tips intensify, eventually exceeding the rock strength and allowing the fracture to propagate. This can occur at times of rapid overburden erosion. Folding also can provide tension, such as along the top of an anticlinal fold axis. In this scenario the tensile forces associated with the stretching of the upper half of the layers during folding can induce tensile fractures parallel to the fold axis. Another, similar tensile fracture mechanism is "hydraulic fracturing". In a natural environment, this occurs when rapid sediment compaction, thermal fluid expansion, or fluid injection causes the pore fluid pressure, σp, to exceed the pressure of the least principal normal stress, σn. When this occurs, a tensile fracture opens perpendicular to the plane of least stress.[4] Tensile fracturing may also be induced by applied compressive loads, σn, along an axis such as in a Brazilian disk test. This applied compression force results in "longitudinal splitting." In this situation, tiny tensile fractures form parallel to the loading axis while the load also forces any other microfractures closed. To picture this, imagine an envelope with loading from the top. When a load is applied on the top edge, the sides of the envelope open outward, even though nothing was pulling on them. Rapid deposition and compaction can sometimes induce these fractures. Tensile fractures are almost always referred to as "joints", which are fractures where no appreciable slip or shear is observed. To fully understand the effects of applied tensile stress around a crack in a brittle material such as rock, fracture mechanics can be used. 
The concept of fracture mechanics was initially developed by A. A. Griffith during World War I. Griffith looked at the energy required to create new surfaces by breaking material bonds versus the elastic strain energy of the stretched bonds released. By analyzing a rod under uniform tension Griffith determined an expression for the critical stress at which a favorably orientated crack will grow. The critical stress at fracture is given by formula_0 where γ = surface energy associated with broken bonds, E = Young's modulus, and a = half crack length. Fracture mechanics has since been generalized so that γ represents the energy dissipated in fracture, not just the energy associated with the creation of new surfaces. Linear elastic fracture mechanics. Linear elastic fracture mechanics (LEFM) builds off the energy balance approach taken by Griffith but provides a more generalized approach for many crack problems. LEFM investigates the stress field near the crack tip and bases fracture criteria on stress field parameters. One important contribution of LEFM is the stress intensity factor, K, which is used to predict the stress at the crack tip. The stress field is given by formula_1 where formula_2 is the stress intensity factor for Mode I, II, or III cracking and formula_3 is a dimensionless quantity that varies with applied load and sample geometry. As the stress field gets close to the crack tip, i.e. formula_4, formula_3 becomes a fixed function of formula_5. With knowledge of the geometry of the crack and applied far field stresses, it is possible to predict the crack tip stresses, displacement, and growth. Energy release rate is defined to relate K to the Griffith energy balance as previously defined. In both LEFM and energy balance approaches, the crack is assumed to be cohesionless behind the crack tip. This presents a problem for geological applications such as a fault, where friction exists over the whole fault surface. Overcoming friction absorbs some of the energy that would otherwise go to crack growth. This means that for Modes II and III crack growth, LEFM and energy balances represent local stress fractures rather than global criteria. Crack formation and propagation. Cracks in rock do not form a smooth path like a crack in a car windshield or a highly ductile crack like a ripped plastic grocery bag. Rocks are polycrystalline materials, so cracks grow through the coalescing of complex microcracks that occur in front of the crack tip. This area of microcracks is called the brittle process zone. Consider a simplified 2D shear crack: the shear crack propagates when tensile cracks grow perpendicular to the direction of the least principal stresses. The tensile cracks propagate a short distance then become stable, allowing the shear crack to propagate. This type of crack propagation should only be considered an example. Fracture in rock is a 3D process with cracks growing in all directions. It is also important to note that once the crack grows, the microcracks in the brittle process zone are left behind, leaving a weakened section of rock. This weakened section is more susceptible to changes in pore pressure and dilatation or compaction. Note that this description of formation and propagation considers temperatures and pressures near the Earth's surface. Rocks deep within the earth are subject to very high temperatures and pressures. 
This causes them to behave in the semi-brittle and plastic regimes which result in significantly different fracture mechanisms. In the plastic regime cracks act like a plastic bag being torn. In this case stress at crack tips goes to two mechanisms, one which will drive propagation of the crack and the other which will blunt the crack tip. In the brittle-ductile transition zone, material will exhibit both brittle and plastic traits with the gradual onset of plasticity in the polycrystalline rock. The main form of deformation is called cataclastic flow, which will cause fractures to fail and propagate due to a mixture of brittle-frictional and plastic deformations. Joint types. Describing joints can be difficult, especially without visuals. The following are descriptions of typical natural fracture joint geometries that might be encountered in field studies: Faults and shear fractures. Faults are another form of fracture in a geologic environment. In any type of faulting, the active fracture experiences shear failure, as the faces of the fracture slip relative to each other. As a result, these fractures seem like large scale representations of Mode II and III fractures; however, that is not necessarily the case. On such a large scale, once the shear failure occurs, the fracture begins to curve its propagation towards the same direction as the tensile fractures. In other words, the fault typically attempts to orient itself perpendicular to the plane of least principal stress. This results in an out-of-plane shear relative to the initial reference plane. Therefore, these cannot necessarily be qualified as Mode II or III fractures. An additional, important characteristic of shear-mode fractures is the process by which they spawn "wing cracks", which are tensile cracks that form at the propagation tip of the shear fractures. As the faces slide in opposite directions, tension is created at the tip, and a mode I fracture is created in the direction of the "σh-max", which is the direction of maximum principal stress. "Shear-failure criteria" is an expression that attempts to describe the stress at which a shear rupture creates a crack and separation. This criterion is based largely off of the work of Charles Coulomb, who suggested that as long as all stresses are compressive, as is the case in shear fracture, the shear stress is related to the normal stress by: "σs" = C + μ(σn − σf), where C is the "cohesion" of the rock, or the shear stress necessary to cause failure given that the normal stress across that plane equals 0. "μ" is the coefficient of internal friction, which serves as a constant of proportionality within geology. σn is the normal stress across the fracture at the instant of failure, and σf represents the pore fluid pressure. It is important to point out that pore fluid pressure has a significant impact on shear stress, "especially" where pore fluid pressure approaches "lithostatic pressure", which is the normal pressure induced by the weight of the overlying rock. This relationship serves to provide the "Coulomb failure envelope" within the Mohr-Coulomb Theory. "Frictional sliding" is one aspect for consideration during shear fracturing and faulting. The shear force parallel to the plane must overcome the frictional force to move the faces of the fracture across each other. In fracturing, frictional sliding typically only has significant effects on the reactivation of existing shear fractures. For more information on frictional forces, see friction. 
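The Coulomb shear-failure criterion quoted above lends itself to a quick numerical check. The sketch below evaluates "σs" = C + μ(σn − σf) for a plane held under fixed normal and shear stress while the pore fluid pressure is varied; all numerical values are illustrative assumptions rather than measured rock properties.

# Minimal sketch of the Coulomb shear-failure criterion described above:
#   sigma_s = C + mu * (sigma_n - sigma_f)
# It checks whether a resolved shear stress on a plane is enough to cause
# shear failure, and shows how raising pore fluid pressure lowers the
# effective normal stress and promotes failure. All numbers are assumed.

def coulomb_failure_stress(cohesion_mpa, mu, sigma_n_mpa, pore_pressure_mpa):
    """Shear stress (MPa) required for failure on a plane."""
    return cohesion_mpa + mu * (sigma_n_mpa - pore_pressure_mpa)

def will_fail(shear_stress_mpa, cohesion_mpa, mu, sigma_n_mpa, pore_pressure_mpa):
    """True if the applied shear stress reaches the Coulomb failure stress."""
    return shear_stress_mpa >= coulomb_failure_stress(
        cohesion_mpa, mu, sigma_n_mpa, pore_pressure_mpa)

if __name__ == "__main__":
    C, MU = 10.0, 0.6            # assumed cohesion (MPa) and internal friction
    SIGMA_N, TAU = 50.0, 35.0    # assumed normal and shear stress (MPa)
    for p_fluid in (0.0, 10.0, 25.0):
        required = coulomb_failure_stress(C, MU, SIGMA_N, p_fluid)
        print(f"pore pressure {p_fluid:5.1f} MPa: "
              f"failure needs {required:5.1f} MPa shear -> "
              f"{'fails' if TAU >= required else 'holds'}")

Raising the pore fluid pressure lowers the effective normal stress term (σn − σf), so the same applied shear stress can move a stable plane to failure, which is the effect described in the paragraph above.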
The shear force required to slip an existing fault is less than the force required to fracture and create new faults, as shown by the Mohr-Coulomb diagram. Since the earth is full of existing cracks, for any applied stress many of these cracks are more likely to slip and redistribute stress than a new crack is to initiate. A Mohr diagram provides a visual example. For a given stress state in the earth, if an existing fault or crack exists orientated anywhere from −α/4 to +α/4, this fault will slip before the strength of the rock is reached and a new fault is formed. While the applied stresses may be high enough to form a new fault, existing fracture planes will slip before fracture occurs. One important idea when evaluating the friction behavior within a fracture is the impact of "asperities", which are the irregularities that stick out from the rough surfaces of fractures. Since both faces have bumps and pieces that stick out, not all of the fracture face is actually touching the other face. The cumulative impact of asperities is a reduction of the "real area of contact", which is important when establishing frictional forces. Subcritical crack growth. Sometimes, it is possible for fluids within the fracture to cause fracture propagation with a much lower pressure than initially required. The reaction between certain fluids and the minerals the rock is composed of can lower the stress required for fracture below the stress required throughout the rest of the rock. For instance, water and quartz can react to form a substitution of OH molecules for the O molecules in the quartz mineral lattice near the fracture tip. Since the OH bond is much weaker than the bond with O, it effectively reduces the necessary tensile stress required to extend the fracture. Engineering considerations. In geotechnical engineering a fracture forms a discontinuity that may have a large influence on the mechanical behavior (strength, deformation, etc.) of soil and rock masses in, for example, tunnel, foundation, or slope construction. Fractures also play a significant role in minerals exploitation. One aspect of the upstream energy sector is the production from naturally fractured reservoirs. There are a good number of naturally fractured reservoirs in the United States, and over the past century, they have provided a substantial boost to the nation's net hydrocarbon production. The key concept is that while low-porosity, brittle rocks may have very little natural storage or flow capability, the rock is subjected to stresses that generate fractures, and these fractures can actually store a very large volume of hydrocarbons, capable of being recovered at very high rates. One of the most famous examples of a prolific naturally fractured reservoir was the Austin Chalk formation in South Texas. The chalk had very little porosity, and even less permeability. However, tectonic stresses over time created one of the most extensive fractured reservoirs in the world. By predicting the location and connectivity of fracture networks, geologists were able to plan horizontal wellbores to intersect as many fracture networks as possible. Many people credit this field for the birth of true horizontal drilling in a developmental context. Another example in South Texas is the Georgetown and Buda limestone formations. Furthermore, the recent rise in prevalence of unconventional reservoirs is actually, in part, a product of natural fractures. 
In this case, these microfractures are analogous to Griffith cracks; however, they can often be sufficient to supply the necessary productivity, especially after completions, to make what used to be marginally economic zones commercially productive with repeatable success. However, while natural fractures can often be beneficial, they can also act as potential hazards while drilling wells. Natural fractures can have very high permeability, and as a result, any differences in hydrostatic balance down the well can result in well control issues. If a higher pressured natural fracture system is encountered, the rapid rate at which formation fluid can flow into the wellbore can cause the situation to rapidly escalate into a blowout, either at surface or in a higher subsurface formation. Conversely, if a lower pressured fracture network is encountered, fluid from the wellbore can flow very rapidly into the fractures, causing a loss of hydrostatic pressure and creating the potential for a blowout from a formation further up the hole. Fracture modeling. Since the mid-1980s, 2D and 3D computer modeling of fault and fracture networks has become common practice in Earth Sciences. This technology became known as "DFN" ("discrete fracture network") modeling, later modified into "DFFN" ("discrete fault and fracture network") modeling. The technology consists of defining the statistical variation of various parameters such as size, shape, and orientation and modeling the fracture network in space in a semi-probabilistic way in two or three dimensions; a toy generator in this spirit is sketched below. Computer algorithms and speed of calculation have become sufficiently capable of capturing and simulating the complexities and geological variabilities in three dimensions, manifested in what became known as the "DMX Protocol". Fracture terminology. A list of fracture related terms: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
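The toy generator referenced in the fracture modeling paragraph above is sketched here. It draws fracture centres, orientations and lengths from simple statistical distributions in 2D; the particular distributions, parameter values and function names are illustrative assumptions, whereas production DFN software calibrates such statistics against outcrop, borehole and seismic data.

# Toy 2D discrete fracture network (DFN) generator: fracture centers,
# orientations and lengths are drawn from simple statistical distributions,
# in the spirit of the semi-probabilistic approach described above.
import math
import random

def generate_dfn(n_fractures=50, domain=(0.0, 100.0), mean_strike_deg=45.0,
                 strike_spread_deg=15.0, min_len=1.0, exponent=1.8, seed=0):
    """Return a list of fracture segments ((x1, y1), (x2, y2)) in metres."""
    rng = random.Random(seed)
    lo, hi = domain
    fractures = []
    for _ in range(n_fractures):
        cx, cy = rng.uniform(lo, hi), rng.uniform(lo, hi)
        strike = math.radians(rng.gauss(mean_strike_deg, strike_spread_deg))
        # Power-law (Pareto) length distribution, a common DFN assumption.
        length = min_len * rng.paretovariate(exponent)
        dx, dy = 0.5 * length * math.cos(strike), 0.5 * length * math.sin(strike)
        fractures.append(((cx - dx, cy - dy), (cx + dx, cy + dy)))
    return fractures

if __name__ == "__main__":
    net = generate_dfn()
    lengths = [math.dist(a, b) for a, b in net]
    print(f"{len(net)} fractures, mean length {sum(lengths)/len(lengths):.1f} m")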
[ { "math_id": 0, "text": "\\sigma_f=({2E\\gamma \\over \\pi a})^{1/2}" }, { "math_id": 1, "text": "\\sigma_{ij}(r, \\theta)={K\\over(2\\pi r)^{1/2}}f_{ij}(\\theta)" }, { "math_id": 2, "text": " K" }, { "math_id": 3, "text": " f_{ij}" }, { "math_id": 4, "text": " r\\rightarrow0" }, { "math_id": 5, "text": " \\theta" } ]
https://en.wikipedia.org/wiki?curid=6985160
69856131
Prophet inequality
Bound on optimal stopping in random sequences In the theory of online algorithms and optimal stopping, a prophet inequality is a bound on the expected value of a decision-making process that handles a sequence of random inputs from known probability distributions, relative to the expected value that could be achieved by a "prophet" who knows all the inputs (and not just their distributions) ahead of time. These inequalities have applications in the theory of algorithmic mechanism design and mathematical finance. Single item. The classical single-item prophet inequality was published by Ulrich Krengel and Louis Sucheston, crediting its tight form to D. J. H. (Ben) Garling. It concerns a process in which a sequence of random variables formula_0 arrive from known distributions formula_1. When each formula_0 arrives, the decision-making process must decide whether to accept it and stop the process, or whether to reject it and go on to the next variable in the sequence. The value of the process is the single accepted variable, if there is one, or zero otherwise. It may be assumed that all variables are non-negative; otherwise, replacing negative values by zero does not change the outcome. This can model, for instance, financial situations in which the variables are offers to buy some indivisible good at a certain price, and the seller must decide which (if any) offer to accept. A prophet, knowing the whole sequence of variables, can obviously select the largest of them, achieving value formula_2 for any specific instance of this process, and expected value formula_3. The prophet inequality states the existence of an online algorithm for this process whose expected value is at least half that of the prophet: formula_4. No algorithm can achieve a greater expected value for all distributions of inputs. One method for proving the single-item prophet inequality is to use a "threshold algorithm" that sets a parameter formula_5 and then accepts the first random variable that is at least as large as formula_5. If the probability that this process accepts an item is formula_6, then its expected value is formula_7 plus the expected excess over formula_5 that the selected variable (if there is one) has. Each variable formula_0 will be considered by the threshold algorithm with probability at least formula_8, and if it is considered will contribute formula_9 to the excess, so by linearity of expectation the expected excess is at least formula_10 Setting formula_5 to the median of the distribution of formula_2, so that formula_11, and adding formula_7 to this bound on expected excess, causes the formula_7 and formula_12 terms to cancel each other, showing that for this setting of formula_5 the threshold algorithm achieves an expected value of at least formula_4. A different threshold, formula_13, also achieves at least this same expected value. Generalizations. Various generalizations of the single-item prophet inequality to other online scenarios are known, and are also called prophet inequalities. Comparison to competitive analysis. Prophet inequalities are related to the competitive analysis of online algorithms, but differ in two ways. First, much of competitive analysis assumes worst case inputs, chosen to maximize the ratio between the computed value and the optimal value that could have been achieved with knowledge of the future, whereas for prophet inequalities some knowledge of the input, its distribution, is assumed to be known. 
And second, in order to achieve a certain competitive ratio, an online algorithm must perform within that ratio of the optimal performance on all inputs. Instead, a prophet inequality only bounds the performance in expectation, allowing some input sequences to produce worse performance as long as the average is good. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
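A small Monte Carlo experiment can make the single-item guarantee concrete. The sketch below implements the median-threshold rule described above for independent uniform random variables and compares its average value against the prophet's expected maximum; the choice of distribution, the number of variables and the trial count are illustrative assumptions, while the factor-1/2 guarantee itself holds for any independent non-negative distributions.

# Monte Carlo sketch of the single-item prophet inequality using the
# median-threshold rule described above. Inputs are i.i.d. uniform [0, 1]
# variables, chosen only for illustration.
import random

def run_trial(n, threshold, rng):
    xs = [rng.random() for _ in range(n)]          # one realization of X_1..X_n
    accepted = next((x for x in xs if x >= threshold), 0.0)
    return accepted, max(xs)

def simulate(n=5, trials=200_000, seed=1):
    rng = random.Random(seed)
    # Median of max(X_1..X_n) for i.i.d. uniforms: solve t^n = 1/2.
    threshold = 0.5 ** (1.0 / n)
    alg_total = prophet_total = 0.0
    for _ in range(trials):
        accepted, best = run_trial(n, threshold, rng)
        alg_total += accepted
        prophet_total += best
    return alg_total / trials, prophet_total / trials

if __name__ == "__main__":
    alg, prophet = simulate()
    print(f"threshold rule: {alg:.3f}   prophet: {prophet:.3f}   "
          f"ratio: {alg / prophet:.3f}  (guarantee: >= 0.5)")

For five uniform variables the threshold works out to (1/2)^(1/5) ≈ 0.87, and in this particular setup the simulated ratio typically lands around 0.55–0.6, comfortably above the guaranteed 1/2.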
[ { "math_id": 0, "text": "X_i" }, { "math_id": 1, "text": "\\mathcal{D}_i" }, { "math_id": 2, "text": "\\max_i X_i" }, { "math_id": 3, "text": "\\mathbb{E}[\\max_i X_i]" }, { "math_id": 4, "text": "\\tfrac12\\mathbb{E}[\\max_i X_i]" }, { "math_id": 5, "text": "\\tau" }, { "math_id": 6, "text": "p" }, { "math_id": 7, "text": "p\\tau" }, { "math_id": 8, "text": "1-p" }, { "math_id": 9, "text": "\\max(X_i-\\tau,0)" }, { "math_id": 10, "text": "\\mathbb{E}\\Bigl[\\sum_i(1-p)\\max(X_i-\\tau,0)\\Bigr]\\ge(1-p)\\bigl(\\mathbb{E}[\\max_i X_i]-\\tau)." }, { "math_id": 11, "text": "p=\\tfrac12" }, { "math_id": 12, "text": "(1-p)(-\\tau)" }, { "math_id": 13, "text": "\\tau=\\tfrac12\\mathbb{E}[\\max_i X_i]" } ]
https://en.wikipedia.org/wiki?curid=69856131
69856455
2 Samuel 2
Second Book of Samuel chapter 2 Samuel 2 is the second chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David becoming king over Judah in Hebron. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel, and a section comprising 2 Samuel 2–8 which deals with the period when David set up his kingdom. Text. This chapter was originally written in the Hebrew language. It is divided into 32 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 5–16, 25–27, 29–32. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The narrative of David's reign in Hebron in 2 Samuel 1:1–5:5 has the following structure: A. Looking back to the final scenes of 1 Samuel (1:1) B. David receives Saul's crown (1:2-12) C. David executes Saul's killer (1:13-16) D. David's lament for Saul and Jonathan (1:17-27) E. Two kings in the land (2:1-3:6) E'. One king in the land: Abner switches sides (3:7-27) D'. David's lament for Abner (3:28-39) C'. David executes Ishbaal's killers (4:1-12) B'. David wears Saul's crown (5:1-3) A'. Looking forward to David's reign in Jerusalem (5:4-5) David's narrative of his ascension to the throne in Hebron is framed by an opening verse that looks backward to the final chapters of 1 Samuel (Saul's death and David's refuge in Ziklag) and closing verses that look forward to David's rule in Jerusalem (2 Samuel 5). The action begins when David received Saul's crown and concludes when he was finally able to wear that crown. David executes the Amalekite who claims to have assisted Saul with his suicide and those who murdered Ishbaal. Two laments were recorded: one for Saul and Jonathan and another shorter one for Abner. At the center are the two key episodes: the existence of two kings in the land (David and Ishbaal), because Joab's forces could not conquer Saul's territory on the battlefield. However, this was resolved when Ishbaal foolishly challenged Abner's loyalty, causing Abner to switch sides that eventually brought Saul's kingdom under Davidic rule. David anointed king of Judah in Hebron (2:1–7). David began his move with an enquiry to God and obeyed God's instruction to reside in Hebron, where David had obtained a power-base by his marriage to Abigail, the widow of Nabal (1 Samuel 25:3) and had sent gifts to its inhabitants from the spoils after his victory over the Amalekites (1 Samuel 30:31). 
It was also in Hebron, apparently the main town in the region, that David was 'anointed king over Judah' (verse 4), as a confirmation of the previous anointing by Samuel (1 Samuel 16:13), this time by 'the people of Judah', and later also by 'the elders of Israel' (2 Samuel 5:3). David had additionally secured support in northern areas with the marriages to Ahinoam of Jezreel and then to Maacah, daughter of Talmai of Geshur. David's overtures to the men of Jabesh-gilead, who had been loyal to Saul (verses 4b-7), were aimed at establishing a relationship with that area, replacing the one that ended with the death of Saul. "Then the men of Judah came, and there they anointed David king over the house of Judah." "And they told David, saying, "The men of Jabesh Gilead were the ones who buried Saul."" Ish-bosheth made king of Israel in Mahanaim (2:8–11). David's move was obviously a direct challenge to the house of Saul, which still had special ties with Gilead, Jezreel, and Geshur, together with other northern territories. Abner, Saul's cousin, made "Ishbaal" (a reading in the Greek versions, for the Hebrew "Ishbosheth", 'man of shame'), Saul's remaining son, king of Israel in Mahanaim (in the area of Gilead), which is east of the Jordan river, because the Philistines controlled the territory west of the Jordan, so the list of Ishbaal's domain was considered idealistic. "Ish-bosheth, Saul’s son, was forty years old when he began to reign over Israel, and he reigned two years. But the house of Judah followed David." Battle of Gibeon (2:12–32). Inevitably a civil war broke out between Israel and Judah, as both armies faced off at Gibeon. First, Abner suggested a contest between twelve young men from each side, but the result was inconclusive as all contestants were killed (verse 16). The subsequent more general battle was to David's advantage as the Saulide army was forced to retreat (verse 17), with a significant incident in which Asahel, the youngest brother of Joab and Abishai, the sons of Zeruiah (a sister of David; 1 Chronicles 2:16), was killed by Abner for refusing to stop pursuing him. Joab and Abishai's continuing pursuit was halted when Abner reminded the people of 'their bond of kinship' (verse 26), so the hostilities, even with obvious advantage for David's side (verse 31), ceased, and the armies returned to their bases. Nonetheless, Joab was determined to avenge Asahel's death (3:17) when the opportunity came, and David felt unable to restrain the violence of the sons of Zeruiah (3:39). "Now the three sons of Zeruiah were there: Joab and Abishai and Asahel. And Asahel was as fleet of foot as a wild gazelle." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=69856455
69856492
2 Samuel 3
Second Book of Samuel chapter 2 Samuel 3 is the third chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's reign in Hebron. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel, and a section comprising 2 Samuel 2–8 which deals with the period when David set up his kingdom. Text. This chapter was originally written in the Hebrew language. It is divided into 39 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 1–15, 17, 21, 23–25, 27–39. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The narrative of David's reign in Hebron in 2 Samuel 1:1–5:5 has the following structure: A. Looking back to the final scenes of 1 Samuel (1:1) B. David receives Saul's crown (1:2-12) C. David executes Saul's killer (1:13-16) D. David's lament for Saul and Jonathan (1:17-27) E. Two kings in the land (2:1-3:6) E'. One king in the land: Abner switches sides (3:7-27) D'. David's lament for Abner (3:28-39) C'. David executes Ishbaal's killers (4:1-12) B'. David wears Saul's crown (5:1-3) A'. Looking forward to David's reign in Jerusalem (5:4-5) David's narrative of his ascension to the throne in Hebron is framed by an opening verse that looks backward to the final chapters of 1 Samuel (Saul's death and David's refuge in Ziklag) and closing verses that look forward to David's rule in Jerusalem (2 Samuel 5). The action begins when David received Saul's crown and concludes when he was finally able to wear that crown. David executes the Amalekite who claims to have assisted Saul with his suicide and those who murdered Ishbaal. Two laments were recorded: one for Saul and Jonathan and another shorter one for Abner. At the center are the two key episodes: the existence of two kings in the land (David and Ishbaal), because Joab's forces could not conquer Saul's territory on the battlefield. However, this was resolved when Ishbaal foolishly challenged Abner's loyalty, causing Abner to switch sides that eventually brought Saul's kingdom under Davidic rule. The House of David strengthened (3:1–5). After the temporarily suspended hostility (2 Samuel 2:28), the struggle between the houses of David and Saul continued for around two years (2 Samuel 2:10), but throughout the period of time, David became stronger (verse 1; a continuing theme of 2 Samuel 2:30–31), as demonstrated by the list of sons born to him at Hebron (verses 2–5). "Now there was a long war between the house of Saul and the house of David. But David grew stronger and stronger, and the house of Saul grew weaker and weaker." Abner joins David (3:6–21). 
Ish-bosheth's quarrel with Abner concerned his alleged relationship with Rizpah, one of Saul's concubines and the mother of two of his sons (2 Samuel 21:8). With his stature in court increasing, Abner's action could be perceived as an open bid for Ishbaal's throne (cf. 1 Kings 2:13–25, where Adonijah made a similar bid on Abishag, David's concubine, for Solomon's throne and 2 Samuel 16:20–23, where Absalom openly visited David's harem). Abner replied to the accusation angrily and defiantly, without admitting that he was in the wrong, but dismissed the affair as insignificant in comparison with the loyalty he had shown to the house of Saul (verse 8). After this, Abner sent a message to David at Hebron, seeking a pact (a 'covenant') that would transfer Israelite territories (now under Ishbaal) to David. David set his own conditions: the return of Michal, Saul's daughter, with political implications for David's legitimacy to claim Saul's throne. As Michal was forced to marry another man, the prohibition of remarriage in Deuteronomy 24:1–4 does not apply here, and for this reason (and Abner's influence in court), Ish-bosheth complied with David's request (verses 15–16). Abner successfully negotiated with both sides: the senior leaders of Israel, who were dissatisfied with Ishbaal and hoped to withstand the Philistines with David as had happened in the past, and also secured the support of Saul's tribe and his own, the Benjaminites. His successes led to a big feast with David, probably on the occasion of sealing the covenant. "And Abner conferred with the elders of Israel, saying, "For some time past you have been seeking David as king over you."" Death of Abner (3:22–39). Joab's private meeting with Abner (verses 22–27) was due to a combination of reasons, from doubting Abner's sincerity (verse 25), removing a competitor to the position of main commander, to the most relevant one, blood-revenge for the death of Asahel (verses 27, 30), with the clear emphasis that David had no part in Abner's death. Abner was reported to have departed in peace from David (verses 21, 22, 24), and David did not know of Joab's plan (verse 26). David's claim of being guiltless was accompanied by his curse upon the guilty Joab (verses 28–29), by David's public display of grief (verses 31–32), and a touching tribute to Abner (verses 33–34), also by noting the inability of David to resist the violence of the sons of Zeruiah (verse 39). It has been a consistent theme in the books that David was God's chosen to be king, and he was not involved in any of the violent actions that eventually brought him to the throne. "So Joab and Abishai his brother killed Abner, because he had killed their brother Asahel at Gibeon in the battle." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=69856492
69856496
2 Samuel 4
Second Book of Samuel chapter 2 Samuel 4 is the fourth chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's reign in Hebron. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel, and a section comprising 2 Samuel 2–8 which deals with the period when David set up his kingdom. Text. This chapter was originally written in the Hebrew language. It is divided into 12 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 1–4, 9–12. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The narrative of David's reign in Hebron in 2 Samuel 1:1–5:5 has the following structure: A. Looking back to the final scenes of 1 Samuel (1:1) B. David receives Saul's crown (1:2–12) C. David executes Saul's killer (1:13–16) D. David's lament for Saul and Jonathan (1:17-27) E. Two kings in the land (2:1–3:6) E'. One king in the land: Abner switches sides (3:7–27) D'. David's lament for Abner (3:28–39) C'. David executes Ishbaal's killers (4:1–12) B'. David wears Saul's crown (5:1–3) A'. Looking forward to David's reign in Jerusalem (5:4–5) David's narrative of his ascension to the throne in Hebron is framed by an opening verse that looks backward to the final chapters of 1 Samuel (Saul's death and David's refuge in Ziklag) and closing verses that look forward to David's rule in Jerusalem (2 Samuel 5). The action begins when David received Saul's crown and concludes when he was finally able to wear that crown. David executes the Amalekite who claims to have assisted Saul with his suicide and those who murdered Ishbaal. Two laments were recorded: one for Saul and Jonathan and another shorter one for Abner. At the center are the two key episodes: the existence of two kings in the land (David and Ishbaal), because Joab's forces could not conquer Saul's territory on the battlefield. However, this was resolved when Ishbaal foolishly challenged Abner's loyalty, causing Abner to switch sides that eventually brought Saul's kingdom under Davidic rule. Death of Ish-bosheth (4:1–7). Abner was so powerful that he virtually ruled Israel (cf. 2:8-9; 3:6), so his death caused confusion and uncertainty for the whole kingdom, including its king, leading to a plot to assassinate Ish-bosheth by two army officers of Israel, Baanah and Rechab, whose lineage is detailed in verses 2–3 and who came from Beeroth, which 'was considered to belong to Benjamin' (verse 2, cf. Joshua 18:25). The two assassins got into Ishbaal's house at noon, when he was taking a siesta, on the pretext of "taking wheat" to be allowed to enter, quickly committed their gruesome task and ran away with Ish-bosheth's severed head (verse 7). 
A brief note is given in verse 4 about Jonathan's son, Mephibosheth (or Meribaal, cf. 1 Chronicles 8:34; 9:40), which links to 2 Samuel 9:1–13, apparently to show that, besides Ish-bosheth, there was no more serious contender for the throne from the house of Saul; Mephibosheth himself was only a minor ('five years old') at that time and a cripple. "And Saul's son had two men that were captains of bands: the name of the one was Baanah, and the name of the other Rechab, the sons of Rimmon a Beerothite, of the children of Benjamin: (for Beeroth also was reckoned to Benjamin:" "Jonathan, Saul’s son, had a son who was lame in his feet. He was five years old when the news about Saul and Jonathan came from Jezreel; and his nurse took him up and fled. And it happened, as she made haste to flee, that he fell and became lame. His name was Mephibosheth." Execution of Rechab and Baanah (4:8–12). The two assassins brought Ish-bosheth's head straight to David, claiming to have avenged David on Saul, who was described as an 'enemy' for Saul had sought David's life (verse 8). David immediately distanced himself from their action, as he had consistently shown great respect for a reigning monarch and did not wish to seize the throne, because being YHWH's elect, he was to advance naturally to be a king without having to stoop to violence. His attitude was made explicit in verses 9–11, recalling how he had commanded the killing of the Amalekite who claimed to have killed Saul, so he had to punish these assassins with death, for they had 'killed a righteous man on his bed in his own house' (verses 11–12). Thus, the narratives demonstrate that David was totally innocent of the assassinations that brought him to the throne. "And David commanded his young men, and they slew them, and cut off their hands and their feet, and hanged them up over the pool in Hebron. But they took the head of Ishbosheth, and buried it in the sepulchre of Abner in Hebron." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=69856496
698648
Polarization density
Vector field describing the density of electric dipole moments in a dielectric material In classical electromagnetism, polarization density (or electric polarization, or simply polarization) is the vector field that expresses the volumetric density of permanent or induced electric dipole moments in a dielectric material. When a dielectric is placed in an external electric field, its molecules gain electric dipole moment and the dielectric is said to be polarized. Electric polarization of a given dielectric material sample is defined as the quotient of electric dipole moment (a vector quantity, expressed as coulomb-meters (C·m) in SI units) to volume (meters cubed). Polarization density is denoted mathematically by P; in SI units, it is expressed in coulombs per square meter (C/m2). Polarization density also describes how a material responds to an applied electric field as well as the way the material changes the electric field, and can be used to calculate the forces that result from those interactions. It can be compared to magnetization, which is the measure of the corresponding response of a material to a magnetic field in magnetism. Similar to ferromagnets, which have a non-zero permanent magnetization even if no external magnetic field is applied, ferroelectric materials have a non-zero polarization in the absence of an external electric field. Definition. An external electric field that is applied to a dielectric material causes a displacement of bound charged elements. A bound charge is a charge that is associated with an atom or molecule within a material. It is called "bound" because it is not free to move within the material like free charges. Positively charged elements are displaced in the direction of the field, and negatively charged elements are displaced opposite to the direction of the field. The molecules may remain neutral in charge, yet an electric dipole moment forms. For a certain volume element formula_0 in the material, which carries a dipole moment formula_1, we define the polarization density P: formula_2 In general, the dipole moment formula_1 changes from point to point within the dielectric. Hence, the polarization density P of a dielectric inside an infinitesimal volume d"V" with an infinitesimal dipole moment dp is P = dp/d"V" (1). The net charge appearing as a result of polarization is called bound charge and denoted formula_3. This definition of polarization density as a "dipole moment per unit volume" is widely adopted, though in some cases it can lead to ambiguities and paradoxes. Other expressions. Let a volume d"V" be isolated inside the dielectric. Due to polarization the positive bound charge formula_4 will be displaced a distance formula_5 relative to the negative bound charge formula_6, giving rise to a dipole moment formula_7. Substitution of this expression in (1) yields formula_8 Since the charge formula_9 bounded in the volume d"V" is equal to formula_10, the equation for P becomes P = ρb·d (2), where formula_11 is the density of the bound charge in the volume under consideration and d is the displacement defined above. It is clear from the definition above that the dipoles are overall neutral and thus formula_11 is balanced by an equal density of opposite charges within the volume. Charges that are not balanced are part of the free charge discussed below. Gauss's law for the field of "P". 
For a given volume V enclosed by a surface S, the bound charge formula_3 inside it is equal to the flux of P through S taken with the negative sign, or formula_3 = −∮S P · dA (3). &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof Let a surface area S envelope part of a dielectric. Upon polarization negative and positive bound charges will be displaced. Let "d"1 and "d"2 be the distances of the bound charges formula_6 and formula_4, respectively, from the plane formed by the element of area d"A" after the polarization. And let d"V"1 and d"V"2 be the volumes enclosed below and above the area d"A". It follows that the negative bound charge formula_14 moved from the outer part of the surface d"A" inwards, while the positive bound charge formula_15 moved from the inner part of the surface outwards. By the law of conservation of charge the total bound charge formula_16 left inside the volume formula_17 after polarization is: formula_18 Since formula_19 and formula_20 the above equation becomes formula_21 By (2) it follows that formula_22, so we get: formula_23 And by integrating this equation over the entire closed surface "S" we find that formula_12 formula_24 formula_13 which completes the proof. Differential form. By the divergence theorem, Gauss's law for the field P can be stated in "differential form" as: formula_25 where ∇ · P is the divergence of the field P and formula_26 is the bound charge density. &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof By the divergence theorem we have that formula_27 for the volume "V" containing the bound charge formula_3. And since formula_3 is the integral of the bound charge density formula_26 taken over the entire volume "V" enclosed by "S", the above equation yields formula_28 which is true if and only if formula_29. Relationship between the fields of "P" and "E". Homogeneous, isotropic dielectrics. In a homogeneous, linear, non-dispersive and isotropic dielectric medium, the polarization is aligned with and proportional to the electric field E: formula_30 where "ε"0 is the electric constant, and χ is the electric susceptibility of the medium. Note that in this case χ simplifies to a scalar, although more generally it is a tensor. This is a particular case due to the "isotropy" of the dielectric. Taking into account this relation between P and E, equation (3) becomes: formula_31 formula_24 formula_32 The expression in the integral is Gauss's law for the field E, which yields the total charge, both free formula_33 and bound formula_34, in the volume V enclosed by S. Therefore, formula_35 which can be written in terms of free charge and bound charge densities (by considering the relationship between the charges, their volume charge densities and the given volume): formula_36 Since within a homogeneous dielectric there can be no free charges formula_37, by the last equation it follows that there is no bulk bound charge in the material formula_38. And since free charges can get as close to the dielectric as to its topmost surface, it follows that polarization only gives rise to surface bound charge density (denoted formula_39 to avoid ambiguity with the volume bound charge density formula_26). formula_39 may be related to P by the following equation: formula_40 where formula_41 is the normal vector to the surface "S" pointing outwards. (see charge density for the rigorous proof) Anisotropic dielectrics. 
Dielectrics in which the polarization density and the electric field are not in the same direction are known as "anisotropic" materials. In such materials, the i-th component of the polarization is related to the j-th component of the electric field according to: formula_42 This relation shows, for example, that a material can polarize in the x direction by applying a field in the z direction, and so on. The case of an anisotropic dielectric medium is described by the field of crystal optics. As in most electromagnetism, this relation deals with macroscopic averages of the fields and dipole density, so that one has a continuum approximation of the dielectric materials that neglects atomic-scale behaviors. The polarizability of individual particles in the medium can be related to the average susceptibility and polarization density by the Clausius–Mossotti relation. In general, the susceptibility is a function of the frequency ω of the applied field. When the field is an arbitrary function of time t, the polarization is a convolution of the Fourier transform of "χ"("ω") with the E("t"). This reflects the fact that the dipoles in the material cannot respond instantaneously to the applied field, and causality considerations lead to the Kramers–Kronig relations. If the polarization P is not linearly proportional to the electric field E, the medium is termed "nonlinear" and is described by the field of nonlinear optics. To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is usually given by a Taylor series in E whose coefficients are the nonlinear susceptibilities: formula_43 where formula_44 is the linear susceptibility, formula_45 is the second-order susceptibility (describing phenomena such as the Pockels effect, optical rectification and second-harmonic generation), and formula_46 is the third-order susceptibility (describing third-order effects such as the Kerr effect and electric field-induced optical rectification). In ferroelectric materials, there is no one-to-one correspondence between P and E at all because of hysteresis. Polarization density in Maxwell's equations. The behavior of electric fields (E, D), magnetic fields (B, H), charge density (ρ) and current density (J) is described by Maxwell's equations in matter. Relations between E, D and P. In terms of volume charge densities, the free charge density formula_47 is given by formula_48 where formula_49 is the total charge density. By considering the relationship of each of the terms of the above equation to the divergence of their corresponding fields (of the electric displacement field D, E and P in that order), this can be written as: formula_50 This is known as the constitutive equation for electric fields. Here "ε"0 is the electric permittivity of empty space. In this equation, P is the (negative of the) field induced in the material when the "fixed" charges, the dipoles, shift in response to the total underlying field E, whereas D is the field due to the remaining charges, known as "free" charges. In general, P varies as a function of E depending on the medium, as described above. In many problems, it is more convenient to work with D and the free charges than with E and the total charge. Therefore, a polarized medium, by way of Green's theorem, can be split into four components: the bound volume charge density, the bound surface charge density, the free volume charge density, and the free surface charge density. Time-varying polarization density. 
When the polarization density changes with time, the time-dependent bound-charge density creates a "polarization current density" of formula_55 so that the total current density that enters Maxwell's equations is given by formula_56 where Jf is the free-charge current density, and the second term is the magnetization current density (also called the "bound current density"), a contribution from atomic-scale (when they are present). Polarization ambiguity. Crystalline materials. The polarization inside a solid is not, in general, uniquely defined. Because a bulk solid is periodic, one must choose a unit cell in which to compute the polarization (see figure). In other words, two people, Alice and Bob, looking at the same solid, may calculate different values of P, and neither of them will be wrong. For example, if Alice chooses a unit cell with positive ions at the top and Bob chooses the unit cell with negative ions at the top, their computed P vectors will have opposite directions. Alice and Bob will agree on the microscopic electric field E in the solid, but disagree on the value of the displacement field formula_57. On the other hand, even though the value of P is not uniquely defined in a bulk solid, "variations" in P "are" uniquely defined. If the crystal is gradually changed from one structure to another, there will be a current inside each unit cell, due to the motion of nuclei and electrons. This current results in a macroscopic transfer of charge from one side of the crystal to the other, and therefore it can be measured with an ammeter (like any other current) when wires are attached to the opposite sides of the crystal. The time-integral of the current is proportional to the change in P. The current can be calculated in computer simulations (such as density functional theory); the formula for the integrated current turns out to be a type of Berry's phase. The non-uniqueness of P is not problematic, because every measurable consequence of P is in fact a consequence of a continuous change in P. For example, when a material is put in an electric field E, which ramps up from zero to a finite value, the material's electronic and ionic positions slightly shift. This changes P, and the result is electric susceptibility (and hence permittivity). As another example, when some crystals are heated, their electronic and ionic positions slightly shift, changing P. The result is pyroelectricity. In all cases, the properties of interest are associated with a "change" in P. Even though the polarization is "in principle" non-unique, in practice it is often (not always) defined by convention in a specific, unique way. For example, in a perfectly centrosymmetric crystal, P is exactly zero due to symmetry reasoning. This can be seen in a pyroelectric material. Above the Curie temperature the material is not polarized and it has a centrosymmetric configuration. Lowering the temperature below the Curie temperature induces a structural phase transition that breaks the centrosymmetricity. The P of the material grows proportionally to the distortion, thus allowing to define it unambiguously. Amorphous materials. Another problem in the definition of P is related to the arbitrary choice of the "unit volume", or more precisely to the system's "scale". For example, at "microscopic" scale a plasma can be regarded as a gas of "free" charges, thus P should be zero. 
On the contrary, at a "macroscopic" scale the same plasma can be described as a continuous medium, exhibiting a permittivity formula_58 and thus a net polarization P ≠ 0. References and notes. <templatestyles src="Reflist/styles.css" />
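As a numerical illustration of the bound-charge relations above, the following sketch (Python with NumPy; the slab geometry, the value of P0 and all variable names are illustrative assumptions, not taken from the article) discretizes a uniformly polarized slab, evaluates the one-dimensional form of formula_25, and checks that integrating the bound volume charge across each face reproduces the surface density formula_40.

```python
import numpy as np

# Uniformly polarized slab along z: P_z = P0 for |z| < a, zero outside.
# P0, a and the grid are arbitrary illustrative choices.
P0, a = 2.0, 1.0
z = np.linspace(-3.0, 3.0, 6001)
dz = z[1] - z[0]
Pz = np.where(np.abs(z) < a, P0, 0.0)

# Bound volume charge density: rho_b = -dP_z/dz (1D version of rho_b = -div P).
rho_b = -np.gradient(Pz, z)

# Inside the slab rho_b vanishes; all bound charge sits at the two faces.
# Integrating rho_b across each face recovers sigma_b = n_out . P.
top = z > 0.0
print(np.sum(rho_b[top]) * dz)    # ~ +P0: face at z = +a, outward normal +z, sigma_b = +P0
print(np.sum(rho_b[~top]) * dz)   # ~ -P0: face at z = -a, outward normal -z, sigma_b = -P0
```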
[ { "math_id": 0, "text": "\\Delta V" }, { "math_id": 1, "text": "\\Delta\\mathbf p" }, { "math_id": 2, "text": "\\mathbf P = \\frac{\\Delta\\mathbf p}{\\Delta V}" }, { "math_id": 3, "text": "Q_b" }, { "math_id": 4, "text": "\\mathrm d q_b^+" }, { "math_id": 5, "text": "\\mathbf d " }, { "math_id": 6, "text": "\\mathrm d q_b^-" }, { "math_id": 7, "text": " \\mathrm d \\mathbf p = \\mathrm d q_b\\mathbf d" }, { "math_id": 8, "text": "\\mathbf P = {\\mathrm d q_b \\over \\mathrm d V}\\mathbf d " }, { "math_id": 9, "text": "\\mathrm d q_b" }, { "math_id": 10, "text": "\\rho_b \\mathrm d V" }, { "math_id": 11, "text": " \\rho_b " }, { "math_id": 12, "text": "-Q_b = " }, { "math_id": 13, "text": "\\mathbf{P} \\cdot \\mathrm{d}\\mathbf{A}" }, { "math_id": 14, "text": "\\mathrm d q_b^- = \\rho_b^-\\ \\mathrm d V_1 = \\rho_b^- d_1\\ \\mathrm d A" }, { "math_id": 15, "text": "\\mathrm d q_b^+ = \\rho_b\\ \\mathrm d V_2 = \\rho_b d_2\\ \\mathrm d A" }, { "math_id": 16, "text": "\\mathrm d Q_b" }, { "math_id": 17, "text": "\\mathrm d V" }, { "math_id": 18, "text": "\\begin{align}\n\\mathrm{d} Q_b\n& = \\mathrm{d} q_\\text{in} - \\mathrm{d} q_\\text{out} \\\\\n& = \\mathrm{d} q_b^- - \\mathrm{d} q_b^+ \\\\\n& = \\rho_b^- d_1\\ \\mathrm{d} A - \\rho_b d_2\\ \\mathrm{d} A\n\\end{align}" }, { "math_id": 19, "text": "\\rho_b^- = -\\rho_b" }, { "math_id": 20, "text": "\\begin{align}\nd_1 &= (d - a)\\cos(\\theta) \\\\\nd_2 &= a\\cos(\\theta)\n\\end{align}" }, { "math_id": 21, "text": "\\begin{align}\n\\mathrm{d} Q_b\n&= - \\rho_b (d - a)\\cos(\\theta)\\ \\mathrm{d} A - \\rho_b a\\cos(\\theta)\\ \\mathrm{d} A \\\\\n&= - \\rho_b d\\ \\mathrm{d} A \\cos(\\theta) \n\\end{align}" }, { "math_id": 22, "text": "\\rho_b d = P" }, { "math_id": 23, "text": "\\begin{align}\n \\mathrm{d} Q_b &= - P\\ \\mathrm{d} A \\cos(\\theta) \\\\\n -\\mathrm{d} Q_b &= \\mathbf{P} \\cdot \\mathrm{d} \\mathbf{A}\n\\end{align}" }, { "math_id": 24, "text": "\\scriptstyle{S}" }, { "math_id": 25, "text": "-\\rho_b = \\nabla \\cdot \\mathbf P," }, { "math_id": 26, "text": "\\rho_b" }, { "math_id": 27, "text": "-Q_b = \\iiint_V \\nabla \\cdot \\mathbf P\\ \\mathrm{d} V," }, { "math_id": 28, "text": "-\\iiint_V \\rho_b \\ \\mathrm{d} V = \\iiint_V \\nabla \\cdot \\mathbf{P}\\ \\mathrm{d} V ," }, { "math_id": 29, "text": "-\\rho_b = \\nabla \\cdot \\mathbf{P}" }, { "math_id": 30, "text": "\\mathbf{P} = \\chi\\varepsilon_0 \\mathbf E," }, { "math_id": 31, "text": "-Q_b = \\chi\\varepsilon_0\\ " }, { "math_id": 32, "text": "\\mathbf{E} \\cdot \\mathrm{d}\\mathbf{A}" }, { "math_id": 33, "text": "(Q_f)" }, { "math_id": 34, "text": "(Q_b)" }, { "math_id": 35, "text": "\\begin{align}\n -Q_b &= \\chi Q_\\text{total} \\\\\n &= \\chi \\left(Q_f + Q_b\\right) \\\\[3pt]\n \\Rightarrow Q_b &= -\\frac{\\chi}{1 + \\chi} Q_f,\n\\end{align}" }, { "math_id": 36, "text": "\\rho_b = -\\frac{\\chi}{1 + \\chi} \\rho_f" }, { "math_id": 37, "text": "(\\rho_f = 0)" }, { "math_id": 38, "text": "(\\rho_b = 0)" }, { "math_id": 39, "text": "\\sigma_b" }, { "math_id": 40, "text": "\\sigma_b = \\mathbf{\\hat{n}}_\\text{out} \\cdot \\mathbf{P}" }, { "math_id": 41, "text": "\\mathbf{\\hat{n}}_\\text{out}" }, { "math_id": 42, "text": "P_i = \\sum_j \\epsilon_0 \\chi_{ij} E_j ," }, { "math_id": 43, "text": "\\frac{P_i}{\\epsilon_0} = \\sum_j \\chi^{(1)}_{ij} E_j + \\sum_{jk} \\chi_{ijk}^{(2)} E_j E_k + \\sum_{jk\\ell} \\chi_{ijk\\ell}^{(3)} E_j E_k E_\\ell + \\cdots " }, { "math_id": 44, "text": "\\chi^{(1)}" }, { "math_id": 45, "text": "\\chi^{(2)}" }, { "math_id": 46, 
"text": "\\chi^{(3)}" }, { "math_id": 47, "text": "\\rho_f" }, { "math_id": 48, "text": "\\rho_f = \\rho - \\rho_b" }, { "math_id": 49, "text": "\\rho" }, { "math_id": 50, "text": "\\mathbf{D} = \\varepsilon_0\\mathbf{E} + \\mathbf{P}." }, { "math_id": 51, "text": "\\rho_b = -\\nabla \\cdot \\mathbf{P}" }, { "math_id": 52, "text": "\\sigma_b = \\mathbf{\\hat{n}}_\\text{out} \\cdot \\mathbf{P}" }, { "math_id": 53, "text": "\\rho_f = \\nabla \\cdot \\mathbf{D}" }, { "math_id": 54, "text": "\\sigma_f = \\mathbf{\\hat{n}}_\\text{out} \\cdot \\mathbf{D}" }, { "math_id": 55, "text": " \\mathbf{J}_p = \\frac{\\partial \\mathbf{P}}{\\partial t} " }, { "math_id": 56, "text": " \\mathbf{J} = \\mathbf{J}_f + \\nabla\\times\\mathbf{M} + \\frac{\\partial\\mathbf{P}}{\\partial t}" }, { "math_id": 57, "text": "\\mathbf{D} = \\varepsilon_0 \\mathbf{E} + \\mathbf{P}" }, { "math_id": 58, "text": "\\varepsilon(\\omega) \\neq 1" } ]
https://en.wikipedia.org/wiki?curid=698648
69867676
Lewandowski-Kurowicka-Joe distribution
Continuous probability distribution In probability theory and Bayesian statistics, the Lewandowski-Kurowicka-Joe distribution, often referred to as the LKJ distribution, is a probability distribution over positive definite symmetric matrices with unit diagonals. Introduction. The LKJ distribution was first introduced in 2009, in a more general context, by Daniel Lewandowski, Dorota Kurowicka, and Harry Joe. It is an example of the vine copula approach to constructing constrained high-dimensional probability distributions. The distribution has a single shape parameter formula_1, and the probability density function for a formula_2 matrix formula_0 is formula_3 with normalizing constant formula_4, a complicated expression involving a product of Beta functions. For formula_5, the distribution is uniform over the space of all correlation matrices, i.e. the space of positive definite matrices with unit diagonal. Usage. The LKJ distribution is commonly used as a prior for correlation matrices in hierarchical Bayesian modeling. Hierarchical Bayesian modeling often makes inferences about the covariance structure of the data, which can be decomposed into a scale vector and a correlation matrix. Instead of a prior on the covariance matrix, such as the inverse-Wishart distribution, the LKJ distribution can serve as a prior on the correlation matrix, combined with a suitable prior distribution on the scale vector. It has been implemented as part of the Stan probabilistic programming language and as a library linked to the Turing.jl probabilistic programming library in Julia. References. <templatestyles src="Reflist/styles.css" />
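A minimal numerical sketch of the density quoted above is given below (Python with NumPy/SciPy; the function name and the example matrix are hypothetical, and the code simply transcribes the density formula_3 and the normalizing constant formula_4 as stated in this article, so it inherits whatever conventions those expressions use).

```python
import numpy as np
from scipy.special import betaln

def lkj_logpdf(R, eta):
    """Log-density of the LKJ distribution, transcribing p(R; eta) = C * det(R)**(eta - 1)
    with the normalizing constant C quoted in the article."""
    d = R.shape[0]
    log_C = (np.log(2.0) * sum((2 * eta - 2 + d - k) * (d - k) for k in range(1, d + 1))
             + sum((d - k) * betaln(eta + (d - k - 1) / 2, eta + (d - k - 1) / 2)
                   for k in range(1, d)))
    sign, logdet = np.linalg.slogdet(R)   # log det of the correlation matrix
    return log_C + (eta - 1.0) * logdet

# Example: a 3x3 correlation matrix (values chosen arbitrarily for illustration).
R = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
print(lkj_logpdf(R, eta=2.0))
```

For eta = 1 the determinant term drops out, so every correlation matrix receives the same density, matching the uniformity property noted above.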
[ { "math_id": 0, "text": "\\mathbf{R}" }, { "math_id": 1, "text": "\\eta" }, { "math_id": 2, "text": "d\\times d" }, { "math_id": 3, "text": "p(\\mathbf{R}; \\eta) = C \\times [\\det(\\mathbf{R})]^{\\eta-1}" }, { "math_id": 4, "text": "C=2^{\\sum_{k=1}^d (2\\eta - 2 +d - k)(d-k)}\\prod_{k=1}^{d-1}\\left[B\\left(\\eta + (d-k-1)/2, \\eta + (d-k-1)/2\\right)\\right]^{d-k}" }, { "math_id": 5, "text": "\\eta=1" } ]
https://en.wikipedia.org/wiki?curid=69867676
69867945
Bigoni–Piccolroaz yield criterion
Yield criterion The Bigoni–Piccolroaz yield criterion is a yielding model, based on a phenomenological approach, capable of describing the mechanical behavior of a broad class of pressure-sensitive granular materials such as soil, concrete, porous metals and ceramics. General concepts. The idea behind the Bigoni-Piccolroaz criterion is to derive a function capable of transitioning between the yield surfaces typical of different classes of materials simply by changing the function parameters. The reason for this kind of formulation is that the materials the model targets undergo substantial changes during manufacturing and under working conditions. The typical example is the hardening of a powder specimen by compaction and sintering, during which the material changes from granular to dense. The Bigoni-Piccolroaz yielding criterion can be represented in the Haigh–Westergaard stress space as a smooth convex surface; in fact, the criterion itself is based on the mathematical definition of a surface in that space as a suitable interpolation of experimental points. Mathematical formulation. The Bigoni-Piccolroaz yield surface is conceived as a direct interpolation of experimental data. The criterion represents a smooth and convex surface, which is closed both in hydrostatic tension and compression and has a drop-like shape, particularly suited to describing frictional and granular materials. The criterion has also been generalized to the case of surfaces with corners. Design principles. Since the whole idea of the model is to tailor a function to experimental data, the authors have defined a group of features as desirable, even if not essential, among them: Parametric function. The Bigoni–Piccolroaz yield criterion is a seven-parameter surface defined as: formula_2 where p, q and formula_1 are invariants of the stress tensor, while formula_3 is the "meridian" function: formula_4 formula_5 describing the pressure sensitivity, and formula_6 is the "deviatoric" function: formula_7 describing the Lode dependence of yielding. The mathematical definitions of the parameters formula_8 and formula_1 are: formula_9 with: formula_10 where formula_11 is the deviatoric stress, formula_12 is the identity tensor, formula_13 is the stress tensor and the dot indicates the scalar product. A better understanding of these parameters can be gained from their geometrical representation in the Haigh–Westergaard stress space. Considering the space of principal stresses and the deviatoric plane formula_14, orthogonal to the trisector of the first octant and passing through the origin of the coordinate system, the triple formula_0 and formula_1 unequivocally identifies a point in this space, acting as a cylindrical coordinate system with the trisector as its axis: The use of p and q instead of the proper cylindrical coordinates formula_20 and formula_21: formula_22 is justified by their more direct physical interpretation: p is the hydrostatic pressure on the material point and q is the von Mises equivalent stress. The yield function described above corresponds to the yield surface: formula_23 which makes explicit the relation between the two functions formula_24 and formula_6 and the shape of the meridian and deviatoric sections, respectively. The seven non-negative material parameters: formula_25 define the shape of the meridian and deviatoric sections.
In particular, some of the parameters are directly related to mechanical properties: formula_26 controls the pressure sensitivity, while formula_27 and formula_28 are the yield strengths under isotropic tension and compression, respectively. The other parameters define the shape of the surface when intersected by the meridian and deviatoric planes: formula_29 and formula_30 define the meridian section, and formula_31 and formula_32 define the deviatoric section. Related yielding criteria. Because it was designed to allow consistent changes of the surface shape in the Haigh–Westergaard stress space, the Bigoni-Piccolroaz yield surface can be used as a generalized formulation for several criteria, such as the well-known von Mises, Tresca and Mohr–Coulomb criteria. External links. The Bigoni-Piccolroaz yield surface is a powerful instrument for the characterization of granular materials, and it attracts great interest in the field of constitutive modelling for ceramics, rock and soil, a task of fundamental importance for the better design of products made of these materials. References. <templatestyles src="Reflist/styles.css" />
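The definitions above translate directly into a short evaluation routine. The following sketch (Python with NumPy; the material parameters and the stress state are arbitrary illustrative values, not calibrated data) computes the invariants p, q and formula_1 from a stress tensor and evaluates the meridian function, the deviatoric function and the yield function exactly as written in the formulas of this article.

```python
import numpy as np

def invariants(sigma):
    """Stress invariants p, q, theta as defined above."""
    p = -np.trace(sigma) / 3.0
    S = sigma - np.trace(sigma) / 3.0 * np.eye(3)        # deviatoric stress
    J2 = 0.5 * np.sum(S * S)
    J3 = np.trace(S @ S @ S) / 3.0
    q = np.sqrt(3.0 * J2)
    arg = np.clip(1.5 * np.sqrt(3.0) * J3 / J2 ** 1.5, -1.0, 1.0)  # guard round-off
    theta = np.arccos(arg) / 3.0
    return p, q, theta

def yield_function(p, q, theta, M, pc, c, alpha, m, beta, gamma):
    """f(p, q, theta) = F(p) + q / g(theta); the yield surface is f = 0
    (states with f < 0 lie inside it)."""
    phi = (p + c) / (pc + c)
    if 0.0 <= phi <= 1.0:
        F = -M * pc * np.sqrt((phi - phi ** m) * (2.0 * (1.0 - alpha) * phi + alpha))
    else:
        F = np.inf
    g = 1.0 / np.cos(beta * np.pi / 6.0 - np.arccos(gamma * np.cos(3.0 * theta)) / 3.0)
    return F + q / g

# Illustrative (uncalibrated) parameters and stress state.
params = dict(M=1.0, pc=10.0, c=0.5, alpha=1.0, m=2.0, beta=1.0, gamma=0.9)
sigma = np.array([[-3.0, 0.0, 0.0],
                  [0.0, -2.0, 0.0],
                  [0.0, 0.0, -1.0]])
p, q, theta = invariants(sigma)
print(yield_function(p, q, theta, **params))   # negative here: the state lies inside the surface
```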
[ { "math_id": 0, "text": "p, q " }, { "math_id": 1, "text": "\\theta" }, { "math_id": 2, "text": "\n f(p,q,\\theta) = F(p) + \\frac{q}{g(\\theta)} = 0,\n " }, { "math_id": 3, "text": "F(p)" }, { "math_id": 4, "text": "\nF(p) = \n\\left\\{\n\\begin{array}{ll}\n-M p_c \\sqrt{(\\phi - \\phi^m)[2(1 - \\alpha)\\phi + \\alpha]}, & \\phi \\in [0,1], \\\\\n+\\infty, & \\phi \\notin [0,1],\n\\end{array}\n\\right.\n" }, { "math_id": 5, "text": "\n\\phi = \\frac{p + c}{p_c + c},\n" }, { "math_id": 6, "text": "g(\\theta)" }, { "math_id": 7, "text": "\ng(\\theta) = \\frac{1}{\\cos[\\beta \\frac{\\pi}{6} - \\frac{1}{3} \\cos^{-1}(\\gamma \\cos 3\\theta)]},\n" }, { "math_id": 8, "text": "p, q" }, { "math_id": 9, "text": "\np = -\\frac{tr ~ \\sigma }{3}, \\quad q = \\sqrt{3\\textit{J}_2}, \\quad \\theta = \\frac{1}{3} cos^{-1} \\left( \\frac{3\\sqrt{3}}{2} \\frac{J_3}{J_2^{3/2}} \\right)\n" }, { "math_id": 10, "text": "\nJ_2 = \\frac{1}{2} \\textbf{S} \\cdot \\textbf{S}, \\quad \\textbf{S} = \\sigma - \\frac{tr \\ \\sigma}{3} \\textbf{I}, \\quad J_3 = \\frac{1}{3} tr \\ \\textbf{S}^3\n" }, { "math_id": 11, "text": "\\textbf{S}" }, { "math_id": 12, "text": "\\textbf{I}" }, { "math_id": 13, "text": "\\sigma" }, { "math_id": 14, "text": "\\pi" }, { "math_id": 15, "text": " - \\sqrt{3} ~p " }, { "math_id": 16, "text": " \\sqrt{2/3}~ q" }, { "math_id": 17, "text": " \\theta" }, { "math_id": 18, "text": "q" }, { "math_id": 19, "text": "\\sigma_1" }, { "math_id": 20, "text": "\\xi" }, { "math_id": 21, "text": "\\rho" }, { "math_id": 22, "text": "\n\\xi = \\frac{1}{\\sqrt{3}} tr(\\sigma)= - \\sqrt{3} ~p, \\quad \\rho = \\sqrt{2 ~ J_2} =\\sqrt{\\frac{2}{3}} q\n" }, { "math_id": 23, "text": "\nq = -f(p) g(\\theta), \\quad p \\in [-c,p_c], \\quad \\theta \\in [0,\\pi/3]\n" }, { "math_id": 24, "text": "f(p)" }, { "math_id": 25, "text": "\n\\underbrace{M > 0,~ p_c > 0,~ c \\geq 0,~ 0 < \\alpha < 2,~ m > 1}_{\\mbox{defining}~\\displaystyle{F(p)}},~~~ \n\\underbrace{0\\leq \\beta \\leq 2,~ 0 \\leq \\gamma < 1}_{\\mbox{defining}~\\displaystyle{g(\\theta)}}, \n" }, { "math_id": 26, "text": "M" }, { "math_id": 27, "text": "c" }, { "math_id": 28, "text": "p_c" }, { "math_id": 29, "text": "\\alpha" }, { "math_id": 30, "text": "m" }, { "math_id": 31, "text": "\\beta" }, { "math_id": 32, "text": "\\gamma" } ]
https://en.wikipedia.org/wiki?curid=69867945
698713
Quartic
<templatestyles src="Template:TOC_right/styles.css" /> In mathematics, the term quartic describes something that pertains to the "fourth order", such as the function formula_0. It may refer to one of the following: <templatestyles src="Dmbox/styles.css" /> Topics referred to by the same term. This page lists mathematics articles associated with the same title.
[ { "math_id": 0, "text": "x^4" } ]
https://en.wikipedia.org/wiki?curid=698713
698720
Landau damping
In physics, Landau damping, named after its discoverer, Soviet physicist Lev Davidovich Landau (1908–68), is the effect of damping (exponential decrease as a function of time) of longitudinal space charge waves in plasma or a similar environment. This phenomenon prevents an instability from developing, and creates a region of stability in the parameter space. It was later argued by Donald Lynden-Bell that a similar phenomenon was occurring in galactic dynamics, where the gas of electrons interacting by electrostatic forces is replaced by a "gas of stars" interacting by gravitational forces. Landau damping can be manipulated exactly in numerical simulations such as particle-in-cell simulation. It was proved to exist experimentally by Malmberg and Wharton in 1964, almost two decades after its prediction by Landau in 1946. Wave–particle interactions. Landau damping occurs because of the energy exchange between an electromagnetic wave with phase velocity formula_0 and particles in the plasma with velocity approximately equal to formula_0, which can interact strongly with the wave. Those particles having velocities slightly less than formula_0 will be accelerated by the electric field of the wave to move with the wave phase velocity, while those particles with velocities slightly greater than formula_0 will be decelerated losing energy to the wave: particles tend to synchronize with the wave. This is proved experimentally with a traveling-wave tube. In an ideal magnetohydrodynamic (MHD) plasma the particle velocities are often taken to be approximately a Maxwellian distribution function. If the slope of the function is negative, the number of particles with velocities slightly less than the wave phase velocity is greater than the number of particles with velocities slightly greater. Hence, there are more particles gaining energy from the wave than losing to the wave, which leads to wave damping. If, however, the slope of the function is positive, the number of particles with velocities slightly less than the wave phase velocity is smaller than the number of particles with velocities slightly greater. Hence, there are more particles losing energy to the wave than gaining from the wave, which leads to a resultant increase in the wave energy. Then Landau damping is substituted with Landau growth. Physical interpretation. The mathematical theory of Landau damping is somewhat involved &lt;templatestyles src="Crossreference/styles.css" /&gt;. However, in the case of waves with finite amplitude, there is a simple physical interpretation which, though not strictly correct, helps to visualize this phenomenon. It is possible to imagine Langmuir waves as waves in the sea, and the particles as surfers trying to catch the wave, all moving in the same direction. If the surfer is moving on the water surface at a velocity slightly less than the waves they will eventually be caught and pushed along the wave (gaining energy), while a surfer moving slightly faster than a wave will be pushing on the wave as they move uphill (losing energy to the wave). It is worth noting that only the surfers are playing an important role in this energy interactions with the waves; a beachball floating on the water (zero velocity) will go up and down as the wave goes by, not gaining energy at all. Also, a boat that moves faster than the waves does not exchange much energy with the wave. A somewhat more detailed picture is obtained by considering particles' trajectories in phase space, in the wave's frame of reference. 
Particles near the phase velocity become trapped and are forced to move with the wavefronts, at the phase velocity. Any such particles that were initially below the phase velocity have thus been accelerated, while any particles that were initially above the phase velocity have been decelerated. Because, for a Maxwellian plasma, there are initially more particles below the phase velocity than above it, the plasma has net gained energy, and the wave has therefore lost energy. A simple mechanical description of particle dynamics provides a quantitative estimate of the synchronization of particles with the wave. A more rigorous approach shows the strongest synchronization occurs for particles with a velocity in the wave frame proportional to the damping rate and independent of the wave amplitude. Since Landau damping occurs for waves with arbitrarily small amplitudes, this shows the most active particles in this damping are far from being trapped. This is natural, since trapping involves diverging time scales for such waves (specifically formula_1 for a wave amplitude formula_2). Mathematical treatment. Perturbation theory in a Vlasovian frame. Theoretical treatment starts with the Vlasov equation in the non-relativistic zero-magnetic field limit, the Vlasov–Poisson set of equations. Explicit solutions are obtained in the limit of a small formula_3-field. The distribution function formula_4 and field formula_3 are expanded in a series: formula_5, formula_6 and terms of equal order are collected. To first order the Vlasov–Poisson equations read formula_7 Landau calculated the wave caused by an initial disturbance formula_8 and found by aid of Laplace transform and contour integration a damped travelling wave of the form formula_9 with wave number formula_10 and damping decrement formula_11 Here formula_12 is the plasma oscillation frequency and formula_13 is the electron density. Later Nico van Kampen proved that the same result can be obtained with Fourier transform. He showed that the linearized Vlasov–Poisson equations have a continuous spectrum of singular normal modes, now known as van Kampen modes formula_14 in which formula_15 signifies principal value, formula_16 is the delta function (see generalized function) and formula_17 is the plasma permittivity. Decomposing the initial disturbance in these modes he obtained the Fourier spectrum of the resulting wave. Damping is explained by phase-mixing of these Fourier modes with slightly different frequencies near formula_12. It was not clear how damping could occur in a collisionless plasma: where does the wave energy go? In fluid theory, in which the plasma is modeled as a dispersive dielectric medium, the energy of Langmuir waves is known: field energy multiplied by the Brillouin factor formula_18. But damping cannot be derived in this model. To calculate energy exchange of the wave with resonant electrons, Vlasov plasma theory has to be expanded to second order and problems about suitable initial conditions and secular terms arise. In Ref. these problems are studied. Because calculations for an infinite wave are deficient in second order, a wave packet is analysed. Second-order initial conditions are found that suppress secular behavior and excite a wave packet of which the energy agrees with fluid theory. The figure shows the energy density of a wave packet traveling at the group velocity, its energy being carried away by electrons moving at the phase velocity. Total energy, the area under the curves, is conserved. 
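The damping decrement formula_11 derived above can be evaluated directly for a Maxwellian distribution. The sketch below (Python with NumPy) uses normalized units and approximates the phase velocity by ω_p/k, which neglects thermal corrections to the dispersion relation; both choices are assumptions made for illustration and are not part of the article.

```python
import numpy as np

def landau_gamma(k, omega_p, v_th, N=1.0):
    """Damping decrement gamma ~ -(pi * omega_p**3 / (2 k**2 N)) * f0'(v_ph),
    for a Maxwellian f0 with thermal speed v_th normalized so that int f0 dv = N.
    The phase velocity is approximated here by omega_p / k (an assumption)."""
    v_ph = omega_p / k
    f0 = N / (np.sqrt(2.0 * np.pi) * v_th) * np.exp(-v_ph ** 2 / (2.0 * v_th ** 2))
    df0 = -v_ph / v_th ** 2 * f0            # slope of the Maxwellian at v_ph
    return -np.pi * omega_p ** 3 / (2.0 * k ** 2 * N) * df0

# Example in normalized units (omega_p = v_th = 1): the damping collapses rapidly
# as k decreases, i.e. as v_ph moves far out on the tail of f0 where f0' is tiny.
for k in (0.5, 0.3, 0.1):
    print(k, landau_gamma(k, omega_p=1.0, v_th=1.0))
```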
The Cauchy problem for perturbative solutions. The rigorous mathematical theory is based on solving the Cauchy problem for the evolution equation (here the partial differential Vlasov–Poisson equation) and proving estimates on the solution. A rather complete linearized mathematical theory has been developed since Landau. Going beyond the linearized equation and dealing with the nonlinearity has been a longstanding problem in the mathematical theory of Landau damping. Previously, one mathematical result at the non-linear level was the existence of a class of exponentially damped solutions of the Vlasov–Poisson equation on a circle, which had been proved by means of a scattering technique (a result that has since been extended). However, these existence results do not say anything about "which" initial data could lead to such damped solutions. In a paper published by French mathematicians Cédric Villani and Clément Mouhot, the initial data issue is solved and Landau damping is mathematically established for the first time for the non-linear Vlasov equation. It is proved that solutions starting in some neighborhood (for the analytic or Gevrey topology) of a linearly stable homogeneous stationary solution are (orbitally) stable for all times and are damped globally in time. The damping phenomenon is reinterpreted in terms of transfer of regularity of formula_4 as a function of formula_19 and formula_20, respectively, rather than exchanges of energy. Large-scale variations pass into variations of smaller and smaller scale in velocity space, corresponding to a shift of the Fourier spectrum of formula_4 as a function of formula_20. This shift, well known in linear theory, proves to hold in the non-linear case. Perturbation theory in an "N"-body frame. The mechanical "N"-body description, originally deemed impossible, enables a rigorous calculation of Landau damping using Newton's second law of motion and Fourier series. Neither the Vlasov equation nor Laplace transforms are required for this derivation. The calculation of the energy (more precisely, momentum) exchange of the wave with the electrons is done similarly. This calculation makes intuitive the interpretation of Landau damping as the synchronization of almost resonant passing particles.
[ { "math_id": 0, "text": "v_\\text{ph}" }, { "math_id": 1, "text": "T_\\text{trap} \\sim A^{-1/2}" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "E" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": " f = f_0(v) + f_1(x,v,t) + \\cdots" }, { "math_id": 6, "text": "E = E_1(x,t) + E_2(x,t) + \\cdots" }, { "math_id": 7, "text": "(\\partial_t + v\\partial_x)f_1 + {e\\over m}E_1 f'_0 = 0, \\quad \n \\partial_x E_1 = {e\\over \\epsilon_0} \\int f_1 \\mathrm{d}v." }, { "math_id": 8, "text": "f_1(x,v,0) = g(v)\\exp(ikx)" }, { "math_id": 9, "text": "\\exp[ik(x-v_\\text{ph}t)-\\gamma t]" }, { "math_id": 10, "text": "k" }, { "math_id": 11, "text": "\\gamma\\approx-{\\pi\\omega_p^3 \\over 2k^2N} f'_0(v_\\text{ph}), \\quad\n N = \\int f_0 \\mathrm{d}v." }, { "math_id": 12, "text": "\\omega_p" }, { "math_id": 13, "text": "N" }, { "math_id": 14, "text": "\\frac{\\omega_p^2}{kN} f'_0 \\frac{\\mathcal P}{kv-\\omega} + \\epsilon \\delta\\left(v-\\frac{\\omega}{k}\\right) " }, { "math_id": 15, "text": "\\mathcal P" }, { "math_id": 16, "text": "\\delta" }, { "math_id": 17, "text": "\\epsilon = 1 + \\frac{\\omega_p^2}{kN} \\int f'_0 \\frac{\\mathcal P}{\\omega-kv} \\mathrm{d}v" }, { "math_id": 18, "text": "\\partial_\\omega(\\omega\\epsilon)" }, { "math_id": 19, "text": "x" }, { "math_id": 20, "text": "v" } ]
https://en.wikipedia.org/wiki?curid=698720
698759
Propagator
Function in quantum field theory showing probability amplitudes of moving particles In quantum mechanics and quantum field theory, the propagator is a function that specifies the probability amplitude for a particle to travel from one place to another in a given period of time, or to travel with a certain energy and momentum. In Feynman diagrams, which serve to calculate the rate of collisions in quantum field theory, virtual particles contribute their propagator to the rate of the scattering event described by the respective diagram. Propagators may also be viewed as the inverse of the wave operator appropriate to the particle, and are, therefore, often called "(causal) Green's functions" (called "causal" to distinguish it from the elliptic Laplacian Green's function). Non-relativistic propagators. In non-relativistic quantum mechanics, the propagator gives the probability amplitude for a particle to travel from one spatial point (x') at one time (t') to another spatial point (x) at a later time (t). Consider a system with Hamiltonian H. The Green's function G (fundamental solution) for the Schrödinger equation is a function formula_0 satisfying formula_1 where "Hx" denotes the Hamiltonian written in terms of the x coordinates, "δ"("x") denotes the Dirac delta-function, Θ("t") is the Heaviside step function and "K"("x", "t" ;"x′", "t′") is the kernel of the above Schrödinger differential operator in the big parentheses. The term propagator is sometimes used in this context to refer to G, and sometimes to K. This article will use the term to refer to K (see Duhamel's principle). This propagator may also be written as the transition amplitude formula_2 where "Û"("t", "t′") is the unitary time-evolution operator for the system taking states at time t′ to states at time t. Note the initial condition enforced by formula_3. The quantum-mechanical propagator may also be found by using a path integral: formula_4 where the boundary conditions of the path integral include "q"("t") "x", "q"("t′") "x′". Here L denotes the Lagrangian of the system. The paths that are summed over move only forwards in time and are integrated with the differential formula_5 following the path in time. In non-relativistic quantum mechanics, the propagator lets one find the wave function of a system, given an initial wave function and a time interval. The new wave function is specified by the equation formula_6 If "K"("x", "t"; "x"′, "t"′) only depends on the difference "x" − "x′", this is a convolution of the initial wave function and the propagator. Basic examples: propagator of free particle and harmonic oscillator. For a time-translationally invariant system, the propagator only depends on the time difference "t" − "t"′, so it may be rewritten as formula_7 The propagator of a one-dimensional free particle, obtainable from, e.g., the path integral, is then formula_8 Similarly, the propagator of a one-dimensional quantum harmonic oscillator is the Mehler kernel, formula_9 The latter may be obtained from the previous free-particle result upon making use of van Kortryk's SU(1,1) Lie-group identity, formula_10 valid for operators formula_11 and formula_12 satisfying the Heisenberg relation formula_13. For the N-dimensional case, the propagator can be simply obtained by the product formula_14 Relativistic propagators. In relativistic quantum mechanics and quantum field theory the propagators are Lorentz-invariant. They give the amplitude for a particle to travel between two spacetime events. Scalar propagator. 
In quantum field theory, the theory of a free (or non-interacting) scalar field is a useful and simple example which serves to illustrate the concepts needed for more complicated theories. It describes spin-zero particles. There are a number of possible propagators for free scalar field theory. We now describe the most common ones. Position space. The position space propagators are Green's functions for the Klein–Gordon equation. This means that they are functions "G"("x", "y") satisfying formula_15 where formula_16 is the d'Alembert operator acting on the "x" coordinates, "m" is the mass of the particle, and δ("x" − "y") is the Dirac delta function. We shall restrict attention to 4-dimensional Minkowski spacetime. We can perform a Fourier transform of the equation for the propagator, obtaining formula_17 This equation can be inverted in the sense of distributions, noting that the equation "xf"("x") = 1 has the solution (see Sokhotski–Plemelj theorem) formula_18 with ε understood as a limit to zero. Below, we discuss the right choice of the sign arising from causality requirements. The solution is formula_19 where formula_20 is the 4-vector inner product. The different choices for how to deform the integration contour in the above expression lead to various forms for the propagator. The choice of contour is usually phrased in terms of the formula_21 integral. The integrand then has two poles at formula_22 so different choices of how to avoid these lead to different propagators. Causal propagators. Retarded propagator. A contour going clockwise over both poles gives the causal retarded propagator. This is zero if x-y is spacelike or if y is to the future of x, so it is zero if "x"⁰ < "y"⁰. This choice of contour is equivalent to calculating the limit, formula_23 Here formula_24 is the Heaviside step function, formula_25 is the proper time from x to y, and formula_26 is a Bessel function of the first kind. The propagator is non-zero only if formula_27, i.e., y causally precedes x, which, for Minkowski spacetime, means formula_28 and formula_29 This expression can be related to the vacuum expectation value of the commutator of the free scalar field operator, formula_30 where formula_31 is the commutator. Advanced propagator. A contour going anti-clockwise under both poles gives the causal advanced propagator. This is zero if x-y is spacelike or if y is to the past of x, so it is zero if "x"⁰ > "y"⁰. This choice of contour is equivalent to calculating the limit formula_32 This expression can also be expressed in terms of the vacuum expectation value of the commutator of the free scalar field. In this case, formula_33 Feynman propagator. A contour going under the left pole and over the right pole gives the Feynman propagator, introduced by Richard Feynman in 1948. This choice of contour is equivalent to calculating the limit formula_34 Here, "H"1(1) is a Hankel function of the first kind and "K"1 is a modified Bessel function of the second kind. This expression can be derived directly from the field theory as the vacuum expectation value of the "time-ordered product" of the free scalar field, that is, the product always taken such that the time ordering of the spacetime points is the same, formula_35 This expression is Lorentz invariant, as long as the field operators commute with one another when the points x and y are separated by a spacelike interval.
The usual derivation is to insert a complete set of single-particle momentum states between the fields with Lorentz covariant normalization, and then to show that the Θ functions providing the causal time ordering may be obtained by a contour integral along the energy axis, if the integrand is as above (hence the infinitesimal imaginary part), to move the pole off the real line. The propagator may also be derived using the path integral formulation of quantum theory. Dirac propagator. Introduced by Paul Dirac in 1938. Momentum space propagator. The Fourier transform of the position space propagators can be thought of as propagators in momentum space. These take a much simpler form than the position space propagators. They are often written with an explicit ε term although this is understood to be a reminder about which integration contour is appropriate (see above). This ε term is included to incorporate boundary conditions and causality (see below). For a 4-momentum p the causal and Feynman propagators in momentum space are: formula_36 formula_37 formula_38 For purposes of Feynman diagram calculations, it is usually convenient to write these with an additional overall factor of i (conventions vary). Faster than light? The Feynman propagator has some properties that seem baffling at first. In particular, unlike the commutator, the propagator is "nonzero" outside of the light cone, though it falls off rapidly for spacelike intervals. Interpreted as an amplitude for particle motion, this translates to the virtual particle travelling faster than light. It is not immediately obvious how this can be reconciled with causality: can we use faster-than-light virtual particles to send faster-than-light messages? The answer is no: while in classical mechanics the intervals along which particles and causal effects can travel are the same, this is no longer true in quantum field theory, where it is commutators that determine which operators can affect one another. So what "does" the spacelike part of the propagator represent? In QFT the vacuum is an active participant, and particle numbers and field values are related by an uncertainty principle; field values are uncertain even for particle number "zero". There is a nonzero probability amplitude to find a significant fluctuation in the vacuum value of the field Φ("x") if one measures it locally (or, to be more precise, if one measures an operator obtained by averaging the field over a small region). Furthermore, the dynamics of the fields tend to favor spatially correlated fluctuations to some extent. The nonzero time-ordered product for spacelike-separated fields then just measures the amplitude for a nonlocal correlation in these vacuum fluctuations, analogous to an EPR correlation. Indeed, the propagator is often called a "two-point correlation function" for the free field. Since, by the postulates of quantum field theory, all observable operators commute with each other at spacelike separation, messages can no more be sent through these correlations than they can through any other EPR correlations; the correlations are in random variables. Regarding virtual particles, the propagator at spacelike separation can be thought of as a means of calculating the amplitude for creating a virtual particle-antiparticle pair that eventually disappears into the vacuum, or for detecting a virtual pair emerging from the vacuum. 
In Feynman's language, such creation and annihilation processes are equivalent to a virtual particle wandering backward and forward through time, which can take it outside of the light cone. However, no signaling back in time is allowed. Explanation using limits. This can be made clearer by writing the propagator in the following form for a massless particle: formula_39 This is the usual definition but normalised by a factor of formula_40. Then the rule is that one only takes the limit formula_41 at the end of a calculation. One sees that formula_42 and formula_43 Hence this means that a single massless particle will always stay on the light cone. It is also shown that the total probability for a photon at any time must be normalised by the reciprocal of the following factor: formula_44 We see that the parts outside the light cone usually are zero in the limit and only are important in Feynman diagrams. Propagators in Feynman diagrams. The most common use of the propagator is in calculating probability amplitudes for particle interactions using Feynman diagrams. These calculations are usually carried out in momentum space. In general, the amplitude gets a factor of the propagator for every "internal line", that is, every line that does not represent an incoming or outgoing particle in the initial or final state. It will also get a factor proportional to, and similar in form to, an interaction term in the theory's Lagrangian for every internal vertex where lines meet. These prescriptions are known as "Feynman rules". Internal lines correspond to virtual particles. Since the propagator does not vanish for combinations of energy and momentum disallowed by the classical equations of motion, we say that the virtual particles are allowed to be off shell. In fact, since the propagator is obtained by inverting the wave equation, in general, it will have singularities on shell. The energy carried by the particle in the propagator can even be "negative". This can be interpreted simply as the case in which, instead of a particle going one way, its antiparticle is going the "other" way, and therefore carrying an opposing flow of positive energy. The propagator encompasses both possibilities. It does mean that one has to be careful about minus signs for the case of fermions, whose propagators are not even functions in the energy and momentum (see below). Virtual particles conserve energy and momentum. However, since they can be off shell, wherever the diagram contains a closed "loop", the energies and momenta of the virtual particles participating in the loop will be partly unconstrained, since a change in a quantity for one particle in the loop can be balanced by an equal and opposite change in another. Therefore, every loop in a Feynman diagram requires an integral over a continuum of possible energies and momenta. In general, these integrals of products of propagators can diverge, a situation that must be handled by the process of renormalization. Other theories. Spin &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2. If the particle possesses spin then its propagator is in general somewhat more complicated, as it will involve the particle's spin or polarization indices. The differential equation satisfied by the propagator for a spin &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 particle is given by formula_45 where "I"4 is the unit matrix in four dimensions, and employing the Feynman slash notation. This is the Dirac equation for a delta function source in spacetime. 
Using the momentum representation, formula_46 the equation becomes formula_47 where on the right-hand side an integral representation of the four-dimensional delta function is used. Thus formula_48 By multiplying from the left with formula_49 (dropping unit matrices from the notation) and using properties of the gamma matrices, formula_50 the momentum-space propagator used in Feynman diagrams for a Dirac field representing the electron in quantum electrodynamics is found to have form formula_51 The "iε" downstairs is a prescription for how to handle the poles in the complex "p"0-plane. It automatically yields the Feynman contour of integration by shifting the poles appropriately. It is sometimes written formula_52 for short. It should be remembered that this expression is just shorthand notation for ("γ""μ""p""μ" − "m")−1. "One over matrix" is otherwise nonsensical. In position space one has formula_53 This is related to the Feynman propagator by formula_54 where formula_55. Spin 1. The propagator for a gauge boson in a gauge theory depends on the choice of convention to fix the gauge. For the gauge used by Feynman and Stueckelberg, the propagator for a photon is formula_56 The general form with gauge parameter "λ", up to overall sign and the factor of formula_57, reads formula_58 The propagator for a massive vector field can be derived from the Stueckelberg Lagrangian. The general form with gauge parameter "λ", up to overall sign and the factor of formula_57, reads formula_59 With these general forms one obtains the propagators in unitary gauge for "λ" 0, the propagator in Feynman or 't Hooft gauge for "λ" 1 and in Landau or Lorenz gauge for "λ" ∞. There are also other notations where the gauge parameter is the inverse of λ, usually denoted ξ (see "R"ξ gauges). The name of the propagator, however, refers to its final form and not necessarily to the value of the gauge parameter. Unitary gauge: formula_60 Feynman ('t Hooft) gauge: formula_61 Landau (Lorenz) gauge: formula_62 Graviton propagator. The graviton propagator for Minkowski space in general relativity is formula_63 where formula_64 is the number of spacetime dimensions, formula_65 is the transverse and traceless spin-2 projection operator and formula_66 is a spin-0 scalar multiplet. The graviton propagator for (Anti) de Sitter space is formula_67 where formula_68 is the Hubble constant. Note that upon taking the limit formula_69 and formula_70, the AdS propagator reduces to the Minkowski propagator. Related singular functions. The scalar propagators are Green's functions for the Klein–Gordon equation. There are related singular functions which are important in quantum field theory. We follow the notation in Bjorken and Drell. See also Bogolyubov and Shirkov (Appendix A). These functions are most simply defined in terms of the vacuum expectation value of products of field operators. Solutions to the Klein–Gordon equation. Pauli–Jordan function. The commutator of two scalar field operators defines the Pauli–Jordan function formula_71 by formula_72 with formula_73 This satisfies formula_74 and is zero if formula_75. Positive and negative frequency parts (cut propagators). We can define the positive and negative frequency parts of formula_71, sometimes called cut propagators, in a relativistically invariant way. This allows us to define the positive frequency part: formula_76 and the negative frequency part: formula_77 These satisfy formula_78 and formula_79 Auxiliary function. 
The anti-commutator of two scalar field operators defines the function formula_80 by formula_81 with formula_82 This satisfies formula_83 Green's functions for the Klein–Gordon equation. The retarded, advanced and Feynman propagators defined above are all Green's functions for the Klein–Gordon equation. They are related to the singular functions by formula_84 formula_85 formula_86 where formula_87 is the sign of formula_88. Notes. <templatestyles src="Reflist/styles.css" />
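As a concrete check of the non-relativistic propagator quoted earlier, the sketch below (Python with NumPy; natural units ħ = m = 1, and the Gaussian initial state, grid and time are illustrative assumptions) evaluates the one-dimensional free-particle kernel and uses the convolution formula for the wave function to evolve an initial state, verifying that the norm is preserved.

```python
import numpy as np

hbar = m = 1.0   # natural units (assumption)

def K_free(x, xp, t):
    """One-dimensional free-particle propagator K(x, t; x', 0), as quoted above."""
    return np.sqrt(m / (2j * np.pi * hbar * t)) * np.exp(1j * m * (x - xp) ** 2 / (2.0 * hbar * t))

# Illustrative initial state: a normalized Gaussian of unit width.
x = np.linspace(-30.0, 30.0, 6001)
dx = x[1] - x[0]
psi0 = (2.0 * np.pi) ** -0.25 * np.exp(-x ** 2 / 4.0)

# psi(x, t) = integral of K(x, t; x', 0) psi(x', 0) dx'  (direct quadrature on the grid)
t = 1.0
psi_t = np.array([np.sum(K_free(xi, x, t) * psi0) * dx for xi in x])

print(np.sum(np.abs(psi_t) ** 2) * dx)   # ~ 1: the free evolution preserves the norm
```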
[ { "math_id": 0, "text": "G(x, t; x', t') = \\frac{1}{i\\hbar} \\Theta(t - t') K(x, t; x', t')" }, { "math_id": 1, "text": "\\left( i\\hbar \\frac{\\partial}{\\partial t} - H_x \\right) G(x, t; x', t') = \\delta(x - x') \\delta(t - t')," }, { "math_id": 2, "text": "K(x, t; x', t') = \\big\\langle x \\big| \\hat{U}(t, t') \\big| x' \\big\\rangle," }, { "math_id": 3, "text": "\\lim_{t \\to t'} K(x, t; x', t') = \\delta(x - x')" }, { "math_id": 4, "text": "K(x, t; x', t') = \\int \\exp \\left[\\frac{i}{\\hbar} \\int_{t'}^{t} L(\\dot{q}, q, t) \\, dt\\right] D[q(t)]," }, { "math_id": 5, "text": "D[q(t)]" }, { "math_id": 6, "text": "\\psi(x, t) = \\int_{-\\infty}^\\infty \\psi(x', t') K(x, t; x', t') \\, dx'." }, { "math_id": 7, "text": "K(x, t; x', t') = K(x, x'; t - t')." }, { "math_id": 8, "text": "K(x, x'; t) = \\frac{1}{2\\pi} \\int_{-\\infty}^{+\\infty} dk\\, e^{ik(x-x')} e^{-\\frac{i\\hbar k^2 t}{2m}} = \\left(\\frac{m}{2\\pi i\\hbar t}\\right)^{\\frac{1}{2}} e^{-\\frac{m(x-x')^2}{2i\\hbar t}}." }, { "math_id": 9, "text": "K(x, x'; t) = \\left(\\frac{m\\omega}{2\\pi i\\hbar \\sin \\omega t}\\right)^{\\frac{1}{2}} \\exp\\left(-\\frac{m\\omega\\big((x^2 + x'^2) \\cos\\omega t - 2xx'\\big)}{2i\\hbar \\sin\\omega t}\\right)." }, { "math_id": 10, "text": "\\begin{align}\n&\\exp \\left( -\\frac{it}{\\hbar} \\left( \\frac{1}{2m} \\mathsf{p}^2 + \\frac{1}{2} m\\omega^2 \\mathsf{x}^2 \\right) \\right) \\\\\n&= \\exp \\left( -\\frac{im\\omega}{2\\hbar} \\mathsf{x}^2\\tan\\frac{\\omega t}{2} \\right) \\exp \\left( -\\frac{i}{2m\\omega \\hbar}\\mathsf{p}^2 \\sin(\\omega t) \\right) \\exp \\left( -\\frac{im\\omega }{2\\hbar} \\mathsf{x}^2 \\tan\\frac{\\omega t}{2} \\right),\n\\end{align}" }, { "math_id": 11, "text": "\\mathsf{x}" }, { "math_id": 12, "text": "\\mathsf{p}" }, { "math_id": 13, "text": "[\\mathsf{x},\\mathsf{p}] = i\\hbar" }, { "math_id": 14, "text": "K(\\vec{x}, \\vec{x}'; t) = \\prod_{q=1}^N K(x_q, x_q'; t)." }, { "math_id": 15, "text": "\\left(\\square_x + m^2\\right) G(x, y) = -\\delta(x - y)," }, { "math_id": 16, "text": "\\square_x = \\tfrac{\\partial^2}{\\partial t^2} - \\nabla^2" }, { "math_id": 17, "text": "\\left(-p^2 + m^2\\right) G(p) = -1." }, { "math_id": 18, "text": "f(x) = \\frac{1}{x \\pm i\\varepsilon} = \\frac{1}{x} \\mp i\\pi\\delta(x)," }, { "math_id": 19, "text": "G(x, y) = \\frac{1}{(2 \\pi)^4} \\int d^4p \\, \\frac{e^{-ip(x-y)}}{p^2 - m^2 \\pm i\\varepsilon}," }, { "math_id": 20, "text": "p(x - y) := p_0(x^0 - y^0) - \\vec{p} \\cdot (\\vec{x} - \\vec{y})" }, { "math_id": 21, "text": "p_0" }, { "math_id": 22, "text": "p_0 = \\pm \\sqrt{\\vec{p}^2 + m^2}," }, { "math_id": 23, "text": "G_\\text{ret}(x,y) = \\lim_{\\varepsilon \\to 0} \\frac{1}{(2 \\pi)^4} \\int d^4p \\, \\frac{e^{-ip(x-y)}}{(p_0+i\\varepsilon)^2 - \\vec{p}^2 - m^2} = -\\frac{\\Theta(x^0 - y^0)}{2\\pi} \\delta(\\tau_{xy}^2) + \\Theta(x^0 - y^0)\\Theta(\\tau_{xy}^2)\\frac{m J_1(m \\tau_{xy})}{4 \\pi \\tau_{xy}}." }, { "math_id": 24, "text": "\\Theta (x) := \\begin{cases}\n1 & x \\ge 0 \\\\\n0 & x < 0\n\\end{cases}" }, { "math_id": 25, "text": "\\tau_{xy}:= \\sqrt{ (x^0 - y^0)^2 - (\\vec{x} - \\vec{y})^2}" }, { "math_id": 26, "text": "J_1" }, { "math_id": 27, "text": "y \\prec x" }, { "math_id": 28, "text": "y^0 \\leq x^0" }, { "math_id": 29, "text": "\\tau_{xy}^2 \\geq 0 ~." 
}, { "math_id": 30, "text": "G_\\text{ret}(x,y) = -i \\langle 0| \\left[ \\Phi(x), \\Phi(y) \\right] |0\\rangle \\Theta(x^0 - y^0)" }, { "math_id": 31, "text": "\\left[\\Phi(x), \\Phi(y) \\right] := \\Phi(x) \\Phi(y) - \\Phi(y) \\Phi(x)" }, { "math_id": 32, "text": "\nG_\\text{adv}(x,y) = \\lim_{\\varepsilon \\to 0} \\frac{1}{(2\\pi)^4} \\int d^4p \\, \\frac{e^{-ip(x-y)}}{(p_0 - i\\varepsilon)^2 - \\vec{p}^2 - m^2} = -\\frac{\\Theta(y^0-x^0)}{2\\pi}\\delta(\\tau_{xy}^2) + \\Theta(y^0-x^0)\\Theta(\\tau_{xy}^2)\\frac{m J_1(m \\tau_{xy})}{4 \\pi \\tau_{xy}}.\n" }, { "math_id": 33, "text": "G_\\text{adv}(x,y) = i \\langle 0|\\left[ \\Phi(x), \\Phi(y) \\right]|0\\rangle \\Theta(y^0 - x^0)~." }, { "math_id": 34, "text": "G_F(x,y) = \\lim_{\\varepsilon \\to 0} \\frac{1}{(2 \\pi)^4} \\int d^4p \\, \\frac{e^{-ip(x-y)}}{p^2 - m^2 + i\\varepsilon} = \\begin{cases}\n-\\frac{1}{4 \\pi} \\delta(\\tau_{xy}^2) + \\frac{m}{8 \\pi \\tau_{xy}} H_1^{(1)}(m \\tau_{xy}) & \\tau_{xy}^2 \\geq 0 \\\\ -\\frac{i m}{ 4 \\pi^2 \\sqrt{-\\tau_{xy}^2}} K_1(m \\sqrt{-\\tau_{xy}^2}) & \\tau_{xy}^2 < 0.\n\\end{cases} " }, { "math_id": 35, "text": "\n\\begin{align}\nG_F(x-y) & = -i \\lang 0|T(\\Phi(x) \\Phi(y))|0 \\rang \\\\[4pt]\n& = -i \\left \\lang 0| \\left [\\Theta(x^0 - y^0) \\Phi(x)\\Phi(y) + \\Theta(y^0 - x^0) \\Phi(y)\\Phi(x) \\right] |0 \\right \\rang.\n\\end{align}" }, { "math_id": 36, "text": "\\tilde{G}_\\text{ret}(p) = \\frac{1}{(p_0+i\\varepsilon)^2 - \\vec{p}^2 - m^2}" }, { "math_id": 37, "text": "\\tilde{G}_\\text{adv}(p) = \\frac{1}{(p_0-i\\varepsilon)^2 - \\vec{p}^2 - m^2}" }, { "math_id": 38, "text": "\\tilde{G}_F(p) = \\frac{1}{p^2 - m^2 + i\\varepsilon}. " }, { "math_id": 39, "text": "G^\\varepsilon_F(x, y) = \\frac{\\varepsilon}{(x - y)^2 + i \\varepsilon^2}." }, { "math_id": 40, "text": "\\varepsilon" }, { "math_id": 41, "text": "\\varepsilon \\to 0" }, { "math_id": 42, "text": "G^\\varepsilon_F(x, y) = \\frac{1}{\\varepsilon} \\quad\\text{if}~~~ (x - y)^2 = 0," }, { "math_id": 43, "text": "\\lim_{\\varepsilon \\to 0} G^\\varepsilon_F(x, y) = 0 \\quad\\text{if}~~~ (x - y)^2 \\neq 0." }, { "math_id": 44, "text": "\n\\lim_{\\varepsilon \\to 0} \\int |G^\\varepsilon_F(0, x)|^2 \\, dx^3 \n = \\lim_{\\varepsilon \\to 0} \\int \\frac{\\varepsilon^2}{(\\mathbf{x}^2 - t^2)^2 + \\varepsilon^4} \\, dx^3\n = 2 \\pi^2 |t|.\n" }, { "math_id": 45, "text": "(i\\not\\nabla' - m)S_F(x', x) = I_4\\delta^4(x'-x)," }, { "math_id": 46, "text": "S_F(x', x) = \\int\\frac{d^4p}{(2\\pi)^4}\\exp{\\left[-ip \\cdot(x'-x)\\right]}\\tilde S_F(p)," }, { "math_id": 47, "text": "\n\\begin{align}\n& (i \\not \\nabla' - m)\\int\\frac{d^4p}{(2\\pi)^4}\\tilde S_F(p)\\exp{\\left[-ip \\cdot(x'-x)\\right]} \\\\[6pt]\n= {} & \\int\\frac{d^4p}{(2\\pi)^4}(\\not p - m)\\tilde S_F(p)\\exp{\\left[-ip \\cdot(x'-x)\\right]} \\\\[6pt]\n= {} & \\int\\frac{d^4p}{(2\\pi)^4}I_4\\exp{\\left[-ip \\cdot(x'-x)\\right]} \\\\[6pt]\n= {} & I_4\\delta^4(x'-x),\n\\end{align}\n" }, { "math_id": 48, "text": "(\\not p - m I_4)\\tilde S_F(p) = I_4." 
}, { "math_id": 49, "text": "(\\not p + m)" }, { "math_id": 50, "text": "\\begin{align}\n\\not p \\not p & = \\tfrac{1}{2}(\\not p \\not p + \\not p \\not p) \\\\[6pt]\n& = \\tfrac{1}{2}(\\gamma_\\mu p^\\mu \\gamma_\\nu p^\\nu + \\gamma_\\nu p^\\nu \\gamma_\\mu p^\\mu) \\\\[6pt]\n& = \\tfrac{1}{2}(\\gamma_\\mu \\gamma_\\nu + \\gamma_\\nu\\gamma_\\mu)p^\\mu p^\\nu \\\\[6pt]\n& = g_{\\mu\\nu}p^\\mu p^\\nu = p_\\nu p^\\nu = p^2,\n\\end{align}" }, { "math_id": 51, "text": " \\tilde{S}_F(p) = \\frac{(\\not p + m)}{p^2 - m^2 + i \\varepsilon} = \\frac{(\\gamma^\\mu p_\\mu + m)}{p^2 - m^2 + i \\varepsilon}." }, { "math_id": 52, "text": "\\tilde{S}_F(p) = {1 \\over \\gamma^\\mu p_\\mu - m + i\\varepsilon} = {1 \\over \\not p - m + i\\varepsilon} " }, { "math_id": 53, "text": "S_F(x-y) = \\int \\frac{d^4 p}{(2\\pi)^4} \\, e^{-i p \\cdot (x-y)} \\frac{\\gamma^\\mu p_\\mu + m}{p^2 - m^2 + i \\varepsilon} = \\left( \\frac{\\gamma^\\mu (x-y)_\\mu}{|x-y|^5} + \\frac{m}{|x-y|^3} \\right) J_1(m |x-y|)." }, { "math_id": 54, "text": "S_F(x-y) = (i \\not \\partial + m) G_F(x-y)" }, { "math_id": 55, "text": "\\not \\partial := \\gamma^\\mu \\partial_\\mu" }, { "math_id": 56, "text": "{-i g^{\\mu\\nu} \\over p^2 + i\\varepsilon }." }, { "math_id": 57, "text": "i" }, { "math_id": 58, "text": " -i\\frac{g^{\\mu\\nu} + \\left(1-\\frac{1}{\\lambda}\\right)\\frac{p^\\mu p^\\nu}{p^2}}{p^2+i\\varepsilon}." }, { "math_id": 59, "text": " \\frac{g_{\\mu\\nu} - \\frac{k_\\mu k_\\nu}{m^2}}{k^2-m^2+i\\varepsilon}+\\frac{\\frac{k_\\mu k_\\nu}{m^2}}{k^2-\\frac{m^2}{\\lambda}+i\\varepsilon}." }, { "math_id": 60, "text": "\\frac{g_{\\mu\\nu} - \\frac{k_\\mu k_\\nu}{m^2}}{k^2-m^2+i\\varepsilon}." }, { "math_id": 61, "text": "\\frac{g_{\\mu\\nu}}{k^2-m^2+i\\varepsilon}." }, { "math_id": 62, "text": "\\frac{g_{\\mu\\nu} - \\frac{k_\\mu k_\\nu}{k^2}}{k^2-m^2+i\\varepsilon}." }, { "math_id": 63, "text": "G_{\\alpha\\beta~\\mu\\nu} = \\frac{\\mathcal{P}^2_{\\alpha\\beta~\\mu\\nu}}{k^2} - \\frac{\\mathcal{P}^0_s{}_{\\alpha\\beta~\\mu\\nu}}{2k^2} = \\frac{g_{\\alpha\\mu} g_{\\beta\\nu}+ g_{\\beta\\mu}g_{\\alpha\\nu}- \\frac{2}{D-2} g_{\\mu\\nu}g_{\\alpha\\beta}}{k^2}," }, { "math_id": 64, "text": "D" }, { "math_id": 65, "text": "\\mathcal{P}^2" }, { "math_id": 66, "text": "\\mathcal{P}^0_s" }, { "math_id": 67, "text": "G = \\frac{\\mathcal{P}^2}{2H^2-\\Box} + \\frac{\\mathcal{P}^0_s}{2(\\Box+4H^2)}," }, { "math_id": 68, "text": "H" }, { "math_id": 69, "text": "H \\to 0" }, { "math_id": 70, "text": "\\Box \\to -k^2" }, { "math_id": 71, "text": "\\Delta(x-y)" }, { "math_id": 72, "text": "\\langle 0 | \\left[ \\Phi(x),\\Phi(y) \\right] | 0 \\rangle = i \\, \\Delta(x-y)" }, { "math_id": 73, "text": "\\,\\Delta(x-y) = G_\\text{ret} (x-y) - G_\\text{adv}(x-y)" }, { "math_id": 74, "text": "\\Delta(x-y) = -\\Delta(y-x)" }, { "math_id": 75, "text": "(x-y)^2 < 0" }, { "math_id": 76, "text": "\\Delta_+(x-y) = \\langle 0 | \\Phi(x) \\Phi(y) |0 \\rangle, " }, { "math_id": 77, "text": "\\Delta_-(x-y) = \\langle 0 | \\Phi(y) \\Phi(x) |0 \\rangle. " }, { "math_id": 78, "text": "\\,i \\Delta = \\Delta_+ - \\Delta_-" }, { "math_id": 79, "text": "(\\Box_x + m^2) \\Delta_{\\pm}(x-y) = 0." }, { "math_id": 80, "text": "\\Delta_1(x-y)" }, { "math_id": 81, "text": "\\langle 0 | \\left\\{ \\Phi(x),\\Phi(y) \\right\\} | 0 \\rangle = \\Delta_1(x-y)" }, { "math_id": 82, "text": "\\,\\Delta_1(x-y) = \\Delta_+ (x-y) + \\Delta_-(x-y)." }, { "math_id": 83, "text": "\\,\\Delta_1(x-y) = \\Delta_1(y-x)." 
}, { "math_id": 84, "text": "G_\\text{ret}(x-y) = \\Delta(x-y) \\Theta(x^0-y^0) " }, { "math_id": 85, "text": "G_\\text{adv}(x-y) = -\\Delta(x-y) \\Theta(y^0-x^0) " }, { "math_id": 86, "text": "2 G_F(x-y) = -i \\,\\Delta_1(x-y) + \\varepsilon(x^0 - y^0) \\,\\Delta(x-y) " }, { "math_id": 87, "text": "\\varepsilon(x^0-y^0)" }, { "math_id": 88, "text": "x^0-y^0" } ]
https://en.wikipedia.org/wiki?curid=698759
69880929
The Simple Function Point method
The Simple Function Point (SFP) method is a lightweight functional size measurement method. The Simple Function Point method was designed by Roberto Meli in 2010 to be compliant with the ISO14143-1 standard and compatible with the International Function Points User Group (IFPUG) Function Point Analysis (FPA) method. The original method (SiFP) was presented for the first time at a public conference in Rome (SMEF2011). The method was subsequently described in a manual produced by the Simple Function Point Association: the Simple Function Point Functional Size Measurement Method Reference Manual, available under the Creative Commons Attribution-NoDerivatives 4.0 International Public License. Adoption by IFPUG. In 2019, the Simple Function Points Method was acquired by the IFPUG, to provide its user community with a simplified Function Point counting method and to make functional size measurement easier yet reliable in the early stages of software projects. The SPM (Simple Function Point Practices Manual) was published by IFPUG in late 2021. Basic concept. When the SFP method was proposed, the most widely used software functional size measurement method was IFPUG FPA. However, IFPUG FPA had (and still has) a few shortcomings, chiefly the time and cost of measurement, the subjective interpretation it involves, and the effort needed to learn it. To overcome at least some of these problems, the SFP method was defined to be quicker and easier to apply, less prone to subjective interpretation, easier to learn, and compatible with FPA-based measures. The sought characteristics were achieved as follows. IFPUG FPA requires that 1) logical data files are identified, 2) transactions are identified, 3) data files and transactions are classified, 4) every data file is examined in detail and weighted, and 5) every transaction is examined in detail and weighted. Of these activities, SFP requires only the first two, i.e., the identification of logical data files and transactions. Activities 4) and 5) are the most time consuming, since they require that every data file and transaction is examined in detail: skipping these phases makes the SFP method both quicker and easier to apply than IFPUG FPA. In addition, most of the subjective interpretation is due to activities 4) and 5), and partly also to activity 3): skipping these activities makes the SFP method also less prone to subjective interpretation. The concepts used in the definition of SFP are a small subset of those used in the definition of IFPUG FPA; therefore, learning SFP is easier than learning IFPUG FPA, and it is immediate for those who already know IFPUG FPA. In practice, only the concepts of logical data file and transaction have to be known. Finally, the weights assigned to data files and transactions make the size in SFP very close, on average, to the size expressed in Function Points. Definition. The logical data files are named Logical Files (LF) in the SFP method. Similarly, transactions are named Elementary Processes (EP). Unlike in IFPUG FPA, there is no classification or weighting of the Base Functional Components (BFC, as defined in the ISO14143-1 standard). The size of an EP is 4.6 SFP, while the size of an LF is 7.0 SFP. Therefore, the size expressed in SFP is based only on the number of logical data files (#LF) and the number of transactions (#EP) belonging to the software application being measured: formula_1 Empirical evaluation of the SFP method. Empirical studies have been carried out, aiming both at evaluating the convertibility between SFP and FPA measures and at assessing the use of SFP for software development effort estimation. Convertibility between SFP and FPA measures. In the original proposal of the SiFP method, a dataset from the ISBSG, including data from 768 projects, was used to evaluate the convertibility between UFP and SiFP measures. This study showed that on average formula_2. Another study also used an ISBSG dataset to evaluate the convertibility between UFP and SiFP measures. The dataset included data from 766 software applications.
Empirical evaluation of the SFP method. Empirical studies have been carried out, aiming at assessing the convertibility between SFP and FPA measures and the accuracy of effort estimation based on SFP measures. Convertibility between SFP and FPA measures. In the original proposal of the SiFP method, a dataset from the ISBSG, including data from 768 projects, was used to evaluate the convertibility between UFP and SiFP measures. This study showed that on average formula_2. Another study also used an ISBSG dataset to evaluate the convertibility between UFP and SiFP measures. The dataset included data from 766 software applications. Via ordinary least squares regression, it was found that formula_3. Based on these empirical studies, it seems that formula_4 (note that this approximate equivalence holds on average: in both studies an average relative error of around 12% was observed). However, a third study found formula_5. This study used data from only 25 Web applications, so it is possible that the conversion rate is affected by the specific application type or by the relatively small size of the dataset. In 2017, a study evaluated the convertibility between UFP and SiFP measures using seven different datasets. Every dataset was characterized by a specific conversion rate. Specifically, it was found that formula_6, with formula_7. Notably, for one dataset no linear model could be found; instead, the statistically significant model formula_8 was found. In conclusion, the available evidence shows that one SiFP is approximately equivalent to one UFP, but the exact conversion rate depends on the data being considered, besides holding only on average. Considering that the IFPUG SFP basic elements (EP, LF) are fully equivalent to the original SiFP elements (UGEP, UGDG), the previous results hold for the IFPUG SFP method as well. Using SFP for software development effort estimation. IFPUG FPA is mainly used for estimating software development effort. Therefore, any alternative method that aims at measuring the functional size of software should support effort estimation with the same level of accuracy as IFPUG FPA. In other words, it is necessary to verify that effort estimates based on SFP are at least as good as the estimates based on UFP. To perform this verification, an ISBSG dataset was analyzed, and models of effort vs. size were derived using ordinary least squares regression, after log-log transformations. The effort estimation errors were then compared, and the two models turned out to yield very similar estimation accuracy. A subsequent study analyzed a dataset containing data from 25 Web applications. Ordinary least squares regression was used to derive UFP-based and SiFP-based effort models. In this case too, no statistically significant differences in estimation accuracy were observed. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. The introduction to Simple Function Points (SFP) from IFPUG.
[ { "math_id": 0, "text": "Size_{[UFP]}=Size_{[SiFP]}" }, { "math_id": 1, "text": "Size_{[SFP]}=4.6\\ \\#EP+7\\ \\#LF" }, { "math_id": 2, "text": "Size_{[UFP]}=1.0005\\ Size_{[SiFP]}" }, { "math_id": 3, "text": "Size_{[SiFP]}=0.998\\ Size_{[UFP]}" }, { "math_id": 4, "text": "Size_{[SiFP]}\\approx Size_{[UFP]}" }, { "math_id": 5, "text": "Size_{[UFP]}=0.815\\ Size_{[SiFP]}" }, { "math_id": 6, "text": "Size_{[SiFP]} = k\\ Size={[UFP]}" }, { "math_id": 7, "text": "k \\in [0.957, 1.221]" }, { "math_id": 8, "text": "Size[SiFP]=Size[UFP]^{1.033}" } ]
https://en.wikipedia.org/wiki?curid=69880929
69882141
Coenzyme A transferases
Coenzyme A transferases (CoA-transferases) are transferase enzymes that catalyze the transfer of a coenzyme A group from an acyl-CoA donor to a carboxylic acid acceptor. Among other roles, they are responsible for the transfer of CoA groups during fermentation and the metabolism of ketone bodies. These enzymes are found in all three domains of life (bacteria, eukaryotes, archaea). Reactions. As a group, the CoA transferases catalyze 105 reactions at relatively fast rates. Some common reactions include:
Acetyl-CoA + Butyrate formula_0 Acetate + Butyryl-CoA
Acetyl-CoA + Succinate formula_0 Acetate + Succinyl-CoA
Acetoacetyl-CoA + Succinate formula_0 Acetoacetate + Succinyl-CoA
Formyl-CoA + Oxalate formula_0 Formate + Oxalyl-CoA
These reactions have different functions in cells. The reaction involving acetyl-CoA and butyrate (EC 2.8.3.8), for example, forms butyrate during fermentation. The reaction involving acetyl-CoA and succinate (EC 2.8.3.18) is part of a modified TCA cycle or forms acetate during fermentation. The reaction involving acetoacetyl-CoA and succinate (EC 2.8.3.5) degrades the ketone body acetoacetate formed during ketogenesis. Many enzymes can catalyze multiple reactions, whereas some enzymes are specific and catalyze only one. Families. The CoA-transferases have been divided into six families (Cat1, OXCT1, Gct, MdcA, Frc, CitF) based on their amino acid sequences and the reactions they catalyze. They also differ in the type of catalysis and their crystal structures. Despite some shared properties, these six families are not closely related (&lt;25% amino acid similarity). Three families catalyze CoA-transferase reactions almost exclusively. The Cat1 family catalyzes reactions involving small acyl-CoA, such as acetyl-CoA (EC 2.8.3.18), propionyl-CoA (EC 2.8.3.1, EC 2.8.3.12), and butyryl-CoA (EC 2.8.3.8). The OXCT1 family uses oxo acyl-CoA (EC 2.8.3.5, EC 2.8.3.6) and hydroxy acyl-CoA (EC 2.8.3.6, EC 2.8.3.1). The Frc family uses unusual acyl-CoA, including CoA thioesters of oxalate (EC 2.8.3.16, EC 2.8.3.19), bile acids (EC 2.8.3.25), and aromatic compounds (EC 2.8.3.15, EC 2.8.3.17). Two families catalyze CoA-transferase reactions, but they also catalyze other transferase reactions. The CitF family catalyzes reactions involving acetyl-CoA and citrate (EC 2.8.3.10), but its main role is as an acyl-ACP transferase (as part of citrate lyase; EC 4.1.3.6). The MdcA family catalyzes reactions involving acetyl-CoA and malonate (EC 2.8.3.3), but it too is an acyl-ACP transferase (as part of malonate decarboxylase; EC 4.1.1.9). The Gct family has members that catalyze CoA-transferase reactions, but half of the members do not; they instead catalyze hydrolysis or other reactions involving acyl-CoA. Historically, the CoA-transferases were divided into three families (I, II, III). However, the members of family I (Cat1, OXCT1, Gct) are not closely related, and the family is not monophyletic. The members of family II (CitF, MdcA) are also not closely related. Types of catalysis. Most CoA transferases rely on covalent catalysis to carry out reactions. The reaction starts when an acyl-CoA (the CoA donor) enters the active site of the enzyme. A glutamate in the active site forms an adduct with the acyl-CoA. The acyl-CoA breaks at the thioester bond, forming a CoA and a carboxylic acid. The carboxylic acid remains bound to the enzyme, but it is soon displaced by CoA and leaves. A new carboxylic acid (the CoA acceptor) enters and forms a new acyl-CoA.
The new acyl-CoA is released, completing the transfer of CoA from one molecule to another. The type of catalysis differs by family. In the Cat1, OXCT1, and Gct families, the catalytic residue in the active site is a glutamate. However, the glutamate in the Cat1 family is in a different position than in the OXCT1 and Gct families. In the Frc family, the catalytic residue is an aspartate, not a glutamate. In the MdcA and CitF families, covalent catalysis is not thought to occur. Crystal structures. Crystal structures have been determined for 21 different enzymes. More structures have been determined, but they belong to putative enzymes (proteins with no direct evidence of catalytic activity). All CoA-transferases have alternating layers of α helices and β sheets, and thus they belong to the α/β class of proteins. The number and arrangement of these layers differ by family. The Gct family, for example, has extra layers of α helices and β sheets compared to the Cat1 and OXCT1 families. Further, all enzymes have two different domains. These domains can either occur on the same polypeptide or can be separated between two different polypeptides. In some cases, the genes for the domains are duplicated in the genome. Occurrence in organisms. CoA transferases have been found in all three domains of life. The majority have been found in bacteria, with fewer in eukaryotes. One CoA transferase has been found in archaea. Two CoA-transferases have been found in humans. They include 3-oxoacid CoA-transferase (EC 2.8.3.5) and succinate—hydroxymethylglutarate CoA-transferase (EC 2.8.3.13). Role in disease. Mutations in two different CoA-transferases have been described and lead to disease in humans. 3-oxoacid CoA-transferase (EC 2.8.3.5) uses the ketone body acetoacetate. Mutations in the enzyme cause accumulation of acetoacetate and ketoacidosis. The severity of ketoacidosis depends on the mutation. The enzyme succinate—hydroxymethylglutarate CoA-transferase (EC 2.8.3.13) uses glutarate, a product of tryptophan and lysine metabolism. Mutations in this enzyme cause accumulation of glutarate (glutaric aciduria). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=69882141
6988866
Directed percolation
Physical models of filtering under forces such as gravity In statistical physics, directed percolation (DP) refers to a class of models that mimic filtering of fluids through porous materials along a given direction, due to the effect of gravity. Varying the microscopic connectivity of the pores, these models display a phase transition from a macroscopically permeable (percolating) to an impermeable (non-percolating) state. Directed percolation is also used as a simple model for epidemic spreading, with a transition between survival and extinction of the disease depending on the infection rate. More generally, the term directed percolation stands for a universality class of continuous phase transitions which are characterized by the same type of collective behavior on large scales. Directed percolation is probably the simplest universality class of transitions out of thermal equilibrium. Lattice models. One of the simplest realizations of DP is bond directed percolation. This model is a directed variant of ordinary (isotropic) percolation and can be introduced as follows. The figure shows a tilted square lattice with bonds connecting neighboring sites. The bonds are permeable (open) with probability formula_0 and impermeable (closed) otherwise. The sites and bonds may be interpreted as holes and randomly distributed channels of a porous medium. The difference between ordinary and directed percolation is illustrated to the right. In isotropic percolation, a spreading agent (e.g., water) introduced at a particular site percolates along open bonds, generating a cluster of wet sites. By contrast, in directed percolation the spreading agent can pass open bonds only along a preferred direction in space, as indicated by the arrow. The resulting red cluster is directed in space. As a dynamical process. Interpreting the preferred direction as a temporal degree of freedom, directed percolation can be regarded as a stochastic process that evolves in time. In a minimal, two-parameter model that includes bond and site DP as special cases, a one-dimensional chain of sites evolves in discrete time formula_1, which can be viewed as a second dimension, and all sites are updated in parallel. Activating a certain site (called the initial seed) at time formula_2, the resulting cluster can be constructed row by row. The corresponding number of active sites formula_3 varies as time evolves.
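As an illustration of this dynamical interpretation, the following minimal Python sketch simulates bond DP in 1+1 dimensions starting from a single seed. The lattice geometry (two downstream bonds per site) follows the description above; the function name is arbitrary, and the critical probability of roughly 0.6447 quoted in the comment is a commonly cited numerical estimate for bond DP on the square lattice, given here only for orientation.
import random

def bond_dp(p, t_max, seed=0):
    # Bond directed percolation in 1+1 dimensions, started from a single
    # active site. Each active site at time t tries to activate its two
    # downstream neighbours at time t+1 through independently open bonds
    # (each open with probability p). Returns N(t), the number of active
    # sites at each time step.
    active = {seed}
    history = [len(active)]
    for _ in range(t_max):
        new_active = set()
        for site in active:
            if random.random() < p:
                new_active.add(site)      # bond to the "left" descendant
            if random.random() < p:
                new_active.add(site + 1)  # bond to the "right" descendant
        active = new_active
        history.append(len(active))
        if not active:                    # the cluster has died out
            break
    return history

# Far below the critical probability (roughly 0.6447 on this lattice) the
# cluster dies quickly; far above it, it typically keeps growing.
print(bond_dp(0.7, 50))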
Universal scaling behavior. The DP universality class is characterized by a certain set of critical exponents. These exponents depend on the spatial dimension formula_4. Above the so-called upper critical dimension formula_5 they are given by their mean-field values, while in formula_6 dimensions they have been estimated numerically. Current estimates are summarized in the following table: Other examples. In two dimensions, the percolation of water through a thin tissue (such as toilet paper) has the same mathematical underpinnings as the flow of electricity through two-dimensional random networks of resistors. In chemistry, chromatography can be understood with similar models. The propagation of a tear or rip in a sheet of paper, in a sheet of metal, or even the formation of a crack in ceramic bears broad mathematical resemblance to the flow of electricity through a random network of electrical fuses. Above a certain critical point, the electrical flow will cause a fuse to pop, possibly leading to a cascade of failures, resembling the propagation of a crack or tear. The study of percolation helps indicate how the flow of electricity will redistribute itself in the fuse network, thus modeling which fuses are most likely to pop next, how fast they will pop, and in which direction the crack may curve. Examples can be found not only in physical phenomena, but also in biology, neuroscience, ecology (e.g. evolution), and economics (e.g. diffusion of innovation). Percolation can be considered to be a branch of the study of dynamical systems or statistical mechanics. In particular, percolation networks exhibit a phase transition around a critical threshold. Experimental realizations. In spite of the vast success of theoretical and numerical studies of DP, obtaining convincing experimental evidence has proved challenging. In 1999, an experiment on flowing sand on an inclined plane was identified as a physical realization of DP. In 2007, critical behavior of DP was finally found in the electrohydrodynamic convection of a liquid crystal, where a complete set of static and dynamic critical exponents and universal scaling functions of DP were measured in the transition to spatiotemporal intermittency between two turbulent states. Sources. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "p\\," }, { "math_id": 1, "text": "t" }, { "math_id": 2, "text": "t=0" }, { "math_id": 3, "text": "N(t)" }, { "math_id": 4, "text": "d\\," }, { "math_id": 5, "text": "d\\geq d_c=4\\," }, { "math_id": 6, "text": "d<4\\," } ]
https://en.wikipedia.org/wiki?curid=6988866
69890
Binary heap
Variant of heap data structure A binary heap is a heap data structure that takes the form of a binary tree. Binary heaps are a common way of implementing priority queues. The binary heap was introduced by J. W. J. Williams in 1964, as a data structure for heapsort. A binary heap is defined as a binary tree with two additional constraints:
1. Shape property: a binary heap is a complete binary tree; that is, all levels of the tree, except possibly the last one (deepest), are fully filled, and, if the last level of the tree is not complete, the nodes of that level are filled from left to right.
2. Heap property: the key stored in each node is either greater than or equal to (≥) or less than or equal to (≤) the keys in the node's children, according to some total order.
Heaps where the parent key is greater than or equal to (≥) the child keys are called "max-heaps"; those where it is less than or equal to (≤) are called "min-heaps". Efficient (logarithmic time) algorithms are known for the two operations needed to implement a priority queue on a binary heap: inserting an element, and removing the smallest or largest element from a min-heap or max-heap, respectively. Binary heaps are also commonly employed in the heapsort sorting algorithm, which is an in-place algorithm because binary heaps can be implemented as an implicit data structure, storing keys in an array and using their relative positions within that array to represent child–parent relationships. Heap operations. Both the insert and remove operations modify the heap to conform to the shape property first, by adding or removing from the end of the heap. Then the heap property is restored by traversing up or down the heap. Both operations take O(log "n") time. Insert. To insert an element into a heap, we perform the following steps:
1. Add the element to the bottom level of the heap at the leftmost open space.
2. Compare the added element with its parent; if they are in the correct order, stop.
3. If not, swap the element with its parent and return to the previous step.
Steps 2 and 3, which restore the heap property by comparing and possibly swapping a node with its parent, are called the "up-heap" operation (also known as "bubble-up", "percolate-up", "sift-up", "trickle-up", "swim-up", "heapify-up", "cascade-up", or "fix-up"). The number of operations required depends only on the number of levels the new element must rise to satisfy the heap property. Thus, the insertion operation has a worst-case time complexity of O(log "n"). For a random heap, and for repeated insertions, the insertion operation has an average-case complexity of O(1). As an example of binary heap insertion, say we have a max-heap and we want to add the number 15 to the heap. We first place the 15 in the position marked by the X. However, the heap property is violated since 15 &gt; 8, so we need to swap the 15 and the 8. So, we have the heap looking as follows after the first swap: However, the heap property is still violated since 15 &gt; 11, so we need to swap again: which is a valid max-heap. There is no need to check the left child after this final step: at the start, the max-heap was valid, meaning the root was already greater than its left child, so replacing the root with an even greater value will maintain the property that each node is greater than its children (11 &gt; 5; if 15 &gt; 11, and 11 &gt; 5, then 15 &gt; 5, because of the transitive relation). Extract. The procedure for deleting the root from the heap (effectively extracting the maximum element in a max-heap or the minimum element in a min-heap) while retaining the heap property is as follows:
1. Replace the root of the heap with the last element on the last level.
2. Compare the new root with its children; if they are in the correct order, stop.
3. If not, swap the element with one of its children and return to the previous step (swap with its smaller child in a min-heap and its larger child in a max-heap).
Steps 2 and 3, which restore the heap property by comparing and possibly swapping a node with one of its children, are called the "down-heap" (also known as "bubble-down", "percolate-down", "sift-down", "sink-down", "trickle down", "heapify-down", "cascade-down", "fix-down", "extract-min" or "extract-max", or simply "heapify") operation. So, if we have the same max-heap as before, we remove the 11 and replace it with the 4. Now the heap property is violated since 8 is greater than 4.
In this case, swapping the two elements, 4 and 8, is enough to restore the heap property and we need not swap elements further: The downward-moving node is swapped with the "larger" of its children in a max-heap (in a min-heap it would be swapped with its smaller child), until it satisfies the heap property in its new position. This functionality is achieved by the Max-Heapify function as defined below in pseudocode for an array-backed heap "A" of length "length"("A"). "A" is indexed starting at 1.
// Perform a down-heap or heapify-down operation for a max-heap
// "A": an array representing the heap, indexed starting at 1
// "i": the index to start at when heapifying down
Max-Heapify("A", "i"):
    "left" ← 2×"i"
    "right" ← 2×"i" + 1
    "largest" ← "i"
    if "left" ≤ "length"("A") and "A"["left"] &gt; "A"["largest"] then:
        "largest" ← "left"
    if "right" ≤ "length"("A") and "A"["right"] &gt; "A"["largest"] then:
        "largest" ← "right"
    if "largest" ≠ "i" then:
        swap "A"["i"] and "A"["largest"]
        Max-Heapify("A", "largest")
For the above algorithm to correctly re-heapify the array, no nodes besides the node at index "i" and its two direct children can violate the heap property. The down-heap operation (without the preceding swap) can also be used to modify the value of the root, even when an element is not being deleted. In the worst case, the new root has to be swapped with its child on each level until it reaches the bottom level of the heap, meaning that the delete operation has a time complexity relative to the height of the tree, or O(log "n"). Insert then extract. Inserting an element and then extracting from the heap can be done more efficiently than simply calling the insert and extract functions defined above, which would involve both an codice_0 and a codice_1 operation. Instead, we can do just a codice_1 operation, as follows:
1. Compare the new item with the root of the heap (for a max-heap).
2. If the root is greater, swap the new item with the root and down-heapify starting from the root.
3. Return the item we are left with (the greater of the new item and the original root).
Python provides such a function for insertion then extraction called "heappushpop", which is paraphrased below. The heap array is assumed to have its first element at index 1.
// Push a new item to a (max) heap and then extract the root of the resulting heap.
// "heap": an array representing the heap, indexed at 1
// "item": an element to insert
// Returns the greater of the two between "item" and the root of "heap".
Push-Pop("heap": List&lt;T&gt;, "item": T) -&gt; T:
    if "heap" is not empty and heap[1] &gt; "item" then:  // &lt; if min heap
        swap "heap"[1] and "item"
        _downheap("heap" starting from index 1)
    return "item"
A similar function can be defined for popping and then inserting, which in Python is called "heapreplace":
// Extract the root of the heap, and push a new item
// "heap": an array representing the heap, indexed at 1
// "item": an element to insert
// Returns the current root of "heap"
Replace("heap": List&lt;T&gt;, "item": T) -&gt; T:
    swap "heap"[1] and "item"
    _downheap("heap" starting from index 1)
    return "item"
Search. Finding an arbitrary element takes O(n) time. Delete. Deleting an arbitrary element can be done as follows:
1. Find the index of the element to delete.
2. Swap this element with the last element of the heap and remove the last element.
3. Down-heapify or up-heapify the swapped element to restore the heap property.
Decrease or increase key. The decrease key operation replaces the value of a node with a given value with a lower value, and the increase key operation does the same but with a higher value. This involves finding the node with the given value, changing the value, and then down-heapifying or up-heapifying to restore the heap property. Decrease key can be done (for a max-heap) as follows:
1. Find the node whose key is to be decreased.
2. Replace its value with the lower value.
3. Down-heapify from that node to restore the heap property.
Increase key can be done (for a max-heap) as follows:
1. Find the node whose key is to be increased.
2. Replace its value with the higher value.
3. Up-heapify from that node to restore the heap property.
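For readers who prefer working code over pseudocode, the following short Python sketch implements the insert (sift-up) and extract (sift-down) operations described above on a 0-indexed, array-backed max-heap. The function names and the 0-based indexing are choices made here and differ from the 1-indexed pseudocode above.
def sift_up(heap, i):
    # Move heap[i] up until its parent is at least as large (max-heap).
    while i > 0 and heap[(i - 1) // 2] < heap[i]:
        parent = (i - 1) // 2
        heap[i], heap[parent] = heap[parent], heap[i]
        i = parent

def sift_down(heap, i):
    # Move heap[i] down, always swapping with the larger child (max-heap).
    n = len(heap)
    while True:
        largest = i
        for child in (2 * i + 1, 2 * i + 2):
            if child < n and heap[child] > heap[largest]:
                largest = child
        if largest == i:
            return
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest

def push(heap, item):
    heap.append(item)            # add at the first free leaf position
    sift_up(heap, len(heap) - 1)

def pop(heap):
    heap[0], heap[-1] = heap[-1], heap[0]   # move the root to the end
    top = heap.pop()                        # remove the old root
    if heap:
        sift_down(heap, 0)                  # restore the heap property
    return top

heap = []
for x in (11, 5, 8, 3, 4):
    push(heap, x)
print(pop(heap), pop(heap))  # 11 8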
Building a heap. Building a heap from an array of n input elements can be done by starting with an empty heap, then successively inserting each element. This approach, called Williams' method after the inventor of binary heaps, is easily seen to run in "O"("n" log "n") time: it performs n insertions at "O"(log "n") cost each. However, Williams' method is suboptimal. A faster method (due to Floyd) starts by arbitrarily putting the elements on a binary tree, respecting the shape property (the tree could be represented by an array, see below). Then, starting from the lowest level and moving upwards, sift the root of each subtree downward as in the deletion algorithm until the heap property is restored. More specifically, if all the subtrees starting at some height formula_1 have already been "heapified" (the bottommost level corresponding to formula_2), the trees at height formula_3 can be heapified by sending their root down along the path of maximum valued children when building a max-heap, or minimum valued children when building a min-heap. This process takes formula_4 operations (swaps) per node. In this method most of the heapification takes place in the lower levels. Since the height of the heap is formula_5, the number of nodes at height formula_1 is formula_6. Therefore, the cost of heapifying all subtrees is: formula_7 This uses the fact that the given infinite series formula_8 converges. The exact value of the above (the worst-case number of comparisons during the heap construction) is known to be equal to: formula_9, where "s"2("n") is the sum of all digits of the binary representation of n and "e"2("n") is the exponent of 2 in the prime factorization of n. The average case is more complex to analyze, but it can be shown to asymptotically approach 1.8814 "n" − 2 log2"n" + "O"(1) comparisons. The Build-Max-Heap function that follows converts an array "A" which stores a complete binary tree with "n" nodes to a max-heap by repeatedly using Max-Heapify (down-heapify for a max-heap) in a bottom-up manner. The array elements indexed by "floor"("n"/2) + 1, "floor"("n"/2) + 2, ..., "n" are all leaves for the tree (assuming that indices start at 1); thus each is a one-element heap and does not need to be down-heapified. Build-Max-Heap runs Max-Heapify on each of the remaining tree nodes.
Build-Max-Heap ("A"):
    for each index "i" from "floor"("length"("A")/2) downto 1 do:
        Max-Heapify("A", "i")
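Continuing the 0-indexed Python sketch given earlier (it reuses the sift_down function defined there; the helper name build_max_heap is arbitrary), Floyd's bottom-up construction can be written as follows.
def build_max_heap(a):
    # Floyd's method: sift down every non-leaf node, starting from the
    # last non-leaf (index len(a)//2 - 1 when the root is at index 0)
    # and moving up towards the root. Runs in O(n) overall.
    for i in range(len(a) // 2 - 1, -1, -1):
        sift_down(a, i)
    return a

print(build_max_heap([3, 9, 2, 1, 4, 5]))  # [9, 4, 5, 1, 3, 2]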
Heap implementation. Heaps are commonly implemented with an array. Any binary tree can be stored in an array, but because a binary heap is always a complete binary tree, it can be stored compactly. No space is required for pointers; instead, the parent and children of each node can be found by arithmetic on array indices. These properties make this heap implementation a simple example of an implicit data structure or Ahnentafel list. Details depend on the root position, which in turn may depend on constraints of a programming language used for implementation, or programmer preference. Specifically, sometimes the root is placed at index 1, in order to simplify arithmetic. Let "n" be the number of elements in the heap and "i" be an arbitrary valid index of the array storing the heap. If the tree root is at index 0, with valid indices 0 through "n" − 1, then each element "a" at index "i" has:
its children at indices 2"i" + 1 and 2"i" + 2, and
its parent at index "floor"(("i" − 1)/2).
Alternatively, if the tree root is at index 1, with valid indices 1 through "n", then each element "a" at index "i" has:
its children at indices 2"i" and 2"i" + 1, and
its parent at index "floor"("i"/2).
This implementation is used in the heapsort algorithm which reuses the space allocated to the input array to store the heap (i.e. the algorithm is done in-place). This implementation is also useful as a priority queue. When a dynamic array is used, insertion of an unbounded number of items is possible. The codice_0 or codice_1 operations can then be stated in terms of an array as follows: suppose that the heap property holds for the indices "b", "b"+1, ..., "e". The sift-down function extends the heap property to "b"−1, "b", "b"+1, ..., "e". Only index "i" = "b"−1 can violate the heap property. Let "j" be the index of the largest child of "a"["i"] (for a max-heap, or the smallest child for a min-heap) within the range "b", ..., "e". By swapping the values "a"["i"] and "a"["j"] the heap property for position "i" is established. At this point, the only problem is that the heap property might not hold for index "j". The sift-down function is applied tail-recursively to index "j" until the heap property is established for all elements. The sift-down function is fast. In each step it only needs two comparisons and one swap. The index value where it is working doubles in each iteration, so that at most log2 "e" steps are required. For big heaps and using virtual memory, storing elements in an array according to the above scheme is inefficient: (almost) every level is in a different page. B-heaps are binary heaps that keep subtrees in a single page, reducing the number of pages accessed by up to a factor of ten. The operation of merging two binary heaps takes Θ("n") for equal-sized heaps. The best one can do (in the case of an array implementation) is simply concatenate the two heap arrays and build a heap of the result. A heap on "n" elements can be merged with a heap on "k" elements using O(log "n" log "k") key comparisons, or, in case of a pointer-based implementation, in O(log "n" log "k") time. An algorithm for splitting a heap on "n" elements into two heaps on "k" and "n"−"k" elements, respectively, based on a new view of heaps as an ordered collection of subheaps, has also been presented; it requires O(log "n" * log "n") comparisons. This view also gives a new and conceptually simple algorithm for merging heaps. When merging is a common task, a different heap implementation is recommended, such as binomial heaps, which can be merged in O(log "n"). Additionally, a binary heap can be implemented with a traditional binary tree data structure, but there is an issue with finding the adjacent element on the last level of the binary heap when adding an element. This element can be determined algorithmically or by adding extra data to the nodes, called "threading" the tree: instead of merely storing references to the children, we store the inorder successor of the node as well. It is possible to modify the heap structure to make the extraction of both the smallest and largest element possible in formula_10formula_11 time. To do this, the rows alternate between min heap and max-heap. The algorithms are roughly the same, but, in each step, one must consider the alternating rows with alternating comparisons. The performance is roughly the same as a normal single direction heap. This idea can be generalized to a min-max-median heap. Derivation of index equations.
In an array-based heap, the children and parent of a node can be located via simple arithmetic on the node's index. This section derives the relevant equations for heaps with their root at index 0, with additional notes on heaps with their root at index 1. To avoid confusion, we define the level of a node as its distance from the root, such that the root itself occupies level 0. Child nodes. For a general node located at index i (beginning from 0), we will first derive the index of its right child, formula_12. Let node i be located in level L, and note that any level l contains exactly formula_13 nodes. Furthermore, there are exactly formula_14 nodes contained in the layers up to and including layer l (think of binary arithmetic; 0111...111 = 1000...000 - 1). Because the root is stored at 0, the kth node will be stored at index formula_15. Putting these observations together yields the following expression for the index of the last node in layer l: formula_16 Let there be j nodes after node i in layer L, such that formula_17 Each of these j nodes must have exactly 2 children, so there must be formula_18 nodes separating i's right child from the end of its layer (formula_19). formula_20 Noting that the left child of any node is always 1 place before its right child, we get formula_21. If the root is located at index 1 instead of 0, the last node in each level is instead at index formula_14. Using this throughout yields formula_22 and formula_23 for heaps with their root at 1. Parent node. Every non-root node is either the left or right child of its parent, so one of the following must hold:
formula_24 (if the node is a left child), or
formula_25 (if the node is a right child).
Hence, formula_26 Now consider the expression formula_27. If node formula_0 is a left child, this gives the result immediately; however, it also gives the correct result if node formula_0 is a right child. In this case, formula_28 must be even, and hence formula_29 must be odd. formula_30 Therefore, irrespective of whether a node is a left or right child, its parent can be found by the expression: formula_31 Related structures. Since the ordering of siblings in a heap is not specified by the heap property, a single node's two children can be freely interchanged unless doing so violates the shape property (compare with treap). Note, however, that in the common array-based heap, simply swapping the children might also necessitate moving the children's sub-tree nodes to retain the heap property. The binary heap is a special case of the d-ary heap in which d = 2. Summary of running times. Here are time complexities of various heap data structures. The abbreviation am. indicates that the given complexity is amortized, otherwise it is a worst-case complexity. For the meaning of "O"("f") and "Θ"("f") see Big O notation. Names of operations assume a min-heap. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "i" }, { "math_id": 1, "text": "h" }, { "math_id": 2, "text": "h=0" }, { "math_id": 3, "text": "h+1" }, { "math_id": 4, "text": "O(h)" }, { "math_id": 5, "text": " \\lfloor \\log n \\rfloor" }, { "math_id": 6, "text": "\\le \\frac{2^{\\lfloor \\log n \\rfloor}}{2^h} \\le \\frac{n}{2^h}" }, { "math_id": 7, "text": "\n\\begin{align}\n\\sum_{h=0}^{\\lfloor \\log n \\rfloor} \\frac{n}{2^h} O(h) \n& = O\\left(n\\sum_{h=0}^{\\lfloor \\log n \\rfloor} \\frac{h}{2^h}\\right) \\\\\n& = O\\left(n\\sum_{h=0}^{\\infty} \\frac{h}{2^h}\\right) \\\\\n& = O(n)\n\\end{align}\n" }, { "math_id": 8, "text": "\\sum_{i=0}^\\infty i/2^i" }, { "math_id": 9, "text": " 2 n - 2 s_2 (n) - e_2 (n) " }, { "math_id": 10, "text": "O" }, { "math_id": 11, "text": "(\\log n)" }, { "math_id": 12, "text": "\\text{right} = 2i + 2" }, { "math_id": 13, "text": "2^l" }, { "math_id": 14, "text": "2^{l + 1} - 1" }, { "math_id": 15, "text": "(k - 1)" }, { "math_id": 16, "text": "\\text{last}(l) = (2^{l + 1} - 1) - 1 = 2^{l + 1} - 2" }, { "math_id": 17, "text": "\\begin{alignat}{2}\ni = & \\quad \\text{last}(L) - j\\\\\n = & \\quad (2^{L + 1} -2) - j\\\\\n\\end{alignat}\n" }, { "math_id": 18, "text": "2j" }, { "math_id": 19, "text": "L + 1" }, { "math_id": 20, "text": "\\begin{alignat}{2}\n\\text{right} = & \\quad \\text{last(L + 1)} -2j\\\\\n = & \\quad (2^{L + 2} -2) -2j\\\\\n = & \\quad 2(2^{L + 1} -2 -j) + 2\\\\\n = & \\quad 2i + 2\n\\end{alignat}\n" }, { "math_id": 21, "text": "\\text{left} = 2i + 1" }, { "math_id": 22, "text": "\\text{left} = 2i" }, { "math_id": 23, "text": "\\text{right} = 2i + 1" }, { "math_id": 24, "text": "i = 2 \\times (\\text{parent}) + 1" }, { "math_id": 25, "text": "i = 2 \\times (\\text{parent}) + 2" }, { "math_id": 26, "text": "\\text{parent} = \\frac{i - 1}{2} \\;\\textrm{ or }\\; \\frac{i - 2}{2}" }, { "math_id": 27, "text": "\\left\\lfloor \\dfrac{i - 1}{2} \\right\\rfloor" }, { "math_id": 28, "text": "(i - 2)" }, { "math_id": 29, "text": "(i - 1)" }, { "math_id": 30, "text": "\\begin{alignat}{2}\n\\left\\lfloor \\dfrac{i - 1}{2} \\right\\rfloor = & \\quad \\left\\lfloor \\dfrac{i - 2}{2} + \\dfrac{1}{2} \\right\\rfloor\\\\\n= & \\quad \\frac{i - 2}{2}\\\\\n= & \\quad \\text{parent}\n\\end{alignat}\n" }, { "math_id": 31, "text": "\\text{parent} = \\left\\lfloor \\dfrac{i - 1}{2} \\right\\rfloor" } ]
https://en.wikipedia.org/wiki?curid=69890
69893608
Quantum secret sharing
&lt;templatestyles src="Template:TOC_right/styles.css" /&gt; Quantum secret sharing (QSS) is a quantum cryptographic scheme for secure communication that extends beyond simple quantum key distribution. It modifies the classical secret sharing (CSS) scheme by using quantum information and the no-cloning theorem to attain the ultimate security for communications. The method of secret sharing consists of a sender who wishes to share a secret with a number of receiver parties in such a way that the secret is fully revealed only if a large enough portion of the receivers work together. However, if not enough receivers work together to reveal the secret, the secret remains completely unknown. The classical scheme was independently proposed by Adi Shamir and George Blakley in 1979. In 1998, Mark Hillery, Vladimír Bužek, and André Berthiaume extended the theory to make use of quantum states for establishing a secure key that could be used to transmit the secret via classical data. In the years following, more work was done to extend the theory to transmitting quantum information as the secret, rather than just using quantum states for establishing the cryptographic key. QSS has been proposed for being used in quantum money as well as for joint checking accounts, quantum networking, and distributed quantum computing, among other applications. Protocol. The simplest case: GHZ states. This example follows the original scheme laid out by Hillery et al. in 1998 which makes use of Greenberger–Horne–Zeilinger (GHZ) states. A similar scheme was developed shortly thereafter which used two-particle entangled states instead of three-particle states. In both cases, the protocol is essentially an extension of quantum key distribution to two receivers instead of just one. Following the typical language, let the sender be denoted as Alice and two receivers as Bob and Charlie. Alice's objective is to send each receiver a "share" of her secret key (really just a quantum state) in such a way that: Alice initiates the protocol by sharing with each of Bob and Charlie one particle from a GHZ triplet in the (standard) Z-basis, holding onto the third particle herself: formula_0 where formula_1 and formula_2 are orthogonal modes in an arbitrary Hilbert space. After each participant measures their particle in the X- or Y-basis (chosen at random), they share (via a classical, public channel) which basis they used to make the measurement, but not the result itself. Upon combining their measurement results, Bob and Charlie can deduce what Alice measured 50% of the time. Repeating this process many times, and using a small fraction to verify that no malicious actors are present, the three participants can establish a joint key for communicating securely. Consider the following for a clear example of how this will work. Let us define the x and y eigenstates in the following, standard way: formula_3 formula_4. The GHZ state can then be rewritten as formula_5, where (a, b, c) denote the particles for (Alice, Bob, Charlie) and Alice's and Bob's states have been written in the X-basis. Using this form, it is evident that their exists a correlation between Alice's and Bob's measurements and Charlie's single-particle state: if Alice and Bob have correlated results then Charlie has the state formula_6 and if Alice and Bob have anticorrelated results then Charlie has the state formula_7. 
It is clear from the table summarizing these correlations that by knowing the measurement bases of Alice and Bob, Charlie can use his own measurement result to deduce whether Alice and Bob had the same or opposite results. Note however that to make this deduction, Charlie must choose the correct measurement basis for measuring his own particle. Since he chooses between two noncommuting bases at random, only half of the time will he be able to extract useful information. The other half of the time the results must be discarded. Additionally, from the table one can see that Charlie has no way of determining who measured what, only if the results of Alice and Bob were correlated or anticorrelated. Thus the only way for Charlie to figure out Alice's measurement is by working together with Bob and sharing their results. In doing so, they can extract Alice's results for every measurement and use this information to create a cryptographic key that only they know. (("k","n")) threshold scheme. The simple case described above can be extended similarly to that done in CSS by Shamir and Blakley via a thresholding scheme. In the (("k","n")) threshold scheme (double parentheses denoting a quantum scheme), Alice splits her secret key (quantum state) into n shares such that any k≤n shares are required to extract the full information but k-1 or less shares cannot extract "any" information about Alice's key. The number of users needed to extract the secret is bounded by "n"/2 &lt; "k" ≤ "n". Consider for "n" ≥ 2"k", if a (("k","n")) threshold scheme is applied to two disjoint sets of "k" in "n", then two independent copies of Alice's secret can be reconstructed. This of course would violate the no-cloning theorem and is why n must be less than 2k. As long as a (("k","n")) threshold scheme exists, a (("k","n"-1)) threshold scheme can be constructed by simply discarding one share. This method can be repeated until k=n. The following outlines a simple ((2,3)) threshold scheme, and more complicated schemes can be imagined by increasing the number of shares Alice splits her original state into: Consider Alice beginning with the single qutrit state formula_8 and then mapping it to three qutrits formula_9 and sharing one qutrit with each of the 3 receivers. It is evident that a single share does not give any information about Alice's original state, since each share is in the maximally mixed state. However, two shares could be used to reconstruct Alice's original state. Assume the first two shares are given. Add the first share to the second (modulo three) and then add the new value of the second share to the first. The resulting state is formula_10 where the first qutrit is exactly Alice's original state. Via this method, the sender's original state can be reconstructed at one of the receivers' particles, but it is crucial that no measurements be made during this reconstruction process or any superposition within the quantum state will collapse. Security. The security of QSS relies upon the no-cloning theorem to protect against possible eavesdroppers as well as dishonest users. This section adopts the two-particle entanglement protocol very briefly mentioned above. Eavesdropping. QSS promises security against eavesdropping in the exact same way as quantum key distribution. Consider an eavesdropper, Eve, who is assumed to be capable of perfectly discriminating and creating the quantum states used in the QSS protocol. 
Eve's objective is to intercept one of the receivers' (say Bob's) shares, measure it, then recreate the state and send it on to whomever the share was initially intended for. The issue with this method is that Eve needs to randomly choose a basis to measure in, and half of the time she will choose the wrong basis. When she chooses the correct basis, she will get the correct measurement result with certainty and can recreate the state she measured and send it off to Bob without her presence being detected. However, when she chooses the wrong basis, she will end up sending one of the two states from the incorrect basis. Bob will measure the state she sent him and half of the time this will be the correct detection, but only because the state from the wrong basis is an equal superposition of the two states in the correct basis. Thus, half of the time that Eve measures in the wrong basis and therefore sends the incorrect state, Bob will measure the wrong state. This intervention on Eve's part leads to causing an error in the protocol on an extra 25% of trials. Therefore, with enough measurements, it will be nearly impossible to miss the protocol errors occurring with a 75% probability instead of the 50% probability predicted by the theory, thus signaling that there is an eavesdropper within the communication channel. More complex eavesdropping strategies can be performed using ancilla states, but the eavesdropper will still be detectable in a similar manner. Dishonest participant. Now, consider the case where one of the participants of the protocol (say Bob) is acting as a malicious user by trying to obtain the secret without the other participants being aware. Analyzing the possibilities, one learns that choosing the proper order in which Bob and Charlie release their measurement bases and results "when testing for eavesdropping" can promise the detection of any cheating that may be occurring. The proper order turns out to be: This ordering prevents receiver 2 from knowing which basis to share for tricking the other participants because receiver 2 does not yet know what basis receiver 1 is going to announce was used. Similarly, since receiver 1 must release their results first, they cannot control if the measurements should be correlated or anticorrelated for the valid combination of bases used. In this way, acting dishonestly will introduce errors in the eavesdropper testing phase whether the dishonest participant is receiver 1 or receiver 2. Thus, the ordering of releasing the data must be carefully chosen so as to prevent any dishonest user from acquiring the secret without being noticed by the other participants. Experimental realization. This section follows from the first experimental demonstration of QSS in 2001 which was made possible via advances in techniques of quantum optics. The original idea for QSS using GHZ states was more challenging to implement because of the difficulties in producing three-particle correlations via either down-conversion processes with formula_11 nonlinearities or three-photon positronium annihilation, both of which are rare events. Instead, the original experiment was performed via the two-particle scheme using a standard formula_12 spontaneous parametric down-conversion (SPDC) process with the third correlated photon being the pump photon. 
The experimental setup works as follows: Using formula_19 where X and Y are either 'S' for short path or 'L' for long path and i and j are one of 'A', 'B', or 'C' to label a participant's interferometer, this notation describes the arbitrary path taken for any combination of two participants. Notice that formula_20 and formula_21 where j is either 'B' or 'C' are indistinguishable processes as the time difference between the two processes are exactly the same. The same is true for formula_22 and formula_23 Describing these indistinguishable processes mathematically, formula_24 which can be thought of as a "pseudo-GHZ state" where the difference from a true GHZ state is that the three photons do not exist simultaneously. Nonetheless, the triple "coincidences" can be described by exactly the same probability function as for the true GHZ state, formula_25 implying that QSS will work just the same for this 2-particle source. By setting the phases formula_26 and formula_16 to either 0 or formula_27 in much the same way as two-photon Bell tests, it can be shown that this setup violates a Bell-type inequality for three particles, formula_28, where formula_29 is the expectation value for a coincidence measurement with phase shifter settings formula_30. For this experiment, the Bell-type inequality was violated, with formula_31, suggesting that this setup exhibits quantum nonlocality. This seminal experiment showed that the quantum correlations from this setup are indeed described by the probability function formula_32 The simplicity of the SPDC source allowed for coincidences at much higher rates than traditional three-photon entanglement sources, making QSS more practical. This was the first experiment to prove the feasibility of a QSS protocol. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "|\\mathrm{\\Psi}\\rangle_{\\rm GHZ} = \\frac{|000\\rangle + |111\\rangle}{\\sqrt{2}}," }, { "math_id": 1, "text": "|\\mathrm{0}\\rangle" }, { "math_id": 2, "text": "|\\mathrm{1}\\rangle" }, { "math_id": 3, "text": "|\\mathrm{+x}\\rangle = \\frac{|0\\rangle + |1\\rangle}{\\sqrt{2}}, |\\mathrm{-x}\\rangle = \\frac{|0\\rangle - |1\\rangle}{\\sqrt{2}}" }, { "math_id": 4, "text": "|\\mathrm{+y}\\rangle = \\frac{|0\\rangle + i|1\\rangle}{\\sqrt{2}}, |\\mathrm{-y}\\rangle = \\frac{|0\\rangle - i|1\\rangle}{\\sqrt{2}}" }, { "math_id": 5, "text": "|\\mathrm{\\Psi}\\rangle_{\\rm GHZ} = \\frac{1}{2\\sqrt{2}}[({|+x\\rangle_{a}|+x\\rangle_{b} + |-x\\rangle_{a}|-x\\rangle_{b}})(|0\\rangle_{c} + |1\\rangle_{c}) + ({|+x\\rangle_{a}|-x\\rangle_{b} + |-x\\rangle_{a}|+x\\rangle_{b}})(|0\\rangle_{c} - |1\\rangle_{c})]" }, { "math_id": 6, "text": "\\frac{|0\\rangle_{c} + |1\\rangle_{c}}{\\sqrt{2}}" }, { "math_id": 7, "text": "\\frac{|0\\rangle_{c} - |1\\rangle_{c}}{\\sqrt{2}}" }, { "math_id": 8, "text": "|\\mathrm{\\Psi}\\rangle_{a} = \\alpha|0\\rangle + \\beta|1\\rangle + \\gamma|2\\rangle," }, { "math_id": 9, "text": "|\\mathrm{\\psi}\\rangle = \\alpha(|000\\rangle + |111\\rangle + |222\\rangle) + \\beta(|012\\rangle + |120\\rangle + |201\\rangle) + \\gamma(|021\\rangle + |102\\rangle + |210\\rangle)" }, { "math_id": 10, "text": "|\\mathrm{\\psi}\\rangle = (\\alpha|0\\rangle + \\beta|1\\rangle + \\gamma|2\\rangle)(|00\\rangle + |12\\rangle + |21\\rangle)" }, { "math_id": 11, "text": "\\chi^{3}" }, { "math_id": 12, "text": "\\chi^{2}" }, { "math_id": 13, "text": "t_{0}" }, { "math_id": 14, "text": "\\alpha." }, { "math_id": 15, "text": "\\beta" }, { "math_id": 16, "text": "\\gamma" }, { "math_id": 17, "text": "t_{B}" }, { "math_id": 18, "text": "t_{C}" }, { "math_id": 19, "text": "|\\mathrm{X}\\rangle_{i},|\\mathrm{Y}\\rangle_{j}" }, { "math_id": 20, "text": "|\\mathrm{S}\\rangle_{A},|\\mathrm{L}\\rangle_{j}" }, { "math_id": 21, "text": "|\\mathrm{L}\\rangle_{A},|\\mathrm{S}\\rangle_{j}" }, { "math_id": 22, "text": "|\\mathrm{S}\\rangle_{B},|\\mathrm{S}\\rangle_{C}" }, { "math_id": 23, "text": "|\\mathrm{L}\\rangle_{B},|\\mathrm{L}\\rangle_{C}." }, { "math_id": 24, "text": "|\\mathrm{\\psi}\\rangle = \\frac{1}{\\sqrt{2}}(|L\\rangle_{A},|S\\rangle_{B}|S\\rangle_{C} + e^{i(\\alpha + \\beta + \\gamma)}|S\\rangle_{A},|L\\rangle_{B}|L\\rangle_{C}))," }, { "math_id": 25, "text": "P_{i,j,k} = \\frac{1}{8}(1 + ijk\\cos(\\alpha + \\beta + \\gamma))," }, { "math_id": 26, "text": "\\alpha, \\beta," }, { "math_id": 27, "text": "\\frac{\\pi}{2}" }, { "math_id": 28, "text": "S_{3} = |E(\\alpha' + \\beta + \\gamma) + E(\\alpha + \\beta' + \\gamma) + E(\\alpha + \\beta + \\gamma') - E(\\alpha' + \\beta' + \\gamma')|\\le{2}" }, { "math_id": 29, "text": "E(\\alpha + \\beta + \\gamma)" }, { "math_id": 30, "text": "(\\alpha, \\beta, \\gamma)" }, { "math_id": 31, "text": "S_{\\rm exp} = 3.69" }, { "math_id": 32, "text": "P_{i,j,k}." } ]
https://en.wikipedia.org/wiki?curid=69893608
6990113
Van der Grinten projection
Compromise map projection The van der Grinten projection is a compromise map projection, which means that it is neither equal-area nor conformal. Unlike perspective projections, the van der Grinten projection is an arbitrary geometric construction on the plane. Van der Grinten projects the entire Earth into a circle. It largely preserves the familiar shapes of the Mercator projection while modestly reducing Mercator's distortion. Polar regions are subject to extreme distortion. Lines of longitude converge to points at the poles. History. Alphons J. van der Grinten invented the projection in 1898 and received US patent #751,226 for it and three others in 1904. The National Geographic Society adopted the projection for their reference maps of the world in 1922, raising its visibility and stimulating its adoption elsewhere. In 1988, National Geographic replaced the van der Grinten projection with the Robinson projection. Geometric construction. The geometric construction given by van der Grinten can be written algebraically: formula_0 where "x" takes the sign of "λ" − "λ"0, "y" takes the sign of "φ", and formula_1 If "φ" = 0, then formula_2 Similarly, if "λ" = "λ"0 or "φ" = ±π/2, then formula_3 In all cases, "φ" is the latitude, "λ" is the longitude, and "λ"0 is the central meridian of the projection.
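For illustration, the construction above can be transcribed directly into code. The following Python sketch is a reading of the formulas above rather than a reference implementation; the function name and argument conventions (angles in radians, the central meridian "λ"0 passed as an argument) are choices made here.
from math import pi, sqrt, sin, cos, tan, asin, copysign

def van_der_grinten(lat, lon, lon0=0.0):
    # Project latitude/longitude (radians) to plane coordinates (x, y);
    # the whole Earth maps into a circle of radius pi.
    dlon = lon - lon0
    if lat == 0.0:                          # special case: the equator
        return dlon, 0.0
    theta = asin(abs(2.0 * lat / pi))
    if dlon == 0.0 or abs(lat) == pi / 2:   # central meridian or the poles
        return 0.0, copysign(pi * tan(theta / 2.0), lat)
    A = 0.5 * abs(pi / dlon - dlon / pi)
    G = cos(theta) / (sin(theta) + cos(theta) - 1.0)
    P = G * (2.0 / sin(theta) - 1.0)
    Q = A * A + G
    x = pi * (A * (G - P**2)
              + sqrt(A**2 * (G - P**2)**2 - (P**2 + A**2) * (G**2 - P**2))) / (P**2 + A**2)
    y = pi * (P * Q - A * sqrt((A**2 + 1.0) * (P**2 + A**2) - Q**2)) / (P**2 + A**2)
    return copysign(x, dlon), copysign(y, lat)

# Example: a point at 45° N, 90° E with the central meridian at 0°.
print(van_der_grinten(pi / 4, pi / 2))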
[ { "math_id": 0, "text": "\\begin{align}\n x &= \\pm \\pi \\frac{A (G - P^2) + \\sqrt{A^2 (G - P^2)^2 - (P^2 + A^2) (G^2 - P^2)}}{P^2 + A^2}, \\\\\n y &= \\pm \\pi \\frac{P Q - A \\sqrt{(A^2 + 1) (P^2 + A^2) - Q^2}}{P^2 + A^2},\n\\end{align}" }, { "math_id": 1, "text": "\\begin{align}\n A &= \\frac{1}{2} \\left| \\frac{\\pi}{\\lambda - \\lambda_0} - \\frac{\\lambda - \\lambda_0}{\\pi} \\right|, \\\\\n G &= \\frac{\\cos \\theta}{\\sin \\theta + \\cos \\theta - 1}, \\\\\n P &= G \\left(\\frac{2}{\\sin \\theta} - 1\\right), \\\\\n \\theta &= \\arcsin \\left|\\frac{2 \\varphi}{\\pi}\\right|, \\\\\n Q &= A^2 + G.\n\\end{align}" }, { "math_id": 2, "text": "\\begin{align}\n x &= (\\lambda - \\lambda_0), \\\\\n y &= 0.\n\\end{align}" }, { "math_id": 3, "text": "\\begin{align}\n x &= 0, \\\\\n y &= \\pm \\pi \\tan \\frac{\\theta}{2}.\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=6990113
69901838
Transport theorem
The transport theorem (or transport equation, rate of change transport theorem, basic kinematic equation, or Bour's formula, named after Edmond Bour) is a vector equation that relates the time derivative of a Euclidean vector as evaluated in a non-rotating coordinate system to its time derivative in a rotating reference frame. It has important applications in classical mechanics, analytical dynamics, and diverse fields of engineering. A Euclidean vector represents a certain magnitude and direction in space that is independent of the coordinate system in which it is measured. However, when taking a time derivative of such a vector, one actually takes the difference between two vectors measured at two "different" times "t" and "t+dt". In a rotating coordinate system, the coordinate axes can have different directions at these two times, such that even a constant vector can have a non-zero time derivative. As a consequence, the time derivative of a vector measured in a rotating coordinate system can be different from the time derivative of the same vector in a non-rotating reference system. For example, the velocity vector of an airplane as evaluated using a coordinate system that is fixed to the earth (a rotating reference system) is different from its velocity as evaluated using a coordinate system that is fixed in space. The transport theorem provides a way to relate time derivatives of vectors between a rotating and a non-rotating coordinate system; it is derived and explained in more detail in the article on rotating reference frames and can be written as: formula_0 Here f is the vector whose time derivative is evaluated in both the non-rotating and the rotating coordinate system. The subscript "r" designates its time derivative in the rotating coordinate system and the vector Ω is the angular velocity of the rotating coordinate system. The transport theorem is particularly useful for relating velocity and acceleration vectors between rotating and non-rotating coordinate systems. One reference states: "Despite of its importance in classical mechanics and its ubiquitous application in engineering, there is no universally-accepted name for the Euler derivative transformation formula [...] Several terminology are used: kinematic theorem, transport theorem, and transport equation. These terms, although terminologically correct, are more prevalent in the subject of fluid mechanics to refer to entirely different physics concepts." An example of such a different physics concept is Reynolds transport theorem. Derivation. Let formula_1 be the basis vectors of formula_2, as seen from the reference frame formula_3, and denote the components of a vector formula_4 in formula_2 by just formula_5. Let formula_6 so that this coordinate transformation is generated, in time, according to formula_7. Such a generator differential equation is important for trajectories in Lie group theory. Applying the product rule with implicit summation convention, formula_8 For the rotation groups formula_9, one has formula_10. In three dimensions, formula_11, the generator formula_12 then equals the cross product operation from the left, a skew-symmetric linear map formula_13 for any vector formula_14. As a matrix, it is also related to the vector as seen from formula_2 via formula_15 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}t}\\boldsymbol{f} = \\left[ \\left(\\frac{\\mathrm{d}}{\\mathrm{d}t}\\right)_{\\mathrm{r}} + \\boldsymbol{\\Omega} \\times \\right] \\boldsymbol{f} \\ ." }, { "math_id": 1, "text": "{\\boldsymbol b}_i:=T^E_B{\\boldsymbol e}_i" }, { "math_id": 2, "text": "B" }, { "math_id": 3, "text": "E" }, { "math_id": 4, "text": "{\\boldsymbol f}" }, { "math_id": 5, "text": "f_i" }, { "math_id": 6, "text": "G:=T' \\cdot T^{-1}" }, { "math_id": 7, "text": "T'=G\\cdot T" }, { "math_id": 8, "text": "{\\boldsymbol f}' = (f_i {\\boldsymbol b}_i)' = (f_iT)'{\\boldsymbol e}_i = (f_i'T + f_i G\\cdot T)\\,{\\boldsymbol e}_i = (f_i' + f_i G)\\,{\\boldsymbol b}_i = \\left( \\left(\\tfrac{\\mathrm{d}}{\\mathrm{d}t}\\right)_B + G \\right) \\boldsymbol{f}" }, { "math_id": 9, "text": "{\\mathrm{SO}}(n)" }, { "math_id": 10, "text": "T^B_E:=(T^E_B)^{-1}=(T^E_B)^T" }, { "math_id": 11, "text": "n=3" }, { "math_id": 12, "text": "G" }, { "math_id": 13, "text": "[{\\boldsymbol \\Omega}_E]_\\times {\\boldsymbol g} := {\\boldsymbol \\Omega}_E\\times {\\boldsymbol g}" }, { "math_id": 14, "text": "{\\boldsymbol g}" }, { "math_id": 15, "text": "[{\\boldsymbol \\Omega}_E]_\\times = [T^E_B{\\boldsymbol \\Omega}_B]_\\times = T^E_B\\cdot[{\\boldsymbol \\Omega}_B]_\\times\\cdot T^B_E" } ]
https://en.wikipedia.org/wiki?curid=69901838
6990584
Equations defining abelian varieties
In mathematics, the concept of abelian variety is the higher-dimensional generalization of the elliptic curve. The equations defining abelian varieties are a topic of study because every abelian variety is a projective variety. In dimension "d" ≥ 2, however, it is no longer as straightforward to discuss such equations. There is a large classical literature on this question, which in a reformulation is, for complex algebraic geometry, a question of describing relations between theta functions. The modern geometric treatment now refers to some basic papers of David Mumford, from 1966 to 1967, which reformulated that theory in terms from abstract algebraic geometry valid over general fields. Complete intersections. The only 'easy' cases are those for "d" = 1, for an elliptic curve with linear span the projective plane or projective 3-space. In the plane, every elliptic curve is given by a cubic curve. In "P"3, an elliptic curve can be obtained as the intersection of two quadrics. In general, abelian varieties are not complete intersections. Computer algebra techniques are now able to have some impact on the direct handling of equations for small values of "d" &gt; 1. Kummer surfaces. The interest in the Kummer surface in nineteenth-century geometry came in part from the way a quartic surface represented a quotient of an abelian variety with "d" = 2, by the group of order 2 of automorphisms generated by "x" → −"x" on the abelian variety. General case. Mumford defined a theta group associated to an invertible sheaf "L" on an abelian variety "A". This is a group of self-automorphisms of "L", and is a finite analogue of the Heisenberg group. The primary results are on the action of the theta group on the global sections of "L". When "L" is very ample, the linear representation can be described by means of the structure of the theta group. In fact, the theta group is abstractly a simple type of nilpotent group, a central extension of a group of torsion points on "A", and the extension is known (it is in effect given by the Weil pairing). There is a uniqueness result for irreducible linear representations of the theta group with given central character, or in other words an analogue of the Stone–von Neumann theorem. (It is assumed for this that the characteristic of the field of coefficients doesn't divide the order of the theta group.) Mumford showed how this abstract algebraic formulation could account for the classical theory of theta functions with theta characteristics, as being the case where the theta group was an extension of the two-torsion of "A". An innovation in this area is to use the Fourier–Mukai transform. The coordinate ring. The goal of the theory is to prove results on the homogeneous coordinate ring of the embedded abelian variety "A", that is, set in a projective space according to a very ample "L" and its global sections. The graded commutative ring formed by the direct sum of the global sections of the powers formula_0 meaning the "n"-fold tensor product of "L" with itself, is represented as the quotient ring of a polynomial algebra by a homogeneous ideal "I". The graded parts of "I" have been the subject of intense study. Quadratic relations were provided by Bernhard Riemann. Koizumi's theorem states that the third power of an ample line bundle is normally generated. The Mumford–Kempf theorem states that the fourth power of an ample line bundle is quadratically presented.
For a base field of characteristic zero, Giuseppe Pareschi proved a result including these (as the cases "p" = 0, 1) which had been conjectured by Lazarsfeld: let "L" be an ample line bundle on an abelian variety "A". If "n" ≥ "p" + 3, then the "n"-th tensor power of "L" satisfies condition "N"p. Further results have been proved by Pareschi and Popa, including previous work in the field. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "L^n,\\ " } ]
https://en.wikipedia.org/wiki?curid=6990584
6990619
Extra special group
Concept in abstract algebra In group theory, a branch of abstract algebra, extraspecial groups are analogues of the Heisenberg group over finite fields whose size is a prime. For each prime "p" and positive integer "n" there are exactly two (up to isomorphism) extraspecial groups of order "p"1+2"n". Extraspecial groups often occur in centralizers of involutions. The ordinary character theory of extraspecial groups is well understood. Definition. Recall that a finite group is called a "p"-group if its order is a power of a prime "p". A "p"-group "G" is called extraspecial if its center "Z" is cyclic of order "p", and the quotient "G"/"Z" is a non-trivial elementary abelian "p"-group. Extraspecial groups of order "p"1+2"n" are often denoted by the symbol "p"1+2"n". For example, "2"1+24 stands for an extraspecial group of order "2"25. Classification. Every extraspecial "p"-group has order "p"1+2"n" for some positive integer "n", and conversely for each such number there are exactly two extraspecial groups up to isomorphism. A central product of two extraspecial "p"-groups is extraspecial, and every extraspecial group can be written as a central product of extraspecial groups of order "p"3. This reduces the classification of extraspecial groups to that of extraspecial groups of order "p"3. The classification is often presented differently in the two cases "p" odd and "p" = 2, but a uniform presentation is also possible. "p" odd. There are two extraspecial groups of order "p"3, which for "p" odd are given by the group of exponent "p", which can be realized as the group of upper unitriangular 3 × 3 matrices over the field with "p" elements, and the group of exponent "p"2, which is a semidirect product of a cyclic group of order "p"2 and a cyclic group of order "p" acting on it non-trivially. If "n" is a positive integer there are two extraspecial groups of order "p"1+2"n", which for "p" odd are given by the central product of "n" copies of the extraspecial group of order "p"3 and exponent "p", which again has exponent "p", and the central product in which at least one factor is the extraspecial group of order "p"3 and exponent "p"2, which has exponent "p"2. The two extraspecial groups of order "p"1+2"n" are most easily distinguished by the fact that one has all elements of order at most "p" and the other has elements of order "p"2. "p" = 2. There are two extraspecial groups of order 8 = "2"3, which are given by the dihedral group "D"8 of order 8 and the quaternion group "Q"8 of order 8. If "n" is a positive integer there are two extraspecial groups of order "2"1+2"n", which are given by the central product of "n" copies of the dihedral group "D"8, and the central product of "n"−1 copies of "D"8 with one copy of the quaternion group "Q"8. The two extraspecial groups "G" of order "2"1+2"n" are most easily distinguished as follows. If "Z" is the center, then "G"/"Z" is a vector space over the field with 2 elements. It has a quadratic form "q", where "q" is 1 if the lift of an element has order 4 in "G", and 0 otherwise. Then the Arf invariant of this quadratic form can be used to distinguish the two extraspecial groups. Equivalently, one can distinguish the groups by counting the number of elements of order 4. All "p". A uniform presentation of the extraspecial groups of order "p"1+2"n" can be given as follows. Define the two groups: formula_0 and formula_1. "M"("p") and "N"("p") are non-isomorphic extraspecial groups of order "p"3 with center of order "p" generated by "c". The two non-isomorphic extraspecial groups of order "p"1+2"n" are the central products of either "n" copies of "M"("p") or "n"−1 copies of "M"("p") and 1 copy of "N"("p"). This is a special case of a classification of "p"-groups with cyclic centers and simple derived subgroups given in . Character theory. If "G" is an extraspecial group of order "p"1+2"n", then its irreducible complex representations are given as follows: there are exactly "p"2"n" irreducible representations of degree 1, namely those whose kernel contains "Z", which are pulled back from the elementary abelian quotient "G"/"Z"; and there are exactly "p" − 1 irreducible representations of degree "p""n", one for each non-trivial character of "Z", on which "Z" acts by that character. In particular the sum of the squares of the degrees is "p"2"n" + ("p" − 1)"p"2"n" = "p"1+2"n", the order of "G", as it must be. Examples. It is quite common for the centralizer of an involution in a finite simple group to contain a normal extraspecial subgroup. For example, the centralizer of an involution of type 2B in the monster group has structure "2"1+24.Co1, which means that it has a normal extraspecial subgroup of order "2"1+24, and the quotient is one of the Conway groups. Generalizations. 
Groups whose center, derived subgroup, and Frattini subgroup are all equal are called special groups. Infinite special groups whose derived subgroup has order "p" are also called extraspecial groups. The classification of countably infinite extraspecial groups is very similar to the finite case, , but for larger cardinalities even basic properties of the groups depend on delicate issues of set theory, some of which are exposed in . The nilpotent groups whose center is cyclic and derived subgroup has order "p" and whose conjugacy classes are at most countably infinite are classified in . Finite groups whose derived subgroup has order "p" are classified in .
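A minimal computational check of the smallest case, assuming nothing beyond standard Python: the sketch below realizes the exponent-"p" extraspecial group of order "p"3 for "p" = 3 as the group of upper unitriangular 3 × 3 matrices over Z/"p"Z (a standard model of the exponent-"p" group "M"("p") for odd "p") and verifies that the center has order "p", that all commutators are central, and that every element has order dividing "p".

from itertools import product

p = 3  # an odd prime; for p = 2 the same matrices form the dihedral group of order 8 instead

def mul(x, y):
    # product of two 3 x 3 matrices over Z/pZ
    return tuple(
        tuple(sum(x[i][k] * y[k][j] for k in range(3)) % p for j in range(3))
        for i in range(3)
    )

def heis(a, b, c):
    # upper unitriangular matrix with a, b just above the diagonal and c in the corner
    return ((1, a, c), (0, 1, b), (0, 0, 1))

G = [heis(a, b, c) for a, b, c in product(range(p), repeat=3)]
e = heis(0, 0, 0)

def inv(x):
    # brute-force inverse; fine for a group with p**3 elements
    return next(g for g in G if mul(x, g) == e)

def power(x, k):
    y = e
    for _ in range(k):
        y = mul(y, x)
    return y

center = [z for z in G if all(mul(z, g) == mul(g, z) for g in G)]
commutators = {mul(mul(x, y), mul(inv(x), inv(y))) for x in G for y in G}

assert len(G) == p ** 3            # order p^(1+2n) with n = 1
assert len(center) == p            # center of order p, hence cyclic
assert commutators <= set(center)  # derived subgroup inside the center, so G/Z is abelian
assert all(power(g, p) == e for g in G)  # exponent p: the type called M(p) above
print("extraspecial group of order", len(G), "with center of order", len(center))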
[ { "math_id": 0, "text": " M(p) = \\langle a,b,c : a^p = b^p = 1, c^p = 1, ba=abc, ca=ac, cb=bc \\rangle " }, { "math_id": 1, "text": " N(p) = \\langle a,b,c : a^p = b^p = c, c^p = 1, ba=abc, ca=ac, cb=bc \\rangle " } ]
https://en.wikipedia.org/wiki?curid=6990619
6990840
Hypercomplex manifold
Manifold equipped with a quaternionic structure In differential geometry, a hypercomplex manifold is a manifold with the tangent bundle equipped with an action by the algebra of quaternions in such a way that the quaternions formula_0 define integrable almost complex structures. If the almost complex structures are instead not assumed to be integrable, the manifold is called quaternionic, or almost hypercomplex. Examples. Every hyperkähler manifold is also hypercomplex. The converse is not true. The Hopf surface formula_1 (with formula_2 acting as a multiplication by a quaternion formula_3, formula_4) is hypercomplex, but not Kähler, hence not hyperkähler either. To see that the Hopf surface is not Kähler, notice that it is diffeomorphic to a product formula_5 hence its odd cohomology group is odd-dimensional. By Hodge decomposition, odd cohomology of a compact Kähler manifold are always even-dimensional. In fact Hidekiyo Wakakuwa proved that on a compact hyperkähler manifold formula_6. Misha Verbitsky has shown that any compact hypercomplex manifold admitting a Kähler structure is also hyperkähler. In 1988, left-invariant hypercomplex structures on some compact Lie groups were constructed by the physicists Philippe Spindel, Alexander Sevrin, Walter Troost, and Antoine Van Proeyen. In 1992, Dominic Joyce rediscovered this construction, and gave a complete classification of left-invariant hypercomplex structures on compact Lie groups. Here is the complete list. formula_7 formula_8 formula_9 where formula_10 denotes an formula_11-dimensional compact torus. It is remarkable that any compact Lie group becomes hypercomplex after it is multiplied by a sufficiently big torus. Basic properties. Hypercomplex manifolds as such were studied by Charles Boyer in 1988. He also proved that in real dimension 4, the only compact hypercomplex manifolds are the complex torus formula_12, the Hopf surface and the K3 surface. Much earlier (in 1955) Morio Obata studied affine connection associated with "almost hypercomplex structures" (under the former terminology of Charles Ehresmann of "almost quaternionic structures"). His construction leads to what Edmond Bonan called the "Obata connection" which is "torsion free", if and only if, "two" of the almost complex structures formula_0 are integrable and in this case the manifold is hypercomplex. Twistor spaces. There is a 2-dimensional sphere of quaternions formula_13 satisfying formula_14. Each of these quaternions gives a complex structure on a hypercomplex manifold "M". This defines an almost complex structure on the manifold formula_15, which is fibered over formula_16 with fibers identified with formula_17. This complex structure is integrable, as follows from Obata's theorem (this was first explicitly proved by Dmitry Kaledin). This complex manifold is called the twistor space of formula_18. If "M" is formula_19, then its twistor space is isomorphic to formula_20. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
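On the simplest example, the quaternions formula_19 themselves viewed as R4, the three complex structures can be written down explicitly. The following Python sketch (a minimal illustration using NumPy; the choice of basis 1, i, j, k is an assumption of the example, not something stated in the article) builds the matrices of left multiplication by i and j, sets K = IJ, and checks the quaternionic relations I² = J² = K² = −Id and IJ = −JI that the structures formula_0 of a hypercomplex manifold satisfy on each tangent space.

import numpy as np

def qmult(p, q):
    # Hamilton product of quaternions given as (a, b, c, d) = a + b i + c j + d k
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

basis = np.eye(4)

def left_mult_matrix(q):
    # matrix of x -> q*x on R^4 in the basis 1, i, j, k
    return np.column_stack([qmult(q, e) for e in basis])

I = left_mult_matrix([0, 1, 0, 0])
J = left_mult_matrix([0, 0, 1, 0])
K = I @ J   # left multiplications compose: L_i L_j = L_{ij} = L_k

E = np.eye(4)
assert np.array_equal(I @ I, -E) and np.array_equal(J @ J, -E) and np.array_equal(K @ K, -E)
assert np.array_equal(I @ J, -(J @ I))
print("quaternionic relations hold on R^4")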
[ { "math_id": 0, "text": "I, J, K" }, { "math_id": 1, "text": "\\bigg({\\mathbb H}\\backslash 0\\bigg)/{\\mathbb Z}" }, { "math_id": 2, "text": "{\\mathbb Z}" }, { "math_id": 3, "text": "q" }, { "math_id": 4, "text": "|q|>1" }, { "math_id": 5, "text": "S^1\\times S^3," }, { "math_id": 6, "text": "\\ b_{2p+1}\\equiv 0 \\ mod \\ 4" }, { "math_id": 7, "text": "\nT^4, SU(2l+1), T^1 \\times SU(2l), T^l \\times SO(2l+1),\n" }, { "math_id": 8, "text": "T^{2l}\\times SO(4l), T^l \\times Sp(l), T^2 \\times E_6," }, { "math_id": 9, "text": "\nT^7\\times E^7, T^8\\times E^8, T^4\\times F_4, T^2\\times G_2\n" }, { "math_id": 10, "text": "T^i" }, { "math_id": 11, "text": "i" }, { "math_id": 12, "text": "T^4" }, { "math_id": 13, "text": "L\\in{\\mathbb H}" }, { "math_id": 14, "text": "L^2=-1" }, { "math_id": 15, "text": "M\\times S^2" }, { "math_id": 16, "text": "{\\mathbb C}P^1=S^2" }, { "math_id": 17, "text": "(M, L)" }, { "math_id": 18, "text": "M" }, { "math_id": 19, "text": "{\\mathbb H}" }, { "math_id": 20, "text": "{\\mathbb C}P^3\\backslash {\\mathbb C}P^1" } ]
https://en.wikipedia.org/wiki?curid=6990840
699163
Deficit round robin
Scheduling algorithm for the network scheduler Deficit Round Robin (DRR), also Deficit Weighted Round Robin (DWRR), is a scheduling algorithm for the network scheduler. DRR is, like weighted fair queuing (WFQ), a packet-based implementation of the ideal Generalized Processor Sharing (GPS) policy. It was proposed by M. Shreedhar and G. Varghese in 1995 as an efficient (with "O(1)" complexity) and fair algorithm. Details. In DRR, a scheduler handling N flows is configured with one quantum formula_0 for each flow. The general idea is that, at each round, the flow formula_1 can send at most formula_0 bytes, and any remainder is carried over to the next round. In this way, the minimum rate that flow formula_1 will achieve over the long term is formula_2, where formula_3 is the link rate. Algorithm. The DRR scheduler scans all non-empty queues in sequence. When a non-empty queue formula_1 is selected, its deficit counter is incremented by its quantum value. The value of the deficit counter is then the maximal number of bytes that can be sent at this turn: if the deficit counter is not smaller than the size of the packet at the head of the queue (HoQ), this packet can be sent, and the value of the counter is decremented by the packet size. Then the size of the next packet is compared to the counter value, and so on. Once the queue is empty or the value of the counter is insufficient, the scheduler skips to the next queue. If the queue is empty, the value of the deficit counter is reset to 0.
"Variables and Constants"
    const integer N             // Nb of queues
    const integer Q[1..N]       // Per queue quantum
    integer DC[1..N]            // Per queue deficit counter
    queue queue[1..N]           // The queues
"Scheduling Loop"
while true do
    for i in 1..N do
        if not queue[i].empty() then
            DC[i]:= DC[i] + Q[i]
            while( not queue[i].empty() and DC[i] ≥ queue[i].head().size() ) do
                DC[i] := DC[i] − queue[i].head().size()
                send( queue[i].head() )
                queue[i].dequeue()
            end while
            if queue[i].empty() then
                DC[i] := 0
            end if
        end if
    end for
end while
Performances: fairness, complexity, and latency. Like other GPS-like scheduling algorithms, the choice of the weights is left to the network administrator. Like WFQ, DRR offers a minimal rate to each flow regardless of the size of its packets. In weighted round robin scheduling, by contrast, the fraction of bandwidth used depends on the packets' sizes. Compared with the WFQ scheduler, which has a complexity of "O(log(n))" ("n" is the number of active flows/queues), the complexity of DRR is "O(1)", provided that the quantum formula_0 is larger than the maximum packet size of the flow. Nevertheless, this efficiency has a cost: the latency, "i.e.," the distance to the ideal GPS, is larger in DRR than in WFQ. More on the worst-case latencies can be found here. Implementations. An implementation of the deficit round robin algorithm was written by Patrick McHardy for the Linux kernel and published under the GNU General Public License. In Cisco and Juniper routers, modified versions of DRR are implemented: since the latency of DRR can be larger for some classes of traffic, these modified versions give higher priority to some queues, whereas the others are served with the standard DRR algorithm. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
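To make the scheduling loop above concrete, here is a small Python simulation (the quanta and packet sizes are made-up illustrative values): a flow whose packets are larger than its quantum simply accumulates deficit over several rounds before sending, while a flow with small packets sends several packets per turn.

from collections import deque

# made-up example: three flows with different packet sizes (in bytes) and quanta
queues = [deque([120, 700, 120, 700, 120]),   # flow 0: mixed small and large packets
          deque([1500, 1500, 1500]),          # flow 1: large packets only
          deque([300] * 8)]                   # flow 2: small packets only
quantum = [500, 500, 1000]
deficit = [0, 0, 0]

sent = []
while any(queues):
    for i, q in enumerate(queues):
        if not q:
            continue
        deficit[i] += quantum[i]            # add the quantum at the start of the turn
        while q and deficit[i] >= q[0]:     # send while the counter covers the head packet
            deficit[i] -= q[0]
            sent.append((i, q.popleft()))
        if not q:
            deficit[i] = 0                  # reset the counter of an emptied queue

print(sent)   # (flow, packet size) pairs in the order of transmission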
[ { "math_id": 0, "text": "Q_i" }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "\\frac{Q_i}{(Q_1+Q_2+...+Q_N)}R" }, { "math_id": 3, "text": "R" } ]
https://en.wikipedia.org/wiki?curid=699163
69919109
Dmitrii Abramovich Raikov
Russian mathematician (1905–1980) Dmitrii Abramovich Raikov (, born 11 November 1905 in Odessa; died 1980 in Moscow) was a Russian mathematician who studied functional analysis. Raikov studied in Odessa and Moscow, graduating in 1929. He was secretary of the Komsomol at Moscow State University and was active in the 1929–1930 campaign against the mathematician Dmitri Fyodorovich Egorov. At that time he and his fellow campaigners also rejected non-applied research, but this soon changed. In 1933, he was dismissed from the Communist Party on charges of Trotskyism and exiled to Voronezh, but was rehabilitated two years later and returned to Moscow. From 1938 to 1948, he was at the Mathematical Institute of the Academy of Sciences and in the Second World War in the militia. He was habilitated (Russian doctorate) in 1941 with Aleksandr Yakovlevich Khinchin at the Lomonosov University and in 1950 became professor. He taught at the Pedagogical Institute in Kostroma and from 1952 in Shuysky, before he taught from 1957 at the State Pedagogical University in Moscow. He also supervised students and taught at Lomonosov University. Israel Gelfand and Raikov's 1943 theorem states that a locally compact group formula_0 is completely determined by its (possibly infinite-dimensional) irreducible unitary representations: for every two elements formula_1 of formula_0 there is an irreducible unitary representation formula_2 with formula_3. He also worked on probability theory, for example in 1938 he proved an equivalent of the Cramér's theorem for the Poisson distribution. He edited the Russian editions of Nicolas Bourbaki's "Topology and Integration Theory" and translated numerous other mathematical works from Italian, English and German, for example the lectures on the theory of algebraic numbers by Erich Hecke, the book "Moderne Algebra" by Bartel Leendert van der Waerden, the "Problems and Theorems in Analysis" by George Pólya and Gábor Szegő, the introduction to the theory of Fourier integrals by Edward Charles Titchmarsh, the lectures on partial differential equations by Francesco Tricomi, the introduction to differential and integral calculus by Edmund Landau, the monograph on divergent series by Godfrey Harold Hardy and the finite dimensional vector spaces by Paul Halmos. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "g,h" }, { "math_id": 2, "text": "\\rho" }, { "math_id": 3, "text": "\\rho (g) \\neq \\rho (h)" } ]
https://en.wikipedia.org/wiki?curid=69919109
6992164
Ordered Bell number
Number of weak orderings In number theory and enumerative combinatorics, the ordered Bell numbers or Fubini numbers count the weak orderings on a set of formula_0 elements. Weak orderings arrange their elements into a sequence allowing ties, such as might arise as the outcome of a horse race. The ordered Bell numbers were studied in the 19th century by Arthur Cayley and William Allen Whitworth. They are named after Eric Temple Bell, who wrote about the Bell numbers, which count the partitions of a set; the ordered Bell numbers count partitions that have been equipped with a total order. Their alternative name, the Fubini numbers, comes from a connection to Guido Fubini and Fubini's theorem on equivalent forms of multiple integrals. Because weak orderings have many names, ordered Bell numbers may also be called by those names, for instance as the numbers of preferential arrangements or the numbers of asymmetric generalized weak orders. These numbers may be computed via a summation formula involving binomial coefficients, or by using a recurrence relation. They also count combinatorial objects that have a bijective correspondence to the weak orderings, such as the ordered multiplicative partitions of a squarefree number or the faces of all dimensions of a permutohedron. Definitions and examples. Weak orderings arrange their elements into a sequence allowing ties. This possibility describes various real-world scenarios, including certain sporting contests such as horse races. A weak ordering can be formalized axiomatically by a partially ordered set for which incomparability is an equivalence relation. The equivalence classes of this relation partition the elements of the ordering into subsets of mutually tied elements, and these equivalence classes can then be linearly ordered by the weak ordering. Thus, a weak ordering can be described as an ordered partition, a partition of its elements and a total order on the sets of the partition. For instance, the ordered partition {"a","b"},{"c"},{"d","e","f"} describes an ordered partition on six elements in which "a" and "b" are tied and both less than the other four elements, and "c" is less than "d", "e", and "f", which are all tied with each other. The formula_0th ordered Bell number, denoted here formula_1, gives the number of distinct weak orderings on formula_0 elements. For instance, there are three weak orderings on the two elements "a" and "b": they can be ordered with "a" before "b", with "b" before "a", or with both tied. The figure shows the 13 weak orderings on three elements. Starting from formula_2, the ordered Bell numbers formula_1 are &lt;templatestyles src="Block indent/styles.css"/&gt; When the elements to be ordered are unlabeled (only the number of elements in each tied set matters, not their identities) what remains is a composition or ordered integer partition, a representation of formula_0 as an ordered sum of positive integers. For instance, the ordered partition {"a","b"},{"c"},{"d","e","f"} discussed above corresponds in this way to the composition 2 + 1 + 3. The number of compositions of formula_0 is exactly formula_3. This is because a composition is determined by its set of partial sums, which may be any subset of the integers from 1 to formula_4. History. The ordered Bell numbers appear in the work of , who used them to count certain plane trees with formula_5 totally ordered leaves. 
In the trees considered by Cayley, each root-to-leaf path has the same length, and the number of nodes at distance formula_6 from the root must be strictly smaller than the number of nodes at distance formula_7, until reaching the leaves. In such a tree, there are formula_0 pairs of adjacent leaves, that may be weakly ordered by the height of their lowest common ancestor; this weak ordering determines the tree. call the trees of this type "Cayley trees", and they call the sequences that may be used to label their gaps (sequences of formula_0 positive integers that include at least one copy of each positive integer between one and the maximum value in the sequence) "Cayley permutations". traces the problem of counting weak orderings, which has the same sequence as its solution, to the work of . These numbers were called Fubini numbers by Louis Comtet, because they count the different ways to rearrange the ordering of sums or integrals in Fubini's theorem, which in turn is named after Guido Fubini. The Bell numbers, named after Eric Temple Bell, count the partitions of a set, and the weak orderings that are counted by the ordered Bell numbers may be interpreted as a partition together with a total order on the sets in the partition. The equivalence between counting Cayley trees and counting weak orderings was observed in 1970 by Donald Knuth, using an early form of the On-Line Encyclopedia of Integer Sequences (OEIS). This became one of the first successful uses of the OEIS to discover equivalences between different counting problems. Formulas. Summation. Because weak orderings can be described as total orderings on the subsets of a partition, one can count weak orderings by counting total orderings and partitions, and combining the results appropriately. The Stirling numbers of the second kind, denoted formula_8, count the partitions of an formula_0-element set into formula_9 nonempty subsets. A weak ordering may be obtained from such a partition by choosing one of formula_10 total orderings of its subsets. Therefore, the ordered Bell numbers can be counted by summing over the possible numbers of subsets in a partition (the parameter formula_9) and, for each value of formula_9, multiplying the number of partitions formula_8 by the number of total orderings formula_10. That is, as a summation formula: formula_11 By general results on summations involving Stirling numbers, it follows that the ordered Bell numbers are log-convex, meaning that they obey the inequality formula_12 for all formula_13. An alternative interpretation of the terms of this sum is that they count the features of each dimension in a permutohedron of dimension formula_0, with the formula_9th term counting the features of dimension formula_14. A permutohedron is a convex polytope, the convex hull of points whose coordinate vectors are the permutations of the numbers from 1 to formula_0. These vectors are defined in a space of dimension formula_5, but they and their convex hull all lie in an formula_0-dimensional affine subspace. For instance, the three-dimensional permutohedron is the truncated octahedron, the convex hull of points whose coordinates are permutations of (1,2,3,4), in the three-dimensional subspace of points whose coordinate sum is 10. This polyhedron has one volume (formula_15), 14 two-dimensional faces (formula_16), 36 edges (formula_17), and 24 vertices (formula_18). 
The total number of these faces is 1 + 14 + 36 + 24 = 75, an ordered Bell number, corresponding to the summation formula above for formula_19. By expanding each Stirling number in this formula into a sum of binomial coefficients, the formula for the ordered Bell numbers may be expanded out into a double summation. The ordered Bell numbers may also be given by an infinite series: formula_20 Another summation formula expresses the ordered Bell numbers in terms of the Eulerian numbers formula_21, which count the permutations of formula_0 items in which formula_9 pairs of consecutive items are in increasing order: formula_22 where formula_23 is the formula_0th Eulerian polynomial. One way to explain this summation formula involves a mapping from weak orderings on the numbers from 1 to formula_0 to permutations, obtained by sorting each tied set into numerical order. Under this mapping, each permutation with formula_9 consecutive increasing pairs comes from formula_24 weak orderings, distinguished from each other by the subset of the consecutive increasing pairs that are tied in the weak ordering. Generating function and approximation. As with many other integer sequences, reinterpreting the sequence as the coefficients of a power series and working with the function that results from summing this series can provide useful information about the sequence. The fast growth of the ordered Bell numbers causes their ordinary generating function to diverge; instead the exponential generating function is used. For the ordered Bell numbers, it is: formula_25 Here, the left hand side is just the definition of the exponential generating function and the right hand side is the function obtained from this summation. The form of this function corresponds to the fact that the ordered Bell numbers are the numbers in the first column of the infinite matrix formula_26. Here formula_27 is the identity matrix and formula_28 is an infinite matrix form of Pascal's triangle. Each row of formula_28 starts with the numbers in the same row of Pascal's triangle, and then continues with an infinite repeating sequence of zeros. Based on a contour integration of this generating function, the ordered Bell numbers can be expressed by the infinite sum formula_29 Here, formula_30 stands for the natural logarithm. This leads to an approximation for the ordered Bell numbers, obtained by using only the term for formula_15 in this sum and discarding the remaining terms: formula_31 where formula_32. Thus, the ordered Bell numbers are larger than the factorials by an exponential factor. Here, as in Stirling's approximation to the factorial, the formula_33 indicates asymptotic equivalence. That is, the ratio between the ordered Bell numbers and their approximation tends to one in the limit as formula_0 grows arbitrarily large. As expressed in little o notation, the relative error is formula_34, and the formula_35 error term decays exponentially as formula_0 grows. Comparing the approximations for formula_36 and formula_1 shows that formula_37 For example, taking formula_38 gives the approximation formula_39 to formula_40. This sequence of approximations, and this example from it, were calculated by Ramanujan, using a general method for solving equations numerically (here, the equation formula_41). Recurrence and modular periodicity. 
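Both the Stirling-number summation and the factorial-times-exponential approximation above are easy to check numerically. The following Python sketch (an illustrative computation, not taken from the cited sources) evaluates the exact sum and compares it with the approximation with c = 1/log 2; the printed ratio tends to 1 as the approximation predicts.

from math import factorial, log

def stirling2(n, k):
    # Stirling numbers of the second kind, via S(i, j) = j*S(i-1, j) + S(i-1, j-1)
    S = [[0] * (k + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, min(i, k) + 1):
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]
    return S[n][k]

def ordered_bell(n):
    # a(n) = sum over k of k! * S(n, k)
    return sum(factorial(k) * stirling2(n, k) for k in range(n + 1))

print([ordered_bell(n) for n in range(8)])   # 1, 1, 3, 13, 75, 541, 4683, 47293

c = 1 / log(2)
for n in (5, 10, 20):
    approx = factorial(n) / 2 * c ** (n + 1)
    print(n, ordered_bell(n), ordered_bell(n) / approx)   # the ratio approaches 1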
As well as the formulae above, the ordered Bell numbers may be calculated by the recurrence relation formula_42 The intuitive meaning of this formula is that a weak ordering on formula_0 items may be broken down into a choice of some nonempty set of formula_6 items that go into the first equivalence class of the ordering, together with a smaller weak ordering on the remaining formula_43 items. There are formula_44 choices of the first set, and formula_45 choices of the weak ordering on the rest of the elements. Multiplying these two factors, and then summing over the choices of how many elements to include in the first set, gives the number of weak orderings, formula_1. As a base case for the recurrence, formula_46 (there is one weak ordering on zero items). Based on this recurrence, these numbers can be shown to obey certain periodic patterns in modular arithmetic: for sufficiently large formula_0, &lt;templatestyles src="Block indent/styles.css"/&gt;formula_47 &lt;templatestyles src="Block indent/styles.css"/&gt;formula_48 &lt;templatestyles src="Block indent/styles.css"/&gt;formula_49 and &lt;templatestyles src="Block indent/styles.css"/&gt;formula_50 Many more modular identities are known, including identities modulo any prime power. Peter Bala has conjectured that this sequence is eventually periodic (after a finite number of terms) modulo each positive integer formula_9, with a period that divides Euler's totient function of formula_9, the number of residues mod formula_9 that are relatively prime to formula_9. Applications. Combinatorial enumeration. As has already been mentioned, the ordered Bell numbers count weak orderings, permutohedron faces, Cayley trees, Cayley permutations, and equivalent formulae in Fubini's theorem. Weak orderings in turn have many other applications. For instance, in horse racing, photo finishes have eliminated most but not all ties, called in this context dead heats, and the outcome of a race that may contain ties (including all the horses, not just the first three finishers) may be described using a weak ordering. For this reason, the ordered Bell numbers count the possible number of outcomes of a horse race. In contrast, when items are ordered or ranked in a way that does not allow ties (such as occurs with the ordering of cards in a deck of cards, or batting orders among baseball players), the number of orderings for formula_0 items is a factorial number formula_51, which is significantly smaller than the corresponding ordered Bell number. Problems in many areas can be formulated using weak orderings, with solutions counted using ordered Bell numbers. consider combination locks with a numeric keypad, in which several keys may be pressed simultaneously and a combination consists of a sequence of keypresses that includes each key exactly once. As they show, the number of different combinations in such a system is given by the ordered Bell numbers. In "seru", a Japanese technique for balancing assembly lines, cross-trained workers are allocated to groups of workers at different stages of a production line. The number of alternative assignments for a given number of workers, taking into account the choices of how many stages to use and how to assign workers to each stage, is an ordered Bell number. As another example, in the computer simulation of origami, the ordered Bell numbers give the number of orderings in which the creases of a crease pattern can be folded, allowing sets of creases to be folded simultaneously. 
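The recurrence and the modular periodicity described in the section above can likewise be verified directly. A short Python sketch (illustrative only; the range checked is an arbitrary choice):

from math import comb

def ordered_bell_list(n_max):
    # a(0) = 1 and a(n) = sum over i = 1..n of C(n, i) * a(n - i)
    a = [1]
    for n in range(1, n_max + 1):
        a.append(sum(comb(n, i) * a[n - i] for i in range(1, n + 1)))
    return a

a = ordered_bell_list(40)
print(a[:6])   # 1, 1, 3, 13, 75, 541

# the last decimal digit repeats with period 4 once n >= 1
print(all(a[n + 4] % 10 == a[n] % 10 for n in range(1, 36)))   # True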
In number theory, an ordered multiplicative partition of a positive integer is a representation of the number as a product of one or more of its divisors. For instance, 30 has 13 multiplicative partitions, as a product of one divisor (30 itself), two divisors (for instance 6 · 5), or three divisors (3 · 5 · 2, etc.). An integer is squarefree when it is a product of distinct prime numbers; 30 is squarefree, but 20 is not, because its prime factorization 2 · 2 · 5 repeats the prime 2. For squarefree numbers with formula_0 prime factors, an ordered multiplicative partition can be described by a weak ordering on its prime factors, describing which prime appears in which term of the partition. Thus, the number of ordered multiplicative partitions is given by formula_1. On the other hand, for a prime power with exponent formula_0, an ordered multiplicative partition is a product of powers of the same prime number, with exponents summing to formula_0, and this ordered sum of exponents is a composition of formula_0. Thus, in this case, there are formula_3 ordered multiplicative partitions. Numbers that are neither squarefree nor prime powers have a number of ordered multiplicative partitions that (as a function of the number of prime factors) is between these two extreme cases. A parking function, in mathematics, is a finite sequence of positive integers with the property that, for every formula_6 up to the sequence length, the sequence contains at least formula_6 values that are at most formula_6. A sequence of this type, of length formula_0, describes the following process: a sequence of formula_0 cars arrives on a street with formula_0 parking spots. Each car has a preferred parking spot, given by its value in the sequence. When a car arrives on the street, it parks in its preferred spot, or, if that is full, in the next available spot. A sequence of preferences forms a parking function if and only if each car can find a parking spot on or after its preferred spot. The number of parking functions of length formula_0 is exactly formula_52. For a restricted class of parking functions, in which each car parks either on its preferred spot or on the next spot, the number of parking functions is given by the ordered Bell numbers. Each restricted parking function corresponds to a weak ordering in which the cars that get their preferred spot are ordered by these spots, and each remaining car is tied with the car in its preferred spot. The permutations, counted by the factorials, are parking functions for which each car parks on its preferred spot. This application also provides a combinatorial proof for upper and lower bounds on the ordered Bell numbers of a simple form, formula_53 The ordered Bell number formula_1 counts the number of faces in the Coxeter complex associated with a Coxeter group of type formula_54. Here, a Coxeter group can be thought of as a finite system of reflection symmetries, closed under repeated reflections, whose mirrors partition a Euclidean space into the cells of the Coxeter complex. For instance, formula_55 corresponds to formula_56, the system of reflections of the Euclidean plane across three lines that meet at the origin at formula_57 angles. The complex formed by these three lines has 13 faces: the origin, six rays from the origin, and six regions between pairs of rays. uses the ordered Bell numbers to analyze n-ary relations, mathematical statements that might be true of some choices of the formula_0 arguments to the relation and false for others. 
He defines the "complexity" of a relation to mean the number of other relations one can derive from the given one by permuting and repeating its arguments. For instance, for formula_58, a relation on two arguments formula_59 and formula_60 might take the form formula_61. By Kemeny's analysis, it has formula_62 derived relations. These are the given relation formula_61, the converse relation formula_63 obtained by swapping the arguments, and the unary relation formula_64 obtained by repeating an argument. (Repeating the other argument produces the same relation.) apply these numbers to optimality theory in linguistics. In this theory, grammars for natural languages are constructed by ranking certain constraints, and (in a phenomenon called factorial typology) the number of different grammars that can be formed in this way is limited to the number of permutations of the constraints. A paper reviewed by Ellison and Klein suggested an extension of this linguistic model in which ties between constraints are allowed, so that the ranking of constraints becomes a weak order rather than a total order. As they point out, the much larger magnitude of the ordered Bell numbers, relative to the corresponding factorials, allows this theory to generate a much richer set of grammars. Other. If a fair coin (with equal probability of heads or tails) is flipped repeatedly until the first time the result is heads, the number of tails follows a geometric distribution. The moments of this distribution are the ordered Bell numbers. Although the ordinary generating function of the ordered Bell numbers fails to converge, it describes a power series that (evaluated at formula_65 and then multiplied by formula_66) provides an asymptotic expansion for the resistance distance of opposite vertices of an formula_0-dimensional hypercube graph. Truncating this series to a bounded number of terms and then applying the result for unbounded values of formula_0 approximates the resistance to arbitrarily high order. In the algebra of noncommutative rings, an analogous construction to the (commutative) quasisymmetric functions produces a graded algebra WQSym whose dimensions in each grade are given by the ordered Bell numbers. In spam filtering, the problem of assigning weights to sequences of words with the property that the weight of any sequence exceeds the sum of weights of all its subsequences can be solved by using weight formula_67 for a sequence of formula_0 words, where formula_67 is obtained from the recurrence equation formula_68 with base case formula_69. This recurrence differs from the one given earlier for the ordered Bell numbers, in two respects: omitting the formula_70 term from the sum (because only nonempty sequences are considered), and adding one separately from the sum (to make the result exceed, rather than equalling, the sum). These differences have offsetting effects, and the resulting weights are the ordered Bell numbers. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "a(n)" }, { "math_id": 2, "text": "n=0" }, { "math_id": 3, "text": "2^{n-1}" }, { "math_id": 4, "text": "n-1" }, { "math_id": 5, "text": "n+1" }, { "math_id": 6, "text": "i" }, { "math_id": 7, "text": "i+1" }, { "math_id": 8, "text": "\\{{\\scriptstyle{n\\atop k}}\\}" }, { "math_id": 9, "text": "k" }, { "math_id": 10, "text": "k!" }, { "math_id": 11, "text": "\na(n) = \\sum_{k=0}^n k! \\left\\{\\begin{matrix} n \\\\ k \\end{matrix}\\right\\}\n" }, { "math_id": 12, "text": "a(n-1)a(n+1)\\ge a(n)^2" }, { "math_id": 13, "text": "n>0" }, { "math_id": 14, "text": "n-k" }, { "math_id": 15, "text": "k=0" }, { "math_id": 16, "text": "k=1" }, { "math_id": 17, "text": "k=2" }, { "math_id": 18, "text": "k=3" }, { "math_id": 19, "text": "n=3" }, { "math_id": 20, "text": "\na(n) = \\sum_{k=0}^n \\sum_{j=0}^k (-1)^{k-j} \\binom{k}{j}j^n\n = \\frac12\\sum_{m=0}^\\infty\\frac{m^n}{2^{m}}.\n" }, { "math_id": 21, "text": "\\langle{\\scriptstyle{n\\atop k}}\\rangle" }, { "math_id": 22, "text": "a(n)=\\sum_{k=0}^{n-1} 2^k \\left\\langle\\begin{matrix} n \\\\ k \\end{matrix}\\right\\rangle=A_n(2)," }, { "math_id": 23, "text": "A_n" }, { "math_id": 24, "text": "2^k" }, { "math_id": 25, "text": "\\sum_{n=0}^\\infty a(n)\\frac{x^n}{n!} = \\frac{1}{2-e^x}." }, { "math_id": 26, "text": "(2I-P)^{-1}" }, { "math_id": 27, "text": "I" }, { "math_id": 28, "text": "P" }, { "math_id": 29, "text": "a(n)=\\frac{n!}{2}\\sum_{k=-\\infty}^{\\infty}(\\log 2+2\\pi ik)^{-(n+1)},\\qquad n\\ge1." }, { "math_id": 30, "text": "\\log" }, { "math_id": 31, "text": "a(n)\\sim \\frac12 {n!}\\,c^{n+1}," }, { "math_id": 32, "text": "c=\\tfrac{1}{\\log 2}\\approx 1.4427" }, { "math_id": 33, "text": "\\sim" }, { "math_id": 34, "text": "1\\pm o(1)" }, { "math_id": 35, "text": "o(1)" }, { "math_id": 36, "text": "a(n-1)" }, { "math_id": 37, "text": "\n\\lim_{n\\to\\infty} \\frac{n\\,a(n-1)}{a(n)}=\\log 2.\n" }, { "math_id": 38, "text": "n=5" }, { "math_id": 39, "text": "375/541\\approx 0.693161" }, { "math_id": 40, "text": "\\log 2\\approx 0.693145" }, { "math_id": 41, "text": "e^x=2" }, { "math_id": 42, "text": "a(n) = \\sum_{i=1}^{n}\\binom{n}{i}a(n-i)." }, { "math_id": 43, "text": "n-i" }, { "math_id": 44, "text": "\\tbinom{n}{i}" }, { "math_id": 45, "text": "a(n-i)" }, { "math_id": 46, "text": "a(0)=1" }, { "math_id": 47, "text": "a(n+4) \\equiv a(n) \\pmod{10}," }, { "math_id": 48, "text": "a(n+20) \\equiv a(n) \\pmod{100}," }, { "math_id": 49, "text": "a(n+100) \\equiv a(n) \\pmod{1000}," }, { "math_id": 50, "text": "a(n+500) \\equiv a(n) \\pmod{10000}." }, { "math_id": 51, "text": "n!" }, { "math_id": 52, "text": "(n+1)^{n-1}" }, { "math_id": 53, "text": "n!\\le a(n) \\le (n+1)^{n-1}." }, { "math_id": 54, "text": "A_{n-1}" }, { "math_id": 55, "text": "a(3)=13" }, { "math_id": 56, "text": "A_2" }, { "math_id": 57, "text": "60^\\circ" }, { "math_id": 58, "text": "n=2" }, { "math_id": 59, "text": "x" }, { "math_id": 60, "text": "y" }, { "math_id": 61, "text": "y=f(x)" }, { "math_id": 62, "text": "a(n)=3" }, { "math_id": 63, "text": "x=f(y)" }, { "math_id": 64, "text": "x=f(x)" }, { "math_id": 65, "text": "\\tfrac1n" }, { "math_id": 66, "text": "\\tfrac2n" }, { "math_id": 67, "text": "W(n)" }, { "math_id": 68, "text": "W(n)=1+\\sum_{k=1}^{n-1}\\binom{n}{k}W(n-1)," }, { "math_id": 69, "text": "W(1)=0" }, { "math_id": 70, "text": "W(0)" } ]
https://en.wikipedia.org/wiki?curid=6992164
6992479
Arf invariant of a knot
Knot invariant named after Cahit Arf In the mathematical field of knot theory, the Arf invariant of a knot, named after Cahit Arf, is a knot invariant obtained from a quadratic form associated to a Seifert surface. If "F" is a Seifert surface of a knot, then the homology group H1("F", Z/2Z) has a quadratic form whose value is the number of full twists mod 2 in a neighborhood of an embedded circle representing an element of the homology group. The Arf invariant of this quadratic form is the Arf invariant of the knot. Definition by Seifert matrix. Let formula_0 be a Seifert matrix of the knot, constructed from a set of curves on a Seifert surface of genus "g" which represent a basis for the first homology of the surface. This means that "V" is a 2"g" × 2"g" matrix with the property that "V" − "V"T is a symplectic matrix. The "Arf invariant" of the knot is the residue of formula_1 Specifically, if formula_2 is a symplectic basis for the intersection form on the Seifert surface, then formula_3 where lk denotes the linking number and formula_4 denotes the positive pushoff of "a". Definition by pass equivalence. This approach to the Arf invariant is due to Louis Kauffman. We define two knots to be pass equivalent if they are related by a finite sequence of pass-moves. Every knot is pass-equivalent to either the unknot or the trefoil; these two knots are not pass-equivalent, and additionally the right- and left-handed trefoils are pass-equivalent. Now we can define the Arf invariant of a knot to be 0 if it is pass-equivalent to the unknot, or 1 if it is pass-equivalent to the trefoil. This definition is equivalent to the one above. Definition by partition function. Vaughan Jones showed that the Arf invariant can be obtained by taking the partition function of a signed planar graph associated to a knot diagram. Definition by Alexander polynomial. This approach to the Arf invariant is due to Raymond Robertello. Let formula_5 be the Alexander polynomial of the knot. Then the Arf invariant is the residue of formula_6 modulo 2, where "r" = 0 for "n" odd, and "r" = 1 for "n" even. Kunio Murasugi proved that the Arf invariant is zero if and only if Δ(−1) ≡ ±1 modulo 8. Arf as knot concordance invariant. From the Fox-Milnor criterion, which tells us that the Alexander polynomial of a slice knot formula_7 factors as formula_8 for some polynomial formula_9 with integer coefficients, we know that the determinant formula_10 of a slice knot is a square integer. As formula_10 is an odd integer, it has to be congruent to 1 modulo 8. Combined with Murasugi's result, this shows that the Arf invariant of a slice knot vanishes.
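The Seifert-matrix formula above amounts to multiplying the two diagonal entries of each symplectic pair and summing mod 2. The following Python sketch illustrates this; the two 2 × 2 matrices are commonly used genus-one Seifert matrices for the trefoil and the figure-eight knot (quoted here as assumptions of the example, since the article does not list them), and both knots indeed have Arf invariant 1, while the unknot, with an empty matrix, gets 0.

def arf_from_seifert(V):
    # V: a 2g x 2g Seifert matrix written with respect to a symplectic basis;
    # the Arf invariant is sum over i of V[2i][2i] * V[2i+1][2i+1], reduced mod 2
    g = len(V) // 2
    return sum(V[2 * i][2 * i] * V[2 * i + 1][2 * i + 1] for i in range(g)) % 2

trefoil = [[-1, 1],
           [ 0, -1]]        # a standard Seifert matrix for the trefoil (assumed)
figure_eight = [[1, 1],
                [0, -1]]    # a standard Seifert matrix for the figure-eight knot (assumed)

print(arf_from_seifert(trefoil))        # 1
print(arf_from_seifert(figure_eight))   # 1
print(arf_from_seifert([]))             # 0, consistent with the unknot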
[ { "math_id": 0, "text": "V = v_{i,j}" }, { "math_id": 1, "text": "\\sum\\limits^g_{i=1} v_{2i-1,2i-1} v_{2i,2i} \\pmod 2." }, { "math_id": 2, "text": "\\{a_i, b_i\\}, i = 1 \\ldots g" }, { "math_id": 3, "text": "\\operatorname{Arf}(K) = \\sum\\limits^g_{i=1}\\operatorname{lk}\\left(a_i, a_i^+\\right)\\operatorname{lk}\\left(b_i, b_i^+\\right) \\pmod 2." }, { "math_id": 4, "text": "a^+" }, { "math_id": 5, "text": "\\Delta(t) = c_0 + c_1 t + \\cdots + c_n t^n + \\cdots + c_0 t^{2n}" }, { "math_id": 6, "text": " c_{n-1} + c_{n-3} + \\cdots + c_r" }, { "math_id": 7, "text": " K \\subset \\mathbb{S}^3 " }, { "math_id": 8, "text": " \\Delta(t) = p(t) p\\left(t^{-1}\\right) " }, { "math_id": 9, "text": " p(t) " }, { "math_id": 10, "text": " \\left| \\Delta(-1) \\right| " } ]
https://en.wikipedia.org/wiki?curid=6992479
69928425
2 Samuel 6
Second Book of Samuel chapter 2 Samuel 6 is the sixth chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan. This chapter contains the account of David's reign in Jerusalem. This is within a section comprising 2 Samuel 2–8 which deals with the period when David set up his kingdom. Text. This chapter was originally written in the Hebrew language. It is divided into 23 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 2–18. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The chapter has the following structure: A. David gathered the people to bring up the ark: Celebrations began (6:1–5) B. Interruption: Uzzah's death; celebrations suspended (6:6–11) C. The ark entered the City of David: joy and offerings (6:12–15) B'. Interruption: Michal despised David (6:16) A'. The reception of the ark: David blessed the people and the people went home (6:17–19) Epilogue. Confrontation between Michal and David (6:20–23) The center of the narrative was the entrance of the Ark of the Covenant into the City of David with proper religious solemnity. The conclusion (A') was when David blessed the people, invoking "the name of the Lord of hosts" (verse 18) which was introduced at the start of celebrations (A; verse 2). The A C A' structure is replete with 'festive language' that is not found in the 'interruptions' or the 'epilogue'. Taking the Ark to Jerusalem (6:1–11). Verses 1–19 of this chapter are a continuation of the ark narrative in 1 Samuel 4:1–7:1, although it may not be a continuous piece as there are significant differences in the names of places and persons, as well as the characters of the narratives. Chronologically David could be in a position to bring the ark to Jerusalem only after a decisive victory over the Philistines (such as the one recorded in 2 Samuel 5:17–25). Since its return from the Philistines (1 Samuel 7:1), the Ark of the Covenant had presumably remained in Kirjath-Jearim, known in this passage as "Baale-judah" (4QSama has 'Baalah'). Similarity to 1 Samuel 4:4 can be observed in referring to the ark as 'the ark of God [YHWH]', and to YHWH as 'enthroned on the cherubim', whereas 'new cart' echoes 1 Samuel 6:7. 'The house of Abinadab' is also known from 1 Samuel 7:1, but his sons 'Uzzah and Ahio' appear here instead of 'Eleazar', who was in charge of the ark in the previous narrative. 
The transport of the ark was an occasion of joy and celebration, as David and his people dancing vigorously ('with all his strength' in verse 14 and 1 Chronicles 13:8 in Hebrew 'with instruments of might') accompanied with 'songs' (following 4QSama, Septuagint and 1 Chronicles 13:8, instead of 'fir-trees' in Masoretic Text), but it was interrupted by Uzzah's sudden death when he touched the ark, due to the same power that brought plagues upon the Philistines (1 Samuel 5) and devastation to the town of Beth-shemesh (1 Samuel 6:19). David was unwilling to take more risks, so the ark was left for three months at the place of Obed-Edom the Gittite, one of David's loyal servants since his time in Ziklag, who was a non-Israelite (and possibly a worshipper of another god), but willingly housed the ark. "And David arose and went with all the people who were with him from Baale Judah to bring up from there the ark of God, whose name is called by the Name, the LORD of Hosts, who dwells between the cherubim." "So they set the ark of God on a new cart, and brought it out of the house of Abinadab, which was on the hill; and Uzzah and Ahio, the sons of Abinadab, drove the new cart." The Ark of the Covenant entered Jerusalem (6:12–23). Despite some bitter experience with the ark, David was adamant to bring it to Jerusalem, this time with a blessing (verse 12), and again with much celebration and sacrifice. As the ark finally entered Jerusalem, the celebration reached its peak, with David, only wearing 'a linen ephod' (a priestly garment, which only covered the body and loins), leading vigorous circular dances with the assembly of people accompanied by blasts on the trumpet, the sopar or ram's horn for this joyous event. The ark was housed in a tent specially made for it by David (verse 17), not the same as the original wilderness 'tabernacle', but was probably constructed with some features that were later adopted when constructing the Temple for the ark. The whole festive ceremony was concluded with sacrifices, blessings, and gifts; it may well become annually repeated celebrations. Psalm 132 could be based on the story of the transfer of the ark to Jerusalem in this chapter (not having any referrals only found in parallel chapters). Michal, Saul's daughter and David's first wife, was not pleased with the scantily clothed David dancing ("exposed himself") to 'his servants' maids', among the people (her story was inserted in verse 16 and continued in verses 19–23). She rebuked David with an irony that 'the king honoring himself', but David vowed to make himself even 'more contemptible than this' in showing his piety to YHWH. The statement in verse 23 of Michal's childlessness is significant in relation to David's relations with the house of Saul and with David's own descendants. "And so it was, when those bearing the ark of the Lord had gone six paces, that he sacrificed oxen and fatted sheep." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=69928425
69936502
Hyperphoton
Hypothetical particle A hyperphoton is a hypothetical particle with a very low mass and spin equal to one. The hypothesis of the existence of hyperphotons is a proposed explanation for the violation of CP-invariance in the two-pion decay of a long-lived neutral kaon, formula_0. According to this hypothesis, there is a long-range, very weak field generated by hypercharged particles (for example, baryons), whose quantum carrier is a hyperphoton; this field acts differently on formula_1 and formula_2 mesons, whose hypercharges differ in sign. Criticism of the hypothesis. This hypothesis contradicts a number of experimental results and theoretical principles of physics. For example, it implies that the probability of the two-pion decay of the long-lived neutral kaon is proportional to the square of the kaon energy in the laboratory reference frame, which does not agree with the experimental observation that this probability is independent of the kaon energy. The experimental data also contradict another consequence of the hypothesis, namely a very high probability of hyperphoton emission in the decay of a long-lived neutral kaon and in the decay of a charged kaon into a single charged pion. The hypothesis also implicitly relies on action at a distance, which is rejected by modern physics. In addition, it implies a violation of the equivalence principle. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "K_{L}^{0} \\rightarrow \\pi^{+} + \\pi^{-}" }, { "math_id": 1, "text": "K^{0}" }, { "math_id": 2, "text": "\\hat{K^{0}}" } ]
https://en.wikipedia.org/wiki?curid=69936502
69939
Elliptic function
Class of periodic mathematical functions In the mathematical field of complex analysis, elliptic functions are special kinds of meromorphic functions that satisfy two periodicity conditions. They are named elliptic functions because they come from elliptic integrals. Those integrals are in turn named elliptic because they were first encountered in the calculation of the arc length of an ellipse. Important elliptic functions are Jacobi elliptic functions and the Weierstrass formula_0-function. Further development of this theory led to hyperelliptic functions and modular forms. Definition. A meromorphic function is called an elliptic function if there are two formula_1-linearly independent complex numbers formula_2 such that formula_3 and formula_4. So elliptic functions have two periods and are therefore doubly periodic functions. Period lattice and fundamental domain. If formula_5 is an elliptic function with periods formula_6, it also holds that formula_7 for every linear combination formula_8 with formula_9. The abelian group formula_10 is called the "period lattice". The parallelogram generated by formula_11 and formula_12, formula_13, is a "fundamental domain" of formula_14 acting on formula_15. Geometrically the complex plane is tiled with parallelograms. Everything that happens in one fundamental domain repeats in all the others. For that reason we can view elliptic functions as functions with the quotient group formula_16 as their domain. This quotient group, called an elliptic curve, can be visualised as a parallelogram where opposite sides are identified, which topologically is a torus. Liouville's theorems. The following three theorems are known as "Liouville's theorems (1847)." 1st theorem. A holomorphic elliptic function is constant. This is the original form of Liouville's theorem and can be derived from it. A holomorphic elliptic function is bounded since it takes on all of its values on the fundamental domain, which is compact. So it is constant by Liouville's theorem. 2nd theorem. Every elliptic function has finitely many poles in formula_16 and the sum of its residues is zero. This theorem implies that there is no elliptic function not equal to zero with exactly one pole of order one or exactly one zero of order one in the fundamental domain. 3rd theorem. A non-constant elliptic function takes on every value the same number of times in formula_16 counted with multiplicity. Weierstrass ℘-function. One of the most important elliptic functions is the Weierstrass formula_0-function. For a given period lattice formula_14 it is defined by formula_17 It is constructed in such a way that it has a pole of order two at every lattice point. The term formula_18 is there to make the series convergent. formula_0 is an even elliptic function; that is, formula_19. Its derivative formula_20 is an odd function, i.e. formula_21 One of the main results of the theory of elliptic functions is the following: Every elliptic function with respect to a given period lattice formula_14 can be expressed as a rational function in terms of formula_0 and formula_22. The formula_0-function satisfies the differential equation formula_23 where formula_24 and formula_25 are constants that depend on formula_14. More precisely, formula_26 and formula_27, where formula_28 and formula_29 are so-called Eisenstein series. In algebraic language, the field of elliptic functions is isomorphic to the field formula_30, where the isomorphism maps formula_0 to formula_31 and formula_22 to formula_32. 
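The defining series of the formula_0-function converges slowly, but a truncated version is enough for a rough numerical illustration. In the Python sketch below (the lattice, the test point and the truncation radius are arbitrary choices made for the example), evenness holds up to rounding because the truncated index set is symmetric, while the periodicity formula_3 only holds approximately and improves as the truncation radius grows.

# truncated lattice sum for the Weierstrass p-function of the lattice spanned by w1, w2
w1, w2 = 1.0, 1j      # an arbitrarily chosen pair of periods
N = 60                # truncation radius; larger N gives better accuracy

def wp(z):
    total = 1 / z ** 2
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue
            lam = m * w1 + n * w2
            total += 1 / (z - lam) ** 2 - 1 / lam ** 2
    return total

z = 0.31 + 0.17j
print(abs(wp(-z) - wp(z)))      # essentially zero: evenness is exact for the symmetric truncation
print(abs(wp(z + w1) - wp(z)))  # small compared with |wp(z)|, and shrinks as N grows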
Relation to elliptic integrals. The relation to elliptic integrals has mainly a historical background. Elliptic integrals had been studied by Legendre, whose work was taken on by Niels Henrik Abel and Carl Gustav Jacobi. Abel discovered elliptic functions by taking the inverse function formula_33 of the elliptic integral function formula_34 with formula_35. Additionally he defined the functions formula_36 and formula_37. After continuation to the complex plane they turned out to be doubly periodic and are known as Abel elliptic functions. Jacobi elliptic functions are similarly obtained as inverse functions of elliptic integrals. Jacobi considered the integral function formula_38 and inverted it: formula_39. formula_40 stands for "sinus amplitudinis" and is the name of the new function. He then introduced the functions "cosinus amplitudinis" and "delta amplitudinis", which are defined as follows: formula_41 formula_42. Only by taking this step, Jacobi could prove his general transformation formula of elliptic integrals in 1827. History. Shortly after the development of infinitesimal calculus the theory of elliptic functions was started by the Italian mathematician Giulio di Fagnano and the Swiss mathematician Leonhard Euler. When they tried to calculate the arc length of a lemniscate they encountered problems involving integrals that contained the square root of polynomials of degree 3 and 4. It was clear that those so called elliptic integrals could not be solved using elementary functions. Fagnano observed an algebraic relation between elliptic integrals, what he published in 1750. Euler immediately generalized Fagnano's results and posed his algebraic addition theorem for elliptic integrals. Except for a comment by Landen his ideas were not pursued until 1786, when Legendre published his paper "Mémoires sur les intégrations par arcs d’ellipse". Legendre subsequently studied elliptic integrals and called them "elliptic functions". Legendre introduced a three-fold classification –three kinds– which was a crucial simplification of the rather complicated theory at that time. Other important works of Legendre are: "Mémoire sur les transcendantes elliptiques" (1792), "Exercices de calcul intégral" (1811–1817), "Traité des fonctions elliptiques" (1825–1832). Legendre's work was mostly left untouched by mathematicians until 1826. Subsequently, Niels Henrik Abel and Carl Gustav Jacobi resumed the investigations and quickly discovered new results. At first they inverted the elliptic integral function. Following a suggestion of Jacobi in 1829 these inverse functions are now called "elliptic functions". One of Jacobi's most important works is "Fundamenta nova theoriae functionum ellipticarum" which was published 1829. The addition theorem Euler found was posed and proved in its general form by Abel in 1829. In those days the theory of elliptic functions and the theory of doubly periodic functions were considered to be different theories. They were brought together by Briot and Bouquet in 1856. Gauss discovered many of the properties of elliptic functions 30 years earlier but never published anything on the subject. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
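Numerically, the inverse functions described above are available in standard libraries; for example SciPy provides scipy.special.ellipj, which takes the parameter m = k² rather than the modulus k (this convention is an assumption of the example and is worth checking against the library documentation). The short sketch below verifies the identities cn² = 1 − sn² and dn² = 1 − k²·sn² that follow from the definitions of "cosinus amplitudinis" and "delta amplitudinis".

import numpy as np
from scipy.special import ellipj

k = 0.8
m = k ** 2                      # SciPy's ellipj uses the parameter m = k^2
u = np.linspace(0.1, 3.0, 7)
sn, cn, dn, ph = ellipj(u, m)

print(np.allclose(sn ** 2 + cn ** 2, 1.0))       # cn = sqrt(1 - sn^2)
print(np.allclose(dn ** 2 + m * sn ** 2, 1.0))   # dn = sqrt(1 - k^2 sn^2)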
[ { "math_id": 0, "text": "\\wp" }, { "math_id": 1, "text": "\\mathbb{R}" }, { "math_id": 2, "text": "\\omega_1,\\omega_2\\in\\mathbb{C}" }, { "math_id": 3, "text": "f(z + \\omega_1) = f(z)" }, { "math_id": 4, "text": "f(z + \\omega_2) = f(z), \\quad \\forall z\\in\\mathbb{C}" }, { "math_id": 5, "text": "f" }, { "math_id": 6, "text": "\\omega_1,\\omega_2" }, { "math_id": 7, "text": "f(z+\\gamma)=f(z)" }, { "math_id": 8, "text": "\\gamma=m\\omega_1+n\\omega_2" }, { "math_id": 9, "text": "m,n\\in\\mathbb{Z} " }, { "math_id": 10, "text": "\\Lambda:=\\langle \\omega_1,\\omega_2\\rangle_{\\mathbb Z}:=\\mathbb Z\\omega_1+\\mathbb Z\\omega_2:=\\{m\\omega_1+n\\omega_2\\mid m,n\\in\\mathbb Z\\}" }, { "math_id": 11, "text": "\\omega_1" }, { "math_id": 12, "text": "\\omega_2" }, { "math_id": 13, "text": "\\{\\mu\\omega_1+\\nu\\omega_2\\mid 0\\leq\\mu,\\nu\\leq 1\\}" }, { "math_id": 14, "text": "\\Lambda" }, { "math_id": 15, "text": "\\C" }, { "math_id": 16, "text": "\\mathbb{C}/\\Lambda" }, { "math_id": 17, "text": "\\wp(z)=\\frac1{z^2}+\\sum_{\\lambda\\in\\Lambda\\setminus\\{0\\}}\\left(\\frac1{(z-\\lambda)^2}-\\frac1{\\lambda^2}\\right)." }, { "math_id": 18, "text": "-\\frac1{\\lambda^2}" }, { "math_id": 19, "text": "\\wp(-z)=\\wp(z)" }, { "math_id": 20, "text": "\\wp'(z)=-2\\sum_{\\lambda\\in\\Lambda}\\frac1{(z-\\lambda)^3}" }, { "math_id": 21, "text": "\\wp'(-z)=-\\wp'(z)." }, { "math_id": 22, "text": "\\wp'" }, { "math_id": 23, "text": "\\wp'(z)^2=4\\wp(z)^3-g_2\\wp(z)-g_3," }, { "math_id": 24, "text": "g_2" }, { "math_id": 25, "text": "g_3" }, { "math_id": 26, "text": "g_2(\\omega_1,\\omega_2)=60G_4(\\omega_1,\\omega_2)" }, { "math_id": 27, "text": "g_3(\\omega_1,\\omega_2)=140G_6(\\omega_1,\\omega_2)" }, { "math_id": 28, "text": "G_4" }, { "math_id": 29, "text": "G_6" }, { "math_id": 30, "text": "\\mathbb C(X)[Y]/(Y^2-4X^3+g_2X+g_3)" }, { "math_id": 31, "text": "X" }, { "math_id": 32, "text": "Y" }, { "math_id": 33, "text": "\\varphi" }, { "math_id": 34, "text": "\\alpha(x)=\\int_0^x \\frac{dt}{\\sqrt{(1-c^2t^2)(1+e^2t^2)}}" }, { "math_id": 35, "text": "x=\\varphi(\\alpha)" }, { "math_id": 36, "text": "f(\\alpha)=\\sqrt{1-c^2\\varphi^2(\\alpha)}" }, { "math_id": 37, "text": "F(\\alpha)=\\sqrt{1+e^2\\varphi^2(\\alpha)}" }, { "math_id": 38, "text": "\\xi(x)=\\int_0^x \\frac{dt}{\\sqrt{(1-t^2)(1-k^2t^2)}}" }, { "math_id": 39, "text": "x=\\operatorname{sn}(\\xi)" }, { "math_id": 40, "text": "\\operatorname{sn}" }, { "math_id": 41, "text": "\\operatorname{cn}(\\xi):=\\sqrt{1-x^2} \n " }, { "math_id": 42, "text": "\\operatorname{dn}(\\xi):=\\sqrt{1-k^2x^2}\n \n " } ]
https://en.wikipedia.org/wiki?curid=69939
6993953
Immersion (mathematics)
Differentiable function whose derivative is everywhere injective In mathematics, an immersion is a differentiable function between differentiable manifolds whose derivative (pushforward) is everywhere injective. Explicitly, "f" : "M" → "N" is an immersion if formula_0 is an injective function at every point p of M (where TpX denotes the tangent space of a manifold X at a point p in X and Dp f is the derivative (pushforward) of the map f at point p). Equivalently, f is an immersion if its derivative has constant rank equal to the dimension of M: formula_1 The function f itself need not be injective, only its derivative must be. Vs. embedding. A related concept is that of an "embedding". A smooth embedding is an injective immersion "f" : "M" → "N" that is also a topological embedding, so that M is diffeomorphic to its image in N. An immersion is precisely a local embedding – that is, for any point "x" ∈ "M" there is a neighbourhood, "U" ⊆ "M", of x such that "f" : "U" → "N" is an embedding, and conversely a local embedding is an immersion. For infinite dimensional manifolds, this is sometimes taken to be the definition of an immersion. If M is compact, an injective immersion is an embedding, but if M is not compact then injective immersions need not be embeddings; compare to continuous bijections versus homeomorphisms. Regular homotopy. A regular homotopy between two immersions f and g from a manifold M to a manifold N is defined to be a differentiable function "H" : "M" × [0,1] → "N" such that for all t in [0, 1] the function "Ht" : "M" → "N" defined by "Ht"("x") = "H"("x", "t") for all "x" ∈ "M" is an immersion, with "H"0 = "f", "H"1 = "g". A regular homotopy is thus a homotopy through immersions. Classification. Hassler Whitney initiated the systematic study of immersions and regular homotopies in the 1940s, proving that for 2"m" &lt; "n" + 1 every map "f" : "M m" → "N n" of an m-dimensional manifold to an n-dimensional manifold is homotopic to an immersion, and in fact to an embedding for 2"m" &lt; "n"; these are the Whitney immersion theorem and Whitney embedding theorem. Stephen Smale expressed the regular homotopy classes of immersions of spheres in Euclidean spaces as the homotopy groups of a certain Stiefel manifold. The sphere eversion was a particularly striking consequence. Morris Hirsch generalized Smale's expression to a homotopy theory description of the regular homotopy classes of immersions of any m-dimensional manifold M m in any n-dimensional manifold N n. The Hirsch-Smale classification of immersions was generalized by Mikhail Gromov. Existence. The primary obstruction to the existence of an immersion i of M in Euclidean n-space is the stable normal bundle of M, as detected by its characteristic classes, notably its Stiefel–Whitney classes. That is, since Euclidean space is parallelizable, the pullback of its tangent bundle to M is trivial; since this pullback is the direct sum of the (intrinsically defined) tangent bundle on M, TM, which has dimension m, and of the normal bundle ν of the immersion i, which has dimension "n" − "m", for there to be a codimension k immersion of M, there must be a vector bundle of dimension k, ξ k, standing in for the normal bundle ν, such that TM ⊕ ξ k is trivial. Conversely, given such a bundle, an immersion of M with this normal bundle is equivalent to a codimension 0 immersion of the total space of this bundle, which is an open manifold.
The stable normal bundle is the class of normal bundles plus trivial bundles, and thus if the stable normal bundle has cohomological dimension k, it cannot come from an (unstable) normal bundle of dimension less than k. Thus, the cohomology dimension of the stable normal bundle, as detected by its highest non-vanishing characteristic class, is an obstruction to immersions. Since characteristic classes multiply under direct sum of vector bundles, this obstruction can be stated intrinsically in terms of the space M and its tangent bundle and cohomology algebra. This obstruction was stated (in terms of the tangent bundle, not stable normal bundle) by Whitney. For example, the Möbius strip has non-trivial tangent bundle, so it cannot immerse in codimension 0 (in the plane), though it embeds in codimension 1 (in 3-space). William S. Massey (1960) showed that these characteristic classes (the Stiefel–Whitney classes of the stable normal bundle) vanish above degree "n" − "α"("n"), where "α"("n") is the number of "1" digits when n is written in binary; this bound is sharp, as realized by real projective space. This gave evidence for the "immersion conjecture", namely that every n-manifold could be immersed in codimension "n" − "α"("n"), i.e., in Euclidean space of dimension 2"n" − "α"("n"). This conjecture was proven by Ralph Cohen (1985). Codimension 0. Codimension 0 immersions are equivalently "relative" dimension 0 "submersions", and are better thought of as submersions. A codimension 0 immersion of a closed manifold is precisely a covering map, i.e., a fiber bundle with 0-dimensional (discrete) fiber. By Ehresmann's theorem and Phillips' theorem on submersions, a proper submersion of manifolds is a fiber bundle, hence codimension/relative dimension 0 immersions/submersions behave like submersions. Further, codimension 0 immersions do not behave like other immersions, which are largely determined by the stable normal bundle: in codimension 0 one has issues of fundamental class and covering spaces. For instance, there is no codimension 0 immersion of the circle in the line, despite the circle being parallelizable, which can be proven because the line has no fundamental class, so one does not get the required map on top cohomology. Alternatively, this is by invariance of domain. Similarly, although the 3-sphere and the 3-torus are both parallelizable, there is no immersion of the 3-torus in the 3-sphere – any such cover would have to be ramified at some points, since the sphere is simply connected. Another way of understanding this is that a codimension k immersion of a manifold corresponds to a codimension 0 immersion of a k-dimensional vector bundle, which is an "open" manifold if the codimension is greater than 0, but to a closed manifold in codimension 0 (if the original manifold is closed). Multiple points. A k-tuple point (double, triple, etc.) of an immersion "f" : "M" → "N" is an unordered set {"x"1, ..., "xk"} of distinct points "xi" ∈ "M" with the same image "f"("xi") ∈ "N". If M is an m-dimensional manifold and N is an "n"-dimensional manifold then for an immersion "f" : "M" → "N" in general position the set of k-tuple points is an ("n" − "k"("n" − "m"))-dimensional manifold. Every embedding is an immersion without multiple points (where "k" &gt; 1). Note, however, that the converse is false: there are injective immersions that are not embeddings.
The nature of the multiple points classifies immersions; for example, immersions of a circle in the plane are classified up to regular homotopy by the number of double points. At a key point in surgery theory it is necessary to decide if an immersion "f" of an m-sphere in a 2"m"-dimensional manifold is regular homotopic to an embedding, in which case it can be killed by surgery. Wall associated to f an invariant "μ"("f" ) in a quotient of the fundamental group ring of N, which counts the double points of f in the universal cover of N. For "m" &gt; 2, f is regular homotopic to an embedding if and only if "μ"("f" ) = 0 by the Whitney trick. One can study embeddings as "immersions without multiple points", since immersions are easier to classify. Thus, one can start from immersions and try to eliminate multiple points, seeing if one can do this without introducing other singularities – studying "multiple disjunctions". This was first done by André Haefliger, and this approach is fruitful in codimension 3 or more – from the point of view of surgery theory, this is "high (co)dimension", unlike codimension 2 which is the knotting dimension, as in knot theory. It is studied categorically via the "calculus of functors" by Thomas Goodwillie, John Klein, and Michael S. Weiss. Examples and properties. Immersed plane curves. Immersed plane curves have a well-defined turning number, which can be defined as the total curvature divided by 2π. This is invariant under regular homotopy, by the Whitney–Graustein theorem – topologically, it is the degree of the Gauss map, or equivalently the winding number of the unit tangent (which does not vanish) about the origin. Further, this is a complete set of invariants – any two plane curves with the same turning number are regular homotopic. Every immersed plane curve lifts to an embedded space curve via separating the intersection points, which is not true in higher dimensions. With added data (which strand is on top), immersed plane curves yield knot diagrams, which are of central interest in knot theory. While immersed plane curves, up to regular homotopy, are determined by their turning number, knots have a very rich and complex structure. Immersed surfaces in 3-space. The study of immersed surfaces in 3-space is closely connected with the study of knotted (embedded) surfaces in 4-space, by analogy with the theory of knot diagrams (immersed plane curves (2-space) as projections of knotted curves in 3-space): given a knotted surface in 4-space, one can project it to an immersed surface in 3-space, and conversely, given an immersed surface in 3-space, one may ask if it lifts to 4-space – is it the projection of a knotted surface in 4-space? This allows one to relate questions about these objects. A basic result, in contrast to the case of plane curves, is that not every immersed surface lifts to a knotted surface. In some cases the obstruction is 2-torsion, such as in "Koschorke's example", which is an immersed surface (formed from 3 Möbius bands, with a triple point) that does not lift to a knotted surface, but it has a double cover that does lift. A detailed analysis, as well as a more recent survey, can be found in the literature. Generalizations. A far-reaching generalization of immersion theory is the homotopy principle: one may consider the immersion condition (the rank of the derivative is always k) as a partial differential relation (PDR), as it can be stated in terms of the partial derivatives of the function.
Then Smale–Hirsch immersion theory is the result that this reduces to homotopy theory, and the homotopy principle gives general conditions and reasons for PDRs to reduce to homotopy theory. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
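As a concrete numerical illustration of the turning number of an immersed plane curve discussed above, the following sketch (plain Python; the parametrizations are illustrative examples, not taken from the article) computes the winding number of the unit tangent for a circle and for a figure-eight Lissajous curve:

# Turning number = winding number of the (non-vanishing) tangent vector about the origin.
# Expected results: 1 for the circle, 0 for the figure-eight.
import numpy as np

def turning_number(dx, dy, n=4000):
    t = np.linspace(0.0, 2.0 * np.pi, n)
    theta = np.unwrap(np.arctan2(dy(t), dx(t)))   # continuous angle of the tangent vector
    return round((theta[-1] - theta[0]) / (2.0 * np.pi))

# circle (cos t, sin t): tangent (-sin t, cos t)
print(turning_number(lambda t: -np.sin(t), lambda t: np.cos(t)))                 # 1

# figure-eight (sin 2t, sin t): tangent (2 cos 2t, cos t), which never vanishes
print(turning_number(lambda t: 2.0 * np.cos(2.0 * t), lambda t: np.cos(t)))      # 0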
[ { "math_id": 0, "text": "D_pf : T_p M \\to T_{f(p)}N\\," }, { "math_id": 1, "text": "\\operatorname{rank}\\,D_p f = \\dim M." }, { "math_id": 2, "text": "f_1 = -f_0 : \\mathbb S^2 \\to \\R^3" } ]
https://en.wikipedia.org/wiki?curid=6993953
6994353
Fractional factorial design
Statistical experimental design approach In statistics, fractional factorial designs are experimental designs consisting of a carefully chosen subset (fraction) of the experimental runs of a full factorial design. The subset is chosen so as to exploit the sparsity-of-effects principle to expose information about the most important features of the problem studied, while using a fraction of the effort of a full factorial design in terms of experimental runs and resources. In other words, it makes use of the fact that many runs in a full factorial design are often redundant, giving little or no new information about the system. The design of fractional factorial experiments must be deliberate, as certain effects are confounded and cannot be separated from others. History. Fractional factorial design was introduced by British statistician David John Finney in 1945, extending previous work by Ronald Fisher on the full factorial experiment at Rothamsted Experimental Station. Developed originally for agricultural applications, it has since been applied to other areas of engineering, science, and business. Basic working principle. Similar to a full factorial experiment, a fractional factorial experiment investigates the effects of independent variables, known as factors, on a response variable. Each factor is investigated at different values, known as levels. The response variable is measured using a combination of factors at different levels, and each unique combination is known as a run. To reduce the number of runs in comparison to a full factorial, the experiments are designed to confound different effects and interactions, so that their impacts cannot be distinguished. Higher-order interactions between factors are typically negligible, making this a reasonable method of studying main effects. This is the sparsity of effects principle. Confounding is controlled by a systematic selection of runs from a full-factorial table. Notation. Fractional designs are expressed using the notation "l"k − p, where "l" is the number of levels of each factor, "k" is the number of factors, and "p" describes the size of the fraction of the full factorial used. Formally, "p" is the number of generators, i.e. relationships that determine which effects are intentionally confounded and thereby reduce the number of runs needed. Each generator halves the number of runs required. A design with "p" such generators is a 1/("lp") = "l−p" fraction of the full factorial design. For example, a 25 − 2 design is 1/4 of a two-level, five-factor factorial design. Rather than the 32 runs that would be required for the full 25 factorial experiment, this experiment requires only eight runs. With two generators, the number of experiments has been halved twice. In practice, one rarely encounters "l" &gt; 2 levels in fractional factorial designs as the methodology to generate such designs for more than two levels is much more cumbersome. In cases requiring 3 levels for each factor, potential fractional designs to pursue are Latin squares, mutually orthogonal Latin squares, and Taguchi methods. Response surface methodology can also be a much more experimentally efficient way to determine the relationship between the experimental response and factors at multiple levels, but it requires that the levels are continuous. In determining whether more than two levels are needed, experimenters should consider whether they expect the response to be nonlinear, since a third level is needed to detect curvature.
Another consideration is the number of factors, which can significantly change the experimental labor demand. The levels of a factor are commonly coded as +1 for the higher level, and −1 for the lower level. For a three-level factor, the intermediate value is coded as 0. To save space, the points in a factorial experiment are often abbreviated with strings of plus and minus signs. The strings have as many symbols as factors, and their values dictate the level of each factor: conventionally, formula_0 for the first (or low) level, and formula_1 for the second (or high) level. The points in a two-level experiment with two factors can thus be represented as formula_2, formula_3, formula_4, and formula_5. The factorial points can also be abbreviated by (1), a, b, and ab, where the presence of a letter indicates that the specified factor is at its high (or second) level and the absence of a letter indicates that the specified factor is at its low (or first) level (for example, "a" indicates that factor A is on its high setting, while all other factors are at their low (or first) setting). (1) is used to indicate that all factors are at their lowest (or first) values. Factorial points are typically arranged in a table using Yates’ standard order: 1, a, b, ab, c, ac, bc, abc, which is created by letting the level of the first factor alternate with each run (the second every two runs, and so on). Generation. In practice, experimenters typically rely on statistical reference books to supply the "standard" fractional factorial designs, consisting of the "principal fraction". The "principal fraction" is the set of treatment combinations for which the generators evaluate to + under the treatment combination algebra. However, in some situations, experimenters may take it upon themselves to generate their own fractional design. A fractional factorial experiment is generated from a full factorial experiment by choosing an "alias structure". The alias structure determines which effects are confounded with each other. For example, the five-factor 25 − 2 can be generated by using a full three-factor factorial experiment involving three factors (say "A", "B", and "C") and then choosing to confound the two remaining factors "D" and "E" with interactions generated by "D" = "A"*"B" and "E" = "A"*"C". These two expressions are called the "generators" of the design. So for example, when the experiment is run and the experimenter estimates the effects for factor "D", what is really being estimated is a combination of the main effect of "D" and the two-factor interaction involving "A" and "B". An important characteristic of a fractional design is the defining relation, which gives the set of interaction columns equal in the design matrix to a column of plus signs, denoted by "I". For the above example, since "D" = "AB" and "E" = "AC", then "ABD" and "ACE" are both columns of plus signs, and consequently so is "BCDE": "D"*"D" = "AB"*"D" = "I"; "E"*"E" = "AC"*"E" = "I"; and "I" = "ABD"*"ACE" = "BCDE" (the repeated letter "A" cancels, since "A"*"A" = "I"). In this case, the defining relation of the fractional design is "I" = "ABD" = "ACE" = "BCDE". The defining relation allows the alias pattern of the design to be determined and includes 2p words (four in this example: "I", "ABD", "ACE", and "BCDE"). Notice that in this case, the interaction effects "ABD", "ACE", and "BCDE" cannot be studied at all. As the number of generators and the degree of fractionation increase, more and more effects become confounded. The alias pattern can then be determined through multiplying by each factor column.
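This bookkeeping is easy to automate. The following sketch (standard-library Python, not from the source text) regenerates the eight runs of the 25 − 2 design above from the generators D = AB and E = AC, prints the words of the defining relation, and lists the aliases of main effect A; the same computation is carried out by hand in the next paragraph:

from itertools import product

def mult(*words):
    # Product of effect "words": letters appearing an even number of times cancel.
    letters = "".join(words)
    out = "".join(sorted(c for c in set(letters) if letters.count(c) % 2 == 1))
    return out if out else "I"

# Full 2^3 factorial in A, B, C; then D = A*B and E = A*C for each run.
runs = [(a, b, c, a * b, a * c) for a, b, c in product((-1, 1), repeat=3)]
for r in runs:
    print(r)                                   # levels of A, B, C, D, E

# Defining relation: the two generator words and their product.
words = ["ABD", "ACE", mult("ABD", "ACE")]
print("I = " + " = ".join(words))              # I = ABD = ACE = BCDE

# Aliases of main effect A: multiply each word of the defining relation by A.
print("A = " + " = ".join(mult("A", w) for w in words))   # BD, CE, ABCDE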
To determine how main effect A is confounded, multiply all terms in the defining relation by A: "A"*"I" = "A"*"ABD" = "A"*"ACE" = "A"*"BCDE", which simplifies to "A" = "BD" = "CE" = "ABCDE". Thus main effect A is confounded with the interaction effects BD, CE, and ABCDE. Other main effects can be computed following a similar method. Resolution. An important property of a fractional design is its resolution or ability to separate main effects and low-order interactions from one another. Formally, if the factors are binary then the resolution of the design is the minimum word length in the defining relation excluding ("I"). The resolution is denoted using Roman numerals, and it increases with the number. The most important fractional designs are those of resolution III, IV, and V: in a resolution III design, main effects are aliased with two-factor interactions; in a resolution IV design, main effects are not aliased with two-factor interactions, but two-factor interactions are aliased with one another; in a resolution V design, two-factor interactions are aliased only with three-factor or higher-order interactions. Resolutions below III are not useful and resolutions above V are wasteful (with binary factors) in that the expanded experimentation has no practical benefit in most cases: the bulk of the additional effort goes into the estimation of very high-order interactions which rarely occur in practice. The 25 − 2 design above is resolution III since its defining relation is I = ABD = ACE = BCDE. The resolution classification system described is only used for regular designs. Regular designs have a run size that equals a power of two, and only full aliasing is present. Non-regular designs, sometimes known as Plackett-Burman designs, are designs where the run size is a multiple of 4; these designs introduce partial aliasing, and generalized resolution is used as the design criterion instead of the resolution described previously. Resolution III designs can be used to construct saturated designs, where N-1 factors can be investigated in only N runs. These saturated designs can be used for quick screening when many factors are involved. Example fractional factorial experiment. Montgomery gives the following example of a fractional factorial experiment. An engineer performed an experiment to increase the filtration rate (output) of a process to produce a chemical, and to reduce the amount of formaldehyde used in the process. The full factorial experiment is described in the Wikipedia page Factorial experiment. Four factors were considered: temperature (A), pressure (B), formaldehyde concentration (C), and stirring rate (D). The results in that example were that the main effects A, C, and D and the AC and AD interactions were significant. The results of that example may be used to simulate a fractional factorial experiment using a half-fraction of the original 2"4" = 16 run design. The table shows the 2"4"-"1" = 8 run half-fraction experiment design and the resulting filtration rate, extracted from the table for the full 16 run factorial experiment. In this fractional design, each main effect is aliased with a 3-factor interaction (e.g., A = BCD), and every 2-factor interaction is aliased with another 2-factor interaction (e.g., AB = CD). The aliasing relationships are shown in the table. This is a resolution IV design, meaning that main effects are aliased with 3-way interactions, and 2-way interactions are aliased with 2-way interactions. The analysis of variance estimates of the effects are shown in the table below. From inspection of the table, there appear to be large effects due to A, C, and D. The coefficient for the AB interaction is quite small. Unless the AB and CD interactions have approximately equal but opposite effects, these two interactions appear to be negligible.
If A, C, and D have large effects, but B has little effect, then the AC and AD interactions are most likely significant. These conclusions are consistent with the results of the full-factorial 16-run experiment. Because B and its interactions appear to be insignificant, B may be dropped from the model. Dropping B results in a full factorial 2"3" design for the factors A, C, and D. Performing the ANOVA using factors A, C, and D, and the interaction terms A:C and A:D, gives results shown in the table which are very similar to the results for the full factorial experiment, but have the advantage of requiring only the half-fraction's 8 runs rather than 16. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "-" }, { "math_id": 1, "text": "+" }, { "math_id": 2, "text": "--" }, { "math_id": 3, "text": "+-" }, { "math_id": 4, "text": "-+" }, { "math_id": 5, "text": "++" } ]
https://en.wikipedia.org/wiki?curid=6994353
69947736
FO(.)
Knowledge representation computer programming language In computer science, FO(.) (a.k.a. FO-dot) is a knowledge representation language based on first-order logic (FO). It extends FO with types, aggregates (counting, summing, maximising ... over a set), arithmetic, inductive definitions, partial functions, and intensional objects. By itself, an FO(.) knowledge base cannot be run, as it is just a "bag of information", to be used as input to various generic reasoning algorithms. Reasoning engines that use FO(.) include IDP-Z3, IDP and FOLASP. As an example, the IDP system allows generating models, answering set queries, checking entailment between two theories and checking satisfiability, among other types of inference over an FO(.) knowledge base. FO(.) has four types of statements; two of them, vocabulary and theory declarations, appear in the example below. Example. A voting law specifies that citizens must be at least 18 years old to vote. Furthermore, if the voting law is interpreted as being prescriptive, voting is mandatory when you are over 18. This can be represented in FO(.) as follows:
vocabulary V {
    age: () → ℤ                // function declaration
    prescriptive, vote: () → 𝔹 // predicate declarations
}
theory T:V {
    age() &lt; 18 ⇒ ¬vote().                   // axiom: if you are less than 18, you may not vote.
    prescriptive() ⇒ (age() ≥ 18 ⇒ vote()). // axiom: if prescriptive: if you are at least 18, you must vote.
}
In this code, "A"codice_0"B" indicates a function from "A" to "B", formula_0 denotes the integers, formula_1 denotes the Booleans, codice_1 denotes negation, and codice_2 denotes material conditional. Predicates &lt; and ≥ are built-in and have their usual meaning. Such a knowledge base can be turned automatically into an interactive application (an "Interactive Lawyer"). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{Z}" }, { "math_id": 1, "text": "\\mathbb{B}" } ]
https://en.wikipedia.org/wiki?curid=69947736
69955049
Feferman–Vaught theorem
Theorem about products in model theory The Feferman–Vaught theorem in model theory is a theorem by Solomon Feferman and Robert Lawson Vaught that shows how to reduce, in an algorithmic way, the first-order theory of a product of structures to the first-order theories of the factor structures. The theorem is considered one of the standard results in model theory. The theorem extends the previous result of Andrzej Mostowski on direct products of theories. It generalizes (to formulas with arbitrary quantifiers) the property in universal algebra that equalities (identities) carry over to direct products of algebraic structures (which is a consequence of one direction of Birkhoff's theorem). Direct product of structures. Consider a first-order logic signature "L". The definition of product structures takes a family of "L"-structures formula_0 for formula_1 for some index set "I" and defines the product structure formula_2, which is also an "L"-structure, with all functions and relations defined pointwise. The definition generalizes the direct product in universal algebra to structures for languages that contain not only function symbols but also relation symbols. If formula_3 is a relation symbol with formula_4 arguments in "L" and formula_5 are elements of the Cartesian product, we define the interpretation of formula_6 in formula_7 by formula_8 When formula_6 is a functional relation, this definition reduces to the definition of direct product in universal algebra. Statement of the theorem for direct products. For a first-order formula formula_9 in signature "L" with free variables, and for an interpretation formula_10 of the variables formula_11, we define the set of indices formula_12 for which formula_13 holds in formula_0: formula_14 Given a first-order formula with free variables formula_9, there is an algorithm to compute its equivalent game normal form, which is a finite disjunction formula_15 of mutually contradictory formulas. The Feferman–Vaught theorem gives an algorithm that takes a first-order formula formula_9 and constructs a formula formula_16 that reduces the condition that formula_13 holds in the product to the condition that formula_16 holds in the interpretation of formula_17 sets of indices: formula_18 Formula formula_16 is thus a formula with formula_17 free set variables, for example, in the first-order theory of the Boolean algebra of sets. Proof idea. Formula formula_16 can be constructed following the structure of the starting formula formula_19. When formula_19 is quantifier-free, then by the definition of the direct product above it follows that formula_20 Consequently, we can take formula_21 to be the equality formula_22 in the language of the Boolean algebra of sets (equivalently, the field of sets). Extending the condition to quantified formulas can be viewed as a form of quantifier elimination, where quantification over product elements formula_10 in formula_19 is reduced to quantification over subsets of formula_23. Generalized products. It is often of interest to consider a substructure of the direct product structure. If the restriction that defines product elements that belong to the substructure can be expressed as a condition on the sets of index elements, then the results can be generalized. An example is the substructure of product elements that agree with a fixed constant at all but finitely many indices. Assume that the language "L" contains a constant symbol formula_24 and consider the substructure containing only those product elements formula_25 for which the set formula_26 is finite.
The theorem then reduces the truth value in such a substructure to a formula formula_16 in the Boolean algebra of sets, where certain sets are restricted to be finite. One way to define generalized products is to consider those substructures where the sets formula_27 belong to some Boolean algebra formula_28 of sets formula_29 of indices (a subset of the power set algebra formula_30), and where the product substructure admits gluing. Here admitting gluing refers to the following closure condition: if formula_31 are two product elements and formula_32, then the element formula_24 defined by "gluing" formula_25 and formula_33 according to formula_34 also belongs to the substructure: formula_35 Consequences. The Feferman–Vaught theorem implies the decidability of Skolem arithmetic by viewing, via the fundamental theorem of arithmetic, the structure of natural numbers with multiplication as a generalized product (power) of Presburger arithmetic structures. Given an ultrafilter on the set of indices formula_23, we can define a quotient structure on product elements, leading to the theorem of Jerzy Łoś, which can be used to construct hyperreal numbers. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
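As a toy illustration of the definitions above (the structures and the relation symbol below are invented for the example and are not from the article), the following sketch evaluates an atomic formula componentwise and forms the index set ||φ(ā)||; for a quantifier-free formula, the direct product satisfies φ exactly when this set is all of I:

# Three small structures over a signature with one binary relation "le";
# product elements are tuples indexed by I, and relations hold pointwise.
I = [0, 1, 2]
structures = [
    {"dom": {0, 1, 2},    "le": lambda x, y: x <= y},
    {"dom": {0, 1},       "le": lambda x, y: x <= y},
    {"dom": {0, 1, 2, 3}, "le": lambda x, y: x <= y},
]

def indices_where_le(a, b):
    # ||le(a, b)|| = { i in I : A_i |= le(a(i), b(i)) }
    return {i for i in I if structures[i]["le"](a[i], b[i])}

a = (0, 1, 2)
b = (2, 0, 3)
norm = indices_where_le(a, b)
print(norm)                 # {0, 2}: the relation fails in component 1
print(norm == set(I))       # False, so the product does not satisfy le(a, b)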
[ { "math_id": 0, "text": "\\mathbf{A}_i" }, { "math_id": 1, "text": "i \\in I" }, { "math_id": 2, "text": "\\mathbf{A} = \\prod_{i \\in I} \\mathbf{A}_i" }, { "math_id": 3, "text": "r(a_1,\\ldots,a_p)" }, { "math_id": 4, "text": "p" }, { "math_id": 5, "text": "a_1,\\ldots,a_n \\in \\Pi_{i \\in I}\\mathbf{A}_i" }, { "math_id": 6, "text": "r" }, { "math_id": 7, "text": "\\mathbf{A}" }, { "math_id": 8, "text": "\n\\mathbf{A} \\models r(a_1,\\ldots,a_p) \\iff\n\\forall i \\in I,\\ \\mathbf{A}_i \\models r(a_1(i),\\ldots,a_p(i))\n" }, { "math_id": 9, "text": "\\phi(\\bar x)" }, { "math_id": 10, "text": "\\bar a" }, { "math_id": 11, "text": "\\bar x" }, { "math_id": 12, "text": "i" }, { "math_id": 13, "text": "\\phi(\\bar a)" }, { "math_id": 14, "text": "\n||\\phi(\\bar a)|| = \\{ i \\mid \\mathbf{A}_i \\models \\phi(\\bar a(i)) \\}\n" }, { "math_id": 15, "text": "\\bigvee_{i=0}^{k-1} \\theta(\\bar x)" }, { "math_id": 16, "text": "\\phi^*" }, { "math_id": 17, "text": "k+1" }, { "math_id": 18, "text": "\nI, ||\\theta_0(\\bar a)||, \\ldots, ||\\theta_{k-1}(\\bar a)||\n" }, { "math_id": 19, "text": "\\phi" }, { "math_id": 20, "text": "\n\\begin{array}{rl}\n\\mathbf{A} \\models \\phi(\\bar a) & \\iff \\forall i \\in I.\\ \\mathbf{A}_i \\models \\phi(\\bar a(i)) \\\\\n & \\iff \\{ i \\mid \\mathbf{A}_i \\models \\phi(\\bar a(i)) \\} = I \\\\\n & \\iff ||\\phi(\\bar a)|| = I\n\\end{array}\n" }, { "math_id": 21, "text": "\\phi^*(U,X_1)" }, { "math_id": 22, "text": "U = X_1" }, { "math_id": 23, "text": "I" }, { "math_id": 24, "text": "c" }, { "math_id": 25, "text": "a" }, { "math_id": 26, "text": "\n\\{ i \\mid \\textbf{A}_i \\models a(i) \\neq c \\}\n" }, { "math_id": 27, "text": "||\\phi(a)||" }, { "math_id": 28, "text": "B" }, { "math_id": 29, "text": "X \\subseteq I" }, { "math_id": 30, "text": "2^I" }, { "math_id": 31, "text": "a,b" }, { "math_id": 32, "text": "X \\in B" }, { "math_id": 33, "text": "b" }, { "math_id": 34, "text": "X" }, { "math_id": 35, "text": "\nc(i) = \\left\\{\\begin{array}{rl}\n a(i), & \\mbox{ if } i \\in X \\\\\n b(i), & \\mbox{ if } i \\in (I \\setminus X)\n\\end{array}\\right.\n" } ]
https://en.wikipedia.org/wiki?curid=69955049
699570
Correlation ratio
In statistics, the correlation ratio is a measure of the curvilinear relationship between the statistical dispersion within individual categories and the dispersion across the whole population or sample. The measure is defined as the "ratio" of two standard deviations representing these types of variation. The context here is the same as that of the intraclass correlation coefficient, whose value is the square of the correlation ratio. Definition. Suppose each observation is "yxi" where "x" indicates the category that observation is in and "i" is the label of the particular observation. Let "nx" be the number of observations in category "x" and formula_0 and formula_1 where formula_2 is the mean of the category "x" and formula_3 is the mean of the whole population. The correlation ratio η (eta) is defined so as to satisfy formula_4 which can be written as formula_5 i.e. the weighted variance of the category means divided by the variance of all samples. If the relationship between values of formula_6 and values of formula_2 is linear (which is certainly true when there are only two possibilities for "x") this will give the same result as the square of Pearson's correlation coefficient; otherwise the correlation ratio will be larger in magnitude. It can therefore be used for judging non-linear relationships. Range. The correlation ratio formula_7 takes values between 0 and 1. The limit formula_8 represents the special case of no dispersion among the means of the different categories, while formula_9 refers to no dispersion within the respective categories. formula_7 is undefined when all data points of the complete population take the same value. Example. Suppose there is a distribution of test scores in three topics (categories): Algebra (5 scores), Geometry (4 scores) and Statistics (6 scores). The subject averages are 36, 33 and 78, with an overall average of 52. The sums of squares of the differences from the subject averages are 1952 for Algebra, 308 for Geometry and 600 for Statistics, adding to 2860. The overall sum of squares of the differences from the overall average is 9640. The difference of 6780 between these is also the weighted sum of the squares of the differences between the subject averages and the overall average: formula_10 This gives formula_11 suggesting that most of the overall dispersion is a result of differences between topics, rather than within topics. Taking the square root gives formula_12 For formula_13 the overall sample dispersion is purely due to dispersion among the categories and not at all due to dispersion within the individual categories. For quick comprehension, simply imagine all scores within each topic being identical, e.g. five scores of 36 in Algebra, four of 33 in Geometry and six of 78 in Statistics. The limit formula_14 refers to the case in which dispersion among the categories contributes nothing to the overall dispersion. The trivial requirement for this extreme is that all category means are the same. Pearson vs. Fisher. The correlation ratio was introduced by Karl Pearson as part of analysis of variance. Ronald Fisher commented: "As a descriptive statistic the utility of the correlation ratio is extremely limited.
It will be noticed that the number of degrees of freedom in the numerator of formula_15 depends on the number of the arrays" to which Egon Pearson (Karl's son) responded by saying "Again, a long-established method such as the use of the correlation ratio [§45 The "Correlation Ratio" η] is passed over in a few words without adequate description, which is perhaps hardly fair to the student who is given no opportunity of judging its scope for himself."
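The example above is straightforward to reproduce numerically. In the following sketch (plain Python), the individual scores are illustrative values chosen to be consistent with the topic sizes, subject averages and sums of squares quoted above:

# Reproduces the worked example: eta^2 = 6780 / 9640 ≈ 0.7033, eta ≈ 0.8386.
scores = {
    "Algebra":    [45, 70, 29, 15, 21],
    "Geometry":   [40, 20, 30, 42],
    "Statistics": [65, 95, 80, 70, 85, 73],
}

all_scores = [y for ys in scores.values() for y in ys]
grand_mean = sum(all_scores) / len(all_scores)                    # 52.0

between = sum(len(ys) * (sum(ys) / len(ys) - grand_mean) ** 2     # 6780.0
              for ys in scores.values())
total = sum((y - grand_mean) ** 2 for y in all_scores)            # 9640.0

eta_squared = between / total
print(eta_squared, eta_squared ** 0.5)   # 0.7033..., 0.8386...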
[ { "math_id": 0, "text": "\\overline{y}_x=\\frac{\\sum_i y_{xi}}{n_x}" }, { "math_id": 1, "text": "\\overline{y}=\\frac{\\sum_x n_x \\overline{y}_x}{\\sum_x n_x}," }, { "math_id": 2, "text": "\\overline{y}_x" }, { "math_id": 3, "text": "\\overline{y}" }, { "math_id": 4, "text": "\\eta^2 = \\frac{\\sum_x n_x (\\overline{y}_x-\\overline{y})^2}{\\sum_{x,i} (y_{xi}-\\overline{y})^2}" }, { "math_id": 5, "text": "\\eta^2 = \\frac{{\\sigma_{\\overline{y}}}^2}{{\\sigma_{y}}^2}, \\text{ where }{\\sigma_{\\overline{y}}}^2 = \\frac{\\sum_x n_x (\\overline{y}_x-\\overline{y})^2}{\\sum_x n_x} \\text{ and } {\\sigma_{y}}^2 = \\frac{\\sum_{x,i} (y_{xi}-\\overline{y})^2}{n}," }, { "math_id": 6, "text": "x " }, { "math_id": 7, "text": "\\eta" }, { "math_id": 8, "text": "\\eta=0" }, { "math_id": 9, "text": "\\eta=1" }, { "math_id": 10, "text": "5 (36-52)^2 + 4 (33-52)^2 +6 (78-52)^2 = 6780." }, { "math_id": 11, "text": "\\eta^2 = \\frac{6780}{9640}=0.7033\\ldots" }, { "math_id": 12, "text": "\\eta = \\sqrt{\\frac{6780}{9640}}=0.8386\\ldots." }, { "math_id": 13, "text": "\\eta = 1" }, { "math_id": 14, "text": "\\eta = 0" }, { "math_id": 15, "text": "\\eta^2" } ]
https://en.wikipedia.org/wiki?curid=699570
69966406
Open set condition
Condition for fractals in math In fractal geometry, the open set condition (OSC) is a commonly imposed condition on self-similar fractals. In some sense, the condition imposes restrictions on the overlap in a fractal construction. Specifically, given an iterated function system of contractive mappings formula_0, the open set condition requires that there exists a nonempty, open set V satisfying two conditions: formula_1 and the sets formula_2 are pairwise disjoint. Introduced in 1946 by P. A. P. Moran, the open set condition is used to compute the dimensions of certain self-similar fractals, notably the Sierpinski gasket. It is also used to simplify computation of the packing measure. An equivalent statement of the open set condition is to require that the s-dimensional Hausdorff measure of the set is greater than zero. Computing Hausdorff dimension. When the open set condition holds and each formula_3 is a similitude (that is, a composition of an isometry and a dilation around some point), then the unique fixed point of formula_4 is a set whose Hausdorff dimension is the unique solution for "s" of the following: formula_5 where ri is the magnitude of the dilation of the i-th similitude. With this theorem, the Hausdorff dimension of the Sierpinski gasket can be calculated. Consider three non-collinear points "a"1, "a"2, "a"3 in the plane R2 and let formula_3 be the dilation of ratio 1/2 around "ai". The unique non-empty fixed point of the corresponding mapping formula_4 is a Sierpinski gasket, and the dimension "s" is the unique solution of formula_6 Taking natural logarithms of both sides of the above equation, we can solve for "s", that is: "s" = ln(3)/ln(2). The Sierpinski gasket is self-similar and satisfies the OSC. Strong open set condition. The strong open set condition (SOSC) is an extension of the open set condition. A fractal F satisfies the SOSC if, in addition to satisfying the OSC, the intersection between F and the open set V is nonempty. The two conditions are equivalent for self-similar and self-conformal sets, but not for certain classes of other sets, such as function systems with infinite mappings and in non-Euclidean metric spaces. In these cases, the SOSC is indeed a stronger condition. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
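As a small computational sketch (not part of the article), the Moran equation above can be solved numerically for arbitrary contraction ratios; for the three ratio-1/2 maps of the Sierpinski gasket it reproduces s = ln(3)/ln(2):

# Solve sum_i r_i^s = 1 by bisection; the left-hand side is strictly decreasing in s.
import math

def similarity_dimension(ratios, lo=0.0, hi=10.0, tol=1e-12):
    f = lambda s: sum(r ** s for r in ratios) - 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:     # root lies above mid
            lo = mid
        else:                # root lies below mid
            hi = mid
    return 0.5 * (lo + hi)

print(similarity_dimension([0.5, 0.5, 0.5]))   # ~1.58496
print(math.log(3) / math.log(2))               # 1.58496... for comparison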
[ { "math_id": 0, "text": "\\psi_1, \\ldots, \\psi_m" }, { "math_id": 1, "text": " \\bigcup_{i=1}^m\\psi_i (V) \\subseteq V, " }, { "math_id": 2, "text": "\\psi_1(V), \\ldots, \\psi_m(V)" }, { "math_id": 3, "text": "\\psi_i" }, { "math_id": 4, "text": "\\psi" }, { "math_id": 5, "text": " \\sum_{i=1}^m r_i^s = 1. " }, { "math_id": 6, "text": " \\left(\\frac{1}{2}\\right)^s+\\left(\\frac{1}{2}\\right)^s+\\left(\\frac{1}{2}\\right)^s = 3 \\left(\\frac{1}{2}\\right)^s =1. " } ]
https://en.wikipedia.org/wiki?curid=69966406
699689
Polaron
Quasiparticle in condensed matter physics A polaron is a quasiparticle used in condensed matter physics to understand the interactions between electrons and atoms in a solid material. The polaron concept was proposed by Lev Landau in 1933 and Solomon Pekar in 1946 to describe an electron moving in a dielectric crystal where the atoms displace from their equilibrium positions to effectively screen the charge of an electron; this accompanying cloud of lattice distortion is known as a phonon cloud. This lowers the electron mobility and increases the electron's effective mass. The general concept of a polaron has been extended to describe other interactions between the electrons and ions in metals that result in a bound state, or a lowering of energy compared to the non-interacting system. Major theoretical work has focused on solving the Fröhlich and Holstein Hamiltonians. Finding exact numerical solutions for the case of one or two electrons in a large crystal lattice, and studying the case of many interacting electrons, remain active fields of research. Experimentally, polarons are important to the understanding of a wide variety of materials. The electron mobility in semiconductors can be greatly decreased by the formation of polarons. Organic semiconductors are also sensitive to polaronic effects, which is particularly relevant in the design of organic solar cells that effectively transport charge. Polarons are also important for interpreting the optical conductivity of these types of materials. The polaron, a fermionic quasiparticle, should not be confused with the polariton, a bosonic quasiparticle analogous to a hybridized state between a photon and an optical phonon. Polaron theory. The energy spectrum of an electron moving in a periodic potential of a rigid crystal lattice is called the Bloch spectrum, which consists of allowed bands and forbidden bands. An electron with energy inside an allowed band moves as a free electron but has an effective mass that differs from the electron mass in vacuum. However, a crystal lattice is deformable and displacements of atoms (ions) from their equilibrium positions are described in terms of phonons. Electrons interact with these displacements, and this interaction is known as electron-phonon coupling. One possible scenario was proposed in the seminal 1933 paper by Lev Landau, which includes the production of a lattice defect such as an F-center and a trapping of the electron by this defect. A different scenario was proposed by Solomon Pekar that envisions dressing the electron with lattice polarization (a cloud of virtual polar phonons). Such an electron with the accompanying deformation moves freely across the crystal, but with increased effective mass. Pekar coined for this charge carrier the term polaron. Landau and Pekar constructed the basis of polaron theory. A charge placed in a polarizable medium will be screened. Dielectric theory describes the phenomenon by the induction of a polarization around the charge carrier. The induced polarization will follow the charge carrier when it is moving through the medium. The carrier together with the induced polarization is considered as one entity, which is called a polaron (see Fig. 1). While polaron theory was originally developed for electrons, there is no fundamental reason why the charge carrier could not be any other charged particle interacting with phonons. Indeed, other charged particles such as (electron) holes and ions generally follow the polaron theory.
For example, the proton polaron was identified experimentally in 2017, on ceramic electrolytes, after its existence had been hypothesized. Usually, in covalent semiconductors the coupling of electrons with lattice deformation is weak and polarons do not form. In polar semiconductors the electrostatic interaction with the induced polarization is strong and polarons are formed at low temperature, provided that their concentration is not large and the screening is not efficient. Another class of materials in which polarons are observed is molecular crystals, where the interaction with molecular vibrations may be strong. In the case of polar semiconductors, the interaction with polar phonons is described by the Fröhlich Hamiltonian. On the other hand, the interaction of electrons with molecular phonons is described by the Holstein Hamiltonian. Usually, the models describing polarons may be divided into two classes. The first class represents continuum models where the discreteness of the crystal lattice is neglected. In that case, polarons are weakly coupled or strongly coupled depending on whether the polaron binding energy is small or large compared to the phonon energy. The second class of systems commonly considered are lattice models of polarons. In this case, there may be small or large polarons, depending on the size of the polaron radius relative to the lattice constant a. A conduction electron in an ionic crystal or a polar semiconductor is the prototype of a polaron. Herbert Fröhlich proposed a model Hamiltonian for this polaron through which its dynamics are treated quantum mechanically (Fröhlich Hamiltonian). The strength of the electron-phonon interaction is determined by the dimensionless coupling constant formula_0. Here formula_1 is the electron mass, formula_2 is the phonon frequency and formula_3, where formula_4 and formula_5 are the static and high-frequency dielectric constants. In table 1 the Fröhlich coupling constant is given for a few solids. The Fröhlich Hamiltonian for a single electron in a crystal using second quantization notation is: formula_6 formula_7 formula_8 formula_9 The exact form of γ depends on the material and the type of phonon being used in the model. In the case of a single polar mode, formula_10, where formula_11 is the volume of the unit cell. In the case of a molecular crystal, γ is usually a momentum-independent constant. A detailed advanced discussion of the variations of the Fröhlich Hamiltonian can be found in the works of J. T. Devreese and A. S. Alexandrov. The terms Fröhlich polaron and large polaron are sometimes used synonymously since the Fröhlich Hamiltonian includes the continuum approximation and long-range forces. There is no known exact solution for the Fröhlich Hamiltonian with longitudinal optical (LO) phonons and linear formula_12 (the most commonly considered variant of the Fröhlich polaron) despite extensive investigations. Despite the lack of an exact solution, some approximations of the polaron properties are known. The physical properties of a polaron differ from those of a band carrier. A polaron is characterized by its "self-energy" formula_13, an "effective mass" formula_14 and by its characteristic "response" to external electric and magnetic fields (e. g. dc mobility and optical absorption coefficient).
When the coupling is weak (formula_15 small), the self-energy of the polaron can be approximated as: formula_16 and the polaron mass formula_17, which can be measured by cyclotron resonance experiments, is larger than the band mass "formula_1" of the charge carrier without self-induced polarization: formula_18 When the coupling is strong (α large), a variational approach due to Landau and Pekar indicates that the self-energy is proportional to α² and the polaron mass scales as "α"⁴. The Landau–Pekar variational calculation yields an upper bound to the polaron self-energy formula_19, valid for "all" "α", where formula_20 is a constant determined by solving an integro-differential equation. It was an open question for many years whether this expression was asymptotically exact as α tends to infinity. Finally, Donsker and Varadhan, applying large deviation theory to Feynman's path integral formulation for the self-energy, showed that this Landau–Pekar formula is indeed asymptotically exact at large α. Later, Lieb and Thomas gave a shorter proof using more conventional methods, and with explicit bounds on the lower order corrections to the Landau–Pekar formula. Feynman introduced the variational principle for path integrals to study the polaron. He simulated the interaction between the electron and the polarization modes by a harmonic interaction between a hypothetical particle and the electron. The analysis of an exactly solvable ("symmetrical") 1D-polaron model, Monte Carlo schemes and other numerical schemes demonstrate the remarkable accuracy of Feynman's path-integral approach to the polaron ground-state energy. Experimentally more directly accessible properties of the polaron, such as its mobility and optical absorption, have been investigated subsequently. In the strong coupling limit, formula_21, the spectrum of excited states of a polaron begins with polaron-phonon bound states with energies less than formula_22, where formula_23 is the frequency of optical phonons. In the lattice models the main parameter is the polaron binding energy: formula_24, where the summation is taken over the Brillouin zone. Note that this binding energy is purely adiabatic, i.e. does not depend on the ionic masses. For polar crystals the value of the polaron binding energy is strictly determined by the dielectric constants formula_25, formula_26, and is of the order of 0.3-0.8 eV. If the polaron binding energy formula_27 is smaller than the hopping integral "t", a large polaron is formed for some types of electron-phonon interaction. In the case when formula_28, a small polaron is formed. There are two limiting cases in the lattice polaron theory. In the physically important adiabatic limit formula_29 all terms which involve ionic masses cancel, and the formation of the polaron is described by a nonlinear Schrödinger equation with a nonadiabatic correction describing phonon frequency renormalization and polaron tunneling. In the opposite limit formula_30 the theory represents the expansion in formula_31. Polaron optical absorption. The expression for the magnetooptical absorption of a polaron is: formula_32 Here, formula_33 is the cyclotron frequency for a rigid-band electron. The magnetooptical absorption Γ(Ω) at the frequency Ω takes the form given in equation (3), where Σ(Ω) is the so-called "memory function", which describes the dynamics of the polaron. Σ(Ω) depends also on α, β and formula_34.
In the absence of an external magnetic field (formula_35) the optical absorption spectrum (3) of the polaron at weak coupling is determined by the absorption of radiation energy, which is reemitted in the form of LO phonons. At larger coupling, formula_36, the polaron can undergo transitions toward a relatively stable internal excited state called the "relaxed excited state" (RES) (see Fig. 2). The RES peak in the spectrum also has a phonon sideband, which is related to a Franck–Condon-type transition. A comparison of the DSG results with the optical conductivity spectra given by approximation-free numerical and approximate analytical approaches is given in the literature. Calculations of the optical conductivity for the Fröhlich polaron performed within the Diagrammatic Quantum Monte Carlo method (see Fig. 3) fully confirm the results of the path-integral variational approach at formula_37 In the intermediate coupling regime formula_38 the low-energy behavior and the position of the maximum of the numerically obtained optical conductivity spectrum follow well the prediction of Devreese. There are the following qualitative differences between the two approaches in the intermediate and strong coupling regime: in the Diagrammatic Quantum Monte Carlo results, the dominant peak broadens and the second peak does not develop, giving rise instead to a flat shoulder in the optical conductivity spectrum at formula_39. This behavior can be attributed to the optical processes with participation of two or more phonons. The nature of the excited states of a polaron needs further study. The application of a sufficiently strong external magnetic field allows one to satisfy the resonance condition formula_40, which (for formula_41) determines the polaron cyclotron resonance frequency. The polaron cyclotron mass can also be derived from this condition. Using the most accurate theoretical polaron models to evaluate formula_42, the experimental cyclotron data can be well accounted for. Evidence for the polaron character of charge carriers in AgBr and AgCl was obtained through high-precision cyclotron resonance experiments in external magnetic fields up to 16 T. The calculated all-coupling magneto-absorption leads to the best quantitative agreement between theory and experiment for AgBr and AgCl. This quantitative interpretation of the cyclotron resonance experiment in AgBr and AgCl by the theory of Peeters provided one of the most convincing and clearest demonstrations of Fröhlich polaron features in solids. Experimental data on the magnetopolaron effect, obtained using far-infrared photoconductivity techniques, have been applied to study the energy spectrum of shallow donors in polar semiconductor layers of CdTe. The polaron effect well above the LO phonon energy was studied through cyclotron resonance measurements in ultra-high magnetic fields, e. g., in II–VI semiconductors. The resonant polaron effect manifests itself when the cyclotron frequency approaches the LO phonon energy in sufficiently high magnetic fields. In the lattice models the optical conductivity is given by the formula: formula_43 Here formula_44 is the activation energy of the polaron, which is of the order of the polaron binding energy formula_27. This formula was derived and extensively discussed in the literature, and was tested experimentally, for example, in photodoped parent compounds of high-temperature superconductors. Polarons in two dimensions and in quasi-2D structures.
The great interest in the study of the two-dimensional electron gas (2DEG) has also resulted in many investigations of the properties of polarons in two dimensions. A simple model for the 2D polaron system consists of an electron confined to a plane, interacting via the Fröhlich interaction with the LO phonons of a 3D surrounding medium. The self-energy and the mass of such a 2D polaron are no longer described by the expressions valid in 3D; for weak coupling they can be approximated as: formula_45 formula_46 It has been shown that simple scaling relations exist, connecting the physical properties of polarons in 2D with those in 3D. An example of such a scaling relation is: formula_47 where formula_48 (formula_49) and formula_50 (formula_51) are, respectively, the polaron and the electron-band masses in 2D (3D). The effect of the confinement of a Fröhlich polaron is to enhance the "effective" polaron coupling. However, many-particle effects tend to counterbalance this effect because of screening. In 2D systems, too, cyclotron resonance is a convenient tool to study polaron effects. Although several other effects have to be taken into account (nonparabolicity of the electron bands, many-body effects, the nature of the confining potential, etc.), the polaron effect is clearly revealed in the cyclotron mass. An interesting 2D system consists of electrons on films of liquid He. In this system the electrons couple to the ripplons of the liquid He, forming "ripplopolarons". The effective coupling can be relatively large and, for some values of the parameters, self-trapping can result. The acoustic nature of the ripplon dispersion at long wavelengths is a key aspect of the trapping. For GaAs/AlxGa1−xAs quantum wells and superlattices, the polaron effect is found to decrease the energy of the shallow donor states at low magnetic fields and leads to a resonant splitting of the energies at high magnetic fields. The energy spectra of such polaronic systems as shallow donors ("bound polarons"), e. g., the D0 and D− centres, constitute the most complete and detailed polaron spectroscopy realised in the literature. In GaAs/AlAs quantum wells with sufficiently high electron density, anticrossing of the cyclotron-resonance spectra has been observed near the GaAs transverse optical (TO) phonon frequency rather than near the GaAs LO-phonon frequency. This anticrossing near the TO-phonon frequency was explained in the framework of the polaron theory. Besides optical properties, many other physical properties of polarons have been studied, including the possibility of self-trapping, polaron transport, magnetophonon resonance, etc. Extensions of the polaron concept. Significant extensions of the polaron concept include the acoustic polaron, piezoelectric polaron, electronic polaron, bound polaron, trapped polaron, spin polaron, molecular polaron, solvated polaron, polaronic exciton, Jahn-Teller polaron, small polaron, bipolarons and many-polaron systems. These extensions of the concept are invoked, e. g., to study the properties of conjugated polymers, colossal magnetoresistance perovskites, high-formula_52 superconductors, layered MgB2 superconductors, fullerenes, quasi-1D conductors, and semiconductor nanostructures. The possibility that polarons and bipolarons play a role in high-formula_52 superconductors has renewed interest in the physical properties of many-polaron systems and, in particular, in their optical properties. Theoretical treatments have been extended from one-polaron to many-polaron systems.
A new aspect of the polaron concept has been investigated for semiconductor nanostructures: the exciton-phonon states are not factorizable into an adiabatic product Ansatz, so that a "non-adiabatic" treatment is needed. The "non-adiabaticity" of the exciton-phonon systems leads to a strong enhancement of the phonon-assisted transition probabilities (as compared to those treated adiabatically) and to multiphonon optical spectra that are considerably different from the Franck–Condon progression even for small values of the electron-phonon coupling constant as is the case for typical semiconductor nanostructures. In biophysics, the Davydov soliton is a self-trapped amide I excitation that propagates along the protein α-helix and is a solution of the Davydov Hamiltonian. The mathematical techniques that are used to analyze Davydov's soliton are similar to some that have been developed in polaron theory. In this context the Davydov soliton corresponds to a "polaron" that is (i) "large" so the continuum limit approximation is justified, (ii) "acoustic" because the self-localization arises from interactions with acoustic modes of the lattice, and (iii) "weakly coupled" because the anharmonic energy is small compared with the phonon bandwidth. It has been shown that the system of an impurity in a Bose–Einstein condensate is also a member of the polaron family. This allows the hitherto inaccessible strong coupling regime to be studied, since the interaction strengths can be externally tuned through the use of a Feshbach resonance. This was recently realized experimentally by two research groups. The existence of the polaron in a Bose–Einstein condensate was demonstrated for both attractive and repulsive interactions, including the strong coupling regime, and was observed dynamically. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\alpha = (e^2 / \\kappa)(m / 2 \\hbar^3\\omega)^{1/2}" }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "\\omega" }, { "math_id": 3, "text": "\\kappa^{-1} = {\\epsilon}_{\\infin}^{-1} - {\\epsilon}_{0}^{-1}" }, { "math_id": 4, "text": "{\\epsilon}_{0}" }, { "math_id": 5, "text": "{\\epsilon}_{\\infin}" }, { "math_id": 6, "text": "\nH = H_{\\rm e} + H_{\\rm ph} + H_{\\rm e-ph} \n" }, { "math_id": 7, "text": " \n H_{\\rm e} = \\sum_{k,s} \\xi(k,s) c_{k,s}^\\dagger c_{k,s}\n" }, { "math_id": 8, "text": " \n H_{\\rm ph} = \\sum_{q,v} \\omega_{q,v} a_{q,v}^\\dagger a_{q,v}\n" }, { "math_id": 9, "text": " \nH_{\\rm e-ph} = \\frac 1 {\\sqrt{2N} } \\sum_{k,s,q,v} \\gamma(\\alpha , q , k , v ) \\omega_{qv} ( c_{k ,s}^\\dagger c_{k-q , s} a_{q,v} + c_{k-q ,s}^\\dagger c_{k , s} a^\\dagger_{q,v} )\n" }, { "math_id": 10, "text": "\\gamma(q) = i\\hbar\\omega(\\frac{4\\pi\\alpha}{V_0}(\\frac{\\hbar}{m\\omega})^{1/2})^{1/2}\\frac{1}{q}" }, { "math_id": 11, "text": "V_0" }, { "math_id": 12, "text": " \\gamma " }, { "math_id": 13, "text": "\\Delta E" }, { "math_id": 14, "text": "m^*" }, { "math_id": 15, "text": "\\alpha" }, { "math_id": 16, "text": " \n\\frac{\\Delta E}{\\hbar\\omega } \\approx -\\alpha -0.015919622\\alpha^2, \\qquad \\qquad \\qquad (1)\\, " }, { "math_id": 17, "text": "m*" }, { "math_id": 18, "text": " \n\\frac{m^*}{m} \\approx 1+\\frac{\\alpha}{6}+0.0236\\alpha^2. \\qquad \\qquad \\qquad (2)" }, { "math_id": 19, "text": "E < -C_{PL} \\alpha^2 " }, { "math_id": 20, "text": "C_{PL}" }, { "math_id": 21, "text": "\\alpha \\gg 1" }, { "math_id": 22, "text": "\\hbar\\omega_0" }, { "math_id": 23, "text": "\\omega_0" }, { "math_id": 24, "text": "E_p = \\frac{1}{2N}\\sum_{q}|\\gamma(q)|^{2}/\\hbar\\omega" }, { "math_id": 25, "text": "\\epsilon_0" }, { "math_id": 26, "text": "\\epsilon_\\infin" }, { "math_id": 27, "text": "E_p" }, { "math_id": 28, "text": "E_p > t" }, { "math_id": 29, "text": "t\\gg\\hbar\\omega" }, { "math_id": 30, "text": "t\\ll\\hbar\\omega" }, { "math_id": 31, "text": "t/\\hbar\\omega" }, { "math_id": 32, "text": " \n\\Gamma(\\Omega) \\propto -\\frac{\\operatorname{Im} \\Sigma(\\Omega)}{\\left[\\Omega-\\omega_{\\mathrm{c}}-\\operatorname{Re} \\Sigma(\\Omega)\\right]^2 + \\left[\\operatorname{Im}\\Sigma(\\Omega)\\right]^2}. \\qquad\\qquad\\qquad (3)\n " }, { "math_id": 33, "text": "\\omega_c" }, { "math_id": 34, "text": "\\omega_{c}" }, { "math_id": 35, "text": "\\omega_{c}=0" }, { "math_id": 36, "text": "\\alpha \\ge 5.9" }, { "math_id": 37, "text": "\\alpha \\lesssim 3." }, { "math_id": 38, "text": "3<\\alpha <6," }, { "math_id": 39, "text": "\\alpha =6" }, { "math_id": 40, "text": "\\Omega =\\omega_{\\mathrm{c}} + \\operatorname{Re} \\Sigma (\\Omega )" }, { "math_id": 41, "text": "\\omega_c < \\omega" }, { "math_id": 42, "text": "\\Sigma (\\Omega )" }, { "math_id": 43, "text": "\n\\sigma(\\Omega) = n_pe^2a^2 \\frac{\\pi^{\\frac{1}{2}}t^2\\left(1-e^{\\frac{\\hbar\\Omega}{k_{\\rm B}T}}\\right)}{2\\hbar^2\\Omega(E_ak_{\\rm B}T)^{1/2}}\\exp\\left(-\\frac{(\\hbar\\Omega-4E_a)^2}{16E_ak_{\\rm B}T}\\right)\n" }, { "math_id": 44, "text": "E_a" }, { "math_id": 45, "text": "\n\\frac{\\Delta E}{\\hbar \\omega} \\approx -\\frac{\\pi}{2}\\alpha\\ - 0.06397\\alpha^2; \\qquad \\qquad \\qquad (4)\\," }, { "math_id": 46, "text": " \n\\frac{m^*}{m} \\approx 1+\\frac{\\pi}{8}\\alpha\\ + 0.1272348\\alpha^2. 
\\qquad \\qquad \\qquad (5)\\," }, { "math_id": 47, "text": "\n\\frac{m^{*}_{\\rm 2D}(\\alpha)}{m_{\\rm 2D}}=\\frac{m^{*}_{\\rm 3D}(\\frac{3}{4}\\pi\\alpha)}{m_{\\rm 3D}}, \\qquad \\qquad \\qquad (6)\\," }, { "math_id": 48, "text": "m_\\mathrm{2D}^*" }, { "math_id": 49, "text": "m_\\mathrm{3D}^*" }, { "math_id": 50, "text": "m_\\mathrm{2D}" }, { "math_id": 51, "text": "m_\\mathrm{3D}" }, { "math_id": 52, "text": "T_{c}" } ]
https://en.wikipedia.org/wiki?curid=699689
699706
Sierpiński curve
Sierpiński curves are a recursively defined sequence of continuous closed plane fractal curves discovered by Wacław Sierpiński, which in the limit formula_0 completely fill the unit square: thus their limit curve, also called the Sierpiński curve, is an example of a space-filling curve. Because the Sierpiński curve is space-filling, its Hausdorff dimension (in the limit formula_1) is formula_2. The Euclidean length of the formula_3th iteration curve formula_4 is formula_5 i.e., it grows "exponentially" with formula_6 beyond any limit, whereas the limit for formula_0 of the area enclosed by formula_4 is formula_7 that of the square (in Euclidean metric). Uses of the curve. The Sierpiński curve is useful in several practical applications because it is more symmetrical than other commonly studied space-filling curves. For example, it has been used as a basis for the rapid construction of an approximate solution to the Travelling Salesman Problem (which asks for the shortest sequence of a given set of points): The heuristic is simply to visit the points in the same sequence as they appear on the Sierpiński curve. To do this requires two steps: First compute an inverse image of each point to be visited; then sort the values. This idea has been used to build routing systems for commercial vehicles based only on Rolodex card files. A space-filling curve is a continuous map of the unit interval onto a unit square and so a (pseudo) inverse maps the unit square to the unit interval. One way of constructing a pseudo-inverse is as follows. Let the lower-left corner (0, 0) of the unit square correspond to 0.0 (and 1.0). Then the upper-left corner (0, 1) must correspond to 0.25, the upper-right corner (1, 1) to 0.50, and the lower-right corner (1, 0) to 0.75. The inverse map of interior points is computed by taking advantage of the recursive structure of the curve. Here is a function coded in Java that will compute the relative position of any point on the Sierpiński curve (that is, a pseudo-inverse value). It takes as input the coordinates of the point (x,y) to be inverted, and the corners of an enclosing right isosceles triangle (ax, ay), (bx, by), and (cx, cy). (The unit square is the union of two such triangles.) The remaining parameters specify the level of accuracy to which the inverse should be computed.
// Helper assumed by the routine below: sqr(v) returns v*v.
static double sqr(double v) { return v * v; }

static long sierp_pt2code( double ax, double ay, double bx, double by,
                           double cx, double cy, int currentLevel, int maxLevel,
                           long code, double x, double y )
{
    if (currentLevel <= maxLevel) {
        currentLevel++;
        // The triangle is split at the midpoint of the side (ax,ay)-(cx,cy);
        // recurse into the half whose corner is nearer to (x,y), appending one bit.
        if ((sqr(x-ax) + sqr(y-ay)) < (sqr(x-cx) + sqr(y-cy))) {
            code = sierp_pt2code( ax, ay, (ax+cx)/2.0, (ay+cy)/2.0, bx, by,
                                  currentLevel, maxLevel, 2 * code + 0, x, y );
        } else {
            code = sierp_pt2code( bx, by, (ax+cx)/2.0, (ay+cy)/2.0, cx, cy,
                                  currentLevel, maxLevel, 2 * code + 1, x, y );
        }
    }
    return code;
}
Representation as Lindenmayer system. The Sierpiński curve can be expressed by a rewrite system (L-system). Alphabet: F, G, X Constants: F, G, +, − Axiom: F−−XF−−F−−XF Production rules: X → XF+G+XF−−F−−XF+G+X Angle: 45 Here, both "F" and "G" mean "draw forward", + means "turn left 45°", and "−" means "turn right 45°" (see turtle graphics). The curve is usually drawn with different lengths for F and G. The Sierpiński square curve can be similarly expressed: Alphabet: F, X Constants: F, +, − Axiom: F+XF+F+XF Production rules: X → XF−F+F−XF+F+XF−F+F−X Angle: 90 Arrowhead curve. The Sierpiński arrowhead curve is a fractal curve similar in appearance and identical in limit to the Sierpiński triangle. 
The Sierpiński arrowhead curve draws an equilateral triangle with triangular holes at equal intervals. It can be described with two substituting production rules: (A → B-A-B) and (B → A+B+A). A and B recur and at the bottom do the same thing — draw a line. Plus and minus (+ and -) mean turn 60 degrees either left or right. The terminating point of the Sierpiński arrowhead curve is always the same provided you recur an even number of times and you halve the length of the line at each recursion. If you recur to an odd depth (order is odd) then you end up turned 60 degrees, at a different point in the triangle. An alternate construction is given in the article on the de Rham curve: one uses the same technique as the de Rham curves, but instead of using a binary (base-2) expansion, one uses a ternary (base-3) expansion. Code. Given the drawing functions codice_0 and codice_1, the code to draw an (approximate) Sierpiński arrowhead curve looks like this:
void curve(unsigned order, double length, int angle);  /* forward declaration */

void sierpinski_arrowhead_curve(unsigned order, double length)
{
    // If order is even we can just draw the curve.
    if ( 0 == (order & 1) ) {
        curve(order, length, +60);
    }
    else /* order is odd */ {
        turn( +60);
        curve(order, length, -60);
    }
}

void curve(unsigned order, double length, int angle)
{
    if ( 0 == order ) {
        draw_line(length);
    } else {
        curve(order - 1, length / 2, -angle);
        turn(angle);
        curve(order - 1, length / 2, angle);
        turn(angle);
        curve(order - 1, length / 2, -angle);
    }
}
Representation as Lindenmayer system. The Sierpiński arrowhead curve can be expressed by a rewrite system (L-system). Alphabet: X, Y Constants: F, +, − Axiom: XF Production rules: X → YF + XF + Y Y → XF − YF − X Here, "F" means "draw forward", + means "turn left 60°", and "−" means "turn right 60°" (see turtle graphics). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n \\to \\infty " }, { "math_id": 1, "text": " n \\to \\infty " }, { "math_id": 2, "text": " 2 " }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": " S_n " }, { "math_id": 5, "text": "\\begin{aligned}l_n&= {2 \\over 3} (1+\\sqrt 2) 2^n - {1 \\over 3} (2-\\sqrt 2) {1 \\over 2^n},\\\\\n &= \\frac{2^{7/4}}{3} \\sinh(n\\log(2)+\\mathrm{asinh}(\\frac{3}{2^{5/4}}))\\end{aligned}" }, { "math_id": 6, "text": " n " }, { "math_id": 7, "text": " 5/12 \\," } ]
https://en.wikipedia.org/wiki?curid=699706
699722
Rømer scale
Scale of temperature The Rømer scale (notated as °Rø), also known as Romer or Roemer, is a temperature scale named after the Danish astronomer Ole Christensen Rømer, who developed it for his own use in around 1702. It is based on the freezing point of pure water being 7.5 degrees and the boiling point of water being 60 degrees. Degree measurements. There is no solid evidence as to why Rømer assigned the value of 7.5 degrees to water's freezing point. One proposed explanation is that Rømer initially intended the 0-degree point of his scale to correspond to the eutectic temperature of ammonium chloride brine, which was the coldest easily-reproducible temperature at the time and had already been used as the lower fiducial point for multiple temperature scales. The boiling point of water was defined as 60 degrees. Rømer then saw that the freezing point of pure water was roughly one eighth of the way (about 7.5 degrees) between these two points, so he redefined the lower fixed point to be the freezing point of water at precisely 7.5 degrees. This did not greatly change the scale but made it easier to calibrate by defining it by reference to pure water. Thus the unit of this scale, a Rømer degree, is formula_0 of a kelvin or Celsius degree. The symbol is sometimes given as °R, but since that is also sometimes used for the Réaumur and Rankine scales, the other symbol °Rø is to be preferred. Historical significance. Rømer's scale would have been lost to history if Rømer's notebook, "Adversaria", had not been found and published in 1910 and letters of correspondence between Daniel Gabriel Fahrenheit and Herman Boerhaave had not been uncovered in 1936. These documents demonstrate the important influence Rømer's work had on Fahrenheit, a young maker and seller of barometers and thermometers. Fahrenheit visited Rømer in Copenhagen in 1708 and, while there, became familiar with Rømer's work with thermometers. Rømer also told Fahrenheit that demand for accurate thermometers was high. The visit ignited a keen interest in Fahrenheit to try to improve thermometers. By 1713, Fahrenheit was creating his own thermometers with a scale heavily borrowed from Rømer that ranged from 0 to 24 degrees but with each degree divided into quarters. At some point, the quarter degrees became whole degrees and Fahrenheit made other adjustments to Rømer's scale, modifying the freezing point from 7.5 degrees to 8, which, when multiplied by four, corresponds to 32 degrees on Fahrenheit's scale. The 22.5-degree point would have become 90 degrees; however, Fahrenheit rounded this up to 24 degrees (96 when multiplied by 4) in order to make calculations easier. After Fahrenheit perfected the crafting of his accurate thermometers, their use became widespread and the Fahrenheit scale is still used today in the United States and a handful of other countries. Notes and references. &lt;templatestyles src="Reflist/styles.css" /&gt;
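Because the scale is fixed by the two reference points quoted above (water freezes at 7.5 °Rø and boils at 60 °Rø), conversion to and from the Celsius scale is a simple linear map, with one Rømer degree equal to 40/21 of a Celsius degree. The following small sketch is illustrative only; the function names are mine.
def romer_to_celsius(ro):
    # 0 degrees C corresponds to 7.5 degrees Romer; 1 Romer degree = 40/21 Celsius degrees
    return (ro - 7.5) * 40.0 / 21.0

def celsius_to_romer(c):
    return c * 21.0 / 40.0 + 7.5

print(romer_to_celsius(7.5))    # 0.0   (freezing point of water)
print(romer_to_celsius(60.0))   # 100.0 (boiling point of water)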
[ { "math_id": 0, "text": "100 \\div 52.5 = \\frac{40}{21}" } ]
https://en.wikipedia.org/wiki?curid=699722
699772
Star number
Centered figurate number A star number is a centered figurate number that represents a centered hexagram (six-pointed star), such as the Star of David or the board on which Chinese checkers is played. The "n"th star number is given by the formula "Sn" = 6"n"("n" − 1) + 1. The first 43 star numbers are 1, 13, 37, 73, 121, 181, 253, 337, 433, 541, 661, 793, 937, 1093, 1261, 1441, 1633, 1837, 2053, 2281, 2521, 2773, 3037, 3313, 3601, 3901, 4213, 4537, 4873, 5221, 5581, 5953, 6337, 6733, 7141, 7561, 7993, 8437, 8893, 9361, 9841, 10333, 10837 (sequence in the OEIS) The digital root of a star number is always 1 or 4, and progresses in the sequence 1, 4, 1. The last two digits of a star number in base 10 are always 01, 13, 21, 33, 37, 41, 53, 61, 73, 81, or 93. Unique among the star numbers is 35113, since its prime factors (i.e., 13, 37 and 73) are also consecutive star numbers. Relationships to other kinds of numbers. Geometrically, the "n"th star number is made up of a central point and 12 copies of the ("n"−1)th triangular number — making it numerically equal to the "n"th centered dodecagonal number, but differently arranged. As such, the "n"th star number can be written as "Sn" = 1 + 12"Tn"−1, i.e., one more than 12 times the ("n" − 1)th triangular number, where "Tn" = "n"("n" + 1)/2. Infinitely many star numbers are also triangular numbers, the first four being "S"1 = 1 = "T"1, "S"7 = 253 = "T"22, "S"91 = 49141 = "T"313, and "S"1261 = 9533161 = "T"4366 (sequence in the OEIS). Infinitely many star numbers are also square numbers, the first four being "S"1 = 1^2, "S"5 = 121 = 11^2, "S"45 = 11881 = 109^2, and "S"441 = 1164241 = 1079^2 (sequence in the OEIS), for square stars (sequence in the OEIS). A star prime is a star number that is prime. The first few star primes (sequence in the OEIS) are 13, 37, 73, 181, 337, 433, 541, 661, 937. A superstar prime is a star prime whose prime index is also a star number. The first two such numbers are 661 and 1750255921. A reverse superstar prime is a star number whose index is a star prime. The first few such numbers are 937, 7993, 31537, 195481, 679393, 1122337, 1752841, 2617561, 5262193. The term "star number" or "stellate number" is occasionally used to refer to octagonal numbers. Other properties. The harmonic series of unit fractions with the star numbers as denominators is: formula_0 The alternating series of unit fractions with the star numbers as denominators is: formula_1
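The closed formula above makes the sequence easy to regenerate and check. The following short sketch (illustrative only; it uses a naive trial-division primality test) lists the first ten star numbers and the star primes among them, matching the values given above.
def star(n):
    # S_n = 6n(n - 1) + 1
    return 6 * n * (n - 1) + 1

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

stars = [star(n) for n in range(1, 11)]
print(stars)                              # [1, 13, 37, 73, 121, 181, 253, 337, 433, 541]
print([s for s in stars if is_prime(s)])  # [13, 37, 73, 181, 337, 433, 541]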
[ { "math_id": 0, "text": "\n\\begin{align}\n\\sum_{n=1}^{\\infty}& \\frac{1}{S_n}\\\\\n&=1+\\frac{1}{13}+\\frac{1}{37}+\\frac{1}{73}+\\frac{1}{121}+\\frac{1}{181}+\\frac{1}{253}+\\frac{1}{337}+\\cdots\\\\\n&=\\frac\\pi{2\\sqrt3}\\tan (\\frac \\pi {2\\sqrt3})\\\\\n&\\approx 1.159173.\\\\\n\\end{align}\n" }, { "math_id": 1, "text": "\n\\begin{align}\n\\sum_{n=1}^{\\infty}& (-1)^{n-1}\\frac{1}{S_n}\\\\\n&=1-\\frac{1}{13}+\\frac{1}{37}-\\frac{1}{73}+\\frac{1}{121}-\\frac{1}{181}+\\frac{1}{253}-\\frac{1}{337}+\\cdots\\\\\n&\\approx 0.941419.\\\\\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=699772
6997954
Good–Turing frequency estimation
Statistical technique developed by Alan Turing and his assistant I. J. Good Good–Turing frequency estimation is a statistical technique for estimating the probability of encountering an object of a hitherto unseen species, given a set of past observations of objects from different species. In drawing balls from an urn, the 'objects' would be balls and the 'species' would be the distinct colours of the balls (finite but unknown in number). After drawing formula_0 red balls, formula_1 black balls and formula_2 green balls, we would ask what is the probability of drawing a red ball, a black ball, a green ball or one of a previously unseen colour. Historical background. Good–Turing frequency estimation was developed by Alan Turing and his assistant I. J. Good as part of their methods used at Bletchley Park for cracking German ciphers for the Enigma machine during World War II. Turing at first modelled the frequencies as a multinomial distribution, but found it inaccurate. Good developed smoothing algorithms to improve the estimator's accuracy. The discovery was recognised as significant when published by Good in 1953, but the calculations were difficult so it was not used as widely as it might have been. The method even gained some literary fame due to the Robert Harris novel "Enigma". In the 1990s, Geoffrey Sampson worked with William A. Gale of AT&amp;T to create and implement a simplified and easier-to-use variant of the Good–Turing method described below. Various heuristic justifications and a simple combinatorial derivation have been provided. The method. The Good–Turing estimator is largely independent of the distribution of species frequencies. Notation. Assume that formula_3 distinct species have been observed, numbered formula_4 The frequency vector, formula_5 has elements formula_6 that give the number of individuals that have been observed for species formula_7 The frequency of frequencies vector, formula_8 shows how many times each frequency "r" occurs in the vector formula_9 that is, among the elements formula_10: formula_11 For example, formula_12 is the number of species for which only one individual was observed. Note that the total number of objects observed, formula_13 can be found from formula_14 Calculation. The first step in the calculation is to estimate the probability that a future observed individual (or the next observed individual) is a member of a thus far unseen species. This estimate is: formula_15 The next step is to estimate the probability that the next observed individual is from a species which has been seen formula_16 times. For a "single" species this estimate is: formula_17 Here, the notation formula_18 means the "smoothed", or "adjusted" value of the frequency shown in parentheses. An overview of how to perform this smoothing follows in the next section (see also empirical Bayes method). To estimate the probability that the next observed individual is from any species from this group (i.e., the group of species seen formula_19 times) one can use the following formula: formula_20 Smoothing. For smoothing the erratic values in formula_21 for large r, we would like to make a plot of formula_22 versus formula_23 but this is problematic because for large formula_19 many formula_21 will be zero. Instead a revised quantity, formula_24 is plotted versus formula_25 where formula_26 is defined as formula_27 and where q, r, and t are three consecutive subscripts with non-zero counts formula_28. For the special case when r is 1, take q to be 0. In the opposite special case, when formula_29 is the index of the "last" non-zero count, replace the divisor formula_30 with formula_31 so formula_32 A simple linear regression is then fitted to the log–log plot. For small values of formula_19 it is reasonable to set formula_33 – that is, no smoothing is performed. 
For large values of r, values of formula_34 are read off the regression line. An automatic procedure (not described here) can be used to specify at what point the switch from no smoothing to linear smoothing should take place. Code for the method is available in the public domain. Derivation. Many different derivations of the above formula for formula_35 have been given. One of the simplest ways to motivate the formula is by assuming the next item will behave similarly to the previous item. The overall idea of the estimator is that currently we are seeing never-seen items at a certain frequency, seen-once items at a certain frequency, seen-twice items at a certain frequency, and so on. Our goal is to estimate just how likely each of these categories is, for the "next" item we will see. Put another way, we want to know the current rate at which seen-twice items are becoming seen-thrice items, and so on. Since we don't assume anything about the underlying probability distribution, it does sound a bit mysterious at first. But it is extremely easy to calculate these probabilities "empirically" for the "previous" item we saw, even assuming we don't remember exactly which item that was: Take all the items we have seen so far (including multiplicities) — the last item we saw was a random one of these, all equally likely. Specifically, the chance that we saw an item for the formula_36th time is simply the chance that it was one of the items that we have now seen formula_37 times, namely formula_38. In other words, our chance of seeing an item that had been seen "r" times before was formula_38. So now we simply assume that this chance will be about the same for the next item we see. This immediately gives us the formula above for formula_39, by setting formula_40. And for formula_41, to get the probability that "a particular one" of the formula_42 items is going to be the next one seen, we need to divide this probability (of seeing "some" item that has been seen "r" times) among the formula_42 possibilities for which particular item that could be. This gives us the formula formula_43. Of course, your actual data will probably be a bit noisy, so you will want to smooth the values first to get a better estimate of how quickly the category counts are growing, and this gives the formula as shown above. This approach is in the same spirit as deriving the standard Bernoulli estimator by simply asking what the two probabilities were for the previous coin flip (after scrambling the trials seen so far), given only the current result counts, while assuming nothing about the underlying distribution. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt;
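The estimates described above are straightforward to put into code. The sketch below is purely illustrative (it is not the publicly available reference code mentioned above, and all names are mine): it computes the unseen-mass estimate p_0 = N_1/N and the per-species probabilities p_r = (r + 1) S(N_{r+1}) / (N S(N_r)), using the log–log linear fit to Z_r as the smoothing for every r rather than switching between raw and smoothed counts, and it does not renormalise the resulting probabilities.
import math
from collections import Counter

def good_turing(counts):
    # counts: dict mapping species -> number of times it was observed
    # (assumes at least two distinct non-zero counts, so the regression is defined)
    N = sum(counts.values())
    freq_of_freq = Counter(counts.values())        # N_r for each observed r
    rs = sorted(freq_of_freq)

    # Z_r = N_r / (0.5 * (t - q)), with q, r, t consecutive non-zero counts;
    # q = 0 for the first count, and for the last count the divisor becomes r - q.
    Z = {}
    for i, r in enumerate(rs):
        q = rs[i - 1] if i > 0 else 0
        t = rs[i + 1] if i + 1 < len(rs) else 2 * r - q
        Z[r] = freq_of_freq[r] / (0.5 * (t - q))

    # simple linear regression of log Z_r against log r
    xs = [math.log(r) for r in rs]
    ys = [math.log(Z[r]) for r in rs]
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar

    def S(r):                                      # smoothed N_r read off the fitted line
        return math.exp(intercept + slope * math.log(r))

    p0 = freq_of_freq.get(1, 0) / N                # probability that the next item is unseen
    p = {r: (r + 1) * S(r + 1) / (N * S(r)) for r in rs}
    return p0, p

p0, p = good_turing({"red": 3, "black": 2, "green": 1, "blue": 1})
print(p0, p)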
[ { "math_id": 0, "text": "R_\\text{red}" }, { "math_id": 1, "text": "R_\\text{black}" }, { "math_id": 2, "text": "R_\\text{green}" }, { "math_id": 3, "text": "X" }, { "math_id": 4, "text": "\\; 1, \\dots, X ~." }, { "math_id": 5, "text": "\\, \\bar{R} \\;," }, { "math_id": 6, "text": "\\, R_x \\," }, { "math_id": 7, "text": "\\, x \\;." }, { "math_id": 8, "text": "\\, (N_r)_{r=0, 1, \\ldots} \\;," }, { "math_id": 9, "text": "\\bar{R}" }, { "math_id": 10, "text": "R_x" }, { "math_id": 11, "text": "N_r = \\Bigl| \\left\\{ x \\mid R_x = r \\right\\} \\Bigr| ~." }, { "math_id": 12, "text": "N_1" }, { "math_id": 13, "text": "\\, N \\,," }, { "math_id": 14, "text": "N = \\sum_{r=1}^\\infty r N_r = \\sum_{x=1}^X R_x ~." }, { "math_id": 15, "text": "p_0 = \\frac{N_1}{N} ~." }, { "math_id": 16, "text": "r" }, { "math_id": 17, "text": " p_r = \\frac{\\; (r+1) \\; S(N_{r+1}) \\;}{N\\,S(N_r)} ~." }, { "math_id": 18, "text": "S(\\cdot)" }, { "math_id": 19, "text": "\\, r \\," }, { "math_id": 20, "text": "\\frac{\\; (r+1) \\, S(N_{r+1}) \\;}{N} ~." }, { "math_id": 21, "text": "\\, N_r \\," }, { "math_id": 22, "text": "\\; \\log N_r \\;" }, { "math_id": 23, "text": "\\; \\log r \\;" }, { "math_id": 24, "text": "\\; \\log Z_r \\;," }, { "math_id": 25, "text": "\\; \\log r ~," }, { "math_id": 26, "text": "\\, Z_r \\," }, { "math_id": 27, "text": "Z_r = \\frac{N_r}{\\; \\tfrac{1}{2}\\,(t - q) \\;} ~," }, { "math_id": 28, "text": "\\; N_q, N_r, N_t \\;" }, { "math_id": 29, "text": "\\, r = r_\\mathsf{last} \\;" }, { "math_id": 30, "text": "\\, \\tfrac{1}{2}\\,(t - q) \\," }, { "math_id": 31, "text": "\\, r_\\mathsf{last} - q \\;," }, { "math_id": 32, "text": "\\; Z_{r_\\mathsf{last}} = \\frac{N_{r_\\mathsf{last}}}{\\; r_\\mathsf{last} - q \\;} ~." }, { "math_id": 33, "text": "\\; S(N_r) = N_r \\;" }, { "math_id": 34, "text": "\\; S(N_r) \\;" }, { "math_id": 35, "text": "p_r" }, { "math_id": 36, "text": "(r+1)" }, { "math_id": 37, "text": "r + 1" }, { "math_id": 38, "text": "\\frac{(r+1) N_{r+1}}{N}" }, { "math_id": 39, "text": "p_0" }, { "math_id": 40, "text": "r=0" }, { "math_id": 41, "text": "r>0" }, { "math_id": 42, "text": "N_{r}" }, { "math_id": 43, "text": "\\;p_{r} = \\frac{(r+1) N_{r+1}}{N N_r}" } ]
https://en.wikipedia.org/wiki?curid=6997954
69980624
2 Samuel 7
Second Book of Samuel chapter 2 Samuel 7 is the seventh chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's reign in Jerusalem. This chapter comes within a section of the Deuteronomistic history comprising 2 Samuel 2–8, which deals with the period when David set up his kingdom. Text. This chapter was originally written in the Hebrew language. It is divided into 29 verses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 6–7, 22–29. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. This chapter deals with two important issues, building a temple and succession to David's throne; an introduction to succession narratives in 2 Samuel 9–10 and 1 Kings 1–2. It is one of the most important sections in the Hebrew Bible (or Old Testament) and has been subject to intense research. There are three scenes in this chapter: 1. David and Nathan: David proposed to build a "house" for the Ark of the Covenant (7:1–3) 2. Nathan and God: the divine oracle a. God, who redeemed Israel, decides on his house (7:4–7) b. God will build a house for David (7:8–17) 3. David and God: David's response a. David praises God's redemptive acts (7:18-24) b. David's prayer (7:25–29) The second and third scenes are in parallel, with the first section of each scene recalling God's redemptive acts (specifically referring to the Exodus from Egypt), and the second section, introduced with "wě‘attâ" (which could be rendered as "and now" or "now therefore"; 2 Samuel 7:8, 25), signaling a consequence based on the premise in the first section. Oracles on the House for God and House of David (7:1–17). Verses 1–17 appear to form one unit, although it contains two separate oracles concerning two different issues: a house (temple) for God and a house (dynasty) for David. Sectional summary. King David consulted Nathan, a court-prophet and king's advisor, about his intention to build a temple to house the Ark of the Covenant; similar divine consultations for building temples were found in extra-biblical parallels, such as in the Egyptian "Königsnovelle". Nathan then conveyed the first oracle of YHWH (verses 5 and 7) that David was prohibited from building a temple for YHWH in Jerusalem (1 Chronicles 22:8; 28:3; 1 Kings 5:17). Nathan later supported Solomon, son of David, to be king (1 Kings 1–2) and to build a Solomonic temple. The second oracle (verses 8–16) addresses a different issue, succession to David's throne, linked to the first by the same historical setting (verses 1–3) and by employing the word "bayit" ("house") in two different ways: David was not allowed to build for YHWH a 'house' ("bayit", verses 5, 6, 13), but YHWH was going to establish for David a 'dynasty' ("bayit", verses 11, 16; thus, "house" of David). 
The core message of the second oracle is as follows: David had been called by God (verse 9) and protected against his enemies and made into a great name (verse 10); God would raise up his son to succeed him and would establish his kingdom (verse 12) and he would enjoy the status of God's son (verse 14). Additional elements are God's care of the people of Israel (verses 10–11), the eternity of David's kingdom (verses 13,16) and the contrast between David and Saul (verses 14b–15). The combined theme of David's greatness and the certainty of succession can be found in between this oracle and other texts, such as Psalm 89 by Ethan the Ezrahite. In 1 Kings 5:3–4, Solomon explained that while David was given "rest" from his enemies, it was not to the higher degree of "rest" given to Solomon, with neither "adversary nor misfortune" to impede the Temple's construction, as the fulfillment of God's covenant to 'give Israel rest from its adversaries' (Deuteronomy 12:10 and 25:19), to 'fight Israel's battles' (Deuteronomy 3:22), and to 'bestow on them the Promised Land'. "1 Now it came to pass when the king was dwelling in his house, and the Lord had given him rest from all his enemies all around, 2 that the king said to Nathan the prophet, "See now, I dwell in a house of cedar, but the ark of God dwells inside tent curtains."" [YHWH says] "I will be his father, and he shall be my son. If he commit iniquity, I will chasten him with the rod of men, and with the stripes of the children of men:" [YHWH says] "And your house and your kingdom shall be established forever before you. Your throne shall be established forever." The prayer of David (7:18–29). The second half of the chapter contains David's prayer, which could be connected with bringing the ark to Jerusalem (6:1–19) rather than with the dynastic oracle in 7:1–7. In addition there were allusions to God's promise and its 'eternal' nature (verses 22, 28–29), God's redemption of his people from Egypt (verses 23–24), and several Deuteronomistic themes (verses 22b–26). "And who is like Your people, like Israel, the one nation on the earth whom God went to redeem for Himself as a people, to make for Himself a name—and to do for Yourself great and awesome deeds for Your land—before Your people whom You redeemed for Yourself from Egypt, the nations, and their gods?" Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=69980624
6998101
Langton's loops
Self-reproducing cellular automaton patterns Langton's loops are a particular "species" of artificial life in a cellular automaton created in 1984 by Christopher Langton. They consist of a loop of cells containing genetic information, which flows continuously around the loop and out along an "arm" (or pseudopod), which will become the daughter loop. The "genes" instruct it to make three left turns, completing the loop, which then disconnects from its parent. History. In 1952 John von Neumann created the first cellular automaton (CA) with the goal of creating a self-replicating machine. This automaton was necessarily very complex due to its computation- and construction-universality. In 1968 Edgar F. Codd reduced the number of states from 29 in von Neumann's CA to 8 in his. When Christopher Langton did away with the universality condition, he was able to significantly reduce the automaton's complexity. Its self-replicating loops are based on one of the simplest elements in Codd's automaton, the periodic emitter. Specification. Langton's Loops run in a CA that has 8 states, and uses the von Neumann neighborhood with rotational symmetry. The transition table can be found here: . As with Codd's CA, Langton's Loops consist of sheathed wires. The signals travel passively along the wires until they reach the open ends, when the command they carry is executed. Colonies. Because of a particular property of the loops' "pseudopodia", they are unable to reproduce into the space occupied by another loop. Thus, once a loop is surrounded, it is incapable of reproducing, resulting in a coral-like colony with a thin layer of reproducing organisms surrounding a core of inactive "dead" organisms. Unless provided unbounded space, the colony's size will be limited. The maximum population will be asymptotic to formula_0, where "A" is the total area of the space in cells. Encoding of the genome. The loops' genetic code is stored as a series of nonzero-zero state pairs. The standard loop's genome is illustrated in the picture at the top, and may be stated as a series of numbered states starting from the T-junction and running clockwise: 70-70-70-70-70-70-40-40. The '70' command advances the end of the wire by one cell, while the '40-40' sequence causes the left turn. State 3 is used as a temporary marker for several stages. While the roles of states 0,1,2,3,4 and 7 are similar to Codd's CA, the remaining states 5 and 6 are used instead to mediate the loop replication process. After the loop has completed, state 5 travels counter-clockwise along the sheath of the parent loop to the next corner, causing the next arm to be produced in a different direction. State 6 temporarily joins the genome of the daughter loop and initialises the growing arm at the next corner it reaches. The genome is used a total of six times: once to extend the pseudopod to the desired location, four times to complete the loop, and again to transfer the genome into the daughter loop. Clearly, this is dependent on the fourfold rotational symmetry of the loop; without it, the loop would be incapable of containing the information required to describe it. The same use of symmetry for genome compression is used in many biological viruses, such as the icosahedral adenovirus.
[ { "math_id": 0, "text": "\\textstyle \\left \\lfloor \\frac{A}{121} \\right \\rfloor" } ]
https://en.wikipedia.org/wiki?curid=6998101
6998364
Symplectic filling
Cobordism W between X and the empty set In mathematics, a filling of a manifold "X" is a cobordism "W" between "X" and the empty set. More to the point, the "n"-dimensional topological manifold "X" is the boundary of an ("n" + 1)-dimensional manifold "W". Perhaps the most active area of current research is when "n" = 3, where one may consider certain types of fillings. There are many types of fillings, and a few examples of these types (within a probably limited perspective) follow. All the following cobordisms are oriented, with the orientation on "W" given by a symplectic structure. Let "ξ" denote the kernel of the contact form "α". It is known that this list is strictly increasing in difficulty in the sense that there are examples of contact 3-manifolds with weak but no strong filling, and others that have strong but no Stein filling. Further, it can be shown that each type of filling is an example of the one preceding it, so that a Stein filling is a strong symplectic filling, for example. It used to be that one spoke of "semi-fillings" in this context, which means that "X" is one of possibly many boundary components of "W", but it has been shown that any semi-filling can be modified to be a filling of the same type, of the same 3-manifold, in the symplectic world (Stein manifolds always have one boundary component).
[ { "math_id": 0, "text": "\\partial W = X" }, { "math_id": 1, "text": "\\omega |_\\xi>0" }, { "math_id": 2, "text": "\\{ x \\in \\mathbb{C}^2: |x|=1 \\} " }, { "math_id": 3, "text": "\\mathbb{C}^2" }, { "math_id": 4, "text": "\\sqrt{-1}" } ]
https://en.wikipedia.org/wiki?curid=6998364
6998622
Ludwik Silberstein
Polish-American physicist (1872–1948) Ludwik Silberstein (May 17, 1872 – January 17, 1948) was a Polish-American physicist who helped make special relativity and general relativity staples of university coursework. His textbook "The Theory of Relativity" was published by Macmillan in 1914 with a second edition, expanded to include general relativity, in 1924. Life. Silberstein was born on May 17, 1872, in Warsaw to Samuel Silberstein and Emily Steinkalk. He was educated in Kraków, Heidelberg, and Berlin. To teach he went to Bologna, Italy from 1899 to 1904. Then he took a position at Sapienza University of Rome. In 1907 Silberstein described a bivector approach to the fundamental electromagnetic equations. When formula_0 and formula_1 represent electric and magnetic vector fields with values in formula_2, then Silberstein suggested formula_3 would have values in formula_4, consolidating the field description with complexification. This contribution has been described as a crucial step in modernizing Maxwell's equations, while formula_3 is known as the Riemann–Silberstein vector. Silberstein taught in Rome until 1920, when he entered private research for the Eastman Kodak Company of Rochester, New York. For nine years he maintained this consultancy with Kodak labs while he gave his relativity course on occasion at the University of Chicago, the University of Toronto, and Cornell University. He lived until January 17, 1948. Textbook inaugurating relativity science. At the International Congress of Mathematicians (ICM) in 1912 at Cambridge, Silberstein spoke on "Some applications of quaternions". Though the text was not published in the proceedings of the Congress, it did appear in the Philosophical Magazine of May, 1912, with the title "Quaternionic form of relativity". The following year Macmillan published "The Theory of Relativity", which is now available on-line in the Internet Archive (see references). The quaternions used are actually biquaternions. The book is highly readable and well-referenced with contemporary sources in the footnotes. Several reviews were published. Nature expressed some misgivings: A systematic exposition of the principle of relativity necessarily consists very largely in the demonstration of invariant properties of certain mathematical relations. Hence it is bound to appear a little uninteresting to the experimentalist...little is done to remove the unfortunate impression that relativity is a fad of the mathematician, and not a thing for the every-day physicist. In his review Morris R. Cohen wrote, "Dr. Silberstein is not inclined to emphasize the revolutionary character of the new ideas, but rather concerned to show their intimate connection with older ones." Another review by Maurice Solovine states that Silberstein subjected the relativity principle to an exhaustive examination in the context of, and with respect to, the principal problems of mathematical physics taken up at the time. On the basis of the book, Silberstein was invited to lecture at the University of Toronto. The influence of these lectures on John Lighton Synge has been noted: Synge had also been strongly influenced a few months previously [in January 1921] by a Toronto lecture series organized by J.C. McLennan on "Recent Advances in Physics", at which Silberstein gave eighteen lectures on "Special and Generalized Theories of Relativity and Gravitation, and on Spectroscopy", all from a mathematical standpoint. 
Silberstein gave a plenary address at the International Congress of Mathematicians in 1924 in Toronto: "A finite world-radius and some of its cosmological implications". Einstein–Silberstein debate. In 1935, following a controversial debate with Albert Einstein, Silberstein published a solution of Einstein's field equations that appeared to describe a static, axisymmetric metric with only two point singularities representing two point masses. Such a solution clearly violates our understanding of gravity: with nothing to support them and no kinetic energy to hold them apart, the two masses should fall towards each other due to their mutual gravity, in contrast with the static nature of Silberstein's solution. This led Silberstein to claim that A. Einstein's theory was flawed, in need of a revision. In response, Einstein and Nathan Rosen published a Letter to the Editor in which they pointed out a critical flaw in Silberstein's reasoning. Unconvinced, Silberstein took the debate to the popular press, with "The Evening Telegram" in Toronto publishing an article titled "Fatal blow to relativity issued here" on March 7, 1936. Nonetheless, Einstein was correct and Silberstein was wrong: as we know today, all solutions to Weyl's family of axisymmetric metrics, of which Silberstein's is one example, necessarily contain singular structures ("struts", "ropes", or "membranes") that are responsible for holding masses against the attractive force of gravity in a static configuration. Other contributions. According to Martin Claussen, Ludwik Silberstein initiated a line of thought involving eddy currents in the atmosphere, or fluids generally. He says that Silberstein anticipated foundational work by Vilhelm Bjerknes (1862–1951). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbf{E}" }, { "math_id": 1, "text": "\\mathbf{B}" }, { "math_id": 2, "text": "\\mathbb{R}^3" }, { "math_id": 3, "text": "\\mathbf{E} + i \\mathbf{B}" }, { "math_id": 4, "text": "\\mathbb{C}^3" } ]
https://en.wikipedia.org/wiki?curid=6998622
69991093
Kiepert conics
Conic curves associated with a triangle In triangle geometry, the Kiepert conics are two special conics associated with the reference triangle. One of them is a hyperbola, called the Kiepert hyperbola and the other is a parabola, called the Kiepert parabola. The Kiepert conics are defined as follows: If the three triangles formula_0, formula_1 and formula_2, constructed on the sides of a triangle formula_3 as bases, are similar, isosceles and similarly situated, then the triangles formula_3 and formula_4 are in perspective. As the base angle of the isosceles triangles varies between formula_5 and formula_6, the locus of the center of perspectivity of the triangles formula_3 and formula_4 is a hyperbola called the Kiepert hyperbola and the envelope of their axis of perspectivity is a parabola called the Kiepert parabola. It has been proved that the Kiepert hyperbola is the hyperbola passing through the vertices, the centroid and the orthocenter of the reference triangle and the Kiepert parabola is the parabola inscribed in the reference triangle having the Euler line as directrix and the triangle center X110 as focus. The following quote from a paper by R. H. Eddy and R. Fritsch is enough testimony to establish the importance of the Kiepert conics in the study of triangle geometry: "If a visitor from Mars desired to learn the geometry of the triangle but could stay in the earth's relatively dense atmosphere only long enough for a single lesson, earthling mathematicians would, no doubt, be hard-pressed to meet this request. In this paper, we believe that we have an optimum solution to the problem. The Kiepert conics ..." Kiepert hyperbola. The Kiepert hyperbola was discovered by Ludvig Kiepert while investigating the solution of the following problem proposed by Emile Lemoine in 1868: "Construct a triangle, given the peaks of the equilateral triangles constructed on the sides." A solution to the problem was published by Ludvig Kiepert in 1869 and the solution contained a remark which effectively stated the locus definition of the Kiepert hyperbola alluded to earlier. Basic facts. Let formula_7 be the side lengths and formula_8 the vertex angles of the reference triangle formula_3. Equation. The equation of the Kiepert hyperbola in barycentric coordinates formula_9 is formula_10 The center of the hyperbola is the point formula_11. Kiepert parabola. The Kiepert parabola was first studied in 1888 by a German mathematics teacher Augustus Artzt in a "school program". The equation of the Kiepert parabola in barycentric coordinates is formula_15 where formula_16. Its focus is the triangle center X110: formula_17 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
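The barycentric equation of the Kiepert hyperbola given above is easy to check numerically. The sketch below (my own illustration) clears the denominators, writing the equation as (b^2 − c^2)yz + (c^2 − a^2)zx + (a^2 − b^2)xy = 0, and verifies that a vertex, the centroid (1 : 1 : 1) and the orthocenter all satisfy it for an arbitrary scalene triangle; the orthocenter's barycentric coordinates tan A : tan B : tan C are a standard fact assumed here rather than taken from this article.
import math

a, b, c = 5.0, 6.0, 7.0                # any scalene triangle

def kiepert(x, y, z):
    # Kiepert hyperbola equation with denominators cleared
    return (b*b - c*c) * y * z + (c*c - a*a) * z * x + (a*a - b*b) * x * y

# vertex angles from the law of cosines
A = math.acos((b*b + c*c - a*a) / (2*b*c))
B = math.acos((c*c + a*a - b*b) / (2*c*a))
C = math.pi - A - B

print(kiepert(1.0, 0.0, 0.0))                           # vertex:      0
print(kiepert(1.0, 1.0, 1.0))                           # centroid:    0
print(kiepert(math.tan(A), math.tan(B), math.tan(C)))   # orthocenter: ~0 (rounding only)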
[ { "math_id": 0, "text": "A^\\prime BC" }, { "math_id": 1, "text": "AB^\\prime C" }, { "math_id": 2, "text": "ABC^\\prime" }, { "math_id": 3, "text": "ABC" }, { "math_id": 4, "text": "A^\\prime B^\\prime C^\\prime" }, { "math_id": 5, "text": "-\\pi/2" }, { "math_id": 6, "text": "\\pi/2" }, { "math_id": 7, "text": "a, b, c" }, { "math_id": 8, "text": "A,B,C" }, { "math_id": 9, "text": "x:y:z" }, { "math_id": 10, "text": "\\frac{b^2-c^2}{x}+\\frac{c^2-a^2}{y}+\\frac{a^2-b^2}{z}=0." }, { "math_id": 11, "text": "(b^2-c^2)^2 : (c^2-a^2)^2 : (a^2-b^2)^2" }, { "math_id": 12, "text": "\\sqrt{2}" }, { "math_id": 13, "text": "P" }, { "math_id": 14, "text": "p" }, { "math_id": 15, "text": "f^2x^2+g^2y^2+h^2z^2 - 2fgxy - 2ghyz - 2 hfzx=0" }, { "math_id": 16, "text": "f=(b^2-c^2)/a, g=(c^2-a^2)/b, h=(a^2-b^2)/c" }, { "math_id": 17, "text": "a^2/(b^2-c^2) : b^2/(c^2-a^2) : c^2/(a^2-b^2)" } ]
https://en.wikipedia.org/wiki?curid=69991093
69995736
2 Samuel 8
Second Book of Samuel chapter 2 Samuel 8 is the eighth chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's reign in Jerusalem. This is within a section comprising 2 Samuel 2–8 which deals with the period when David set up his kingdom. Text. This chapter was originally written in the Hebrew language. It is divided into 18 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 1–8. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. This chapter contains a catalogue of David's victories (verses 1–14), arranged thematically rather than chronologically, and a list of David's court officials (verses 15–18). David's victories (8:1–14). This section can be divided into two parts: David's victories (verses 2–6) and David's handling of the plunder (verses 7–14), with the first verse serving as an overture to introduce the two verbs that guide the reading: David "strikes" ('n-k-h'; "nakah") his enemies and then "takes" ('l-q-kh'; "laqakh") plunder. Three interlocking pairs of refrains integrate the whole unit, whose main theme is the ascent of David as a shepherd-king to the world stage, just as God promised to give David a great name ("the LORD gave victory to David wherever he went"; cf. 2 Samuel 7:9). The structure of this section is as follows: 1. David's victories (8:1-6) David strikes ("n-k-h") the Philistines (8:1) David strikes ("n-k-h") the Moabites (8:2ab) "Refrain 1": The Moabites became servants to David, bearers of tribute (8:2c) David strikes ("n-k-h") Hadadezer (8:3–4) David strikes ("n-k-h") the Arameans (8:5) "Refrain 2": David establishes garrisons in foreign territory (8:6a) "Refrain 1": The Arameans became servants to David, bearers of tribute (8:6b) "Refrain 3": The Lord gave victory to David wherever he went (8:6c) 2. David and the plunder (8:7–14) David takes ("l-q-kh") gold from Hadadezer's servants and takes ("l-q-kh") bronze from Hadadezer's town (8:7–8) David receives tribute from King Toi (8:9–10) David dedicates all the plunder to the Lord (8:11–12) David defeats the Edomites (8:13) "Refrain 2": David establishes garrisons in foreign territory (8:14a) "Refrain 3": The Lord gave victory to David wherever he went (8:14b) These verses, together with other passages (cf. 1 Samuel 30:1–31; 2 Samuel 5:17–25; 10:1–11:1; 12:26–31; 21:15–22) provide a list of David's victories, as shown below: According to 2 Samuel 10:1–19 and this passage, David fought three successive battles against the Arameans. King Toi (or "Tou") of Hamath (which was on the Orontes River to the north of Zobah) heard about David's successes and sent his son to make an alliance with David, bringing expensive gifts. 
As a result of his conquests David took control over what is now 'Palestine' from the Philistines, with garrisons placed in Moab, Edom, and Ammon (which corresponds to modern Jordan), and also conquered Aramean states (corresponding to modern Syria and eastern Lebanon). "After this David defeated the Philistines and subdued them, and David took Metheg-ammah out of the hand of the Philistines." David's court officials (8:15–18). The list of David's court officials is not exactly identical with another version in 2 Samuel 20:23–26, which indicates the existence of several variants in archives over a period of time. The comparison is as follows: Joab had been with David and had command of the army for a long time (cf 2 Samuel 2), whereas Jehoshaphat was still in office in the time of Solomon (1 Kings 4:3). Zadok and Abiathar shared the priesthood until David's death (1 Kings 2:26). The Cherethites and Pelethites were the royal bodyguard, under the command of Benaiah (cf. 1 Chronicles 18:17); they were David's most loyal soldiers, marching with David out of Jerusalem during Absalom's rebellion (2 Samuel 15:18), helping to chase the rebellious Sheba son of Bichri (2 Samuel 20:7), and guarding Solomon's anointing by Zadok the high priest as David's successor (1 Kings 1). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. Commentaries on Samuel. &lt;templatestyles src="Refbegin/styles.css" /&gt; General. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=69995736
69995742
2 Samuel 12
Second Book of Samuel chapter 2 Samuel 12 is the twelfth chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's reign in Jerusalem. This is within a section comprising 2 Samuel 9–20 and continued to 1 Kings 1–2 which deal with the power struggles among David's sons to succeed David's throne until 'the kingdom was established in the hand of Solomon' (1 Kings 2:46). Text. This chapter was originally written in the Hebrew language. It is divided into 31 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 1, 3–5, 8–9, 13–20, 29–31. Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. Chapters 11 and 12, which pertain to David, Bathsheba, and Uriah, form one episode that is concentrically structured in eleven scenes: A. David sends Joab and the army to attack Rabbah (11:1) B. David sleeps with Bathsheba, the wife of Uriah (11:2–5) C. David and Uriah: David arranges Uriah's death (11:6–13) D. David to Joab: Uriah must die (11:14–17) E. Joab to David: Joab's news comes to David (11:18–25) F. David ushers the wife of Uriah into his house. The Lord is displeased (11:26–27) E'. Nathan to David: God's news comes to David (12:1–7a) D'. Nathan to David: the child will die (12:7b–15a) C'. David and the child: God ensures the child's death (12:15b–23) B'. David sleeps with Bathsheba, his wife (12:24–25) A'. Joab and David conquer Rabbah (12:26–31) The whole episode is framed by the battle against Rabbah, the Ammonite capital, beginning with David dispatching Joab and the army to besiege the city, then concluding with the capitulation of the city to David (A/A'). Both B/B' scenes recount that David slept with Bathsheba, who conceived each time. Scenes C and D recount the plot that got Uriah killed, whereas C' and D' report God's response to David's crime: the child would die. The E/E' sections contrast David's reaction to the death of Uriah to his reaction to the slaughter of a ewe lamb in Nathan's parable. The turning point in the episode (F) states the divine displeasure at these events. This episode of David's disgrace has a profound effect on the later memory of David's fidelity to the Lord: "David did what was right in the sight of the LORD, and did not turn aside from anything that he commanded him all the days of his life, except in the matter of Uriah the Hittite" (1 Kings 15:5), while it is skipped completely in the Books of Chronicles (see 1 Chronicles 20:1–2). Nathan rebukes David (12:1–15). The last statement in the previous chapter shows that David's actions towards Bathsheba and Uriah were unacceptable to God (b). 
Nathan, the court prophet and counsellor, used a parable (12:1–7a) to reveal David's guilt and the deserved punishment which David himself had pronounced on the rich man in the parable. Parallelisms between the theft of a ewe lamb and the theft of Uriah's wife as well as the surrounding and subsequent events can be observed in the use of specific Hebrew words as summarized in the table below: Nathan's parable elicited words of condemnation from David, which immediately were thrown back at him with the simple application 'You are the man' (verse 7a), followed by the pronouncement of the king's verdict from YHWH; this is the focal point of the section. Verses 7b–10 and 11–12 are two distinct units, each with its own beginning and a prophetic-messenger formula, and they deal with different aspects of David's crime and consequent judgement. The first unit (verses 7b–10) deals with the murder of Uriah, more than with the taking of Bathsheba, with the main accusation that David had 'struck down Uriah the Hittite with the sword'. After YHWH had done mighty works on behalf of David and could have given even more (verses 7b–8), David's treatment of Uriah showed that he had despised YHWH (verse 9), so the punishment for this crime is that 'the sword shall never depart from your house'. The second unit (verses 11–12) pronounces a punishment that fits the crime of adultery: that a member of his household would take over David's harem, and that this would be a public act of humiliation in contrast to what David did secretly. David responded with a brief admission of guilt (verse 13), understanding that he had deserved death. Nathan replied that David's repentance had been accepted by YHWH, that David's sin was forgiven, and the sentence of death on David was personally commuted, but the child born from his adultery with Bathsheba had to die (verse 14). "Then Nathan said to David, "You are the man!" "Thus says the Lord God of Israel: 'I anointed you king over Israel, and I delivered you from the hand of Saul.' " David’s loss and repentance (12:16–23). Nathan's prophecy in verse 14 is fulfilled in verses 15b–23, as the child of David and Bathsheba became ill, causing David to act unconventionally: he fasted and kept vigil, the traditional signs of mourning, during the sickness of the child (verse 16), but abandoned them instantly after the child had died (verse 20). David's behavior perplexed his courtiers, but it is understandable in conjunction with the theme of sin and forgiveness in verses 13-14: before the child's death, he was pleading 'with God for the child' (verse 16) as the only reasonable course to take (verse 22), but when the child died, David knew that his plea had not been accepted, so it was reasonable to abandon his actions (verse 23). David resigned himself to these events with serenity, witnessing how God was fulfilling his word, and by implication David had received forgiveness. Solomon’s birth (12:24–25). Solomon's birth was noted briefly in verses 24–25, in connection with the death of the firstborn child of David and Bathsheba, as the name "Solomon" (Hebrew: "selomoh") could also be rendered as 'his replacement' ("selomoh"). The birth record may also have been inserted to avoid the identification of Solomon as David's illegitimate son. Nathan the prophet gave him another name, "Jedidiah", meaning 'Beloved of the LORD'. The capture of Rabbah (12:26–31). This section returns the narrative to the siege of Rabbah, mentioned in 2 Samuel 11:1. 
At this time Joab managed to capture 'the royal citadel', a fortified area of Rabbah, so that he was in control of the city's water supply (verse 27). David was then invited to lead the final charge of the army, so that the city could be reckoned as his conquest. David dismantled the city's fortifications and took many spoils from the Ammonites, especially the gold crown taken directly from the head of 'their king'; in Hebrew the phrase 'their king' ("malkam") can also be read as the name of the Ammonite national god, "Milkom". Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" /> Sources. Commentaries on Samuel. <templatestyles src="Refbegin/styles.css" /> General. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=69995742
699966
Pascal's theorem
Theorem on the collinearity of three points generated from a hexagon inscribed on a conic In projective geometry, Pascal's theorem (also known as the hexagrammum mysticum theorem, Latin for mystical hexagram) states that if six arbitrary points are chosen on a conic (which may be an ellipse, parabola or hyperbola in an appropriate affine plane) and joined by line segments in any order to form a hexagon, then the three pairs of opposite sides of the hexagon (extended if necessary) meet at three points which lie on a straight line, called the Pascal line of the hexagon. It is named after Blaise Pascal. The theorem is also valid in the Euclidean plane, but the statement needs to be adjusted to deal with the special cases when opposite sides are parallel. This theorem is a generalization of Pappus's (hexagon) theorem, which is the special case of a degenerate conic of two lines with three points on each line. Euclidean variants. The most natural setting for Pascal's theorem is in a projective plane since any two lines meet and no exceptions need to be made for parallel lines. However, the theorem remains valid in the Euclidean plane, with the correct interpretation of what happens when some opposite sides of the hexagon are parallel. If exactly one pair of opposite sides of the hexagon are parallel, then the conclusion of the theorem is that the "Pascal line" determined by the two points of intersection is parallel to the parallel sides of the hexagon. If two pairs of opposite sides are parallel, then all three pairs of opposite sides form pairs of parallel lines and there is no Pascal line in the Euclidean plane (in this case, the line at infinity of the extended Euclidean plane is the Pascal line of the hexagon). Related results. Pascal's theorem is the polar reciprocal and projective dual of Brianchon's theorem. It was formulated by Blaise Pascal in a note written in 1639 when he was 16 years old and published the following year as a broadside titled "Essay pour les coniques. Par B. P." Pascal's theorem is a special case of the Cayley–Bacharach theorem. A degenerate case of Pascal's theorem (four points) is interesting: given points "ABCD" on a conic Γ, the intersections of alternate sides, "AB" ∩ "CD" and "BC" ∩ "DA", together with the intersections of tangents at opposite vertices ("A", "C") and ("B", "D"), form four collinear points; the tangents being degenerate 'sides' taken at two possible positions on the 'hexagon', with the corresponding Pascal line sharing either degenerate intersection. This can be proven independently using a property of poles and polars. If the conic is a circle, then another degenerate case says that for a triangle, the three points that appear as the intersection of a side line with the corresponding side line of the Gergonne triangle are collinear. Six is the minimum number of points on a conic about which special statements can be made, as five points determine a conic. The converse is the Braikenridge–Maclaurin theorem, named for 18th-century British mathematicians William Braikenridge and Colin Maclaurin, which states that if the three intersection points of the three pairs of lines through opposite sides of a hexagon lie on a line, then the six vertices of the hexagon lie on a conic; the conic may be degenerate, as in Pappus's theorem. The Braikenridge–Maclaurin theorem may be applied in the Braikenridge–Maclaurin construction, which is a synthetic construction of the conic defined by five points, by varying the sixth point. 
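The collinearity claim is easy to check numerically. The following Python sketch is only an illustration, not part of the classical treatment: it assumes NumPy is available, and the particular ellipse, the six angles, and the helper names are arbitrary choices. It computes the three intersections of opposite sides in homogeneous coordinates and verifies that the determinant of those three points vanishes up to floating-point error.

```python
import numpy as np

def homog(p):
    # Homogeneous coordinates of an affine point (x, y) -> (x, y, 1).
    return np.array([p[0], p[1], 1.0])

def line_through(p, q):
    # The line through two points is the cross product of their homogeneous coordinates.
    return np.cross(homog(p), homog(q))

def meet(l1, l2):
    # The intersection of two lines, as a homogeneous 3-vector.
    return np.cross(l1, l2)

# Six points, in an arbitrary order, on the ellipse x^2/4 + y^2 = 1.
angles = [0.2, 1.4, 2.3, 3.1, 4.0, 5.5]
A, B, C, D, E, F = (np.array([2 * np.cos(t), np.sin(t)]) for t in angles)

# Opposite sides of hexagon ABCDEF: (AB, DE), (BC, EF), (CD, FA).
P = meet(line_through(A, B), line_through(D, E))
Q = meet(line_through(B, C), line_through(E, F))
R = meet(line_through(C, D), line_through(F, A))

# Three points are collinear exactly when the determinant of their
# (normalized) homogeneous coordinates vanishes.
M = np.vstack([P / np.linalg.norm(P), Q / np.linalg.norm(Q), R / np.linalg.norm(R)])
print(abs(np.linalg.det(M)))  # ~1e-16: the three points lie on the Pascal line
```

Re-running the sketch with a different ordering of the same six points changes the Pascal line but not the (near-zero) determinant, matching the "joined in any order" clause of the theorem.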
The theorem was generalized by August Ferdinand Möbius in 1847, as follows: suppose a polygon with 4"n" + 2 sides is inscribed in a conic section, and opposite pairs of sides are extended until they meet in 2"n" + 1 points. Then if 2"n" of those points lie on a common line, the last point will be on that line, too. "Hexagrammum Mysticum". If six unordered points are given on a conic section, they can be connected into a hexagon in 60 different ways, resulting in 60 different instances of Pascal's theorem and 60 different Pascal lines. This configuration of 60 lines is called the "Hexagrammum Mysticum". As Thomas Kirkman proved in 1849, these 60 lines can be associated with 60 points in such a way that each point is on three lines and each line contains three points. The 60 points formed in this way are now known as the Kirkman points. The Pascal lines also pass, three at a time, through 20 Steiner points. There are 20 Cayley lines which consist of a Steiner point and three Kirkman points. The Steiner points also lie, four at a time, on 15 Plücker lines. Furthermore, the 20 Cayley lines pass four at a time through 15 points known as the Salmon points. Proofs. Pascal's original note has no proof, but there are various modern proofs of the theorem. It is sufficient to prove the theorem when the conic is a circle, because any (non-degenerate) conic can be reduced to a circle by a projective transformation. This was realised by Pascal, whose first lemma states the theorem for a circle. His second lemma states that what is true in one plane remains true upon projection to another plane. Degenerate conics follow by continuity (the theorem is true for non-degenerate conics, and thus holds in the limit of a degenerate conic). A short elementary proof of Pascal's theorem in the case of a circle was found by , based on the proof in . This proof establishes the theorem for the circle and then generalizes it to conics. A short elementary computational proof in the case of the real projective plane was found by . The proof can also be inferred from the existence of isogonal conjugates. If we are to show that "X" = "AB" ∩ "DE", "Y" = "BC" ∩ "EF", "Z" = "CD" ∩ "FA" are collinear for concyclic "ABCDEF", then notice that △"EYB" and △"CYF" are similar, and that "X" and "Z" will correspond to the isogonal conjugate if we overlap the similar triangles. This means that ∠"CYX" = ∠"CYZ", hence making "XYZ" collinear. A short proof can be constructed using cross-ratio preservation. Projecting tetrad "ABCE" from "D" onto line "AB", we obtain tetrad "ABPX", and projecting tetrad "ABCE" from "F" onto line "BC", we obtain tetrad "QBCY". This therefore means that "R"("AB"; "PX") = "R"("QB"; "CY"), where "B" is a point common to the two tetrads, so the lines connecting the other three pairs of corresponding points ("AQ", "PC", "XY") must be concurrent in order to preserve the cross-ratio; since "AQ" lies along "FA" and "PC" lies along "CD", their common point is "Z", so "X", "Y", "Z" are collinear. Another proof for Pascal's theorem for a circle uses Menelaus' theorem repeatedly. Dandelin, the geometer who discovered the celebrated Dandelin spheres, came up with a beautiful proof using a "3D lifting" technique that is analogous to the 3D proof of Desargues' theorem. The proof makes use of the property that for every conic section we can find a one-sheet hyperboloid which passes through the conic. There also exists a simple proof for Pascal's theorem for a circle using the law of sines and similarity. Proof using cubic curves. 
Pascal's theorem has a short proof using the Cayley–Bacharach theorem that given any 8 points in general position, there is a unique ninth point such that all cubics through the first 8 also pass through the ninth point. In particular, if 2 general cubics intersect in 8 points then any other cubic through the same 8 points meets the ninth point of intersection of the first two cubics. Pascal's theorem follows by taking the 8 points as the 6 points on the hexagon and two of the points (say, "M" and "N" in the figure) on the would-be Pascal line, and the ninth point as the third point ("P" in the figure). The first two cubics are two sets of 3 lines through the 6 points on the hexagon (for instance, the set "AB, CD, EF", and the set "BC, DE, FA"), and the third cubic is the union of the conic and the line "MN". Here the "ninth intersection" "P" cannot lie on the conic by genericity, and hence it lies on "MN". The Cayley–Bacharach theorem is also used to prove that the group operation on cubic elliptic curves is associative. The same group operation can be applied on a conic if we choose a point "E" on the conic and a line "MP" in the plane. The sum of "A" and "B" is obtained by first finding the intersection point of line "AB" with "MP", which is "M". Next "A" and "B" add up to the second intersection point of the conic with line "EM", which is "D". Thus if "Q" is the second intersection point of the conic with line "EN", then formula_0 Thus the group operation is associative. On the other hand, Pascal's theorem follows from the above associativity formula, and thus from the associativity of the group operation of elliptic curves by way of continuity. Proof using Bézout's theorem. Suppose "f" is the cubic polynomial vanishing on the three lines through "AB, CD, EF" and "g" is the cubic vanishing on the other three lines "BC, DE, FA". Pick a generic point "P" on the conic and choose "λ" so that the cubic "h" = "f" + "λg" vanishes on "P". Then "h" = 0 is a cubic that has 7 points "A, B, C, D, E, F, P" in common with the conic. But by Bézout's theorem a cubic and a conic have at most 3 × 2 = 6 points in common, unless they have a common component. So the cubic "h" = 0 has a component in common with the conic which must be the conic itself, so "h" = 0 is the union of the conic and a line. It is now easy to check that this line is the Pascal line. A property of Pascal's hexagon. Again given the hexagon on a conic of Pascal's theorem with the above notation for points (in the first figure), we have formula_1 Degenerations of Pascal's theorem. There exist 5-point, 4-point and 3-point degenerate cases of Pascal's theorem. In a degenerate case, two previously connected points of the figure will formally coincide and the connecting line becomes the tangent at the coalesced point. See the degenerate cases given in the added scheme and the external link on "circle geometries". If one chooses suitable lines of the Pascal-figures as lines at infinity one gets many interesting figures on parabolas and hyperbolas. Notes. <templatestyles src="Reflist/styles.css" />
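As an illustrative companion to the group operation on a conic described above, the following Python sketch checks associativity numerically. It is only a sketch under stated assumptions, not taken from any source cited here: NumPy is assumed, the conic is taken to be the unit circle, the auxiliary line and the identity point "E" are arbitrary choices, and the helper names are invented for the example. The sum of two points is computed, as in the text, as the second intersection of the conic with the line through "E" and the point where the chord meets the auxiliary line.

```python
import numpy as np

def homog(p):
    return np.array([p[0], p[1], 1.0])

def line_through(p, q):
    # Homogeneous line through two affine points.
    return np.cross(homog(p), homog(q))

def meet_affine(l1, l2):
    # Affine intersection of two lines (assumed not parallel).
    x = np.cross(l1, l2)
    return np.array([x[0] / x[2], x[1] / x[2]])

def second_intersection(e, m):
    # e lies on the unit circle; return the other point where line e-m meets it.
    d = m - e
    t = -2.0 * np.dot(e, d) / np.dot(d, d)  # nonzero root of |e + t*d|^2 = 1
    return e + t * d

def conic_add(a, b, e, ell):
    # a "+" b := second intersection of the circle with the line through e
    # and the point (line ab) ∩ ell, following the construction in the text.
    return second_intersection(e, meet_affine(line_through(a, b), ell))

def circle_pt(t):
    return np.array([np.cos(t), np.sin(t)])

E = circle_pt(0.3)                        # identity element of the group
A, B, C = circle_pt(1.1), circle_pt(2.0), circle_pt(4.2)
ell = line_through(np.array([2.0, 0.5]), np.array([-1.0, 3.0]))  # auxiliary line

lhs = conic_add(conic_add(A, B, E, ell), C, E, ell)   # (A + B) + C
rhs = conic_add(A, conic_add(B, C, E, ell), E, ell)   # A + (B + C)
print(np.allclose(lhs, rhs))                          # True: associativity
print(np.allclose(conic_add(E, A, E, ell), A))        # True: E acts as the identity
```

Taking the auxiliary line to be the line at infinity reduces the construction to drawing the chord through "E" parallel to "AB", which makes the underlying group visibly the circle group; the affine line chosen above is projectively equivalent to that case.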
[ { "math_id": 0, "text": "(A + B) + C = D + C = Q = A + F = A + (B + C)" }, { "math_id": 1, "text": "\\frac{\\overline{GB}}{\\overline{GA}} \\times \\frac{\\overline{HA}}{\\overline{HF}} \\times \\frac{\\overline{KF}}{\\overline{KE}} \\times\\frac{\\overline{GE}}{\\overline{GD}} \\times \\frac{\\overline{HD}}{\\overline{HC}} \\times \\frac{\\overline{KC}}{\\overline{KB}}=1." } ]
https://en.wikipedia.org/wiki?curid=699966